---
abstract: |
  A *unique sink orientation* (USO) is an orientation of the $n$-dimensional hypercube graph such that every non-empty face contains a unique sink. Schurr showed that given any $n$-dimensional USO and any dimension $i$, the set of edges $E_i$ in that dimension can be decomposed into equivalence classes (so-called *phases*), such that flipping the orientation of a subset $S$ of $E_i$ yields another USO if and only if $S$ is a union of a set of these phases. In this paper we prove various results on the structure of phases. Using these results, we show that all phases can be computed in $O(3^n)$ time, significantly improving upon the previously known $O(4^n)$ trivial algorithm. Furthermore, we show that given a boolean circuit of size $poly(n)$ succinctly encoding an $n$-dimensional (acyclic) USO, it is $\mathsf{PSPACE}$-complete to determine whether two given edges are in the same phase. The problem is thus equally difficult as determining whether the hypercube orientation encoded by a given circuit is an acyclic USO \[Gärtner and Thomas, STACS'15\].
author:
- Michaela Borzechowski
- Simon Weber
bibliography:
- literature.bib
- USO.bib
title: On Phases of Unique Sink Orientations
---

# Introduction

A *unique sink orientation* (USO) is an orientation of the $n$-dimensional hypercube graph, such that for each non-empty face of the hypercube, the subgraph induced by the face contains a unique sink, i.e., a unique vertex with no outgoing edges. USOs were originally proposed by Stickney and Watson [@stickney1978digraph] as a way of modelling the candidate solutions of instances of the *P-matrix Linear Complementarity Problem* (P-LCP). After USOs had been forgotten for more than two decades, they were reintroduced and formalized as purely combinatorial objects by Szabó and Welzl in 2001 [@szabo2001usos]. Since then, USOs have been studied from many angles. Many problems in geometry, game theory, as well as mathematical programming have since been reduced to the problem of finding the global sink of a unique sink orientation [@borzechowski2023degeneracy; @fischer2004smallestball; @foniok2009pivoting; @gaertner2005stochasticgames; @gaertner2006lpuso; @halman2007stochasticgames; @schurr2004phd]. Motivated by this, much research has gone into finding better algorithms for this problem [@gaertner1995subexaop]; however, the original algorithms by Szabó and Welzl, with runtimes exponential in $n$, are still the best known in the general case.

Much of the existing research on USOs has gone into structural and combinatorial aspects. On the $n$-dimensional hypercube, there are $2^{\Theta(2^n\log n)}$ USOs [@matousek2006numberusos]. While this is a large number, it is still dwarfed by the number of all orientations, which is $2^{n2^{n-1}}$. This shows that the USO condition is quite strong, yet still allows for a large variety of orientations. However, this also makes it quite challenging to enumerate USOs or even randomly sample a USO. The naive approach to random sampling --- generating random orientations until finding a USO --- fails since USOs are so sparse among all orientations. A natural approach is thus the one of *reconfiguration*. In reconfiguration, one defines simple (often "local") operations which can be applied to the combinatorial objects under study. This defines a so-called *flip graph*, whose vertices represent the combinatorial objects, where neighboring objects can be turned into each other by applying such a simple operation.
Enumeration of all objects can then be performed by systematically walking through a spanning tree of the flip graph, e.g., with a technique due to Avis and Fukuda named *reverse-search* [@reversesearch]. If the flip graph is Hamiltonian, we can even find a so-called *Gray code*. Similarly, random sampling among the objects can be performed by simulating a random walk on the flip graph, which is often expressed as a *Markov chain*. Flip graphs have been studied for many types of combinatorial objects, and this study has in many cases contributed to a better understanding of the objects themselves [@permutationLanguages; @reconfigurationSurvey].

On USOs, a natural choice of operation is to flip the orientation of a single edge. However, it is known that there exist USOs in which no single edge can be flipped without destroying the USO condition [@borzechowski2022construction; @schurr2004phd]. Thus, to guarantee connectedness of the flip graph, we need operations that may flip multiple edges at once. This problem was intensely studied by Schurr [@schurr2004phd]. For any USO $O$ of the $n$-cube and any dimension $i\in[n]$, we denote the set of edges of $O$ in dimension $i$ by $E_i$. Schurr proved that we can then decompose $E_i$ into equivalence classes, such that flipping any subset $S\subseteq E_i$ of edges in $O$ yields another USO if and only if $S$ is the union of some of these equivalence classes. Schurr named these equivalence classes within $E_i$ the *$i$-phases* of $O$.

It turns out that phases are very useful for reconfiguration. As the flip operation we define the following: For any dimension $i\in[n]$, flip the set of edges given by some set of $i$-phases. Schurr showed that with only $2n$ of these operations we can obtain any $n$-dimensional USO from any other. The flip graph is thus connected and has rather low diameter. Furthermore, Schurr showed that the naturally defined Markov chain based on this operation converges to the uniform distribution. However, it is neither known whether the flip graph is Hamiltonian nor how quickly the Markov chain converges. Since this is the only known connected flip graph for USOs, it is crucial to understand it better. However, to understand this flip graph and the associated Markov chain we must first gain a better understanding of the structure of phases themselves. In this paper, we make significant progress on that front by presenting several surprising structural properties of phases. We also show some consequences of these properties for algorithmic and complexity-theoretic aspects of the problem of computing phases.

## Results

This paper begins with proofs of various structural properties of phases. Specifically, in , we show that for every phase $P$, the subgraph of the hypercube induced by the endpoints of the edges in $P$ is *connected*. In , we prove various results about the relationship between phases and *hypervertices*, i.e., faces in which for every vertex the orientation of the edges leaving the face is the same. In , we prove that flipping a *matching* in a USO leads to another USO if and only if the matching is a union of phases. This statement was previously claimed by Schurr [@schurr2004phd Proposition 4.9]; however, his proof of the "only if" direction contained severe mistakes that were remarkably difficult to repair, requiring the use of newer results on *pseudo USOs* [@bosshard2017pseudo].
Finally, in , we show that to compute the phases of an $n$-dimensional USO by Schurr's method, it is not sufficient to compare only neighboring edges or even only edges of some bounded distance. We construct a family of USOs in which one needs to compare antipodal vertices with each other. In , we provide an algorithm to compute all phases of a given $n$-dimensional USO using $O(3^n)$ vertex comparisons, improving upon the currently best known method due to Schurr, which takes $O(4^n)$ comparisons. This algorithmic improvement is then contrasted by our following main result, proven in : Given a USO $O$ in succinct circuit representation and two edges $e,e'$, the problem of deciding whether $e$ and $e'$ are in the same phase is $\mathsf{PSPACE}$-complete, even if $O$ is guaranteed to be acyclic.

# Preliminaries

## Hypercubes and Orientations

The *$n$-dimensional hypercube graph* $Q_n$ (*$n$-cube*) is the undirected graph on the vertex set $V(Q_n) = \{0,1\}^n$, where two vertices are connected by an edge if they differ in exactly one coordinate. On bitstrings we use $\oplus$ for the bit-wise *xor* operation and $\wedge$ for the bit-wise *and*. For simplicity we write a dimension $i$ in the subcube spanned by two vertices $v, w\in V(Q_n)$ as $i \in v \oplus w$, even though it would be more formally correct to use $i \in \{j\;|\;j \in [n], (v\oplus w)_j = 1 \}$. For a bitstring $v\in\{0,1\}^n$ and a number $i\in [n]$, we use the simplified notation $v\ominus i = v\oplus I_i$, where $I_i$ is the $i$-th standard basis vector. Thus, for a vertex $v$ and a dimension $i\in [n]$, the vertex $v\ominus i$ is the neighbor of $v$ which differs from $v$ in coordinate $i$. The edge between $v$ and $v\ominus i$ is called an *$i$-edge*, or an *edge of dimension $i$*. We denote the set of $i$-edges by $E_i$.

A *face* of $Q_n$ described by a string $f\in\{0,1,*\}^n$ is the induced subgraph of $Q_n$ on the vertex set $V(f):= \{v\in V(Q_n)\;|\;\forall i \in [n] : v_i=f_i \text{ or } f_i=* \}$. We write $\dim(f)$ for the set of dimensions *spanning* the face, i.e., the dimensions $i$ for which $f_i=*$. The *dimension* of $f$ is $|\dim(f)|$. A face of dimension $n-1$ is called a *facet*. The facet described by the string $f$ with $f_i=1$ and $f_j=*$ for $j\neq i$ is called the *upper $i$-facet*. Its opposite facet (described by $f_i=0$, $f_j=*$ for $j\neq i$) is called the *lower $i$-facet*.
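To make this notation concrete, the following minimal Python sketch (our own code and naming, not part of the original text) represents vertices of $Q_n$ as integer bitmasks, so that $v\ominus i$ is a single xor, and enumerates the vertex set of a face given as a string over $\{0,1,*\}$.

```python
from itertools import product

def flip(v: int, i: int) -> int:
    """Return v ⊖ i: the neighbor of vertex v across dimension i (1-indexed)."""
    return v ^ (1 << (i - 1))

def face_vertices(face: str) -> list[int]:
    """Vertices of the face given by a string over {'0','1','*'};
    face[i-1] prescribes coordinate i."""
    free = [i for i, c in enumerate(face) if c == "*"]
    base = sum(1 << i for i, c in enumerate(face) if c == "1")
    verts = []
    for bits in product((0, 1), repeat=len(free)):
        v = base
        for pos, b in zip(free, bits):
            v |= b << pos
        verts.append(v)
    return verts

# Example in Q_3: the upper 2-facet is "*1*" (coordinate 2 fixed to 1).
assert flip(0b000, 2) == 0b010
assert sorted(face_vertices("*1*")) == [0b010, 0b011, 0b110, 0b111]
```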
An *orientation* $O$ is described by a function $O: V(Q_n) \rightarrow \{0,1\}^n$ such that for all $v\in V(Q_n)$ and all $i\in [n]$, $O(v)_i\neq O(v\ominus i)_i$. This function assigns each vertex an orientation of its incident edges, called the *outmap* of the vertex, where $O(v)_i=1$ denotes that the $i$-edge incident to vertex $v$ is outgoing from $v$, and $O(v)_i=0$ denotes an incoming edge.

## Unique Sink Orientations

An orientation $O$ of $Q_n$ is a *unique sink orientation (USO)* if within each face $f$ of $Q_n$, there exists exactly one vertex with no outgoing edges. That is, there is a unique sink in each face with respect to the orientation $O$. Szabó and Welzl [@szabo2001usos] provide a useful characterization for USOs: This also implies that the function $O$ must be a bijection, even when the domain is restricted to any face $f$ and the codomain is restricted to the dimensions spanned by $f$.

## Pseudo USOs

Bosshard and Gärtner [@bosshard2017pseudo] introduced the concept of *pseudo USOs* to capture orientations that are *almost* USOs. A *pseudo unique sink orientation* (PUSO) of $Q_n$ is an orientation $O$ that does not have a unique sink, but in which every proper face $f\neq Q_n$ has a unique sink. Every orientation that is not a USO must contain a minimal face that does not have a unique sink. This minimal face must then be a PUSO. We use this fact in several of our proofs, in conjunction with the following property of PUSOs from [@bosshard2017pseudo].

## Phases

Let $O$ be an orientation, and let $S$ be a set of edges. We write $O\otimes S$ for the orientation $O$ with all edges in $S$ *flipped*, i.e., their orientations reversed. In general, given a fixed USO $O$, it is quite difficult to characterize which sets of edges $S$ lead to $O\otimes S$ being a USO again. In fact, this problem is equivalent to characterizing the set of all USOs. However, the task becomes much easier if we require $S$ to consist of only $i$-edges for some dimension $i$, i.e., $S\subseteq E_i$. Schurr [@schurr2004phd] called the minimal non-empty sets $S\subseteq E_i$ such that $O\otimes S$ is a USO *phases*. It turns out that phases form a partition of $E_i$. Furthermore, Schurr proved that if $S$ is a union of phases, $O\otimes S$ is also a USO. Formally, the phases are the equivalence classes of the equivalence relation on $E_i$ obtained by taking the transitive closure of the relation of *direct-in-phaseness*: In other words, $e$ and $e'$ are in direct phase if two of their incident vertices have the same outmap within the subcube they span, apart from the orientation of the $i$-edges. We can thus see that in this case, $e$ and $e'$ must be oriented in the same direction in $O$.
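The following Python sketch (our own code and naming, not from the original text) makes the direct-in-phase test concrete, with outmaps given as a function from integer bitmask vertices to integer bitmasks. We phrase the certificate as the bitwise condition $(v\oplus w)\wedge(O(v)\oplus O(w)) = I_i$, which is how the same condition is written later in the paper; checking all four endpoint pairs of the two edges needs only four outmap evaluations.

```python
from typing import Callable

Outmap = Callable[[int], int]  # vertex bitmask -> outmap bitmask

def certifies(O: Outmap, i: int, v: int, w: int) -> bool:
    """Do v and w certify that their incident i-edges are in direct phase?
    Condition: (v xor w) and (O(v) xor O(w)) equals the i-th unit vector."""
    return (v ^ w) & (O(v) ^ O(w)) == 1 << (i - 1)

def in_direct_phase(O: Outmap, i: int, e: tuple[int, int], f: tuple[int, int]) -> bool:
    """Are the i-edges e and f in direct phase? (Four outmap comparisons.)"""
    return any(certifies(O, i, v, w) for v in e for w in f)

# Tiny check on Q_2 with the uniform USO O(v) = v (all edges oriented toward
# the lower facets): its two 1-edges are not in direct phase, as expected,
# since in the uniform USO every edge forms its own phase.
uniform = lambda v: v
e1, e2 = (0b00, 0b01), (0b10, 0b11)   # the two 1-edges of Q_2
print(in_direct_phase(uniform, 1, e1, e2))  # False
```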
Flipping just one of the two edges leads to an immediate violation of the Szabó-Welzl condition by $v$ and $w$; thus, the orientations $O \otimes\{e\}$ and $O\otimes\{e'\}$ are not USOs. Further note that $v$ and $w$ must lie on opposing sides of their respective $i$-edges. However, not both pairs of opposing endpoints of $e$ and $e'$ need to fulfill . See [2](#fig:onlyOneVertexCertificate){reference-type="ref" reference="fig:onlyOneVertexCertificate"} for an example of a USO in which two edges are in direct phase but this fact is only certified by one pair of opposing endpoints.

![The dotted edges are in direct phase. This is certified by the pair of vertices marked in solid red, but not by the vertices circled in black. Note that the phase containing the two dotted edges also contains the front right vertical edge. The back left vertical edge is not in this phase; it is flippable.](img/onlyOneVertexCertificate.pdf "fig:"){#fig:onlyOneVertexCertificate}

[\[def:phases\]]{#def:phases label="def:phases"} Let $O$ be a USO. Two edges $e,e'\in E_i$ are *in phase* if there exists a sequence $$e=e_0,e_1,\ldots,e_{k-1},e_k=e'$$ such that for all $j\in [k]$, $e_{j-1}$ and $e_j$ are in direct phase. An *$i$-phase* $P\subseteq E_i$ is a maximal set of edges such that all $e,e'\in P$ are in phase. We write $\mathcal{P}_i(O)$ for the family of all $i$-phases of $O$.

On the one hand, since edges that are in direct phase must be flipped together, we can never flip a set $S\subseteq E_i$ that is not a union of $i$-phases. On the other hand, if flipping some set $S\subseteq E_i$ destroys the Szabó-Welzl condition for some vertices $v,w$, their incident $i$-edges must have been in direct phase, and exactly one of the two edges is in $S$. Therefore, flipping a union of $i$-phases always preserves the Szabó-Welzl condition. We thus get the following observation, which we will strengthen in :

[\[obs:schurrpropositionSingleDim\]]{#obs:schurrpropositionSingleDim label="obs:schurrpropositionSingleDim"} Let $O$ be a USO, and $i\in[n]$ a dimension. For any $S\subseteq E_i$, $O\otimes S$ is a USO if and only if $S$ is a union of phases, i.e., $S=\bigcup_{P\in \mathcal{P}'}P$ for some $\mathcal{P}'\subseteq \mathcal{P}_i(O)$.

Note that this also implies that flipping a union of $i$-phases does not change the $i$-phases. We further want to note the following:

[\[obs:EveryPairOfVerticesCertifiesDIPOfAtMost2Edges\]]{#obs:EveryPairOfVerticesCertifiesDIPOfAtMost2Edges label="obs:EveryPairOfVerticesCertifiesDIPOfAtMost2Edges"} Every pair of vertices can certify at most one pair of edges to be in direct phase. In particular, if two vertices $v$ and $w$ certify that their incident $i$-edges are in direct phase, then for every other dimension $j\in (v\oplus w)\setminus\{i\}$, the $j$-edges incident to $v$ and $w$ have opposing orientations and thus cannot be in the same phase.

A special case is a phase $P=\{e\}$, i.e., a phase that consists of only a single edge. We call such an edge *flippable*. See again for an example of a flippable edge.
Schurr provides the following characterization of flippable edges:

[\[lem:flippable\]]{#lem:flippable label="lem:flippable"} An edge $\{v,v\ominus i\}$ in a USO $O$ is *flippable* if and only if $v$ and $v\ominus i$ have the same outmap apart from their connecting $i$-edge, i.e., $$\forall j \in [n] \setminus \{i\} : O(v)_j=O(v\ominus i)_j.$$

There exist $n$-dimensional USOs in which every edge is flippable, i.e., USOs that have $n 2^{n-1}$ phases. This happens for example in the *uniform* USO, where for every $i\in [n]$, every $i$-edge is oriented towards the lower $i$-facet. In contrast, it is also possible that all $i$-edges are in phase with each other, and thus there exists only a single $i$-phase. However, this cannot happen in all dimensions simultaneously: By [@schurr2004phd Lemma 4.14], for $n>2$, every $n$-dimensional USO has at least $2n$ phases.

# Structural Properties

In this section we show various new insights on the structure of phases.

## Connectedness of Phases {#sec:connectedness}

As a first question, we investigate whether it is possible that a phase consists of multiple "components" that are far apart. We first show that each edge that is not flippable is in direct phase with at least one "neighboring" edge.

Let $e=\{v,v\ominus i\}$ be an $i$-edge. If $e$ is not flippable, there exists a dimension $j\neq i$ such that $e$ is in direct phase with the *neighboring* edge $\{v\ominus j,(v\ominus i)\ominus j\}$.

*Proof.* Since $e$ is not flippable, by there exists at least one dimension $j \in [n]\setminus \{i\}$ in which the orientations of $v$ and $v\ominus i$ disagree, i.e., $O(v)_j \neq O(v\ominus i)_j$. Thus, in the 2-face spanned by $v$ and $(v\ominus i)\ominus j$, the $j$-edges are oriented in opposite directions. These two vertices certify that $e$ and $\{v\ominus j, (v\ominus i)\ominus j\}$ are in direct phase. ◻

However, edges that are in direct phase with each other are not necessarily neighboring, as can be seen for example in [2](#fig:onlyOneVertexCertificate){reference-type="ref" reference="fig:onlyOneVertexCertificate"}. Nonetheless, we will prove that with respect to this neighboring relation, every phase is *connected*. Let us first define the neighboring relation more formally.

[\[def:neighborhoodGraph\]]{#def:neighborhoodGraph label="def:neighborhoodGraph"} For some dimension $i\in [n]$ of a cube $Q_n$, we say two $i$-edges are *neighboring* when there exists a 2-face containing both of them. Let $N_i$ be the graph with $V(N_i) = E(Q_n)_i$. There is an edge between two vertices of $N_i$ if the vertices correspond to neighboring $i$-edges in $Q_n$. We call $N_i$ the *neighborhood graph* in dimension $i$.
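As a small illustration of the flippability criterion above, the following Python sketch (our own code, with 1-indexed dimensions as in the text) tests flippability of an edge from two outmap evaluations and confirms that in the uniform USO, whose outmap is the identity on bitmask vertices, every edge is flippable.

```python
def flippable(O, n: int, v: int, i: int) -> bool:
    """Edge {v, v ⊖ i} is flippable iff the outmaps of its endpoints agree
    in every dimension except i, i.e., O(v) xor O(v ⊖ i) == I_i."""
    return O(v) ^ O(v ^ (1 << (i - 1))) == 1 << (i - 1)

# Uniform USO: every i-edge points toward the lower i-facet, so O(v) = v.
uniform = lambda v: v
n = 4
assert all(flippable(uniform, n, v, i)
           for v in range(2 ** n) for i in range(1, n + 1))
```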
[\[thm:connectedness\]]{#thm:connectedness label="thm:connectedness"} Let $P$ be an $i$-phase of a USO $O$. Then the subgraph of $N_i$ induced by the edges of $P$ is connected.

*Proof.* Let $C,\overline{C}$ be any non-trivial partition of the edges of $P$. Let $O'$ be the orientation in which we flip all edges in $C$ but not the edges in $\overline{C}$, i.e., $O' := O\otimes C$. The orientation $O'$ cannot be a USO, since the edges of $C$ and $\overline{C}$ are in phase. This means that there is a minimal face $f$ such that $O'$ is a PUSO on $f$. It is easy to see that $f$ must span dimension $i$. By , every antipodal pair of vertices in $f$ has the same outmap within the dimensions spanned by $f$, i.e., any such pair of vertices fails the Szabó-Welzl condition. Therefore, within $f$, every pair of antipodal $i$-edges is in direct phase, and every such pair consists of exactly one edge in $C$ and one edge in $\overline{C}$. We can thus see that $f$ must contain at least one $i$-edge in $C$ that is neighboring an $i$-edge in $\overline{C}$. Since this holds for all non-trivial partitions of $P$, the subgraph of $N_i$ induced by $P$ must be connected. ◻

Note that if we instead considered the subgraph of $N_i$ in which we only use edges $\{e_1,e_2\}$ such that $e_1$ and $e_2$ are both neighboring *and* in direct phase, we could have phases with a disconnected induced subgraph. An example of this is , where neither of the front two vertical edges is in direct phase with its neighboring vertical edge in the back. We will elaborate more on this in . The connectedness of phases will prove to be a useful tool in the next section, where we prove , a connection between phases and *hypervertices*.

## Phases and Hypervertices {#sec:hypervertices}

A face $f$ of a cube $Q_n$ is called a *hypervertex* if and only if for all vertices $v, w\in f$ and all dimensions $i$ not spanned by $f$, we have $O(v)_i = O(w)_i$. In other words, for each such dimension $i$, all $i$-edges between $f$ and the rest of the cube are oriented the same way. By , a hypervertex of dimension $1$ is thus a flippable edge. We therefore know that we can change the orientation within a one-dimensional hypervertex arbitrarily without destroying the USO condition. The following lemma due to Schurr and Szabó generalizes this to higher-dimensional hypervertices.

[\[lem:exchangeHypervertices\]]{#lem:exchangeHypervertices label="lem:exchangeHypervertices"} Let $O$ be an $n$-dimensional USO, and $f$ some $k$-dimensional hypervertex.
Then, the orientation on the face $f$ can be replaced by any other $k$-dimensional USO $O'_f$, in the sense that the resulting orientation $O'$ defined by $$\forall v\in V(Q_n),i\in [n]: O'(v)_i:= \begin{cases} O(v)_i & \text{if }f_i\not=* \text{ or } v\not\in f, \\ O'_f(\{v_j \;|\; f_j = *\})_i & \text{otherwise}, \end{cases}$$ is also a USO.

Hypervertices thus allow for a local change of the orientation. We will now investigate what this implies about the structure of phases. In particular, we prove the following two statements, given an $n$-dimensional USO $O$ and some face $f$:

[\[lem:hypervertexIFFphases\]]{#lem:hypervertexIFFphases label="lem:hypervertexIFFphases"} The face $f$ is a hypervertex if and only if for all $i\in\dim(f)$, $e\in E(f)_i$ and $e'\in E(Q_n\setminus f)_i$, $e$ and $e'$ are not in phase.

[\[thm:AllHEdgesAreInPhase=\>Hypervertex\]]{#thm:AllHEdgesAreInPhase=>Hypervertex label="thm:AllHEdgesAreInPhase=>Hypervertex"} If there exists some $i \in \dim(f)$ such that $E(f)_i$ is an $i$-phase of $O$, then $f$ is a hypervertex.

In other words, says that a face is a hypervertex if and only if the phases of all edges of $f$ are strictly contained within that face. gives a slightly different sufficient condition for $f$ being a hypervertex: $f$ is a hypervertex if *all* edges of *one* dimension of $f$ form exactly *one* phase.

*Proof of .* We first prove the "if" direction. Assume $f$ is not a hypervertex. Then $f$ has at least one incoming and one outgoing edge in some dimension $i \notin \dim(f)$. More specifically, there exists a 2-face crossing between $f$ and $Q_n\setminus f$ that contains the aforementioned incoming and outgoing edges, i.e., a 2-face with $\{v, w\} \in E(f)_j$, $\{(v\ominus i), (w\ominus i)\} \in E(Q_n\setminus f)_j$ and $O(v)_i \neq O(w)_i$. This, however, implies that the edge $\{v, w\}$ is in direct phase with the edge $\{(v\ominus i), (w\ominus i)\}$.

Next, we prove the "only if" direction and assume $f$ is a hypervertex.
Let $i \in \dim(f)$, $e\in E(f)_i$ and $e'\in E(Q_n\setminus f)_i$. By we can change the current orientation $O_f$ of a hypervertex $f$ to any arbitrary USO of the same dimension. We can thus replace it by the USO $O_f'=O_f\otimes E(f)_i$, i.e., $O_f$ with all edges of dimension $i$ flipped. This flips $e$ but not $e'$, and the result is still a USO. Thus, by , $e$ and $e'$ are not in phase. ◻

To prove we need the connectedness of phases () from the previous subsection, as well as another ingredient, the *partial swap*: Given a USO $O$ and a dimension $j\in [n]$, the *partial swap* is the operation of swapping the subgraph of the upper $j$-facet induced by the endpoints of all *upwards oriented* $j$-edges with the corresponding subgraph in the lower $j$-facet. By [@borzechowski2022construction], this operation preserves the USO condition and all $j$-phases.

*Proof of .* We prove this theorem by contradiction. Assume $O$ is a USO with a dimension $i \in \dim(f)$ such that $E(f)_i$ is an $i$-phase of $O$, but $f$ is *not* a hypervertex. Thus, there exists a dimension $j \in [n] \setminus \dim(f)$ such that for some pair of vertices $v, w\in f$, the orientations of the connecting edges $\{v, v\ominus j \}$ and $\{w, w\ominus j \}$ differ.

First, we switch our focus to the face $f'$ that contains $f$ and for which $\dim(f')=\dim(f)\cup\{j\}$. Without loss of generality, we assume $f$ is the lower $j$-facet of $f'$. Second, we adjust the orientation of the $i$-edges. We let all $i$-edges in $E(f)$ point upwards and all $i$-edges in $E(f') \setminus E(f)$ point downwards. Since $E(f)_i$ is a phase of $O$, the resulting orientation $O'$ of $f'$ is a USO. For all $\{v, w\} \in E(f)_i$ it holds that $O(v)_j = O(w)_j$, since otherwise the edge $\{v,w\}$ would be in direct phase with the edge $\{v\ominus j,w\ominus j\}$. Thus, the orientation of the $j$-edges splits $E(f)_i$ into two parts: $$\begin{aligned} E(f)_i^+ &:= \{ \{v, w\} \in E(f)_i \;|\; O'(v)_j = O'(w)_j=0 \} \text{ and} \\ E(f)_i^- &:= \{ \{v, w\} \in E(f)_i \;|\; O'(v)_j = O'(w)_j = 1 \}. \end{aligned}$$ As $E(f)_i$ is a phase, there must be some edge $e^+\in E(f)_i^+$ which is in direct phase with some edge $e^-\in E(f)_i^-$. We now perform a partial swap on $O'$ in dimension $j$, yielding a USO $O''$.
The endpoints of $e^+$ are not impacted by this operation, but the endpoints of $e^-$ are moved to the opposite $j$-facet, now forming a new edge ${e'}^{-}$. The two edges $e^+$ and ${e'}^{-}$ must be in direct phase in $O''$, since in dimension $j$ all four of their endpoints are incident to an incoming edge, and in all other dimensions the outmaps of the endpoints of ${e'}^{-}$ in $O''$ are the same as the outmaps of the endpoints of ${e}^-$ in $O'$.

![Sketch of $O'$ and $O''$ from the proof of . The edges $e^+\in E(f)_i^+$ and $e^-\in E(f)_i^-$ are pulled apart by the partial swap. If they were in direct phase before, they must still be in direct phase, but the phase connecting $e^+$ and $e'^-$ after the partial swap cannot be connected.](img/proof_partialswap.pdf "fig:"){#fig:counterExample}

For every pair of $i$-edges neighboring in dimension $j$, exactly one is upwards and one is downwards oriented. Let $P$ be the phase containing $e^+$ and ${e'}^-$. Since $P$ contains only upwards oriented $i$-edges, and since $P$ contains at least one edge of each $j$-facet, the subgraph of $N_i$ induced by $P$ cannot be connected. See for a sketch of $O'$ and $O''$. This is a contradiction to , which concludes the proof. ◻

can be adjusted so that it also holds for phases which do not span a full face, applying only to dimensions that leave the minimal face containing the phase:

[\[lem:partialHypervertex\]]{#lem:partialHypervertex label="lem:partialHypervertex"} Let $O$ be a USO on the cube $Q_n$ and $P$ an $i$-phase of $O$. Let $f$ be the minimal face with $P \subseteq E(f)_i$. Then, for all dimensions $j \in [n] \setminus \dim(f)$ and for all vertices $v, w$ incident to edges of $P$, we have $O(v)_j = O(w)_j$.

*Proof.* The proof is exactly the same as for . ◻

It is unclear how --- in the setting of --- the edges between $P$ and the vertices within $f$ behave. Furthermore, it is not known whether some form of can also be extended to these non-facial subgraph structures.
## Phases and Matchings {#sec:SchurrsProposition}

In [@schurr2004phd], Schurr stated the following proposition, generalizing from sets of edges in the same dimension to *all* matchings:

[\[prop:schurr\]]{#prop:schurr label="prop:schurr"} Let $O$ be a USO and $H\subseteq E$ be a matching. Then, $O\otimes H$ is a USO if and only if $H$ is a union of phases of $O$.

Sadly, Schurr's proof of the "only if" direction of this proposition is wrong [@Dani]. We reprove the proposition in this section. Let us first restate the "if" direction, which was already proven correctly by Schurr:

[\[lem:schurrforward\]]{#lem:schurrforward label="lem:schurrforward"} Let $O$ be a USO and $H\subseteq E$ be a matching that is a union of phases. Then, $O\otimes H$ is a USO.

*Proof (from [@schurr2004phd]).* We verify for every pair of vertices $u,v \in V(Q_n)$ that they fulfill the Szabó-Welzl condition in $O\otimes H$. Each of $u$ and $v$ is incident to at most one edge in $H$. If neither is incident to an edge in $H$, they trivially fulfill the Szabó-Welzl condition in $O\otimes H$ since they also did in $O$. If the one or two edge(s) of $H$ incident to $u$ and $v$ are all $i$-edges for some $i$, then the outmaps of $u$ and $v$ are the same in $O\otimes H$ as in $O\otimes(H\cap E_i)$, which is a USO by since $H\cap E_i$ is a union of $i$-phases. We thus only have to consider the case where $u$ is incident to an $i$-edge of $H$ and $v$ is incident to a $j$-edge of $H$, with $i\neq j$ and both $i$ and $j$ within the face that $u$ and $v$ span, i.e., $u_i \neq v_i$ and $u_j \neq v_j$.

Let $w$ be the vertex in the face spanned by $u$ and $v$ with $O(w)\wedge(u\oplus v)=(O(u)\ominus i)\wedge (u\oplus v)$; this means the $i$-edges incident to $u$ and $w$ are in direct phase. Such a vertex $w$ is guaranteed to exist since $O$ is a USO and thus a bijection on each face. Now, assume that $u$ and $v$ violate the Szabó-Welzl condition in $O\otimes H$, i.e., $(u\oplus v)\wedge ((O\otimes H)(u) \oplus(O\otimes H)(v))=0^n$. Thus, we must have that in $O$, $(u\oplus v)\wedge (O(u)\oplus O(v))=I_i\oplus I_j$. We can therefore see that the $j$-edges incident to $v$ and $w$ must be in direct phase too. Since $H$ is a union of phases, it must contain both the $i$-edge and the $j$-edge incident to $w$. This contradicts the assumption that $H$ is a matching. Thus, the lemma follows. ◻

The "only if" direction, on the other hand, can be phrased as follows:

[\[lem:schurrbackward\]]{#lem:schurrbackward label="lem:schurrbackward"} Let $O$ be a USO and $H\subseteq E$ a matching. Then, if $O\otimes H$ is a USO, $H$ is a union of phases of $O$.

Schurr proves this direction by contraposition: Assume $H$ is not a union of phases. Then, there must be a phase $P$ and two edges $e,e'\in P$ such that $e\in H$ and $e'\not\in H$. While $e$ and $e'$ are not necessarily directly in phase, there must be a sequence of direct-in-phaseness relations starting at $e$ and ending at $e'$. At some point in this sequence, there must be two edges $e_i\in H$ and $e_{i+1}\not\in H$ that are in direct phase. Schurr then argues that since we flip only $e_i$ but not $e_{i+1}$, the vertices $v\in e_i$ and $w\in e_{i+1}$ certifying that these edges are in direct phase must violate the Szabó-Welzl condition after the flip.
Thus, $O\otimes H$ would not be a USO. However, it is possible that $w$ is incident to another edge of a dimension in $v\oplus w$ that is contained in $H$, which would make $v$ and $w$ no longer violate the Szabó-Welzl condition. This issue seems very difficult to fix, since the core of the argument (the outmap of $w$ not being changed) is simply wrong. We thus opt to reprove in a completely different way. We first need the following observation:

[\[obs:flippingNonAdjecentEdgesDoesNotAffectPhases\]]{#obs:flippingNonAdjecentEdgesDoesNotAffectPhases label="obs:flippingNonAdjecentEdgesDoesNotAffectPhases"} Let $O$ be a USO and $H$ a union of phases in $O$. Let $P$ be a set of $i$-edges such that $H\cap P=\emptyset$ and $H\cup P$ is a matching. If $P$ is a phase in $O$, it is a union of phases in $O\otimes H$.

*Proof.* If $P$ is a phase in $O$, by , both $O$ with $H$ flipped and $O$ with $H\cup P$ flipped are USOs. Their difference is $P$, and thus by , $P$ is a union of phases in $O\otimes H$. ◻

We prove with the help of two *minimization* lemmata, [\[lem:minimization1,lem:minimization2\]](#lem:minimization1,lem:minimization2){reference-type="ref" reference="lem:minimization1,lem:minimization2"}. Note that a *counterexample to* is a pair $(O,H)$ such that $O$ is a USO, $H$ is a matching that is not a union of phases in $O$, but $O\otimes H$ is a USO nonetheless.

[\[lem:minimization1\]]{#lem:minimization1 label="lem:minimization1"} If there exists a counterexample $(O^*,H^*)$ to [\[lem:schurrbackward\]](#lem:schurrbackward){reference-type="ref" reference="lem:schurrbackward"}, then there also exists a counterexample $(O,H)$ with $\dim(O^*)=\dim(O)$ in which $H$ does *not* contain a whole phase of $O$.

*Proof.* Let $U$ be the union of all phases $P$ of $O^*$ that are fully contained in the matching, i.e., $P\subseteq H^*$. By , $O:=O^*\otimes U$ is a USO. We denote by $H$ the set $H^*\setminus U$, which is a matching containing only incomplete sets of phases of $O^*$. We now argue that $(O,H)$ is a counterexample with the desired property. As $H^*$ originally was not a union of phases, $H\not=\emptyset$. Furthermore, $O\otimes H$ is equal to $O^*\otimes H^*$ and thus by assumption a USO. It remains to be proven that $H$ is not a union of phases of $O$, and in particular contains no phase completely.
To do so, we first prove that $U$ is a union of phases in $O$. One can see this by successively flipping in $O^*$ the sets $U_i:=U\cap E_i$, which decompose $U$ into the edges of the different dimensions. After flipping each set $U_{i}$, by , all the other sets $U_{j}$ remain unions of phases. Furthermore, as flipping a union of $i$-phases does not change the set of $i$-phases, $U_{i}$ also remains a union of phases. Now, by , any phase $P\subseteq H$ of $O$ is a union of phases in $O\otimes U = O^*$. But then, by definition of $U$, $P$ would have been included in $U$. Thus, we conclude that $H$ does not contain any phase of $O$. ◻

[\[lem:minimization2\]]{#lem:minimization2 label="lem:minimization2"} If there exists a counterexample to , then there also exists a counterexample $(O,H)$ such that for each facet $F$ of $O$, $H\cap F$ is a union of phases in $F$, and such that $H$ contains no phase of $O$.

*Proof.* Let $(O,H)$ be a smallest-dimensional counterexample to among all counterexamples $(O^*,H^*)$ where $H^*$ contains no phase of $O^*$. By , at least one such counterexample must exist, thus $(O,H)$ is well-defined. Let $n$ be the dimension of $O$. We now prove that $H\cap F$ is a union of phases in $F$ for all facets $F$: If this were not the case for some $F$, then constraining $O$ and $H$ to $F$ would yield a counterexample $(O_F,H_F)$ to of dimension $n-1$. By applying , this counterexample could also be turned into an $(n-1)$-dimensional counterexample $(O_F',H_F')$ such that $H_F'$ contains no phase of $O_F'$. This is a contradiction to the definition of $(O,H)$ as the smallest-dimensional counterexample with this property. We conclude that $(O,H)$ is a counterexample with the desired properties. ◻

We are finally ready to prove , and thus also .
*Proof of .* By it suffices to show that there exists no counterexample $(O,H)$ to this lemma such that $H$ contains no phase of $O$ and, in each facet $F$ of $O$, $H\cap F$ is a union of phases. Assume that such a counterexample $(O,H)$ exists.

For each dimension $i \in [n]$ for which $H_i:=H\cap E_i$ is non-empty we consider the orientation $O_i := O\otimes H_i$. By assumption, $H_i$ is not a union of phases of $O$, and thus $O_i$ is not a USO. Furthermore, as $H_i$ is a union of phases in each facet of $O$, all facets of $O_i$ are USOs. Thus, $O_i$ is a PUSO. Recall that by , in a PUSO, every pair of antipodal vertices has the same outmap. For $O_i$ to be a PUSO and $O$ to be a USO, exactly one vertex of each antipodal pair of vertices must be incident to an edge in $H_i$.

We know that $H$ contains edges of at least two dimensions, $i$ and $j$ (otherwise $H=H_i$ and $O\otimes H=O_i$ would not be a USO). Consider $H_i$ and $H_j$. By the aforementioned argument, both $O_i$ and $O_j$ are PUSOs. As $H$ is a matching, there is no vertex incident to both an edge of $H_i$ and an edge of $H_j$. Therefore, for each pair of antipodal vertices $v, w$, one is incident to an edge of $H_i$, and one to an edge of $H_j$. Since $v$ and $w$ must have the same outmaps in both $O_i$ and $O_j$, we get the following two conditions:

1. $(v\oplus w) \wedge (O(v) \oplus O(w)) = I_i$, and
2. $(v\oplus w) \wedge (O(v) \oplus O(w)) = I_j$.

Clearly, this implies $I_i=I_j$. We have thus obtained a contradiction, and no counterexample to can exist.
◻

Now that we have recovered , we can also strengthen :

[\[lem:nonAdjacentEdgesAreNotAffectedByPhaseflips\]]{#lem:nonAdjacentEdgesAreNotAffectedByPhaseflips label="lem:nonAdjacentEdgesAreNotAffectedByPhaseflips"} Let $O$ be a USO and $H$ a union of phases in $O$. Let $P$ be a set of $i$-edges such that $H\cap P=\emptyset$ and $H\cup P$ is a matching. Then, $P$ is a phase in $O$ if and only if it is a phase in $O' := O\otimes H$.

*Proof.* We first prove that $P$ is a *union of* phases in $O$ if and only if it is a *union of* phases in $O'$. By , $H$ is a union of phases in both $O$ and $O'$. Thus, this follows from . Now, assume for a contradiction that $P$ is a single phase in $O$ but a union of multiple phases in $O'$ (the other case can be proven symmetrically): Then, let $P'\subset P$ be a phase of $O'$. By the statement proven above, $P'$ is a union of phases in $O$. However, $P'\subset P$ and $P$ is a single phase. This yields a contradiction, thus the lemma follows. ◻

We thus conclude that every phase remains a phase when flipping some matching that is not adjacent to any edge of the phase.

## The $n$-Schurr Cube {#sec:dSchurrCube}

When trying to find an efficient algorithm for computing phases, one might ask the following: Is there some small integer $k(n)$ such that for every $n$-dimensional USO, the transitive closure of the direct-in-phaseness relation stays the same when only considering the relation between pairs of $i$-edges which have a distance of at most $k(n)$ to each other in $N_i$? In other words, does it suffice to compute the direct-in-phaseness relationships only for edges that are close to each other, instead of for all pairs of $i$-edges?

In this section we will show that for every $n$ there exists an $n$-dimensional USO in which a direct-in-phase relationship between some antipodal $i$-edges is necessary to define some $i$-phase, i.e., we show that no such $k(n)<n-1$ exists. Such a USO was found by Schurr [@schurr2004phd] for $n=3$ and is shown in : All four vertical edges are in phase, but if only direct-in-phaseness between non-antipodal edges is considered, there would be no connection between the front and the back facet.

![The $3$-Schurr Cube. The vertical edges are all in phase. The direct-in-phase relationships between these edges are marked in green. Note that when disregarding the direct-in-phase relationships of antipodal edges (dotted), this phase splits in two parts.](img/3schurr.pdf "fig:"){#fig:3DSchurr}

We generalize the properties of this 3-Schurr cube to the $n$-Schurr cube, i.e., we show that there exists an $n$-dimensional USO such that

- all $n$-edges are in phase,
- all $n$-edges are in direct phase with their antipodal $n$-edge, and
- if the direct-in-phaseness between antipodal $n$-edges is ignored, the phase splits apart.
We can obtain a cube fulfilling this in a recursive fashion:

[\[def:nschurr\]]{#def:nschurr label="def:nschurr"} Let $S^1$ be the $1$-dimensional USO consisting of a single edge oriented towards $0$. For $n\geq 2$, let $S^n$ be the cube obtained by placing $S^{n-1}$ in the lower $n$-facet, and $S^{n-1}\otimes E_{n-1}$ in the upper $n$-facet, with all $n$-edges oriented towards the lower $n$-facet.

Alternatively, we can define the same cube without recursion, simplifying its analysis: $$\forall v\in \{0,1\}^n:\; S^n(v)_i=\begin{cases} v_i\oplus v_{i+1} & \text{ for } i<n,\\ v_n & \text{ for } i=n. \end{cases}$$ An example of this cube for $n=4$ can be seen in . This cube fulfills the properties outlined above:

[\[lem:dschurr\]]{#lem:dschurr label="lem:dschurr"} In $S^n$ as defined in , all $n$-edges are in direct phase with their antipodal edge (certified by both pairs of antipodal endpoints), and all $n$-edges are in phase. No $n$-edge is in direct phase with any non-antipodal $n$-edge located in the opposite $1$-facet.

*Proof.* We first see that every pair $v,w$ of antipodal vertices certifies their incident $n$-edges to be in direct phase, since for all $i<n$, $v_i\oplus v_{i+1}=w_i\oplus w_{i+1}$ and thus $S^n(v)_i=S^n(w)_i$. Next, we show that no $n$-edge is in direct phase with a non-antipodal $n$-edge in the opposite $1$-facet. Let $v$ be any vertex and $w$ a vertex such that $w_{1}\neq v_{1}$, $w_n\neq v_n$, but $v_i=w_i$ for some $i$. Let $i'$ be the minimum among all $i$ with $v_i=w_i$. Note that $i'>1$. Then, we have $v_{i'-1}\neq w_{i'-1}$, but we also have $S^n(v)_{i'-1}=v_{i'-1}\oplus v_{i'} \neq w_{i'-1}\oplus w_{i'}=S^n(w)_{i'-1}$, and thus the $n$-edges incident to $v$ and $w$ are not in direct phase.

By a similar argument one can see that two vertices $v$ and $w$ certify their incident $n$-edges to be in direct phase if and only if there exists some integer $1\leq k\leq n$ such that $v_i=w_i$ for all $i<k$, and $v_i\neq w_i$ for all $i\geq k$. From this, it is easy to see that all $n$-edges are in phase: The $n$-edges in the upper $1$-facet are each in direct phase with some edge in the lower $1$-facet. The lower $1$-facet is structured in the same way as the cube $S^{n-1}$, thus we can inductively see that all $n$-edges in this facet are in phase. Therefore, all $n$-edges of $S^n$ are in phase. ◻
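The closed-form description of $S^n$ is easy to experiment with computationally. The following Python sketch (our own code, with 1-indexed dimensions as in the text) builds the outmap of $S^n$ on bitmask vertices and checks, for a small $n$, that every antipodal pair of vertices certifies its incident $n$-edges to be in direct phase, as stated in the lemma above.

```python
def schurr_outmap(n: int):
    """Outmap of the n-Schurr cube S^n:
    S(v)_i = v_i xor v_{i+1} for i < n, and S(v)_n = v_n."""
    def bit(v, i):  # i-th coordinate of the bitmask vertex v (1-indexed)
        return (v >> (i - 1)) & 1
    def S(v: int) -> int:
        out = bit(v, n) << (n - 1)
        for i in range(1, n):
            out |= (bit(v, i) ^ bit(v, i + 1)) << (i - 1)
        return out
    return S

def certifies(S, i, v, w):
    return (v ^ w) & (S(v) ^ S(w)) == 1 << (i - 1)

n = 5
S = schurr_outmap(n)
full = (1 << n) - 1
# Every antipodal pair certifies that its incident n-edges are in direct phase.
assert all(certifies(S, n, v, v ^ full) for v in range(2 ** n))
```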
![The $4$-Schurr Cube. The combed edges between the two pictured $4$-facets are in one phase. The direct-in-phase relationships between these edges are shown on the right. Note that when disregarding the direct-in-phase relationships of antipodal edges (dotted), the phase splits in two parts.](img/4schurr.pdf "fig:"){#fig:4DSchurr width=".9\\textwidth"}

![The $4$-Schurr Cube. The combed edges between the two pictured $4$-facets are in one phase. The direct-in-phase relationships between these edges are shown on the right. Note that when disregarding the direct-in-phase relationships of antipodal edges (dotted), the phase splits in two parts.](img/4schurrPhases.pdf "fig:"){#fig:4DSchurr width=".9\\textwidth"}

# An Improved Algorithm to Compute Phases {#sec:ComputationOfPhases}

The definitions of direct-in-phaseness and phases () naturally imply a simple algorithm to compute all phases of a USO: Compare every pair of vertices and record the edges that are in direct phase, then run a connected-components algorithm on the graph induced by these direct-in-phase relationships. This takes $O(4^n)$ time for an $n$-dimensional cube. As we will see, we can do better.

Based on we get a natural connection between USO recognition and the computation of phases: if $O$ is a USO and $H\subseteq E_i$ a set of $i$-edges, then $H$ is a union of phases if and only if $O\otimes H$ is a USO. However, using USO recognition as a black-box algorithm would be highly inefficient for computing phases (as opposed to testing whether some set is a phase), since we would have to check whether $O\otimes H$ is a USO for many different candidate sets $H$. We will see that a single run of a USO recognition algorithm suffices to compute all phases. To achieve this, we can profit from the similarity of the Szabó-Welzl condition () and the condition for being in direct phase ().

Let $\mathcal{A}$ be an algorithm for USO recognition that tests the Szabó-Welzl condition for some subset $T$ of all vertex pairs $\binom{V(Q_n)}{2}$, and outputs that the given orientation is a USO if and only if all pairs in $T$ fulfill the Szabó-Welzl condition. We will show that then there exists an algorithm $\mathcal{B}$ for computing all phases of a given USO that also only compares the vertex pairs in $T$. Our phase computation algorithm $\mathcal{B}$ is based on the following symmetric relation: Let $T\subseteq \binom{V(Q_n)}{2}$. Two $i$-edges $e,e'$ are *in direct $T$-phase* if

- $e$ and $e'$ are in direct phase, and
- there exist $v \in e$, $v' \in e'$ such that $v,v'$ certify $e,e'$ to be in direct phase, and $\{v,v'\}\in T$.

[\[lem:blerbs\]]{#lem:blerbs label="lem:blerbs"} Let $T\subseteq \binom{V(Q_n)}{2}$ be the set of vertex pairs of a USO recognition algorithm $\mathcal{A}$. In every USO $O$, the transitive closure of direct-in-$T$-phaseness is equal to the transitive closure of direct-in-phaseness.

*Proof.* Clearly, by definition, the equivalence classes of direct-in-$T$-phaseness (called *$T$-phases*) are a refinement of phases. We now show that every $T$-phase is also a phase. Assume there is a $T$-phase $B$ (in dimension $i$) that is a strict subset of a phase $P$. Then, by , $O\otimes B$ is not a USO. Algorithm $\mathcal{A}$ must be able to detect this. Thus, there is a vertex pair $\{v,v'\}\in T$ that violates the Szabó-Welzl condition in $O\otimes B$. Clearly, exactly one of the $i$-edges incident to $v,v'$ is contained in $B$. However, these edges are in direct phase in $O$, and thus they are also in direct $T$-phase. We conclude that both edges should be in the same $T$-phase, and we thus have a contradiction. ◻

With this lemma, we can turn the USO recognition algorithm $\mathcal{A}$ into a phase computation algorithm $\mathcal{B}$: For each pair of vertices in $T$, find the edges they certify to be in direct $T$-phase. Then, calculate the connected components.
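As a concrete illustration of algorithm $\mathcal{B}$, here is a Python sketch (our own; function and variable names are not from the paper). It takes an outmap on bitmask vertices and a collection $T$ of vertex pairs, records the edge pairs certified to be in direct phase, and returns the $i$-phases as connected components via union-find. With $T$ equal to all vertex pairs this is the naive $O(4^n)$ method; plugging in a smaller valid recognition set $T$, such as the $3^n$-sized one discussed below, gives the speed-up.

```python
from itertools import combinations

def phases(O, n: int, i: int, T):
    """Compute the i-phases of the USO given by outmap O, comparing only the
    vertex pairs in T (assumed to come from a valid USO recognition algorithm).
    Each i-edge is represented by its endpoint whose i-th bit is 0."""
    mask = 1 << (i - 1)
    parent = {v: v for v in range(2 ** n) if not v & mask}  # one entry per i-edge

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for v, w in T:
        # v and w certify their incident i-edges to be in direct phase iff
        # (v xor w) & (O(v) xor O(w)) == I_i.
        if (v ^ w) & (O(v) ^ O(w)) == mask:
            union(v & ~mask, w & ~mask)

    classes = {}
    for e in parent:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())

# Naive choice of T: all pairs of vertices (the O(4^n) algorithm).
n = 3
T_all = combinations(range(2 ** n), 2)
uniform = lambda v: v
print(phases(uniform, n, 1, T_all))  # four singletons: every edge is flippable
```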
The best known USO recognition algorithm --- the one based on the work of Bosshard and Gärtner on PUSOs [@bosshard2017pseudo] --- uses a set $T$ of size $|T|=3^n$: this follows from the fact that implies that for each face, only the minimum and maximum vertex need to be compared. Thus, we can also calculate the phases in $O(3^n)$ time, and any further advances in USO recognition will also imply further improvements in phase computation.

The fact that holds for *all* valid USO recognition algorithms may also be used to derive some structural results on phases. For example, applied to the $2^n$ versions of the PUSO-based USO recognition algorithm (each version specified by the vertex $v$ which is interpreted as the minimum vertex of the cube) implies the following lemma:

[\[lem:dist-k-dip\]]{#lem:dist-k-dip label="lem:dist-k-dip"} Let $O$ be an $n$-dimensional USO and let $P$ be an $i$-phase. If $P$ splits apart when only direct-in-phase relationships of edges of distance $<n-1$ are considered, then all pairs of antipodal vertices must certify their incident $i$-edges to be in direct phase, and $P=E_i$.

*Proof.* Every version of the PUSO-based algorithm only checks one pair of antipodal vertices. By , each of these pairs must certify their incident $i$-edges to be in direct phase (since ignoring direct-in-phase relationships of antipodal vertices must split apart $P$ by assumption). Furthermore, since this in-phaseness must be relevant for $P$, both of these edges must be part of $P$, and thus all $i$-edges must be in $P$. ◻

While we were able to slightly improve the runtime of computing phases, we show in the next section that this likely cannot be improved much further, since checking whether two given edges are in phase is $\mathsf{PSPACE}$-complete.

# $\mathsf{PSPACE}$-Completeness {#sec:completeness}

Checking whether two edges are in *direct* phase in a USO is trivial: it can be achieved with just four evaluations of the outmap function and $O(n)$ additional time. Surprisingly, in this section we prove that testing whether two edges are in phase (not necessarily directly) is $\mathsf{PSPACE}$-complete. We first have to make the computational model more clear: Since a USO is a graph of exponential size (in the dimension $n$), the usual way of specifying a USO is by a *succinct* representation, i.e., a Boolean circuit computing the outmap function with $n$ inputs and $n$ outputs and overall size polynomial in $n$. This reflects the practical situation very well, since in all current applications of USO sink-finding, the outmap function can be evaluated in time polynomial in $n$.

The decision problem 2IP is to decide the following question:\
Given a USO $O$ by a Boolean circuit of size in $O(poly(n))$ and two edges $e, e'$, are $e$ and $e'$ in phase?

We first show that 2IP can be solved in polynomial space. 2IP is in $\mathsf{PSPACE}$.

*Proof.* We show that 2IP can be solved in polynomial space on a nondeterministic Turing machine, i.e., 2IP $\in \mathsf{NPSPACE}$. By Savitch's theorem [@Savitch], $\mathsf{NPSPACE}$=$\mathsf{PSPACE}$. We solve 2IP by starting at the edge $e$ and guessing a sequence of edges that are each in direct phase with the previous edge. If in this way we can reach $e'$, $e$ and $e'$ must be in phase. Such an algorithm only needs $O(n)$ bits to store the current and the next guessed edge of the sequence. ◻

Next, we show that 2IP is $\mathsf{PSPACE}$-hard.
Since our reduction will only generate acyclic USOs, the problem remains $\mathsf{PSPACE}$-hard even under the promise that the input function specifies an acyclic USO. This restriction makes the theorem much more powerful, since testing whether this promise holds is itself $\mathsf{PSPACE}$-complete [@gaertner2015recognizing]. For our proof we reduce from the following (standard) $\mathsf{PSPACE}$-complete problem: The *Quantified Boolean Formula* (QBF) problem is to decide the following: Given a formula $\Phi$ in conjunctive normal form on the variables $x_1,\ldots,x_n$, as well as a set of quantifiers $q_1,\ldots,q_n\in\{\exists,\forall\}$, decide whether the sentence $q_1x_1,\ldots,q_nx_n:\Phi(x_1,\ldots,x_n)$ is true.

**Fact 1**. *QBF is $\mathsf{PSPACE}$-complete.*

2IP is $\mathsf{PSPACE}$-hard, even when the input is guaranteed to be an acyclic USO and $e$ and $e'$ are antipodal.

*Proof.* We reduce QBF to 2IP. To prove $\mathsf{PSPACE}$-hardness, this reduction must be polynomial-time and many-one. We translate a sentence $q_1x_1,\ldots,q_nx_n:\Phi(x_1,\ldots,x_n)$ into an acyclic USO $O_0[]$, built recursively from the USOs $O_{1}[0]$ and $O_{1}[1]$ which correspond to the sentences $q_2x_2,\ldots,q_{n}x_{n}:\Phi(0,x_2,\ldots,x_{n})$ and $q_2x_2,\ldots,q_{n}x_{n}:\Phi(1,x_2,\ldots,x_{n})$, respectively. In general, a USO $O_i[b_{1},\ldots,b_{i}]$ for $b_j\in\{0,1\}$ corresponds to the sentence $q_{i+1}x_{i+1},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,x_{i+1},\ldots,x_n)$. We show inductively that all of our orientations $O_i[b_{1},\ldots,b_i]$ fulfill the following invariants:

- $O_i[b_{1},\ldots,b_i]$ is a USO.

- $O_i[b_{1},\ldots,b_i]$ is acyclic.

- $O_i[b_{1},\ldots,b_i]$ is combed downwards in dimension $1$.

- The minimum vertex of $O_i[b_{1},\ldots,b_i]$ is its sink, the maximum vertex is its source.

- In $O_i[b_{1},\ldots,b_i]$, the $1$-edges incident to the minimum and maximum vertices are in phase if and only if the sentence $q_{i+1}x_{i+1},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,x_{i+1},\ldots,x_n)$ is true.

If we can show these properties, the only step left for the proof of the reduction is to show that a circuit computing $O_0[]$ can be computed in polynomial time. We begin by discussing the anchor of our recursive construction: The orientations $O_n[b_1,\ldots,b_n]$, which correspond to the (unquantified) sentences $\Phi(b_1,\ldots,b_n)$. The truth value of such an unquantified sentence can be efficiently tested (one simply needs to evaluate $\Phi$ once), and we can thus set these orientations to be one of two fixed orientations: the *true-* or the *false-gadget*.

![The gadget encoding true.](img/pspace/true_base.pdf "fig:"){#subfig:base_true}

![The gadget encoding false.](img/pspace/false_base.pdf "fig:"){#subfig:base_false}

The two base gadgets, the true- and the false-gadget, are the $3$-dimensional USOs shown in . As can be seen, they are both acyclic USOs with sink and source at the minimum and maximum vertex, respectively, and combed downwards in dimension $1$. In the true-gadget, the minimum and maximum vertex of each $1$-facet are connected by a path of two edges (dashed) whose orientations are different in the upper and lower $1$-facets. Thus, along this path, the incident $1$-edges are always in direct phase.
We can thus see that the $1$-edges incident to the minimum and maximum vertices must be in phase, as required. In contrast, in the false-gadget every $1$-edge is flippable (since the gadget is just a uniform USO), and thus the $1$-edges incident to the minimum and maximum vertices are not in phase. We thus conclude that the base cases of our induction hold.

We now show how we build a USO $\mathcal{O}:=O_i[b_{1},\ldots,b_i]$, if $q_{i+1}=\forall$. We first note that $q_{i+1}x_{i+1},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,x_{i+1},\ldots,x_n)$ is true if and only if *both* of the sentences $q_{i+2}x_{i+2},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,B,x_{i+2},\ldots,x_n)$ for $B\in\{0,1\}$, i.e., the two sentences corresponding to $\mathcal{F}:=O_{i+1}[b_{1},\ldots,b_i,0]$ and $\mathcal{T}:=O_{i+1}[b_{1},\ldots,b_i,1]$, are true.

![The $\forall$ construction. The $1$-edges incident to the red vertices are only in phase if the minimum and maximum $1$-edges of both $\mathcal{F}$ and $\mathcal{T}$ are in phase.](img/pspace/forall_recursive.pdf "fig:"){#fig:for_all_gadget}

We show how to build $\mathcal{O}$ from $\mathcal{T}$ and $\mathcal{F}$ in . Essentially, $\mathcal{O}$ consists of two copies of $\mathcal{F}$ ($\mathcal{F}$ and $\mathcal{F}'$), one copy of $\mathcal{T}$, and one uniform USO, all connected in a combed way, but then two specially marked red edges are flipped. Note that by the inductive hypothesis, we can assume both $\mathcal{F}$ and $\mathcal{T}$ to fulfill the invariants outlined above.

We first prove that $\mathcal{O}$ is a USO: Since all four "ingredients" are USOs, before flipping the red edges the orientation is clearly a USO (it can be seen as a product construction as described in [@schurr2004quadraticbound]). We thus only have to show that the two red edges are flippable. The red edge in the top $1$-facet goes between the maximum vertices of $\mathcal{F}_{upper}$ and $\mathcal{F}'_{upper}$, which by inductive hypothesis are both sources of their respective subcubes. Thus, the two endpoints have the same outmap, and this red edge is flippable. The same argument works for the red edge in the bottom $1$-facet, which goes between two sinks. Thus, the orientation is a USO.

Next, we prove that the construction preserves acyclicity: We can view both $1$-facets independently, since the $1$-edges are combed and thus cannot be part of a cycle. In a similar way, we can split each $1$-facet further along some combed dimension. In the resulting subcubes, since the uniform USO, $\mathcal{F}$, and $\mathcal{T}$ are acyclic, any cycle must use one of the red edges. However, in these subcubes each red edge either ends at a sink or starts at a source, and can thus not be part of any directed cycle. Thus, $\mathcal{O}$ is acyclic.

Next, we want to point out that $\mathcal{O}$ is combed downwards in dimension $1$, and since the minimum vertex of $\mathcal{F}$ is a sink and the maximum vertex of $\mathcal{T}$ is a source, the sink and source of $\mathcal{O}$ are also located at the minimum and maximum vertex, respectively. Finally, we need to show that the $1$-phases of $\mathcal{O}$ are correct.
In other words, we wish to prove that the $1$-edges incident to the minimum and maximum vertices are in phase if and only if this holds for *both* $\mathcal{F}$ and $\mathcal{T}$.\
The "if" direction is easy to see, since we have a chain of in-phaseness: We can first go through $\mathcal{F}$, then cross over to the right (since the red and dashed edges go in the opposite direction, their incident $1$-edges are in phase), take the same path back through $\mathcal{F}'$, cross upwards along the red and dashed edges, and finally go through $\mathcal{T}$.\
For the "only if" direction, we can assume that the $1$-edges incident to the minimum and maximum vertices are not in phase in at least one of $\mathcal{F}$ and $\mathcal{T}$. Thus, there must be some phase $P$ in $\mathcal{F}$ that includes the $1$-edge incident to its source but not the one incident to its sink, or there exists a phase $P$ in $\mathcal{T}$ including the $1$-edge incident to its sink but not the one incident to its source. This phase $P$ forms a matching even when the two flippable red edges are added. Thus, by , we can flip $P$ also in $\mathcal{O}$. However, $P$ contains exactly one of the two $1$-edges incident to the minimum and maximum vertices of $\mathcal{O}$. Thus, these $1$-edges are not in phase.\
Thus, we conclude that the $1$-edges incident to the minimum and maximum vertices are in phase if and only if this also holds for both $\mathcal{F}$ and $\mathcal{T}$.

Now we show how we build a USO $\mathcal{O}:=O_i[b_{1},\ldots,b_i]$, if $q_{i+1}=\exists$. We again note that $q_{i+1}x_{i+1},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,x_{i+1},\ldots,x_n)$ is true if and only if *at least one* of the sentences $q_{i+2}x_{i+2},\ldots,q_nx_n:\Phi(b_{1},\ldots,b_i,B,x_{i+2},\ldots,x_n)$ for $B\in\{0,1\}$, i.e., the two sentences corresponding to $\mathcal{F}:=O_{i+1}[b_{1},\ldots,b_i,0]$ and $\mathcal{T}:=O_{i+1}[b_{1},\ldots,b_i,1]$, are true.

![The $\exists$ construction. The $1$-edges incident to the red vertices are in phase if and only if the minimum and maximum $1$-edges of either $\mathcal{F}$ or $\mathcal{T}$ are in phase.](img/pspace/exists_recursive.pdf "fig:"){#fig:exists_gadget}

We show how to build $\mathcal{O}$ from $\mathcal{T}$ and $\mathcal{F}$ in . Essentially, $\mathcal{O}$ consists of one copy of $\mathcal{F}$, one copy of $\mathcal{T}$, and six uniform USOs, all connected in a combed way, but then six specially marked red edges are flipped. Note that by the inductive hypothesis, we can again assume both $\mathcal{F}$ and $\mathcal{T}$ to fulfill the conditions outlined above.

We first prove that $\mathcal{O}$ is a USO: Similarly to the $\forall$ construction, we only need to show that all the red edges are flippable. Again, this follows from the location of the sources and sinks of $\mathcal{F}$ and $\mathcal{T}$. Furthermore, the red edges form a matching, so they can also all be flipped together. Next, we see that the construction preserves acyclicity: We can again decompose $\mathcal{O}$ into subcubes along combed dimensions. In the remaining subcubes, we see that all red edges are incident to sinks or sources, and can thus not be used in directed cycles. It follows that $\mathcal{O}$ is acyclic.
It is again easy to see that $\mathcal{O}$ is combed downwards in dimension $1$, and the global sink and source of $\mathcal{O}$ are located at the minimum and maximum vertex, respectively. Finally, we show that the $1$-edges incident to the minimum and maximum vertices of $\mathcal{O}$ are in phase if and only if this holds for *at least one of* $\mathcal{F}$ and $\mathcal{T}$.\
The "if" direction is again simple, since we can use the in-phaseness sequence through $\mathcal{F}$ and the three lower red and dashed edge pairs, or the in-phaseness sequence through $\mathcal{T}$ and the upper three red and dashed edge pairs.\
For the "only if" direction, we see that all $1$-edges outside of $\mathcal{F}$ and $\mathcal{T}$ are flippable, except the four that are adjacent to a red edge. We can check all pairs of remaining $1$-edges for possibly being in direct phase and verify that only the direct-in-phaseness relations induced by the red and dashed edge pairs and the relations inside of $\mathcal{F}$ and $\mathcal{T}$ are present. Thus, the minimum and maximum $1$-edges of $\mathcal{O}$ can only be in phase if this holds for $\mathcal{F}$ or $\mathcal{T}$.

It only remains to prove that we can build a circuit computing the outmap function $O_0[]$ from the QBF instance in polynomial time. Based on the sequence $q_1,\ldots,q_n$ of quantifiers, we can easily assign the dimensions of $O_0[]$ to the different levels of the recursion (the first three coordinates belong to the base gadgets, and the following coordinates belong to the levels of the recursion in groups of two or three coordinates, depending on whether $q_i=\forall$ or $q_i=\exists$). We can thus easily locate a given vertex $v$ within all levels of the recursive construction. If at some point the vertex is part of a uniform subcube, it does not need to be located on lower levels. Otherwise, the vertex is part of a base gadget $O_n[b_1,\ldots,b_n]$ on the last level of the recursion. Here, we can evaluate $\Phi(b_1,\ldots,b_n)$ (since CNF formulae can be efficiently evaluated by Boolean circuits), and find the orientation of $v$. Thus, $O_0[]$ can be evaluated by a polynomially sized circuit that we can also build in polynomial time, and our reduction is complete. ◻

## Implications

The $\mathsf{PSPACE}$-hardness of 2IP implies that many closely related problems concerning phases are also hard, for example, computing the set of edges in the phase $P$ in which a given edge lies. More surprisingly, its hardness also implies that the problem of *USO completion* is $\mathsf{PSPACE}$-hard. In the USO completion problem, one is given a partially oriented hypercube. This partial orientation is again encoded by a succinct circuit, which computes for each vertex $v$ its *partial outmap* as a function $C:V(Q_n)\rightarrow\{0,1,-\}^n$ where $0$ and $1$ denote incoming and outgoing edges as usual, and "$-$" denotes that the edge is not oriented. The problem is then to decide whether there exists a USO $O$ that agrees with $C$ on all edges that were oriented in $C$.

It is easy to obtain a reduction from 2IP to USO completion: In the dimension of the two input $h$-edges $e,e'$, we make all edges unoriented, except $e$ and $e'$, which are oriented in opposite directions. Clearly, if this partial orientation is completable, $e$ and $e'$ cannot be in phase. If this orientation is not completable, $e$ and $e'$ must be directed in the same way in all possible connections of the two $h$-facets, i.e., they must be in the same $h$-phase.
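As an illustration, here is a minimal sketch (our own encoding choices, not the paper's) of this reduction: given the outmap of a USO and the two $h$-edges, specified by their lower endpoints, it returns the partial outmap $C$ of the USO completion instance, with all $h$-edges unoriented except $e$ and $e'$, which are oriented in opposite directions.

```python
def completion_instance(n, outmap, h, v, w):
    # v and w are the lower endpoints (bit h cleared) of the h-edges e and e'.
    # The returned function maps a vertex u to a string over {'0','1','-'},
    # where '1' marks an outgoing edge, '0' an incoming edge, and '-' an
    # unoriented edge; this concrete encoding is our own choice.
    def partial_outmap(u):
        out = []
        for i in range(n):
            if i != h:
                out.append(str(outmap(u) >> i & 1))         # keep the USO outside dim h
            elif u & ~(1 << h) == v:
                out.append('1' if not u >> h & 1 else '0')  # e oriented upwards
            elif u & ~(1 << h) == w:
                out.append('0' if not u >> h & 1 else '1')  # e' oriented downwards
            else:
                out.append('-')                              # all other h-edges unoriented
        return ''.join(out)
    return partial_outmap
```

Which of the two edges points upwards is an arbitrary choice here; the reduction only requires that $e$ and $e'$ disagree.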
# Conclusion

Since implementations of the USO Markov chain spend most of their time computing phases, our improvement from $O(4^n)$ to $O(3^n)$ vertex comparisons significantly speeds up the generation of random USOs in practice. It is conceivable that phases could be computed even faster, but $\Omega(n2^n)$ serves as a natural lower bound due to the number of edges of $Q_n$. The main open question in this area remains the mixing rate of the Markov chain, and we hope that some of our structural results may serve as new tools towards attacking this problem. Currently, phases seem to be the only somewhat "local" rule to generate all USOs, but it may also prove useful to search for other operations which allow for efficient random sampling.

Some of our results (, , ) (, ) indicate that the phases or direct-in-phaseness relationships of one dimension contain information on the phases in other dimensions. It is also not known how the sizes of the $i$-phases affect the number and sizes of phases in other dimensions, apart from the fact that there are at least $2n$ phases in total. Such interactions between phases of different dimensions call for further study, and might help with phase computation in the future; for example, it may be possible to efficiently deduce the $i$-phases given all the $j$-phases for $j\neq i$.

# Omitted Proofs

##

*Proof of .* We prove this theorem by contradiction. Assume $O$ is a USO with a dimension $i \in dim(f)$ such that $E(f)_i$ is an $i$-phase of $O$, but $f$ is *not* a hypervertex. Thus, there exists a dimension $j \in [n] \setminus dim(f)$ such that for some pair of vertices $v, w \in f$, the orientations of the connecting edges $\{v, v\ominus j \}$ and $\{w, w\ominus j \}$ differ.

First, we switch our focus to the $n'$-dimensional face $f'$ that contains $f$ and for which $dim(f')=dim(f)\cup\{j\}$. Without loss of generality, we assume $f$ is the lower $j$-facet of $f'$. Second, we adjust the orientation of the $i$-edges. We let all $i$-edges in $E(f)$ point upwards and all $i$-edges in $E(f') \setminus E(f)$ point downwards. Since $E(f)_i$ is a phase of $O$, the resulting orientation $O'$ of $f'$ is a USO. For all $\{v, w\} \in E(f)_i$ it holds that $O(v)_j = O(w)_j$, since otherwise the edge $\{v,w\}$ would be in direct phase with the edge $\{v\ominus j,w\ominus j\}$. Thus, the orientation of the $j$-edges splits $E(f)_i$ into two parts: $$\begin{aligned} E(f)_i^+ &:= \{ \{v, w\} \in E(f)_i \;|\; O'(v)_j = O'(w)_j=0 \} \text{ and} \\ E(f)_i^- &:= \{ \{v, w\} \in E(f)_i \;|\; O'(v)_j = O'(w)_j = 1 \}.
\end{aligned}$$ As $E(f)_i$ is a phase, there must be some edge $e^+\in E(f)_i^+$ which is in direct phase with some edge $e^-\in E(f)_i^-$. We now perform a partial swap on $O'$ in dimension $j$, yielding a USO $O''$. The endpoints of $e^+$ are not impacted by this operation, but the endpoints of $e^-$ are moved to the opposite $j$-facet, now forming a new edge ${e'}^{-}$. The two edges $e^+$ and ${e'}^{-}$ must be in direct phase in $O''$, since in dimension $j$ all four of their endpoints are incident to an incoming edge, and in all the other dimensions the outmaps of the endpoints of ${e'}^{-}$ in $O''$ are the same as the outmaps of the endpoints of ${e}^-$ in $O'$.

For every pair of $i$-edges neighboring in dimension $j$, exactly one is upwards and one is downwards oriented. Let $P$ be the phase containing $e^+$ and ${e'}^-$. Since $P$ contains only upwards oriented $i$-edges, and since $P$ contains at least one edge of each $j$-facet, the subgraph of $N_i$ induced by $P$ cannot be connected. See for a sketch of $O'$ and $O''$. This is a contradiction to , which proves the lemma. ◻

![Sketch of $O'$ and $O''$ from the proof of . The edges $e^+\in E(f)_i^+$ and $e^-\in E(f)_i^-$ are pulled apart by the partial swap. If they were in direct phase before, they must still be in direct phase; however, the phase connecting $e^+$ and $e'^-$ after the partial swap cannot be connected.](img/proof_partialswap.pdf "fig:"){#fig:counterExample}

*Proof of .* The proof is exactly the same as for . ◻

##

We prove this lemma with a weaker minimization lemma:

[\[lem:minimization1\]]{#lem:minimization1 label="lem:minimization1"} If there exists a counterexample $(O^*,H^*)$ to [\[lem:schurrbackward\]](#lem:schurrbackward){reference-type="ref" reference="lem:schurrbackward"}, then there also exists a counterexample $(O,H)$ with $dim(O^*)=dim(O)$ in which $H$ does *not* contain a whole phase of $O$.

*Proof.* Let $U$ be the union of all phases $P$ of $O^*$ that are fully contained in the matching, i.e., $P\subseteq H^*$. By , $O:=O^*\otimes U$ is a USO.
We denote by $H$ the set $H^*\setminus U$, which is a matching containing only incomplete sets of phases of $O^*$. We now argue that $(O,H)$ is a counterexample with the desired property. As $H^*$ originally was not a union of phases, $H\not=\emptyset$. Furthermore, $O\otimes H$ is equal to $O^*\otimes H^*$ and thus by assumption a USO. It remains to be proven that $H$ is not a union of phases of $O$, and in particular contains no phase completely.

To do so, we first prove that $U$ is a union of phases in $O$. One can see this by successively flipping in $O^*$ the sets $U_i:=U\cap E_i$ which decompose $U$ into the edges of different dimensions. After flipping each set $U_{i}$, by , all the other sets $U_{j}$ remain unions of phases. Furthermore, as flipping a union of $i$-phases does not change the set of $i$-phases, $U_{i}$ also remains a union of phases. Now, by , any phase $P\subseteq H$ of $O$ is a union of phases in $O\otimes U = O^*$. But then, by definition of $U$, $P$ would have been included in $U$. Thus, we conclude that $H$ does not contain any phase of $O$. ◻

*Proof of .* Let $(O,H)$ be a smallest-dimensional counterexample to among all counterexamples $(O^*,H^*)$ where $H^*$ contains no phase of $O^*$. By , at least one such counterexample must exist, thus $(O,H)$ is well-defined. Let $n$ be the dimension of $O$. We now prove that $H\cap F$ is a union of phases in $F$ for all facets $F$: If this were not the case for some $F$, then constraining $O$ and $H$ to $F$ would yield a counterexample $(O_F,H_F)$ of of dimension $n-1$. By applying , this counterexample can also be turned into an $(n-1)$-dimensional counterexample $(O_F',H_F')$ such that $H_F'$ contains no phase of $O_F'$. This is a contradiction to the definition of $(O,H)$ as the smallest-dimensional counterexample with this property. We conclude that $(O,H)$ is a counterexample with the desired properties.
◻ ## *Proof of .* We first see that every pair $v,w$ of antipodal vertices certifies their incident $n$-edges to be in direct phase, since for all $i<n$, $v_i\oplus v_{i+1}=w_i\oplus w_{i+1}$ and thus $S^n(v)_i=S^n(w)_i$. Next, we show that no $n$-edge is in direct phase with a non-antipodal $n$-edge in the opposite $1$-facet. Let $v$ be any vertex and $w$ a vertex such that $w_{1}\neq v_{1}$, $w_n\neq v_n$, but $v_i=w_i$ for some $i$. Let $i'$ be the minimum among all $i$ with $v_i=w_i$. Note that $i'>1$. Then, we have $v_{i'-1}\neq w_{i'-1}$, but we also have $S^n(v)_{i'-1}=v_{i'-1}\oplus v_{i'} \neq w_{i'-1}\oplus w_{i'}=S^n(w)_{i'-1}$, and thus the $n$-edges incident to $v$ and $w$ are not in direct phase. By a similar argument one can see that two vertices $v$ and $w$ certify their incident $n$-edges to be in direct phase if and only if there exists some integer $1<k\leq n$ such that $v_i=w_i$ for all $i<k$, and $v_i\neq w_i$ for all $i\geq k$. From this, it is easy to see that all $n$-edges are in phase: The $n$-edges in the upper $1$-facet are each in direct phase with some edge in the lower $1$-facet. The lower $1$-facet is structured in the same way as the cube $S^{n-1}$, thus we can inductively see that all $n$-edges in this facet are in phase. Therefore, all $n$-edges of $S^n$ are in phase. ◻
--- abstract: | We study invariants defined by count of charged, elliptic $J$-holomorphic curves in locally conformally symplectic manifolds. We use this to define $\mathbb{Q}$-valued deformation invariants of certain complete Riemann-Finlser manifolds and their isometries and this is used to find some new phenomena in Riemann-Finlser geometry. In contact geometry this Gromov-Witten theory is used to study fixed Reeb strings of strict contactomorphisms. Along the way, we state an analogue of the Weinstein conjecture in lcs geometry, directly extending the Weinstein conjecture, and discuss various partial verifications. A counterexample for a stronger, also natural form of this conjecture is given. address: Faculty of Science, University of Colima, Mexico author: - Yasha Savelyev bibliography: - "C:/Users/yasha/texmf/bibtex/bib/link.bib" title: Elliptic curves in lcs manifolds and metric invariants --- # Introduction The study of $J$-holomorphic curves in symplectic manifolds was initiated by Gromov  [@cite_GromovPseudo]. In that work and since then it have been rational $J$-holomorphic curves that were central in the subject. We study here certain Gromov-Witten type theory of $J$-holomorphic elliptic curves in locally conformally symplectic manifolds, for short lcs manifolds. For lcs manifolds it appears that instead elliptic (and possibly higher genus) curves are central. One explanation for this is that rational $J$-curves in an lcs manifold $M$ have $\widetilde{J}$-holomorphic lifts to the universal cover $\widetilde{M}, \widetilde{J}$, where the form is globally conformally symplectic. Hence, rational Gromov-Witten theory is a priori insensitive to the information carried by the Lee form (Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}), although it can still be useful  [@cite_SavelyevLCSnonsqueezing]. We will present various applications for these elliptic curve counts in contact dynamics and for metric invariants of Riemann-Finlser manifolds. We choose to start the discussion with applications rather than theory, as the latter requires certain buildup. We start with Riemann-Finlser geometry, and this story can be understood as a generalization of the theory in  [@cite_SavelyevGromovFuller], which focuses on more elementary geodesic counts. In what follows for a manifold $X$ the topology on various functional spaces is usually the topology of $C ^{0}$ convergence on compact subsets of $X$, unless specified otherwise. $\pi _{1} (X)$ will denote the set of free homotopy classes of continuous maps $o: S ^{1} \to X$. We recall some definitions from  [@cite_SavelyevGromovFuller]. **Definition 1**. *Let $X$ be a smooth manifold. Fix an exhaustion by nested compact sets $\bigcup _{i \in \mathbb{N}} K _{i} = X$, $K _{i} \supset K _{i-1}$ for all $i \geq 1$. We say that a class $\beta \in \pi _{1} (X)$ is ***boundary compressible*** if $\beta$ is in the image of $$inc _{*}: \pi _{1}(X - K _{i}) \to \pi _{1} (X)$$ for all $i$, where $inc: X - K _{i} \to X$ is the inclusion map. We say that $\beta$ is ***boundary incompressible*** if it is not boundary compressible.* Let $\pi _{1} ^{inc} (X)$ denote the set of such boundary incompressible classes. When $X$ is compact, we set $\pi _{1} ^{inc} (X) := \pi _{1} (X) - const$, where $const$ denotes the set of homotopy classes of constant loops. **Terminology 1**. All our metrics are Riemann-Finsler metrics unless specified otherwise, and usually denoted by just $g$. 
Completeness, always means forward completeness, and is always assumed, although we usually explicitly state this. Curvature always means sectional curvature in the Riemannian case and flag curvature in the Finsler case. Thus we will usually just say complete metric $g$, for a forward complete Riemann-Finsler metric. A reader may certainly choose to interpret all metrics as Riemannian metrics and completeness as standard completeness. Denote by $L _{\beta } X$ the class $\beta \in \pi _{1} % TODO: {Lbeta} ^{inc}(X)$ component of the free loop space of $X$, with its compact open topology. Let $g$ be a complete metric on $X$, and let $S (g, \beta) \subset L _{\beta } X$ denote the subspace of all unit speed parametrized, closed $g$-geodesics in class $\beta$. The elements of $\mathcal{O} (g, \beta ) =S (g, \beta) / S ^{1}$ will be called ***geodesic strings***. A geodesic string will be called ***non-degenerate*** if the corresponding $S ^{1}$ family of geodesics is Morse-Bott non-degenerate. (Equivalently, the corresponding Reeb orbit in the unit cotangent bundle is non-degenerate). **Definition 1**. *We say that a metric $g$ on $X$ is ***$\beta$-taut*** if $g$ is complete and $S (g, \beta )$ is compact. We will say that $g$ is ***taut*** if it is $\beta$-taut for each $\beta \in \pi _{1} ^{inc} (X)$.* As shown in  [@cite_SavelyevGromovFuller], a basic example of a taut metric is a complete metric with non-positive curvature, or more generally a complete metric all of whose boundary incompressible geodesics are minimizing in their homotopy class. Other substantial classes of examples are constructed in  [@cite_SavelyevGromovFuller]. Overall, the class of taut metrics is large and flexible. However, it appears it has not been extensively studied, (or possibly even explicitly defined). **Definition 1**. *Let $\beta \in \pi_{1} ^{inc}(X)$, and let $g _{0}, g _{1}$ be a pair of $\beta$-taut metrics on $X$. A ***$\beta$-taut deformation (or homotopy)*** between $g _{0}, g _{1}$, is a continuous (in the topology of $C ^{0}$ convergence on compact sets) family $\{g _{t}\}$, $t \in [0,1]$ of complete metrics on $X$, s.t. $$S (\{g _{t}\}, \beta ) := \{(o,t) \in L _{\beta }X \times [0,1] \,|\, o \in S (g _{t}, \beta )\}$$ is compact. We say that $\{g _{t}\}$ is a ***taut deformation*** if it is $\beta$-taut for each $\beta \in \pi _{1} ^{inc}(X)$.* As shown in  [@cite_SavelyevGromovFuller], the $\beta$-tautness condition is trivially satisfied if $g _{t}$ have the property that all their class $\beta$ closed geodesics are minimal. In particular if $g _{t}$ have non-positive curvature then $\{g _{t}\}$ is taut by the Cartan-Hadamard theorem,  [@cite_ChernBook]. Let $\mathcal{E} (X)$ be set of equivalence classes of tuples $$\{(g, \phi) \,|\, \text{$g$ is a taut and $\phi $ is an isometry of $g$} \},$$ where $(g _{0}, \phi _{0})$ is equivalent to $(g _{1}, \phi _{1})$ whenever there is a Frechet smooth homotopy $\{(g _{t}, \phi _{t})\} _{t \in [0,1]}$, s.t. for each $t$ $\phi _{t}$ is an isometry of $g _{t}$, and $\{g _{t}\}$ is a taut homotopy. Let us call such a $\{(g _{t}, \phi _{t})\} _{t \in [0,1]}$ an ***$\mathcal{E}$-homotopy*** for future use. By counting certain charged elliptic curves in a lcs manifold associated to $(g, \phi)$ we define in Section [4.3](#sec:Definition of GWF){reference-type="ref" reference="sec:Definition of GWF"} a functional: **Theorem 1**. 
*For each manifold $X$, there is a natural, (generally) non-trivial functional $$\operatorname {GWF}: \mathcal{E} (X) \times \pi _{1} ^{inc} (X) \times \mathbb{N} \to \mathbb{Q}.$$* $\operatorname {GWF}$ stands for Gromov-Witten-Fuller, as when $\phi=id$ in $\operatorname {GWF}(g, \phi, \beta, n)$ or when $n=0$ the invariant reduces to a certain geodesic counting invariant studied in  [@cite_SavelyevFuller], and in this case such counts can be defined purely using Fuller's theory  [@cite_FullerIndex]. To understand what this functional is counting in general we first define: **Definition 1**. *Let $\phi$ be an isometry of $X,g$. Then a ***charge $n$ fixed geodesic string*** of $\phi$ is a closed geodesic $o$ whose image is fixed by $\phi ^{n}$. That is $\mathop{\mathrm{\mathrm{image}}}o = \mathop{\mathrm{\mathrm{image}}}\phi ^{n} \circ o$. If the charge is not specified it is assumed to be one. We say that such a fixed string is in class $\beta$ if the class of $o$ is $\beta$.* We will see that if $\operatorname {GWF} (g, \phi, \beta, n) \neq 0$ then there is a charge $n$ fixed $g$-geodesic string of $\phi$ in class $\beta$. *Remark 1*. Moreover, $\operatorname {GWF} (g, \phi, \beta, n)$ is in fact the "count" of the latter fixed geodesic strings, if by count we mean evaluating the fundamental class of a certain compact virtual dimension zero Kuranishi space with orbifold points. This will be explained once we construct the functional as a Gromov-Witten invariant in Section [4.3](#sec:Definition of GWF){reference-type="ref" reference="sec:Definition of GWF"}. Here are some basic related phenomena. Let $\beta \in \pi _{1} ^{inc} (X)$ be not a power class (see  [@cite_SavelyevGromovFuller Definition 1.7]), and suppose that $X$ admits a $\beta$-taut metric $g$, then by Theorem  [@cite_SavelyevGromovFuller Theorem 1.10] the $S ^{1}$ equivariant homology $H _{*} ^{S ^{1}}(L _{\beta}X, \mathbb{Z})$ is finite dimensional. In this case we denote by $\chi ^{S ^{1}}(L _{\beta }X)$ its Euler characteristic. **Theorem 1**. *Let $\beta \in \pi _{1} ^{inc} (X)$ be not a power class, and suppose that $X$ admits a $\beta$-taut metric. Suppose further that $\chi ^{S ^{1}}(L _{\beta }X) \neq 0$. Then for any $\beta$-taut $g$ on $X$, any isometry $\phi$ of $g$, in the component of the $id$, has a charge one fixed geodesic string in class $\beta$.* In the next couple of corollaries, we need that the manifold $X$ admits a complete metric of negative curvature. We also need that there is a class $\beta \in \pi _{1} ^{inc} (X)$. Notably, this condition is false for $\mathbb{R} ^{n}$. *Question 1*. Suppose that $X$ is a non simply connected manifold that admits a complete metric of negative curvature, does one necessarily have $\pi _{1} ^{inc} (X) \neq \emptyset$? The answer can be shown to be yes in special cases. A simple case, any hyperbolic, genus at least one, possibly infinite type Riemann surface satisfies $\pi _{1} ^{inc} (X) \neq \emptyset$. A three dimensional example: take $X$ to be the mapping torus by a pseudo-anosov diffeomorphism of a surface that is the interior of a compact surface with boundary, with genus at least two. $X$ admits a hyperbolic metric by Thurston's classification  [@cite_ThurstonClassificationSurfaceDiffeo] and the geometrization program and it satisfies $\pi _{1} ^{inc} (X) \neq \emptyset$. **Corollary 1**. *Suppose that $X$ admits a complete metric of negative curvature, and there is a class $\beta \in \pi _{1} ^{inc} (X)$. 
Then for any other complete non-positively curved metric $g$ on $X$, any isometry $\phi$ of $g$ in the component of the $id$, has a charge one, class $\beta$ fixed geodesic string.* Theorem [Theorem 1](#theorem_pertubHyperbolic){reference-type="ref" reference="theorem_pertubHyperbolic"} deals with isometries homotopic to the $id$, as such it is interesting only in dimension three and higher as the topology of surfaces, admitting metrics with continuous isometry groups is extremely restricted. The following theorem is about general isometries. **Theorem 1**. *Suppose that $X$ is a manifold and:* - *There is an $\mathcal{E}$-homotopy $\{(g _{t}, \phi _{t})\}$ on $X$.* - *$g _{0}$ has a unique and non-degenerate geodesic string in class $\beta \in \pi _{1} ^{inc} (X)$.* - *$\phi _{0,*} ^{n} (\beta) = \beta,$ for $n \in \mathbb{N}$.* *Then there is a class $\beta$, charge $n$, fixed $g _{1}$-geodesic string of $\phi _{1}$.* The following is an immediate corollary, note that it is non-trivial even for (infinite type) surfaces. **Corollary 1**. *Suppose that $X$ is a manifold and:* - *There is a Frechet smooth homotopy $\{(g _{t}, \phi _{t})\} _{t \in [0,1]}$, s.t. $g _{t}$ are complete metrics on $X$ and have non-positive curvature.* - *$\phi _{t}$ is an isometry of $g _{t}$ for each $t$.* - *$g _{0}$ has negative curvature.* *Then for each $\beta \in \pi _{1} ^{inc} (X)$ s.t. $\phi _{0,*} ^{n} (\beta) = \beta$ there is a class $\beta$, charge $n$, fixed $g _{1}$-geodesic string of $\phi _{1}$.* There is a partially related theory of $\phi$-invariant geodesics. The latter are geodesics $\gamma$ satisfying $\gamma (1) = \phi (\gamma (0) )$ for some isometry $\phi$ of $X,g$. (These are also analogous to translated points of contactomorphisms mentioned ahead.) A charge 1 fixed geodesic string of $\phi$ clearly determines a circle family of closed $\phi$-invariant geodesics. On the other hand as these $\phi$-invariant geodesics are not required to be closed, if we fix a $\phi$ they can be shown exist under very general conditions using Morse theory, Grove [@cite_GroveGeodesics]. ## Contact dynamics, fixed Reeb strings and more applications to isometries {#sec:Fixed Reeb strings} Let $(C ^{2n+1}, \lambda )$ be a contact manifold with $\lambda$ a contact form, that is a one form s.t. $\lambda \wedge (d \lambda) ^{n} \neq 0$. Denote by $R ^{\lambda}$ the Reeb vector field satisfying: $$d\lambda (R ^{\lambda}, \cdot ) = 0, \quad \lambda (R ^{\lambda}) = 1.$$ We assume throughout that its flow is complete. Recall that a ***closed $\lambda$-Reeb orbit*** (or just Reeb orbit when $\lambda$ is implicit) is a smooth map $$o: (S ^{1} = \mathbb{R} / \mathbb{Z}) \to C$$ such that $$\dot o (t) = c R ^ {\lambda} (o (t)),$$ with $\dot o (t)$ denoting the time derivative, for some $c>0$ called period. Let $S (R ^{\lambda }, \beta )$ denote the space of all closed Reeb orbits in free homotopy class $\beta$, with its compact open topology. And set $$\mathcal{O} (R ^{\lambda}, \beta ) := S (R ^{\lambda }, \beta )/S ^{1},$$ where $S ^{1}$ is acting naturally by reparametrization, see Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}. We say that the action spectrum is ***discrete*** if the image of the period map $A: S (R ^{\lambda }, \beta ) \to \mathbb{R} ^{}$, $o \mapsto \int _{S ^{0}} o ^{*} \lambda$ is discrete. **Definition 1**. *Let $\phi: (C, \lambda ) \to (C, \lambda)$ be a strict contactomorphism of a contact manifold. 
Then a ***fixed Reeb string of $\phi$*** is a closed $\lambda$-Reeb orbit $o$ whose image is fixed by $\phi$. We say that it is in class $\beta$ if the free homotopy class of $o$ is $\beta$.* **Definition 1**. *Assuming that the class $\beta$ is non-torsion [^1], we say that $(C, \lambda)$ is ***infinite type*** for class $\beta$ if the action spectrum of $\lambda$ is discrete and there is a Reeb perturbation $X$ of the vector field $\mathbb{R} ^{\lambda}$ (in a certain natural sense,  [@cite_SavelyevFuller Definition 2.6]), s.t. all but finitely many class $\beta$ orbits of $X$ have even Conley-Zehnder index or or all but finitely many orbits of $X$ have odd Conley-Zehnder index.* A typical example of infinite type is the standard contact form $\lambda _{st}$ on $S ^{2k+1}$, as shown in  [@cite_SavelyevFuller]. **Definition 1**. *We say that $(C, \lambda )$ is ***finite type*** for class $\beta$ if $\mathcal{O} (R ^{\lambda }, \beta )$ is compact. And we say that it is ***finite non-zero type*** if in addition $i (R ^{\lambda}, \beta ) \neq 0$, (the Fuller index, see Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}).* We have already seen basic examples coming from unit cotangent bundles of non-positively curved manifolds. We say that $(C, \lambda )$ is ***definite type*** (for class $\beta$) if it is either finite non-zero type or infinite type. **Theorem 1**. *Let $(C, \lambda )$ be a contact manifold of definite type for class $\beta$ orbits, then every strict contactomorphism $\phi$ of $(C, \lambda )$, homotopic to the $id$ via strict contactomorphisms, has a fixed Reeb string in class $\beta$. Furthermore, the same holds for every $\lambda'$ sufficiently $C ^{1}$ nearby to $\lambda$. In particular, for any contact form $\lambda$ on $S ^{2k+1}$, sufficiently $C ^{1}$ nearby to $\lambda _{st}$, any strict contactomorphism $\phi$ of $(C, \lambda )$ homotopic to the $id$ via strict contactomorphisms has a fixed Reeb string.* There is a partial connection of the theorem with the theory of translated points. **Definition 1** (Sandon [@cite_SandonSheila]). *Given a (not necessarily strict) contactomorphism $\phi$ of $(C, \lambda )$, a point $p \in C$ is called a ***translated point*** provided that $\phi ^{*} \lambda (p) = \lambda (p)$ and $\phi (p)$ lies on the $\lambda$-Reeb flow line passing through $p$.* A fixed Reeb string for $\phi$ in particular determines a special translated point of $\phi$ (one for each point on the image of the fixed Reeb string). So the above theorem is partly related to the Sandon conjecture  [@cite_SandonSheila] on existence of translated points of contactomorphisms. However, also note that the general form of Sandon's conjecture has counterexamples on $S ^{2k+1}$ for the standard contact form $\lambda _{st}$, see Cant [@cite_cant2022contactomorphisms]. Partially related to the Sandon conjecture is the Conjecture [Conjecture 1](#conj:conformalWeinstein){reference-type="ref" reference="conj:conformalWeinstein"} in Section [3](#sec:Results on Reeb 2-curves){reference-type="ref" reference="sec:Results on Reeb 2-curves"}, which is an analogue in lcs geometry of the Weinstein conjecture. **Corollary 1**. *Let $X,g$ be complete, with a class $\beta \in \pi _{1} ^{inc} (X)$, and such that its unit cotangent bundle is definite type for class $\widetilde{\beta}$, (defined as in Section [4.3](#sec:Definition of GWF){reference-type="ref" reference="sec:Definition of GWF"}). 
Then every isometry of $X,g$ homotopic through isometries to the $id$ has a class $\beta$ fixed geodesic string.* **Theorem 1**. *Suppose that $(C, \lambda)$ is Morse-Bott and some connected component $N \subset \mathcal{O} (R ^{\lambda}, \beta)$ has non-vanishing Euler characteristic. Then any contact form $\lambda'$ on $C$, sufficiently $C ^{1}$ nearby to $\lambda$, any strict contactomorphism $\phi$ of $(C, \lambda')$, homotopic to the $id$ via strict contactomorphisms has class $\beta$ fixed Reeb string.* Both of the theorems above are actually special cases of the next theorem proved in Section [7](#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture){reference-type="ref" reference="sec:Proofs of theorems on Conformal symplectic Weinstein conjecture"}. For more details on the Fuller index see Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}. Let $\lambda$ be a contact form on a closed manifold $C$, $N \subset \mathcal{O} (R ^{\lambda }, \beta )$ and let $i (N, R ^{\lambda}, \beta) \in \mathbb{Q}$ denote the Fuller index. For example, if $\lambda$ is Morse-Bott (see  [@cite_FredericBourgeois]) and $N$ is a connected component of $\mathcal{O} (R ^{\lambda}, \beta)$ then by a computation in  [@cite_SavelyevFuller Section 2.1.1] $i (R ^{\lambda }, N, \beta) \neq 0$ if $\chi (N) \neq 0$ (the Euler characteristic). **Theorem 1**. *Let $(C, \lambda)$ be a contact manifold satisfying the condition: $i (N, R ^{\lambda}, \beta) \neq 0$, for some open compact $N \subset \mathcal{O} (R ^{\lambda }, \beta)$. Then any strict contactomorphism $\phi: (C, \lambda) \to (C, \lambda)$, homotopic to the $id$ via strict contactomorphisms has a fixed Reeb string $o$ in class $\beta$ and moreover $o \in N$.* We have already mentioned that the index assumption of the theorem holds for Morse-Bott contact forms $\lambda$, provided the Euler characteristic of some component of $N \subset \mathcal{O} (R ^{\lambda })$ is non-vanishing. We may take for instance the standard contact form $\lambda _{st}$ on $S ^{2k+1}$, the unit contangent bundle of the sphere, or see Bourgeois [@cite_FredericBourgeois] for more examples. In this Morse-Bott case the theorem may be verified by elementary considerations. To see this suppose we have a connected component $N \subset \mathcal{O} (R ^{\lambda})$ with $\chi (N) \neq 0$. Then $\phi$ as above induces a topological endomorphism $\widetilde{\phi}$ of $N$ with non-zero Lefschetz number, so that in this case the result follows by the Lefschetz fixed point theorem. In general a compact open component $N \subset \mathcal{O} (R ^{\lambda})$ may not be a finite simplicial complex, or indeed any kind of topological space to which the classical Lefschetz fixed point theorem may apply. Also the relationship of $i (N, R ^{\lambda}, \beta)$ with $\chi (N)$ breaks down in general as $i (N, R ^{\lambda}, \beta)$ is partly sensitive to the dynamics of $R ^{\lambda }$. The following is a variation of Theorem [Theorem 1](#theorem_pertubHyperbolic){reference-type="ref" reference="theorem_pertubHyperbolic"} in the absence of the condition that $\beta$ be not a power, and removing all assumptions on the metric except completeness. This is proved in Section [7](#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture){reference-type="ref" reference="sec:Proofs of theorems on Conformal symplectic Weinstein conjecture"}. **Theorem 1**. 
*Let $X$ admit a complete metric with a unique and non-degenerate geodesic string in class $\beta \in \pi _{1} ^{inc} (X)$. Then one of the following alternatives holds:* 1. *Sky catastrophes for families of Reeb vector fields exist, and the sky catastrophe can be essential, see Definition [Definition 1](#def:bluesky){reference-type="ref" reference="def:bluesky"}. [\[alt:0\]]{#alt:0 label="alt:0"}* 2. *For any complete metric $g$ on $X$ and every isometry $\phi$ of $X,g$ homotopic through isometries to the identity, $\phi$ has a charge 1 fixed geodesic string in class $\beta$.* ## Conformal symplectic Weinstein conjecture {#sec_Conformal symplectic Weinstein conjecture} We introduce in Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} certain analogues of Reeb orbits for lcs manifolds. In particular, we define a unifying concept of a Reeb 2-curve on which most of the subsequent theory is based. This leads us to state one analogue in lcs geometry of the classical Weinstein conjecture, and we discuss certain partial verifications. We also state in this section an important counterexample for a stronger, but also natural form of the lcs Weinstein conjecture. ## Organization {#sec_Organization} The main theorems are proved in Section [7](#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture){reference-type="ref" reference="sec:Proofs of theorems on Conformal symplectic Weinstein conjecture"}. Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} presents detailed preliminaries for lcs geometry, which should make this paper self contained and accessible to a general reader. Section [4](#sec:gromov_witten_theory_of_the_lcs_c_times_s_1_){reference-type="ref" reference="sec:gromov_witten_theory_of_the_lcs_c_times_s_1_"} defines the Gromov-Witten invariant $\operatorname {GWF}$, which is central to the applications in Riemann-Finlser geometry. # Background and preliminaries {#sec:preliminaries} **Definition 1**. *A locally conformally symplectic manifold or just an $\mathop{\mathrm{lcs}}$ manifold, is a smooth $2n$-fold $M$ with an $\mathop{\mathrm{lcs}}$ structure: which is a non-degenerate 2-form $\omega$, with the property that for every $p \in M$ there is an open $U \ni p$ such that $\omega| _{U} = f _{U} \cdot \omega _{U}$, for some symplectic form $\omega _{U}$ defined on $U$ and some smooth positive function $f _{U}$ on $U$.* These kinds of structures were originally considered by Lee in [@cite_Lee], arising naturally as part of an abstract study of "a kind of even dimensional Riemannian geometry", and then further studied by a number of authors see for instance, [@cite_BanyagaConformal] and [@cite_VaismanConformal]. An $\mathop{\mathrm{lcs}}$ manifold admits all the interesting classical notions of a symplectic manifold, like Lagrangian submanifolds and Hamiltonian dynamics, while at the same time forming a much more flexible class. For example Eliashberg and Murphy show that if a closed almost complex $2n$-fold $M$ has $H ^{1} (M, \mathbb{R}) \neq 0$ then it admits a $\mathop{\mathrm{lcs}}$ structure, [@cite_EliashbergMurphyMakingcobordisms]. Another result of Apostolov, Dloussky [@cite_ApostolovStructures] is that any complex surface with an odd first Betti number admits a $\mathop{\mathrm{lcs}}$ structure, which tames the complex structure. 
To see the connection with the first cohomology group $H ^{1} (M, \mathbb{R} ^{} )$, mentioned above, let us point out right away the most basic invariant of a $\mathop{\mathrm{lcs}}$ structure $\omega$, when $M$ has dimension at least 4. This is the Lee class, $\alpha = \alpha _{\omega} \in H ^{1} (M, \mathbb{R})$. This class has the property that on the associated $\alpha$-covering space (see proof of Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"}) $\widetilde{M}$, the lift $\widetilde{\omega}$ is globally conformally symplectic. Thus, an $\mathop{\mathrm{lcs}}$ form is globally conformally symplectic, that is diffeomorphic to $e ^{f} \cdot \omega'$, with $\omega'$ symplectic, iff its Lee class vanishes. Again assuming $M$ has dimension at least 4, the Lee class $\alpha$ has a natural differential form representative, called the Lee form, which is defined as follows. We take a cover of $M$ by open sets $U _{a}$ in which $\omega= e ^{f _{a}} \cdot \omega _{a}$ for $\omega _{a}$ symplectic. Then we have 1-forms $d (f _{a} )$ on each $U _{a}$, which glue to a well-defined closed 1-form on $M$, as shown by Lee. We may denote this 1-form and its cohomology class both by $\alpha$. It is moreover immediate that for an $\mathop{\mathrm{lcs}}$ form $\omega$, $$d\omega= \alpha \wedge \omega,$$ for $\alpha$ the Lee form as defined above. As we mentioned $\mathop{\mathrm{lcs}}$ manifolds can also be understood to generalize contact manifolds. This works as follows. First we have a class of explicit examples of $\mathop{\mathrm{lcs}}$ manifolds, obtained by starting with a symplectic cobordism (see [@cite_EliashbergMurphyMakingcobordisms]) of a closed contact manifold $C$ to itself, arranging for the contact forms at the two ends of the cobordism to be proportional and then gluing the boundary components, (after a global conformal rescaling of the form on the cobordism, to match the boundary conditions). **Terminology 2**. For us a contact manifold is a pair $(C,\lambda)$ where $C$ is a closed manifold and $\lambda$ a contact form: $\forall p \in C: \lambda \wedge \lambda ^{2n} (p) \neq 0$. This is not a completely common terminology as usually it is the equivalence class of $(C,\lambda)$ that is called a contact manifold, where $(C,\lambda) \sim (C, \lambda')$ if $\lambda=f \lambda'$ for $f$ a positive function. (Given that the contact structure, in the classical sense, is co-oriented.) A ***contactomorphism*** between $(C _{1}, \lambda _{1})$, $(C _{2}, \lambda _{2})$ is a diffeomorphism $\phi: C _{1} \to C _{2}$ s.t. $\phi ^{*} \lambda _{2} = f \lambda _{1}$ for some $f>0$. It is called ***strict*** if $\phi ^{*} \lambda _{2} = \lambda _{1}$. A concrete basic example, which can be understood as a special case of the above cobordism construction, is the following. *Example 1* (Banyaga). Let $(C, \lambda)$ be a contact manifold, $S ^{1} = \mathbb{R} / \mathbb{Z}$, $d \theta$ the standard non-degenerate 1-form on $S ^{1}$ satisfying $\int _{S ^{1}} d \theta = 1$. And take $M=C \times S ^{1}$ with the 2-form $$\omega _{\lambda} = d_{\alpha} \lambda : = d \lambda - \alpha \wedge \lambda,$$ for $\alpha: = pr _{S ^{1} } ^{*} d\theta$, $pr _{S ^{1} }: C \times S ^{1} \to S ^{1}$ the projection, and $\lambda$ likewise the pull-back of $\lambda$ by the projection $C \times S ^{1} \to C$. We call $(M,\omega _{\lambda} )$ as above the ***lcs-fication*** of $(C,\lambda)$. 
This is also a basic example of a first kind lcs manifold, as in Definition [Definition 1](#def:firstkind){reference-type="ref" reference="def:firstkind"} ahead. The operator $$\label{eq:Lichnerowicz} d_{\alpha}: \Omega ^{k} (M) \to \Omega ^{k+1} (M), \quad d _{\alpha} \eta = d \eta - \alpha \wedge \eta,$$ is called the Lichnerowicz differential with respect to a closed 1-form $\alpha$; since $\alpha$ is closed it satisfies $d_{\alpha} \circ d_{\alpha} =0$, so that we have an associated Lichnerowicz chain complex. **Definition 1**. *An ***exact lcs form*** on $M$ is an lcs 2-form $\omega$ s.t. there exists a pair of 1-forms $(\lambda, \alpha)$, with $\alpha$ closed, s.t. $\omega=d_{\alpha} \lambda$ is non-degenerate. In this case we also call the pair $(\lambda, \alpha)$ ***an exact lcs structure***. The triple $(M, \lambda, \alpha )$ will be called an ***exact lcs manifold***, but we may also call $(M, \omega )$ an exact lcs manifold when $(\lambda, \alpha )$ are implicit.* An exact lcs structure determines a generalized distribution $\mathcal{V} _{\lambda}$ on $M$: $$\mathcal{V} _{\lambda} (p) = \{v \in T _{p} {M} \,|\, d \lambda (v, \cdot) = 0 \},$$ which we call the ***vanishing distribution***. We also define a generalized distribution $\xi _{\lambda}$ that is the $\omega$-orthogonal complement to $\mathcal{V} _{\lambda}$, which we call the ***co-vanishing distribution***. For each $p \in M$, $\mathcal{V} _ {\lambda} (p)$ has dimension at most 2 since $d\lambda - \alpha \wedge \lambda$ is non-degenerate. If $M ^{2n}$ is closed, $\mathcal{V} _{\lambda}$ cannot vanish identically: otherwise $d\lambda$ would be non-degenerate, and the exact form $(d\lambda) ^{n}$ cannot be a volume form on a closed manifold by Stokes' theorem. **Definition 1**. *Let $(\lambda, \alpha)$ be an exact lcs structure on $M$. We call $\alpha$ integral, rational or irrational if its periods are integral, respectively rational, respectively irrational. We call the structure $(\lambda, \alpha)$ ***scale integral*** if $c \alpha$ is integral for some $0 \neq c \in \mathbb{R}$. Otherwise we call the structure ***scale irrational***. If $\mathcal{V} _{\lambda}$ is non-zero at each point of $M$, so that in particular it is a smooth 2-distribution, then such a structure is called ***first kind***. If $\omega$ is an exact lcs form then we call $\omega$ integral, rational, irrational, or first kind if there exist $\lambda, \alpha$ s.t. $\omega = d _{\alpha } \lambda$ and $(\lambda, \alpha )$ is integral, respectively rational, respectively irrational, respectively first kind. We define scale integral and scale irrational $\omega$ similarly.* **Definition 1**. *A ***conformal symplectomorphism*** of lcs manifolds $\phi: (M _{1}, \omega _{1}) \to (M _{2}, \omega _{2})$ is a diffeomorphism $\phi$ s.t. $\phi ^{*} \omega _{2} = e ^{f} \omega _{1}$, for some $f$. Note that in this case we have an induced relation (when $M _{1}$, $M _{2}$ have dimension at least 4): $$\phi ^{*} \alpha _{2} = \alpha _{1} + df,$$ where $\alpha _{i}$ is the Lee form of $\omega _{i}$. If $f=0$ we call $\phi$ a ***symplectomorphism***. A (conformal) symplectomorphism of exact lcs structures $(\lambda _{1}, \alpha _{1})$, $(\lambda _{2}, \alpha _{2})$ on $M _{1}$ respectively $M _{2}$ is a (conformal) symplectomorphism of the corresponding $\mathop{\mathrm{lcs}}$ 2-forms. If a diffeomorphism $\phi: M _{1} \to M _{2}$ satisfies $\phi ^{*} \lambda _{2} = \lambda _{1}$ and $\phi ^{*} \alpha _{2} = \alpha _{1}$ we call it an ***isomorphism*** of the exact lcs structures.
This is analogous to a strict contactomorphism of contact manifolds.* To summarize, with the above notions we have the following basic points whose proof is left to the reader: 1. An isomorphism of exact lcs structures $(\lambda _{1}, \alpha _{1})$, $(\lambda _{2}, \alpha _{2})$ preserves the first kind condition, and moreover preserves the corresponding vanishing distributions. 2. A symplectomorphism of lcs forms preserves the first kind condition. [\[item:preserves\]]{#item:preserves label="item:preserves"} 3. A (conformal) symplectomorphism of exact lcs structures generally does not preserve the first kind condition. (Contrast with [\[item:preserves\]](#item:preserves){reference-type="ref" reference="item:preserves"}.) 4. A (conformal) symplectomorphism of first kind lcs structures generally does not preserve the vanishing distributions. (Similar to a contactomorphism not preserving Reeb distributions.) 5. A conformal symplectomorphism of lcs forms and exact lcs structures preserves the rationality, integrality, scale integrality conditions. *Remark 1*. We say that $\omega _{0}$ is conformally equivalent to $\omega _{1}$ if $\omega _{1}= e ^{f} \omega _{0}$, i.e. the identity map is a conformal symplectomorphism $id: (M, \omega _{0}) \to (M, \omega _{1})$. It is important to note that for us the form $\omega$ is the structure not its conformal equivalence class, as for some authors. In other words conformally equivalent structures on a given manifold determine distinct but isomorphic objects of the category, whose objects are lcs manifolds and morphisms conformal symplectomorphisms. *Example 2*. One example of an $\mathop{\mathrm{lcs}}$ structure of the first kind is a mapping torus of a strict contactomorphism, see Banyaga [@cite_BanyagaConformal]. The mapping tori $M _{\phi,c}$ of a strict contactomorphism $\phi$ of $(C, \lambda )$ fiber over $S ^{1}$, $$\pi _{}: C \hookrightarrow M _{\phi,c} \to S ^{1},$$ with Lee form of the type $\alpha =c\pi ^{*}(d \theta)$, for some $0 \neq c \in \mathbb{R} ^{}$. In particular, these are scale integral first kind lcs structures. Moreover we have: **Theorem 1** (Only reformulating Bazzoni-Marrero [@cite_BazzoniFirstKind]). *A first kind lcs structure $(\lambda, \alpha)$ on a closed manifold $M$ is isomorphic to the mapping torus of a strict contactomorphism if and only if it is scale integral.* The (scaled) integrality condition is of course necessary since the Lee form of a mapping torus of a strict contactomorphism will have this property. Thus we may understand scale irrational first kind lcs structures as first (and rather dramatic) departures from the world of contact manifolds into a brave new lcs world. *Remark 1*. Note that scale irrational first kind structures certainly exist. A simple example is given by taking $\lambda, \alpha$ to be closed scale irrational 1-forms on $T ^{2}$ with transverse kernels. Then $\omega = \lambda \wedge \alpha$ is a scale irrational first kind structure on $T ^{2}$. In particular $(\lambda, \alpha )$ cannot be a mapping torus of a strict contactomorphism even up to a conformal symplectomorphism. In general, on a closed manifold we may always perturb a (first kind) scale integral lcs structure to a (first kind) scale irrational one. The examples of the present paper deal with deformations of this sort. ## Reeb 2-curves {#sec:highergenus} **Definition 1**. *Let $(M,\lambda, \alpha)$ be an exact $\mathop{\mathrm{lcs}}$ structure and $\omega= d _{\alpha} \lambda$. 
Define $X _{\lambda}$ by $\omega (X _{\lambda}, \cdot) = \lambda$ and $X _{\alpha}$ by $\omega (X _{\alpha}, \cdot) = \alpha$. Let $\mathcal{D}$ denote the (generalized) distribution spanned by $X _{\alpha }, X _{\lambda }$, meaning $\mathcal{D} (p) := \mathop{\mathrm{span}}(X _{\alpha} (p) , X _{\lambda }(p))$. This will be called the ***canonical distribution***.* The (generalized) distribution $\mathcal{D}$ is one analogue for exact lcs manifolds of the Reeb distribution on contact manifolds. A Reeb 2-curve, as defined ahead, will be a certain kind of singular leaf of $\mathcal{D}$, and so is a kind of 2-dimensional analogue of a Reeb orbit. *Example 3*. The simplest example of a Reeb 2-curve in an exact lcs $(M, \lambda, \alpha )$, in the case $\mathcal{D}$ is a true 2-dimensional distribution (for example if $(\lambda, \alpha )$ is first kind), is a closed immersed surface $u: \Sigma \to M$ tangent to $\mathcal{D}$. However, it will be necessary to consider more general curves. **Definition 1**. *Let $\Sigma$ be a closed nodal Riemann surface (the set of nodes can be empty). Let $u: \Sigma \to M$ be a smooth map and let $\widetilde{u}: \widetilde{\Sigma} \to M$ be its normalization (see Definition [Definition 1](#def:normalization){reference-type="ref" reference="def:normalization"}). We say that $u$ is a ***Reeb 2-curve*** in $(M,\lambda,\alpha)$, if the following is satisfied:* 1. *For each $z \in \widetilde{\Sigma}$, $\widetilde{u} _{*} (T _{z} \widetilde{\Sigma } ) = \mathcal{D} (\widetilde{u} (z) )$, whenever $d \widetilde{u} (z): T _{z} \widetilde{\Sigma } \to T _{\widetilde{u} (z) }M$ is non-zero, and $\dim \mathcal{D} ({\widetilde{u} (z) }) =2$.* 2. *$0 \neq [u ^{*} \alpha] \in H ^{1} (\Sigma, \mathbb{R})$.* It is tempting to conjecture that every closed exact lcs manifold has a Reeb 2-curve, in analogy to the Weinstein conjecture. However this is false: **Theorem 1**. *Let $(T ^{2}, g _{st})$ be the 2-torus with its standard flat metric. Let $M _{\widetilde{\phi} ,1}$ be the mapping torus of the strict contactomorphism $\widetilde{\phi }$ of the unit cotangent bundle of $T ^{2}$, with $\widetilde{\phi }$ corresponding to an isometry $\phi: (T ^{2}, g _{st}) \to (T ^{2}, g _{st})$ which does not fix the image of any closed geodesic (for example, an irrational rotation in both coordinates). Then $M _{\widetilde{\phi},1}$ has no Reeb 2-curves.* Nevertheless, note that for the counterexample above, the conformal symplectic Weinstein conjecture as described in the following section readily holds, by Proposition [Proposition 1](#lemma_MappingTorusReeb1curve){reference-type="ref" reference="lemma_MappingTorusReeb1curve"}. # Results on Reeb 2-curves and a conformal symplectic Weinstein conjecture {#sec:Results on Reeb 2-curves} **Definition 1**. *Define ***the set $\mathcal{L} (M)$ of exact $\mathop{\mathrm{lcs}}$ structures on $M$*** to be: $$\mathcal{L} (M) = \{ (\beta, \gamma) \in \Omega ^{1} (M) \times \Omega ^{1} (M) \,|\, \text{$\gamma$ is closed, $d _{\gamma} \beta$ is non-degenerate} \}.$$ Define $\mathcal{F} (M) \subset \mathcal{L} (M)$ to be the subset of (possibly irrational) first kind lcs structures.* In what follows we use the following $C ^{\infty}$ metric on $\mathcal{L} (M)$. For $(\lambda _{1}, \alpha _{1}), (\lambda _{2}, \alpha _{2}) \in \mathcal{L} (M)$ define: $$\label{eq:dk} d _{{\infty}} ((\lambda _{1}, \alpha _{1}), (\lambda _{2}, \alpha _{2})) = d _{C ^{\infty }} (\lambda _{1}, \lambda _{2} ) + d _{C ^{\infty}} (\alpha _{1}, \alpha _{2} ),$$ where $d _{C ^{\infty}}$ on the right side is the usual $C ^{\infty}$ metric.
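Before stating the main theorems, here is a small explicit computation illustrating the objects just introduced; the particular choice of forms is only for illustration. On $T ^{2} = \mathbb{R} ^{2} / \mathbb{Z} ^{2}$ with coordinates $(x,y)$ take $\lambda = dx$ and $\alpha = \sqrt{2}\, dy$, so that $\omega = d _{\alpha} \lambda = - \alpha \wedge \lambda = \sqrt{2}\, dx \wedge dy$ is non-degenerate and $(\lambda, \alpha) \in \mathcal{L} (T ^{2})$. Solving $\omega (X _{\lambda}, \cdot) = \lambda$ and $\omega (X _{\alpha}, \cdot) = \alpha$ gives $$X _{\lambda} = -\tfrac{1}{\sqrt{2}}\, \partial _{y}, \qquad X _{\alpha} = \partial _{x},$$ so the canonical distribution is $\mathcal{D} = T T ^{2}$; moreover $d\lambda = 0$ forces $\mathcal{V} _{\lambda} = T T ^{2}$, so $(\lambda, \alpha) \in \mathcal{F} (T ^{2})$. Since $0 \neq [\alpha] \in H ^{1} (T ^{2}, \mathbb{R})$, the identity map of $T ^{2}$ (with any complex structure on the domain) is a Reeb 2-curve.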
The following theorems are proved in Section [\[sec:Proofs of theorems on Conformal symplectic Weinstein conjecture\]](#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture){reference-type="ref" reference="sec:Proofs of theorems on Conformal symplectic Weinstein conjecture"}, based on the theory of elliptic pseudo-holomorphic curves in $M$. We could use $C ^{k}$ metrics for a suitable finite $k$ instead of $C ^{\infty }$; however, we cannot take $k=0$ (at least not obviously), and the extra complexity of working with $C ^{k}$ metrics is better left for later developments. **Theorem 1**. *Let $(C, \lambda)$ be a closed contact manifold, satisfying one of the following conditions:* 1. *$(C, \lambda )$ has at least one non-degenerate Reeb orbit.* 2. *$i (N,R ^{\lambda}, \beta) \neq 0$ where the latter is the Fuller index of some open compact subset of the orbit space: $N \subset \mathcal{O} (R ^{\lambda}, \beta )$, see Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}.* *Then we have the following:* 1. *For some $d _{\infty}$ neighborhood $U$ of the $\mathop{\mathrm{lcs}}$-fication $(\lambda, \alpha)$ in the space $\mathcal{F} (M=C \times S ^{1})$, every element of $U$ admits a Reeb 2-curve.* 2. *For any $(\lambda', \alpha' ) \in U$, the corresponding Reeb 2-curve $u: \Sigma \to M$ can be assumed to be ***elliptic***, meaning that $\Sigma$ is elliptic (more specifically: a nodal, topological genus 1, closed, connected Riemann surface).* 3. *$u$ can also be assumed to be $\alpha$-charge 1 (see Definition [Definition 1](#def:charge){reference-type="ref" reference="def:charge"}).* 4. *If $M$ has dimension 4 then $u$ can be assumed to be embedded and normal (the set of nodes is empty). And so in particular, such a $u$ represents a closed, $(\omega = d _{\alpha } \lambda)$-symplectic torus hypersurface.* ## Reeb 1-curves {#sec:Reeb 1-curves} We have stated some basic new Reeb dynamics phenomena in the introduction. We now discuss an application of a different character. **Definition 1**. *A smooth map $o: S ^{1} \to M$ is a ***Reeb 1-curve*** in an exact lcs manifold $(M,\lambda,\alpha)$, if $$\forall t \in S ^{1}: (\lambda (o' (t)) >0) \land (o' (t) \in \mathcal{D}).$$* The following is proved in Section [7](#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture){reference-type="ref" reference="sec:Proofs of theorems on Conformal symplectic Weinstein conjecture"}. **Definition 1**. *We say that an exact lcs manifold $(M, \lambda, \alpha )$ satisfies the ***Reeb condition*** if: $$\lambda (X _{\alpha }) >0.$$* **Theorem 1**. *Suppose that $(M, \lambda, \alpha )$ is an exact lcs manifold satisfying the Reeb condition. If $(M, \lambda, \alpha )$ has an immersed Reeb 2-curve then it also has a Reeb 1-curve. Furthermore, if it has an immersed elliptic Reeb 2-curve, then this Reeb 2-curve is normal.* We have an immediate corollary of Theorem [Theorem 1](#thm:C0Weinstein){reference-type="ref" reference="thm:C0Weinstein"} and Theorem [Theorem 1](#thm:Reeb1curves){reference-type="ref" reference="thm:Reeb1curves"}. **Corollary 1**. *Let $\lambda$ be a contact form on a closed 3-manifold $C$, with at least one non-degenerate Reeb orbit, or more generally satisfying $i (N,R ^{\lambda}, \beta) \neq 0$ for some open compact $N$ as previously. Then there is a $d _{\infty}$ neighborhood $U$ of the $\mathop{\mathrm{lcs}}$-fication $(\lambda, \alpha)$ in the space $\mathcal{F} (C \times S ^{1})$, s.t.
for each $(\lambda', \alpha ') \in U$ there is a Reeb 1-curve.* **Corollary 1**. *Every closed exact lcs surface satisfying the Reeb condition has a Reeb 1-curve.* **Lemma 1**. *Let $(M, \lambda, \alpha)$ be an exact lcs manifold with $M$ closed. Then $0 \neq [\alpha] \in H ^{1} (M, \mathbb{R})$.* *Proof.* Suppose by contradiction that $\alpha$ is exact and let $g$ be a primitive, $\alpha = dg$. Then computing we get: $d _{\alpha} \lambda = \frac{1}{f} d (f \lambda)$ with $f = e ^{-g}$, since $\frac{1}{f} d (f \lambda) = d \lambda + d (\log f) \wedge \lambda = d \lambda - dg \wedge \lambda$. Consequently, $d (f\lambda)$ is an exact non-degenerate 2-form on the closed manifold $M$, which contradicts Stokes' theorem. ◻ *Proof of Corollary [Corollary 1](#lemma:conformalWeinsteinSurface){reference-type="ref" reference="lemma:conformalWeinsteinSurface"}.* This follows from Theorem [Theorem 1](#thm:Reeb1curves){reference-type="ref" reference="thm:Reeb1curves"}: in this case the Reeb condition forces $\mathcal{D} = TM$ since $\dim M = 2$, and $0 \neq [\alpha]$ by Lemma [Lemma 1](#lemma:alphaclass){reference-type="ref" reference="lemma:alphaclass"}, so the identity map $M \to M$ is an immersed Reeb 2-curve. ◻ **Proposition 1**. *Assume the Weinstein conjecture. Then the mapping torus $M _{\phi }$ of a strict contactomorphism $\phi: (C, \lambda ) \to (C, \lambda )$, where $C$ is closed, has a Reeb 1-curve.* *Proof.* Immediate from definitions. ◻ Inspired by the above considerations we conjecture: **Conjecture 1**. *Suppose that $(M, \lambda, \alpha )$ is a closed exact lcs manifold of dimension $4$ satisfying the Reeb condition. Then it has a Reeb 1-curve.* **Proposition 1**. *The analogue of Conjecture [Conjecture 1](#conj:conformalWeinstein){reference-type="ref" reference="conj:conformalWeinstein"}, in all dimensions, implies the Weinstein conjecture: every closed contact manifold $(C,\lambda)$ has a closed Reeb orbit.* *Proof of Proposition [Proposition 1](#prop:Implies){reference-type="ref" reference="prop:Implies"}.* Let $(M= C \times S ^{1}, \lambda, \alpha)$ be the $\mathop{\mathrm{lcs}}$-fication of a closed contact manifold $(C, \lambda)$. Then it satisfies the Reeb condition. Suppose that $o: S ^{1} \to M$ is a Reeb 1-curve. Then $\forall t \in S ^{1}: \lambda (\dot o (t)) >0$ and $o$ is tangent to $\mathcal{V} _{\lambda} = \mathcal{D}$. Consequently, $pr _{C} \circ o$ is tangent to $\ker d \lambda$, and $\forall \tau \in S ^{1}: \lambda (({pr _{C} \circ o})' (\tau)) >0$. It follows that $pr _{C} \circ o$ is a Reeb orbit of $(C, \lambda )$ up to parametrization. ◻ # $J$-holomorphic curves in lcs manifolds and the definition of the invariant $\operatorname {GWF}$ {#sec:gromov_witten_theory_of_the_lcs_c_times_s_1_} Let $(M,J)$ be an almost complex manifold and $(\Sigma,j)$ a Riemann surface. Recall that a map $u: \Sigma \to M$ is said to be *$J$-holomorphic* if $du \circ j = J \circ du$. **Notation 1**. We will often say ***$J$-curve*** in place of $J$-holomorphic curve. First kind $\mathop{\mathrm{lcs}}$ manifolds give immediate examples of almost complex manifolds where the $L ^{2}$ $\mathop{\mathrm{energy}}$ functional is unbounded on the moduli spaces of fixed class $J$-curves, as well as where null-homologous $J$-curves can be non-constant. We are going to see this shortly after developing a more general theory. **Definition 1**. *Let $(M,\lambda, \alpha)$ be an exact lcs manifold, satisfying the ***Reeb condition***: $\omega (X _{\lambda}, X _{\alpha}) = \lambda (X _{\alpha}) > 0$, where $\omega = d _{\alpha} \lambda$.
In this case, $\mathcal{D}$ is a 2-dimensional distribution, and we say that an $\omega$-compatible $J$ is ***$(\lambda,\alpha)$-admissible*** or $\omega$-admissible (when $\lambda,\alpha$ are implicit) if:* - *$J$ preserves the canonical distribution $\mathcal{D}$ and preserves the $\omega$-orthogonal complement $\mathcal{D} ^{\perp}$ of $\mathcal{D}$. That is, $J (\mathcal{D}) \subset \mathcal{D}$ and $J(\mathcal{D} ^{\perp}) \subset \mathcal{D} ^{\perp}$.* - *$d\lambda$ tames $J$ on $\mathcal{D} ^{\perp}$.* *Admissible $J$ exist by classical symplectic geometry, and the space of such $J$ is contractible, see  [@cite_McDuffSalamonIntroductiontosymplectictopology]. We call $(\lambda, \alpha, J)$ as above a ***tamed exact $\mathop{\mathrm{lcs}}$ structure***, and $(\omega,J)$ is called a tamed exact $\mathop{\mathrm{lcs}}$ structure if $\omega = d_{\alpha}\lambda$, for $(\lambda, \alpha, J)$ a tamed exact $\mathop{\mathrm{lcs}}$ structure. In this case $(M, \omega,J)$, $(M, \lambda, \alpha, J)$ will be called a ***tamed exact $\mathop{\mathrm{lcs}}$ manifold***.* *Example 4*. If $(M,\lambda,\alpha)$ is first kind then by an elementary computation $\omega (X _{\lambda}, X _{\alpha}) =1$ everywhere. In particular, we may find a $J$ such that $(\lambda, \alpha, J)$ is a tamed exact lcs structure, and the space of such $J$ is contractible. We will call $(M, \lambda, \alpha, J)$ a ***tamed first kind*** lcs manifold. **Lemma 1**. *Let $(M, \lambda, \alpha, J)$ be a tamed first kind lcs manifold. Then given a smooth $u: \Sigma \to M$, where $\Sigma$ is a closed (nodal) Riemann surface, $u$ is $J$-holomorphic only if $$\mathop{\mathrm{\mathrm{image}}}d {\widetilde{u} } (z) \subset \mathcal{V} _{\lambda} ( {\widetilde{u} } (z))$$ for all $z \in \widetilde{\Sigma }$, where $\widetilde{u}: \widetilde{\Sigma } \to M$ is the normalization of $u$ (see Definition [Definition 1](#def:normalization){reference-type="ref" reference="def:normalization"}). In particular $\widetilde{u} ^{*} d\lambda =0$.* *Proof.* As previously observed, by the first kind condition, $\mathcal{V} _{\lambda}$ is the span of $X _{\lambda}, X _{\alpha}$ and hence $$V:=\mathcal{V} _{\lambda } = \mathcal{D}.$$ Let $u$ be $J$-holomorphic, so that $\widetilde{u}$ is $J$-holomorphic (by definition of a $J$-holomorphic nodal map). We have $$\int _{\widetilde{\Sigma }} \widetilde{u} ^{*} d \lambda = 0$$ by Stokes' theorem. Let $proj (p): T _{p} M \to V ^{\perp}(p)$ be the projection induced by the splitting $TM = V \oplus V ^{\perp}$. Suppose that for some $z \in \widetilde{\Sigma }$, $proj \circ d \widetilde{u} (z) \neq 0$. By the conditions: - $J$ is tamed by $d\lambda$ on $V ^{\perp}$. - $d \lambda$ vanishes on $V$. - $J$ preserves the splitting $TM = V \oplus V ^{\perp}$. we have $\int _{\widetilde{\Sigma } } \widetilde{u} ^{*} d \lambda > 0$, a contradiction. Thus, $$\forall z \in \widetilde{\Sigma }: proj \circ d \widetilde{u} (z) = 0,$$ so $$\forall z \in \widetilde{\Sigma } : \mathop{\mathrm{\mathrm{image}}}d \widetilde{u} (z) \subset \mathcal{V} _{\lambda} (\widetilde{u} (z)).$$ ◻ *Example 5*. Let $(C \times S ^{1}, \lambda, \alpha)$ be the lcs-fication of a contact manifold $(C, \lambda)$. In this case $$X _{\alpha} = (R ^{\lambda}, 0),$$ where $R ^{\lambda}$ is the Reeb vector field and $$X _{\lambda} = (0, \frac{d}{d \theta})$$ is the vector field generating the natural action of $S ^{1}$ on $C \times S ^{1}$.
If we denote by $\xi \subset T (C \times S ^{1} )$ the distribution $\xi (p) = \ker \lambda (p)$, then in this case $\xi = V ^{\perp}$ in the notation above. We then take $J$ to be an almost complex structure on $\xi$, which is $S ^{1}$ invariant, and compatible with $d\lambda$. The latter means that $$g _{J} (\cdot, \cdot):= d \lambda| _{\xi} (\cdot, J \cdot)$$ is a $J$ invariant Riemannian metric on the distribution $\xi$. There is an induced almost complex structure $J ^{\lambda}$ on $C \times S ^{1}$, which is $S ^{1}$-invariant, coincides with $J$ on $\xi$ and which satisfies: $$J ^{\lambda} (X _{\alpha}) = X _{\lambda}.$$ Then $(C \times S ^{1}, \lambda, \alpha, J ^{\lambda} )$ is a tamed integral first kind $\mathop{\mathrm{lcs}}$ manifold. ## Charged elliptic curves in an lcs manifold We now study moduli spaces of elliptic curves in a lcs manifold, constrained to have a certain charge. [^2] In the present context, one reason for the introduction of "charge" is that it is now possible for non-constant holomorphic curves to be null-homologous, so we need additional control. Here is a simple example: take $S ^{3} \times S ^{1}$ with $J=J ^{\lambda}$, for the $\lambda$ the standard contact form, then all the Reeb holomorphic tori (as defined further below) are null-homologous. Let $\Sigma$ be a complex torus with a chosen marked point $z \in \Sigma$, i.e. an elliptic curve over $\mathbb{C}$. An isomorphism $\phi: (\Sigma _{1}, z _{1} ) \to (\Sigma _{2}, z _{2} )$ is a biholomorphism s.t. $\phi (z _{1} ) = z _{2}$. The set of isomorphism classes forms a smooth orbifold $M _{1,1}$. This has a natural compactification - the Deligne-Mumford compactification $\overline{M} _{1,1}$, by adding a point at infinity, corresponding to a nodal genus 1 curve with one node. The notion of charge can be defined in a general setting. **Definition 1**. *Let $M$ be a manifold endowed with a closed integral 1-form $\alpha$. Let $u: T^{2} \to M$ be a continuous map. Let $\gamma, \rho: S ^{1} \to T ^{2}$ represent generators of $H _{1} (T ^{2}, \mathbb{Z})$, with $\gamma \cdot \rho =1$, where $\cdot$ is the intersection pairing with respect to the standard complex orientation on $T ^{2}$. Suppose in addition: $$\langle \gamma, u ^{*} {\alpha} \rangle =0, \quad \langle \rho, u ^{*} {\alpha} \rangle \neq 0,$$ where $\langle , \rangle$ is the natural pairing of homology and cohomology. Then we call $$n = |\langle \rho, u ^{*} {\alpha} \rangle | \in \mathbb{N}_{>0},$$ the ***$\alpha$-charge of $u$***, or just the charge of $u$ when $\alpha$ is implicit. Suppose furthermore that $\langle \rho, u ^{*} {\alpha} \rangle > 0,$ then the class $u _{*} (\gamma) \in \pi _{1} (M)$ will be called the ***$\pi$-class of $u$***, for $\pi _{1} (M)$ the set of free homotopy classes of loops as before.* It is easy to see that charge is always defined and is independent of choices above. We may extend the definition of charge to curves $u: \Sigma \to M$, with $\Sigma$ a nodal elliptic curve, as follows. If $\rho: S ^{1} \to \Sigma$ represents the generator of $H _{1} (\Sigma, \mathbb{Z} )$ then define the charge of $u$ to be $|\langle \rho, u ^{*}\alpha \rangle |$. Obviously the charge condition is preserved under Gromov convergence of stable maps. But it is not preserved in homology, so that charge is *not* a functional $H _{2} (M, \mathbb{Z} ) \to \mathbb{N}$. **Definition 1**. 
*By the above, associated to a continuous map $u: \Sigma \to M$ with $\Sigma$ an elliptic curve, and non-zero $\alpha$-charge, we have a triple $(A, \beta, n) \in H _{2} (M, \mathbb{Z} ) \times \pi _{1} (M) \times \mathbb{N} _{>0}$, corresponding to the homology class, the $\pi$-class, and the $\alpha$-charge. This triple will be called the ***charge class*** of $u$.* Let $(M, J)$ be an almost complex manifold and $\alpha$ a closed integral 1-form on $M$ non-vanishing in cohomology; then we call $(M, J, \alpha)$ a ***Lee manifold***. Suppose for the moment that there are no non-constant $J$-holomorphic maps $(S ^{2},j) \to (M,J)$ (otherwise we need stable maps), then for $n \geq 1$ we define: $$\overline{\mathcal{M}} ^{n} _{1,1} (J, A, \beta)$$ as the set of equivalence classes of tuples $(u, S)$, for $S= (\Sigma, z)$ a possibly nodal elliptic curve and $u: \Sigma \to M$ a charge class $(A,\beta,n)$, $J$-holomorphic map. The equivalence relation is $(u _{1}, S _{1} ) \sim (u _{2}, S _{2} )$ if there is an isomorphism $\phi: S _{1} \to S _{2}$ s.t. $u _{2} \circ \phi = u _{1}$. It is not hard to see that such an isomorphism preserves the charge class, so that $\overline{\mathcal{M}} ^{n} _{1,1} (J, A, \beta)$ is well defined. Also note that the expected dimension of $\overline{\mathcal{M}} _{1,1} ^1 ({J} ^{\lambda}, A, \beta)$ is 0. It is given by the Fredholm index of the operator [\[eq:fullD\]](#eq:fullD){reference-type="eqref" reference="eq:fullD"} which is 2, minus the dimension of the reparametrization group (for non-nodal curves) which is 2. That is, given an elliptic curve $S = (\Sigma, z)$, let $\mathcal{G} (\Sigma)$ be the 2-dimensional group of biholomorphisms $\phi$ of $\Sigma$. Then given a $J$-holomorphic map $u: \Sigma \to M$, $(\Sigma,z,u)$ is equivalent to $(\Sigma, \phi(z), u \circ \phi)$ in $\overline{\mathcal{M}} _{1,1} ^1 ({J} ^{\lambda}, A, \beta)$, for $\phi \in \mathcal{G} (\Sigma)$. By slight abuse we may denote such an equivalence class simply by $u$, so we may write $u \in \overline{\mathcal{M}} ^{n} _{1,1} (J, A, \beta)$, with $S$ implicit. ## Reeb holomorphic tori in $(C \times S ^{1}, J ^{\lambda})$ {#sec:Reeb holomorphic tori simple} In this section we discuss an important example. Let $(C, \lambda)$ be a contact manifold and let $\alpha$ and $J ^{\lambda }$ be as in Example [Example 5](#section:lcsfication){reference-type="ref" reference="section:lcsfication"}. So that in particular we get a Lee manifold $(C \times S ^{1}, J ^{\lambda}, \alpha)$. In this case we have one natural type of charge 1 $J ^{\lambda }$-holomorphic tori in $M = C \times S ^{1}$. Let $o$ be a period $c$ closed Reeb orbit of $R ^{\lambda}$, and let $\beta$ be its class in $\pi _{1} (C) \subset \pi _{1} (M)$. A ***Reeb torus*** $u _{o}$ for $o$ is the map $$\begin{aligned} u _{o}: (S ^{1} \times S ^{1} = T ^{2}) \to C \times S ^{1} \\ u_o (s, t) = (o (s), t).\end{aligned}$$ A Reeb torus is $J ^{\lambda}$-holomorphic for a uniquely determined holomorphic structure $j$ on $T ^{2}$ defined by: $$j (\frac{\partial}{\partial s}) = c \frac{\partial} {\partial t}.$$ ## Definition of the invariant $\operatorname {GWF}$ {#sec:Definition of GWF} Let $X$ be a manifold. For $g$ a taut metric on $X$, let $\lambda _{g}$ be the Liouville 1-form on the unit cotangent bundle $C$ of $X$.
If $\phi$ is an isometry of $g$ then there is a strict contactomorphism $\widetilde{\phi }$ of $(C, \lambda _{g})$, and this gives the "mapping torus" lcs manifold $(M _{\widetilde{\phi }, 1}, \lambda _{\widetilde{\phi} }, \alpha )$ as described in Section [7.1](#sec:Mapping tori and Reeb 2-curves){reference-type="ref" reference="sec:Mapping tori and Reeb 2-curves"}. If $\beta \in \pi _{1} ^{inc} (X)$, let $\widetilde{\beta} \in \pi _{1} (C)$ denote the lift of the class, defined by representing $\beta$ by a unit speed closed geodesic $o$, taking the canonical lift $\widetilde{o}$ to a closed Reeb orbit, and setting $\widetilde{\beta} = [\widetilde{o} ]$. Given $n \geq 1$, suppose that $$\label{eq:ncharephi} \widetilde{\phi} ^{n} _{*} (\widetilde{\beta}) = \widetilde{\beta }.$$ Then, as explained in Section [\[sec:Mapping tori and Reeb 2-curves\]](#sec:Mapping tori and Reeb 2-curves){reference-type="ref" reference="sec:Mapping tori and Reeb 2-curves"}, this naturally induces a map $u ^{n}: T ^{2} \to M$ well defined up to homotopy, whose class in homology is denoted by $A ^{n} _{\widetilde{\beta } } \in H _{2} (M, \mathbb{Z} )$. The $\alpha$-charge of $u ^{n}$ is $n$, and its $\pi _{}$-class is $\widetilde{\beta }$. By the tautness assumption on $g$ the space $\mathcal{O} (R ^{\lambda _{g}}, \widetilde{\beta} )$ is compact. We then get that $\overline{\mathcal{M}} ^n _{1,1} (J ^{\lambda _{\phi }}, A ^{n} _{\widetilde{\beta}}, \widetilde{\beta } )$ is compact and has expected dimension 0 by the Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"}. We then define $$\operatorname {GWF} (g, \phi, \beta, n):= \begin{cases} 0, &\text{ if \eqref{eq:ncharephi} is not satisfied}\\ GW ^{n} _{1,1} (J ^{\lambda _{\phi}}, A ^{n}_{\widetilde{\beta} }, \widetilde{\beta}) ([\overline {M} _{1, 1}] \otimes [C \times S ^{1} ]), &\text{ otherwise} \end{cases},$$ where the Gromov-Witten invariant on the right side is as in [\[eq:functionals2\]](#eq:functionals2){reference-type="eqref" reference="eq:functionals2"}, of the following section. Although we take here a specific almost complex structure $J ^{\phi }$, using Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"} and Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"} we may readily deduce that any $(\lambda _{\phi}, \alpha )$-admissible almost complex structure gives the same value for the invariant. Also note that when $\phi =id$ $$\label{eq_GWF=F} \forall n \in \mathbb{N}: {GWF} (g, id, \beta, n) = F (g, \beta ),$$ where the latter is the invariant studied in  [@cite_SavelyevFuller]. This readily follows by Theorem [Theorem 1](#thm:GWFullerMain){reference-type="ref" reference="thm:GWFullerMain"}. # Elements of Gromov-Witten theory of an almost complex manifold {#sec:elements} Suppose that $(M,J)$ is an almost complex manifold (possibly non-compact), where the almost complex structures $J$ are assumed throughout the paper to be $C ^{\infty}$. Let $N \subset \overline{\mathcal{M}} _{g,k} (J, A)$ be an open compact subset with $\mathop{\mathrm{energy}}$ positive on $N$. The latter energy condition is only relevant when $A =0$. We shall primarily refer in what follows to work of Pardon in [@cite_PardonAlgebraicApproach], being more familiar to the author. 
But we should mention that the latter is a follow-up to a theory originally created by Fukaya-Ono [@cite_FukayaOnoArnoldandGW], and later expanded with Oh-Ohta [@cite_FukayaLagrangianIntersectionFloertheoryAnomalyandObstructionIandII]. The construction in [@cite_PardonAlgebraicApproach] of an implicit atlas, on the moduli space $\mathcal{M}$ of $J$-curves in a symplectic manifold, only needs a neighborhood of $\mathcal{M}$ in the space of all curves. So for an *open* compact component $N$ as above, we have a well-defined natural implicit atlas (or a Kuranishi structure in the setup of [@cite_FukayaOnoArnoldandGW]). And so such an $N$ will have a virtual fundamental class in the sense of  [@cite_PardonAlgebraicApproach]. This understanding will be used in other parts of the paper, following Pardon for the explicit setup. We may thus define functionals: $$\label{eq:functionals2nonmain} GW _{g,n} (N,J, A): H_* (\overline{M} _{g,n}) \otimes H _{*} (M ) \to \mathbb{Q}.$$ In our more specific context we must in addition restrict the charge, which is defined at the moment for genus 1 curves. So supposing $(M, J, \alpha)$ is a Lee manifold we may likewise define functionals: $$\label{eq:functionals2} GW _{1,1} ^{k} (N,J, A, \beta): H_* (\overline{M} _{1,1}) \otimes H _{*} (M) \to \mathbb{Q},$$ meaning that we restrict the count to charge class $(A, \beta, k)$ curves, with $N \subset \overline{\mathcal{M}} ^{k} _{1,1} (J, A, \beta )$ an open compact subset. If $N$ is not specified it is understood to be the whole moduli space (if it is known to be compact). We now study how these functionals depend on $N$ and $J$. To avoid unnecessary generality, we discuss the case of $GW _{1,1} ^{k} (N,J, A, \beta )$. Given a Frechet smooth family $\{J _{t} \}$, $t \in [0,1]$, on $M$, we denote by $\overline{\mathcal{M}} ^{k}_{1,1} (\{J _{t} \}, A, \beta)$ the space of pairs $(u,t)$, $u \in \overline{\mathcal{M}} ^{k}_{1,1}(J _{t}, A, \beta )$. **Lemma 1**. *Let $\{J _{t} \}$, $t \in [0,1]$ be a Frechet smooth family of almost complex structures on $M$. Suppose that $\widetilde{N}$ is an open compact subset of the cobordism moduli space $\overline{\mathcal{M}} _{1,1} ^{k} (\{J _{t} \}, A, \beta)$, with $k>0$. Let $$N _{i} = \widetilde{N} \cap \left( \overline{\mathcal{M}} _{1,1} ^{k} (J _{i}, A, \beta )\right),$$ then $$GW _{1,1} ^{k} (N _{0}, J _{0}, A, \beta ) = GW _{1,1} ^{k}(N _{1}, J _{1}, A, \beta ).$$ In particular if $GW ^{k}_{1,1} (N _{0}, J _{0}, A, \beta ) \neq 0$, there is a $J _{1}$-holomorphic, stable, charge class $(A, \beta, k)$ elliptic curve in $M$.* *Proof of Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"}.* We may construct exactly as in [@cite_PardonAlgebraicApproach] a natural implicit atlas on $\widetilde{N}$, with boundary $N _{0} ^{op} \sqcup N _{1}$ ($op$ denoting opposite orientation). And so we immediately get $$GW _{1,1} ^{k} (N _{0}, J _{0}, A, \beta ) = GW _{1,1} ^{k} (N _{1}, J _{1}, A, \beta ).$$ ◻ *Remark 1*. The condition that $k>0$ is a simple way to rule out degenerations to constant curves, but is not really essential. In the case where the manifold is closed, degenerations of $J$-holomorphic curves to constant curves are impossible. This can be deduced from energy quantization coming from the general monotonicity theorem as appearing in Zinger  [@cite_zinger2017notes Proposition 3.12]. This was noted to me by Spencer Cattalani.
Even if the manifold is not compact, given the assumption that $\widetilde{N}$ itself is compact, we may similarly preclude such degenerations. The following generalization of the lemma above will be useful later. First a definition. **Definition 1**. *Let $M$ be a smooth manifold. Denote by $H _{2} ^{inc} (M)$ the set of boundary incompressible homology classes, defined analogously to Definition [Definition 1](#definition_boundaryincompressible){reference-type="ref" reference="definition_boundaryincompressible"}. We say that a Frechet smooth family $\{J _{t}\}$, $t \in [0,1]$ on a manifold $M$ has a ***right holomorphic sky catastrophe*** in charge class $(A, \beta,k)$ for $A \in H _{2} ^{inc}(M)$, if there is an element $u \in \overline{\mathcal{M}} ^{k} _{1 ,1} (J _{0}, A, \beta)$, which does not belong to any open compact subset of $\overline{\mathcal{M}} _{1,1} ^{k} (\{J _{t} \}, A, \beta)$. We say that the sky catastrophe is ***essential*** if the same is true for any smooth family $\{J' _{t}\}$ satisfying $J'_0 = J _{0}$ and $J' _{1} = J _{1}$.* **Lemma 1**. *Let $\{J _{t} \}$, $t \in [0,1]$ be a Frechet smooth family of almost complex structures on $M$, $A \in H _{2} ^{inc} (M)$ and $k>0$. Suppose that $\overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta )$ is compact, and there is no right holomorphic sky catastrophe for $\{J _{t}\}$. Then there is a charge class $(A, \beta , k)$, $J _{1}$-holomorphic, stable, elliptic curve in $M$.* *Proof.* By assumption, for each $u \in \overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta )$ there is an open compact subset $\mathcal{C} _{u} \subset \overline{\mathcal{M}} _{1,1} ^{k} (\{J _{t} \}, A, \beta )$ with $u \in \mathcal{C} _{u}$. Then $\{\mathcal{C} _{u} \cap \overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta ) \} _{u}$ is an open cover of $\overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta )$ and so has a finite sub-cover, corresponding to a collection $u _{1}, \ldots, u _{n}$. Set $$\widetilde{N} = \bigcup _{i \in \{1, \ldots, n\}} \mathcal{C} _{u _{i}}.$$ Then $\widetilde{N}$ is an open-compact subset of $\overline{\mathcal{M}} _{1,1} ^{k} (\{J _{t} \}, A, \beta)$ s.t. $\widetilde{N} \cap \overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta ) = N_0 := \overline{\mathcal{M}} _{1,1} ^{k} (J _{0}, A, \beta )$. Then the result follows by Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"}. ◻ We now state a basic technical lemma, following some standard definitions. **Definition 1**. *An ***almost symplectic pair*** on $M$ is a tuple $(\omega, J)$, where $\omega$ is a non-degenerate 2-form on $M$, and $J$ is $\omega$-compatible, meaning that $\omega (\cdot, J \cdot)$ defines a $J$-invariant Riemannian metric, denoted by $g _{J}$ (with $\omega$ implicit).* **Definition 1**. *We say that a pair of almost symplectic pairs $(\omega _{i}, J _{i} )$, $i=0,1$, is **$\delta$-close** if $\omega _{0}, \omega _{1}$ are $C ^{\infty}$ $\delta$-close, and $J _{0}, J _{1}$ are $C ^{\infty}$ $\delta$-close.* Let $\mathcal{S} (A)$ denote the space of equivalence classes of all smooth, nodal, stable, charge $k$, elliptic curves in $M$ in class $A$, with the standard Gromov topology determined by $g _{J}$. That is, elements of $\mathcal{S} (A)$ are like elements of $\overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta )$ but are not required to be $J$-holomorphic. In particular, we have a continuous energy function: $$e = e _{g _{J}}: \mathcal{S} (A) \to \mathbb{R} _{\geq 0}.$$ **Lemma 1**.
*Let $(\omega, J)$ be an almost symplectic pair on a compact manifold $M$ and let $N \subset \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta )$ be compact and open (as a subset of $\overline{\mathcal{M}} ^{k} _{1,1} (J,A)$). Then there exists an open $U \subset \mathcal{S} (A)$ satisfying:* 1. *$e$ is bounded on $\overline{U}$. [\[property:ebounded\]]{#property:ebounded label="property:ebounded"}* 2. *$U \supset N$. [\[property:supset\]]{#property:supset label="property:supset"}* 3. *$\overline{U} \cap \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta ) = N$. [\[property:intersection\]]{#property:intersection label="property:intersection"}* *Proof.* The Gromov topology on $\mathcal{S} (A)$ has a basis $\mathcal{B}$ satisfying: 1. If $V \in \mathcal{B}$ then $e$ is bounded on $\overline{V}$. 2. If $U$ is open and $u \in U$, then $$\exists V \in \mathcal{B}: (u \in V) \land (\overline{V} \subset U).$$ In the genus 0 case this is contained in the classical text McDuff-Salamon [@cite_McDuffSalamonJholomorphiccurvesandsymplectictopology page 140]. The basis $\mathcal{B}$ is defined using a collection of "quasi distance functions" $\{\rho _{\epsilon}\} _{\epsilon}$ on the set stable maps. The higher genus case is likewise well known. Thus, since $N$ is relatively open, using the properties of $\mathcal{B}$ above, we may find a collection $\{V _{\alpha}\} \subset \mathcal{B}$ s.t. - $\{V _{\alpha}\}$ covers $N$. - $\overline{V} _{\alpha} \cap \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta ) \subset N.$ As $N$ is compact, we have a finite subcover $\{V _{\alpha _{1}}, \ldots, V _{\alpha _{n}}\}$. Set $U:= \cup _{i \in \{1, \ldots, n\}} V _{\alpha _{i}}$. Then $U$ satisfies the conclusion of the lemma. ◻ **Lemma 1**. *Let $(M, \omega, J, \alpha )$ be as above, $N \subset \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta )$ an open compact set, and $U$ as in the lemma above. Then there is a $\delta>0$ s.t. whenever $J'$ is $C ^{2}$ $\delta$-close to $J$ if $u \in \overline{\mathcal{M}} ^{k} _{1,1} (J',A, \beta )$ and $u \in \overline{U}$ then $u \in U$.* *Proof.* Suppose otherwise, then there is a sequence $\{J _{k} \}$ $C ^{2}$ converging to $J$, and a sequence $\{u _{k} \} \in \overline{U} -U$ of $J _{k}$-holomorphic stable maps. Then by property [\[property:ebounded\]](#property:ebounded){reference-type="ref" reference="property:ebounded"} $e _{g _{J}}$ is bounded on $\{u _{k} \}$. Hence, by Gromov compactness, specifically theorems [@cite_McDuffSalamonJholomorphiccurvesandsymplectictopology B.41, B.42], we may find a Gromov convergent subsequence $\{u _{k _{j} } \}$ to a $J$-holomorphic stable map $u \in \overline{U} - U$. But by Properties [\[property:intersection\]](#property:intersection){reference-type="ref" reference="property:intersection"}, [\[property:supset\]](#property:supset){reference-type="ref" reference="property:supset"} of the set $U$, $$(\overline{U} - U) \cap \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta ) = \emptyset.$$ So that we obtain a contradiction. ◻ **Lemma 1**. *Let $M, \omega, J, \alpha$ and $N \subset \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta )$ be as in the previous lemma. Then there is a $\delta>0$ s.t. the following is satisfied. Let $(\omega',J')$ be $\delta$-close to $(\omega,J)$, then there is a continuous in the $C ^{\infty}$ topology family $\{ J _{t} \}$, $J_0= J$, $J_1= J'$ s.t. 
there is an open compact subset $$\widetilde{N} \subset \overline{\mathcal{M}} ^{k} _{1,1} (\{J _{t}\},A, \beta ),$$ satisfying $$\widetilde{N} \cap \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta ) = N.$$* *Proof.* First let $\delta$ be as in Lemma [Lemma 1](#lemma:NearbyEnergy){reference-type="ref" reference="lemma:NearbyEnergy"}. We then need: **Lemma 1**. *Given a $\delta>0$ there is a $\delta'>0$ s.t. if $(\omega',J')$ is $\delta'$-near $(\omega, J)$ then there is a family $\{(\omega _{t}, J _{t} )\}$, continuous in the $C ^{\infty}$ topology, satisfying:* - *$(\omega _{t},J _{t} )$ is $\delta$-close to $(\omega,J)$ for each $t$.* - *$(\omega _{0}, J _{0}) = (\omega, J)$ and $(\omega _{1}, J _{1}) = (\omega', J')$.* *Proof.* Let $\{g _{t} \}$ be the family of metrics on $M$ given by the convex linear combination of $g=g _{\omega, J}, g' = g _{\omega',J'}$, that is $g _{t}= (1-t) g + t g'$. Clearly $g _{t}$ is $C ^{\infty}$ $\delta'$-close to $g _{0}$ for each $t$. Likewise, the family of 2-forms $\{\omega _{t} \}$ given by the convex linear combination of $\omega$, $\omega'$ is non-degenerate for each $t$ if $\delta'$ is chosen to be sufficiently small. And each $\omega _{t}$ is $C ^{\infty}$ $\delta'$-close to $\omega _{0} = \omega$. Let $$ret: Met (M) \times \Omega (M) \to \mathcal{J} (M)$$ be the "retraction map" (it can be understood as a retraction followed by projection) as defined in [@cite_McDuffSalamonIntroductiontosymplectictopology Prop 2.50], where $Met (M)$ is the space of metrics on $M$, $\Omega (M)$ the space of 2-forms on $M$, and $\mathcal{J} (M)$ the space of almost complex structures. This map has the property that the almost complex structure $ret (g,\omega)$ is compatible with $\omega$, and that $ret (g _{J}, \omega ) = J$ for $g _{J} = \omega (\cdot, J \cdot)$. Then $\{(\omega _{t}, ret (g _{t}, \omega _{t}) )\}$ is a compatible family. As $ret$ is continuous in the $C ^{\infty}$-topology, $\delta'$ can be chosen so that each $ret (g _{t}, \omega _{t})$ is $C ^{\infty}$ $\delta$-close to $J$. ◻ Returning to the proof of the main lemma. Let $\delta' < \delta$ be chosen as in Lemma [Lemma 1](#lemma:Ret){reference-type="ref" reference="lemma:Ret"} and let $\{(\omega_{t}, J _{t})\}$ be the corresponding family. Set $$\widetilde{N} = \overline{\mathcal{M}} ^{k} _{1,1} (\{J _{t} \},A, \beta ) \cap (U \times [0,1]),$$ where $U$ is as in Lemma [Lemma 1](#lemma:NearbyEnergy){reference-type="ref" reference="lemma:NearbyEnergy"}. Then $\widetilde{N}$ is an open subset of $\overline{\mathcal{M}} ^{k} _{1,1} (\{J _{t} \},A, \beta )$. By Lemma [Lemma 1](#lemma:NearbyEnergy){reference-type="ref" reference="lemma:NearbyEnergy"}, $$\widetilde{N} = \overline{\mathcal{M}} ^{k} _{1,1} (\{J _{t} \},A, \beta ) \cap (\overline{U} \times [0,1]),$$ so that $\widetilde{N}$ is also closed. Finally, $\sup _{(u,t) \in \widetilde{N}} e _{g _{t}} (u) < \infty$, by condition [\[property:ebounded\]](#property:ebounded){reference-type="ref" reference="property:ebounded"} of $U$, and since $\{e _{g _{t}} \}$, $t \in [0,1]$ is a continuous family. Consequently $\widetilde{N}$ is compact by the Gromov compactness theorem. Resetting $\delta:=\delta'$, we are then done with the proof of the main lemma. ◻ **Proposition 1**. *Given an almost complex manifold $(M,J)$, suppose that $N \subset \overline{\mathcal{M}} ^{k} _{1,1} (J,A, \beta)$ is open and compact. Suppose also that $GW ^{k} _{1,1} (N, J, A, \beta ) \neq 0$. Then there is a $\delta>0$ s.t.
whenever $J'$ is $C ^{2}$ $\delta$-close to $J$, there exists $u \in \overline {\mathcal{M}} ^{k} _{1,1} (J',A, \beta )$.* *Proof.* For $N$ as in the hypothesis, let $U$, $\delta$ and $\widetilde{N}$ be as in Lemma [Lemma 1](#lemma:NearbyEnergyDeformation){reference-type="ref" reference="lemma:NearbyEnergyDeformation"}, then by the conclusion of that lemma and by Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"} $$GW ^{k} _{1,1} (N_1, J', A, \beta ) = GW ^{k} _{1,1} (N, J, A, \beta ) \neq 0,$$ where $N _{1} = \widetilde{N} \cap \overline{\mathcal{M}} ^{k} _{1,1} (J _{1},A, \beta)$. ◻ # Elliptic curves in the lcs-fication of a contact manifold and the Fuller index {#sectionFuller} The following elementary result is crucial for us. **Lemma 1**. *Let $(M,\lambda, \alpha, J)$ be a tamed first kind $\mathop{\mathrm{lcs}}$ manifold. Then every non-constant (nodal) $J$-holomorphic curve $u: \Sigma \to M$ is a Reeb 2-curve.* *Proof of Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"}.* Let $u: \Sigma \to M$ be a non-constant, nodal $J$-curve. By Lemma [Lemma 1](#lemma:calibrated){reference-type="ref" reference="lemma:calibrated"} it is enough to show that $[u ^{*} \alpha] \neq 0$. Let $\widetilde{M}$ denote the $\alpha$-covering space of $M$, that is the space of equivalence classes of paths $p$ starting at $x _{0} \in M$, with a pair $p _{1}, p _{2}$ equivalent if $p _{1} (1) = p _{2} (1)$ and $$\int _{[0,1]} p _{1} ^{*} \alpha = \int _{[0,1]} p _{2} ^{*} \alpha.$$ Then the lift of $\omega$ to $\widetilde{M}$ is $$\widetilde{\omega}= \frac{1}{f} d (f\lambda),$$ where $f= e ^{-g}$ and where $g$ is a primitive for the lift $\widetilde{\alpha}$ of $\alpha$ to $\widetilde{M}$, that is $\widetilde{\alpha} =dg$. In particular $\widetilde{\omega}$ is conformally symplectomorphic to an exact symplectic form on $\widetilde{M}$. So if $\widetilde{J}$ denotes the lift of $J$, any closed $\widetilde{J}$-curve is constant by Stokes theorem. Now if $[u ^{*} \alpha] = 0$ then $u$ has a lift to a $\widetilde{J}$-holomorphic map $v: \Sigma \to \widetilde{M}$. Since $\Sigma$ is closed, it follows by the above that $v$ is constant, so that $u$ is constant, which is impossible. ◻ ## Preliminaries on Reeb tori {#section_preliminariesReebtori} Let $(M=C \times S ^{1}, \lambda, \alpha )$ be the lcs-fication of $(C, \lambda)$. For $\beta \in \pi _{1} (C)$ we set $A ^{1} _{\beta} = \beta \otimes [S ^{1}] \in H _{2} (M, \mathbb{Z})$. Let $\mathcal{O} (R ^{\lambda}, \beta)$, be the orbit space as in Section [1.1](#sec:Fixed Reeb strings){reference-type="ref" reference="sec:Fixed Reeb strings"}. Let $J ^{\lambda }$ on $C \times S ^{1}$ be as in Section [4.2](#sec:Reeb holomorphic tori simple){reference-type="ref" reference="sec:Reeb holomorphic tori simple"}. We have a map: $$\label{eq:calP} \mathcal{P}: \mathcal{O} (R ^{\lambda}, \beta) \to \overline{\mathcal{M}} ^1 _{1,1} (J ^{\lambda}, A ^{1} _{\beta}, \beta), \quad \mathcal{P} (o) = u _{o},$$ for $u _{o}$ the Reeb torus as previously. We can say more: **Proposition 1**. *For any $(\lambda,\alpha)$-admissible $J$ there is a natural bijection: [^3] $$\mathcal{P}: \mathcal{O} (R ^{\lambda}, \beta) \to \overline{\mathcal{M}} ^1 _{1,1} (J, A ^{1} _{\beta}, \beta),$$ with $\mathcal{P}$ the map [\[eq:calP\]](#eq:calP){reference-type="eqref" reference="eq:calP"} in the case $J=J ^{\lambda }$. 
(Note that there is an analogous bijection $\mathcal{O}(R ^{\lambda}, \beta) \to \overline{\mathcal{M}} ^n _{1,1} (J, A ^{n} _{\beta}, \beta ),$ for $n>1$, where $A ^{n} _{\beta} = n \cdot \beta \otimes [S ^{1}]$.)* In the particular case of $J ^{\lambda}$, we see that all elliptic curves in $C \times S ^{1}$ are Reeb tori, and hence the underlying complex structure on the domain is "rectangular". That is, they are quotients of the complex plane by a rectangular lattice. This stops being the case when we consider generalized Reeb tori in Section [7.1](#sec:Mapping tori and Reeb 2-curves){reference-type="ref" reference="sec:Mapping tori and Reeb 2-curves"} for the mapping torus of some strict contactomorphism. Moreover, for more general compatible almost complex structures we might have nodal degenerations. *Proof of Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"}.* We define $\mathcal{P} (o)$ to be the class represented by the unique up to isomorphism $J$-holomorphic curve $u: T ^{2} \to M$ determined by the conditions: - $u$ is charge 1. - The image of $u$ is the image $\mathcal{T}$ of the map $u _{o}: T ^{2} \to M$, $(s, t) \to (o (s), t)$, i.e. the image of the Reeb torus of $o$. - The degree of the map $u: T ^{2} \to \mathcal{T}$ is the multiplicity of $o$. We need to show that $\mathcal{P}$ is bijective. Injectivity is automatic. Suppose we have a curve $u \in \overline{\mathcal{M}}_{1,1} ^1 ({J}, A, \beta),$ represented by $u: \Sigma \to M$. By Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"} $u$ is a Reeb 2-curve. Then $u$ has no spherical components, as such a component corresponds to a $J ^{\lambda }$-holomorphic map $u': \mathbb{CP} ^{1} \to M$, which by Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"} is also a Reeb 2-curve, and this is impossible by the second property in the definition, since $H ^{1} (\mathbb{CP} ^{1}, \mathbb{R}) = 0$. We first show that $u$ is a finite covering map onto the image of some Reeb torus $u _{o}$. By Lemma [Lemma 1](#lemma:ReebCurveRational){reference-type="ref" reference="lemma:ReebCurveRational"} the normalization $\widetilde{u}$ is also a Reeb 2-curve. If $u$ is not normal then $\widetilde{u}$ is a Reeb 2-curve with domain $\mathbb{CP} ^{1}$, which is impossible by the argument above. Hence $u$ is normal. By the charge $1$ condition $pr _{S ^{1} } \circ u$ is surjective, where $pr _{S ^{1} }: C \times S ^{1} \to S ^{1}$ is the projection. By Sard's theorem we have a regular value $t_{0} \in S ^{1}$ of $pr _{S ^{1} } \circ u$, so that $(pr _{S ^{1} } \circ u) ^{-1} (t _{0})$ contains an embedded circle $S _{0} \subset \Sigma$. Now $d (pr _{S ^{1} } \circ u )$ is surjective onto $T _{t _{0}} S ^{1}$ along $T \Sigma| _{S _{0} }$. And so by the first property of $u$ being a Reeb 2-curve, $o = pr _{C} \circ u| _{S _{0}}$ has non-vanishing differential $d(o)$. Moreover, again by the first property, $o$ is tangent to $\ker d \lambda$. It follows that $o$ is an unparametrized $\lambda$-Reeb orbit. Also, the image of $d (pr _{C} \circ u)$ is in $\ker d \lambda$, from which it follows that $\mathop{\mathrm{\mathrm{image}}}d (pr _{C} \circ u)= \mathop{\mathrm{\mathrm{image}}}d(o)$. By Sard's theorem and by basic differential topology it follows that the image of $u$ is contained in the image of the Reeb torus $u _{o}$, which is an embedded 2-torus $\mathcal{T}$.
By $J ^{\lambda }$-holomorphicity of $u$, since $\Sigma \simeq T ^{2}$, and by basic complex analysis of holomorphic maps $T ^{2} \to T ^{2}$, $u$ is a holomorphic covering map onto $\mathcal{T}$, of degree $\deg u$. Let $\widetilde{o}$ be the $\deg u$-fold cover of $o$. Then $\mathcal{P} (\widetilde{o})$ is also represented by a degree $\deg u$, charge one holomorphic covering map $u': T ^{2} \to \mathcal{T}$. By basic covering map theory there is a homeomorphism of covering spaces: $$\begin{tikzcd} T ^{2} \ar[r, "f"] \ar [d, "u"] & T ^{2} \ar [dl, "u'"] \\ \mathcal{T} & . \end{tikzcd}$$ Then $f$ is a biholomorphism, so that $u,u'$ are equivalent. ◻ **Proposition 1**. *Let $(C, \xi)$ be a general contact manifold. If $\lambda$ is a non-degenerate contact 1-form for $\xi$ then all the elements of $\overline{\mathcal{M}}_{1,1} ^1 ( J ^{\lambda} , {A}, \beta )$ are regular curves. Moreover, if $\lambda$ is degenerate then for a period $c$ Reeb orbit $o$, the kernel of the associated real linear Cauchy-Riemann operator for the Reeb torus $u _{o}$ is naturally identified with the 1-eigenspace of $\phi _{c,*} ^{\lambda}$, the time $c$ linearized return map $\xi (o (0)) \to \xi (o (0))$ induced by the $R^{\lambda}$ Reeb flow.* *Proof.* We already know by Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"} that all $u \in \overline{\mathcal{M}}_{1,1} ^1 (J ^{\lambda} , {A}, \beta )$ are equivalent to Reeb tori. In particular, such curves have a representation by a $J ^{\lambda}$-holomorphic map $$u: (T ^{2},j) \to (Y = C \times S ^{1}, J ^{\lambda}).$$ Since each $u$ is immersed we may naturally get a splitting $u ^{*}T (Y) \simeq N \oplus T (T ^{2})$, using the $g _{J}$ metric, where $N \to T ^{2}$ denotes the pull-back of the $g _{J}$-normal bundle to $\mathop{\mathrm{\mathrm{image}}}u$, which is identified with the pullback of the distribution $\xi _{\lambda}$ on $Y$ (which we also call the co-vanishing distribution). The full associated real linear Cauchy-Riemann operator takes the form: $$\label{eq:fullD} D ^{J}_{u}: \Omega ^{0} (N \oplus T (T ^{2}) ) \oplus T _{j} M _{1,1} \to \Omega ^{0,1} (T(T ^{2}), N \oplus T (T ^{2}) ).$$ This is an index 2 Fredholm operator (after standard Sobolev completions), whose restriction to $\Omega ^{0} (N \oplus T (T ^{2}) )$ preserves the splitting, that is the restricted operator splits as $$D \oplus D': \Omega ^{0} (N) \oplus \Omega ^{0} (T (T ^{2}) ) \to \Omega ^{0,1} (T (T ^{2}), N ) \oplus \Omega ^{0,1}(T (T ^{2}), T (T ^{2}) ).$$ On the other hand the restricted Fredholm index 2 operator $$\Omega ^{0} (T (T ^{2})) \oplus T _{j} M _{1,1} \to \Omega ^{0,1}(T (T ^{2}) ),$$ is surjective by classical Teichmüller theory, see also [@cite_WendlAutomatic Lemma 3.3] for a precise argument in this setting. It follows that $D ^{J}_{u}$ will be surjective if the restricted Fredholm index 0 operator $$D: \Omega ^{0} (N) \to \Omega ^{0,1} (N),$$ has no kernel. The bundle $N$ is symplectic with symplectic form on the fibers given by restriction of $u ^{*} d \lambda$, and together with $J ^{\lambda}$ this gives a Hermitian structure $(g _{\lambda}, j _{\lambda} )$ on $N$. We have a linear symplectic connection $\mathcal{A}$ on $N$, which over the slices $S ^{1} \times \{t\} \subset T ^{2}$ is induced by the pullback by $u$ of the linearized $R ^{\lambda}$ Reeb flow.
Specifically the $\mathcal{A}$-transport map from the fiber $N _{(s _{0} , t)}$ to the fiber $N _{(s _{1}, t)}$ over the path $[s _{0}, s _{1} ] \times \{t\} \subset T ^{2}$, is given by $$(u_*| _{N _{(s _{1}, t)} }) ^{-1} \circ (\phi ^{\lambda} _{c(s _{1} - s _{0})})_* \circ u_*| _{N _{(s _{0} , t )} },$$ where $\phi ^{\lambda} _{c(s _{1} - s _{0})}$ is the time $c \cdot (s _{1} - s _{0} )$ map for the $R ^{\lambda}$ Reeb flow, where $c$ is the period of the Reeb orbit $o _{u}$, and where $u _{*}: N \to TY$ denotes the natural map, (it is the universal map in the pull-back diagram.) The connection $\mathcal{A}$ is defined to be trivial in the $\theta _{2}$ direction, where trivial means that the parallel transport maps are the $id$ maps over $\theta _{2}$ rays. In particular the curvature $R _{\mathcal{A}}$, understood as a lie algebra valued 2-form, of this connection vanishes. The connection $\mathcal{A}$ determines a real linear CR operator $D _{\mathcal{A}}$ on $N$ in the standard way, take the complex anti-linear part of the vertical differential of a section. Explicitly, $$D _{\mathcal{A}}: \Omega ^{0} (N) \to \Omega ^{0,1} (N),$$ is defined by $$D _{\mathcal{A}} (\mu) (p) = j _{\lambda} \circ \pi ^{vert} (\mu (p)) \circ d\mu (p) - \pi ^{vert} (\mu (p)) \circ d\mu (p) \circ j,$$ where $$\pi ^{vert} (\mu (p)): T _{\mu (p)} N \to T ^{vert} _{\mu(p)} N \simeq N$$ is the $\mathcal{A}$-projection, and where $T ^{vert} _{\mu (p)} N$ is the kernel of the projection $T _{\mu (p)} N \to T _{p} \Sigma$. It is elementary to verify that the operator $D _{\mathcal{A} }$ is Fredholm $0$ with the kernel isomorphic to the kernel of $D$. See also [@cite_SavelyevOh Section 10.1] for a computation of this kind in much greater generality. We have a differential 2-form $\Omega$ on the total space of $N$ defined as follows. On the fibers $T ^{vert} N$, $\Omega= u _{*} \omega$, for $\omega= d_{\alpha} \lambda$, and for $T ^{vert} N \subset TN$ denoting the vertical tangent space, or subspace of vectors $v$ with $\pi _{*} v =0$, for $\pi: N \to T ^{2}$ the projection. While on the $\mathcal{A}$-horizontal distribution $\Omega$ is defined to vanish. The 2-form $\Omega$ is closed, which we may check explicitly by using that $R _{\mathcal{A}}$ vanishes to obtain local symplectic trivializations of $N$ in which $\mathcal{A}$ is trivial. Clearly $\Omega$ must vanish on the 0-section since it is a $\mathcal{A}$-flat section. But any section is homotopic to the 0-section and so in particular if $\mu \in \ker D$ then $\Omega$ vanishes on $\mu$. Since $\mu \in \ker D$, and so its vertical differential is complex linear, it follows that the vertical differential vanishes. To see this note that $\Omega (v, J ^{\lambda}v ) >0$, for $0 \neq v \in T ^{vert}N$ and so if the vertical differential did not vanish we would have $\int _{\mu} \Omega>0$. So $\mu$ is $\mathcal{A}$-flat, in particular the restriction of $\mu$ over all slices $S ^{1} \times \{t\}$ is identified with a period $c$ orbit of the linearized at $o$ $R ^{\lambda}$ Reeb flow, and which does not depend on $t$ as $\mathcal{A}$ is trivial in the $t$ variable. So the kernel of $D$ is identified with the vector space of period $c$ orbits of the linearized at $o$ $R ^{\lambda}$ Reeb flow, as needed. ◻ **Proposition 1**. 
*Let $\lambda$ be a contact form on a $(2n+1)$-fold $C$, and $o$ a non-degenerate, period $c$, $\lambda$-Reeb orbit, then the orientation of $[u _{o} ]$ induced by the determinant line bundle orientation of $\overline{\mathcal{M}} ^1 _{1,1} ( J ^{\lambda} , {A} ),$ is $(-1) ^{CZ (o) -n}$, which is $$\mathop{\mathrm{sign}}\mathop{\mathrm{Det}}(\mathop{\mathrm{Id}}| _{\xi (o(0))} - \phi _{c, *} ^{\lambda}| _{\xi (o(0))} ).$$* *Proof of Proposition [Proposition 1](#prop:regular2){reference-type="ref" reference="prop:regular2"}.* Abbreviate $u _{o}$ by $u$. Let $N \to T ^{2}$ be the vector bundle associated to $u$ as in the proof of Proposition [Proposition 1](#prop:regular){reference-type="ref" reference="prop:regular"}. Fix a trivialization $\phi$ of $N$ induced by any trivialization of the contact distribution $\xi$ along $o$ in the obvious sense: $N$ is the pullback of $\xi$ along the composition $$T ^{2} \to S ^{1} \xrightarrow{o} C.$$ Let the symplectic connection $\mathcal{A}$ on $N$ be defined as before. Then the pullback connection $\mathcal{A}' := \phi ^{*} \mathcal{A}$ on $T ^{2} \times \mathbb{R} ^{2n}$ is a connection whose parallel transport paths $p _{t}: [0,1] \to \mathop{\mathrm{Symp}}(\mathbb{R} ^{2n} )$, along the closed loops $S ^{1} \times \{t\}$, are paths starting at $\mathop{\mathrm{\mathrm{1}}}$, and are $t$ independent. And so the parallel transport path of $\mathcal{A}'$ along $\{s\} \times S ^{1}$ is constant, that is $\mathcal{A}'$ is trivial in the $t$ variable. We shall call such a connection $\mathcal{A}'$ on $T ^{2} \times \mathbb{R} ^{2n}$ *induced by $p$*. By non-degeneracy assumption on $o$, the map $p(1)$ has no 1-eigenvalues. Let $p'': [0,1] \to \mathop{\mathrm{Symp}}(\mathbb{R} ^{2n} )$ be a path from $p (1)$ to a unitary map $p'' (1)$, with $p'' (1)$ having no $1$-eigenvalues, and s.t. $p''$ has only simple crossings with the Maslov cycle. Let $p'$ be the concatenation of $p$ and $p''$. We then get $$CZ (p') - \frac{1}{2}\mathop{\mathrm{sign}}\Gamma (p', 0) \equiv CZ (p') - n \equiv 0 \mod {2},$$ since $p'$ is homotopic relative end points to a unitary geodesic path $h$ starting at $id$, having regular crossings, and since the number of negative, positive eigenvalues is even at each regular crossing of $h$ by unitarity. Here $\mathop{\mathrm{sign}}\Gamma (p', 0)$ is the index of the crossing form of the path $p'$ at time $0$, in the notation of [@cite_RobbinSalamonTheMaslovindexforpaths.]. Consequently, $$\label{eq:mod2} CZ (p'') \equiv CZ (p) -n \mod {2},$$ by additivity of the Conley-Zehnder index. Let us then define a free homotopy $\{p _{t} \}$ of $p$ to $p'$, $p _{t}$ is the concatenation of $p$ with $p''| _{[0,t]}$, reparametrized to have domain $[0,1]$ at each moment $t$. This determines a homotopy $\{\mathcal{A}' _{t} \}$ of connections induced by $\{p _{t} \}$. By the proof of Proposition [Proposition 1](#prop:regular){reference-type="ref" reference="prop:regular"}, the CR operator $D _{t}$ determined by each $\mathcal{A}' _{t}$ is surjective except at some finite collection of times $t _{i} \in (0,1)$, $i \in N$ determined by the crossing times of $p''$ with the Maslov cycle, and the dimension of the kernel of $D _{t _{i} }$ is the 1-eigenspace of $p'' (t _{i} )$, which is 1 by the assumption that the crossings of $p''$ are simple. The operator $D _{1}$ is not complex linear. To fix this we concatenate the homotopy $\{D _{t} \}$ with the homotopy $\{\widetilde{D} _{t} \}$ defined as follows. 
Let $\{\widetilde{\mathcal{A}} _{t} \}$ be a homotopy of $\mathcal{A}' _{1}$ to a unitary connection $\widetilde{\mathcal{A}} _{1}$, where the homotopy $\{\widetilde{\mathcal{A}} _{t} \}$ is through connections induced by paths $\{\widetilde{p} _{t} \}$, giving a path homotopy of $p'= \widetilde{p} _{0}$ to $h$. Then $\{\widetilde{D} _{t} \}$ is defined to be induced by $\{\widetilde{\mathcal{A}} _{t} \}$. Let us denote by $\{D' _{t} \}$ the concatenation of $\{D _{t} \}$ with $\{ \widetilde{D} _{t} \}$. By construction, in the second half of the homotopy $\{ {D}' _{t} \}$, ${D}' _{t}$ is surjective. And $D' _{1}$ is induced by a unitary connection, since it is induced by the unitary path $\widetilde{p}_{1}$. Consequently, $D' _{1}$ is complex linear. By the above construction, for the homotopy $\{D' _{t} \}$, $D' _{t}$ is surjective except for $N$ times in $(0,1)$, where the kernel has dimension one. In particular, the sign of $[u]$ by the definition via the determinant line bundle is exactly $$(-1)^{N}= (-1)^{CZ (p) -n},$$ by [\[eq:mod2\]](#eq:mod2){reference-type="eqref" reference="eq:mod2"}, which is what was to be proved. ◻ **Theorem 1**. *$$GW _{1,1} ^{1} (N,J ^{\lambda}, A _{\beta}, \beta ) ([\overline {M} _{1, 1}] \otimes [C \times S ^{1} ]) = i (\mathcal{P} ^{-1}({N}), R ^{\lambda}, \beta),$$ where $N \subset \overline{\mathcal{M}} ^1 _{1,1} ( {J} ^{\lambda}, A _{\beta}, \beta )$ is an open compact set (where $\mathcal{P}$ is as in Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"}), $i (\mathcal{P} ^{-1}({N}), R ^{\lambda}, \beta)$ is the Fuller index as described in the Appendix below, and where the left-hand side of the equation is the functional as in [\[eq:functionals2\]](#eq:functionals2){reference-type="eqref" reference="eq:functionals2"}.* *Proof.* Suppose that ${N} \subset \overline{\mathcal{M}} ^1 _{1,1} (J ^{\lambda}, A _{\beta}, \beta )$ is open-compact and consists of isolated regular Reeb tori $\{u _{i} \}$, corresponding to orbits $\{o _{i} \}$. Denote by $mult (o _{i})$ the multiplicity of the orbits as in Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}. Then we have: $$GW _{1,1} ^{1} (N, J ^{\lambda}, A _{\beta}, \beta ) ([\overline{M} _{1,1} ] \otimes [C \times S ^{1} ]) = \sum _{i} \frac{(-1) ^{CZ (o _{i} ) - n} }{mult (o _{i} )},$$ where $n$ is half the dimension of $M$, the numerator is as in [\[eq:conleyzenhnder\]](#eq:conleyzenhnder){reference-type="eqref" reference="eq:conleyzenhnder"}, and $mult (o _{i} )$ is the order of the corresponding isotropy group, see Appendix [9](#sec:GromovWittenprelims){reference-type="ref" reference="sec:GromovWittenprelims"}. The expression on the right is exactly the Fuller index $i (\mathcal{P}^{-1} (N), R ^{\lambda}, \beta)$. Thus, the theorem follows for $N$ as above. However, in general if $N$ is open and compact then perturbing slightly we obtain a smooth family $\{R ^{\lambda _{t} } \}$, $\lambda _{0} =\lambda$, s.t. $\lambda _{1}$ is non-degenerate, that is, has non-degenerate orbits, and such that there is an open-compact subset $\widetilde{N}$ of $\overline{\mathcal{M}} _{1,1} ^1 (\{J ^{\lambda _{t} } \}, A _{\beta}, \beta)$ with $\widetilde{N} \cap \overline{\mathcal{M}} _{1,1} ^1 (J ^{\lambda}, A _{\beta}, \beta ) = N$, see Lemma [Lemma 1](#lemma:NearbyEnergyDeformation){reference-type="ref" reference="lemma:NearbyEnergyDeformation"}.
Then by Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"} if $$N_1=(\widetilde{N} \cap \overline{\mathcal{M}} ^1 _{1,1} (J ^{\lambda _{1} }, A _{\beta}, \beta ))$$ we get $$GW ^{1} _{1,1} (N, J ^{\lambda}, A _{\beta}, \beta ) ([\overline{M} _{1,1} ] \otimes [C \times S ^{1} ]) = GW ^{1} _{1,1} (N _{1}, J ^{\lambda _{1} }, A _{\beta}, \beta ) ([\overline{M} _{1,1} ] \otimes [C \times S ^{1} ]).$$ By the previous discussion $$GW ^{1} _{1,1} (N _{1}, J ^{\lambda _{1} }, A _{\beta}, \beta ) ([\overline{M} _{1,1} ] \otimes [C \times S ^{1} ]) = i (N_1, R ^{\lambda_1}, \beta),$$ but by the invariance of Fuller index (see Appendix [8](#appendix:Fuller){reference-type="ref" reference="appendix:Fuller"}), $$i (N_1, R ^{\lambda_1}, \beta) = i (N, R ^{\lambda}, \beta).$$ ◻ What about higher genus invariants of $C \times S ^{1}$? Following the proof of Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"}, it is not hard to see that all $J ^{\lambda}$-holomorphic curves must be branched covers of Reeb tori. If one can show that these branched covers are regular when the underlying tori are regular, the calculation of invariants would be fairly automatic from this data. See [@cite_WendlSuperRigid], [@cite_WendlChris] where these kinds of regularity calculation are made. # Proofs of main theorems {#sec:Proofs of theorems on Conformal symplectic Weinstein conjecture} To set notation and terminology we review the basic definition of a nodal curve. **Definition 1**. *A ***nodal Riemann surface*** (without boundary) is a pair $\Sigma= (\widetilde{\Sigma }, \mathcal{N} )$ where $\widetilde{\Sigma }$ is a Riemann surface, and $\mathcal{N}$ a set of pairs of points of $\widetilde{\Sigma }:$ $\mathcal{N} =\{(z _{0} ^{0}, z _{0} ^{1}), \ldots, (z _{n} ^{0}, z _{n} ^{1}) \}$, $n _{i} ^{j} \neq n _{k} ^{l}$ for $i \neq k$ and all $j,l$. By slight abuse, we may also denote by $\Sigma$ the quotient space $\widetilde{\Sigma}/\sim$, where the equivalence relation is generated by $n _{i} ^{0} \sim n _{i} ^{1}$. Let $q _{\Sigma}: \widetilde{\Sigma} \to (\widetilde{\Sigma}/\sim)$ denote the quotient map. The elements $q _{\Sigma } (\{z _{i} ^{0}, z _{i} ^{1} \}) \in \widetilde{\Sigma}/\sim$, are called ***nodes***. Let $M$ be a smooth manifold. By a map $u: \Sigma \to M$ of a nodal Riemann surface $\Sigma$, we mean a set map $u: (\widetilde{\Sigma}/\sim) \to M$. $u$ is called smooth or immersion or $J$-holomorphic (when $M$ is almost complex) if the map $\widetilde{u}= u \circ q _{\Sigma }$ is smooth or respectively immersion or respectively $J$-holomorphic. We call $\widetilde{u}$ ***normalization of $u$***. $u$ is called an embedding if $u$ is a topological embedding and its normalization is an immersion. The cohomology groups of $\Sigma$ are defined as $H ^{\bullet} (\Sigma) := H ^{\bullet } (\widetilde{\Sigma}/ \sim)$, likewise with homology. The genus of $\Sigma$ is the topological genus of $\widetilde{\Sigma}/\sim$.* We shall say that $(\widetilde{\Sigma }, \mathcal{N} )$ is normal if $\mathcal{N} = \emptyset$. Similarly, $u: \Sigma \to M$, $\Sigma = (\widetilde{\Sigma }, \mathcal{N} )$ is called ***normal*** if $\mathcal{N} = \emptyset$. The normalization of $u$ is the map of the nodal Riemann surface $\widetilde{u}: \widetilde{\Sigma} \to M$, $\widetilde{\Sigma} = (\widetilde{\Sigma}, \emptyset )$. 
Note that if $u$ is a Reeb 2-curve, its normalization $\widetilde{u}$ may not be a Reeb 2-curve (the second condition may fail). ## Mapping tori and Reeb 2-curves {#sec:Mapping tori and Reeb 2-curves} Let $(C, \lambda )$ be a contact manifold and $\phi$ a strict contactomorphism and let $M = (M _{\phi, 1}, \lambda _{\phi}, \alpha )$ denote the mapping torus of $\phi$, as also appearing in Theorem [Theorem 1](#thm:firstkindtorus){reference-type="ref" reference="thm:firstkindtorus"}. More specifically, $M = C \times \mathbb{R} ^{} / \sim$, where the equivalence $\sim$ is generated by $(x, \theta ) \sim (\phi (x), \theta +1)$, for more details on the corresponding lcs structure see for instance  [@cite_BazzoniFirstKind]. Then $(M, \lambda _{\phi}, \alpha)$ is an integral first kind lcs manifold. In this case $\mathcal{V} _{\lambda} = \mathcal{D}$, and in mapping torus coordinates at a point $(x, \theta )$, it is spanned by $X _{\lambda} = (0, \frac{\partial }{\partial \theta }), X _{\alpha} = (R ^{\lambda _{\theta }}, 0)$ for $R ^{\lambda _{\theta}}$ the $\lambda _{\theta}$-Reeb vector field, where $\lambda _{\theta} = \lambda _{C _{\theta}}$ the fiber over $\theta$ of the projection $M \to S ^{1}$. Analogously to the Example [Example 5](#section:lcsfication){reference-type="ref" reference="section:lcsfication"} there is an $S ^{1}$-invariant almost complex structure on $M$, which we call $J ^{\lambda _{\phi }}$. We now show that all Reeb 2-curves in $M$ must be of a certain type. Let $o: S ^{1} \to C$ be a $\lambda$-Reeb and suppose that $\mathop{\mathrm{\mathrm{image}}} \phi ^{n} (o) = \mathop{\mathrm{\mathrm{image}}}o$, for some $n>0$, so that $$\forall t \in [0,1]: \phi ^{n} (o) (t) = o (t+ \theta _{0})$$ for some uniquely determined $\theta _{0} \in [0,1)$. Let $\widetilde{o}: S ^{1} \times [0,n] \to C \times \mathbb{R}$ be the map $$\widetilde{o} (t, \tau) = (o (t+ \theta _{0} \cdot \frac{\tau }{n}), \tau).$$ Then $\widetilde{o}$ is well defined on the quotient $T ^{2} \simeq S ^{1} \times ([0,n]/0 \sim n)$, and we denote the quotient map by $u ^{n} _{o}$, called the *charge $n$ generalized Reeb torus of $o$*. If the class $[o] \in \pi _{1} ({C}) = \beta$ we denote by $A ^{n} _{\beta }$ the class of $u ^{n} _{o}$ in $H _{2} (M, \mathbb{Z} )$. The class $A ^{n} _{\beta }$ can be defined more generally whenever ${\phi} ^{n} _{*} ({\beta}) = {\beta }$, it is the class of a torus map $T ^{2} \to M$ defined analogously to the map $u ^{n} _{o}$, but no longer having the Reeb 2-curve property. We may abbreviate $A ^{1} _{\beta }$ by $A _{\beta }$. By construction, $u ^{n} _{o}$ is a charge $n$ Reeb 2-curve and its image is an embedded $J ^{\lambda _{\phi}}$-holomorphic torus $\mathcal{T}$. Moreover, $u ^{n} _{o}$ is $J ^{\lambda _{\phi}}$-holomorphic with respect to a uniquely determined complex structure on $T ^{2}$, similarly to the case of Reeb tori of Section [4.2](#sec:Reeb holomorphic tori simple){reference-type="ref" reference="sec:Reeb holomorphic tori simple"}. However, unlike the case of Reeb tori, this complex structure is not "rectangular" unless $\phi ^{n} \circ o = o$. **Proposition 1**. *Let $M = (M _{\phi}, \lambda _{\phi}, \alpha)$ be the mapping torus of a strict contactomorphism $\phi$ as above. Then:* 1. *Every charge $n$ Reeb 2-curve $u$ in $M$ has a factorization: $$\label{eq:factorization} u = u ^{n} _{o} \circ \rho,$$ for $\rho: \Sigma \to T ^{2}$ some degree one map, and for some orbit string $o$ uniquely determined by $u$.* 2. 
*Every element $u \in \overline{\mathcal{M}} ^{n} _{1,1} (J ^{\lambda _{\phi}}, A ^{n} _{\beta}, \beta)$ is represented (as an equivalence class in this moduli space) by $u ^{n} _{o}$, where the latter is as above, for some $o$ uniquely determined.* 3. *The Fredholm index of the corresponding real linear CR operator is 2, so that the expected dimension of $\overline{\mathcal{M}} ^{n} _{1,1} (J ^{\lambda _{\phi}}, A ^{n} _{\beta}, \beta)$ is 0. [\[label_fredholm\]]{#label_fredholm label="label_fredholm"}* 4. *Let $J$ be a $(\lambda _{\phi}, \alpha)$-admissible almost complex structure on $M$. There is a natural proper topological embedding: $$emb: \overline{\mathcal{M}} ^{n} _{1,1} (J, A ^{n} _{\beta}, \beta) \to \mathcal{O} (R ^{\lambda }, \beta ),$$ defined by $u \mapsto o$, where $o$ is uniquely determined by the condition [\[eq:factorization\]](#eq:factorization){reference-type="eqref" reference="eq:factorization"}. [\[part4_propTopEmbedding\]]{#part4_propTopEmbedding label="part4_propTopEmbedding"}* *Proof.* The proof of part one is completely analogous to the proof of Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"}. To prove the second part, first note that by the first part $u$ has image $\mathcal{T} = \mathop{\mathrm{\mathrm{image}}}u ^{n} _{o}$ for some $n, o$, and $\mathcal{T}$ is an embedded $J ^{\lambda _{\phi}}$-holomorphic torus. By basic theory of mappings of complex tori $u$ must be a covering map $\Sigma \to \mathcal{T}$. Since we know the charge $n$, as in final part of the proof of Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"}, we may conclude that $u \simeq u ^{n} _{o}$ for some uniquely determined $o$, where $\simeq$ is an isomorphism. We prove Part [\[label_fredholm\]](#label_fredholm){reference-type="ref" reference="label_fredholm"}. Note that $c _{1} (A ^{n} _{\beta }) = 0$, as by construction the complex tangent bundle along $u ^{n} _{o}$ admits a flat connection, induced by the natural $\mathcal{G}$-connection on $M _{\phi} \to S ^{1}$, for $\mathcal{G}$ the group of strict contactomorphisms of $(C, \lambda )$, cf. Proof of Proposition [Proposition 1](#prop:regular){reference-type="ref" reference="prop:regular"}. The needed fact then follows by the index/Riemann-Roch theorem. The last part of the proposition readily follows from the first part. ◻ *Proof of Theorem [Theorem 1](#theorem_counterexample){reference-type="ref" reference="theorem_counterexample"}.* Let $u: \Sigma \to (M = M _{\widetilde{\phi},1})$ be a Reeb 2-curve in the mapping torus as in the statement. By part one of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"}, there must be a generalized charge $n$ Reeb torus in $M$. By definitions this means that $\widetilde{\phi}$ has a charge $n$ fixed Reeb string, so that $\phi$ has a charge $n$ fixed geodesic string, which is impossible by assumptions. ◻ **Proposition 1**. *Let $(C,\lambda)$ be a contact manifold with $\lambda$ satisfying one of the following conditions:* 1. *There is a non-degenerate $\lambda$-Reeb orbit.* 2. *$i (N,R ^{\lambda}, \beta) \label{condition:prop1} \neq 0$ for some open compact $N \subset \mathcal{O} (R ^{\lambda }, \beta )$, and some $\beta$.* *Then:* 1. *Let $(\lambda, \alpha)$ be the lcs-fication of $(C, \lambda )$. There exists an $\epsilon>0$ s.t. 
for any tamed exact lcs structure $(\lambda', \alpha', J)$ on $M=C \times S ^{1}$, with $(d _{\alpha'} \lambda', J)$ $\epsilon$-close to $(d_{\alpha}\lambda, J ^{\lambda})$ (as in Definition [Definition 1](#def:deltaclose){reference-type="ref" reference="def:deltaclose"}), there exists an elliptic, $J$-holomorphic $\alpha$-charge 1 curve $u$ in $M$.* 2. *In addition, if $(M, \lambda', \alpha')$ is first kind and has dimension $4$ then $u$ may be assumed to be normal and embedded.* *Proof.* If we have a closed non-degenerate $\lambda$-Reeb orbit $o$ then we also have an open compact subset $N = \{o\} \subset S _{\lambda }$. Thus suppose that the condition [\[condition:prop1\]](#condition:prop1){reference-type="ref" reference="condition:prop1"} holds. Set $$(\widetilde{N} := \mathcal{P} (N)) \subset \overline{\mathcal{M}} ^{1} _{1,1} ( J ^{\lambda}, A _{\beta }, {\beta}),$$ which is an open compact set. By Theorem [Theorem 1](#thm:GWFullerMain){reference-type="ref" reference="thm:GWFullerMain"}, and by the assumption that $i (N, R _{\lambda }, \beta ) \neq 0$ $$GW ^{1} _{1,1} (N, J ^{\lambda}, A _{\beta }, \beta) \neq 0.$$ The first part of the conclusion then follows by Proposition [Proposition 1](#thm:nearbyGW){reference-type="ref" reference="thm:nearbyGW"}. We now verify the second part. Suppose that $M$ has dimension 4. Let $U$ be an $\epsilon$-neighborhood of $(\lambda, \alpha, J ^{\lambda } )$, for $\epsilon$ as given in the first part, and let $(\lambda', \alpha', J) \in U$. Suppose that $u \in \overline{\mathcal{M}} ^{1} _{1,1} ( J, \beta )$. Let $\underline{u}$ be a simple $J$-holomorphic curve covered by $u$, (see for instance  [@cite_McDuffSalamonJholomorphiccurvesandsymplectictopology Section 2.5]. For convenience, we now recall the adjunction inequality. **Theorem 1** (McDuff-Micallef-White [@cite_MicallefWhite], [@cite_McDuffPositivity]). *Let $(M, J)$ be an almost complex 4-manifold and let $A \in H_2(M)$ be a homology class that is represented by a simple J-holomorphic curve $u: \Sigma \to M$. Let $\delta (u)$ denote the number of self-intersections of $u$, then $$2\delta (u) - \chi (\Sigma) \leq A\cdot A -c _{1} (A),$$ with equality if and only if $u$ is an immersion with only transverse self-intersections.* In our case $A= A _{\beta }$ so that $c _{1} (A) =0$ and $A \cdot A = 0$. If ${u}$ is not normal its normalization is of the form $\widetilde{{u}}: \mathbb{CP} ^{1} \to M$ with at least one self intersection and with $0 = [\widetilde{{u}} ] \in H _{2} (M)$, but this contradicts positivity of intersections. So $u$ and hence $\underline{u}$ are normal. Moreover, the domain $\Sigma'$ of $\underline {u}$ satisfies: $\chi (\Sigma')= \chi (T ^{2}) = 0$, so that $\delta (\underline{u})=0$, and the above inequality is an equality. In particular $\underline{u}$ is an embedding, which of course implies our claim. ◻ *Proof of Theorem [Theorem 1](#thm:C0Weinstein){reference-type="ref" reference="thm:C0Weinstein"}.* Let $$U \ni (\omega _{0}:= d_{\alpha} {\lambda}, J _{0}:= J ^{\lambda})$$ be a set of pairs $(\omega, J)$ satisfying the following: - $\omega$ is a first kind $\mathop{\mathrm{lcs}}$ structure. - For each $(\omega, J) \in U$, $J$ is $\omega$-compatible and admissible. - Let $\epsilon$ be chosen as in the first part of Proposition [Proposition 1](#thm:holomorphicSeifert){reference-type="ref" reference="thm:holomorphicSeifert"}. 
Then each $(\omega, J) \in U$ is $\epsilon$-close to $(\omega _{0}, J _{0})$, (as in Definition [Definition 1](#def:deltaclose){reference-type="ref" reference="def:deltaclose"}). To prove the theorem we need to construct a map $E: V \to \mathcal{J} (M)$, where $V$ is some neighborhood of $\omega _{0}$ in the space $(\mathcal{F} (M), d _{\infty })$ (see Definition [Definition 1](#def:LM){reference-type="ref" reference="def:LM"}) and where $$\forall \omega \in V: (\omega, E (\omega)) \in {U}.$$ As then Proposition [Proposition 1](#thm:holomorphicSeifert){reference-type="ref" reference="thm:holomorphicSeifert"} tells us that for each $\omega \in V$, there is a class $A$, $E (\omega)$-holomorphic, elliptic curve $u$ in $M$. Using Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"} we would then conclude that there is an elliptic Reeb 2-curve $u$ in $(M, \omega)$. If $M$ has dimension 4 then in addition $u$ may be assumed to be normal and embedded. If $\omega$ is integral, by Proposition [Proposition 1](#thm:holomorphicSeifert){reference-type="ref" reference="thm:holomorphicSeifert"}, $u$ may be assumed to be charge 1. And so we will be done. Define a metric $\rho _{0}$ measuring the distance between subspaces $W _{1}, W _{2}$, of same dimension, of an inner product space $(T,g)$ as follows. $$\rho _{0} (W _{1}, W _{2} ) := |P _{W _{1} } - P _{W _{2} } |,$$ for $|\cdot|$ the $g$-operator norm, and $P _{W _{i} }$ $g$-projection operators onto $W _{i}$. Let $\delta>0$ be given. Suppose that $\omega=d ^{\alpha'} \lambda'$ is a first kind $\mathop{\mathrm{lcs}}$ structure $\delta$-close to $\omega _{0}$ for the metric $d _{\infty}$. Then $\mathcal{V} _{\lambda'}, \xi _{\lambda'}$ are smooth distributions by the assumption that $(\alpha', \lambda')$ is a $\mathop{\mathrm{lcs}}$ structure of the first kind and $TM = \mathcal{V} _{\lambda'} \oplus \xi _{\lambda'}$. Moreover, $$\rho _{\infty} (\mathcal{V} _{\lambda'}, \mathcal{V} _{\lambda}) < \epsilon _{\delta}$$ and $$\rho _{\infty} (\xi _{\lambda'}, \xi _{\lambda}) < \epsilon _{\delta}$$ where $\epsilon _{\delta} \to 0$ as $\delta \to 0$, and where $\rho _{\infty}$ is the $C ^{\infty }$ analogue of the metric $\rho _{0}$, for the family of subspaces of the family of inner product spaces $(T _{p} M,g)$. Then choosing $\delta$ to be suitably be small, for each $p \in M$ we have an isomorphism $$\phi (p): T _{p} M \to T _{p} M,$$ $\phi _{p}:= P _{1} \oplus P _{2}$, for $P _{1}: \mathcal{V}_ {\lambda _{0} } (p) \to \mathcal{V} _{\lambda'} (p)$, $P _{2}: \xi _{\lambda _{0}} (p) \to \xi _{\lambda'} (p)$ the $g$-projection operators. Define $E (\omega) (p):= \phi (p)_*J _{0}$. Then clearly, if $\delta$ was chosen to be sufficiently small, if we take $V$ to be the $\delta$-ball in $(\mathcal{F} (M), d _{\infty })$ centered at $\omega _{0}$, then it has the needed property. ◻ **Definition 1**. *Let $\alpha$ be a scale integral closed 1-form on a closed smooth manifold $M$. Let $0 \neq c \in \mathbb{\mathbb{R} ^{}}$ be such that $c \alpha$ is integral. A ***classifying map*** $p: M \to S ^{1}$ of $\alpha$ is a smooth map s.t. $c \alpha = p ^{*} d\theta$. A map $p$ with these properties is of course not unique.* **Lemma 1**. 
*Let $u: \Sigma \to M$ be a Reeb 2-curve in a closed, scale integral, first kind lcs manifold $(M, \lambda, \alpha )$, then its normalization $\widetilde{u}: \widetilde{\Sigma} \to M$ is a Reeb 2-curve.* *Proof.* By Lemma [Lemma 1](#lemma:alphaclass){reference-type="ref" reference="lemma:alphaclass"} we have a surjective classifying map $p: M \to S ^{1}$ of $\alpha$. Note that the fibers $M _{t}$ of $p$, for all $t \in S ^{1}$, are contact with contact form $\lambda _{t} = \lambda| _{C _{t}}$, as $0 \neq \omega ^{n} = \alpha \wedge \lambda \wedge d \lambda ^{n-1}$ and $c \cdot \alpha = 0$ on $M _{t}$, where $c$ is as in the definition of $p$. Let $\widetilde{u}: \widetilde{\Sigma } \to M$ be the normalization of $u$. Suppose it is not a Reeb 2-curve, which in this case, by definitions, just means that $0 = [\widetilde{u} ^{*} \alpha] \in H ^{1} (\widetilde{\Sigma }, \mathbb{R} ^{} )$. Since $0 \neq [{u} ^{*} \alpha] \in H ^{1} ({\Sigma }, \mathbb{R} ^{} )$, some node $z _{0}$ of $\Sigma$ lies on closed loop $o: S ^{1} \to \Sigma$ with $\langle [o], [u ^{*}\alpha ] \rangle \neq 0$. Let $q _{\Sigma }: \widetilde{\Sigma } \to \Sigma$ be the quotient map as previously appearing. In this case, we may find a smooth embedding $\eta: D ^{2} \to \widetilde{\Sigma}$, s.t. $q _{\Sigma } \circ \eta (D ^{2})| _{\partial D ^{2}}$ is a component of a regular fiber $C _{t}$, of the classifying map $p': {\Sigma} \to S ^{1}$ of ${u} ^{*} \alpha$. See Figure [1](#figure:disk){reference-type="ref" reference="figure:disk"}, $\eta (D ^{2})$ is a certain disk in $\widetilde{\Sigma}$, whose interior contains an element of $\phi ^{-1}(z _{0})$. ![The figure for $\Sigma$. The gray shaded area is the image $q _{\Sigma} \circ \eta (D ^{2})$. The red shaded curve is the image of the closed loop $o$ as above.](disk2.pdf){#figure:disk width="3.0in"} Then analogously to the proof of Proposition [Proposition 1](#prop:abstractmomentmap){reference-type="ref" reference="prop:abstractmomentmap"} $\widetilde{u} \circ \eta| _{\partial D ^{2}}$ is a (unparametrized) $\lambda _{t}$-Reeb orbit in $M _{t}$. (The classifying maps can be arranged, such that $u(C _{t}) \subset M _{t}$.) And in particular $\int _{\partial D} \widetilde{u} ^{*} \lambda \neq 0$. Now $u$ (not $\widetilde{u}$) is a Reeb 2-curve, and the first condition of this implies that $\int _{D} d\widetilde{u} ^{*} \lambda =0$, since $\ker d \lambda$ on $M$ is spanned by $X _{\lambda }, X _{\alpha }$. So we have a contradiction to Stokes theorem. Thus, $\widetilde{u}$ must be a Reeb 2-curve. ◻ *Proof of Theorem [Theorem 1](#thm:basic0){reference-type="ref" reference="thm:basic0"}.* Let $(C, \lambda)$ and $\phi$ be as in the hypothesis. Let $\{\phi _{t}\}$, $t \in [0,1]$ be a smooth family of strict contactomorphisms $\phi _{0} = id$, $\phi _{1} = \phi$. This gives a smooth fibration $\widetilde{M} \to [0,1]$, with fiber over $t \in [0,1]$: $M _{\phi _{t},1}$, which is moreover endowed with the first kind lcs structure $(\lambda _{\phi _{t}}, \alpha )$, where this is the "mapping torus structure" as above. Let $tr: \widetilde{M} \to (C \times S ^{1}) \times [0,1]$ be a smooth trivialization, restricting to the identity $C \times S ^{1} \to C \times S ^{1}$ over $0$. Pushing forward by the bundle map $tr$, the above mentioned family of lcs structures, we get a smooth family $\{(\lambda _{t}, \alpha) \}$, $t \in [0,1]$, of first kind integral lcs structures on $C \times S ^{1}$, with $(\lambda _{0}, \alpha) = (\lambda, \alpha )$ the standard lcs-fication of $\lambda$. 
Fix a family $\{J ^{\lambda _{t}} \}$ of almost complex structures on $C \times S ^{1}$ with each $J ^{\lambda _{t}}$ admissible with respect $(\lambda _{t}, \alpha)$. Let $N \subset \mathcal{O} (R ^{\lambda }, \beta )$ be an open compact set satisfying $i (N, R ^{\lambda }, \beta ) \neq 0$. The embedding $emb$ from part [\[part4_propTopEmbedding\]](#part4_propTopEmbedding){reference-type="ref" reference="part4_propTopEmbedding"} of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"} induces a proper embedding $$\widetilde{emb}: \widetilde{\mathcal{M} } = \overline{ \mathcal{M}} ^{1} _{1,1} (\{J ^{\lambda _{t}}\}, A _{\beta}, \beta ) \to \mathcal{O} (R ^{\lambda}, \beta ) \times [0,1],$$ defined by $\widetilde{emb} (u,t) = (emb (u),t)$. Set $N _{0} = \mathcal{P} (N) \subset \overline{ \mathcal{M}} ^{1} _{1,1} (J ^{\lambda _{0}}, A _{\beta}, \beta ) \subset \widetilde{\mathcal{M} }$. So that by construction $\widetilde{emb} (N _{0}) = N \times \{0\}$. Set $$\widetilde{N} = \widetilde{emb} ^{-1}(\widetilde{emb}(\widetilde{\mathcal{M} }) \cap (N \times [0,1])),$$ then this is an open and compact subset of $\widetilde{\mathcal{M} }$. And by construction $\widetilde{N} \cap \overline{\mathcal{M}} ^{1} _{1,1} (J ^{\lambda _{0}}, A _{\beta}, \beta) = N _{0}$. Set $N _{1} = \widetilde{N} \cap \overline{\mathcal{M}} ^{1} _{1,1} (J ^{\lambda _{1}}, A _{\beta}, \beta )$. It follows by Theorem [Theorem 1](#thm:GWFullerMain){reference-type="ref" reference="thm:GWFullerMain"} that $$GW _{1,1} ^{1} (N _{0}, J ^{\lambda _{0}}, A _{\beta}, \beta ) ([\overline {M} _{1, 1}] \otimes [C \times S ^{1} ]) = i (N, R ^{\lambda }, \beta ) \neq 0.$$ Then applying Lemma [Lemma 1](#prop:invariance1){reference-type="ref" reference="prop:invariance1"} we get that $$GW _{1,1} ^{1} (N _{1}, J ^{\lambda _{1}}, A _{\beta}, \beta) ([\overline {M} _{1, 1}] \otimes [C \times S ^{1} ]) \neq 0.$$ By part two of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"} there is a charge 1 generalized Reeb torus $u _{o}$ in $M$. In particular, $\mathop{\mathrm{\mathrm{image}}}\phi (o) = \mathop{\mathrm{\mathrm{image}}}(o)$. Also by construction, $o \in N$ and so we are done. ◻ *Proof of Theorem [Theorem 1](#thm:infinitetype){reference-type="ref" reference="thm:infinitetype"}.* If $R ^{\lambda }$ is finite type and $i (R ^{\lambda }, \beta ) \neq 0$ then the theorem follows immediately by Theorem [Theorem 1](#thm:basic0){reference-type="ref" reference="thm:basic0"}. We leave the full definition of infinite type vector fields to the reference  [@cite_SavelyevFuller Definition 2.5]. We have that $R ^{\lambda}$ is infinite type in class $\beta$. WLOG assume that it is positive infinite type. In particular, we may find a perturbation $X ^{a}$ of $R ^{\lambda }$ together with a homotopy $X _{t}$, $t \in [0,1]$, s.t.: 1. $X _{0} = X ^{a}$, $X _{1} = R ^{\lambda }$. 2. $\mathcal{O} (X ^{a},a, {\beta}) = \{o \in \mathcal{O} (X ^{a}, {\beta}) \,|\, A (o) \leq a\}$ is discrete, where $A$ is the period map as in the introduction. 3. Each $o \in \mathcal{O} (X ^{a},a, {\beta})$ is contained in a non-branching open compact subset $K _{o} \subset \mathcal{O} (\{X ^{a} _{t}\}, \beta)$. Where the latter means that: 1. ${K} _{o} \cap \mathcal{O} (X _{1}, \beta)$ is connected. 2. For $o, o' \in \mathcal{O} (X ^{a}, a, \beta)$ $K _{o} = K _{o'}$ or $K _{o} \cap K _{o'} = \emptyset$. 3. 
$\mathcal{O} (X _{1}, \beta) = \cup _{o} ({K} _{o} \cap \mathcal{O}(X _{1}, \beta)).$ 4. $(K _{o} \cap \mathcal{O} (X ^{a}, \beta )) \subset \mathcal{O} (X ^{a}, a, \beta )$. [\[condition:non-branching\]]{#condition:non-branching label="condition:non-branching"} 4. $\sum _{o \in \mathcal{O} (X ^{a},a, {\beta})} i (o) > 0$. Set $N := \mathcal{O} (X ^{a},a,\beta)$, then by the condition [\[condition:non-branching\]](#condition:non-branching){reference-type="ref" reference="condition:non-branching"} and by the non-branching property, $$N = \sqcup _{i \in \{1, \ldots,n\}} \mathcal{O} (X ^{a},\beta) \cap K _{o _{i}},$$ disjoint union for some $o _{1}, \ldots o _{n} \in \mathcal{O} (X ^{a},a,\beta)$. Set $\widetilde{N} := \cup _{i \in \{1, \ldots,n\}} K _{o _{i}}$. Then $\widetilde{N} \cap \mathcal{O} (X ^{a}, \beta ) = N$. Set $N _{1} := \widetilde{N} \cap \mathcal{O} (R ^{\lambda}, \beta)$, then this is an open compact subset of $\mathcal{O} (X _{1}, \beta )$. Finally, using invariance of the Fuller index we get that $i (N _{1}, R ^{\lambda}, {\beta }) \neq 0.$ Then the result follows by Theorem [Theorem 1](#thm:basic0){reference-type="ref" reference="thm:basic0"}. ◻ *Proof of Theorem [Theorem 1](#thm:basis-1){reference-type="ref" reference="thm:basis-1"}.* Suppose that $\lambda$ is Morse-Bott and we have an open compact component $N \subset \mathcal{O} (R ^{\lambda}, \beta)$, with $\chi (N) \neq 0$ so that $i (N, R ^{\lambda}, \beta) \neq 0$,  [@cite_SavelyevFuller Section 2.1.1]. If $\lambda'$ is sufficiently $C ^{1}$ nearby to $\lambda$ then we may find open compact $N' \in \mathcal{O} (R ^{\lambda'})$ s.t. $$i (N', R ^{\lambda'}, \beta) = i (N, R ^{\lambda }, \beta ) \neq 0.$$ See  [@cite_SavelyevFuller Lemma 1.6]. Then the result follows by Theorem [Theorem 1](#thm:basic0){reference-type="ref" reference="thm:basic0"}. ◻ *Proof of Theorem [Theorem 1](#theorem_pertubHyperbolic){reference-type="ref" reference="theorem_pertubHyperbolic"}.* Under the assumptions on the Euler characteristic by  [@cite_SavelyevGromovFuller Theorem 1.10] for any $\beta$-taut $g$ on $X$: $$\operatorname {GWF}(g, id, \beta, 1) = F (g, \beta ) = \chi ^{S ^{1}}(L _{\beta }X) \neq 0.$$ Then for $\phi$ as in the hypothesis, let $\{\phi _{t}\} _{t \in [0,1]}$ be a homotopy between $id$ and $\phi$, with each $\phi _{t}$ an isometry of $g$. Then $\{(g, \phi _{t})\} _{t}$ clearly furnishes an $\mathcal{E}$-homotopy between $(g, id)$ and $(g, \phi)$. So that $\operatorname {GWF}(g, \phi, \beta, 1) \neq 0$, by Theorem [Theorem 1](#thm:generalizationGWF){reference-type="ref" reference="thm:generalizationGWF"}. ◻ *Proof of Corollary [Corollary 1](#corollary_hyperbolic){reference-type="ref" reference="corollary_hyperbolic"}.* If $X$ admits a complete metric of negative curvature, then by the proof of [@cite_SavelyevGromovFuller Theorem 1.14] we may find a not a power class $\beta' \in \pi _{1} ^{inc} (X)$, s.t. $\beta$ has a representative which is a $k$-cover of a representative of $\beta '$. Also, $$\begin{aligned} \chi ^{S ^{1}}(L _{\beta '} X) & = \chi(L _{\beta '}X/S ^{1}) \quad \text{as the action is free by the condition that $\beta' $ is not a power} \\ & = 1,\end{aligned}$$ where the last equality is immediate from the hypothesis that $X$ admits a complete metric of negative curvature and classical Morse theory, see proof of  [@cite_SavelyevGromovFuller Theorem 1.10]. Now, if $g$ is any other complete metric on $X$ with non-positive curvature then in particular it is $\beta '$-taut. 
Then by Theorem [Theorem 1](#theorem_pertubHyperbolic){reference-type="ref" reference="theorem_pertubHyperbolic"}, $\phi$ has a charge one, class $\beta'$ fixed geodesic string. It readily follows that $\phi$ also has a class $\beta$, charge one fixed geodesic string. ◻ *Proof of Theorem [Theorem 1](#thm:finitetypeRiemannian){reference-type="ref" reference="thm:finitetypeRiemannian"}.* Let $\{(g _{t}, \phi _{t})\}$ be an $\mathcal{E}$-homotopy as in the hypothesis. And let $o \in \mathcal{O} (g _{0}, \beta)$ be the unique and non-degenerate element. As $\phi _{0,*} ^{n} (\beta) = \beta$, it is immediate that $o$ is a charge $n$ fixed geodesic string of $\phi _{0}$. Moreover, the moduli space $\overline{\mathcal{M}} ^n _{1,1} (J ^{\lambda _{\phi _{0}}}, A ^{1} _{\widetilde{\beta}}, \widetilde{\beta} )$ consists of one point, and it is regular by the non-degeneracy of $o$. So we conclude that $\operatorname {GWF}(g_0, \phi _{0}, \beta, n) \neq 0$ (it is $\frac{1}{k}$, where $k$ is the multiplicity of $o$). Then by Theorem [Theorem 1](#thm:generalizationGWF){reference-type="ref" reference="thm:generalizationGWF"} we get that $\operatorname {GWF}(g_1, \phi _{1}, \beta, n) \neq 0$ and so we are done. ◻ *Proof of Corollary [Corollary 1](#cor:Riemannian){reference-type="ref" reference="cor:Riemannian"}.* Let $g$ and $\beta \in \pi _{1} ^{inc} (X)$ be as in the hypothesis. By assumption the corresponding unit cotangent bundle $(C, \lambda )$ is of definite type in class $\widetilde{\beta}$. If $\phi$ is some isometry of $g$ homotopic through isometries to the identity, then the induced strict contactomorphism $\widetilde{\phi}$ is homotopic through strict contactomorphisms to the identity, and so by Theorem [Theorem 1](#thm:infinitetype){reference-type="ref" reference="thm:infinitetype"} $\widetilde{\phi }$ has a class $\widetilde{\beta }$ fixed Reeb string. It follows that $\phi$ has a class $\beta$ fixed geodesic string. ◻ *Proof of Theorem [Theorem 1](#thm:basic1){reference-type="ref" reference="thm:basic1"}.* Let $g _{0}$ be a complete metric on $X$ with a unique and non-degenerate class $\beta \in \pi _{1} ^{inc}(X)$ geodesic string. Let $\lambda _{0} = \lambda _{g _{0}}$ be the Liouville 1-form on the $g _{0}$-unit cotangent bundle $C$ of $X$. Let $g$ be as in the hypothesis and let $\lambda _{1}$ be the Liouville 1-form on $C$ corresponding to $g$. Let $\{\lambda _{t}\}$, $t \in [0,1]$, be a smooth homotopy between $\lambda _{0}$ and $\lambda _{1}$. We may in addition assume that this homotopy is constant near the end points. We then get a family $\{(\lambda _{t}, \alpha)\}$ of first kind integral lcs structures on $C \times S ^{1}$. Now let $\{\widetilde{\phi} _{t}\}$, $t \in [0,1]$, be a smooth homotopy of strict contactomorphisms of $(C, \lambda _{1})$ with $\widetilde{\phi} _{0} = id$, corresponding to a homotopy $\{\phi _{t}\}$ of isometries of $(X,g)$, with $\phi _{0} = id$. We may suppose that $\{\widetilde{\phi} _{t}\}$ is constant near the end points. As in the Proof of Theorem [Theorem 1](#thm:basic0){reference-type="ref" reference="thm:basic0"}, this gives a smooth fibration $\widetilde{M} \to [0,1]$. And as before we get a family $\{(\lambda'' _{t}, \alpha) \}$, $t \in [0,1]$, of first kind integral lcs structures on $C \times S ^{1}$, s.t.: - $(\lambda'' _{0}, \alpha)$ is the lcs-fication of $\lambda _{0}$. - For each $t$, $(\lambda'' _{t}, \alpha)$ is isomorphic to the mapping torus structure $(\lambda _{\widetilde{\phi} _{t}}, \alpha )$.
Let $\{(\lambda' _{t}, \alpha )\}$ be the concatenation of the families $\{(\lambda _{t}, \alpha ) \}$, $\{(\lambda'' _{t}, \alpha ) \}$, i.e.: $$\lambda' _{t} = \begin{cases} \lambda _{2t}, &\text{ if } t \in [0, \frac{1}{2}]\\ \lambda'' _{2t-1}, &\text{ if } t \in [\frac{1}{2}, 1].\\ \end{cases}$$ Let $\{J' _{t}\}$, $t \in [0,1]$, be a family of almost complex structures on $M=C \times S ^{1}$ s.t. $J' _{t}$ is $(\lambda' _{t}, \alpha )$-admissible for each $t$. Let $\widetilde{\beta }$ be the lift of $\beta \in \pi _{1} ^{inc} (X)$, as in Section [4.3](#sec:Definition of GWF){reference-type="ref" reference="sec:Definition of GWF"}. Now, by construction and by Theorem [Theorem 1](#thm:GWFullerMain){reference-type="ref" reference="thm:GWFullerMain"} $$GW ^{1} _{1,1} (J' _{0}, A _{\widetilde{\beta}}, \beta ) = i (R ^{\lambda _{g _{0}}}, \widetilde{\beta}) \neq 0,$$ with the last inequality due to the assumption that there is a unique non-degenerate $g _{0}$-geodesic string in class $\beta$. Then by Lemma [Lemma 1](#thm:welldefined){reference-type="ref" reference="thm:welldefined"}, we get that one of the following holds: - $(\lambda _{\widetilde{\phi}}, \alpha )$ has an elliptic charge 1, class $A _{\widetilde{\beta }}$ Reeb 2-curve $u$, and hence by part one of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"}, $\widetilde{\phi}$ has a charge 1 fixed Reeb string in class $\widetilde{\beta }$, and so $\phi$ has a charge 1 fixed geodesic in class $\beta$. - The family $\{J ' _{t}\}$, $t \in [0,1]$ has an essential right holomorphic sky catastrophe of charge 1, class $A _{\widetilde{\beta}}$ curves. Suppose that the latter holds. By the admissibility condition and by Lemma [Lemma 1](#lemma:Reeb){reference-type="ref" reference="lemma:Reeb"}, if $$(u, t) \in \overline{\mathcal{M}} ^{1} _{1,1} (\{J' _{t} \}, A _{\widetilde{\beta}}, \beta )$$ then $u$ is a charge 1 elliptic Reeb 2-curve. Let $\{X _{t}\}$, $t \in [0,1]$, be the smooth family of vector fields satisfying $X _{t} = R ^{\lambda _{2t}}$ for $t \in [0, \frac{1}{2}]$, $X _{t} = R ^{\lambda _{1}}$ for $t \in [\frac{1}{2}, 1]$. Analogously to part [\[part4_propTopEmbedding\]](#part4_propTopEmbedding){reference-type="ref" reference="part4_propTopEmbedding"} of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"}, we have a proper topological embedding $$emb: \overline{\mathcal{M}} ^{1} _{1,1} (\{J' _{t} \}, A _{\widetilde{\beta}}, \beta) \to \mathcal{O} (\{X _{t}\} , \widetilde{\beta}).$$ It follows, by the previous hypothesis, that the family $\{X _{t}\} _{t \in [0,1] }$ has a sky catastrophe in class $\widetilde{\beta}$. In addition, this sky catastrophe must be essential, as otherwise the original holomorphic sky catastrophe would not be essential. ◻ *Proof of Theorem [Theorem 1](#thm:generalizationGWF){reference-type="ref" reference="thm:generalizationGWF"}.* We have to prove invariance of the counts. Let $\{(g _{t}, \phi _{t})\} _{t \in [0,1]}$ be an $\mathcal{E}$-homotopy. Let $(C, \lambda _{t} = \lambda _{g _{t}})$ be the $g _{t}$-unit cotangent bundle of $X$, and $\lambda _{g _{t}}$ the Liouville 1-form. (Fixing an implicit identification of the unit cotangent bundles with a fixed manifold $C$.) Let $\widetilde{\beta} \in \pi _{1} (C)$ be the lift of a class $\beta \in \pi _{1} ^{inc} (X)$ as in Section [4.3](#sec:Definition of GWF){reference-type="ref" reference="sec:Definition of GWF"}.
Denote by $(M _{t}, \lambda _{t}, \alpha _{t})$ the mapping torus of the $\widetilde{\phi } _{t}$-action on $(C, \lambda _{t})$, where $\widetilde{\phi} _{t}$ is the strict contactomorphism induced by the isometry $\phi _{t}$. Finally, let $\{J _{t}\} _{t \in [0,1]}$ be a smooth family with each $J _{t}$ a $(\lambda _{t}, \alpha _{t})$-admissible almost complex structure on $M _{t}$. By the tautness assumption, the family $\{R ^{\lambda _{t}}\}$ has no sky catastrophe in the class $\widetilde{\beta }$. Then by the third part of Proposition [Proposition 1](#prop:Topembedding){reference-type="ref" reference="prop:Topembedding"}, $\{J _{t}\}$ has no sky catastrophe in class $A _{\widetilde{\beta} }$, and so by Theorem [Lemma 1](#thm:welldefined){reference-type="ref" reference="thm:welldefined"} we have $$\operatorname {GWF} (g _{0}, \phi _{0}, \beta, n) = \operatorname {GWF} (g _{1}, \phi _{1}, \beta, n)$$ and we are done. ◻ *Proof of Theorem [Theorem 1](#thm:Reeb1curves){reference-type="ref" reference="thm:Reeb1curves"}.* Suppose that $u: \Sigma \to M$ is an immersed Reeb 2-curve; we then show that $M$ also has a Reeb 1-curve. Let $\widetilde{u}: \widetilde{\Sigma} \to M$ be the normalization of $u$, so that $\widetilde{u}$ is an immersion. We have a pair of transverse 1-distributions $D _{1}= \widetilde{u} ^{*}\mathbb{R} \langle X _{\alpha } \rangle$, $D _{2}=\widetilde{u} ^{*} \mathbb{R} \langle X _{\lambda } \rangle$ on $\widetilde{\Sigma }$. We may then find an embedded path $\gamma: [0,1] \to \widetilde{\Sigma }$, tangent to $D _{1}$ s.t. $\lambda (\gamma' (t)) >0$, $\forall t \in [0,1]$, and s.t. $\gamma (0)$ and $\gamma (1)$ are on a leaf of $D _{2}$. It is then simple to obtain from this a Reeb 1-curve $o$, by joining the end points of $\gamma$ by an embedded path tangent to $D _{2}$, and perturbing, see Figure [2](#figure:smooth){reference-type="ref" reference="figure:smooth"}. ![The green shaded path is $\gamma$, the indicated orientation is given by $u ^{*}\lambda$, the $D _{1}$ foliation is shaded in black, the $D _{2}$ foliation is shaded in blue. The purple segment is part of the loop ${o}: S ^{1} \to \Sigma$, which is smooth and satisfies $\lambda ({o}' (t) )>0$ for all $t$. ](perturbation.pdf){#figure:smooth width="1.4in"} This proves the first part of the theorem. To prove the second part, suppose that $u: \Sigma \to M$ is an immersed elliptic Reeb 2-curve. Suppose that $u$ is not normal. Let $\widetilde{u}: \widetilde{\Sigma } \to M$ be its normalization. Then $\widetilde{\Sigma}$ has a genus 0 component $\mathcal{S}$, so that $\widetilde{u}: \mathcal{S} \simeq \mathbb{CP} ^{1} \to M$ is immersed. The distribution $D _{1} = \widetilde{u} ^{*}\mathbb{R} \langle X _{\alpha} \rangle$, as appearing above, is then a $\widetilde{u} ^{*}\lambda$-oriented 1-dimensional distribution on $\mathbb{CP} ^{1}$, which is impossible. ◻ # Fuller index {#appendix:Fuller} Let $X$ be a complete vector field without zeros on a smooth manifold $M$. Set $$S (X, \beta) = \{o \in L _{\beta} M \,|\, \exists p \in (0, \infty), \, \text{ $o: \mathbb{R}/\mathbb{Z} \to M $ is a periodic orbit of $p X $} \},$$ where $L _{\beta} M$ denotes the free homotopy class $\beta$ component of the free loop space $$LM = \{o: S ^{1} \to M \,|\, \text{$o$ is smooth} \} ,$$ and where we recall that $S ^{1} = \mathbb{R}/\mathbb{Z}$. The above $p$ is uniquely determined and we denote it by $p (o)$, called the period of $o$.
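As a quick illustration of the definitions just given (a toy example of our own, not one used elsewhere in the paper), take $M = S ^{1} = \mathbb{R}/\mathbb{Z}$, let $X = \frac{\partial}{\partial \theta}$ be the unit rotation field, and let $\beta$ be the class of a loop winding $k > 0$ times around $S ^{1}$. A loop $o \in L _{\beta} M$ belongs to $S (X, \beta)$ exactly when it is of the form $$o (t) = kt + \theta _{0} \mod 1, \qquad \theta _{0} \in [0,1),$$ since then $o' (t) = k$, so that $o$ is a periodic orbit of $pX$ with $p = p (o) = k$; each such loop is the $k$-fold cover of the simple loop $t \mapsto t + \theta _{0}$.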
There is a natural $S ^{1}$ reparametrization action on $S (X, \beta )$: $t \cdot o$ is the loop $t \cdot o(\tau) = o (t + \tau)$. The elements of $\mathcal{O} (X, \beta ) := S (X, \beta )/S ^{1}$ will be called *orbit strings*. Slightly abusing notation we just write $o$ for the equivalence class of $o$. The multiplicity $m (o)$ of an orbit string is the ratio $p (o) /l$ for $l>0$ the period of a simple orbit covered by $o$. We want a kind of fixed point index of an open compact subset $N \subset \mathcal{O} (X, \beta )$, which counts orbit strings $o$ with certain weights. Assume for simplicity that $N \subset \mathcal{O} (X)$ is finite. (Otherwise, for a general open compact $N \subset \mathcal{O} (X, \beta )$, we need to perturb.) Then to such an $(N,X, \beta)$ Fuller associates an index: $$i (N,X, \beta) = \sum _{o \in N}\frac{1}{m (o)} i (o),$$ where $i (o)$ is the fixed point index of the time $p (o)$ return map of the flow of $X$ with respect to a local surface of section in $M$ transverse to the image of $o$. Fuller then shows that $i (N, X, \beta )$ has the following invariance property. For a continuous homotopy $\{X _{t} \}$, $t \in [0,1]$ set $$S ( \{X _{t} \}, \beta) = \{(o, t) \in L _{\beta} M \times [0,1] \,|\, \text{$o \in S (X _{t})$}\}.$$ And given a continuous homotopy $\{X _{t} \}$, $X _{0} =X$, $t \in [0,1]$, suppose that $\widetilde{N}$ is an open compact subset of $S (\{X _{t} \}, \beta ) / S ^{1}$, such that $$\widetilde{N} \cap \left (L _{\beta }M \times \{0\} \right) / S ^{1} =N.$$ Then if $$N_1 = \widetilde{N} \cap \left (L _{\beta }M \times \{1\} \right) / S ^{1}$$ we have $$i (N, X, \beta ) = i (N_1, X_1, \beta).$$ In the case where $X$ is the $R ^{\lambda}$-Reeb vector field on a contact manifold $(C ^{2n+1} , \lambda )$, and if $o$ is non-degenerate, we have: $$\label{eq:conleyzenhnder} i (o) = \mathop{\mathrm{sign}}\mathop{\mathrm{Det}}(\mathop{\mathrm{Id}}| _{\xi (x)} - F _{p (o), *} ^{\lambda}| _{\xi (x)} ) = (-1)^{CZ (o)-n},$$ where $F _{p (o), *} ^{\lambda}$ is the differential at $x$ of the time $p (o)$ flow map of $R ^{\lambda}$, and where $CZ (o)$ is the Conley-Zehnder index, see [@cite_RobbinSalamonTheMaslovindexforpaths.]. There is also an extended Fuller index $i (X, \beta) \in \mathbb{Q} \sqcup \{\pm \infty\}$, for certain $X$ having definite type. This is constructed in [@cite_SavelyevFuller], and is conceptually analogous to the extended Gromov-Witten invariant described in this paper. The following is a version of the definition of sky catastrophes first appearing in Savelyev [@cite_SavelyevFuller], generalizing a notion commonly called a "blue sky catastrophe", see Shilnikov-Turaev [@cite_ShilnikovTuraevBlueSky]. **Definition 1**. *Let $\{X _{t} \}$, $t \in [0,1]$ be a continuous family of non-zero, complete smooth vector fields on a closed manifold $M$, and let $\beta \in \pi _{1} ^{inc} (X)$. And let $S (\{X _{t}\})$ be as above. We say that $\{X _{t}\}$ has a ***right sky catastrophe in class $\beta$***, if there is an element $$y \in S (X _{0}, \beta) \subset S (\{X _{t}\}, \beta)$$ so that there is no open compact subset of $S (\{X _{t}\}, \beta)$ containing $y$. We say that $\{X _{t}\}$ has a ***left sky catastrophe in class $\beta$***, if there is an element $$y \in S (X _{1}, \beta) \subset S (\{X _{t}\}, \beta)$$ so that there is no open compact subset of $S (\{X _{t}\}, \beta)$ containing $y$. 
We say that $\{X _{t}\}$ has a ***sky catastrophe in class $\beta$***, if it has either left or right sky catastrophe in class $\beta$.* **Definition 1**. *In the case that $X _{t} = R ^{\lambda _{t}}$ for $\{\lambda _{t}\}$, $t \in [0,1]$ smoothly varying, we say that a sky catastrophe of Reeb vector fields $\{X _{t}\}$ is ***essential*** if the condition of the definition above holds for any family $\{X' _{t} = R ^{\lambda' _{t}}\}$ satisfying $X' _{0} = X _{0}$ and $X' _{1} = X _{1}$, and such that $\{\lambda' _{t}\}$ is smooth.* # Remark on multiplicity {#sec:GromovWittenprelims} This is a small note on how one deals with curves having non-trivial isotropy groups, in the virtual fundamental class technology. We primarily need this for the proof of Theorem [Theorem 1](#thm:GWFullerMain){reference-type="ref" reference="thm:GWFullerMain"}. Given a closed oriented orbifold $X$, with an orbibundle $E$ over $X$ Fukaya-Ono [@cite_FukayaOnoArnoldandGW] show how to construct using multi-sections its rational homology Euler class, which when $X$ represents the moduli space of some stable curves, is the virtual moduli cycle $[X] ^{vir}$. When this is in degree 0, the corresponding Gromov-Witten invariant is $\int _{[X] ^{vir} } 1.$ However, they assume that their orbifolds are effective. This assumption is not really necessary for the purpose of construction of the Euler class but is convenient for other technical reasons. A different approach to the virtual fundamental class which emphasizes branched manifolds is used by McDuff-Wehrheim, see for example McDuff [@cite_McDuffNotesOnKuranishi], [@cite_McDuffConstructingVirtualFundamentalClass] which does not have the effectivity assumption, a similar use of branched manifolds appears in [@cite_CieliebakRieraSalamonEquivariantmoduli]. In the case of a non-effective orbibundle $E \to X$ McDuff [@cite_McDuffGroupidsMultisections], constructs a homological Euler class $e (E)$ using multi-sections, which extends the construction [@cite_FukayaOnoArnoldandGW]. McDuff shows that this class $e (E)$ is Poincare dual to the completely formally natural cohomological Euler class of $E$, constructed by other authors. In other words there is a natural notion of a homological Euler class of a possibly non-effective orbibundle. We shall assume the following black box property of the virtual fundamental class technology. **Axiom 1**. *Suppose that the moduli space of stable maps is cleanly cut out, which means that it is represented by a (non-effective) orbifold $X$ with an orbifold obstruction bundle $E$, that is the bundle over $X$ of cokernel spaces of the linearized CR operators. Then the virtual fundamental class $[X]^ {vir}$ coincides with $e (E)$.* Given this axiom it does not matter to us which virtual moduli cycle technique we use. It is satisfied automatically by the construction of McDuff-Wehrheim, (at the moment in genus 0, but surely extending). It can be shown to be satisfied in the approach of John Pardon [@cite_PardonAlgebraicApproach]. And it is satisfied by the construction of Fukaya-Oh-Ono-Ohta [@cite_FOOOTechnicaldetails], the latter is communicated to me by Kaoru Ono. When $X$ is 0-dimensional this does follow immediately by the construction in [@cite_FukayaOnoArnoldandGW], taking any effective Kuranishi neighborhood at the isolated points of $X$, (this actually suffices for our paper.) As a special case most relevant to us here, suppose we have a moduli space of elliptic curves in $X$, which is regular with expected dimension 0. 
Then its underlying space is a collection of oriented points. However, as some curves are multiply covered, and so have isotropy groups, we must treat this as a non-effective 0-dimensional oriented orbifold. The contribution of each curve $[u]$ to the Gromov-Witten invariant $\int _{[X] ^{vir} } 1$ is $\frac{\pm 1}{[\Gamma ([u])]}$, where $[\Gamma ([u])]$ is the order of the isotropy group $\Gamma ([u])$ of $[u]$; in the McDuff-Wehrheim setup this is explained in [@cite_McDuffNotesOnKuranishi Section 5]. In the setup of Fukaya-Ono [@cite_FukayaOnoArnoldandGW] we may readily calculate to get the same thing, taking any effective Kuranishi neighborhood at the isolated points of $X$. # Acknowledgements I thank Yong-Geun Oh for discussions on related topics, and for an invitation to IBS-CGP, Korea. I thank Egor Shelukhin and Helmut Hofer for an invitation to the IAS, where I had a chance to present and discuss early versions of the work. I also thank Kevin Sackel, Kaoru Ono, Baptiste Chantraine, Emmy Murphy, Viktor Ginzburg, Yael Karshon, John Pardon, Spencer Cattalani and Dusa McDuff for related discussions. [^1]: *In the torsion case the infinite type condition is more complicated, see [@cite_SavelyevFuller].* [^2]: The name charge is inspired by the notion of charge in Oh-Wang [@cite_OhWang], in the context of contact instantons. However, the respective notions are not obviously related. [^3]: *It is in fact an equivalence of the corresponding topological action groupoids, but we do not need this explicitly.*
--- abstract: | We answer a question of Pakhomov by showing that there is a consistent, c.e. theory $T$ such that no theory which is definitionally equivalent to $T$ has a computable model. A key tool in our proof is the model-theoretic notion of mutual algebraicity. address: - Department of Mathematics, University of California, Los Angeles - Department of Philosophy, New York University author: - Patrick Lutz - James Walsh bibliography: - bibliography.bib title: A theory satisfying a strong version of Tennenbaum's theorem --- # Introduction Tennenbaum's theorem states that there is no computable nonstandard model of $\mathsf{PA}$ [@tennenbaum1959non]. Often, this result is viewed as giving us one reason the standard model of $\mathsf{PA}$ is special---it is the only computable model---but another perspective is possible: Tennenbaum's theorem is a source of examples of consistent, c.e. theories with no computable models. To explain this perspective, let us say that a theory $T$ has the **Tennenbaum property** if $T$ has no computable models. Tennenbaum's theorem implies that there are many consistent extensions of $\mathsf{PA}$ with the Tennenbaum property. For example, the theory $\mathsf{PA}+ \lnot \mathop{\mathrm{\mathsf{Con}}}(\mathsf{PA})$ (which asserts that $\mathsf{PA}$ is inconsistent) is a consistent extension of $\mathsf{PA}$ with only nonstandard models and hence, by Tennenbaum's theorem, with no computable models. Furthermore, a slight extension of the proof of Tennenbaum's theorem can be used to prove that many other theories have the Tennenbaum property. For example, it is not hard to show that $\mathsf{ZFC}$ has no computable models [@hamkins2010computable] and likewise for much weaker theories like $\mathsf{Z}_2$ (the theory of full second order arithmetic), or even $\mathsf{RCA}_0$ (at least if "model" is understood in the usual sense of first order logic). More generally, it seems to be an empirical fact that every natural theory which interprets even a small fragment of second order arithmetic has the Tennenbaum property. Recently, however, Pakhomov showed that this phenomenon is somewhat fragile: it depends on the specific language in which the theory is presented [@pakhomov2022escape]. To make this idea precise, Pakhomov used the notion of **definitional equivalence** (also known as **synonymy**), a strong form of bi-interpretability introduced by de Bouvére in [@debouvere1965logical]. Roughly speaking, theories $T$ and $T'$ in languages $\mathcal{L}$ and $\mathcal{L}'$ are definitionally equivalent if they can be viewed as two instances of a single theory, but with different choices of which notions to take as primitive. **Theorem 1** (Pakhomov). *There is a theory $T$ which is definitionally equivalent to $\mathsf{PA}$ such that any consistent, c.e. extension of $T$ has a computable model.* This theorem implies that every consistent, c.e. extension of $\mathsf{PA}$ is definitionally equivalent to a theory with a computable model. Moreover, the techniques used by Pakhomov are not restricted to extensions of $\mathsf{PA}$. For example, Pakhomov notes that they are sufficient to prove that $\mathsf{ZF}$ is definitionally equivalent to a theory with a computable model. More generally, Pakhomov's techniques seem sufficient to prove that each example we have given so far of a theory with the Tennenbaum property is definitionally equivalent to a theory without the Tennenbaum property. 
In light of these observations, Pakhomov asked how general this phenomenon is [@pakhomov2022escape]. In particular, does it hold for every consistent, c.e. theory? **Question 1** (Pakhomov). *Is every consistent, c.e. theory definitionally equivalent to a theory with a computable model?* The purpose of this paper is to answer this question in the negative. In other words, we give an example of a consistent, c.e. theory which satisfies a strong version of the Tennenbaum property. **Theorem 2**. *There is a consistent, c.e. theory $T$ such that no theory which is definitionally equivalent to $T$ has a computable model.* To prove this theorem, we construct a theory $T$ which has no computable models but is also model-theoretically tame. A key observation in our proof is that if a theory $T$ is sufficiently tame then any theory definitionally equivalent to $T$ must also be fairly tame. In particular, if $T$ is sufficiently tame then every theory which is definitionally equivalent to $T$ satisfies a weak form of quantifier elimination. Here's why this is useful. Suppose that $M$ is a model of a theory $T'$ which is definitionally equivalent to $T$. It follows from the definition of "definitionally equivalent" that within $M$, we can define a model of $T$. If $T'$ had quantifier elimination then we could assume that this definition is quantifier free and thus $M$ can compute a model of $T$. Since $T$ has no computable models, this would imply that $M$ itself is not computable. Unfortunately, we can't quite follow this strategy: we don't know that $T'$ has full quantifier elimination, but only a weak version of it. However, using this weak form of quantifier elimination we can show that $M$ can computably approximate a model of $T$ and, by picking $T$ so that its models cannot even be computably approximated, this is enough to show that $M$ is not computable. The specific form of model-theoretic tameness that we use in our proof is known as **mutual algebraicity**, first defined in [@goncharov2003trivial] and subsequently developed by Laskowski and collaborators (e.g. [@laskowski2013mutually; @laskowski2015characterizing; @braunfeld2022mutual]). The main result we need from the theory of mutual algebraicity is a quantifier elimination theorem proved by Laskowski in [@laskowski2009elementary]. Our use of tame model theory in this paper is somewhat reminiscent of techniques used by Emil Jeřábek in the paper [@jerabek2020recursive]. In that paper, Jeřábek separated two conditions which imply that a theory $T$ is essentially undecidable: the condition that $T$ can represent all partial recursive functions and the condition that $T$ interprets Robinson's $R$. To accomplish this, he used the fact that the model completion of the empty theory in an arbitrary language is model-theoretically tame---in particular, it eliminates $\exists^\infty$ and is $\mathsf{NSOP}$. He ended the paper by asking whether there are more connections between formal arithmetic and tame model theory. We believe our results constitute a partial answer to his question. ## Acknowledgements {#acknowledgements .unnumbered} We thank Peter Cholak, Nick Ramsey, Charlie McCoy, Andrew Marks, Forte Shinko, Mariana Vicaria and Kyle Gannon for helpful conversations, James Hanson for pointing us to the literature on mutual algebraicity and Chris Laskowski for help in understanding that literature. 
# Preliminaries on definitional equivalence and mutual algebraicity In this section we will give the formal definition of definitional equivalence, fix some notation related to it and review the facts about mutual algebraicity that we need. ## Definitional equivalence {#sec:de} To define definitional equivalence, we first need the concept of a definitional extension of a theory. **Definition 3**. Given a theory $T$ in language $\mathcal{L}$, a **definitional extension** of $T$ is a theory $T' \supseteq T$ in a language $\mathcal{L}' \supseteq \mathcal{L}$ such that 1. **$T'$ is conservative over $T$:** for each sentence $\varphi\in \mathcal{L}$, $T'\vdash\varphi$ if and only if $T \vdash\varphi$. 2. **The symbols in $\mathcal{L}'$ are definable in $\mathcal{L}$:** for each constant symbol $c$, relation symbol $R$ and function symbol $f$ in $\mathcal{L}'$, there is a corresponding formula $\varphi_c$, $\varphi_R$, or $\varphi_f$ in $\mathcal{L}$ such that $$\begin{aligned} T' &\vdash\forall x\, (x = c \leftrightarrow \varphi_c(x))\\ T' &\vdash\forall \overline{x}\, (R(\overline{x}) \leftrightarrow \varphi_R(\overline{x}))\\ T' &\vdash\forall \overline{x}, y\, (f(\overline{x}) = y \leftrightarrow \varphi_f(\overline{x}, y)). \end{aligned}$$ **Definition 4**. Theories $T$ and $T'$ in disjoint signatures are **definitionally equivalent** if there is a single theory which is a definitional extension of both $T$ and $T'$. More generally, theories $T$ and $T'$ are definitionally equivalent if they are definitionally equivalent after renaming their symbols to make their signatures disjoint. However, there is no loss of generality in ignoring theories with overlapping signatures, so we will do that for the rest of this paper. **Example 5**. The theories of the integers with plus and with minus---i.e. $T = \mathop{\mathrm{Th}}(\mathbb{Z}, +)$ and $T' = \mathop{\mathrm{Th}}(\mathbb{Z}, -)$---are definitionally equivalent because each of plus and minus can be defined in terms of the other. More formally, the theory $T'' = \mathop{\mathrm{Th}}(\mathbb{Z}, +, -)$ is a definitional extension of both $T$ and $T'$. In contrast, it is well-known that the theories $\mathop{\mathrm{Th}}(\mathbb{Z}, +)$ and $\mathop{\mathrm{Th}}(\mathbb{Z}, \times)$ are *not* definitionally equivalent, because neither plus nor times can be defined in terms of the other. A key point about definitional equivalence is that if $T$ and $T'$ are definitionally equivalent theories in languages $\mathcal{L}$ and $\mathcal{L}'$, respectively, then every model of $T$ can be viewed as a model of $T'$ and vice-versa. Likewise, every $\mathcal{L}$-formula can be viewed as an $\mathcal{L}'$-formula and vice-versa. It will be useful to us to make this idea precise and to fix some notation. **Translating models.** Suppose that $T$ and $T'$ are definitionally equivalent theories in languages $\mathcal{L}$ and $\mathcal{L}'$, respectively. Let $T''$ be an $\mathcal{L}''$-theory witnessing the definitional equivalence of $T$ and $T'$---i.e. $\mathcal{L}\cup \mathcal{L}' \subseteq \mathcal{L}''$ and $T''$ is a definitional extension of both $T$ and $T'$. Suppose that $R$ is a relation symbol in $\mathcal{L}$. Since $T''$ is a definitional extension of $T'$, there is an $\mathcal{L}'$-formula, $\varphi_R$, which $T''$ proves is equivalent to $R$. We will refer to this formula as the **$\mathcal{L}'$-definition of $R$**. Similarly, every other constant, relation and function symbol of $\mathcal{L}$ has an $\mathcal{L}'$-definition and vice-versa. 
Given a model $M$ of $T'$, we can turn $M$ into an $\mathcal{L}$-structure by interpreting each constant, relation and function symbol of $\mathcal{L}$ according to its $\mathcal{L}'$-definition.[^1] Furthermore, it is not hard to check that the resulting $\mathcal{L}$-structure is always a model of $T$. We will denote the model produced in this way by $M^{\mathcal{L}' \to \mathcal{L}}$. Likewise, if $M$ is a model of $T$ then we can transform it into a model of $T'$, which we will denote $M^{\mathcal{L}\to \mathcal{L}'}$. It is important to note that for any model $M$ of $T'$, $M$ and $M^{\mathcal{L}' \to \mathcal{L}}$ have the same underlying set and $(M^{\mathcal{L}' \to \mathcal{L}})^{\mathcal{L}\to \mathcal{L}'} = M$. Thus we may think of $M$ and $M^{\mathcal{L}' \to \mathcal{L}}$ as two different ways of viewing the same structure. **Translating formulas.** A similar transformation is possible for formulas. Suppose $\varphi$ is an $\mathcal{L}$-formula. Then by replacing each constant, relation and function symbol in $\varphi$ by the corresponding $\mathcal{L}'$-definition, we obtain an $\mathcal{L}'$-formula, which we will denote $\varphi^{\mathcal{L}\to \mathcal{L}'}$. Likewise we can transform any $\mathcal{L}'$-formula $\varphi$ into an $\mathcal{L}$-formula, which we will denote $\varphi^{\mathcal{L}' \to \mathcal{L}}$. **Example 6**. Suppose $f$ is a unary function symbol in $\mathcal{L}$, $\varphi_f(x, y)$ is its $\mathcal{L}'$-definition and $\psi$ is the $\mathcal{L}$-formula $\forall x, y\, (f(f(x)) = f(y))$. Then $\psi^{\mathcal{L}\to \mathcal{L}'}$ is the formula $\forall x, y\, (\exists z_1, z_2, z_3\, (\varphi_f(x, z_1) \land \varphi_f(z_1, z_2) \land \varphi_f(y, z_3) \land z_2 = z_3))$. It is not hard to check that our translations of models and of formulas are compatible with each other. In particular, if $M$ is a model of $T'$, $\varphi$ is an $\mathcal{L}'$-formula and $\overline{a}$ is a tuple in $M$ then $M \vDash\varphi(\overline{a})$ if and only if $M^{\mathcal{L}' \to \mathcal{L}} \vDash\varphi^{\mathcal{L}' \to \mathcal{L}}(\overline{a})$. Note that this implies that $M$ and $M^{\mathcal{L}' \to \mathcal{L}}$ have the same algebra of definable sets. ## Mutual algebraicity As mentioned in the introduction, we will use the model-theoretic notion of mutual algebraicity. The key definitions are of mutually algebraic formulas and mutually algebraic structures. **Definition 7**. Given a structure $M$, a formula $\varphi(\overline{x})$ with parameters from $M$ is **mutually algebraic over $M$** if there is some number $k \in \mathbb{N}$ such that for every nontrivial partition $\overline{x} = \overline{x}_0 \cup \overline{x}_1$ and every tuple $\overline{a}_0$ in $M$, there are at most $k$ tuples $\overline{a}_1$ such that $M \vDash\varphi(\overline{a}_0, \overline{a}_1)$. Note that mutual algebraicity depends on what the free variables of the formula are. In particular, it is not preserved by adding dummy variables. Also note that any formula with at most one free variable is mutually algebraic. **Example 8**. If $M$ is the structure $(\mathbb{N}, +)$ then the formula $x = y + 5$ is mutually algebraic over $M$ because if we fix $x$ there is at most one $y$ satisfying the formula, and vice-versa. On the other hand, the formula $x = y + z + 5$ is not mutually algebraic over $M$ because when we fix $z$ there are infinitely many pairs $x, y$ which satisfy the formula. **Definition 9**. 
A structure $M$ is **mutually algebraic** if every formula is equivalent to a Boolean combination of formulas which are mutually algebraic over $M$ (and which are allowed to have parameters from $M$). **Example 10**. The structure $(\mathbb{N}, \mathop{\mathrm{Succ}})$ of natural numbers with the successor function has quantifier elimination and thus every formula is equivalent to a Boolean combination of atomic formulas. It is easy to check that the atomic formulas are all mutually algebraic and thus that the structure itself is. In contrast, it is possible to show that the structure $(\mathbb{Q}, \leq)$, despite having quantifier elimination, is not mutually algebraic (for example, one can show that the formula $x \leq y$ is not equivalent to a Boolean combination of mutually algebraic formulas). ## Quantifier elimination for mutually algebraic structures {#sec:qe_ma} We will make use of two quantifier elimination theorems for mutually algebraic structures. The first is due to Laskowski. **Theorem 11** ([@laskowski2009elementary], Theorem 4.2). *If $M$ is mutually algebraic then every formula $\varphi(\overline{x})$ is equivalent over $M$ to a Boolean combination of formulas of the form $\exists \overline{z}\, \theta(\overline{y}, \overline{z})$ (which may have parameters from $M$) where $\theta$ is quantifier free and mutually algebraic over $M$ and $\overline{y}$ is a subset of $\overline{x}$.* **Theorem 12**. *If $M$ is a mutually algebraic structure and $\varphi(\overline{x})$ is mutually algebraic over $M$, then there is a quantifier free formula $\theta(\overline{x}, \overline{y})$ (which may have parameters from $M$) such that $\exists \overline{y}\, \theta(\overline{x}, \overline{y})$ is mutually algebraic over $M$ and $M \vDash\varphi(\overline{x}) \to \exists\overline{y}\,\theta(\overline{x}, \overline{y})$.* The second theorem is a relatively straightforward consequence of the first one, together with some facts from the theory of mutual algebraicity. Our goal for the rest of this section is to give the proof. To do so, we will need a lemma about mutually algebraic formulas, due to Laskowski and Terry. **Lemma 13** ([@laskowski2020uniformly], Lemma A.1). *Suppose $M$ is a structure and $$\varphi(\overline{x}) := \bigwedge_i \alpha_i(\overline{x}_i) \land \bigwedge_j\lnot\beta_j(\overline{x}_j)$$ is a formula such that* 1. *$\varphi(\overline{x})$ is mutually algebraic over $M$.* 2. *$\{\overline{a} \mid M \vDash\varphi(\overline{a})\}$ contains an infinite set of pairwise disjoint tuples.* 3. *Each $\alpha_i(\overline{x}_i)$ and $\beta_j(\overline{x}_j)$ is mutually algebraic over $M$.* *Then $\alpha(\overline{x}) = \bigwedge_i \alpha_i(\overline{x}_i)$ is mutually algebraic over $M$.* Actually we need a slightly stronger version of this lemma. In particular, we need to replace the second condition on $\varphi$ with the apparently weaker assumption that $\{\overline{a} \mid M \vDash\varphi(\overline{a})\}$ is infinite. The next lemma, also due to Laskowski, tells us that since $\varphi$ is mutually algebraic, the two conditions are actually equivalent. **Lemma 14** ([@laskowski2009elementary], Lemma 3.1). *Suppose $M$ is a structure and $\varphi(\overline{x})$ is a formula which is mutually algebraic over $M$. If $\{\overline{a} \mid M \vDash\varphi(\overline{a})\}$ is infinite then it contains an infinite set of pairwise disjoint tuples.* We can now prove Theorem [Theorem 12](#thm:qe1){reference-type="ref" reference="thm:qe1"}. 
*Proof of Theorem [Theorem 12](#thm:qe1){reference-type="ref" reference="thm:qe1"}.* By applying Laskowski's theorem and writing the resulting formula in disjunctive normal form, we get $$M \vDash\varphi(\overline{x}) \leftrightarrow \bigvee_{i} \left(\bigwedge_j \alpha_{i,j}(\overline{x}_{i, j}) \land \bigwedge_k \lnot \beta_{i, k}(\overline{x}_{i, k})\right)$$ where each $\alpha_{i, j}(\overline{x}_{i, j})$ and each $\beta_{i, k}(\overline{x}_{i, k})$ is existential and mutually algebraic over $M$. For each $i$, define $$\begin{aligned} \varphi_i(\overline{x}) &:= \bigwedge_j \alpha_{i,j}(\overline{x}_{i, j}) \land \bigwedge_k \lnot \beta_{i, k}(\overline{x}_{i, k})\\ \alpha_i(\overline{x}) &:= \bigwedge_j \alpha_{i,j}(\overline{x}_{i, j})\\ A_i &= \{\overline{a} \mid M \vDash\varphi_i(\overline{a})\}\\\end{aligned}$$ Note that since $\varphi(\overline{x})$ is mutually algebraic and $M \vDash\varphi_i(\overline{x}) \to \varphi(\overline{x})$, $\varphi_i(\overline{x})$ is also mutually algebraic. Thus by Lemma [Lemma 13](#lemma:ma1){reference-type="ref" reference="lemma:ma1"} above (or rather, its slightly strengthened version), we have that either $A_i$ is finite or $\alpha_i(\overline{x})$ is mutually algebraic. In the former case, define $\gamma_i(\overline{x}) := \bigvee_{\overline{a} \in A_i}\overline{x} = \overline{a}$ and in the latter case, define $\gamma_i(\overline{x}) := \alpha_i(\overline{x})$. In either case, note that $\gamma_i$ is existential and mutually algebraic over $M$ and that $M \vDash\varphi_i(\overline{x}) \to \gamma_i(\overline{x})$. Since $\varphi(\overline{x})$ and $\bigvee_{i} \varphi_i(\overline{x})$ are equivalent in $M$, this gives us $$M \vDash\varphi(\overline{x}) \to \bigvee_i \gamma_i(\overline{x}).$$ Since each $\gamma_i(\overline{x})$ is mutually algebraic, so is their disjunction. Pulling the existential quantifiers to the front, we have the desired formula. ◻ # The counterexample {#sec:counterexample} In this section we will describe the theory we use to answer Pakhomov's question. In order to do so, we need to fix a computable infinite binary tree $R$ with the property that none of its paths can be computably approximated. More precisely, say that a sequence $x \in 2^\omega$ is **guessable** if there is an algorithm which, for each number $n$, enumerates a list of at most $O(n^2)$ strings of length $n$, one of which is $x\mathop{\mathrm{\upharpoonright}}n$. We need a computable infinite binary tree $R$, none of whose paths are guessable. It is not hard to directly construct such a tree $R$ but we can also simply pick a computable infinite binary tree whose paths are all Martin-Löf random. Such a tree is known to exist[^2] and it is also easy to check that Martin-Löf random sequences are not guessable. See the book *Algorithmic Randomness and Complexity* by Downey and Hirschfeldt for more details about Martin-Löf randomness [@downey2010algorithmic]. Essentially, our theory is the simplest theory all of whose models code an infinite path through $R$. We now give a more precise description. **The language.** Let $\mathcal{L}$ be the language whose signature consists of: 1. A constant symbol, $0$. 2. Two unary function symbols, $S$ and $P$. 3. A unary relation symbol, $A$. Also, although it is not officially part of the language $\mathcal{L}$, we will often use the following notation. Given any $n \in \mathbb{N}$, - $\underline{n}$ denotes the $\mathcal{L}$-term $S^{n}(0)$, e.g. $\underline{3}$ denotes $S(S(S(0)))$. 
- $\underline{-n}$ denotes the $\mathcal{L}$-term $P^{n}(0)$, e.g. $\underline{-3}$ denotes $P(P(P(0)))$. - $x + \underline{n}$ denotes the $\mathcal{L}$-term $S^n(x)$ and $x + \underline{-n}$ denotes the $\mathcal{L}$-term $P^{n}(x)$. We will also sometimes use $x - \underline{n}$ to denote $x + \underline{-n}$. - We will often refer to $S$ as "successor" and $P$ as "predecessor." **The theory.** Fix a computable infinite binary tree $R$, none of whose infinite paths are guessable, and let $T$ be the $\mathcal{L}$-theory consisting of: 1. The theory of the integers with $0$, successor and predecessor, i.e. $\mathop{\mathrm{Th}}(\mathbb{Z}, 0, \mathop{\mathrm{Succ}}, \mathop{\mathrm{Pred}})$. 2. Axioms stating that $A$ (restricted to the elements $\underline{0}, \underline{1}, \underline{2}, \ldots$) describes a path through $R$. More precisely, for each $n \in \mathbb{N}$, $T$ contains the sentence $$\bigvee_{\sigma \in R_n} \bigg[\bigg(\bigwedge_{\sigma(i) = 0}\lnot A(\underline{i})\bigg)\land \bigg(\bigwedge_{\sigma(i) = 1} A(\underline{i}) \bigg)\bigg]$$ where $R_n$ denotes the set of strings in $R$ of length $n$. The second set of axioms ensures that from any model of $T$, we can computably recover a path through the tree $R$. We will now explain how this works. Given a sentence $\varphi$ and a model $M$, let's use the notation $\llbracket \varphi \rrbracket^M$ to denote **the truth-value of $\varphi$ in $M$**. We will often identify sequences of truth values with binary sequences by thinking of "true" as $1$ and "false" as $0$. Now suppose that $M$ is a model of $T$. We claim that the sequence $\llbracket A(\underline{0}) \rrbracket^{M}, \llbracket A(\underline{1}) \rrbracket^{M}, \llbracket A(\underline{2}) \rrbracket^{M}, \ldots$ is an infinite path through $R$. The point is that the axioms above guarantee that, for each $n \in \mathbb{N}$, the length $n$ initial segment of this sequence agrees with some *specific* length $n$ string in $R$. Since all of its proper initial segments are in $R$, the sequence $\llbracket A(\underline{0}) \rrbracket^{M}, \llbracket A(\underline{1}) \rrbracket^{M}, \llbracket A(\underline{2}) \rrbracket^{M}, \ldots$ is indeed a path through $R$. Note that this immediately implies that no model of $T$ is computable---any such model computes an infinite path through $R$, but no such path is computable. In spite of this, we will see later that models of $T$ have quantifier elimination and so are very well-behaved in model-theoretic terms. **Models of $T$.** It will help to have a clear picture of the structure of models of $T$ and to fix some terminology for later. Since $T$ includes the theory of the integers with successor and predecessor, $T$ proves that $S$ and $P$ are injective functions with no cycles and that they are inverses. Thus any model of $T$ consists of a disjoint union of one or more $\mathbb{Z}$-chains, with $S$ moving forward along each chain, $P$ moving backward and the constant $0$ sitting in the middle of one of the chains. There is also a well-defined notion of distance: the distance between two elements of the same chain is simply the number of steps apart they are on the chain (and the distance between elements of two different chains is $\infty$). Furthermore, each element of each chain is labelled with a truth value (corresponding to whether the predicate $A$ holds of that element or not) and thus each chain gives rise to a bi-infinite binary sequence. 
If we start at the element $0$ and move forward along its chain, then, as we saw above, the binary sequence we get is guaranteed to be a path through the tree $R$. Given a model $M$ of $T$ and elements $a, b \in M$, we will use the following terminology. - The **signed distance** from $a$ to $b$ is the unique integer $k$ (if it exists) such that $b = a + \underline{k}$. If no such $k$ exists then the signed distance is $\infty$. - The **distance between** $a$ and $b$ is the absolute value of the signed distance (where the absolute value of $\infty$ is $\infty$). - For $k \in \mathbb{N}$, the **$k$-neighborhood** of $a$ is the set $\{a - \underline{k}, a - \underline{(k - 1)}, \ldots, a + \underline{k}\}$. Note that if the signed distance from $a$ to $b$ is $k < \infty$, the signed distance from $b$ to $a$ is $-k$. **Remark 15**. By choosing a somewhat more complicated theory, it is possible to simplify some of the proofs later in this paper. In particular, we can add axioms to $T$ which state that $A$ behaves *generically*, in the sense that every finite pattern of values of $A$ occurs somewhere. More precisely, for every finite binary string $\sigma \in 2^{<\omega}$ we add the axiom $$\exists x\bigg[\bigg(\bigwedge_{\sigma(i) = 0} \lnot A(x + \underline{i})\bigg) \land \bigg(\bigwedge_{\sigma(i) = 1}A(x + \underline{i})\bigg)\bigg].$$ Equivalently, we can replace $T$ with its model completion. Making this change would allow us to simplify the proofs of Propositions [Proposition 16](#prop:indiscernability){reference-type="ref" reference="prop:indiscernability"} and [Proposition 19](#prop:satisfaction1){reference-type="ref" reference="prop:satisfaction1"} and Lemma [Lemma 22](#lemma:ma_close){reference-type="ref" reference="lemma:ma_close"}. # Proof of the main theorem Let $\mathcal{L}$ and $T$ be the language and theory described in the previous section. In this section, we will prove that no theory which is definitionally equivalent to $T$ has a computable model. Since $T$ is a consistent, c.e. theory, this is enough to prove Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"}. In order to prove this, let's fix a language $\mathcal{L}'$ and an $\mathcal{L}'$-theory $T'$ which is definitionally equivalent to $T$. Note that since the language $\mathcal{L}$ has finite signature, we may assume that $\mathcal{L}'$ does as well.[^3] Now fix a model $M$ of $T'$. Our goal is to prove that $M$ is not computable.[^4] Before beginning, it will be useful to fix a few conventions. First, recall from section [2.1](#sec:de){reference-type="ref" reference="sec:de"} that $M$ gives rise to a model $M^{\mathcal{L}' \to \mathcal{L}}$ of $T$ which has the same underlying set and the same algebra of definable sets as $M$. We will often abuse notation slightly and use $M$ to refer to both $M$ itself and $M^{\mathcal{L}' \to \mathcal{L}}$. For example, if $\varphi$ is an $\mathcal{L}$-formula, we will use $M \vDash\varphi(\overline{a})$ to mean $M^{\mathcal{L}' \to \mathcal{L}} \vDash\varphi(\overline{a})$. Also, we will say things like "$b$ is the successor of $a$" to mean $M^{\mathcal{L}' \to \mathcal{L}} \vDash b = S(a)$. Second, unless explicitly stated otherwise, we assume that formulas do not contain parameters. 
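Before turning to the proof strategy, it may help to see the earlier observation, that any model of $T$ computes a path through $R$, spelled out as an algorithm. The following Python sketch is purely illustrative and is not part of the formal argument: the names `succ`, `holds_A` and `zero` are hypothetical stand-ins for oracle access to the interpretations of $S$, $A$ and $0$ in a model of $T$.

```python
def read_path_prefix(succ, holds_A, zero, n):
    """Read off the first n bits of the path through R coded by a model of T.

    succ(a) is assumed to return S(a), holds_A(a) the truth value of A(a),
    and zero the interpretation of the constant 0; these stand in for
    oracle access to the model. By the axioms of T, the returned list of
    bits is a length-n element of the tree R.
    """
    bits = []
    a = zero
    for _ in range(n):
        bits.append(1 if holds_A(a) else 0)
        a = succ(a)
    return bits
```

In particular, if some model of $T$ were computable then `succ` and `holds_A` would be computable functions, and the sketch would compute arbitrarily long initial segments of a path through $R$, contradicting the choice of $R$.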
## Proof strategy To prove that $M$ is not computable, we will show that the sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$ is guessable (in the sense of section [3](#sec:counterexample){reference-type="ref" reference="sec:counterexample"}) relative to an oracle for $M$. Since the axioms of $T$ ensure that this sequence is a path through the tree $R$, and hence not guessable, this is enough to show that $M$ is not computable. To show that the sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$ is guessable from an oracle for $M$, we will first prove that $M$ is mutually algebraic. To do so, we will essentially show that models of $T$ have quantifier elimination and use this to prove that $M^{\mathcal{L}' \to \mathcal{L}}$ is mutually algebraic. The mutual algebraicity of $M$ itself follows because mutual algebraicity is preserved under definitional equivalence (because mutual algebraicity depends only on the algebra of definable sets, which is itself preserved under definitional equivalence). Once we know that $M$ is mutually algebraic, we can apply the quantifier elimination results of section [2.3](#sec:qe_ma){reference-type="ref" reference="sec:qe_ma"} to infer that $S$ and $A$ are close to being quantifier-free definable in $M$. In particular, the formula $S(x) = y$ is mutually algebraic and so, by Theorem [Theorem 12](#thm:qe1){reference-type="ref" reference="thm:qe1"}, there is an existential $\mathcal{L}'$-formula $\psi_S(x, y)$ such that $\psi_S$ is mutually algebraic and $M \vDash S(x) = y \to \psi_S(x, y)$. We can think of $\psi_S$ as a multi-valued function which takes each element $a \in M$ to the set of elements $b \in M$ such that $M \vDash\psi_S(a, b)$. Since $\psi_S$ is an existential formula, the graph of this multi-valued function is computably enumerable from an oracle for $M$. Since $\psi_S$ is mutually algebraic, there are only finitely many elements in the image of each $a$. And since $M \vDash S(x) = y \to \psi_S(x, y)$, the successor of $a$ is always in the image of $a$. Putting this all together, we can think of this multi-valued function as giving us, for each $a \in M$, a finite list of guesses for $S(a)$ which is computably enumerable relative to an oracle for $M$. To finish the proof, we can leverage our ability to enumerate a finite list of guesses for the successor of each element to enumerate a short list of guesses for each initial segment of the sequence $\llbracket A(\underline{0}) \rrbracket^M, \llbracket A(\underline{1}) \rrbracket^M, \ldots$. To accomplish this, we will have to make use of our understanding of the structure of definable subsets of $M$, which we first develop in order to prove mutual algebraicity. ## Model-theoretic tameness of $M$ Our first goal is to prove that $M$ is mutually algebraic. One way to do this is to show that models of $T$ satisfy quantifier elimination and then note that all atomic $\mathcal{L}$-formulas are mutually algebraic over $M^{\mathcal{L}' \to \mathcal{L}}$---this implies that $M^{\mathcal{L}' \to \mathcal{L}}$ is mutually algebraic and hence that $M$ is as well. However, it will be helpful for us later to have a more detailed understanding of the structure of definable subsets of $M$. Thus, instead of just proving quantifier elimination for models of $T$, we will prove a stronger statement, which is essentially a quantitative version of quantifier elimination. 
To explain this stronger statement, let's first consider the meaning of quantifier elimination in models of $T$. By examining the atomic formulas of $\mathcal{L}$, we can see that it means that for every $\mathcal{L}$-formula $\varphi(\overline{x})$ and tuple $\overline{a}$, the truth of $\varphi(\overline{a})$ depends only on which elements of $\overline{a}$ are close to each other (and to $0$), how close they are, and the values of the predicate $A$ in a small neighborhood of each element. In our stronger statement, we will quantify exactly what "close" and "small" mean in this description. We will also extend this to $\mathcal{L}'$-formulas. We will refer to the resulting statement as the **indiscernability principle** for $M$. In order to make all of this precise, we first need to introduce some terminology. **The radius of a formula.** For any $\mathcal{L}$-formula $\varphi$ written in prenex normal form, inductively define the **radius** of $\varphi$, written $\mathop{\mathrm{rad}}(\varphi)$, as follows. 1. If $\varphi$ is quantifier free then $\mathop{\mathrm{rad}}(\varphi)$ is the total number of occurrences of $S$ and $P$ in $\varphi$. 2. If $\varphi$ has the form $\exists x\,\psi$ or $\forall x\, \psi$ then $\mathop{\mathrm{rad}}(\varphi) = 2\cdot\mathop{\mathrm{rad}}(\psi)$. If $\varphi$ is an $\mathcal{L}'$-formula in prenex normal form then we define $\mathop{\mathrm{rad}}(\varphi)$ in a similar way except that we change the case where $\varphi$ is quantifier free to define $\mathop{\mathrm{rad}}(\varphi)$ to be $\mathop{\mathrm{rad}}(\varphi^{\mathcal{L}' \to \mathcal{L}})$ (after first putting $\varphi^{\mathcal{L}' \to \mathcal{L}}$ in prenex normal form). The idea of the radius of a formula is that in the description of quantifier elimination for $M$ above, we should interpret "close" to mean "within distance $\mathop{\mathrm{rad}}(\varphi)$." **The $r$-type of a tuple.** Given a tuple $\overline{a} = (a_1,\ldots,a_n)$ in $M$ and a number $r \in \mathbb{N}$, define: - The **$r$-distance table** of $\overline{a}$ records the signed distances between the coordinates of $\overline{a}$ and between each coordinate of $\overline{a}$ and $0$, treating any distance greater than $r$ as $\infty$. More precisely, it is the function $f \colon \{0,1,\ldots, n\}^2 \to \{-r, -(r - 1), \ldots, r, \infty\}$ such that if the distance between $a_i$ and $a_j$ is at most $r$ then $f(i, j)$ is the signed distance from $a_i$ to $a_j$ and otherwise $f(i, j) = \infty$ (and where we interpret $a_0$ as $0$). - The **$r$-neighborhood type** of any element $a \in M$ is the sequence of truth values $\llbracket A(a - \underline{r})\rrbracket^M, \llbracket A(a - \underline{r - 1})\rrbracket^M, \ldots, \llbracket A(a + \underline{r})\rrbracket^M$. - The **$r$-type** of $\overline{a}$ is the $r$-distance table of $\overline{a}$ together with the sequence recording the $r$-neighborhood type of each coordinate of $\overline{a}$. **The indiscernability principle.** We can now state a formal version of the indiscernability principle described above. **Proposition 16**. *If $\varphi$ is an $\mathcal{L}$-formula in prenex normal form and of radius $r$ and $\overline{a}, \overline{b}$ are tuples in $M$ with the same $r$-type then $M \vDash\varphi(\overline{a})$ if and only if $M \vDash\varphi(\overline{b})$.* *Proof.* By induction on the number of quantifiers in $\varphi$. For quantifier free formulas, this is easy to verify. 
If $\varphi$ has quantifiers then it suffices to assume $\varphi= \exists x\, \psi$ since the case of a universal quantifier is symmetric (i.e. by considering $\lnot\varphi$ instead of $\varphi$ and pushing the negation past the quantifiers to get it into prenex normal form). Also, it's enough to assume $M \vDash\varphi(\overline{a})$ and prove $M \vDash\varphi(\overline{b})$---the other direction also follows by symmetry. So let's assume that $M \vDash\exists x\, \psi(\overline{a}, x)$. Thus there is some $c$ such that $M \vDash\psi(\overline{a}, c)$. We need to find some $d$ such that $M \vDash\psi(\overline{b}, d)$. Note that it is enough to find $d$ such that $\overline{a}c$ and $\overline{b}d$ have the same $r/2$-type, because if this holds then we can apply the induction hypothesis to $\psi$ to get that $M \vDash\psi(\overline{b}, d)$. There are two cases depending on whether $c$ is close to any element of $\overline{a}$ or not. Also to reduce casework, we adopt the convention that $a_0 = b_0 = 0$ (note that this does not change the fact that $\overline{a}$ and $\overline{b}$ have the same $r$-type). **Case 1.** First suppose that $c$ is distance at most $r/2$ from some coordinate of $\overline{a}$. In particular, there is some $i \leq n$ and $-r/2 \leq k \leq r/2$ such that $c = a_i + \underline{k}$. In this case, we can pick $d$ to be close to the corresponding element of $\overline{b}$, i.e. $d = b_i + \underline{k}$. We claim that $\overline{a}c$ and $\overline{b}d$ have the same $r/2$-type. First, we need to check that the $r/2$-distance tables are the same. It suffices to check that for each $j$, either $a_j, c$ and $b_j, d$ have the same signed distance or both have distance greater than $r/2$. Suppose that $a_j = c + \underline{k'}$ for some integer $-r/2 \leq k' \leq r/2$. By substitution, $a_j = (a_i + \underline{k}) + \underline{k'} = a_i + \underline{k + k'}$. Since $|k + k'| \leq r$ and since $\overline{a}, \overline{b}$ have the same $r$-distance table, this implies that $b_j = b_i + \underline{k + k'}$ and hence that $b_j = d + \underline{k'}$. The other cases can be handled similarly. Second, we need to check that the $r/2$-neighborhood type of $c$ is the same as that of $d$. This follows from the fact that the $r/2$-neighborhood of $c$ is contained in the $r$-neighborhood of $a_i$, the $r/2$-neighborhood of $d$ is contained in the $r$-neighborhood of $b_i$ and the $r$-neighborhood types of $a_i$ and $b_i$ are the same. **Case 2.** Now suppose that $c$ is distance more than $r/2$ from every coordinate of $\overline{a}$. It is enough to find some $d$ which has the same $r/2$-neighborhood type as $c$ and which is distance more than $r/2$ from every coordinate of $\overline{b}$. The point is that for such a $d$, it is easy to see that $\overline{a}c$ and $\overline{b}d$ have the same $r/2$-type. We now claim that some such $d$ must exist.[^5] Suppose for contradiction that this is false. Then every element of $M$ with the same $r/2$-neighborhood type as $c$ must be contained in the $r/2$ neighborhood of some element of $\overline{b}$. In particular, this implies that there are a finite number of such elements and they all have the form $b_i + \underline{k}$ for some $i \leq n$ and $-r/2 \leq k \leq r/2$. Suppose there are exactly $m$ such elements and they are equal to $b_{i_1} + \underline{k_1},\ldots,b_{i_m} + \underline{k_m}$ (where for each $j$, $-r/2 \leq k_j \leq r/2$). 
It follows from the fact that $\overline{a}$ and $\overline{b}$ have the same $r$-type that the corresponding elements $a_{i_1} + \underline{k_1}, \ldots, a_{i_m} + \underline{k_m}$ are also all distinct and have the same $r/2$-neighborhood type as $c$. However, since only $m$ elements of $M$ have this $r/2$-neighborhood type, $c$ must be among this list of elements, which contradicts the assumption that $c$ is not within distance $r/2$ of any coordinate of $\overline{a}$. ◻ **Corollary 17**. *Proposition [Proposition 16](#prop:indiscernability){reference-type="ref" reference="prop:indiscernability"} also holds for all $\mathcal{L}'$-formulas in prenex normal form.* *Proof.* Suppose $\varphi$ is an $\mathcal{L}'$-formula of radius $r$ and that $\overline{a}, \overline{b}$ are tuples in $M$ with the same $r$-type. In the case where $\varphi$ is quantifier free, the radius of $\varphi^{\mathcal{L}' \to \mathcal{L}}$ is also $r$, for the trivial reason that the radius of a quantifier-free $\mathcal{L}'$-formula is defined as the radius of its $\mathcal{L}$-translation. Hence, we can apply the indiscernability principle to $\varphi^{\mathcal{L}' \to \mathcal{L}}$ to get $$\begin{aligned} M \vDash\varphi(\overline{a}) &\iff M \vDash\varphi^{\mathcal{L}' \to \mathcal{L}}(\overline{a})\\ &\iff M \vDash\varphi^{\mathcal{L}' \to \mathcal{L}}(\overline{b}) \iff M \vDash\varphi(\overline{b}).\end{aligned}$$ When $\varphi$ has quantifiers, the inductive argument that we gave in the proof of Proposition [Proposition 16](#prop:indiscernability){reference-type="ref" reference="prop:indiscernability"} still works. ◻ **$M$ is mutually algebraic.** For a fixed $r$-type, the assertion that a tuple $\overline{x} = (x_1,\ldots,x_n)$ has that $r$-type is expressible as a Boolean combination of $\mathcal{L}$-formulas of the following forms. 1. $x_i = x_j + \underline{k}$ for some indices $i, j \leq n$ and some $-r \leq k \leq r$. 2. $x_i = \underline{k}$ for some index $i \leq n$ and some $-r \leq k \leq r$. 3. $A(x_i + \underline{k})$ for some index $i \leq n$ and some $-r \leq k \leq r$. It is easy to check that each type of formula listed above is mutually algebraic over $M$ (for the second and third there is actually nothing to check because they both involve only one free variable). Furthermore, for any fixed $r$, there are a finite number of possible $r$-types. Thus the indiscernability principle implies that every $\mathcal{L}$-formula $\varphi$ of radius $r$ is equivalent to a finite disjunction of Boolean combinations of mutually algebraic $\mathcal{L}$-formulas (namely the disjunction, over all $r$-types whose realizations satisfy $\varphi$, of the formulas expressing those $r$-types). This shows that $M$ is mutually algebraic when considered as an $\mathcal{L}$-structure (i.e. that $M^{\mathcal{L}' \to \mathcal{L}}$ is mutually algebraic). However, it is easy to conclude that $M$ is also mutually algebraic when considered as an $\mathcal{L}'$-structure. For a given formula $\varphi$, we know from our reasoning above that $\varphi^{\mathcal{L}' \to \mathcal{L}}$ is equivalent to a Boolean combination of mutually algebraic $\mathcal{L}$-formulas. Next, we can replace each formula in this Boolean combination by its corresponding $\mathcal{L}'$-formula. Since the mutual algebraicity of a formula only depends on the set that it defines, and since this is invariant under translating between $\mathcal{L}$ and $\mathcal{L}'$, we conclude that $\varphi$ is equivalent to a Boolean combination of mutually algebraic $\mathcal{L}'$-formulas. **Remark 18**. 
The reasoning above also shows that $M$ has quantifier elimination when considered as an $\mathcal{L}$-structure (i.e. $M^{\mathcal{L}' \to \mathcal{L}}$ has quantifier elimination). The point is just that a tuple having a certain $r$-type is expressible as a quantifier free $\mathcal{L}$-formula. ## The satisfaction algorithm We will now explain how the indiscernability principle implies that the satisfaction relation for $\mathcal{L}'$-formulas over $M$ is very nearly computable relative to an oracle for $M$. At the end of this subsection, we will explain why this is useful. The main idea (of computing the satisfaction relation) is that to check whether $M \vDash\exists x\, \varphi(\overline{a}, x)$, we don't need to try plugging in every element of $M$ for $x$, just those elements which are close to some coordinate of $\overline{a}$ (or to $0$), plus one element of each possible $\mathop{\mathrm{rad}}(\varphi)$-neighborhood type which is far from all the coordinates of $\overline{a}$. In other words, checking the truth of an existential formula can be reduced to checking the truth of a finite number of atomic formulas. This intuition is formalized by the next proposition, whose proof essentially just consists of this idea, but with a number of messy details in order to make precise the idea of trying all the different $\mathop{\mathrm{rad}}(\varphi)$-neighborhood types which are far from elements of $\overline{a}$. **Proposition 19** (Satisfaction algorithm for existential formulas). *Suppose $\varphi(\overline{x})$ is an existential $\mathcal{L}'$-formula with radius $r$. There is an algorithm which, given a tuple $\overline{a}$ in $M$ and the following data* 1. *an oracle for $M$* 2. *and a finite set $U \subseteq M$,* *tries to check whether $M \vDash\varphi(\overline{a})$. Furthermore, if $U$ contains the $r$-neighborhood of every coordinate of $\overline{a}$ then the output of the algorithm is correct.* *Proof.* Let $\theta(\overline{x}, \overline{y})$ be a quantifier free formula such that $\varphi(\overline{x}) = \exists\overline{y}\, \theta(\overline{x}, \overline{y})$ and let $n = |\overline{x}|$ and $m = |\overline{y}|$ (i.e. the number of free and bound variables in $\varphi$, respectively). Next, fix a finite set $V$ such that for each possible $r$-neighborhood type $p$, $V$ contains at least $(2r + 1)(n + m + 1)$ points of type $p$ (or if fewer than $(2r + 1)(n + m + 1)$ points have $r$-neighborhood type $p$ then $V$ contains every such point).[^6] Also $V$ should contain $0$. Let $V'$ be the set consisting of all elements within distance $r$ of some element of $V$. Note that since $V'$ is finite, we can "hard-code" it into our algorithm. *Algorithm description.* To check if $M \vDash\varphi(\overline{a})$, look at each tuple $\overline{b}$ of elements of $U \cup V'$ and check if $M \vDash\theta(\overline{a}, \overline{b})$. If this occurs for at least one such $\overline{b}$ then output "true." Otherwise, output "false." Note that checking the truth of a quantifier free formula (such as $\theta$) is computable from an oracle for $M$. *Verification.* Let's assume that $U$ contains the $r$-neighborhood of each coordinate of $\overline{a}$ and check that the output of the algorithm is correct. It is obvious that the algorithm has no false positives: if $M \vDash\theta(\overline{a}, \overline{b})$ for some $\overline{b}$ then $M \vDash\varphi(\overline{a})$. 
Thus it suffices to assume that $M \vDash\varphi(\overline{a})$ and show that there is some tuple $\overline{b}$ in $U\cup V'$ such that $M \vDash\theta(\overline{a}, \overline{b})$. To accomplish this, we will pick elements of $\overline{b}$ one at a time and, at each step, ensure that all the elements we have picked so far come from the set $U \cup V'$. More precisely, we will pick elements $b_1,\ldots,b_m$ such that for each $i \leq m$, $$M \vDash\exists y_{i + 1}\ldots \exists y_m \,\theta(\overline{a}, b_1,\ldots,b_i, y_{i + 1},\ldots,y_m)$$ and we will try to ensure that for each $i$, $b_i \in U \cup V'$. However, in order to do this, we will need a somewhat stronger inductive assumption. Let's first explain on an informal level how the induction works and why we need a stronger inductive assumption. On the first step of the induction, things work pretty well. It is possible to use the indiscernability principle to show that we can pick some $b_1$ which satisfies the condition above and which is close to some element of either $\overline{a}$ or $V$. Since $U$ contains a reasonably large neighborhood around each element of $\overline{a}$ and $V'$ contains a reasonably large neighborhood around each element of $V$, this means we can pick $b_1$ from $U\cup V'$. On the second step of the induction, however, things start to go wrong. We can again use the indiscernability principle to show that we can pick some $b_2$ which satisfies the condition above and which is close to either $b_1$ or to some element of either $\overline{a}$ or $V$. In the latter case, there is no problem: we can still pick $b_2$ from $U\cup V'$. But in the former case, there may be a problem. If the element $b_1$ we picked on the first step happens to be near the "boundary" of $U\cup V'$ then even a $b_2$ which is relatively close to it might no longer be inside $U\cup V'$. We can fix this problem by requiring not just that $b_1$ is in $U\cup V'$, but also that it is far from the "boundary" of $U\cup V'$. In other words, we need to require that $b_1$ is close to $\overline{a}$ or $V$ in some stronger way than simply requiring that it be in $U\cup V'$. In fact, it is enough to require that $b_1$ be within distance $r/2$ of some element of $\overline{a}$ or $V$ and more generally, that each $b_i$ is within distance $r/2 + \ldots + r/2^i$ of some element of $\overline{a}$ or $V$. To state this formally, we define sets $W_0 \subseteq W_1 \subseteq W_2 \subseteq \ldots \subseteq W_m$ as follows. $W_0$ consists of the coordinates of $\overline{a}$ together with the elements of $V$. For each $0 < i \leq m$, $W_i$ consists of all points in $M$ which are within distance $r/2^i$ of some element of $W_{i - 1}$ (note that this is equivalent to being within distance $r/2 + r/4 + \ldots + r/2^i$ of some element of $W_0$). Note that by assumption, $U \cup V'$ contains the $r$-neighborhood of each element of $W_0$. It follows that each $W_i$ is contained in $U \cup V'$. Also, define a sequence of formulas $\varphi_0, \varphi_1,\ldots,\varphi_m$ by removing the quantifiers from $\varphi$ one at a time. 
More precisely, define $$\varphi_i(\overline{x}, y_1,\ldots,y_i) := \exists y_{i + 1} \, \ldots, \exists y_m \theta(\overline{x}, \overline{y}).$$ So, for example, - $\varphi_0(\overline{x}) = \exists y_1 \dots \exists y_m \theta(\overline{x},\overline{y})=\varphi(\overline{x})$ - $\varphi_1(\overline{x},y_1) = \exists y_2 \dots \exists y_m \theta(\overline{x},\overline{y})$ - $\varphi_2 (\overline{x},y_1,y_2) = \exists y_3 \dots \exists y_m \theta(\overline{x},\overline{y})$ - ... - $\varphi_m (\overline{x},y_1,\dots,y_m) = \theta(\overline{x},\overline{y})$. We will now inductively construct a sequence of points $b_1,\ldots,b_m$ such that for each $i$, $b_i \in W_i$ and $M \vDash\varphi_i(\overline{a}, b_1,\ldots,b_i)$. Since $W_m \subseteq U \cup V'$ and $\varphi_m = \theta$, this is sufficient to finish the proof. The base case of this induction is simply the assertion that $M \vDash\varphi(\overline{a})$ which we assumed above. Now assume that we have already found $b_1,\ldots,b_i$ and we will show how to find $b_{i + 1}$. Since $M \vDash\varphi_i(\overline{a}, b_1,\ldots,b_i)$, there is some $c$ such that $M \vDash\varphi_{i + 1}(\overline{a}, b_1,\ldots,b_i, c)$. The idea is that we can pick $b_{i + 1}$ by mimicking $c$. If $c$ is within distance $r/2^{i + 1}$ of some coordinate of $\overline{a}$, $0$ or some $b_j$ for $j \leq i$ then we set $b_{i + 1} = c$. Otherwise, we can pick $b_{i + 1}$ to be some element of $V$ with the same $r$-neighborhood type as $c$ and which is also distance at least $r/2^{i + 1}$ from all coordinates of $\overline{a}$, $0$ and all $b_j$. We can do this because either $V$ contains many points of that $r$-neighborhood type (more than all the points within distance $r/2^{i + 1}$ of $\overline{a}$, $0$ and $b_1,\ldots, b_i$---this is why we chose the number $(2r + 1)(n + m + 1)$) or there are not very many such points and $V$ contains $c$ itself. Note that in the first case, $b_{i + 1}$ is within distance $r/2^{i + 1}$ of some element of $W_i$, and in the second case, $b_{i + 1} \in V$. Thus in either case $b_{i + 1} \in W_{i + 1}$. Also, note that in either case $\overline{a}b_1\ldots b_i c$ and $\overline{a}b_1\ldots b_i b_{i + 1}$ have the same $r/2^{i + 1}$-type. Since the radius of $\varphi_{i + 1}$ can be seen to be $r/2^{i + 1}$ and $M \vDash\varphi_{i + 1}(\overline{a}, b_1,\ldots,b_i, c)$, the indiscernability principle implies that $M \vDash\varphi_{i + 1}(\overline{a},b_1,\ldots,b_{i + 1})$, as desired. ◻ We now want to give an algorithm to compute the satisfaction relation of an arbitrary formula. One way to do this is to recursively apply the idea of Proposition [Proposition 19](#prop:satisfaction1){reference-type="ref" reference="prop:satisfaction1"} to reduce checking the truth of a formula with an arbitrary number of quantifiers to checking the truth of a finite number of atomic formulas. However, if we invoke the quantifier elimination results of section [2.3](#sec:qe_ma){reference-type="ref" reference="sec:qe_ma"} then we can do something simpler. Recall that Theorem [Theorem 11](#thm:qe2){reference-type="ref" reference="thm:qe2"} tells us every formula is equivalent over $M$ to a Boolean combination of existential formulas. Thus the algorithm for existential formulas almost immediately yields an algorithm for arbitrary formulas. **Proposition 20** (Satisfaction algorithm for arbitrary formulas). *Suppose $\varphi(\overline{x})$ is an $\mathcal{L}'$-formula. 
There is a number $r \in \mathbb{N}$ and an algorithm which, given any tuple $\overline{a}$ in $M$ and the following data* 1. *an oracle for $M$* 2. *and a finite set $U \subseteq M$,* *tries to check whether $M \vDash\varphi(\overline{a})$. Furthermore, if $U$ contains the $r$-neighborhood of every coordinate of $\overline{a}$ then the algorithm is correct.* **Definition 21**. For convenience, we will refer to the number $r$ in the statement of this proposition as the **satisfaction radius** of $\varphi$. *Proof.* By Theorem [Theorem 11](#thm:qe2){reference-type="ref" reference="thm:qe2"}, $\varphi(\overline{x})$ is equivalent over $M$ to a Boolean combination of existential $\mathcal{L}'$-formulas, $\psi_1(\overline{x}),\ldots, \psi_m(\overline{x})$ (which may have parameters from $M$). Let $r_1,\ldots,r_m$ denote the radii of these formulas and let $r = \max(r_1,\ldots,r_m)$. The algorithm is simple to describe, but is made slightly more complicated by the fact that the formulas $\psi_i$ may contain parameters from $M$. For clarity, we will first assume that they do not contain such parameters and then explain how to modify the algorithm in the case where they do. Here's the algorithm (in the case where there are no parameters). For each $i \leq m$, use the algorithm for existential formulas and the set $U$ to check the truth of $\psi_i(\overline{a})$. Then assume all the reported truth values are correct and use them to compute the truth value of $\varphi(\overline{a})$. If $U$ contains an $r$-neighborhood around every coordinate of $\overline{a}$ then for each $i \leq m$, it contains an $r_i$-neighborhood around each coordinate of $\overline{a}$. So in this case, the truth values we compute for $\psi_1(\overline{a}), \ldots, \psi_m(\overline{a})$ are guaranteed to be correct and thus the final truth value for $\varphi(\overline{a})$ is also correct. Now suppose that the formulas $\psi_i$ contain parameters from $M$. Let $\overline{b}_i$ be the tuple of parameters of $\psi_i$. Let $V$ be the set containing the $r$-neighborhood of each element of each tuple of parameters $\overline{b}_i$. The only modification that is needed to the algorithm described above is that instead of using $U$ itself, we should use $U \cup V$ when applying the satisfaction algorithm for existential formulas (and note that since $V$ is finite, we can simply hard-code it into our algorithm). ◻ Here's why this algorithm is useful. Note that if we had some way of computably generating the set $U$ then we would be able to outright compute the satisfaction relation for $\varphi$ using just an oracle for $M$. In turn, this would allow us to use an oracle for $M$ to compute the sequence $\llbracket A(\underline{0}) \rrbracket^{M}, \llbracket A(\underline{1}) \rrbracket^{M}, \llbracket A(\underline{2}) \rrbracket^{M},\ldots$, which is a path through $R$. Since $R$ has no computable paths, this would imply $M$ is not computable. Thus to finish our proof of the uncomputability of $M$, it is enough to find an algorithm for generating the set $U$ needed by the satisfaction algorithm. Actually, we can't quite do this in general, but we can do something almost as good: we can enumerate a short list of candidates for $U$. This is enough to show that the sequence $\llbracket A(\underline{0}) \rrbracket^{M}, \llbracket A(\underline{1}) \rrbracket^{M}, \llbracket A(\underline{2}) \rrbracket^{M},\ldots$ is guessable from an oracle for $M$. Since $R$ has no guessable paths, this is still enough to imply that $M$ is not computable. 
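To make the shape of these satisfaction algorithms concrete, here is a minimal Python sketch of the brute-force search underlying Proposition [Proposition 19](#prop:satisfaction1){reference-type="ref" reference="prop:satisfaction1"}. It is only a sketch: the callable `theta` is a hypothetical stand-in for evaluating the quantifier-free matrix of the formula via the oracle for $M$, and `V_prime` stands for the hard-coded finite set $V'$ from the proof.

```python
from itertools import product

def satisfies_existential(theta, a_tuple, U, V_prime, m):
    """Sketch of the satisfaction algorithm for an existential formula
    with m bound variables (Proposition 19).

    theta(a_tuple, b_tuple) is assumed to decide the quantifier-free
    matrix using an oracle for M; U is the finite set supplied as input
    and V_prime is the hard-coded finite set V' from the proof.
    """
    candidates = list(set(U) | set(V_prime))
    # Try every m-tuple of candidate witnesses drawn from U and V'.
    for b_tuple in product(candidates, repeat=m):
        if theta(a_tuple, b_tuple):
            return True
    return False
```

As in the proposition, false positives never occur, and the answer is guaranteed to be correct whenever `U` contains the $r$-neighborhood of every coordinate of the input tuple.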
## The guessing algorithm We will now prove that $M$ is not computable. As discussed above, we will do so by proving that the sequence $\llbracket A(\underline{0}) \rrbracket^{M}, \llbracket A(\underline{1}) \rrbracket^{M}, \llbracket A(\underline{2}) \rrbracket^{M}, \ldots$ is guessable relative to an oracle for $M$. Since the axioms of $T$ ensure that this sequence is a path through $R$ and since no path through $R$ is guessable, this implies that $M$ is not computable. In other words, we can complete our proof by constructing an algorithm which, given an oracle for $M$ and a number $n$, enumerates a list of at most $O(n^2)$ guesses (at least one of which is correct) for the finite sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots, \llbracket A(\underline{n}) \rrbracket^M$. The rest of this section is devoted to constructing this algorithm. Since it would become annoying to append the phrase "relative to an oracle for $M$" to every other sentence that follows, we will adopt the convention that we always implicitly have access to an oracle for $M$, even if we do not say so explicitly. Thus whenever we say that something is computable or computably enumerable, we mean relative to an oracle for $M$. **Warm-up: when $S$ has a quantifier free definition.** We will begin by constructing an algorithm for one especially simple case. Note that this case is included only to demonstrate how the satisfaction algorithm can be used and to motivate the rest of the proof; it can be skipped without missing any essential details. The "especially simple case" we are referring to is the case in which $S$ has a quantifier free $\mathcal{L}'$-definition. We will see that in this case, the sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$ is not only guessable, but actually computable. To begin, let $\varphi_S(x, y)$ be the $\mathcal{L}'$-definition of $S$---i.e. for every $a, b \in M$, $M \vDash S(a) = b$ if and only if $M \vDash\varphi_S(a, b)$. Note that since $\varphi_S$ is quantifier-free, the successor function in $M$ is computable: to find $S(a)$ we can just enumerate elements of $M$ until we see an element $b$ such that $M \vDash\varphi_S(a, b)$ (which we can check because $\varphi_S$ is quantifier-free). Likewise, we can also compute the predecessor function: instead of waiting for an element $b$ such that $M \vDash\varphi_S(a, b)$, we wait for an element $b$ such that $M \vDash\varphi_S(b, a)$. We can now explain how to compute $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$. Let $\varphi_A(x)$ be the $\mathcal{L}'$-definition of $A$ and let $r$ be the satisfaction radius of $\varphi_A$. Given a number $n$, do the following. 1. First use the fact that the successor function is computable to compute $\underline{n} = S^n(0)$. 2. Next, use the fact that the successor and predecessor functions are computable to compute the $r$-neighborhood of $\underline{n}$. Let $U$ denote the set of elements in this $r$-neighborhood. 3. Finally, use the satisfaction algorithm for $\varphi_A$, along with the set $U$, to check whether $M \vDash\varphi_A(\underline{n})$ and output the result as the truth value of $A(\underline{n})$. Note that since $U$ really does contain the $r$-neighborhood of $\underline{n}$, the outcome of this step is guaranteed to be correct. 
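The three steps above can be written as a short Python sketch, again purely for illustration. It assumes the model is presented as a computable enumeration `enumerate_M` of its elements, that `phi_S(a, b)` decides the quantifier-free $\mathcal{L}'$-definition of $S$ using the oracle for $M$, and that `check_A(a, U)` runs the satisfaction algorithm for the $\mathcal{L}'$-definition of $A$ with the finite set $U$; all of these names are hypothetical.

```python
def successor_of(a, enumerate_M, phi_S):
    """Find S(a) by searching for an element b with phi_S(a, b)."""
    for b in enumerate_M():
        if phi_S(a, b):
            return b

def predecessor_of(a, enumerate_M, phi_S):
    """Find P(a) by searching for an element b with phi_S(b, a)."""
    for b in enumerate_M():
        if phi_S(b, a):
            return b

def truth_value_of_A(n, zero, enumerate_M, phi_S, check_A, r):
    """Compute the truth value of A(n) in the warm-up case, where r is
    the satisfaction radius of the L'-definition of A."""
    # Step 1: compute the element S^n(0).
    a = zero
    for _ in range(n):
        a = successor_of(a, enumerate_M, phi_S)
    # Step 2: compute the r-neighborhood U of that element.
    U = {a}
    fwd = bwd = a
    for _ in range(r):
        fwd = successor_of(fwd, enumerate_M, phi_S)
        bwd = predecessor_of(bwd, enumerate_M, phi_S)
        U.update({fwd, bwd})
    # Step 3: run the satisfaction algorithm for A with the set U.
    return check_A(a, U)
```

The searches in the first two helpers terminate because every element of a model of $T$ has a successor and a predecessor, and each will eventually appear in the enumeration.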
**Idea of the full algorithm.** We have just seen an algorithm that computes the sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$ (without needing to make guesses) in the special case where $S$ is definable by a quantifier-free $\mathcal{L}'$-formula. We can no longer assume that there is a quantifier-free definition of $S$, but by applying the quantifier elimination theorem for mutually algebraic formulas over mutually algebraic structures from section [2.3](#sec:qe_ma){reference-type="ref" reference="sec:qe_ma"}, we have something almost as good. Namely, let $\varphi_S(x, y)$ be the $\mathcal{L}'$-definition of $S$. It is easy to see that $\varphi_S$ is mutually algebraic and so, by Theorem [Theorem 12](#thm:qe1){reference-type="ref" reference="thm:qe1"}, there is a mutually algebraic existential formula $\psi_S(x, y)$ (possibly with parameters from $M$) such that $M \vDash\varphi_S(x, y) \to \psi_S(x, y)$. The formula $\psi_S(x, y)$ should be thought of as an "approximation" to the successor relation in $M$. In particular, for a fixed element $a$, any $b$ such that $M \vDash\psi_S(a, b)$ holds should be thought of as a candidate for $S(a)$ and any $b$ such that $M \vDash\psi_S(b, a)$ holds should be thought of as a candidate for $P(a)$. This is justified by the following two facts. 1. Since $M \vDash\varphi_S(x, y) \to \psi_S(x, y)$, we have $M \vDash\psi_S(a, S(a))$ and $M \vDash\psi_S(P(a), a)$. In other words, the candidates for the successor and predecessor of $a$ include the true successor and predecessor of $a$, respectively. 2. Since $\psi_S$ is mutually algebraic, there are not very many such candidates. The core idea of the algorithm is that since $\psi_S(x, y)$ is existential, the set of candidates for $S(a)$ and $P(a)$ is computably enumerable: to check if $M \vDash\psi_S(a, b)$, we simply wait until we see some tuple in $M$ which can serve as a witness. Thus we have an algorithm which, given any $a \in M$, enumerates a short list of candidates for $S(a)$ and $P(a)$. Next, we can bootstrap this into an algorithm which, for any $a \in M$ and any number $n \in \mathbb{N}$, enumerates a list of guesses for the sequence $a - \underline{n}, a - \underline{(n - 1)}, \ldots, a + \underline{n}$: basically, enumerate guesses for the successor and predecessor of $a$, then enumerate guesses for the successor and predecessor of each of those guesses and so on, for $n$ rounds. This puts us in a situation much like the previous subsection (where the successor and predecessor functions were computable). In particular, we can enumerate guesses for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$ as follows. 1. First, let $\varphi_A(x)$ be the $\mathcal{L}'$-definition of $A$ and let $r_A$ be the satisfaction radius of $\varphi_A$. 2. Given a number $n$, enumerate guesses for the sequence $\underline{-r_A}, \ldots, \underline{n + r_A}$. 3. For each such guess, use the satisfaction algorithm to compute a guess for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$. Note that if the guess from the second step is correct then the guess from the last step will be too because in this case we have correctly identified $\underline{0}, \ldots, \underline{n}$, along with the $r_A$-neighborhood of each one. There is only one problem with this algorithm: we may enumerate too many guesses. 
Suppose that our algorithm for enumerating guesses for the successor of an element of $M$ enumerates $k$ guesses. Then it seems that we might end up enumerating up to $k^n$ guesses for $a + \underline{n}$: $k$ guesses for $a + \underline{1}$, $k^2$ guesses for $a + \underline{2}$ (since each guess for $a + \underline{1}$ gives rise to $k$ guesses for $a + \underline{2}$), and so on. Thus in the algorithm above, we might end up with about $k^n$ guesses for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$, which is not enough to show that the sequence $\llbracket A(\underline{0})\rrbracket^M, \llbracket A(\underline{1})\rrbracket^M, \ldots$ is guessable. The second key idea of our algorithm is that we actually don't end up with so many guesses. It is possible to show that since $\psi_S$ is mutually algebraic, if $M \vDash\psi_S(a, b)$ then---almost always---$a$ and $b$ are close to each other. In particular, if the radius of $\psi_S$ is $r$ then with only finitely many exceptions, $a$ and $b$ must be distance at most $r$ apart (this will be proved in Lemma [Lemma 22](#lemma:ma_close){reference-type="ref" reference="lemma:ma_close"} below). If we ignore the finitely many exceptions, then this implies that for any $a$, every candidate for $S(a)$ is within distance $r$ of $a$. By induction, this implies that every candidate for $a + \underline{n}$ is within distance $rn$ of $a$. The point is that this means there are at most $rn$ such candidates (rather than $k^n$). This does not quite solve our problem: even if there are only $rn$ candidates for $a + \underline{n}$, there could still be exponentially many candidates for the sequence $a - \underline{n}, \ldots, a + \underline{n}$. However, it can be combined with other tricks to reduce the number of guesses to $O(n^2)$. This will be explained in detail in the proof of Lemma [Lemma 24](#lemma:alg2){reference-type="ref" reference="lemma:alg2"}. **Details of the algorithm.** We will now describe the details of the algorithm and verify that it works correctly. We will break the algorithm (and its verification) into three parts, which work as follows. 1. **The successor and predecessor guessing algorithm:** an algorithm which takes as input an element $a \in M$ and uses the existential formula approximating the successor relation to enumerate candidates for the successor and predecessor of $a$. This is described in Lemma [Lemma 23](#lemma:alg1){reference-type="ref" reference="lemma:alg1"}. 2. **The neighborhood guessing algorithm:** an algorithm which takes as input an element $a \in M$ and a number $n$ and uses the ideas discussed above to enumerate candidates for the sequence $a - \underline{n},\ldots,a + \underline{n}$. This is described in Lemma [Lemma 24](#lemma:alg2){reference-type="ref" reference="lemma:alg2"}. 3. **The $A$ guessing algorithm:** an algorithm which takes as input a number $n$ and uses the neighborhood guessing algorithm together with the satisfaction algorithm to enumerate candidates for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$. This is described in Lemma [Lemma 25](#lemma:alg3){reference-type="ref" reference="lemma:alg3"}. Before describing these algorithms and proving their correctness, we need to prove one technical lemma (which is related to our comment above stating that if $M \vDash\psi_S(a, b)$ then $a$ and $b$ are usually close together). **Lemma 22**. 
*Suppose that $\varphi(x, y)$ is a formula (possibly with parameters from $M$) of radius $r$ which is mutually algebraic over $M$. There is a finite set $X$ of elements of $M$ such that if $M \vDash\varphi(a, b)$ then either $a$ and $b$ are distance at most $r$ apart or at least one of $a, b$ is in $X$.[^7]*

*Proof.* It will help to first make explicit the parameters of $\varphi$. Let $\overline{c}$ denote the tuple of parameters and write $\varphi'(x, y, \overline{z})$ to denote the version of $\varphi$ with the parameters exposed, i.e. $\varphi(x, y)$ is $\varphi'(x, y, \overline{c})$. Call a pair $(a, b)$ **exceptional** if $a$ and $b$ are more than distance $r$ apart and both are more than distance $r$ from every coordinate of $\overline{c}$ and $M \vDash\varphi(a, b)$. We will show that if $(a, b)$ is exceptional then the $r$-neighborhood type of $a$ occurs only finitely often in $M$, and likewise for $b$. Since there are only finitely many $r$-neighborhood types, this shows that there are only finitely many exceptional pairs. This is sufficient to finish the proof since we can take $X$ to consist of all elements which are part of some exceptional pair, together with the $r$-neighborhood of each coordinate of $\overline{c}$. The claim about exceptional pairs follows from the indiscernibility principle. Suppose $(a, b)$ is an exceptional pair. If $a'$ is any element of $M$ which is distance more than $r$ from $b$ and from every coordinate of $\overline{c}$ and which has the same $r$-neighborhood type as $a$ then by the indiscernibility principle we have $$M \vDash\varphi(a, b) \implies M \vDash\varphi'(a, b, \overline{c}) \implies M \vDash\varphi'(a', b, \overline{c})$$ and hence $M \vDash\varphi(a', b)$. Since $\varphi$ is mutually algebraic, there can only be finitely many such $a'$. Thus, outside of the $r$-neighborhood of $b$ and of each coordinate of $\overline{c}$, there are only finitely many elements with the same $r$-neighborhood type as $a$. Since these $r$-neighborhoods are themselves finite, they also contain only finitely many elements with the same $r$-neighborhood type as $a$ and thus we have shown that the $r$-neighborhood type of $a$ only occurs finitely often in $M$. Symmetric reasoning establishes the same result for $b$. ◻

**Lemma 23** (Guessing algorithm for successors and predecessors). *There is an algorithm which, given any $a \in M$, enumerates two lists of elements of $M$ such that*

1. *$S(a)$ is in the first list and $P(a)$ is in the second list.*

2. *There is a constant upper bound (independent of $a$) on the distance between any enumerated element and $a$.*

*Proof.* Let $\varphi_S(x, y)$ be the $\mathcal{L}'$-definition of $S$ (i.e. $M \vDash S(a) = b$ if and only if $M \vDash\varphi_S(a, b)$). Since $\varphi_S(x, y)$ is mutually algebraic, we can apply Theorem [Theorem 12](#thm:qe1){reference-type="ref" reference="thm:qe1"} to obtain a mutually algebraic existential $\mathcal{L}'$-formula $\psi_S(x, y)$ (which may contain parameters from $M$) such that $M \vDash\varphi_S(x, y) \to \psi_S(x, y)$. Let $r$ be the radius of $\psi_S$. By Lemma [Lemma 22](#lemma:ma_close){reference-type="ref" reference="lemma:ma_close"}, there is a finite set $X$ such that if $M \vDash\psi_S(b, c)$ then either $b$ and $c$ are distance at most $r$ apart or at least one of $b, c$ is in $X$. We will hard-code into our algorithm the elements of $X$, along with the identity of their successors and predecessors.
Note that since $\psi_S(x, y)$ is an existential formula, it follows that for a fixed $a$, the set of elements $b$ such that $M \vDash\psi_S(a, b)$ is computably enumerable (to see why, note that we can simply enumerate tuples in $M$ until we find one that witnesses the existential formula $\psi_S(a, b)$), and likewise for the set of elements $b$ such that $M \vDash\psi_S(b, a)$. Thus our algorithm may work as follows. 1. Begin enumerating elements $b$ such that $M \vDash\psi_S(a, b)$ or $M \vDash\psi_S(b, a)$. 2. For each element $b$ such that $M \vDash\psi_S(a, b)$, check if either $a$ or $b$ is in $X$. If so, use the hard-coded list of successors and predecessors of elements of $X$ to check if $b$ is a successor of $a$. If this is true, enumerate $b$ into the first list. If $a$ and $b$ are both not in $X$ then enumerate $b$ into the first list with no extra checks. 3. Do the same thing for each element $b$ such that $M \vDash\psi_S(b, a)$, but enumerate $b$ into the second list instead of the first. Since $M \vDash\varphi_S(x, y) \to \psi_S(x, y)$, the true successor and predecessor of $a$ will be successfully enumerated. Also, if $b$ is some element of $M$ which is distance more than $r$ from $a$ then either $M \nvDash\psi_S(a, b)$ and $M \nvDash\psi_S(b, a)$, in which case $b$ will not be enumerated, or one of $a, b$ is in $X$, in which case $b$ will still not be enumerated (because it is not a true successor or predecessor of $a$). ◻ **Lemma 24** (Guessing algorithm for neighborhoods). *There is an algorithm which, given any $a \in M$ and number $n \in \mathbb{N}$, enumerates a list of at most $O(n^2)$ guesses for the sequence $a - \underline{n}, \ldots, a + \underline{n}$, one of which is correct.* *Proof.* It is easiest to describe our algorithm in the following way. We will first describe an algorithm which has access to certain extra information (which might not be computable from an oracle for $M$) and which uses this extra information to correctly compute the sequence $a - \underline{n}, \ldots, a + \underline{n}$. We then obtain an algorithm for enumerating guesses for the sequence by trying each possible value of the extra information and running the algorithm on each of these values in parallel.[^8] To finish, we will have to show that there are only $O(n^2)$ possible values for the extra information. To begin, let $r_1$ be the constant from the statement of Lemma [Lemma 23](#lemma:alg1){reference-type="ref" reference="lemma:alg1"} (i.e. the upper bound on the distance between any $a$ and any element which is enumerated by the algorithm for guessing successors and predecessors of $a$). Let $\varphi_S(x, y)$ be the $\mathcal{L}'$-definition of $S$ and let $r_2$ be the satisfaction radius of $\varphi_S$. Suppose we are given an element $a \in M$ and a number $n \in \mathbb{N}$ as input. Let $N = r_1n + r_2$. Our algorithm proceeds in two phases. 1. In the first phase, we will use the algorithm from Lemma [Lemma 23](#lemma:alg1){reference-type="ref" reference="lemma:alg1"} to collect candidates for $a + \underline{i}$ for each $-N \leq i \leq N$. More precisely, for each such $i$ we will find a set $U_i$ which contains $a + \underline{i}$ and which is contained in the $r_1|i|$-neighborhood of $a$. 2. In the second phase, we will use the sets of candidates collected in the first stage as input to the satisfaction algorithm (applied to $\varphi_S$) to determine the exact identities of $a + \underline{i}$ for each $-n \leq i \leq n$. 
The "extra information" that we alluded to above is needed in the first phase of the algorithm. This is because the sets $U_i$ are not quite computable from an oracle for $M$, but only computably enumerable. However, since they are all finite, it is possible to compute them exactly with only a small amount of additional information. Let $i$ be the index of the last $U_i$ to have a new element enumerated into it and let $m$ be the size of $U_i$ once all its elements have been enumerated (note that such an $i$ and $m$ exist because all the $U_j$ are finite). We claim that the pair $(i, m)$ is enough information to allow us to compute all the sets $U_j$ exactly and that there are only $O(n^2)$ possible values for this pair. To see why we can compute all the $U_j$ exactly, note that given $i$ and $m$ we can simply keep enumerating elements into all the $U_j$ until we see that $U_i$ has size $m$. To see why there are only $O(n^2)$ possible values for the pair $(i, m)$, note that there are only $2N + 1$ possible values for $i$ and at most $r_1(2N + 1)$ possible values for $m$ (since $U_i$ is contained in the $r_1|i|$-neighborhood of $a$, which has $r_1(2|i| + 1) \leq r_1(2N + 1)$ elements). Thus there are at most $r_1(2N + 1)^2 = O(n^2)$ possible values for $(i, m)$.

*Phase 1: collecting candidates.* The sets $U_i$ for $-N \leq i \leq N$ can be enumerated as follows. To begin with, set $U_0 = \{a\}$ and set all other $U_i = \varnothing$. Then run the following processes in parallel: for each $-N < i < N$ and each element $b$ of $U_i$, use the algorithm of Lemma [Lemma 23](#lemma:alg1){reference-type="ref" reference="lemma:alg1"} to enumerate candidates for the successor and predecessor of $b$. If $i \geq 0$ then add each such candidate for the successor of $b$ to $U_{i + 1}$. If $i \leq 0$ then add each candidate for the predecessor of $b$ to $U_{i - 1}$. It is easy to show by induction that for each $i$, $a + \underline{i}$ will eventually be enumerated into $U_i$ and that each element enumerated into $U_i$ is distance at most $r_1|i|$ from $a$.

*Phase 2: computing neighbors exactly.* Given the sets $U_i$ from phase 1, we can compute the exact identities of $a - \underline{n}, \ldots, a + \underline{n}$ as follows. First, let $U = U_{-N} \cup \ldots \cup U_N$ and note that $a + \underline{0} = a$. Next, loop over $i = 0, 1, \ldots, n - 1$. On step $i$, we will compute $a + \underline{i + 1}$ and $a - \underline{(i + 1)}$. Suppose that we are on step $i$ of the algorithm and assume for induction that we have already successfully computed $a + \underline{i}$ and $a - \underline{i}$ (note that for $i = 0$ this is trivial). Now do the following:

1. For each $b \in U_{i + 1}$, use the satisfaction algorithm (of Proposition [Proposition 20](#prop:satisfaction){reference-type="ref" reference="prop:satisfaction"}) with the set $U$ to check if $M \vDash\varphi_S(a + \underline{i}, b)$.

2. For each $b \in U_{-(i + 1)}$, use the satisfaction algorithm with the set $U$ to check if $M \vDash\varphi_S(b, a - \underline{i})$.

Note that each $b \in U_{i + 1}$ is within distance $r_1(i + 1)$ of $a$. Since $U$ contains the entire $N$-neighborhood of $a$ and $N = r_1n + r_2 \geq r_1 (i + 1) + r_2$, $U$ also contains the $r_2$-neighborhood of $b$. Thus the conditions of the satisfaction algorithm are fulfilled and so we correctly compute whether $b$ is the successor of $a + \underline{i}$ or not.
And since $U_{i + 1}$ is guaranteed to contain $a + \underline{i + 1}$, our algorithm will correctly identify $a + \underline{i + 1}$. Completely symmetric reasoning applies to show that our algorithm will correctly identify $a - \underline{(i + 1)}$. ◻

**Lemma 25** (Guessing algorithm for $A$). *There is an algorithm which, given any number $n \in \mathbb{N}$, enumerates a list of at most $O(n^2)$ guesses for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$, one of which is correct.*

*Proof.* Let $\varphi_A(x)$ be the $\mathcal{L}'$-definition of $A$ (i.e. $M \vDash A(a)$ if and only if $M \vDash\varphi_A(a)$) and let $r$ be the satisfaction radius of $\varphi_A$. This algorithm essentially just combines the algorithm for guessing neighborhoods with the satisfaction algorithm for $\varphi_A$. Given a number $n \in \mathbb{N}$ as input, first use the algorithm from Lemma [Lemma 24](#lemma:alg2){reference-type="ref" reference="lemma:alg2"} to enumerate guesses for the sequence $-\underline{(n + r)}, \ldots, \underline{n + r}$ (this can be done by simply giving the element $0 \in M$ and the number $n + r$ as input to that algorithm). Let $b_{-(n + r)}, \ldots, b_{n + r}$ be one such guess and let $U = \{b_i \mid -(n + r) \leq i \leq n + r\}$. For each $0 \leq i \leq n$, use the satisfaction algorithm with the set $U$ to check if $M \vDash\varphi_A(b_i)$. If so, report that $\llbracket A(\underline{i}) \rrbracket^{M}$ is true and otherwise report that it is false. So for each guess for the sequence $-\underline{(n + r)}, \ldots, \underline{n + r}$ we generate exactly one guess for the sequence $\llbracket A(\underline{0})\rrbracket^M, \ldots, \llbracket A(\underline{n})\rrbracket^M$ and thus we generate at most $O((n + r)^2) = O(n^2)$ guesses overall. Furthermore, one of the guesses for the sequence $-\underline{(n + r)}, \ldots, \underline{n + r}$ is guaranteed to be correct. For this guess, each $b_i$ is actually equal to $\underline{i}$ and for each $i \leq n$, the set $U$ really does contain the $r$-neighborhood of $b_i$. Thus, for this guess, each $\llbracket A(\underline{i}) \rrbracket^M$ is computed correctly. ◻

# Questions

## Bi-interpretability

Since definitional equivalence is a strong form of bi-interpretability, it seems reasonable to ask whether Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"} still holds when definitional equivalence is replaced with bi-interpretability.

**Question 2**. *Is there a consistent, c.e. theory such that no theory bi-interpretable with it has a computable model?*

It seems possible that the theory $T$ we used in our proof of Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"} could also be used to answer this question, but there are a few difficulties. One issue is that while the mutual algebraicity of a structure is preserved under definitional equivalence, it is not always preserved under bi-interpretability.

**Example 26** (Bi-interpretability fails to preserve mutual algebraicity). Let $\mathcal{L}$ be the language with just equality and let $\mathcal{L}'$ be a language with two sorts $U$ and $V$, and two function symbols $f, g \colon V \to U$. Let $T$ be the $\mathcal{L}$-theory describing an infinite set and let $T'$ be the $\mathcal{L}'$-theory which states that $U$ is infinite and $(f, g)$ is a bijection from $V$ to $(U\times U) \setminus \{(x, x) \mid x \in U\}$.
Given a model of $T'$, we can obtain a model of $T$ by forgetting the sort $V$ and the functions $f$ and $g$. Given a model of $T$ we can obtain a model of $T'$ as follows. Take as the underlying set for the model the set of all pairs $(x, y)$, with pairs of the form $(x, x)$ forming the sort $U$ and pairs of the form $(x, y)$ for $x\neq y$ forming the sort $V$. For the functions $f$ and $g$, simply take $f((x, y)) = (x, x)$ and $g((x, y)) = (y, y)$. It is not hard to check that these two interpretations give a bi-interpretation. However, while every model of $T$ is clearly mutually algebraic, the same is not true for $T'$. For example, the formula $f(y) = x$ is not equivalent to any Boolean combination of mutually algebraic formulas.

A second issue (not unrelated to the first) is that, in our proof, we relied on the fact that any model $M$ of a theory definitionally equivalent to $T$ carries a notion of distance inherited from $T$. In particular, we used this to bound the number of guesses required by the neighborhood guessing algorithm of Lemma [Lemma 24](#lemma:alg2){reference-type="ref" reference="lemma:alg2"}. However, if $M$ is only a model of a theory bi-interpretable with $T$, it is not clear if there is still a good notion of distance which can play this role.

## Natural theories

Arguably, the theory $T$ that we used to prove Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"} is not very natural. It would be interesting to know if this is necessary.

**Question 3**. *Is there a natural theory witnessing Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"}?*

Of course, much depends on what the word "natural" means. In the interests of asking a somewhat more concrete question, let's say that a theory is natural if it has been studied (at least implicitly) by mathematicians who are not logicians. We can rephrase our question as follows: is there any natural theory which satisfies the robust version of the Tennenbaum property implicit in Theorem [Theorem 2](#thm:main){reference-type="ref" reference="thm:main"}? In light of Pakhomov's results, which seem to show that any theory interpreting a decent amount of arithmetic is definitionally equivalent to a theory without the Tennenbaum property, it seems like a good idea to first ask whether any natural theory satisfies the regular version of the Tennenbaum property but does not interpret any nontrivial fragment of arithmetic. We are not aware of any such theory and would consider it highly interesting.

**Question 4**. *Is there any natural (consistent) theory $T$ such that $T$ has no computable models and does not interpret any nontrivial fragment of arithmetic?*

One can ask a similar question on the level of models rather than theories. In analogy with our definition for theories, let's say that a countable structure is natural if it has been studied by mathematicians who are not logicians.

**Question 5**. *Is there a natural countable structure with no computable presentation?*

Again, we are not aware of any completely convincing example of such a structure and would consider any such example to be very interesting.

[^1]: Technically this requires checking that for every function symbol $f$ in $\mathcal{L}$, $T'$ proves that the $\mathcal{L}'$-definition of $f$ is a function and that for every constant symbol $c$ of $\mathcal{L}$, $T'$ proves that the $\mathcal{L}'$-definition of $c$ holds of exactly one element. These both follow from conservativity of $T''$.
[^2]: For example we can simply take the complement of any of the levels of the universal Martin-Löf test. [^3]: The point is that if a theory $T$ is in a language with finite signature and $T'$ is any theory definitionally equivalent to $T$ then $T'$ has a subtheory in a language with finite signature which is also definitionally equivalent to $T$. [^4]: Recall that a model is computable if its underlying set is $\mathbb{N}$ and all of its functions and relations are computable as functions or relations on $\mathbb{N}$. Note that since we are assuming $\mathcal{L}'$ has finite signature, we don't need to worry about whether these functions and relations are uniformly computable. [^5]: This case becomes more or less trivial if $T$ is modified in the way described in Remark [Remark 15](#remark:generic){reference-type="ref" reference="remark:generic"}. This is because the existence of such an element $d$ is guaranteed by the extra axioms described in that remark. [^6]: The idea is that this number is big enough that if we have any other $n + m + 1$ points then at least one point in $V$ which is of $r$-neighborhood type $p$ will be distance more than $r$ from all these $n + m + 1$ points. [^7]: *Note that if $T$ is modified in the way described in Remark [Remark 15](#remark:generic){reference-type="ref" reference="remark:generic"} then both the statement and proof of this lemma can be simplified somewhat. In particular, we can replace the set $X$ with the $r$-neighborhood of $0$.* [^8]: A slightly subtle point here is that the algorithm which uses extra information to compute $a - \underline{n}, \ldots, a + \underline{n}$ might not terminate if the extra information it is given is incorrect. Thus some of the possible values that we try for the extra information will never actually output a guess. This is why we only say that our final algorithm *enumerates* a list of guesses rather than that it *computes* a list of guesses.
--- abstract: | We study Nordhaus-Gaddum problems for Kemeny's constant $\mathcal{K}(G)$ of a connected graph $G$. We prove bounds on $\min\{\mathcal{K}(G),\mathcal{K}(\overline{G})\}$ and the product $\mathcal{K}(G)\mathcal{K}(\overline{G})$ for various families of graphs. In particular, we show that if the maximum degree of a graph $G$ on $n$ vertices is $n-O(1)$ or $n-\Omega(n)$, then $\min\{\mathcal{K}(G),\mathcal{K}(\overline{G})\}$ is at most $O(n)$. author: - "Sooyeong Kim^1^[^1]" - Neal Madras^1^ - Ada Chan^1^ - Mark Kempton^2^ - Stephen Kirkland^3^ - Adam Knudson^2^ title: Bounds on Kemeny's constant of a graph and the Nordhaus-Gaddum problem --- Graph, Markov chain, random walk, Kemeny's constant, spanning 2-forest, mean first passage time **AMS subject classifications:** 05C09, 60J10, 05C81, 05C50, 05A19 # Introduction Kemeny's constant is an important measure from the theory of Markov chains that has received considerable interest from the graph theory community recently. It also arises as a tool in applications of Markov chains in diverse areas such as wireless network design [@GhaL] and economics [@MooI]. Kemeny's constant is originally defined for a discrete, finite, time-homogeneous, irreducible Markov chain based on its stationary vector and mean first passage times. Random walks on graphs belong to this special family of Markov chains, and they serve as the primary focus in this article. Consequently, our attention focuses on Kemeny's constant within the context of random walks on graphs, and we refer the reader to [@kemeny1960finite] for the original definition and details. Kemeny's constant gives a measure of how quickly a random walker can move around a graph, and thus provides an intuitive measure of the connectivity of a graph. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. In this article, we assume all graphs to be connected, undirected, and unweighted unless stated otherwise. Let $V(G)=\{1,\dots,n\}$ for some $n\geq 1$, and $m=|E(G)|$. We denote by $\{i,j\}$ the edge joining vertices $i$ and $j$, and say that $i$ and $j$ are adjacent. We denote by $d_i$ the degree of vertex $i$; that is, the number of vertices adjacent to vertex $i$. We begin by introducing Kemeny's constant and discussing its interpretation in the context of random walks on graphs, and discuss several other expressions for Kemeny's constant which will be used throughout the paper. Given a graph $G$ and fixed initial vertex $i\in V(G)$ from which to start the random walk, Kemeny's constant $\mathcal{K}(G)$ is defined as $$\begin{aligned} \label{eq:def} \mathcal{K}(G) = \sum_{\substack{j=1 \\ j\neq i}}^n \left(\frac{d_j}{2m}\right)m_{i,j},\end{aligned}$$ where $m_{i,j}$ is the expected time for a random walker to arrive at $j$ for the first time when it begins at $i$, which is the so-called *mean first passage time* or *hitting time* from $i$ to $j$. This expression can be interpreted in terms of the expected time for a random walk to reach a randomly-chosen destination vertex, starting from vertex $i$. The 'constant' in the name comes from the fact that this expression is, astonishingly, independent of the choice of $i$, giving the same value regardless of the fixed start point. We note that $\sum_i \frac{d_i}{2m} = 1$, and $\frac{d_i}{2m}$ is the stationary probability distribution of the random walk. 
Furthermore, it follows that [\[eq:def\]](#eq:def){reference-type="eqref" reference="eq:def"} may be rewritten as $$\begin{aligned} \mathcal{K}(G) = \sum_{i=1}^n\sum_{\substack{j=1\\j\neq i}}^n\left(\frac{d_i}{2m}\right)m_{i,j}\left(\frac{d_j}{2m}\right).\end{aligned}$$ Hence, Kemeny's constant may also be interpreted in terms of the average travel time between two vertices of the graph, chosen at random according to their degrees. It is shown in [@levene2002kemeny] that Kemeny's constant for a Markov chain can be written in terms of the eigenvalues of the transition matrix; specifically, $$\begin{aligned} \label{eq:eig} \mathcal{K}(G) = \sum_{i=2}^n \frac{1}{1-\lambda_i},\end{aligned}$$ where $1,\lambda_2,\dots,\lambda_n$ are the eigenvalues of the associated transition matrix. In the particular case of a random walk on $G$, the matrix $D^{-1}A$ is the transition matrix, where $D$ is the diagonal matrix of vertex degrees, and $A$ is the adjacency matrix of $G$ (see Equation ([\[eq.Pvw\]](#eq.Pvw){reference-type="ref" reference="eq.Pvw"})). For a random walk on a graph, Kemeny's constant also has a combinatorial expression which we use throughout this article. Let $\mathcal{F}(i;j)$ denote the set of spanning $2$-forests of $G$ where one component of the forest contains vertex $i$, and the other contains $j$. Let $F$ be the matrix given by $F = [f_{i,j}]$ where $f_{i,j} = |\mathcal{F}(i;j)|$. In [@kirkland2016kemeny], Kemeny's constant of $G$ is given by $$\begin{aligned} \mathcal{K}(G) = \frac{\mathbf{d}^T F \mathbf{d}}{4m\tau} = \frac{1}{4m\tau}\sum_{i=1}^n\sum_{\substack{j=1\\j\neq i}}^n d_if_{i,j}d_j,\end{aligned}$$ where $\mathbf{d}$ is the degree vector of $G$ and $\tau$ is the number of spanning trees of $G$. Let $R$ be the matrix given by $R = [r_{i,j}]$, where $r_{i,j}$ is the *effective resistance* (see [@bapat2010graphs]) between vertices $i$ and $j$. The quantity $r_{i,j}$ is given by $r_{i,j}=(e_i-e_j)^TL^\dagger (e_i-e_j)$ where $L^\dagger$ is the Moore-Penrose inverse of the Laplacian matrix $L$ of $G$, which is given by $L = D-A$. It appears in [@shapiro1987electrical] that $r_{i,j} = \frac{f_{i,j}}{\tau}$. Hence, we also have $$\begin{aligned} \label{eq:res} \mathcal{K}(G) = \frac{\mathbf{d}^T R \mathbf{d}}{4m} = \frac{1}{4m}\sum_{i=1}^n\sum_{j=1}^n d_ir_{i,j}d_j.\end{aligned}$$ In this paper, we will be concerned with Nordhaus-Gaddum questions relating to Kemeny's constant. Nordhaus-Gaddum questions in graph theory are questions that address the relationship between a graph $G$ and its complement relative to some graph invariant. The complement of a graph $G$, denoted $\overline{G}$, is the graph with the same vertex set $V(G)$ such that $\{i,j\}\in E(\overline{G})$ if and only if $\{i,j\} \notin E(G)$. Given some graph invariant $f(\cdot)$, a Nordhaus-Gaddum question usually considers bounds on either the sum $f(G)+f(\overline{G})$ or the product $f(G)f(\overline{G})$, or on the minimum or maximum of the set $\{f(G), f(\overline{G})\}$. Nordhaus and Gaddum originally studied such questions for the chromatic number of a graph [@nordhaus1956complementary]. Since then, Nordhaus-Gaddum questions have been studied for a wide range of different kinds of graph invariant; see [@aouchiche2013survey] for a survey of results of this kind. Of particular note to the present work, Nordhaus-Gaddum questions relating to a graph invariant called the *Kirchhoff index* of a graph were studied in [@zhou2008note; @yang2011new]. 
The Kirchhoff index $\mathop{\mathrm{Kf}}(G)$ is defined as $$\mathop{\mathrm{Kf}}(G) =\tfrac{1}{2}\sum_{i=1}^n \sum_{j=1}^n r_{i,j}.$$ In light of the formula in [\[eq:res\]](#eq:res){reference-type="eqref" reference="eq:res"}, we can view Kemeny's constant as a weighted, normalized version of the Kirchhoff index. Work in [@yang2011new] gave upper and lower bounds on the sum and product of $\mathop{\mathrm{Kf}}(G)$ and $\mathop{\mathrm{Kf}}(\overline G)$. In particular, they show that for a graph on $n$ vertices, the product $\mathop{\mathrm{Kf}}(G) \mathop{\mathrm{Kf}}(\overline{G})$ is at least $4(n-1)^2$ and is at most on the order of $n^4$, though this upper bound is not shown to be tight. The lower bound was improved to $(2n-1)^2$ in [@das2016nordhaus]. In [@chen2007resistance], the quantity $$\mathop{\mathrm{Kf}}^\ast(G) = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n d_id_jr_{i,j}$$ is introduced, and is called the *multiplicative degree-Kirchhoff index*. This is closely related to Kemeny's constant via the formula in [\[eq:res\]](#eq:res){reference-type="eqref" reference="eq:res"}. A Nordhaus-Gaddum question for this quantity was considered (among other related quantities) in [@das2016nordhaus]; in particular, certain bounds are given on the sum $\mathop{\mathrm{Kf}}^\ast(G)+\mathop{\mathrm{Kf}}^\ast(\overline{G})$. In [@faught2023nordhaus], Nordhaus-Gaddum questions were also studied for the spectrum of the normalized Laplacian matrix defined by $D^{-\frac{1}{2}}LD^{-\frac{1}{2}}$, which is closely related to Kemeny's constant via [\[eq:eig\]](#eq:eig){reference-type="eqref" reference="eq:eig"}. For a connected graph $G$, it is proved in [@breen2019computing] that $\mathcal{K}(G) = O(n^3)$. Thus it follows trivially that $\mathcal{K}(G)+\mathcal{K}(\overline{G}) = O(n^3)$. Consequently, it is more natural to examine the product $\mathcal{K}(G)\mathcal{K}(\overline{G})$ for the Nordhaus-Gaddum problem in relation to Kemeny's constant. We note that the motivation for considering this type of problem is to further understand the influence of graph structural features on the value of Kemeny's constant, given that extremal graphs tend to have large diameter and maximum degree on the order of $n$. To begin our investigation of Nordhaus-Gaddum questions for Kemeny's constant, we examine some data. In Figure [1](#fig:kemplotswithbounds){reference-type="ref" reference="fig:kemplotswithbounds"}, we have plotted the point $(\mathcal{K}(G),\mathcal{K}(\overline{G}))$ for all graphs $G$ on $n$ vertices for $n=7,8,9$. For disconnected graphs, we consider Kemeny's constant to be infinite and plot the point at the extreme ends of the plot.

![Scatter plots of $(\mathcal{K}(G), \mathcal{K}(\overline{G}))$ for all graphs of order $n = 7, 8, 9$. The red lines mark the quantities $\frac{(n-1)^2}{n}$ and $\frac{3(n-1)^2}{2(n+1)}$.](KemenyPlotsWithBounds.png){#fig:kemplotswithbounds}

Looking at these plots, a few observations become apparent. It seems that when Kemeny's constant for a graph is particularly large, then Kemeny's constant for its complement stays rather small. In the plots, we have indicated the lines $y=(n-1)^2/n$ and $x=(n-1)^2/n$, which come from the known minimum value of Kemeny's constant for a graph of order $n$, achieved by the complete graph. We also have the lines $y=3(n-1)^2/2(n+1)$ and $x=3(n-1)^2/2(n+1)$, which come from the computation of Kemeny's constant for graphs that appear at the extreme ends of the plots.
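Data of this kind is straightforward to generate numerically. The following is a minimal sketch, assuming only `numpy`, that evaluates Kemeny's constant of a graph and of its complement, once via the eigenvalue expression [\[eq:eig\]](#eq:eig){reference-type="eqref" reference="eq:eig"} and once via the effective-resistance expression [\[eq:res\]](#eq:res){reference-type="eqref" reference="eq:res"}; the function names and the choice of the path $P_7$ as an example are ours.

```python
# A small numerical check of the two expressions for Kemeny's constant.
import numpy as np

def kemeny(A):
    """Kemeny's constant via the eigenvalues of the transition matrix D^{-1}A."""
    d = A.sum(axis=1)
    P = A / d[:, None]
    eig = np.sort(np.linalg.eigvals(P).real)   # P is similar to a symmetric matrix
    return float(np.sum(1.0 / (1.0 - eig[:-1])))  # drop the eigenvalue 1

def kemeny_resistance(A):
    """The same quantity via K(G) = d^T R d / (4m), with R computed from the
    Moore-Penrose inverse of the Laplacian."""
    n = len(A)
    d = A.sum(axis=1)
    L = np.diag(d) - A
    Lp = np.linalg.pinv(L)
    R = np.outer(np.diag(Lp), np.ones(n)) + np.outer(np.ones(n), np.diag(Lp)) - 2 * Lp
    return float(d @ R @ d / (2 * d.sum()))   # 4m = 2 * sum of degrees

n = 7
A = np.zeros((n, n))
for i in range(n - 1):                       # the path graph P_7
    A[i, i + 1] = A[i + 1, i] = 1
A_comp = np.ones((n, n)) - np.eye(n) - A     # adjacency matrix of the complement

print(kemeny(A), kemeny_resistance(A))           # the two formulas agree
print(kemeny(A_comp), kemeny_resistance(A_comp))
```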
For all graphs on up to $n=9$ vertices, it appears that all points lie in the regions bounded by these four lines. While it is difficult to make a general conjecture based on looking at graphs on only up to nine vertices, it does seem natural to ask if one of $\mathcal{K}(G)$ or $\mathcal{K}(\overline{G})$ must be bounded above by $O(n)$. If true, this would imply that $\mathcal{K}(G)\mathcal{K}(\overline{G})$ is $O(n^4)$. In this paper, we prove this and related results under certain assumptions and for certain classes of graphs. In Section [2](#sec:n-O1){reference-type="ref" reference="sec:n-O1"} we prove that if the maximum degree of $G$ is $n-O(1)$, then $\mathcal{K}(G)$ is $\Theta(n)$ and hence $\mathcal{K}(G)\mathcal{K}(\overline{G})=O(n^4)$. In Section [3](#sec:n-on){reference-type="ref" reference="sec:n-on"}, we show that if the maximum degree of $G$ is at most $nU$ for some $U<1$, then $$\min\{\mathcal{K}(G),\mathcal{K}(\overline{G})\} \leq Cn$$ for some constant $C$ (possibly depending on $U$). An important corollary is that the same inequality is true for all regular graphs, but now with $C$ independent of the degree. In Section [4](#sec:join){reference-type="ref" reference="sec:join"}, we examine joins of graphs (i.e. graphs whose complements are disconnected) and show that for any such graph $\mathcal{K}(G)\leq 3n$. Finally, in Section [5](#sec:app){reference-type="ref" reference="sec:app"}, we examine particular families of graphs, and determine more specific bounds on $\mathcal{K}(G)\mathcal{K}(\overline{G})$ for various families such as regular graphs, barbell graphs, trees, strongly regular graphs and certain distance regular graphs with classical parameters. We end with some concluding remarks and a conjecture in Section [6](#sec:concl){reference-type="ref" reference="sec:concl"}. ## Notation and preliminaries {#sec:notation} We introduce the following notation and definitions which will be used throughout the remainder of this article. Given a graph $G$, we let $\delta(G)$ denote the minimum degree of a vertex of $G$, and $\Delta(G)$ denote the maximum degree. We denote by $N(i)$ the *neighbourhood* of $i$; i.e. the set of vertices adjacent to vertex $i$. The distance between two vertices $i$ and $j$, denoted $\mathop{\mathrm{dist}}(i,j)$, is defined as the length of the shortest path between vertices $i$ and $j$, while the *diameter* of $G$ is the longest pairwise distance in the graph: $$\mathop{\mathrm{diam}}(G) = \max_{i, j \in V(G)} \mathop{\mathrm{dist}}(i,j).$$ We use the following asymptotic notation. If $G_n$ represents a graph of order $n$ in a sequence or family of graphs, and $f$ is some positive-valued graph invariant which may be expressed as a function of $n$, then we write $f(G_n) = O(g(n))$ if $\limsup_{n\to\infty} \frac{f(G_n)}{g(n)}$ is finite, and understand this to mean that $f(G_n)$ has at most the same order of magnitude as $g(n)$. Furthermore, we write $f(G_n) = \Omega(g(n))$ if $g(n) = O(f(G_n))$, and $f(G_n) = \Theta(g(n))$ if $f(G_n) = O(g(n))$ and $f(G_n) = \Omega(g(n))$. We write $f(G_n) = o(g(n))$ if $\lim \frac{f(G_n)}{g(n)} = 0$, $f(G_n) = \omega(g(n))$ if $\lim \frac{f(G_n)}{g(n)} = \infty$, and $f(G_n) \sim g(n)$ if $\lim \frac{f(G_n)}{g(n)} = 1$. We usually omit the subscript $n$ when referring to a family of graphs of order $n$. Some of our proofs are probabilistic, and we shall use the following notation for the random variables corresponding to the random walk on $G$ (or, in modified form, to the random walk on $\overline{G}$). 
We write the random walk Markov chain on $G$ as a sequence $X_0,X_1,X_2,\ldots$ of $V(G)$-valued random variables, with one-step transition probabilities given by $P(\cdot,\cdot)$ as follows: $$\label{eq.Pvw} P(v,w) \;=\; \Pr(X_1\,=\,w\,|\,X_0=v) \;=\; \begin{cases} (d_v)^{-1}, & \text{if }\{v,w\}\in E(G); \\ 0, & \text{otherwise} . \end{cases}$$ More generally, for $t\in \mathbb{N}$, we shall write the $t$-step transition probabilities as $P^{(t)}(v,w)\,=\,\Pr(X_t=w\,|\,X_0=v)$. Assuming that $G$ is connected, the Markov chain has a unique stationary distribution, which we shall denote $\pi(\cdot)$. It is well known that $$\pi(v) \;=\; \frac{ {d_v}}{2m} \hspace{5mm}\hbox{for every $v\in V(G)$, where $m=|E(G)|$.}$$ For the Markov chain that starts at a specified vertex $v$, we write $P_v(\cdot)$ to denote $P(\cdot |X_0=v)$ and ${\mathbb E}_v(\cdot)$ to denote the expectation $\mathbb{E}(\cdot| X_0=v)$. For a given vertex $j$, we define the random variable $T_j$ to be the *first passage time* of the vertex $j$ by the random walk $\{X_t\}$; that is, $$T_j \;=\; \min\{ t\geq 1\,: \; X_t=j \} \,.$$ Thus our mean first-passage time $m_{i,j}$ is $\mathbb{E}_i(T_j)$. When $j=i$, we sometimes call $m_{ii}$ the mean return time to $i$. It is a well known fact about Markov chains (e.g., Proposition 1.14(ii) of Levin, Peres, and Wilmer 2009) that $$\label{eq.piprop} \mathbb{E}_i(T_i) \;=\; \frac{1}{\pi(i)} \;=\; \frac{2m}{d_i} \hspace{5mm}\hbox{for every $i\in V(G)$.}$$ # Nordhaus-Gaddum when maximum degree is $n-O(1)$ {#sec:n-O1} Consider a random walk on a connected graph $G$ with $n$ vertices and $m$ edges. Let $m_{i,j}$ be the mean first passage time from $i$ to $j$. It is known from [@chandra1989electrical] that $m_{i,j}+m_{j,i} = 2mr_{i,j}$. Hence, for any $i \in \{1, 2, \ldots, n\}$, $$\begin{aligned} \label{inequality1} \mathcal{K}(G) = \sum_{\substack{j=1\\j\neq i}}^n \bigl(\frac{d_j}{2m}\bigr) m_{i,j}< \sum_{\substack{j=1\\j\neq i}}^n \bigl(\frac{d_j}{2m}\bigr) (m_{i,j}+m_{j,i}) = \sum_{\substack{j=1\\j\neq i}}^n d_j r_{i,j}\end{aligned}$$ where $d_j$ is the degree of vertex $j$. It is found in the proof of [@palacios2010kirchhoff Proposition 3] that $r_{i,j}\leq \mathop{\mathrm{diam}}(G)$. We immediately obtain the following upper bound on Kemeny's constant: $$\begin{aligned} \label{inequality2} \mathcal{K}(G) < 2m\mathop{\mathrm{diam}}(G).\end{aligned}$$ **Remark 1**. For a graph $G$ of order $n$, the maximum diameter is $n-1$, and the maximum number of edges is $\frac{n(n-1)}{2}$, though these are not attained by the same graph. Thus this inequality [\[inequality2\]](#inequality2){reference-type="eqref" reference="inequality2"} gives an alternative proof that $\mathcal{K}(G)=O(n^3)$. Families of graphs with $\mathcal{K}(G)=\Theta(n^3)$ are given in [@breen2019computing]. This inequality also gives insight into the influence of diameter on the value of Kemeny's constant. In particular, we have the following result for graphs of small diameter. **Proposition 2**. *Let $G$ be a graph on $n$ vertices with $\mathop{\mathrm{diam}}(G) = O(1)$. Then, $\mathcal{K}(G)=O(n^2)$.* The following example shows that a sequence of graphs of fixed constant diameter can attain $\mathcal{K}(G) = \Theta(n^2)$. **Example 3**. Let $n\geq 4$. Consider a graph $G$ obtained from $K_{a}$ and $K_{b}$ by adding an edge between a vertex in $K_{a}$ and a vertex in $K_{b}$. The order of $G$ is $n=a+b$ and $\mathop{\mathrm{diam}}(G) = 3$. 
Applying [@breen2022kemeny Proposition 2.2], and letting $m$ denote the number of edges in $G$, we have $$\begin{aligned} \mathcal{K}(G) = & \frac{1}{m}\left(\frac{1}{2}(a-1)^3+\frac{1}{2}(b-1)^3+\frac{(b^2-b+2)(a-1)^2}{a}+\frac{(a^2-a+2)(b-1)^2}{b}\right)\\ &+\frac{1}{2m}(a^2-a+1)(b^2-b+1). \end{aligned}$$ Setting $a=\lceil\frac{n}{2}\rceil$ and $b = \lfloor\frac{n}{2}\rfloor$, we can find that $\mathcal{K}(G)\sim \frac{n^2}{8}$.

In the remainder of this section, we determine an upper bound on each summand $d_jr_{i,j}$ in [\[inequality1\]](#inequality1){reference-type="eqref" reference="inequality1"} in order to find an upper bound on $\mathcal{K}(G)$. We pursue this by considering spanning trees and spanning 2-forests of specific types. First, we introduce some notation. Given a connected graph $G$ and two vertices $i$ and $j$, we let $\tau_{i,j}$ denote the number of spanning trees of $G$ in which $i$ and $j$ are adjacent. We use $\mathcal{F}(x,y;z)$ to denote the set of spanning $2$-forests of $G$ where one component contains vertices $x$ and $y$, and the other contains $z$.

**Lemma 4**. *Let $G$ be a connected graph on $n$ vertices, and suppose that $i$ and $j$ are adjacent vertices in $G$. Then $f_{i,j}=\tau_{i,j}$.*

*Proof.* We prove this by giving a bijection between the two sets. Let $f\in \mathcal{F}(i;j)$. Then $i$ and $j$ are in different components of $f$, but they are adjacent in $G$. The addition of the edge $\{i,j\}$ to $f$ produces a spanning tree of $G$ in which $i$ and $j$ are adjacent; it is easily seen that this is a bijective map whose inverse consists of removing the edge $\{i,j\}$ from a spanning tree of $G$ in which this edge appears, producing a unique forest in $\mathcal{F}(i;j)$. ◻

**Theorem 5**. *Let $G$ be a connected graph on $n$ vertices, and $i,j\in V(G)$. Let $Z_j$ be the set of vertices $v$ in $N(j)$ such that $v\neq i$ and $v$ is not adjacent to $i$, and $Z_i$ be the set of vertices $v$ in $N(i)$ such that $v\neq j$ and $v$ is not adjacent to $j$. Then, $$\begin{aligned} d_jf_{i,j} \leq \begin{cases*} 2\tau + \sum_{v\in Z_j}\left|\mathcal{F}(j,v;i)\right|, & \text{if $i$ and $j$ are not adjacent,}\\ 2\tau + \sum_{v\in Z_j}\left|\mathcal{F}(j,v;i)\right|-\tau_{i,j}, & \text{if $i$ and $j$ are adjacent,}\\ \end{cases*} \end{aligned}$$ where the equality holds if and only if for each $z \in Z_i$, any path from $z$ to $j$ in $G$ passes through $i$. Moreover, for any fixed $i$, $$\begin{aligned} \label{the upper for Kemeny} \mathcal{K}(G)< \sum_{j\neq i}d_jr_{i,j} < 2(n-1)+\sum_{j\neq i}|Z_j|r_{i,j}. \end{aligned}$$*

*Proof.* Let $i$ and $j$ be vertices of $G$. Let $N(j)\backslash\{i\}=\{v_1,\dots,v_{s_0}\}$. If $i$ and $j$ are adjacent, then $s_0=d_j-1$; otherwise, $s_0=d_j$. For $k=1,\dots,s_0$, we define $\mathcal{F}_{v_k}(i;j)$ to be the set obtained from $\mathcal{F}(i;j)$ by labelling $v_k$ as a root. Similarly, for each $v\in Z_j$, we define $\mathcal{F}_{v}(j,v;i)$ to be the set obtained from $\mathcal{F}(j,v;i)$ by labelling $v$ as a root. We define $\mathcal{T}_i$ (resp. $\mathcal{T}_j$) to be the set of spanning rooted trees of $G$ with root $i$ (resp. with root $j$). Let $\mathcal{F} = \bigcup_{k=1}^{s_0}\mathcal{F}_{v_k}(i;j)$, and $\mathcal{T} = \mathcal{T}_i\cup\mathcal{T}_j\cup\left(\bigcup_{v\in Z_j}\mathcal{F}_v(j,v;i)\right)$. Note that $|\mathcal{F}| = (d_j-1)f_{i,j}$ if $i$ is adjacent to $j$, and $d_jf_{i,j}$ otherwise.
For the proof, we shall define a map between these two combinatorial objects $\mathcal{F}$ and $\mathcal{T}$, show that it is injective, and give the conditions under which it is also surjective; this determines the inequality as stated, and also characterizes the equality case. Define a map $\psi:\mathcal{F}\rightarrow \mathcal{T}$ as follows: for each $f \in \mathcal{F}$,

1. If $f\in\mathcal{F}_{v_k}(i;j)$ for some $1\leq k\leq s_0$ so that $v_k$ is connected to $i$ in $f$, then $\psi(f)$ is a spanning rooted tree of $G$ obtained from $f$ by adding an edge $\{v_k,j\}$ and re-labelling $j$ as a root.

2. If $f\in\mathcal{F}_{v_k}(i;j)$ for some $1\leq k\leq s_0$ so that $v_k$ is connected to $j$ in $f$ and $v_k\notin Z_j$, then $\psi(f)$ is a spanning rooted tree of $G$ obtained from $f$ by adding an edge $\{v_k,i\}$ and re-labelling $i$ as a root.

3. If $f\in\mathcal{F}_{v_k}(i;j)$ for some $1\leq k\leq s_0$ so that $v_k$ is connected to $j$ in $f$ and $v_k\in Z_j$, then $\psi(f)=f$.

Since $f$ is a spanning $2$-forest, $v_k$ is connected to either $i$ or $j$. If $v_k\notin Z_j$, then $v_k$ is adjacent to $i$, so the addition of $\{v_k,i\}$ in (ii) is well-defined. Hence, $\psi$ is well-defined. We claim that $\psi$ is injective. Suppose that $\psi(f) = \psi(g)$ for some $f,g\in\mathcal{F}$. If $\psi(f)$ and $\psi(g)$ are spanning forests, then by (iii), we have $f=g$. Suppose that $\psi(f)$ and $\psi(g)$ are spanning trees. Since $\psi(f)$ and $\psi(g)$ have the same root, the newly inserted edges to obtain $\psi(f)$ and $\psi(g)$ are both incident to either $i$ or $j$. Furthermore, they must be on the unique path from $i$ to $j$ in $\psi(f)$ and $\psi(g)$. Hence, the newly inserted edges are the same. Therefore, $f$ and $g$ must have the same root and so $f = g$. Now we consider under what circumstances $\psi$ is surjective. Let $t\in \mathcal{T}$. Note $\mathcal{F}_v(j,v;i)\subseteq \mathcal{F}_v(j;i)$. If $t\in \bigcup_{v\in Z_j}\mathcal{F}_v(j,v;i)$, then $\psi(t) = t$. Suppose that $i$ and $j$ are not adjacent. Let $t\in\mathcal{T}_i\cup\mathcal{T}_j$. There exists a vertex $w$ such that $w\neq i$, $w\neq j$, and it is adjacent to the root $r_0$ of $t$ and on the path from $i$ to $j$ (here either $r_0=i$ or $r_0=j$). Deleting the edge $\{w,r_0\}$ from $t$ and assigning $w$ as the root, we obtain a spanning $2$-forest $f$ with root $w$. If $t\in \mathcal{T}_j$, then $w=v_k$ for some $1\leq k\leq s_0$ so that $f\in\mathcal{F}_{v_k}(i;j)$ and $\psi(f)=t$. Suppose $t\in\mathcal{T}_i$. Then, either $w\in N(j)$ or $w\in Z_i$. Obviously, if $w\in N(j)$ then $\psi(f)=t$. Note that if $w\in Z_i$ then $i$ is not on the path from $w$ to $j$ in $t$. Therefore, $\psi$ is surjective if and only if for each $z \in Z_i$, any path from $z$ to $j$ in $G$ passes through $i$. For the remaining case, we assume that $i$ and $j$ are adjacent. Note that $s_0=d_j-1$. Let $t\in\mathcal{T}_i\cup\mathcal{T}_j$. If $t$ does not have an edge $\{i,j\}$, then the same argument above can be applied. Moreover, from Lemma [Lemma 4](#lem:forest i and j adjacent){reference-type="ref" reference="lem:forest i and j adjacent"}, the number of spanning trees containing $\{i,j\}$ in each of $\mathcal{T}_i$ and $\mathcal{T}_j$ is $f_{i,j}=\tau_{i,j}$. Hence, we have $s_0f_{i,j}\leq 2\tau + \sum_{v\in Z_j}\left|\mathcal{F}(j,v;i)\right|-2\tau_{i,j}$. Therefore, since $d_j=s_0+1$ and $f_{i,j}=\tau_{i,j}$, we obtain the upper bounds for $d_jf_{i,j}$ with the same condition for the equality.
Finally, we can see $|\mathcal{F}(j,v;i)|\leq f_{i,j}$ and hence $d_jf_{i,j}\leq 2\tau+|Z_j|f_{i,j}$ for any $j\in V(G)\backslash\{i\}$. Note that if $i$ and $j$ are adjacent, then $d_jf_{i,j}< 2\tau+|Z_j|f_{i,j}$. Equation [\[the upper for Kemeny\]](#the upper for Kemeny){reference-type="eqref" reference="the upper for Kemeny"} follows from [\[inequality1\]](#inequality1){reference-type="eqref" reference="inequality1"}. ◻

**Corollary 6**. *Let $G$ be a connected graph on $n$ vertices with maximum degree $n-1$. Then, $\mathcal{K}(G)<2(n-1)$ and hence $\mathcal{K}(G)=\Theta(n)$.*

**Corollary 7**. *Let $G$ be a connected graph on $n$ vertices with maximum degree $n-O(1)$. Then, $\mathcal{K}(G)=O(n)$. This implies that $\mathcal{K}(G)\mathcal{K}(\overline{G})=O(n^4)$ if $\overline{G}$ is connected.*

*Proof.* Assume that there is a constant $C$ such that $\Delta(G)\geq n-C$. Let $i$ be a vertex with the maximum degree. It follows that $|Z_j|\leq C-1$ for every $j\in V(G)\setminus\{i\}$ and $\mathop{\mathrm{diam}}(G)\leq C+1$. Since $r_{i,j}\leq \mathop{\mathrm{diam}}(G)$, we have $\sum_{j\neq i}|Z_j|r_{i,j} \leq (n-1)(C-1)(C+1)$. The conclusion follows. ◻

**Example 8**. The complements of the barbell-type graphs in [@breen2019computing] have maximum degree $n-3$, and the complement of a tree has maximum degree $n-2$. Hence, their Kemeny's constants are $O(n)$ when they are connected. In Section [5](#sec:app){reference-type="ref" reference="sec:app"}, $\mathcal{K}(G)\mathcal{K}(\overline{G})$ will be explicitly considered.

The following corollary will be used in the next section.

**Corollary 9**. *Let $G$ be a connected graph on $n$ vertices such that $\Delta(G)+\delta(G)\geq n$. Then $$\mathcal{K}(G) \;< \; \left( \frac{2\delta(G)}{\delta(G)+\Delta(G)-n+1}\right) \,n \,.$$ In particular, $\mathcal{K}(G)$ must be less than $2n^2$.*

*Proof.* Assume $\Delta(G)+\delta(G)\geq n$, and let $i$ be a vertex of maximum degree. Since $|Z_j|\leq n-1-\Delta(G)$ for all $j\in V(G)\setminus\{i\}$, we find that $$\begin{aligned} \sum_{j\neq i}|Z_j|r_{i,j} \; \leq \; (n-1-\Delta(G))\sum_{j\neq i}r_{i,j} \;\leq \; (n-1-\Delta(G))\sum_{j\neq i}\frac{d_j \,r_{i,j}}{\delta(G)} . \end{aligned}$$ From [\[the upper for Kemeny\]](#the upper for Kemeny){reference-type="eqref" reference="the upper for Kemeny"}, we have $\sum_{j\neq i}d_jr_{i,j} < 2(n-1)+\sum_{j\neq i}|Z_j|r_{i,j}$. Hence, $$\left(\frac{\delta(G)+\Delta(G)-n+1}{\delta(G)}\right) \sum_{j\neq i}d_jr_{i,j} < 2n-2.$$ Note that $\delta(G)+\Delta(G)-n+1>0$ by our assumption. Using [\[the upper for Kemeny\]](#the upper for Kemeny){reference-type="eqref" reference="the upper for Kemeny"}, we obtain $$\mathcal{K}(G) \;<\; \frac{2\delta(G) \,n}{\delta(G)+\Delta(G)-n+1}.\qedhere$$ ◻

# Nordhaus-Gaddum bounds when maximum degree is $n-\Omega(n)$ {#sec:n-on}

In this section, we examine the relationship between some properties of a graph $G$ and its complement $\overline{G}$. We begin by stating our results and presenting some short proofs. Then we shall introduce some notation and background that we shall need for the longer proof of Proposition [Proposition 14](#prop.complementLU){reference-type="ref" reference="prop.complementLU"}. For $v\in V(G)$, in addition to our usual notation of $d_v$ for the degree of $v$ in $G$, we shall also write $\overline{d}_v$ for the degree of $v$ in $\overline{G}$. Then $d_v+{\overline{d}}_v=n-1$ for every $v$. Recall that $\Delta(G)$ is the maximum degree in $G$ and $\delta(G)$ is the minimum.

**Theorem 10**. *Let $U$ be a real constant such that $0<U<1$.
Then there is a constant $\Psi_U$ such that for every $n\in\mathbb{N}$ and every graph $G$ on $n$ vertices such that $\Delta(G)\leq Un$, $$\min\left\{ \mathcal{K}(G), \, \mathcal{K}(\overline{G})\right\} \;\leq \; n\Psi_U \,.$$*

**Remark 11**. We do not assume that the graphs are connected. Note that $G$ and $\overline{G}$ cannot both be disconnected. In particular, for every graph $G$, at least one of $\mathcal{K}(G)$ or $\mathcal{K}(\overline{G})$ is finite. Consequently, it suffices to prove the theorem for all $n\geq n_0$, where $n_0$ is a $U$-dependent constant.

Since $\Delta(\overline{G}) = n-1-\delta(G)$, the following corollary is immediate.

**Corollary 12**. *Let $L$ be a real constant such that $0<L<1$. Then for every $n\in\mathbb{N}$ and every graph $G$ on $n$ vertices such that $\delta(G)\geq Ln$, $$\min\left\{ \mathcal{K}(G), \, \mathcal{K}(\overline{G})\right\} \;\leq \; n\Psi_{1-L} \,,$$ where $\Psi_{1-L}$ is the constant from Theorem [Theorem 10](#thm.complementU){reference-type="ref" reference="thm.complementU"} with $U=1-L$.*

We also obtain a linear bound that holds for all regular graphs uniformly in degree.

**Corollary 13**. *There exists a constant $\Psi_{reg}$ such that $$\min\left\{ \mathcal{K}(G), \, \mathcal{K}(\overline{G})\right\} \;\leq \; n\Psi_{reg}$$ for every regular graph $G$ with $n$ vertices.*

Indeed, we can take $\Psi_{reg}$ to be $\Psi_{1/2}$ from Theorem [Theorem 10](#thm.complementU){reference-type="ref" reference="thm.complementU"}. This is because for any regular $n$-vertex graph, either the graph or its complement must have its degree less than or equal to $(1/2)n$. As we shall show, Theorem [Theorem 10](#thm.complementU){reference-type="ref" reference="thm.complementU"} follows easily from the following proposition and Corollary [Corollary 9](#cor-deltaplusbound){reference-type="ref" reference="cor-deltaplusbound"}.

**Proposition 14**. *Let $L$ and $U$ be real constants such that $0<L< U<1$. Then there is a constant $\Psi_{L,U}$ such that for every $n\in\mathbb{N}$ and every graph $G$ on $n$ vertices such that $Ln\leq \delta(G)\leq \Delta(G)\leq Un$, $$\min\left\{ \mathcal{K}(G), \, \mathcal{K}(\overline{G})\right\} \;\leq \; n\,\Psi_{L,U} \,.$$*

**Proof of Theorem [Theorem 10](#thm.complementU){reference-type="ref" reference="thm.complementU"}:** First, recalling Remark [Remark 11](#rem.Gconn){reference-type="ref" reference="rem.Gconn"}, we note that it suffices to prove the result for sufficiently large $n$. So we shall assume that $n\geq 12/(1-U)$. Without loss of generality, assume $U>1/2$. Let $L=(1-U)/4$. Let $G$ be a graph on $n$ vertices such that $\Delta(G)\leq Un$. On the one hand, if $\delta(G)\geq Ln$, then Proposition [Proposition 14](#prop.complementLU){reference-type="ref" reference="prop.complementLU"} implies that $\min\left\{ \mathcal{K}(G), \, \mathcal{K}(\overline{G})\right\} \;\leq \; n\Psi_{L,U}$.
On the other hand, if $\delta(G)<Ln$, then consider the complement: $$\begin{aligned} \Delta(\overline{G})+\delta(\overline{G}) \;& = \; (n-1-\delta(G)) \,+\,(n-1-\Delta(G)) \\ & > \; 2n-2 -Ln-Un \\ & \geq \; 2n +1 -\left(\frac{1-U}{4}\right)n -\left(\frac{1-U}{4}\right)n -Un \\ & \hspace{45mm} \left(\text{since $3\leq\frac{(1-U)n}{4}$}\right) \\ & =\;n+ 1+ \left( \frac{1-U}{2}\right)n \,.\end{aligned}$$ Now Corollary [Corollary 9](#cor-deltaplusbound){reference-type="ref" reference="cor-deltaplusbound"} tells us that $$\mathcal{K}(\overline{G}) \;\leq \; \left( \frac{2n}{\left(\frac{1-U}{2}\right)n } \right) n \;=\; \frac{4n}{1-U} \,.$$ Theorem [Theorem 10](#thm.complementU){reference-type="ref" reference="thm.complementU"} follows. $\Box$ We now introduce some notation for this section. We shall write $V$ for $V(G)$, which is the same as $V(\overline{G})$. However, for edges, we shall be careful to distinguish between $E(G)$ and $E(\overline{G})$. When $S$ and $T$ are disjoint subsets of $V$, we define $[S,T]_G$ to be the set of all edges of $G$ that have one endpoint in $S$ and one endpoint in $T$. The analogue in $\overline{G}$ is $[S,T]_{\overline{G}}$. Recall from Equation ([\[eq:eig\]](#eq:eig){reference-type="ref" reference="eq:eig"}) that for every graph $G$ with $n$ vertices, we have $$\label{eq.Keigensum} \mathcal{K}(G) \;=\; \sum_{i=2}^n \frac{1}{1-\lambda_i}$$ where $\lambda_1=1\geq \lambda_2\geq \lambda_3 \geq \ldots \geq \lambda_n$ are the eigenvalues of the normalized adjacency matrix $D^{-1}A$ (i.e., the transition probability matrix of the random walk on $G$). (Equation ([\[eq.Keigensum\]](#eq.Keigensum){reference-type="ref" reference="eq.Keigensum"}) also holds if $G$ is disconnected, on the understanding that for a disconnected graph $G$ we have $\mathcal{K}(G)=\infty$ and $\lambda_2=1$.) Therefore, we have $$\label{eq.Kembounds1} \frac{1}{1-\lambda_2} \;\leq \;\mathcal{K}(G_n) \;\leq \; \frac{n}{1-\lambda_2} \hspace{5mm}\text{for every graph $G_n$ on $n$ vertices}.$$ For $S\subseteq V$, let $${\rm vol}(S)\;:=\; \sum_{v\in S} d_v \,.$$ Note that ${\rm vol}(V)\,=\,2|E(G)|$. For example, in a regular graph of degree $d$, ${\rm vol}(S)=d|S|$. The *bottleneck ratio* of the graph $G$ is defined to be $$\label{eq.defPhi} \Phi \;=\; \Phi(G) \;=\; \min_{S\subseteq V: \,0<{\rm vol}(S)\leq |E(G)|} \frac{ |\,[S,S^c]_G|}{{\rm vol}(S)} \,.$$ The classic work of Jerrum and Sinclair [@JeSi] and Lawler and Sokal [@LaSo] proved $$\label{eq.jerrum} \frac{\Phi^2}{2} \;\leq \; 1-\lambda_2 \;\leq \; 2\Phi.$$ (Again, we note consistency when $G$ is disconnected, since then $\Phi=0$.) For an overview from the point of view of discrete reversible Markov chains, of which our context is a special case, see Sections 7.2 and 13.3 of [@LPW]. Combining ([\[eq.Keigensum\]](#eq.Keigensum){reference-type="ref" reference="eq.Keigensum"}) and ([\[eq.jerrum\]](#eq.jerrum){reference-type="ref" reference="eq.jerrum"}) yields $$\label{eq.Kemjerrbd} \frac{1}{2\Phi(G)} \;\leq \; \mathcal{K}(G) \;\leq \; \frac{2n}{\Phi(G)^2} \,.$$ **Proof of Proposition [Proposition 14](#prop.complementLU){reference-type="ref" reference="prop.complementLU"}:** Let $L$ and $U$ be the constants specified in the statement of the proposition. 
We shall require a large parameter $M$ and a small parameter $\epsilon$ which we define by $$\label{eq.defMeps} M \; :=\; \frac{8}{L(1-U)} \hspace{5mm} \text{and} \hspace{5mm} \epsilon \; := \; \frac{1}{M^2} \;=\; \frac{L^2(1-U)^2}{64} \,.$$ We also let $$\label{eq.psiminusdef} \Psi_- \; :=\; \frac{2}{\epsilon^2} \,.$$ We shall obtain another constant $\Psi_+$ with the property that, for the graphs under consideration, > If $\mathcal{K}(G) \,>\, n\Psi_-$, then $\mathcal{K}(\overline{G}) \,\leq\, n\Psi_+$. Then the proposition will follow by taking $\Psi_{LU}$ to be $\max\{\Psi_-,\Psi_+\}$. Let $G$ be a graph with $n$ vertices such that $\mathcal{K}(G)>n\Psi_-$ and $Ln\leq \delta(G)\leq \Delta(G)\leq Un$. Then $$\label{eq.degbound1} Ln \;\leq \; d_v \; \leq \; Un \hspace{10mm} \forall v\in V \,.$$ For future reference, we also note that $$\label{eq.volSvol} \frac{ {\rm vol}(T)}{Un} \;\leq \; |T| \;\leq \; \frac{{\rm vol}(T)}{Ln} \hspace{10mm} \forall \,T\subseteq V.$$ In the complement $\overline{G}$, we have $n-1-Un \leq \delta(\overline{G})\leq \Delta(\overline{G})\leq n-1-Ln < (1-L)n$. By restricting our attention to sufficiently large $n$ (it suffices that $n>2/(1-U)$), we have that $$\label{eq.degbound1c} \frac{(1-U)\,n}{2} \;\leq \; \overline{d}_v \; \leq \; (1-L)\,n \hspace{10mm} \forall \,v\in V \,.$$ Since it suffices to prove the proposition for sufficiently large $n$, we shall henceforth assume that $n$ is large enough so that Equation ([\[eq.degbound1c\]](#eq.degbound1c){reference-type="ref" reference="eq.degbound1c"}) holds. Here is an outline of the proof. *Step 1:* Since $\mathcal{K}(G)$ is large, the bottleneck ratio of $G$ must be small. This means that there is a set $S$ of vertices such that there are relatively few edges in the cut $[S,S^c]_G$. Moreover, we show that the sets $S$ and $S^c$ each have at least order $n$ vertices. The idea then is that the corresponding complementary cut $[S,S^c]_{\overline{G}}$ contains most of the $|S|\,|S^c|$ possible edges between $S$ and $S^c$. (If it contained every possible edge between $S$ and $S^c$, which happens only if $G$ is disconnected, then Theorem [Theorem 16](#thm.joinbound){reference-type="ref" reference="thm.joinbound"} would imply that $\mathcal{K}(\overline{G})$ would be $O(n)$.) *Step 2:* We specify two sets of vertices $A\subseteq S$ and $B\subseteq S^c$ such that every vertex in $A$ is connected (in $\overline{G}$) to most of $S^c$, every vertex in $B$ is connected to most of $S$, and the set differences $S-A$ and $S^c-B$ are both relatively small. *Step 3:* Consider an arbitrary vertex $x$, whose first passage time in $\overline{G}$ we shall be investigating. We show first that either $A$ or $B$ contains a substantial number (order $n$) of neighbours (in $\overline{G}$) of $x$. *Step 4:* We now look at properties of the random walk on $\overline{G}$. We prove that there is a $\Theta>0$ (depending on $L$ and $U$ but not $n$) such that from every vertex of $A$ (respectively, $B$), the probability of entering $B$ (respectively, $A$) on the next step is at least $\Theta$. We also prove that from any vertex, the probability that the next step is to $A\cup B$ is not small (in fact, at least $1/2$). 
*Step 5:* Wherever the walker happens to be, there is a probability of at least order $1/n$ that vertex $x$ will be visited within the next four steps (it is always likely to enter $A\cup B$ in one step, then it has a reasonable chance of visiting a neighbour of $x$ in either one or two more steps, and from that neighbour the probability of going to $x$ is at least $1/n$). So the first passage time of $x$ is at least as fast as the time until the first Heads when we toss a coin every four time steps and the probability of Heads is of order $1/n$ on each toss. The expected time to the first Heads in this situation is order $n$, with a constant that is independent of $x$. The bound $\mathcal{K}(\overline{G})=O(n)$ follows. Now we return to the proof. *Step 1.* Since $\mathcal{K}(G) \,>\, n\Psi_-$, it follows from Equations ([\[eq.Kemjerrbd\]](#eq.Kemjerrbd){reference-type="ref" reference="eq.Kemjerrbd"}) and ([\[eq.psiminusdef\]](#eq.psiminusdef){reference-type="ref" reference="eq.psiminusdef"}) that $\Phi(G) \; < \; \epsilon$. Therefore, by the definition of $\Phi(G)$ in Equation ([\[eq.defPhi\]](#eq.defPhi){reference-type="ref" reference="eq.defPhi"}), there is a nonempty subset $S$ of $V$ such that ${\rm vol}(S)\leq |E(G)|$ and $$\label{eq.SPhibeta} |\,[S,S^c]_G| \;< \; \epsilon \,{\rm{vol}}(S) \;\leq \; \epsilon \,n\,U\,|S|$$ (the second inequality uses Equation ([\[eq.volSvol\]](#eq.volSvol){reference-type="ref" reference="eq.volSvol"})). Also, we have $$|S^c|\,Un \;\geq \; {\rm vol}(S^c) \;=\; 2|E(G)|-{\rm vol}(S) \;\geq \;|E(G)| \;\geq \; \frac{L\,n^2}{2},$$ and hence $$\label{eq.2UScn} \frac{2U}{L}\, |S^c| \;\geq \; n \,.$$ Let $C\,=\, (L-\epsilon U)/2$. Since $\epsilon<L$ by Equation ([\[eq.defMeps\]](#eq.defMeps){reference-type="ref" reference="eq.defMeps"}), we see that $$\label{eq.Cbound} \frac{L}{2} \;>\; C \; > \; \frac{L(1-U)}{2} \;> \; 0.$$ We claim that $|S|\,>\,n C$. If not, then every vertex $v$ in $S$ has at most $nC$ neighbours in $S$ (in $G$), and hence $$|\,[v,S^c]_G| \; \geq \; Ln \,-\, Cn \;> \; \epsilon U n \,. % \;\geq \;n\,\beta.$$ Summing the above inequality over all $v\in S$ gives $|\,[S,S^c]_G| \,> \, n\,U\epsilon \,|S|$, which contradicts Equation ([\[eq.SPhibeta\]](#eq.SPhibeta){reference-type="ref" reference="eq.SPhibeta"}). 
This proves the claim that $$\label{eq.SgtnC} |S| \;>\; nC \, \hspace{5mm}\text{where }C\;=\; \frac{L-\epsilon U}{2}.$$ *Step 2.* Note that $M\epsilon \,=\, L(1-U)/8\,<\,1/2$ and hence $$\label{eq.Mepsbd} 1-M\epsilon \;> \; \frac{1}{2} \,.$$ Define the sets of vertices $A\subseteq S$ and $B\subseteq S^c$ by $$\begin{aligned} \label{eq.Adef1} A & = & \{v\in S\,:\, |\,[v,S^c]_{\overline{G}}| \,\geq \, |S^c|(1-M\epsilon) \,\} \\ \label{eq.Adef2} & = & \{v\in S\,:\, |\,[v,S^c]_{G}| \,\leq\, M\epsilon \, |S^c| \, \} \, , \hspace{7mm} \text{and} \\ \label{eq.Bdef1} B & = & \{w\in S^c\,:\, |\,[w,S]_{\overline{G}}| \,\geq \, |S|(1-M\epsilon) \,\} \\ \label{eq.Bdef2} & = & \{w\in S^c\,:\, |\,[w,S]_{G}| \,\leq\, M\epsilon \, |S| \, \} \,.\end{aligned}$$ From Equation ([\[eq.Adef2\]](#eq.Adef2){reference-type="ref" reference="eq.Adef2"}), it follows that $$|\,[S,S^c]_G| \;\geq \; \sum_{v\in S-A}|\,[v,S^c]_G| \;\;\geq \;\; |S-A|\times M\epsilon |S^c|$$ and hence (using Equations ([\[eq.SPhibeta\]](#eq.SPhibeta){reference-type="ref" reference="eq.SPhibeta"}) and ([\[eq.volSvol\]](#eq.volSvol){reference-type="ref" reference="eq.volSvol"}), and the bounds ${\rm vol}(S)\leq |E(G)|\leq {\rm vol}(S^c)$) that $$\label{eq.SminusAbound} |S-A|\;\leq \; \frac{ |\,[S,S^c]_G|}{M\epsilon \,|S^c|} \;\leq \; \frac{\epsilon \, {\rm vol}(S)}{M\epsilon \,({\rm vol}(S^c)/Un)} \;\leq \; \frac{nU}{M}\,.$$ Similarly, we have $$|\,[S,S^c]_G| \;\geq \; \sum_{w\in S^c-B}|\,[w,S]_G| \;\;\geq \; \; |S^c-B|\times M\epsilon |S|$$ and hence (using Equation ([\[eq.SPhibeta\]](#eq.SPhibeta){reference-type="ref" reference="eq.SPhibeta"})) that $$\label{eq.ScminusBbound} |S^c-B|\;\leq \; \frac{ |\,[S,S^c]_G|}{M\epsilon \,|S|} \;\leq \; \frac{nU}{M} \,.$$ By Equations ([\[eq.SminusAbound\]](#eq.SminusAbound){reference-type="ref" reference="eq.SminusAbound"}), ([\[eq.SgtnC\]](#eq.SgtnC){reference-type="ref" reference="eq.SgtnC"}), ([\[eq.ScminusBbound\]](#eq.ScminusBbound){reference-type="ref" reference="eq.ScminusBbound"}), and ([\[eq.2UScn\]](#eq.2UScn){reference-type="ref" reference="eq.2UScn"}), we have $$\begin{aligned} \nonumber % \label{eq.Alowerbound} |A| &\geq & |S|\,-\, n\frac{U}{M} \;\; \geq \; n\left( C - \frac{U}{M} \right) \quad\text{ and} \\ \nonumber % \label{eq.Alowerbound2} |B| &\geq & |S^c|\,-\, n\frac{U}{M} \;\; \geq \; n\left( \frac{L}{2U} - \frac{U}{M} \right) \,. \end{aligned}$$ We note the above two lower bounds are strictly positive because $$\label{eq.U2MLCbound} \frac{U}{M}\;=\; \frac{UL(1-U)}{8} \;<\; \frac{C}{4} \;<\; \frac{L}{8U} \hspace{6mm}\text{by Eq.\ (\ref{eq.Cbound})}.$$ *Step 3.* Now consider an arbitrary vertex $x\in V$. We want to show that the expected time for a random walk on $\overline{G}$ to reach $x$ from any other initial vertex $v$ is bounded by $n\Psi_+$ uniformly in $x$ and $v$. We now introduce some notation. 
For a vertex $w\in V$, as $N(w)$ is the neighbourhood of $w$ in $G$, let $\overline{N}(w)$ be the neighbourhood of $w$ in $\overline{G}$: $$\begin{aligned} N(w) & =& \{ u\in V\,:\, \{w,u\}\in E(G)\,\} \hspace{5mm}\text{and} \\ \overline{N}(w) & = & \{ u\in V\,:\, \{w,u\}\in E(\,\overline{G}\,)\,\}\,.\end{aligned}$$ Next, we define the set of neighbours in $\overline{G}$ of $x$ in $A$ and in $B$ respectively: $$A_x \;=\; A\cap \overline{N}(x) \hspace{5mm}\text{and}\hspace{5mm} B_x\;=\; B\cap \overline{N}(x) \,,$$ Then we have $$|A_x|\,+\,|B_x| \;\leq \; \overline{d}_x \;\leq \; |A_x|\,+\,|B_x|\,+\,|S-A| \,+\,|S^c-B|\,.$$ Therefore, by Equations ([\[eq.degbound1c\]](#eq.degbound1c){reference-type="ref" reference="eq.degbound1c"}), ([\[eq.SminusAbound\]](#eq.SminusAbound){reference-type="ref" reference="eq.SminusAbound"}), and ([\[eq.ScminusBbound\]](#eq.ScminusBbound){reference-type="ref" reference="eq.ScminusBbound"}), $$\begin{aligned} \nonumber |A_x|\,+\,|B_x| & \geq & \overline{d}_x \,-\, |S-A| \,-\, |S^c-B| \\ \nonumber & \geq & n\, \frac{1-U}{2} \,-\, 2n\,\frac{U}{M} \\ \label{eq.AxBxbd2} & = & 2n\Gamma \,, \\ \nonumber & & \hspace{5mm} \text{where}\hspace{5mm} \Gamma \;:=\; \frac{1}{2} \left(\frac{1-U}{2} -\frac{2U}{M} \right) \,. \end{aligned}$$ Since $$\frac{2U}{M} \;=\; \frac{UL(1-U)}{4} \;<\; \frac{1-U}{4} ,$$ it follows that $$\label{eq.Gammabound} \Gamma \;>\; \frac{1-U}{8} \,.$$ We see immediately from Equation ([\[eq.AxBxbd2\]](#eq.AxBxbd2){reference-type="ref" reference="eq.AxBxbd2"}) that $$\label{eq.AxorBx} \text{At least one of $|A_x|$ or $|B_x|$ is greater than or equal to $n\Gamma$.}$$ *Step 4.* To describe the random walk on the complement, we shall slightly modify the Markov chain notation introduced in Section [1.1](#sec:notation){reference-type="ref" reference="sec:notation"}. The random walk process on $\overline{G}$ is a sequence $X_0,X_1,X_2,\ldots$ of $V$-valued random variables, with one-step transition probabilities given by $$\overline{P}(v,w) \;=\; \Pr(X_1\,=\,w\,|\,X_0=v) \;=\; \begin{cases} \left(\overline{d}_v\right)^{-1} & \text{if }\{v,w\}\in E(\overline{G}) \\ 0 & \text{otherwise} . \end{cases}$$ For $t\in \mathbb{N}$, the $t$-step transition probabilities are $\overline{P}^{(t)}(v,w)\,=\,\Pr(X_t=w\,|\,X_0=v)$. For $w\in B$, we have $$\begin{aligned} \nonumber \overline{P}(w,A) & = & |\,[w,A]_{\overline{G}}| \, / \, \overline{d}_w \\ \nonumber & \geq & \frac{ |\,[w,S]_{\overline{G}}| \,-\, |S-A| }{n(1-L)} \hspace{15mm}\text{(by Eq.\ (\ref{eq.degbound1c}))} \\ \nonumber & \geq & \frac{ |S|(1-M\epsilon) \,-\,n\frac{U}{M} }{n(1-L)} \hspace{12mm}\text{(by Eqs.\ (\ref{eq.Bdef1}) and (\ref{eq.SminusAbound}))} \\ \nonumber & \geq & \frac{ nC(1-M\epsilon) \,-\,n\frac{U}{M} }{n(1-L)} \hspace{12mm}\text{(by Eq.\ (\ref{eq.SgtnC}))} \\ \label{eq.PwAbound} & = & \frac{ C(1-M\epsilon) \,-\,\frac{U}{M} }{1-L} \quad =:\; \Theta. \end{aligned}$$ We know that $\Theta>0$ from Equations ([\[eq.Mepsbd\]](#eq.Mepsbd){reference-type="ref" reference="eq.Mepsbd"}) and ([\[eq.U2MLCbound\]](#eq.U2MLCbound){reference-type="ref" reference="eq.U2MLCbound"}). 
Similarly, for $v\in A$, $$\begin{aligned} \nonumber \overline{P}(v,B) & = & |\,[v,B]_{\overline{G}}| \, / \, \overline{d}_v \\ \nonumber & \geq & \frac{ |\,[v,S^c]_{\overline{G}}| \,-\, |S^c-B| }{n(1-L)} \hspace{15mm}\text{(by Eq.\ (\ref{eq.degbound1c}))} \\ \nonumber & \geq & \frac{ |S^c|(1-M\epsilon) \,-\,n\frac{U}{M} }{n(1-L)} \hspace{15mm}\text{(by Eqs.\ (\ref{eq.Adef1}) and (\ref{eq.ScminusBbound}))} \\ \nonumber & \geq & \frac{ \frac{nL}{2U}(1-M\epsilon) \,-\,n\frac{U}{M} }{n(1-L)} \hspace{15mm}\text{(by Eq.\ (\ref{eq.2UScn}))} \\ \nonumber & \geq & \frac{ C(1-M\epsilon) \,-\,\frac{U}{M} }{1-L} \hspace{15mm}\text{(by Eq.\ (\ref{eq.U2MLCbound}))} \\ \label{eq.PvBbound} & = & \; \Theta.\end{aligned}$$ For every $z\in V$, $$\begin{aligned} \nonumber \overline{P}(z,(A\cup B)^c) & \leq & (|S-A|\,+\,|S^c-B|) \,/\, \overline{d}_z \\ \nonumber & \leq & \frac{ 2nU/M }{n(1-U)/2} \hspace{10mm}\text{(by Eqs.\ (\ref{eq.SminusAbound}), (\ref{eq.ScminusBbound}), and (\ref{eq.degbound1c}))} \\ \nonumber % \label{eq.PzABc} & = & \frac{4U}{M(1-U)} \,.\end{aligned}$$ Therefore $$\label{eq.PzABbound} \overline{P}(z, A\cup B) \;\geq \; \Upsilon \hspace{5mm} \text{for every vertex $z$, where}\hspace{5mm} \Upsilon\,:= \, 1\,-\, \frac{4U}{M(1-U)}\,.$$ Note that $\Upsilon>0$ because $\frac{4U}{M(1-U)} \,=\, \frac{UL}{2}\,<\,\frac{1}{2}$. *Step 5.* Now we consider the two cases in Equation ([\[eq.AxorBx\]](#eq.AxorBx){reference-type="ref" reference="eq.AxorBx"}). > [Case 1]{.ul}: $|A_x| \;\geq \; n\Gamma$;\ > [Case 2]{.ul}: $|B_x| \;\geq \, n\Gamma$. First we assume that Case 1 holds, i.e. that $|A_x|\geq n\Gamma$. (The proof for Case 2 will be very similar.) In this case, for $w\in B$ we have $$\begin{aligned} \{ t\in A_x : t\not\in \overline{N}(w) \} & \;\subseteq \; \{t\in S: t\not\in \overline{N}(w)\} \\ &\; = \; \{ t\in S: t\in N(w)\} \\ &\; = \; [w,S]_G \,,\end{aligned}$$ and hence, using Equation ([\[eq.Bdef2\]](#eq.Bdef2){reference-type="ref" reference="eq.Bdef2"}), $$|[w,A_x]_{\overline{G}}| \;\geq \; |A_x|\,-\, |[w,S]_G| \;\geq \; n\Gamma-M\epsilon |S| \;\geq \; n\Gamma \,-\ nM\epsilon.$$ Therefore, for all $w\in B$, we have from Equations ([\[eq.defMeps\]](#eq.defMeps){reference-type="ref" reference="eq.defMeps"}) and ([\[eq.degbound1c\]](#eq.degbound1c){reference-type="ref" reference="eq.degbound1c"}) that $$\begin{aligned} \overline{P}(w,A_x) \; & \geq \; \frac{n\Gamma-nM\epsilon}{ \overline{d}_w} \nonumber \\ \nonumber & \geq \; \frac{\Gamma - \frac{L(1-U)}{8} }{1-L} \\ \nonumber & \geq \; \frac{\Gamma - L\Gamma}{1-L} \hspace{5mm}\text{(by Eq.\ (\ref{eq.Gammabound}))} \\ \label{eq.PwAxbound} & = \; \Gamma \,.\end{aligned}$$ Since $\overline{P}(u,x)\geq 1/n$ for every $u$ in $A_x$, we see that for every $w\in B$ we have $$\begin{aligned} \nonumber \overline{P}^{(2)}(w,x) \; & \geq \; \sum_{u\in A_x} \overline{P}(w,u)\,\overline{P}(u,x) \\ \nonumber & \geq \; \sum_{u\in A_x} \overline{P}(w,u)\,\frac{1}{n} \\ \nonumber & = \; \overline{P}(w,A_x) \,\frac{1}{n} \\ \label{eq.P2xbound} & \geq \; \frac{\Gamma}{n} \hspace{12mm}\text{(by Equation (\ref{eq.PwAxbound}))}. \end{aligned}$$ Next, for $v\in A$, we have $$\begin{aligned} \nonumber \overline{P}^{(3)}(v,x) & \geq & \sum_{w\in B} \overline{P}(v,w)\,\overline{P}^{(2)}(w,x) \\ \nonumber & \geq & \sum_{w\in B} \overline{P}(v,w)\,\frac{\Gamma}{n} \hspace{11mm}\text{(by Equation (\ref{eq.P2xbound}))} \\ \nonumber & = & \overline{P}(v,B) \,\frac{\Gamma}{n} \,. 
\end{aligned}$$ Hence it follows from Equation ([\[eq.PvBbound\]](#eq.PvBbound){reference-type="ref" reference="eq.PvBbound"}) that $$\label{eq.P3Axbound} \overline{P}^{(3)}(v,x) \;\geq \; \frac{\Theta\,\Gamma}{n} \hspace{7mm}\text{for all }v\in A. % was q$$ The first passage time of the vertex $x$ by the random walk on $\overline{G}$ is the random variable $T_x$ defined by $$T_x \;=\; \min\{ t\geq 1\,: \; X_t=x \} \,.$$ Then Equations ([\[eq.P2xbound\]](#eq.P2xbound){reference-type="ref" reference="eq.P2xbound"}) and ([\[eq.P3Axbound\]](#eq.P3Axbound){reference-type="ref" reference="eq.P3Axbound"}) imply that $$\label{eq.tauxAB} \overline{P}(T_x\leq 3 \,|\, X_0=w) \;\geq \; \frac{\Theta\,\Gamma}{n} \hspace{7mm}\text{for all }w\in A\cup B.$$ Next, for every $s\in V$, we have $$\begin{aligned} \nonumber \overline{P}(T_x \leq 4 \,|\,X_0=s) & \geq & \sum_{w\in A\cup B}\overline{P}(s,w)\,\overline{P}(T_x\leq 3\,|\,X_0=w) \\ \nonumber & \geq & \overline{P}(s,A\cup B) \, \frac{\Theta\,\Gamma}{n} \hspace{11mm}\text{(by Equation (\ref{eq.tauxAB}))} \\ \label{eq.tau4bound} & \geq & \frac{\Upsilon \,\Theta\,\Gamma}{n} \hspace{11mm}\text{(by Equation (\ref{eq.PzABbound}))}. \end{aligned}$$ Therefore $$\overline{P}(T_x >4 \,|\,X_0=s) \;\leq \; 1\,-\, \frac{\Upsilon\,\Theta\,\Gamma}{n} \hspace{6mm}\text{for every }s\in V,$$ and consequently, for every $s\in V$ and $j> 0$, $$\begin{aligned} \overline{P}(T_x >4 +j \,|\,X_0=s ) \; & = \; \sum_{y\in V, \,y\neq x} \overline{P}(T_x >4 +j \,|\,X_0=s,\,X_j=y, \,T_x>j) \overline{P}(X_j=y, \,T_x >j \,|\,X_0=s) \\ & = \; \sum_{y\in V, \,y\neq x} \overline{P}(T_x >4 \,|\,X_0=y) \, \overline{P}(X_j=y, \,T_x >j \,|\,X_0=s) \\ & \hspace{80mm}\text{(by the Markov property)} \\ & \leq \; \left(1\,-\, \frac{\Upsilon\,\Theta\,\Gamma}{n} \right) \, \overline{P}(T_x >j \,|\,X_0=s) \,. \end{aligned}$$ By induction on $k\in \mathbb{Z}^+$, we obtain $$\label{eq.Ptau4k} \overline{P}(T_x >4k \,|\,X_0=s) \; \leq \; \left(1\,-\, \frac{\Upsilon\,\Theta\,\Gamma}{n}\right)^k \hspace{6mm}\text{for every }s\in V,\,k\geq 0.$$ Using that fact that for any nonnegative random variable $Z$ $$\mathbb{E}(Z) \;\leq \; \mathbb{E}(\lceil Z \rceil) \;=\; \sum_{k=0}^{\infty} \Pr(Z>k) \,,$$ we obtain (writing $\overline{\mathbb{E}}$ for the expectation with respect to $\overline{P}$) $$\begin{aligned} \nonumber \overline{\mathbb{E}}\left( \left. \frac{T_x}{4}\,\right| \,X_0=s\right) & \leq & \sum_{k=0}^{\infty} \overline{P}\left( \left. \frac{T_x}{4} > k\,\right|\,X_0=s\right) \\ \nonumber & \leq & \sum_{k=0}^{\infty} \left(1\,-\, \frac{\Upsilon\,\Theta\,\Gamma}{n}\right)^k \hspace{7mm}\text{(by Eq.\ (\ref{eq.Ptau4k}))} \\ \nonumber % \label{eq.Etaubound} & = & \frac{n}{\Upsilon\,\Theta\,\Gamma} \hspace{12mm}\text{for every }s\in V. \end{aligned}$$ To summarize: $$\nonumber % \label{eq.Case1sum} \text{If Case 1 holds, then \; $\overline{\mathbb{E}}(T_x\,|\,X_0=s) \;\leq \; \frac{ 4n}{\Upsilon \,\Theta \,\Gamma}$ \; for every $s\in V$.}$$ Now we turn to Case 2, and assume that $|B_x|\geq n\Gamma$. The approach is very similar to Case 1. 
We have for $v\in A$ that $$\{ u\in B_x : u\not\in \overline{N}(v) \} \;\subseteq \; \{u\in S^c: u\not\in \overline{N}(v)\} \; = \; [v,S^c]_G \,,$$ and hence, using Equation ([\[eq.Adef2\]](#eq.Adef2){reference-type="ref" reference="eq.Adef2"}), $$|[v,B_x]_{\overline{G}}| \;\geq \; |B_x|\,-\, |[v,S^c]_G| \;\geq \; n\Gamma-M\epsilon |S^c| \;\geq \; n\Gamma \,-\ nM\epsilon.$$ This gives the following analogue of Equation ([\[eq.PwAxbound\]](#eq.PwAxbound){reference-type="ref" reference="eq.PwAxbound"}): $$% \label{eq.PvBxbound} \overline{P}(v,B_x) \; \geq \; \Gamma \hspace{6mm} \text{for all $v\in A$. } % was q$$ Similarly to Case 1, we deduce $$\overline{P}^{(2)}(v,x) \;\geq \; \frac{\Gamma}{n} \hspace{6mm} \forall \, v\in A % was q$$ and, using Equation ([\[eq.PwAbound\]](#eq.PwAbound){reference-type="ref" reference="eq.PwAbound"}), $$% \label{eq.P2wuBxbound} \overline{P}^{(3)}(u,x) \;\geq \; \frac{\Theta\,\Gamma}{n} \hspace{8mm}\text{for all }u\in B.$$ Therefore $$% \label{eq.tauxABcase2} \overline{P}(T_x\leq 3 \,|\, X_0=w) \;>\; \frac{\Theta\,\Gamma}{n} \hspace{7mm}\text{for all }w\in A\cup B, \quad\text{and}$$ $$% \label{eq.tau4case2} \overline{P}(T_x \leq 4 \,|\,X_0=s) \;\geq \; \frac{\Upsilon\,\Theta\,\Gamma}{n} \hspace{6mm}\text{for every }s\in V.$$ As in Case 1, we conclude that $$% \label{eq.Case2sum} \text{If Case 2 holds, then \; $\overline{\mathbb{E}}(T_x\,|\,X_0=s) \;\leq \; \frac{ 4n}{\Upsilon \,\Theta\,\Gamma}$ \; for every $s\in V$.}$$ Combining Cases 1 and 2 tells us that $$% \label{eq.Cases_sum} \overline{\mathbb{E}}(T_x\,|\,X_0=s) \;\leq \; \frac{ 4n}{\Upsilon \,\Theta \,\Gamma} \hspace{5mm} \text{for every $s\in V$.} % was q$$ Since the upper bound is independent of $x$, we conclude that $$\mathcal{K}(\overline{G}) \;\leq \; n \Psi_+\, \hspace{5mm}\text{where } \Psi_+ \;=\; \frac{4}{\Upsilon\,\Theta \,\Gamma } \,.$$ As explained near the beginning of this proof, the proposition now follows upon defining $\Psi_{L,U}$ to be $\max\{ \Psi_-,\Psi_+\}$. $\Box$ **Remark 15**. Upon chasing through our inequalities (specifically ([\[eq.Cbound\]](#eq.Cbound){reference-type="ref" reference="eq.Cbound"}), ([\[eq.Mepsbd\]](#eq.Mepsbd){reference-type="ref" reference="eq.Mepsbd"}), ([\[eq.U2MLCbound\]](#eq.U2MLCbound){reference-type="ref" reference="eq.U2MLCbound"}), ([\[eq.Gammabound\]](#eq.Gammabound){reference-type="ref" reference="eq.Gammabound"}), and the line following ([\[eq.PzABbound\]](#eq.PzABbound){reference-type="ref" reference="eq.PzABbound"})), one finds that $\Psi_+\leq 512\, L^{-1}(1-U)^{-2}$, which is less than $\Psi_-=2\,\epsilon^{-2}$. So the above proof actually gives the value $\Psi_{L,U}=2\,\epsilon^{-2}$. # Kemeny's constant for the join of two graphs {#sec:join} Let $H_1$ and $H_2$ be two graphs (not necessarily connected) with disjoint vertex sets $V(H_1)$ and $V(H_2)$. The *join* $H_1\vee H_2$ is the graph obtained from the disjoint union of $H_1$ and $H_2$ by adding all of the edges joining a vertex of $H_1$ to a vertex of $H_2$. **Theorem 16**. *Let $H_1$ and $H_2$ be graphs with $n_1$ and $n_2$ vertices respectively. Let $G=H_1\vee H_2$ be their join, and let $n=n_1+n_2$. Then $\mathcal{K}(G) \,\leq \, 3n$.* **Proof:** For $i\in\{1,2\}$, let $V_i$ be the set of vertices in $H_i$. Also let $V=V_1\cup V_2$. We shall use the Markov chain notation of Section [1.1](#sec:notation){reference-type="ref" reference="sec:notation"} for the random walk on $G$. Recall in particular that for a vertex $x$, the random variable $T_x$ is the first passage time of $x$. 
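(As a quick numerical sanity check of the bound in Theorem [Theorem 16](#thm.joinbound){reference-type="ref" reference="thm.joinbound"}, the following sketch builds the join of two small graphs and compares $\mathcal{K}(G)$ with $3n$. It assumes `numpy`; the two graphs, and the helper names `kemeny` and `join`, are ad hoc choices for this illustration.)

```python
# Sketch: Kemeny's constant of a join H1 v H2 compared with the bound 3n of Theorem 16.
# H1 and H2 below are arbitrary small graphs (H1 is empty, H2 has one edge plus an
# isolated vertex); the join is connected, so Kemeny's constant is finite.
import numpy as np

def kemeny(A):
    """Kemeny's constant of the random walk on the graph with adjacency matrix A."""
    deg = A.sum(axis=1)
    S = np.diag(1.0 / np.sqrt(deg)) @ A @ np.diag(1.0 / np.sqrt(deg))
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]
    return sum(1.0 / (1.0 - l) for l in lam[1:])

def join(A1, A2):
    """Adjacency matrix of H1 v H2: disjoint union plus all edges between the parts."""
    n1, n2 = A1.shape[0], A2.shape[0]
    return np.block([[A1, np.ones((n1, n2))],
                     [np.ones((n2, n1)), A2]])

A1 = np.zeros((4, 4))                                          # H1: empty graph on 4 vertices
A2 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)  # H2: one edge + isolated vertex
G = join(A1, A2)
n = G.shape[0]
assert kemeny(G) <= 3 * n
print(kemeny(G), 3 * n)
```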
The proof of the theorem is based on the following claim: > [Claim:]{.ul} For every $x,y\in V$, $\mathbb{E}_y(T_x) \,\leq\, n\,+\,2\,\mathbb{E}_x(T_x)$. Indeed, suppose the claim is true. Let $y\in V$. Then $$\begin{aligned} \mathcal{K}(G) \; & =\; \sum_{x\in V\setminus \{y\}} \pi(x) \,\mathbb{E}_y(\tau_x) \\ & \leq \; \sum_{x\in V} \pi(x) \left( n \,+\,\,\frac{2}{\pi(x)} \right) \hspace{5mm} \hbox{(by Equation (\ref{eq.piprop}))} \\ & = \; n\,+\,2n \,,\end{aligned}$$ which proves the theorem. Now we shall prove the claim. Let $x,y\in V$. The statement of the claim is trivial if $x=y$, so assume $x\neq y$. Without loss of generality, assume $x\in V_1$. Let $T[V_2]$ and $\theta$ be the random times defined by $$\begin{aligned} T[V_2] \; & =\; \min\{ t\geq 0\,:\, X_t\in V_2\} \hspace{5mm}\text{and} \\ \theta \; & = \; \min\{s\geq T[V_2]\,: \, X_{s+1}\in V_1 \} \,.\end{aligned}$$ That is, $T[V_2]$ is the first time that the random walk visits $H_2$, and $\theta$ is the time immediately before the first step from $H_2$ to $H_1$. We observe that $$\theta \;=\; \min\{ s\geq 0: X_s\in V_2 \text{ and }X_{s+1}\in V_1\} \,.$$ The following observation about $\theta$ is key. Since every vertex of $H_2$ is connected to every vertex of $H_1$, and since each step of the Markov chain chooses among neighbours of the current vertex with equal probability, it follows that the state of the Markov chain at time $\theta+1$ is *equally likely to be any of the $n_1$ vertices of $H_1$*. More formally, if $k\geq 0$ and $D$ is any event that depends only on $\{X_0,X_1,\ldots,X_k\}$, then we have $$\label{eq.PDtheta} P_y( X_{k+1}=v\,|\, D \text{ and }\theta=k) \;=\; \frac{1}{n_1} \hspace{5mm} \forall \, v\in V_1.$$ Furthermore, the Markov property implies that $$\nonumber % \label{eq.Eytauxtheta} \mathbb{E}_y\left( T_x \,|\,\theta<T_x, \,\theta=k, \, X_{k+1}=v \right) \;=\; \begin{cases} k+1 & \text{if }v=x \\ k+1 + \mathbb{E}_v(T_x) & \text{if }v\in V_1\setminus\{x\}, \end{cases}$$ and hence (using Equation ([\[eq.PDtheta\]](#eq.PDtheta){reference-type="ref" reference="eq.PDtheta"}) with $D$ being the event that $\theta<T_x$) $$\label{eq.Eytauxmarkov} \mathbb{E}_y(T_x \,|\, \theta<T_x, \,\theta=k) \,=\, k+1+S \hspace{5mm}\text{where }\quad S \,:=\, \frac{1}{n_1}\sum_{v\,\in \, V_1\setminus\{x\}} \!\! \mathbb{E}_v(T_x) \,.$$ Now we shall prove $$\label{eq.thetalesstau} P_y(\theta < T_x) \;=\; P_y(T[V_2]<T_x) \;\geq \; \frac{1}{2} \hspace{5mm}\forall \, y\in V.$$ The equality in ([\[eq.thetalesstau\]](#eq.thetalesstau){reference-type="ref" reference="eq.thetalesstau"}) is true because the relation $\theta<T_x$ holds if and only if $T[V_2]<T_x$, i.e. if and only if the chain does not visit $x$ before it first enters $V_2$ (we use the assumption that $x\in V_1$; note that $T[V_2]=0$ if $X_0\in V_2$). For the inequality, we see that $P_y(\theta<T_x)=1$ whenever $y\in V_2$, so assume that $y\in V_1$. For $k\in \mathbb{N}$, $a\in V_1$, and $b\in V_1\cup V_2$, let $\mathcal{U}_k[a\rightarrow b]$ be the set of all $k$-step walks $\underline{\beta}=(\beta_0,\ldots,\beta_k)$ in $G$ such that $\beta_0=a$, $\beta_k=b$, and $\beta_i\in V_1\setminus \{b\}$ for all $i=1,\ldots,k-1$. That is, a walk in $\mathcal{U}_k[a\rightarrow b]$ has its first visit to $b$ (or its first return, if $a=b$) at time $k$, and does not visit $H_2$ before this time. In particular, for any walk in this set we must have $T[V_2]\geq k$, with equality possible only if $b\in V_2$. 
Moreover, we have $$P_a(T_b<T[V_2] \hbox{ and }T_b=k) \;=\; \sum_{\underline{\beta}\, \in\, \mathcal{U}_k[a\rightarrow b]} P_a(\underline{\beta}) \hspace{12mm}\text{for }a,b\in V_1.$$ Fix $u\in V_2$. We now define a function $f$ from $\bigcup_{k=1}^{\infty}\mathcal{U}_k[y\rightarrow x]$ to $\bigcup_{k=1}^{\infty} \mathcal{U}_k[y\rightarrow u]$ as follows. For each $k\geq 1$ and $\underline{\beta}=(\beta_0,\ldots,\beta_k)\in \mathcal{U}_k[y\rightarrow x]$, let $f(\underline{\beta})=(\beta_0,\ldots,\beta_{k-1},u)$. We know that $f(\underline{\beta})$ is a walk in $G$ because $\beta_{k-1}\in V_1$ and $u\in V_2$. Observe that $f$ is one-to-one and that $P_y(\underline{\beta}) \,=\, P_y\left(f(\underline{\beta})\right)$ for every $\underline{\beta}$. Therefore $$\begin{aligned} P_y(T_x=k<T[V_2]) \; & = \; \sum_{\underline{\beta}\,\in\, \mathcal{U}_k[y\rightarrow x]} P_y(\underline{\beta}) \\ & = \; \sum_{\underline{\beta}\,\in\, \mathcal{U}_k[y\rightarrow x]} P_y(f(\underline{\beta})) \\ & \leq \; P_y(T_u=T[V_2]=k<T_x ) \\ & \leq \; P_y(T[V_2]=k<T_x ) \,.\end{aligned}$$ Summing the resulting inequality over $k$ shows that $$P_y(T_x<T[V_2]) \;\; \leq \; \; P_y(T[V_2]<T_x ) \,,$$ and Equation ([\[eq.thetalesstau\]](#eq.thetalesstau){reference-type="ref" reference="eq.thetalesstau"}) follows. Next we shall show that $$\label{eq.Ethetabound} \mathbb{E}_v(\theta) \;\leq\; n-1 \hspace{5mm}\text{for every }v\in V. % \begin{cases} n-2 & \text{if }v\in V_2, \, \\ % 2n-3 & \text{if }v\in V_1. \end{cases}$$ Notice that for every $u\in V_2$, $P(u,V_1)=n_1/d_u \,\geq \, n_1/(n-1)$. Therefore, for $v\in V_2$, we have $P_v(\theta\geq 1) \,=\,1-P(v,V_1) \,\leq\, 1-\frac{n_1}{n-1}$ and, for every $k\in \mathbb{N}$, $P_v(\theta\geq k+1\,|\,\theta\geq k)\,\leq \,1-\frac{n_1}{n-1}$. By induction, we then see that $$P_v(\theta \geq k) \;\leq \; \left(1-\frac{n_1}{n-1}\right)^{k} \hspace{5mm}\forall \,k\in \mathbb{Z}^+,$$ and hence $$\label{eq.EvthetaH2} \mathbb{E}_v(\theta) \;=\; \sum_{k=1}^{\infty} P_v(\theta\geq k) \;\leq \; \frac{ 1-\frac{n_1}{n-1}}{ 1- \left(1-\frac{n_1}{n-1}\right)} \;=\; \frac{n-1}{n_1} - 1 \hspace{6mm}\text{if }v\in V_2.$$ Now suppose $v\in V_1$. Then a similar argument to the above (but using the relation $P_v(T[V_2]\geq 1)=1$) shows that $\mathbb{E}_v(T[V_2])\,\leq \,(n-1)/n_2$ for the case that $v\in V_1$. Now consider splitting the time interval from 0 to $\theta$ into the part before $T[V_2]$ and the part after $T[V_2]$. With this viewpoint, and using Equation ([\[eq.EvthetaH2\]](#eq.EvthetaH2){reference-type="ref" reference="eq.EvthetaH2"}), it is not hard to see that $$\begin{aligned} \mathbb{E}_v(\theta) \; & =\; \mathbb{E}_v(T[V_2]) \,+\, \sum_{w\,\in\, V_2}P_v(X_{T[V_2]}=w) \, \mathbb{E}_w(\theta) \\ & \leq \; \frac{n-1}{n_2} \,+\, \frac{n-1}{n_1} -1 \quad \leq \; (n-1)\left( 1+\frac{1}{n-1} \right) -1 \quad = \; n-1 \hspace{7mm}\text{if }v\in V_1.\end{aligned}$$ This completes the proof of Equation ([\[eq.Ethetabound\]](#eq.Ethetabound){reference-type="ref" reference="eq.Ethetabound"}). From Equation ([\[eq.Eytauxmarkov\]](#eq.Eytauxmarkov){reference-type="ref" reference="eq.Eytauxmarkov"}), we see that $\mathbb{E}_x(T_x\,|\,\theta<T_x,\theta=k) \,\geq \,S$ for every $k\geq 0$. 
Therefore $\mathbb{E}_x(T_x\,|\, \theta<T_x) \,\geq \,S$, and $$\begin{aligned} \mathbb{E}_x(T_x) \;& = \; \mathbb{E}_x(T_x\,|\,\theta<T_x) \,P_x(\theta<T_x) \,+\, \mathbb{E}_x(T_x\,|\,\theta>T_x) \,P_x(\theta>T_x) \\ & \geq \; S \,\left(\frac{1}{2}\right) \,+\, 0 \hspace{5mm}\text{(by Equation (\ref{eq.thetalesstau}))} ,\end{aligned}$$ from which it follows that $$\label{eq.Sbound} S \;\leq \; 2\,\mathbb{E}_x(T_x) \,.$$ We are almost done. For $y\neq x$ we have $$\begin{aligned} \nonumber \mathbb{E}_y(T_x \,|\, \theta<T_x) \,& = \; \sum_{k=1}^{\infty} \mathbb{E}_y(T_x \,|\, \theta<T_x, \,\theta=k) \,P_y( \theta=k\,|\, \theta<T_x) \\ \nonumber & = \; \sum_{k=1}^{\infty}( k+1+S ) \,P_y( \theta=k\,|\, \theta<T_x) \hspace{5mm}\text{(by Eq.\ (\ref{eq.Eytauxmarkov}))} \\ \label{eq.Eytausecond} % \nonumber & = \; \mathbb{E}_y(\theta\,|\, \theta<T_x) \,+1+S \,.\end{aligned}$$ Finally, we have $$\begin{aligned} \mathbb{E}_y(T_x) \;& =\; \mathbb{E}_y(T_x\,|\,\theta>T_x)\,P_y(\theta>T_x) \,+\, \mathbb{E}_y(T_x \,|\,\theta<T_x)\,P_y(\theta<T_x) \\ & \leq \; \mathbb{E}_y(\theta \,|\,\theta>T_x)\,P_y(\theta>T_x) \,+\, \left( \mathbb{E}_y(\theta \,|\,\theta<T_x) \,+\, 1+S \right) \,P_y(\theta<T_x) \\ & \hspace{25mm}\text{(using the condition $\{\theta>T_x\}$, as well as Eq.\ (\ref{eq.Eytausecond}))} \\ & = \; \mathbb{E}_y(\theta) \,+\,(1+S)\,P_y(\theta<T_x) \\ & \leq \; (n-1) \,+\,1 \,+\, 2\,\mathbb{E}_x(T_x) \hspace{15mm}\text{(by Eqs.\ (\ref{eq.Ethetabound}) and (\ref{eq.Sbound}))}. \end{aligned}$$ This proves the claim, and the theorem follows. $\Box$ # Applications {#sec:app} In this section, we examine various families of graphs in the context of the Nordhaus-Gaddum problem. ## Regular graphs Let $G$ be a $k$-regular graph with $n$ vertices. Here we recall the Kirchhoff index of $G$: it is given by $\mathop{\mathrm{Kf}}(G) = \frac{1}{2}\mathbf{1}^TR\mathbf{1}$. Since $G$ is regular, we see from [\[eq:res\]](#eq:res){reference-type="eqref" reference="eq:res"} that $$\begin{aligned} \mathcal{K}(G) = \frac{k\mathop{\mathrm{Kf}}(G)}{n}.\end{aligned}$$ From [@palacios2010kirchhoff Proposition 4], we have $\frac{n(n-1)}{2k}\leq \mathop{\mathrm{Kf}}(G)\leq \frac{3n^3}{k}$. Hence, $$\begin{aligned} \label{bounds for regular} \frac{n-1}{2}\leq \mathcal{K}(G)\leq 3n^2.\end{aligned}$$ In particular, $\mathcal{K}(G) = O(n^2)$. By Corollary [Corollary 13](#cor.regular){reference-type="ref" reference="cor.regular"}, $\min\{\mathcal{K}(G),\mathcal{K}(\overline{G})\} = O(n)$ and so $\max\{\mathcal{K}(G),\mathcal{K}(\overline{G})\} = O(n^2)$. Therefore, $$\mathcal{K}(G)\mathcal{K}(\overline{G}) = O(n^3).$$ ### Construction of regular graphs $G$ with $\mathcal{K}(G) = \Theta(n^2)$ In this subsection, we will provide families of regular graphs for which the growth rate of Kemeny's constant is $\Theta(n^2)$. An obvious example is the $n$-cycle, whose Kemeny's constant is given in [@kim2022families] by $\frac{1}{6}(n^2-1)$. In addition, we will show two ways of constructing such families. Let $G$ be a $k$-regular graph with $n$ vertices, and $\mu_1,\mu_2,\dots,\mu_n$ be the eigenvalues of the adjacency matrix of $G$ in non-increasing order. Then $\mu_1=k$. 
Then Kemeny's constant can be obtained from [\[eq.Keigensum\]](#eq.Keigensum){reference-type="eqref" reference="eq.Keigensum"} as follows: $$\begin{aligned} \label{Kemeny:spec reg} \mathcal{K}(G) = \sum_{i = 2}^{n}\frac{k}{k-\mu_i}.\end{aligned}$$ If some of the eigenvalues are close to $k$ and these alone determine the order of Kemeny's constant, then it is unnecessary to control the remaining eigenvalues. With this in mind, we shall use a so-called *equitable partition* (see [@godsil2001algebraic]) for the construction; such a partition induces a *quotient matrix*, a smaller matrix whose eigenvalues belong to the spectrum of the adjacency matrix of $G$. Let $n_2\geq 2$, and $G_1,\dots,G_{n_2}$ be $k_1$-regular graphs with $n_1$ vertices. Given a $k_2$-regular graph $H$ with $n_2$ vertices, we define $G$ to be the graph obtained from the disjoint union of $G_1,\dots,G_{n_2}$ by inserting edges as follows: for vertices $i$ and $j$ of $H$, if $i$ and $j$ are adjacent, then we add $n_1$ pairwise non-adjacent edges, each joining a vertex of $G_i$ to a vertex of $G_j$ (that is, those $n_1$ edges form a matching in $G$); and if $i$ and $j$ are not adjacent, then there is no edge between $G_i$ and $G_j$. Note that $G$ is $(k_1+k_2)$-regular. Then $(V(G_1),\dots,V(G_{n_2}))$ is an equitable partition, and so the quotient matrix $Q$ is given by $$Q = k_1I +A(H)$$ where $A(H)$ is the adjacency matrix of $H$. Then the eigenvalues of $Q$ are given by $k_1+\theta_i$ for $1\leq i\leq n_2$ where $\theta_1,\dots,\theta_{n_2}$ are the eigenvalues of $A(H)$ in non-increasing order. Note that $k_2 = \theta_1$. Hence, $$\begin{aligned} \mathcal{K}(G)>\sum_{i=2}^{n_2}\frac{k_1+k_2}{k_2-\theta_i} = \frac{k_1+k_2}{k_2}\sum_{i=2}^{n_2}\frac{k_2}{k_2-\theta_i} = \left(\frac{k_1}{k_2}+1\right)\mathcal{K}(H).\end{aligned}$$ Therefore, if $\mathcal{K}(H) = \Theta(n_2^2)$ with $n_2 = \Theta(n)$, then $\mathcal{K}(G) = \Theta(n^2)$ follows from [\[bounds for regular\]](#bounds for regular){reference-type="eqref" reference="bounds for regular"}. Now we introduce a different construction, using a result on Kemeny's constant of a graph with bridges from [@breen2022kemeny]. This generalizes the necklace graph in [@breen2023kemeny] (see Figure [\[Figure:necklace\]](#Figure:necklace){reference-type="ref" reference="Figure:necklace"} for the construction of the necklace graph), and it will also be used in the next subsection. We recall some definitions and notation. **Definition 17**. [@breen2022kemeny] Let $\mathcal T$ be a tree on $d$ vertices where $V(\mathcal T)=\{1,\dots,d\}$. Let $G_1,\dots,G_d$ be connected graphs. Let $G$ be a graph constructed as follows: the vertices $1,\dots,d$ are replaced by the graphs $G_1,\dots,G_d$, respectively; and if $\{i,j\}$ is an edge of $\mathcal T$, then some vertex $v_i\in V(G_i)$ is chosen, and some vertex $v_j\in V(G_j)$ is chosen, and the two vertices are joined with an edge so that $\{v_i, v_j\}$ is a bridge in $G$. Then $G$ is said to be a *chain of $G_1,\dots,G_d$ with respect to $\mathcal T$*. We denote by $\mathcal{B}_G$ the set of the $(d-1)$ bridges, used in the construction of $G$, that correspond to the edges of $\mathcal T$. Let $G$ be a chain of connected graphs $G_1,\dots,G_d$ with respect to a tree $\mathcal T$ on $d$ vertices (see Figure [\[Figure:chain\]](#Figure:chain){reference-type="ref" reference="Figure:chain"} as an example when $\mathcal T$ is a path). 
For each bridge $\{x, y\}\in\mathcal{B}_G$, we have exactly two components in $G\backslash \{x, y\}$, where $G\backslash \{x, y\}$ is the graph obtained from $G$ by removing the edge $\{x, y\}$. We use $W_x$ (resp. $W_y$) to denote the number of edges of the component containing $x$ (resp. $y$) in $G\backslash \{x, y\}$. Let $\overline{W}_x=m_G-W_x$ and $\overline{W}_y=m_G-W_y$. We then see from [@breen2022kemeny Theorem 3.16] that $$\begin{aligned} \mathcal{K}(G) > \sum\limits_{\{x, y\} \in \mathcal{B}_G}\frac{(2\overline{W}_x-1)(2\overline{W}_y-1)}{2m_G}:=\Gamma(G).\end{aligned}$$ Let $k$ be an odd integer greater than $2$. Let $\mathcal{H}_1^k$ be the set of graphs with degree sequence $(k,\dots,k,k-1)$, and let $\mathcal{H}_2^k$ be the set of graphs with degree sequence $(k,\dots,k,k-1,k-1)$. Let $\mathcal T$ be a path on $d$ vertices. Given $G_1,G_d\in \mathcal{H}_1^k$ and $G_2,\dots,G_{d-1}\in \mathcal{H}_2^k$, we let $G$ be a chain of connected graphs $G_1,\dots,G_d$ with respect to $\mathcal T$ such that for each bridge $\{x_i, y_i\}$ in $\mathcal{B}_G$, $1\leq i\leq d-1$, where $x_i\in V(G_i)$ and $y_i\in V(G_{i+1})$, we have $\mathrm{deg}(x_i) = \mathrm{deg}(y_i) = k$ (by properly choosing $x_i$'s and $y_i$'s as vertices with degree $k-1$ in $G_1,\dots,G_d$). Suppose that $n_1=|V(G_1)| = |V(G_d)|$ and $n_2=|V(G_2)|=\cdots =|V(G_{d-1})|$. Then $$\begin{aligned} 2\overline{W}_{x_i} = (d-i-1)n_2k + n_1k+1\;\;\text{and}\;\;2\overline{W}_{y_i} = (i-1)n_2k + n_1k+1.\end{aligned}$$ We can find that $$\begin{aligned} \label{temp;eqn} \Gamma(G)=&~\frac{k(d-1)}{(2n_1+(d-2)n_2)}\left((n_1-n_2)^2+dn_2(n_1-n_2)+\frac{1}{6}(d^2+d)n_2^2\right)\\ =&~\frac{k(d-1)}{6(2n_1+(d-2)n_2)}\left(6n_1^2+6(d-2)n_1n_2+(d-3)(d-2)n_2^2\right)\\ >&~\frac{k(d-1)}{6(2n_1+(d-2)n_2)}\left(4n_1^2+4(d-2)n_1n_2+(d-2)(d-2)n_2^2-(d-2)n_2^2\right)\\ =&~\frac{k(d-1)(2n_1+(d-2)n_2)}{6}-\frac{k(d-1)}{6(2n_1+(d-2)n_2)}(d-2)n_2^2.\end{aligned}$$ Let $n=|V(G)|$. Since $n = 2n_1+(d-2)n_2$, it follows that $$\begin{aligned} \mathcal{K}(G)=\Omega(kdn).\end{aligned}$$ We see that the order of Kemeny's constant is independent of $n_1$ and $n_2$. Since $k<n_1$ and $k<n_2$, we have $k<\frac{n}{d}$. Therefore, if $kd = \Theta(n)$ then $\mathcal{K}(G)= \Theta(n^2)$. As an example, since the necklace graph has $k = 3$ and $d = \frac{n-2}{4}$, its Kemeny's constant is $\Theta(n^2)$. ## Barbell-type graphs It appears in [@breen2019computing] that the graph obtained from a path on $d$ vertices by appending a clique of size $d$ to each end-vertex attains the largest order of Kemeny's constant. Here we will show that it is enough to append a graph with sufficiently many edges to each end-vertex in order to obtain the same result. Let $\mathcal T$ be a path on $d$ vertices, and $G_1,\dots,G_d$ be connected graphs. Let $G$ be a chain of connected graphs $G_1,\dots,G_d$ with respect to $\mathcal T$. Suppose that $n = |V(G)|$, $d=|V(G_1)| = |V(G_d)|$ and $1=|V(G_2)|=\cdots =|V(G_{d-1})|$. Assume that $d = \Theta(n)$, $m_{G_1}=\Theta(d^2)$ and $m_{G_d}=\Theta(d^2)$. For $i=1,\dots,d-1$, let $\{x_i, y_i\}$ be the bridge in $\mathcal{B}_G$ such that $x_i\in V(G_i)$ and $y_i\in V(G_{i+1})$. Then $$\begin{aligned} \overline{W}_{x_i} = m_{G_d}+d-i\;\;\text{and}\;\;\overline{W}_{y_i} = m_{G_1}+i.\end{aligned}$$ We can find that $$\begin{aligned} \sum\limits_{\{x, y\} \in \mathcal{B}_G}\frac{(2\overline{W}_x-1)(2\overline{W}_y-1)}{2m_G}>\sum_{i=1}^{d-1}\frac{4m_{G_1}m_{G_d}}{2(m_{G_1}+m_{G_d}+d-1)}.\end{aligned}$$ Hence, $\mathcal{K}(G) = \Theta(n^3)$. Since $\overline{G}$ has a vertex of degree $n-3$, we have $\mathcal{K}(\overline{G}) = O(n)$. Therefore, $$\mathcal{K}(G)\mathcal{K}(\overline{G}) = \Theta(n^4).$$
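The cubic growth of $\mathcal{K}(G)$ for such barbell-type graphs can also be observed numerically. The following sketch assumes `numpy`; the graph it builds (two cliques of size $d$ attached by bridge edges to the two ends of a path on $d$ vertices) is just one instance of the construction above, and the helper names `kemeny` and `barbell` are ad hoc.

```python
# Sketch: Kemeny's constant of a barbell-type graph (clique K_d -- path on d vertices --
# clique K_d, joined by bridge edges) grows like n^3.  Assumes numpy.
import numpy as np

def kemeny(A):
    deg = A.sum(axis=1)
    S = np.diag(1.0 / np.sqrt(deg)) @ A @ np.diag(1.0 / np.sqrt(deg))
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]
    return sum(1.0 / (1.0 - l) for l in lam[1:])

def barbell(d):
    n = 3 * d
    A = np.zeros((n, n))
    A[:d, :d] = 1 - np.eye(d)                        # first clique on vertices 0,...,d-1
    A[2 * d:, 2 * d:] = 1 - np.eye(d)                # second clique on vertices 2d,...,3d-1
    for i in range(d, 2 * d - 1):                    # path on vertices d,...,2d-1
        A[i, i + 1] = A[i + 1, i] = 1
    A[d - 1, d] = A[d, d - 1] = 1                    # bridge: first clique -- path
    A[2 * d - 1, 2 * d] = A[2 * d, 2 * d - 1] = 1    # bridge: path -- second clique
    return A

for d in (5, 10, 20, 40):
    A = barbell(d)
    n = A.shape[0]
    print(n, kemeny(A) / n**3)   # the ratio settles down to a constant as d grows
```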
**Remark 18**. Taking $G_1$ and $G_{d}$ as the complete bipartite graph $K_{\lfloor\frac{d}{2}\rfloor,\lceil\frac{d}{2}\rceil}$, we can construct a bipartite graph $G$ with $\mathcal{K}(G)\mathcal{K}(\overline{G}) = \Theta(n^4)$. ## Trees Let $\mathcal T$ be a tree on $n$ vertices. Since there exists a vertex of degree $n-2$ in the complement of $\mathcal T$, we have $\mathcal{K}(\overline{\mathcal T})= O(n)$. Now we consider the order of $\mathcal{K}(\mathcal T)$. The *Wiener index* $W(\mathcal T)$ of $\mathcal T$ is the sum of the distances over all pairs of distinct vertices. The following appears in [@jang2023kemeny]: $$\begin{aligned} \mathcal{K}(\mathcal T) = \frac{2W(\mathcal T)}{n-1}-n+\frac{1}{2}.\end{aligned}$$ Hence, trees with the maximum Wiener index attain the maximum Kemeny's constant. It is known from [@entringer1976distance] that the maximum of the Wiener index is attained when $\mathcal T$ is the path, and that this maximum is $O(n^3)$. Hence, $\mathcal{K}(\mathcal T) = O(n^2)$. Therefore, $$\mathcal{K}(\mathcal T)\mathcal{K}(\overline{\mathcal T}) = O(n^3).$$ ## Strongly regular graphs In this subsection, we refer the reader to [@godsil2001algebraic Section 10] for the definition of strongly regular graphs and for all of the relevant properties that we shall use here. Let $G$ be a connected strongly regular graph with parameters $(n,k;a,c)$. Then the spectrum of $G$ is given by $$\left\{k, \left(\frac{a-c+\sqrt{(a-c)^2+4(k-c)}}{2}\right)^{m_1},\left(\frac{a-c-\sqrt{(a-c)^2+4(k-c)}}{2}\right)^{m_2}\right\}$$ where $m_1 = \frac{1}{2}\left(n-1-\frac{2k+(n-1)(a-c)}{\sqrt{(a-c)^2+4(k-c)}}\right)$ and $m_2 = \frac{1}{2}\left(n-1+\frac{2k+(n-1)(a-c)}{\sqrt{(a-c)^2+4(k-c)}}\right)$. One can find from [\[Kemeny:spec reg\]](#Kemeny:spec reg){reference-type="eqref" reference="Kemeny:spec reg"} that $$\begin{aligned} \mathcal{K}(G) = \frac{(n-2)k^2-(n-1)(a-c)k}{k^2-(a-c+1)k +c}.\end{aligned}$$ It is known that $k^2 = nc+(a-c)k+k-c$. Hence $$\begin{aligned} \mathcal{K}(G) = \frac{(n-2)(nc+(a-c)k+k-c)+(n-1)(c-a)k}{nc} = O\left(n\right).\end{aligned}$$ The complement $\overline{G}$ of $G$ is also a strongly regular graph with parameters $(n,n-k-1;n-2-2k+c,n-2k+a)$. It is also known that $\overline{G}$ is disconnected if and only if $\overline{G}$ is $m$ copies of a complete graph for some $m>1$. Therefore, if $\overline{G}$ is not $m$ copies of a complete graph, then $$\mathcal{K}(G)\mathcal{K}(\overline{G}) = \Theta(n^2).$$ ## Distance regular graphs with growing diameter Considering Remark [Remark 1](#remark: n^3){reference-type="ref" reference="remark: n^3"} and Proposition [Proposition 2](#prop:fixed diameter){reference-type="ref" reference="prop:fixed diameter"}, one might expect the order of Kemeny's constant to grow as the diameter of the graph grows. For instance, the cycle of length $n$ has diameter $\lfloor \frac{n}{2}\rfloor$ and its Kemeny's constant is $\Theta(n^2)$. However, in this subsection, we introduce certain families of examples that display contrasting behavior. We consider distance regular graphs with classical parameters and we refer the reader to [@brouwer2012distance] for a comprehensive monograph on distance regular graphs and to [@jurivsic2017restrictions] for spectral properties of distance regular graphs with classical parameters. 
Note that these references use $v$ for the number of vertices while we denote it by $n$. Let $G$ be a distance regular graph with classical parameters $(d,b,\alpha,\beta)$ where $d$ is the diameter of $G$. Let $k$ be the degree of a vertex. It is known that there are $d+1$ distinct eigenvalues $\theta_0,\dots,\theta_d$. We define $[i]_b = 1+b+\cdots+b^{i-1}$ for $i\geq 1$, and $[0]_b=0$. It is found in [@jurivsic2017restrictions Lemma 2] that for $0\leq i\leq d$, $$\begin{aligned} \theta_i = [d-i]_b(\beta -\alpha [i]_b)-[i]_b.\end{aligned}$$ If $b>0$ then the eigenvalues are given in decreasing order. The multiplicity of $\theta_i$ is given by $$\begin{aligned} m_i = \frac{(1+\alpha[d-2i]_b+b^{d-2i}\beta)\prod_{j=0}^{i-1}\alpha_j}{(1+\alpha[d]_b+b^d\beta)\prod_{j=1}^i\beta_j},\end{aligned}$$ where $$\begin{aligned} \alpha_j =&~ b[d-j]_b(\beta-\alpha[j]_b)(1+\alpha [d-j]_b+b^{d-j}\beta), && (0\leq j\leq d-1);\\ \beta_j =&~ [j]_b(\beta-\alpha[j]_b+b^j)(1+\alpha[d-j]_b), && (1\leq j\leq d). \end{aligned}$$ Suppose that $b>1$ and $(b-1)\beta+\alpha>0$. We see that $k=\theta_0=\beta[d]_b$ and $\theta_1=[d-1]_b(\beta-\alpha)-1$. Then, $$1- \frac{\theta_1}{k} = 1 - \frac{[d-1]_b(\beta-\alpha)-1}{[d]_b\beta}\rightarrow \frac{(b-1)\beta+\alpha}{b\beta}\quad \quad \text{as $d\rightarrow\infty$.}$$ Therefore, $\frac{k}{k-\theta_1}=O(1)$ and so $$\mathcal{K}(G) \leq \frac{(n-1)k}{k-\theta_1} = O(n).$$ **Example 19**. Kemeny's constants for families (C2), (C3), (C3a), (C4), (C4a), (C10), (C11), and (C11a) in [@brouwer2012distance Tables 6.1 and 6.2] are $O(n)$ while their diameters grow as $n$ increases. An example of a family with $b=1$ is the set of Hamming graphs (see [@brouwer2012distance]). Then $n=q^d$, $b=1$, $\alpha=0$, and $\beta = q$. Using the formulae for eigenvalues and their multiplicities, and the fact that $\frac{1}{j}\leq \frac{2}{j+1}$ for $j\geq 1$, we can find that $$\begin{aligned} \mathcal{K}(G) = &~\frac{d}{q}\sum_{j=1}^d \binom{d}{j}\frac{(q-1)^{j+1}}{j} \\ \leq&~ \frac{2d}{q}\sum_{j=1}^d \binom{d}{j}\frac{(q-1)^{j+1}}{j+1} \leq \frac{2d}{q}\left(\frac{1}{d+1}q^{d+1}-(q-1)\right) = O(n).\end{aligned}$$ # Discussion {#sec:concl} Let $G$ be a connected graph. We have seen in Remark [Remark 1](#remark: n^3){reference-type="ref" reference="remark: n^3"} that $\mathcal{K}(G) = O(n^3)$. Trivially, $\mathcal{K}(G)+\mathcal{K}(\overline{G}) = O(n^3)$. Accordingly, we have examined the product $\mathcal{K}(G)\mathcal{K}(\overline{G})$ for the Nordhaus-Gaddum problem in relation to Kemeny's constant. We have proved that when the maximum degree is $n-\Omega(n)$, or when it is $n-O(1)$, we have $\mathcal{K}(G)\mathcal{K}(\overline{G}) = O(n^4)$. Interchanging the roles of $G$ and $\overline{G}$, we see also that $\mathcal{K}(G)\mathcal{K}(\overline{G}) = O(n^4)$ if the minimum degree is either $O(1)$ or $\Omega(n)$. To completely resolve the issue, future work should consider graphs in which the maximum degree is $n-o(n)$ and $n-\omega(1)$ *and* the minimum degree is $\omega(1)$ and $o(n)$. Additionally, we have presented calculations for various relevant families of graphs. The above-mentioned $O(n^3)$ bound shows that $\mathcal{K}(G)\mathcal{K}(\overline{G})=O(n^6)$. Although we do not expect this bound to be optimal, we do not have a better bound for all graphs. 
The final sentence of Corollary [Corollary 9](#cor-deltaplusbound){reference-type="ref" reference="cor-deltaplusbound"} implies that $\mathcal{K}(G)\mathcal{K}(\overline{G})=O(n^5)$ whenever $\Delta(G)+\delta(G) \neq n-1$ (if $\Delta(G)+\delta(G)<n-1$, then $\Delta(\overline{G})+\delta(\overline{G})\geq n$). However, we do not even have examples to show that the worst case of $\mathcal{K}(G)\mathcal{K}(\overline{G})$ is not $O(n^4)$. An alternative approach to the Nordhaus-Gaddum problem can be based on graph diameter rather than maximum degree. From Proposition [Proposition 2](#prop:fixed diameter){reference-type="ref" reference="prop:fixed diameter"}, if $G$ is of diameter $3$, then $\mathcal{K}(G)=O(n^2)$. Since the complement of a graph of diameter more than $3$ has diameter $2$, if we understand the order of Kemeny's constant of a graph with diameter $2$, then we can address the problem. **Conjecture 20**. *Let $G$ be a graph with $\mathop{\mathrm{diam}}(G)=2$. Then $\mathcal{K}(G) = O(n)$.* We can see from our findings that Conjecture [Conjecture 20](#conjecture1){reference-type="ref" reference="conjecture1"} holds for several families of graphs with diameter $2$; for instance, threshold graphs, joins of graphs, and strongly regular graphs. Finally, we have not identified a family of graphs $G$ such that $\mathcal{K}(G) = \omega(n)$ and $\mathcal{K}(\overline{G})=\omega(n)$. Such a family would be interesting, as it would help us understand what structure $G$ and $\overline{G}$ must have in order for both to attain a higher order of Kemeny's constant. If Conjecture [Conjecture 20](#conjecture1){reference-type="ref" reference="conjecture1"} is proved to be true, then such a family should be found among graphs with diameter $3$ whose complements also have diameter $3$. # Acknowledgement {#acknowledgement .unnumbered} The authors are grateful to Jane Breen at Ontario Tech University and Steve Butler at Iowa State University for constructive conversations when this project was in its initial phase. # Funding {#funding .unnumbered} Ada Chan gratefully acknowledges the support of the NSERC Grant No. RGPIN-2021-03609. S. Kim is supported in part by funding from the Fields Institute for Research in Mathematical Sciences and from the Natural Sciences and Engineering Research Council of Canada (NSERC). S. Kirkland is supported by NSERC grant number RGPIN--2019--05408. N. Madras is supported in part by NSERC Grant No. RGPIN-2020-06124. [^1]: Contact: kimswim\@yorku.ca
--- abstract: | An important problem in computational arithmetic geometry is to find changes of coordinates to simplify a system of polynomial equations with rational coefficients. This is tackled by a combination of two techniques, called minimisation and reduction. We give an algorithm for minimising certain pairs of quadratic forms, subject to the constraint that the first quadratic form is fixed. This has applications to 2-descent on the Jacobian of a genus $2$ curve. address: - University of Cambridge, DPMMS, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB, UK - Harvard University, Department of Mathematics, Science Center Room 325, 1 Oxford Street, Cambridge, MA 02138, USA author: - Tom Fisher - Mengzhen Liu date: 12th September 2023 title: Minimisation of 2-coverings of genus 2 Jacobians --- # Introduction ## Models for $2$-coverings {#sec:models} We work over a field $K$ with $\operatorname{char}(K) \not= 2$. Let $C$ be a smooth curve of genus $2$ with equation $y^2 = f(x) = f_6 x^6 + f_5 x^5 + \ldots + f_1 x + f_0$ where $f \in K[x]$ is a polynomial of degree $6$. We fix throughout the polynomial $$% \label{def:G} G = z_{12}z_{34}-z_{13}z_{24}+z_{23}z_{14}.$$ The following two definitions are based on those in [@genus2ctp Section 2.4]. **Definition 1**. A *model* (for a $2$-covering of the Jacobian of $C$) is a pair $(\lambda,H)$ where $\lambda \in K^\times$ and $H \in K[z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}]$ is a quadratic form satisfying $$\det(\lambda x \mathbf G- \mathbf H) = -\lambda^6 f_6^{-1} f(x)$$ where $\mathbf G$ and $\mathbf H$ are the matrices of second partial derivatives of $G$ and $H$. We identify the space of column vectors of length $6$ and the space of $4 \times 4$ alternating matrices via the map $$A : z = \begin{pmatrix} z_{12} \\ z_{13} \\ z_{23} \\ z_{14} \\ z_{24} \\ z_{34} \end{pmatrix} \mapsto \begin{pmatrix} 0 & z_{12} & z_{13} & z_{14} \\ & 0 & z_{23} & z_{24} \\ \multicolumn{2}{c}{\multirow{2}{*}{\,$-$}} & 0 & z_{34} \\ & & & 0 \end{pmatrix}$$ so that $G(z)$ is the Pfaffian of $A(z)$. Then each $4 \times 4$ matrix $P$ uniquely determines a $6 \times 6$ matrix $\wedge^2 P$ such that $$P A(z) P^T = A( (\wedge^2 P) z)$$ for all column vectors $z$. For $F \in K[x_1, \ldots, x_n]$ and $M \in \operatorname{GL}_n(K)$ we write $F \circ M$ for the polynomial satisfying $(F \circ M)(x) = F(Mx)$ for all columns vectors $x$. The Pfaffian $\operatorname{Pf}(A)$ of an alternating matrix $A$ has the properties that $\operatorname{Pf}(A)^2 = \det(A)$ and $\operatorname{Pf}(P A P^T) = (\det P) \operatorname{Pf}(A)$. The latter tells us that $G \circ \wedge^2 P = (\det P) G$. It is also not hard to show that $\det(\wedge^2 P) = (\det P)^3$. **Definition 2**. Two models are *$K$-equivalent* if they are in the same orbit for the action of $K^\times \times \operatorname{PGL}_4(K)$ via $$(c,P): (\lambda,H) \mapsto \left(c\lambda, \frac{c}{\det P} H \circ \wedge^2 P\right).$$ It may be checked using the above observations that this is a well defined (right) group action on the space of models (for a fixed choice of genus $2$ curve $C$). **Example 3**. 
Let $C/\mathbb Q$ be the genus $2$ curve given by $y^2 = f(x)$ where $$f(x) = -28 x^6 + 84 x^5 - 323 x^4 + 506 x^3 - 471 x^2 + 232 x - 60.$$ One of the elements of the 2-Selmer group of $\operatorname{Jac}C$ is represented by the model $$\begin{aligned} (\lambda_1&,H_1) = (42336, \,\, 25128 z_{12}^2 + 24480 z_{12} z_{13} + 14031 z_{12} z_{23} + 15408 z_{12} z_{14} \\ & + 13959 z_{12} z_{24} + 25407 z_{12} z_{34} + 2232 z_{13}^2 - 16407 z_{13} z_{23} + 4464 z_{13} z_{14} \\ & - 22815 z_{13} z_{24} + 1161 z_{13} z_{34} + 2329 z_{23}^2 + 15282 z_{23} z_{14} + 7687 z_{23} z_{24} \\ & - 19547 z_{23} z_{34} - 2304 z_{14}^2 - 17838 z_{14} z_{24} - 22590 z_{14} z_{34} - 134 z_{24}^2 \\ & + 41978 z_{24} z_{34} - 99584 z_{34}^2).\end{aligned}$$ Applying the transformation $(c,P)$ with $c = 1/3024$ and $$\label{cob} P = \begin{pmatrix} 2 & -19 & 2 & 5 \\ 4 & 4 & -31 & 38 \\ 2 & 2 & 37 & 40 \\ -7 & -7 & -14 & 7 \end{pmatrix}$$ gives the $\mathbb Q$-equivalent model $$\begin{aligned} (\lambda_2&,H_2) = (14, \,\, z_{12} z_{23} + 2 z_{12} z_{14} - z_{12} z_{24} + 8 z_{12} z_{34} - 7 z_{13}^2 - 13 z_{13} z_{23} \\ & - 12 z_{13} z_{14} - 15 z_{13} z_{24} - 20 z_{13} z_{34} - 5 z_{23}^2 - 2 z_{23} z_{14} - 25 z_{23} z_{24} \\ & - 59 z_{23} z_{34} - 4 z_{14}^2 - 14 z_{14} z_{24} - 18 z_{14} z_{34} + 17 z_{24}^2 - 37 z_{24} z_{34} - 11 z_{34}^2). \end{aligned}$$ ## Relation to previous work The change of coordinates [\[cob\]](#cob){reference-type="eqref" reference="cob"} was found by a combination of two techniques, called minimisation and reduction. *Minimisation* seeks to remove prime factors from a suitably defined invariant (usually the discriminant). The prototype example is using Tate's algorithm to compute a minimal Weierstrass equation for an elliptic curve. *Reduction* seeks to a make a final unimodular substitution so that the coefficients are as small as possible. The prototype example is the reduction algorithm for positive definite binary quadratic forms. Algorithms for minimising and reducing 2-, 3-, 4- and 5-coverings of elliptic curves are given by Cremona, Fisher and Stoll [@CFS], and Fisher [@F], building on earlier work of Birch and Swinnerton-Dyer [@BSD] for $2$-coverings. Algorithms for minimising some other representations associated to genus $1$ curves are given by Fisher and Radicevic [@FR]. A general framework for minimising hypersurfaces is described by Kollar [@K], and this has been refined by Elsenhans and Stoll [@ES]; in particular they give practical algorithms for plane curves (of arbitrary degree) and for cubic surfaces. Algorithms for minimising Weierstrass equations for general hyperelliptic curves are given by Q. Liu [@L]. In this paper we give an algorithm for minimising $2$-coverings of genus $2$ Jacobians. These are represented by pairs of quadratic forms (see Definition [Definition 1](#def:model){reference-type="ref" reference="def:model"}) where the first quadratic form is fixed. We only consider minimisation and not reduction, since the latter is already treated in [@genus2ctp Remark 4.3]. Our minimisation algorithm plays a key role in the work of the first author and Jiali Yan [@genus2ctp] on computing the Cassels-Tate pairing on the $2$-Selmer group of a genus $2$ Jacobian. Indeed the method presented in *loc. cit.* for computing the Cassels-Tate pairing relies on being able to find rational points on certain twisted Kummer surfaces. 
Minimising and reducing our representatives for the $2$-Selmer group elements simplifies the equations for these surfaces, and so makes it more likely that we will be able to find such rational points. Earlier works on minimisation (see in particular [@CFS]) considered both minimisation theorems (i.e., general bounds on the minimal discriminant) and minimisation algorithms (i.e., practical methods for finding a minimal model equivalent to a given one). For $2$-coverings of hyperelliptic Jacobians, some minimisation theorems have already been proved; see the papers of Bhargava and Gross [@BG Section 8], and Shankar and Wang [@SW Section 2.4]. We will not revisit these results, as our focus is on the minimisation algorithms. **Remark 4**. As noted in [@CF Lemma 17.1.1], [@FH Section 19.1] and [@genus2ctp Section 2.4] the quadratic form $G = z_{12}z_{34}-z_{13}z_{24}+z_{23}z_{14}$ has two algebraic families of $3$-dimensional isotropic subspaces. Moreover, the transformations considered in Definition [Definition 2](#def:action){reference-type="ref" reference="def:action"} do not describe the full projective orthogonal group of $G$, but only the index $2$ subgroup that preserves (rather than swaps over) these two algebraic families. Restricting attention to this index $2$ subgroup (when defining equivalence) makes no difference to the minimisation problem (see Remark [Remark 10](#rem:dual){reference-type="ref" reference="rem:dual"}), but as explained in [@genus2ctp Sections 2.4 and 2.5] it is important in the context of $2$-descent, since it means we can distinguish between elements of the $2$-Selmer group with the same image in the fake $2$-Selmer group. Some Magma [@magma] code accompanying this article, including an implementation of our algorithm, will be made available from the first author's website. ## Acknowledgements {#acknowledgements .unnumbered} This work originated as a summer project carried out by the second author and supervised by the first author. We thank the Research in the CMS Programme for their support. # Statement of the algorithm We keep the notation of Section [1.1](#sec:models){reference-type="ref" reference="sec:models"}, but now let $K$ be a field with discrete valuation $v : K^\times \to \mathbb Z$, valuation ring ${\mathcal{O}_K}$, uniformiser $\pi$, and residue field $k$. If $F$ is a polynomial with coefficients in $K$ then we write $v(F)$ for the minimum of the valuations of its coefficients. **Definition 5**. A model $(\lambda,H)$ is *integral* if $v(H) \geqslant 0$. It is *minimal* if $v(\lambda)$ is minimal among all $K$-equivalent integral models. Using the action of $K^\times$ (see Definition [Definition 2](#def:action){reference-type="ref" reference="def:action"}) to clear denominators it is clear that any model is $K$-equivalent to an integral model. By Definition [Definition 1](#def:model){reference-type="ref" reference="def:model"} we have $v(\lambda) \geqslant (v(f_6) - v(f_i))/(6-i)$ for all $i= 0,1, \ldots, 5$. We cannot have $f_0 = \ldots = f_5 = 0$ since $C$ is a smooth curve of genus $2$. Therefore $v(\lambda)$ is bounded below, and minimal models exist. It also follows from Definition [Definition 1](#def:model){reference-type="ref" reference="def:model"} that if $v(f_6) = v(\operatorname{disc}f) = 0$ then any integral model $(\lambda,H)$ has $v(\lambda) \geqslant 0$. Therefore, in global applications, minimality is automatic at all but a finite set of primes, which we may determine by factoring. 
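To make the linear algebra of Section [1.1](#sec:models){reference-type="ref" reference="sec:models"} concrete, the following sketch builds the map $z\mapsto A(z)$ and the matrix $\wedge^2 P$, and verifies the identities $\det A(z) = G(z)^2$, $G\circ \wedge^2 P = (\det P)\,G$ and $\det(\wedge^2 P) = (\det P)^3$ quoted there. It assumes `sympy` is available; the matrix $P$ is an arbitrary invertible example, the helper names (`G`, `A`, `wedge2`) are ad hoc, and this is only an illustration, not the accompanying Magma code mentioned above.

```python
# Sketch (not the accompanying Magma code): the alternating matrix A(z), the induced
# 6x6 matrix wedge2(P), and the identities det A(z) = G(z)^2, G o wedge2(P) = det(P)*G
# and det(wedge2(P)) = det(P)^3 from Section 1.1.  The matrix P is an arbitrary example.
import sympy as sp

zvars = sp.symbols('z12 z13 z23 z14 z24 z34')

def G(z):
    z12, z13, z23, z14, z24, z34 = z
    return z12*z34 - z13*z24 + z23*z14

def A(z):
    z12, z13, z23, z14, z24, z34 = z
    return sp.Matrix([[0, z12, z13, z14],
                      [-z12, 0, z23, z24],
                      [-z13, -z23, 0, z34],
                      [-z14, -z24, -z34, 0]])

def wedge2(P):
    """The 6x6 matrix satisfying P*A(z)*P^T = A(wedge2(P)*z)."""
    cols = []
    for i in range(6):
        e = [0] * 6
        e[i] = 1
        M = P * A(e) * P.T                 # alternating, so read off its upper triangle
        cols.append([M[0, 1], M[0, 2], M[1, 2], M[0, 3], M[1, 3], M[2, 3]])
    return sp.Matrix(cols).T

assert sp.expand(A(zvars).det() - G(zvars)**2) == 0       # G(z) is the Pfaffian of A(z)

P = sp.Matrix([[2, -1, 0, 3],
               [1, 1, 2, 0],
               [0, 4, 1, 1],
               [1, 0, 1, 2]])
W = wedge2(P)
w = W * sp.Matrix(zvars)                                   # the transformed coordinates
assert sp.expand(G(list(w)) - P.det() * G(zvars)) == 0     # G o wedge2(P) = det(P) * G
assert W.det() == P.det()**3                               # det(wedge2(P)) = det(P)^3
```

With these helpers, applying a transformation $(c,P)$ as in Definition [Definition 2](#def:action){reference-type="ref" reference="def:action"} to a model $(\lambda,H)$ amounts to replacing $H(z)$ by $\tfrac{c}{\det P}\,H(\wedge^2P\,z)$, which is the substitution one would experiment with when studying the minimisation problem below.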
Returning to the local situation, there is an evident recursive algorithm for computing minimal models if we can solve the following problem. **Minimisation problem.** Given an integral quadratic form $H \in {\mathcal{O}_K}[z_{12}, \ldots,z_{34}]$ determine whether there exists $P \in \operatorname{PGL}_4(K)$ such that $$v\left(\frac{1}{\det P} H \circ \wedge^2 P\right) > 0$$ and find such a matrix $P$ if it exists. Our solution to this problem (see Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"}) is an iterative procedure that computes the required transformation as a composition of simpler transformations. These simpler transformations are either given by a matrix in $\operatorname{GL}_4({\mathcal{O}_K})$, in which case we call the transformation an *integral change of coordinates*, or given by one of the following operations, corresponding to $P = \operatorname{Diag}(1,1,1,\pi), \operatorname{Diag}(1,1,\pi,\pi)$ or $\operatorname{Diag}(1,\pi,\pi,\pi)$. **Definition 6**. We define the following three operations on quadratic forms $H$: - Operation 1. Replace $H$ by $\frac{1}{\pi} H(z_{12},z_{13},z_{23},\pi z_{14},\pi z_{24},\pi z_{34})$, - Operation 2. Replace $H$ by $H(\pi^{-1} z_{12},z_{13},z_{23},z_{14},z_{24},\pi z_{34})$, - Operation 3. Replace $H$ by $\frac{1}{\pi} H(z_{12},z_{13},\pi z_{23},z_{14},\pi z_{24},\pi z_{34})$, The following algorithm suggests some transformations that we might try applying to $H$. In applications $W \subset k^6$ will be a subspace determined by the reduction of $H$ mod $\pi$. We write $e_{12},e_{13},e_{23},e_{14},e_{24},e_{34}$ for the standard basis of $k^6$, and identify the dual basis with $z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}$. **Algorithm 7**. (Subalgorithm to suggest some transformations.) We take as input an integral quadratic form $H \in {\mathcal{O}_K}[z_{12}, \ldots,z_{34}]$ and a vector space $W \subset k^6$ that is isotropic for $G$. When we make an integral change of coordinates we apply the same transformation (or rather its reduction mod $\pi$) to $W$. The output is either one or two transformations $P \in \operatorname{PGL}_4(K)$. - If $\dim W = 1$ then make an integral change of coordinates so that $W = \langle e_{12} \rangle$. Then apply Operation 2. - If $\dim W = 2$ then make an integral change of coordinates so that $W = \langle e_{12},e_{13} \rangle$. Then apply either Operation 1 or Operation 3. - If $\dim W = 3$ then either make an integral change of coordinates so that $W = \langle e_{12}, e_{13}, e_{23}\rangle$ and apply Operation 1, or make an integral change of coordinates so that $W = \langle e_{12}, e_{13}, e_{14}\rangle$ and apply Operation 3. We write ${\overline{H}}\in k[z_{12},\ldots,z_{34}]$ for the reduction of $H$ mod $\pi$. If $\operatorname{char}(k) \not=2$ then the rank and kernel of ${\overline{H}}$ are defined as the rank and kernel of the corresponding $6 \times 6$ symmetric matrix. If $\operatorname{char}(k)=2$ then we assume that $k$ is perfect, so that $${\overline{H}}= \frac{\partial {\overline{H}}}{\partial z_{12}} = \ldots = \frac{\partial {\overline{H}}}{\partial z_{34}} = 0$$ defines a $k$-vector space, which we call $\ker {\overline{H}}$. We then define $$\operatorname{rank}{\overline{H}}= 6 - \dim \ker {\overline{H}}.$$ We continue to write $G$ for the reduction of $G$ mod $\pi$, as it should always be clear from the context which of these we mean. **Algorithm 8**. (Minimisation algorithm.) 
We take as input an integral quadratic form $H \in {\mathcal{O}_K}[z_{12}, \ldots,z_{34}]$. The output is `TRUE` or `FALSE` according to whether there exists $P \in \operatorname{PGL}_4(K)$ such that $$v \left( \frac{1}{\det P}H \circ \wedge^2 P \right) > 0.$$

1.  Compute $r = \operatorname{rank}{\overline{H}}$. If $r=0$ then return `TRUE`.

2.  If $r=1$ then try making an integral change of coordinates so that ${\overline{H}}= z_{34}^2$. If the reductions of $G$ and $\pi^{-1} H(z_{12}, \ldots,z_{24},0)$ mod $\pi$ have a common $3$-dimensional isotropic subspace $W \subset \ker {\overline{H}}$, then (since running Algorithm [Algorithm 7](#alg:totry){reference-type="ref" reference="alg:totry"} on any such subspace $W$ gives $v(H) > 0$) return `TRUE`.

3.  If $r = 2$ then try running Algorithm [Algorithm 7](#alg:totry){reference-type="ref" reference="alg:totry"} on each codimension $1$ subspace $W \subset \ker {\overline{H}}$ that is isotropic for $G$. If one of the suggested transformations gives $v(H) > 0$ then return `TRUE`.

4.  If $r \in \{1,2\}$ and ${\overline{H}}$ factors as a product of linear forms defined over $k$, say ${\overline{H}}= \ell_1 \ell_2$, then for each $i=1,2$ try making an integral change of coordinates so that $\ell_i=z_{34}$ and then apply Operation 2. If at least one of these transformations gives $v(H) \geqslant 0$ then select one with $\operatorname{rank}{\overline{H}}$ as small as possible and go to Step 1.

5.  If $r \in \{ 2,3,4,5 \}$ then try running Algorithm [Algorithm 7](#alg:totry){reference-type="ref" reference="alg:totry"} on $W = \ker {\overline{H}}$ if this subspace is isotropic for $G$, and otherwise on each codimension $1$ subspace $W \subset \ker {\overline{H}}$ that is isotropic for $G$. If at least one of the suggested transformations gives $v(H) \geqslant 0$ then select one with $\operatorname{rank}{\overline{H}}$ as small as possible and go to Step 1.

6.  If this step is reached, or if after visiting Step 1 the first time and returning to it a further 4 times we still do not have $v(H) > 0$, then return `FALSE`.

There is no difficulty in modifying the algorithm so that when it returns `TRUE` the corresponding transformation $P \in \operatorname{PGL}_4(K)$ is also returned. In Section [3](#sec:implement){reference-type="ref" reference="sec:implement"} we give further details of the implementation, in particular explaining how we make the integral changes of coordinates, and giving further details of Step 2. In Sections [4](#sec:weights){reference-type="ref" reference="sec:weights"} and [5](#sec:proof){reference-type="ref" reference="sec:proof"} we prove that Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} is correct.

# Remarks on implementation {#sec:implement}

In Algorithms [Algorithm 7](#alg:totry){reference-type="ref" reference="alg:totry"} and [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} we are asked to try making various integral changes of coordinates. It is important to realise that we are restricted to considering matrices of the form $\wedge^2 P$ for $P \in \operatorname{GL}_4({\mathcal{O}_K})$, and not general elements of $\operatorname{GL}_6({\mathcal{O}_K})$. Therefore some care is required both in determining whether a suitable transformation exists, and in finding one when it does. Since the natural map $\operatorname{GL}_4({\mathcal{O}_K}) \to \operatorname{GL}_4(k)$ is surjective, we may concentrate on the mod $\pi$ situation here.
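As a concrete illustration of the basic move the algorithm manipulates, the following short sketch (ours, in Python with sympy, and not the Magma implementation accompanying the paper) computes $\wedge^2 P$ in the basis $e_{12},e_{13},e_{23},e_{14},e_{24},e_{34}$ and the transformed form $\frac{1}{\det P} H \circ \wedge^2 P$, and checks symbolically that $P = \operatorname{Diag}(1,1,1,\pi)$ recovers Operation 1 of Definition 6. Here the symbol `p` plays the role of the uniformiser $\pi$.

```python
import sympy as sp

PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]   # 12, 13, 23, 14, 24, 34
z = sp.symbols('z12 z13 z23 z14 z24 z34')

def wedge2(P):
    """Second compound matrix of a 4x4 matrix P, acting on e_i ^ e_j in the order above."""
    return sp.Matrix(6, 6, lambda r, c:
                     P[PAIRS[r][0], PAIRS[c][0]] * P[PAIRS[r][1], PAIRS[c][1]]
                     - P[PAIRS[r][0], PAIRS[c][1]] * P[PAIRS[r][1], PAIRS[c][0]])

def transform(H, P):
    """Return (1/det P) * H(wedge2(P) * z)."""
    w = wedge2(P) * sp.Matrix(z)
    return sp.expand(H.subs(dict(zip(z, w)), simultaneous=True) / P.det())

p = sp.symbols('p')                 # stands in for the uniformiser pi
cs = sp.symbols('c0:21')            # a generic quadratic form in the six variables
H = sum(c * m for c, m in zip(cs, [zi * zj for i, zi in enumerate(z) for zj in z[i:]]))

# Operation 1 of Definition 6, written out directly
op1 = sp.expand(H.subs({z[3]: p*z[3], z[4]: p*z[4], z[5]: p*z[5]}, simultaneous=True) / p)
assert sp.expand(transform(H, sp.diag(1, 1, 1, p)) - op1) == 0
print("Diag(1,1,1,p) reproduces Operation 1")
```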
Notice however that in the global application with $K = \mathbb Q$ and $v=v_p$ it is better to use the surjectivity of $\operatorname{SL}_4(\mathbb Z) \to \operatorname{SL}_4(\mathbb Z/p\mathbb Z)$, so that minimisation at $p$ does not interfere with minimisation at other primes. Let $k^4$ have basis $e_1, \ldots, e_4$. We identify $\wedge^2 k^4 = k^6$ via $e_i \wedge e_j \mapsto e_{ij}$. Each linear subspace $W \subset k^6$ determines a linear subspace $V_0 \subset k^4$ given by $$V_0 =\{v \in k^4 \mid v \wedge w = 0 \text{ for all } w \in W \}$$ where $\wedge$ is the natural map $k^4 \times \wedge^2 k^4 \to \wedge^3 k^4$. Let $V_1$ be the analogue of $V_0$ when $W$ is replaced by its orthogonal complement with respect to $G$. **Lemma 9**. *Let $W \subset k^6$ be a subspace, and let $P\in \operatorname{GL}_4(k)$.* 1. *If $\dim W = 1$ then $\wedge^2 P$ sends $W$ to $\langle e_{12} \rangle$ if and only if $P$ sends $V_0$ to $\langle e_1, e_2 \rangle$.* 2. *If $\dim W = 2$ or $3$ then $\wedge^2 P$ sends $W$ to a subspace of $\langle e_{12},e_{13},e_{14} \rangle$ if and only if $P$ sends $V_0$ to $\langle e_1 \rangle$.* 3. *If $\dim W = 5$ then $\wedge^2 P$ sends $W$ to $\langle e_{12},e_{13},e_{14},e_{23},e_{24}\rangle$ if and only if $P$ sends $V_1$ to $\langle e_1, e_2 \rangle$.* *Proof.* In (i) we have $W = \langle e_{12} \rangle$ if and only if $V_0 = \langle e_1, e_2 \rangle$, and in (ii) we have $W \subset \langle e_{12},e_{13},e_{14} \rangle$ if and only if $V_0 = \langle e_1 \rangle$. Since the definition of $V_0$ in terms of $W$ behaves well under all changes of coordinates this proves (i) and (ii). As noted in Section [1.1](#sec:models){reference-type="ref" reference="sec:models"}, all transformations of the form $\wedge^2 P$ preserve $G$ (up to a scalar multiple). Therefore (iii) follows from (i) on replacing $W$ by its orthogonal complement with respect to $G$. ◻ **Remark 10**. Let $\mathbf G$ be the matrix of second partial derivatives of $G$, i.e., the $6 \times 6$ matrix with entries $1,-1,1,1,-1,1$ on the antidiagonal. A direct calculation shows that for any $4 \times 4$ matrix $P$ we have $$\wedge^2 ({\operatorname{adj}}(P)^T) = (\det P) \mathbf G(\wedge^2 P) \mathbf G.$$ Letting $\operatorname{PGL}_4$ act on the space of quadratic forms via $P : H \mapsto \frac{1}{\det P}H \circ \wedge^2 P$, this tells us that applying $P$ to a quadratic form $H(z_{12},z_{13},z_{23},z_{14},z_{24},z_{34})$ has the same effect as applying $P^{-T}$ to its *dual quadratic form* which we define to be $H(z_{34},-z_{24},z_{14},z_{23},-z_{13},z_{12})$. We note that the substitution used to replace $H$ by its dual swaps over the two families of isotropic subspaces in Remark [Remark 4](#rem:index2){reference-type="ref" reference="rem:index2"}. We find the changes of coordinates in Algorithm [Algorithm 7](#alg:totry){reference-type="ref" reference="alg:totry"} by using Lemma [Lemma 9](#lem:cc){reference-type="ref" reference="lem:cc"}(i) and (ii), and the analogue of (ii) after passing to the dual as in Remark [Remark 10](#rem:dual){reference-type="ref" reference="rem:dual"}. We find the changes of coordinates in Steps 2 and 4 of Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} using Lemma [Lemma 9](#lem:cc){reference-type="ref" reference="lem:cc"}(iii). **Remark 11**. 
In Step 2 of Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} we must find if possible a 3-dimensional subspace $W \subset \langle e_{12},e_{13},e_{14},e_{23},e_{24}\rangle$ that is isotropic for both $G$ and ${\overline{H}}_1$ where $$H_1(z_{12}, \ldots,z_{24}) = \pi^{-1} H(z_{12}, \ldots,z_{24},0).$$ To be isotropic for $G$ we need that $\langle e_{12} \rangle \subset W$. So such a subspace $W$ can only exist if $\overline{H}_1(1,0, \ldots,0)=0$. We assume that this is the case and write $$\overline{H}_1(z_{12}, \ldots,z_{24}) = z_{12} h_1(z_{13},z_{23},z_{14},z_{24}) + h_2(z_{13},z_{23},z_{14},z_{24})$$ where $h_i$ is a homogeneous polynomial of degree $i$. Our problem reduces to that of finding a line contained in $$\{ z_{13}z_{24} - z_{23}z_{14} = h_1 = h_2 = 0 \} \subset \mathbb P^3.$$ The well known description of the lines on $\{ z_{13}z_{24} - z_{23}z_{14} = 0 \} \subset \mathbb P^3$ suggests that we substitute $(z_{13},z_{23},z_{14},z_{24}) = (x_1y_1,x_1y_2,x_2y_1,x_2y_2)$ into $h_1$ and $h_2$, take the GCD, and factor into irreducibles. The lines of interest now correspond to linear factors of the form $\alpha x_1 + \beta x_2$ or $\gamma y_1 + \delta y_2$. **Remark 12**. In Steps 3 and 5 of Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"}, when $\ker {\overline{H}}$ is not itself isotropic for $G$, we must find all codimension $1$ subspaces of $\ker {\overline{H}}$ that are isotropic for $G$. Since the restriction of $G$ to $\ker {\overline{H}}$ is a non-zero quadratic form, it can have at most two linear factors. There are therefore at most two codimension $1$ subspaces we need to consider. In particular, the number of times that Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} applies one of the operations in Definition [Definition 6](#def_ops){reference-type="ref" reference="def_ops"} is uniformly bounded. # Weights and Admissibility {#sec:weights} Let $H \in {\mathcal{O}_K}[u_0, \ldots,u_5]$ be an integral quadratic form and suppose that there exists $P \in \operatorname{GL}_4(K)$ such that $$v\left(\frac{1}{\det P} H \circ \wedge^2 P\right) > 0.$$ Then $P$ is equivalent to a matrix in Smith normal form, say $$P = U {\rm Diag}(\pi^{w_1},\pi^{w_2},\pi^{w_3},\pi^{w_4}) V$$ for some $U,V \in \operatorname{GL}_4({\mathcal{O}_K})$ and $w_1,w_2,w_3,w_4 \in \mathbb Z$. We say that the weight $w = (w_1,w_2,w_3,w_4)$ is *admissible* for $H$. It is clear that permuting the entries of $w$, or adding the same integer to all entries, has no effect on admissibility. **Definition 13**. The weight $w = (w_1,w_2,w_3,w_4)$ *dominates* the weight $w' = (w'_1,w'_2,w'_3,w'_4)$ if $$\label{wt:ineq} \begin{aligned} &\max(1 + w_1 + w_2 + w_3 + w_4 - w_i - w_j - w_k - w_l,0) \\ & \hspace{5em} \geqslant \max(1 + w'_1 + w'_2 + w'_3 + w'_4 - w'_i - w'_j - w'_k - w'_l,0) \end{aligned}$$ for all $1 \leqslant i < j \leqslant 4$ and $1 \leqslant k < l \leqslant 4$. This definition is motivated by the fact that if $w$ dominates $w'$ and $w$ is admissible for $H$ then $w'$ is admissible for $H$. Our next lemma shows that (for the purpose of proving that Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} is correct) it suffices to consider finitely many (in fact 12) weights. **Lemma 14**. 
*Every weight $w = (0,a,b,c) \in \mathbb Z^4$ with $0 \leqslant a \leqslant b \leqslant c$ dominates one of the following weights $$\begin{aligned} & (0,0,0,0), \,\, (0,0,0,1), \,\, (0,1,1,1), \,\, (0,0,1,1), \,\, (0,0,1,2), \,\, (0,1,2,2), \\ & (0,1,1,2), \,\, (0,1,1,3), \,\, (0,2,2,3), \,\, (0,1,2,3), \,\, (0,1,2,4), \,\, (0,2,3,4).\end{aligned}$$* *Proof.* We list the pairs $(i,j)$ and $(k,l)$ in Definition [Definition 13](#def:dom){reference-type="ref" reference="def:dom"} in the order $(1,2)$, $(1,3)$, $(2,3)$, $(1,4)$, $(2,4)$, $(3,4)$. Taking $w = (0,a,b,c)$, the left hand side of [\[wt:ineq\]](#wt:ineq){reference-type="eqref" reference="wt:ineq"} is $\max(\xi,0)$ where $\xi$ runs over the entries of the following symmetric matrix. $$\hspace{-0.5em} \begin{bmatrix} 1+b+c-a & 1+c & 1+c-a & 1+b & 1+b-a & 1 \\ 1+c & 1+a+c-b & 1+c-b & 1+a & 1 & 1+a-b \\ 1+c-a & 1+c-b & 1+c-a-b & 1 & 1-a & 1-b \\ 1+b & 1+a & 1 & 1+a+b-c & 1+b-c & 1+a-c \\ 1+b-a & 1 & 1-a & 1+b-c & 1+b-a-c & 1-c \\ 1 & 1+a-b & 1-b & 1+a-c & 1-c & 1+a-b-c \end{bmatrix}$$ We divide into $8$ cases according as to which of the inequalities $0 \leqslant a \leqslant b \leqslant c$ are equalities. In fact we make the following more precise claims. - If $0=a=b=c$ then $w = (0,0,0,0)$. - If $0=a=b<c$ then $w$ dominates $(0,0,0,1)$. - If $0=a<b=c$ then $w$ dominates $(0,0,1,1)$. - If $0=a<b<c$ then $w$ dominates $(0,0,1,2)$. - If $0<a=b=c$ then $w$ dominates $(0,1,1,1)$. - If $0<a=b<c$ then $w$ dominates $(0,1,1,3)$, $(0,1,1,2)$ or $(0,2,2,3)$. - If $0<a<b=c$ then $w$ dominates $(0,1,2,2)$. - If $0<a<b<c$ then $w$ dominates $(0,1,2,4)$, $(0,1,2,3)$ or $(0,2,3,4)$. In each case where we list three possibilities, we further claim that these correspond to the subcases $a+b<c$, $a+b=c$ and $a+b>c$ (in that order). Since the proofs are very similar, we give details in just one case. So suppose that $0<a<b<c$ and $a+b=c$. Then we have $a \geqslant 1$, $b\geqslant 2$, $c \geqslant 3$, $b-a \geqslant 1$, $c-a \geqslant 2$ and $c-b \geqslant 1$. Listing the pairs $(i,j)$ and $(k,l)$ in the same order as before, the left hand side of [\[wt:ineq\]](#wt:ineq){reference-type="eqref" reference="wt:ineq"} is at least $$\begin{bmatrix} 5 & 4 & 3 & 3 & 2 & 1 \\ 4 & 3 & 2 & 2 & 1 & 0 \\ 3 & 2 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 1 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$ with equality if $(a,b,c)=(1,2,3)$. Therefore $w$ dominates $(0,1,2,3)$. ◻ Our next remark further reduces the number of weights we must consider. **Remark 15**. It is clear from Remark [Remark 10](#rem:dual){reference-type="ref" reference="rem:dual"} that if $w \in \mathbb Z^4$ is admissible for $H$ then $-w$ is admissible for the dual of $H$. We say that the weights $w$ and $-w$ (or any weights equivalent to these, in the sense of permuting the entries, or adding the same integer to all entries) are *dual*. The list of $12$ weights in Lemma [Lemma 14](#minimalcaselist){reference-type="ref" reference="minimalcaselist"} consists of $4$ dual pairs $(0,a,b,c)$ and $(0,c-b,c-a,c)$ with $a+b \not=c$, and $4$ self-dual weights $(0,a,b,a+b)$. # Completion of the proof {#sec:proof} In this section we complete the proof that Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} is correct. We first note that if $H$ and $H'$ are related by an integral change of coordinates, and the algorithm works for $H$ then it works for $H'$. 
This is because before applying Operations 1, 2 or 3 we always make an integral change of coordinates that, by Lemma [Lemma 9](#lem:cc){reference-type="ref" reference="lem:cc"}, is unique up to an element of $\operatorname{GL}_4({\mathcal{O}_K})$ whose reduction mod $\pi$ preserves a suitable subspace of $k^4$. The following elementary lemma then shows that the transformed quadratic forms are again related by an integral change of coordinates. **Lemma 16**. *Let $\alpha=\operatorname{Diag}(I_r, \pi I_{4-r})$ and $P\in \operatorname{GL}_4({\mathcal{O}_K})$. Then $P\in \alpha \operatorname{GL}_4({\mathcal{O}_K}) \alpha^{-1}$ if and only if the reduction of $P$ mod $\pi$ preserves the subspace $\langle e_1,\dots,e_r \rangle$.* *Proof.* This is [@CFS Lemma 4.1]. ◻ Let $H \in {\mathcal{O}_K}[z_{12}, \ldots, z_{34}]$ be a quadratic form. If there exists $P \in \operatorname{PGL}_4(K)$ such that $$\label{aim} v\left(\frac{1}{\det P} H \circ \wedge^2 P\right) > 0,$$ then, as explained in Section [4](#sec:weights){reference-type="ref" reference="sec:weights"}, one of the $12$ weights in Lemma [Lemma 14](#minimalcaselist){reference-type="ref" reference="minimalcaselist"} is admissible for $H$. Since the analysis for dual weights (see Remark [Remark 15](#rem:dual-wts){reference-type="ref" reference="rem:dual-wts"}) is essentially identical, we only need to consider one weight from each dual pair. It therefore suffices to consider the 8 weights listed in the table below. In the case of weight $(w_1, \ldots, w_4)$ we may suppose, by an integral change of coordinates, that [\[aim\]](#aim){reference-type="eqref" reference="aim"} holds with $P = \operatorname{Diag}(\pi^{w_1} ,\ldots, \pi^{w_4})$. This implies certain lower bounds on the valuations of the coefficients of $H$. To specify these (in a way that is valid even when $\operatorname{char}(k)=2$), we relabel the variables $z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}$ as $z_1, \ldots, z_6$ and write $H = \sum_{i \leqslant j} H_{ij} z_i z_j$. We also put $H_{ji} = H_{ij}$. Then the lower bounds on the $v(H_{ij})$ are as recorded in the table. 
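The entries of the table can be regenerated mechanically: for weight $w$, the bound at row $z_{ij}$ and column $z_{kl}$ is $\max(1 + w_1 + w_2 + w_3 + w_4 - w_i - w_j - w_k - w_l, 0)$, the same quantity as in Definition 13. The following short sketch (ours) reprints all eight matrices in the order Case 1 to Case 8.

```python
# variable order z12, z13, z23, z14, z24, z34, matching the relabelling z_1, ..., z_6
PAIRS = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]
WEIGHTS = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 1, 2),
           (0, 0, 1, 2), (0, 1, 1, 3), (0, 1, 2, 3), (0, 1, 2, 4)]

def bounds(w):
    s = sum(w)
    return [[max(1 + s - w[i-1] - w[j-1] - w[k-1] - w[l-1], 0) for (k, l) in PAIRS]
            for (i, j) in PAIRS]

for case, w in enumerate(WEIGHTS, start=1):
    print("Case", case, w)
    for row in bounds(w):
        print(" ", row)
```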
**Case 1:** $(0,0,0,0)$
$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$
$r = 0$

**Case 2:** $(0,0,0,1)$
$$\begin{bmatrix} 2 & 2 & 2 & 1 & 1 & 1 \\ 2 & 2 & 2 & 1 & 1 & 1 \\ 2 & 2 & 2 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \end{bmatrix}$$
$r = 1,2,3$

**Case 3:** $(0,0,1,1)$
$$\begin{bmatrix} 3 & 2 & 2 & 2 & 2 & 1 \\ 2 & 1 & 1 & 1 & 1 & 0 \\ 2 & 1 & 1 & 1 & 1 & 0 \\ 2 & 1 & 1 & 1 & 1 & 0 \\ 2 & 1 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 1,2$

**Case 4:** $(0,1,1,2)$
$$\begin{bmatrix} 3 & 3 & 2 & 2 & 1 & 1 \\ 3 & 3 & 2 & 2 & 1 & 1 \\ 2 & 2 & 1 & 1 & 0 & 0 \\ 2 & 2 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 1,2,3,4$

**Case 5:** $(0,0,1,2)$
$$\begin{bmatrix} 4 & 3 & 3 & 2 & 2 & 1 \\ 3 & 2 & 2 & 1 & 1 & 0 \\ 3 & 2 & 2 & 1 & 1 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 3,4$

**Case 6:** $(0,1,1,3)$
$$\begin{bmatrix} 4 & 4 & 3 & 2 & 1 & 1 \\ 4 & 4 & 3 & 2 & 1 & 1 \\ 3 & 3 & 2 & 1 & 0 & 0 \\ 2 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 3,4$

**Case 7:** $(0,1,2,3)$
$$\begin{bmatrix} 5 & 4 & 3 & 3 & 2 & 1 \\ 4 & 3 & 2 & 2 & 1 & 0 \\ 3 & 2 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 1 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 3,4$

**Case 8:** $(0,1,2,4)$
$$\begin{bmatrix} 6 & 5 & 4 & 3 & 2 & 1 \\ 5 & 4 & 3 & 2 & 1 & 0 \\ 4 & 3 & 2 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
$r = 5$

In our analysis of each case, we will assume we are not in an earlier case. The possibilities for $r = \operatorname{rank}{\overline{H}}$ will be justified below, but are recorded in the table for convenience. We complete the proof that Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} is correct by going through the $8$ cases. In fact we show that if the cases are grouped as

| Case 1 | Case 2 | Case 4 | Case 5 | Case 7 |
|--------|--------|--------|--------|--------|
|        | Case 3 |        | Case 6 | Case 8 |

then at each iteration of the algorithm we move at least one column to the left. Therefore, if after visiting Step 1 the first time and returning to it a further 4 times we still do not have $v(H) > 0$ then the algorithm is correct to return `FALSE`.

## Case 1: {#case-1 .unnumbered}

$w =(0,0,0,0)$. In this case we already have $v(H) > 0$, so $r=0$ and we are done by Step 1.

## Case 2: {#case-2 .unnumbered}

$w =(0,0,0,1)$. We see from the table that $\langle e_{12},e_{13},e_{23} \rangle \subset \ker {\overline{H}}$ and so $r \leqslant 3$. We cannot have $r=0$, otherwise we would be in Case 1. If $r=1$ then we are done by Step 2. If $r=2$ then we are done by Step 3. If $r=3$ then Step 5 directly applies Operation 1. (By "directly" we mean that there is no preliminary integral change of coordinates.) Since this gives $v(H) > 0$ we are in Case 1 on the next iteration.

## Case 3: {#case-3 .unnumbered}

$w =(0,0,1,1)$. We see from the table that ${\overline{H}}= \ell z_{34}$ for some linear form $\ell$.
One of the transformations considered in Step 4 is to directly apply Operation 2. Since this gives $v(H) > 0$ we are in Case 1 on the next iteration. ## Case 4: {#case-4 .unnumbered} $w=(0,1,1,2)$. We see from the table that $\langle e_{12},e_{13} \rangle \subset \ker {\overline{H}}$ and so $r \leqslant 4$. If $r = 4$ then Step 5 directly applies Operation 1 or Operation 3. Then on the next iteration either $(0,0,0,1)$ or $(0,1,1,1)$ is admissible, which means we are in Case 2 or its dual. If $r \leqslant 3$ then by applying a block diagonal element of $\operatorname{GL}_4({\mathcal{O}_K})$ with blocks of sizes $1$, $2$ and $1$, we may suppose that $H_{35} \equiv H_{45} \equiv 0 \pmod{\pi}$. If $r=3$ then $\ker {\overline{H}}= \langle e_{12},e_{13}, a e_{23} + b e_{14} \rangle$ for some $a,b \in k$. If $a=0$ or $b=0$ then $\ker {\overline{H}}$ is isotropic for $G$. Otherwise $\langle e_{12},e_{13} \rangle$ is the unique codimension 1 isotropic subspace. Either way, Step 5 directly applies Operation 1 or Operation 3, and we are done as before. We now suppose that $r \leqslant 2$ and divide into the following cases. - Suppose that $H_{36} \not\equiv 0 \pmod{\pi}$ and $H_{46} \not\equiv 0 \pmod{\pi}$. Since $r \leqslant 2$ we have ${\overline{H}}= \ell z_{34}$ for some linear form $\ell$. Since there is no integral change of coordinates taking $\ell$ to $z_{34}$ the only possible outcome of Step 4 is to directly apply Operation 2. This brings us to Case 3. - Suppose that $H_{36} \equiv 0 \pmod{\pi}$ and $H_{46} \not\equiv 0 \pmod{\pi}$. Then $v(H_{33})=1$, otherwise we would be in Case 2. We again have ${\overline{H}}= \ell z_{34}$ for some linear form $\ell$. Although there does now exist an integral change of coordinates taking $\ell$ to $z_{34}$, following this up with Operation 2 does not preserve that $v(H) \geqslant 0$. So again the only possible outcome of Step 4 is to directly apply Operation 2. This brings us to Case 3. - Suppose that $H_{36} \not\equiv 0 \pmod{\pi}$ and $H_{46} \equiv 0 \pmod{\pi}$. This is essentially the same as the previous case by duality. - Suppose that $H_{36} \equiv H_{46} \equiv 0 \pmod{\pi}$. Then ${\overline{H}}$ is a quadratic form in $z_{24}$ and $z_{34}$ only. If this factors over $k$ then either of the transformations in Step 4 brings us to Case 3. Otherwise we proceed to Step 5 which directly applies Operation 1 or Operation 3. As before, this brings us to Case 2 or its dual. ## Case 5: {#case-5 .unnumbered} $w=(0,0,1,2)$. Applying a block diagonal element of $\operatorname{GL}_4({\mathcal{O}_K})$ with blocks of sizes $2$, $1$ and $1$, we may suppose that $H_{26} \equiv 0 \pmod{\pi}$. Then $H_{36} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 2) and $H_{44},H_{45},H_{55}$ cannot all vanish mod $\pi$ (otherwise we would be in Case 3). Therefore $\langle e_{12},e_{13} \rangle \subset \ker {\overline{H}}$ and $r = 3$ or $4$. The only $3$-dimensional isotropic subspaces for $G$ that contain $\langle e_{12},e_{13} \rangle$ are $\langle e_{12},e_{13},e_{23} \rangle$ and $\langle e_{12},e_{13},e_{14} \rangle$. Therefore one of the transformations considered in Step 5 is to directly apply Operation 1 or Operation 3 (the latter only being a possibility if $H_{44} \equiv 0 \pmod{\pi}$). It follows that at the next iteration we have $r \leqslant 2$, and so are in Case 4 or earlier. ## Case 6: {#case-6 .unnumbered} $w=(0,1,1,3)$. 
Applying a block diagonal element of $\operatorname{GL}_4({\mathcal{O}_K})$ with blocks of sizes $1$, $2$ and $1$, we may suppose that $H_{15} \equiv 0 \pmod{\pi^2}$. We have $H_{44} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 4) and $H_{35} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 5). Therefore $\langle e_{12},e_{13} \rangle \subset \ker {\overline{H}}$ and $r = 3$ or $4$. Exactly as in Case 5 we find that at the next iteration we have $r \leqslant 2$, and so are in Case 4 or earlier. ## Case 7: {#case-7 .unnumbered} $w=(0,1,2,3)$. We have $H_{26} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 4), and $H_{35},H_{45},H_{55}$ cannot all vanish mod $\pi$ (otherwise we would be in Case 3). Therefore $r = 3$ or $4$, and $\langle e_{12} \rangle \subset \ker {\overline{H}}\subset \langle e_{12},e_{13},e_{23},e_{14} \rangle$. If $r = 4$ then $\ker {\overline{H}}= \langle e_{12}, a e_{13} + b e_{23} + c e_{14} \rangle$ for some $a,b,c \in k$. If $b, c \not=0$ then $\langle e_{12} \rangle$ is the unique codimension $1$ subspace of $\ker {\overline{H}}$ that is isotropic for $G$. Therefore, Step 5 directly applies Operation 2, which brings us to Case 4. If $b=0$ then $c \not= 0$, and by applying a block diagonal element of $\operatorname{GL}_4({\mathcal{O}_K})$ with blocks of sizes $1$, $1$ and $2$, we may suppose that $a = 0$. Then the $3$-dimensional isotropic subspaces for $G$ containing $\ker {\overline{H}}= \langle e_{12}, e_{14} \rangle$ are $\langle e_{12}, e_{13}, e_{14} \rangle$ and $\langle e_{12}, e_{14}, e_{24} \rangle$. Step 5 applies either $\operatorname{Diag}(1,\pi,\pi,\pi)$ or $\operatorname{Diag}(1,1,\pi,1)$ bringing us to Case 5 or Case 6. The case $c = 0$ is similar by duality. If $r=3$ then $\ker {\overline{H}}= \langle e_{12}, e_{23} + a e_{13}, e_{14} + b e_{13} \rangle$ for some $a,b \in k$. By applying a block diagonal element of $\operatorname{GL}_4({\mathcal{O}_K})$ with blocks of sizes $2$ and $2$, we may suppose that $a=b=0$. Then $H_{35} \equiv H_{36}\equiv H_{45}\equiv H_{46} \equiv 0 \pmod{\pi}$ and $H_{55} \not\equiv 0 \pmod{\pi}$. The codimension $1$ subspaces of $\ker {\overline{H}}= \langle e_{12}, e_{23}, e_{14} \rangle$ that are isotropic for $G$ are $\langle e_{12}, e_{23} \rangle$ and $\langle e_{12}, e_{14} \rangle$. The $3$-dimensional isotropic subspaces for $G$ containing one of these spaces are $$\langle e_{12}, e_{13}, e_{23} \rangle, \,\,\, \langle e_{12}, e_{13}, e_{14} \rangle, \,\,\, \langle e_{12}, e_{23}, e_{24} \rangle, \,\,\, \langle e_{12}, e_{14}, e_{24} \rangle.$$ The first two of these correspond to directly applying Operation 1 or Operation 3, which brings us to Case 5 or its dual. The last two correspond to transformations which fail to preserve that $v(H) \geqslant 0$, and so cannot be selected by Step 5. ## Case 8: {#case-8 .unnumbered} $w=(0,1,2,4)$. We have $H_{35} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 5), $H_{26} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 6), and $H_{44} \not\equiv 0 \pmod{\pi}$ (otherwise we would be in Case 7). Therefore $r=5$ and $\ker {\overline{H}}= \langle e_{12} \rangle$. Step 5 directly applies Operation 2 which brings us to Case 6. **Example 17**. We give three examples where Algorithm [Algorithm 8](#main_alg){reference-type="ref" reference="main_alg"} takes the maximum of $4$ iterations to give $v(H) > 0$. The first two examples start in Case 7, with $\operatorname{rank}{\overline{H}}= 3$ or $4$, and the final one starts in Case 8. 
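For the diagonal transformations used in these examples the change of variable is simple: with $P = \operatorname{Diag}(p^{w_1},\ldots,p^{w_4})$, the map $H \mapsto \frac{1}{\det P} H \circ \wedge^2 P$ rescales $z_{ab}$ by $p^{w_a + w_b}$ and divides by $p^{w_1+w_2+w_3+w_4}$. The following sketch (ours, with sympy) replays the first of the three chains displayed below and confirms that after four steps every coefficient is divisible by $p$.

```python
import sympy as sp

p = sp.symbols('p', positive=True)
z12, z13, z23, z14, z24, z34 = sp.symbols('z12 z13 z23 z14 z24 z34')
PAIRS = {z12: (0, 1), z13: (0, 2), z23: (1, 2), z14: (0, 3), z24: (1, 3), z34: (2, 3)}

def step(H, w):
    """Apply H |-> (1/det P) H∘∧²P with P = Diag(p**w[0], ..., p**w[3])."""
    subs = {v: p**(w[a] + w[b]) * v for v, (a, b) in PAIRS.items()}
    return sp.expand(H.subs(subs, simultaneous=True) / p**sum(w))

H = p**5*z12**2 + z13*z34 + p*z23**2 + p*z14**2 + z24**2
for w in [(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)]:
    H = step(H, w)
    print(w, "->", H)

# after the four steps every coefficient is divisible by p, i.e. v(H) > 0
assert sp.expand(H - p*(z12**2 + z13*z34 + z23**2 + z14**2 + p*z24**2)) == 0
```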
In the first two examples there are two choices on the first iteration. We made an arbitrary choice in each case, but in fact with the other choices the algorithm would still have taken $4$ iterations. Let $K = \mathbb Q$ and $v = v_p$ for any choice of prime number $p$. An arrow labelled $(w_1, \ldots, w_4)$ indicates that we replace $H$ by $\frac{1}{\det P} H \circ \wedge^2 P$ where $P = \operatorname{Diag}(p^{w_1}, \ldots, p^{w_4})$. $$\begin{aligned} p^5 z_{12}^2 + z_{13} z_{34} + p z_{23}^2 + p z_{14}^2 + z_{24}^2 & \stackrel{(0,0,0,1)}{\longrightarrow} p^4 z_{12}^2 + z_{13} z_{34} + z_{23}^2 + p^2 z_{14}^2 + p z_{24}^2 \\ & \stackrel{(0,0,1,0)}{\longrightarrow} p^3 z_{12}^2 + p z_{13} z_{34} + p z_{23}^2 + p z_{14}^2 + z_{24}^2 \\ & \stackrel{(0,1,0,1)}{\longrightarrow} p^3 z_{12}^2 + z_{13} z_{34} + p z_{23}^2 + p z_{14}^2 + p^2 z_{24}^2 \\ & \stackrel{(0,0,1,1)}{\longrightarrow} p( z_{12}^2 + z_{13} z_{34} + z_{23}^2 + z_{14}^2 + p z_{24}^2).\end{aligned}$$ $$\begin{aligned} p^5 z_{12}^2 + z_{13} z_{34} + p z_{23}^2 + z_{14} z_{24} & \stackrel{(0,0,0,1)}{\longrightarrow} p^4 z_{12}^2 + z_{13} z_{34} + z_{23}^2 + p z_{14} z_{24} \\ & \stackrel{(0,0,1,0)}{\longrightarrow} p^3 z_{12}^2 + p z_{13} z_{34} + p z_{23}^2 + z_{14} z_{24} \\ & \stackrel{(0,1,0,1)}{\longrightarrow} p^3 z_{12}^2 + z_{13} z_{34} + p z_{23}^2 + p z_{14} z_{24} \\ & \stackrel{(0,0,1,1)}{\longrightarrow} p( z_{12}^2 + z_{13} z_{34} + z_{23}^2 + z_{14} z_{24}). \end{aligned}$$ $$\begin{aligned} p^6 z_{12}^2 + z_{13} z_{34} + z_{23} z_{24} + z_{14}^2 & \stackrel{(0,0,1,1)}{\longrightarrow} p^4 z_{12}^2 + p z_{13} z_{34} + z_{23} z_{24} + z_{14}^2 \\ & \stackrel{(0,0,0,1)}{\longrightarrow} p^3 z_{12}^2 + p z_{13} z_{34} + z_{23} z_{24} + p z_{14}^2 \\ & \stackrel{(0,1,0,1)}{\longrightarrow} p^3 z_{12}^2 + z_{13} z_{34} + p z_{23} z_{24} + p z_{14}^2 \\ & \stackrel{(0,0,1,1)}{\longrightarrow} p( z_{12}^2 + z_{13} z_{34} + z_{23} z_{24} + z_{14}^2). \end{aligned}$$ 9 M. Bhargava and B.H. Gross, The average size of the 2-Selmer group of Jacobians of hyperelliptic curves having a rational Weierstrass point, in *Automorphic representations and $L$-functions*, D. Prasad, C. S. Rajan, A. Sankaranarayanan and J. Sengupta (eds.), 23--91, Tata Institute of Fundamental Research, Stud. Math. **22**, Mumbai, 2013. B.J. Birch and H.P.F. Swinnerton-Dyer, Notes on elliptic curves. I, *J. reine angew. Math.* **212** (1963), 7--25. W. Bosma, J. Cannon and C. Playoust, The Magma algebra system I: The user language, *J. Symb. Comb.* **24**, 235-265 (1997), <http://magma.maths.usyd.edu.au/magma/> J.W.S. Cassels and E.V. Flynn, *Prolegomena to a middlebrow arithmetic of curves of genus 2*, London Mathematical Society Lecture Note Series, **230**, Cambridge University Press, Cambridge, 1996. J.E. Cremona, T.A. Fisher, and M. Stoll, Minimisation and reduction of 2-,3- and 4-coverings of elliptic curves, *Algebra & Number Theory* **4** (2010), no. 6, 763--820. A.-S. Elsenhans and M. Stoll, *Minimization of hypersurfaces*, preprint, 2021,\ <https://arxiv.org/abs/2110.04625> T.A. Fisher, Minimisation and reduction of 5-coverings of elliptic curves, *Algebra & Number Theory* **7** (2013), no. 5, 1179--1205. T.A. Fisher and L. Radičević, Some minimisation algorithms in arithmetic invariant theory, *J. Théor. Nombres Bordeaux* **30** (2018), no. 3, 801--828. T.A. Fisher and J. Yan, *Computing the Cassels-Tate pairing on the 2-Selmer group of a genus 2 Jacobian*, preprint, 2023, <https://arxiv.org/abs/2306.06011> W. Fulton and J. 
Harris, *Representation theory. A first course*, Graduate Texts in Mathematics **129**, Springer-Verlag, New York, 1991. J. Kollár, Polynomials with integral coefficients, equivalent to a given polynomial, *Electron. Res. Announc. Amer. Math. Soc.* **3** (1997), 17--27. Q. Liu, *Computing minimal Weierstrass equations*, preprint, 2022,\ <https://arxiv.org/abs/2209.00469> A. Shankar and X. Wang, Rational points on hyperelliptic curves having a marked non-Weierstrass point, *Compos. Math.* **154** (2018), no. 1, 188--222.
arxiv_math
{ "id": "2309.06220", "title": "Minimisation of 2-coverings of genus 2 Jacobians", "authors": "Tom Fisher and Mengzhen Liu", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Consider the following two-person mixed strategy game of a probabilist against Nature with respect to the parameters $(f, \mathcal{B},\pi)$, where $f$ is a convex function satisfying certain regularity conditions, $\mathcal{B}$ is either the set $\{L_i\}_{i=1}^n$ or its convex hull with each $L_i$ being a Markov infinitesimal generator on a finite state space $\mathcal{X}$ and $\pi$ is a given positive discrete distribution on $\mathcal{X}$. The probabilist chooses a prior measure $\mu$ within the set of probability measures on $\mathcal{B}$ denoted by $\mathcal{P}(\mathcal{B})$ and picks a $L \in \mathcal{B}$ at random according to $\mu$, whereas Nature follows a pure strategy to select $M \in \mathcal{L}(\pi)$, the set of $\pi$-reversible Markov generators on $\mathcal{X}$. Nature pays an amount $D_f(M||L)$, the $f$-divergence from $L$ to $M$, to the probabilist. We prove that a mixed strategy Nash equilibrium always exists, as well as a minimax result of the form $$\inf_{M \in \mathcal{L}(\pi)} \sup_{\mu \in \mathcal{P}(\mathcal{B})} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL) = \sup_{\mu \in \mathcal{P}(\mathcal{B})} \inf_{M \in \mathcal{L}(\pi)} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL).$$ This also contrasts with the pure strategy version of the game where we show a Nash equilibrium may not exist. To find approximately a mixed strategy Nash equilibrium, we propose and develop a simple projected subgradient algorithm that provably converges with a rate of $\mathcal{O}(1/\sqrt{t})$, where $t$ is the number of iterations. In addition, we elucidate the relationships of Nash equilibrium with other seemingly disparate notions such as weighted information centroid, Chebyshev center and Bayes risk. This article generalizes the two-person game of a statistician against Nature developed in \[D. Haussler (1997). A general minimax result for relative entropy. IEEE Trans. Inform. Theory 43: 1276--1280; D. Haussler and M. Opper (1997). Mutual information, metric entropy and cumulative relative entropy risk. Ann. Statist. 25(6): 2451-2492; A.A. Gushchin and D.A. Zhdanov (2006). A minimax result for $f$-divergences. In: From Stochastic Calculus to Mathematical Finance. Springer, Berlin, Heidelberg.\], and highlights the powerful interplay and synergy between modern Markov chains theory and geometry, information theory, game theory, optimization and mathematical statistics. **AMS 2020 subject classifications**: 49J35, 60J27, 60J28, 62B10, 62C20, 90C47, 91A05, 91A68, 94A17, 94A29 **Keywords**: Markov chains; $f$-divergence; information geometry; information centroid; saddle point; Nash equilibrium; minimax theorem; Chebyshev center; Bayes risk; subgradient; algorithmic game theory address: - Department of Statistics and Data Science and Yale-NUS College, National University of Singapore, Singapore - RIKEN Center for AI Project, Tokyo, Japan author: - Michael C.H. Choi - Geoffrey Wolfer bibliography: - thesis.bib title: Markov chain entropy games and the geometry of their Nash equilibria --- # Introduction {#sec:intro} This paper revolves around a two-person game where a probabilist competes against Nature, in which the game itself is parameterized by $(f, \mathcal{B},\pi)$. These parameters will be introduced in a precise manner in subsequent sections, but we nonetheless briefly describe them in order to motivate the investigation of this game. 
Here, $f$ denotes a convex function adhering to specific regularity conditions, the set $\mathcal{B}$ is either the collection $\{L_i\}_{i=1}^n$ or its convex hull, with each $L_i$ being a Markov infinitesimal generator (see Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}) on a finite state space $\mathcal{X}$, and $\pi$ is a given discrete distribution on $\mathcal{X}$. We shall offer an original and in-depth study of two versions of the game, namely a **pure strategy** game and a **mixed strategy** game. In the latter setting, the probabilist's mixed strategy involves selecting a prior measure $\mu$ from the set of probability measures on $\mathcal{B}$, denoted by $\mathcal{P}(\mathcal{B})$, and subsequently choosing a random $L \in \mathcal{B}$ based on $\mu$. In contrast, Nature employs a pure strategy to opt for $M \in \mathcal{L}(\pi)$, the set of $\pi$-reversible Markov generators on $\mathcal{X}$. Nature suffers a loss of $D_f(M||L)$ to the probabilist, the $f$-divergence from $L$ to $M$. In the pure strategy version of the game, both probabilist and Nature are only allowed to choose deterministically from their respective strategy set. In the literature, similar games have been developed and analyzed in the context of probability measures. These games are known as the games of statistician against Nature in [@H97; @HO97; @GZ06], which appear naturally in statistical estimation. This statistician against Nature game serves as one major motivation and inspiration for us to develop the game of probabilist against Nature. Note that in the game of statistician against Nature, the statistician loses or pays to Nature an amount that depends on the information divergence of the estimator formed by the statistician and the ground truth chosen by Nature, while in the game of probabilist against Nature, the probabilist is awarded by Nature. We justify the interchange of roles of these two players in the context of reversibility and non-reversibility of Markov processes as follows: Symmetry is a fundamental concept that permeates many disciplines and plays an ubiquitous role across mathematics, natural sciences as well as arts and culture, see for instance the book of [@Weyl1952] for a historical account on the notion of symmetry. In view of this perspective, we postulate that Nature, as a player in this game, chooses only from the $\pi$-reversible strategy set. The fact that Nature selects reversible processes follows the current belief of physicists: for instance, the laws of quantum mechanics or of Newton are all time-reversible [@S1931]. In the context of Markov chain Monte Carlo, many classical algorithms in this area such as the Metropolis-Hastings algorithm, Barker proposal or the overdamped Langevin diffusion are naturally motivated by physics and indeed are reversible Markov processes with respect to the stationary distribution. On the other hand, in hope of designing Markov chains with improved convergence to equilibrium, many probabilists have resorted to non-reversible Markovian Monte Carlo algorithms with provably accelerated convergence rate or improved behaviour, see for example the work [@Hwang93; @Hwang05; @RR15; @Bie16; @DHN00; @BS16; @G98; @KS23] and the references therein. As a result we postulate that the probabilist, as a competitor against Nature in this game, picks from a strategy set $\mathcal{B}$ that contains in general an inventory of non-reversible chains. 
The gain of the probabilist, $D_f(M||L)$, can be broadly interpreted as an information-theoretic edge or speedup of non-reversibility over reversibility. While the above explanations justify the role of the probabilist and Nature in the entropy game, from a mathematical point of view however, these two roles can be safely interchanged, or one may wish to replace the term "probabilist" by "Player A" and "Nature" by "Player B" throughout the entire manuscript. Our discussion so far naturally spurs a number of interesting questions pertaining to these games: is there a Nash equilibrium [@KarlinPeres17; @MSZ20]? Is the equilibrium unique if it exists? Is there an efficient algorithm to find such an equilibrium and what is the algorithmic complexity? What is the so-called value of the game? Is there a sequential version of this game? While the number of questions may seem to be endless, in this paper we aim at addressing several fundamental questions regarding the existence and uniqueness of equilibrium, investigating situations where the equilibrium may or may not exist and developing an algorithm to approximately find an equilibrium efficiently. We shall also elaborate on connections with Bayes risk and Wald's decision game as in classical mathematical statistics. In addition to the above considerations, another major motivation of the investigation of the game of probabilist against Nature stems from an interesting question raised by Laurent Miclo, which concerns the possibility of providing game-theoretic interpretations to two Metropolis-type Markov generators, namely $P_{-\infty}$ and $P_{\infty}$, which are defined respectively to be, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} P_{-\infty}(x,y) &:= \min\{L(x,y),L_{\pi}(x,y)\}, \\ P_{\infty}(x,y) &:= \max\{L(x,y),L_{\pi}(x,y)\}, \end{aligned}$$ where the diagonal terms are such that the row sums are zero for all rows and $L_{\pi}$ is the $\pi$-dual of $L$ (see Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}). Note that $P_{-\infty}$ is the classical Metropolis-Hastings generator in continuous-time while $P_{\infty}$ has been investigated in a series of works in [@DM09; @Choi16; @CH18]. In view of the minimum or maximum that appears in these two generators, it seems natural to seek game-theoretic explanations of these two objects. Indeed, as shown in our Example [Example 4](#ex:1mix){reference-type="ref" reference="ex:1mix"} below, when the $f$-divergence $D_f$ is chosen to be the total variation distance, $P_{-\infty}$, $P_{\infty}$ and their convex combinations can be understood as (part of) a mixed strategy Nash equilibrium of a Markov chain entropy game of probabilist against Nature, thus offering a possible answer to the question of Miclo. We summarize our main contributions as follows: 1. **Introduce the two-person Markov chain entropy games of a probabilist against Nature.** This naturally generalizes the game of a statistician against Nature [@H97; @HO97; @GZ06], where instead of Markov generators the game in these references involves probability measures.
Given the interdisciplinary nature of this topic, the paper draws upon tools and notions to yield new developments in several subjects and uncover interesting connections across these areas, namely - **modern Markov chains theory** (reversiblizations [@M97; @Fill91; @Paulin15; @Choi16] and Markov chain Monte Carlo algorithms [@BD01; @DM09]) - **geometry** (Chebyshev center [@EBT08; @C20], saddle point and information centroid [@CW23], and information geometry of Markov chains [@WW21]) - **information theory** ($f$-divergences [@C72; @SV16], information geometry [@Amari16; @N22], channel capacity and source coding [@DL80]) - **game theory** (Nash equilibrium, value of the game and algorithmic game theory [@KarlinPeres17; @MSZ20]) - **optimization** (minimax or robust optimization, subgradient algorithm [@Beck2017]) - **mathematical statistics** (Bayes risk, Wald's decision theory and game [@H97; @HO97; @GZ06]) This natural, beautiful and powerful synthesis promises to open new avenues of research and applications in the aforementioned disciplines. 2. **Introduce the notion of weighted information centroid in the context of Markov generators and elucidate its key role in understanding the Nash equilibria of the entropy games.** We introduce a notion of weighted information centroid of a sequence of Markov chains, which naturally generalizes the notion of information centroid of Markov chains introduced by the authors in [@CW23]. Our analysis shows that the (mixed strategy) Nash equilibrium is intimately related to the notion of Chebyshev center, which can be interpreted as a specific weighted information centroid. This important observation allows us to analyze the existence and uniqueness of Nash equilibrium in the game. 3. **Propose and analyze a simple projected subgradient algorithm to find an approximate Nash equilibrium with a provable convergence rate of $\mathcal{O}(1/\sqrt{t})$.** A central question in game theory, in particular algorithmic game theory, lies in developing efficient algorithms for (approximate) computation of the Nash equilibrium. To this end, we propose a simple and easy-to-implement projected subgradient algorithm which utilizes the information geometry of the underlying Markov generators. The rest of this paper is organized as follows. In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we begin our paper by introducing various notions and in particular the notion of weighted information centroid, which plays a central role in our subsequent investigation. We proceed to present the main results of the paper, where we first address fundamental questions concerning Chebyshev center, weighted information centroids and minimax values in Section [3](#sec:main){reference-type="ref" reference="sec:main"}, followed by a pure strategy game-theoretic analysis in Section [3.1](#subsec:puregame){reference-type="ref" reference="subsec:puregame"}. The corresponding setting of mixed strategy game is presented in Section [3.2](#subsec:mixgame){reference-type="ref" reference="subsec:mixgame"}, and several simple yet illustrative examples are given in Section [3.1.1](#subsubex:pureex){reference-type="ref" reference="subsubex:pureex"} and Section [3.2.1](#subsubex:mixedex){reference-type="ref" reference="subsubex:mixedex"}. The design and analysis of a novel projected subgradient algorithm to find an approximate mixed strategy Nash equilibrium is stated in Section [3.3](#subsec:algo){reference-type="ref" reference="subsec:algo"}.
Finally, the proofs of the main results are incorporated in Section [4](#sec:proofsmain){reference-type="ref" reference="sec:proofsmain"}. ## Notations In this subsection we introduce some commonly used notations throughout the manuscript. For $a,b \in \mathbb{Z}$, we write $\llbracket a,b\rrbracket := \{a,a+1,\ldots,b-1,b\}$ and $\llbracket n \rrbracket := \llbracket 1,n \rrbracket$ for $n \in \mathbb{N}$. We denote by $\mathbf{0}$ to be the all-zeros matrix on the finite state space $\mathcal{X}$. For a given function $g(n)$, we say that it is $\mathcal{O}(h(n))$ if there exist constants $C > 0$ and $n_0$ such that $g(n) \leqslant C h(n)$ for all $n \geqslant n_0$. # Preliminaries {#sec:prelim} Consider a convex function $f : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ satisfying $f(1) = 0$, and a given positive discrete distribution $\pi$ on a finite state space $\mathcal{X}$. Define $\mathcal{L}$ as the set of Markov infinitesimal generators on $\mathcal{X}$. These generators correspond to $\mathcal{X} \times \mathcal{X}$ matrices having non-negative off-diagonal elements and zero row sums. A generator $L$ is said to be $\pi$-stationary if $\pi L = 0$. Moreover, $\mathcal{L}(\pi) \subset \mathcal{L}$ is the subset of $\pi$-reversible generators, where we recall that a generator $L$ is said to be $\pi$-reversible if it satisfies the detailed balance condition, that is, for all $x \neq y \in \mathcal{X}$, we have $\pi(x) L(x,y) = \pi(y) L(y,x).$ In view of [@JK14 Proposition 1.2], we define the $\pi$-dual of a generator $L \in \mathcal{L}$ to be $L_{\pi}$. For $x \neq y$, the off-diagonal elements of $L_{\pi}$ are given by $$L_{\pi}(x,y) = \frac{\pi(y)}{\pi(x)}L(y,x),$$ and the diagonal ones ensure zero row sums for all rows. In the special case when $L$ has $\pi$ as its unique stationary distribution, we then have $L_{\pi} = L^*$, the adjoint of $L$ in $\ell^2(\pi)$ or its time-reversal. Here the space $\ell^2(\pi)$ is the standard weighted $\ell^2$ Hilbert space endowed with the inner product $\langle \cdot, \cdot \rangle_{\pi}$ given by, for any functions $g,h: \mathcal{X}\rightarrow\mathbb{R}$, $$\begin{aligned} \langle g,h\rangle_\pi:=\sum_{x\in \mathcal{X}}g(x)h(x)\pi(x).\end{aligned}$$ Following the definition as in [@DM09], for a given target $\pi$ and Markov infinitesimal generators $M, L \in \mathcal{L}$, the $f$-divergence from $L$ to $M$ with respect to $\pi$ is defined as $$\label{def:fdivML} D_f(M || L) = \sum_{x \in \mathcal{X}} \pi(x) \sum_{y \in \mathcal{X}\setminus\{x\}} L(x,y) f\left(\frac{M(x,y)}{L(x,y)}\right),$$ where the convention of $0 f(a/0) := 0$ for $a \geqslant 0$ applies. If $f^*$ is the convex conjugate of $f$, defined by $f^*(t) = tf(1/t)$ for $t > 0$, then $$D_f(M || L) = D_{f^*}(L || M),$$ and $f^*(1) = 0$. When $f$ is self-conjugate, that is, $f^* = f$, then the $f$-divergence in [\[def:fdivML\]](#def:fdivML){reference-type="eqref" reference="def:fdivML"} is symmetric. For a general generator $L$ not necessarily with $\pi$ as its stationary distribution, we consider projecting $L$ onto $\mathcal{L}(\pi)$ under suitable $f$-divergence $D_f$. Following [@WW21; @CW23], the notions of $f$-projection and $f^*$-projection are defined as: $$\label{def:emprojection} M^f = M^f(L,\pi) := \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M || L), \quad M^{f^*} = M^{f^*}(L,\pi) = \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(L || M).$$ To understand our main results, we now introduce the notion of weighted information centroids of Markov chains. 
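Before doing so, we pause for a minimal numerical sketch (ours, in Python with NumPy, on a synthetic three-state example) that makes the objects defined above concrete: the $\pi$-dual $L_{\pi}$, the $\pi$-reversible generator $M = (L + L_{\pi})/2$, and the divergence $D_f(M||L)$ for the Kullback--Leibler-type choice $f(t) = t\ln t - t + 1$.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])                     # a positive target distribution
L = np.array([[-1.0, 0.7, 0.3],
              [ 0.2, -0.5, 0.3],
              [ 0.6, 0.4, -1.0]])                  # generator: off-diagonals >= 0, zero row sums

def dual(L, pi):
    """pi-dual: L_pi(x,y) = pi(y)/pi(x) * L(y,x) off the diagonal, with zero row sums."""
    Ld = L.T * pi[None, :] / pi[:, None]
    np.fill_diagonal(Ld, 0.0)
    np.fill_diagonal(Ld, -Ld.sum(axis=1))
    return Ld

def D_f(M, L, pi, f):
    """D_f(M||L) = sum_x pi(x) sum_{y != x} L(x,y) f(M(x,y)/L(x,y)), with 0*f(a/0) := 0."""
    val = 0.0
    for x in range(len(pi)):
        for y in range(len(pi)):
            if y != x and L[x, y] > 0:
                val += pi[x] * L[x, y] * f(M[x, y] / L[x, y])
    return val

f_kl = lambda t: t * np.log(t) - t + 1.0

L_pi = dual(L, pi)
M = (L + L_pi) / 2                                 # the P_1-reversiblization of L
assert np.allclose(pi[:, None] * M, (pi[:, None] * M).T)   # detailed balance w.r.t. pi
print("D_f(M||L) =", D_f(M, L, pi, f_kl), "  D_f(L||L) =", D_f(L, L, pi, f_kl))
```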
It generalizes the notion of (unweighted) information centroids of Markov chains in [@CW23]. Let $\mathbf{w} = (w_1,\ldots,w_n) \in \mathbb{R}^n_+$ be a weight vector in the probability simplex $\mathcal{S}_n$ of $n$ elements, that is, $$\begin{aligned} \label{def:probsim} \mathcal{S}_n := \bigg\{\mathbf{w} = (w_1,\ldots,w_n) \in \mathbb{R}^n_+; ~\sum_{i=1}^n w_i = 1\bigg\}.\end{aligned}$$ Note that we denote the simplex by $\mathcal{S}_n$ instead of $\mathcal{P}(\llbracket n \rrbracket)$ throughout the manuscript. Given a sequence of Markov generators $\{L_i\}_{i=1}^n$, where $L_i \in \mathcal{L}$ for $i \in \llbracket n \rrbracket$, we define the notions of $\mathbf{w}$-weighted $f$-projection centroid and $f^*$-projection centroid to be respectively $$\begin{aligned} M^{f}_n &= M^{f}_n(\mathbf{w},\{L_i\}_{i=1}^n,\pi) := \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \sum_{i=1}^n w_i D_f(M || L_i), \\ M^{f^*}_n &= M^{f^*}_n(\mathbf{w},\{L_i\}_{i=1}^n,\pi) = \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \sum_{i=1}^n w_i D_f(L_i || M).\end{aligned}$$ We consider two important special cases: first in the case with $n = 1$, the above notions reduce to $M_{1}^f = M^f$ and $M_{1}^{f^*} = M^{f^*}$ respectively as introduced in [\[def:emprojection\]](#def:emprojection){reference-type="eqref" reference="def:emprojection"}. In the second special case, let $n \in \mathbb{N}$ and $\{\mathbf{e}_i\}_{i=1}^n$ be the standard unit vectors. It is obvious to see that $M^f_n(\mathbf{e}_i,\{L_i\}_{i=1}^n,\pi) = M^f(L_i,\pi)$ and $M^{f^*}_n(\mathbf{e}_i,\{L_i\}_{i=1}^n,\pi) = M^{f^*}(L_i,\pi)$. Our first main result in this Section establishes existence and uniqueness of $f$ and $f^*$-projection centroids under strict convexity of $f$, and its proof is given in Section [4.1](#subsubsec:pfexistunique){reference-type="ref" reference="subsubsec:pfexistunique"}. **Theorem 1** (Existence and uniqueness of weighted $f$ and $f^*$-projection centroids). *Suppose we are given a sequence of Markov generators $\{L_i\}_{i=1}^n$, where $L_i \in \mathcal{L}$ for $i \in \llbracket n \rrbracket$, a $f$-divergence $D_f$ generated by a strictly convex $f$ which is assumed to have a derivative at $1$ given by $f^{\prime}(1) = 0$, and a weight vector $\mathbf{w} \in \mathcal{S}_n$. We further assume that there exists at least one $L_i \neq \mathbf{0}$ and at least one $w_i > 0$ when $L_i \neq \mathbf{0}$. A weighted $f$-projection of $D_f$ (resp. $f^*$-projection of $D_{f^*}$) that minimizes the mapping $$\mathcal{L}(\pi) \ni M \mapsto \sum_{i=1}^n w_i D_f(M||L_i) \quad \left(\textrm{resp.}~=\sum_{i=1}^n w_i D_{f^*}(L_i||M)\right)$$ exists and is unique, that we denote by $M^f_{n}$.* *Remark 1*. By replacing $f$ by $f^*$ in the main result of Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, we can analogously define $M^{f^*}_n$. Precisely, a weighted $f^*$-projection of $D_f$ (resp. $f$-projection of $D_{f^*}$) that minimizes the mapping $$\mathcal{L}(\pi) \ni M \mapsto \sum_{i=1}^n w_i D_f(L_i||M) \quad \left(\textrm{resp.}~=\sum_{i=1}^n w_iD_{f^*}(M||L_i)\right)$$ exists and is unique, that we denote by $M^{f^*}_{n}$. *Remark 2* (On the convexity of $f$). We emphasize that throughout this manuscript, unless otherwise specified, $f$ is assumed to be a convex function rather than a strictly convex function. This yields the following interesting consequences. Suppose that $f$ is a convex function with $f(s) = 0$ for some $s > 0, s \neq 1$. 
For a given $L \in \mathcal{L}$, we thus have $$D_f(sL||L) = 0,$$ and hence the two generators $sL$ and $L$ are indistinguishable with respect to $D_f$. From a transition semigroup perspective, this is justifiable since $P^t := e^{sLt}$, the transition semigroup generated by $sL$, is merely a time-change of $Q^t := e^{Lt}$, the transition semigroup generated by $L$, for $t \geqslant 0$. *Remark 3*. We discuss the importance of the additional assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"} in this Remark. In the first case, if $L_i = \mathbf{0}$ for all $i \in \llbracket n \rrbracket$, then obviously for any $M \in \mathcal{L}(\pi)$ we have $D_f(M||L_i) = 0$ and hence $$\sum_{i=1}^n w_i D_f(M||L_i) = 0.$$ As such, this shows the existence of weighted $f$-projection centroids and they are not unique even if $f$ is strictly convex. In the second case, suppose that there exists at least one $L_i \neq \mathbf{0}$ and $w_i = 0$ whenever $L_i \neq \mathbf{0}$. This again leads to, for any $M \in \mathcal{L}(\pi)$, $$\sum_{i=1}^n w_i D_f(M||L_i) = 0.$$ We thus exclude the above two degenerate cases in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}. *Remark 4*. We stress that in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, the given positive distribution $\pi$ is arbitrary and at the same time the generators $L_i$ need not admit stationary distribution $\pi$ or even be irreducible in the first place. The purpose of Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"} is to consider the joint projection of these generators $L_i$ onto the space $\mathcal{L}(\pi)$. In the second result of this Section, we explicitly calculate the weighted $f$ and $f^*$-projection centroids $M^f_{n}$ and $M^{f^*}_{n}$ under some common $f$-divergences. The proof is deferred to Section [4.2](#subsubsec:examplecentroid){reference-type="ref" reference="subsubsec:examplecentroid"}. For these common choices of $f$, as shown in [@CW23] the power mean reversibilizations $P_p$ appear naturally as the corresponding $f$ or $f^*$-projection. For $x \neq y \in \mathcal{X}$ and $p \in \mathbb{R}\setminus\{0\}$, the off-diagonal entries of $P_p$ are given by $$\label{def:Pp} P_{p}(x,y) := \left(\frac{L(x,y)^{p} + L_{\pi}(x,y)^{p}}{2}\right)^{1/p},$$ where diagonal entries ensure zero row sums. The limiting cases are defined to be $$\begin{aligned} P_0(x,y) &:= \lim_{p \to 0} P_{p}(x,y) = \sqrt{L(x,y)L_{\pi}(x,y)}, \nonumber \\ P_{\infty}(x,y) &:= \lim_{p \to \infty} P_{p}(x,y) = \max\{L(x,y),L_{\pi}(x,y)\}, \label{eq:Pinfty}\\ P_{-\infty}(x,y) &:= \lim_{p \to -\infty} P_{p}(x,y) = \min\{L(x,y),L_{\pi}(x,y)\}. \label{eq:P-infty}\end{aligned}$$ Note that in the special case when the weights are given by $w_i = 1/n$ for all $i \in \llbracket n \rrbracket$, we recover the results in [@CW23], as expected. **Theorem 2** (Examples of weighted $f$ and $f^*$-projection centroids). *Given a sequence of Markov generators $(L_i)_{i=1}^n$, where $L_i \in \mathcal{L}$ for $i \in \llbracket n \rrbracket$ and a weight vector $\mathbf{w} \in \mathcal{S}_n$. We further assume that there exists at least one $L_i \neq \mathbf{0}$ and at least one $w_i > 0$ when $L_i \neq \mathbf{0}$. Recall the power mean reversiblizations $P_p$ in [\[def:Pp\]](#def:Pp){reference-type="eqref" reference="def:Pp"}.* 1. 
*(weighted $f$ and $f^*$-projection centroids under $\alpha$-divergence)[\[it:alphacentroid\]]{#it:alphacentroid label="it:alphacentroid"} Let $f(t) = \frac{t^{\alpha} - \alpha t - (1-\alpha)}{\alpha(\alpha-1)}$ for $\alpha \in \mathbb{R}\backslash\{0,1\}$. The unique $f$-projection centroid $M^{f}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} M^{f}_n(x,y) =\left(\sum_{i=1}^n w_i \left( M^{f}(L_i,\pi)(x,y)\right)^{1-\alpha}\right)^{1/(1-\alpha)}, \end{aligned}$$ while the unique $f^*$-projection centroid $M^{f^*}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} M^{f^*}_n(x,y) =\left(\sum_{i=1}^n w_i\left(M^{f^*}(L_i,\pi)(x,y)\right)^{\alpha}\right)^{1/\alpha}, \end{aligned}$$ where $M^{f}, M^{f^*}$ are respectively the $P_{1-\alpha}, P_{\alpha}$-reversiblization.* 2. *(weighted $f$ and $f^*$-projection centroids under squared Hellinger distance)[\[it:hellingercentroid\]]{#it:hellingercentroid label="it:hellingercentroid"} Let $f(t) = (\sqrt{t}-1)^2$. The unique $f$-projection centroid $M^{f}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} \label{eq:cephellinger} M^{f}_n(x,y) =\left(\sum_{i=1}^n w_i\sqrt{M^{f}(L_i,\pi)(x,y)}\right)^2, \end{aligned}$$ while the unique $f^*$-projection centroid $M^{f^*}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} M^{f^*}_n(x,y) =\left(\sum_{i=1}^n w_i\sqrt{M^{f^*}(L_i,\pi)(x,y)}\right)^2, \end{aligned}$$ where $M^{f^*} = M^f$ is the $P_{1/2}$-reversiblization.* 3. *($f$ and $f^*$-projection centroids under Kullback-Leibler divergence) Let $f(t) = t \ln t - t + 1$. The unique $f$-projection centroid $M^{f}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} M^{f}_n(x,y) = \prod_{i=1}^n \left(M^{f}(L_i,\pi)(x,y)\right)^{w_i}, \end{aligned}$$ where $0^0 := 0$ in the expression above, while the unique $f^*$-projection centroid $M^{f^*}_n$ is given by, for $x \neq y \in \mathcal{X}$, $$\begin{aligned} M^{f^*}_n(x,y) = \sum_{i=1}^n w_i M^{f^*}(L_i,\pi)(x,y), \end{aligned}$$ where $M^{f}, M^{f^*}$ are respectively the $P_{0}, P_{1}$-reversiblization.* *Remark 5*. Item [\[it:hellingercentroid\]](#it:hellingercentroid){reference-type="eqref" reference="it:hellingercentroid"} can be considered as a special case of item [\[it:alphacentroid\]](#it:alphacentroid){reference-type="eqref" reference="it:alphacentroid"} by taking $\alpha = 1/2$. Another important special case of $\alpha$-divergence is the $\chi^2$-divergence, where we take $\alpha = 2$. # Main results {#sec:main} In this Section, we gradually introduce and state the main results of this paper. We first state two well-known and useful max-min inequalities for $D_f$ and $D_{f^*}$. Let $\mathcal{A}, \mathcal{B} \subseteq \mathcal{L}$. We have $$\begin{aligned} \sup_{L \in \mathcal{B}} \inf_{M \in \mathcal{A}} D_f(M||L) &\leqslant\inf_{M \in \mathcal{A}} \sup_{L \in \mathcal{B}} D_f(M||L), \label{eq:minimaxineq1}\\ \sup_{L \in \mathcal{B}} \inf_{M \in \mathcal{A}} D_f(L||M) &\leqslant\inf_{M \in \mathcal{A}} \sup_{L \in \mathcal{B}} D_f(L||M). \label{eq:minimaxineq2}\end{aligned}$$ In the following, we are primarily interested in projection onto the space $\mathcal{L}(\pi)$, and hence we shall take $\mathcal{A} = \mathcal{L}(\pi)$. We are ready to define various minimax and maximin values: **Definition 1** (Minimax and maximin values, saddle point property and saddle point). Let $\mathcal{B} \subseteq \mathcal{L}$. 
Define $$\begin{aligned} \overline{v}^f = \overline{v}^f(\mathcal{B},\pi) &:= \inf_{M \in \mathcal{L}(\pi)} \sup_{L \in \mathcal{B}} D_f(M||L), \\ \underline{v}^f = \underline{v}^f(\mathcal{B},\pi) &:= \sup_{L \in \mathcal{B}} \inf_{M \in \mathcal{L}(\pi)} D_f(M||L). \end{aligned}$$ We say that $\overline{v}^f$ is the minimax value with respect to $(f,\mathcal{B},\pi)$, and similarly $\underline{v}^f$ is the maximin value with respect to $(f,\mathcal{B},\pi)$. In view of [\[eq:minimaxineq1\]](#eq:minimaxineq1){reference-type="eqref" reference="eq:minimaxineq1"} and [\[eq:minimaxineq2\]](#eq:minimaxineq2){reference-type="eqref" reference="eq:minimaxineq2"}, we have $$\begin{aligned} \overline{v}^f \geqslant\underline{v}^f, \quad \overline{v}^{f^*} \geqslant\underline{v}^{f^*}. \end{aligned}$$ If equality holds, that is, $$\overline{v}^f = \underline{v}^f,$$ we say that the saddle point property is satisfied with respect to $(f,\mathcal{B},\pi)$. In this case, if the pair $(M^f,L^f) \in \mathcal{L}(\pi) \times \mathcal{L}$ attains the optimal value, that is, $$\overline{v}^f = D_f(M^f||L^f),$$ then we say that $(M^f,L^f)$ is a saddle point with respect to $(f,\mathcal{B},\pi)$. As mentioned in the introduction of the manuscript, we assume that we are given a set of Markov generators $\{L_i\}_{i=1}^n$, where $L_i \in \mathcal{L}$ for $i \in \llbracket n \rrbracket$, and in the sequel we shall investigate either $\mathcal{B} = \{L_i\}_{i=1}^n$ or the convex hull of $\{L_i\}_{i=1}^n$. Consider the minimax problem $$\begin{aligned} \label{problem:minimax} \inf_{M \in \mathcal{L}(\pi)} \max_{i \in \llbracket n \rrbracket} D_f(M||L_i).\end{aligned}$$ As $M \mapsto D_f(M||L_i)$ is convex for each fixed $L_i$ and the pointwise maximum of convex functions preserves convexity [@Boyd2004 Section $3.2.3$], the mapping $M \mapsto \max_{i \in \llbracket n \rrbracket} D_f(M||L_i)$ is thus convex. As such, the outer minimization is a convex minimization problem over the convex set $\mathcal{L}(\pi)$. Inspired by the reformulation of an analogous minimax problem in the context of probability measures in [@C20 equation $(3)$], the minimax problem [\[problem:minimax\]](#problem:minimax){reference-type="eqref" reference="problem:minimax"} can be equivalently cast as the following constrained convex minimization: $$\begin{aligned} \label{problem:minimax2} \begin{aligned} & \underset{M \in \mathcal{L}(\pi), r}{\min} & & r \\ & \text{s.t.} & & D_f(M||L_i) \leqslant r, ~\textrm{for all}\, i \in \llbracket n \rrbracket. \end{aligned}\end{aligned}$$ This problem can be interpreted geometrically as the Chebyshev center problem in the context of Markov chains. For a given $M \in \mathcal{L}(\pi)$ and $r \geqslant 0$, the set $\{L \in \mathcal{L};~ D_f(M||L) \leqslant r\}$ can be understood as the ball of Markov generators within radius $r$ of the center $M$. The constraint in [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} thus entails that this ball encloses the set $\{L_i\}_{i=1}^n$. As the minimization is with respect to the radius $r$, a so-called Chebyshev center is a center of the minimum radius ball that contains $\{L_i\}_{i=1}^n$, and its optimal value is referred to as the Chebyshev radius. Precisely, we define the Chebyshev center and radius as follows: **Definition 2** (Chebyshev center and radius). 
Consider the minimax problem [\[problem:minimax\]](#problem:minimax){reference-type="eqref" reference="problem:minimax"} and its reformulation [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"}. An optimizer $$\mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \max_{i \in \llbracket n \rrbracket} D_f(M||L_i)$$ is called a Chebyshev center with respect to $(f,\{L_i\}_{i=1}^n,\pi)$, and the minimax value $\overline{v}^f$ is called the Chebyshev radius with respect to $(f,\{L_i\}_{i=1}^n,\pi)$. For the problem [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"}, we denote the Lagrangian function $\mathsf{L}: \mathbb{R}_+ \times \mathcal{L}(\pi) \times \mathbb{R}_+^n \to \mathbb{R}$ to be $$\mathsf{L}(r,M,\mathbf{w}) := r + \sum_{i=1}^n w_i(D_f(M||L_i) - r),$$ where $\mathbf{w}$ is the associated Lagrange multiplier. Differentiating $\mathsf{L}$ with respect to $r$ and setting it to zero gives $\mathbf{w}\in \mathcal{S}_n$, the probability simplex that we introduce in [\[def:probsim\]](#def:probsim){reference-type="eqref" reference="def:probsim"}. The dual problem of [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} is now written as $$\begin{aligned} \max_{\mathbf{w} \in \mathbb{R}_+^n} \min_{r \geqslant 0, M \in \mathcal{L}(\pi)} \mathsf{L}(r,M,\mathbf{w}) &= \max_{\mathbf{w} \in \mathcal{S}_n} \min_{M \in \mathcal{L}(\pi)} \sum_{i=1}^n w_i D_f(M||L_i) \nonumber \\ &= \max_{\mathbf{w} \in \mathcal{S}_n} \sum_{i=1}^n w_i D_f(M^f_n||L_i), \label{problem:minimax2dual}\end{aligned}$$ where we recall that $M^f_n = M^{f}_n(\mathbf{w},\{L_i\}_{i=1}^n,\pi)$ is the $\mathbf{w}$-weighted information centroid as in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"} in the second equality. Note that by weak duality we thus have $\overline{v}^f \geqslant\max_{\mathbf{w} \in \mathcal{S}_n} \sum_{i=1}^n w_i D_f(M^f_n||L_i)$. Our main result below shows that in fact the strong duality holds and connects various notions introduced earlier on. The proof is stated in Section [4.3](#subsec:pfmainpure){reference-type="ref" reference="subsec:pfmainpure"}. **Theorem 3**. 1. *(Strong duality holds for [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"})[\[it:sd\]]{#it:sd label="it:sd"} The strong duality holds for [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} and there exists $\mathbf{w}^f = \mathbf{w}^f(\{L_i\}_{i=1}^n,\pi) = (w^f_i)_{i=1}^n \in \mathcal{S}_n$ such that the optimal value for the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} is attained at $\mathbf{w}^f$, that is, $$\overline{v}^f = \sum_{i=1}^n w^f_i D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i).$$ The primal problem [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} optimal value is attained at $(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi),\overline{v}^f)$, and the optimal value of the problem [\[problem:minimax\]](#problem:minimax){reference-type="eqref" reference="problem:minimax"} is attained at $(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi),j)$, where $j \in \llbracket n \rrbracket$ is such that $w^f_j > 0$.* 2. 
*(Chebyshev center is a weighted information centroid $M^f_n$)[\[it:ccic\]]{#it:ccic label="it:ccic"} A given pair $(M,r) \in \mathcal{L}(\pi) \times \mathbb{R}_+$ minimizes the primal problem [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} and a given $\mathbf{w} \in \mathcal{S}_n$ maximizes the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} if and only if they satisfy the following two conditions:* 1. *(Complementary slackness) For $i \in \llbracket n \rrbracket$, we have $$\begin{aligned} D_f(M||L_i) \begin{cases} = r & \text{ if } w_i > 0,\\ \leqslant r & \text{ if } w_i = 0. \end{cases} \end{aligned}$$ Furthermore this implies $r = \overline{v}^f$.* 2. *$M = M^{f}_n(\mathbf{w},\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \mathsf{L}(r,M,\mathbf{w})$.* 3. *(Concavity of the Lagrangian dual)[\[it:cl\]]{#it:cl label="it:cl"} The mapping $$\mathcal{S}_n \ni \mathbf{w} = (w_i)_{i=1}^n \mapsto \sum_{i=1}^n w_i D_f(M^{f}_n(\mathbf{w},\{L_i\}_{i=1}^n,\pi)||L_i)$$ is concave.* 4. *(Uniqueness of Chebyshev center under strict convexity of $f$)[\[it:uniquecc\]]{#it:uniquecc label="it:uniquecc"} Suppose that the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}. If both $\mathbf{w}_1, \mathbf{w}_2 \in \mathcal{S}_n$ maximize the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}, we have $$M^{f}_n(\mathbf{w}_1,\{L_i\}_{i=1}^n,\pi) = M^{f}_n(\mathbf{w}_2,\{L_i\}_{i=1}^n,\pi).$$ In other words, the Chebyshev center is unique.* 5. *(Characterization of $\overline{v}^f = \underline{v}^f$)[\[it:vfequals\]]{#it:vfequals label="it:vfequals"} The saddle point property with respect to $(f,\{L_i\}_{i=1}^n,\pi)$ (Definition [Definition 1](#def:minimaxvalues){reference-type="ref" reference="def:minimaxvalues"}) holds, that is, $$\overline{v}^f = \underline{v}^f$$ if and only if there exists $j \in \llbracket n \rrbracket$ and $\mathbf{w}^f = (w^f_i)_{i=1}^n$ a maximizer of the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} with $w^f_j > 0$ such that $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_j). \end{aligned}$$ In particular, if the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, then the above statement is equivalent to $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) = M^f(L_j,\pi). \end{aligned}$$* 6. *(Saddle point)[\[it:saddlept\]]{#it:saddlept label="it:saddlept"} If the saddle point property with respect to $(f,\{L_i\}_{i=1}^n,\pi)$ holds, then $$(M^f(L_l,\pi), L_l)$$ is a saddle point with respect to $(f,\{L_i\}_{i=1}^n,\pi)$, where $l = \mathop{\mathrm{arg\,max}}_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i)$.* *Remark 6*. 
The concavity of the Lagrangian dual in item [\[it:cl\]](#it:cl){reference-type="eqref" reference="it:cl"} plays an important role in developing a projected subgradient algorithm for solving [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} in Section [3.3](#subsec:algo){reference-type="ref" reference="subsec:algo"}. In Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}, we have been investigating the case where $\mathcal{B} = \{L_i\}_{i=1}^n$ in Definition [Definition 1](#def:minimaxvalues){reference-type="ref" reference="def:minimaxvalues"}, which can in fact be generalized to the convex hull of these generators. Precisely, we shall take $$\mathcal{B} = \mathrm{conv}((L_i)_{i=1}^n) := \bigg\{\sum_{i=1}^n \alpha_i L_i;~ (\alpha_i)_{i=1}^n \in \mathcal{S}_n, L_i \in \mathcal{L} \text{ for all } i \in \llbracket n \rrbracket \bigg\}$$ and consider the minimax problem of the form $$\begin{aligned} \label{problem:minimaxconvexh} \inf_{M \in \mathcal{L}(\pi)} \sup_{L \in \mathrm{conv}((L_i)_{i=1}^n)} D_f(M||L) = \overline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi).\end{aligned}$$ An important family of Markov generators that can be written as a convex hull is the family of continuized doubly stochastic Markov generators that we denote by $\mathcal{D}$. We say that $L \in \mathcal{D} \subseteq \mathcal{L}$ if all the row and column sums of $L$ are $0$ and $L = P - I$ where $P$ is a Markov matrix and $I$ is the identity matrix on $\mathcal{X}$, and it can be shown that the discrete uniform distribution on $\mathcal{X}$ is the stationary distribution of such $L$. Letting $P_i$, for $i \in \llbracket |\mathcal{X}|! \rrbracket$, denote the permutation matrices on $\mathcal{X}$, by the Birkhoff-von Neumann theorem [@HJ13 Theorem $8.7.2$] we have $$\begin{aligned} \label{eq:doublysto} \mathcal{D} = \mathrm{conv}((P_i-I)_{i=1}^{|\mathcal{X}|!}).\end{aligned}$$ Another family of Markov generators that can be cast under this framework is the set of uniformizable (see [@K79 Chapter $2$]) and $\mu$-reversible generators, which are used in finite truncations of countably infinite Markov chains in queueing theory [@V18]; see Example [Example 7](#ex:uniformmix){reference-type="ref" reference="ex:uniformmix"} below. Our next proposition shows that the analysis in this case of a convex hull of generators can be reduced to the setup of Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}. The proof can be found in Section [4.4](#subsubsec:pfreducefinite){reference-type="ref" reference="subsubsec:pfreducefinite"}. **Proposition 1** (Reduction to the finite case). *$$\begin{aligned} \overline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi) &= \overline{v}^f(\{L_i\}_{i=1}^n,\pi), \\ \underline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi) &= \underline{v}^f(\{L_i\}_{i=1}^n,\pi). \end{aligned}$$* The next result collects and combines both Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} and Proposition [Proposition 1](#prop:reducefinite){reference-type="ref" reference="prop:reducefinite"}. Its proof is omitted, as it simply restates these two results in a single Corollary. **Corollary 1**. 1. 
*(Attainment of the minimax problem [\[problem:minimaxconvexh\]](#problem:minimaxconvexh){reference-type="eqref" reference="problem:minimaxconvexh"}) There exists $\mathbf{w}^f = \mathbf{w}^f(\{L_i\}_{i=1}^n,\pi) = (w^f_i)_{i=1}^n \in \mathcal{S}_n$ such that the optimal value of the problem [\[problem:minimaxconvexh\]](#problem:minimaxconvexh){reference-type="eqref" reference="problem:minimaxconvexh"} is attained at $(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi),L_j)$, where $j \in \llbracket n \rrbracket$ is such that $w^f_j > 0$.* 2. *(Characterization of $\overline{v}^f = \underline{v}^f$) The saddle point property with respect to $(f,\mathrm{conv}((L_i)_{i=1}^n),\pi)$ (Definition [Definition 1](#def:minimaxvalues){reference-type="ref" reference="def:minimaxvalues"}) holds, that is, $$\overline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi) = \underline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi)$$ if and only if there exists $j \in \llbracket n \rrbracket$ and $\mathbf{w}^f = (w^f_i)_{i=1}^n$ a maximizer of the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} with $w^f_j > 0$ such that $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_j). \end{aligned}$$ In particular, if the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, then the above statement is equivalent to $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) = M^f(L_j,\pi). \end{aligned}$$* 3. *(Saddle point) If the saddle point property with respect to $(f,\mathrm{conv}((L_i)_{i=1}^n),\pi)$ holds, then $$(M^f(L_l,\pi), L_l)$$ is a saddle point with respect to $(f,\mathrm{conv}((L_i)_{i=1}^n),\pi)$, where $l = \mathop{\mathrm{arg\,max}}_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i)$.* *In particular, the above results hold for the set of continuized doubly stochastic Markov generators $\mathcal{D}$ [\[eq:doublysto\]](#eq:doublysto){reference-type="eqref" reference="eq:doublysto"} with $L_i = P_i - I$ and $n = |\mathcal{X}|!$.* ## A pure strategy game-theoretic interpretation {#subsec:puregame} In this subsection, we provide a game-theoretic interpretation of the max-min inequalities and the saddle point property (Definition [Definition 1](#def:minimaxvalues){reference-type="ref" reference="def:minimaxvalues"}) and introduce various game-theoretic notions in this context. Much of the exposition of game theory in this section is drawn from content in [@Boyd2004 Section $5.2.5$ and $5.4.3$] and [@KarlinPeres17 Chapter $2$]. Consider the following (two-person non-cooperative zero-sum pure strategy) game of a probabilist against Nature, with respect to the parameters $(f, \mathcal{B},\pi)$. If Nature chooses $M$ from $\mathcal{L}(\pi)$, the **strategy set of Nature**, while the probabilist chooses $L$ from $\mathcal{B}$, the **strategy set of the probabilist**, then Nature pays an amount $D_f(M||L)$ to the probabilist. As a result, Nature wants to minimize $D_f$ while the probabilist seeks to maximize $D_f$. The quantity $D_f(M||L)$ is known as the **payoff function** of the game, and it is said to be a **pure strategy** game as both players are only allowed to choose deterministically from their respective strategy sets. On the other hand, a game is said to be a **mixed strategy** game if the players are allowed to choose randomly within their respective strategy sets. 
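To make the payoff function concrete, the following minimal numerical sketch (an illustration only, not part of the original development) evaluates the pure strategy payoff $D_f(M||L)$ for a toy three-state chain under the $\alpha$-divergence with $\alpha = 2$, writing $D_f$ as an off-diagonal sum weighted by $\pi$, consistent with the expression used in the proof of Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}; the distribution $\pi$ and the generators below are arbitrary illustrative choices.

```python
import numpy as np

ALPHA = 2.0  # illustrative choice; f(t) = (t^2 - 2t + 1)/2 corresponds to the chi-square case alpha = 2

def f_alpha(t, a=ALPHA):
    return (t ** a - a * t - (1.0 - a)) / (a * (a - 1.0))

def D_f(M, L, pi, f=f_alpha):
    # D_f(M||L) = sum_{x != y} pi(x) L(x,y) f(M(x,y)/L(x,y)); off-diagonal entries only.
    # Pairs with L(x,y) = 0 are simply skipped in this sketch.
    n = len(pi)
    return sum(pi[x] * L[x, y] * f(M[x, y] / L[x, y])
               for x in range(n) for y in range(n) if x != y and L[x, y] > 0)

def fill_diagonal_to_generator(off_diag):
    # Set the diagonal so that each row of the generator sums to zero.
    G = np.array(off_diag, dtype=float)
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

pi = np.array([0.5, 0.3, 0.2])
# A pi-reversible M (Nature's choice): pi(x) M(x,y) = pi(y) M(y,x) for x != y.
M = fill_diagonal_to_generator([[0.0, 0.6, 0.4], [1.0, 0.0, 0.5], [1.0, 0.75, 0.0]])
# The probabilist's strategy set B = {L1, L2} (arbitrary generators).
L1 = fill_diagonal_to_generator([[0.0, 0.9, 0.1], [0.4, 0.0, 0.6], [0.7, 0.3, 0.0]])
L2 = fill_diagonal_to_generator([[0.0, 0.2, 0.8], [0.5, 0.0, 0.5], [0.3, 0.7, 0.0]])

payoffs = [D_f(M, L, pi) for L in (L1, L2)]
# Knowing M, the probabilist best-responds by maximizing the payoff over B.
print("payoffs:", payoffs, "best response: L%d" % (int(np.argmax(payoffs)) + 1))
```

Knowing Nature's choice $M$, the probabilist simply best-responds by picking the generator in $\mathcal{B}$ attaining the largest payoff, which is exactly the inner maximization appearing in the minimax value $\overline{v}^f$.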
We shall defer to Section [3.2](#subsec:mixgame){reference-type="ref" reference="subsec:mixgame"} to discuss the setting of mixed strategy game. In view of Section [3](#sec:main){reference-type="ref" reference="sec:main"}, we shall consider $\mathcal{B}$, the strategy set of the probabilist, to be either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$. In the sequel, we shall call this a two-person pure strategy game of the probabilist against Nature. Suppose that Nature chooses its strategy $M$ first, followed by the probabilist, who has the knowledge of the choice of Nature. The probabilist thus seeks to choose $L \in \mathcal{B}$ to maximize the payoff $D_f(M||L)$. The resulting payoff is $\sup_{L \in \mathcal{B}} D_f(M||L)$, which depends on $M$, the choice of Nature. Nature assumes that the probabilist will choose this strategy, and hence Nature seeks to minimize the worst-case payoff, that is, Nature chooses a strategy in the set $$\mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \sup_{L \in \mathcal{B}} D_f(M||L),$$ known as a **minimax strategy of Nature**, which yields $$\overline{v}^f = \overline{v}^f(\mathcal{B},\pi) = \inf_{M \in \mathcal{L}(\pi)} \sup_{L \in \mathcal{B}} D_f(M||L),$$ the payoff from Nature to the probabilist. $\overline{v}^f$ is also known as the **minimax value** of the game. Now, suppose that the order of play is reversed: the probabilist chooses $L \in \mathcal{B}$ first, and Nature, with the knowledge of $L$, picks from the strategy set $\mathcal{L}(\pi)$. Following a similar argument as before, if both players follow the optimal strategy, the probabilist chooses a strategy in the set $$\mathop{\mathrm{arg\,sup}}_{L \in \mathcal{B}} \inf_{M \in \mathcal{L}(\pi)} D_f(M||L),$$ known as a **maximin strategy of the probabilist**, which yields $$\underline{v}^f = \underline{v}^f(\mathcal{B},\pi) = \sup_{L \in \mathcal{B}} \inf_{M \in \mathcal{L}(\pi)} D_f(M||L),$$ the payoff from Nature to the probabilist. $\underline{v}^f$ is known as the **maximin value** of the game. The max-min inequalities in [\[eq:minimaxineq1\]](#eq:minimaxineq1){reference-type="eqref" reference="eq:minimaxineq1"} and [\[eq:minimaxineq2\]](#eq:minimaxineq2){reference-type="eqref" reference="eq:minimaxineq2"}, in the context of this two-person game, state that it is advantageous for a player to play second, or more precisely, to know the strategy of the opponent. The difference $\overline{v}^f - \underline{v}^f \geqslant 0$ can be interpreted as the advantage conferred to a player in knowing the opponent's strategy. If the minimax equality holds, that is, if $\overline{v}^f = \underline{v}^f$, then there is no advantage in playing second or in knowing the strategy of the opponent. We now formally define the notion of pure strategy Nash equilibrium and value of the game as follows: **Definition 3** (pure strategy Nash equilibrium, value of the game and optimal strategies). Consider the two-person pure strategy game of the probabilist against Nature as described above. Suppose that the minimax equality holds, that is, $\overline{v}^f = \underline{v}^f$. We say that $$v = v(f,\mathcal{B},\pi) := \overline{v}^f = \underline{v}^f$$ is the **value of the game**. Let $M^f$ and $L^f$ be respectively a minimax strategy of Nature and a maximin strategy of the probabilist. The pair $(M^f,L^f)$, a saddle point with respect to $(f,\mathcal{B},\pi)$, is also called a **pure strategy Nash equilibrium** with respect to $(f,\mathcal{B},\pi)$. 
$M^f$ is said to be an optimal strategy of Nature while $L^f$ is said to be an optimal strategy of the probabilist. Analogous to Corollary [Corollary 1](#cor:convexhull){reference-type="ref" reference="cor:convexhull"} where we characterize the saddle point, we characterize the existence and uniqueness of Nash equilibrium in this game in the following result. The proof is deferred to Section [4.5](#subsubsec:pfpureNash){reference-type="ref" reference="subsubsec:pfpureNash"}. **Corollary 2**. *Consider the two-person pure strategy game of the probabilist against Nature as described above. Let $\mathcal{B}$ be either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$.* 1. *(Characterization of pure strategy Nash equilibrium)[\[it:existNash\]]{#it:existNash label="it:existNash"} A pure strategy Nash equilibrium with respect to $(f,\mathcal{B},\pi)$ exists if and only if there exists $j \in \llbracket n \rrbracket$ and $\mathbf{w}^f = (w^f_i)_{i=1}^n$ a maximizer of the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} with $w^f_j > 0$ such that $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_j). \end{aligned}$$ In particular, if the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, then the above statement is equivalent to $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) = M^f(L_j,\pi). \end{aligned}$$* 2. *(Uniqueness of pure strategy Nash equilibrium)[\[it:uniqueNash\]]{#it:uniqueNash label="it:uniqueNash"} Suppose that the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"} and a pure strategy Nash equilibrium exists, which is given by $$(M^f(L_l,\pi), L_l),$$ where $l \in \mathop{\mathrm{arg\,max}}_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i)$. It is unique if and only if the index $$l = \mathop{\mathrm{arg\,max}}_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i)$$ is unique.* ### Examples {#subsubex:pureex} In this Section, we give a few simple yet illustrative examples to demonstrate the theory that we have developed thus far. We shall see that, depending on the parameters of the game $(f,\{L_i\}_{i=1}^n,\pi)$, the associated pure strategy Nash equilibrium or saddle point may or may not exist. **Example 1** (An example with the existence of multiple pure strategy Nash equilibria (or saddle points)). In the first example, we take $n = 2$ and consider two generators with $L_1 = L$ and $L_2 = L_{\pi}$, where $L \in \mathcal{L}$ and recall that $L_{\pi}$ is the $\pi$-dual of $L$. In view of the bisection property [@CW23], we have $D_f(M||L_1) = D_f(M||L_2)$, and hence for any weight $\mathbf{w} \in \mathcal{S}_2$, we see that $$M^{f}_2(\mathbf{w},\{L_i\}_{i=1}^2,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_i)$$ for $i = 1,2$. Thus, a pure strategy Nash equilibrium or saddle point, with respect to $(f,\{L_i\}_{i=1}^2,\pi)$, exists, according to Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"}. Now, we further assume that the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}. 
Using the bisection property again, the two pure strategy Nash equilibria are $$(M^f(L,\pi),L), \quad (M^f(L,\pi),L_{\pi}).$$ **Example 2** (An example with a unique pure strategy Nash equilibrium (or saddle point)). In the second example, we again take $n = 2$ and consider two generators $L_1, L_2 \in \mathcal{L}$. We further assume that $D_f$ is the $\alpha$-divergence, and $L_1,L_2$ are chosen to satisfy the condition that $$\begin{aligned} \label{eq:examplecond} D_f(M^f(L_1,\pi)||L_1) > D_f(M^f(L_1,\pi)||L_2). \end{aligned}$$ Applying the Pythagorean identity of the $\alpha$-divergence [@CW23] to the right hand side above, we thus have $$\begin{aligned} \label{eq:examplecond2} D_f(M^f(L_1,\pi)||L_1) > D_f(M^f(L_1,\pi)||L_2) &= D_f(M^f(L_2,\pi)||L_2) + D_f(M^f(L_1,\pi)||M^f(L_2,\pi)) \nonumber \\ &\geqslant D_f(M^f(L_2,\pi)||L_2). \end{aligned}$$ Now, we claim that in this example, the pair $(M^f(L_1,\pi),D_f(M^f(L_1,\pi)||L_1)) \in \mathcal{L}(\pi) \times \mathbb{R}_+$ minimizes the primal problem [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} and $\mathbf{w} = (1,0) \in \mathcal{S}_2$ maximizes the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}. This readily follows from item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"} in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}, where the condition [\[eq:examplecond\]](#eq:examplecond){reference-type="eqref" reference="eq:examplecond"} ensures that the complementary slackness holds. Using item [\[it:existNash\]](#it:existNash){reference-type="eqref" reference="it:existNash"} in Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"}, we see that a Nash equilibrium exists, and in view of [\[eq:examplecond2\]](#eq:examplecond2){reference-type="eqref" reference="eq:examplecond2"} and item [\[it:uniqueNash\]](#it:uniqueNash){reference-type="eqref" reference="it:uniqueNash"} in Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"}, the unique pure strategy Nash equilibrium or saddle point is given by $$(M^f(L_1,\pi),L_1).$$ **Example 3** (An example where a pure strategy Nash equilibrium (or saddle point) fails to exist). In the final example, we again specialize to $n = 2$ and take two Markov generators $L_1, L_2 \in \mathcal{L}$. We further assume that $D_f$ is the $\alpha$-divergence, and $L_1,L_2$ are chosen to satisfy the conditions that $M^f(L_1,\pi) \neq M^f(L_2,\pi)$ and $$\begin{aligned} \label{eq:examplecond3} D_f(M^f(L_1,\pi)||L_1) = D_f(M^f(L_2,\pi)||L_2). \end{aligned}$$ Suppose on the contrary that a pure strategy Nash equilibrium exists. In view of Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"} and the fact that $f$ is strictly convex under the $\alpha$-divergence, the only possible candidates for the maximizer of the dual problem are $\mathbf{w}^f = (1,0)$ and $\mathbf{w}^f = (0,1)$. In the former case, complementary slackness (item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"} in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}) yields $D_f(M^f(L_1,\pi)||L_1) = r$ together with $D_f(M^f(L_1,\pi)||L_2) \leqslant r$, and hence $$\begin{aligned} r &= D_f(M^f(L_1,\pi)||L_1) \\ &\geqslant D_f(M^f(L_1,\pi)||L_2) \\ &= D_f(M^f(L_2,\pi)||L_2) + D_f(M^f(L_1,\pi)||M^f(L_2,\pi)) \\ &= D_f(M^f(L_1,\pi)||L_1) + D_f(M^f(L_1,\pi)||M^f(L_2,\pi)), \end{aligned}$$ where we use the Pythagorean identity [@CW23] in the second equality and [\[eq:examplecond3\]](#eq:examplecond3){reference-type="eqref" reference="eq:examplecond3"} in the last equality. 
This implies that $D_f(M^f(L_1,\pi)||M^f(L_2,\pi)) = 0$ and hence $M^f(L_1,\pi) = M^f(L_2,\pi)$, which contradicts the assumption. Analogously we can handle the case of $\mathbf{w}^f = (0,1)$. As a result, we see that a pure strategy Nash equilibrium fails to exist. For further discussion of this example, please refer to Example [Example 5](#ex:3mix){reference-type="ref" reference="ex:3mix"}. ## A mixed strategy game of the probabilist against Nature {#subsec:mixgame} Recall that in Section [3.1](#subsec:puregame){reference-type="ref" reference="subsec:puregame"}, we introduced a two-person pure strategy game of the probabilist against Nature. In this Section, we shall study a variant of this game where the probabilist uses a mixed strategy while Nature maintains a pure strategy. Precisely, consider the following game that we refer to as a two-person mixed strategy game of the probabilist against Nature, with respect to the parameters $(f, \mathcal{B},\pi)$. We denote by $\mathcal{P}(\mathcal{B})$ the set of probability measures on $\mathcal{B}$, where again we recall that $\mathcal{B}$ is either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$. The probabilist first chooses a **prior measure** $\mu \in \mathcal{P}(\mathcal{B})$ and picks an $L \in \mathcal{B}$ at random according to $\mu$. On the other hand Nature, knowing $\mu$ but not knowing $L$, follows a pure strategy to choose $M \in \mathcal{L}(\pi)$. Afterwards, Nature pays an amount $D_f(M||L)$ to the probabilist. This serves as a natural generalization of the game of a statistician against Nature proposed and developed in [@H97; @HO97; @GZ06]. With these notions in mind, we now introduce the minimax and maximin values of this game: **Definition 4** (Minimax and maximin values in the two-person mixed strategy game). Consider the two-person mixed strategy game of the probabilist against Nature. The minimax value and the maximin value of this game are defined respectively to be $$\begin{aligned} \overline{V}^f = \overline{V}^f(\mathcal{B},\pi) &:= \inf_{M \in \mathcal{L}(\pi)} \sup_{\mu \in \mathcal{P}(\mathcal{B})} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL), \label{eq:overlineVf}\\ \underline{V}^f = \underline{V}^f(\mathcal{B},\pi) &:= \sup_{\mu \in \mathcal{P}(\mathcal{B})} \inf_{M \in \mathcal{L}(\pi)} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL). \label{eq:underlineVf} \end{aligned}$$ Note that $\overline{V}^f \geqslant\underline{V}^f$. We can define analogously $\overline{V}^{f^*}, \underline{V}^{f^*}$ by replacing $f$ by $f^*$ above. Next, we define the notions of Bayes risk and Bayes strategy of Nature: **Definition 5** (Bayes risk and Bayes strategy). Given $\mu \in \mathcal{P}(\mathcal{B})$, we say that a Markov generator $M_{\mu} \in \mathcal{L}(\pi)$ is a **Bayes strategy** with respect to $\mu$ if the mapping $M \mapsto \int_{\mathcal{B}} D_f(M||L)\, \mu(dL)$ attains its infimum at $M_{\mu}$, that is, $$\inf_{M \in \mathcal{L}(\pi)} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL) = \int_{\mathcal{B}} D_f(M_{\mu}||L)\, \mu(dL).$$ The minimum value $\int_{\mathcal{B}} D_f(M_{\mu}||L)\, \mu(dL)$ is said to be the **Bayes risk** with respect to $\mu$. Finally, we give the notions of minimax and maximin strategy of this game: **Definition 6** (Minimax and maximin strategies in the two-person mixed strategy game). Consider the two-person mixed strategy game of the probabilist against Nature. 
A strategy in the set $$\mathop{\mathrm{arg\,inf}}_{M \in \mathcal{L}(\pi)} \sup_{\mu \in \mathcal{P}(\mathcal{B})} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL)$$ is known as a **minimax strategy of Nature**, while a strategy in the set $$\mathop{\mathrm{arg\,sup}}_{\mu \in \mathcal{P}(\mathcal{B})} \inf_{M \in \mathcal{L}(\pi)} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL)$$ is known as a **maximin strategy of the probabilist**. Before we state the main results of this section, we give an important proposition that connects various minimax and maximin values we have introduced thus far, namely $\overline{v}^f,\overline{V}^f, \underline{V}^f$ and the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}. The proof is deferred to Section [4.6](#subsubsec:pfminimaxineq){reference-type="ref" reference="subsubsec:pfminimaxineq"}. **Proposition 2**. *For any $\mathcal{B}\subseteq \mathcal{L}$, we have $$\begin{aligned} \label{eq:minimaxeqmixed1} \overline{v}^f(\mathcal{B},\pi) \geqslant\overline{V}^f(\mathcal{B},\pi) \geqslant\underline{V}^f(\mathcal{B},\pi), \end{aligned}$$ where we recall that $\overline{v}^f$ is introduced in Definition [Definition 1](#def:minimaxvalues){reference-type="ref" reference="def:minimaxvalues"} while $\overline{V}^f,\underline{V}^f$ are defined in Definition [Definition 4](#def:minimaxmix){reference-type="ref" reference="def:minimaxmix"}. In particular, if $\mathcal{B}$ is either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$, this leads to $$\begin{aligned} \label{eq:minimaxeqmixed2} \overline{v}^f(\mathcal{B},\pi) \geqslant\overline{V}^f(\mathcal{B},\pi) \geqslant\underline{V}^f(\mathcal{B},\pi) \geqslant\max_{\mathbf{w} \in \mathcal{S}_n} \sum_{i=1}^n w_i D_f(M^f_n||L_i). \end{aligned}$$* In the setting of [\[eq:minimaxeqmixed2\]](#eq:minimaxeqmixed2){reference-type="eqref" reference="eq:minimaxeqmixed2"}, by Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} item [\[it:sd\]](#it:sd){reference-type="eqref" reference="it:sd"}, the strong duality holds and hence we have $$\overline{v}^f(\mathcal{B},\pi) = \max_{\mathbf{w} \in \mathcal{S}_n} \sum_{i=1}^n w_i D_f(M^f_n||L_i).$$ This forces a minimax equality, that is, $$\overline{V}^f(\mathcal{B},\pi) = \underline{V}^f(\mathcal{B},\pi).$$ In other words, all inequalities are in fact equalities in [\[eq:minimaxeqmixed2\]](#eq:minimaxeqmixed2){reference-type="eqref" reference="eq:minimaxeqmixed2"}. We collect this result and a few others in Corollary [Corollary 1](#cor:convexhull){reference-type="ref" reference="cor:convexhull"} into the following main results: **Theorem 4**. *Consider the two-person mixed strategy game of the probabilist against Nature, with respect to the parameters $(f, \mathcal{B},\pi)$, where $\mathcal{B}$ is either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$. Recall that there exists $\mathbf{w}^f = \mathbf{w}^f(\{L_i\}_{i=1}^n,\pi) = (w^f_i)_{i=1}^n \in \mathcal{S}_n$ such that the optimal value for the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} is attained at $\mathbf{w}^f$ by Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}.* 1. 
*(A mixed strategy Nash equilibrium always exists) The minimax equality holds, that is, $$\overline{V}^f(\mathcal{B},\pi) = \underline{V}^f(\mathcal{B},\pi) = \sum_{i=1}^n w^f_i D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i) =: V.$$ We say that $V$ is the **value of the game**. The pair $(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi),\mathbf{w}^f) \in \mathcal{L}(\pi) \times \mathcal{P}(\mathcal{B})$ is a **mixed strategy Nash equilibrium** with respect to the parameters $(f, \mathcal{B},\pi)$.* 2. *(Game-theoretic interpretation of Chebyshev center and Chebyshev radius) Recall the definition of Chebyshev center and Chebyshev radius in Definition [Definition 2](#def:Chebyshevcr){reference-type="ref" reference="def:Chebyshevcr"}. A minimax strategy of Nature is given by $M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)$, a weighted information centroid, which is also a Chebyshev center. In view of the complementary slackness in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}, we have $$V = D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_l),$$ where the index $l$ satisfies $w^f_l > 0$. In words, the Chebyshev radius is the value of the game $V$.* *If the parameters $(f,\{L_i\}_{i=1}^n,\pi)$ satisfy the assumptions in Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}, then $M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)$ is the unique minimax strategy.* 3. *(Bayes risk with respect to $\mathbf{w}^f$ is the value of the game) $\mathbf{w}^f$ is a maximin strategy of the probabilist. $M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)$ is a Bayes strategy with respect to $\mathbf{w}^f$, and the Bayes risk with respect to $\mathbf{w}^f$ is the value of the game $V$.* *Remark 7*. It is interesting to note that a mixed strategy Nash equilibrium always exists in the two-person mixed strategy game (Theorem [Theorem 4](#thm:mainmix){reference-type="ref" reference="thm:mainmix"}) while a pure strategy Nash equilibrium may or may not exist in the two-person pure strategy game (Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"}). For examples of the latter case, we recall Example [Example 3](#ex:3pure){reference-type="ref" reference="ex:3pure"} above and Example [Example 5](#ex:3mix){reference-type="ref" reference="ex:3mix"} below. This is analogous to the classical two-person zero-sum game in the game theory literature where a pure strategy Nash equilibrium may not exist; see [@KarlinPeres17 Chapter $2$]. *Remark 8* (Game-theoretic consequences of the mixed strategy game). As a mixed strategy Nash equilibrium always exists, there is no advantage for a player (the probabilist or Nature) to play second, or more generally, to know the strategy of the opponent, in the mixed strategy version of the game. *Remark 9* (Another proof of $\overline{V}^f(\mathcal{B},\pi) = \underline{V}^f(\mathcal{B},\pi)$ via Sion's minimax theorem). One common strategy in establishing minimax results, in particular in the context of source coding and information theory, relies on Sion's minimax theorem; see for instance [@EH14 Theorem $35$]. For given $\mu \in \mathcal{P}(\mathcal{B})$ and $M \in \mathcal{L}(\pi)$, the mapping $$(M,\mu) \mapsto \int_{\mathcal{B}} D_f(M||L)\, \mu(dL)$$ is clearly concave in $\mu$ and convex in $M$. As we are minimizing over $M \in \mathcal{L}(\pi)$, the set of $\pi$-reversible generators $\mathcal{L}(\pi)$ is convex yet unbounded, and hence it is not a compact subset of $\mathcal{L}$. 
On the other hand, as $\mathcal{B}$ is either a finite set of Markov generators $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$, the set $\mathcal{P}(\mathcal{B})$ is thus compact. As a result, Sion's minimax theorem is readily applicable in this setting, which yields $$\overline{V}^f(\mathcal{B},\pi) = \underline{V}^f(\mathcal{B},\pi).$$ The proof that we presented in [\[eq:minimaxeqmixed2\]](#eq:minimaxeqmixed2){reference-type="eqref" reference="eq:minimaxeqmixed2"} relies on the Lagrangian duality theory as in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}, which naturally gives fine properties concerning the mixed strategy Nash equilibrium and related important objects such as $\mathbf{w}^f$, and connects with earlier sections of the manuscript. These properties do not seem to be immediate consequences or corollaries of Sion's minimax theorem. ### Examples {#subsubex:mixedex} In this Section, similar to Section [3.1.1](#subsubex:pureex){reference-type="ref" reference="subsubex:pureex"}, we detail four simple examples to demonstrate the theory concerning the two-person mixed strategy game of the probabilist against Nature. **Example 4** (Answering a question of Laurent Miclo: an example with the existence of multiple mixed strategy Nash equilibria). In the first example, we continue the setting discussed in Example [Example 1](#ex:1pure){reference-type="ref" reference="ex:1pure"}. That is, we consider two ($n = 2$) generators with $L_1 = L$ and $L_2 = L_{\pi}$, where $L \in \mathcal{L}$. Using the bisection property of $D_f$ [@CW23], we have $D_f(M||L_1) = D_f(M||L_2)$, and for any weight $\mathbf{w} \in \mathcal{S}_2$, we see that $$M^{f}_2(\mathbf{w},\{L_i\}_{i=1}^2,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_i)$$ for $i = 1,2$. In other words, any pair of the form $(M^f(L_1,\pi),\mathbf{w})$ is a mixed strategy Nash equilibrium with respect to the parameters of this game. Note that this game also has multiple pure strategy Nash equilibria as shown in Example [Example 1](#ex:1pure){reference-type="ref" reference="ex:1pure"}. In the special case $f(t) = |t-1|$, the $f$-divergence is the total variation distance. We also recall that $P_{-\infty}, P_{\infty}$ are introduced in [\[eq:P-infty\]](#eq:P-infty){reference-type="eqref" reference="eq:P-infty"} and [\[eq:Pinfty\]](#eq:Pinfty){reference-type="eqref" reference="eq:Pinfty"} respectively. It is shown in [@CH18] that any convex combination of $P_{-\infty}$ and $P_{\infty}$ is a minimizer, that is, $$\{a P_{-\infty} + (1-a) P_{\infty};~ a \in [0,1]\} \subseteq \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_i).$$ As such, based on our earlier analysis, $(P_{\infty},\mathbf{w})$ and $(P_{-\infty},\mathbf{w})$ are mixed strategy Nash equilibria of the game under the total variation distance. This gives a possible answer to a question of Laurent Miclo on game-theoretic interpretations of $P_{-\infty}, P_{\infty}$. If $f$ is strictly convex with $f^{\prime}(1) = 0$, for instance the $\alpha$-divergence, then $M^f(L_1,\pi)$ is unique, and hence it is the unique minimax strategy of Nature in the mixed strategy game, while any $\mathbf{w} \in \mathcal{S}_2$ is a maximin strategy of the probabilist. $M^f(L_1,\pi)$ is also the unique Bayes strategy with respect to any $\mathbf{w}$. The value of the game is given by $V = D_f(M^f(L_1,\pi)||L_1)$. Note that $M^f(L_1,\pi)$ is also the unique minimax strategy of Nature in the pure strategy game. 
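As a small numerical companion to the example above (again a sketch under explicit assumptions, not part of the original argument), the snippet below takes the $\pi$-dual to be the usual time reversal, $L_{\pi}(x,y) = \pi(y)L(y,x)/\pi(x)$ for $x \neq y$, builds $P_{\infty}$ and $P_{-\infty}$ as in [\[eq:Pinfty\]](#eq:Pinfty){reference-type="eqref" reference="eq:Pinfty"} and [\[eq:P-infty\]](#eq:P-infty){reference-type="eqref" reference="eq:P-infty"} for a toy chain, and checks numerically that every convex combination $aP_{-\infty} + (1-a)P_{\infty}$ attains the same total variation payoff against $L_1 = L$ and $L_2 = L_{\pi}$, in line with the bisection property.

```python
import numpy as np

def zero_row_sums(G):
    # Reset the diagonal so that each row of the generator sums to zero.
    G = np.array(G, dtype=float)
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

def pi_dual(L, pi):
    # Assumed form of the pi-dual (time reversal): L_pi(x,y) = pi(y) L(y,x) / pi(x) for x != y.
    n = len(pi)
    Lpi = np.zeros_like(L)
    for x in range(n):
        for y in range(n):
            if x != y:
                Lpi[x, y] = pi[y] * L[y, x] / pi[x]
    return zero_row_sums(Lpi)

def D_tv(A, B, pi):
    # Total variation distance between generators, i.e. D_f with f(t) = |t-1|/2,
    # matching the definition of D_TV used later for Theorem 7.
    n = len(pi)
    return 0.5 * sum(pi[x] * abs(A[x, y] - B[x, y])
                     for x in range(n) for y in range(n) if x != y)

pi = np.array([0.5, 0.3, 0.2])  # illustrative positive distribution
L = zero_row_sums([[0.0, 0.9, 0.1], [0.4, 0.0, 0.6], [0.7, 0.3, 0.0]])
Lpi = pi_dual(L, pi)

# P_infty and P_-infty: entrywise max / min of L and L_pi off the diagonal, diagonal reset afterwards.
P_sup = zero_row_sums(np.maximum(L, Lpi))
P_inf = zero_row_sums(np.minimum(L, Lpi))

for a in (0.0, 0.25, 0.5, 1.0):
    M = a * P_inf + (1.0 - a) * P_sup          # convex combinations remain pi-reversible
    print(a, D_tv(M, L, pi), D_tv(M, Lpi, pi))  # the two payoffs coincide for each a
```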
**Example 5** (An example where a pure strategy Nash equilibrium fails to exist and a mixed strategy Nash equilibrium (always) exists). In the second example, we continue our investigation in Example [Example 3](#ex:3pure){reference-type="ref" reference="ex:3pure"} and take $n = 2$ along with two Markov generators $L_1, L_2 \in \mathcal{L}$. We choose $D_f$ to be the $\alpha$-divergence, and $L_1,L_2$ are assumed to satisfy the conditions that $M^f(L_1,\pi) \neq M^f(L_2,\pi)$ and $$\begin{aligned} \label{eq:examplecond4} D_f(M^f(L_1,\pi)||L_1) = D_f(M^f(L_2,\pi)||L_2). \end{aligned}$$ Recall that in Example [Example 3](#ex:3pure){reference-type="ref" reference="ex:3pure"}, we have already shown that a pure strategy Nash equilibrium cannot exist with these choices of parameters, and furthermore a maximizer of the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}, which exists, can be neither $\mathbf{w}^f = (1,0)$ nor $\mathbf{w}^f = (0,1)$. We thus come to the conclusion that $w^f_i > 0$ for $i = 1,2$. The weighted information centroid $M^{f}_2(\mathbf{w}^f,\{L_i\}_{i=1}^2,\pi)$ is the unique minimax strategy of Nature, and is also the unique Bayes strategy with respect to $\mathbf{w}^f$. Figure [1](#fig:purenotexist){reference-type="ref" reference="fig:purenotexist"} provides a possible visualization of this example. ![A visualization of Example [Example 5](#ex:3mix){reference-type="ref" reference="ex:3mix"}. Note that the mixed strategy Nash equilibrium is $(M^{f}_2(\mathbf{w}^f,\{L_i\}_{i=1}^2,\pi),\mathbf{w}^f)$, while a pure strategy Nash equilibrium does not exist. The complementary slackness in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} ensures that $D_f(M^{f}_2(\mathbf{w}^f,\{L_i\}_{i=1}^2,\pi)||L_1) = D_f(M^{f}_2(\mathbf{w}^f,\{L_i\}_{i=1}^2,\pi)||L_2) = \overline{v}^f = V$, the value of the mixed strategy game. [\[eq:examplecond4\]](#eq:examplecond4){reference-type="eqref" reference="eq:examplecond4"} also holds in this figure.](purenotexist.png){#fig:purenotexist width="80%"} **Example 6** (An example where a pure strategy Nash equilibrium and a mixed strategy Nash equilibrium coincide). In this example, we take $n=3$ Markov generators $L_i \in \mathcal{L}$ for $i \in \llbracket 3 \rrbracket$, and they are chosen in a special way such that $$D_f(M^f(L_2,\pi)||L_1) = D_f(M^f(L_2,\pi)||L_3) = D_f(M^f(L_2,\pi)||L_2).$$ At the same time, we require that, for $j \in \{1,3\}$, $$D_f(M^f(L_2,\pi)||L_2) > D_f(M^f(L_j,\pi)||L_j).$$ With these parameter choices of the game, using Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} and [Theorem 4](#thm:mainmix){reference-type="ref" reference="thm:mainmix"} we see that we can take $\mathbf{w}^f = (0,1,0)$, and $(M^f(L_2,\pi),\mathbf{w}^f)$ is a mixed strategy Nash equilibrium while $(M^f(L_2,\pi),L_2)$ is a pure strategy Nash equilibrium. Figure [2](#fig:puremixsame){reference-type="ref" reference="fig:puremixsame"} provides a visualization in this setting. ![A visualization of Example [Example 6](#ex:puremixsame){reference-type="ref" reference="ex:puremixsame"}. ](pureexist.png){#fig:puremixsame width="80%"} **Example 7** (An example of uniformizable and $\mu$-reversible generators). Recall that we have been considering settings where $\mathcal{B}$ is either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$. 
This covers for instance $\mathcal{D}$, the set of doubly stochastic Markov generators, as discussed in [\[eq:doublysto\]](#eq:doublysto){reference-type="eqref" reference="eq:doublysto"}. The aim of this example is to show that this also includes the set of uniformizable and $\mu$-reversible generators, where $\mu \neq \pi$. Let $L \in \mathcal{L}(\mu)$. $L$ is said to be $\lambda$-uniformizable if $\max_{x \in \mathcal{X}} |L(x,x)| \leqslant\lambda$, and we denote by $\mathcal{L}_{\lambda}(\mu) \subset \mathcal{L}(\mu)$ the set of such Markov generators. Without loss of generality, in this example we let $m = |\mathcal{X}|$ and label the state space $\mathcal{X} = \{1,2,\ldots,m\}$. For a given $\lambda > 0$, we now claim that $$\mathcal{L}_{\lambda}(\mu) \subseteq \mathrm{conv}(\{L_{x,y}\}_{x<y,x,y \in \mathcal{X}} \cup \{\mathbf{0}\}),$$ where for $x < y \in \mathcal{X}$, $L_{x,y}(x,y) := \lambda \dfrac{m(m-1)}{2}$ while $L_{x,y}(y,x) := \lambda \dfrac{m(m-1)}{2} \dfrac{\mu(x)}{\mu(y)}$, and all other off-diagonal entries of $L_{x,y}$ are zero. We also recall that $\mathbf{0}$ is the all-zero Markov generator. Let $L \in \mathcal{L}_{\lambda}(\mu)$. Making use of the $\mu$-reversibility, we write that $$L = \sum_{x < y} w_{x,y} L_{x,y} + \left(1-\sum_{x < y} w_{x,y}\right) \mathbf{0},$$ where $w_{x,y} := \dfrac{L(x,y)}{\lambda \dfrac{m(m-1)}{2}}$. As a consequence, various results that have been stated are readily applicable to the set $\mathcal{L}_{\lambda}(\mu)$. ## A projected subgradient algorithm to find an approximate mixed strategy Nash equilibrium (or Chebyshev center) {#subsec:algo} The aim of this Section is to develop a simple and easy-to-implement projected subgradient method to find an approximate mixed strategy Nash equilibrium. Throughout this Section, we consider the two-person mixed strategy game of the probabilist against Nature with respect to the parameters $(f, \mathcal{B},\pi)$, where $\mathcal{B}$ is either $\{L_i\}_{i=1}^n$ or its convex hull $\mathrm{conv}((L_i)_{i=1}^n)$. Recall that $\mathbf{w}^f$ is a maximin strategy of the probabilist, and the pair $(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi),\mathbf{w}^f)$ is a mixed strategy Nash equilibrium. To find such an equilibrium algorithmically, one crucial step amounts to solving the corresponding dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}. If the weighted information centroid admits a closed-form expression, for instance in the case of the $\alpha$-divergence or the examples given in Theorem [Theorem 2](#thm:examplecentroid){reference-type="ref" reference="thm:examplecentroid"}, then, once $\mathbf{w}^f$ is computed, a mixed strategy Nash equilibrium is obtained. Instead of solving the maximization problem in [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}, we reformulate this problem into the following equivalent minimization problem by multiplying by $-1$, in order to apply the subgradient (rather than supergradient) method from the optimization literature: $$\begin{aligned} \min_{\mathbf{w} \in \mathcal{S}_n} h(\mathbf{w}),\end{aligned}$$ where $$\begin{aligned} \label{def:h} h(\mathbf{w}) := - \sum_{i=1}^n w_i D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_i).\end{aligned}$$ Note that the dependence of $h$ on $(f, \mathcal{B},\pi)$ is suppressed to avoid notational burden in the definition above. 
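As a small illustration of how $h$ can be evaluated in practice (a sketch under stated assumptions, not part of the original text), consider the $\alpha$-divergence: by Theorem [Theorem 2](#thm:examplecentroid){reference-type="ref" reference="thm:examplecentroid"}, each projection $M^f(L_i,\pi)$ is the $P_{1-\alpha}$-reversiblization of $L_i$, and the weighted centroid is an entrywise weighted power mean of these projections. The sketch below assumes the time-reversal form of the $\pi$-dual entering [\[def:Pp\]](#def:Pp){reference-type="eqref" reference="def:Pp"}, namely $L_{\pi}(x,y) = \pi(y)L(y,x)/\pi(x)$, together with strictly positive off-diagonal rates so that all powers are well defined; the chains below are arbitrary illustrative choices.

```python
import numpy as np

ALPHA = 2.0  # illustrative alpha; the centroid below uses the exponent p = 1 - ALPHA

def zero_row_sums(G):
    G = np.array(G, dtype=float)
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

def pi_dual(L, pi):
    n = len(pi)
    Lpi = np.zeros_like(L)
    for x in range(n):
        for y in range(n):
            if x != y:
                Lpi[x, y] = pi[y] * L[y, x] / pi[x]  # assumed time-reversal form of the pi-dual
    return zero_row_sums(Lpi)

def P_p(L, pi, p):
    # Power mean reversiblization: P_p(x,y) = ((L(x,y)^p + L_pi(x,y)^p)/2)^(1/p) off the diagonal.
    n, Lpi, P = len(pi), pi_dual(L, pi), np.zeros_like(L)
    for x in range(n):
        for y in range(n):
            if x != y:
                P[x, y] = (0.5 * (L[x, y] ** p + Lpi[x, y] ** p)) ** (1.0 / p)
    return zero_row_sums(P)

def f_alpha(t, a=ALPHA):
    return (t ** a - a * t - (1.0 - a)) / (a * (a - 1.0))

def D_f(M, L, pi, f=f_alpha):
    n = len(pi)
    return sum(pi[x] * L[x, y] * f(M[x, y] / L[x, y])
               for x in range(n) for y in range(n) if x != y and L[x, y] > 0)

def centroid(w, Ls, pi, a=ALPHA):
    # Theorem 2(1): M^f_n(x,y) = (sum_i w_i M^f(L_i,pi)(x,y)^(1-a))^(1/(1-a)), entrywise.
    projections = [P_p(L, pi, 1.0 - a) for L in Ls]
    n, M = len(pi), np.zeros_like(Ls[0])
    for x in range(n):
        for y in range(n):
            if x != y:
                M[x, y] = sum(wi * proj[x, y] ** (1.0 - a)
                              for wi, proj in zip(w, projections)) ** (1.0 / (1.0 - a))
    return zero_row_sums(M)

def h(w, Ls, pi):
    # h(w) = - sum_i w_i D_f(M^f_n(w)||L_i), as in the display above.
    M = centroid(w, Ls, pi)
    return -sum(wi * D_f(M, L, pi) for wi, L in zip(w, Ls))

pi = np.array([0.5, 0.3, 0.2])
L1 = zero_row_sums([[0.0, 0.9, 0.1], [0.4, 0.0, 0.6], [0.7, 0.3, 0.0]])
L2 = zero_row_sums([[0.0, 0.2, 0.8], [0.5, 0.0, 0.5], [0.3, 0.7, 0.0]])
print(h(np.array([0.5, 0.5]), [L1, L2], pi))
```

A routine of this kind is precisely the oracle needed by the projected subgradient iteration described next, since the subgradient of $h$ is built from divergences evaluated at the current weighted centroid.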
In view of Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} item [\[it:cl\]](#it:cl){reference-type="eqref" reference="it:cl"}, the function $h$ is convex. The algorithm that we propose is novel from two perspectives. First, no algorithm has been developed in the context of the statistician against Nature game [@H97; @HO97; @GZ06] to compute the Nash equilibrium, while in this paper we propose a simple subgradient algorithm to approximately find the mixed strategy equilibrium. Second, our algorithm harnesses the centroid structure and the subgradient of $h$, which has not been observed in the literature on Chebyshev center computation for probability measures [@EBT08; @C20]. In the first main result of this Section, we identify a subgradient of $h$, and its proof is deferred to Section [4.7](#subsubsec:pfsubgh){reference-type="ref" reference="subsubsec:pfsubgh"}: **Theorem 5** (Subgradient of $h$ and an upper bound of its $\ell^2$-norm). *A subgradient of $h$ at $\mathbf{v} \in \mathcal{S}_n$ is given by $\mathbf{g} = \mathbf{g}(\mathbf{v}) = (g_1,g_2,\ldots,g_n) \in \mathbb{R}^n$, where for $i \in \llbracket n \rrbracket$, we have $$\begin{aligned} \label{eq:subgi} g_i = D_f(M^f_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_n) - D_f(M^f_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_i). \end{aligned}$$ That is, $\mathbf{g}$ satisfies, for $\mathbf{w},\mathbf{v} \in \mathcal{S}_n$, $$h(\mathbf{w}) \geqslant h(\mathbf{v}) + \sum_{i=1}^n g_i (w_i - v_i).$$ Moreover, the squared $\ell^2$-norm of $\mathbf{g}(\mathbf{v})$ is bounded above by $$\begin{aligned} \label{eq:subgnormbd} \left\lVert\mathbf{g}(\mathbf{v})\right\rVert_2^2 := \sum_{i=1}^n g_i^2 \leqslant n \left(|\mathcal{X}| \sup_{\mathbf{v} \in \mathcal{S}_n;~i \in \llbracket n \rrbracket;~x \neq y \in \mathcal{X};~L_i(x,y)>0} L_i(x,y) f\left(\dfrac{M^f_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)(x,y)}{L_i(x,y)}\right)\right)^2 =: B. \end{aligned}$$* *Remark 10*. The choice of $L_n$ in the first term of $g_i$ in [\[eq:subgi\]](#eq:subgi){reference-type="eqref" reference="eq:subgi"} is in fact completely arbitrary, and we shall see in the proof that it can be replaced by any $L_l$ with $l \in \llbracket n \rrbracket$. We thus obtain at least $n$ subgradients of $h$ at a given point $\mathbf{v} \in \mathcal{S}_n$. *Remark 11*. The upper bound $B$ plays an important role in determining the convergence rate of the projected subgradient algorithm; see Theorem [Theorem 6](#thm:subgconv){reference-type="ref" reference="thm:subgconv"} and its proof below. We proceed to develop and analyze a projected subgradient algorithm to solve the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"}. Suppose that the number of iterations to be run is $t$. For $i = 1,2,\ldots,t$, we first update the weights of the current iteration using the subgradient, that is, we set $$\begin{aligned} \mathbf{v}^{(i)} = \mathbf{w}^{(i-1)} - \eta \mathbf{g}(\mathbf{w}^{(i-1)}),\end{aligned}$$ where $\eta > 0$ is the constant stepsize of the algorithm while we recall that $\mathbf{g}$ is a subgradient of $h$ as in [\[eq:subgi\]](#eq:subgi){reference-type="eqref" reference="eq:subgi"}. In the second step, we project $\mathbf{v}^{(i)}$ onto the probability simplex $\mathcal{S}_n$ by computing $$\begin{aligned} \mathbf{w}^{(i)} = \mathop{\mathrm{arg\,min}}_{\mathbf{w} \in \mathcal{S}_n} \left\lVert\mathbf{w} - \mathbf{v}^{(i)}\right\rVert_2^2.\end{aligned}$$ This can be done by utilizing existing efficient projection algorithms (see e.g. 
[@Condat16]), and we do not further investigate these projection algorithms in this manuscript. We then repeat the above two steps for $i = 1,2,\ldots,t$. The output of the algorithm is the sequence $(\mathbf{w}^{(i)})_{i=1}^t$. Precisely, the algorithm is stated in Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}. Before we proceed to analyze the convergence of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}, let us develop an intuition on why Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} possibly works. Suppose that we are in the hypothetical setting where the optimal weights are all positive, that is, $w^f_i > 0$ for all $i \in \llbracket n \rrbracket$. Recall that by the complementary slackness condition in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}, in this setting the subgradient of $h$ at $\mathbf{w}^f$ is zero since $$g_i(\mathbf{w}^f) = D_f(M^f_n(\mathbf{w}^f,\{L_j\}_{j=1}^n,\pi)||L_n) - D_f(M^f_n(\mathbf{w}^f,\{L_j\}_{j=1}^n,\pi)||L_i) = 0.$$ As a result, once Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} reaches $\mathbf{w}^f$, it stays put and no longer moves again, which is a desirable behaviour as an optimal value has been reached. On the other hand, if at the current iteration the subgradient is $g_i > 0$ at some $i$, then the weights are updated along the subgradient direction followed by projection onto $\mathcal{S}_n$. The success of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} relies crucially on computation of the subgradient $\mathbf{g}(\mathbf{w})$ of $h$ at $\mathbf{w}$, which further depends on the expression $M^f_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)$. For example, it is known in the case of $\alpha$-divergence or the examples given in Theorem [Theorem 2](#thm:examplecentroid){reference-type="ref" reference="thm:examplecentroid"}, and hence the algorithm can be readily applied in these settings. Note that in the implementation of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} we assume that the centroid $M^f_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)$ is accessible and we do not discuss algorithms to compute these centroids numerically. In this paper, we do not consider possibly many variants of the projected subgradient algorithm in Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}, for instance algorithms with changing or adaptive stepsize, or algorithms with stochastic subgradients. Our second main result of this Section gives a convergence rate of $\mathcal{O}(1/\sqrt{t})$ of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} with an appropriate choice of stepsize. The proof can be found in Section [4.8](#subsubsec:pfsubgconv){reference-type="ref" reference="subsubsec:pfsubgconv"}. We also write $$\begin{aligned} \overline{\mathbf{w}}^t := \dfrac{1}{t}\sum_{i=1}^t \mathbf{w}^{(i)},\end{aligned}$$ the arithmetic average up to iteration $t$ of the outputs of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}. **Theorem 6**. *Consider Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} with its output $(\mathbf{w}^{(i)})_{i=1}^t$ and the notations therein. 
We have $$\begin{aligned} \label{eq:subgconvbd} h\left(\overline{\mathbf{w}}^t\right) - h\left(\mathbf{w}^{f}\right) \leqslant\dfrac{n}{2\eta t} + \dfrac{\eta B}{2}, \end{aligned}$$ where we recall that $h$ is the function defined in [\[def:h\]](#def:h){reference-type="eqref" reference="def:h"} and $B$ is introduced in [\[eq:subgnormbd\]](#eq:subgnormbd){reference-type="eqref" reference="eq:subgnormbd"}. In particular, if we take the constant stepsize to be $$\eta = \sqrt{\dfrac{n}{tB}},$$ we thus have $$h\left(\overline{\mathbf{w}}^t\right) - h\left(\mathbf{w}^{f}\right) \leqslant\dfrac{1}{2}\sqrt{\dfrac{nB}{t}} + \dfrac{1}{2}\sqrt{\dfrac{nB}{t}} = \sqrt{\dfrac{nB}{t}}.$$ Given an arbitrary $\varepsilon > 0$, if we further choose $$t = \left\lceil \dfrac{n B}{\varepsilon^2} \right\rceil,$$ then we reach a value that is $\varepsilon$-close to $h\left(\mathbf{w}^{f}\right)$ in the sense that $$h\left(\overline{\mathbf{w}}^t\right) - h\left(\mathbf{w}^{f}\right) \leqslant\varepsilon.$$* Before we introduce the final main result of this Section, we shall fix some notation. We shall consider the set of continuized generators that do not stay at the same state in one step, that is, for $i \in \llbracket n \rrbracket$ we consider $$\begin{aligned} \label{eq:Liclass} L_i := P_i - I,\end{aligned}$$ where $I$ is the identity matrix on $\mathcal{X}$ and $P_i$ is a transition matrix with $P_i(x,x) = 0$ for all $x \in \mathcal{X}$. For two Markov generators $L,M \in \mathcal{L}$, by choosing $f(t) = |t-1|/2$, the total variation distance between $L$ and $M$ is given by $$D_{\mathrm{TV}}(L||M) := D_f(L||M) = \dfrac{1}{2} \sum_{x \in \mathcal{X}} \pi(x) \sum_{y \in \mathcal{X}\backslash\{x\} }|L(x,y) - M(x,y)| = D_{\mathrm{TV}}(M||L).$$ For the family $\{L_i\}_{i=1}^n$ satisfying [\[eq:Liclass\]](#eq:Liclass){reference-type="eqref" reference="eq:Liclass"}, we quantify the convergence of $M^f_n(\overline{\mathbf{w}}^t,\{L_j\}_{j=1}^n,\pi)$ towards $M^f_n(\mathbf{w}^f,\{L_j\}_{j=1}^n,\pi)$ in terms of the total variation distance $D_{\mathrm{TV}}$ and a strictly convex $f$. The proof can be found in Section [4.9](#subsubsec:pfcentroidconvalgo){reference-type="ref" reference="subsubsec:pfcentroidconvalgo"}. **Theorem 7** ($\mathcal{O}\left(1/\sqrt{t}\right)$ convergence rate of the weighted information centroid generated by Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}). *Consider Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} applied to the family $\{L_j\}_{j=1}^n$ satisfying [\[eq:Liclass\]](#eq:Liclass){reference-type="eqref" reference="eq:Liclass"} under a strictly convex $f$, with its output $(\mathbf{w}^{(i)})_{i=1}^t$ and the notations therein, where the stepsize is chosen to be $$\eta = \sqrt{\dfrac{n}{tB}}.$$ We have $$\begin{aligned} D_{\mathrm{TV}}(M^f_n(\overline{\mathbf{w}}^t,\{L_j\}_{j=1}^n,\pi)||M^f_n(\mathbf{w}^f,\{L_j\}_{j=1}^n,\pi)) = \mathcal{O}\left(\dfrac{1}{\sqrt{t}}\right), \end{aligned}$$ where the constant on the right hand side depends on all the parameters except $t$, that is, the constant depends on $(f,\{L_j\}_{j=1}^n,\pi,n,\mathcal{X})$.* # Proofs of the main results {#sec:proofsmain} ## Proof of Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"} {#subsubsec:pfexistunique} The proof is a generalization of [@DM09 Proposition $1.5$] and [@CW23 Theorem $3.9$] to the setting of the weighted information centroid. 
Pick an arbitrary total ordering on $\mathcal{X}$ with strict inequality being denoted by $\prec$. For $i \in \llbracket n \rrbracket$, we write $$\begin{aligned} a &= a(x,y) = \pi(x)M(x,y), \quad a^{\prime} = a^{\prime}(y,x) = \pi(y)M(y,x), \\ \beta_i &= \beta_i(x,y) = \pi(x) L_i(x,y), \quad \beta_i^{\prime} = \beta_i^{\prime}(y,x) = \pi(y)L_i(y,x).\end{aligned}$$ We note that $M \in \mathcal{L}(\pi)$ gives $a = a^{\prime}$. Using this, we see that $$\begin{aligned} \sum_{i=1}^n w_i D_f(M || L_i) &= \sum_{i=1}^n \sum_{x \prec y} w_i \pi(x) L_i(x,y) f\left(\dfrac{M(x,y)}{L_i(x,y)}\right) + w_i \pi(y) L_i(y,x) f\left(\dfrac{M(y,x)}{L_i(y,x)}\right) \\ &= \sum_{i=1}^n \sum_{x \prec y} w_i \beta_i f\left(\dfrac{a}{\beta_i}\right) + w_i \beta_i^{\prime} f\left(\dfrac{a}{\beta_i^{\prime}}\right) \\ &= \sum_{x \prec y} \sum_{i=1}^n w_i \beta_i f\left(\dfrac{a}{\beta_i}\right) + w_i \beta_i^{\prime} f\left(\dfrac{a}{\beta_i^{\prime}}\right) \\ &= \sum_{x \prec y} \sum_{\{i;~w_i>0 ~\textrm{and}~(\beta_i>0 ~\textrm{or}~\beta_i^{\prime}>0)\}} w_i \beta_i f\left(\dfrac{a}{\beta_i}\right) + w_i \beta_i^{\prime} f\left(\dfrac{a}{\beta_i^{\prime}}\right) \\ &=: \sum_{x \prec y} \Phi_{\mathbf{w},\beta_1,\ldots,\beta_n,\beta_1^{\prime},\ldots,\beta_n^{\prime}}(a).\end{aligned}$$ To minimize with respect to $M$, we are led to minimize the summand above, $\phi := \Phi_{\mathbf{w},\beta_1,\ldots,\beta_n,\beta_1^{\prime},\ldots,\beta_n^{\prime}} : \mathbb{R}_+ \to \mathbb{R}_+$, where $\mathbf{w} \in \mathcal{S}_n$ and $(\beta_1,\ldots,\beta_n,\beta_1^{\prime},\ldots,\beta_n^{\prime}) \in \mathbb{R}_+^{2n}$ are assumed to be fixed. The summation in the expression of $\phi$ is non-empty in view of the assumptions in the Theorem. As $\phi$ and $f$ are convex, we denote by $\phi_+^{\prime}$ and $f_+^{\prime}$ their respective right derivatives. It suffices to show the existence of $a_* > 0$ such that for all $a \in \mathbb{R}_+$, $$\begin{aligned} \label{eq:alpha*} \phi_+^{\prime}(a) \begin{cases} < 0, \quad \textrm{ if } \quad a < a_*, \\ > 0, \quad \textrm{ if } \quad a > a_*. \end{cases}\end{aligned}$$ Now, we compute that for all $a \in \mathbb{R}_+$, $$\begin{aligned} \phi_+^{\prime}(a) = \sum_{\{i;~\beta_i>0 ~\textrm{and}~\beta_i^{\prime}>0\}} w_i f^{\prime}_+\left(\dfrac{a}{\beta_i}\right) + w_i f^{\prime}_+\left(\dfrac{a}{\beta_i^{\prime}}\right) + \sum_{\{i;~\beta_i>0 ~\textrm{and}~\beta_i^{\prime}=0\}} w_i f^{\prime}_+\left(\dfrac{a}{\beta_i}\right) + \sum_{\{i;~\beta_i=0 ~\textrm{and}~\beta_i^{\prime}>0\}} w_i f^{\prime}_+\left(\dfrac{a}{\beta_i^{\prime}}\right).\end{aligned}$$ Since $f^{\prime}(1) = 0$ and $f$ is strictly convex, we have $\phi_+^{\prime}(a) < 0$ for sufficiently small $a > 0$, $\phi_+^{\prime}(a) > 0$ for sufficiently large $a > 0$, and $\phi_+^{\prime}$ is increasing; we thus conclude that there exists a unique $a_* > 0$ such that [\[eq:alpha\*\]](#eq:alpha*){reference-type="eqref" reference="eq:alpha*"} is satisfied. If we replace $f$ by $f^*$ in the above proof and note that $f^*$ is also a strictly convex function with $f^*(1) = f^{*\prime}(1) = 0$, the existence and uniqueness of $M^{f^*}_n$ follow. ## Proof of Theorem [Theorem 2](#thm:examplecentroid){reference-type="ref" reference="thm:examplecentroid"} {#subsubsec:examplecentroid} We shall only prove [\[eq:cephellinger\]](#eq:cephellinger){reference-type="eqref" reference="eq:cephellinger"} as the rest follows exactly the same computation procedure with different choices of $f$. 
Pick an arbitrary total ordering on $\mathcal{X}$ with strict inequality being denoted by $\prec$. For $i = 1,\ldots,n$, we also write $$\begin{aligned} a &= a(x,y) = \pi(x)M(x,y), \quad a^{\prime} = a^{\prime}(y,x) = \pi(y)M(y,x), \\ \beta_i &= \beta_i(x,y) = \pi(x) L_i(x,y), \quad \beta_i^{\prime} = \beta_i^{\prime}(y,x) = \pi(y)L_i(y,x).\end{aligned}$$ The $\pi$-reversibility of $M$ yields $a = a^{\prime}$, which leads to $$\begin{aligned} \sum_{i=1}^n w_i D_f(M || L_i) &= \sum_{i=1}^n \sum_{x \prec y} w_i\left(\pi(x) L_i(x,y) f\left(\dfrac{M(x,y)}{L_i(x,y)}\right) + \pi(y) L_i(y,x) f\left(\dfrac{M(y,x)}{L_i(y,x)}\right)\right) \\ &= \sum_{i=1}^n \sum_{x \prec y} w_i\left(a - 2 \sqrt{a \beta_i} + \beta_i + a^{\prime} - 2 \sqrt{a^{\prime} \beta_i^{\prime}} + \beta_i^{\prime}\right) \\ &= \sum_{x \prec y} \sum_{i=1}^n w_i\left(2a - 2 \sqrt{a \beta_i} - 2 \sqrt{a \beta_i^{\prime}} + \beta_i + \beta_i^{\prime}\right).\end{aligned}$$ Next, we aim to minimize each term in the sum, leading us to minimize the strictly convex function of $a$: $$a \mapsto \sum_{i=1}^n w_i \left(2a - 2 \sqrt{a \beta_i} - 2 \sqrt{a \beta_i^{\prime}}\right).$$ Differentiating the above expression with respect to $a$, setting the derivative to zero, and using $\sum_{i=1}^n w_i = 1$ gives $\sqrt{a} = \frac{1}{2}\sum_{i=1}^n w_i\left(\sqrt{\beta_i} + \sqrt{\beta_i^{\prime}}\right)$; recalling that $a = \pi(x)M(x,y)$ (and the corresponding expression for $M^{f}(L_i,\pi)$ obtained from the case $n=1$), this yields $$\begin{aligned} M^{f}_n(x,y) =\left(\sum_{i=1}^n w_i \sqrt{M^{f}(L_i,\pi)(x,y)}\right)^2.\end{aligned}$$ ## Proof of Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"} {#subsec:pfmainpure} We first prove items [\[it:sd\]](#it:sd){reference-type="eqref" reference="it:sd"} and [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"}. To see that strong duality holds for [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} and that the dual optimum is attained, we shall show that Slater's constraint qualification ([@Beck2017 Theorem $A.1$] or [@Boyd2004 Section $5.2.3$]) is verified, which amounts to proving that the constraints in [\[problem:minimax2\]](#problem:minimax2){reference-type="eqref" reference="problem:minimax2"} are strictly feasible. We take $$\begin{aligned} M &= M^f(L_1,\pi), \\ r &= \max_{i \in \llbracket n \rrbracket} D_f(M||L_i) + 1 > D_f(M||L_l),\end{aligned}$$ for all $l \in \llbracket n \rrbracket$, and hence the pair $(M,r)$ is strictly feasible. This proves the first part of item [\[it:sd\]](#it:sd){reference-type="eqref" reference="it:sd"}. As strong duality holds, using the optimality conditions under strong duality [@Beck2017 Theorem $A.2$], we obtain item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"}. In particular, the complementary slackness conditions entail that for all $i \in \llbracket n \rrbracket$, $$w_i (D_f(M||L_i) - r) = 0,$$ which is equivalent to $$\begin{aligned} D_f(M||L_i) \begin{cases} = r & \text{ if } w_i > 0,\\ \leqslant r & \text{ if } w_i = 0. \end{cases}\end{aligned}$$ We now show the second part of item [\[it:sd\]](#it:sd){reference-type="eqref" reference="it:sd"}. By the first part and item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"}, we have $$\overline{v}^f = \sum_{i=1}^n w^f_i D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i) = D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_l),$$ where $l \in \llbracket n \rrbracket$ is such that $w^f_l > 0$. Next, we prove item [\[it:cl\]](#it:cl){reference-type="eqref" reference="it:cl"}. 
As the Lagrangian dual is always concave [@Boyd2004 Section $5.1.2$], in our case this means that the mapping $$\mathbf{w} \mapsto \min_{r \geqslant 0, M \in \mathcal{L}(\pi)} \mathsf{L}(r,M,\mathbf{w}) = \sum_{i=1}^n w_i D_f(M^f_n||L_i)$$ is concave. We proceed to prove item [\[it:uniquecc\]](#it:uniquecc){reference-type="eqref" reference="it:uniquecc"}. Using item [\[it:sd\]](#it:sd){reference-type="eqref" reference="it:sd"}, we see that the pair $(M^{f}_n(\mathbf{w}_1,\{L_i\}_{i=1}^n,\pi),\overline{v}^f)$ minimizes the primal problem and $\mathbf{w}_2 = (w_{2,i})_{i=1}^n$ maximizes the dual problem, and hence by item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"}, this gives $$M^{f}_n(\mathbf{w}_1,\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \mathsf{L}(r,M,\mathbf{w}_2) = \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} \sum_{i=1}^n w_{2,i} D_f(M||L_i).$$ As $f$ is strictly convex, the minimization problem on the right hand side admits a unique minimizer $M^{f}_n(\mathbf{w}_2,\{L_i\}_{i=1}^n,\pi)$ (Theorem [Theorem 1](#thm:existuniquecentroid){reference-type="ref" reference="thm:existuniquecentroid"}), and hence $M^{f}_n(\mathbf{w}_1,\{L_i\}_{i=1}^n,\pi) = M^{f}_n(\mathbf{w}_2,\{L_i\}_{i=1}^n,\pi)$. Next, we prove item [\[it:vfequals\]](#it:vfequals){reference-type="eqref" reference="it:vfequals"}. We first show the sufficiency. Using item [\[it:ccic\]](#it:ccic){reference-type="eqref" reference="it:ccic"}, we note that $$\overline{v}^f = D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_j) = \min_{M \in \mathcal{L}(\pi)} D_f(M||L_j) \leqslant\max_{l \in \llbracket n \rrbracket} \min_{M \in \mathcal{L}(\pi)} D_f(M||L_l) = \underline{v}^f.$$ This gives $\overline{v}^f = \underline{v}^f$ since $\overline{v}^f \geqslant\underline{v}^f$. To show the necessity, let $$j = \mathop{\mathrm{arg\,max}}_{l \in \llbracket n \rrbracket} \min_{M \in \mathcal{L}(\pi)} D_f(M||L_l),$$ and take $\mathbf{w}^f$ to be the standard unit vector in the $j$-th coordinate, that is, $w^f_j = 1$ and $0$ otherwise. Since $\overline{v}^f = \underline{v}^f$, this choice of $\mathbf{w}^f$ maximizes the dual problem [\[problem:minimax2dual\]](#problem:minimax2dual){reference-type="eqref" reference="problem:minimax2dual"} and $$\begin{aligned} M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi) \in \mathop{\mathrm{arg\,min}}_{M \in \mathcal{L}(\pi)} D_f(M||L_j)\end{aligned}$$ clearly holds. Finally, we prove item [\[it:saddlept\]](#it:saddlept){reference-type="eqref" reference="it:saddlept"}. As the saddle point property holds, we have $$\overline{v}^f = \max_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i).$$ The stated pair in the theorem statement is thus a saddle point. ## Proof of Proposition [Proposition 1](#prop:reducefinite){reference-type="ref" reference="prop:reducefinite"} {#subsubsec:pfreducefinite} First, as $D_f(\cdot||\cdot)$ is jointly convex in its arguments and the maximum of a convex function over a convex hull is attained at an extreme point, we see that $$\sup_{L \in \mathrm{conv}((L_i)_{i=1}^n)} D_f(M||L) = \max_{i \in \llbracket n \rrbracket} D_f(M||L_i).$$ Taking the infimum over $M \in \mathcal{L}(\pi)$ on both sides gives the desired result for $\overline{v}^f$. For $\underline{v}^f$, we note that the mapping $$L \mapsto \inf_{M \in \mathcal{L}(\pi)} D_f(M||L)$$ is convex, as $D_f(\cdot||\cdot)$ is jointly convex and partial minimization of a convex function preserves convexity [@Boyd2004 Section $3.2.5$]. 
Using again the property that the maximum of a convex function over a convex hull is attained at an extreme point, we have $$\underline{v}^f(\mathrm{conv}((L_i)_{i=1}^n),\pi) = \sup_{L \in \mathrm{conv}((L_i)_{i=1}^n)} \inf_{M \in \mathcal{L}(\pi)} D_f(M||L) = \max_{i \in \llbracket n \rrbracket} \inf_{M \in \mathcal{L}(\pi)} D_f(M||L_i) = \underline{v}^f(\{L_i\}_{i=1}^n,\pi).$$ ## Proof of Corollary [Corollary 2](#cor:pureNash){reference-type="ref" reference="cor:pureNash"} {#subsubsec:pfpureNash} The first item is simply a restatement of the saddle point result in Corollary [Corollary 1](#cor:convexhull){reference-type="ref" reference="cor:convexhull"} in the context of the two-person game and Nash equilibrium. For the second item, as the index $l$ is unique and $f$ is strictly convex, $M^f(L_l,\pi)$ is unique, and we thus have a unique saddle point and hence a unique Nash equilibrium. For the other direction, suppose that $l_1, l_2 \in \mathop{\mathrm{arg\,max}}_{i \in \llbracket n \rrbracket} D_f(M^f(L_i,\pi),L_i)$; then both $(M^f(L_{l_1},\pi),L_{l_1})$ and $(M^f(L_{l_2},\pi),L_{l_2})$ are saddle points and hence Nash equilibria. ## Proof of Proposition [Proposition 2](#prop:minimaxeqmixed){reference-type="ref" reference="prop:minimaxeqmixed"} {#subsubsec:pfminimaxineq} We first prove the inequalities in [\[eq:minimaxeqmixed1\]](#eq:minimaxeqmixed1){reference-type="eqref" reference="eq:minimaxeqmixed1"}. The inequality $\overline{V}^f \geqslant\underline{V}^f$ is obvious. To see the first inequality, we note that $$\begin{aligned} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL) \leqslant\sup_{L \in \mathcal{B}} D_f(M||L).\end{aligned}$$ The desired result follows by taking $\sup_{\mu \in \mathcal{P}(\mathcal{B})}$ and then $\inf_{M \in \mathcal{L}(\pi)}$. Next, we prove the inequalities in [\[eq:minimaxeqmixed2\]](#eq:minimaxeqmixed2){reference-type="eqref" reference="eq:minimaxeqmixed2"}, and it only remains to show the rightmost inequality. We observe that $$\begin{aligned} \underline{V}^f(\mathcal{B},\pi) &= \sup_{\mu \in \mathcal{P}(\mathcal{B})} \inf_{M \in \mathcal{L}(\pi)} \int_{\mathcal{B}} D_f(M||L)\, \mu(dL) \\ &\geqslant\max_{\mathbf{w} \in \mathcal{S}_n} \inf_{M \in \mathcal{L}(\pi)} \sum_{i=1}^n w_i D_f(M||L_i) \\ &= \max_{\mathbf{w} \in \mathcal{S}_n} \sum_{i=1}^n w_i D_f(M^f_n||L_i),\end{aligned}$$ which completes the proof. ## Proof of Theorem [Theorem 5](#thm:subgh){reference-type="ref" reference="thm:subgh"} {#subsubsec:pfsubgh} First, by the definition of the weighted information centroid, we have $$\sum_{i=1}^n w_i D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_i) \leqslant\sum_{i=1}^n w_i D_f(M^{f}_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_i).$$ Multiplying both sides by $-1$ and then subtracting $h(\mathbf{v})$, we obtain $$\begin{aligned} h(\mathbf{w}) - h(\mathbf{v}) &= - \sum_{i=1}^n w_i D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_i) + \sum_{i=1}^n v_i D_f(M^{f}_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_i) \\ &\geqslant\sum_{i=1}^n -D_f(M^{f}_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_i)(w_i-v_i) \\ &= \sum_{i=1}^n -D_f(M^{f}_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_i)(w_i-v_i) + \sum_{i=1}^n D_f(M^{f}_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_n)(w_i-v_i) \\ &= \sum_{i=1}^n g_i (w_i - v_i),\end{aligned}$$ where the second equality follows from $\mathbf{w}, \mathbf{v} \in \mathcal{S}_n$. Next, we proceed to prove [\[eq:subgnormbd\]](#eq:subgnormbd){reference-type="eqref" reference="eq:subgnormbd"}. 
First, we note that for any $i \in \llbracket n \rrbracket$, we have $$\begin{aligned} D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_i) &= \sum_{x \in \mathcal{X}} \pi(x) \sum_{y \in \mathcal{X}\backslash\{x\}} L_i(x,y) f\left(\dfrac{M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)(x,y)}{L_i(x,y)}\right) \\ &\leqslant|\mathcal{X}| \sup_{\mathbf{v} \in \mathcal{S}_n;~i \in \llbracket n \rrbracket;~L_i(x,y)>0} L_i(x,y) f\left(\dfrac{M^f_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)(x,y)}{L_i(x,y)}\right),\end{aligned}$$ and hence $$\begin{aligned} \left\lVert\mathbf{g}\right\rVert_2^2 = \sum_{i=1}^n g_i^2 &\leqslant\sum_{i=1}^n \max\bigg\{D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_n)^2, D_f(M^{f}_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)||L_i)^2\bigg\} \\ &\leqslant n \max_{l \in \llbracket n \rrbracket} D_f(M^f_n(\mathbf{v},\{L_j\}_{j=1}^n,\pi)||L_l)^2 \leqslant B.\end{aligned}$$ ## Proof of Theorem [Theorem 6](#thm:subgconv){reference-type="ref" reference="thm:subgconv"} {#subsubsec:pfsubgconv} We first prove [\[eq:subgconvbd\]](#eq:subgconvbd){reference-type="eqref" reference="eq:subgconvbd"}. For $i \in \llbracket t \rrbracket$, we have $$\begin{aligned} \left\lVert\mathbf{w}^{(i+1)} - \mathbf{w}^f\right\rVert_2^2 &\leqslant\left\lVert\mathbf{v}^{(i+1)} - \mathbf{w}^f\right\rVert_2^2\\ &= \left\lVert\mathbf{w}^{(i)} - \mathbf{w}^f - \eta \mathbf{g}(\mathbf{w}^{(i)})\right\rVert_2^2 \\ &= \left\lVert\mathbf{w}^{(i)} - \mathbf{w}^f\right\rVert_2^2 + \eta^2 \left\lVert\mathbf{g}(\mathbf{w}^{(i)})\right\rVert_2^2 - 2 \eta \mathbf{g}(\mathbf{w}^{(i)}) \cdot (\mathbf{w}^{(i)} - \mathbf{w}^f) \\ &\leqslant\left\lVert\mathbf{w}^{(i)} - \mathbf{w}^f\right\rVert_2^2 + \eta^2 B - 2 \eta \mathbf{g}(\mathbf{w}^{(i)}) \cdot (\mathbf{w}^{(i)} - \mathbf{w}^f),\end{aligned}$$ where the first inequality follows from the fact that $\mathbf{w}^{(i+1)}$ is the projection of $\mathbf{v}^{(i+1)}$ onto the simplex $\mathcal{S}_n$, while the second inequality follows from the definition of $B$ in [\[eq:subgnormbd\]](#eq:subgnormbd){reference-type="eqref" reference="eq:subgnormbd"}. Upon rearranging and using the definition of subgradient, this leads to $$\begin{aligned} h(\mathbf{w}^{(i)}) - h(\mathbf{w}^f) &\leqslant\mathbf{g}(\mathbf{w}^{(i)}) \cdot (\mathbf{w}^{(i)} - \mathbf{w}^f)\\ &\leqslant\dfrac{1}{2 \eta} \left(\left\lVert\mathbf{w}^{(i)} - \mathbf{w}^f\right\rVert_2^2 - \left\lVert\mathbf{w}^{(i+1)} - \mathbf{w}^f\right\rVert_2^2\right)+\dfrac{\eta B}{2}.\end{aligned}$$ Now, we sum over $i$ from $1$ to $t$ to give $$\begin{aligned} \sum_{i=1}^t \left(h(\mathbf{w}^{(i)}) - h(\mathbf{w}^f)\right) &\leqslant\dfrac{1}{2 \eta} \left(\left\lVert\mathbf{w}^{(1)} - \mathbf{w}^f\right\rVert_2^2 - \left\lVert\mathbf{w}^{(t+1)} - \mathbf{w}^f\right\rVert_2^2\right)+\dfrac{\eta B t}{2} \\ &\leqslant\dfrac{1}{2 \eta} \left\lVert\mathbf{w}^{(1)} - \mathbf{w}^f\right\rVert_2^2 +\dfrac{\eta B t}{2} \leqslant\dfrac{1}{2 \eta} n +\dfrac{\eta B t}{2},\end{aligned}$$ where in the last inequality we use the fact that $\mathbf{w}^{(1)},\mathbf{w}^f \in \mathcal{S}_n$. Dividing both sides by $t$ and using the convexity of $h$ (see [\[def:h\]](#def:h){reference-type="eqref" reference="def:h"}) yields $$\begin{aligned} h\left(\dfrac{1}{t}\sum_{i=1}^t \mathbf{w}^{(i)}\right) - h\left(\mathbf{w}^{f}\right) \leqslant\dfrac{1}{t} \sum_{i=1}^t \left(h(\mathbf{w}^{(i)}) - h(\mathbf{w}^f)\right) \leqslant\dfrac{n}{2\eta t} + \dfrac{\eta B}{2}.\end{aligned}$$ The right hand side, as a function of the stepsize $\eta$, is minimized when we take $\eta = \sqrt{\dfrac{n}{tB}}$. 
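To make the two steps of Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"} concrete, the following Python sketch (an illustration only, not part of the formal development; all function names are ours) combines a standard sorting-based Euclidean projection onto $\mathcal{S}_n$ with the subgradient step and the constant stepsize $\eta = \sqrt{n/(tB)}$ from Theorem [Theorem 6](#thm:subgconv){reference-type="ref" reference="thm:subgconv"}. The oracle `subgrad` is assumed to return $\mathbf{g}(\mathbf{w})$ as in [\[eq:subgi\]](#eq:subgi){reference-type="eqref" reference="eq:subgi"}; as noted above, this presupposes that the centroid $M^f_n(\mathbf{w},\{L_j\}_{j=1}^n,\pi)$ and the divergences $D_f(\cdot||L_i)$ are computable, e.g. in the settings of Theorem [Theorem 2](#thm:examplecentroid){reference-type="ref" reference="thm:examplecentroid"}.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (standard sorting-based method; efficient alternatives exist,
    see the projection algorithms surveyed by Condat)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_subgradient(subgrad, n, t, B, w0=None):
    """Runs t iterations of
        v^(i) = w^(i-1) - eta * g(w^(i-1)),   w^(i) = Proj_{S_n}(v^(i)),
    with constant stepsize eta = sqrt(n / (t * B)), and returns the averaged
    iterate bar(w)^t together with the sequence (w^(i))_{i=1..t}."""
    eta = np.sqrt(n / (t * B))
    w = np.full(n, 1.0 / n) if w0 is None else np.asarray(w0, dtype=float)
    iterates = []
    for _ in range(t):
        w = project_simplex(w - eta * subgrad(w))  # subgradient step + projection
        iterates.append(w)
    return np.mean(iterates, axis=0), iterates
```

In this sketch the initial weight vector $\mathbf{w}^{(0)}$ is taken to be uniform by default; this is a choice on our part and is not prescribed by Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}.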
## Proof of Theorem [Theorem 7](#thm:centroidconvalgo){reference-type="ref" reference="thm:centroidconvalgo"} {#subsubsec:pfcentroidconvalgo} We first state the following lemma in the setting of this Theorem, which is of independent interest. It can be considered as an extension of [@C72 equation $(3.25)$] to finite Markov chains: **Lemma 1**. *There exists a constant $\infty > C = C(f,\{L_i\}_{i=1}^n,\pi,n,\mathcal{X}) > 0$ such that $$\begin{aligned} D_{\mathrm{TV}}&(M^f_n(\overline{\mathbf{w}}^t,\{L_i\}_{i=1}^n,\pi)||M^f_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)) \\ &\leqslant C \left(\sum_{i=1}^n w^f_i D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i) - \sum_{i=1}^n \overline{w}^t_i D_f(M^{f}_n(\overline{\mathbf{w}}^t,\{L_i\}_{i=1}^n,\pi)||L_i)\right) \\ &= C \left(h\left(\overline{\mathbf{w}}^t\right) - h\left(\mathbf{w}^{f}\right)\right). \end{aligned}$$* Once we have Lemma [Lemma 1](#lem:important){reference-type="ref" reference="lem:important"}, the desired result is obtained, since from Theorem [Theorem 6](#thm:subgconv){reference-type="ref" reference="thm:subgconv"}, with the same choice of stepsize in Algorithm [\[algo:subg\]](#algo:subg){reference-type="ref" reference="algo:subg"}, we know that $$D_{\mathrm{TV}}(M^f_n(\overline{\mathbf{w}}^t,\{L_i\}_{i=1}^n,\pi)||M^f_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)) \leqslant C \left(h\left(\overline{\mathbf{w}}^t\right) - h\left(\mathbf{w}^{f}\right)\right) = \mathcal{O}\left(\dfrac{1}{\sqrt{t}}\right).$$ It thus remains to prove Lemma [Lemma 1](#lem:important){reference-type="ref" reference="lem:important"}. In the remainder of this proof, to minimize notational overhead we write $$M^f_n(\overline{\mathbf{w}}^t) = M^f_n(\overline{\mathbf{w}}^t,\{L_i\}_{i=1}^n,\pi), \quad M^f_n(\mathbf{w}^f) = M^f_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi).$$ Using the strict convexity of $f$ and [\[eq:Liclass\]](#eq:Liclass){reference-type="eqref" reference="eq:Liclass"}, according to [@C72 equation $(3.25)$] we have that $$\begin{aligned} D_{\mathrm{TV}}&(M^f_n(\overline{\mathbf{w}}^t)||M^f_n(\mathbf{w}^f)) \\ &\leqslant C \left(\sum_{i=1}^n \overline{w}^t_i D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i) - \sum_{i=1}^n \overline{w}^t_i D_f(M^{f}_n(\overline{\mathbf{w}}^t,\{L_i\}_{i=1}^n,\pi)||L_i)\right) \\ &\leqslant C \left(\max_{i \in \llbracket n \rrbracket} D_f(M^{f}_n(\mathbf{w}^f,\{L_i\}_{i=1}^n,\pi)||L_i) + h\left(\overline{\mathbf{w}}^t\right)\right) \\ &= C \left(- h\left(\mathbf{w}^{f}\right) + h\left(\overline{\mathbf{w}}^t\right)\right),\end{aligned}$$ where the last equality follows from the complementary slackness in Theorem [Theorem 3](#thm:mainpure){reference-type="ref" reference="thm:mainpure"}. # Acknowledgements {#acknowledgements .unnumbered} Michael Choi would like to thank Laurent Miclo for a question, raised when he visited the Toulouse School of Economics, on possible game-theoretic interpretations of $P_{-\infty}$ and $P_{\infty}$, which leaves him much to ponder on the topics touched upon by this manuscript. For a possible answer to this question, the reader is referred to Example [Example 4](#ex:1mix){reference-type="ref" reference="ex:1mix"}. Michael Choi acknowledges the financial support from the startup grant of the National University of Singapore and the Yale-NUS College, and a Ministry of Education Tier 1 Grant under the Data for Science and Science for Data collaborative scheme. 
Geoffrey Wolfer is supported by the Special Postdoctoral Researcher Program (SPDR) of RIKEN and by the Japan Society for the Promotion of Science KAKENHI under Grant 23K13024.
--- abstract: | We consider some local entropy properties of dynamical systems under the assumption of shadowing. In the first part, we give necessary and sufficient conditions for shadowable points to be certain entropy points. In the second part, we give some necessary and sufficient conditions for (non) h-expansiveness under the assumption of shadowing and chain transitivity; and use the result to present a counter-example for a question raised by Artigue et al. \[Proc. Amer. Math. Soc. 150 (2022), 3369--3378\]. address: Research Institute of Science and Technology, Tokai University, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 259-1292, Japan author: - Noriaki Kawaguchi title: Some results on shadowing and local entropy properties of dynamical systems --- # Introduction *Shadowing*, introduced by Anosov and Bowen [@A; @B2], is a feature of hyperbolic dynamical systems and has played an important role in the global theory of dynamical systems (see [@AH] or [@P] for background). It generally refers to a phenomenon in which coarse orbits, or *pseudo-orbits*, are approximated by true orbits. In [@M], by splitting the global shadowing into pointwise shadowings, Morales introduced the notion of *shadowable points*, which gives a tool for a local description of the shadowing phenomena. The study of shadowable points has been extended to shadowable points for flows [@AV]; pointwise stability and persistence [@DLM; @JLM; @KD2; @KDD; @KLM]; shadowable measures [@S]; average shadowable and specification points [@DKD; @KD1]; eventually shadowable points [@DJM]; shadowable points of set-valued dynamical systems [@LNY]; and so on. In [@K2], some sufficient conditions are given for a shadowable point to be an entropy point. Recall that the notion of *entropy points* is obtained by a concentration of positive topological entropy at a point [@YZ]. Also, in [@AR], the notion of shadowable points is applied to obtain pointwise sufficient conditions for positive topological entropy (see also [@RA]). In the first part of this paper, we improve the result of [@K2] by giving necessary and sufficient conditions for shadowable points to be certain entropy points. The *h-expansiveness* is another local entropy property of dynamical systems [@B1]. In the second part of this paper, we present several necessary and sufficient conditions for (non) h-expansiveness under the assumption of shadowing and chain transitivity; and use the result to obtain a counter-example for a question in [@ACCV2]. We begin with a definition. Throughout, $X$ denotes a compact metric space endowed with a metric $d$. **Definition 1**. *Given a continuous map $f\colon X\to X$ and $\delta>0$, a finite sequence $(x_i)_{i=0}^{k}$ of points in $X$, where $k>0$ is a positive integer, is called a *$\delta$-chain* of $f$ if $d(f(x_i),x_{i+1})\le\delta$ for every $0\le i\le k-1$. A $\delta$-chain $(x_i)_{i=0}^{k}$ of $f$ with $x_0=x_k$ is said to be a *$\delta$-cycle* of $f$.* Let $f\colon X\to X$ be a continuous map. For any $x,y\in X$ and $\delta>0$, the notation $x\rightarrow_\delta y$ means that there is a $\delta$-chain $(x_i)_{i=0}^k$ of $f$ with $x_0=x$ and $x_k=y$. We write $x\rightarrow y$ if $x\rightarrow_\delta y$ for all $\delta>0$. We say that $x\in X$ is a *chain recurrent point* for $f$ if $x\rightarrow x$, or equivalently, for any $\delta>0$, there is a $\delta$-cycle $(x_i)_{i=0}^{k}$ of $f$ with $x_0=x_k=x$. Let $CR(f)$ denote the set of chain recurrent points for $f$. 
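When $X$ is a finite set, the relation $\rightarrow_\delta$ and the resulting chain recurrent points can be computed directly from Definition 1. The following small Python sketch (a toy illustration only; it is not used anywhere in the arguments of this paper, and the function names are ours) builds the directed graph whose edges are the admissible consecutive steps of a $\delta$-chain and reads off the points lying on a $\delta$-cycle, here for a one-step rotation on six points of a discrete circle.

```python
def delta_chain_graph(points, f, delta, d):
    """Edge x -> y whenever d(f(x), y) <= delta, i.e. (x, y) may appear as
    consecutive entries of a delta-chain of f."""
    return {x: [y for y in points if d(f(x), y) <= delta] for x in points}

def delta_recurrent(points, f, delta, d):
    """Points admitting a delta-cycle (x_0, ..., x_k) with x_0 = x_k = x,
    obtained from the transitive closure of the delta-chain graph
    (naive fixed-point iteration, fine for a toy example)."""
    adj = delta_chain_graph(points, f, delta, d)
    reach = {x: set(adj[x]) for x in points}
    changed = True
    while changed:
        changed = False
        for x in points:
            new = set().union(*[reach[y] for y in reach[x]]) if reach[x] else set()
            if not new <= reach[x]:
                reach[x] |= new
                changed = True
    return [x for x in points if x in reach[x]]

# Toy system: rotation by one step on six points of a discrete circle.
pts = list(range(6))
rot = lambda x: (x + 1) % 6
circ = lambda a, b: min(abs(a - b), 6 - abs(a - b))

print(delta_recurrent(pts, rot, 0.0, circ))  # [0, 1, 2, 3, 4, 5]: every point is chain recurrent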
We define a relation $\leftrightarrow$ in $$CR(f)^2=CR(f)\times CR(f)$$ by: for any $x,y\in CR(f)$, $x\leftrightarrow y$ if and only if $x\rightarrow y$ and $y\rightarrow x$. Note that $\leftrightarrow$ is a closed equivalence relation in $CR(f)^2$ and satisfies $x\leftrightarrow f(x)$ for all $x\in CR(f)$. An equivalence class $C$ of $\leftrightarrow$ is called a *chain component* for $f$. We denote by $\mathcal{C}(f)$ the set of chain components for $f$. A subset $S$ of $X$ is said to be $f$-invariant if $f(S)\subset S$. For an $f$-invariant subset $S$ of $X$, we say that $f|_S\colon S\to S$ is *chain transitive* if for any $x,y\in S$ and $\delta>0$, there is a $\delta$-chain $(x_i)_{i=0}^k$ of $f|_S$ with $x_0=x$ and $x_k=y$. **Remark 1**. *The following properties hold* - *$CR(f)=\bigsqcup_{C\in\mathcal{C}(f)}C$,* - *Every $C\in\mathcal{C}(f)$ is a closed $f$-invariant subset of $CR(f)$,* - *$f|_C\colon C\to C$ is chain transitive for all $C\in\mathcal{C}(f)$,* - *For any $f$-invariant subset $S$ of $X$, if $f|_S\colon S\to S$ is chain transitive, then $S\subset C$ for some $C\in\mathcal{C}(f)$.* Let $f\colon X\to X$ be a continuous map. For $x\in X$, we define a subset $C(x)$ of $X$ by $$C(x)=\{x\}\cup\{y\in X\colon x\rightarrow y\}.$$ By this definition, we easily see that for any $x\in X$, $C(x)$ is a closed $f$-invariant subset of $X$. We say that a closed $f$-invariant subset $S$ of $X$ is *chain stable* if for any $\epsilon>0$, there is $\delta>0$ for which every $\delta$-chain $(x_i)_{i=0}^k$ of $f$ with $x_0\in S$ satisfies $d(x_i,S)\le\epsilon$ for all $0\le i\le k$. A proof of the following lemma is given in Section 3. **Lemma 1**. *$C(x)$ is chain stable for all $x\in X$.* **Remark 2**. *For any $x\in X$, since $C(x)$ is chain stable, it satisfies the following properties* - *$CR(f|_{C(x)})=C(x)\cap CR(f)$,* - *for every $C\in\mathcal{C}(f)$, $C\subset C(x)$ if and only if $C\cap C(x)\ne\emptyset$,* - *$\mathcal{C}(f|_{C(x)})=\{C\in\mathcal{C}(f)\colon C\subset C(x)\}$.* Let $f\colon X\to X$ be a continuous map and let $\xi=(x_i)_{i\ge0}$ be a sequence of points in $X$. For $\delta>0$, $\xi$ is called a *$\delta$-pseudo orbit* of $f$ if $d(f(x_i),x_{i+1})\le\delta$ for all $i\ge0$. For $\epsilon>0$, $\xi$ is said to be *$\epsilon$-shadowed* by $x\in X$ if $d(f^i(x),x_i)\leq \epsilon$ for all $i\ge 0$. **Definition 2**. *Given a continuous map $f\colon X\to X$, $x\in X$ is called a *shadowable point* for $f$ if for any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_0=x$ is $\epsilon$-shadowed by some $y\in X$. We denote by $Sh(f)$ the set of shadowable points for $f$.* For a continuous map $f\colon X\to X$ and a subset $S$ of $X$, we say that $f$ has the *shadowing on $S$* if for any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_i\in S$ for all $i\ge0$ is $\epsilon$-shadowed by some $y\in X$. We say that $f$ has the *shadowing property* if $f$ has the shadowing on $X$. The next lemma is a basis for the formulation of Theorems 1.1 and 1.2. **Lemma 2**. *For a continuous map $f\colon X\to X$ and $x\in X$, the following conditions are equivalent* - *$x\in Sh(f)$,* - *$C(x)\subset Sh(f)$,* - *$f$ has the shadowing on $C(x)$.* Next, we recall the definition of entropy points from [@YZ]. Let $f\colon X\to X$ be a continuous map. For $n\ge1$, the metric $d_n$ on $X$ is defined by $$d_n(x,y)=\max_{0\le i\le n-1}d(f^i(x),f^i(y))$$ for all $x,y\in X$. 
For $n\ge1$ and $r>0$, a subset $E$ of $X$ is said to be *$(n,r)$-separated* if $d_n(x,y)>r$ for all $x,y\in E$ with $x\ne y$. Let $K$ be a subset of $X$. For $n\ge1$ and $r>0$, let $s_n(f,K,r)$ denote the largest cardinality of an $(n,r)$-separated subset of $K$. We define $h(f,K,r)$ and $h(f,K)$ by $$h(f,K,r)=\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,K,r)}$$ and $$h(f,K)=\lim_{r\to0}h(f,K,r).$$ We also define the topological entropy $h_{\rm top}(f)$ of $f$ by $h_{\rm top}(f)=h(f,X)$. **Definition 3**. *Let $f\colon X\to X$ be a continuous map. For $x\in X$, we denote by $\mathcal{K}(x)$ the set of closed neighborhoods of $x$.* - *$Ent(f)$ is the set of $x\in X$ such that $h(f,K)>0$ for all $K\in\mathcal{K}(x)$,* - *For $r>0$, $Ent_r(f)$ is the set of $x\in X$ such that $h(f,K,r)>0$ for all $K\in\mathcal{K}(x)$,* - *For $r>0$ and $b>0$, $Ent_{r,b}(f)$ is the set of $x\in X$ such that $h(f,K,r)\ge b$ for all $K\in\mathcal{K}(x)$.* **Remark 3**. *The following properties hold* - *$Ent(f)$, $Ent_r(f)$, $r>0$, and $Ent_{r,b}(f)$, $r,b>0$, are closed $f$-invariant subsets of $X$,* - *$$Ent(f)\subset Ent_r(f)\subset Ent_{r,b}(f)$$ for all $r,b>0$,* - *for any closed subset $K$ of $X$ and $r>0$, if $h(f,K,r)>0$, then $K\cap Ent_r(f)\ne\emptyset$,* - *for any closed subset $K$ of $X$ and $r,b>0$, if $h(f,K,r)\ge b$, then $K\cap Ent_{r,b}(f)\ne\emptyset$.* For a continuous map $f\colon X\to X$, we define $\mathcal{C}_o(f)$ as the set of $C\in\mathcal{C}(f)$ such that $C$ is a periodic orbit or an odometer. We also define $\mathcal{C}_{no}(f)$ to be $$\mathcal{C}_{no}(f)=\mathcal{C}(f)\setminus\mathcal{C}_o(f).$$ We refer to Section 2.1 for various matters related to this definition. The first theorem characterizes the shadowable points that are entropy points of a certain type. **Theorem 1**. *Given a continuous map $f\colon X\to X$ and $x\in Sh(f)$, $$x\in\bigcup_{r>0}Ent_r(f)$$ if and only if $\mathcal{C}_{no}(f|_{C(x)})\ne\emptyset$.* The following theorem characterizes the shadowable points that are entropy points of other type. **Theorem 2**. *Given a continuous map $f\colon X\to X$ and $x\in Sh(f)$, $$x\in\bigcup_{r,b>0}Ent_{r,b}(f)$$ if and only if $h_{\rm top}(f|_{C(x)})=h(f,C(x))>0$.* **Remark 4**. *In [@YZ], a point of $$\bigcup_{r,b>0}Ent_{r,b}(f)$$ is called a *uniform entropy point* for $f$.* **Remark 5**. *In Section 3, we give an example of a continuous map $f\colon X\to X$ such that* - *$f$ has the shadowing property and so satisfies $X=Sh(f)$,* - *$$Ent(f)\setminus\bigcup_{r>0}Ent_r(f)$$ is a non-empty set.* Before we state the next theorem, we introduce some definitions. **Definition 4**. *For a continuous map $f\colon X\to X$ and $(x,y)\in X^2$,* - *$(x,y)$ is called a *distal pair* for $f$ if $$\liminf_{i\to\infty}d(f^i(x),f^i(y))>0,$$* - *$(x,y)$ is called a *proximal pair* for $f$ if $$\liminf_{i\to\infty}d(f^i(x),f^i(y))=0,$$* - *$(x,y)$ is called an *asymptotic pair* for $f$ if $$\limsup_{i\to\infty}d(f^i(x),f^i(y))=0.$$* - *$(x,y)$ is called a *scrambled pair* for $f$ if $$\limsup_{i\to\infty}d(f^i(x),f^i(y))>0\:\text{ and }\:\liminf_{i\to\infty}d(f^i(x),f^i(y))=0.$$* For a continuous map $f\colon X\to X$ and $x\in X$, the *$\omega$-limit set* $\omega(x,f)$ of $x$ for $f$ is defined to be the set of $y\in X$ such that $\lim_{j\to\infty}f^{i_j}(x)=y$ for some sequence $0\le i_1<i_2<\cdots$. Note that $\omega(x,f)$ is a closed $f$-invariant subset of $X$ and satisfies $y\rightarrow z$ for all $y,z\in\omega(x,f)$. 
For every $x\in X$, we have $\omega(x,f)\subset C$ for some $C\in\mathcal{C}(f)$ and such $C$ satisfies $C\subset C(x)$. **Remark 6**. *Let $f\colon X\to X$ be a continuous map.* - *For a closed $f$-invariant subset $S$ of $X$ and $e>0$, we say that $x\in S$ is an *$e$-sensitive point* for $f|_S\colon S\to S$ if for any $\epsilon>0$, there is $y\in S$ such that $d(x,y)\le\epsilon$ and $d(f^i(x),f^i(y))>e$ for some $i\ge0$. We define $Sen_e(f|_S)$ to be the set of $e$-sensitive points for $f|_S$ and $$Sen(f|_S)=\bigcup_{e>0}Sen_e(f|_S).$$* - *A closed $f$-invariant subset $M$ of $X$ is said to be a *minimal set* for $f$ if closed $f$-invariant subsets of $M$ are only $\emptyset$ and $M$. This is equivalent to $M=\omega(x,f)$ for all $x\in M$.* In [@K2], the author gave three sufficient conditions for a shadowable point to be an entropy point. The next theorem refines Corollary 1.1 of [@K2]. **Theorem 3**. *Let $f\colon X\to X$ be a continuous map. For any $x\in X$ and $C\in\mathcal{C}(f)$ with $\omega(x,f)\subset C$, if one of the following conditions is satisfied, then $C\in\mathcal{C}_{no}(f|_{C(x)})$.* - *$\omega(x,f)\cap Sen(f|_{CR(f)})\ne\emptyset$,* - *there is $y\in X$ such that $(x,y)\in X^2$ is a scrambled pair for $f$,* - *$\omega(x,f)$ is not a minimal set for $f$.* **Remark 7**. *For a continuous map $f\colon X\to X$, $y\in X$ is called a *minimal point* for $f$ if $y\in\omega(y,f)$ and $\omega(y,f)$ is a minimal set for $f$. Due to Theorem 8.7 of [@F], we know that for any $x\in X$, there is a minimal point $y\in X$ for $f$ such that $(x,y)$ is a proximal pair for $f$. If $\omega(x,f)$ is not a minimal set for $f$, then it follows that $(x,y)$ is a scrambled pair for $f$, thus $(3)$ always implies $(2)$.* We consider another local property of dynamical systems so-called h-expansiveness [@B1]. Let $f\colon X\to X$ be a continuous map. For $x\in X$ and $\epsilon>0$, let $$\Phi_{\epsilon}(x)=\{y\in X\colon d(f^i(x),f^i(y))\le\epsilon\:\:\text{for all $i\ge0$}\}$$ and $$h_f^\ast(\epsilon)=\sup_{x\in X}h(f,\Phi_{\epsilon}(x)).$$ We say that $f$ is *h-expansive* if $h_f^\ast(\epsilon)=0$ for some $\epsilon>0$. The following theorem gives several conditions equivalent to (non) h-expansiveness under the assumption of shadowing and chain transitivity. **Theorem 4**. *Let $f\colon X\to X$ be a continuous map. If $f$ is chain transitive and has the shadowing property, then the following conditions are equivalent* - *$f$ is not h-expansive,* - *for any $\epsilon>0$, there is $r>0$ such that for every $\delta>0$, there is a pair $$((x_i)_{i=0}^k,(y_i)_{i=0}^k)$$ of $\delta$-chains of $f$ with $(x_0,x_k)=(y_0,y_k)$ and $$r\le\max_{0\le i\le k}d(x_i,y_i)\le\epsilon,$$* - *for any $\epsilon>0$, there are $m\ge1$ and a closed $f^m$-invariant subset $Y$ of $X$ such that $$\sup_{i\ge0}d(f^i(x),f^i(y))\le\epsilon$$ for all $x,y\in Y$ and there is a factor map $$\pi\colon(Y,f^m)\to(\{0,1\}^\mathbb{N},\sigma),$$ where $\sigma\colon\{0,1\}^\mathbb{N}\to\{0,1\}^\mathbb{N}$ is the shift map.* - *for any $\epsilon>0$, there is a scrambled pair $(x,y)\in X^2$ for $f$ such that $$\sup_{i\ge0}d(f^i(x),f^i(y))\le\epsilon.$$* In Section 4, we use this theorem to obtain a counter-example for a question in [@ACCV2]. We shall make some definitions to precisely state the properties that are satisfied by the example. **Definition 5**. *Let $f\colon X\to X$ be a continuous map and let $\xi=(x_i)_{i\ge0}$ be a sequence of points in $X$. 
For $\delta>0$, $\xi$ is called a *$\delta$-limit-pseudo orbit* of $f$ if $d(f(x_i),x_{i+1})\le\delta$ for all $i\ge0$, and $$\lim_{i\to\infty}d(f(x_i),x_{i+1})=0.$$ For $\epsilon>0$, $\xi$ is said to be *$\epsilon$-limit shadowed* by $x\in X$ if $d(f^i(x),x_i)\leq \epsilon$ for all $i\ge 0$, and $$\lim_{i\to\infty}d(f^i(x),x_i)=0.$$ We say that $f$ has the *s-limit shadowing property* if for any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-limit-pseudo orbit of $f$ is $\epsilon$-limit shadowed by some point of $X$.* **Remark 8**. *If $f$ has the s-limit shadowing property, then $f$ satisfies the shadowing property.* **Definition 6**. *Let $f\colon X\to X$ be a homeomorphism. For $x\in X$ and $\epsilon>0$, let $$\Gamma_\epsilon(x)=\{y\in X\colon d(f^i(x),f^i(y))\le\epsilon\:\:\text{for all $i\in\mathbb{Z}$}\}.$$ We say that $f$ is* - **expansive* if there is $e>0$ such that $\Gamma_e(x)=\{x\}$ for all $x\in X$,* - **countably-expansive* if there is $e>0$ such that $\Gamma_e(x)$ is a countable set for all $x\in X$,* - **cw-expansive* if there is $e>0$ such that $\Gamma_e(x)$ is totally disconnected for all $x\in X$.* A continuous map $f\colon X\to X$ is said to be *transitive* (resp. *mixing*) if for any non-empty open subsets $U,V$ of $X$, it holds that $f^j(U)\cap V\ne\emptyset$ for some $j>0$ (resp. for all $j\ge i$ for some $i>0$). **Remark 9**. *If $f$ is transitive, then $f$ is chain transitive, and the converse holds when $f$ has the shadowing property.* In Section 4, by using Theorem 1.4, we give an example of a homeomorphism $f\colon X\to X$ (Example 4.1) such that - $X$ is totally disconnected and so $f$ is cw-expansive, - $f$ is mixing, - $f$ is h-expansive, - $f$ has the s-limit shadowing property, - $f$ is not countably-expansive, - $f$ satisfies $X_e=\emptyset$, where $$X_e=\{x\in X\colon\Gamma_\epsilon(x)=\{x\}\:\text{ for some $\epsilon>0$}\}.$$ In [@ACCV1], it is proved that if a homeomorphism $f\colon X\to X$ has the *L-shadowing property*, that is, a kind of two-sided s-limit shadowing property, then $$f|_{CR(f)}\colon CR(f)\to CR(f)$$ is expansive if and only if $f|_{CR(f)}$ is countably-expansive if and only if $f|_{CR(f)}$ is h-expansive (see Corollary C of [@ACCV1]). Example 4.1 shows that even if a homeomorphism $f\colon X\to X$ satisfies the s-limit shadowing property, this equivalence does not hold. The example also gives a negative answer to the following question in [@ACCV2] (see Question 3 of [@ACCV2]): ***Question* 1**. *Is $X_e$ non-empty for every transitive h-expansive and cw-expansive homeomorphism $f\colon X\to X$ satisfying the shadowing property?* This paper consists of four sections. In Section 2, we collect some definitions, notations, and facts that are used in this paper. In Section 3, we prove Lemmas 1.1 and 1.2; prove Theorems 1.1, 1.2, and 1.3; and give an example mentioned in Remark 1.5. In Section 4, we prove Theorem 1.4 and give an example mentioned above (Example 4.1) after proving some auxiliary lemmas. # Preliminaries In this section, we briefly collect some definitions, notations, and facts that are used in this paper. ## *Odometers, equicontinuity, and chain continuity* An *odometer* (also called an *adding machine*) is defined as follows. 
Given an increasing sequence $m=(m_k)_{k\ge1}$ of positive integers such that $m_1\ge1$ and $m_k$ divides $m_{k+1}$ for each $k=1,2,\dots$, we define - $X(k)=\{0,1,\dots,m_k-1\}$ (with the discrete topology), - $$X_m=\{(x_k)_{k\ge1}\in\prod_{k\ge1}X(k)\colon x_k\equiv x_{k+1}\pmod{m_k}\:\text{ for all $k\ge1$}\},$$ - $g_m(x)_k=x_k+1\pmod{m_k}$ for all $x=(x_k)_{k\ge1}\in X_m$ and $k\ge1$. We regard $X_m$ as a subspace of the product space $\prod_{k\ge1}X(k)$. The homeomorphism $$g_m\colon X_m\to X_m$$ (or $(X_m,g_m)$) is called an odometer with the periodic structure $m$. Let $f\colon X\to X$ be a continuous map and let $S$ be a closed $f$-invariant subset of $X$. We say that $f|_S\colon S\to S$ is - *equicontinuous* if for every $\epsilon>0$, there is $\delta>0$ such that any $x,y\in S$ with $d(x,y)\le\delta$ satisfies $$\sup_{i\ge0}d(f^i(x),f^i(y))\le\epsilon,$$ - *chain continuous* if for every $\epsilon>0$, there is $\delta>0$ such that any $\delta$-pseudo orbits $(x_i)_{i\ge0}$ and $(y_i)_{i\ge0}$ of $f$ with $x_0=y_0$ satisfies $$\sup_{i\ge0}d(x_i,y_i)\le\epsilon.$$ Recall that for a continuous map $f\colon X\to X$, $\mathcal{C}_o(f)$ is defined as the set of $C\in\mathcal{C}(f)$ such that $C$ is a periodic orbit or an odometer, that is, $(C,f|_C)$ is topologically conjugate to an odometer. **Lemma 3**. *For a continuous map $f\colon X\to X$, the following conditions are equivalent* - *$\mathcal{C}(f)=\mathcal{C}_o(f)$,* - *$f|_{CR(f)}\colon CR(f)\to CR(f)$ is an equicontinuous homeomorphism and $CR(f)$ is totally disconnected,* - *$f|_{CR(f)}\colon CR(f)\to CR(f)$ is chain continuous.* *Proof.* We prove the implication $(1)\implies (2)$. Since $\mathcal{C}(f)=\mathcal{C}_o(f)$, - every $C\in\mathcal{C}(f)$ is totally disconnected, - $f|_{CR(f)}$ is a distal homeomorphism, that is, every $(x,y)\in CR(f)^2$ is a distal pair for $f|_{CR(f)}$. Since the quotient space $$\mathcal{C}(f)=CR(f)/{\leftrightarrow}$$ is totally disconnected, by (A), we obtain that $CR(f)$ is totally disconnected. By (B) and Corollary 1.9 of [@AGW], we conclude that $f|_{CR(f)}$ is an equicontinuous homeomorphism. For a proof of $(2)\implies(3)$ (resp.$(3)\implies(1)$), we refer to Lemma 3.3 (resp.Section 6) of [@K3]. ◻ By applying Lemma 2.1 to $f|_C\colon C\to C$, $C\in\mathcal{C}(f)$, we obtain the following corollary. **Corollary 1**. *For a continuous map $f\colon X\to X$ and $C\in\mathcal{C}(f)$, the following conditions are equivalent* - *$C\in\mathcal{C}_o(f)$,* - *$f|_C\colon C\to C$ is an equicontinuous homeomorphism and $C$ is totally disconnected,* - *$f|_C\colon C\to C$ is chain continuous.* ## *Factor maps and inverse limit* For two continuous maps $f\colon X\to X$, $g\colon Y\to Y$, where $X$, $Y$ are compact metric spaces, a continuous map $\pi\colon X\to Y$ is said to be a *factor map* if $\pi$ is surjective and satisfies $\pi\circ f=g\circ\pi$. A factor map $\pi\colon X\to Y$ is also denoted as $$\pi\colon(X,f)\to(Y,g).$$ Given an inverse sequence of factor maps $$\pi=(\pi_n\colon(X_{n+1},f_{n+1})\to(X_n,f_n))_{n\ge1},$$ let $$X=\{x=(x_n)_{n\ge1}\in\prod_{n\ge1}X_n\colon\pi_n(x_{n+1})=x_n\:\text{ for all $n\ge1$}\},$$ which is a compact metric space. Then, a continuous map $f\colon X\to X$ is well-defined by $f(x)=(f_n(x_n))_{n\ge1}$ for all $x=(x_n)_{n\ge1}\in X$. We call $$(X,f)=\lim_\pi(X_n,f_n)$$ the *inverse limit system*. It is easy to see that $f$ is transitive (resp.mixing) if and only if $f_n\colon X_n\to X_n$ is transitive (resp.mixing) for all $n\ge1$. 
It is also easy to see that $f$ has the shadowing property if $f_n\colon X_n\to X_n$ has the shadowing property for all $n\ge1$. # Proofs of Theorems 1.1, 1.2, and 1.3 In this section, we prove Lemmas 1.1 and 1.2; prove Theorems 1.1, 1.2, and 1.3; and give an example mentioned in Remark 1.5. First, we prove Lemma 1.1. *Proof of Lemma 1.1.* If $C(x)$ is not chain stable, then there is $r>0$ such that for any $\delta>0$, there is a $\delta$-chain $x^{(\delta)}=(x_i^{(\delta)})_{i=0}^{k_\delta}$ of $f$ with $x_0^{(\delta)}\in C(x)$ and $$d(x_{k_\delta}^{(\delta)}, C(x))\ge r.$$ Then, there are a sequence $0<\delta_1>\delta_2>\cdots$ and $y,z\in X$ such that the following conditions are satisfied - $\lim_{j\to\infty}\delta_j=0$, - $\lim_{j\to\infty}x_0^{(\delta_j)}=y$ and $\lim_{j\to\infty}x_{k_{\delta_j}}^{(\delta_j)}=z$. It follows that $y\in C(x)$, $d(z,C(x))\ge r>0$ and so $z\not\in C(x)$; and $y\rightarrow z$. However, if $y=x$, we obtain $x\rightarrow z$ implying $z\in C(x)$, a contradiction. If $y\ne x$, by $x\rightarrow y$ and $y\rightarrow z$, we obtain $x\rightarrow z$ implying $z\in C(x)$, a contradiction. Thus, the lemma has been proved. ◻ Next, we prove Lemma 1.2. *Proof of Lemma 1.2.* We prove the implication $(1)\implies(2)$. Let $x\in Sh(f)$ and $y\in C(x)\setminus\{x\}$. For any $\epsilon>0$, since $x\in Sh(f)$, there is $\delta>0$ such that every $\delta$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_0=x$ is $\epsilon$-shadowed by some $z\in X$. Since $y\in C(x)\setminus\{x\}$ and so $x\rightarrow y$, we have a $\delta$-chain $\alpha=(y_i)_{i=0}^k$ of $f$ with $y_0=x$ and $y_k=y$. For any $\delta$-pseudo orbit $\beta=(z_i)_{i\ge0}$ of $f$ with $z_0=y$, we consider a $\delta$-pseudo orbit $$\xi=\alpha\beta=(x_i)_{i\ge0}=(y_0,y_1,\dots,y_{k-1},z_0,z_1,z_2,\dots)$$ of $f$. Then, since $x_0=y_0=x$, $\xi$ is $\epsilon$-shadowed by some $z\in X$ and so $\beta$ is $\epsilon$-shadowed by $f^k(z)$. Since $\epsilon>0$ is arbitrary, we obtain $y\in Sh(f)$, thus $(1)\implies(2)$ has been proved. Next, we prove the implication $(2)\implies(3)$. For a closed subset $K$ of $X$, if $K\subset Sh(f)$, then by Lemma 2.4 of [@K1], for any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_0\in K$ is $\epsilon$-shadowed by some $y\in X$. Since $C(x)$ is a closed subset of $X$, this clearly implies that if $C(x)\subset Sh(f)$, then $f$ has the shadowing on $C(x)$. Finally, we prove the implication $(3)\implies(1)$. If $f$ has the shadowing on $C(x)$, then for any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_i\in C(x)$ for all $i\ge0$ is $\epsilon$-shadowed by some $y\in X$. Since $x\in C(x)$ and $C(x)$ is chain stable, if $\gamma>0$ is sufficiently small, then for every $\gamma$-pseudo orbit $\xi=(y_i)_{i\ge0}$ of $f$ with $y_0=x$, by taking $x_i\in C(x)$, $i>0$, with $d(y_i,C(x))=d(y_i,x_i)$ for all $i>0$, we have that - $d(x_i,y_i)\le\epsilon$ for each $i>0$, - $$(x_i)_{i\ge0}=(x,x_1,x_2,x_3,\dots)$$ is a $\delta$-pseudo orbit of $f$ with $x_i\in C(x)$ for all $i\ge0$ and so is $\epsilon$-shadowed by some $y\in X$. It follows that $\xi$ is $2\epsilon$-shadowed by $y$. Since $\epsilon>0$ is arbitrary, we obtain $x\in Sh(f)$, thus $(3)\implies(1)$ has been proved. This completes the proof of Lemma 1.2. ◻ We give a proof of Theorem 1.1. *Proof of Theorem 1.1.* First, we prove the "if\" part. Let $C\in\mathcal{C}_{no}(f|_{C(x)})$. 
Due to Corollary 2.1, since $f|_C\colon C\to C$ is not chain continuous, there are $p\in C$ and $e>0$ such that for any $\delta>0$, there are $\delta$-chains $(x_i)_{i=0}^k$ and $(y_i)_{i=0}^k$ of $f|_C$ with $x_0=y_0=p$ and $d(x_k,y_k)>e$. Fix $0<r<e$ and take any $\epsilon>0$ with $r+2\epsilon<e$. Since $x\in Sh(f)$, there is $\delta_0>0$ such that every $\delta_0$-pseudo orbit $(x_i)_{i\ge0}$ of $f$ with $x_0=x$ is $\epsilon$-shadowed by some $y\in X$. We fix a pair $$((x_i)_{i=0}^K,(y_i)_{i=0}^K)$$ of $\delta_0$-chains $f|_C$ with $x_0=y_0=p$ and $d(x_K,y_K)>e$. Since $C\subset C(x)$, we have $x\rightarrow q$ for some $q\in C$. We also fix a $\delta_0$-chain $\alpha=(z_i)_{i=0}^L$ of $f$ with $z_0=x$ and $z_L=q$. Since $f|_C$ is chain transitive, by compactness of $C$, there is $M>0$ such that for any $w\in C$, there is a $\delta_0$-chain $(w_i)_{i=0}^m$ of $f|_C$ with $w_0=w$, $w_m=p$, and $m\le M$. It follows that for any $w\in C$, there is a pair $$(a^w,b^w)=((a_i^w)_{i=0}^{k_w},(b_i^w)_{i=0}^{k_w})$$ of $\delta_0$-chains of $f|_C$ with $a_0^w=b_0^w=w$, $d(a_{k_w}^w,b_{k_w}^w)>e$, and $k_w\le K+M$. Given any $N\ge1$ and $s=(s_i)_{i=1}^N\in\{a,b\}^N$, we inductively define a family of $\delta_0$-chains $$\alpha(s,n)=(c(s,n)_i)_{i=0}^{k(s,n)}$$ of $f|_C$, $1\le n\le N$, by $\alpha(s,1)=s_1^q$ and $\alpha(s,n+1)=s_{n+1}^{c(s,n)_{k(s,n)}}$ for any $1\le n\le N-1$. Then, we consider a family of $\delta_0$-chains $$\alpha(s)=(c(s)_i)_{i=0}^{k(s)}=\alpha\alpha(s,1)\alpha(s,2)\cdots\alpha(s,N)$$ of $f$, $s\in\{a,b\}^N$. Note that $c(s)_0=x$ and $k(s)\le L+N(K+M)$ for all $s\in\{a,b\}^N$; and for any $s,t\in\{a,b\}^N$ with $s\ne t$, we have $d(c(s)_i,c(t)_i)>e$ for some $$0\le i \le\min\{k(s),k(t)\}\le L+N(K+M).$$ By the choice of $\delta_0$, for every $s\in\{a,b\}^N$, there is $x(s)\in X$ such that $d(f^i(x(s)),c(s)_i)\le\epsilon$ for all $0\le i\le k(s)$. It follows that $$\{x(s)\colon s\in\{a,b\}^N\}$$ is an $(L+N(K+M),r)$-separated subset of $B_\epsilon(x)=\{y\in X\colon d(x,y)\le\epsilon\}$. Since $N\ge1$ is arbitrary, we obtain $$\begin{aligned} h(f,B_\epsilon(x),r)&=\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,B_\epsilon(x),r)}\\ &\ge\limsup_{N\to\infty}\frac{1}{L+N(K+M)}\log{s_{L+N(K+M)}(f,B_\epsilon(x),r)}\\ &\ge\limsup_{N\to\infty}\frac{1}{L+N(K+M)}\log{2^N}\\ &=\frac{1}{K+M}\log{2}>0.\end{aligned}$$ Since $\epsilon>0$ with $r+2\epsilon<e$ is arbitrary, we conclude that $x\in Ent_r(f)$, proving the "if\" part. Next, we prove the "only if\" part. Let $x\in Ent_r(f)$ for some $r>0$. Due to Lemma 2.1, it suffices to show that $$f|_{CR(f|_{C(x)})}\colon CR(f|_{C(x)})\to CR(f|_{C(x)})$$ is not chain continuous. For any $\epsilon>0$, let $$S_\epsilon=\{y\in C(x)\colon d(y,CR(f|_{C(x)}))\le\epsilon\}$$ and $$T_\epsilon=\{y\in C(x)\colon d(y,CR(f|_{C(x)}))\ge\epsilon\}.$$ Since $$CR(f|_{C(x)})= C(x)\cap CR(f),$$ we have $T_\epsilon\cap CR(f)=\emptyset$; therefore, for any $p\in T_\epsilon$, we can take a neighborhood $U_p$ of $p$ in $X$ such that - $d(a,b)\le r$ and $d(f(a),f(b))\le\epsilon$ for all $a,b\in U_p$, - $f^i(c)\not\in U_p$ for all $c\in U_p$ and $i>0$. We take $p_1,p_2,\dots,p_M\in T_\epsilon$ with $T_\epsilon\subset\bigcup_{j=1}^M U_{p_j}$. Let $U=\bigcup_{j=1}^M U_{p_j}$and take $0<\Delta\le\epsilon$ such that $$\{z\in X\colon d(z,T_\epsilon)\le\Delta\}\subset U.$$ Since $x\in C(x)$ and $C(x)$ is chain stable, we can take a closed neighborhood $K$ of $x$ in $X$ such that - $d(a,b)\le\epsilon$ for all $a,b\in K$, - $d(f^i(c),C(x))\le\Delta$ for all $c\in K$ and $i\ge0$. 
For any $q\in X$ and $n\ge1$, let $$A(q,n)=\{0\le i\le n-1\colon f^i(q)\in U\}$$ and take $$g(q,n)\colon A(q,n)\to\{U_{p_j}\colon1\le j\le M\}$$ such that $f^i(q)\in g(q,n)(i)$ for every $i\in A(q,n)$. By (2), we have $|A(q,n)|\le M$ for all $q\in X$ and $n\ge1$. Note that $$|\{(A(q,n),g(q,n))\colon q\in X\}|\le\sum_{k=0}^{\min\{n,M\}}\binom{n}{k}M^k\le (M+1)n^M M^M.$$ for all $n\ge1$. Since $x\in Ent_r(f)$, we have $$h(f,K,r)=\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,K,r)}>0;$$ therefore, $$s_n(f,K,r)>(M+1)n^M M^M$$ for some $n\ge1$. This implies that there are $u,v\in K$ such that $d_n(u,v)>r$ and $$(A(u,n),g(u,n))=(A(v,n),g(v,n)).$$ We fix $0\le N\le n-1$ with $d(f^N(u),f^N(v))>r$ and let $$(A,g)=(A(u,n),g(u,n))=(A(v,n),g(v,n)).$$ If $$A\cap\{0\le l\le N\}=\emptyset,$$ then $f^l(u),f^l(v)\not\in U$ for all $0\le l\le N$. By $(4)$ and the choice of $\Delta$, for any $0\le l\le N$, we obtain $$\{f^l(u),f^l(v)\}\subset\{w\in X\colon d(w,S_\epsilon)\le\Delta\}.$$ It follows that $$\max\{d(f^l(u),CR(f|_{C(x)})),d(f^l(v),CR(f|_{C(x)}))\}\le\epsilon+\Delta\le2\epsilon$$ for all $0\le l\le N$. Moreover, by $u,v\in K$ and (3), we obtain $d(u,v)\le\epsilon$. If $$A\cap\{0\le l\le N\}\ne\emptyset,$$ letting $$L=\max\:[A\cap \{0\le l\le N\}],$$ we have $f^L(u),f^L(v)\in g(L)$ and $g(L)\in\{U_{p_j}\colon1\le j\le M\}$. By (1), we have $L<N$ and $d(f^{L+1}(u),f^{L+1}(v))\le\epsilon$. By $$A\cap \{L+1\le l\le N\}=\emptyset,$$ $(4)$, and the choice of $\Delta$, similarly as above, we obtain $$\max\{d(f^l(u),CR(f|_{C(x)})),d(f^l(v),CR(f|_{C(x)}))\}\le\epsilon+\Delta\le2\epsilon$$ for all $L+1\le l\le N$. Since $\epsilon>0$ is arbitrary, we conclude that $f|_{CR(f|_{C(x)})}$ is not chain continuous, thus the "only if\" part has been proved. This completes the proof of Theorem 1.1. ◻ For the proof of Theorem 1.2, we need two lemmas. **Lemma 4**. *Let $f\colon X\to X$ be a continuous map.* - *For any $x,y\in X$ and $r>0$, if $x\in Sh(f)$ and $y\in C(x)\cap Ent_r(f)$, then $x\in Ent_s(f)$ for all $0<s<r$.* - *For any $x,y\in X$ and $r,b>0$, if $x\in Sh(f)$ and $y\in C(x)\cap Ent_{r,b}(f)$, then $x\in Ent_{s,b}(f)$ for all $0<s<r$.* *Proof.* Let $x\in Sh(f)$ and $y\in C(x)\setminus\{x\}$. For any $0<s<r$, we fix $\epsilon>0$ with $s+2\epsilon<r$. Since $x\in Sh(f)$, there is $\delta>0$ such that every $\delta$-pseudo orbit of $(x_i)_{i\ge0}$ of $f$ with $x_0=x$ is $\epsilon$-shadowed by some $z\in X$. Since $y\in C(x)\setminus\{x\}$ and so $x\rightarrow y$, we have a $\delta/2$-chain $(y_i)_{i=0}^k$ of $f$ with $y_0=x$ and $y_k=y$. For $K\in\mathcal{K}(y)$, $n\ge1$, and $r>0$, we take an $(n,r)$-separated subset $E(K,n,r)$ of $K$ with $|E(K,n,r)|=s_n(f,K,r)$. If $K$ is sufficiently small, then for any $p\in E(K,n,r)$, $$(z_i^p)_{i=0}^{k+n-1}=(y_0,y_1,\dots,y_{k-1},p,f(p),\dots,f^{n-1}(p))$$ is a $\delta$-chain of $f$ with $z_0^p=y_0=x$ and so there is $z_p\in X$ with $d(f^i(z_p),z_i^p)\le\epsilon$ for all $0\le i\le k+n-1$. It follows that $$\{z_p\colon p\in E(K,n,r)\}$$ is a $(k+n,s)$-separated subset of $B_\epsilon(x)=\{w\in X\colon d(x,w)\le\epsilon\}$ and so $$s_{k+n}(f,B_\epsilon(x),s)\ge|E(K,n,r)|=s_n(f,K,r),$$ implying $$\begin{aligned} h(f,B_\epsilon(x),s)&=\limsup_{n\to\infty}\frac{1}{k+n}\log{s_{k+n}(f,B_\epsilon(x),s)}\\ &\ge\limsup_{n\to\infty}\frac{1}{k+n}\log{s_n(f,K,r)}\\ &=h(f,K,r).\end{aligned}$$ Since $\epsilon>0$ with $s+2\epsilon<r$ is arbitrary, if $y\in Ent_r(f)$ (resp.$y\in Ent_{r,b}(f)$ for some $b>0$), we obtain $x\in Ent_s(f)$ (resp.$x\in Ent_{s,b}(f)$). 
Since $0<s<r$ is arbitrary, the lemma has been proved. ◻ Let $f\colon X\to X$ be a continuous map. For $\delta,r>0$ and $n\ge1$, we say that two $\delta$-chains $(x_i)_{i=0}^n$ and $(y_i)_{i=0}^n$ of $f$ is *$(n,r)$-separated* if $d(x_i,y_i)>r$ for some $0\le i\le n$. Let $$s_n(f,X,r,\delta)$$ denote the largest cardinality of a set of $(n,r)$-separated $\delta$-chains of $f$. The following lemma is from [@Mi]. **Lemma 5** (Misiurewicz). *$$h_{\rm top}(f)=\lim_{r\to0}\lim_{\delta\to0}\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,X,r,\delta)}.$$* We give a proof of Theorem 1.2. *Proof of Theorem 1.2.* First, we prove the "if\" part. Since $$h_{\rm top}(f|_{C(x)})=h(f,C(x))>0,$$ we have $h(f,C(x),r)>0$ for some $r>0$. Taking $0<b\le h(f,C(x),r)$, we obtain $C(x)\cap Ent_{r,b}(f)\ne\emptyset$. Since $x\in Sh(f)$, by Lemma 3.1, this implies $x\in Ent_{s,b}(f)$ for all $0<s<r$, thus the "if\" part has been proved. Next, we prove the"only if\" part. Let $x\in Ent_{r_0,b}(f)$ for some $r_0,b>0$. Then, we have $$h(f,K,r_0)=\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,K,r_0)}\ge b$$ for all $K\in\mathcal{K}(x)$. Since $C(x)$ is chain stable, taking $0<s<r_0$, we obtain $$\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,C(x),s,\delta)}\ge b$$ for all $\delta>0$. From Lemma 3.2, it follows that $$\begin{aligned} h_{\rm top}(f|_{C_{(x)}})&=\lim_{r\to0}\lim_{\delta\to0}\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,C(x),r,\delta)}\\ &\ge\lim_{\delta\to0}\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,C(x),s,\delta)}\\ &\ge b>0,\end{aligned}$$ thus the "only if\" part has been proved. This completes the proof of Theorem 1.2. ◻ Next, we prove Theorem 1.3. The proof of the following lemma is left to the reader. **Lemma 6**. *Let $f\colon X\to X$ be a continuous map and let $C\in\mathcal{C}(f)$. For any $\epsilon>0$, there is $\delta>0$ such that every $\delta$-chain $(x_i)_{i=0}^k$ of $f|_{CR(f)}$ with $x_0\in C$ satisfies $d(x_i,C)\le\epsilon$ for all $0\le i\le k$.* *Proof of Theorem 1.3.* Due to Corollary 2.1, it is sufficient to show that each of the three conditions implies that $f|_C\colon C\to C$ is not chain continuous. \(1\) Taking $y\in\omega(x,f)\cap Sen(f|_{CR(f)})$, we have $$y\in C\cap Sen_{e_0}(f|_{CR(f)})$$ for some $e_0>0$. Taking $0<e<e_0$, we obtain that for any $\delta>0$, there are $\delta$-chains $(x_i)_{i=0}^k$ and $(y_i)_{i=0}^k$ of $f|_{CR(f)}$ with $x_0=y_0=x$ and $d(x_k,y_k)>e$. By Lemma 3.3, this implies that $f|_C$ is not chain continuous and thus $C\in\mathcal{C}_{no}(f|_{C(x)})$. \(2\) Since $(x,y)$ is a scrambled pair for $f$, we have $$\liminf_{i\to\infty}d(f^i(x),f^i(y))=0;$$ therefore, there are a sequence $0\le i_1<i_2<\cdots$ and $z\in X$ such that $$\lim_{j\to\infty}d(f^{i_j}(x),f^{i_j}(y))=0$$ and $$\lim_{j\to\infty}f^{i_j}(x)=z,$$ which implies $z\in\omega(x,f)\cap\omega(y,f)$ and so $\omega(y,f)\subset C$. By $$\omega(x,f)\cup\omega(y,f)\subset C,$$ we obtain $$\lim_{i\to\infty}d(f^i(x),C)=\lim_{i\to\infty}d(f^i(y),C)=0.$$ On the other hand, since $(x,y)$ is a scrambled pair for $f$, we have $$\limsup_{i\to\infty}d(f^i(x),f^i(y))>0.$$ These condition clearly imply that $f|_C$ is not chain continuous and thus $C\in\mathcal{C}_{no}(f|_{C(x)})$. \(3\) Since $\omega(x,f)$ is not a minimal set for $f$, we have a closed $f$-invariant subset $S$ of $\omega(x,f)$ such that $$\emptyset\ne S\ne\omega(x,f).$$ Fix $p\in S$, $q\in\omega(x,f)$, and $e>0$ with $d(p,q)>e$. 
Since $\omega(x,f)\subset C$ and $f|_C\colon C\to C$ is chain transitive, for any $\delta>0$, there are $\delta$-chains $(x_i)_{i=0}^k$ and $(y_i)_{i=0}^k$ of $f|_C$ with $x_0=y_0=y_k=p$ and $x_k=q$. This implies that $f|_C$ is not chain continuous and thus $C\in\mathcal{C}_{no}(f|_{C(x)})$. ◻ Finally, we give an example mentioned in Remark 1.5. For a continuous map $f\colon X\to X$, $C\in\mathcal{C}(f)$ is said to be *terminal* if $C$ is chain stable. The proof of the following lemma is left to the reader. **Lemma 7**. *Let $f\colon X\to X$ be a continuous map. For any $x\in X$ and $C\in\mathcal{C}(f)$ with $\omega(x,f)\subset C$, if $C$ is terminal, then $$C(x)=\{f^i(x)\colon i\ge0\}\cup C.$$* **Example 1**. *This example is taken from [@K4]. Let $\sigma\colon[-1,1]^\mathbb{N}\to[-1,1]^\mathbb{N}$ be the shift map and let $d$ be the metric on $[-1,1]^\mathbb{N}$ defined by $$d(x,y)=\sup_{i\ge1}2^{-i}|x_i-y_i|$$ for all $x=(x_i)_{i\ge1},y=(y_i)_{i\ge1}\in[-1,1]^\mathbb{N}$. Let $s=(s_k)_{k\ge 1}$ be a sequence of numbers with $1>s_1>s_2>\cdots$ and $\lim_{k\to\infty}s_k=0$. Put $$S=\{0\}\cup\{-s_k\colon k\ge1\}\cup\{s_k\colon k\ge1\},$$ a closed subset of $[-1,1]$. We define a closed $\sigma$-invariant subset $X$ of $S^\mathbb{N}$ by $$X=\{x=(x_i)_{i\ge1}\in S^\mathbb{N}\colon|x_1|\ge|x_2|\ge\cdots\}.$$ Let $f=\sigma|_X\colon X\to X$, $X_k=\{-s_k,s_k\}^\mathbb{N}$ for each $k\ge1$, and let $X_0=\{0^\infty\}$. Then, we have $$CR(f)=\{x=(x_i)_{i\ge1}\in X\colon|x_1|=|x_2|=\cdots\}=X_0\cup\bigcup_{k\ge1}X_k$$ and $$\mathcal{C}(f)=\{X_0\}\cup\{X_k\colon k\ge1\}.$$ Note that $X_0$ is terminal. For a rapidly decreasing sequence $s=(s_k)_{k\ge 1}$, we can show that $f$ satisfies the shadowing property and so $X=Sh(f)$. Let $x=(s_1,s_2,s_3,\dots)$ and note that $x\in X$. Since $\omega(x,f)=X_0$ and $X_0$ is terminal, by Lemma 3.4, we obtain $$C(x)=\{f^i(x)\colon i\ge0\}\cup X_0.$$ By Theorem 1.1, we see that $$x\not\in\bigcup_{r>0}Ent_r(f).$$ We shall show that $x\in Ent(f)$. Let $$x_k=(s_1,s_2,\dots,s_k,s_k,s_k,\dots),$$ $k\ge1$, and note that $x_k\in X$ for each $k\ge1$. For any $k\ge1$, since $h(f,X_k)\ge\log{2}>0$, we have $h(f,X_k,r_k)>0$ and so $X_k\cap Ent_{r_k}(f)\ne\emptyset$ for some $r_k>0$. For every $k\ge1$, since $X_k\subset C(x_k)$, we obtain $C(x_k)\cap Ent_{r_k}(f)\ne\emptyset$; therefore, Lemma 3.1 implies that $x_k\in Ent_{s_k}(f)$ for all $0<s_k<r_k$. In particular, we have $x_k\in Ent(f)$ for all $k\ge1$. Since $\lim_{k\to\infty}x_k=x$, we conclude that $x\in Ent(f)$.* # Proof of Theorem 1.4 and an example In this section, we prove Theorem 1.4 and give an example mentioned in Section 1. *Proof of Theorem 1.4.* First, we prove the implication $(1)\implies(2)$. If $f$ is not h-expansive, then $$h_f^\ast(\epsilon)=\sup_{x\in X}h(f,\Phi_{\epsilon}(x))>0$$ for all $\epsilon>0$. Given any $\epsilon>0$, we take $x\in X$ with $h(f,\Phi_{\epsilon/2}(x))>0$ and fix $r>0$ with $h(f,\Phi_{\epsilon/2}(x),r)>0$. For any $0<\Delta\le r$, we take an open cover $\mathcal{U}=\{U_i\colon 1\le i\le m\}$ of $X$ such that $d(a,b)\le\Delta$ for all $1\le i\le m$ and $a,b\in U_i$. Since $$h(f,\Phi_{\epsilon/2}(x),r)=\limsup_{n\to\infty}\frac{1}{n}\log{s_n(f,\Phi_{\epsilon/2}(x),r)}>0,$$ we have $$s_n(f,\Phi_{\epsilon/2}(x),r)>m^2$$ for some $n\ge1$. Then, there are $u,v\in\Phi_{\epsilon/2}(x)$ such that $d_n(u,v)>r$, $u,v\in U_{i_0}$, and $f^{n-1}(u),f^{n-1}(v)\in U_{i_{n-1}}$ for some $U_{i_0},U_{i_{n-1}}\in\mathcal{U}$. 
It follows that $$\max\{d(u,v),d(f^{n-1}(u),f^{n-1}(v))\}\le\Delta\le r$$ and $$r<\max_{0\le i\le n-1}d(f^i(u),f^i(v))\le\epsilon.$$ Since $0<\Delta\le r$ is arbitrary, this implies the existence of $r>0$ as in (2). Since $\epsilon>0$ is arbitrary, $(1)\implies(2)$ has been proved. Next, we prove the implication $(2)\implies(3)$. The proof is similar to the proof of Lemma 3.1 in [@ACCV1]. Given any $\epsilon>0$, we choose $r>0$ as in (2). We fix $0<\gamma<\min\{\epsilon,r/2\}$. Since $f$ has the shadowing property, there is $\delta>0$ such that every $\delta$-pseudo orbit of $f$ is $\gamma$-shadowed by some point of $X$. By the choice of $r$, we obtain a pair $$(\alpha(0),\alpha(1))=((x_i)_{i=0}^k,(y_i)_{i=0}^k)$$ of $\delta$-chains of $f$ with $(x_0,x_k)=(y_0,y_k)$ and $$r\le\max_{0\le i\le k}d(x_i,y_i)\le\epsilon.$$ Then, the chain transitivity of $f$ gives a $\delta$-chain $\beta=(z_i)_{i=0}^l$ of $f$ with $z_0=x_k=y_k$ and $z_l=x_0=y_0$. For any $s=(s_n)_{n\ge1}\in\{0,1\}^\mathbb{N}$, we consider a $\delta$-pseudo orbit $$\Gamma(s)=\alpha(s_1)\beta\alpha(s_2)\beta\alpha(s_3)\beta\cdots$$ of $f$. Let $m=k+l$, $$Y=\{y\in X\colon\text{$\Gamma(s)$ is $\gamma$-shadowed by $y$ for some $s\in\{0,1\}^\mathbb{N}$}\},$$ and define a map $\pi\colon Y\to\{0,1\}^\mathbb{N}$ so that $\Gamma(\pi(y))$ is $\gamma$-shadowed by $y$ for all $y\in Y$. By a standard argument, we can show that the following conditions are satisfied: - $Y$ is a closed subset of $X$, - $f^{m}(Y)\subset Y$, - $\pi$ is well-defined, - $\pi$ is surjective, - $\pi$ is continuous, - $\pi\circ f^m=\sigma\circ\pi$, where $\sigma\colon\{0,1\}^\mathbb{N}\to\{0,1\}^\mathbb{N}$ is the shift map. It follows that $Y$ is a closed $f^m$-invariant subset of $X$ and $$\pi:(Y,f^m)\to(\{0,1\}^\mathbb{N},\sigma)$$ is a factor map. By the definition of $\Gamma(s)$, $s\in\{0,1\}^\mathbb{N}$, we see that $$\sup_{i\ge0}d(f^i(x),f^i(y))\le3\epsilon$$ for all $x,y\in Y$. Since $\epsilon>0$ is arbitrary, $(2)\implies(3)$ has been proved. We shall prove the implication $(3)\implies(1)$. Given any $\epsilon>0$, we take $m\ge1$ and $Y$ as in (3). Take $p\in Y$ and note that $Y\subset\Phi_\epsilon(p)$. It follows that $$h_f^\ast(\epsilon)\ge h(f,\Phi_\epsilon(p))\ge h(f,Y)\ge\frac{1}{m}h(f^m,Y)\ge\frac{1}{m}h(\sigma,\{0,1\}^\mathbb{N})=\frac{1}{m}\log{2}>0.$$ Since $\epsilon>0$ is arbitrary, $(3)\implies(1)$ has been proved. The implication $(4)\implies(2)$ is obvious from the definitions. It remains to prove the implication $(3)\implies(4)$. Given any $\epsilon>0$, we take $m\ge1$ and $Y$ as in (3). Since $$h(f^m,Y)\ge h(\sigma,\{0,1\}^\mathbb{N})=\log{2}>0,$$ by Corollary 2.4 of [@BGKM], there is a scrambled pair $(x,y)\in Y^2$ for $f^m$. Then, $(x,y)$ is also a scrambled pair for $f$ and satisfies $$\sup_{i\ge0}d(f^i(x),f^i(y))\le\epsilon$$ because $x,y\in Y$. Since $\epsilon>0$ is arbitrary, $(3)\implies(4)$ has been proved. This completes the proof of Theorem 1.4. ◻ We use Theorem 1.4 to obtain a counterexample to a question in [@ACCV2]. The example will be given as an inverse limit of the full-shift $(\{0,1\}^\mathbb{Z},\sigma)$ with respect to a factor map $$F\colon(\{0,1\}^\mathbb{Z},\sigma)\to(\{0,1\}^\mathbb{Z},\sigma).$$ We need three auxiliary lemmas. A homeomorphism $f\colon X\to X$ is said to be *expansive* if there is $e>0$ such that $$\Gamma_e(x)=\{x\}$$ for all $x\in X$, and such $e$ is called an *expansive constant* for $f$. 
It is known that for a homeomorphism $f\colon X\to X$ with an expansive constant $e>0$ and $x,y\in X$, if $$\sup_{i\ge0}d(f^i(x),f^i(y))\le e,$$ then $(x,y)$ is an asymptotic pair for $f$. The following lemma gives a sufficient condition for an inverse limit system to be h-expansive. **Lemma 8**. *Let $\pi=(\pi_n\colon(X_{n+1},f_{n+1})\to(X_n,f_n))_{n\ge1}$ be a sequence of factor maps such that for every $n\ge1$, $f_n\colon X_n\to X_n$ is an expansive transitive homeomorphism with the shadowing property. Let $$(Y,g)=\lim_\pi(X_n,f_n)$$ and note that $g$ is a transitive homeomorphism with the shadowing property. If $g$ is not h-expansive, then for any $N\ge 1$, there are $M\ge N$ and a scrambled pair $(x_{M+1},y_{M+1})\in X_{M+1}^2$ for $f_{M+1}$ such that $(\pi_M(x_{M+1}),\pi_M(y_{M+1}))$ is an asymptotic pair for $f_M$.* *Proof.* Let $D$ be a metric on $Y$. Let $d_n$ be a metric on $X_n$ and $e_n>0$ be an expansive constant for $f_n$ for each $n\ge1$. Given any $N\ge1$, we take $\epsilon_N>0$ such that for any $p=(p_n)_{n\ge1}$, $q=(q_n)_{n\ge1}\in Y$, $D(p,q)\le\epsilon_N$ implies $d_n(p_n,q_n)\le e_n$ for all $1\le n\le N$. Since $g$ is not h-expansive, by Theorem 1.4, there is a scrambled pair $$(x,y)=((x_n)_{n\ge1},(y_n)_{n\ge1})\in Y^2$$ for $g$ with $$\sup_{i\ge0}D(g^i(x),g^i(y))\le\epsilon_N.$$ Then, for every $1\le n\le N$, since $$\sup_{i\ge0}d_n(f_n^i(x_n),f_n^i(y_n))=\sup_{i\ge0}d_n(g^i(x)_n,g^i(y)_n)\le e_n,$$ $(x_n,y_n)$ is an asymptotic pair for $f_n$. Since $(x,y)$ is a scrambled pair for $g$ and so a proximal for $g$, $(x_n,y_n)$ is a proximal pair for $f_n$ for all $n\ge1$. If $(x_n,y_n)$ is an asymptotic pair for $f_n$ for all $n\ge1$, then $(x,y)$ is an asymptotic pair for $g$, which is a contradiction. Thus, there is $m\ge 1$ such that $(x_{m+1},y_{m+1})$ is a scrambled pair for $f_{m+1}$. Letting $$M=\min\{m\ge1\colon\text{$(x_{m+1},y_{m+1})$ is a scrambled pair for $f_{m+1}$}\},$$ we see that $M\ge N$, $(x_{M+1},y_{M+1})$ is a scrambled pair for $f_{M+1}$, and $(x_M,y_M)$ is an asymptotic pair for $f_M$, thus the lemma has been proved. ◻ A map $F\colon X\to X$ is said to be an *open map* if for any open subset $U$ of $X$, $f(U)$ is an open subset of $X$. Any continuous open map $F\colon X\to X$ satisfies the following property: for every $r>0$, there is $\delta>0$ such that for any $s,t\in X$ with $d(s,t)\le\delta$ and $u\in F^{-1}(s)$, we have $d(u,v)\le r$ for some $v\in F^{-1}(t)$. For a continuous map $f\colon X\to X$, a sequence $(x_i)_{i\ge0}$ of points in $X$ is called a *limit-pseudo orbit* of $f$ if $$\lim_{i\to\infty}d(f(x_i),x_{i+1})=0,$$ and said to be *limit shadowed* by $x\in X$ if $$\lim_{i\to\infty}d(f^i(x),x_i)=0.$$ The next lemma is needed for the proof of Lemma 4.3. **Lemma 9**. *Let $f\colon X\to X$ be a homeomorphism and let $F\colon(X,f)\to(X,f)$ be a factor map such that* - *$F$ is an open map,* - *$d(v,v')\ge1$ for all $t\in X$ and $v,v'\in F^{-1}(t)$ with $v\ne v'$.* *Suppose that* - *$(x_i)_{i\ge0}$ is a limit-pseudo orbit of $f$ and limit-shadowed by $x\in X$,* - *$(z_i)_{i\ge0}$ is a limit-pseudo orbit of $f$ with $z_i\in F^{-1}(x_i)$ for all $i\ge0$.* *Then, there is $z\in F^{-1}(x)$ such that $(z_i)_{i\ge0}$ is limit-shadowed by $z$.* *Proof.* By (3), letting $\delta_i=d(x_i,f^i(x))$, $i\ge0$, we have $\lim_{i\to\infty}\delta_i=0$. 
By (1), we can take a sequence $r_i>0$, $i\ge0$, such that - $\lim_{i\to\infty}r_i=0$, - for any $i\ge0$, $s,t\in X$ with $d(s,t)\le\delta_i$, and $u\in F^{-1}(s)$, we have $d(u,v)\le r_i$ for some $v\in F^{-1}(t)$. With use of (4), we fix $N\ge0$ satisfying the following conditions - $0<r_i<1/2$ for all $i\ge N$, - $d(u,v)\le r_i$ implies $d(f(u),f(v))\le1/4$ for all $i\ge N$ and $u,v\in X$, - $d(f(z_i),z_{i+1})\le1/4$ for all $i\ge N$. By $\delta_N=d(x_N,f^N(x))$ and $z_N\in F^{-1}(x_N)$, we obtain $w_N\in F^{-1}(f^N(x))$ with $d(z_N,w_N)\le r_N$. Note that $$F(f^j(w_N))=f^j(F(w_N))=f^j(f^N(x))=f^{N+j}(x)$$ for every $j\ge0$. By induction on $j$, we prove that $d(z_{N+j},f^j(w_N))\le r_{N+j}$ for all $j\ge0$. Assume that $d(z_{N+j},f^j(w_N))\le r_{N+j}$ for some $j\ge0$. Then, $$\begin{aligned} d(z_{N+j+1},f^{j+1}(w_N))&\le d(z_{N+j+1},f(z_{N+j}))+d(f(z_{N+j}),f(f^j(w_N)))\\ &\le 1/4+1/4=1/2.\end{aligned}$$ Since $$\delta_{N+j+1}=d(x_{N+j+1},f^{N+j+1}(x))$$ and $z_{N+j+1}\in F^{-1}(x_{N+j+1})$, we have $$d(z_{N+j+1},w)\le r_{N+j+1}$$ for some $w\in F^{-1}(f^{N+j+1}(x))$. Since $f^{j+1}(w_N)\in F^{-1}(f^{N+j+1}(x))$, by (2), we obtain $$\begin{aligned} d(z_{N+j+1},w')&\ge d(f^{j+1}(w_N),w')-d(z_{N+j+1},f^{j+1}(w_N))\\ &\ge1-1/2=1/2>r_{N+j+1}\end{aligned}$$ for all $w'\in F^{-1}(f^{N+j+1}(x))$ with $w'\ne f^{j+1}(w_N)$. It follows that $w=f^{j+1}(w_N)$ and so $$d(z_{N+j+1},f^{j+1}(w_N))\le r_{N+j+1};$$ therefore, the induction is complete. Let $z=f^{-N}(w_N)$ and note that $$f^N(F(z))=F(f^N(z))=F(w_N)=f^N(x).$$ Since $f$ is a homeomorphism, we have $F(z)=x$, that is, $z\in F^{-1}(x)$. Moreover, we obtain $$\lim_{i\to\infty}d(z_i,f^i(z))=\lim_{j\to\infty}d(z_{N+j},f^{N+j}(z))=\lim_{j\to\infty}d(z_{N+j},f^j(w_N))=\lim_{j\to\infty}r_{N+j}=0,$$ thus the lemma has been proved. ◻ The following lemma gives a sufficient condition for an inverse limit system to satisfy the s-limit shadowing property. **Lemma 10**. *Let $f\colon X\to X$ be a homeomorphism and let $F\colon(X,f)\to(X,f)$ be a factor map such that* - *$F$ is an open map,* - *$d(v,v')\ge1$ for all $t\in X$ and $v,v'\in F^{-1}(t)$ with $v\ne v'$.* *Let $(X_n,f_n)=(X,f)$ and $\pi_n=F\colon(X,f)\to(X,f)$ for all $n\ge1$. Let $$(Y,g)=\lim_\pi(X_n,f_n).$$ If $f$ has the s-limit shadowing property, then $g$ satisfies the s-limit shadowing property.* *Proof.* Let $D$ be a metric on $Y$. Given any $\epsilon>0$, we take $N\ge1$ and $\epsilon_N>0$ such that for any $p=(p_n)_{n\ge1}$, $q=(q_n)_{n\ge1}\in Y$, $d(p_N,q_N)\le\epsilon_N$ implies $D(p,q)\le\epsilon$. Since $f$ has the s-limit shadowing property, there is $\delta_N>0$ such that every $\delta_N$-limit-pseudo orbit of $f$ is $\epsilon_N$-limit shadowed by some point of $X$. We take $\delta>0$ such that $D(p,q)\le\delta$ implies $d(p_N,q_N)\le\delta_N$ for all $p=(p_n)_{n\ge1}$, $q=(q_n)_{n\ge1}\in Y$. Let $\xi=(x^{(i)})_{i\ge0}$ be a $\delta$-limit-pseudo orbit of $g$. Then, for every $i\ge0$, since $D(g(x^{(i)}),x^{(i+1)})\le\delta$, we have $$d(f(x_N^{(i)}),x_N^{(i+1)})=d(g(x^{(i)})_N,x_N^{(i+1)})\le\delta_N.$$ Also, since $\lim_{i\to\infty}D(g(x^{(i)}),x^{(i+1)})=0$, we have $$\lim_{i\to\infty}d(f(x_n^{(i)}),x_n^{(i+1)})=\lim_{i\to\infty}d(g(x^{(i)})_n,x_n^{(i+1)})=0$$ for all $n\ge1$. It follows that $(x_N^{(i)})_{i\ge0}$ is a $\delta_N$-limit-pseudo of $f$ and so $\epsilon_N$-limit shadowed by some $x_N\in X$. 
Then, since $$\lim_{i\to\infty}d(f(x_N^{(i)}),x_N^{(i+1)})=\lim_{i\to\infty}d(f^i(x_N),x_N^{(i)})=0$$ and $$\lim_{i\to\infty}d(f(x_{N+1}^{(i)}),x_{N+1}^{(i+1)})=0,$$ by Lemma 4.2, we have $$\lim_{i\to\infty}d(f^i(x_{N+1}),x_{N+1}^{(i)})=0$$ for some $x_{N+1}\in F^{-1}(x_N)$. Inductively, we obtain $x_{N+k}\in X$, $k\ge0$, such that $$\lim_{i\to\infty}d(f^i(x_{N+k}),x_{N+k}^{(i)})=0$$ and $x_{N+k+1}\in F^{-1}(x_{N+k})$ for all $k\ge0$. We define $y=(y_n)_{n\ge1}\in Y$ by $$y_n= \begin{cases} F^{N-n}(x_N)&\text{for all $1\le n\le N$}\\ x_n&\text{for all $n\ge N$} \end{cases} .$$ Given any $i\ge0$, by $$d(g^i(y)_N,x_N^{(i)})=d(f^i(y_N),x_N^{(i)})=d(f^i(x_N),x_N^{(i)})\le\epsilon_N,$$ we obtain $$D(g^i(y),x^{(i)})\le\epsilon.$$ Moreover, since $$\begin{aligned} \lim_{i\to\infty}d(g^i(y)_n,x_n^{(i)})&=\lim_{i\to\infty}d(f^i(y_n),x_n^{(i)})\\ &=\lim_{i\to\infty}d(f^i(F^{N-n}(x_N)),x_n^{(i)})\\ &=\lim_{i\to\infty}d(F^{N-n}(f^i(x_N)),F^{N-n}(x_N^{(i)}))=0\end{aligned}$$ for all $1\le n\le N$; and $$\begin{aligned} \lim_{i\to\infty}d(g^i(y)_{N+k},x_{N+k}^{(i)})&=\lim_{i\to\infty}d(f^i(y_{N+k}),x_{N+k}^{(i)})\\ &=\lim_{i\to\infty}d(f^i(x_{N+k}),x_{N+k}^{(i)})=0\end{aligned}$$ for all $k\ge0$, we obtain $$\lim_{i\to\infty}D(g^i(y),x^{(i)})=0.$$ In other words, $\xi$ is $\epsilon$-limit shadowed by $y$. Since $\epsilon>0$ is arbitrary, we conclude that $g$ satisfies the s-limit shadowing property, completing the proof of the lemma. ◻ Finally, we give the example. **Example 2**. *Let $\mathbb{Z}_2=\{0,1\}$. We define a metric $d$ on $\{0,1\}^\mathbb{Z}$ by $$d(x,y)=\sup_{n\in\mathbb{Z}}2^{-|n|}|x_n-y_n|$$ for all $x=(x_n)_{n\in\mathbb{Z}}$, $y=(y_n)_{n\in\mathbb{Z}}\in\{0,1\}^\mathbb{Z}$. Note that the shift map $$\sigma\colon\{0,1\}^\mathbb{Z}\to\{0,1\}^\mathbb{Z}$$ is an expansive mixing homeomorphism with the shadowing property and so satisfies the s-limit shadowing property (see, e.g.[@BGO]). We define a map $F\colon\{0,1\}^\mathbb{Z}\to\{0,1\}^\mathbb{Z}$ by for any $x=(x_n)_{n\in\mathbb{Z}}$, $y=(y_n)_{n\in\mathbb{Z}}\in\{0,1\}^\mathbb{Z}$, $y=F(x)$ if and only if $$y_n=x_n+x_{n+1}$$ for all $n\in\mathbb{Z}$. Note that $F$ gives a factor map $$F\colon(\{0,1\}^\mathbb{Z},\sigma)\to(\{0,1\}^\mathbb{Z},\sigma).$$* *Given any $x=(x_n)_{n\in\mathbb{Z}},y=(y_n)_{n\in\mathbb{Z}},z=(z_n)_{n\in\mathbb{Z}},w=(w_n)_{n\in\mathbb{Z}}\in\{0,1\}^\mathbb{Z}$, assume that* - *$(x,y)$ is an asymptotic pair for $\sigma$,* - *$(F(z),F(w))=(x,y)$.* *Then, there is $N\ge0$ such that $x_n=y_n$ for all $n\ge N$. If $z_N=w_N$, we have $z_n=w_n$ for all $n\ge N$ and so $(z,w)$ is an asymptotic pair for $\sigma$. If $z_N\ne w_N$, we have $z_n\ne w_n$ for all $n\ge N$ and so $$\liminf_{i\to\infty}d(\sigma^i(z),\sigma^i(w))=1>0,$$ thus $(z,w)$ is a distal pair for $\sigma$. In both cases, $(z,w)$ is not a scrambled pair for $\sigma$.* *For any $m\ge1$ and $a=(a_n)_{n=-m}^m\in\{0,1\}^{2m+1}$, we define $b=(b_n)_{n=-m}^{m-1}\in\{0,1\}^{2m}$ by $$b_n=a_n+a_{n+1}$$ for all $-m\le n\le m-1$. Letting $$S(a)=\{x=(x_n)_{n\in\mathbb{Z}}\colon x_n=a_n\:\text{ for all $-m\le n\le m$}\}$$ and $$T(b)=\{x=(x_n)_{n\in\mathbb{Z}}\colon x_n=b_n\:\text{ for all $-m\le n\le m-1$}\},$$ we obtain $F(S(a))=T(b)$, an open subset of $\{0,1\}^\mathbb{Z}$. Since $m\ge1$ and $a=(a_n)_{n=-m}^m\in\{0,1\}^{2m+1}$ are arbitrary, it follows that $F$ is an open map. Given any $y=(y_n)_{n\in\mathbb{Z}}\in\{0,1\}^\mathbb{Z}$, we define $\hat{y}=(\hat{y}_n)_{n\in\mathbb{Z}}\in\{0,1\}^\mathbb{Z}$ by $\hat{y}_n=y_n+1$ for all $n\in\mathbb{Z}$. 
Then, for any $x\in\{0,1\}^\mathbb{Z}$, taking $y\in F^{-1}(x)$, we have $F^{-1}(x)=\{y,\hat{y}\}$. Note that $d(y,\hat{y})=1$ for all $y\in\{0,1\}^\mathbb{Z}$.* *Let $(X_n,f_n)=(\{0,1\}^\mathbb{Z},\sigma)$ and $\pi_n=F\colon(\{0,1\}^\mathbb{Z},\sigma)\to(\{0,1\}^\mathbb{Z},\sigma)$ for all $n\ge1$. Let $$(Y,g)=\lim_\pi(X_n,f_n)$$ and let $D$ be a metric on $Y$. Since $\{0,1\}^\mathbb{Z}$ is totally disconnected and $\sigma\colon\{0,1\}^\mathbb{Z}\to\{0,1\}^\mathbb{Z}$ is a mixing homeomorphism,* - *$Y$ is totally disconnected,* - *$g$ is a mixing homeomorphism.* *By Lemmas 4.1 and 4.3, we obtain the following properties* - *$g$ is h-expansive,* - *$g$ has the s-limit shadowing property.* *We shall show that* - *$g$ is not countably-expansive,* - *$g$ satisfies $Y_e=\emptyset$, where $$Y_e=\{q\in Y\colon\Gamma_\epsilon(q)=\{q\}\:\text{ for some $\epsilon>0$}\}.$$* *Let $q=(q_n)_{n\ge1}\in Y$ and $N\ge1$. Let $F^{-1}(x)=\{x^a,x^b\}$ for all $x\in\{0,1\}^\mathbb{Z}$. Then, for all $c=(c_k)_{k\ge1}\in\{a,b\}^\mathbb{N}$, we define $q(c)=(q(c)_n)_{n\ge1}\in Y$ by $q(c)_n=q_n$ for all $1\le n\le N$; and $$q(c)_{N+k}=q(c)_{N+k-1}^{c_k}$$ for all $k\ge 1$. Given any $\epsilon>0$, if $N\ge1$ is large enough, $q(c)$, $c\in\{a,b\}^\mathbb{N}$, satisfies $q(c)\in\Gamma_\epsilon(q)$ for all $c\in\{a,b\}^\mathbb{N}$. Since $$\{q(c)\colon c\in\{a,b\}^\mathbb{N}\}$$ is an uncountable set, it follows that $g$ is not countably-expansive. Since $q\in Y$ and $\epsilon>0$ are arbitrary, it also follows that $Y_e=\emptyset$.* 99 D.V.Anosov, Geodesic flows on closed Riemann manifolds with negative curvature. Proc. Steklov Inst. Math. 90 (1967), 235 p. N.Aoki, K.Hiraide, Topological theory of dynamical systems. Recent advances. North--Holland Mathematical Library, 52. North--Holland Publishing Co., 1994. J.Aponte, H.Villavicencio, Shadowable points for flows. J. Dyn. Control Syst. 24 (2018), 701--719. A.Arbieto, E.Rego, Positive entropy through pointwise dynamics. Proc. Amer. Math. Soc. 148 (2020), 263--271. A.Artigue, B. Carvalho, W.Cordeiro, J.Vieitez, Beyond topological hyperbolicity: the L-shadowing property. J. Differ. Equations 268 (2020), 3057--3080. A.Artigue, B.Carvalho, W.Cordeiro, J.Vieitez, Countably and entropy expansive homeomorphisms with the shadowing property. Proc. Amer. Math. Soc. 150 (2022), 3369--3378. J.Auslander, E.Glasner, B.Weiss, On recurrence in zero dimensional flows, Forum Math. 19 (2007), 107--114. A.D.Barwell, C.Good, P.Oprocha, Shadowing and expansivity in subspaces. Fund. Math. 219 (2012), 223--243. F.Blanchard, E.Glasner, S.Kolyada, A.Maass, On Li-Yorke pairs. J. Reine Angew. Math. 547 (2002), 51--68. R.Bowen, Entropy-expansive maps. Trans. Amer. Math. Soc. 164 (1972), 323--331. R.Bowen, Equilibrium states and the ergodic theory of Anosov diffeomorphisms. Lecture Notes in Mathematics, 470. Springer--Verlag, 1975. P.Das, A.G.Khan, T.Das, Measure expansivity and specification for pointwise dynamics. Bull. Braz. Math. Soc. (N.S.) 50 (2019), 933--948. M.Dong, W.Jung, C.Morales, Eventually shadowable points. Qual. Theory Dyn. Syst. 19 (2020), 16, 11 pp. M.Dong, K.Lee, C.Morales, Pointwise topological stability and persistence. J. Math. Anal. Appl. 480 (2019), 123334, 12 pp. H.Furstenberg, Recurrence in ergodic theory and combinatorial number theory. Princeton University Press, 1981. W.Jung, K.Lee, C.A.Morales, Pointwise persistence and shadowing. Monatsh. Math. 192 (2020), 111--123. N.Kawaguchi, Quantitative shadowable points. Dyn. Syst. 32 (2017), 504--518. 
N.Kawaguchi, Properties of shadowable points: chaos and equicontinuity. Bull. Braz. Math. Soc. (N.S.) 48 (2017), 599--622. N.Kawaguchi, Maximal chain continuous factor. Discrete Contin. Dyn. Syst. 41 (2021), 5915--5942. N.Kawaguchi, Generic and dense distributional chaos with shadowing. J. Difference Equ. Appl. 27 (2021), 1456--1481. A.G.Khan, T.Das, Average shadowing and persistence in pointwise dynamics. Topology Appl. 292 (2021), 107629, 13 pp. A.G.Khan, T.Das, Stability theorems in pointwise dynamics. Topology Appl. 320 (2022), 108218, 14 pp. A.G.Khan, P.K.Das, T.Das, Pointwise dynamics under orbital convergence. Bull. Braz. Math. Soc. (N.S.) 51 (2020), 1001--1016. N.Koo, K.Lee, C.A.Morales, Pointwise topological stability. Proc. Edinb. Math. Soc. 61 (2018), 1179--1191. X.F.Luo, X.X.Nie, J.D.Yin, On the shadowing property and shadowable point of set-valued dynamical systems. Acta Math. Sin. (Engl. Ser.), 1384--1394. M.Misiurewicz, Remark on the definition of topological entropy. Dynamical systems and partial differential equations (Caracas, 1984), Univ. Simon Bolivar, Caracas, 1986, 65--67. C.A.Morales, Shadowable points. Dyn. Syst. 31 (2016), 347--356. S.Yu.Pilyugin, Shadowing in dynamical systems. Lecture Notes in Mathematics, 1706. Springer--Verlag, 1999. E.Rego, A.Arbieto, On the entropy of continuous flows with uniformly expansive points and the globalness of shadowable points with gaps. Bull. Braz. Math. Soc. (N.S.) 53 (2022), 853--872. B.Shin, On the set of shadowable measures. J. Math. Anal. Appl. 469 (2019), 872--881. X.Ye, G.Zhang, Entropy points and applications. Trans. Amer. Math. Soc. 359 (2007), 6167--6186.
--- abstract: | In this paper we consider the Diophantine equation $V_n - b^m = c$ for given integers $b,c$ with $b \geq 2$, whereas $V_n$ varies among Lucas-Lehmer sequences of the second kind. We prove under some technical conditions that if the considered equation has at least three solutions $(n,m)$, then there is an upper bound on the size of the solutions as well as on the size of the coefficients in the characteristic polynomial of $V_n$. address: - Sebastian Heintze, Graz University of Technology, Institute of Analysis and Number Theory, Steyrergasse 30/II, A-8010 Graz, Austria - Volker Ziegler, University of Salzburg, Hellbrunnerstrasse 34/I, A-5020 Salzburg, Austria author: - Sebastian Heintze - Volker Ziegler bibliography: - Pillai.bib title: On Pillai's Problem involving Lucas sequences of the second kind --- # Introduction In recent time many authors considered Pillai-type problems involving linear recurrence sequences. For an overview we refer to [@Heintze:2023]. Let us note that these problems are inspired by a result due to S. S. Pillai [@Pillai:1936; @Pillai:1937] who proved that for given, coprime integers $a$ and $b$ there exists a constant $c_0(a,b)$, depending on $a$ and $b$, such that for any $c>c_0(a,b)$ the equation $$\label{eq:Pillai} a^n-b^m=c$$ has at most one solution $(n,m)\in \mathbb{Z}_{>0}^2$. Replacing the powers $a^n$ and $b^m$ by other linear recurrence sequences seems to be a challenging task which was supposedly picked up first in [@Ddamulira:2017], where it was shown that $$F_n-2^m=c$$ has at most one solution $(n,m)\in \mathbb{Z}_{>0}^2$ provided that $$c\notin \{0,1,-1,-3,5,-11, -30, 85\}.$$ More generally, Chim, Pink and Ziegler [@Chim:2018] proved that for two fixed linear recurrence sequences $(U_n)_{n\in \mathbb{N}}$, $(V_n)_{n\in \mathbb{N}}$ (with some restrictions) the equation $$U_n - V_m = c$$ has at most one solution $(n,m)\in \mathbb{Z}_{>0}^2$ for all $c\in \mathbb{Z}$, except if $c$ is in a finite and effectively computable set $\mathcal{C} \subset \mathbb{Z}$ that depends on $(U_n)_{n\in \mathbb{N}}$ and $(V_n)_{n\in \mathbb{N}}$. In more recent years several attempts were made to obtain uniform results, i.e. to allow to vary the recurrence sequences $(U_n)_{n\in \mathbb{N}}$ and $(V_n)_{n\in \mathbb{N}}$ in the result of Chim, Pink and Ziegler [@Chim:2018]. In particular, Batte et. al. [@Batte:2022] showed that for all pairs $(p,c)\in \mathbb{P} \times \mathbb{Z}$ with $p$ a prime the Diophantine equation $$F_n-p^m=c$$ has at most four solutions $(n,m)\in \mathbb{N}^2$ with $n\geq 2$. This result was generalized by Heintze et. al. [@Heintze:2023]. They proved (under some technical restrictions) that for a given linear recurrence sequence $(U_n)_{n\in \mathbb{N}}$ there exist an effectively computable bound $B\geq 2$ such that for an integer $b > B$ the Diophantine equation $$\label{eq:Pillai-gen} U_n-b^m=c$$ has at most two solutions $(n,m)\in \mathbb{N}^2$ with $n\geq N_0$. Here $N_0$ is an effective computable constant depending only on $(U_n)_{n\in \mathbb{N}}$. In this paper we want to fix $b$ in [\[eq:Pillai-gen\]](#eq:Pillai-gen){reference-type="eqref" reference="eq:Pillai-gen"} and let $(U_n)_{n\in \mathbb{N}}$ vary over a given family of recurrence sequences. In particular we consider the case where $(U_n)_{n\in \mathbb{N}}$ varies over the family of Lucas-Lehmer sequences of the second kind. 
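Before turning to the precise setting, the flavour of such Pillai-type statements can be explored experimentally. The following Python sketch (not part of the cited works) enumerates all representations $F_n-2^m=c$ inside a small search box and prints every $c$ that is hit at least twice; the bounds `N_MAX`, `M_MAX` and the restriction $n\geq 2$ (which avoids the repeated value $F_1=F_2=1$) are ad-hoc choices for this illustration, and the exact index conventions of Ddamulira's result are not restated here.

```python
from collections import defaultdict

# Brute-force illustration (not from the cited papers): list every integer c
# that admits at least two representations F_n - 2^m = c within the search box.
# N_MAX, M_MAX and the restriction n >= 2 are arbitrary choices for this sketch.
N_MAX, M_MAX = 40, 40

# Fibonacci numbers with F_1 = F_2 = 1.
fib = [0, 1, 1]
while len(fib) <= N_MAX:
    fib.append(fib[-1] + fib[-2])

solutions = defaultdict(list)
for n in range(2, N_MAX + 1):
    for m in range(1, M_MAX + 1):
        solutions[fib[n] - 2 ** m].append((n, m))

# Print the values of c with at least two solutions (n, m) in the searched range.
for c, pairs in sorted(solutions.items()):
    if len(pairs) >= 2:
        print(c, pairs)
```

Such a finite search of course proves nothing; it merely illustrates why only a small exceptional set of values $c$ can admit several solutions.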
# Notations and statement of the main results In this paper we consider Lucas-Lehmer sequences of the second kind, that is we consider binary recurrence sequences of the form $$V_n(A,B)=V_n=\alpha^n+\beta^n,$$ where $\alpha$ and $\beta$ are the roots of the quadratic polynomial $$X^2-AX+B$$ with $A^2\neq 4B$ and $\gcd(A,B)=1$. In the following we will assume that $V_n$ is non-degenerate. That is we assume $A^2-4B>0$ and $A\neq 0$, since this implies that $\alpha$ and $\beta$ are distinct real numbers with $|\alpha|\neq |\beta|$. We will also assume that $A>0$, which results in $V_n>0$ for all $n\in\mathbb{N}$. Then we consider the Diophantine equation $$\label{eq:PillaiVn} V_n-b^m=c,$$ where $b,c\in \mathbb{Z}$ with $b>1$ are fixed. **Theorem 1**. *Let $0<\varepsilon<1$ be a fixed real number and assume that $|B|<A^{2-\varepsilon}$ as well as that the polynomial $X^2-AX+B$ is irreducible. Furthermore, assume that $b\geq 2$ if $c\geq 0$ and $b\geq 3$ if $c<0$. Assume that Equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1),(n_2,m_2),(n_3,m_3)\in \mathbb{N}^2$ with $n_1>n_2>n_3\geq N_0(\varepsilon)$ for the bound $N_0(\varepsilon)=\frac{3}{2\varepsilon}$. Then there exists an effectively computable constant $C_1=C_1(\varepsilon,b)$, depending only on $\varepsilon$ and $b$, such that $n_1<C_1$ or $A<32^{1/\varepsilon}$. In particular, we can choose $$C_1= 4.83\cdot 10^{32}\frac{(\log b)^4}{\varepsilon^2}\log\left[5.56\cdot 10^{36} \frac{(\log b)^4}{\varepsilon^2}\right]^4.$$* Let us note that the bound $N_0=N_0(\varepsilon)$ in Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} ensures that $V_n$ is strictly increasing for $n\geq N_0$ (see Lemma [Lemma 9](#lem:increasing){reference-type="ref" reference="lem:increasing"} below). Let us also mention that it is essential to exclude the case that $V_{n_2}=V_{n_3}$. Although we can bound $n_1$ in terms of $b$ and $\varepsilon$ our method does not provide upper bounds for $A$ and $|B|$. However, in the case that we are more restrictive in the possible choice of $B$ we can prove also upper bounds for $A$ and $|B|$. **Theorem 2**. *Let $\kappa\geq 1$ be a fixed real number and assume that $|B|<\kappa A$ as well as that the polynomial $X^2-AX+B$ is irreducible. Furthermore, assume that $b\geq 2$ if $c\geq 0$ and $b\geq 3$ if $c<0$. Assume that Equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1),(n_2,m_2),(n_3,m_3)\in \mathbb{N}^2$ with $n_1>n_2>n_3\geq 1$. Then there exist effectively computable constants $C_2=C_2(\kappa,b)$ and $C_3=C_3(b)$, depending only on $\kappa$ and $b$, such that $\log A<\max\{C_2,C_3\}$. In particular, we can choose $$C_2=4.35 \cdot 10^{10}\log (4\kappa) \log b\log\left(5.98 \cdot 10^{11} \log b\right)$$ and $$C_3= 9.41\cdot 10^{9}\left[\log\left(1.4\cdot 10^{36}(\log b)^4\log\left[2.23\cdot 10^{37}(\log b)^4\right]^4\right) \log b\right]^2.$$* A straight forward application of our bounds yields: **Corollary 3**. *Assume that $|B|<A$. If the Diophantine equation $$\label{eq:Pillai-ex} V_n-2^m=c$$ with $c\geq 0$ has three solutions $(n_1,m_1),(n_2,m_2),(n_3,m_3)\in \mathbb{N}^2$ with $n_1>n_2>n_3\geq 1$, then either* - *$A<1024$ or* - *$n_1<1.2\cdot 10^{40}$ and $\log A<4.48\cdot 10^{13}$.* Another consequence of Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} together with the results of Chim et. al. 
[@Chim:2018] is that there exist only finitely many Diophantine equations of the form [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} that admit more than two solutions, provided that $|B|<\kappa A$. The following corollary gives a precise statement. **Corollary 4**. *Let $\kappa>0$ be a fixed real number and $b\geq 2$ a fixed integer. Then there exist at most finitely many, effectively computable $9$-tuples $(A,B,c,n_1,m_1,n_2,m_2,n_3,m_3)\in \mathbb{Z}^9$ such that* - *$A>0$, $|B|< \kappa A$ and $A^2-4B>0$;* - *$X^2-AX+B$ is irreducible;* - *if $b=2$, then $c\geq 0$;* - *$n_1>n_2>n_3\geq 1$ and $m_1,m_2,m_3\geq 1$;* - *$V_{n_i}(A,B)-b^{m_i}=c$ for $i=1,2,3$.* *Remark 1*. Let us note that for an application of the results of Chim et al. [@Chim:2018] we have to ensure that $\alpha$ and $b$ are multiplicatively independent. However, $\alpha$ and $b$ are multiplicatively dependent if and only if there exist integers $x,y$ not both zero such that $\alpha^x=b^y$. But $\alpha^x$ cannot be a rational number unless $x=0$ or $\alpha=\sqrt{D}$ for some positive integer $D$. But $\alpha=\sqrt{D}$ would imply $A=0$, which we have excluded. Therefore we can apply their results in our situation and obtain the statement of Corollary [Corollary 4](#cor:absolute){reference-type="ref" reference="cor:absolute"}, provided that we have found an upper bound for $A$. Let us give a quick outline for the rest of the paper. In the next section we establish several lemmas concerning properties of Lucas sequences $V_n$ under the restrictions that $|B|<A^{2-\varepsilon}$ and $|B|<\kappa A$, respectively, that we will frequently use throughout the paper. The main tools for the proofs of the main theorems are lower bounds for linear forms in logarithms of algebraic numbers. In Section [6](#sec:firstbound){reference-type="ref" reference="sec:firstbound"} we establish bounds for $n_1$, which still depend on $\log \alpha$, following the usual approach (cf. [@Chim:2018]). In Section [7](#sec:three-sol){reference-type="ref" reference="sec:three-sol"}, under the assumption that three solutions exist, we obtain a system of inequalities involving linear forms in logarithms which contain $\log \alpha$. Combining these inequalities we obtain a linear form in logarithms which no longer contains $\log \alpha$. Thus we obtain that Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} hold or one of the following two cases occurs: - $n_1m_2-n_2m_1=0$; - $m_2-m_3\ll \log n_1$. That each of these cases implies Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} is shown in the subsequent Sections [8](#sec:degenerate-case){reference-type="ref" reference="sec:degenerate-case"} and [9](#sec:case-m2-m3){reference-type="ref" reference="sec:case-m2-m3"}. # Auxiliary results on Lucas sequences Let $\alpha$ and $\beta$ be the roots of $X^2-AX+B$. By our assumptions $\alpha$ and $\beta$ are distinct real numbers $\notin \mathbb{Q}$. Throughout the rest of the paper we will assume that $|\alpha|>|\beta|$, which we can do since by our assumptions we have $A\neq 0$. Therefore we obtain $$\alpha=\frac{A+\sqrt{A^2-4B}}2 \qquad \text{ and } \qquad \beta=\frac{A-\sqrt{A^2-4B}}2.$$ First note that $\alpha>1$ can be bounded in terms of $A$ and $\varepsilon$, respectively in terms of $A$ and $\kappa$. 
**Lemma 5**. *Assuming that $|B|<A^{2-\varepsilon}$, we have $\frac{A}{2}<\alpha<2A$. Assuming that $|B|<\kappa A$, we have $A<\kappa$ or $\frac{A}{2}<\alpha<2A$.* *Proof.* Assume that $A\geq \kappa$. Then both assumptions $|B|<A^{2-\varepsilon}$ and $|B|<\kappa A$ imply $|B|<A^2$. Thus we get $$\alpha\leq\frac{A+\sqrt{A^2+4|B|}}2< A\frac{1+\sqrt{5}}2 <2A$$ and $$\alpha = \frac{A+\sqrt{A^2-4B}}2> \frac{A}2.$$ ◻ At some point in our proofs of Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} we will use that $\beta$ cannot be to close to $1$. In particular, the following lemma will be needed. **Lemma 6**. *Assume that $A\geq 4$. Then we have $$\left|1-|\beta|\right|\geq \frac 2{2A+5}.$$* *Proof.* First, let us note that the function $f(x)=\frac{A-\sqrt{A^2-4x}}2$ is strictly monotonically increasing for all $x<A^2/4$. Moreover, we have $f(A-1)=1$ and $f(-A-1)=-1$. By our assumption that $X^2-AX+B$ is irreducible, a choice for $B$ such that $|\beta|=1$ is not admissible. Therefore we have $$\left|1-|\beta|\right|\geq \min\{1-f(A-2),f(A)-1,f(-A)+1,-1-f(-A-2)\}.$$ We compute $$\begin{aligned} 1-f(A-2)&=\frac{-(A-2)+\sqrt{A^2-4(A-2)}}2=\frac{4}{2\left((A-2)+\sqrt{A^2-4(A-2)}\right)}\\ &\geq \frac{2}{A-2+\sqrt{A^2-2A+1}}=\frac{2}{2A-3},\end{aligned}$$ provided that $A^2-2A+1\geq A^2-4A+8$, which certainly holds for $A\geq 7/2$. Similar computations yield $$f(A)-1\geq \frac{2}{2A-4},\quad f(-A)+1\geq \frac{2}{2A+4},\quad -1-f(-A-2)\geq \frac{2}{2A+5},$$ provided that $A\geq 4$. ◻ Next, we show that under our assumptions $|\beta|$ is not to large. **Lemma 7**. *Assume that $|B|<A^{2-\varepsilon}$. Then we have $\frac{\alpha}{|\beta|}>\frac{1}{2}A^{\varepsilon}$ or $A^{\varepsilon}< 8$.* *Proof.* Let us assume that $A^{\varepsilon}\geq 8$. First we note that $$\begin{aligned} \left|\frac{\alpha}{\beta}\right|&=\left|\frac{A+\sqrt{A^2-4B}}{A-\sqrt{A^2-4B}}\right|=\left|\frac{2A^2-4B+2A\sqrt{A^2-4B}}{4B} \right| \\ &\geq \frac{2A^2-4|B|+2A\sqrt{A^2-4|B|}}{4|B|}. \end{aligned}$$ Now we consider the function $f(x)=\frac{2A^2-4x+2A\sqrt{A^2-4x}}{4x}$, which is defined for all $x$ with $0<4x\leq A^2$. Then we have $$\begin{aligned} f'(x)&=\frac{4x\left(-4+\frac{-4A}{\sqrt{A^2-4x}}\right)-4\left(2A^2-4x+2A\sqrt{A^2-4x}\right)}{16x^2}\\ &=\frac{-\frac{16xA}{\sqrt{A^2-4x}}-8A^2-8A\sqrt{A^2-4x}}{16x^2}. \end{aligned}$$ Note that it is immediate that $f'(x)<0$ for $x>0$. Thus we deduce $$\begin{aligned} \left| \frac\alpha\beta\right|&>\frac{2A^2-4A^{2-\varepsilon}+2A\sqrt{A^2-4A^{2-\varepsilon}}}{4A^{2-\varepsilon}}\\ &=\frac{1}{2}A^{\varepsilon}-1+\frac{1}{2}A^\varepsilon\sqrt{1-4A^{-\varepsilon}}\\ &=A^{\varepsilon}\left(\frac{1}{2}-\frac{1}{A^{\varepsilon}}+\frac{1}{2}\sqrt{1-4A^{-\varepsilon}}\right)>\frac{1}{2}A^\varepsilon, \end{aligned}$$ since we assume that $A^\varepsilon\geq 8$. ◻ **Lemma 8**. *Assume that $|B|<\kappa A$. Then we have $|\beta|<2\kappa$ or $A< 4 \kappa$.* *Proof.* We assume that $A\geq 4\kappa$. Moreover, we have $$|\beta|=\left|\frac{A-\sqrt{A^2-4B}}{2}\right|\leq \frac{2|B|}{A+\sqrt{A^2-4|B|}}.$$ Let us now consider the function $f(x)=\frac{2x}{A+\sqrt{A^2-4x}}$. This function is strictly increasing with $x$ since obviously the numerator is strictly increasing with $x$ while the denominator is strictly decreasing with $x$. Therefore we obtain $$|\beta|< \frac{2 \kappa A}{A+\sqrt{A^2-4\kappa A}}=\kappa \frac{2}{1+\sqrt{1-\frac{4\kappa}{A}}}\leq 2 \kappa$$ since $A\geq 4\kappa$. 
◻ Now let us take a look at the recurrence sequence $V_n$. **Lemma 9**. *Assume that $|B|<A^{2-\varepsilon}$ and $A^{\varepsilon} \geq 8$. Then $V_n$ is strictly increasing for $n\geq \frac{3}{2\varepsilon}$.* *Proof.* First, we note that $$V_{n+1}-V_n=\alpha^{n+1}+\beta^{n+1}-\alpha^n-\beta^n=\alpha^{n}(\alpha-1)+\beta^{n}(\beta-1)>0$$ certainly holds if $$\alpha^n(\alpha-1)>|\beta|^n(|\beta|+1),$$ respectively $$\label{eq:incr} \left(\frac{\alpha}{|\beta|}\right)^n>\frac{|\beta|+1}{\alpha-1}.$$ Note that $\frac{\alpha}{|\beta|}>\frac{1}{2}A^\varepsilon$, by Lemma [Lemma 7](#lem:bound-alpha-beta-case1){reference-type="ref" reference="lem:bound-alpha-beta-case1"}, and $$|\beta|=\frac{|B|}{\alpha}<\frac{2A^{2-\varepsilon}}{A}=2 A^{1-\varepsilon}.$$ Moreover, note that the smallest possible value for $\alpha$ is $\frac{1+\sqrt{5}}2$. Therefore [\[eq:incr\]](#eq:incr){reference-type="eqref" reference="eq:incr"} is certainly fulfilled if $$\left(\frac{1}{2}A^{\varepsilon}\right)^n > 6A^{1-\varepsilon} > \frac{3A^{1-\varepsilon}}{\frac{-1+\sqrt{5}}2} \geq \frac{2A^{1-\varepsilon}+1}{\frac{1+\sqrt{5}}2-1}.$$ Thus $V_n$ is increasing if $$n\left(\varepsilon\log A - \log 2\right)>\log 6+(1-\varepsilon) \log A$$ or equivalently $$n>\frac{(1-\varepsilon) \log A+\log 6}{\varepsilon\log A-\log 2}.$$ Since the rational function $f(x)=\frac{ax+b}{cx+d}$ is strictly increasing if $ad-bc>0$, strictly decreasing if $ad-bc<0$, and constant if $ad-bc=0$, we obtain that for $$n \geq \frac{3}{2\varepsilon} > \frac{\frac{1-\varepsilon}{\varepsilon} \log 8 + \log 6}{\log 8 - \log 2}$$ the Lucas sequence is strictly increasing. ◻ **Lemma 10**. *Assume that $|B|<\kappa A$ and $A \geq 4\kappa + 4$. Then $V_n$ is strictly increasing for $n\geq 0$.* *Proof.* By [\[eq:incr\]](#eq:incr){reference-type="eqref" reference="eq:incr"} we know that $V_n$ is increasing for all $n\geq 0$ if $1>\frac{|\beta|+1}{\alpha-1}$. Since we assume that $\alpha>\frac{A}{2} \geq 2\kappa+2>2$, we have $|\beta|<2\kappa$, using Lemma [Lemma 8](#lem:bound-alpha-beta-case2){reference-type="ref" reference="lem:bound-alpha-beta-case2"}, and therefore $$1\geq \frac{2\kappa+1}{\frac{A}{2}-1}>\frac{|\beta|+1}{\alpha-1}.$$ ◻ *Remark 2*. Note that assuming $|B|<\kappa A$ and $A\geq \kappa^2$ implies $|B|<A^{3/2}=A^{2-\varepsilon}$ with $\varepsilon=1/2$. Therefore all results that are proven under the assumption $|B|<A^{2-\varepsilon}$ with $\varepsilon=1/2$ also hold under the assumption $|B|<\kappa A$ and $A\geq \kappa^2$. In view of this remark and in view of the proofs of the following lemmata, we will assume for the rest of the paper that one of the following two assumptions holds: A1 : $|B|<A^{2-\varepsilon}$, $A^{\varepsilon}\geq 32$ and $N_0= \frac{3}{2\varepsilon}$, A2 : $|B|<\kappa A$, $A\geq \max\{\kappa^2,16\kappa+12,1024\}$ and $N_0=1$. *Remark 3*. Let us note that the bound $A\geq \kappa^2$ and $A\geq 1024$ in Assumption A2 results in the useful fact that Assumption A2 implies Assumption A1 with $\varepsilon=\frac{1}{2}$, but with $N_0=1$ instead of $\frac{3}{2\varepsilon}$. The assumption $A\geq 16\kappa+12$ is mainly used in the proofs of the Lemmata [Lemma 12](#lem:growth-Vn){reference-type="ref" reference="lem:growth-Vn"} and [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}. 
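The interplay of Assumption A1 with the preceding lemmas is easy to test numerically. The following Python sketch (not part of the paper) fixes an arbitrary admissible pair $(A,B)$ for Assumption A1 with $\varepsilon=\frac12$ and checks the conclusions of Lemmata 5, 7 and 9 for small indices; the concrete values $A=1100$, $B=-4999$ are hypothetical sample parameters, not data from the paper.

```python
import math

# Numerical sanity check (not from the paper): A = 1100, B = -4999, eps = 1/2
# is an arbitrary choice satisfying Assumption A1, for which we verify the
# conclusions of Lemmata 5, 7 and 9 on the first few indices n.
A, B, eps = 1100, -4999, 0.5
N0 = math.ceil(3 / (2 * eps))
disc = A * A - 4 * B

assert A**eps >= 32 and abs(B) < A ** (2 - eps) and math.gcd(A, abs(B)) == 1
assert disc > 0 and math.isqrt(disc) ** 2 != disc   # X^2 - A*X + B irreducible over Q

alpha = (A + math.sqrt(disc)) / 2
beta = (A - math.sqrt(disc)) / 2
assert A / 2 < alpha < 2 * A              # Lemma 5
assert alpha / abs(beta) > A**eps / 2     # Lemma 7 (here A**eps >= 8)

# V_n = alpha^n + beta^n obeys V_n = A*V_{n-1} - B*V_{n-2} with V_0 = 2, V_1 = A.
V = [2, A]
for _ in range(30):
    V.append(A * V[-1] - B * V[-2])

# Lemma 9: V_n is strictly increasing for n >= N0 = 3/(2*eps) = 3.
assert all(V[n + 1] > V[n] for n in range(N0, 30))
print("all sanity checks passed")
```

Such a finite check proves nothing, of course; it only illustrates how the assumptions and the lemmas fit together for one admissible choice of parameters.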
In view of the assumptions stated above an important consequence of Lemmata [Lemma 9](#lem:increasing){reference-type="ref" reference="lem:increasing"} and [Lemma 10](#lem:increasing-2){reference-type="ref" reference="lem:increasing-2"} is the following: **Corollary 11**. *Let us assume that [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has two solutions $(n_1,m_1)$ and $(n_2,m_2)$ with $n_1>n_2\geq N_0$. Then $m_1>m_2$.* **Lemma 12**. *Let $n\geq N_0$, then we have $\frac 54 \alpha^n>V_n>\frac 34 \alpha^n$.* *Proof.* Assume that A1 holds. Then, by Lemma [Lemma 7](#lem:bound-alpha-beta-case1){reference-type="ref" reference="lem:bound-alpha-beta-case1"}, we have $$\left(\frac{|\beta|}\alpha\right)^n<\left(\frac{2}{A^\varepsilon}\right)^n< \frac{1}{4}.$$ If A2 holds, by Lemma [Lemma 8](#lem:bound-alpha-beta-case2){reference-type="ref" reference="lem:bound-alpha-beta-case2"}, we have $$\left(\frac{|\beta|}\alpha\right)^n<\left(\frac{4\kappa}{A}\right)^n<\frac 14.$$ Therefore in any case we get $$\frac{3}{4} \alpha^n < \alpha^n\left(1-\left(\frac{|\beta|}{\alpha}\right)^n\right) \leq \alpha^n+\beta^n=V_n \leq \alpha^n\left(1+\left(\frac{|\beta|}{\alpha}\right)^n\right) < \frac{5}{4} \alpha^n.$$ ◻ Another lemma that will be used frequently is the following: **Lemma 13**. *Assume that Equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has two solutions $(n_1,m_1)$ and $(n_2,m_2)$ with $n_1>n_2\geq N_0$. Then we have $$\frac{5}{2} > \frac{b^{m_1}}{\alpha^{n_1}}>\frac{3}{8}.$$* *Proof.* Assuming the existence of two different solutions implies by an application of Lemma [Lemma 12](#lem:growth-Vn){reference-type="ref" reference="lem:growth-Vn"} the inequality $$\frac{5}{4} \alpha^{n_1}>V_{n_1}>V_{n_1}-V_{n_2}=b^{m_1}-b^{m_2}= b^{m_1}\left(1-\frac 1{b^{m_1-m_2}}\right)\geq \frac{1}{2} b^{m_1},$$ which proves the first inequality. For the second inequality we apply again Lemma [Lemma 12](#lem:growth-Vn){reference-type="ref" reference="lem:growth-Vn"} to obtain $$V_{n_1}-V_{n_2}>\frac{3}{4} \alpha^{n_1}\left(1-\frac{5}{3\alpha^{n_1-n_2}}\right).$$ Since we assume in any case that $\alpha>\frac{A}{2}>4$, we get $V_{n_1}-V_{n_2}> \frac{3}{8} \alpha^{n_1}$ and thus $$\frac{3}{8} \alpha^{n_1}<V_{n_1}-V_{n_2}=b^{m_1}-b^{m_2}< b^{m_1},$$ which yields the second inequality. ◻ Finally, let us remind Carmichael's theorem [@Carmichael:1913 Theorem XXIV]: **Lemma 14**. *For any $n\neq 1,3$ there exists a prime[^1] $p$ such that $p\mid V_n$, but $p\nmid V_m$ for all $m<n$, except for the case that $n=6$ and $(A,B)=(1,-1)$.* This lemma can be used to prove the following result: **Lemma 15**. *Assume that $c=0$ and $A>1$. Then the Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has at most one solution $(n,m)$ with $n\geq 1$.* *Proof.* Assume that [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has two solutions $(n_1,m_1), (n_2,m_2)$ with $n_1>n_2\geq 1$. Then by Carmichael's primitive divisor theorem (Lemma [Lemma 14](#lem:Carmichael){reference-type="ref" reference="lem:Carmichael"}) we deduce that $n_1=3$ and $n_2=1$. Since $V_1=A$ and $V_3=A^3-3AB$ we obtain the system of equations $$A=b^{m_2}, \qquad A^3-3AB=b^{m_1}$$ which yields $$b^{2m_2}-3B=b^{m_1-m_2}.$$ That is $b\mid 3B$. Since we assume that $\gcd (A,B)=1$, we deduce $b\mid 3$, i.e. $b=3$. 
We also conclude that $3\nmid B$ which yields, by considering $3$-adic valuations, that $m_1-m_2=1$ since $m_2=0$ would imply $A=1$. Hence we have $B=3^{2m_2-1}-1$. Note that we also assume $A^2-4B>0$ which implies that $$3^{2m_2}-4 (3^{2m_2-1}-1)= 4-3^{2m_2-1}>0$$ holds. But this is only possible for $m_2=1$ and we conclude that $A=3$ and $B=2$. Since $X^2-3X+2=(X-2)(X-1)$ is not irreducible, this is not an admissible case. Therefore there exists at most one solution to [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"}. ◻ # Lower bounds for linear forms in logarithms {#sec:linforms} The main tool in proving our main theorems are lower bounds for linear forms of logarithms of algebraic numbers. In particular, we will use Matveev's lower bound proven in [@Matveev:2000]. Therefore let $\eta\neq 0$ be an algebraic number of degree $\delta$ and let $$a_{\delta}\left(X-\eta^{(1)}\right) \cdots \left(X-\eta^{(\delta)}\right) \in \mathbb{Z}[X]$$ be the minimal polynomial of $\eta$. Then the absolute logarithmic Weil height is defined by $$h(\eta)=\frac 1\delta \left(\log |a_\delta|+\sum_{i=1}^\delta \max\{0,\log|\eta^{(i)}|\}\right).$$ With this notation the following result due to Matveev [@Matveev:2000] holds: **Lemma 16** (Theorem 2.2 with $r=1$ in [@Matveev:2000]). *Denote by $\eta_1, \ldots, \eta_N$ algebraic numbers, neither $0$ nor $1$, by $\log\eta_1, \ldots, \log\eta_N$ determinations of their logarithms, by $D$ the degree over $\mathbb{Q}$ of the number field $K = \mathbb{Q}(\eta_1,\ldots,\eta_N)$, and by $b_1, \ldots, b_N$ rational integers with $b_N\neq 0$. Furthermore let $\kappa=1$ if $K$ is real and $\kappa=2$ otherwise. For all integers $j$ with $1\leq j\leq N$ choose $$A_j\geq \max\{D h(\eta_j), |\log\eta_j|, 0.16\},$$ and set $$E=\max\left\{\frac{|b_j| A_j}{A_N}\: :\: 1\leq j \leq N \right\}.$$ Assume that $$\Lambda:=b_1\log \eta_1+\cdots+b_N \log \eta_N \neq 0.$$ Then $$\log |\Lambda| \geq -C(N,\kappa)\max\{1,N/6\} C_0 W_0 D^2 \Omega$$ with $$\begin{gathered} \Omega=A_1\cdots A_N, \\ C(N,\kappa)= \frac {16}{N!\, \kappa} e^N (2N +1+2 \kappa)(N+2) (4(N+1))^{N+1} \left( \frac 12 eN\right)^{\kappa}, \\ C_0= \log\left(e^{4.4N+7}N^{5.5}D^2 \log(eD)\right), \quad W_0=\log(1.5eED \log(eD)). \end{gathered}$$* In our applications we will be in the situation $N \in \{2,3\}$ and $K=\mathbb{Q}(\alpha)\subseteq \mathbb{R}$, i.e. we have $D=2$ and $\kappa=1$. In this special case Matveev's lower bounds take the following form: **Corollary 17**. *Let the notations and assumptions of Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"} be in force. Furthermore assume $D=2$ and $\kappa=1$. Then we have $$\label{eq:linearform-bound} \begin{aligned} \log|\Lambda|&\geq - 7.26\cdot 10^{10}\log(13.81E)\Omega &\qquad \text{for $N=3$},\\ \log|\Lambda|&\geq - 6.7\cdot 10^{8} \log(13.81E)\Omega &\qquad \text{for $N=2$}.\\ \end{aligned}$$* *Remark 4*. Let us note that the form of $E$ is essential in our proof to obtain an absolute bound for $n_1$ in Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"}. Let us also note that in the case of $N=2$ one could use the results of Laurent [@Laurent:2008] to obtain numerically better values but with an $\log(E)^2$ term instead. This would lead to numerically smaller upper bounds for concrete applications of our theorems. However, we refrain from the application of these results to keep our long and technical proof more concise. 
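The numerical coefficients in Corollary 17 can be recovered by evaluating the quantities of Lemma 16 at $D=2$ and $\kappa=1$: one computes $C(N,1)\max\{1,N/6\}\,C_0\,D^2$ for $N=2,3$ and notes that $W_0=\log(1.5eED\log(eD))=\log(13.81E)$ up to rounding. The following Python sketch (not part of the paper) carries out this arithmetic.

```python
import math

# Recover the numerical coefficients of Corollary 17 from Lemma 16 (Matveev)
# in the special case D = 2, kappa = 1 used in this paper.
e = math.e
D, kappa = 2, 1

def corollary_coefficient(N):
    # C(N, kappa) from Lemma 16, multiplied by max{1, N/6} * C_0 * D^2.
    C = ((16 / (math.factorial(N) * kappa)) * e**N * (2 * N + 1 + 2 * kappa)
         * (N + 2) * (4 * (N + 1)) ** (N + 1) * (0.5 * e * N) ** kappa)
    C0 = math.log(e ** (4.4 * N + 7) * N**5.5 * D**2 * math.log(e * D))
    return C * max(1, N / 6) * C0 * D**2

print(f"{corollary_coefficient(3):.3e}")       # about 7.25e10, rounded up to 7.26e10
print(f"{corollary_coefficient(2):.3e}")       # about 6.69e8, rounded up to 6.7e8
print(f"{1.5 * e * D * math.log(e * D):.3f}")  # about 13.807, rounded to 13.81
```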
In order to apply Matveev's lower bounds, we provide some height computations. First, we note the following well known properties of the absolute logarithmic height for any $\eta, \gamma \in \overline{\mathbb{Q}}$ and $l \in \mathbb{Q}$ (see for example [@Zannier:DA Chapter 3] for a detailed reference): $$\begin{aligned} h(\eta \pm \gamma) & \leq h(\eta) + h(\gamma) + \log 2, \\ h(\eta \gamma) & \leq h(\eta) + h(\gamma),\\ h(\eta^l) & = |l| h(\eta). \end{aligned}$$ Moreover, note that for a positive integer $b$ we have $h(b)=\log b$ and $$h(\alpha)=\frac 12 \left(\max\{0,\log \alpha\}+\max\{0,\log |\beta|\}\right)\leq \log \alpha.$$ This together with the above mentioned properties yields the following lemma: **Lemma 18**. *Under the assumptions A1 or A2 and using the above notations from Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"} the following inequalities hold for any $t\in \mathbb{Z}_{>0}$: $$\begin{aligned} 2\log b & \geq \max\{Dh(b),|\log b|,0.16\},\\ 2\log \alpha & \geq \max\{Dh(\alpha),|\log \alpha|,0.16\},\\ 2(t+1)\log b & \geq \max\{Dh(b^t\pm 1),|\log (b^t\pm 1)|,0.16\},\\ 2(t+1)\log \alpha & \geq \max\{Dh(\alpha^t\pm 1),|\log (\alpha^t\pm 1)|,0.16\}. \end{aligned}$$* One other important aspect in applying Matveev's result (Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"}) is that the linear form $\Lambda$ does not vanish. We will resolve this issue with the following lemma: **Lemma 19**. *Assume that the Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)\in \mathbb{N}^2$ with $n_1>n_2>n_3>0$. Then we have $$\begin{aligned} \Lambda_i&:=n_i\log\alpha-m_i\log b\neq 0,& \text{for $i=1,2,3$}\\ \Lambda'&:=n_2\log\alpha-m_2\log b+\log\left(\frac{\alpha^{n_1-n_2}-1}{b^{m_1-m_2}-1}\right)\neq 0. \end{aligned}$$* *Proof.* Assume that $\Lambda_i=n_i\log\alpha-m_i\log b=0$ for some $i\in\{1,2,3\}$. But $\Lambda_i=0$ implies $\alpha^{n_i}-b^{m_i}=0$ which results in view of [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} in $$\alpha^{n_i}+\beta^{n_i}-b^{m_i}=\beta^{n_i}=c.$$ Since $X^2-AX+B$ is irreducible, $\alpha$ and $\beta$ are Galois conjugate. Therefore, by applying the non-trivial automorphism of $K=\mathbb{Q}(\alpha)$ to the equation $\beta^{n_i}=c$, we obtain $\alpha^{n_i}=c$ since $c\in \mathbb{Q}$. But this implies $\beta^{n_i}=\alpha^{n_i}$, hence $|\alpha|=|\beta|$, a contradiction to our assumptions. Now, let us assume that $$\Lambda'=n_2\log\alpha-m_2\log b+\log\left(\frac{\alpha^{n_1-n_2}-1}{b^{m_1-m_2}-1}\right)= 0.$$ This implies $\alpha^{n_1}-\alpha^{n_2}=b^{m_1}-b^{m_2}$ which results in view of [\[eq:Pillai-twosol\]](#eq:Pillai-twosol){reference-type="eqref" reference="eq:Pillai-twosol"} in $\beta^{n_1}=\beta^{n_2}$. But then $\beta=0$ or $\beta$ is a root of unity. Both cases contradict our assumption that $X^2-AX+B$ is irreducible and $\beta \in \mathbb{R}$. ◻ Finally, we want to record three further elementary lemmata that will be helpful. The first lemma is a standard fact from real analysis. **Lemma 20**. *If $|x-1|\leq 0.5$, then $|\log x| \leq 2 |x-1|$, and if $|x|\leq 0.5$, then we have $$\frac{2}{9} x^2 \leq x-\log (1+x) \leq 2 x^2.$$* *Proof.* A direct application of Taylor's theorem with a Cauchy and Lagrange remainder, respectively. ◻ Next, we want to state another estimate from real analysis: **Lemma 21**. 
*Let $x,n\in \mathbb{R}$ such that $|2nx|<0.5$ and $n\geq 1$. Then we have $$\left|(1+x)^n-1\right|\leq 2.6n|x|.$$* *Proof.* Since $e^y$ is a convex function, we have for $0\leq y <0.5$ that $$e^y\leq 1+y \frac{e^{0.5}-1}{\frac 12}\leq 1+1.3 y,$$ and for $-0.5<y\leq 0$ we obtain $$e^y\leq 1+y\frac{e^{-0.5}-1}{\frac 12}\leq 1-0.79y.$$ That is we have $|1-e^y|\leq 1.3 |y|$ for $|y|<0.5$. Note that by our assumptions we have $|x|<0.5$ and therefore we obtain by an application of Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} $$\left|(1+x)^n-1\right|=\left|\exp(n\log(1+x))-1\right|\leq |\exp(2nx)-1|\leq 2.6n|x|.$$ ◻ The third lemma is due to Pethő and de Weger [@Pethoe:1986]. **Lemma 22**. *Let $a,b \geq 0$, $h \geq 1$ and $x\in \mathbb{R}$ be the largest solution of $x=a + b(\log x)^h$. If $b > (e^2/h)^h$, then $$x<2^h\left(a^{1/h}+b^{1/h}\log (h^h b)\right)^h,$$ and if $b\leq (e^2/h)^h$, then $$x\leq 2^h\left(a^{1/h}+2e^2 \right)^h.$$* A proof of this lemma can be found in [@Smart:DiGl Appendix B]. # A lower bound for $|c|$ in terms of $n_1$ and $\alpha$ {#sec:c-low-bound} The purpose of this section is to prove a lower bound for $|c|$. In particular we prove the following proposition: **Proposition 23**. *Assume that Assumption A1 or A2 holds and assume that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has two solutions $(n_1,m_1)$ and $(n_2,m_2)$ with $n_1>n_2\geq N_0$. Then we have $$|c|>\alpha^{n_1-K_0\log(27.62 n_1)}-|\beta|^{n_1}$$ with $K_0=2.69 \cdot 10^9 \log b$.* *Proof.* Let us assume that, in contrary to the content of the proposition, $$\frac{|c|+|\beta|^{n_1}}{\alpha^{n_1}}\leq \alpha^{-K_0\log (27.62 n_1)}<\frac{1}{2}.$$ We consider equation $$\alpha^{n_1}+\beta^{n_1}-b^{m_1}=c$$ and obtain $$\left|\frac{b^{m_1}}{\alpha^{n_1}}-1\right|\leq \frac{|c|+|\beta|^{n_1}}{\alpha^{n_1}}<\frac{1}{2}$$ which yields $$\left|m_1\log b-n_1\log\alpha\right|\leq 2\frac{|c|+|\beta|^{n_1}}{\alpha^{n_1}}.$$ The goal is to apply Matveev's theorem (Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"}) with $N=2$. Note that with $\eta_1=b$ and $\eta_2=\alpha$ we choose $A_1=2\log b$ and $A_2=2\log\alpha$, in view of Lemma [Lemma 18](#lem:heightcomp){reference-type="ref" reference="lem:heightcomp"}, and obtain $$E=\max\left\{\frac{m_1\log b}{\log \alpha},n_1\right\}.$$ Due to Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"} we have $$\frac{m_1\log b}{\log \alpha}<n_1+\frac{\log(5/2)}{\log \alpha}<2n_1.$$ Therefore we obtain by Corollary [Corollary 17](#cor:matveev-spec){reference-type="ref" reference="cor:matveev-spec"} that $$\begin{aligned} 2.68 \cdot 10^9 \log b\log \alpha \log(27.62 n_1) &\geq -\log\left|m_1\log b-n_1\log\alpha\right| \\ &\geq n_1\log \alpha - \log(|c|+|\beta|^{n_1}) -\log 2\end{aligned}$$ which implies the content of the proposition. ◻ # Bounds for $n_1$ in terms of $\log \alpha$ {#sec:firstbound} In this section we will assume that Assumption A1 or A2 holds. However, in the proofs we will mainly consider the case that Assumption A1 holds. Note that this is not a real restriction since Assumption A2 implies Assumption A1 with $\varepsilon=\frac 12$ and $N_0=1$ instead of $N_0=\frac{3}{2\varepsilon}$ (cf. Remark [Remark 3](#rem:A1-A2){reference-type="ref" reference="rem:A1-A2"}). 
Also assume that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)$ with $n_1>n_2>n_3\geq N_0$. In this section we follow the approach of Chim et. al. [@Chim:2018] and prove upper bounds for $n_1$ in terms of $\alpha$. To obtain explicit bounds and to keep track of the dependence on $\log b$ and $\log \alpha$ of the bounds we repeat their proof. This section also delivers the set up for the later sections which provide proofs of our main theorems. Moreover, note that the assumption that three solutions exist, simplifies the proof of Chim et. al. [@Chim:2018]. The main result of this section is the following statement: **Proposition 24**. *Assume that Assumption A1 or A2 holds and that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)$ with $n_1>n_2>n_3\geq N_0$. Then we have $$n_1<2.58\cdot 10^{22} \frac{\log \alpha}{\varepsilon} (\log b)^2\log\left(7.12\cdot 10^{23}\frac{\log \alpha}{\varepsilon} (\log b)^2\right)^2,$$ where we choose $\varepsilon=1/2$ in case that Assumption A2 holds.* Since we assume the existence of two solutions, we have $$V_{n_1}-b^{m_1}=c=V_{n_2}-b^{m_2}$$ and therefore obtain $$\label{eq:Pillai-twosol} \alpha^{n_1}+\beta^{n_1}-\alpha^{n_2}-\beta^{n_2}=b^{m_1}-b^{m_2}.$$ Let us write $\gamma :=\min\{\alpha,\alpha/|\beta|\}$. Note that we have $\gamma>\frac{1}{2} A^\varepsilon>\frac{1}{4} \alpha^\varepsilon$, by Lemma [Lemma 7](#lem:bound-alpha-beta-case1){reference-type="ref" reference="lem:bound-alpha-beta-case1"} and Lemma [Lemma 5](#lem:alpha-A-est){reference-type="ref" reference="lem:alpha-A-est"}. With this notation we get the inequality $$\left|\frac{b^{m_1}}{\alpha^{n_1}}-1\right|\leq \alpha^{n_2-n_1}+\frac{b^{m_2}}{\alpha^{n_1}}+\frac{|\beta|^{n_1}+|\beta|^{n_2}}{\alpha^{n_1}}.$$ Note that, depending on whether $|\beta|>1$ or $|\beta|\leq 1$, we have $$\frac{|\beta|^{n_1}+|\beta|^{n_2}}{\alpha^{n_1}}\leq \begin{dcases} \dfrac{2}{\alpha^{n_1}} \leq 2\gamma^{-n_1} &\quad \text{if $|\beta|\leq 1$} \\ 2 \cdot \dfrac{|\beta|^{n_1}}{\alpha^{n_1}} \leq 2\gamma^{-n_1} &\quad \text{if $|\beta|> 1$} \end{dcases}.$$ Therefore, using Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}, we obtain $$\label{eq:L1-exp} \begin{split} \left|\frac{b^{m_1}}{\alpha^{n_1}}-1\right|&\leq \alpha^{n_2-n_1}+\frac{b^{m_2}}{\alpha^{n_1}}+2\gamma^{-n_1}\\ &\leq 2\max\left\{\frac 72 \alpha^{n_2-n_1},2\gamma^{-n_1}\right\}\\ &\leq 7\max\left\{\alpha^{n_2-n_1},\gamma^{-n_1}\right\}. \end{split}$$ First, let us assume that the maximum in [\[eq:L1-exp\]](#eq:L1-exp){reference-type="eqref" reference="eq:L1-exp"} is $\gamma^{-n_1}$. Under our assumptions we have $A^{\varepsilon}\geq 32$ and $n_1\geq 3$, i.e. $7\gamma^{-n_1}<\frac{1}{2}$. Thus taking logarithms and applying Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} yields $$|\underbrace{m_1\log b-n_1\log\alpha}_{=:\Lambda}| \leq 14\gamma^{-n_1}.$$ We apply Matveev's theorem (Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"}) with $N=2$. 
Note that with $\eta_1=b$ and $\eta_2=\alpha$ we choose $A_1=2\log b$ and $A_2=2\log\alpha$, in view of Lemma [Lemma 18](#lem:heightcomp){reference-type="ref" reference="lem:heightcomp"}, to obtain $$E=\max\left\{\frac{m_1\log b}{\log \alpha},n_1\right\}.$$ Note that, due to Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}, $$\frac{m_1\log b}{\log \alpha}<n_1+\frac{\log(5/2)}{\log \alpha}<2n_1.$$ Therefore we obtain from Corollary [Corollary 17](#cor:matveev-spec){reference-type="ref" reference="cor:matveev-spec"} that $$\begin{aligned} 2.68 \cdot 10^9 \log b\log \alpha \log(27.62 n_1) &\geq -\log|\Lambda| \geq n_1 \log \gamma -\log 14 \\ &\geq n_1 (\varepsilon\log \alpha - \log 4) -\log 14 \\ &\geq n_1 \frac{\varepsilon}{2} \log \alpha -\log 14,\end{aligned}$$ where for the last inequality we used $A^{\varepsilon} \geq 32$. Thus we have $$5.38 \cdot 10^9 \frac{\log b}{\varepsilon} \log(27.62 n_1)>n_1,$$ which, using Lemma [Lemma 22](#lem:log-sol){reference-type="ref" reference="lem:log-sol"}, proves Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"} in this case. Note that this also proves, in this specific case, Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"}. Now we assume that the maximum in [\[eq:L1-exp\]](#eq:L1-exp){reference-type="eqref" reference="eq:L1-exp"} is $\alpha^{n_2-n_1}$. By our assumptions on $A$ we have $7\alpha^{n_2-n_1}<\frac{1}{2}$ and obtain, by Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"}, $$|\underbrace{m_1\log b-n_1\log\alpha}_{=:\Lambda}|\leq 14\alpha^{n_2-n_1}.$$ As computed before, an application of Matveev's theorem yields $$2.68 \cdot 10^9 \log b\log \alpha \log(27.62 n_1) \geq -\log|\Lambda|\geq (n_1-n_2) \log \alpha -\log 14$$ and therefore $$\label{eq:n1-n2-bound} 2.69 \cdot 10^9 \log b \log(27.62 n_1)>n_1-n_2.$$ For the rest of the proof of Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"} we will assume that [\[eq:n1-n2-bound\]](#eq:n1-n2-bound){reference-type="eqref" reference="eq:n1-n2-bound"} holds. Since we assume that a third solution exists, the statement of Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"} also holds for $m_2$ and $n_2$ instead of $m_1$ and $n_1$. 
In particular we have $$\begin{aligned} m_1 &< n_1 \frac{\log \alpha}{\log b} + \frac{\log \frac{5}{2}}{\log b} \leq n_1 \frac{\log \alpha}{\log b} + 2\log \frac{5}{2} \\ m_2 &> n_2 \frac{\log \alpha}{\log b} + \frac{\log \frac{3}{8}}{\log b} \geq n_2 \frac{\log \alpha}{\log b} + 2\log \frac{3}{8}\end{aligned}$$ which yields $$\label{eq:m1-m2-bound} m_1-m_2< \frac{\log \alpha}{\log b}(n_1-n_2)+\log 49< 2.7 \cdot 10^9 \log \alpha \log(27.62 n_1).$$ Let us rewrite Equation [\[eq:Pillai-twosol\]](#eq:Pillai-twosol){reference-type="eqref" reference="eq:Pillai-twosol"} again to obtain the inequality $$\left|\frac{b^{m_2}}{\alpha^{n_2}} \cdot\frac{b^{m_1-m_2}-1}{\alpha^{n_1-n_2}-1}-1\right|\leq \frac{|\beta|^{n_1}+|\beta|^{n_2}}{\alpha^{n_1}-\alpha^{n_2}} \leq 4\gamma^{-n_1}.$$ As previously noted we have $4\gamma^{-n_1}<\frac{1}{2}$ and therefore we obtain $$\left|\smash{\underbrace{m_2\log b- n_2\log \alpha + \log \left(\frac{b^{m_1-m_2}-1}{\alpha^{n_1-n_2}-1}\right)}_{=:\Lambda'}} \vphantom{\left(\frac{b^{m_1-m_2}-1}{\alpha^{n_1-n_2}-1}\right)}\right| \vphantom{\underbrace{m_2\log b- n_2\log \alpha + \log \left(\frac{b^{m_1-m_2}-1}{\alpha^{n_1-n_2}-1}\right)}_{=:\Lambda'}} \leq 8\gamma^{-n_1}.$$ We aim to apply Matveev's theorem to $\Lambda'$ with $\eta_3=\frac{b^{m_1-m_2}-1}{\alpha^{n_1-n_2}-1}$. Note that due to Lemma [Lemma 18](#lem:heightcomp){reference-type="ref" reference="lem:heightcomp"} and the properties of heights we obtain $$\begin{aligned} \max\{D h(\eta_3), |\log\eta_3|, 0.16\}&\leq 2(m_1-m_2+1)\log b+2(n_1-n_2+1)\log \alpha\\ &\leq 1.1\cdot 10^{10} \log b\log \alpha \log(27.62 n_1) =:A_3.\end{aligned}$$ Thus we obtain $E\leq 2n_1$ as before, and from Matveev's theorem $$3.2\cdot 10^{21} [\log b\log \alpha \log(27.62 n_1)]^2 \geq n_1 \log \gamma - \log 8 \geq n_1 \frac{\varepsilon}{2} \log \alpha -\log 8,$$ which yields $$\label{eq:n1-bound} 6.42\cdot 10^{21} \frac{\log \alpha}{\varepsilon} [\log b \log(27.62 n_1)]^2>n_1.$$ If we put $n'=27.62 n_1$ and apply Lemma [Lemma 22](#lem:log-sol){reference-type="ref" reference="lem:log-sol"} to [\[eq:n1-bound\]](#eq:n1-bound){reference-type="eqref" reference="eq:n1-bound"}, then we end up with $$n'< 7.12\cdot 10^{23} \frac{\log \alpha}{\varepsilon} (\log b)^2 \log\left(7.12\cdot 10^{23} \frac{\log \alpha}{\varepsilon}(\log b)^2\right)^2$$ which yields the content of Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"}. # Combining linear forms of logarithms {#sec:three-sol} As done before, let us assume that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)$ with $n_1>n_2>n_3\geq N_0$. Again we assume that Assumption A1 or A2 holds. Let us reconsider Inequality [\[eq:L1-exp\]](#eq:L1-exp){reference-type="eqref" reference="eq:L1-exp"} with $n_1,m_1,n_2,m_2$ replaced by $n_2,m_2,n_3,m_3$, respectively. Then we obtain $$\label{eq:exp-L2} \begin{split} \left|\frac{b^{m_2}}{\alpha^{n_2}}-1\right|&\leq \alpha^{n_3-n_2}+\frac{b^{m_3}}{\alpha^{n_2}}+2\gamma^{-n_2}\\ &\leq 3 \max\left\{\alpha^{n_3-n_2},\frac{5}{2}b^{m_3-m_2},2\gamma^{-n_2}\right\}. \end{split}$$ Note that with the assumption that three solutions exist we can apply Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"} to $\frac{b^{m_2}}{\alpha^{n_2}}$ but not to $\frac{b^{m_3}}{\alpha^{n_3}}$. 
Let us assume for the next paragraphs that $$\label{eq:max} M_0:=\max\left\{3\alpha^{n_3-n_2},\frac{15}{2}b^{m_3-m_2},6\gamma^{-n_2}, 7\alpha^{n_2-n_1},4\gamma^{-n_1}\right\}<\frac{1}{2}.$$ Then, by applying Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} to [\[eq:L1-exp\]](#eq:L1-exp){reference-type="eqref" reference="eq:L1-exp"} and [\[eq:exp-L2\]](#eq:exp-L2){reference-type="eqref" reference="eq:exp-L2"}, we obtain the system of inequalities $$\begin{aligned} |\underbrace{m_1\log b-n_1\log \alpha}_{=:\Lambda_1}|&\leq \max\left\{14 \alpha^{n_2-n_1},8\gamma^{-n_1}\right\},\\ |\underbrace{m_2\log b-n_2\log \alpha}_{=:\Lambda_2}|&\leq\max\left\{6\alpha^{n_3-n_2},15b^{m_3-m_2},12\gamma^{-n_2}\right\}.\end{aligned}$$ We eliminate the term $\log \alpha$ from these inequalities by considering $\Lambda_0=n_2\Lambda_1-n_1\Lambda_2$ and obtain the inequality $$\label{eq:LinForm-L0} \begin{split} |\Lambda_0|&=|(n_2m_1-n_1m_2)\log b|\\ &\leq \max\left\{12n_1 \alpha^{n_3-n_2},30 n_1 b^{m_3-m_2},24 n_1 \gamma^{-n_2}, 28 n_2\alpha^{n_2-n_1},16 n_2\gamma^{-n_1} \right\}. \end{split}$$ Let us write $M$ for the maximum on the right-hand side of [\[eq:LinForm-L0\]](#eq:LinForm-L0){reference-type="eqref" reference="eq:LinForm-L0"}. If $n_2m_1-n_1m_2\neq 0$, we obtain the inequality $\log b \leq M$. Since we will study the case that $n_2m_1-n_1m_2=0$ in Section [8](#sec:degenerate-case){reference-type="ref" reference="sec:degenerate-case"}, we will assume for the rest of this section that $n_2m_1-n_1m_2\neq 0$, i.e. we have $\log b \leq M$. Therefore we have to consider five different cases. In each case we want to find an upper bound for $\log \alpha$ if possible:

**The case $M=12n_1 \alpha^{n_3-n_2}$:** In this case we get $$\log \log b \leq \log (12n_1)-(n_2-n_3)\log \alpha$$ which yields $$\log \alpha \leq \log \left(12n_1/\log b\right)<\log (17.4 n_1),$$ since we assume $b\geq 2$.

**The case $M=30 n_1 b^{m_3-m_2}$:** In this case we obtain $$\log \log b \leq \log (30n_1)-(m_2-m_3)\log b$$ which yields $$m_2-m_3 \leq \frac{\log (30n_1/\log b)}{\log b}<1.45 \log(43.3 n_1).$$ Obtaining a bound for $\log \alpha$ from this inequality is not straightforward, and we will deal with this case in Section [9](#sec:case-m2-m3){reference-type="ref" reference="sec:case-m2-m3"}.

**The case $M=24 n_1 \gamma^{-n_2}$:** This case implies $$\log \log b \leq \log (24 n_1)-n_2\log \gamma \leq \log (24 n_1)-n_2 \frac{\varepsilon}{2} \log \alpha$$ and we obtain $$\log \alpha \leq \frac{2\log \left(24n_1/\log b\right)}{\varepsilon}<\frac{2\log (34.7 n_1)}{\varepsilon}.$$

**The case $M=28 n_2\alpha^{n_2-n_1}$:** By a similar computation as in the first case, we obtain in this case the inequality $$\log \alpha <\log (40.4 n_2)< \log (40.4 n_1).$$

**The case $M=16 n_2\gamma^{-n_1}$:** Almost the same computations as in the case that $M=24 n_1 \gamma^{-n_2}$ lead to $$\log \alpha <\frac{2\log (23.1 n_1)}{\varepsilon}.$$

In the case that [\[eq:max\]](#eq:max){reference-type="eqref" reference="eq:max"} does not hold, i.e. that $M_0\geq 1/2$, we obtain by similar computations in each of the five possibilities the following inequalities:

**The case $M_0=3 \alpha^{n_3-n_2}$:** $\log \alpha \leq \log 6$;

**The case $M_0=\frac{15}{2} b^{m_3-m_2}$:** $m_2-m_3\leq 3$;

**The case $M_0=6 \gamma^{-n_2}$:** $\log\alpha \leq \frac{\log 144}{\varepsilon}$;

**The case $M_0=7 \alpha^{n_2-n_1}$:** $\log \alpha \leq \log 14$;

**The case $M_0=4 \gamma^{-n_1}$:** $\log\alpha \leq \frac{\log 64}{\varepsilon}$.
Let us recap what we have proven so far in the following lemma:

**Lemma 25**. *Assume that Assumption A1 or A2 holds and assume that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)$ with $n_1>n_2>n_3\geq N_0$. Then one of the following three possibilities holds:*

1. *$n_2m_1-n_1m_2=0$;*
2. *$m_2-m_3 < 1.45 \log(43.3 n_1)$;*
3. *$\log \alpha <\frac{2\log (34.7 n_1)}{\varepsilon}.$*

Since we will deal with the first and second possibilities in the next sections, we close this section by proving that the last possibility implies Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}. Therefore let us plug the upper bound for $\log \alpha$ into Inequality [\[eq:n1-bound\]](#eq:n1-bound){reference-type="eqref" reference="eq:n1-bound"} to obtain $$1.29\cdot 10^{22} \left(\frac{\log b}{\varepsilon}\right)^2 \log(34.7 n_1)^3>n_1.$$ Writing $n'=34.7 n_1$, this inequality turns into $$4.48\cdot 10^{23} \left(\frac{\log b}{\varepsilon}\right)^2 \log(n')^3>n'$$ and an application of Lemma [Lemma 22](#lem:log-sol){reference-type="ref" reference="lem:log-sol"} implies $$n_1 < 1.04 \cdot 10^{23} \left(\frac{\log b}{\varepsilon}\right)^2 \log\left( 1.21 \cdot 10^{25} \left(\frac{\log b}{\varepsilon}\right)^2\right)^3.$$ Thus Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} is proven in this case.

Now let us assume that Assumption A2 holds. By Remark [Remark 3](#rem:A1-A2){reference-type="ref" reference="rem:A1-A2"} we get the bound $$n_1<4.16 \cdot 10^{23} \left(\log b\right)^2 \log\left( 4.84 \cdot 10^{25} \left(\log b\right)^2\right)^3.$$ If we insert our upper bound for $n_1$ into the upper bound for $\log \alpha$, we obtain $$\begin{aligned} \log A &< \log \alpha + \log 2 \leq 5 \log (34.7 n_1) \\ &\leq 5 \log \left( 1.45 \cdot 10^{25} \left(\log b\right)^2 \log\left( 4.84 \cdot 10^{25} \left(\log b\right)^2\right)^3 \right).\end{aligned}$$ This proves Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} in the current case.

# The case $n_1m_2-n_2m_1=0$ {#sec:degenerate-case}

We distinguish between the cases $c\geq 0$ and $c<0$.
## The case $c\geq 0$ In this case we have $$\label{eq:c-alpha-n3-bound} 0\leq c=V_{n_3}-b^{m_3}<\alpha^{n_3}+\beta^{n_3}<2\alpha^{n_3}.$$ Furthermore it holds $$\frac{b^{m_1}}{\alpha^{n_1}}=1+\frac{\beta^{n_1}-c}{\alpha^{n_1}}\qquad \text{ as well as } \qquad\frac{b^{m_2}}{\alpha^{n_2}}=1+\frac{\beta^{n_2}-c}{\alpha^{n_2}}.$$ Since $$\left|\frac{\beta^{n_2}-c}{\alpha^{n_2}}\right|\leq \left(\frac{|\beta|}{\alpha}\right)^{n_2}+2\alpha^{n_3-n_2}<\frac{1}{2}$$ holds under our assumptions, we may apply Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} to get the two inequalities $$\begin{aligned} \frac{2}{9} \left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2& \leq \frac{\beta^{n_1}-c}{\alpha^{n_1}} -\left(m_1\log b-n_1\log \alpha\right) \leq 2 \left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2,\\ \frac{2}{9} \left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2& \leq \frac{\beta^{n_2}-c}{\alpha^{n_2}}-\left(m_2\log b-n_2\log \alpha\right) \leq 2 \left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2.\end{aligned}$$ Multiplying the first inequality by $n_2$ and the second one by $n_1$ as well as forming the difference afterwards yields $$\label{eq:log-est-for-alpha} \begin{split} \frac{2n_2}{9} \left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2-2n_1\left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2 &\leq n_2\frac{\beta^{n_1}-c}{\alpha^{n_1}}-n_1\frac{\beta^{n_2}-c}{\alpha^{n_2}} \\ &\leq 2n_2\left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2- \frac{2n_1}{9} \left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2. \end{split}$$ Let us note that $(a+b)^2\leq 4\max\{|a|^2,|b|^2\}$, and therefore we obtain $$\begin{aligned} c\left(\frac{n_1}{\alpha^{n_2}}-\frac{n_2}{\alpha^{n_1}}\right)&\leq \frac{n_2|\beta|^{n_1}}{\alpha^{n_1}}+\frac{n_1|\beta|^{n_2}}{\alpha^{n_2}}+8n_2\max\left\{\frac{|\beta|^{2n_1}}{\alpha^{2n_1}},\frac{c^2}{\alpha^{2n_1}}\right\}\\ &\leq 10n_1\max\left\{\frac{|\beta|^{n_2}}{\alpha^{n_2}},\frac{c^2}{\alpha^{2n_2}} \right\}.\end{aligned}$$ Together with the estimate $$\frac{n_1}{\alpha^{n_2}}-\frac{n_2}{\alpha^{n_1}}>\frac{n_1}{\alpha^{n_2}}\left(1-\frac{1}{\alpha}\right)>\frac 78 \cdot\frac{n_1}{\alpha^{n_2}}$$ this implies $$\label{eq:c-est-for-c-pos} c< 12\max\left\{|\beta|^{n_2},\frac{c^2}{\alpha^{n_2}}\right\}.$$ Let us assume for the moment that the maximum is $\frac{c^2}{\alpha^{n_2}}$. Then we obtain $$\alpha^{n_2}< 12 c < 24\alpha^{n_3}$$ which implies $\alpha<24$ and thus Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}. Plugging in $\alpha<24$ in Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"} yields the content of Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} in this case. Therefore we assume now $c<12|\beta|^{n_2}$. By Proposition [Proposition 23](#prop:c-low-bound){reference-type="ref" reference="prop:c-low-bound"} we obtain $$\alpha^{n_1-K_0\log(27.62 n_1)}-|\beta|^{n_1}<c<12|\beta|^{n_2}$$ which yields $$\left(n_1-K_0\log(27.62 n_1)\right) \log \alpha <\log 13 +n_1\max\{\log|\beta|,0\}.$$ Note that, due to our assumptions, we have the bound $$\alpha^{1-\frac{\varepsilon}{4}} \geq 8\alpha^{1-\varepsilon} > 4A^{1-\varepsilon} > \frac{2\alpha}{A^{\varepsilon}}>|\beta|$$ which implies $(1-\frac{\varepsilon}{4}) \log \alpha > \log |\beta|$. 
Thus we get $$n_1 \frac{\varepsilon}{4} \log \alpha < K_0 \log \alpha \log(27.62 n_1)+\log 13$$ and $$n_1< 1.08 \cdot 10^{10} \frac{\log b}{\varepsilon} \log(27.62 n_1).$$ As previously, solving this inequality with the help of Lemma [Lemma 22](#lem:log-sol){reference-type="ref" reference="lem:log-sol"} yields $$\label{eq:n1-bound-deg-case} n_1< 2.17 \cdot 10^{10} \frac{\log b}{\varepsilon}\log\left(2.99 \cdot 10^{11} \frac{\log b}{\varepsilon} \right)$$ which proves Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} in this case. So we may now assume that Assumption A2 holds and the bound for $n_1$ with $\varepsilon=\frac{1}{2}$ is valid. This yields $$\label{eq:n1-bound-deg-case-A2} n_1< 4.34 \cdot 10^{10}\log b\log\left(5.98 \cdot 10^{11} \log b\right).$$ Note that under Assumption A2 we have $|\beta|<2\kappa$, by Lemma [Lemma 8](#lem:bound-alpha-beta-case2){reference-type="ref" reference="lem:bound-alpha-beta-case2"}. Hence the quantities $c,|\beta|^{n_1},|\beta|^{n_2}$ are bounded by absolute, effectively computable constants. Let us consider the case $|c-\beta^{n_2}|\geq 2(c+|\beta|^{n_1})\alpha^{n_2-n_1}$. Note that $\beta^{n_2} \neq c$ by the usual Galois conjugation argument. If $c > \beta^{n_2}$, then [\[eq:log-est-for-alpha\]](#eq:log-est-for-alpha){reference-type="eqref" reference="eq:log-est-for-alpha"} gives us $$\begin{aligned} (2n_1-n_2)\frac{c+|\beta|^{n_1}}{\alpha^{n_1}} &\leq n_2\frac{\beta^{n_1}-c}{\alpha^{n_1}}-n_1\frac{\beta^{n_2}-c}{\alpha^{n_2}} \\ &\leq 2n_2\left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2- \frac{2n_1}{9} \left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2 \leq 2n_1\left(\frac{|\beta|^{n_1}+c}{\alpha^{n_1}} \right)^2\end{aligned}$$ which yields $$\alpha^{n_1}\leq 2(|\beta|^{n_1}+c).$$ As $c<2\alpha^{n_3}$ (see Inequality [\[eq:c-alpha-n3-bound\]](#eq:c-alpha-n3-bound){reference-type="eqref" reference="eq:c-alpha-n3-bound"}) we obtain $$0.2 \alpha^{n_1} < \alpha^{n_1}-2c \leq 2|\beta|^{n_1} < 2(2\kappa)^{n_1}$$ which yields $\alpha< 20 \kappa$. If $c < \beta^{n_2}$, then [\[eq:log-est-for-alpha\]](#eq:log-est-for-alpha){reference-type="eqref" reference="eq:log-est-for-alpha"} gives us $$\begin{aligned} \left(n_1-\frac{n_2}{2}\right)\frac{\beta^{n_2}-c}{\alpha^{n_2}} &\leq n_1\frac{\beta^{n_2}-c}{\alpha^{n_2}} - n_2\frac{\beta^{n_1}-c}{\alpha^{n_1}} \\ &\leq 2n_1\left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2- \frac{2n_2}{9} \left(\frac{\beta^{n_1}-c}{\alpha^{n_1}} \right)^2 \leq 2n_1\left(\frac{\beta^{n_2}-c}{\alpha^{n_2}} \right)^2\end{aligned}$$ which implies $$1 \leq 4\cdot \frac{\beta^{n_2}-c}{\alpha^{n_2}} \leq 4\cdot \frac{|\beta|^{n_2}}{\alpha^{n_2}} \leq 4\cdot \frac{|\beta|}{\alpha} < 8 \kappa \alpha^{-1}$$ and hence $\alpha < 8\kappa$. Thus we may now assume $|c-\beta^{n_2}|< 2(c+|\beta|^{n_1})\alpha^{n_2-n_1}$. Let us note that under the assumption $\alpha>2(c+\max\{1,|\beta|\}^{n_1})$ we can deduce $$\left|\frac{b^{m_3}}{\alpha^{n_3}}-1\right| \leq \frac{c+|\beta|^{n_3}}{\alpha^{n_3}}<\frac{1}{2},$$ and otherwise we would get the constant $C_2$ in Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} (cf. the calculations below). 
Then, by Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"}, we get $$|\underbrace{m_3\log b-n_3\log \alpha}_{=:\Lambda_3}| \leq \frac{2c+2|\beta|^{n_3}}{\alpha^{n_3}} \leq \frac{24|\beta|^{n_2}+2|\beta|^{n_3}}{\alpha^{n_3}} \leq \frac{26(2\kappa)^{n_1}}{\alpha^{n_3}}.$$ Recalling from the beginning of Section [7](#sec:three-sol){reference-type="ref" reference="sec:three-sol"} the bound $$|\underbrace{m_1\log b-n_1\log \alpha}_{=:\Lambda_1}| \leq \max\left\{14 \alpha^{n_2-n_1},8\gamma^{-n_1}\right\},$$ we can again eliminate the term $\log \alpha$ from these inequalities by considering the form $\Lambda_0'=n_3\Lambda_1-n_1\Lambda_3$ and obtain the inequality $$\label{eq:LinForm-Lp0} \begin{split} |\Lambda_0'|&=|(n_3m_1-n_1m_3)\log b|\\ &\leq \max\left\{52n_1(2\kappa)^{n_1} \alpha^{-n_3}, 28 n_3\alpha^{n_2-n_1},16 n_3\gamma^{-n_1} \right\} \\ &\leq \max\left\{52n_1(2\kappa)^{n_1} \alpha^{-n_3}, 28 n_3\alpha^{n_2-n_1},16 n_3\alpha^{-n_1/4} \right\}. \end{split}$$ If $n_3m_1-n_1m_3 \neq 0$, then we have $$\log b \leq \max\left\{52n_1(2\kappa)^{n_1} \alpha^{-n_3}, 28 n_3\alpha^{n_2-n_1},16 n_3\alpha^{-n_1/4} \right\}$$ which yields $\log \alpha \leq 5 + \log n_1 + n_1 \log(4\kappa)$, and together with the bound [\[eq:n1-bound-deg-case-A2\]](#eq:n1-bound-deg-case-A2){reference-type="eqref" reference="eq:n1-bound-deg-case-A2"} this gives us constant $C_2$ in Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}. Hence we can assume $n_3m_1-n_1m_3 = 0$ and replace in the discussion above $m_2$ by $m_3$ as well as $n_2$ by $n_3$ in order to get $|c-\beta^{n_3}|< 2(c+|\beta|^{n_1})\alpha^{n_3-n_1}$. Thus altogether we obtain $$\label{eq:beta-bound} |\beta|^{n_3}\left|\beta^{n_2-n_3}-1\right|=\left|\beta^{n_2}-\beta^{n_3}\right|<4(c+|\beta|^{n_1})\alpha^{n_2-n_1}.$$ From [\[eq:beta-bound\]](#eq:beta-bound){reference-type="eqref" reference="eq:beta-bound"} we deduce that one of the two factors of the left hand side is smaller than $2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}$. By thinking of constant $C_2$ in Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}, we may assume $\alpha >64(c +|\beta|^{n_1})$. So we have $2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}} < \frac{1}{4}$. Let us first assume that $$|\beta|^{n_3} < 2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}.$$ This implies $|\beta|^{n_3}< \frac 14$ and further $$c< |\beta|^{n_3}+2(c+|\beta|^{n_1})\alpha^{n_3-n_1}<\frac 14 +\frac 18<1.$$ Therefore we have $c=0$. But Lemma [Lemma 15](#lem:c0){reference-type="ref" reference="lem:c0"} states that there cannot be three solutions for $c=0$. 
Now we may assume $$\left|\beta^{n_2-n_3}-1\right| < 2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}.$$ Note that, in particular, we have $\beta^{n_2-n_3} > 0$, and the above inequality implies $$1-2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}} < |\beta|^{n_2-n_3} < 1+2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}.$$ Using the binomial series expansion for $(1\pm x)^r$ with exponent $r=\frac{1}{n_2-n_3}$ yields $$||\beta|-1|<2\alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}.$$ Assuming $\alpha>64n_2^2(c+|\beta|^{n_1})$, we obtain by an application of Lemma [Lemma 21](#lem:exp-est){reference-type="ref" reference="lem:exp-est"} that $$||\beta|^{n_2}-1| \leq 5.2 n_2 \alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}.$$ This together with our assumption $|c-|\beta|^{n_2}| \leq |c-\beta^{n_2}|< 2(c+|\beta|^{n_1})\alpha^{n_2-n_1}$ gives us $$|c-1|<5.2 n_2 \alpha^{\frac{n_2-n_1}2}\sqrt{c+|\beta|^{n_1}}+ 2(c+|\beta|^{n_1})\alpha^{n_2-n_1}<\frac 12$$ provided that $$\alpha \geq 433 n_2^2(c+|\beta|^{n_1}).$$ Thus we may assume $c=1$ provided that $\alpha$ is large enough. But this also implies $$|1-\beta^{n_3}|<2(1+|\beta|^{n_1})\alpha^{n_3-n_1}\leq 2(1+|\beta|^{n_1})\alpha^{-2}.$$ If $\beta^{n_3}<0$, we get $$\alpha<\sqrt{2(1+|\beta|^{n_1})} \leq 2 (c+|\beta|^{n_1}).$$ Therefore we may assume $\beta^{n_3}=|\beta|^{n_3}$ is positive. Since for any real numbers $x>0$ and $n\geq 1$ we have $|1-x| \leq |1-x^n|$, we obtain from Lemma [Lemma 6](#lem:beta-1-bound){reference-type="ref" reference="lem:beta-1-bound"} together with Lemma [Lemma 5](#lem:alpha-A-est){reference-type="ref" reference="lem:alpha-A-est"} $$\frac{2}{4\alpha+5}<\frac{2}{2A+5}\leq |1-|\beta||\leq |1-\beta^{n_3}|<2(c+|\beta|^{n_1})\alpha^{-2}.$$ Hence we get $$\alpha<9(c+|\beta|^{n_1})$$ in this case. Let us summarize what we have proven so far: In the case $c\geq 0$ under Assumption A2 we cannot have three solutions to [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} for $$\alpha\geq 433 n_2^2(c+|\beta|^{n_1}).$$ So if there exist three solutions, then we have $$\alpha < 433 n_2^2(c+|\beta|^{n_1}) < 5629 n_1^2(2\kappa)^{n_1}.$$ With [\[eq:n1-bound-deg-case-A2\]](#eq:n1-bound-deg-case-A2){reference-type="eqref" reference="eq:n1-bound-deg-case-A2"} this implies $$\begin{aligned} \log A&<n_1\log (4\kappa)+2\log n_1+\log 11258\\ &< 4.35 \cdot 10^{10}\log (4\kappa) \log b\log\left(5.98 \cdot 10^{11} \log b\right) \end{aligned}$$ and Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} is proven in that case. ## The case $c< 0$ The case $c<0$ can be treated with similar arguments. Therefore we will only point out the differences. We start with the observation $$0<|c|=-c=b^{m_3}-V_{n_3}<b^{m_3}$$ and write again $$\frac{b^{m_1}}{\alpha^{n_1}}=1+\frac{\beta^{n_1}-c}{\alpha^{n_1}}\qquad \text{ as well as } \qquad\frac{b^{m_2}}{\alpha^{n_2}}=1+\frac{\beta^{n_2}-c}{\alpha^{n_2}}.$$ Note that, using Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}, $$\left|\frac{\beta^{n_2}-c}{\alpha^{n_2}}\right| \leq \left(\frac{|\beta|}{\alpha}\right)^{n_2}+\frac{|c|}{\alpha^{n_2}} < \frac{1}{16} + \frac{b^{m_3}}{\alpha^{n_2}} < \frac{1}{16} + \frac{5}{2} b^{m_3-m_2} < \frac{1}{2}$$ holds under our assumptions if in addition $m_2-m_3 \geq 2$. The case $m_2-m_3 =1$ is included in the next section. 
Thus we get again the inequality chain [\[eq:log-est-for-alpha\]](#eq:log-est-for-alpha){reference-type="eqref" reference="eq:log-est-for-alpha"} and furthermore the bound $$|c|< 12\max\left\{|\beta|^{n_2},\frac{|c|^2}{\alpha^{n_2}}\right\}.$$ If the maximum is $\frac{|c|^2}{\alpha^{n_2}}$, then we have $$\frac{2}{5} b^{m_2} < \alpha^{n_2} < 12|c| < 12 b^{m_3}$$ which implies $m_2-m_3 \leq 3$. This will be handled in Section [9](#sec:case-m2-m3){reference-type="ref" reference="sec:case-m2-m3"}. Therefore we may now again assume $|c| < 12|\beta|^{n_2}$. In the same way as in the case $c \geq 0$ we obtain again the upper bound [\[eq:n1-bound-deg-case\]](#eq:n1-bound-deg-case){reference-type="eqref" reference="eq:n1-bound-deg-case"} proving Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} also in the case $c<0$. Moreover, we get under Assumption A2 the same upper bound [\[eq:n1-bound-deg-case-A2\]](#eq:n1-bound-deg-case-A2){reference-type="eqref" reference="eq:n1-bound-deg-case-A2"} for $n_1$. In particular, the quantities $|c|,|\beta|^{n_1},|\beta|^{n_2}$ are bounded by absolute, effectively computable constants. Apart from the special case $$\alpha^{n_1} \leq 2(|\beta|^{n_1}+|c|)$$ which, by Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}, yields $$\frac{1}{15}\alpha^{n_1} < \frac{8}{45} b^{m_1} < \alpha^{n_1}-2b^{m_3} < \alpha^{n_1}-2|c| \leq 2|\beta|^{n_1} < 2(2\kappa)^{n_1}$$ and thus $\alpha < 60 \kappa$, as well as the consideration of $c=-1$ instead of $c=1$, we essentially only have to replace some (not all!) occurrences of $c$ by $|c|$ in order to perform the analogous arguments as we used in the case $c \geq 0$. Hence Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} is proven in this case as well. Let us summarize what we have proven so far: **Lemma 26**. *Assume that Assumption A1 or A2 holds and assume that Diophantine equation [\[eq:PillaiVn\]](#eq:PillaiVn){reference-type="eqref" reference="eq:PillaiVn"} has three solutions $(n_1,m_1)$, $(n_2,m_2)$, $(n_3,m_3)$ with $n_1>n_2>n_3\geq N_0$. Then at least one of the following three possibilities holds:* 1. *Assumption A1 holds and $$n_1 < 1.04 \cdot 10^{23} \left(\frac{\log b}{\varepsilon}\right)^2 \log\left( 1.21 \cdot 10^{25} \left(\frac{\log b}{\varepsilon}\right)^2\right)^3;$$* 2. *Assumption A2 holds and $$\log A < 4.35 \cdot 10^{10}\log (4\kappa) \log b\log\left(5.98 \cdot 10^{11} \log b\right)$$ or $$\log A < 5 \log \left( 1.45 \cdot 10^{25} \left(\log b\right)^2 \log\left( 4.84 \cdot 10^{25} \left(\log b\right)^2\right)^3 \right);$$* 3. *$m_2-m_3<1.45 \log(43.3 n_1)$.* # The case $m_2-m_3\ll \log n_1$ {#sec:case-m2-m3} In view of Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} and Lemma [Lemma 26](#lem:last-cases){reference-type="ref" reference="lem:last-cases"} we may assume that Assumption A1 or A2 holds and that $m_2-m_3<1.45 \log(43.3 n_1)$. First we reconsider inequality [\[eq:L1-exp\]](#eq:L1-exp){reference-type="eqref" reference="eq:L1-exp"} and note that $7\max\left\{\alpha^{n_2-n_1},\gamma^{-n_1}\right\}\geq \frac{1}{2}$ implies either $\alpha \leq 14$ or $A^{\varepsilon} \leq 28$. 
In the first case we have an upper bound for $\alpha$ and, by Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"}, also an upper bound for $n_1$; the second case contradicts Assumptions A1 and A2, respectively. Thus Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"} are proven in those situations. Now we may apply Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} and obtain, as in Section [6](#sec:firstbound){reference-type="ref" reference="sec:firstbound"}, the inequality $$\label{eq:Lambda1} |\underbrace{n_1\log\alpha-m_1\log b}_{=:\Lambda_1}| \leq 14 \max\left\{\alpha^{n_2-n_1},\gamma^{-n_1}\right\}.$$ Next, let us consider the inequality $$\begin{aligned} \left|\frac{\alpha^{n_2}}{b^{m_2}-b^{m_3}}-1\right|=\left|\frac{\alpha^{n_2}}{b^{m_3}(b^{m_2-m_3}-1)}-1\right|&\leq \frac{\alpha^{n_3}+|\beta|^{n_2}+|\beta|^{n_3}}{b^{m_2}-b^{m_3}}\\ &< 6\alpha^{n_3-n_2}+12\gamma^{-n_2}\\ &\leq 18\max\{\alpha^{n_3-n_2},\gamma^{-n_2}\}.\end{aligned}$$ In particular, note that $b^{m_2}-b^{m_3}\geq \frac{1}{2} b^{m_2} > \frac{3}{16} \alpha^{n_2}$ by Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"}. Assuming that $18\alpha^{n_3-n_2} \geq \frac{1}{2}$ yields $\alpha \leq 36$, which, by Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"}, implies Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}. Similarly, using $n_2\geq 2$, the assumption $18\gamma^{-n_2} \geq \frac{1}{2}$ gives us either $\alpha \leq 6$ or $A^{\varepsilon} \leq 12$, and we are done as well. Thus we may apply Lemma [Lemma 20](#lem:log-est){reference-type="ref" reference="lem:log-est"} and obtain $$\label{eq:Lambda2} |\underbrace{n_2\log\alpha-m_3\log b-\log(b^{m_2-m_3}-1)}_{=:\Lambda_2}| \leq 36\max\{\alpha^{n_3-n_2},\gamma^{-n_2}\}.$$ Eliminating the term $\log \alpha$ from the linear forms $\Lambda_1$ and $\Lambda_2$ by considering $\Lambda=n_1\Lambda_2-n_2\Lambda_1$ yields, together with [\[eq:Lambda1\]](#eq:Lambda1){reference-type="eqref" reference="eq:Lambda1"} and [\[eq:Lambda2\]](#eq:Lambda2){reference-type="eqref" reference="eq:Lambda2"}, the bound $$\label{eq:Lambda} \begin{split} |\Lambda|&\leq 36n_1\max\{\alpha^{n_3-n_2},\gamma^{-n_2}\}+14n_2\max\{\alpha^{n_2-n_1},\gamma^{-n_1}\}\\ &\leq 50n_1\max\{\alpha^{n_3-n_2},\gamma^{-n_2},\alpha^{n_2-n_1}\}\\ &\leq 200n_1\alpha^{-\varepsilon}, \end{split}$$ where $$\Lambda=(m_1n_2-m_3n_1)\log b-n_1\log(b^{m_2-m_3}-1).$$ Now we have to distinguish between the cases $\Lambda=0$ and $\Lambda\neq 0$.

## The case $\Lambda= 0$

Since $n_1\neq 0$ this case can only occur if $b$ and $b^{m_2-m_3}-1$ are multiplicatively dependent. This is only possible if $b=2$ and $m_2-m_3=1$, i.e. if $b^{m_2-m_3}-1=1$. Note that our assumptions imply $c\geq 0$ if $b=2$. Therefore we obtain $$0\leq c =V_{n_3}-b^{m_3}=\alpha^{n_3}+\beta^{n_3}-b^{m_3}$$ which implies $b^{m_3}\leq 2\alpha^{n_3}$. From Lemma [Lemma 13](#lem:n1-m1-relation){reference-type="ref" reference="lem:n1-m1-relation"} we know that $b^{m_2}>\frac{3}{8} \alpha^{n_2}$. Hence, using the facts that $b=2$ and $m_2=m_3+1$, we get the inequality $$\frac{3}{8} \alpha^{n_2}<b^{m_2}=2b^{m_3}\leq 4\alpha^{n_3}$$ which implies $\alpha^{n_2-n_3}<11$ and thus $\alpha<11$.
An application of Proposition [Proposition 24](#prop:n1bound-logalpha){reference-type="ref" reference="prop:n1bound-logalpha"} yields Theorems [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"} and [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}.

*Remark 5*. Let us note that in the case $c<0$ the argument above does not work. This is the reason why we exclude $b=2$ if $c<0$.

## The case $\Lambda\neq 0$

Here we may apply Matveev's theorem, Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"}, to $\Lambda$. Note that the case $b^{m_2-m_3}-1=1$ can be excluded in view of the previous subsection. First, let us find an upper bound for $|m_1n_2-m_3n_1|$. We deduce from [\[eq:Lambda\]](#eq:Lambda){reference-type="eqref" reference="eq:Lambda"} the bound $$\begin{aligned} |m_1n_2-m_3n_1|\log b &\leq n_1\log(b^{m_2-m_3}-1) + 200n_1 \\ &\leq n_1(m_2-m_3)\log b + 200n_1\end{aligned}$$ which implies $$|m_1n_2-m_3n_1|\leq 90 n_1 \log(43.3 n_1).$$ Furthermore, using Lemma [Lemma 18](#lem:heightcomp){reference-type="ref" reference="lem:heightcomp"}, we have $$\begin{aligned} \max\{Dh(b^{m_2-m_3}- 1),|\log (b^{m_2-m_3}- 1)|,0.16\} &\leq 2(m_2-m_3+1)\log b \\ &\leq 3.5 \log(43.3 n_1)\log b.\end{aligned}$$ Therefore we may choose $E=52 n_1$ in Lemma [Lemma 16](#lem:matveev){reference-type="ref" reference="lem:matveev"}. Now we obtain by Matveev's theorem $$4.69\cdot 10^{9} \log(719 n_1)\log(43.3 n_1) (\log b)^2 \geq -\log|\Lambda|$$ and then $$4.69\cdot 10^{9} [\log(719 n_1) \log b]^2 \geq -\log|\Lambda|.$$ Together with the upper bound for $|\Lambda|$ this yields $$4.69\cdot 10^{9} [\log(719 n_1) \log b]^2 \geq \varepsilon\log \alpha - \log (200n_1)$$ and thus $$\label{eq:alpha-bound-last-case} \log \alpha < 4.7 \cdot 10^{9} \frac{1}{\varepsilon}[\log(719 n_1) \log b]^2.$$ As in Section [7](#sec:three-sol){reference-type="ref" reference="sec:three-sol"}, we plug this upper bound for $\log \alpha$ into [\[eq:n1-bound\]](#eq:n1-bound){reference-type="eqref" reference="eq:n1-bound"} and obtain the inequality $$719 n_1< 2.17\cdot 10^{34} \varepsilon^{-2} [\log b \log(719 n_1)]^4.$$ Writing $n'=719 n_1$ and applying Lemma [Lemma 22](#lem:log-sol){reference-type="ref" reference="lem:log-sol"} gives us an upper bound for $n'$ and consequently for $n_1$, namely $$n_1 < 4.83\cdot 10^{32} \frac{(\log b)^4}{\varepsilon^2} \log \left[5.56\cdot 10^{36} \frac{(\log b)^4}{\varepsilon^2}\right]^4.$$ This concludes the proof of Theorem [Theorem 1](#th:n1-bound){reference-type="ref" reference="th:n1-bound"}.

Now let us assume that Assumption A2 holds. Then we put $\varepsilon=\frac{1}{2}$ and get $$n_1< 1.94\cdot 10^{33}(\log b)^4\log\left[2.23\cdot 10^{37}(\log b)^4\right]^4,$$ in particular $n_1 < 1.2\cdot 10^{40}$ for $b=2$ (cf. Corollary [Corollary 3](#cor:ex){reference-type="ref" reference="cor:ex"}). If we insert this upper bound into [\[eq:alpha-bound-last-case\]](#eq:alpha-bound-last-case){reference-type="eqref" reference="eq:alpha-bound-last-case"} with $\varepsilon=\frac{1}{2}$, then we obtain $$\log \alpha < 9.4\cdot 10^{9}\left[\log\left(1.4\cdot 10^{36}(\log b)^4\log\left[2.23\cdot 10^{37}(\log b)^4\right]^4\right) \log b\right]^2$$ which finally proves Theorem [Theorem 2](#th:absolute){reference-type="ref" reference="th:absolute"}.

[^1]: *This prime $p$ is called a *primitive divisor*.*
arxiv_math
{ "id": "2309.11173", "title": "On Pillai's Problem involving Lucas sequences of the second kind", "authors": "Sebastian Heintze and Volker Ziegler", "categories": "math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  Representation of nonlinear dynamical systems as infinite-dimensional linear operators over a Hilbert space of functions provides a means to analyze nonlinear systems via spectral methods for linear operators. In this paper, we provide a novel representation for discrete, control-affine nonlinear dynamical systems as operators acting on a Hilbert space of functions. We also demonstrate that this representation can be used to predict the behavior of a discrete-time nonlinear system under a known feedback law, thereby extending the predictive capabilities of dynamic mode decomposition to discrete nonlinear systems affine in control. The method requires only snapshots from the dynamical system. We validate the method in two numerical experiments by predicting the response of a controlled Duffing oscillator to a known feedback law, as well as demonstrating the advantage of our method over existing techniques in the literature.
author:
- "Zachary Morrison, Moad Abudia, Joel Rosenfeld, and Rushikesh Kamalapurkar [^1] [^2]"
bibliography:
- sccmaster.bib
- DCLDMDbibliography.bib
- DCLDMDTemp.bib
title: Dynamic Mode Decomposition of Control-Affine Nonlinear Systems using Discrete Control Liouville Operators
---

# Introduction

This paper presents a novel method to predict the behavior of a *discrete-time*, control-affine nonlinear dynamical system under a given feedback law. Using the feedback law, we can formulate an approximate representation of the closed-loop dynamics as an infinite-dimensional linear operator over a Hilbert space of functions. This effort is inspired by the method first developed in [@SCC.Rosenfeld.Kamalapurkar2021], which presents an algorithm for predicting the response of a *continuous-time* dynamical system under a known feedback law. The idea of representing a nonlinear system as an infinite-dimensional linear operator in Hilbert space was first put forth by B.O. Koopman in [@SCC.Koopman1931], and the resulting composition operator is aptly known as the Koopman operator. This higher-dimensional space of functions is typically referred to as the *feature space* or *lifted space*, and the Koopman operator acts as a composition operator on the lifted space. In recent years, dynamic mode decomposition (DMD) and other data-driven methods have seen a resurgence due to the abundance of data and increased computational power [@SCC.Kutz.Brunton.ea2016]. An example of the application of DMD can be seen in the fluid mechanics community, where it is used to compute modal decompositions of fluid flows [@SCC.Schmid2010], [@SCC.Mezic2013]. In a more general sense, DMD is intimately connected to the Koopman operator, as it is one method used to approximate the Koopman operator associated with the dynamical system [@SCC.Mezic2013], [@SCC.Tu.Rowley.ea2014a]. The Koopman approach is amenable to spectral methods in linear operator theory; therefore, analysis of a nonlinear system can be done via spectral methods for linear operators [@SCC.Rowley.Mezic.ea2009]. Moreover, Koopman DMD allows one to study dynamical systems without direct knowledge of the dynamics, as it is strictly data driven and requires no model of the dynamical system [@SCC.Kutz.Brunton.ea2016]. The ultimate goal of DMD is to develop a data-driven model via an eigendecomposition of the Koopman operator, under the assumption that the full-state observable (the identity function) is in the span of the eigenfunctions [@SCC.Gonzalez.Abudia.ea2021].
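For reference, the following minimal sketch (in Python/NumPy; it is our own illustration and is not taken from the cited works) shows a standard DMD computation from snapshot pairs, in which the eigendecomposition of a fitted matrix serves as a finite-dimensional surrogate for the eigendecomposition of the Koopman operator.

```python
import numpy as np

def exact_dmd(X, Y, r=None):
    """Standard (exact) DMD from snapshot pairs y_k ~ A x_k.

    X, Y: (d, n) arrays of snapshots, with Y[:, k] the successor of X[:, k].
    Returns eigenvalues, DMD modes, and the fitted linear operator A.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                      # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    A = U @ A_tilde @ U.conj().T
    return eigvals, modes, A

# Tiny usage example on a linear toy system x_{k+1} = A x_k.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
X = rng.standard_normal((2, 50))
Y = A_true @ X
eigvals, modes, A_fit = exact_dmd(X, Y)
print(np.sort(eigvals.real), np.linalg.eigvals(A_true))  # recovered vs. true spectrum
```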
The addition of control adds greater difficulty to data-driven methods like DMD, as the Koopman operator associated with the dynamical system depends upon the control input. Furthermore, in discrete time the Koopman operator is generally not linear in its symbol, which makes separating the influence of the controller from the drift dynamics challenging. Despite the difficulty, several methods have successfully generalized Koopman DMD to dynamical systems with control, see e.g. [@SCC.Goswami.Paley2022], [@SCC.Korda.Mezic2018a; @SCC.Proctor.Brunton.ea2016; @SCC.Williams.Hemati.ea2016; @SCC.Proctor.Brunton.ea2018]. The method presented in [@SCC.Proctor.Brunton.ea2016] yields a DMD routine to represent a general nonlinear system with control as a control-affine linear system. This idea is generalized in [@SCC.Korda.Mezic2018a] with extended DMD (eDMD), providing greater predictive power. Furthermore, for a general discrete-time, nonlinear dynamical system with control, the authors in [@SCC.Korda.Mezic2018a] utilize the shift operator to describe the time evolution of the control signal. For continuous-time dynamical systems, the Koopman canonical transform (see [@SCC.Surana2016]) is used in [@SCC.Goswami.Paley2022] to leverage a formulation of the dynamical system in the lifted space as a control-affine, bilinear system, called the Koopman bilinear form (KBF). The KBF is then amenable to the design of feedback laws using techniques from optimal control.

The aforementioned methods demonstrate the ability to predict the response of both discrete-time [@SCC.Korda.Mezic2018a] and continuous-time [@SCC.Goswami.Paley2022] dynamical systems to open-loop inputs. We focus specifically on discrete-time nonlinear systems, and the algorithm developed in this paper offers an advantage over the method from [@SCC.Korda.Mezic2018a], as it predicts the response of the nonlinear system to a known feedback law with greater precision. A key contribution here is the extension of the method presented in [@SCC.Rosenfeld.Kamalapurkar2021] to the discrete-time case. The operator representation presented in [@SCC.Rosenfeld.Kamalapurkar2021] is linear in its symbol. In discrete time the Koopman operator is generally not linear with respect to its symbol. Herein lies the difficulty of extending continuous-time DMD results to the discrete-time setting, as separation of the control input from the dynamics is difficult in the Koopman sense for discrete-time systems. We remedy this by approximately separating the dynamics and the control input in our operator representation using first-order estimates.

In this paper, we take an operator-theoretic approach to DMD with some modifications to account for the effect of control and the discrete nature of the problem. The algorithm is referred to as discrete control Liouville DMD or DCLDMD for brevity. To accomplish DCLDMD, the discrete, nonlinear dynamical system is represented as a composition of two operators acting on a Hilbert space of functions. The first operator acts as a composition operator, which maps from a reproducing kernel Hilbert space (RKHS) to a vector-valued reproducing kernel Hilbert space (vvRKHS). In order to account for the effect of control, we make use of a multiplication operator which maps functions in the vvRKHS back into the RKHS. In doing so, we obtain an approximate representation of the dynamical system as a composition of the aforementioned operators.
In Section [4](#sec:DCLDMD){reference-type="ref" reference="sec:DCLDMD"}, we develop the data-driven routine for DMD using the DCLDMD operator. The novelty of our method is that we can predict the response of the system to a known feedback law using trajectories recorded under arbitrary open-loop control signals. The usefulness of state-feedback prediction can be realized in many aspects of motion planning, such as collision avoidance and path planning.

The paper is organized into the following sections. Section [2](#sec:Background){reference-type="ref" reference="sec:Background"} establishes the mathematical background for dynamic mode decomposition with discrete control Liouville operators. Section [3](#sec:ProbS){reference-type="ref" reference="sec:ProbS"} contains the problem description. Section [4](#sec:DCLDMD){reference-type="ref" reference="sec:DCLDMD"} provides the derivation for discrete control Liouville dynamic mode decomposition, as well as outlining the DCLDMD algorithm. Section [5](#sec:NumExp){reference-type="ref" reference="sec:NumExp"} contains the numerical experiments involving the Duffing oscillator. Lastly, Section [6](#sec:concl){reference-type="ref" reference="sec:concl"} concludes the paper.

# Background {#sec:Background}

In this section, we provide a brief overview of reproducing kernel Hilbert spaces and vector-valued reproducing kernel Hilbert spaces and their role in DCLDMD.

**Definition 1**. *A Reproducing Kernel Hilbert Space (RKHS) $\Tilde{H}$ over a set $X\subset \mathbb{R}^{n}$ is a Hilbert space of functions $f: X\to \mathbb{R}$ such that for all $x\in X$ the evaluation functional $E_{x}f \coloneqq f(x)$ is bounded. By the Riesz Representation Theorem, there exists a function $K_{x} \in \Tilde{H}$ such that $f(x) ={\langle f, K_{x} \rangle}_{\Tilde{H}}$ for all $f\in \Tilde{H}$.*

The snapshots of the dynamical system are embedded into an RKHS via a kernel map $x_{i} \mapsto K(\cdot,x_{i}) \coloneqq K_{x_{i}}$. Moreover, the span of the set $\{K_{x} : x \in X\}$ is dense in $\Tilde{H}$.

**Proposition 1**. *If $A \coloneqq \{K_{x} : x \in X\}$, then $\text{span}\,A$ is dense in $\Tilde{H}$.*

*Proof.* Showing that the span of the set $\{K_{x} : x \in X\}$ is dense in $\Tilde{H}$ amounts to showing that $(A^{\perp})^{\perp} = \Tilde{H}$. Let $h \in A^{\perp}$; then $\langle h, K_{x}\rangle = h(x) = 0$ for all $x\in X$. Hence $h\equiv 0$ on $X$. Thus $A^{\perp} = \{0\}$. ◻

In order to account for the effect of control, we make use of a vector-valued RKHS (vvRKHS).

**Definition 2**. *Let $\mathcal{Y}$ be a Hilbert space, and let $H$ be a Hilbert space of functions from a set $X$ to $\mathcal{Y}$. The Hilbert space $H$ is a *vvRKHS* if for every $v \in \mathcal{Y}$ and $x \in X$, the functional $f \mapsto \langle f(x), v \rangle_{\mathcal{Y}}$ is bounded.*

To each pair $(x,u)$ in the data, we can associate a linear operator over a vvRKHS given by $(x,u) \mapsto K_{x,u}$, following [@SCC.Rosenfeld.Kamalapurkar2021]. The function $K_{x,u}$ is known as the kernel operator, and the span of these functions is dense in the respective vvRKHS. Given a function $f \in H$, the reproducing property of $K_{x,u}$ implies ${\langle f, K_{x,u} \rangle}_{H} = {\langle f(x), u \rangle}_{\mathcal{Y}}$. For more discussion on vector-valued reproducing kernel Hilbert spaces, see [@SCC.Carmeli.DeVito.ea2010].
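To make the kernel constructions above concrete, the following short sketch (our own illustration; the helper names are ours, and the Gaussian kernel and parameter values mirror those used later in Section [5](#sec:NumExp){reference-type="ref" reference="sec:NumExp"}) builds the Gram matrix of a set of state snapshots and represents an observable in the span of the kernel sections by regularized interpolation, which is the same kind of projection used in the finite-rank computations below.

```python
import numpy as np

def gauss_kernel(x, y, sigma=10.0):
    # Gaussian kernel K~(x, y) = exp(-||x - y||^2 / sigma).
    return np.exp(-np.sum((x - y) ** 2) / sigma)

def gram(centers, sigma=10.0):
    n = len(centers)
    return np.array([[gauss_kernel(centers[i], centers[j], sigma)
                      for j in range(n)] for i in range(n)])

# Snapshots (kernel centers) and an observable h to be represented in span{K_{x_i}}.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(25, 2))           # 25 state snapshots in R^2
h = lambda x: x[0]                             # first component of the state
G_tilde = gram(X)
eps = 1e-6                                     # regularization parameter
c = np.linalg.solve(G_tilde + eps * np.eye(len(X)), np.array([h(x) for x in X]))

# h_tilde = sum_i c_i K~(., x_i) matches h at the snapshots up to regularization.
h_tilde = lambda x: sum(ci * gauss_kernel(x, xi) for ci, xi in zip(c, X))
print(max(abs(h_tilde(x) - h(x)) for x in X))  # residual introduced by the regularization
```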
# Problem Statement {#sec:ProbS}

Consider a control-affine, discrete-time dynamical system of the form $$\label{eqn1} x_{k+1} = F(x_{k}) + G(x_{k})\mu(x_{k}),$$ where $x \in \mathbb{R}^{n}$ is the state, $\mu :\mathbb{R}^{n} \to \mathbb{R}^{m}$ is a feedback controller, $F : \mathbb{R}^{n} \to \mathbb{R}^{n}$ and $G: \mathbb{R}^{n} \to \mathbb{R}^{n\times m}$ are functions corresponding to the drift dynamics and the control effectiveness matrix, respectively. Given a set of data points $\{(x_{k},y_{k}, u_{k})\}_{k=1}^{n}$ satisfying $$y_{k} = F(x_{k}) + G(x_{k})u_{k},$$ where $u_{k}$ are the control inputs, the goal is to predict the response of the system in ([\[eqn1\]](#eqn1){reference-type="ref" reference="eqn1"}) to the feedback law $\mu$. In this paper, $\mathbb{R}^{n}$ is the state space of the dynamical system, $\mathcal{Y} = \mathbb{R}^{1\times(m+1)}$, $X\subset \mathbb{R}^{n}$ is compact, $\Tilde{H}$ denotes the RKHS of *differentiable* functions $f:X\to \mathbb{R}$, and $H$ denotes a vvRKHS.

# The DCLDMD Algorithm {#sec:DCLDMD}

## Discrete Liouville Operators {#discrete-liouville-operators .unnumbered}

A linear operator can be associated with the dynamical system in ([\[eqn1\]](#eqn1){reference-type="ref" reference="eqn1"}), specifically the composition of two operators: the discrete Liouville operator and a multiplication operator; this representation is derived below. Let $K_{F,G}$ denote the discrete Liouville operator with symbol $(F,G)$, defined on an RKHS $\tilde{H}$ of differentiable functions that map from a compact set $X \subset \mathbb{R}^{n}$ into $\mathbb{R}$. The domain of the operator is given as $$\mathcal{D}(K_{F,G}) \coloneqq \{h\in\tilde{H}\mid K_{F,G}h \in H\},$$ where $H$ is a vector-valued reproducing kernel Hilbert space of functions from $X$ to $\mathbb{R}^{1\times(m+1)}$. For $h \in \tilde{H}$, the action of $K_{F,G}$ on $h$ is given by $$K_{F,G}h = \left(h(F(\cdot)), \displaystyle\frac{\partial h}{\partial x}(F(\cdot)) G_{1}(\cdot), \ldots, \displaystyle\frac{\partial h}{\partial x}(F(\cdot)) G_{m}(\cdot) \right),$$ where the functions $G_{i} : \mathbb{R}^{n}\to \mathbb{R}^{n}$, $1\leq i \leq m$, are the columns of $G$. This formulation lends itself to a linear approximation of the function $h$ around a point in a trajectory. To complete an operator representation of ([\[eqn1\]](#eqn1){reference-type="ref" reference="eqn1"}), a way to incorporate the control inputs into the formulation is needed. This can be done using multiplication operators corresponding to functions in the vvRKHS $H$.

## Multiplication Operators {#multiplication-operators .unnumbered}

Let $\nu : X \to \mathcal{Y}$ be a continuous function. The multiplication operator with symbol $\nu$ is denoted as $M_{\nu} : \mathcal{D}(M_{\nu}) \to \Tilde{H}$. For a function $h \in \mathcal{D}(M_{\nu})$, we define the action of the multiplication operator on $h$ as $$M_{\nu}h(\cdot) = {\langle h(\cdot), \nu(\cdot)\rangle}_{\mathcal{Y}},$$ where the domain of the multiplication operator is given as $$\mathcal{D}(M_{\nu}) \coloneqq \{h\in H \mid x \mapsto {\langle h(x), \nu(x)\rangle}_{\mathcal{Y}} \in \Tilde{H}\}.$$ For completeness, a proposition from [@SCC.Rosenfeld.Kamalapurkar2021] is included. The proposition is required to calculate the finite-rank representation of the composition of the multiplication operator with the discrete Liouville operator.

**Proposition 2**.
*Suppose that $\nu : X \to \mathcal{Y}$ corresponds to a densely defined multiplication operator $M_{\nu} : \mathcal{D}(M_{\nu}) \to \Tilde{H}$ and $\Tilde{K} : X\times X \to \mathbb{R}$ is the kernel function of the RKHS $\Tilde{H}$. Then, for all $x\in X$, $\Tilde{K}(\cdot, x) \in \mathcal{D}(M^{*}_{\nu})$, where $M^{*}_{\nu}$ is the adjoint of $M_{\nu}$, and $$M^{*}_{\nu}\Tilde{K}(\cdot, x) = K_{x,\nu(x)}.$$*

With these ideas in mind, the composition of the discrete Liouville operator and the multiplication operator can be used to formulate the action of the DCLDMD operator.

## The DCLDMD Operator {#the-dcldmd-operator .unnumbered}

Taking the composition of $K_{F,G}$ and $M_{\nu}$, for a known feedback law $\mu : \mathbb{R}^{n}\to\mathbb{R}^{m}$, we are able to describe the evolution of an observable along trajectories of the system in terms of an infinite-dimensional linear operator, with $\nu \coloneqq (1,\mu^{T})^{T} \in H$. Now, the composition $M_{\nu}K_{F,G}$ is a linear operator and $M_{\nu}K_{F,G} : \mathcal{D}(K_{F,G}) \to \tilde{H}$. For an observable $h\in \tilde{H}$, let $$\begin{gathered} M_{\nu}K_{F,G}h(x_{k})= \\ h(F(x_{k})) + \frac{\partial h}{\partial x}(F(x_{k}))\sum_{i=1}^{m}G_{i}(x_{k})\mu_{i}(x_{k}) \end{gathered}$$ be the action of the DCLDMD operator on the observable. One can observe that $M_{\nu}K_{F,G}$ maps $h(x_{k})$ to a first-order approximation of $h$ evaluated at the point $F(x_{k}) + G(x_{k})\mu(x_{k})$ along the trajectory, i.e. $h(x_{k+1})$. This can be seen from the fact that the linear approximation of $h$ around the point $F(x_{k})$ at the point $p$ is $$h(p) = h(F(x_{k})) +\displaystyle\frac{\partial h}{\partial x}(F(x_{k})) (p - F(x_{k}) ) + \epsilon,$$ where the approximation error is $\epsilon$. If $G$ is unbounded, then $\epsilon = o\big(\sup_{x\in X}||G(x)||\sup_{u\in U}||u||\big)$. If $G$ is bounded and $U \subset \mathbb{R}^{m}$ is bounded, then $\epsilon = o\big(\sup_{u\in U}||u||\big)$. As such, the linear approximation of $h$ near the point $F(x_{k})$ is valid for small control inputs. Evaluating the observable $h$ at $x_{k+1} = F(x_{k}) + G(x_{k})u_{k}$ yields $$\begin{split} h(x_{k+1}) &= h(F(x_{k})) +\displaystyle\frac{\partial h}{\partial x}(F(x_{k})) ( F(x_{k}) + G(x_{k})u_{k}\\ & - F(x_{k}) ) + \epsilon \\ &\approx h(F(x_{k})) +\displaystyle\frac{\partial h}{\partial x}(F(x_{k})) G(x_{k})u_{k} \\ &= h(F(x_{k})) + \displaystyle\frac{\partial h}{\partial x}(F(x_{k})) \sum_{i=1}^{m}G_{i}(x_{k})u_{i,k}. \end{split}$$ This is a common way to represent the effect of control in the operator-theoretic representation of dynamical systems in both discrete and continuous time, see e.g. [@SCC.Surana2016], [@SCC.Huang.Ma.ea2018], [@SCC.Straesser.Berberich.ea2023]. Hence, the formulation of the DCLDMD operator governs the evolution of the observable $h$ along the flow.

## Finite-Rank Representation {#finite-rank-representation .unnumbered}

In order to represent the otherwise infinite-dimensional approximation as a finite-dimensional operator, there needs to be a suitable basis for projection. We construct kernel bases using data from the dynamical system. The kernel functions themselves are centered at the snapshots $\{(x_{k},y_{k},u_{k})\}_{k=1}^{n}$ by selecting $\alpha = \{\Tilde{K}(\cdot,x_{k})\}_{k=1}^{n} \subset \Tilde{H}$ and $\beta = \{K_{x_{k},u_{k}}\}_{k=1}^{n} \subset H$. Using the kernel bases, we will construct a finite-dimensional approximation of the DCLDMD operator. DMD is then performed via an eigendecomposition of the finite-rank representation.
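Before carrying out this projection, the following toy example (a sketch of our own; the scalar system, observable, and names are chosen purely for illustration) numerically illustrates the first-order approximation introduced above: the quantity $h(F(x_{k}))+\frac{\partial h}{\partial x}(F(x_{k}))G(x_{k})u_{k}$, which is exactly what the composition $M_{\nu}K_{F,G}$ produces, approximates $h(x_{k+1})$ with an error that vanishes faster than the input.

```python
import numpy as np

# Toy scalar control-affine system x_{k+1} = F(x_k) + G(x_k) * u_k (our choice).
F = lambda x: 0.9 * x - 0.1 * x ** 3
G = lambda x: 2.0 + np.sin(x)
h = lambda x: np.exp(-0.5 * x ** 2)            # a smooth observable
dh = lambda x: -x * np.exp(-0.5 * x ** 2)      # its derivative

x_k = 1.2
for u_k in [0.2, 0.05, 0.01]:
    exact = h(F(x_k) + G(x_k) * u_k)
    first_order = h(F(x_k)) + dh(F(x_k)) * G(x_k) * u_k   # value produced by M_nu K_{F,G}
    print(u_k, abs(exact - first_order))       # error shrinks faster than u_k
```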
Given an observable $h\in \Tilde{H}$, let $\tilde{h} \coloneqq P_{\alpha}h = \sum_{i=1}^{n}\Tilde{a}_{i}\Tilde{K}_{x_{i}}$ be the projection of $h$ onto $\alpha$. One can recover a finite-rank proxy of the DCLDMD operator by observing its action restricted to $\text{span }\alpha \subset \Tilde{H}$. Recovering the finite-rank proxy amounts to writing $P_{\alpha}M_{\nu}K_{F,G}\Tilde{h}$ as $\sum_{i=1}^{n}\Tilde{b}_{i}\Tilde{K}_{x_{i}}$ and finding a matrix that relates the coefficients $\Tilde{a}$ and $\Tilde{b}$. The coefficients can be computed by solving the following linear system of equations (as in [@SCC.Rosenfeld.Kamalapurkar2021], [@SCC.Gonzalez.Abudia.ea2021]) $$\tilde{G} \begin{pmatrix}\tilde{b}_1\\ \tilde{b}_2 \\ \vdots \\ \tilde{b}_{n} \end{pmatrix} = \begin{pmatrix}\langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_1) \rangle \\ \langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_2)\rangle \\ \vdots \\ \langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_{n}) \rangle\end{pmatrix},$$ where $\Tilde{G} = \{\Tilde{K}(x_{i},x_{j})\}_{i,j=1}^n$ is the kernel Gram matrix for $\alpha$. Since the kernel functions in $\alpha \subset \Tilde{H}$ are in the domain of the adjoint of the multiplication operator (see Prop. [Proposition 2](#Prop2){reference-type="ref" reference="Prop2"}), we have that $$\label{eqn4} \begin{pmatrix}\langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_1) \rangle \\ \langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_2) \rangle \\ \vdots \\ \langle M_{\nu}P_{\beta}K_{F,G}\tilde{h}, \tilde{K}(\cdot,x_{n}) \rangle \end{pmatrix} = \begin{pmatrix}\langle K_{F,G}\tilde{h}, P_{\beta}M_{\nu}^{*}\tilde{K}(\cdot,x_1) \rangle \\ \langle K_{F,G}\tilde{h}, P_{\beta}M_{\nu}^{*}\tilde{K}(\cdot,x_2) \rangle \\ \vdots \\ \langle K_{F,G}\tilde{h}, P_{\beta}M_{\nu}^{*}\tilde{K}(\cdot,x_{n}) \rangle \end{pmatrix}.$$ Now, looking at the $j$th component of the column vector in ([\[eqn4\]](#eqn4){reference-type="ref" reference="eqn4"}): $$\begin{gathered} \langle K_{F,G}\tilde{h}, P_{\beta}M_{\nu}^{*}\tilde{K}(\cdot,x_{j}) \rangle_{H} = \\ \sum_{i=1}^{n}\tilde{a}_i \langle K_{F,G}\tilde{K}(\cdot, x_i), P_{\beta}M_{\nu}^{*}\tilde{K}(\cdot,x_{j}) \rangle_{H} \\ = \sum_{i=1}^{n}\tilde{a}_i \langle K_{F,G}\tilde{K}(\cdot, x_i), \sum_{k=1}^{n} w_{k,j}K_{x_k,u_k} \rangle_{H} \\ = \sum_{i=1}^{n} \sum_{k=1}^{n} \tilde{a}_i w_{k,j} \langle K_{F,G}\tilde{K}(\cdot, x_i), K_{x_k,u_k} \rangle_{H} \\ = \sum_{i=1}^{n} \sum_{k=1}^{n}\tilde{a}_i w_{k,j} \langle K_{F,G}\tilde{K}(x_k, x_i), {(1,{u_{k}}^T)}^{T} \rangle_{\mathcal{Y}} \\ = {\tilde{a}}^T \tilde{I} w_{j} = w_{j}^T \tilde{I}^T{\tilde{a}}, \end{gathered}$$ where the matrix $\tilde{I}$ is $\tilde{I} = \{ \langle K_{F,G}\tilde{K}(x_j, x_i), {(1,{u_{j}}^T)}^{T} \rangle_{\mathcal{Y}} \}_{i,j=1}^{n}$. We emphasize that $$\begin{gathered} {\langle K_{F,G}\tilde{K}(x_k, x_i), (1,{u_{k}}^T) \rangle}_{\mathcal{Y}} \\ \begin{split} &\approx \tilde{K}(F(x_k), x_i) + \frac{\partial \tilde{K}(F(x_k), x_i)}{\partial x}\sum_{l=1}^m G_{l}(x_k)u_{k,l}, \\ \end{split}\end{gathered}$$ which is the first-order approximation of $\tilde{K}(x_{k+1},x_{i})$. As such, in the implementation, $\tilde{I}$ is approximated as $\Tilde{I} = \{ \Tilde{K}(x_{j+1},x_{i})\}_{i,j=1}^{n}$. Since $M_{\nu}^{*} : \tilde{K}(\cdot, x_{j}) \mapsto K_{x_{j}, \nu(x_j)}$, we can project the image $K_{x_{j}, \nu(x_j)}$ onto $\text{span }\beta \subset H$.
In doing so, we can find the components of the coefficient vector $w_{j}$ by solving the linear system $$\label{eqn6} G \begin{pmatrix}w_{1,j}\\ w_{2,j} \\ \vdots \\ w_{n,j} \end{pmatrix} = \begin{pmatrix}\langle K_{x_j,\mu(x_j)}, K_{x_1,u_1}\rangle_H \\ \langle K_{x_j,\mu(x_j)}, K_{x_2,u_2}\rangle_H \\ \vdots \\ \langle K_{x_{j},\mu(x_{j})}, K_{x_n,u_n} \rangle_H\end{pmatrix},$$ where the Gram matrix $G$ is computed as $G = (\langle K_{x_i,u_i}, K_{x_j,u_j} \rangle_H )_{i,j=1}^{n}$. The inner products in $G$ are computed as $\langle K_{x_i,u_i}, K_{x_j,u_j} \rangle_H = \langle K_{x_i,u_i}(x_j), (1,u_j^{T}) \rangle_{\mathcal{Y}} = (1,{u_{i}}^T)K_{x_i}(x_j)(1,u_{j}^T)^T,$ where the diagonal kernel operator $K_{x_i} \coloneqq \text{diag}(\Tilde{K}_{x_i},\ldots, \Tilde{K}_{x_i})$, with $m+1$ identical diagonal entries, is used. Let $I_{j}^{T}$ denote the column vector on the right-hand side of equation ([\[eqn6\]](#eqn6){reference-type="ref" reference="eqn6"}); the $j$th row of $I$ is then given by $$\begin{gathered} I_{j} = \big( {\langle K_{x_j,\mu(x_j)}, K_{x_1,u_1}\rangle}_{H}, {\langle K_{x_j,\mu(x_j)}, K_{x_2,u_2}\rangle}_{H}, \ldots, \\ {\langle K_{x_{j},\mu(x_{j})}, K_{x_n,u_n} \rangle}_{H} \big).\end{gathered}$$ The complete finite-rank representation of the DCLDMD operator is then recovered as $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha} = \tilde{G}^{-1}IG^{-1}\tilde{I}^T$, where the subscript $\alpha$ denotes the restriction of the operator to the basis $\alpha$.

## Discrete Control Liouville Dynamic Mode Decomposition {#sec:DCLDMD .unnumbered}

DMD can be accomplished via an eigendecomposition of the finite-rank proxy of the DCLDMD operator. Let $\{v_{i},\lambda_{i}\}_{i=1}^{n}$ be the eigenvector-eigenvalue pairs of the finite-rank proxy $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$. Following [@SCC.Gonzalez.Abudia.ea2021], if $v_{j}$ is an eigenvector of the matrix $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$, then the function $\varphi_{j} = \sum_{i=1}^{n}(v_{j})_{i}\Tilde{K}_{x_{i}}$ is an eigenfunction of $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$. Now if $\varphi_{j}$ is an eigenfunction of $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$ with eigenvalue $\lambda_{j}$, then the first-order approximation of $\varphi_{j}(x_{k+1})$ is given by $$\begin{gathered} \varphi_{j}(x_{k+1}) \approx \varphi_{j}(F(x_{k})) + \frac{\partial \varphi_{j}}{\partial x}(F(x_{k}))\sum_{i=1}^{m}G_{i}(x_{k})\mu_{i}(x_{k}) \\ = M_{\nu}K_{F,G}\varphi_{j}(x_{k}) = \lambda_{j} \varphi_{j}(x_{k}).\end{gathered}$$ Hence, the eigenfunctions evolve approximately linearly with the flow. The normalized eigenfunctions are denoted $\hat{\varphi}_{j}$ and computed as $\hat{\varphi}_{j} = \frac{1}{\sqrt{v_{j}^{T}\Tilde{G}v_{j}}}\sum_{i=1}^{n}(v_{j})_{i}\Tilde{K}_{x_{i}}$, where $v_{j}$ denotes the $j$th eigenvector of $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$ and $\Tilde{G}$ is the Gram matrix for $\alpha$. This approach yields a data-driven model of the closed-loop dynamical system as a linear combination of eigenfunctions of the operator $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$.
In other words, the closed-loop dynamics are approximated as $$F(x_{k})+G(x_{k})\mu(x_{k}) \approx \sum_{i=1}^{n} \lambda_{i}^{k} \xi_{i} \hat{\varphi}_{i}(x_{0}).$$ For a given $x_{0} \in X$ we have a pointwise approximation of the flow as $$x_{k+1} \approx \sum_{i=1}^{n} \lambda_{i}^{k} \xi_{i} \hat{\varphi}_{i}(x_{0}).$$ We refer to the vectors $\xi_{i}$ as the *Liouville Modes*; these are the coefficients required to represent the full-state observable in terms of the eigenfunctions. We can calculate the modes by solving $g_{id}(x)=x=\sum_{i=1}^{n}\xi_{i}\varphi_{i}(x)$, which yields $\xi = X(V^{T}\Tilde{G})^{-1}$, where $V$ is the matrix of normalized eigenvectors of $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$ and $X\coloneqq (x_{1}\ldots x_{n})$ is the data matrix. We refer to this method as the *direct reconstruction* of the flow. We can also formulate an *indirect reconstruction* of the flow by considering the function $F_{\mu} : x\mapsto \sum_{i=1}^{n} \lambda_{i} \xi_{i} \hat{\varphi}_{i}(x)$. Given $x_{0}\in X$, the predicted trajectory is obtained recursively as $x_{k} = F_{\mu}(x_{k-1})$. The subscript $\mu$ in $F_{\mu}$ indicates that the reconstruction predicts the response to the feedback law $\mu$. The indirect method generally performs better for approximating the nonlinear dynamics; we hypothesize that this is because we are estimating nonlinear dynamics using nonlinear functions, as the indirect reconstruction yields a nonlinear model of the flow, whereas the direct reconstruction is linear. We will use the indirect reconstruction in section [5](#sec:NumExp){reference-type="ref" reference="sec:NumExp"}. The entire DCLDMD procedure is summarized in Algorithm 1.

**Algorithm 1 (DCLDMD).**

*Input:* data points $\{(x_{k},y_{k}, u_{k})\}_{k=1}^{n}$ that satisfy $y_{k}=F(x_{k})+G(x_{k})u_{k}$, reproducing kernels $\tilde{K}_{x_{j}}$ and $K_{x_{j},u_{j}}$ for $\Tilde{H}$ and $H$, respectively, a feedback law $\mu$, a kernel parameter $\sigma$, and a regularization parameter $\epsilon$.

*Output:* $\{\hat{\varphi}_{j},\lambda_j, \xi_{j}\}_{j=1}^{n}$.

1. $\Tilde{G} \leftarrow \{\Tilde{K}(x_{i},x_{j})\}_{i,j=1}^n$
2. $\Tilde{I} \leftarrow \{\Tilde{K}(x_{k+1},x_{i})\}_{k,i=1}^{n}$
3. $G \leftarrow \{\langle K_{x_i,u_i}, K_{x_j,u_j} \rangle_H \}_{i,j=1}^{n}$
4. $I \leftarrow \{ \langle K_{x_j,\mu(x_j)}, K_{x_i,u_i}\rangle_{H}\}_{i,j=1}^{n}$
5. Compute $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha} = \tilde{G}^{-1}IG^{-1}\tilde{I}^T$
6. Eigendecomposition: $\{\varphi_j,\lambda_j\}_{j=1}^{n} \leftarrow [P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$
7. Normalize the eigenfunctions: $\hat{\varphi}_{j} \leftarrow \frac{1}{\sqrt{v_{j}^{T}\Tilde{G}v_{j}}}\sum_{i=1}^{n}(v_{j})_{i}\Tilde{K}_{x_{i}}$ for $j=1,\ldots,n$
8. Liouville modes: $\xi \leftarrow X(V^{T}\Tilde{G})^{-1}$

*Return:* $\{\hat{\varphi}_{j}, \lambda_j, \xi_{j}\}_{j=1}^{n}$.

# Numerical Experiments {#sec:NumExp}

As a demonstration of the efficacy of the developed DCLDMD algorithm, we apply the method to the controlled Duffing oscillator and compare the performance of DCLDMD with the linear predictor developed in [@SCC.Korda.Mezic2018a].

**Experiment 1:** [\[exp:Duffing\]]{#exp:Duffing label="exp:Duffing"} The controlled Duffing oscillator is a highly nonlinear dynamical system with state-space form $$\begin{gathered} \label{eqn9} \begin{bmatrix} \dot{x}_{1} \\ \dot{x}_{2} \end{bmatrix} = \begin{bmatrix} x_{2} \\ -\delta x_{2} -\beta x_{1} -\alpha x_{1}^{3} \end{bmatrix} + \begin{bmatrix} 0 \\ 2+\sin(x_{1}) \end{bmatrix}u,\end{gathered}$$ where $\alpha,\beta,\delta$ are coefficients in $\mathbb{R}$, $[x_{1},x_{2}]^{T}\in\mathbb{R}^{2}$ is the state, and $u \in \mathbb{R}$ is the control input. For the experiments the parameters are set as follows: $\delta =0$, $\alpha =1$, and $\beta =-1$. We generate 225 data points $\{(x_{k},y_{k}, u_{k})\}_{k=1}^{225}$ with initial conditions sampled from a $15\times 15$ grid within the set $[-3,3]\times [-3,3]\subset \mathbb{R}^{2}$ using the MATLAB ode$45$ function. The data points and corresponding control inputs satisfy the relation $y_{k}=F(x_{k})+G(x_{k})u_{k}$, where the control inputs are sampled uniformly from the interval $[-2,2]\subset \mathbb{R}$.
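Continuing the earlier sketch (same caveats: hypothetical helper names, NumPy, and our own conventions for array shapes), the Liouville modes and the indirect reconstruction could be computed as follows.

```python
def liouville_modes(X, Vhat, Gt):
    """Modes xi_i solving x = sum_i xi_i hat{phi}_i at the data points.

    Phi[k, i] = hat{phi}_i(x_k) = (G~ Vhat)[k, i]; a least-squares solve of
    Phi Xi^T = X recovers the (d, n) mode matrix Xi, matching xi = X (V^T G~)^{-1}."""
    Phi = Gt @ Vhat
    Xi = np.linalg.lstsq(Phi, X.astype(complex), rcond=None)[0].T
    return Xi

def indirect_reconstruction(x0, steps, X, lam, Vhat, Xi, sigma=10.0):
    """Iterate F_mu : x -> sum_i lam_i xi_i hat{phi}_i(x) starting from x0."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        k = gaussian_kernel(x[None, :], X, sigma).ravel()   # K~(x, x_i) for i = 1..n
        phi = k @ Vhat                                      # hat{phi}_i(x)
        traj.append(np.real(Xi @ (lam * phi)))              # keep the real part of the prediction
    return np.array(traj)
```

With training data of the form just described (generated, for instance, with `scipy.integrate.solve_ivp` in place of MATLAB's `ode45`), these routines sketch the pipeline of Algorithm 1 end to end.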
Using the tuples $\{(x_{k},y_{k}, u_{k})\}_{k=1}^{225}$, we aim to predict the response of the system at the initial condition $x_{0}=[2,-2]^{T}$ to the feedback law given by $\mu(x_{k})=[-2x_{k,1},-2x_{k,2}]^{T}$ using DCLDMD. The Gaussian kernel $\Tilde{K}(x,y) = \mathrm{e}^{\frac{-{\left\Vert x-y\right\Vert}_{2}^2}{\sigma}}$ with kernel width $\sigma = 10$ is used for calculation of the Gram matrices associated with $\alpha \subset \Tilde{H}$. For $\beta \subset H$, we associate to each pair $(x_{k},u_{k})$, $1\leq k\leq 225$, a kernel $K_{x_{k},u_{k}} \coloneqq u_{k}K_{x_{k}} \in H$. Here we use the kernel operator $K_{x_k} \coloneqq \text{diag}(\Tilde{K}_{x_{k}},\ldots, \Tilde{K}_{x_{k}})$ with $m+1$ identical diagonal entries, where $\Tilde{K}_{x_k}(y) = \mathrm{e}^{\frac{-{\left\Vert x_{k}-y\right\Vert}_{2}^2}{10}}$. These kernel functions will be used to calculate the matrices $\Tilde{G}, \tilde{I}$ in Algorithm [\[alg:DCLDMD\]](#alg:DCLDMD){reference-type="ref" reference="alg:DCLDMD"}. Lastly, we select $\varepsilon = 10^{-6}$ for regularization of the Gram matrices. Using the *indirect* reconstruction, we estimate the response of the system to the feedback law for a total of $6$ seconds starting from the initial condition $[2,-2]^{T}$. The indirect reconstruction accurately predicts the response of the system to the known feedback law $\mu$ with a small margin of error, as seen in Figs. [\[fig:DuffingDiscreteReconComp\]](#fig:DuffingDiscreteReconComp){reference-type="ref" reference="fig:DuffingDiscreteReconComp"} and [\[fig:DuffingDiscreteReconError\]](#fig:DuffingDiscreteReconError){reference-type="ref" reference="fig:DuffingDiscreteReconError"}.

**Experiment 2:** [\[exp:PredictionComp\]]{#exp:PredictionComp label="exp:PredictionComp"} In this experiment, we compare the predictive capabilities of the indirect reconstruction via DCLDMD with the linear predictor derived in [@SCC.Korda.Mezic2018a]. The linear predictors in [@SCC.Korda.Mezic2018a] are of the form $z_{k+1} = Az_{k} + Bu_{k}$ with $x_{k} = Cz_{k}$ and $z$ being the lifted state (see [@SCC.Korda.Mezic2018a] for more details). Hence, for a given feedback law $\mu$, we can estimate the response of the Duffing oscillator described by equation ([\[eqn9\]](#eqn9){reference-type="ref" reference="eqn9"}) to the feedback law $\mu$ by using the linear predictor $z_{k+1} = Az_{k} + B\mu(Cz_{k})$. We generate data points $\{(x_{k},u_{k},y_{k})\}_{k=1}^{1000}$, each satisfying the relation $y_{k}=F(x_{k}) + G(x_{k})u_{k}$. DCLDMD is performed with the exact same kernel bases as in experiment [\[exp:Duffing\]](#exp:Duffing){reference-type="ref" reference="exp:Duffing"}, except the kernel widths are both set to $\sigma = 100$.
For regularization we set $\varepsilon = 10^{-6}$. Starting from the initial condition $x_{0} = [2, -2]^{T}$, we compare the predictions of the indirect reconstruction and of the linear predictor for the response to the feedback law given by $\mu(x_{k})=[-2x_{k,1},-2x_{k,2}]^{T}$. The results of this experiment can be seen in Fig. [\[fig:PredictorComp\]](#fig:PredictorComp){reference-type="ref" reference="fig:PredictorComp"}.

## Discussion {#discussion .unnumbered}

The experiments have shown the efficacy of DCLDMD in an academic setting with the Duffing oscillator. The experiments are done with no prior model knowledge, besides the system being affine in control, because the method only requires data points which satisfy $y_{k}=F(x_{k})+G(x_{k})u_{k}$. In Fig. [\[fig:PredictorComp\]](#fig:PredictorComp){reference-type="ref" reference="fig:PredictorComp"}, we observe that the linear predictor from [@SCC.Korda.Mezic2018a] struggles to accurately predict the behavior of the Duffing oscillator under the given feedback law, while the indirect reconstruction approach accurately tracks the actual trajectory of the Duffing oscillator. This is somewhat expected, as the linear predictors take the form of a linear system and one would not expect a linear system to accurately model nonlinear dynamics for a prolonged period of time. DCLDMD, however, offers a clear advantage, as the indirect approach yields a *nonlinear* predictor. Selection of the appropriate kernel function and kernel parameter $\sigma$ often requires a trial-and-error approach, depending on the system. For example, the Gaussian kernel with kernel parameter $\sigma = 10$ proved fruitful in experiment [\[exp:Duffing\]](#exp:Duffing){reference-type="ref" reference="exp:Duffing"}, while the exponential dot product kernel with kernel parameter $\sigma = 100$ was used for experiment [\[exp:PredictionComp\]](#exp:PredictionComp){reference-type="ref" reference="exp:PredictionComp"}. In each experiment the set $\alpha = \{K_{x_{k}}\}_{k=1}^{n}\subset \Tilde{H}$ consists of the kernel functions centered at the data points $x_{k}$ in the tuples $(x_{k},y_{k},u_{k})$, and the set $\beta = \{K_{x_{k},u_{k}}\}_{k=1}^{n} \subset H$ consists of the kernel functions associated with the points $(x_{k},u_{k})$ from the data set. As DCLDMD is a projection method, we are approximating the infinite-dimensional DCLDMD operator on a finite-dimensional subspace of the Hilbert space, namely the span of the kernel functions. In both experiments, the indirect reconstruction is used to estimate the flow. The indirect reconstruction explicitly depends upon the eigenfunctions of $[P_{\alpha}M_{\nu}P_\beta K_{F,G}]_{\alpha}$. Whether or not we can always represent the full-state observable (i.e., the flow) in terms of the eigenfunctions is not entirely clear, but this is a standard assumption in the DMD literature. With this in mind, DCLDMD is a heuristic approach to estimating the dynamics. Regardless, the numerical experiments in section [5](#sec:NumExp){reference-type="ref" reference="sec:NumExp"} demonstrate the capability of DCLDMD to accurately predict the response of the control-affine system to a given feedback law.

# Conclusion {#sec:concl}

In this paper, we have developed a novel method for representing any control-affine nonlinear system as a composition of a multiplication operator and a composition operator over an RKHS, called the DCLDMD operator.
Since the system is affine in control, the multiplication operator allows us to capture the effect of control on the nonlinear system, while the composition operator allows us to capture the effect of the dynamics. Moreover, the method is entirely data-driven and requires no model knowledge, besides the dynamical system being affine in control. Furthermore, we have demonstrated the efficacy of the algorithm in experiments [\[exp:Duffing\]](#exp:Duffing){reference-type="ref" reference="exp:Duffing"} and [\[exp:PredictionComp\]](#exp:PredictionComp){reference-type="ref" reference="exp:PredictionComp"}, showing not only that the method accurately predicts the response of the discrete, control-affine nonlinear system to a known feedback law, but also that DCLDMD offers a clear advantage over current techniques in the literature.

[^1]: This research was supported by the Air Force Office of Scientific Research (AFOSR) under contract number FA9550-20-1-0127 and the National Science Foundation (NSF) under award number 2027999. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agencies.

[^2]: Z. Morrison, Moad Abudia, and R. Kamalapurkar are with the School of Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, OK, 74074, United States of America (e-mail: zachmor,abudia,rushikesh.kamalapurkar\@okstate.edu). J. Rosenfeld is with the Department of Mathematics and Statistics, University of South Florida, Tampa, FL, 33620, United States of America (e-mail: rosenfeldj\@usf.edu).
arxiv_math
{ "id": "2309.09817", "title": "Dynamic Mode Decomposition of Control-Affine Nonlinear Systems using\n Discrete Control Liouville Operators", "authors": "Zachary Morrison, Moad Abudia, Joel Rosenfeld, Rushikesh Kamalapurar", "categories": "math.OC math.DS math.FA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: |
  Our initial aim was to answer the question: does the Frobenius (symmetric) property transfer from a strongly graded algebra to its homogeneous component of trivial degree? Related to it, we investigate invertible bimodules and the Picard group of a finite dimensional quasi-Frobenius algebra $R$. We compute the Picard group, the automorphism group and the group of outer automorphisms of a $9$-dimensional quasi-Frobenius algebra which is not Frobenius, constructed by Nakayama. Using these results and a semitrivial extension construction, we give an example of a symmetric strongly graded algebra whose trivial homogeneous component is not even Frobenius. We investigate associativity of isomorphisms $R^*\otimes_RR^*\simeq R$ for quasi-Frobenius algebras $R$, and we determine the order of the class of the invertible bimodule $H^*$ in the Picard group of a finite dimensional Hopf algebra $H$. As an application, we construct new examples of symmetric algebras.\ 2020 MSC: 16D50, 16D20, 16L60, 16S99, 16T05, 16W50\ Key words: quasi-Frobenius algebra, Frobenius algebra, symmetric algebra, invertible bimodule, Picard group, strongly graded algebra, Hopf algebra, Nakayama automorphism. address: - $^1$ University of Bucharest, Faculty of Mathematics and Computer Science, Str. Academiei 14, Bucharest 1, RO-010014, Romania - | $^2$ Institute of Mathematics of the Romanian Academy, PO-Box 1-764\ RO-014700, Bucharest, Romania - $^3$ Max Planck Institut f${\rm \ddot{u}}$r Mathematik, Vivatsgasse 7, 53111 Bonn, Germany - " e-mail: sdascal\\@fmi.unibuc.ro, Constantin_nastasescu\\@yahoo.com, lauranastasescu\\@gmail.com " author: - Sorin Dăscălescu$^1$, Constantin Năstăsescu$^2$ and Laura Năstăsescu$^{2,3}$ title: Picard groups of quasi-Frobenius algebras and a question on Frobenius strongly graded algebras ---

# Introduction and preliminaries

A finite dimensional algebra $A$ over a field $K$ is called Frobenius if $A\simeq A^*$ as left (or equivalently, as right) $A$-modules. If $A$ satisfies the stronger condition that $A\simeq A^*$ as $A$-bimodules, then $A$ is called a symmetric algebra. Frobenius algebras and symmetric algebras occur in algebra, geometry, topology and quantum theory, and they have a rich representation theory, which is relevant for all these branches of mathematics. A general problem is whether a certain ring property transfers from an algebra on which a Hopf algebra (co)acts to the subalgebra of (co)invariants; of special interest is the situation where the (co)action produces a Galois extension. Particular cases of high relevance are: (1) Algebras $A$ on which a group $G$ acts as automorphisms, and the transfer of properties to the subalgebra $A^G$ of invariants; (2) Algebras $A$ graded by a group $G$, and the transfer of properties to the homogeneous component of trivial degree. In the second case, such an $A$ is in fact a comodule algebra over the Hopf group algebra $KG$, and the subalgebra of coinvariants is just the component of trivial degree; moreover, the associated extension is $KG$-Hopf-Galois if and only if $A$ is strongly graded. Our initial aim was to answer the following.\ **Question 1.** *If $A=\oplus_{g\in G}A_g$ is a strongly $G$-graded algebra, where $G$ is a group with neutral element $e$, and $A$ is Frobenius (symmetric), does it follow that the subalgebra $A_e$ is Frobenius (symmetric)?*\ There is an interesting alternative way to formulate this question for the Frobenius property.
Frobenius algebras in the monoidal category of $G$-graded vector spaces were considered in [@dnn1], where they were called graded Frobenius algebras. Such objects and a shift version of them occur in noncommutative geometry, for example as Koszul duals of certain Artin-Schelter regular algebras, and also in the theory of Calabi-Yau algebras. A $G$-graded algebra $A$ is graded Frobenius if $A\simeq A^*$ as graded left $A$-modules, where $A^*$ is provided with a standard structure of such an object. Obviously, if $A$ is graded Frobenius, then it is a Frobenius algebra, while the converse is not true in general. If $A$ is strongly graded, then $A$ is graded Frobenius if and only if $A_e$ is Frobenius, see [@dnn1 Corollary 4.2]. Thus the question above can also be formulated as: If $A$ is a strongly graded algebra which is Frobenius, is it necessarily graded Frobenius? Question 1 cannot be reformulated in a similar way for the symmetric property. As $KG$ is a cosovereign Hopf algebra with respect to its counit, a concept of symmetric algebra can be defined in its category of corepresentations, i.e., in the monoidal category of $G$-graded vector spaces; the resulting objects are called graded symmetric algebras. As expected, $A$ is graded symmetric if $A\simeq A^*$ as graded $A$-bimodules. If $A$ is strongly graded, then $A_e$ is symmetric whenever $A$ is graded symmetric; however, the converse is not true, see [@dnn1 Remark 5.3]. This shows that Question 1 is not equivalent to asking whether a symmetric strongly graded algebra is graded symmetric; nevertheless, this other question is also of interest. The transfer of the Frobenius property from the strongly graded algebra $A$ to $A_e$ works well under additional conditions, for example if $A$ is free as a left and as a right $A_e$-module, in particular if $A$ is a crossed product of $A_e$ by $G$, see [@dnn2]. If $A$ is Frobenius, then it is left (and right) self-injective, and then so is $A_e$; this means that $A_e$ is a quasi-Frobenius algebra. Thus a possible example answering Question 1 in the negative should be built on a quasi-Frobenius algebra which is not Frobenius. Moreover, by Dade's Theorem, each homogeneous component of the strongly graded algebra $A$ is an invertible $A_e$-bimodule, see [@nvo], suggesting a study of the Picard group ${\rm Pic}(A_e)$ of $A_e$. In Section [2](#sectionPicardQF){reference-type="ref" reference="sectionPicardQF"} we look at invertible bimodules over a finite dimensional quasi-Frobenius algebra $R$. For such an $R$, an object of central interest is the linear dual $R^*$ of the regular bimodule $R$; we show that it is an invertible $R$-bimodule. In the case where $R$ is Frobenius, $R^*$ is isomorphic to a deformation of the regular bimodule $R$, with the right action modified by the Nakayama automorphism $\nu$ of $R$ with respect to a Frobenius form. It follows that the order of the class $[R^*]$ of $R^*$ in ${\rm Pic}(R)$ is just the order of the class of $\nu$ in the group ${\rm Out}(R)$ of outer automorphisms of $R$. If $R$ is not Frobenius, then $R^*$ cannot be obtained from $R$ by deforming the right action by an automorphism, or in other words, $[R^*]$ does not lie in the image of ${\rm Out}(R)$, and we show that it lies in the centralizer of the image of ${\rm Out}(R)$.
We compute the order of $[R^*]$ in ${\rm Pic}(R)$ for: (1) liftings of certain Hopf algebras in the braided category of Yetter-Drinfeld modules, called quantum lines, over the group Hopf algebra of a finite abelian group; (2) certain quotients of quantum planes. This order may be any positive integer, as well it can be infinite. It is known that a finite dimensional Hopf algebra is Frobenius. In this case we prove the following.\ **Theorem A.** *Let $H$ be a finite dimensional Hopf algebra with antipode $S$. Then the order of $[H^*]$ in ${\rm Pic}(H)$ is the least common multiple of the order of the class of $S^2$ in ${\rm Out}(H)$ and the order of the modular element of $H^*$ in the group of grouplike elements of $H^*$.*\ As a particular case, one gets a well-known characterization of symmetric finite dimensional Hopf algebras, as those unimodular Hopf algebras such that $S^2$ is inner. In Section [3](#sectionconstruction){reference-type="ref" reference="sectionconstruction"} we consider an algebra of dimension $9$ which is quasi-Frobenius, but not Frobenius, and we investigate its structure and determine its Picard group. This algebra was introduced by Nakayama in [@nak] in a matrix presentation, see also [@lam Example 16.19.(5)]. We use a different presentation given in [@dnn3]. Let $\mathcal{R}$ be the $K$-algebra with basis ${\bf B}=\{ E, X_1,X_2,Y_1,Y_2\}\cup \{F_{ij}|1\leq i,j\leq 2\}$, and relations $$\begin{aligned} E^2=E,& F_{ij}F_{jr}=F_{ir}\\ EX_{i}=X_{i}, &X_{i}F_{ir}=X_{r}\\ F_{ij}Y_{j}=Y_{i},& Y_{i}E=Y_{i} \end{aligned}$$for any $1\leq i,j,r\leq 2$, and any other product of two elements of $\bf B$ is zero. We show that any invertible $\mathcal{R}$-bimodule is either a deformation of $\mathcal{R}$ or one of $\mathcal{R}^*$ by an automorphism of $\mathcal{R}$, and we have an exact sequence $$1\rightarrow{\rm Inn}(\mathcal{R})\rightarrow{\rm Aut}(\mathcal{R})\rightarrow{\rm Pic}(\mathcal{R})\rightarrow C_2\rightarrow 1,$$ where $C_2$ is the cyclic group of order $2$. If $V$ is an $\mathcal{R}$-bimodule, and $\alpha$ is an automorphism of $\mathcal{R}$, we denote by $_1V_\alpha$ the bimodule obtained from $V$ by changing the right action via $\alpha$. We collect the conclusions of this section in:\ **Theorem B.** *There is an isomorphism of $\mathcal{R}$-bimodules $\varphi:\mathcal{R}^*\otimes _\mathcal{R}\mathcal{R}^*\rightarrow\mathcal{R}$, thus $[\mathcal{R}^*]$ has order $2$ in ${\rm Pic}(\mathcal{R})$. An invertible $\mathcal{R}$-bimodule is isomorphic either to $_1\mathcal{R}_{\alpha}$ or to $_1{\mathcal{R}^*}_{\alpha}$ for some $\alpha\in {\rm Aut}(\mathcal{R})$, and ${\rm Pic}(\mathcal{R})\simeq {\rm Out}(\mathcal{R})\times C_2$.*\ In Section [4](#sectionautomorfisme){reference-type="ref" reference="sectionautomorfisme"} we compute the automorphism group ${\rm Aut}(\mathcal{R})$ and the group ${\rm Out}(\mathcal{R})$ of outer automorphisms. For this aim, we use another presentation of $\mathcal{R}$, given in [@dnn3]. Thus $\mathcal{R}$ is isomorphic to the Morita ring associated with a Morita context connecting the rings $K$ and $M_2(K)$, where the connecting bimodules are $K^2$ and $M_{2,1}(K)$ with actions given by the usual matrix multiplication, and such that both Morita maps are zero. Thus $\mathcal{R}$ is isomorphic as a linear space to the matrix algebra $M_3(K)$, but its multiplication is altered by collapsing the product of the off diagonal blocks. 
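Since $\mathcal{R}$ is given by an explicit multiplication table on the nine basis elements of $\bf B$, its defining relations are easy to verify mechanically. The following small sketch encodes the products of basis elements, checks associativity by brute force, and confirms that $E+F_{11}+F_{22}$ acts as the identity element; the encoding and helper names are our own and are only meant as an illustration.

```python
from itertools import product

basis = ['E', 'X1', 'X2', 'Y1', 'Y2', 'F11', 'F12', 'F21', 'F22']

def mult(a, b):
    """Product of two basis elements of R; None encodes the zero product."""
    if a == 'E' and b == 'E':
        return 'E'                                  # E^2 = E
    if a == 'E' and b.startswith('X'):
        return b                                    # E X_i = X_i
    if a.startswith('Y') and b == 'E':
        return a                                    # Y_i E = Y_i
    if a.startswith('F') and b.startswith('F') and a[2] == b[1]:
        return 'F' + a[1] + b[2]                    # F_{ij} F_{jr} = F_{ir}
    if a.startswith('X') and b.startswith('F') and a[1] == b[1]:
        return 'X' + b[2]                           # X_i F_{ir} = X_r
    if a.startswith('F') and b.startswith('Y') and a[2] == b[1]:
        return 'Y' + a[1]                           # F_{ij} Y_j = Y_i
    return None                                     # every other product of basis elements is zero

def triple(a, b, c, left_first=True):
    """(ab)c if left_first, otherwise a(bc), again as a basis element or None."""
    inner = mult(a, b) if left_first else mult(b, c)
    if inner is None:
        return None
    return mult(inner, c) if left_first else mult(a, inner)

# Associativity on all triples of basis elements.
assert all(triple(a, b, c, True) == triple(a, b, c, False)
           for a, b, c in product(basis, repeat=3))

# E + F_{11} + F_{22} is the identity element of R.
for b in basis:
    assert [mult(e, b) for e in ('E', 'F11', 'F22') if mult(e, b) is not None] == [b]
    assert [mult(b, e) for e in ('E', 'F11', 'F22') if mult(b, e) is not None] == [b]
```

The same table is what the Morita-ring description encodes: $\mathcal{R}$ coincides with $M_3(K)$ as a vector space, with the products of the two off-diagonal blocks set to zero.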
We prove:\ **Theorem C.** *${\rm Aut}(\mathcal{R})$ is isomorphic to a semidirect product $(K^2\times M_{2,1}(K))\rtimes (K^*\times GL_2(K))$, and ${\rm Out}(\mathcal{R})\simeq K^*$.*\ We explicitly describe the automorphisms and the outer automorphisms. In contrast to the matrix algebra $M_3(K)$, which has no outer automorphisms, the alteration of the multiplication produces non-trivial outer automorphisms of $\mathcal{R}$. As a consequence of Theorems B and C, we see that ${\rm Pic}(\mathcal{R})\simeq K^*\times C_2$. In Section [5](#semitrivialextensions){reference-type="ref" reference="semitrivialextensions"} we consider an arbitrary finite dimensional algebra $R$ and a morphism of $R$-bimodules $\psi:R^*\otimes_RR^*\rightarrow R$ which is associative, i.e., $\psi(r^*\otimes_Rs^*)\rightharpoonup t^*=r^*\leftharpoonup\psi(s^*\otimes_Rt^*)$ for any $r^*,s^*,t^*\in R^*$; here $\rightharpoonup$ and $\leftharpoonup$ denote the usual left and right actions of $R$ on $R^*$. Then we can form the semitrivial extension $R\rtimes_\psi R^*$, which is the cartesian product $R\times R^*$ with the usual addition, and multiplication defined by $$(r,r^*)(s,s^*)=(rs+\psi(r^*\otimes_Rs^*),(r\rightharpoonup s^*)+(r^*\leftharpoonup s))$$ for any $r,s\in R, r^*,s^*\in R^*$. It has the structure of a $C_2$-graded algebra with $R$ as the homogeneous component of trivial degree. We prove:
Then any isomorphism $\psi:R^*\otimes_RR^*\rightarrow R$ is associative.*\ As a consequence, for any Frobenius algebra $R$ such that $[R^*]$ has order $2$ in ${\rm Pic}(R)$, we can construct a semitrivial extension which is a strongly $C_2$-graded algebra and also symmetric as an algebra, and has $R$ as the homogeneous component of trivial degree. Thus we give more examples answering Question 1 in the negative for the symmetric property. We present several classes of algebras $R$ enjoying these properties. Among them, we note that for any finite dimensional unimodular Hopf algebra $H$, $[H^*]$ has order at most $2$ in ${\rm Pic}(H)$. We work over a field $K$. We refer to [@lam], [@lorenz] and [@sy] for facts related to (quasi)-Frobenius algebras and symmetric algebras, to [@nvo] for results about graded rings, and to [@radford2] for basic notions about Hopf algebras. We recall that if $G$ is a group with neutral element $e$, an algebra $A$ is $G$-graded if it has a decomposition $A=\oplus_{g\in G}A_g$ as a direct sum of linear subspaces such that $A_gA_h\subset A_{gh}$ for any $g,h\in G$; in particular, $A_e$ is a subalgebra of $A$. Such an $A$ is called strongly graded if $A_gA_h=A_{gh}$ for any $g,h\in G$.

# Quasi-Frobenius algebras and invertible bimodules {#sectionPicardQF}

We recall from [@bass] some basic facts concerning invertible bimodules and the Picard group. Let $R$ be an algebra over a field $K$. An $R$-bimodule $P$ is called invertible if it satisfies one of the following equivalent conditions: (1) There exists a bimodule $Q$ such that $P\otimes_RQ$ and $Q\otimes_RP$ are isomorphic to $R$ as bimodules; (2) The functor $P\otimes_R -:R-mod\rightarrow R-mod$ is an equivalence of categories; (3) $P$ is a finitely generated projective generator as a left $R$-module, and the map $\omega:R\rightarrow {\rm End}(_RP)$, $\omega (r)(p)=pr$ for any $r\in R, p\in P$ is a ring isomorphism. We keep the usual convention that the multiplication in ${\rm End}(_RP)$ is the opposite of map composition, i.e., the product of $f$ and $g$ is $g\circ f$. The set of isomorphism types of invertible $R$-bimodules is a group with multiplication defined by $[U]\cdot [V]=[U\otimes_RV]$, where $[U]$ denotes the class of the bimodule $U$ with respect to the isomorphism equivalence relation. This group is called the Picard group of $R$, and it is denoted by ${\rm Pic}(R)$. If $V$ is an $R$-bimodule and $\alpha,\beta$ are elements in the group ${\rm Aut}(R)$ of algebra automorphisms of $R$, we denote by $_\alpha V_\beta$ the bimodule with the same underlying space as $V$, and left and right actions defined by $r\ast v=\alpha(r)v$ and $v\ast r=v\beta(r)$ for any $v\in V$ and $r\in R$. The following facts hold for any $\alpha,\beta,\gamma\in {\rm Aut}(R)$. All isomorphisms are of $R$-bimodules, and $1$ denotes the identity automorphism.\ $\bullet$ $_{\gamma\alpha} R_{\gamma\beta}\simeq {_{\alpha} R_{\beta}}$, in particular $_{\alpha} R_{\beta}\simeq {_1R_{\alpha^{-1}\beta}}$.\ $\bullet$ $_1R_\alpha \otimes_{R} {_1R_\beta}\simeq {_1R_{\alpha\beta}}$, thus $_1R_\alpha$ is invertible, and $[_1R_\alpha]^{-1}=[_1R_{\alpha^{-1}}]$.\ $\bullet$ $_1R_\alpha\simeq {_1R_\beta}$ if and only if $\alpha\beta^{-1}$ is an inner automorphism of $R$, i.e., there exists an invertible element $u\in R$ such that $\alpha\beta^{-1}(r)=u^{-1}ru$ for any $r\in R$. Denote by ${\rm Inn}(R)$ the group of inner automorphisms of $R$.
In particular, $_1R_\alpha \simeq R$ if and only if $\alpha\in {\rm Inn}(R)$, thus there is an exact sequence of groups $1\rightarrow{\rm Inn}(R)\hookrightarrow {\rm Aut}(R)\rightarrow{\rm Pic}(R)$, the last morphism in the sequence taking $\alpha$ to the class of $_1R_\alpha$. The factor group ${\rm Aut}(R)/{\rm Inn}(R)$, denoted by ${\rm Out}(R)$, is called the group of outer automorphisms of $R$, and it embeds into ${\rm Pic}(R)$.\ $\bullet$ $_\alpha V_\beta\simeq {_\alpha R_1}\otimes_R V\otimes_R {_1R_\beta}$.\ We will also need the following. **Proposition 1**. *([@bass page 73]) Let $U$ and $V$ be invertible $R$-bimodules such that $U\simeq V$ as left $R$-modules. Then there exists $\alpha\in {\rm Aut}(R)$ such that $U\simeq {_1V_\alpha}$ as $R$-bimodules.* Now let $V$ be a bimodule over the $K$-algebra $R$. Then the linear dual $V^*={\rm Hom}_K(V,K)$ is an $R$-bimodule with actions denoted by $\rightharpoonup$ and $\leftharpoonup$, given by $(r\rightharpoonup v^*)(v)=v^*(vr)$ and $(v^*\leftharpoonup r)(v)=v^*(rv)$ for any $r\in R, v^*\in V^*, v\in V$. One can easily check that $({_\alpha V_\beta})^*= {_\beta (V^*)_\alpha}$ for any $\alpha,\beta \in Aut(R)$. If $V$ is finite dimensional, then $(V^*)^*\simeq V$, and this shows that two finite dimensional bimodules $V$ and $W$ are isomorphic if and only if so are their duals $V^*$ and $W^*$. We are interested in a particular bimodule, namely $R^*$, the dual of $R$. Some immediate consequences of the discussion above are that for any $\alpha,\beta,\gamma\in {\rm Aut}(R)$:\ $\bullet$ $_{\gamma\alpha} (R^*)_{\gamma\beta}\simeq {_{\alpha} (R^*)_{\beta}}$, in particular $_{\alpha} (R^*)_{\beta}\simeq {_1(R^*)_{\alpha^{-1}\beta}}$. Indeed, $_{\gamma\alpha} (R^*)_{\gamma\beta}\simeq (_{\gamma\beta} R_{\gamma\alpha})^*\simeq (_{\beta} R_{\alpha})^*\simeq {_{\alpha} (R^*)_{\beta}}$.\ $\bullet$ If $R$ has finite dimension, then $_{1} (R^*)_{\alpha}\simeq {_{1} (R^*)_{\beta}}$ if and only if $\alpha^{-1}\beta \in {\rm Inn}(R)$. Indeed, $_{1} (R^*)_{\alpha}$ and ${_{1} (R^*)_{\beta}}$ are isomorphic if and only if so are their duals, i.e., $_\alpha R_1\simeq {_\beta R_1}$, which is the same as $_1R_{\alpha^{-1}} \simeq {_1R_{\beta^{-1}}}$, i.e., $\alpha^{-1}\beta \in {\rm Inn}(R)$. Since ${\rm Inn}(R)$ is a normal subgroup of ${\rm Aut}(R)$, this is equivalent to $\alpha^{-1}\beta \in {\rm Inn}(R)$.\ The following holds for any finite dimensional algebra. **Proposition 2**. *Let $R$ be a finite dimensional algebra. Then the map $\omega:R\rightarrow {\rm End}(_RR^*)$ defined by $\omega (a)(r^*)=r^*\leftharpoonup a$ for any $r^*\in R^*$ and $a\in R$, is an isomorphism of algebras.* *Proof.* It is easy to check that $\omega$ is well defined and it is an algebra morphism. If $\omega(a)=0$ for some $a$, then $r^*\leftharpoonup a=0$ for any $r^*\in R^*$, and evaluating at 1, we get $r^*(a)=0$. Thus $a$ must be 0, so $\omega$ is injective. To check that $\omega$ is surjective, let $f\in {\rm End}(_RR^*)$, and let $\theta:R^*\rightarrow K, \theta(r^*)=f(r^*)(1)$. Then $\theta \in R^{**}$, and as $R\simeq R^{**}$, there is $r\in R$ such that $f(r^*)(1)=\theta(r^*)=r^*(r)$ for any $r^*\in R^*$. Then for any $r^*\in R^*$ and $s\in R$ one has $$\begin{aligned} f(r^*)(s)&=&f(r^*)(1\cdot s)\\ &=&(s\rightharpoonup f(r^*))(1)\\ &=&f(s\rightharpoonup r^*)(1)\\ &=&(s\rightharpoonup r^*)(r)\\ &=&r^*(rs)\\ &=&(r^*\leftharpoonup r)(s), \end{aligned}$$showing that $f(r^*)=r^*\leftharpoonup r$, i.e., $f=\omega (r)$. ◻ Let $R$ be a finite dimensional algebra.
We recall that $R$ is called quasi-Frobenius if it is injective as a left (or equivalently, right) $R$-module. It is known that $R$ is quasi-Frobenius if and only if the left $R$-modules $R$ and $R^*$ have the same distinct indecomposable components (possibly occurring with different multiplicities), see [@lam Section 16C]. Therefore a Frobenius algebra is always quasi-Frobenius. **Corollary 3**. *Let $R$ be a finite dimensional algebra. Then $R^*$ is an invertible $R$-bimodule if and only if $R$ is a quasi-Frobenius algebra.* *Proof.* If $R^*$ is an invertible bimodule, then it is projective as a right $R$-module, so then its linear dual $(R^*)^*$ is an injective left $R$-module. But $(R^*)^*\simeq R$ as left $R$-modules, and we get that $R$ is left selfinjective. Conversely, assume that $R$ is quasi-Frobenius. Since $R$ is an injective right $R$-module, we get that $R^*$ is a projective left $R$-module. On the other hand, since the left $R$-modules $R$ and $R^*$ have the same distinct indecomposable components, we see that there is an epimorphism $(R^*)^n\rightarrow R$ for a large enough positive integer $n$, thus $R^*$ is a generator as a left $R$-module. If we also take into account Proposition [Proposition 2](#isoend){reference-type="ref" reference="isoend"}, we get that $R^*$ is invertible. ◻ If $R$ is Frobenius, then an element $\lambda\in R^*$ such that $R\rightharpoonup\lambda =R^*$, is called a Frobenius form on $R$; in this case, the map $a\mapsto a\rightharpoonup\lambda$ is an isomorphism of left $R$-modules between $R$ and $R^*$, and also, the map $a\mapsto \lambda\leftharpoonup a$ is an isomorphism of right $R$-modules from $R$ to $R^*$. The Nakayama automorphism of $R$ associated with a Frobenius form $\lambda$ is the map $\nu:R\rightarrow R$ defined such that for any $a\in R$, $\nu (a)$ is the unique element of $R$ satisfying $\nu(a)\rightharpoonup\lambda=\lambda \leftharpoonup a$, or equivalently, $\lambda (ar)=\lambda (r\nu(a))$ for any $r\in R$; $\nu$ turns out to be an algebra automorphism. If $\nu$ and $\nu'$ are Nakayama automorphisms associated with two Frobenius forms, then there exists an invertible element $u\in R$ such that $\nu'(a)=u^{-1}\nu (a)u$ for any $a\in R$, thus $\nu$ and $\nu'$ are equal up to an inner automorphism. It follows that the class of a Nakayama automorphism in ${\rm Out}(R)$ does not depend on the Frobenius form; see [@lam Section 16E], [@lorenz Section 2.2] or [@sy Chapter IV] for details. If the quasi-Frobenius algebra $R$ is not Frobenius, $R^*$ is not isomorphic to any $_1R_\alpha$, as $R^*$ is not isomorphic to $R$ as left $R$-modules. In the Frobenius case, we have the following result; it appears in an equivalent formulation in [@sy Proposition 3.15]. **Proposition 4**. *Let $R$ be a Frobenius algebra. Then there exists $\nu \in {\rm Aut}(R)$ such that $R^*\simeq {_1R_\nu}$ as bimodules. Moreover, any such $\nu$ is the Nakayama automorphism of $R$ associated with a Frobenius form. As a consequence, the order of $[R^*]$ in ${\rm Pic}(R)$ is equal to the order of the class of $\nu$ in ${\rm Out}(R)$.* *Proof.* The first part follows directly from Proposition [Proposition 1](#isoinvertible){reference-type="ref" reference="isoinvertible"}, since $R^*\simeq R$ as left $R$-modules. Let $\gamma:{_1R_\nu}\rightarrow R^*$ be an isomorphism of bimodules, and let $\lambda=\gamma (1)$. Then $R\rightharpoonup\lambda =R^*$, so $\lambda$ is a Frobenius form on $R$. 
Then for any $a,x\in R$ $$\begin{aligned} (\lambda \leftharpoonup a)(x) &=&(\gamma (1)\leftharpoonup a)(x)\\ &=&\gamma (1\cdot \nu (a))(x)\\ &=&\gamma (\nu(a)\cdot 1)(x)\\ &=&(\nu(a)\rightharpoonup\gamma(1))(x)\\ &=&(\nu(a)\rightharpoonup\lambda)(x), \end{aligned}$$showing that $\lambda \leftharpoonup a=\nu(a)\rightharpoonup\lambda$, thus $\nu$ is the Nakayama automorphism associated with $\lambda$. ◻ Looking inside the Picard group, the previous Proposition gives a new perspective on the well-known fact that a Frobenius algebra is symmetric if and only if the Nakayama automorphism is inner, see [@lam Theorem 16.63]. Indeed, $R$ is symmetric if and only if $R^*\simeq R$ as bimodules, i.e., ${_1R_\nu}\simeq R$, and this is equivalent to $\nu$ being inner. The following indicates a commutation property of the class of $R^*$ in the Picard group of $R$. **Proposition 5**. *Let $R$ be a quasi-Frobenius finite dimensional algebra, and let $\alpha\in {\rm Aut}(R)$. Then $R^*\otimes_R{_1R_\alpha}\simeq {_1R_\alpha}\otimes_RR^*$ as $R$-bimodules. Thus the element $[R^*]$ of the Picard group ${\rm Pic}(R)$ lies in the centralizer of the image of ${\rm Out}(R)$.* *Proof.* Taking into account the above considerations, we have isomorphisms of $R$-bimodules $$R^*\otimes_R\,{_1R_\alpha}\simeq {_1(R^*)_\alpha}\simeq {_{\alpha^{-1}}(R^*)_1}\simeq {_{\alpha^{-1}}R_1}\otimes_RR^*\simeq {_1R_\alpha}\otimes_RR^*$$ ◻ **Corollary 6**. *Let $R$ be a Frobenius algebra. Then the class of the Nakayama automorphism of $R$ lies in the centre of ${\rm Out}(R)$.* If $R$ is quasi-Frobenius, we are interested in the order of $[R^*]$ in the group ${\rm Pic}(R)$. This order is $1$ if and only if $R$ is a symmetric algebra. The following examples show that it may be any integer $\geq 2$ in other quasi-Frobenius algebras, and that it can also be infinite. For the first example, we recall that if $H$ is a finite dimensional Hopf algebra, then a left integral on $H$ is an element $\lambda \in H^*$ such that $h^*\lambda =h^*(1)\lambda$ for any $h^*\in H^*$; the multiplication of $H^*$ is given by the convolution product. Any finite dimensional Hopf algebra $H$ is a Frobenius algebra, and a non-zero left integral $\lambda$ on $H$ is a Frobenius form, see [@lorenz Theorem 12.5]. **Example 7**. *Let $C$ be a finite abelian group, and let $C^*$ be its character group. We consider certain Hopf algebras in the braided category of Yetter-Drinfeld modules over the group Hopf algebra $KC$, called quantum lines, and their liftings, obtained by a bosonization construction. We obtain some finite dimensional pointed Hopf algebras with coradical $KC$, see [@as], [@bdg]. There are two classes of such objects.* *$\bf{(I)}$ Hopf algebras of the type $H_1(C,n,c,c^*)$, where $n\geq 2$ is an integer, $c\in C$ and $c^*\in C^*$, such that $c^n\neq 1$, $(c^*)^n=1$ and $c^*(c)$ is a primitive $n$th root of unity. It is generated as an algebra by the Hopf subalgebra $KC$ and a $(1,c)$-skewprimitive element $x$, i.e., the comultiplication works as $\Delta (x)=c\otimes x+x\otimes 1$ on $x$, subject to relations $x^n=c^n-1$ and $xg=c^*(g)gx$ for any $g\in C$. Note that the required conditions show that $c^*$ has order $n$.* *$\bf{(II)}$ Hopf algebras of the type $H_2(C,n,c,c^*)$, where $n\geq 2$ is an integer, $c\in C$ and $c^*\in C^*$, such that $c^*(c)$ is a primitive $n$th root of unity. It is generated as an algebra by the Hopf subalgebra $KC$ and a $(1,c)$-skewprimitive element $x$, subject to relations $x^n=0$ and $xg=c^*(g)gx$ for any $g\in C$.
We note that in this case the order of $c^*$, which we denote by $m$, is a multiple of $n$.* *If $H$ is any of $H_1(C,n,c,c^*)$ or $H_2(C,n,c,c^*)$, a linear basis of $H$ is ${\mathcal{B}}=\{ gx^j|g\in C, 0\leq j\leq n-1\}$, thus the dimension of $H$ is $n|C|$, and the linear map $\lambda\in H^*$ such that $\lambda(c^{1-n}x^{n-1})=1$ and $\lambda$ takes any other element of $\mathcal{B}$ to 0, is a left integral on $H$, see [@bdg Proposition 1.17].* *If $g\in C$, then $$(\lambda \leftharpoonup g)(g^{-1}c^{1-n}x^{n-1})=\lambda(c^{1-n}x^{n-1})=1$$ and $\lambda\leftharpoonup g$ takes any other element of $\mathcal{B}$ to 0, while $$(g\rightharpoonup \lambda)(g^{-1}c^{1-n}x^{n-1})=\lambda((g^{-1}c^{1-n}x^{n-1}g)=c^*(g)^{n-1}\lambda(c^{1-n}x^{n-1})=c^*(g)^{n-1}$$ and $g\rightharpoonup\lambda$ takes any other element of $\mathcal{B}$ to 0. These show that $g\rightharpoonup\lambda =c^*(g)^{n-1}\lambda\leftharpoonup g$, so the Nakayama automorphism $\nu$ associated with the Frobenius form $\lambda$ satisfies $\nu(g)=c^*(g)^{1-n}g$.* *On the other hand, if we denote $\xi=c^*(c)$, we have $$(x\rightharpoonup\lambda)(c^{1-n}x^{n-2})=\lambda(c^{1-n}x^{n-1})=1$$ and $x\rightharpoonup\lambda$ takes any other element of $\mathcal{B}$ to 0, while $$(\lambda \leftharpoonup x)(c^{1-n}x^{n-2})=\lambda(xc^{1-n}x^{n-2})=\xi^{1-n}\lambda (c^{1-n}x^{n-1})=\xi$$ and $\lambda\leftharpoonup x$ takes any other element of $\mathcal{B}$ to 0. Thus we get $\nu(x)=\xi x$.* *Denote the order of $c^*$ by $m$; we noticed that $m=n$ in the case of $H_1(C,n,c,c^*)$, and $m=dn$ for some positive integer $d$ in the case of $H_2(C,n,c,c^*)$. If $j$ is a positive integer, then $\nu^j=1$ if and only if $\xi^j=1$ and $c^*(g)^{j(1-n)}=1$ for any $g\in C$. If the latter condition is satisfied, then $(c^*)^{j(1-n)}=1$, or equivalently, $m|j(1-n)$, hence $n|j(1-n)$, and then $n|j$, so the condition $\xi^j=1$ is automatically satisfied. Thus the order of $\nu$ is the least positive integer $j$ such that $m|j(1-n)$. For any such $j$ we have $n|j$, so $j=bn$ for some integer $b$. Then $m|j(1-n)$ is equivalent to $d|b(n-1)$, and also to $\frac{d}{(d,n-1)}|b\cdot \frac{n-1}{(d,n-1)}$. Since $\frac{d}{(d,n-1)}$ and $\frac{n-1}{(d,n-1)}$ are relatively prime, the latter condition is equivalent to $\frac{d}{(d,n-1)}|b$. We conclude that the least such $b$ is $\frac{d}{(d,n-1)}$, and the order of $\nu$ is $$j=bn=\frac{dn}{(d,n-1)}=\frac{m}{(\frac{m}{n},n-1)}.$$* *This shows that for $H_1(C,n,c,c^*)$, where $m=n$, the order of $\nu$ is necessarily $n$, while for $H_2(C,n,c,c^*)$, the order may be larger than $n$, depending on the value of $m$.* *Now we show that for any $1\leq j<\frac{m}{(\frac{m}{n},n-1)}$, $\nu^j$ is not an inner automorphism. Indeed, if it were, then there would exist an invertible $u$ such that $\nu^j(r)=u^{-1}ru$ for any $r$ in the Hopf algebra (which is either $H_1(C,n,c,c^*)$ or $H_2(C,n,c,c^*)$). In particular, for any $g\in C$, $c^*(g)^{j(1-n)}g=u^{-1}gu$. Applying the counit $\varepsilon$, one gets $c^*(g)^{j(1-n)}=1$ for any $g\in C$, so $(c^*)^{j(1-n)}=1$. 
Hence $m|j(1-n)$, and we have seen above that this implies that $j$ must be at least $\frac{m}{(\frac{m}{n},n-1)}$, a contradiction.* *We conclude that if $A$ is a Hopf algebra of type $H_1(C,n,c,c^*)$ or $H_2(C,n,c,c^*)$, then the order of the Nakayama automorphism $\nu$ of $A$ in the group of algebra automorphisms of $A$, as well as the order of the class of $\nu$ in ${\rm Out}(A)$ (which is the same with the order of $[A^*]$ in ${\rm Pic}(A)$) is $\frac{m}{(\frac{m}{n},n-1)}$, where $m$ is the order of $c^*$ in $C^*$. In the case of $H_1(C,n,c,c^*)$, where $m=n$, this order is just $n$.* *A particular case is when $C=C_n=<c>$ is the cyclic group of order $n\geq 2$. Then for any linear character $c^*\in C^*$ such that $c^*(c)$ is a primitive $n$th root of unity, $H_1(C,n,c,c^*)$ is a Taft Hopf algebra. For such algebras, the order of the Nakayama automorphism associated with a left integral as a Frobenius form is computed in [@sy Example 5.9, page 614].* **Example 8**. *Let $q$ be a non-zero element of a field $K$, and let $K_q[X,Y]$ be the quantum plane, which is the $K$-algebra generated by $X$ and $Y$, subject to the relation $YX=qXY$. Let $R_q=K_q[X,Y]/(X^2,Y^2)$, which has dimension 4, and a basis $\mathcal{B}=\{ 1,x,y,xy\}$, where $x,y$ denote the classes of $X,Y$ in $R$. We have $x^2=y^2=0$ and $yx=qxy$. Denote by $\mathcal{B}^*=\{ 1^*,x^*,y^*,(xy)^*\}$ the basis of $R_q^*$ dual to $\mathcal{B}$. Then $$1\rightharpoonup(xy)^*=(xy)^*, x\rightharpoonup(xy)^*=qy^*, y\rightharpoonup(xy)^*=x^*, (xy)\rightharpoonup(xy)^*=1^*,$$ showing that the linear map from $R_q$ to $R_q^*$ which takes $r$ to $r\rightharpoonup(xy)^*$ is an isomorphism. Thus $R_q$ is a Frobenius algebra and $\lambda=(xy)^*$ is a Frobenius form on $R_q$. Now since $(xy)^*\leftharpoonup x=y^*$ and $(xy)^*\leftharpoonup y=qx^*$, the Nakayama automorphism associated with $\lambda$ is $\nu\in {\rm Aut}(R_q)$ given by $\nu(x)=q^{-1}x, \nu(y)=qy$. Then it is clear that the order of $\nu$ in the automorphism group of $R_q$ is $n$ if $q$ is a primitive $n$th root of unity in $K$, and it is infinite when no non-trivial power of $q$ is 1. This fact was observed in [@sy Example 10.7, page 417] by using periodic modules with respect to actions of the syzygy and Auslander-Reiten operators.* *We show that if $t$ is a positive integer such that $q^t\neq 1$, then $\nu^t$ is not even an inner automorphism. Indeed, if it were, then $\nu^t(x)=u^{-1}xu$, or $ux=q^txu$ for some invertible $u\in R_q$. If we write $u=a1+bx+cy+dxy$ with $a,b,c,d\in K$, this means that $ax+qcxy=q^t(ax+cxy)$, showing that $a=0$. But then $u$ cannot be invertible, since $xyu=0$, a contradiction.* *In conclusion, if $q$ is not a root of unity, $\nu$ has infinite order in ${\rm Out}(R_q)$, and so does $[R_q^*]$ in ${\rm Pic}(R_q)$, while if $q$ is a primitive $n$th root of unity, then $[R_q^*]$ has order $n$ in ${\rm Pic}(R_q)$.* *We end this example with the remark that Nakayama and Nesbitt constructed in [@nn page 665] a class of examples of Frobenius algebras which are not symmetric, presented in a matrix form. More precisely, in the presentation of [@lam Example 16.66], for any non-zero elements $u,v\in K$, let $A_{u,v}$ be the subalgebra of $M_4(K)$ consisting of all matrices of the type $\left[ \begin{array}{cccc} a & b& c&d\\ 0 & a & 0&uc\\ 0&0&a&vb\\ 0&0&0&a\end{array} \right]$, where $a,b,c,d\in K$. Then $A_{u,v}$ is Frobenius for any $u,v\in K^*$, and it is symmetric if and only if $u=v$. 
$A_{u,v}$ has a basis consisting of the elements $$I_4,\; x=E_{12}+vE_{34},\; y=E_{13}+uE_{24},\; z=E_{14},$$ where $E_{ij}$ denote the usual matrix units in $M_4(K)$, and they satisfy the relations $$x^2=0,\; y^2=0,\; xy=uz,\; yx=vz.$$ These show that in fact, $A_{u,v}$ is isomorphic to the quotient $R_{u^{-1}v}$ of the quantum plane.* If $H$ is a finite dimensional Hopf algebra, let $t\in H$ be a non-zero left integral in $H$, i.e., $ht=\varepsilon (h)t$ for any $h\in H$, where $\varepsilon$ is the counit of $H$. As the space of left integrals is one-dimensional and $th$ is a left integral for any $h\in H$, there is a linear map $\mathcal{G}:H\rightarrow K$ such that $th=\mathcal{G}(h)t$ for any $h\in H$. In fact, $\mathcal{G}$ is an algebra morphism, thus an element of the group $G(H^*)$ of grouplike elements of $H^*$. $\mathcal{G}$ is called the distinguished group-like element of $H^*$, and also the right modular element of $H^*$. **Theorem 9**. *Let $H$ be a finite dimensional Hopf algebra with antipode $S$ and counit $\varepsilon$, and let $\mathcal{G}$ be the modular element in $H^*$. If $n$ is a positive integer, then $[H^*]^n=1$ in ${\rm Pic}(H)$ if and only if $S^{2n}$ is inner and $\mathcal{G}^n=\varepsilon$. As a consequence, the order of $[H^*]$ in the Picard group of $H$ is the least common multiple of the order of the class of $S^2$ in ${\rm Out}(H)$ and the order of $\mathcal{G}$ in $G(H^*)$.* *Proof.* Let $\lambda$ be a non-zero left integral on $H$, which is a Frobenius form on $H$, and let $\nu$ be the associated Nakayama automorphism. By [@radford Theorem 3(a)], in the reformulation of [@lorenz Proposition 12.8], $\nu(h)=\sum \mathcal{G}(h_2)S^2(h_1)$ for any $h\in H$. Let $\ell_{\mathcal{G}}:H\rightarrow H$ be the linear map defined by $\ell_{\mathcal{G}}(h)=\mathcal{G}\rightharpoonup h=\sum \mathcal{G}(h_2)h_1$. We have $\nu=S^2 \ell_{\mathcal{G}}$. We note that $\mathcal{G}S^2=\mathcal{G}$. Indeed, it is clear that $\mathcal{G}S=S^*(\mathcal{G})=\mathcal{G}^{-1}$, since the dual map $S^*$ of $S$ is the antipode of the dual Hopf algebra $H^*$, and it takes a group-like element to its inverse. Now we have $$\begin{aligned} (\ell_{\mathcal{G}}S^2)(h)&=&\mathcal{G}\rightharpoonup S^2(h)\\ &=&\sum \mathcal{G}(S^2(h_2))S^2(h_1)\\ &=&\sum \mathcal{G}(h_2)S^2(h_1)\\ &=&S^2(\mathcal{G}\rightharpoonup h)\\ &=&(S^2\ell_{\mathcal{G}})(h), \end{aligned}$$showing that $\ell_{\mathcal{G}}S^2=S^2\ell_{\mathcal{G}}$. Since $\rightharpoonup$ is a left action, we have $(\ell_{\mathcal{G}})^n=\ell_{\mathcal{G}^n}$ for any positive integer $n$, and it follows that $\nu^n=S^{2n}\ell_{\mathcal{G}^n}$. Now if $\mathcal{G}^n=\varepsilon$, so that $\ell_{\mathcal{G}^n}$ is the identity map of $H$, and $S^{2n}$ is inner, then $\nu^n=S^{2n}$ is inner, so $[H^*]^n=[{_1H_\nu}]^n=[{_1H_{\nu^n}}]=1$ in ${\rm Pic}(H)$. Conversely, if $[H^*]^n=1$, then $\nu^n$ is inner. Let $\nu^n(h)=u^{-1}hu$ for some invertible $u\in H$. Then $S^{2n}(\ell_{\mathcal{G}^n}(h))=u^{-1}hu$ for any $h\in H$, and applying $\varepsilon$ of $H$ and using that $\varepsilon S=\varepsilon$, we obtain $\varepsilon (\ell_{\mathcal{G}^n}(h))=\varepsilon(u^{-1})\varepsilon(h)\varepsilon(u)=\varepsilon(h)$. As $\varepsilon (\ell_{\mathcal{G}^n}(h))=\varepsilon (\sum \mathcal{G}^n(h_2)h_1)=\mathcal{G}^n(h)$, we get $\mathcal{G}^n=\varepsilon$. Consequently, $\nu^n=S^{2n}$, so $S^{2n}$ is inner.
◻ We note that in the particular case where $n=1$, the previous Theorem says that a finite dimensional Hopf algebra $H$ is a symmetric algebra if and only if $\mathcal{G}=\varepsilon$, i.e., $H$ is unimodular, and $S^2$ is inner. This is a result of [@os], see also [@lorenz Theorem 12.9].

# The structure of $\mathcal{R}$ and $\mathcal{R}^*$, and the Picard group of $\mathcal{R}$ {#sectionconstruction}

Let $\mathcal{R}$ be the $K$-algebra presented in the Introduction. It has basis ${\bf B}=\{ E, X_1,X_2,Y_1,Y_2\}\cup \{F_{ij}|1\leq i,j\leq 2\}$, and relations $$\begin{aligned} E^2=E,& F_{ij}F_{jr}=F_{ir},\\ EX_{i}=X_{i},& X_{i}F_{ir}=X_{r},\\ F_{ij}Y_{j}=Y_{i},& Y_{i}E=Y_{i} \end{aligned}$$for any $1\leq i,j,r\leq 2$, and any other product of two elements of $\bf B$ is zero. Let $$\begin{aligned} \mathcal{V}_1&=&<X_1,F_{11},F_{21}>\\ \mathcal{V}'_1&=&<X_2,F_{12},F_{22}>\\ \mathcal{V}_2&=&<Y_1,Y_2,E>. \end{aligned}$$Then $\mathcal{R}=\mathcal{V}_1\oplus \mathcal{V}'_1\oplus \mathcal{V}_2$ is a decomposition of $\mathcal{R}$ into a direct sum of indecomposable left $\mathcal{R}$-modules, and $\mathcal{V}_1\simeq \mathcal{V}'_1\not\simeq \mathcal{V}_2$. Indeed, right multiplication by $F_{12}$ is an isomorphism from $\mathcal{V}_1$ to $\mathcal{V}'_1$, with inverse the right multiplication by $F_{21}$, while $\mathcal{V}_1$ and $\mathcal{V}_2$ are not isomorphic since they have different annihilators. Similarly, a decomposition of $\mathcal{R}$ into a direct sum of indecomposable right $\mathcal{R}$-modules is $\mathcal{R}=\mathcal{U}_1\oplus \mathcal{U}_2\oplus \mathcal{U}'_2$, with $\mathcal{U}_2\simeq \mathcal{U}'_2\not\simeq \mathcal{U}_1$, where $$\begin{aligned} \mathcal{U}_1&=&<E,X_1,X_2>\\ \mathcal{U}_2&=&<F_{11},F_{12},Y_1>\\ \mathcal{U}'_2&=&<F_{21},F_{22},Y_2>. \end{aligned}$$ **Proposition 10**. *With the notation above, ${\rm dim}_K\, (\mathcal{U}_i\otimes \mathcal{V}_j)=1$ for any $1\leq i,j\leq 2$.* *Proof.* We first look at $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1$. We see that\ $E\otimes_\mathcal{R}F_{i1}=E^2\otimes_\mathcal{R}F_{i1}=E\otimes _\mathcal{R}EF_{i1}=0$ $X_i\otimes_\mathcal{R}X_1=X_i\otimes_\mathcal{R}EX_1=X_iE\otimes _\mathcal{R}X_1=0$ $X_1\otimes_\mathcal{R}F_{21}=X_1\otimes _\mathcal{R}F_{21}F_{11}=X_1F_{21}\otimes_\mathcal{R}F_{11}=0$ $X_2\otimes_\mathcal{R}F_{11}=X_2\otimes _\mathcal{R}F_{11}F_{11}=X_2F_{11}\otimes_\mathcal{R}F_{11}=0$ $X_i\otimes_\mathcal{R}F_{i1}=EX_i\otimes_\mathcal{R}F_{i1}=E\otimes _\mathcal{R}X_iF_{i1}=E\otimes_\mathcal{R}X_1,$\ thus $E\otimes_\mathcal{R}X_1=X_1\otimes_\mathcal{R}F_{11}=X_2\otimes _\mathcal{R}F_{21}$, and this element spans $\mathcal{U}_1\otimes _\mathcal{R}\mathcal{V}_1$, in particular ${\rm dim}_K\, (\mathcal{U}_1\otimes \mathcal{V}_1)\leq 1$. Now in $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$ we have\ $E\otimes_\mathcal{R}Y_i=E^2\otimes_\mathcal{R}Y_i=E\otimes _\mathcal{R}EY_i=0$ $X_i\otimes_\mathcal{R}Y_j=EX_i\otimes_\mathcal{R}Y_j=E\otimes _\mathcal{R}X_iY_j=0$ $X_i\otimes_\mathcal{R}E=X_i\otimes_\mathcal{R}E^2=X_iE\otimes _\mathcal{R}E=0,$\ showing that $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$ is spanned by $E\otimes _\mathcal{R}E$, so ${\rm dim}_K\, (\mathcal{U}_1\otimes \mathcal{V}_2)\leq 1$.
Next, in $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_1$ one has\ $F_{1i}\otimes_\mathcal{R}X_1=F_{11}F_{1i}\otimes_\mathcal{R}X_1=F_{11}\otimes _\mathcal{R}F_{1i}X_1=0$ $Y_1\otimes_\mathcal{R}X_1=Y_1\otimes_\mathcal{R}X_1F_{11}=Y_1X_1\otimes _\mathcal{R}F_{11}=0$ $Y_1\otimes_\mathcal{R}F_{i1}=Y_1E\otimes_\mathcal{R}F_{i1}=Y_1\otimes _\mathcal{R}EF_{i1}=0$ $F_{1i}\otimes_\mathcal{R}F_{j1}=F_{11}F_{1i}\otimes _\mathcal{R}F_{j1}=F_{11}\otimes _\mathcal{R}F_{1i}F_{j1}=\delta_{ij}F_{11}\otimes _\mathcal{R}F_{11},$\ so $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_1$ is spanned by $F_{11}\otimes_\mathcal{R}F_{11}=F_{12}\otimes_\mathcal{R}F_{21}$ and ${\rm dim}_K\, (\mathcal{U}_2\otimes \mathcal{V}_1)\leq 1$. Finally, in $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2$ one has\ $Y_1\otimes_\mathcal{R}Y_i=Y_1E\otimes_\mathcal{R}Y_i=Y_1\otimes _\mathcal{R}EY_i=0$ $F_{1i}\otimes_\mathcal{R}E=F_{1i}\otimes_\mathcal{R}E^2=F_{1i}E\otimes _\mathcal{R}E=0$ $F_{1i}\otimes_\mathcal{R}Y_j=F_{1i}\otimes_\mathcal{R}Y_jE=F_{1i}Y_j\otimes _\mathcal{R}E=\delta_{ij}Y_1\otimes_\mathcal{R}E$\ so $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2$ is spanned by $Y_1\otimes_\mathcal{R}E=F_{11}\otimes_\mathcal{R}Y_1=F_{12}\otimes _\mathcal{R}Y_2$ and ${\rm dim}_K\, (\mathcal{U}_2\otimes \mathcal{V}_2)\leq 1$. As $\mathcal{R}_{\mathcal{R}}\simeq \mathcal{U}_1\oplus \mathcal{U}_2^2$ and $_\mathcal{R}\mathcal{R}\simeq \mathcal{V}_1^2\oplus \mathcal{V}_2$, there are linear isomorphisms $$\mathcal{R}\simeq \mathcal{R}\otimes_\mathcal{R}\mathcal{R}\simeq (\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1)^2\oplus (\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_1)^4\oplus (\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2)\oplus (\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2)^2,$$ and equating the dimensions we see that we must have $${\rm dim}_K\, (\mathcal{U}_1\otimes \mathcal{V}_1)={\rm dim}_K\, (\mathcal{U}_1\otimes \mathcal{V}_2)={\rm dim}_K\, (\mathcal{U}_2\otimes \mathcal{V}_1)={\rm dim}_K\, (\mathcal{U}_2\otimes \mathcal{V}_2)=1.$$ ◻ **Remark 11**. *We retain from the proof of the previous Proposition that the non-zero tensor monomials formed with elements of $\bf B$ are: $E\otimes _\mathcal{R}X_1=X_1\otimes_\mathcal{R}F_{11}=X_2\otimes_\mathcal{R}F_{21}$ in $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1$, $F_{11}\otimes _\mathcal{R}F_{11}=F_{12}\otimes_\mathcal{R}F_{21}$ in $\mathcal{U}_2\otimes _\mathcal{R}\mathcal{V}_1$, $Y_1\otimes_\mathcal{R}E=F_{11}\otimes _\mathcal{R}Y_1=F_{12}\otimes_\mathcal{R}Y_2$ in $\mathcal{U}_2\otimes _\mathcal{R}\mathcal{V}_2$, and $E\otimes_\mathcal{R}E$ in $\mathcal{U}_1\otimes _\mathcal{R}\mathcal{V}_2$.* Let us look now at $\mathcal{R}^*=Hom_K(\mathcal{R},K)$, with the $\mathcal{R}$-bimodule structure induced by the one of $\mathcal{R}$; we denote by $\rightharpoonup$ and $\leftharpoonup$ the left and right actions of $\mathcal{R}$ on $\mathcal{R}^*$. Denote by ${\bf B}^*=\{ E^*, F_{ij}^*,X_i^*,Y_j^*| 1\leq i,j\leq 2\}$ the basis of $\mathcal{R}^*$ dual to $\bf B$.
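The action tables recorded next can be generated mechanically from the multiplication table of $\mathcal{R}$, using the defining formulas $(r\rightharpoonup v^*)(v)=v^*(vr)$ and $(v^*\leftharpoonup r)(v)=v^*(rv)$. A small sketch follows, reusing the hypothetical `mult` helper from the sketch in the Introduction and identifying each dual basis vector $b^*$ with the label of $b$; it is an illustration only, not part of the proofs.

```python
# Reuse `basis` and `mult` from the earlier sketch; a functional b* in B* is
# identified with the label of b, and a sum of dual basis vectors with a set of labels.
def left_action(r, bstar):
    """r \u21c0 b^*  =  sum of v^* over v with v r = b, since (r \u21c0 b^*)(v) = b^*(v r)."""
    return {v for v in basis if mult(v, r) == bstar}

def right_action(bstar, r):
    """b^* \u21bc r  =  sum of v^* over v with r v = b, since (b^* \u21bc r)(v) = b^*(r v)."""
    return {v for v in basis if mult(r, v) == bstar}

# Two sample entries of the tables below:
assert left_action('Y1', 'Y1') == {'F11'}    # Y_1 acts on Y_1^* from the left, giving F_{11}^*
assert right_action('X1', 'X2') == {'F21'}   # X_1^* acted on by X_2 from the right gives F_{21}^*
```

Iterating over all pairs generates the full tables, which can be compared entry by entry against the ones displayed below.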
On basis elements, the left action of $\mathcal{R}$ on $\mathcal{R}^*$ is $$\begin{array}{llll} E\rightharpoonup E^*=E^*,& E\rightharpoonup F_{ij}^*=0,& E\rightharpoonup X_i^*=0,& E\rightarrow Y_i^*=Y_i^*,\\ F_{ij}\rightharpoonup E^*=0,& F_{ij}\rightharpoonup F_{rp}^*=\delta_{jp}F_{ri}^*,& F_{ij}\rightharpoonup X_r^*=\delta_{jr}X_i^*,& F_{ij}\rightharpoonup Y_r^*=0,\\ X_i\rightharpoonup E^*=0,& X_i\rightharpoonup F_{rj}^*=0,& X_i\rightharpoonup X_j^*=\delta_{ij}E^*,& X_i\rightharpoonup Y_j^*=0,\\ Y_i\rightharpoonup E^*=0,& Y_i\rightharpoonup F_{rj}^*=0,& Y_i\rightharpoonup X_j^*=0,& Y_i\rightharpoonup Y_j^*=F_{ji}^*, \end{array}$$ for any $1\leq i,j,r,p\leq 2$, while the right action is $$\begin{array}{llll} E^*\leftharpoonup E=E^*,& F_{ij}^*\leftharpoonup E=0,& X_i^*\leftharpoonup E=X_i^*,& Y_i^*\leftharpoonup E=0,\\ E^*\leftharpoonup F_{ij}=0,& F_{rp}^*\leftharpoonup F_{ij}=\delta_{ri}F_{jp}^*,& X_r^*\leftharpoonup F_{ij}=0,& Y_r^*\leftharpoonup F_{ij}=\delta_{ri}Y_j^*,\\ E^*\leftharpoonup X_i=0,& F_{rj}^*\leftharpoonup X_i=0,& X_j^*\leftharpoonup X_i=F_{ij}^*,& Y_j^*\leftharpoonup X_i=0,\\ E^*\leftharpoonup Y_i=0,& F_{rj}^*\leftharpoonup Y_i=0,& X_j^*\leftharpoonup Y_i=0,& Y_j^*\leftharpoonup Y_i=\delta_{ji}E^*, \end{array}$$ for any $1\leq i,j,r,p\leq 2$. We will identify $\mathcal{U}_1^*$ with $<E^*,X_1^*,X_2^*>$ inside $\mathcal{R}^*$, and similarly for the duals of $\mathcal{U}_2, \mathcal{U}'_2, \mathcal{V}_1, \mathcal{V}'_1, \mathcal{V}_2$. **Lemma 12**. *$\mathcal{U}_1^*\simeq \mathcal{V}_1$ and $\mathcal{U}_2^*\simeq \mathcal{V}_2$ as left $\mathcal{R}$-modules. Consequently, $\mathcal{V}_1^*\simeq \mathcal{U}_1$ and $\mathcal{V}_2^*\simeq \mathcal{U}_2$ as right $\mathcal{R}$-modules, $\mathcal{R}^*\simeq \mathcal{V}_1\oplus \mathcal{V}_2^2$ as left $\mathcal{R}$-modules and $\mathcal{R}^*\simeq \mathcal{U}_1^2\oplus \mathcal{U}_2$ as right $\mathcal{R}$-modules.* *Proof.* It follows from the action table above that the linear map taking $X_1$ to $E^*$, $F_{11}$ to $X_1^*$ and $F_{21}$ to $X_2^*$ is an isomorphism of left $\mathcal{R}$-modules from $\mathcal{V}_1$ to $\mathcal{U}_1^*$. Also, the mapping $Y_1\mapsto F_{11}^*$, $Y_2\mapsto F_{12}^*$, $E\mapsto Y_1^*$ defines an isomorphism $\mathcal{V}_2\simeq \mathcal{U}_2^*$. ◻ **Proposition 13**. *The space $\mathcal{R}^*\otimes_\mathcal{R}\mathcal{R}^*$ has dimension 9, with a basis consisting of the elements $$\begin{aligned} \mathcal{E}=Y_1^*\otimes _\mathcal{R}X_1^*, & \mathcal{F}_{ij}=X_i^*\otimes_\mathcal{R}Y_j^*,\\ \mathcal{X}_i=E^*\otimes_\mathcal{R}Y_i^*, &\mathcal{Y}_i=F_{1i}^*\otimes _\mathcal{R}X_1^*, \end{aligned}$$where $1\leq i,j\leq 2$. 
The only non-zero tensor monomials $u\otimes_\mathcal{R} v$ with $u,v \in {\bf B}^*$ are $$\begin{aligned} Y_i^*\otimes_\mathcal{R}X_i^*=\mathcal{E},& X_i^*\otimes _\mathcal{R}Y_j^*=\mathcal{F}_{ij},& E^*\otimes _\mathcal{R}Y_i^*=\mathcal{X}_i,\\ Y_i^*\otimes_\mathcal{R}F^*_{ji}=\mathcal{X}_j,& X_i^*\otimes _\mathcal{R}E^*=\mathcal{Y}_i,& F_{ij}^*\otimes _\mathcal{R}X_i^*=\mathcal{Y}_j,\end{aligned}$$where $1\leq i,j\leq 2$.* *Moreover, the linear isomorphism $\varphi:\mathcal{R}^*\otimes _\mathcal{R}\mathcal{R}^*\rightarrow\mathcal{R}$ given by $$\varphi (\mathcal{E})=E, \, \varphi (\mathcal{F}_{ij})=F_{ij},\, \varphi(\mathcal{X}_i)=X_i,\, \varphi(\mathcal{Y}_i)=Y_i$$ for any $1\leq i,j\leq 2$, is an isomorphism of $\mathcal{R}$-bimodules.* *Proof.* Using Lemma [Lemma 12](#lemaduale){reference-type="ref" reference="lemaduale"}, we see that $$\mathcal{R}^*\otimes _\mathcal{R}\mathcal{R}^*\simeq (\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1)^2\oplus (\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2)^4\oplus (\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_1)\oplus (\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2)^2$$ as $K$-spaces, and using Proposition [Proposition 10](#propdimensiuni){reference-type="ref" reference="propdimensiuni"} we obtain that $\mathcal{R}^*\otimes_\mathcal{R}\mathcal{R}^*$ has dimension 9. For finding the non-zero tensor monomials $u\otimes_\mathcal{R} v$ with $u,v \in {\bf B}^*$, we use Remark [Remark 11](#remarcadimensiuni){reference-type="ref" reference="remarcadimensiuni"}, and find the only such monomials are: $\bullet$ In $\mathcal{V}_1^*\otimes_\mathcal{R}\mathcal{U}_1^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1$): $X_1^*\otimes_\mathcal{R}E^*=F_{11}^*\otimes_\mathcal{R}X_1^*=F_{21}^*\otimes_\mathcal{R}X_2^*\,\, (=\mathcal{Y}_1)$ $\bullet$ In $(\mathcal{V}_1')^*\otimes_\mathcal{R}\mathcal{U}_1^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_1$): $X_2^*\otimes_\mathcal{R}E^*=F_{12}^*\otimes_\mathcal{R}X_1^*=F_{22}^*\otimes_\mathcal{R}X_2^*\,\, (=\mathcal{Y}_2)$ $\bullet$ In $\mathcal{V}_2^*\otimes_\mathcal{R}\mathcal{U}_1^*$ (isomorphic to $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_1$): $Y_1^*\otimes_\mathcal{R}X_1^*=Y_2^*\otimes_\mathcal{R}X_2^* \,\, (=\mathcal{E})$ $\bullet$ In $\mathcal{V}_1^*\otimes_\mathcal{R}\mathcal{U}_2^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$): $X_1^*\otimes_\mathcal{R}Y_1^*\,\, (=\mathcal{F}_{11})$ $\bullet$ In $(\mathcal{V}_1')^*\otimes_\mathcal{R}\mathcal{U}_2^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$): $X_2^*\otimes_\mathcal{R}Y_1^*\,\, (=\mathcal{F}_{21})$ $\bullet$ In $\mathcal{V}_2^*\otimes_\mathcal{R}\mathcal{U}_2^*$ (isomorphic to $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2$): $E^*\otimes_\mathcal{R} Y_1^*=Y_1^*\otimes_\mathcal{R}F_{11}^*=Y_2^*\otimes_\mathcal{R}F_{12}^* \,\, (=\mathcal{X}_1)$ $\bullet$ In $\mathcal{V}_2^*\otimes_\mathcal{R}(\mathcal{U}_2')^*$ (isomorphic to $\mathcal{U}_2\otimes_\mathcal{R}\mathcal{V}_2$): $E^*\otimes_\mathcal{R} Y_2^*=Y_1^*\otimes_\mathcal{R}F_{21}^*=Y_2^*\otimes_\mathcal{R}F_{22}^* \,\, (=\mathcal{X}_2)$ $\bullet$ In $\mathcal{V}_1^*\otimes_\mathcal{R}(\mathcal{U}_2')^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$): $X_1^*\otimes_\mathcal{R}Y_2^*\,\, (=\mathcal{F}_{12})$ $\bullet$ In $(\mathcal{V}_1')^*\otimes_\mathcal{R}(\mathcal{U}_2')^*$ (isomorphic to $\mathcal{U}_1\otimes_\mathcal{R}\mathcal{V}_2$): $X_2^*\otimes_\mathcal{R}Y_2^*\,\, (=\mathcal{F}_{22})$\ Collecting all these, the 
second part of the statement is proved. Now using the (left and right) action table of $\mathcal{R}$ on $\mathcal{R}^*$, we find that the left and right actions of $\mathcal{R}$ on $\mathcal{R}^*\otimes_\mathcal{R}\mathcal{R}^*$, which we also denote by $\rightharpoonup$ and $\leftharpoonup$, are given by $$\begin{array}{llll} E\rightharpoonup\mathcal{E}=\mathcal{E},& E\rightharpoonup\mathcal{F}_{ij}=0,& E\rightharpoonup\mathcal{X}_i=\mathcal{X}_i,& E\rightarrow \mathcal{Y}_i=0,\\ F_{ij}\rightharpoonup\mathcal{E}=0,& F_{ij}\rightharpoonup \mathcal{F}_{rp}=\delta_{jr}\mathcal{F}_{ip},& F_{ij}\rightharpoonup \mathcal{X}_r=0,& F_{ij}\rightharpoonup\mathcal{Y}_r=\delta_{jr}\mathcal{Y}_i,\\ X_i\rightharpoonup\mathcal{E}=0,& X_i\rightharpoonup \mathcal{F}_{rj}=\delta_{ir}\mathcal{X}_j,& X_i\rightharpoonup \mathcal{X}_j=0,& X_i\rightharpoonup\mathcal{Y}_j=0,\\ Y_i\rightharpoonup\mathcal{E}=\mathcal{Y}_i,& Y_i\rightharpoonup\mathcal{F}_{rj}=0,& Y_i\rightharpoonup\mathcal{X}_j=0,& Y_i\rightharpoonup\mathcal{Y}_j=0, \end{array}$$ for any $1\leq i,j,r,p\leq 2$, and $$\begin{array}{llll} \mathcal{E}\leftharpoonup E=\mathcal{E},& \mathcal{F}_{ij}\leftharpoonup E=0,& \mathcal{X}_i\leftharpoonup E=0,& \mathcal{Y}_i\leftharpoonup E=\mathcal{Y}_i,\\ \mathcal{E}\leftharpoonup F_{ij}=0,& \mathcal{F}_{rp}\leftharpoonup F_{ij}=\delta_{pi}\mathcal{F}_{rj},& \mathcal{X}_r\leftharpoonup F_{ij}=\delta_{ri}\mathcal{X}_j,& \mathcal{Y}_r\leftharpoonup F_{ij}=0,\\ \mathcal{E}\leftharpoonup X_i=\mathcal{X}_i,& \mathcal{F}_{rj}\leftharpoonup X_i=0,& \mathcal{X}_j\leftharpoonup X_i=0,& \mathcal{Y}_j\leftharpoonup X_i=0,\\ \mathcal{E}\leftharpoonup Y_i=0,& \mathcal{F}_{rj}\leftharpoonup Y_i=\delta_{ji}\mathcal{Y}_r,& \mathcal{X}_j\leftharpoonup Y_i=0,& \mathcal{Y}_j\leftharpoonup Y_i=0, \end{array}$$ for any $1\leq i,j,r,p\leq 2$. The first set of relations shows that $\varphi$ is a morphism of left $\mathcal{R}$-modules, while the second one shows that $\varphi$ is also a morphism of right $\mathcal{R}$-modules. ◻ **Lemma 14**. *Let $P$ be an invertible $\mathcal{R}$-bimodule. Then $P$ is isomorphic either to $\mathcal{V}_1\oplus \mathcal{V}_2^2$ or to $\mathcal{V}_1^2\oplus \mathcal{V}_2$ as a left $\mathcal{R}$-module.* *Proof.* Let $Q$ be a bimodule such that $[Q]=[P]^{-1}$ in ${\rm Pic}(R)$. Since $P$ is a finitely generated projective left module over the finite dimensional algebra $\mathcal{R}$, it is isomorphic to a finite direct sum of principal indecomposable left $\mathcal{R}$-modules, say $P\simeq \mathcal{V}_1^a\oplus \mathcal{V}_2^b$ for some non-negative integers $a,b$. But $P$ is a generator as a left $\mathcal{R}$-module, so $\mathcal{R}$ is a direct summand in the left $\mathcal{R}$-module $P^m$ for some positive integer $m$. Thus by the Krull-Schmidt Theorem, both $a$ and $b$ are positive. Similarly, $Q\simeq \mathcal{U}_1^c\oplus \mathcal{U}_2^d$ as right $\mathcal{R}$-modules for some integers $c,d>0$. Now there are linear isomorphisms $$\mathcal{R}\simeq Q\otimes_{\mathcal{R}}P\simeq (\mathcal{U}_1\otimes_{\mathcal{R}}\mathcal{V}_1)^{ca}\oplus (\mathcal{U}_1\otimes_{\mathcal{R}}\mathcal{V}_2)^{cb}\oplus (\mathcal{U}_2\otimes_{\mathcal{R}}\mathcal{V}_1)^{da}\oplus (\mathcal{U}_2\otimes_{\mathcal{R}}\mathcal{V}_2)^{db}$$ Counting dimensions and using Proposition [Proposition 10](#propdimensiuni){reference-type="ref" reference="propdimensiuni"}, we see that $(c+d)(a+b)=9$. As $a,b,c,d>0$, we must have $c+d=a+b=3$, so then either $a=1$ and $b=2$, or $a=2$ and $b=1$. ◻ **Theorem 15**. 
*Any invertible $\mathcal{R}$-bimodule is isomorphic either to $_1\mathcal{R}_{\alpha}$ or to $_1{\mathcal{R}^*}_{\alpha}$ for some $\alpha\in {\rm Aut}(\mathcal{R})$. As a consequence, ${\rm Pic}(\mathcal{R})\simeq {\rm Out}(\mathcal{R})\times C_2$, where $C_2$ is the cyclic group of order 2.* *Proof.* We know that a bimodule of type ${_1\mathcal{R}_{\alpha}}$, with $\alpha\in {\rm Aut}(\mathcal{R})$, is invertible; the inverse of $[{_1\mathcal{R}_{\alpha}}]$ in ${\rm Pic}(\mathcal{R})$ is $[{_1\mathcal{R}_{\alpha^{-1}}}]$. Moreover, $[{_1\mathcal{R}_{\alpha}}]\cdot [{_1\mathcal{R}_{\beta}}]=[{_1\mathcal{R}_{\alpha\beta}}]$, and $[{_1\mathcal{R}_{\alpha}}]$ depends only on the class of $\alpha$ modulo ${\rm Inn}(\mathcal{R})$. By Corollary [Corollary 3](#dualinvertible){reference-type="ref" reference="dualinvertible"}, $\mathcal{R}^*$ is an invertible $\mathcal{R}$-bimodule, and then so is $\mathcal{R}^*\otimes_\mathcal{R} {_1\mathcal{R}_{\alpha}} \simeq {_1\mathcal{R}^*_{\alpha}}$. Since $\mathcal{R}^*\otimes_\mathcal{R}{_1\mathcal{R}_\alpha}\simeq {_1\mathcal{R}_\alpha}\otimes_\mathcal{R}\mathcal{R}^*$ by Proposition [Proposition 5](#dualulcomuta){reference-type="ref" reference="dualulcomuta"}, and $\mathcal{R}^*\otimes _\mathcal{R}\mathcal{R}^*\simeq \mathcal{R}$ by Proposition [Proposition 13](#croset){reference-type="ref" reference="croset"}, we get that the subset $\mathcal{P}$ of ${\rm Pic}(\mathcal{R})$ consisting of all ${_1\mathcal{R}_{\alpha}}$ and ${_1\mathcal{R}^*_{\alpha}}$, with $\alpha\in {\rm Aut}(\mathcal{R})$, is a subgroup isomorphic to ${\rm Out}(\mathcal{R})\times C_2$; an isomorphism between $\mathcal{P}$ and ${\rm Out}(\mathcal{R})\times C_2$ takes ${_1\mathcal{R}_{\alpha}}$ to $(\hat{\alpha},e)$, and ${_1\mathcal{R}^*_{\alpha}}$ to $(\hat{\alpha},c)$, where $C_2=<c>$, $e$ is the neutral element of $C_2$, and $\hat{\alpha}$ is the class of $\alpha$ in ${\rm Out}(\mathcal{R})$. Let $P$ be an invertible $\mathcal{R}$-bimodule. By Lemma [Lemma 14](#invertibleleftmodule){reference-type="ref" reference="invertibleleftmodule"} we see that as a left $\mathcal{R}$-module, $P$ is isomorphic either to $\mathcal{R}$ or to $\mathcal{R}^*$. Now Proposition [Proposition 1](#isoinvertible){reference-type="ref" reference="isoinvertible"} shows that either $P\simeq {_1\mathcal{R}_{\alpha}}$ or $P\simeq {_1\mathcal{R}^*_{\alpha}}$ as $\mathcal{R}$-bimodules for some $\alpha\in {\rm Aut}(\mathcal{R})$. We conclude that ${\rm Pic}(\mathcal{R})=\mathcal{P}$, which ends the proof. ◻ # Automorphisms of $\mathcal{R}$ {#sectionautomorfisme} The aim of this section is to compute the automorphism group and the group of outer automorphisms of $\mathcal{R}$. We will use a presentation of $\mathcal{R}$ given in [@dnn3 Remark 4.1], where it is explained that $\mathcal{R}$ is isomorphic to the Morita ring $\left[ \begin{array}{cc} K & X\\ Y & M_2(K) \end{array} \right]$ associated with the Morita context connecting the rings $K$ and $M_2(K)$, by the bimodules $X=K^2$ and $Y=M_{2,1}(K)$, with all actions given by the usual matrix multiplication, such that both Morita maps are zero. The multiplication of this Morita ring is given by $$\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right]\left[ \begin{array}{cc} \alpha' & x'\\ y' & f' \end{array} \right]=\left[ \begin{array}{cc} \alpha\alpha' &\alpha x'+xf'\\ \alpha' y+fy' & ff' \end{array} \right]$$ for any $\alpha,\alpha'\in K$, $f,f'\in M_2(K)$, $x,x'\in X$ and $y,y'\in Y$. 
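As a quick sanity check of the multiplication rule just displayed (illustration only; floating-point scalars stand in for $K$ and the helper names are ours), one can verify numerically that the product is associative, that the element with $\alpha=1$ and $f=I_2$ is the unit, and that the rule differs from ordinary $3\times 3$ matrix multiplication because the two corner products are suppressed.

```python
import numpy as np
rng = np.random.default_rng(0)

# An element of the Morita ring [[K, X], [Y, M_2(K)]] is stored as (a, x, y, f):
# a scalar a, a row x in K^2, a column y in M_{2,1}(K), and f in M_2(K).
def product(A, B):
    a, x, y, f = A
    b, xp, yp, fp = B
    # both Morita maps are zero, so there are no x.yp or y.xp contributions
    return (a * b, a * xp + x @ fp, b * y + f @ yp, f @ fp)

def rand_elem():
    return (rng.standard_normal(), rng.standard_normal(2),
            rng.standard_normal(2), rng.standard_normal((2, 2)))

def close(U, V):
    return all(np.allclose(u, v) for u, v in zip(U, V))

one = (1.0, np.zeros(2), np.zeros(2), np.eye(2))
A, B, C = rand_elem(), rand_elem(), rand_elem()

print(close(product(A, product(B, C)), product(product(A, B), C)))  # associativity: True
print(close(product(one, A), A) and close(product(A, one), A))      # unit element: True

# unlike in M_3(K), an "X-corner" element times a "Y-corner" element is zero:
X1 = (0.0, np.array([1.0, 0.0]), np.zeros(2), np.zeros((2, 2)))
Y1 = (0.0, np.zeros(2), np.array([1.0, 0.0]), np.zeros((2, 2)))
print(close(product(X1, Y1), (0.0, np.zeros(2), np.zeros(2), np.zeros((2, 2)))))  # True
```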
This Morita ring and $M_3(K)$ coincide as $K$-vector spaces, but they have different multiplications. An algebra isomorphism between $\mathcal{R}$ and $\left[ \begin{array}{cc} K & X\\ Y & M_2(K) \end{array} \right]$ takes $E, (X_i)_{1\leq i\leq 2}, (Y_i)_{1\leq i\leq 2}, (F_{ij})_{1\leq i,j\leq 2}$ to the elements in the Morita ring corresponding to the \"matrix units\" in $K, X, Y, M_2(K)$. Throughout this section, we will identify $\mathcal{R}$ with $\left[ \begin{array}{cc} K & X\\ Y & M_2(K) \end{array} \right]$. The multiplicative group $K^*\times GL_2(K)$ acts on the additive group $K^2\times M_{2,1}(K)$ by $$(\lambda,P)\cdot (x_1,y_1)=(\lambda x_1P^{-1},Py_1)$$ for any $\lambda \in K^*, P\in GL_2(K),x_1\in K^2, y_1\in M_{2,1}(K)$, so we can form a semidirect product $(K^2\times M_{2,1}(K))\rtimes (K^*\times GL_2(K))$. For any $x_1\in K^2,y_1\in M_{2,1}(K), \lambda \in K^*,P\in GL_2(K)$ define $\varphi_{x_1,y_1,\lambda,P}:\mathcal{R}\rightarrow\mathcal{R}$ by $$\varphi_{x_1,y_1,\lambda,P}(\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right])= \left[ \begin{array}{cc} \alpha& \alpha x_1+\lambda xP^{-1}-x_1PfP^{-1}\\ \alpha y_1+Py-PfP^{-1}y_1 & PfP^{-1} \end{array} \right].$$ **Theorem 16**. *$\varphi_{x_1,y_1,\lambda,P}$ is an algebra automorphism of $\mathcal{R}$ for any $x_1\in K^2,y_1\in M_{2,1}(K), \lambda \in K^*,P\in GL_2(K)$, and $\Phi:(K^2\times M_{2,1}(K))\rtimes (K^*\times GL_2(K))\rightarrow{\rm Aut}(\mathcal{R})$, $\Phi(x_1,y_1,\lambda,P)=\varphi_{x_1,y_1,\lambda,P}$ is an isomorphism of groups. An automorphism $\varphi_{x_1,y_1,\lambda,P}$ of $\mathcal{R}$ is inner if and only if $\lambda=1$. As a consequence, ${\rm Out}(\mathcal{R})\simeq K^*$.* *Proof.* Let $\varphi\in {\rm Aut}(\mathcal{R})$. Since the Jacobson radical of $\mathcal{R}$ is $J(\mathcal{R})=\left[ \begin{array}{cc} 0 & X\\ Y & 0 \end{array} \right]$, $\varphi$ induces an automorphism $\tilde{\varphi}$ of the algebra $\mathcal{R}/J(\mathcal{R})\simeq K\times M_2(K)$, thus $\tilde{\varphi}$ acts as identity on the first position, and as an inner automorphism associated to some $P\in GL_2(K)$ on the second one. Lifting to $\mathcal{R}$, we see that $\varphi(\left[ \begin{array}{cc} 1 & 0\\ 0 & 0 \end{array} \right])= \left[ \begin{array}{cc} 1 & x_1\\ y_1 & 0\end{array} \right]$ for some $x_1\in K^2$ and $y_1\in M_{2,1}(K)$, and $\varphi(\left[ \begin{array}{cc} 0 & 0\\ 0 & f \end{array} \right])= \left[ \begin{array}{cc} 0 & \mu (f)\\ \omega (f) & PfP^{-1}\end{array} \right]$ for some linear maps $\mu:M_2(K)\rightarrow X$ and $\omega:M_2(K)\rightarrow Y$. On the other hand, since $\varphi (J(\mathcal{R})\subset J(\mathcal{R})$, $\varphi(\left[ \begin{array}{cc} 0 & x\\ 0 & 0 \end{array} \right])\in \left[ \begin{array}{cc} 0 & X\\ Y & 0 \end{array} \right]$, so then $$\varphi(\left[ \begin{array}{cc} 0 & x\\ 0 & 0 \end{array} \right])=\varphi(\left[ \begin{array}{cc} 1 & 0\\ 0 & 0 \end{array} \right]\left[ \begin{array}{cc} 0 & x\\ 0 & 0 \end{array} \right])\in \left[ \begin{array}{cc} 1 & x_1\\ y_1 & 0\end{array} \right]\left[ \begin{array}{cc} 0 & X\\ Y & 0 \end{array} \right]\subset \left[ \begin{array}{cc} 0 & X\\ 0 & 0 \end{array} \right].$$ This shows that $\varphi(\left[ \begin{array}{cc} 0 & x\\ 0 & 0 \end{array} \right])=\left[ \begin{array}{cc} 0 & \theta(x)\\ 0 & 0 \end{array} \right]$ for a linear map $\theta:X\rightarrow X$; thus $\theta (x)=xA$ for any $x\in X$, where $A\in M_2(K)$. 
Similarly we see that $\varphi(\left[ \begin{array}{cc} 0 & 0\\ y & 0 \end{array} \right])=\left[ \begin{array}{cc} 0 & 0\\ By & 0 \end{array} \right]$ for any $y\in Y$, where $B\in M_2(K)$. Thus we obtain that $\varphi$ must be of the form $$\label{eqformulaphi} \varphi(\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right])= \left[ \begin{array}{cc} \alpha& \alpha x_1+xA+\mu (f)\\ \alpha y_1+By+\omega(f) & PfP^{-1} \end{array} \right].$$ By equating the corresponding entries, we see that the matrices $\varphi(\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right]\left[ \begin{array}{cc} \alpha' & x'\\ y' & f' \end{array} \right])$ and $\varphi(\left[ \begin{array}{cc} \alpha\alpha' &\alpha x'+xf'\\ y\alpha'+fy' & ff' \end{array} \right])$ are equal if and only if the equations $$\label{eq1} \alpha\mu(f')+\alpha x_1Pf'P^{-1}+xAPf'P^{-1}+\mu(f)Pf'P^{-1}=xf'A+\mu(ff')$$ and $$\label{eq2} \alpha' \omega(f)+\alpha'PfP^{-1}y_1+PfP^{-1}By'+PfP^{-1}\omega (f')=Bfy' +\omega (ff')$$ are satisfied for any $\alpha,\alpha'\in K$, $x,x'\in K^2$, $y,y'\in M_{2,1}(K)$, $f,f'\in M_2(K)$. If in equation ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) we take $f=0$, we get $\alpha(\mu(f')+x_1Pf'P^{-1})+xAPf'P^{-1}-xf'A=0$. As this holds for any $\alpha\in K$, we must have $$\label{eq3} \mu(f')+x_1Pf'P^{-1}=0$$ and $x(APf'P^{-1}-f'A)=0$. As $x$ runs through $K^2$, we get $APf'P^{-1}-f'A=0$, showing that $APf'=f'AP$ for any $f'$, so $AP\in KI_2$, or equivalently, $$\label{eq4} A\in KP^{-1}.$$ On the other hand, it is clear that if equations ([\[eq3\]](#eq3){reference-type="ref" reference="eq3"}) and ([\[eq4\]](#eq4){reference-type="ref" reference="eq4"}) holds, then ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) is satisfied. In a similar way, we see that ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) is true if and only $$\label{eq5} \omega(f)=-PfP^{-1}y_1$$ and $$\label{eq6} B\in KP.$$ These show that a map $\varphi$ of the form given in ([\[eqformulaphi\]](#eqformulaphi){reference-type="ref" reference="eqformulaphi"}) is a ring morphism if and only if $$\mu (f)=-x_1PfP^{-1},\; \omega (f)=-PfP^{-1}y_1,\; A\in KP^{-1},\; B\in KP.$$ Thus take $A=\lambda P^{-1}$ and $B=\rho P$, with $\lambda,\rho\in K$; in fact, in order for $\varphi$ to be injective one needs $\lambda,\rho\in K^*$. For any $x_1\in K^2, y_1\in M_{2,1}(K),\lambda,\rho\in K^*,P\in GL_2(K)$, denote by $\psi_{x_1,y_1, \lambda,\rho,P}:\mathcal{R}\rightarrow\mathcal{R}$ the map defined by $$\psi_{x_1,y_1, \lambda,\rho,P}(\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right])= \left[ \begin{array}{cc} \alpha& \alpha x_1+\lambda xP^{-1}-x_1PfP^{-1}\\ \alpha y_1+\rho Py-PfP^{-1}y_1 & PfP^{-1} \end{array} \right].$$ The considerations above show that $\psi_{x_1,y_1, \lambda,\rho,P}$ is an algebra endomorphism of $\mathcal{R}$. As it is clearly injective, it is in fact an automorphism of $\mathcal{R}$. We showed that any automorphism of $\mathcal{R}$ is one such $\psi_{x_1,y_1, \lambda,\rho,P}$. A straightforward computation shows that $$\label{eqsemidirect} \psi_{x'_1,y'_1, \lambda',\rho',P'}\psi_{x_1,y_1, \lambda,\rho,P}=\psi_{x'_1+\lambda'x_1(P')^{-1},y'_1+\rho'P'y_1, \lambda'\lambda,\rho'\rho,P'P}$$ Consider the additive group $A=K^2\times M_{2,1}(K)$ and the multiplicative group $B=K^*\times K^*\times GL_2(K)$. 
Then $B$ acts on $A$ by $(\lambda,\rho,P)\cdot (x_1,y_1)=(\lambda x_1P^{-1},\rho Py_1)$, and ([\[eqsemidirect\]](#eqsemidirect){reference-type="ref" reference="eqsemidirect"}) shows that $$\Psi:A\rtimes B\rightarrow{\rm Aut}(\mathcal{R}),\,\,\, \Psi(x_1,y_1, \lambda,\rho,P)=\psi_{x_1,y_1, \lambda,\rho,P}$$ is a group morphism. We have also seen that $\Psi$ is surjective. Now $\psi_{x_1,y_1, \lambda,\rho,P}$ is the identity morphism if and only if $$PfP^{-1}=f,\; \alpha x_1+\lambda xP^{-1}-x_1PfP^{-1}=x,\; \alpha y_1+\rho Py-PfP^{-1}y_1=y$$ for any $\alpha\in K, x\in K^2,y\in M_{2,1}(K),f\in M_2(K)$. If we take $\alpha=1, x=0, f=0$ in the second relation, we get $x_1=0$. Hence $\lambda xP^{-1}=x$ for any $x$, so $P=\lambda I_2$. Similarly, the third relation shows that $y_1=0$ and $P=\rho^{-1}I_2$. Therefore ${\rm Ker}(\Psi)=0\times B_0$, where $B_0=\{ (\lambda,\lambda^{-1},\lambda I_2)|\lambda \in K^*\}$. As $B_0$ acts trivially on $A$, the action of $B$ induces an action of the factor group $\frac{B}{B_0}$ on $A$, and then ${\rm Aut}(\mathcal{R})\simeq \frac{A\rtimes B}{0\rtimes B_0}\simeq A\rtimes \frac{B}{B_0}$. Denoting by $\overline{b}$ the class of some $b\in B$ modulo $B_0$, we see that $$\overline{(\lambda,\rho,P)}=\overline{(\rho^{-1},\rho,\rho I_2)}\overline{(\lambda \rho,1,\rho^{-1}P)}=\overline{(\lambda \rho^{-1},1,\rho^{-1}P)},$$ so there is a group isomorphism $\Gamma:K^*\times GL_2(K)\rightarrow\frac{B}{B_0}$ taking $(\lambda,P)$ to $\overline{(\lambda,1,P)}$. $\Gamma$ induces an action of $K^*\times GL_2(K)$ on $A$, given by $$(\lambda,P)\cdot (x_1,y_1)=(\lambda,1,P)\cdot (x_1,y_1)=(\lambda x_1P^{-1},Py_1).$$ We obtain a composition of group isomorphisms $$\Phi:A\rtimes (K^*\times GL_2(K))\longrightarrow A\rtimes \frac{B}{B_0}\longrightarrow {\rm Aut}(\mathcal{R})$$ given by $\Phi(x_1,y_1,\lambda,P)=\psi_{x_1,y_1, \lambda,1,P}$. Now we denote $\psi_{x_1,y_1, \lambda,1,P}=\varphi_{x_1,y_1, \lambda,P}$ and the first part of the statement is proved. A direct computation shows that an element $\left[ \begin{array}{cc} \beta & z\\ g & m \end{array} \right]$ of $\mathcal{R}$ is invertible if and only if $\beta\neq 0$ and $m\in GL_2(K)$, and in this case its inverse is $\left[ \begin{array}{cc} \beta^{-1} & -\beta^{-1}zm^{-1}\\ -\beta^{-1}m^{-1}g & m^{-1} \end{array} \right]$, and the associated inner automorphism of $\mathcal{R}$ takes $\left[ \begin{array}{cc} \alpha& x\\ y & f \end{array} \right]$ to $$\left[ \begin{array}{cc} \alpha& \alpha\beta^{-1}z+\beta^{-1}xm-\beta^{-1}zm^{-1}fm\\ -\alpha m^{-1}g+\beta m^{-1}y+m^{-1}fg & m^{-1}fm \end{array} \right],$$ so it is just $\psi_{\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{-1}}$. Hence $\varphi_{x_1,y_1, \lambda,P}=\psi_{x_1,y_1, \lambda,1,P}$ is inner if and only if $\psi_{x_1,y_1, \lambda,1,P}=\psi_{\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{-1}}$ for some $\beta\in K^*, z\in K^2,g\in M_{2,1}(K),m\in GL_2(K)$, and taking into account the description of the kernel of $\Psi$, this equality is equivalent to $(x_1,y_1,\lambda,1,P)=(\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{-1})(0,0,\rho,\rho^{-1},\rho I_2)=(\beta^{-1}z,-m^{-1}g,\beta^{-1}\rho,\beta\rho^{-1},\rho m^{-1})$ for some $\rho\in K^*$. Equating the corresponding positions, we get $1=\beta\rho^{-1}$, so $\rho=\beta$, and then $\lambda=\beta^{-1}\rho=1$, $z=\beta x_1=\rho x_1$, $m=\rho P^{-1}$ and $g=-my_1=-\rho P^{-1}y_1$. 
We conclude that $\varphi_{x_1,y_1, \lambda,P}$ is inner if and only if $\lambda=1$, and in this case, by making the choice $\rho =1$, $\varphi_{x_1,y_1, 1,P}$ is the inner automorphism associated with the invertible element $\left[ \begin{array}{cc} 1 & x_1\\ -P^{-1}y_1 & P^{-1} \end{array} \right]$. We got that ${\rm Inn}(\mathcal{R})=\Phi (A\rtimes (1\times GL_2(K))$, so then $${\rm Out}(\mathcal{R})=\frac{{\rm Aut}(\mathcal{R})}{{\rm Inn}(\mathcal{R})}\simeq \frac{A\rtimes (K^*\times GL_2(K))}{A\rtimes (1\times GL_2(K))}\simeq K^*.$$ Finally, we note that the outer automorphism corresponding to $\lambda \in K^*$ through the isomorphism ${\rm Out}(\mathcal{R})\simeq K^*$ is (the class of) $\varphi_{0,0, \lambda,I_2}$. ◻ # Semitrivial extensions and a question on Frobenius strongly graded algebras {#semitrivialextensions} We first present a general construction. Let $R$ be a finite dimensional $K$-algebra, and let $R^*$ be its linear dual with the usual $R$-bimodule structure, and actions denoted by $\rightharpoonup$ and $\leftharpoonup$. Let $\psi:R^*\otimes_RR^*\rightarrow R$ be a morphism of $R$-bimodules, and denote $\psi(r^*\otimes_Rs^*)$ by $[r^*,s^*]$ for any $r^*,s^*\in R^*$. We say that $\psi$ is associative if $[r^*,s^*]\rightharpoonup t^*=r^*\leftharpoonup[s^*,t^*]$ for any $r^*,s^*,t^*\in R^*$; in other words, we have a Morita context $(R,R,R^*,R^*,\psi,\psi)$ connecting the rings $R$ and $R$, with both bimodules being $R^*$, and both Morita maps equal to $\psi$. It follows from Morita theory that if $\psi$ is associative and surjective, then it is an isomorphism of $R$-bimodules. If $\psi:R^*\otimes_RR^*\rightarrow R$ is an associative morphism of $R$-bimodules, we consider the semitrivial extension $R\rtimes_\psi R^*$, which is the cartesian product $R\times R^*$ with the usual addition, and multiplication defined by $$(r,r^*)(s,s^*)=(rs+[r^*,s^*],(r\rightharpoonup s^*)+(r^*\leftharpoonup s))$$ for any $r,s\in R, r^*,s^*\in R^*$. Then $R\rtimes_\psi R^*$ is an algebra with identity element $(1,0)$; this construction was introduced in [@p]. Moreover, it is a $C_2$-graded algebra, where $C_2=<c>$ is a cyclic group of order 2, with homogeneous components $(R\rtimes_\psi R^*)_e=R\times 0$ and $(R\rtimes_\psi R^*)_c=0\times R^*$; here $e$ denotes the neutral element of $C_2$. It is a strongly graded algebra if and only if $\psi$ is surjective, thus an isomorphism. **Proposition 17**. *Let $R$ be a finite dimensional algebra and let $\psi:R^*\otimes_RR^*\rightarrow R$ be an associative morphism of $R$-bimodules. Then $R\rtimes_\psi R^*$ is a symmetric algebra.* *Proof.* We first note that $$\label{formula2} t^*([r^*,s^*])=r^*([s^*,t^*]) \mbox{ for any }r^*,s^*,t^*\in R^*.$$ Indeed, we just evaluate both sides of $[r^*,s^*]\rightharpoonup t^*=r^*\leftharpoonup [s^*,t^*]$ at 1. Denote $A=R\rtimes_\psi R^*$ and define $$\Phi:A\rightarrow A^*, \; \Phi(r,r^*)(s,s^*)=r^*(s)+s^*(r) \;\;\mbox{for any }r,s\in R, r^*,s^*\in R^*.$$ It is clear that $\Phi$ is injective. Indeed, if $\Phi(r,r^*)=0$, then $r^*(s)=\Phi(r,r^*)(s,0)=0$ for any $s\in R$, so $r^*=0$, and $s^*(r)=\Phi(r,r^*)(0,s^*)=0$ for any $s^*\in R^*$, so $r=0$. Thus $\Phi$ is a linear isomorphism. 
Moreover, if $(x,x^*), (r,r^*), (s,s^*)\in A$, then $$\begin{aligned} (\Phi ((x,x^*)(r,r^*)))(s,s^*)&=& (\Phi(xr+[x^*,r^*],(x\rightharpoonup r^*)+(x^*\leftharpoonup r)))(s,s^*)\\ &=&(x\rightharpoonup r^*)(s)+(x^*\leftharpoonup r)(s)+s^*(xr+[x^*,r^*]) \\ &=&r^*(sx)+x^*(rs)+s^*(xr)+s^*([x^*,r^*])\\ &=&r^*(sx)+x^*(rs)+s^*(xr)+r^*([s^*,x^*]) \;\;\;\; (\mbox{by } (\ref{formula2}))\\ &=&(s\rightharpoonup x^*+s^*\leftharpoonup x)(r)+r^*(sx+ [s^*,x^*])\\ &=&\Phi(r,r^*)(sx+ [s^*,x^*],s\rightharpoonup x^*+s^*\leftharpoonup x)\\ &=&\Phi(r,r^*)((s,s^*)(x,x^*))\\ &=&((x,x^*)\rightharpoonup\Phi(r,r^*))(s,s^*),\end{aligned}$$showing that $\Phi$ is a morphism of left $A$-modules, and $$\begin{aligned} (\Phi (x,x^*)\leftharpoonup(r,r^*))(s,s^*)&=&\Phi(x,x^*)((r,r^*)(s,s^*))\\ &=&\Phi (x,x^*)(rs+[r^*,s^*],(r\rightharpoonup s^*)+(r^*\leftharpoonup s))\\ &=&x^*(rs)+x^*([r^*,s^*])+s^*(xr)+r^*(sx)\\ &=&x^*(rs)+s^*([x^*,r^*])+s^*(xr)+r^*(sx) \;\;\;\; (\mbox{by } (\ref{formula2}))\\ &=&(\Phi ((x,x^*)(r,r^*)))(s,s^*)\;\;\;\; (\mbox{by the computations above}),\end{aligned}$$so $\Phi$ is also a morphism of right $A$-modules. We conclude that $\Phi$ is an isomorphism of $A$-bimodules. ◻ We first mention two particular cases of interest. The first one is for an arbitrary finite dimensional algebra $R$ and the zero morphism $\psi:R^*\otimes_RR^*\rightarrow R$. The associated semitrivial extension, called in fact the trivial extension, is $R\times R^*$ with the multiplication given by $(r,r^*)(s,s^*)=(rs,(r\rightharpoonup s^*)+(r^*\leftharpoonup s))$ for any $r,s\in R, r^*,s^*\in R^*$. This is just the example of Tachikawa of a symmetric algebra constructed from $R$, see [@lam Example 16.60]. The second one is for a symmetric finite dimensional algebra $R$. As $R^*\simeq R$ as bimodules, a semitrivial extension $R\rtimes_\psi R^*$ is isomorphic to $R\times R$ with multiplication $(r,a)(s,b)=(rs+\gamma(a\otimes_Rb),rb+as)$, where $\gamma:R\otimes_RR\rightarrow R$ is a morphism of $R$-bimodules. As such a $\gamma$ is of the form $\gamma (a\otimes_Rb)=zab$ for any $a,b\in R$, where $z$ is an element in the centre of $R$, any semitrivial extension of this kind is isomorphic to the algebra $A_z=R\times R$ for some $z\in Cen(R)$, whose multiplication is given by $(r,a)(s,b)=(rs+zab,rb+as)$.\ Now we return to the 9-dimensional algebra $\mathcal{R}$ discussed in the previous sections. Let $\varphi:\mathcal{R}^*\otimes _\mathcal{R}\mathcal{R}^*\rightarrow\mathcal{R}$ be the isomorphism of $\mathcal{R}$-bimodules defined in Proposition [Proposition 13](#croset){reference-type="ref" reference="croset"}, and denote $\varphi(r^*\otimes_\mathcal{R}s^*)$ by $[r^*,s^*]$. **Lemma 18**. *$[r^*,s^*]\rightharpoonup t^*=r^*\leftharpoonup[s^*,t^*]$ for any $r^*,s^*,t^*\in \mathcal{R}^*$, thus $\varphi$ is associative.* *Proof.* We list in the table below all triples $(r^*,s^*,t^*)$ with elements in the basis ${\bf B}^*$ such that $[r^*,s^*]\rightharpoonup t^*\neq 0$. For this, we first take $r^*$ and $s^*$ such that $r^*\otimes_\mathcal{R}s^*\neq 0$ (these are indicated in Proposition [Proposition 13](#croset){reference-type="ref" reference="croset"}), and for the corresponding value $\varphi(r^*\otimes_\mathcal{R}s^*)=[r^*,s^*]$, we take all possible $t^*\in {\bf B}^*$ such that $[r^*,s^*]\rightharpoonup t^*\neq 0$ by looking at the table of the left $\mathcal{R}$-action on $\mathcal{R}^*$. On each row, any of the occuring indices $i,j,r,p$ can take any value in $\{ 1,2\}$. 
|     | $r^*$      | $s^*$      | $[r^*,s^*]$ | $t^*$      | $[r^*,s^*]\rightharpoonup t^*$ |
|-----|------------|------------|-------------|------------|--------------------------------|
| 1   | $Y_i^*$    | $X_i^*$    | $E$         | $E^*$      | $E^*$                          |
| 2   | $Y_i^*$    | $X_i^*$    | $E$         | $Y_j^*$    | $Y_j^*$                        |
| 3   | $X_i^*$    | $Y_j^*$    | $F_{ij}$    | $F_{rj}^*$ | $F_{ri}^*$                     |
| 4   | $X_i^*$    | $Y_j^*$    | $F_{ij}$    | $X_j^*$    | $X_i^*$                        |
| 5   | $E^*$      | $Y_i^*$    | $X_i$       | $X_i^*$    | $E^*$                          |
| 6   | $Y_i^*$    | $F_{ji}^*$ | $X_j$       | $X_j^*$    | $E^*$                          |
| 7   | $X_i^*$    | $E^*$      | $Y_i$       | $Y_j^*$    | $F_{ji}^*$                     |
| 8   | $F_{ij}^*$ | $X_i^*$    | $Y_j$       | $Y_p^*$    | $F_{pj}^*$                     |

We proceed similarly to find all triples $(r^*,s^*,t^*)$ with elements in the basis ${\bf B}^*$ such that $r^*\leftharpoonup[s^*,t^*]\neq 0$. We make appropriate choices for the indices in order to ease the identification between the two tables.

|      | $s^*$      | $t^*$      | $[s^*,t^*]$ | $r^*$      | $r^*\leftharpoonup[s^*,t^*]$ |
|------|------------|------------|-------------|------------|------------------------------|
| $1'$ | $Y_i^*$    | $X_i^*$    | $E$         | $E^*$      | $E^*$                        |
| $2'$ | $Y_j^*$    | $X_j^*$    | $E$         | $X_i^*$    | $X_i^*$                      |
| $3'$ | $X_i^*$    | $Y_p^*$    | $F_{ip}$    | $F_{ij}^*$ | $F_{pj}^*$                   |
| $4'$ | $X_i^*$    | $Y_j^*$    | $F_{ij}$    | $Y_i^*$    | $Y_j^*$                      |
| $5'$ | $E^*$      | $Y_j^*$    | $X_j$       | $X_i^*$    | $F_{ji}^*$                   |
| $6'$ | $Y_j^*$    | $F_{rj}^*$ | $X_r$       | $X_i^*$    | $F_{ri}^*$                   |
| $7'$ | $X_i^*$    | $E^*$      | $Y_i$       | $Y_i^*$    | $E^*$                        |
| $8'$ | $F_{ji}^*$ | $X_j^*$    | $Y_i$       | $Y_i^*$    | $E^*$                        |

We see that the two tables indicate the same non-vanishing triples $(r^*,s^*,t^*)$ for the left hand side and the right hand side of the equality we need to prove, and in each case the two sides are indeed equal. The corresponding cases are: $1=7'$, $2=4'$, $3=6'$, $4=2'$, $5=1'$, $6=8'$, $7=5'$ and $8=3'$. ◻

As a consequence, we can construct the semitrivial extension $A=\mathcal{R}\rtimes_{\varphi} \mathcal{R}^*$. This algebra will answer in the negative our initial question presented in the Introduction, for both the symmetric property and the Frobenius property.

**Corollary 19**. *Let $A=\mathcal{R}\rtimes_{\varphi} \mathcal{R}^*$ with the $C_2$-grading given by $A_e=\mathcal{R}\times 0$ and $A_c=0\times \mathcal{R}^*$. Then $A$ is a strongly graded algebra which is symmetric, whose homogeneous component $A_e$ of trivial degree is not Frobenius.*

This example also answers a question posed by the referee of our paper [@dnn2]. It was proved in [@dnn2 Proposition 2.1] that if $B$ is a subalgebra of a Frobenius algebra $A$, such that $A$ is free as a left $B$-module and also as a right $B$-module, then $B$ is Frobenius, too. The question was whether the conclusion remains valid if we only suppose that $A$ is projective as a left $B$-module and as a right $B$-module. The example constructed in Corollary [Corollary 19](#corolarexemplu){reference-type="ref" reference="corolarexemplu"} shows that the answer is negative. Indeed, $A$ is even symmetric, and it is projective as a left $A_e$-module and as a right $A_e$-module, however $A_e$ is not a Frobenius algebra.

# Order 2 elements in Picard groups and associative isomorphisms {#sectionassociative}

We return to an arbitrary finite dimensional algebra $R$. We have seen in Example [Example 7](#exempleordinR^*){reference-type="ref" reference="exempleordinR^*"} that if $R^*$ is an invertible $R$-bimodule, then $[R^*]$ may have order $>2$ (finite or infinite) in ${\rm Pic}(R)$, thus $R^*\otimes_RR^*$ may not be isomorphic to $R$. Now we look at the case where $R^*\otimes_RR^*\simeq R$ as bimodules, thus $R^*$ is an invertible bimodule, and its order in the Picard group is at most 2; we have seen in Corollary [Corollary 3](#dualinvertible){reference-type="ref" reference="dualinvertible"} that $R$ is necessarily quasi-Frobenius in this case.
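Before continuing, we note that the bookkeeping behind Lemma 18, and the monomial table of Proposition 13 it relies on, lends itself to a machine check. The following sketch is purely illustrative and independent of the proofs above: it realizes $\mathcal{R}$ via the Morita-ring picture of the automorphism section, lets $\mathcal{R}$ act on coordinates with respect to the dual basis ${\bf B}^*$, encodes the bracket $[\,\cdot\,,\cdot\,]=\varphi(\,\cdot\otimes_\mathcal{R}\cdot\,)$ through the non-zero monomials of Proposition 13, and confirms $[r^*,s^*]\rightharpoonup t^*=r^*\leftharpoonup[s^*,t^*]$ on all dual-basis triples. Floating-point numbers stand in for $K$ and all identifiers are ours.

```python
import numpy as np
from itertools import product as triples

# Basis B of R, realized inside 3x3 arrays carrying the Morita-ring product with zero Morita maps.
names = ["E", "F11", "F12", "F21", "F22", "X1", "X2", "Y1", "Y2"]
def unit(i, j):
    M = np.zeros((3, 3)); M[i, j] = 1.0; return M
mats = [unit(0, 0), unit(1, 1), unit(1, 2), unit(2, 1), unit(2, 2),
        unit(0, 1), unit(0, 2), unit(1, 0), unit(2, 0)]

def mor(A, B):
    C = np.zeros((3, 3))
    C[0, 0] = A[0, 0] * B[0, 0]
    C[0, 1:] = A[0, 0] * B[0, 1:] + A[0, 1:] @ B[1:, 1:]
    C[1:, 0] = B[0, 0] * A[1:, 0] + A[1:, 1:] @ B[1:, 0]
    C[1:, 1:] = A[1:, 1:] @ B[1:, 1:]
    return C

coord = lambda M: np.array([np.sum(M * b) for b in mats])          # coordinates in the basis B
st = np.array([[coord(mor(a, b)) for b in mats] for a in mats])    # structure constants of R

# R acts on coordinates in the dual basis B*: (r acting on f from the left)(s) = f(s r),
# (f acted on by r from the right)(s) = f(r s)
lact = lambda i, f: np.array([f @ st[j, i] for j in range(9)])     # B_i acting on f from the left
ract = lambda f, i: np.array([f @ st[i, j] for j in range(9)])     # f acted on by B_i from the right

# the bracket [.,.] on dual basis elements, from Proposition 13 (all other pairs map to 0)
idx = {n: k for k, n in enumerate(names)}
table = {("Y1", "X1"): "E", ("Y2", "X2"): "E",
         ("X1", "Y1"): "F11", ("X1", "Y2"): "F12", ("X2", "Y1"): "F21", ("X2", "Y2"): "F22",
         ("E", "Y1"): "X1", ("E", "Y2"): "X2",
         ("Y1", "F11"): "X1", ("Y2", "F12"): "X1", ("Y1", "F21"): "X2", ("Y2", "F22"): "X2",
         ("X1", "E"): "Y1", ("X2", "E"): "Y2",
         ("F11", "X1"): "Y1", ("F12", "X1"): "Y2", ("F21", "X2"): "Y1", ("F22", "X2"): "Y2"}

def bracket(f, g):                      # returns an element of R as coordinates in B
    out = np.zeros(9)
    for (a, b), c in table.items():
        out[idx[c]] += f[idx[a]] * g[idx[b]]
    return out

ok = all(np.allclose(sum(bracket(u, v)[i] * lact(i, w) for i in range(9)),
                     sum(bracket(v, w)[i] * ract(u, i) for i in range(9)))
         for u, v, w in triples(np.eye(9), repeat=3))
print(ok)   # True: the bracket is associative, in accordance with Lemma 18
```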
In order to construct semitrivial extensions, we are interested in associativity of isomorphisms $R^*\otimes_RR^*\simeq R$. Thus we addressed Question 2 in the Introduction, asking whether any such isomorphism is associative. The following shows that the answer to the question depends only on the algebra, and not on a particular choice of the isomorphism. **Proposition 20**. *If $R$ is a finite dimensional algebra such that $R^*\otimes_RR^*\simeq R$ as bimodules and there exists an associative isomorphism $R^*\otimes_RR^*\rightarrow R$, then any other such isomorphism is associative.* *Proof.* We first note that if $\psi,\psi':R^*\otimes_RR^*\rightarrow R$ are isomorphisms, then $\psi^{-1}\psi'$ is an automorphism of the bimodule $R$, so it is the multiplication by a central invertible element $c$. Therefore $\psi'(y)=c\psi(y)$ for any $y\in R^*\otimes_RR^*$. We also see that if $c$ is a central element of $R$, then $c\rightharpoonup r^*=r^*\leftharpoonup c$ for any $r^*\in R^*$. Indeed, $(c\rightharpoonup r^*)(a)=r^*(ac)=r^*(ca)=(r^*\leftharpoonup c)(a)$ for any $a\in R$. Now $R^*\otimes_RR^*\simeq R$, so $R^*$ is an invertible $R$-bimodule and $$\xymatrix{{R-mod}\ar@<1ex>[rr]^{R^*\otimes_R -}&&{R-mod}\ar@<1ex>[ll]^{R^*\otimes_R -}}$$ is an equivalence of categories. By Morita theory, see [@bass Proposition 3.1, page 60], there exists a strict Morita context $(R, R, R^*,R^*,\phi,\phi')$, where $\phi,\phi':R^*\otimes_RR^*\rightarrow R$ are isomorphisms of $R$-bimodules satisfying $\phi(r^*\otimes_Rs^*)\rightharpoonup t^*=r^*\leftharpoonup\phi'(s^*\otimes_Rt^*)$. But $\phi'=c\phi$ for some central invertible element $c\in R$. We get that $\phi(r^*\otimes_Rs^*)\rightharpoonup t^*=c\rightharpoonup(r^*\leftharpoonup\phi (s^*\otimes_Rt^*))$ for any $r^*,s^*,t^*\in R^*$, i.e. $\phi$ is associative up to the central unit $c$. We show that any isomorphism $\psi:R^*\otimes_RR^*\rightarrow R$ of bimodules has the same property. Indeed, $\psi=b\phi$ for some central invertible element $b\in R$, and then $$\begin{aligned} \psi(r^*\otimes_Rs^*)\rightharpoonup t^*&=&(b\phi(r^*\otimes_Rs^*))\rightharpoonup t^*\\ &=&\phi(r^*\otimes_Rs^*)\rightharpoonup(b\rightharpoonup t^*)\\ &=&\phi(r^*\otimes_Rs^*)\rightharpoonup( t^*\leftharpoonup b)\\ &=&(\phi(r^*\otimes_Rs^*)\rightharpoonup t^*)\leftharpoonup b\\ &=&(c\rightharpoonup(r^*\leftharpoonup\phi (s^*\otimes_R t^*)))\leftharpoonup b\\ &=&c\rightharpoonup(r^*\leftharpoonup(\phi(s^*\otimes_Rt^*)b))\\ &=&c\rightharpoonup(r^*\leftharpoonup\psi(s^*\otimes_Rt^*)) \end{aligned}$$ Now if there is such an isomorphism $\psi$ which is associative, we get that $c\rightharpoonup(r^*\leftharpoonup\psi(s^*\otimes_Rt^*))=r^*\leftharpoonup \psi(s^*\otimes_Rt^*)$ for any $r^*,s^*,t^*\in R^*$. As $\psi$ is surjective, this shows that $c\rightharpoonup r^*=r^*$ for any $r^*\in R^*$, so then $r^*(ac)=r^*(a)$ for any $a\in R$ and $r^*\in R^*$. Hence $ac=a$ for any $a$, so $c=1$. We conclude that any other isomorphism $R^*\otimes_RR^*\rightarrow R$ is associative. ◻ The following answers in the positive our question in the Frobenius case. **Proposition 21**. *Let $R$ be a Frobenius algebra such that $R^*\otimes_RR^*\simeq R$ as bimodules. Then any isomorphism $\varphi:R^*\otimes_RR^*\rightarrow R$ is associative.* *Proof.* Let $\lambda\in R^*$ be a Frobenius form and let $\nu$ be the Nakayama automorphism associated with $\lambda$. Then $R^*\simeq {_1R_\nu}$, so $R^*\otimes_RR^*\simeq {_1R_\nu}\otimes_R{_1R_\nu}\simeq {_1R_{\nu^2}}$. 
Thus ${_1R_{\nu^2}}\simeq R$, which shows that $\nu^2$ is inner; let $\nu^2 (r)=u^{-1}ru$ for any $r\in R$, where $u$ is an invertible element of $R$. We have seen in Proposition [Proposition 4](#dualalpha){reference-type="ref" reference="dualalpha"} that $\theta:{_1R_\nu}\rightarrow R^*$, $\theta (r)=r\rightharpoonup\lambda$, is a bimodule isomorphism. It is easy to check that $\delta:{_1R_\nu}\otimes_R{_1R_\nu}\rightarrow{_1R_{\nu^2}}$, $\delta (r\otimes_Rs)=r\nu (s)$ for any $r,s\in R$, and $\omega:{_1R_{\nu^2}}\rightarrow R$, $\omega(r)=ru^{-1}$ for any $r\in R$, are both bimodule isomorphisms. Composing these isomorphisms we obtain an $R$-bimodule isomorphism $\psi:R^*\otimes_RR^*\rightarrow R$, $\psi=\omega \delta (\theta^{-1}\otimes\theta^{-1})$. Explicitly, $$\psi ((r\rightharpoonup \lambda)\otimes_R(s\rightharpoonup\lambda))=r\nu (s)u^{-1}$$ for any $r,s\in R$. Denoting $\psi(r^*\otimes_Rs^*)$ by $[r^*,s^*]$, we have $$\begin{aligned}
[r\rightharpoonup\lambda,s\rightharpoonup\lambda]\rightharpoonup(t\rightharpoonup\lambda)&=& (r\nu(s)u^{-1})\rightharpoonup(t\rightharpoonup\lambda)\\
&=& (r\nu(s)u^{-1}t)\rightharpoonup\lambda \end{aligned}$$ and $$\begin{aligned}
(r\rightharpoonup\lambda)\leftharpoonup[s\rightharpoonup\lambda,t\rightharpoonup\lambda]&=&(r\rightharpoonup \lambda)\leftharpoonup(s\nu(t)u^{-1})\\
&=&r\rightharpoonup(\lambda\leftharpoonup(s\nu(t)u^{-1}))\\
&=&r\rightharpoonup(\nu(s)\nu^2(t)\nu(u^{-1})\rightharpoonup\lambda)\\
&=&(r\nu(s)u^{-1}tu\nu(u)^{-1})\rightharpoonup\lambda, \end{aligned}$$showing that $\psi$ is associative if and only if $\nu(u)=u$. Now for any $a\in R$ $$\begin{aligned}
\lambda (au)&=&\lambda (u\nu(a)) \;\;\; (\mbox{since }\nu\mbox{ is the Nakayama automorphism})\\
&=&\lambda(u\nu^2(\nu^{-1}(a)))\\
&=&\lambda(uu^{-1}\nu^{-1}(a)u) \;\;\;(\mbox{since }\nu^2\mbox{ is inner})\\
&=&\lambda(\nu^{-1}(a)u)\\
&=&\lambda(ua) \;\;\; (\mbox{since }\nu\mbox{ is the Nakayama automorphism}), \end{aligned}$$showing that $\lambda (au)=\lambda(ua)$, or equivalently, $u\rightharpoonup\lambda=\lambda\leftharpoonup u$. Therefore $\theta(u)=u\rightharpoonup\lambda=\lambda\leftharpoonup u=\nu(u)\rightharpoonup \lambda=\theta(\nu(u))$, so $\nu(u)=u$, since $\theta$ is injective. This shows that $\psi$ is associative, and then so is any isomorphism $\varphi:R^*\otimes_RR^*\rightarrow R$. ◻

As a consequence, we obtain a class of examples of strongly graded algebras that are symmetric as algebras, while their homogeneous component of trivial degree is not symmetric. Indeed, we can take a Frobenius algebra $R$ such that the order of $[R^*]$ in ${\rm Pic}(R)$ is $2$; in other words, the Nakayama automorphism $\nu$ with respect to a Frobenius form is not inner, but $\nu^2$ is inner. Then there is an isomorphism of $R$-bimodules $\psi:R^*\otimes_RR^*\rightarrow R$, and by Proposition [Proposition 21](#Frobeniusasociativ){reference-type="ref" reference="Frobeniusasociativ"}, it is associative. Hence we can form the semitrivial extension $R\rtimes_{\psi}R^*$, which is a strongly $C_2$-graded algebra which is symmetric, and its homogeneous component of trivial degree is isomorphic to $R$, which is Frobenius, but not symmetric. We have several classes of examples of Frobenius algebras $R$ such that $[R^*]$ has order $2$ in ${\rm Pic}(R)$:

\(i\) A first class follows from Example [Example 7](#exempleordinR^*){reference-type="ref" reference="exempleordinR^*"}. For $R=H_1(C,n,c,c^*)$, the order of $[R^*]$ is $2$ if and only if $n=2$.
Thus we obtain such an $R$ if we have a finite abelian group $C$, an element $c\in C$ with $c^2\neq 1$, and a linear character $c^*\in C^*$ such that $(c^*)^2=1$ and $c^*(c)=-1$. A particular family of such examples is when we take $C=<c>\simeq C_{2r}$, where $r\geq 2$, and $c^*\in C^*$ defined by $c^*(c)=-1$, obtaining a Hopf algebra of dimension $4r$, generated by the grouplike element $c$ and the $(1,c)$-skew-primitive element $x$, subject to relations $c^{2r}=1, x^2=c^2-1, xc=-cx$.

\(ii\) A second class follows from Example [Example 7](#exempleordinR^*){reference-type="ref" reference="exempleordinR^*"}, too. For $R=H_2(C,n,c,c^*)$, the order of $[R^*]$ is $2$ if and only if $\frac{m}{(\frac{m}{n},n-1)}=2$, where $m$ is the order of $c^*$. It is easy to check that this happens if and only if $m=n=2$. Thus we need a finite abelian group $C$, a character $c^*\in C^*$ such that $(c^*)^2=1$ and an element $c\in C$ such that $c^*(c)=-1$ (in particular, the order of $c$ must be even). A particular family of such examples is when we take $C=<c>\simeq C_{2r}$, where $r\geq 1$, and $c^*\in C^*$ defined by $c^*(c)=-1$, obtaining a Hopf algebra of dimension $4r$, generated by the grouplike element $c$ and the $(1,c)$-skew-primitive element $x$, subject to relations $c^{2r}=1, x^2=0, xc=-cx$. For $r=1$ this is just Sweedler's $4$-dimensional Hopf algebra.

\(iii\) Another example is $R_{-1}=K_{-1}[X,Y]/(X^2,Y^2)$ from Example [Example 8](#exemplequantumplane){reference-type="ref" reference="exemplequantumplane"} for $q=-1$.

\(iv\) Let $H$ be a unimodular finite dimensional Hopf algebra, i.e., the spaces of left integrals and right integrals coincide in $H$; equivalently, the unimodular element $\mathcal{G}$ is trivial. By Radford's formula, see [@lorenz Theorem 12.10] or [@radford2 Theorem 10.5.6], $S^4(h)=a^{-1}ha$ for any $h\in H$, where $a$ is the modular element of $H^*$ regarded inside $H$ via the isomorphism $H\simeq H^{**}$. Thus $S^4$ is inner, and then the order of $S^2$ in ${\rm Out}(H)$ is either $1$ or $2$. By Theorem [Theorem 9](#ordinH*){reference-type="ref" reference="ordinH*"}, in the first case $[H^*]$ has order $1$ in ${\rm Pic}(H)$, and $H$ is symmetric, while in the second case, $[H^*]$ has order $2$ in ${\rm Pic}(H)$. We conclude that a class of Frobenius algebras as we are looking for is the family of all unimodular finite dimensional Hopf algebras that are not symmetric. A class of such objects was explicitly constructed in [@suzuki].

We do not know whether the answer to Question 2 is positive for any finite dimensional quasi-Frobenius algebra.

**Acknowledgement.** The first two authors were supported by a grant of UEFISCDI, project number PN-III-P4-PCE-2021-0282, contract PCE 47/2022.

# References

- N. Andruskiewitsch, H.-J. Schneider, Lifting of quantum linear spaces and pointed Hopf algebras of order $p^3$, J. Algebra **209** (1998), 659-691.
- H. Bass, Algebraic K-Theory, W. A. Benjamin, Inc., 1968.
- M. Beattie, S. Dăscălescu, L. Grünenfelder, Constructing pointed Hopf algebras by Ore extensions, J. Algebra **225** (2000), 743-770.
- S. Dăscălescu, C. Năstăsescu and L. Năstăsescu, Frobenius algebras of corepresentations and group graded vector spaces, J. Algebra **406** (2014), 226-250.
- S. Dăscălescu, C. Năstăsescu and L. Năstăsescu, Hopf algebra actions and transfer of Frobenius and symmetric properties, Math. Scand. **126** (2020), 32-40.
- S. Dăscălescu, C. Năstăsescu and L. Năstăsescu, On a class of quasi-Frobenius algebras, J. Pure Appl. Algebra **226** (2022), 106992.
- T. Y. Lam, Lectures on modules and rings, GTM **189**, Springer Verlag, 1999.
- M. Lorenz, A tour of representation theory, AMS Graduate Studies in Mathematics **193**, 2018.
- T. Nakayama, On Frobeniusean algebras. I, Ann. of Math. **40** (1939), 611-633.
- T. Nakayama, C. Nesbitt, Note on symmetric algebras, Ann. of Math. **39** (1938), 659-668.
- C. Năstăsescu and F. van Oystaeyen, Methods of graded rings, Lecture Notes in Math., vol. 1836 (2004), Springer Verlag.
- U. Oberst, H.-J. Schneider, Über Untergruppen endlicher algebraischer Gruppen, Manuscripta Math. **8** (1973), 217-241.
- I. Palmér, The global homological dimension of semi-trivial extensions of rings, Math. Scand. **37** (1975), 223-256.
- D. E. Radford, The trace function and Hopf algebras, J. Algebra **163** (1994), 583-622.
- D. E. Radford, Hopf algebras. Series on Knots and Everything, 49. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2012.
- A. Skowroński, K. Yamagata, Frobenius algebras I. Basic representation theory, European Mathematical Society, 2012.
- S. Suzuki, Unimodularity of finite dimensional Hopf algebras, Tsukuba J. Math. **20** (1996), 231-238.
---
author:
- Helena Kremp, Nicolas Perkowski
title: Periodic homogenization for singular SDEs
---

Periodic homogenization for singular Lévy SDEs\
Helena Kremp[^1], Nicolas Perkowski[^2]

------------------------------------------------------------------------

**Abstract.** We generalize the theory of periodic homogenization for multidimensional SDEs with additive Brownian and stable Lévy noise for $\alpha\in (1,2)$ (cf. [@Bensoussan1978; @Franke2007]) to the setting of singular periodic Besov drifts $F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}$ for $\beta\in ((2-2\alpha)/3,0)$ beyond the Young regime. For the martingale solution from [@kp] projected onto the torus, we prove existence and uniqueness of an invariant probability measure $\pi$ with strictly positive Lebesgue density, exploiting the theory of paracontrolled distributions and a strict maximum principle for the singular Fokker-Planck equation. Furthermore, we prove a spectral gap on the semigroup of the diffusion and solve the Poisson equation with singular right-hand side equal to the drift itself. In the CLT scaling, we prove that the diffusion converges in law to a Brownian motion with constant diffusion matrix. In the pure $\alpha$-stable noise case, we rescale in the scaling that the $\alpha$-stable process respects and show convergence to the stable process itself. We conclude with the periodic homogenization result for the parabolic PDE for the singular generator $\mathfrak{L}^{\varepsilon}=-(-\Delta)^{\alpha/2}+\varepsilon^{1-\alpha}F(\varepsilon^{-1}\cdot)\cdot\nabla$ as $\varepsilon\to 0$.

*Keywords: Periodic homogenization, singular diffusion, stable Lévy noise, Poisson equation, singular Fokker-Planck equation, paracontrolled distributions*\
*MSC2020: 35A21, 35B27, 60H10, 60G51, 60L40, 60K37.*

------------------------------------------------------------------------

# Introduction

Periodic homogenization describes the limit procedure from microscopic boundary-value problems posed on periodic structures to a macroscopic equation. Such periodic media are for example composite materials or polymer structures. The theory originated from engineering applications in the material sciences in the 1970s, cf. [@Bensoussan1978] and the references therein. Mathematically, this leads to the study of the limit of periodic operators with rapidly oscillating coefficients. There exist analytic and probabilistic methods to determine the limit equation. We refer to the classical works [@Bensoussan1978; @Pavliotis2008] for the background on homogenization theory. We employ a probabilistic method using the Feynman-Kac formula (cf. [@Oksendal2003]). Via the Feynman-Kac formula, the periodic homogenization result for the Kolmogorov PDE with fluctuating and unbounded drift corresponds to a central limit theorem for the diffusion process.\
In this work, we generalize the theory of periodic homogenization for SDEs with additive Brownian noise, respectively stable Lévy noise, from [@Bensoussan1978], respectively [@Franke2007], from the setting of regular coefficients to singular Besov drifts $F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}$ for $\beta\in ((2-2\alpha)/3,0)$ on the $d$-dimensional torus $\mathbb{T}^{d}$.\
In [@Bensoussan1978 Section 3.4.2], the periodic drift coefficient is assumed to be $C^{1}$ with Hölder-continuous derivative and the periodic diffusion coefficient is assumed to be symmetric and uniformly elliptic, as well as $C^{2}$ with Hölder-continuous first derivative and bounded second derivative.
The assumption of uniform ellipticity can be relaxed to allow for some degeneracy, which was investigated in [@Hairer2008] using Malliavin calculus techniques.\
In [@Franke2007] the multiplicative symmetric $\alpha$-stable noise case for $\alpha\in (1,2)$ is studied and the coefficients are assumed to be even more regular, namely $C^{3}$. The regularity assumptions were relaxed in [@Huang2018; @Huang2022], where the authors more generally consider the periodic homogenization for the generator of an $\alpha$-stable-like Feller process. In [@Huang2022], using a Zvonkin transformation to remove the drift (cf. [@Zvonkin1974]), the authors can consider drifts that are bounded and $\beta$-Hölder continuous for $\beta\in (1-\alpha/2,1)$. They also consider a non-linear intensity function $\sigma$ and therefore a multiplicative noise term of the form $\sigma(X_{t},dL_{t}^{\alpha})$, see [@Huang2022 Equation (2.1)] with an isotropic $\alpha$-stable process $L^{\alpha}$, whereas in [@Franke2007] the intensity function $\sigma(x,y)$ is linear in $y$.\
In the recent article [@Chen2021] the authors further generalize the assumption on the drift coefficient to bounded, measurable drifts and consider the solution of the martingale problem associated to the SDE. The operator they consider is a Lévy-type operator that in particular includes all stable Lévy noise generators, symmetric and non-symmetric. They prove the homogenization result with the corrector method, an analytical method in homogenization theory, and show that different limit phenomena occur in the cases $\alpha\in (0,1)$, $\alpha=1$, $\alpha\in (1,2)$, $\alpha=2$ and $\alpha\in (2,\infty)$.\
With analytical methods, the papers [@Krassmann; @Arisawa; @Schwab] deal with Lévy-type operators with oscillating coefficients for $\alpha\in (0,2)$, but without a drift part.\
In the mixed jump-diffusion case, [@Sandric] investigates the periodic homogenization for zero-drift diffusions with small jumps. The homogenized process in this case is also a Brownian motion.\
We focus on the additive $\alpha$-stable symmetric noise case, where different limit behaviours occur for $\alpha=2$ (the Brownian noise case) and $\alpha\in (1,2)$. Our contribution is the generalization to distributional drifts, not only in the Young, but also in the rough regime.\
For the homogenization result, we rely on Kipnis-Varadhan martingale methods (cf. [@Kipnis1986] and [@klo]). Those methods require solving the Poisson equation for the generator of the diffusion (or, more generally, the resolvent equations under additional assumptions) and rewriting the additive functional in terms of that solution and Dynkin's martingale. Poisson equations for generators of diffusions with regular coefficients were studied in the classical article [@Pardoux1998].\
Following [@klo], we generalize those techniques to much less regular drift coefficients. In particular this includes bounded measurable drifts or distributional drifts in the Young regime, where classical PDE techniques apply. More interestingly, our theory applies in the setting of singular drifts such as a typical realization of the periodic spatial white noise, cf. [Remark 39](#rem:p-Brox){reference-type="ref" reference="rem:p-Brox"}.
In order to apply the SDE solution theory from [@kp], we restrict to additive noise.\
To be more precise, we study the functional central limit theorem for the solution $X$ of the martingale problem associated to the SDE $$\begin{aligned}
\label{eq:sde}
dX_{t}=F(X_{t})dt+dL_{t}\end{aligned}$$ with $F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}$ and a symmetric $\alpha$-stable process $L$ for $\alpha\in (1,2]$. The singular generator $\mathfrak{L}$ of $X$ is given by $$\begin{aligned}
\mathfrak{L}=-\mathscr{L}^{\alpha}_{\nu}+F\cdot\nabla.\end{aligned}$$ The first step is to prove existence and uniqueness of an invariant probability measure $\pi$ on $\mathbb{T}^{d}$ for $\mathfrak{L}$ with strictly positive Lebesgue density. We achieve this by solving the singular Fokker-Planck equation with singular initial condition $\mu\in \mathscr{C}^{0}_{1}$, $$\begin{aligned}
(\partial_{t}-\mathfrak{L}^{\ast})\rho_{t}=0,\quad \rho_{0}=\mu,\end{aligned}$$ with formal Lebesgue adjoint $\mathfrak{L}^{\ast}$ of $\mathfrak{L}$ and proving a strict maximum principle on compacts. Furthermore, we prove spectral gap estimates for the semigroup of the diffusion projected onto the torus and solve the singular resolvent equation for $\mathfrak{L}$. This enables us, through a limiting argument in a Sobolev-type space $\mathscr{H}^{1}(\pi)$ with respect to $\pi$, to solve the Poisson equation [\[eq:Poisson-eq\]](#eq:Poisson-eq){reference-type="eqref" reference="eq:Poisson-eq"} with singular right-hand side $F-\langle F\rangle_{\pi}$. Here, we define $\langle F\rangle_{\pi}=\int F d\pi$ in a stable manner.\
For the homogenization, we distinguish between the cases $\alpha=2$ (Brownian noise case) and $\alpha\in (1,2)$, as the scaling and the limit behaviour differ. In the standard Brownian noise case, we prove weak convergence $$\begin{aligned}
\label{eq:mainr1}
\paren[\bigg]{\frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})}_{t\in[0,T]}\Rightarrow (\sqrt{D}B_{t})_{t\in[0,T]},\end{aligned}$$ where $B$ is a standard Brownian motion and $D$ is the constant diffusion matrix with entries $$\begin{aligned}
D(i,j):=\int_{\mathbb{T}^{d}} (e_{i}+\nabla \chi^{i}(x))(e_{j}+\nabla \chi^{j}(x))^{T}\pi(dx),\end{aligned}$$ for $i,j=1,\dots,d$ and $e_{i}$ denoting the $i$-th Euclidean unit vector. The limit is motivated by the result from [@Bensoussan1978 Section 3.4.2]. Furthermore, $\chi\in (L^{2}(\pi))^{d}$ solves the Poisson equation with singular right-hand side $F-\langle F\rangle_{\pi}$: $$\begin{aligned}
\label{eq:Poisson-eq}
(-\mathfrak{L})\chi^{i}=F^{i}-\langle F^{i}\rangle_{\pi},\end{aligned}$$ for $i=1,\dots,d$. In the pure Lévy noise case $\alpha\in (1,2)$ we rescale in the $\alpha$-stable scaling $n^{-1/\alpha}$ instead of $n^{-1/2}$. In this scaling we show that the Dynkin martingale vanishes and thus we obtain weak convergence towards the stable process itself, $$\begin{aligned}
\label{eq:mainr2}
\paren[\bigg]{\frac{1}{n^{1/\alpha}}(X_{nt}-nt\langle F\rangle_{\pi})}_{t\in [0,T]}\Rightarrow (L_{t})_{t\in[0,T]}.\end{aligned}$$ In particular, compared to the Brownian noise case, there is no diffusivity enhancement in the limit (analogously to the regular coefficient case, cf. [@Franke2007]). The paper is structured as follows. Preliminaries and the strategy to prove the central limit theorem are outlined in [\[sec:prelim-PH\]](#sec:prelim-PH){reference-type="ref" reference="sec:prelim-PH"}.
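Although the drifts treated in this paper are genuinely distributional, the convergence [\[eq:mainr1\]](#eq:mainr1){reference-type="eqref" reference="eq:mainr1"} is easy to visualize with a smooth periodic drift standing in for $F$. The following Euler-Maruyama sketch (Brownian case $\alpha=2$, $d=1$, with a hypothetical drift chosen only for illustration) is not used anywhere in the proofs; it simply estimates $\langle F\rangle_{\pi}$ and the effective diffusivity from simulated paths. In the pure jump case one would instead rescale by $n^{-1/\alpha}$, in line with [\[eq:mainr2\]](#eq:mainr2){reference-type="eqref" reference="eq:mainr2"}.

```python
import numpy as np
rng = np.random.default_rng(1)

# Illustration only: Euler-Maruyama for dX_t = F(X_t) dt + dB_t with a smooth
# 1-periodic stand-in for the singular drift, followed by the diffusive rescaling.
F = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)   # hypothetical smooth drift
n_steps, dt, n_paths = 20_000, 1e-2, 2_000
X = np.zeros(n_paths)
for _ in range(n_steps):
    X += F(X) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

T = n_steps * dt
mean_drift = X.mean() / T                      # Monte Carlo estimate of <F>_pi
rescaled = (X - T * mean_drift) / np.sqrt(T)   # the CLT scaling from above
print("estimated effective diffusivity:", rescaled.var())   # approximates D (here d = 1)
```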
In [\[sec:s-FP\]](#sec:s-FP){reference-type="ref" reference="sec:s-FP"} we solve the singular Fokker-Planck equation with the paracontrolled approach. The singular resolvent equation for $\mathfrak{L}$ is solved in [\[sec:res-eq\]](#sec:res-eq){reference-type="ref" reference="sec:res-eq"}. We show in [\[sec:inv-m\]](#sec:inv-m){reference-type="ref" reference="sec:inv-m"} existence and uniqueness of the invariant measure $\pi$. [\[sec:inv-m\]](#sec:inv-m){reference-type="ref" reference="sec:inv-m"} furthermore yields a characterization of the domain of the generator $\mathfrak{L}$ in $L^{2}(\pi)$, cf. [Theorem 26](#thm:generator){reference-type="ref" reference="thm:generator"}. In [\[sec:s-P-eq\]](#sec:s-P-eq){reference-type="ref" reference="sec:s-P-eq"}, we solve the Poisson equation with singular right-hand side $F-\langle F\rangle_{\pi}$. Finally, we prove the CLT in [\[sec:CLT\]](#sec:CLT){reference-type="ref" reference="sec:CLT"} and relate to the periodic homogenization result for the parabolic PDE with oscillating operator $\mathfrak{L}^{\varepsilon}=-\mathscr{L}^{\alpha}_{\nu}+\varepsilon^{1-\alpha}F(\varepsilon^{-1}\cdot)\cdot\nabla$, cf. [Corollary 36](#cor:PDEhomog){reference-type="ref" reference="cor:PDEhomog"}. Preliminaries[\[sec:prelim-PH\]]{#sec:prelim-PH label="sec:prelim-PH"} This section gives an introduction to periodic Besov spaces and Schauder and exponential Schauder estimates on such. Furthermore, we introduce the projected solution $X^{\mathbb{T}^{d}}$ of $X$ onto the torus and its generator $\mathfrak{L}$ and semigroup. We define the space of enhanced distributions $\mathscr{X}^{\beta,\gamma}_{\infty}$. This section finishes with a summary on our strategy in proving the convergence results [\[eq:mainr1\]](#eq:mainr1){reference-type="eqref" reference="eq:mainr1"} and [\[eq:mainr2\]](#eq:mainr2){reference-type="eqref" reference="eq:mainr2"}.\ Let $\mathscr{S}(\mathbb{R}^{d})$ be the space of Schwartz functions and $\mathscr{S}'(\mathbb{R}^{d})$ the space of tempered distributions. A periodic (or $1$-periodic) distribution $u$ satisfies $u(\varphi(\cdot+1))=u(\varphi)$ for all $\varphi\in\mathscr{S}(\mathbb{R}^{d})$. Let $\mathbb{T}^{d}=(\mathbb{R}/\mathbb{Z})^{d}$ denote the torus and $\mathscr{S}(\mathbb{T}^{d})$ the space of Schwartz functions on the torus, i.e. smooth functions with the locally convex topology generated by the family of semi-norms $\norm{f}_{\gamma}=\sup_{x\in\mathbb{T}^{d}}\abs{D^{\gamma}f(x)}$ for multi-indices $\gamma\in\mathbb{N}^{d}$, and its topological dual $\mathscr{S}'(\mathbb{T}^{d})$. Let $(p_{j})_{j\geqslant -1}$ be a smooth dyadic partition of unity, i.e. a family of functions $p_{j}\in C^{\infty}_{c}(\mathbb{R}^{d})$ for $j\geqslant -1$, such that 1. $p_{-1}$ and $p_{0}$ are non-negative radial functions (they just depend on the absolute value of $x\in\mathbb{R}^{d}$), such that the support of $p_{-1}$ is contained in a ball and the support of $p_{0}$ is contained in an annulus; 2. $p_{j}(x):=p_{0}(2^{-j}x)$, $x\in\mathbb{R}^{d}$, $j\geqslant 0$; 3. $\sum_{j=-1}^{\infty}p_{j}(x)=1$ for every $x\in\mathbb{R}^{d}$; and 4. $\operatorname{supp}(p_{i})\cap \operatorname{supp}(p_{j})=\emptyset$ for all $\abs{i-j}>1$. Then, we define the Besov space on the torus with regularity $\theta\in\mathbb{R}$, integrability $p\in[1,\infty]$ and summability $q\in[1,\infty)$ as (cf. 
[@Schmeisser1987 Section 3.5]) $$\begin{aligned} B^{\theta}_{p,q}(\mathbb{T}^{d}):=\{u\in\mathscr{S}'(\mathbb{T}^{d})\mid \norm{u}_{B^{\theta}_{p,q}}:=\norm{(2^{js}\norm{\Delta_{j}u}_{L^{p}(\mathbb{T}^{d})})_{j\geqslant -1}}_{l^{q}(\mathbb{Z}^{d})}<\infty\}\end{aligned}$$ for the Littlewood-Paley blocks $\Delta_{j}u=\mathscr{F}^{-1}_{\mathbb{T}^{d}}(\rho_{j}\mathscr{F}_{\mathbb{T}^{d}}u)$ with Fourier transform $\mathscr{F}_{\mathbb{T}^{d}}f(k)=\hat{f}(k)=\int_{\mathbb{T}^{d}}f(x)e^{-2\pi i k \cdot x}dx$, $k\in\mathbb{Z}^{d}$, with inverse Fourier transform $\mathscr{F}^{-1}_{\mathbb{T}^{d}}f(x)=\sum_{k\in\mathbb{Z}^{d}}f(k)e^{2\pi i k\cdot x}$ and a dyadic partition of unity $(\rho_{j})_{j\geqslant -1}$ as above (cf. also [@Schmeisser1987 Section 3.4.4]). The Fourier transform on $\mathbb{R}^{d}$, we define as $\mathscr{F}f(y)= \int_{\mathbb{T}^{d}}f(x)e^{-2\pi i y \cdot x}dx$, $y\in\mathbb{R}^{d}$ with inverse $\mathscr{F}^{-1}f(y)=\mathscr{F}f(-y)$, $f\in\mathscr{S}(\mathbb{R}^{d})$. In the case $q=\infty$, we rather work with the separable Besov space, and thus define $$\begin{aligned} \label{eq:Besov-torus-infty} B^{\theta}_{p,\infty}=B^{\theta}_{p,\infty}(\mathbb{T}^{d}):=\{u\in\mathscr{S}'(\mathbb{T}^{d})\mid \norm{u}_{B^{\theta}_{p,\infty}}:=\lim_{j\to\infty} 2^{js}\norm{\Delta_{j}u}_{L^{p}}=0\}.\end{aligned}$$ We introduce the notation $\mathscr{C}^{\theta}_{p}(\mathbb{T}^{d}):=B^{\theta}_{p,\infty}(\mathbb{T}^{d})$ for $p\in[1,\infty)$ and $\mathscr{C}^{\theta}(\mathbb{T}^{d}):=B^{\theta}_{\infty,\infty}(\mathbb{T}^{d})$ and analogously for $\mathbb{T}^{d}$ replaced by $\mathbb{R}^{d}$. We simply write $B^{\theta}_{p,q}$, respectively $\mathscr{C}^{\theta}_{p}$, in the case the statement holds for any of the spaces, on the torus $\mathbb{T}^{d}$ and on $\mathbb{R}^{d}$. We recall from Bony's paraproduct theory (cf. [@Bahouri2011 Section 2]) that in general for $u\in\mathscr{C}^{\theta}$ and $v\in\mathscr{C}^{\beta}$ with $\theta,\beta\in\mathbb{R}$, the product $u v:=u\varolessthan v+u\varogreaterthan v +u \varodot v$ , is well defined in $\mathscr{C}^{\min(\theta,\beta,\theta+\beta)}$ if and only if $\theta+\beta>0$. Denoting $S_{i}u=\sum_{j=-1}^{i-1}\Delta_{j}u$, the paraproducts are defined as follows $$\begin{aligned} u\varolessthan v:=\sum_{i\geqslant -1} S_{i-1}u\Delta_{i}v,\quad u\varogreaterthan v:=v\varolessthan u, \quad u\varodot v:= \sum_{\abs{i-j}\leqslant 1}\Delta_{i}u\Delta_{j}v.\end{aligned}$$ Here, we use the notation of [@Martin2017; @Mourrat2017Dynamic] for the para- and resonant products $\varolessthan, \varogreaterthan$ and $\varodot$.\ In estimates we often use the notation $a\lesssim b$, which means, that there exists a constant $C>0$, such that $a\leqslant C b$. In the case that we want to stress the dependence of the constant $C(d)$ in the estimate on a parameter $d$, we write $a\lesssim_{d} b$.\ Let $C^{\infty}_{b}=C^{\infty}_{b}(\mathbb{R}^{d},\mathbb{R})$ denote the space of smooth, bounded functions with bounded partial derivatives.\ The paraproducts satisfy the following estimates for $p,p_{1},p_{2}\in[1,\infty]$ with $\frac{1}{p}=\min(1,\frac{1}{p_{1}}+\frac{1}{p_{2}})$ and $\theta,\beta\in\mathbb{R}$ (cf. 
[@PvZ Theorem A.1] and [@Bahouri2011 Theorem 2.82, Theorem 2.85]) $$\begin{aligned}\label{eq:paraproduct-estimates} \norm{u\varodot v}_{\mathscr{C}^{\theta+\beta}_{p}} & \lesssim\norm{u}_{\mathscr{C}^{\theta}_{p_{1}}}\norm{v}_{\mathscr{C}^{\beta}_{p_{2}}}, \qquad \text{if }\theta +\beta > 0,\\ \norm{u\varolessthan v}_{\mathscr{C}^{\beta}_{p}} \lesssim\norm{u}_{L^{p_{1}}}\norm{v}_{\mathscr{C}^{\beta}_{p_{2}}}& \lesssim\norm{u}_{\mathscr{C}^{\theta}_{p_{1}}}\norm{v}_{\mathscr{C}^{\beta}_{p_{2}}}, \qquad \text{if } \theta > 0,\\ \norm{u\varolessthan v}_{\mathscr{C}^{\beta+\theta}_{p}}& \lesssim\norm{u}_{\mathscr{C}^{\theta}_{p_{1}}}\norm{v}_{\mathscr{C}^{\beta}_{p_{2}}}, \qquad \text{if } \theta < 0. \end{aligned}$$ So if $\theta + \beta > 0$, we have $\norm{u v}_{\mathscr{C}^{\gamma}_{p}}\lesssim\norm{u}_{\mathscr{C}^{\theta}_{p_{1}}}\norm{v}_{\mathscr{C}^{\beta}_{p_{2}}}$ for $\gamma:=\min(\theta,\beta,\theta+\beta)$.\ Next, we collect some facts about $\alpha$-stable Lévy processes and their generators and semigroups. For $\alpha\in (0,2]$, a symmetric $\alpha$-stable Lévy process $L$ is a Lévy process that moreover satisfies the scaling property $(L_{k t})_{t \geqslant 0}\stackrel{d}{=}k^{1/\alpha}(L_{t})_{t \geqslant 0}$ for any $k>0$ and $L\stackrel{d}{=}-L$, where $\stackrel{d}{=}$ denotes equality in law. These properties determine the jump measure $\mu$ of $L$, see [@Sato1999 Theorem 14.3]. That is, if $\alpha\in (0,2)$, the Lévy jump measure $\mu$ of $L$ is given by $$\begin{aligned} \label{eq:mu} \mu(A):=\mathds{E}\bigg[\sum_{0\leqslant t\leqslant 1}\mathbf{1}_{A}(\Delta L_{t})\bigg]=\int_{S}\int_{\mathbb{R}^{+}}\mathbf{1}_{A}(k\xi)\frac{1}{k^{1+\alpha}}dk\tilde \nu(d\xi),\quad A\in \mathscr{B}(\mathbb{R}^{d}\setminus\{0\}),\end{aligned}$$ where $\tilde \nu$ is a finite, symmetric, non-zero measure on the unit sphere $S\subset\mathbb{R}^{d}$. Furthermore, we also define for $A\in\mathscr{B}(\mathbb{R}^{d}\setminus\{0\})$ and $t\geqslant 0$ the Poisson random measure $$\begin{aligned} \pi(A\times [0,t])=\sum_{0\leqslant s\leqslant t}\mathbf{1}_{A}(\Delta L_{s}),\end{aligned}$$ with intensity measure $dt\mu(dy)$. Denote the compensated Poisson random measure of $L$ by $\hat{\pi}(dr,dy):=\pi(dr,dy)-dr\mu(dy)$. We refer to the book by Peszat and Zabczyk [@peszat_zabczyk_2007] for the integration theory against Poisson random measures. The generator $A$ of $L$ satisfies $C_{b}^{\infty}(\mathbb{R}^{d})\subset \operatorname{dom}(A)$ and is given by $$\begin{aligned} \label{eq:functional} A\varphi(x)=\int_{\mathbb{R}^{d}}\paren[\big]{\varphi(x+y)-\varphi(x)-\mathbf{1}_{\{\abs{y}\leqslant 1\}}(y) \nabla \varphi(x) \cdot y}\mu(dy)\qquad\text{for }\varphi\in C_{b}^{\infty}(\mathbb{R}^{d}).\end{aligned}$$ If $(P_t)_{t\geqslant 0}$ denotes the semigroup of $L$, the convergence $t^{-1}(P_{t}f(x)-f(x))\to Af(x)$, $f\in C_{b}^{\infty}(\mathbb{R}^{d})$, is uniform in $x\in\mathbb{R}^{d}$ (see [@peszat_zabczyk_2007 Theorem 5.4]). We also have a Fourier representation of the operator $A$, which is defined as follows. **Definition 1**. *Let $\alpha \in (0,2)$ and let $\nu$ be a symmetric (i.e. $\nu(A)=\nu(-A)$), finite and non-zero measure on the unit sphere $S\subset\mathbb{R}^{d}$.
We define the operator $\mathscr{L}^{\alpha}_{\nu}$ as $$\begin{aligned} \mathscr{L}^{\alpha}_{\nu}\mathscr{F}^{-1}\varphi=\mathscr{F}^{-1}(\psi^{\alpha}_{\nu} \varphi)\qquad\text{for $\varphi\in C^\infty_b$,}\end{aligned}$$ where $\psi^{\alpha}_{\nu} (z):=\int_{S}\abs{\langle z,\xi\rangle}^{\alpha}\nu(d\xi).$ For $\alpha=2$, we set $\mathscr{L}^{\alpha}_{\nu}:=-\frac{1}{2}\Delta$.\ On the torus and for $\alpha\in (1,2)$, we define the fractional Laplacian as follows: for $f\in C^{\infty}(\mathbb{T}^{d})$ $$\begin{aligned} \mathscr{L}^{\alpha}_{\nu}f=\mathscr{F}^{-1}_{\mathbb{T}^{d}}(\mathbb{Z}^{d}\ni k\mapsto\psi^{\alpha}_{\nu}(k)\hat{f}(k))\end{aligned}$$ and for $\alpha=2$ analogously.* **Remark 2**. *If we take $\nu$ as a suitable multiple of the Lebesgue measure on the sphere, then $\psi^\alpha_\nu(z) = |2\pi z|^\alpha$ and thus $\mathscr{L}^{\alpha}_{\nu}$ is the fractional Laplace operator $(-\Delta)^{\alpha/2}$.* **Lemma 3**. *[^3] Let $\alpha \in (0,2)$ and let again $\nu$ be a symmetric, finite and non-zero measure on the unit sphere $S\subset\mathbb{R}^{d}$. Then for $\varphi \in C^\infty_b$ we have $-\mathscr{L}^{\alpha}_{\nu}\varphi = A\varphi$, where $A$ is the generator of the symmetric, $\alpha$-stable Lévy process $L$ with characteristic exponent $\mathds{E}[\exp(2\pi i\langle z,L_{t}\rangle )]=\exp(-t\psi^{\alpha}_{\nu}(z))$. The process $L$ has the jump measure $\mu$ as defined in [\[eq:mu\]](#eq:mu){reference-type="ref" reference="eq:mu"}, with $\tilde \nu = C \nu$ for some constant $C>0$.* If $\alpha=2$, then the generator of the symmetric, $\alpha$-stable process coincides with $\sum_{i,j}C(i,j)\partial_{x_{i}}\partial_{x_{j}}$ for an explicit covariance matrix $C$ (cf. [@Sato1999 Theorem 14.2]), that is, the generator of $\sqrt{2C}B$ for a standard Brownian motion $B$. To ease notation, we consider here $C=\frac{1}{2}\operatorname{Id}_{d\times d}$, and whenever we refer to the case $\alpha=2$, we mean the standard Brownian motion noise case and set $\mathscr{L}^{\alpha}_{\nu}:=-\frac{1}{2}\Delta$. **Assumption 4**. *Throughout the work, we assume that the measure $\nu$ from [Definition 1](#def:fl){reference-type="ref" reference="def:fl"} has $d$-dimensional support, in the sense that the linear span of its support is $\mathbb{R}^d$. This means that the process $L$ can reach every open set in $\mathbb{R}^d$ with positive probability.* We also call an $\alpha$-stable, symmetric Lévy process that satisfies [Assumption 4](#ass){reference-type="ref" reference="ass"} non-degenerate.\ In the following, we will not distinguish between $F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^d$ and the periodic version on $\mathbb{R}^{d}$, $F^{\mathbb{R}^{d}}\in (\mathscr{C}^{\beta})^d$, whenever there is no danger of confusion. We understand [\[eq:sde\]](#eq:sde){reference-type="eqref" reference="eq:sde"} as a singular SDE with periodic coefficient $F^{\mathbb{R}^{d}}$ and in particular the existence of a solution to the martingale problem follows from [@kp]. For that, we need to assume that the drift $F$ can be enhanced in the following sense. For $\gamma\in (0,1)$, let $$\begin{aligned} \mathscr{M}_{\infty,0}^{\gamma}X=\{u:(0,\infty)\to X\mid \exists C>0, \forall t>0, \norm{u_{t}}_{X}\leqslant C [t^{-\gamma}\vee 1]\}.\end{aligned}$$ **Assumption 5**.
*For $\beta\in (\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]$ we assume that $(F_{1}=F,F_{2})\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$, that is $(F^{\mathbb{R}^{d}},F_{2}^{\mathbb{R}^{d}})\in\mathscr{X}^{\beta,\gamma}_{\infty}$, where $$\begin{aligned} \label{def:enhanced-dist-ph} \mathscr{X}^{\beta,\gamma}_{\infty}:=cl(\{\paren[\big]{\eta,(P_{t}(\partial_{i}\eta^{j})\varodot\eta^{k})_{i,j,k\in\{1,\dots,d\}}}\mid \eta\in C^{\infty}_{b}(\mathbb{R}^{d},\mathbb{R}^{d})\})\end{aligned}$$ for $\gamma\in [(2\beta+2\alpha-1)/\alpha,1)$ and for the closure in $\mathscr{C}^{\beta+(1-\gamma)\alpha}\times \mathscr{M}_{\infty,0}^{\gamma}\mathscr{C}^{2\beta+\alpha-1}$. For $\beta\in (\frac{1-\alpha}{2},0)$, we assume that $F\in\mathscr{C}^{\beta+(1-\gamma)\alpha}$ for $\gamma\in ((\beta-1)/\alpha,0)$ and set $\mathscr{X}^{\beta,\gamma}_{\infty}:=\mathscr{C}^{\beta+(1-\gamma)\alpha}$.* **Remark 6**. *The assumption on the enhanced distribution in [\[def:enhanced-dist-ph\]](#def:enhanced-dist-ph){reference-type="eqref" reference="def:enhanced-dist-ph"} is stronger than the assumption in [@kp-sk Definition 4.2] in the sense that $F$ is an enhanced distribution for any finite time horizon $T>0$, instead of for a fixed time horizon. This assumption will be needed in [\[sec:res-eq\]](#sec:res-eq){reference-type="ref" reference="sec:res-eq"} to solve the resolvent equation. Notice also that the blow-up $\gamma$ occurs at the initial time $t=0$ and not at a terminal time, and that $F$ does not depend on a time variable here. Furthermore, in the definition above we allow for three different indices $i,j,k$ in [\[def:enhanced-dist-ph\]](#def:enhanced-dist-ph){reference-type="eqref" reference="def:enhanced-dist-ph"}. This assumption is due to the fact that we also solve the adjoint equation, i.e. the Fokker-Planck equation. For the Fokker-Planck equation, we will encounter the products $P_{t}(\partial_{i}F^{i})\varodot F^{j}$ for $i,j=1,\dots,d$, whereas for the Kolmogorov equation, we have $P_{t}(\partial_{i}F^{j})\varodot F^{i}$ for $i,j=1,\dots,d$. To cover both products, we assume [\[def:enhanced-dist-ph\]](#def:enhanced-dist-ph){reference-type="eqref" reference="def:enhanced-dist-ph"}. The blow-up $\gamma$ can be thought of as close to $1$ and $t\mapsto P_{t}(\partial_{i}F^{j})\varodot F^{k}\in\mathscr{M}^{\gamma}_{\infty,0}\mathscr{C}^{2\beta+\alpha-1}$ in particular implies that for any $T>0$, $\int_{0}^{T}P_{t}(\partial_{i}F^{j})\varodot F^{k}dt\in\mathscr{C}^{2\beta+\alpha-1}$.* For completeness we state the definition of a solution to the singular martingale problem from [@kp Definition 4.1], cf. also [@Cannizzaro2018], and [@kp Theorem 4.2] about the existence and uniqueness of martingale solutions. **Definition 7** (Martingale problem). *Let $\alpha\in (1,2]$ and $\beta\in(\frac{2-2\alpha}{3},0)$, and let $T>0$ and $F^{\mathbb{R}^{d}}\in \mathscr{X}^{\beta,\gamma}_{\infty}$. Then, we call a probability measure $\mathds{P}$ on the Skorokhod space $(\Omega,\mathscr{F})$ a solution of the martingale problem for $(\mathscr{G}^{F},\delta_x)$, if* 1. *$\mathds{P}(X_{0}\equiv x)=1$ (i.e. $\mathds{P}^{X_{0}}=\delta_{x}$), and* 2.
*for all $f\in C_{T}\mathscr{C}^{\varepsilon}$ with $\varepsilon > 2-\alpha$ and for all $u^{T}\in\mathscr{C}^{3}$, the process $M=(M_{t})_{t\in [0,T]}$ is a martingale under $\mathds{P}$ with respect to $(\mathscr{F}_{t})$, where $$\begin{aligned} M_{t}=u(t,X_{t})-u(0,x)-\int_{0}^{t}f(s,X_{s})ds\end{aligned}$$ and where $u$ is a mild solution of the Kolmogorov backward equation $\mathscr{G}^{F}u=f$ with terminal condition $u(T,\cdot)=u^{T}$, where $\mathscr{G}^{F}:=\partial_{t}-\mathscr{L}^{\alpha}_{\nu}+F\cdot\nabla$.* **Remark 8**. *Although we consider a drift term $F$ that does not depend on a time variable, we consider the parabolic Kolmogorov PDE in the definition above. Equivalently one could reformulate the martingale problem with the resolvent equation for the operator $-\mathscr{L}^{\alpha}_{\nu}+F\cdot\nabla$ instead. We use the above definition to be able to apply the result from [@kp].* **Theorem 9**. *Let $\alpha\in (1,2]$ and $L$ be a symmetric, $\alpha$-stable Lévy process, such that the measure $\nu$ satisfies [Assumption 4](#ass){reference-type="ref" reference="ass"}. Let $T>0$ and $\beta\in ((2-2\alpha)/3,0)$ and let $F^{\mathbb{R}^{d}}\in\mathscr{X}^{\beta,\gamma}_{\infty}$. Then for all $x\in\mathbb{R}^{d}$, there exists a unique solution $\mathbb{Q}$ on $(\Omega,\mathscr{F})$ of the martingale problem for $(\mathscr{G}^{F},\delta_x)$. Under $\mathbb{Q}$ the canonical process is a strong Markov process.* In the following, we will also consider the projected process $(X^{\mathbb{T}^{d}}_{t})=(\iota(X_{t}))$ for the canonical projection $\iota:\mathbb{R}^{d}\to\mathbb{T}^{d}$, $x\mapsto [x]=x\mod\mathbb{Z}^{d}$, and the martingale solution $X$ from [Theorem 9](#thm:mainthm1){reference-type="ref" reference="thm:mainthm1"}. We define the generator $\mathfrak{L}$ of $X^{\mathbb{T}^{d}}$ by $$\begin{aligned} \mathfrak{L}f:=-\mathscr{L}^{\alpha}_{\nu}f+F\cdot\nabla f\end{aligned}$$ acting on functions $f:\mathbb{T}^{d}\to\mathbb{R}$.\ This work moreover yields a characterization of the domain $\text{dom}(\mathfrak{L})$ of the generator $\mathfrak{L}$, cf. [Theorem 26](#thm:generator){reference-type="ref" reference="thm:generator"}. We denote its semigroup by $(T_{t}^{\mathbb{T}^{d}})_{t\geqslant 0}$ with $T_{t}^{\mathbb{T}^{d}}f:=T_{t}f^{\mathbb{R}^{d}}$, $f\in L^{\infty}(\mathbb{T}^{d})$, where $(T_{t})_{t\geqslant 0}$ is the semigroup of the Markov process $(X_{t})$ on $\mathbb{R}^{d}$ with periodic drift $F^{\mathbb{R}^{d}}$.\ The semigroup $(P_{t}^{\mathbb{T}^{d}})$ of the generalized fractional Laplacian $(-\mathscr{L}^{\alpha}_{\nu})$ acting on functions on the torus is defined analogously as $P_{t}^{\mathbb{T}^{d}}f:=P_{t}f^{\mathbb{R}^{d}}$ and the semigroup estimates for $(P_{t})$ imply the estimates for $(P_{t}^{\mathbb{T}^{d}})$ on the periodic Besov spaces $\mathscr{C}^{\theta}(\mathbb{T}^{d})=\mathscr{C}^{\theta}_{\infty}(\mathbb{T}^{d})$ (due to $u\in L^{\infty}(\mathbb{T}^{d})$ implying $u^{\mathbb{R}^{d}}\in L^{\infty}(\mathbb{R}^{d})$ and vice versa). The following lemma states the semigroup estimates for $(P^{\mathbb{T}^{d}}_{t})$ on $\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})$, which will be employed in the sequel. The proof can be found in [\[Appendix A\]](#Appendix A){reference-type="ref" reference="Appendix A"}. [Lemma 10](#lem:periodic-semi-est){reference-type="ref" reference="lem:periodic-semi-est"} in particular yields the extension of $\mathscr{L}^{\alpha}_{\nu}$ to Besov spaces $\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})$. **Lemma 10**.
*Let $u\in\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})$ for $\beta\in\mathbb{R}$. Then the following estimates hold true $$\begin{aligned} \label{eq:p-la} \norm{\mathscr{L}^{\alpha}_{\nu}u}_{\mathscr{C}^{\beta-\alpha}_{2}(\mathbb{T}^{d})}\lesssim\norm{u}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}.\end{aligned}$$ Moreover, for any $\theta\geqslant 0$ and $\vartheta\in [0,\alpha]$, $$\begin{aligned} \label{eq:p-semi} \norm{P_{t}u}_{\mathscr{C}^{\beta+\theta}_{2}(\mathbb{T}^{d})}\lesssim (t^{-\theta/\alpha}\vee 1)\norm{u}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}, \quad \norm{(P_{t}-\operatorname{Id})u}_{\mathscr{C}^{\beta-\vartheta}_{2}(\mathbb{T}^{d})}\lesssim t^{\vartheta/\alpha}\norm{u}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}.\end{aligned}$$* For functions with vanishing zero-order Fourier mode, we can improve the Schauder estimates for large $t>0$. This is established in the following lemma, whose proof can be found in [\[Appendix A\]](#Appendix A){reference-type="ref" reference="Appendix A"}. **Lemma 11**. *Let $(P_{t})$ be the $(-\mathscr{L}^{\alpha}_{\nu})$-semigroup on the torus $\mathbb{T}^{d}$ as defined above. Then for $g\in\mathscr{C}^{\beta}_{2}$, $\beta\in\mathbb{R}$, with $\hat{g}(0)=\mathscr{F}_{\mathbb{T}^{d}}(g) (0)=0$, exponential Schauder estimates hold true. That is, for any $\theta\geqslant 0$, there exists $c>0$ such that $$\begin{aligned} \norm{P_{t}g}_{\mathscr{C}^{\beta+\theta}_{2}(\mathbb{T}^{d})}\lesssim t^{-\theta/\alpha}e^{-ct}\norm{g}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}.\end{aligned}$$* In the sequel, we will employ the following duality result for Besov spaces on the torus. For Besov spaces on $\mathbb{R}^{d}$, the result is proven in [@Bahouri2011 Proposition 2.76]. The same proof applies for Besov spaces on the torus (cf. also [@Schmeisser1987 Theorem in Section 3.5.6]). **Lemma 12**. *Let $\theta\in\mathbb{R}$ and $f,g \in C^{\infty}(\mathbb{T}^{d})$. Then we have the duality estimate: $$\begin{aligned} \abs{\langle f, g\rangle}\lesssim \norm{f}_{B^{\theta}_{2,2}(\mathbb{T}^{d})}\norm{g}_{B^{-\theta}_{2,2}(\mathbb{T}^{d})}.\end{aligned}$$ In particular, the mapping $(f,g)\mapsto \langle f,g\rangle$ can be extended uniquely to $f\in B^{\theta}_{2,2}(\mathbb{T}^{d})$, $g\in B^{-\theta}_{2,2}(\mathbb{T}^{d})$.* Let us define the periodic Bessel-potential space or fractional Sobolev space for $s\in\mathbb{R}$, $$\begin{aligned} H^{s}(\mathbb{T}^{d})=\biggl\{u\in \mathscr{S}'(\mathbb{T}^{d})\biggm| \norm{u}_{H^{s}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}(1+\abs{k}^{2})^{s}\abs{\hat{u}(k)}^{2}<\infty\biggr\},\end{aligned}$$ and the homogeneous periodic Bessel-potential space $$\begin{aligned} \dot{H}^{s}(\mathbb{T}^{d})=\biggl\{u\in \mathscr{S}'(\mathbb{T}^{d})\biggm| \norm{u}_{\dot{H}^{s}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}\abs{k}^{2s}\abs{\hat{u}(k)}^{2}<\infty\biggr\}.\end{aligned}$$ Motivated by the corresponding characterization of periodic Besov spaces from [@Schmeisser1987 Section 3.5.4], we define the homogeneous Besov space on the torus for $\theta\in (0,1)$ with notation $\Delta_{h}u (x):=u(x+h)-u(x)$, $h,x\in\mathbb{T}^{d}$, as follows: $$\begin{aligned} \label{eq:p-bs2} \dot{B}^{\theta}_{2,2}(\mathbb{T}^{d}):=\biggl\{u\in L^{2}(\mathbb{T}^{d})\biggm| \norm{u}_{\dot{B}^{\theta}_{2,2}(\mathbb{T}^{d})}^{2}:=\int_{\mathbb{T}^{d}}\abs{h}^{-2\theta}\norm{\Delta_{h}u}_{L^{2}(\mathbb{T}^{d})}^{2}\frac{dh}{\abs{h}^{d}}<\infty\biggr\}.\end{aligned}$$ For $\theta=1$, we set $\dot{B}^{1}_{2,2}(\mathbb{T}^{d}):=\dot{H}^{1}(\mathbb{T}^{d})$.
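As a quick sanity check of these definitions, consider the single Fourier mode $e_{k}(x):=e^{2\pi i k\cdot x}$ for a fixed $k\in\mathbb{Z}^{d}\setminus\{0\}$; this elementary computation is only meant as an illustration and is not needed in the sequel. Since $\hat{e}_{k}(m)=\mathbf{1}_{\{m=k\}}$, we directly obtain $$\begin{aligned} \norm{e_{k}}_{H^{s}(\mathbb{T}^{d})}^{2}=(1+\abs{k}^{2})^{s}\qquad\text{and}\qquad \norm{e_{k}}_{\dot{H}^{s}(\mathbb{T}^{d})}^{2}=\abs{k}^{2s}.\end{aligned}$$ Moreover, $\Delta_{h}e_{k}=(e^{2\pi i k\cdot h}-1)e_{k}$, so that, using $\abs{e^{2\pi i k\cdot h}-1}\leqslant 2\wedge(2\pi\abs{k}\abs{h})$ and splitting the integral in [\[eq:p-bs2\]](#eq:p-bs2){reference-type="eqref" reference="eq:p-bs2"} at $\abs{h}=\abs{k}^{-1}$, $$\begin{aligned} \norm{e_{k}}_{\dot{B}^{\theta}_{2,2}(\mathbb{T}^{d})}^{2}=\int_{\mathbb{T}^{d}}\abs{h}^{-2\theta-d}\abs{e^{2\pi i k\cdot h}-1}^{2}dh\lesssim_{\theta,d}\abs{k}^{2\theta},\qquad \theta\in (0,1),\end{aligned}$$ in accordance with the homogeneous Bessel-potential norm above.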
Using derivatives of $u$, one can define homogeneous periodic Besov spaces in that way also for $\theta\geqslant 1$ (cf. [@Schmeisser1987 Section 3.5.4]), but we will not need them below. We also refer to [@Schmeisser1987 (iv) of Theorem, Section 3.5.4] for an equivalent characterization of spaces $B^{\theta}_{2,2}(\mathbb{T}^{d})$ for $\theta\in (0,1)$ in terms of the differences $\Delta_{h}u$. Strategy to prove the main result To prove the CLT in [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"}, we distinguish between the cases $\alpha=2$ and $\alpha\in (1,2)$. In the following, we briefly summarize our strategy to prove the convergences [\[eq:mainr1\]](#eq:mainr1){reference-type="eqref" reference="eq:mainr1"} (Brownian case, cf. [@Bensoussan1978 Chapter 3, Section 4.2] in the case of $C^{2}_{b}$-drift) and [\[eq:mainr2\]](#eq:mainr2){reference-type="eqref" reference="eq:mainr2"} (pure Lévy noise case, cf. [@Franke2007] in the case of $C^{3}_{b}$-drift).\ First we prove existence of a unique invariant probability measure $\pi$ for $X^{\mathbb{T}^{d}}$. To that aim, we solve in [\[sec:s-FP\]](#sec:s-FP){reference-type="ref" reference="sec:s-FP"} the singular Fokker-Planck equation with the paracontrolled approach in $\mathscr{C}^{\alpha+\beta-1}_{1}$, yielding a continuous (as $\alpha+\beta-1>0$) Lebesgue density. Furthermore, we prove a strict maximum principle on compacts for the Fokker-Planck equation. In [\[sec:inv-m\]](#sec:inv-m){reference-type="ref" reference="sec:inv-m"} an application of Doeblin's theorem then yields existence and uniqueness of the invariant ergodic probability measure $\pi$ for $\mathfrak{L}$ with a strictly positive Lebesgue density $\rho_{\infty}$. Doeblin's theorem furthermore yields pointwise spectral gap estimates on the semigroup $(T_{t}^{\mathbb{T}^{d}})_{t\geqslant 0}$ associated to $\mathfrak{L}$, i.e. the process $X^{\mathbb{T}^{d}}$ is exponentially ergodic. We then extend those pointwise spectral gap estimates to $L^{2}(\pi)$-spectral gap estimates. This enables us to solve the Poisson equation in [Corollary 28](#cor:good-P-eq){reference-type="ref" reference="cor:good-P-eq"} for right-hand sides that are elements of $L^{2}(\pi)$ and that have vanishing mean under $\pi$. In particular, we can solve the Poisson equation with right-hand side $F^{m}-\langle F^{m}\rangle_{\pi}$ for $F^{m}\in C^{\infty}(\mathbb{T}^{d})$ for each fixed $m\in\mathbb{N}$, where $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$, and we denote the solution by $\chi^{m}$. We then prove convergence of $(\chi^{m})_{m}$ in $L^{2}(\pi)$ utilizing a Poincaré-type estimate for the operator $\mathfrak{L}$ and combining it with the theory from [@klo]. Via solving the resolvent equation $(\lambda-\mathfrak{L})g=G$ in [\[sec:res-eq\]](#sec:res-eq){reference-type="ref" reference="sec:res-eq"} with the paracontrolled approach for right-hand sides $G\in L^{2}(\pi)$ or $G=F^{i}$, $i=1,\dots,d$, we then obtain in [\[sec:s-P-eq\]](#sec:s-P-eq){reference-type="ref" reference="sec:s-P-eq"} convergence of $(\chi^{m})_{m}$ in $(\mathscr{C}^{\alpha+\beta}_{2}(\mathbb{T}^{d}))^{d}$ to a limit $\chi$ which indeed solves the Poisson equation $(-\mathfrak{L})\chi=F-\langle F\rangle_{\pi}$ with singular right-hand side $F-\langle F\rangle_{\pi}$. Here the mean $\langle F\rangle_{\pi}$ can be defined in a stable manner using the regularity, respectively the paracontrolled structure, of the density $\rho_{\infty}$, cf.
[Lemma 29](#thm:def-F-pi-int){reference-type="ref" reference="thm:def-F-pi-int"}. Decomposing the drift in terms of the solution to the Poisson equation and Dynkin's martingale, we can finally prove the functional CLT in [\[sec:CLT\]](#sec:CLT){reference-type="ref" reference="sec:CLT"}.\ Via the Feynman-Kac formula, the CLT yields the periodic homogenization result of [Corollary 36](#cor:PDEhomog){reference-type="ref" reference="cor:PDEhomog"} for the solution to the associated Cauchy problem with operator $\mathfrak{L}^{\varepsilon}$ as $\varepsilon\to 0$, where formally $\mathfrak{L}^{\varepsilon}f=-\mathscr{L}^{\alpha}_{\nu}f+\varepsilon^{1-\alpha} F(\varepsilon^{-1} \cdot)\cdot\nabla f$. Singular Fokker-Planck equation and a strict maximum principle[\[sec:s-FP\]]{#sec:s-FP label="sec:s-FP"} This section features the results on the Fokker-Planck equation, [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"} and [Proposition 17](#prop:max-p){reference-type="ref" reference="prop:max-p"}, which will be of use in [\[sec:inv-m\]](#sec:inv-m){reference-type="ref" reference="sec:inv-m"} below.\ Let us define the blow-up spaces for $\gamma\in (0,1)$, $$\begin{aligned} \mathscr{M}_{T,0}^{\gamma}X:=\bigl\{u:(0,T]\to X\bigm| \sup_{t\in[0,T]}t^{\gamma}\norm{u_{t}}_{X}<\infty\bigr\}\end{aligned}$$ and $$\begin{aligned} C^{1,\gamma}_{T,0}X:=\biggl\{u:(0,T]\to X\biggm| \sup_{0\leqslant s<t\leqslant T}\frac{s^{\gamma}\norm{u_{t}-u_{s}}_{X}}{\abs{t-s}}<\infty\biggr\}\end{aligned}$$ with blow-up at $t=0$.\ The solution to the Fokker-Planck equation with initial condition equal to a Dirac measure will have a blow-up at time $t=0$ due to the singularity of the initial condition. A direct computation shows that the Dirac measure in $x\in\mathbb{R}^{d}$ satisfies $\delta_{x}\in\mathscr{C}^{-d(1-\frac{1}{p})}_{p}$ for any $p\in[1,\infty]$, in particular $\delta_{x}\in\mathscr{C}^{0}_{1}$. Moreover, one can show that the map $x\mapsto\delta_{x}\in\mathscr{C}^{-\varepsilon}_{1}$ is continuous for any $\varepsilon>0$. The next theorem proves existence of a mild solution to the Fokker-Planck equation $$\begin{aligned} (\partial_{t}-\mathfrak{L}^{\ast})\rho_{t}=0, \quad \rho_{0}=\mu,\end{aligned}$$ with initial condition $\mu\in\mathscr{C}^{-\varepsilon}_{1}$ for small $\varepsilon>0$. Here, $\mathfrak{L}^{*}$ denotes the formal Lebesgue-adjoint to $\mathfrak{L}$, $$\begin{aligned} \mathfrak{L}^{\ast}f:=-\mathscr{L}^{\alpha}_{\nu}f-\nabla\cdot (Ff)=-\mathscr{L}^{\alpha}_{\nu}f-\operatorname{div} (Ff).\end{aligned}$$ The proof of [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"} is similar to [@kp-sk Theorem 4.7]. **Theorem 13**. *Let $T>0$, $\alpha\in (1,2]$ and $p\in[1,\infty]$. Let either $\beta\in(\frac{1-\alpha}{2},0)$ and $F\in \mathscr{C}^{\beta}_{\mathbb{R}^{d}}$ or $F\in\mathscr{X}^{\beta,\gamma'}_{\infty}$ for $\beta\in (\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]$, $\gamma'\in (\frac{2\beta+2\alpha-1}{\alpha},1)$.\ Then, for any small enough $\varepsilon>0$ and any initial condition $\mu\in\mathscr{C}^{-\varepsilon}_{p}$, there exists a unique mild solution $\rho$ to the Fokker-Planck equation in $\mathscr{M}_{T,0}^{\gamma}\mathscr{C}^{\alpha+\beta-1}_{p}\cap C_{T}^{1-\gamma}\mathscr{C}^{\beta}_p\cap C_{T,0}^{1,\gamma}\mathscr{C}^{\beta}_p$ for $\gamma\in (C(\varepsilon),1)$ (for some $C(\varepsilon)\in (0,1)$) in the Young regime and $\gamma\in (\gamma',\frac{\alpha\gamma'}{2-\alpha-3\beta})$ in the rough regime, i.e.
$$\begin{aligned} \label{eq:fp-mild-form} \rho_{t}=P_{t}\mu+\int_{0}^{t}P_{t-s}(-\nabla\cdot (F\rho_{s}))ds,\end{aligned}$$ where $(P_{t})_{t\geqslant 0}$ denotes the $(-\mathscr{L}^{\alpha}_{\nu})$-semigroup.\ In the rough case, the solution satisfies $$\begin{aligned} \rho_{t}=\rho_{t}^{\sharp}+\rho_{t}\varolessthan I_{t}(-\nabla \cdot F)\end{aligned}$$ where $\rho_{t}^{\sharp}\in\mathscr{M}_{T,0}^{\gamma}\mathscr{C}^{2(\alpha+\beta)-2}_p\cap C_{T}^{1-\gamma}\mathscr{C}^{2\beta-2+\alpha}_p\cap C_{T,0}^{1,\gamma}\mathscr{C}^{2\beta-2+\alpha}_p$ and $I_{t}(v):=\int_{0}^{t}P_{t-s}v_{s}ds$.\ Moreover, the solution depends continuously on the data $(F,\mu)\in\mathscr{X}^{\beta,\gamma'}_{\infty}\times\mathscr{C}^{-\varepsilon}_{p}$. Furthermore, for any fixed $t>0$, the solution satisfies $(\rho_{t},\rho_{t}^{\sharp})\in\mathscr{C}^{\alpha+\beta-1}\times\mathscr{C}^{2(\alpha+\beta)-2}$.\ If $(F,\mu)$ are $1$-periodic distributions, then the solution $\rho_{t}$ is $1$-periodic.* *Proof.* We will prove that we can solve the Fokker-Planck equation for initial conditions $\mu\in \mathscr{C}^{-\varepsilon}_{p}$ for $\varepsilon=-((1-\tilde{\gamma})\alpha+\beta)$ for $\tilde{\gamma}\in [\frac{\alpha+\beta}{\alpha},1)$ in the Young regime and for $\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)$ for $\tilde{\gamma}\in [\frac{2\beta+2\alpha-1}{\alpha}\vee 0,\gamma']$ in the rough regime. In the Young regime, we obtain a solution $\rho\in \mathscr{M}_{T,0}^{\gamma}\mathscr{C}^{\alpha+\beta-1}_p\cap C_{T}^{1-\gamma}\mathscr{C}^{\beta}_p\cap C_{T,0}^{1,\gamma}\mathscr{C}^{\beta}_p$ for $\gamma=\tilde{\gamma}$ and the proof is analogous to [@kp-sk Theorem 4.1]. We thus only give the proof in the rough regime.\ To that aim, let us define, analogously to the proof of [@kp-sk Theorem 4.7] and for $\gamma\in (\gamma',1)$ as there, $$\begin{aligned} \mathscr{L}^{\gamma,\theta}_{T,p}:=\mathscr{M}_{T,0}^{\gamma}\mathscr{C}^{\theta}_{p}\cap C_{T}^{1-\gamma}\mathscr{C}^{\theta-\alpha}_{p}\cap C_{T,0}^{1,\gamma}\mathscr{C}^{\theta-\alpha}_{p}\end{aligned}$$ and the paracontrolled solution space $$\begin{aligned} \mbox{$\medmuskip=1mu\displaystyle\mathscr{D}^{\gamma}_{T,p}:=\{(u,u^{\prime})\in\mathscr{L}^{\gamma',\alpha+\beta-1}_{T,p}\times (\mathscr{L}^{\gamma,\alpha+\beta-1}_{T,p})^{d}\mid u^{\sharp}_{t}=u_{t}-u_{t}^{\prime}\varolessthan I_{t}(-\nabla\cdot F)\in\mathscr{L}^{\gamma,2(\alpha+\beta)-2}_{T,p}\}$}\end{aligned}$$ for $p\in [1,\infty]$, equipped with the norm $$\begin{aligned} \norm{u-w}_{\mathscr{D}^{\gamma}_{T,p}}:=\norm{u-w}_{\mathscr{L}_{T,p}^{\gamma',\alpha+\beta-1}}+\norm{u^{\prime}-w^{\prime}}_{(\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1})^{d}}+\norm{u^{\sharp}-w^{\sharp}}_{\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}},\end{aligned}$$ which makes the space a Banach space.\ For $\mu\in \mathscr{C}_{p}^{-\varepsilon}$, $\varepsilon= -((2-\tilde{\gamma})\alpha+2\beta-2)$, we first prove that we obtain a paracontrolled solution $\rho\in \mathscr{D}^{\gamma}_{T,p}$. As the proof is similar to [@kp-sk Theorem 4.7], we only give the essential arguments of the proof.
Notice that compared to [@kp-sk Theorem 4.7], here we consider the operator $\mathfrak{L}^{*}$ instead of $\mathfrak{L}$ and initial conditions in $\mathscr{C}^{-\varepsilon}_{p}$ for $\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)$, hence $\rho_{0}=\rho_{0}^{\sharp}$.\ For $\rho\in \mathscr{D}^{\gamma}_{T,p}$ the resonant product $F\varodot\rho =(F^{i}\varodot\rho)_{i=1,\dots,d}$ is well-defined and satisfies $$\begin{aligned} F^{i}\varodot\rho=F^{i}\varodot\rho^{\sharp}+\rho^{\prime}\cdot(F^{i}\varodot I_{t}(\nabla\cdot F))+ C_{1}(\rho^{\prime},I_{t}(\nabla\cdot F),F^{i})\end{aligned}$$ for the paraproduct commutator $$\begin{aligned} C_{1}(f,g,h):=(f\varolessthan g)\varodot h-f\cdot(g\varodot h).\end{aligned}$$ Using the paraproduct estimates, we obtain Lipschitz dependence of the product on $(F,\rho)\in\mathscr{X}^{\beta,\gamma'}_{\infty}\times\mathscr{D}^{\gamma}_{T,p}$, that is, $$\begin{aligned} \hspace{1em}&\hspace{-1em} \norm{F\varodot\rho}_{\mathscr{M}_{T}^{\gamma'}\mathscr{C}^{\alpha+2\beta-1}_{p}} \\&\lesssim \norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}})\paren[\big]{\norm{\rho}_{\mathscr{M}_{T}^{\gamma'}\mathscr{C}^{\alpha+\beta-1}_{p}}+\norm{\rho^{\prime}}_{(\mathscr{M}_{T}^{\gamma'}\mathscr{C}^{\alpha+\beta-1-\delta}_{p})^{d}}+\norm{\rho^{\sharp}}_{\mathscr{M}_{T}^{\gamma'}\mathscr{C}^{2(\alpha+\beta)-2-\delta}_{p}}}\\ &\lesssim\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}})\paren[\big]{\norm{\rho}_{\mathscr{M}_{T}^{\gamma'}\mathscr{C}^{\alpha+\beta-1}_{p}}+\norm{\rho^{\prime}}_{(\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1})^{d}}+\norm{\rho^{\sharp}}_{\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}}}\\&\lesssim\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}})\norm{\rho}_{\mathscr{D}_{T,p}^{\gamma}}\end{aligned}$$ for $\delta=\alpha-\alpha\frac{\gamma'}{\gamma}$, using moreover the interpolation estimate from [@kp-sk Lemma 3.7, (3.13)].\ The contraction map will be defined as $$\begin{aligned} \mathscr{D}^{\gamma}_{\overline{T},p}\ni (\rho,\rho^{\prime})\mapsto (\phi(\rho),\rho)\in \mathscr{D}^{\gamma}_{\overline{T},p}\end{aligned}$$ with $$\begin{aligned} \phi(\rho)_{t}:=P_{t}\mu+I_{t}(-\nabla\cdot (F\rho)).\end{aligned}$$ Here, $\overline{T}$ will be chosen small enough such that the above map becomes a contraction. Afterwards the solutions on the subintervals of length $\overline{T}$ are patched together.
Notice that the fixed point satisfies $\rho^{\prime}=\rho$.\ As $\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)$, we obtain by the semigroup estimates from [@kp-sk Lemma 2.5] that $$\begin{aligned} \label{eq:T-est} \norm{ P_{t}\mu}_{\mathscr{C}^{2(\alpha+\beta)-2}_{p}}\lesssim t^{-\tilde{\gamma}}\norm{\mu}_{\mathscr{C}^{-\varepsilon}_{p}}.\end{aligned}$$ Utilizing the Schauder estimates [@kp-sk Corollary 3.2] (which apply by a time change also for blow-up-spaces with blow-up at $t=0$ instead of blow-ups at $t=T$) and the estimate for the resonant product yields $$\begin{aligned} \norm{I(\nabla\cdot (F\rho))}_{\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1}}&\lesssim T^{\gamma-\gamma'}\norm{\nabla\cdot (F\rho)}_{\mathscr{M}_{T,0}^{\gamma'}\mathscr{C}^{\beta-1}_{p}}\\&\lesssim T^{\gamma-\gamma'}\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}})\norm{\rho}_{\mathscr{D}^{\gamma}_{T,p}}.\end{aligned}$$ Moreover, we have that for a solution $\rho$, $$\begin{aligned} \rho_{t}^{\sharp}=P_{t}\mu + C_{2}(\rho,\nabla\cdot F)_{t}+I_{t}(-\nabla\cdot (\rho\varodot F))+I_{t}(-\nabla\cdot(\rho\varogreaterthan F))+I_{t}(-\nabla\rho\varolessthan F)\end{aligned}$$ for the semigroup commutator $$\begin{aligned} C_{2}(u,v)=I(u\varolessthan v)- u\varolessthan I(v).\end{aligned}$$ Using [\[eq:T-est\]](#eq:T-est){reference-type="eqref" reference="eq:T-est"} and [@kp-sk Corollary 3.2], we obtain $$\begin{aligned} \norm{\rho^{\sharp}}_{\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}}\lesssim \norm{\mu}_{\mathscr{C}^{-\varepsilon}_{p}}+T^{\gamma-\gamma'}\norm{F}_{\mathscr{X}^{\beta,\gamma'}_{\infty}}\norm{\rho}_{\mathscr{L}^{\gamma',\alpha+\beta-1}_{T,p}}.\end{aligned}$$ Hence, as $\gamma>\gamma'$, replacing $T$ by $\overline{T}\leqslant T$ small enough, we obtain a paracontrolled solution in $\mathscr{D}^{\gamma}_{\overline{T},p}$. Then, we paste the solutions on the subintervals together to obtain a solution on $[0,T]$, cf. the proof of [@kp-sk Theorem 4.7].\ It remains to justify that the solution at fixed times $t>0$ satisfies $(\rho_{t},\rho_{t}^{\sharp})\in\mathscr{C}^{\alpha+\beta-1}\times\mathscr{C}^{2(\alpha+\beta-1)}$, i.e. that we can increase the integrability from $p$ to $\infty$. From the above, we obtain $(\rho,\rho^{\sharp})\in C ([t,T],\mathscr{C}^{\alpha+\beta-1}_p)\times C([t,T],\mathscr{C}^{2(\alpha+\beta-1)}_p)$. Then, we can apply the argument to increase the integrability, which was carried out at the end of the proof of [@PvZ Proposition 2.4], to obtain that indeed $(\rho,\rho^{\sharp})\in C ([t,T],\mathscr{C}^{\alpha+\beta-1})\times C([t,T],\mathscr{C}^{2(\alpha+\beta-1)})$ for any $t\in (0,T)$.\ The continuous dependence of the solution on the data $(F,\mu)$ follows analogously to [@kp-sk Theorem 4.12], with the above estimates and a Gronwall-type argument.\ If $(F,\mu)$ are $1$-periodic distributions, then $P_{t}\mu=p_{t}\ast\mu$ is $1$-periodic, as the convolution of the fractional heat kernel $p_{t}$ with a periodic distribution yields a periodic function, and the fixed point argument can be carried out in the periodic solution space $\mathscr{D}_{T,p}^{\gamma}(\mathbb{T}^{d})$. ◻ **Corollary 14**. *Let $X$ be the unique martingale solution of the singular periodic SDE [\[eq:sde\]](#eq:sde){reference-type="eqref" reference="eq:sde"} for $\mathfrak{L}$ (acting on functions $f:\mathbb{R}^{d}\to\mathbb{R}$), starting at $x\in\mathbb{R}^{d}$.
Let $(t,y)\mapsto\rho_{t}(x,y)$ be the mild solution of the Fokker-Planck equation with $\rho_{0}=\delta_{x}$ from [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"}. Then for any $t>0$, the map $(x,y)\mapsto \rho_{t}(x,y)$ is continuous.\ Furthermore, for any $f\in L^{\infty}(\mathbb{R}^{d})$, $$\begin{aligned} \label{eq:d} \mathds{E}_{X_{0}=x}[f(X_{t})]=\int_{\mathbb{R}^{d}} f(y)\rho_{t}(x,y)dy,\end{aligned}$$ that is, $\rho_{t}(x,\cdot)$ is the density of $\text{Law}(X_{t})$, if $X_{0}=x$, with respect to the Lebesgue measure. In particular, for the projected solution $X^{\mathbb{T}^{d}}$ with drift $F\in\mathscr{X}^{\beta,\gamma'}_{\infty}(\mathbb{T}^{d})$ and $f\in L^{\infty}(\mathbb{T}^{d})$ and $z\in\mathbb{T}^{d}$, $$\begin{aligned} \label{eq:tilde-d} \mathds{E}_{X_{0}^{\mathbb{T}^{d}}=z}[f(X_{t}^{\mathbb{T}^{d}})]=\int_{\mathbb{T}^{d}} f(w)\rho_{t}(z,w)dw,\end{aligned}$$ where, by abusing notation to not introduce a new symbol for the density on the torus, $\rho_{t}(z,w):=\rho_{t}(x,y)$ for $x,y\in\mathbb{R}^{d}$ with $(\iota(x),\iota(y))=(z,w)$, $\iota:\mathbb{R}^{d}\to\mathbb{T}^{d}$ denoting the canonical projection.* **Remark 15**. *Let $\rho(x,\cdot)$ be the solution of the Fokker-Planck equation started in $\delta_{x}$ from [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"} and let $u^{y}$ solve the Kolmogorov backward equation with terminal condition $u^{y}_{T}=\delta_{y}$, whose existence follows from [@kp-sk Theorem 4.7]. Then due to [\[eq:d\]](#eq:d){reference-type="eqref" reference="eq:d"} and the Feynman-Kac formula (approximating $F$ and utilizing the continuity of the solution maps) we see the equality $\rho_{t}(x,y)=u^{y}_{T-t}(x)$.* **Remark 16**. *If $F\in\mathscr{X}^{\beta,\gamma'}_{\infty}(\mathbb{T}^{d})$, then by definition of $(P_{t}^{\mathbb{T}^{d}})$, $\rho(z,\cdot)$ is the mild solution of the Fokker-Planck equation on the torus (that is, $(P_{t})$ replaced by $(P_{t}^{\mathbb{T}^{d}})$ in [\[eq:fp-mild-form\]](#eq:fp-mild-form){reference-type="eqref" reference="eq:fp-mild-form"}) with $\rho_{0}(z,\cdot)=\delta_{z}$.* *Proof.* Continuity in $y$ follows from $\rho_{t}(x,\cdot)\in \mathscr{C}^{\alpha+\beta-1}$ and $\alpha+\beta-1>0$. Continuity in $x$ follows from the continuous dependence of the solution on the initial condition $\delta_{x}$ and continuity of the map $x\mapsto\delta_{x}\in\mathscr{C}^{-\varepsilon}_{1}$ for $\varepsilon>0$.\ That $\rho_{t}$ is the density of $\text{Law}(X_{t})$ follows by approximation of $F$ by $F^{m}\in C^{\infty}_{b}(\mathbb{R}^{d})$ with $F^{m}\to F$ in $\mathscr{C}^{\beta}_{\mathbb{R}^{d}}$, respectively in $\mathscr{X}^{\beta,\gamma'}_{\infty}$, using that $\rho$ depends continuously on the data $(F,\mu)$ and that $X^{m}\to X$ in distribution, where $X^{m}$ is the strong solution to the SDE with drift term $F^{m}$ (cf. the proof of [Theorem 9](#thm:mainthm1){reference-type="ref" reference="thm:mainthm1"}) and the Feynman-Kac formula for classical SDEs. Indeed, for $m\in\mathbb{N}$, we have that for $f\in C^{2}_{b}$ (and thus for $f\in L^{\infty}$ by approximation), $$\begin{aligned} u^{m}_{T-t}(x)=\mathds{E}_{X_{0}^{m}=x}[f(X_{t}^{m})]=\int f(y)\rho_{t}^{m}(x,y)dy\end{aligned}$$ with $(\partial_{t}+\mathfrak{L}^{m})u^{m}=\mathscr{G}^{F^{m}}u^{m}=0$, $u^{m}_{T}=f$, and $(\partial_{t}-(\mathfrak{L}^{m})^{*})\rho^{m}=0$, $\rho^{m}_{0}=\delta_{x}$. Now, we let $m\to\infty$ to obtain [\[eq:d\]](#eq:d){reference-type="eqref" reference="eq:d"}.
In particular, $\rho_{t}\geqslant 0$ and $\rho_{t}\in L^{1}(dx)$. That $\rho_{t}(z,w)$ is well-defined follows since $\rho_{t}$ is periodic (due to the periodicity assumption on $F$). Equality [\[eq:tilde-d\]](#eq:tilde-d){reference-type="eqref" reference="eq:tilde-d"} follows from [\[eq:d\]](#eq:d){reference-type="eqref" reference="eq:d"} considering $f\circ \iota$ instead of $f$. ◻ **Proposition 17**. *Let $\mu\in\mathscr{C}^{0}_{1}$ be a positive, nontrivial ($\mu\neq 0$) measure. Let $\rho$ be the mild solution of the Fokker-Planck equation $(\partial_{t}-\mathfrak{L}^{\ast})\rho_{t}=0$ with $\rho_{0}=\mu$. Then for any compact $K\subset\mathbb{R}^{d}$ and any $t>0$, there exists $c>0$ such that $$\begin{aligned} \min_{x\in K}\rho_{t}(x)\geqslant c>0.\end{aligned}$$ Let $\rho_{t}$ be as in [Remark 16](#rmk:FP-torus){reference-type="ref" reference="rmk:FP-torus"}. Then, in particular, for any $z\in\mathbb{T}^{d}$, $t>0$, there exists $c>0$ such that $$\begin{aligned} \min_{x\in\mathbb{T}^{d}}\rho_{t}(z,x)\geqslant c>0.\end{aligned}$$* *Proof.* In the Brownian case, $\alpha=2$, this follows from the proof of [@cfg Theorem 5.1]. We give the adjusted argument for $\alpha\in (1,2]$.\ Let $p_{t}$ be the $\alpha$-stable density of $L_{t}$. Without loss of generality, we assume $\mu=u\in C_b(\mathbb{R}^{d})$ with $u\geqslant 0$ and with $u\geqslant 1$ on a ball $B(0,\kappa)$, $\kappa>0$. Otherwise, we may consider $\rho_{s}$ for $s>0$ as an initial condition, for which we know that $\rho_{s}\in\mathscr{C}^{\alpha+\beta-1}\subset C_b(\mathbb{R}^{d})$ and that $\rho_{s}\geqslant 0$ by [Corollary 14](#cor:density){reference-type="ref" reference="cor:density"}. Then by continuity there exists a ball $B(x,\kappa)$ where $\rho_{s}>0$. Dividing by the lower bound and shifting $\rho_{s}$, we can assume that $\rho_{s}>1$ on $B(0,\kappa)$.\ Let now $\kappa>0$ and $u\in C_{b}(\mathbb{R}^{d})$ with $u\geqslant 0$ and with $u\geqslant 1$ on the ball $B(0,\kappa)$. Then by the scaling property, we have that $$\begin{aligned} p_{t}\ast u(y)\geqslant \mathds{P}(\abs{y+t^{1/\alpha}L_{1}}\leqslant\kappa)=\mathds{P}(L_{1}\in B(yt^{-1/\alpha},\kappa t^{-1/\alpha})).\end{aligned}$$ Let $y=(\kappa+tr)z$ for $z\in B(0,1)$, $r\geqslant 0$, so that $y\in B(0,\kappa+tr)$. Then we obtain $$\begin{aligned} \MoveEqLeft \mathds{P}(L_{1}\in B(yt^{-1/\alpha},\kappa t^{-1/\alpha}))\\&=\mathds{P}(L_{1}\in B(z(\kappa t^{-1/\alpha}+r t^{1-1/\alpha}),\kappa t^{-1/\alpha})) \\&\geqslant\mathds{P}(2 z\cdot L_{1}\geqslant \abs{L_{1}}^{2} (\kappa t^{-1/\alpha}+r t^{1-1/\alpha})^{-1} +(\abs{z}^{2}-1)[\kappa t^{-1/\alpha} +r t^{1-1/\alpha}]) \\&\geqslant\inf_{\abs{z}\leqslant 1} \mathds{P}(2 z\cdot L_{1}\geqslant \abs{L_{1}}^{2} (\kappa t^{-1/\alpha}+r t^{1-1/\alpha})^{-1} + (\abs{z}^{2}-1)[\kappa t^{-1/\alpha}+r t^{1-1/\alpha}]) \allowdisplaybreaks \\&=\inf_{\abs{z}= 1} \mathds{P}(2 z\cdot L_{1}\geqslant \abs{L_{1}}^{2} (\kappa t^{-1/\alpha}+r t^{1-1/\alpha})^{-1} ) \\&\to \inf_{\abs{z}= 1}\mathds{P}( z\cdot L_{1}\geqslant 0) =\frac{1}{2}\end{aligned}$$ for $t\to 0$.
Here we used that $\alpha>1$ and that by symmetry of $L$, for any $z\in\mathbb{R}^{d}$ with $\abs{z}=1$, $\mathds{P}(z\cdot L_{1}\geqslant 0)=\mathds{P}(z\cdot L_{1}\leqslant 0)=1-\mathds{P}(z\cdot L_{1}\geqslant 0)$, because $\mathds{P}(z\cdot L_{1}=0)=0$, as $L_{1}$ possesses a Lebesgue density by the non-degeneracy assumption [Assumption 4](#ass){reference-type="ref" reference="ass"}.\ Thus, we conclude that there exists $t_{r}>0$ such that for all $t\in [0,t_{r}]$ and all $y\in B(0,\kappa+tr)$, $p_{t}\ast u(y)\geqslant \frac{1}{4}$.\ Moreover, we have $$\begin{aligned} \rho_{t}=P_{t}u+\int_{0}^{t}P_{t-s}(-\nabla\cdot (F\rho_{s}))ds\end{aligned}$$ with $P_{t}u=p_{t}\ast u$ and $$\begin{aligned} \norm[\bigg]{\int_{0}^{t}P_{t-s}(-\nabla\cdot (F\rho_{s}))ds}_{L^{\infty}}\leqslant Ct^{(\alpha+\beta-1-\varepsilon)/\alpha}\end{aligned}$$ for $\varepsilon\in (0,\alpha+\beta-1)$ by the semigroup estimates, [@kp-sk Lemma 2.5], with $\alpha+\beta-1>0$. Hence, for small enough $t$, we can achieve $$\begin{aligned} \norm[\bigg]{\int_{0}^{t}P_{t-s}(-\nabla\cdot (F\rho_{s}))ds}_{L^{\infty}}< \frac{1}{8}.\end{aligned}$$ Together with the lower bound for $p_{t}\ast u$, we obtain that there exists $t_{r}>0$ such that for all $t\in [0,t_{r}]$ and all $y\in B(0,\kappa+tr)$, it holds that $$\begin{aligned} \rho_{t}(y)\geqslant \frac{1}{8}.\end{aligned}$$ Using linearity of the equation, we can repeat that argument on $[t_{r},2t_{r}]$ etc. Because $K$ is compact, finitely many steps suffice (for large enough $t$, the ball $B(0,\kappa+tr)$ will cover $K$) to conclude that for all $T>0$ there exists $c>0$ such that for all $y\in K$ and all $t\in [0,T]$, $$\rho_{t}(y)\geqslant c>0. \qedhere$$ ◻ Singular resolvent equation[\[sec:res-eq\]]{#sec:res-eq label="sec:res-eq"} In this and all subsequent sections of this paper, we write $(P_{t})$, respectively $(T_{t})$, for the semigroups acting on the periodic Besov spaces $\mathscr{C}^{\theta}_{p}(\mathbb{T}^{d})$, $p=2, \infty$, omitting the superscript $\mathbb{T}^{d}$ that we introduced earlier.\ We solve the resolvent equation in [Theorem 19](#thm:res-eq){reference-type="ref" reference="thm:res-eq"} for the singular operator $\mathfrak{L}$ and for singular paracontrolled right-hand sides $G=G^{\sharp}+G^{\prime}\varolessthan F$, $G^{\sharp}\in\mathscr{C}^{0}_{2}(\mathbb{T}^{d})$, $G^{\prime}\in(\mathscr{C}^{\alpha+\beta-1}_{2}(\mathbb{T}^{d}))^d$, that is $$\begin{aligned} (\lambda-\mathfrak{L})g=G,\end{aligned}$$ obtaining a solution $g\in\mathscr{C}^{\alpha+\beta}_{2}(\mathbb{T}^{d})$.\ The next lemma proves semigroup and commutator estimates for the $I_{\lambda}$-operator. **Lemma 18**. *Let $\lambda\geqslant 1$, $\delta\in\mathbb{R}$ and $v\in\mathscr{C}^{\delta}_{2}$. Let again $I_{\lambda}(v):=\int_{0}^{\infty}e^{-\lambda t}P_{t}v dt$.
Then, $I_{\lambda}(v)$ is well-defined in $\mathscr{C}^{\delta+\vartheta}_{2}(\mathbb{T}^{d})$ for $\vartheta\in [0,\alpha]$ and the following estimate holds true $$\begin{aligned} \label{eq:a} \norm{I_{\lambda}(v)}_{\mathscr{C}^{\delta+\vartheta}_{2}(\mathbb{T}^{d})}\lesssim \lambda^{-(1-\vartheta/\alpha)}\norm{v}_{\mathscr{C}^{\delta}_{2}(\mathbb{T}^{d})}.\end{aligned}$$ Furthermore, for $v\in\mathscr{C}^{\sigma}_{2}(\mathbb{T}^{d})$, $\sigma<1$, $u\in\mathscr{C}^{\beta}(\mathbb{T}^{d})$, $\beta\in\mathbb{R}$, and $\vartheta\in [0,\alpha]$, the following commutator estimate holds true: $$\begin{aligned} \label{eq:b} \norm{C_{\lambda}(v,u)}_{\mathscr{C}^{\sigma+\beta+\vartheta}_{2}(\mathbb{T}^{d})}&:=\norm{I_{\lambda}(v\varolessthan u)-v\varolessthan I_{\lambda}(u)}_{\mathscr{C}^{\sigma+\beta+\vartheta}_{2}(\mathbb{T}^{d})}\nonumber\\&\lesssim \lambda^{-(1-\vartheta/\alpha)}\norm{v}_{\mathscr{C}^{\sigma}_{2}(\mathbb{T}^{d})}\norm{u}_{\mathscr{C}^{\beta}(\mathbb{T}^{d})}.\end{aligned}$$* *Proof.* The proof of [\[eq:a\]](#eq:a){reference-type="eqref" reference="eq:a"} follows from the semigroup estimates, [Lemma 10](#lem:periodic-semi-est){reference-type="ref" reference="lem:periodic-semi-est"}. Indeed, we have $$\begin{aligned} \norm{I_{\lambda}(v)}_{\mathscr{C}^{\delta+\vartheta}_{2}(\mathbb{T}^{d})}&\leqslant \int_{0}^{\infty} e^{-\lambda t}\norm{P_{t}v}_{\mathscr{C}^{\delta+\vartheta}_{2}(\mathbb{T}^{d})}dt\\&\lesssim \norm{v}_{\mathscr{C}^{\delta}_{2}(\mathbb{T}^{d})}\int_{0}^{\infty}e^{-\lambda t}[t^{-\vartheta/\alpha}\vee 1]dt\\&= \norm{v}_{\mathscr{C}^{\delta}_{2}(\mathbb{T}^{d})}\paren[\bigg]{\lambda^{-(1-\vartheta/\alpha)}\int_{0}^{1}e^{-t}t^{-\vartheta/\alpha}dt+\lambda^{-1}\int_{1}^{\infty}e^{-t}dt}\\&\lesssim \lambda^{-(1-\vartheta/\alpha)}\norm{v}_{\mathscr{C}^{\delta}_{2}(\mathbb{T}^{d})},\end{aligned}$$ since $\lambda\geqslant 1$ and where we use that $\int_{0}^{1}e^{-t}t^{-\vartheta/\alpha}dt\leqslant \int_{0}^{1}t^{-\vartheta/\alpha}dt<\infty$ if $\vartheta\in [0,\alpha)$ and $\int_{1}^{\infty} e^{-t}dt<\infty$. The bound in the case $\vartheta=\alpha$ follows with $$\begin{aligned} \norm{I_{\lambda}(v)}_{\mathscr{C}^{\delta+\alpha}_{2}(\mathbb{T}^{d})}&\leqslant \norm[\bigg]{\int_{0}^{1} e^{-\lambda t}P_{t}vdt}_{\mathscr{C}^{\delta+\alpha}_{2}(\mathbb{T}^{d})}+\int_{1}^{\infty}e^{-\lambda t}\norm{P_{t}v}_{\mathscr{C}^{\delta+\alpha}_{2}(\mathbb{T}^{d})}dt \\&\lesssim\norm{v}_{\mathscr{C}^{\delta}_{2}(\mathbb{T}^{d})},\end{aligned}$$ using [@kp-sk Lemma 3.1] to estimate the integral over $[0,1]$ (with, in the notation of that lemma, $T=1$, $\gamma=0$, $\sigma=\delta$, $\varsigma=\alpha$, $f_{0,t}=e^{-\lambda t}P_{t}v$).\ The commutator [\[eq:b\]](#eq:b){reference-type="eqref" reference="eq:b"} is proven analogously using [@kp-sk Lemma 2.7]. ◻ **Theorem 19**.
*Let $\alpha\in (1,2]$ and $F\in\mathscr{C}^{\beta}(\mathbb{T}^{d})$ for $\beta\in (\frac{1-\alpha}{2},0)$ or $F\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ for $\beta\in (\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]$ and $\gamma\in (\frac{2\beta+2\alpha-1}{\alpha},1)$.\ Then, for $\lambda>0$ large enough, the resolvent equation $$\begin{aligned} R_{\lambda}g=(\lambda-\mathfrak{L})g=G\end{aligned}$$ with right-hand side $G=G^{\sharp}+G^{\prime}\varolessthan F$, $G^{\sharp}\in \mathscr{C}^{0}_{2}(\mathbb{T}^{d})$, $G^{\prime}\in(\mathscr{C}^{\alpha+\beta-1}_{2}(\mathbb{T}^{d}))^{d}$, possesses a unique solution $g\in\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})$, $\theta\in ((2-\beta)/2,\beta+\alpha)$.\ If $\beta\in (\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]$, the solution is paracontrolled, that is, $$\begin{aligned} g=g^{\sharp}+(G^{\prime}+\nabla g)\varolessthan I_{\lambda}(F),\quad g^{\sharp}\in\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d}).\end{aligned}$$* *Proof.* Consider the paracontrolled solution space $$\begin{aligned} \mathscr{D}^{\theta}_{2}:=\{(g,g^{\prime})\in \mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})\times (\mathscr{C}^{\theta-1}_{2}(\mathbb{T}^{d}))^{d}\mid g^{\sharp}:=g-g^{\prime}\varolessthan I_{\lambda}(F)\in\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d})\}\end{aligned}$$ with norm $\norm{g-h}_{\mathscr{D}^{\theta}_{2}}:=\norm{g-h}_{\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})}+\norm{g^{\sharp}-h^{\sharp}}_{\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d})}+\norm{g^{\prime}-h^{\prime}}_{\mathscr{C}^{\theta-1}_{2}(\mathbb{T}^{d})}$, which makes it a Banach space.\ The solution $g$ satisfies $$\begin{aligned} g=\int_{0}^{\infty}e^{-\lambda t} P_{t}(G+F\cdot\nabla g)dt,\end{aligned}$$ i.e. it is the fixed point of the map $\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})\ni g\mapsto \phi_{\lambda}(g):=\int_{0}^{\infty}e^{-\lambda t} P_{t}(G+F\cdot\nabla g)dt\in \mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})$, respectively, in the rough case $\beta\in ((2-2\alpha)/3,(1-\alpha)/2]$, of the map $$\begin{aligned} \mathscr{D}^{\theta}_{2}\ni (g,g')\mapsto (\phi_{\lambda}(g),G^{\prime}+\nabla g)=:\Phi_{\lambda}(g,g^{\prime})\in \mathscr{D}^{\theta}_{2}.\end{aligned}$$ The product is defined as $F\cdot\nabla g:=F\varodot\nabla g+F\varolessthan\nabla g+F\varogreaterthan\nabla g$, where for $F\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ and $g\in\mathscr{D}^{\theta}_{2}$, $$\begin{aligned} F\varodot\nabla g=\sum_{i=1}^{d}F^{i}\varodot\partial_{i} g&:=\sum_{i=1}^{d}\Big[F^{i}\varodot[\partial_{i} g^{\sharp}+\partial_{i} g^{\prime}\varolessthan I_{\lambda}(F)]+g(I_{\lambda}(\partial_{i} F)\varodot F^{i})\\&\qquad\qquad+C_{1}(g,I_{\lambda}(\partial_{i} F),F^{i})\Big],\end{aligned}$$ with paraproduct commutator $$\begin{aligned} \label{eq:para-comm} C_{1}(g,f,h):=(g\varolessthan f)\varodot h-g(f\varodot h)\end{aligned}$$ from [@Gubinelli2015Paracontrolled Lemma 2.4]. 
Analogously to the above, the product $F\cdot\nabla g$ of $F\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ and $g\in\mathscr{D}^{\theta}_{2}$ with $\theta>(2-\beta)/2$ can thus be estimated by $$\begin{aligned} \norm{F\cdot\nabla g}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}\lesssim\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})\norm{g}_{\mathscr{D}^{\theta}_{2}}.\end{aligned}$$ The unique fixed point is obtained by the Banach fixed point theorem, where, in the Young case, the map $\phi_{\lambda}$ and, in the rough case, $\Phi^{2}_{\lambda}=\Phi_{\lambda}\circ\Phi_{\lambda}$ are contractions for large enough $\lambda>0$. This can be seen by estimating $$\begin{aligned} \norm{\phi_{\lambda}(g)-\phi_{\lambda}(h)}_{\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})}&\lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\norm{F\cdot\nabla (g-h)}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}\\&\lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})\norm{g-h}_{\mathscr{D}^{\theta}_{2}}\end{aligned}$$ using [\[eq:a\]](#eq:a){reference-type="eqref" reference="eq:a"} and the estimate for the product. Thus a contraction is obtained by choosing $\lambda$ large enough such that $\lambda^{(\theta-\beta-\alpha)/\alpha}\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})<1$, using $\theta<\alpha+\beta$. To check that indeed $\Phi_{\lambda}(g,g^{\prime})\in \mathscr{D}^{\theta}_{2}$, we note that $$\begin{aligned} \Phi_{\lambda}(g,g^{\prime})^{\sharp}&=\phi_{\lambda}(g)-[G^{\prime}+\nabla g]\varolessthan I_{\lambda}(F)\\&=I_{\lambda}(G^{\sharp}+F\varodot\nabla g+F\varolessthan\nabla g)+C_{\lambda}(G^{\prime}+\nabla g, F)\end{aligned}$$ for the commutator $C_{\lambda}$ from [\[eq:b\]](#eq:b){reference-type="eqref" reference="eq:b"}. Notice that, if $\beta<(1-\alpha)/2$, for $G^{\sharp}\in\mathscr{C}^{0}_{2}(\mathbb{T}^{d})$, $I_{\lambda}(G^{\sharp})\in\mathscr{C}^{\alpha}_{2}(\mathbb{T}^{d})\subset\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d})$ as $\theta<(1+\alpha)/2$. Hence, together with [Lemma 18](#lem:lambda-comm){reference-type="ref" reference="lem:lambda-comm"}, it follows that $\Phi_{\lambda}(g,g^{\prime})^{\sharp}\in\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d})$. Thereby we also get the small factor of $\lambda^{(\theta-\alpha-\beta)/\alpha}$ in the estimate. To see that $\Phi^{2}_{\lambda}=\Phi_{\lambda}\circ\Phi_{\lambda}$ is a contraction, we furthermore check $$\begin{aligned} \MoveEqLeft \norm{\Phi_{\lambda}(\Phi_{\lambda}(g,g'))'-\Phi_{\lambda}(\Phi_{\lambda}(h,h'))'}_{\mathscr{C}^{\theta-1}_{2}(\mathbb{T}^{d})}\\&= \norm{\nabla\phi_{\lambda}(g)-\nabla\phi_{\lambda}(h)}_{\mathscr{C}^{\theta-1}_{2}(\mathbb{T}^{d})}\\&\lesssim \norm{\phi_{\lambda}(g)-\phi_{\lambda}(h)}_{\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})}\\&\lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})\norm{g-h}_{\mathscr{D}^{\theta}_{2}},\end{aligned}$$ by the above estimate.
◻ Existence of an invariant measure and spectral gap estimates[\[sec:inv-m\]]{#sec:inv-m label="sec:inv-m"} In this section, we prove with [Theorem 21](#thm:inv-m){reference-type="ref" reference="thm:inv-m"} existence and uniqueness of an invariant, ergodic probability measure for the process $X^{\mathbb{T}^{d}}$ with state space $\mathbb{T}^{d}$, which in the following we denote by $X$ for short. The theorem moreover shows that $X$ is exponentially ergodic, in the sense that pointwise spectral gap estimates for its semigroup $(T_{t})$ hold. Furthermore, we characterize the domain of $\mathfrak{L}$ in $L^{2}(\pi)$ in [Theorem 26](#thm:generator){reference-type="ref" reference="thm:generator"} and define the mean of $F\in\mathscr{X}^{\beta,\gamma}_{\infty}$ with respect to the invariant measure $\pi$ in [Lemma 29](#thm:def-F-pi-int){reference-type="ref" reference="thm:def-F-pi-int"}.\ Existence and uniqueness of the invariant measure together with the pointwise spectral gap estimates on the semigroup are obtained by an application of Doeblin's theorem (see e.g. [@Bensoussan1978 Theorem 3.1, Chapter 3, Section 3, p. 365]), which we state here in the continuous time setting. **Lemma 20** (Doeblin's theorem). *Let $(X_{t})_{t\geqslant 0}$ be a time-homogeneous Markov process with state space $(S,\Sigma)$ for a compact metric space $S$ and its Borel-sigma-field $\Sigma$. Let $(T_{t})_{t\geqslant 0}$ be the associated semigroup, $T_{t}f(x):=\mathds{E}[f(X_{t})\mid X_{0}=x]$ for $x\in S$ and $f:S\to\mathbb{R}$ bounded measurable. Assume further that there exists a probability measure $\mu$ on $(S,\Sigma)$ and, for any $t>0$, a continuous function $\rho_{t}:S\times S\to\mathbb{R}^{+}$, such that $T_{t}\mathbf{1}_{E}(x)=\int_{E}\rho_{t}(x,y)\mu(dy)$, $E\in\Sigma$. Assume moreover that, for any $t>0$, there exists an open ball $U_{0}$ such that $\mu(U_{0})>0$ and $\rho_{t}(x,y)>0$ for all $x\in S$ and $y\in U_{0}$.\ Then, there exists a unique invariant probability measure $\pi$ (i.e. $\int_{S}T_{t}\mathbf{1}_{E}(x)\pi(dx)=\pi(E)$ for all $E\in\Sigma$ and all $t\geqslant 0$) on $(S,\Sigma)$ with the property that there exist constants $K,\nu>0$ such that for all $t\geqslant 0$, $x\in S$ and $\phi:S\to\mathbb{R}$ bounded measurable, $$\begin{aligned} \label{eq:sg} \abs[\bigg]{T_{t}\phi(x)-\int_{S}\phi(y)\pi(dy)}\leqslant K\abs{\phi}e^{-\nu t}\end{aligned}$$ where $\abs{\phi}:=\sup_{x\in S}\abs{\phi(x)}$.* *Proof.* For discrete time Markov chains, the result follows immediately from [@Bensoussan1978 Theorem 3.1, p. 365]. For continuous time Markov processes, the proof is similar. Indeed, in the same manner one proves that if $\pi$ is such that [\[eq:sg\]](#eq:sg){reference-type="eqref" reference="eq:sg"} holds, then $\pi$ is unique and $\pi$ is invariant for $(T_{t})$. Furthermore, using the assumptions on the density $\rho$ and the same proof steps as in [@Bensoussan1978 Theorem 3.1, p. 365], one obtains existence of an invariant measure $\pi$ with $\pi(E)$ given as the limit of $(T_{n}\mathbf{1}_{E}(x))_{n}$ for any $x\in S$ and with [\[eq:sg\]](#eq:sg){reference-type="eqref" reference="eq:sg"} for $t$ replaced by $n\in\mathbb{N}$. Then, using the semigroup property, we also obtain [\[eq:sg\]](#eq:sg){reference-type="eqref" reference="eq:sg"} for any $t\geqslant 0$, with a possibly different constant $K>0$. Indeed, let $t>0$ and $n=\lfloor t\rfloor$.
Then for bounded measurable $\phi$ with $\int_{S}\phi d\pi=0$, we obtain $$\begin{aligned} \abs{T_{t}\phi(x)}=\abs{T_{n}T_{t-n}\phi(x)}\leqslant K \abs{T_{t-n}\phi}e^{-\nu n}\leqslant K\abs{\phi}e^{-\nu n}=K e^{\nu (t-n)}\abs{\phi}e^{-\nu t}\leqslant Ke^{\nu}\abs{\phi} e^{-\nu t}.\end{aligned}$$ Now, by changing the constant $K$, we obtain [\[eq:sg\]](#eq:sg){reference-type="eqref" reference="eq:sg"} for all $t\geqslant 0$. ◻ **Theorem 21**. *Let $X$ be the martingale solution to the singular periodic SDE [\[eq:sde\]](#eq:sde){reference-type="eqref" reference="eq:sde"} projected onto $\mathbb{T}^{d}$ with contraction semigroup $(T_{t})_{t\geqslant 0}$ on bounded measurable functions $f:\mathbb{T}^{d}\to\mathbb{R}$.\ Then there exists a unique invariant probability measure $\pi$ for $(T_{t})$. In particular, $\pi$ is ergodic for $X$. Furthermore there exist constants $K,\mu>0$ such that for all $f\in L^{\infty}(\mathbb{T}^{d})$, $$\begin{aligned} \label{eq:p-sp-est} \norm{T_{t}f-\langle f\rangle_{\pi}}_{L^{\infty}}\leqslant K \norm{f}_{L^{\infty}} e^{-\mu t}.\end{aligned}$$ That is, $L^{\infty}$-spectral gap estimates for the associated Markov semigroup $(T_{t})$ hold true. In particular, $\pi$ is absolutely continuous with respect to the Lebesgue measure on the torus, with density denoted by $\rho_{\infty}$.* *Proof.* The proof is an application of Doeblin's theorem. We check that the assumptions of [Lemma 20](#thm:doeblin){reference-type="ref" reference="thm:doeblin"} are satisfied. To that aim, note that for the Fokker-Planck density $\rho_{t}(x,\cdot)$ with $\rho_{0}=\delta_{x}$, the map $(x,y)\mapsto \rho_{t}(x,y)$ is continuous by [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"}. It remains to show that there exists an open ball $U_{0}$ and a constant $c>0$ such that $\rho_{t}$ is bounded from below by $c$ on $\mathbb{T}^{d}\times U_{0}$. We choose $U_{0}=\mathbb{T}^{d}$ and obtain $$\begin{aligned} \min_{x\in\mathbb{T}^{d},y\in U_{0}}\rho_{t}(x,y)=\rho_{t}(x^*,y^*)\geqslant c>0.\end{aligned}$$ Indeed, this follows from the strict maximum principle for $y\mapsto\rho_{t}(x^*,y)$ by [Proposition 17](#prop:max-p){reference-type="ref" reference="prop:max-p"} with $c=c(x^*)>0$.\ The spectral gap estimates also imply absolute continuity, as $$\begin{aligned} \langle 1_{A}\rangle_{\pi}=\lim_{t\to\infty} \mathds{E}_{X_{0}=x}[1_{A}(X_{t})]=\lim_{t\to\infty} \int 1_{A}(y)\rho_{t}(x,y)dy\end{aligned}$$ and thus any Lebesgue nullset $A$ is also a $\pi$-nullset. The existence of the density thus follows by the Radon-Nikodym theorem. ◻ **Corollary 22**. *Let $\rho_{\infty}$ be the Lebesgue density of the invariant measure $\pi$. Then $\rho_{\infty}\in \mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})$ and it has the paracontrolled structure $$\begin{aligned} \rho_{\infty}=\rho_{\infty}^{\sharp}+\rho_{\infty}\varolessthan I_{\infty}(\nabla\cdot F),\end{aligned}$$ where $\rho_{\infty}^{\sharp}\in\mathscr{C}^{2(\alpha+\beta)-2}(\mathbb{T}^{d})$ and $I_{\infty}(\nabla\cdot F):=\int_{0}^{\infty}P_{s}(\nabla\cdot F)ds$.\ Furthermore, the density is strictly positive, $$\begin{aligned} \min_{x\in\mathbb{T}^{d}}\rho_{\infty}(x)>0.\end{aligned}$$ In particular, $\pi$ is equivalent to the Lebesgue measure.* *Proof.* Let $t>0$. By invariance of $\pi$, i.e.
$\langle T_{t}f\rangle_{\pi}=\langle f\rangle_{\pi}$ for all $f\in L^{\infty}(\mathbb{T}^{d})$, and $d\pi=\rho_{\infty}dx$, we obtain that almost everywhere $$\begin{aligned} \rho_{\infty}=T_{t}^{\ast}\rho_{\infty},\end{aligned}$$ where $T_{t}^{\ast}$ denotes the adjoint of $T_{t}$ with respect to $L^{2}(\lambda)$. Here $\lambda$ denotes the Lebesgue measure and $\langle f\rangle_{\pi}:=\int_{\mathbb{T}^{d}}f(x)\pi(dx)$.\ Denote $y_{t}(x):=T_{t}^{\ast}\rho_{\infty}(x)$. Then we show that $y$ is a mild solution of the Fokker-Planck equation started in $\rho_{\infty}$, that is $$\begin{aligned} \label{eq:fp-y} (\partial_{t}-\mathfrak{L}^{\ast})y=0,\quad y_{0}=\rho_{\infty}.\end{aligned}$$ Here the density satisfies $\rho_{\infty}\in L^{1}(\lambda)$, in particular $\rho_{\infty}\in \mathscr{C}^{0}_{1}(\mathbb{T}^{d})$. Indeed, that $y_{t}=T_{t}^{\ast}\rho_{\infty}$ is a mild solution of the Fokker-Planck equation follows from approximation of $F$ by $F^{m}\in C^{\infty}(\mathbb{T}^{d})$ with $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ using that for $m\in\mathbb{N}$, $y^{m}=(T^{m}_t)^{\ast}\rho_{\infty}$ solves $(\partial_{t}-(\mathfrak{L}^{m})^{\ast})y^{m}=0$, $y^{m}_{0}=\rho_{\infty}$ by the classical Fokker-Planck theory, where $T^{m}$ denotes the semigroup for the strong solution of the SDE with drift $F^{m}$ and generator $\mathfrak{L}^{m}:=-\mathscr{L}^{\alpha}_{\nu}+F^{m}\cdot\nabla$. By continuity of the Fokker-Planck solution map from [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"} for converging data $(F^{m},\rho_{\infty})\to (F,\rho_{\infty})$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})\times\mathscr{C}^{0}_{1}(\mathbb{T}^{d})$, we deduce $y^{m}\to y$ in the paracontrolled solution space, where $y$ is the mild solution of [\[eq:fp-y\]](#eq:fp-y){reference-type="eqref" reference="eq:fp-y"}.\ The lower bound away from zero then also follows from [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"}, as well as the paracontrolled structure $$\begin{aligned} \rho_{\infty}=\rho_{\infty}^{\sharp}+\rho_{\infty}\varolessthan I_{t}(\nabla\cdot F),\end{aligned}$$ where $\rho_{\infty}^{\sharp}:=y_{t}^{\sharp}\in\mathscr{C}^{2(\alpha+\beta)-2}(\mathbb{T}^{d})$ and $I_{t}(\nabla\cdot F):=\int_{0}^{t}P_{t-s}(\nabla\cdot F)ds$.\ Due to $\mathscr{F}_{\mathbb{T}^{d}}(\nabla\cdot F)(0)=0$, we have that, for any $\theta\geqslant 0$, there exists $c>0$, such that, uniformly in $s>0$, $$\begin{aligned} \label{eq:schauder-zero-fourier} \norm{P_{s}(\nabla\cdot F)}_{\mathscr{C}^{\beta-1+\theta}(\mathbb{T}^{d})}\lesssim s^{-\theta/\alpha}e^{-cs}\norm{\nabla\cdot F}_{\mathscr{C}^{\beta-1}(\mathbb{T}^{d})}.\end{aligned}$$ Indeed, this follows from [Lemma 11](#lem:exp-schauder){reference-type="ref" reference="lem:exp-schauder"}. Thus we obtain, for $t>0$ and any $\theta\geqslant 0$, that $$\begin{aligned} I_{t}(\nabla\cdot F)-I_{\infty}(\nabla\cdot F)=\int_{t}^{\infty}P_{s}(\nabla\cdot F)ds\in\mathscr{C}^{\theta}(\mathbb{T}^{d}).\end{aligned}$$ That is, the remainder is smooth and thus can be absorbed into $\rho_{\infty}^{\sharp}$. Notice that $\int_{0}^{t}P_{s}(\nabla\cdot F)ds\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})$ by [\[eq:schauder-zero-fourier\]](#eq:schauder-zero-fourier){reference-type="eqref" reference="eq:schauder-zero-fourier"} and in particular that $I_{\infty}(\nabla\cdot F)\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})$ is well-defined. ◻ **Corollary 23**. *Let $X$ and $(T_{t})$ be as before.
Then, the semigroup $(T_{t})_{t\geqslant 0}$ can be uniquely extended to a strongly continuous contraction semigroup on $L^{2}(\pi)$, i.e. $T_{t+s}=T_{t}T_{s}$, $T_{t}1=1$, $T_{t}f\to f$ for $t\downarrow 0$ and $f\in L^{2}(\pi)$ and $\norm{T_{t}f}_{L^{2}(\pi)}\leqslant\norm{f}_{L^{2}(\pi)}$, and such that, for (possibly different) constants $K,\mu>0$, the $L^{2}(\pi)$-spectral gap estimates hold true: $$\begin{aligned} \norm{T_{t}f-\langle f\rangle_{\pi}}_{L^{2}(\pi)}\leqslant K \norm{f}_{L^{2}(\pi)} e^{-\mu t}\quad \text{ for all } f\in L^{2}(\pi).\end{aligned}$$* *Proof.* That the semigroup $(T_{t})_{t\geqslant 0}$ can be uniquely extended to a contraction semigroup on $L^{2}(\pi)$ follows from Jensen's inequality, $$\begin{aligned} \norm{T_{t}f}_{L^{2}(\pi)}^{2}=\int \abs{\mathds{E}_{X_{0}=x}[f(X_{t})]}^{2}\pi(dx)\leqslant\int \mathds{E}_{X_{0}=x}[\abs{f(X_{t})}^{2}]\pi(dx)=\norm{f}_{L^{2}(\pi)}^{2},\end{aligned}$$ for $f\in L^{\infty}$, using the invariance of $\pi$ (by [Theorem 21](#thm:inv-m){reference-type="ref" reference="thm:inv-m"}). By approximation, we then also obtain for the extension, that $T_{t}f(x)=\mathds{E}_{X_{0}=x}[f(X_{t})]$ for $f\in L^{2}(\pi)$.\ We check strong continuity of the semigroup on $L^{2}(\pi)$. Using the contraction property in $L^{2}(\pi)$, we obtain $$\begin{aligned} \label{eq:Test} \norm{T_{t}f-f}_{L^{2}(\pi)}^{2}=\norm{T_{t}f}_{L^{2}(\pi)}^{2}+\norm{f}_{L^{2}(\pi)}^{2}-2\langle T_{t}f,f\rangle_{\pi}\leqslant 2\norm{f}_{L^{2}(\pi)}^{2}-2\langle T_{t}f,f\rangle_{\pi}.\end{aligned}$$ It is left to prove that the right-hand side vanishes as $t\downarrow 0$. By the reverse Fatou lemma (using that the integrand is bounded) and using that $X$ is almost surely càdlàg, we have that for $x\in\mathbb{T}^{d}$ and $f\in C(\mathbb{T}^{d},\mathbb{R})$, $$\begin{aligned} \label{eq:conv} \lim_{t\downarrow 0}\abs{T_{t}f(x)-f(x)}\leqslant\mathds{E}_{X_{0}=x}[\lim_{t\downarrow 0}\abs{f(X_{t})-f(X_{0})}]=0.\end{aligned}$$ Furthermore, we can bound uniformly in $x\in\mathbb{T}^{d}$ and $t>0$, $$\begin{aligned} \label{eq:bound-PH} \abs{T_{t}f(x)}=\abs[\bigg]{\int\rho_{t}(x,y)f(y)dy}\leqslant \frac{\sup_{t>0}\max_{x,y\in\mathbb{T}^{d}}\rho_{t}(x,y)}{\min_{y\in\mathbb{T}^{d}}\rho_{\infty}(y)}\norm{f}_{L^{1}(\pi)}\leqslant C \norm{f}_{L^{2}(\pi)}\end{aligned}$$ where $C>0$ is a constant (not depending on $t$, $f$) and $\rho_{t}(x,y)$ denotes the Fokker-Planck density with $\rho_{0}(x,\cdot)=\delta_{x}$. Here, we have $\min_{y\in\mathbb{T}^{d}}\rho_{\infty}(y)>0$ by [Corollary 22](#cor:inv-d){reference-type="ref" reference="cor:inv-d"}. Furthermore, we have $$\begin{aligned} \label{eq:unif-bound} \sup_{t>0}\max_{x,y\in\mathbb{T}^{d}}\rho_{t}(x,y)<\infty.\end{aligned}$$ Indeed, by the $L^{\infty}$-spectral gap estimates, it follows that $$\begin{aligned} \sup_{t\geqslant 0}\norm{\rho_{t}\ast f}_{L^{\infty}}\leqslant K\norm{f}_{L^{\infty}}+\abs{\langle f\rangle_{\pi}},\end{aligned}$$ with convolution $(\rho_{t}\ast f)(x):=\int_{\mathbb{T}^{d}}\rho_{t}(x,y)f(y)dy$. We can apply this bound for $f^{\varepsilon,\tilde{y}}(y):=\mathbf{1}_{\abs{y-\tilde{y}}<\varepsilon}$ for $\tilde{y}\in\mathbb{T}^{d}$ and $\varepsilon>0$ and let $\varepsilon\downarrow 0$.
By continuity of $y\mapsto\rho_{t}(x,y)$ and the dominated convergence theorem, $(\rho_{t}\ast f^{\varepsilon,\tilde{y}})(x)\to\rho_{t}(x,\tilde{y})\lambda(\mathbb{T}^{d})$, which yields [\[eq:unif-bound\]](#eq:unif-bound){reference-type="eqref" reference="eq:unif-bound"}.\ In particular, by [\[eq:bound-PH\]](#eq:bound-PH){reference-type="eqref" reference="eq:bound-PH"}, $\sup_{t>0}\norm{T_{t}f}_{L^{\infty}}\lesssim \norm{f}_{L^{2}(\pi)}$ and an application of the dominated convergence theorem using [\[eq:conv\]](#eq:conv){reference-type="eqref" reference="eq:conv"}, yields that for $f\in C(\mathbb{T}^{d},\mathbb{R})$, $$\begin{aligned} \lim_{t\downarrow 0}\langle T_{t}f,f\rangle_{\pi}=\norm{f}_{L^{2}(\pi)}^{2}.\end{aligned}$$ We conclude with [\[eq:Test\]](#eq:Test){reference-type="eqref" reference="eq:Test"}, that for all $f\in C(\mathbb{T}^{d},\mathbb{R})$, $\norm{T_{t}f-f}_{L^{2}(\pi)}\to 0$ as $t\downarrow 0$.\ As $(T_{t})$ is a contraction semigroup on $L^{2}(\pi)$, the operator norm is trivially bounded, that is $\sup_{t\geqslant 0}\norm{T_{t}}_{L(L^{2}(\pi))}\leqslant 1$. Above, we proved that $(T_{t})$ is strongly continuous on a dense subset of $L^{2}(\pi)$. Thus, together with boundedness of the operator norm, $(T_{t})$ is also strongly continuous on $L^{2}(\pi)$ as a consequence of the Banach-Steinhaus theorem.\ It remains to prove that the $L^{2}(\pi)$-spectral gap estimates follow from the $L^{\infty}$-spectral gap estimates and the bound [\[eq:bound-PH\]](#eq:bound-PH){reference-type="eqref" reference="eq:bound-PH"}. Indeed, we obtain for $f\in L^{2}(\pi)$ with $\langle f\rangle_{\pi}=0$ and all $t>1$, $$\begin{aligned} \norm{T_{t}f}_{L^{2}(\pi)}=\norm{T_{t-1}T_{1} f}_{L^{2}(\pi)}\leqslant K e^{-\mu(t-1)}\norm{T_{1}f}_{L^{\infty}}\leqslant e^{\mu}CK e^{-\mu t}\norm{f}_{L^{2}(\pi)}.\end{aligned}$$ For $t\in[0,1]$, we trivially estimate, using the contraction property, $$\begin{aligned} \norm{T_{t}f}_{L^{2}(\pi)}\leqslant \norm{f}_{L^{2}(\pi)}\leqslant e^{\mu}e^{-\mu t} \norm{f}_{L^{2}(\pi)}.\end{aligned}$$ ◻ **Remark 24**. *The argument in the above proof of [Corollary 23](#cor:L2-spectral-gap){reference-type="ref" reference="cor:L2-spectral-gap"} (using the bound [\[eq:unif-bound\]](#eq:unif-bound){reference-type="eqref" reference="eq:unif-bound"} and $\rho_{\infty}>0$) can be adapted to prove the stronger estimate (for constants $K,\mu>0$) $$\begin{aligned} \norm{T_{t}f-\langle f\rangle_{\pi}}_{L^{\infty}}\leqslant Ke^{-\mu t}\norm{f-\langle f\rangle_{\pi}}_{L^{1}(\pi)},\end{aligned}$$ which in particular implies the $L^{2}(\pi)$-$L^{2}(\pi)$-bound from the corollary.* **Remark 25**. *More generally, one can show the Feller property, that is $(T_{t})$ is strongly continuous on $C(\mathbb{T}^{d})$. Using [@Revuz1999 Proposition III.2.4] and [\[eq:conv\]](#eq:conv){reference-type="eqref" reference="eq:conv"}, it is left to show $T_{t}f\in C(\mathbb{T}^{d})$ for $f\in C(\mathbb{T}^{d})\subset\mathscr{C}^{0}(\mathbb{T}^{d})$. But this follows from [@kp-sk Theorem 4.7], since for $R>t$, $y_{t}=T_{R-t}f$ solves the backward Kolmogorov equation with periodic terminal condition $y_{R}=f\in \mathscr{C}^{0}$ and $y\in\mathscr{M}_{R}^{\gamma}\mathscr{C}^{\alpha+\beta}$, such that in particular $x\mapsto y_{t}(x)$ is continuous.* The next theorem relates the semigroup $(T_{t})_{t\geqslant 0}$ from above with the generator $\mathfrak{L}$ and gives an explicit representation of its domain in terms of paracontrolled solutions of singular resolvent equations. **Theorem 26**. 
*Let $(T_{t})$ be the contraction semigroup on $L^{2}(\pi)$ from [Corollary 23](#cor:L2-spectral-gap){reference-type="ref" reference="cor:L2-spectral-gap"} and denote its generator by $(A,dom(A))$ with $A:dom(A)\subset L^{2}(\pi)\to L^{2}(\pi)$ and domain $dom(A):=\{f\in L^{2}(\pi)\mid \lim_{t\to 0} (T_{t}f-f)/t =:Af\text{ exists in } L^{2}(\pi)\}$. Let $\theta\in ((1+\alpha)/2,\alpha+\beta)$ and $$\begin{aligned} D:=\{g\in \mathscr{D}^{\theta}_{2}\mid R_{\lambda}g= G\text{ for some } G\in L^{2}(\pi)\text{ and }\lambda>0 \},\end{aligned}$$ where $R_{\lambda}:=(\lambda-\mathfrak{L})$.\ Then it follows $D = dom(A)$ and $(A,D)=(\mathfrak{L},D)$. In particular, $(\mathfrak{L},D)$ is the generator of the Markov process $X$ with state space $\mathbb{T}^{d}$ and transition semigroup $(T_{t})$.* **Remark 27**. *Since the drift $F$ does not depend on a time variable, one could reformulate the martingale problem for $X$ in terms of the elliptic generator $\mathfrak{L}$ and the domain $D\subset L^{2}(\pi)$.* *Proof.* We first show that $D\subset dom(A)$. To this aim, note that for $f\in D$, we obtain $R_{\lambda}f=G$ for $G\in L^{2}(\pi)$. For a mollification $(G^{n})\subset C^{\infty}(\mathbb{T}^{d})$ of $G$ and $(f^{n})\subset C^{\infty}(\mathbb{T}^{d})$, such that $R_{\lambda}f^{n}=G^{n}$, we obtain that in particular $f^{n}$ is a mild solution of the Kolmogorov backward equation on the torus for $\mathscr{G}=\partial_{t}+\mathfrak{L}$ with right-hand side $\lambda f^{n}-G^{n}\in L^{\infty}$ and terminal condition $f^{n}\in\mathscr{C}^{3}$. Equivalently, its periodic version is the periodic solution of the Kolmogorov backward equation on $\mathbb{R}^{d}$. As $X$ equals the projected solution of the $(\mathscr{G},x)$-martingale problem onto the torus, we have, for $n\in\mathbb{N}$ and $x\in\mathbb{T}^{d}$, that $$\begin{aligned} T_{t}f^{n}(x)-f^{n}(x)&=\mathds{E}_{X_{0}=x}[f^{n}(X_{t})-f^{n}(X_{0})]\\&=\mathds{E}_{X_{0}=x}\bigg[\int_{0}^{t}(\lambda f^{n}-G^{n})(X_{s})ds\bigg]=\int_{0}^{t}T_{s}(\lambda f^{n}-G^{n})(x)ds.\end{aligned}$$ Using that $f^{n}\to f$ in $L^{2}(\pi)$ as $G^{n}\to G$ by continuity of the resolvent solution map, we obtain that for $f\in D$, $$\begin{aligned} T_{t}f-f=\int_{0}^{t}T_{s}(\lambda f-G)ds.\end{aligned}$$ By continuity of the map $s\mapsto T_{s}(\lambda f-G)\in L^{2}(\pi)$, since $T$ is strongly continuous on $L^{2}(\pi)$, we obtain that for $f\in D$, $\lim_{t\to 0} (T_{t}f-f)/t$ exists in $L^{2}(\pi)$ and $$\begin{aligned} Af=\lambda f-G=\lambda f-R_{\lambda}f=\mathfrak{L}f.\end{aligned}$$ To prove that also $dom(A)\subset D$, we use that for $\chi\in dom (A)$, there trivially exists $f\in L^{2}(\pi)$ with $A\chi=f$. Notice that by [Theorem 19](#thm:res-eq){reference-type="ref" reference="thm:res-eq"}, we can solve the resolvent equation for $\lambda>0$ large enough, $$\begin{aligned} R_{\lambda}\tilde{\chi}=\lambda\chi-f,\end{aligned}$$ with right-hand side $\lambda\chi-f\in L^{2}(\pi)\subset \mathscr{C}^{0}_{2}$, obtaining a solution $\tilde{\chi}\in D$. By the above, we have that $A_{\mid D}=\mathfrak{L}_{\mid D}$, such that $\mathfrak{L}\tilde{\chi}=A\tilde{\chi}$. This yields by inserting in the equation for $\tilde{\chi}$ and since $f=A\chi$, that $A(\tilde{\chi}-\chi)=\lambda(\tilde{\chi}-\chi)$. As $\lambda>0$, by uniqueness of the solution of the resolvent equation for the generator $A$, we obtain $\tilde{\chi}=\chi$. Thus with the equation for $\tilde{\chi}$ this yields $\chi\in D$ and $\mathfrak{L}\chi=f$. ◻ **Corollary 28**. 
*Let $f\in L^{2}(\pi)$ with $\langle f\rangle_{\pi}=0$. Then there exists a unique solution $\chi\in D$ of the Poisson equation $(-\mathfrak{L})\chi=f$ such that $\langle\chi\rangle_{\pi}=0$.* *Proof.* This follows from the $L^{2}(\pi)$-spectral gap estimates. We can solve the Poisson equation in $L^{2}(\pi)$ for the given right-hand side $f\in L^{2}(\pi)$ with $\langle f\rangle_{\pi}=0$. The solution is explicitly given by $\chi=\int_{0}^{\infty}T_{t}fdt\in L^{2}(\pi)$.\ We check that $\chi$ is indeed a solution. By [@Ethier1986 Proposition 1.1.5 part a)], we have that for $f\in L^{2}(\pi)$, $\int_{0}^{t}T_{s}fds\in dom (A)$ and $$\begin{aligned} T_{t}f-f=A\int_{0}^{t}T_{s}fds,\end{aligned}$$ where $(A,dom(A))$ denotes again the generator of $(T_{t})$ on $L^{2}(\pi)$. By the $L^{2}$-spectral gap estimates and $\langle f\rangle_{\pi}=0$, we obtain that $(\int_{0}^{t}T_{s}fds)_{t}$ converges in $L^{2}(\pi)$ for $t\to\infty$ to a limit $\chi$, and that $(T_{t}f)_{t}$ converges to zero in $L^{2}(\pi)$ for $t\to\infty$. Hence, since $A$ is a closed operator (cf. [@Ethier1986 Corollary 1.1.6]), we obtain in the limit $t\to\infty$, that $-f=A\int_{0}^{\infty} T_{t}fdt=A\chi$ and $\chi\in dom(A)$. Now, using $dom(A)=D$ and $(A,D)=(\mathfrak{L},D)$ by [Theorem 26](#thm:generator){reference-type="ref" reference="thm:generator"}, this yields $\chi\in D$ and $(-\mathfrak{L})\chi=f$. ◻ Thanks to the regularity of the density of the invariant measure $\pi$, we can finally define the mean of the singular drift $F$ under $\pi$, $\langle F\rangle_{\pi}=\langle F,\rho_{\infty}\rangle_{\lambda}$, as well as the product $F\cdot\rho_{\infty}$. **Lemma 29**. *Let $\rho_{\infty}$ be the density of $\pi$. Let $\langle F\rangle_{\pi}=(\langle F^{i}\rangle_{\pi})_{i=1,...,d}$ for $$\begin{aligned} \langle F^{i}\rangle_{\pi}&=(F^{i}\cdot\rho_{\infty})(\mathbf{1})\\&:=[(F^{i}\cdot \rho^{\sharp}_{\infty})+(F^{i}\varodot I_{\infty} (\nabla\cdot F))\cdot\rho_{\infty}+C_{1}(\rho_{\infty},I_{\infty}(\nabla\cdot F),F^{i})](\mathbf{1}),\end{aligned}$$ where $\mathbf{1}\in C^{\infty}(\mathbb{T}^{d})$ is the constant test function and $C_{1}$ denotes the paraproduct commutator defined in [\[eq:para-comm\]](#eq:para-comm){reference-type="eqref" reference="eq:para-comm"}.\ Then, $\langle F^{i}\rangle_{\pi}$ is well-defined and continuous, that is, $\langle F^{m}\rangle_{\pi}\to\langle F\rangle_{\pi}$ for $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$. Moreover, the following Lipschitz bound holds true: $$\begin{aligned} \norm{F\cdot\rho_{\infty}}_{\mathscr{C}^{\beta}(\mathbb{T}^{d})}\lesssim\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})[\norm{\rho_{\infty}}_{\mathscr{C}^{\alpha+\beta-1}}+\norm{\rho_{\infty}^{\sharp}}_{\mathscr{C}^{2(\alpha+\beta-1)}}].\end{aligned}$$* *Proof.* The proof follows directly from [Theorem 13](#thm:ex-fp-d){reference-type="ref" reference="thm:ex-fp-d"} and [Corollary 22](#cor:inv-d){reference-type="ref" reference="cor:inv-d"}. ◻

# Solving the Poisson equation with singular right-hand side[\[sec:s-P-eq\]]{#sec:s-P-eq label="sec:s-P-eq"}

To prove the central limit theorem for the solution of the martingale problem $X$, we utilize the classical approach of decomposing the additive functional in terms of a martingale and a boundary term, using the solution of the Poisson equation for $\mathfrak{L}$ with singular right-hand side $F-\langle F\rangle_{\pi}$.
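Let us recall this decomposition only schematically, as an informal guide and in the notation used below; the rigorous version for the singular drift is carried out in the proof of [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"}. For a bounded, non-singular drift $b$ and a solution $\chi$ of $(-\mathfrak{L})\chi=b-\langle b\rangle_{\pi}$, Dynkin's formula applied to $\chi(X_{t}^{\mathbb{T}^{d}})$ yields $$\begin{aligned} \int_{0}^{t}(b-\langle b\rangle_{\pi})(X_{s}^{\mathbb{T}^{d}})ds=\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{t}^{\mathbb{T}^{d}})+M_{t},\end{aligned}$$ where $M$ is a martingale. Under the central limit scaling the bounded boundary term vanishes, while the martingale term is handled by the martingale central limit theorem together with an ergodic theorem for its quadratic variation.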
For solving the Poisson equation in [Theorem 33](#thm:poisson-eq){reference-type="ref" reference="thm:poisson-eq"} below, [Corollary 28](#cor:good-P-eq){reference-type="ref" reference="cor:good-P-eq"} is not applicable, as $F$ is a distribution and therefore not an element of $L^{2}(\pi)$. Consider an approximation $(F^{m})\subset C^{\infty}(\mathbb{T}^{d})$ with $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$. Then, we can apply [Corollary 28](#cor:good-P-eq){reference-type="ref" reference="cor:good-P-eq"} for the right-hand sides $F^{m}-\langle F^{m}\rangle_{\pi}\in L^{2}(\pi)$, $m\in\mathbb{N}$. This way we obtain solutions $\chi^{m}=(\chi^{m,i})_{i=1,...,d}\in D^{d}\subset L^{2}(\pi)^{d}$ of the Poisson equations $$\begin{aligned} \label{eq:Poisson-m} (-\mathfrak{L})\chi^{m,i}=F^{m,i}-\langle F^{m,i}\rangle_{\pi}\end{aligned}$$ for $m\in\mathbb{N}$.\ In this section, we show that the sequence $(\chi^{m})_{m}$ converges in a space of sufficient regularity to a limit $\chi$ that indeed solves the Poisson equation $$\begin{aligned} \label{eq:Poisson-limit} (-\mathfrak{L})\chi=F-\langle F\rangle_{\pi}.\end{aligned}$$ Let us define the space $\mathscr{H}^{1}(\pi)$ as in [@klo Section 2.2], $$\begin{aligned} \mathscr{H}^{1}(\pi):=\{f\in D\mid \norm{f}_{\mathscr{H}^{1}(\pi)}^{2}:=\langle(-\mathfrak{L})f,f\rangle_{\pi}<\infty\},\end{aligned}$$ which is the Sobolev space for the operator $\mathfrak{L}$ with respect to $L^{2}(\pi)$. Its dual is defined by $$\begin{aligned} \mathscr{H}^{-1}(\pi):=\{F:\mathscr{H}^{1}(\pi)\to\mathbb{R}\mid F\text{ linear with } \norm{F}_{\mathscr{H}^{-1}(\pi)}:=\sup_{\norm{f}_{\mathscr{H}^{1}(\pi)}=1}\abs{F(f)}<\infty\}.\end{aligned}$$ The space $\mathscr{H}^{1}(\pi)$ is related to the quadratic variation of Dynkin's martingale, see [@klo Section 2.4], which motivates the definition.\ To prove convergence of $(\chi^{m})_{m}$ in $L^{2}(\pi)^{d}$, we first establish in [Corollary 32](#cor:H1-conv){reference-type="ref" reference="cor:H1-conv"} convergence of $(\chi^{m})_{m}$ in the space $\mathscr{H}^{1}(\pi)^{d}$ and utilize a Poincaré-type bound on the operator $\mathfrak{L}$. A standard argument as in [@Guionnet02 Property 2.4] shows that the $L^{2}(\pi)$-spectral gap estimates from [Corollary 23](#cor:L2-spectral-gap){reference-type="ref" reference="cor:L2-spectral-gap"} with constant $K=1$ imply the Poincaré estimate for the operator $\mathfrak{L}$: $$\begin{aligned} \norm{f-\langle f\rangle_{\pi}}_{L^{2}(\pi)}^{2}\leqslant \mu \langle (-\mathfrak{L})f,f\rangle_{\pi}=\mu \norm{f}_{\mathscr{H}^{1}(\pi)}^{2}, \quad\text{ for all } f\in D.\end{aligned}$$ In general, the constant $K>0$ in the spectral gap estimates from [Corollary 23](#cor:L2-spectral-gap){reference-type="ref" reference="cor:L2-spectral-gap"} does not need to satisfy $K=1$ and the above argument breaks down for $K\neq 1$. Hence, we show below in [\[eq:poincare\]](#eq:poincare){reference-type="eqref" reference="eq:poincare"} that $\norm{f-\langle f\rangle_{\pi}}_{L^{2}(\pi)}^{2}\leqslant C \norm{f}_{\mathscr{H}^{1}(\pi)}^{2}$ holds true for some constant $C>0$.
That constant may differ from the constant $\mu$ and may not be optimal, but the bound suffices for our purpose of concluding on $L^{2}(\pi)^{d}$ convergence given $\mathscr{H}^{1}(\pi)^{d}$ convergence of $(\chi^{m})_{m}$.\ An optimal estimate, that however applies for a much more general situation of weak Poincaré inequalities and slower than exponential convergences, can be found in [@RoecknerWang Theorem 2.3].\ The $\mathscr{H}^{1}(\pi)^{d}$ convergence of $(\chi^{m})_{m}$ follows from $\mathscr{H}^{-1}(\pi)^{d}$-convergence of $(F^{m})_{m}$ for the approximating sequence $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$. Convergence of $(F^{m})_{m}$ in $\mathscr{H}^{-1}(\pi)^{d}$ is established in [Theorem 31](#thm:dual){reference-type="ref" reference="thm:dual"}. The following lemma is an auxiliary result, which proves that the semi-norms in $\mathscr{H}^{1}(\pi)$ and the homogeneous Besov space $\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})$, cf. [\[eq:p-bs2\]](#eq:p-bs2){reference-type="eqref" reference="eq:p-bs2"}, are equivalent. **Lemma 30**. *Let $\alpha\in (1,2]$. Define the carré-du-champ operator of the generalized fractional Laplacian as $\Gamma^{\alpha}_{\nu}(f)=\Gamma^{\alpha}_{\nu}(f,f):=\frac{1}{2}((-\mathscr{L}^{\alpha}_{\nu}) f^{2}-2f(-\mathscr{L}^{\alpha}_{\nu}) f)$. Then, there exist constants $c,C>0$, such that for all $f\in \dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})$, $$\begin{aligned} \label{eq:eq-norms} c\norm{f}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}^{2}\leqslant\langle \Gamma^{\alpha}_{\nu}(f)\rangle_{\lambda}\leqslant C \norm{f}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}^{2}.\end{aligned}$$* *Proof.* By [@Schmeisser1987 part (v) of Theorem, Section 3.5.4] we obtain that the periodic Lizorkin space $F_{2,2}^{s}(\mathbb{T}^{d})$ coincides with the periodic Bessel-potential space $H^{s}(\mathbb{T}^{d})$. Furthermore $F_{2,2}^{s}(\mathbb{T}^{d})$ coincides with $B_{2,2}^{s}(\mathbb{T}^{d})$ (cf. [@Schmeisser1987 Section 3.5.1, Remark 4]). Thus, we obtain that in particular $$\begin{aligned} \dot{B}^{s}_{2,2}(\mathbb{T}^{d})=\dot{H}^{s}(\mathbb{T}^{d}).\end{aligned}$$ It remains to show [\[eq:eq-norms\]](#eq:eq-norms){reference-type="eqref" reference="eq:eq-norms"} with $\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})$ replaced by $\dot{H}^{s}(\mathbb{T}^{d})$. We prove the claim for $\alpha\in (1,2)$, for $\alpha=2$ the proof is similar. 
To that aim, we calculate, using the definition of $\mathscr{L}^{\alpha}_{\nu}$ for a Schwartz function $f\in \mathscr{S}(\mathbb{T}^{d})$ and $\psi^{\alpha}_{\nu}(0)=0$, $$\begin{aligned} \langle\Gamma^{\alpha}_{\nu}(f)\rangle_{\lambda}=\int_{\mathbb{T}^{d}} \Gamma^{\alpha}_{\nu}(f) (x)dx&=\mathscr{F}_{\mathbb{T}^{d}}(\Gamma^{\alpha}_{\nu}(f))(0)\\&=\frac{1}{2}\mathscr{F}_{\mathbb{T}^{d}}((-\mathscr{L}^{\alpha}_{\nu}) f^{2})(0)-\mathscr{F}_{\mathbb{T}^{d}}(f(-\mathscr{L}^{\alpha}_{\nu}) f)(0) \\&=-\frac{1}{2}\psi^{\alpha}_{\nu}(0)(\hat{f}\ast\hat{f})(0)+(\hat{f}\ast\psi^{\alpha}_{\nu}\hat{f})(0) \\&=\sum_{k\in\mathbb{Z}^{d}}\hat{f}(-k)\hat{f}(k)\psi^{\alpha}_{\nu}(k)\\&=\sum_{k\in\mathbb{Z}^{d}}\abs{\hat{f}(k)}^{2}\psi^{\alpha}_{\nu}(k).\end{aligned}$$ By [Assumption 4](#ass){reference-type="ref" reference="ass"} on the spherical component of the jump measure $\nu$, we obtain that there exist constants $c,C>0$ with $$\begin{aligned} c\abs{k}^{\alpha}\leqslant\psi^{\alpha}_{\nu}(k)=\int_{S}\abs{\langle k,\xi\rangle}^{\alpha}\nu(d\xi)\leqslant C\abs{k}^{\alpha}.\end{aligned}$$ Thus it follows that $$\begin{aligned} c\norm{f}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}^{2}=c\sum_{k\in\mathbb{Z}^{d}}\abs{k}^{\alpha}\abs{\hat{f}(k)}^{2}\leqslant\langle\Gamma^{\alpha}_{\nu}(f)\rangle_{\lambda}\leqslant C\sum_{k\in\mathbb{Z}^{d}}\abs{k}^{\alpha}\abs{\hat{f}(k)}^{2}=C\norm{f}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}^{2}.\end{aligned}$$ By a density argument, the claim follows for all $f\in \dot{H}^{\alpha/2}(\mathbb{T}^{d})=\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})$. ◻ **Theorem 31**. *Let $F\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ for $\beta\in (\frac{2-2\alpha}{3},0)$ and $\alpha\in (1,2]$.\ Then, equivalence of the semi-norms $\norm{\cdot}_{\mathscr{H}^{1}(\pi)}\simeq \norm{\cdot}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}$ follows and $\overline{F}:=F-\langle F\rangle_{\pi}\in\mathscr{H}^{-1}(\pi)^{d}$. In particular, $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ implies $\overline{F}^{m}\to\overline{F}$ in $\mathscr{H}^{-1}(\pi)^{d}$.* *Proof.* By invariance of $\pi$ we obtain $\langle \mathfrak{L}g\rangle_{\pi}=0$ for $g\in D$, because for $g\in D$, $(\frac{d}{dt}T_{t})_{\mid t=0}g=\mathfrak{L}g\in L^{2}(\pi)$. We now apply this for $g=f^{2}$ for which we need to check that if $f\in D$, then $\mathfrak{L}f^{2}$ is well-defined and $\mathfrak{L}f^{2}\in L^{1}(\pi)$.
This follows by calculating $$\begin{aligned} f^{2}=(f^{\sharp}+\nabla f\varolessthan I_{\lambda}(F))^{2}=g^{\sharp}+g^{\prime}\varolessthan I_{\lambda}(F),\end{aligned}$$ where $$\begin{aligned} g^{\sharp}&=(f^{\sharp})^{2}+2f^{\sharp}\varodot(\nabla f\varolessthan I_{\lambda}(F))+2f^{\sharp}\varogreaterthan(\nabla f\varolessthan I_{\lambda}(F))\\&\qquad\qquad\qquad+(\nabla f\varolessthan I_{\lambda}(F))\varodot(\nabla f\varolessthan I_{\lambda}(F))\in\mathscr{C}^{2\theta-1}_{1}(\mathbb{T}^{d})\end{aligned}$$ and $$\begin{aligned} g^{\prime}=2f^{\sharp}\varolessthan\nabla f+\nabla f\varolessthan I_{\lambda}(F)\varolessthan\nabla f+I_{\lambda}(F)\varolessthan\nabla f\varolessthan I_{\lambda}(F)\in(\mathscr{C}^{\theta-1}_{1}(\mathbb{T}^{d}))^{d}.\end{aligned}$$ Hence, we conclude that for $f\in D$, $f^{2}$ admits a paracontrolled structure with $g^{\sharp}\in\mathscr{C}^{2\theta-1}_{1}(\mathbb{T}^{d})$ and $g^{\prime}\in(\mathscr{C}^{\theta-1}_{1}(\mathbb{T}^{d}))^{d}$, such that $\mathfrak{L}f^{2}$ is well-defined and $$\begin{aligned} \mathfrak{L}f^{2}=2f \mathfrak{L}f+2\Gamma^{\alpha}_{\nu}(f)=2\lambda f^{2}-2fR_{\lambda}f+2\Gamma^{\alpha}_{\nu}(f)\in L^{1}(\pi).\end{aligned}$$ Herein we used that $2\lambda f^{2}-2fR_{\lambda}f\in L^{1}(\pi)$ as $f\in D$ and $\Gamma^{\alpha}_{\nu}(f)=\Gamma^{\alpha}_{\nu}(f,f)=\frac{1}{2}((-\mathscr{L}^{\alpha}_{\nu})f^{2}-2f(-\mathscr{L}^{\alpha}_{\nu})f)\in L^{1}(\pi)$ for $f\in\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})$ by [Lemma 30](#lem:cdc-fL){reference-type="ref" reference="lem:cdc-fL"} as $\theta$ can be chosen close to $\alpha+\beta$, such that $\theta>\alpha/2$.\ Analogously, if we denote the domain of $\mathfrak{L}$ with integrability $p$ by $D_{p}$, then for $f,g\in D_{2}$, we conclude that $f\cdot g\in D_{1}$, which in particular implies that the carré-du-champ operator $$\begin{aligned} \Gamma^{\mathfrak{L}}(f,g)=\frac{1}{2}(\mathfrak{L}(fg)-f\mathfrak{L}g-g\mathfrak{L}f)\in L^{1}(\pi)\end{aligned}$$ for $f,g\in D$ is well-defined in $L^{1}(\pi)$.\ Applying invariance of $\pi$ for $g=f^{2}$, we can add $\frac{1}{2}\langle\mathfrak{L}f^{2}\rangle_{\pi}=0$ yielding $$\begin{aligned} \norm{f}_{\mathscr{H}^{1}(\pi)}^{2}=\langle (-\mathfrak{L})f,f\rangle_{\pi}=\langle\Gamma^{\mathfrak{L}}(f)\rangle_{\pi}=\langle \Gamma^{\alpha}_{\nu}(f)\rangle_{\pi}.\end{aligned}$$ where $\Gamma^{\mathfrak{L}}(f)=\frac{1}{2}\mathfrak{L}f^{2}-f\mathfrak{L}f=\Gamma^{\alpha}_{\nu}(f)$. Thus, we obtain $$\begin{aligned} \label{eq:H1-spaces} \norm{f}_{\mathscr{H}^{1}(\pi)}^{2}=\langle\Gamma^{\alpha}_{\nu}(f)\rangle_{\pi}\simeq\langle\Gamma^{\alpha}_{\nu}(f)\rangle_{\lambda}\simeq\norm{f}_{\dot{B}^{\alpha/2}_{2,2}}^{2},\end{aligned}$$ where $\simeq$ denotes that the norms are equivalent.\ Here, we used the absolute continuity of $\pi$ with respect to the Lebesgue measure, with density $\rho_{\infty}$ that is uniformly bounded from above and bounded from below away from zero by [Corollary 22](#cor:inv-d){reference-type="ref" reference="cor:inv-d"}. Moreover, note that the carré-du-champ is non-negative, $\Gamma^{\alpha}_{\nu}(f)\geqslant 0$.
Furthermore, we utilized [\[eq:eq-norms\]](#eq:eq-norms){reference-type="eqref" reference="eq:eq-norms"} from [Lemma 30](#lem:cdc-fL){reference-type="ref" reference="lem:cdc-fL"}.\ Thus applying the duality estimate from [Lemma 12](#lem:duality){reference-type="ref" reference="lem:duality"} (for functions $f-\langle f\rangle_{\lambda}, g-\langle g\rangle_{\lambda}$ to obtain the result for the homogeneous Besov spaces), we get for $\overline{F}:=F-\langle F\rangle_{\pi}$ with mean $\langle F\rangle_{\pi}$ from [Lemma 29](#thm:def-F-pi-int){reference-type="ref" reference="thm:def-F-pi-int"}, $$\begin{aligned} \abs{\langle \overline{F}^{i}, g\rangle_{\pi}}&=\abs{\langle\overline{F}^{i}\rho_{\infty}, g\rangle } \\&\lesssim \norm{\overline{F}^{i}\rho_{\infty}}_{\dot{B}^{-\alpha/2}_{2,2}(\mathbb{T}^{d})}\norm{g}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}\\&\lesssim \norm{\overline{F}^{i}\rho_{\infty}}_{\dot{B}^{\beta}_{2,2}(\mathbb{T}^{d})}\norm{g}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})} \\&\lesssim\norm{\overline{F}^{i}\rho_{\infty}}_{\dot{B}^{\beta}_{2,2}(\mathbb{T}^{d})}\norm{g}_{\mathscr{H}^{1}(\pi)},\end{aligned}$$ for $i=1,...,d$, using $\beta>-\alpha/2$ and [\[eq:H1-spaces\]](#eq:H1-spaces){reference-type="eqref" reference="eq:H1-spaces"}. Hence, we find $$\begin{aligned} \norm{\overline{F}^{i}}_{\mathscr{H}^{-1}(\pi)}&\lesssim \norm{\overline{F}^{i}\rho_{\infty}}_{\dot{B}^{\beta}_{2,2}(\mathbb{T}^{d})}\\&\lesssim\norm{\overline{F}^{i}\rho_{\infty}}_{B^{\beta}_{2,2}(\mathbb{T}^{d})}\\&\lesssim\norm{\overline{F}^{i}\rho_{\infty}}_{\mathscr{C}^{\beta+(1-\gamma)\alpha}(\mathbb{T}^{d})}\\&\lesssim\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\norm{F}_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})[\norm{\rho_{\infty}}_{\mathscr{C}^{\alpha+\beta-1}}+\norm{\rho_{\infty}^{\sharp}}_{\mathscr{C}^{2(\alpha+\beta-1)}}],\end{aligned}$$ where the estimate for the product of $\overline{F}^{i}$ and $\rho_{\infty}$ follows from [Lemma 29](#thm:def-F-pi-int){reference-type="ref" reference="thm:def-F-pi-int"}.\ This proves that $\overline{F}\in\mathscr{H}^{-1}(\pi)^{d}$. Convergence follows by the same estimate. ◻ **Corollary 32**. *Let $F\in\mathscr{X}^{\beta,\gamma}_{\infty}$ and $F^{m}\in C^{\infty}(\mathbb{T}^{d})$ with $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$. Let $\chi^{m}=(\chi^{m,i})_{i=1,\dots,d}\in L^{2}(\pi)^{d}$ denote the unique solution of $$\begin{aligned} (-\mathfrak{L})\chi^{m,i}=F^{m,i}-\langle F^{m,i}\rangle_{\pi}=:\overline{F}^{m,i}\end{aligned}$$ with $\langle\chi^{m,i}\rangle_{\pi}=0$.
Then $(\chi^{m})_{m}$ converges in $\mathscr{H}^{1}(\pi)^{d}\cap L^{2}(\pi)^{d}$ to a limit $\chi$.* *Proof.* Convergence in $\mathscr{H}^{1}(\pi)$ follows from the estimate $$\begin{aligned} \norm{\chi^{m,i}-\chi^{m',i}}_{\mathscr{H}^{1}(\pi)}^{2}&=\langle(-\mathfrak{L})(\chi^{m,i}-\chi^{m',i}),\chi^{m,i}-\chi^{m',i}\rangle_{\pi}\\&=\langle \overline{F}^{m,i}-\overline{F}^{m',i},\chi^{m,i}-\chi^{m',i}\rangle_{\pi} \\&\leqslant\norm{\overline{F}^{m,i}-\overline{F}^{m',i}}_{\mathscr{H}^{-1}(\pi)}\norm{\chi^{m,i}-\chi^{m',i}}_{\mathscr{H}^{1}(\pi)}.\end{aligned}$$ Thus we obtain $$\begin{aligned} \norm{\chi^{m,i}-\chi^{m',i}}_{\mathscr{H}^{1}(\pi)}&\leqslant \norm{\overline{F}^{m,i}-\overline{F}^{m',i}}_{\mathscr{H}^{-1}(\pi)}.\end{aligned}$$ And indeed the $\mathscr{H}^{-1}(\pi)$-norm on the right-hand side is small when $m,m'$ are large, by [Theorem 31](#thm:dual){reference-type="ref" reference="thm:dual"}.\ It remains to deduce $L^{2}(\pi)$-convergence. By [Theorem 31](#thm:dual){reference-type="ref" reference="thm:dual"}, we also obtain the seminorm equivalences, $\norm{\cdot}_{\mathscr{H}^{1}(\pi)}\simeq\norm{\cdot}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}\simeq\norm{\cdot}_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}$. Combining with the fractional Poincaré inequality on the torus, $$\begin{aligned} \norm{u-\langle u\rangle_{\lambda}}_{L^{2}}^{2}=\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\abs{\hat{u}(k)}^{2}\leqslant\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\abs{k}^{\alpha}\abs{\hat{u}(k)}^{2}=\norm{u}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}^{2},\end{aligned}$$ with Lebesgue measure $\lambda$ on $\mathbb{T}^{d}$, we can thus estimate $$\begin{aligned} \label{eq:poincare} \norm{\chi-\langle\chi\rangle_{\lambda}}_{L^{2}(\pi)}\lesssim\norm{\chi-\langle\chi\rangle_{\lambda}}_{L^{2}(\lambda)}\leqslant\norm{\chi}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}\lesssim\norm{\chi}_{\mathscr{H}^{1}(\pi)}.\end{aligned}$$ Furthermore, as $\langle\chi\rangle_{\pi}=0$, we obtain $\norm{\chi-\langle\chi\rangle_{\lambda}}_{L^{2}(\pi)}^{2}=\norm{\chi}_{L^{2}(\pi)}^{2}+\langle\chi\rangle_{\lambda}^{2}$. Together, we thus find $$\begin{aligned} \norm{\chi}_{L^{2}(\pi)}^{2}\lesssim\norm{\chi}_{L^{2}(\pi)}^{2}+\langle\chi\rangle_{\lambda}^{2}\lesssim\norm{\chi}_{\mathscr{H}^{1}(\pi)}^{2}.\end{aligned}$$ In particular, we conclude that $\mathscr{H}^{1}(\pi)$-convergence implies $L^{2}(\pi)$-convergence of the sequence $(\chi^{m})$. ◻ **Theorem 33**. *Let $(F^{m})_{m}$, $(\chi^{m})_{m}$ and $\chi$ be as in [Corollary 32](#cor:H1-conv){reference-type="ref" reference="cor:H1-conv"}.\ Then, $(\chi^{m})_{m}$ converges to $\chi$ in $(\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d}))^{d}$, $\theta\in ((1-\beta)/2,\alpha+\beta)$ and there exists $\lambda>0$, such that $$\begin{aligned} \chi=\chi^{\sharp}+\nabla \chi\varolessthan I_{\lambda}(\overline{F})\end{aligned}$$ for $\chi^{\sharp}\in(\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d}))^{d}$.\ Furthermore, the limit $\chi$ solves the Poisson equation with singular right-hand side $\overline{F}$, $$\begin{aligned} (-\mathfrak{L})\chi=\overline{F}.\end{aligned}$$* *Proof.* Trivially, for $\lambda>0$, $\chi^{m}$ solves the resolvent equation $$\begin{aligned} R_{\lambda}\chi^{m}=(\lambda-\mathfrak{L})\chi^{m}=\lambda\chi^{m}+\overline{F}^{m}\end{aligned}$$ with right-hand side $G^{m}:=\lambda\chi^{m}+\overline{F}^{m}$.
The right-hand sides $(G^{m})$ converge in $(\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d}))^{d}$ to $G=\lambda\chi+\overline{F}$, because $\chi^{m}\to\chi$ in $L^{2}(\pi)^{d}$ by [Corollary 32](#cor:H1-conv){reference-type="ref" reference="cor:H1-conv"} and, thanks to the equivalence of $\pi$ and the Lebesgue measure $\lambda_{\mathbb{T}^{d}}$, thus also in $L^{2}(\lambda_{\mathbb{T}^{d}})^{d}$. Choosing $\lambda>1$ large enough, by [Theorem 19](#thm:res-eq){reference-type="ref" reference="thm:res-eq"}, we can solve the resolvent equation $$\begin{aligned} \label{eq:g} R_{\lambda}g^{i}=G^{i}=G^{\sharp,i}+G^{\prime,i}\varolessthan F,\end{aligned}$$ with $G^{\sharp,i}:=\lambda\chi^{i}\in L^{2}(\lambda)\subset\mathscr{C}^{0}_{2}(\mathbb{T}^{d})$ and $G^{\prime,i}:=(1-\langle F^{i}\rangle_{\pi})e_{i}\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})$. Thereby we obtain a paracontrolled solution $g^{i}\in\mathscr{D}^{\theta}_{2}$ for $\theta<\alpha+\beta$, with $g^{i}=g^{\sharp,i}+\nabla g^{i}\varolessthan I_{\lambda}(F)$, $g^{\sharp,i}\in\mathscr{C}^{2\theta-1}_{2}(\mathbb{T}^{d})$ and $I_{\lambda}(F):=\int_{0}^{\infty}e^{-\lambda t}P_{t}F dt \in \mathscr{C}^{\alpha+\beta}(\mathbb{T}^{d})$. By continuity of the solution map for the resolvent equation, we obtain the convergence $\chi^{m,i}\to g^{i}$ in $\mathscr{D}^{\theta}_{2}$ for $m\to\infty$. Convergence of $(\chi^{m})$ to $g$ in $(\mathscr{D}^{\theta}_{2})^{d}$ in particular implies convergence in $L^{2}(\lambda_{\mathbb{T}^{d}})^{d}$ and thus in $L^{2}(\pi)^{d}$, which implies that $g=\chi$ almost everywhere and hence, by [\[eq:g\]](#eq:g){reference-type="eqref" reference="eq:g"}, that $\chi\in (\mathscr{D}^{\theta}_{2})^{d}$ solves $(-\mathfrak{L})\chi=\overline{F}$. ◻

# Fluctuations in the Brownian and pure Lévy noise case[\[sec:CLT\]]{#sec:CLT label="sec:CLT"}

In this section, we prove the central limit [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"} for the diffusion $X$ with periodic coefficients. In the following, we again explicitly distinguish between $X$ and the projected process $X^{\mathbb{T}^{d}}$. Of course, the central limit theorem in particular implies that for $t>0$, $\frac{1}{n}X_{nt}\to t \langle F\rangle_{\pi}$ with convergence in probability for $n\to\infty$, i.e. a weak law of large numbers. The central limit theorem then quantifies the fluctuations around the mean $t \langle F\rangle_{\pi}$.\ Due to ergodicity of $\pi$, it follows by the von Neumann ergodic theorem that, if the projected process is started in $X_{0}^{\mathbb{T}^{d}}\sim\pi$, $\frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds\to t\langle b\rangle_{\pi}$ in $L^{2}(\mathds{P}_{\pi})$ as $n\to\infty$ for $b\in L^{\infty}(\mathbb{T}^{d})$. As $\mathds{P}_{\pi}=\int_{\mathbb{T}^{d}}\mathds{P}_{x}\pi(dx)$, this implies in particular the convergence (along a subsequence) in $L^{2}(\mathds{P}_{x})$ for $\pi$-almost all $x$.\ The pointwise spectral gap estimates yield the following slightly stronger ergodic theorem for the process started in $X_{0}^{\mathbb{T}^{d}}=x$ for any $x\in\mathbb{T}^{d}$. In particular, in the periodic homogenization result for the PDE, [Corollary 36](#cor:PDEhomog){reference-type="ref" reference="cor:PDEhomog"} below, pointwise convergence (for every $x\in\mathbb{T}^{d}$) of the PDE solutions can be proven. **Lemma 34**. *Let $b\in L^{\infty}(\mathbb{T}^{d})$ and $x\in\mathbb{T}^{d}$.
Let $X^{\mathbb{T}^{d}}$ be the projected solution of the $\mathscr{G}=\partial_{t}+\mathfrak{L}$- martingale problem on the torus $\mathbb{T}^{d}$ started in $X_{0}^{\mathbb{T}^{d}}=x\in\mathbb{T}^{d}$.\ Then the following convergence holds in $L^{2}(\mathds{P})$: $$\begin{aligned} \frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds\to t\langle b\rangle_{\pi}.\end{aligned}$$* *Proof.* Without loss of generality, we assume that $\langle b\rangle_{\pi}=0$, otherwise we subtract the mean. With the Markov property we obtain $$\begin{aligned} \norm[\bigg]{\frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds}_{L^{2}(\mathds{P})}^{2}&=\frac{1}{n^2}\int_{0}^{nt}\int_{0}^{nt}\mathds{E}[b(X_{s}^{\mathbb{T}^{d}})b(X_{r}^{\mathbb{T}^{d}})]dsdr\\&=\frac{2}{n^2}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r}\mathds{E}\Bigl[b(X_{s}^{\mathbb{T}^{d}})\mathds{E}_{s}[b(X_{r}^{\mathbb{T}^{d}})]\Bigr]dsdr %\\&\quad +\frac{1}{n^2}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{r< s}\E\bigg[b(X_{r})\E_{r}[b(X_{s})]\bigg]dsdr\\&=\frac{2}{n^2}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r}T_{s}(bT_{r-s}b)(x)dsdr.\end{aligned}$$ Using the spectral gap estimate [\[eq:p-sp-est\]](#eq:p-sp-est){reference-type="eqref" reference="eq:p-sp-est"}, we can estimate $$\begin{aligned} \abs[\bigg]{\frac{2}{n^2}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r}T_{s}(bT_{r-s}b)(x)dsdr}&\leqslant\frac{2K^{2}\norm{b}_{L^{\infty}}^{2}}{n^2}\int_{0}^{nt}\int_{0}^{nt}e^{-\mu s}e^{-\mu(r-s)}dsdr %\\&=\frac{tK^{2}\norm{b}_{L^{\infty}}^{2}}{n}\int_{0}^{nt}e^{-\mu r}dsdr \\&=\frac{tK^{2}\norm{b}_{L^{\infty}}^{2}}{n\mu}(1-e^{-\mu nt})\to 0,\end{aligned}$$ for $n\to\infty$. ◻ **Theorem 35**. *Let $\alpha\in (1,2]$ and $F\in\mathscr{C}^{\beta}(\mathbb{T}^{d})$ for $\beta\in (\frac{1-\alpha}{2},0)$ or $F\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ for $\beta\in (\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]$ and $\gamma\in (\frac{2\beta+2\alpha-1}{\alpha},1)$. Let $X$ be the solution of the $\mathscr{G}=(\partial_{t}+\mathfrak{L})$-martingale problem started in $X_{0}=x\in\mathbb{R}^{d}$.\ In the case $\alpha=2$ and $L=B$ for a standard Brownian motion $B$, the following functional central limit theorem holds: $$\begin{aligned} \paren[\bigg]{\frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})}_{t\in [0,T]}\Rightarrow \sqrt{D}(W_{t})_{t\in [0,T]},\end{aligned}$$ with convergence in distribution in $C([0,T],\mathbb{R}^{d})$, a $d$-dimensional standard Brownian motion $W$ and constant diffusion matrix $D$ given by $$\begin{aligned} D(i,j):=\int_{\mathbb{T}^{d}} (e_{i}+\nabla \chi^{i}(x))^{T}(e_{j}+\nabla \chi^{j}(x))\pi(dx) %\int_{\mathbb{T}^{d}}[-(\nabla\chi)^{T}(x)\sigma\sigma^{T}\nabla\chi(x)+\sigma^{T}(x)\sigma(x)]\pi(dx),\end{aligned}$$ for $i,j=1,\dots,d$ and the $i$-th euclidean unit vector $e_{i}$. 
Here, $\chi$ solves the singular Poisson equation $(-\mathfrak{L})\chi^{i}=F^{i}-\langle F^{i}\rangle_{\pi}$, $i=1,...,d$, according to [Theorem 33](#thm:poisson-eq){reference-type="ref" reference="thm:poisson-eq"}.\ In the case $\alpha\in (1,2)$, the following non-Gaussian central limit theorem holds: $$\begin{aligned} \paren[\bigg]{\frac{1}{n^{1/\alpha}}(X_{nt}-nt\langle F\rangle_{\pi})}_{t\in [0,T]}\Rightarrow (\tilde{L}_{t})_{t\in [0,T]},\end{aligned}$$ with convergence in distribution in $D([0,T],\mathbb{R}^{d})$, where $\tilde{L}$ is a $d$-dimensional symmetric $\alpha$-stable nondegenerate Lévy process (with generator $-\mathscr{L}^{\alpha}_{\nu}$).* *Proof.* As a byproduct of [@kp-wsmp Theorem 5.10], we obtain that there exists a probability space $(\Omega,\mathscr{F},\mathds{P})$ with an $\alpha$-stable symmetric non-degenerate process $L$, such that $X=x+Z+L$, where $Z$ is given by $$\begin{aligned} \label{eq:Z} Z_{t}=\lim_{m\to\infty}\int_{0}^{t}F^{m}(X_{s})ds\end{aligned}$$ for a sequence $(F^{m})$ of smooth functions $F^{m}$ with $F^{m}\to F$ in $\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})$ and where the limit is taken in $L^{2}(\mathds{P})$, uniformly in $t\in [0,T]$.\ We write the additive functional $\int_{0}^{\cdot}(\overline{F^{m}})^{\mathbb{R}^{d}}(X_{s})ds=\int_{0}^{\cdot} \overline{F^{m}}(X_{s}^{\mathbb{T}^{d}})ds$ in terms of the periodic solution $\chi^{m}$ of the Poisson equation [\[eq:Poisson-m\]](#eq:Poisson-m){reference-type="eqref" reference="eq:Poisson-m"} with right hand side $F^{m}-\langle F^{m}\rangle _{\pi}=:\overline{F^{m}}$, such that $$\begin{aligned} X_{t}-t\langle F\rangle_{\pi}&=X_{0}+(Z_{t}-t\langle F\rangle_{\pi})+L_{t}\\&=X_{0}+\lim_{m\to\infty}\int_{0}^{t}\overline{F^{m}}(X_{s}^{\mathbb{T}^{d}})ds+L_{t}\label{eq:decomp-ph}\\&=X_{0}+\lim_{m\to\infty}\paren[\big]{[\chi^{m}(X_{0}^{\mathbb{T}^{d}})-\chi^{m}(X_{t}^{\mathbb{T}^{d}})]+M_{t}^{m}}+L_{t} \label{eq:mart-ph}\\&=X_{0}+[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{t}^{\mathbb{T}^{d}})]+M_{t}+L_{t}.\label{eq:decomp-ph2} %+\lambda \frac{1}{\sqrt{n}}\int_{0}^{nt}\chi^{m}(X_{s})ds} %\\&\quad+\frac{1}{\sqrt{n}}B_{nt}\end{aligned}$$ Here, the limit is again taken in $L^{2}(\mathds{P})$ and $\chi$ is the solution of the Poisson equation [\[eq:Poisson-limit\]](#eq:Poisson-limit){reference-type="eqref" reference="eq:Poisson-limit"} with right-hand side $\overline{F}$, which exists by [Theorem 33](#thm:poisson-eq){reference-type="ref" reference="thm:poisson-eq"}.\ To justify [\[eq:decomp-ph\]](#eq:decomp-ph){reference-type="eqref" reference="eq:decomp-ph"}, we use the convergence from [\[eq:Z\]](#eq:Z){reference-type="eqref" reference="eq:Z"} and $\langle F\rangle_{\pi}=\lim_{m\to\infty}\langle F^{m}\rangle_{\pi}$ by [Lemma 29](#thm:def-F-pi-int){reference-type="ref" reference="thm:def-F-pi-int"}. In [\[eq:mart-ph\]](#eq:mart-ph){reference-type="eqref" reference="eq:mart-ph"}, we applied Itô's formula to $(\chi^{m})^{\mathbb{R}^{d}}(X_{t})$ for $m\in\mathbb{N}$. For the equality [\[eq:decomp-ph2\]](#eq:decomp-ph2){reference-type="eqref" reference="eq:decomp-ph2"}, we utilized that $\chi^{m}\to\chi$ in $L^{\infty}(\mathbb{T}^{d})$ by [Theorem 33](#thm:poisson-eq){reference-type="ref" reference="thm:poisson-eq"} and that the sequence of martingales $(M^{m})$ converges in $L^{2}(\mathds{P})$ uniformly in time in $[0,T]$ to the martingale $M$. 
Here, for $\alpha\in (1,2)$, the martingales are given by (notation: $[y]:=y\mod\mathbb{Z}^{d}=\iota(y)$) $$\begin{aligned} M_{t}^{m}=\int_{0}^{t}\int_{\mathbb{R}^{d}\setminus\{0\}}[\chi^{m}(X_{s-}^{\mathbb{T}^{d}}+[y])-\chi^{m}(X_{s-}^{\mathbb{T}^{d}})]\hat{\pi}(ds,dy),\end{aligned}$$ where $\hat{\pi}(ds,dy)=\pi(ds,dy)-ds\mu(dy)$ is the compensated Poisson random measure associated to $L$. $M$ is given by an analogous expression, where we replace $\chi^{m}$ by $\chi$.\ In the Brownian noise case, $\alpha=2$, we have that $M_{t}^{m}=\int_{0}^{t}\nabla\chi^{m}(X_{s}^{\mathbb{T}^{d}})\cdot dB_{s}$ and $M_{t}$ is defined analogously with $\chi^{m}$ replaced by $\chi$. Indeed, convergence of the martingales in $L^{2}(\mathds{P})$ follows from the convergence of $(\chi^{m})$ to $\chi$ in $\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})$ with $\theta\in (1,\alpha+\beta)$ by [Theorem 33](#thm:poisson-eq){reference-type="ref" reference="thm:poisson-eq"}, which in particular implies uniform convergence of $(\chi^{m})$ and $(\nabla\chi^{m})$.\ Consider first the case $\alpha=2$ and $L=B$ for a standard Brownian motion $B$. Then we have by the above, almost surely, $$\begin{aligned} \frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})=\frac{1}{\sqrt{n}}X_{0}+\frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{d}})]+\frac{1}{\sqrt{n}}(M_{nt}+B_{nt})\end{aligned}$$ with $M_{t}=\int_{0}^{t}\nabla\chi(X_{s}^{\mathbb{T}^{d}})\cdot dB_{s}$.\ To obtain the central limit theorem, we will apply the functional martingale central limit theorem, [@Ethier1986 Theorem 7.1.4], to $$\begin{aligned} \paren[\bigg]{\frac{1}{\sqrt{n}}(M_{nt}+B_{nt})}_{t\in [0,T]}.\end{aligned}$$ To that aim, we check the convergence of the quadratic variation $$\begin{aligned} \frac{1}{n}\langle M^{i}+B^{i},M^{j}+B^{j}\rangle_{nt}=\frac{1}{n}\int_{0}^{nt}(\operatorname{Id}+\nabla\chi(X_{s}^{\mathbb{T}^{d}}))^{T}(\operatorname{Id}+\nabla\chi(X_{s}^{\mathbb{T}^{d}})) (i,j)ds\end{aligned}$$ in probability to $$\begin{aligned} t\int_{\mathbb{T}^{d}}(\operatorname{Id}+\nabla\chi(x))^{T}(\operatorname{Id}+\nabla\chi(x))(i,j)\pi(dx)=tD(i,j).\end{aligned}$$ This is a consequence of [Lemma 34](#lem:ergodic-thm){reference-type="ref" reference="lem:ergodic-thm"}.\ The boundary term $\frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{d}})]$ vanishes as $n\to\infty$ since $\chi\in L^{\infty}(\mathbb{T}^{d})$. Furthermore, as a process, $$\begin{aligned} \paren[\bigg]{\frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{d}})]}_{t\in [0,T]}\end{aligned}$$ converges to the constant zero process almost surely with respect to the uniform topology in $C([0,T],\mathbb{R}^{d})$.\ Using Slutsky's lemma and combining with the functional martingale central limit theorem above, we obtain weak convergence of $(n^{-1/2}(X_{nt}-nt\langle F\rangle_{\pi}))_{t\in [0,T]}$ to the Brownian motion $\sqrt{D}W$ with the constant diffusion matrix $D$ stated in the theorem.\ Now let $\alpha\in (1,2)$. We rescale by $n^{-1/\alpha}$ and claim that the martingale $n^{-1/\alpha}M_{nt}$ vanishes in $L^{2}(\mathds{P})$ for $n\to\infty$.
Indeed, in this case the martingale $M$ is given by $$\begin{aligned} M_{t}=\int_{0}^{t}\int_{\mathbb{R}^{d}\setminus \{0\}}[\chi(X_{s-}^{\mathbb{T}^{d}}+[y])-\chi(X_{s-}^{\mathbb{T}^{d}})]\hat{\pi}(ds,dy).\end{aligned}$$ Using the estimate from [@peszat_zabczyk_2007 Lemma 8.22] and the mean-value theorem, we obtain $$\begin{aligned} \mathds{E}[\sup_{t\in[0,T]}\abs{M_{nt}}^{2}]&\lesssim \int_{0}^{nT}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathds{E}[\abs{\chi(X_{s-}^{\mathbb{T}^{d}}+[y])-\chi(X_{s-}^{\mathbb{T}^{d}})}^{2}]\mu(dy)ds\\&=\int_{0}^{nT}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathds{E}[\abs{\chi(X_{s}^{\mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})}^{2}]\mu(dy)ds \\&\leqslant \int_{0}^{nT}\int_{B(0,1)^{c}}\mathds{E}[\abs{\chi(X_{s}^{\mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})}^{2}]\mu(dy)ds\\&\qquad+ \int_{0}^{nT}\int_{B(0,1)\setminus\{0\}}\mathds{E}[\abs{\chi(X_{s}^{\mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})}^{2}]\mu(dy)ds\\&\leqslant 2nT\mu(B(0,1)^{c})\norm{\chi}_{L^{\infty}(\mathbb{T}^{d})^{d}}^{2}+2nT\norm{\nabla\chi}_{L^{\infty}(\mathbb{T}^{d})^{d\times d}}^{2}\int_{B(0,1)\setminus\{0\}}\abs{y}^{2}\mu(dy)\\&\lesssim nT.\end{aligned}$$ Hence, we conclude $$\begin{aligned} \label{eq:M-bound} \mathds{E}[\sup_{t\in[0,T]}\abs{n^{-1/\alpha}M_{nt}}^{2}]\lesssim Tn^{1-2/\alpha}\end{aligned}$$ and since $\alpha<2$, we obtain the claimed convergence to zero.\ As the $J_{1}$-metric (for definition, see [@Jacod2003 Chapter VI, Equation 1.26]) can be bounded by the uniform norm, [\[eq:M-bound\]](#eq:M-bound){reference-type="eqref" reference="eq:M-bound"} implies in particular that the process $(n^{-1/\alpha}M_{nt})_{t\in[0,T]}$ converges to the constant zero process in probability with respect to the $J_{1}$-topology on the Skorokhod space $D([0,T],\mathbb{R}^{d})$. Furthermore, $(n^{-1/\alpha}L_{nt})_{t\geqslant 0}\stackrel{d}{=}(L_{t})_{t\geqslant 0}$. Using [@Jacod2003 Chapter VI, Proposition 3.17] and that the constant process is continuous, we thus obtain that $(n^{-1/\alpha}(X_{nt}-nt\langle F\rangle_{\pi}))_{t\in [0,T]}$ converges in distribution in $D([0,T],\mathbb{R}^{d})$ to the $\alpha$-stable process $(\tilde{L}_{t})_{t\in [0,T]}$, which has the same law as $(L_{t})_{t\in[0,T]}$. ◻ Utilizing the correspondence of the solution of the SDE (i.e. the solution of the martingale problem) to the parabolic generator PDE via the Feynman-Kac formula, we can now show the corresponding periodic homogenization result for the PDE as a corollary. **Corollary 36**. *Let $F$ and $F^{\mathbb{R}^{d}}$ be as in [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"}. Assume moreover that $\langle F\rangle_{\pi}=0$ and let $f\in C_{b}(\mathbb{R}^{d})$. Let $T>0$ and let $u\in D_{T}=\{u\in C_{T}\mathscr{C}^{\alpha+\beta}\cap C^{1}_{T}\mathscr{C}^{\beta}\mid u^{\sharp}:=u-\nabla u\varolessthan I(F)\in C_{T}\mathscr{C}^{2(\alpha+\beta)-1}\cap C^{1}_{T}\mathscr{C}^{\alpha+2\beta-1}\}$ with $I_{t}(f):=\int_{0}^{t}P_{t-s}(f)ds$ be the mild solution of the singular parabolic PDE $$\begin{aligned} (\partial_{t}-\mathfrak{L})u=0,\quad u_{0}=f^{\varepsilon},\end{aligned}$$ where $f^{\varepsilon}(x):=f(\varepsilon x)$.
Let $u^{\varepsilon}(t,x):=u(\varepsilon^{-\alpha}t,\varepsilon^{-1} x)$ with $u^{\varepsilon}(0,\cdot)=f$.\ Let furthermore, for $\alpha=2$ and $-\mathscr{L}^{\alpha}_{\nu}=\frac{1}{2}\Delta$, $\overline{u}$ be the solution of $$\begin{aligned} (\partial_{t}-D:\nabla\nabla)\overline{u}=0,\quad \overline{u}_{0}=f,\end{aligned}$$ with notation $D:\nabla\nabla:=\sum_{i,j=1,...,d}D(i,j)\partial_{x_{i}}\partial_{x_{j}}$,\ and for $\alpha\in (1,2)$, let $\overline{u}$ be the solution of $$\begin{aligned} (\partial_{t}+\mathscr{L}^{\alpha}_{\nu})\overline{u}=0,\quad \overline{u}_{0}=f.\end{aligned}$$ Then, for any $t\in (0,T]$, $x\in\mathbb{R}^{d}$, we have the convergence $u^{\varepsilon}_{t}(x)\to \overline{u}_{t}(x)$ for $\varepsilon\to 0$.* **Remark 37**. *Note that $u^{\varepsilon}$ solves $(\partial_{t}-\mathfrak{L}^{\varepsilon})u^{\varepsilon}=0$, $u^{\varepsilon}_{0}=f$ with operator $\mathfrak{L}^{\varepsilon}g=-\mathscr{L}^{\alpha}_{\nu}g+\varepsilon^{1-\alpha} F(\varepsilon^{-1} \cdot)\nabla g$.* **Remark 38**. *If $\alpha=2$ and $F$ is of gradient-type, that is, $F=\nabla f$ for $f\in\mathscr{C}^{1+\beta}$ ($f$ is a continuous function, as $1+\beta>0$), the invariant measure is explicitly given by $d\pi=c^{-1}e^{-f(x)}dx$ with suitable normalizing constant $c>0$, since the operator is of divergence form, $\mathfrak{L}=e^{f}\nabla \cdot(e^{-f}\nabla\cdot)$. Then it follows that $\langle F\rangle_{\pi}=\int_{\mathbb{T}^{d}}\nabla e^{-f(x)}dx=0$. Thus, $F$ satisfies the assumptions of [Corollary 36](#cor:PDEhomog){reference-type="ref" reference="cor:PDEhomog"}.* *Proof of [Corollary 36](#cor:PDEhomog){reference-type="ref" reference="cor:PDEhomog"}.* Notice that $(\tilde{u}_{s}:=u_{t-s})_{s\in[0,t]}$ solves the backward Kolmogorov equation $(\partial_{s}+\mathfrak{L})\tilde{u}=0, \tilde{u}(t,\cdot)=f^{\varepsilon}$. Approximating $f$ by $\mathscr{C}^{3}(\mathbb{R}^{d})$ functions and using that $X$ solves the $(\partial_{t}+\mathfrak{L},x)$-martingale problem, we obtain $$\begin{aligned} u^{\varepsilon}(t,x)=\mathds{E}_{X_{0}=\varepsilon^{-1}x}[f(\varepsilon X_{\varepsilon^{-\alpha}t})].\end{aligned}$$ The stated convergence then follows from [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"}. Indeed, if $X_{0}=\varepsilon^{-1}x$, then $\varepsilon X_{\varepsilon^{-2}\cdot}\to W^{x}$ in distribution, where $W^{x}$ is the Brownian motion started in $x$ with covariance $D$, respectively $\varepsilon X_{\varepsilon^{-\alpha}\cdot}\to L^{x}$ if $\alpha\in (1,2)$ for the $\alpha$-stable process $L$ with generator $(-\mathscr{L}^{\alpha}_{\nu})$ and $L_{0}=x$. The Feynman-Kac formula for the limit process then gives that the limit of $(u^{\varepsilon}(t,x))$ equals $\overline{u}(t,x)=\mathds{E}[f(W^{x})]$ if $\alpha=2$, respectively $\overline{u}(t,x)=\mathds{E}[f(L^{x})]$ if $\alpha\in (1,2)$. ◻ **Remark 39** (Brox diffusion with Lévy noise). *We can apply our theory to obtain the long-time behaviour of the periodic Brox diffusion with Lévy noise (see [@kp] for the construction). As $\alpha\in (1,2]$, [Theorem 35](#thm:main-thm-ph){reference-type="ref" reference="thm:main-thm-ph"} yields that $\abs{X_{t}}\sim t^{1/\alpha}$ for $t\to\infty$.\ In the non-periodic situation, the long-time behaviour of the Brox diffusion with Brownian noise is however very different. 
Brox [@Brox1986] proved that the diffusion gets trapped in local minima of the white noise environment and thus slowed down (that is, for almost all environments: $\abs{X_{t}}\sim \log(t)^{2}$ for $t\to\infty$, cf. [@Brox1986 Theorem 1.4]). In the non-periodic pure stable noise case, the long-time behaviour of the Brox diffusion is an open problem that we leave for future research.*

# Appendix[\[Appendix A\]]{#Appendix A label="Appendix A"}

*Proof of [Lemma 10](#lem:periodic-semi-est){reference-type="ref" reference="lem:periodic-semi-est"}.* To show [\[eq:p-la\]](#eq:p-la){reference-type="eqref" reference="eq:p-la"}, we notice that, by the isometry between the spaces $L^{2}(\mathbb{T}^{d})$ and $l^{2}(\mathbb{Z}^{d})$ under the Fourier transform, $$\begin{aligned} \norm{\Delta_{j}\mathscr{L}^{\alpha}_{\nu}u}_{L^{2}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}\abs{\rho_{j}(k)\psi^{\alpha}_{\nu}(k)\hat{u}(k)}^{2}.\end{aligned}$$ Since $\rho_{j}(k)\neq 0$ only if $\abs{k}\sim 2^{j}$ and $\abs{\psi^{\alpha}_{\nu}(k)}\lesssim\abs{k}^{\alpha}$, we obtain that $$\begin{aligned} \norm{\Delta_{j}\mathscr{L}^{\alpha}_{\nu}u}_{L^{2}(\mathbb{T}^{d})}^{2}\lesssim 2^{2j\alpha}\sum_{k\in\mathbb{Z}^{d}}\abs{\rho_{j}(k)\hat{u}(k)}^{2}=2^{2j\alpha}\norm{\Delta_{j}u}_{L^{2}(\mathbb{T}^{d})}^{2}\end{aligned}$$ and thus $$\begin{aligned} \norm{\mathscr{L}^{\alpha}_{\nu}u}_{\mathscr{C}^{\beta-\alpha}_{2}(\mathbb{T}^{d})}=\sup_{j}2^{j(\beta-\alpha)}\norm{\Delta_{j}\mathscr{L}^{\alpha}_{\nu}u}_{L^{2}(\mathbb{T}^{d})}\lesssim\sup_{j} 2^{j\beta}\norm{\Delta_{j}u}_{L^{2}(\mathbb{T}^{d})}=\norm{u}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}.\end{aligned}$$ To show [\[eq:p-semi\]](#eq:p-semi){reference-type="eqref" reference="eq:p-semi"}, we again use the isometry, such that $$\begin{aligned} \norm{\Delta_{j} P_{t}u}_{L^{2}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}\abs{\rho_{j}(k)\exp(-t\psi^{\alpha}_{\nu}(k))\hat{u}(k)}^{2}.\end{aligned}$$ For $j=-1$, $\rho_{j}$ is supported in a ball around zero and as $\abs{\exp(-t\psi^{\alpha}_{\nu}(k))}\leqslant 1$, the estimate $\norm{\Delta_{j} P_{t}u}_{L^{2}(\mathbb{T}^{d})}^{2}\lesssim (t^{-\theta/\alpha}\vee 1)2^{\theta}\sum_{k\in\mathbb{Z}^{d}}\abs{\rho_{j}(k)\hat{u}(k)}^{2}$ holds trivially for $\theta\geqslant 0$. For $j>-1$, $\rho_{j}$ is supported away from zero and we can use that $\exp(-t\psi^{\alpha}_{\nu}(\cdot))$ is a Schwartz function away from $0$ and thus, for $\abs{k}>0$, $\abs{\exp(-t\psi^{\alpha}_{\nu}(k))}\lesssim (t\psi^{\alpha}_{\nu}(k)+1)^{-\theta/\alpha}\lesssim t^{-\theta/\alpha}\abs{k}^{-\theta}$, for any $\theta\geqslant 0$. Thus, for $j>-1$, we obtain $$\begin{aligned} \norm{\Delta_{j} P_{t}u}_{L^{2}(\mathbb{T}^{d})}^{2}\leqslant 2^{-2j\theta}t^{-\theta/\alpha}\sum_{k\in\mathbb{Z}^{d}}\abs{\rho_{j}(k)\hat{u}(k)}^{2}=2^{-2j\theta}t^{-\theta/\alpha}\norm{\Delta_{j}u}_{L^{2}(\mathbb{T}^{d})}^{2},\end{aligned}$$ such that together [\[eq:p-semi\]](#eq:p-semi){reference-type="eqref" reference="eq:p-semi"} follows. To obtain the remaining estimate, we argue in a similar manner using that, due to Hölder-continuity of the exponential function, for $\theta/\alpha\in [0,1]$, $\abs{\exp(-t\psi^{\alpha}_{\nu}(k))-1}\leqslant\abs{t\psi^{\alpha}_{\nu}(k)}^{\theta/\alpha}\leqslant t^{\theta/\alpha}\abs{k}^{\theta}$.
◻ *Proof of [Lemma 11](#lem:exp-schauder){reference-type="ref" reference="lem:exp-schauder"}.* By the assumption of vanishing zero-order Fourier mode, we have $$\begin{aligned} P_{t}g=\sum_{\abs{k}\geqslant 1}\exp(-t\psi^{\alpha}_{\nu}(k))\hat{g}(k)e_{k}.\end{aligned}$$ Thus, we obtain by $\psi^{\alpha}_{\nu}(k)\geqslant c\abs{k}^{\alpha}$ for some $c>0$ (follows from [Assumption 4](#ass){reference-type="ref" reference="ass"}) the trivial estimate $$\begin{aligned} \norm{\Delta_{j}(P_{t}g)}_{L^{2}(\mathbb{T}^{d})}= \sum_{\abs{k}\geqslant 1}\abs{p_{j}(k)\exp(-t\psi^{\alpha}_{\nu}(k))\hat{g}(k)}^{2}\leqslant \norm{g}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}2^{-j\beta}\exp(-tc).\end{aligned}$$ Together with [Lemma 10](#lem:periodic-semi-est){reference-type="ref" reference="lem:periodic-semi-est"}, we then obtain for any $\theta\geqslant 0$, $$\begin{aligned} \norm{\Delta_{j}(P_{t}g)}_{L^{2}(\mathbb{T}^{d})}\lesssim \norm{g}_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}\min\paren[\big]{2^{-j\beta}\exp(-tc),\, 2^{-j(\beta+\theta)}(t^{-\theta/\alpha}\vee 1)}.\end{aligned}$$ The claim thus follows by interpolation. ◻ # Acknowledgements {#acknowledgements .unnumbered} H.K. is supported by the Austrian Science Fund (FWF) Stand-Alone programme P 34992. Part of the work was done when H.K. was employed at Freie Universität Berlin and funded by the DFG under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). N.P.  gratefully acknowledges financial support by the DFG via Research Unit FOR2402 and through the grant CRC 1114 \"Scaling Cascades in Complex Systems\". CCKW21 Mariko Arisawa. Homogenization of a class of integro-differential equations with Lévy operators. , 34, 2010. Hajer Bahouri, Jean-Yves Chemin, and Raphaël Danchin. . Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2011. Alain Bensoussan, Jacques-Louis Lions, and George Papanicolaou. , volume 5 of *Studies in Mathematics and its Applications*. North-Holland Publishing Co., Amsterdam-New York, 1978. Thomas M. Brox. A one-dimensional diffusion process in a Wiener medium. , 14(4):1206--1218, 1986. Giuseppe Cannizzaro and Khalil Chouk. Multidimensional SDEs with singular drift and universal construction of the polymer measure with white noise potential. , 46(3):1710--1763, 2018. Xin Chen, Zhen-Qing Chen, Takashi Kumagai, and Jian Wang. . , 49(6):2874 -- 2921, 2021. Giuseppe Cannizzaro, Peter K. Friz, and Paul Gassiat. Malliavin calculus for regularity structures: The case of gPAM. , 272:363 -- 419, 2017. Stewart N. Ethier and Thomas G. Kurtz. . Wiley series in probability and mathematical statistics. John Wiley & Sons, New York, 1986. Brice Franke. A functional non-central limit theorem for jump-diffusions with periodic coefficients driven by stable Lévy-noise. , 20:1087--1100, 2007. Massimiliano Gubinelli, Peter Imkeller, and Nicolas Perkowski. Paracontrolled distributions and singular PDEs. , 3(e6), 2015. Alice Guionnet and B. Zegarlinski. Lectures on Logarithmic Sobolev Inequalities. , 36:1--134, 2002. Qiao Huang, Jinqiao Duan, and Renming Song. Homogenization of stable-like Feller processes. , 2018. Qiao Huang, Jinqiao Duan, and Renming Song. Homogenization of nonlocal partial differential equations related to stochastic differential equations with Lévy noise. , 28:1648--1674, 2022. Martin Hairer and Etienne Pardoux. Homogenization of periodic linear degenerate PDEs. , 255(9):2462--2487, 2008. Special issue dedicated to Paul Malliavin. 
Jean Jacod and Albert N. Shiryaev. . Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2nd edition, 2003. Tomasz Komorowski, Claudio Landim, and Stefano Olla. , volume 345 of *Grundlehren der Mathematischen Wissenschaften*. Springer Heidelberg, 2012. Time symmetry and martingale approximation. Helena Kremp and Nicolas Perkowski. Multidimensional SDE with distributional drift and Lévy noise. , 28(3):1757--1783, 2022. Helena Kremp and Nicolas Perkowski. Fractional Kolmogorov equations with singular paracontrolled terminal condition. , 2023. Helena Kremp and Nicolas Perkowski. Rough weak solutions for singular Lévy SDEs. , 2023. Moritz Kassmann, Andrey Piatnitski, and Elena Zhizhina. Homogenization of Lévy-type operators with oscillating coefficients. , 51:3641--3665, 2019. Claude Kipnis and S. R. Srinivasa Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. , 104(1):1--19, 1986. Jörg Martin and Nicolas Perkowski. Paracontrolled distributions on Bravais lattices and weak universality of the 2d parabolic Anderson model. , 55(4):2058--2110, 2019. Jean-Christophe Mourrat and Hendrik Weber. The dynamic $\phi^4_3$ model comes down from infinity. , 356(3):673--753, 2017. Bernt Øksendal. . Universitext. Springer-Verlag, Berlin, sixth edition, 2003. An introduction with applications. Etienne Pardoux. . , pages 79--127, 1998. Grigorios A. Pavliotis and Andrew M. Stuart. , volume 53 of *Texts in Applied Mathematics*. Springer, New York, 2008. Averaging and homogenization. Nicolas Perkowski and Willem van Zuijlen. Quantitative heat-kernel estimates for diffusions with distributional drift. , 2022. Szymon Peszat and Jerzy Zabczyk. . Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2007. Michael Röckner and Feng-Yu Wang. Weak Poincaré inequalities and L2-convergence rates of Markov semigroups. , 185(2):564--603, 2001. Daniel Revuz and Marc Yor. . Springer Heidelberg, 3rd edition, 1999. Nikola Sandrić. Homogenization of periodic diffusion with small jumps. , 435(1):551--577, 2016. Ken-iti Sato. . Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1999. Russell W. Schwab. Periodic homogenization for nonlinear integro-differential equations. , 42(6):2652--2680, 2010. Hans-Jürgen Schmeisser and Hans Triebel. , volume 42 of *Mathematik und ihre Anwendungen in Physik und Technik*. Akademische Verlagsgesellschaft Geest & Portig K.-G., Leipzig, 1987. Alexander K. Zvonkin. A transformation of the phase space of a diffusion process that will remove the drift. , 93(135):129--149, 152, 1974. [^1]: Technische Universität Wien, helena.kremp\@asc.tuwien.ac.at [^2]: Freie Universität Berlin, perkowski\@math.fu-berlin.de [^3]: *[@kp Lemma 2.3]*
--- abstract: | This paper addresses the problem of designing the *continuous-discrete* unscented Kalman filter (UKF) implementation methods. More precisely, the aim is to propose the MATLAB-based UKF algorithms for *accurate* and *robust* state estimation of stochastic dynamic systems. The accuracy of the *continuous-discrete* nonlinear filters heavily depends on how the implementation method manages the discretization error arisen at the filter prediction step. We suggest the elegant and accurate implementation framework for tracking the hidden states by utilizing the MATLAB built-in numerical integration schemes developed for solving ordinary differential equations (ODEs). The accuracy is boosted by the discretization error control involved in all MATLAB ODE solvers. This keeps the discretization error below the tolerance value provided by users, automatically. Meanwhile, the robustness of the UKF filtering methods is examined in terms of the stability to roundoff. In contrast to the pseudo-square-root UKF implementations established in engineering literature, which are based on the one-rank Cholesky updates, we derive the stable square-root methods by utilizing the $J$-orthogonal transformations for calculating the Cholesky square-root factors. address: CEMAT, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal author: - Maria V. Kulikova - Gennady Yu. Kulikov title: Continuous-discrete unscented Kalman filtering framework by MATLAB ODE solvers and square-root methods --- [^1] , Nonlinear Bayesian filtering; Unscented Kalman filter; Square-root methods, ODE solvers, MATLAB. # Introduction {#sect1} The optimal Bayesian filtering for nonlinear dynamic stochastic systems is a well-established area in engineering literature. A wide range of the filtering methods has been developed under assumption of Gaussian systems and by applying a variety of numerical methods for computing the multidimensional Gaussian integrals required for finding the mean and covariance to construct the suboptimal solution [@Ito2000]. Among well-known Gaussian filters designed, we may mention the Gauss-Hermit quadrature filters (GHQF) in [@Haykin2007], the third- and high-degree Cubature Kalman filters (CKF) in [@2010:Haykin; @2013:Automatica:Jia; @2018:Haykin], respectively. Nevertheless, the Gaussian property is rarely preserved in nonlinear Kalman filtering (KF) realm and is often violated while solving real-world stochastic applications. In contrast to the Gaussian filters, the Unscented Kalman filter (UKF) calculates true mean and covariance even in non-Gaussian stochastic models [@2000:Julier]. Motivated by this and some other benefits of the UKF framework, we suggest a few novel MATLAB-based UKF implementation methods for *accurate* and *robust* state estimation of stochastic dynamic systems. The accuracy of any *continuous-discrete* filtering algorithm heavily depends on how the implementation method manages the discretization error arisen. The robustness of the filtering method with the KF-like structure is examined with respect to roundoff errors and the feasibility to derive numerically stable square-root counterparts. Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"} presents a systematic classification of *continuous-discrete UKF implementation methods* already established in engineering literature as well as open problems to be solved in future. 
It provides a basis for understanding various implementation strategies depending on discretization accuracy and numerical stability to roundoff.

**Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"}: classification of continuous-discrete UKF implementation methods.**

- **Type I. Discrete-Discrete Approach** (SDE discretization schemes): the Euler-Maruyama method [@1999:Kloeden:book Chapter 9] of strong order 0.5 and the Itô-Taylor expansion-based methods [@1999:Kloeden:book Chapter 5] of strong order 1.5.
- **Type II. Continuous-Discrete Approach** (ODE solvers): the MDE-based framework [@2007:Sarkka Equation (34)] with $n^2+n$ equations and the SPDE-based framework [@2007:Sarkka Eq. (35)] with $2n^2+n$ equations.
- **Non-square-root implementations** (unstable to roundoff and to discretization errors): Type I variants are studied in [@2019:Leth; @lyons2014series] etc.; Type II variants rely on fixed-stepsize ODE solutions [@2007:Sarkka; @takeno2012numerical], on NIRK solvers with global error (GE) control [@2017:SP:Kulikov], or on MATLAB ODE solvers with local error (LE) control [@KuKu20dIFAC]; their estimation quality is comparable to that of Type I.
- **Square-root (SR) implementations** (ensure the symmetry and positive definiteness of $P_{k|k}$ and improved numerical stability):
  - *Cholesky-type* methods with triangular SR factors; the theoretically derived SR solutions are [@2007:Sarkka Eq. (64)] for the MDEs and [@2007:Sarkka Eq. (35)] for the SPDEs:
    - methods with one-rank Cholesky updates ('pseudo-SR' variants): the Itô-Taylor-based method [@KuKu21aEJCON], the NIRK-based method [@2020:ANM:KulikovKulikova], and the MATLAB-based methods proposed in **this paper**; the remaining variants are **open**;
    - JQR-based algorithms (true SR methods): the Type I methods [@KuKu20bIFAC; @KuKu21aEJCON], the NIRK-based methods [@KuKu20dIFAC; @2020:ANM:KulikovKulikova], and the MATLAB-based methods proposed in **this paper**; the remaining variants are **open**;
  - *SVD-type* methods with full SR factors: the Itô-Taylor-based method [@KuKu21IEEE_TAC]; all other variants are **open**.
- **Key properties**: Type I methods work on a mesh prefixed prior to filtering (neither adaptive nor flexible, might fail in the irregular sampling case, no control of the discretization error), whereas Type II methods allow the discretization error to be regulated (self-adaptive variable-stepsize mesh, irregular sampling intervals are treated automatically, any MATLAB ODE solver can be used). Methods with LE control are, in general, faster but might be less accurate than those with GE control.
We can distinguish two basic approaches routinely adopted in research devoted to the development of continuous-discrete filtering methods [@2012:Frogerais; @2014:Kulikov:IEEE]. The first one implies the use of numerical schemes for solving the given *stochastic differential equation* (SDE) of the system at hand. It is usually done by using either the Euler-Maruyama method or the higher order methods based on the Itô-Taylor expansion [@1999:Kloeden:book]. The left panel of Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"} collects the continuous-discrete UKF algorithms of that type, which are already developed in engineering literature. We stress that the key property of the implementation methodology based on the SDE solvers is that the users are requested to preassign the $L$-step *equidistant mesh* to be applied on each sampling interval $[t_k, t_{k+1}]$ to perform the discretization, i.e. the step size is fixed. This *prefixed* integer $L > 0$ should be chosen prior to filtering and with no information about the discretization error arising. Certainly, this strategy yields UKF implementation methods with a known computational load but with no information about the accuracy of the filtering algorithms. In other words, the implementation framework discussed does not ensure a good estimation quality and might fail due to a high discretization error. For instance, if some measurements are missing during the filtering process, then the pre-defined $L$-step equidistant mesh might not be enough for accurate integration on the longer sampling intervals related to the missing data, and this destroys the filtering algorithm. Finally, the discretization error arising in any SDE solver is a random variable and, hence, it is uncontrollable. An alternative implementation framework assumes the derivation of the related filters' moment differential equations (MDEs) and then the utilization of numerical methods derived for solving ordinary differential equations (ODEs); see the right panel of Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"}. In [@2007:Sarkka], the MDEs are derived for the continuous-discrete UKF as well as the *Sigma Point Differential Equations* (SPDEs), which can be solved instead. Both systems represent ODEs that should be solved on each sampling interval $[t_{k}, t_{k+1}]$. Clearly, one may follow the integration approach with the fixed *equidistant mesh* discussed above. For instance, a few steps of the Runge-Kutta methods are often suggested as the appropriate choice in engineering literature. However, this idea does not provide any additional benefit compared to the first implementation framework summarized in the left panel of Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"}. A rational alternative is to take full advantage of the ODE solvers. 
In particular, the discretization error arisen is now possible to control and to bound, which yields the accurate implementation way. Furthermore, the modern ODE solvers are self-adaptive algorithms, i.e. they generate the *adaptive* integration mesh in automatic way depending on the discretization error control involved for keeping the error within the prefixed tolerance. Such smart implementation strategy provides the accurate, self-adaptive and flexible UKF implementation methods where the users are requested to fix the tolerance value, only. In our previous works, we have proposed the continuous-discrete UKF methods by utilizing the variable-stepsize nested implicit Runge-Kutta (NIRK) formulas with the global error (GE) control suggested in [@2013:Kulikov:IMA]. The main drawback is that the GE control typically yields computationally costly algorithms. The GE is the true numerical error between the exact and numerical solutions, whereas local error (LE) is the error committed for one step of the numerical method, only. The MATLAB's built-in ODE solvers include the LE control that makes the novel continuous-discrete UKF algorithms faster than the previously derived NIRK-based methods. In summary, the Type II implementation framework does not prefix the computation load because the involved ODE solvers generate the adaptive mesh depending on the problem and the discretization error control utilized, but they solve the problems with the given accuracy requirements. The second part of each panel in Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"} is focused on the *square-root* strategies established in engineering literature for deriving the numerically stable (with respect to roundoff errors) filtering algorithms with the KF-like structure. The square-root (SR) methods ensures the theoretical properties of the filter covariance matrix in a finite precision arithmetics, which are the symmetric form and positive definiteness [@2015:Grewal:book Chapter 7]. A variety of the SR filtering methods comes from the chosen factorization $P_{k|k}=S_{k|k}S_{k|k}^{\top}$. The traditional SR algorithms are derived by using the Cholesky decomposition, meanwhile the most recent advances are based on the singular value decomposition (SVD) as shown in [@2020:Automatica:Kulikova; @1986:Oshman]. For the sampled-data estimators (e.g. the CKF, GHQF and UKF), the derivation of the related SR implementations is of special interest because they demand the factorization $P_{k|k}=S_{k|k}S_{k|k}^{\top}$ in each iterate for generating the sigma/cubature/quadrature vectors. The problem of deriving the SR methods for the UKF estimator lies in the possibly negative sigma weights used for the mean and covariance approximation. This prevents the square-root operation and, as a result, the derivation of the SR UKF algorithms. To overcome this obstacle, the *one-rank Cholesky update* procedure has been suggested for computing the SR of covariance matrices every time when the negative weight coefficients appear as explained in [@2001:Merwe]. However, if the downdated matrix is not positive definite, then the one-rank Cholesky update is unfeasible and its failure yields the UKF estimator shutoff again. In other words, such methods do not have the numerical robustness benefit of the *true* SR algorithms where the Cholesky procedure is performed only once. For that reason, the SR UKF methods based on the one-rank Cholesky updates have been called the *pseudo-square-root* ones in [@2009:Haykin p. 
1262]. In this paper, we suggest the MATLAB-based SR and pseudo-SR UKF methods within both the MDE- and SPDE-based implementation frameworks as indicated in the right panel of Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"}. In summary, the novel UKF methods have the following properties: (i) they are accurate due to the automatic discretization error control involved for solving the filter's expectation and covariance differential equations, (ii) they are easy to implement because of the built-in fashion of the MATLAB ODE solver in use, (iii) they are self-adaptive because the discretization mesh at every prediction step is generated automatically by the chosen ODE solver according to the discretization error control rule implemented by the MATLAB ODE solver, (iv) they are flexible, that is, any other MATLAB ODE solver of interest can be easily implemented together with its automatic discretization error control in use, and (v) they are numerically stable with respect to roundoff due to the square-root implementation fashions derived in this paper. Thus, we provide practitioners with a diversity of algorithms giving a fair possibility for choosing any of them depending on a real-world application and requirements. Finally, it is worth noting here that the SVD-based filtering is still an open area for a future research; see Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"}. More precisely, the SVD solution for the Euler-Maruyama discretization-based UKF is not difficult to derive by taking into account the most recent result on the Itô-Taylor expansion-based UKF in [@KuKu21IEEE_TAC]. In contrast, the derivation of the SVD-based algorithms for the Type II framework is complicated. It demands both the MDEs and SPDEs to be re-derived in terms of the SVD SR factors instead of the Cholesky ones proposed in [@2007:Sarkka]. This is an open problem for a future research. # Continuous-discrete Unscented Kalman filter {#problem:statement} Consider continuous-discrete stochastic system $$\begin{aligned} dx(t) & = f\bigl(t,x(t)\bigr)dt+Gd\beta(t), \quad t>0, \label{eq1.1} \\ z_k & = h(k,x(t_{k}))+v_k, \quad k =1,2,\ldots \label{eq1.2}\end{aligned}$$ where $x(t)$ is the $n$-dimensional unknown state vector to be estimated and $f:\mathbb R\times\mathbb R^{n}\to\mathbb R^{n}$ is the time-variant drift function. The process uncertainty is modelled by the additive noise term where $G \in \mathbb R^{n\times q}$ is the time-invariant diffusion matrix and $\beta(t)$ is the $q$-dimensional Brownian motion whose increment $d\beta(t)$ is Gaussian white process independent of $x(t)$ and has the covariance $Q\,dt>0$. Finally, the $m$-dimensional measurement vector $z_k = z(t_{k})$ comes at some discrete-time points $t_k$ with the sampling rate (sampling period) $\Delta_k=t_{k}-t_{k-1}$. The measurement noise term $v_k$ is assumed to be a white Gaussian noise with the zero mean and known covariance $R_k>0$, $R \in \mathbb R^{m\times m}$. Finally, the initial state $x(t_0)$ and the noise processes are assumed to be statistically independent, and $x(t_0) \sim {\mathcal N}(\bar x_0,\Pi_0)$, $\Pi_0 > 0$. The key idea behind the UKF estimation approach is the concept of Unscented Transform (UT). 
Following [@2000:Julier], the UT implies a set of $2n+1$ deterministically selected vectors called the *sigma points* generated by $$\begin{aligned} {\mathcal X}_{i} & = \hat x + S_x \xi_i, \quad i = 0, \ldots, 2n \label{eq:ukf:vec} \\ \xi_0 & = \mathbf{0}, \xi_j = \sqrt{n+\lambda} \; e_j, \xi_{n+j} = -\sqrt{n+\lambda} \; e_j \label{eq:ukf:points}\end{aligned}$$ where $e_j$ ($j=1,\ldots, n$) stands for the $j$-th unit coordinate vector in ${\mathbb R}^n$. The matrix $S_x$ is a square-root factor of $P_x$, i.e. $P_x = S_xS_x^{\top}$ and it is traditionally defined by using the Cholesky decomposition. Throughout the paper, we consider the lower triangular Cholesky factors. Following the common UKF presentation proposed in [@2001:Merwe], three pre-defined scalars $\alpha$, $\beta$ and $\kappa$ should be given to calculate the weight coefficients as follows: $$\begin{aligned} w^{(m)}_0 & =\frac{\lambda}{n+\lambda}, & w^{(m)}_i & = \frac{1}{2n+2\lambda}, \label{eq:ukf:wmean} \\ w^{(c)}_0 & =\frac{\lambda}{n+\lambda}+1-\alpha^2+\beta, & w^{(c)}_i & = \frac{1}{2n+2\lambda} \label{eq:ukf:wcov}\end{aligned}$$ where $i=1, \ldots, 2n$, $\lambda=\alpha^2(\kappa+n)-n$ and the secondary scaling parameters $\beta$ and $\kappa$ can be used for a further filter's tuning in order to match higher moments. The time update step of the *continuous-discrete* UKF methods within Type II framework presented in Table [\[Tab:survay\]](#Tab:survay){reference-type="ref" reference="Tab:survay"} implies the numerical integration schemes for solving the MDEs on each sampling interval $[t_{k-1}, t_{k}]$ as follows [@2007:Sarkka]: $$\begin{aligned} \!\!\!\frac{d\hat x(t)}{dt} & = f\bigl(t,{\mathbb X}(t)\bigr) w^{(m)}, \label{UKF:MDE1} \\ \!\!\!\frac{dP(t)}{dt}& = {\mathbb X}(t){\mathbb W}f^{\top}\bigl(t,{\mathbb X}(t)\bigr)\! +\! f\bigl(t,{\mathbb X}(t)\bigr){\mathbb W}{\mathbb X}^{\top}(t)\! +\! GQG^{\top}\!\!\! \label{UKF:MDE2}\end{aligned}$$ where ${\mathbb X}(t)$ stands for the matrix collected from the sigma points ${\mathcal X}_{i}(t)$ defined by [\[eq:ukf:vec\]](#eq:ukf:vec){reference-type="eqref" reference="eq:ukf:vec"} around the mean $\hat x(t)$ through the matrix square-root $P^{1/2}(t)$. They are located by columns in the discussed matrix, i.e. ${\mathbb X}(t) = \Bigl[{\mathcal X}_{0}(t), \ldots, {\mathcal X}_{2n}(t) \Bigr]$ is of size $n\times(2n+1)$. Additionally, the weight matrix ${\mathbb W}$ is defined as follows: $$\begin{aligned} w^{(m)} & =\bigl[w^{(m)}_0,\ldots,w^{(m)}_{2n}\bigr]^\top, \; w^{(c)} =\bigl[w^{(c)}_0,\ldots,w^{(c)}_{2n}\bigr]^\top, \label{eq:w_c}\\ {\mathbb W} & = \bigl[I_{2n+1}-\mathbf{1}_{2n+1}^\top\otimes w^{(m)}\bigr]\mbox{\rm diag}\bigl\{w^{(c)}_0,\ldots, w^{(c)}_{2n}\bigr\} \nonumber \\ & \times \bigl[I_{2n+1}-\mathbf{1}_{2n+1}^\top\otimes w^{(m)}\bigr]^\top \label{eq:W_matrix}\end{aligned}$$ where the vector ${\mathbf 1}_{2n+1}$ is the unitary column of size $2n+1$ and the symbol $\otimes$ is the Kronecker tensor product. Alternatively, one may solve the system of the UKF SPDEs derived in [@2007:Sarkka]: $$\begin{aligned} & \frac{d{\mathbb X}^{\prime}_i(t)}{dt} = f\bigl(t,{\mathbb X}(t)\bigr) w^{(m)} + \sqrt{n+\lambda} \nonumber \\ &\times \bigl[{\bf 0}, \: P^{1/2}(t)\Phi(M(t)), \: -P^{1/2}(t)\Phi(M(t))\bigr]_i \label{eq2.9}\end{aligned}$$ where the subscript $i$ refers to the $i$-th column in the related matrices, $i=0, \ldots 2n$. The notation ${\bf 0}$ stands for the zero column. 
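To make the construction concrete, a minimal MATLAB sketch of the weights and sigma points defined above might look as follows; the function name `ut_sigma_points` is our illustrative choice (not a fixed API), MATLAB indices start at 1 (so the $0$-th sigma point occupies the first column), and a lower triangular `chol` factor is used throughout, as in the paper.

```matlab
% Minimal sketch (illustrative): unscented transform weights w^(m), w^(c),
% the weight matrix W and the sigma points around the mean xhat.
function [Xsig, wm, wc, W] = ut_sigma_points(xhat, P, alpha, beta, kappa)
n      = numel(xhat);
lambda = alpha^2*(n + kappa) - n;
wm     = [lambda/(n+lambda); repmat(1/(2*(n+lambda)), 2*n, 1)];
wc     = wm;  wc(1) = wc(1) + 1 - alpha^2 + beta;
E      = eye(2*n+1) - repmat(wm, 1, 2*n+1);    % I - 1^T (x) w^(m)
W      = E*diag(wc)*E';                        % weight matrix
S      = chol(P, 'lower');                     % P = S*S'
xi     = sqrt(n+lambda)*[zeros(n,1), eye(n), -eye(n)];
Xsig   = repmat(xhat, 1, 2*n+1) + S*xi;        % columns are X_0,...,X_2n
end
```

With the matrix `W` at hand, the covariance expressions above are obtained in vectorized form, e.g. `Zsig*W*Zsig'` reproduces ${\mathbb Z}{\mathbb W}{\mathbb Z}^{\top}$ for a matrix of propagated sigma points `Zsig`.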
The matrix $M(t)$ is computed by $$\begin{aligned} M(t) & =P^{-1/2}(t)\left[{\mathbb X}(t){\mathbb W} f^{\top}\bigl(t,{\mathbb X}(t)\bigr) \right. \nonumber \\ & \left. + f\bigl(t,{\mathbb X}(t)\bigr) {\mathbb W}{\mathbb X}^{\top}(t) + GQG^\top\right]P^{-\top/2}(t) \label{eq2.10}\end{aligned}$$ with the mapping $\Phi(\cdot)$ that returns a lower triangular matrix defined as follows: (1) split the argument matrix $M$ as $M=\bar L + D + \bar U$ where $\bar L$ and $\bar U$ are, respectively, a strictly lower and upper triangular parts of $M$, and $D$ is its main diagonal; (2) compute $\Phi(M) = \bar L + 0.5 D$ for any argument matrix $M$. Next, the UKF measurement update step at any time instance $t_k$ implies the generation of sigma points ${\mathcal X}_{i,k|k-1}$, $i=0,\ldots, 2n$, around the predicted mean $\hat x_{k|k-1}$ by formula [\[eq:ukf:vec\]](#eq:ukf:vec){reference-type="eqref" reference="eq:ukf:vec"} with the square-root $P_{k|k-1}^{1/2}$ of the predicted covariance matrix $P_{k|k-1}$ together with the weights in [\[eq:ukf:wmean\]](#eq:ukf:wmean){reference-type="eqref" reference="eq:ukf:wmean"}, [\[eq:ukf:wcov\]](#eq:ukf:wcov){reference-type="eqref" reference="eq:ukf:wcov"}. For system [\[eq1.1\]](#eq1.1){reference-type="eqref" reference="eq1.1"}, [\[eq1.2\]](#eq1.2){reference-type="eqref" reference="eq1.2"}, this allows for the following effective calculation [@2007:Sarkka p. 1638]: $$\begin{aligned} {\mathbb Z}_{k|k-1} & =h\bigl(k,{\mathbb X}_{k|k-1}\bigr), \quad \hat z_{k|k-1} ={\mathbb Z}_{k|k-1}w^{(m)}, \label{ckf:zpred} \\ R_{e,k} & ={\mathbb Z}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}+R_k, \label{ckf:rek} \\ P_{xz,k} & ={\mathbb X}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}. \label{ckf:pxy} \\ \hat x_{k|k} & =\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1}), \label{ckf:state}\\ {K}_{k} & =P_{xz,k}R_{e,k}^{-1}, P_{k|k} = P_{k|k-1} - {K}_k R_{e,k} {K}_k^{\top} \label{ckf:gain}\end{aligned}$$ **MDE-based UKF: Algorithm 1** **SPDE-based UKF: Algorithm 2** ------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------ [Initialization:]{.smallcaps} 0\. Set initials $\hat x_{0|0} = \bar x_0$, $P_{0|0} = \Pi_0$. Define $\xi_i$, $i=0,\ldots, 2n$ by [\[eq:ukf:points\]](#eq:ukf:points){reference-type="eqref" reference="eq:ukf:points"} and weights in [\[eq:w_c\]](#eq:w_c){reference-type="eqref" reference="eq:w_c"}, [\[eq:W_matrix\]](#eq:W_matrix){reference-type="eqref" reference="eq:W_matrix"}. Set the options in [\[eq2.136\]](#eq2.136){reference-type="eqref" reference="eq2.136"}. [Time]{.smallcaps} 1\. Cholesky dec.: $P_{k-1|k-1}=P_{k-1|k-1}^{1/2}P_{k-1|k-1}^{\top/2}$. Generate ${\mathcal X}_{i,k-1|k-1}=\hat x_{k-1|k-1}+P_{k-1|k-1}^{1/2}\xi_i$, $i=0, \ldots, 2n$. [Update (TU):]{.smallcaps} 2\. Set $XP_{k-1|k-1} = [\hat x_{k-1|k-1}, P_{k-1|k-1}]$. 2\. Set ${\mathbb X}_{k-1|k-1}=\bigl[{\mathcal X}_{0,k-1|k-1},\ldots,{\mathcal X}_{2n,k-1|k-1}\bigr]$. 3\. Reshape $x^{(0)}_{k-1} = XP_{k-1|k-1}\verb"(:)"$. 5\. 
$\widetilde{{\mathbb X}}^{(0)}_{k-1|k-1} = {\mathbb X}_{k-1|k-1}\verb"(:)"$. 4\. Integrate $x_{k|k-1}\leftarrow \texttt{odesolver[MDEs},x^{(0)}_{k-1},[t_{k-1},t_k]]$. 4\. $\widetilde{{\mathbb X}}_{k|k-1}\leftarrow \texttt{odesolver[SPDEs},\widetilde{{\mathbb X}}^{(0)}_{k-1|k-1},[t_{k-1},t_k]]$. 5\. $XP_{k|k-1} \leftarrow \texttt{reshape}(x_{k|k-1}^{\texttt{end}},n,n+1)$. 5\. ${\mathbb X}_{k|k-1} \leftarrow \texttt{reshape}(\widetilde{{\mathbb X}}_{k|k-1}^{\texttt{end}},n,2n+1)$. 6\. Recover $\hat x_{k|k-1} = XP_{k|k-1}$`(:,1)`. 6\. Recover $\hat x_{k|k-1} = {\mathcal X}_{0,k|k-1}=[{\mathbb X}_{k|k-1}]_1$. 7\. Recover $P_{k|k-1} = XP_{k|k-1}$`(:,2:n+1)`. 7\. $P^{1/2}_{k|k-1} = \texttt{tril}([{\mathbb X}_{k|k-1}]_{2:n+1}-\hat x_{k|k-1})/\sqrt{n+\lambda}$. [Measurement]{.smallcaps} 7a. Cholesky decomposition $P_{k|k-1}=P_{k|k-1}^{1/2}P_{k|k-1}^{\top/2}$ $-$ Square-root $P^{1/2}_{k|k-1}$ is already available from TU. [Update (MU):]{.smallcaps} 7b. Get ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are already defined from TU. 7c. Form ${\mathbb X}_{k|k-1}=\bigl[{\mathcal X}_{0,k|k-1},\ldots,{\mathcal X}_{2n,k|k-1}\bigr]$. $-$ The matrix ${\mathbb X}_{k|k-1}$ is already available from TU. 8\. Propagate ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$, $i=0,\ldots,2n$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$ of size $m\times (2n+1)$. 9\. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$, $R_{e,k}={\mathbb Z}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}+R_k$, $P_{xz,k}={\mathbb X}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}$ and ${K}_{k}=P_{xz,k}R_{e,k}^{-1}$. 10\. Update the state $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$ and the covariance $P_{k|k}=P_{k|k-1} - {K}_k R_{e,k} {K}_k^{\top}$. Auxiliary $[\tilde x(t)] \leftarrow \proc{MDEs}(x(t),t,n,\lambda,{\mathbb W},w^{(m)},G,Q)$ $[\widetilde{{\mathbb X}}(t)] \leftarrow \proc{SPDEs}(\widetilde{{\mathbb X}}(t),t,n,\lambda,{\mathbb W},w^{(m)},G,Q)$ Functions Get matrix $X = \verb"reshape"(x,n,n+1)$; Get matrix ${\mathbb X}(t) \leftarrow \verb"reshape"(\widetilde{{\mathbb X}}(t),n,2n+1)$; Recover state $\hat x(t) = [X]_1$; Recover $\hat x(t) = [{\mathbb X}(t)]_{1}$; Recover covariance $P(t) = [X]_{2~:~n+1}$; Recover $P^{1/2}(t) = \verb"tril"\bigl([{\mathbb X}(t)]_{2:n+1}-\hat x(t)\bigr)/\sqrt{n+\lambda}$; Factorize $P(t)$, generate nodes [\[eq:ukf:points\]](#eq:ukf:points){reference-type="eqref" reference="eq:ukf:points"}, [\[eq:ukf:vec\]](#eq:ukf:vec){reference-type="eqref" reference="eq:ukf:vec"} and get ${\mathbb X}(t)$; Propagate $f\bigl(t,{\mathbb X}(t)$ and find ${\mathbb X}(t){\mathbb W} f^{\top}\bigl(t,{\mathbb X}(t)\bigr)$; Propagate $f\bigl(t,{\mathbb X}(t)$ and find ${\mathbb X}(t){\mathbb W} f^{\top}\bigl(t,{\mathbb X}(t)\bigr)$; Find $M(t)$ by [\[eq2.10\]](#eq2.10){reference-type="eqref" reference="eq2.10"} and split $M=\bar L + D + \bar U$; Compute the right-hand side of [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"}; Compute $\Phi(M) = \bar L + 0.5 D$; Get extended matrix $\tilde X = [d\hat x(t)/dt, dP(t)/dt]$; Find the right-hand side of [\[eq2.9\]](#eq2.9){reference-type="eqref" reference="eq2.9"}, i.e. get $A = d{\mathbb X}(t)/dt$; Reshape into a vector form $\tilde x(t)=\tilde X\verb"(:)"$. Reshape into a vector form $\widetilde{{\mathbb X}}(t)=A\verb"(:)"$. 
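As a brief illustration of the shared measurement update (steps 8--10 above), a vectorized MATLAB sketch might read as follows; the names `xpred`, `Ppred`, `Xsig`, `Rk`, `zk` and the measurement function handle `h` are our illustrative choices and correspond to the predicted mean and covariance, the predicted sigma points, the measurement noise covariance, the current measurement and the function $h(k,\cdot)$, respectively.

```matlab
% Minimal sketch of the measurement update of Algorithms 1 and 2.
Zsig = zeros(m, 2*n+1);
for i = 1:2*n+1
    Zsig(:,i) = h(k, Xsig(:,i));     % propagate the predicted sigma points
end
zhat = Zsig*wm;                      % predicted measurement
Rek  = Zsig*W*Zsig' + Rk;            % residual covariance R_{e,k}
Pxz  = Xsig*W*Zsig';                 % cross-covariance P_{xz,k}
Kk   = Pxz/Rek;                      % gain, i.e. the solution of Kk*Rek = Pxz
xhat = xpred + Kk*(zk - zhat);       % filtered state estimate
Pkk  = Ppred - Kk*Rek*Kk';           % filtered error covariance
```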
To provide the general MATLAB-based UKF implementation schemes, we next denote the MATLAB's built-in ODE solver to be utilized by `odesolver`. This means that the users are free to choose any method from [@2005:Higham:book Table 12.1]. The key idea of using the MATLAB's built-in ODE solvers for implementing the continuous-discrete UKF is the involved LE control that allows for bounding the discretization error. The solver creates the adaptive variable stepsize mesh in such a way that the discretization error arisen is less than the tolerance value pre-defined by user. We stress that this is done in automatic way by MATLAB's built-in functions and no extra coding is required from users except for setting the ODE solvers' options prior to filtering as follows: $$\label{eq2.136}\scriptsize \texttt{options = odeset('AbsTol',LET,'RelTol',LET,'MaxStep',0.1)}$$ where the parameters `AbsTol` and `RelTol` determine portions of the *absolute* and *relative* LE utilized in the built-in control mechanization, respectively, and $0.1$ limits the maximum step size $\tau^{\rm max}$ for numerical stability reasons. Formula [\[eq2.136\]](#eq2.136){reference-type="eqref" reference="eq2.136"} implies that `AbsTol=RelTol=LET` where the parameter `LET` sets the requested local accuracy of numerical integration with the MATLAB code. The general MATLAB-based continuous-discrete UKF strategies are proposed within both the MDE and SPDE approaches in Table [\[Tab:1\]](#Tab:1){reference-type="ref" reference="Tab:1"}. Since the MATLAB's built-in ODE solvers are vector-functions, one should re-arrange both the MDEs in [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"} and SPDEs in [\[eq2.9\]](#eq2.9){reference-type="eqref" reference="eq2.9"} in the form of unique vector of functions, which is to be sent to the ODE solver. The MATLAB built-in function `reshape` performs this operation. More precisely, `A(:)` returns a single column vector of size $M\times N$ collected from columns of the given array $A \in {\mathbb R}^{N\times M}$. Next, the built-in function `tril(A)` extracts a lower triangular part from an array $A$. Finally, the notation $\texttt{[A]}_i$ stands for the i-th column of any matrix $A$, meanwhile $\texttt{[A]}_{i:j}$ means the matrix collected from the columns of $A$ taken from the $i$th column up to the $j$th one. We briefly discuss Algorithms 1, 2 and remark that they are of *conventional*-type implementations since the entire error covariance matrix, $P_{k|k}$, is updated. As a result, the Cholesky decomposition is required at each filtering step for generating the sigma vectors. However, the SPDE-based implementation in Algorithm 2 demands one less Cholesky factorization than the MDE-based scheme in Algorithm 1. Indeed, both methods imply the Cholesky decomposition at the time update step 1 but Algorithm 2 skips the factorization at step 7 of the measurement update because the propagated sigma vectors are already available. Thus, the Cholesky decomposition is avoided at each measurement update step in Algorithm 2. In step 7 of Algorithm 2, the matrix $P^{1/2}_{k|k-1}$ is recovered by taking into account formulas [\[eq:ukf:vec\]](#eq:ukf:vec){reference-type="eqref" reference="eq:ukf:vec"}, [\[eq:ukf:points\]](#eq:ukf:points){reference-type="eqref" reference="eq:ukf:points"}. The dimension of the system to be solved in Algorithm 1 is approximately two times less than in Algorithm 2. 
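For illustration, a minimal sketch of the MDE-based prediction step (steps 2--7 of Algorithm 1) might be coded as follows; we pick `ode45` purely as an example, any solver from [@2005:Higham:book Table 12.1] can be substituted, and `mde_rhs` as well as `f_drift` (the drift function $f(t,x)$) are hypothetical names of ours.

```matlab
% Minimal sketch of the MDE-based time update over [t_{k-1}, t_k];
% xhat, P come from the previous step, options from odeset above.
x0      = reshape([xhat, P], [], 1);               % pack [xhat, P] column-wise
rhs     = @(t, x) mde_rhs(t, x, n, lambda, W, wm, G, Q);
[~, xs] = ode45(rhs, [tkm1, tk], x0, options);     % LE-controlled integration
XP      = reshape(xs(end, :), n, n+1);             % terminal value of the ODEs
xpred   = XP(:, 1);                                % predicted mean
Ppred   = XP(:, 2:n+1);                            % predicted covariance

function dx = mde_rhs(t, x, n, lambda, W, wm, G, Q)   % hypothetical helper
X    = reshape(x, n, n+1);
xh   = X(:, 1);  P = X(:, 2:n+1);
S    = chol(P, 'lower');                           % fails if P loses positivity
xi   = sqrt(n+lambda)*[zeros(n,1), eye(n), -eye(n)];
Xsig = repmat(xh, 1, 2*n+1) + S*xi;                % sigma points around xh
Fx   = zeros(n, 2*n+1);
for i = 1:2*n+1, Fx(:,i) = f_drift(t, Xsig(:,i)); end
dxh  = Fx*wm;                                      % right-hand side of the mean MDE
dP   = Xsig*W*Fx' + Fx*W*Xsig' + G*Q*G';           % right-hand side of the covariance MDE
dx   = reshape([dxh, dP], [], 1);
end
```

The SPDE-based variant is organized identically, except that the sigma-point matrix itself is packed into the state vector sent to the ODE solver.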
The MDEs in [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"} consist of $n^2+n$ equations, meanwhile the SPDEs in [\[eq2.9\]](#eq2.9){reference-type="eqref" reference="eq2.9"} contain $(2n+1)n$ equations to be solved. This means that the MDE-based UKF implementation is faster than the related SPDE-based Algorithm 2, although the latter skips one Cholesky factorization at each iterate. Finally, the auxiliary functions summarized in Table [\[Tab:1\]](#Tab:1){reference-type="ref" reference="Tab:1"} are intended for computing the right-hand side functions in [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"} and [\[eq2.9\]](#eq2.9){reference-type="eqref" reference="eq2.9"}, respectively. They are presented in a vector form required by any built-in MATLAB ODEs integration scheme. It should be stressed that the MDE-based approach for implementing a continuous-discrete UKF technique demands the Cholesky factorization in each iterate of the auxiliary function for computing the sigma nodes and then for calculating the right-hand side expressions in formulas [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"}. This makes the MDE-based implementation strategy vulnerable to roundoff. The filtering process is interrupted when the Cholesky factorization is unfeasible. We conclude that the MDE-based implementation way is much faster but it is also less numerically stable compared to the SPDE-based approach. Recall, the SPDE- MATLAB-based UKF implementation method controls the discretization error arisen in $(2n+1)n$ equations. This slows down the algorithm and might yield unaffordable execution time. # SR solution with one-rank Cholesky updates {#sect3b} Historically, the first SR methods suggested for the UKF estimator have utilized the *one-rank Cholesky update* procedure for treating the negative entries appeared in the coefficient vector $w^{(c)}$ and the related formulas for computing the covariance matrices. More precisely, the UKF SR solution has been developed for a particular UKF parametrization with $\alpha=1$, $\beta=0$ and $\kappa=3-n$ in [@2001:Merwe]. This set of parameters yields the negative sigma points' weights $w^{(m)}_0$ and $w^{(c)}_0$ when the number of states to be estimated is $n>3$. Following the cited works, the one-rank Cholesky update is utilized as follows. Assume that $\tilde S$ is the original lower triangular Cholesky factor of matrix $\tilde P$. We are required to find the Cholesky factor of the one-rank updated matrix $P = \tilde P \pm uu^{\top}$. Given $\tilde S$ and vector $u$, the MATLAB's built-in function $S = \texttt{cholupdate}\bigl\{\tilde S,u,\pm \bigr\}$ finds the Cholesky factor $S$ of the matrix $P$, which we are looking for. If $u$ is a matrix, then the result is $p$ consecutive updates of the Cholesky factor using the $p$ columns of $u$. An error message reports when the downdated matrix is not positive definite and the failure yields the UKF estimator shutoff. Following [@2009:Haykin], the SR UKF algorithms developed within the one-rank Cholesky update procedure are called the *pseudo-SR* methods. In this paper, we suggest a general *array* approach for implementing the pseudo-SR UKF methods. 
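Before turning to the array form, the following toy snippet (with arbitrary illustrative numbers) shows the update/downdate behaviour of the built-in routine discussed above.

```matlab
% Toy illustration of the one-rank Cholesky update/downdate.
Ptil = [4 1; 1 3];                 % an illustrative covariance matrix
Stil = chol(Ptil);                 % upper triangular, Ptil = Stil'*Stil
u    = [0.5; 1.0];
Sup  = cholupdate(Stil, u, '+');   % factor of Ptil + u*u'
Sdn  = cholupdate(Stil, u, '-');   % factor of Ptil - u*u'; an error is
                                   % raised if the downdated matrix is
                                   % not positive definite
```

Note that the built-in routine operates on upper triangular factors, which is why the algorithms in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"} pass the transposed factors (e.g. $P_{k|k-1}^{\top/2}$) to `cholupdate` and transpose the result back.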
The array form implies that the information available is collected into the unique pre-array and, next, the QR factorization is performed to obtain the post-array. All SR factors required are then simply read-off from the obtained post-array. Such algorithms are very effective in MATLAB because of vectorized operations. The key idea of our solution is to apply the QR decomposition to the part of the pre-array that corresponds to the positive entries in coefficient vector $w^{(c)}$. Next, the one-rank Cholesky updates are utilized for updating the Cholesky factors obtained by using the rows related to the negative entries in $w^{(c)}$. We illustrate our solution by the UKF parametrization with $\alpha=1$, $\beta=0$ and $\kappa=3-n$ as suggested in [@2001:Merwe]. This approach can be extended in a proper way on other UKF parametrization variants. **MDE-based UKF: Algorithm 1a** **SPDE-based UKF: Algorithm 2a** ------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------- [Initialization:]{.smallcaps} 0\. Cholesky decomposition: $\Pi_0 = \Pi_0^{1/2}\Pi_0^{\top/2}$, $\Pi_0^{1/2}$ is a lower triangular matrix. Define $\xi_i$, $i=0,\ldots, 2n$ by [\[eq:ukf:points\]](#eq:ukf:points){reference-type="eqref" reference="eq:ukf:points"}. Set initials $\hat x_{0|0} = \bar x_0$, $P_{0|0}^{1/2} = \Pi_0^{1/2}$ and options by [\[eq2.136\]](#eq2.136){reference-type="eqref" reference="eq2.136"}. Find weights in [\[eq:w_c\]](#eq:w_c){reference-type="eqref" reference="eq:w_c"}, [\[eq:W_matrix\]](#eq:W_matrix){reference-type="eqref" reference="eq:W_matrix"} and $|{\mathbb W}|^{1/2}$, $S$ by [\[class_W\]](#class_W){reference-type="eqref" reference="class_W"}, [\[class_S\]](#class_S){reference-type="eqref" reference="class_S"}. [Time]{.smallcaps} 1\. Generate the sigma nodes ${\mathcal X}_{i,k-1|k-1}=\hat x_{k-1|k-1}+P_{k-1|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. [Update (TU):]{.smallcaps} 2\. $XP_{k-1|k-1} = [\hat x_{k-1|k-1}, P^{1/2}_{k-1|k-1}]$, $x^{(0)}_{k-1} = XP_{k-1|k-1}\verb"(:)"$ $\to$ Repeat steps 2 -- 7 of Algorithm 2 3\. Integrate $x_{k|k-1}\leftarrow \texttt{odesolver[MDEs-SR},x^{(0)}_{k-1},[t_{k-1},t_k]]$. summarized in Table [\[Tab:1\]](#Tab:1){reference-type="ref" reference="Tab:1"}. 4\. Read-off $XP_{k|k-1} \leftarrow \texttt{reshape}(x_{k|k-1}^{\texttt{end}},n,n+1)$ 5\. Get $\hat x_{k|k-1} = XP_{k|k-1}$`(:,1)`, $P_{k|k-1}^{1/2} = XP_{k|k-1}$`(:,2:n+1)`. [Measurement:]{.smallcaps} 6\. Generate ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. [Update (MU):]{.smallcaps} 7\. Transform ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 8\. Build ${\mathbb A}_k = \begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{2:2n+1} & R_k^{1/2} \end{bmatrix}$. 
Factorize ${\mathbb R}_k \leftarrow \mbox{\texttt{qr}}({\mathbb A}_k^{\top})$. Read-off main $m\times m$ block $\tilde S_{R_{e,k}} = [{\mathbb R}_k]_m$. 9\. Update $S_{R_{e,k}} = \mbox{\texttt{cholupdate}}(\tilde S_{R_{e,k}},[{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1},S_{1,1})$ and $R_{e,k}^{1/2} = S_{R_{e,k}}^{\top}$. 10\. Find $P_{x,z} = {\mathbb X}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}$, ${K}_{k}=P_{x,z}R_{e,k}^{-{\top}/2}R_{e,k}^{-1/2}$ and $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. 11\. Apply $m$ one-rank updates $S_{k|k}= \mbox{\texttt{cholupdate}}(P_{k|k-1}^{{\top}/2},{\mathbb U},'-')$ where ${\mathbb U} = {K}_kR_{e,k}^{1/2}$ and get $P^{1/2}_{k|k} = S_{k|k}^{\top}$. Auxiliary $[\tilde x(t)] \leftarrow \proc{MDEs-SR}(x(t),t,n,\lambda,{\mathbb W},w^{(m)},G,Q)$ $[\widetilde{{\mathbb X}}(t)] \leftarrow \proc{SPDEs}(\widetilde{{\mathbb X}}(t),t,n,\lambda,{\mathbb W},w^{(m)},G,Q)$ Functions $X = \verb"reshape"(x,n,n+1)$, $\hat x(t) = [X]_1$, $P^{1/2}(t) = [X]_{2:n+1}$; $\rightarrow$ Repeat from Table [\[Tab:1\]](#Tab:1){reference-type="ref" reference="Tab:1"}. Generate nodes [\[eq:ukf:points\]](#eq:ukf:points){reference-type="eqref" reference="eq:ukf:points"}, [\[eq:ukf:vec\]](#eq:ukf:vec){reference-type="eqref" reference="eq:ukf:vec"}, get ${\mathbb X}(t)$ and find ${\mathbb X}(t){\mathbb W} f^{\top}\bigl(t,{\mathbb X}(t)\bigr)$; Find $M(t)$ by [\[eq2.10\]](#eq2.10){reference-type="eqref" reference="eq2.10"} and split $M=\bar L + D + \bar U$; Compute $d\hat x(t)/dt$ and $dP^{1/2}(t)/dt$ by [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[eq3.2:new\]](#eq3.2:new){reference-type="eqref" reference="eq3.2:new"}; Get $\tilde X = [d\hat x(t)/dt, dP^{1/2}(t)/dt]$, reshape $\tilde x(t)=\tilde X\verb"(:)"$. **MDE-based UKF: Algorithm 1b** **SPDE-based UKF: Algorithm 2b** [Initialization:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a. $\rightarrow$ Repeat from Algorithm 2a. [Time Update:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a. $\rightarrow$ Repeat from Algorithm 2a. [Measurement:]{.smallcaps} 6\. Generate ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. [Update (MU):]{.smallcaps} 7\. Transform ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 8\. Build pre-array ${\mathbb A}_k = \begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{2:2n+1} & R_k^{1/2} \end{bmatrix}$. Factorize ${\mathbb R}_k \leftarrow \mbox{\texttt{qr}}({\mathbb A}_k^{\top})$. Read-off $\tilde S_{R_{e,k}} = [{\mathbb R}_k]_m$. 9\. Update $S_{R_{e,k}} = \mbox{\texttt{cholupdate}}(\tilde S_{R_{e,k}},[{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1},S_{1,1})$ and $R_{e,k}^{1/2} = S_{R_{e,k}}^{\top}$. 10\. Find $P_{x,z} = {\mathbb X}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}$, ${K}_{k}=P_{x,z}R_{e,k}^{-{\top}/2}R_{e,k}^{-1/2}$. Compute $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. 11\. Build ${\mathbb A}_k = \begin{bmatrix} [\left({\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right)|{\mathbb W}|^{1/2}]_{2:2n+1} & K_kR_k^{1/2} \end{bmatrix}$. Factorize ${\mathbb R}_k \leftarrow \mbox{\texttt{qr}}({\mathbb A}_k^{\top})$. Read-off $\tilde S_{P_{k|k}} = [{\mathbb R}_k]_n$. 12\. 
$S_{P_{k|k}} = \mbox{\texttt{cholupdate}}(\tilde S_{P_{k|k}},[\left({\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right)|{\mathbb W}|^{1/2}]_{1},S_{1,1})$ and $P_{k|k}^{1/2} = S_{P_{k|k}}^{\top}$. **MDE-based UKF: Algorithm 1c** **SPDE-based UKF: Algorithm 2c** [Initialization:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a. $\rightarrow$ Repeat from Algorithm 2a. [Time Update:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a. $\rightarrow$ Repeat from Algorithm 2a. [Measurement:]{.smallcaps} 6\. Generate ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. [Update (MU):]{.smallcaps} 7\. ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 8\. Build pre-array ${\mathbb A}_k = \begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{2:2n+1} & R_k^{1/2} \\ [{\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2}]_{2:2n+1} & \mathbf{0} \end{bmatrix}$. Factorize ${\mathbb R}_k \leftarrow \mbox{\texttt{qr}}({\mathbb A}_k^{\top})$. Read-off $\tilde S_{A} = [{\mathbb R}_k]_{m+n}$. 9\. Update $S_{A} = \mbox{\texttt{cholupdate}}(\tilde S_{A},\begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1} \\ [{\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2}]_{1} \end{bmatrix},S_{1,1})$. Get $S_{A}^{\top} = \begin{bmatrix} R_{e,k}^{1/2} & {\bf 0} \\ \bar P_{xz,k} & P^{1/2}_{k|k} \end{bmatrix}$. Read-off $R_{e,k}^{1/2}$, $P_{k|k}^{1/2}$, $\bar P_{x,z}$. 10\. Compute $K_k = \bar P_{x,z}R_{e,k}^{-1/2}$ and find $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. Let us consider the measurement update step of the UKF proposed in [@2001:Merwe Algorithm 3.1]. Following the cited paper, the Cholesky factor $R_{e,k}^{1/2}$ of the residual covariance $R_{e,k}$ is computed as follows: $$\begin{aligned} \tilde R_{e,k}^{1/2} & = \!qr\!\!\left[\sqrt{w_1^{(c)}}\left([{\mathcal Z}_{i,k|k-1}]_{1:2n} - \hat z_{k|k-1}\right), R_k^{1/2} \right]\!\!\! \label{eq:ukf:1rank:rek}\\ R_{e,k}^{1/2} & = \mbox{\texttt{cholupdate}}\bigl(\tilde R_{e,k}^{1/2},[{\mathcal Z}_{i,k|k-1}]_{0} - \hat z_{k|k-1},w_0^{(c)}\bigr) \label{eq:ukf:1rank:rek2}\end{aligned}$$ where $R_k^{1/2}$ is the upper triangular Cholesky factor of the measurement covariance matrix $R_k$ and the term $[{\mathcal Z}_{i,k|k-1}]_{1:2n}$ stands for a matrix collected from the vectors $\bigl[{\mathcal Z}_{1,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr] \in {\mathbb R}^{m\times 2n}$. It is important to note that formula [\[eq:ukf:1rank:rek\]](#eq:ukf:1rank:rek){reference-type="eqref" reference="eq:ukf:1rank:rek"} involves the scalar value $w_1^{(c)}$, only. The reason is that $w_1^{(c)}=w_2^{(c)}=\ldots =w_{2n}^{(c)}$ under the UKF parametrization examined in [@2001:Merwe]. It can be easily extended on a more general case with any positive coefficients $w_i^{(c)}>0$, $i=1,\ldots 2n$, by introducing the diagonal matrix $[W]_{1:2n} = {\rm diag}\bigl\{w^{(c)}_1,\ldots, w^{(c)}_{2n}\bigr\}$ and utilizing $\sqrt{[W]_{1:2n}}$ instead of $\sqrt{w_1^{(c)}}$ in equation [\[eq:ukf:1rank:rek\]](#eq:ukf:1rank:rek){reference-type="eqref" reference="eq:ukf:1rank:rek"}. 
Finally, the obtained factor $\tilde R_{e,k}^{1/2}$ is re-calculated by utilizing the one-rank Cholesky update in formula [\[eq:ukf:1rank:rek2\]](#eq:ukf:1rank:rek2){reference-type="eqref" reference="eq:ukf:1rank:rek2"} because of possible negative value $w_0^{(c)}<0$ and unfeasible square root operation $\sqrt{w_0^{(c)}}$. Following [@2007:Sarkka p. 1638] and taking into account that ${\mathbb Z}_{k|k-1} = \bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr] \in {\mathbb R}^{m\times (2n+1)}$, the weight matrix ${\mathbb W}$ in [\[eq:W_matrix\]](#eq:W_matrix){reference-type="eqref" reference="eq:W_matrix"} and matrix-form equation [\[ckf:rek\]](#ckf:rek){reference-type="eqref" reference="ckf:rek"}, we represent formulas [\[eq:ukf:1rank:rek\]](#eq:ukf:1rank:rek){reference-type="eqref" reference="eq:ukf:1rank:rek"}, [\[eq:ukf:1rank:rek2\]](#eq:ukf:1rank:rek2){reference-type="eqref" reference="eq:ukf:1rank:rek2"} in array form as follows: $$\begin{aligned} \tilde R_{e,k}^{1/2} & = qr\left[ [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1:2n} \quad R_k^{1/2} \right], \label{eq:ukf:1rank:rek:matrix}\\ R_{e,k}^{1/2} & = \mbox{\texttt{cholupdate}}\bigl(\tilde R_{e,k}^{1/2}, [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{0},sgn(w_0^{(c)})\bigr) \label{eq:ukf:1rank:rek2:matrix}\end{aligned}$$ where $[{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1:2n}$ stands for a submatrix collected from all rows and the last $2n$ columns of the matrix product ${\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}$. The square-root matrix $|{\mathbb W}|^{1/2}$ and the related signature matrix[^2] are defined by $$\begin{aligned} |{\mathbb W}|^{1/2}& = \bigl[I_{2n+1}-\mathbf{1}_{2n+1}^\top\otimes w^{(m)}\bigr] \nonumber \\ & \times \mbox{\rm diag}\left\{\sqrt{|w_0^{(c)}|},\ldots, \sqrt{|w_{2n}^{(c)}|}\right\}, \label{class_W}\\ S & = \mbox{\rm diag}\left\{\mbox{\rm sgn}(w_{0}^{(c)}),\ldots,\mbox{\rm sgn}(w_{2n}^{(c)})\right\}. \label{class_S}\end{aligned}$$ Following [@2001:Merwe Algorithm 3.1], the square-root factor of the error covariance matrix $P_{k|k}$ is calculated by applying the same approach to the second formula in equation [\[ckf:gain\]](#ckf:gain){reference-type="eqref" reference="ckf:gain"}. This yields $m$ consecutive updates of the Cholesky factor using the $m$ columns of the matrix product ${\mathbb U} = {K}_kR_{e,k}^{1/2}$ because of the substraction in equation [\[ckf:gain\]](#ckf:gain){reference-type="eqref" reference="ckf:gain"}; see [@2001:Merwe eqs. (28),(29)]: $$\begin{aligned} {\mathbb U}& = {K}_kR_{e,k}^{1/2}, S_{k|k} = \mbox{\texttt{cholupdate}}(S_{k|k-1},{\mathbb U},'-') \label{eq:ukf:1rank:P1}\end{aligned}$$ where $S_{k|k-1}$ and $S_{k|k}$ are, respectively, the upper triangular Cholesky factors of $P_{k|k-1}$ and $P_{k|k}$. Finally, it is worth noting here that the time update step of the SR SPDE-based UKF variant coincides with the time update in the conventional SPDE-based UKF (Algorithm 2) because the sigma vectors are propagated instead of the error covariance matrix. Meanwhile, the MDEs-based UKF time update should be re-derived in terms of the Cholesky factors of the error covariance matrix. This problem is solved in [@2007:Sarkka eq. (64)] as follows: $$\frac{dP^{1/2}(t)}{dt} = P^{1/2}(t)\Phi\Bigl(M(t)\Bigr) \label{eq3.2:new}$$ where $M(t)$ is defined by equation [\[eq2.10\]](#eq2.10){reference-type="eqref" reference="eq2.10"} and the mapping $\Phi(\cdot)$ is discussed after that formula. 
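To illustrate how the array-form formulas above are combined in MATLAB, a minimal sketch of the resulting pseudo-SR measurement update (the variant summarized as Algorithms 1a and 2a) might read as follows; `Wsq` stands for $|{\mathbb W}|^{1/2}$, `Rsq` for a square-root factor of $R_k$ with `Rsq*Rsq' = Rk`, `Spred` for the lower triangular factor of $P_{k|k-1}$, `sgn0` for the character `'+'` or `'-'` encoding ${\rm sgn}(w_0^{(c)})$, and all names are illustrative.

```matlab
% Minimal sketch of the pseudo-SR measurement update (cf. Algorithm 1a/2a).
ZW      = Zsig*Wsq;                         % weighted measurement sigma points
[~, Rq] = qr([ZW(:, 2:end), Rsq]', 0);      % triangularize the positive part
Rq      = diag(sign(diag(Rq)))*Rq;          % keep a positive diagonal
SRe     = cholupdate(Rq, ZW(:, 1), sgn0);   % fold in the 0-th (weight w0) column
Pxz     = Xsig*W*Zsig';
Kk      = (Pxz/SRe)/SRe';                   % gain K_k via triangular solves
xhat    = xpred + Kk*(zk - zhat);
U       = Kk*SRe';                          % U = K_k * R_{e,k}^{1/2}
Skk     = Spred';                           % upper factor of P_{k|k-1}
for j = 1:m
    Skk = cholupdate(Skk, U(:, j), '-');    % m one-rank downdates
end
Psq = Skk';                                 % lower factor of P_{k|k}
```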
The first pseudo-SR variants obtained are summarized by Algorithms 1a and 2a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. We may expect their poor numerical stability (with respect to roundoff errors) because of the $m$ consecutive one-rank Cholesky updates required at the measurement update step as suggested in [@2001:Merwe]. Recall that the downdated matrix should be positive definite, otherwise the filtering method fails due to an unfeasible operation. The numerical robustness can be improved by reducing the number of the one-rank Cholesky updates involved. This is possible by deriving a symmetric equation as shown in [@KuKu21aEJCON]: $$\begin{aligned} P_{k|k} & = \left[{\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right]{\mathbb W}\left[{\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right]^{\top} \nonumber \\ & \phantom{=} + {K}_{k}R_k {K}_{k}^{\top}. \label{eq:proof:pkk}\end{aligned}$$ We can factorize formula [\[eq:proof:pkk\]](#eq:proof:pkk){reference-type="eqref" reference="eq:proof:pkk"} as follows: $$\begin{aligned} \tilde P_{k|k}^{1/2} & = \!qr\!\!\left[ [\left({\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right)|{\mathbb W}|^{1/2}]_{1:2n}, K_kR_k^{1/2} \right]\!\! \label{eq:ukf:1rank:p1:joseph}\\ P_{k|k}^{1/2} & = \mbox{\texttt{cholupdate}}(\tilde P_{k|k}^{1/2}, \nonumber \\ & [\left({\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right)|{\mathbb W}|^{1/2}]_{0},sgn(w_0^{(c)})). \label{eq:ukf:1rank:p2:joseph}\end{aligned}$$ This yields two new pseudo-SR UKF Algorithms 1b and 2b summarized in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. They reduce the number of the one-rank Cholesky updates required at each iteration step from $m+1$ to $2$. Therefore, they are expected to be more stable with respect to roundoff errors. Finally, one more pseudo-SR version can be derived for the measurement update step of the UKF estimator. It has the simplest *array* representation and allows for reducing the number of the one-rank Cholesky updates from two, as involved in Algorithms 1b and 2b, to only one. First, we note that the system of equations [\[ckf:gain\]](#ckf:gain){reference-type="eqref" reference="ckf:gain"}, [\[ckf:rek\]](#ckf:rek){reference-type="eqref" reference="ckf:rek"}, [\[ckf:pxy\]](#ckf:pxy){reference-type="eqref" reference="ckf:pxy"} can be summarized in the following equality[^3]: $$\begin{aligned} & \begin{bmatrix} {\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2} & R_k^{1/2} \\ {\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2} & \mathbf{0} \end{bmatrix}{\rm diag}\{S,I_m\} \begin{bmatrix} {\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2} & R_k^{1/2} \\ {\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2} & \mathbf{0} \end{bmatrix}^{\top} \\ &=\begin{bmatrix} R_{e,k}^{1/2} & {\bf 0}_{m\times n} & {\bf 0}_{m\times (N-n)}\\ \bar P_{xz,k} & P^{1/2}_{k|k} & {\bf 0}_{n\times (N-n)} \end{bmatrix} \begin{bmatrix} R_{e,k}^{1/2} & {\bf 0}_{m\times n} & {\bf 0}_{m\times (N-n)}\\ \bar P_{xz,k} & P^{1/2}_{k|k} & {\bf 0}_{n\times (N-n)} \end{bmatrix}^{\top}\end{aligned}$$ where $N=2n+1$ is the number of sigma points, $\bar P_{xz,k} = {K}_kR_{e,k}^{1/2} = P_{x,z}R_{e,k}^{-{\top}/2}$ is the normalized gain matrix, and the square-root factor $|{\mathbb W}|^{1/2}$ and the signature matrix $S$ are defined by formulas [\[class_W\]](#class_W){reference-type="eqref" reference="class_W"} and [\[class_S\]](#class_S){reference-type="eqref" reference="class_S"}, respectively. 
The following pseudo-SR UKF strategy within the one-rank Cholesky update methodology is derived $$\begin{aligned} \tilde R & = qr\begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{1:2n} & R_k^{1/2} \\ [{\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2}]_{1:2n} & \mathbf{0} \end{bmatrix}^{\top}, \label{eq:ukf:1rank:p1:array}\\ R & = \mbox{\texttt{cholupdate}}(\tilde R, \begin{bmatrix} [{\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2}]_{0} \\ [{\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2}]_{0} \end{bmatrix},sgn(w_0^{(c)})) \label{eq:ukf:1rank:p2:array}\end{aligned}$$ Algorithms 1c and 2c summarized in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"} are designed under this approach. # SR solution with hyperbolic QR factorization {#sect3a} **MDE-based UKF: Algorithm 1a-SR** **SPDE-based UKF: Algorithm 2a-SR** ------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- [Initialization:]{.smallcaps} 0\. Re-order $w^{(c)}=[w^{(c)}_{1},\ldots, w^{(c)}_{2n}, w^{(c)}_{0}]$, $w^{(m)}=[w^{(m)}_{1},\ldots, w^{(m)}_{2n}, w^{(m)}_{0}]$. Find ${W}$, $|{\mathbb W}|^{1/2}$, $S$ by [\[eq:W_matrix\]](#eq:W_matrix){reference-type="eqref" reference="eq:W_matrix"}, [\[class_W\]](#class_W){reference-type="eqref" reference="class_W"}, [\[class_S\]](#class_S){reference-type="eqref" reference="class_S"}. $\rightarrow$ Repeat from Algorithm 1a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. $\rightarrow$ Repeat from Algorithm 2a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. [Time Update:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. $\rightarrow$ Repeat from Algorithm 2a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. [Measurement:]{.smallcaps} 1\. Get ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. [Update (MU):]{.smallcaps} 2\. ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 3\. Build ${\mathbb A}_k = \begin{bmatrix} R_k^{1/2} & {\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2} \\ \end{bmatrix}$, $J = {\mbox diag} \{I_m, S\}$. Upper triangulate ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$ by hyperbolic QR. 4\. Read-off $R_{e,k}^{1/2} = [{\mathbb R}^{\top}_k]_{m}$. Find $P_{x,z} = {\mathbb X}_{k|k-1}{\mathbb W}{\mathbb Z}_{k|k-1}^{\top}$, ${K}_{k}=P_{x,z}R_{e,k}^{-{\top}/2}R_{e,k}^{-1/2}$, $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. 5\. Build ${\mathbb A}_k = \begin{bmatrix} P_{k|k-1}^{1/2} & K_kR_{e,k}^{1/2} \end{bmatrix}$, $J = {\mbox diag} \{I_n, -I_m\}$. Factorize ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$. Read-off $P_{k|k}^{1/2} = [{\mathbb R}^{\top}_k]_{n}$. **MDE-based UKF: Algorithm 1b-SR** **SPDE-based UKF: Algorithm 2b-SR** [Initialization:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a-SR. 
$\rightarrow$ Repeat from Algorithm 2a-SR. [Time Update:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a-SR. $\rightarrow$ Repeat from Algorithm 2a-SR. [Measurement:]{.smallcaps} 1\. Get ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. [Update (MU):]{.smallcaps} 2\. ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 3\. Build ${\mathbb A}_k = \begin{bmatrix} R_k^{1/2} & {\mathbb Z}_{k|k-1}| {\mathbb W}|^{1/2} \\ \end{bmatrix}$, $J = {\mbox diag} \{I_m, S\}$. Upper triangulate ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$ by hyperbolic QR. 4\. Read-off $R_{e,k}^{1/2} = [{\mathbb R}^{\top}_k]_{m}$. Find $P_{x,z} = {\mathbb X}_{k|k-1} {\mathbb W}{\mathbb Z}_{k|k-1}^{\top}$, ${K}_{k}=P_{x,z}R_{e,k}^{-{\top}/2}R_{e,k}^{-1/2}$, $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. 5\. ${\mathbb A}_k = \begin{bmatrix} K_kR_k^{1/2} & \left({\mathbb X}_{k|k-1}-{K}_{k}{\mathbb Z}_{k|k-1}\right)|{\mathbb W}|^{1/2} \end{bmatrix}$, $J = {\mbox diag} \{I_m, S\}$. ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$. Read-off $P_{k|k}^{1/2} = [{\mathbb R}^{\top}_k]_{n}$. **MDE-based UKF: Algorithm 1c-SR** **SPDE-based UKF: Algorithm 2c-SR** [Initialization:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a-SR. $\rightarrow$ Repeat from Algorithm 2a-SR. [Time Update:]{.smallcaps} $\rightarrow$ Repeat from Algorithm 1a-SR. $\rightarrow$ Repeat from Algorithm 2a-SR. [Measurement:]{.smallcaps} 1\. Get ${\mathcal X}_{i,k|k-1}=\hat x_{k|k-1}+P_{k|k-1}^{1/2}\xi_i$, $i=0,\ldots,2n$. $-$ Sigma vectors ${\mathcal X}_{i,k|k-1}$ are defined from TU. 2\. ${\mathcal Z}_{i,k|k-1}=h\bigl(k,{\mathcal X}_{i,k|k-1}\bigr)$. Set ${\mathbb Z}_{k|k-1}=\bigl[{\mathcal Z}_{0,k|k-1},\ldots,{\mathcal Z}_{2n,k|k-1} \bigr]$. Find $\hat z_{k|k-1}={\mathbb Z}_{k|k-1}w^{(m)}$. 3\. Build ${\mathbb A}_k = \begin{bmatrix} R_k^{1/2} & {\mathbb Z}_{k|k-1}|{\mathbb W}|^{1/2} \\ \mathbf{0} & {\mathbb X}_{k|k-1}|{\mathbb W}|^{1/2} \end{bmatrix}$, $J = {\mbox diag} \{I_m, S\}$. Upper triangulate ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$ by hyperbolic QR. 4\. Read-off the block $R = [{\mathbb R}_k]_{m+n}$ and get $R^{\top} = \begin{bmatrix} R_{e,k}^{1/2} & {\bf 0} \\ \bar P_{xz,k} & P^{1/2}_{k|k} \end{bmatrix}$. Read-off $R_{e,k}^{1/2}$, $P_{k|k}^{1/2}$ and $\bar P_{x,z}$. 5\. Compute $K_k = \bar P_{x,z}R_{e,k}^{-1/2}$ and $\hat x_{k|k}=\hat x_{k|k-1}+{K}_k(z_k-\hat z_{k|k-1})$. In contrast to the *pseudo*-SR algorithms that might be unstable because of the one-rank Cholesky update procedure involved in each iterate, we may design the *true* SR methods. There, the Cholesky decomposition is performed only once, i.e. for the initial matrix $\Pi_0>0$. The true SR solution implies the utilization of the so-called hyperbolic QR transformations instead of the usual QR factorization, which is used for computing the Cholesky factor of a positive definite matrix given by formula $C = A^{\top}A+B^{\top}B$. In general, the UKF formulas obey the equations of the form $C = A^{\top}A \pm B^{\top}B$. The exact form depends on the UKF parametrization implemented and, in particular, on the number of negative sigma-point coefficients $w^{(c)}$ utilized. 
Our illustrative UKF example used for designing the one-rank Cholesky update algorithms in Table 3 implies $\alpha=1$, $\beta=0$ and $\kappa=3-n$. This yields $w^{(c)}_0<0$ when $n>3$. Taking this scenario into account, we develop and explain a general MATLAB-based SR solution within both the MDE- and SPDE-based implementation frameworks. Following [@2003:Higham], a $J$-orthogonal matrix $Q$ is defined as a matrix that satisfies $Q^{\top}JQ=QJQ^{\top}=J$, where $J=\mbox{\rm diag}(\pm 1)$ is a signature matrix. The $J$-orthogonal transformations are used for computing the Cholesky factorization of a positive definite matrix $C = A^{\top}A-B^{\top}B$, where $A \in {\mathbb R}^{p\times n}$ $(p\ge n)$ and $B \in {\mathbb R}^{q\times n}$, as explained in [@2003:Higham]: if we can find a $J$-orthogonal matrix $Q$ such that $$Q \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} R \\ 0 \end{bmatrix} \label{hyperbolic_qr}$$ with $J = \mbox{diag}\{I_p, -I_q\}$, then $R$ is the Cholesky factor we are looking for. This follows from the equality $$C = \begin{bmatrix} A \\ B \end{bmatrix}^{\top} J \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} A \\ B \end{bmatrix}^{\top} Q^{\top}JQ \begin{bmatrix} A \\ B \end{bmatrix} =R^{\top}R. \label{proof:proof}$$ Thus, the hyperbolic QR factorization can be used for computing the upper triangular Cholesky factor $R_{e,k}^{1/2}$ of the residual covariance $R_{e,k}$ in [\[ckf:rek\]](#ckf:rek){reference-type="eqref" reference="ckf:rek"} as follows: $$Q \begin{bmatrix} R_k^{1/2} \\ | {\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \end{bmatrix} = \begin{bmatrix} R_{e,k}^{1/2} \\ 0 \end{bmatrix} \label{hypQR:rek}$$ where $Q$ is any $J = \mbox{diag}\{I_m, S \}$-orthogonal matrix that upper triangulates the pre-array. We need to stress that the form of equation [\[hyperbolic_qr\]](#hyperbolic_qr){reference-type="eqref" reference="hyperbolic_qr"} requires the negative weight coefficients in $w^{(c)}$ to be located at the end of the vector; otherwise the $J$-orthogonal matrix does not have the required form $J = \mbox{diag}\{I_p, -I_q\}$ and the proof in [\[proof:proof\]](#proof:proof){reference-type="eqref" reference="proof:proof"} does not hold. In other words, we re-order the weight vector prior to implementation as follows: $w^{(c)}=[w^{(c)}_{1},\ldots, w^{(c)}_{2n}, w^{(c)}_{0}]$, because of the possible $w^{(c)}_0<0$ case. The vector $w^{(m)}$ should be re-arranged as well, i.e. $w^{(m)}=[w^{(m)}_{1},\ldots, w^{(m)}_{2n}, w^{(m)}_{0}]$. Next, the weight matrix ${\mathbb W}$ and signature $S$ are defined as usual by formulas [\[class_W\]](#class_W){reference-type="eqref" reference="class_W"} and [\[class_S\]](#class_S){reference-type="eqref" reference="class_S"}. After this re-ordering, the signature matrix takes the form $S = \mbox{diag}\{{\bf 1}_{2n},-1\}$ in the $w^{(c)}_0<0$ scenario. Hence, the $J$-orthogonal matrix in [\[hypQR:rek\]](#hypQR:rek){reference-type="eqref" reference="hypQR:rek"} is $J = \mbox{diag}\{I_m, S \} = \mbox{diag}\{I_m, I_{2n},-1 \}$.
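MATLAB has no built-in hyperbolic QR routine, so the `jqr` step appearing in the algorithm tables has to be supplied by the user. The following simplified sketch is our own illustration only (the routine name, the argument convention and the absence of pivoting are our simplifications; see [@2003:Higham] for numerically sound versions): it triangularizes a tall pre-array with elementary Givens and hyperbolic rotations.

```matlab
% Illustrative sketch (ours) of a J-orthogonal triangularization: given a tall
% pre-array M (p x q, p >= q) and a +/-1 signature vector jsig, one per row of M,
% it returns an upper-triangular R with R'*R = M'*diag(jsig)*M, assuming the leading
% q signature entries are +1 and no hyperbolic rotation breaks down (|t| < 1).
function R = jqr_sketch(M, jsig)
    [p, q] = size(M);
    for j = 1:q
        for i = j+1:p
            if M(i, j) == 0, continue; end
            if jsig(i) == jsig(j)                    % ordinary Givens rotation
                r = hypot(M(j, j), M(i, j));
                c = M(j, j) / r;  s = M(i, j) / r;
                G = [c, s; -s, c];
            else                                     % hyperbolic rotation, needs |t| < 1
                t = M(i, j) / M(j, j);
                c = 1 / sqrt(1 - t^2);  s = -t * c;
                G = [c, s; s, c];                    % preserves diag(jsig([j i]))
            end
            M([j, i], :) = G * M([j, i], :);         % zero out the entry M(i, j)
        end
    end
    R = triu(M(1:q, :));
end
```

With this convention, a call such as ${\mathbb R}_k \leftarrow \mbox{\texttt{jqr}}({\mathbb A}_k^{\top},J)$ in the tables below corresponds to `jqr_sketch(Ak', jvec)`, where `jvec` holds the diagonal of $J$.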
Thus, we can easily prove $$\begin{aligned} R_{e,k} & = \begin{bmatrix} R_k^{1/2} \\ | {\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \end{bmatrix}^{\top} J \begin{bmatrix} R_k^{1/2} \\ |{\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \end{bmatrix} \\ & = R_k^{{\top}/2}I_mR_k^{1/2} + {\mathbb Z}_{k|k-1}| {\mathbb W}|^{1/2}S|{\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \\ & = \begin{bmatrix} R_k^{1/2} \\ |{\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \end{bmatrix}^{\top} Q^{\top}JQ \begin{bmatrix} R_k^{1/2} \\ |{\mathbb W}|^{{\top}/2}{\mathbb Z}_{k|k-1}^{\top} \end{bmatrix} \\ & = \begin{bmatrix} R_{e,k}^{1/2} \\ 0 \end{bmatrix}^{\top} J \begin{bmatrix} R_{e,k}^{1/2} \\ 0 \end{bmatrix} =R_{e,k}^{\top/2}I_mR_{e,k}^{1/2} + {\bf 0}S {\bf 0}^{\top}.\end{aligned}$$ In the same manner, an upper triangular factor of $P_{k|k}$ can be found by factorizing the second equation in [\[ckf:gain\]](#ckf:gain){reference-type="eqref" reference="ckf:gain"}, i.e. by utilizing the hyperbolic QR factorization with $J = \mbox{diag}\{I_n, -I_{m} \}$-orthogonal matrix as follows: $$Q \begin{bmatrix} P_{k|k-1}^{1/2} \\ R_{e,k}^{1/2} {K}_k^{\top} \end{bmatrix} = \begin{bmatrix} P_{k|k}^{1/2} \\ 0 \end{bmatrix}.$$ This yields the first exact SR UKF implementations with the hyperbolic QR factorizations summarized by Algorithms 1a-SR and 2a-SR in Table [\[Tab:5\]](#Tab:5){reference-type="ref" reference="Tab:5"}. They are the counterparts of the one-rank Cholesky-based methods in Algorithms 1a and 2a in Table [\[Tab:2\]](#Tab:2){reference-type="ref" reference="Tab:2"}. Similarly, we design the exact SR variants of Algorithms 1b and 2b and summarize them by Algorithms 1b-SR and 2b-SR in Table [\[Tab:5\]](#Tab:5){reference-type="ref" reference="Tab:5"}. Finally, Algorithms 1c-SR and 2c-SR are the SR versions of the array implementations in Algorithms 1c and 2c. # Numerical experiments {#numerical:experiments} To investigate a difference in the numerical robustness (with respect to roundoff errors) of the suggested SR UKF methods, we explore the target tracking problem with artificial ill-conditioned measurement scheme. This scenario yields a divergence due to the singularity arisen in the covariance $R_{e,k}$ caused by roundoff [@2015:Grewal:book p. 288]. **Example 1**. *When performing a coordinated turn in the horizontal plane, the aircraft's dynamics obeys equation [\[eq1.1\]](#eq1.1){reference-type="eqref" reference="eq1.1"} with the following drift function: $f(\cdot) =\left[\dot{\epsilon}, -\omega \dot{\eta}, \dot{\eta}, \omega \dot{\epsilon}, \dot{\zeta}, 0, 0\right]$ where $G={\rm diag}\left[0,\sigma_1,0,\sigma_1,0,\sigma_1,\sigma_2\right]$ with $\omega=3^\circ/\mbox{\rm s}$, $\sigma_1=\sqrt{0.2}\mbox{ \rm m/s}$, $\sigma_2=0.007^\circ/\mbox{\rm s}$ and $Q=I_7$. The state vector is $x(t)= [\epsilon, \dot{\epsilon}, \eta, \dot{\eta}, \zeta, \dot{\zeta}, \omega]^{\top}$, where $\epsilon$, $\eta$, $\zeta$ and $\dot{\epsilon}$, $\dot{\eta}$, $\dot{\zeta}$ stand for positions and corresponding velocities in the Cartesian coordinates at time $t$, and $\omega(t)$ is the (nearly) constant turn rate. The initial $\bar x_0=[1000\,\mbox{\rm m}, 0\,\mbox{\rm m/s}, 2650\,\mbox{\rm m},150\,\mbox{\rm m/s}, 200\,\mbox{\rm m}, 0\,\mbox{\rm m/s},\omega^\circ/\mbox{\rm s}]^{\top}$ and $\Pi_0=\mbox{\rm diag}(0.01\,I_7)$. 
The dynamic state is observed through the following ill-conditioned scheme [@2020:Automatica:Kulikova]: $$\begin{aligned} \label{problem:2} z_k & = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 +\delta \end{bmatrix} x_k + \begin{bmatrix} v_k^1 \\ v_k^2 \end{bmatrix}, \; R_k=\delta^{2}I_2\end{aligned}$$ where parameter $\delta$ is used for simulating roundoff effect. This increasingly ill-conditioned target tracking scenario assumes that $\delta\to 0$, i.e. $\delta=10^{-1},10^{-2},\ldots,10^{-12}$.* The system is simulated on the interval $[0s, 150s]$ by Euler-Maruyama method with the step size $0.0005(s)$ for producing the exact trajectory $x^{true}_k$. For a fixed ill-conditioning parameter $\delta$ the measurements $z_k$ are defined from $x^{true}_k$ through [\[problem:2\]](#problem:2){reference-type="eqref" reference="problem:2"} for the sampling interval $\Delta=1(s)$, $\Delta = |t_{k}-t_{k-1}|$. Next, the inverse (filtering) problem is solved by various UKF implementation methods. This yields the estimated trajectory $\hat x_{k|k}$, $k=1, \ldots, K$. The experiment is repeated for $M=100$ Monte Carlo runs for each $\delta=10^{-1},10^{-2},\ldots,10^{-12}$. The performance of various UKF methods is assessed in the sense of the *Accumulated Root Mean Square Errors* in position ($\mbox{\rm ARMSE}_p$) defined in [@2010:Haykin]. The MATLAB-based UKF algorithms are implemented by using the built-in ODE solver `ode45` with the tolerance value $\epsilon_g = 10^{-4}$. ------------ ------------------- ----------- ----------- -------------------- ----------- ----------- $\delta$ **MDE-based UKF** **SPDE-based UKF** Alg.1 Alg.1b-SR Alg.1c-SR Alg.2 2b-SR 2c-SR $10^{-1}$ 7.918 7.918 7.918 7.921 7.921 7.921 $10^{-2}$ **fails** 6.055 6.055 9.415 6.041 6.041 $10^{-3}$ 6.044 6.044 **fails** 6.032 6.032 $10^{-4}$ 6.549 6.549 6.120 6.120 $10^{-5}$ 6.068 6.068 7.872 7.872 $10^{-6}$ 6.069 6.066 7.850 10.19 $10^{-7}$ 6.066 6.066 7.040 10.19 $10^{-8}$ 9.216 6.066 **fails** 10.19 $10^{-9}$ **fails** 6.067 10.19 $10^{-10}$ **fails** 10.16 $10^{-11}$ **fails** ------------ ------------------- ----------- ----------- -------------------- ----------- ----------- : The $\mbox{\rm ARMSE}_p$ (m) of the standard and SR MATLAB-based UKF methods with hyperbolic QR factorizations. The numerical stability with respect to roundoff is investigated in terms of a speed of divergence when $\delta$ tends to a machine precision limit. Having analyzed the results presented in Table [1](#tab:acc1){reference-type="ref" reference="tab:acc1"}, we make a few conclusions. First, it is clearly seen that the conventional Algorithms 1 and 2 are unstable with respect to roundoff errors. They diverge fast as the problem ill-conditioning grows. The MDE-based conventional Algorithm 1 fails (due to unfeasible Cholesky factorization) when the ill-conditioning parameter $\delta=10^{-2}$. Meanwhile, the breakdown value of the SPDE-based conventional Algorithm 2 is $\delta = 10^{-3}$. This result has been anticipated and it can be explained by extra Cholesky decompositions required at each step of any MATLAB numerical scheme for computing the matrix $\mathbb X$ involved in [\[UKF:MDE1\]](#UKF:MDE1){reference-type="eqref" reference="UKF:MDE1"}, [\[UKF:MDE2\]](#UKF:MDE2){reference-type="eqref" reference="UKF:MDE2"}. The conventional SPDE-based Algorithm 2 does not have such a drawback since it propagates the sigma vectors $\mathbb X$, straightforward. 
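The breakdowns just described are driven entirely by the conditioning of the measurement scheme [\[problem:2\]](#problem:2){reference-type="eqref" reference="problem:2"}. The small MATLAB fragment below is our own illustration (not part of the original experiments) of how quickly an innovation-type matrix $HH^{\top}+R_k$, a stand-in for $R_{e,k}$ under a unit state covariance, degenerates as $\delta\to 0$.

```matlab
% Sketch (ours): conditioning of the measurement scheme (problem:2) as delta -> 0.
% The two rows of H become numerically dependent while R_k = delta^2*I_2 -> 0,
% which is what provokes the roundoff-induced breakdowns reported in the tables.
n = 7;
for delta = 10.^-(1:12)
    H  = [ones(1, n); ones(1, n - 1), 1 + delta];   % 2 x 7 observation matrix
    Rk = delta^2 * eye(2);                          % measurement noise covariance
    fprintf('delta = %.0e   cond(H*H'' + R_k) = %.3e\n', delta, cond(H * H' + Rk));
end
```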
From Table [1](#tab:acc1){reference-type="ref" reference="tab:acc1"}, it is also clear that both the pseudo-SR and SR methods diverge much slower than the standard implementations, which process the full filter covariance matrix. Thus, they indeed improve the robustness with respect to roundoff errors, although the numerical properties of the SR methods are different. Next, we discuss them in detail. ------------ ------------------- ----------- ----------- -------------------- ----------- ----------- $\delta$ **MDE-based UKF** **SPDE-based UKF** Alg.1a Alg.1b Alg.1c Alg.2a Alg.2b Alg.2c $10^{-1}$ 7.918 7.918 7.918 7.921 7.921 7.921 $10^{-2}$ 6.056 6.055 6.055 9.415 6.041 6.041 $10^{-3}$ **fails** 6.044 6.044 **fails** 6.032 6.032 $10^{-4}$ 6.549 6.549 6.120 6.120 $10^{-5}$ 6.068 6.068 7.874 7.872 $10^{-6}$ 6.066 6.066 10.16 10.19 $10^{-7}$ **fails** 6.067 **fails** 10.19 $10^{-8}$ 6.066 10.19 $10^{-9}$ 6.067 10.19 $10^{-10}$ **fails** 10.18 $10^{-11}$ **fails** ------------ ------------------- ----------- ----------- -------------------- ----------- ----------- : The $\mbox{\rm ARMSE}_p$ (m) of various pseudo-SR MATLAB-based UKF methods with one-rank Cholesky updates. Following Table [1](#tab:acc1){reference-type="ref" reference="tab:acc1"}, the SPDE-based implementations are slightly more accurate than the MDE-based algorithms when $10^{-2} \ge \delta \ge 10^{-4}$, i.e. in cases of a well-conditioned scenario. Meanwhile, in the moderate ill-conditioned cases, i.e. when $10^{-5} \ge \delta \ge 10^{-8}$, the estimation errors of the SPDE-based algorithms are $1.4$ times greater than in the related MDE-based implementations. The accumulated impact of the roundoff errors is higher in case of the SPDE-based implementations compared to the MDE-based methods because of a larger ODE system to be solved. Additionally, it creates a difficulty for the MATLAB solvers for controlling the discretization error arisen and might yield impracticable execution time. As can be seen, the *array* SR Algorithms 1c-SR and 2c-SR are the most robust to roundoff. Recall, Algorithms 1b-SR and 2b-SR compute the SR factors $R_{e,k}^{1/2}$ and $P_{k|k}^{1/2}$, separately. In particular, they require two hyperbolic QR transformations at each measurement step. In contrast, Algorithms 1c-SR and 2c-SR collect one extended pre-array and apply one hyperbolic transformation for computing $R_{e,k}^{1/2}$ and $P_{k|k}^{1/2}$, in parallel. The SR UKF implementations with less hyperbolic QR transformations involved are more numerically stable. Let us examine the performance of the pseudo-SR UKF algorithms proposed in this paper. Following Table [2](#tab:acc2){reference-type="ref" reference="tab:acc2"}, a superior performance of the SPDE-based implementation way (Algorithms 2a, 2b, 2c) over the MDE-based one (Algorithms 1a, 1b, 1c) is observed in case of the well-conditioned scenario. The situation is dramatically changed in case of the moderate ill-conditioned tests where the estimation errors of the SPDE-based algorithms are greater than in the MDE-based counterparts. The impact of the accumulated roundoff errors is higher because of a larger ODE system to be solved by the SPDE-based methods. Next, the pseudo-SR approach in Algorithms 1a and 2a provides the least stable implementation way. We have anticipated this result in Section 3 because of the $m$ consecutive one-rank Cholesky updates required. Meanwhile, Algorithms 1c and 2c are again the most stable methods, similarly to Algorithms 1c-SR and 2c-SR discussed previously. 
Thus, the reduced number of the one-rank Cholesky updates yields a more stable implementation strategy. The *array* Algorithms 1c and 2c are the most robust method among all pseudo-SR UKF variants under examination, i.e. they diverge slowly when $\delta$ tends to machine precision. Finally, having compared the results in Tables [1](#tab:acc1){reference-type="ref" reference="tab:acc1"} and [2](#tab:acc2){reference-type="ref" reference="tab:acc2"}, we conclude that the SR approach based on the hyperbolic QR factorizations yields the same estimation quality as the pseudo-SR methodology within the one-rank Cholesky updates. This is a consequence of algebraic equivalence between the true SR methods proposed, the pseudo-SR algorithms and the standard UKF implementations (which update the full filter covariance matrix). However, it is important to stress that their stability, i.e. the speed of divergence in ill-conditioned situations is different. The most stable SR UKF implementation methods turn out to be the SPDE-based Algorithms 2c and 2c-SR (with an unique feature of being the *array* algorithms), although they require a two times larger ODE system to be solved than in the MDE-based UKF implementations in Algorithms 1c and 1c-SR. The users can choose any of them depending on the requirements for solving practical applications. 60 I. Arasaratnam and S. Haykin. . , 54(6):1254--1269, Jun. 2009. I. Arasaratnam, S. Haykin, and R. J. Elliott. Discrete-time nonlinear filtering algorithms using [*G*]{.smallcaps}auss-[*H*]{.smallcaps}ermit quadrature. , 95(5):953--977, May 2007. I. Arasaratnam, S. Haykin, and T. R. Hurd. . , 58(10):4977--4993, Oct. 2010. P. Frogerais, J.-J. Bellanger, and L. Senhadji. . , 57(4):1000--1004, Apr. 2012. M. S. Grewal and A. P. Andrews. . John Wiley & Sons, New Jersey, 4-th edition edition, 2015. D. J. Higham and N. J. Higham. . SIAM, Philadelphia, 2005. N. J. Higham. . , 45(3):504--519, 2003. K. Ito and K. Xiong. Gaussian filters for nonlinear filtering problems. , 45(5):910--927, May 2000. B. Jia, M. Xin, and Y. Cheng. . , 49(2):510--518, 2013. S. Julier, J. Uhlmann, and H. F. Durrant-Whyte. . , 45(3):477--482, 2000. P. E. Kloeden and E. Platen. . Springer, Berlin, 1999. G. Yu. Kulikov. . , 33(1):136--163, 2013. G. Yu. Kulikov and M. V. Kulikova. . , 67(1):366--373, 2022. G. Yu. Kulikov and M. V. Kulikova. . , 59(1):273--279, 2014. G. Yu. Kulikov and M. V. Kulikova. . , 139:25--35, 2017. G. Yu. Kulikov and M. V. Kulikova. . , 147:196--221, 2020. G. Yu. Kulikov and M. V. Kulikova. The [*$J$*]{.smallcaps}-orthogonal square-root [*E*]{.smallcaps}uler-[*M*]{.smallcaps}aruyama-based unscented [*K*]{.smallcaps}alman filter for nonlinear stochastic systems. , 53(2):2361--2366, 2020. G. Yu. Kulikov and M. V. Kulikova. Itô-[*T*]{.smallcaps}aylor-based square-root unscented [*K*]{.smallcaps}alman filtering methods for state estimation in nonlinear continuous-discrete stochastic systems. , 58:101--113, 2021. M. V. Kulikova and G. Yu. Kulikov. . , 53(2):4967--4972, 2020. M. V. Kulikova and G. Yu. Kulikov. . , 120, 2020.  109110. J. Leth and T. Knudsen. . , 64(5):2198--2205, 2019. S. M. J. Lyons, S. Särkkä, and A. J. Storkey. . , 62(6):1514--1524, 2014. Y. Oshman and I. Y. Bar-Itzhack. . , 22(5):599--604, 1986. E. Santos-Diaz, S. Haykin, and T. R. Hurd. . , 12(11):1225--1232, Nov. 2018. S. Särkkä. . , 52(9):1631--1641, Sep. 2007. M. Takeno and T. Katayama. . , 8(3):2261--2274, 2012. R. Van der Merwe and E. A. Wan. . 
In *2001 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings*, vol. 6, pp. 3461--3464, 2001. [^1]: This paper was not presented at any IFAC meeting. Corresponding author M. V. Kulikova.\ The authors acknowledge the financial support of the Portuguese FCT --- *Fundação para a Ciência e a Tecnologia*, through the projects UIDB/04621/2020 and UIDP/04621/2020 of CEMAT/IST-ID, Center for Computational and Stochastic Mathematics, Instituto Superior Técnico, University of Lisbon. [^2]: For zero entries $w_{i}^{(c)}=0$, $i=0,\ldots,2n$, set $\mbox{\rm sgn}(w_{i}^{(c)})=1$. [^3]: The equality can be proved by multiplying the pre- and post-arrays involved and comparing both sides of the formulas with the original UKF equations [\[ckf:gain\]](#ckf:gain){reference-type="eqref" reference="ckf:gain"}, [\[ckf:rek\]](#ckf:rek){reference-type="eqref" reference="ckf:rek"}, [\[ckf:pxy\]](#ckf:pxy){reference-type="eqref" reference="ckf:pxy"}.
arxiv_math
{ "id": "2310.04126", "title": "Continuous-discrete unscented Kalman filtering framework by MATLAB ODE\n solvers and square-root methods", "authors": "Maria Kulikova and Gennady Kulikov", "categories": "math.NA cs.NA cs.SY eess.SY math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Using the concept of a filter we propose a generalization of the Riemann integral, that is, integration with respect to a filter. We study this problem and demonstrate different properties and phenomena of filter integration. address: School of Mathematics and Informatics V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine author: - Dmytro Seliutin title: On integration with respect to filter --- # Introduction Let us recall the main concepts which we use in this paper. Throughout this article $\Omega$ stands for a non-empty set. A non-empty family of subsets $\mathfrak{F}\subset 2^\Omega$ is called a *filter on $\Omega$* if $\mathfrak{F}$ satisfies the following axioms: 1. $\emptyset \notin \mathfrak{F}$; 2. if $A,\ B \in \mathfrak{F}$ then $A \cap B \in \mathfrak{F}$; 3. if $A \in \mathfrak{F}$ and $D \supset A$ then $D \in \mathfrak{F}$. Also very useful for us is the concept of a filter base. A non-empty family of subsets $\mathfrak{B}\subset 2^\Omega$ is called a *filter base on $\Omega$* if $\emptyset \notin \mathfrak{B}$ and for every $A,\ B \in \mathfrak{B}$ there exists $C \in \mathfrak{B}$ such that $C \subset A \cap B$. We say that a filter base $\mathfrak{B}$ *generates the filter $\mathfrak{F}$* if and only if for each $A \in \mathfrak{F}$ there is $B \in \mathfrak{B}$ such that $B \subset A$. Let $X$ be a topological vector space, $f: X \rightarrow {\mathbb R}$ be a function. For $t \in X$ denote by $\mathcal{O}(t)$ the family of all neighbourhoods of $t$. Let $\mathfrak{F}$ be a filter on $X$, $y \in {\mathbb R}$. The function $f$ is said to be *convergent to $y$ over the filter $\mathfrak{F}$* (denoted $y = \lim\limits_{\mathfrak{F}} f$) if for each $U \in \mathcal{O}(y)$ there exists $A \in \mathfrak{F}$ such that for each $t \in A$ the following holds true: $f(t) \in U$. We refer, for example, to [@kadets] for more information about filters and related concepts. The concept of a filter is a very powerful tool for studying different properties of general topological vector spaces. For example, in [@(2)listan-garcia] the authors study convergence over an ideal generated by a modular function. An ideal is a concept dual to a filter. In [@kad_sel_comp] we study completeness and its generalization using filters. In this article we turn our attention to the classical Riemann integral. Let us recall how this object can be constructed. Let $[a,b] \subset {\mathbb R}$, let $f: [a,b] \rightarrow {\mathbb R}$ be a continuous function. Denote by $\Pi = \{a = \xi_0 \leqslant\xi_1 \leqslant\xi_2 \leqslant...\leqslant\xi_n = b\}$ a partition of $[a,b]$, in other words, $\overset{n}{\underset{k=1}{\cup}} [\xi_{k-1}, \xi_k] = [a,b]$. Consider also the set $T = \{t_1, t_2, ...,t_n\}$ such that $t_k \in [\xi_{k-1}, \xi_k]$ for each $k=1,2,...,n$. Let us call the pair $(\Pi, T)$ a *tagged partition of the segment*. Denote by $d(\Pi)$ the *diameter of $\Pi$* -- the maximum length of $[\xi_{k-1}, \xi_k]$, where $k=1,2,...,n$. Let us recall that a function $f$ is said to be *Riemann integrable* if there exists the limit $I = \lim\limits_{d(\Pi) \rightarrow 0} \sum\limits_{k=1}^{n} f(t_k) \cdot |\xi_k - \xi_{k-1}|$, and we call this limit the Riemann integral of the function $f$, and write $I = \int\limits_{a}^{b} f(t)dt$. We know many different properties of this integral, for example linearity, integration on a subsegment of $[a,b]$, etc. If we look at the definition of the Riemann integral more attentively, we realize that, in fact, we can use one special filter and obtain the desired result. In the next section we are going to develop this idea.
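Before doing so, let us record one standard illustration (a textbook fact, not taken from the works cited above) of how limits over filters generalize ordinary limits. On $\Omega = {\mathbb N}$, consider the Fréchet filter $\mathfrak{F}_{\mathrm{Fr}} = \{A \subseteq {\mathbb N}:\ {\mathbb N}\setminus A \text{ is finite}\}$. For a function (i.e. a sequence) $f: {\mathbb N} \rightarrow {\mathbb R}$ we have $$\lim\limits_{\mathfrak{F}_{\mathrm{Fr}}} f = y \quad \Longleftrightarrow \quad \lim\limits_{n\rightarrow\infty} f(n) = y,$$ since the requirement that $f(t) \in U$ for all $t$ in some $A \in \mathfrak{F}_{\mathrm{Fr}}$ means exactly that $f(n) \in U$ for all but finitely many $n$.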
# Integration with respect to filter For simplicity we are going to consider functions defined on $[0,1]$. Let $f:[0,1] \rightarrow {\mathbb R}$ be a function. As above, denote by $\Pi = \{0 = \xi_0 \leqslant\xi_1 \leqslant\xi_2 \leqslant...\leqslant\xi_n = 1\}$ a partition of $[0,1]$, in other words, $\overset{n}{\underset{k=1}{\cup}} [\xi_{k-1}, \xi_k] = [0,1]$. Consider also the set $T = \{t_1, t_2, ...,t_n\}$ such that $t_k \in [\xi_{k-1}, \xi_k]$ for each $k=1,2,...,n$. For $k=1,2,...,n$ denote $\Delta_k := |\xi_k - \xi_{k-1}|$. Denote also by $\text{TP}[0,1]$ the set of all tagged partitions of $[0,1]$. For a tagged partition $(\Pi, T) \in \text{TP}[0,1]$ denote $$S(f, \Pi, T) = \sum_{k=1}^{n} f(t_k) \Delta_k.$$ Now we are going to introduce the central definition of this paper. **Definition 1**. Let $f:[0,1] \rightarrow {\mathbb R}$ be a function, $\mathfrak{F}$ be a filter on $\text{TP}[0,1]$. We say that $f$ is *integrable over the filter $\mathfrak{F}$* ($\mathfrak{F}$-integrable for short), if there exists $I \in {\mathbb R}$ such that $I = \lim\limits_{\mathfrak{F}} S(f, \Pi, T)$. The number $I$ is called *the $\mathfrak{F}$-integral of $f$* (denoted $I = \int\limits_{0}^{1} f d \mathfrak{F}$). *Remark 2*. We will express the fact that $f$ is $\mathfrak{F}$-integrable as follows: $$f \in \text{Int}(\mathfrak{F}).$$ *Remark 3*. Using Definition [Definition 1](#main_def){reference-type="ref" reference="main_def"} we can construct the Riemann integral as follows. Let $\delta > 0$ be a positive real number. Denote $$P_{<\delta} = \{(\Pi, T) \in \text{TP}[0,1]:\ d(\Pi) < \delta\},$$ where $d(\Pi)$ stands for the diameter of $\Pi$. Consider now $$\mathfrak{B}_{<\delta} = \{P_{<\delta}: \delta > 0\}.$$ It is easy to check that $\mathfrak{B}_{<\delta}$ is a filter base. Denote by $\mathfrak{F}_{<\delta}$ the filter generated by $\mathfrak{B}_{<\delta}$. Let $f:[0,1] \rightarrow {\mathbb R}$ be a function. Then $f$ is Riemann integrable if there exists the limit $\lim\limits_{\mathfrak{F}_{<\delta}} S(f, \Pi, T)$. Below we study different properties of filter integration. **Definition 4**. Let $X$ be a non-empty set, $f: X \rightarrow {\mathbb R}$ be a function, and $\mathfrak{F}$ be a filter on $X$. We say that $f$ is *bounded with respect to $\mathfrak{F}$* ($\mathfrak{F}$-bounded for short), if there is $C > 0$ and there exists $A \in \mathfrak{F}$ such that $|f(t)| < C$ for every $t \in A$. The following lemma is very simple, but for the reader's convenience we present its proof. **Lemma 5**. *Let $X$ be a non-empty set, $f: X \rightarrow {\mathbb R}$ be a function, and $\mathfrak{F}$ be a filter on $X$. Suppose that there exists $I \in {\mathbb R}$, $I = \lim\limits_{\mathfrak{F}} f$. Then $f$ is $\mathfrak{F}$-bounded.* *Proof.* We know that $I = \lim\limits_{\mathfrak{F}} f$. It means that for every $\varepsilon > 0$ there exists $A \in \mathfrak{F}$ such that for all $t \in A$ $|f(t) - I| < \varepsilon$. Consider $$|f(t)| - |I|\leqslant|f(t) - I| < \varepsilon.$$ In other words, $|f(t)| < |I| + \varepsilon$. Then just put $C := |I| + \varepsilon$. ◻ The next theorem generalizes a well-known fact about the Riemann integral: if a function is Riemann integrable then it is bounded. **Theorem 6**. *Let $\mathfrak{F}$ be a filter on $\text{TP}[0,1]$, $f:[0,1] \rightarrow {\mathbb R}$ be a function, and $f \in \text{Int}(\mathfrak{F})$. Then $S(f, \Pi, T)$ is $\mathfrak{F}$-bounded.* *Proof.* Just use Lemma [Lemma 5](#lemma_bounded){reference-type="ref" reference="lemma_bounded"}.
◻ Let us formulate a well-known fact about the Riemann integral using filters. **Theorem 7**. *Let $f: [0,1] \rightarrow {\mathbb R}$ be such that there exists $\lim\limits_{\mathfrak{F}_{<\delta}} S(f, \Pi, T)$. Then $f$ is bounded; in other words, there is $C > 0$ such that $|f(t)| \leqslant C$ for all $t \in [0,1]$.* The next theorem is a natural generalization of Theorem [Theorem 7](#bound){reference-type="ref" reference="bound"}. **Theorem 8**. *Let $f: [0,1] \rightarrow {\mathbb R}$, let $\mathfrak{F}$ be a filter on $TP[0,1]$ such that for every $A \in \mathfrak{F}$ there exists $B \in \mathfrak{F}_{<\delta}$ such that $B \subset A$, and let there exist $I \in {\mathbb R}$ such that $I = \lim\limits_{\mathfrak{F}} S(f,\Pi, T)$. Then there exists $C > 0$ such that for each $t \in [0,1]$ we have $|f(t)| < C$.* *Proof.* There exists $I \in {\mathbb R}$ such that $I = \lim\limits_{\mathfrak{F}} S(f,\Pi, T)$ $\Leftrightarrow$ for all $\varepsilon > 0$ there exists $A \in \mathfrak{F}$ such that for all $(\Pi, T) \in A$ $|S(f, \Pi, T) - I| < \varepsilon$. We know that for every $A \in \mathfrak{F}$ there is $B \in \mathfrak{F}_{<\delta}$ such that $B \subset A$; then, in particular, for all $\varepsilon > 0$ there exist $A \in \mathfrak{F}$ and $B \in \mathfrak{F}_{<\delta}$ with $B \subset A$ such that for all $(\Pi, T) \in B$ we have $|S(f, \Pi, T) - I| < \varepsilon$ $\Rightarrow$ for all $\varepsilon > 0$ there exists $B \in \mathfrak{F}_{<\delta}$ such that for all $(\Pi, T) \in B$ $|S(f, \Pi, T) - I| < \varepsilon$ $\overset{\text{Theorem \ref{bound}}}{\Rightarrow}$ there exists $C > 0$ such that for each $t \in [0,1]$ we have $|f(t)| < C$, in other words, $f$ is bounded. ◻ Now we are going to demonstrate that filter integration has the additivity property. To demonstrate this we prove the next two easy lemmas. **Lemma 9**. *Let $X$ be a non-empty set, $f,\ g: X \rightarrow {\mathbb R}$ be functions, and $\mathfrak{F}$ be a filter on $X$. Let $x = \lim\limits_{\mathfrak{F}} f$, $y = \lim\limits_{\mathfrak{F}} g$. Then $\lim\limits_{\mathfrak{F}} (f+g) = x+y$.* *Proof.* We know that $x = \lim\limits_{\mathfrak{F}} f$, so for each $U \in \mathcal{O}(x)$ there is $A \in \mathfrak{F}$ such that $f(A) \subset U$. Analogously, $y = \lim\limits_{\mathfrak{F}} g$ means that for each $V \in \mathcal{O}(y)$ there is $B \in \mathfrak{F}$ such that $g(B) \subset V$. We have to demonstrate that for each $W \in \mathcal{O}(x + y)$ there exists $C \in \mathfrak{F}$ such that $(f+g)(C) \subset W$. Let us fix $W \in \mathcal{O}(x+y)$. Then there exist $W_1 \in \mathcal{O}(x)$ and $W_2 \in \mathcal{O}(y)$ such that $W \supset W_1 + W_2$. Then there are $C_1,\ C_2 \in \mathfrak{F}$ such that $f(C_1) \subset W_1$ and $g(C_2) \subset W_2$. Denote $C := C_1 \cap C_2$. Clearly $C \in \mathfrak{F}$. So $$(f+g)(C) \subset f(C) + g(C) \subset W_1 + W_2 \subset W.$$ ◻ **Lemma 10**. *Let $X$ be a non-empty set, $f: X \rightarrow {\mathbb R}$ be a function, $\mathfrak{F}$ be a filter on $X$, and $\alpha \in {\mathbb R}$. Let $x = \lim\limits_{\mathfrak{F}} f$. Then $\lim\limits_{\mathfrak{F}} \alpha f = \alpha x$.* *Proof.* $x = \lim\limits_{\mathfrak{F}} f$ means that for each $U \in \mathcal{O}(x)$ there is $A \in \mathfrak{F}$ such that $f(A) \subset U$. We have to demonstrate that for all $V \in \mathcal{O}(\alpha x)$ there is $B \in \mathfrak{F}$ such that $(\alpha f)(B) \subset V$. Suppose that $\alpha \neq 0$. The case $\alpha = 0$ is obvious. Remark that if $W \in \mathcal{O}(x)$ then $\alpha W \in \mathcal{O}(\alpha x)$. So just put $B := A$.
Then $(\alpha f)(B) = \alpha f(B) \subset \alpha U \in \mathcal{O}(\alpha x)$. ◻ **Theorem 11**. *Let $\mathfrak{F}$ be a filter on $\text{TP}[0,1]$, $f,g:[0,1] \rightarrow {\mathbb R}$ be a functions, $f \in \text{Int}(\mathfrak{F})$, $\alpha,\ \beta \in {\mathbb R}$, and $f \in \text{Int}(\mathfrak{F})$ and $g \in \text{Int}(\mathfrak{F})$. Then $(\alpha f + \beta g) \in \text{Int}(\mathfrak{F})$* *Proof.* Just use Lemmas [Lemma 9](#sum){reference-type="ref" reference="sum"} and [Lemma 10](#mult){reference-type="ref" reference="mult"}. ◻ # Integration with respect to different filters In the previous section we've studied arithmetic properties of integral over filter and problems deals with boundedness. This section is devoted to integration over different filters and its relations. For $(\Pi, T) \in \text{TP}[0,1]$ and $t \in T$ we denote $\Delta(t)$ length of the element of partition of $\Pi$ which covers $t$. Let $(\Pi_1, T_1), (\Pi_2, T_2)$ be partitions of $[0,1]$. Consider $$\label{dist} \begin{split} \rho((\Pi_1, T_1), (\Pi_2, T_2)) = \\ \sum_{t \in T_1 \cap T_2} |\Delta_1(t) - \Delta_2(t)| + \sum_{T_1\setminus T_2} \Delta_1(t) + \sum_{T_2\setminus T_1} \Delta_2(t). \end{split}$$ For easy using of concept defined in Equation [\[dist\]](#dist){reference-type="ref" reference="dist"} consider ${\mathbb F}: [0,1] \rightarrow l_1[0,1]$, such that ${\mathbb F}(t) = e_t$, where $$e_t(\tau) = \begin{cases} 1, \text{if } \tau = t;\\ 0, \text{otherwise}. \end{cases}$$ It is clearly then that $$\rho((\Pi_1, T_1), (\Pi_2, T_2)) = ||S({\mathbb F}, \Pi_1, T_1) - S({\mathbb F}, \Pi_2, T_2)||.$$ Now we are going to demonstrate that the mapping $\rho$, defined above, is a metric, or distance between two tagged partitions. **Proposition 1**. *Consider $\rho: \text{TP}[0,1] \times \text{TP}[0,1] \rightarrow {\mathbb R}$, $\rho((\Pi_1, T_1), (\Pi_2, T_2)) = ||S({\mathbb F}, \Pi_1, T_1) - S({\mathbb F}, \Pi_2, T_2)||$. Then $\rho$ satisfies all metric axioms.* *Proof.* 1. let $(\Pi_1, T_1) = (\Pi_2, T_2)$. It is clear that in this case $\rho((\Pi_1, T_1), (\Pi_2, T_2)) = 0$; 2. let $\rho((\Pi_1, T_1), (\Pi_2, T_2)) = 0$. Then $\rho((\Pi_1, T_1), (\Pi_2, T_2)) = \sum\limits_{t \in T_1 \cap T_2} |\Delta_1(t) - \Delta_2(t)| + \sum\limits_{T_1\setminus T_2} \Delta_1(t) + \sum\limits_{T_2\setminus T_1} \Delta_2(t) = 0$. We have a sum of non-negative numbers equals to 0. This means that - $\forall t \in T_1 \cap T_2$ $|\Delta_1(t) - \Delta_2(t)| = 0$ $\Rightarrow$ $\forall t \in T_1 \cap T_2$ $\Delta_1(t) = \Delta_2(t)$; - $\forall t \in T_1 \setminus T_2$ $\Delta_1(t)= 0$; - $\forall t \in T_2 \setminus T_1$ $\Delta_2(t)= 0$; $\Rightarrow$ $(\Pi_1, T_1) = (\Pi_2, T_2)$. 3. consider $(\Pi_1, T_1), (\Pi_2, T_2), (\Pi_3, T_3)$. Then $$\begin{split} \rho((\Pi_1, T_1), (\Pi_2, T_2)) =\\ ||S({\mathbb F}, \Pi_1, T_1) - S({\mathbb F}, \Pi_2, T_2) + S({\mathbb F}, \Pi_3, T_3) - S({\mathbb F}, \Pi_3, T_3)|| \leqslant\\ ||S({\mathbb F}, \Pi_1, T_1) - S({\mathbb F}, \Pi_3, T_3)|| + ||S({\mathbb F}, \Pi_3, T_3) - S({\mathbb F}, \Pi_2, T_2)|| =\\ \rho((\Pi_1, T_1), (\Pi_3, T_3)) + \rho((\Pi_3, T_3), (\Pi_2, T_2)) \end{split}$$  ◻ Now we introduce very important concept. **Definition 12**. Let $\mathfrak{F}_1, \mathfrak{F}_2$ be filters on $TP[0,1]$. 
We say that $\mathfrak{F}_2$ *$\rho$-dominates* the filter $\mathfrak{F}_1$ ($\mathfrak{F}_2 \succ_{\rho} \mathfrak{F}_1$), if for every $\varepsilon > 0$ and for each $A_1 \in \mathfrak{F}_1$ there exists $A_2 \in \mathfrak{F}_2$ such that for all $(\Pi_2, T_2) \in A_2$ there is $(\Pi_1, T_1) \in A_1$ such that $\rho((\Pi_1, T_1), (\Pi_2, T_2)) < \varepsilon$. **Proposition 2**. *Let $\mathfrak{F}_2 \supset \mathfrak{F}_1$. Then $\mathfrak{F}_2$ $\rho$-dominates $\mathfrak{F}_1$.* *Proof.* As $\mathfrak{F}_2 \supset \mathfrak{F}_1$ we obtain that if $A \in \mathfrak{F}_1$ then $A \in \mathfrak{F}_2$. Consider an arbitrary $\varepsilon > 0$. Then for every $A_1 \in \mathfrak{F}_1$ there is $A_2 \in \mathfrak{F}_2$, namely $A_2 := A_1$, such that for each $(\Pi_2, T_2) \in A_2$ there exists $(\Pi_1, T_1) \in A_1$, namely $(\Pi_1, T_1) := (\Pi_2, T_2)$, such that $\rho\left((\Pi_1, T_1), (\Pi_2, T_2)\right) = \rho\left((\Pi_2, T_2), (\Pi_2, T_2)\right) = 0 < \varepsilon$. ◻ The previous proposition shows that $\rho$-dominance generates an order relation on the filters on $\text{TP}[0,1]$ and is a more general concept than the relation of inclusion. It is clear that if $\mathfrak{F}_1 \subset \mathfrak{F}_2$ and $f \in \text{Int}(\mathfrak{F}_1)$ then $f \in \text{Int}(\mathfrak{F}_2)$ -- just use the definition of the limit of a function over a filter. So we can formulate the next easy proposition. **Proposition 3**. *Let $f: [0,1] \rightarrow {\mathbb R}$ be a function, $\mathfrak{F}_1,\ \mathfrak{F}_2$ be filters on $\text{TP}[0,1]$ such that $\mathfrak{F}_1\ \subset \mathfrak{F}_2$ and $f \in \text{Int}(\mathfrak{F}_1)$. Then $f \in \text{Int}(\mathfrak{F}_2)$.* **Theorem 13**. *Let $\mathfrak{F}_1, \mathfrak{F}_2$ be filters on $TP[0,1]$. Let $f: [0,1] \rightarrow {\mathbb R}$ be a bounded function. Let $I = \lim\limits_{\mathfrak{F}_1} S(f, \Pi, T)$ and $\mathfrak{F}_2 \succ_{\rho} \mathfrak{F}_1$. Then $I = \lim\limits_{\mathfrak{F}_2} S(f, \Pi, T)$.* *Proof.* Denote $C := \sup\limits_{t \in [0,1]} |f(t)|$. We have to prove that for every $\varepsilon > 0$ there exists $B \in \mathfrak{F}_2$ such that for each $(\Pi_B, T_B) \in B$ we have $|S(f, \Pi_B, T_B) - I| < \varepsilon$. We know that for every $\varepsilon > 0$ there exists $A \in \mathfrak{F}_1$ such that for each $(\Pi_1, T_1) \in A$ we have $|S(f, \Pi_1, T_1) - I| < \varepsilon$. Now for an arbitrary $\varepsilon > 0$ and $A \in \mathfrak{F}_1$ found above one can find $A_2 \in \mathfrak{F}_2$ such that for all $(\Pi_2, T_2) \in A_2$ there is $(\Pi_1, T_1) \in A$ such that $\rho((\Pi_1, T_1), (\Pi_2, T_2)) < \varepsilon$. Then put $B := A_2$. Then for all $(\Pi_B, T_B) \in B$ there is $(\Pi_1, T_1) \in A$ such that $$\begin{aligned} |S(f, \Pi_B, T_B) - I| =\\ |S(f, \Pi_B, T_B) - S(f, \Pi_1, T_1) + S(f, \Pi_1, T_1) -I| \leqslant\\ |S(f, \Pi_B, T_B) - S(f, \Pi_1, T_1)| + |S(f, \Pi_1, T_1) -I| \leqslant\\ \sum_{t \in T_B\cap T_1} |f(t)|\cdot|\Delta_B(t) - \Delta_1(t)| + \sum_{t \in T_B \setminus T_1} |f(t)|\cdot\Delta_B(t) +\\ \sum_{t \in T_1 \setminus T_B} |f(t)|\cdot\Delta_1(t) + \varepsilon \leqslant C\cdot \rho((\Pi_B, T_B), (\Pi_1, T_1)) + \varepsilon \leqslant\\ C\varepsilon + \varepsilon \leqslant\varepsilon(1 + C). \end{aligned}$$ ◻ # Exactly tagged filters In this part of the paper we consider problems dealing with filter integration of unbounded functions. **Definition 14**. Let $\mathfrak{B}$ be a filter base on $TP[0,1]$.
We say that $\mathfrak{B}$ is *exactly tagged* if there exist $A\subset [0,1]$ -- a strictly decreasing sequence of numbers such that for each $B \in \mathfrak{B}$ and for every $(\Pi, T) \in B$ we have that $T \cap A = \emptyset$. **Definition 15**. We say that filter $\mathfrak{F}$ on $TP[0,1]$ is *exactly tagged* if there exists exactly tagged base $\mathfrak{B}$ of $\mathfrak{F}$. **Theorem 16**. *If filter $\mathfrak{F}$ on $TP[0,1]$ is exactly tagged then there exists unbounded function $f: [0,1] \rightarrow {\mathbb R}$ such that $f \in Int(\mathfrak{F})$.* *Proof.* Denote $\displaystyle {\mathbb N}^{-1} = \left\{ \frac{1}{n} \right\}_{n \in {\mathbb N}}$ and consider next filter base $\mathfrak{B}= (B_n)_{n \in {\mathbb N}}$ on $TP[0,1]$:\ $\displaystyle B_1 = \left\{(\Pi, T): T \cap {\mathbb N}^{-1} = \emptyset \text{ and } d(\Pi) < 1\right\}$;\ $\displaystyle B_2 = \left\{(\Pi, T): T \cap {\mathbb N}^{-1} = \emptyset \text{ and } d(\Pi) < \frac{1}{2}\right\}$;\ $\displaystyle B_3 = \left\{(\Pi, T): T \cap {\mathbb N}^{-1} = \emptyset \text{ and } d(\Pi) < \frac{1}{3}\right\}$;\ \...\ $\displaystyle B_m = \left\{(\Pi, T): T \cap {\mathbb N}^{-1} = \emptyset \text{ and } d(\Pi) < \frac{1}{m}\right\}$. Consider now $$f(t) = \begin{cases} \displaystyle n, \text{ if } t = \frac{1}{n},\ n \in {\mathbb N}\\ 0, \text{ otherwise} \end{cases}.$$ Then for each $n \in {\mathbb N}$ and for every $(\Pi, T) \in B_n$ we have that $S(f, \Pi, T) = 0$, so $\lim\limits_{\mathfrak{B}}S(f, \Pi, T) = 0$. ◻ For a tagged partition $(\Pi, T)$ of $[0, 1]$ and $\tau \in [0, 1]$ denote $\ell(\Pi, T, \tau)$ the number which is equal to the length of the segment $\Delta \in \Pi$, for which $\tau \in \Delta$, if $\tau \in T$. If $\tau \notin T$, we put $\ell(\Pi, T, \tau) = 0$. In this notation $$S(f, \Pi, T) = \sum_{t \in [0,1]} f(t) \ell(\Pi, T, t).$$ **Theorem 17**. *For a filter $\mathfrak{F}$ on $TP[0,1]$ the following assertions are equivalent:* 1. *There exists an unbounded function $f:[0,1] \rightarrow [0, +\infty)$ such that $S(f, \Pi, T)$ is $\mathfrak{F}$-bounded;* 2. *There exists a countable subset $\{t_n\}_{n \in {\mathbb N}} \subset [0, 1]$ such that there is $A \in \mathfrak{F}$ such that for every $(\Pi, T) \in A$ $$\sum_{n \in {\mathbb N}} n \cdot \ell(\Pi, T, t_n) < 1.$$* *Proof.* **(1)$\Rightarrow$(2)**: Let $f$ be a non-negative, unbounded function on $[0,1]$ such that there is $C > 0$ and $B \in \mathfrak{F}$ such that for each $(\Pi, T) \in B$ we have $\sum\limits_{t \in [0,1]} f(t) \cdot \ell(\Pi, T, t) < C$. As $f$ is unbounded, there exists $(\alpha_n) \subset [0,1]$ such that for every $n \in {\mathbb N}$ $f(\alpha_n) \geqslant C n$. Then there exists $(\alpha_n) \subset [0,1]$, $C > 0$, there is $A \in \mathfrak{F}$, $A := B$ such that for all $(\Pi, T) \in A$ we obtain: $$\begin{aligned} \sum_{t \in [0,1]} n \cdot \ell(\Pi, T, \alpha_n) \leqslant\sum_{n \in {\mathbb N}} \frac{f(\alpha_n)}{C} \cdot \ell(\Pi, T, \alpha_n) \leqslant\\ \frac{1}{C}\sum_{t \in [0,1]} f(t) \cdot \ell(\Pi, T, t) < \frac{1}{C} \cdot C = 1. \end{aligned}$$ **(2)$\Rightarrow$(1)**: Let there exists a countable subset $\{t_n\}_{n \in {\mathbb N}} \subset [0, 1]$ and $C > 0$ such that there is $A \in \mathfrak{F}$ such that for every $(\Pi, T) \in A$ $\sum\limits_{n \in {\mathbb N}} n \cdot \ell(\Pi, T, t_n) < C$. Consider function $$f(t) = \begin{cases} n, \text{ if } t = \alpha_n,\ n \in {\mathbb N}\\ 0, \text{ if } t \neq \alpha_n \end{cases}.$$ Obviously, $f(t)$ is unbounded. 
Then there is $C > 0$ and there is $B \in \mathfrak{F}$, $B := A$ such that for every $(\Pi, T) \in A$ $$\begin{aligned} \sum_{t \in [0,1]} f(t) \cdot \ell(\Pi, T, t) \leqslant\sum_{n \in {\mathbb N}} f(\alpha_n) \cdot \ell(\Pi, T, \alpha_n) \leqslant\\ \sum_{n \in {\mathbb N}} n \cdot \ell(\Pi, T, \alpha_n) < C \end{aligned}$$. ◻ # Integration over filter on a subsegment Our next goal is as follows: if function $f$ is integrable on $[0,1]$ over filter $\mathfrak{F}$ on $\text{TP}[0,1]$ then for an arbitrary $[\alpha, \beta] \subset [0,1]$ function $f$ is is integrable on $[\alpha, \beta]$ over filter $\mathfrak{F}$. To achieve this purpose we need to construct some restriction of filter $\mathfrak{F}$ on subsegment $[\alpha, \beta] \subset [0,1]$. Now we present how we can construct such restriction. Consider an arbitrary $[\alpha, \beta] \subset [0,1]$. We consider only $T$ such that $T \cap (\alpha, \beta) \neq \emptyset$. Consider an arbitrary $(\Pi, T) \in TP[0,1]$. We have four cases: 1. $\min \{T \cap (\alpha, \beta)\} > \min\{\Pi \cap (\alpha, \beta)\}$\ $\max \{T \cap (\alpha ,\beta)\} < \max \{\Pi \cap (\alpha ,\beta)\}$; 2. $\min \{T \cap (\alpha, \beta)\} > \max \{\Pi \cap (0, \alpha)\}$\ $\max \{T \cap (\alpha ,\beta)\} < \max \{\Pi \cap (\alpha ,\beta)\}$; 3. $\min \{T \cap (\alpha, \beta)\} > \min\{\Pi \cap (\alpha, \beta)\}$\ $\max \{T \cap (\alpha ,\beta)\} < \min\{\Pi \cap (\beta, 1)\}$; 4. $\min \{T \cap (\alpha, \beta)\} > \max \{\Pi \cap (0, \alpha)$\ $\max \{T \cap (\alpha ,\beta)\} < \min\{\Pi \cap (\beta, 1)\}$. We have to construct a restriction of $(\Pi, T)$ on $[\alpha, \beta]$. In each of four described cases we have such $(\Pi_k, T_k) \in TP[\alpha, \beta]$, $k =1, 2, 3, 4$: 1. $\Pi_1 = \bigg(\Pi \setminus \big((\Pi \cap [0, \alpha)) \cup (\Pi \cap (\beta, 1)) \cup \min\{\Pi \cap (\alpha, \beta)\} \cup \max \{\Pi \cap (\alpha ,\beta)\}\big)\bigg) \cup \{\alpha, \beta\}$\ $T_1 = T \setminus \big((T \cap [0, \alpha)) \cup (T \cap (\beta, 1])\big)$; 2. $\Pi_2 = \bigg(\Pi \setminus \big((\Pi \cap [0, \alpha)) \cup \max\{\Pi \cap (\alpha, \beta)\} \cup (\Pi \cap (\beta, 1))\big)\bigg)\cup \{\alpha, \beta\}$\ $T_2 = T_1$; 3. $\Pi_3 = \bigg(\Pi \setminus \big((\Pi \cap [0,\alpha)) \cup \min\{\Pi \cap (\alpha)\} \cup (\Pi \cap [\beta, 1)) \big)\bigg)\cup \{\alpha, \beta\}$\ $T_3 = T_1$; 4. $\Pi_4 = \bigg(\Pi \setminus \big((\Pi \cap [0, \alpha)) \cup (\Pi \cap [\beta, 1))\big)\bigg)\cup \{\alpha, \beta\}$\ $T_4 = T_1$. Now if we have an arbitrary filter $\mathfrak{F}$ on $TP[0,1]$ we can construct filter $\mathfrak{F}_{[\alpha, \beta]}$ on $TP[\alpha, \beta]$, induced with $\mathfrak{F}$ in such way: consider an arbitrary $A \in \mathfrak{F}$ and for each $(\Pi, T) \in A$ we have to execute an algorithm, described above. For each $A \in \mathfrak{F}$ denote $A_\alpha^\beta$ the restriction of $A$ on $[\alpha, \beta]$, described above. **Definition 18**. Let $\mathfrak{F}$ be a filter on $TP[0,1]$, $[\alpha, \beta] \subset [0,1]$. We call the filter $\mathfrak{F}$ *$[\alpha, \beta]$-complemented* if for each $A \in \mathfrak{F}$, for every $(\Pi_1, T_1),\ (\Pi_2, T_2) \in A_\alpha^\beta$ there exists $(\Pi^*, T^*) \in TP[0, \alpha]$ and $(\Pi^{**}, T^{**}) \in TP[\beta, 1]$ such that $$(\Pi^*, T^*) \cup (\Pi_1, T_1) \cup (\Pi^{**}, T^{**}) \in A,$$ $$(\Pi^*, T^*) \cup (\Pi_2, T_2) \cup (\Pi^{**}, T^{**}) \in A.$$ Here we present promised result about filter integration on subsegment. **Theorem 19**. 
*Let $f: [0,1] \rightarrow {\mathbb R}$, $\mathfrak{F}$ be a filter on $TP[0,1]$ such that for each $[\alpha, \beta] \subset [0,1]$ $\mathfrak{F}$ is $[\alpha, \beta]$-complemented. Let $f$ is integrated of $[0,1]$ with respect to $\mathfrak{F}$. Then for every $[\alpha, \beta] \subset [0,1]$ $f$ is integrated on $[\alpha, \beta]$ with respect to $\mathfrak{F}$* *Proof.* We know that for an arbitrary $\varepsilon > 0$ there exists $A \in \mathfrak{F}$ such that for all $(\Pi_1, T_1), (\Pi_2, T_2) \in A$ we have: $|S(f, \Pi_1, T_1) - S(f, \Pi_2, T_2)| < \varepsilon$. Let fix $\varepsilon > 0$ and consider an arbitrary $[\alpha, \beta] \subset [0,1]$. For $A \in \mathfrak{F}$ consider an arbitrary $(\Pi^1, T^1), (\Pi^2, T^2) \in A_\alpha^\beta$. As $\mathfrak{F}$ is $[\alpha, \beta]$-complemented we can find $(\Pi^*, T^*) \in A_0^\alpha$ and $(\Pi^{**}, T^{**}) \in A_\beta^1$ such that $(\Pi_{11}, T_{11}) := (\Pi^*, T^*) \cup (\Pi^1, T^1) \cup (\Pi^{**}, T^{**}) \in A$ and $(\Pi_{22}, T_{22}) :=(\Pi^*, T^*) \cup (\Pi^2, T^2) \cup (\Pi^{**}, T^{**}) \in A$. Then $\varepsilon > |S(f, \Pi_{11}, T_{11}) - S(f, \Pi_{22}, T_{22})| = |S(f, \Pi^1, T^1) - S(f, \Pi^2, T^2)|$. ◻ **Acknowledgment.** This paper is partially supported by a grant from Akhiezer Foundation (Kharkiv). The author is thankful to his parents for their support and his scientific adviser, professor Vladimir Kadets for his constant help with this project. Also author thanks the Defense Forces of Ukraine for the defence and fight against Russian aggressors. 99 V. Kadets, *A course in Functional Analysis and Measure Theory*. Translated from the Russian by Andrei Iacob. Universitext. Cham: Springer. xxii, 539 p. (2018). V. Kadets, D. Seliutin, *Completeness in topological vector spaces and filters on $\mathbb{N}$*, Bulletin of the Belgian Mathematical Society, Simon Stevin 28(4). Listán-García, M.C. $f$-statistical convergence, completeness and $f$-cluster points, Bull. Belg. Math. Soc. Simon Stevin, **23**, 235--245 (2016).
arxiv_math
{ "id": "2309.10879", "title": "On integration with respect to filter", "authors": "Dmytro Seliutin", "categories": "math.FA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | This work improves the existing central limit theorems (CLTs) on Gibbs processes in three aspects. First, we derive a CLT for weakly stabilizing functionals, thereby improving on the previously used assumption of exponential stabilization. Second, we show that this CLT holds for interaction ranges up to the percolation threshold of the dominating Poisson process. This avoids imprecise branching bounds from graphical construction. Third, by extending the concept of Stein couplings from the Poisson to the Gibbs setting, we provide a quantitative CLT in terms of Kolmogorov bounds for normal approximation. An important conceptual ingredient in these advances are extensions of disagreement coupling adapted to increasing windows and to the comparison at multiple spatial locations. address: - Department of Mathematics, Aarhus University, Ny Munkegade 118, 8000 Aarhus C, Denmark - DIGIT Center, Aarhus University, Finlandsgade 22, 8200 Aarhus N, Denmark - | Department of Mathematics\ Aalborg University\ Aalborg, 8000, Denmark author: - Christian Hirsch - Moritz Otto - Anne Marie Svane bibliography: - ./lit.bib title: | Normal approximation for Gibbs processes\ via disagreement couplings --- # Introduction {#sec:int} Because of their ability to encode possibly highly complex interactions, Gibbs point processes are applied in a broad range of domains such as biology, ecology, materials science and telecommunication networks [@Baddeley:Rubak:Wolf:15; @g1]. However, even for simple models, the use of Gibbs point processes comes at massive computational costs as the simulation requires elaborate Markov chain Monte Carlo methods [@Baddeley:Rubak:Wolf:15]. This issue becomes particularly pressing when devising goodness-of-fit tests on large datasets. Therefore, an attractive approach is to develop test statistics that are asymptotically normal on large windows. Then, to develop a hypothesis test only the mean and variance under the null model are needed. This explains the need for central limit theorems (CLTs) on functionals of Gibbs point processes. The last decade was coined by vigorous research activities in the asymptotic theory of statistics that can be written as a sum of certain scores evaluated at the points of a point process with rapidly decaying correlations. Improving on the methods used in the classical setting of Poisson point processes in [@penrose], CLTs for Gibbs point processes could be derived in [@gibbs_limit; @gibbsCLT] by relying on the graphical construction of Gibbs processes [@ferrari]. However, despite these recent advances, some aspects of the limit theory for Poisson processes could so far not been transferred to Gibbs point processes: 1. All of [@CX22; @gibbs_limit; @gibbsCLT] are formulated under the condition of exponential stabilization. This makes it difficult to apply [@gibbs_limit; @gibbsCLT] for delicate topological functionals, such as the persistent Betti numbers. 2. Both [@gibbs_limit; @gibbsCLT] rely on the graphical construction of Gibbs processes from [@ferrari], thereby imposing restrictive constraints on the interaction range of the Gibbs process for which a (partially quantitative) CLT can be established. 3. In [@benes], disagreement coupling is used to establish a CLT with a Gibbs particle process as input. However, the investigations are restricted to U-statistics and do not provide quantitative error bounds for normal approximation. 
Very recently, [@CX22] derived a quantitative CLT with optimal convergence rates (up to logarithmic corrections) and for a very general class of $\beta$-mixing point processes. However, the convergence is quantified with respect to the Wasserstein distance and the methods do not extend easily to the Kolmogorov distance. In our work, we will address all of the shortcomings mentioned above. The key tool for achieving the improvements will be a more refined analysis of the correlation structure of Gibbs processes using disagreement coupling. While this technique was initially suggested for studying the spatial correlation structure of lattice-Gibbs point processes [@maes], it has recently been successfully extended to derive Poisson approximation theorems for continuum Gibbs processes [@dp]. However, the construction in [@dp] is not adapted to the setting of increasing windows or the coupling at several locations, which are critical for the CLT improvements derived in the present article. Therefore, a major part of our investigation is devoted to making disagreement coupling more flexible so that it can accommodate the needs from spatial limit theorems. Equipped with these conceptual advances, we improve the existing CLTs for Gibbs point processes in three aspects. 1. We extend the CLT for weakly stabilizing functionals from [@penrose] to the setting of Gibbs point processes. In particular, this yields a CLT for persistent Betti numbers. 2. We prove a CLT for Gibbs point processes under the condition that the interaction range is smaller than the critical value of continuum percolation of an associated Poisson point process. This improves on the restrictive constraints coming from the branching bounds of the graphical construction in [@ferrari]. 3. We show how the convergence in Wasserstein distance from [@CX22] can be upgraded to a convergence in Kolmogorov distance provided stricter conditions are imposed on the point process and the functional. To that end, we extend the theory of Stein couplings from [@chen] to Gibbs processes. The rest of this manuscript is organized as follows. First, in Section [2](#sec:mod){reference-type="ref" reference="sec:mod"}, we introduce the model and main results, Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}. Then, in Section [3](#sec:exa){reference-type="ref" reference="sec:exa"}, we discuss examples satisfying the conditions of these results. We prove Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} in Sections [5](#sec:weak){reference-type="ref" reference="sec:weak"} and [6](#sec:gibbs){reference-type="ref" reference="sec:gibbs"}, respectively. Both proofs rely on a techniques from disagreement percolation discussed in Section [4](#sec:const){reference-type="ref" reference="sec:const"}. Finally, we provide a technical auxiliary result on radial orderings in the Appendix [7](#sec:iota){reference-type="ref" reference="sec:iota"}. # Model and main results {#sec:mod} The main results of this article, Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} below, are CLTs for stabilizing functionals on Gibbs point processes. First, we discuss in greater detail the considered classes of Gibbs point processes. For a concise introduction to Gibbs point processes, we refer the reader to [@dereudre]. 
## Conditions on the Gibbs process

Let $\mathbb R^d$ be Euclidean space equipped with the Borel $\sigma$-algebra $\mathcal B^d$. Define $\mathcal B^d_0$ to be the set of bounded Borel sets. Let $\mathbf N$ be the space of all locally finite subsets of $\mathbb R^d$ and let $\mathcal N$ denote the smallest $\sigma$-algebra such that $\mu\mapsto \mu(B) :=| \mu \cap B|$ is measurable for all $B\in\mathcal B_0^d$. Let $\mathbf N_0\subseteq\mathbf N$ denote the subspace of finite subsets and for $B\in \mathcal B^d_0$, let $\mathbf N_B\subseteq\mathbf N$ denote the locally finite subsets of $B$. A *point process* is a measurable map from $(\Omega,\mathcal F)$ to $(\mathbf N,\mathcal N)$, where $(\Omega,\mathcal F,\mathbb P)$ is some probability space. To define Gibbs point processes, we start from an energy functional $E:\mathbf N_0 \to (-\infty,\infty]$ which satisfies:

*Non-degeneracy:* $E(\varnothing)=0$.

*Hereditary:* For all $\psi\in \mathbf N_0$ and $x\in \mathbb R^d$, $E(\psi)=\infty$ implies $E(\psi \cup \{x\} )=\infty$.

*Stability:* There is a constant $C>0$ such that $$\label{cond:stability} E(\psi) \geq -C|\psi| \quad\text{for all } \psi \in \mathbf N_0. \tag{{\bf STAB}}$$

The *local energy* $J\colon{\mathbf N}_0\times{\mathbf N}_0\to (-\infty,\infty]$ is given by $$\begin{aligned} \label{eHamilton} J(\mu,\psi):= E(\mu \cup \psi) - E(\psi),\end{aligned}$$ which encodes the energy increase when adding the point configuration $\mu$ to $\psi$. We will assume that $J$ has *finite interaction range*. That is, there exists $r_0 > 0$ such that $$J(x, \psi) = J(x, \psi\cap B_{r_0}(x))$$ for all $x \in \mathbb R^d$ and $\psi \in \mathbf N_0$, where $$B_{r_0}(x):= x + B_{r_0}:= \{y\in \mathbb R^d\colon|y - x|\leqslant r_0\}$$ denotes the Euclidean ball with radius $r_0 > 0$ centered at $x \in \mathbb R^d$. Note that this assumption allows us to define $J(\mu, \psi)$ also for $\psi \in \mathbf N$. For $B\in\mathcal B_0^d$, the *partition function* $Z_B\colon{\mathbf N}\to (0,\infty)$ is defined by $$\begin{aligned} Z_B(\psi):=\mathbb E[e^{-J(\mathcal P^{\mathsf{d}}_B,\psi)} ],\quad \psi\in{\mathbf N},\end{aligned}$$ where $\mathcal P^{\mathsf{d}}_B$ denotes a unit intensity Poisson process on $B$. Condition [\[cond:stability\]](#cond:stability){reference-type="eqref" reference="cond:stability"} indeed ensures that $Z_B(\psi)<\infty$ and since $J(\varnothing,\psi)=0$ for all $\psi\in{\mathbf N}$, we also have that $Z_B(\psi)\geqslant e^{-|B|}>0.$ For $B\in \mathcal B_0^d$ and $\psi\in \mathbf N$, we now define the *finite-volume Gibbs point process* $\mathcal X(B,\psi)$ on $B$ with boundary conditions $\psi$ as the point process with density $$\begin{aligned} \label{eGibbsmeasure} \mu \mapsto Z_B(\psi)^{-1}e^{-J(\mu ,\psi)}\end{aligned}$$ with respect to the unit intensity Poisson process on $B$. This is well-defined since $0< Z_B(\psi)<\infty$. The finite-volume Gibbs process is sometimes introduced, e.g.
in [@dp], in terms of the Papangelou conditional intensity (PI) $\kappa\colon\mathbb R^d \times\mathbf N_0\to [0,\infty)$ defined by $$\kappa(x,\psi) = e^{-J(x,\psi)}.$$ Then, $\mathcal X(B,\psi)$ can be characterized as the unique point process satisfying the *GNZ equations* $$\begin{aligned} \label{eGNZ_finite} \mathbb E\Big[\sum_{X_i \in \mathcal X(B,\psi)} f(X_i,\mathcal X(B,\psi))\Big]= \mathbb E\Big[ \int_B f(x,\mathcal X(B,\psi) \cup\{x\}) \kappa(x,\mathcal X(B,\psi) \cup \psi)\,\lambda({\rm d}x)\Big],\tag{{\bf GNZ1}}\end{aligned}$$ for each $f\colon B \times\mathbf N\to [0,\infty)$ that is measurable [@Georgii76; @NgZe79]. The finite-volume Gibbs point process $\mathcal X(B,\psi)$ on $B\in\mathcal B_0$ with boundary conditions $\psi\in \mathbf N$ satisfies the *DLR equations*: For $C\subseteq B$ and a.a. $\varphi\in \mathbf N_{B \backslash C}$ $$\begin{aligned} \label{eq:DLR1} \mathbb P(\mathcal X(B,\psi)\cap C \in\cdot \mid \mathcal X(B,\psi)\cap {B\backslash C} = \varphi) = \mathbb P(\mathcal X(C,\psi \cup \varphi)\in \cdot).\tag{{\bf DLR1}}\end{aligned}$$ In words, the conditional distribution of $\mathcal X(B,\psi)\cap C$ given $\mathcal X(B,\psi)\cap {B\backslash C}$ is again a Gibbs point process, namely the Gibbs process on $C$ with boundary conditions $\psi \cup( \mathcal X(B,\psi)\cap {B\backslash C})$.

The GNZ equations can be used to define Gibbs processes in unbounded domains. A point process $\mathcal X$ is said to be an *infinite-volume Gibbs point process* on $\mathbb R^d$ if $$\begin{aligned} \label{eGNZ} \mathbb E\Big[\sum_{X_i \in \mathcal X} f(X_i,\mathcal X)\Big]= \mathbb E\Big[ \int f(x,\mathcal X\cup \{x\}) \kappa(x,\mathcal X)\,\lambda({\rm d}x)\Big],\tag{{\bf GNZ2}}\end{aligned}$$ for each measurable $f\colon\mathbb R^d\times\mathbf N\to [0,\infty)$. While the finite-volume Gibbs process was explicitly defined in [\[eGibbsmeasure\]](#eGibbsmeasure){reference-type="eqref" reference="eGibbsmeasure"}, existence and in particular uniqueness of infinite-volume Gibbs processes are generally far from trivial, see e.g. [@betsch; @dereudre]. We therefore make some further assumptions on $\kappa$. Let $r_c = r_c(\alpha_0)$ denote the critical threshold of continuum percolation for a Poisson point process $\mathcal P^{\mathsf{d}}$ with intensity $\alpha_0 > 0$. That is, $r_c$ is the infimum of all $r > 0$ such that $B_{r/2}(\mathcal P^{\mathsf{d}}) := \mathcal P^{\mathsf{d}}+ B_{r/2}$ has an unbounded connected component with positive probability. Throughout the paper, we will work under the following assumption. $$\begin{aligned} \label{as:(A)} \text{The PI is bounded $0 \leqslant\kappa\leqslant\alpha_0$ and has finite interaction range $r_0 < r_c(\alpha_0)$. }\tag{{\bf A}} \end{aligned}$$ Then, for any given PI $\kappa$, there exists a unique infinite-volume Gibbs point process (see [@dereudre Thm. 5.4] or Proposition [Proposition 16](#pr:uniq){reference-type="ref" reference="pr:uniq"} below). We give two well-known examples of Gibbs point process models satisfying the assumptions.

**Example 1** (Strauss process). In this example, we use for $\alpha_0, r_0>0$ and $\beta \in [0,1]$ the potential $$J(x,\psi):= -\log(\alpha_0) - \psi(B_{r_0}(x))\log(\beta),$$ so that $\kappa(x,\psi) = \alpha_0\,\beta^{\psi(B_{r_0}(x))} \leqslant \alpha_0$.

**Example 2** (Area-interaction process [@bvl]).
In this example, we use for $\alpha_0, r_0>0$ and $\beta \in [0,1]$ the potential $$J(x,\psi):= -\log(\alpha_0) - V(x, \psi) \log(\beta),$$ where $$\begin{aligned} V(x,\psi):=\Big|B_{r_0/2}(x)\setminus\bigcup_{y \in \psi \cap B_{r_0/2}(x)} B_{r_0/2}(y)\Big|.\end{aligned}$$ In particular, $\kappa(x,\psi) = \alpha_0\,\beta^{V(x,\psi)} \leqslant \alpha_0$. In both cases, the PI $\kappa$ is *translation-invariant*, i.e., $\kappa(x, \psi) = \kappa(x+y, \psi +y)$ for all $x,y\in \mathbb R^d$ and $\psi\in \mathbf N_0$.

We finally mention that an infinite-volume Gibbs process also satisfies the DLR equations: For any $B\in \mathcal B_0$, it holds for a.a. $\varphi\in \mathbf N_{B^c}$ that $$\label{eq:DLR2} \mathbb P(\mathcal X\cap B \in\cdot \mid \mathcal X\cap{ B^c} = \varphi) = \mathbb P( \mathcal X(B, \varphi) \in \cdot). \tag{{\bf DLR2}}$$ That is, given the boundary conditions $\mathcal X\cap{ B^c}$, the distribution of $\mathcal X\cap B$ is a finite-volume Gibbs process.

We note that large parts of our arguments could be extended from a Euclidean interaction radius $r_0$ to integrable functions $\alpha_0$ and general symmetric relations $\sim$, see [@dp]. However, to make the proofs more accessible, we present our arguments with the Euclidean interaction radius $r_0$. Moreover, most arguments carry over to the case where the reference Poisson process is marked and the energy is allowed to depend on marks.

## Statement of main results and conditions on the functional

The main results of this article are two CLTs for functionals on Gibbs processes on growing domains, see Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}. These results are complementary in the following sense. While Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} describes a CLT for a general class of translation-invariant Gibbs functionals without convergence rates, Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} provides a quantitative CLT for the more restricted class of Gibbs functionals that can be written as a sum of scores. Therefore, it is natural that the two results require different sets of conditions. However, loosely speaking, in both cases the conditions can be thought of as a suitable growth/moment bound and stabilization.

We now discuss a CLT for weakly stabilizing functionals as the first main result of our paper. Let $H: \mathbf N_0 \to \mathbb R$ be a functional. We say that $H$ is *translation-invariant* if $H(\varphi) = H(\varphi+ x)$ for all $x\in \mathbb R^d$ and $\varphi\in \mathbf N_0$. Writing $Q_n :=[-n/2,n/2]^d$, $n\geq 1$, we consider functionals $H_n$ of locally finite point patterns $\varphi\in \mathbf N$ of the form $$H_n (\varphi):= H(\varphi\cap Q_n).$$ To prove a CLT for $H_n(\mathcal X)$, we need to impose a growth condition and a weak stabilization condition.
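As a small computational aside before we formulate these conditions, the following minimal sketch evaluates the Papangelou intensities $\kappa = e^{-J}$ of Examples 1 and 2 above for a finite planar pattern. It is purely illustrative: it assumes `numpy`, the parameter values are arbitrary, and the uncovered area $V(x,\psi)$ is estimated by a crude Monte Carlo step rather than computed exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha0, beta, r0 = 2.0, 0.5, 0.3            # illustrative parameters only
psi = rng.uniform(0.0, 1.0, size=(50, 2))   # a finite point pattern in [0, 1]^2

def kappa_strauss(x, psi):
    """Strauss PI: kappa(x, psi) = alpha0 * beta^{psi(B_{r0}(x))}."""
    n_close = np.sum(np.linalg.norm(psi - x, axis=1) <= r0)
    return alpha0 * beta ** n_close

def kappa_area_interaction(x, psi, n_mc=20_000):
    """Area-interaction PI: kappa(x, psi) = alpha0 * beta^{V(x, psi)}, where
    V(x, psi) is the area of B_{r0/2}(x) not covered by the balls B_{r0/2}(y),
    estimated here by plain Monte Carlo."""
    close = psi[np.linalg.norm(psi - x, axis=1) <= r0 / 2]
    u = rng.uniform(-r0 / 2, r0 / 2, size=(n_mc, 2)) + x   # sample the bounding square
    u = u[np.linalg.norm(u - x, axis=1) <= r0 / 2]         # keep points in B_{r0/2}(x)
    if len(close) == 0:
        frac_uncovered = 1.0
    else:
        dists = np.linalg.norm(u[:, None, :] - close[None, :, :], axis=2)
        frac_uncovered = np.all(dists > r0 / 2, axis=1).mean()
    V = np.pi * (r0 / 2) ** 2 * frac_uncovered
    return alpha0 * beta ** V

x = np.array([0.5, 0.5])
print(kappa_strauss(x, psi), kappa_area_interaction(x, psi))
```

In both cases the returned value lies in $[0,\alpha_0]$, in line with Assumption [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"}.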
We express these conditions in terms of the *add-one cost operator* $$(D_xH)(\varphi) := H(\varphi\cup \{x\}) - H(\varphi).$$ For the growth condition, we demand that there are constants $c_{\mathsf M, 1},R_{\mathsf M, 1}>0$ such that for any $y\in \mathbb R^d$ and any locally finite point pattern $\varphi\subseteq\mathbb R^d$, $$\begin{aligned} \label{eq:m1} |D_yH(\varphi)| \leqslant c_{\mathsf M, 1}\exp\big(c_{\mathsf M, 1}\varphi(B_{R_{\mathsf M, 1}}(y))\big).\tag{{\bf M1}} \end{aligned}$$ In the *weak stabilization* condition, we require that for every locally finite $\varphi\subseteq\mathbb R^d$ and any sequence of windows $Q_{z_n,c_n} := z_n + [-c_n/2,c_n/2]^d$ with $\bigcup_{k\geq 1} \bigcap_{n\geq k} Q_{z_n,c_n} = \mathbb R^d$, the limit $$\label{eq:s1} D_0H(\varphi) := \lim_{n\to \infty } D_0H(\varphi\cap Q_{z_n,c_n}) \tag{{\bf S1}}$$ exists and is independent of the sequence of windows $Q_{z_n,c_n}$. We let $N(0,\sigma^2)$ denote a normal random variable with variance $\sigma^2 \geqslant 0$.

**Theorem 3** (CLT for weakly stabilizing functionals on Gibbs processes). *Let $\mathcal X$ be an infinite-volume Gibbs point process with translation-invariant PI satisfying **(A)**. Let $H$ be a translation-invariant functional on finite point patterns satisfying conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"}. Then, $$|Q_n|^{-1}\mathsf{Var}(H_n(\mathcal X) ) \to \sigma^2,$$ for some $\sigma^2 \geqslant 0$. Moreover, $$|Q_n|^{-1/2}(H_n(\mathcal X) - \mathbb E[H_n(\mathcal X)]) \to N(0,\sigma^2).$$*

While [@penrose] give a CLT for weakly stabilizing functionals on Poisson point processes, most CLTs for more general point process models in the literature require exponential stabilization [@yogesh; @gibbsCLT]. The weak stabilization condition [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} closely mimics the one in [@penrose]. The price for the weak stabilization condition is a strong growth condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"}. While this resembles the power growth condition in [@yogesh Equation (1.18)], it requires a fixed radius $R_{\mathsf M, 1}$. We note that [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} is only used to obtain the moment bounds in Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} below, which may be proved by other methods if [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} does not apply.

Positivity of the limiting variance is not part of the statement in Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"}. In Corollary [Corollary 5](#cor:betti){reference-type="ref" reference="cor:betti"}, we give an example where positivity can be checked. The argument easily generalizes to a variety of other functionals.

Our second main result is a quantitative normal approximation result in the *Kolmogorov distance* $$\begin{aligned} d_{\mathsf K}(X,Y):=\sup_{u \in \mathbb R} |\mathbb P(X \leqslant u) - \mathbb P(Y \leqslant u)| \end{aligned}$$ between random variables $X,Y$.
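For concreteness, the following minimal sketch (assuming `numpy` and `scipy`) evaluates this Kolmogorov distance between an empirical sample and the standard normal. The i.i.d. Gaussian sample used here is only a placeholder for the standardized Gibbs functional of interest.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(1)
# placeholder for the standardized functional (H^n - E[H^n]) / sqrt(Var(H^n))
sample = rng.standard_normal(10_000)

# d_K = sup_u |F_n(u) - Phi(u)|, computed from the empirical CDF
x = np.sort(sample)
phi = norm.cdf(x)
ecdf_right = np.arange(1, len(x) + 1) / len(x)   # F_n(x_i)
ecdf_left = np.arange(0, len(x)) / len(x)        # F_n(x_i^-)
d_K = max(np.max(ecdf_right - phi), np.max(phi - ecdf_left))

# the same quantity is returned as the Kolmogorov-Smirnov statistic
print(d_K, kstest(sample, "norm").statistic)
```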
Concerning the setup for Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, for $n \geqslant 1$, we consider the functionals $$\begin{aligned} H^n(\varphi)& :=\sum_{x \in \varphi\cap Q_n} g(x, \varphi), \qquad \varphi\in \mathbf N, \label{eqn:poixidef} \end{aligned}$$ for some measurable score function $g\colon\mathbb R^d \times\mathbf N\to [0, \infty)$. We follow the convention that $g(x, \varphi) = g(x, \varphi\cup \{x\})$ for every locally finite $\varphi\subseteq\mathbb R^d$ and $x \in \mathbb R^d$. We first impose a bounded moments condition. More precisely, we assume that $$\begin{aligned} \sup_{x_1, \dots, x_5 \in \mathbb R^d} \mathbb E[g(x_1,\mathcal X\cup \{x_1, \dots, x_5\})^{6}]<\infty,\tag{{\bf M2}} \label{eq:m2}\nonumber \end{aligned}$$ where we note that the $x_1, \dots, x_5$ do not need to be pairwise distinct. Second, we require that $g$ is *exponentially stabilizing* in the spirit of [@yogesh]. To make this precise, we assume that there is a *stabilization radius* $R(x,\varphi)$. That is, $g(x, \varphi) = g(x, \varphi\cap B_{R(x, \varphi)}(x))$ holds for all locally finite $\varphi\subseteq\mathbb R^d$ and $x \in \varphi$. We also assume that the event $\{R(x,\varphi)\leqslant r\}$ is measurable with respect to $\varphi\cap B_r(x)$. Then, we impose that $$\begin{aligned} - \limsup_{r\uparrow\infty}\, \sup_{x_1, \dots, x_5 \in \mathbb R^d} r^{-1}{\log\mathbb P(R(x_1,\mathcal X\cup \{x_1,\dots,x_5\}) > r)} =:c_{\mathsf{es}}>0.\tag{{\bf S2}} \label{eq:s2}\nonumber \end{aligned}$$ Finally, we assume that $$\begin{aligned} \liminf_{n \to \infty} \frac{\mathsf{Var}(H^n)}{|Q_n|}\in(0,\infty).\tag{{\bf Var2}}\label{eqn:poimomass}\nonumber \end{aligned}$$

**Theorem 4** (Quantitative normal approximation of Gibbsian score sums). *Let $\mathcal X$ be an infinite-volume Gibbs point process satisfying **(A)**. Assume that [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"}, [\[eq:s2\]](#eq:s2){reference-type="eqref" reference="eq:s2"}, [\[eqn:poimomass\]](#eqn:poimomass){reference-type="eqref" reference="eqn:poimomass"} hold. Moreover, we assume that $g(x, \varphi) = 0$ implies $g(x, \varphi') = 0$ for all $x\in \mathbb R^d$ and locally finite $\varphi\subseteq\varphi'\subseteq\mathbb R^d$. Then, $H^n:=H^n(\mathcal X)$ defined at [\[eqn:poixidef\]](#eqn:poixidef){reference-type="eqref" reference="eqn:poixidef"} satisfies $$\begin{aligned} d_{\mathsf K}\Big(\frac{H^n-\mathbb E[H^n]}{\sqrt{\mathsf{Var}(H^n)}}, N(0,1) \Big) \in O\Big( \frac{(\log |Q_n|)^{2d}}{\sqrt {|Q_n|}}\Big). \end{aligned}$$*

To prove Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, we generalize to Gibbs processes the martingale approach from [@penrose] and the Palm couplings from [@chen]. Our main contribution is a precise control of the decay of spatial correlations through refined forms of disagreement percolation that are tailored to Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}. As mentioned in Section [1](#sec:int){reference-type="ref" reference="sec:int"}, very recently a quantitative CLT was derived for general weakly correlated point processes [@CX22] without the need for any monotonicity assumptions on the score function $g$. However, the convergence rates are given in terms of the Wasserstein distance, whereas bounds for the Kolmogorov distance are often easier to interpret.
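To fix ideas, here is a minimal sketch of a score sum of the form [\[eqn:poixidef\]](#eqn:poixidef){reference-type="eqref" reference="eqn:poixidef"}, with $g(x,\varphi)$ chosen as half of the total length of the finite edges of the Voronoi cell of $x$ (the score used in Section [3.2](#sec:vor){reference-type="ref" reference="sec:vor"} below). It assumes `numpy` and `scipy`, uses a binomial point pattern as a stand-in for the Gibbs process, and the window size and number of points are arbitrary.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)
n_side = 10.0                                               # window Q_n = [-n/2, n/2]^2
pts = rng.uniform(-n_side / 2, n_side / 2, size=(400, 2))   # placeholder point pattern

def half_finite_edge_lengths(points):
    """g(x, phi): half of the total length of the finite edges of the Voronoi
    cell of each point; every finite edge is shared by exactly two cells."""
    vor = Voronoi(points)
    g = np.zeros(len(points))
    for (p, q), ridge in zip(vor.ridge_points, vor.ridge_vertices):
        if -1 in ridge:                 # edge extends to infinity: not a finite edge
            continue
        a, b = ridge
        length = np.linalg.norm(vor.vertices[a] - vor.vertices[b])
        g[p] += 0.5 * length
        g[q] += 0.5 * length
    return g

H_n = half_finite_edge_lengths(pts).sum()   # score sum over all centres in Q_n
print(H_n)
```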
We note that also in other contexts, one frequently encounters the situation where extending Wasserstein convergence rates to Kolmogorov convergence rates requires substantial additional work [@kolm].

# Examples {#sec:exa}

In this section, we present the persistent Betti numbers and the total edge length of Gibbs-Voronoi tessellations as specific examples for Theorems [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} and [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, respectively.

## Example for Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"}: Persistent Betti numbers {#sec:bet}

The persistent Betti numbers are invariants used in topological data analysis for summarizing the persistence diagram. They are defined for $r\leqslant s$ and a finite point pattern $\varphi$ by $$\beta^{r,s}_q = \mathsf{dim}\left(\mathsf{Im}(H_q( B_r(\varphi);\mathbb Z/2\mathbb Z)\to H_q(B_s(\varphi);\mathbb Z/2\mathbb Z) ) \right),$$ where $H_q(\cdot,\mathbb Z/2\mathbb Z)$ denotes the $q$th homology group with coefficients in $\mathbb Z/2\mathbb Z$. The use of persistent Betti numbers for statistics of point patterns was first suggested in [@robins] under the name rank functions. For a thorough introduction to topological data analysis, see e.g. [@edHar]. Most existing CLTs for functionals on point processes, e.g. [@yogesh; @gibbsCLT], assume the functional is written in terms of an exponentially stabilizing score function. This is not easily verified for $\beta^{r,s}_q$. However, [@shirai] derived a CLT for Poisson processes from [@penrose] by observing that weak stabilization is satisfied by persistent Betti numbers. The same observation leads to the following corollary of Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"}:

**Corollary 5**. *Let $\mathcal X$ be an infinite-volume Gibbs point process with translation-invariant PI satisfying **(A)**. Then for all $0\leq r\leq s$ $$|Q_n|^{-1/2}(\beta_q^{r,s}(\mathcal X\cap Q_n) - \mathbb E\beta_q^{r,s}(\mathcal X\cap Q_n) ) \to N(0,\sigma^2)$$ for some $\sigma^2 \geqslant 0$. If $\kappa>0$, then $\sigma^2>0$.*

We show the first statement of Corollary [Corollary 5](#cor:betti){reference-type="ref" reference="cor:betti"} below. The proof of positivity of the limiting variance is deferred to the end of Section [5.2](#sec:clt){reference-type="ref" reference="sec:clt"}. The proof relies on the fact that $\beta_q^{r,s}$ can be computed from the Čech complex via the Nerve Theorem, see [@edHar Chap. III.2] for details.

*Proof.* This follows from Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} if we can show Conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"}. Condition [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} follows directly from [@shirai Lem 5.3]. For Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"}, recall that $k+1$ points in a finite point pattern $\varphi$ form a $k$-simplex of filtration value $t$ in the Čech complex if and only if the smallest ball containing the $k+1$ points has radius $t$. Thus, adding a point $y$ to a point pattern $\varphi$ changes the $k$-simplices in the Čech complex with filtration value less than or equal to $s$ only by adding and removing simplices with all $k+1$ vertices contained in a ball of radius $2s$ around $y$. By [@shirai Lem.
2.11], this changes $\beta_q^{r,s}$ by at most the number of added and removed $q$- and $(q+1)$-simplices. This number is bounded by the total number of $(q+1)$- and $(q+2)$-tuples of points in $\varphi\cap B_{2s}(y)$, which is again bounded by $$\big(\varphi(B_{2s}(y)) + 1\big)^ {q+1} + \big(\varphi(B_{2s}(y)) + 1\big)^{q + 2} \leqslant 2 \big(\varphi(B_{2s}(y)) + 1\big)^{q+2}.$$ Since this polynomial bound is dominated by $c\exp\big(c\,\varphi(B_{2s}(y))\big)$ for a suitable constant $c>0$, this shows Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} with $R_{\mathsf M, 1} = 2s$. ◻

## Example for Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}: Total edge length of Gibbs-Voronoi tessellation {#sec:vor}

For a locally finite $\varphi\subseteq\mathbb R^2$ we define the *Voronoi cell* $C(x,\varphi)$ of $x \in \varphi$ as the set of all points in $\mathbb R^2$ whose Euclidean distance to $x$ is less than or equal to its distance to all other points of $\varphi$. The system $\{C(x,\varphi)\}_{x \in \varphi}$ is called the *Voronoi tessellation*. We define $g(x,\varphi)$ as one half of the total length of the finite edges of the cell $C(x,\varphi)$. Then, the total edge length of the Voronoi tessellation induced by $\varphi$ with centers in $Q_n$ is (up to boundary effects) given by $$\begin{aligned} H^n := \sum_{x \in \varphi\cap Q_n} g(x,\varphi). \end{aligned}$$ Examples of specific Gibbs processes for which the bounded moments condition [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"} and the exponential stabilization property [\[eq:s2\]](#eq:s2){reference-type="eqref" reference="eq:s2"} hold are provided in [@gibbs_limit Sections 1 and 5.2]. The variance lower bound [\[eqn:poimomass\]](#eqn:poimomass){reference-type="eqref" reference="eqn:poimomass"} is also verified in such examples, see [@gibbsCLT Theorem 2.3]. For the condition $g(x,\varphi\cup \{y\})\leqslant g(x,\varphi)$, we note that $C(x,\varphi\cup \{y\})\subseteq C(x,\varphi)$ are convex subsets of $\mathbb R^2$. Hence, it follows from Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} that the normalized score sum $H^n$ satisfies a CLT. This extends [@chen Theorem 4.13] to Gibbs processes satisfying **(A)**.

# Constructions of disagreement couplings {#sec:const}

In this section, we review and refine the disagreement coupling for spatial Gibbs processes with the goal of obtaining sharp bounds on the decay of spatial correlations. The basic idea of disagreement coupling is to realize Gibbs processes as a thinning of a suitable dominating Poisson process. Since we need several variants of the disagreement coupling differing in subtle technical aspects, we repeat the general definitions and properties even at the risk of creating some redundancy with [@dp]. First, Section [4.1](#ss:fw){reference-type="ref" reference="ss:fw"} introduces various ways of constructing finite-volume Gibbs processes. We show in Section [4.2](#ss:hd){reference-type="ref" reference="ss:hd"} how these can be used to analyze decorrelation properties. In Section [4.3](#ss:rfin){reference-type="ref" reference="ss:rfin"}, we investigate how the disagreement couplings are influenced by small perturbations of the Papangelou intensity. Finally, in Section [4.4](#ss:emb){reference-type="ref" reference="ss:emb"}, we demonstrate how applying the couplings from Section [4.1](#ss:fw){reference-type="ref" reference="ss:fw"} for growing window sizes leads to a coupling between finite and infinite-volume Gibbs processes by thinning a common dominating Poisson process.
Henceforth, we let $\mathcal P^{\mathsf{d}, *}$ be a homogeneous unit-intensity Poisson point process on $\mathbb R^d \times[0, \alpha_0]$ and write $\mathcal P^{\mathsf{d}}$ for the projection of $\mathcal P^{\mathsf{d}, *}$ onto the $\mathbb R^d$ coordinate. The key idea in disagreement coupling is to relate the spatial decorrelation of the Gibbs process $\mathcal X(Q, \psi)$ to percolation properties of the dominating process $\mathcal P^{\mathsf{d}}$ on a window $Q$. To make this precise, for Borel sets $A, B \subseteq\mathbb R^d$, we let $\{A \leftrightsquigarrow B\}$ denote the event that there exists a connected component of the Boolean model $B_{r_0/2}(\mathcal P^{\mathsf{d}}) := \mathcal P^{\mathsf{d}}+ B_{r_0/2}(o)$ intersecting both $B_{r_0/2}(A)$ and $B_{r_0/2}(B)$. One of the most fundamental results in continuum percolation is that this model exhibits a sharp phase transition, i.e., for $r < r_c$, the connection probabilities $\mathbb P(A \leftrightsquigarrow B)$ decay at exponential speed in the distance between $A$ and $B$.

**Proposition 6** (Sharp phase transition, [@raoufi_sub]). *Let $r < r_c$. Then, $\limsup_{n \uparrow\infty} n^{-1}\log \mathbb P\big(o \leftrightsquigarrow\partial B_n \big) < 0.$*

*Proof.* See e.g. [@raoufi_sub Theorem 1.4]. ◻

*Remark 7*. In Proposition [Proposition 6](#pr:spt){reference-type="ref" reference="pr:spt"} and the preceding construction, we focused on the most frequently considered continuum percolation model based on a fixed connectivity threshold. We believe that a refinement of our arguments could extend the idea of disagreement coupling to more general interaction ranges.

## Thinning algorithms in fixed windows {#ss:fw}

In this section, we review the construction from [@dp] of the finite-volume Gibbs process $\mathcal X(Q,\psi)$ on a bounded domain $Q$ with boundary conditions $\psi \subseteq Q^c$ as a thinning of $\mathcal P^{\mathsf{d}, *}$. In fact, we shall consider two different versions of the algorithm: (i) the standard Poisson embedding [@dp Sec. 5], which is the original version of the algorithm, and (ii) disagreement coupling [@dp Sec. 6], which provides a coupling between finite-volume Gibbs processes with different boundary conditions.

### Standard Poisson embedding {#sss:spe}

Starting from a compact window $Q \subseteq\mathbb R^d$ and a boundary configuration $\psi \subseteq Q^c$, we describe how to realize $\mathcal X(Q, \psi)$ as a thinning $T_{Q, \psi}(\mathcal P^{\mathsf{d}, *}) = \{x_1, x_2, \dots\}$ of the Poisson process $\mathcal P^{\mathsf{d}, *}$. For this, we need a fixed order on $Q$, which we may think of as a measurable injective map $\iota\colon Q \to \mathbb R$. The main idea of the Poisson embedding is to go through the points of $\mathcal P^{\mathsf{d}}\cap Q$ in the order induced by $\iota$ and decide for each whether to keep it in the thinned process. Suppose that $\psi'\subseteq\mathbb R^d$ are the points retained in the thinned process so far, including the boundary conditions $\psi$. The next point $x$ with $(x,u)\in \mathcal P^{\mathsf{d}, *}$ that is considered for retention in the thinned process is kept if $u \leqslant p(x,Q,\psi')$, where $$\begin{aligned} p(x, Q, \psi') := \kappa(x, \psi') \frac{Z_{Q_{(x, \infty)}}\big(\psi' \cup \{x\}\big)}{Z_{Q_{(x, \infty)}}(\psi')}\label{def:p}\end{aligned}$$ denotes the *retention probability*. Here, $$Q_{(x, \infty)} :=\{ y \in Q \colon y > x\} := \{ y \in Q \colon\iota(y) > \iota(x)\}$$ denotes the set of all $y \in Q$ succeeding $x$ in the $\iota$-order.
A crucial feature of the thinning is thus that the retention probability depends both on the location of the considered point and on which of the previously considered points have been retained. We now give the precise thinning algorithm: To define the first point $x_1 = x_1(\mathcal P^{\mathsf{d}, *})$ of the thinning, we consider the set $$\big\{x \colon(x, u) \in \mathcal P^{\mathsf{d}, *}\text{ and } u \leqslant p(x, Q, \psi) \big\}$$ of admissible Poisson points and choose $x_1$ to be its smallest element in the $\iota$-order. If there is no such element, then the construction terminates with $T_{Q, \psi}(\mathcal P^{\mathsf{d}, *}) = \varnothing$. Otherwise, we proceed inductively by defining $x_{k + 1}(\mathcal P^{\mathsf{d}, *})$ as the smallest element of the admissible set $$\big\{x \colon(x, u) \in \mathcal P^{\mathsf{d}, *}\text{ and } u \leqslant p(x, Q_{(x_k, \infty)}, \psi \cup \{x_1, \dots, x_k\})\big\}.$$ Again, the construction terminates once the admissible set is empty. We will henceforth rely on the key distributional result of [@dp Theorem 5.1], namely that the so-defined thinned process $T_{Q, \psi}(\mathcal P^{\mathsf{d}, *}) \subseteq\mathcal P^{\mathsf{d}}$ provides a sample of the Gibbs process $\mathcal X(Q, \psi)$.

**Proposition 8** (Correctness of the Poisson embedding). *Assume that the PI satisfies $\kappa\leqslant\alpha_0$. Let $Q \subseteq\mathbb R^d$ be bounded Borel and $\psi \subseteq\mathbb R^d \setminus Q$ be finite. Furthermore, let $\mathcal P^{\mathsf{d}, *}\subseteq\mathbb R^d \times[0, \alpha_0]$ be a unit-intensity Poisson point process. Then, $T_{Q, \psi}(\mathcal P^{\mathsf{d}, *})$ provides a sample of the Gibbs process $\mathcal X(Q, \psi)$.*

*Proof.* See [@dp Theorem 5.1]. ◻

A schematic code sketch of this thinning loop is given in Section [4.1.2](#sss:dc){reference-type="ref" reference="sss:dc"} below.

### Disagreement coupling {#sss:dc}

Disagreement coupling is an extension of the Poisson embedding which also constructs the Gibbs process by thinning a Poisson process in a compact window $Q$. Various versions of the algorithm can be defined. Our version below is tailored to the applications in this paper. The key feature is that it provides couplings between Gibbs processes with boundary conditions that differ in a certain specified Borel set $B$. The idea is to explore the window $Q$ stepwise. In each step, the Poisson embedding is used to construct the Gibbs process in a part of the window which has not yet been explored. The algorithm relies on a deterministic finite set $D \subseteq Q$ of *landmarks* with the coverage property that $Q\subseteq B_{r_0}(D)$. The steps of the algorithm are indexed by a pair of lexicographically ordered integers $(i,j)$, $i=1,\dots,I$, $j=1,\dots,N_i$. In step $(i,j)$ of the algorithm, we define three random objects depending on $\mathcal P^{\mathsf{d}}$, namely

1. ${W_{i,j}}$, the region that is still unexplored at the beginning of step $(i, j)$;

2. ${v_{i,j}} \in D\cup (\mathcal P^{\mathsf{d}}\cap Q)$, the center for the $(i, j)$th scanning step;

3. ${Z_{i,j}}$, the region that is actually explored in step $(i, j)$.

In particular, the unexplored region $W_{i, j}$ at step $(i, j)$ is always obtained by removing the explored region in step $(i, j - 1)$ from the unexplored region in step $(i, j - 1)$. That is, we initialize $W_{1, 1} :=Q$ and then set $$\label{eq:Wij} W_{i, j} := W_{i, j - 1} \setminus Z_{i, j - 1},$$ where we always interpret the index $(i,0)$ as $(i-1, N_{i-1})$.
Moreover, in the step $(i, j)$, we scan the region $$V_{i, j} := W_{i, j} \cap B_{r_0}(v_{i, j}).$$ Finally, the explored regions $Z_{i, j}$ are defined as $Z_{i, j} := V_{i, j}$ if $\mathcal P^{\mathsf{d}}\cap V_{i, j} = \varnothing$, and $$Z_{i,j}:=\{x \in V_{i,j}:\,x\leqslant\inf(\mathcal P^{\mathsf{d}}\cap V_{i, j})\}$$ if $\mathcal P^{\mathsf{d}}\cap V_{i, j} \ne \varnothing$. Thus, each $Z_{i,j}$ is constructed to contain at most one point from $\mathcal P^{\mathsf{d}}$, which we call $$x_{i,j} := \inf(\mathcal P^{\mathsf{d}}\cap V_{i, j})$$ if it exists, otherwise we set $x_{i,j}:=\varnothing$. Thus, each cluster is explored (at most) one point at a time. The construction of the scanning centers $v_{i,j}$ is more involved and is given recursively below. Before that, we first provide the reader with further intuition, which is also supported by the illustration in Figure [\[fig:2\]](#fig:2){reference-type="ref" reference="fig:2"}. The explored regions $Z_{i,j}$ form a partition of $Q$ and satisfy $Z_{i,j} \subseteq V_{i,j} \subseteq V_i := \bigcup_{j=1}^{N_i} V_{i,j}$. The $V_{i,j}$ have the important property that $V_{i,j}$ is deterministic given $\mathcal P^{\mathsf{d}}\cap W_{i,j}^c$. Moreover, the construction will be such that for each $i$, $V_i \cap \mathcal P^{\mathsf{d}}$ is either empty or corresponds to a cluster of $B_{r_0/2}(\mathcal P^{\mathsf{d}}\cap Q)$, so we may think of the algorithm as exploring one cluster at a time.

We now give the precise construction of the scanning centers $v_{i, j}$. We first describe the first step and let $v_{1, 1}$ be the smallest point in $D$ such that $B_{r_0}(v_{1,1}) \cap W_{1,1} \cap B_{r_0}(B) \ne \varnothing$ if $W_{1,1}\cap B_{r_0}(B)\ne \varnothing$ and otherwise just the smallest point in $D$ such that $B_{r_0}(v_{1,1}) \cap W_{1,1} \ne \varnothing$. Next, we explain how to construct the sets recursively and assume that the $(i, j)$th construction step has been completed. Then, $W_{i,j+1}$ is constructed by [\[eq:Wij\]](#eq:Wij){reference-type="eqref" reference="eq:Wij"}, and we let $$k_{i,j+1}:=\inf \{k \in \{1,\dots,j\}: W_{i,j + 1} \cap B_{r_0} (x_{i,k})\ne \varnothing\}$$ be the first index $k$ where $B_{r_0} (x_{i,k})$ still intersects some unexplored part (with the convention $B_{r_0}(\varnothing)=\varnothing$). If $k_{i,j+1}<\infty$, we define $v_{i, j+ 1} := x_{i,k_{i,j+1} }$. Otherwise, if $k_{i, j + 1}= \infty$, the current cluster has been completely explored, so we set $N_i:=j$ and start a new one. In this case, we define $W_{i+1,1}:=W_{i,j+1}$ and $v_{i+1,1}$ as the smallest element of $D$ such that $B_{r_0}(v_{i+1,1})\cap W_{i+1, 1} \cap B_{r_0}(B) \ne \varnothing$ (or, if $W_{i +1, 1} \cap B_{r_0}(B) =\varnothing$, such that $B_{r_0}(v_{i+1,1})\cap W_{i+1 , 1}\ne \varnothing$). The algorithm terminates as soon as $W_{i,j}=\varnothing$ for some $i \geqslant 1$, $j \geqslant 1$. To see that this happens after a finite number of steps (by which we mean that $\sum_{i \geqslant 1} N_i <\infty$ a.s.), note that all $v_{i,1}$ with $N_i =1$ are pairwise distinct landmarks in $D$. Since $|D| < \infty$ and there are almost surely only finitely many $i \geqslant 1$ with $N_i >1$ (since $\mathcal P^{\mathsf{d}}(Q) <\infty$ a.s.), this shows that the algorithm terminates after a finite number of steps.

We now explain how the Gibbs process is constructed. In step $(1,1)$ we order each of $V_{1,1}$ and $W_{1,1}\setminus V_{1,1}$ according to $\iota$ and obtain a total order by declaring $x<y$ if $x\in V_{1,1}$ and $y\in W_{1,1}\setminus V_{1,1}$.
With this order, we apply the Poisson embedding on $W_{1,1}$ and let $$\begin{aligned} \xi_{1,1}:= Z_{1,1}\cap T_{W_{1,1},\psi }(\mathcal P^{\mathsf{d}}\cap W_{1,1}).\end{aligned}$$ Assume that before step $(i,j)$, we have constructed point processes $\xi_{i',j'}$ on $Z_{i',j'}$ for all $(i',j')$ preceding $(i,j)$ and set $\mathcal X_{i,j-1}:=\xi_{1,1}\cup \cdots \cup \xi_{i,j-1}$. We define a new total order on $Q$ by ordering each of $V_{i,j}$ and $W_{i,j}\setminus V_{i,j}$ according to $\iota$, putting $x<y$ if $x\in V_{i,j}$ and $y\in W_{i,j}\setminus V_{i,j}$. With this order, we let $$\begin{aligned} \xi_{i,j}:=Z_{i,j}\cap T_{W_{i,j},\psi \cup \mathcal X_{i,j-1}}(\mathcal P^{\mathsf{d}}\cap W_{i,j}). \label{def:xis} \end{aligned}$$ We denote the resulting process by $$T^{\mathsf{dc}}_{Q,B,\psi}(\mathcal P^{\mathsf{d}, *}):= \cup_{i \geqslant 1} \cup_{j=1}^{N_i} \xi_{i,j}.$$ The proof that $T^{\mathsf{dc}}_{Q,B,\psi}(\mathcal P^{\mathsf{d}, *})$ indeed has the correct $\mathcal X(Q, \psi)$-distribution follows by exactly the same arguments as in the proof of [@dp Theorem 6.3], noting that the construction has the following important properties:

(i) In each step $(i,j)$, the new order is deterministic when conditioned on $\mathcal P^{\mathsf{d}}\cap W_{i,j}^c$ since $V_{i,j}$ is measurable with respect to the stopped $\sigma$-algebra $\sigma(\mathcal P^{\mathsf{d}}\cap W_{i,j}^c)$.

(ii) The mappings $\mu \mapsto W_{i,j}(\mu)^c$ are stopping sets in the sense that $\{ W_{i,j}(\mu)^c\subseteq S\}=\{ W_{i,j}(\mu\cap S)^c\subseteq S\}$ for all compact $S \subseteq\mathbb R^d$, see [@stoppingset].

For future reference, we state the correctness and main property of $T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *})$ in Proposition [Proposition 9](#pr:dc_prop){reference-type="ref" reference="pr:dc_prop"} below.

**Proposition 9** (Correctness of disagreement coupling). *Suppose the PI is bounded by $\alpha_0$. Let $Q \subseteq\mathbb R^d$ be bounded Borel, $B\subseteq Q^c$, and let $\psi \subseteq Q^c$ be finite. Furthermore, let $\mathcal P^{\mathsf{d}, *}\subseteq\mathbb R^d \times[0, \alpha_0]$ be a unit-intensity Poisson point process. Then, $T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *})$ provides a sample of the Gibbs process $\mathcal X(Q, \psi)$.*

*If $\psi,\psi'\subseteq\mathbb R^d \setminus Q$ are finite sets differing only on $B$, then $T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *})$ and $T^{\mathsf{dc}}_{Q,B, \psi'}(\mathcal P^{\mathsf{d}, *})$ differ only on the clusters of $\mathcal P^{\mathsf{d}}\cap Q$ intersecting $B_{r_0}(B)$.*

The disagreement coupling construction presented above differs from the one in [@dp Section 6] in four aspects. First, the algorithm in [@dp] only considers the case $B=Q^c$. Second, that algorithm explores the clusters in a different order. Third, that algorithm works on clusters of the thinned Gibbs process rather than the full $\mathcal P^{\mathsf{d}}$. And fourth, that algorithm stops once all clusters intersecting $\partial Q$ have been explored.

We conclude by demonstrating the strength of disagreement couplings of Gibbs processes with different boundary configurations. For compact sets $P \subseteq Q \subseteq\mathbb R^d$ and boundary configurations $\psi \subseteq\mathbb R^d \setminus Q$, we let $$d_{Q,B, \psi}(P) := \mathbb P\big(T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *}) \cap P \ne T^{\mathsf{dc}}_{Q,B, \varnothing}(\mathcal P^{\mathsf{d}, *}) \cap P\big)$$ denote the probability that there is a disagreement inside $P$.
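The iterative structure of the Poisson embedding that both constructions rely on can be summarized schematically as follows. This is a non-authoritative sketch in Python: `retention_prob` is a placeholder for the quantity $p$ from [\[def:p\]](#def:p){reference-type="eqref" reference="def:p"}, whose partition-function ratio is not available in closed form and would have to be approximated in any actual implementation, and point locations are kept as plain tuples.

```python
def poisson_embedding(marked_points, iota, retention_prob, boundary):
    """Schematic thinning loop of the standard Poisson embedding.

    marked_points : iterable of pairs (x, u) with x in Q and u in [0, alpha0]
    iota          : callable inducing the total order on Q
    retention_prob: placeholder for p(x, ., kept-so-far) from (def:p); the
                    exact quantity involves a ratio of partition functions
    boundary      : the boundary configuration psi
    """
    kept = list(boundary)
    # visit the dominating Poisson points in increasing iota-order
    for x, u in sorted(marked_points, key=lambda point: iota(point[0])):
        # a point is retained if its mark does not exceed the retention
        # probability, which depends on all points retained so far
        if u <= retention_prob(x, kept):
            kept.append(x)
    return kept[len(boundary):]   # the thinned process, without the boundary points
```

With the exact retention probability, this loop reproduces the law of $\mathcal X(Q,\psi)$ by Proposition 8; the disagreement coupling applies the same loop step by step on the regions $W_{i,j}$, with the order updated as described above.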
Then, the following key auxiliary result bounds $d_{Q,B, \psi}(P)$ in terms of probabilities associated with percolation events.

**Lemma 10** (Percolation). *Let $P\subseteq Q \subseteq\mathbb R^d$ be compact. Then, $\sup_{\psi \in \mathbf N_{Q^c}}d_{Q,B, \psi}(P) \leqslant\mathbb P(P \leftrightsquigarrow{B_{r_0}(B)})$.*

*Proof.* We need to show that if $P \not \leftrightsquigarrow{B_{r_0}(B)}$, then $T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *}) \cap P = T^{\mathsf{dc}}_{Q,B, \varnothing}(\mathcal P^{\mathsf{d}, *}) \cap P$. To achieve this goal, we note that the disagreement coupling first considers all points that are connected to $B$ and then all other points as well as some that are only connected to $B_{r_0}(B)$. Let $i \geqslant 1$ be the last index such that $V_{i}$ is connected to $B$. Then, there are no points of $\psi \cup (\mathcal P^{\mathsf{d}}\cap (V_{1}\cup \cdots \cup V_{i}))$ in $B_{r_0}(W_{i+1,1})$. Hence, $$p(x, W, \mu \cup \varphi) =p(x, W, \mu),\quad x \in W, \,W \subseteq W_{i+1,1},\,\mu \in \mathbf N_{W^c},\varphi\subseteq\psi \cup (\mathcal P^{\mathsf{d}}\cap (V_{1}\cup \cdots \cup V_{i})).$$ We therefore conclude the proof by observing that $P \not \leftrightsquigarrow{B_{r_0}(B)}$ implies that $P \subseteq W_{i+1,1}$. In particular, the acceptance of points not connected to ${B_{r_0}(B)}$ is independent of $\psi$. ◻

## Homogeneity and decorrelation via disagreement coupling {#ss:hd}

In this section, we consider the Gibbs process $\mathcal X_n:=\mathcal X(Q_n) := \mathcal X(Q_n, \varnothing)$ on the window $Q_n$ with empty boundary conditions. We highlight how disagreement coupling can be used to establish homogeneity and decorrelation of subcritical Gibbs point processes. The homogeneity and decorrelation are captured through the total variation distance and the $\alpha$-mixing coefficient, respectively. More precisely, we write $$\begin{aligned} \label{eq:dtv} d_{\mathsf{TV}}(X, Y) := \sup_{f\colon S \to [0, 1]}\big|\mathbb E[f(X)] - \mathbb E[f(Y)]\big|,\tag{{\bf DTV}}\end{aligned}$$ for the *total-variation distance* between random variables $X, Y$ with values in a common measurable space $S$, where the supremum is taken over all measurable functions $f\colon S \to [0, 1]$. Moreover, we define the *$\alpha$-mixing coefficient* of two $\sigma$-algebras $\mathcal A$ and $\mathcal B$ by $$\begin{aligned} \alpha(\mathcal A, \mathcal B) := \sup_{A\in \mathcal A, B\in \mathcal B} \big|\mathbb P(A \cap B) - \mathbb P(A) \mathbb P(B)\big|.\tag{$\alpha$-{\bf MIX}}\end{aligned}$$

**Proposition 11** (Exponential decay of the total variation distance). *Suppose that the Gibbs process $\mathcal X$ satisfies condition [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"}. Then, there are constants $c_1,c_2>0$ with the following property. For any bounded Borel set $Q \subseteq\mathbb R^d$, any compact $P \subseteq Q$ and any $B\subseteq Q^c$, it holds that $$\sup_{\substack{\psi, \psi' \in \mathbf N_{Q^c}\\ \psi\cap B^c=\psi'\cap B^c}} d_{\mathsf{TV}}\big(\mathcal X(Q, \psi)\cap P, \mathcal X(Q,\psi') \cap P\big) \leqslant c_1 |P| e^{-c_2 \mathsf{dist}(P, B)}.$$*

*Proof.* Let $\mathcal P^{\mathsf{d}, *}$ be a unit intensity Poisson process on $\mathbb R^d \times \mathbb R_+$. We realize $\mathcal X(Q,\psi)$ and $\mathcal X(Q, \psi')$ through the disagreement couplings $T^{\mathsf{dc}}_{Q,B, \psi}(\mathcal P^{\mathsf{d}, *})$ and $T^{\mathsf{dc}}_{Q,B, \psi'}(\mathcal P^{\mathsf{d}, *})$, respectively.
Then, by Lemma [Lemma 10](#lem:dperc){reference-type="ref" reference="lem:dperc"}, $$d_{\mathsf{TV}}\big(\mathcal X(Q, \psi) \cap P, \mathcal X(Q, \psi') \cap P\big) \leqslant d_{Q,B, \psi}(P)+d_{Q,B, \psi'}(P) \leqslant 2\mathbb P(P \leftrightsquigarrow{B_{r_0}(B)}).$$ Since $r_0 < r_c$, Proposition [Proposition 6](#pr:spt){reference-type="ref" reference="pr:spt"} implies that the diameters of the $r_0$-connected components have exponential tails, thereby proving the asserted exponential decay. ◻

From Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} we can derive useful homogeneity and decorrelation bounds. We will use the notation $Q_s(x_0)=Q_s + x_0$ for $x_0\in \mathbb R^d$. We note that also [@benes Theorem 2] establishes a decorrelation property via disagreement coupling, where in their case the spatial dependence is measured via the moment correlation structure. However, in our setting, we need the $\alpha$-mixing for our further arguments, which is why we include the short proof below. Yet another form of dependence is a variant of $\beta$-mixing called *exponentially decaying dependence* in [@CX22]. Although we do not need it in the following, we are confident that disagreement coupling is powerful enough to establish this form of spatial decorrelation.

**Proposition 12** (Spatial decorrelation and local homogeneity). *Suppose that the Gibbs process $\mathcal X$ satisfies condition [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"}.*

- *Let $r > 0$ and $P\subseteq Q_n$ be Borel. Then $$\alpha\big(\sigma\big(\mathcal X_n\cap P\big),\sigma\big(\mathcal X_n\setminus B_r(P)\big)\big)\leqslant c_1 |P| e^{-c_2r},$$ where $c_1,c_2>0$ are the constants from Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"}.*

- *Let the PI be translation-invariant. Then $$\limsup_{n \uparrow\infty}n^{-1/2}\sup_{x_0 \in Q_{n - 2\sqrt n}}\log\big(d_{\mathsf{TV}}\big((\mathcal X_n - x_0)\cap Q_{\sqrt n}, \mathcal X_n \cap Q_{\sqrt n}\big) \big)< 0.$$*

*Proof.* We start with the proof of (i). Let $\mathcal X' := \mathcal X(Q_n, \varnothing) \cap P$, $\mathcal X'' := \mathcal X(Q_n, \varnothing) \setminus B_r(P)$ and $\mathcal Y:= \mathcal X(B_r(P), \mathcal X'') \cap P$. Then, for any measurable sets of configurations $A, B$, we have $$\begin{aligned} \mathbb P(\mathcal X' \in A, \mathcal X'' \in B) &= \mathbb E\big[\mathbb P(\mathcal X' \in A\,|\,\mathcal X'') \mathbbmss{1}\{\mathcal X'' \in B\}\big]\\ &= \mathbb P(\mathcal Y\in A) \mathbb P(\mathcal X''\in B) + \mathbb E\Big[\big(\mathbb P(\mathcal X' \in A\,|\,\mathcal X'') - \mathbb P(\mathcal Y\in A)\big) \mathbbmss{1}\{\mathcal X'' \in B\}\Big]\\ &\leqslant\mathbb P(\mathcal Y\in A) \mathbb P(\mathcal X''\in B) + \sup_{\psi \in \mathbf N_{Q_n\setminus B_r(P)}} d_{\mathsf{TV}}\big(\mathcal X(B_r(P), \mathcal X'')\cap P,\, \mathcal X(B_r(P), \psi) \cap P\big).\end{aligned}$$ Noting that $\mathcal Y$ and $\mathcal X'$ have the same distribution, we conclude the proof of part (i) by applying Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} with $Q= B_r(P)$ and $B= Q_n \setminus B_r(P)$. For the proof of (ii), note that $Q_{2\sqrt n} \subseteq Q_{n }-x_0$ holds for every $x_0 \in Q_{n - 2\sqrt n}$.
We deduce that $$\begin{aligned} d_{\mathsf{TV}}\big((\mathcal X_n - x_0)\cap Q_{\sqrt n}, \mathcal X_n \cap Q_{\sqrt n}\big) \leqslant 2\sup_{\psi \in \mathbf N_{Q_{2\sqrt n}^c}} d_{\mathsf{TV}}\big(\mathcal X(Q_{2\sqrt n}, \psi)\cap Q_{\sqrt n}, \mathcal X(Q_{2\sqrt n},\varnothing) \cap Q_{\sqrt n}\big),\end{aligned}$$ where we used that, by translation invariance of the PI and the DLR equations, the distribution of $(\mathcal X_n-x_0)\cap Q_{2\sqrt n}$ is that of $\mathcal X(Q_{2\sqrt n },\psi)$ when conditioned on $(\mathcal X_n-x_0)\cap Q_{2\sqrt n}^c=\psi$. Hence, the claim follows from Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"}. ◻

## Disagreement coupling for perturbed Papangelou intensities {#ss:rfin}

In Section [4.1](#ss:fw){reference-type="ref" reference="ss:fw"}, we introduced disagreement coupling on a fixed sampling window as a simple iterative algorithm for constructing Gibbs point processes. However, one potential issue is that small local perturbations of the Papangelou intensity might change the sampled processes substantially. In this section, we show that with high probability the changes remain localized in a small neighborhood of the perturbation. For the application of Stein's method in Section [6](#sec:gibbs){reference-type="ref" reference="sec:gibbs"}, it will be convenient to consider disagreement coupling with respect to two different PIs $\kappa, \kappa'$ both satisfying Assumption [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"}. In the following proposition, let $Q \subseteq\mathbb R^d$ be a bounded Borel set, let $\psi$, $\psi' \subseteq\mathbb R^d\setminus Q$ be locally finite and let $T_{Q,\psi}(\mathcal P^{\mathsf{d}, *})$ and $T'_{Q,\psi'}(\mathcal P^{\mathsf{d}, *})$ be the embedding operators with respect to $\kappa$ and $\kappa'$, respectively, and assume that $T_{Q,\psi}(\mathcal P^{\mathsf{d}, *})$ and $T'_{Q,\psi'}(\mathcal P^{\mathsf{d}, *})$ both proceed by a common total order $\leqslant$ on $\mathbb R^d$ that is induced by the bimeasurable map $\iota$.

**Proposition 13** (Disagreement probability and total variation distance). *Suppose that $\kappa, \kappa'$ are PIs with $0 \leqslant\kappa, \kappa' \leqslant\alpha_0$. Let $\mathcal X:=T_{Q,\psi}(\mathcal P^{\mathsf{d}, *})$ and $\mathcal X':=T'_{Q,\psi'}(\mathcal P^{\mathsf{d}, *})$. Then, $$\mathbb P(\inf \mathcal X_y\ne \inf \mathcal X'_y)\leqslant(2+\alpha_0 |Q_y|)\,d_{\mathsf{TV}}(\mathcal X_y,\mathcal X'_y),$$ where $\mathcal X_y :=\mathcal X\cap (-\infty,y)$, $Q_y:=Q \cap (-\infty, y)$ and the infimum is taken with respect to the order given by $\iota$.*

*Proof.* To ease notation, for $x<y$ we let $$q(x) :=\mathbb P(\mathcal X_y \cap(-\infty,x)=\varnothing)= \mathbb P\Big(U_i > p(X_i; \psi)\text{ for all }(X_i, U_i) \in \mathcal P^{\mathsf{d}, *}\cap\big( Q_x\times [0,\alpha_0]\big)\Big)$$ denote the probability that all points of $\mathcal P^{\mathsf{d}, *}$ before $x$ are rejected in the thinning. Similarly, we define $q'(x)$ and $q^\vee(x)$ by replacing $p(X_i;\psi)$ by $p'(X_i;\psi')$ and $p(X_i;\psi)\vee p'(X_i;\psi')$, respectively.
By construction of the Poisson embedding, and the multivariate Mecke equation [@LP Theorem 4.4], we obtain that the probability that the first points of $\mathcal X$ and $\mathcal X'$ disagree on $Q_y$ is given by $$\begin{aligned} \mathbb P(\inf \mathcal X_y\ne \inf \mathcal X'_y)&= \int_{Q_y} |p(x,\psi) - p'(x,\psi')| q^\vee(x)\,\mathrm dx \nonumber\\ &\leqslant\int_{Q_y} |p(x,\psi) - p'(x,\psi')| q(x)\,\mathrm dx \nonumber\\ &\leqslant\int_{Q_y} |p(x,\psi)q(x) - p'(x,\psi')q'(x)|\,\mathrm dx+\int_{Q_y}p'(x,\psi')|q'(x) - q(x)|\,\mathrm dx.\label{dtvbou}\end{aligned}$$ Hence, since $\mathbb P(\inf \mathcal X_y \in B) =\int_{Q_y} p(x,\psi)q(x) \mathds 1\{x \in B\} \, {\rm d}x$, by the definition of total variation distance, $$\int_{Q_y} |p(x,\psi)q(x) - p'(x,\psi')q'(x)|\,\mathrm dx\leqslant 2 \sup_{B \in \mathcal B^d} \big| \mathbb P(\inf \mathcal X_y \in B) - \mathbb P(\inf \mathcal X_y' \in B) \big| \leqslant 2d_{\mathsf{TV}}(\mathcal X_y,\mathcal X_y'),$$ where the first inequality can be seen by letting $$B:= \begin{cases} \{x \in Q_y:\,p(x,\psi)q(x)\geqslant p'(x,\psi')q'(x)\},\quad \text{if } \quad \mathbb P(\inf \mathcal X\in Q_y) \geqslant\mathbb P(\inf \mathcal X' \in Q_y),\\ \{x \in Q_y:\,p(x,\psi)q(x)< p'(x,\psi')q'(x)\},\quad\text{if } \quad \mathbb P(\inf \mathcal X\in Q_y) < \mathbb P(\inf \mathcal X' \in Q_y). \end{cases}$$ To bound the second integral in [\[dtvbou\]](#dtvbou){reference-type="eqref" reference="dtvbou"} we use that $p'(x,\psi')\leqslant\alpha_0$ and obtain $$\int_{Q_y}p'(x,\psi')|q'(x) - q(x)|\,\mathrm dx\leqslant\alpha_0 |Q_y| \sup_{B \in \mathcal B^d}|\mathbb P(\mathcal X_y \cap B=\varnothing) - \mathbb P(\mathcal X'_y \cap B=\varnothing)| \leqslant\alpha_0 |Q_y| d_{\mathsf{TV}}(\mathcal X_y, \mathcal X_y'),$$ which finishes the proof. ◻ We will apply this bound in the following situation. Let $Q$ be a Borel set, let $\kappa, \kappa'$ be PIs satisfying the assumption [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"} and let $\psi, \psi'\subseteq Q^c$ be locally finite. Let $T_{Q,Q^c,\psi}^{{\sf{dc}}}$ denote the operator for the disagreement coupling with PI $\kappa$ and $T_{Q,Q^c,\psi'}^{'\sf{dc}}$ with PI $\kappa'$, respectively. **Proposition 14**. *Suppose that $\kappa, \kappa'$ are PIs satisfying condition [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"}. Then, there is a constant $C>0$ with the following property. Assume that $s > 0$ and $P\subseteq Q'\subseteq Q$ are such that $\mathsf{dist}(P,(Q')^c)\geqslant 3s$ and $\kappa(x,\mu)=\kappa'(x,\mu)$ for all $x \in Q'$, $\mu \in \mathbf N$. Then, $$\begin{aligned} \mathbb P\big(T^{{\sf{dc}}}_{Q,Q^c,\psi}(\mathcal P^{\mathsf{d}, *})\cap P\ne T^{'\sf{dc}}_{Q,Q^c,\psi'}(\mathcal P^{\mathsf{d}, *}) \cap P\big) \leqslant C |B_{2s}(P)| e^{-c_2s},\label{eq:XXbou} \end{aligned}$$ where $C>0$ does not depend on $P$ and $c_2>0$ is the constant from Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"}.* *Proof.* Let $\mathcal X:=T^{{\sf{dc}}}_{Q,Q^c,\psi}(\mathcal P^{\mathsf{d}, *})$ and $\mathcal X':=T^{'\sf{dc}}_{Q,Q^c,\psi'}(\mathcal P^{\mathsf{d}, *})$. We may choose the landmark set $D \subseteq Q$ as any maximal set with the property that no two elements are closer than distance $r_0$. In particular, $\sup_{x \in Q}|D \cap B_{r_0}(x)| \leqslant c_{\mathsf K}$ for some constant $c_{\mathsf K} > 0$. 
The idea is to bound the disagreement probability of $\mathcal X\cap P$ and $\mathcal X' \cap P$ by distinguishing the individual $\mathcal P^{\mathsf{d}}$ clusters. In order to restrict to clusters whose first point is in $B_s(P)$, we introduce $A_s$ as the event that $P$ is not connected to $(B_s(P))^c$ via $B_{r_0}(\mathcal P^{\mathsf{d}})$. Now, we recall the definitions of $V_{i, j}$ and $\xi_{i,j}$ from [\[def:xis\]](#def:xis){reference-type="eqref" reference="def:xis"} and let $\mathcal X_{m,n}:=\xi_{m,1} \cup \dots \cup \xi_{m,n},\, n \leqslant N_{m}$. Then, we let $V_{m_1, 1}, V_{m_2, 1}, \dots, V_{m_J, 1}$ denote the subsequence of sets from the algorithm in Section [4.1.2](#sss:dc){reference-type="ref" reference="sss:dc"} that intersect $B_s(P)$. Now, we let $A_{2s}$ be the event that $B_s(P)$ is not connected to $B_{2s}(P)^c$ via $B_{r_0/2}(\mathcal P^{\mathsf{d}})$. By construction of the exploration algorithm, this allows us to assume that the sets $V_{m_1,n},\,n \leqslant N_{m_1},$ are contained in $B_{2s}(P)$. Moreover, Proposition [Proposition 6](#pr:spt){reference-type="ref" reference="pr:spt"} implies that $$\mathbb P(A_s^c)=\mathbb P(P \leftrightsquigarrow B_s(P)^c)\leqslant c_1|P| e^{-c_2s},\qquad \mathbb P(A_{2s}^c)=\mathbb P(P \leftrightsquigarrow B_{2s}(P)^c)\leqslant c_1|B_s(P)| e^{-c_2s}.$$ Next, we set $\mathcal X_i:=\mathcal X\cap V_i$, $\mathcal X'_i:=\mathcal X' \cap V_i$. Then, $$\begin{aligned} &\mathbb P(\{\mathcal X\cap P\ne \mathcal X' \cap P\} \cap A_s \cap A_{2s})\\ &=\sum_{j \geqslant 1} \mathbb P(\{(\mathcal X_{m_1},\dots,\mathcal X_{m_{j-1}}) = (\mathcal X'_{m_1},\dots,\mathcal X'_{m_{j-1}}) , \mathcal X_{m_j}\ne \mathcal X'_{m_j}, J \geqslant j\}\cap A_{2s})\\ &= \sum_{j\geqslant 1}\mathbb E\left[\mathds 1\{(\mathcal X_{m_1},\dots,\mathcal X_{m_{j-1}}) = (\mathcal X'_{m_1},\dots,\mathcal X'_{m_{j-1}}),J\geqslant j\}\mathbb P(\{\mathcal X_{m_{j}}\ne \mathcal X'_{m_{j}}\} \cap A_{2s}\mid \mathcal P^{\mathsf{d}}\cap W_{m_j,1}^c)\right]. \end{aligned}$$ We now claim that $$\begin{aligned} \label{eq:pr14} \mathbb P(\{\mathcal X_{m_{j}}\ne \mathcal X'_{m_{j}}\} \cap A_{2s}\mid \mathcal P^{\mathsf{d}}\cap W_{m_j,1}^c)\leqslant c'e^{-c_2r} \mathbb E\Big[\sum_{n\leqslant N_{m_j}}|V_{m_j,n}|\mathds 1\{V_{m_j,n}\subseteq B_{2s}(P)\}\Big], \end{aligned}$$ for some $c' > 0$. We now first explain how to conclude the proof of the Proposition provided [\[eq:pr14\]](#eq:pr14){reference-type="eqref" reference="eq:pr14"}. Afterward we establish [\[eq:pr14\]](#eq:pr14){reference-type="eqref" reference="eq:pr14"}. 
Now, [\[eq:pr14\]](#eq:pr14){reference-type="eqref" reference="eq:pr14"} gives that $$\begin{aligned} &\mathbb P(\{\mathcal X\cap P\ne \mathcal X' \cap P\} \cap A_s\cap A_{2s}) \leqslant c' e^{-c_2r} \mathbb E\Big[\sum_{j\leqslant J} \sum_{n\leqslant N_{m_j}}|V_{m_j,n}|\mathds 1\{ V_{m_j,n}\subseteq B_{2s}(P)\}\Big].\label{bou:XneqX} \end{aligned}$$ Next we bound the double sum in [\[bou:XneqX\]](#bou:XneqX){reference-type="eqref" reference="bou:XneqX"} by $$\begin{aligned} \sum_{j\leqslant J} \sum_{n\leqslant N_{m_j}} |V_{m_j,n}|\mathds 1\{V_{m_j,n}\subseteq B_{2s}(P)\} &\leqslant\sum_{j\leqslant J} \sum_{n\leqslant N_{m_j}} |V_{m_j,n}| \mathds 1\{\mathcal P^{\mathsf{d}}\cap V_{m_j,n}=\varnothing, V_{m_j,n}\subseteq B_{2s}(P)\}\label{bouXX1}\\ &\quad + \sum_{j\leqslant J} \sum_{n\leqslant N_{m_j}} |V_{m_j,n}| \mathds 1\{\mathcal P^{\mathsf{d}}\cap V_{m_j,n}\ne \varnothing, V_{m_j,n}\subseteq B_{2s}(P)\}.\label{bouXX2} \end{aligned}$$ Note that if $\mathcal P^{\mathsf{d}}\cap V_{m_j,n}=\varnothing$, any point $x \in Q$ is covered by at most $c_{\mathsf K}$ of the sets $V_{m_j,n}=Z_{m_1,n}$. Since they are all contained in $B_{2s}(P)$, the expected value of [\[bouXX1\]](#bouXX1){reference-type="eqref" reference="bouXX1"} is bounded by $c_{\mathsf K}|B_{2s}(P)|$. For the expected value of [\[bouXX2\]](#bouXX2){reference-type="eqref" reference="bouXX2"} note that every point in $\mathcal P^{\mathsf{d}}$ can only be the first point in one of the sets $V_{m_j,n}$ and $|V_{m_j,n}|\leq \kappa_d r_0^d$, where $\kappa_d$ denotes the volume of the unit ball in $\mathbb R^d$. Hence, we conclude that [\[bouXX2\]](#bouXX2){reference-type="eqref" reference="bouXX2"} is bounded by $$\kappa_d r_0^d \cdot \mathbb E[\mathcal P^{\mathsf{d}}(B_{2s}(P))]\leqslant\kappa_d r_0^d\cdot \alpha_0 |B_{2s}(P)|.$$ These estimates show that for some constant $C'>0$, [\[bou:XneqX\]](#bou:XneqX){reference-type="eqref" reference="bou:XneqX"} is bounded by $$\begin{aligned} c'(c_{\mathsf K}+\alpha_0\kappa_d r_0^d)|B_{2s}(P)| e^{-c_2r} \leqslant C' |B_{2s}(P)| e^{-c_2r}. \end{aligned}$$ It remains to prove [\[eq:pr14\]](#eq:pr14){reference-type="eqref" reference="eq:pr14"}. To that end, note that given $\mathcal P^{\mathsf{d}}\cap W_{m_j,1}^c$, the processes $\mathcal X_{m_{j}}$ and $\mathcal X'_{m_{j}}$ are restrictions of Gibbs processes on $W_{m_j,1}$ to $V_{m_j}$ with boundary condition $\mathcal X_1\cup \cdots\cup \mathcal X_{m_{j}-1} \cup \psi$ and $\mathcal X'_1\cup \cdots\cup \mathcal X'_{m_{j}-1} \cup\psi'$, respectively. Hence, we are reduced to the case $j = 1$. Here, we obtain $$\begin{aligned} &\mathbb P\Big(\Big\{\mathcal X\cap \bigcup_{j\leqslant N_{m_1}}V_{m_1,j}\ne \mathcal X'\cap \bigcup_{j\leqslant N_{m_1}} V_{m_1,j} \Big\}\cap A_{2s}\Big)\nonumber\\ &\, \leqslant\sum_{n \geqslant 1} \mathbb P(\mathcal X_{m_1,n-1}=\mathcal X'_{m_1,n-1}, \xi_{m_1,n}\ne \xi'_{m_1,n},N_{m_1}\geqslant n, V_{m_1,n}\subseteq B_{2s}(P))\nonumber\\ &\,=\mathbb E\big[\sum_{n \leqslant N_{m_1}}\mathds 1\{\mathcal X_{m_1,n-1}=\mathcal X'_{m_1,n-1}, V_{m_1,n}\subseteq B_{2s}(P)\}\, \mathbb P( \xi_{m_1,n}\ne \xi'_{m_1,n} \mid \mathcal P^{\mathsf{d}}\cap W_{m_1,n}^c)\big],\label{eq:Y1n} \end{aligned}$$ where $\{\mathcal X_{m_1,0}=\mathcal X'_{m_1,0}\}$ is interpreted in the obvious way. Here we have used that both the set $V_{m_1, n}$ and the event $\{N_{m_1}\geqslant n\}$ are measurable with respect to the stopped $\sigma$-algebra $\sigma(\mathcal P^{\mathsf{d}}\cap W_{m_1,n}^c)$. 
Given $\mathcal P^{\mathsf{d}}\cap W_{m_1,n}^c$, we have that $\xi_{m_1,n}$ is the restriction of a Gibbs process $\tilde{\mathcal X}_{m_1,n}$ on $W_{m_1,n}$ with PI $\tilde \kappa$ given by $$\tilde \kappa(x,\mu):=\kappa(x,\mu \cup \xi_{m_1,1} \cup \dots \cup \xi_{m_1,n-1} \cup \psi),\quad x \in W_{m_1,n},\,\mu \in \mathbf N_{W_{m_1,n}},$$ and analogously for $\xi'_{m_1,n}$. Recalling that $\xi_{m_1, n}$ is $\{\inf(\mathcal P^{\mathsf{d}}\cap V_{m_1,n})\} \cap \tilde{\mathcal X}_{m_1,n}$ (with $\inf(\varnothing)$ interpreted as $\varnothing$), the event $\{ \xi_{m_1,n}\ne \xi'_{m_1,n} \}$ is contained in the event $\{\inf (\tilde{\mathcal X}_{m_1,n} \cap V_{m_1,n})\ne \inf (\tilde{\mathcal X}_{m_1,n}'\cap V_{m_1,n}) \}$. Thus, we may apply Proposition [Proposition 13](#pr:distv){reference-type="ref" reference="pr:distv"} with $Q=W_{m_1,n}$ and $y=\sup V_{m_1,n}$. Since $W_{m_1,n}$ is ordered such that all of $V_{m_1,n}$ comes before the remainder of $W_{m_1,n}$, we have that $\{x \in W_{m_1,n}:\,x \leqslant y\}=V_{m_1,n}$. Since $|V_{m_1,n}|\leqslant\kappa_d r_0^d$, Proposition [Proposition 13](#pr:distv){reference-type="ref" reference="pr:distv"} shows that the conditional probability in [\[eq:Y1n\]](#eq:Y1n){reference-type="eqref" reference="eq:Y1n"} is bounded by $$\begin{aligned} c'' d_{\mathsf{TV}}(\mathcal X(W_{m_1,n},\mathcal X_{m_1,n-1} \cup \psi)\cap V_{m_1,n}, \mathcal X(W_{m_1,n},\mathcal X'_{m_1,n-1} \cup \psi')\cap V_{m_1,n}), \end{aligned}$$ where $c'' := 2+\alpha_0\kappa_d r_0^d$. Using here that $\psi, \psi'\subseteq(Q')^c$, Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} with $P:= V_{m_1, n}$ and $B:=(Q')^c$ gives the bound $c' | V_{m_1,n}| e^{-c_2r},$ where $c' := c''c_1$ and $r:=\mathsf{dist}(B_{2s}(P), (Q')^c)$. Using this finding in [\[eq:Y1n\]](#eq:Y1n){reference-type="eqref" reference="eq:Y1n"} yields the asserted bound [\[eq:pr14\]](#eq:pr14){reference-type="eqref" reference="eq:pr14"}: $$\begin{aligned} \mathbb P\Big(\big\{\mathcal X\cap V_{m_1}\ne \mathcal X'\cap V_{m_1} \big\}\cap A_{2s}\Big)\leqslant c'e^{-c_2r} \mathbb E\Big[\sum_{n\geqslant 1}|V_{m_1,n}|\mathds 1\{N_{m_1}\geqslant n, V_{m_1,n}\subseteq B_{2s}(P)\}\Big]. \label{eq:W1lim} \end{aligned}$$ ◻

## Radial thinning in increasing windows {#ss:emb}

*Radial thinning* refers to the special case of disagreement coupling where $B=\varnothing$ and the order is given by $\iota(x):=-\ell_\infty(x)$, where $\ell_\infty$ denotes the radial distance to the reference point $o \in \mathbb R^d$ in the infinity-norm. Note that although $\iota$ is not a bimeasurable map as required in the definition of the Poisson embedding from [@dp], we show in Proposition [Proposition 29](#pr:iota){reference-type="ref" reference="pr:iota"} in the appendix that this choice of order leads to the same Gibbs distribution. Since the Poisson embedding is used in each step of the disagreement coupling algorithm and injectivity of $\iota$ is used in no other place in the construction, disagreement coupling also remains well-defined for such $\iota$. For the choice of the landmarks $v_{i, 1}$ for $V_i$ in the disagreement coupling algorithm, we define the set $D :=\delta\mathbb Z^d$, where $\delta>0$ is sufficiently small so that $Q_{2\delta}\subseteq B_{r_0}$. Thus, $B_{r_0}(D) =\mathbb R^d$. From $\iota$, we obtain a partial order of $D$, which we extend to a total order of $D$ by applying the lexicographical order to $x,y\in D$ if $\iota(x) = \iota(y)$.
Then, at the beginning of $V_i$, we choose $$v_{i, 1} = \min \{x\in D\colon B_{r_0}(x)\cap W_{i,1}\ne \varnothing\}.$$ We use the notation $T^{\mathsf{rad}}_{ U, \psi}(\mathcal P^{\mathsf{d}, *}) = T_{ U,\varnothing, \psi}^{{\sf{dc}}}(\mathcal P^{\mathsf{d}, *})$ when the disagreement coupling is obtained in this way. We assume for simplicity that the radial thinning is always centered at the origin. The main result of this section, Proposition [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"} below, establishes that the radial thinning on $Q_n$ converges almost surely to a limit process as $n\to \infty$. **Proposition 15** (Convergence of radial couplings). *Assume that the Papangelou intensity satisfies **(A)**. Let $U \subseteq\mathbb R^d$ be closed and $\psi \subseteq U^c$ be locally finite. Then, as $n\uparrow\infty$, the embeddings $T^{\mathsf{rad}}_{Q_n \cap U, \psi}(\mathcal P^{\mathsf{d}, *})$ converge almost surely to a limiting process $$\begin{aligned} \label{eq:xxff} \mathcal X(\infty, U, \psi) := \bigcup_{n_0 \geqslant 1} \bigcap_{n \geqslant n_0}T^{\mathsf{rad}}_{Q_n \cap U, \psi}(\mathcal P^{\mathsf{d}, *}).\end{aligned}$$ If $\kappa$ is translation-invariant, the process $\mathcal X(\infty) := \mathcal X(\infty, \mathbb R^d, \varnothing)$ obtained for $U = \mathbb R^d$ and $\psi = \varnothing$ is a stationary and mixing limiting process. Moreover, $\mathcal X(\infty, U, \psi) +x$ has the same distribution as $\mathcal X(\infty, U+x, \psi+x)$.* In fact, Lemma [Lemma 17](#lem:rad){reference-type="ref" reference="lem:rad"} below shows the stronger convergence statement that for any bounded $A \subseteq\mathbb R^d$ and $\omega \in \Omega$, there is an $N(\omega)$ such that $\mathcal X(\infty, U, \psi)\cap A = T^{\mathsf{rad}}_{Q_n \cap U, \psi}(\mathcal P^{\mathsf{d}, *}) \cap A$ whenever $n\geqslant N(\omega)$. We stress that, although $\mathcal X(\infty)$ is stationary when $\kappa$ is stationary, the thinning is not translation-covariant in the sense that radial thinning of $\mathcal P^{\mathsf{d}, *}+ x$ is generally not the same as $\mathcal X(\infty) +x$. **Proposition 16** (Uniqueness of infinite-volume Gibbs process). *The process $\mathcal X(\infty)$ is the distributionally unique infinite-volume Gibbs process with Papangelou intensity $\kappa$.* The rest of this section is devoted to proving Propositions [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"} and [Proposition 16](#pr:uniq){reference-type="ref" reference="pr:uniq"}. The key ingredient is the following almost-sure consistency property of the disagreement coupling in increasing domains. **Lemma 17** (Consistency of the radial thinning). *Let $n \leqslant n'$, $A \subseteq\mathbb R^d$ be bounded, $U \subseteq\mathbb R^d$ be closed and $\psi \subseteq U^c$ be locally finite. Then, almost surely on the event $\{A \not \leftrightsquigarrow Q_{n-4r_0}^c\}$ we have $$\label{e:eqonA} T^{\mathsf{rad}}_{Q_n \cap U, \psi}(\mathcal P^{\mathsf{d}, *}) \cap A = T^{\mathsf{rad}}_{Q_{n'} \cap U, \psi}(\mathcal P^{\mathsf{d}, *})\cap A.$$* *Proof.* Let $V_1, \dots, V_K$ be the regions considered in the disagreement coupling algorithm on $Q_{n'}$ and assume that $V_1,\dots, V_k$ are the ones that start from a point $v_{i,1} \notin Q_{n-2r_0}$.
Then $W_{k+1,1}$ is obtained from $Q_{n'}$ by removing $B_{r_0}(D\cap Q_{n-2r_0}^c)$ as well as all $B_{r_0/2}(\mathcal{C}_l)$, $l=1,\ldots,L$, where $\mathcal{C}_l$ is a component of $B_{r_0/2}(\mathcal P^{\mathsf{d}}\cap Q_{n'})$ such that $B_{r_0/2}(\mathcal{C}_l)$ contains a point in $D\cap Q_{n-2r_0}^c$. Similarly, the disagreement coupling algorithm on $Q_{n}$ first considers the $\tilde{V}_{1},\dots, \tilde{V}_{\tilde{k}}$ that start from a point $\tilde{v}_{i,1} \notin Q_{n-2r_0}$. Then, $\tilde{W}_{\tilde{k}+1,1} = Q_n\backslash (\tilde{V}_1\cup \dots \cup \tilde{V}_{\tilde{k}})$ is obtained from $Q_{n}$ by removing $B_{r_0}(D\cap Q_{n-2r_0}^c)$ as well as all $B_{r_0/2}(\tilde{\mathcal{C}}_{\tilde{l}})$, $\tilde{l}=1,\ldots,\tilde{L}$, where $\tilde{\mathcal{C}}_{\tilde{l}}$ is a component of $B_{r_0/2}(\mathcal P^{\mathsf{d}}\cap Q_{n})$ such that $B_{r_0/2}(\tilde{\mathcal{C}}_{\tilde{l}})$ contains a point in $D\cap Q_{n-2r_0}^c$. We claim that $W_{k+1,1}=\tilde{W}_{\tilde{k}+1,1}$, and hence the disagreement coupling algorithm proceeds in the same way from steps $k+1$ and $\tilde{k}+1$, respectively. On the event $\{A \not \leftrightsquigarrow Q_{n-4r_0}^c\}$, $A\subseteq W_{k+1,1}=\tilde{W}_{\tilde{k}+1,1}$ and hence [\[e:eqonA\]](#e:eqonA){reference-type="eqref" reference="e:eqonA"} follows. To see that $W_{k+1,1}=\tilde{W}_{\tilde{k}+1,1}$, we show that $\mathbb R^d \backslash W_{k+1,1}=\mathbb R^d \backslash \tilde{W}_{\tilde{k}+1,1}$, i.e. $$\label{eq:union} B_{r_0}(D\cap Q_{n-2r_0}^c)\cup \bigcup_l B_{r_0/2}(\mathcal{C}_l ) = B_{r_0}(D\cap Q_{n-2r_0}^c)\cup \bigcup_{\tilde{l}} B_{r_0/2}( \tilde{\mathcal{C}}_{\tilde{l}}).$$ Any component $\tilde{\mathcal{C}}_{\tilde{l}}$ is contained in one of the components $\mathcal{C}_l$, which shows one inclusion. Moreover, for a component $\mathcal{C}_l$ that contains a point in $D\cap Q_{n-2r_0}^c$ within distance $r_0/2$, $\mathcal{C}_l$ must be contained in a union of some of the components in $B_{r_0/2}(\mathcal P^{\mathsf{d}}\cap Q_{n})$ that either contain a point in $D\cap Q_{n-2r_0}^c$ within distance $r_0/2$ themselves or have distance at most $r_0/2$ to $\partial Q_n$ and therefore also contain a point in $D\cap Q_{n-2r_0}^c$ within distance $r_0/2$, as well as some sets of the form $B_{r_0/2}(x)$ with $x\in \mathcal P^{\mathsf{d}}\cap Q_n^c$. Sets of the latter form must satisfy $B_{r_0}(x) \subseteq Q_{n-2r_0}^c\subseteq B_{r_0}(D\cap Q_{n-2r_0}^c)$ by the choice of $\delta$. In total, this shows that $B_{r_0/2}(\mathcal{C}_l )$ is contained in the right hand side of [\[eq:union\]](#eq:union){reference-type="eqref" reference="eq:union"}. ◻ For the rest of this section, we say that a measurable function $f \colon\mathbf N\to [0, 1]$ is *local* if there exists $r > 0$ such that $f(\varphi) = f(\varphi\cap Q_r)$ holds for all locally finite $\varphi\subseteq\mathbb R^d$. We call $Q_r$ a locality region of $f$. *Proof of Proposition [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"}.* Since $r_0 < r_c$, we are in the sub-critical regime, and the almost-sure convergence in [\[eq:xxff\]](#eq:xxff){reference-type="eqref" reference="eq:xxff"} follows from Lemma [Lemma 17](#lem:rad){reference-type="ref" reference="lem:rad"}. To prove that $\mathcal X(\infty)$ is stationary, we must show $\mathbb E[f(\mathcal X(\infty) + x)] = \mathbb E[f(\mathcal X(\infty))]$ for every $x \in \mathbb R^d$ and local $f\colon\mathbf N\to [0, 1]$.
Since $f$ is local, this is equivalent to $\lim_{n \uparrow\infty} \big(\mathbb E[f(\mathcal X_n+x )] - \mathbb E[f(\mathcal X_n)]\big) = 0$. Now, for $n$ so large that $\sqrt n>r$ and $x\in Q_{n-2\sqrt n}$, $$|\mathbb E[f(\mathcal X_n+x )] - \mathbb E[f(\mathcal X_n)]|\leqslant d_{\mathsf{TV}}((\mathcal X_n+x) \cap Q_{\sqrt n},\mathcal X_n \cap Q_{\sqrt n}),$$ which goes to 0 by part (ii) of Proposition [Proposition 12](#pr:lhom){reference-type="ref" reference="pr:lhom"}, i.e., the spatial homogeneity. The claim $\mathcal X(\infty, U, \psi) +x \sim \mathcal X(\infty, U+x, \psi+x)$ is shown similarly using the straightforward generalization of part (ii) of Proposition [Proposition 12](#pr:lhom){reference-type="ref" reference="pr:lhom"} to Gibbs processes with boundary conditions. It remains to show that $\mathcal X(\infty)$ is mixing, i.e., that $$\lim_{|x|\uparrow\infty}\mathbb E[f(\mathcal X(\infty) + x)g(\mathcal X(\infty))] = \mathbb E[f(\mathcal X(\infty))]\mathbb E[g(\mathcal X(\infty))]$$ for all local $f,g\colon\mathbf N\to [0, 1]$. Let $A \subseteq\mathbb R^d$ be a fixed compact set containing the locality regions of $f$ and $g$. Then, setting $n(x) = |x|^2$, Lemma [Lemma 17](#lem:rad){reference-type="ref" reference="lem:rad"} gives that $$\mathbb E\big|f(\mathcal X(\infty) + x)g(\mathcal X(\infty)) - f(\mathcal X(Q_{n(x)}) + x)g(\mathcal X(Q_{n(x)}))\big| \leqslant 2\mathbb P(A \cup (A + x)\leftrightsquigarrow\partial Q_{n(x)-4r_0})$$ for all sufficiently large $|x|$. As before, since $\mathcal P^{\mathsf{d}, *}$ is sub-critical, the right-hand side tends to 0 as $|x| \uparrow\infty$. Now, we need to show that $$\lim_{|x|\uparrow\infty}\Big(\mathbb E[f(\mathcal X(Q_{n(x)}) + x)g(\mathcal X(Q_{n(x)}))] - \mathbb E[f(\mathcal X(Q_{n(x)}))]\mathbb E[g(\mathcal X(Q_{n(x)}))]\Big) = 0.$$ This follows from Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"}, i.e., the spatial decorrelation, which concludes the proof of the mixing property. ◻ We now show the uniqueness result asserted in Proposition [Proposition 16](#pr:uniq){reference-type="ref" reference="pr:uniq"}. *Proof of Proposition [Proposition 16](#pr:uniq){reference-type="ref" reference="pr:uniq"}.* First, for every $n \geqslant 1$, the point process $T^{\mathsf{rad}}_{Q_n, \varnothing}(\mathcal P^{\mathsf{d}, *})$ is a Gibbs point process on $Q_n$ with PI $\kappa$ and hence it satisfies the GNZ equations [\[eGNZ_finite\]](#eGNZ_finite){reference-type="eqref" reference="eGNZ_finite"} for any measurable $f$ on $\mathbb R^d\times \mathbf N$ with support in $Q\times \mathbf N_Q$ for some compact $Q$. Taking limits, we see that also $\mathcal X(\infty)$ satisfies the GNZ equations [\[eGNZ\]](#eGNZ){reference-type="eqref" reference="eGNZ"} for $f$ and hence is an infinite-volume Gibbs point process with PI $\kappa$. Now, let $A\subseteq\mathbb R^d$ be a bounded Borel set and let $\mathcal X$ be any infinite-volume Gibbs point process with Papangelou intensity $\kappa$. Then, to sample $\mathcal X$ in $A$, one may first generate a sample $\psi$ from $\mathcal X$ in $Q_n^c$ with $Q_n \supseteq A$ and then draw a sample from $\mathcal X\cap Q_n$ under the boundary condition $\psi$. By the DLR equations [\[eq:DLR2\]](#eq:DLR2){reference-type="eqref" reference="eq:DLR2"}, the conditional distribution of $\mathcal X\cap Q_n$ given $\psi$ can be represented as $T^{\mathsf{rad}}_{Q_n, \psi}(\mathcal P^{\mathsf{d}, *})$.
Hence, it suffices to show that $\lim_{n \uparrow\infty}\sup_{\psi, \psi' \subseteq Q_n^c}\mathbb P(E_n(\psi,\psi'))=0$, where $$E_n(\psi, \psi') := \big\{T^{\mathsf{rad}}_{Q_n, \psi}(\mathcal P^{\mathsf{d}, *}) \cap A \ne T^{\mathsf{rad}}_{Q_n, \psi'}(\mathcal P^{\mathsf{d}, *})\cap A\big\},$$ with the supremum running over all locally finite $\psi, \psi' \subseteq Q_n^c$. This follows from Lemma [Lemma 10](#lem:dperc){reference-type="ref" reference="lem:dperc"}, which concludes the proof. ◻ *Remark 18*. While the order, and hence the thinning, goes from the outside towards the origin, the proof of Lemma [Lemma 17](#lem:rad){reference-type="ref" reference="lem:rad"} shows that the radial thinning construction of $\mathcal X(\infty)$ can actually be performed by moving outwards from the origin, considering a growing sequence of windows $Q_1,Q_2,\dots$. After the thinning of $Q_n$, we keep the thinning we had on components of $B_{r_0/2}(\mathcal P^{\mathsf{d}})$ that do not contain a point in $D \cap Q_{n-2r_0}^c$ within distance $r_0/2$. On $Q_{n+1}$ we only run the radial thinning until all components containing a point in $D\cap Q_{n-2r_0}^c$ within distance $r_0/2$ have been considered, since the remaining components will not change. # Proof of Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"} -- CLT under weak stabilization {#sec:weak} In this section, we prove Theorem [Theorem 3](#thm:1){reference-type="ref" reference="thm:1"}, i.e., the CLT for Gibbsian functionals under weak stabilization. The main idea is to extend the martingale approach that was developed for Poisson point processes in [@penrose Theorem 3.1]. However, for Gibbs processes, substantial complications arise. While the independence property for Poisson processes allows for a convenient expression of martingale differences in terms of resampling, the setting for Gibbs patterns is more involved. Similarly, ergodicity and spatial decorrelation arguments become more delicate. Our main tool will be the disagreement coupling developed in Section [4](#sec:const){reference-type="ref" reference="sec:const"}. The rest of the section is organized as follows. First, in Section [5.1](#sec:gen){reference-type="ref" reference="sec:gen"}, we establish a CLT under a general moment and stabilization condition. Second, in Section [5.2](#sec:clt){reference-type="ref" reference="sec:clt"}, we verify that [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} imply the general conditions. Finally, we complete the proof of Corollary [Corollary 5](#cor:betti){reference-type="ref" reference="cor:betti"}. ## CLT under general conditions {#sec:gen} In the following, we let $a_n$ be a sequence such that $\lim_{n\to \infty}a_n= \infty$. Let $\mathcal Y_{a_n}$, $n\geq 1$, denote a sequence of point processes on $Q_{a_n}$. Later in this section, we always consider the specific choices $a_n=n+2\sqrt n$ and $\mathcal Y_{a_n} = \mathcal X_{a_n}$. However, since it causes no additional work, we formulate the proofs in the general setting. For $n\geqslant 1$, we consider functionals $H_n(\varphi)$ defined for any finite point pattern $\varphi\subseteq Q_{a_n}$. Note that in the present subsection, we do not require translation invariance of $H_n$ and allow $H_n$ to depend on $n$. Let $Z_{a_n}$ be the set of lattice points $z\in \mathbb Z^d$ such that $Q_{z,1}:= z + Q_1$ intersects the window $Q_{a_n}$. Then, $Q_{a_n}\subseteq\bigcup_{z\in Z_{a_n}} Q_{z,1}$; the small sketch below illustrates this covering together with the lexicographic ordering of $Z_{a_n}$ used in the next step.
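As an aside, the covering of the window by lattice cubes is elementary; the following minimal sketch (an illustration under the assumption that $Q_r$ denotes the centered cube $[-r/2,r/2]^d$, with a toy window size) enumerates $Z_{a_n}$ and returns it in the lexicographic order on which the filtration below is built.

```python
import itertools

def covering_cubes(a, d):
    """Lattice points z in Z^d whose unit cube z + Q_1 meets the centered window Q_a
    (with the convention Q_r = [-r/2, r/2]^d), in lexicographic order z_1 < z_2 < ..."""
    # [z - 1/2, z + 1/2] meets [-a/2, a/2] in every coordinate iff |z_i| <= (a + 1) / 2.
    k = int((a + 1) / 2)
    return sorted(itertools.product(range(-k, k + 1), repeat=d))

Z = covering_cubes(a=4.5, d=2)
print(len(Z), Z[0], Z[-1])   # 25 (-2, -2) (2, 2): 25 unit cubes cover the 4.5 x 4.5 window
```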
Order $Z_{a_n}$ lexicographically as $z_1,\dots,z_{k_n}$ and define $\mathcal F_{i,n}:= \sigma\big(\mathcal Y_{a_n} \cap \bigcup_{j\leqslant i} Q_{z_j,1}\big)$ and $\mathcal F_{0,n}$ to be the trivial $\sigma$-algebra. Then, the key observation in the martingale approach is that $$H_n(\mathcal Y_{a_n}) - \mathbb E[H_n(\mathcal Y_{a_n})]= \sum_{i\leqslant k_n} \Delta_{i,n},$$ where $$\Delta_{i,n} := \Delta_{z_i,n}:= \mathbb E[H_n(\mathcal Y_{a_n}) | \mathcal F_{i,n}] - \mathbb E[H_n(\mathcal Y_{a_n}) | \mathcal F_{i-1,n}].$$ For each $n$, the $\Delta_{i,n}$ form a sequence of martingale differences with respect to the filtration $(\mathcal F_{i,n})_{i\leqslant k_n}$. By orthogonality of martingale differences, the variance is given by $$\mathsf{Var}(H_n(\mathcal Y_{a_n}))= \sum_{i\leqslant k_n}\mathbb E[ \Delta_{i,n}^2].$$ We impose the following two conditions: - (i) $$\sup_{n\geqslant 1, z\in Z_{a_n}} \mathbb E[\Delta_{z,n}^4] < \infty$$ - (ii) There exists a stationary ergodic point process $\mathcal Y$ such that for every $z\in \mathbb Z^d$ there is a random variable $\Delta_z = \Delta_z(\mathcal Y)$ such that ${\Delta}_ {z,n} \to \Delta_z$ in probability (and hence in $L^2$ due to (i)). The limit is shift covariant, meaning that for any $z_0\in \mathbb Z^d$, $\Delta_{z+z_0} (\mathcal Y+ z_0) = \Delta_ z(\mathcal Y)$. The convergence must be uniform in the sense that $$\lim_{n\to \infty} \sup_{z\in Z_{a_n-\sigma(a_n)}}\lVert\Delta_z-\Delta_{z,n}\rVert_{L^2}=0,$$ where $\sigma:(0,\infty) \to \mathbb R$ is a function with $\sigma(x)\in o(x)$ and $\lim_{x\to \infty} \sigma(x) = \infty$. The proposition below and its proof are an adaptation of [@penrose Thm. 3.1] to general point processes. **Proposition 19** (CLT for general functionals). *Let $(a_n)$ be a sequence with $\lim_{n\to \infty} a_n = \infty$ and $\mathcal Y_{a_n}$ a sequence of point processes on $Q_{a_n}$. Let $\sigma^2 = \mathbb E[\Delta_0^2]$. Under Conditions (i) and (ii), $$a_n^{-d}\mathsf{Var}(H_n(\mathcal Y_{a_n} )) \to \sigma^2\geqslant 0 \label{eq:lim_var}$$ and $$a_n^{-d/2}(H_n(\mathcal Y_{a_n} ) - \mathbb E[H_n(\mathcal Y_{a_n} )] ) \to N(0,\sigma^2)$$ in distribution.* *Proof.* Since $\lim_{n\to \infty}a_n^d/k_n=1$, Proposition [Proposition 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} follows from the martingale CLT of [@mcleish] if the following three conditions are satisfied: - 1) $\sup_{n \geqslant 1} k_n^{-1}\mathbb E\big[ \max_{i \leqslant k_n} \Delta_{i,n}^2\big] < \infty$; - 2) $k_n^{-1/2}\max_{i\leqslant k_n}|\Delta_{i,n}| \to 0$ in probability; and - 3) $k_n^{-1}\sum_{i \leqslant k_n} \Delta_{i,n}^2 \to \sigma^2$ in $L^1$. In particular, [\[eq:lim_var\]](#eq:lim_var){reference-type="eqref" reference="eq:lim_var"} follows from 3). The first two conditions follow from Condition (i) exactly as in [@penrose proof of Thm. 3.1]. To make our presentation self-contained, we briefly reproduce the argument. - 1) Condition (i) implies $$\sup_{ n\geqslant 1} k_n^{-1}\mathbb E\big[ \max_i \Delta_{i,n}^2\big] \leqslant\sup_ n k_n^{-1 } \sum_{i\leqslant k_n} \mathbb E[ \Delta_{i,n}^2]<\infty.$$ - 2) This follows from Markov's inequality and (i) because $$\mathbb P\big(k_n^{-1/2}\max_{i\leqslant k_n}|\Delta_{i,n}| \geqslant\varepsilon\big) \leqslant\sum_{i\leqslant k_n}\mathbb P\big(k_n^{-1/2}|\Delta_{i,n}| \geqslant\varepsilon\big) \leqslant k_n \frac{\max_{i\leqslant k_n}\mathbb E[\Delta_{i,n}^4]}{k_n^2\varepsilon^4},$$ which tends to 0 as $n\to \infty$.
- 3) We start by noting that $\Delta_{z}$ is in $L^2$, since $\Delta_{z,n}$ converges to $\Delta_z$ in probability, and (i) implies uniform integrability. By Assumption (ii), $\{\Delta_z(\mathcal Y)\}_{z\in \mathbb Z^d}$ forms a multidimensional ergodic sequence, so that $$|Z_{a_n-\sigma(a_n)}|^{-1}\sum_{z\in Z_{a_n-\sigma(a_n)}} \Delta_ z^2 \xrightarrow{L^1} \sigma^2,$$ see [@kallenberg Thm. 10.12]. Moreover, since $|Z_{a_n-\sigma(a_n)}| = a_n^d + o(a_n^{d}) = k_n+o(k_n)$ and $\Delta_z\in L^2$, $$(k_n^{-1}-|Z_{a_n-\sigma(a_n)}|^{-1})\sum_{z\in Z_{a_n-\sigma(a_n)} } \Delta_ z^2 \xrightarrow{L^1} 0, \quad \text{ and }\quad k_n^{-1}\sum_{z\in Z_{a_n}\setminus Z_{a_n-\sigma(a_n)} } \Delta_ z^2 \xrightarrow{L^1} 0.$$ It remains to show that $$k_n^{-1}\sum_{z\in Z_{a_n}} |\Delta_ z^2 - \Delta_ {z,n}^2| \xrightarrow{L^1} 0.$$ For this, note that $$\mathbb E\big[|\Delta_ z^2 - \Delta_{z,n}^2|\big] \leqslant\mathbb E\big[(\Delta_ z + \Delta_{z,n})^2\big]^{1/2} \mathbb E\big[ (\Delta_ z - \Delta_{z,n})^2\big]^{1/2}.$$ The first term is uniformly bounded by (i) and the fact that $\Delta_ z \in L^2$. From Assumption (ii) we have that $$k_n^{-1} \sum_{z\in Z_{a_n-\sigma(a_n)}} \mathbb E\big[ (\Delta_ z - \Delta_{z,n})^2\big] \to 0$$ as $n \to \infty$. Finally, $$k_n^{-1} \sum_{z\in Z_{a_n}\setminus Z_{a_n-\sigma(a_n)}} \mathbb E\big[(\Delta_ z - \Delta_{z,n})^2\big] \to 0,$$ again by (i) and the fact that $\Delta_z\in L^2$. This shows 3).  ◻ ## A CLT for translation-invariant functionals of Gibbs point processes {#sec:clt} We now return to Gibbs point processes. The present subsection is devoted to connecting the conditions (i) and (ii) stated in the general CLT in Proposition [Proposition 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} with the conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} from Section [2](#sec:mod){reference-type="ref" reference="sec:mod"} in the Gibbs setting. We consider a functional $H_n$ of the form $$\label{eq:H_n_Gibbs} H_n(\varphi) = H(\varphi\cap Q_n)$$ where $\varphi\subseteq\mathbb R^d$ is locally finite and $H$ is a translation-invariant functional of finite sets in $\mathbb R^d$. We let $\mathcal X_{a_n}$ denote the finite Gibbs process $\mathcal X(Q_{a_n},\varnothing)$ and $\mathcal X$ denote the infinite-volume Gibbs process on $\mathbb R^d$. Throughout this subsection we let $a_n=n+2\sqrt n$. In the following, for a fixed $z\in \mathbb Z^d$, we define sets $$U_z = \bigcup_{z'< z} Q_{z',1},\quad U_z^+ = \bigcup_{z'\leqslant z} Q_{z',1},\quad L_z = \bigcup_{z' > z} Q_{z',1},\quad L_z^+ = \bigcup_{z'\geqslant z} Q_{z',1},$$ where $\leq$ refers to the lexicographic ordering, and denote by $U_{z,n},U_{z,n}^+,L_{z,n},L_{z,n}^+$ their intersections with $Q_n$. We refer the reader to Figure [\[fig:fig1\]](#fig:fig1){reference-type="ref" reference="fig:fig1"} for an illustration of these sets. We aim at proving the following proposition: **Proposition 20** (CLT for translation-invariant functionals). *Let $H$ be a translation-invariant functional on finite point patterns satisfying conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"}.
For the infinite-volume Gibbs point process $\mathcal X$ with stationary PI satisfying **(A)**, the limit $$\Delta_0 = \lim_{n\to \infty } \left(\mathbb E[ H_n(\mathcal X)| \mathcal X\cap U_ 0^+ ] - \mathbb E[ H_n(\mathcal X) | \mathcal X\cap U_ 0]\right)$$ exists in $L^2$. Moreover, $|Q_n|^{-1}\mathsf{Var}(H_n(\mathcal X) ) \to \sigma^2:= \mathbb E[\Delta_0^2]$ and $$|Q_n|^{-1/2}\left(H_n(\mathcal X) - \mathbb E[H_n (\mathcal X)] \right) \to N(0,\sigma^2).$$* The rest of this subsection is devoted to the proof of Proposition [Proposition 20](#thm:gibbs_CLT){reference-type="ref" reference="thm:gibbs_CLT"}, which proceeds in three steps. In the first two steps, we elucidate how to derive conditions (i) and (ii) from conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} for the finite-volume Gibbs process $\mathcal X_{a_n}$. In the last step, we move from $H_n(\mathcal X_{a_n} )$ to $H_n(\mathcal X)$. **Lemma 21** (Moment condition). *Let $\mathcal X$ be an infinite-volume Gibbs point process with stationary PI satisfying **(A)**. For a functional of the form [\[eq:H_n\_Gibbs\]](#eq:H_n_Gibbs){reference-type="eqref" reference="eq:H_n_Gibbs"}, Condition (i) holds for the sequence $\mathcal X_{a_n}$ under Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"}.* **Lemma 22** (Weak stabilization). *Let $\mathcal X$ be an infinite-volume Gibbs point process with stationary PI satisfying **(A)**. Suppose that $H_n$ takes the form [\[eq:H_n\_Gibbs\]](#eq:H_n_Gibbs){reference-type="eqref" reference="eq:H_n_Gibbs"} where $H$ is a translation-invariant functional of finite point patterns and $a_n = n + 2\sqrt n$. Assume that Conditions [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} hold for $\mathcal X_{a_n}$. Then, Condition (ii) is satisfied.* **Lemma 23** (From $H_n(\mathcal X_{a_n} )$ to $H_n(\mathcal X)$). *Let $\mathcal X$ be an infinite-volume Gibbs point process with stationary PI satisfying **(A)**. It holds that $H_n(\mathcal X_{a_n})-H_n(\mathcal X) \to 0$ in $L^2$.* Before proving Lemmas [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}--[Lemma 23](#lem:anx){reference-type="ref" reference="lem:anx"}, we explain how to deduce Proposition [Proposition 20](#thm:gibbs_CLT){reference-type="ref" reference="thm:gibbs_CLT"} from them. *Proof of Proposition [Proposition 20](#thm:gibbs_CLT){reference-type="ref" reference="thm:gibbs_CLT"}.* By Lemmas [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"} and [Lemma 22](#lem:(ii)'){reference-type="ref" reference="lem:(ii)'"}, Conditions (i) and (ii) hold for the sequence $\mathcal X_{a_n}$, so Proposition [Proposition 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} (together with $a_n^d/|Q_n| \to 1$) yields $$|Q_n|^{-1/2}(H_n(\mathcal X_{a_n}) - \mathbb E[H_n (\mathcal X_{a_n} )] ) \to N(0,\sigma^2).$$ The statement follows because $H_n(\mathcal X_{a_n})-H_n(\mathcal X) \to 0$ in $L^2$ by Lemma [Lemma 23](#lem:anx){reference-type="ref" reference="lem:anx"}. ◻ We now prove Lemmas [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}--[Lemma 23](#lem:anx){reference-type="ref" reference="lem:anx"}, starting with Lemma [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}. Here, we assume that $\mathcal X_{a_n}$ is constructed by radial thinning of a common Poisson process $\mathcal P^{\mathsf{d}, *}_U$, and $\mathcal X$ will denote the infinite-volume Gibbs process constructed by radial thinning of the same $\mathcal P^{\mathsf{d}, *}_U$, see Proposition [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"}.
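The common driving process $\mathcal P^{\mathsf{d},*}_U$ is what makes the finite-volume and infinite-volume constructions comparable: built from one source of randomness, they can only disagree where the two constructions themselves differ. The toy sketch below illustrates this coupling-by-shared-randomness idea with plain independent thinning of a one-dimensional marked sample; it is *not* the Gibbsian radial thinning, and the window, retention rules and numbers are arbitrary placeholders.

```python
import random

random.seed(0)

# Toy stand-in for the common driving randomness P^{d,*}_U: marked points (x, u)
# on [0, 10] with independent uniform marks. This is NOT the Gibbs construction,
# only a cartoon of "two processes built from one random source".
pts = sorted((random.uniform(0, 10), random.uniform(0, 1)) for _ in range(200))

def thin(points, retain_prob):
    """Keep a marked point (x, u) iff u < retain_prob(x); both processes below reuse
    the same marks u, which is exactly the coupling."""
    return [x for (x, u) in points if u < retain_prob(x)]

# Two retention rules that coincide on the bulk [2, 8] and differ near the boundary,
# mimicking a finite window versus the infinite-volume construction.
finite_window = thin(pts, lambda x: 0.5 if 2 <= x <= 8 else 0.2)
infinite_volume = thin(pts, lambda x: 0.5)

bulk = lambda xs: [x for x in xs if 2 <= x <= 8]
print(bulk(finite_window) == bulk(infinite_volume))  # True: agreement where the rules agree
```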
In the proofs below, we use the following explicit construction of the martingale differences $$\Delta_{z_i,n} = \mathbb E[H_n(\mathcal X_{a_n}) | \mathcal F_{i,n}] - \mathbb E[H_n(\mathcal X_{a_n}) | \mathcal F_{i-1,n}] = \mathbb E[H_n(\mathcal X_{a_n,z_i}')- H_n(\mathcal X_{a_n,z_i}'' )| \mathcal F_{i,n}]$$ where $\mathcal X_{a_n,z_i}' = \mathcal X_{a_n}$ on $U_{z_i}^+$ while $\mathcal X_{a_n,z_i}'' = \mathcal X_{a_n}$ on $U_{z_i}$ and is extended over $Q_{z_i,1}$ by radial thinning of a Poisson process $\mathcal P^{\mathsf{d}, *}_z$ independent of $\mathcal P^{\mathsf{d}, *}_U$. The extensions of $\mathcal X_{a_n,z_i}'$ and $\mathcal X_{a_n,z_i}''$ over $L_{z_i,a_n}$ are constructed by radial thinning of a common Poisson process $\mathcal P^{\mathsf{d}, *}_L$ on $L_{z_i,a_n}$, independent of $\mathcal P^{\mathsf{d}, *}_U$ and $\mathcal P^{\mathsf{d}, *}_z$, using $\mathcal X_{a_n,z_i}'\cap U_{z_i,a_n}^+$ and $\mathcal X_{a_n,z_i}''\cap U_{z_i,a_n}^+$, respectively, as boundary conditions. The thinning procedure used in this last step does not matter when we take the conditional expectation, so the choice of thinning may change according to the context. We often suppress $z$ from the notation and just write $\mathcal X_{a_n}'$ and $\mathcal X_{a_n}''$. *Proof of Lemma [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}.* We may write $$\label{eq:A_m} H_n(\mathcal X_{a_n}') - H_n(\mathcal X_{a_n}'') = \sum_{m\geqslant 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) (H_n(\mathcal X_{a_n}') - H_n(\mathcal X_{a_n}'')),$$ where $A_m$ is the event that $m$ is the smallest integer such that $Q_{z,1}\stackrel{\mathcal P^{\mathsf{d}}_L}{\not\leftrightsquigarrow} Q_{z,m}^c$, where $Q_{z,m}:=Q_m + z$. In this proof, we construct $\mathcal X_{a_n}'\cap L_z$ and $\mathcal X_{a_n}''\cap L_{z}$ by applying the disagreement coupling thinning $T_{L_z,Q_{z,1}^+,\psi}^{\mathsf{dc}}$, where $\psi$ refers to the respective boundary conditions in $U_z^+$. Then, on the event $A_m$ we have that $\mathcal X_{a_n}'\setminus Q_{z,m}=\mathcal X_{a_n}''\setminus Q_{z,m}$ and hence $$\begin{aligned} \label{eq:minusWindow} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) (H_n(\mathcal X_{a_n}') - H_n(\mathcal X_{a_n}'') ) = 1_{A_m}(\mathcal P^{\mathsf{d}}_L)\big( (H_n(\mathcal X_{a_n}') - H_n(\mathcal X_{a_n}'\setminus Q_{z,m})) - (H_n(\mathcal X_{a_n}'')- H_n(\mathcal X_{a_n}''\setminus Q_{z,m})) \big).\nonumber\end{aligned}$$ For simplicity, we observe that by [\[eq:A_m\]](#eq:A_m){reference-type="eqref" reference="eq:A_m"} and the conditional Jensen inequality, it is enough to show that $$\label{eq:inequality1} \sup_{n,z} \mathbb E\Big[\Big(\sum_{m\geq 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'\setminus Q_{z,m}))\Big)^4\Big] < \infty$$ and $$\label{eq:inequality2} \sup_{n,z} \mathbb E\Big[\Big(\sum_{m\geq 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) (H_n( \mathcal X_{a_n}'') - H_n( \mathcal X_{a_n}''\setminus Q_{z,m}))\Big)^4\Big] < \infty.$$ We consider only [\[eq:inequality1\]](#eq:inequality1){reference-type="eqref" reference="eq:inequality1"}, noting that the argument for [\[eq:inequality2\]](#eq:inequality2){reference-type="eqref" reference="eq:inequality2"} is entirely analogous.
We know that $$\begin{aligned} \nonumber &\mathbb E\Big[\Big(\sum_{m\geq 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'\setminus Q_{z,m}))\Big)^4\Big]\\ \nonumber &\leqslant\mathbb E\Big[\Big(\sum_{m\geq 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) \sum_{x\in \mathcal X_{a_n}' \cap Q_{z,m}} c_{\mathsf M, 1}\exp( c_{\mathsf M, 1}|\mathcal X_{a_n}' \cap B_R(x)|)\Big)^4\Big]\\ \nonumber &\leqslant\mathbb E\Big[\Big(\sum_{m\geq 1} 1_{A_m}(\mathcal P^{\mathsf{d}}_L) \sum_{x\in \tilde{\mathcal P^{\mathsf{d}}} \cap Q_{z,m}} c_{\mathsf M, 1}\exp(c_{\mathsf M, 1}|\tilde{\mathcal P^{\mathsf{d}}} \cap B_R(x)|)\Big)^4\Big]\\ \nonumber &= \sum_{m\geqslant 1} \mathbb E\Big[ 1_{A_m}(\mathcal P^{\mathsf{d}}_L) \Big(\sum_{x\in \tilde{\mathcal P^{\mathsf{d}}} \cap Q_{z,m}} c_{\mathsf M, 1}\exp(c_{\mathsf M, 1}|\tilde{\mathcal P^{\mathsf{d}}} \cap B_R(x)\lvert)\Big)^4\Big]\\ \label{eq:exp_sum} &\leqslant\sum_{m\geqslant 0}\exp(-cm) \mathbb E\Big[\Big(\sum_{x\in \tilde{\mathcal P^{\mathsf{d}}} \cap Q_{z,m+1}} c_{\mathsf M, 1}\exp(c_{\mathsf M, 1}|\tilde{\mathcal P^{\mathsf{d}}} \cap B_R(x)\lvert)\Big)^8\Big]^{1/2}.\end{aligned}$$ In the first inequality, we add points in $Q_n\cap \mathcal X_{a_n}' \cap Q_{z,m}$ to $Q_n\cap \mathcal X_{a_n}'\setminus Q_{z,m}$ one at a time and repeatedly use Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"}. In the second inequality, we bound the number of points in $\mathcal X_{a_n}'$ by the number of points in $\tilde{\mathcal P^{\mathsf{d}}} = (\mathcal P^{\mathsf{d}}_U \cap U_{z,n}^+)\cup (\mathcal P^{\mathsf{d}}_L \cap L_{z,n})$. In the equality we use that the events $A_m$ are disjoint. In the final inequality, we apply the Cauchy-Schwarz inequality and bound the probabilities $\mathbb P(A_m)$ by the probability that the cluster of $Q_{z,1}^{\oplus r_0}$ exceeds $Q_{z,m-1}$ using Proposition [Proposition 6](#pr:spt){reference-type="ref" reference="pr:spt"}. Multiplying out the sum $\big(\sum_{x\in \tilde{\mathcal P^{\mathsf{d}}} \cap Q_{z,m+1}} \exp(c_{\mathsf M, 1}|\tilde{\mathcal P^{\mathsf{d}}} \cap B_R(x)\lvert)\big)^8$, we get a finite linear combination of terms of the form $$\begin{aligned} &\mathbb E\Big[\sum_{(x_1,\dots ,x_k)\in (\tilde{\mathcal P^{\mathsf{d}}} \cap Q_{z,m+1})_{\neq}^k} \prod_{i=1}^k \exp\big(\big|\tilde{\mathcal P^{\mathsf{d}}} \cap B_R(x_i)\big|\big)\Big] \leq \int_{Q_{m+1}^k}\alpha^k \mathbb E_ x \exp\Big(k\Big|\tilde{\mathcal P^{\mathsf{d}}} \cap \bigcup_i B_R(x_i)\Big|\Big){\rm d}x\leqslant C m^{dk},\end{aligned}$$ where $k\leqslant 8$ and $C$ is independent of $m$. The notation $(\varphi)_{\neq}^k$ denotes the set of $k$-tuples of pairwise distinct points in the point pattern $\varphi$. Inserting this in [\[eq:exp_sum\]](#eq:exp_sum){reference-type="eqref" reference="eq:exp_sum"} proves the claim. ◻ By similar arguments as in the proof of Lemma [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}, we obtain the following moment bounds, that will be used in the proof of Lemma [Lemma 22](#lem:(ii)'){reference-type="ref" reference="lem:(ii)'"} below. We use the notation $\mathcal X_{z}'$ and $\mathcal X_{z}''$ for the infinite analogues of $\mathcal X_{a_n,z}'$ and $\mathcal X_{a_n,z}''$ which equal $\mathcal X$ on $U_z^+$ and $U_z$, respectively, and are extended to a Gibbs process on all of $\mathbb R^d$ in the same way as $\mathcal X_{a_n,z}'$ and $\mathcal X_{a_n,z}''$ using radial thinning. 
Moreover, $\mathcal X_{a_n,\infty, z}'$ and $\mathcal X_{a_n,\infty,z}''$ will denote the point processes that equal $\mathcal X'$ on $U_ z^+$ and $U_z$, respectively, and are extended to $Q_{a_n}\cap L_z$ and $Q_{a_n}\cap L_z^+$ in the same way as $\mathcal X_{a_n,z}'$ and $\mathcal X_{a_n,z}''$ using $\mathcal X\cap U_z^+$ and $\mathcal X\cap U_z$, respectively, as boundary conditions. Again, we may drop the $z$ from the notation when it is clear from the context. In particular, Lemma [Lemma 23](#lem:anx){reference-type="ref" reference="lem:anx"} is a consequence of Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} [\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"} with $l=0$. **Corollary 1**. *Assume that the PI is stationary and satisfies **(A)** and that $H$ satisfies Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"}. Then, there are constants $M$, $C_l$ such that $$\begin{aligned} \label{ineq:1} &\sup_{n,z,l\geqslant 0} \mathbb E\big[\big( H_n( \mathcal X_{a_n,z}'\setminus Q_{z,l}) - H_n( \mathcal X_{a_n,z}''\setminus Q_{z,l}) \big)^4\big]<M\\\label{ineq:2} &\sup_{z,l\geqslant 0} \mathbb E\big[\big (H_n( \mathcal X_{z}'\setminus Q_{z,l}) - H_n(\mathcal X_{a_n,z}' \setminus Q_{z,l})\big)^4\big] < M \exp(-c \sqrt n)\\ \label{ineq:4} &\sup_{n,z} \mathbb E\big[\big(H_n( \mathcal X_{z}') - H_n( \mathcal X_{z}' \setminus Q_{z,l})\big)^4\big] <C_l \end{aligned}$$ In [\[ineq:1\]](#ineq:1){reference-type="eqref" reference="ineq:1"}, we assume that $\mathcal X_{a_n,z}', \mathcal X_{a_n,z}''$ are constructed using the thinning $T_{L_z,Q_{z,1},\psi}^{\mathsf{dc}}$ combined with radial thinning for the extension to $L_z$. In [\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"} we assume radial thinning was used for extending the point processes. Inequalities [\[ineq:1\]](#ineq:1){reference-type="eqref" reference="ineq:1"}--[\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"} also hold when $\mathcal X_{a_n,z}'$ and $\mathcal X_{a_n,z}''$ are replaced by $\mathcal X_{a_n,\infty,z}'$ and $\mathcal X_{a_n,\infty,z}''$, and [\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"}--[\[ineq:4\]](#ineq:4){reference-type="eqref" reference="ineq:4"} also hold with $\mathcal X'$ replaced by $\mathcal X''$.* *Proof.* Inequalities [\[ineq:1\]](#ineq:1){reference-type="eqref" reference="ineq:1"} and [\[ineq:4\]](#ineq:4){reference-type="eqref" reference="ineq:4"} follow by the obvious modifications of the proof of Lemma [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}. We now prove [\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"}. By assumption, $\mathcal X_{a_n}'$ and $\mathcal X'$ are both constructed on $U_z^+$ by radial thinning of $\mathcal P^{\mathsf{d}, *}_U$ and extended to $L_z$ by radial thinning of $\mathcal P^{\mathsf{d}, *}_L$. The processes $\mathcal X_{a_n}' \cap Q_n$ and $\mathcal X'\cap Q_n$ can therefore differ only on clusters of either $\mathcal P^{\mathsf{d}}_U$ or $\mathcal P^{\mathsf{d}}_L$ that intersect $Q_{a_n}^c$.
Define the event $$E_n=\Big\{Q_{n} \stackrel{\mathcal P^{\mathsf{d}}_U}{\not\leftrightsquigarrow} Q_{a_n}^c\Big\}\cap \Big\{Q_{n} \stackrel{\mathcal P^{\mathsf{d}}_L}{\not\leftrightsquigarrow} Q_{a_n}^c\Big\}.$$ Then, $$\begin{aligned} \mathbb E\big[( H_n(\mathcal X_{a_n}') -H_n(\mathcal X') )^4\big] &= \mathbb E\big[ 1_{E_n^c} (H_n(\mathcal X_{a_n}') - H_n(\mathcal X'))^4\big]\leqslant\mathbb P(E_n^c)^{1/2} \big(\mathbb E\big[ (H_n(\mathcal X_{a_n}') - H_n(\mathcal X'))^8\big]\big)^{1/2}. \end{aligned}$$ Applying Condition [\[eq:m1\]](#eq:m1){reference-type="eqref" reference="eq:m1"} and arguing as in the proof of Lemma [Lemma 21](#lem:(i)'){reference-type="ref" reference="lem:(i)'"}, we have $$\begin{aligned} \mathbb E\big[H_n(\mathcal X_{a_n}')^8\big] {}&\leqslant\mathbb E\Big[ \Big(\sum_{x\in \mathcal X_{a_n}' \cap Q_n} c_{\mathsf M, 1}\exp(c_{\mathsf M, 1}|B_R(x)\cap \mathcal X_{a_n}'|) \Big)^8\Big] \leqslant Cn^{8d}. \end{aligned}$$ Treating the eighth moment of $H_n(\mathcal X')$ similarly, we get $$\begin{aligned} &\mathbb E\big[( H_n(\mathcal X_{a_n}') -H_n(\mathcal X') )^4\big] \leqslant C n^{8d}\exp(-c'\sqrt n), \end{aligned}$$ thereby concluding the proof. ◻ Finally, we prove Lemma [Lemma 22](#lem:(ii)'){reference-type="ref" reference="lem:(ii)'"}. *Proof of Lemma [Lemma 22](#lem:(ii)'){reference-type="ref" reference="lem:(ii)'"}.* Let $b_n= n+\sqrt n$. For the proof below, we define three non-percolation events: $$\begin{aligned} E_L= \Big\{Q_ n\stackrel{\mathcal P^{\mathsf{d}}_L}{\not\leftrightsquigarrow} Q_{b_n}^c\Big\},\qquad E_U= \Big\{Q_{b_n}\stackrel{\mathcal P^{\mathsf{d}}_U}{\not\leftrightsquigarrow} Q_{a_n}^c\Big\}, \qquad E_z = \Big\{Q_ n\stackrel{\mathcal P^{\mathsf{d}}_z}{\not\leftrightsquigarrow} Q_{b_n}^c\Big\}. \end{aligned}$$ Note that, in case [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"} is satisfied, for all $\varepsilon> 0$, $y \in \mathbb R^d$ and locally finite $\varphi\subseteq\mathbb R^d$ there is an $\varepsilon$-radius of stabilization $R_\varepsilon(\varphi,y) >0$ such that for any $Q_{z,c}$ with $Q_{y,R_{\varepsilon}(\varphi,y)} \subseteq Q_{z,c}$, $$\begin{aligned} \label{eq:eps-stab} | D_yH(\varphi\cap Q_{z,c}) - D_yH(\varphi)|\leqslant\varepsilon. \end{aligned}$$ We first show that $$\begin{aligned} \sup_{z\in Z_{a_n}}\big\lVert&\mathbb E[(H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}''))| \mathcal F_{n,z}] - \mathbb E[(H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}''))| \mathcal X\cap U_{\infty,z}^+ ]\big\rVert_{L^2} \in O(\exp(-c\sqrt n)) \label{eq:infty_condition}. \end{aligned}$$ On the event $E_U$, $\mathcal X_{a_n} \cap Q_{b_n} = \mathcal X\cap Q_{b_n}$ when both are constructed by radial thinning centered at the origin as in Section [4.4](#ss:emb){reference-type="ref" reference="ss:emb"}. Moreover, if we use radial thinning centered at the origin for extending $\mathcal X_{a_n}'$ and $\mathcal X'$ to $L_z$, then $\mathcal X_{a_n}' \cap L_{z,n}= \mathcal X' \cap L_{z,n}$ on $E_L\cap E_U$. Similarly, $\mathcal X_{a_n}'' \cap L_{z,n} = \mathcal X'' \cap L_{z,n}$ on the event $E_L\cap E_U\cap E_z$.
Thus, $$\begin{aligned} \nonumber 1_{E_U}\mathbb E\big[1_{E_L\cap E_z}(H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}''))| \mathcal F_{n,z}\big] &= \mathbb E\big[1_{E_U}1_{E_L\cap E_z} (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}''))| \mathcal P^{\mathsf{d}}_U \big]\\ \nonumber &= \mathbb E\big[1_{E_U}1_{E_L\cap E_z}(H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}''))| \mathcal P^{\mathsf{d}}_U \big]\\ &= 1_{E_U}\mathbb E\big[1_{E_L\cap E_z}(H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}''))| \mathcal X\cap U_{\infty,z}^+ \big]. %\label{eq:E_U_restriction}. \end{aligned}$$ By Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} [\[ineq:1\]](#ineq:1){reference-type="eqref" reference="ineq:1"}, Cauchy-Schwarz, and the conditional Jensen inequality, $$\begin{aligned} \sup_{z\in Z_{a_n}} \lVert 1_{E_U^c} \mathbb E[ (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'')) | \mathcal F_{n,z} ] \rVert_{L_2}^2 &\leqslant\mathbb P(E_U^c)^{1/2}\mathbb E\big[\mathbb E[ (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'')) | \mathcal F_{n,z} ]^4\big]^{1/2} \\ &\leqslant C\exp(-c(a_n-b_n)) M^{1/2}, \end{aligned}$$ and similarly, $$\begin{aligned} \sup_{z\in Z_{a_n}} \lVert 1_{E_L^c\cup E_z^c} \mathbb E\big [ (H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}''))| \mathcal F_{n,z} \big]\rVert_{L_2}^2\leqslant C\exp(-c(b_n-n)) M^{1/2}. \end{aligned}$$ Similar inequalities hold with $\mathcal F_{n,z}$ replaced by $\mathcal X\cap U_{z,\infty}$, again by Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"}. This shows [\[eq:infty_condition\]](#eq:infty_condition){reference-type="eqref" reference="eq:infty_condition"}. Fix $m\in \mathbb N$ and put $\tilde A_m := \Big\{Q_{z,1} \stackrel{\mathcal P^{\mathsf{d}}_L}{\not\leftrightsquigarrow} Q_m^c\Big\}$. We construct $\mathcal X_{a_n}'$ and $\mathcal X_{a_n}''$ on $L_z$ using the thinning operator $T^{\mathsf{dc}}_{L_z,Q_{z,1},\psi}$. We have by Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} [\[ineq:1\]](#ineq:1){reference-type="eqref" reference="ineq:1"} and the Cauchy-Schwarz and conditional Jensen inequality that $$\begin{aligned} \nonumber &\big\lVert\mathbb E[H_n( \mathcal X_{a_n,\infty}')- H_n( \mathcal X_{a_n,\infty}'') | \mathcal X\cap U_z^+ ] -\\ \nonumber & \mathbb E[(H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m})) - (H_n( \mathcal X_{a_n,\infty}'') - H_n( \mathcal X_{a_n,\infty}''\setminus Q_{z,m})) | \mathcal X\cap U_z^+ ]\big\rVert_{L^2}^2 \\ \nonumber &= \mathbb E[\mathbb E[ 1_{\tilde A_m^c}(H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m}) - H_n( \mathcal X_{a_n,\infty}''\setminus Q_{z,m}) )|\mathcal X\cap U_z^+ ]^2]\\ \nonumber &\leqslant\mathbb P(\tilde A_m^c)^{1/2} \mathbb E[( H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m}) - H_n( \mathcal X_{a_n,\infty}''\setminus Q_{z,m}) )^4]^{1/2} \\ &\leqslant C\exp(-c m) \sqrt M.\label{eq:remove_window} \end{aligned}$$ This will allow us to treat the $L^2$ limits of $\mathbb E\big[H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m})| \mathcal X\cap U_z^+ \big]$ and $\mathbb E\big[H_n( \mathcal X_{a_n,\infty}'') - H_n( \mathcal X_{a_n,\infty}''\setminus Q_{z,m}) | \mathcal X\cap U_z^+ \big]$ separately. 
To treat $\mathbb E\big[H_n( \mathcal X_{a_n,\infty}') - H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m}) | \mathcal X\cap U_z^+ \big],$ we construct $\mathcal X_{a_n,\infty}'$ on $L_z$ using a radial thinning centered at the origin. The term involving $\mathcal X_{a_n}''$ is handled similarly. On the event $E_L$, $\mathcal X_{a_n,\infty}'\cap Q_n = \mathcal X'\cap Q_n$. Hence, by similar arguments as above and Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} [\[ineq:2\]](#ineq:2){reference-type="eqref" reference="ineq:2"}, $$\begin{aligned} \big\lVert\mathbb E[(H_n( \mathcal X')\hspace{-.09cm} - \hspace{-.09cm} H_n( \mathcal X'\setminus Q_{z,m})) | \mathcal X\cap U_z^+] \hspace{-.09cm}-\hspace{-.09cm} \mathbb E[H_n( \mathcal X_{a_n,\infty}') \hspace{-.09cm}-\hspace{-.09cm} H_n( \mathcal X_{a_n,\infty}'\setminus Q_{z,m}) | \mathcal X\cap U_z^+ ]\big\rVert^2_{L^2} \in O\big(\sqrt{\mathbb P(E_L^c)}\big). \label{eq:infty_inner} \end{aligned}$$ Fix $\varepsilon>0$. For any realization of $\mathcal X'$, $\mathcal X'\setminus Q_{z,m}$ is obtained by removing finitely many points. For each point we remove, there is an $\varepsilon$-radius of stabilization by Condition [\[eq:s1\]](#eq:s1){reference-type="eqref" reference="eq:s1"}. Let $R_{z,m,\varepsilon}(\mathcal X')$ be the maximum of these such that $$|H(\mathcal X' \cap Q_{x,n}) - H( (\mathcal X'\cap Q_{x,n}) \setminus Q_{z,m}) - (H(\mathcal X' ) - H( \mathcal X' \setminus Q_{z,m}))| \leqslant\varepsilon,$$ whenever $Q_{z,m + R_{z,m,\varepsilon}(\mathcal X')} \subseteq Q_{x,n}$. On the event $\{R_{z,m,\varepsilon}(\mathcal X')\leqslant\sqrt n - m\}$ we have for any $z\in Q_{n-\sqrt n}$ that $Q_{z,m + R_{z,m,\varepsilon}(\mathcal X')}\subseteq Q_n$. By another application of Corollary [Corollary 1](#cor:moment_bounds){reference-type="ref" reference="cor:moment_bounds"} [\[ineq:4\]](#ineq:4){reference-type="eqref" reference="ineq:4"}, $$\begin{aligned} \nonumber &\lVert\mathbb E[(H_n( \mathcal X') - H_n( \mathcal X' \setminus Q_{z,m}))-(H(\mathcal X') - H(\mathcal X' \setminus Q_{z,m}))| \mathcal X\cap U_z^+]\rVert_{L^2}^2 \\ \nonumber &\leqslant\lVert 1_{R_{z,m,\varepsilon}(\mathcal X')> \sqrt n-m} ( (H_n( \mathcal X') - H_n( \mathcal X' \setminus Q_{z,m}))-(H(\mathcal X') - H(\mathcal X' \setminus Q_{z,m})))\rVert_{L^2}^2 + \varepsilon^2\\ &\leqslant\mathbb P(R_{z,m,\varepsilon}(\mathcal X')>\sqrt n-m)^{1/2} 2C_m + \varepsilon^2,\label{eq:Rmeps} \end{aligned}$$ where we used that by Fatou's Lemma $$\begin{aligned} \mathbb E[(H(\mathcal X') - H(\mathcal X' \setminus Q_{z,m}))^4] \leqslant\liminf_{n\to \infty} \mathbb E[(H_n( \mathcal X') - H_n( \mathcal X' \setminus Q_{z,m}))^4] < C_m. \end{aligned}$$ Note that [\[eq:Rmeps\]](#eq:Rmeps){reference-type="eqref" reference="eq:Rmeps"} is uniform in $z\in Q_{n-\sqrt n}$ since the distribution of $R_{z,m,\varepsilon}(\mathcal X')$ is independent of $z$ by translation invariance of $H$ and Proposition [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"}. Moreover, $\mathbb P(R_{z,m,\varepsilon}(\mathcal X')>s) \to 0$ as $s\to \infty$. Since $\varepsilon$ was arbitrary, $$\begin{aligned} \label{eq:R_m_bound} &\lim_{n\to \infty}\sup_{z\in Q_{n-\sqrt n}} \big\lVert\mathbb E[(H_n( \mathcal X') - H_n( \mathcal X' \setminus Q_{z,m}))-(H(\mathcal X') - H(\mathcal X' \setminus Q_{z,m}))| \mathcal X\cap U_z^+]\big\rVert_2 = 0.
\end{aligned}$$ Combining [\[eq:infty_condition\]](#eq:infty_condition){reference-type="eqref" reference="eq:infty_condition"}, [\[eq:remove_window\]](#eq:remove_window){reference-type="eqref" reference="eq:remove_window"}, [\[eq:infty_inner\]](#eq:infty_inner){reference-type="eqref" reference="eq:infty_inner"} and [\[eq:R_m\_bound\]](#eq:R_m_bound){reference-type="eqref" reference="eq:R_m_bound"} shows that $$\begin{aligned} \nonumber &\lim_{n\to \infty}\sup_{z\in Q_{n-\sqrt n}}\big\lVert\mathbb E[H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') | \mathcal X_{a_n}\cap U_ z^+ ] - \\ \nonumber &\mathbb E[(H(\mathcal X') - H( \mathcal X'\setminus Q_{z,m}))- (H(\mathcal X'') - H( \mathcal X''\setminus Q_{z,m})) | \mathcal X\cap U_ z^+ ] \big\rVert_{L^2}\\ \label{eq:cauchy} &\leqslant CM'\exp(-cm). \end{aligned}$$ Equation [\[eq:cauchy\]](#eq:cauchy){reference-type="eqref" reference="eq:cauchy"} implies that $$\begin{aligned} \label{eq:sequence} \{\mathbb E[(H( \mathcal X') - H( \mathcal X'\setminus Q_{z,m}))- (H( \mathcal X'') - H(\mathcal X''\setminus Q_{z,m})) | \mathcal X\cap U_ z^+ ]\}_{m\geqslant 1} \end{aligned}$$ is a Cauchy sequence in $L^2$ and hence the limit exists. We call this limit $\Delta_z$. It follows again from [\[eq:cauchy\]](#eq:cauchy){reference-type="eqref" reference="eq:cauchy"} that $$\begin{aligned} \label{eq:L2} \lim_{n\to \infty}\sup_{z\in Q_{n-\sqrt n}}\big\lVert\mathbb E[H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') | \mathcal X_{a_n} \cap U_ z^+ ] - \Delta_z\big\rVert_{L^2} =0. \end{aligned}$$ We see that translating $\mathcal X$ and $z$ by $z_0$ does not change the sequence [\[eq:sequence\]](#eq:sequence){reference-type="eqref" reference="eq:sequence"}, so that $\Delta_z(\mathcal X)=\Delta_{z+z_0}(\mathcal X+z_0)$ a.s. Since $\mathcal X$ is stationary and ergodic by Proposition [Proposition 15](#pr:xxff){reference-type="ref" reference="pr:xxff"}, Condition (ii) is shown. ◻ We now complete the proof of Corollary [Corollary 5](#cor:betti){reference-type="ref" reference="cor:betti"} by showing positivity of the limiting variance. *Proof of variance positivity in Corollary [Corollary 5](#cor:betti){reference-type="ref" reference="cor:betti"}.* In the proofs of Propositions [Proposition 19](#thm:martingale){reference-type="ref" reference="thm:martingale"} and [Proposition 20](#thm:gibbs_CLT){reference-type="ref" reference="thm:gibbs_CLT"}, instead of using unit cubes, we could have used cubes of the form $az+Q_a$ for $z\in \mathbb Z^d$ and some $a>0$. In the following we choose $a>d+2(d\vee r_0)$. The limiting variance is then $a^{-d}\mathbb E[\Delta_{0,a}^2]$, where $\Delta_{0,a}$ is the $L^2$ limit of $$\begin{aligned} \mathbb E[H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') | \mathcal X_{a_n} \cap aU_ 0^+ ] \end{aligned}$$ as in [\[eq:L2\]](#eq:L2){reference-type="eqref" reference="eq:L2"}. Thus, it is enough to show $\liminf_n \mathbb E[\mathbb E[H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') | \mathcal X_{a_n} \cap aU_ 0^+ ]^2]>0$. We construct point processes $\mathcal X_{a_n}'$, $\mathcal X_{a_n}''$, and $\tilde{\mathcal X}_{a_n}'$ as follows: all three are equal to $\mathcal X_{a_n} \cap aU_ 0$ on $aU_ 0$. Let $\mathcal P^{\mathsf{d}, *}_U,\mathcal P^{\mathsf{d}, *}_L,\mathcal P^{\mathsf{d}, *}_z$ be Poisson processes that are independent of each other and of $\mathcal X_{a_n}\cap aU_0$. We define $\mathcal X_{a_n}\cap Q_a$ by radial thinning of $\mathcal P^{\mathsf{d}, *}_U$ and set $\mathcal X_{a_n}' \cap Q_a = \mathcal X_{a_n} \cap Q_a$.
We extend $\mathcal X_{a_n}''$ to $Q_a$ by radial thinning of $\mathcal P^{\mathsf{d}, *}_L$, while $\tilde{\mathcal X}_{a_n}'$ is set to be empty on $Q_{a}$. Now all three processes are defined on $aU_0^+$, and they are extended to $aL_0$ by radial thinning of $\mathcal P^{\mathsf{d}, *}_z$. For a point process $\mathcal Y$, we define disjoint events $$\begin{aligned} E_1{}& = \{ \mathcal Y \cap Q_a = \varnothing\} \\ E_2{}& = \{ \mathcal Y \cap (Q_a\setminus Q_{a-2(r_0\vee d)}) = \varnothing, \beta_q^{b,d}(\mathcal Y \cap Q_{a-2(r_0\vee d)})=1 \}. \end{aligned}$$ Both events have positive probability when $\mathcal Y$ is a Poisson process, see [@shirai Example 1.8]. Then, $$\begin{aligned} & \mathbb E\big[\mathbb E\big[H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') | \mathcal X_{a_n} \cap aU_ 0^+ \big]^2 \big]\\ &\geqslant\mathbb E\big[\mathbb E\big[(H_n( \mathcal X_{a_n}') - H_n( \mathcal X_{a_n}'') )(1_{E_1}(\mathcal X_{a_n}') + 1_{E_2}(\mathcal X_{a_n}')) | \mathcal X_{a_n} \cap aU_ 0^+ \big]^2 \big] \\ &= \mathbb E\big[\mathbb E\big[(H_n( \tilde{\mathcal X}_{a_n}') - H_n( \mathcal X_{a_n}'') )1_{E_1}({\mathcal X}_{a_n}') + (H_n( \tilde{\mathcal X}_{a_n}') - H_n( \mathcal X_{a_n}'') + 1 )1_{E_2}(\mathcal X_{a_n}') | \mathcal X_{a_n} \cap aU_ 0^+ \big]^2 \big]\\ & = \mathbb E\big[\mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}') - H_n( \mathcal X_{a_n}'') |\mathcal X_{a_n} \cap aU_ 0^+ \big]^2 1_{E_1}({\mathcal X}_{a_n}') + \mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}') - H_n( \mathcal X_{a_n}'') + 1 | \mathcal X_{a_n} \cap aU_ 0^+ \big]^21_{E_2}(\mathcal X_{a_n}') \big]. \end{aligned}$$ Here, we used disjointness of $E_1$ and $E_2$, measurability of $1_{E_1}(\mathcal X_{a_n}')$ and $1_{E_2}(\mathcal X_{a_n}')$ with respect to $\mathcal X_{a_n} \cap aU_ 0^+$, and the fact that on $E_1$ and $E_2$, $\mathcal X_{a_n}'$ agrees with $\tilde{\mathcal X}_{a_n}'$ on $aL_0^+$ and $aL_0$, respectively. Using the conditional Jensen inequality, we obtain the lower bound $$\begin{aligned} & \mathbb E\big[\mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}'\setminus Q_a) - H_n( \mathcal X_{a_n}'') |\mathcal X_{a_n} \cap aU_ 0 \big]^2 \mathbb E\big[1_{E_1}({\mathcal X}_{a_n}') |\mathcal X_{a_n} \cap aU_ 0\big] \\ &\quad + \mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}'\setminus Q_a) - H_n( \mathcal X_{a_n}'') + 1 | \mathcal X_{a_n} \cap aU_ 0 \big]^2\mathbb E\big[1_{E_2}(\mathcal X_{a_n}') |\mathcal X_{a_n} \cap aU_ 0\big] \big] \\ &\geqslant\mathbb E\big[\max\big\{\mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}'\setminus Q_a) - H_n( \mathcal X_{a_n}'') |\mathcal X_{a_n} \cap aU_ 0 \big]^2,\, \big(\mathbb E\big[H_n( \tilde{\mathcal X}_{a_n}'\setminus Q_a) - H_n( \mathcal X_{a_n}'') |\mathcal X_{a_n} \cap aU_ 0 \big]+1\big)^2 \big\} \\ &\quad \cdot \min_{i=1,2} \big\{\mathbb E\big[1_{E_i}({\mathcal X}_{a_n}')|\mathcal X_{a_n} \cap aU_ 0 \big]\big\}^2\big]\\ &\geqslant\tfrac 1 4\mathbb E\big[ \min_{i=1,2} \big\{\mathbb E\big[1_{E_i}({\mathcal X}_{a_n}')|\mathcal X_{a_n} \cap aU_ 0 \big]\big\}^2\big], \end{aligned}$$ where the inequality $a_1b_1 + a_2b_2\geq (a_1\vee a_2) \cdot (b_1\wedge b_2)$ for $a_1,a_2,b_1,b_2\geq 0$ was used to obtain the second inequality, and the third inequality used that $x^2 \vee (x+1)^2\geqslant 1/4$ for all $x\in \mathbb R$.
First, concerning $E_1$, we note that $$\mathbb E[1_{E_1}({\mathcal X}_{a_n}')|\mathcal X_{a_n} \cap aU_ 0 ] \geqslant\mathbb P(\mathcal P^{\mathsf{d}}_U\cap Q_a = \varnothing).$$ For $\mathbb E[1_{E_2}({\mathcal X}_{a_n}')|\mathcal X_{a_n} \cap aU_ 0 ]$, we get $$\begin{aligned} &\mathbb E[1_{E_2}({\mathcal X}_{a_n}')| \mathcal X_{a_n} \cap aU_ 0 ]\\ & = \mathbb P(\mathcal X_{a_n} \cap (Q_a\setminus Q_{a-2(r_0\vee d)}) = \varnothing| \mathcal X_{a_n} \cap aU_ 0 ) \mathbb P(\beta_q^{b,d}(\mathcal X_{a_n}\cap Q_{a-2(r_0\vee d)})=1 | \mathcal X_{a_n} \cap (Q_a\setminus Q_{a-2(r_0\vee d)}) = \varnothing). \end{aligned}$$ The first probability is again bounded from below by $\mathbb P(\mathcal P^{\mathsf{d}}_U \cap (Q_a\setminus Q_{a-2(r_0\vee d)})=\varnothing)$, while the second equals $\mathbb P(\beta_q^{b,d}(\mathcal X_{a-2(r_0\vee d)})=1 )$ by [\[eq:DLR1\]](#eq:DLR1){reference-type="eqref" reference="eq:DLR1"}, which is non-zero because the corresponding probability for a Poisson process $\mathbb P(\beta_q^{b,d}(\mathcal P^{\mathsf{d}}\cap Q_{a-2(r_0\vee d)})=1 )$ is positive and $\mathcal X_{a-2(r_0\vee d)}$ has a positive density with respect to $\mathcal P^{\mathsf{d}}\cap Q_{a-2(r_0\vee d)}$ by the assumption $\kappa>0$. ◻ # Proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} -- normal approximation for Gibbsian score sums {#sec:gibbs} Before elaborating on the technical details of the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, we first give a broad overview of the main idea. We consider the general framework for a random measure $\Xi$ on a locally compact metric space $\mathbb X$. In the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, this will be applied to the random measure $$\begin{aligned} \Xi_n := \Xi_n[\mathcal X]& :=\sum_{x \in \mathcal X\cap Q_n} g(x, \mathcal X)\,\delta_x, \label{eqn:poixidefXi}\end{aligned}$$ noting that $|\Xi_n|:=\Xi_n(\mathbb X)=H^n(\mathcal X)$ is the score sum defined at [\[eqn:poixidef\]](#eqn:poixidef){reference-type="eqref" reference="eqn:poixidef"}. The concept of Palm theory is of central importance for our approach. We recall that a collection of random measures $\{\Xi_x\}_{x \in \mathbb X}$ is a *Palm version* of $\Xi$ if it satisfies $$\begin{aligned} \mathbb E\Big[ \int_{\mathbb X} f(x,\Xi) \,\Xi({\rm d}x)\Big]=\int_{\mathbb X} \mathbb E[f(x,\Xi_x)] \,\Lambda({\rm d}x) \label{def:palm1}\end{aligned}$$ for all non-negative measurable $f$, where $\Lambda:= \mathbb E[\Xi]$ denotes the intensity measure. Moreover, we set $\sigma^2:=\mathsf{Var}(|\Xi|) = \mathsf{Var}(\Xi(\mathbb X))$. For an introduction to Palm theory and in particular for the definition of Palm processes with respect to different point processes we refer to [@randMeas Section 6]. Assume that $\Xi$ and its Palm versions $\{\Xi_x\}_{x \in \mathbb X}$ are defined on the same probability space and let $$\begin{aligned} Y_x :=|\Xi_x|-|\Xi|, \quad \Delta_x :=\frac{Y_x}{\sigma^2}=\frac{|\Xi_x|-|\Xi|}{\sigma^2}. \label{def:Ws}\end{aligned}$$ The proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} relies on the following theorem from [@chen], which follows by an application of Stein's method. **Theorem 24** (Theorem 3.1 in [@chen]). *Let $\Xi$, $\{\Xi_x\}_{x \in \mathbb X}$, $Y_x$, and $\Delta_x$ be as in [\[def:Ws\]](#def:Ws){reference-type="eqref" reference="def:Ws"}.
Then, $$\begin{aligned} d_{\mathsf K}\Big(\frac{|\Xi|-\mathbb E|\Xi|}{\sigma^2}, N(0,1)\Big) \leqslant 2E_1+5.5E_2+5E_3+10E_4+7E_5\end{aligned}$$ where $$\begin{aligned} E_1&:=\frac 1{\sigma^2} \mathbb E\Big|\int_{\mathbb X} (Y_x\, \mathbbmss{1}\{|Y_x|\leqslant\sigma\}-\mathbb E\big[Y_x \, \mathbbmss{1}\{|Y_x|\leqslant\sigma\}\big] )\,\Lambda({\rm d}x)\Big|,\\ E_2&:=\frac 1{\sigma^3}\int_{\mathbb X} \mathbb E\big[Y_x^2\,\mathbbmss{1}\{|Y_x| \leqslant\sigma\}\big] \,\Lambda({\rm d}x),\\ E_3&:=\frac 1{\sigma^2}\int_{\mathbb X} \mathbb E\big[|Y_x|\,\mathbbmss{1}\{|Y_x| > \sigma\}\big] \,\Lambda({\rm d}x),\\ E_4&:=\frac 1{\sigma^2} \int_{-1}^1 \int_{\mathbb X} \int_{\mathbb X} \mathsf{Cov}\big(\phi_{x}(t), \phi_{y}(t)\big) \, \Lambda({\rm d}x)\,\Lambda({\rm d}y)\,{\rm d}t,\\ E_5&:=\frac 1{\sigma}\Big(\int_{-1}^1 \int_{\mathbb X} \int_{\mathbb X} |t|\,\mathsf{Cov}\big(\phi_{x}(t), \phi_{y}(t)\big) \, \Lambda({\rm d}x)\,\Lambda({\rm d}y)\,{\rm d}t\Big)^{1/2},\end{aligned}$$ where to simplify the notation, we write $$\begin{aligned} \phi_{x}(t)=\begin{cases} \mathbbmss{1}\{1 \geqslant\Delta_x>t>0\},\quad &t>0,\\ \mathbbmss{1}\{-1 \leqslant\Delta_x<t<0\},\quad &t<0. \end{cases}\end{aligned}$$* Typically, the most delicate expressions are the terms $E_1$, $E_4$, and $E_5$. This is because, loosely speaking, those terms encode a bound on the deviation of the difference $Y_x$ when averaged over space. In contrast, the terms $E_2, E_3$ only involve moment bounds at individual space points. To prepare the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, we write $\{\mathcal X_x^\Xi\}_{x\in Q_n}$ for the Palm version of $\mathcal X$ with respect to $\Xi_n$ in $Q_n$. That is, for all measurable $f:\mathcal X\times\mathbf N \to [0,\infty)$, $$\begin{aligned} \mathbb E\int_{Q_n} f(x,\mathcal X) \,\Xi_n({\rm d}x)= \int_{Q_n} \mathbb Ef(x,\mathcal X_x^\Xi) \, \Lambda_n({\rm d}x). \label{def:palm2}\end{aligned}$$ We note that assumption [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"} ensures the $\sigma$-finiteness of $\Lambda_n$, and thereby the existence of a Palm version $\{\mathcal X_x^\Xi\}_{x\in Q_n}$ see [@kallenberg]. From [\[def:palm1\]](#def:palm1){reference-type="eqref" reference="def:palm1"} and [\[def:palm2\]](#def:palm2){reference-type="eqref" reference="def:palm2"} it follows that $\{\Xi_n[\mathcal X_x^\Xi ] \}_{x\in Q_n}$ is a Palm version $\{\Xi_x\}_{x\in Q_n}$ of $\Xi$. Lemma [Lemma 25](#lem:palmgibbs){reference-type="ref" reference="lem:palmgibbs"} below shows that the reduced Palm process of $\mathcal X$ with respect to $\Xi_n$ at some point $x \in Q_n$ is a Gibbs process. This property is used in the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} to justify that we can construct a coupling of $\Xi_n$ and its Palm version via disagreement coupling. **Lemma 25**. *For a.a. $x \in Q_n$, the reduced process $\mathcal X_x^{!,\Xi}:=\mathcal X_x^\Xi\setminus\{x\}$ is a Gibbs process with PI $\kappa_x$ given by $$\kappa_x(y,\omega):=\kappa(y,\omega\cup \{x\}) \frac{g(x,\omega\cup \{x,y\})}{g(x,\omega\cup \{x\})}$$ where $0/0:=0$.* *Proof.* We must verify [\[eGNZ\]](#eGNZ){reference-type="eqref" reference="eGNZ"} for $\mathcal X_x^{!,\Xi}$. Let $f:\mathbb R^d \times\mathbf N \to [0,\infty)$ be measurable. We note that the Palm versions are only defined uniquely up to a zero-set of points $x \in Q_n$. 
Therefore, it suffices to show that for all measurable $h:\mathbb R^d \to [0,\infty)$, $$\begin{aligned} \int_{Q_n}h(x) \mathbb E\Big[ \sum_{Y_i \in \mathcal X_x^{!,\Xi}} f(Y_i, \mathcal X_x^{!,\Xi})\Big] \,\Lambda_n({\rm d}x)=\int_{Q_n}h(x) \int\mathbb E\Big[f(y,\mathcal X_x^{!,\Xi} \cup \{y\}) \kappa_x(y,\mathcal X_x^{!,\Xi}) \Big]{\rm d}y \,\Lambda_n({\rm d}x). \label{eqn:palmlem} \end{aligned}$$ Here, the right-hand side is, by the definitions of $\mathcal X_x^{!,\Xi}$ in [\[def:palm2\]](#def:palm2){reference-type="eqref" reference="def:palm2"} and of $\kappa_x(y,\omega)$, given by $$\begin{aligned} &\mathbb E\Big[\int_{Q_n}h(x) \int_{Q_n}f(y,(\mathcal X\setminus\{x\})\cup \{y\}) \kappa_x(y,\mathcal X\setminus\{x\}) \,{\rm d}y \,\Xi_n({\rm d}x)\Big] \\ &\quad =\mathbb E\Big[ \sum_{X_i \in \mathcal X} h(X_i) \int_{Q_n}f(y,(\mathcal X\setminus\{X_i\})\cup \{y\}) \kappa(y,\mathcal X) g(X_i,\mathcal X\cup \{y\}) \,{\rm d}y\Big].\end{aligned}$$ We use Fubini's theorem and apply [\[eGNZ\]](#eGNZ){reference-type="eqref" reference="eGNZ"} to the integral over $y$ to get $$\begin{aligned} \mathbb E\Big[\sum_{X_i \ne X_j \in \mathcal X} h(X_i) f(X_j,\mathcal X\setminus\{X_i\}) g(X_i,\mathcal X) \Big] =\mathbb E\Big[ \int_{Q_n}h(x) \sum_{X_j\in \mathcal X\setminus\{x\}} f(X_j,\mathcal X\setminus\{x\}) \,\Xi_n({\rm d}x)\Big].\end{aligned}$$ It follows from the definition [\[def:palm2\]](#def:palm2){reference-type="eqref" reference="def:palm2"} of $\mathcal X_x^{!,\Xi}$ that the above coincides with the left-hand side in [\[eqn:palmlem\]](#eqn:palmlem){reference-type="eqref" reference="eqn:palmlem"}. ◻ The proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"} is based on the following coupling construction. Let $r_n:=4s_n := 4c_{\mathsf s} \log n$ with $c_{\mathsf s} := {120}d\max(c_2^{-1}, c_{\mathsf{es}}^{-1})$, where $c_2>0$ is the constant from Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} and $c_{\mathsf{es}}>0$ is introduced in [\[eq:s2\]](#eq:s2){reference-type="eqref" reference="eq:s2"}. Let $\mathcal X$ be a Gibbs process with PI $\kappa$ and let $\mathcal X_x^\Xi$, $x \in Q_n$, be independent reduced Palm processes of $\mathcal X$ with respect to $\Xi_n$ at $x$, which is possible by the existence of uncountable product measures, see [@kallenberg Corollary 6.18]. We specify $Q:=B_{2r_n}(Q_n)$, $\Psi:=\mathcal X\cap Q^c$, $\Psi_x:= \mathcal X_x^\Xi \cap (Q^c \cup B_{s_n}(x))$ and $\kappa'(z,\omega):=\kappa_x(z,\omega_{B_{s_n}(x)^c} \cup (\Psi_x \cap B_{s_n}(x))) \mathbbmss{1}\{z \in B_{s_n}(x)^c\}$, with $\kappa_x$ from Lemma [Lemma 25](#lem:palmgibbs){reference-type="ref" reference="lem:palmgibbs"}. Hence, when conditioned on $\Psi$ and $\{\Psi_{x}\}_{x}$, we can carry out the disagreement coupling of $\mathcal X$ and $\mathcal X_{x}^\Xi$ and define $$\begin{aligned} \mathcal X_n&:=T^{\mathsf{dc}}_{Q,Q^c,\Psi}(\mathcal P^{\mathsf{d}, *}) \cup \Psi,\qquad \mathcal X_{x, n}^\Xi:=T^{'\mathsf{dc}}_{Q,Q^c,\Psi_x \cap Q^c}(\mathcal P^{\mathsf{d}, *})\cup \Psi_x. \label{eqn:coupling}\end{aligned}$$ Note that while in the random measure $\Xi$ we only sum over points in $Q_n$, the dependence through the score functions means that $\Xi$ also depends on the configuration of $\mathcal X_n$ outside $Q_n$. Therefore, we use the enlarged window $Q = B_{2r_n}(Q_n)$ in the disagreement couplings in [\[eqn:coupling\]](#eqn:coupling){reference-type="eqref" reference="eqn:coupling"}.
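As a purely illustrative aside, and not part of the formal argument, the following Python sketch mimics the mechanism behind the coupling above: one dominating Poisson process, together with one set of uniform marks, is thinned under two boundary conditions that differ by a single extra point. The intensity, the interaction radius and the acceptance rule are invented stand-ins for the thinning probabilities of the actual construction $T^{\mathsf{dc}}$; only the use of common randomness, the radial processing order and the localisation of disagreements reflect the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominating_poisson(intensity, side, rng):
    """Homogeneous Poisson process on [0, side]^2 with independent
    uniform marks that will be shared by both thinnings."""
    n_pts = rng.poisson(intensity * side ** 2)
    pts = rng.uniform(0.0, side, size=(n_pts, 2))
    marks = rng.uniform(0.0, 1.0, size=n_pts)
    return pts, marks

def dependent_thinning(pts, marks, boundary, radius=0.5):
    """Toy dependent thinning: points are visited by increasing distance
    to the origin; the acceptance probability (a stand-in for p(x, psi))
    decreases with the number of kept or boundary points within `radius`."""
    order = np.argsort(np.linalg.norm(pts, axis=1))
    kept = []
    for i in order:
        close = sum(1 for q in kept + list(boundary)
                    if np.linalg.norm(pts[i] - q) < radius)
        if marks[i] < 0.9 * 0.5 ** close:
            kept.append(pts[i])
    return np.array(kept)

# One dominating process, thinned under two boundary conditions that
# differ by a single extra conditioning point (playing the role of x).
pts, marks = dominating_poisson(intensity=1.0, side=10.0, rng=rng)
psi = [np.array([10.5, 5.0])]
psi_x = psi + [np.array([5.0, 5.0])]
X_n = dependent_thinning(pts, marks, psi)
X_xn = dependent_thinning(pts, marks, psi_x)

# Since both thinnings share the dominating points and marks, disagreements
# can only propagate along chains of nearby points starting from the extra
# conditioning point, which is the disagreement-coupling mechanism.
disagree = set(map(tuple, X_n)) ^ set(map(tuple, X_xn))
print(len(X_n), len(X_xn), len(disagree))
```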
Given $\Psi_x$, it follows from Proposition [Proposition 9](#pr:dc_prop){reference-type="ref" reference="pr:dc_prop"} that $T^{'\mathsf{dc}}_{Q,Q^c,\Psi_x \cap Q^c}(\mathcal P^{\mathsf{d}, *})$ is a Gibbs process on $Q$ with PI $\kappa'$ and boundary condition $\Psi_x$. On the other hand, $\mathcal X_x^\Xi \cap Q$ conditioned on $\Psi_x$ is by [\[eq:DLR2\]](#eq:DLR2){reference-type="eqref" reference="eq:DLR2"} also a Gibbs process on $Q$ with PI $\kappa'$ and boundary condition $\Psi_x$. Since the distribution of a Gibbs process is unique on compact domains, we conclude that $$T^{'\mathsf{dc}}_{Q,Q^c,\Psi_x \cap Q^c}(\mathcal P^{\mathsf{d}, *}) \stackrel{d}{=} \mathcal X_x^\Xi \cap Q \mid \Psi_x.$$ Therefore, $\mathcal X_{x, n}^\Xi \stackrel d= \mathcal X_x^\Xi$ and similarly $\mathcal X_n \stackrel d=\mathcal X$. In particular, $\{\mathcal X_{x, n}^\Xi\}_{x \in Q_n}$ is a Palm version of $\mathcal X$ with respect to $\Xi_n$ on $Q_n$. It follows that $\{\Xi_n[\mathcal X_{x, n}^\Xi]\}_{x \in Q_n}$ is a Palm version of $\Xi_n$ on $Q_n$. The moment bound provided by the following lemma will be used in the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}. **Lemma 26** (Moment bound for $Y_{x, n}$). *For $x \in Q_n$ let $Y_{x,n}:=|\Xi_{x,n}|-|\Xi_n|$ with $\Xi_{x,n}:=\Xi_n[\mathcal X_{x, n}^\Xi]$ and $\Xi_n:=\Xi_n[\mathcal X_{n}]$, where $\mathcal X_n$ and $\mathcal X_{x, n}^\Xi$ are given at [\[eqn:coupling\]](#eqn:coupling){reference-type="eqref" reference="eqn:coupling"}. Under the assumptions of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}, we have for $m \leqslant 4$, $$\int_{Q_n} \mathbb E[|Y_{x, n}|^m]\, \Lambda_n ({\rm d}x) \in O\big(|Q_n|s_n^{dm}\big).$$* *Proof.* Set $\Xi_n^>:=\Xi_n - \Xi_n^\leqslant$, $\Xi_{x, n}^>:=\Xi_{x,n} - \Xi_{x,n}^\leqslant$ and $\mathcal X_n \Delta\mathcal X_{x, n}^\Xi:=(\mathcal X_n \setminus \mathcal X_{x, n}^\Xi) \cup (\mathcal X_{x, n}^\Xi \setminus \mathcal X_n)$, where $$\begin{aligned} \Xi_{x,n}^\leqslant:= \sum_{y \in \mathcal X_{x, n}^\Xi \cap Q_n} g(y,\mathcal X_{x, n}^\Xi) \mathds 1\{R(y,\mathcal X_{x, n}^\Xi)\leqslant s_n\}\delta_y,\qquad \Xi_n^\leqslant:= \sum_{y \in \mathcal X_n \cap Q_n} g(y,\mathcal X_n) \mathds 1\{R(y,\mathcal X_n)\leqslant s_n\}\delta_y.\label{def:Xile} \end{aligned}$$ Furthermore, we also put $\Xi_{x, n, r_n}^\leqslant :=\Xi_{x,n}^\leqslant\big(B_{r_n}(x)\big)$. We decompose $Y_{x,n}$ as $$\begin{aligned} Y_{x, n}&=\big(|\Xi_{x, n}^> |-|\Xi_n^>|\big) + \big(\Xi_{x, n, r_n}^\leqslant-\Xi_{n} ^\leqslant\big( B_{r_n}(x)\big)\big) +\big(\Xi_{x,n}^\leqslant\big( B_{r_n}(x)^c\big)-\Xi_n^\leqslant\big( B_{r_n}(x)^c\big)\big) \mathbbmss{1}\{\mathcal X_n \Delta \mathcal X_{x, n}^\Xi \not \subseteq B_{3s_n}(x)\}.
\end{aligned}$$ From the Hölder inequality applied to both the expectation and the integral with respect to $\Lambda_n$, we get $$\begin{aligned} \int_{Q_n} \mathbb E[ | Y_{x, n}|^m] \, \Lambda_n({\rm d}x) \leqslant\sum_{\substack{(i_1,\dots,i_6):\\ i_1+\cdots+i_6=m}} &\Bigg[\Big(\int_{Q_n} \mathbb E[\Xi_{x,n}^\leqslant\big(B_{r_n}(x)\big)^m] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_1}{m}} \Big(\int_{Q_n} \mathbb E[\Xi_n^\leqslant\big(B_{r_n}(x)\big)^m] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_2}{m}}\nonumber\\ &\times \Big(\int_{Q_n} \mathbb E[|\Xi_{x, n}^>|^m] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_3}{m}} \Big(\int_{Q_n} \mathbb E[|\Xi_n^>|^m] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_4}{m}}\nonumber\\ &\times \Big(\int_{Q_n} \mathbb E[|\Xi_{x,n}|^m \mathbbmss{1}\{\mathcal X_n \Delta\mathcal X_{x, n}^\Xi \not\subseteq B_{3s_n}(x)\}] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_5}{m}}\nonumber\\ &\times \Big(\int_{Q_n} \mathbb E[|\Xi_n|^m \mathbbmss{1}\{\mathcal X_n \Delta\mathcal X_{x, n}^\Xi \not\subseteq B_{3s_n}(x)\}] \, \Lambda_n({\rm d}x)\Big)^{\frac{i_6}{m}} \Bigg].\label{eqn:minpalmloc} \end{aligned}$$ We now bound the six integrals on the right-hand side in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"} separately. For the first integral, we obtain by an expansion of the $m$th power and iteratively applying [\[eGNZ\]](#eGNZ){reference-type="eqref" reference="eGNZ"}, $$\begin{aligned} \nonumber &\int_{Q_n} \mathbb E\big[|\Xi_{x, n, r_n}^\leqslant|^m\big] \Lambda_n({\rm d}x)\\ \nonumber &\leqslant\int_{Q_n}\mathbb E\Big[ \Big(\sum_{X_j \in \mathcal X_{x, n}^\Xi \cap B_{r_n}(x)} g(X_j, \mathcal X_{x, n}^\Xi)\Big)^m\Big]\, \Lambda_n({\rm d}x)\\ \nonumber &= \mathbb E\Big[\sum_{X_i \in \mathcal X\cap Q_n}g(X_i, \mathcal X) \Big(\sum_{X_j \in \mathcal X\cap B_{r_n}(X_i)} g(X_j, \mathcal X)\Big)^m \Big]\\ \nonumber &=\int_{Q_n} \sum_{k=0}^{m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \mathbb E\Big[\kappa(x,\mathcal X) \, g(x,\mathcal X\cup \{x\})^{i_{k + 1}+1}\sum_{(X_1, \dots, X_k) \in (\mathcal X\cap B_{r_n}(x))_{\neq}^k} \prod_{j=1}^{k} g(X_j,\mathcal X\cup \{x\})^{i_j} \Big]\,{\rm d}x\\ &=\int_{Q_n} \sum_{k=0}^{m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \int_{ B_{r_n}(x)^k} \mathbb E\Big[\kappa(\{x,{\boldsymbol w}\}, \mathcal X)\, g(x,\mathcal X\cup \{x,{\boldsymbol w}\})^{i_{k + 1}+1}\prod_{j=1}^{k} g(w_j,\mathcal X\cup \{x,{\boldsymbol w}\})^{i_j}\Big] \,{\rm d}{\boldsymbol w}\, {\rm d}x, \label{eqn:firstfactor}\end{aligned}$$ where $\kappa(\{x,{\boldsymbol w}\},\mathcal X):=\kappa(x,\mathcal X)\kappa(w_1,\mathcal X\cup \{x\})\cdots \kappa(w_k,\mathcal X\cup \{x,w_1,\dots,w_{k-1}\})$ for ${\boldsymbol w}= (w_1,\ldots,w_k)\in (\mathbb R^d)^k$. Using the moment condition [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"}, we find that the above is bounded by $$\begin{aligned} &|Q_n| \sum_{k\leqslant m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \alpha_0^{k+1} \sup_{{\boldsymbol x}\in Q_n^{k+1}} g(x_1,\mathcal X\cup \{{\boldsymbol x}\})^{k+1}|B_{r_n}(x)|^k \in O\big(|Q_n| r_n^{dm}\big). \end{aligned}$$ The same bound can be established for the second integral on the right-hand side in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"}. 
For the third integral in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"} we find analogously to above that $$\begin{aligned} &\int_{Q_n} \mathbb E\Big[\Big|\sum_{X_j \in \mathcal X_{x, n}^\Xi} g(X_j,\mathcal X_{x, n}^\Xi) \mathds 1\{R(X_j,\mathcal X_{x, n}^\Xi)>s_n\}\Big|^m\Big] \Lambda_n({\rm d}x)\nonumber\\ &\leqslant\int_{Q_n} \sum_{k=0}^{ m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \int_{ Q_n^k} \mathbb E\Big[\kappa(\{x,{\boldsymbol w}\}, \mathcal X)\, g(x,\mathcal X\cup \{x,{\boldsymbol w}\})^{i_{k + 1}+1}\nonumber\\ &\qquad \qquad \qquad \times \prod_{j=1}^{k} g(w_j,\mathcal X\cup \{x,{\boldsymbol w}\})^{i_j} \mathbbmss{1}\{R(w_j,\mathcal X\cup \{x,{\boldsymbol w}\})>s_n\} \Big] \,{\rm d}{\boldsymbol w}\, {\rm d}x\nonumber\\ &\leqslant\sum_{k=0}^{m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \hspace{-0.3cm} \alpha_0^{k+1} |Q_n|^{k+1} \sup_{{\boldsymbol x}\in Q_n^{k+1}}\mathbb E\big[ g(x_{k+1},\mathcal X\cup \{{\boldsymbol x}\})^{i_{k + 1}+1} \prod_{j=1}^{k}g(x_j,\mathcal X\cup \{{\boldsymbol x}\})^{i_j} \mathbbmss{1}\{R(x_j,\mathcal X\cup {\boldsymbol x}) > s_n\}\big].\nonumber \end{aligned}$$ Here, we bound every indicator in the expectation except for the one with $j=1$ by $1$. Then we apply the Hölder inequality with $m+2$ factors (counted with multiplicities). This gives the bound $$\begin{aligned} \sum_{k\leqslant m} \sum_{\substack{(i_1, \dots, i_{k + 1}):\\ i_{1}+\cdots+i_{k + 1}=m}} \alpha_0^{k+1} |Q_n|^{k+1} \sup_{{\boldsymbol x}\in Q_n^{k+1}}\mathbb P(R(x_1,\mathcal X\cup \{{\boldsymbol x}\}) > s_n )^{{\frac 1{m+2}}}\sup_{{\boldsymbol x}\in Q_n^{k+1}}\mathbb E[g(x_1,\mathcal X\cup \{{\boldsymbol x}\})^{m+2}]^{\frac{m+1}{m+2}}, \label{eqn:Yint3} \end{aligned}$$ which is of order $O(1)$. The integral of $\mathbb E[|\Xi_n^>|^m]$ in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"} can be bounded similarly. For the fifth integral in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"}, we find from the Hölder inequality the bound $$\begin{aligned} &\Big(\int_{Q_n}\mathbb E\Big[{|\Xi_{x,n}|^{m+1}}\Big]\,\Lambda_n({\rm d}x)\Big)^{\frac{m}{m+1}} \Big( \int_{Q_n} \mathbb P\big(\mathcal X_n \Delta\mathcal X_{x, n}^\Xi \not\subseteq B_{3s_n}(x) \big) \, \Lambda_n({\rm d}x)\Big)^{\frac 1{m+1}}. \label{eqn:Yint5} \end{aligned}$$ Now, it follows from the bounded moments condition [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"} analogously to the bounds of the terms in [\[eqn:firstfactor\]](#eqn:firstfactor){reference-type="eqref" reference="eqn:firstfactor"} that the first integral is in $O(|Q_n|^{m+2})$, whereas we bound the probability in the second integral by $$\begin{aligned} \mathbb P(R(x,\mathcal X_{x,n}^{\Xi})>s_n)+ \mathbb P\big(\mathcal X_n \Delta\mathcal X_{x, n}^\Xi \not\subseteq B_{3s_n}(x), \, R(x,\mathcal X_{x,n}^{\Xi})\leqslant s_n \big). 
\label{eq:5int2} \end{aligned}$$ By definition of $\mathcal X_{x,n}^{\Xi}$, the $\Lambda_n$-integral of the first probability is given by $$\begin{aligned} \mathbb E\Big[ \sum_{x \in \mathcal X\cap Q_n} \mathbbmss{1}\{R(x,\mathcal X)>s_n\} g(x,\mathcal X)\Big]&=\int_{Q_n} \mathbb E[\kappa(x,\mathcal X) \mathbbmss{1}\{R(x,\mathcal X\cup \{x\})>s_n\} g(x,\mathcal X\cup \{x\})]\,{\rm d}x\nonumber\\ &\leqslant\alpha_0 \int_{Q_n} \mathbb P(R(x,\mathcal X\cup \{x\})>s_n)^{1/2} \mathbb E[g(x,\mathcal X\cup \{x\})^2]^{1/2}\,{\rm d}x,\label{eq:Rpalm}\end{aligned}$$ where we have used [\[as:(A)\]](#as:(A)){reference-type="eqref" reference="as:(A)"} and the Cauchy-Schwarz inequality in the last line. Since $\mathbb P(R(x,\mathcal X)>s_n)\leqslant c_1 |Q_n|^{-120}$ by [\[eq:s2\]](#eq:s2){reference-type="eqref" reference="eq:s2"}, the last integral is in $O(|Q_n|^{-59})$. To treat the second probability, we apply Proposition [Proposition 14](#th:disdis){reference-type="ref" reference="th:disdis"}. To that end, we note that for $R(x,\mathcal X_{x,n}^{\Xi})\leqslant s_n$ we have by the stopping property of $R$ and the definition of $\Psi_{x, 0}:=\Psi_x\cap B_{s_n}(x)$ that $\kappa_x(y,\omega\cup \Psi_{x, 0})=\kappa(y,\omega\cup \Psi_{x, 0})$ for $y \in Q\setminus B_{s_n}(x)$ and $\omega \subseteq Q\setminus B_{s_n}(x)$. Therefore, we obtain from Proposition [Proposition 14](#th:disdis){reference-type="ref" reference="th:disdis"} with $\kappa'(y,\omega) = \kappa(y,(\omega\setminus B_{s_n}(x)) \cup \Psi_{x, 0})\mathbbmss{1}\{y\in Q\backslash B_{s_n}(x)\}$, $Q':=Q\setminus B_{s_n}(x)$, $P:=Q_n\setminus B_{3s_n}(x)$ and $s:=s_n$ that $$\begin{aligned} \mathbb P\big(\mathcal X_n \Delta\mathcal X_{x, n}^\Xi \not\subseteq B_{3s_n}(x), \, R(x,\mathcal X_{x,n}^{\Xi})\leqslant s_n \big) \leqslant C |Q_n| \exp(-c_2 s_n)\leqslant C |Q_n|^{-119}. \label{eqn:Yint5b} \end{aligned}$$ Hence, [\[eqn:Yint5\]](#eqn:Yint5){reference-type="eqref" reference="eqn:Yint5"} is in $O(1)$. The sixth integral in [\[eqn:minpalmloc\]](#eqn:minpalmloc){reference-type="eqref" reference="eqn:minpalmloc"} is treated analogously. ◻ The following lemma will be crucial to bound the terms $E_1$, $E_4$ and $E_5$ in the proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}. To ease notation, we write $F_{x, n} := F_n(\mathcal X_{x, n, r_n})$ where $\mathcal X_{x, n, r_n}:=\mathcal X_{x,n} \cap B_{r_n}(x)$ for any measurable $F_n:\mathbf N \to [0,\infty)$. **Lemma 27**. *Let $p > 0$ and $F_n:\mathbf N \to [0,\infty)$ be measurable. Furthermore, assume that $F_n(\omega)\leqslant|Q_n|^p |\omega|$ for all large $n \geqslant 1$. Then, $$\iint_{Q_n^2 \cap \{|x-y|>4r_n\}} \mathsf{Cov}(F_{x, n}, F_{y, n})\, \Lambda_n^2({\rm d}x, {\rm d}y) \in O(r_n^{2d}|Q_n|^{2p-30}|Q_n|^2).$$* *Remark 28*. As the proof reveals, Lemma [Lemma 27](#lem:thm2cov){reference-type="ref" reference="lem:thm2cov"} also holds if $F_{x,n}$ or $F_{y,n}$ is replaced by $F_n(\mathcal X\cap B_{r_n}(x))$ or $F_n(\mathcal X\cap B_{r_n}(y))$, respectively. In this case, the first step of the proof (where $F_{z,n}$ is restricted to $\tilde F_{z,n}$) can be avoided. *Proof.* For $z \in \{x,y\}$, we write $$\tilde F_{z, n}:= F_n(\mathcal X_{z,n} \cap B_{r_n}(z)) \mathbbmss{1}\{R_{z, n}\leqslant s_n\},$$ where we set $R_{z, n} := R(z,\mathcal X_{z,n})$.
Then, $$\begin{aligned} \mathsf{Cov}(F_{x, n}, F_{y, n}) &=\mathsf{Cov}(\tilde F_{x, n}, \tilde F_{y, n}) + \mathsf{Cov}(\tilde F_{x, n}, F_{y, n}\mathbbmss{1}\{R_{y, n}> s_n\}) + \mathsf{Cov}(F_{x, n} \mathbbmss{1}\{R_{x, n}> s_n\}, F_{y, n} )\label{lemco1bou2}\end{aligned}$$ As the steps for the second and the third covariance are similar, we only discuss the second in detail. Here, using that $F_n$ is nonnegative, the Cauchy-Schwarz inequality gives the upper bound $$\begin{aligned} \mathbb E[F_{x,n}^2]^{1/2} \mathbb E[F_{y,n}^2 \mathbbmss{1}\{R_{y,n}>s_n\}]^{1/2}.\end{aligned}$$ An upper bound for the negative covariance can be derived similarly. We set $N_{z,r_n}:=|(\mathcal X\cup \{z\}) \cap B_{r_n}(z)|$ for $z \in \{x,y\}$, $q_{y,n}:=\mathbb P(R(y,\mathcal X\cup \{y\})>s_n)$ and find from Jensen's and Hölder's inequality and [\[eq:s2\]](#eq:s2){reference-type="eqref" reference="eq:s2"} that the $\Lambda_n^2$-integral of the above is bounded by $$\begin{aligned} &|Q_n|^{2p}\Lambda_n(Q_n)\Big(\iint \mathbb E[F_{x,n}^2] \mathbb E[F_{y,n}^2 \mathbbmss{1}\{R_{y,n}>s_n\}]\Lambda_n^2({\rm d}x, {\rm d}y)\Big)^{1/2}\\ &\, \leqslant\alpha_0 |Q_n|^{2p} \Lambda_n(Q_n) \Big(\iint E[N_{x,r_n}^2 g(x,\mathcal X\cup \{x\})] \mathbb E[N_{y,r_n}^2 g(y,\mathcal X\cup\{y\})\mathbbmss{1}\{R(y,\mathcal X\cup \{y\})>s_n\}] \,{\rm d}x \,{\rm d}y)\Big)^{1/2}\\ &\, \leqslant\alpha_0 |Q_n|^{2p} \Lambda_n(Q_n) \Big(\iint E[N_{x,r_n}^4]^{1/2} \mathbb E[g(x,\mathcal X\cup \{x\})^2]^{1/2} \mathbb E[N_{y,r_n}^4]^{1/4} \mathbb E[g(y,\mathcal X\cup\{y\})^4]^{1/4} \sqrt{q_{y,n}}\,{\rm d}x \,{\rm d}y)\Big)^{1/2} \\ &\, \leqslant\alpha_0 |Q_n|^{2p} \Lambda_n(Q_n)^2 \sup_{y \in Q_n} \mathbb E[g(y,\mathcal X\cup \{y\})^4]^{1/4} \sup_{y \in Q_n} \mathbb E[N_{y,r_n}^4]^{3/8} \sup_{y \in Q_n} q_{y,n}^{1/4}.\end{aligned}$$ Here we use [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"} and that $\sup_{y \in Q_n} \mathbb E[N_{y,r_n}^4] \in O(r_n^{4d})$, $\sup_{y \in Q_n} q_{y,n}\in O(|Q_n|^{-120})$ and thus find that the above is in $O(r_n^{3d/2}|Q_n|^{2p-30}|Q_n|^{2})$. For the first covariance on the right-hand side in [\[lemco1bou2\]](#lemco1bou2){reference-type="eqref" reference="lemco1bou2"} we first work conditioned on $\Psi_x$ and $\Psi_y$ Then, again, we want to apply Proposition [Proposition 14](#th:disdis){reference-type="ref" reference="th:disdis"} with $Q:=B_{2r_n}(Q_n)$, $Q':=B_{2r_n}(z)^c$, and $P:=B_{2s_n}(z)$. Setting $\Psi_{z, 0} : = \Psi_z \cap B_{s_n}(z)$, we also consider the PI $${\kappa_{1,z}(w, \omega) :=\kappa(w,\omega_{B_{s_n}(z)^c} \cup \Psi_{z, 0})\mathbbmss{1}\{w\in Q\setminus B_{s_n}(z)\},}$$ and the interpolation PI $$\begin{aligned} \kappa_1'(w,\omega):=\begin{cases} \kappa(w,\omega_{B_{s_n}(x)^c} \cup \Psi_{x, 0}) & w \in B_{2r_n}(x) \setminus B_{s_n}(x),\\ \kappa(w,\omega_{B_{s_n}(y)^c} \cup \Psi_{y, 0}) &w \in B_{2r_n}(y) \setminus B_{s_n}(y),\\ 0 &\text{otherwise}. \end{cases}\end{aligned}$$ Then, as before conditioned on $\Psi_{x, 0}$ and $\Psi_{y, 0}$, we note that on the event $\{R(z,\Psi_{z, 0}) \leqslant s_n\}$, the process $\mathcal X_{z,n}^\Xi$ is a Gibbs process on $Q\setminus B_{s_n}(z)$ with PI $\kappa_{1,z}(w, \omega)$. 
Hence, when defining $$\mathcal X'_n:=T_{Q,Q^c,\kappa_1'}(\mathcal P^d) \cup \Psi_{x, 0} \cup \Psi_{y, 0},$$ then Proposition [Proposition 14](#th:disdis){reference-type="ref" reference="th:disdis"} gives that $$\begin{aligned} \mathbb P(S_{z,n}^c) \leqslant C s_n^d e^{-c_2s_n},\label{est:Szn}\end{aligned}$$ where $S_{z,n}:=\{\mathcal X_{z,n} \cap B_{2s_n}(z)=\mathcal X'_n \cap B_{2s_n}(z)\}$. Similarly as before, we bound the first covariance on the right-hand side in [\[lemco1bou2\]](#lemco1bou2){reference-type="eqref" reference="lemco1bou2"} by $$\begin{aligned} &\mathsf{Cov}(\tilde F_{x, n} \mathbbmss{1}_{S_{x,n}}, \tilde F_{y, n}\mathbbmss{1}_{S_{y,n}}) + \mathsf{Cov}(\tilde F_{x, n} \mathbbmss{1}_{S_{x,n}^c}, \tilde F_{y, n}\mathbbmss{1}_{S_{y,n}}) + \mathsf{Cov}(\tilde F_{x, n} , \tilde F_{y, n}\mathbbmss{1}_{S_{y,n}^c})\end{aligned}$$ Here, we use [\[est:Szn\]](#est:Szn){reference-type="eqref" reference="est:Szn"} to bound the last two covariances similarly to [\[lemco1bou2\]](#lemco1bou2){reference-type="eqref" reference="lemco1bou2"}. Setting $\tilde F_{x, n}':=\tilde F_n(\mathcal X'_{x, n, r_n})$ where $\mathcal X'_{x, n, r_n}:=\mathcal X_n'\cap B_{r_n}(x)$, the first covariance is further decomposed into $$\begin{aligned} \mathsf{Cov}(\tilde F_{x, n}' , \tilde F_{y, n}')- \mathsf{Cov}(\tilde F_{x, n}' \mathbbmss{1}_{S_{x,n}}, \tilde F_{y, n}'\mathbbmss{1}_{S_{y,n}^c}) &-\mathsf{Cov}(\tilde F_{x, n}' \mathbbmss{1}_{S_{x,n}^c}, \tilde F_{y, n}'\mathbbmss{1}_{S_{y,n}}) - \mathsf{Cov}(\tilde F_{x, n}' \mathbbmss{1}_{S_{x,n}^c}, \tilde F_{y, n}'\mathbbmss{1}_{S_{y,n}^c}). \label{lem:co1co}\end{aligned}$$ Since computations are similar for all summands, we only consider the first covariance. It is given by $$\begin{aligned} &\mathbb E\big[\mathsf{Cov}(\tilde F_{x, n}' , \tilde F_{y, n}'\mid \Psi_x, \Psi_y)\big] + \mathsf{Cov}\big(\mathbb E\big[\tilde F_{x, n}'\mid \Psi_x,\Psi_y\big], \mathbb E\big[\tilde F_{y, n}'\mid \Psi_x,\Psi_y\big] \big).\label{lem:co_cosplit}\end{aligned}$$ To bound the first term, we first note that by the tower property of conditional expectation, $$\begin{aligned} \mathsf{Cov}(\tilde F_{x, n}' , \tilde F_{y, n}'\mid \Psi_x, \Psi_y)=\mathbb E\big[\tilde F_{y, n}' \big(\mathbb E\big[\tilde F_{x, n}' \mid \Psi_x, \mathcal X'_{y, n, r_n} \big]-\mathbb E[\tilde F_{x, n}'\mid \Psi_x, \Psi_y]\big)\mid \Psi_x, \Psi_y \big]. \end{aligned}$$ To bound the inner expression we apply Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} with $B := B_{r_n}(x)$ and the Gibbs process $\mathcal X'_n$. 
Using here that $\tilde F_n(\mathcal X'_{x, n, r_n}) \leqslant|Q_n|^p |\mathcal X_{x, n , r_n}'| \leqslant|Q_n|^p \mathcal P^{\mathsf{d}}(B_{r_n}(x))$ almost surely, we bound the difference of the expectations in the round brackets using Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"} by $$\begin{aligned} &\big| \mathbb E\big[\tilde F_{x, n}' \mid \Psi_x, \mathcal X'_{y, n, r_n} \big]-\mathbb E[\tilde F_{x, n}'\mid \Psi_x, \Psi_y] \big|\\ &\quad \leqslant|Q_n|^p\mathbb E\big[ \mathcal P^{\mathsf{d}}(B_{r_n}(x)) d_{\mathsf{TV}}\big(\mathcal L(\mathcal X'_{x, n, r_n}\hspace{-.15cm}\mid \Psi_x, \mathcal X'_{y, n, r_n}) ,\mathcal L(\mathcal X'_{x, n, r_n}\hspace{-.15cm}\mid \Psi_x, \Psi_y )\big)\big]\\ &\quad \leqslant|Q_n|^p c_1 |B_{r_n}(x)| e^{-2c_2s_n}\mathbb E[ \mathcal P^{\mathsf{d}}(B_{r_n}(x))] \in O(r_n^{2d} n^{p-120}).\end{aligned}$$ In particular, $\mathsf{Cov}(\tilde F_{x, n}' , \tilde F_{y, n}'\mid \Psi_x, \Psi_y)\in O(r_n^{3d} |Q_n|^{2p-120}).$ We split the second term from [\[lem:co_cosplit\]](#lem:co_cosplit){reference-type="eqref" reference="lem:co_cosplit"} into $$\begin{aligned} &\mathsf{Cov}\big(\mathbb E\big[\tilde F_{x, n}'\mid \Psi_x\big], \mathbb E\big[\tilde F_{y, n}'\mid \Psi_y\big] \big)+\mathsf{Cov}\big(\mathbb E\big[\tilde F_{x, n}'\mid \Psi_x,\Psi_y\big]- \mathbb E\big[\tilde F_{x, n}'\mid \Psi_x\big], \mathbb E\big[\tilde F_{y, n}'\mid \Psi_y\big] \big)\nonumber\\ &\quad +\mathsf{Cov}\big(\mathbb E\big[\tilde F_{x, n}'\mid \Psi_x,\Psi_y\big], \mathbb E\big[\tilde F_{y, n}'\mid \Psi_x,\Psi_y\big] - \mathbb E\big[\tilde F_{y, n}'\mid \Psi_y\big]\big).\label{lem:co_cond}\end{aligned}$$ Due to the independence of $\Psi_x$ and $\Psi_y$, the first covariance vanishes. The second and the third covariances can be treated very similarly, so we only consider the second one. Here, we use that $\tilde F_n(\mathcal X_{x, n, r_n}') \leqslant|Q_n|^p |\mathcal X_{x, n, r_n}'| \leqslant|Q_n|^p \mathcal P^{\mathsf{d}}(B_{r_n}(x))$ and obtain the bound $$\begin{aligned} &\mathbb E\big[\big(\mathbb E\big[\tilde F_{x, n}'\mid \Psi_x,\Psi_y\big]- \mathbb E\big[\tilde F_{x, n}'\mid \Psi_x\big]\big)^2\big]^{1/2} \mathbb E\big[(\tilde F_{y, n}')^2\mid \Psi_y\big]^{1/2} \\ &\leqslant|Q_n|^{2p} \mathbb E\big[\mathcal P^{\mathsf{d}}(B_{r_n}(x))^2 d_{\mathsf{TV}}\big(\mathcal L(\mathcal X'_{x, n, r_n}\mid \Psi_x,\Psi_y),\mathcal L(\mathcal X'_{x, n, r_n}\mid \Psi_x ) \big)\big]^{1/2} \mathbb E[ \mathcal P^{\mathsf{d}}(B_{r_n}(x))^2]^{1/2}.\end{aligned}$$ Again by Proposition [Proposition 11](#pr:dec){reference-type="ref" reference="pr:dec"}, the total variation distance in the first expectation is bounded by $c_1 |B_{r_n}(x)|e^{-2c_2 s_n}$ uniformly in $\Psi_y$. Therefore, we obtain the bound $|Q_n|^{2p} \mathbb E[\mathcal P^{\mathsf{d}}(B_{r_n}(x))^2] c_1 |B_{r_n}(x)| e^{-2c_2s_n}.$ ◻ *Proof of Theorem [Theorem 4](#thm:2){reference-type="ref" reference="thm:2"}.* We apply Theorem [Theorem 24](#thm:rollin){reference-type="ref" reference="thm:rollin"} with $\Xi:=\Xi_n=\Xi_n[\mathcal X_n]$ and $\Xi_x:=\Xi_{x,n}=\Xi_n[\mathcal X_{x, n}^\Xi]$ where $\mathcal X_n$ and $\mathcal X_{x, n}^\Xi$ are from [\[eqn:coupling\]](#eqn:coupling){reference-type="eqref" reference="eqn:coupling"}. We start by considering $E_2$ and $E_3$ since the argument is particularly short here. 
From Lemma [Lemma 26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"} and [\[eqn:poimomass\]](#eqn:poimomass){reference-type="eqref" reference="eqn:poimomass"}, we obtain that $$\begin{aligned} \sigma_n^3(E_2+E_3) \leqslant\int_{Q_n} \mathbb E[Y_{x, n}^2]\,\Lambda_n({\rm d}x)\in O( r_n^{2d} |Q_n|).\end{aligned}$$ First, we have that $$\begin{aligned} \sigma_n^{4}E_1^2&\leqslant\mathbb E\Big[\Big(\int_{Q_n} (Y_{x,n}\, \mathbbmss{1}\{|Y_{x,n}|\leqslant\sigma_n\}-\mathbb E\big[Y_{x,n} \, \mathbbmss{1}\{|Y_{x,n}|\leqslant\sigma_n\}\big] )\,\Lambda_n({\rm d}x)\Big)^2\Big]\nonumber\\ &= \iint_{Q_n^2} \mathsf{Cov}(Y_{x, n} \mathbbmss{1}\{|Y_{x,n}|\leqslant\sigma_n\},Y_{y, n} \mathbbmss{1}\{|Y_{y,n}|\leqslant\sigma_n\})\,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y).\label{eqn:E1bou0}\end{aligned}$$ Hence, by [\[eqn:poimomass\]](#eqn:poimomass){reference-type="eqref" reference="eqn:poimomass"}, it suffices to show that the right-hand side is of order $O\big(|Q_n|r_n^{4d}\big)$. We write $Y_{x,n}^\leqslant:=Y_{x, n} \mathbbmss{1}\{|Y_{x,n}|\leqslant\sigma_n\}$ and $Y_{x,n}^>:=Y_{x, n} \mathbbmss{1}\{|Y_{x,n}|> \sigma_n\}$, $x \in \mathbb R^d$, and reformulate the covariance in [\[eqn:E1bou0\]](#eqn:E1bou0){reference-type="eqref" reference="eqn:E1bou0"} by $$\begin{aligned} \mathsf{Cov}(Y_{x, n},Y_{y, n})- \mathsf{Cov}(Y^\leqslant_{x, n},Y^>_{y, n})- \mathsf{Cov}(Y^>_{x, n},Y_{y, n}).\label{eqn:E1bouu}\end{aligned}$$ The second and third covariances can be treated similarly. Hence, we only discuss the integral over the second covariance. Here, we use the Hölder inequality to bound the covariance in the second integral by $$\begin{aligned} \mathsf{Cov}(Y^\leqslant_{x, n},Y^>_{y, n})&\leqslant\mathbb E[|Y^\leqslant_{x,n}-\mathbb EY^\leqslant_{x,n}|^4]^{1/4}\mathbb E\Big[\Big((|Y^>_{y,n}-\mathbb EY^>_{y,n}|)^{1/3}\Big)^4\Big]^{3/4}\\ &\leqslant\mathbb E[(|Y^\leqslant_{x,n}|+|\mathbb EY^\leqslant_{x,n}|)^4]^{1/4}\mathbb E\Big[\Big(|Y^>_{y,n}|^{1/3}+|\mathbb EY^>_{y,n}|^{1/3}\Big)^4\Big]^{3/4}.\end{aligned}$$ Now, we evaluate the fourth powers under the expectations and apply Jensen's inequality with the convex mappings $z\mapsto z^4$ and $z \mapsto z^{4/3}$ to each of the resulting terms. 
This yields the bound $$(16 \mathbb E[|Y^\leqslant_{x,n}|^4])^{1/4} (16\mathbb E[|Y^>_{y,n}|^{4/3}])^{3/4} \leqslant 16 \mathbb E[Y_{x,n}^4]^{1/4}\mathbb E[|Y_{y,n}^>|^{4/3}]^{3/4} \leqslant 16\mathbb E[Y_{x,n}^4]^{1/4}\mathbb E[Y_{y,n}^4]^{1/4} \mathbb P(|Y_{y,n}|>\sigma_n)^{1/2}.$$ Hence, bounding the last probability by Markov's inequality, we find from Lemma [Lemma 26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"} and from Jensen's inequality applied to the normalized $\Lambda_n$-integrals, that the second integral in [\[eqn:E1bouu\]](#eqn:E1bouu){reference-type="eqref" reference="eqn:E1bouu"} is bounded by $$\begin{aligned} {16}{\sigma_n^{-2}} \iint_{Q_n^2} \mathbb E[Y_{x,n}^4]^{1/4}\mathbb E[Y_{y,n}^4]^{3/4}\,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y)\leqslant{16\Lambda_n(Q_n)}{\sigma_n^{-2}} \int_{Q_n} \mathbb E[Y_{x,n}^4]\,\Lambda_n({\rm d}x) \in O(|Q_n|s_n^{4d}),\end{aligned}$$ It remains to bound the integral over the first covariance in [\[eqn:E1bouu\]](#eqn:E1bouu){reference-type="eqref" reference="eqn:E1bouu"}, which we split by $$\begin{aligned} \int_{Q_n^2} \mathbbmss{1}\{|x-y|\leqslant 4r_n\} \mathsf{Cov}(Y_{x, n},Y_{y, n})\,\Lambda_n({\rm d}x)\Lambda({\rm d}y)+ \int_{Q_n}\int_{Q_n \setminus B_{4r_n}(y)} \mathsf{Cov}(Y_{x, n},Y_{y, n})\,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y).\label{eqn:E1Ysplit} \end{aligned}$$ Here, the Cauchy-Schwarz inequality gives that $\mathsf{Cov}(Y_{x, n},Y_{y, n})\leqslant\sqrt{\mathbb E[Y_{x,n}^2]} \sqrt{\mathbb E[Y_{y,n}^2]}\leqslant\mathbb E[Y_{x,n}^2]+\mathbb E[Y_{y,n}^2]$. Hence, we can bound the first integral in [\[eqn:E1Ysplit\]](#eqn:E1Ysplit){reference-type="eqref" reference="eqn:E1Ysplit"} using Lemma [Lemma 26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"} by $$2|B_{4r_n}(y)| \int_{Q_n} \mathbb E[Y_{x,n}^2]\, \Lambda_n({\rm d}x) \in O(|Q_n|s_n^{3d}).$$ To deal with second integral in [\[eqn:E1Ysplit\]](#eqn:E1Ysplit){reference-type="eqref" reference="eqn:E1Ysplit"}, we set $\Xi_{x, n, r_n} :=\Xi_{x,n}^\leqslant\big(B_{r_n}(x)\big)$, and split $Y_{x, n}$ as $$\begin{aligned} Y_{x, n} =Y_{x,n}^{>,*}+Y_{x,n}^{\leqslant, \mathsf{in}}+Y_{x,n}^{\leqslant, \mathsf{out}} &:=\Big(|\Xi_{x, n}^> |-|\Xi_n^>|\Big) + \Big(\Xi_{x, n, r_n}^\leqslant-\Xi_n^\leqslant\big( B_{r_n}(x)\Big) \\ &\quad +\Big(\Xi_{x,n}^\leqslant\big( B_{r_n}(x)^c\big)-\Xi_n^\leqslant\big( B_{r_n}(x)^c\big)\Big) \mathbbmss{1}\{\mathcal X_n \Delta \mathcal X_{x, n}^\Xi \not \subseteq B_{3s_n}(x)\},\end{aligned}$$ and bound [\[eqn:E1Ysplit\]](#eqn:E1Ysplit){reference-type="eqref" reference="eqn:E1Ysplit"} by $$\begin{aligned} &\int_{Q_n^2} \bigg(\mathsf{Cov}(Y_{x,n}^{>,*}, Y_{y,n}^{>,*}) + 2 \mathsf{Cov}(Y_{x,n}^{>,*}, Y_{y,n}^{\leqslant, \mathsf{in}}) + 2 \mathsf{Cov}(Y_{x,n}^{>,*}, Y_{y,n}^{\leqslant, \mathsf{out}}) + \mathsf{Cov}(Y_{x,n}^{\leqslant, \mathsf{out}}, Y_{y,n}^{\leqslant, \mathsf{out}}) \nonumber\\ &\quad + 2 \mathsf{Cov}(Y_{x,n}^{\leqslant, \mathsf{out}}, Y_{y,n}^{\leqslant, \mathsf{in}})\bigg) \,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y) + \int_{Q_n}\int_{Q_n \setminus B_{4r_n}(y)} \mathsf{Cov}(Y_{x,n}^{\leqslant, \mathsf{in}}, Y_{y,n}^{\leqslant, \mathsf{in}}) \,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y).\end{aligned}$$ Now, the Cauchy-Schwarz inequality together with the bounds [\[eqn:Yint3\]](#eqn:Yint3){reference-type="eqref" reference="eqn:Yint3"}, [\[eqn:Yint5\]](#eqn:Yint5){reference-type="eqref" reference="eqn:Yint5"} and [\[eqn:Yint5b\]](#eqn:Yint5b){reference-type="eqref" reference="eqn:Yint5b"} in the proof of Lemma [Lemma 
26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"} give that the first integral is of order $O(|Q_n|s_n^{4d})$. For the covariance in the second integral, we rely on the decomposition $$\begin{aligned} \mathsf{Cov}(Y^{\leqslant, \mathsf{in}}_{x,n} ,Y^{\leqslant, \mathsf{in}}_{y,n})&= \mathsf{Cov}\big(\Xi_{x, n, r_n}^\leqslant, \Xi_{y, n, r_n}^\leqslant\big) -\mathsf{Cov}\big(\Xi_{x, n, r_n}^\leqslant, \Xi_n^\leqslant\big( B_{r_n}(y)\big)\big) \\ &\quad -\mathsf{Cov}\big(\Xi_n^\leqslant\big( B_{r_n}(x)\big), \Xi_{y, n, r_n}^\leqslant\big)+\mathsf{Cov}\big(\Xi_n^\leqslant\big( B_{r_n}(x)\big), \Xi_n^\leqslant\big( B_{r_n}(y)\big)\big). \end{aligned}$$ Since the arguments for the other terms work analogously, we only consider the first term on the right-hand side. As a first step, we let $M:=|Q_n|^{10}$ and split the covariance by $$\begin{aligned} \Xi_{x,n}^\leqslant&= \sum_{w \in \mathcal X_{x, n, r_n}} g(w,\mathcal X_{x,n}) \mathbbmss{1}\{R(w,\mathcal X_{x,n})\leqslant s_n, g(x,\mathcal X_{x,n})\leqslant M\} \\ &\quad + \sum_{w \in \mathcal X_{x, n, r_n}} g(w,\mathcal X_{x,n}) \mathbbmss{1}\{R(w,\mathcal X_{x,n})\leqslant s_n, g(w,\mathcal X_{x,n})>M\}=:\Xi_{n,x}^{\leqslant M} + \Xi_{n,x}^{>M}.\end{aligned}$$ This gives $$\begin{aligned} & \mathsf{Cov}\big(\Xi_{x, n, r_n}^\leqslant, \Xi_{y, n, r_n}^\leqslant\big) =\mathsf{Cov}\big(\Xi_{x, n, r_n}^{\leqslant M}, \Xi_{y, n, r_n}^{\leqslant M} )+\mathsf{Cov}\big(\Xi_{x, n, r_n}^{\leqslant M},\Xi_{y, n, r_n}^{> M}) + \mathsf{Cov}\big(\Xi_{x, n, r_n}^{> M} , \Xi_{y,n, r_n} ).\label{thm2:E1tr}\end{aligned}$$ It follows from Lemma [Lemma 27](#lem:thm2cov){reference-type="ref" reference="lem:thm2cov"} and Remark [Remark 28](#rem:cov){reference-type="ref" reference="rem:cov"} that the $\Lambda_n^2$-integral of the first covariance is in $O(s_n^{2d}|Q_n|^{-10}|Q_n|^2)$. The integrated second covariance term is by the Jensen inequality bounded by $$\begin{aligned} \int_{Q_n^2} \hspace{-.2cm}\sqrt{\mathsf{Var}(\Xi_{x, n, r_n}^{\leqslant M}) \mathsf{Var}(\Xi_{y, n, r_n}^{> M})} \Lambda_n^2({\rm d}x, {\rm d}y) \leqslant\Lambda_n(Q_n) \sqrt{\int_{Q_n}\hspace{-.2cm}\mathbb E\big[\big(\Xi_{x, n, r_n}^{\leqslant M}\big)^2\big] \Lambda_n({\rm d}x){ \int_{Q_n} \hspace{-.2cm}\mathbb E\Big[\big(\Xi_{y, n, r_n}^{> M}\big)^2\Big] \Lambda_n({\rm d}y)}}. \label{eqn:thm2intM}\end{aligned}$$ Here we find by a similar computation as in [\[eqn:firstfactor\]](#eqn:firstfactor){reference-type="eqref" reference="eqn:firstfactor"} that the first integral is in $O(|Q_n|^3)$. For the second integral we obtain similarly to [\[eqn:firstfactor\]](#eqn:firstfactor){reference-type="eqref" reference="eqn:firstfactor"} the bound $$\begin{aligned} &\int_{Q_n} \mathbb E\Big[\big(\Xi_{y, n, r_n}^{> M}\big)^2\Big] \Lambda_n({\rm d}y)\\ & \leqslant|Q_n| \sum_{k=0}^{2} \sum_{\substack{(i_1,\dots,i_{k + 1}):\\ i_1+\cdots+i_{k + 1}=2}} \hspace{-0.3cm} \alpha_0^{k+1} |B_{r_n}(y)|^{k} \sup_{{\boldsymbol x}\in Q_n^{k+1}}\mathbb E\big[ g(x_{k+1},\mathcal X\cup \{{\boldsymbol x}\})^{i_{k + 1}} \prod_{j=1}^{k}g(x_j,\mathcal X\cup \{{\boldsymbol x}\})^{i_j} \mathbbmss{1}\{g(x_j,\mathcal X\cup \{{\boldsymbol x}\}) > M\}\big].\end{aligned}$$ Here, we apply the Hölder inequality and the Markov inequality to the expectations above and obtain from [\[eq:m2\]](#eq:m2){reference-type="eqref" reference="eq:m2"} that the second integral from [\[eqn:thm2intM\]](#eqn:thm2intM){reference-type="eqref" reference="eqn:thm2intM"} is in $O(r_n^{2d}|Q_n|^{-3})$. 
The same bound can be established for the third covariance on the right-hand side in [\[thm2:E1tr\]](#thm2:E1tr){reference-type="eqref" reference="thm2:E1tr"}. Therefore, $$\int_{Q_n}\int_{Q_n \setminus B_{4r_n}(y)}\mathsf{Cov}(Y_{x,n}^{\leqslant, \mathsf{in}}, Y_{y,n}^{\leqslant, \mathsf{in}}) \,\Lambda_n({\rm d}x) \Lambda_n({\rm d}y) \in O(s_n^{2d}).$$ This completes the bound on $E_1$. To bound $E_4$, we deal separately with the cases $|x-y| \leqslant 4 r_n$ and $|x-y| > 4 r_n$. First, consider the case where $|x - y|\leqslant 4r_n$. Then, by the Cauchy-Schwarz inequality, $$\begin{aligned} &\int_0^1 |\mathsf{Cov}(\phi_{x, n}(t),\phi_{y, n}(t))| {\rm d}t \leqslant\int_0^1 \sqrt{\mathbb E[\phi_{x, n}(t)]}\sqrt{\mathbb E[\phi_{y, n}(t)]}{\rm d}t \leqslant\bigg(\int_0^1 \mathbb E[\phi_{x, n}(t)]{\rm d}t \int_0^1\mathbb E[\phi_{y, n}(t)]{\rm d}t\bigg)^{1/2}\\ & \leqslant\sigma_n^{-1} \sqrt{\mathbb E[|Y_{x, n}|]}\sqrt{\mathbb E[|Y_{y, n}|]}.\end{aligned}$$ Therefore, by Lemma [Lemma 26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"}, $$\sigma_n\int_0^1\int_{Q_n}\int_{B_{4r_n}(y)} \mathsf{Cov}\big(\phi_{x, n}(t),\phi_{y, n}(t)\big)\, \Lambda_n({\rm d}x)\,\Lambda_n({\rm d}y) {\rm d}t\in O(s_n^{2d}|Q_n|),$$ and an analogous calculation holds for $t \in [-1, 0]$. Finally, we consider the case $|x - y|> 4r_n$. Writing $U_{x, n}:=\{Y_{x,n}^{>} = Y_{x,n}^{\leqslant, \mathsf{out}}=0\}$, we decompose the covariance $\mathsf{Cov}(\phi_{x, n},\phi_{y, n})$ as $$\begin{aligned} \mathsf{Cov}(\phi_{x, n} \mathbbmss{1}\{U_{x, n}^c\},\phi_{y, n}) + \mathsf{Cov}(\phi_{x, n} \mathbbmss{1}\{U_{x, n}\},\phi_{y, n}\mathbbmss{1}\{U_{y, n}^c\}) + \mathsf{Cov}(\phi_{x, n} \mathbbmss{1}\{U_{x, n}\},\phi_{y, n}\mathbbmss{1}\{U_{y, n}\}).\label{eq:thm2E4}\end{aligned}$$ Using that $\mathsf{Cov}(\phi_{x, n} \mathbbmss{1}\{U_{x, n}^c\},\phi_{y, n}) \leqslant\mathbb P(U_{x,n}^c)\in O( |Q_n| e^{-\min\{c_2,c_{\mathsf{es}}\}s_n})$ by the proof of Lemma [Lemma 26](#lem:Yintbou){reference-type="ref" reference="lem:Yintbou"}, we find that the integral of the first term is in $O(|Q_n|s_n^{2d})$. A similar estimate holds for the second covariance term. We let $\Delta_{x,n}^\leqslant:=(|\Xi_{x,n}^\leqslant|-|\Xi_n^\leqslant|)/{\sigma_n^2}$, $$\begin{aligned} \phi_{x,n}^\leqslant(t)=\begin{cases} \mathbbmss{1}\{1 \geqslant\Delta_{x,n}^\leqslant>t>0\}&t>0,\\ \mathbbmss{1}\{-1 \leqslant\Delta_{x,n}^\leqslant<t<0\}&t<0, \end{cases}\end{aligned}$$ and decompose the third covariance as $$\begin{aligned} \mathsf{Cov}(\phi_{x, n}^\leqslant,\phi_{y, n}^\leqslant) - \mathsf{Cov}(\phi_{x, n}^\leqslant\mathbbmss{1}\{U_{x, n}^c\},\phi_{y, n}^\leqslant\mathbbmss{1}\{U_{y, n}\})- \mathsf{Cov}(\phi_{x, n}^\leqslant,\phi_{y, n}^\leqslant\mathbbmss{1}\{U_{y, n}^c\}).\end{aligned}$$ Here, the first term is by Lemma [Lemma 27](#lem:thm2cov){reference-type="ref" reference="lem:thm2cov"} with $p=0$ in $O(r_n^{2d}|Q_n|^{-30}|Q_n|^2)$. The second and the third covariance are bounded similarly to the first and second covariance in [\[eq:thm2E4\]](#eq:thm2E4){reference-type="eqref" reference="eq:thm2E4"}. Therefore, $E_4 \in O(s_n^{2d} |Q_n|^{-1/2})$, as asserted. It remains to bound $E_5$; the steps are very similar to the arguments for $E_4$. We deal separately with the cases $|x-y| \leqslant 4 r_n$ and $|x-y| > 4 r_n$. First, consider the case where $|x - y|\leqslant 4r_n$.
Then, $$\big|\int_0^1 t\mathsf{Cov}(\phi_{x, n}(t),\phi_{y, n}(t)) {\rm d}t\big| \leqslant\big| \int_0^1 t\mathbb E[\phi_{x, n}(t)]{\rm d}t\big| \leqslant 0.5\sigma_n^{-2} \mathbb E[Y_{x, n}^2].$$ Therefore, $$\sigma_n^2\int_0^1\int_{Q_n}\int_{B_{4r_n}(y)} t\,\mathsf{Cov}(\phi_{x, n}(t),\phi_{y, n}(t))\, \Lambda_n({\rm d}x)\,\Lambda_n({\rm d}y) {\rm d}t\in O(s_n^{{3d}}|Q_n|),$$ and an analogous calculation holds for $t \in [-1, 0]$. Hence, it remains to consider the case where $|x - y|> 4r_n$. We use the same bound for $\mathsf{Cov}(\phi_{x, n}(t),\phi_{y, n}(t))$ as in the case $E_4$, which yields $E_5 \in O(s_n^{2d}|Q_n|^{-1/2})$, as asserted. ◻ # Choice of ordering for disagreement coupling {#sec:iota} In the radial thinning, we ordered the Poisson process by the distance to the origin rather than by a bimeasurable map. While this is not strictly necessary for the present paper, it simplifies the presentation and makes the construction more intuitive. The proposition below shows that this is valid. **Proposition 29**. *Assume that $\iota: \mathbb R^d \to \mathbb R$ is a measurable map such that for all $r\in \mathbb R$ $$\lambda( \iota^{-1}(r) )= 0.$$ Then, $\iota$ almost surely induces a well-defined total ordering of the points of $\mathcal P^{\mathsf{d}}$. Moreover, replacing the bimeasurable map by $\iota$ in the Poisson embedding and the cluster-based thinning yields a Gibbs point process with the same distribution.* *Proof.* We may assume that the domain $Q$ is bounded and that we work with the Poisson embedding. We define a partial ordering of $\mathbb R^d$ by $x\leq y$ if $\iota(x) <\iota (y)$ or $x=y$. Since $\lambda( \iota^{-1}(r) )= 0$ for all $r\in \mathbb R$, with probability one this induces a well-defined total ordering on the points of $\mathcal P^{\mathsf{d}}$. Moreover, the window $Q_x = \iota^{-1}((\iota(x),\infty))$ is again a measurable set. In fact, $(x,\mu)\mapsto \mu_{\mid Q_x}$ is measurable and hence the thinning probabilities $p(x,\psi)$ are measurable maps. Thus, using $\iota$ for the Poisson embedding is well-defined. We may assume $\iota(Q)\subseteq (0,1]$. For each $n=1,2,\ldots$, we construct a map $\iota^{n}: Q \to \mathbb R$ which is bimeasurable onto its image and provides almost the same ordering as $\iota$. Let $\phi:Q \to (0,1]$ be any bimeasurable map. For $x\in Q$, there is a unique $m\in \{0,\ldots,2^{n-1}\}$ such that $x\in\iota^{-1}(m2^{-n+1}-2^{-n},m2^{-n+1}+2^{-n}]$, and we define $\iota^{n}(x) = m + \phi(x) \in (m,m+1]$. That is, $\iota^n$ is obtained by first dividing $Q$ into the subsets $A_m=\iota^{-1}(m2^{-n+1}-2^{-n},m2^{-n+1}+2^{-n}]$, ordering these subsets by $m$, and then applying $\phi$ to order the points inside each subset. For each $n$, the ordering induced by $\iota^n$ may be used in the Poisson embedding leading to a thinned point process $\mathcal{X}^{n}(Q,\psi)$, which we know has the correct Gibbs distribution. We must show three things: (i) $\mathcal{X}^{n}(Q,\psi)$ converges almost surely to a point process $\mathcal{X}^{\infty}(Q,\psi)$; (ii) the limit $\mathcal{X}^{\infty}(Q,\psi)$ is the same point process we would get if we had used the ordering $\iota$; (iii) the limit $\mathcal{X}^{\infty}(Q,\psi)$ again has the correct Gibbs distribution. Let $Q_x=\{y\in Q\,|\, \iota(y)> \iota(x)\}$, $Q_x^+=\{y\in Q\,|\, \iota(y)< \iota(x)\}$, $Q_x^n=\{y\in Q\,|\,\iota^n(y) >\iota^n(x) \}$, and $Q_x^{+,n}=\{y\in Q\,|\,\iota^n(y) <\iota^n(x) \}$.
We claim that for any fixed $x\in Q$, $$\label{eq:symdif} \lim_{n \to \infty }\lambda(Q_x\Delta Q_x^{n})=0, \qquad \lim_{n \to \infty }\lambda(Q_x^+\Delta Q_x^{+,n})=0.$$ Note first that $Q_x\Delta Q_x^{n}=Q_x^+\Delta Q_x^{+,n}$. Suppose $\iota(x)\in (m_{n,x}2^{-n+1}-2^{-n},m_{n,x}2^{-n+1}+2^{-n}]$. Then, $$\begin{aligned} Q_x \backslash Q_x^n &\subseteq \iota^{-1}((\iota(x) , m_{n,x}2^{-n+1}+2^{-n}]),\\ Q_x^n \backslash Q_x &\subseteq \iota^{-1}((m_{n,x}2^{-n+1}-2^{-n},\iota(x)]).\end{aligned}$$ In total, $$\begin{aligned} \label{eq:symdif2} Q_x^n \Delta Q_x \subseteq \iota^{-1}((m_{n,x}2^{-n+1}-2^{-n},m_{n,x}2^{-n+1}+2^{-n}]). \end{aligned}$$ The claim now follows because the sets $$\begin{aligned} \iota^{-1}((m_{n,x}2^{-n+1}-2^{-n},m_{n,x}2^{-n+1}+2^{-n}])\end{aligned}$$ decrease towards $\iota^{-1}(\iota(x))$, which has Lebesgue measure zero by assumption. We next show that the thinning probabilities converge pointwise. Fix $(x,\psi)$ with $\psi \subseteq Q_x^+$. By [\[eq:symdif2\]](#eq:symdif2){reference-type="eqref" reference="eq:symdif2"}, there is an $n_0$ such that also $\psi \subseteq Q_x^{+,n}$ for $n> n_0$. For such $n$, we have $$\begin{aligned} p^n(x, \psi) := \kappa(x,\psi) \frac{\mathbb E(e^{-J(\mu \cap {Q_x^n},\psi + \delta_x)})}{\mathbb E(e^{-J(\mu\cap {Q_x^n},\psi )})} \to \kappa(x,\psi) \frac{\mathbb E(e^{-J(\mu\cap {Q_x},\psi + \delta_x)})}{\mathbb E(e^{-J(\mu\cap {Q_x},\psi )})}= p(x,\psi) \end{aligned}$$ as $n \to \infty$. Indeed, looking at the denominator first, we have $$\begin{aligned} \mathbb E\left(e^{-J(\mu\cap {Q_x^n},\psi )}\right) = {}&\mathbb E\left(e^{-J(\mu \cap {Q_x},\psi )}\right) + \mathbb E\left(e^{-J(\mu \cap {Q_x^n },\psi )}1_{\mu \cap (Q_x \Delta Q_x^{n}) \neq \emptyset}\right) \\ &- \mathbb E\left(e^{-J(\mu\cap {Q_x},\psi )}1_{\mu\cap (Q_x \Delta Q_x^n)\neq \emptyset}\right) .\end{aligned}$$ The middle term goes to zero by an application of the Cauchy-Schwarz inequality, since the stability assumption ensures that $\mathbb E(e^{-2J(\mu \cap {Q_x^n},\psi )})<\infty$ and from [\[eq:symdif\]](#eq:symdif){reference-type="eqref" reference="eq:symdif"} we have that $P(\mu\cap (Q_x \Delta Q_x^{n})\neq \emptyset) = 1-e^{-\lambda(Q_x \Delta Q_x^{n})}$, which goes to zero when $n\to \infty$. The last term goes to zero by the same reasoning. The numerator is treated similarly. We now show (i). For a.a. $\omega$, $\iota$ and $\iota^{n}$ will define the same ordering of $\mathcal P^d(\omega)\cap Q$ whenever $n>N(\omega)$ according to [\[eq:symdif2\]](#eq:symdif2){reference-type="eqref" reference="eq:symdif2"}. Moreover, since the thinning probabilities converge pointwise, for a.a. $\omega$ there is an $N'(\omega)\geq N(\omega)$ such that the thinned process does not change whenever $n \geq N'(\omega)$. We define $\mathcal{X}^\infty(Q,\psi)$ to be this limiting point process and note that it is a.s. the same as the process one would have obtained by using the ordering $\iota$ for the thinning, which also shows (ii). It remains to show (iii), that $\mathcal{X}^\infty(Q,\psi)$ has the correct Gibbs distribution. For this, let $f: \mathbf N_Q \to \mathbb R$ be a bounded measurable function. Then $$\begin{aligned} &|\mathbb E( f( \mathcal{X}^n (Q,\psi))) - \mathbb E(f( \mathcal{X}^\infty (Q,\psi))) | \leq 2|f|_{\infty}P( \mathcal{X}^n (Q,\psi) \neq \mathcal{X}^\infty (Q,\psi)). \end{aligned}$$ The right-hand side goes to $0$, and $\mathbb E( f( \mathcal{X}^n (Q,\psi)))$ does not depend on $n$ since all $\mathcal{X}^n(Q,\psi)$ have the same distribution as $\mathcal{X}(Q,\psi)$; it follows that $\mathcal{X}^\infty(Q,\psi)$ must have the same distribution as well. ◻
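To make the dyadic construction in the proof concrete, the following Python sketch (illustrative only, not part of the argument) orders a simulated Poisson configuration on $Q=[0,1]^2$ both by $\iota$ (rescaled distance to the origin) and by the approximations $\iota^n$. The map `phi` is merely an almost-sure tie-breaker for the sampled points and stands in for the bimeasurable map $\phi$. Once the dyadic cells are fine enough to separate the (almost surely distinct) values of $\iota$ on the configuration, the two orderings coincide, in line with the argument for (i).

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy configuration: points of a Poisson process on Q = [0, 1]^2.
pts = rng.uniform(0.0, 1.0, size=(rng.poisson(50), 2))

def iota(x):
    """Ordering map: distance to the origin, rescaled so that iota(Q) lies in [0, 1]."""
    return np.linalg.norm(x) / np.sqrt(2.0)

def phi(x):
    """Stand-in tie-breaker phi: Q -> [0, 1); it separates the sampled points
    almost surely, which is all this illustration needs."""
    return (x[0] + np.sqrt(2.0) * x[1]) % 1.0

def iota_n(x, n):
    """Dyadic approximation: m + phi(x), where m indexes the cell
    (m 2^{-n+1} - 2^{-n}, m 2^{-n+1} + 2^{-n}] containing iota(x)."""
    m = int(np.ceil((iota(x) + 2.0 ** (-n)) / 2.0 ** (-n + 1))) - 1
    return m + phi(x)

order_iota = np.argsort([iota(p) for p in pts])
for n in (1, 4, 8, 16):
    order_n = np.argsort([iota_n(p, n) for p in pts])
    # The orderings agree once the cells separate the iota-values of the points.
    print(n, np.array_equal(order_iota, order_n))
```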
--- abstract: | This paper aims to examine the characteristics of the posterior distribution of covariance/precision matrices in a "large $p$, large $n$\" scenario, where $p$ represents the number of variables and $n$ is the sample size. Our analysis focuses on establishing asymptotic normality of the posterior distribution of the entire covariance/precision matrices under specific growth restrictions on $p_n$ and other mild assumptions. In particular, the limiting distribution turns out to be a symmetric matrix variate normal distribution whose parameters depend on the maximum likelihood estimate. Our results hold for a wide class of prior distributions which includes standard choices used by practitioners. Next, we consider Gaussian graphical models which induce sparsity in the precision matrix. Asymptotic normality of the corresponding posterior distribution is established under mild assumptions on the prior and true data-generating mechanism. bibliography: - bibfile.bib title: High-Dimensional Bernstein Von-Mises Theorems for Covariance and Precision Matrices --- # Introduction {#sec1} The advent and proliferation of high-dimensional data and associated Bayesian statistical methods in recent years have generated significant interest in establishing high-dimensional asymptotic guarantees for such methods. The Bernstein von-Mises (BvM) theorem [@bernstein; @cam2000asymptotics; @vaart; @vonMises] is a key result that can provide justification for Bayesian methods from a frequentist point of view. The BvM approach assumes a frequentist data-generating model and defines criteria for the prior that result in the posterior becoming asymptotically Gaussian as the number of observations $n$ increases. The primary use of the BvM method is to justify the construction of Bayesian credible sets as a Bayesian counterpart of the frequentist confidence region. It is useful in cases where uncertainty quantification through frequentist methods is not feasible due to the presence of unknown parameters in the asymptotic distribution, making it challenging to construct frequentist confidence regions directly. Although there is extensive literature establishing Bernstein von-Mises theorems in settings where the number of parameters $p$ stays fixed as $n$ increases, analogous results for high-dimensional settings where $p = p_n$ can grow with sample size $n$ are comparatively sparse. In the context of linear models, BvM results were established by [@Dominique; @castillo1; @ghosal1], while [@Boucheron; @clarke; @ghosal3; @ghosal2] studied it for high-dimensional exponential models, subject to certain conditions on the growth rate of the dimension. Spokoiny [@spokoiny] explored similar ideas in a wider "general likelihood setup\". Panov and Spokoiny [@panov] explored BvM results in a semiparametric framework with finite sample bounds for distance from normality since modern statisticians are increasingly focused on models with limited sample sizes. See also [@kelijn; @castillo2; @rivoirard] for additional results in this context. Our focus in this paper is Bayesian methods for high-dimensional covariance estimation. In particular, suppose we have $n$ independent and identically distributed samples $\mbox{\boldmath$Y$}^{n}=(Y_1,\dots,Y_n)$ drawn from a $p$-variate normal distribution with covariance matrix $\mbox{\boldmath$\Sigma$}$. 
We first consider the "unstructured\" estimation of $\mbox{\boldmath$\Sigma$}$, i.e., no dimension-reducing structure, such as sparsity or low-rank, is imposed on $\mbox{\boldmath$\Sigma$}$. In this setting, Gao and Zhou [@chaogao] studied high-dimensional BvM results for one-dimensional functionals of the covariance matrix, such as matrix entries and eigenvalues. Silin [@silin] derived finite-sample bounds for the total variation distance between the posterior distributions of $\mbox{\boldmath$\Sigma$}$ obtained by employing an Inverse-Wishart (IW) prior and a flat prior. Moreover, he investigated Bernstein-von Mises theorems for one-dimensional functionals and spectral projectors of the covariance matrix. However, when it comes to simultaneously inferring various functionals of the covariance matrix (such as multiple entries of the covariance matrix, its inverse, and multiple eigenvalues), the above results are not applicable even with a very basic conjugate family of IW priors. Although a Bonferroni inequality-based approach could potentially be utilized, it often results in inefficient and loose bounds, particularly in high-dimensional settings. The key goal of this paper is to provide a high-dimensional Bernstein-von Mises theorem for the entire covariance matrix $\mbox{\boldmath$\Sigma$}$ (or the precision matrix $\mbox{\boldmath$\Omega$}$) for a general enough class of priors. We show that as long as the prior distribution satisfies the flatness condition around the sample covariance matrix $\mbox{\boldmath$S$}$ (see equation [\[flat\]](#flat){reference-type="ref" reference="flat"}), the total variation norm between the posterior distribution of $\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$ (or $\sqrt{n}(\mbox{\boldmath$\Omega$}-\mbox{\boldmath$S$}^{-1})$) and a suitable mean zero symmetric matrix variate normal distribution tends to zero under standard regularity assumptions (Theorems [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} and [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"}). We show that a large class of prior distributions for $\mbox{\boldmath$\Sigma$}$ (or $\mbox{\boldmath$\Omega$}$) satisfies this flatness condition around $\mbox{\boldmath$S$}$ (Lemma [Lemma 4](#postcont1){reference-type="ref" reference="postcont1"}, [Lemma 5](#postcont2){reference-type="ref" reference="postcont2"} and [Lemma 6](#postcont3){reference-type="ref" reference="postcont3"}). This includes the standard conjugate IW prior and several scale mixtures of the IW prior proposed in [@gelman2006prior; @gelman_book; @gelman2006data; @huang; @mulder; @zava]. These mixtures have been shown to offer more effective noninformative choices. In fact, we are able to show that the flatness condition around $\mbox{\boldmath$S$}$ is satisfied by a significantly generalized version of the mixture priors proposed in the above literature. Establishing BvM results for the entire covariance matrix poses a significant challenge, especially in high-dimensional settings. The primary issue arises from the fact that an unrestricted $(p\times p)$ covariance/precision matrix involves a large number of free parameters, which scales with $O(p^2)$. Consequently, as the dimension increases and $p_n$ grows with $n$, the number of parameters escalates rapidly.
Furthermore, as discussed in [@ghosal1], when $p_n$ grows with the sample size $n$, there can be a tail region where the posterior probability is significant, even if the likelihood is small in that region. Despite these challenges, we establish BvM results for the entire covariance matrix $\mbox{\boldmath$\Sigma$}$ where $p(=p_n)$ can increase with $n$ but is subject to the condition that $p_n^{5}=o(n)$. This seemingly stringent requirement is not due to any imprecise bounds in the proof and is somewhat expected given related results under simpler settings in the literature. Silin [@silin] requires exactly the same condition to establish the asymptotic equivalence (in TV norm) of posterior distributions using an IW prior and a flat prior. In the simpler context of BvM results for high-dimensional regression, the condition $p_n^4(\log(p_n))=o(n)$ is required in [@ghosal1]. To establish BvM results for several *one-dimensional* functionals of $\mbox{\boldmath$\Sigma$}$, the authors in [@chaogao] need the condition $p_n^4=o(n)$. Recall that the above discussion is based entirely on the unstructured setting, where no structure is imposed on the covariance matrix to reduce its dimensionality. A standard and popular approach for a high-dimensional covariance matrix is to impose sparsity in the precision matrix. These models are referred to as Gaussian graphical models or concentration graphical models (see [@lauritzen]). A specific sparsity pattern in $\mbox{\boldmath$\Omega$}$ can be conveniently represented by a graph $G$ involving the set of $p$ variables. The $G$-Wishart distribution, as introduced by Roverato [@roverato], offers a conjugate family of priors for the concentration graphical model corresponding to a given graph $G$. Decomposable graphs, which have received considerable attention in the Bayesian literature on concentration graph models (see [@dawid; @letac; @rajaratnam; @roverato]), form a notable subfamily within this framework. Extensive literature exists that demonstrates the posterior consistency of the precision matrix for such models (see [@banerjee1; @liu2019empirical; @xiang]). In Sections [7](#sec8){reference-type="ref" reference="sec8"}-[8](#sec9){reference-type="ref" reference="sec9"}, employing a sparsity setup similar to [@xiang] and under mild regularity conditions on the prior distribution, we establish BvM results for the precision matrix when the imposed sparsity pattern corresponds to an approximately decomposable graph (Theorem [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"}). If the maximum vertex degree of this graph is assumed to be bounded (e.g., see [@banerjee1]), then the condition $p_n^5=o(n)$ that we needed for the unstructured setting is significantly weakened to $p_n^2(\log(p_n))^3=o(n)$. Section [9](#sec10){reference-type="ref" reference="sec10"} of the paper aims to demonstrate that there is nothing special about using total variation norms for BvM results. Other distance measures, such as the Bhattacharyya-Hellinger distance [@bhattacharyya1946measure; @Hellinger] or Rényi's $\alpha$-divergence [@renyi], can also be employed to draw similar conclusions. The remainder of the paper is structured as follows. After setting up the basic notation in the next subsection, we present basic definitions and preliminaries in Section [2](#sec2){reference-type="ref" reference="sec2"}. Section [3](#sec3){reference-type="ref" reference="sec3"} is dedicated to discussing various prior distributions.
The BvM results for the unstructured setting are provided in Section [4](#sec4){reference-type="ref" reference="sec4"}, and proofs of these results are given in Section [5](#sec5){reference-type="ref" reference="sec5"}. Preliminaries related to concentration graphical models are provided in Section [6](#sec7){reference-type="ref" reference="sec7"}. The sparsity-based model for the precision matrix and the corresponding prior distributions are formulated in Section [7](#sec8){reference-type="ref" reference="sec8"}. The BvM result for this setting is provided in Section [8](#sec9){reference-type="ref" reference="sec9"}. Section [9](#sec10){reference-type="ref" reference="sec10"} delves into the equivalence of different norms in the context of convergence. Proofs of certain theorems and technical lemmas are provided in the Appendix. Finally, we conclude the paper with a summary of our findings and concluding remarks in Section [10](#sec11){reference-type="ref" reference="sec11"}. ## Notation {#1.1} Let us introduce some notation and definitions. For positive sequences $a_n$ and $b_n$, we denote $a_n = O(b_n)$ if there exists a constant $C$ such that $a_n \leq C b_n$ for all $n \in \mathbb{N}$, and $a_n = \Omega(b_n)$ if there exists a constant $C$ such that $a_n \geq C b_n$ for all $n \in \mathbb{N}$. We use $a_n = o(b_n)$ to denote the limit $\lim_{n \to \infty} \frac{a_n}{b_n} = 0$. Given a metric space $(X, d),\;\mathcal{N}(X,\epsilon)$ represents the $\epsilon$-covering number, which is the minimum number of balls of radius $\epsilon$ needed to cover $X$. In the following notation, $\mbox{\boldmath$I$}_p$ represents the identity matrix of order $p$, and $\mbox{\boldmath$O$}$ represents a matrix of size $a \times b$ with all zero entries. If $\mbox{\boldmath$A$}$ is a symmetric square matrix, then $\lambda_{\min}(\mbox{\boldmath$A$})$ and $\lambda_{\max}(\mbox{\boldmath$A$})$ denote the smallest and largest eigenvalues of $\mbox{\boldmath$A$}$, respectively. The tensor or Kronecker product between two arbitrary matrices $\mbox{\boldmath$A$}$ and $\mbox{\boldmath$B$}$ is denoted by $\mbox{\boldmath$A$}\otimes\mbox{\boldmath$B$}$. Consider the set $M_p$, which comprises all symmetric matrices of size $p \times p$, and a subclass of $M_p$, $\mathbb{P}_p^+$, representing the collection of symmetric positive definite matrices of size $p \times p$. The unit Euclidean sphere in $\mathbb{R}^p$ is denoted by $\mathcal{S}^{p-1}$. For a vector $x \in \mathbb{R}^p$, we denote its $r$-th norm by $\left\lVert x\right\rVert_r = \left(\sum_{j=1}^p \lvert x_j\rvert^r\right)^{1/r}$. $\left\lVert x\right\rVert_2$ denotes the Euclidean norm. For a $p\times p$ matrix $\mbox{\boldmath$A$}=(A_{ij})_{1\leq i,j\leq p}$, we denote $$\left\lVert\mbox{\boldmath$A$}\right\rVert_{\max}=\underset{1\leq i,j\leq p}{\max}\lvert A_{ij}\rvert,$$ $$\left\lVert\mbox{\boldmath$A$}\right\rVert_{r,s}=\sup\left\{\left\lVert\mbox{\boldmath$A$}x\right\rVert_{s}:\left\lVert x\right\rVert_{r}=1\right\},$$ where $1\leq r,s\leq \infty$. In particular, we have $\left\lVert\mbox{\boldmath$A$}\right\rVert_{\infty,\infty}=\underset{i}{\max}\sum_j\lvert A_{ij}\rvert$, and the spectral norm of a matrix is defined as $$\left\lVert\mbox{\boldmath$A$}\right\rVert_2:=\sup_{u\in\mathcal{S}^{p-1}}\left\lVert\mbox{\boldmath$A$}u\right\rVert_2(=\left\lVert\mbox{\boldmath$A$}\right\rVert_{2,2}).$$ We define the vectorization of $\mbox{\boldmath$A$}$ as $\mbox{vec}(\mbox{\boldmath$A$}) = (A_{11},\dots,A_{p1},A_{12},\dots,A_{pp})^T$.
If $\mbox{\boldmath$A$}$ is a symmetric matrix, there will be repeated elements in $\mbox{vec}(\mbox{\boldmath$A$})$. For a $p \times p$ symmetric matrix $\mbox{\boldmath$A$}$, $\mbox{vech}(\mbox{\boldmath$A$})$ is a column vector of dimension $\frac{1}{2}p(p+1)$ formed by taking the elements below and including the diagonal, column-wise. In other words, $\mbox{vech}(\mbox{\boldmath$A$}) = (A_{11},A_{21},\cdots,A_{p1},A_{22},\cdots,A_{p2},\cdots,A_{pp})^T$. For a symmetric matrix $\mbox{\boldmath$A$}$, we can establish the connection between $\mbox{vec}(\mbox{\boldmath$A$})$ and $\mbox{vech}(\mbox{\boldmath$A$})$ using an elimination matrix $\mbox{\boldmath$B$}_p^T$, expressed as $\mbox{vech}(\mbox{\boldmath$A$}) = \mbox{\boldmath$B$}_p^T \mbox{vec}(\mbox{\boldmath$A$})$. Although such an elimination matrix is not unique, we can construct a $\frac{1}{2}p(p+1) \times p^2$ elimination matrix $\mbox{\boldmath$B$}_p^T$ in the following systematic manner, as described in [@magnus]: $$\mbox{\boldmath$B$}_p^T=\underset{1\leq j \leq i \leq p}{\sum}\left(u_{ij}\otimes e_j^T\otimes e_i^T\right),$$ where $e_{i}$ is a unit vector whose $i$-th element is one and all other elements are zero, and $u_{ij}$ is a unit vector of order $\frac{1}{2}p(p+1)$ having the value $1$ in the position $(j-1)p+i-{\frac {1}{2}}j(j-1)$ and $0$ elsewhere. Let $f$ and $g$ be two densities, each absolutely continuous with respect to some $\sigma$-finite measure $\mu$. Also, let $P(A) =\int_A f d\mu$ and $Q(A) =\int_A g d\mu$. Then the total variation (TV) norm between two distributions $P$ and $Q$ (or two densities $f$ and $g$) is defined as $$TV(f,g)=\underset{A}{\sup}\;\lvert P(A)-Q(A)\rvert=\frac{1}{2}\int\lvert f-g\rvert\;d\mu.$$

# Preliminaries and Model Formulation {#sec2}

We consider an independent and identically distributed sample of size $n$, $\mbox{\boldmath$Y$}^n=(Y_1,\dots,Y_n)$, drawn from the $N_p(0,\;\mbox{\boldmath$\Sigma$}=\mbox{\boldmath$\Omega$}^{-1})$ distribution. In the case of estimating $\mbox{\boldmath$\Sigma$}$, the moment-based or maximum likelihood estimator is $\mbox{\boldmath$S$}=\frac{1}{n}\sum_{i=1}^{n}Y_{i}Y_{i}^T$, whereas for $\mbox{\boldmath$\Omega$}$ it is $\mbox{\boldmath$S$}^{-1}$. Within this Gaussian framework, we can express the log-likelihood function of $\mbox{\boldmath$\Sigma$}$, denoted as $l_{1n}(\mbox{\boldmath$\Sigma$})$, as follows $$\begin{aligned} \label{likelihood1} l_{1n}(\mbox{\boldmath$\Sigma$})=-\frac{np}{2}\log(2\pi)-\frac{n}{2}\log(\det(\mbox{\boldmath$\Sigma$}))-\frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Sigma$}^{-1}\mbox{\boldmath$S$}).\end{aligned}$$ Similarly, we can write the log-likelihood function of $\mbox{\boldmath$\Omega$}$, denoted as $l_{2n}(\mbox{\boldmath$\Omega$})$, as $$\begin{aligned} \label{likelihood2} l_{2n}(\mbox{\boldmath$\Omega$})=-\frac{np}{2}\log(2\pi)+\frac{n}{2}\log(\det(\mbox{\boldmath$\Omega$}))-\frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$S$}).\end{aligned}$$ In a Bayesian framework, a prior $\Pi_{1n}(\cdot)$ is assigned to the covariance matrix $\mbox{\boldmath$\Sigma$}$. The induced prior on the precision matrix $\mbox{\boldmath$\Omega$}$ is denoted as $\Pi_{2n}(\cdot)$. Let $\pi_{1n}(\cdot)$ and $\pi_{2n}(\cdot)$ represent the corresponding prior densities. We will consider an asymptotic framework where $p$ will be allowed to grow with the sample size $n$. This is why the dependence of the priors on $n$ is highlighted in the above notation.
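Returning briefly to the notation of Section [1.1](#1.1){reference-type="ref" reference="1.1"}, the following minimal NumPy sketch builds the elimination matrix $\mbox{\boldmath$B$}_p^T$ directly from the index formula given there and numerically checks the identity $\mbox{vech}(\mbox{\boldmath$A$}) = \mbox{\boldmath$B$}_p^T \mbox{vec}(\mbox{\boldmath$A$})$ on a random symmetric matrix. The helper name and the small test case are ours and purely illustrative.

```python
import numpy as np

def elimination_matrix_T(p):
    """Return the p(p+1)/2 x p^2 elimination matrix B_p^T of Section 1.1,
    i.e. the 0/1 matrix with vech(A) = B_p^T vec(A) for symmetric A."""
    m = p * (p + 1) // 2
    B_T = np.zeros((m, p * p))
    for j in range(1, p + 1):            # columns of A (1-indexed)
        for i in range(j, p + 1):        # rows on or below the diagonal
            row = (j - 1) * p + i - j * (j - 1) // 2 - 1   # position of A_ij in vech
            col = (j - 1) * p + i - 1                      # position of A_ij in vec (column-major)
            B_T[row, col] = 1.0
    return B_T

p = 4
A = np.random.randn(p, p)
A = (A + A.T) / 2                                          # symmetrize
vec_A = A.flatten(order="F")                               # column-major vectorization
vech_A = np.array([A[i, j] for j in range(p) for i in range(j, p)])
assert np.allclose(elimination_matrix_T(p) @ vec_A, vech_A)
```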
For the sake of notational simplicity, we will sometimes refer to $\pi_{1n}(\cdot)$ and $\pi_{2n}(\cdot)$ as $\pi_{1}(\cdot)$ and $\pi_{2}(\cdot)$, respectively. Now, after centering $\mbox{\boldmath$\Sigma$}$ by $\mbox{\boldmath$T$}_1=\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$ or $\mbox{\boldmath$\Omega$}$ by $\mbox{\boldmath$T$}_2=\sqrt{n}(\mbox{\boldmath$\Omega$}-\mbox{\boldmath$S$}^{-1})$, we define the following functions $$\begin{aligned} &M_{1n}(\mbox{\boldmath$T$}_1)=\exp{\left(l_{1n}\left(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)-l_{1n}\left(\mbox{\boldmath$S$}\right)\right)},\;\text{and}\label{m1}\\ &M_{2n}\left(\mbox{\boldmath$T$}_2\right)=\exp{\left(l_{2n}\left(\mbox{\boldmath$S$}^{-1}+\frac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\right)-l_{2n}\left(\mbox{\boldmath$S$}^{-1}\right)\right)}\label{m2},\end{aligned}$$ where $\mbox{\boldmath$T$}_1 \in G_{1n}$ and $\mbox{\boldmath$T$}_2 \in G_{2n}$, with $G_{1n}=\{\mbox{\boldmath$T$}_1:\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\in\mathbb{P}_p^+\}$ and $G_{2n}=\{\mbox{\boldmath$T$}_2:\mbox{\boldmath$S$}^{-1}+\dfrac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\in\mathbb{P}_p^+\}$. If $\mbox{\boldmath$T$}_1$ or $\mbox{\boldmath$T$}_2$ falls outside $G_{1n}$ or $G_{2n}$, we set $M_{1n}(\mbox{\boldmath$T$}_1)$ or $M_{2n}(\mbox{\boldmath$T$}_2)$ equal to zero. Suppose the posterior distributions of $\mbox{\boldmath$T$}_1$ and $\mbox{\boldmath$T$}_2$ are given by $\Pi_{1n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ and $\Pi_{2n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ respectively. Analogously, let $\pi_{1n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ and $\pi_{2n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ be the corresponding posterior densities. Then it is not difficult to check that $$\begin{aligned} &\pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)=\frac{M_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)}{\bigintsss_{G_{1n}} M_{1n}(\mbox{\boldmath$W$})\pi_{1}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$W$}}{\sqrt{n}}\right)d\mbox{\boldmath$W$}}\;, \text{and}\label{post1}\\ &\pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)=\frac{M_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}+\dfrac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\right)}{\bigintsss_{G_{2n}} M_{2n}(\mbox{\boldmath$W$})\pi_{2}\left(\mbox{\boldmath$S$}^{-1}+\dfrac{\mbox{\boldmath$W$}}{\sqrt{n}}\right)d\mbox{\boldmath$W$}}.\label{post2}\end{aligned}$$ Before proceeding with further discussion, let us revisit two important definitions from the literature. *Definition 1*. **(Symmetric Matrix-normal distribution)**[\[def1\]]{#def1 label="def1"} Let $\mbox{\boldmath$X$}$ be a $p\times p$ symmetric random matrix, let $\mbox{\boldmath$M$}$ be a $p\times p$ constant symmetric matrix, and let $\mbox{\boldmath$\Psi$}_1$ and $\mbox{\boldmath$\Psi$}_2$ be constant $p\times p$ positive definite symmetric matrices such that $\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$\Psi$}_2=\mbox{\boldmath$\Psi$}_2\mbox{\boldmath$\Psi$}_1$. Then $\mbox{\boldmath$X$}(=\mbox{\boldmath$X$}^T)$ is said to have a symmetric matrix-normal distribution, denoted by $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$M$},\;\mbox{\boldmath$B$}_p^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$B$}_p)$, if and only if $\mbox{vech}(\mbox{\boldmath$X$})\sim\mathcal{N}_{p(p+1)/2}(\mbox{vech}(\mbox{\boldmath$M$}),\; \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$B$}_p)$.
The probability density function $f(\cdot)$ of a $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$M$},\;\mbox{\boldmath$B$}_p^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$B$}_p)$ distribution can be expressed as follows $$f(X)=\frac{\exp\{-\mathop{\mathrm{tr}}(\mbox{\boldmath$\Psi$}_1^{-1}(X-\mbox{\boldmath$M$})\mbox{\boldmath$\Psi$}_2^{-1}(X-\mbox{\boldmath$M$}))/2\}}{(2\pi)^{p(p+1)/4}\det(\mbox{\boldmath$B$}_p^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$B$}_p)^{1/2}},\;X \in M_p.$$ If $\mbox{\boldmath$X$}\sim\mathcal{SMN}_{p \times p}(\mbox{\boldmath$M$},\;\mbox{\boldmath$B$}_p^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$B$}_p)$ and if $\mbox{\boldmath$A$}$ is a $q \times p$ matrix of rank $q(\leq p)$, then $\mbox{\boldmath$A$}\mbox{\boldmath$X$}\mbox{\boldmath$A$}^T\sim\mathcal{SMN}_{q \times q}(\mbox{\boldmath$A$}\mbox{\boldmath$M$}\mbox{\boldmath$A$}^T,\;\mbox{\boldmath$B$}_{q}^T((\mbox{\boldmath$A$}\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$A$}^T)\otimes(\mbox{\boldmath$A$}\mbox{\boldmath$\Psi$}_2\mbox{\boldmath$A$}^T))\mbox{\boldmath$B$}_{q})$. For more properties of the symmetric matrix variate normal distribution, see Gupta and Nagar [@gupta]. *Definition 2*. **(Sub-Gaussian random variable)**[\[def2\]]{#def2 label="def2"} A mean zero random variable $X$ that satisfies $E[\exp(tX)]\leq \exp(t^2k_1^2)$ for all $t \in \mathbb{R}$ and a constant $k_1$ is called a sub-Gaussian random variable. If $X$ is a sub-Gaussian random variable, then it satisfies $\left( E(\lvert X\rvert^p)\right)^{1/p}\leq k_2\sqrt{p}$ for a constant $k_2$, and one can define its sub-Gaussian norm as $\left\lVert X\right\rVert_{\psi_2}\coloneqq \sup_{p\geq1}\;p^{-1/2}\left( E(\lvert X \rvert^p)\right)^{1/p}$. A mean zero random vector $X\in \mathbb{R}^q$ is said to be sub-Gaussian if for any $u\in \mathcal{S}^{q-1}$, the random variable $u^{T}X$ is sub-Gaussian. The sub-Gaussian norm of a random vector $X$ is defined as $$\begin{aligned} \left\lVert X\right\rVert_{\psi_2}\coloneqq \underset{u\in \mathcal{S}^{q-1}}{\sup}\;\left\lVert u^{T}X\right\rVert_{\psi_2}.\notag\end{aligned}$$ See Vershynin [@vershynin] for more details. We now specify the *true data-generating mechanism*. As mentioned earlier, we denote the dimension of the responses by $p_n$ to highlight that it can grow with the sample size $n$, making our results applicable to high-dimensional scenarios. We assume that the observations $Y_1,\dots, Y_n$ are independent and identically distributed copies of a sub-Gaussian random vector with zero mean, where the covariance matrix of $Y_1$ is denoted by $\mbox{\boldmath$\Sigma$}_{0n}$ (or $\mbox{\boldmath$\Omega$}_{0n}^{-1}$). Thus, the sequence of true covariance (or precision) matrices is represented as $\{\mbox{\boldmath$\Sigma$}_{0n}\}_{n\geq1}$ (or $\{\mbox{\boldmath$\Omega$}_{0n}\}_{n\geq1}$). For convenience, we denote $\mbox{\boldmath$\Sigma$}_{0n}^{p_n\times p_n}$ as $\mbox{\boldmath$\Sigma$}_{0}$ and $\mbox{\boldmath$\Omega$}_{0n}^{p_n\times p_n}$ as $\mbox{\boldmath$\Omega$}_{0}$, specifically highlighting that $\mbox{\boldmath$\Sigma$}_{0}$ or $\mbox{\boldmath$\Omega$}_{0}$ depends on $p_n$ (and therefore on $n$). Let $\mathbb{P}_{0n}$ denote the probability measure underlying the true model described above. To simplify notation, we will use $\mathbb{P}_{0}$ instead of $\mathbb{P}_{0n}$.
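Definition [Definition 1](#def1){reference-type="ref" reference="def1"} also suggests a direct way to simulate a symmetric matrix-normal matrix: draw $\mbox{vech}(\mbox{\boldmath$X$})$ from the stated multivariate normal and fold it back into a symmetric matrix. The sketch below is a minimal illustration of this, assuming $\mbox{\boldmath$\Psi$}_1=\mbox{\boldmath$\Psi$}_2$ for simplicity (so that the commutativity requirement holds trivially); the helper names and parameter choices are ours and not part of the development above.

```python
import numpy as np

def elimination_matrix_T(p):
    # 0/1 matrix with vech(A) = B_p^T vec(A) for symmetric A (column-major vec)
    B_T = np.zeros((p * (p + 1) // 2, p * p))
    for j in range(1, p + 1):
        for i in range(j, p + 1):
            B_T[(j - 1) * p + i - j * (j - 1) // 2 - 1, (j - 1) * p + i - 1] = 1.0
    return B_T

def sample_smn(M, Psi1, Psi2, rng):
    """One draw from SMN_{p x p}(M, B_p^T (Psi1 kron Psi2) B_p): sample vech(X) from
    the p(p+1)/2-dimensional normal of Definition 1, then fold it back symmetrically."""
    p = M.shape[0]
    B_T = elimination_matrix_T(p)
    cov = B_T @ np.kron(Psi1, Psi2) @ B_T.T
    idx = [(i, j) for j in range(p) for i in range(j, p)]     # vech ordering
    mean = np.array([M[i, j] for (i, j) in idx])
    vech_x = rng.multivariate_normal(mean, cov)
    X = np.zeros((p, p))
    for val, (i, j) in zip(vech_x, idx):
        X[i, j] = X[j, i] = val
    return X

rng = np.random.default_rng(0)
p = 3
G = rng.standard_normal((p, p))
Psi = G @ G.T + np.eye(p)                       # an arbitrary positive definite matrix
X = sample_smn(np.zeros((p, p)), Psi, Psi, rng)  # Psi1 = Psi2 = Psi
```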
With all this notation in hand, we now define a notion of *posterior consistency* for $\mbox{\boldmath$\Sigma$}$ as follows. *Definition 3*. The sequence of marginal posterior distributions of $\mbox{\boldmath$\Sigma$}$ given by $\{\Pi_n(\mbox{\boldmath$\Sigma$}\mid\mbox{\boldmath$Y_n$})\}_{n\geq1}$ is said to be consistent at $\mbox{\boldmath$\Sigma$}_{0}$, if for every $\delta>0$, $$\Pi_n(\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2 > \delta \mid \mbox{\boldmath$Y_n$}) \overset{P}{\to} 0$$ as $n \to \infty$, under $\mathbb{P}_{0}$. Additionally, if $\Pi_n(\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2 > \epsilon_n \mid \mbox{\boldmath$Y_n$}) \overset{P}{\to} 0$ as $n \rightarrow \infty$ under $\mathbb{P}_{0}$ for some sequence $\epsilon_n \rightarrow 0$, then we refer to $\epsilon_n$ as the contraction rate of $\{\Pi_n(\mbox{\boldmath$\Sigma$}\mid\mbox{\boldmath$Y_n$})\}_{n\geq1}$ around $\mbox{\boldmath$\Sigma$}_0$. Likewise, posterior consistency and contraction rate for $\mbox{\boldmath$\Omega$}$ can also be defined. The relationship between the concept of posterior consistency and the BvM theorem may not be immediately apparent at this stage. However, in various contexts, posterior consistency is often a crucial requirement for proving BvM results (refer to [@chaogao; @ghosal1] for specific instances). Further discussion can be found in Section [4](#sec4){reference-type="ref" reference="sec4"}. With the notion of posterior consistency in hand, and similar to [@silin], we define the concept of *flatness of a prior around the sample covariance matrix $S$*. For the sequence of priors $\{\Pi_{1n}(\cdot)\}_{n\geq1}$, let us define $\rho^{\pi_{1}}(\epsilon_n)$ as follows $$\begin{aligned} \label{flat} \rho^{\pi_{1}}(\epsilon_n):= \underset{\mathbf{T_1}\in D(\epsilon_n)}{\sup}\;\left\lvert\frac{\pi_{1}(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}(\mbox{\boldmath$S$})}-1\right\rvert,\end{aligned}$$ where $D(\epsilon_n)=\{\mbox{\boldmath$T$}_1\in G_{1n}\mid\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 \leq \sqrt{n}\epsilon_n \}$. Note that $\mbox{\boldmath$T$}_1\in D(\epsilon_n)$ if and only if $\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$}\right\rVert_2 \leq \epsilon_n$, where $\epsilon_n$ is the posterior contraction rate. A prior distribution with density $\pi_{1}(\cdot)$ is considered flat around $\mbox{\boldmath$S$}$ if $\rho^{\pi_{1}}(\epsilon_n)$ tends to $0$ in probability as $n$ approaches infinity. Similarly, one can define the flatness of the prior around the inverse sample covariance matrix $\mbox{\boldmath$S$}^{-1}$ for the sequence of induced prior distributions $\{\Pi_{2n}(\cdot)\}_{n\geq1}$ on the precision matrix, using the posterior contraction rate of $\mbox{\boldmath$\Omega$}$. Note that Definition [Definition 3](#def3){reference-type="ref" reference="def3"} considers contraction around the true value $\mbox{\boldmath$\Sigma$}_{0}$; hence this notion of flatness will be useful only when $\mbox{\boldmath$S$}$ also contracts around $\mbox{\boldmath$\Sigma$}_{0}$ at the same rate $\epsilon_n$. Fortunately, this holds for all classes of prior distributions considered in the next section (see the discussion after Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}). With the above notations and notions in hand, we can now state our goal more formally.
We aim to show that for large $n$ the posterior distribution of $\mbox{\boldmath$T$}_1$ (or $\mbox{\boldmath$T$}_2$) can be well approximated by an appropriate zero mean symmetric matrix variate normal distribution. In other words, we want to show that the total variation norm between $\Pi_{1n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ (or $\Pi_{2n}(\cdot\mid\mbox{\boldmath$Y$}^n)$) and an appropriate zero mean symmetric matrix variate normal distribution will converge in probability to $0$ under $\mathbb{P}_{0}$. # Prior Distributions {#sec3} Although our main results (Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} and [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"}) hold true for a broad range of prior distributions, it is important to provide specific examples of prior distributions within that class for practical implementation purposes. In this section, we will define some standard and popular prior distributions available for an unstructured covariance or precision matrix. We will show in Section [4](#sec4){reference-type="ref" reference="sec4"} that all these prior distributions will satisfy our desired criteria under some mild assumptions. ## The Inverse Wishart Prior The natural conjugate prior for a covariance matrix is the Inverse Wishart (IW) prior. We say, $\mbox{\boldmath$\Sigma$}\sim IW(\nu+p-1,\;\mbox{\boldmath$\Psi$}_1)$ if the probability density function of $\mbox{\boldmath$\Sigma$}$ is given by, $$\begin{aligned} \label{prior1s} \pi_{1}^{IW}(\mbox{\boldmath$\Sigma$})\propto \det(\mbox{\boldmath$\Sigma$})^{-(\nu+2p)/2}\exp{(-\mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$\Sigma$}^{-1}\right)/2)},\end{aligned}$$ where $\nu$ and $\mbox{\boldmath$\Psi$}_1$ are user-specified hyperparameters. It is easy to check the corresponding induced class of the prior distributions on $\mbox{\boldmath$\Omega$}$ will be the Wishart distribution. More precisely if $\mbox{\boldmath$\Sigma$}\sim IW(\nu+p-1,\;\mbox{\boldmath$\Psi$}_1)$, then $\mbox{\boldmath$\Omega$}\sim W(\nu+p-1,\mbox{\boldmath$\Psi$}_1^{-1})$ where $$\begin{aligned} \label{prior1p} \pi_{2}^{W}(\mbox{\boldmath$\Omega$})\propto \det(\mbox{\boldmath$\Omega$})^{(\nu-2)/2}\exp{(-\mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$\Omega$}\right)/2)}.\end{aligned}$$ While the class of IW priors is a popular choice due to conjugacy and associated algebraic simplicity, it suffers from various drawbacks. Gelman [@gelman2006prior] strongly discouraged the use of vague inverse gamma priors in a one-dimensional setting, and the IW priors share similar drawbacks in multivariate settings. Alvarez et al. [@alvarez] expounded that a sole degree of freedom parameter regulates the uncertainty for all variance parameters, thereby lacking the flexibility to encompass distinct levels of prior knowledge for various variance components. Tokuda et al. [@Tokuda2011VisualizingDO] discovered that large correlation coefficients correspond to large marginal variances in an IW distribution. This situation can lead to considerable bias in parameter estimations, particularly when correlation coefficients are substantial but marginal variances are limited, and vice versa. Additional comprehensive information can be found in [@alvarez; @gelman2006prior; @gelman2006data; @Tokuda2011VisualizingDO]. To overcome these drawbacks several scale mixed versions of IW distributions have been proposed in recent literature. 
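Before turning to these extensions, a quick numerical sanity check of the parameterisation in ([\[prior1s\]](#prior1s){reference-type="ref" reference="prior1s"}) can be useful. In the sketch below we compare differences of log-densities (so that normalising constants cancel) against SciPy's inverse-Wishart; the helper name is ours, and the identification of SciPy's degrees-of-freedom argument with $\nu+p-1$ under the convention above is an assumption that the check itself verifies numerically.

```python
import numpy as np
from scipy.stats import invwishart

def log_iw_kernel(Sigma, nu, Psi1):
    """Unnormalised log-density of the IW(nu + p - 1, Psi1) prior in (prior1s)."""
    p = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (nu + 2 * p) * logdet - 0.5 * np.trace(Psi1 @ np.linalg.inv(Sigma))

p, nu = 3, 4.0
Psi1 = np.eye(p)
iw = invwishart(df=nu + p - 1, scale=Psi1)   # assumed correspondence: scipy df = nu + p - 1
S1, S2 = iw.rvs(random_state=1), iw.rvs(random_state=2)
# normalising constants cancel in the difference of log-densities
assert np.isclose(iw.logpdf(S1) - iw.logpdf(S2),
                  log_iw_kernel(S1, nu, Psi1) - log_iw_kernel(S2, nu, Psi1))
```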
In the following subsections, we discuss two prominent members of this class.

## The Diagonal Scale-Mixed Inverse Wishart Prior

The Diagonal Scale-Mixed Inverse Wishart (DSIW) prior for $\mbox{\boldmath$\Sigma$}$ is an extension of the Inverse Wishart distribution that incorporates additional parameters to enhance flexibility. Let $\nu > 0$ and $c_{\nu} > 0$ (depending on $\nu$) be user-specified hyperparameters. If $\mbox{\boldmath$\Delta$}$ is a diagonal matrix with the $i$-th diagonal element equal to $\delta_i$, then we can define the DSIW prior using the following hierarchical representation $$\begin{aligned} \mbox{\boldmath$\Sigma$}\mid \mbox{\boldmath$\Delta$}\sim IW(\nu+p-1,\;c_{\nu}\mbox{\boldmath$\Delta$}), \quad \pi(\mbox{\boldmath$\Delta$}) = \prod_{i=1}^p \pi_i(\delta_i),\end{aligned}$$ where $\pi_i(\cdot)$ is a density function with support in the positive real line for every $1 \leq i \leq p$. The marginal prior on $\mbox{\boldmath$\Sigma$}$ can be expressed as follows: $$\begin{aligned} \label{prior2s} \pi_{1}^{DSIW}(\mbox{\boldmath$\Sigma$}) \propto \det(\mbox{\boldmath$\Sigma$})^{-(\nu+2p)/2} \prod_{i=1}^p \int_{0}^{\infty} \delta_i^{(\nu+p-1)/2} \exp{\left\{-\frac{c_{\nu}}{2} (\Sigma^{-1})_{i} \delta_i\right\}} \pi_i(\delta_i) d\delta_i,\end{aligned}$$ where $(\Sigma^{-1})_{i}$ is the $i$-th diagonal element of $\mbox{\boldmath$\Sigma$}^{-1}$. Similarly, the corresponding induced prior on $\mbox{\boldmath$\Omega$}$ is referred to as the diagonal scale mixed Wishart (DSW) prior. It can be expressed as $$\begin{aligned} \label{prior2p} \pi_{2}^{DSW}(\mbox{\boldmath$\Omega$}) \propto \det(\mbox{\boldmath$\Omega$})^{(\nu-2)/2} \prod_{i=1}^p \int_{0}^{\infty} \delta_i^{(\nu+p-1)/2} \exp{\left\{-\frac{c_{\nu}}{2} (\Omega)_{i} \delta_i\right\}} \pi_i(\delta_i) d\delta_i,\end{aligned}$$ where $(\Omega)_{i}$ is the $i$-th diagonal element of $\mbox{\boldmath$\Omega$}$; the change in the determinant exponent arises because the Jacobian of the transformation from $\mbox{\boldmath$\Sigma$}$ to $\mbox{\boldmath$\Omega$}$ is $\det(\mbox{\boldmath$\Omega$})^{-(p+1)}$. There are several choices of $\pi$ recommended in the literature. O'Malley and Zaslavsky [@zava] propose independent log-normal priors on the $\delta_i$'s with a scale parameter $c_{\nu}=1$ (LN-DSIW prior). Another option they suggest is independent truncated normal priors for the $\delta_i$'s, resulting in the induced prior on $\mbox{\boldmath$\Sigma$}$ (TN-DSIW prior). This prior corresponds to the multivariate version of Gelman's folded half-T prior [@gelman2006prior]. Gelman et al. [@gelman_book] recommend using independent uniform priors on the $\delta_i$'s with $c_{\nu}=1$ (U-DSIW prior) for non-informative modeling. Huang and Wand [@huang] suggested employing independent Gamma priors on the $\delta_i$'s with a shape parameter of $2$ and $c_{\nu}=2\nu$ (IG-DSIW prior). This prior extends Gelman's Half-t priors on standard deviation parameters to achieve high non-informativity. When $\nu=2$, the correlation parameters under this prior have uniform distributions on the interval $(-1, 1)$. Additionally, for the last two choices of $\pi_i(\cdot)$, closed-form expressions are available for the marginal distribution of $\mbox{\boldmath$\Sigma$}$ or $\mbox{\boldmath$\Omega$}$. Gelman and Hill [@gelman2006data] also recommend this prior with $\nu=2$ and $c_{\nu}=1$ to ensure uniform priors on the correlations, similar to the IW prior, but with added flexibility to incorporate prior information about the standard deviations.
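All of the DSIW variants above share the same two-stage structure, so simulating from any of them reduces to drawing the mixing parameters $\delta_i$ and then an inverse-Wishart matrix. The following sketch follows the hierarchical representation directly; the helper, the uniform mixing density, and the hyperparameter values are illustrative placeholders of ours rather than recommendations taken from the references above.

```python
import numpy as np
from scipy.stats import invwishart

def sample_dsiw(p, nu, c_nu, sample_delta, rng, n_draws=1):
    """Hierarchical DSIW sampling: delta_i ~ pi_i independently, then
    Sigma | Delta ~ IW(nu + p - 1, c_nu * Delta) with Delta = diag(delta_1, ..., delta_p)."""
    draws = []
    for _ in range(n_draws):
        delta = np.array([sample_delta(rng) for _ in range(p)])
        draws.append(invwishart.rvs(df=nu + p - 1, scale=c_nu * np.diag(delta),
                                    random_state=rng))
    return draws

rng = np.random.default_rng(0)
# a U-DSIW-type choice: uniform mixing densities with c_nu = 1 (ranges are illustrative)
sigmas = sample_dsiw(p=4, nu=2.0, c_nu=1.0,
                     sample_delta=lambda r: r.uniform(0.01, 10.0), rng=rng, n_draws=3)
```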
Similar versions of the aforementioned priors can also be defined for the precision matrix $\mbox{\boldmath$\Omega$}$. See [@sarkar] for more detailed information regarding the posterior distributions for these priors in a general framework. For our analysis, we will later make a mild assumption about the tails of the $\pi_i$s. This assumption holds for all the aforementioned choices of $\pi_i$s, including the LN-DSIW, TN-DSIW, U-DSIW, and IG-DSIW priors. It allows future researchers the flexibility to explore additional options and choose from a wider range of $\{\pi_i\}_{i=1}^p$ distributions. ## The Matrix-$F$ Prior In the work by Mulder and Pericchi [@mulder], a matrix-variate generalization of the $F$ distribution known as the matrix-$F$ distribution for $\mbox{\boldmath$\Sigma$}$ is proposed. Similar to the univariate $F$ distribution, the matrix-$F$ distribution can be specified through a hierarchical representation as follows $$\begin{aligned} \mbox{\boldmath$\Sigma$}\mid \bar{\mbox{\boldmath$\Delta$}} \sim IW(\nu+p-1,\;\bar{\mbox{\boldmath$\Delta$}}), \quad \bar{\mbox{\boldmath$\Delta$}} \sim W(\nu^*,\;\mbox{\boldmath$\Psi$}_2),\end{aligned}$$ where $\bar{\mbox{\boldmath$\Delta$}}$ is a matrix-valued random variable, $\nu$ is a positive parameter, $\nu^*$ is the degrees of freedom parameter, and $\mbox{\boldmath$\Psi$}_2$ is a positive definite scale matrix. For the matrix-$F$ distribution, closed-form expressions for the marginal prior on $\mbox{\boldmath$\Sigma$}$ and the corresponding induced prior on $\mbox{\boldmath$\Omega$}$ are available. The marginal prior on $\mbox{\boldmath$\Sigma$}$ is given by $$\begin{aligned} \label{prior3s} \pi_{1}^{F}(\mbox{\boldmath$\Sigma$}) \propto \det(\mbox{\boldmath$\Sigma$})^{-(\nu+2p)/2} \det(\mbox{\boldmath$\Sigma$}^{-1}+\mbox{\boldmath$\Psi$}_2^{-1})^{-(\nu^*+\nu+p-1)/2}.\end{aligned}$$ Similarly, the induced prior on $\mbox{\boldmath$\Omega$}$ can be expressed as $$\begin{aligned} \label{prior3p} \pi_{2}^{F}(\mbox{\boldmath$\Omega$}) \propto \det(\mbox{\boldmath$\Omega$})^{(\nu-2)/2} \det(\mbox{\boldmath$\Omega$}+\mbox{\boldmath$\Psi$}_2^{-1})^{-(\nu^*+\nu+p-1)/2}.\end{aligned}$$ The key difference compared to the DSIW prior is that the scale parameter for the base Inverse Wishart distribution is now a general positive definite matrix. For posterior distributions and further details, we refer the reader to [@mulder; @sarkar]. # Main Results {#sec4} Before providing our main BvM results, we will outline the assumptions made on the true data-generating model and the prior distribution, along with their implications. *Assumption 1*. There exists $k_{\sigma}\in(0,1]$ such that $\mbox{\boldmath$\Sigma$}_{0}\in \mathcal{C_{\sigma}}$, where $\mathcal{C_{\sigma}}=\{\mbox{\boldmath$\Sigma$}^{p_n\times p_n}\mid\\\;0<k_{\sigma}\leq\lambda_{min}(\mbox{\boldmath$\Sigma$})\leq\lambda_{max}(\mbox{\boldmath$\Sigma$})$ $\leq1/k_{\sigma}<\infty\}$. We will also assume $\left\lVert\mbox{\boldmath$\Sigma$}_{0}^{-1/2}Y_1\right\rVert_{\psi_2}$ is at most $\sigma_0>0$. Here $k_{\sigma}$ and $\sigma_0$ are fixed constants which do not vary with $n$. The assumption of uniform boundedness of eigenvalues is a standard assumption in high-dimensional asymptotics for covariance estimation, both in the frequentist and Bayesian settings. It has been widely studied and utilized in various research papers, including [@banerjee1; @banerjee2; @bickel2008regularized; @spectrum; @xiang]. 
Bickel and Levina [@bickel2008regularized] referred to the class of covariance matrices satisfying this assumption as "well-conditioned covariance matrices" and provided several examples of processes that can generate matrices in this class. It is not difficult to check that $\mbox{\boldmath$\Sigma$}_{0}\in \mathcal{C_{\sigma}}$ if and only if $\mbox{\boldmath$\Omega$}_{0}\in \mathcal{C_{\sigma}}$. The bound on the sub-Gaussian norm, involving $\sigma_0$, ensures that the distribution of $Y_1$ exhibits no atypical moment behavior. *Assumption 2*. We assume $p_n^5=o(n)$, that is, the number of responses $p_n$ is allowed to grow with $n$, but the ratio $p_n^5/n$ converges to $0$ as $n$ increases. As discussed in the introduction, this requirement is unsurprising given the lack of a low-dimensional structure on the covariance matrix and the goal of obtaining BvM results for the *entire* covariance matrix. Relaxing this assumption is challenging without additional structure, such as sparsity, in the covariance or precision matrix. Sections [6](#sec7){reference-type="ref" reference="sec7"}-[8](#sec9){reference-type="ref" reference="sec9"} of our paper are dedicated to demonstrating BvM results under sparsity in the precision matrix, in which case the above assumption can be significantly weakened (see Assumption [Assumption 5](#asB){reference-type="ref" reference="asB"} in Section [8](#sec9){reference-type="ref" reference="sec9"}). *Assumption 3*. The sequence of prior distributions $\{\Pi_{1n}(\cdot)\}_{n\geq1}$ (or $\{\Pi_{2n}(\cdot)\}_{n\geq1}$) on $\mbox{\boldmath$\Sigma$}$ (or $\mbox{\boldmath$\Omega$}$) is flat around $\mbox{\boldmath$S$}$ (or $\mbox{\boldmath$S$}^{-1}$) and the posterior contraction rate under this prior is $\sqrt{\frac{p_n}{n}}$. When Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"} holds and $p_n=o(n)$ (which can also be inferred from Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"}), it can be demonstrated that the sample covariance matrix converges to its true value at a rate of $\sqrt{p_n/n}$ (refer to Lemma 5.2 in [@sarkar]). In this setting, it has been shown in [@sarkar] that a large class of priors for an unstructured covariance matrix attains the contraction rate of $\sqrt{p_n/n}$. Using these results, we will show that the priors discussed in Section [3](#sec3){reference-type="ref" reference="sec3"} (IW, DSIW, and matrix-$F$) also satisfy this condition under mild assumptions on the relevant hyperparameters. **Lemma 4**. *Given Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, the Inverse-Wishart (IW) prior on $\mbox{\boldmath$\Sigma$}$, as defined in ([\[prior1s\]](#prior1s){reference-type="ref" reference="prior1s"}), will satisfy Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}, even when $\left\lVert\mbox{\boldmath$\Psi$}_1\right\rVert_2=O(p_n)$.* **Lemma 5**. *Consider a class of DSIW prior distributions on $\mbox{\boldmath$\Sigma$}$ as defined in ([\[prior2s\]](#prior2s){reference-type="ref" reference="prior2s"}).
Given Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, if there exists a constant $k$ (independent of $n$) such that $\pi_i(x)$ decreases in $x$ for $x > k$ for every $1 \leq i \leq p$, then the prior distribution will satisfy Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}.* The proofs of these lemmas are provided in the Appendix. It is interesting to note that the condition on $\pi_i(\cdot)$ in Lemma [Lemma 5](#postcont2){reference-type="ref" reference="postcont2"} is relatively straightforward and encompasses a wide range of commonly used continuous distributions, including the truncated normal, half-$t$, gamma and inverse gamma, beta, Weibull, log-normal, and others. In particular, all the DSIW priors discussed in the existing literature satisfy this assumption for an appropriate choice of $k$. **Lemma 6**. *Given Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, the matrix-$F$ prior on $\mbox{\boldmath$\Sigma$}$, as defined in ([\[prior3s\]](#prior3s){reference-type="ref" reference="prior3s"}), will satisfy Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}, even when $\nu^*=O(p_n)$.* A proof is provided in the Appendix. With the above lemmas in hand, we now present the key findings of this paper in the unstructured setting. We first establish the BvM result for the covariance matrix $\mbox{\boldmath$\Sigma$}$ in the following theorem. **Theorem 7**. ***(BvM Theorem for Covariance Matrix)**[\[th_sigma\]]{#th_sigma label="th_sigma"} Let $Y_1, Y_2, \cdots, Y_n$ be independent and identically distributed Gaussian random vectors with mean zero and covariance matrix $\mbox{\boldmath$\Sigma$}$. Consider a working Bayesian model that utilizes one of the sequences of priors $\{\Pi_{1n}(\cdot)\}_{n\geq1}$ satisfying Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}, and assume that the true data-generating mechanism (see Section [2](#sec2){reference-type="ref" reference="sec2"}) satisfies Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"}-[Assumption 2](#as2){reference-type="ref" reference="as2"}. Then $$\begin{aligned} \int_{G_{1n}} \lvert \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_1 \overset{P}{\to}0,\; \textit{as}\;n\to\infty\; \textit{under}\;\mathbb{P}_{0}, \end{aligned}$$ where $\mbox{\boldmath$T$}_1=\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$ and $\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})$ denotes the probability density function of the $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_p)$ distribution as defined in Section [2](#sec2){reference-type="ref" reference="sec2"}.* The detailed proof can be found in Section [5](#sec5){reference-type="ref" reference="sec5"}. Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} essentially states that the TV norm between the posterior distribution of $\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$ and a $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_p)$ converges to zero in probability as $n\to\infty$.
In other words, under Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"}-[Assumption 3](#as3){reference-type="ref" reference="as3"}, we can approximate the posterior distribution of $\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$ effectively using a symmetric matrix variate normal distribution with mean $\mbox{\boldmath$O$}$ and scale parameter $2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_p$. This finding proves particularly valuable for constructing credible intervals for (possibly multi-dimensional) functionals of $\mbox{\boldmath$\Sigma$}$ directly, as the $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_p)$ distribution is completely determined by the available data. We now focus on BvM results for an unstructured precision matrix $\mbox{\boldmath$\Omega$}$ and start with the results for the posterior contraction rate of $\mbox{\boldmath$\Omega$}$. **Lemma 8**. *Suppose the posterior distribution of $\mbox{\boldmath$\Sigma$}$ exhibits a contraction rate $\epsilon_n$, where $\epsilon_n$ converges to $0$ as $n$ increases. Then under Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"}, the induced posterior on the precision matrix $\mbox{\boldmath$\Omega$}$ will contract around $\mbox{\boldmath$\Omega$}_0$ at the rate of $\epsilon_n$ as well.* A proof is provided in the Appendix. To prove BvM results for $\mbox{\boldmath$\Omega$}$ with the corresponding induced prior $\Pi_{2n}(\cdot)$, a condition like Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"} is necessary. Specifically, it is essential for the prior distribution $\Pi_{2n}(\cdot)$ to be flat around $\mbox{\boldmath$S$}^{-1}$. In fact, under Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, results similar to Lemmas [Lemma 4](#postcont1){reference-type="ref" reference="postcont1"}, [Lemma 5](#postcont2){reference-type="ref" reference="postcont2"}, and [Lemma 6](#postcont3){reference-type="ref" reference="postcont3"} can be established for the induced prior on $\mbox{\boldmath$\Omega$}$. The proofs are essentially identical and hence are not provided. We now establish the BvM result for the precision matrix $\mbox{\boldmath$\Omega$}$. **Theorem 9**. ***(BvM Theorem for Precision Matrix)**[\[th_omega\]]{#th_omega label="th_omega"} Let $Y_1, Y_2, \cdots, Y_n$ be independent and identically distributed Gaussian random vectors with mean zero and precision matrix $\mbox{\boldmath$\Omega$}$. Consider a working Bayesian model that utilizes one of the sequences of priors $\{\Pi_{2n}(\cdot)\}_{n\geq1}$ satisfying Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}, and assume that the true data-generating mechanism (see Section [2](#sec2){reference-type="ref" reference="sec2"}) satisfies Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"}-[Assumption 2](#as2){reference-type="ref" reference="as2"}.
Then $$\begin{aligned} \int_{G_{2n}} \lvert \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_2 \overset{P}{\to}0,\; \textit{as}\;n\to\infty\; \textit{under}\;\mathbb{P}_{0}, \end{aligned}$$ where $\mbox{\boldmath$T$}_2=\sqrt{n}(\mbox{\boldmath$\Omega$}-\mbox{\boldmath$S$}^{-1})$ and $\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})$ denotes the probability density function of the $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}^{-1}\otimes\mbox{\boldmath$S$}^{-1})\mbox{\boldmath$B$}_p)$ distribution as defined in Section [2](#sec2){reference-type="ref" reference="sec2"}.* The implication of Theorem [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"} is the same as that of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}, except for the fact that it applies to $\mbox{\boldmath$\Omega$}$ instead of $\mbox{\boldmath$\Sigma$}$. The elements of $\mbox{\boldmath$\Omega$}$ are particularly useful when we want to study the conditional dependence structure between the underlying variables. Parts of the proof for Theorem [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"} bear a strong resemblance to the corresponding parts of the proof of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}, which is discussed in Section [5](#sec5){reference-type="ref" reference="sec5"}. For this reason, its proof is included in the Appendix. The key distinction when handling the precision matrix, as opposed to the covariance matrix, lies in the formulation of the likelihood, as illustrated in ([\[likelihood1\]](#likelihood1){reference-type="ref" reference="likelihood1"}) and ([\[likelihood2\]](#likelihood2){reference-type="ref" reference="likelihood2"}).

# Proof of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} {#sec5}

We now proceed to prove Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} stated in Section [4](#sec4){reference-type="ref" reference="sec4"}. The proof makes use of a technical lemma, which is stated below. The proof of this lemma is provided in the Appendix. **Lemma 10**.
*If $\mbox{\boldmath$A$}$ is a $p\times p$ symmetric random matrix such that $\mbox{\boldmath$A$}\sim\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\; \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$I$}_p\otimes\mbox{\boldmath$I$}_p)\mbox{\boldmath$B$}_p)$, then $P(\left\lVert\mbox{\boldmath$A$}\right\rVert_2\geq c_1 \sqrt{p})\leq2\exp{(-c_2p)}$, where $c_1$ is a sufficiently large positive constant not depending on $p$, and $c_2$ is a positive constant depending on $c_1$.*

## Proof of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} {#proof_th_sigma}

Observe that for any constant $M>0$ the following holds $$\begin{aligned} \label{th1:1} \int_{G_{1n}} \lvert \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_1\leq &\int_{F_{1n}} \lvert \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_1\notag\\& +\int_{ F_{1n}^c} \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_1\notag\\&+\int_{F_{1n}^c}\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_1,\end{aligned}$$ where $F_{1n}:=\{\mbox{\boldmath$T$}_1\in G_{1n}\;\mid\;\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2\leq M\sqrt{p_n}\}$. Note that the third term of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}) can be expressed as follows $$\begin{aligned} \label{th1:2} \int_{F_{1n}^c}\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_1=P\left(\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2\geq M\sqrt{p_n}\right),\end{aligned}$$ where $\mbox{\boldmath$T$}_1\sim \mathcal{SMN}_{p_n \times p_n}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_{p_n}^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_{p_n})$. By utilizing the properties of the symmetric matrix-normal distribution as mentioned in Section [2](#sec2){reference-type="ref" reference="sec2"}, it becomes evident that $\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{2}}\sim\mathcal{SMN}_{p_n \times p_n}(\mbox{\boldmath$O$},\; \mbox{\boldmath$B$}_{p_n}^T(\mbox{\boldmath$I$}_{p_n}\otimes\mbox{\boldmath$I$}_{p_n})\mbox{\boldmath$B$}_{p_n})$. Consequently, according to Lemma [Lemma 10](#smn){reference-type="ref" reference="smn"}, we have $$\begin{aligned} \label{th1:3} P\left(\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2\geq M\sqrt{p_n}\right)\leq 2\exp{(-c\;p_n)}\to 0,\end{aligned}$$ as $n\to\infty$, where $c$ is a constant. Now we deal with the second term of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}). Prior to that, we define the following set: $$\begin{aligned} \label{h} H_n:=\{(1-\eta)\lambda_{\min}(\mbox{\boldmath$\Sigma$}_0)\leq\lambda_{\min}(\mbox{\boldmath$S$})\leq\lambda_{\max}(\mbox{\boldmath$S$})\leq(1+\eta)\lambda_{\max}(\mbox{\boldmath$\Sigma$}_0)\},\end{aligned}$$ where $\eta\in(0,1)$ is a fixed number. Under Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, it can be verified that $\mathbb{P}_{0}(H_n)$ tends to 1 as $n$ approaches infinity (refer to [@sarkar Lemma 5.3] or [@vershynin Theorem 5.39] for further details).
Consequently, a significant portion of our subsequent analysis will focus on the set $H_n$, with no impact on our asymptotic outcomes given that $H_n$ is a set of asymptotic probability one under $\mathbb{P}_{0}$. Within the set $H_n$, the second term of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}) can be expressed as follows $$\begin{aligned} \label{th1:4} \int_{ F_{1n}^c} \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_1 &= \Pi_n\left(\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2 > M\sqrt{p_n} \mid \mbox{\boldmath$Y_n$}\right)\notag\\ &\leq \Pi_n(\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 > M^*\sqrt{p_n} \mid \mbox{\boldmath$Y_n$}),\end{aligned}$$ where $M^*=M(1-\eta)k_{\sigma}$. The last step follows from Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"} and the definition of $H_n$. Let us recall that $\mbox{\boldmath$T$}_1=\sqrt{n}(\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$S$})$, and it is known under Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"} that $\mbox{\boldmath$S$}$ exhibits a convergence rate of $\sqrt{p_n/n}$. Consequently, based on Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"} regarding the posterior contraction rate, we can conclude that with an appropriate choice of $M$, $\Pi_n(\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 > M^*\sqrt{p_n} \mid \mbox{\boldmath$Y_n$})\overset{P}{\to}0$ as $n$ tends to infinity under the true model $\mathbb{P}_{0}$, since $\mathbb{P}_{0}(H_n)\to 1$ as $n \to \infty$. We have previously demonstrated that both the second and third terms of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}) converge in probability to zero as $n$ approaches infinity, under the true model $\mathbb{P}_{0}$. It is now enough to show that the first term of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}) satisfies $$\begin{aligned} \label{th1:5} \int_{F_{1n}} \lvert \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_1\;\overset{P}{\to}0, \end{aligned}$$ as $n \to \infty$, under the true model $\mathbb{P}_{0}$.
Utilizing the notation introduced in ([\[post1\]](#post1){reference-type="ref" reference="post1"}), we can represent the first term of ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}) in the following manner $$\begin{aligned} \label{th1:6} &\int_{F_{1n}} \lvert \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_1\notag\\ & \leq \int_{F_{1n}} \left \lvert \frac{M_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)}{\bigintsss M_{1n}(\mbox{\boldmath$W$})\pi_{1}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$W$}}{\sqrt{n}}\right)d\mbox{\boldmath$W$}} -\frac{\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}\right)}{\bigintsss \Tilde{M}_{1n}(\mbox{\boldmath$W$})\pi_{1}\left(\mbox{\boldmath$S$}\right)d\mbox{\boldmath$W$}}\right\rvert\; d\mbox{\boldmath$T$}_1\notag\\&\leq \int_{ F_{1n}^c} \pi_{1n}(\mbox{\boldmath$T$}_1\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_1+\int_{F_{1n}^c}\phi(\mbox{\boldmath$T$}_1;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_1+ 3\left(\int\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}\right)\; d\mbox{\boldmath$T$}_1\right)^{-1}\notag\\&\times\int_{F_{1n}}\left\lvert M_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)-\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}\right)\right\rvert \; d\mbox{\boldmath$T$}_1,\end{aligned}$$ where $\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)=\exp\{-\mathop{\mathrm{tr}}(S^{-1/2}\mbox{\boldmath$T$}_1S^{-1/2})^2/4\}$ is the kernel of the probability density function of the $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}\otimes\mbox{\boldmath$S$})\mbox{\boldmath$B$}_p)$ distribution. The last step of ([\[th1:6\]](#th1:6){reference-type="ref" reference="th1:6"}) follows from Lemma A.1 of Ghosal (1999) [@ghosal1], Appendix 1. We have successfully established that the first and second terms of ([\[th1:6\]](#th1:6){reference-type="ref" reference="th1:6"}) converge to zero in probability, as demonstrated in ([\[th1:2\]](#th1:2){reference-type="ref" reference="th1:2"})-([\[th1:4\]](#th1:4){reference-type="ref" reference="th1:4"}).
For the third term note that $$\begin{aligned} \label{th1:7} &\left(\int\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}\right)\; d\mbox{\boldmath$T$}_1\right)^{-1}\notag\\ &\times \int_{F_{1n}}\left\lvert M_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)-\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\pi_{1}\left(\mbox{\boldmath$S$}\right)\right\rvert \; d\mbox{\boldmath$T$}_1\notag\\\leq &\underset{\mbox{\boldmath$T$}_1\in F_{1n}}{\sup}\;\left\lvert\frac{\pi_{1}(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}(\mbox{\boldmath$S$})}-1\right\rvert\frac{\int_{F_{1n}}M_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}{\int \Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}+ \frac{\int_{F_{1n}}\lvert M_{1n}(\mbox{\boldmath$T$}_1)-\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\rvert\; d\mbox{\boldmath$T$}_1}{\int \Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}\end{aligned}$$ We can infer from the utilization of the set $H_n$ in conjunction with Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 3](#as3){reference-type="ref" reference="as3"} (flatness of the prior around $\mbox{\boldmath$S$}$) that $$\begin{aligned} \label{th1:8} \underset{\mbox{\boldmath$T$}_1\in F_{1n}}{\sup}\;\left\lvert\frac{\pi_{1}(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}(\mbox{\boldmath$S$})}-1\right\rvert\leq \rho^{\pi_{1}}\left(\sqrt{\frac{p_n}{n}}\right)\overset{P}{\to}0,\end{aligned}$$ as $n\to\infty$. If we can show that the second term of ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}) converges to zero in probability as $n$ approaches infinity, it implies that $\dfrac{\int_{F_{1n}}M_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}{\int \Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}$ remains bounded in probability. Consequently, we can infer from ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"})-([\[th1:8\]](#th1:8){reference-type="ref" reference="th1:8"}) that it is sufficient to establish the second term of ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}) i.e. $$\begin{aligned} \frac{\int_{F_{1n}}\lvert M_{1n}(\mbox{\boldmath$T$}_1)-\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\rvert\; d\mbox{\boldmath$T$}_1}{\int \Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1} \overset{P}{\to}0,\end{aligned}$$ as $n$ tends to infinity. It can be noted that by referring to ([\[m1\]](#m1){reference-type="ref" reference="m1"}) and ([\[likelihood1\]](#likelihood1){reference-type="ref" reference="likelihood1"}), we can express the following $$\begin{aligned} \label{th1:9} \log(M_{1n}(\mbox{\boldmath$T$}_1))=&l_{1n}\left(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)-l_{1n}\left(\mbox{\boldmath$S$}\right)\notag\\=&-\frac{n}{2}\log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)\right)+\notag\\ & \frac{n}{2} \mathop{\mathrm{tr}}\left(\mbox{\boldmath$I$}_{p_n}-\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)^{-1}\right)\notag\\=&\;\frac{n}{2} \sum_{i=1}^{p_n}\left\{1-\frac{1}{1+\frac{\lambda^*_i}{\sqrt{n}}}-\log(1+\frac{\lambda^*_i}{\sqrt{n}})\right\},\end{aligned}$$ where $\lambda^*_i$ is the $i^{th}$ eigenvalue of the matrix $\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}$. 
By employing Taylor's series approximation up to second order, including the integral remainder term, we can express ([\[th1:9\]](#th1:9){reference-type="ref" reference="th1:9"}) in the following manner $$\begin{aligned} \label{th1:10} &\frac{n}{2} \sum_{i=1}^{p_n}\left\{1-\frac{1}{1+\frac{\lambda^*_i}{\sqrt{n}}}-\log(1+\frac{\lambda^*_i}{\sqrt{n}})\right\}\notag\\=&-\frac{1}{4}\sum_{i=1}^{p_n} (\lambda^{*}_i)^2 + \frac{n}{2}\sum_{i=1}^{p_n}\int_{0}^{\frac{\lambda^*_i}{\sqrt{n}}} \frac{(2-t)}{(1+t)^4}\left(\frac{\lambda^*_i}{\sqrt{n}}-t\right)^2dt\notag\\=&-\frac{1}{4}\mathop{\mathrm{tr}}(\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2})^2+ \frac{n}{2}\sum_{i=1}^{p_n}\int_{0}^{\frac{\lambda^*_i}{\sqrt{n}}} \frac{(2-t)}{(1+t)^4}\left(\frac{\lambda^*_i}{\sqrt{n}}-t\right)^2dt.\end{aligned}$$ Since $\log(\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1))=-\frac{1}{4}\mathop{\mathrm{tr}}(\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2})^2$, it follows from ([\[th1:9\]](#th1:9){reference-type="ref" reference="th1:9"}) and ([\[th1:10\]](#th1:10){reference-type="ref" reference="th1:10"}) that $$\begin{aligned} \label{th1:11} &\lvert \log(M_{1n}(\mbox{\boldmath$T$}_1)) - \log(\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)) \rvert\notag\\ =& \;\frac{n}{2}\left\lvert \sum_{i=1}^{p_n}\int_{0}^{\frac{\lambda^*_i}{\sqrt{n}}} \frac{(2-t)}{(1+t)^4}\left(\frac{\lambda^*_i}{\sqrt{n}}-t\right)^2\;dt\right\rvert \notag\\\leq& \;n\sum_{i=1}^{p_n} \left\lvert \int_{0}^{\frac{\lambda^*_i}{\sqrt{n}}}\left(\frac{\lambda^*_i}{\sqrt{n}}-t\right)^2dt\right\rvert=\frac{1}{3\sqrt{n}}\sum_{i=1}^{p_n}\lvert\lambda^{*}_i\rvert^3\leq \frac{2p_n}{\sqrt{n}}\left\lVert S^{-1/2}\mbox{\boldmath$T$}_1S^{-1/2}\right\rVert_2^3.\end{aligned}$$ The final term of ([\[th1:11\]](#th1:11){reference-type="ref" reference="th1:11"}) is uniformly bounded by $2M^3\sqrt{p_n^5/n}$ on $F_{1n}$. By applying Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"} and considering sufficiently large $n$, we can deduce from ([\[th1:11\]](#th1:11){reference-type="ref" reference="th1:11"}) that $$\begin{aligned} \label{th1:12} \left\lvert\frac{M_{1n}(\mbox{\boldmath$T$}_1)}{\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)}-1\right\rvert\leq 2 \lvert \log(M_{1n}(\mbox{\boldmath$T$}_1)) - \log(\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)) \rvert \leq 4M^3\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ uniformly on $F_{1n}$. Utilizing all the results obtained thus far, we can bound the second term of ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}) as follows $$\begin{aligned} \label{th1:13} \frac{\int_{F_{1n}}\lvert M_{1n}(\mbox{\boldmath$T$}_1)-\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\rvert\; d\mbox{\boldmath$T$}_1}{\int \Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1} \leq 4M^3\sqrt{\frac{p_n^5}{n}} \frac{\int_{F_{1n}}\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}{\int\Tilde{M}_{1n}(\mbox{\boldmath$T$}_1)\; d\mbox{\boldmath$T$}_1}\leq 4M^3\sqrt{\frac{p_n^5}{n}} \to 0,\end{aligned}$$ as $n$ tends to infinity, utilizing Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"}. This concludes the proof.

# Concentration Graphical Models: Preliminaries {#sec7}

Before delving into BvM results for concentration graphical models, we will provide the required background material in this section.

## Decomposable Graphs

An undirected graph $G = (V, E)$ consists of a vertex set $V = \{1, \ldots, p\}$ with an edge set $E \subseteq \{(i,\;j) \in V \times V : i \neq j\}$, where $(i, j) \in E$ if and only if $(j, i) \in E$.
Two vertices $v$ and $v'$ in $V$ are considered adjacent if there exists an edge between them. A complete graph is an undirected graph in which every pair of distinct vertices in $V$ is adjacent. On the other hand, a cycle is a graph that can be represented by a permutation $\{v_1, v_2, \ldots, v_p\}$ of $V$ such that $(v_i, v_j) \in E$ if and only if $\lvert i - j\rvert = 1$ or $\lvert i - j\rvert = p - 1$. The induced subgraph of $G = (V,\;E)$ corresponding to a subset $V' \subseteq V$ is an undirected graph with a vertex set $V'$ and an edge set $E' = E \cap (V' \times V')$. A subset $V'$ of $V$ is considered a clique if the induced subgraph corresponding to $V'$ is a complete graph. Additional details can be found in references such as [@letac; @lauritzen]. Let $\lvert V \rvert$ denote the cardinality of set $V$. For an undirected graph $G = (V, E)$, we denote $M_G$ as the set of all $\lvert V \rvert \times \lvert V \rvert$ matrices $\mbox{\boldmath$A$}= (A_{ij})_{1 \leq i,\;j \leq \lvert V \rvert}$ satisfying $A_{ij} = A_{ji} = 0$ for all pairs $(i, j) \notin E$, $i \neq j$. Similarly, $P_G$ represents the set of all symmetric positive definite $\lvert V \rvert \times \lvert V \rvert$ matrices that are elements of $M_G$. An induced subgraph $G' = (V', E')$ of $G = (V, E)$ is defined when $V' \subseteq V$ and $E' = (V' \times V') \cap E$, and is denoted as $G' \subseteq G$. Let us now revisit the definition of decomposable graphs as stated in [@lauritzen]. *Definition 11*. A graph G is considered decomposable if it does not contain an induced subgraph that forms a cycle of length greater than or equal to $4$. Additional characterizations of decomposable graphs can be found in other references such as [@xiang; @roverato]. An important property of matrices in the class $P_G$ is worth noting. If $\mbox{\boldmath$\Omega$}\in P_G$, the graph $G$ is decomposable, and the vertices in $V$ are arranged according to a perfect vertex elimination scheme, then the Cholesky factor of $\mbox{\boldmath$\Omega$}$ exhibits the same pattern of zeros in its lower triangle as $\mbox{\boldmath$\Omega$}$ itself (see for example [@roverato Theorem $1$]).

## Sparse Symmetric Matrix-Normal Distributions

We introduce a new class of distributions that parallels the symmetric matrix variate normal distributions, called *sparse symmetric matrix-normal (SSMN) distributions*. These distributions arise as the limiting distributions in the BvM results in Section [8](#sec9){reference-type="ref" reference="sec9"}. We start by introducing some useful notation for clarity and convenience. Consider a $p \times p$ sparse symmetric matrix $\mbox{\boldmath$A$}$ whose sparsity structure is given by a graph $G$. Let us recall the vectorization of $\mbox{\boldmath$A$}$ denoted by $\mbox{vec}(\mbox{\boldmath$A$})$ as defined in subsection [1.1](#1.1){reference-type="ref" reference="1.1"}. Let $f_p$ represent the number of unique non-zero elements in $\mbox{vec}(\mbox{\boldmath$A$})$. Now, consider an $f_p \times 1$ vector $\mbox{vech}^*(\mbox{\boldmath$A$})$ that consists of the unique non-zero elements of $\mbox{vec}(\mbox{\boldmath$A$})$. Next, we define a $p^2 \times f_p$ matrix $\mbox{\boldmath$D$}_{G}$ as an elimination matrix corresponding to graph $G$ (similar to $\mbox{\boldmath$B$}_p$ mentioned in subsection [1.1](#1.1){reference-type="ref" reference="1.1"}) such that $\mbox{vech}^*(\mbox{\boldmath$A$}) = \mbox{\boldmath$D$}_{G}^T\mbox{vec}(\mbox{\boldmath$A$})$. We now formally define the SSMN distribution as follows. *Definition 12*.
**(Sparse Symmetric Matrix-normal distribution)**[\[def4\]]{#def4 label="def4"} For a given decomposable graph $G$, let $\mbox{\boldmath$X$}$ be a $p\times p$ sparse symmetric random matrix taking values in $M_G$. Then $\mbox{\boldmath$X$}(=\mbox{\boldmath$X$}^T)$ is said to have a sparse symmetric matrix-normal distribution with parameters $\mbox{\boldmath$M$}\in M_G$, and $\mbox{\boldmath$\Psi$}_1, \mbox{\boldmath$\Psi$}_2$ ($p\times p$ positive definite matrices satisfying $\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$\Psi$}_2=\mbox{\boldmath$\Psi$}_2\mbox{\boldmath$\Psi$}_1$) if $\mbox{vech}^*(\mbox{\boldmath$X$})\sim\mathcal{N}_{f_p}(\mbox{vech}^*(\mbox{\boldmath$M$}),\; \mbox{\boldmath$D$}_{G}^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$D$}_{G})$. This distribution is denoted by $\mathcal{SSMN}_{G}(\mbox{\boldmath$M$},\;\mbox{\boldmath$D$}_{G}^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$D$}_{G})$, and the corresponding probability density function $f(\cdot)$ on $M_G$ is given by $$f(X)=\frac{\exp\{-\mathop{\mathrm{tr}}(\mbox{\boldmath$\Psi$}_1^{-1}(X-\mbox{\boldmath$M$})\mbox{\boldmath$\Psi$}_2^{-1}(X-\mbox{\boldmath$M$}))/2\}}{(2\pi)^{f_p/2}\det(\mbox{\boldmath$D$}_{G}^T(\mbox{\boldmath$\Psi$}_1\otimes\mbox{\boldmath$\Psi$}_2)\mbox{\boldmath$D$}_{G})^{1/2}}.$$

# Concentration Graphical Models: Model Formulation and Prior Specification {#sec8}

In a manner similar to Section [2](#sec2){reference-type="ref" reference="sec2"}, we consider a set of $n$ independent and identically distributed samples $\mbox{\boldmath$Y$}^{n}=(Y_1,\cdots,Y_n)$ drawn from a multivariate Gaussian distribution $N_p(0,\;\mbox{\boldmath$\Sigma$}=\mbox{\boldmath$\Omega$}^{-1})$. For a given undirected graph $G = (V, E)$ with $V = \{1, \ldots, p\}$, the Gaussian concentration model corresponding to $G$ assumes that $\mbox{\boldmath$\Omega$}\in P_G$. Assuming Gaussianity, the log-likelihood function of $\mbox{\boldmath$\Omega$}$, denoted as $l_{3n}(\mbox{\boldmath$\Omega$})$, can be expressed as follows $$\begin{aligned} \label{likelihood3} l_{3n}(\mbox{\boldmath$\Omega$}) = -\frac{np}{2}\log(2\pi) + \frac{n}{2}\log(\det(\mbox{\boldmath$\Omega$})) - \frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$S$}),\end{aligned}$$ where $\mbox{\boldmath$S$}=\frac{1}{n}\sum_{i=1}^{n}Y_{i}Y_{i}^T$. In a Bayesian framework, we assign a prior $\Pi_{3n}(\cdot)$ to the precision matrix $\mbox{\boldmath$\Omega$}$, with support in $P_G$. For simplicity of notation, we will sometimes refer to the prior density $\pi_{3n}(\cdot)$ as $\pi_{3}(\cdot)$. In a manner similar to Xiang et al. [@xiang], we will now specify the *true data-generating mechanism* within the above framework. We assume that the observations $Y_1,\dots, Y_n$ are independently and identically distributed from a multivariate Gaussian distribution $N_{p_n}(0,\;\bar{\mbox{\boldmath$\Omega$}}_n^{-1})$, where $\{{\bar{\mbox{\boldmath$\Omega$}}_n}\}_{n\geq1}$ represents the sequence of true precision matrices. Let $G_n=(V_n,\; E_n)$, with $V_n=\{1,\cdots,p_n\}$, be a decomposable graph where the vertices are ordered according to a perfect vertex elimination scheme.
We define a matrix $\Tilde{\mbox{\boldmath$\Omega$}}_n$ as $$\begin{aligned} (\Tilde{\mbox{\boldmath$\Omega$}}_n)_{ij}=\begin{cases} (\bar{\mbox{\boldmath$\Omega$}}_n)_{ij} & \text{if}\;(i,j)\in E_n\\ 0 & \text{otherwise}, \end{cases}\end{aligned}$$ and $\mbox{\boldmath$A$}_n=\bar{\mbox{\boldmath$\Omega$}}_n-\Tilde{\mbox{\boldmath$\Omega$}}_n$. It is evident that $\Tilde{\mbox{\boldmath$\Omega$}}_n\in M_{G_n}$. Let $d_n$ denote the maximum number of non-zero entries in any row of the symmetric matrix $\Tilde{\mbox{\boldmath$\Omega$}}_n$. We denote the probability measure underlying the true model as $\mathbb{P}_{0,\; G_{n}}$. For simplicity, we will use $\mathbb{P}_{0,G}$ instead of $\mathbb{P}_{0,\;G_{n}}$. Next, we define the maximum likelihood estimator of $\mbox{\boldmath$\Omega$}$ within the class $P_{G_n}$ as $$\begin{aligned} \label{omega_g} \hat{\mbox{\boldmath$\Omega$}}_G(\;= \hat{\mbox{\boldmath$\Omega$}}_{G_n})=\underset{\Omega\in P_{G_n}}{\arg\max}\; l_{3n}(\mbox{\boldmath$\Omega$}),\end{aligned}$$ where $l_{3n}(\mbox{\boldmath$\Omega$})$ is defined in ([\[likelihood3\]](#likelihood3){reference-type="ref" reference="likelihood3"}). Let $\mbox{\boldmath$T$}_3=\sqrt{n}(\mbox{\boldmath$\Omega$}-\hat{\mbox{\boldmath$\Omega$}}_G)$ be a centered and scaled version of $\mbox{\boldmath$\Omega$}$. In this context, we define the function $$\begin{aligned} M_{3n}(\mbox{\boldmath$T$}_3)=\exp{\left(l_{3n}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\dfrac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)-l_{3n}\left(\hat{\mbox{\boldmath$\Omega$}}_G\right)\right)},\label{m3}\end{aligned}$$ where $\mbox{\boldmath$T$}_3$ belongs to $G_{3n}$, and $G_{3n}=\{\mbox{\boldmath$T$}_3:\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\in P_{G_n}\}$. If $\mbox{\boldmath$T$}_3$ falls outside $G_{3n}$, we set $M_{3n}(\mbox{\boldmath$T$}_3)$ to be zero. Clearly $G_{3n}$ is a subset of $M_{G_n}$. Now, suppose the posterior distribution for $\mbox{\boldmath$T$}_3$ is given by $\Pi_{3n}(\cdot\mid\mbox{\boldmath$Y$}^n)$. Analogously, let $\pi_{3n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ represent the corresponding posterior density. Then it is not difficult to check that $$\begin{aligned} &\pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)=\frac{M_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)}{\bigintsss_{G_{3n}} M_{3n}(W)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{W}{\sqrt{n}}\right)\;dW}\label{post3}.\end{aligned}$$ Our objective is to demonstrate that the total variation norm between $\Pi_{3n}(\cdot\mid\mbox{\boldmath$Y$}^n)$ and an appropriate zero-mean sparse symmetric matrix-normal distribution converges in probability to $0$ under $\mathbb{P}_{0,G}$.

## The $G$-Wishart Distribution

As in Section [3](#sec3){reference-type="ref" reference="sec3"}, while our main results hold true for a broad range of prior distributions, it is important to provide specific examples of prior distributions within that class for practical implementation purposes. In this subsection, we will define a standard prior distribution available for the precision matrix under the Gaussian concentration model with respect to the graph $G$ defined in Section [6](#sec7){reference-type="ref" reference="sec7"}. Dawid and Lauritzen [@dawid] developed a class of hyper inverse Wishart distributions for $\mbox{\boldmath$\Sigma$}=\mbox{\boldmath$\Omega$}^{-1}$ when $\mbox{\boldmath$\Omega$}\in P_G$.
The corresponding class of induced priors for $\mbox{\boldmath$\Omega$}$ is known as the class of $G$-Wishart distributions on $P_G$ (see [@atay; @roverato]). Specifically, the $G$-Wishart distribution with parameters $\beta\geq 0$ and $\mbox{\boldmath$\Psi$}_3$ positive definite, denoted by $W_G(\beta,\mbox{\boldmath$\Psi$}_3)$, has a density proportional to $$\begin{aligned} \label{prior4p} \pi_{3}^{WG}(\mbox{\boldmath$\Omega$})\propto \det(\mbox{\boldmath$\Omega$})^{\beta/2}\exp{(-\mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_3\mbox{\boldmath$\Omega$}\right)/2)},\;\mbox{\boldmath$\Omega$}\in P_G.\end{aligned}$$ The class of $G$-Wishart distributions on $P_G$ forms a conjugate family of priors under the Gaussian concentration graphical model corresponding to $G$. If $G$ is decomposable, then quantities such as the mean, mode, and normalizing constant for $W_G(\beta,\mbox{\boldmath$\Psi$}_3)$ are available in closed form (see, for instance, [@rajaratnam]). In Section [8](#sec9){reference-type="ref" reference="sec9"}, we will demonstrate that the $G$-Wishart distribution falls into our desired class of prior distributions under suitable assumptions.

# Main Results {#sec9}

Before presenting our BvM results, let us outline the assumptions made on the true data-generating model and the prior distribution, as well as their implications.

*Assumption 4*. The eigenvalues of $\{{\bar{\mbox{\boldmath$\Omega$}}_n}\}_{n\geq1}$ are uniformly bounded, i.e., $\bar{\mbox{\boldmath$\Omega$}}_n\in \mathcal{C_{\sigma}}$, where $\mathcal{C_{\sigma}}$ is defined in Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"}.

*Assumption 5*. $\frac{d_n^{15}p_n^2(\log(p_n))^3}{n}\to 0$ and $p_n\to\infty$ as $n\to\infty$.

If $d_n$, which represents the maximum number of non-zero entries in any row of the symmetric matrix $\Tilde{\mbox{\boldmath$\Omega$}}_n$, stays constant or increases slowly with $n$ (e.g. as observed in cases such as the banded concentration graphical model from [@banerjee1]), Assumption [Assumption 5](#asB){reference-type="ref" reference="asB"} becomes much weaker compared to Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"} ($p_n^5=o(n)$). This implies that we can relax the strict conditions on $p_n$ by imposing a sparse structure on the precision matrix and handle scenarios with larger $p_n$ without sacrificing the validity of our results.

*Assumption 6*. $\left\lVert\mbox{\boldmath$A$}_n\right\rVert_{(\infty,\infty)}\leq\gamma(d_n)$, where $(np_n)^{1/3}d_n^{3/2}\gamma(d_n)\to0$ as $n\to\infty$.

Assumption [Assumption 6](#asC){reference-type="ref" reference="asC"} is introduced to ensure that the true precision matrix generating the data exhibits approximate decomposability, as discussed in [@xiang; @banerjee1]. This assumption imposes a stricter condition compared to those needed for establishing posterior consistency in [@xiang; @banerjee1]. That proving BvM results often requires stronger assumptions than those needed for posterior consistency is to be expected. A parallel can be drawn from the frequentist setup, where proving results akin to the Central Limit Theorem typically necessitates stronger assumptions compared to demonstrating simple parameter consistency. *Assumption 7*.
The sequence of prior distributions $\{\Pi_{3n}(\cdot)\}_{n\geq1}$ on $\mbox{\boldmath$\Omega$}$ with support on $P_{G_n}$ is flat around $\hat{\mbox{\boldmath$\Omega$}}_{G_n}$ and the posterior contraction rate under this prior is $\epsilon_n$, where $\epsilon_n=d_n^{5/2}(\log(p_n)/n)^{1/2}+d_n^{3/2}\gamma(d_n)$.

To define flatness of the prior distributions $\{\Pi_{3n}(\cdot)\}_{n\geq1}$ around $\hat{\mbox{\boldmath$\Omega$}}_{G_n}$, we will use $\rho^{\pi_3}(\epsilon_n)$ in a similar manner as in ([\[flat\]](#flat){reference-type="ref" reference="flat"}) from Section [2](#sec2){reference-type="ref" reference="sec2"}. When Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}, [Assumption 5](#asB){reference-type="ref" reference="asB"}, and [Assumption 6](#asC){reference-type="ref" reference="asC"} hold, the maximum likelihood estimator $\hat{\mbox{\boldmath$\Omega$}}_G$ converges to the true precision matrix at the rate $\epsilon_n$ (see [@xiang Lemma 3.7]). The lemma below shows that the $G$-Wishart prior distribution satisfies Assumption [Assumption 7](#asD){reference-type="ref" reference="asD"} under certain conditions on the hyperparameters. A proof is presented in the Appendix.

**Lemma 13**. *Consider the $G$-Wishart prior distribution, $W_{G_n}(\beta,\mbox{\boldmath$\Psi$}_{3n})$ as defined in ([\[prior4p\]](#prior4p){reference-type="ref" reference="prior4p"}). Given Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"}, if we take $\beta>0$ and $\mbox{\boldmath$\Psi$}_{3n}$ satisfies $\left\lVert\mbox{\boldmath$\Psi$}_{3n}\right\rVert_2\leq a < \infty$, then the prior distribution will satisfy Assumption [Assumption 7](#asD){reference-type="ref" reference="asD"}.*

Let us now present the key result of this section. We will establish the BvM results for the precision matrix $\mbox{\boldmath$\Omega$}$ under the concentration graphical model in the theorem below.

**Theorem 14**. ***(BvM Theorem for Sparse Precision Matrix)**[\[th_omega_sp\]]{#th_omega_sp label="th_omega_sp"} Let $Y_1, Y_2, Y_3, \cdots, Y_n$ be independent and identically distributed Gaussian random vectors with mean $0$ and precision matrix $\mbox{\boldmath$\Omega$}$, where $\mbox{\boldmath$\Omega$}\in P_{G_n}$ and $G_n$ is the undirected graph associated with the Gaussian concentration model.
Consider a working Bayesian model that utilizes one of the sequences of priors $\{\Pi_{3n}(\cdot)\}_{n\geq1}$ satisfying Assumption [Assumption 7](#asD){reference-type="ref" reference="asD"}, and assume that the true data-generating mechanism (see Section [7](#sec8){reference-type="ref" reference="sec8"}) satisfies Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"}. Then $$\begin{aligned} \int_{G_{3n}} \lvert \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\rvert\; d\mbox{\boldmath$T$}_3 \overset{P}{\to}0,\; \textit{as}\;n\to\infty\; \textit{and under}\;\mathbb{P}_{0,G}, \end{aligned}$$ where $\mbox{\boldmath$T$}_3=\sqrt{n}(\mbox{\boldmath$\Omega$}-\hat{\mbox{\boldmath$\Omega$}}_G)$ and $\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)$ denotes the probability density function of the $\mathcal{SSMN}_{G_n}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$D$}_{G}^T(\hat{\mbox{\boldmath$\Omega$}}_G\otimes\hat{\mbox{\boldmath$\Omega$}}_G)\mbox{\boldmath$D$}_{G})$ distribution as defined in Section [6](#sec7){reference-type="ref" reference="sec7"}.*

The proof of Theorem [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"} is included in the Appendix.

# Equivalence of Different Norms in terms of Convergence {#sec10}

As previously mentioned, the use of the total variation (TV) norm is not exclusive to this problem. In this section, we will demonstrate that results similar to those in Theorems [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}, [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"}, and [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"} can be obtained by considering alternative norms. We consider two densities, namely, $f_n$ and $g_n$, both of which are absolutely continuous with respect to a $\sigma$-finite measure $\mu$ that depends on $n$. We can define the $\alpha$-divergence, proposed by Rényi (1961) [@renyi], between $f_n$ and $g_n$ as follows $$\begin{aligned} \label{renyi} R_{\alpha}(f_n,\;g_n)=\frac{1}{\alpha-1}\log\left[\int f_n^{\alpha}g_n^{1-\alpha}d\mu\right].\end{aligned}$$ Similarly, we can define another type of divergence, denoted as $D_{\alpha}$ (or information divergence of type $(1-\alpha)$), given by $$\begin{aligned} \label{d_alpha} D_{\alpha}(f_n,g_n)=\frac{1}{\alpha(1-\alpha)}\left[1-\int f_n^{\alpha}g_n^{1-\alpha}d\mu\right].\end{aligned}$$ It is evident that $$\begin{aligned} \label{renyi_d_alpha} R_{\alpha}(f_n,g_n)=\frac{1}{\alpha-1}\log\left[1-\alpha(1-\alpha)D_{\alpha}(f_n,g_n)\right].\end{aligned}$$ Additionally, as a special case of the latter, we have $$\begin{aligned} D_{1/2}(f_n,g_n)=4\left[1-\int f_n^{1/2}g_n^{1/2}d\mu\right]=2\int(f_n^{1/2}-g_n^{1/2})^2 d\mu =2H^2(f_n,g_n),\end{aligned}$$ where $H(f_n,g_n)=\left\{\int(f_n^{1/2}-g_n^{1/2})^2 d\mu\right\}^{1/2}$ represents the Bhattacharyya-Hellinger distance [@bhattacharyya1946measure; @Hellinger] between the densities $f_n$ and $g_n$. We now establish an inequality between the total variation distance $TV(f_n,g_n)$ and $D_{\alpha}(f_n,g_n)$. The following proof is due to [@ghosh].

**Lemma 15**. *For $0\leq\alpha\leq 1$, $\alpha(1-\alpha)D_{\alpha}(f_n,g_n)\leq\mbox{TV}(f_n,g_n)$.*
*Proof.* For $0\leq\alpha\leq 1$, $$\begin{aligned} \alpha(1-\alpha)D_{\alpha}(f_n,g_n) & = 1-\int f_n^{\alpha}g_n^{1-\alpha}d\mu \nonumber\\ & \leq 1-\int(\mbox{min}(f_n,g_n))^{\alpha} (\mbox{min}(f_n,g_n))^{1-\alpha}d\mu\notag\\&=1-\int\mbox{min}(f_n,g_n)d\mu\nonumber\\ & = (1/2)(2-2\int\mbox{min}(f_n,g_n)d\mu) \nonumber\\ & = (1/2)\int\lvert f_n-g_n\rvert d\mu=\mbox{TV}(f_n,g_n).\end{aligned}$$ This result shows that if $TV(f_n,g_n)\rightarrow 0$, then $D_{\alpha}(f_n,g_n)$ also tends to 0 for all $\alpha\in(0,1)$. Additionally, taking $\alpha=1/2$ and using $D_{1/2}(f_n,g_n)=2H^2(f_n,g_n)$ yields the inequality $H^2(f_n,g_n)\leq 2\mbox{TV}(f_n,g_n)$. Another result, attributed to Le Cam and presented as an exercise in [@wainwright], provides an upper bound for $TV(f_n,g_n)$ in terms of $H(f_n,g_n)$; it is stated below.

**Lemma 16**. *$[\mbox{TV}(f_n,g_n)]^2\leq H^2(f_n,g_n)\left[1-\frac{1}{4}H^2(f_n,g_n)\right] \leq H^2(f_n,g_n)$.*

Hence, Lemmas [Lemma 15](#normequiv1){reference-type="ref" reference="normequiv1"} and [Lemma 16](#normequiv2){reference-type="ref" reference="normequiv2"} have an important consequence, establishing the following convergence equivalence: $$\begin{aligned} \label{normequiv3} H(f_n,g_n)\rightarrow 0\equiv TV(f_n,g_n)\rightarrow 0\equiv D_{\alpha}(f_n,g_n) \rightarrow 0 \equiv R_{\alpha}(f_n,g_n) \rightarrow 0,\end{aligned}$$ for all $0<\alpha<1$ as $n\to\infty$. The equivalence between TV and Hellinger distances mentioned above is also stated in [@gibbs], but that reference does not discuss the general Rényi or $\alpha$ divergence. Further details on other available norms and convergences can be found in [@ghosh]. By utilizing the convergence equivalence stated in ([\[normequiv3\]](#normequiv3){reference-type="ref" reference="normequiv3"}), we can infer that in Theorems [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}, [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"}, and [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"}, the TV norm can be substituted with the Hellinger distance, general Rényi divergence, or $\alpha$ divergence for any $0<\alpha<1$. This extension allows for a wider range of norms to be applied, thereby broadening the scope of our results.

# Discussion {#sec11}

This article focuses on establishing high-dimensional Bernstein von Mises (BvM) results for covariance and precision matrices within an independent and identically distributed Gaussian framework. In the unstructured setting, we establish BvM for $\mbox{\boldmath$\Sigma$}$ (and for $\mbox{\boldmath$\Omega$}$) under mild regularity assumptions on the number of variables and the true data-generating mechanism, and for a general class of priors (Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"} and Theorem [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"}). Next, we consider concentration graphical models where sparsity is introduced in the precision matrix to reduce the effective number of parameters. For this specific model, we establish BvM results for the sparse $\mbox{\boldmath$\Omega$}$ under mild regularity assumptions on the number of variables and the true data-generating mechanism, as well as on the priors (Theorem [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"}). Another common approach to introduce a low-dimensional structure in the covariance matrix is to induce sparsity in the Cholesky parameter of the precision matrix (rather than the precision matrix itself).
The sparsity patterns in these matrices can be uniquely represented using appropriate directed graphs, leading to models known as directed acyclic graph models [@cao; @geiger; @smith]. However, it should be noted that without the assumption of decomposability, the precision matrix and its Cholesky parameter are not guaranteed to share the exact same sparsity structure. Hence, establishing BvM results for a general directed acyclic graph model and for more complex covariance structures remains an open problem. Nevertheless, Theorem [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"} represents a promising step in this direction.

# Appendix {#appendix .unnumbered}

# Proofs for Section [4](#sec4){reference-type="ref" reference="sec4"} {#proofs-for-section-sec4}

## Proof of Lemma [Lemma 4](#postcont1){reference-type="ref" reference="postcont1"} {#proof-of-lemma-postcont1}

It is well established that the posterior contraction rate under the IW prior is $\sqrt{p_n/n}$; see, for instance, Lemma 2.1 in [@chaogao]. Therefore, our main objective is to demonstrate that this prior is flat around $\mbox{\boldmath$S$}$. Using ([\[prior1s\]](#prior1s){reference-type="ref" reference="prior1s"}), we can express the following quantity as follows $$\begin{aligned} \label{a1} \frac{\pi_{1}^{IW}\left(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}\right)}{\pi_{1}^{IW}(\mbox{\boldmath$S$})}= &\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)^{-(\nu+2p_n)/2}\notag\\&\times \exp\left(-\mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_1\left[(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})^{-1}-\mbox{\boldmath$S$}^{-1}\right]\right)/2\right).\end{aligned}$$ Hence, using ([\[a1\]](#a1){reference-type="ref" reference="a1"}), we can write $$\begin{aligned} \label{a2} &\lvert\log(\pi_{1}^{IW}\left(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})\right)-\log(\pi_{1}^{IW}(\mbox{\boldmath$S$}))\rvert \notag\\ \leq & \frac{(\nu+2p_n)}{2}\sum_{i=1}^{p_n} \left\lvert\log\left(1+\frac{\lambda_i(\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2})}{\sqrt{n}}\right)\right\rvert+ \notag\\ & \mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$S$}^{-1}\left(\mbox{\boldmath$I$}_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right)^{-1}\right),\end{aligned}$$ where $\lambda_i(\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2})$ represents the $i$-th eigenvalue of the matrix $\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}$, with $1\leq i \leq p_n$. The last step follows from Woodbury's matrix identity. Now, let us define the set $D_n:= \{\mbox{\boldmath$T$}_1\in G_{1n}\mid\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 \leq M\sqrt{p_n}\}$, where $M$ is a positive constant. Also, recall the definition of the set $H_n$ from ([\[h\]](#h){reference-type="ref" reference="h"}).
Hence, for sufficiently large $n$, within the set $H_n$ and uniformly on $D_n$, we can express ([\[a2\]](#a2){reference-type="ref" reference="a2"}) as follows: $$\begin{aligned} \label{a3} \lvert\log(\pi_{1}^{IW}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}))-\log(\pi_{1}^{IW}(\mbox{\boldmath$S$}))\rvert & \leq \frac{p_n(\nu+2p_n)}{2}\frac{\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2}{\sqrt{n}}\notag\\&+ p_n \left\lVert\mbox{\boldmath$\Psi$}_1\mbox{\boldmath$S$}^{-1}\right\rVert_2 \frac{1}{1+\frac{\sqrt{n}}{\left\lVert\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}\right\rVert_2}}\notag \\ &\leq M^{*}\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ where $M^*$ is an appropriate constant and we are using the fact that $\left\lVert\mbox{\boldmath$\Psi$}_1\right\rVert_2 = O(p_n)$. The last step follows from utilizing the sets $D_n$ and $H_n$. Now, for sufficiently large $n$, we can state that within the set $H_n$, the following inequality holds: $$\begin{aligned} \label{a4} \underset{\mbox{\boldmath$T$}_1\in D_n}{\sup}\;\left\lvert \frac{\pi_{1}^{IW}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}^{IW}(\mbox{\boldmath$S$})}-1\right\rvert \leq M^{*}\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ which converges in probability to $0$ using Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"} and since $\mathbb{P}_{0}(H_n)\to 1$ as $n \to \infty$. The proof is now complete.

## Proof of Lemma [Lemma 5](#postcont2){reference-type="ref" reference="postcont2"} {#proof-of-lemma-postcont2}

Sarkar et al. [@sarkar] proved (Theorem 4.1) that under Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"} and the condition $p_n=o(n)$ (which is a weaker condition than Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"}), the DSIW prior attains a posterior contraction rate of $\sqrt{p_n/n}$. Additionally, they need the assumption that there exists a constant $k$ (independent of $n$) such that $\pi_i(x)$ decreases in $x$ for $x > k$ for every $1 \leq i \leq p_n$. Hence, all that remains is to prove that the DSIW prior is flat around $\mbox{\boldmath$S$}$. Using ([\[prior2s\]](#prior2s){reference-type="ref" reference="prior2s"}), we can express the following quantity as follows $$\begin{aligned} \label{a5} \frac{\pi_{1}^{DSIW}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}^{DSIW}(\mbox{\boldmath$S$})}= &\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)^{-(\nu+2p_n)/2}\notag\\&\times \prod_{i=1}^{p_n} \frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i},\end{aligned}$$ where for a matrix $\mbox{\boldmath$A$}$, we denote its $i$-th diagonal element by $(\mbox{\boldmath$A$})_i$.
From ([\[a5\]](#a5){reference-type="ref" reference="a5"}), it is evident that $$\begin{aligned} \label{a6} &\lvert\log(\pi_{1}^{DSIW}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}))-\log(\pi_{1}^{DSIW}(\mbox{\boldmath$S$}))\rvert\notag\\ &\leq \frac{(\nu+2p_n)}{2} \left\lvert \log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)\right)\right\rvert\notag \\ &+ \sum_{i=1}^{p_n}\left\lvert\log\left(\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}\right)\right\rvert.\end{aligned}$$ Once again, let us consider the set $D_n = \{\mbox{\boldmath$T$}_1\in G_{1n}\mid\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 \leq M\sqrt{p_n}\}$, where $M$ is a positive constant. Also, recall the definition of the set $H_n$ from ([\[h\]](#h){reference-type="ref" reference="h"}). Now, the first term of ([\[a6\]](#a6){reference-type="ref" reference="a6"}) can be bounded similarly to ([\[a2\]](#a2){reference-type="ref" reference="a2"}), and thus, for sufficiently large $n$ within the set $H_n$ and uniformly on $D_n$, we have $$\begin{aligned} \label{a7} \frac{(\nu+2p_n)}{2} \left\lvert\log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)\right)\right\rvert \leq M^*\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ where $M^{*}$ is a constant that depends on $M$. Before addressing the second term in ([\[a6\]](#a6){reference-type="ref" reference="a6"}), let us examine the following expression: $$\begin{aligned} \label{a8} &\left\lvert\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}-1\right\rvert\notag\\ &\leq\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)\exp{\left(k_{n,p}\delta_i/2\right)}d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}-1,\end{aligned}$$ where $k_{n,p}=\lvert(\mbox{\boldmath$S$}+\sqrt{n}\mbox{\boldmath$S$}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$})^{-1}_i\rvert$, and the last step follows from Woodbury's matrix identity.
Assuming the existence of a constant $k$ (independent of $n$) such that $\pi_i(x)$ decreases in $x$ for $x > k$ for every $1 \leq i \leq p_n$, we can rewrite ([\[a8\]](#a8){reference-type="ref" reference="a8"}) as follows $$\begin{aligned} \label{a9} &\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\exp{\left(k_{n,p}\delta_i/2\right)}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}-1\notag\\ &\leq (\exp{\left(cp_nk_{n,p}\right)}-1)+\frac{\pi_i(cp_n)\int_{cp_n}^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-((\mbox{\boldmath$S$})^{-1}_i-k_{n,p})\delta_i/2\}}d\delta_i}{\pi_i(cp_n)\int_{k}^{cp_n}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}d\delta_i}\notag\\ &= (\exp{\left(cp_nk_{n,p}\right)}-1) + \frac{P(Z_1\geq cp_n)}{P(Z_2 \geq k)-P(Z_2\geq cp_n)}\left\{\frac{(\mbox{\boldmath$S$})^{-1}_i}{(\mbox{\boldmath$S$})^{-1}_i-k_{n,p}}\right\}^{(\nu+p_n-1)/2},\end{aligned}$$ where $c$ is a constant that will be chosen accordingly. Also, $((\mbox{\boldmath$S$})^{-1}_i-k_{n,p})Z_1$ and $(\mbox{\boldmath$S$})^{-1}_iZ_2$ are identically distributed as a $\chi^2$ distribution with $(\nu+p_n-1)/2$ degrees of freedom. Within the sets $D_n$ and $H_n$, it can be verified that $k_{n,p}=O(\sqrt{p_n/n})$. Thus, it is easy to observe that within the sets $D_n$ and $H_n$, we have $$\begin{aligned} \label{a10} \left\{\frac{(\mbox{\boldmath$S$})^{-1}_i}{(\mbox{\boldmath$S$})^{-1}_i-k_{n,p}}\right\}^{(\nu+p_n-1)/2} \leq \exp{(c_1 p_nk_{n,p})},\end{aligned}$$ where $c_1$ is a constant. According to Armagan et al. [@armagan], for all $m > 0$, $P(\chi^2_m \geq x) \leq \exp(-x/4)$ whenever $x \geq 8m$. Combining this inequality with the concentration inequality for the $\chi^2$ distribution from Cao et al. [@cao Lemma 4.1] and with ([\[a10\]](#a10){reference-type="ref" reference="a10"}), and taking into account that within the sets $D_n$ and $H_n$ the value of $k_{n,p}$ is $O(\sqrt{p_n/n})$, we can bound ([\[a9\]](#a9){reference-type="ref" reference="a9"}) in the subsequent manner $$\begin{aligned} \label{a11} &\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\exp{\left(k_{n,p}\delta_i/2\right)}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}-1\notag\\&\leq (\exp{\left(cp_nk_{n,p}\right)}-1)+\frac{\exp{(-c_2p_n)}}{1-\exp{(-c_3p_n)-\exp{(-c_4p_n)}}} \exp{(c_1 p_nk_{n,p})},\end{aligned}$$ where $c_2$, $c_3$, and $c_4$ are all appropriately chosen constants. For $x$ sufficiently close to $1$, it is known that $\lvert\log(x)\rvert\leq2\lvert x-1\rvert$.
From ([\[a11\]](#a11){reference-type="ref" reference="a11"}) and by assuming Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"} for sufficiently large $n$, the second term of ([\[a6\]](#a6){reference-type="ref" reference="a6"}) can be written as follows: $$\begin{aligned} \label{a12} &\sum_{i=1}^{p_n}\left\lvert\log\left(\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}\right)\right\rvert\notag\\&\leq 2\sum_{i=1}^{p_n}\left\lvert\frac{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}{\int_0^{\infty}\delta_i^{(\nu+p_n-1)/2}\exp{\{-(\mbox{\boldmath$S$})^{-1}_i\delta_i/2\}}\pi_i(\delta_i)d\delta_i}-1\right\rvert\notag\\&\leq 2p_n(\exp{\left(cp_nk_{n,p}\right)}-1)+\frac{2p_n\exp{(-c_2p_n)}}{1-\exp{(-c_3p_n)-\exp{(-c_4p_n)}}} \exp{(c_1 p_nk_{n,p})}\notag\\&\leq 2M^{**} \sqrt{\frac{p_n^5}{n}},\end{aligned}$$ where $M^{**}$ is a constant. The last step follows from using the fact that $k_{n,p}=O(\sqrt{p_n/n})$. By combining ([\[a7\]](#a7){reference-type="ref" reference="a7"}) and ([\[a12\]](#a12){reference-type="ref" reference="a12"}), we can write the following expression within the set $H_n$ $$\begin{aligned} \label{a13} \underset{\mbox{\boldmath$T$}_1\in D_n}{\sup}\left\lvert \frac{\pi_{1}^{DSIW}(\mbox{\boldmath$S$}+\dfrac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}^{DSIW}(\mbox{\boldmath$S$})}-1\right\rvert \leq 2\max\{M^{*},M^{**}\}\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ which converges in probability to $0$ as $n$ approaches infinity since $\mathbb{P}_{0}(H_n)\to 1$ as $n \to \infty$. Therefore, the proof is now complete.

## Proof of Lemma [Lemma 6](#postcont3){reference-type="ref" reference="postcont3"} {#proof-of-lemma-postcont3}

Sarkar et al. [@sarkar] (Theorem 4.2) established that, assuming Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"} and the condition $p_n=o(n)$ (which is less stringent than Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"}), the matrix-$F$ prior exhibits a rate of posterior contraction of $\sqrt{p_n/n}$ when $\nu^*=O(p_n)$. Consequently, it suffices to demonstrate that the matrix-$F$ prior is flat around $\mbox{\boldmath$S$}$.
Using ([\[prior3s\]](#prior3s){reference-type="ref" reference="prior3s"}), we can express the following quantity as follows $$\begin{aligned} \label{a14} \frac{\pi_{1}^{F}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}^{F}(\mbox{\boldmath$S$})}= &\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)^{-(\nu+2p_n)/2}\notag\\&\times \left(\frac{\det\left((\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}{\det\left(\mbox{\boldmath$S$}^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}\right)^{-(\nu^*+\nu+p_n-1)/2}.\end{aligned}$$ It is evident from ([\[a14\]](#a14){reference-type="ref" reference="a14"}) that: $$\begin{aligned} \label{a15} &\lvert\log(\pi_{1}^{F}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}}))-\log(\pi_{1}^{F}(\mbox{\boldmath$S$}))\rvert\notag\\ &\leq \frac{(\nu+2p_n)}{2} \left\lvert \log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)\right)\right\rvert\notag \\ &+ \frac{(\nu^*+\nu+p_n-1)}{2} \left\lvert\log\left(\frac{\det\left((\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}{\det\left(\mbox{\boldmath$S$}^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}\right)\right\rvert\end{aligned}$$ Let us now consider the set $D_n = \{\mbox{\boldmath$T$}_1\in G_{1n}\mid\left\lVert\mbox{\boldmath$T$}_1\right\rVert_2 \leq M\sqrt{p_n}\}$, where $M$ is a positive constant. Additionally, recall the definition of the set $H_n$ from ([\[h\]](#h){reference-type="ref" reference="h"}). Consequently, the first term in ([\[a15\]](#a15){reference-type="ref" reference="a15"}) can be bounded in a similar manner as ([\[a2\]](#a2){reference-type="ref" reference="a2"}). Hence, for sufficiently large $n$ within the set $H_n$ and uniformly on $D_n$, the following inequality holds: $$\begin{aligned} \label{a16} \frac{(\nu+2p_n)}{2}\left\lvert\log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{-1/2}\mbox{\boldmath$T$}_1\mbox{\boldmath$S$}^{-1/2}}{\sqrt{n}}\right)\right)\right\rvert \leq M^*\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ Here, $M^{*}$ is a constant that depends on $M$. To address the second term of ([\[a15\]](#a15){reference-type="ref" reference="a15"}), it is worth noting that Woodbury's identity can be used to express the following $$\begin{aligned} \label{a17} &\det\left(\left((\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)\left(\mbox{\boldmath$S$}^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)^{-1}\right)\notag\\=&\det\left(I_{p_n}-\mbox{\boldmath$\Psi$}^{*}\left(I_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right)^{-1}\mbox{\boldmath$\Psi$}^{*}\right),\end{aligned}$$ where $\mbox{\boldmath$\Psi$}^{*}=\left(I_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$\Psi$}_2^{-1}\mbox{\boldmath$S$}^{1/2}\right)$. 
By employing ([\[a17\]](#a17){reference-type="ref" reference="a17"}), it can be easily verified, similar to ([\[a3\]](#a3){reference-type="ref" reference="a3"}), that for sufficiently large $n$, uniformly on $D_n$ and under the set $H_n$ $$\begin{aligned} \label{a18} &\frac{(\nu^*+\nu+p_n-1)}{2}\left\lvert\log\left(\frac{\det\left((\mbox{\boldmath$S$}+\mbox{\boldmath$T$}_1/\sqrt{n})^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}{\det\left(\mbox{\boldmath$S$}^{-1}+\mbox{\boldmath$\Psi$}_2^{-1}\right)}\right)\right\rvert\notag\\\leq& \frac{(\nu^*+\nu+p_n-1)}{2}\sum_{i=1}^{p_n}\left\lvert1-\lambda_i\left(\mbox{\boldmath$\Psi$}^{*}\left(I_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right)^{-1}\mbox{\boldmath$\Psi$}^{*}\right)\right\rvert\notag\\\leq&\;p_n \frac{(\nu^*+\nu+p_n-1)}{2} \left\lVert\mbox{\boldmath$\Psi$}^{*}\right\rVert_2^2\frac{1}{1+\frac{\sqrt{n}}{\left\lVert\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right\rVert_2}}\notag\\\leq &M^{**}\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ where $\lambda_i\left(\mbox{\boldmath$\Psi$}^{*}\left(I_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right)^{-1}\mbox{\boldmath$\Psi$}^{*}\right)$ represents the $i$-th eigenvalue of the matrix $\left(\mbox{\boldmath$\Psi$}^{*}\left(I_{p_n}+\sqrt{n}\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_1^{-1}\mbox{\boldmath$S$}^{1/2}\right)^{-1}\mbox{\boldmath$\Psi$}^{*}\right)$. By combining ([\[a16\]](#a16){reference-type="ref" reference="a16"}) and ([\[a18\]](#a18){reference-type="ref" reference="a18"}), we can express the following expression within the set $H_n$, for sufficiently large $n$ $$\begin{aligned} \label{a19} \underset{\mbox{\boldmath$T$}_1\in D_n}{\sup}\left\lvert \frac{\pi_{1}^{F}(\mbox{\boldmath$S$}+\frac{\mbox{\boldmath$T$}_1}{\sqrt{n}})}{\pi_{1}^{F}(\mbox{\boldmath$S$})}-1\right\rvert \leq 2\max\{M^{*},M^{**}\}\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ This expression converges in probability to $0$ as $n$ tends to infinity, since $\mathbb{P}_{0}(H_n)\to 1$ as $n \to \infty$. Thus, the proof is now complete. ## Proof of Lemma [Lemma 8](#postcont4){reference-type="ref" reference="postcont4"} {#proof-of-lemma-postcont4} Since for the covariance matrix $\mbox{\boldmath$\Sigma$}$ the rate of posterior contraction is $\epsilon_n$, we can write the following $$\begin{aligned} \label{a20} \Pi_n(\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2 > \epsilon_n \mid \mbox{\boldmath$Y_n$}) \overset{P}{\to} 0\end{aligned}$$ This convergence holds true as n approaches infinity, under the true data generating mechanism $\mathbb{P}_{0}$, where $\epsilon_n$ is a sequence that tends to zero. Here, $\mbox{\boldmath$\Sigma$}_{0}$ represents the true value of the covariance matrix. Let us consider the precision matrix $\mbox{\boldmath$\Omega$}$ as the inverse of $\mbox{\boldmath$\Sigma$}$, and $\mbox{\boldmath$\Omega$}_{0}$ as the true value of the precision matrix. 
Within the set of covariance matrices satisfying the condition $\{\mbox{\boldmath$\Sigma$}\lvert\;\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2 \leq \epsilon_n\}$, we get the following inequality: $$\begin{aligned} \label{a21} \left\lVert\mbox{\boldmath$\Omega$}-\mbox{\boldmath$\Omega$}_0\right\rVert_2&\leq\left\lVert\mbox{\boldmath$\Omega$}\right\rVert_2\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2\left\lVert\mbox{\boldmath$\Omega$}_0\right\rVert_2\leq \epsilon_n\left\lVert\mbox{\boldmath$\Omega$}_0\right\rVert_2(\left\lVert\mbox{\boldmath$\Omega$}-\mbox{\boldmath$\Omega$}_0\right\rVert_2+\left\lVert\mbox{\boldmath$\Omega$}_0\right\rVert_2)\end{aligned}$$ Hence, for sufficiently large $n$ and under Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"}, it can be easily shown from equation ([\[a21\]](#a21){reference-type="ref" reference="a21"}) that $$\begin{aligned} \left\lVert\mbox{\boldmath$\Omega$}-\mbox{\boldmath$\Omega$}_0\right\rVert_2\leq M^{*} \epsilon_n,\end{aligned}$$ whenever $\left\lVert\mbox{\boldmath$\Sigma$}-\mbox{\boldmath$\Sigma$}_{0}\right\rVert_2 \leq \epsilon_n$. Consequently, we can conclude that $$\begin{aligned} \Pi_n( \left\lVert\mbox{\boldmath$\Omega$}-\mbox{\boldmath$\Omega$}_0\right\rVert_2 > M^{*}\epsilon_n \mid \mbox{\boldmath$Y_n$}) \overset{P}{\to}0 \end{aligned}$$ as $n$ approaches infinity, under the true data-generating mechanism $\mathbb{P}_{0}$, with $\epsilon_n$ being a sequence that tends to zero. Therefore, our assertion is valid.

## Proof of Theorem [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"} {#proof_th_omega}

To begin with, note that, similar to ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}), for any constant $M>0$ we can write the following $$\begin{aligned} \label{th2:1} \int_{G_{2n}} \lvert \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_2\leq &\int_{F_{2n}} \lvert \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_2\notag\\& +\int_{ F_{2n}^c} \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_2\notag\\&+\int_{F_{2n}^c}\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_2,\end{aligned}$$ where $F_{2n}:=\{\mbox{\boldmath$T$}_2\in G_{2n}\;\mid\;\left\lVert\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2}\right\rVert_2\leq M\sqrt{p_n}\}$. The final term of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}) can be treated in a similar manner as demonstrated in ([\[th1:2\]](#th1:2){reference-type="ref" reference="th1:2"}-[\[th1:3\]](#th1:3){reference-type="ref" reference="th1:3"}). By recognizing that $\mbox{\boldmath$T$}_2\sim\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}^{-1}\otimes\mbox{\boldmath$S$}^{-1})\mbox{\boldmath$B$}_p)$ and applying Lemma [Lemma 10](#smn){reference-type="ref" reference="smn"}, we can deduce that $\int_{F_{2n}^c}\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_2$ tends to zero as $n$ approaches infinity.
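As a quick sanity check of the tail behaviour just invoked, the following minimal Monte Carlo sketch in Python estimates the probability that the spectral norm of a symmetric Gaussian matrix exceeds $c_1\sqrt{p}$, mirroring the bound established in the proof of Lemma [Lemma 10](#smn){reference-type="ref" reference="smn"}. For illustration only, the $\mathcal{SMN}$ law is approximated by a symmetric matrix with independent standard normal entries on and above the diagonal; this identification, the choice $c_1=5$, and the sample sizes are assumptions, not part of the formal argument.

```python
import numpy as np

rng = np.random.default_rng(0)
p, reps = 50, 2000
c1 = 5.0                                   # any c1 > sqrt(8 log 18) makes the analytic bound useful
exceed = 0
for _ in range(reps):
    B = rng.standard_normal((p, p))
    A = np.triu(B) + np.triu(B, 1).T       # symmetric, independent N(0,1) entries on/above the diagonal
    if np.linalg.norm(A, 2) >= c1 * np.sqrt(p):
        exceed += 1

analytic = np.exp(-(c1 ** 2 / 8 - np.log(18)) * p)   # exp(-c2 p) with c2 = c1^2/8 - log(18)
print("empirical exceedance frequency:", exceed / reps)
print("analytic bound exp(-c2 p):     ", analytic)
```

Both quantities are essentially zero, which is what the proof of Theorem [\[th_omega\]](#th_omega){reference-type="ref" reference="th_omega"} needs from the third term of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}).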
To address the second term of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}), we will leverage Lemma [Lemma 8](#postcont4){reference-type="ref" reference="postcont4"}, which states that under Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"}, the posterior contraction rate of $\mbox{\boldmath$\Omega$}$ will resemble that of $\mbox{\boldmath$\Sigma$}$. Therefore, based on Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"}, we can assert that the contraction rate for the posterior distribution of $\mbox{\boldmath$\Omega$}$ is also $\sqrt{p_n/n}$. Additionally, it is known, under Assumptions [Assumption 1](#as1){reference-type="ref" reference="as1"} and [Assumption 2](#as2){reference-type="ref" reference="as2"}, that $\mbox{\boldmath$S$}^{-1}$ exhibits a convergence rate of $\sqrt{p_n/n}$. By combining these insights and employing the set $H_n$ in a similar manner as in ([\[th1:4\]](#th1:4){reference-type="ref" reference="th1:4"}), we can state that $\int_{ F_{2n}^c} \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_2$ converges to zero in probability as $n$ tends to infinity, under the true model $\mathbb{P}_{0}$. We have thus established that both the second and third terms of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}) converge in probability to zero as $n$ approaches infinity. Therefore, it suffices to demonstrate that the first term of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}) also converges in probability to zero. Analogous to ([\[th1:6\]](#th1:6){reference-type="ref" reference="th1:6"}), we can represent the first term of ([\[th2:1\]](#th2:1){reference-type="ref" reference="th2:1"}) as follows $$\begin{aligned} \label{th2:2} &\int_{F_{2n}} \lvert \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\rvert\; d\mbox{\boldmath$T$}_2\notag\\ & \leq \int_{ F_{2n}^c} \pi_{2n}(\mbox{\boldmath$T$}_2\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_2+\int_{F_{2n}^c}\phi(\mbox{\boldmath$T$}_2;\mbox{\boldmath$S$})\; d\mbox{\boldmath$T$}_2+ 3\left(\int\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}\right)\; d\mbox{\boldmath$T$}_2\right)^{-1}\notag\\&\times\int_{F_{2n}}\left\lvert M_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}+\dfrac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\right)-\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}\right)\right\rvert \; d\mbox{\boldmath$T$}_2,\end{aligned}$$ This step follows from Lemma A.1 of Ghosal (1999) [@ghosal1].
Since we have already demonstrated that both the first and second terms of ([\[th2:2\]](#th2:2){reference-type="ref" reference="th2:2"}) converge to zero in probability as $n$ approaches infinity, we can address the final part by proceeding as in ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}), which yields the following $$\begin{aligned} \label{th2:3} &\left(\int\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}\right)\; d\mbox{\boldmath$T$}_2\right)^{-1}\notag\\&\times\int_{F_{2n}}\left\lvert M_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}+\frac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\right)-\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\pi_{2}\left(\mbox{\boldmath$S$}^{-1}\right)\right\rvert d\mbox{\boldmath$T$}_2\notag\\&\leq \underset{\mbox{\boldmath$T$}_2\in F_{2n}}{\sup}\;\left\lvert\frac{\pi_{2}(\mbox{\boldmath$S$}^{-1}+\frac{\mbox{\boldmath$T$}_2}{\sqrt{n}})}{\pi_{2}(\mbox{\boldmath$S$}^{-1})}-1\right\rvert\frac{\int_{F_{2n}}M_{2n}(\mbox{\boldmath$T$}_2)\; d\mbox{\boldmath$T$}_2}{\int \Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\; d\mbox{\boldmath$T$}_2}\notag\\&+ \frac{\int_{F_{2n}}\lvert M_{2n}(\mbox{\boldmath$T$}_2)-\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\rvert\; d\mbox{\boldmath$T$}_2}{\int \Tilde{M}_{2n}(\mbox{\boldmath$T$}_2) d\mbox{\boldmath$T$}_2},\end{aligned}$$ where $\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)=\exp\{-\mathop{\mathrm{tr}}(\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2})^2/4\}$ is precisely the kernel of the probability density function of the $\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$S$}^{-1}\otimes\mbox{\boldmath$S$}^{-1})\mbox{\boldmath$B$}_p)$ distribution. Once again, we can assert that $$\left\lvert\frac{\pi_{2}(\mbox{\boldmath$S$}^{-1}+\dfrac{\mbox{\boldmath$T$}_2}{\sqrt{n}})}{\pi_{2}(\mbox{\boldmath$S$}^{-1})}-1\right\rvert\overset{P}{\to}0,$$ as $n$ approaches infinity, utilizing Assumption [Assumption 1](#as1){reference-type="ref" reference="as1"} and Assumption [Assumption 3](#as3){reference-type="ref" reference="as3"} (pertaining to the flatness of the induced prior on $\mbox{\boldmath$\Omega$}$ around $\mbox{\boldmath$S$}^{-1}$). Hence, similar to ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}), it is sufficient to establish that the final term of ([\[th2:3\]](#th2:3){reference-type="ref" reference="th2:3"}) converges to $0$ in probability. It can be noted from ([\[m2\]](#m2){reference-type="ref" reference="m2"}) and ([\[likelihood2\]](#likelihood2){reference-type="ref" reference="likelihood2"}) that $$\begin{aligned} \label{th2:4} &\log(M_{2n}(\mbox{\boldmath$T$}_2))=l_{2n}\left(\mbox{\boldmath$S$}^{-1}+\frac{\mbox{\boldmath$T$}_2}{\sqrt{n}}\right)-l_{2n}\left(\mbox{\boldmath$S$}^{-1}\right)\notag\\=&\frac{n}{2}\log\left(\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2}}{\sqrt{n}}\right)\right)- \frac{\sqrt{n}}{2} \mathop{\mathrm{tr}}\left(\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2}\right)\notag\\=& \frac{n}{2} \sum_{i=1}^p\left\{\log(1+\frac{\lambda^{**}_i}{\sqrt{n}})-\frac{\lambda^{**}_i}{\sqrt{n}}\right\},\end{aligned}$$ where $\lambda^{**}_i$ is the $i^{th}$ eigenvalue of the matrix $\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2}$.
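Before expanding ([\[th2:4\]](#th2:4){reference-type="ref" reference="th2:4"}) analytically, the following minimal numerical sketch in Python illustrates that $\log(M_{2n}(\mbox{\boldmath$T$}_2))$ is close to the Gaussian kernel exponent $-\frac{1}{4}\mathop{\mathrm{tr}}(\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2})^2$ once $n$ is large relative to $p_n^5$. The choices $\mbox{\boldmath$S$}=\mbox{\boldmath$I$}_p$, a random symmetric $\mbox{\boldmath$T$}_2$, and the specific values of $n$ and $p$ are toy assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1_000_000, 20
# toy stand-ins: with S = I_p the eigenvalues of S^{1/2} T_2 S^{1/2} are just those of T_2
B = rng.standard_normal((p, p))
T2 = (B + B.T) / 2.0                               # spectral norm O(sqrt(p)), as on F_{2n}
lam = np.linalg.eigvalsh(T2)

log_M2n = 0.5 * n * np.sum(np.log1p(lam / np.sqrt(n)) - lam / np.sqrt(n))   # eq. (th2:4)
quad    = -0.25 * np.sum(lam ** 2)                 # log of the SMN kernel tilde{M}_{2n}

print("log M_2n      :", log_M2n)
print("quadratic term:", quad)
print("gap (should be O(p ||.||^3 / sqrt(n))):", abs(log_M2n - quad))
```

The observed gap shrinks as $n$ grows with $p$ fixed, in line with the cubic remainder bound derived next.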
By using a second-order Taylor expansion with the integral form of the remainder, we can represent ([\[th2:4\]](#th2:4){reference-type="ref" reference="th2:4"}) as follows $$\begin{aligned} \label{th2:5} &\frac{n}{2} \sum_{i=1}^p\left\{\log(1+\frac{\lambda^{**}_i}{\sqrt{n}})-\frac{\lambda^{**}_i}{\sqrt{n}}\right\}\notag\\=&-\frac{1}{4}\sum_{i=1}^p (\lambda^{**}_i)^2 + n\sum_{i=1}^p\int_{0}^{\frac{\lambda^{**}_i}{\sqrt{n}}} \frac{1}{(1+t)^3}\left(\frac{\lambda^{**}_i}{\sqrt{n}}-t\right)^2dt\notag\\=&-\frac{1}{4}\mathop{\mathrm{tr}}(\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2})^2+ n\sum_{i=1}^p\int_{0}^{\frac{\lambda^{**}_i}{\sqrt{n}}} \frac{1}{(1+t)^3}\left(\frac{\lambda^{**}_i}{\sqrt{n}}-t\right)^2dt.\end{aligned}$$ Based on the expression for $\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)$, it follows from ([\[th2:4\]](#th2:4){reference-type="ref" reference="th2:4"}-[\[th2:5\]](#th2:5){reference-type="ref" reference="th2:5"}) that $$\begin{aligned} \label{th2:6} \lvert &\log(M_{2n}(\mbox{\boldmath$T$}_2)) - \log(\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)) \rvert = n\left\lvert \sum_{i=1}^p\int_{0}^{\frac{\lambda^{**}_i}{\sqrt{n}}} \frac{1}{(1+t)^3}\left(\frac{\lambda^{**}_i}{\sqrt{n}}-t\right)^2\;dt\right\rvert \notag\\\leq &\;n\sum_{i=1}^p \left\lvert \int_{0}^{\frac{\lambda^{**}_i}{\sqrt{n}}}\left(\frac{\lambda^{**}_i}{\sqrt{n}}-t\right)^2dt\right\rvert=\sum_{i=1}^p\frac{\lvert\lambda^{**}_i\rvert^3}{3\sqrt{n}}\leq \frac{p}{\sqrt{n}}\left\lVert\mbox{\boldmath$S$}^{1/2}\mbox{\boldmath$T$}_2\mbox{\boldmath$S$}^{1/2}\right\rVert_2^3.\end{aligned}$$ On $F_{2n}$, the final term of ([\[th2:6\]](#th2:6){reference-type="ref" reference="th2:6"}) can be bounded uniformly by a constant multiple of $\sqrt{p_n^5/n}$. By applying Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"} and considering a sufficiently large $n$, we can deduce from ([\[th2:6\]](#th2:6){reference-type="ref" reference="th2:6"}) that $$\begin{aligned} \label{th2:7} \left\lvert\frac{M_{2n}(\mbox{\boldmath$T$}_2)}{\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)}-1\right\rvert\leq 2 \lvert \log(M_{2n}(\mbox{\boldmath$T$}_2)) - \log(\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)) \rvert \leq 2M\sqrt{\frac{p_n^5}{n}},\end{aligned}$$ uniformly on $F_{2n}$. Drawing upon all the results obtained thus far, we can represent the second term of ([\[th2:3\]](#th2:3){reference-type="ref" reference="th2:3"}) in the following manner $$\begin{aligned} \label{th2:8} \frac{\int_{F_{2n}}\lvert M_{2n}(\mbox{\boldmath$T$}_2)-\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\rvert\; d\mbox{\boldmath$T$}_2}{\int \Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\; d\mbox{\boldmath$T$}_2} \leq 2M\sqrt{\frac{p_n^5}{n}} \frac{\int_{F_{2n}}\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\; d\mbox{\boldmath$T$}_2}{\int\Tilde{M}_{2n}(\mbox{\boldmath$T$}_2)\; d\mbox{\boldmath$T$}_2}\leq 2M\sqrt{\frac{p_n^5}{n}} \to 0,\end{aligned}$$ as $n$ tends to infinity, utilizing Assumption [Assumption 2](#as2){reference-type="ref" reference="as2"}. The proof is now complete.

# Proofs for Section [5](#sec5){reference-type="ref" reference="sec5"} {#proofs-for-section-sec5}

## Proof of Lemma [Lemma 10](#smn){reference-type="ref" reference="smn"} {#proof-of-lemma-smn}

If $\mbox{\boldmath$A$}\sim\mathcal{SMN}_{p \times p}(\mbox{\boldmath$O$},\; \mbox{\boldmath$B$}_p^T(\mbox{\boldmath$I$}_p\otimes\mbox{\boldmath$I$}_p)\mbox{\boldmath$B$}_p)$, then it is easy to show that, for any vector $u$ with $\left\lVert u\right\rVert_2=1$, we have $u^T\mbox{\boldmath$A$}u\sim N(0,1)$.
Using the inequality $1-\Phi(t)\leq \exp(-t^2/2)$, where $\Phi(\cdot)$ is the standard normal cumulative distribution function, we can claim the following $$\begin{aligned} \label{b1} P(\lvert u^T\mbox{\boldmath$A$}u \rvert \geq c_1 \sqrt{p}/2)\leq 2 \exp{(-c_1^2p/8)}\end{aligned}$$ Let $\mathcal{N}_{\frac{1}{4}}$ be a $\frac{1}{4}$-net of $\mathcal{S}^{p-1}$, and let $\mathcal{N}(\mathcal{S}^{p-1}, {\frac{1}{4}})$ denote the minimal cardinality of such a net. From Lemma 5.2 of Vershynin [@vershynin] we know $\mathcal{N}(\mathcal{S}^{p-1}, {\frac{1}{4}})\leq9^{p}$. Also from Lemma 5.3 of Vershynin [@vershynin] it follows that $\left\lVert\mbox{\boldmath$A$}\right\rVert_2\leq 2\underset{{u\in \mathcal{N}_{\frac{1}{4}}}}{\sup} \lvert u^T\mbox{\boldmath$A$}u\rvert$. Thus $$\begin{aligned} \label{b2} P(\left\lVert\mbox{\boldmath$A$}\right\rVert_2 \geq c_1 \sqrt{p}) \leq &\;P(\underset{{u\in \mathcal{N}_{\frac{1}{4}}}}{\sup} \lvert u^T\mbox{\boldmath$A$}u\rvert \geq c_1 \sqrt{p}/2) \leq 9^{p} \underset{\left\lVert u\right\rVert_2= 1}{\sup}P(\lvert u^T\mbox{\boldmath$A$}u\rvert \geq c_1 \sqrt{p}/2),\end{aligned}$$ where the last inequality follows by taking a union over all vectors $u \in \mathcal{N}_{\frac{1}{4}}$ and then applying the union-sum inequality. From the last step of ([\[b2\]](#b2){reference-type="ref" reference="b2"}) and using ([\[b1\]](#b1){reference-type="ref" reference="b1"}), we can write the following $$\begin{aligned} \label{b3} P(\left\lVert\mbox{\boldmath$A$}\right\rVert_2 \geq c_1 \sqrt{p})\leq \exp{(-c_2p)}, \end{aligned}$$ where $c_2=(c_1^2/8-\log(18))>0$ by choosing any $c_1>\sqrt{8\log(18)}$.

# Proofs for Section [8](#sec9){reference-type="ref" reference="sec9"} {#proofs-for-section-sec9}

## Proof of Lemma [Lemma 13](#gw){reference-type="ref" reference="gw"} {#proof-of-lemma-gw}

Xiang et al. [@xiang] showed (Theorem 3.1) that the $G$-Wishart prior exhibits a posterior contraction rate of $\epsilon_n$ under the Gaussian concentration model, given Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"} (in fact, conditions much weaker than Assumptions [Assumption 5](#asB){reference-type="ref" reference="asB"}-[Assumption 6](#asC){reference-type="ref" reference="asC"} suffice for this). The rate of contraction, denoted by $\epsilon_n$, can be expressed as $\epsilon_n = d_n^{5/2}(\log(p_n)/n)^{1/2}+d_n^{3/2}\gamma(d_n)$. Hence, establishing the flatness of the $G$-Wishart prior around $\hat{\mbox{\boldmath$\Omega$}}_{G_n}$ is the only remaining task.
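Before bounding the prior ratio analytically, the following minimal Python sketch evaluates the log prior ratio that appears below for a toy configuration. The normalizing constant of ([\[prior4p\]](#prior4p){reference-type="ref" reference="prior4p"}) cancels in the ratio, so only the unnormalized density is needed; the choices $\hat{\mbox{\boldmath$\Omega$}}_G=\mbox{\boldmath$I$}_p$, $\mbox{\boldmath$\Psi$}_{3n}=\mbox{\boldmath$I$}_p$, and the scaling of $\mbox{\boldmath$T$}_3$ are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, beta = 30, 100_000, 2.0
Psi3      = np.eye(p)        # ||Psi_3n||_2 <= a, as required in Lemma 13 (toy choice)
Omega_hat = np.eye(p)        # toy stand-in for the graphical MLE

B  = rng.standard_normal((p, p))
T3 = (B + B.T) / 2.0
T3 *= 0.01 * np.sqrt(n) / np.linalg.norm(T3, 2)    # ||T_3||_2 of order sqrt(n) * eps_n (toy scaling)

def log_gwishart_unnorm(Om):
    # log of the unnormalized G-Wishart density in (prior4p)
    _, logdet = np.linalg.slogdet(Om)
    return 0.5 * beta * logdet - 0.5 * np.trace(Psi3 @ Om)

log_ratio = (log_gwishart_unnorm(Omega_hat + T3 / np.sqrt(n))
             - log_gwishart_unnorm(Omega_hat))
print("|log prior ratio| near Omega_hat:", abs(log_ratio))   # small: the prior is 'flat' locally
```

The smallness of this log ratio, uniformly over perturbations of the indicated size, is exactly what the flatness condition in Assumption [Assumption 7](#asD){reference-type="ref" reference="asD"} formalizes and what the calculation below establishes.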
Using ([\[prior4p\]](#prior4p){reference-type="ref" reference="prior4p"}), we can express the following quantity as follows $$\begin{aligned} \label{c1} \frac{\pi_{3}^{WG}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)}{\pi_{3}^{WG}(\hat{\mbox{\boldmath$\Omega$}}_G)}= &\det\left(\mbox{\boldmath$I$}_{p_n}+\frac{\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}}{\sqrt{n}}\right)^{\beta/2}\notag\\&\times \exp\left(-\mathop{\mathrm{tr}}\left(\mbox{\boldmath$\Psi$}_3^{1/2}\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\mbox{\boldmath$\Psi$}_3^{1/2}\right)/2\right).\end{aligned}$$ Hence, using ([\[c1\]](#c1){reference-type="ref" reference="c1"}), we can write for sufficiently large $n$ $$\begin{aligned} \label{c2} &\lvert\log\left(\pi_{3}^{WG}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)\right)-\log(\pi_{3}^{WG}(\hat{\mbox{\boldmath$\Omega$}}_G))\rvert \notag\\ \leq & \frac{\beta}{2}\sum_{i=1}^{p_n} \left\lvert\log\left(1+\frac{\lambda_i\left(\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right)}{\sqrt{n}}\right)\right\rvert + \frac{1}{2}\sum_{i=1}^{p_n} \left\lvert\lambda_i\left(\mbox{\boldmath$\Psi$}_3^{1/2}\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\mbox{\boldmath$\Psi$}_3^{1/2}\right)\right\rvert \notag\\\leq & \frac{\beta p_n}{\sqrt{n}}\left\lVert\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right\rVert_2+\frac{p_n}{2\sqrt{n}}\left\lVert\mbox{\boldmath$\Psi$}_3^{1/2}\mbox{\boldmath$T$}_3\mbox{\boldmath$\Psi$}_3^{1/2}\right\rVert_2,\end{aligned}$$ where $\lambda_i\left(\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right)$ represents the $i$-th eigenvalue of the matrix $\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}$, with $1\leq i \leq p_n$. The last step follows from the fact that $\lvert\log(1+x)\rvert\leq 2\lvert x\rvert$ for sufficiently small $x$. Now, let us define the set $D_n^{*}:= \{\mbox{\boldmath$T$}_3\in G_{3n}\mid\left\lVert\mbox{\boldmath$T$}_3\right\rVert_2 \leq M\sqrt{n}\epsilon_n\}$, where $M$ is a positive constant. Next, we define a set $H_n^{*}:=\{(1-\eta)\lambda_{\min}(\bar{\mbox{\boldmath$\Omega$}}_n)\leq\lambda_{\min}(\hat{\mbox{\boldmath$\Omega$}}_G)\leq\lambda_{\max}(\hat{\mbox{\boldmath$\Omega$}}_G)\leq(1+\eta)\lambda_{\max}(\bar{\mbox{\boldmath$\Omega$}}_n)\}$, where $\eta\in(0,1)$ is a fixed number. It can be verified that under Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"}, $\mathbb{P}_{0,G}(H_n^*)\to 1$ as $n \to \infty$ (refer to [@xiang Lemma 3.7] for details). Also, recall that $\mbox{\boldmath$\Psi$}_{3n}$ satisfies $\left\lVert\mbox{\boldmath$\Psi$}_{3n}\right\rVert_2\leq a < \infty$. Hence, for sufficiently large $n$, within the set $H_n^*$ and uniformly on $D_n^*$, from ([\[c2\]](#c2){reference-type="ref" reference="c2"}) it follows: $$\begin{aligned} \label{c3} \lvert\log\left(\pi_{3}^{WG}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)\right)-\log(\pi_{3}^{WG}(\hat{\mbox{\boldmath$\Omega$}}_G))\rvert & \leq M^* p_n\epsilon_n,\end{aligned}$$ where $M^*$ is an appropriate constant. The last step follows from utilizing the sets $D_n^*$ and $H_n^*$.
Now, for sufficiently large $n$, we can state that within the set $H_n^*$, the following inequality holds $$\begin{aligned} \label{c4} \underset{\mbox{\boldmath$T$}_3\in D_n^*}{\sup}\;\left\lvert \frac{\pi_{3}^{WG}(\hat{\mbox{\boldmath$\Omega$}}_G+\dfrac{\mbox{\boldmath$T$}_3}{\sqrt{n}})}{\pi_{3}^{WG}(\hat{\mbox{\boldmath$\Omega$}}_G)}-1\right\rvert \leq M^{*}p_n\epsilon_n,\end{aligned}$$ which converges in probability to $0$ using Assumptions [Assumption 5](#asB){reference-type="ref" reference="asB"}-[Assumption 6](#asC){reference-type="ref" reference="asC"} and since $\mathbb{P}_{0,G}(H_n^*)\to 1$ as $n \to \infty$. Thus, the proof is now complete.

## Proof of Theorem [\[th_omega_sp\]](#th_omega_sp){reference-type="ref" reference="th_omega_sp"}

In this proof, we will use a technical lemma stated below; its proof is provided after the present one.

**Lemma 17**. *Recall the definition of $\mbox{\boldmath$S$}$ and $\hat{\mbox{\boldmath$\Omega$}}_G$ from Section [6](#sec7){reference-type="ref" reference="sec7"}. Then, for any matrix $\mbox{\boldmath$T$}\in M_{G_n}$, we have $\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\mbox{\boldmath$S$})=\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\hat{\mbox{\boldmath$\Omega$}}_G^{-1})$.*

Now we will proceed with our original proof. In a manner similar to ([\[th1:1\]](#th1:1){reference-type="ref" reference="th1:1"}), the following inequality holds for any constant $M>0$ $$\begin{aligned} \label{th3:1} \int_{G_{3n}} \lvert \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\rvert\; d\mbox{\boldmath$T$}_3\leq &\int_{F_{3n}} \lvert \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\rvert\; d\mbox{\boldmath$T$}_3\notag\\& +\int_{ F_{3n}^c} \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_3\notag\\&+\int_{F_{3n}^c}\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\; d\mbox{\boldmath$T$}_3,\end{aligned}$$ where $F_{3n}$ is defined as $F_{3n}:=\{\mbox{\boldmath$T$}_3\in G_{3n}\;\mid\;\left\lVert\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right\rVert_2\leq M\sqrt{n}\epsilon_n\}$, with $\epsilon_n$ given by $\epsilon_n=d_n^{5/2}(\log(p_n)/n)^{1/2}+d_n^{3/2}\gamma(d_n)$. Let us write the third term in ([\[th3:1\]](#th3:1){reference-type="ref" reference="th3:1"}) as $$\begin{aligned} \label{th3:2} \int_{F_{3n}^c}\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\; d\mbox{\boldmath$T$}_3=P\left(\left\lVert\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right\rVert_2\geq M\sqrt{n}\epsilon_n\right),\end{aligned}$$ where $\mbox{\boldmath$T$}_3$ follows a $\mathcal{SSMN}_{G_n}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$D$}_{G}^T(\hat{\mbox{\boldmath$\Omega$}}_G\otimes\hat{\mbox{\boldmath$\Omega$}}_G)\mbox{\boldmath$D$}_{G})$ distribution. Again, we define a set $H_n^{*}:=\{(1-\eta)\lambda_{\min}(\bar{\mbox{\boldmath$\Omega$}}_n)\leq\lambda_{\min}(\hat{\mbox{\boldmath$\Omega$}}_G)\leq\lambda_{\max}(\hat{\mbox{\boldmath$\Omega$}}_G)\leq(1+\eta)\lambda_{\max}(\bar{\mbox{\boldmath$\Omega$}}_n)\}$, where $\eta\in(0,1)$ is a fixed number. Using [@xiang Lemma 3.7], it is easy to check that under Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"}, $\mathbb{P}_{0,G}(H_n^*)\to 1$ as $n \to \infty$.
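As an aside, Lemma 17 and the role of $\hat{\mbox{\boldmath$\Omega$}}_G$ can be illustrated numerically. The sketch below computes the restricted MLE on a toy decomposable graph using iterative proportional scaling (IPS), a standard algorithm for Gaussian concentration models that is not part of this paper and is used here only for illustration, and then checks the identity $\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\mbox{\boldmath$S$})=\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\hat{\mbox{\boldmath$\Omega$}}_G^{-1})$ for a matrix $\mbox{\boldmath$T$}$ supported on the graph. The path graph, its clique list, and the sample sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 4, 500
edges   = {(0, 1), (1, 2), (2, 3)}          # toy path graph, 0-indexed
cliques = [[0, 1], [1, 2], [2, 3]]          # its maximal cliques
Y = rng.standard_normal((n, p))
S = Y.T @ Y / n                             # sample covariance as in Section 7

K = np.eye(p)                               # iterate towards the restricted MLE Omega_hat_G
for _ in range(200):                        # IPS sweeps over the cliques
    for c in cliques:
        d = [v for v in range(p) if v not in c]
        K[np.ix_(c, c)] = (np.linalg.inv(S[np.ix_(c, c)])
                           + K[np.ix_(c, d)] @ np.linalg.inv(K[np.ix_(d, d)]) @ K[np.ix_(d, c)])

Sigma_hat = np.linalg.inv(K)
# Fixed point: inv(Omega_hat_G) agrees with S on the diagonal and on E_n, while K is zero off the graph.
on_graph  = [(i, j) for i in range(p) for j in range(p)
             if i == j or (min(i, j), max(i, j)) in edges]
off_graph = [(i, j) for i in range(p) for j in range(p) if (i, j) not in on_graph]
print("max |Sigma_hat - S| on graph entries :", max(abs(Sigma_hat[i, j] - S[i, j]) for i, j in on_graph))
print("max |K| off the graph                :", max(abs(K[i, j]) for i, j in off_graph))

# Lemma 17: tr(T S) = tr(T inv(Omega_hat_G)) for any T supported on the graph.
T = np.zeros((p, p))
for i, j in edges:
    T[i, j] = T[j, i] = rng.standard_normal()
np.fill_diagonal(T, rng.standard_normal(p))
print("tr(T S)              :", np.trace(T @ S))
print("tr(T inv(Omega_hat)) :", np.trace(T @ Sigma_hat))
```

The identity holds because $\hat{\mbox{\boldmath$\Omega$}}_G^{-1}$ and $\mbox{\boldmath$S$}$ coincide on exactly those entries where a matrix in $M_{G_n}$ can be non-zero.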
Within the set $H_n^*$, we can express ([\[th3:2\]](#th3:2){reference-type="ref" reference="th3:2"}) as follows $$\begin{aligned} \label{th3:3} P\left(\left\lVert\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\right\rVert_2\geq M\sqrt{n}\epsilon_n\right)\leq P\left(\left\lVert\mbox{\boldmath$T$}_3\right\rVert_2\geq M'\,d_n^{5/2}\sqrt{\log(p_n)}\right),\end{aligned}$$ where $M' = M(1-\eta)k_{\sigma}$. Now, note that if $\mbox{\boldmath$T$}_3\sim \mathcal{SSMN}_{G_n}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$D$}_{G}^T(\hat{\mbox{\boldmath$\Omega$}}_G\otimes\hat{\mbox{\boldmath$\Omega$}}_G)\mbox{\boldmath$D$}_{G})$, then $T_{3,ij}\sim N(0,\sigma_{ij}^2)$, where $T_{3,ij}$ represents the $(i,j)^{th}$ element of $\mbox{\boldmath$T$}_3$, and $\sigma_{ij}^2=2x^T(\hat{\mbox{\boldmath$\Omega$}}_G\otimes\hat{\mbox{\boldmath$\Omega$}}_G)x$, with $x$ being the appropriate row of the matrix $\mbox{\boldmath$D$}_{G}$. Due to the construction of $\mbox{\boldmath$D$}_{G}$, it is evident that $\sigma_{ij}^2$ is bounded below by $2(1-\eta)k_{\sigma}^2$ within the set $H_n^*$. By utilizing Lemma 3.1 from [@xiang], we can continue the derivation as follows $$\begin{aligned} \label{th3:4} P\left(\left\lVert\mbox{\boldmath$T$}_3\right\rVert_2\geq M^{\prime}d_n^{5/2}\sqrt{\log(p_n)}\right)&\leq P\left(\left\lVert\mbox{\boldmath$T$}_3\right\rVert_{\text{max}}\geq M^{\prime}d_n^{3/2}\sqrt{\log(p_n)}\right)\notag\\ &= P\left(\underset{(i,j)\in E_n\; \text{or} \; i=j}{\max}\lvert T_{3,ij}\rvert\geq M^{\prime}\sqrt{\log(p_n)}\right)\;\notag\\ &\leq \sum_{(i,j)\in E_n\; \text{or} \; i=j} P\left(\lvert T_{3,ij}\rvert\geq M^{\prime}\sqrt{\log(p_n)}\right),\end{aligned}$$ The last step follows from the union-sum inequality. Now, if we define $z_{ij}=T_{3,ij}/\sigma_{ij}$, we can write: $$\begin{aligned} \label{th3:5} P\left(\lvert T_{3,ij}\rvert\geq M^{\prime}\sqrt{\log(p_n)}\right)\leq P\left(\lvert z_{ij}\rvert\geq M^{\prime\prime}\sqrt{\log(p_n)}\right)\leq 2\exp\left(-\frac{(M^{\prime\prime})^2\log(p_n)}{2}\right),\end{aligned}$$ where $M^{\prime\prime}=\sqrt{2}M(1-\eta)^{3/2}k_{\sigma}^2$, and the last step follows from the inequality $1-\Phi(t)\leq \exp(-t^2/2)$, where $\Phi(\cdot)$ is the standard normal cumulative distribution function. Combining ([\[th3:4\]](#th3:4){reference-type="ref" reference="th3:4"}) and ([\[th3:5\]](#th3:5){reference-type="ref" reference="th3:5"}), we obtain: $$\begin{aligned} \label{th3:6} &P\left(\left\lVert\mbox{\boldmath$T$}_3\right\rVert_2\geq M^{\prime}d_n^{5/2}\sqrt{\log(p_n)}\right)\notag\\&\leq 2p_nd_n\exp\left(-\frac{(M^{\prime\prime})^2\log(p_n)}{2}\right)\leq 2p_n^{2-(M^{\prime\prime})^2/2}\ \to 0\end{aligned}$$ as $n\to\infty$, by choosing $M$ sufficiently large. Therefore, we can conclude that $\int_{F_{3n}^c}\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G) \overset{P}{\to} 0$ as $n\to\infty$ under the true model $\mathbb{P}_{0,G}$. To address the second term of ([\[th3:1\]](#th3:1){reference-type="ref" reference="th3:1"}), we will make use of Lemma 3.7 from [@xiang]. This lemma states that under Assumptions [Assumption 4](#asA){reference-type="ref" reference="asA"}-[Assumption 6](#asC){reference-type="ref" reference="asC"}, the convergence rate of $\hat{\mbox{\boldmath$\Omega$}}_G$ will resemble the posterior contraction rate of $\mbox{\boldmath$\Omega$}$ under Assumption [Assumption 7](#asD){reference-type="ref" reference="asD"}, and both rates are equal to $\epsilon_n$. 
Recall that $\mbox{\boldmath$T$}_3=\sqrt{n}(\mbox{\boldmath$\Omega$}-\hat{\mbox{\boldmath$\Omega$}}_G)$. By utilizing these insights and employing the set $H_n^*$ in a similar manner as in ([\[th1:4\]](#th1:4){reference-type="ref" reference="th1:4"}), we can conclude that $\int_{ F_{3n}^c} \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_3$ converges to zero in probability as $n$ tends to infinity, under the true model $\mathbb{P}_{0,G}$. We have thus established that both the second and third terms of ([\[th3:1\]](#th3:1){reference-type="ref" reference="th3:1"}) converge in probability to zero as $n$ approaches infinity. Therefore, it suffices to demonstrate that the first term of ([\[th3:1\]](#th3:1){reference-type="ref" reference="th3:1"}) also converges in probability to zero. Analogous to ([\[th1:6\]](#th1:6){reference-type="ref" reference="th1:6"}), we can bound the first term of ([\[th3:1\]](#th3:1){reference-type="ref" reference="th3:1"}) as follows $$\begin{aligned} \label{th3:7} &\int_{F_{3n}} \lvert \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)-\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\rvert\; d\mbox{\boldmath$T$}_3\notag\\ & \leq \int_{ F_{3n}^c} \pi_{3n}(\mbox{\boldmath$T$}_3\mid\mbox{\boldmath$Y$}^n)\; d\mbox{\boldmath$T$}_3+\int_{F_{3n}^c}\phi(\mbox{\boldmath$T$}_3;\hat{\mbox{\boldmath$\Omega$}}_G)\; d\mbox{\boldmath$T$}_3+ 3\left(\int\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G\right)\; d\mbox{\boldmath$T$}_3\right)^{-1}\notag\\&\times\int_{F_{3n}}\left\lvert M_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)-\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G\right)\right\rvert \; d\mbox{\boldmath$T$}_3.\end{aligned}$$ This step follows from Lemma A.1 of Ghosal (1999) [@ghosal1]. Since we have already demonstrated that the first and second terms on the right-hand side of ([\[th3:7\]](#th3:7){reference-type="ref" reference="th3:7"}) converge to zero in probability as $n$ approaches infinity, it remains to address the final term of ([\[th3:7\]](#th3:7){reference-type="ref" reference="th3:7"}), which we do by applying a similar approach as in the proof of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}.
This yields the following inequality $$\begin{aligned} \label{th3:8} &\left(\int\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G\right)\; d\mbox{\boldmath$T$}_3\right)^{-1}\notag\\& \times \int_{F_{3n}}\left\lvert M_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}}\right)-\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\pi_{3}\left(\hat{\mbox{\boldmath$\Omega$}}_G\right)\right\rvert d\mbox{\boldmath$T$}_3\notag\\ &\leq \sup_{\mbox{\boldmath$T$}_3\in F_{3n}}\left\lvert\frac{\pi_{3}(\hat{\mbox{\boldmath$\Omega$}}_G+\frac{\mbox{\boldmath$T$}_3}{\sqrt{n}})}{\pi_{3}(\hat{\mbox{\boldmath$\Omega$}}_G)}-1\right\rvert\frac{\int_{F_{3n}}M_{3n}(\mbox{\boldmath$T$}_3)\; d\mbox{\boldmath$T$}_3}{\int \Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\; d\mbox{\boldmath$T$}_3}\notag\\&+ \frac{\int_{F_{3n}}\lvert M_{3n}(\mbox{\boldmath$T$}_3)-\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\rvert\; d\mbox{\boldmath$T$}_3}{\int \Tilde{M}_{3n}(\mbox{\boldmath$T$}_3) d\mbox{\boldmath$T$}_3},\end{aligned}$$ where $\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)=\exp\left\{-\mathop{\mathrm{tr}}(\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2}\mbox{\boldmath$T$}_3\hat{\mbox{\boldmath$\Omega$}}_G^{-1/2})^2/4\right\}$ is the kernel of the probability density function of the $\mathcal{SSMN}_{G_n}(\mbox{\boldmath$O$},\;2 \mbox{\boldmath$D$}_{G}^T(\hat{\mbox{\boldmath$\Omega$}}_G\otimes\hat{\mbox{\boldmath$\Omega$}}_G)\mbox{\boldmath$D$}_{G})$ distribution. Once again, we can assert that $\underset{\mbox{\boldmath$T$}_3\in F_{3n}}{\sup}\left\lvert\frac{\pi_{3}(\hat{\mbox{\boldmath$\Omega$}}_G+\dfrac{\mbox{\boldmath$T$}_3}{\sqrt{n}})}{\pi_{3}(\hat{\mbox{\boldmath$\Omega$}}_G)}-1\right\rvert\overset{P}{\to}0$ as $n$ approaches infinity, utilizing Assumption [Assumption 4](#asA){reference-type="ref" reference="asA"} and Assumption [Assumption 6](#asC){reference-type="ref" reference="asC"} (pertaining to the flatness of the prior of $\mbox{\boldmath$\Omega$}$ around $\hat{\mbox{\boldmath$\Omega$}}_{G_n}$). Hence, similar to ([\[th1:7\]](#th1:7){reference-type="ref" reference="th1:7"}), it is sufficient to establish that the final term of ([\[th3:8\]](#th3:8){reference-type="ref" reference="th3:8"}) converges to $0$ in probability. The next steps involve following a similar approach as in ([\[th1:9\]](#th1:9){reference-type="ref" reference="th1:9"})-([\[th1:13\]](#th1:13){reference-type="ref" reference="th1:13"}) from the proof of Theorem [\[th_sigma\]](#th_sigma){reference-type="ref" reference="th_sigma"}. By proceeding with the same steps and utilizing Lemma [Lemma 17](#lemma_sp){reference-type="ref" reference="lemma_sp"}, it can be shown that: $$\begin{aligned} \label{th3:9} &\frac{\int_{F_{3n}}\lvert M_{3n}(\mbox{\boldmath$T$}_3)-\Tilde{M}_{3n}(\mbox{\boldmath$T$}_3)\rvert\; d\mbox{\boldmath$T$}_3}{\int \Tilde{M}_{3n}(\mbox{\boldmath$T$}_3) d\mbox{\boldmath$T$}_3}\leq 2M np_n \epsilon_n^3\notag\\ \leq& 16M\left( \sqrt{{\frac{d_n^{15}p_n^2(\log(p_n))^3}{n}}}+((np_n)^{1/3}d_n^{3/2}\gamma(d_n))^{1/3}\right)\to 0,\end{aligned}$$ as $n\to\infty$, since $\epsilon_n=d_n^{5/2}(\log(p_n)/n)^{1/2}+d_n^{3/2}\gamma(d_n)$. This completes the proof. ## Proof of Lemma [Lemma 17](#lemma_sp){reference-type="ref" reference="lemma_sp"} {#proof-of-lemma-lemma_sp} Recall the definition of $\hat{\mbox{\boldmath$\Omega$}}_G$ from ([\[omega_g\]](#omega_g){reference-type="ref" reference="omega_g"}). 
$$\begin{aligned} \label{c14} \hat{\mbox{\boldmath$\Omega$}}_G=\underset{\Omega\in P_{G_n}}{\arg\max} \left\{\frac{n}{2}\log(\det(\mbox{\boldmath$\Omega$})) - \frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$S$})\right\}.\end{aligned}$$ We will modify our optimization problem using Lagrange multipliers. Let $\mbox{\boldmath$\Gamma$}=(\mbox{\boldmath$\Gamma$}_{ij})_{1\leq i,j \leq p}$ be the matrix of Lagrange multipliers. In view of our constrained problem, we take $\mbox{\boldmath$\Gamma$}_{ij}=0$ if $(i,j)\in E_n$, for all ${1\leq i,j \leq p}$. Then we can rewrite ([\[c14\]](#c14){reference-type="ref" reference="c14"}) as the following unconstrained optimization problem $$\begin{aligned} \label{c15} \hat{\mbox{\boldmath$\Omega$}}_G=\underset{\Omega}{\arg\max} \left\{\frac{n}{2}\log(\det(\mbox{\boldmath$\Omega$})) - \frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$S$})-\frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$\Gamma$})\right\}.\end{aligned}$$ From ([\[c15\]](#c15){reference-type="ref" reference="c15"}) it is not difficult to check that $\hat{\mbox{\boldmath$\Omega$}}_G^{-1}= \mbox{\boldmath$S$}+ \mbox{\boldmath$\Gamma$}$. Now, from the construction of $\mbox{\boldmath$\Gamma$}$ it is immediate that for any $\mbox{\boldmath$T$}\in M_{G_n}$ we have $\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\mbox{\boldmath$\Gamma$})=0$. Hence we can conclude that for any $\mbox{\boldmath$T$}\in M_{G_n}$, $\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\mbox{\boldmath$S$})=\mathop{\mathrm{tr}}(\mbox{\boldmath$T$}\hat{\mbox{\boldmath$\Omega$}}_G^{-1})$. The proof is now complete.
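For the reader's convenience, we also record the differentiation step behind the identity $\hat{\mbox{\boldmath$\Omega$}}_G^{-1}= \mbox{\boldmath$S$}+ \mbox{\boldmath$\Gamma$}$; this is only a sketch based on the standard matrix-calculus identities (up to the usual symmetry conventions), and the computation is not spelled out in the original argument: $$\begin{aligned} \frac{\partial}{\partial \mbox{\boldmath$\Omega$}}\left\{\frac{n}{2}\log(\det(\mbox{\boldmath$\Omega$})) - \frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$S$})-\frac{n}{2}\mathop{\mathrm{tr}}(\mbox{\boldmath$\Omega$}\mbox{\boldmath$\Gamma$})\right\} = \frac{n}{2}\left(\mbox{\boldmath$\Omega$}^{-1}-\mbox{\boldmath$S$}-\mbox{\boldmath$\Gamma$}\right),\end{aligned}$$ and setting this derivative equal to zero at the maximizer of ([\[c15\]](#c15){reference-type="ref" reference="c15"}) gives exactly $\hat{\mbox{\boldmath$\Omega$}}_G^{-1}= \mbox{\boldmath$S$}+ \mbox{\boldmath$\Gamma$}$.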
{ "id": "2309.08556", "title": "High-Dimensional Bernstein Von-Mises Theorems for Covariance and\n Precision Matrices", "authors": "Partha Sarkar, Kshitij Khare, Malay Ghosh and Matt P. Wand", "categories": "math.ST stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Asymptotic uniform upper density, shortened as a.u.u.d., or simply upper density, is a classical notion which was first introduced by Kahane for sequences in the real line. Syndetic sets were defined by Gottschalk and Hedlund. For a locally compact group $G$, a set $S\subset G$ is syndetic, if there exists a compact subset $C\Subset G$ such that $SC=G$. Syndetic sets play an important role in various fields of applications of topological groups and semigroups, ergodic theory and number theory. A lemma in the book of Fürstenberg says that once a subset $A \subset {\mathbb Z}$ has positive a.u.u.d., then its difference set $A-A$ is syndetic. The construction of a reasonable notion of a.u.u.d. in general locally compact Abelian groups (LCA groups for short) was missing for a long time, but in the late 2000's several constructions were worked out to generalize it from the base cases of ${\mathbb Z}^d$ and ${\mathbb R}^d$. With the notion available, several classical results of the Euclidean setting became accessible even in general LCA groups. Here we work out various versions in a general locally compact Abelian group $G$ of the classical statement that if a set $S\subset G$ has positive asymptotic uniform upper density, then the difference set $S-S$ is syndetic. author: - "Szilárd Gy. Révész[^1]" title: Kahane's upper density and syndetic sets in LCA groups --- **MSC 2020 Subject Classification.** Primary 22B05; Secondary 22B99, 05B10. **Keywords and phrases.** *density, asymptotic uniform upper density, locally compact Abelian group, difference set, syndetic set.* # Introduction The notion of syndetic sets was introduced in the fundamental book of Gottschalk and Hedlund [@GH]. A subset $S\subset G$ in a topological Abelian (semi)group is a *syndetic* set, if there exists a compact set $K\subset G$ such that for each element $g\in G$ there exists a $k\in K$ with $gk\in S$; in other words, in topological groups $\cup_{k\in K} Sk^{-1}=G$. Fürstenberg presents as Proposition 3.19 (a) of [@Furst] the following. **Proposition 1**. *Let $S\subset {\mathbb Z}$ be a set having positive asymptotic uniform upper density $\overline{D}(S)>0$. Then $S-S$ is syndetic.* Here asymptotic uniform upper density[^2] stands for $\overline{D}(S):=\lim_{r\to \infty} \sup_{x \in {\mathbb Z}} \frac{|[x-r,x+r] \cap S|}{2r}$. Generalizations of the notion to ${\mathbb Z}^d$ and ${\mathbb R}^d$, as well as to cases of non-discrete sets $S$ when the numerator becomes the Lebesgue measure instead of the cardinality measure, are straightforward. So is the extension of the proposition to these cases. However, the notion of syndetic sets is even more general, and has found many applications in several areas, including dynamical systems, number theory, and harmonic analysis. Still, an appropriate generalization of this proposition to general topological groups was not known, because there was no reasonably general and suitable notion of upper density. In the following we discuss two generalized notions of asymptotic uniform upper density on arbitrary LCA groups, which appeared only some fifteen years ago. With these generalized notions of Kahane's density, we present various generalized versions of the above result. In this paper we prove generalizations of the above statement which typically read as follows. **Theorem 1**. *Let $G$ be a LCA group and $S\subset G$ a set with positive asymptotic uniform upper density: $\overline{D}(S)>0$. 
Then the difference set $S-S$ is a syndetic set.* In this, everything seems to be quite clear even in the generality of LCA groups -- except for the right definition or construction of the upper density. So, the bulk of the paper will be devoted to explaining and describing various generalized notions of asymptotic uniform upper density in locally compact Abelian groups. The structure of the paper is as follows. In Section [2](#sec:classical){reference-type="ref" reference="sec:classical"} we recall the classical notion of Kahane's density, in Section [3](#sec:appearence){reference-type="ref" reference="sec:appearence"} we describe how the generalizations occurred, and in Sections [4](#sec:auudLCA){reference-type="ref" reference="sec:auudLCA"} and [6](#sec:auud-discrete){reference-type="ref" reference="sec:auud-discrete"} we explain the generalized density notions. Then in Section [7](#sec:additiveresults){reference-type="ref" reference="sec:additiveresults"} we recall results on difference sets, known to be valid in ${\mathbb Z}$ or in some other special cases and including the above Proposition from Fürstenberg's book. These we extend to LCA groups with the new generalized density notions in Sections [\[sec:additiveproposgeneralized\]](#sec:additiveproposgeneralized){reference-type="ref" reference="sec:additiveproposgeneralized"} and [9](#sec:Furst-extension-auud){reference-type="ref" reference="sec:Furst-extension-auud"}. On our way we also prove some properties of our density notions, like e.g. subadditivity, which do not seem to be so obvious from their abstract, somewhat tricky definition. # The classical notion of a.u.u.d. {#sec:classical} The notion of asymptotic uniform upper density -- *a.u.u.d. for short* -- of real sequences first appeared in the PhD thesis [@KahaneThese] of J.-P. Kahane in 1954, see also [@KahaneThAnnIF]. Other early -- but definitely later -- appearances of the notion can be found in e.g. [@Groemer], [@Landau], [@beurling], [@beurlingB], [@Furst]. Although his first construction was different, Kahane immediately shows [@KahaneThese Ch. I, §3, no. 1, p. 20] that the notion can be equivalently defined as follows[^3]. **Definition 1** (Kahane). *If $S\subset {\mathbb R}$ is a uniformly discrete sequence, then $$\label{RUNdensity} \overline{D}^{\#}(S) := \limsup_{r\to\infty} \frac{\sup_{x\in{\mathbb R}} \# \{s\in S~:~ |s-x|\leq r\}}{2r}.$$* In fact, Kahane uses $\limsup_{|x|\to\infty}$ also in place of $\sup_{x\in{\mathbb R}}$, but these variants are easily seen to be equivalent. Also, the $\limsup$ is actually a limit, for the quantity is essentially decreasing as a function of $r$. It is clear that a.u.u.d. is a translation invariant notion. Kahane used the notion in harmonic analysis, and that has remained a major field of applications ever since. Several related results appeared quickly, and the notion proved to be very fruitfully applied in seemingly different questions. A typical area of application is the investigation of differences (additive bases, difference sets, packing and tiling, sets avoiding certain prescribed distances etc.), which can easily be understood if we note e.g. that in case $\overline{D}^{\#}(S)>0$, the difference set $S-S$ has positive asymptotic upper density[^4] $\overline{d}^{\#}(S-S)\geq \overline{D}^{\#}(S)$, which is already a quite strong property. Still another area of application occurs in ergodic theory [@Furst]. 
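As a toy illustration of Definition 1 -- our own example, included here only for orientation -- consider $S=2{\mathbb Z}\subset {\mathbb R}$: every window $[x-r,x+r]$ contains at most $r+1$ even integers, so $\overline{D}^{\#}(2{\mathbb Z})=1/2$. The uniformity in $x$ is essential: for $S={\mathbb N}\subset {\mathbb R}$, windows centered at points $x>r$ contain about $2r$ elements of $S$, whence $$\overline{D}^{\#}({\mathbb N}) = \limsup_{r\to\infty} \frac{\sup_{x\in{\mathbb R}} \# \{n\in {\mathbb N}~:~ |n-x|\leq r\}}{2r}=1,$$ even though windows centered at the origin capture only about half as many points.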
As is already mentioned, in ${\mathbb R}^d$ or ${\mathbb Z}^d$ one can analogously consider, with a fixed basic set $K\subset {\mathbb R}^d$ like e.g. the unit ball or unit cube, $$\label{RUNdensity2} \overline{D}^{\#}_{K}(S) := \limsup_{r\to\infty} \frac{\sup_{x\in{\mathbb R}^d} {\rm \#} (S \cap (rK+x))}{|rK|}~.$$ Here $K\subset{\mathbb R}^d$ can be e.g. any fat body, or can even be more general. It is also well-known, that $\overline{D}^{\#}_{K}(S)$ *gives the same value* for all nice sets $K\subset {\mathbb R}^d$ (although this fact does not seem immediate from the formulation). To prove directly, it requires some tedious $\varepsilon$-covering of the boundary of $K_1$ by homothetic copies of $K_2$ etc. Landau works out this direct proof as [@Landau Lemma 4] in the generality of compact sets $K\Subset {\mathbb R}^d$ normalized to have unit measure and satisfying the condition that the boundary $\partial K$ has zero measure. Actually, here we will obtain this as a side result, being an immediate corollary of our Theorem [Theorem 3](#th:Requivalence){reference-type="ref" reference="th:Requivalence"}, see Remark [Remark 2](#c:easy){reference-type="ref" reference="c:easy"}. Moreover, from our approach the same equivalence follows elegantly for arbitrary bounded measurable sets $K$, normalized to have unit measure and satisfying $|\partial K|=0$, i.e. the compactness criterion can be dropped from conditions of Landau. Also, non-discrete, but locally Lebesgue-measurable sets arise in the context (in problems of plane geometry e.g.), where the natural density is defined by means of volume, not of cardinality. Then a.u.u.d of a Lebesgue-measurable set $A\subset {\mathbb R}^d$ is defined as $$\label{RUdensity} \overline{D}_{K}(A) := \limsup_{r\to\infty} \frac{\sup_{x\in{\mathbb R}^d} |A\cap (rK+x)|}{|rK|}~.$$ That motivates a further extension: we can consider asymptotic uniform upper densities of *measures*, say some measure $\nu$ and not only sequences $S$ or sets $A \subset {\mathbb R}^d$. So, a general formulation in ${\mathbb R}^d$ (or ${\mathbb Z}^d$) would thus be (writing $|\cdot|$ for the Lebesgue measure in ${\mathbb R}^d$ or for the cardinality measure in ${\mathbb Z}^d$), $$\label{def:Rnu-density} \overline{D}_{K}(\nu):= \limsup_{r\to\infty} \frac{\sup_{x\in{\mathbb R}^d}\nu(rK+x)}{|rK|}~,$$ To make sense, it is only needed here that the measure $\nu$ is *locally* a finite measure. For simplicity, in this work we will only consider nonnegative measures, so that mentioning measure will be understood as such, always. (Even the very consideration of measures is above the needs of the present work.) It is, however, not an essential restriction that the measure have to be a Borel measure (i.e. the family of measurable sets contain the Borel sigma-algebra) -- we can consider the outer measure $\overline{\nu}$ arising from $\nu$. However, if we want to have translation invariance of a.u.u.d. (which is a basic requirement towards any reasonable such density notion), then taking $\sup_{x\in {\mathbb R}^n}$ inside leaves us with not much choice regarding the measure in the denominator: at least asymptotically we need to have it translation invariant, too. This, in turn, more or less determines the measure, too, if we want it to be a Borel measure with finite values on compact sets. 
In fact, in any locally compact Abelian group, such a translation invariant measure is unique (and thus is called *the* Haar measure) up to a constant factor -- in particular in ${\mathbb R}^d$ it must be the Lebesgue measure $\lambda$ and in ${\mathbb Z}^d$ it must be the counting measure $\#$. The notion [\[def:Rnu-density\]](#def:Rnu-density){reference-type="eqref" reference="def:Rnu-density"} of a.u.u.d. indeed remains translation invariant. E.g. in [\[RUNdensity\]](#RUNdensity){reference-type="eqref" reference="RUNdensity"} $\nu:={\rm \#}$ is the cardinality or counting measure of a set $S$, while in [\[RUdensity\]](#RUdensity){reference-type="eqref" reference="RUdensity"} $\nu:=\lambda|_A$ is the trace of $\lambda$ on the measurable set $A\subset {\mathbb R}$. In fact the point of view of measures, at least as concerns $\nu:=\sum_{s\in S} \delta_s$ with Dirac measures $\delta_s$ placed at the points of $s\in S$, has already been taken by Kahane himself in [@Kahane1 page 303] under the name \"measure caractéristique\". # The appearance of the notion of a.u.u.d. on LCA groups {#sec:appearence} Starting from 2003, we aimed at extending the notion of a.u.u.d. to locally compact Abelian groups (LCA groups henceforth). Our work directly stemmed out from our interest in extending, to LCA groups, some results on the so-called \"Turán extremal problem\". We indeed succeeded to extend at least the \"packing type estimate\" of [@kolountzakis:groups] from compact groups and ${\mathbb R}^d$ and ${\mathbb Z}^d$ to general LCA groups, see [@LCATur] and [@dissertation]. Further, in a recent work [@Berdysheva] we have similarly analyzed the Delsarte extremal problem and its relation to packing. Note that the Delsarte extremal problem proved to be the precise tool to prove the breakthrough result by Viazovska [@Viaz] regarding the densest ball packing in ${\mathbb R}^8$, subsequently extended also to ${\mathbb R}^{24}$ in [@CKMRV]. The notion of a.u.u.d. is a way to grab the idea of a set being relatively considerable, even if not necessarily dense or large in some other more easily accessible sense. In many theorems, in particular in Fourier analysis and in additive problems where difference sets are considered, the a.u.u.d. is the right notion to express that a set becomes relevant in the question. However, previously the notion was only extended to sequences and subsets of the real line, and some immediate relatives like ${\mathbb N}$, ${\mathbb Z}^d$, ${\mathbb R}^d$, as well as to finite, or at least finitely constructed (e.g. $\sigma$-finite) cases. A framework where the notion might be needed is the generality of LCA groups. In recent decades it is more and more realized that many questions e.g. in additive number theory can be investigated, even sometimes structurally better understood/described, if we leave e.g. ${\mathbb Z}$, and consider the analogous questions in Abelian groups. In fact, when some analysis, i.e. topology also has a role -- like in questions of Fourier analysis e.g. -- then the setting of LCA groups seems to be the natural framework. And indeed several notions and questions, where in classical results a.u.u.d. played a role, have already been defined, even in some extent discussed in LCA groups. Nevertheless, for long no attempt has been made to extend the very notion of a.u.u.d. to this setup. Parallel to the first phase of ours, a research which aimed at extending the very first topic where a.u.u.d. 
have been used -- concerning conditions for sets being sets of sampling or sets of interpolation -- was successfully conducted in [@GrochKutySeip]. The construction there is particularly interesting, because it is also a round-about way of arriving at a general notion of a.u.u.d. -- demonstrating that there was no immediate access, and the construction needed some effort. Indeed, to show that the constructed density is equivalent in ${\mathbb R}^d$ to the classical one, is not quite obvious and is formulated and proved as [@GrochKutySeip Lemma 8]. Actually, the main results of [@GrochKutySeip] are formulated under the additional assumption that the dual group $\widehat{G}$ is compactly generated, as it is needed for the construction of a.u.u.d. (The relaxation of this extra condition is then discussed in [@GrochKutySeip Section 8]. The key is that every bandlimited function $f$ in the studied class of functions lives on a quotient $G/K$ and may be identified with a function $\widetilde{f} \in L^2(G/K)$ with some compact subgroup $K$ such that $G/K$ factors according to the structure theorem of locally compact Abelian groups, generated compactly.) The paper constructs the definition of a.u.u.d. referring to a tricky partial ordering relation of uniformly discrete subsets; it is then mentioned that this definition can equivalently be defined using Haar measures, too. That later equivalent formulation surfacing in [@GrochKutySeip formula (18)] and the discussion following it hints the one we have worked out along quite a different way; details will be seen below. The possibility of consideration of measures and their a.u.u.d. is mentioned in this discussion, too. We thank to Professor Joaquim Ortega-Cerdà for calling our attention to the (then also quite recent) work [@GrochKutySeip] right the day after our initial preprint [@density] appeared on the ArXiv. For the overlap thus pointed out that earlier version of our work has never been published in a journal, although later we have published a concise version without proofs in the conference abstract [@RAE] and the notion, being instrumental for the mentioned extension of the Turán extremal problem to LCA groups, was also presented in the thesis [@dissertation] and the paper [@LCATur]. # The first construction of a.u.u.d. in LCA groups {#sec:auudLCA} We will consider two generalizations here. The first applies for the class of Abelian groups $G$, equipped with a topological structure which makes $G$ a LCA (locally compact Abelian) group. Considering such groups are natural for they have an essentially unique translation invariant Haar measure $\mu_G$ (see e.g. [@rudin:groups]), what we fix to be our $\mu$. By construction, $\mu$ is a Borel measure, and the sigma algebra of $\mu$-measurable sets is just the sigma algebra of Borel mesurable sets, denoted by ${\mathcal B}$ throughout. Furthermore, we will write ${\mathcal B}_0$ to be the subfamily of ${\mathcal B}$ of Borel measurable sets with compact closure. Sets $B\in {\mathcal B}_0$ necessarily have finite Haar measure. If the topology changes, it is reflected by the corresponding change of the (essentially unique) Haar measure, and so the characteristic property of being finite on ${\mathcal B}_0$ singles out the respective Haar measure from the family of translation-invariant Borel measures. Our heuristics in finding a definition of a.u.u.d. was the following. 
We wanted to grasp the fact that the set, where we may analyze relative densities of the given set $A$ or measure $\nu$, must grow large (as in case of ${\mathbb R}$ the dilated copies $rK$ do). However, in general LCA groups neither a standard basic neighborhood of $0$ nor dilations exist. Then we encountered the following nice and basic result in LCA groups, see [@rudin:groups 2.6.7. Theorem] or [@HewittRossII (31.36) Lemma]. **Theorem 2**. *If ${\varepsilon}>0$ and $C\Subset G$, then there exists $V\in{\mathcal B}_0$ such that $\mu(V+C)<(1+{\varepsilon})\mu(V)$.* Thinking of ${\mathbb R}^d$, it is natural to visualize the content of this lemma as follows. For any given compact set $C$ the difference between $V$ and $V+C$ is just a bounded (compact) perturbation on the boundary of $V$, so if $V$ is chosen quite large, than the change of volume becomes relatively negligible. This suggested us the idea of replacing limits and size restrictions by the trick of division by $\mu(V+C)$, in place of simply $\mu(V)$, in the definition of a.u.u.d., thus leading to [\[Cnudensity\]](#Cnudensity){reference-type="eqref" reference="Cnudensity"}. Indeed, if $\mu(V)$, that is $V$, is large enough -- in the sense of the above Theorem [Theorem 2](#th:Rudinlemma){reference-type="ref" reference="th:Rudinlemma"} -- then the increase of $\mu(V)$ to $\mu(V+C)$ does not matter asymptotically; and if $V$ is not enough large, than the division by a larger measure (of $\mu(C+V)$) makes the corresponding quantity out of interest in the search of high relative density (i.e. in the inner supremum). That was our heuristical idea in the construction of the below Definition [Definition 2](#def:compactdensity){reference-type="ref" reference="def:compactdensity"}. **Definition 2**. *Let $G$ be a LCA group and $\mu:=\mu_G$ be its Haar measure. If $\nu$ is another (locally finite, nonnegative) measure on $G$ with the sigma algebra of measurable sets being ${\mathcal S}$, then we define $$\label{Cnudensity} \overline{D}(\nu) %%% := \overline{D}(\nu;\mu) := \inf_{C\Subset G} \sup_{V\in {\mathcal S} \cap {\mathcal B}_0} \frac{\nu(V)}{\mu(C+V)}~.$$ In particular, if $A\subset G$ is Borel measurable and $\nu=\mu_A$ is the trace of the Haar measure on the set $A$, then we get $$\label{CAdensity} \overline{D}(A) :=\overline{D}(\mu_A) := %%% \overline{D}(\mu_A;\mu) := \inf_{C\Subset G} \sup_{V\in {\mathcal B}_0} \frac{\mu(A\cap V)}{\mu(C+V)}~.$$ If $\Lambda\subset G$ is any (e.g. discrete) set and $\gamma :=\gamma_\Lambda:=\sum_{\lambda\in\Lambda} \delta_{\lambda}$ is the counting measure of $\Lambda$, then we get $$\label{CLdensity} \overline{D}^{\#}(\Lambda) := \overline{D}(\gamma_{\Lambda}) := \inf_{C\Subset G} \sup_{V\in {\mathcal B}_0} \frac{{\rm \# }(\Lambda\cap V)}{\mu(C+V)}~.$$* **Remark 1**. If we do not want to bother with $\nu$-measurability of sets, i.e. with $V\in {\mathcal S}$, then we may as well use the outer measure $\overline{\nu}$, defined for arbitrary sets. As for all $C$ we want $C+V$ belong to ${\mathcal B}$ (for $\mu(C+V)$ to make sense), it is natural to consider $V\in{\mathcal B}$ only; but if we further drop the condition that $\overline{V}$ be compact, then the definition becomes untractable already in ${\mathbb R}$. Indeed, then it can easily happen -- and in fact that is what happens normally with a compact $C$ having $\mu(C)>0$ -- that $\mu(V+C)=\infty$; further, it is easy to find cases where also the numerator is infinite. Take e.g. 
$\nu$ to be the counting measure $\#$ and $\Lambda$ some sequence $\Lambda=\{\lambda_k~:~k\in {\mathbb N}\}$, say tending to infinity; then it is easy to define a (non-compact, but still measurable) union $V$ of decreasingly small neighborhoods of the points $\lambda_k$ such that the Haar measure of $V$ equals 1, but all of $\Lambda$ stays in $V$, hence the counting measure of $\Lambda\cap V$ is infinite. To avoid such untractable situations, we are thus restricted to $V\in{\mathcal B}_0$. The very first thing one wants to have after such an abstract definition, based only on some tricky heuristics, is to see that it indeed is a generalization of the classical notion. **Theorem 3**. *Let $K$ be any bounded, Lebesgue-measurable subset of ${\mathbb R}^d$ with $|K|=|\overline{K}|=1$ (i.e. assume that $K$ itself is of positive, normalized volume $1$ and its closure is of the same measure). Let $\nu$ be any (nonnegative, locally finite) measure with sigma algebra of measurable sets $\mathcal S$. Then we have $$\label{Rd-equivalance} \overline{D}(\nu) = \overline{D}_K(\nu)~.$$ The same statement applies also to ${\mathbb Z}^d$.* **Remark 2**. In particular, we find that the asymptotic uniform upper density $\overline{D}_K(\nu)$ does not depend on the choice of $K$, as long as $K$ is bounded and measurable with $|K|=1$ and $|\partial K|=0$. **Remark 3**. If we further drop the condition that $K$ be bounded, then the statement may fail -- or becomes untractable -- already in dimension 1, i.e. for ${\mathbb R}$. The proof of this equivalence result was stated in [@LCATur] as Proposition 1 and fully proved in [@density] as Theorem 1 and in [@dissertation] as Theorem 3.1. Given that the proof is not available in a journal, for the reader's convenience we give the proof here, too. # Proof of Theorem [Theorem 3](#th:Requivalence){reference-type="ref" reference="th:Requivalence"} {#sec:proof} [Proof of $\overline{D}(\nu) \geq \overline{D}_K(\nu)$]{.ul}. Assume first that $K$ is a convex body (which is then also bounded, and of boundary measure zero with ${\bf 0}\in {\rm int\,}K$). Let now $\tau<\tau'< \overline{D}_K(\nu)$ and $C\Subset {\mathbb R}^d$ be arbitrary. Since $C$ is compact, and ${\bf 0}\in {\rm int\,}K$, for some sufficiently large $r'>0$ we have $C\Subset r'K$, hence by convexity also $C+rK\subset r'K + r K = (r'+r)K$ for any $r>0$. Observe that in view of $\tau'< \overline{D}_K(\nu)$ there exist $r_n\to\infty$ and $x_n\in {\mathbb R}^d$ with $\nu(r_nK+x_n)>\tau'|r_nK|$. With large enough $n$, we also have $|(r_n+r')K| / |r_nK| = (1+r'/r_n)^d < \tau'/\tau$, hence with $V:=r_n K+x_n$ we find $\nu(V) > \tau'|r_n K| > \tau |(r_n+r')K|= \tau |x_n+r_n K +r'K| \geq \tau |V+C|$. This proves that $\overline{D}(\nu) \geq \tau$, whence the assertion. The proof is only slightly more complicated for the general case. What we need to observe is that if $B\subset {\mathbb R}$ is the unit ball, then for $\eta\to 0$ we have $|K+\eta B|\to |\overline{K}|$ (where $K+\eta B=\{x+\eta b~:~x\in K, b\in B\}$ is the usual Minkowski- or complexus sum). So if $C\Subset aB$ holds (with some $a$ chosen sufficiently large) then for any given $\eta>0$ we necessarily have $|rK+ C|\le |rK+aB| = r^d |K+(a/r) B| \le r^d (1+\eta) |\overline{K}| =(1+\eta)|rK|$ for all $r>r_0=r_0(\eta,a)$ (where we have used also that $|\overline{K}|=|K|$). As above, take $\tau<\tau'< \overline{D}_K(\nu)$ and $\tau'|r_nK| < \nu(r_nK+x_n)$ with $r_n\to \infty$. 
Next let us apply the above with $\eta:=\tau'/\tau-1$, noting that as $r_n\to \infty$, in particular we have $r_n>r_0$ for $n\ge n_0$. We thus find for all $n\ge n_0$ the inequalities $\tau |r_nK+aB| < \tau (1+\eta) |r_nK| =\tau' |r_nK| <\nu(r_nK+x_n)$, whence also $\tau |r_nK+C| <\nu(r_nK+x_n)$. It follows that $\limsup_{n\to\infty} \nu(r_nK+x_n)/|r_nK+C|>\tau$, for any $\tau< \overline{D}_K(\nu)$, whence even $\overline{D}(\nu) \ge \overline{D}_K(\nu)$, as wanted. [Proof of $\overline{D}_K(\nu) \geq \overline{D}(\nu)$]{.ul}. We have already shown in [@LCATur Lemma 2] the following lemma. **Lemma 1**. *Let $W$ be any Borel measurable subset of a LCA group $G$ with its closure $\overline{W}$ compact, and let $\nu$ be a *nonnegative*, uniformly locally bounded Borel measure on $G$--i.e. $\nu\in {\mathcal M}_+(G)$. Denote $\overline{D}_{G}(\nu)=\rho$. If $\gamma<\rho$ is given arbitrarily, then there exists $x \in G$ such that $$\label{VN+z} \nu(W+x)\geq \gamma \mu(W).$$* To prove the inequality $\overline{D}_K(\nu) \geq \overline{D}(\nu)$, and whence Theorem [Theorem 3](#th:Requivalence){reference-type="ref" reference="th:Requivalence"}, it remains to choose an arbitrary $\gamma<\overline{D}(\nu)$, put $W:=rK$ with any $r>0$, and apply the Lemma: we will get a translate $W+x=rK+x$ with $\nu(x+rK) \ge \gamma |rK|$. Then it follows that $\inf_{r>0} \sup_{x \in {\mathbb R}^d} \nu(x+rK)/ |rK| \ge \gamma$, whence also $\overline{D}_K(\nu) \ge \gamma$, and this holding for all $\gamma<\overline{D}(\nu)$ gives the statement. # An even larger notion of a.u.u.d. {#sec:auud-discrete} Note if we consider the discrete topological structure on any Abelian group $G$, it makes $G$ a LCA group with Haar measure $\mu_G={\rm \#}$, the counting measure. Therefore, our notions above certainly cover all discrete groups. This is the natural structure for ${\mathbb Z}^d$, e.g. On the other hand all $\sigma$-finite groups admit the same structure as well, unifying considerations. (Note that ${\mathbb Z}^d$ is not a $\sigma$-finite group since it is *torsion-free*, i.e. has no finite subgroups.) Furthermore, we also introduce a second notion of density as follows. **Definition 3**. *Let $G$ be a LCA group and $\mu:=\mu_G$ be its Haar measure. If $\nu$ is another (locally finite, nonnegative) measure on $G$ with the sigma algebra of measurable sets being ${\mathcal S}$, then we define $$\label{Fnudensity} \overline{{\Delta}}(\nu) := %% \overline{\D}(\nu;\mu) := \inf_{F\subset G,\,{\rm \#} F<\infty } \sup_{V\in {\mathcal S} \cap {\mathcal B}_0} \frac{\nu(V)}{\mu(F+V)}~.$$ In particular, if $A\subset G$ is Borel measurable and $\nu=\mu_A$ is the trace of the Haar measure on the set $A$, then we get $$\label{FAdensity} \overline{{\Delta}}(A) :=\overline{\Delta}(\mu_A) := %%% \overline{\Delta}(\mu_A;\mu) := \inf_{F\subset G,\, {\rm \#} F<\infty } \sup_{V\in {\mathcal B}_0} \frac{\mu(A\cap V)}{\mu(F+V)}~.$$ If $\Lambda\subset G$ is any (e.g. discrete) set and $\gamma :=\gamma_\Lambda:=\sum_{\lambda\in\Lambda} \delta_{\lambda}$ is the counting measure of $\Lambda$, then we get $$\label{FLdensity} \overline{{\Delta}}^{\rm \#}(\Lambda):=\overline{\Delta}(\gamma_{\Lambda}) := \inf_{F\subset G,\,{\rm \# } F< \infty} \sup_{V\in {\mathcal B}_0} \frac{{\rm \# }(\Lambda\cap V)}{\mu(F+V)}~.$$* The two definitions are rather similar, except that the requirements for $\overline{{\Delta}}$ refer to finite sets only. 
Because all finite sets are necessarily compact in an LCA group, [\[Cnudensity\]](#Cnudensity){reference-type="eqref" reference="Cnudensity"} of Definition [Definition 2](#def:compactdensity){reference-type="ref" reference="def:compactdensity"} extends the same infimum over a wider family of sets than [\[Fnudensity\]](#Fnudensity){reference-type="eqref" reference="Fnudensity"} of Definition [Definition 3](#finitedensity){reference-type="ref" reference="finitedensity"}; therefore we get **Proposition 2**. *Let $G$ be any LCA group, with normalized Haar measure $\mu$. Then we have $$\label{Fd-equivalance-gen} \overline{{\Delta}}(\nu) \ge \overline{D}(\nu)~.$$ Furthermore, in a discrete Abelian group $G$ we always have $\overline{{\Delta}}(\nu) = \overline{D}(\nu)~.$* The second part is even more obvious, because in discrete groups the Haar measure is the counting measure and the compact sets are exactly the finite sets. So there is no difference for ${\mathbb Z}$, e.g. In general, however, the two densities, defined above, may be different. As for the heuristical idea of grasping growth of $V$ through the above trick of taking $\mu(V+C)$ instead of dilations -- which in general do not exist -- we must admit that in Definition [Definition 3](#finitedensity){reference-type="ref" reference="finitedensity"} the heuristics fail. That is the essence of the following straightforward example, showing that indeed $\overline{\Delta}(\nu) > \overline{D}(\nu)$ for some $\nu$ whenever $G$ is not discrete. This we were guessing and V. Totik showed that this is indeed the case. **Proposition 3** (Totik, [@totik]). *If $G$ is a non-discrete LCA group, then there exists a probability measure $\nu$ such that $\overline{\Delta}(\nu) > \overline{D}(\nu)$.* *Proof.* First we find an open set which has small Haar measure (which is clearly not possible if $G$ was discrete.) Take an open neighborhood $U$ of 0 with compact closure (and thus of finite Haar measure $0<\mu(U)<\infty$), and let $U_0:=U$. First we will construct other neighborhoods $U_k$ inductively for all $k\in{\mathbb N}$ and with small Haar measure. As $G$ is non-discrete, with any given $k$ the neighborhood $U_k$ contains some point $0\ne x_k\in U_k$ out of 0 itself. As addition is a continuous function from $G\times G \to G$, $0+x_k=x_k\in U_k$, and $U_k$ is open, there exists a neighborhood $V_k$ of 0, so that $V_k\times (V_k+x_k)$ is mapped inside $U_k$. Also, by assumption that $G$ is Hausdorff, there are neighborhoods $W_k$ and $W'_k$ of 0 and $x_k$, respectively, which are disjoint. Therefore, taking now $U_{k+1}:=V_k \cap W_k \cap U_k \cap (W'_k-x_k) \cap (U_k-x_k)$ (which is still a neighborhood of 0) we find that $U_{k+1}\cap(U_{k+1}+x_k) =\emptyset$ while $U_{k+1}, U_{k+1}+x_k \subset U_k$. It follows that $U_{k+1}$ is an open neighborhood of 0, within $U_k$, and its Haar measure is at most $\mu(U_k)/2$. Therefore, arbitrarily small Haar measures can be prescribed: if $\eta>0$, there exists some open neighborhood $W$ of 0 such that $0<\mu(W)<\eta$. It also follows by outer regularity of $\mu$ that $\mu(\{0\})=0$ for the one point compact set $\{0\}$, whence for any finite set $F\subset G$ we have $\mu(F)=0$, too. Now we take $\nu:=\delta_0$ the Dirac measure at 0. Let us compute first $\overline{D}(\nu)$. Consider $C\Subset G$ with a positive measure; clearly then $\sup_{V\in {\mathcal B}_0} \nu(V)/\mu(V+C)\leq 1/\mu(C)$ and this is attained for $V:=\{0\}$, which is clearly in ${\mathcal B}_0$, with measure 0. 
Taking infimum over $C$ we find that $\overline{D}(\nu)=1$ if the group $G$ is compact but non-discrete (and thus is normalized to have $\mu(G)=1$) and $\overline{D}(\nu)=0$ when $G$ is non-compact (and thus there exist compact sets of arbitrarily large measure). Next we compute $\overline{\Delta}(\nu)$. The same argument (and the same choice of $\{0\}$ for $V$) with a finite set $F\subset G$ in place of $C\Subset G$ shows that $\overline{\Delta}(\nu)=\inf_{F\subset G, \#F<\infty} 1/\mu(F)$, which, however, is $1/0=\infty$ whenever $G$ is non-discrete. Even if we exclude division by 0, the same result is obtained by taking first some $V\in{\mathcal B}_0$ with $0<\mu(V)<\eta$, and then writing $\nu(V)/\mu(V+F)\geq 1/(\# F \eta)$, hence $\sup_{V\in {\mathcal B}_0} \nu(V)/\mu(V+F) \geq \sup_{\eta>0} 1/(\# F \eta)= \infty$, for any fixed finite subset $F$. Not even the subsequent $\inf_F$ can remedy this. Therefore the inequality $\overline{\Delta}(\nu) > \overline{D}(\nu)$ is proved if $G$ is non-discrete. ◻ Note that here -- contradicting our original heuristics of $\mu(C+V) \to \infty$ together with $\mu(V)\to \infty$ whenever the defined value of our density is approximated closely -- the sets which exhibit close-to-optimal density are very small ones. Densities are used in different contexts; in e.g. number theory a density is generally understood as some form of asymptotic density, with measures tending to infinity, but in e.g. real analysis local densities, over small neighborhoods, are equally important. Even if we have a certain heuristics telling us what we would like to grasp, we should be careful not to be misled by our own imagination: the density we have defined above may sometimes be extremal on small sets, too. We will see other instances, too, when the heuristics -- e.g. that \"the larger the density is, the better it is for a plausible statement\" -- may fail. # Some additive number theory flavored results for difference sets {#sec:additiveresults} We have already noted that extremal problems of Turán and Delsarte, as well as conditions for sets being sets of sampling or interpolation, can be investigated in the generality of LCA groups by means of the a.u.u.d. properly extended. Here we collect a few other instances, mainly of number theoretic flavor, where generalizations have also been tried, and where we will apply our general definition to extend known results from more restrictive cases to general LCA groups. Let us denote the usual upper density of $A\subset {\mathbb N}$ as $\overline{d}(A):=\limsup_{n\to\infty} A(n)/n$ with $A(n):= {\rm \#} (A\cap [1,n])$. Erdős and Sárközy (seemingly unpublished, but quoted in [@hegyvari:differences] and in [@ruzsa:difference]) observed the following. **Proposition 4** (Erdős-Sárközy). *If the upper density $\overline{d}(A)$ of a sequence $A\subset {\mathbb N}$ is positive, then writing the positive elements of the sequence $D(A):=D_1(A):=A-A$ as $D(A) \cap {\mathbb N} =\{(0<)d_1<d_2<\dots\}$ we have $d_{n+1}-d_n=O(1)$.* This is analogous to, but not contained in, the following result of Hegyvári, obtained for $\sigma$-finite groups. An Abelian group is called $\sigma$-finite (with respect to $H_n$), if there exists an increasing sequence of *finite* subgroups $H_n$ so that $G=\cup_{n=1}^\infty H_n$. 
For such a group Hegyvári defines the asymptotic upper density (with respect to $H_n$) of a subset $A \subset G$ as $$\label{GHdensity} \overline{d}_{H_n}(A) := \limsup_{n\to\infty} \frac{{\rm \#} (A\cap H_n)}{{\rm \#} H_n}~.$$ Note that for finite groups this is just ${\rm \#} (A\cap G) / {\rm \#} G$. Hegyvári proves the following [@hegyvari:differences Proposition 1]. **Proposition 5** (Hegyvári). *Let $G$ be a $\sigma$-finite Abelian group with respect to the increasing, exhausting sequence $H_n$ of finite subgroups and let $A\subset G$ have positive upper density with respect to $H_n$. Then there exists a finite subset $B\subset G$ so that $A-A+B=G$. Moreover, we have ${\rm \#}B\le 1/\overline{d}_{H_n}(A)$.* Fürstenberg calls a subset $S\subset G$ in a topological Abelian (semi)group a *syndetic* set, if there exists a compact set $K\subset G$ such that for each element $g\in G$ there exists a $k\in K$ with $gk\in S$; in other words, in topological groups $\cup_{k\in K} Sk^{-1}=G$. Then he presents as Proposition 3.19 (a) of [@Furst] the following. **Proposition 6** (Fürstenberg). *Let $S\subset {\mathbb Z}$ be a set with positive a.u.u.d. Then $S-S$ is syndetic.* In the following we use the above extended notions of a.u.u.d. on arbitrary LCA groups, and present various generalized versions of the above results. Furthermore, we obtain sharpened variants of these results making use of both density notions. # The first extension of the propositions of Erdős-Sárközy, of Hegyvári, and of Fürstenberg[\[sec:additiveproposgeneralized\]]{#sec:additiveproposgeneralized label="sec:additiveproposgeneralized"} In this section we will prove the following result. **Theorem 4**. *If $G$ is a LCA group with Haar measure $\mu$, and $A\subset G$ has $\overline{{\Delta}}(A)>0$, then there exists a finite subset $B\subset G$ so that $A-A+B=G$. Moreover, we can find $B$ with $\# B \le [1/\overline{{\Delta}}(A)]$.* Theorem 1, stated in the Introduction, will be a corollary of this result. **Remark 4**. We need a translation-invariant (Haar) measure, but not the topology or compactness. *Proof of Theorem [Theorem 4](#th:general-hegyvari){reference-type="ref" reference="th:general-hegyvari"}.* Assume that $H\subset G$ satisfies $(A-A)\cap(H-H)=\{0\}$ and let $L=\{b_1,b_2,\dots,b_k\}$ be any finite subset of $H$. By construction, we have $(A+b_i)\cap (A+b_j)=\emptyset$ for all $1\le i < j\le k$. Take now $F:=L$ in the definition [\[FAdensity\]](#FAdensity){reference-type="ref" reference="FAdensity"} of the density and take $0<\tau<\rho:=\overline{{\Delta}}(A)$. By Definition [Definition 3](#finitedensity){reference-type="ref" reference="finitedensity"} of the density $\overline{{\Delta}}(A)$, there are $x\in G$ and $V \subset G$ open with compact closure -- or, a $V\in {\mathcal S}$ with $0<|V|<\infty$ -- satisfying $$\label{AVx} |A\cap(V+x)|>\tau|V+L|~.$$ In the other direction, $$\label{VLetc} V+L=\bigcup_{j=1}^k \left(V+x+(b_j-x) \right)\supset \bigcup_{j=1}^k \left( ((V+x)\cap A)+b_j \right)-x$$ and as the sets $A+b_j$ (thus also $((V+x)\cap A) +b_j$) are disjoint, and the Haar measure is translation invariant, we are led to $$\label{kVxA} |V+L|\ge k|(V+x)\cap A|~.$$ Combining [\[AVx\]](#AVx){reference-type="eqref" reference="AVx"} and [\[kVxA\]](#kVxA){reference-type="eqref" reference="kVxA"} we are led to $$\label{tauk} |V+L| > k\tau |V+L|~,$$ hence after cancellation by $|V+L|>0$ we get $k<1/\tau$ and so in the limit $k\le K:=[1/\rho]$. 
It follows that $H$ is necessarily finite and $\# H\le K$. So let now $B=\{b_1,b_2,\dots,b_k\}$ be any set with the property $(A-A)\cap(B-B)=\{0\}$ (which implies $\# B \le K$) and maximal in the sense that for no $b'\in G\setminus B$ can this property be kept for $B':=B\cup\{b'\}$. In other words, for any $b'\in G\setminus B$ it holds that $(A-A)\cap (B'-B')\ne\{0\}$. Clearly, if $A-A=G$ then any one point set $B:=\{b\}$ is such a maximal set; and if $A-A\ne G$, then a greedy algorithm leads to one in $\le K$ steps. Now we can prove $A-A+B=G$. Indeed, if there exists $y\in G\setminus(A-A+B)$, then $(y-b_j)\notin A-A$ for $j=1,\dots,k$, hence $B':=B\cup\{y\}$ would be a set satisfying $(B'-B')\cap(A-A)=\{0\}$, contradicting the maximality of $B$. ◻ **Corollary 1**. *Let $A\subset {\mathbb R}^d$ be a (measurable) set with $\overline{{\Delta}}(A)>0$. Then there exist $b_1,\dots,b_k$ with $k \le K := [1/\overline{{\Delta}}(A)]$ so that $\cup_{j=1}^k (A-A+b_j)={\mathbb R}^d$.* This is interesting as it shows that the difference set of a set of positive density $\overline{{\Delta}}$ is necessarily rather large: just a few translated copies cover the whole space. Observe that we have Proposition [Proposition 6](#prop:Furst){reference-type="ref" reference="prop:Furst"} as an immediate consequence of Theorem [Theorem 4](#th:general-hegyvari){reference-type="ref" reference="th:general-hegyvari"}, because ${\mathbb Z}$ is discrete, and thus the two notions $\overline{{\Delta}}$ and $\overline{D}$ of a.u.u.d. coincide; moreover, the finite set $B:=\{b_1,\dots,b_k\}$ is a compact set in the discrete topology of ${\mathbb Z}$. But in fact we can as well formulate the following extension. **Corollary 2**. *Let $G$ be a LCA group and $S\subset G$ a set with positive a.u.u. density, i.e. $\overline{D}(S)>0$ (where $\overline{D}(S)= \overline{D}(\mu|_S)$, in line with [\[CAdensity\]](#CAdensity){reference-type="eqref" reference="CAdensity"} above). Then the difference set $S-S$ is a syndetic set: moreover, the set of translations $K$, for which we have $G=(S-S)+K$, can be chosen not only compact, but even to be a finite set with $\# K \leq [1/\overline{D}(S)]$ elements.* This corollary is immediate, because $\overline{{\Delta}}(S)\geq \overline{D}(S)$ according to Proposition [Proposition 2](#prop:densitycompari){reference-type="ref" reference="prop:densitycompari"}. Note that we have already stated a less precise form of this (without the estimate on the size of $K$) as Theorem 1 in the Introduction. This indeed generalizes the proposition of Fürstenberg. Also, this result contains the result of Hegyvári: for on $\sigma$-finite groups the natural topology is the discrete topology, whence the natural Haar measure is the counting measure, and so on $\sigma$-finite groups Corollary [Corollary 2](#cor:genFurst){reference-type="ref" reference="cor:genFurst"} and Theorem [Theorem 4](#th:general-hegyvari){reference-type="ref" reference="th:general-hegyvari"} coincide. Finally, this also generalizes and sharpens the Proposition of Erdős and Sárközy. Indeed, on ${\mathbb Z}$ or ${\mathbb N}$ we naturally have $\overline{{\Delta}}(A)=\overline{D}(A)\geq \overline{d}(A)$, so if the latter is positive, then so is $\overline{D}(A)$; and then the difference set is syndetic, with finitely many translates belonging to a translation set $K\subset {\mathbb N}$, say, covering the whole ${\mathbb Z}$. 
Hence $d_{n+1}-d_n-1$ cannot exceed the maximal element of the finite set $K$ of translations. # Still another extension of the Lemma of Fürstenberg {#sec:Furst-extension-auud} The above generalization is satisfactory for discrete groups in particular, since for those groups the $\overline{\Delta}$ notion of density matches the $\overline{D}$ notion, and is hence a generalized version of the density used in ${\mathbb Z}$ by Fürstenberg. However, for general LCA groups, the generally smaller density, that is $\overline{D}$, is known to be the right generalization. The above Corollary [Corollary 2](#cor:genFurst){reference-type="ref" reference="cor:genFurst"} settled the generalization for this density notion, too. Here we pass on to a third density notion, more precisely, the same density notion but applied to the discrete \"characteristic measure\" or \"cardinality measure\". Again, for discrete groups it matches the above two notions, as is trivial from the fact that in discrete groups the Haar measure is just the counting measure. However, in general (non-discrete) LCA groups the counting measure is considerably larger than the Haar measure -- in fact the cardinality measure of a set is infinite whenever its Haar measure is positive. Therefore, out of all the a.u.u.d. notions, $\overline{D}^{\#}(S) := \overline{D}(\gamma_{S})$ becomes the largest, and knowing that this upper density is positive is, in general, the weakest possible assumption on a set. Nevertheless, we have the following result even with this bigger notion of a.u.u.d. **Theorem 5**. *Let $G$ be a LCA group and $S\subset G$ a set with a positive (but finite) a.u.u.d., regarding now the counting measure of elements of $S$ in the definition of a.u.u.d., i.e. $\overline{D}^{\#}(S) := \overline{D}(\gamma_{S})$ in line with [\[CLdensity\]](#CLdensity){reference-type="eqref" reference="CLdensity"}. Then the difference set $S-S$ is a syndetic set.* **Remark 5**. One would like to say that a density $+\infty$ is \"even better\", so that we could drop the finiteness condition from the formulation of Theorem [Theorem 5](#thm:strongFurst){reference-type="ref" reference="thm:strongFurst"}. However, in non-discrete groups this is not the case: such a density can in fact be disastrous. Consider e.g. the set of points $S:=\{1/n~:~n\in {\mathbb N}\}$ as a subset of ${\mathbb R}$. Clearly for any compact $C$ of positive Haar (i.e. Lebesgue) measure $|C|>0$, and for any $V\in {\mathcal B}_0$ of finite measure and compact closure, $|V+C|$ is positive but finite. Therefore, whenever $0\in {\rm int} V$, we automatically have $\#(S\cap V)=\infty$ and also $\#(S\cap V)/|C+V|=\infty$, hence $\overline{D}^{\#}(S)=\infty$; but $S-S\subset [-1,1]$. Thus with a compact $B$ it is not possible for $B+S-S$ to cover $G={\mathbb R}$, hence $S-S$ is not syndetic. **Problem 1**. *The implicitly occurring set of translations $K$, for which we have $G=(S-S)+K$, seems not too well controlled in size by the proof below. However, *some* bound does follow, see Remark [Remark 6](#rem:Ksize){reference-type="ref" reference="rem:Ksize"}. One may want to find the right bound, perhaps even $\mu (K) \leq [1/\overline{D}(S)]$, for an appropriately chosen compact set of translates $K$. 
This we cannot do yet.* *Proof of Theorem [Theorem 5](#thm:strongFurst){reference-type="ref" reference="thm:strongFurst"}.* Even if the proof may not be the optimal one, we consider it worthwhile to present it in full detail, for the auxiliary steps seem to be rather general and useful statements in themselves. Correspondingly, we break the argument into a series of lemmas. **Lemma 2**. *Let $S\subset G$ and assume $\rho:=\overline{D}^{\#}(S)=\overline{D} (\gamma|_S) \in (0,\infty)$. Consider any compact set $H \Subset G$ satisfying the \"packing type condition\" $(H-H)\cap (S-S) =\{0\}$ with $S$. Then we necessarily have $\mu(H) \leq 1/\overline{D}^{\#}(S)$.* *Proof.* Let $0<\tau<\rho$ be arbitrary. By definition of $\overline{D}^{\#}(S)$ (using $H$ in place of $C$), there must exist a measurable set $V\in {\mathcal B}_0$ so that $\infty>\#(S\cap V)>\tau \mu(V+H)$, therefore also $\#(S\cap V)>\tau \mu((S\cap V)+H)$. However, for any two elements $s\ne s' \in (S\cap V)\subset S$, $(s+H)\cap (s'+H) =\emptyset$, since in case $g\in (s+H)\cap (s'+H)$ we have $g=s+h=s'+h'$, i.e. $s-s'=h-h'$, which is impossible for $s\ne s'$ and $(H-H)\cap (S-S)=\{0\}$. Therefore for each $s\in (S\cap V)$ there is a translate of $H$, totally disjoint from all the others: i.e. the union $(S\cap V)+H= \cup_{s\in(S\cap V)}(s+H)$ is a disjoint union. By the properties of the Haar measure, we thus have $\mu((V\cap S)+H)=\sum_{s\in(S\cap V)} \mu(s+H)= \#(V\cap S)\mu(H)$. Hence we find $\#(S\cap V) \geq \tau \#(S\cap V) \mu(H)$. As $\#(S\cap V)>\tau \mu(V+H)$ was positive, we can cancel with it and infer $\mu(H)\leq 1/\tau$. This holding for all $\tau<\rho =\overline{D}^{\#}(S)$, we obtain that any compact set $H$, satisfying the packing type condition with $S$, is necessarily bounded in measure by $1/\overline{D}^{\#}(S)$. ◻ **Lemma 3**. *Suppose that $(S-S)\cap (H-H)=\{0\}$ with $\rho:=\overline{D}^{\#}(S)=\overline{D} (\gamma|_S) \in (0,\infty)$ and $H\Subset G$ with $0<\mu(H-H)$. Then the set $A:=S+H$ (that is, the trace of the Haar measure on $A$) has asymptotic uniform upper density $\overline{D}(\mu|_A)$ not less than $\rho\cdot \mu(H)$.* *Proof.* Let $C\Subset G$ be arbitrary. We want to estimate from below the ratio $\mu(A\cap V)/\mu(C+V)$ for an appropriately chosen $V\in {\mathcal B}_0$. Let us fix that we will take for $V$ some set of the form $U+H$ with $U\in {\mathcal B}_0$. Clearly $A\cap V = (S+H)\cap (U+H) \supset (S\cap U)+H$. Now for any two elements $s\ne t \in S$, thus even more for $s,t \in (S\cap U)$, the sets $s+H$ and $t+H$ are disjoint, this being an easy consequence of the packing property because $s+q=t+r \Leftrightarrow s-t=q-r$, which is impossible for $s-t\ne 0$ by condition. Therefore by the properties of the Haar measure we get $\mu((S\cap U)+H)=\sum_{s\in(S\cap U)} \mu(s+H)= \#(S\cap U) \cdot \mu(H)$. In all, we found $\mu(A\cap V)\geq \#(S\cap U) \cdot \mu(H)$. It remains to choose $V$, that is, $U$, appropriately. For the compact set $C+H\Subset G$ and for any given small ${\varepsilon}>0$, by definition of $\overline{D} (\#|_S;\mu)=\rho$ there exists some $U\in {\mathcal B}_0$ such that $\#(S\cap U) >(\rho-{\varepsilon}) \mu((C+H)+U)$. Choosing this particular $U$ and combining the two inequalities we are led to $\mu(A\cap V)\geq (\rho-{\varepsilon}) \mu(C+H+U) \mu(H)$, that is, with $V:=U+H$ this can be written as $\mu(A\cap V)/\mu(C+V)\geq (\rho-{\varepsilon}) \mu(H)$. 
As we find such a $V$ for every positive ${\varepsilon}$, the sup over $V\in{\mathcal B}_0$ is at least $\rho\mu(H)$, and because $C\Subset G$ was arbitrary, we infer the assertion. ◻ **Lemma 4**. *Suppose that $(S-S)\cap (H-H)=\{0\}$ with $\rho:=\overline{D}^{\#}(S)=\overline{D} (\gamma|_S) \in (0,\infty)$ and $H\Subset G$ with $0<\mu(H)$. Then there exists a finite set $B=\{b_1,\dots,b_k\}\subset G$ of at most $k\leq [1/(\rho\mu(H))]$ elements so that $B+(H-H)+(S-S)=G$. In particular, the set $S-S$ is syndetic with the compact set of translates $B+(H-H)$.* *Proof.* By the above Lemma [Lemma 3](#l:fattening){reference-type="ref" reference="l:fattening"} we have an estimate on the asymptotic uniform upper density of $A:=S+H$ (i.e. $\mu|_A$). But then we may apply Corollary [Corollary 2](#cor:genFurst){reference-type="ref" reference="cor:genFurst"} to see that the difference set $(S+H)-(S+H)$ is a syndetic set with the set of translates $B$ admitting $\#B\leq [1/\overline{D}(\mu|_A)]\leq [1/(\rho\mu(H))]$. Since $(S+H)-(S+H)=(S-S)+(H-H)$ and the set $H$ is compact, this yields that $S-S$ is syndetic as well, with the compact set of translations being $B+(H-H)$. ◻ One may think that it is not difficult, for a discrete set $S$ of finite density with respect to the counting measure, to find a compact neighborhood $R$ of 0, so that $R\cap (S-S)$ is almost empty, with 0 being its only element. If so, then by continuity of subtraction, also for some compact neighborhood $H$ of zero with $(H-H)\subset R$ (and, being a neighborhood, with $\mu(H)>0$, too) we would have $(H-H)\cap(S-S)=\{0\}$, the packing type condition, which would conclude the proof of Theorem [Theorem 5](#thm:strongFurst){reference-type="ref" reference="thm:strongFurst"}. Unfortunately, this idea turns out to be naive. Consider the sequence $S=\{n+1/n ~:~ n\in {\mathbb N}\} \cup {\mathbb N}$ (in ${\mathbb R}$), which has asymptotic uniform upper density 2 with the cardinality measure, whilst $S-S$ is accumulating at 0. Nevertheless, this example is instructive. What we will find is that a set of *finite* positive asymptotic uniform upper density cannot have a too dense difference set: it always splits into a fixed, bounded number of disjoint subsets so that the difference set of each subset already leaves out a fixed compact neighborhood of 0. This will be the substitute for the above naive approach in finishing our proof of Theorem [Theorem 5](#thm:strongFurst){reference-type="ref" reference="thm:strongFurst"}, and it is obtained by proving also some kind of subadditivity of the asymptotic uniform upper density -- another auxiliary statement interesting in its own right. **Lemma 5**. *Let $H\Subset G$ be any compact neighborhood of 0 and let $S$ have positive but finite asymptotic uniform upper density with respect to the cardinality measure, i.e. $\rho:=\overline{D}^{\#}(S) \in (0,\infty)$.* *Then there exists a finite disjoint partition $S=\bigcup_{j=1}^n S_j$ of $S$ such that $(S_j-S_j)\cap (H-H) =\{0\}$.* *Moreover, for any given ${\varepsilon}>0$, choosing an appropriate compact neighborhood $H$ of 0 (depending on ${\varepsilon}$), we can even guarantee that the number $n$ of subsets in the partition is not more than $(1+{\varepsilon})\rho\mu(H-H)$.* *Proof.* Put $Q:=H-H$, and consider $R:=s+Q$ for an arbitrary, but fixed $s\in S$. Let us try to estimate the number of other elements of $S$ falling in $R$. 
Clearly $R\in {\mathcal B}_0$, so for any $C\Subset G$ we have $\#(S\cap R)/\mu(C+R)\leq \sup_{V\in{\mathcal B}_0}\#(S\cap V)/\mu(C+V)$ and thus for any ${\varepsilon}>0$ and with some appropriate $C\Subset G$ this is bounded by $\rho+{\varepsilon}$ according to the density condition. Note that the choice of $C$ depends only on ${\varepsilon}$, but not on $R$. That is, we already have the bound $k:=\#(S\cap R)\leq (\rho+{\varepsilon}) \mu(C+R)$ with the given $C=C({\varepsilon})$, chosen independently of $R$, i.e. of $s$ and of $H$. Next we show how to obtain the bound $k:=\#(S\cap R)\leq (\rho+{\varepsilon})\mu(Q)$ for some appropriate choice of $H$. This hinges upon a lemma from [@rudin:groups], stating that for any given compact set $C\Subset G$ and $\eta>0$ there exists another Borel set $V$, also with compact closure, so that $\mu(C+V)<(1+\eta)\mu(V)$, cf. 2.6.7 Theorem on page 52 of [@rudin:groups]; moreover, Rudin remarks that this can even be proved (actually, read out from the proof) with open sets $V$ having compact closure. It is a matter of invariance of the Haar measure with respect to translations to ascertain that 0 is among the interior points of $V$, so that $V$ is a neighborhood of 0; also, by regularity of the Borel measure, and by compactness of the closure, we can just as well take $V$ to be its own closure. See Theorem [Theorem 2](#th:Rudinlemma){reference-type="ref" reference="th:Rudinlemma"} and the nice proof of Ruzsa below. In all, working with the above chosen compact set $C=C({\varepsilon}/2)$ belonging to ${\varepsilon}/2$ and *with an appropriate choice of $V$ for $H$*, we even have $k:=\#(S\cap R)\leq (\rho+{\varepsilon}/2) \mu(C+R)< (\rho+{\varepsilon}/2)(1+\eta) \mu(Q)$. Note that here the dependence on $C$ disappears from the end formula, but there is a dependence of $H$ on $\eta$. It remains to construct the partition once we have a compact neighborhood $H$ of 0 and a finite number $k\in {\mathbb N}$ such that $\#(S\cap(s+H-H))\leq k$ for any $s\in S$. More precisely, we will construct a disjoint partition with $n\leq k$ parts, so the above estimate of $k$ will also imply the asserted bound $n\leq (1+{\varepsilon})\rho\mu(H-H)$. This is a standard argument. Consider a graph on the points of $S$ defined by connecting two distinct points $s$ and $t$ exactly when $t\in s+H-H$. By virtue of the symmetry of $Q:=H-H$ this happens exactly when $s \in t+ H-H$, so the above definition indeed defines a graph, not just a directed graph, on the points of $S$. In this graph, by assumption, the degree of any point $s\in S$ is at most $k-1$, as there are at most $k-1$ further points $t$ of $S$ in $s+H-H$. But it is well-known that such a graph can be partitioned into $k$ subgraphs with no edges within any of the induced subgraphs.[^5] That is, the set of points $S$ splits into the disjoint union of some $S_j$ with no two points $s,t\in S_j$ being in the relation $t\in s+H-H$, which would define an edge between them. It is easy to see that with this we have constructed the required partition: the $S_j$ are disjoint, and so are $(S_j-S_j)$ and $(H-H)\setminus\{0\}$, for any $j=1,\dots,k$, because $s-t=h-h'$ implies $t=s+h'-h\in s+H-H$, which is excluded by construction. This concludes the proof. ◻ **Lemma 6** (**subadditivity**). *Let $\nu_0=\sum_{j=1}^n \nu_j$ be a sum of measures, all on the common set algebra ${\mathcal S}$ of measurable sets. 
Then we have $\overline{D}(\nu_0) \leq \sum_{j=1}^n \overline{D}(\nu_j)$.* *In particular, this holds for one given measure $\nu$ and a disjoint union of sets $A_0=\cup_{j=1}^n A_j$, with $\nu_j:=\nu|_{A_j}$, for $j=0,1,\dots,n$. If $\nu=\mu$, this gives $\overline{D}(\cup_{j=1}^{n}A_j) \leq \sum_{j=1}^n \overline{D}(A_j)$.* *Proof.* Let us write $\rho_j:=\overline{D}(\nu_j)$, $j=0,1,\ldots,n$. The a.u.u.d. is clearly monotone in the measures considered, therefore all $\nu_j$ have an a.u.u.d. $0\leq \rho_j\leq \rho_0$; we may also assume that all these values are finite. Let ${\varepsilon}>0$ be arbitrary, and take $C_j\Subset G$ so that for all $V\in {\mathcal B}_0$ in the definition of $\overline{D}(\nu_j)$ we have $\nu_j(V)\leq (\rho_j+{\varepsilon})\mu(C_j+V)$ for $j=1,\dots,n$. Such $C_j$ exist in view of the infimum on $C\Subset G$ in the definition [\[Cnudensity\]](#Cnudensity){reference-type="eqref" reference="Cnudensity"} of a.u.u.d. Consider the (still) compact set $C:=C_1+\dots+C_n$. By definition of a.u.u.d. there is $V\in{\mathcal B}_0$ such that $\nu_0(V)\geq (\rho_0-{\varepsilon}) \mu(C+V)$. Obviously, $\mu(C_j+V)\leq \mu(C+V)$, so on combining the above we obtain $$\rho_0 -{\varepsilon}\leq \frac{\nu_0(V)}{\mu(C+V)} = \frac{\sum_{j=1}^n \nu_j(V)}{\mu(C+V)} \leq \sum_{j=1}^n \frac{\nu_j(V)}{\mu(C_j+V)} \leq \sum_{j=1}^n (\rho_j+{\varepsilon}),$$ that is, $\rho_0-{\varepsilon}\leq \sum_j (\rho_j+{\varepsilon})$. This holding for all ${\varepsilon}$, we find $\rho_0\leq \sum_j \rho_j$, as was to be proved. ◻ *Continuation of the proof of Theorem [Theorem 5](#thm:strongFurst){reference-type="ref" reference="thm:strongFurst"}*. We take now an *arbitrary* compact neighborhood $H\Subset G$ of $0$, with of course $\mu(H)>0$. By Lemma [Lemma 5](#l:partition){reference-type="ref" reference="l:partition"} there exists a finite disjoint partition $S=\cup_{j=1}^n S_j$ with $(S_j-S_j)\cap(H-H)=\{0\}$. By subadditivity of a.u.u.d. (that is, Lemma [Lemma 6](#l:subadditivity){reference-type="ref" reference="l:subadditivity"} above), at least one of these $S_j$ must have positive a.u.u.d. $\rho_j$ (with respect to the counting measure), namely of density $0< \rho/n \leq \rho_j \leq \rho < \infty$, with $\rho:=\overline{D}^{\#}(S)$. Selecting such an $S_j$, we can apply Lemma [Lemma 4](#l:ifclearthengood){reference-type="ref" reference="l:ifclearthengood"} to infer that already $S_j-S_j$ -- hence also $S-S\supset S_j-S_j$ -- is syndetic. ◻ **Remark 6**. In fact, the above proof also provides an estimate on the measure of the compact translation set $B+H-H$ exhibiting the syndetic property of $S-S$, or, more precisely, already of some $S_j-S_j$, if we select $H$ suitably. Namely, $\# B\leq 1/(\rho_j\mu(H))$, where $\rho_j\geq \rho/n$ and $n\leq (1+{\varepsilon})\rho \mu(H-H)$ yield $\# B\leq (n/\rho)/\mu(H) \leq (1+{\varepsilon})\mu(H-H)/\mu(H)$, and hence $\mu(B+H-H)\leq \# B\,\mu(H-H)\leq (1+{\varepsilon})\mu(H-H)^2/\mu(H)$. A. Beurling, Interpolation for an interval in ${\mathbb R}^1$, in: *The Collected Works of Arne Beurling, in: Harmonic Analysis*, vol. **2**, Birkhäuser Boston, Boston, MA, 1989. A. Beurling, Balayage of Fourier-Stieltjes transforms, in: *The Collected Works of Arne Beurling, in: Harmonic Analysis*, vol. **2**, Birkhäuser Boston, Boston, MA, 1989. E. Berdysheva, Sz. Gy. Révész, *Delsarte's extremal problem and packing on locally compact Abelian groups*, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) **Vol. XXIV** (2023), 1007--1052. H. Cohn, A. Kumar, S. D. Miller, D. Radchenko, M. Viazovska, The sphere packing problem in dimension 24, *Ann. of Math.* (2) **185** (2017), no. 3, 1017--1033. 
H. Fürstenberg, *Recurrence in Ergodic Theory and Combinatorial Number Theory*, Princeton University Press, Princeton, 1981. W. H. Gottschalk and G. A. Hedlund, *Topological dynamics*, American Mathematical Society Colloquium Publications **36**, American Mathematical Society, Providence, R. I., 1955, vii+151 pp. K. Gröchenig, G. Kutyniok, K. Seip, Landau's necessary density conditions for LCA groups, *J. Funct. Anal.* **255** (2008), 1831--1850. H. Groemer, Existenzsätze für Lagerungen im Euklidischen Raum (German), *Math. Z.* **81** (1963), 260--278. N. Hegyvári, On iterated difference sets in groups, *Periodica Mathematica Hungarica* **43** (2001), no. 1--2, 105--110. E. Hewitt and K. A. Ross, *Abstract harmonic analysis*, **II**, Die Grundlehren der mathematischen Wissenschaften, Band **152**, Springer Verlag, Berlin, Heidelberg, New York, 1970. J.-P. Kahane, Sur les fonctions moyenne-périodiques bornées, *Ann. Inst. Fourier* **7** (1957), 293--314. J.-P. Kahane, *Sur quelques problèmes d'unicité et de prolongement, relatifs aux fonctions approchables par des sommes d'exponentielles*, Thèse de doctorat, Université de Paris, Série A, no. 2633, 1954, 102 pages. J.-P. Kahane, Sur quelques problèmes d'unicité et de prolongement, relatifs aux fonctions approchables par des sommes d'exponentielles, *Ann. Inst. Fourier* **5** (1954), 39--130. M. N. Kolountzakis, Sz. Gy. Révész, Turán's extremal problem for positive definite functions on groups, *J. London Math. Soc.* **74** (2006), 475--496. H. J. Landau, Necessary density conditions for sampling and interpolation of certain entire functions, *Acta Math.* **117** (1967), 37--52. , On asymptotic uniform upper density in locally compact Abelian groups, *preprint*, see on arXiv as `arXiv:0904.1567`, (2009), 13 pages. Sz. Gy. Révész, Turán's extremal problem on locally compact Abelian groups, *Anal. Math.* **37** (2011), 15--50. Sz. Gy. Révész, *Extremal problems for positive definite functions and polynomials*, Thesis for the "Doctor of the Academy" degree, April 2009, see at `http://www.renyi.hu/ revesz/preprints.html` Sz. Gy. Révész, On asymptotic uniform upper density in locally compact Abelian groups, *Real Analysis Exchange* **37** (2012), no. 1, 24--31 (in the Supplement *35th Summer Symposium Conference Reports*). W. Rudin, *Fourier analysis on groups*, Interscience Tracts in Pure and Applied Mathematics, No. **12**, Interscience Publishers (a division of John Wiley and Sons), New York--London, 1962, ix+285 pp. , On difference-sequences, *Acta Arith.* **XXV** (1974), 151--157. V. Totik, On comparison of asymptotic uniform upper densities in non-discrete LCA groups, *e-mail to Sz. Révész*, 25 July 2010; also in the Referee report on the thesis *"Extremal problems for positive definite functions and polynomials"* by Sz. Gy. Révész, Hungarian Academy of Sciences, Budapest, 2011. M. S. Viazovska, The sphere packing problem in dimension $8$, *Ann. of Math.* **185** (2017), no. 3, 991--1015. E-mail: `revesz@renyi.hu` [^1]: Supported in part by the Hungarian National Office for Research, Development and Innovation, Project no. K-132097. [^2]: It is often, but seemingly erroneously, called Banach density, among others also by Fürstenberg. [^3]: It seems that in the literature almost exclusively this latter equivalent form of the definition is used, although Kahane's original formulation is quite useful. 
[^4]: This is defined in ${\mathbb R}^d$ or ${\mathbb Z}^d$ as $\limsup_{r\to \infty} \frac{\# \{s\in S~:~ |s|\leq r\}}{2r}$, without taking a sup with respect to the center of the interval. [^5]: The proof of this is very easy for finite or countable graphs: just start to put the points, one by one, inductively into $k$ preassigned sets $S_j$ so that each point is put in a set where no neighbor of it stays; since each point has less than $k$ neighbors, this simple greedy algorithm cannot be blocked and the points all find a place. For larger graphs the same argument works in each connected (hence countable) component.
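To make the greedy argument of footnote 5 concrete, the following minimal sketch (our own illustration, not part of the original argument; the function name and the adjacency-list input format are assumptions) partitions the vertices of a finite graph of maximum degree at most $k-1$ into at most $k$ independent sets, exactly as used in the proof of Lemma 5.

```python
def greedy_partition(neighbors, k):
    """Partition the vertices of a finite graph into at most k independent sets.

    `neighbors` maps each vertex to the set of its neighbors; the maximum
    degree is assumed to be at most k - 1, so a free class always exists.
    """
    classes = [set() for _ in range(k)]
    for v in neighbors:
        # put v into the first class containing none of its neighbors
        for cls in classes:
            if cls.isdisjoint(neighbors[v]):
                cls.add(v)
                break
    return [c for c in classes if c]

# Toy example: a 4-cycle has maximum degree 2, hence splits into at most 3 classes.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_partition(cycle, 3))   # [{0, 2}, {1, 3}]
```

In the application of Lemma 5, the vertices would be the points of $S$ and two points would be joined exactly when their difference lies in $(H-H)\setminus\{0\}$.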
arxiv_math
{ "id": "2309.06088", "title": "Kahane's upper density and syndetic sets in LCA groups", "authors": "Szil\\'ard Gy. R\\'ev\\'esz", "categories": "math.CA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We develop Conley's theory for multivalued maps on finite topological spaces. More precisely, for discrete-time dynamical systems generated by the iteration of a multivalued map which satisfies appropriate regularity conditions, we establish the notions of isolated invariant sets and index pairs, and use them to introduce a well-defined Conley index. In addition, we verify some of its fundamental properties such as the Ważewski property and continuation. address: - Jonathan Barmak, Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Matemática, Buenos Aires, Argentina. CONICET-Universidad de Buenos Aires, Instituto de Investigaciones Matemáticas Luis A. Santaló (IMAS), Buenos Aires, Argentina - Marian Mrozek, Division of Computational Mathematics, Faculty of Mathematics and Computer Science, Jagiellonian University, ul. St. Łojasiewicza 6, 30-348 Kraków, Poland - Thomas Wanner, Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA author: - Jonathan Barmak - Marian Mrozek - Thomas Wanner bibliography: - references.bib date: - today - Version compiled on title: Conley index for multivalued maps on finite topological spaces --- [^1] # Introduction {#sec:intro} Topological methods have always been at the heart of the qualitative study of dynamical systems. For example, topological fixed point theorems can establish the existence of stationary states based purely on topological properties of the underlying system and the space it is acting on. But even more complicated dynamical behavior can be studied in this way, for example recurrent and chaotic dynamics. One of the central tools in this context was developed by Charles Conley in [@conley:78a]. He realized that rather than focusing on the qualitative study of arbitrary invariant sets, it is advantageous to restrict one's attention to isolated invariant sets. Broadly speaking, such sets are more robust to continuous perturbations than general invariant sets. This insight allowed Conley to associate an index to isolated invariant sets $S$, which encodes some of their dynamical properties. The Conley index of $S$ can be determined without explicit knowledge of the specific isolated invariant set through associated index pairs, which provide rough topological enclosures of $S$. In the case of classical continuous-time dynamical systems the Conley index can either be defined as a pointed topological space, or in a more computationally friendly version, as a homology module. While Conley's theory originally considered continuous-time dynamical systems, it has since been extended to the case of iterated maps, i.e., to discrete-time dynamical systems. As it turns out, its definition is more elaborate in this situation, since the use of the underlying flow for the construction of homotopies and other auxiliary techniques are no longer available. In the discrete-time setting, in its most general form, the Conley index is the shift equivalence class of the homotopy type of the so-called index map, which is defined on a topological space constructed via index pairs ([@franks:richeson:00], see also [@szymczak:95a]). On the homology level, it is the Leray reduction of the homology of the index map [@mrozek:90a]. For more details, we refer the reader to [@mischaikow:mrozek:02a]. 
Moreover, Conley's theory has successfully been extended to the case of multivalued discrete-time dynamics, see for example [@batko:mrozek:16a; @kaczynski:etal:04a; @kaczynski:mrozek:95a; @stolot:06a] and the references therein. All of the results mentioned so far assume that the underlying phase space has nice topological properties, in particular, that it is at least a Hausdorff space. This is due to the fact that in order to construct the index and derive its properties, separation properties are essential to the perturbation robustness of the index. With the advent of modern data sciences, however, discrete spaces receive more and more attention. They can take the form of point clouds or simplicial complexes, or more generally cell complexes, Lefschetz complexes, and finite topological spaces. Dynamics on such spaces were studied by Forman in [@forman:98a; @forman:98b] using the concept of combinatorial vector fields. While these papers primarily served to extend Morse theory to the case of cell complexes, they also addressed some more general dynamical concepts. It was shown in [@kaczynski:etal:16a] that the notion of isolated invariant set does indeed have an analogue in the setting of combinatorial vector fields, and that one can define a Conley index. This was later extended to the case of combinatorial multivector fields on Lefschetz complexes in [@mrozek:17a], and on general finite topological spaces in [@lipinski:etal:23a]. For related results, we refer the reader to [@batko:etal:20a; @dey:etal:19a; @mrozek:etal:22a; @mrozek:wanner:21a]. Common to all of these results is that the underlying notion of dynamics is created through a combinatorialized version of a vector field, i.e., through the generator of a dynamical system which is reminiscent of continuous-time dynamics. In the present paper, we aim to demonstrate that Conley's theory can be extended to the case of general dynamical systems on finite topological spaces. As we will see in more detail in Section [3](#sec:comb-dyn){reference-type="ref" reference="sec:comb-dyn"}, actual dynamical systems on such combinatorial objects necessarily have to be multivalued and time-discrete. Thus, we consider the iteration of multivalued maps on finite topological spaces and define the notions of isolated invariant sets and their Conley index. We prove that the index is well-defined, and establish some of its basic properties. While our approach is modeled after previous results [@mrozek:90a; @batko:mrozek:16a], the involved proof techniques are significantly different. This is due to the lack of sufficient separation in finite topological spaces, and will be addressed in more detail later. The remainder of this paper is organized as follows. In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"} we recall basic definitions concerning finite topological spaces and continuity properties of multivalued maps. This is followed in Section [3](#sec:comb-dyn){reference-type="ref" reference="sec:comb-dyn"} by a brief discussion of combinatorial topological dynamics, which specifically demonstrates that on finite topological spaces interesting dynamics can only be observed in the context of iterating a multivalued map. In addition, we introduce the central notion of solution in this context. We then turn our attention to Conley theory. 
Section [4](#sec:isoset){reference-type="ref" reference="sec:isoset"} is devoted to isolated invariant sets and Morse decompositions, while Section [5](#sec:indexpair){reference-type="ref" reference="sec:indexpair"} is concerned with index pairs and their properties. Using these results, we can define the Conley index in Section [6](#sec:conleyindex){reference-type="ref" reference="sec:conleyindex"}, and derive some of its fundamental properties in Section [7](#sec:conleyindexprop){reference-type="ref" reference="sec:conleyindexprop"}. Finally, Section [8](#sec:future){reference-type="ref" reference="sec:future"} addresses some future work and open problems. # Preliminaries {#sec:prelim} We begin by recalling basic concepts and definitions for finite topological spaces, as well as for multivalued maps between them. While we focus only on the essentials, additional material can be found in [@alexandrov:37a; @barmak:11a; @barmak:etal:20a]. Given a finite topological space $X$ and a subspace $A$, we denote by $\opn A$ the open hull of $A$, that is, the smallest open set containing $A$. When $A$ consists of a unique point $a$ we also write $\opn A= \opn a$. Note that $\opn A= \bigcup\limits_{a\in A} \opn a$. The closure of $A$ is denoted by $\cl A$. Notice that for arbitrary elements $x,y\in X$ the inclusion $x \in \opn y$ is satisfied if and only if $y\in \cl x$. Every finite space has an associated preorder $\le$ (i.e., a reflexive and transitive relation) given by $x \le y$ if $x\in \cl y$.[^2] Conversely every finite set with a preorder $\le$ has a corresponding topology with the up-sets as the open sets. Recall that a subset $A\subseteq X$ is an up-set, if $a\le x$ for some $a\in A$ implies $x\in A$. Then, the dually defined down-sets correspond to the closed sets in this topology. A finite space $X$ is $T_0$ if and only if the preorder is an order (i.e., antisymmetric). A map $f:X\to Y$ between finite spaces is continuous if and only if it is order preserving, that is, if the inequality $x\le x'$ always implies $f(x)\le f(x')$. Although this correspondence is very useful to understand finite spaces from a combinatorial perspective, we have chosen to use the topological notation $\cl A$ instead of $X_{\le A}=\{x\in X | \ \exists \ a \in A$ with $x\le a \}$ and $\opn A$ instead of $X_{\ge A}=\{x\in X | \ \exists \ a \in A$ with $a\le x \}$ in order to make more evident the connection between this theory and the classical one. We say that a multivalued map $F: X \mto Y$ between two topological spaces has *closed values*, if $F(x)\subseteq Y$ is closed for every $x\in X$. Furthermore, the map $F$ is called *lower semicontinuous* if the small preimage $F^{-1}(H)=\{x \in X | F(x)\subseteq H\}$ is closed for every closed subset $H\subseteq Y$. For a multivalued map $F:X\mto Y$ with closed values between finite spaces, one can easily verify that being lower semicontinuous is equivalent to the condition that $x' \le x$ implies $F(x') \subseteq F(x)$, or, in other words, $x' \in \cl x$ implies $F(x') \subseteq F(x)$, see also [@barmak:etal:20a Lemma 3.5]. Finally, we say that $F$ has *acyclic values*, if for every $x\in X$ the subspace $F(x) \subseteq Y$ is acyclic. For the majority of the paper, we consider multivalued maps $F:X\mto Y$ which are lower semicontinuous and have closed values. 
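As a small illustration of the order-theoretic characterization of lower semicontinuity just described, the following sketch (our own, with hypothetical names; it is not code from the paper) models the face poset of a single edge $AB$ with its two vertices $A$ and $B$, and checks the condition that $x'\le x$ implies $F(x')\subseteq F(x)$ for two multivalued maps with closed values.

```python
# A minimal illustration (ours, not from the paper): on a finite T0 space, a
# multivalued map F with closed values is lower semicontinuous if and only if
# x' <= x implies that F(x') is contained in F(x).

closure = {          # cl(x): the set of faces of x, including x itself
    "A": {"A"},
    "B": {"B"},
    "AB": {"A", "B", "AB"},
}

def leq(x, y):
    """x <= y holds exactly when x lies in the closure of y."""
    return x in closure[y]

def is_lower_semicontinuous(F):
    return all(F[x1].issubset(F[x2])
               for x2 in F for x1 in F if leq(x1, x2))

F = {"A": {"B"}, "B": {"A"}, "AB": {"A", "B", "AB"}}   # closed values
print(is_lower_semicontinuous(F))    # True

G = {"A": {"A"}, "B": {"B"}, "AB": {"B"}}              # closed values as well
print(is_lower_semicontinuous(G))    # False: A <= AB, but G(A) is not inside G(AB)
```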
If we assume in addition that the map has acyclic values, then the projection $p_1:F\to X$ from the graph $$F=\{(x,y)\in X\times Y \ | \ y\in F(x)\} \subseteq X\times Y$$ into $X$ induces isomorphisms in all homology groups. This in turn implies that for such multivalued maps there is an induced homomorphism $F_*:H_*(X)\to H_*(Y)$ given by $F_*=(p_2)_*(p_1)^{-1}_*$ where $p_2:F\to Y$ stands for the other projection (see [@barmak:etal:20a Proposition 4.7]). # Combinatorial topological dynamics {#sec:comb-dyn} In this brief section we introduce the notion of a combinatorial dynamical system on a finite topological space, as well as the assumed topological properties of its multivalued generator $F:X\mto X$. Moreover, we indicate why in the setting of a finite topological space only discrete-time dynamics is of interest. We would like to point out, however, that through the notion of combinatorial vector fields on finite topological spaces, one can in fact arrive at a notion of dynamics which is similar in spirit to the continuous-time case, albeit not the same. Finally, we introduce the notion of solution, which is central to this paper. ## Multivalued dynamics on finite topological spaces Classical dynamical systems can broadly be divided into two categories --- discrete-time and continuous-time dynamical systems. In the former case, one is interested in the evolution of a system state at discrete points in time, and this is usually modeled by the iteration of a continuous map $F : X \to X$. Unfortunately, in the context of a finite topological space this leads to trivial dynamical behavior, with every orbit of the system eventually becoming periodic. Thus, in order to capture interesting dynamics, one is forced to consider multivalued maps $F:X\mto X$. While this has already been described in [@batko:etal:20a; @kaczynski:etal:16a; @lipinski:etal:23a; @mrozek:17a; @mrozek:etal:22a; @mrozek:wanner:21a], these papers consider very specific multivalued maps generated by an underlying combinatorial vector field or combinatorial multivector field --- and this approach is more in the spirit of the continuous-time case. See also our comments below. In contrast, the present paper is devoted to the study of general multivalued discrete-time dynamical systems on a finite topological space $X$. Since such general systems cannot rely on any supporting underlying structure such as a combinatorial multivector field, we need to impose certain regularity assumptions on the map $F$. Throughout this paper, we assume that $F:X\mto X$ is a lower semicontinuous multivalued map with closed values. These assumptions are inspired by the case of classical multivalued dynamics [@deimling:92a; @gorniewicz:06a], and they have also been used recently in the proof of a Lefschetz fixed point theorem for multivalued maps on finite spaces [@barmak:etal:20a]. We think of the map $F$ as a *combinatorial dynamical system*, which is obtained by iterations of the map, and which naturally leads to the concept of a *solution* --- as described in more detail in the following section. For now we would like to point out that a combinatorial dynamical system may also be viewed as a finite directed graph whose set of vertices is the topological space $X$, and with $F$ interpreted as the map sending a vertex to the collection of its neighbors connected via an outgoing directed edge. This so-called *$F$-digraph* encodes the dynamics of $F$ on a purely combinatorial level. 
However, for the derivation of more advanced concepts such as isolated invariant sets and their Conley index the topological properties of $X$ and $F$ are essential. In view of our focus on the discrete-time case, it is natural to wonder why we exclude the continuous-time case. As the following result shows, the semigroup property of a multivalued continuous-time dynamical system immediately forces the dynamics to be trivial. In fact, every orbit of the system has to be constant. Let $X$ be a finite set and let $F: X \times \RR_{\ge 0} \multimap X$ denote a multivalued map which satisfies the semigroup property $F(x,t+s)=F(F(x,t),s)$ for every $t,s\ge 0$. Then $F(x,-): \RR_{> 0}\multimap X$, given by $t\mapsto F(x,t)$, is constant for every $x\in X$. *Proof.* The map $F$ induces a singlevalued map $F:\mathcal{P}(X)\times \RR_{\ge 0} \to \mathcal{P}(X)$ given by $(A,t)\mapsto F(A,t)$. Here $\mathcal{P}(X)$ denotes the power set of $X$. Note that the identity $F(A,t+s)=F(F(A,t),s)$ holds for every $t,s\ge 0$. Since $\mathcal{P}(X)$ is finite, it suffices to prove the following assertion: - If $Y$ is a finite set and $F:Y\times \RR_{\ge 0} \to Y$ is a singlevalued map satisfying the semigroup property $F(y,t+s)=F(F(y,t),s)$ for every $t,s\ge 0$, then the map $F(y,-)$ is constant on $\RR_{>0}$ for each $y\in Y$. To show this, note that if $f:Y \to Y$ is any map, then the sequence $(f^n(Y))_{n\in \NN}$ is decreasing. We call $f^{\infty}(Y)\subseteq Y$ its eventual value. It is clear that $f^{\infty}(Y)=f^n(Y)$ for every $n$ greater than or equal to the cardinality $N$ of $Y$. The map $f$ induces a bijection from the eventual value $f^{\infty}(Y)$ to itself, and since the group of bijections has order dividing $N!$, the map $f^{N!}:f^{\infty}(Y) \to f^{\infty}(Y)$ is the identity. This in turn shows that $f^{N!}:Y \to f^{\infty}(Y)$ is a retraction, i.e., it is the identity when restricted to its codomain. Every $t\ge 0$ induces a map $F_t:Y \to Y$, $y\to F(y,t)$. Denote $R_t=F_t^{\infty}(Y) \subseteq Y$. By the comments above, the iterate $F_t^{N!}: Y \to R_t$ is a retraction. Furthermore, the set $R_t$ is the set of fixed points of $F_t^{N!}$. Now let $n\in \NN$ and $t\ge 0$. Then we have $F_{nt}=F_t^n$ in view of our hypothesis on $F$. Thus $F_{nt}^{N!}=F_t^{nN!}$ fixes every point of $R_t$ and does not fix any point outside $R_t$. This proves that $R_{nt}=R_t$. We deduce that for $t>0$, the set $R_t$ depends only on the class of $t$ modulo $\QQ_{>0}$, i.e., we have $R_t=R_{s}$ if $t^{-1}s\in \QQ$. In particular, this implies $R_{t/N!}=R_t$, and thus $R_t$ is the image of $F_t$, and $F_t$ is the identity on $R_t$. Finally, let $t,s > 0$. Since $F_{t+s}=F_sF_t$, the image of $F_{t+s}$ is contained in the image of $F_s$, i.e., $R_{t+s}\subseteq R_s$. Thus $R_t\subseteq R_s$ for every $t\ge s$. Since $R_s=R_{s/n}$ for every $n\in \NN$, one also obtains $R_t \subseteq R_s$ for every $t>0$. This in turn establishes the identity $R_t=R_s$ for every $t,s>0$. Suppose $s>t>0$. Then $F_{s-t}$ is the identity on $R_{s-t}=R_s=R_t$. Since $F_{s}=F_{s-t}F_t$ and $F_{s-t}$ is the identity on the image of $F_t$, then $F_s=F_t$. This proves the assertion. 0◻ ◻ The above result shows that it is the semigroup property alone which is incompatible with nonconstant dynamics if the underlying phase space is finite. As the reader undoubtedly noticed, we did not make use of any topological structure on $X$. 
Note, however, that one can mimic the behavior of a continuous-time dynamical system even on finite topological spaces by restricting dynamical transitions between subsets to shared boundaries. This is precisely what Forman had in mind with his combinatorial vector fields, and also lies at the center of the theory of multivector fields. In contrast, the discrete-time dynamics studied in the present paper does not have these restrictions, as it allows for transitions between states without topological closeness. ## Solutions and invariant sets Our study of the dynamics of discrete-time multivalued dynamical systems is based on the notions of solution and invariant set. These are defined just as in the classical situation. Consider a multivalued map $F: X \mto X$. Then a *solution* of $F$ in $A\subseteq X$ is a partial map $\sigma:\ZZ\nrightarrow A$ whose *domain*, denoted $\dom \sigma$, is an interval of integers, and for any $i,i+1\in\dom\sigma$ the inclusion $\sigma(i+1)\in F(\sigma(i))$ is satisfied. The solution $\sigma$ is called a *full solution* if $\dom \sigma=\ZZ$, otherwise it is a *partial solution*. A partial solution whose domain is bounded is referred to as a *path*. We denote the set of all paths with values in $A\subseteq X$ by $\mbox{$\operatorname{Path}$}(A)$. Given a path $\sigma$ with domain $\dom\sigma=\ZZ\cap [m,n]$ for some $m,n \in \ZZ$, we call $\sigma(m)$ and $\sigma(n)$, respectively, the left and right *endpoint* of $\sigma$. We denote these endpoints by the symbols $\sigma^{\sqsubset}$ and $\sigma^{\sqsupset}$, respectively. If $\tau$ is another path with $\dom\tau=\ZZ\cap [m',n']$ and such that $\tau^{\sqsubset} \in F(\sigma^{\sqsupset})$ holds, then we define the concatenation of the paths $\sigma$ and $\tau$, denoted by $\sigma.\tau$, as the path with domain $\dom \sigma.\tau := \ZZ\cap [m,n+n'-m'+1]$ and defined by $$(\sigma.\tau)(k):=\begin{cases} \sigma(k) & \text{ if $k\in \ZZ\cap [m,n]$},\\ \tau(k+m'-n-1) & \text{ if $k\in \ZZ\cap [n+1,n+1+n'-m']$}. \end{cases}$$ It is straightforward to verify that $\sigma.\tau$ is indeed a path. We now recall the definition of invariance. For this, we say that a solution $\sigma$ *passes* through $x\in X$ if $x=\sigma(i)$ for some $i\in\dom\sigma$. Moreover, a set $A\subseteq X$ is called *invariant* if for every $x\in A$ there exists a full solution in $A$ which passes through $x$. Thus, $A$ is invariant if $A\subseteq F(A)$ and for each $a\in A$, $F(a)\cap A\neq \varnothing$. # Isolated invariant sets and Morse decompositions {#sec:isoset} The concept of isolated invariant set lies at the heart of Conley theory. In the classical situation, an isolated invariant set $S$ is characterized by the property that it is the largest invariant set in some neighborhood of $S$. Unfortunately, it is not possible to define isolated invariant sets in an analogous way in the context of finite topological spaces due to the lack of sufficient separation. Therefore, in this section we introduce an appropriate notion for our setting and derive some first properties of such isolated invariant sets. We also show how they form the building blocks for Morse decompositions of phase space. Throughout this section, we assume that $X$ is a finite $T_0$ topological space and that the multivalued map $F:X\mto X$ is lower semicontinuous with closed values. ## Isolated invariant sets {#ssec:iso-inv-set} We begin by introducing the notion of isolating invariant set, which in turn is based on an isolating set. 
The latter set is the analogue of the isolating neighborhood in classical Conley theory, but its topological properties are weaker to account for the poor separation in finite spaces. [\[defn:isolating-set\]]{#defn:isolating-set label="defn:isolating-set"} A closed set $N\subseteq X$ is called an *isolating set* for an invariant set $S$ if the following two conditions are satisfied:

- (IS1) Every path in $N$ with endpoints in $S$ has all its values in $S$.
- (IS2) We have the equality $S\cap\cl(F(S)\setminus N) = \varnothing$, i.e., the set $S$ and $\cl(F(S)\setminus N)$ are disjoint.

If such an isolating set for $S$ exists, we say that $S$ is an *isolated invariant set*. Notice that condition (IS2) is satisfied if and only if $\opn S \cap (F(S)\setminus N)=\varnothing$. Thus, it is equivalent to assuming the inclusion

- (IS2') $\opn S \cap F(S) \subseteq N$.

Note also that since $S$ is invariant, $S\subseteq F(S)$, and hence (IS2') implies $S\subseteq N$. Establishing condition (IS2), or its equivalent reformulation (IS2'), is the less intuitive aspect of verifying an invariant set as an isolated invariant set. It is therefore useful to also have sufficient conditions for its validity. Two of these are the subject of the following remark. [\[rem:suffcond-is2\]]{#rem:suffcond-is2 label="rem:suffcond-is2"} Assume that $S$ is an invariant set and that $N$ is closed. Then any of the following two conditions implies (IS2):

1. We have $S \subseteq\inte N$, where $\inte N$ denotes the interior of $N$, or
2. the inclusion $F(S) \subseteq N$ is satisfied.

Indeed, the first condition is equivalent to $\opn S \subseteq N$, and therefore either of the above two conditions implies (IS2'), and thus (IS2). As we mentioned earlier, in classical Conley theory, the isolated invariant set is uniquely determined by its isolating neighborhood $N$. In fact, it is the largest invariant subset of $N$. In contrast, in the above setting the same set $N$ may be an isolating set for more than one isolated invariant set. This is illustrated in the following two examples. [\[ex:simplecvf\]]{#ex:simplecvf label="ex:simplecvf"} *We begin with a simple example that rotates an equilateral triangle. In the left part of Figure [1](#fig:simplecvf){reference-type="ref" reference="fig:simplecvf"} we indicate the action of the map on a simplicial complex, which is just a two-dimensional simplex. More precisely, the map rotates the triangle in a counterclockwise fashion by 120$^\circ$. This example is inspired by a combinatorial vector field in the sense of Forman, which contains the three vectors $\{ A, AB \}$, $\{ B, BC \}$, and $\{ C, AC \}$ along the boundary, as well as the critical cell $\{ ABC \}$. While we refer the reader to [@forman:98a; @forman:98b; @kaczynski:etal:16a; @mrozek:wanner:21a] for more details on the general definition of a combinatorial vector field and its relation to classical dynamics, it is intuitively clear that in the situation of Figure [1](#fig:simplecvf){reference-type="ref" reference="fig:simplecvf"} one can observe both an unstable fixed point at the triangle, as well as periodic motion along its simplicial boundary.*
![A simple rotation on a simplicial complex given by one triangle, as well as three edges and three vertices. The table on the right defines the associated multivalued map $F : X \mto X$ on the finite topological space consisting of all seven simplices, and equipped with the closure operation induced by the face relationship.](simplecvf.pdf "fig:"){#fig:simplecvf height="4cm"}

  $x$    Elements of $F(x)$
  ------ ---------------------------
  A      B
  B      C
  C      A
  AB     B, C, BC
  BC     A, C, AC
  AC     A, B, AB
  ABC    A, B, C, AB, BC, AC, ABC

In order to formulate this dynamical behavior via a multivalued map on a finite topological space, we use the standard construction given by the face poset, that is the poset $X$ of simplices where $x\le y$ if $x$ is a face of $y$. In other words, the topology is given by $x \in \cl y$ if and only if $x$ is a face of $y$. The associated multivalued map $F : X \mto X$ is defined in the table in Figure [1](#fig:simplecvf){reference-type="ref" reference="fig:simplecvf"}. One can easily verify that $F$ has closed values, and that it is lower semicontinuous. Iteration of the map $F$ leads for example to the following three isolated invariant sets: $$S_1 = \{ A, B, C \} \; , \quad S_2 = \{ AB, BC, AC \} \; , \quad\mbox{ and }\quad S_3 = \{ ABC \} \; .$$ If we then define the closed sets $$N_1 = \{ A, B, C \} \; , \quad N_2 = \{ A, B, C, AB, BC, AC \} \; , \quad\mbox{ and }\quad N_3 = X \; ,$$ then one can easily verify that $N_3$ is an isolating set for all three of the above isolated invariant sets, the set $N_2$ isolates both $S_1$ and $S_2$, and the set $N_1$ is an isolating set for $S_1$ only. Finally, we note that also the unions $S_1 \cup S_2$ and $S_2 \cup S_3$ are isolated invariant sets, with isolating sets $N_2$ and $N_3$, respectively. On the other hand, while the union $S_1 \cup S_3$ is invariant, it is not an isolated invariant set. To see this, note that any isolating set for $S_1 \cup S_3$ has to contain the closure of $S_1 \cup S_3$, and therefore $N = X$ would be the only possibility. Yet, one can easily see that (IS1) is not satisfied for this choice. [\[ex:reflectedcvf\]]{#ex:reflectedcvf label="ex:reflectedcvf"} *Our second example is similar to the previous one but it is induced by the reflection of the triangle about the vertical line through $A$, as depicted in the left panel of Figure [2](#fig:reflectedcvf){reference-type="ref" reference="fig:reflectedcvf"}. The corresponding multivalued map $G : X \mto X$ is defined in the table on the right. Notice that also $G$ has closed values and is lower semicontinuous.*

![The map $G:X \mto X$ defined on the right induced by a reflection about the vertical line through $A$ indicated on the left.](reflectedcvf.pdf "fig:"){#fig:reflectedcvf height="4cm"}

  $x$    Elements of $G(x)$
  ------ ---------------------------
  A      A
  B      C
  C      B
  AB     A, C, AC
  BC     B, C, BC
  AC     A, B, AB
  ABC    A, B, C, AB, BC, AC, ABC

Iteration of the map $G$ leads to new isolated invariant sets. For example, both the singleton $R_1 = \{ A \}$ and the doubleton $R_2 = \{ B, C \}$ are examples, and they have associated isolating sets $M_1 = R_1$ and $M_2 = R_2$, respectively. Notice, however, that both sets are also isolated by $M = X$. In addition, we have the isolated invariant sets $$R_3 = \{ BC \} \; , \quad R_4 = \{ AB, AC \} \; , \quad\mbox{ and }\quad R_5 = \{ ABC \} \; .$$ If we then define the closed sets $$M_3 = \{ B, C, BC \} \; , \quad M_4 = \{ A, B, C, AB, AC \} \; , \quad\mbox{ and }\quad M_5 = X \; ,$$ then one can easily verify that $M_k$ is an isolating set for $R_k$ for $k = 3,4,5$. 
Furthermore, the set $M_5$ isolates both $R_3$ and $R_4$ as well. We leave it to the reader to find additional isolated invariant sets. The examples above will be analyzed along the paper. We have chosen finite spaces associated with simplicial complexes because the geometric interpretation they have make notions simpler to visualize. However, we want to stress that the theory we develop here can be applied to any finite $T_0$ space. While at first glance the nonuniqueness of the isolating set seems strange, it is necessary in finite topological space due to the lack of sufficient separation. Nevertheless, the following remark sheds more light on this issue. [\[newr\]]{#newr label="newr"} It is clear that there is a smallest closed set $N$ satisfying condition (IS2'), which is the set $$\label{newr-eq1} N = \cl (\opn S \cap F(S)) \; .$$ On the other hand, one can easily see that condition (IS1) is preserved by taking subsets: If $N'\subseteq N$ and $N$ satisfies this condition, then so does $N'$. In conclusion, the invariant set $S$ is an isolated invariant set if and only if the set $N$ defined in ([\[newr-eq1\]](#newr-eq1){reference-type="ref" reference="newr-eq1"}) satisfies condition (IS1). We will see, however, that it is frequently useful to work with different isolating sets for the same isolated invariant set. We leave it to the reader to illustrate the above remark in the context of Examples [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} and [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"}, and close this section with the following simple result. [\[prop:iso-set-intersection\]]{#prop:iso-set-intersection label="prop:iso-set-intersection"} Assume $M$ and $N$ are two isolating sets for an isolated invariant set $S$. Then their intersection $M\cap N$ is also an isolating set for $S$. *Proof.* Clearly the intersection $M\cap N$ is closed. Since every path in $M\cap N$ is also a path in $M$, property (IS1) for $M\cap N$ follows from the validity of property (IS1) for $M$. Finally, it is clear that (IS2') for $M$ and $N$ implies that (IS2') also holds for $M\cap N$. 0◻ ◻ ## Morse decompositions Isolated invariant sets as defined in the last section are the fundamental building blocks for analyzing the global dynamics of a dynamical system. In general, they can be used to divide phase space into regions of recurrent and gradient-like behavior. This leads to the notion of a Morse decomposition. [\[defn:morese-decomp\]]{#defn:morese-decomp label="defn:morese-decomp"} Consider a lower semicontinuous multivalued map $F:X\mto X$ with closed values, on a finite $T_0$ topological space $X$. A family $\{M_p\}_{p\in P}$ of mutually disjoint, non-empty, isolated invariant sets indexed by a poset $P$ is called a *Morse decomposition* of $X$ if for every full solution $\gamma$ either all values of $\gamma$ are contained in the same set $M_p$, or there exist indices $q>r$ in $P$ and $t_q,t_r\in\ZZ$ such that $\gamma(t)\in M_q$ for $t\leq t_q$ and $\gamma(t)\in M_r$ for $t\geq t_r$. In the latter case, the solution $\gamma$ is called a *connection* from $M_q$ to $M_r$. Furthermore, the sets $M_p$ are called the *Morse sets* of the Morse decomposition. In the context of classical dynamics, Morse decompositions are a fairly difficult object of study, since it is possible for a dynamical system to have infinitely many different Morse decompositions. Of course, this cannot happen in the setting of a finite topological space. 
In fact, there is always a finest Morse decomposition which can easily be determined using graph theoretic methods. To see this, recall that the dynamics of a multivalued map $F:X\mto X$ can be encoded via its $F$-digraph $G_F$, whose vertices are given by the elements of $X$, and such that there is a directed edge from $x$ to $y$ if and only if $y \in F(x)$. On $X$, we can define an equivalence relation by saying that $x \sim y$ if and only if there is both a directed path in $G_F$ from $x$ to $y$, and one from $y$ to $x$.[^3] This equivalence relation partitions $X$ into equivalence classes which are called the *strongly connected components* of $G_F$. Such a component is called *trivial*, if it consists of a single vertex which is not connected to itself with an edge, otherwise it is *non-trivial*. Moreover, if each strongly connected component (along with all the edges which begin and finish in the component) is contracted to a single vertex, the resulting graph is a directed acyclic graph, called the *condensation* of $G_F$. After these preparations, one obtains the following result. [\[prop:scc-as-morsedecomp\]]{#prop:scc-as-morsedecomp label="prop:scc-as-morsedecomp"} Consider a lower semicontinuous multivalued map $F:X\mto X$ with closed values on a finite $T_0$ topological space $X$. Denote the non-trivial strongly connected components of the associated $F$-digraph $G_F$ by $\{M_p\}_{p\in P}$. Furthermore, let $q > r$ if there exists a directed path in $G_F$ from $M_q$ to $M_r$. Then each of the sets $M_p$ is an isolated invariant set for $F$, and $\{M_p\}_{p\in P}$ is a Morse decomposition of $X$. *Proof.* We begin by showing that any path which starts and ends in $M_p$ has to be completely contained in $M_p$. To see this, let $p \in P$ be fixed, let $x,y \in M_p$, let $\gamma$ denote any path from $x$ to $y$, and let $z$ denote any point on the path $\gamma$. Then $\gamma$ clearly can be restricted to a path from $z$ to $y$. Furthermore, since $M_p$ is a non-trivial strongly connected component of $G_F$, there exists a path from $y$ to $x$. Concatenation of this path with the part of $\gamma$ from $x$ to $z$ gives a path in $G_F$ from $y$ to $z$. This immediately implies that $y \sim z$, and therefore we have $z \in M_p$, and the above statement follows. We now turn to the verification of the proposition. It is easy to see that $>$ is indeed a (strict) partial order on $P$, since the condensation of $G_F$ is acyclic. Moreover, for any $x \in M_p$ one can easily construct a full solution through $x$ in $M_p$, by infinite concatenations of the paths from $x$ to $y$ and from $y$ to $x$, for some $y \in M_p$, again using the above observation. Thus, every set $M_p$ is invariant. These sets are also isolated invariant sets, since the whole space $X$ is an isolating set for each $M_p$. For this, note that (IS2) follows trivially from Remark [\[rem:suffcond-is2\]](#rem:suffcond-is2){reference-type="ref" reference="rem:suffcond-is2"}, and (IS1) from our above observation. Finally, if $\gamma$ denotes an arbitrary full solution, then the acyclicity of the condensation of $G_F$ together with the finiteness of $X$ immediately implies the existence of $t_q,t_r\in\ZZ$ such that $\gamma(t) \in M_q$ for $t\leq t_q$ and $\gamma(t)\in M_r$ for $t\geq t_r$, for some $q \ge r$. If $q = r$, then our above observation implies that $\gamma$ is contained in $M_q$, and this completes the proof of the proposition. 
◻ Finding strongly connected components in digraphs can be done efficiently, and thus the problem of decomposing the dynamics of a multivalued map $F$ into *recurrent dynamics*, given by the Morse sets $M_p$, and *gradient-like dynamics*, encoded in the condensation of $G_F$, is inherently computable. Furthermore, one can easily see that the above result does in fact produce the finest Morse decomposition of $X$. It is customary to represent the information about this Morse decomposition in the form of its *Morse graph*. This graph consists of the Hasse diagram of the poset $P$ with vertices representing the individual Morse sets $M_p$. In other words, it is the subgraph of the condensation induced by the non-trivial strongly connected components. [\[ex:morsedecomp\]]{#ex:morsedecomp label="ex:morsedecomp"} *We return to the two examples introduced earlier in this section. Recall that these examples introduced two multivalued maps $F,G : X \mto X$ on the finite topological space $$X = \{ A, B, C, AB, AC, BC, ABC \}$$ induced by a two-dimensional simplex. In Example [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} we identified the three isolated invariant sets $$S_1 = \{ A, B, C \} , \quad S_2 = \{ AB, BC, AC \} , \quad\mbox{ and }\quad S_3 = \{ ABC \} ,$$ and one can easily see that they are all strongly connected components of the $F$-digraph. Similarly, in Example [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"} we found the isolated invariant sets $$R_1 = \{ A \} , \;\; R_2 = \{ B, C \} , \;\; R_3 = \{ BC \} , \;\; R_4 = \{ AB, AC \} , \;\; R_5 = \{ ABC \} ,$$ which also partition the space $X$ and are again strongly connected components. Thus, in both of these examples all strongly connected components are non-trivial, and one obtains the Morse graphs shown in Figures [3](#fig:simplemorse){reference-type="ref" reference="fig:simplemorse"} and [4](#fig:reflectedmorse){reference-type="ref" reference="fig:reflectedmorse"}, respectively.*

![Finest Morse decomposition for the map $F : X \mto X$ from Example [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"}. The Morse graph is shown on the right, the Morse sets are indicated on the left.](simplemorse.pdf "fig:"){#fig:simplemorse height="4cm"}

![Finest Morse decomposition for the map $G : X \mto X$ from Example [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"}. The Morse graph is shown on the right, the Morse sets are indicated on the left.](reflectedmorse.pdf "fig:"){#fig:reflectedmorse height="4cm"}

We would like to point out that the Morse sets involved in these examples exhibit different types of recurrent dynamics. While the sets $S_3$, $R_1$, $R_3$, and $R_5$ are all equilibria of the dynamics, the remaining Morse sets are periodic orbits. More precisely, the sets $S_1$ and $S_2$ are periodic orbits of period $3$, while the two sets $R_2$ and $R_4$ have period $2$. # Index pairs {#sec:indexpair} While isolated invariant sets $S$ are the fundamental objects of study in Conley theory, it is their Conley index that provides algebraic information about the dynamics inside of $S$. In classical dynamics, this index information can be computed easily from certain isolating neighborhoods called isolating blocks, but these are often difficult to find. For this reason, one usually uses a different object for the index computation, called an *index pair*. 
In the present section, we transfer this concept to the setting of multivalued maps. Throughout, we again assume that $X$ is a finite $T_0$ topological space and that the multivalued map $F:X\mto X$ is lower semicontinuous with closed values. ## Definition and existence of index pairs {#sec:indexpairdef} For the definition of an index pair, we need to recall two important concepts. On the one hand, if $A\subseteq X$ is any subset, then the *invariant part* $\Inv (A)$ of $F$ in $A$ is the set of all points $x\in A$ for which there exists a full solution in $A$ which passes through $x$. On the other hand, a *topological pair* in $X$ is a pair $P$ of subsets $P=(P_1,P_2)$ which satisfy the inclusion $P_2 \subseteq P_1$. With this, we have the following central definition. We say that a topological pair $P=(P_1,P_2)$ of closed subsets of an isolating set $N$ for an isolated invariant set $S$ is an *index pair* for $S$ in $N$ if the following three conditions are satisfied:

- (IP1) $F(P_i)\cap N\subseteq P_i$ for $i=1,2$,
- (IP2) $P_1\cap\cl(F(P_1)\setminus N)\subseteq P_2$,
- (IP3) $S=\Inv(P_1\setminus P_2)$.

In addition, we say that the index pair $P=(P_1,P_2)$ is *saturated* if $S=P_1\setminus P_2$. We would like to point out that condition (IP1) implies $F(P_i) \cap N = F(P_i) \cap P_i$, and therefore (IP2) could also be replaced by the inclusion $P_1\cap \cl (F(P_1)\setminus P_1)\subseteq P_2$ to obtain an equivalent definition. In the remainder of this section, we establish some basic properties of index pairs. In addition, we show that every isolated invariant set $S$ with isolating set $N$ does indeed have an associated index pair. For this we need another definition. For subsets $S\subseteq N\subseteq X$ we define $$\begin{aligned} \Inv^-(N,S)&:=&\setof{y\in N\mid \exists \sigma\in\mbox{$\operatorname{Path}$}(N)\; \text{ with }\; \sigma^{\sqsubset}\in S, \; \sigma^{\sqsupset}=y},\\[0.5ex] \Inv^+(N,S)&:=&\setof{y\in N\mid \exists \sigma\in\mbox{$\operatorname{Path}$}(N)\; \text{ with }\; \sigma^{\sqsubset}=y, \; \sigma^{\sqsupset}\in S}.\end{aligned}$$ In other words, the set $\Inv^+(N,S)$ consists of all points in $N$ from which one can reach $S$ in forward time with a path in $N$, and $\Inv^-(N,S)$ is the analogous set in backwards time. The following proposition follows immediately from the definition of $\Inv^\pm(N,S)$. [\[prop:Inv-M-N\]]{#prop:Inv-M-N label="prop:Inv-M-N"} Assume that $M\subseteq N$ are two isolating sets for an isolated invariant set $S$. Then $\Inv^\pm(M,S)\subseteq\Inv^\pm(N,S)$. ◻ In addition, the above two sets have interesting topological properties, and they can be used to reconstruct an isolated invariant set $S$, as the next result shows. [\[prop:Inv-top-prop\]]{#prop:Inv-top-prop label="prop:Inv-top-prop"} Assume that $S\subseteq N\subseteq X$, that $S$ is an invariant set, and that $N$ is closed. Then the set $\Inv^-(N,S)$ is closed and $\Inv^+(N,S)$ is open in $N$. If in addition $N$ isolates $S$, then one also has $$\label{eq:Inv-top-prop} \Inv^-(N,S) \cap \Inv^+(N,S) = S,$$ and the isolated invariant set $S$ is locally closed in $X$, that is $S$ is a difference of two closed sets in $X$ (see [@En1989 Problem 2.7.1]). *Proof.* Denote $N^-:=\Inv^-(N,S)$ and $N^+:=\Inv^+(N,S)$. In order to prove that $N^-$ is closed take a $y\in\cl N^-$. Then $y\in\cl y'$ for some $y'\in N^-$. Hence, we may take a path $\sigma\in\mbox{$\operatorname{Path}$}(N)$ from a point in $S$ to $y'$. Since $S$ is invariant, without loss of generality we may assume that $|\sigma|\geq 2$. 
Since $F$ has closed values, replacing $y'$ by $y$ in $\sigma$ one obtains a new path, so $y\in N^-$. This proves that $N^-$ is indeed closed. To see that the set $N^+$ is open in $N$, choose any $x\in\opn_N N^+ = N\cap\opn N^+$. Then $x\in\opn x'$ for some $x'\in N^+$. Let $\sigma\in\mbox{$\operatorname{Path}$}(N)$ be a path from $x'$ to some point in $S$. Since $F$ is lower semicontinuous with closed values, $F(x')\subseteq F(x)$. Thus, replacing $x'$ by $x$ in $\sigma$ gives another path, so $x\in N^+$. Therefore $N\cap\opn N^+ \subseteq N^+$, so $N^+$ is open in $N$. Finally, the inclusion $S\subseteq N^-\cap N^+$ is obvious. Suppose now that $N$ isolates $S$. To see the opposite inclusion, let $x\in N^-\cap N^+$ be arbitrary. Then there exist a path in $N$ from a point in $S$ to $x$ and a path in $N$ from $x$ to a point in $S$. Concatenation of these gives a path in $N$ through $x$, and with endpoints in $S$. Hence, since $N$ isolates $S$, we obtain $x\in S$ and ([\[eq:Inv-top-prop\]](#eq:Inv-top-prop){reference-type="ref" reference="eq:Inv-top-prop"}) holds. Moreover, the representation ([\[eq:Inv-top-prop\]](#eq:Inv-top-prop){reference-type="ref" reference="eq:Inv-top-prop"}) shows that $S$ can be written as $$S = N^- \setminus \left(N \setminus N^+ \right).$$ Since $N^-$ and $N \setminus N^+$ are closed, $S$ is locally closed in $X$. 0◻ ◻ The above result shows that also in the multivalued map case, isolated invariant sets necessarily have to be locally closed. This is reminiscent of the situation in the multivector case [@lipinski:etal:23a], and it provides a sufficient condition for recognizing invariant sets which are not isolated invariant. In fact, this criterion does not make any reference to an associated isolating set $N$. For example, one can easily see that the set $S_1 \cup S_3$ in Example [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} is not locally closed, and therefore it cannot be an isolated invariant set. We now turn our attention to the existence of index pairs for isolated invariant sets. For this, we need the following definition, as well as the subsequent result. [\[defpn\]]{#defpn label="defpn"} Given an isolating set $N$ for an isolated invariant set $S$, we define the *standard index pair* $P^N=(P^N_1,P^N_2)$ by $$P^N_1 := \Inv^-(N,S) \qquad\mbox{ and }\qquad P^N_2 := P^N_1\setminus\Inv^+(N,S).$$ If we want to explicitly emphasize the dependence of the index pair on the isolated invariant set $S$, we also write $P^{S,N}=(P^{S,N}_1,P^{S,N}_2)$ instead of $P^N=(P^N_1,P^N_2)$. [\[thm:ip-existence\]]{#thm:ip-existence label="thm:ip-existence"} Assume that $N\subseteq X$ is an isolating set for an isolated invariant set $S$. Then $P^N$ is a saturated index pair for $S$ in $N$. *Proof.* It follows from Proposition [\[prop:Inv-top-prop\]](#prop:Inv-top-prop){reference-type="ref" reference="prop:Inv-top-prop"} that the sets $P^N_1$ and $P^N_2$ are both closed. Moreover, property (IP1) is a straightforward consequence of the definition of the sets $\Inv^-(N,S)$ and $\Inv^+(N,S)$. From [\[eq:Inv-top-prop\]](#eq:Inv-top-prop){reference-type="eqref" reference="eq:Inv-top-prop"} we obtain $S = P^N_1 \setminus P^N_2$, which establishes both (IP3) and the fact that $P^N$ is saturated, once condition (IP2) has been proved. Thus, it remains to verify that property (IP2) is satisfied. For this, assume to the contrary that there exists an element $y\in (P_1^N \cap \cl (F(P_1^N) \setminus N)) \setminus P^N_2$. 
This implies that $y\in P^N_1\setminus P^N_2=S$, and there exists $y'\in F(P^N_1)\setminus N$ such that $y\in \cl y'$. Let $x\in P^N_1$ be such that $y'\in F(x)$. Then $y\in\cl y'\subseteq\cl F(x)=F(x)$, since $F$ has closed values. In view of $x\in P^N_1=\Inv^-(N,S)$, there exists a path $\sigma\in\mbox{$\operatorname{Path}$}(N)$ such that $\sigma^{\sqsubset}\in S$ and $\sigma^{\sqsupset}=x$. It follows that $\sigma\cdot y$ is a path in $N$ with endpoints in $S$, and therefore (IS1) yields $x\in S$. This in turn implies $y'\in F(x)\subseteq F(S)$. Thus, one obtains $y\in S\cap\cl y'\subseteq S\cap \cl(F(S)\setminus N)$, which contradicts (IS2). ◻ The standard index pair $P^N$ that can be associated with every isolated invariant set $S$ with isolating set $N$ will be important for our further considerations. Yet, as we pointed out earlier, this is only one possible choice among many. In particular, although the standard index pair is sufficient to define the Conley index, the flexibility in choosing index pairs matters when addressing properties of the index, for instance continuation (see Sec. [7.2](#sec:continuation){reference-type="ref" reference="sec:continuation"}). While the collection of index pairs will be further studied in the next section, we close this one with a simple observation. [\[prop:ip-M-N\]]{#prop:ip-M-N label="prop:ip-M-N"} Assume $M\subseteq N$ are two isolating sets for an isolated invariant set $S$. Then the associated standard index pairs satisfy $P^M_i\subseteq P^N_i$ for $i=1,2$. *Proof.* The inclusion $P^M_1\subseteq P^N_1$ follows immediately from Proposition [\[prop:Inv-M-N\]](#prop:Inv-M-N){reference-type="ref" reference="prop:Inv-M-N"}. On the other hand, in view of [\[eq:Inv-top-prop\]](#eq:Inv-top-prop){reference-type="eqref" reference="eq:Inv-top-prop"} we have $P^M_2 = P^M_1 \setminus S \subseteq P^N_1 \setminus S = P^N_2$. ◻ To close this section, we briefly return to our previous two examples and present the standard index pairs for selected isolated invariant sets. [\[ex:standardindexpairs\]]{#ex:standardindexpairs label="ex:standardindexpairs"} *For the two simple multivalued maps $F : X \mto X$ and $G : X \mto X$ from Examples [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} and [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"}, respectively, on the finite topological space $X = \{ A, B, C, AB, AC, BC, ABC \}$, one can easily determine the associated standard index pairs. Recall that in Example [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} we used the closed sets $N_1 = \{ A, B, C \}$, $N_2 = \{ A, B, C, AB, BC, AC \}$, and $N_3 = X$ as respective isolating sets for the three isolated invariant sets $S_1$, $S_2$, and $S_3$ given below. This leads to the standard index pairs $$\label{ex:standardindexpairs1} \begin{array}{llclcl} \displaystyle P^{S_1,N_1}_1 = N_1, & \displaystyle P^{S_1,N_1}_2 = \varnothing& \;\mbox{ for }\; & \displaystyle S_1 = \{ A, B, C \} & \;\mbox{ in }\; & N_1, \\[0.5ex] \displaystyle P^{S_2,N_2}_1 = N_2, & \displaystyle P^{S_2,N_2}_2 = N_1 & \;\mbox{ for }\; & \displaystyle S_2 = \{ AB, BC, AC \} & \;\mbox{ in }\; & N_2, \\[0.5ex] \displaystyle P^{S_3,N_3}_1 = N_3, & \displaystyle P^{S_3,N_3}_2 = N_2 & \;\mbox{ for }\; & \displaystyle S_3 = \{ ABC \} & \;\mbox{ in }\; & N_3.
\end{array}$$ For example, in order to establish the second standard index pair in this list, note that $\Inv^-(N_2,S_2) = \{ A, B, C, AB, BC, AC \}$ and $\Inv^+(N_2,S_2) = \{ AB, BC, AC \}$, which immediately yields the above form for $P^{S_2,N_2}$.* We now turn our attention to Example [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"}. In this case, we defined the closed sets $M_1 = \{ A \}$, $M_2 = \{ B, C \}$, $M_3 = \{ B, C, BC \}$, $M_4 = \{ A, B, C, AB, AC \}$, as well as $M_5 = X$, as respective isolating sets for the isolated invariant sets $R_k$ given below. More precisely, one obtains the standard index pairs $$\label{ex:standardindexpairs2} \begin{array}{llclcl} \displaystyle P^{R_1,M_1}_1 = M_1, & \displaystyle P^{R_1,M_1}_2 = \varnothing& \;\mbox{ for }\; & \displaystyle R_1 = \{ A \} & \;\mbox{ in }\; & M_1, \\[0.5ex] \displaystyle P^{R_2,M_2}_1 = M_2, & \displaystyle P^{R_2,M_2}_2 = \varnothing& \;\mbox{ for }\; & \displaystyle R_2 = \{ B, C \} & \;\mbox{ in }\; & M_2, \\[0.5ex] \displaystyle P^{R_3,M_3}_1 = M_3, & \displaystyle P^{R_3,M_3}_2 = M_2 & \;\mbox{ for }\; & \displaystyle R_3 = \{ BC \} & \;\mbox{ in }\; & M_3, \\[0.5ex] \displaystyle P^{R_4,M_4}_1 = M_4, & \displaystyle P^{R_4,M_4}_2 = M_1 \cup M_2 & \;\mbox{ for }\; & \displaystyle R_4 = \{ AB, AC \} & \;\mbox{ in }\; & M_4, \\[0.5ex] \displaystyle P^{R_5,M_5}_1 = M_5, & \displaystyle P^{R_5,M_5}_2 = M_3 \cup M_4 & \;\mbox{ for }\; & \displaystyle R_5 = \{ ABC \} & \;\mbox{ in }\; & M_5. \end{array}$$ Thus, we have identified the standard index pairs for all isolated invariant sets contained in the Morse decompositions shown in Figures [3](#fig:simplemorse){reference-type="ref" reference="fig:simplemorse"} and [4](#fig:reflectedmorse){reference-type="ref" reference="fig:reflectedmorse"}. ## Properties of index pairs {#sec:indexpairprop} In the last section, we introduced the notion of an index pair $P = (P_1,P_2)$ associated with an isolated invariant set $S$ and its isolating set $N$. These index pairs will prove to be central for the definition of the Conley index. Yet, as we already mentioned several times, index pairs are not unique, and the present section collects results on the construction of a variety of index pairs. These results will be crucial for the next section, which introduces the Conley index. In the following, we assume that $N$ is an isolating set for the isolated invariant set $S$. If $P=(P_1,P_2)$ and $Q=(Q_1,Q_2)$ are two topological pairs, we use the abbreviation $P\subseteq Q$ for the validity of the two inclusions $P_1\subseteq Q_1$ and $P_2\subseteq Q_2$. Furthermore, by $P\cap Q$ we denote the pair $(P_1\cap Q_1, P_2\cap Q_2)$. We begin by showing that index pairs are closed under intersection. [\[inter1\]]{#inter1 label="inter1"} If $P$ and $Q$ are two index pairs for an isolated invariant set $S$ in an isolating set $N$, then so is $P\cap Q$. *Proof.* Applying property (IP1) of $P$ we get $F(P_i\cap Q_i)\cap N\subseteq F(P_i)\cap N\subseteq P_i$. Similarly, we obtain $F(P_i\cap Q_i)\cap N\subseteq Q_i$. Therefore, $F(P_i\cap Q_i)\cap N\subseteq P_i\cap Q_i$ for $i = 1,2$, which proves the inclusions in (IP1) for $P\cap Q$. 
As for the second property (IP2) of an index pair, we observe that since both $P$ and $Q$ satisfy it, one obtains the inclusions $$\begin{aligned} P_1 \cap Q_1 \cap \cl (F(P_1\cap Q_1)\setminus N) &\subseteq& P_1 \cap \cl (F(P_1)\setminus N) \cap Q_1 \cap \cl (F(Q_1)\setminus N) \\[0.5ex] & \subseteq& P_2\cap Q_2.\end{aligned}$$ It remains to establish (IP3). First observe that in view of (IP3) for both $P$ and $Q$ we have $$S \;\subseteq\; (P_1\setminus P_2)\cap (Q_1\setminus Q_2) \;\subseteq\; (P_1\cap Q_1)\setminus (P_2\cap Q_2).$$ Therefore, $S=\Inv S\subseteq\Inv((P_1\cap Q_1)\setminus (P_2\cap Q_2))$. To prove the opposite inclusion, assume to the contrary that there exists $y\in \Inv((P_1\cap Q_1)\setminus (P_2\cap Q_2))\setminus S$. Moreover, let $\sigma:\ZZ\to (P_1\cap Q_1)\setminus (P_2\cap Q_2)$ be a full solution through $y$. Then there has to exist an index $p\in\ZZ$ such that $\sigma(p)\in P_2$, because otherwise we obtain the inclusion $\im\sigma\subseteq\Inv(P_1\setminus P_2)=S$ in view of (IP3) for $P$. In addition, due to (IP1) for $P$, together with $\im\sigma \subseteq P_1 \subseteq N$, one has to have $\sigma(r)\in P_2$ for every $r\geq p$. Symmetrically, there exists an index $q\in \ZZ$ such that $\sigma (r) \in Q_2$ for all $r\geq q$. In particular, this implies $\sigma (\max \{p,q\}) \in P_2\cap Q_2$, a contradiction. ◻ The next two results introduce a few ways of constructing new index pairs from two given nested ones. [\[inter2\]]{#inter2 label="inter2"} If $P\subseteq Q$ are index pairs in $N$ for an isolated invariant set $S$, then so are $(P_1,P_1\cap Q_2)$ and $(P_1\cup Q_2,Q_2)$. *Proof.* Let us start with the first pair $(P_1,P_1\cap Q_2)$. The verification of property (IP1) is straightforward. Observe that in view of (IP2) for the index pair $P$ we get $P_1\cap \cl (F(P_1)\setminus N) \subseteq P_2\subseteq P_1\cap Q_2$, and therefore (IP2) holds. To establish (IP3), we observe that due to (IP3) for both $P$ and $Q$ one has $$\begin{aligned} S & = & \Inv S \; \subseteq\; \Inv((P_1\setminus P_2)\cap(Q_1\setminus Q_2)) \; \subseteq\; \Inv(P_1\setminus Q_2) \\[0.5ex] & = & \Inv(P_1\setminus(P_1\cap Q_2)) \; \subseteq\; \Inv(P_1\setminus P_2) \; = \; S.\end{aligned}$$ Hence, $\Inv(P_1\setminus (P_1\cap Q_2))=S$, which completes the proof that $(P_1,P_1\cap Q_2)$ is indeed an index pair. Consider now the second pair $(P_1\cup Q_2,Q_2)$. As before, the verification of (IP1) is straightforward. In order to establish (IP3) we observe that as seen above $$S=\Inv(P_1\setminus Q_2)= \Inv((P_1\cup Q_2)\setminus Q_2).$$ Finally, in order to verify (IP2) for $(P_1\cup Q_2,Q_2)$ we note that $$(P_1 \cup Q_2) \cap \cl (F(P_1\cup Q_2) \setminus N) \; \subseteq\; Q_1 \cap \cl (F(Q_1) \setminus N) \; \subseteq\; Q_2,$$ which yields (IP2) for the second pair and completes the proof. ◻ The second lemma is concerned with a useful construction of new index pairs which includes the action of $F$ itself. For this, suppose we are given two index pairs $P$ and $Q$ for an isolated invariant set $S$ in $N$, and such that $P\subseteq Q$. We then define a topological pair of sets $G(P,Q)=(G_1(P,Q),G_2(P,Q))$ by $$G_i(P,Q) = P_i\cup(F(Q_i)\cap N) \quad\mbox{ for }\quad i=1,2.$$ Note that we always have $G_2(P,Q)\subseteq G_1(P,Q) \subseteq N$, as required by a topological pair, and that $G_i(P,Q)$ is closed for $i=1,2$. The latter fact is due to the closedness of the values of $F$.
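All of the set constructions introduced so far are finitely computable. As a concrete illustration, the following is a minimal Python sketch, under the assumption that the points of the finite space are hashable objects, the multivalued map is stored as a dictionary `F` assigning to each point its finite value set, and `S` is an invariant subset of `N`; the helper names `image`, `inv_minus`, `inv_plus`, `standard_index_pair`, and `G_pair` are ours and not part of the paper. The first three functions compute $F(A)$ and the sets $\Inv^\pm(N,S)$ by forward and backward reachability along paths in $N$, the fourth returns the standard index pair $P^N$ of Definition [\[defpn\]](#defpn){reference-type="ref" reference="defpn"}, and the last one evaluates the pair $G(P,Q)$ defined above.

```python
from collections import deque

def image(F, A):
    """F(A): the union of the value sets F[x] over all x in A."""
    out = set()
    for x in A:
        out |= F[x]
    return out

def inv_minus(F, N, S):
    """Inv^-(N,S): points of N reachable from S along F-paths staying in N."""
    N = set(N)
    reached = set(S) & N
    queue = deque(reached)
    while queue:
        x = queue.popleft()
        for y in F[x] & N:
            if y not in reached:
                reached.add(y)
                queue.append(y)
    return reached

def inv_plus(F, N, S):
    """Inv^+(N,S): points of N from which S can be reached by an F-path in N."""
    # Inv^+ for F equals Inv^- for the reversed map restricted to N.
    N = set(N)
    F_rev = {x: {y for y in N if x in F[y]} for x in N}
    return inv_minus(F_rev, N, S)

def standard_index_pair(F, N, S):
    """Standard index pair P^N = (Inv^-(N,S), Inv^-(N,S) minus Inv^+(N,S))."""
    P1 = inv_minus(F, N, S)
    return P1, P1 - inv_plus(F, N, S)

def G_pair(F, N, P, Q):
    """The pair G(P,Q) with G_i(P,Q) = P_i union (F(Q_i) intersected with N)."""
    N = set(N)
    return tuple(P[i] | (image(F, Q[i]) & N) for i in range(2))
```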
While in general the pair $G(P,Q)$ is not an index pair for $S$ in $N$, the following result gives sufficient conditions, as well as a number of other useful properties. [\[lem:G_PQ\]]{#lem:G_PQ label="lem:G_PQ"} Let $P\subseteq Q$ be two index pairs for the isolated invariant set $S$ in $N$, and let $G=G(P,Q)$ be defined as above. Then we have the following properties. - $P\subseteq G\subseteq Q$. - $P_i=Q_i$ implies $P_i=G_i=Q_i$, for $i=1,2$. - If $P_1=Q_1$ or $P_2=Q_2$ then $G$ is an index pair in $N$. - $F(Q_i)\cap N\subseteq G_i$, for $i=1,2$. - If $P_{3-i}=Q_{3-i}$ and $G_i=Q_i$, then $P_i=Q_i$ for $i=1,2$. *Proof.* The first inclusion in (i) is obvious. The second one follows from (IP1) for $Q$. Moreover, property (ii) is an immediate consequence of (i). In order to prove property (iii), let us begin with property (IP1). Its verification does not require the hypothesis of (iii), since we have $$F(G_i) \cap N \; = \; (F(P_i) \cap N) \cup (F(F(Q_i)\cap N) \cap N) \; \subseteq\; P_i \cup (F(Q_i) \cap N) \; = \; G_i$$ in view of (IP1) applied to $P$ and $Q$. If $P_1=Q_1$, then we have $G_1\cap \cl (F(G_1)\setminus N)=P_1\cap \cl (F(P_1)\setminus N) \subseteq P_2 \subseteq G_2$ by (i), (ii), and (IP2) for $P$, and this establishes (IP2) for $G$ in this case. On the other hand, if the equality $P_2=Q_2$ holds, then $$G_1 \cap \cl (F(G_1)\setminus N) \; \subseteq\; Q_1 \cap \cl (F(Q_1)\setminus N) \; \subseteq\; Q_2=G_2,$$ in view of (IP2) for $Q$, (i), and (ii). This proves (IP2) for $G$ also in this case. In order to verify property (IP3), observe that by (IP3) applied to $P$ and $Q$ one obtains $S\subseteq(P_1\setminus P_2) \cap (Q_1\setminus Q_2) = P_1\setminus Q_2\subseteq G_1\setminus G_2$, and this in turn immediately yields $S=\Inv S\subseteq\Inv ( G_1\setminus G_2)$. According to property (i) we have the inclusion $G_1\setminus G_2\subseteq Q_1\setminus P_2$. Hence, if $P_1=Q_1$, we obtain $G_1\setminus G_2\subseteq P_1\setminus P_2$, and (IP3) applied to $P$ further implies $\Inv(G_1\setminus G_2)\subseteq \Inv(P_1\setminus P_2)=S$. Similarly, if instead the equality $P_2=Q_2$ holds, then one obtains $G_1\setminus G_2\subseteq Q_1\setminus Q_2$, and (IP3) applied to $Q$ furnishes $\Inv(G_1\setminus G_2)\subseteq \Inv(Q_1\setminus Q_2)=S$. Altogether, we get the inclusion $\Inv(G_1\setminus G_2) \subseteq S$, which completes the proof of (IP3) for $G$, and establishes the latter as an index pair, as claimed in (iii). Since property (iv) is obvious, it remains to establish (v). For this, fix $i\in\{1,2\}$ and assume that the identities $P_{3-i}=Q_{3-i}$ and $G_i=Q_i$ are satisfied. We want to show that $P_i=Q_i$. Since $P_i\subseteq Q_i$ by assumption, we only need to verify the inclusion $Q_i\subseteq P_i$. Thus, take an arbitrary point $y\in Q_i$. We will begin by constructing recursively a function $\sigma:\ZZ_-\to Q_i$ as follows, where $\ZZ_-$ denotes the set of all nonpositive integers. We set $\sigma(0):=y\in Q_i$. Assuming $\sigma(-k)\in Q_i$ has already been defined for $k\in\NN_0$, we consider two cases to define $\sigma(-k-1)$. If we have $\sigma(-k)\in P_i$, then we define $\sigma(-k-1):=\sigma(-k)$. If instead we have $\sigma(-k)\not\in P_i$, then one obtains from the assumption $Q_i = G_i$ and the above definition $G_i = P_i\cup(F(Q_i)\cap N)$ that $\sigma(-k)\in F(Q_i)$, and we can select an element $\sigma(-k-1)\in Q_i$ which satisfies the inclusion $\sigma(-k)\in F(\sigma(-k-1))$. We claim that $\im\sigma\cap P_i\neq\varnothing$. Assume the contrary. 
Then $\sigma:\ZZ_-\to Q_i\setminus P_i$ is a solution. Since the space $X$ is finite, we can therefore find indices $m,n\in\ZZ_-$ such that $m<n$ and $\sigma(m)=\sigma(n)$. Thus, the point $\sigma(m)$ lies on a periodic solution in the set difference $Q_i\setminus P_i$. But then we have $\sigma(m)\in\Inv (Q_i\setminus P_i)$. Consider now first the case $i=1$. Then we have $P_2=Q_2$ and $\sigma:\ZZ_- \to Q_1\setminus P_1$, as well as the inclusion $Q_1\setminus P_1\subseteq Q_1\setminus P_2=Q_1\setminus Q_2$. Hence, using property (IP3) applied to $Q$ one obtains that $\sigma(m) \in \Inv (Q_1\setminus Q_2) = S \subseteq P_1$, which contradicts our assumption that $\im\sigma\cap P_1 =\varnothing$. Consider now the second case $i=2$. Then one has $P_1=Q_1$ and $\sigma:\ZZ_-\to Q_2\setminus P_2$, as well as $Q_2\setminus P_2\subseteq Q_1\setminus P_2=P_1\setminus P_2$. Hence, we get from (IP3) applied to $P$ that $\sigma(m)\in\Inv (P_1\setminus P_2)=S\subseteq Q_1\setminus Q_2$. Therefore, $\sigma(m)\not\in Q_2$, again a contradiction. Thus, we established $\im\sigma\cap P_i\neq\varnothing$. With this we can immediately complete the proof of (v). According to the last paragraph, the index $m:=\max\setof{k\in\ZZ_-\mid \sigma(k) \in P_i}$ is well defined. We cannot have $m<0$, because in that case one obtains $\sigma(m+1) \in F(\sigma (m)) \subseteq F(P_i)$, and due to (IP1) applied to $P$ one further gets $\sigma(m+1) \in Q_i \cap F(P_i) \subseteq N \cap F(P_i) \subseteq P_i$, which is a contradiction. Hence, we have to have $m=0$, and thus $y=\sigma(0)\in P_i$. This completes the proof of the lemma. 0◻ ◻ The next result shows that for nested index pairs $P \subseteq Q$ which satisfy $P_1 = Q_1$ or $P_2 = Q_2$, it is always possible to construct a sequence of index pairs between them with certain mapping properties. While the specifics of this lemma might seem strange at first sight, it is essential for proving that the Conley index computation is independent of the underlying index pair. [\[lem:seq-Qi\]]{#lem:seq-Qi label="lem:seq-Qi"} Let $P\subseteq Q$ be index pairs for an isolated invariant set $S$ in $N$ such that either $P_1=Q_1$ or $P_2=Q_2$. Then there exists a sequence of index pairs for $S$ in $N$ $$P=Q^n\subseteq Q^{n-1}\subseteq\cdots\subseteq Q^1\subseteq Q^0=Q$$ which satisfy the following: - $P_i=Q_i$ implies $Q^k _i=P_i=Q_i$ for all $k=1,2,\dots, n-1$ and $i=1,2$, - $F(Q^k _i)\cap N\subseteq Q^{k+1} _i$ for all $k=0,1,\dots, n-1$ and $i=1,2$. *Proof.* Define the index pairs $Q^k$ recursively by $Q^0:=Q$ and $Q^{k+1}:=G(P,Q^k)$ for $k\in\NN$. Using Lemma [\[lem:G_PQ\]](#lem:G_PQ){reference-type="ref" reference="lem:G_PQ"}(i), (ii) and (iii), together with induction on $k$, one can easily show that the family $\{Q^k\}$ forms a decreasing sequence of index pairs with respect to $k$ which satisfies property (a). In addition, Lemma [\[lem:G_PQ\]](#lem:G_PQ){reference-type="ref" reference="lem:G_PQ"}(iv) implies that they also satisfy property (b) for all $k\in\NN_0$. Finally, since $X$ is finite, there has to be an $n\in \NN_0$ such that $Q^n=Q^{n+1}=G(P,Q^n)$, and an application of Lemma [\[lem:G_PQ\]](#lem:G_PQ){reference-type="ref" reference="lem:G_PQ"}(v) shows that then $Q^n=P$. 0◻ ◻ For the remainder of this section, we briefly introduce and study a topological pair which can be associated with an index pair, and which plays a crucial role for the definition of the *index map* in the next section. 
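Before turning to that pair, we note that the proof of Lemma [\[lem:seq-Qi\]](#lem:seq-Qi){reference-type="ref" reference="lem:seq-Qi"} is constructive and can be mirrored directly in the earlier code sketch. The function below is again only an illustration: it assumes the helper `G_pair` from that sketch, takes nested index pairs satisfying the hypotheses of the lemma, and iterates $Q^{k+1}=G(P,Q^k)$ until the pair stabilizes, which by the lemma happens at $Q^n=P$.

```python
def index_pair_chain(F, N, P, Q):
    """Iterate Q^0 = Q, Q^{k+1} = G(P, Q^k) until the pair stabilizes.

    Under the hypotheses of Lemma [lem:seq-Qi] (P and Q nested index pairs
    in N with P_1 = Q_1 or P_2 = Q_2) the chain is decreasing and its last
    element equals P; the iteration bound only guards against bad input.
    """
    chain = [tuple(set(Qi) for Qi in Q)]
    for _ in range(2 * len(set(N)) + 2):
        nxt = G_pair(F, N, P, chain[-1])
        if nxt == chain[-1]:
            return chain
        chain.append(nxt)
    raise ValueError("hypotheses of the lemma are not satisfied")
```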
To define this topological pair, we again let $P=(P_1,P_2)$ denote an index pair for an isolated invariant set $S$ in the isolating set $N$. Then we define $\bar{P}:=(\bar{P}_1,\bar{P}_2)$ via $$\bar{P}_i := P_i \cup \cl (F(P_1)\setminus N) \quad\mbox{ for }\quad i = 1,2.$$ Notice that the new topological pair $\bar{P}$ extends the index pair $P$ by adding the closure of the images of $P_1$ under $F$ that lie outside of $N$. The resulting pair $\bar{P}$ still consists of closed sets, but in general it is no longer an index pair. Nevertheless, it will allow us to study the action of $F$ on $P$ on the homological level in the next section. For now, we note the following proposition. [\[prop:index-map\]]{#prop:index-map label="prop:index-map"} Assume that $P=(P_1,P_2)$ is an index pair for the isolated invariant set $S$ in an isolating set $N$. Then the following hold for the extended topological pair $\bar{P}$ defined above: $$\begin{aligned} && P\subseteq\bar{P},\label{eq:index-map-1}\\ && F(P)=(F(P_1),F(P_2))\subseteq\bar{P},\label{eq:index-map-2}\\ && \bar{P}_1\setminus\bar{P}_2 = P_1\setminus P_2.\label{eq:index-map-3}\end{aligned}$$ *Proof.* As we already mentioned, property [\[eq:index-map-1\]](#eq:index-map-1){reference-type="eqref" reference="eq:index-map-1"} follows directly from the definition of $\bar{P}$. To see [\[eq:index-map-2\]](#eq:index-map-2){reference-type="eqref" reference="eq:index-map-2"}, note that in view of (IP1) we have $F(P_i)\setminus P_i \subseteq F(P_i) \setminus N$, and therefore $F(P_i)\subseteq P_i\cup (F(P_i)\setminus P_i) \subseteq P_i\cup (F(P_i)\setminus N) \subseteq\bar{P}_i$. Finally, observe that property (IP2) implies $$\begin{aligned} \bar{P_1} \setminus \bar{P_2} & = & (P_1\cup \cl (F(P_1)\setminus N)) \setminus (P_2\cup \cl (F(P_1)\setminus N)) \\[0.5ex] & = & P_1\setminus P_2\setminus \cl (F(P_1)\setminus N) = P_1\setminus P_2,\end{aligned}$$ and this completes the proof of the proposition. 0◻ ◻ As our final result of this section, we consider the extended topological pairs of the standard index pairs introduced in Definition [\[defpn\]](#defpn){reference-type="ref" reference="defpn"}. More precisely, we consider the situation of nested isolating sets for the same isolated invariant set $S$. [\[prop:barip-M-N\]]{#prop:barip-M-N label="prop:barip-M-N"} Assume that the closed sets $M\subseteq N$ are two isolating sets for the same isolated invariant set $S$. Then the inclusion $\overline{P^M} \subseteq \overline{P^N}$ holds. *Proof.* We first establish the validity of $\overline{P^M_1}\subseteq\overline{P^N_1}$. Using Proposition [\[prop:ip-M-N\]](#prop:ip-M-N){reference-type="ref" reference="prop:ip-M-N"} one obtains the inclusion $$\begin{aligned} \overline{P^M_1} & = & P^M_1\cup \cl (F(P^M_1)\setminus M) \; \subseteq\; P^N_1\cup \cl (F(P^N_1)\setminus M) \\[0.5ex] & = & P^N_1 \cup \cl (F(P^N_1)\setminus N) \cup \cl ((F(P^N_1)\cap N) \setminus M) \\[0.5ex] & \subseteq& P^N_1\cup \cl (F(P^N_1)\setminus N) \; = \; \overline{P^N_1},\end{aligned}$$ where the last inclusion follows from (IP1) and the fact that $P^N_1$ is closed. It remains to show that $\overline{P^M_2} \subseteq\overline{P^N_2}$. For this, let $y\in \overline{P^M_2}=P^M_2 \cup \cl(F(P^M_1)\setminus M)$. If in fact we have $y\in P^M_2$, then an application of Proposition [\[prop:ip-M-N\]](#prop:ip-M-N){reference-type="ref" reference="prop:ip-M-N"} immediately implies that $y\in P^N_2\subseteq\overline{P^N_2}$. 
Suppose therefore that we have $y\in \cl (F(P^M_1)\setminus M)$ and $y\notin P^N_2$. Furthermore, let $y'\in F(P^M_ 1) \setminus M$ be such that $y\in \cl y'$. We now claim that $y' \not\in N$. To verify this, assume to the contrary that $y'\in N$. Let $x\in P^M_1$ be such that $y'\in F(x)$. If $x\in P^M_2$, then $y'\in F(P^M_2)\subseteq F(P^N_2)$, and therefore $y'\in F(P^N_2)\cap N \subseteq P^N_2$ by (IP1), as well as $y\in \cl y' \subseteq\cl P^N_2=P^N_2$, a contradiction. Thus we have to have $x\notin P^M_2$, and this yields $x\in P^M_1\setminus P^M_2=S$ by Theorem [\[thm:ip-existence\]](#thm:ip-existence){reference-type="ref" reference="thm:ip-existence"}. Hence, $y'\in F(S)\setminus M$ and $y\in \cl(F(S)\setminus M)$. Since we assumed $y'\in N$, one obtains $y'\in F(P^M_1)\cap N \subseteq F(P^N_1)\cap N \subseteq P^N_1$, and together with the closedness of $P^N_1$ this further implies $y\in P^N_1$. This in turn shows that the inclusion $y\in P^N_1\setminus P^N_2=S$ holds, by Theorem [\[thm:ip-existence\]](#thm:ip-existence){reference-type="ref" reference="thm:ip-existence"}. However, this finally furnishes $y\in S\cap \cl (F(S)\setminus M)$, which contradicts (IS2). Thus, we deduce that our assumption on $y'$ was wrong and we actually have $y'\notin N$. With this in hand the proof of the second inclusion can easily be finished. We now have $y'\in F(P^M_1)\setminus N \subseteq F(P^N_1)\setminus N$, as well as $y\in \cl(F(P^N_1)\setminus N) \subseteq\overline{P^N_2}$. 0◻ ◻ We close this section by deriving the extended topological pairs for the index pairs in Example [\[ex:standardindexpairs\]](#ex:standardindexpairs){reference-type="ref" reference="ex:standardindexpairs"}. [\[ex:extendedtoppairs\]]{#ex:extendedtoppairs label="ex:extendedtoppairs"} *We leave it to the reader to verify that all eight standard index pairs given in ([\[ex:standardindexpairs1\]](#ex:standardindexpairs1){reference-type="ref" reference="ex:standardindexpairs1"}) and ([\[ex:standardindexpairs2\]](#ex:standardindexpairs2){reference-type="ref" reference="ex:standardindexpairs2"}) are in fact equal to their extended topological pairs as defined above. In every one of these cases, if $S$ is an isolated invariant set in an isolating set $N$, we have both $P^{S,N}_1 = N$, as well as either $F(N) \subseteq N$ or $G(N) \subseteq N$, respectively, depending on which multivalued map is considered. From this our claim follows immediately.* # Definition of the Conley index {#sec:conleyindex} With this section, we finally turn our attention to the Conley index for isolated invariant sets. For this, we first introduce the index map based on an index pair in Section [6.1](#subsec:indexmap){reference-type="ref" reference="subsec:indexmap"}, which transfers the action of the multivalued map $F$ restricted to the index pair to the algebraic level in terms of homology. Clearly, this map will depend on the chosen index pair, and the remainder of the section is aimed at deriving an index definition from the index map which only depends on the isolated invariant set. Our approach relies on the notion of normal functor, which is introduced in Section [6.2](#subsec:normalfunctor){reference-type="ref" reference="subsec:normalfunctor"}. Finally, Section [6.3](#subsec:conleyindex){reference-type="ref" reference="subsec:conleyindex"} combines both notions to define the Conley index and prove that it is well-defined. In addition, we compute the Conley indices for the isolated invariant sets in our earlier examples. 
In contrast to the previous section, we need to impose an additional condition on the underlying multivalued map. In this section the multivalued map $F:X\mto X$ will be assumed to be lower semicontinuous with closed and *acyclic* values. The additional acyclicity assumption is needed in order to obtain induced maps in homology. All the homology groups are considered with coefficients in a fixed ring $R$.

## The index map {#subsec:indexmap}

The basic idea of the Conley index in this paper is to lift information from the multivalued map $F:X\mto X$ close to an isolated invariant set $S$ to the setting of homology. On the level of the phase space, this is accomplished by considering the relative homology $H_*(P) = H_*(P_1,P_2)$ of an index pair for $S$, and on the level of the map $F$ by the associated *index map* $I_P$, which is an endomorphism of $H_*(P)$. The latter map should of course in some way lift the dynamics of $F$ to the homology level. Passing from a multivalued map to a map induced in homology is slightly more involved than in the classical single-valued setting, and we begin by reviewing the necessary approach. As already mentioned in Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, every lower semicontinuous map with closed values is in fact *strongly lower semicontinuous (slsc)*. Recall that a multivalued map $F:X\mto Y$ between finite $T_0$ spaces is called strongly lower semicontinuous if $x\in \cl x'$ implies $F(x) \subseteq F(x')$. If in addition $F:X\mto Y$ has acyclic values, then it induces a homomorphism $F_*:H_*(X)\to H_*(Y)$ in homology. More precisely, in [@barmak:etal:20a Proposition 4.7] it is shown that if $F$ is identified with its graph $F\subseteq X\times Y$, then the restriction $p_1:F\to X$ of the projection onto the first coordinate induces an isomorphism in homology in every degree, and therefore one can define the induced map in homology as $F_* = (p_2)_*(p_1)_*^{-1}$ with $p_2:F\to Y$ denoting the restriction of the other projection. Although the results in [@barmak:etal:20a] are stated only for integer coefficients, it is easy to see that the same results hold for homology with coefficients in an arbitrary ring (see [@barmak:etal:20a Theorem 2.2]).

As we mentioned earlier, the index map will be a homological version of the action of $F$ on a given index pair, and it is therefore not surprising that we have to recall a few notions about maps of pairs. A multivalued map $F:(X,A)\multimap (Y,B)$ between pairs of finite $T_0$ spaces is a multivalued map $F:X\multimap Y$ which satisfies the inclusion $F(a)\subseteq B$ for every $a\in A$. We say that $F:(X,A)\multimap (Y,B)$ is slsc (or with closed values, or with acyclic values) if $F:X\multimap Y$ has the respective property. Suppose that $F:(X,A) \multimap (Y,B)$ is slsc with acyclic values. Then the restriction $F|^B_A:A\multimap B$ is also slsc with acyclic values, and its graph is a subspace of $F$. Since the projections $F\to X$ and $F|^B_A\to A$ induce isomorphisms in homology, by the long exact sequence of homology and the five lemma, so does the projection of pairs $p_1:(F,F|^B_A)\to (X,A)$. Finally, in view of these preparations we can define the homomorphism $F_*:H_*(X,A)\to H_*(Y,B)$ by letting $F_*=(p_2)_*(p_1)^{-1}_*$ as before.

Before moving on to the definition of the index map, we need the following two auxiliary results concerning maps in homology induced by compositions.
[\[lemma1\]]{#lemma1 label="lemma1"} Let $F:X\multimap Y$ and $G:Z\multimap Y$ be slsc multivalued maps with acyclic values, and suppose that $f:X\to Z$ is a continuous map such that $Gf=F$. Then we have $G_*f_*=F_*:H_*(X)\to H_*(Y)$. The same result holds more generally, for pairs. *Proof.* Consider the following two commutative diagrams: $$\begin{diagram} \node{X} \arrow{s,l}{f} \node{Y} \arrow{w,t,dd}{F} \arrow{sw,r,dd}{G} \node{X} \arrow{s,l}{f} \node{F} \arrow{w,l}{p_1} \arrow{e,l}{p_2} \arrow{s,l}{f\times 1_Y} \node{Y} \\ \node{Z} \node{} \node{Z} \node{G} \arrow{ne,b}{p_2} \arrow{w,b}{p_1} \end{diagram}$$ The commutativity of the first diagram implies that $f\times 1_Y:F\to G$ is well defined, and this immediately leads to the second commutative diagram. The result then follows by definition. For pairs we have the exact same proof. 0◻ ◻ [\[lemma2\]]{#lemma2 label="lemma2"} Let $F:Z\multimap X$ and $G:Z\multimap Y$ be slsc multivalued maps with acyclic values, and let $f:Y\to X$ be a continuous map such that $fG=F$. Then $f_*G_*=F_*:H_*(Z)\to H_*(X)$. The same result holds more generally, for pairs. *Proof.* Similar to the last proof, consider the following commutative diagrams: $$\begin{diagram} \node{} \node{Y} \arrow{sw,l,dd}{G} \arrow{s,r}{f} \node{} \node{G} \arrow{sw,l}{p_1} \arrow{s,r}{1_Z\times f} \arrow{e,l}{p_2} \node{Y} \arrow{s,r}{f} \\ \node{Z} \node{X} \arrow{w,b,dd}{F} \node{Z} \node{F} \arrow{w,b}{p_1} \arrow{e,b}{p_2} \node{X} \end{diagram}$$ The commutativity of the first diagram implies that $1_Z\times f:Z\times Y \to Z\times X$ is well defined. This leads to the second commutative diagram, and the result then follows by definition. For pairs we have the exact same proof. 0◻ ◻ As our last preparation we turn our attention briefly to the *strong excision* property. For this, let $(Y_1,Y_2)$ and $(Z_1,Z_2)$ denote two topological pairs of closed subspaces of a finite $T_0$ space $X$ such that the inclusions $Y_i\subseteq Z_i$ hold for $i=1,2$, and that $Y_1\setminus Y_2 = Z_1\setminus Z_2$. Then the inclusion $i:(Y_1,Y_2) \to (Z_1,Z_2)$ induces a homomorphism $i_*$ between the relative homology groups $H_*(Y_1,Y_2)$ and $H_*(Z_1,Z_2)$. In fact, the strong excision property states that $i_*:H_*(Y_1,Y_2) \to H_*(Z_1,Z_2)$ is an isomorphism. This result follows directly from the pair of McCord isomorphisms $H_*(\mathcal{K}(Y_1),\mathcal{K}(Y_2))\to H_*(Y_1,Y_2)$ and $H_*(\mathcal{K}(Z_1),\mathcal{K}(Z_2))\to H_*(Z_1,Z_2)$, where $\mathcal{K}$ is the functor which associates to each poset its order complex ([@mccord:66a Corollary 1]). The hypotheses imply that the chains in $Y_1$ which are not in $Y_2$ are the same as the chains in $Z_1$ not in $Z_2$, and thus $i_* :C_*(\mathcal{K}(Y_1),\mathcal{K}(Y_2)) \to C_*(\mathcal{K}(Z_1),\mathcal{K}(Z_2))$ is already an isomorphism of chain complexes, and in particular an isomorphism in homology. After these preparations we can finally introduce the index map. In the rest of the paper $X$ will be a finite $T_0$ topological space and $F:X\mto X$ will be lower semicontinuous with closed and acyclic values. The index map lifts the action of the multivalued map $F$ on an index pair $P$ to the homological level. This has to be done with care, since we usually do not have $F(P) \subseteq P$. In fact, we will make use of the extended pair $\bar{P}$ whose properties where established in Proposition [\[prop:index-map\]](#prop:index-map){reference-type="ref" reference="prop:index-map"}. 
More precisely, let $P$ be an index pair for an isolated invariant set $S$ in the isolating set $N$. By applying Proposition [\[prop:index-map\]](#prop:index-map){reference-type="ref" reference="prop:index-map"}, we then immediately obtain both an inclusion induced isomorphism $(\iota_P)_*: H_*(P)\to H_*(\bar{P})$ and a homomorphism $(F_P)_*: H_*(P)\to H_*(\bar{P})$, where the latter is induced by the multivalued map $F_P=F|_{P_1}^{\overline{P}_1}: P\mto \overline{P}$. This leads to the following definition. [\[def:indexmap\]]{#def:indexmap label="def:indexmap"} Let $P$ be an index pair for an isolated invariant set $S$ in the isolating set $N$. Then the associated *index map* is the endomorphism $$I_P : H_*(P_1,P_2) \to H_*(P_1,P_2) \quad\mbox{ given by }\quad I_P:=(\iota_P)_*^{-1}\circ (F_P)_*,$$ where we use the maps induced in homology by the restriction $F_P=F|_{P_1}^{\overline{P}_1}: P\mto \overline{P}$ and the inclusion $\iota_P : P \to \bar{P}$. [\[index-mapnew\]]{#index-mapnew label="index-mapnew"} Note that the definition of $\bar{P}$ makes sense and the conclusion of Proposition [\[prop:index-map\]](#prop:index-map){reference-type="ref" reference="prop:index-map"} remains true even if $P=(P_1,P_2)$ is merely a pair of closed subspaces of $X$, and if $N$ is a closed subspace of $X$ such that $P_2\subseteq P_1 \subseteq N$ and conditions (IP1) and (IP2) hold. In other words, the isolated invariant set $S$, and the conditions (IS1), (IS2), and (IP3) are not needed for the above. Thus, the index map $I_P: H_*(P_1,P_2)\to H_*(P_1,P_2)$ can still be defined as in Definition [\[def:indexmap\]](#def:indexmap){reference-type="ref" reference="def:indexmap"}. ## Normal functors {#subsec:normalfunctor} Next we need to recall some definitions and results from category theory, in particular centered around the notion of normal functors. For this, let $\cE$ denote a category. We define the category of endomorphisms of $\cE$, denoted by $\mbox{$\operatorname{Endo}$}(\cE)$ as follows: - The objects of $\mbox{$\operatorname{Endo}$}(\cE)$ are pairs $(A,a)$, where $A\in\cE$ and $a\in\cE(A,A)$ is an endomorphism of $A$. - The set of morphisms from $(A,a)\in \mbox{$\operatorname{Endo}$}(\cE$) to $(B,b)\in \mbox{$\operatorname{Endo}$}(\cE)$ is the subset of $\cE(A,B)$ consisting of exactly those morphisms $\varphi\in\cE(A,B)$ for which $\varphi a = b\varphi$. We write $\varphi:(A,a)\rightarrow (B,b)$ to denote that $\varphi$ is a morphism from $(A,a)$ to $(B,b)$ in $\mbox{$\operatorname{Endo}$}(\cE)$. It is easy to see that if $\varphi: (A,a)\to (B,b)$ is a morphism in $\mbox{$\operatorname{Endo}$}(\cE)$ which is an isomorphism in $\cE$, then it is also an isomorphism in $\mbox{$\operatorname{Endo}$}(\cE)$. Note that any endomorphism $a\in \cE(A,A)$ is in particular a morphism $a:(A,a)\map (A,a)$ in $\mbox{$\operatorname{Endo}$}(\cE)$. Such morphisms of $\mbox{$\operatorname{Endo}$}(\cE)$ are called *induced*. Now let $L:\mbox{$\operatorname{Endo}$}(\cE)\map\cC$ be a functor. We say that $L$ is *normal* if $L(a)$ is an isomorphism in $\cC$ for every induced morphism $a:(A,a)\map (A,a)$ in $\mbox{$\operatorname{Endo}$}(\cE)$. Then we have the following result. [\[prop:normal\]]{#prop:normal label="prop:normal"} In the situation above, let $L:\mbox{$\operatorname{Endo}$}(\cE)\map\cC$ denote a normal functor, and let $\varphi: A\to B$ and $\psi:B\to A$ be morphisms in $\cE$. 
Then $\varphi: (A, \psi \varphi)\to (B,\varphi \psi)$ is a morphism in the category $\mbox{$\operatorname{Endo}$}(\cE)$, and $L(\varphi)$ is an isomorphism in $\cC$. *Proof.* Clearly we have that $\varphi$ is a morphism from $(A, \psi \varphi)$ to $(B, \varphi \psi)$ in $\mbox{$\operatorname{Endo}$}(\cE)$ and $\psi$ is a morphism from $(B, \varphi \psi)$ to $(A, \psi \varphi)$. In addition, one obtains the commutative diagram $$\begin{diagram} \node{(B, \varphi \psi)} \arrow{s,l}{\psi} \arrow{e,t}{\varphi \psi} \node{(B, \varphi \psi)} \arrow{s,r}{\psi} \\ \node{(A,\psi \varphi)} \arrow{e,t}{\psi \varphi} \arrow{ne,t}{\varphi} \node{(A,\psi \varphi)} \end{diagram}$$ If we now apply the functor $L$ to this diagram, then the horizontal morphisms become isomorphisms in $\cC$. Thus, the image $L(\varphi)$ has both a left and a right inverse, and therefore it is also an isomorphism. 0◻ ◻ We would like to point out that if $W$ denotes the class of induced morphisms in $\mbox{$\operatorname{Endo}$}(\cE)$, then the natural functor $\mbox{$\operatorname{Endo}$}(\cE)\to \mbox{$\operatorname{Endo}$}(\cE)[W^{-1}]$ to the localization is universal in the sense that any other normal functor $\mbox{$\operatorname{Endo}$}(\cE)\to \cC$ factorizes through it, see also [@mrozek:99a; @szymczak:95a]. We close this section with one specific example of a normal functor. For further examples we refer the reader to the paper [@mrozek:92a]. [\[ex:lerayfunctor\]]{#ex:lerayfunctor label="ex:lerayfunctor"} *For the example computations of this paper, we make use of the specific normal functor introduced in [@mrozek:90a], the *Leray functor*. For this, let $\Mod$ denote the category of graded moduli over the ring $R$ together with homomorphisms of degree zero. Using the setting for the definition of normal functors from above, we consider the categories $$\cE = \Mod \quad\mbox{ and }\quad \cC = \mbox{$\operatorname{Auto}$}(\Mod) \; ,$$ where $\mbox{$\operatorname{Auto}$}(\Mod) \subseteq\mbox{$\operatorname{Endo}$}(\Mod)$ is the subcategory of automorphisms of $\Mod$. Then the *Leray functor* $L_{\mathrm{Leray}}: \mbox{$\operatorname{Endo}$}(\Mod) \map \mbox{$\operatorname{Auto}$}(\Mod)$ can be defined as the composition of the following maps:* - Let $(H,h) \in \mbox{$\operatorname{Endo}$}(\Mod)$ be arbitrary. Then the *generalized kernel* of $h$ can be defined as $$\mathrm{gker}(h) := \bigcup_{n \in \NN} h^{-n}(0) \; ,$$ and one can easily see that the map $h : H \to H$ induces a well-defined map $h' : H/\mathrm{gker}(h) \to H/\mathrm{gker}(h)$. Thus, the definition $$L'(H,h) := (H/\mathrm{gker}(h), h') \in \mbox{$\operatorname{Mono}$}(\Mod) \subseteq\mbox{$\operatorname{Endo}$}(\Mod)$$ gives an object in the category $\mbox{$\operatorname{Mono}$}(\Mod)$ of monomorphisms of $\Mod$. Furthermore, it is straightforward to define $L'(\varphi)$ also for morphisms $\varphi$ in $\mbox{$\operatorname{Endo}$}(\Mod)$, and to show that in this way one obtains a well-defined contravariant functor $L' : \mbox{$\operatorname{Endo}$}(\Mod) \to \mbox{$\operatorname{Mono}$}(\Mod)$. - Now let $(H,h) \in \mbox{$\operatorname{Mono}$}(\Mod)$ be arbitrary. Then the *generalized image* of $h$ can be defined as $$\mathrm{gim}(h) := \bigcap_{n \in \NN} h^n(H) \; ,$$ and it is not difficult to verify that the map $h : H \to H$ induces a well-defined map $h'' : \mathrm{gim}(h) \to \mathrm{gim}(h)$. 
Thus, the definition $$L''(H,h) := (\mathrm{gim}(h), h'') \in \mbox{$\operatorname{Auto}$}(\Mod) \subseteq\mbox{$\operatorname{Endo}$}(\Mod)$$ gives an object in the category $\mbox{$\operatorname{Auto}$}(\Mod)$ of automorphisms of $\Mod$. In addition, it is again straightforward to define $L''(\varphi)$ also for morphisms $\varphi$ in $\mbox{$\operatorname{Mono}$}(\Mod)$, and to show that this time one obtains a well-defined contravariant functor $L'' : \mbox{$\operatorname{Mono}$}(\Mod) \to \mbox{$\operatorname{Auto}$}(\Mod)$. - Finally, the Leray functor is defined as $L_{\mathrm{Leray}}:= L'' \circ L'$. For more details on the above construction, as well as the proof that the Leray functor is indeed a normal functor, we refer the reader to [@mrozek:90a Section 4]. For our applications below, we note that by the construction of $L_{\mathrm{Leray}}$ we have the implication $$\label{eqn:lerayid} (H,h) \in \mbox{$\operatorname{Auto}$}(\Mod) \subseteq\mbox{$\operatorname{Endo}$}(\Mod) \quad\Longrightarrow\quad L_{\mathrm{Leray}}(H,h) = (H,h) \; ,$$ i.e., the Leray functor is the identity on $\mbox{$\operatorname{Auto}$}(\Mod) \subseteq \mbox{$\operatorname{Endo}$}(\Mod)$. This fact will enable us to determine the Conley index of isolated invariant sets in many situations. ## The Conley index {#subsec:conleyindex} After these preparations we can finally define the Conley index. A first attempt would be to use the index map $I_P : H_*(P) \to H_*(P)$ introduced in Definition [\[def:indexmap\]](#def:indexmap){reference-type="ref" reference="def:indexmap"}. Unfortunately, however, this would mean that the index depends on the chosen index pair of the isolated invariant set. This issue can be addressed by using the concept of normal functors from the last section. More precisely, let $\Mod$ denote as before the category of graded moduli over the ring $R$ and let $L:\mbox{$\operatorname{Endo}$}(\Mod)\to\mbox{$\operatorname{Auto}$}(\Mod)$ be a fixed normal functor. Note that if $P$ is an index pair for an isolated invariant set $S$ in an isolating set $N$, then one obtains $(H_*(P),I_P) \in \mbox{$\operatorname{Endo}$}(\Mod)$. Thus, the *$L$-reduction* $L(H_*(P),I_P)$ is an automorphism of a graded module over $R$, and we have the following crucial result. [\[th_ind\]]{#th_ind label="th_ind"} In the situation described above, the isomorphism type of $L(H_*(P),I_P) \in \mbox{$\operatorname{Auto}$}(\Mod)$ does not depend on the choice of the isolating set $N$ for the isolated invariant set $S$, or on the chosen index pair $P$ in $N$. *Proof.* To begin, let $M$ and $N$ be two isolating sets for $S$, and let $P$ and $Q$ denote two index pairs in $N$ and $M$, respectively. Our goal is to establish the equivalence $L(H_*(P),I_P) \cong L(H_*(Q),I_Q)$. This is accomplished in five steps. *Step 1.* We first consider the special case - $M=N$, - $P\subseteq Q$, - $P_1=Q_1$ or $P_2=Q_2$, - $F(Q)\cap N\subseteq P$. Let $D=(D_1,D_2)$ be the pair of closed sets defined by $D_i=P_i\cup \cl(F(Q_1)\setminus N)$ for $i=1,2$. By (iv) we may treat $F$ as a map of pairs $F_{QD} = F|_Q^{D}: Q\mto D$. In view of (i) and (ii), we also have $\overline{P}\subseteq D\subseteq\overline{Q}$. 
This gives the following commutative diagram $$\begin{diagram} \node{P} \arrow[2]{s,l}{j} \node{\overline{P}} \arrow{w,t,dd}{F_P} \arrow{s,r}{k} \node{P} \arrow{w,t}{\iota_P} \arrow[2]{s,r}{j} \\ \node{} \node{D} \arrow{sw,l,dd}{F_{QD}} \arrow{s,r}{l} \node{} \\ \node{Q} \node{\overline{Q}} \arrow{w,t,dd}{F_Q} \node{Q} \arrow{w,t}{\iota_Q} \end{diagram}$$ in which vertical arrows denote inclusions. Since $F$ induces a map $F_{PD}=F|_P^D$, by Lemmas [\[lemma1\]](#lemma1){reference-type="ref" reference="lemma1"} and [\[lemma2\]](#lemma2){reference-type="ref" reference="lemma2"} we have $k_*(F_P)_*=(F_{PD})_*=(F_{QD})_*j_*$ and $l_*(F_{QD})_*=(F_Q)_*$. We then obtain a commutative diagram $$\begin{diagram} \dgARROWLENGTH=1.8em \node{H_*(P)} \arrow[2]{s,l}{j_*} \arrow{e,t}{(F_P)_*} \node{H_*(\overline{P})} \arrow{s,r}{k_*} \node{H_*(P)} \arrow{w,t}{(\iota_P)_*} \arrow[2]{s,r}{j_*} \\ \node{} \node{H_*(D)} \arrow{s,r}{l_*} \node{} \\ \node{H_*(Q)} \arrow{e,t}{(F_Q)_*} \arrow{ne,l}{(F_{QD})_*} \node{H_*(\overline{Q})} \node{H_*(Q)} \arrow{w,t}{(\iota_Q)_*} \end{diagram}$$ We claim that $k$ induces an isomorphism in homology. Indeed, if $P_1=Q_1$, $k$ is the identity. Otherwise, by (iii) we have $P_2=Q_2$. In this case we claim that $k$ fulfills the hypothesis of strong excision, namely, $$P_1\setminus (P_2 \cup \cl(F(P_1)\setminus N))= P_1\setminus (P_2 \cup \cl(F(Q_1)\setminus N)).$$ Inclusion of the second subspace in the first is trivial, and their difference is $$P_1\cap \cl(F(Q_1)\setminus N) \setminus (P_2 \cup \cl(F(P_1)\setminus N)) \subseteq Q_1\cap \cl(F(Q_1)\setminus N) \setminus P_2,$$ which is equal to $Q_1\cap \cl(F(Q_1)\setminus N) \setminus Q_2$, and this is empty by (IP2). If one defines $I_{QP}:=(\iota_P)_*^{-1}k_*^{-1} (F_{QD})_*$, then we get the commutative diagram in $\Mod$ given by $$\begin{diagram} \node{H_*(P)} \arrow{s,l}{j_*} \arrow{e,t}{I_P} \node{H_*(P)} \arrow{s,r}{j_*} \\ \node{H_*(Q)} \arrow{e,t}{I_Q} \arrow{ne,l}{I_{QP}} \node{H_*(Q)} \end{diagram}\quad,$$ and $L(j_*):L(H_*(P), I_{QP}j_*)=L(H_*(P), I_P)\to L(H_*(Q), j_*I_{QP})=L(H_*(Q),I_Q)$ is an isomorphism in view of Proposition [\[prop:normal\]](#prop:normal){reference-type="ref" reference="prop:normal"}. *Step 2.* Next we drop assumption (iv). According to Lemma [\[lem:seq-Qi\]](#lem:seq-Qi){reference-type="ref" reference="lem:seq-Qi"} we can find a sequence $Q^0,Q^1, \ldots ,Q^n$ of index pairs such that $Q^0=Q$ and $Q^n=P$, and such that each pair $(Q^{k+1}$, $Q^k)$ satisfies assumptions (i)--(iv). Due to *Step 1* the $L$-reductions $L(H_*(Q^{k},I_{Q^k}))$ and $L(H_*(Q^{k+1},I_{Q^{k+1}}))$ are isomorphic, and the conclusion follows. *Step 3*. We now drop assumptions (iii) and (iv). For this, notice that in view of Lemma [\[inter2\]](#inter2){reference-type="ref" reference="inter2"} the pairs $R=(P_1,P_1\cap Q_2)$ and $T=(P_1\cup Q_2,Q_2)$ are index pairs. The pairs $P$ and $R$ satisfy assumptions (ii) and (iii), and therefore they have isomorphic $L$-reductions. The same holds for $T$ and $Q$. On the other hand, the inclusion $j:R \hookrightarrow T$ induces an isomorphism $j_*:H_*(R)\to H_*(T)$ by strong excision. 
Since $R\subseteq T$, we have an inclusion $\overline{j}:\overline{R}\hookrightarrow \overline{T}$, as well as the commutative diagram $$\begin{diagram} \node{H_*(R)} \arrow{s,l}{j_*} \arrow{e,t}{(F_R)_*} \node{H_*(\overline{R})} \arrow{s,r}{\overline{j}_*} \node{H_*(R)} \arrow{w,t}{(\iota_R)_*} \arrow{s,r}{j_*} \\ \node{H_*(T)} \arrow{e,t}{(F_T)_*} %\arrow{ne,b}{(F_{QP})_*} \node{H_*(\overline{T})} \node{H_*(T)} \arrow{w,t}{(\iota_T)_*} \end{diagram}$$ Thus, one obtains $j_*I_R = j_*(\iota_R)_*^{-1}(F_R)_* = (\iota_T)_*^{-1}\overline{j}_*(F_R)_* = (\iota_T)_*^{-1}(F_T)_*j_* = I_Tj_*$. This shows that $j_*\in \mbox{$\operatorname{Endo}$}(\Mod)((H_*(R),I_R),(H_*(T),I_T))$, and since $j_*$ is an isomorphism in $\Mod$, it also is an isomorphism in $\mbox{$\operatorname{Endo}$}(\Mod)$. This in turn implies that $L(j_*):L(H_*(R),I_R)\to L(H_*(T),I_T)$ is an isomorphism, and that $P$ and $Q$ indeed have isomorphic $L$-reductions. *Step 4*. Now we only assume (i). By Lemma [\[inter1\]](#inter1){reference-type="ref" reference="inter1"} the pair $P\cap Q$ is an index pair. Hence, the claim follows from *Step 3* applied to $P\cap Q\subseteq P$ and $P\cap Q\subseteq Q$. *Step 5*. Finally, we drop all auxiliary assumptions. We have already proved that the isomorphism type of the $L$-reduction depends only on the isolating set for $S$. Moreover, since by Proposition [\[prop:iso-set-intersection\]](#prop:iso-set-intersection){reference-type="ref" reference="prop:iso-set-intersection"}, the intersection of two isolating sets is again an isolating set, we may assume $M\subseteq N$. Consider the index pairs $P^M$ for $S$ in $M$ and $P^N$ for $S$ in $N$. In view of Proposition [\[prop:ip-M-N\]](#prop:ip-M-N){reference-type="ref" reference="prop:ip-M-N"} and Proposition [\[prop:barip-M-N\]](#prop:barip-M-N){reference-type="ref" reference="prop:barip-M-N"} we then have the commutative diagram $$\begin{diagram} \node{P^M} \arrow{s,l}{j} \node{\overline{P^M}} \arrow{w,t,dd}{F_{P^M}} \arrow{s,r}{k} \node{P^M} \arrow{w,t}{\iota_{P^M}} \arrow{s,r}{j} \\ \node{P^N} \node{\overline{P^N}} \arrow{w,t,dd}{F_{P^N}} \node{P^N} \arrow{w,t}{\iota_{P^N}} \end{diagram}$$ in which vertical arrows denote inclusions. Then $I_{P^N}j_*=j_* I_{P^M}$, which implies that $j_*:(H_*(P^M), I_{P^M})\to (H_*(P^N), I_{P^N})$ is a morphism in $\mbox{$\operatorname{Endo}$}(\Mod)$. On the other hand, since $P^M$ and $P^N$ are saturated by Theorem [\[thm:ip-existence\]](#thm:ip-existence){reference-type="ref" reference="thm:ip-existence"}, strong excision shows that $j_*:H_*(P^M)\to H_*(P^N)$ is an isomorphism in $\Mod$. Thus, the map $j_*$ is an isomorphism in $\mbox{$\operatorname{Endo}$}(\Mod)$, and then so is $L(j_*)$. 0◻ ◻ Based on the above result, the Conley index can now be defined as follows. We would like to point out that the functor $L$ in the definition could be, for example, the computationally convenient Leray functor of Example [\[ex:lerayfunctor\]](#ex:lerayfunctor){reference-type="ref" reference="ex:lerayfunctor"}. [\[def:con\]]{#def:con label="def:con"} The $L$-reduction $L(H_*(P),I_P)$ will be called the *homological Conley index of $S$*, and be denoted by $C(S,F)$, or simply $C(S)$ if $F$ is clear from context. Due to Theorem [\[th_ind\]](#th_ind){reference-type="ref" reference="th_ind"} the Conley index $C(S)\in \mbox{$\operatorname{Auto}$}(\Mod)$ is well-defined up to isomorphism. 
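In the simplest computational situation, namely homology with coefficients in a field and a fixed degree, the $L$-reduction with respect to the Leray functor of Example [\[ex:lerayfunctor\]](#ex:lerayfunctor){reference-type="ref" reference="ex:lerayfunctor"} can be evaluated by elementary linear algebra: for an endomorphism $h$ of a finite-dimensional vector space $H$, passing to $H/\mathrm{gker}(h)$ and then to the generalized image amounts to restricting $h$ to its eventual image $\operatorname{im}(h^n)$ with $n=\dim H$, on which $h$ acts as an automorphism. The sketch below implements this over the rationals using sympy; the function name `leray_reduction` is our choice, the input is a square matrix representing an index map in one degree, and the toy matrix at the end is our own illustration rather than an example from the paper.

```python
from sympy import Matrix

def leray_reduction(A):
    """Leray reduction of a square rational matrix A.

    Over a field and in finite dimension this is (up to isomorphism) the
    restriction of A to its eventual image im(A^n), n = number of rows,
    on which A acts invertibly.  The result is returned as a matrix in a
    basis of that subspace (a 0x0 matrix if the reduction is trivial).
    """
    n = A.rows
    B = A**n                      # kernels and images stabilize after n steps
    basis = B.columnspace()       # basis of the eventual image im(A^n)
    if not basis:
        return Matrix(0, 0, [])
    M = Matrix.hstack(*basis)
    # Coordinates of A*v in the chosen basis, computed exactly over Q.
    cols = [(M.T * M).solve(M.T * (A * v)) for v in basis]
    return Matrix.hstack(*cols)

# Toy illustration (ours): a map with one nilpotent direction.  Its Leray
# reduction is the 1x1 identity, i.e. the transient part is removed.
print(leray_reduction(Matrix([[1, 1],
                              [0, 0]])))   # Matrix([[1]])
```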
In order to illustrate the above abstract definition of the Conley index, we now briefly return to our earlier two examples and determine the Conley indices of all the Morse sets shown in Figures [1](#fig:simplecvf){reference-type="ref" reference="fig:simplecvf"} and [2](#fig:reflectedcvf){reference-type="ref" reference="fig:reflectedcvf"}. [\[ex:conleyindices\]]{#ex:conleyindices label="ex:conleyindices"} *We return one last time to the two simple multivalued maps $F : X \mto X$ and $G : X \mto X$ from Examples [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"} and [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"}, respectively. We have already seen that these maps give rise to associated Morse decompositions with three and five isolated invariant sets, which themselves are subsets of the finite topological space $X = \{ A, B, C, AB, AC, BC, ABC \}$. Notice that in view of Example [\[ex:extendedtoppairs\]](#ex:extendedtoppairs){reference-type="ref" reference="ex:extendedtoppairs"} in all of these cases the extended topological pair $\overline{P}$ equals the index pair $P$ that was chosen for each isolated invariant set. Thus, the index map $I_P$ is simply given by $I_P = (F_P)_* : H_*(P) \to H_*(P)$ for the sets in ([\[ex:standardindexpairs1\]](#ex:standardindexpairs1){reference-type="ref" reference="ex:standardindexpairs1"}), and similarly for the isolated invariant sets in ([\[ex:standardindexpairs2\]](#ex:standardindexpairs2){reference-type="ref" reference="ex:standardindexpairs2"}).* Consider now the multivalued map $F : X \mto X$ from Example [\[ex:simplecvf\]](#ex:simplecvf){reference-type="ref" reference="ex:simplecvf"}. For the sake of simplicity, we compute the Conley index for the ring $R = \ZZ$ and with respect to the Leray functor. Then for the isolated invariant set $S_1 = \{ A, B, C \}$ one can easily see that $H_0(P^{S_1,N_1}) \simeq \ZZ^3$. Moreover, the index map $I_{P^{S_1,N_1}}$ maps the generators in a cyclic fashion, i.e., it is an automorphism. Based on ([\[eqn:lerayid\]](#eqn:lerayid){reference-type="ref" reference="eqn:lerayid"}), this shows that the Conley index with respect to $L_{\mathrm{Leray}}$ is just $(H_*(P^{S_1,N_1}), I_{P^{S_1,N_1}})$. In a similar way, one can determine the Conley index for all the isolated invariant sets in Figure [1](#fig:simplecvf){reference-type="ref" reference="fig:simplecvf"} as $$\begin{array}{lclcl} \displaystyle S_1 = \{ A, B, C \} & : & \displaystyle H_0(P^{S_1,N_1}) \simeq \ZZ^3 & \mbox{ with } & \displaystyle I_{P^{S_1,N_1}} (e_i) = e_{(i+1) \;\mathrm{mod}\; 3}, \\ \displaystyle S_2 = \{ AB, BC, AC \} & : & \displaystyle H_1(P^{S_2,N_2}) \simeq \ZZ^3 & \mbox{ with } & \displaystyle I_{P^{S_2,N_2}} (e_i) = e_{(i+1) \;\mathrm{mod}\; 3}, \\ \displaystyle S_3 = \{ ABC \} & : & \displaystyle H_2(P^{S_3,N_3}) \simeq \ZZ & \mbox{ with } & \displaystyle I_{P^{S_3,N_3}} (e_i) = e_i, \end{array}$$ where in each case all unlisted homology groups are trivial, and the listed group $\ZZ^k$ has a suitable basis $\{ e_0, e_1, \ldots, e_{k-1} \}$. 
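Assuming the `leray_reduction` sketch from above (and rational rather than integer coefficients), the statement used here can also be double-checked mechanically: the index map of $S_1$ permutes the three generators cyclically, so it is invertible, and by ([\[eqn:lerayid\]](#eqn:lerayid){reference-type="ref" reference="eqn:lerayid"}) the Leray reduction returns it unchanged.

```python
from sympy import Matrix

# Index map of S_1 (and of S_2) in the chosen basis: e_i -> e_{(i+1) mod 3}.
I_S1 = Matrix([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]])

# An invertible index map is left untouched by the Leray functor.
assert leray_reduction(I_S1) == I_S1
```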
Similarly, for the multivalued map $G$ from Example [\[ex:reflectedcvf\]](#ex:reflectedcvf){reference-type="ref" reference="ex:reflectedcvf"} and the isolated invariant sets in Figure [2](#fig:reflectedcvf){reference-type="ref" reference="fig:reflectedcvf"} one obtains $$\begin{array}{lclcl} \displaystyle R_1 = \{ A \} & : & \displaystyle H_0(P^{R_1,M_1}) \simeq \ZZ & \mbox{ with } & \displaystyle I_{P^{R_1,M_1}} (e_i) = e_i, \\ \displaystyle R_2 = \{ B, C \} & : & \displaystyle H_0(P^{R_2,M_2}) \simeq \ZZ^2 & \mbox{ with } & \displaystyle I_{P^{R_2,M_2}} (e_i) = e_{(i+1) \;\mathrm{mod}\; 2}, \\ \displaystyle R_3 = \{ BC \} & : & \displaystyle H_1(P^{R_3,M_3}) \simeq \ZZ & \mbox{ with } & \displaystyle I_{P^{R_3,M_3}} (e_i) = -e_i, \\ \displaystyle R_4 = \{ AB, AC \} & : & \displaystyle H_1(P^{R_4,M_4}) \simeq \ZZ^2 & \mbox{ with } & \displaystyle I_{P^{R_4,M_4}} (e_i) = e_{(i+1) \;\mathrm{mod}\; 2}, \\ \displaystyle R_5 = \{ ABC \} & : & \displaystyle H_2(P^{R_5,M_5}) \simeq \ZZ & \mbox{ with } & \displaystyle I_{P^{R_5,M_5}} (e_i) = -e_i, \end{array}$$ where we use the same conventions as above. We leave the details of these straightforward computations to the reader. # Properties of the Conley index {#sec:conleyindexprop} In this section, we present first properties of the Conley index for multivalued maps defined in the last section. In addition to the Ważewski property, we also briefly address continuation. ## The Ważewski property In classical Conley theory, the Ważewski property is central, as it allows one to deduce the existence of a nontrivial isolated invariant set $S$ from a nontrivial index, and the latter can be computed from an index pair without explicit knowledge of $S$. In order to show that the same result still holds in the multivalued context of the present paper, let $P=(P_1,P_2)$ denote a topological pair of closed subspaces of $X$. Suppose further that $N=P_1$ satisfies conditions (IP1) and (IP2), i.e., we have the inclusion $P_1\cap (\cl(F(P_1)\setminus P_1) \cup F(P_2))\subseteq P_2$. In view of Remark [\[index-mapnew\]](#index-mapnew){reference-type="ref" reference="index-mapnew"}, the index map $I_P:H_*(P)\to H_*(P)$ is defined in this situation. Then we have the following result. Suppose that $X$ is a finite $T_0$ topological space and that the multivalued map $F:X\mto X$ is lower semicontinuous with closed and acyclic values. Moreover, let $P=(P_1,P_2)$ be a pair of closed subspaces of $X$ such that $$P_1 \cap \left( \cl \left( F(P_1) \setminus P_1 \right) \cup F(P_2) \right) \; \subseteq\; P_2 .$$ If one further has $L(H_*(P),I_P) \neq 0 \in \mbox{$\operatorname{Auto}$}(\Mod)$, then $\Inv(P_1\setminus P_2)\neq \varnothing$. *Proof.* Suppose $\Inv(P_1\setminus P_2)=\varnothing$. Then $N=P_1$ is an isolating set for the invariant set $S=\varnothing$, and $P$ is an index pair for $S$ in $N$. According to our hypothesis, we have $C(S)\neq 0$. But this is absurd since $S$ admits $N'=\varnothing$ as isolating set and $P'=(\varnothing,\varnothing)$ is an index pair for $S$ in $N'$. Thus, we have the equality $H_*(P')=0$, as well as $C(S)=L(H_*(P'),I_{P'})=0$. 0◻ ◻ ## Homotopies and continuation {#sec:continuation} As our second property of the Conley index we address the fundamental concept of continuation. For this, we first need to review some results on homotopies in finite topological spaces. Let $X$ and $Y$ be two finite $T_0$ spaces. 
Two lower semicontinuous multivalued maps $F,G:X\multimap Y$ with closed and acyclic values are called *homotopic* if there exists a lower semicontinuous map $H:X\times [0,1] \multimap Y$ with closed and acyclic values such that $H(x,0)=F(x)$ and $H(x,1)=G(x)$ for every $x\in X$. This definition extends in a natural way to maps $(X,A)\multimap (Y,B)$ between pairs of finite $T_0$ spaces by requiring that $H:X\times [0,1]\to Y$ maps $(a,t)$ to $H(a,t)\subseteq B$ for every $a\in A$ and $t\in [0,1]$. General homotopies in the setting of finite topological spaces can be more succinctly described as follows. Define an order on the set of all lower semicontinuous multivalued maps $X \multimap Y$ with closed and acyclic values by letting $F\le G$ if we have $F(x)\subseteq G(x)$ for all $x\in X$. A sequence $F=F_0\le F_1 \ge F_2\le \ldots F_k=G$ is called a *fence* from $F$ to $G$. Then the proof of the following result is essentially the same as the proof of [@barmak:etal:20a Proposition 8.1], and therefore we omit it. Let $X$ and $Y$ be two finite $T_0$ spaces and let $F,G:X\multimap Y$ be two lower semicontinuous multivalued maps with closed and acyclic values. Then the maps $F$ and $G$ are homotopic if and only if there exists a fence $F=F_0\le F_1\ge F_2\le \ldots F_k=G$ of lower semicontinuous multivalued maps $X\multimap Y$ with closed and acyclic values. Furthermore, if the maps $F,G:(X,A)\multimap (Y,B)$ are maps of pairs of finite $T_0$ spaces, then they are homotopic if and only if there exists a fence as above in which the maps are maps of pairs $(X,A)\multimap (Y,B)$. In terms of the associated maps in homology we have the following result, which is in the spirit of [@barmak:etal:20a Corollary 8.2]. [\[comparable\]]{#comparable label="comparable"} Let $X,Y$ be finite $T_0$ spaces, and let $F,G:X\multimap Y$ be two homotopic lower semicontinuous multivalued maps with closed and acyclic values. Then $F_*=G_*:H_*(X)\to H_*(Y)$ for the maps induced in homology. The same result holds more generally for pairs. *Proof.* We may assume $F\le G$. Consider the following commutative diagram $$\begin{diagram} \node{} \node{F} \arrow[2]{s,r}{j} \arrow{sw,l}{p_1} \arrow{se,l}{p_2} \node{} \\ \node{X} \node{} \node{Y} \\ \node{} \node{G} \arrow{nw,r}{\widetilde{p}_1} \arrow{ne,r}{\widetilde{p}_2} \node{} \end{diagram}$$ in which $j$ denotes the inclusion between the graphs, and the other maps are the projections to the first or second coordinate. Since $p_1$ and $\widetilde{p}_1$ induce isomorphisms in homology, so does $j$. This immediately implies $$G_*=(\widetilde{p}_2)_*(\widetilde{p}_1)_*^{-1}= (p_2)_*(j_*)^{-1}j_*(p_1)^{-1}_*=F_* \; : \; H_*(X)\to H_*(Y) .$$ The result for pairs follows with the exact same proof. 0◻ ◻ The following definition introduces the notion of *continuation* for the setting of multivalued maps in finite topological spaces. [\[def:continuation\]]{#def:continuation label="def:continuation"} Let $X$ be a finite $T_0$ space and let $F,G:X\multimap X$ be two lower semicontinuous multivalued maps with closed and acyclic values such that $F\le G$ or $F\ge G$. Moreover, let $S_F,S_G\subseteq X$ be isolated invariant sets for $F$ and $G$, respectively. 
We say that $(S_F,F)$ and $(S_G,G)$ (or just $S_F$ and $S_G$) are *related by an elementary continuation* if there exist isolating sets $N_F$ and $N_G$ for $S_F$ and $S_G$ with respect to $F$ and $G$, respectively, as well as a pair $P=(P_1,P_2)$ which is both - an index pair for $S_F$ in $N_F$ with respect to $F$, and - an index pair for $S_G$ in $N_G$ with respect to $G$. More generally, let $F,G:X\multimap X$ denote two homotopic lower semicontinuous multivalued maps with closed and acyclic values. We say that isolated invariant sets $S_F$ and $S_G$ for $F$ and $G$, respectively, are *related by continuation*, if there exists a fence $F=F_0\le F_1\ge F_2\le \ldots F_k=G$ of lower semicontinuous multivalued maps $X\multimap X$ with closed and acyclic values, as well as isolated invariant sets $S_i$ for $F_i$, for $0\le i\le k$, such that $S_0=S_F$, $S_k=S_G$, and $(S_i,F_i)$, $(S_{i+1},F_{i+1})$ are related by an elementary continuation for each $0\le i <k$. As in the classical case, we then have the following central result. [\[prop:continuation\]]{#prop:continuation label="prop:continuation"} Let $F,G:X\multimap X$ be homotopic lower semicontinuous multivalued maps with closed and acyclic values, and let $S_F$ and $S_G$ be isolated invariant sets for $F$ and $G$, respectively, which are related by continuation. Then the Conley index $C(S_F,F)$ is isomorphic to the Conley index $C(S_G,G)$. *Proof.* We can assume without loss of generality that $F\le G$, and that $S_F$ and $S_G$ are related by an elementary continuation. Let $N_F$, $N_G$, and $P$ be as in Definition [\[def:continuation\]](#def:continuation){reference-type="ref" reference="def:continuation"}. Since we have $F\le G$, one obtains the inclusion $$\begin{aligned} \overline{P_i}^F & = & P_i\cup \cl(F(P_1)\setminus N_F) \; = \; P_i\cup \cl(F(P_1)\setminus P_1) \\[0.5ex] & \subseteq& P_i\cup \cl(G(P_1)\setminus P_1) \; = \; P_i\cup \cl(G(P_1)\setminus N_G) \; = \; \overline{P_i}^G.\end{aligned}$$ Thus we have a (non-commutative) diagram $$\begin{diagram} \node{} \node{\overline{P}^F} \arrow{sw,t,dd}{F_{P}} \arrow[2]{s,r}{j} \node{} \\ \node{P} \node{} \node{P} \arrow{nw,l}{\iota_{P,F}} \arrow{sw,r}{\iota_{P,G}} \\ \node{} \node{\overline{P}^G} \arrow{nw,r,dd}{G_P} \node{} \end{diagram}$$ in which $j$ denotes inclusion. According to Lemma [\[lemma2\]](#lemma2){reference-type="ref" reference="lemma2"} one has $j_*(F_P)_*=(jF_P)_*$ as a map from $H_*(P)$ to $H_*(\overline{P}^G)$. Moreover, our assumption $F \le G$ immediately implies $jF_P\le G_P$, and therefore Lemma [\[comparable\]](#comparable){reference-type="ref" reference="comparable"} yields $(jF_P)_*=(G_P)_*$. Since the right triangle is in fact commutative, the map $j_*$ is an isomorphism. Thus the index map $I_{P,F}$ of $P$ with respect to $F$ is given by $$(\iota_{P,F})^{-1}_* (F_P)_* \; = \; (\iota_{P,F})^{-1}_*j_*^{-1} j_*(F_P)_* \; = \; (\iota_{P,G})_*^{-1}(G_P)_* \; = \; I_{P,G},$$ and this furnishes in particular $L(H_*(P), I_{P,F})=L(H_*(P), I_{P,G})$. In other words, the Conley indices $C(S_F,F)$ and $C(S_G,G)$ are isomorphic. 0◻ ◻ To close this section, we present a detailed example which illustrates the concept of continuation, and also provides further insight into isolated invariant sets and their Conley indices. (12.5,4.0) (0.0,0.0) ![The finite $T_0$ topological space $X$ used in Example [\[ex:continuation\]](#ex:continuation){reference-type="ref" reference="ex:continuation"}. 
The left panel shows a simplicial complex in the form of a pentagon, given by five vertices and five edges. Using the order given by the face relationship, one obtains the ten-point finite topological space $X$, which is shown in the right panel via its poset representation.](continuationX.pdf "fig:"){#fig:continuationX height="4.0cm"}

[\[ex:continuation\]]{#ex:continuation label="ex:continuation"} *For this example, we let $X$ denote the finite topological space which is generated by a simplicial representation of a pentagon, as shown in Figure [5](#fig:continuationX){reference-type="ref" reference="fig:continuationX"}. Using the Alexandrov topology induced by the face relation, one obtains the ten-point topological space $X$ indicated in the right panel of the figure as a poset. Note that we can identify $X$ with the set $\ZZ_{10}$, where the topology is given as in the poset.*

![Definition of the multivalued map $F : X \mto X$. The left image shows the graph of $F$. For this, we represent the pentagon from Figure [5](#fig:continuationX){reference-type="ref" reference="fig:continuationX"} as a line segment, whose end points are identified. The table on the right lists all function values $F(x)$.](continuationF.pdf "fig:"){#fig:continuationF height="7.0cm"}

  $x$   $F(x)$
  ----- ---------------------------
  $0$   $\{0\}$
  $1$   $\{0\}$
  $2$   $\{0\}$
  $3$   $\{0,1,2\}$
  $4$   $\{2\}$
  $5$   $\{2,3,4,5,6,7,8\}$
  $6$   $\{8\}$
  $7$   $\{0,8,9\}$
  $8$   $\{0\}$
  $9$   $\{0\}$

![Definition of the multivalued map $G : X \mto X$. The left image shows the graph of $G$. As before, the pentagon from Figure [5](#fig:continuationX){reference-type="ref" reference="fig:continuationX"} is represented by a line segment with identified end points. The table on the right lists all function values $G(x)$.](continuationG.pdf "fig:"){#fig:continuationG height="7.0cm"}

  $x$   $G(x)$
  ----- ---------------------------
  $0$   $\{0,1,2\}$
  $1$   $\{0,1,2\}$
  $2$   $\{0,1,2\}$
  $3$   $\{0,1,2\}$
  $4$   $\{2\}$
  $5$   $\{2,3,4,5,6,7,8\}$
  $6$   $\{4,5,6,7,8\}$
  $7$   $\{0,4,5,6,7,8,9\}$
  $8$   $\{0\}$
  $9$   $\{0,1,2\}$

On the topological space $X$, we consider the two multivalued maps $F : X \mto X$ and $G : X \mto X$ which are defined in the tables in Figures [6](#fig:continuationF){reference-type="ref" reference="fig:continuationF"} and [7](#fig:continuationG){reference-type="ref" reference="fig:continuationG"}, respectively. In addition, these two figures show the graphs of these maps, where we represent the pentagon from Figure [5](#fig:continuationX){reference-type="ref" reference="fig:continuationX"} as a line segment, whose end points correspond to $0$ and are identified. Both maps are lower semicontinuous and have closed and acyclic values. In addition, one can easily see that both maps give rise to a Morse decomposition with two isolated invariant sets, namely $$\begin{array}{lcll} \displaystyle S_F = \{ 0 \} & \quad\mbox{ and }\quad & \displaystyle R_F = \{ 5 \} & \quad\mbox{ for~$F$, and} \\[0.6ex] \displaystyle S_G = \{ 0, 1, 2 \} & \quad\mbox{ and }\quad & \displaystyle R_G = \{ 5, 6, 7 \} & \quad\mbox{ for~$G$.} \end{array}$$ We claim that the isolated invariant sets $(S_F,F)$ and $(S_G,G)$ are related by an elementary continuation. For this, we use the isolating sets $N_F = N_G = \{ 0,1,2 \}$, as well as the topological pair $P = (P_1,P_2)$ with $P_1=\{0,1,2\}$ and $P_2=\varnothing$. Then one can easily see that $P$ is an index pair for $S_F$ in $N_F$ with respect to $F$, as well as for $S_G$ in $N_G$ with respect to $G$. In addition, the definitions of $F$ and $G$ immediately imply $F \le G$, which furnishes our claim.
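These claims can also be checked mechanically. The following Python sketch (not part of the paper) hard-codes the two tables above together with the closure operator of the pentagon, in which even points are closed and the closure of an odd point $k$ is $\{k-1,k,k+1\}$ taken modulo $10$. It verifies that $F\le G$, that all values are closed, and that $P=(\{0,1,2\},\varnothing)$ satisfies the inclusion $P_1\cap\left(\operatorname{cl}(H(P_1)\setminus P_1)\cup H(P_2)\right)\subseteq P_2$ recalled above, for both $H=F$ and $H=G$.

```python
# Values of F and G on X = Z_10, transcribed from the tables above.
F = {0: {0}, 1: {0}, 2: {0}, 3: {0, 1, 2}, 4: {2},
     5: {2, 3, 4, 5, 6, 7, 8}, 6: {8}, 7: {0, 8, 9}, 8: {0}, 9: {0}}
G = {0: {0, 1, 2}, 1: {0, 1, 2}, 2: {0, 1, 2}, 3: {0, 1, 2}, 4: {2},
     5: {2, 3, 4, 5, 6, 7, 8}, 6: {4, 5, 6, 7, 8},
     7: {0, 4, 5, 6, 7, 8, 9}, 8: {0}, 9: {0, 1, 2}}

def cl(A):
    """Closure in the pentagon: an odd point (an edge) pulls in its two endpoints."""
    out = set()
    for x in A:
        out |= {(x - 1) % 10, x, (x + 1) % 10} if x % 2 else {x}
    return out

def image(H, A):
    """The image H(A) of a subset A under the multivalued map H."""
    out = set()
    for x in A:
        out |= H[x]
    return out

# F <= G pointwise, and every value of F and G is closed.
assert all(F[x] <= G[x] for x in range(10))
assert all(cl(F[x]) == F[x] and cl(G[x]) == G[x] for x in range(10))

# The inclusion P_1 cap ( cl(H(P_1) \ P_1) cup H(P_2) ) subset of P_2
# for P = ({0, 1, 2}, empty set) and both H = F and H = G.
P1, P2 = {0, 1, 2}, set()
for H in (F, G):
    assert P1 & (cl(image(H, P1) - P1) | image(H, P2)) <= P2
```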
Thus, in view of Proposition [\[prop:continuation\]](#prop:continuation){reference-type="ref" reference="prop:continuation"} the Conley indices $C(S_F,F)$ and $C(S_G,G)$ are isomorphic. We leave it to the reader to verify that the only nontrivial homology group occurs in dimension zero, that it is one-dimensional, and that the index map is the identity. In other words, both isolated invariant sets have the Conley index of an attracting fixed point. We note that also $(R_F,F)$ and $(R_G,G)$ are related by an elementary continuation, but leave the verification of this and the index computation to the reader.

![A sample fence $F_0 \le F_1 \ge F_2 \le \ldots$ of lower semicontinuous multivalued maps $F_i : X \mto X$ with closed and acyclic values, as defined in ([\[eqn:exfence\]](#eqn:exfence){reference-type="ref" reference="eqn:exfence"}). The panels depict the first six functions of the fence. The associated isolated invariant sets $S_{F_i}$ are indicated in orange, and they are related by continuation.](continuationF0.pdf "fig:"){#fig:continuationFk height="4.5cm"}

Yet, even more is true.
Recall that we use the representation $X = \ZZ_{10}$ for our underlying topological space $X$. By using addition and subtraction modulo $10$ we can then define the maps $F_i : X \mto X$ via $$\label{eqn:exfence} \begin{array}{rclccl} \displaystyle F_i(a) & = & \displaystyle F(a-i)+i & \subseteq& X & \quad\mbox{ for even~$i \in \ZZ_{10}$,} \\[0.5ex] \displaystyle F_i(a) & = & \displaystyle G(a-i+1)+i-1 & \subseteq& X & \quad\mbox{ for odd~$i \in \ZZ_{10}$,} \end{array}$$ for every $a\in X$. These definitions give a fence $F_0\le F_1\ge F_2\le \ldots F_9\ge F_0$ of lower semicontinuous multivalued maps with closed and acyclic values. By suitably adapting the argument from above, one can show that for odd $i$ the map $F_i$ has the isolated invariant set $S_{F_i} = \{ i-1, i, i+1 \}$. Furthermore, this set is related by an elementary continuation to both the isolated invariant set $S_{F_{i-1}} = \{i-1\}$ for $F_{i-1}$, as well as to the isolated invariant set $S_{F_{i+1}} = \{i+1\}$ for $F_{i+1}$. This in turn shows for example that $S_{F_0} = \{0\}$ and $S_{F_4} = \{4\}$ are related by continuation. This is illustrated in Figure [13](#fig:continuationFk){reference-type="ref" reference="fig:continuationFk"}, where we only depict the first six functions of the fence, and indicate the isolated invariant sets in orange. # Future work and open problems {#sec:future} In this paper, we have developed a notion of isolated invariant sets and Conley index for multivalued maps on finite topological spaces. Our theory requires these maps to be lower semicontinuous with closed and acyclic values. In addition, we have established first properties of these objects, which mimic the corresponding results in the setting of classical dynamics. We would like to point out, however, that crucial assumptions concerning isolation had to be completely changed, due to poor separation in finite topological spaces. In addition, due to space constraints, we have omitted a number of properties of the Conley index, such as for example its additivity, and how it can be used to detect heteroclinic orbits. While the results of this paper are very general and should be useful in a number of applied situations, we would like to close with a comment on one unresolved issue. To explain this in more detail, recall that classical dynamics can be broadly divided into continuous-time and discrete-time. As we saw earlier in this paper, on finite topological spaces the continuous-time analogue is trivial. Nevertheless, there is a dynamical theory which mimics the behavior of flows, and it is based on the concepts of combinatorial vector and multivector fields, see [@forman:98a; @forman:98b; @lipinski:etal:23a; @mrozek:17a]. In these approaches, the flow-like behavior is achieved by requiring solutions to move between adjacent elements of the space via their shared boundary. In contrast, the results of the present paper allow for large jumps in the orbits via iteration of a multivalued map, i.e., our results mimic the discrete-time case. (11.0,4.0) (0.0,0.0) ![Two sample combinatorial vector fields in the sense of Forman. While the one depicted on the left can be represented via an admissible multivalued map $F : X \mto X$ on the underlying finite topological space with the same overall dynamics, this is not possible for the vector field shown on the right. 
There exists no lower semicontinuous $G : Y \mto Y$ with closed and acyclic values for which the set $S = \{ B, AC \}$ is an isolated invariant set, and such that the map $G$ has the same Morse graph as the indicated combinatorial vector field.](examplecvf1.pdf "fig:"){#fig:examplecvf height="4cm"} (7.0,0.0) ![Two sample combinatorial vector fields in the sense of Forman. While the one depicted on the left can be represented via an admissible multivalued map $F : X \mto X$ on the underlying finite topological space with the same overall dynamics, this is not possible for the vector field shown on the right. There exists no lower semicontinuous $G : Y \mto Y$ with closed and acyclic values for which the set $S = \{ B, AC \}$ is an isolated invariant set, and such that the map $G$ has the same Morse graph as the indicated combinatorial vector field.](examplecvf2.pdf "fig:"){#fig:examplecvf height="4cm"} It is natural to wonder what the relationship is between combinatorial vector and multivector fields, and the theory of this paper. For classical dynamics it has been shown in [@mrozek:89a; @mrozek:90b] that every isolated invariant set for a continuous-time dynamical system is also an isolated invariant set for the discrete-time time-one-map. In this sense, continuous-time dynamical systems can also be studied via discrete-time results. Is the same true in the case of combinatorial vector fields? To illustrate this, Figure [15](#fig:examplecvf){reference-type="ref" reference="fig:examplecvf"} shows two different combinatorial Forman vector fields. The one on the left is defined on a $2$-simplex, while the one on the right is defined on a simplicial complex representing the boundary of a triangle. One can easily see that the dynamics of the left vector field can equivalently be described by a multivalued map $F : X \mto X$, where $X$ denotes the associated seven-point finite space. One just has to map every vertex to its opposite edge, every edge to everything along the boundary except itself, and the triangle to everything --- and the resulting Morse graph induced by $F$ is the same as the Morse graph associated with the depicted combinatorial vector field. However, this is not possible for the example on the right. If $Y$ denotes the six-point finite space given by the boundary of the triangle, then one can show that there exists no lower semicontinuous multivalued map $G : Y \mto Y$ with closed and acyclic values for which the set $S = \{ B, AC \}$ (consisting of a vertex and the opposite edge) is an isolated invariant set, and such that the Morse graph of $G$ equals the Morse graph of the indicated Forman vector field. This failure is due to our last two requirements on $G$. It is therefore an interesting open problem as to whether our theory could be generalized to allow for a larger class of multivalued maps. [^1]: J.B. is a researcher of CONICET; he is partially supported by grants PICT 2019-2338, PICT-2017-2806, PIP 11220170100357CO, UBACyT 20020190100099BA, UBACyT 20020160100081BA. The research of M.M. was partially supported by the Polish National Science Center under Maestro Grant No. 2014/14/A/ST1/00453 and Opus Grant No. 2019/35/B/ST1/00874. T.W. was partially supported by NSF grant DMS-1407087 and by the Simons Foundation under Award 581334. [^2]: Note that this convention is the one used in [@alexandrov:37a], and it is the most appropriate one for the setting of dynamics. 
We would like to point out, however, that alternatively the preorder could be defined by letting $x \le y$ if $x\in \opn y$. This definition is also extensively used in the literature, see for example the discussion in [@barmak:etal:20a]. [^3]: Notice that we have $x \sim x$ for every $x \in X$, since there always exists a path of length zero from $x$ to $x$, i.e., a path without edges.
arxiv_math
{ "id": "2310.03099", "title": "Conley index for multivalued maps on finite topological spaces", "authors": "Jonathan Barmak, Marian Mrozek, Thomas Wanner", "categories": "math.DS math.AT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | It is well known that the Bernoulli polynomials $\mathbf{B}_n(x)$ have nonintegral coefficients for $n \geq 1$. However, ten cases are known so far in which the derivative $\mathbf{B}'_n(x)$ has only integral coefficients. One may assume that the number of those derivatives is finite. We can link this conjecture to a recent conjecture about the properties of a product of primes satisfying certain $p$-adic conditions. Using a related result of Bordellès, Luca, Moree, and Shparlinski, we then show that the number of those derivatives is indeed finite. Furthermore, we derive other characterizations of the primary conjecture. Subsequently, we extend the results to higher derivatives of the Bernoulli polynomials. This provides a product formula for these denominators, and we show similar finiteness results. address: Göttingen, Germany author: - Bernd C. Kellner title: On the finiteness of Bernoulli polynomials whose derivative has only integral coefficients --- # Introduction The Bernoulli polynomials $\mathbf{B}_n(x)$ are defined by the exponential generating function $$\label{eq:bpgf} \frac{t e^{xt}}{e^t - 1} = \sum_{n=0}^{\infty} \mathbf{B}_n(x) \frac{t^n}{n!} \quad (|t| < 2\pi)$$ and explicitly given by the formula $$\label{eq:bpdef} \mathbf{B}_n(x) = \sum_{k=0}^{n} \binom{n}{k} \mathbf{B}_{n-k} \, x^k \quad (n \geq 0),$$ where ${\mathbf{B}_n = \mathbf{B}_n(0) \in \mathbb{Q}}$ is the $n$th Bernoulli number. It easily follows from [\[eq:bpgf\]](#eq:bpgf){reference-type="eqref" reference="eq:bpgf"} that $\mathbf{B}_n=0$ for odd ${n \geq 3}$. For more properties see Cohen [@Cohen:2007 Chapter 9]. The Bernoulli polynomials ${\mathbf{B}_n(x) \in \mathbb{Q}[x]}$ are Appell polynomials [@Appell:1880]. Therefore, they satisfy the rule $$\label{eq:bpderiv} \mathbf{B}'_n(x) = n \mathbf{B}_{n-1}(x) \quad (n \geq 1).$$ While ${\mathbf{B}_n(x) \notin \mathbb{Z}[x]}$ for ${n \geq 1}$, which is equivalent to ${\mathop{\mathrm{denom}}(\mathbf{B}_n(x)) > 1}$ for ${n \geq 1}$ (the denominators are discussed in the next section), it turns out that $$\mathbf{B}'_n(x) \in \mathbb{Z}[x] \quad \text{for } n \in \mathcal{S}:= \{1, 2, 4, 6, 10, 12, 28, 30, 36, 60\}.$$ The set $\mathcal{S}$ equals the finite sequence [[A094960]{.ul}](https://oeis.org/A094960) in the OEIS [@OEIS] as published in 2004. So far, no further terms have been found. It is mainly assumed that sequence [[A094960]{.ul}](https://oeis.org/A094960) is indeed finite and completely determined by $\mathcal{S}$. Note that we implicitly omit the trivial case for $n=0$, since $\mathbf{B}_0(x) = \mathbf{B}_0 = 1$. Define $$\overline{\mathcal{S}}:= \{n \geq 1 : \mathbf{B}'_n(x) \in \mathbb{Z}[x]\}.$$ For our purposes, we split the conjecture into two parts as follows. **Conjecture 1**. We have the following statements: 1. The set $\overline{\mathcal{S}}$ is finite. 2. We have $\overline{\mathcal{S}}= \mathcal{S}$. We link the above conjecture to a more recent conjecture of the author [@Kellner:2017] in a $p$-adic context, where $p$ always denotes a prime. The function $s_p(n)$ gives the sum of the base-$p$ digits of an integer $n \geq 0$. Let $\omega(n)$ be the additive function that counts the distinct prime divisors of $n$. As usual, an empty product is defined to be $1$, and an empty sum is defined to be $0$. We consider the product $$\label{eq:ddp} \mathbb{D}^+_n := \prod_{\substack{p \, > \, \sqrt{n}\\ s_p(n) \, \geq \, p}} p \quad (n \geq 1),$$ where $p$ runs over the primes. 
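As a quick illustration (a small sketch for experimentation, not part of the paper), the product above can be evaluated directly from its definition by testing the primes $p\le n$:

```python
def digit_sum(p, n):
    """Sum of the base-p digits of n."""
    total = 0
    while n:
        total, n = total + n % p, n // p
    return total

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def D_plus(n):
    """Product of the primes p > sqrt(n) with s_p(n) >= p."""
    prod = 1
    for p in range(2, n + 1):
        if is_prime(p) and p * p > n and digit_sum(p, n) >= p:
            prod *= p
    return prod

# Example: D_plus(7) == 3, since 7 = (2 1) in base 3 gives s_3(7) = 3 >= 3,
# while no other prime p with p^2 > 7 satisfies s_p(7) >= p.
assert D_plus(7) == 3
```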
Note that the above product is always finite, since $s_p(n) = n$ for $p > n$. By Kellner [@Kellner:2017 Theorem 4], we have a further relation that $$\label{eq:ddpo} \omega(\mathbb{D}^+_n) = \sum_{\substack{ p \, > \, \sqrt{n}\\ \lfloor{\frac{n-1}{p-1}}\rfloor \, > \, \lfloor{\frac{n}{p}}\rfloor}} \!\! 1 < \sqrt{n} \quad (n \geq 1).$$ We shall clarify the notation of $\mathbb{D}^+_n$ in a more general setting in the next section. **Conjecture 2** (Kellner [@Kellner:2017 Conjectures 1, 2]). We have the following statements: 1. We have $\mathbb{D}^+_n > 1$, respectively, $\omega(\mathbb{D}^+_n) > 0$ for $n > 192$. 2. There exists a constant $\kappa > 1$ such that $$\label{eq:ddpo2} \omega(\mathbb{D}^+_n) \, \sim \, \kappa \, \frac{\sqrt{n}}{\log n} \quad \text{as $n \to \infty$.}$$ At first glance, Conjectures [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"} and [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"} seem to be incompatible. However, we can establish the following connection. **Theorem 3**. *Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}(i) and (ii) imply Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}(ii) and (i), respectively.* Meanwhile, Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}(ii) with $\kappa=2$ was proven by Bordellès et al. [@BLMS:2018] for sufficiently large $n > n_0$. This result was achieved by exploiting [\[eq:ddpo\]](#eq:ddpo){reference-type="eqref" reference="eq:ddpo"}, since the condition $s_p(n) \geq p$ as in [\[eq:ddp\]](#eq:ddp){reference-type="eqref" reference="eq:ddp"} is replaced by $\lfloor{\frac{n-1}{p-1}}\rfloor > \lfloor{\frac{n}{p}}\rfloor$ in [\[eq:ddpo\]](#eq:ddpo){reference-type="eqref" reference="eq:ddpo"}, which enabled them to use powerful analytic tools. Unfortunately, their methods do not lead to an explicit or computable bound $n_0$. Using their results, we arrive at the following corollary. **Corollary 4** (Bordellès, Luca, Moree, and Shparlinski [@BLMS:2018 Corollary 1.6]). *Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}(ii) is true, so Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}(i) is true.* **Theorem 5**. *We have the following statements:* 1. *If $n \in \overline{\mathcal{S}}$, then $n+1$ is prime.* 2. *If $\overline{\mathcal{S}}\neq \mathcal{S}$ and $n \in \overline{\mathcal{S}}\setminus \mathcal{S}$, then $n > 10^7$.* As a consequence, the set $$\overline{\mathcal{S}}+ 1 = \{2, 3, 5, 7, 11, 13, 29, 31, 37, 61, \ldots\}$$ contains only primes. It would be very unlikely that $\mathbb{D}^+_n = 1$ happens for $n > 192$. See the graph [@Kellner:2017 Figure B1] of $\omega(\mathbb{D}^+_n)$ in the range below $10^7$ and consider the coincident and proven asymptotic formula of $\omega(\mathbb{D}^+_n)$ for sufficiently large $n$. However, it is still an open task to show Conjectures [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"} and [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"} completely. The paper is organized as follows. In the next section, we give a survey about $p$-adic properties of the denominators of the Bernoulli polynomials. We also show further characterizations of Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}. 
In Section [3](#sec:extend){reference-type="ref" reference="sec:extend"}, we extend the results to higher derivatives of the Bernoulli polynomials. Section [4](#sec:proof){reference-type="ref" reference="sec:proof"} contains the proofs of the theorems. # Denominators and $p$-adic properties To study the denominators of the Bernoulli polynomials, it is convenient to consider for $n \geq 1$ the related denominators $$\begin{aligned} \mathbf{D}_n &:= \mathop{\mathrm{denom}}(\mathbf{B}_n) = 2,6,1,30,1,42,1,30,1,66,\ldots,\\ \mathbb{D}_n &:= \mathop{\mathrm{denom}}(\mathbf{B}_n(x) - \mathbf{B}_n) = 1,1,2,1,6,2,6,3,10,2,\ldots,\\ \mathfrak{D}_n &:= \mathop{\mathrm{denom}}(\mathbf{B}_n(x)) = 2,6,2,30,6,42,6,30,10,66,\ldots,\end{aligned}$$ which are all squarefree. They are sequences [[A027642]{.ul}](https://oeis.org/A027642), [[A195441]{.ul}](https://oeis.org/A195441), and [[A144845]{.ul}](https://oeis.org/A144845), respectively. Obviously, we have by definition the relation $$\label{eq:dbdn} \mathfrak{D}_n = \mathop{\mathrm{lcm}}(\mathbb{D}_n, \mathbf{D}_n).$$ The denominators $\mathbf{D}_n$ of the Bernoulli numbers are given by the well-known von Staudt--Clausen theorem of $1840$ (Clausen [@Clausen:1840] and von Staudt [@Staudt:1840]), which states for even positive integers $n$ that $$\mathbf{B}_n + \sum_{p-1 \,\mid\,n} \frac{1}{p} \in \mathbb{Z}, \quad \text{which implies that} \quad \mathbf{D}_n = \prod_{p-1 \,\mid\,n} p.$$ However, the denominators $\mathbf{D}_n$ do not play a role here, since we follow the approaches of Kellner [@Kellner:2017] and Kellner and Sondow [@Kellner&Sondow:2017; @Kellner&Sondow:2018; @Kellner&Sondow:2021], which are concerned with the $p$-adic properties of the denominators $\mathbb{D}_n$. For $n \geq 1$, these denominators are given by the remarkable formula $$\label{eq:ddprod} \mathbb{D}_n = \prod_{s_p(n) \, \geq \, p} p,$$ which arises from the $p$-adic product formula; see Kellner [@Kellner:2017 Section 5]. The decomposition $$\label{eq:decomp} \mathbb{D}_n = \mathbb{D}^-_n \,\cdot\, \mathbb{D}^+_n,$$ where $\mathbb{D}^+_n$ is defined as in [\[eq:ddp\]](#eq:ddp){reference-type="eqref" reference="eq:ddp"} and $$\mathbb{D}^-_n := \prod_{\substack{p \,<\, \sqrt{n}\\ s_p(n) \,\geq\, p}} p,$$ leads to Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}. Note that the decomposition [\[eq:decomp\]](#eq:decomp){reference-type="eqref" reference="eq:decomp"} omits the possible term for $p = \sqrt{n}$, but then we would have $p^2 = n$ and so $s_p(n) = 1$. For computational purposes, those products, which run over the primes $p$ and contain the condition $s_p(n) \geq p$, are trivially bounded by $p < n$. Moreover, the following bounds [@Kellner:2017 Lemmas 1, 2] are self-induced by properties of $s_p(n)$. Namely, we have for $n \geq 1$ that $$s_p(n) < p, \quad \text{if\ } p > \frac{n+1}{\lambda} \text{\ where\ } \lambda = \begin{cases} 2, & \text{if $n$ is odd;} \\ 3, & \text{if $n$ is even.} \end{cases}$$ For the sake of completeness, we show that the polynomials $\mathbf{B}_n(x) - \mathbf{B}_n$, which have no constant term, arise in a natural context. 
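Both the product formula ([\[eq:ddprod\]](#eq:ddprod){reference-type="ref" reference="eq:ddprod"}) and the listed values of $\mathbb{D}_n$ can be confirmed by direct computation. The following sketch (illustrative only, not part of the paper; it relies on SymPy's `bernoulli`) checks for $1\le n\le 10$ that $\operatorname{denom}(\mathbf{B}_n(x)-\mathbf{B}_n)$ agrees with the product of the primes $p$ satisfying $s_p(n)\ge p$.

```python
from math import lcm
from sympy import Poly, bernoulli, primerange, symbols

x = symbols("x")

def digit_sum(p, n):
    """Sum of the base-p digits of n."""
    total = 0
    while n:
        total, n = total + n % p, n // p
    return total

def DD(n):
    """Right-hand side of the product formula: the product of primes p with s_p(n) >= p."""
    prod = 1
    for p in primerange(2, n + 1):
        if digit_sum(p, n) >= p:
            prod *= p
    return prod

for n in range(1, 11):
    poly = Poly(bernoulli(n, x) - bernoulli(n, 0), x)
    denominator = lcm(*(coeff.q for coeff in poly.coeffs()))
    assert denominator == DD(n)
# This reproduces the listed values 1, 1, 2, 1, 6, 2, 6, 3, 10, 2 for n = 1, ..., 10.
```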
For $n \geq 0$, define the sum-of-powers function $$S_n(m) := \sum_{\nu=0}^{m-1} \nu^n \quad (m \geq 0).$$ It is well known that $$\label{eq:powint} S_n(x) = \int_{0}^{x} \, \mathbf{B}_n(t) \, dt = \frac{1}{n+1}(\mathbf{B}_{n+1}(x) - \mathbf{B}_{n+1}).$$ As a result of Kellner [@Kellner:2017 Theorem 5] and Kellner and Sondow [@Kellner&Sondow:2017 Theorems 1, 2], we then have for $n \geq 0$ that $$\label{eq:ds} \mathcal{D}_n := \mathop{\mathrm{denom}}(S_n(x)) = (n+1) \, \mathbb{D}_{n+1} = 1, 2, 6, 4, 30, 12, 42, 24, 90, 20, \ldots,$$ which is sequence [[A064538]{.ul}](https://oeis.org/A064538). *Remark 1*. Since the Bernoulli polynomials $\mathbf{B}_n(x)$ are Appell polynomials satisfying the reflection relation $\mathbf{B}_n(1-x) = (-1)^n \, \mathbf{B}_n(x)$, the integral in [\[eq:powint\]](#eq:powint){reference-type="eqref" reference="eq:powint"} can be reinterpreted by Faulhaber-type polynomials that are connected with certain reciprocal Bernoulli polynomials, as recently shown by the author; see [@Kellner:2021 Example 5.6] and [@Kellner:2023 Section 11]. Let ${n \geq 1}$, and let $\mathop{\mathrm{rad}}(n)$ be the squarefree kernel of $n$. Define the decompositions $$\label{eq:decomp2} \mathbb{D}_n = \mathbb{D}^\top_n \,\cdot\, \mathbb{D}^\bot_n \quad \text{and} \quad \mathop{\mathrm{rad}}(n) = \mathbb{D}^\top_n \,\cdot\, \mathbb{D}^{\top^{\scriptstyle\star}}_n,$$ where $$\label{eq:ddprod2} \mathbb{D}^\top_n := \prod_{\substack{p \,\mid\,n\\ s_p(n) \, \geq \, p}} p, \quad \mathbb{D}^\bot_n := \prod_{\substack{p \,\nmid\,n\\ s_p(n) \, \geq \, p}} p, \quad \text{and} \quad \mathbb{D}^{\top^{\scriptstyle\star}}_n := \prod_{\substack{p \,\mid\,n\\ s_p(n) \, < \, p}} p.$$ The sequences of $\mathbb{D}^\top_n$, $\mathbb{D}^\bot_n$, and $\mathbb{D}^{\top^{\scriptstyle\star}}_n$ are sequences [[A324369]{.ul}](https://oeis.org/A324369), [[A324370]{.ul}](https://oeis.org/A324370), and [[A324371]{.ul}](https://oeis.org/A324371), respectively. We arrive at the following theorem. **Theorem 6** (Kellner and Sondow [@Kellner&Sondow:2021 Theorem 3.1]). *For $n \geq 1$, the denominator $\mathfrak{D}_n$ of the Bernoulli polynomial $\mathbf{B}_n(x)$ splits into the triple product $$\mathfrak{D}_n = \mathbb{D}^\bot_{n+1} \,\cdot\, \mathbb{D}^\top_{n+1} \,\cdot\, \mathbb{D}^{\top^{\scriptstyle\star}}_{n+1}.$$* Consequently, the interplay of the factors of $\mathfrak{D}_n$ yields the relations $$\label{eq:dbrel} \mathfrak{D}_n = \mathbb{D}^\bot_{n+1} \,\cdot\, \mathop{\mathrm{rad}}(n+1) = \mathbb{D}_{n+1} \,\cdot\, \mathbb{D}^{\top^{\scriptstyle\star}}_{n+1} = \mathop{\mathrm{lcm}}(\mathbb{D}_{n+1}, \mathop{\mathrm{rad}}(n+1)).$$ Compared with [\[eq:dbdn\]](#eq:dbdn){reference-type="eqref" reference="eq:dbdn"}, one may observe that the above equations have a shifted index $n+1$ regarding the related factors of $\mathbb{D}_n$. To simplify notation, we include the case $\mathfrak{D}_0 = 1$, which coincides with [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}. Apart from that, we explicitly avoid the case ${n=0}$ of the related symbols of $\mathbb{D}_n$ in view of their product identities [\[eq:decomp2\]](#eq:decomp2){reference-type="eqref" reference="eq:decomp2"} and [\[eq:ddprod2\]](#eq:ddprod2){reference-type="eqref" reference="eq:ddprod2"}. **Corollary 7**. *Let $n \geq 1$. The following statements hold:* 1. *We have that $\mathfrak{D}_n = \mathop{\mathrm{rad}}(\mathcal{D}_n)$.* 2. *We have that $\mathfrak{D}_n$ is even, which implies that $\mathbf{B}_n(x) \notin \mathbb{Z}[x]$.* 3. 
*We have that $\mathbb{D}^\bot_n$ is odd, if $n = 1$ or $n \geq 2$ is even. and $\mathbb{D}^\bot_n$ is even, if $n \geq 3$ is odd.* A different proof of part (ii) via [\[eq:dbdn\]](#eq:dbdn){reference-type="eqref" reference="eq:dbdn"} is given by Kellner and Sondow [@Kellner&Sondow:2017 Theorem 4]. *Proof.* Let $n \geq 1$. We show three parts. (i). From [\[eq:ds\]](#eq:ds){reference-type="eqref" reference="eq:ds"} and [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}, we derive that $\mathfrak{D}_n = \mathop{\mathrm{lcm}}(\mathbb{D}_{n+1}, \mathop{\mathrm{rad}}(n+1)) = \mathop{\mathrm{rad}}(\mathbb{D}_{n+1}\,(n+1)) = \mathop{\mathrm{rad}}(\mathcal{D}_n)$. (iii). We have $\mathbb{D}^\bot_1 = 1$. If $2 \mid n$, then $2 \nmid \mathbb{D}^\bot_n$. Otherwise, for odd $n \geq 3$, it follows that $2 \nmid n$ and $s_2(n) \geq 2$, providing that $2 \mid \mathbb{D}^\bot_n$. (ii). Considering [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}, the factor $\mathop{\mathrm{rad}}(n+1)$ is even for odd $n$, whereas $2 \mid \mathbb{D}^\bot_{n+1}$ when $n$ is even using part (iii). Both cases show that $\mathfrak{D}_n$ is even for $n \geq 1$. This completes the proof. ◻ The properties of $\mathbb{D}_n$ and $\mathbb{D}^\bot_n$ lead to the following characterizations, which are connected with Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}. For this purpose, we define the sets $$\overline{\mathcal{R}}:= \{n \geq 1 : \mathbb{D}_n = \mathop{\mathrm{rad}}(n+1)\} \quad \text{and} \quad \mathcal{R}:= \{3,5,8,9,11,27,29,35,59\}.$$ **Theorem 8**. *Let $n \geq 1$. We have that $\mathbf{B}'_n(x) \in \mathbb{Z}[x]$ if and only if $\mathbb{D}^\bot_n = 1$, or equivalently, $\mathfrak{D}_{n-1} = \mathop{\mathrm{rad}}(n)$. In these cases, the number $n+1$ is prime.* **Theorem 9**. *If $n \in \overline{\mathcal{R}}$, then $n+1$ is composite. In particular, if $n$ is odd, then $\mathbb{D}^\bot_{n+1} = 1$. Otherwise, we have that $n = 2^e$ for some $e \geq 1$. Moreover, the set $\overline{\mathcal{R}}$ is finite.* **Conjecture 10**. We have $\overline{\mathcal{R}}= \mathcal{R}$. **Theorem 11**. *Conjecture [Conjecture 10](#conj:rad){reference-type="ref" reference="conj:rad"}, reduced to odd numbers $n \in \mathcal{R}$, implies Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}.* # Denominators and higher derivatives {#sec:extend} In this section, we extend the results to higher derivatives of $\mathbf{B}_n(x)$. Let $(n)_k$ denote the falling factorial such that $\binom{n}{k} = (n)_k / k!$. We define the related denominators by $$\mathfrak{D}_n^{(k)} := \mathop{\mathrm{denom}}(\mathbf{B}_n^{(k)}(x)) \quad (k, n \geq 1).$$ **Theorem 12**. *Let $k, n \geq 1$. Then we have $$\label{eq:dbderiv} \mathfrak{D}_n^{(k)} = \frac{\mathfrak{D}_{n-k}}{\gcd(\mathfrak{D}_{n-k},(n)_k)} = \frac{\mathbb{D}^\bot_{n-k+1}}{\gcd(\mathbb{D}^\bot_{n-k+1},(n)_{k-1})} = \prod_{\substack{p \,\nmid\,(n)_k\\ s_p(n-k+1) \, \geq \, p}} p \quad (n \geq k).$$ Otherwise, we have $\mathfrak{D}_n^{(k)} = 1$. Moreover, the denominators $\mathfrak{D}_n^{(k)}$ have the property that $p \nmid \mathfrak{D}_n^{(k)}$ for all primes $p \leq k$. 
In particular, we have $$\mathfrak{D}_n^{(1)} = \frac{\mathfrak{D}_{n-1}}{\mathop{\mathrm{rad}}(n)} = \mathbb{D}^\bot_n \quad (n \geq 1).$$* Define the related sets $$\overline{\mathcal{S}}_k := \{n \geq 1 : \mathbf{B}^{(k)}_n(x) \in \mathbb{Z}[x]\} \quad (k \geq 1),$$ where $\overline{\mathcal{S}}_1 = \overline{\mathcal{S}}$. Let $\mathcal{S}_k$ denote the computable subsets of $\overline{\mathcal{S}}_k$ with $\mathcal{S}_1 = \mathcal{S}$. **Theorem 13**. *We have that all sets $\overline{\mathcal{S}}_k$ are finite for $k \geq 1$. Moreover, we have that $$\overline{\mathcal{S}}_1 \subset \overline{\mathcal{S}}_2 \subset \overline{\mathcal{S}}_3 \subset \cdots.$$* Recall that $$\mathcal{S}_1 = \{1, 2, 4, 6, 10, 12, 28, 30, 36, 60\}.$$ We use the notation, e.g., $\{a-b\} = \{a,\ldots,b\}$ for ranges. We compute the sets $$\begin{aligned} \mathcal{S}_2 = \{ &1-7,9-13,15,16,21,25,28-31,36,37,55,57,60,61,70,121,190 \}, \\ \mathcal{S}_3 = \{ &1-18, 20-22, 25, 26, 28-32, 35-38, 42, 50, 52, 55-58, 60-62, \\ & 66, 70-72, 78, 80, 92, 110, 121, 122, 156, 176, 177, 190, 191, 210, 392 \}.\end{aligned}$$ **Conjecture 14**. We have $\overline{\mathcal{S}}_2 = \mathcal{S}_2$ and $\overline{\mathcal{S}}_3 = \mathcal{S}_3$. # Proofs of the theorems {#sec:proof} We give the proofs of the theorems in the order of their dependencies. First, we need some key lemmas. **Lemma 15**. *Let $n \geq 1$. We have the following properties:* 1. *We have that $\mathbb{D}_n$ is odd if and only if $n = 2^e$ for some $e \geq 0$.* 2. *If $n+1$ is composite, then $\mathop{\mathrm{rad}}(n+1) \mid \mathbb{D}_n$ and $\mathop{\mathrm{rad}}(n+1) \mid \mathbb{D}^\bot_n$.* 3. *If $n \geq 3$ is odd, then $\mathbb{D}_n = \mathop{\mathrm{lcm}}(\mathbb{D}_{n+1}, \mathop{\mathrm{rad}}(n+1))$.* *Proof.* (i). See [@Kellner&Sondow:2017 Theorem 1]. (ii). See [@Kellner:2017 Theorem 1] and [@Kellner&Sondow:2018 Corollary 2], respectively. (iii). See [@Kellner&Sondow:2021 Theorem 3.2]. ◻ **Lemma 16**. *For $n \geq 1$, we have $\mathbb{D}^+_n \mid \mathbb{D}^\bot_n$.* *Proof.* Let $n \geq 1$, and assume that $\mathbb{D}^+_n > 1$. Otherwise, we are done. If a prime $p \mid \mathbb{D}^+_n$, then by [\[eq:ddp\]](#eq:ddp){reference-type="eqref" reference="eq:ddp"} we have $p > \sqrt{n}$ and $s_p(n) \geq p$. Thus, we have $p^2 > n$ implying the $p$-adic expansion $n = a_0 + a_1 p$. It then follows from $a_0 + a_1 = s_p(n) \geq p$ that $a_0 \neq 0$ and $p \nmid n$. Finally, this shows by [\[eq:ddprod2\]](#eq:ddprod2){reference-type="eqref" reference="eq:ddprod2"} that $p \mid \mathbb{D}^\bot_n$. ◻ **Lemma 17**. *For $k \geq 1$, there exists a sufficiently large number $n_k$ such that $\mathbb{D}^+_n > (n+k)_k$ for $n > n_k$.* *Proof.* Let $k \geq 1$. Using the result of Bordellès et al. [@BLMS:2018 Corollary 1.6], we have that [\[eq:ddpo2\]](#eq:ddpo2){reference-type="eqref" reference="eq:ddpo2"} holds with $\kappa = 2$. Thus, there exists a sufficiently large number $n_k$ such that $\omega(\mathbb{D}^+_n) \geq 2k$ for $n > n_k$. We group $2k$ prime divisors $p_1 < p_2 < \cdots < p_{2k}$ of $\mathbb{D}^+_n$ in pairs. Since $p \mid \mathbb{D}^+_n$ implies $p > \sqrt{n}$, we infer that $n+1 < p_1 p_2 < p_3 p_4 < \cdots$. Consequently, we get $\mathbb{D}^+_n > (n+1)\cdots(n+k) = (n+k)_k$ for $n > n_k$. ◻ *Proof of Theorem [Theorem 8](#thm:deriv){reference-type="ref" reference="thm:deriv"}.* Let $n \geq 1$. 
If $\mathbf{B}'_n(x) \in \mathbb{Z}[x]$, then we have by [\[eq:bpderiv\]](#eq:bpderiv){reference-type="eqref" reference="eq:bpderiv"} that $$\mathop{\mathrm{denom}}(\mathbf{B}'_n(x)) = \mathop{\mathrm{denom}}(n \, \mathbf{B}_{n-1}(x)) = 1.$$ As a consequence, we have that $\mathfrak{D}_{n-1} \mid n$. Applying Theorem [Theorem 6](#thm:triple){reference-type="ref" reference="thm:triple"} and [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}, it follows that $$\label{eq:ddcond} \mathbb{D}^\bot_n \,\cdot\, \mathop{\mathrm{rad}}(n) \mid n.$$ Since $\mathbb{D}^\bot_n$ is coprime to $n$, we conclude that $\mathbb{D}^\bot_n = 1$. In the other direction, condition [\[eq:ddcond\]](#eq:ddcond){reference-type="eqref" reference="eq:ddcond"} is satisfied only if $\mathbb{D}^\bot_n = 1$. By [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}, this is equivalent to $\mathfrak{D}_{n-1} = \mathop{\mathrm{rad}}(n)$. From Lemma [Lemma 15](#lem:ddprop){reference-type="ref" reference="lem:ddprop"}(ii), it further follows that $n+1$ must be prime. Otherwise, we would have that $\mathop{\mathrm{rad}}(n+1) \mid \mathbb{D}^\bot_n$. This completes the proof. ◻ *Proof of Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"}.* First, we assume Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}(i). Let $n > 192$. We then have that $\mathbb{D}^+_n > 1$. By Lemma [Lemma 16](#lem:dddiv){reference-type="ref" reference="lem:dddiv"}, this property transfers to $\mathbb{D}^\bot_n > 1$. Using Theorem [Theorem 8](#thm:deriv){reference-type="ref" reference="thm:deriv"} implies that $\mathbf{B}'_n(x) \notin \mathbb{Z}[x]$ for $n > 192$. We have to check the remaining cases $1 \leq n \leq 192$. By Theorem [Theorem 8](#thm:deriv){reference-type="ref" reference="thm:deriv"}, it then suffices to check the numbers $\mathbb{D}^\bot_n$ when $n+1$ is prime. Finally, this confirms that $\mathbf{B}'_n(x) \in \mathbb{Z}[x]$ if and only if $n \in \mathcal{S}$, implying Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}(ii). Secondly, we now assume Conjecture [Conjecture 2](#conj:kel){reference-type="ref" reference="conj:kel"}(ii). It follows from [\[eq:ddpo2\]](#eq:ddpo2){reference-type="eqref" reference="eq:ddpo2"} that there exists a number $n_0$ such that $\mathbb{D}^+_n > 1$ for $n > n_0$. Similar to the first part above, this implies that $\mathbf{B}'_n(x) \notin \mathbb{Z}[x]$ for $n > n_0$. Hence, the set $\overline{\mathcal{S}}$ is finite, which is Conjecture [Conjecture 1](#conj:main){reference-type="ref" reference="conj:main"}(i). This proves the theorem. ◻ *Proof of Theorem [Theorem 5](#thm:main2){reference-type="ref" reference="thm:main2"}.* We have to show two parts. (i). By Theorem [Theorem 8](#thm:deriv){reference-type="ref" reference="thm:deriv"}, we have that $n \in \overline{\mathcal{S}}$ implies that $n+1$ is prime. (ii). This follows from computations of the graph [@Kellner:2017 Figure B1] of $\omega(\mathbb{D}^+_n)$ in the range below $10^7$. ◻ *Proof of Theorem [Theorem 9](#thm:rad){reference-type="ref" reference="thm:rad"}.* Let $n \in \overline{\mathcal{R}}$. Assume that $\mathbb{D}_n = \mathop{\mathrm{rad}}(n+1)$, where $p = n+1$ is prime. Then $\mathbb{D}_n = p$ contradicts [\[eq:ddprod\]](#eq:ddprod){reference-type="eqref" reference="eq:ddprod"}, since $s_p(n) = n < p$. Therefore, the number $n+1$ is composite. 
If $n$ is even, then $\mathbb{D}_n = \mathop{\mathrm{rad}}(n+1)$ implies that $\mathbb{D}_n$ is odd. From Lemma [Lemma 15](#lem:ddprop){reference-type="ref" reference="lem:ddprop"}(i), it follows that $n = 2^e$ for some $e \geq 1$. Now, we assume that $n \geq 3$ is odd, neglecting the case $\mathbb{D}_1=1$. Using Lemma [Lemma 15](#lem:ddprop){reference-type="ref" reference="lem:ddprop"}(iii), we infer that $$\mathbb{D}_n = \mathop{\mathrm{lcm}}(\mathbb{D}_{n+1}, \mathop{\mathrm{rad}}(n+1)) = \mathbb{D}^\bot_{n+1} \mathop{\mathrm{lcm}}(\mathbb{D}^\top_{n+1}, \mathop{\mathrm{rad}}(n+1)) = \mathop{\mathrm{rad}}(n+1),$$ so $\mathbb{D}^\bot_{n+1} = 1$ as desired. There remains to show that $\overline{\mathcal{R}}$ is finite. Applying Lemma [Lemma 17](#lem:ddestim){reference-type="ref" reference="lem:ddestim"} with $k=1$ shows that there exists $n_1$ such that $\mathbb{D}^+_n > n+1$ for $n > n_1$. Since $\mathbb{D}_n \geq \mathbb{D}^+_n$ by [\[eq:decomp\]](#eq:decomp){reference-type="eqref" reference="eq:decomp"}, it follows that $\overline{\mathcal{R}}$ is finite. This completes the proof. ◻ *Proof of Theorem [Theorem 11](#thm:rad2){reference-type="ref" reference="thm:rad2"}.* Let $\mathcal{R}' = \{3,5,9,11,27,29,35,59\}$ be the reduced set of $\mathcal{R}$ consisting only of odd numbers. From Theorems [Theorem 8](#thm:deriv){reference-type="ref" reference="thm:deriv"} and [Theorem 9](#thm:rad){reference-type="ref" reference="thm:rad"}, we derive that $n \in \mathcal{R}'$ implies that $\mathbb{D}^\bot_{n+1} = 1$ and so $n+1 \in \mathcal{S}$. Since $\mathcal{R}'+1 = \mathcal{S}\setminus \{1,2\}$, the result follows. ◻ *Proof of Theorem [Theorem 12](#thm:high){reference-type="ref" reference="thm:high"}.* First assume that $1 \leq n < k$. By [\[eq:bpdef\]](#eq:bpdef){reference-type="eqref" reference="eq:bpdef"}, the Bernoulli polynomial $\mathbf{B}_n(x)$ is a monic polynomial of degree $n$. Thus, the $k$th derivative of $\mathbf{B}_n(x)$ vanishes, yielding $\mathfrak{D}_n^{(k)} = 1$. If $n=k$, then $\mathbf{B}^{(k)}_n(x) = n!$ and so $\mathfrak{D}_n^{(k)} = 1$. As $\mathfrak{D}_0 = \mathbb{D}^\bot_1 = 1$, this case coincides with [\[eq:dbderiv\]](#eq:dbderiv){reference-type="eqref" reference="eq:dbderiv"}. Now, let $n > k$. From [\[eq:bpderiv\]](#eq:bpderiv){reference-type="eqref" reference="eq:bpderiv"}, it follows that $$\mathfrak{D}_n^{(k)} = \mathop{\mathrm{denom}}(\mathbf{B}^{(k)}_n(x)) = \mathop{\mathrm{denom}}((n)_k \, \mathbf{B}_{n-k}(x)) = \frac{\mathfrak{D}_{n-k}}{\gcd(\mathfrak{D}_{n-k},(n)_k)}.$$ Using [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"}, we have $\mathfrak{D}_{n-k} = \mathbb{D}^\bot_{n-k+1} \mathop{\mathrm{rad}}(n-k+1)$. Recall that $\mathfrak{D}_{n-k}$ is squarefree. Together with $(n)_k = (n)_{k-1} (n-k+1)$, we infer that $$\gcd(\mathfrak{D}_{n-k},(n)_k) = \gcd(\mathbb{D}^\bot_{n-k+1},(n)_{k-1}) \mathop{\mathrm{rad}}(n-k+1).$$ As a consequence, we obtain $$\label{eq:dbderiv2} \mathfrak{D}_n^{(k)} = \frac{\mathbb{D}^\bot_{n-k+1}}{\gcd(\mathbb{D}^\bot_{n-k+1},(n)_{k-1})}.$$ By [\[eq:ddprod2\]](#eq:ddprod2){reference-type="eqref" reference="eq:ddprod2"}, we have $$\mathbb{D}^\bot_{n-k+1} = \prod_{\substack{p \,\nmid\,n-k+1\\ s_p(n-k+1) \, \geq \, p}} p.$$ Removing those prime factors $p \mid \gcd(\mathbb{D}^\bot_{n-k+1},(n)_{k-1})$ from $\mathbb{D}^\bot_{n-k+1}$ implies that we can exclude all primes $p \mid (n)_{k-1}$ from the above product. 
Finally, we get $$\mathfrak{D}_n^{(k)} = \prod_{\substack{p \,\nmid\,(n)_k\\ s_p(n-k+1) \, \geq \, p}} p.$$ From $k! \mid (n)_k$ and $p \nmid (n)_k$, we derive that $p \nmid \mathfrak{D}_n^{(k)}$ for $p \leq k$. At the end, we consider the case $k=1$ for $n \geq 1$. Including the case $\mathfrak{D}_1^{(1)} = \mathbb{D}^\bot_1 = 1$, then [\[eq:dbderiv2\]](#eq:dbderiv2){reference-type="eqref" reference="eq:dbderiv2"} reduces to $\mathfrak{D}_n^{(1)} = \mathbb{D}^\bot_n$. From [\[eq:dbrel\]](#eq:dbrel){reference-type="eqref" reference="eq:dbrel"} and $\mathfrak{D}_0 = 1$, it also follows that $\mathfrak{D}_n^{(1)} = \mathfrak{D}_{n-1} / \mathop{\mathrm{rad}}(n)$. This proves the theorem. ◻ *Proof of Theorem [Theorem 13](#thm:high2){reference-type="ref" reference="thm:high2"}.* Let $k \geq 1$. Combining Lemmas [Lemma 16](#lem:dddiv){reference-type="ref" reference="lem:dddiv"} and [Lemma 17](#lem:ddestim){reference-type="ref" reference="lem:ddestim"}, there exists a number $n_k$ such that $$\mathbb{D}^\bot_n \geq \mathbb{D}^+_n > (n+k)_k \quad (n > n_k).$$ By Theorem [Theorem 12](#thm:high){reference-type="ref" reference="thm:high"} and shifting the index $n$ to $n+k-1$ in [\[eq:dbderiv\]](#eq:dbderiv){reference-type="eqref" reference="eq:dbderiv"}, we obtain $$\mathfrak{D}_{n+k-1}^{(k)} = \frac{\mathbb{D}^\bot_{n}}{\gcd(\mathbb{D}^\bot_{n},(n+k-1)_{k-1})} \quad (n \geq 1).$$ Since $\gcd(\mathbb{D}^\bot_{n},(n+k-1)_{k-1}) \leq (n+k-1)_{k-1} < (n+k)_k$, we then deduce that $$\mathfrak{D}_{n+k-1}^{(k)} > \frac{\mathbb{D}^\bot_{n}}{(n+k)_k} > 1 \quad (n > n_k),$$ showing that $\overline{\mathcal{S}}_k$ is finite. As $\mathbf{B}^{(k)}_n(x) \in \mathbb{Z}[x]$ also implies that $\mathbf{B}^{(k+1)}_n(x) \in \mathbb{Z}[x]$, we infer that $\overline{\mathcal{S}}_k \subset \overline{\mathcal{S}}_{k+1}$. Hence, this yields $\overline{\mathcal{S}}_1 \subset \overline{\mathcal{S}}_2 \subset \overline{\mathcal{S}}_3 \subset \cdots$, completing the proof. ◻ 10 P. Appell, Sur une classe de polynômes, *Ann. Sci. École Norm. Sup.* (2) **9** (1880), 119--144. O. Bordellès, F. Luca, P. Moree, and I. E. Shparlinski, Denominators of Bernoulli polynomials, *Mathematika* **64** (2018), 519--541. T. Clausen, Lehrsatz aus einer Abhandlung über die Bernoullischen Zahlen, *Astr. Nachr.* **17** (1840), 351--352. H. Cohen, *Number Theory, Volume II: Analytic and Modern Tools*, GTM **240**, Springer--Verlag, New York, 2007. B. C. Kellner, On a product of certain primes, *J. Number Theory* **179** (2017), 126--141. B. C. Kellner, On (self-) reciprocal Appell polynomials: symmetry and Faulhaber-type polynomials, *Integers* **21** (2021), \#A119, 1--19. B. C. Kellner, Faulhaber polynomials and reciprocal Bernoulli polynomials, *Rocky Mountain J. Math.* **53** (2023), 119--151. B. C. Kellner and J. Sondow, Power-sum denominators, *Amer. Math. Monthly* **124** (2017), 695--709. B. C. Kellner and J. Sondow, The denominators of power sums of arithmetic progressions, *Integers* **18** (2018), \#A95, 1--17. B. C. Kellner and J. Sondow, On Carmichael and polygonal numbers, Bernoulli polynomials, and sums of base-$p$ digits, *Integers* **21** (2021), \#A52, 1--21. N. J. A. Sloane et al., *The On-Line Encyclopedia of Integer Sequences*. Published electronically at <https://oeis.org>, 2023. K. G. C. von Staudt, Beweis eines Lehrsatzes die Bernoullischen Zahlen betreffend, *J. Reine Angew. Math.* **21** (1840), 372--374.
arxiv_math
{ "id": "2310.01325", "title": "On the finiteness of Bernoulli polynomials whose derivative has only\n integral coefficients", "authors": "Bernd C. Kellner", "categories": "math.NT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we introduce a new discretization of the Gaussian curvature on surfaces, which is defined as the quotient of the angle defect and the area of some dual cell of a weighted triangulation at the conic singularity. A discrete uniformization theorem for this discrete Gaussian curvature is established on surfaces with non-positive Euler number. The main tools are Bobenko-Lutz's discrete conformal theory for decorated piecewise Euclidean metrics on surfaces and variational principles with constraints. address: - School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, P.R.China - School of Mathematics and Statistics, Wuhan University, Wuhan 430072, P.R. China author: - Xu Xu, Chao Zheng title: "**A discrete uniformization theorem for decorated piecewise Euclidean metrics on surfaces**" --- [^1] # Introduction Bobenko-Lutz [@BL] recently introduced the decorated piecewise Euclidean metrics on surfaces. Suppose $S$ is a connected closed surface and $V$ is a finite non-empty subset of $S$, we call $(S, V)$ a marked surface. A piecewise Euclidean metric (PE metric) $dist_{S}$ on the marked surface $(S,V)$ is a flat cone metric with the conic singularities contained in $V$. A decoration on a PE surface $(S,V, dist_{S})$ is a choice of circle of radius $r_i\geq 0$ at each point $i\in V$. These circles in the decoration are called vertex-circles. We denote a decorated PE surface by $(S,V, dist_{S}, r)$ and call the pair $(dist_S,r)$ a decorated PE metric. In this paper, we focus on the case that $r_i>0$ for all $i\in V$ and each pair of vertex-circles is separated. For a PE surface $(S,V, dist_{S})$, denote $\theta_i$ as the cone angle at $i\in V$. The angle defect $$\label{Eq: curvature W} W: V\rightarrow (-\infty, 2\pi), \quad W_i=2\pi-\theta_i,$$ is used to describe the conic singularities of the PE metric. Let $\mathcal{T}={(V,E,F)}$ be a triangulation of $(S, V)$, where $V,E,F$ are the sets of vertices, edges and faces respectively. The triangulation $\mathcal{T}$ for a PE surface $(S,V, dist_S)$ is a geodesic triangulation if the edges are geodesics in the PE metric $dist_S$. We use one index to denote a vertex (such as $i$), two indices to denote an edge (such as $\{ij\}$) and three indices to denote a face (such as $\{ijk\}$) in the triangulation $\mathcal{T}$. For any decorated geodesic triangle $\{ijk\}\in F$, there is a unique circle $C_{ijk}$ simultaneously orthogonal to the three vertex-circles at the vertices $i,j,k$ [@Glickenstein; @preprint]. We call this circle $C_{ijk}$ as the face-circle of the decorated geodesic triangle $\{ijk\}$ and denote its center by $c_{ijk}$ and radius by $r_{ijk}$. The center $c_{ijk}$ of the face-circle $C_{ijk}$ of the decorated geodesic triangle $\{ijk\}$ is the geometric center introduced by Glickenstein [@Glickenstein; @JDG] and Glickenstein-Thomas [@GT] for general discrete conformal structures on surfaces. Denote $\alpha_{ij}^k$ as the interior intersection angle of the face-circle $C_{ijk}$ and the edge $\{ij\}$. Please refer to Figure [\[figure 1\]](#figure 1){reference-type="ref" reference="figure 1"} (left) for the angle $\alpha_{ij}^k$. The edge $\{ij\}$, shared by two adjacent decorated triangles $\{ijk\}$ and $\{ijl\}$, is called weighted Delaunay if $$\label{Eq: F7} \alpha_{ij}^k+\alpha_{ij}^l\leq \pi.$$ The triangulation $\mathcal{T}$ is called weighted Delaunay in the decorated PE metric $(dist_S,r)$ if every edge in the triangulation is weighted Delaunay. 
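Concretely, once a decorated Euclidean triangle is placed in the plane, the face-circle can be computed by elementary means: a circle is orthogonal to a vertex-circle exactly when the power of its center with respect to that vertex-circle equals its squared radius, so $c_{ijk}$ is the radical center of the three vertex-circles and $r_{ijk}^2$ is the common power at that point. The following Python sketch (a numerical illustration with made-up data, not taken from the paper) computes $c_{ijk}$ and $r_{ijk}$ in this way and verifies the orthogonality.

```python
import numpy as np

# A decorated Euclidean triangle: vertex positions and radii of the three
# (pairwise separated) vertex-circles.  The numbers are made up for illustration.
P = {"i": np.array([0.0, 0.0]), "j": np.array([6.0, 0.0]), "k": np.array([2.0, 5.0])}
r = {"i": 1.0, "j": 1.5, "k": 0.8}

def power(pt, v):
    """Power of the point pt with respect to the vertex-circle at the vertex v."""
    return float(np.dot(pt - P[v], pt - P[v])) - r[v] ** 2

# Equating the powers with respect to the circles at a and b gives the linear equation
#   2 pt . (P[b] - P[a]) = (|P[b]|^2 - r_b^2) - (|P[a]|^2 - r_a^2);
# two such equations determine the radical center.
pairs = (("i", "j"), ("i", "k"))
rows = np.array([2.0 * (P[b] - P[a]) for a, b in pairs])
rhs = np.array([(np.dot(P[b], P[b]) - r[b] ** 2) - (np.dot(P[a], P[a]) - r[a] ** 2)
                for a, b in pairs])
c_ijk = np.linalg.solve(rows, rhs)
r_ijk = power(c_ijk, "i") ** 0.5      # common power = squared radius of the face-circle

# Orthogonality to each vertex-circle: |c_ijk - P[v]|^2 = r_ijk^2 + r_v^2.
for v in "ijk":
    assert abs(np.dot(c_ijk - P[v], c_ijk - P[v]) - (r_ijk ** 2 + r[v] ** 2)) < 1e-9
```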
Connecting the center $c_{ijk}$ with the vertices $i,j,k$ by geodesics produces a cellular decomposition of the decorated triangle $\{ijk\}$. Denote $A_i^{jk}$ as the sum of the signed areas of the two triangles adjacent to $i$ in the cellular decomposition of the decorated triangle $\{ijk\}$. The area of the triangle with the vertices $i$, $j$, $c_{ijk}$ is positive if it is on the same side of the edge $\{ij\}$ as the decorated triangle $\{ijk\}$, otherwise it is negative (or zero if $c_{ijk}$ lies in $\{ij\}$). Please refer to the shaded domain in Figure [\[figure 1\]](#figure 1){reference-type="ref" reference="figure 1"} (left) for $A_i^{jk}$. Gluing these cells of all decorated triangles isometrically along edges in pairs leads to a cellular decomposition of the decorated PE surface $(S,V, dist_S,r)$. Set $$A_i=\sum_{\{ijk\}\in F}A_i^{jk}.$$ Please refer to Figure [\[figure 1\]](#figure 1){reference-type="ref" reference="figure 1"} (right) for $A_i$.

*(Figure 1. Left (figure_0.pdf): a decorated triangle $\{ijk\}$ with the radius $r_i$, the face-circle center $c_{ijk}$, and the angles $\alpha_{ik}^j$, $\alpha_{ij}^k$. Right (figure_3.pdf): the region $A_i$ at the vertex $i$ with the centers $c_{ijk}$ of the adjacent triangles.)*

**Definition 1**. Suppose $(S,V, dist_S, r)$ is a decorated PE surface and $\mathcal{T}$ is a weighted Delaunay triangulation of $(S,V, dist_S, r)$. The discrete Gaussian curvature $K_i$ at the vertex $i\in V$ is the quotient of the angle defect $W_i$ and the area $A_i$ of the dual cell at the vertex $i\in V$, i.e., $$\label{Eq: K_i} K_i=\frac{W_i}{A_i}.$$

**Remark 2**. In the literature, the discrete curvature is usually defined by the angle defect $W$ in ([\[Eq: curvature W\]](#Eq: curvature W){reference-type="ref" reference="Eq: curvature W"}). However, the angle defect $W$ is scaling invariant and does not approximate the smooth Gaussian curvature pointwise on smooth surfaces as the triangulations of the surface become finer and finer. This is supported by the discussions in [@BPS; @GX2]. In contrast, the discrete Gaussian curvature $K$ in ([\[Eq: K_i\]](#Eq: K_i){reference-type="ref" reference="Eq: K_i"}) scales by a factor $\frac{1}{u^2}$ upon a global rescaling of the decorated PE metric by a factor $u$. This property parallels that of the smooth Gaussian curvature on surfaces. On the other hand, the definition of the discrete Gaussian curvature $K_i$ coincides with the original definition of the Gaussian curvature on smooth surfaces. This implies that the discrete Gaussian curvature $K_i$ is a good candidate for a discretization of the smooth Gaussian curvature on surfaces.

**Remark 3**. According to Definition [Definition 1](#Def: new curvature){reference-type="ref" reference="Def: new curvature"}, the discrete Gaussian curvature $K_i$ defined by ([\[Eq: K_i\]](#Eq: K_i){reference-type="ref" reference="Eq: K_i"}) seems to depend on the choice of weighted Delaunay triangulations of the decorated PE surface $(S, V, dist_S, r)$. We will show that $K_i$ is an intrinsic geometric invariant of the decorated PE surface $(S,V, dist_S, r)$ in the sense that it is independent of the weighted Delaunay triangulations of $(S, V, dist_S, r)$. Since the angle defect $W_i$ defined by ([\[Eq: curvature W\]](#Eq: curvature W){reference-type="ref" reference="Eq: curvature W"}) is an intrinsic geometric invariant of a decorated PE surface, we just need to prove that $A_i$ is independent of the choice of weighted Delaunay triangulations.
This is true by Lemma [Lemma 14](#Lem: independent){reference-type="ref" reference="Lem: independent"}. **Remark 4**. The weighted Delaunay triangulation is a natural generalization of the classical Delaunay triangulation. When the weighted Delaunay triangulation is reduced to the classical Delaunay triangulation, i.e. $r_i=0$ for all $i\in V$, the area $A_i$ is exactly twice the area of the Voronoi cell at the vertex $i$. Thus the area $A_i$ is a generalization of the area of the Voronoi cell at the vertex $i$. As a result, the discrete Gaussian curvature in Definition [Definition 1](#Def: new curvature){reference-type="ref" reference="Def: new curvature"} generalizes Kouřimská's definition of discrete Gaussian curvature in [@Kourimska; @Kourimska; @Thesis]. The discrete Yamabe problem for a decorated PE metric $(dist_S,r)$ on $(S,V)$ asks if there exists a discrete conformal equivalent decorated PE metric on $(S,V)$ with constant discrete Gaussian curvature. The following discrete uniformization theorem solves this problem affirmatively for the discrete Gaussian curvature $K$ in Definition [Definition 1](#Def: new curvature){reference-type="ref" reference="Def: new curvature"}. **Theorem 5**. For any decorated PE metric $(dist_S,r)$ on a marked surface $(S,V)$ with Euler number $\chi(S)\leq 0$, there is a discrete conformal equivalent decorated PE metric with constant discrete Gaussian curvature $K$. By the relationships of the discrete Gaussian curvature $K$ and the classical discrete Gaussian curvature $W$, the case $\chi(S)=0$ in Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} is covered by Bobenko-Lutz's work [@BL]. Therefore, we just need to prove the case $\chi(S)<0$ in Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"}. **Remark 6**. The discrete Yamabe problem on surfaces for different types of discrete conformal structures with respect to the classical discrete Gaussian curvature $W$ has been extensively studied in the literature. For Thurston's circle packings on surfaces, the solution of discrete Yamabe problem gives rise to the famous Koebe-Andreev-Thurston Theorem. See also the work of Beardon-Stephenson [@BS; @uniformization] for the discrete uniformization theorems for circle packings on surfaces. For the vertex scalings introduced by Luo [@Luo1] on surfaces, Gu-Luo-Sun-Wu [@Gu1], Gu-Guo-Luo-Sun-Wu [@Gu2], Springborn [@Springborn] and Izmestiev-Prosanov-Wu [@IPW] give nice answers to this problem in different background geometries. Recently, Bobenko-Lutz [@BL] established the discrete conformal theory for decorated PE metrics and prove the corresponding discrete uniformization theorem. Since Bobenko-Lutz's discrete conformal theory of decorated PE metrics also applies to the Euclidean vertex scalings and thus generalizes Gu-Luo-Sun-Wu's result [@Gu1] and Springborn's result [@Springborn], Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} also generalizes Kouřimská's results in [@Kourimska; @Kourimska; @Thesis]. It should be mentioned that Kouřimská [@Kourimska; @Kourimska; @Thesis] constructed counterexamples to the uniqueness of PE metrics with constant discrete Gaussian curvatures. We conjecture that the decorated PE metric with constant discrete Gaussian curvature $K$ in Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} is not unique. 
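For orientation, it is worth recording what the constant value in Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} has to be. Assuming two standard facts that are not restated in this paper, namely the discrete Gauss--Bonnet formula $\sum_{i\in V}W_i=2\pi\chi(S)$ for closed PE surfaces and the observation that the signed cells of each decorated triangle add up to the area of that triangle (so that $\sum_{i\in V}A_i=2\operatorname{Area}(S,dist_S)$), summing $W_i=KA_i$ over $V$ shows that a decorated PE metric with constant discrete Gaussian curvature $K$ necessarily satisfies $$K \;=\; \frac{\sum_{i\in V}W_i}{\sum_{i\in V}A_i} \;=\; \frac{\pi\chi(S)}{\operatorname{Area}(S,dist_S)}.$$ In particular, the sign of the constant curvature agrees with the sign of the Euler number $\chi(S)$.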
The main tools for the proof of Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} are Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces [@BL] and variational principles with constraints. The main ideas of the paper come from reading of Bobenko-Lutz [@BL] and Kouřimská [@Kourimska; @Kourimska; @Thesis]. The paper is organized as follows. In Section [2](#Sec: Preliminaries){reference-type="ref" reference="Sec: Preliminaries"}, we briefly recall Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces. Then we show that $A_i$ is independent of the choice of weighted Delaunay triangulations, i.e., Lemma [Lemma 14](#Lem: independent){reference-type="ref" reference="Lem: independent"}. We also give some notations and a variational characterization of the area $A_i^{jk}$. In this section, we also extend the energy function $\mathcal{E}$ and the area function $A_{tot}$. In Section [3](#Sec: proof the main theorem){reference-type="ref" reference="Sec: proof the main theorem"}, we translate Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} into an optimization problem with constraints, i.e., Lemma [Lemma 23](#Lem: minimum lies at the boundary){reference-type="ref" reference="Lem: minimum lies at the boundary"}. Using the classical result from calculus, i.e., Theorem [Theorem 24](#Thm: calculus){reference-type="ref" reference="Thm: calculus"}, we translate Lemma [Lemma 23](#Lem: minimum lies at the boundary){reference-type="ref" reference="Lem: minimum lies at the boundary"} into Theorem [Theorem 25](#Thm: key){reference-type="ref" reference="Thm: key"}. By analysing the limit behaviour of sequences of discrete conformal factors, we get an asymptotic expression of the function $\mathcal{E}$, i.e., Lemma [Lemma 33](#Lem: E decomposition){reference-type="ref" reference="Lem: E decomposition"}. In the end, we prove Theorem [Theorem 25](#Thm: key){reference-type="ref" reference="Thm: key"}.\ \ **Acknowledgements**\ The first author thanks Professor Feng Luo for his invitation to the workshop "Discrete and Computational Geometry, Shape Analysis, and Applications\" taking place at Rutgers University, New Brunswick from May 19th to May 21st, 2023. The first author also thanks Carl O. R. Lutz for helpful communications during the workshop. # Preliminaries on decorated PE surfaces {#Sec: Preliminaries} ## Discrete conformal equivalence and Bobenko-Lutz's discrete conformal theory In this subsection, we briefly recall Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces. Please refer to Bobenko-Lutz's original work [@BL] for more details on this. The PE metric $dist_{S}$ on a PE surface with a geodesic triangulation defines a length map $l: E\rightarrow \mathbb{R}_{>0}$ such that $l_{ij}, l_{ik}, l_{jk}$ satisfy the triangle inequalities for any triangle $\{ijk\}\in F$. Conversely, given a function $l: E\rightarrow \mathbb{R}_{>0}$ satisfying the triangle inequalities for any face $\{ijk\}\in F$, one can construct a PE metric on a triangulated surface by isometrically gluing Euclidean triangles along edges in pairs. Therefore, we use $l: E\rightarrow \mathbb{R}_{>0}$ to denote a PE metric and use $(l,r)$ to denote a decorated PE metric on a triangulated surface $(S,V,\mathcal{T})$. **Definition 7** ([@BL], Proposition 2.2). Let $\mathcal{T}$ be a triangulation of a marked surface $(S,V)$. 
Two decorated PE metrics $(l,r)$ and $(\widetilde{l},\widetilde{r})$ on $(S,V, \mathcal{T})$ are discrete conformal equivalent if and only if there exists a discrete conformal factor $u\in \mathbb{R}^V$ such that $$\label{Eq: DCE1} \widetilde{r}_i=e^{u_i}r_i,$$ $$\label{Eq: DCE2} \widetilde{l}_{ij}^2 =(e^{2u_i}-e^{u_i+u_j})r^2_i +(e^{2u_j}-e^{u_i+u_j})r^2_j +e^{u_i+u_j}l_{ij}^2$$ for all $\{ij\}\in E$. **Remark 8**. Note that the inversive distance $$\label{Eq: inversive distance} I_{ij}=\frac{l^2_{ij}-r^2_i-r^2_j}{2r_ir_j}$$ between two vertex-circles is invariant under Möbius transformations [@Coxeter]. Combining ([\[Eq: DCE1\]](#Eq: DCE1){reference-type="ref" reference="Eq: DCE1"}) and ([\[Eq: DCE2\]](#Eq: DCE2){reference-type="ref" reference="Eq: DCE2"}) gives $I=\widetilde{I}$. Since each pair of vertex-circles is required to be separated, we have $I>1$. Therefore, Definition [Definition 7](#Def: DCE){reference-type="ref" reference="Def: DCE"} can be regarded as a special case of the inversive distance circle packings introduced by Bowers-Stephenson [@BS]. One can refer to [@CLXZ; @Guo; @Luo; @GT; @Xu; @AIM; @Xu; @MRL] for more properties of the inversive distance circle packings on triangulated surfaces. In general, the existence of decorated PE metrics with constant discrete Gaussian curvatures on triangulated surfaces can not be guaranteed if the triangulation is fixed. In the following, we work with a generalization of the discrete conformal equivalence in Definition [Definition 7](#Def: DCE){reference-type="ref" reference="Def: DCE"}, introduced by Bobenko-Lutz [@BL], which allows the triangulation of the marked surface to be changed under the weighted Delaunay condition. **Definition 9** ([@BL], Definition 4.11). Two decorated PE metrics $(dist_{S},r)$ and $(\widetilde{dist}_{S},\widetilde{r})$ on the marked surface $(S,V)$ are discrete conformal equivalent if there is a sequence of triangulated decorated PE surfaces $(\mathcal{T}^0,l^0,r^0),...,(\mathcal{T}^N,l^N,r^N)$ such that \(i\) : the decorated PE metric of $(\mathcal{T}^0,l^0,r^0)$ is $(dist_{S},r)$ and the decorated PE metric of $(\mathcal{T}^N,l^N,r^N)$ is $(\widetilde{dist}_{S},\widetilde{r})$, \(ii\) : each $\mathcal{T}^n$ is a weighted Delaunay triangulation of the decorated PE surface $(\mathcal{T}^n,l^n,r^n)$, \(iii\) : if $\mathcal{T}^n=\mathcal{T}^{n+1}$, then there is a discrete conformal factor $u\in \mathbb{R}^V$ such that $(\mathcal{T}^n,l^n,r^n)$ and $(\mathcal{T}^{n+1},l^{n+1},r^{n+1})$ are related by ([\[Eq: DCE1\]](#Eq: DCE1){reference-type="ref" reference="Eq: DCE1"}) and ([\[Eq: DCE2\]](#Eq: DCE2){reference-type="ref" reference="Eq: DCE2"}), \(iv\) : if $\mathcal{T}^n\neq\mathcal{T}^{n+1}$, then $\mathcal{T}^n$ and $\mathcal{T}^{n+1}$ are two different weighted Delaunay triangulations of the same decorated PE surface. Definition [Definition 9](#Def: GDCE){reference-type="ref" reference="Def: GDCE"} defines an equivalence relationship for decorated PE metrics on a marked surface. The equivalence class of a decorated PE metric $(dist_S,r)$ on $(S,V)$ is also called as the discrete conformal class of $(dist_S,r)$ and denoted by $\mathcal{D}(dist_S,r)$. **Lemma 10** ([@BL]). The discrete conformal class $\mathcal{D}(dist_S,r)$ of a decorated PE metric $(dist_S,r)$ on the marked surface $(S,V)$ is parameterized by $\mathbb{R}^V=\{u: V\rightarrow \mathbb{R}\}$. 
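The invariance of the inversive distance noted in Remark 8 is easy to check numerically. The following Python sketch (with arbitrary made-up radii, inversive distance and discrete conformal factors) applies the change of variables from Definition 7 to a single decorated edge and verifies that $I_{ij}=\widetilde{I}_{ij}$ up to floating-point error.

```python
import math

def inversive_distance(l, ri, rj):
    # I_ij = (l^2 - r_i^2 - r_j^2) / (2 r_i r_j).
    return (l * l - ri * ri - rj * rj) / (2 * ri * rj)

def conformal_change(l, ri, rj, ui, uj):
    # r_i -> e^{u_i} r_i and the edge-length rule of Definition 7.
    ri_new = math.exp(ui) * ri
    rj_new = math.exp(uj) * rj
    l2_new = ((math.exp(2 * ui) - math.exp(ui + uj)) * ri * ri
              + (math.exp(2 * uj) - math.exp(ui + uj)) * rj * rj
              + math.exp(ui + uj) * l * l)
    return math.sqrt(l2_new), ri_new, rj_new

# Made-up decorated edge with separated vertex-circles, i.e. I_ij > 1.
ri, rj, I = 0.7, 1.1, 2.5
l = math.sqrt(ri * ri + rj * rj + 2 * I * ri * rj)

l_new, ri_new, rj_new = conformal_change(l, ri, rj, ui=0.4, uj=-0.9)
assert math.isclose(inversive_distance(l, ri, rj),
                    inversive_distance(l_new, ri_new, rj_new))
print(l, "->", l_new)
```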
For simplicity, for any $(\widetilde{dist}_S,\widetilde{r})\in \mathcal{D}(dist_S,r)$, we denote it by $(dist_S(u),r(u))$ for some $u\in \mathbb{R}^V$. Set $$\mathcal{C}_\mathcal{T}(dist_{S},r) =\{u\in \mathbb{R}^V |\ \mathcal{T}\ \text{is a weighted Delaunay triangulation of}\ (S,V,dist_S(u),r(u))\}.$$ For any decorated PE surface, there exists a unique complete hyperbolic surface $\Sigma_g$, i.e., the hyperbolic surface induced by any triangular refinement of its unique weighted Delaunay tessellation. It is homeomorphic to $S\backslash V$ and called as the fundamental discrete conformal invariant of the decorated PE metric $(dist_{S},r)$. The decoration of $\Sigma_g$ is denoted by $\omega:=e^{h}$ and here the height $h$ is related to $u$ by $dh_i=-du_i$. The canonical weighted Delaunay tessellation $\mathcal{T}$ of $\Sigma_g$ is denoted by $\mathcal{T}_{\Sigma_g}^\omega$. Bobenko-Lutz [@BL] defined the following set $$\mathcal{D}_\mathcal{T}(\Sigma_g) =\{\omega\in \mathbb{R}_{>0}^V|\mathcal{T}\ \text{refines}\ \mathcal{T}_{\Sigma_g}^\omega\}$$ and proved the following proposition. **Proposition 11** ([@BL], Proposition 4.3). Given a complete hyperbolic surface with ends $\Sigma_g$. \(1\) : Each $\mathcal{D}_{\mathcal{T}_n}(\Sigma_g)$ is either empty or the intersection of $\mathbb{R}^V_{>0}$ with a closed polyhedral cone. \(2\) : There is only a finite number of geodesic tessellations $\mathcal{T}_1,...,\mathcal{T}_N$ of $\Sigma_g$ such that $\mathcal{D}_{\mathcal{T}_n}(\Sigma_g)$ $(n=1,...,N)$ is non-empty. In particular, $\mathbb{R}^V_{>0} =\bigcup_{n=1}^N\mathcal{D}_{\mathcal{T}_n}(\Sigma_g)$. Let $P$ be the polyhedral cusp corresponding to the triangulated surface $(S,V,\mathcal{T})$ with fundamental discrete conformal invariant $\Sigma_g$. The polyhedral cusp is convex if and only if $\mathcal{T}$ is a weighted Delaunay triangulation. The set of all heights $h$ of convex polyhedral cusps over the triangulated hyperbolic surface $(\Sigma_g,\mathcal{T})$ is denoted by $\mathcal{P}_\mathcal{T}(\Sigma_g)\subseteq \mathbb{R}^V$. **Proposition 12** ([@BL], Proposition 4.9). Given a decorated PE metric $(dist_{S},r)$ on the marked surface $(S,V)$. Then $\mathcal{C}_\mathcal{T}(dist_{S},r)$, $\mathcal{P}_\mathcal{T}(\Sigma_g)$ and $\mathcal{D}_\mathcal{T}(\Sigma_g)$ are homeomorphic. Combining Proposition [Proposition 11](#Prop: finite decomposition){reference-type="ref" reference="Prop: finite decomposition"} and Proposition [Proposition 12](#Prop: some spaces){reference-type="ref" reference="Prop: some spaces"} gives the following result. **Lemma 13** ([@BL]). The set $$J=\{\mathcal{T}| \mathcal{C}_{\mathcal{T}}(dist_{S},r)\ \text{has non-empty interior in}\ \mathbb{R}^V\}$$ is a finite set, $\mathbb{R}^V=\cup_{\mathcal{T}_i\in J}\mathcal{C}_{\mathcal{T}_i}(dist_{S},r)$, and each $\mathcal{C}_{\mathcal{T}_i}(dist_{S},r)$ is homeomorphic to a polyhedral cone (with its apex removed) and its interior is homeomorphic to $\mathbb{R}^V$. ## A decorated triangle Denote $r_{ij}$ as half of the distance of the two intersection points of the face-circle $C_{ijk}$ and the edge $\{ij\}$. Denote $h_{ij}^k$ as the signed distance of the center $c_{ijk}$ to the edge $\{ij\}$, which is defined to be positive if the center is on the same side of the line determined by $\{ij\}$ as the triangle $\{ijk\}$ and negative otherwise (or zero if the center is on the line). Note that $h_{ij}^k$ is symmetric in the indices $i$ and $j$. 
By Figure [\[figure 2\]](#figure 2){reference-type="ref" reference="figure 2"}, we have $$\label{Eq: F6} h_{ij}^k=r_{ij}\cot\alpha_{ij}^k.$$ Since $r_{ij}>0$ and $\alpha_{ij}^k\in (0,\pi)$, if $h_{ij}^k<0$, then $\alpha_{ij}^k\in (\frac{\pi}{2},\pi)$. The equality ([\[Eq: F6\]](#Eq: F6){reference-type="ref" reference="Eq: F6"}) implies that ([\[Eq: F7\]](#Eq: F7){reference-type="ref" reference="Eq: F7"}) is equivalent to $$\label{Eq: F9} h_{ij}^k+h_{ij}^l\geq 0$$ for any adjacent triangles $\{ijk\}$ and $\{ijl\}$ sharing a common edge $\{ij\}$. Therefore, the inequality ([\[Eq: F9\]](#Eq: F9){reference-type="ref" reference="Eq: F9"}) also characterizes a weighted Delaunay triangulation $\mathcal{T}$ for a decorated PE metric $(l,r)$ on $(S,V)$. Due to this fact, the inequality ([\[Eq: F9\]](#Eq: F9){reference-type="ref" reference="Eq: F9"}) is usually used to define the weighted Delaunay triangulations of decorated PE surfaces. See [@CLXZ; @Glickenstein; @DCG] and others for example. Then $A_i^{jk}$ can be written as $$\label{Eq: Area 1} A_i^{jk}=\frac{1}{2}l_{ij}h_{ij}^k+\frac{1}{2}l_{ki}h_{ki}^j.$$ Since $h_{ij}^k$ and $h_{ki}^j$ are signed distances, $A_i^{jk}$ is an algebraic sum of triangle areas, i.e., a signed area. **Lemma 14**. The area $A_i$ is independent of the choice of weighted Delaunay triangulations of a decorated PE surface. Suppose a decorated quadrilateral $\{ijlk\}$ lies in a face of the weighted Delaunay tessellation of a decorated PE surface. Then there exist two weighted Delaunay triangulations $\mathcal{T}_1$ and $\mathcal{T}_2$ of the decorated PE surface such that an edge $\{jk\}$ in $\mathcal{T}_1$ is flipped to another edge $\{il\}$ in $\mathcal{T}_2$. Please refer to Figure [\[figure 3\]](#figure 3){reference-type="ref" reference="figure 3"}. We just need to prove that the signed area $A^{jk}_i$ in $\mathcal{T}_1$ is equal to the signed area $A_i^{kl}+A_i^{jl}$ in $\mathcal{T}_2$. In $\mathcal{T}_1$, the signed area at the vertex $i$ in $\{ijlk\}$ is $A_i^{jk}=\frac{1}{2}l_{ki}h_{ki}^j+\frac{1}{2}l_{ij}h_{ij}^k$. In $\mathcal{T}_2$, the signed area at the vertex $i$ in $\{ijlk\}$ is $$\begin{aligned} A_i^{kl}+A_i^{jl} &=\frac{1}{2}l_{ki}h_{ki}^l+\frac{1}{2}l_{il}h_{il}^k +\frac{1}{2}l_{ij}h_{ij}^l+\frac{1}{2}l_{il}h_{il}^j\\ &=\frac{1}{2}l_{ki}h_{ki}^l+\frac{1}{2}l_{ij}h_{ij}^l +\frac{1}{2}l_{il}(h_{il}^k+h_{il}^j). \end{aligned}$$ Since $\mathcal{T}_1$ and $\mathcal{T}_2$ are two weighted Delaunay triangulations of the same decorated PE metric on $(S,V)$, we have $h_{il}^k+h_{il}^j=0$ by ([\[Eq: F9\]](#Eq: F9){reference-type="ref" reference="Eq: F9"}). One can also refer to [@BL] (Proposition 3.4) for this. Moreover, $h_{ki}^l=h_{ki}^j$ and $h_{ij}^l=h_{ij}^k$. Then $A_i^{kl}+A_i^{jl}=A_i^{jk}$. Denote $c_{ij}$ as the center of the edge $\{ij\}$, which is obtained by projecting the center $c_{ijk}$ to the line determined by $\{ij\}$. Denote $d_{ij}$ as the signed distance of $c_{ij}$ to the vertex $i$, which is positive if $c_{ij}$ is on the same side of $i$ as $j$ along the line determined by $\{ij\}$ and negative otherwise (or zero if $c_{ij}$ coincides with $i$). In general, $d_{ij}\neq d_{ji}$. 
Since the face-circle $C_{ijk}$ is orthogonal to the vertex-circle at the vertex $j$, we have $$\label{Eq: F13} r_{ijk}^2+r_j^2=d_{jk}^2+(h^i_{jk})^2=d_{ji}^2+(h^k_{ij})^2.$$ Please refer to Figure [\[figure 2\]](#figure 2){reference-type="ref" reference="figure 2"} for this. Moreover, we have the following explicit expressions of $d_{ij}$ and $h_{ij}^k$ due to Glickenstein [@Glickenstein; @JDG], i.e., $$\label{Eq: d ij} d_{ij}=\frac{r_i^2+r_ir_jI_{ij}}{l_{ij}},$$ and $$\label{Eq: h ijk} h_{ij}^k=\frac{d_{ik}-d_{ij}\cos \theta_{jk}^i}{\sin \theta_{jk}^i},$$ where $\theta^i_{jk}$ is the inner angle of the triangle $\{ijk\}$ at the vertex $i$. The equality ([\[Eq: d ij\]](#Eq: d ij){reference-type="ref" reference="Eq: d ij"}) implies that $d_{ij}\in \mathbb{R}$ is defined independently of the existence of the center $c_{ijk}$. Since each pair of vertex-circles is required to be separated, we have $I>1$. This implies $$d_{rs}>0,\ \ \forall\{r,s\}\subseteq\{i,j,k\}.$$ The following lemma gives some useful formulas. **Lemma 15** ([@Guo; @Xu; @AIM; @Xu; @MRL]). Let $\{ijk\}$ be a decorated triangle with the edge lengths $l_{ij}, l_{jk}, l_{ki}$ defined by ([\[Eq: DCE2\]](#Eq: DCE2){reference-type="ref" reference="Eq: DCE2"}). If the decorated triangle $\{ijk\}$ is non-degenerate, then $$\label{Eq: angle deform} \frac{\partial \theta_{jk}^i}{\partial u_j} =\frac{\partial \theta_{ki}^j}{\partial u_i} =\frac{h_{ij}^k}{l_{ij}}, \ \ \ \frac{\partial \theta_{jk}^i}{\partial u_i} =-\frac{\partial \theta_{jk}^i}{\partial u_j} -\frac{\partial \theta_{jk}^i}{\partial u_k},$$ where $$\label{Eq: h_ijk} h_{ij}^k=\frac{r_i^2r_j^2r_k^2}{2A_{ijk}l_{ij}} [\kappa_k^2(1-I_k^2)+\kappa_j\kappa_k\gamma_{i} +\kappa_i\kappa_k\gamma_{j}] =\frac{r_i^2r_j^2r_k^2}{2A_{ijk}l_{ij}}\kappa_kh_k$$ with $A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin \theta_{ki}^j$, $\gamma_i=I_{jk}+I_{ij}I_{ki}$, $\kappa_i:=r_i^{-1}$ and $$\label{Eq: h_i} \begin{aligned} h_i=\kappa_i(1-I_{jk}^2)+\kappa_j\gamma_{k}+\kappa_k\gamma_{j}. \end{aligned}$$ As a direct application of Lemma [Lemma 15](#Lem: useful 1){reference-type="ref" reference="Lem: useful 1"}, we have the following result. **Lemma 16**. 
The area $A_{ijk}(u)$ of each decorated triangle $\{ijk\}\in F$ is an analytic function with $$\label{Eq: A_ijk u_i} \frac{\partial A_{ijk}}{\partial u_i}=A_i^{jk}.$$ By ([\[Eq: h ijk\]](#Eq: h ijk){reference-type="ref" reference="Eq: h ijk"}), we have $$h^k_{ij}=\frac{d_{ik}-d_{ij}\cos \theta_{jk}^i}{\sin \theta_{jk}^i}, \quad h_{ki}^j=\frac{d_{ij}-d_{ik}\cos \theta_{jk}^i}{\sin \theta_{jk}^i}.$$ Direct calculations give $$\label{Eq: F11} h_{ki}^j=d_{ij}\sin \theta_{jk}^i-h^k_{ij}\cos \theta_{jk}^i.$$ Combining ([\[Eq: DCE2\]](#Eq: DCE2){reference-type="ref" reference="Eq: DCE2"}), ([\[Eq: inversive distance\]](#Eq: inversive distance){reference-type="ref" reference="Eq: inversive distance"}) and ([\[Eq: d ij\]](#Eq: d ij){reference-type="ref" reference="Eq: d ij"}), it is easy to check that $$\label{Eq: F8} \frac{\partial l_{ij}}{\partial u_i}=d_{ij}.$$ Differentiating $A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin \theta_{ki}^j$ with respect to $u_i$ gives $$\begin{aligned} \frac{\partial A_{ijk}}{\partial u_i} &=\frac{1}{2}\frac{\partial l_{ij}}{\partial u_i}l_{jk}\sin \theta_{ki}^j +\frac{1}{2}l_{ij}l_{jk}\cos \theta_{ki}^j \frac{\partial \theta_{ki}^j}{\partial u_i}\\ &=\frac{1}{2}d_{ij}l_{jk}\sin \theta_{ki}^j +\frac{1}{2}l_{ij}l_{jk}\cos \theta_{ki}^j \frac{h^k_{ij}}{l_{ij}}\\ &=\frac{1}{2}d_{ij}l_{ki}\sin \theta_{jk}^i +\frac{1}{2}l_{jk}\cos \theta_{ki}^j h^k_{ij}\\ &=\frac{1}{2}d_{ij}l_{ki}\sin \theta_{jk}^i +\frac{1}{2}(l_{ij}-l_{ki}\cos \theta_{jk}^i)h^k_{ij}\\ &=\frac{1}{2}l_{ki}(d_{ij}\sin \theta_{jk}^i-h^k_{ij}\cos \theta_{jk}^i)+\frac{1}{2}l_{ij}h^k_{ij}\\ &=\frac{1}{2}l_{ki}h_{ki}^j+\frac{1}{2}l_{ij}h_{ij}^k\\ &=A_i^{jk}, \end{aligned}$$ where the second equality uses ([\[Eq: F8\]](#Eq: F8){reference-type="ref" reference="Eq: F8"}) and ([\[Eq: angle deform\]](#Eq: angle deform){reference-type="ref" reference="Eq: angle deform"}), the third equality uses the sine laws and the penultimate line uses ([\[Eq: F11\]](#Eq: F11){reference-type="ref" reference="Eq: F11"}). **Remark 17**. One can refer to Glickenstein [@Glickenstein; @JDG] for a nice geometric explanation of the result in Lemma [Lemma 16](#Lem: A_ijk u_i){reference-type="ref" reference="Lem: A_ijk u_i"}. ## The extended energy function and the extended area function There exists a geometric relationship between the decorated triangle $\{ijk\}$ and the geometry of hyperbolic polyhedra in $3$-dimensional hyperbolic space. Specially, there is a generalized hyperbolic tetrahedra in $\mathbb{H}^3$ with one ideal vertex and three hyper-ideal vertices corresponding to a decorated triangle $\{ijk\}$. Please refer to [@BL] for more details on this fact. 
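Before turning to the volume formula, here is a small numerical sketch in Python of the quantities from the previous subsection (made-up data, illustration only): it computes $d_{ij}$, $h_{ij}^k$ and the signed cell areas $A_i^{jk}$ for one decorated triangle and checks two consequences of the formulas above, namely $d_{ij}+d_{ji}=l_{ij}$ and $A_i^{jk}+A_j^{ki}+A_k^{ij}=2A_{ijk}$; the latter is consistent with Lemma 16, since $A_{ijk}$ is homogeneous of degree two in $e^{u}$.

```python
import math

# Made-up decoration: vertex radii and pairwise inversive distances (> 1).
r = {"i": 0.6, "j": 0.9, "k": 0.5}
I = {("i", "j"): 2.0, ("j", "k"): 1.6, ("k", "i"): 2.4}

def inv_dist(a, b):
    return I.get((a, b), I.get((b, a)))

def edge_len(a, b):
    # l_ab^2 = r_a^2 + r_b^2 + 2 I_ab r_a r_b.
    return math.sqrt(r[a] ** 2 + r[b] ** 2 + 2 * inv_dist(a, b) * r[a] * r[b])

def inner_angle(a, b, c):
    # Inner angle of the triangle at the vertex a, by the cosine law.
    la_b, la_c, lb_c = edge_len(a, b), edge_len(a, c), edge_len(b, c)
    return math.acos((la_b ** 2 + la_c ** 2 - lb_c ** 2) / (2 * la_b * la_c))

def d(a, b):
    # Signed distance from a to the projected center c_{ab} (Glickenstein's formula).
    return (r[a] ** 2 + r[a] * r[b] * inv_dist(a, b)) / edge_len(a, b)

def h(a, b, c):
    # Signed distance from the center c_{abc} to the edge {a, b}.
    th = inner_angle(a, b, c)
    return (d(a, c) - d(a, b) * math.cos(th)) / math.sin(th)

def A_cell(a, b, c):
    # Signed area of the two sub-triangles at the vertex a in {a, b, c}.
    return 0.5 * edge_len(a, b) * h(a, b, c) + 0.5 * edge_len(a, c) * h(a, c, b)

# d_ij + d_ji = l_ij follows directly from the formula for d.
assert math.isclose(d("i", "j") + d("j", "i"), edge_len("i", "j"))

# The signed cell areas of the three vertices add up to twice the triangle area.
area = 0.5 * edge_len("i", "j") * edge_len("j", "k") * math.sin(inner_angle("j", "i", "k"))
total = A_cell("i", "j", "k") + A_cell("j", "k", "i") + A_cell("k", "i", "j")
assert math.isclose(total, 2 * area)
print(d("i", "j"), h("i", "j", "k"), A_cell("i", "j", "k"))
```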
Springborn [@Sp1] found the following explicit formula for the truncated volume $\mathrm{Vol}(ijk)$ of this generalized hyperbolic tetrahedra $$\label{Eq: volume} \begin{aligned} 2\mathrm{Vol}(ijk) =&\mathbb{L}(\theta_{jk}^i)+\mathbb{L}(\theta_{ki}^j) +\mathbb{L}(\theta_{ij}^k)\\ &+\mathbb{L}(\frac{\pi+\alpha_{ki}^j+\alpha_{ij}^k-\theta_{jk}^i}{2}) +\mathbb{L}(\frac{\pi+\alpha_{ki}^j-\alpha_{ij}^k-\theta_{jk}^i}{2})\\ &+\mathbb{L}(\frac{\pi-\alpha_{ki}^j+\alpha_{ij}^k-\theta_{jk}^i}{2}) +\mathbb{L}(\frac{\pi-\alpha_{ki}^j-\alpha_{ij}^k-\theta_{jk}^i}{2})\\ &+\mathbb{L}(\frac{\pi+\alpha_{jk}^i+\alpha_{ij}^k-\theta_{ki}^j}{2}) +\mathbb{L}(\frac{\pi+\alpha_{jk}^i-\alpha_{ij}^k-\theta_{ki}^j}{2})\\ &+\mathbb{L}(\frac{\pi-\alpha_{jk}^i+\alpha_{ij}^k-\theta_{ki}^j}{2}) +\mathbb{L}(\frac{\pi-\alpha_{jk}^i-\alpha_{ij}^k-\theta_{ki}^j}{2})\\ &+\mathbb{L}(\frac{\pi+\alpha_{jk}^i+\alpha_{ki}^j-\theta_{ij}^k}{2}) +\mathbb{L}(\frac{\pi+\alpha_{jk}^i-\alpha_{ki}^j-\theta_{ij}^k}{2})\\ &+\mathbb{L}(\frac{\pi-\alpha_{jk}^i+\alpha_{ki}^j-\theta_{ij}^k}{2}) +\mathbb{L}(\frac{\pi-\alpha_{jk}^i-\alpha_{ki}^j-\theta_{ij}^k}{2}), \end{aligned}$$ where $$\label{Eq: Lobachevsky function} \mathbb{L}(x)=-\int_0^x \log|2\sin(t)|dt$$ is Milnor's Lobachevsky function. Milnor's Lobachevsky function is bounded, odd, $\pi$-periodic and smooth except at integer multiples of $\pi$. Please refer to [@Milnor; @Rat] for more information on Milnor's Lobachevsky function $\mathbb{L}(x)$. Set $$\label{Eq: F ijk} \begin{aligned} F_{ijk}(u_i,u_j,u_k) =&-2\mathrm{Vol}(ijk)+\theta_{jk}^iu_i+\theta_{ki}^ju_j+\theta_{ij}^ku_k\\ &+(\frac{\pi}{2}-\alpha_{ij}^k)\lambda_{ij} +(\frac{\pi}{2}-\alpha_{ki}^j)\lambda_{ki} +(\frac{\pi}{2}-\alpha_{jk}^i)\lambda_{jk}, \end{aligned}$$ where $\cosh \lambda_{ij}=I_{ij}$. Then $\nabla F_{ijk}=(\theta_{jk}^i,\theta_{ki}^j, \theta_{ij}^k)$ and $$\label{Eq: property of F ijk} F_{ijk}((u_i,u_j,u_k)+c(1,1,1)) =F_{ijk}(u_i,u_j,u_k)+c\pi$$ for $c\in \mathbb{R}$. Furthermore, on a decorated PE surface $(S,V,l,r)$ with a weighted Delaunay triangulation $\mathcal{T}$, Bobenko-Lutz [@BL] defined the following function $$\label{Eq: F1} \mathcal{H}_{\mathcal{T}}(u) =\sum_{\{ijk\}\in F}F_{ijk}(u_i,u_j,u_k) =-2\mathrm{Vol}(P_h)+\sum_{i\in V}\theta_iu_i+\sum_{\{ij\}\in E_{\mathcal{T}}}(\pi-\alpha_{ij})\lambda_{ij},$$ where $P_h$ is the convex polyhedral cusp defined by the heights $h\in \mathbb{R}^V$, $\theta_i=\sum_{\{ijk\}\in F_\mathcal{T}}\theta^i_{jk}$ and $\alpha_{ij}=\alpha_{ij}^k+\alpha_{ij}^l$. Note that the function $\mathcal{H}_{\mathcal{T}}(u)$ defined by ([\[Eq: F1\]](#Eq: F1){reference-type="ref" reference="Eq: F1"}) differs from its original definition in [@BL] (Equation 4-9) by some constant. By ([\[Eq: property of F ijk\]](#Eq: property of F ijk){reference-type="ref" reference="Eq: property of F ijk"}), for $c\in \mathbb{R}$, we have $$\label{Eq: property of H ijk} \mathcal{H}_{\mathcal{T}}(u+c\mathbf{1}) =\mathcal{H}_{\mathcal{T}}(u)+c|F|\pi.$$ Using the function $\mathcal{H}_{\mathcal{T}}$, we define the following energy function $$\mathcal{E}_{\mathcal{T}}(u) =-\mathcal{H}_{\mathcal{T}}(u)+2\pi\sum_{i\in V}u_i,$$ which is well-defined on $\mathcal{C}_\mathcal{T}(dist_{S},r)$ with $\nabla_{u_i} \mathcal{E}_{\mathcal{T}} =2\pi-\sum_{\{ijk\}\in F_\mathcal{T}}\theta^i_{jk}=W_i$. 
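Milnor's Lobachevsky function has no elementary closed form, but it is straightforward to evaluate numerically. The following Python sketch (a plain midpoint-rule quadrature with an arbitrary sample size; the asserted constant $0.5074708\approx\mathbb{L}(\pi/6)$ is its well-known maximum value) evaluates $\mathbb{L}$ and checks the oddness and $\pi$-periodicity mentioned above.

```python
import math

def lobachevsky(x, n=200000):
    # L(x) = -\int_0^x log|2 sin t| dt, approximated by a composite midpoint rule
    # (the integrand has only integrable log-singularities, so this is sufficient).
    h = x / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s += math.log(abs(2.0 * math.sin(t)))
    return -s * h

L = lobachevsky
assert abs(L(math.pi / 6) - 0.5074708) < 1e-3   # the maximum value of L
assert abs(L(1.0 + math.pi) - L(1.0)) < 1e-3    # pi-periodicity
assert abs(L(-1.0) + L(1.0)) < 1e-3             # oddness
print(L(math.pi / 6))
```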
Moreover, for $c\in \mathbb{R}$, we have $$\label{Eq: property of E} \begin{aligned} \mathcal{E}_{\mathcal{T}}(u+c\mathbf{1}) =&-\mathcal{H}_{\mathcal{T}}(u+c\mathbf{1}) +2\pi\sum_{i\in V}(u_i+c) \\ =&-\mathcal{H}_{\mathcal{T}}(u)-c|F|\pi +2\pi\sum_{i\in V}u_i+2c|V|\pi \\ =&\mathcal{E}_{\mathcal{T}}(u)+2c\pi\chi(S), \end{aligned}$$ where $2|V|-|F|=2\chi(S)$ is used in the last line. **Theorem 18** ([@BL], Proposition 4.13). For a discrete conformal factor $u\in \mathbb{R}^V$, let $\mathcal{T}$ be a weighted Delaunay triangulation of the decorated PE surface $(S,V,dist_S(u),r(u))$. The map $$\label{Eq: extended H} \begin{aligned} \mathcal{H} :\ \mathbb{R}^V&\rightarrow \mathbb{R},\\ u&\mapsto \mathcal{H}_{\mathcal{T}}(u) \end{aligned}$$ is well-defined, concave, and twice continuously differentiable over $\mathbb{R}^V$. Therefore, the function $\mathcal{E}_{\mathcal{T}}(u)$ defined on $\mathcal{C}_\mathcal{T}(dist_{S},r)$ can be extended to be $$\label{Eq: extended E} \mathcal{E}(u) =-\mathcal{H}(u)+2\pi\sum_{i\in V}u_i =-\sum_{\{ijk\}\in F}F_{ijk}(u_i,u_j,u_k) +2\pi\sum_{i\in V}u_i$$ defined on $\mathbb{R}^V$. **Definition 19**. Suppose $(S,V,\mathcal{T})$ is a triangulated surface with a decorated PE metric $(l,r)$. The area function $A^\mathcal{T}_{tot}$ on $(S,V,\mathcal{T})$ is defined to be $$A^\mathcal{T}_{tot} :\ \mathcal{C}_\mathcal{T}(dist_{S},r)\rightarrow \mathbb{R},\\ \quad \ A^\mathcal{T}_{tot}(u)=\sum_{\{ijk\}\in F}A_{ijk}(u).$$ By Lemma [Lemma 16](#Lem: A_ijk u_i){reference-type="ref" reference="Lem: A_ijk u_i"}, we have the following result. **Corollary 20**. The function $A^\mathcal{T}_{tot}$ is an analytic function with $$\label{Eq: A_tot u_i} \frac{\partial A^\mathcal{T}_{tot}}{\partial u_i}=A_i.$$ Lemma [Lemma 14](#Lem: independent){reference-type="ref" reference="Lem: independent"} and Corollary [Corollary 20](#Cor: A_tot u_i){reference-type="ref" reference="Cor: A_tot u_i"} imply the following result, which shows the function $A^\mathcal{T}_{tot}$ defined on $\mathcal{C}_\mathcal{T}(dist_{S},r)$ can be extended. **Theorem 21**. For a discrete conformal factor $u\in \mathbb{R}^V$, let $\mathcal{T}$ be a weighted Delaunay triangulation of the decorated PE surface $(S,V,dist_S(u),r(u))$. The map $$\label{Eq: extended A_tot} \begin{aligned} A_{tot} :\ \mathbb{R}^V&\rightarrow \mathbb{R},\\ u&\mapsto A^\mathcal{T}_{tot}(u) \end{aligned}$$ is well-defined and once differentiable. By Corollary [Corollary 20](#Cor: A_tot u_i){reference-type="ref" reference="Cor: A_tot u_i"}, the function $A_{tot}$ is once differentiable in the interior of any $\mathcal{C}_\mathcal{T}(dist_{S},r)$. At the boundary of $\mathcal{C}_\mathcal{T}(dist_{S},r)$, the weighted triangulations induce the same weighted Delaunay tessellation. The conclusion follows from Lemma [Lemma 14](#Lem: independent){reference-type="ref" reference="Lem: independent"}. # The proof of Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} {#Sec: proof the main theorem} ## Variational principles with constraints In this subsection, we translate Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"} into an optimization problem with inequality constraints by variational principles, which involves the function $\mathcal{E}$ defined by ([\[Eq: extended E\]](#Eq: extended E){reference-type="ref" reference="Eq: extended E"}). **Proposition 22**. The set $$\mathcal{A}=\{u\in \mathbb{R}^V|A_{tot}(u)\leq 1\}.$$ is an unbounded closed subset of $\mathbb{R}^V$. 
By Theorem [Theorem 21](#Thm: extended area){reference-type="ref" reference="Thm: extended area"}, the set $\mathcal{A}$ is a closed subset of $\mathbb{R}^V$. Since $A_{ijk}((u_i,u_j,u_k)+(c,c,c))=e^{2c}A_{ijk}(u_i,u_j,u_k)$, we have $A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)$. Then $A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)\leq 1$ is equivalent to $c\leq -\frac{1}{2}\log A_{tot}(u)$. This implies that the ray $\{u+c\mathbf{1}|c\leq -\frac{1}{2}\log A_{tot}(u)\}$ stays in the set $\mathcal{A}$. Hence the set $\mathcal{A}$ is unbounded. According to Proposition [Proposition 22](#Prop: set A){reference-type="ref" reference="Prop: set A"}, we have the following result. **Lemma 23**. If $\chi(S)<0$ and the function $\mathcal{E}(u)$ attains a minimum in the set $\mathcal{A}$, then the minimum value point of $\mathcal{E}(u)$ lies at the boundary of $\mathcal{A}$, i.e., $$\partial \mathcal{A} =\{u\in \mathbb{R}^V|A_{tot}(u)=1\}.$$ Furthermore, there exists a decorated PE metric with constant discrete Gaussian curvature $K$ in the discrete conformal class. Suppose the function $\mathcal{E}(u)$ attains a minimum at $u\in \mathcal{A}$. Take $c_0=-\frac{1}{2}\log A_{tot}(u)$; then $c_0\geq0$ since $A_{tot}(u)\leq 1$. By the proof of Proposition [Proposition 22](#Prop: set A){reference-type="ref" reference="Prop: set A"}, $u+c_0\mathbf{1}\in \mathcal{A}$. Hence, by the additive property of the function $\mathcal{E}$ in ([\[Eq: property of E\]](#Eq: property of E){reference-type="ref" reference="Eq: property of E"}), we have $$\mathcal{E}(u)\leq \mathcal{E}(u+c_0\mathbf{1}) =\mathcal{E}(u)+2 c_0\pi\chi(S).$$ This implies $c_0\leq0$ by $\chi(S)<0$. Then $c_0=0$ and $A_{tot}(u)=1$. Therefore, the minimum value point of $\mathcal{E}(u)$ lies in the set $\partial\mathcal{A} =\{u\in\mathbb{R}^V|A_{tot}(u)=1\}$. The conclusion follows from the following claim. **Claim:** Up to scaling, the decorated PE metrics with constant discrete Gaussian curvature $K$ in the discrete conformal class are in one-to-one correspondence with the critical points of the function $\mathcal{E}(u)$ under the constraint $A_{tot}(u)=1$. We use the method of Lagrange multipliers to prove this claim. Set $$G(u,\mu)=\mathcal{E}(u)-\mu(A_{tot}(u)-1),$$ where $\mu\in \mathbb{R}$ is a Lagrange multiplier. If $u$ is a critical point of the function $\mathcal{E}$ under the constraint $A_{tot}(u)=1$, then by ([\[Eq: A_tot u_i\]](#Eq: A_tot u_i){reference-type="ref" reference="Eq: A_tot u_i"}) and the fact $\nabla_{u_i} \mathcal{E}=W_i$, we have $$0=\frac{\partial G(u,\mu)}{\partial u_i} =\frac{\partial \mathcal{E}(u)}{\partial u_i} -\mu\frac{\partial A_{tot}(u)}{\partial u_i} =W_i-\mu A_i.$$ This implies $$W_i=\mu A_i.$$ Since the angle defect $W$ defined by ([\[Eq: curvature W\]](#Eq: curvature W){reference-type="ref" reference="Eq: curvature W"}) satisfies the following discrete Gauss-Bonnet formula $$\sum_{i\in V} W_i=2\pi \chi(S),$$ we have $$2\pi \chi(S)=\sum_{i\in V}W_i =\mu \sum_{i\in V}A_i=\mu A_{tot}=\mu$$ under the constraint $A_{tot}(u)=1$. Therefore, the discrete Gaussian curvature $$K_i=\frac{W_i}{A_i}=2\pi \chi(S)$$ for any $i\in V$. 
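The scaling identity $A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)$ used in the proofs of Proposition 22 and Lemma 23 can be checked numerically on a single decorated triangle. The following Python sketch (with made-up radii and inversive distances) computes the triangle area from the deformed edge lengths of Definition 7 via Heron's formula and verifies the scaling.

```python
import math

# Made-up decorated triangle: radii and background edge lengths with I > 1.
r = [0.6, 0.9, 0.5]
I = [[None, 2.0, 2.4], [2.0, None, 1.6], [2.4, 1.6, None]]
l0 = [[0.0] * 3 for _ in range(3)]
for a in range(3):
    for b in range(3):
        if a != b:
            l0[a][b] = math.sqrt(r[a] ** 2 + r[b] ** 2 + 2 * I[a][b] * r[a] * r[b])

def deformed_length(a, b, u):
    # Edge-length rule of Definition 7 applied to the edge {a, b}.
    l2 = ((math.exp(2 * u[a]) - math.exp(u[a] + u[b])) * r[a] ** 2
          + (math.exp(2 * u[b]) - math.exp(u[a] + u[b])) * r[b] ** 2
          + math.exp(u[a] + u[b]) * l0[a][b] ** 2)
    return math.sqrt(l2)

def area(u):
    # Heron's formula for the Euclidean triangle with the deformed edge lengths.
    x, y, z = deformed_length(0, 1, u), deformed_length(1, 2, u), deformed_length(2, 0, u)
    s = (x + y + z) / 2
    return math.sqrt(s * (s - x) * (s - y) * (s - z))

u = [0.1, -0.2, 0.3]
c = 0.7
u_shifted = [ui + c for ui in u]
assert math.isclose(area(u_shifted), math.exp(2 * c) * area(u))
print(area(u), area(u_shifted))
```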
## Reduction to Theorem [Theorem 25](#Thm: key){reference-type="ref" reference="Thm: key"} {#reduction-to-theorem-thm-key} By Lemma [Lemma 23](#Lem: minimum lies at the boundary){reference-type="ref" reference="Lem: minimum lies at the boundary"}, we just need to prove that the function $\mathcal{E}(u)$ attains the minimum in the set $\mathcal{A}$. Recall the following classical result from calculus. **Theorem 24**. Let $A\subseteq \mathbb{R}^m$ be a closed set and $f: A\rightarrow \mathbb{R}$ be a continuous function. If every unbounded sequence $\{u_n\}_{n\in \mathbb{N}}$ in $A$ has a subsequence $\{u_{n_k}\}_{k\in \mathbb{N}}$ such that $\lim_{k\rightarrow +\infty} f(u_{n_k})=+\infty$, then $f$ attains a minimum in $A$. One can refer to [@Kourimska; @Thesis] (Section 4.1) for a proof of Theorem [Theorem 24](#Thm: calculus){reference-type="ref" reference="Thm: calculus"}. The majority of the conditions in Theorem [Theorem 24](#Thm: calculus){reference-type="ref" reference="Thm: calculus"} is satisfied, including the set $\mathcal{A}$ is a closed subset of $\mathbb{R}^V$ by Proposition [Proposition 22](#Prop: set A){reference-type="ref" reference="Prop: set A"} and the function $\mathcal{E}$ is continuous by Theorem [Theorem 18](#Thm: extended E){reference-type="ref" reference="Thm: extended E"}. To prove Theorem [Theorem 5](#Thm: main){reference-type="ref" reference="Thm: main"}, we just need to prove the following theorem. **Theorem 25**. If $\chi(S)<0$ and $\{u_n\}_{n\in \mathbb{N}}$ is an unbounded sequence in $\mathcal{A}$, then there exists a subsequence $\{u_{n_k}\}_{k\in \mathbb{N}}$ of $\{u_n\}_{n\in \mathbb{N}}$ such that $\lim_{k\rightarrow +\infty} \mathcal{E}(u_{n_k})=+\infty$. ## Behaviour of sequences of discrete conformal factors Let $\{u_n\}_{n\in \mathbb{N}}$ be an unbounded sequence in $\mathbb{R}^V$. Denote its coordinate sequence at $j\in V$ by $\{u_{j,n}\}_{n\in \mathbb{N}}$. Motivated by [@Kourimska], we call the sequence $\{u_n\}_{n\in \mathbb{N}}$ with the following properties as a "good\" sequence. \(1\) : It lies in one cell $\mathcal{C}_\mathcal{T}(dist_{S},r)$ of $\mathbb{R}^V$; \(2\) : There exists a vertex $i^*\in V$ such that $u_{i^*,n}\leq u_{j,n}$ for all $j\in V$ and $n\in \mathbb{N}$; \(3\) : Each coordinate sequence $\{u_{j,n}\}_{n\in \mathbb{N}}$ either converges, diverges properly to $+\infty$, or diverges properly to $-\infty$; \(4\) : For any $j\in V$, the sequence $\{u_{j,n}-u_{i^*,n}\}_{n\in \mathbb{N}}$ either converges or diverges properly to $+\infty$. By Lemma [Lemma 13](#Lem: finite decomposition){reference-type="ref" reference="Lem: finite decomposition"}, it is obvious that every sequence of discrete conformal factors in $\mathbb{R}^V$ possesses a "good\" subsequence. Hence, the "good\" sequence could be chosen without loss of generality. In the following arguments, we use the following notations $$\label{Eq: F2} l^n_{ij}=\sqrt{r^2_{i,n}+r^2_{j,n}+2 I_{ij}r_{i,n}r_{j,n}},$$ $$\label{Eq: F3} r_{i,n}=e^{u_{i,n}}r_i,$$ $$\label{Eq: F4} (l^n_{ij})^2 =(e^{2u_{i,n}}-e^{u_{i,n}+u_{j,n}})r^2_i +(e^{2u_{j,n}}-e^{u_{i,n}+u_{j,n}})r^2_j +e^{{u_{i,n}+u_{j,n}}}l_{ij}^2.$$ For a decorated triangle $\{ijk\}\in F$ in $(S,V,\mathcal{T})$, set $$\label{Eq: C ijk} \mathcal{C}_{ijk}=\{(u_i,u_j,u_k)\in \mathbb{R}^3|u\in \mathcal{C}_\mathcal{T}(dist_{S},r)\}.$$ Let $(u_{i,n},u_{j,n},u_{k,n})_{n\in \mathbb{N}}$ be a coordinate sequence in $\mathcal{C}_{ijk}$. 
Then the edge lengths $l^n_{ij},l^n_{jk},l^n_{ki}$ satisfy the triangle inequalities for all $n\in \mathbb{N}$. **Lemma 26**. There exists no sequence in $\mathcal{C}_{ijk}$ such that as $n\rightarrow +\infty$, $$u_{r,n}\rightarrow +\infty, \quad u_{s,n}\rightarrow +\infty, \quad u_{t,n}\leq C,$$ where $\{r,s,t\}=\{i,j,k\}$ and $C$ is a constant. Without loss of generality, we assume $\lim u_{i,n}=+\infty$, $\lim u_{j,n}=+\infty$ and the sequence $u_{k,n}\leq C_1$. The equality ([\[Eq: F3\]](#Eq: F3){reference-type="ref" reference="Eq: F3"}) implies $\lim r_{i,n}=+\infty$, $\lim r_{j,n}=+\infty$ and the sequence $r_{k,n}\leq C_2$. Here $C_1,C_2$ are constants. By ([\[Eq: F2\]](#Eq: F2){reference-type="ref" reference="Eq: F2"}), we have $$\begin{aligned} (l^n_{jk}+l^n_{ki})^2 =&r^2_{i,n}+r^2_{j,n}+2r^2_{k,n}+2 I_{jk}r_{j,n}r_{k,n} +2 I_{ki}r_{k,n}r_{i,n}\\ &+2\sqrt{(r^2_{j,n}+r^2_{k,n}+2 I_{jk}r_{j,n}r_{k,n})(r^2_{k,n}+r^2_{i,n}+2 I_{ki}r_{k,n}r_{i,n})}. \end{aligned}$$ Note that $I_{ij}>1$, then $$\lim\frac{r^2_{k,n}+I_{jk}r_{j,n}r_{k,n} +I_{ki}r_{k,n}r_{i,n}+\sqrt{(r^2_{j,n}+r^2_{k,n}+2 I_{jk}r_{j,n}r_{k,n})(r^2_{k,n}+r^2_{i,n}+2 I_{ki}r_{k,n}r_{i,n})}}{I_{ij}r_{i,n}r_{j,n}}<1.$$ Therefore, there exists $n\in \mathbb{N}$ such that $(l^n)^2_{ij}=r^2_{i,n}+r^2_{j,n}+2 I_{ij}r_{i,n}r_{j,n}>(l^n_{jk}+l^n_{ki})^2$, i.e., $l^n_{ij}>l^n_{jk}+l^n_{ki}$. This contradicts the triangle inequality $l^n_{ij}< l^n_{jk}+l^n_{ki}$. Combining Lemma [Lemma 26](#Lem: two infty one bounded){reference-type="ref" reference="Lem: two infty one bounded"} and the connectivity of the triangulation $\mathcal{T}$, we have the following result. **Corollary 27**. For a discrete conformal factor $u\in \mathbb{R}^V$, let $\mathcal{T}$ be a weighted Delaunay triangulation of the decorated PE surface $(S,V,dist_S(u),r(u))$. For any decorated triangle $\{ijk\}\in F$ in $\mathcal{T}$, at least two of the three sequences $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$, $(u_{j,n}-u_{i^*,n})_{n\in \mathbb{N}}$, $(u_{k,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converge. To characterize the function $F_{ijk}(u_i,u_j,u_k)$ in ([\[Eq: F ijk\]](#Eq: F ijk){reference-type="ref" reference="Eq: F ijk"}), we need the following lemmas. **Lemma 28**. Assume that the sequence $(u_{i,n})_{n\in \mathbb{N}}$ diverges properly to $+\infty$ and the sequences $(u_{j,n})_{n\in \mathbb{N}}$ and $(u_{k,n})_{n\in \mathbb{N}}$ converge. Then the sequence $(\theta^{i,n}_{jk})_{n\in \mathbb{N}}$ converges to zero. Furthermore, if the sequences $(\theta^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\theta^{k,n}_{ij})_{n\in \mathbb{N}}$ converge to non-zero constants, then \(1\) : the sequences $(h_{jk}^{i,n})_{n\in \mathbb{N}}$, $(h_{ki}^{j,n})_{n\in \mathbb{N}}$ and $(h_{ij}^{k,n})_{n\in \mathbb{N}}$ converge; \(2\) : the sequences $(\alpha^{i,n}_{jk})_{n\in \mathbb{N}}$, $(\alpha^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\alpha^{k,n}_{ij})_{n\in \mathbb{N}}$ converge. By the assumption, we have $\lim r_{i,n}=+\infty$, $\lim r_{j,n}=c_1$ and $\lim r_{k,n}=c_2$, where $c_1,c_2$ are positive constants. The equality ([\[Eq: F2\]](#Eq: F2){reference-type="ref" reference="Eq: F2"}) implies $$\label{Eq: F5} \lim \frac{l_{ij}^n}{r_{i,n}}=1,\ \lim \frac{l_{ki}^n}{r_{i,n}}=1,\ \lim l_{jk}^n=c_3,$$ where $c_3$ is a positive constant. By the cosine law, we have $$\lim \cos\theta^{i,n}_{jk} =\lim\frac{-(l_{jk}^n)^2+(l_{ij}^n)^2+(l_{ki}^n)^2} {2l_{ij}^nl_{ki}^n} =1.$$ This implies $\lim\theta^{i,n}_{jk}=0$. 
Suppose the sequences $(\theta^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\theta^{k,n}_{ij})_{n\in \mathbb{N}}$ converge to non-zero constants. Then $$\label{Eq: A_ijk} \lim \frac{A_{ijk}^n}{r_{i,n}} =\lim \frac{l^n_{ij}l^n_{jk}\sin\theta^{j,n}_{ki}}{2r_{i,n}} =c_4$$ for some constant $c_4>0$. **(1):** Since $\kappa_i=\frac{1}{r_i}$, then $\lim \kappa_{i,n}=0$, $\lim \kappa_{j,n}=\frac{1}{c_1}$ and $\lim \kappa_{k,n}=\frac{1}{c_2}$. By ([\[Eq: h_i\]](#Eq: h_i){reference-type="ref" reference="Eq: h_i"}), we have $$\begin{aligned} \lim h_{i,n} =&\lim(\kappa_{i,n}(1-I_{jk}^2)+\kappa_{j,n}\gamma_{k} +\kappa_{k,n}\gamma_{j})=c_5>0, \\ \lim h_{j,n} =&\lim(\kappa_{j,n}(1-I_{ki}^2)+\kappa_{i,n}\gamma_{k} +\kappa_{k,n}\gamma_i)=c_6, \\ \lim h_{k,n} =&\lim(\kappa_{k,n}(1-I_{ij}^2)+\kappa_{i,n}\gamma_j +\kappa_{j,n}\gamma_i)=c_7,\end{aligned}$$ where $c_5,c_6,c_7$ are constants. Note that $c_6,c_7$ may be non-positive. The equalities ([\[Eq: h_ijk\]](#Eq: h_ijk){reference-type="ref" reference="Eq: h_ijk"}) and ([\[Eq: A_ijk\]](#Eq: A_ijk){reference-type="ref" reference="Eq: A_ijk"}) imply $$\begin{aligned} \lim h_{jk}^{i,n} =&\lim \frac{r_{i,n}^2r_{j,n}^2r_{k,n}^2}{2A^n_{ijk}l^n_{jk}} \kappa_{i,n}h_{i,n} =\frac{c_1^2c^2_2c_5}{2c_3c_4}>0,\\ \lim h_{ki}^{j,n} =&\lim \frac{r_{i,n}^2r_{j,n}^2r_{k,n}^2}{2A^n_{ijk}l^n_{ki}} \kappa_{j,n}h_{j,n} =\frac{c_1c^2_2c_6}{2c_4},\\ \lim h_{ij}^{k,n} =&\lim \frac{r_{i,n}^2r_{j,n}^2r_{k,n}^2}{2A^n_{ijk}l^n_{ij}} \kappa_{k,n}h_{k,n} =\frac{c_1^2c_2c_7}{2c_4}.\end{aligned}$$ Hence the sequences $(h_{jk}^{i,n})_{n\in \mathbb{N}}$, $(h_{ki}^{j,n})_{n\in \mathbb{N}}$ and $(h_{ij}^{k,n})_{n\in \mathbb{N}}$ converge. **(2):** The equality ([\[Eq: d ij\]](#Eq: d ij){reference-type="ref" reference="Eq: d ij"}) implies $$\label{Eq: F12} \lim d^n_{jk}=\lim \frac{r_{j,n}^2+r_{j,n}r_{k,n}I_{jk}}{l^n_{jk}} =\frac{c_1^2+c_1c_2I_{jk}}{c_3}>0.$$ By ([\[Eq: F13\]](#Eq: F13){reference-type="ref" reference="Eq: F13"}), we have $$\lim (r^n_{ijk})^2 =\lim[(d^n_{jk})^2+(h^{i,n}_{jk})^2-r_{j,n}^2] =c_8.$$ where $c_8$ is a constant. Note that $h^{i}_{jk}=r_{ijk}\cos \alpha_{jk}^{i}$. Hence, $$\begin{aligned} \lim\cos\alpha_{jk}^{i,n} =&\lim \frac{h_{jk}^{i,n}}{r^n_{ijk}} =\frac{c_1^2c^2_2c_5}{2c_3c_4\sqrt{c_8}}>0,\\ \lim\cos\alpha_{ki}^{j,n} =&\lim \frac{h_{ki}^{j,n}}{r^n_{ijk}} =\frac{c_1c^2_2c_6}{2c_4\sqrt{c_8}},\\ \lim\cos\alpha_{ij}^{k,n} =&\lim \frac{h_{ij}^{k,n}}{r^n_{ijk}} =\frac{c_1^2c_2c_7}{2c_4\sqrt{c_8}}.\end{aligned}$$ Then the sequences $(\alpha^{i,n}_{jk})_{n\in \mathbb{N}}$, $(\alpha^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\alpha^{k,n}_{ij})_{n\in \mathbb{N}}$ converge. **Lemma 29**. Assume that the sequence $(u_{i,n})_{n\in \mathbb{N}}$ diverges properly to $+\infty$ and the sequences $(u_{j,n})_{n\in \mathbb{N}}$ and $(u_{k,n})_{n\in \mathbb{N}}$ converge. If the sequence $(\theta^{j,n}_{ki})_{n\in \mathbb{N}}$ converge to zero, then $$\lim h^{i,n}_{jk}=+\infty,\ \lim h^{j,n}_{ki}=+\infty,\ \lim h^{k,n}_{ij}=-\infty.$$ Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"} shows that $\lim\theta^{i,n}_{jk}=0$, thus $\lim (\theta^{j,n}_{ki}+\theta^{k,n}_{ij})=\pi$. Since $\lim \theta^{j,n}_{ki}=0$, then $\lim \theta^{k,n}_{ij}=\pi$. 
Then $$\label{Eq: F10} \lim \frac{A_{ijk}^n}{r_{i,n}} =\lim \frac{l^n_{ij}l^n_{jk}\sin\theta^{j,n}_{ki}}{2r_{i,n}} =0.$$ By the proof of Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"}, we have $$\lim h_{jk}^{i,n} =\lim \frac{r_{i,n}^2r_{j,n}^2r_{k,n}^2}{2A^n_{ijk}l^n_{jk}} \kappa_{i,n}h_{i,n} =\lim \frac{r_{i,n}^2c_1^2c_2^2}{2A^n_{ijk}c_3}\cdot \frac{1}{r_{i,n}}c_5 =+\infty,$$ where ([\[Eq: F10\]](#Eq: F10){reference-type="ref" reference="Eq: F10"}) is used and $c_1,c_2,c_3,c_5$ are positive constants. Similar to ([\[Eq: F12\]](#Eq: F12){reference-type="ref" reference="Eq: F12"}), we have $$\begin{aligned} \lim d^n_{ji} =&\lim\frac{r_{j,n}^2+r_{i,n}r_{j,n}I_{ij}}{l^n_{ij}}=c_9, \\ \lim d^n_{ki} =&\lim\frac{r_{k,n}^2+r_{i,n}r_{k,n}I_{ki}}{l^n_{ki}}=c_{10}.\end{aligned}$$ Here $c_9,c_{10}$ are positive constants. By ([\[Eq: F13\]](#Eq: F13){reference-type="ref" reference="Eq: F13"}), we have $$\begin{aligned} (r^n_{ijk})^2 &=(d^n_{jk})^2+(h^{i,n}_{jk})^2-r_{j,n}^2 \\ &=(d^n_{ji})^2+(h^{k,n}_{ij})^2-r_{j,n}^2 \\ &=(d^n_{ki})^2+(h^{j,n}_{ki})^2-r_{k,n}^2.\end{aligned}$$ This implies $\lim r^n_{ijk}=+\infty$, $\lim (h^{k,n}_{ij})^2=+\infty$ and $\lim (h^{j,n}_{ki})^2=+\infty$. Therefore, we have the following four cases: $(i)$ : $\lim h^{k,n}_{ij}=+\infty$ and $\lim h^{j,n}_{ki}=+\infty$; $(ii)$ : $\lim h^{k,n}_{ij}=-\infty$ and $\lim h^{j,n}_{ki}=-\infty$; $(iii)$ : $\lim h^{k,n}_{ij}=+\infty$ and $\lim h^{j,n}_{ki}=-\infty$; $(iv)$ : $\lim h^{k,n}_{ij}=-\infty$ and $\lim h^{j,n}_{ki}=+\infty$. For the case $(i)$, we have $\lim h_{jk}^{i,n}>0$, $\lim h^{k,n}_{ij}>0$ and $\lim h^{j,n}_{ki}>0$, which implies that the center $c_{ijk}$ of the face-circle $C_{ijk}$ lies in the interior of the triangle $\{ijk\}$ by the definition of $h_{jk}^i, h^{k}_{ij}, h^{j}_{ki}$. However, in this case, $h_{jk}^{i,n}, h^{k,n}_{ij}, h^{j,n}_{ki}$ are bounded. This is a contradiction. Both the cases $(ii)$ and $(iii)$ imply $d_{kj}<0$. This contradicts the fact that $d_{rs}>0$ for any $\{r, s\}\subseteq \{i,j,k\}$. Indeed, the center $c_{ijk}$ lies in the red region in Figure [\[figure 4\]](#figure 4){reference-type="ref" reference="figure 4"} in the case $(ii)$ and lies in the blue region in Figure [\[figure 4\]](#figure 4){reference-type="ref" reference="figure 4"} in the case $(iii)$. By projecting the center $c_{ijk}$ to the line determined by $\{jk\}$, we have $d_{kj}<0$. Therefore, only the case $(iv)$ remains, which is exactly the conclusion of the lemma. This completes the proof. **Remark 30**. Similar to the proof of Lemma [Lemma 29](#Lem: converge 2){reference-type="ref" reference="Lem: converge 2"}, if the sequence $(\theta^{k,n}_{ij})_{n\in \mathbb{N}}$ converges to zero, then $\lim h^{i,n}_{jk}=+\infty,\ \lim h^{j,n}_{ki}=-\infty,\ \lim h^{k,n}_{ij}=+\infty.$ Consider a star-shaped $s$-sided polygon in the marked surface with boundary vertices $1,\cdots,s$ ordered cyclically ($v_{s+1}=v_1$). Please refer to Figure [\[figure 5\]](#figure 5){reference-type="ref" reference="figure 5"}. Let $i\in V$ be a vertex such that the sequence $(u_{i,n})_{n\in \mathbb{N}}$ diverges properly to $+\infty$ and the sequences $(u_{j,n})_{n\in \mathbb{N}}$ converge for $j\sim i$. **Lemma 31**. 
The sequences of inner angles at the boundary vertices in the triangles of a star-shaped polygon converge to non-zero constants. As $\lim u_{i,n}=+\infty$ and $\lim u_{j,n}=C$ for $j\sim i$. By Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"}, for any $j=1,...,s$, we have $\lim \theta^{i,n}_{j-1,j}=0$ and hence $\lim (\theta^{j-1,n}_{i,j}+\theta^{j,n}_{i,j-1})=\pi$. We prove the result by contradiction. Without loss of generality, we assume $\lim \theta^{j-1,n}_{i,j}=\pi$ and $\lim \theta^{j,n}_{i,j-1}=0$ in the triangle $\{i,j-1,j\}$. Then for $n$ large enough, we have $$l^n_{i,j-1}<l^n_{i,j}.$$ By Lemma [Lemma 29](#Lem: converge 2){reference-type="ref" reference="Lem: converge 2"}, we have $\lim h^{j,n}_{i,j-1}=+\infty,\ \lim h^{j-1,n}_{i,j}=-\infty,\ \lim h^{i,n}_{j-1,j}=+\infty$. Since the edge $\{i,j\}$ is weighted Delaunay, thus by ([\[Eq: F9\]](#Eq: F9){reference-type="ref" reference="Eq: F9"}), we have $$h^{j-1, n}_{i,j}+h^{j+1, n}_{i,j}\geq 0.$$ This implies $\lim h^{j+1,n}_{i,j}=+\infty$. In the triangle $\{i,j,j+1\}$, suppose the sequences $(\theta^{j,n}_{i,j+1})_{n\in \mathbb{N}}$ and $(\theta^{j+1,n}_{i,j})_{n\in \mathbb{N}}$ converge to non-zero constants. By Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"}, the sequences $(h_{i,j}^{j+1,n})_{n\in \mathbb{N}}$ and $(h_{i,j+1}^{j,n})_{n\in \mathbb{N}}$ converge. This contradicts $\lim h^{j+1,n}_{i,j}=+\infty$. Hence the sequences $(\theta^{j,n}_{i,j+1})_{n\in \mathbb{N}}$ or $(\theta^{j+1,n}_{i,j})_{n\in \mathbb{N}}$ converge to zero. By Lemma [Lemma 29](#Lem: converge 2){reference-type="ref" reference="Lem: converge 2"} and Remark [Remark 30](#Rem: 2){reference-type="ref" reference="Rem: 2"}, we have $\lim \theta^{j,n}_{i,j+1}=\pi$, $\lim \theta^{j+1,n}_{i,j}=0$ and $\lim h^{j,n}_{i,j+1}=-\infty$. Then for $n$ large enough, we have $$l^n_{i,j}< l^n_{i,j+1}.$$ Please refer to Figure [\[figure 5\]](#figure 5){reference-type="ref" reference="figure 5"}. By induction, for $n$ large enough, we have $$l^n_{i,j-1}< l^n_{i,j}< l^n_{i,j+1}< l^n_{i,j+2}<...< l^n_{i,j-1}.$$ This is a contradiction. Combining ([\[Eq: F ijk\]](#Eq: F ijk){reference-type="ref" reference="Eq: F ijk"}), Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"} and Lemma [Lemma 31](#Lem: converge 3){reference-type="ref" reference="Lem: converge 3"}, we have the following result. **Corollary 32**. Assume that the sequence $(u_{i,n})_{n\in \mathbb{N}}$ diverges properly to $+\infty$ and the sequences $(u_{j,n})_{n\in \mathbb{N}}$ and $(u_{k,n})_{n\in \mathbb{N}}$ converge. Then the sequence $(F_{ijk}(u_{i,n},u_{j,n},u_{k,n}))_{n\in \mathbb{N}}$ converges. By the definition of $F_{ijk}(u_i,u_j,u_k)$ in ([\[Eq: F ijk\]](#Eq: F ijk){reference-type="ref" reference="Eq: F ijk"}), we have $$\begin{aligned} F_{ijk}(u_{i,n},u_{j,n},u_{k,n}) =&-2\mathrm{Vol}^n(ijk)+\theta_{jk}^{i,n}u_{i,n} +\theta_{ki}^{j,n}u_{j,n}+\theta_{ij}^{k,n}u_{k,n}\\ &+(\frac{\pi}{2}-\alpha_{ij}^{k,n})\lambda_{ij} +(\frac{\pi}{2}-\alpha_{ki}^{j,n})\lambda_{ki} +(\frac{\pi}{2}-\alpha_{jk}^{i,n})\lambda_{jk}. 
\end{aligned}$$ Combining Lemma [Lemma 28](#Lem: converge 1){reference-type="ref" reference="Lem: converge 1"} and Lemma [Lemma 31](#Lem: converge 3){reference-type="ref" reference="Lem: converge 3"} gives that $\lim\theta^{i,n}_{jk}=0$, the sequences $(\theta^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\theta^{k,n}_{ij})_{n\in \mathbb{N}}$ converge to non-zero constants and the sequences $(\alpha^{i,n}_{jk})_{n\in \mathbb{N}}$, $(\alpha^{j,n}_{ki})_{n\in \mathbb{N}}$ and $(\alpha^{k,n}_{ij})_{n\in \mathbb{N}}$ converge. Combining the continuity of Milnor's Lobachevsky function defined by ([\[Eq: Lobachevsky function\]](#Eq: Lobachevsky function){reference-type="ref" reference="Eq: Lobachevsky function"}) and the definition of the truncated volume $\mathrm{Vol}(ijk)$ defined by ([\[Eq: volume\]](#Eq: volume){reference-type="ref" reference="Eq: volume"}), we have that the sequence $(\mathrm{Vol}^n(ijk))_{n\in \mathbb{N}}$ converges. Note that $\lambda_{ij}=\mathrm{arccosh} I_{ij}$ keeps invariant. Hence, $$\lim F_{ijk}(u_{i,n},u_{j,n},u_{k,n})=\lim \theta_{jk}^{i,n}u_{i,n}+c_{11}$$ for some constant $c_{11}$. By ([\[Eq: F3\]](#Eq: F3){reference-type="ref" reference="Eq: F3"}), we have $u_{i,n}=\log r_{i,n}-\log r_i$. Then $$\begin{aligned} \lim\theta_{jk}^{i,n}u_{i,n} =&\lim \sin\theta^{i,n}_{jk}(\log r_{i,n}-\log r_i)\\ =&\lim\frac{2A_{ijk}^n}{l_{ij}^nl_{ki}^n}\log r_{i,n}\\ =&2c_4\lim\frac{\log r_{i,n}}{r_{i,n}}\\ =&0, \end{aligned}$$ where the equalities ([\[Eq: F5\]](#Eq: F5){reference-type="ref" reference="Eq: F5"}) and ([\[Eq: A_ijk\]](#Eq: A_ijk){reference-type="ref" reference="Eq: A_ijk"}) is used in the second line and $\lim_{x\rightarrow +\infty}\frac{1}{x}\log x=0$ is used in the third line. Therefore, $\lim F_{ijk}(u_{i,n},u_{j,n},u_{k,n})=c_{11}$. The following lemma gives an asymptotic expression of the function $\mathcal{E}$. **Lemma 33**. There exists a convergent sequence $\{D_n\}_{n\in \mathbb{N}}$ such that the function $\mathcal{E}$ satisfies $$\mathcal{E}(u_n)=D_n+2\pi\left(u_{i^*,n}\chi(S)+\sum_{j\in V}(u_{j,n}-u_{i^*,n})\right).$$ By ([\[Eq: extended E\]](#Eq: extended E){reference-type="ref" reference="Eq: extended E"}), we have $$\begin{aligned} \mathcal{E}(u_n) =&-\sum_{\{ijk\}\in F}F_{ijk}(u_{i,n},u_{j,n},u_{k,n}) +2\pi\sum_{j\in V}u_{j,n}\\ =&-\sum_{\{ijk\}\in F}F_{ijk} ((u_{i,n},u_{j,n},u_{k,n})-u_{i^*,n}(1,1,1)) -\pi|F|u_{i^*,n}+2\pi\sum_{j\in V}u_{j,n}\\ =&D_n-\pi(2|V|-2\chi(S))u_{i^*,n}+2\pi\sum_{j\in V}u_{j,n}\\ =&D_n+2\pi\left(u_{i^*,n}\chi(S)+\sum_{j\in V}(u_{j,n}-u_{i^*,n})\right),\end{aligned}$$ where $D_n=-\sum_{\{ijk\}\in F}F_{ijk} ((u_{i,n},u_{j,n},u_{k,n})-u_{i^*,n}(1,1,1))$, the equation ([\[Eq: property of F ijk\]](#Eq: property of F ijk){reference-type="ref" reference="Eq: property of F ijk"}) is used in the second line and $2|V|-|F|=2\chi(S)$ is used in the third line. The sequence $\{D_n\}_{n\in \mathbb{N}}$ converges by Corollary [Corollary 27](#Cor: one infty two converge){reference-type="ref" reference="Cor: one infty two converge"} and Corollary [Corollary 32](#Cor: F converge){reference-type="ref" reference="Cor: F converge"}. The following lemma gives the influence of the sequence $(u_n)_{n\in \mathbb{N}}$ on the area $A_{ijk}$ of a decorated triangle $\{ijk\}$. **Lemma 34**. For a discrete conformal factor $u\in \mathbb{R}^V$, let $\mathcal{T}$ be a weighted Delaunay triangulation of the decorated PE surface $(S,V,dist_S(u),r(u))$. 
Assume the sequences $(u_{j,n}-u_{i^*,n})_{n\in \mathbb{N}}$, $(u_{k,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converge for $\{ijk\}$ in $\mathcal{T}$ with edge lengths $l_{ij}^n, l_{jk}^n, l_{ki}^n$. \(a\) : If $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converges, there exists a convergent sequence of real numbers $(C_n)_{n\in \mathbb{N}}$ such that $$\label{Eq: key 1} \log A_{ijk}^n=C_n+2u_{i^*,n}.$$ \(b\) : If $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$ diverges to $+\infty$, there exists a convergent sequence of real numbers $(C_n)_{n\in \mathbb{N}}$ such that $$\label{Eq: key 2} \log A_{ijk}^n=C_n+u_{i,n}+u_{i^*,n}.$$ Applying ([\[Eq: F4\]](#Eq: F4){reference-type="ref" reference="Eq: F4"}) to $A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin \theta_{ki}^j$ gives $$\begin{aligned} A^n_{ijk} =&\frac{1}{2}l_{ij}^nl_{jk}^n\sin\theta_{ki}^{j,n}\\ =&\frac{1}{2}\sin\theta_{ki}^{j,n} \sqrt{r^2_i e^{2u_{i,n}}+r^2_je^{2u_{j,n}}+(l_{ij}^2 -r^2_i-r^2_j)e^{(u_{i,n}+u_{j,n})}}\\ &\times \sqrt{r^2_j e^{2u_{j,n}}+r^2_ke^{2u_{k,n}}+(l_{jk}^2 -r^2_j-r^2_k)e^{(u_{j,n}+u_{k,n})}}. \end{aligned}$$ Then $$\begin{aligned} \log A^n_{ijk} =&\log(\frac{1}{2}\sin\theta_{ki}^{j,n})+2u_{i^*,n}\\ &+\frac{1}{2}\log(r^2_i e^{2(u_{i,n}-u_{i^*,n})} +r^2_je^{2(u_{j,n}-u_{i^*,n})}+(l_{ij}^2 -r^2_i-r^2_j)e^{(u_{i,n}-u_{i^*,n})+(u_{j,n}-u_{i^*,n})})\\ &+\frac{1}{2}\log(r^2_j e^{2(u_{j,n}-u_{i^*,n})} +r^2_ke^{2(u_{k,n}-u_{i^*,n})}+(l_{jk}^2 -r^2_j-r^2_k)e^{(u_{j,n}-u_{i^*,n})+(u_{k,n}-u_{i^*,n})})\\ =&\log(\frac{1}{2}\sin\theta_{ki}^{j,n})+u_{i,n}+u_{i^*,n}\\ &+\frac{1}{2}\log(r^2_i+r^2_je^{2(u_{j,n}-u_{i^*,n})-2(u_{i,n}-u_{i^*,n})} +(l_{ij}^2-r^2_i-r^2_j) e^{-(u_{i,n}-u_{i^*,n})+(u_{j,n}-u_{i^*,n})})\\ &+\frac{1}{2}\log(r^2_j e^{2(u_{j,n}-u_{i^*,n})} +r^2_ke^{2(u_{k,n}-u_{i^*,n})}+(l_{jk}^2 -r^2_j-r^2_k)e^{(u_{j,n}-u_{i^*,n})+(u_{k,n}-u_{i^*,n})}). \end{aligned}$$ If the sequence $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converges, then $\log A_{ijk}^n=C_n+2u_{i^*,n}$. If the sequence $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$ diverges to $+\infty$, then the sequence $(\theta_{ki}^{j,n})_{n\in \mathbb{N}}$ converges to a non-zero constant in $(0, \pi)$ by Lemma [Lemma 31](#Lem: converge 3){reference-type="ref" reference="Lem: converge 3"}. This implies $\log A_{ijk}^n =C_n+u_{i,n}+u_{i^*,n}$. In both cases, the sequence $(C_n)_{n\in \mathbb{N}}$ converges. ## Proof of Theorem [Theorem 25](#Thm: key){reference-type="ref" reference="Thm: key"} {#proof-of-theorem-thm-key} Let $\{u_n\}_{n\in \mathbb{N}}$ be an unbounded "good\" sequence. Suppose $\chi(S)<0$ and $\{u_n\}_{n\in \mathbb{N}}$ is an unbounded sequence in $\mathcal{A}$. Combining $\chi(S)<0$ and Lemma [Lemma 33](#Lem: E decomposition){reference-type="ref" reference="Lem: E decomposition"}, we just need to prove that $\lim_{n\rightarrow +\infty} u_{i^*,n}=-\infty$. By the definition of "good\" sequence, the sequence $\left(\sum_{j\in V}(u_{j,n}-u_{i^*,n})\right)_{n\in \mathbb{N}}$ converges to a finite number or diverges properly to $+\infty$. If $\left(\sum_{j\in V}(u_{j,n}-u_{i^*,n})\right)_{n\in \mathbb{N}}$ converges to a finite number, then the sequence $(u_{j,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converges for all $j\in V$. Since the sequence $\{u_n\}_{n\in \mathbb{N}}$ lies in $\mathcal{A}$, the area $A_{ijk}$ of each triangle is bounded from above. This implies $\{u_{i^*,n}\}_{n\in \mathbb{N}}$ is bounded from above by ([\[Eq: key 1\]](#Eq: key 1){reference-type="ref" reference="Eq: key 1"}). Then $\{u_{i^*,n}\}_{n\in \mathbb{N}}$ converges to a finite number or diverges properly to $-\infty$. 
Suppose $\{u_{i^*,n}\}_{n\in \mathbb{N}}$ converges to a finite number. Since $(u_{j,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converges for all $j\in V$, then $\{u_{j,n}\}_{n\in \mathbb{N}}$ are bounded for all $j\in V$, which implies $\{u_n\}_{n\in \mathbb{N}}$ is bounded. This contradicts the assumption that $\{u_n\}_{n\in \mathbb{N}}$ is unbounded. Therefore, the sequence $\{u_{i^*,n}\}_{n\in \mathbb{N}}$ diverges properly to $-\infty$. If $\left(\sum_{j\in V}(u_{j,n}-u_{i^*,n})\right)_{n\in \mathbb{N}}$ diverges properly to $+\infty$, then there exists at least one vertex $i\in V$ such that the sequence $(u_{i,n}-u_{i^*,n})_{n\in \mathbb{N}}$ diverges properly to $+\infty$. By Corollary [Corollary 27](#Cor: one infty two converge){reference-type="ref" reference="Cor: one infty two converge"}, the sequences $(u_{j,n}-u_{i^*,n})_{n\in \mathbb{N}}$ and $(u_{k,n}-u_{i^*,n})_{n\in \mathbb{N}}$ converge for $j\sim i$ and $k\sim i$. Since the area $A_{ijk}$ of each triangle is bounded from above, thus $u_{i,n}+u_{i^*,n}\leq C$ and $u_{j,n}+u_{i^*,n}\leq C$ by ([\[Eq: key 2\]](#Eq: key 2){reference-type="ref" reference="Eq: key 2"}), where $C$ is a constant. Then $(u_{i,n}-u_{i^*,n})+2u_{i^*,n}\leq C$. This implies $\{u_{i^*,n}\}_{n\in \mathbb{N}}$ diverges properly to $-\infty$. **Remark 35**. For the case $\chi(S)>0$, Kouřimská [@Kourimska; @Kourimska; @Thesis] gave the existence of PE metrics with constant discrete Gaussian curvatures. However, we can not get similar results. The main difference is that the edge length defined by ([\[Eq: DCE2\]](#Eq: DCE2){reference-type="ref" reference="Eq: DCE2"}) involves the square term of discrete conformal factors, such as $e^{2u_i}$, while the edge length defined by the vertex scalings only involves the mixed product of the first order terms, i.e., $e^{u_i+u_j}$. Indeed, in this case, we can define the set $\mathcal{A}_+=\{u\in \mathbb{R}^V|A_{tot}(u)\geq 1\}$, which is an unbounded closed subset of $\mathbb{R}^V$. Under the conditions that $\chi(S)>0$ and the function $\mathcal{E}(u)$ attains a minimum in the set $\mathcal{A}_+$, Lemma [Lemma 23](#Lem: minimum lies at the boundary){reference-type="ref" reference="Lem: minimum lies at the boundary"} still holds. Using Theorem [Theorem 24](#Thm: calculus){reference-type="ref" reference="Thm: calculus"}, we just need to prove Theorem [Theorem 25](#Thm: key){reference-type="ref" reference="Thm: key"} under the condition $\chi(S)>0$. However, we can not get a good asymptotic expression of the area $A_{ijk}$. The asymptotic expression of the area $A_{ijk}$ in ([\[Eq: key 2\]](#Eq: key 2){reference-type="ref" reference="Eq: key 2"}) involves $u_{i,n}+u_{i^*,n}$, which is not enough for this case. 50 A. Beardon, K. Stephenson, *The uniformization theorem for circle packings*. Indiana Univ. Math. J. 39 (1990), no. 4, 1383-1425. A. Bobenko, C. Lutz, *Decorated discrete conformal maps and convex polyhedral cusps*. [arXiv:2305.10988v1\[math.GT\]](https://arxiv.org/abs/2305.10988). A. Bobenko, U. Pinkall, B. Springborn, *Discrete conformal maps and ideal hyperbolic polyhedra*. Geom. Topol. 19 (2015), no. 4, 2155-2215. P. L. Bowers, K. Stephenson, *Uniformizing dessins and Belyĭ maps via circle packing*. Mem. Amer. Math. Soc. 170 (2004), no. 805. Y. Chen, Y. Luo, X. Xu, S. Zhang, *Bowers-Stephenson's conjecture on the convergence of inversive distance circle packings to the Riemann mapping*, [arXiv:2211.07464 \[math.MG\]](https://arxiv.org/abs/2211.07464). H. S. M. Coxeter. *Inversive distance*. 
Annali di Matematica, 71(1):73-83, December 1966. H. Ge, X. Xu, *A combinatorial Yamabe problem on two and three dimensional manifolds*, Calc. Var. Partial Differential Equations 60 (2021), no. 1, 20. D. Glickenstein, *A monotonicity property for weighted Delaunay triangulations*. Discrete Comput. Geom. 38 (2007), no. 4, 651-664. D. Glickenstein, *Discrete conformal variations and scalar curvature on piecewise flat two and three dimensional manifolds*, J. Differential Geom. 87 (2011), no. 2, 201-237. D. Glickenstein, *Geometric triangulations and discrete Laplacians on manifolds*, [arXiv:math/0508188 \[math.MG\].](https://arxiv.org/abs/math/0508188) D. Glickenstein, J. Thomas, *Duality structures and discrete conformal variations of piecewise constant curvature surfaces*, Adv. Math. 320 (2017), 250-278. X. D. Gu, R. Guo, F. Luo, J. Sun, T. Wu, *A discrete uniformization theorem for polyhedral surfaces II*, J. Differential Geom. 109 (2018), no. 3, 431-466. X. D. Gu, F. Luo, J. Sun, T. Wu, *A discrete uniformization theorem for polyhedral surfaces*, J. Differential Geom. 109 (2018), no. 2, 223-256. R. Guo, *Local rigidity of inversive distance circle packing*, Trans. Amer. Math. Soc. 363 (2011) 4757-4776. I. Izmestiev, R. Prosanov, T. Wu, *Prescribed curvature problem for discrete conformality on convex spherical cone-metrics*, [arXiv:2303.11068 \[math.MG\]](https://arxiv.org/abs/2303.11068). H. Kouřimská, *Polyhedral surfaces of constant curvature and discrete uniformization*. PhD thesis, Technische Universität Berlin, 2020. H. Kouřimská, *Discrete Yamabe problem for polyhedral surfaces*, Discrete Computational Geometry 70 (2023), 123-153. F. Luo, *Combinatorial Yamabe flows on surfaces*, Commun. Contemp. Math. 6 (2004), no. 5, 765-780. F. Luo, *Rigidity of polyhedral surfaces, III*, Geom. Topol. 15 (2011), 2299-2319. J. Milnor, *Hyperbolic geometry: The first 150 years*, Bull. Amer. Math. Soc. 6 (1982) 9-24. John G. Ratcliffe, *Foundations of hyperbolic manifolds*. Second edition. Graduate Texts in Mathematics, 149. Springer, New York, 2006. xii+779 pp. ISBN: 978-0387-33197-3; 0-387-33197-2. B. Springborn, *A variational principle for weighted Delaunay triangulations and hyperideal polyhedra*. J. Differential Geom. 78 (2008), no. 2, 333-367. B. Springborn, *Ideal hyperbolic polyhedra and discrete uniformization*. Discrete Comput. Geom. 64 (2020), no. 1, 63-108. X. Xu, *Rigidity of inversive distance circle packings revisited*, Adv. Math. 332 (2018), 476-509. X. Xu, *A new proof of Bowers-Stephenson conjecture*, Math. Res. Lett. 28 (2021), no. 4, 1283-1306. [^1]: MSC (2020): 52C26
arxiv_math
{ "id": "2309.05215", "title": "A discrete uniformization theorem for decorated piecewise Euclidean\n metrics on surfaces", "authors": "Xu Xu, Chao Zheng", "categories": "math.DG math.GT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We shall show that there exists only one duality pair for ordered graphs. We will also define a corresponding definition of $\chi$-boundedness for ordered graphs and show that all ordered graphs are $\chi$-bounded and prove an analogy of Gyarfás-Sumner conjecture for ordered graphs. We shall conclude with showing that these results hold for directed ordered graphs, but they do not hold for oriented ordered graphs. author: - | Michal Čertı́k, Jaroslav Nešetřil\ Computer Science Institute, Faculty of Mathematics and Physics\ Charles University\ Prague, Czech Republic\ bibliography: - references.bib title: Duality and $\chi$-Boundedness of Ordered Graphs --- # Introduction An *Ordered Graph* is an undirected graph whose vertices are totally ordered. Thus, ordered graph $G$ is a triple $G=(V,E,\le_G)$ (see figure [1](#fig:OrdHomsInterval){reference-type="ref" reference="fig:OrdHomsInterval"}). Ordered graphs are interesting objects which have been studied in the context of extremal problems ( [@Pach2006]), Ramsey theory ( [@Balko_2019]) and structural graph theory ( [@bonnet2021twinwidthI]). They also provide an important class of objects in model theory (NIP classes which are not stable, see e.g. [@simon2014guide]). Ordered structures are also a key object in structural Ramsey theory (see e.g. [@Hubi_ka_2019], [@nesetrilrodlstatistics2017]). In this paper we consider homomorphisms of ordered graphs. These are defined as edge- and order-preserving mappings and they are naturally related to ordered chromatic number (which in turn naturally relates to extremal results, see e.g. [@Pach2006]). We characterize homomorphism dualities and we also consider Gyarfás-Sumner type problems ($\chi$-boundedness) and in the context of ordered graphs we fully characterize $\chi$-boundedness. # Preliminaries and Statement of Results For ordered graphs $G=(V,E,\le_G)$ and $G'=(V',E',\le_{G'})$, an *Ordered Homomorphism* is a mapping $f:V\to V'$ preserving both edges and orderings. Explicitly, $f$ satisfies 1. $f(u)f(v) \in E'$ for all $uv \in E$, 2. $f(u) \le_{G'} f(v)$ whenever $u \le_{G} v$. The existence of ordered homomorphism will be denoted by $G\to G'$ (see figure [1](#fig:OrdHomsInterval){reference-type="ref" reference="fig:OrdHomsInterval"}). Ordered homomorphisms compose and thus most of the categorical definitions can be considered without any changes, see e.g. [@HellNesetrilGraphHomomorphisms]. Particularly we can define a *core* of a graph $G$ as the smallest subgraph $H$ of $G$ such that $G\to H$. (Equivalently, this is the smallest retract of $G$.) Let us now define an *independent interval* as a set of independent vertices, explicitly, if $\le_G$ is given as $v_1, v_2, \ldots, v_n$, then an $[i,j]$ independent interval in $G$ is the set $\{v_i,v_{i+1},\ldots,v_j\}$ which does not contain any edge of $G$ (see figure [1](#fig:OrdHomsInterval){reference-type="ref" reference="fig:OrdHomsInterval"}). Further on, we may call independent interval simply, an interval. ![Ordered Homomorphism $f$ and Independent Intervals.](OrderedHoms.png){#fig:OrdHomsInterval} The *(ordered) chromatic number* $\chi(G)$ is then the minimum $k$ such that $V(G)$ can be partitioned to $k$ disjoint intervals. Notice that for ordered graphs this is the size of the smallest homomorphic image and, alternatively, the minimum $k$ such that $G\to K_k$. ($K_k$ is of course the complete graph with a fixed linear ordering.) We shall call $\chi(G)$ also a *colouring* of $G$. 
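Since ordered homomorphisms are used throughout what follows, a small computational illustration may help. The sketch below is ours and not part of the paper: vertices of each ordered graph are encoded as $0,1,\ldots,n-1$ in their linear order, and the function name is our own. It simply checks conditions 1. and 2. above for a candidate map.

```python
def is_ordered_homomorphism(f, edges_G, edges_H):
    """Check that f : V(G) -> V(H) preserves edges and the linear order.
    Vertices of each ordered graph are 0, 1, ..., n-1 listed in their order;
    f is a dict, and edges are given as pairs of vertices."""
    H_edges = {frozenset(e) for e in edges_H}
    preserves_edges = all(frozenset((f[u], f[v])) in H_edges for u, v in edges_G)
    preserves_order = all(f[u] <= f[v] for u in f for v in f if u <= v)
    return preserves_edges and preserves_order

# The 4-vertex ordered graph with edges {0,1} and {2,3} maps onto the ordered
# path 0 < 1 < 2 (edges {0,1}, {1,2}) by sending the middle vertices 1 and 2
# to the same image, i.e. by contracting an independent interval.
f = {0: 0, 1: 1, 2: 1, 3: 2}
print(is_ordered_homomorphism(f, [(0, 1), (2, 3)], [(0, 1), (1, 2)]))  # True
```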
As opposed to the chromatic number of (unordered) graphs, the ordered chromatic number can be determined by a simple greedy algorithm. This is formulated below as Proposition [Proposition 1](#prop:GreedyAlg){reference-type="ref" reference="prop:GreedyAlg"} (which may be folklore). The *Greedy Algorithm* is the natural one: process the vertices in the given order and colour each vertex by the smallest available colour which fits the rule. What is the rule? For a graph $G$ with ordering $\le_G$, each colour has to be an interval in $\le_G$. We have the following: **Proposition 1**. *For every ordered graph $G$ the greedy algorithm finds $\chi(G)$.* *Proof.* Put $\chi(G)=k$. By construction, the greedy algorithm finds a $k'$-colouring for some $k'\ge k$. We prove $k'=k$ by induction on $k$. For $k=1$ the graph is just a single interval. In the induction step let $\chi(G)=k+1$ and let $I_1, I_2, \ldots, I_{k+1}$ be an optimal colouring. Consider the interval $I'_1$ given by the greedy algorithm. Clearly $I_1\subseteq I'_1$, and thus $G'=G-I'_1$ (with the vertex set $I'_1$ deleted) satisfies $\chi(G')=k$; by the induction hypothesis the greedy algorithm also produces a $k$-colouring of $G'$. Hence the greedy algorithm colours $G$ with $k+1$ colours. ◻ Following the standard definition of the complexity class $\mathcal{P}$ as in [@AroraBarak-Complexity], the following result then follows. **Corollary 2**. *Let $G$ be an ordered graph. Then determining $\chi(G)$ is in $\mathcal{P}$.* *Proof.* The greedy algorithm goes over all vertices of $G$ and at each step checks whether the current vertex is connected to some of the preceding vertices, so the complexity of the algorithm is at most $\mathcal{O}(|G|^2)$. ◻ For the purposes of further exploration we define a *double* as a pair of consecutive vertices in an ordered graph connected by an edge. Then $M_k$ will be an ordered graph that has $k$ doubles, $k$ edges, and $2k$ vertices, and we will call $M_k$ an *ordered matching*. Let us now define a *(Singleton) Homomorphism Duality* as a pair of graphs $F, D$ satisfying $$F\not\to G \text{ if and only if } G\to D$$ for every graph $G$. For graphs and relational structures the dualities are characterized in [@NESETRILTARDIF200080]. In Theorem [\[thm:Uniq\]](#thm:Uniq){reference-type="ref" reference="thm:Uniq"} we provide the corresponding result in the ordered setting. The key role is played by the ordered matching. Note that ordered matchings also play a role in the Ramsey context ([@balko2023ordered]). In Section [4](#chapt:ChiBound){reference-type="ref" reference="chapt:ChiBound"} we deal with the question of which subgraphs are unavoidable in graphs of large chromatic number. Formulated dually, we ask when the chromatic number of a graph is bounded as a function of its subgraphs (for unordered graphs this amounts to bounding the chromatic number as a function of the clique number, which leads to $\chi$-bounded classes and the Gyárfás--Sumner conjecture). For ordered graphs the situation is easier, and we prove the analogous statement, Theorem [\[thm:Gyarfas\]](#thm:Gyarfas){reference-type="ref" reference="thm:Gyarfas"}, with the following definition of some of such unavoidable graphs. **Definition 1**. *Ordered Matching* $M_n$ has points $a_i, b_i, i=1,\ldots, n$, with ordering $a_1<b_1<a_2<b_2<\ldots<a_n<b_n$ and edges $\{a_i,b_i\}, i=1,\ldots,n$. The $a_i$ are left vertices, the $b_i$ are right vertices. - $M^{LR}_n$ is $M_n$ together with all edges $\{a_i, b_j\}, i<j$. - $M^{RL}_n$ is $M_n$ together with all edges $\{b_i, a_j\}, i<j$. - $M^{+}_n$ is just $M^{LR}_n \cup M^{RL}_n$.
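Returning to the greedy algorithm of Proposition 1 for a moment, the procedure is short enough to state as code. The following Python sketch is our own illustration (the vertex encoding as $0,\ldots,n-1$ in their order and the function name are ours): each vertex either extends the interval currently being built or, if it has a neighbour inside it, opens a new interval, and the membership test is what gives the $\mathcal{O}(|G|^2)$ bound of Corollary 2.

```python
def ordered_chromatic_number(n, edges):
    """Greedy colouring of an ordered graph with vertices 0,...,n-1 listed in
    their linear order.  Each colour class must be an independent interval, so
    a vertex either extends the interval currently being built or, if it has a
    neighbour inside it, starts a new interval (optimal by Proposition 1)."""
    neighbours = [set() for _ in range(n)]
    for u, v in edges:
        neighbours[u].add(v)
        neighbours[v].add(u)
    k = 0
    current = set()                       # vertices of the interval being built
    for v in range(n):
        if k == 0 or current & neighbours[v]:
            k += 1                        # open a new interval
            current = {v}
        else:
            current.add(v)
    return k

# M_3: vertices a1 < b1 < a2 < b2 < a3 < b3 (encoded 0..5) with edges {a_i, b_i}.
print(ordered_chromatic_number(6, [(0, 1), (2, 3), (4, 5)]))   # 4
```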
![$M_n, M^{LR}_n, M^{RL}_n$ and $M^{+}_n$](ForbMn.png){#fig:MnMLR} In Section [5](#chapt:ChiBoundDirected){reference-type="ref" reference="chapt:ChiBoundDirected"} we extend our results to oriented and directed graphs. We show that the general pattern for directed graphs is similar, while for the oriented graphs it does not hold. To conclude the preliminaries, let us also define $P_m$ as an ordered graph on $m$ vertices and $E(P_m)=\{v_iv_{i+1}|i=1,\ldots,m-1;v_i\in V(P_m)\}$. We will call $P_m$ a *directed path*. And we define an edge set $E'$ in an ordered graph as *non-intersecting* if for every two distinct edges $e_1=\{v_1,v_2\}, v_1<v_2$ and $e_2=\{v_3,v_4\}, v_3<v_4$ in $E'$, either $v_2\le v_3$ or $v_4\le v_1$ (we again use the intuition of (in this case open) intervals). **Observation 3**. *Let $G$ be an ordered graph with $k$ non-intersecting edges. Then if there exists an ordered homomorphism from $G$ to $H$, $H$ must also contain at least $k$ non-intersecting edges.* *Proof.* By definition, the ordered homomorphism from $G$ to $H$ must preserve an ordering $<_G$ as well as edges of $G$, therefore the observation follows. ◻ # Dualities of Ordered Graphs {#chapt:Dualities} We shall show now that the following duality is the only duality of ordered graphs. thmduality [\[thm:Uniq\]]{#thm:Uniq label="thm:Uniq"} Let $G$ be any ordered core. Then $M_k$ and $K_k$ is the only pair of ordered cores satisfying $M_k\not\to G$ if and only if $G\to K_k$. *Proof.* We will first show that $M_k$ and $K_k$ are a singleton. From Observation [Observation 3](#Obs:ConsEdges){reference-type="ref" reference="Obs:ConsEdges"}, we see that $M_k\not\to K_k$, as $K_k$ has at most $k-1$ non-intersecting edges and $M_k$ would need at least $k$. Therefore if $M_k\to G$ then $G\not\to K_k$. On the other hand, if $M_k\not\to G$, then we use the greedy algorithm on $G$ and Proposition [Proposition 1](#prop:GreedyAlg){reference-type="ref" reference="prop:GreedyAlg"} to get ordered graph $A_g$ on minimum number of $g$ vertices. $A_g$ will of course contain a directed path $P_{g}$ (as we took all the partition classes maximal). It is clear that $g-1<k$, as otherwise $M_k\to A_g$, and therefore also to $G$, as $M_k\to A_g$ maps non-intersecting edges of $M_k$ to non-intersecting edges of $A_g$, therefore $M_k$ could map to these edges in $G$. Hence as $G\to A_g \to K_k$ (as $A_g$ is on $k$ or less vertices), we get $G\to K_k$. Now we shall prove that there does not exist any other duality pair. Let $F$ and $H$ be ordered cores satisfying $F\not\to G$ if and only if $G\to H$ and $F$ and $H$ are not equal to $M_k$ and $K_k$, resp. Let $G$ be any ordered matching. We see that if $F\to G$, then $F$ must contain partitions that map to an ordered matching. But then $G$ and $F$ must contain an ordered matching $M_k$, for which the ordered homomorphism $F$ to $M_k$ is onto. But then $M_k$ is the core of $F$. Now, if there is no homomorphism from $F$ to any ordered matching, then there does not exist any ordered graph $H$, for which there exists ordered homomorphism from ordered matching to $H$, as ordered matchings can require a homomorphic image of any size. E.g. already for $F$ being a $P_3$ or $K_3$, with $G$ being an ordered matching, we can see that there is no ordered homomorphism from $F$ to $G$, yet we can get $\chi(G)$ of any size by increasing $n$ in $M_n$. We also see that if we forbid an ordered homomorphism from $M_k$ to $G$, then $K_k$ is the only graph satisfying $M_k\not\to G$ if and only if $G\to K_k$. 
◻ # $\chi$-boundedness of Ordered Graphs {#chapt:ChiBound} For ordinary graphs, we recall that a $\chi$-bounded family $\mathcal{F}$ of graphs is one for which there is a function $f$ such that $\chi(G)\le f(\omega(G))$ for every $G\in \mathcal{F}$, where $\omega(G)$ denotes the size of a maximum clique in $G$. We could adopt this notion simply by replacing ordinary graphs with ordered graphs and homomorphisms with ordered homomorphisms. However, the class of all ordered graphs is not $\chi$-bounded in this sense, as already the ordered matchings have arbitrarily large chromatic number while their clique number is two. The standard notion of $\chi$-boundedness will therefore not work. Adopting the notion of $\chi$-bounded graphs from the ordinary graphs nonetheless, we replace the maximum clique size in the definition of $\chi$-boundedness with the size $n$ of a maximum ordered matching $M_n$. We can then define the analogue of $\chi$-boundedness for ordered graphs as follows. A $\chi^<$*-bounded* family $\mathcal{F}$ of ordered graphs is one for which there is a function $f$ such that $\chi(G)\le f(n)$ for every $G\in\mathcal{F}$, where $n$ is the size of a maximum ordered matching subgraph $M_n$ of $G$ (maximum in the sense that there is no $k>n$ such that $M_k$ is an ordered matching subgraph of $G$). Without any risk of confusion, we shall denote $\chi^<$-bounded simply as $\chi$-bounded further on. We will now show that all ordered graphs are $\chi$-bounded. **Theorem 4**. *Let $G$ be an ordered graph. Let $M_k$ be a maximum ordered matching subgraph of $G$. Then $\chi(G)\le 2k+1$.* *Proof.* Let us choose a subgraph $M_k$ of $G$, where $k$ is maximum, and run the Greedy Algorithm on $G$. As a result of this procedure we get an ordered graph $A_g$, where $|A_g|=\chi(G)$ by [Proposition 1](#prop:GreedyAlg){reference-type="ref" reference="prop:GreedyAlg"}. As in [\[thm:Uniq\]](#thm:Uniq){reference-type="ref" reference="thm:Uniq"}, $A_g$ will contain a directed path $P_{g}$. We also see from Observation [Observation 3](#Obs:ConsEdges){reference-type="ref" reference="Obs:ConsEdges"} that the $M_k$ edges map to non-intersecting edges in $A_g$, as they were non-intersecting in $G$ (where in $G$ they did not share a common vertex). Now let us assume that $\chi(G)>2k+1$. Then $A_g$ has at least $2k+2$ vertices and contains $P_{2k+2}$. But then $P_{2k+2}$ contains an ordered matching of size $k+1$, which should have been chosen in $G$ initially. ◻ Let us now try to prove a stronger (induced) version of the statement, again borrowing an idea from ordinary graphs - the famous *Gyárfás--Sumner conjecture*. The conjecture states that for every tree $T$ and complete graph $K$, the graphs with neither $T$ nor $K$ as induced subgraphs can be properly colored using only a constant number of colors. We shall replace the tree with the previously introduced forbidden structures and prove the following statement. thmchibound [\[thm:Gyarfas\]]{#thm:Gyarfas label="thm:Gyarfas"} There exists a function $f:\mathbb{N}\to\mathbb{N}$ with the following property: Let $G$ be an ordered graph not containing any of the following graphs as induced subgraphs: $$K_n, M_n, M^{RL}_n, M^{+}_n.$$ Then $\chi(G)\le f(n)$. Before proving the theorem, let us first call a set of ordered graphs *non-induced* if, for every two ordered graphs within this set, neither is an induced subgraph of the other.
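As a computational aside, the parameter $k$ appearing in Theorem 4 — the size of a largest ordered matching subgraph — is itself easy to compute, since an ordered matching subgraph is exactly a collection of vertex-disjoint edges whose spans appear strictly one after another along the order. The sketch below is ours and not taken from the paper; it uses the standard interval-scheduling greedy (earliest right endpoint), which is optimal for this kind of selection. Together with the greedy colouring sketch above, it can be used to spot-check the bound $\chi(G)\le 2k+1$ on small examples.

```python
def max_ordered_matching(edges):
    """Largest k such that the ordered matching M_k is a subgraph: the maximum
    number of edges whose vertex pairs lie strictly one after another along the
    order.  Greedy interval scheduling by earliest right endpoint."""
    spans = sorted((max(e), min(e)) for e in edges)
    k, last_right = 0, float("-inf")
    for right, left in spans:
        if left > last_right:        # this edge starts after the previous one ends
            k += 1
            last_right = right
    return k

# M^+_3 on vertices 0..5: the three doubles {0,1}, {2,3}, {4,5} plus all LR and
# RL edges still leave a maximum ordered matching of size 3, so Theorem 4 gives
# chi(M^+_3) <= 7.
doubles = [(0, 1), (2, 3), (4, 5)]
extra = [(0, 3), (0, 5), (2, 5), (1, 2), (1, 4), (3, 4)]
print(max_ordered_matching(doubles + extra))   # 3
```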
We prove that $M_n, K_m, M^{RL}_k, M^{+}_l, n,k\ge 2, m,l\ge 3$ are non-induced ordered graphs and we determine a coloring of them. **Proposition 5**. *Let $k,l,m,n\in \mathbb{N}, n,k\ge 2, m,l\ge 3$. Then $M_n, K_m, M^{RL}_k, M^{+}_l$ are non-induced ordered graphs and $\chi(M_n)=n+1, \chi(K_m)=m, \chi(M^{RL}_k)=2k, \chi(M^{+}_l)=2l$.* *Proof.* Let us first define a *size of an edge $\{v_i,v_{i+j}\}$* as $j$ and a *distance* in between vertices $v_i$ and $v_{i+j}$ as $j$ as well. We shall start with an easy observation that none of the ordered graphs $K_m, M^{RL}_k, M^{+}_l$ can be an induced subgraph of $M_n$. This is due to the fact that there are always more edges in these graphs than in any induced subgraph of $M_n$ that has more than $3$ vertices (as $K_m, M^{RL}_k, M^{+}_l, m\ge 2, m,l\ge 3$ have more than three vertices). We can also see that none of $M_n, M^{RL}_k, M^{+}_l$ can be an induced subgraph of $K_m$, as $M_n, M^{RL}_k, M^{+}_l$ are at least on four vertices and their first four vertices do not form a complete graph, which is always the case for any induced subgraph of $K_m$. Let us show that $M^{RL}_k$ does not contain $M_n, K_m, M^{+}_l$ as induced subgraphs. We notice that $M^{RL}_k$ contains $2k$ vertices and directed path $P_{2k}$ on all of these vertices. We will call the set of edges in $P_{2k}$ the *path edges* and we will call all the other edges (of size longer than one) *long edges*. We shall notice that all of the path edges are of size one and all the long edges are of odd size of at least three (from definition). Therefore there are no edges of even size. We will first show that $M^{RL}_k$ does not contain a triangle. As there are no edges of even size in $M^{RL}_k$, every edge will connect even and odd vertex. As every triangle needs three vertices, it will contain at least two even or two odd vertices. But these vertices would need an even edge to connect them. Let us now show that $M^{RL}_k$ does not contain induced ordered matching of size two. There are three possible cases of such a matching. Having both path edges, both long edges or having one path edge and one long edge of $M^{RL}_k$, all pairs of edges of course non-intersecting (without common vertex). Let us first introduce a notation for a path edge as $e_i=\{v_i,v_{i+1}\}$. In case the induced matching of size two would be containing both path edges, these edges of course could not be consecutive ($e_i$ and $e_{i+1}$) and they could not be edges $e_i$ and $e_{i+2}$ as they would have another path edge $e_{i+1}$ in between them. Therefore, these edges would need to be $e_i$ and $e_{i+j}$ with $j>2$. But all of these path edges are connected by a long edge. If we consider an induced matching of size two consisting of two long edges, we notice that these will of course need to have each left vertex of each of these edges at different even vertices. But we see that every two non-intersecting edges with different even left vertex will be connected by an edge connecting the first vertex of the first edge and the second vertex of the second edge (as their distance is odd and greater than two). The last case of induced matching of size two consists of one path edge $r$ and one long edge $s$. We notice that there are two case. One where path edge $r$ is on the left and one where it is on the right side of the long edge $s$. 
In case path edge $r$ is on the left, long edges coming from the even vertex of path edge $r$ (whether it is the first or the second vertex of it) will connect $r$ with the second vertex of the long edge $s$, again as long edges in $M^{RL}_k$ connect even vertex on the left and odd vertex on the right, if the distance in between them is longer than two. If path edge $r$ is on the right, we will notice that its odd vertex will connect it with left vertex of every long edge $s$ on the left side of the path edge $r$, as this vertex is even and distance in between them is again longer than two. Let us show that $M^{RL}_k$ does not contain $M^{+}_l$ as an induced subgraph. We see that by taking vertices $b_1, a_2, b_2, a_3$ in $M^{RL}_k, k\ge 3$ we get $M^{+}_2$. Therefore $M^{+}_l$ needs to have $l\ge 3$. We can also observe that in $M^{RL}_k, k\ge 2$ every edge (path or long) connects two vertices, out of which one vertex can have long edges connecting this vertex to the vertices on the right and another vertex does not have any long edges connecting it to the vertices on the right (by definition). Another observation is that when choosing an induced subgraph $G$ of $M^{RL}_k, k\ge 2$, this subgraph will need to contain a directed path $P_{|G|}$, as $M^{+}_l, l\ge 3$ always contains a directed path $P_{|M^{+}_l|}$. We then observe that $M^{+}_l$ always contains edges $(a_1,b_1),(a_1,b_n),(a_n,b_n),(b_1,a_n)$, where $(a_1,b_n),(b_1,a_n)$ are long edges connecting the first vertex with the last one and second vertex with the one vertex before the last one. But then whichever induced subgraph $G$ we take in $M^{RL}_k, k\ge 2$, the first two vertices will need to be connected, as there needs to be a path $P_{|G|}$ within $G$. $G$ will also need to contain at least $6$ vertices (because of the minimum order of $M^{+}_l, l\ge 3$), and therefore first two vertices of $G$ will need to be connected to the last two vertices by long edges of size $|G|-1$ and $|G|-3$. But as we have shown, every edge in $M^{RL}_k$ contains a vertex, which is not connected to the vertices on the right by a long edge. Therefore as the first and second vertex in $G$ are connected by a path edge, one of the first two vertices in $G$ is not connected to neither of the last two vertices. Let us show that $M^{+}_l, l\ge 3$ does not contain $M_n, K_m, M^{RL}_k, n,k\ge 2, m\ge 3$ as induced subgraphs. The proof of the first two ordered induced subgraphs $M_n, K_m$ goes essentially the same as before. We notice that $M^{+}_l$ contains $2l$ vertices and directed path $P_{2l}$. Again, all of the path edges are of size one and all the long edges are of odd size of at least three (by definition). Therefore, there are no edges of even size also in $M^{+}_l$. We will first show that $M^{+}_l$ does not contain a triangle. As observed before, as there are no edges of even size in $M^{+}_l$ either, every edge will connect even and odd vertex. As every triangle needs $3$ vertices, it will again contain at least two even or two odd vertices and these vertices would need an even edge to connect them. Let us now show that $M^{+}_l$ does not contain induced matching of size $2$. There are again three possible cases of such an induced matching - having both path edges, both long edges or having one path edge and one long edge in $M^{+}_l$, all the three cases again non-intersecting without common vertex in $M^{+}_l$. 
In case the induced matching of size $2$ contained both path edges, we notice that these edges could not be consecutive again ($e_i$ and $e_{i+1}$) and they could not be the edges $e_i$ and $e_{i+2}$, as they would have another path edge $e_{i+1}$ in between them. Therefore, these edges would need to be $e_i$ and $e_{i+j}$ with $j>2$. But all such pairs of path edges are connected by a long edge. If we consider the induced matching of size $2$ consisting of two long edges, then there are 3 cases. - The left vertex of each of the long edges is at a different odd vertex. - The left vertex of each of the long edges is at a different even vertex. - The left vertex of one of the long edges is at an odd vertex and the left vertex of the other long edge is at an even vertex. For the first case, we see that every two long edges with different odd left vertices will be connected by an edge in between the first vertex of the first edge and the second vertex of the second edge (and also by an edge in between the second vertex of the first edge and the first vertex of the second edge). For the second case, we see that every two long edges with different even left vertices will also be connected by an edge in between the first vertex of the first edge and the second vertex of the second edge (and also by an edge in between the second vertex of the first edge and the first vertex of the second edge). In the last case, these edges will be connected by an edge in between the first vertex of the first edge and the first vertex of the second edge (and also by an edge in between the second vertex of the first edge and the second vertex of the second edge). The last case of an induced matching of size $2$ consists of one path edge $r$ and one long edge $s$. We notice that there are two cases: one where the path edge $r$ is on the left and one where it is on the right side of the long edge $s$. In case the path edge $r$ is on the left, we can see that the vertices of $r$ connect this edge to all the vertices on the right of $r$ (by definition), therefore $r$ must be connected to $s$ by an edge in $M^{+}_l$. If the path edge $r$ is on the right, we notice that $s$ has one odd and one even vertex. But then also $s$ is connected to all the vertices on the right of $s$ (again by definition), so $s$ must be connected to $r$ by an edge in $M^{+}_l$. For the proof that $M^{+}_l, l\ge 3$ does not contain $M^{RL}_k, k\ge 2$ as an induced subgraph, we first notice that the first vertex of $M^{RL}_k$ is not connected to any other vertex on its right by a long edge. We also notice that $M^{RL}_k$ has at least four vertices and that $M^{RL}_k$ always contains a directed path $P_{2k}$. Therefore any induced ordered subgraph $G$ of $M^{+}_l$ will also always need to contain a path $P_{4}$ on its first $4$ vertices. We also notice that for every vertex $v_i$ in $M^{+}_l$, the vertex $v_i$ is connected to all the vertices on the right of $v_i$ whose distance from $v_i$ is odd and greater than $2$ (by definition). This means that, as the edges in $M^{+}_l$ are all of odd size, whichever edges we pick in $M^{+}_l$ to form the directed path $P_4$ on the first four vertices of $M^{RL}_k$, the sum of the sizes of these edges in $M^{+}_l$ will be odd and greater than two, as the sum of three odd numbers is always odd and greater than two.
But then the first four vertices of $G$ must have the first and the fourth vertex connected, and as these can be connected only by a long edge in $G$, we observed that this cannot be the case for $M^{RL}_k$, as $M^{RL}_k$ does not have the first vertex connected to any vertex on its right by a long edge. The last thing to show is the coloring of these ordered graphs. But this is clear, as for $M_n, n\ge 2$ we map each pair of vertices $b_i, a_{i+1}, 1\le i\le n-1$, to one interval (this is equivalent to running the greedy algorithm on $M_n$) and get a directed path $P_{n+1}$ as a minimum homomorphic image of $M_n, n\ge 2$. For $M^{RL}_k, M^{+}_l, k\ge 2, l\ge 3$ this is also straightforward, as these ordered graphs contain directed paths $P_{2k}$ and $P_{2l}$, respectively, and the coloring of $K_m, m\ge 3$, is $m$. ◻ *Proof of Theorem [\[thm:Gyarfas\]](#thm:Gyarfas){reference-type="ref" reference="thm:Gyarfas"}.* We see from Theorem [Theorem 4](#thm:chiboundedness){reference-type="ref" reference="thm:chiboundedness"} that forbidding ordered matchings as subgraphs can bound the coloring of ordered graphs, and from Theorem [Theorem 4](#thm:chiboundedness){reference-type="ref" reference="thm:chiboundedness"} and Theorem [\[thm:Uniq\]](#thm:Uniq){reference-type="ref" reference="thm:Uniq"} that as the coloring of our input ordered graph $G$ grows, so must the ordered matching subgraph $M_k$ of $G$, and there are no other ordered graphs with this property. Let us now take this ordered matching subgraph $M_k$ of $G$ and consider the induced subgraph $\tilde{M}_k$ of $G$ on the vertices of $M_k$. We see that there can be four different kinds of edges in between two doubles of $\tilde{M}_k$, and we denote them as follows: - an $LR$ edge, if it connects the left vertex of the first double with the right vertex of the second double. - an $RL$ edge, if it connects the right vertex of the first double with the left vertex of the second double. - an $LL$ edge, if it connects the left vertex of the first double with the left vertex of the second double. - an $RR$ edge, if it connects the right vertex of the first double with the right vertex of the second double. We therefore have $2^4=16$ possible ways in which two doubles can be connected (one option being no edges in between them). Let us denote the set of edges connecting two doubles by e.g. $\{LR,RL,RR\}$ if these two doubles are connected by the edges $LR, RL$ and $RR$, and analogously for other sets of edges connecting two doubles. Let us take an ordered graph where all the doubles are connected by the same set of edges (one of the $16$). Then these $16$ different ordered graphs correspond to the following five induced graphs: - If the $n$ doubles are not connected by any edge (the empty set $\{\}$), this results in an induced ordered matching $M_n$. - If the $n$ doubles are all connected to each other by an edge $\{LR\}$, we get an ordered graph $M^{LR}_n$. - If the $n$ doubles are all connected to each other by an edge $\{RL\}$, we get an ordered graph $M^{RL}_n$. - If the $n$ doubles are all connected to each other by edges $\{LR, RL\}$, we get an ordered graph $M^{+}_n$. - The rest of the cases are those where the $n$ doubles are all connected to each other by the edge set $\{LL\}$ or $\{RR\}$, or by any other set of edges containing one of these edges. $$\begin{aligned} & \{LL,RR\}, \{LL,RL\}, \{LL,LR\}, \{RR,RL\}, \{RR,LR\}, \{LL,RR,RL\}, \\ &\{LL,RR,LR\}, \{LL,RL, LR\}, \{RR,RL,LR\}, \{LL,RR,LR, RL\} \end{aligned}$$ Then we get a complete ordered graph $K_n$ (or a bigger complete ordered graph, e.g.
$K_{2n}$ on the $\{LL,RR,LR, RL\}$ edge set). We can easily see this, as if all the doubles are connected via the vertex on the same side of each double, these vertices are pairwise connected across all doubles and therefore form a complete graph. Ramsey's Theorem tells us that for any given finite number of colours, $c$, and any given integers $n_1,\ldots,n_c$, there is a number, $R(n_1,\ldots,n_c)$, such that if the edges of a complete graph of order $R(n_1,\ldots,n_c)$ are coloured with $c$ different colours, then for some $i$ between $1$ and $c$, it must contain a complete subgraph of order $n_i$ whose edges are all of colour $i$. We notice that the induced subgraph $\tilde{M}_k$ of $G$ can be thought of as such a complete graph $J$, where each double is regarded as a vertex of $J$, and the $2^4=16$ possible edge connections are regarded as $16$ possible edge colors of $J$. Then, using Ramsey's Theorem, we see that for any $j\in \mathbb{N}$, a sufficiently large ordered graph $J$ must contain, for at least one of the $16$ colors, a complete subgraph of order $j$ with all edges of that color. This complete graph with all edges of the same color will then correspond to $j$ doubles in $\tilde{M}_k$ that are all connected to each other by the same edge set, which will result in one of the five induced ordered subgraphs of $\tilde{M}_k$. Therefore, as the coloring of $G$ grows and consequently $\tilde{M}_k$ grows (as we have shown before), at least one of the five resulting ordered graphs listed above must grow as well. We showed in Proposition [Proposition 5](#prop:non-chibound){reference-type="ref" reference="prop:non-chibound"} that all of the ordered graphs $M_n, K_m, M^{RL}_k, M^{+}_l, n,k\ge 2, m,l\ge 3$ are non-induced. We have also shown that their coloring grows with their size. The last thing to show is that the set of ordered graphs $M_n, K_m, M^{RL}_k, M^{+}_l$ together with the ordered graph $M^{LR}_r$ is not non-induced. But this can be seen by taking the vertices $b_1, a_2, b_3, a_4, b_5, a_6, \ldots$ of $M^{RL}_k$ and observing that the resulting induced ordered subgraph is $M^{LR}_r$. Therefore the set of induced ordered graphs $M_n, K_n, M^{RL}_n, M^{+}_n$ of $G$ is indeed minimal and sufficient to limit the size of $\chi(G)$. ◻ As mentioned before, this answers the analogue of the Gyárfás--Sumner conjecture for $\chi$-boundedness of ordered graphs, with the forbidden structures (a tree and a clique) of the original conjecture replaced by our four graph classes. # $\chi$-boundedness of Oriented and Directed Ordered Graphs {#chapt:ChiBoundDirected} Let us now have a look at an analogue of the previous results for the following structure. Let us allow single and double orientations of edges between two vertices of an ordered graph $G$ (single in the sense that there is only one arrow between the two vertices, and double when there are two arrows in both directions); we shall call this a *directed ordered graph* and denote it $\overleftrightarrow{G}$. Let us then also define $\overleftrightarrow{K}_n$ as a complete ordered graph with each edge having double orientation. We then define a *directed colouring* $\overleftrightarrow{\chi}(\overleftrightarrow{G})$ as the minimum $n$ such that there exists an ordered homomorphism from $\overleftrightarrow{G}$ to $\overleftrightarrow{K}_n$. We notice that this notion leads to similar results as the coloring of ordered graphs. **Theorem 6**.
*Let $G$ be an ordered graph and $\overleftrightarrow{G}$ a corresponding directed ordered graph with any orientation of the edges of $G$. Then $\chi(G)=\overleftrightarrow{\chi}(\overleftrightarrow{G})$.* *Proof.* Let us assume an ordered homomorphism from $G$ to $K_n$. We can see, that if we impose left, right or double orientation on edge of $G$, this edge can always map to the same edge in $\overleftrightarrow{K}_n$ as it did in $K_n$. We can also easily see that if there is an ordered homomorphism $\overleftrightarrow{G}$ to $\overleftrightarrow{K}_n$, then we can simply forget the directions and replace them by simple edges and we get the same homomorphism from $G$ to $K_n$. ◻ We might therefore denote $\overleftrightarrow{\chi}$ simply as $\chi$. We can then of course adopt similar concept for $\overleftrightarrow{M_k}$ as we did for the ordered matching $M_k$ by imposing an orientation on $M_k$ and defining corresponding $\chi$-boundedness. We shall call $\overleftrightarrow{M_k}$ an *Directed Ordered Matching*. Let us then define the following ordered graphs. *Right Oriented Ordered Matching* $M^R_k$ has vertices $a_i, b_i, i=1,\ldots, k$, with ordering $a_1<b_1<a_2<b_2<\ldots<a_k<b_k$ and edges $(a_i,b_i), i=1,\ldots,k$. *Left Oriented Ordered Matching* $M^L_k$ has points $a_i, b_i, i=1,\ldots, k$, with ordering $a_1<b_1<a_2<b_2<\ldots<a_k<b_k$ and edges $(b_i,a_i), i=1,\ldots,k$. *Directed Ordered Matching* $M^D_k$ has points $a_i, b_i, i=1,\ldots, k$, with ordering $a_1<b_1<a_2<b_2<\ldots<a_k<b_k$ and edges $(a_i,b_i)$ and $(b_i,a_i), i=1,\ldots,k$. Then the following holds. **Corollary 7**. *Let $\mathcal{C}_k$ be a class of directed ordered graphs where for every $\overleftrightarrow{G}\in \mathcal{C}_k$, $\overleftrightarrow{G}$ has the biggest left, right oriented and directed ordered matching subgraph smaller or equal to $M^R_a$, $M^L_b$ and $M^D_c$, resp., where $a+b+c\le k$. Then $\chi(\overleftrightarrow{G})\le 2(a+b+c)+1$ for every $\overleftrightarrow{G}\in \mathcal{C}_k$.* *Proof.* As every directed ordered graph $\overleftrightarrow{M_k}$ consists of $M^R_a$, $M^L_b$ and $M^D_c$, where $(a+b+c)=k$, the statement follows from Theorem [Theorem 4](#thm:chiboundedness){reference-type="ref" reference="thm:chiboundedness"} and [Theorem 6](#thm:direct-same){reference-type="ref" reference="thm:direct-same"}. ◻ Let us now consider an orientation of an ordered graph $G$, where we allow only left or right orientation of edges (and not both). We shall call this an *oriented ordered graph* and denote $\overrightarrow{G}$. The topic has been studied for standard graphs in [@BORODIN199977]. One might hope that the similar approach as we applied for the directed graphs might work for limiting the coloring of the classes of oriented ordered graphs. Perhaps surprisingly, a similar statement does not hold for the oriented ordered graphs. We shall notice first that the coloring of oriented ordered graphs using the intervals will not work. This is due to the fact that oriented ordered graphs do not allow edges with both directions. This will give us cases such as $V(\overrightarrow{G})=(v_1,v_2,v_3), E(\overrightarrow{G})=\{(v_1,v_2),(v_3,v_1)\}$, where $v_2$ and $v_3$ form an interval, but they cannot map into the same vertex, as we would create an edge with both directions. 
We therefore define an *oriented ordered coloring* $\overrightarrow{\chi} (\overrightarrow{G})$ as the order of minimum homomorphism image of oriented ordered graph $\overrightarrow{G}$ (which is in line with the definition of coloring of ordered graphs). Let oriented ordered graph $\overrightarrow{M}_2$ be an ordered matching $M_2$ with any orientation and analogously an oriented ordered graph $\overrightarrow{P}_3$ be an ordered path $P_3$ with any orientation. **Proposition 8**. *For every $n\in \mathbb{N}$ there exists an oriented ordered graph $\overrightarrow{G}$ that has each vertex of degree $1$, order of $\overrightarrow{G}$ is $2n$, $\overrightarrow{G}$ does not contain an oriented ordered subgraph $\overrightarrow{M}_2$ or $\overrightarrow{P}_3$, and has $\chi(\overrightarrow{G})=n+1$.* *Proof.* We shall prove this statement by construction. Let $\overrightarrow{G}$ be an oriented ordered graph on $2n$ vertices $v_1, v_2, \ldots, v_{2n}$ with the following edge set (see Figure [3](#fig:GArrow){reference-type="ref" reference="fig:GArrow"}). $$E(\overrightarrow{G})=\{(v_1,v_{2n}),(v_{2n-1},v_2),(v_3,v_{2n-2}),\ldots\}$$ We see that $\overrightarrow{G}$ does not contain neither $\overrightarrow{M}_2$ nor $\overrightarrow{P}_3$ and that every vertex of $\overrightarrow{G}$ has degree $1$. We notice that every time we map vertices $v_{i-1}$ and $v_{i}, 2\le i\le n$ in ordered homomorphism to one vertex, we cannot map vertices $v_{n+i-1}$ and $v_{n+i}$ to one vertex in the same ordered homomorphism anymore, and vice versa. Therefore we have maximum $n-1$ options to map two vertices into one in ordered homomorphism (as we cannot map vertices $v_n$ and $v_{n+1}$ to one vertex). Therefore the order of minimum homomorphism image of $\overrightarrow{G}$ is $2n-(n-1)=n+1$. ◻ ![$\overrightarrow{G}$ from the Proposition [Proposition 8](#prop:orient-nope){reference-type="ref" reference="prop:orient-nope"}](GArrowRainbow.png){#fig:GArrow} Natural next step would be to investigate a complexity of determining $\overrightarrow{\chi} (\overrightarrow{G})$, but we shall not address it in this paper. We might only mention that greedy algorithm will not help us in this case, as the inability to create an edge with both directions will affect the rule within the greedy algorithm, as we will need to check if by mapping the vertices in an interval into one vertex we shall not create an edge with both directions in our homomorphism image. This will then not always yield an oriented ordered graph, that has number of vertices equal to the coloring of the input oriented ordered graph. An example of oriented ordered graph that would not yield the same homomorphism image after running the (adjusted) greedy algorithm from left or right is a graph $V(\overrightarrow{G})=(v_1,v_2,v_3,v_4,v_5), E(\overrightarrow{G})=\{(v_3,v_1),(v_5,v_1),(v_2,v_4),(v_3,v_5)\}$, where starting the greedy algorithm with the adjusted rule from left would yield a $\overrightarrow{\chi} (\overrightarrow{G})=4$, while starting it from the right would give us a $\overrightarrow{\chi} (\overrightarrow{G})=3$. This makes a theory of ordered graphs yet even more fascinating. We focus on the complexity of finding ordered homomorphims and cores of ordered graphs in prepared papers [@nescerfelrza2023FindingHoms] and [@nescerfelrza2023Core], resp.
arxiv_math
{ "id": "2310.00852", "title": "Duality and $\\chi$-Boundedness of Ordered Graphs", "authors": "Michal \\v{C}ert\\'ik and Jaroslav Ne\\v{s}et\\v{r}il", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A distribution system can flexibly adjust its substation-level power output by aggregating its local distributed energy resources (DERs). Due to DER and network constraints, characterizing the exact feasible power output region is computationally intensive. Hence, existing results usually rely on unpractical assumptions or suffer from conservativeness issues. Sampling-based data-driven methods can potentially address these limitations. Still, existing works usually exhibit computational inefficiency issues as they use a random sampling approach, which carries little information from network physics and provides few insights into the iterative search process. This letter proposes a novel network-informed data-driven method to close this gap. A computationally efficient data sampling approach is developed to obtain high-quality training data, leveraging network information and legacy learning experience. Then, a classifier is trained to estimate the feasible power output region with high accuracy. Numerical studies based on a real-world Southern California Edison network validate the performance of the proposed work. author: - | Qi Li, , Jianzhe Liu, , Bai Cui, ,\  Wenzhan Song, , and Jin Ye,   [^1] [^2] [^3] [^4] bibliography: - distri_flex.bib title: | Distribution System Flexibility Characterization:\ A Network-Informed Data-Driven Approach --- # Introduction Proper coordination of distributed energy resources (DERs) transforms a passive distribution system into an active grid asset. From the grid operation standpoint, it is critical to characterize the distribution system flexibility region, i.e., the set of feasible substation-level power outputs subject to network and component operational constraints. This set is essentially a projection from the high-dimensional DER and network operation region, and, in general, finding its exact characterization is computationally unrealistic [@chen2021leveraging]. A variety of approximation methods to characterize the flexibility region have been developed. For example, a Minkowski sum-based approximation method is proposed; this method is scalable but cannot handle network constraints [@cui2020network]. Ref. [@chen2021leveraging; @cui2020network] use robust optimization methods to find a convex inner approximation of the flexibility set, explicitly considering network constraints and the temporal coupling of the DER operation decisions. Nevertheless, these approximations are conservative, and the shapes of the approximated set are fixed and presumed, which do not necessarily correspond to the actual geometry. Data-driven methods have been investigated as well [@ageeva2019analysis; @taheri2022data]. They usually use a random sampling approach and numerical approaches based on iterative algorithms to find labeled data for training purposes. The sampling and labeling operations could limit the scalability and bring in high computational overhead. To close this gap, we propose a network-informed data-driven approximation approach that exhibits superior scalability. Our main contributions are two-fold. First, unlike existing methods that use iterative algorithms or prescribed approximation shapes, we propose a new approach that uses a highly scalable matrix operation-based classifier to efficiently sketch an approximated region with limited conservativeness. Second, the classifier is obtained by a novel training strategy with high efficiency. As shown in Fig. 
[1](#fig:architecture){reference-type="ref" reference="fig:architecture"}, we develop a closed-loop data filtering algorithm to actively select samples that are most helpful to classifier training in a rolling horizon. Moreover, we explicitly use the network knowledge to develop a rigorous condition for sample labeling. This essentially trims the sample space to improve approximation accuracy and scalability further. ![Proposed training method to obtain the classifier](Figs/illus_2.pdf){#fig:architecture width="0.7\\linewidth"} # System Model and Problem Description We consider a distribution system with one substation feeder bus and $n$ load buses, on which there are $m$ controllable DERs. The time horizon is given by $\mathcal{T} = \{1,\cdots,T\}$. The power outputs of the DERs are managed and aggregated to achieve controllability for the substation-level power output so that the distribution system becomes a controllable grid asset. ## Distribution System Operation Model {#sec:model:model} The DER aggregation considers 1) DER capacity limits represented by interval constraints; 2) network constraints, including linearized power flow equations and interval voltage limits. Based on the above discussion, the system operation constraints are modeled in the following compact form[@chen2021leveraging]: $$\begin{aligned} &\mathbf{Wp \leq z}, \quad \mathbf{p_0 = Dp+b}, \label{eq:main}\end{aligned}$$ where $\mathbf{p} \in \mathbb{R}^{mT}$ and $\mathbf{p_0} \in \mathbb{R}^{T}$ represent the dispatchable DER power outputs and substation-level power output, respectively; $\mathbf{W}$ and $\mathbf{D}$ are both given constant matrices such that $\mathbf{W}$ captures the DER operational constraints and the network voltage constraints, and $\mathbf{D}$ models the mapping of DER power outputs to the substation; $\mathbf{z}$ and $\mathbf{b}$ are constant coefficients, representing given parameters such as load forecasts. The inequality in [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} represents the DER and network operational constraints, and the equality constraint models the mapping of DER and load power to substation-level power output based on the linear power flow model. The use of the linear power flow model is justified owing to the tight voltage limits in the distribution system. The constraint captures the steady-state behavior of various kinds of DERs, including HVACs, energy storage units, and photovoltaics [@chen2021leveraging; @cui2020network]. ## Flexibility Characterization Problem Distribution system flexibility set (DSFS) refers to the set of all the substation-level power output realizations that are feasible to [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} with appropriate $\mathbf{p}$. Obtaining its exact characterization is generally computationally expensive. A network-informed data-driven approach is proposed in this letter. First, we use a novel offline training method to obtain a classifier that determines whether a substation-level power output sample belongs to the DSFS. Then, the samples from the substation-level power output space are classified; the union of the identified DSFS members forms a DSFS estimation. Note that the second step is scalable as it only involves a) computationally trivial sampling operations and b) simple matrix operations associated with the classification. 
Compared to an iterative algorithm-based numerical method, our method is five orders of magnitude faster, as shown in Section [4](#sec:case){reference-type="ref" reference="sec:case"}. Nonetheless, the offline training step to obtain the classifier is more computationally demanding. Developing a new efficient training strategy is the focus of this letter. # Proposed Training Strategy Traditional data-driven classifier training strategy can be summarized as *sampling $\to$ labeling $\to$ training*. It is an open-loop process where one needs to prepare a training dataset before training commences. Given no information about the sample space geometry, a larger set of randomly drawn samples is usually needed to ensure the representativeness of the sample space at the cost of increased computational burdens. To circumvent this issue, an active training strategy is proposed, as illustrated below: $$\text{sampling} \to \text{filt}\tikz[overlay,remember picture] \node (a) {};\text{ering} \to \text{lab}\tikz[overlay,remember picture] \node (b) {};\text{eling} \to \text{train}\tikz[overlay,remember picture] \node (c) {};\text{ing} \nonumber \tikz[overlay,remember picture] {\draw[->,square arrow] (c.south) to (a.south);} \tikz[overlay,remember picture] {\draw[->,square arrow] (c.south) to (b.south);}$$ We create a closed-loop training process where the classifier is trained through multiple steps, as shown in Fig. [1](#fig:architecture){reference-type="ref" reference="fig:architecture"}: 1) sampling: randomly select samples from the unlabeled pool (colorless circles); 2) filtering: determine posterior probabilities and select the most uncertain samples (yellow circles with question marks) for labeling; 3) labeling: label selected samples as feasible (red circle) or unfeasible (blue circle), leveraging network knowledge; 4) training: train the model using the enlarged training set, including newly selected samples, in which transfer learning can be used to accelerate training, utilizing parameters from a historical model (represented by the dotted box in the figure). Ideally, the dataset size is relatively small at first and then grows sequentially by incorporating selected high-value training data points identified in each epoch. Here we use the growing knowledge about the sample space to develop a filtering algorithm for such data selection. The filtering and labeling algorithms keep improving to ensure accuracy and scalability throughout the training process, as will be discussed later in detail. It is worth noting that although the feedback-learning framework is first proposed in the machine learning community [@ren2021survey], here, it is used as a vehicle to implement the nontrivial and novel network-informed algorithms. ## Network-Informed Labeling Each training data point consists of a substation-level power output sample and a label about whether this sample is feasible, i.e., belonging to DSFS. Let $\mathbf{x}_i=[\mathbf{\hat{p}_{0,i}}^\top,y_i]^\top$, where $\mathbf{\hat{p}_{0,i}}$ is the sample, and $y_i$ is the label with $1$ representing "feasible" and $0$ otherwise. In practice, this label is obtained through numerical methods to test whether a $\mathbf{\hat{p}_{0,i}}$ is feasible to [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"}, which are computationally intensive when dealing with a large number of samples. 
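For concreteness, this labelling step amounts to one feasibility check of the constraints in [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} per sample: $\mathbf{\hat{p}_{0,i}}$ receives label $1$ exactly when some DER schedule $\mathbf{p}$ satisfies $\mathbf{Wp \leq z}$ and $\mathbf{Dp+b}=\mathbf{\hat{p}_{0,i}}$. The sketch below is our illustration only; the letter reports using Mosek for this test in the case studies, whereas the sketch uses SciPy's LP solver with a zero objective.

```python
import numpy as np
from scipy.optimize import linprog

def label_sample(p0_hat, W, z, D, b):
    """Return 1 if some DER schedule p satisfies W p <= z and D p + b = p0_hat,
    i.e. if the substation-level profile p0_hat belongs to the DSFS.
    Posed as a feasibility LP with a zero objective."""
    m_T = W.shape[1]                                  # number of DER decision variables
    res = linprog(c=np.zeros(m_T),
                  A_ub=W, b_ub=z,
                  A_eq=D, b_eq=np.asarray(p0_hat) - np.asarray(b),
                  bounds=[(None, None)] * m_T)        # limits are already encoded in W p <= z
    return 1 if res.status == 0 else 0                # status 0: a feasible optimum was found
```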
To simplify the process, we leverage the network knowledge to trim the sample space such that points from a certain region bear no need for numerical labeling. To this end, we first find a convex inner approximation of DSFS, whence any members must have a "1" label, by solving: $$\begin{aligned} \label{eq:aro} \max_{\mathbf{p}^-_0 \le \mathbf{p}^+_0} \left\{ \mathbf{1}^\top \left( \mathbf{p}^+_0 - \mathbf{p}^-_0 \right) + \min_{\mathbf{p}^-_0 \le \mathbf{p}_0 \le \mathbf{p}^+_0} \max_{\substack{\mathbf{p}_0 = \mathbf{D}\mathbf{p} + \mathbf{b} \\ \mathbf{W}\mathbf{p} \le \mathbf{z}}} \mathbf{0}^\top \mathbf{p} \right\}\end{aligned}$$ where $\mathbf{p}_0^+, \mathbf{p}_0^- \in \mathbb{R}^T$ represent the upper and lower bounds of the substation-level power output, respectively. The inner min-max (feasibility) problem admits the optimal value of $0$ if and only if for any substation-level power output between $\mathbf{p}_0^-$ and $\mathbf{p}_0^+$, there exists a DER output schedule that makes all the operational constraints described by [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} satisfied. Therefore, the hyperbox $\{\mathbf{p}_0: \mathbf{p}^-_0 \le \mathbf{p}_0 \le \mathbf{p}^+_0\}$ must be a subset of DSFS when the optimal value of the outer problem is finite. Note that [\[eq:aro\]](#eq:aro){reference-type="eqref" reference="eq:aro"} is an adaptive robust optimization (ARO) problem. One usually makes $\mathbf{p}$ a function of $\mathbf{p}_0$ in solving an ARO problem of this type. This paper assumes an affine decision rule. The problem then reformulates into a max-min problem in the form of $\max \min \mathbf{1}^\top \left( \mathbf{p}^+_0 - \mathbf{p}^-_0 \right)$. Inserting a slack variable $\mathbf{s} = \min \mathbf{1}^\top \left( \mathbf{p}^+_0 - \mathbf{p}^-_0 \right)$ yields a standard robust linear programming problem with the objective function becoming $\max \mathbf{s}$. The problem is tractable with well-established solution methods. We enlarge the DSFS approximation and further trim the sample space in each epoch. Note that [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} are convex constraints, and DSFS is a projection of the feasibility region of [\[eq:main\]](#eq:main){reference-type="eqref" reference="eq:main"} onto the $\mathbf{p}_0$-space; hence, DSFS is a convex set, and the convex hull of any DSFS members must be a subset of DSFS. Recall that we train the classifier in epochs; in each epoch, the training set is expanded by new samples. Given a DSFS subset in an epoch, we only need to label those lying outside of the subset numerically. Then, the convex hull of those new samples with "1" labels and the original DSFS subset becomes a new DSFS subset. Hence, in the next epoch, those new samples lying in this enlarged set can be directly labeled again, thanks to the use of network information. ## Closed Loop Filtering {#sec:closed-loop} In each epoch, we seek to find the samples that are most *uncertain* to the classifier, i.e., containing the most *fresh* knowledge about the sample space. An uncertainty quantification method is applied. Let $\mathbb{P}(1|\mathbf{\hat{p}}_{0,i})$ be the posterior probability of a sample being feasible, according to an estimator. The closer $\mathbb{P}(1|\mathbf{\hat{p}}_{0,i})$ is to $1$ (resp., $0$), the more likely the sample is feasible (resp., infeasible); whereas the closer it is to $0.5$, the more uncertain it is. 
Then, by a simple mapping, we can find a monotone uncertainty metric: If $\mathbb{P}(1|\mathbf{\hat{p}}_{0,i}) > 0.5$, let $\mathbb{M}(\mathbf{\hat{p}}_{0,i}) = 2(1-\mathbb{P}(1|\mathbf{\hat{p}}_{0,i}))$; otherwise, $\mathbb{M}(\mathbf{\hat{p}}_{0,i}) = 2\mathbb{P}(1|\mathbf{\hat{p}}_{0,i})$, where $\mathbb{M}(\mathbf{\hat{p}}_{0,i})$ is the quantified uncertainty. After using this metric to evaluate all unlabeled samples, the most uncertain samples can be selected by ranking the quantified uncertainties. As for the initial number of samples and the selected number of samples in each epoch, they are open to customization, which acts as the hyperparameters for our model. The execution of the aforementioned process depends on finding $\mathbb{P}(1|\mathbf{\hat{p}}_{0,i})$. The classifier is structured to accomplish this task. We build the classifier using a multi-layer perceptron (MLP) model, defined as $f(\mathbf{\hat{p}}_{0,i}): \mathbb{R}^T \to [0,1]$. Its output is $\mathbb{P}(1|\mathbf{\hat{p}}_{0,i})$. Meanwhile, if a classification result ($0$ or $1$) is needed, a simple probabilistic smoothing approximation can be used, for example, $\mathop{\mathrm{sign}}f$, which is $1$ if $f > 0.5$ and $0$ otherwise. It is worth mentioning that the proposed strategy is general, and we can use models other than the MLP model. With the above discussion, the closed-loop filtering is conducted as follows: In each epoch, given an unlabeled sample pool, we first find the posterior probability of each unlabeled sample using the classifier obtained in the last epoch (or the initial classifier); then, the most uncertain samples with a suitable size are selected to label and then train the classifier; the updated classifier is then similarly used in the next epoch. The initial classifier's parameters can either be randomly generated or transferred from a historical model. Numerical testing suggests that the transfer learning approach is effective in characterizing the DSFS, for the transferred model entails substation-level power output sample space geometry knowledge that can warm-start the training. In addition, the training speed per epoch is accelerated since there are fewer trainable parameters during transfer learning. # Case Studies {#sec:case} In this section, we conduct numerical testing based on a three-phase distribution feeder of Southern California Edison (SCE) with 126 load buses and 366 DER having temporal couplings [@chen2021leveraging; @cui2020network]. We estimate the aggregated flexibility region of the substation-level real power output profile. For visualization purposes, we first conduct a numerical study regarding a two-dimensional aggregated power profile. With a time step (TS) of one hour, the flexibility for the time window \[8,10\] is estimated. For the specific setting, we randomly picked 100 samples as the initial training samples and sequentially added 10 more samples with the most model uncertainty in each epoch. As mentioned in Section[3.2](#sec:closed-loop){reference-type="ref" reference="sec:closed-loop"}, we implement our classifier using MLP model, consisting of 1 input layer, 4 hidden layers, and 1 output layer. ReLU and Adam serve as the activation function and optimizer, respectively. We also apply the transfer learning technique using a model obtained for the time window \[14,16\] with historical data. In transfer learning mode, the first hidden layer is frozen while the remaining layers are kept trainable. 
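A compact sketch of this classifier and of the uncertainty-driven sample selection is given below. It is our illustration only: the hidden-layer width, learning rate, and placeholder data are assumptions rather than the configuration used in the letter, and PyTorch is our choice of library. It builds the MLP with four hidden ReLU layers and a sigmoid output, optionally freezes the first hidden layer (the transfer-learning mode just described), and ranks unlabeled samples by the uncertainty metric $\mathbb{M}$.

```python
import torch
import torch.nn as nn

def build_classifier(T, hidden=64, freeze_first=False):
    """MLP f : R^T -> [0,1] with 4 hidden ReLU layers and a sigmoid output.
    With freeze_first=True the first hidden layer keeps its transferred weights."""
    layers, width = [], T
    for _ in range(4):
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        width = hidden
    layers += [nn.Linear(width, 1), nn.Sigmoid()]
    model = nn.Sequential(*layers)
    if freeze_first:
        for p in model[0].parameters():
            p.requires_grad = False
    return model

def select_most_uncertain(model, unlabeled, n_select=10):
    """Indices of the n_select samples whose posterior is closest to 0.5,
    ranked by the metric M = 2 * min(P, 1 - P) defined above."""
    with torch.no_grad():
        post = model(unlabeled).squeeze(1)            # P(1 | p0_hat) for each sample
    return torch.topk(2 * torch.minimum(post, 1 - post), n_select).indices

# Sketch of one epoch: a gradient step on the labelled pool, then pick new samples.
model = build_classifier(T=2)
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
x, y = torch.rand(100, 2), torch.randint(0, 2, (100,)).float()   # placeholder data
loss = nn.BCELoss()(model(x).squeeze(1), y)
opt.zero_grad()
loss.backward()
opt.step()
picked = select_most_uncertain(model, torch.rand(1000, 2))
```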
We compare the performance of the proposed work with the benchmark using a random sampling approach with the same initial model and hyperparameters. The performance of the proposed method without the transfer learning is shown in Fig.  [2](#fig:case1){reference-type="ref" reference="fig:case1"}(b). Compared to the benchmark shown in Fig. [2](#fig:case1){reference-type="ref" reference="fig:case1"}(a), the proposed method shows much superior performance, as it pinpoints the boundary of the DSFS much faster and more accurately due to the well-positioned samples. As shown in Fig. [2](#fig:case1){reference-type="ref" reference="fig:case1"}(c), with the transferred model, the classifier achieves even better results, despite the fact the DSFS of time \[14,16\] (similar to the boundary characterized in epoch 5) is quite visually different. In comparison, it can be observed that existing methods [@chen2021leveraging; @cui2020network; @taheri2022data] that use hyperbox or ellipsoid for inner approximation may be more conservative than our results, as these sets do not fully capture the geometry of the DSFS. Note that classifying a batch of 1000 samples in GPU with the proposed work on a laptop with Intel(R) UHD Graphics 620 and Core i5-8350U takes only 0.001s. Meanwhile, checking one sample using the traditional simulation-based method with Mosek 9.1.9 takes about 0.2s. The performance of our approach is credited to the simple operations utilized in the MLP model. During prediction, the computations primarily consist of basic matrix operations and element-wise manipulations. These can be effectively parallelized across multiple processing units, such as GPUs. ![Benchmarking the proposed method using uncertainty heatmap.](Figs/case1.pdf){#fig:case1 width="0.90\\linewidth"} We then show the scalability of the proposed work. We consider such a scenario that the distribution system estimates the flexibility four-time steps ahead in a rolling horizon, from hour 8 to hour 14. From TS 2, we initialize the classifier model with the one obtained from the previous TS. From Fig. [3](#fig:case2){reference-type="ref" reference="fig:case2"}, the benchmark with the random sampling approach can only achieve the same level of accuracy as ours with almost 10 times more training iterations in TS 1, and cannot keep up for all the following TSs anymore. To study the adaptability of our model against noise, we consider the DER injection uncertainty. We introduce varying levels of uncertainty into the PV system and loads on each node, generating 1000 samples for each uncertainty level as a new test dataset. Table [\[tab:noise_performance\]](#tab:noise_performance){reference-type="ref" reference="tab:noise_performance"} shows the F1 score performance of our classifier across a range of DER injection uncertainty levels, spanning from 3% to 40%. It can be observed that even at an uncertainty level of 20%, our model consistently achieves an F1 score exceeding 0.95, indicating its robustness. Moreover, at a heightened uncertainty level of 40%, the F1 score remains high at 0.88. The results show the notable adaptability of our model against uncertainty. The observation of the decreasing estimation accuracy also implies that uncertainties indeed affect the geometry of the DSFS. ![Rolling-horizon DSFS estimation results.](Figs/case2_2.pdf){#fig:case2 width="0.85\\linewidth"} [\[fig:case2\]]{#fig:case2 label="fig:case2"} # Conclusion We propose a data-driven approach to approximate the DSFS. 
It uses a new network-informed method to train a classifier that relies only on scalable matrix operations for the approximation. We propose a numerically efficient training strategy that uses the network information and the accumulated knowledge about the sample space to accelerate the training. Case studies based on the SCE system verify the validity and value of the proposed work. [^1]: Q. Li, W. Song, and J. Ye are with the University of Georgia, Georgia, U.S.A. [^2]: J. Liu is with Shanghai Jiao Tong University, Shanghai, China. [^3]: B. Cui is with Iowa State University, Ames, IA, U.S.A. [^4]: J. Liu is the corresponding author.
arxiv_math
{ "id": "2310.05529", "title": "Distribution System Flexibility Characterization: A Network-Informed\n Data-Driven Approach", "authors": "Qi Li, Jianzhe Liu, Bai Cui, Wenzhan Song and Jin Ye", "categories": "math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A subdivided star $SK_{1,l}$ is obtained by identifying exactly one pendant vertex from $l$ copies of the path $P_3.$ This study is on the existence of quantum state transfer on double subdivided star $T_{l,m}$ which is a pair of subdivided stars $SK_{1,l}$ and $SK_{1,m}$ joined by an edge to the respective coalescence vertices. Using the Galois group of the characteristic polynomial of $T_{l,m},$ we analyze the linear independence of its eigenvalues which uncovers no perfect state transfer in double subdivided stars when considering the adjacency matrix as the Hamiltonian of corresponding quantum system. We finally establish a characterization on double subdivided stars exhibiting pretty good state transfer. The method presented in this paper may be devised to classify family of graphs exhibiting other such quantum transportation phenomena, such as quantum fractional revival, pretty good fractional revival, etc.\ \ *Keywords:* Spectra of graphs, Field extensions, Galois group, Perfect state transfer, Pretty good state transfer.\ \ *MSC: 15A16, 05C50, 12F10, 81P45.* author: - Sarojini Mohapatra - Hiranmoy Pal bibliography: - references.bib title: "*A Characterization of State Transfer on Double Subdivided Stars*" --- # Introduction The transfer of quantum states between two different locations in a quantum network plays an important role in quantum information processing. Let a quantum network of $n$ interacting qubits be modeled by a graph $G$ where the vertices correspond to the qubits and the edges represent the interactions between qubits. Transfer of state among such qubits can be described using the continuous-time quantum walk operator acting on the characteristic vectors of the vertices. If the network admits a transfer of quantum state between two qubits without any loss of information, then this phenomenon is called perfect state transfer (PST). The main objective here is to identify quantum networks which enable high probability state transfer between qubits. We consider state transfer with respect to the adjacency matrix of a graph $G$ having the vertex set $\{a_1,a_2,\ldots,a_n\}.$ The adjacency matrix $A=[a_{jk}]$ is the $n\times n$ matrix having $a_{jk}=1$ if there is an edge between $a_j$ and $a_k$, otherwise $a_{jk}=0$. The continuous-time quantum walk on $G$ relative to the adjacency matrix $A$ is defined by $$U(t):=\exp{(itA)},~\text{where}~t\in\mathbb{R}~\text{and}~i=\sqrt{-1}.$$ Farhi and Gutmann [@farhi] first used the method of continuous-time quantum walks in analysing various quantum transportation phenomena. One can observe that the transition matrix $U(t)$ is symmetric as well as unitary. The square of the absolute value of $(a,b)$-th entry of $U(t)$ provide the probability of state transfer from site $a$ to site $b$ after time $t$. Suppose all the distinct eigenvalues of $A$ are $\theta_1,\theta_2,\ldots,\theta_d$. Let $E_{\theta_j}$ denote the orthogonal projection onto the eigenspace corresponding to $\theta_j.$ The spectral decomposition of the transition matrix $U(t)$ can be evaluated as $$U(t)=\sum_{j=1}^{d}\exp{(it\theta_j)}E_{\theta_j}.$$ Let ${\mathbf e}_a$ denote the characteristic vector corresponding to a vertex $a$ of $G$. 
The eigenvalue support of $a$ is defined by $\sigma_a=\{\theta_j:E_{\theta_j}{\mathbf e}_a\neq0\}.$ The graph $G$ is said to exhibit PST between a pair of distinct vertices $a$ and $b$ if there exists $\tau\in\mathbb{R}$ such that $$\label{equ1} U(\tau){\mathbf e}_a=\gamma {\mathbf e}_b,~\text{for some}~\gamma\in\mathbb{C}.$$ It is now evident that the existence of PST between $a$ and $b$ depends only on the eigenvalues in the support $\sigma_a$ and the corresponding orthogonal projections. PST in quantum communication networks was first introduced by Bose in [@bose]. There it is shown that PST occurs between the end vertices of the path $P_2$ on two vertices. In [@chr1], Christandl et al. showed that PST occurs between the end vertices of a path $P_n$ on $n$ vertices if and only if $n=2$ or $3$. Remarkably, Bašić [@mil4] established a complete characterization of integral circulant graphs having PST. The existence of PST in several well-known families of graphs and their products is also investigated in [@ack1; @ange1; @pal9; @che; @cou7; @pal1; @pal2], etc. Later, Coutinho et al. [@cou5] showed that there is no PST in a graph $G$ between two cut vertices $a$ and $b$ that are connected only by the path $P_2$ or $P_3$, unless the graph $G$ is itself $P_2$ or $P_3.$ This implies that the double subdivided star $T_{l,m}$ does not exhibit PST between the coalescence vertices of degree $l$ and $m$ for all positive integers $l$ and $m$. Here we show that $T_{l,m}$ does not exhibit PST between any pair of vertices for all such cases using the linear independence of the eigenvalues of $T_{l,m}$ in Section [3](#s3){reference-type="ref" reference="s3"}. In case $a=b$ in [\[equ1\]](#equ1){reference-type="eqref" reference="equ1"}, the graph $G$ is said to be periodic at the vertex $a$ with period $\tau$. If $G$ is periodic at every vertex with the same period, then it is called a periodic graph. It is well known that if there is PST between a pair of vertices $a$ and $b$ at time $\tau$ then $G$ is periodic at both $a$ and $b$ with period $2\tau$. Therefore periodicity at the vertex $a$ is necessary for the existence of PST from $a$. In what follows, we find that if $G$ is periodic at a vertex then it must satisfy the following ratio condition. **Theorem 1**. *[@god3] Suppose a graph $G$ is periodic at vertex $a$. If $\theta_k,\theta_l,\theta_r, \theta_s$ are eigenvalues in the support of $a$ and $\theta_r\neq \theta_s$, then $$\dfrac{\theta_k- \theta_l}{\theta_r-\theta_s}\in\mathbb{Q}.$$* The existence of PST in graphs is a rare phenomenon as observed in [@god2], and consequently, the notion of pretty good state transfer (PGST) was introduced in [@god1; @vin]. A graph $G$ is said to exhibit PGST between a pair of distinct vertices $a$ and $b$ if there exists a sequence $\tau_k$ of real numbers such that $$\lim_{k\to\infty}U(\tau_k){\mathbf e}_a=\gamma{\mathbf e}_b,~\text{for some}~~ \gamma\in \mathbb{C}.$$ In [@god4], Godsil et al. showed that there is PGST between the end vertices of $P_n$ if and only if $n+1=2^t~\text{or}~p~\text{or}~2p$, for some positive integer $t$ and odd prime $p$. Moreover, if there is PGST between the end vertices of $P_n$, then it occurs between the vertices $a$ and $n+1-a$ as well, whenever $a\neq (n+1)/2.$ Further investigation is done in [@cou3] to determine an infinite family of paths admitting PGST between a pair of internal vertices, where there is no PGST between the end vertices. Among other trees, PGST is investigated on the double star [@fan], the $1$-sum of stars [@hou], etc. 
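The quantities defined above, namely the transition matrix $U(t)$, the eigenvalue support $\sigma_a$, and the PST condition, are straightforward to compute for small graphs. The following is a minimal numerical sketch (assuming `numpy` and `scipy`; the function names are chosen here only for illustration) which verifies the result of Christandl et al. that $P_3$ admits PST between its end vertices, at time $\pi/\sqrt{2}$.

```python
import numpy as np
from scipy.linalg import expm

def transition_matrix(A, t):
    """Continuous-time quantum walk U(t) = exp(i t A)."""
    return expm(1j * t * np.asarray(A, dtype=float))

def eigenvalue_support(A, a, tol=1e-9):
    """sigma_a = {theta : E_theta e_a != 0}, computed from spectral projectors."""
    vals, vecs = np.linalg.eigh(np.asarray(A, dtype=float))
    support = []
    for theta in sorted(set(np.round(vals, 8))):
        V = vecs[:, np.abs(vals - theta) < 1e-6]   # eigenvectors for this eigenvalue
        if np.linalg.norm(V @ V[a, :]) > tol:      # E_theta e_a = V V^T e_a
            support.append(float(theta))
    return support

# P_3 has perfect state transfer between its end vertices at time pi / sqrt(2).
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
U = transition_matrix(P3, np.pi / np.sqrt(2))
print(abs(U[0, 2]) ** 2)             # transfer probability, equals 1 up to roundoff
print(eigenvalue_support(P3, 0))     # [-sqrt(2), 0, sqrt(2)]
```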
Pal et al. [@pal4] showed that a cycle $C_n$ and its complement $\overline{C}_n$ admit PGST if and only if $n$ is a power of $2$, and it occurs between every pair of antipodal vertices. It is worth noting that PGST, unlike PST, is not monogamous, as argued in [@pal5 Example 4.1]. More results on PGST can be found in [@cou6; @eis; @pal6; @pal7; @bom1], etc. Here we investigate the existence of PGST on double subdivided stars. A subdivided star with $l$ branches, denoted by $SK_{1,l},$ is obtained by identifying exactly one pendant vertex from $l$ copies of the path $P_3.$ A double subdivided star is formed by joining the coalescence vertices of a pair of subdivided stars $SK_{1,l}$ and $SK_{1,m}$ by an additional edge, and the resulting graph is denoted by $T_{l,m}.$ We analyze the linear independence of the eigenvalues of $T_{l,m}$ in Section [2](#s2){reference-type="ref" reference="s2"} and then, in Section [3](#s3){reference-type="ref" reference="s3"}, the existence of PGST in $T_{l,m}$ is investigated. A pair of vertices $a$ and $b$ in a graph $G$ are called strongly cospectral if $E_{\theta_j}{\mathbf e}_a=\pm E_{\theta_j}{\mathbf e}_b,$ for all eigenvalues $\theta_j$. Next we observe that strong cospectrality is necessary for the existence of PGST between a pair of vertices. **Lemma 1**. *[@god1][\[l4\]]{#l4 label="l4"} If a graph $G$ exhibits pretty good state transfer between a pair of vertices $a$ and $b$, then they are strongly cospectral.* If $P$ is a matrix of an automorphism of $G$ with adjacency matrix $A$ then $P$ commutes with $A$. Since the transition matrix $U(t)$ is a polynomial in $A$, the matrices $P$ and $U(t)$ commute as well. Therefore, if $G$ allows PGST between $a$ and $b,$ then each automorphism fixing $a$ must fix $b$. We use the following Kronecker approximation theorem on simultaneous approximation in characterizing double subdivided stars having PGST. **Theorem 2**. *[@tom] Let $\alpha_1,\alpha_2,\ldots,\alpha_l$ be arbitrary real numbers. If $1,\theta_1,\ldots,\theta_l$ are real, algebraic numbers linearly independent over $\mathbb{Q}$, then for $\epsilon>0$, there exist $q\in\mathbb{Z}$ and $p_1,p_2,\ldots,p_l\in\mathbb{Z}$ such that $$|q\theta_j-p_j-\alpha_j|<\epsilon,~\text{for all}~j=1,2,\ldots,l.$$* Now we recall a few results on the spectra of graphs. Let $G$ be a graph having distinct eigenvalues $\theta_1, \theta_2, \ldots , \theta_d$ with multiplicities $k_1, k_2,\ldots,k_d$, respectively. We denote the spectrum of $G$ as $\theta_1^{k_1}, \theta_2^{k_2}, \ldots , \theta_d^{k_d}$. In case $\theta_i$ is a simple eigenvalue, we omit the power $k_i=1.$ A graph $G$ is bipartite if there is a bipartition of the set of vertices such that the edges connect only vertices in different parts. The eigenvalues and the corresponding eigenvectors of a bipartite graph have a special structure as mentioned below. **Proposition 1**. *[@bro] If $\theta$ is an eigenvalue of a bipartite graph $G$ with multiplicity $k$, then $-\theta$ is also an eigenvalue of $G$ having the same multiplicity. If $\left[\begin{smallmatrix} u \\ v \end{smallmatrix}\right]$ is an eigenvector with eigenvalue $\theta$, then $\left[\begin{smallmatrix} u \\ -v \end{smallmatrix}\right]$ is an eigenvector with eigenvalue $-\theta$.* In the above Proposition [Proposition 1](#p3){reference-type="ref" reference="p3"}, the vectors $u$ and $v$ correspond to the vertices in the two partite sets of $G$. 
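Proposition [Proposition 1](#p3){reference-type="ref" reference="p3"} can be checked directly on any small bipartite graph; the sketch below (assuming `numpy`, with the star $K_{1,3}$ chosen only as a convenient bipartite example) verifies both the $\pm\theta$ pairing of the spectrum and the sign-flip relation between the corresponding eigenvectors.

```python
import numpy as np

# Star K_{1,3}: vertex 0 (the center) forms one partite set, vertices 1,2,3 the other.
A = np.zeros((4, 4))
A[0, 1:] = 1
A[1:, 0] = 1

vals, vecs = np.linalg.eigh(A)
print(np.round(vals, 6))        # [-sqrt(3), 0, 0, sqrt(3)]: eigenvalues come in +/- pairs

# Negating the entries on one partite set sends a theta-eigenvector to a (-theta)-eigenvector.
D = np.diag([1.0, -1.0, -1.0, -1.0])
for theta, v in zip(vals, vecs.T):
    w = D @ v
    assert np.allclose(A @ w, -theta * w)
print("sign-flipped eigenvectors verified")
```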
If two vertices $a$ and $b$ are adjacent then we write $a\sim b.$ An eigenvector $v$ can be realized as a function on the vertex set $V(G)$ where $v(a)$ denotes the $a$-th component of $v.$ Then $v$ is an eigenvector of $G$ with eigenvalue $\theta$ if and only if $$\label{eqn1} \theta\cdot v(a)=\sum_{b\sim a}v(b)~~\text{for all $a\in V(G)$},$$ where the summation is taken over all vertices $b\in V(G)$ that are adjacent to $a.$ Later [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"} shall be used in determining the eigenvectors of $T_{l,m}.$ The spectrum of a subdivided star $SK_{1,l}$ is given in [@bro], which can also be obtained using [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"} as $$-\sqrt{l+1},~ (-1)^{l-1},~ 0,~ 1^{l-1},~ \sqrt{l+1}.$$ The characteristic polynomial of $SK_{1,l}$ is $x(x^2-1)^{l-1}(x^2-l-1).$ In the following section, we determine the set of linearly independent eigenvalues of $T_{l,m},$ which proves to be significant in characterizing state transfer in double subdivided stars. # Linear independence of eigenvalues {#s2} Suppose $H$ is a graph having a vertex $g.$ Then $H-g$ is defined to be the graph obtained by removing the vertex $g$ from $H.$ Recall that a double subdivided star $G:=T_{l,m}$ is considered as a pair of subdivided stars $H_1:=SK_{1,l}$ and $H_2:=SK_{1,m}$ joined by an edge to the respective coalescence vertices, say, $a$ and $b$. Using [@cve1 Theorem 2.2.4], the characteristic polynomial of $G$ can be evaluated as $$\begin{aligned} P_G(x) & =P_{H_1}(x)P_{H_2}(x)-P_{H_1-a}(x)P_{H_2-b}(x) \\ &=x(x^2-1)^{l-1}(x^2-l-1)x(x^2-1)^{m-1}(x^2-m-1)-(x^2-1)^l(x^2-1)^m \\ &=(x^2-1)^{l+m-2}q(x),\end{aligned}$$ where $q(x)=x^6-(l+m+3)x^4+(lm+l+m+3)x^2-1.$ One can observe that $q(x)$ is a polynomial having only the even terms, and none of $-1,0$ and $1$ are roots of $q(x)$. Suppose $q(x)$ has the roots $\pm\theta_1$, $\pm\theta_2$, $\pm\theta_3$. Considering $Q(x)=x^3-(l+m+3)x^2+(lm+l+m+3)x-1$, we have $Q(x^2)=q(x),$ and hence $\theta_1^2$, $\theta_2^2$, $\theta_3^2$ are the roots of $Q(x)$. Since $Q(x)$ has no rational root, it is irreducible over $\mathbb{Q}$, and hence all roots of $Q(x)$ are simple. Consequently, all roots of $q(x)$ are simple as well. Then the spectrum of $T_{l,m}$ is $$(-1)^{l+m-2},~ 1^{l+m-2},~\pm\theta_1,~\pm\theta_2,~\pm\theta_3.$$ Since $\theta_1^2$, $\theta_2^2$, $\theta_3^2$ are the roots of $Q(x)$, we also have the following identities. $$\begin{aligned} \theta_1^2+\theta_2^2+\theta_3^2 &=& l+m+3. \label{e1}\\ \theta_1^2\theta_2^2+\theta_2^2\theta_3^2+\theta_3^2\theta_1^2&=&lm+l+m+3. \label{e2}\\ \theta_1^2\theta_2^2\theta_3^2&=&1.\label{e3}\end{aligned}$$ The next result demonstrates that if the polynomial $q(x)$ is reducible, then $1,\theta_1,\theta_2$ are linearly independent over $\mathbb{Q}.$ **Lemma 2**. *Let $l$ and $m$ be two positive integers. Suppose $\pm\theta_1, \pm\theta_2, \pm\theta_3$ are the roots of $q(x)=x^6-(l+m+3)x^4+(lm+l+m+3)x^2-1$ in its splitting field over $\mathbb{Q}.$ If $q(x)$ is reducible, then $1,\theta_1,\theta_2$ are linearly independent over $\mathbb{Q}.$* *Proof.* One can observe that if the polynomial $q(x)$ is reducible, then it must be factored into two irreducible monic polynomials $f(x;l,m)$ and $f(-x;l,m)$ of degree three such that $q(x)=f(x;l,m)\cdot f(-x;l,m).$ Without loss of generality, let $\theta_1,\theta_2,\theta_3$ be the distinct roots of $f(x;l,m)$, and suppose $\theta_3$ is the largest among them. 
Let $\alpha,\beta,\gamma \in \mathbb{Q}$ such that $$\label{eq27} \alpha+\beta\theta_1+\gamma\theta_2=0.$$ By [@dum Theorem 13.27], the Galois group of the irreducible polynomial $f(x;l,m)$ can be realized as a transitive subgroup of $S_3$ with respect to the ordering of roots $\theta_1, \theta_2, \theta_3$. So the Galois group of $f(x;l,m)$ must contain the alternating group $A_3=\{(1),(123),(132)\}.$ As the automorphisms in the Galois group fix $\mathbb{Q},$ the elements of $A_3$ acting on [\[eq27\]](#eq27){reference-type="eqref" reference="eq27"} give $$\alpha+\beta\theta_1+\gamma\theta_2=0,~\alpha+\beta\theta_2+\gamma\theta_3=0,~ \alpha+\beta\theta_3+\gamma\theta_1=0,$$ which is a homogeneous system of linear equations in $\alpha,~\beta$ and $\gamma.$ The coefficient matrix can be reduced to obtain $$\begin{bmatrix} 1 & \theta_1 & \theta_2\\ 0 & \theta_2-\theta_1 & \theta_3-\theta_2\\ 0 & 0 & \dfrac{(\theta_1-\theta_2)^2+(\theta_3-\theta_1)(\theta_3-\theta_2)}{(\theta_1-\theta_2)} \end{bmatrix}.$$ Since all the pivots are non zero, the rank of the coefficient matrix is $3.$ Consequently, $1,\theta_1,\theta_2$ are linearly independent over $\mathbb{Q}.$ ◻ From [\[e1\]](#e1){reference-type="eqref" reference="e1"} and [\[e2\]](#e2){reference-type="eqref" reference="e2"} we find $$\begin{aligned} (\theta_1+\theta_2+\theta_3)^2&=&l+m+3+2(\theta_1\theta_2+\theta_2\theta_3+\theta_1\theta_3),\label{e35}\\(\theta_1\theta_2+\theta_2\theta_3+\theta_1\theta_3)^2&=&lm+l+m+3+2\theta_1\theta_2\theta_3(\theta_1+\theta_2+\theta_3).\label{e34}\end{aligned}$$ If $\theta_1+\theta_2+\theta_3=0,$ then [\[e35\]](#e35){reference-type="eqref" reference="e35"} and [\[e34\]](#e34){reference-type="eqref" reference="e34"} implies that $(l-m)^2+2l+2m=3$, which is impossible as $l,m\in\mathbb{N}.$ Now Lemma [Lemma 2](#lemm5){reference-type="ref" reference="lemm5"} along with $\theta_1+\theta_2+\theta_3 \neq 0$ infer that if the polynomial $q(x)$ is reducible, then any proper subset of $\{1,\theta_1,\theta_2,\theta_3\}$ is linearly independent over $\mathbb{Q}.$ **Theorem 3**. *Let $l,m\in\mathbb{N}.$ If the polynomial $q(x)=x^6-(l+m+3)x^4+(lm+l+m+3)x^2-1$ is reducible over $\mathbb{Q},$ then the set of all positive eigenvalues of $T_{l,m}$ except one is linearly independent over $\mathbb{Q}.$* Note that if $l=m$ then $q(x)=f(x;l,l)\cdot f(-x;l,l)$ where $f(x;l,l)=x^3-x^2-(l+1)x +1.$ The roots $\theta_1, \theta_2, \theta_3$ of $f(x;l,l)$ satisfy the followings. $$\begin{aligned} \theta_1+\theta_2+\theta_3 &=& 1. \label{e10}\\ \theta_1\theta_2+\theta_2\theta_3+\theta_3\theta_1 &=&-(l+1). \label{e11}\\ \theta_1\theta_2\theta_3 &=& -1. \label{e12} \end{aligned}$$ As a consequence to Theorem [Theorem 3](#cor2){reference-type="ref" reference="cor2"} we have the following result. **Corollary 1**. *The set of all positive eigenvalues of $T_{l,l}$ except one is linearly independent over $\mathbb{Q}$ for all positive integer $l.$* Before we proceed with the case when $q(x)$ is irreducible over $\mathbb{Q},$ consider the following result on the linear independence of $\theta_1^2,\theta_2^2$ and $\theta_3^2$ over $\mathbb{Q}.$ **Lemma 3**. *Let $\theta_1^2,\theta_2^2,\theta_3^2$ be the roots of $Q(x)=x^3-(l+m+3)x^2+(lm+l+m+3)x-1,$ for $l,m\in\mathbb{N},$ in its splitting field over $\mathbb{Q}$. 
Then $\theta_1^2,\theta_2^2,\theta_3^2$ are linearly independent over $\mathbb{Q}$.* *Proof.* Let $\alpha,\beta,\gamma \in \mathbb{Q}$ such that $$\label{eq13} \alpha\theta_1^2+\beta\theta_2^2+\gamma\theta_3^2=0.$$ Note that $Q(x)$ is an irreducible polynomial of degree $3$ over $\mathbb{Q}$. Since the Galois group corresponding to $Q(x)$ is transitive, it contains the alternating group $A_3.$ The elements of $A_3$ acting on [\[eq13\]](#eq13){reference-type="eqref" reference="eq13"} yield $$\alpha\theta_1^2+\beta\theta_2^2+\gamma\theta_3^2=0,~ \alpha\theta_2^2+\beta\theta_3^2+\gamma\theta_1^2=0,~ \alpha\theta_3^2+\beta\theta_1^2+\gamma\theta_2^2= 0.$$ The coefficient matrix for the corresponding homogeneous system is row equivalent to the following matrix. $$\begin{bmatrix} \theta_1^2 & \theta_2^2 & \theta_3^2\\\vspace{0.3cm} 0 & \theta_3^2-\dfrac{\theta_2^4}{\theta_1^2} & \theta_1^2-\dfrac{\theta_2^2\theta_3^2}{\theta_1^2}\\\vspace{0.3cm} 0 & 0 & \dfrac{3-\theta_1^6-\theta_2^6-\theta_3^6}{\theta_3^2\theta_1^2-\theta_2^4} \end{bmatrix}.$$ Since $\theta_1^2,\theta_2^2,\theta_3^2$ are distinct real roots of $Q(x)$ satisfying $\theta_1^2\theta_2^2\theta_3^2=1,$ we have $\theta_1^6+\theta_2^6+\theta_3^6>3.$ Now $\theta_3^2-\dfrac{\theta_2^4}{\theta_1^2}=0$ infers that $\theta_2^6=1$ or $\theta_2^2=1,$ a contradiction. So all three pivots are non-zero, and therefore, the rank of the coefficient matrix is $3.$ Hence $\theta_1^2, \theta_2^2, \theta_3^2$ are linearly independent over $\mathbb{Q}.$ ◻ Suppose $q(x)=x^6-(l+m+3)x^4+(lm+l+m+3)x^2-1$ is irreducible over $\mathbb{Q}$, and consider the Galois group $\mathcal{G}$ of $q(x)$ that fixes $\mathbb{Q}.$ Applying [@dum Theorem 13.27], the Galois group $\mathcal{G}$ can be realized as a transitive subgroup of $S_6$ with respect to the ordering of roots $\theta_1, -\theta_1, \theta_2, -\theta_2, \theta_3, -\theta_3$ of $q(x).$ Since the discriminant $D$ of $Q(x)$ satisfy $D=(\theta_2^2-\theta_1^2)^2(\theta_3^2-\theta_1^2)^2(\theta_3^2-\theta_2^2)^2\in\mathbb{Q},$ the discriminant of $q(x)$ evaluated as $64D^2$ is a square of an element in $\mathbb{Q}.$ Using [@dum Proposition 14.34], the Galois group $\mathcal{G}$ is a transitive subgroup of the alternating group $A_6$. Now we have the following result. **Theorem 4**. *Let $l$ and $m$ be two positive integers. 
Suppose $\pm\theta_1, \pm\theta_2, \pm\theta_3$ are the roots of $q(x)=x^6-(l+m+3)x^4+(lm+l+m+3)x^2-1$ in its splitting field over $\mathbb{Q}.$ If $q(x)$ is irreducible then $1,\theta_1,\theta_2,\theta_3$ are linearly independent over $\mathbb{Q}.$* *Proof.* Let $\alpha,\beta,\gamma,\delta\in \mathbb{Q}$ such that $$\label{e39} \alpha+\beta\theta_1+\gamma\theta_2+\delta\theta_3=0.$$ Since the Galois group $\mathcal{G}$ of $q(x)$ is a transitive subgroup of $A_6$, there is an automorphism $\sigma\in \mathcal{G}$ such that $\sigma(\theta_1)=-\theta_1.$ Applying $\sigma$ on both sides of [\[e39\]](#e39){reference-type="eqref" reference="e39"} and adding the resulting equation to it yields $$\label{e41} 2\alpha+\gamma(\theta_2+\sigma(\theta_2))+\delta(\theta_3+\sigma(\theta_3))=0.$$ Each automorphism in $\mathcal{G}$ that maps $\theta_i$ to $\theta_j$ must map $-\theta_i$ to $-\theta_j.$ Since $\mathcal{G}$ is a subgroup of $A_6$, the only possibility for $\sigma$ remains $(12)(34)$, $(12)(56)$, $(12)(3546)$ or $(12)(3645).$ If $\sigma=(12)(34)$, then [\[e41\]](#e41){reference-type="eqref" reference="e41"} becomes $\alpha+\delta\theta_3=0.$ Since $\theta_3\notin\mathbb{Q}$, we obtain $\alpha=\delta=0.$ Thus [\[e39\]](#e39){reference-type="eqref" reference="e39"} reduces to $\beta\theta_1+\gamma\theta_2=0,$ which further gives $\beta =\gamma=0$ as Lemma [Lemma 3](#lemma3){reference-type="ref" reference="lemma3"} holds. Therefore $1,\theta_1,\theta_2,\theta_3$ are linearly independent over $\mathbb{Q}.$ Using similar argument for $\sigma=(12)(56)$, we arrive at the same conclusion. If $\sigma=(12)(3546)$ then we find $\sigma^{-1}=(12)(3645)$. Now [\[e41\]](#e41){reference-type="eqref" reference="e41"} becomes $$\label{e42} 2\alpha+(\gamma-\delta)\theta_2+(\gamma+\delta)\theta_3=0.$$ Applying $\sigma^2=(34)(56)$ on both sides of [\[e42\]](#e42){reference-type="eqref" reference="e42"} gives $\alpha=0.$ Finally by Lemma [Lemma 3](#lemma3){reference-type="ref" reference="lemma3"}, we find $\alpha=\beta=\gamma=\delta=0.$ Hence $1,\theta_1,\theta_2,\theta_3$ are linearly independent over $\mathbb{Q}.$ ◻ # State transfer on $T_{l,m}$ {#s3} In quest of the existence of PST (or PGST) in $T_{l,m}$ from a vertex $a,$ we analyze the eigenvalues in the support $\sigma_a$ and the corresponding orthogonal projections. In this regard, we determine the eigenvectors of $T_{l,m}$ corresponding to each of its eigenvalues. Recall that the eigenvectors are real-valued functions on the vertex set of $T_{l,m}.$ In case $l>1$ (or $m>1$), an eigenvector corresponding to $1$ can be obtained by assigning the value $1$ to a pair of adjacent vertices of degree $1$ and $2$ in a branch, $-1$ to another such pair in a branch adjacent to the previous one, and the remaining vertices are assigned $0.$ Considering other such adjacent branches, we obtain $l+m-2$ linearly independent eigenvectors corresponding to $1$. Similarly, a set of $l+m-2$ linearly independent eigenvectors for the eigenvalue $-1$ can be obtained using [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"}. Suppose $E_{-1}$ and $E_1$ are idempotents corresponding to $-1$ and $1$, respectively. For the vertices $a$ and $b$ in $T_{l,m}$ (see Figure [\[fig1\]](#fig1){reference-type="ref" reference="fig1"}), note that $-1$ and $1$ are not in $\sigma_a$ as well as $\sigma_b$ since we have $E_{-1}{\mathbf e}_a=E_{-1}{\mathbf e}_b=0$ and $E_{1}{\mathbf e}_a=E_{1}{\mathbf e}_b=0$. 
However, we observed that $E_{-1}{\mathbf e}_c=E_{-1}{\mathbf e}_d\neq0~\text{and}~ E_{1}{\mathbf e}_c=E_{1}{\mathbf e}_d\neq0,$ and hence both $-1$ and $1$ belong to $\sigma_c$ as well as $\sigma_d.$ The eigenvectors for the remaining eigenvalues are obtained as follows. Let $P$ be the permutation matrix corresponding to an automorphism of $T_{l,m}$. Note that $P$ commutes with the adjacency matrix $A$ of $T_{l,m}$. Let $v$ be an eigenvector of $T_{l,m}$ satisfying $Av = \theta v$ with $\theta \neq -1, 1.$ Now $APv = PAv = \theta Pv$ implies that $Pv$ is also an eigenvector corresponding to $\theta$. Suppose the entries of $v$ are $z_1,z_2,x_j,y_j,u_k,w_k$, where $j=1,2,\ldots,l$ and $k=1,2,\ldots,m$ as mentioned in Figure [\[fig2\]](#fig2){reference-type="ref" reference="fig2"}. In particular, suppose $P$ is an automorphism of $T_{l,m}$ which switches vertices assigned with entries $x_1$ and $x_2$, $y_1$ and $y_2$, and fixing all other vertices. Since all eigenvalues except $-1$ and $1$ are simple, the eigenvectors $v$ and $Pv$ are parallel. As a result $Pv=\alpha v$ for some scalar $\alpha$, which further gives $\alpha z_1=z_1$. If $z_1=0$ then [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"} infers that $\theta x_1=y_1$ and $\theta y_1=x_1$, which is absurd as $\theta\neq\pm1.$ Hence $\alpha=1,$ and we have $x_1=x_2$ and $y_1=y_2$. We therefore conclude $x_1=x_j$, $y_1=y_j$, $u_1=u_k$ and $w_1=w_k$ for all $j$ and $k$. In case $l=m,$ consider an automorphism $P'$ which switches vertices assigned with entries $z_1$ and $z_2$. Since $v$ and $P'v$ are parallel, $P'v=\beta v$ for some scalar $\beta$. This gives $z_2=\beta z_1$, $z_1=\beta z_2$, $u_1=\beta x_1$ and $w_1=\beta y_1$. Consequently, we have $\beta=\pm 1$. The eigenvector $v$ corresponding to the case $\beta=1$ is given in Figure [\[fig3\]](#fig3){reference-type="ref" reference="fig3"}. For $\beta=-1,$ the eigenvector can be obtained similarly. The following result shows that the double subdivided star $T_{l,m}$ does not exhibit PST for any positive integers $l~\text{and}~m.$ **Theorem 5**. *There is no perfect state transfer in the double subdivided star $T_{l,m}$ for any positive integers $l$ and $m.$* *Proof.* We begin with the case $l=m=1,$ where the double subdivided star $T_{l,m}$ becomes the path $P_6$. The characteristic polynomial of $P_6$ can be evaluated as $f(x;1,1)\cdot f(-x;1,1).$ Accordingly, the eigenvalues are $\pm\theta_1,\pm\theta_2,\pm\theta_3$. The eigenvalue support of the vertices $a,c~\text{and}~e$ (see Figure [\[fig1\]](#fig1){reference-type="ref" reference="fig1"}) can be determined as $\sigma_a=\sigma_c=\sigma_e=\{\pm\theta_1,\pm\theta_2,\pm\theta_3\}.$ Suppose $T_{1,1}$ is periodic at vertex $a$. Then the ratio condition in Theorem [Theorem 1](#p6){reference-type="ref" reference="p6"} gives $$\dfrac{\theta_1-(-\theta_1)}{\theta_2-(-\theta_2)}=\dfrac{\theta_1}{\theta_2}\in\mathbb{Q},$$ which is a contradiction to Lemma [Lemma 2](#lemm5){reference-type="ref" reference="lemm5"}. Therefore $P_6$ is not periodic at $a,$ and hence there is no PST from the vertex $a$. Similarly, there is no PST from the vertices $c$ and $e$ as well. 
In case $l,m \neq 1$, recall that the spectrum of $T_{l,m}$ is $$(-1)^{l+m-2},~1^{l+m-2},~\pm\theta_1,~\pm\theta_2,~\pm\theta_3.$$ The eigenvalue support of the vertices $a,c~\text{and}~e$ can be evaluated as $$\sigma_a=\{\pm\theta_1,\pm\theta_2,\pm\theta_3\},~\sigma_c = \sigma_e = \{\pm 1, \pm \theta_1, \pm\theta_2, \pm\theta_3\}.$$ As argued before, Theorem [Theorem 1](#p6){reference-type="ref" reference="p6"} and Lemma [Lemma 2](#lemm5){reference-type="ref" reference="lemm5"} together conclude there is no PST from the vertices $a,~c$ and $e$ in this case as well. Hence the result follows. ◻ The fact that $P_6$ does not exhibit PST was previously observed in [@god1] as well. Next we investigate the existence of PGST in $T_{l,m}.$ ## Pretty good state transfer The graph $T_{l,m}$ becomes the path $P_6$ for $l=m=1$. The existence of PGST in $P_6$ is mentioned in [@god4], where we find that there is PGST from all vertices of $P_6$. Now we consider the remaining cases. It follows from Lemma [\[l4\]](#l4){reference-type="ref" reference="l4"} that if there is PGST between a pair of vertices then they have the same degree. It is well known that if there is PGST between two vertices then each automorphism fixing one must fix the other. Therefore there is no pretty good state transfer in $T_{l,m}$ whenever $l$ and $m$ are distinct and $l\neq 2\neq m.$ Next we investigate the existence of PGST in $T_{l,l}.$ **Lemma 4**. *There exists pretty good state transfer between the coalescence vertices in $T_{l,l}$ with respect to a sequence in $(4\mathbb{Z}-1)\dfrac{\pi}{2}$ for all natural number $l.$* *Proof.* The eigenvalue support of the coalescence vertices $a$ and $b$ in $T_{l,l}$ are $$\sigma_a=\{\pm\theta_1,\pm\theta_2,\pm\theta_3\}=\sigma_b.$$ Recall that the eigenvalues $\pm\theta_1,\pm\theta_2,\pm\theta_3$ are simple. Suppose $v_1,v_2,v_3$ are the eigenvectors corresponding to $\theta_1,\theta_2,\theta_3$, respectively. Using Proposition $\ref{p3}$, we determine the eigenvectors corresponding to $-\theta_1,-\theta_2,-\theta_3$ as well. Therefore $$\begin{aligned} {\mathbf e}_a^TU(t){\mathbf e}_b&=&\sum_{\theta\in \sigma_a}\exp{(it\theta)}{\mathbf e}_a^TE_{\theta}{\mathbf e}_b\nonumber \\&=& \sum_{j=1}^{3} \left[\exp{(it\theta_j)}\dfrac{v_j(a)v_j(b)}{||v_j||^2}-\exp{(-it\theta_j)}\dfrac{v_j(a)v_j(b)}{||v_j||^2}\right] \end{aligned}$$ We already showed that $v_j(a)=v_j(b)\neq 0.$ Without loss of generality, let $v_j(a)=1$ for $j=1,2,3.$ Thus the above equation yields $$\begin{aligned} \label{eqn17} {\mathbf e}_a^TU(t){\mathbf e}_b&=& \sum_{j=1}^{3} \left[\dfrac{\exp{(it\theta_j)}-\exp{(-it\theta_j)}}{||v_j||^2}\right]\end{aligned}$$ By Lemma [Lemma 2](#lemm5){reference-type="ref" reference="lemm5"}, the algebraic numbers $1,\theta_1,\theta_2$ are linearly independent over $\mathbb{Q}.$ Let $\epsilon>0,$ and consider $\alpha_j=\dfrac{1+\theta_j}{4}$ in Theorem [Theorem 2](#t1){reference-type="ref" reference="t1"}. 
Then there exist $q,p_1,p_2\in\mathbb{Z}$ such that $$\label{e17} \left|(4q-1)\dfrac{\pi}{2}\theta_j-\left(2\pi p_j +\dfrac{\pi}{2}\right)\right|<2\pi\epsilon ~~\text{for}~~ j=1,2.$$ Since $\theta_1+\theta_2+\theta_3= 1$ as in [\[e10\]](#e10){reference-type="eqref" reference="e10"}, this further yields $$\label{e18}\left|(4q-1)\dfrac{\pi}{2}\theta_3-\left(\pi(2(q-p_1-p_2)-1) -\dfrac{\pi}{2}\right)\right|<4\pi\epsilon.$$ We obtain a sequence $\tau_k\in(4\mathbb{Z}-1)\dfrac{\pi}{2}$ from [\[e17\]](#e17){reference-type="eqref" reference="e17"} and [\[e18\]](#e18){reference-type="eqref" reference="e18"} such that $\displaystyle\lim_{k\to\infty} \exp{(i\tau_k\theta_j)}=i$ for all $j,$ and thus [\[eqn17\]](#eqn17){reference-type="eqref" reference="eqn17"} gives $$\label{e19}\lim_{k\to \infty}{\mathbf e}_a^TU(\tau_k){\mathbf e}_b=2i\left[\dfrac{1}{||v_1||^2}+\dfrac{1}{||v_2||^2}+\dfrac{1}{||v_3||^2}\right]=i,$$ since $U(0)=I.$ This completes the proof. ◻ In case of $P_6,$ the support of each vertex contains all its eigenvalues. The proof of Lemma [Lemma 4](#th1){reference-type="ref" reference="th1"} can be devised to determine that $P_6$ exhibits PGST between the pair of vertices $n$ and $7-n$ for all $n=1,2,3.$ Now we investigate the existence of PGST in $T_{2,m}$ for some positive integer $m.$ Note that there is no PGST between the coalescence vertices of $T_{2,m}$ whenever $m\neq 2.$ Therefore, if $T_{2,m}$ admits PGST then it occurs between the pair of vertices $c, d$ or the pair of vertices $e, f$ as in Figure [\[fig4\]](#fig4){reference-type="ref" reference="fig4"}. Here we find that $$\sigma_c=\sigma_d=\sigma_e=\sigma_f=\{\pm 1,\pm\theta_1,\pm\theta_2,\pm\theta_3\},$$ which is the set of all eigenvalues of $T_{2,m}$. Let $v_1, v_2, v_3$ be the eigenvectors corresponding to the simple eigenvalues $\theta_1, \theta_2, \theta_3$, respectively. Then Proposition $\ref{p3}$ gives the eigenvectors corresponding to $-\theta_1, -\theta_2, -\theta_3$ as well. Recall that $E_{-1}$ and $E_1$ are the idempotents corresponding to eigenvalues $-1$ and $1,$ respectively. Using Gram-Schmidt procedure on the set of linearly independent eigenvectors corresponding to $-1$ and $1$, we evaluate $${\mathbf e}_c^TE_{-1}{\mathbf e}_ d={\mathbf e}_c^TE_{1}{\mathbf e}_ d=-\dfrac{1}{4}={\mathbf e}_e^TE_{-1}{\mathbf e}_ f={\mathbf e}_e^TE_{1}{\mathbf e}_ f.$$ Now we have $$\begin{aligned} {\mathbf e}_c^TU(t){\mathbf e}_d &=&\sum_{j=1}^{3}\left[\exp{(it\theta_j)}\dfrac{v_j(c)v_j(d)}{||v_j||^2}+\exp{(-it\theta_j)}\dfrac{v_j(c)v_j(d)}{||v_j||^2}\right]\\ &&+\exp{(it)}\left(-\dfrac{1}{4}\right)+\exp{(-it)}\left(-\dfrac{1}{4}\right).\end{aligned}$$ We already obtained $v_j(c)=v_j(d)\neq 0.$ Without loss of generality, let $v_j(c)=1$ for $j=1,2,3.$ Therefore $$\begin{aligned} \label{eq26} {\mathbf e}_c^TU(t){\mathbf e}_d =\sum_{j=1}^{3}\left[\dfrac{\exp{(it\theta_j)}+\exp{(-it\theta_j)}}{||v_j||^2}\right]+\dfrac{1 }{4}\left[\exp{(i(t+\pi))}+\exp{(-i(t-\pi))}\right].\end{aligned}$$ Similarly, we use a different set of eigenvectors $v_1, v_2, v_3$ satisfying $v_j(e)=v_j(f)=1$ for all $j$ to obtain $$\begin{aligned} \label{eqn21} {\mathbf e}_e^TU(t){\mathbf e}_f =\sum_{j=1}^{3}\left[\dfrac{\exp{(it\theta_j)}+\exp{(-it\theta_j)}}{||v_j||^2}\right]+\dfrac{1 }{4}\left[\exp{(i(t+\pi))}+\exp{(-i(t-\pi))}\right].\end{aligned}$$ In case $l=2$ and $m$ is any positive integer, the polynomial $q(x)$ for the graph $T_{2,m}$ becomes $q(x)=x^6-(m+5)x^4+(3m+5)x^2-1.$ Next we classify the existence of PGST in $T_{2,m}.$ **Theorem 6**. 
*Let $m$ be a positive integer. Suppose $\pm\theta_1, \pm\theta_2, \pm\theta_3$ are the roots of the polynomial $q(x)=x^6-(m+5)x^4+(3m+5)x^2-1$ in its splitting field over $\mathbb{Q}.$ Then the following holds in $T_{2,m}$.* 1. *If $q(x)$ is irreducible over $\mathbb{Q}$, then there is pretty good state transfer with respect to a sequence in $(2\mathbb{Z}+1)\pi$ between both pair of vertices $c,d$ and $e,f$.* 2. *Let $q(x)$ is reducible over $\mathbb{Q}.$ If $\theta_1+\theta_2+\theta_3$ is an even integer, then there is pretty good state transfer with respect to a sequence in $(2\mathbb{Z}+1)\pi$ between both pair of vertices $c,d$ and $e,f$. Moreover, if $\theta_1+\theta_2+\theta_3$ is an odd integer then there is no pretty good state transfer from $c,d,e$ and $f.$* *Proof.* Suppose $q(x)$ is irreducible. Then by Theorem [Theorem 4](#t9){reference-type="ref" reference="t9"}, the algebraic numbers $1, \theta_1, \theta_2, \theta_3$ are linearly independent over $\mathbb{Q}.$ Let $\epsilon>0$ and consider $\alpha_j=-\dfrac{\theta_j}{2},$ for $j=1,2,3.$ By Theorem [Theorem 2](#t1){reference-type="ref" reference="t1"}, there exist $q,p_1,p_2\in\mathbb{Z}$ such that $$\left|(2q+1)\pi\theta_j-2\pi p_j\right|<2\pi\epsilon~\text{ for}~j=1,2,3.$$ This along with [\[eq26\]](#eq26){reference-type="eqref" reference="eq26"} gives a sequence $\tau_k\in(2\mathbb{Z}+1)\pi$ such that $$\displaystyle\lim_{k\to \infty}{\mathbf e}_c^TU(\tau_k){\mathbf e}_d=2\left[\dfrac{1}{||v_1||^2} +\dfrac{1}{||v_2||^2}+\dfrac{1}{||v_3||^2}+\dfrac{1}{4}\right]=1,$$ since $U(0)=I.$ Therefore, PGST occurs between the pair of vertices $c$ and $d$ with respect to the sequence $\tau_k\in(2\mathbb{Z}+1)\pi.$ Similarly using [\[eqn21\]](#eqn21){reference-type="eqref" reference="eqn21"}, we find that $T_{2,m}$ exhibits PGST between the pair of vertices $e$ and $f$ with respect to the same sequence $\tau_k$ as well. Suppose $q(x)$ is reducible and $\theta_1+\theta_2+\theta_3=2n$ for some $n\in\mathbb{Z}$. Using Lemma [Lemma 2](#lemm5){reference-type="ref" reference="lemm5"}, the algebraic numbers $1,\theta_1,\theta_2$ are linearly independent over $\mathbb{Q}$. Let $\epsilon>0$ and consider $\alpha_j=-\dfrac{\theta_j}{2}$ whenever $j=1,2.$ By Theorem [Theorem 2](#t1){reference-type="ref" reference="t1"}, there exist $q,p_1,p_2\in\mathbb{Z}$ such that $$\label{e36} \left|(2q+1)\pi\theta_j-2\pi p_j\right|<2\pi\epsilon,~\text{ for}~j=1,2.$$ This further yields $$\label{e38} \left|(2q+1)\pi\theta_3-2\pi(2qn+n-p_1-p_2)\right|<4\pi\epsilon.$$ Using [\[e36\]](#e36){reference-type="eqref" reference="e36"} and [\[e38\]](#e38){reference-type="eqref" reference="e38"}, we obtain a sequence $\tau_k\in(2\mathbb{Z}+1)\pi$ such that $\displaystyle\lim_{k\to\infty} \exp{(i\tau_k\theta_j)}=1,$ for $j=1,2,3.$ Using [\[eq26\]](#eq26){reference-type="eqref" reference="eq26"}, we have $$\displaystyle\lim_{k\to\infty}e_c^TU(\tau_k)e_d=2\left[\dfrac{1}{||v_1||^2}+\dfrac{1}{||v_2||^2}+\dfrac{1}{||v_3||^2}+\dfrac{1}{4}\right]=1.$$ Hence $T_{2,m}$ exhibits PGST between the pair of vertices $c$ and $d$ with respect to the sequence $\tau_k\in(2\mathbb{Z}+1)\pi$. Similarly using [\[eqn21\]](#eqn21){reference-type="eqref" reference="eqn21"} we find that $T_{2,m}$ exhibits PGST between the pair of vertices $e$ and $f$ with respect to the same sequence $\tau_k$ as well. 
Finally, consider the case that $q(x)$ is reducible and $\theta_1+\theta_2+\theta_3=2n+1$ for some $n\in\mathbb{Z}.$ In the proof of the main result in [@god4], one can observe that if there is PGST in a bipartite graph between a pair of vertices $a$ and $b$ with $\displaystyle\lim_{k\to\infty}U(\tau_k){\mathbf e}_a=\gamma{\mathbf e}_b,~\text{for some}~ \tau_k\in\mathbb{R} \text{ and }\gamma\in \mathbb{C}$ then $\gamma=\pm 1$ whenever $a$ and $b$ are in the same partite set, otherwise $\gamma=\pm i$. Since $U(0)=I$, we conclude from [\[eq26\]](#eq26){reference-type="eqref" reference="eq26"} that if there is PGST between $c$ and $d$, then we have a sequence $\tau_k\in\mathbb{R}$ such that for all $j=1,2,3,$$$\displaystyle\lim_{k\to\infty}\exp{(i(\tau_k+\pi))}=\displaystyle\lim_{k\to\infty}\exp{(i\tau_k\theta_j)}=\pm 1.$$ In case $\displaystyle\lim_{k\to\infty}\exp{(i(\tau_k+\pi))}=1,$ it follows that $\tau_k\in(2\mathbb{Z}+1)\pi.$ Since $\theta_1+\theta_2+\theta_3=2n+1,$ we have a contradiction that $-1=\displaystyle\lim_{k\to\infty}\exp{\left[i\tau_k\left(\theta_1+\theta_2+\theta_3\right)\right]}=1,$ where the equality on the right is obtained by using the property of exponentials. When $\displaystyle\lim_{k\to\infty}\exp{(i(\tau_k+\pi))}=-1,$ we have $\tau_k\in 2\mathbb{Z}\pi,$ and again it leads to a contradiction. Hence there is no PGST between the vertices $c$ and $d.$ Using [\[eqn21\]](#eqn21){reference-type="eqref" reference="eqn21"} and a similar argument, we conclude that there is no PGST between the vertices $e$ and $f$ as well. ◻ Considering $\alpha_j=\frac{1}{2}$ in the proof of Theorem [Theorem 6](#t8){reference-type="ref" reference="t8"} where $q(x)$ is irreducible, one can deduce that there is PGST with respect to a sequence in $2\mathbb{Z}\pi$ between the same pair of vertices. Combining Lemma [Lemma 4](#th1){reference-type="ref" reference="th1"} and Theorem [Theorem 6](#t8){reference-type="ref" reference="t8"}, one obtains a complete characterization of double subdivided stars $T_{l,m}$ exhibiting PGST. # Acknowledgements {#acknowledgements .unnumbered} The authors are indebted to the reviewers for the valuable comments and generous suggestions to improve the manuscript. S. Mohapatra is supported by Department of Science and Technology (INSPIRE: IF210209). H. Pal is funded by Science and Engineering Research Board (Project: SRG/2021/000522).
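The state transfer results above can also be probed numerically. The following is a minimal sketch (assuming `numpy`, with an ad hoc vertex ordering for $T_{l,l}$ chosen here only for illustration) that evaluates ${\mathbf e}_a^TU(\tau){\mathbf e}_b$ through the spectral decomposition along times $\tau\in(4\mathbb{Z}-1)\frac{\pi}{2}$; in line with Lemma [Lemma 4](#th1){reference-type="ref" reference="th1"} and Theorem 5, the recorded fidelity climbs toward $1$ as the search range grows without ever reaching it.

```python
import numpy as np

def double_subdivided_star(l):
    """Adjacency matrix of T_{l,l}; vertices 0 and 1 are the coalescence vertices
    a and b, each followed by its l middle vertices and l pendant vertices."""
    n = 4 * l + 2
    A = np.zeros((n, n))
    A[0, 1] = A[1, 0] = 1
    for j in range(l):
        for c, mid in ((0, 2 + 2 * j), (1, 2 + 2 * l + 2 * j)):
            A[c, mid] = A[mid, c] = 1              # coalescence vertex -- middle vertex
            A[mid, mid + 1] = A[mid + 1, mid] = 1  # middle vertex -- pendant vertex
    return A

A = double_subdivided_star(3)
vals, vecs = np.linalg.eigh(A)
proj_ab = vecs[0, :] * vecs[1, :]                  # contributions v_j(a) * v_j(b)
taus = (4 * np.arange(1, 100001) - 1) * np.pi / 2  # times in (4Z - 1) * pi / 2
fid = np.abs(np.exp(1j * np.outer(taus, vals)) @ proj_ab)
print(fid.max())   # climbs toward 1 as the range of k grows, but never reaches it
```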
arxiv_math
{ "id": "2310.04107", "title": "A Characterization of State Transfer on Double Subdivided Stars", "authors": "Sarojini Mohapatra and Hiranmoy Pal", "categories": "math.CO quant-ph", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We prove a colorful generalization of the Borsuk--Ulam theorem and derive colorful consequences from it, such as a colorful generalization of the ham sandwich theorem. Even in the uncolored case this specializes to a strengthening of the ham sandwich theorem, which given an additional condition, contains a result of Bárány, Hubard, and Jerónimo on well-separated measures as a special case. We prove a colorful generalization of Fan's antipodal sphere covering theorem, we derive a short proof of Gale's colorful KKM theorem, and we prove a colorful generalization of Brouwer's fixed point theorem. Our results also provide an alternative between Radon-type intersection results and KKM-type covering results. Finally, we prove colorful Borsuk--Ulam theorems for higher symmetry. address: Dept. Math. Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA author: - Florian Frick - Zoe Wellner date: September 25, 2023 title: Colorful Borsuk--Ulam theorems and applications --- [^1] # Introduction *Colorful* (or *rainbow*) results are popular across combinatorics and discrete geometry. These results take the following general form: If sets $S_1, \dots, S_n$ have property $P$, then there is a transversal with property $P$. Here a *transversal* of $S_1,\dots, S_n$ is a set $\{s_1,\dots, s_n\}$ with $s_i \in S_i$. For example, Bárány's colorful Carathéodory theorem [@Barany1981] states that if $S_1,\dots, S_{d+1} \subset \mathbb{R}^d$ satisfy $0 \in \mathop{\mathrm{\mathrm{conv}}}S_i$ for all $i \in [d+1] = \{1,\dots, d+1\}$, then there is a transversal $S$ with $0 \in \mathop{\mathrm{\mathrm{conv}}}S$. The "non-colorful" case $S_1 = \dots = S_{d+1}$ reduces to Carathéodory's theorem [@Caratheodory1911] that if $0$ is in the convex hull of $S \subset \mathbb{R}^d$, then some convex combination of at most $d+1$ elements of $S$ is equal to $0$. Other colorful results in geometry include colorful Helly theorems [@Barany1981; @kalai2005], Gale's colorful generalization of the KKM theorem [@Gale1984], and colorful versions of Tverberg's theorem [@barany1989; @zivaljevic1992; @blagojevic2015]. Prominent examples in combinatorics include results on rainbow arithmetic progressions [@jungic2003; @axenovich2004], rainbow matching results (such as [@AharoniBerger2009]) and rainbow Ramsey results (see [@fujita2010] for a survey) among several others. Topological methods have proven to be a powerful tool in attacking combinatorial and discrete-geometric problems [@bjorner1995; @deLongueville2013; @kozlov2008; @Matousek2003book]. Among the standard techniques are fixed point theorems (an early example is Nash's result on equilibria in non-cooperative games [@Nash51]) and equivariant methods such as the Borsuk--Ulam theorem (see for example [@Matousek2003book; @blagojevic2017]), which states that any continuous map $f\colon S^d \to \mathbb{R}^d$ from the $d$-sphere $S^d$, which is *odd* (i.e., $f(-x) = -f(x)$ for all $x$), has a zero. Here we ask: *Can colorful results be lifted to colorful topological methods?* For fixed point theorems this is true in quite some generality [@shih1993; @FrickHoustonEdwardsMeunier2017; @FrickZerbib2019]. There is an abundance of generalizations of the Borsuk--Ulam theorem; see for example [@Fan1952; @MeunierSu2019; @Dold1983; @Bourgin1955; @yang1954]. Here we prove a colorful generalization of the Borsuk--Ulam theorem. 
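As a concrete illustration of the colorful pattern, the colorful Carathéodory theorem is easy to experiment with computationally. The following brute-force sketch in Python (assuming `numpy` and `scipy`; linear programming is used only to test whether $0$ lies in a convex hull) searches for a transversal in the plane and is meant purely as an illustration, not as an efficient algorithm.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def contains_origin(points):
    """Decide whether 0 lies in conv(points) via feasibility of a linear program."""
    P = np.asarray(points, dtype=float)        # shape (k, d)
    k, d = P.shape
    # find lambda >= 0 with sum(lambda) = 1 and sum_i lambda_i * P_i = 0
    A_eq = np.vstack([P.T, np.ones((1, k))])
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.status == 0

def colorful_caratheodory_transversal(color_classes):
    """Brute-force search for a transversal whose convex hull contains 0."""
    for transversal in itertools.product(*color_classes):
        if contains_origin(transversal):
            return transversal
    return None

# d = 2: three color classes, each containing 0 in its convex hull.
S1 = [(1, 0), (-1, 1), (-1, -1)]
S2 = [(0, 1), (1, -1), (-1, -1)]
S3 = [(2, 2), (-3, 1), (1, -3)]
print(colorful_caratheodory_transversal([S1, S2, S3]))
```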
We introduce one piece of terminology: We say that a matrix $A\in \mathbb{R}^{d \times d}$ has *rows in intersecting cube facets* if for any two distinct rows $a$ and $b$ of $A$ whenever there is a $j \in [d]$ such that $|a_j| \ge |a_i|$ and $|b_j| \ge |b_i|$ for all $i \in [d]$, then $a_jb_j > 0$. That is, if the largest entries in absolute value in two different rows occur in the same column, then they cannot have opposite sign. If we normalize all rows to have sup-norm $1$, that is, to lie on the boundary of the hypercube in $\mathbb{R}^d$, then $A$ has rows in intersecting cube facets if no two rows lie strictly within vertex-disjoint facets of the hypercube. We can now state our colorful Borsuk--Ulam theorem: **Theorem 1**. *Let $f \colon S^d \to \mathbb{R}^{(d+1) \times (d+1)}$ be an odd and continuous map. Then there is an ${x\in S^d}$ such that either $f(x)$ does not have rows in intersecting cube facets or there is a permutation $\pi$ of $[d+1]$ such that $f(x)_{\pi(i)i}$ is non-negative and $|f(x)_{\pi(i)i}| \ge |f(x)_{\pi(i)j}|$ for all $i, j \in [d+1]$.* Thus for any odd map from the $d$-sphere to $\mathbb{R}^{(d+1) \times (d+1)}$ one matrix $A$ in the image either has rows that are "almost opposite" or the maximal absolute values of entries of $A$ in each row form a row-column transversal and these entries are non-negative. Here we think of the rows of $f(x)$ as $d+1$ odd maps $S^d \to \mathbb{R}^{d+1}$. Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"} is indeed a colorful generalization of the Borsuk--Ulam theorem: Let $f \colon S^d \to \mathbb{R}^d$ be an odd and continuous map. Let $\widehat f \colon S^d \to \mathbb{R}^{d+1}$ be the map obtained from $f$ by appending a zero in the last coordinate. Define $F\colon S^d \to \mathbb{R}^{(d+1) \times (d+1)}$ to have all rows equal to $\widehat f$. By Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"} there is an $x\in S^d$ such that either for some $j \in [d+1]$ we have that $|\widehat f(x)_j| \ge |\widehat f(x)_\ell|$ for all $\ell \in [d+1]$ and $\widehat f(x)_j$ is both non-negative and non-positive, which implies $\widehat f(x) = 0$, or there is a permutation $\pi$ of $[d+1]$ such that $F(x)_{\pi(i) i}$ is largest in absolute value in row $\pi(i)$ for all $i$. Since all rows of $F$ are equal to $\widehat f$ this means that each entry $\widehat f(x)_j$ is maximal in absolute value among the coordinates of $\widehat f(x)$, so these absolute values are all equal. Since $\widehat f(x)_{d+1} = 0$, we have that $\widehat f(x) = 0$. Here we collect consequences and generalizations of Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"}: The classical Ham Sandwich theorem, conjectured by Steinhaus and proved by Banach (see [@beyer2004; @StoneTukey1942]), asserts that for any $d$ probability measures on $\mathbb{R}^d$ with continuous density, there is an affine hyperplane $H$ that bisects all probability measures, that is, both halfspaces of $H$ have measure $\frac12$ in all $d$ probability measures. We prove a colorful generalization of the ham sandwich theorem; see Theorem [Theorem 17](#colHS){reference-type="ref" reference="colHS"}. 
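Both alternatives in Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"} are finite checks on a single matrix, which the following sketch makes explicit (in Python with `numpy`; the function names are chosen here only for illustration). The first function tests the "rows in intersecting cube facets" condition literally as defined, and the second searches for a permutation $\pi$ as in the second alternative.

```python
import itertools
import numpy as np

def rows_in_intersecting_cube_facets(A):
    """Check the defining condition: whenever some column j carries the largest
    absolute value of two distinct rows a and b, require a_j * b_j > 0."""
    A = np.asarray(A, dtype=float)
    for r, s in itertools.combinations(range(A.shape[0]), 2):
        a, b = A[r], A[s]
        for j in range(A.shape[1]):
            if np.all(np.abs(a[j]) >= np.abs(a)) and np.all(np.abs(b[j]) >= np.abs(b)):
                if not a[j] * b[j] > 0:
                    return False
    return True

def transversal_of_maxima(A):
    """Search for a permutation pi with A[pi(i), i] non-negative and maximal in
    absolute value within row pi(i), as in the second alternative of Theorem 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for pi in itertools.permutations(range(n)):
        if all(A[pi[i], i] >= 0 and np.all(np.abs(A[pi[i], i]) >= np.abs(A[pi[i]]))
               for i in range(n)):
            return pi
    return None

A = np.array([[0.2, 5.0, -1.0], [4.0, 1.0, 0.0], [1.0, -2.0, 3.0]])
print(rows_in_intersecting_cube_facets(A), transversal_of_maxima(A))  # True (1, 0, 2)
```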
When we specialize this to the non-colorful case, we derive a strengthening of the ham sandwich theorem (Theorem [Theorem 18](#kfhs){reference-type="ref" reference="kfhs"}), which contains a close variant of a measure partition result due to Bárány, Hubard, Jerónimo [@BaranyHubardJeronimo2008] on well-separated measures as a special case; see Corollary [Corollary 19](#cor:bhj){reference-type="ref" reference="cor:bhj"}. Fan proved that if $A_1, \dots, A_{d+1} \subseteq S^d$ are closed such that $A_i \cap (-A_i) = \varnothing$ for all $i \in [d+1]$ and $S^d = \bigcup_i A_i \cup(-A_i)$, then $A_1 \cap (-A_2) \cap A_3 \cap \dots \cap (-1)^d A_{d+1} \ne \varnothing$. We establish a colorful generalization; see Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"}. The Borsuk--Ulam theorem easily implies Brouwer's fixed point theorem, which asserts that any continuous self-map of a closed $d$-ball has a fixed point. Similarly, the colorful Borsuk--Ulam theorem implies a colorful Brouwer's fixed point theorem; see Theorem [Theorem 15](#thm:colBrouwer){reference-type="ref" reference="thm:colBrouwer"}. We give a simple proof of Gale's colorful KKM theorem [@Gale1984] as a consequence of our main result; see Corollary [Corollary 13](#cor:colKKM){reference-type="ref" reference="cor:colKKM"}. Again, the uncolored version is stronger than the classical KKM theorem about set coverings of the $d$-simplex $\Delta_d$. In this case, we obtain an alternative between KKM results and the topological Radon theorem, which unifies both of these results; see Theorem [Theorem 11](#thm:RadonKKM){reference-type="ref" reference="thm:RadonKKM"}. We give a generalization of Fan's result to $\mathbb{Z}/p$-symmetry for $p$ a prime; see Theorem [Theorem 20](#thm:p-fold-kyfan){reference-type="ref" reference="thm:p-fold-kyfan"}. We derive a generalization of the Bourgin--Yang $\mathbb{Z}/p$-Borsuk--Ulam theorem (Corollary [Corollary 23](#cor:kyfan-dold){reference-type="ref" reference="cor:kyfan-dold"}) and the corresponding set covering variant (Theorem [Theorem 21](#thm:p-cover){reference-type="ref" reference="thm:p-cover"}). We then prove a colorful generalization of Theorem [Theorem 21](#thm:p-cover){reference-type="ref" reference="thm:p-cover"}, which in the special case $p=2$ gives our earlier colorful generalization of Fan's set covering result; see Theorem [Theorem 25](#thm:col-p-kyfan){reference-type="ref" reference="thm:col-p-kyfan"}. Meunier and Su already proved a colorful generalization of Fan's theorem [@MeunierSu2019], which also exhibits a colorful Borsuk--Ulam phenomenon. Their generalization is different from ours and neither easily implies the other. We will discuss the differences after stating our colorful generalization of Fan's theorem (Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"}). # Preliminaries In this section we collect a few definitions used throughout. We refer to Matoušek's book [@Matousek2003book] for further details. A *simplicial complex* $\Sigma$ is a collection of finite sets closed under taking subsets. We refer to the ground set $\bigcup_{\sigma \in \Sigma} \sigma$ as the *vertex set* of $\Sigma$. Elements $\sigma \in \Sigma$ are called *faces*; inclusion-maximal faces are *facets* and two-element faces are *edges*. 
For a simplicial complex $\Sigma$ on vertex set $V$ we denote its *geometric realization* by $|\Sigma| = \bigcup_{\sigma \in \Sigma} \mathop{\mathrm{\mathrm{conv}}}\{e_v \ : \ v\in \sigma\} \subseteq \mathbb{R}^V$, where $e_v$ denote the standard basis vectors of $\mathbb{R}^V$ and $\mathop{\mathrm{\mathrm{conv}}}(X)$ denotes the convex hull of the set $X$. The simplicial complex of all subsets of $[n] = \{1,2,\dots,n\}$ is the *$(n-1)$-simplex* $\Delta_{n-1}$. For ease of notation we will denote the geometric realization of $\Delta_{n-1}$ also by $\Delta_{n-1}$. Observe that $\Delta_{n-1}$ is the convex hull of the standard basis vectors in $\mathbb{R}^n$. We call $\Sigma$ a *triangulation* of $S^d$ if $\Sigma$ is a simplicial complex whose geometric realization is homeomorphic to $S^d$. Let $\Sigma$ and $\Sigma'$ be simplicial complexes on vertex sets $V$ and $V'$, respectively. A map $\varphi\colon V \to V'$ is *simplicial* if for every $\sigma \in \Sigma$ we have that $\varphi(\sigma) \in \Sigma'$. In this case we write $\varphi \colon \Sigma \to \Sigma'$. By convex interpolation any simplicial map induces a continuous map $|\Sigma| \to |\Sigma'|$. A triangulation $\Sigma$ of $S^d$ is *centrally symmetric* if there is a simplicial map $\iota \colon \Sigma \to \Sigma$ and homeomorphism $h \colon |\Sigma| \to S^d$ such that for every $x\in |\Sigma|$ we have that $h(\iota(x)) = -h(x)$. The smallest centrally symmetric triangulation of $S^d$ is given by the (boundary of the) *crosspolytope* $\partial\Diamond_{d+1}$, the simplicial complex on $V = \{1, \dots, d\} \cup \{-1, \dots, -d\}$, where $\sigma \subseteq V$ is a face of $\partial\Diamond_{d+1}$ if for every $i \in \sigma$ we have that $-i \notin \sigma$. The geometric realization $|\partial\Diamond_{d+1}|$ can be realized in $\mathbb{R}^{d+1}$ with convex faces by taking the boundary of the convex hull of $\{\pm e_1, \dots, \pm e_{d+1}\}$. A simplicial map $\iota \colon \Sigma \to \Sigma$ induces a *$\mathbb{Z}/p$-action* if the $p$-fold composition $\iota^p$ is the identity. In this case $\Sigma$ is a *$\mathbb{Z}/p$-equivariant triangulation*. For $s \in \mathbb{Z}/p$ and $x \in |\Sigma|$ we write $s\cdot x$ for $\iota^s(x)$. The $\mathbb{Z}/p$-action is *free* if $s\cdot x \ne x$ for all $s \in \mathbb{Z}/p \setminus \{0\}$ and all $x \in |\Sigma|$. Given two spaces $X$ and $Y$ (homeomorphic to simplicial complexes) with $\mathbb{Z}/p$-actions, a map $f\colon X \to Y$ is *$\mathbb{Z}/p$-equivariant* if $f(s\cdot x) = s\cdot f(x)$ for all $s\in \mathbb{Z}/p$ and all $x\in X$. A map $f\colon S^d \to \mathbb{R}^n$ is *antipodal* or *odd* if $f(-x) = -f(x)$ for all $x \in S^d$. We reserve the term *map* for a continuous function. The same definition applies for a map $f\colon S^d \to \mathbb{R}^{n \times m}$ to the space of $(n \times m)$-matrices. For any such map, we write $f_i \colon S^d \to \mathbb{R}^m$ for the map to the $i$th row of $f$ and $f_{ij} \colon S^d \to \mathbb{R}$ for the map to the entry in row $i$ and column $j$ of $f$. The *degree* $\deg f$ of a map $f \colon S^d \to S^d$ is the integer $k$ such that the induced map on top homology $f_*\colon H_d(S^d) \cong \mathbb{Z}\to H_d(S^d) \cong \mathbb{Z}$ is multiplication by $k$. The Borsuk--Ulam theorem [@Borsuk1933] can be stated in various forms; here we collect three such statements: **Theorem 2** (Borsuk--Ulam theorem). 
(a) *Any odd map $f\colon S^d \to \mathbb{R}^d$ has a zero.* (b) *Any odd map $f\colon S^d \to S^d$ has odd degree.* (c) *For any odd map $f\colon S^d \to \mathbb{R}^{d+1}$ there is an $x \in S^d$ such that all coordinates of $f(x)$ are the same.* Item (b) implies item (a), which is easily seen to be equivalent to the statement that any odd map $S^{d-1}\to S^{d-1}$ has nonzero degree. For item (c) observe that for the diagonal $D = \{x\in \mathbb{R}^{d+1} \ : \ x_1 = x_2 = \dots = x_{d+1}\}$ the composition of $f \colon S^d \to \mathbb{R}^{d+1}$ with the projection $\mathbb{R}^{d+1} \to \mathbb{R}^{d+1} / D \cong \mathbb{R}^d$ is an odd map. This composition has a zero if and only if there is an $x \in S^d$ with $f(x) \in D$. We will use one immediate corollary of the Borsuk--Ulam theorem, which we state below. Any non-surjective map $S^d \to S^d$ has degree zero; thus we get: **Corollary 3**. *Any odd map $f\colon S^d \to S^d$ is surjective.* The Borsuk--Ulam theorem has been generalized to $G$-equivariant maps beyond $G = \mathbb{Z}/2$; see for example Dold [@Dold1983]. Here we will need the following (see [@kushkuley2006 Cor. 2.2]): **Lemma 4**. *For any free $\mathbb{Z}/p$-action on $S^d$, any $\mathbb{Z}/p$-equivariant map $f\colon S^d\to S^d$ has degree $1 \ \mathrm{mod} \ p$.* Let $\Sigma$ be a simplicial complex on vertex set $V$. The *deleted join* $\Sigma^{*2}_\Delta$ of $\Sigma$ is a simplicial complex on vertex set $V\times \{1,2\}$, where $(\sigma_1 \times \{1\}) \cup (\sigma_2 \times \{2\})$ is a face of $\Sigma^{*2}_\Delta$ if $\sigma_1$ and $\sigma_2$ are faces of $\Sigma$ such that $\sigma_1 \cap \sigma_2 = \varnothing$. The deleted join of the $n$-simplex is $(\Delta_n)^{*2}_\Delta = \partial\Diamond_{n+1}$, the boundary of the crosspolytope. Notice that any point $z \in |(\Delta_n)^{*2}_\Delta|$ in the geometric realization of the boundary of the crosspolytope is of the form $\lambda x + (1-\lambda) y$ for $\lambda \in [0,1]$ and $x,y \in |\Delta_n|$ that are contained in vertex-disjoint faces. This is true more generally for points in the geometric realization of the deleted join of any simplicial complex. We suppress bars, and denote the geometric realization of the deleted join of the simplex by $(\Delta_n)^{*2}_\Delta$ for ease of notation. Thus the notation $\lambda x + (1-\lambda)y \in (\Delta_n)^{*2}_\Delta$ refers to the point in $|(\Delta_n)^{*2}_\Delta|$ determined by $\lambda \in [0,1]$ and $x,y \in |\Delta_n|$ in vertex-disjoint faces. The *$p$-fold join* $\Sigma^{*p}$ of $\Sigma$ is a simplicial complex on vertex set $V\times \{1,2,\dots, p\}$, where $(\sigma_1 \times \{1\}) \cup (\sigma_2 \times \{2\})\cup\dots\cup(\sigma_p \times \{p\})$ is a face of $\Sigma^{*p}$ if for all $i$, $\sigma_i$ is a face of $\Sigma$. If we additionally require that $\sigma_i \cap \sigma_j = \varnothing$ when $i\neq j$ we get the *$p$-fold deleted join* $\Sigma^{*p}_\Delta$. The $p$-fold join of $S^d$ is homeomorphic to $S^{p(d+1)-1}$. The *barycentric subdivision* $\Sigma'$ of $\Sigma$ is the simplicial complex on vertex set $\Sigma$ where $\{\sigma_1, \dots, \sigma_k\}$ is a face of $\Sigma'$ if $\sigma_1 \subset \sigma_2 \subset \dots \subset \sigma_k$. # Proof of the colorful Borsuk--Ulam theorem In 1952 Ky Fan published his "combinatorial lemma" generalizing Tucker's lemma [@Fan1952], which is a discretized version of the Borsuk--Ulam theorem. 
Ky Fan's lemma applies to iterated barycentric subdivisions of the boundary of the crosspolytope and states that if the vertices of such a subdivision are labelled with the $2m$ numbers $\{\pm 1, \pm 2, \dots, \pm m\}$ such that labels of antipodal vertices sum to zero, while labels of vertices connected by an edge do not sum to zero, then the number of facets labelled $\{k_1, -k_2, k_3, \dots, (-1)^{d}k_{d+1}\}$, where $1 \le k_1 < k_2 < \dots < k_{d+1}$, is odd. Below we give a short proof of a version of Ky Fan's lemma that applies in greater generality, that is, for any centrally symmetric triangulation, while having a slightly weaker conclusion. In fact, with more care one could derive a generalization of Ky Fan's lemma from our setup below, but we will not need this generality. The proof we present here is not new; see De Loera, Goaoc, Meunier and Mustafa [@LoeraGoaocMeunierMustafa2017]. Further note that generalizations of this theorem have been proven in other settings; for example, Musin proved a generalization of Fan's lemma for manifolds [@Musin2012] and Živaljević proved a generalization for oriented matroids [@Zivaljevic2010]. **Theorem 5**. *Let $\Sigma$ be a centrally symmetric triangulation of $S^d$ with vertex set $V$. Let $\ell\colon V \to \{\pm 1, \pm 2, \dots, \pm (d+1)\}$ be a map with $\ell(-v) = -\ell(v)$ for all $v \in V$. Fix signs $s_1, \dots, s_{d+1} \in \{-1,+1\}$. Then either $\Sigma$ has an edge $e$ with $\ell(e) = \{-j,+j\}$ for some $j \in [d+1]$ or $\Sigma$ has a facet $\sigma$ with $\ell(\sigma) = \{s_1\cdot 1, \dots, s_{d+1}\cdot (d+1)\}$.* *Proof.* Assume that $\Sigma$ has no edge $e$ with $\ell(e) = \{-j,+j\}$ for $j \in [d+1]$. Then the map $\ell$ induces a simplicial map $L\colon \Sigma \to \partial\Diamond_{d+1}$ to the boundary of the crosspolytope $\partial\Diamond_{d+1}$ by identifying label $j$ with standard basis vector $e_j$, and similarly identifying $-j$ with $-e_j$. This map is odd, and thus $L$ is surjective by Corollary [Corollary 3](#cor:surj){reference-type="ref" reference="cor:surj"}. In particular, there is a facet $\sigma$ that $L$ maps to the facet $\{s_1\cdot 1, \dots, s_{d+1}\cdot (d+1)\}$ in $\partial\Diamond_{d+1}$, which finishes the proof. ◻ By taking limits, we derive a version of the theorem above for set coverings of the sphere instead of labellings of triangulations. Fan already derived such a set covering variant in his original paper. We include a proof for completeness. **Theorem 6**. *Let $A_1, \dots, A_{d+1} \subseteq S^d$ be closed sets such that $S^d = \bigcup_i A_i \cup \bigcup_i (-A_i)$. Fix signs $s_1, \dots, s_{d+1} \in \{-1,+1\}$. Either there is an $i\in [d+1]$ such that $A_i \cap (-A_i) \ne \varnothing$ or $\bigcap_i s_i A_i \ne \varnothing$.* *Proof.* Assume that $A_i \cap (-A_i) = \varnothing$ for all $i$. Let $\varepsilon>0$ be sufficiently small that the distance between $A_i$ and $-A_i$ is larger than $\varepsilon$ for all $i$. Let $T_\varepsilon$ be a centrally symmetric triangulation of $S^d$ such that each facet has diameter less than $\varepsilon$. This can be achieved by taking repeated barycentric subdivisions of a given centrally symmetric triangulation. Let $\ell\colon V(T_{\varepsilon})\to \{\pm 1,\dots, \pm (d+1)\}$ be a labelling of the vertices of $T_\varepsilon$ such that $\ell(v) = i$ only if $v \in A_i$, and such that $\ell(-v)=-\ell(v)$. Thus $\ell(v) = -i$ only if $v \in (-A_i)$. By our choice of $\varepsilon$, the sum of labels of any edge is non-zero. 
By Theorem [Theorem 5](#newkf){reference-type="ref" reference="newkf"} there is a facet with labels $s_1\cdot 1, \dots, s_{d+1}\cdot (d+1)$. Let $x_\varepsilon$ be the barycenter of some such facet. As $\varepsilon$ approaches zero, by compactness of $S^d$, the $x_\varepsilon$ have an accumulation point $x$. Since the $A_i$ are closed, we have that $x\in \bigcap_i s_i A_i$. ◻

We will now derive a colorful set covering version of Fan's lemma by considering the barycentric subdivision of fine triangulations. We then label each vertex according to the $j$th set covering if it subdivides a face of dimension $j-1$. This idea is not new: It was originally used by Su [@Su1999] to derive a colorful version of Sperner's lemma in order to establish results on rental harmony. Recently this idea was employed in [@FrickZerbib2019] to prove colorful versions of set covering results for polytopes, such as a colorful KKMS and colorful Komiya's theorem. The new ingredient there was the application to set covering (instead of vertex labelling) results. The following is a colorful generalization of Fan's sphere covering result:

**Theorem 7**. *For $j \in [d+1]$ let $A_1^{(j)}, \dots, A_{d+1}^{(j)} \subseteq S^d$ be closed sets such that $S^d = \bigcup_i A_i^{(j)} \cup \bigcup_i (-A_i^{(j)})$ for each $j$. Suppose $A_i^{(j)} \cap (-A_i^{(\ell)}) = \varnothing$ for all $i$ and for all $j \ne \ell$. Fix signs $s_1, \dots, s_{d+1} \in \{-1,+1\}$. Then there is a permutation $\pi$ of $[d+1]$ such that $\bigcap_i s_{i} A_i^{(\pi(i))} \ne \varnothing$.*

*Proof.* As before, let $T_\varepsilon$ be a centrally symmetric triangulation of $S^d$, where every face has diameter at most $\varepsilon > 0$. Here $\varepsilon$ is chosen sufficiently small so that no face intersects both $A_i^{(j)}$ and $-A_i^{(\ell)}$ for $j \ne \ell$ and any $i$. Let $T'_\varepsilon$ denote the barycentric subdivision of $T_\varepsilon$. As in the proof of Theorem [Theorem 6](#thm:set-kyfan){reference-type="ref" reference="thm:set-kyfan"}, let $\ell\colon V(T'_{\varepsilon})\to \{\pm 1,\dots, \pm (d+1)\}$ be a labelling of the vertices of $T'_\varepsilon$ such that $\ell(v) = i$ only if $v \in A_i^{(k)}$ and $v$ subdivides a $(k-1)$-dimensional face of $T_\varepsilon$. We may assume that $\ell(-v)=-\ell(v)$. By our choice of $\varepsilon$, the sum of labels of any edge is non-zero. By Theorem [Theorem 5](#newkf){reference-type="ref" reference="newkf"} there is a facet with labels $s_1\cdot 1, \dots, s_{d+1}\cdot (d+1)$. Let $x_\varepsilon$ be the barycenter of some such facet. As $\varepsilon$ approaches zero, by compactness of $S^d$, the $x_\varepsilon$ have an accumulation point $x$. For each small $\varepsilon > 0$, let $\pi_\varepsilon$ be the permutation of $[d+1]$ with $\pi_\varepsilon(i) = j$ if the (unique) vertex of the facet of $T'_\varepsilon$ that contains $x_\varepsilon$ and subdivides a face of dimension $j-1$ has label $s_i\cdot i$. (If $x_\varepsilon$ is in multiple facets, we choose one arbitrarily.) Since there are only finitely many permutations of $[d+1]$, we can choose a sequence $\delta_1, \delta_2, \dots$ converging to $0$ along which the permutations $\pi_{\delta_n}$ all coincide and the points $x_{\delta_n}$ converge to the accumulation point $x$. Call this permutation $\pi$. Since the $A_i^{(j)}$ are closed, we have that $x\in \bigcap_i s_i A_i^{(\pi(i))}$. ◻

We compare this colorful generalization of Fan's theorem to the result of Meunier and Su [@MeunierSu2019], who proved a multilabelled Fan's theorem.
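Before turning to that comparison, here is a toy instance of Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"} for $d = 1$; the arcs below are our own choice and do not appear in the original text. Parametrize $S^1$ by the angle $\theta \in [0,2\pi)$, so that the antipode of $\theta$ is $\theta + \pi$, and take the closed arcs
$$A_1^{(1)} = [0, \tfrac{\pi}{2}], \quad A_2^{(1)} = [\tfrac{\pi}{2}, \pi], \quad A_1^{(2)} = [\tfrac{\pi}{6}, \tfrac{2\pi}{3}], \quad A_2^{(2)} = [\tfrac{2\pi}{3}, \tfrac{7\pi}{6}].$$
Each covering together with its antipodal sets covers $S^1$, and $A_i^{(j)} \cap (-A_i^{(\ell)}) = \varnothing$ for $j \ne \ell$. For $s_1 = s_2 = +1$ the identity permutation fails, since $A_1^{(1)} \cap A_2^{(2)} = \varnothing$, but the transposition succeeds: $A_1^{(2)} \cap A_2^{(1)} = [\tfrac{\pi}{2}, \tfrac{2\pi}{3}] \ne \varnothing$, as guaranteed by the theorem.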
We will translate their result to a set covering result by taking limits to make a direct comparison with Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"}. **Theorem 8** (Meunier and Su [@MeunierSu2019]). *Let $m \ge 1$ and $d\ge 1$ be integers, and let $d_1, \dots, d_m$ be non-negative integers with $d_1 + \dots + d_m = d$. For $j \in [m]$ let $A_1^{(j)}, \dots, A_{d+1}^{(j)} \subseteq S^d$ be closed sets such that $S^d = \bigcup_i A_i^{(j)} \cup \bigcup_i (-A_i^{(j)})$ for each $j$. Suppose $A_i^{(j)} \cap (-A_i^{(j)}) = \varnothing$ for all $i, j$. Then there are strictly increasing maps $f_j\colon [d_j+1] \to [d+1]$ and signs $s_j \in \{-1,+1\}$ for $j \in [m]$ such that $$\bigcap_{j=1}^m \bigcap_{i=1}^{d_j+1} s_j\cdot (-1)^i A_{f_j(i)}^{(j)} \ne \varnothing.$$* Think of the sets $A_i^{(j)}$ recorded in a matrix, with set $A_i^{(j)}$ in the $j$th row and $i$th column. Theorem [Theorem 8](#thm:ms){reference-type="ref" reference="thm:ms"} has no assumptions regarding intersections between sets in different rows (only on sets in the same row) and the conclusion gives an intersection among sets (with alternating signs) that form a row-transversal. Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"} has no assumption regarding intersections of sets in the same row (only distinct rows) and the conclusion gives an intersection among sets (with prescribed signs) that form a row and column transversal. *Proof of Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"}.* Define the following sets: $$A_i^{(j)}=\left\{ x\in S^d \ | \ f(x)_{ji}=|f(x)_{ji}| \ge |f(x)_{j \alpha }| \ \text{for all} \ \alpha \in [d+1]\right\},$$ $$-A_i^{(j)}=\left\{ x\in S^d \ | \ -f(x)_{ji}=|f(x)_{ji}| \ge |f(x)_{j \alpha }| \ \text{for all} \ \alpha \in [d+1]\right\}.$$ Note that for each $j$, the collection of sets $A_i^{(j)}$ will satisfy $S^d=\bigcup_i A_i^{(j)} \cup \bigcup_i (-A_i^{(j)})$ since for every $x\in S^d$ the maximal entry in absolute value in the $j$th row of $f(x)$ must be achieved somewhere. If $A_i^{(j)} \cap (-A_i^{(\ell)}) \neq \varnothing$ for some $i$ and for some $j \ne \ell$ then $f(x)_{ji}=|f(x)_{ji}| \ge |f(x)_{j \alpha }|$ and $-f(x)_{\ell i}=|f(x)_{\ell i}| \ge |f(x)_{\ell\alpha }|$ for all $\alpha \in [d+1]$. Thus $f(x)$ does not have rows in intersecting cube facets. If $A_i^{(j)} \cap (-A_i^{(\ell)}) = \varnothing$ for all $i$ and for all $j \ne \ell$ apply Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"} with all signs $s_i = +1$, to get that there is a permutation $\pi$ of $[d+1]$ such that $\bigcap_i A_i^{(\pi(i))} \ne \varnothing$. Then for $x \in \bigcap_i A_i^{(\pi(i))}$ we have that $f(x)_{\pi(i)i} = |f(x)_{\pi(i)i}| \ge |f(x)_{\pi(i)j}|$ for all $i, j \in [d+1]$. ◻ # Radon--KKM alternative and the colorful KKM theorem Recall that $\Delta_d = \{x \in \mathbb{R}^{d+1} \ : \ \sum_i x_i = 1, \ x_i \ge 0 \ \forall i \in [d+1]\}$ denotes the *$d$-simplex*. For a subset $J \subseteq [d+1]$ the set $\Delta_d^J = \{x \in \Delta_d \ : \ x_j = 0 \ \forall j \notin J\}$ is a *face* of $\Delta_d$. If $J = [d+1]\setminus\{i\}$, then we call $\Delta_d^J$ the *$i$th facet* of $\Delta_d$. For $f\colon \Delta_d \to \mathbb{R}^n$, a partition $J \sqcup J'$ of $[d+1]$ with $f(\Delta_d^J) \cap f(\Delta_d^{J'}) \ne \varnothing$ is a *Radon partition* for $f$. The following is a "discretized" variant of the Borsuk--Ulam theorem: **Theorem 9** (Topological Radon theorem -- Bajmoczy and Bárány [@Bajmoczy1979]). 
*Let $f \colon \Delta_d \to \mathbb{R}^{d-1}$ be continuous. Then $f$ has a Radon partition.* It is no loss of generality to state the topological Radon theorem for maps $f\colon \Delta_d \to \Delta_{d-1}$ or $f\colon \Delta_d \to \partial\Delta_d \cong S^{d-1}$, where $\partial\Delta_d$ denotes the boundary of $\Delta_d$. The topological Radon theorem is derived from the Borsuk--Ulam theorem, and in fact can be seen as a discretized version of it. Before exploring the colorful extension implied by our main result, we first investigate the non-colorful version, which will apply to maps $f\colon \Delta_d \to \Delta_d$. We will show that the topological Radon theorem, in a sense, is dual to the KKM theorem: **Theorem 10** (KKM theorem [@KnasterKuratowskiMazurkiewicz1929]). *Let $A_1, \dots, A_{d+1}$ be an open cover of $\Delta_d$ such that for every $J \subseteq [d+1]$ we have that $\Delta_d^J \subseteq \bigcup_{j \in J} A_j$. Then $\bigcap A_i \ne \varnothing$.* We say that a finite sequence of continuous maps $\alpha_1, \dots, \alpha_n \colon \Delta_d \to [0,1]$ is a *partition of unity* if $\sum_i \alpha_i(x) = 1$ for all $x \in \Delta_d$. Note that this means that $\alpha = (\alpha_1, \dots, \alpha_n)$ is a map to the $(n-1)$-simplex $\Delta_{n-1}$. A partition of unity $\alpha_1, \dots, \alpha_n \colon \Delta_d \to [0,1]$ is *subordinate* to an open cover $A_1, \dots, A_n$ of $\Delta_d$ if $\alpha_i(x) > 0$ implies $x \in A_i$. Any open cover $A_1, \dots, A_n$ of $\Delta_d$ has a partition of unity subordinate to it, since the simplex is locally compact and Hausdorff. Conversely, any continuous $\alpha \colon \Delta_d \to \Delta_{n-1}$ gives an open cover $A_i = \{x\in \Delta_d \ : \ \alpha_i(x) > 0\}$ of $\Delta_d$. Having a Radon partition is a degeneracy of a map $\alpha \colon \Delta_d \to \Delta_d$, while for a cover $A_1, \dots, A_{d+1}$ having empty intersection, $\bigcap A_i = \varnothing$, is a degeneracy. Theorem [Theorem 11](#thm:RadonKKM){reference-type="ref" reference="thm:RadonKKM"} shows that these degeneracies are dual to one another. **Theorem 11**. *Let $A_1, \dots, A_{d+1}$ be an open cover of $\Delta_d$ and let $\alpha_1, \dots, \alpha_{d+1} \colon \Delta_d \to [0,1]$ be a partition of unity subordinate to the cover $\{A_1,\dots,A_{d+1}\}$. Let $\alpha = (\alpha_1,\dots, \alpha_{d+1}) \colon \Delta_d \to \Delta_d$. Then $\alpha$ has a Radon partition or $\bigcap A_i \ne \varnothing$.* *Proof.* Let $F\colon (\Delta_d)^{*2}_\Delta \to \mathbb{R}^{d+1}, \ \lambda x + (1-\lambda)y \mapsto \lambda\alpha(x) - (1-\lambda)\alpha(y)$. Since $(\Delta_d)^{*2}_\Delta$ is homeomorphic to $S^d$, by the Borsuk--Ulam theorem (Theorem [Theorem 2](#thm:BU){reference-type="ref" reference="thm:BU"}(c)), $F$ must hit the diagonal $D = \{(z_1,\dots, z_{d+1}) \in \mathbb{R}^{d+1} \ : \ z_1 = z_2 =\dots = z_{d+1}\}$. If $F(\lambda x + (1-\lambda)y) = 0$ then $\lambda\alpha(x) = (1-\lambda)\alpha(y)$ and also $0 = \sum \lambda\alpha_i(x) - (1- \lambda)\alpha_i(y) = \lambda \sum \alpha_i(x) - (1-\lambda)\sum \alpha_i(y) = 2\lambda-1$, which implies $\lambda = \frac12$ and thus $\alpha(x) = \alpha(y)$. If $F(\lambda x + (1-\lambda)y) \in D \setminus \{0\}$, then $\alpha_i(x) > 0$ for all $i \in [d+1]$ or $\alpha_i(y) > 0$ for all $i \in [d+1]$ depending on whether the coordinates of $F(\lambda x + (1-\lambda)y)$ are positive or negative. Thus either $x \in \bigcap A_i$ or $y \in \bigcap A_i$. 
◻ Theorem [Theorem 11](#thm:RadonKKM){reference-type="ref" reference="thm:RadonKKM"} easily implies both the topological Radon theorem and the KKM theorem. To derive the topological Radon theorem as a consequence, note that for a map $\alpha\colon \Delta_d \to \Delta_{d-1} \subset \Delta_d$, the coordinate functions $\alpha_1, \dots, \alpha_{d+1}$ are subordinate to the open cover $A_i = \{x \in \Delta_d \ : \ \alpha_i(x) > 0\}$, where $A_{d+1} = \varnothing$. Thus $\bigcap A_i = \varnothing$ and $\alpha$ has a Radon partition by Theorem [Theorem 11](#thm:RadonKKM){reference-type="ref" reference="thm:RadonKKM"}. We can regard Theorem [Theorem 11](#thm:RadonKKM){reference-type="ref" reference="thm:RadonKKM"} as a natural strengthening of the KKM theorem, where the KKM condition (that $\Delta_d^J \subseteq \bigcup_{j \in J} A_j$ for all $J \subseteq [d+1]$) is replaced by the condition that a partition of unity subordinate to the cover avoids a Radon partition, which is a weaker requirement. We can now prove a colorful generalization: **Theorem 12**. *Let $\alpha^{(1)}, \dots, \alpha^{(d+1)} \colon \Delta_d \to \Delta_d$ be continuous maps such that for every $J \subseteq [d+1]$ and for every $i \in [d+1]$ we have that $\alpha^{(i)}(\Delta_d^J) \subseteq \Delta_d^J$. Then there is an $x \in \Delta_d$ and a permutation $\pi$ of $[d+1]$ such that $\alpha^{(i)}_{\pi(i)}(x) \ge \alpha^{(i)}_j(x)$ for all $j \in [d+1]$.* *Proof.* Let $A \colon \Delta_d \to \mathbb{R}^{(d+1) \times (d+1)}, \ x \mapsto (\alpha^{(i)}_j(x))_{i,j}$. The condition $\alpha^{(i)}(\Delta_d^J) \subseteq \Delta_d^J$ implies that for $x \in \Delta_d^J$ the matrix $A(x)$ has non-zero entries only in the columns corresponding to $j \in J$. Let $$F\colon (\Delta_d)^{*2}_\Delta \to \mathbb{R}^{(d+1) \times (d+1)}, \ \lambda x + (1-\lambda)y \mapsto \lambda A(x) - (1-\lambda)A(y).$$ We observe that no column of ${F(\lambda x + (1-\lambda)y)}$ has both positive and negative entries: Indeed, if $\lambda=1$ all entries of ${F(\lambda x + (1-\lambda)y)}$ are non-negative; if $\lambda= 0$ all entries of ${F(\lambda x + (1-\lambda)y)}$ are non-positive; and if $0 < \lambda < 1$ then $x$ and $y$ are in proper faces of $\Delta_d$, which are disjoint by definition of deleted join, say $x \in \Delta_d^J$ and $y \in \Delta_d^{[d+1] \setminus J}$. Then since $\alpha^{(i)}(\Delta_d^J) \subseteq \Delta_d^J$, columns of ${F(\lambda x + (1-\lambda)y)}$ corresponding to $j \in J$ are non-negative and all other columns are non-positive. Since each column of $F(\lambda x + (1-\lambda)y)$ is either entirely non-negative or entirely non-positive, $F(\lambda x + (1-\lambda)y)$ has rows in intersecting cube facets. By Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"} there is a point $\lambda x + (1-\lambda)y \in (\Delta_d)^{*2}_\Delta$ and a permutation $\pi$ of $[d+1]$ such that $F(\lambda x + (1-\lambda)y)_{\pi(i)i} = \lambda\alpha^{\pi(i)}_i(x) - (1-\lambda)\alpha^{\pi(i)}_i(y)$ is non-negative and $|F(\lambda x + (1-\lambda)y)_{\pi(i)i}| \ge |F(\lambda x + (1-\lambda)y)_{\pi(i)j}|$ for all $i, j \in [d+1]$. If some $F(\lambda x + (1-\lambda)y)_{\pi(i)i}$ were zero, then since these entries maximize their respective rows, the entire row would be zero, which is impossible. Thus all $F(\lambda x + (1-\lambda)y)_{\pi(i)i}$ are positive. In particular, since the sign of columns is constant, all columns are non-negative. 
Notice that it is also not possible for an entire column to be zero, since one entry in this column maximizes its row in absolute value and this row is not identically zero. By the above this can only be the case when $\lambda = 1$. Thus $F(\lambda x + (1-\lambda)y)_{\pi(i)i} = \alpha^{\pi(i)}_i(x) > 0$, and $|F(\lambda x + (1-\lambda)y)_{\pi(i)i}| \ge |F(\lambda x + (1-\lambda)y)_{\pi(i)j}|$ for all $i, j \in [d+1]$ gives that $\alpha^{\pi(i)}_i(x) \ge \alpha^{\pi(i)}_j(x)$ for all $i, j \in [d+1]$. ◻ We derive the colorful KKM theorem, originally due to Gale [@Gale1984], as an immediate consequence of Theorem [Theorem 12](#thm:colKKM){reference-type="ref" reference="thm:colKKM"}: **Corollary 13** (colorful KKM theorem). *For each $i \in [d+1]$ let $A_1^{(i)}, \dots, A_{d+1}^{(i)}$ be open covers of $\Delta_d$ such that for every $J \subseteq [d+1]$ and for every $i \in [d+1]$ we have that $\Delta_d^J \subseteq \bigcup_{j \in J} A_j^{(i)}$. Then there is a permutation $\pi$ of $[d+1]$ such that $\bigcap A^{(i)}_{\pi(i)} \ne \varnothing$.* *Proof.* Let $\alpha^{(i)} \colon \Delta_d \to \Delta_d$ be a partition of unity subordinate to the cover $\{A_1^{(i)}, \dots, A_{d+1}^{(i)}\}$. Then for every $J \subseteq [d+1]$ and for every $i \in [d+1]$ we have that $\alpha^{(i)}(\Delta_d^J) \subseteq \Delta_d^J$, since $\Delta_d^J \subseteq \bigcup_{j \in J} A_j^{(i)}$. Thus by Theorem [Theorem 12](#thm:colKKM){reference-type="ref" reference="thm:colKKM"} there is an $x \in \Delta_d$ and a permutation $\pi$ of $[d+1]$ such that $\alpha^{(i)}_{\pi(i)}(x) \ge \alpha^{(i)}_j(x)$ for all $j \in [d+1]$. In particular, $\alpha^{(i)}_{\pi(i)}(x) > 0$ and thus $x \in \bigcap A^{(i)}_{\pi(i)}$. ◻ **Remark 14**. In the same way that the Borsuk--Ulam theorem strengthens Brouwer's fixed point theorem, Theorem [Theorem 12](#thm:colKKM){reference-type="ref" reference="thm:colKKM"} and Corollary [Corollary 13](#cor:colKKM){reference-type="ref" reference="cor:colKKM"} exhibit the colorful Borsuk--Ulam theorem as a strengthening of Gale's colorful KKM theorem. Here it is interesting that in the proof of Theorem [Theorem 12](#thm:colKKM){reference-type="ref" reference="thm:colKKM"} we used that the columns of any matrix in the image of the map $F$ are either non-negative or non-positive. This is a much stronger condition than is necessary for the application of Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"}. As a consequence of Corollary [Corollary 13](#cor:colKKM){reference-type="ref" reference="cor:colKKM"} we can state a colorful Brouwer's fixed point theorem that for $d+1$ maps $f_i \colon \Delta_d \to \Delta_d$ asserts the existence of a point $x\in \Delta_d$ and a set of inequalities that in the case $f_1 = f_2 = \dots = f_{d+1}$ specialize to $x$ is a fixed point. We introduce one piece of terminology: Let $S(d) \subseteq \mathbb{R}^{d \times d}$ be the set of *stochastic matrices*, that is, $A \in S(d)$ if all entries of $A$ are non-negative and every row sums to one. Thus stochastic matrices are those $(d \times d)$-matrices, where every row vector belongs to the $(d-1)$-simplex $\Delta_{d-1}$. **Theorem 15**. *Let $f\colon \Delta_d \to S(d)$ be continuous. Then there is an $x \in \Delta_d$ and a permutation $\pi$ of $[d+1]$ such that $f_{i\pi(i)}(x) \le x_{\pi(i)}$ for all $i\in[d+1]$.* *Proof.* Let $A_j^{(i)} = \{x \in \Delta_d \ : \ f_{ij}(x) \le x_j\}$. These sets are closed by continuity of $f$. Let $J \subseteq [d+1]$ be some non-empty set. 
Then for $x \in \Delta_d^J$ we have that $\sum_{j \in J} x_j = 1$ and $\sum_{j \in J} f_{ij}(x) \le 1$ for every $i \in [d+1]$. This implies that for some $j \in J$ we have that $f_{ij}(x) \le x_j$ and thus $x \in A^{(i)}_j$. This shows that $\Delta_d^J \subseteq \bigcup_{j \in J} A^{(i)}_j$. A standard approximation argument shows that Corollary [Corollary 13](#cor:colKKM){reference-type="ref" reference="cor:colKKM"} also holds for collections of closed sets, and thus we get that there is a permutation $\pi$ of $[d+1]$ such that $\bigcap A^{(i)}_{\pi(i)} \ne \varnothing$. Any $x \in \bigcap A^{(i)}_{\pi(i)}$ satisfies the desired set of inequalities. ◻ This is indeed a colorful generalization of Brouwer's fixed point theorem. If every row of $f\colon \Delta_d \to S(d)$ is equal to $h \colon \Delta_d \to \Delta_d$, then Theorem [Theorem 15](#thm:colBrouwer){reference-type="ref" reference="thm:colBrouwer"} asserts the existence of $x\in \Delta_d$ with $h_i(x) \le x_i$ for all $i \in [d+1]$. Since $\sum_i h_i(x) = 1 = \sum_i x_i$, this implies $h(x) = x$. # The colorful ham sandwich theorem We first recall the classical Ham Sandwich theorem, conjectured by Steinhaus and proved by Banach; see [@beyer2004]. Here *hyperplane* refers to an affine subspace of codimension one. We will think of every hyperplane $H \subseteq \mathbb{R}^d$ as coming with a fixed orientation so that its positive halfspace $H^+$ is well-defined (similarly, its negative halfspace $H^-$), that is, for $H = \{x \in \mathbb{R}^d \ : \ \langle x, z \rangle = b\}$ for $z \in \mathbb{R}^d \setminus \{0\}$ and $b \ge 0$, we let $H^+ = \{x \in \mathbb{R}^d \ : \ \langle x,z \rangle \ge b\}$ and $H^- = \{x \in \mathbb{R}^d \ : \ \langle x,z \rangle \le b\}$. **Theorem 16** (Ham Sandwich theorem). *Let $\mu_1, \mu_2,\dots \mu_{d}$ be Borel probability measures on $\mathbb{R}^d$ such that for every hyperplane $H$ we have that $\mu_i(H) = 0$ for all $i \in [d]$. Then there is a hyperplane $H$ such that $\mu_i(H^+)=\frac{1}{2} = \mu_i(H^-)$ for all $i\in[d]$.* A colorful version of Theorem [Theorem 16](#thm:hs){reference-type="ref" reference="thm:hs"} will take several families of Borel probability measures as input and guarantee the existence of a hyperplane that separates the measures in a certain way. To be a true colorful version, such a result should specialize to Theorem [Theorem 16](#thm:hs){reference-type="ref" reference="thm:hs"} if all families of Borel measures are the same. We will discuss two natural attempts at formulating a colorful Ham Sandwich theorem, for simplicity for $d=2$. Given two families of Borel probability measures $\sigma_1=\left\{ r_1,g_1\right\},\sigma_2=\left\{ r_2,g_2\right\}$ in $\mathbb{R}^2$, is there a line $H$ such that $r_1(H^+)\ge \frac{1}{2}, r_2(H^-)\ge \frac{1}{2}$ and $g_1(H^-)\ge \frac{1}{2}, g_2(H^+)\ge \frac{1}{2}$? For $r_1 = r_2$ and $g_1 = g_2$ this would reduce to the usual Ham Sandwich theorem. However, this proposed colorful version is false: Figure [\[fig:badcolHS\]](#fig:badcolHS){reference-type="ref" reference="fig:badcolHS"} shows two families of two Borel measures (red and green) distributed along the parabola such that no line $H$ as above exists. Since this proposed colorful generalization of the Ham Sandwich theorem has a conclusion that is too strong to be true, a refined attempt at arriving at a colorful version might allow us to switch the roles of $g_2$ and $r_2$ in the second family of Borel measures. 
That is, given two families of Borel probability measures $\sigma_1=\left\{ r_1,g_1\right\}, \sigma_2=\left\{ r_2,g_2\right\}$, is there a line $H$ such that $r_1(H^+)\ge \frac{1}{2}, r_2(H^-)\ge \frac{1}{2}$ and either $g_1(H^+)\ge \frac{1}{2}, g_2(H^-)\ge \frac{1}{2}$ or $g_1(H^-)\ge \frac{1}{2}, g_2(H^+)\ge \frac{1}{2}$? Again, if $r_1 = r_2$ and $g_1 = g_2$ then this reduces to the usual Ham Sandwich theorem. While this colorful version of the Ham Sandwich theorem is true, it is a trivial consequence of the Ham Sandwich theorem itself: Let $H$ be a line that simultaneously bisects the measures $r_1$ and $g_1$. Then $r_1(H^+) = \frac12 = r_1(H^-)$. By flipping the orientation of $H$ if necessary, we can make sure that $r_2(H^-) \ge \frac12$. Moreover, $g_1(H^+) = \frac12 = g_1(H^-)$ and one of $g_2(H^+) \ge \frac12$ or $g_2(H^-) \ge \frac12$ has to hold as well. Nevertheless, the Ham Sandwich theorem admits a (non-trivial) colorful generalization. The example above shows that we need to impose that measures in different families are not "oppositely distributed." We make this notion precise now and then state the colorful ham sandwich theorem. Let $\mathcal M = \{\mu_1, \dots, \mu_m\}$ be a family of finite Borel measures on $\mathbb{R}^d$, and let $H$ be a hyperplane. We say that $\mu_i$ *maximizes $H^+$ for $\mathcal M$* if $\mu_i(H^+) -\frac12\mu_i(\mathbb{R}^d) \ge \mu_\alpha(H^+) - \frac12\mu_\alpha(\mathbb{R}^d)$ for all $\alpha \in [m]$. Similarly, $\mu_i$ *minimizes $H^+$ for $\mathcal M$* if $\mu_i(H^+)-\frac12\mu_i(\mathbb{R}^d) \le \mu_\alpha(H^+) - \frac12\mu_\alpha(\mathbb{R}^d)$ for all $\alpha \in [m]$. We may now state the colorful generalization of the Ham Sandwich theorem:

**Theorem 17**. *For each $j \in [d+1]$ let $\mathcal M_j = \{\mu_1^{(j)}, \dots, \mu_{d+1}^{(j)}\}$ be an ordered family of $d+1$ finite Borel measures on $\mathbb{R}^d$ such that for every hyperplane $H$ we have that $\mu_i^{(j)}(H) = 0$ for all $i \in [d+1]$. Suppose further that for $j, k \in [d+1]$ with $j \ne k$ and any hyperplane $H$ we have that if $\mu_i^{(j)}$ maximizes $H^+$ for $\mathcal M_j$ then $\mu_i^{(k)}$ does not minimize $H^+$ for $\mathcal M_k$. Then there is a hyperplane $H$ and a permutation $\pi$ of $[d+1]$ such that $\mu_{i}^{(\pi(i))}(H^{+})-\frac12\mu_i^{(\pi(i))}(\mathbb{R}^d) \ge \mu_j^{(\pi(i))}(H^+) - \frac12\mu_j^{(\pi(i))}(\mathbb{R}^d)$ for all $i,j \in [d+1]$.*

*Proof.* Let $u = (u_0,u_1, \dots, u_d) \in S^d$. If there is an $i\in[d]$ such that $u_i\neq 0$, i.e., if $u_0 \ne \pm 1$, assign to $u$ the halfspace $$H^+(u) = \{(x_1,\dots, x_d)\in \mathbb{R}^d \ | \ u_1x_1+\dots + u_dx_d\le u_0\}.$$ Notice that $$H^+(-u) = \{(x_1,\dots, x_d)\in \mathbb{R}^d \ | \ -u_1x_1 -\dots -u_dx_d\le -u_0\}$$ $$=\{(x_1,\dots, x_d)\in \mathbb{R}^d \ | \ u_1x_1+\dots +u_dx_d\ge u_0\}=H^-(u).$$ Define $f_{ji}(u)=\mu_i^{(j)}(H^+(u)) - \frac12\mu_i^{(j)}(\mathbb{R}^d)$. These maps are continuous and admit a continuous extension to the north and south poles. Thus the maps $f_{ji}$ define an odd and continuous map $F \colon S^d \to \mathbb{R}^{(d+1) \times (d+1)}$.
If $F(u)$ does not have rows in intersecting cube facets then (writing $H = H(u)$) there are $i,j,k \in [d+1]$ with $j \ne k$ such that $$\begin{aligned} &|\mu_i^{(j)}(H^+) - \tfrac12\mu_i^{(j)}(\mathbb{R}^d)| \ge |\mu_\alpha^{(j)}(H^+) - \tfrac12\mu_\alpha^{(j)}(\mathbb{R}^d)| \ \text{and} \\ &|\mu_i^{(k)}(H^+) - \tfrac12\mu_i^{(k)}(\mathbb{R}^d)| \ge |\mu_\alpha^{(k)}(H^+) - \tfrac12\mu_\alpha^{(k)}(\mathbb{R}^d)| \ \text{for all} \ \alpha \in [d+1],\end{aligned}$$ where $\mu_i^{(j)}(H^+) - \frac12\mu_i^{(j)}(\mathbb{R}^d) \ge 0$ and $\mu_i^{(k)}(H^+) - \frac12\mu_i^{(k)}(\mathbb{R}^d) \le 0$. This means that $\mu_i^{(j)}$ maximizes $H^+$ for $\mathcal M_j$ and $\mu_i^{(k)}$ minimizes $H^+$ for $\mathcal M_k$, in contradiction to our assumption. Thus $F(u)$ has rows in intersecting cube facets for every $u \in S^d$. Applying Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"} finishes the proof. ◻ The uncolored version of Theorem [Theorem 17](#colHS){reference-type="ref" reference="colHS"}, that is, when $\mu_i^{(1)} = \mu_i^{(2)} = \dots = \mu_i^{(d+1)}$, is the following strengthening of the Ham Sandwich theorem: **Theorem 18**. *Let $\mu_1, \mu_2,\dots \mu_{d+1}$ be finite Borel measures on $\mathbb{R}^d$ such that for every hyperplane $H$ we have that $\mu_i(H) = 0$ for all $i \in [d+1]$. Then there is a hyperplane $H$ such that $\mu_i(H^+)- \mu_i(H^-)=\mu_j(H^+) - \mu_j(H^-)$ for all $i,j\in[d+1]$.* *Proof.* Apply Theorem [Theorem 17](#colHS){reference-type="ref" reference="colHS"} in the case $\mathcal M_1 = \mathcal M_2 = \dots = \mathcal M_{d+1} = \{\mu_1, \dots, \mu_{d+1}\}$. First suppose that there exists some hyperplane $H$ such that the halfspace $H^+$ both maximizes and minimizes one of the $\mu_i$. This implies $\mu_i(H^+) -\frac12\mu_i(\mathbb{R}^d) = \mu_\alpha(H^+) - \frac12\mu_\alpha(\mathbb{R}^d)$ for all $\alpha \in [d+1]$. Since $\mu_i(H^+) + \mu_i(H^-) = \mu_i(\mathbb{R}^d)$, this implies $\mu_i(H^+)- \mu_i(H^-)=\mu_\alpha(H^+) - \mu_\alpha(H^-)$ for all $\alpha \in[d+1]$. If no $\mu_i$ simultaneously maximizes and minimizes some halfspace $H^+$, then by Theorem [Theorem 17](#colHS){reference-type="ref" reference="colHS"}, we again get that $\mu_i(H^+) -\frac12\mu_i(\mathbb{R}^d) = \mu_\alpha(H^+) - \frac12\mu_\alpha(\mathbb{R}^d)$ for all $\alpha \in [d+1]$. ◻ A family $\mathcal F$ of convex sets in $\mathbb{R}^d$ is *well-separated* if any collection $x_1, \dots, x_k$ of points from pairwise distinct $K_1, \dots, K_k \in \mathcal F$ is in general position, that is, for any $k \le d+1$, any pairwise distinct $K_1, \dots, K_k \in \mathcal F$ and any $x_1 \in K_1, \dots, x_k \in K_k$ the set $\{x_1, \dots, x_k\}$ is not contained in a common $(k-2)$-dimensional affine subspace. See Figure [\[fig:sepmass\]](#fig:sepmass){reference-type="ref" reference="fig:sepmass"} for an example. We call a family $\mathcal M$ of Borel measures on $\mathbb{R}^d$ *well-separated* if the family of convex hulls of supports $\mathcal F = \{\mathop{\mathrm{\mathrm{conv}}}(\mathrm{supp} \ \mu) \ : \ \mu \in \mathcal M\}$ is well-separated. The *support* of a Borel measure $\mu$ on $\mathbb{R}^d$ is $\mathrm{supp} \ \mu = \{x \in \mathbb{R}^d \ : \ \forall \varepsilon > 0 \ \mu(B_\varepsilon(x)) > 0\}$. In particular, if $\mathcal F$ is a well-separated family of $d+1$ sets in $\mathbb{R}^d$, then no hyperplane can intersect all sets in $\mathcal F$. 
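Before discussing the result of Bárány, Hubard, and Jerónimo, here is a quick numerical illustration of Theorem [Theorem 18](#kfhs){reference-type="ref" reference="kfhs"} in dimension $d=1$, where a hyperplane is a point $c$ with $H^+ = [c,\infty)$ and $H^- = (-\infty,c]$. The script below is only a sanity check on an example of our own choosing (the measures $2\cdot\mathrm{Unif}[0,1]$ and $\mathrm{Unif}[2,4]$); it finds a point $c$ at which the two signed halfspace masses agree by bisection.

```python
# A toy illustration (our own example) of the strengthened Ham Sandwich theorem
# in dimension d = 1.  We take mu_1 = 2 * Uniform[0,1] and mu_2 = Uniform[2,4]
# and search for c with mu_1(H^+) - mu_1(H^-) = mu_2(H^+) - mu_2(H^-).
def clamp(t):
    return max(0.0, min(1.0, t))

def F1(c):   # mu_1((-oo, c]) for mu_1 = 2 * Uniform[0, 1]
    return 2.0 * clamp(c)

def F2(c):   # mu_2((-oo, c]) for mu_2 = Uniform[2, 4]
    return clamp((c - 2.0) / 2.0)

def g(c):    # difference of the two signed halfspace masses
    return (2.0 - 2.0 * F1(c)) - (1.0 - 2.0 * F2(c))

lo, hi = -5.0, 5.0                      # g is continuous with g(lo) > 0 > g(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print(round((lo + hi) / 2, 6))          # expected: 0.25
```

At the reported point $c = 0.25$ both signed masses equal $1$, so this $c$ plays the role of the hyperplane $H$ in the theorem.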
Bárány, Hubard, and Jerónimo [@BaranyHubardJeronimo2008] show that if $\mu_1, \dots, \mu_d$ are finite Borel measures on $\mathbb{R}^d$ with bounded supports that are well-separated and such that $\mu_i(H) = 0$ for every hyperplane $H$ in $\mathbb{R}^d$, then there is a hyperplane that cuts off a specified fraction from each measure $\mu_i$, that is, for $\alpha_1,\dots,\alpha_d\in (0,1)$ there is a hyperplane $H$ such that $\mu_i(H^+)=\alpha_i\mu_i(\mathbb{R}^d)$ for all $i\in[d]$. We use Theorem [Theorem 18](#kfhs){reference-type="ref" reference="kfhs"} to show the following variant: **Corollary 19** (Variant of a result of Bárány, Hubard, Jerónimo [@BaranyHubardJeronimo2008]). *Let $\mu_1, \mu_2,\dots \mu_d$ be finite Borel measures on $\mathbb{R}^d$ with bounded supports and $\mu_i(H) = 0$ for all hyperplanes $H$ in $\mathbb{R}^d$ and for all $i \in [d]$. Suppose there is an $x \in \mathbb{R}^d$ such that no hyperplane through $x$ may intersect the supports of all $\mu_i$. Then for all $\alpha_1,\dots,\alpha_d\in (0,1)$ there is a hyperplane $H$ such that $\mu_i(H^+)=\alpha_i\mu_i(\mathbb{R}^d)$ for all $i\in[d]$.* *Proof.* Normalize the measures $\mu_i$ such that $\alpha_i\mu_i(\mathbb{R}^d) = 1$ for all $i$ by dividing each $\mu_i$ by $\alpha_i\mu_i(\mathbb{R}^d)$. In particular, after this normalization $\mu_i(\mathbb{R}^d) = \frac{1}{\alpha_i} > 1$ for all $i$. By a standard compactness argument there is an $\varepsilon > 0$ such that any hyperplane that intersects $B_\varepsilon(x)$ does not intersect the supports of all $\mu_i$. Thus we may construct a Borel measure $\mu_{d+1}$ on $\mathbb{R}^d$ supported in $B_\varepsilon(x)$ with continuous density and with $\mu_{d+1}(\mathbb{R}^d) = 1$. Now apply Theorem [Theorem 18](#kfhs){reference-type="ref" reference="kfhs"} to this collection, which yields a hyperplane $H$ with $\mu_i(H^+)- \mu_i(H^-)=\mu_j(H^+) - \mu_j(H^-)$ for all $i,j\in[d+1]$. The hyperplane $H$ cannot intersect the supports of all measures $\mu_1, \dots, \mu_{d+1}$. Let $i\in [d+1]$ such that $H$ is disjoint from the support of $\mu_i$. If $i \ne d+1$ then $\mu_i(H^+)-\mu_i(H^-) = \mu_i(\mathbb{R}^d) > 1$, but $|\mu_{d+1}(H^+)-\mu_{d+1}(H^-)| \le 1$, so $\mu_i(H^+)- \mu_i(H^-)\ne \mu_{d+1}(H^+) - \mu_{d+1}(H^-)$. Thus $i = d+1$. This implies $\mu_j(H^+) - \mu_j(H^-) = \mu_{d+1}(H^+) - \mu_{d+1}(H^-) =1$, which finishes the proof. ◻ Notice that Corollary [Corollary 19](#cor:bhj){reference-type="ref" reference="cor:bhj"} contains the result of Bárány, Hubard, Jerónimo as a special case, provided that for any family $\{K_1, \dots, K_d\}$ of compact, convex sets in $\mathbb{R}^d$ that are well-separated, there is an $x\in \mathbb{R}^d$ such that $\{K_1,\dots, K_d, \{x\}\}$ is well-separated. We have been unable to show this. # Colorful Borsuk--Ulam theorems for higher symmetry The methods we have used here to prove colorful results for symmetric set coverings of spheres generalize easily to other settings. Here we prove generalizations for free $\mathbb{Z}/p$-actions on spheres, $p$ a prime. Below $s\in\mathbb{Z}/p$ acts on $(j,t) \in [d] \times \mathbb{Z}/p$ by $s\cdot (j,t) = (j, t+s)$. **Theorem 20**. *Let $p$ be a prime. Let $d\ge1$ and $n = (p-1)d-1$ be integers. Fix some free $\mathbb{Z}/p$-action on $S^n$. Let $\Sigma$ be a $\mathbb{Z}/p$-equivariant triangulation of $S^n$ with vertex set $V$. Let $\ell\colon V \to [d] \times \mathbb{Z}/p$ be $\mathbb{Z}/p$-equivariant. Fix $s_1, \dots, s_d \in \mathbb{Z}/p$. 
Then either there is a $(p-1)$-face $\sigma$ of $\Sigma$ with $\ell(\sigma) = \{j\} \times \mathbb{Z}/p$ for some $j \in [d]$ or there is a facet $\sigma$ of $\Sigma$ with $\ell(\sigma) = \{(j, s) \ : \ j \in [d], \ s \in \mathbb{Z}/p \setminus \{s_j\}\}$.*

*Proof.* If no $(p-1)$-face is labelled with all elements in $\{j\}\times\mathbb{Z}/p$ then $\ell$ induces an equivariant map $\Sigma \to (\partial\Delta_{p-1})^{\ast d}$ by identifying each label $(j,s)$ with the vertex $s$ in the $j$th copy of $\partial \Delta_{p-1}$. The $d$-fold join $(\partial\Delta_{p-1})^{\ast d}$ is a sphere of dimension $n$. By Lemma [Lemma 4](#lem:dold){reference-type="ref" reference="lem:dold"} such an equivariant map will have non-zero degree and thus be surjective. In particular, some face $\sigma$ of $\Sigma$ maps to the facet $\{(j, s) \ : \ j \in [d], \ s \in \mathbb{Z}/p \setminus \{s_j\}\}$ of $(\partial\Delta_{p-1})^{\ast d}$. ◻

**Theorem 21**. *Let $p$ be a prime. Let $d\ge1$ and $n = (p-1)d-1$ be integers. Let $A_i \subset S^n$ be closed sets for $i \in [d]$ such that $S^n = \bigcup_i (A_i \cup s\cdot A_i \cup \dots \cup s^{p-1}A_i)$. Suppose that $\bigcap_k s^k\cdot A_i = \varnothing$ for every $i \in [d]$. Then for all $s_1, \dots, s_d \in \mathbb{Z}/p$ we have that $\bigcap_i \bigcap_{s \ne s_i} s\cdot A_i \ne \varnothing$.*

*Proof.* Assume that $\bigcap_k s^k\cdot A_i = \varnothing$ for all $i$. Let $T_\varepsilon$ be a $\mathbb{Z}/p$-symmetric triangulation of $S^n$ such that each facet has diameter less than $\varepsilon$, where $\varepsilon >0$ is chosen such that any set of diameter less than $\varepsilon$ intersects at most $p-1$ of the sets $A_i, s\cdot A_i, \dots, s^{p-1}A_i$. This can be achieved by taking repeated barycentric subdivisions of a given $\mathbb{Z}/p$-symmetric triangulation. Let $\ell\colon V(T_{\varepsilon})\to [d] \times \mathbb{Z}/p$ be a labelling of the vertices of $T_\varepsilon$ such that $\ell(v) = (i,g)$ only if $v \in g\cdot A_i$. We may assume that $\ell$ is $\mathbb{Z}/p$-equivariant. By our choice of $\varepsilon$, there is no face $\sigma$ with $\ell(\sigma) = \{j\} \times \mathbb{Z}/p$. By Theorem [Theorem 20](#thm:p-fold-kyfan){reference-type="ref" reference="thm:p-fold-kyfan"} there is a facet labelled precisely by the set $\{(i, s) \ : \ i \in [d], \ s \in \mathbb{Z}/p \setminus \{s_i\}\}$. Let $x_\varepsilon$ be the barycenter of some such facet. As $\varepsilon$ approaches zero, by compactness of $S^n$, the $x_\varepsilon$ have an accumulation point $x$. Since the $A_i$ are closed, we have that $x \in \bigcap_i \bigcap_{s\ne s_i} s\cdot A_i$. ◻

**Remark 22**. The proof of Theorem [Theorem 20](#thm:p-fold-kyfan){reference-type="ref" reference="thm:p-fold-kyfan"} shows that if in Theorem [Theorem 20](#thm:p-fold-kyfan){reference-type="ref" reference="thm:p-fold-kyfan"} we increase $n$ by one, that is, $n = (p-1)d$, then the first alternative will always occur: There is a face $\sigma$ of $\Sigma$ labelled with an entire $\mathbb{Z}/p$-orbit, that is, $\ell(\sigma) = \{j\} \times \mathbb{Z}/p$ for some $j \in [d]$. Similarly, for Theorem [Theorem 21](#thm:p-cover){reference-type="ref" reference="thm:p-cover"}, if $n = (p-1)d$ then it is impossible that $\bigcap_k s^k\cdot A_i = \varnothing$ for every $i \in [d]$.

**Corollary 23**. *Let $p$ be a prime. Let $d \ge 1$ and $n = (p-1)d-1$ be integers. Let $f \colon S^n \to \mathbb{R}^d$ be continuous.
*Then there is a $\mathbb{Z}/p$-orbit $x, s\cdot x, \dots, s^{p-1}\cdot x$ such that $f$ maps $p-1$ points in this orbit to the same point $y$ in $\mathbb{R}^d$ and the remaining point to $y-(\alpha, \dots, \alpha)$ for some $\alpha \in \mathbb{R}$.*

*Proof.* For $x \in S^n$ denote its $\mathbb{Z}/p$-orbit $\{x, s\cdot x, \dots, s^{p-1}\cdot x\}$ by $G\cdot x$. Denote the $i$th coordinate function of $f\colon S^n \to \mathbb{R}^d$ by $f_i \colon S^n \to \mathbb{R}$. Let $x \in S^n$. We place $x \in A_i$ if $\mathop{\mathrm{\mathrm{diam}}}(f_i(G\cdot x)) \ge \mathop{\mathrm{\mathrm{diam}}}(f_j(G\cdot x))$ for all $j \in [d]$ and $f_i(x) \ge f_i(s^k \cdot x)$ for all $k \in [p]$. Thus $x \in A_i$ if $f_i$ fluctuates at least as much on the orbit of $x$ as any other coordinate function $f_j$, and additionally $f_i(x)$ is the largest value in the orbit of $x$. As both of these quantities have to be maximized somewhere, $S^n = \bigcup_i (A_i \cup s\cdot A_i \cup \dots \cup s^{p-1}A_i)$. Suppose there is some point $x$ in $\bigcap_k s^k\cdot A_i$. Then $f_i(x) = f_i(s \cdot x) = \dots = f_i(s^{p-1}\cdot x)$ and $0 = \mathop{\mathrm{\mathrm{diam}}}(f_i(G\cdot x)) \ge \mathop{\mathrm{\mathrm{diam}}}(f_j(G\cdot x))$ for all $j \in [d]$. This implies $f(x) = f(s\cdot x) = \dots = f(s^{p-1}x)$. Otherwise by Theorem [Theorem 21](#thm:p-cover){reference-type="ref" reference="thm:p-cover"} there is some $x$ in $\bigcap_i \bigcap_{k \ne p-1} s^k\cdot A_i$. Then $f(x) = f(s\cdot x) = \dots = f(s^{p-2} \cdot x)$ and $\mathop{\mathrm{\mathrm{diam}}}(f_i(G\cdot x)) = \mathop{\mathrm{\mathrm{diam}}}(f_j(G\cdot x))$ for any $i,j \in [d]$. Since the first $p-1$ points in $G\cdot x$ are mapped to the same point, we have that $f_i(s^{p-1}\cdot x) = f_i(x) - \mathop{\mathrm{\mathrm{diam}}}(f_i(G\cdot x))$ for all $i \in [d]$. ◻

**Remark 24**. For $p=2$ and $f\colon S^{d-1} \to \mathbb{R}^d$ an odd map, Corollary [Corollary 23](#cor:kyfan-dold){reference-type="ref" reference="cor:kyfan-dold"} asserts that $f$ maps a pair of antipodal points to $f(x) = y$ and $f(-x) = y - (\alpha, \dots, \alpha)$. Since $f$ is odd, $f(-x) = -y$ and thus $(\alpha, \dots, \alpha) = 2y$, that is, the corollary asserts that $f$ maps a pair of antipodal points to the $1$-dimensional diagonal in $\mathbb{R}^d$. Following Remark [Remark 22](#rem:n++){reference-type="ref" reference="rem:n++"} the proof of Corollary [Corollary 23](#cor:kyfan-dold){reference-type="ref" reference="cor:kyfan-dold"} shows that for $n \ge (p-1)d$, we get that any continuous $f\colon S^n \to \mathbb{R}^d$ maps an entire $\mathbb{Z}/p$-orbit to the same point. This is a classical result of Bourgin--Yang and others [@Dold1983; @yang1954; @yang1955; @Bourgin1955]. Corollary [Corollary 23](#cor:kyfan-dold){reference-type="ref" reference="cor:kyfan-dold"} extends this orbit collapsing result in the same fashion that Ky Fan's theorem extends the Borsuk--Ulam theorem. We can now derive a colorful generalization of Theorem [Theorem 21](#thm:p-cover){reference-type="ref" reference="thm:p-cover"} in the same way that we showed the colorful generalization (Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"}) of Fan's theorem.

**Theorem 25**. *Let $p$ be a prime. Let $d\ge1$ and $n = (p-1)d-1$ be integers. Let $A_i^{(j)} \subset S^n$ be closed sets with $i \in [d]$ and $j \in [n+1]$ such that $S^n = \bigcup_i (A_i^{(j)}\cup s\cdot A_i^{(j)} \cup \dots \cup s^{p-1}\cdot A_i^{(j)})$ for every $j \in [n+1]$.
Suppose that $\bigcap_k s^k\cdot A_i^{(j_k)} = \varnothing$ for every $i \in [d]$ and for pairwise distinct $j_1, \dots, j_p \in [n+1]$. Then for all $s_1, \dots, s_d \in \mathbb{Z}/p$ we have that $\bigcap_i \bigcap_{s \ne s_i} s\cdot A_i^{(\pi(i, s))} \ne \varnothing$ for some bijection $\pi\colon \{(i,s) \in [d] \times \mathbb{Z}/p \ : \ s \ne s_i\} \to [n+1]$.* *Proof.* Let $T_\varepsilon$ be a triangulation, where every face has diameter at most $\varepsilon > 0$. Here $\varepsilon$ is chosen sufficiently small so that no face intersects $s\cdot A_i^{(j_1)}, s^2\cdot A_i^{(j_2)}, \dots, s^p\cdot A_i^{(j_p)}$ for pairwise distinct $j_1, \dots, j_p \in [n+1]$ and any $i$. Let $T'_\varepsilon$ denote the barycentric subdivision of $T_\varepsilon$. Let $\ell\colon V(T'_{\varepsilon})\to [d] \times \mathbb{Z}/p$ be a labelling of the vertices of $T'_\varepsilon$ such that $\ell(v) = (i,g)$ only if $v \in g\cdot A_i^{(k)}$ and $v$ subdivides a $(k-1)$-dimensional face of $T_\varepsilon$. We may assume that $\ell$ is $\mathbb{Z}/p$-equivariant. By our choice of $\varepsilon$, there is no face $\sigma$ with $\ell(\sigma) = \{j\} \times \mathbb{Z}/p$. By Theorem [Theorem 20](#thm:p-fold-kyfan){reference-type="ref" reference="thm:p-fold-kyfan"} there is a facet labelled precisely by the set $\{(i, s) \ : \ i \in [d], \ s \in \mathbb{Z}/p \setminus \{s_i\}\}$. Let $x_\varepsilon$ be the barycenter of some such facet. As $\varepsilon$ approaches zero, by compactness of $S^n$, the $x_\varepsilon$ have an accumulation point $x$. For every $\varepsilon > 0$ we can find a bijection $\pi_\varepsilon\colon \{(i,s) \in [d] \times \mathbb{Z}/p \ : \ s \ne s_i\} \to [n+1]$ such that $x_\varepsilon$ is at distance less than $\varepsilon$ from the sets $g\cdot A_i^{(\pi(i, g))}$ with $i \in [d]$ and $g \ne g_i$. Since there are finitely many bijection $\pi\colon \{(i,s) \in [d] \times \mathbb{Z}/p \ : \ s \ne s_i\} \to [n+1]$, as $\varepsilon \to 0$ one such bijection will be realized infinitely many times. Call this bijection $\pi$. Since the $A_i^{(j)}$ are closed, we have that $x\in \bigcap_i \bigcap_{s \ne s_i} s\cdot A_i^{(\pi(i, s))}$. ◻ **Remark 26**. Theorem [Theorem 25](#thm:col-p-kyfan){reference-type="ref" reference="thm:col-p-kyfan"} has the colorful Borsuk--Ulam theorem (Theorem [Theorem 1](#thm:colorfulBU){reference-type="ref" reference="thm:colorfulBU"}) as a corollary: The case $p=2$ specializes to Theorem [Theorem 7](#colkf){reference-type="ref" reference="colkf"}. 10 Ron Aharoni and Eli Berger, *Rainbow matchings in $r$-partite $r$-graphs*, Electronic J. Combin. **16**(1) (2009), R119. Maria Axenovich and Dmitri Fon-Der-Flaass, *On rainbow arithmetic progressions*, Electronic J. Combin. **11** (2004), no. 1, R1. Ervin G. Bajmóczy and Imre Bárány, *On a common generalization of Borsuk's and Radon's theorem*, Acta Math. Acad. Sci. Hungar. **34** (1979), 347--350. Imre Bárány, *A generalization of Carathéodory's theorem*, Discrete Math. **40** (1981), 141--152. Imre Bárány, Zoltán Füredi, and László Lovász, *On the number of halving planes*, Combinatorica **10** (1990), no. 2, 175--183. Imre Bárány, Alfredo Hubard, and Jesús Jerónimo, *Slicing convex sets and measures by a hyperplane*, Discrete Comput. Geom. (2008), 67--75. William A. Beyer and Andrew Zardecki, *The early history of the ham sandwich theorem*, Amer. Math. Monthly **111** (2004), no. 1, 58--61. Anders Björner, *Topological methods*, Handbook of combinatorics, vol. 2, Amsterdam, 1995, pp. 1819--1872. Pavle V. M. 
Blagojević, Benjamin Matschke, and Günter M. Ziegler, *Optimal bounds for the colored Tverberg problem*, J. Europ. Math. Soc. **17** (2015), no. 4, 739--754. Pavle V. M. Blagojević and Günter M. Ziegler, *Beyond the Borsuk--Ulam theorem: the topological Tverberg story*, A Journey Through Discrete Mathematics: A Tribute to Jiřı́ Matoušek (2017), 273--341. Karol Borsuk, *Drei Sätze über die $n$-dimensionale euklidische Sphäre*, Fund. Math. **20** (1933), 177--190. David G. Bourgin, *On some separation and mapping theorems*, Comment. Math. Helv. **29** (1955), 199--214. Constantin Carathéodory, *Über den Variabilitätsbereich der Fourier'schen Konstanten von positiven harmonischen Funktionen*, Rend. Circ. Mat. Palermo **32** (1911), 193--217. Jesús De Loera, Xavier Goaoc, Frédéric Meunier, and Nabil Mustafa, *The discrete yet ubiquitous theorems of Carathéodory, Helly, Sperner, Tucker, and Tverberg*, Bull. Amer. Math. Soc. **56** (2019), 415--511. Mark De Longueville, *A course in topological combinatorics*, Springer Science & Business Media, 2013. Albrecht Dold, *Simple proofs of some Borsuk-Ulam results*, Contemp. Math. **19** (1983), 65--69. Ky Fan, *A generalization of Tucker's combinatorial lemma with topological applications*, Ann. Math. **56** (1952), no. 3, 431--437. Florian Frick, Kelsey Houston-Edwards, and Frédéric Meunier, *Achieving rental harmony with a secretive roommate*, Amer. Math. Monthly **126** (2017), 18--32. Florian Frick and Shira Zerbib, *Colorful coverings of polytopes and piercing numbers of colorful $d$-intervals*, Combinatorica **39** (2019), 627--637. Shinya Fujita, Colton Magnant, and Kenta Ozeki, *Rainbow generalizations of Ramsey theory: a survey*, Graphs Combin. **26** (2010), 1--30. David Gale, *Equilibrium in a discrete exchange economy with money*, Internat. J. Game Theory **13** (1984), no. 1, 61--64. Veselin Jungic, Jacob Licht, Mohammad Mahdian, Jaroslav Nesetril, and Rados Radoicic, *Rainbow arithmetic progressions and anti-Ramsey results*, Combin. Probab. Comput. **12** (2003), no. 5-6, 599--620. Gil Kalai and Roy Meshulam, *A topological colorful Helly theorem*, Adv. Math. **191** (2005), no. 2, 305--311. Bronislaw Knaster, Kazimierz Kuratowski, and Stefan Mazurkiewicz, *Equilibrium in a discrete exchange economy with money*, Fund. Math. **14** (1929), no. 1, 132--137. Dimitry Kozlov, *Combinatorial algebraic topology*, vol. 21, Springer Science & Business Media, 2008. Alexander M. Kushkuley and Zalman I. Balanov, *Geometric methods in degree theory for equivariant maps*, Lecture Notes in Mathematics, Springer, 2006. Jiří Matoušek, *Using the Borsuk--Ulam theorem*, Springer--Verlag Berlin Heidelberg, 2003. Frédéric Meunier and Francis Edward Su, *Multilabeled versions of Sperner's and Fan's lemmas and applications*, SIAM J. Appl. Algebra Geom. **3** (2019), 391--411. Oleg R. Musin, *Extensions of Sperner and Tucker's lemma for manifolds*, J. Comb. Theory, Ser. A **132** (2012), 172--187. John Nash, *Non-cooperative games*, Ann. Math. **54** (1951), no. 2, 286--295. Mau-Hsiang Shih and Shyh-Nan Lee, *Combinatorial formulae for multiple set-valued labellings*, Math. Ann. **296** (1993), no. 1, 35--61. Arthur H. Stone and John W. Tukey, *Generalized "sandwich" theorems*, Duke Math. J. **9** (1942), no. 2, 356--359. Francis Edward Su, *Rental harmony: Sperner's lemma in fair division*, Amer. Math. Monthly **106** (1999), no. 10, 930--942. Rade Živaljević, *Oriented matroids and Ky Fan's theorem*, Combinatorica **30** (2010), 471--484. 
Chung-Tao Yang, *On theorems of Borsuk--Ulam, Kakutani-Yamabe-Yujobô and Dyson, I*, Ann. Math. **60** (1954), no. 2, 262--282. Chung-Tao Yang, *On theorems of Borsuk--Ulam, Kakutani-Yamabe-Yujobô and Dyson, II*, Ann. Math. **62** (1955), no. 2, 271--283. Rade Živaljević and Siniša Vrećica, *The colored Tverberg's problem and complexes of injective functions*, J. Combin. Theory, Ser. A **61** (1992), no. 2, 309--318. [^1]: FF and ZW were supported by NSF CAREER Grant DMS 2042428.
{ "id": "2309.14539", "title": "Colorful Borsuk--Ulam theorems and applications", "authors": "Florian Frick and Zoe Wellner", "categories": "math.CO math.MG", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
abstract: |
  In this paper we investigate Ramanujan hypergraphs by using coverings. Let $\bar{H}$ be a $2$-fold covering (or $2$-lift) of a hypergraph $H$. We first show that the spectrum of $\bar{H}$ is a multiset union of the spectrum of $H$ and the spectrum of a quasi-signed hypergraph associated with $H$ and the covering projection. By using interlacing families, we prove that every $d$-regular $r$-uniform (simply called $(d,r)$-regular) hypergraph has a right-sided Ramanujan $2$-covering, and there exists an infinite family of $(d,d)$-regular Ramanujan hypergraphs for every degree greater than $2$. We also prove that there exist infinitely many $(d+1,d)$-regular left-sided (or right-sided) Ramanujan hypergraphs for any prime power $d$ greater than $4$. In addition, we give a lower bound for the second largest eigenvalue of a $d$-regular hypergraph by its universal cover.
address:
- School of Mathematical Sciences, Anhui University, Hefei 230601, P. R. China
- Center for Pure Mathematics, School of Mathematical Sciences, Anhui University, Hefei 230601, P. R. China
author:
- Yi-Min Song
- Yi-Zheng Fan\*
title: Hypergraph coverings and Ramanujan Hypergraphs
---

[^1]

# introduction

A *hypergraph* $H=(V,E)$ consists of a vertex set $V=\{v_1,v_2,\cdots,v_n\}$ denoted by $V(H)$ and an edge set $E=\{e_1,e_2,\cdots,e_m\}$ denoted by $E(H)$, where $e_i\subseteq V$ for $i\in[m]:=\{1,2,\cdots,m\}$. If $|e_i|=r$ for each $i\in[m]$ and $r\ge2$, then $H$ is called an $r$-*uniform* hypergraph. For a vertex $v\in V(H)$, denote by $N_H(v)$ or simply $N(v)$ the neighborhood of $v$, i.e., the set of vertices of $H$ adjacent to $v$; and denote by $E_H(v)$ or $E(v)$ the set of edges containing $v$. The *degree* of $v$, denoted by $d_v$, is the cardinality of the set $E(v)$. The hypergraph $H$ is called *$d$-regular* if each vertex has degree $d$, and is called *$(d,r)$-regular* if it is $d$-regular and $r$-uniform. A simple graph is a $2$-uniform hypergraph without multiple edges. A *homomorphism* from a hypergraph $\bar{H}$ to $H$ is a map $\varpi: V(\bar{H})\to V(H)$ such that $\varpi(e) \in E(H)$ for each $e \in E(\bar{H})$; namely, $\varpi$ maps edges to edges. So $\varpi$ induces a map denoted by $\tilde{\varpi}$ from $E(\bar{H})$ to $E(H)$, and particularly $\tilde{\varpi}$ maps $E_{\bar{H}}(\bar{v})$ to $E_H(\varpi(\bar{v}))$ for each vertex $\bar{v} \in V(\bar{H})$.

**Definition 1**. A homomorphism $\varpi$ from $\bar{H}$ to $H$ is called a *covering projection* if $\varpi$ is a surjection, and the induced map $\tilde{\varpi}|_{E_{\bar{H}}(\bar{v})}: E_{\bar{H}}(\bar{v}) \to E_H(v)$ is a bijection for each vertex $v \in V(H)$ and each $\bar{v} \in \varpi^{-1}(v)$.

Throughout this paper, *we always assume that the covering projection $\varpi$ in Definition [Definition 1](#cov-def){reference-type="ref" reference="cov-def"} satisfies the following condition: for any edge $e \in E(\bar{H})$, $\varpi|_e: e \to \varpi(e)$ is a bijection so that $e$ and $\varpi(e)$ have the same size*. Under this assumption, if $\bar{H}$ is $r$-uniform, so is $H$. If $H$ is connected, then there exists a positive integer $k$ such that each vertex $v$ of $H$ has $k$ vertices in its preimage $\varpi^{-1}(v)$, and each edge $e$ of $H$ has $k$ edges in $\varpi^{-1}(e)$. In this case, $\varpi$ is called a *$k$-fold covering projection* (or $k$-sheeted covering projection) from $\bar{H}$ to $H$, and $\bar{H}$ is called a *$k$-cover* (or *$k$-lift*) of $H$.
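To make Definition 1 concrete computationally, the following short sketch (our own helper; the function name `is_covering_projection` and the toy example are not from the paper) checks the covering conditions for finite hypergraphs given by their edge lists, taking $V(H)$ to be the union of the edges of $H$.

```python
# A self-contained sketch (helper names are ours, not from the paper) checking
# Definition 1 for finite hypergraphs given as lists of edges (sets of vertices),
# with the vertex map pi given as a dictionary.
from collections import Counter

def is_covering_projection(Hbar_edges, H_edges, pi):
    H_edges = [frozenset(e) for e in H_edges]
    Hbar_edges = [frozenset(e) for e in Hbar_edges]
    V_H = set().union(*H_edges)
    # pi must map edges to edges, bijectively on each edge (the standing assumption)
    for ebar in Hbar_edges:
        image = frozenset(pi[v] for v in ebar)
        if image not in H_edges or len(image) != len(ebar):
            return False
    if set(pi.values()) != V_H:          # pi must be onto V(H)
        return False
    # the induced map E_Hbar(vbar) -> E_H(pi(vbar)) must be a bijection
    for vbar in pi:
        incident_images = Counter(frozenset(pi[u] for u in ebar)
                                  for ebar in Hbar_edges if vbar in ebar)
        incident_target = Counter(e for e in H_edges if pi[vbar] in e)
        if incident_images != incident_target:
            return False
    return True

# the trivial 2-cover of a single 3-edge by two disjoint copies
H = [{"a", "b", "c"}]
Hbar = [{("a", 0), ("b", 0), ("c", 0)}, {("a", 1), ("b", 1), ("c", 1)}]
print(is_covering_projection(Hbar, H, {(v, i): v for v in "abc" for i in (0, 1)}))  # True
```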
In Definition [Definition 1](#cov-def){reference-type="ref" reference="cov-def"}, if both $\bar{H}$ and $H$ are simple graphs, then $\tilde{\varpi}|_{E_{\bar{H}}(\bar{v})}: E_{\bar{H}}(\bar{v}) \to E_H(v)$ can be replaced by $\varpi|_{N_{\bar{H}}(\bar{v})}: N_{\bar{H}}(\bar{v}) \to N_H(v)$ as each edge of $E_{\bar{H}}(\bar{v})$ (respectively, $E_H(v)$) contains exactly two vertices: $\bar{v}$ and one neighbor of $\bar{v}$ (respectively, $v$ and one neighbor of $v$). Gross and Tucker [@GT] showed that all coverings of simple graphs can be characterized by the derived graphs of permutation voltage graphs. Stark and Terras [@ST] showed the (Ihara) zeta function of a finite graph divides the zeta function of any covering over the graph. Li and Hou [@LH] applied Gross and Tucker's method to generate all hypergraph coverings, and proved that the zeta function of a finite hypergraph divides the zeta function of any covering over the hypergraph. Recently, Song and Fan et al. [@Song] investigated the relationship between the spectrum of a uniform hypergraph and that of its cover in the setting of adjacency tensor eigenvalues. The *adjacency matrix* of a hypergraph $H$ ([@FL]) is defined to be the matrix $A(H)=(a_{uv})$ with entries $$a_{uv}=\left\{ \begin{array}{l} | \{e \in E(H): \{u,v\} \subseteq e\} | \mbox{~if~} u \ne v,\\ 0 \mbox{~if~} u = v. \end{array}\right.$$ Note that if $H$ is a simple graph, then $A(H)$ is the usual adjacency matrix of $H$. We arrange the eigenvalues of $A(H)$ as $\lambda_1(H) \ge \lambda_2(H) \ge \cdots \ge \lambda_n(H),$ where $n$ is the number of vertices of $H$. By Perron-Frobenius theorem, the largest eigenvalue $\lambda_1(H)$ is the spectral radius of $A(H)$, denoted by $\rho(H)$. We note there are other ways to define the eigenvalues for hypergraphs, for example, as the eigenvalues for the adjacency tensor (hypermatrix) [@CD; @FW; @LM3; @Lim; @Qi] or the Laplacian operators of the simplicial complex consisting of all subsets of edges of the hypergraphs [@BGP; @DR; @HJ; @PR]. In this paper the spectrum, eigenvalues, and eigenvectors of a hypergraph always refer to the adjacency matrix of the hypergraph unless specified somewhere. The aim of this paper is to investigate the Ramanujan hypergraphs by using coverings. We start the discussion from a simple graph $G$. Let $$\lambda(G):=\max\{|\lambda_i(G)|: \lambda_i(G) \ne \pm \lambda_1(G)\},$$ which is called the *second eigenvalue* of $G$. It is known that $\lambda(G)$ is closely related to the expansion property of $G$: the smaller $\lambda(G)$ is, the better expanding $G$ is; see [@Alon; @HLW]. However, $\lambda(G)$ cannot be arbitrarily small. In fact, $\lambda(G) \ge \rho(\mathbb{T}_G)-o_n(1)$ ([@Green; @Cio]), where $\mathbb{T}_G$ is the universal cover of $G$. Following [@HLW; @LPS], a graph $G$ is called *Ramanujan* if $\lambda(G) \le \rho(\mathbb{T}_G)$. For example, if $G$ is $d$-regular, then $\mathbb{T}_G$ is an infinite $d$-ary tree denoted by $\mathbb{T}_d$ and $\rho(\mathbb{T}_d)=2 \sqrt{d-1}$ ([@GM]), which yields the Alon-Boppana bound: $\lambda(G) \ge 2 \sqrt{d-1}-o_n(1)$ ([@Alon; @Nil]); if $G$ is $(d,r)$-biregular, then $\mathbb{T}_G$ is an infinite $(d,r)$-biregular tree $\mathbb{T}_{d,r}$ and $\rho(\mathbb{T}_d)=\sqrt{d-1}+\sqrt{r-1}$ ([@LS]), which yields the Feng-Li bound: $\lambda(G) \ge \sqrt{d-1}+\sqrt{r-1}-o_n(1)$ ([@FL; @LS]). Let $\varpi: \bar{G} \to G$ be a $k$-fold covering projection. 
If $f: V(G) \to \mathbb{R}$ is an eigenvector of $G$ defined on the vertices of $G$, then $f \circ \varpi$ is also an eigenvector of $\bar{G}$. So, out of $kn$ eigenvalues of $A(\bar{G})$ (considered as a multiset), $n$ eigenvalues are induced from $G$ and are referred to as *old eigenvalues*, and the other $(k-1)n$ eigenvalues are called the *new eigenvalues*. Hall et al. [@HPS] call $\bar{G}$ a *Ramanujan covering* of $G$ if all the new eigenvalues lie in $[-\rho(\mathbb{T}_G), \rho(\mathbb{T}_G)]$, and call $\bar{G}$ a *one-sided Ramanujan covering* of $G$ if all the new eigenvalues are bounded from above by $\rho(\mathbb{T}_G)$[^2]. Bilu and Linial [@BL] suggested constructing Ramanujan graphs by Ramanujan $2$-coverings: start with your favorite $d$-regular Ramanujan graph and construct an infinite tower of Ramanujan $2$-coverings. They conjectured that every (regular) graph has a Ramanujan $2$-covering. This approach turned out to be very useful in the groundbreaking result of Marcus, Spielman and Srivastava [@MSS], who proved that every graph has a one-sided Ramanujan 2-covering. Hall et al. [@HPS] generalized the result of [@MSS] to coverings of every degree. This translates into the existence of infinitely many $d$-regular bipartite Ramanujan graphs of every degree $d$. The question remains open with respect to fully (i.e. non-bipartite) Ramanujan graphs. Suppose that $H$ is a connected $(d,r)$-regular hypergraph. Then $\lambda_1(H)=d(r-1)$, called the *trivial eigenvalue* of $H$. Feng and Li [@FL] proved that $$\lambda_2(H) \ge r-2+ 2\sqrt{(d-1)(r-1)}-o_n(1).$$ Cioabǎ et al. [@CKMNO] improved the bound for $\lambda_2(H)$ given in [@FL], and determined the largest order of a $(d,r)$-regular hypergraph with bounded second largest eigenvalue. The universal cover of $H$ is one of the bipartite halves of the infinite $(d,r)$-biregular tree $\mathbb{T}_{d,r}$, denoted by $\mathbb{D}_{d,r}$, which has the spectral radius $$\rho(\mathbb{D}_{d,r})=r-2+ 2\sqrt{(d-1)(r-1)}.$$ In addition, if $d<r$ (or $\nu(H) > e(H)$), then $H$ will have the (least) *obvious eigenvalue* $-d$ with multiplicity $\nu(H)-e(H)$.

**Definition 2**. [@LS][\[RamaGraph\]]{#RamaGraph label="RamaGraph"}[^3] Let $H$ be a connected $(d,r)$-regular hypergraph. We call $H$ a *Ramanujan hypergraph* if any non-trivial non-obvious eigenvalue $\lambda$ of $H$ satisfies $$\label{RamaH} | \lambda-(r-2)| \le 2\sqrt{(d-1)(r-1)},$$ $H$ a *right-sided Ramanujan hypergraph* if any non-trivial eigenvalue $\lambda$ of $H$ satisfies $$\label{RamaHL} \lambda\le r-2+ 2\sqrt{(d-1)(r-1)},$$ and $H$ a *left-sided Ramanujan hypergraph* if any non-obvious eigenvalue $\lambda$ of $H$ satisfies $$\label{RamaHR} \lambda\ge r-2- 2\sqrt{(d-1)(r-1)}.$$ When $r=2$ (i.e. $H$ is a $d$-regular graph), the condition ([\[RamaH\]](#RamaH){reference-type="ref" reference="RamaH"}) is reduced to $|\lambda| \le 2 \sqrt{d-1}$, and $H$ has the obvious eigenvalue $-d$ if and only if $H$ is bipartite. So the definition of Ramanujan hypergraph covers the graph case. In this paper, we will follow Bilu and Linial's idea to construct Ramanujan hypergraphs by using Ramanujan covering. Let $\bar{H}$ be a $2$-cover of $H$. We show that $n$ eigenvalues of $\bar{H}$ are induced from $H$ (*old eigenvalues*), and the other $n$ eigenvalues are induced from a quasi-signed hypergraph (*new eigenvalues*); see Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"}.
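As a quick numerical illustration of this old/new split in the classical graph case (the hypergraph analogue, via a quasi-signed hypergraph, is the content of Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"}), the following sketch builds a $2$-lift of a small graph from a signing of its edges, as in Bilu and Linial [@BL], and checks that the spectrum of the lift is the multiset union of the spectra of the adjacency matrix and of the signed adjacency matrix. The example graph and signing are our own choices.

```python
# Old/new eigenvalues of a 2-lift of a graph: spec(lift) = spec(A) U spec(A_s).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]        # a small graph G on 4 vertices
sign  = {(0, 1): 1, (1, 2): -1, (2, 3): 1, (3, 0): 1, (0, 2): -1}

n = 4
A  = np.zeros((n, n)); As = np.zeros((n, n)); L = np.zeros((2 * n, 2 * n))
for (u, v) in edges:
    A[u, v] = A[v, u] = 1
    As[u, v] = As[v, u] = sign[(u, v)]
    # +1 edges lift to parallel pairs, -1 edges to crossing pairs
    pairs = [(u, v), (u + n, v + n)] if sign[(u, v)] == 1 else [(u, v + n), (u + n, v)]
    for (a, b) in pairs:
        L[a, b] = L[b, a] = 1

old_new = np.sort(np.concatenate([np.linalg.eigvalsh(A), np.linalg.eigvalsh(As)]))
lift    = np.sort(np.linalg.eigvalsh(L))
print(np.allclose(old_new, lift))                        # expected: True
```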
We introduce the (left-sided, right-sided) Ramanujan covering as follows, and prove that every hypergraph has a right-sided Ramanujan 2-covering (Theorem [Theorem 19](#onesR){reference-type="ref" reference="onesR"}), and hence we can construct an infinite family of $(d,d)$-regular Ramanujan hypergraphs.

**Definition 3**. Let $\bar{H}$ be a $2$-cover of a connected $(d,r)$-regular hypergraph $H$. We call $\bar{H}$ a *Ramanujan covering* of $H$ if all the non-obvious new eigenvalues satisfy Eq. ([\[RamaH\]](#RamaH){reference-type="ref" reference="RamaH"}), a *right-sided Ramanujan covering* of $H$ if all new eigenvalues satisfy ([\[RamaHL\]](#RamaHL){reference-type="ref" reference="RamaHL"}), and a *left-sided Ramanujan covering* of $H$ if all non-obvious new eigenvalues satisfy ([\[RamaHR\]](#RamaHR){reference-type="ref" reference="RamaHR"}).

The structure of the remaining part of the paper is as follows. In Section 2, we give some basic notions and results for graphs and hypergraphs. In Section 3 we establish a relationship between the spectrum of a covering hypergraph and that of its underlying hypergraph. In Section 4, we present a lower bound for the second largest eigenvalue of a hypergraph by the universal cover. In the last section, we use Ramanujan covering to prove the existence of infinitely many $(d, d)$-regular Ramanujan hypergraphs for every degree greater than $2$, and the existence of infinitely many $(d+1,d)$-regular left-sided (or right-sided) Ramanujan hypergraphs for any prime power $d$ greater than $4$.

# preliminaries

## Hypergraphs and matrices

Let $H$ be a hypergraph. Denote by $\nu(H)$ the number of vertices of $H$ and $e(H)$ the number of edges of $H$. A *walk* $W$ of length $l$ in $H$ is a sequence of alternate vertices and edges: $v_{0}e_{1}v_{1}e_{2}\cdots e_{l}v_{l}$, where $v_{i} \ne v_{i+1}$ and $\{v_{i},v_{i+1}\}\subseteq e_{i+1}$ for $i=0,1,\ldots,l-1$. The walk $W$ is called a *path* if no vertices or edges are repeated in the sequence, a *cycle* if $v_0=v_l$ and no vertices or edges are repeated in the sequence except $v_0=v_l$, and is *non-backtracking* if $e_{i} \ne e_{i+1}$ for all $i=1,\ldots,l-1$. In the case of $H$ being a simple graph, we simply write $W: v_0 v_1 \cdots v_l$ as each edge contains exactly two vertices. The hypergraph $H$ is said to be *connected* if any two vertices are connected by a walk, and is called a *hypertree* if it is connected and contains no cycles. The *dual hypergraph* $H^*$ of $H$ is the hypergraph with vertex set $E(H)$ and edge set $\{\{e \in E(H): v \in e\}: v \in V(H)\}$. The *incident (bipartite) graph* of $H$, denoted by $B_H$, is the graph with vertex set $V(H) \cup E(H)$ and edge set $\{\{v,e\}: v\in V(H), e \in E(H), v \in e\}$. If $H$ is a simple graph, then $B_H$ is exactly the subdivision of $H$ (namely, the graph obtained by inserting an additional vertex into each edge of $H$). The *incident matrix* of $H$, denoted by $Z(H)=(z_{ve})$, is defined by $z_{ve}=1$ if $v \in e$ and $0$ else. Then we have the adjacency matrix of $B_H$ as follows: $$A(B_H)=\left(\begin{array}{cc} O & Z(H) \\ Z(H)^\top & O \end{array} \right).$$ The *(signless) Laplacian* of $H$, denoted by $Q(H)$, is defined by $$Q(H)=Z(H)Z(H)^\top=D(H)+A(H),$$ where $D(H)=\text{diag}\{d_v: v \in V(H)\}$, the degree matrix of $H$. It is known that $Z_{H^*}=Z(H)^\top$, and $B_{H^*}$ is isomorphic to $B_H$.
We also have $$A(B_H)^2=\left(\begin{array}{cc} Z(H)Z(H)^\top & O \\ O & Z(H)^\top Z(H)\end{array} \right)= \left(\begin{array}{cc} Q(H) & O \\ O & Q(H^*) \end{array} \right).$$ Let $\psi_A(\lambda)$ denote the characteristic polynomial of a square matrix $A$. Then $$\label{polyR} \lambda^{\nu(H)} \psi_{A(B_H)}(\lambda)= \lambda^{e(H)} \psi_{Q(H)}(\lambda^2), ~ \lambda^{e(H)} \psi_{Q(H)}(\lambda)=\lambda^{\nu(H)} \psi_{Q(H^*)}(\lambda).$$ So, the spectrum of any one of $A(B_H)$, $Q(H)$ and $Q(H^*)$ determines those of the other two. In particular, if $H$ is $(d,r)$-regular, then $Q(H)=dI + A(H)$ and $Q(H^*)=rI+A(H^*)$. In this case, the eigenvalues of $A(B_H)$ are (up to sign) the square roots of the eigenvalues of $A(H)+dI$; and if $\nu(H)>e(H)$, then $A(H)$ has the (least) *obvious eigenvalue* $-d$ with multiplicity $\nu(H)-e(H)$. Also, $A(B_H)$ has the *obvious eigenvalue* $0$ with multiplicity $|\nu(H)-e(H)|$ if $\nu(H) \ne e(H)$.

## Finite covering and universal covering

Gross and Tucker [@GT] applied permutation voltage graphs to characterize the finite coverings of simple graphs. Let $D$ be a digraph, possibly with multiple arcs, and let $\mathbb{S}_k$ be the symmetric group on the set $[k]$. Let $\phi:E(D)\to\mathbb{S}_k$ be a map which assigns a permutation to each arc of $D$. The pair $(D, \phi)$ is called a *permutation voltage digraph*. The *derived digraph* $D^\phi$ associated with $(D, \phi)$ is the digraph with vertex set $V(D)\times[k]$ such that $((u, i),(v, j))$ is an arc of $D^\phi$ if and only if $(u, v)\in E(D)$ and $i=\phi(u, v)(j)$. Let $G$ be a simple graph, and let $\overleftrightarrow{G}$ denote the symmetric digraph obtained from $G$ by replacing each edge $\{u, v\}$ by two arcs with opposite directions, written as $e=(u, v)$ and $e^{-1}:= (v, u)$ respectively. Let $\phi: E(\overleftrightarrow{G})\to\mathbb{S}_k$ be a permutation assignment on $\overleftrightarrow{G}$ such that $\phi(e)^{-1}=\phi(e^{-1})$ for each arc $e$ of $\overleftrightarrow{G}$. The pair $(G, \phi)$ is called a *permutation voltage graph*. The derived digraph $\overleftrightarrow{G}^\phi$, simply written as $G^\phi$, has symmetric arcs by definition, and is considered as a simple graph. Gross and Tucker [@GT] established the following relationship between $k$-fold coverings and derived graphs.

**Lemma 4**. *[@GT][\[per\]]{#per label="per"} Let $G$ be a connected graph and let $\bar{G}$ be a $k$-cover of $G$. Then there exists an assignment $\phi$ of permutations in $\mathbb{S}_k$ on $G$ such that $G^\phi$ is isomorphic to $\bar{G}$.*

The coverings of a hypergraph can be obtained by permutation assignments via the incident graph of the hypergraph. Let $H$ be a connected hypergraph and let $B_H$ be its incident graph. Let $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_k$ be such that $\phi(v,e)=\phi(e,v)^{-1}$ for each pair $(v,e)$ with $v \in e$. By definition, we obtain a $k$-covering $B_H^\phi$ of $B_H$. Note that in the graph $B_H^\phi$, $(e,j)$ is adjacent to the vertices $(u, \phi(u,e)j)$ for all $u \in e$. Then we obtain a $k$-fold covering of $H$ arising from $B_H^\phi$, denoted by $H_B^\phi$, which has vertex set $V(H) \times [k]$ and edges still denoted by $(e,j):=\{(u, \phi(u,e)j): u \in e\}$ for all $e \in E(H)$ and $j \in [k]$; see Fig. [1](#FC){reference-type="ref" reference="FC"} for the construction of a $2$-fold covering of a hypergraph.
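The following sketch (plain Python; the two-edge hypergraph and the voltage assignment are hypothetical choices of ours, not the ones of Fig. [1](#FC){reference-type="ref" reference="FC"}, and sheets are indexed by $\{0,1\}$ instead of $[2]$) makes the construction of $H_B^\phi$ explicit by lifting each edge $e$ to the edges $(e,j)=\{(u,\phi(u,e)j): u\in e\}$.

```python
# Permutations of S_2 acting on {0, 1}: identity and the swap (1 2).
IDENT = {0: 0, 1: 1}
SWAP = {0: 1, 1: 0}

# Toy hypergraph: two triangle-shaped edges sharing two vertices.
edges = {"e1": frozenset({"v1", "v2", "v3"}), "e2": frozenset({"v2", "v3", "v4"})}

# Voltage assignment phi(u, e) for each incidence; phi(e, u) is its inverse
# (every element of S_2 is its own inverse).  Only one incidence is swapped.
def phi(u, e):
    return SWAP if (u, e) == ("v1", "e1") else IDENT

# Derived 2-cover H_B^phi: vertex set V(H) x {0, 1}, and for each edge e and
# each sheet j, the lifted edge (e, j) = {(u, phi(u, e)(j)) : u in e}.
cover_edges = {}
for e, members in edges.items():
    for j in (0, 1):
        cover_edges[(e, j)] = frozenset((u, phi(u, e)[j]) for u in members)

for key, val in sorted(cover_edges.items()):
    print(key, sorted(val))
```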
![Construction of a $2$-covering of a hypergraph, where an edge of $H$ or $H_B^\phi$ is represented by the point set of a triangle; only the blue edge in $(B_H,\phi)$ is assigned the permutation $(12)$, and all other edges are assigned the identity](FCov.pdf){#FC}

**Lemma 5**. *[@LH; @Song][\[hypcov\]]{#hypcov label="hypcov"} Let $\bar{H}$ be a $k$-fold covering of a connected hypergraph $H$. Then there exists a permutation voltage assignment $\phi$ in $\mathbb{S}_k$ on $B_H$ such that $B^\phi_H$ is isomorphic to $B_{\bar{H}}$ by a map which sends $V(H)\times[k]$ to $V(\bar{H})$, and hence $H^\phi_B$ is isomorphic to $\bar{H}$.*

The universal cover of a simple graph $G$ is the unique infinite tree $\mathbb{T}_G$ such that every connected covering of $G$ is a quotient of $\mathbb{T}_G$. To construct the universal cover, fix a "root" vertex $v_0$ of $G$, and then place one vertex in $\mathbb{T}_G$ for each non-backtracking walk starting at $v_0$ of every length $l \in \mathbb{N}$. Two vertices are adjacent in $\mathbb{T}_G$ if their lengths differ by $1$ and one walk is a prefix of the other. The universal cover of a graph is unique up to isomorphism and is independent of the choice of the root $v_0$. The universal cover $\mathbb{T}_H$ of a hypergraph $H$ is a hypertree, which can be constructed as follows (see [@LS]). First, consider the universal cover $\mathbb{T}_{B_H}$ of the incident graph $B_H$ of $H$. Observe that $\mathbb{T}_{B_H}$ contains two kinds of levels: one consisting of the walks ending at vertices of $H$ (called the *vertex levels*) and the other consisting of the walks ending at edges of $H$ (called the *edge levels*). Take the vertices in the vertex levels as the vertices of $\mathbb{T}_H$, and for each vertex $e$ in the edge levels of $\mathbb{T}_{B_H}$, form an edge of $\mathbb{T}_H$ by taking all vertices of $\mathbb{T}_H$ that are adjacent to $e$; see Fig. [2](#UC){reference-type="ref" reference="UC"} for the universal cover of the hypergraph in Fig. [1](#FC){reference-type="ref" reference="FC"}. So, $\mathbb{T}_H$ is the bipartite half of $\mathbb{T}_{B_H}$ corresponding to the vertex levels.

![Construction of the universal cover of the hypergraph $H$ in Fig. [1](#FC){reference-type="ref" reference="FC"} via the universal cover of its incident graph $B_H$. In the left graph (the infinite tree $\mathbb{T}_{B_H}$), a vertex represents a non-backtracking walk of $B_H$ from the root $v_2$, which corresponds to the unique path of $\mathbb{T}_{B_H}$ from the root to that vertex; e.g. the vertex $v_1$ in the $7$th level represents the walk $v_2e_1v_3e_2v_2e_1v_1$, the unique path of $\mathbb{T}_{B_H}$ from the root to $v_1$ in the $7$th level](UniC.pdf){#UC}

In particular, if $G$ is a $d$-regular graph, then $\mathbb{T}_G$ is the infinite $d$-regular tree $\mathbb{T}_d$; if $G$ is a $(d,r)$-biregular graph, then $\mathbb{T}_G$ is the infinite $(d,r)$-biregular tree $\mathbb{T}_{d,r}$, in which the degrees alternate between $d$ and $r$ from one level to the next; here a graph is called $(d,r)$-*biregular* if it is bipartite, the vertices in one part of the bipartition have degree $d$, and the vertices in the other part have degree $r$. If $H$ is a $(d,r)$-regular hypergraph, then $B_H$ is a $(d,r)$-biregular graph whose universal cover is $\mathbb{T}_{d,r}$, and hence $\mathbb{T}_H$ is the bipartite half of $\mathbb{T}_{d,r}$ corresponding to the levels of vertices of degree $d$, denoted by $\mathbb{D}_{d,r}$.
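For later reference (cf. Corollary 17 and Lemma 18 below), it is worth recording the elementary computation behind the value of $\rho(\mathbb{D}_{d,r})$ quoted in the introduction: the relation $\rho(\mathbb{D}_{d,r})=\rho(\mathbb{T}_{d,r})^2-d$ (established in Section 4), combined with $\rho(\mathbb{T}_{d,r})=\sqrt{d-1}+\sqrt{r-1}$ (see the next subsection), gives $$\left(\sqrt{d-1}+\sqrt{r-1}\right)^2-d=(d-1)+(r-1)+2\sqrt{(d-1)(r-1)}-d=r-2+2\sqrt{(d-1)(r-1)},$$ which is exactly the bound appearing in Definition [\[RamaGraph\]](#RamaGraph){reference-type="ref" reference="RamaGraph"}.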
## Matching polynomial A *matching* in a graph is a subset of its edges, no two of which share a common vertex. For a graph $G$, let $m_i$ denote the number of *$i$-matchings* (i.e. a matching consisting of $i$ edges) of $G$ (with $m_0=1$). Heilmann and Lieb [@HL] defined the *matching polynomial* of $G$ to be the polynomial $$\mu_G(x)=\sum_{i\ge0}(-1)^im_i x^{n-2i},$$ where $n$ is the number of vertices of the graph $G$. The matching polynomial of $G$ is closely related to the characteristic polynomial of the path tree $T_G(u)$ of $G$. For a graph $G$ and a vertex $u$ of $G$, the *path tree* $T_G(u)$ contains one vertex for each path beginning at $u$, and two vertices (paths) are adjacent if their length differs by $1$ and one is a prefix of another. Godsil [@God81] proved that the matching polynomial $\mu_G(x)$ of $G$ divides the characteristic polynomial of $A(T_G(u))$, which implies that all roots of $\mu_G(x)$ are real and have absolute value at most the spectral radius (or the largest eigenvalue) of $A(T_G(u))$. Let $\mu(G)$ denote the largest root of $\mu_G(x)$. By definition, the path tree $T_G(u)$ is a finite induced subgraph of the universal cover $\mathbb{T}_G$. By Lemma 3.6 of [@MSS], $$\label{MatUni}\mu(G) \le \rho(T_G(u)) \le \rho(\mathbb{T}_G).$$ Note that $\rho(\mathbb{T}_d)=2\sqrt{d-1}$ and $\rho(\mathbb{T}_{d,r})=\sqrt{d-1}+\sqrt{r-1}$. So, if $G$ is a $d$-regular graph, then $\mu(G) \le 2\sqrt{d-1}$, and if $G$ is a $(d,r)$-biregular graph, then $\mu(G) \le\sqrt{d-1}+\sqrt{r-1}$. Let $G$ be a $(d,r)$-biregular graph, and let $V_1,V_2$ be the two parts of $G$ consisting of vertices of degree $d$ and $r$ respectively. Let $\tau_G:= \min \{|V_1|,|V_2|\}$. We will have the following result on the $\tau_G$-th largest root of $\mu_G(x)$. **Lemma 6**. *Let $G$ be a $(d,r)$-biregular graph, and let $\mu_{\tau_G}(G)$ be the $\tau_G$-th largest root of $\mu_G(x)$. Then $G$ has a $\tau_G$-matching, and $$0< \mu_{\tau_G}(G) \le \max\{\sqrt{d}, \sqrt{r}\}.$$* *Proof.* Let $V_1,V_2$ be the two parts of $G$ consisting of vertices of degree $d$ and $r$ respectively. Then $d |V_1|=r |V_2|$. Assume first that $d \ge r$ (or $|V_1| \le |V_2|$). We assert that $G$ has a matching covering all vertices of $V_1$, which implies that $G$ has a $\tau_G$-matching as $|V_1|=\tau_G$. For any nonempty subset $U$ of $V_1$, letting $N(U)$ be the set of neighbors of the vertices of $U$, surely $N(U) \subseteq V_2$, and $d \cdot |U| \le r\cdot |N(U)|.$ So we have $| N(U)| \ge |U|$ as $d \ge r$, and hence arrive at the assertion by Hall's Marriage Theorem. Denote $n_1:=|V_1|=\tau_G$, and $n_2:=|V_2|$. The matching polynomial of $G$ can be written as $$\begin{aligned} \mu_G(x)&=\sum_{i=0}^{n_1} (-1)^i m_i x^{n_1+n_2-2i}=x^{n_2-n_1} \sum_{i=0}^{n_1} (-1)^i m_i (x^2)^{n_1-i}\\ &=x^{n_2-n_1} (x^2-\mu_1(G)^2)\cdots (x^2-\mu_{n_1}(G)^2), \end{aligned}$$ where $\mu_1(G), \ldots, \mu_{n_1}(G)$ are the first $n_1$ nonnegative largest roots of $\mu_G(x)$ arranged in non-increasing order. As $G$ has an $n_1$-matching, $$\mu_1(G)^2 \cdots \mu_{n_1}(G)^2=m_{n_1}>0,$$ and then $\mu_{n_1}(G)=\mu_{\tau_G}(G)>0$. Noting that $m_1$ is exactly the number of edges of $G$, we have $$n_1 \mu_{n_1}(G)^2 \le \mu_1(G)^2 + \cdots + \mu_{n_1}(G)^2=m_{1}=n_1 d,$$ and hence $\mu_{n_1}(G)^2 \le d$. Similarly, if $r \ge d$, then $n_2 \le n_1$ and $G$ has an $n_2$-matching covering all vertices of $V_2$. In this case, $\mu_{n_2}(G)=\mu_{\tau_G}(G)>0$, and $\mu_{n_2}(G)^2 \le r$. The result now follows. 
◻ We now recall an identity of Godsil and Gutman [@God81] that relates the expected characteristic polynomial over uniformly random signings of the adjacency matrix of a graph to its matching polynomial. Let $G$ be a graph with $m$ edges ordered as $e_1, \ldots, e_m$. Associate a signing $s\in\{\pm1\}^m$ to the edges of $G$ such that the sign of $e_i$ is $s_i$ for each $i \in [m]$. We then let $A_s$ denote the signed adjacency matrix corresponding to $s$ and define $$\label{CharSgn} \psi_s(x)=\mathrm{det}(xI-A_s)$$ to be the characteristic polynomial of $A_s$. **Theorem 7**. *[@God81] [\[Match1\]]{#Match1 label="Match1"}Let $G$ be a simple graph and let $\psi_s(x)$ be defined as in ([\[CharSgn\]](#CharSgn){reference-type="ref" reference="CharSgn"}). Then $$\mathbb{E}_{s\in\{\pm1\}^m}[\psi_s(x)]=\mu_G(x).$$* ## Interlacing families Marcus et al. [@MSS] defined interlacing families and examined their properties. **Definition 8**. [@MSS] We say that a real-rooted polynomial $g(x)=\prod^{n-1}_{i=1}(x-\alpha_i)$ *interlaces* a real-rooted polynomial $f(x)=\prod^{n}_{i=1}(x-\beta_i)$ if $$\beta_1\le\alpha_1\le\beta_2\le\alpha_2\le\cdots\le\alpha_{n-1}\le\beta_n.$$ We say that polynomials $f_1,\ldots, f_k$ have a *common interlacing* if there is a single polynomial $g$ that interlaces each $f_i$ for $i \in [k]$. By applying a similar discussion as in [@MSS Lemma 4.2], we have the following result. **Lemma 9**. *Let $f_1,\ldots, f_k$ be degree-$n$ real-rooted polynomials with positive leading coefficients, and define $$f_\emptyset=\sum_{i=1}^kf_i.$$ If $f_1,\ldots, f_k$ have a common interlacing, then for each $\ell \in [n]$ there exists an $i$ for which the $\ell$-th largest root of $f_i$ is at most (at least) the $\ell$-th largest root of $f_\emptyset$.* **Definition 10**. [@MSS] Let $S_1,\ldots, S_m$ be finite sets, and for each assignment $(s_1,\ldots, s_m)\in S_1 \times \cdots \times S_m$, let $f_{s_1,\ldots,s_m}(x)$ be a real-rooted degree $n$ polynomial with positive leading coefficient. For a partial assignment $(s_1,\ldots, s_k)\in S_1 \times \cdots \times S_k$ with $k < m$, define $$f_{s_1,\ldots,s_k}=\sum_{s_{k+1}\in S_{k+1}, \ldots, s_m \in S_m} f_{s_1,\ldots, s_k,s_{k+1},\ldots,s_m}$$ as well as $$f_\emptyset=\sum_{s_1\in S_1, \ldots, s_m \in S_m} f_{s_1,\ldots, s_m}.$$ We say $\{f_{s_1,\ldots,s_m}\}_{S_1,\ldots,S_m}$ form an *interlacing family* if for all $k=0,1,\ldots,m-1$ and all $(s_1,\ldots, s_k)\in S_1 \times \cdots \times S_k$, the polynomials $\{f_{s_1,\ldots,s_k,t}\}_{t \in S_{k+1}}$ have a common interlacing. By applying a similar discussion as [@MSS Theorem 4.4], we have the following result. **Lemma 11**. *Let $S_1, \cdots, S_m$ be finite sets, and let $\{f_{s_1, \ldots, s_m}\}$ be an interlacing family of polynomials of degree $n$. Then for each $\ell \in [n]$ there exists some $s_1, \ldots, s_m\in S_1\times\cdots\times S_m$ so that the $\ell$-th largest root of $f_{s_1, \ldots, s_m}$ is at most (at least) the $\ell$-th largest root of $f_\emptyset$.* Marcus et al. [@MSS] proved that the characteristic polynomials $\{\psi_s\}_{s \in \{\pm 1\}^m}$ defined in ([\[CharSgn\]](#CharSgn){reference-type="ref" reference="CharSgn"}) form an interlacing family. So, similar to Theorem 5.3 of [@MSS], combining Theorem [\[Match1\]](#Match1){reference-type="ref" reference="Match1"} and Eq. ([\[MatUni\]](#MatUni){reference-type="ref" reference="MatUni"}), we have the following result. **Corollary 12**. *Let $G$ be a simple graph on $n$ vertices with adjacency matrix $A$. 
Then for each $\ell \in [n]$ there exists a signing $s$ of $A$ so that the $\ell$-th largest eigenvalue of $A_s$ is at most (at least) the $\ell$-th largest root of the matching polynomial of $G$. In particular, there exists a signing $s$ of $A$ so that the largest eigenvalue of $A_s$ is at most $\rho(\mathbb{T}_G)$.*

# Spectra of hypergraph covering

In this section, we investigate the spectral properties of hypergraph coverings. By Lemma [\[hypcov\]](#hypcov){reference-type="ref" reference="hypcov"}, it suffices to consider a $k$-cover $H_B^\phi$ of a hypergraph $H$, where $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_k$. For the permutation voltage graph $(B_H, \phi)$, noting that $\mathop{\mathrm{sgn}}\phi(u,e)=\mathop{\mathrm{sgn}}\phi(e,u)$ for each incidence $(u,e)$ with $u \in e$, define a *signed incident graph*, also denoted by $(B_H, \phi)$, such that $(B_H, \phi)_{u,e}=(B_H, \phi)_{e,u}=\mathop{\mathrm{sgn}}\phi(u,e)$ if $u \in e$, and $0$ otherwise; and a *signed incident matrix* $Z(H,\phi)$ such that $Z(H,\phi)_{ue}=\mathop{\mathrm{sgn}}\phi(u,e)$ for each $u \in e$, and $Z(H,\phi)_{ue}=0$ otherwise. Define a *quasi-signed adjacency matrix* $A(H,\phi)$ of $H$ such that, for $u \ne v$, $$A(H,\phi)_{uv}=\sum_{e: \{u,v\} \subseteq e} \mathop{\mathrm{sgn}}\phi(u,e) \mathop{\mathrm{sgn}}\phi(v,e),$$ with $A(H,\phi)_{uu}=0$, and a *quasi-signed Laplacian matrix* $$Q(H,\phi):=Z(H,\phi) Z(H,\phi)^\top = D(H)+A(H,\phi).$$ If $H$ is linear (namely, any two vertices are contained in at most one edge), then $A(H,\phi)_{uv}=\mathop{\mathrm{sgn}}\phi(u,e)\mathop{\mathrm{sgn}}\phi(v,e) \in \{1,-1\}$ if $\{u,v\} \subseteq e$, and $A(H,\phi)_{uv}=0$ otherwise, implying that $A(H,\phi)$ is a signed matrix. We get the following relationship between the spectrum of the covering hypergraph $H_B^\phi$ and that of its underlying hypergraph $H$, which generalizes the result of Bilu and Linial [@BL] on graph coverings; here $\text{Spec}A$ denotes the spectrum of a square matrix $A$.

**Theorem 13**. *Let $H_B^\phi$ be a $k$-cover of a hypergraph $H$. Then, as multi-sets, the spectrum of $A(H_B^\phi)$ contains that of $A(H)$; in particular, if $k=2$, then the spectrum of $A(H_B^\phi)$ also contains that of $A(H,\phi)$, and $\text{Spec}A(H_B^\phi)=\text{Spec}A(H) \cup \text{Spec}A(H,\phi)$ as multiset union.*

*Proof.* Let $x=(x_u)$ be an eigenvector of $A(H)$ associated with an eigenvalue $\lambda$. Let $\tilde{x}=(\tilde{x}_{u,i})$ be defined on $V(H) \times [k]$ such that $\tilde{x}_{u,i}:=x_u$ for all $u\in V(H)$ and $i \in [k]$. We assert that $\tilde{x}$ is an eigenvector of $A(H_B^\phi)$ associated with the same eigenvalue $\lambda$. By the eigenvector equation, for each $u\in V(H)$, $$\label{eig_H} \lambda x_u=\sum_{e\in E_H(u):\{u,v\}\subseteq e}x_v.$$ Let $A(H_B^\phi):=(a_{(u,i)(v,j)})$. Observe that $$a_{(u,i)(v,j)}=|\{(e,l) \in E(H) \times [k]: \{(u,i),(v,j)\} \subseteq (e,l)\}|,$$ where, by the definition of $H_B^\phi$, $\{(u,i),(v,j)\} \subseteq (e,l)$ holds if and only if $\{u,v\}\subseteq e$ and $l=\phi(e,u)i=\phi(e,v)j$. So $$a_{(u,i)(v,j)}=|\{e\in E(H): \{u,v\} \subseteq e, \phi(e,u)i=\phi(e,v)j\}|.$$ Now, for any $u\in V(H)$ and $i\in[k]$, by Eq.
([\[eig_H\]](#eig_H){reference-type="ref" reference="eig_H"}) and the definition of $\tilde{x}$, we have $$\begin{aligned} (A(H_B^\phi)\tilde{x})_{(u,i)}&=\sum_{(v,j) \in V(H) \times [k]} a_{(u,i)(v,j)}\tilde{x}_{v,j}\\ &=\sum_{e\in E_H(u): \{u,v\}\subseteq e,\phi(e,v)j=\phi(e,u)i}\tilde{x}_{v,j}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e}\tilde{x}_{v,\phi(e,v)^{-1}\phi(e,u)i}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e}x_v\\ &=\lambda x_u\\ &=\lambda \tilde{x}_{(u,i)},\end{aligned}$$ which implies that $\tilde{x}$ is an eigenvector of $A(H_B^\phi)$ associated with the eigenvalue $\lambda$. If $k=2$, let $y$ be an eigenvector of $A(H,\phi)$ associated with an eigenvalue $\mu$. Let $\tilde{y}$ be defined on $V(H) \times \{1,2\}$ such that $\tilde{y}_{u,1}=y_u$ and $\tilde{y}_{u,2}=-y_u$ for all $u\in V(H)$. We assert that $\tilde{y}$ is an eigenvector of $A(H^\phi)$ associated with the same eigenvalue $\mu$. By the eigenvector equation, for each $u\in V(H)$, $$\label{eig_Hp} \mu y_u =\sum_{e\in E_H(u):\{u,v\}\subseteq e}\mathop{\mathrm{sgn}}\phi(u,e) \mathop{\mathrm{sgn}}\phi(v,e)y_v.$$ For any $u \in V(H)$, $$\begin{aligned} (A(H_B^\phi)\tilde{y})_{u,1}&=\sum_{(v,j) \in V(H) \times \{1,2\}} a_{(u,1)(v,j)}\tilde{y}_{v,j}\\ &=\sum_{e\in E_H(u): \{u,v\}\subseteq e,\phi(e,v)j=\phi(e,u)1}\tilde{y}_{v,j}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e}\tilde{y}_{v,\phi(e,v)^{-1}\phi(e,u)1}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e}\mathop{\mathrm{sgn}}\phi(u,e)\mathop{\mathrm{sgn}}\phi(v,e)y_v\\ &=\mu y_u=\mu\tilde{y}_{u,1};\end{aligned}$$ and $$\begin{aligned} (A(H_B^\phi)\tilde{y})_{(u,2)}&=\sum_{(v,j)\in V(H) \times \{1,2\}} a_{(u,2)(v,j)}\tilde{y}_{v,j}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e,\phi(e,v)j=\phi(e,u)2}\tilde{y}_{v,j}\\ &=\sum_{e\in E_H(u):\{u,v\}\subseteq e}\tilde{y}_{v,\phi^{-1}(e,v)\phi(e,u)2}\\ &=\sum_{e\in E_H(u),\{u,v\}\subseteq e}-\mathop{\mathrm{sgn}}\phi(u,e)\phi(v,e)y_v\\ &=-\mu y_u=\mu\tilde{y}_{u,2},\end{aligned}$$ which implies that $\tilde{y}$ is an eigenvector of $A(H_B^\phi)$ associated with the eigenvalue $\mu$. By the above discussion, if $x$ is an eigenvector of $A(H)$, then by an arrangement of the vertices, $(x,\ldots,x)$ (repeating $k$ times) is an eigenvector of $A(H_B^\phi)$. So, the spectrum of $A(H_B^\phi)$ contains that of $A(H)$, including multiplicity. If $k=2$, if $y$ is an eigenvector of $A(H,\phi)$, then $(y,-y)$ is an eigenvector of the $2$-covering $H^\phi$. Noting that $(x,x)$ is orthogonal to $(y,-y)$, so, as multiset union, $\text{Spec}A(H_B^\phi)=\text{Spec}A(H) \cup \text{Spec}A(H,\phi)$ in this case. ◻ ***Remark** 14*. Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"} also holds for Laplacian matrix, namely, under the condition of Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"}, $\text{Spec}Q(H_B^\phi) \supseteq \text{Spec}Q(H)$, and if $k=2$, $\text{Spec}Q(H_B^\phi) = \text{Spec}Q(H) \cup \text{Spec}Q(H,\phi)$. We omit the details here. # The second largest eigenvalue Let $G$ be a graph or hypergraph (not necessarily finite but local finite with bounded degrees and bounded edge sizes), and let $\mathbb{C}^{V(G)}=\{(x_v)_{v \in V(G)}: x_v \in \mathbb{C}, v \in V(G)\}$. Define an inner product $\langle x, y \rangle=\sum_{v \in V(G)} x_v \bar{y}_v$, and the induced norm $\|x\|=\langle x, x \rangle^{1/2}$, where $\bar{y}_v$ denotes the conjugate of $y_v$. Let $$\ell^2(G)=\{x \in \mathbb{C}^{V(G)}: \sum_{v \in V(G)} |x_v|^2 < +\infty\}.$$ Then $\ell^2(G)$ is a complex Hilbert space. 
The adjacency operator $A(G)$ of $G$ is defined to be for each $u \in V(G)$ $$\label{Aop}(A(G) x)_u= \sum_{e \in E_G(u): \{u,v\} \in E(G)} x_v.$$ So $A(G)$ is a bounded linear operators on $\ell^2(G)$. As $A(G)$ is self-adjoint, the spectral radius $\rho(G)$ of $A(G)$ can be defined as (see [@Mohar]) $$\label{norm} \rho(G) =\|A(G)\|=\sup_{\|x\|_2=1} |\langle A(G)x, x \rangle|=\sup_{\|x\|_2=1} \langle A(G)x, x \rangle.$$ Let $H$ be a hypergraph, $B_H$ be its incident graph, and $\mathbb{T}_{B_H}$ the universal cover of $B_H$. Then we will have a linear operator $A(\mathbb{T}_{B_H})$ on $\ell^2(\mathbb{T}_{B_H})$ defined as in ([\[Aop\]](#Aop){reference-type="ref" reference="Aop"}). Recall that $\mathbb{T}_{B_H}$ has a root vertex say $v_0 \in V(H)$, and contains two kinds of vertex levels: the vertex levels and the edge levels. Denote by $V_1,V_2$ the sets of vertices in the vertex levels and the edge levels, respectively. With a little abusing of notations, for a vertex $w$ of $V_1$ (respectively, $V_2$) in $\mathbb{T}_{B_H}$, namely a non-backtracking walk $w$ of $B_H$ beginning at $v_0$ and ending at a vertex $v$ of $H$ (respectively, an edge $e$ of $H$), we wills simply use the vertex $v$ (respectively, the edge $e$) to denote the walk $w$; see the labeling of the vertices in the left graph in Fig. [2](#UC){reference-type="ref" reference="UC"}. We have a decomposition: $$\ell^2(\mathbb{T}_{B_H})=\mathbb{C}^{V_1} \oplus \mathbb{C}^{V_2}.$$ Define an operator $$\label{Z}Z: \mathbb{C}^{V_1} \to \mathbb{C}^{V_2}$$ such that $$(Zx)_e=\sum_{v: v \in e} x_v;$$ and $$\label{Z*} Z^*: \mathbb{C}^{V_2} \to \mathbb{C}^{V_1}$$ such that $$(Z^*x)_v=\sum_{e: v \in e} x_e.$$ It is easily seen that $Z^*$ is the adjoint operator of $Z$, and $$A(\mathbb{T}_{B_H})^2=Z^*Z \oplus ZZ^*.$$ As $A(\mathbb{T}_{B_H})$ is self-adjoint and $\rho(Z^*Z)=\rho(ZZ^*)$, we have $$\label{Rerad} \rho(A(\mathbb{T}_{B_H}))^2=\rho(A(\mathbb{T}_{B_H})^2)=\rho(Z^*Z)=\rho(ZZ^*).$$ **Lemma 15**. *Let $H$ be a hypergraph, let $Z,Z^*$ be defined in ([\[Z\]](#Z){reference-type="ref" reference="Z"}) and ([\[Z\*\]](#Z*){reference-type="ref" reference="Z*"}) respectively. Then $$Z^*Z=D(\mathbb{T}_H)+A(\mathbb{T}_H),$$ where $D(\mathbb{T}_H)$ is the degree diagonal operator such that $(D(\mathbb{T}_H)x)_v=d_v x_v$ for each $x \in \ell^2(\mathbb{T}_H)$ and each vertex $v \in V(\mathbb{T}_H)$.* *Proof.* Let $V_1,V_2$ be the sets of vertices in the vertex levels and the edge levels of $\mathbb{T}_{B_H}$, respectively. Then, for any $x \in \mathbb{C}^{V_1}$ and $v \in V_1$, $$\begin{aligned} (Z^*Zx)_v & =\sum_{e: v \in e} (Zx)_e\\ & =\sum_{e: v \in e} \sum_{u: u \in e} x_u\\ &=\sum_{e: v \in e} \left(x_v+ \sum_{u: u \in e, u \ne v} x_u\right)\\ &=d_v x_v+ \sum_{e\in E_G(v): \{v,u\} \subseteq e} x_u\\ & =(D(\mathbb{T}_H)x)_v + (A(\mathbb{T}_H)x)_v.\end{aligned}$$ The result follows. ◻ Note that $Z^*Z$ is a self-adjoint and positive operator on $\ell^2(\mathbb{C}^{V_1})$. So by Lemma [Lemma 15](#OpR){reference-type="ref" reference="OpR"}, $$\label{Radius} \rho(Z^*Z)=\sup_{\|x\|_2=1} \langle (Z^*Z)x, x \rangle =\sup_{\|x\|_2=1} \left(\langle D(\mathbb{T}_H) x, x \rangle + \langle A(\mathbb{T}_H) x, x \rangle\right).$$ Let $G$ be a graph on $n$ vertices. Greenberg [@Green] (see also [@Cio]) proved that $$\label{Green}\lambda(G) \ge \rho(\mathbb{T}_G)-o_n(1).$$ We will give a corresponding result on the second largest eigenvalue of a $d$-regular hypergraph. **Theorem 16**. *Let $H$ be a $d$-regular hypergraph. 
Then $$\lambda_2(H) \ge \rho(\mathbb{T}_H)-o_n(1).$$*

*Proof.* As $H$ is $d$-regular, we have $Q(H)=dI + A(H)$, so that $\lambda_2(H)=\lambda_2(B_H)^2-d$, and $D(\mathbb{T}_H)=dI$, where $I$ is the identity matrix or operator. By ([\[Green\]](#Green){reference-type="ref" reference="Green"}), ([\[Rerad\]](#Rerad){reference-type="ref" reference="Rerad"}) and Lemma [Lemma 15](#OpR){reference-type="ref" reference="OpR"}, we have $$\begin{aligned} \lambda_2(H)& =\lambda_2(B_H)^2 -d \\ &\ge (\rho(\mathbb{T}_{B_H})-o_n(1))^2-d\\ & =\rho(\mathbb{T}_{B_H})^2-d-o_n(1)\\ &=\rho(\mathbb{T}_{B_H}^2)-d-o_n(1)\\ & =\rho(Z^*Z)-d-o_n(1)\\ &=d+\rho(\mathbb{T}_H)-d-o_n(1)\\ &=\rho(\mathbb{T}_H)-o_n(1).\end{aligned}$$ ◻

**Corollary 17**. *[@LS] Let $H$ be a $(d,r)$-regular hypergraph. Then $$\lambda_2(H) \ge r-2+ 2\sqrt{(d-1)(r-1)}-o_n(1).$$*

*Proof.* If $H$ is $(d,r)$-regular, then $\mathbb{T}_{B_H}$ is the infinite $(d,r)$-biregular tree, and $\rho(\mathbb{T}_{B_H})=\sqrt{d-1}+\sqrt{r-1}$. By ([\[Rerad\]](#Rerad){reference-type="ref" reference="Rerad"}) and ([\[Radius\]](#Radius){reference-type="ref" reference="Radius"}), we have $$\rho(\mathbb{T}_H)=\rho(\mathbb{T}_{B_H})^2-d=r-2+ 2\sqrt{(d-1)(r-1)}.$$ The result follows by Theorem [Theorem 16](#lowB){reference-type="ref" reference="lowB"}. ◻

# Ramanujan hypergraphs

In this section, we investigate the existence and construction of Ramanujan hypergraphs via coverings. Let $H_B^\phi$ be a $k$-fold covering of a hypergraph $H$, where $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_k$. By definition, $$A(B_H,\phi)=\left[ \begin{array}{cc} O & Z(H,\phi) \\ Z(H,\phi)^\top & O \end{array} \right],$$ and $$Q(H,\phi):=Z(H,\phi) Z(H,\phi)^\top = D(H)+A(H,\phi).$$ Similar to Eq. ([\[polyR\]](#polyR){reference-type="ref" reference="polyR"}), we have $$\lambda^{\nu(H)}\psi_{A(B_H,\phi)}(\lambda) = \lambda^{e(H)}\psi_{Q(H,\phi)}(\lambda^2).$$ Now, taking $k=2$ and taking the expectation on both sides over all choices of $\phi$ in $\mathbb{S}_2$, chosen independently and uniformly, by Theorem [\[Match1\]](#Match1){reference-type="ref" reference="Match1"} we have $$\label{exprel} \lambda^{\nu(H)}\mu_{B_H}(\lambda)=\lambda^{\nu(H)} \mathbb{E}(\psi_{A(B_H,\phi)}(\lambda)) = \lambda^{e(H)}\mathbb{E}(\psi_{Q(H,\phi)}(\lambda^2)),$$ as the assignments $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$ are in one-to-one correspondence with the signings of $B_H$, each of which yields a signed matrix $A(B_H,\phi)$.

## Right-sided Ramanujan hypergraphs

**Lemma 18**. *There exists an assignment $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$ such that the largest eigenvalue of $Q(H,\phi)$ satisfies $$\lambda_1(Q(H,\phi)) \le \rho(\mathbb{T}_{B_H})^2.$$ In particular, if $H$ is $(d,r)$-regular, then there exists a $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$ such that $$\lambda_1(A(H,\phi)) \le r-2+ 2\sqrt{(d-1)(r-1)}.$$*

*Proof.* By Eq. ([\[exprel\]](#exprel){reference-type="ref" reference="exprel"}) and Corollary [Corollary 12](#Match3){reference-type="ref" reference="Match3"}, there exists an assignment $\phi$ such that the largest eigenvalue of $A(B_H,\phi)$ is at most $\rho(\mathbb{T}_{B_H})$. So, $$\lambda_1(Q(H,\phi))=\lambda_1(A(B_H,\phi))^2 \le \rho(\mathbb{T}_{B_H})^2.$$ If $H$ is $(d,r)$-regular, then $\lambda_1(Q(H,\phi))=d+\lambda_1(A(H,\phi))$, and $\rho(\mathbb{T}_{B_H})=\sqrt{d-1}+\sqrt{r-1}$. Hence, $$\begin{aligned} \lambda_1(A(H,\phi))& =\lambda_1(Q(H,\phi))-d \\ & \le \left(\sqrt{d-1}+\sqrt{r-1}\right)^2-d\\ &=r-2+ 2\sqrt{(d-1)(r-1)}.\end{aligned}$$ The result follows. ◻

**Theorem 19**.
*Every connected $(d,r)$-regular hypergraph has a right-sided Ramanujan $2$-covering.*

*Proof.* Let $H$ be a $(d,r)$-regular hypergraph. By Lemma [Lemma 18](#Extsgn){reference-type="ref" reference="Extsgn"}, there is a $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$ such that $\lambda_1(A(H,\phi)) \le r-2+ 2\sqrt{(d-1)(r-1)}.$ By definition, $H_B^\phi$ is a $2$-fold covering of $H$. By Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"}, $\text{Spec}A(H_B^\phi)=\text{Spec}A(H) \cup \text{Spec}A(H,\phi)$ as multi-set union. So $H^\phi_B$ is a right-sided Ramanujan covering of $H$. ◻

***Example** 20*. (Examples of Ramanujan hypergraphs) Let $K^d_{d+1}$ be the complete $d$-uniform hypergraph on $d+1$ vertices, which is obviously $(d,d)$-regular. The eigenvalues of $A(K^d_{d+1})$ are $d(d-1)$, and $-(d-1)$ with multiplicity $d$. It is easily verified that $K^d_{d+1}$ is a Ramanujan hypergraph.

A *projective plane* $PG(2,q)$ of order $q$ consists of a set $X$ of $q^2 + q + 1$ elements called points, and a set $B$ of $(q + 1)$-subsets of $X$ called lines, such that any two points lie on a unique line. It can be derived from the definition that any point lies on $q + 1$ lines, and two lines meet in a unique point. Associated with the projective plane we have a hypergraph, also denoted by $PG(2,q)$, whose vertices are the points of $X$ and whose edges are the lines of $B$. Then $PG(2,q)$ is a $(q + 1,q+1)$-regular hypergraph with $q^2 + q + 1$ vertices. It is easy to verify that $PG(2,q)$ has the trivial eigenvalue $q(q+1)$, and the eigenvalue $-1$ with multiplicity $q(q+1)$. So $PG(2,q)$ is a Ramanujan hypergraph.

An *affine plane* $AG(2, q)$ of order $q$ can be obtained from the projective plane $PG(2,q)$ by removing a line and all the points on it. It is easily seen that the corresponding hypergraph $AG(2, q)$ is $(q+1,q)$-regular, has $q^2$ vertices and $q^2+q$ edges, and any two points lie on a unique line. The eigenvalues of $AG(2, q)$ are $q^2-1$, and $-1$ with multiplicity $q^2-1$, implying that $AG(2, q)$ is a Ramanujan hypergraph.

By Theorem [Theorem 13](#spec){reference-type="ref" reference="spec"} and Theorem [Theorem 19](#onesR){reference-type="ref" reference="onesR"}, if we have a right-sided Ramanujan hypergraph, we can construct an infinite tower of right-sided Ramanujan $2$-coverings. So, from the Ramanujan hypergraphs in Example [**Example** 20](#exm){reference-type="ref" reference="exm"}, we immediately have the following result. Note that a projective plane or affine plane of order $q$ always exists if $q$ is a prime power.

**Theorem 21**. *There exists an infinite family of $(d,d)$-regular (respectively, $(d+1,d)$-regular) right-sided Ramanujan hypergraphs for every $d\ge3$ (respectively, for every prime power $d$).*

In Definition [\[RamaGraph\]](#RamaGraph){reference-type="ref" reference="RamaGraph"} of Ramanujan hypergraphs, if $d=r$, then the lower bound in ([\[RamaHR\]](#RamaHR){reference-type="ref" reference="RamaHR"}) reduces to $\lambda\ge -d$, which holds trivially. So, in this case, $H$ is Ramanujan if and only if it is right-sided Ramanujan.

**Corollary 22**. *There exists an infinite family of $(d,d)$-regular Ramanujan hypergraphs for every $d\ge3$.*

## Left-sided Ramanujan hypergraphs

We now discuss $(d,r)$-regular left-sided Ramanujan hypergraphs $H$ with $d \ne r$. Note that $d<r$ if and only if $\nu(H) > e(H)$. Define $\tau_H=\min\{\nu(H),e(H)\}$.
Then the lower bound in ([\[RamaHR\]](#RamaHR){reference-type="ref" reference="RamaHR"}) is exactly $$\lambda_{\tau_H}(H) \ge r-2-2\sqrt{(d-1)(r-1)}.$$ By Lemma [Lemma 6](#mutau){reference-type="ref" reference="mutau"} we know that $\mu_{\tau_H}(B_H)>0$. If $\mu_{\tau_H}(B_H)$ has a lower bound in Eq. ([\[conj\]](#conj){reference-type="ref" reference="conj"}) below, we will show that $H$ has a left-sided Ramanujan $2$-covering. **Theorem 23**. *Let $H$ be a $(d,r)$-regular hypergraph, and $\mu_{\tau}(B_H)$ be the $\tau$-th largest root of the matching polynomial $\mu_{B_H}(x)$ of $B_H$, where $\tau=\tau_H$. If $$\label{conj} \mu_{\tau}(B_H)\ge \left|\sqrt{d-1}-\sqrt{r-1}\right|,$$ then $H$ has a left-sided Ramanujan $2$-covering.* *Proof.* By Corollary [Corollary 12](#Match3){reference-type="ref" reference="Match3"}, there exists an assignment $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$ such that the $\tau$-th largest eigenvalue $\lambda_{\tau}(A(B_H,\phi))$ of $(B_H, \phi)$ is at least $\mu_{\tau}(B_H)$. So, $$\begin{aligned} \lambda_{\tau}(A(H,\phi)) & = \lambda_{\tau}(B_H,\phi)^2 -d \ge \mu_{\tau}(B_H)^2-d \\ & \ge \left(\sqrt{d-1}-\sqrt{r-1}\right)^2-d=r-2-2\sqrt{(d-1)(r-1)},\end{aligned}$$ which implies that $H$ has a left-sided Ramanujan $2$-covering $H^\phi_B$. ◻ If $H$ is a left-sided Ramanujan hypergraph satisfying Eq. ([\[conj\]](#conj){reference-type="ref" reference="conj"}), then $H^\phi$ is a left-sided Ramanujan hypergraph for some $\phi: E(\overleftrightarrow{B_H}) \to \mathbb{S}_2$, as $\tau(H^\phi)=2\tau(H)$ and $$\lambda_{\tau(H^\phi)} \ge \min\{\lambda_{\tau(H)}(H),\lambda_{\tau(H)}(A(H,\phi))\}.$$ So, we pose the following problem: **Problem 1.** Does Eq. ([\[conj\]](#conj){reference-type="ref" reference="conj"}) hold for every $(d,r)$-regular hypergraph or $(d,r)$-regular left-sided Ramanujan hypergraph $H$ with $d \ne r$? Finally, we give an explicit construction of infinitely many $(q+1,q)$-regular left-sided Ramanujan hypergraphs, where $q$ is a prime power greater than $4$. We need some knowledge for preparation. Let $G$ be a simple graph and $G^\phi$ be a $k$-fold covering of $G$, where $\phi: E(\overleftrightarrow{G})\to\mathbb{S}_k$. For each permutation $\sigma\in \mathbb{S}_k$, we associate it with a permutation matrix $P_{\sigma}=(p_{ij})$ of order $k$ such that $p_{ij}=1$ if and only if $i=\sigma(j)$. The above mapping $\sigma\to P_{\sigma}$ is the permutation representation of $\mathbb{S}_k$. Adopting the terminology from [@MoharT], $G^\phi$ is called an *Abelian lift* (or *Abelian covering*) if all permutations $\sigma$ or representation matrices $P_{\sigma}$ commute with each other for all $\sigma\in \phi(E(\overleftrightarrow{G}))$. Mohar and Tayfeh-Rezaie [@MoharT] used the representation matrices to define the covering graph $G^\phi$, which are consistent with the covering graph defined in this paper. Suppose that $G^\phi$ is an Abelian $k$-lift of $G$. As the matrices $P_{\sigma}$ commute with each other for all $\sigma\in \phi(E(\overleftrightarrow{G}))$, they have a common basis of $k$ eigenvectors denoted by $\mathcal{B}_\phi$. Mohar and Tayfeh-Rezaie [@MoharT] proved that following beautiful result on the spectrum of $A(G^\phi)$, where $\lambda_x(A)$ denotes the eigenvalue of $A$ corresponding to $x$ if $x$ is an eigenvector of $A$. **Lemma 24**. *[@MoharT][\[Mspec\]]{#Mspec label="Mspec"} Let $G$ be a simple graph and $G^\phi$ be an Abelian $k$-lift of $G$, where $\phi: E(\overleftrightarrow{G})\to\mathbb{S}_k$. 
For each $x \in \mathcal{B}_\phi$, let $A_x$ be the matrix obtained from the adjacency matrix $A(G)$ of $G$ by replacing each nonzero $(u,v)$-entry by $\lambda_x(P_{\phi(u,v)})$. Then the spectrum of $A(G^\phi)$ is the multiset union of the spectra of $A_x$ for all $x \in \mathcal{B}_\phi$.*

**Theorem 25**. *For any prime power $q$ greater than $4$, there exist infinitely many $(q+1,q)$-regular left-sided Ramanujan hypergraphs.*

*Proof.* Let $H$ be the hypergraph corresponding to the affine plane $AG(2,q)$, where $q$ is a prime power greater than $4$. Let $B_H$ be the incident bipartite graph of $H$, which is a $(q+1,q)$-biregular graph. Let $\{v_0,e_0\}$ be an edge of $B_H$, where $v_0 \in V(H)$, $e_0 \in E(H)$ and $v_0 \in e_0$. Let $\phi: E(\overleftrightarrow{B_H})\to\mathbb{S}_k$ be such that $\phi((v_0,e_0))=(12\cdots k)=:\sigma_0$ (a cyclic permutation of $\mathbb{S}_k$), $\phi((e_0,v_0))=\sigma_0^{-1}$, and $\phi(v,e)=1$ (the identity) for all other arcs $(v,e) \in E(\overleftrightarrow{B_H})$. In this case, $\mathcal{B}_\phi$ consists of $k$ eigenvectors $x_i$ of $P_{\sigma_0}$ corresponding to the eigenvalues $\lambda_{x_i}(P_{\sigma_0})=e^{{\bf i} 2 \pi i/k}$ for $i=0,1,\ldots,k-1$, where ${\bf i}=\sqrt{-1}$. Surely, $\lambda_{x_i}(P_{\sigma_0^{-1}})=\lambda_{x_i}(P^{-1}_{\sigma_0})=e^{-{\bf i}2 \pi i/k}$ for $i=0,1,\ldots,k-1$. For each $i=0,1,\ldots,k-1$, let $A_{x_i}$ be the matrix obtained from $A(B_H)$ by replacing the $(v_0, e_0)$-entry by $e^{{\bf i} 2 \pi i/k}$ and the $(e_0, v_0)$-entry by $e^{-{\bf i} 2 \pi i/k}$. Surely, $A_{x_0}=A(B_H)$. Observe that $A(B_H)$ has eigenvalues $\pm \sqrt{q^2+q}$, each with multiplicity $1$, $\pm \sqrt{q}$, each with multiplicity $q^2-1$, and $0$ with multiplicity $q$. Since $\tau_H=q^2$, we have $\lambda_{\tau_H}(A(B_H))=\lambda_{q^2}(A(B_H))=\sqrt{q}$. Write $A_{x_i}$ in the form $$A_{x_i}=A(B_H) + B_i,$$ where, after a suitable labeling of the vertices, $$B_i=\left(\begin{array}{cc} 0 & e^{{\bf i} 2 \pi i/k}-1 \\ e^{-{\bf i} 2 \pi i/k}-1 & 0 \end{array}\right) \oplus O_{2q^2+q-2},$$ and $O_{2q^2+q-2}$ is the zero matrix of order $2q^2+q-2$. So, when $q \ge 5$, for each $i=1,\ldots,k-1$, $$\lambda_{q^2}(A_{x_i}) \ge \lambda_{q^2}(A(B_H))+ \lambda_{2q^2+q}(B_i)=\sqrt{q}-\sqrt{2(1-\cos(2 \pi i/k))}\ge \sqrt{q}-\sqrt{q-1}.$$ By Lemma [\[Mspec\]](#Mspec){reference-type="ref" reference="Mspec"}, the spectrum of $A(B_H^\phi)$ is the multiset union of the spectra of $A(B_H)$ and $A_{x_i}$ for $i=1,2,\ldots,k-1$. So, $$\lambda_{kq^2}(A(B_H^\phi)) \ge \min\{ \lambda_{q^2}(A(B_H)), \lambda_{q^2}(A_{x_i}): i=1,2,\ldots,k-1\}\ge \sqrt{q}-\sqrt{q-1}.$$ Note that $\tau_{H_B^\phi}=k \tau_H=kq^2$. So, $$\lambda_{kq^2}(A(H_B^\phi)) = \lambda_{kq^2}(A(B_H^\phi))^2-(q+1) \ge \left(\sqrt{q}-\sqrt{q-1}\right)^2-(q+1)=q-2-2\sqrt{q(q-1)}.$$ By definition, $H_B^\phi$ is left-sided Ramanujan for every positive integer $k$ with $\phi: E(\overleftrightarrow{B_H})\to\mathbb{S}_k$ defined as above. ◻

We end this paper with another problem:

**Problem 2.** When $d \ne r$, how can one characterize the $(d,r)$-regular hypergraphs which are both left-sided and right-sided Ramanujan? The problem is very difficult even for simple non-bipartite $d$-regular graphs.

# References

N. Alon, Eigenvalues and expanders, *Combinatorica*, 6(2): 83-96, 1986. C. Bachoc, A. Gundert, A. Passuello, The theta number of simplicial complexes, *Israel J. Math.*, 232 (1): 443--481, 2019. Y. Bilu, N.
Linial, Lifts, discrepancy and nearly optimal spectral gap, *Combinatorica*, 26(5): 495--519, 2006. S. M. Cioabă, Eigenvalues of graphs and a simple proof of a theorem of Greenberg, *Linear Algebra Appl.*, 416: 776-782, 2006. S. M. Cioabă, J. H. Koolen, M. Mimura, H. Nozaki, T. Okuda, On the spectrum and linear programming bound for hypergraphs. *European J. Combin.*, 104: 103535, 2022. J. Cooper, A. Dutle, Spectra of uniform hypergraphs, *Linear Algebra Appl.*, 436: 3268--3292, 2012. A. M. Duval, V. Reiner, Shifted simplicial complexes are Laplacian integral, *Trans. Amer. Math. Soc.*, 354 (11): 4313--4344, 2002. K. Feng, W.-C. W. Li, Spectra of hypergraphs and applications. *J. Number Theory*, 60: 1-22, 1996. J. Friedman, A. Wigderson, On the second eigenvalue of hypergraphs, *Combinatorica*, 15(1): 43-65, 1995. C. D. Godsil, Matchings and walks in graphs, *J. Graph Theory*, 5(3): 285-297, 1981. C. D. Godsil, B. Mohar, Walk generating functions and spectral measures of infinite graphs, *Linear Algebra Appl.*, 107: 191-206, 1988. Y. Greenberg, *On the Spectrum of Graphs and Their Universal Covering* (in Hebrew), Ph.D. thesis, Hebrew University of Jerusalem, 1995. J. L. Gross, T. W. Tucker, Generating all graph covering by permutation voltage assignments, *Discrete Math.*, 18: 273-283, 1977. C. Hall, D. Puder, W. F. Sawin, Ramanujan coverings of graphs, *Adv. Math.*, 323: 367-410, 2018. O. J. Heilmann, E. H. Lieb, Theory of monomer-dimer systems, *Commun. Math. Phys.*, 25: 190-232, 1972. S. Hoory, M. Linial, A. Wigderson, Expander graphs and their applications, *Bull. Amer. Math. Soc.*, 43(4): 439-561, 2006. D. Horak, J. Jost, Spectra of combinatorial Laplace operators on simplicial complexes, *Adv. Math.*, 244: 303--336, 2013. D. Li , Y. Hou. Hypergraph coverings and their zeta functions, *Electron. J. Combin.*, 25(4): \#P4.59, 2018. H.-H. Li, B. Mohar, On the first and second eigenvalue of finite and infinite uniform hypergraphs, *Proc. Amer. Math. Soc.*, 147(3): 933-946, 2019. W.-C. W. Li, P. Solé, Spectra of regular graphs and hypergraphs and orthogonal polynomials, *European J. Combin.*, 17(5): 461-477, 1996. L.-H. Lim, Singular values and eigenvalues of tensors: a variational approach, in: *Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing*, Vol. 1, CAMSAP'05, 2005, pp. 129--132. A. Lubotzky, R. Phillips, P. Sarnak, Ramanujan graphs, *Combinatorica*, 8: 261-277, 1988. A. W. Marcus, D. A. Spielman, N. Srivastava, Interlacing families I: Bipartite Ramanujan graphs of all degrees, *Ann. of Math.*, 182: 307-325, 2015. B. Mohar, The spectrum of an infinite graph, *Linear Algebra Appl.*, 48: 245-256, 1982. B. Mohar, B. Tayfeh-Rezaie, Median eigenvalues of bipartite graphs, *J. Algebraic Combin.*, 41: 899-909, 2015. A. Nilli, On the second eigenvalue of a graph, *Discrete Math.*, 91(2): 207--210, 1991. O. Parzanchevski, R. Rosenthal, Simplicial complexes: spectrum, homology and random walks, *Random Struct. Algorithms*, 50 (2): 225--261, 2017. L. Qi, Eigenvalues of a real supersymmetric tensor, *J. Symbolic Comput.*, 40: 1302--1324, 2005. Y.-M. Song, Y.-Z. Fan, Y. Wang, M.-Y. Tian, J.-C. Wan, The spectral property of hypergraph coverings, available at arXiv: 2108.13417v2, 2021. H. M. Stark, A. A. Terras, Zeta functions of finite graphs and coverings, *Adv. Math.*, 121: 124-165, 1996. [^1]: \*The corresponding author. This work was supported by National Natural Science Foundation of China (Grant No. 12331012). [^2]: Hall et al. 
[@HPS] remarked that one can also define another version of one-sided Ramanujan covering, in which all new eigenvalues are bounded from below by $-\rho(\mathbb{T}_G)$.

[^3]: Li and Solé [@LS] introduced the Ramanujan hypergraph. Here we use left-sided and right-sided Ramanujan hypergraphs to distinguish the conditions on $\lambda$ in the upper-bound and lower-bound cases.
--- abstract: | In this paper, we argue that the usual approach to modelling knowledge and belief with the necessity modality $\Box$ does not produce intuitive outcomes in the framework of the Belnap--Dunn logic ($\mathsf{BD}$, alias $\mathbf{FDE}$ --- first-degree entailment). We then motivate and introduce a nonstandard modality $\blacksquare$ that formalises knowledge and belief in $\mathsf{BD}$ and use $\blacksquare$ to define $\bullet$ and $\blacktriangledown$ that formalise the *unknown truth* and ignorance as *not knowing whether*, respectively. Moreover, we introduce another modality $\mathbf{I}$ that stands for *factive ignorance* and show its connection with $\blacksquare$. We equip these modalities with Kripke-frame-based semantics and construct a sound and complete analytic cut system for $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$ --- the expansions of $\mathsf{BD}$ with $\blacksquare$ and $\mathbf{I}$. In addition, we show that $\Box$ as it is customarily defined in $\mathsf{BD}$ cannot define any of the introduced modalities, nor, conversely, neither $\blacksquare$ nor $\mathbf{I}$ can define $\Box$. We also demonstrate that $\blacksquare$ and $\mathbf{I}$ are not interdefinable and establish the definability of several important classes of frames using $\blacksquare$. **Keywords:** Belnap--Dunn logic; non-standard modalities; factive ignorance; knowledge whether; expressivity; analytic cut. author: - Daniil Kozhemiachenko - Liubov Vashentseva bibliography: - references.bib title: "Knowledge and ignorance in Belnap--Dunn logic[^1]" --- # Introduction[\[sec:introduction\]]{#sec:introduction label="sec:introduction"} Formalising epistemic and doxastic contexts using classical modal logics can produce counter-intuitive outcomes. For example, if $\Box\phi$ is interpreted as "the agent believes in $\phi$"[^2], then $\Box(p\wedge\neg p)\rightarrow\Box q$ is valid in every regular classical modal logic. This means an agent cannot believe in a contradictory statement without believing in every proposition. Similarly, if we interpret $\Box$ as a (classical) knowledge operator, it usually entails the knowability paradox that states that there is no unknown truth. This happens because an instance of *reductio ad absurdum* is used in the derivation (cf., e.g., [@BrogaardSalerno2019 §2]). In addition, since (in normal logics) necessitation is sound, the agents are omniscient because they know every valid formula. In particular, $\Box\neg(p\wedge\neg p)$ is valid which means that the agent is supposed to know that every contradiction is false. However, if we hold that an agent's knowledge must be built upon the facts they have at hand, we might not be inclined to accept the validity of $\Box\neg(p\wedge\neg p)$ because it may happen that the agent has *no information at all* regarding $p$. These issues pose a problem not only for the formalisation of knowledge or belief but also for the formalisation of ignorance because it is usually defined as the *lack of knowledge* ("standard view") or the *lack of true belief* ("new view"; cf. a detailed discussion of these two approaches in [@leMorvanPeels2016]). On the other hand, *paraconsistent modal logics* usually do not suffer from the described drawbacks as *reductio ad absurdum* is not valid, and thus can be more intuitive when it comes to the formalisation of belief, knowledge, and ignorance. 
#### Ignorance and not knowing the truth The standard view of ignorance has been extensively criticised (cf., e.g., [@KubyshkinaPetrolo2021 §2.1] for an overview of the arguments), in particular, because if we assume it (and classical logic), the agents will be ignorant of every (classically) unsatisfiable formula (as indeed, $\neg\Box\phi$ is a theorem of every modal logic extending $\mathbf{KD}$ when $\phi$ is unsatisfiable). This issue is addressed by assuming that the agent is ignorant of $\phi$ when $\phi$ is true but the agent believes that it is false. A (classical) logic formalising this treatment of ignorance was proposed in [@KubyshkinaPetrolo2021]. Another approach to defining ignorance was proposed in [@vanderHoekLomuscio2004] and [@Steinsvold2008]. There the authors interpret "the agent is ignorant about $\phi$" as "the agent does not know whether $\phi$ is true", i.e., $\neg(\Box\phi\vee\Box\neg\phi)$. Thus, ignorance is treated as contingency modality $\triangledown$ introduced in [@MontgomeryRoutley1966] and then explored in, e.g., [@Humberstone1995; @Zolin1999].[^3] An epistemic interpretation of $\triangle$ (knowing whether) was further investigated in [@FanWangvanDitmarsch2015]. In addition to ignorance, it is worth mentioning the operator "$\phi$ is an unknown truth" ($\phi\wedge\neg\Box\phi$) studied in [@Steinsvold2008]. Note that while the agent does not have a true belief regarding $\phi$, the unknown truth does not conform to the new view of ignorance (cf. [@Peels2011] and [@KubyshkinaPetrolo2021 §2.2] for a detailed discussion). In this framework, unknown truth is defined as an accidence operator $\bullet$ introduced in [@Fine1995][^4] and further studied in [@Fine2000] and [@Marcos2005essenceaccident]. #### Modal expansions of the Belnap--Dunn logic Belnap--Dunn logic ($\mathsf{BD}$, alias First Degree Entailment --- $\mathbf{FDE}$) is a paraconsistent logic over the $\{\neg,\wedge,\vee\}$ language formulated by Dunn and Belnap in a series of papers [@Dunn1976; @Belnap1977computer; @Belnap1977fourvalued]. Semantically, $\mathsf{BD}$ retains the semantical conditions of truth and falsity of $\{\neg,\wedge,\vee\}$-formulas from classical logic but treats them independently (cf. Table [1](#table:BDinformal){reference-type="ref" reference="table:BDinformal"}). **is true when** **is false when** ---------------------- -------------------------------------- ---------------------------------------- $\neg\phi$ $\phi$ is false $\phi$ is true $\phi_1\wedge\phi_2$ $\phi_1$ and $\phi_2$ are true $\phi_1$ is false or $\phi_2$ is false $\phi_1\vee\phi_2$ $\phi_1$ is true or $\phi_2$ is true $\phi_1$ and $\phi_2$ are false : Truth and falsity conditions of $\mathsf{BD}$-formulas. Note that a formula can also be *both true and false* and *neither true nor false* as truth and falsity conditions are independent. Thus, any proposition $\phi$ can have one value (be either exactly true[^5] or exactly false), both values (i.e., a truth-value "glut" --- both true and false) or no value (a truth-value "gap" --- neither true nor false). It is easy to see from Table [1](#table:BDinformal){reference-type="ref" reference="table:BDinformal"} that if all variables of $\phi$ are both true and false, then $\phi$ itself is both true and false. Likewise, if all variables are neither true nor false, then so is $\phi$. 
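To make Table [1](#table:BDinformal){reference-type="ref" reference="table:BDinformal"} concrete, here is a minimal sketch in plain Python (an illustration of ours, not part of the paper's formal apparatus) that represents a Belnapian value as a pair (supported true, supported false) and implements the connectives; in particular, $p\wedge\neg p$ evaluates to $\mathbf{B}$ when $p$ is $\mathbf{B}$ and to $\mathbf{N}$ when $p$ is $\mathbf{N}$, rather than exploding.

```python
# A Belnapian value is a pair (t, f): t = "told true", f = "told false".
T, F, B, N = (True, False), (False, True), (True, True), (False, False)

def neg(a):
    t, f = a
    return (f, t)

def conj(a, b):
    return (a[0] and b[0], a[1] or b[1])

def disj(a, b):
    return (a[0] or b[0], a[1] and b[1])

names = {T: "T", F: "F", B: "B", N: "N"}
for p in (T, F, B, N):
    print(f"p = {names[p]}:  p & ~p = {names[conj(p, neg(p))]}")
# p = T:  p & ~p = F
# p = F:  p & ~p = F
# p = B:  p & ~p = B
# p = N:  p & ~p = N
```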
Nevertheless, the validity can be defined for sequents of the form $\phi\vdash\chi$ where $\phi$ and $\chi$ are formulas in the $\{\neg,\wedge,\vee\}$ language: $\phi\vdash\chi$ is valid if whenever $\phi$ is true, then $\chi$ is true as well. $\mathsf{BD}$ has well-studied modal expansions with standard $\Box$- and $\lozenge$-like modalities (cf. e.g. [@Priest2008FromIftoIs; @Priest2008; @OdintsovWansing2017; @Drobyshevich2020]). They usually employ frame semantics and use either Hilbert-style or tableaux calculi for their proof theory. There is also work on the correspondence theory for expansions of $\mathsf{BD}$ with $\Box$ modality and (or) some implication (cf., e.g. [@RivieccioJungJansana2017; @Drobyshevich2020]). In this approach, $\Box\phi$ is defined in the expected manner: $\Box\phi$ is true at $w$ iff $\phi$ is true in all accessible states; $\Box\phi$ is false at $w$ iff there is an accessible state where $\phi$ is false. Non-standard modal expansions of $\mathsf{BD}$ have also been recently studied. In particular, the "classicality" operator was introduced in [@AntunesCarnielliKapsnerRodriguez2020] and a non-contingency operator $\blacktriangle\phi$ interpreted "the agent knows whether $\phi$ is true" in the epistemic contexts was proposed in [@KozhemiachenkoVashentseva2023]. On the other hand, there are no (as far as the authors are aware) studies of expansions of $\mathsf{BD}$ with accidence or ignorance operators. #### Plan of the paper The remainder of the text is organised as follows. In Section [\[sec:preliminaries\]](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}, we argue that the $\Box$ modality does not align well with the intuitive understanding of belief and knowledge in the framework of $\mathsf{BD}$. We also introduce a non-standard modality $\blacksquare$ and provide motivation for its use as a more suitable belief or knowledge operator). Using the intuition behind $\blacksquare$, we then show how to define the ignorance modality $\mathbf{I}$. Section [\[sec:logics\]](#sec:logics){reference-type="ref" reference="sec:logics"} is dedicated to the formal presentation of $\mathsf{BD}^\mathbf{I}$ and $\mathsf{BD}^\blacksquare$ --- the expansions of $\mathsf{BD}$ with $\mathbf{I}$ and $\blacksquare$, respectively. We provide their Kripke semantics and show that $\blacktriangledown$, $\blacktriangle$ (as proposed in [@KozhemiachenkoVashentseva2023]), and $\bullet$ can be defined from $\blacksquare$ in the expected manner. We also show that the $\mathsf{BD}^\mathbf{I}$-counterparts of the rules and axioms of classical logic of ignorance in [@GilbertKubyshkinaPetroloVenturi2022] are valid. Moreover, we prove that $\blacksquare$ can be used as a knowledge modality since truthfulness, positive, and negative introspection are valid on $\mathbf{S5}$-frames (and since $\mathbf{S5}$-frames are definable using $\blacktriangle$ [@KozhemiachenkoVashentseva2023 Theorem 5.4]). We provide a sound and complete tableaux calculus for $\mathsf{BD}^\mathbf{I}$ and $\mathsf{BD}^\blacksquare$ in Section [\[sec:tableaux\]](#sec:tableaux){reference-type="ref" reference="sec:tableaux"}. Section [\[sec:expressivity\]](#sec:expressivity){reference-type="ref" reference="sec:expressivity"} explores the expressivity of $\blacksquare$, $\mathbf{I}$, and $\Box$. Namely, we show that all three modalities are not mutually interdefinable. 
Section [\[sec:definability\]](#sec:definability){reference-type="ref" reference="sec:definability"} addresses the definability of several important classes of frames in the language with $\blacksquare$. In particular, we show that Euclidean, serial, as well as transitive Euclidean frames are definable and thus $\blacksquare$ can be used as a doxastic modality as well. Finally, in Section [\[sec:conclusion\]](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}, we summarise the results of the paper and provide a roadmap for future research. # Belief, knowledge, and ignorance in Belnap--Dunn logic[\[sec:preliminaries\]]{#sec:preliminaries label="sec:preliminaries"} In this section, we argue that the usual definition of $\Box$ in modal expansions of $\mathsf{BD}$ is not well-suited for the analysis of doxastic and epistemic contexts. To do this, let us first recall[^6] the semantics of $\mathsf{BD}^\Box$ (alias $\mathbf{K}_\mathbf{FDE}$) from [@Priest2008FromIftoIs §11a.4]. *Convention 1* (Languages). We fix a countable set $\mathsf{Var}=\{p,q,r,\ldots\}$ of propositional variables and define the language $\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$ using the following grammar in Backus--Naur form. $$\begin{aligned} \mathscr{L}_{\Box,\blacksquare,\mathbf{I}}\ni\phi&\coloneqq p\in\mathsf{Var}\mid\neg\phi\mid(\phi\wedge\phi)\mid(\phi\vee\phi)\mid\Box\phi\mid\blacksquare\phi\mid\mathbf{I}\phi\end{aligned}$$ In this paper, we will be mostly concerned with three fragments of $\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$ --- $\mathscr{L}_\Box$, $\mathscr{L}_\blacksquare$, and $\mathscr{L}_\mathbf{I}$ that contain only one modality: $\Box$, $\blacksquare$, and $\mathbf{I}$, respectively. **Definition 1** (Semantics of $\mathsf{BD}^\Box$). A *frame* is a tuple $\mathfrak{F}=\langle W,R\rangle$ with $W\neq\varnothing$, $R$ being a binary accessibility relation on $W$. A *model* is a tuple $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$ with $\langle W,R\rangle$ being a frame and $v^+$ and $v^-$ being maps from $\mathsf{Var}$ to $2^W$ interpreted as support of truth and support of falsity, respectively. If $w\in\mathfrak{M}$, a tuple $\langle\mathfrak{M},w\rangle$ is called a *pointed model*. We also set $R(w)=\{w':wRw'\}$ and $R^!(w)=R(w)\setminus\{w\}$. The semantics of $\mathscr{L}_\Box$ formulas is defined as follows. $$\begin{aligned} \mathfrak{M},w\Vdash^+p&\text{ iff }w\in v^+(p)&\mathfrak{M},w\Vdash^-p&\text{ iff }w\in v^-(p)\\ \mathfrak{M},w\Vdash^+\neg\phi&\text{ iff }\mathfrak{M},w\Vdash^-\phi&\mathfrak{M},w\Vdash^-\neg\phi&\text{ iff }\mathfrak{M},w\Vdash^+\phi\\ \mathfrak{M},w\Vdash^+\phi_1\!\wedge\!\phi_2&\text{ iff }\mathfrak{M},w\Vdash^+\phi_1\text{ and }\mathfrak{M},w\Vdash^+\phi_2&\mathfrak{M},w\Vdash^-\phi_1\!\wedge\!\phi_2&\text{ iff }\mathfrak{M},w\Vdash^-\phi_1\text{ or }\mathfrak{M},w\Vdash^-\phi_2\\ \mathfrak{M},w\Vdash^+\phi_1\!\vee\!\phi_2&\text{ iff }\mathfrak{M},w\Vdash^+\phi_1\text{ or }\mathfrak{M},w\Vdash^+\phi_2&\mathfrak{M},w\Vdash^-\phi_1\!\vee\!\phi_2&\text{ iff }\mathfrak{M},w\Vdash^-\phi_1\text{ and }\mathfrak{M},w\Vdash^-\phi_2\\ \mathfrak{M},w\Vdash^+\Box\phi&\text{ iff }\forall w'\in R(w):\mathfrak{M},w'\Vdash^+\phi&\mathfrak{M},w\Vdash^-\Box\phi&\text{ iff }\exists w'\in R(w):\mathfrak{M},w'\Vdash^-\phi\end{aligned}$$ We also write $\lozenge\phi$ as a shorthand for $\neg\Box\neg\phi$. Let $\mathfrak{F}$ be a frame. 
$\phi\vdash\chi$ is *valid on $\mathfrak{F}$* (denoted $\mathfrak{F}\models[\phi\vdash\chi]$) iff for any model $\mathfrak{M}$ on $\mathfrak{F}$, and for any $w\in\mathfrak{M}$, if $\mathfrak{M},w\Vdash^+\phi$, then $\mathfrak{M},w\Vdash^+\chi$. $\phi\vdash\chi$ is *(universally) valid* iff it is valid on every frame. *Convention 2* (Notation in the models). Throughout the paper, we are going to give examples of models. We will use the shorthands shown in Table [2](#table:modelnotation){reference-type="ref" reference="table:modelnotation"} to denote the values of variables in states. **notation** **meaning** ----------------- -------------------------------------- $w:p^+$ $p$ is true and non-false at $w$ $w:p^-$ $p$ is false and non-true at $w$ $w:p^\pm$ $p$ is both true and false at $w$ $w:\xcancel{p}$ $p$ is neither true nor false at $w$ : Notation in the models. *Remark 1*. In [@Belnap1977fourvalued; @Belnap1977computer], $\mathsf{BD}$ is formulated as a four-valued logic with truth table semantics where each value from $\{\mathbf{T},\mathbf{F},\mathbf{B},\mathbf{N}\}$[^7] represents what a computer or a database might be told regarding a given statement. - $\mathbf{T}$ stands for "just told True". - $\mathbf{F}$ stands for "just told False". - $\mathbf{B}$ (or **Both**) stands for "told both True and False". - $\mathbf{N}$ (or **None**) stands for "told neither True nor False". It is also possible to formulate $\mathsf{BD}^\Box$ as a *four-valued* modal logic (cf., e.g., [@Priest2008; @Priest2008FromIftoIs]). Now, why is $\Box$ not well-suited to formalise belief or knowledge? Recall from classical logic that $\triangle\phi$ which is read as "$\phi$ is non-contingent", "the agent knows whether $\phi$ is the case" (in the epistemic setting), or "the agent is opinionated w.r.t. $\phi$" (in the doxastic setting) is true at a given state $w$ when $\phi$ has *the same value in all accessible states*. Thus, $\Box\phi\rightarrow\triangle\phi$[^8] is a valid formula. Classically, this is evident since $\triangle\phi\coloneqq\Box\phi\vee\Box\neg\phi$. Moreover, if the underlying frame is reflexive, $\triangle$ can be used to define $\Box$: $\Box\phi\coloneqq\phi\wedge\triangle\phi$. This, however, is not necessarily the case in $\mathsf{BD}$. Indeed, $\mathsf{BD}$ can be viewed as a four-valued logic (Remark [Remark 1](#rem:4valuedBD){reference-type="ref" reference="rem:4valuedBD"}). Just as in classical logic, it is reasonable to consider "$\phi$ is non-contingent" true only if $\phi$ has the same value in all accessible states. This time, however, $\phi$ can have not two but four values. Note, however, that it is possible for $\Box p$ to be true at a given state even when $p$ has different values in different states accessible from $w$ (cf. Fig. [\[fig:trueandfalse\]](#fig:trueandfalse){reference-type="ref" reference="fig:trueandfalse"}). Thus, $\Box\phi\vee\Box\neg\phi$ is not a suitable formalisation of the knowing whether modality in $\mathsf{BD}$. In [@KozhemiachenkoVashentseva2023], we were addressing this issue and introduced a non-contingency modality $\blacktriangle$ that captures the intuition behind "knowing whether" in a four-valued setting better than $\Box\phi\vee\Box\neg\phi$. Namely, for $\blacktriangle\phi$ to be true at $w$, $\phi$ should have the same Belnapian value in every accessible state (thus, $\mathfrak{M},w_0\nVdash^+\blacktriangle p$ in Fig. [\[fig:trueandfalse\]](#fig:trueandfalse){reference-type="ref" reference="fig:trueandfalse"}). 
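This contrast can be checked mechanically. The sketch below (plain Python, our own illustration) encodes a hypothetical pointed model in the spirit of Fig. [\[fig:trueandfalse\]](#fig:trueandfalse){reference-type="ref" reference="fig:trueandfalse"}: $w_0$ sees $w_1$, where $p$ is exactly true, and $w_2$, where $p$ is both true and false. Only the clauses of Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"} needed for $\Box$ and the informal truth condition of $\blacktriangle$ described above are implemented; the full semantics of $\blacktriangle$ is given in [@KozhemiachenkoVashentseva2023].

```python
# Pointed BD model: w0 sees w1 and w2; p is exactly true at w1 and both
# true and false at w2 (two different Belnapian values among successors).
R = {"w0": ["w1", "w2"], "w1": [], "w2": []}
v_plus = {"p": {"w1", "w2"}}          # support of truth
v_minus = {"p": {"w2"}}               # support of falsity

def true_at(w, phi):
    """Support of truth for variables and Box (Definition 1)."""
    op, arg = phi
    if op == "var":
        return w in v_plus[arg]
    if op == "box":
        return all(true_at(u, arg) for u in R[w])

def false_at(w, phi):
    """Support of falsity for variables and Box (Definition 1)."""
    op, arg = phi
    if op == "var":
        return w in v_minus[arg]
    if op == "box":
        return any(false_at(u, arg) for u in R[w])

def value(w, phi):
    return (true_at(w, phi), false_at(w, phi))   # Belnapian value

def triangle_true_at(w, phi):
    # Informal truth condition of "knowing whether": phi takes the same
    # Belnapian value in every accessible state.
    vals = {value(u, phi) for u in R[w]}
    return len(vals) <= 1

p = ("var", "p")
print(true_at("w0", ("box", p)))      # True:  Box p is true at w0
print(triangle_true_at("w0", p))      # False: the values at w1, w2 differ
```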
Now, we can introduce a new doxastic or epistemic modality $\blacksquare$ that satisfies the above desideratum. Namely, in order for $\blacksquare\phi$ to be true at $w$, not only should $\phi$ be true at all accessible states but $\phi$ should have the same (Belnapian) value in all of them. In particular, it should be the case that $\mathfrak{M},w_0\nVdash^+\blacksquare p$. Let us now present the intuitions behind the ignorance modality $\mathbf{I}$. First, we recall the *classical* ignorance modality $\mathbb{I}$ as defined in [@KubyshkinaPetrolo2021]. A *classical* Kripke model[^9] is a tuple $\mathfrak{M}=\langle W,R,v\rangle$ with $W\neq\varnothing$, $R\subseteq W\times W$ and $v$ being a classical valuation. The semantics of $\mathbb{I}$ is then as follows: $$\begin{aligned} \mathfrak{M},w\Vdash\mathbb{I}\phi&\text{ iff }\mathfrak{M},w\Vdash\phi\text{ and }\forall w'\in R^!(w):\mathfrak{M},w'\nVdash\phi\label{equ:ignoclassical}\end{aligned}$$ I.e., the agent is ignorant of $\phi$ when $\phi$ is true but the agent believes[^10] that it is false (if they take into account accessible states that are different from $w$). Note that while $\mathbb{I}$ is not definable via $\Box$ [@KubyshkinaPetrolo2021 §3], it is convenient to represent $\mathbb{I}\phi$ as $\phi\wedge\Box^!\neg\phi$ where $\Box^!$ is a doxastic or epistemic modality w.r.t. $R^!(w)$ (and not $R(w)$): $$\begin{aligned} \mathfrak{M},w\Vdash\Box^!\phi&\text{ iff }\forall w'\in R^!(w):\mathfrak{M},w'\Vdash\phi\label{equ:strictboxclassical}\end{aligned}$$ # Belnap--Dunn logics of knowledge and ignorance[\[sec:logics\]]{#sec:logics label="sec:logics"} Let us now formalise the accounts of $\blacksquare$ and $\mathbf{I}$ given in the previous section. We begin with the presentation of $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$ and then discuss their semantical properties. ## Language and semantics[\[ssec:ignoknowlanguage\]]{#ssec:ignoknowlanguage label="ssec:ignoknowlanguage"} The next definition presents the Kripke semantics of $\blacksquare$ and $\mathbf{I}$. **Definition 2** (Kripke semantics of $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$). Let $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$ be a model as presented in Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"}. We define the semantics of $\mathscr{L}_\blacksquare$ and $\mathscr{L}_\mathbf{I}$ formulas as follows: the truth and falsity conditions of propositional formulas are as in Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"}; the semantics of $\blacksquare\phi$ and $\mathbf{I}\phi$ are given below. 
$$\begin{aligned} \mathfrak{M},w\Vdash^+\blacksquare\phi\text{ iff }&\forall w'\in R(w):\mathfrak{M},w'\Vdash^+\phi\text{ and }\forall w_1,w_2\in R(w):\mathfrak{M},w_1\Vdash^-\phi\Rightarrow\mathfrak{M},w_2\Vdash^-\phi\\ \mathfrak{M},w\Vdash^-\blacksquare\phi\text{ iff }&\exists w'\in R(w):\mathfrak{M},w'\Vdash^-\phi\text{ or }\exists w_1,w_2\in R(w):\mathfrak{M},w_1\Vdash^+\phi\text{ and }\mathfrak{M},w_2\nVdash^+\phi\\ \mathfrak{M},w\Vdash^+\mathbf{I}\phi\text{ iff }&\mathfrak{M},w\Vdash^+\phi\text{ and }\forall w'\in R^!(w):\mathfrak{M},w'\Vdash^-\phi\\&\text{ and }\forall w_1,w_2\in R^!(w):\mathfrak{M},w_1\Vdash^+\phi\Rightarrow\mathfrak{M},w_2\Vdash^+\phi\\ \mathfrak{M},w\Vdash^-\mathbf{I}\phi\text{ iff }&\mathfrak{M},w\Vdash^-\phi\text{ or }\exists w'\in R^!(w):\mathfrak{M},w'\Vdash^+\phi\\&\text{ or }\exists w'_1,w'_2\in R^!(w):\mathfrak{M},w'_1\nVdash^-\phi\text{ and }\mathfrak{M},w'_2\Vdash^-\phi\end{aligned}$$ We also write $\blacklozenge\phi$ as a shorthand for $\neg\blacksquare\neg\phi$ and $\bullet\phi$ as a shorthand for $\phi\wedge\neg\blacksquare\phi$. The validity is defined as expected. For $\phi,\chi\in\mathscr{L}_\blacksquare$ ($\phi,\chi\in\mathscr{L}_\mathbf{I}$, respectively), $\phi\vdash\chi$ is *valid on a frame $\mathfrak{F}$* (denoted $\mathfrak{F}\models[\phi\vdash\chi]$) iff for any model $\mathfrak{M}$ on $\mathfrak{F}$, and for any $w\in\mathfrak{M}$, if $\mathfrak{M},w\Vdash^+\phi$, then $\mathfrak{M},w\Vdash^+\chi$. $\phi\vdash\chi$ is *$\mathsf{BD}^\blacksquare$ valid* (*$\mathsf{BD}^\mathbf{I}$ valid*, respectively) iff it is valid on every frame. *Remark 2*. One can notice that, indeed, given a frame $\mathfrak{F}=\langle W,R\rangle$ and a pointed model $\langle\mathfrak{M},w\rangle$ on it, we have $$\begin{aligned} \label{equ:ignobox!} \mathfrak{M},w\Vdash^+\mathbf{I}p&\text{ iff }\mathfrak{M},w\Vdash^+p\wedge\blacksquare^!\neg p&\mathfrak{M},w\Vdash^-\mathbf{I}p&\text{ iff }\mathfrak{M},w\Vdash^-p\wedge\blacksquare^!\neg p\end{aligned}$$ where $\blacksquare^!$ is associated to $R^!$ (recall [\[equ:ignoclassical\]](#equ:ignoclassical){reference-type="eqref" reference="equ:ignoclassical"} and [\[equ:strictboxclassical\]](#equ:strictboxclassical){reference-type="eqref" reference="equ:strictboxclassical"} as well as Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"}). Moreover, it is easy to see from Definitions [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"} and [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"} that as long as all formulas have classical values in all states[^11] of a given model, then $\mathbf{I}$, $\blacksquare$, and $\bullet$ *behave classically*. It is also clear that there are no valid formulas in $\mathsf{BD}^\Box$, $\mathsf{BD}^\blacksquare$, and $\mathsf{BD}^\mathbf{I}$ as the following proposition states. **Proposition 1**. *Let $\phi\!\in\!\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$. Then, there are pointed models $\langle\mathfrak{M},w\rangle$ and $\langle\mathfrak{M}',w'\rangle$ s.t. $\mathfrak{M},w\nVdash^+\phi$ and $\mathfrak{M}',w'\Vdash^-\phi$.* *Proof.* We construct two pointed models: the one where *every* $\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$-formula is *false*, and the other where every $\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$-formula is *non-true*. Consider Fig. [\[fig:BandNall\]](#fig:BandNall){reference-type="ref" reference="fig:BandNall"}. 
It is easy to check by induction that for any $\phi\in\mathscr{L}_{\Box,\blacksquare,\mathbf{I}}$ (1) $\mathfrak{M},w_0\Vdash^+\phi$ and $\mathfrak{M},w_0\Vdash^-\phi$; (2) $\mathfrak{M},w'_0\nVdash^+\phi$ and $\mathfrak{M},w'_0\nVdash^-\phi$. The result follows. ◻ Note that all logics we are considering here --- $\mathsf{BD}^\Box$, $\mathsf{BD}^\blacksquare$, and $\mathsf{BD}^\mathbf{I}$ --- are conservative expansions of $\mathsf{BD}$. This, however, is not sufficient to obtain Proposition [Proposition 1](#prop:novalidities){reference-type="ref" reference="prop:novalidities"} since it is possible to define modalities in such a way that *modal formulas* can be valid. Let us now show a technical result analogous to [@KozhemiachenkoVashentseva2023 Lemma 2.12] that will simplify some proofs in this section. **Definition 3** (Dual models). For any model $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$, we define its *dual model* on the same frame $\mathfrak{M}_\partial=\langle W,R,v^+_\partial,v^-_\partial\rangle$ as follows. $$\begin{aligned} \text{if }w\in v^+(p)\text{ and }w\notin v^-(p)&\text{ then }w\in v^+_\partial(p)\text{ and }w\notin v^-_\partial(p)\\ \text{if }w\in v^+(p)\text{ and }w\in v^-(p)&\text{ then }w\notin v^+_\partial(p)\text{ and }w\notin v^-_\partial(p)\\ \text{if }w\notin v^+(p)\text{ and }w\notin v^-(p)&\text{ then }w\in v^+_\partial(p)\text{ and }w\in v^-_\partial(p)\\ \text{if }w\notin v^+(p)\text{ and }w\in v^-(p)&\text{ then }w\notin v^+_\partial(p)\text{ and }w\in v^-_\partial(p)\end{aligned}$$ In other words, if a variable was either true and non-false or false and non-true in some state in a model, then it remains such in the dual[^12] model. But if it was both true and false, it becomes neither true nor false and vice versa. **Proposition 2**. *Let $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$ be a model and $\mathfrak{M}_\partial=\langle W,R,v^+_\partial,v^-_\partial\rangle$ be its dual model. Then for any $\phi\in\mathscr{L}_\mathbf{I}\cup\mathscr{L}_\blacksquare$ and $w\in\mathfrak{M}$, it holds that $$\begin{aligned} \text{if }\mathfrak{M},w\Vdash^+\phi\text{ and }\mathfrak{M},w\nVdash^-\phi&\text{ then }\mathfrak{M}_\partial,w\Vdash^+\phi\text{ and }\mathfrak{M}_\partial,w\nVdash^-\phi\\ \text{if }\mathfrak{M},w\Vdash^+\phi\text{ and }\mathfrak{M},w\Vdash^-\phi&\text{ then }\mathfrak{M}_\partial,w\nVdash^+\phi\text{ and }\mathfrak{M}_\partial,w\nVdash^-\phi\\ \text{if }\mathfrak{M},w\nVdash^+\phi\text{ and }\mathfrak{M},w\nVdash^-\phi&\text{ then }\mathfrak{M}_\partial,w\Vdash^+\phi\text{ and }\mathfrak{M}_\partial,w\Vdash^-\phi\\ \text{if }\mathfrak{M},w\nVdash^+\phi\text{ and }\mathfrak{M},w\Vdash^-\phi&\text{ then }\mathfrak{M}_\partial,w\nVdash^+\phi\text{ and }\mathfrak{M}_\partial,w\Vdash^-\phi\end{aligned}$$* *Proof.* We adapt the technique from [@ZaitsevShramko2004english] and prove the statement by induction on $\phi$. The basis case of propositional variables holds by the construction of $v^+_\partial$ and $v^-_\partial$. The cases of propositional connectives hold by virtue of the admissibility of the contraposition in $\mathsf{BD}$ [@Font1997; @Dunn2000; @ZaitsevShramko2004english]. It remains to consider the cases of $\blacksquare$ and $\mathbf{I}$. Let $\mathfrak{M},w\Vdash^+\blacksquare\phi$ and $\mathfrak{M},w\nVdash^-\blacksquare\phi$. Then, $\mathfrak{M},w'\Vdash^+\phi$ and $\mathfrak{M},w'\nVdash^-\phi$ in all $w'\in R(w)$. 
Applying the induction hypothesis, we have that $\mathfrak{M}_\partial,w'\Vdash^+\phi$ and $\mathfrak{M}_\partial,w'\nVdash^-\phi$ in all $w'\in R(w)$, whence $\mathfrak{M}_\partial,w\Vdash^+\blacksquare\phi$ and $\mathfrak{M}_\partial,w\nVdash^-\blacksquare\phi$. Now assume that $\mathfrak{M},w\nVdash^+\blacksquare\phi$ and $\mathfrak{M},w\Vdash^-\blacksquare\phi$. Then, there are the following cases. 1. There is $w'\in R(w)$ s.t. $\mathfrak{M},w'\nVdash^+\phi$ and $\mathfrak{M},w'\Vdash^-\phi$. 2. There are $w_1,w_2\in R(w)$ s.t. one of the following holds: 1. $\mathfrak{M},w_1\Vdash^+\phi$ and $\mathfrak{M},w_1\nVdash^-\phi$ but $\mathfrak{M},w_2\Vdash^+\phi$ and $\mathfrak{M},w_2\Vdash^-\phi$; 2. $\mathfrak{M},w_1\Vdash^+\phi$ and $\mathfrak{M},w_1\nVdash^-\phi$ but $\mathfrak{M},w_2\nVdash^+\phi$ and $\mathfrak{M},w_2\nVdash^-\phi$; 3. $\mathfrak{M},w_1\Vdash^+\phi$ and $\mathfrak{M},w_1\Vdash^-\phi$ but $\mathfrak{M},w_2\nVdash^+\phi$ and $\mathfrak{M},w_2\nVdash^-\phi$. Applying the induction hypothesis, we obtain the following. 1. There is $w'\in R(w)$ s.t. $\mathfrak{M}_\partial,w'\nVdash^+\phi$ and $\mathfrak{M}_\partial,w'\Vdash^-\phi$ (nothing changes from $(a)$). 2. There are $w_1,w_2\in R(w)$ s.t. one of the following holds: 1. $\mathfrak{M}_\partial,w_1\Vdash^+\phi$ and $\mathfrak{M}_\partial,w_1\nVdash^-\phi$ but $\mathfrak{M}_\partial,w_2\nVdash^+\phi$ and $\mathfrak{M}_\partial,w_2\nVdash^-\phi$; 2. $\mathfrak{M}_\partial,w_1\Vdash^+\phi$ and $\mathfrak{M}_\partial,w_1\nVdash^-\phi$ but $\mathfrak{M}_\partial,w_2\Vdash^+\phi$ and $\mathfrak{M}_\partial,w_2\Vdash^-\phi$; 3. $\mathfrak{M}_\partial,w_1\nVdash^+\phi$ and $\mathfrak{M}_\partial,w_1\nVdash^-\phi$ but $\mathfrak{M}_\partial,w_2\Vdash^+\phi$ and $\mathfrak{M}_\partial,w_2\Vdash^-\phi$. It is clear that in all cases: $(a')$ and $(b'.1)$--$(b'.3)$, it holds that $\mathfrak{M}_\partial,w\nVdash^+\blacksquare\phi$ and $\mathfrak{M}_\partial,w\Vdash^-\blacksquare\phi$. The cases where $\mathfrak{M},w\Vdash^+\blacksquare\phi$ and $\mathfrak{M},w\Vdash^-\blacksquare\phi$ or $\mathfrak{M},w\nVdash^+\blacksquare\phi$ and $\mathfrak{M},w\nVdash^-\blacksquare\phi$ can be tackled similarly. Let us proceed to $\mathbf{I}\phi$. Observe from Remark [Remark 2](#rem:ignobox!){reference-type="ref" reference="rem:ignobox!"} and [\[equ:ignobox!\]](#equ:ignobox!){reference-type="eqref" reference="equ:ignobox!"} that $\mathbf{I}\phi$ can be defined as $\phi\wedge\blacksquare^!\neg\phi$. Since the statement holds for $\blacksquare$ and propositional connectives on every frame $\mathfrak{F}=\langle W,R\rangle$ and since $\blacksquare^!$ is just $\blacksquare$ defined with $R^!(w)$ instead of $R(w)$, we obtain the result for $\mathbf{I}\phi$ as well. ◻ *Remark 3*. Proposition [Proposition 2](#prop:dualvaluations){reference-type="ref" reference="prop:dualvaluations"} has an important immediate consequence: if both $\phi\vdash\chi$ and $\chi\vdash\phi$ are valid (on a given frame), then $$\begin{aligned} \mathfrak{M},w\Vdash^+\phi&\text{ iff }\mathfrak{M},w\Vdash^+\chi&\mathfrak{M},w\Vdash^-\phi&\text{ iff }\mathfrak{M},w\Vdash^-\chi\end{aligned}$$ for every pointed model $\langle\mathfrak{M},w\rangle$ (on that frame). Moreover, it follows that if $\phi\vdash\chi$ is valid (on a given frame), then $\neg\chi\vdash\neg\phi$ is also valid (on that frame). I.e., the contraposition is sound, as expected.
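The dual-model construction and Proposition [Proposition 2](#prop:dualvaluations){reference-type="ref" reference="prop:dualvaluations"} can also be checked mechanically on small models. The following sketch is our own illustration (the two-state model, the encoding of formulas, and all function names are assumptions made only for this example); it evaluates $\blacksquare$ exactly as in Definition [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"} and verifies that dualisation preserves the values $\mathbf{T}$ and $\mathbf{F}$ while swapping $\mathbf{B}$ and $\mathbf{N}$.

```python
# Brute-force check of Proposition 2 for the black-square fragment (our own sketch,
# not the paper's code); the two-state model below is hypothetical.

W = ["w0", "w1"]
R = {"w0": ["w0", "w1"], "w1": ["w1"]}

SWAP = {(True, True): (False, False), (False, False): (True, True)}  # B <-> N; T and F fixed

def value(w, val, phi):
    """Return (supports truth, supports falsity) of phi at w under valuation val."""
    op = phi[0]
    if op == "var":
        return val[phi[1]][w]
    if op == "neg":
        t, f = value(w, val, phi[1])
        return (f, t)
    if op == "bsq":  # the modality of Definition 2
        vals = [value(u, val, phi[1]) for u in R[w]]
        truth = all(t for t, _ in vals) and (all(f for _, f in vals) or not any(f for _, f in vals))
        falsity = any(f for _, f in vals) or (any(t for t, _ in vals) and not all(t for t, _ in vals))
        return (truth, falsity)

def dual(val):
    """Definition 3: the dual valuation swaps B and N and keeps T and F."""
    return {p: {w: SWAP.get(tv, tv) for w, tv in states.items()} for p, states in val.items()}

val = {"p": {"w0": (True, True), "w1": (True, False)}}   # p is B at w0 and T at w1
val_d = dual(val)

samples = [("var", "p"), ("neg", ("var", "p")), ("bsq", ("var", "p")), ("bsq", ("bsq", ("var", "p")))]
for phi in samples:
    for w in W:
        v, vd = value(w, val, phi), value(w, val_d, phi)
        assert vd == SWAP.get(v, v), (phi, w, v, vd)   # the four cases of Proposition 2
print("Proposition 2 verified on this model for the sample formulas.")
```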
## Semantical properties of $\blacksquare$[\[ssec:knowsemantics\]]{#ssec:knowsemantics label="ssec:knowsemantics"} Let us now show that $\blacksquare$ conforms to the intuitions outlined in Section [\[sec:preliminaries\]](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}. We begin with recalling the semantics of $\blacktriangle$ from [@KozhemiachenkoVashentseva2023]. **Definition 4** (Semantics of $\blacktriangle$). Let $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$ be a model as presented in Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"}. To make the presentation of the semantics for $\blacktriangle$ more concise we introduce the following conditions. $$\begin{aligned} \forall w_1,w_2\in R(w_0):(\mathfrak{M},w_1\!\Vdash^+\!\phi\!\Rightarrow\!\mathfrak{M},w_2\!\Vdash^+\!\phi)~\&~(\mathfrak{M},w_1\!\Vdash^-\!\phi\!\Rightarrow\!\mathfrak{M},w_2\Vdash^-\phi)\tag{$t_1\blacktriangle$}\label{t1conditionI}\\ \forall w_1\in R(w_0):\mathfrak{M},w_1\Vdash^+\phi\text{ or }\mathfrak{M},w_1\Vdash^-\phi\tag{$t_2\blacktriangle$}\label{t2conditionI}\\ \exists w_1,w_2\in R(w_0):\mathfrak{M},w_1\Vdash^+\phi~\&~\mathfrak{M},w_2\nVdash^+\phi\tag{$f_1\blacktriangle$}\label{f1conditionI}\\ \exists w_1,w_2\in R(w_0):\mathfrak{M},w_1\Vdash^-\phi~\&~\mathfrak{M},w_2\nVdash^-\phi\tag{$f_2\blacktriangle$}\label{f2conditionI}\\ \exists w_1,w_2\in R(w_0):\mathfrak{M},w_1\Vdash^+\phi~\&~\mathfrak{M},w_2\Vdash^-\phi\tag{$f_3\blacktriangle$}\label{fconditionS}\end{aligned}$$ Using these conditions, support of truth and support of falsity of $\blacktriangle$ is defined as follows. $$\begin{aligned} \mathfrak{M},w_0\Vdash^+\blacktriangle\phi&\text{ iff }\eqref{t1conditionI}\text{ and }\eqref{t2conditionI}&\mathfrak{M},w_0\Vdash^-\blacktriangle\phi&\text{ iff }\eqref{f1conditionI}\text{ or }\eqref{f2conditionI}\text{ or }\eqref{fconditionS}\end{aligned}$$ We also write $\blacktriangledown\phi$ as a shorthand for $\neg\blacktriangle\phi$. The next statement shows that $\blacksquare$ behaves in the desired way. Namely, it has the expected connection with the "knowledge whether" modality and, in addition, truthfulness, positive introspection, and negative introspection are valid on $\mathbf{S5}$ frames (i.e., frames $\langle W,R\rangle$ where $R$ is an equivalence relation). **Theorem 1**. * * 1. *$\blacktriangle p\dashv\vdash\blacksquare p\vee\blacksquare\neg p$ is valid on every frame.* 2. *$\blacksquare p\dashv\vdash p\wedge\blacktriangle p$ is valid on every *reflexive* frame.* 3. *Let $\mathfrak{F}=\langle W,R\rangle$ be an *$\mathbf{S5}$ frame*. Then $\blacksquare p\vdash p$, $\blacksquare p\vdash\blacksquare\blacksquare p$, and $\blacklozenge p\vdash\blacksquare\blacklozenge p$ are valid on $\mathfrak{F}$.* *Proof.* The proofs of 1. and 2. are immediate from Definitions [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"} and [Definition 4](#def:contingencysemantics){reference-type="ref" reference="def:contingencysemantics"}. Let us prove 3. Let $\mathfrak{F}=\langle W,R\rangle$ be an $\mathbf{S5}$ frame and $w\in W$. Now let $\mathfrak{M}$ be a model on $\mathfrak{F}$ s.t. $\mathfrak{M},w\Vdash^+\blacksquare p$. Thus, $\mathfrak{M},w'\Vdash^+p$ in every $w'\in R(w)$. But since $R$ is reflexive, we have that $\mathfrak{M},w\Vdash^+p$ and thus, $\blacksquare p\vdash p$ is valid. 
To prove the validity of $\blacksquare p\vdash\blacksquare\blacksquare p$, let again $\mathfrak{M},w\Vdash^+\blacksquare p$ for some model $\mathfrak{M}$ on $\mathfrak{F}$. We consider two cases. First, if $\mathfrak{M},w\nVdash^-\blacksquare p$, then $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\nVdash^-p$ in every $w'\in R(w)$. Thus, since $R$ is transitive, we have that $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\nVdash^-\blacksquare p$ in every $w'\in R(w)$. Second, let $\mathfrak{M},w\Vdash^-\blacksquare p$. Then, $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\Vdash^-p$ in every $w'\in R(w)$. Thus, since $R$ is reflexive[^13] and transitive, we have that $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\Vdash^-\blacksquare p$ in every $w'\in R(w)$. In the first case, we obtain that $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$ and $\mathfrak{M},w\nVdash^-\blacksquare\blacksquare p$. In the second case, we have that $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$ and $\mathfrak{M},w\Vdash^-\blacksquare\blacksquare p$. We can now conclude that $\blacksquare p\vdash\blacksquare\blacksquare p$ is valid. Finally, let $\mathfrak{M},w\Vdash^+\blacklozenge p$. We have four cases: 1. there is $w'\in R(w)$ s.t. $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\nVdash^-p$; 2. there are $w_1,w_2\in R(w)$ s.t. $\mathfrak{M},w_1\Vdash^-p$ and $\mathfrak{M},w_2\nVdash^-p$; 3. there are $w_1,w_2\in R(w)$ s.t. $\mathfrak{M},w_1\Vdash^+p$ and $\mathfrak{M},w_2\nVdash^+p$; 4. $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\Vdash^-p$ in every $w'\in R(w)$. One can see that $\mathfrak{M},w\nVdash^-\blacklozenge p$ in cases 1--3 and $\mathfrak{M},w\Vdash^-\blacklozenge p$ in case 4. Now, since $R$ is Euclidean, it is clear that $uRu'$ for every $u,u'\in R(w)$. Thus, in cases 1--3, we have that $\mathfrak{M},t\Vdash^+\blacklozenge p$ and $\mathfrak{M},t\nVdash^-\blacklozenge p$ for every $t\in R(w)$, and in case 4, $\mathfrak{M},t\Vdash^+\blacklozenge p$ and $\mathfrak{M},t\Vdash^-\blacklozenge p$ for every $t\in R(w)$. Hence, $\mathfrak{M},w\Vdash^+\blacksquare\blacklozenge p$, as required, and thus $\blacklozenge p\vdash\blacksquare\blacklozenge p$ is valid on $\mathfrak{F}$. ◻ *Remark 4*. The above statement shows that $\blacksquare$ fulfils the desiderata w.r.t. a knowledge modality. First, it has the expected connections with the "knowledge whether" ($\blacktriangle$). Second, truthfulness, positive introspection, and negative introspection are valid on $\mathbf{S5}$ frames. In addition, $\mathbf{S5}$ frames are definable using $\blacktriangle$ [@KozhemiachenkoVashentseva2023 Theorem 5.4] (and thus, using $\blacksquare$ as well). We will later see (cf. Section [\[sec:definability\]](#sec:definability){reference-type="ref" reference="sec:definability"}) that $\blacksquare$ can define several important doxastic classes of frames in a natural way and thus can act as a belief modality as well. *Remark 5*. Note, however, that $\blacksquare$ is not a standard modality in contrast to $\Box$. Indeed, while $\Box(p\wedge q)\vdash\Box p\wedge\Box q$ is valid on every frame, one can check that $\blacksquare(p\wedge q)\vdash\blacksquare p\wedge\blacksquare q$ is valid only on partial-functional frames (i.e., frames where $|R(w)|\leq1$ for every $w$). This is an expected consequence of its semantics since we demand that the Belnapian value of $\phi$ be the same in all accessible states for $\blacksquare\phi$ to be true or non-false.
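For a concrete illustration of this failure (the model below is our own example and is not depicted in the paper's figures), take $R(w_0)=\{w_1,w_2\}$ with the valuation $w_1:p^+,q^\pm$ and $w_2:p^\pm,q^+$ (in the notation of Table [2](#table:modelnotation){reference-type="ref" reference="table:modelnotation"}). Then $p\wedge q$ is both true and false at $w_1$ and at $w_2$, so $\mathfrak{M},w_0\Vdash^+\blacksquare(p\wedge q)$ (and also $\mathfrak{M},w_0\Vdash^-\blacksquare(p\wedge q)$). However, $p$ is non-false at $w_1$ but false at $w_2$, whence $\mathfrak{M},w_0\nVdash^+\blacksquare p$, and thus $\blacksquare(p\wedge q)\vdash\blacksquare p\wedge\blacksquare q$ fails at $\langle\mathfrak{M},w_0\rangle$. Note that $\blacksquare(p\wedge q)$ is indeed both true and false at $w_0$ here, in line with the following observation.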
Still, it is easy to see that $$\begin{aligned} \label{equ:exactlytrueknow} \mathfrak{M},w\Vdash^+\blacksquare(p\wedge q)\text{ and }\mathfrak{M},w\nVdash^-\blacksquare(p\wedge q)&\text{ iff }\mathfrak{M},w\Vdash^+\blacksquare p\wedge\blacksquare q\text{ and }\mathfrak{M},w\nVdash^-\blacksquare p\wedge\blacksquare q\end{aligned}$$ I.e., to refute $\blacksquare(p\wedge q)\vdash\blacksquare p\wedge\blacksquare q$, one needs to assume that $\blacksquare(p\wedge q)$ is *both true and false at $w$*. This is reasonable since the truth condition on $\blacksquare$ is stronger than that on $\Box$ (cf. Definitions [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"} and [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"}). Furthermore, observe that if $\blacksquare\phi$ is *both true and false at $w$*, it means that the agent believes (or knows) that $\phi$ is contradictory or paradoxical (since it has to be both true and false in all accessible states). But if $\phi=p\wedge q$, it makes sense to argue that the agent might not even have an opinion whether it is $p$, $q$, or both of them that have paradoxical value. In fact, one can maintain that in the presence of paradoxical truth-values, the belief is not compositional even w.r.t. conjunction [@Dubois2008]. ## Semantical properties of $\mathbf{I}$[\[ssec:ignosemantics\]]{#ssec:ignosemantics label="ssec:ignosemantics"} Let us now proceed to the Belnap--Dunn ignorance modality. To further motivate our semantics of $\mathbf{I}$, we show that the counterparts of the modal axioms and rules presented in [@GilbertKubyshkinaPetroloVenturi2022 Definition 1.12] are valid. These are as follows: $$\begin{aligned} (\mathbb{I}1)~\mathbb{I}p\rightarrow p&&(\mathbb{I}2)~(\mathbb{I}p\wedge\mathbb{I}q)\rightarrow\mathbb{I}(p\vee q)&&(\mathbb{I}R)~\dfrac{\phi\rightarrow\chi}{\phi\rightarrow(\mathbb{I}\chi\rightarrow\mathbb{I}\phi)} \label{equ:classicalignoaxioms}\end{aligned}$$ To produce their $\mathsf{BD}^\mathbf{I}$-counterparts, we use the fact that $\phi\rightarrow(\mathbb{I}\chi\rightarrow\mathbb{I}\phi)$ is equivalent to $(\phi\wedge\mathbb{I}\chi)\rightarrow\mathbb{I}\phi$ in classical logic and then replace $\mathbb{I}$ with $\mathbf{I}$ and $\rightarrow$ with $\vdash$. This gives us the following: $$\begin{aligned} (\mathbf{I}1)~\mathbf{I}p\vdash p&&(\mathbf{I}2)~\mathbf{I}p\wedge\mathbf{I}q\vdash\mathbf{I}(p\vee q)&&(\mathbf{I}R)~\dfrac{\phi\vdash\chi}{\phi\wedge\mathbf{I}\chi\vdash\mathbf{I}\phi} \label{equ:ignoaxioms}\end{aligned}$$ **Theorem 2**. *All sequents ($\mathbf{I}1$ and $\mathbf{I}2$) and the rule ($\mathbf{I}R$) in [\[equ:ignoaxioms\]](#equ:ignoaxioms){reference-type="eqref" reference="equ:ignoaxioms"} are valid on every frame.* *Proof.* The validity of $\mathbf{I}1$ is evident from Definition [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"}. Let us consider $\mathbf{I}2$. Assume that $\mathfrak{F}$ is a frame and $\langle\mathfrak{M},w\rangle$ is a pointed model on $\mathfrak{F}$ s.t. $\mathfrak{M},w\Vdash^+\mathbf{I}p\wedge\mathbf{I}q$. 
Then, we have that $\mathfrak{M},w\Vdash^+p$ and $\mathfrak{M},w\Vdash^+q$, that $\mathfrak{M},w'\Vdash^-p$ and $\mathfrak{M},w'\Vdash^-q$ in every $w'\in R^!(w)$, and, furthermore, that if $\mathfrak{M},u\Vdash^+p$ in some $u\in R^!(w)$, then $\mathfrak{M},u'\Vdash^+p$ in *every* $u'\in R^!(w)$ (and likewise, if $\mathfrak{M},t\Vdash^+q$ in some $t\in R^!(w)$, then $\mathfrak{M},t'\Vdash^+q$ in *every* $t'\in R^!(w)$). Thus, we have that $\mathfrak{M},w\Vdash^+p\vee q$, $\mathfrak{M},w'\Vdash^-p\vee q$ in every $w'\in R^!(w)$, and, in addition, if $\mathfrak{M},t\Vdash^+p\vee q$ in some $t\in R^!(w)$, then $\mathfrak{M},t'\Vdash^+p\vee q$ in *every* $t'\in R^!(w)$. Hence, $\mathfrak{M},w\Vdash^+\mathbf{I}(p\vee q)$, as required. To tackle $\mathbf{I}R$, we proceed by contraposition and assume that $\phi\wedge\mathbf{I}\chi\vdash\mathbf{I}\phi$ is not valid. Namely, that there is a pointed model $\langle\mathfrak{M},w\rangle$ s.t. $\mathfrak{M},w\Vdash^+\phi\wedge\mathbf{I}\chi$ but $\mathfrak{M},w\nVdash^+\mathbf{I}\phi$. We consider two cases: (1) $\mathfrak{M},w\nVdash^-\mathbf{I}\phi$, and (2) $\mathfrak{M},w\Vdash^-\mathbf{I}\phi$. If (1), we have that $\mathfrak{M},w'\nVdash^+\phi$ and $\mathfrak{M},w'\nVdash^-\phi$ in all $w'\in R^!(w)$ (since $\mathfrak{M},w\Vdash^+\phi$). On the other hand, $\mathfrak{M},w'\Vdash^-\chi$ in all $w'\in R^!(w)$. Applying Proposition [Proposition 2](#prop:dualvaluations){reference-type="ref" reference="prop:dualvaluations"}, we obtain that $\mathfrak{M}_\partial,w'\Vdash^+\phi$ but $\mathfrak{M}_\partial,w'\nVdash^+\chi$ in all $w'\in R^!(w)$, i.e., $\phi\vdash\chi$ is not valid. If (2), one of the following holds: 1. there is $w'\in R^!(w)$ s.t. $\mathfrak{M},w'\Vdash^+\phi$ and $\mathfrak{M},w'\nVdash^-\phi$; 2. there are $w',w''\in R^!(w)$ s.t. $\mathfrak{M},w'\Vdash^+\phi$ and $\mathfrak{M},w'\Vdash^-\phi$ but $\mathfrak{M},w''\nVdash^+\phi$ and $\mathfrak{M},w''\nVdash^-\phi$; 3. there are $w',w''\in R^!(w)$ s.t. $\mathfrak{M},w'\Vdash^+\phi$ and $\mathfrak{M},w'\Vdash^-\phi$ but $\mathfrak{M},w''\nVdash^+\phi$ and $\mathfrak{M},w''\Vdash^-\phi$; 4. there are $w',w''\in R^!(w)$ s.t. $\mathfrak{M},w'\nVdash^+\phi$ and $\mathfrak{M},w'\nVdash^-\phi$ but $\mathfrak{M},w''\nVdash^+\phi$ and $\mathfrak{M},w''\Vdash^-\phi$. It is also clear that either $\mathfrak{M},u\Vdash^+\chi$ and $\mathfrak{M},u\Vdash^-\chi$, or $\mathfrak{M},u\nVdash^+\chi$ and $\mathfrak{M},u\Vdash^-\chi$ in all $u\in R^!(w)$. Again, by an application of Proposition [Proposition 2](#prop:dualvaluations){reference-type="ref" reference="prop:dualvaluations"}, we have that $\mathfrak{M}_\partial,u\nVdash^+\chi$ in all $u\in R^!(w)$, which gives $\mathfrak{M}_\partial,w'\Vdash^+\phi$ but $\mathfrak{M}_\partial,w'\nVdash^+\chi$ (for the case $(2.a)$); and $\mathfrak{M}_\partial,w''\Vdash^+\phi$ but $\mathfrak{M}_\partial,w''\nVdash^+\chi$ (for $(2.b)$--$(2.d)$). The result follows. ◻ # Analytic cut system[\[sec:tableaux\]]{#sec:tableaux label="sec:tableaux"} When it comes to providing a calculus for $\mathsf{BD}$ or one of its relatives or expansions, there are usually two avenues. The first is to provide a Hilbert-style axiomatisation. This was done in [@Dunn1995] and [@Drobyshevich2020][^14] for $\mathsf{BD}^\Box$ ($\mathbf{K}_\mathbf{FDE}$). However, the completeness proofs of such systems can require the introduction of non-normal worlds (cf. [@Drobyshevich2020 §4] for a detailed discussion).
The other option is to construct a tableaux (or analytic cut[^15]) calculus. This was done, e.g., by Priest in [@Priest2008; @Priest2008FromIftoIs] for $\mathsf{BD}^\Box$. Similarly, in [@KozhemiachenkoVashentseva2023], we presented an analytic cut system for the expansion of $\mathsf{BD}$ with $\blacktriangle$. The soundness and completeness of tableaux and analytic cut systems are usually straightforward to establish. Moreover, they can be easily expanded to accommodate new connectives and operators which is not trivial when one deals with a Hilbert calculus. Thus, in this section, we provide a unified analytic cut calculus for $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$ that is built similarly to the calculus for the expansion of $\mathsf{BD}$ with $\blacktriangle$ from [@KozhemiachenkoVashentseva2023] and augments D'Agostino's analytic cut calculus $\mathbf{RE}_{\mathrm{fde}}$ [@DAgostino1990] with additional modal rules. We are using labelled formulas whose labels contain two parts: the value of the formula and the state where the formula has this value. **Definition 5** (Labelled formulas). We fix a countable set of state-labels $\mathsf{Lab}=\{w,w_0,w',\ldots\}$ and the set of value-labels $\mathsf{Val}=\{\mathfrak{t},\mathfrak{f},\overline{\mathfrak{t}},\overline{\mathfrak{f}}\}$. A *labelled formula* is a construction of the form $\mathsf{w}:\phi;\mathfrak{v}$ with $\phi\in\mathscr{L}_\mathbf{I}\cup\mathscr{L}_\blacksquare$, $\mathsf{w}\in\mathsf{Lab}$, and $\mathfrak{v}\in\mathsf{Val}$. The interpretations of labelled formulas are summarised in Table [3](#table:labelsintepretation){reference-type="ref" reference="table:labelsintepretation"}. **Labelled formula** **Interpretation** ---------------------------------- ------------------------------- $w:\phi;\mathfrak{t}$ $\mathfrak{M},w\Vdash^+\phi$ $w:\phi;\mathfrak{f}$ $\mathfrak{M},w\Vdash^-\phi$ $w:\phi;\overline{\mathfrak{t}}$ $\mathfrak{M},w\nVdash^+\phi$ $w:\phi;\overline{\mathfrak{f}}$ $\mathfrak{M},w\nVdash^-\phi$ : Interpretations of labelled formulas in $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ proofs. *Convention 3*. We set $$\begin{aligned} \mathfrak{t}^\neg&=\mathfrak{f}&\mathfrak{f}^\neg&=\mathfrak{t}& \overline{\mathfrak{t}}^\neg&=\overline{\mathfrak{f}}&\overline{\mathfrak{f}}^\neg&=\overline{\mathfrak{t}}\end{aligned}$$ **Definition 6** ($\mathcal{AC}_{\blacksquare,\mathbf{I}}$ --- analytic cut for $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$). We define an $\mathcal{AC}_{\blacksquare,\mathbf{I}}$-proof as a downward branching tree whose nodes are labelled with sets containing labelled formulas and constructions of the form $w\mathsf{R}w'$. Each branch can be extended by one of the rules from Fig. [\[fig:Tknowignorules\]](#fig:Tknowignorules){reference-type="ref" reference="fig:Tknowignorules"}. A branch $\mathcal{B}$ is *closed* iff $w_i\!:\!\phi;\mathfrak{v},w_i\!:\!\phi;\overline{\mathfrak{v}}\in\mathcal{B}$ for some $\phi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$, $w_i\in\mathsf{Lab}$, and $\mathfrak{v}\in\mathsf{Val}$. Otherwise, $\mathcal{B}$ is *open*. An open branch $\mathcal{B}$ is *complete* iff the following condition is met. - If all premises of a rule occur on the branch, then at least one conclusion of that rule occurs on the branch as well. A tree is closed iff every branch is closed. 
Finally, we say that *$\phi\vdash\chi$ is proved in $\mathcal{AC}_{\blacksquare,\mathbf{I}}$* iff there is a closed tree whose root is $\{w\!:\!\phi;\mathfrak{t},~w\!:\!\chi;\overline{\mathfrak{t}}\}$. $$\begin{aligned} \mathfrak{v}\overline{\mathfrak{v}}:\dfrac{}{w:\phi;\mathfrak{v}\mid w:\phi;\overline{\mathfrak{v}}}~\left(\parbox{15em}{$\phi$ is a~subformula of a~formula occurring on the branch; $w$ occurs on the branch}\right)\end{aligned}$$ $$\begin{aligned} \neg_\mathfrak{v}:\dfrac{w\!:\!\neg\phi;\mathfrak{v}}{w\!:\!\phi;\mathfrak{v}^\neg}&&\wedge_\mathfrak{t}:\dfrac{w\!:\!\phi\wedge\chi;\mathfrak{t}}{\begin{matrix}w\!:\!\phi;\mathfrak{t}\\w\!:\!\chi;\mathfrak{t}\end{matrix}}&&\vee_\mathfrak{t}:\dfrac{\begin{matrix}w:\phi_1\vee\phi_2;\mathfrak{t}\\w:\phi_i;\overline{\mathfrak{t}}\end{matrix}}{w:\phi_j;\mathfrak{t}}&&\wedge_{\overline{\mathfrak{f}}}:\dfrac{w\!:\!\phi\wedge\chi;\overline{\mathfrak{f}}}{\begin{matrix}w\!:\!\phi;\overline{\mathfrak{f}}\\w\!:\!\chi;\overline{\mathfrak{f}}\end{matrix}}&&\vee_{\overline{\mathfrak{f}}}:\dfrac{\begin{matrix}w:\phi_1\vee\phi_2;\overline{\mathfrak{f}}\\w:\phi_i;\mathfrak{f}\end{matrix}}{w:\phi_j;\overline{\mathfrak{f}}}\\ &&\wedge_\mathfrak{f}:\dfrac{\begin{matrix}w:\phi_1\wedge\phi_2;\mathfrak{f}\\w:\phi_i;\overline{\mathfrak{f}}\end{matrix}}{w:\phi_j;\mathfrak{f}}&&\vee_\mathfrak{f}:\dfrac{w\!:\!\phi\vee\chi;\mathfrak{f}}{\begin{matrix}w\!:\!\phi;\mathfrak{f}\\w\!:\!\chi;\mathfrak{f}\end{matrix}}&&\wedge_{\overline{\mathfrak{t}}}:\dfrac{\begin{matrix}w:\phi_1\wedge\phi_2;\overline{\mathfrak{t}}\\w:\phi_i;\mathfrak{t}\end{matrix}}{w:\phi_j;\overline{\mathfrak{t}}}&&\vee_{\overline{\mathfrak{t}}}:\dfrac{w\!:\!\phi\vee\chi;\overline{\mathfrak{t}}}{\begin{matrix}w\!:\!\phi;\overline{\mathfrak{t}}\\w\!:\!\chi;\overline{\mathfrak{t}}\end{matrix}}\end{aligned}$$ $$\begin{aligned} \blacksquare_\mathfrak{t}:\dfrac{\begin{matrix}w\!:\!\blacksquare\phi;\mathfrak{t}\\w\mathsf{R}w'\end{matrix}}{w'\!:\!\phi;\mathfrak{t}}&&\blacksquare^-_\mathfrak{t}:\dfrac{\begin{matrix}w\!:\!\blacksquare\phi;\mathfrak{t}&w'\!:\!\phi;\mathfrak{f}\\w\mathsf{R}w'&w\mathsf{R}w''\end{matrix}}{w''\!:\!\phi;\mathfrak{f}}&&\blacksquare_\mathfrak{f}:\dfrac{w\!:\!\blacksquare\phi;\mathfrak{f}}{\left.\begin{matrix}w\mathsf{R}u\\u\!:\!\phi;\mathfrak{f}\end{matrix}\right|\begin{matrix}w\mathsf{R}u&w\mathsf{R}u'\\u\!:\!\phi;\mathfrak{t}&u'\!:\!\phi;\overline{\mathfrak{t}}\end{matrix}}\\ \blacksquare_{\overline{\mathfrak{f}}}:\dfrac{\begin{matrix}w\!:\!\blacksquare\phi;\overline{\mathfrak{f}}\\w\mathsf{R}w'\end{matrix}}{w'\!:\!\phi;\overline{\mathfrak{f}}}&&\blacksquare^+_{\overline{\mathfrak{f}}}:\dfrac{\begin{matrix}w\!:\!\blacksquare\phi;\overline{\mathfrak{f}}&w'\!:\!\phi;\mathfrak{t}\\w\mathsf{R}w'&w\mathsf{R}w''\end{matrix}}{w''\!:\!\phi;\mathfrak{t}}&&\blacksquare_{\overline{\mathfrak{t}}}:\dfrac{w\!:\!\blacksquare\phi;\overline{\mathfrak{t}}}{\left.\begin{matrix}w\mathsf{R}u\\u\!:\!\phi;\overline{\mathfrak{t}}\end{matrix}\right|\begin{matrix}w\mathsf{R}u&w\mathsf{R}u'\\u\!:\!\phi;\mathfrak{f}&u'\!:\!\phi;\overline{\mathfrak{f}}\end{matrix}}\end{aligned}$$ $$\begin{aligned}
\mathbf{I}_\mathfrak{t}:\dfrac{w\!:\!\mathbf{I}\phi;\mathfrak{t}}{w\!:\!\phi;\mathfrak{t}}&&\mathbf{I}^\mathsf{R}_\mathfrak{t}:\dfrac{\begin{matrix}w\!:\!\mathbf{I}\phi;\mathfrak{t}\\w\mathsf{R}s\end{matrix}}{s\!:\!\phi;\mathfrak{f}}&&\mathbf{I}^+_\mathfrak{t}:\dfrac{\begin{matrix}w\!:\!\mathbf{I}\phi;\mathfrak{t}&s\!:\!\phi;\mathfrak{t}\\w\mathsf{R}s&w\mathsf{R}s'\end{matrix}}{s'\!:\!\phi;\mathfrak{t}}&&\mathbf{I}_\mathfrak{f}:\dfrac{w\!:\!\mathbf{I}\phi;\mathfrak{f}}{\left.\begin{matrix}w\mathsf{R}u\\u\!:\!\phi;\mathfrak{t}\end{matrix}\right|\left.\begin{matrix}w\mathsf{R}u&w\mathsf{R}u'\\u\!:\!\phi;\mathfrak{f}&u'\!:\!\phi;\overline{\mathfrak{f}}\end{matrix}\right|w\!:\!\phi;\mathfrak{f}}\\ \mathbf{I}_{\overline{\mathfrak{f}}}:\dfrac{w\!:\!\mathbf{I}\phi;\overline{\mathfrak{f}}}{w\!:\!\phi;\overline{\mathfrak{f}}}&&\mathbf{I}^\mathsf{R}_{\overline{\mathfrak{f}}}:\dfrac{\begin{matrix}w\!:\!\mathbf{I}\phi;\overline{\mathfrak{f}}\\w\mathsf{R}s\end{matrix}}{s\!:\!\phi;\overline{\mathfrak{t}}}&&\mathbf{I}^+_{\overline{\mathfrak{f}}}:\dfrac{\begin{matrix}w\!:\!\mathbf{I}\phi;\overline{\mathfrak{f}}&s\!:\!\phi;\overline{\mathfrak{f}}\\w\mathsf{R}s&w\mathsf{R}s'\end{matrix}}{s'\!:\!\phi;\overline{\mathfrak{f}}}&&\mathbf{I}_{\overline{\mathfrak{t}}}:\dfrac{w\!:\!\mathbf{I}\phi;\overline{\mathfrak{t}}}{\left.\begin{matrix}w\mathsf{R}u\\u\!:\!\phi;\overline{\mathfrak{f}}\end{matrix}\right|\left.\begin{matrix}w\mathsf{R}u&w\mathsf{R}u'\\u\!:\!\phi;\mathfrak{t}&u'\!:\!\phi;\overline{\mathfrak{t}}\end{matrix}\right|w\!:\!\phi;\overline{\mathfrak{t}}}\end{aligned}$$ Before proceeding to the proof of soundness and completeness of $\mathcal{AC}_{\blacksquare,\mathbf{I}}$, let us consider two examples of tableaux proofs (Fig. [\[fig:failedproof\]](#fig:failedproof){reference-type="ref" reference="fig:failedproof"}). Namely, we prove $\mathbf{I}p\wedge\mathbf{I}q\vdash\mathbf{I}(p\vee q)$ (recall [\[equ:ignoaxioms\]](#equ:ignoaxioms){reference-type="eqref" reference="equ:ignoaxioms"}) and disprove $\blacksquare(p\wedge q)\vdash\blacksquare p$ (cf. Remark [Remark 5](#rem:knownonstandard){reference-type="ref" reference="rem:knownonstandard"}). For the sake of brevity, we do not apply the $\mathfrak{v}\overline{\mathfrak{v}}$ rule at $w_0$ as it is clear that these applications will not close the open branch. Note that the step **highlighted in boldface** is obtained from $w_0:\blacksquare(p\wedge q);\mathfrak{t}$, $w_1:p\wedge q;\mathfrak{f}$, $w_0\mathsf{R}w_1$, and $w_0\mathsf{R}w_2$ by the $\blacksquare^-_\mathfrak{t}$ rule. To extract the counter-model from a complete open branch $\mathcal{B}$, we set $W=\{w:w\text{ occurs on }\mathcal{B}\}$, set $wRw'$ iff $w\mathsf{R}w'\in\mathcal{B}$, and define valuations $v^+$ and $v^-$ according to Table [3](#table:labelsintepretation){reference-type="ref" reference="table:labelsintepretation"}.
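The extraction step just described is entirely mechanical. The sketch below is our own illustration (not part of the calculus): it reads a set of branch entries, mirroring the labelled formulas and relational records of Definition [Definition 6](#def:tableaux){reference-type="ref" reference="def:tableaux"} only informally, and returns the corresponding model data. The sample branch at the bottom is hypothetical and serves only to show the interface.

```python
# A sketch of counter-model extraction from a complete open branch (our own
# illustration, not the paper's implementation).

def extract_model(branch):
    """branch: entries are either ('R', w, w2) or (w, phi, val) with val in
    {'t', 'f', 'tbar', 'fbar'}; phi is a string when it is a propositional variable."""
    W, R = set(), set()
    v_plus, v_minus = {}, {}
    for entry in branch:
        if entry[0] == "R":                      # a record w R w2 on the branch
            _, w, w2 = entry
            W.update({w, w2})
            R.add((w, w2))
        else:                                    # a labelled formula w : phi ; val
            w, phi, val = entry
            W.add(w)
            if isinstance(phi, str):             # only variables feed the valuations
                if val == "t":
                    v_plus.setdefault(phi, set()).add(w)
                elif val == "f":
                    v_minus.setdefault(phi, set()).add(w)
    return W, R, v_plus, v_minus

# A hypothetical open branch of the kind produced when refuting the sequent of
# Remark 5: p is exactly true at w1 and both true and false at w2.
branch = {("R", "w0", "w1"), ("R", "w0", "w2"),
          ("w1", "p", "t"), ("w1", "q", "t"), ("w1", "q", "f"),
          ("w2", "p", "t"), ("w2", "p", "f"), ("w2", "q", "t")}
print(extract_model(branch))
```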
[Fig. [\[fig:failedproof\]](#fig:failedproof){reference-type="ref" reference="fig:failedproof"}: two example trees, namely a closed tableau proving $\mathbf{I}p\wedge\mathbf{I}q\vdash\mathbf{I}(p\vee q)$ and a tableau with a complete open branch refuting $\blacksquare(p\wedge q)\vdash\blacksquare p$; the trees themselves are not reproduced here.] We are now ready to state and prove that $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ is sound and complete w.r.t. the semantics in Definition [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"}. Our proof is a straightforward adaptation of [@KozhemiachenkoVashentseva2023 Theorems 3.7 and 3.8]. **Definition 7** (Branch realisation). We say that $\mathfrak{M}=\langle W,R,v^+,v^-\rangle$ with $W=\{w:w\text{ occurs on }\mathcal{B}\}$, $R=\{\langle w_i,w_j\rangle:w_i\mathsf{R}w_j\in\mathcal{B}\}$, and $w\in v^+(p)$ ($w\in v^-(p)$) iff $w\!:\!p;\mathfrak{t}\in\mathcal{B}$ ($w\!:\!p;\mathfrak{f}\in\mathcal{B}$) realises a branch $\mathcal{B}$ of a tree iff the following conditions are met. 1. If $w\!:\!\phi;\mathfrak{t}\in\mathcal{B}$ ($w\!:\!\phi;\mathfrak{f}\in\mathcal{B}$), then $\mathfrak{M},w\Vdash^+\phi$ ($\mathfrak{M},w\Vdash^-\phi$, respectively). 2. If $w\!:\!\phi;\overline{\mathfrak{t}}\in\mathcal{B}$ ($w\!:\!\phi;\overline{\mathfrak{f}}\in\mathcal{B}$), then $\mathfrak{M},w\nVdash^+\phi$ ($\mathfrak{M},w\nVdash^-\phi$, respectively). **Theorem 3** (Soundness and completeness of $\mathcal{AC}_{\blacksquare,\mathbf{I}}$). *For every $\phi\vdash\chi$ s.t. $\phi,\chi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$, it holds that $\phi\vdash\chi$ is valid iff it has an $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ proof.* *Proof.* The proof is in the appendix (Section [\[sec:completenessproof\]](#sec:completenessproof){reference-type="ref" reference="sec:completenessproof"}). ◻ # Expressivity[\[sec:expressivity\]]{#sec:expressivity label="sec:expressivity"} In Section [\[sec:preliminaries\]](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}, we argued that the $\Box$ modality as it is often defined in $\mathsf{BD}$ (recall Definition [Definition 1](#def:KBD){reference-type="ref" reference="def:KBD"}) is not well-suited for the formalisation of belief, knowledge, or ignorance in the Belnap--Dunn logic. In this section, we show that $\mathscr{L}_\Box$ formulas cannot, in fact, formalise our interpretation of $\blacksquare$ and $\mathbf{I}$ because neither of these modalities can be defined via $\Box$. In addition, we also show that neither $\blacksquare$ nor $\mathbf{I}$ can define $\Box$ and that $\mathbf{I}$ and $\blacksquare$ are not interdefinable either. This last property of $\mathbf{I}$ and $\blacksquare$ corresponds to a desideratum in [@KubyshkinaPetrolo2021] stating that knowledge and ignorance should be independent notions. **Definition 8**. Let $\mathcal{L}_1$ and $\mathcal{L}_2$ be two languages and let $\mathbb{K}$ be a class of frames.
We say that $\phi\in\mathcal{L}_1$ *defines* $\chi\in\mathcal{L}_2$ *in $\mathbb{K}$* iff for any $\mathfrak{F}\in\mathbb{K}$ and for any pointed model $\langle\mathfrak{M},w\rangle$ on $\mathfrak{F}$, it holds that $$\begin{aligned} \mathfrak{M},w\Vdash^+\phi&\text{ iff }\mathfrak{M},w\Vdash^+\chi& \mathfrak{M},w\Vdash^-\phi&\text{ iff }\mathfrak{M},w\Vdash^-\chi\end{aligned}$$ **Theorem 4**. * * 1. *No $\mathscr{L}_\Box$ formula can define $\blacksquare p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* 2. *No $\mathscr{L}_\Box$ formula can define $\mathbf{I}p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* *Proof.* Since $\blacksquare$ can define $\blacktriangle$ (Theorem [Theorem 1](#theorem:knowisgood){reference-type="ref" reference="theorem:knowisgood"}) and since $\Box$ cannot define $\blacktriangle$ on $\mathbf{S5}$ frames as shown in [@KozhemiachenkoVashentseva2023 Theorem 4.3], the first part follows immediately. Let us now prove the second part. For this, we borrow the approach from [@KubyshkinaPetrolo2021 §3] and consider two models in Fig. [\[fig:ignoundefinable\]](#fig:ignoundefinable){reference-type="ref" reference="fig:ignoundefinable"}. It is clear that the accessibility relations in these models are, in fact, equivalence relations and that $\mathfrak{M},w_0\Vdash^+\mathbf{I}p$ but $\mathfrak{M}',w'_0\nVdash^+\mathbf{I}p$. However, we can show by induction that - for every $\phi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M},w_0\Vdash^+\phi$, it holds that $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_1\Vdash^+\phi$, and - for every $\phi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M},w_0\Vdash^-\phi$, it holds that $\mathfrak{M}',w'_0\Vdash^-\phi$ and $\mathfrak{M}',w'_1\Vdash^-\phi$. The basis case of variables holds by the construction of $\mathfrak{M}$ and $\mathfrak{M}'$. The cases of propositional connectives can be established by the induction hypothesis. We consider $\phi=\Box\chi$ and let $\mathfrak{M},w_0\Vdash^+\Box\chi$. Then, $\mathfrak{M},w_0\Vdash^+\chi$. By the induction hypothesis, we have $\mathfrak{M}',w'_0\Vdash^+\chi$ and $\mathfrak{M}',w'_1\Vdash^+\chi$, whence, $\mathfrak{M}',w'_0\Vdash^+\Box\chi$, as required. Now let $\mathfrak{M},w_0\Vdash^-\Box\chi$. Hence, $\mathfrak{M},w_0\Vdash^-\chi$. By the induction hypothesis, we have $\mathfrak{M}',w'_0\Vdash^-\chi$ and $\mathfrak{M}',w'_1\Vdash^-\chi$, whence, $\mathfrak{M}',w'_0\Vdash^-\Box\chi$, as required. The result follows. ◻ **Theorem 5**. *There is no formula $\phi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$ that defines $\Box p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* *Proof.* We consider Fig. [\[fig:Boxundefinable\]](#fig:Boxundefinable){reference-type="ref" reference="fig:Boxundefinable"}. It is clear that its accessibility relation is an equivalence relation and that $\mathfrak{M},w_0\Vdash^+\Box p$ and $\mathfrak{M},w_0\Vdash^-\Box p$ as well as $\mathfrak{M},w_2\Vdash^+\Box p$ and $\mathfrak{M},w_2\Vdash^-\Box p$. We show that there is no $\phi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$ s.t.  1. $\mathfrak{M},w_0\Vdash^+\phi$ and $\mathfrak{M},w_0\Vdash^-\phi$, and 2. $\mathfrak{M},w_2\Vdash^+\phi$ and $\mathfrak{M},w_2\Vdash^-\phi$. We proceed by induction on $\phi$ and reason towards a contradiction.
If $\phi=p$, it is clear that $\mathfrak{M},w_0\nVdash^-p$ and $\mathfrak{M},w_2\nVdash^-p$. If $\phi=\neg\chi$, assume that $\mathfrak{M},w_0\Vdash^+\neg\chi$ and $\mathfrak{M},w_0\Vdash^-\neg\chi$. Then, $\mathfrak{M},w_0\Vdash^+\chi$ and $\mathfrak{M},w_0\Vdash^-\chi$. A contradiction. If $\phi=\chi\wedge\psi$, again, assume that $\mathfrak{M},w_0\Vdash^+\chi\wedge\psi$ and $\mathfrak{M},w_0\Vdash^-\chi\wedge\psi$. Then, $\mathfrak{M},w_0\Vdash^+\chi$, $\mathfrak{M},w_0\Vdash^+\psi$ and either $\mathfrak{M},w_0\Vdash^-\chi$ or $\mathfrak{M},w_0\Vdash^-\psi$. In both cases, we have a contradiction. The case of $\phi=\chi\vee\psi$ can be proven in the same fashion. Let $\phi=\blacksquare\chi$. If $\mathfrak{M},w_0\Vdash^+\blacksquare\chi$ and $\mathfrak{M},w_0\Vdash^-\blacksquare\chi$ as well as $\mathfrak{M},w_2\Vdash^+\blacksquare\chi$ and $\mathfrak{M},w_2\Vdash^-\blacksquare\chi$, then $\mathfrak{M},w_0\Vdash^+\chi$ and $\mathfrak{M},w_0\Vdash^-\chi$ (and $\mathfrak{M},w_2\Vdash^+\chi$ and $\mathfrak{M},w_2\Vdash^-\chi$, as well). But by the inductive hypothesis, there cannot be $\chi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$ s.t. $\mathfrak{M},w_0\Vdash^+\chi$, $\mathfrak{M},w_0\Vdash^-\chi$, $\mathfrak{M},w_2\Vdash^+\chi$, and $\mathfrak{M},w_2\Vdash^-\chi$. Again, a contradiction. Finally, consider $\phi=\mathbf{I}\chi$ and assume that $\mathfrak{M},w_0\Vdash^+\mathbf{I}\chi$ and $\mathfrak{M},w_0\Vdash^-\mathbf{I}\chi$, $\mathfrak{M},w_2\Vdash^+\mathbf{I}\chi$, and $\mathfrak{M},w_2\Vdash^-\mathbf{I}\chi$. Then, we must have that $\mathfrak{M},u\Vdash^+\chi$ and $\mathfrak{M},u\Vdash^-\chi$ for all $u\in R^!(w_0)$ and also $\mathfrak{M},u'\Vdash^+\chi$ and $\mathfrak{M},u'\Vdash^-\chi$ for all $u'\in R^!(w_2)$. But since $w_0\in R^!(w_2)$ and $w_2\in R^!(w_0)$, we again have a contradiction. The result follows. ◻ **Theorem 6**. * * 1. *No $\mathscr{L}_\blacksquare$ formula can define $\mathbf{I}p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* 2. *No $\mathscr{L}_\mathbf{I}$ formula can define $\blacksquare p$ on the classes of all frames, all transitive frames, all symmetric frames, and all Euclidean frames.* *Proof.* The first part can be proven in the same manner as *the second part* of Theorem [Theorem 4](#theorem:ignoknowundefinable){reference-type="ref" reference="theorem:ignoknowundefinable"}: we use the models in Fig. [\[fig:ignoundefinable\]](#fig:ignoundefinable){reference-type="ref" reference="fig:ignoundefinable"} and show that - for every $\phi\in\mathscr{L}_\blacksquare$ s.t. $\mathfrak{M},w_0\Vdash^+\phi$, it holds that $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_1\Vdash^+\phi$, and - for every $\phi\in\mathscr{L}_\blacksquare$ s.t. $\mathfrak{M},w_0\Vdash^-\phi$, it holds that $\mathfrak{M}',w'_0\Vdash^-\phi$ and $\mathfrak{M}',w'_1\Vdash^-\phi$. For the second part, we borrow the approach from [@GilbertKubyshkinaPetroloVenturi2022 Observation 1.25]. Namely, consider Fig. [\[fig:ignovsknow\]](#fig:ignovsknow){reference-type="ref" reference="fig:ignovsknow"}. It is easy to see that $\mathfrak{M},w_0\Vdash^-\blacksquare p$ but $\mathfrak{M}',w'_0\nVdash^-\blacksquare p$. On the other hand, one can check by induction on $\phi\in\mathscr{L}_\mathbf{I}$ that $\mathfrak{M},w_0\Vdash^+\phi$ iff $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M},w_0\Vdash^-\phi$ iff $\mathfrak{M}',w'_0\Vdash^-\phi$. ◻ *Remark 6*.
Note from the proof of Theorem [Theorem 5](#theorem:Boxundefinable){reference-type="ref" reference="theorem:Boxundefinable"} that not only can $\blacksquare$ and $\mathbf{I}$ *by themselves* not define $\Box$, but even *the language that combines both $\blacksquare$ and $\mathbf{I}$* cannot define $\Box$. We finish the section with a brief discussion of the unknown truth operator $\bullet$. Recall from Definition [Definition 2](#def:ignoknowsemantics){reference-type="ref" reference="def:ignoknowsemantics"} that $\bullet\phi$ is defined as $\phi\wedge\neg\blacksquare\phi$. The next statement shows that it cannot be defined using $\Box$. **Theorem 7**. *There is no formula $\phi\in\mathscr{L}_\Box$ that defines $\bullet p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* *Proof.* The proof is essentially the same as that of [@KozhemiachenkoVashentseva2023 Theorem 4.3] and is put in the appendix (Section [\[sec:accidenceproof\]](#sec:accidenceproof){reference-type="ref" reference="sec:accidenceproof"}). ◻ # Frame definability[\[sec:definability\]]{#sec:definability label="sec:definability"} We have previously argued (Section [\[sec:preliminaries\]](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} and Theorem [Theorem 1](#theorem:knowisgood){reference-type="ref" reference="theorem:knowisgood"}) that the $\blacksquare$ modality is well-suited to formalise knowledge and belief in the Belnap--Dunn framework: it entails "knowledge whether" and "opinionatedness" (in contrast to $\Box$); three standard axioms of epistemic modalities (truthfulness, positive introspection, and negative introspection) are valid on $\mathbf{S5}$ frames; $\mathbf{S5}$ frames are definable in $\mathscr{L}_\blacksquare$. In this section, we further investigate which important classes of frames are definable in $\mathscr{L}_\blacksquare$. Recall the notion of frame definability. **Definition 9** (Frame definability). Let $\Sigma$ be a set of sequents of the form $\phi\vdash\chi$ and $\mathbb{F}$ be a class of frames. We say that $\Sigma$ *defines* $\mathbb{F}$ iff for any frame $\mathfrak{F}$, $\mathfrak{F}\in\mathbb{F}$ iff all sequents from $\Sigma$ are valid on $\mathfrak{F}$. A class of frames is definable in $\mathscr{L}_\blacksquare$ iff there is a set of sequents where $\phi,\chi\in\mathscr{L}_\blacksquare$ that defines it. Let $\mathfrak{F}=\langle W,R\rangle$ be a frame. We will use the notation given in Table [4](#table:classesofframes){reference-type="ref" reference="table:classesofframes"} to designate different classes of frames. **notation** **class of frames** --------------- -------------------------------------------------------------------------- $\mathbf{D}$ $R$ is serial: $\forall x\exists y:R(x,y)$ $\mathbf{dn}$ $R$ is dense: $\forall x,y:R(x,y)\Rightarrow\exists z(R(x,z)~\&~R(z,y))$ $\mathbf{T}$ $R$ is reflexive $\mathbf{5}$ $R$ is Euclidean: $\forall x,y,z:R(x,y)~\&~R(x,z)\Rightarrow R(y,z)$ : Shorthands for the classes of frames. In the remainder of this section, we are going to show that all classes of frames given in Table [4](#table:classesofframes){reference-type="ref" reference="table:classesofframes"} are definable. Note that the definability of $\mathbf{T}$- and $\mathbf{S5}$-frames can be obtained indirectly since they are definable using $\blacktriangle$ (which, in its turn, is definable via $\blacksquare$).
However, we are going to show that the $\mathscr{L}_\blacksquare$-definitions of all these classes of frames are identical to their usual definitions up to the replacement of $\Box$ with $\blacksquare$, $\lozenge$ with $\blacklozenge$, and the classical implication with $\vdash$. **Theorem 8**. *All classes of frames given in Table [4](#table:classesofframes){reference-type="ref" reference="table:classesofframes"} are definable in $\mathscr{L}_\blacksquare$.* *Proof.*   \ We show that $\mathfrak{F}\in\mathbf{D}$ iff $\mathfrak{F}\models[\blacksquare p\vdash\blacklozenge p]$. Let $\mathfrak{F}$ be serial, and $\mathfrak{M}$ be a model on $\mathfrak{F}$ s.t. $\mathfrak{M},w\Vdash^+\blacksquare p$. We show that $\mathfrak{M},w\Vdash^+\blacklozenge p$. Since $\mathfrak{F}$ is serial, we have that there is some $w'\in R(w)$. Moreover, in every $w''\in R(w)$, it holds that $\mathfrak{M},w''\Vdash^+p$. Thus, $\mathfrak{M},w\Vdash^+\blacklozenge p$, as required. For the converse, let $\mathfrak{F}\notin\mathbf{D}$ and $w\in\mathfrak{F}$ be s.t. $R(w)=\varnothing$. It is clear that for every model $\mathfrak{M}$ on $\mathfrak{F}$, we have $\mathfrak{M},w\Vdash^+\blacksquare p$ but $\mathfrak{M},w\nVdash^+\blacklozenge p$. \ We show that $\mathfrak{F}\in\mathbf{T}$ iff $\mathfrak{F}\models[\blacksquare p\vdash p]$. Let $\mathfrak{F}$ be reflexive and $\mathfrak{M}$ be a model on $\mathfrak{F}$ s.t. $\mathfrak{M},w\Vdash^+\blacksquare p$. Since $w\in R(w)$, it is clear that $\mathfrak{M},w\Vdash^+p$, as required. For the converse, let $\mathfrak{F}\notin\mathbf{T}$ and $w\in\mathfrak{F}$ be s.t. $w\notin R(w)$. Now, for every $w'\in R(w)$, we set $w'\in v^+(p)$ and $w'\notin v^-(p)$. For $w$, we define $w\notin v^+(p)$ and $w\notin v^-(p)$. It is clear that $\mathfrak{M},w\Vdash^+\blacksquare p$ but $\mathfrak{M},w\nVdash^+p$. \ We show that $\mathfrak{F}\in\mathbf{dn}$ iff $\mathfrak{F}\models[\blacklozenge p\vdash\blacklozenge\blacklozenge p]$. Let $\mathfrak{F}$ be dense and $\mathfrak{M},w\Vdash^+\blacklozenge p$ for some model $\mathfrak{M}$ on $\mathfrak{F}$. We have two cases: 1. there is $w'\in R(w)$ s.t. $\mathfrak{M},w'\Vdash^+p$, or 2. there are $u,u'\in R(w)$ s.t. $\mathfrak{M},u\Vdash^-p$ but $\mathfrak{M},u'\nVdash^-p$. In the first case, there is a state $w''$ s.t. $w''Rw'$ and $wRw''$. Thus, $\mathfrak{M},w''\!\Vdash^+\!\blacklozenge p$, whence $\mathfrak{M},w\Vdash^+\blacklozenge\blacklozenge p$, as required. In the second case, there are $t$ and $t'$ s.t. $wRtRu$ and $wRt'Ru'$, whence $\mathfrak{M},t'\nVdash^-\blacklozenge p$. It now suffices to show that $\mathfrak{M},t\Vdash^+\blacklozenge p$ or $\mathfrak{M},t\Vdash^-\blacklozenge p$. Assume for contradiction that $\mathfrak{M},t\nVdash^+\blacklozenge p$ and $\mathfrak{M},t\nVdash^-\blacklozenge p$. Then $\mathfrak{M},s\nVdash^+p$ and $\mathfrak{M},s\nVdash^-p$ in all $s\in R(t)$ and $\mathfrak{M},u\nVdash^-p$, in particular. A contradiction. Now, we have that $\mathfrak{M},t\Vdash^+\blacklozenge p$ or $\mathfrak{M},t\Vdash^-\blacklozenge p$ from where (since $\mathfrak{M},t'\nVdash^-\blacklozenge p$) we obtain that $\mathfrak{M},w\Vdash^+\blacklozenge\blacklozenge p$, as required. For the converse, let $\mathfrak{F}$ not be dense and $w,w'\in\mathfrak{F}$ be s.t. $wRw'$ but there is no $u$ with $wRu$ and $uRw'$. We set the valuations as follows: $w'\in v^+(p)$ and $w'\notin v^-(p)$; $t\notin v^+(p)$ and $t\in v^-(p)$ for all $t\neq w'$.
It is clear that $\mathfrak{M},w\Vdash^+\blacklozenge p$ but $\mathfrak{M},s\nVdash^+\blacklozenge p$ and $\mathfrak{M},s\Vdash^-\blacklozenge p$ for all $s\in R(w)$. Thus, $\mathfrak{M},w\nVdash^+\blacklozenge\blacklozenge p$, as required. \ Observe from the proof of Theorem [Theorem 1](#theorem:knowisgood){reference-type="ref" reference="theorem:knowisgood"} that if $\mathfrak{F}$ is Euclidean, then $\mathfrak{F}\models[\blacklozenge p\vdash\blacksquare\blacklozenge p]$. We show the converse direction. Assume that $\mathfrak{F}\notin\mathbf{5}$ and that $w_0Rw_1$, $w_0Rw_2$, but $w_2\notin R(w_1)$. We set the valuation as follows: $w'\notin v^+(p)$ and $w'\in v^-(p)$ for all $w'\in R(w_1)$; $w\in v^+(p)$ and $w\notin v^-(p)$ for all other $w$'s. It is clear that $\mathfrak{M},w_0\Vdash^+\blacklozenge p$ (since $w_2\notin R(w_1)$), but $\mathfrak{M},w_1\nVdash^+\blacklozenge p$ and $\mathfrak{M},w_1\Vdash^-\blacklozenge p$. Thus, $\mathfrak{M},w_0\nVdash^+\blacksquare\blacklozenge p$, as required. ◻ From the above theorem, it follows immediately that $\mathbf{D5}$ (serial and Euclidean), $\mathbf{S5}$ (reflexive and Euclidean), and $\mathbf{Ddn}$ (serial and dense) frames are definable. Note, however, that the standard definition of transitive frames --- $\blacksquare p\vdash\blacksquare\blacksquare p$ --- does not define them (although it is still open whether transitive frames are definable in $\mathscr{L}_\blacksquare$). Indeed, consider Fig. [\[fig:4counterexample\]](#fig:4counterexample){reference-type="ref" reference="fig:4counterexample"}. The expected definition fails because, on one hand, $\blacksquare\phi$ is exactly true at $w$ when $R(w)=\varnothing$, and, on the other hand, for $\blacksquare\phi$ to be true at a given state, $\phi$ has to have the same Belnapian value in all accessible states. These two conditions can be at odds in *non-serial* transitive models as Fig. [\[fig:4counterexample\]](#fig:4counterexample){reference-type="ref" reference="fig:4counterexample"} shows. Still, some *classes of transitive frames* are definable in the expected manner. **notation** **class of frames** --------------- --------------------------------- $\mathbf{45}$ $R$ is transitive and Euclidean $\mathbf{D4}$ $R$ is serial and transitive $\mathbf{S4}$ $R$ is reflexive and transitive : Shorthands for classes of transitive frames. **Theorem 9**. *The classes of frames in Table [5](#table:transitiveframes){reference-type="ref" reference="table:transitiveframes"} are definable in $\mathscr{L}_\blacksquare$.* *Proof.*   \ We show that $\mathfrak{F}\in\mathbf{45}$ iff $\mathfrak{F}\models[\blacklozenge p\vdash\blacksquare\blacklozenge p]$ and $\mathfrak{F}\models[\blacksquare p\vdash\blacksquare\blacksquare p]$. Let $\mathfrak{F}$ be transitive and Euclidean. Then $\mathfrak{F}\models[\blacklozenge p\vdash\blacksquare\blacklozenge p]$ by the previous case and Theorem [Theorem 1](#theorem:knowisgood){reference-type="ref" reference="theorem:knowisgood"}. We show that $\mathfrak{F}\models[\blacksquare p\vdash\blacksquare\blacksquare p]$, as well. Assume that $\mathfrak{M},w\Vdash^+\blacksquare p$. Then, $\mathfrak{M},w'\Vdash^+p$ for every $w'\in R(w)$. In addition, if $\mathfrak{M},u\Vdash^-p$ in some $u\in R(w)$, then $\mathfrak{M},u'\Vdash^-p$ in all $u'\in R(w)$. Moreover, since $\mathfrak{F}$ is Euclidean, it holds that if $R(w)\neq\varnothing$, then $R(w')\neq\varnothing$ for all $w'\in R(w)$ as well. Of course, if $R(w)=\varnothing$, then $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$.
If $R(w)\neq\varnothing$, then (since $R$ is transitive) either $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\nVdash^-\blacksquare p$ in every $w'\in R(w)$, or $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\Vdash^-\blacksquare p$ in every $w'\in R(w)$. In both cases, $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$, as required. For the converse, let $\mathfrak{F}\notin\mathbf{45}$. If $\mathfrak{F}$ is not Euclidean, then $\mathfrak{F}\not\models[\blacklozenge p\vdash\blacksquare\blacklozenge p]$. So, we consider the case when $\mathfrak{F}$ is not transitive. Then, there are states $w_0$, $w_1$, and $w_2$ s.t. $w_0Rw_1Rw_2$ but $w_2\notin R(w_0)$. We set the valuation as follows: if $w\in R(w_0)$, then $w\in v^+(p)$ and $w\notin v^-(p)$; otherwise, $w\notin v^+(p)$ and $w\in v^-(p)$. It is clear that $\mathfrak{M},w_0\Vdash^+\blacksquare p$ but $\mathfrak{M},w_0\nVdash^+\blacksquare\blacksquare p$. Thus, $\mathfrak{F}\not\models[\blacksquare p\vdash\blacksquare\blacksquare p]$, as required. \ We show that $\mathfrak{F}\in\mathbf{D4}$ iff $\blacksquare p\vdash\blacklozenge p$ and $\blacksquare p\vdash\blacksquare\blacksquare p$ are valid on $\mathfrak{F}$. Let $\mathfrak{F}$ be serial and transitive. Then $\blacksquare p\vdash\blacklozenge p$ is valid on $\mathfrak{F}$ (Theorem [Theorem 8](#theorem:definability){reference-type="ref" reference="theorem:definability"}). We show that $\blacksquare p\vdash\blacksquare\blacksquare p$ is valid as well. Let $\mathfrak{M},w\Vdash^+\blacksquare p$. Then either $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\nVdash^-p$ in all $w'\in R(w)$ or $\mathfrak{M},w'\Vdash^+p$ and $\mathfrak{M},w'\Vdash^-p$ in all $w'\in R(w)$. In the first case, since $R$ is transitive, we have that $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\nVdash^-\blacksquare p$ in all $w'\in R(w)$, whence $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$. In the second case, observe that $R$ is not only transitive but serial as well (i.e., $R(w')\neq\varnothing$ for all $w'\in R(w)$). Thus, $\mathfrak{M},w'\Vdash^+\blacksquare p$ and $\mathfrak{M},w'\Vdash^-\blacksquare p$ in all $w'\in R(w)$, and again, $\mathfrak{M},w\Vdash^+\blacksquare\blacksquare p$, as required. For the converse, let $\mathfrak{F}\notin\mathbf{D4}$. If $\mathfrak{F}$ is not serial, then $\mathfrak{F}\not\models[\blacksquare p\vdash\blacklozenge p]$. If $\mathfrak{F}$ is not transitive, we can show that $\mathfrak{F}\not\models[\blacksquare p\vdash\blacksquare\blacksquare p]$ in the same way as in the previous case. \ We show that $\mathfrak{F}\in\mathbf{S4}$ iff $\blacksquare p\vdash p$ and $\blacksquare p\vdash\blacksquare\blacksquare p$ are valid on $\mathfrak{F}$. Again, since $\mathfrak{F}\in\mathbf{S4}$, $\blacksquare p\vdash p$ is valid on $\mathfrak{F}$ by Theorem [Theorem 8](#theorem:definability){reference-type="ref" reference="theorem:definability"}. The validity of $\blacksquare p\vdash\blacksquare\blacksquare p$ can be checked in the same way as in the case of $\mathbf{D4}$ frames. The converse direction can also be shown in the same way as the converse direction for the case of $\mathbf{45}$ frames. ◻ Let us quickly recapitulate the results of this section. We established that the traditional epistemic ($\mathbf{S5}$) and doxastic ($\mathbf{45}$ and $\mathbf{D45}$[^16]) frames are definable in an expected way. In fact, $\mathbf{S4}$ can be (cf., e.g., [@Steinsvold2008]) viewed as a logic of knowledge if one does not assume *negative introspection*.
In this case, $\mathbf{D4}$ can be considered a doxastic logic (again, without the negative introspection). We have shown that both $\mathbf{D4}$ and $\mathbf{S4}$[^17] frames are definable. # Conclusion[\[sec:conclusion\]]{#sec:conclusion label="sec:conclusion"} In this paper, we continued the line of research proposed in [@KozhemiachenkoVashentseva2023] and provided expansions of the Belnap--Dunn logic (First-Degree entailment) with the knowledge ($\blacksquare$) and ignorance ($\mathbf{I}$) modalities. We presented and motivated their semantics and explored their properties and constructed a sound and complete (Theorem [Theorem 3](#theorem:Tknowignocompleteness){reference-type="ref" reference="theorem:Tknowignocompleteness"}) analytic cut calculus for $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$. Below, we summarise the main results of the paper. - The ignorance modality $\mathbf{I}$ satisfies the desiderata outlined in [@KubyshkinaPetrolo2021]. Namely, the $\mathsf{BD}^\mathbf{I}$-counterparts of classical axioms and rules are $\mathsf{BD}^\mathbf{I}$ valid as shown in Theorem [Theorem 2](#theorem:ignoisgood){reference-type="ref" reference="theorem:ignoisgood"}, and $\mathbf{I}$ can be defined neither via the standard $\Box$ operator nor via the knowledge operator $\blacksquare$ (Theorems [Theorem 4](#theorem:ignoknowundefinable){reference-type="ref" reference="theorem:ignoknowundefinable"} and [Theorem 6](#theorem:ignovsknow){reference-type="ref" reference="theorem:ignovsknow"}). - The introduced modality $\blacksquare$ cannot be defined via $\Box$ nor $\mathbf{I}$ (Theorems [Theorem 4](#theorem:ignoknowundefinable){reference-type="ref" reference="theorem:ignoknowundefinable"} and [Theorem 6](#theorem:ignovsknow){reference-type="ref" reference="theorem:ignovsknow"}). Furthermore, it conforms to the usual requirements of knowledge modalities: $\blacksquare$ has the expected connection to the "knowledge whether" ($\blacktriangle$); truthfulness, positive, and negative introspection are valid on $\mathbf{S5}$ frames (Theorem [Theorem 1](#theorem:knowisgood){reference-type="ref" reference="theorem:knowisgood"}). Moreover, using $\blacksquare$, we defined an unknown truth operator $\bullet$ and showed that it is not definable via $\Box$ either (Theorem [Theorem 7](#theorem:accidenceundefinable){reference-type="ref" reference="theorem:accidenceundefinable"}). - Several important classes of epistemic and doxastic frames are definable using $\blacksquare$ in the same way as they are defined in classical logic up to the replacement of $\Box$ with $\blacksquare$ and the implication with $\vdash$ (Theorems [Theorem 8](#theorem:definability){reference-type="ref" reference="theorem:definability"} and [Theorem 9](#theorem:transitivedefinability){reference-type="ref" reference="theorem:transitivedefinability"}). Several questions remain open. First of all, recall from Remark [Remark 5](#rem:knownonstandard){reference-type="ref" reference="rem:knownonstandard"} that $\blacksquare$ behaves in a non-standard manner. This non-standard behaviour could be rectified if we defined the validity of sequents not via the preservation of truth but via the preservation of truth and non-falsity. I.e., if we considered $\phi\vdash\chi$ to be valid when there is no such pointed model $\langle\mathfrak{M},w\rangle$ where $\phi$ is *true and non-false* but $\chi$ is not. In this setting, $\blacksquare(p\wedge q)\vdash\blacksquare p\wedge\blacksquare q$ would be universally valid. 
This, however, would make our logic non-paraconsistent and (arguably, a bigger issue) would render reasoning by cases --- $\dfrac{\phi\vdash\psi\quad\chi\vdash\psi}{\phi\vee\chi\vdash\psi}$ --- unsound. In fact, such a definition of validity would make our logic an extension of $\mathbf{ETL}$ (Exactly True Logic), and hence, we would lose the conservativity over $\mathsf{BD}$. $\mathbf{ETL}$ was introduced in [@PietzRiveccio2013] and further studied in [@ShramkoZaitsevBelikov2017; @ShramkoZaitsevBelikov2018; @KapsnerRivieccio2023]. We leave the analysis of $\blacksquare$ in $\mathbf{ETL}$ for future research. Second, we have established the definability of several classes of frames in $\mathscr{L}_\blacksquare$. To the best of our knowledge, there are no results on the correspondence between the formulas with the classical ignorance modality $\mathbb{I}$ and classes of frames. Moreover, it is open which classes of frames *are not definable* in $\mathscr{L}_\blacksquare$ and $\mathscr{L}_\mathbf{I}$. Finally, many modal expansions of $\mathsf{BD}$ (cf., e.g., [@OdintsovWansing2010; @OdintsovWansing2017]) contain an implication. Thus, considering a counterpart of $\mathsf{BK}^\Box$[^18] but with $\blacksquare$ and $\mathbf{I}$ also makes sense. Moreover, adding an implication will allow for easier construction of Hilbert-style calculi and facilitate the study of the correspondence between classes of frames and formulas (both in the language with $\blacksquare$ and $\mathbf{I}$). In addition, the implication in $\mathsf{BK}^\Box$ is not the only possible one that we can add. Thus, it is also possible to compare different implicative expansions of $\mathsf{BD}^\mathbf{I}$ and $\mathsf{BD}^\blacksquare$, especially because many important modal formulas contain multiple implications or do not have implication as their principal connective. Finally, complete axiomatisations of implicative expansions of $\mathsf{BD}^\blacksquare$ and $\mathsf{BD}^\mathbf{I}$ will shed light on which formulas are valid on all frames. # Proof of Theorem [Theorem 3](#theorem:Tknowignocompleteness){reference-type="ref" reference="theorem:Tknowignocompleteness"}[\[sec:completenessproof\]]{#sec:completenessproof label="sec:completenessproof"} {#proof-of-theorem-theoremtknowignocompletenessseccompletenessproof} *For every $\phi\vdash\chi$ s.t. $\phi,\chi\in\mathscr{L}_\blacksquare\cup\mathscr{L}_\mathbf{I}$, it holds that $\phi\vdash\chi$ is valid iff it has a $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ proof.* *Proof.* For the soundness part, one can check that if a branch is realised by a model, then its extension by any rule is realised too. Note also that a closed branch clearly cannot be realised. Thus, if the tree is closed, then the initial labelled formulas are not realisable. But in order to prove $\phi\vdash\chi$, we start a tree with $\{\phi\!:\!\mathfrak{t};w_0,\chi\!:\!\overline{\mathfrak{t}};w_0\}$. Hence, if this set cannot be realised, the sequent is valid. For the completeness part, we proceed by contraposition. We show that every complete open branch $\mathcal{B}$ is realisable. Namely, we prove by induction on $\phi$ that $w\colon\phi;\mathfrak{t}\in\mathcal{B}$ iff $\mathfrak{M},w\Vdash^+\phi$ and $w\colon\phi;\mathfrak{f}\in\mathcal{B}$ iff $\mathfrak{M},w\Vdash^-\phi$ with $\mathfrak{M}$ as in Definition [Definition 7](#def:branchrealisation){reference-type="ref" reference="def:branchrealisation"}. The basis case of $\phi=p$ holds by the construction of $\mathfrak{M}$. 
The propositional cases are straightforward. Thus, we are going to consider only the most instructive cases of $\phi=\blacksquare\chi$ and $\phi=\mathbf{I}\chi$. Let $w_i\colon\blacksquare\chi;\mathfrak{t}\in\mathcal{B}$. Since $\mathcal{B}$ is complete, we have that $w_j\colon\chi;\mathfrak{t}\in\mathcal{B}$ for every $w_j$ s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$ (using $\blacksquare_\mathfrak{t}$). Moreover, if there is some $w'$ s.t. $w'\colon\chi;\mathfrak{f}\in\mathcal{B}$ and $w_i\mathsf{R}w'\in\mathcal{B}$, then $w_j\colon\chi;\mathfrak{f}$ for all $w_j$'s s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$. By the induction hypothesis, we have that $\mathfrak{M},w_j\Vdash^+\chi$ for every $w_j\in R(w_i)$ and, moreover, if $\mathfrak{M},w'\Vdash^-\chi$ for some $w'\in R(w_i)$, then $\mathfrak{M},w_j\Vdash^-\chi$ for all $w_j\in R(w_i)$. Thus, $\mathfrak{M},w_i\Vdash^+\blacksquare\chi$, as required. For the converse, assume that $w_i\!\colon\!\blacksquare\chi;\mathfrak{t}\!\notin\!\mathcal{B}$. Hence, by completeness of $\mathcal{B}$, we have that $w_i\!\colon\!\blacksquare\chi;\overline{\mathfrak{t}}\!\in\!\mathcal{B}$. Then, one of the following holds: 1. $w_j\colon\chi;\overline{\mathfrak{t}}\in\mathcal{B}$ for some $w_j$ s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$; 2. $w_j\colon\chi;\mathfrak{f}\in\mathcal{B}$ and $w_k\colon\chi;\overline{\mathfrak{f}}\in\mathcal{B}$ for some $w_j$ and $w_k$ s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$ and $w_i\mathsf{R}w_k\in\mathcal{B}$. Applying the induction hypothesis, we have that $\mathfrak{M},w_j\nVdash^-\chi$ for some $w_j\in R(w_i)$ in the first case, and $\mathfrak{M},w_j\Vdash^-\chi$ and $\mathfrak{M},w_k\nVdash^-\chi$ for some $w_j,w_k\in R(w_i)$ in the second case. Thus, $\mathfrak{M},w_j\nVdash^+\blacksquare\chi$, as required. The case when $w_i\colon\blacksquare\chi;\mathfrak{f}\in\mathcal{B}$ can be tackled in the same way. Let us now proceed to the case when $\phi=\mathbf{I}\chi$. Again, we assume that $w_i\colon\mathbf{I}\chi;\mathfrak{t}\in\mathcal{B}$. Then, $w_i\chi;\mathfrak{t}\in\mathcal{B}$ and for each $w_j$ s.t. $w_i\mathsf{R}w_j$[^19], $w_j\colon\chi;\mathfrak{f}\in\mathcal{B}$. Moreover, if $w'\colon\chi;\mathfrak{t}\in\mathcal{B}$ for some $w'$ s.t. $w_i\mathsf{R}w'\in\mathcal{B}$, then $w_j\colon\chi;\mathfrak{t}\in\mathcal{B}$ for every $w_j$ s.t. $w_i\mathsf{R}w_j$. By the induction hypothesis, we obtain that $\mathfrak{M},w_i\Vdash^+\chi$, $\mathfrak{M},w_j\Vdash^-\chi$ for all $w_j\in R^!(w_i)$, and if $\mathfrak{M},w'\Vdash^+\chi$ for some $w'\in R^!(w_i)$, then $\mathfrak{M},w_j\Vdash^+\chi$ for all $w_j\in R^!(w_i)$ as well. Hence, $\mathfrak{M},w_i\Vdash^+\mathbf{I}\chi$. For the converse, assume that $w_i\colon\mathbf{I}\chi;\mathfrak{t}\notin\mathcal{B}$ (hence, $w_i\colon\mathbf{I}\chi;\overline{\mathfrak{t}}\in\mathcal{B}$). Then, one of the following holds: 1. $w_i\colon\chi;\overline{\mathfrak{t}}\in\mathcal{B}$; 2. $w_j\colon\chi;\overline{\mathfrak{f}}\in\mathcal{B}$ for some $w_j$ s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$; 3. $w_j\colon\chi;\mathfrak{t}\in\mathcal{B}$ and $w_k\colon\chi;\overline{\mathfrak{t}}\in\mathcal{B}$ for some $w_j$ and $w_k$ s.t. $w_i\mathsf{R}w_j\in\mathcal{B}$ and $w_i\mathsf{R}w_k\in\mathcal{B}$. By the induction hypothesis, we have that $\mathfrak{M},w_i\nVdash^+\chi$ in the first case; $\mathfrak{M},w_j\nVdash^-\chi$ for some $w_j\in R^!(w_i)$[^20] in the second case; $\mathfrak{M},w_j\Vdash^+\chi$ and $\mathfrak{M},w_k\nVdash^+\chi$ for some $w_j,w_k\in R^!(w_i)$ in the third case. 
In all three cases, $\mathfrak{M},w_i\nVdash^+\mathbf{I}\chi$. The case of $w_i\colon\mathbf{I}\chi;\mathfrak{f}\in\mathcal{B}$ can be dealt with similarly. ◻ # Proof of Theorem [Theorem 7](#theorem:accidenceundefinable){reference-type="ref" reference="theorem:accidenceundefinable"}[\[sec:accidenceproof\]]{#sec:accidenceproof label="sec:accidenceproof"} {#proof-of-theorem-theoremaccidenceundefinablesecaccidenceproof} *There is no formula $\phi\in\mathscr{L}_\Box$ that defines $\bullet p$ on the classes of all frames, all reflexive frames, all transitive frames, all symmetric frames, and all Euclidean frames.* *Proof.* Consider the models in Fig. [\[fig:S4counterexample\]](#fig:S4counterexample){reference-type="ref" reference="fig:S4counterexample"}. Clearly, $\mathfrak{M},w_0\nVdash^+\bullet p$ and $\mathfrak{M},w_0\Vdash^-\bullet p$ but $\mathfrak{M}',w'_0\Vdash^+\bullet p$ and $\mathfrak{M}',w'_0\nVdash^-\bullet p$. It now suffices to show for any $\phi\in\mathscr{L}_\Box$ that 1. if $\mathfrak{M}',w'_0\nVdash^+\phi$ and $\mathfrak{M}',w'_0\Vdash^-\phi$, then $\mathfrak{M},w_0\nVdash^+\phi$ and $\mathfrak{M},w_0\Vdash^-\phi$; and 2. if $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_0\nVdash^-\phi$, then $\mathfrak{M},w_0\Vdash^+\phi$ and $\mathfrak{M},w_0\nVdash^-\phi$. We proceed by induction on $\phi$. The basis case of variables is trivial since their valuations at $w_0$ and $w'_0$ are the same. We only show the most instructive cases of $\phi=\psi\wedge\psi'$ and $\phi=\lozenge\psi$ (recall that $\lozenge\psi$ can be defined as $\neg\Box\neg\psi$). For (1), if $\mathfrak{M}',w'_0\nVdash^+\psi\wedge\psi'$ and $\mathfrak{M}',w'_0\Vdash^-\psi\wedge\psi'$, then 1. $\mathfrak{M}',w'_0\nVdash^+\psi$ and $\mathfrak{M}',w'_0\Vdash^-\psi$, or 2. $\mathfrak{M}',w'_0\nVdash^+\psi'$ and $\mathfrak{M}',w'_0\Vdash^-\psi'$, or 3. w.l.o.g. $\mathfrak{M}',w'_0\Vdash^+\psi$ and $\mathfrak{M}',w'_0\Vdash^-\psi$ but $\mathfrak{M}',w'_0\nVdash^+\psi'$ and $\mathfrak{M}',w'_0\nVdash^-\psi'$. Cases (a) and (b) hold by the induction hypothesis. For (c), one can show by induction that there is no $\phi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_0\Vdash^-\phi$. The basis case of propositional variables holds by the construction of the model, and the cases of propositional connectives can be shown by a straightforward application of the induction hypothesis. Finally, one can see that there is no $\chi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M}',w'_1\Vdash^+\chi$ and $\mathfrak{M}',w'_1\Vdash^-\chi$. Thus, it holds that there is no $\phi=\Box\chi$, nor $\phi=\lozenge\chi$ s.t. $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_0\Vdash^-\phi$. For (2), recall that if $\mathfrak{M}',w'_0\Vdash^+\psi\wedge\psi'$ and $\mathfrak{M}',w'_0\nVdash^-\psi\wedge\psi'$, then $\mathfrak{M}',w'_0\Vdash^+\psi$ and $\mathfrak{M}',w'_0\nVdash^-\psi$ as well as $\mathfrak{M}',w'_0\Vdash^+\psi'$ and $\mathfrak{M}',w'_0\nVdash^-\psi'$. Hence, by the induction hypothesis, $\mathfrak{M},w_0\Vdash^+\psi$ and $\mathfrak{M},w_0\nVdash^-\psi$ as well as $\mathfrak{M},w_0\Vdash^+\psi'$ and $\mathfrak{M},w_0\nVdash^-\psi'$. Thus, $\mathfrak{M}',w'_0\Vdash^+\psi\wedge\psi'$ and $\mathfrak{M}',w'_0\nVdash^-\psi\wedge\psi'$, as required. 
Observe first, that since $\forall w',w''\!\in\!\mathfrak{M}':w'Rw''$, it holds that - $\mathfrak{M}',w'\Vdash^+\lozenge\psi$ iff $\mathfrak{M}',w''\Vdash^+\lozenge\psi$, and - $\mathfrak{M}',w'\Vdash^-\lozenge\psi$ iff $\mathfrak{M}',w''\Vdash^-\lozenge\psi$ for any $w',w''\in\mathfrak{M}'$ and any $\lozenge\psi\in\mathscr{L}_\Box$. Now, consider (1). If $\mathfrak{M}',w'_0\nVdash^+\lozenge\psi$ and $\mathfrak{M}',w'_0\Vdash^-\lozenge\psi$, then $\mathfrak{M}',w'\nVdash^+\psi$ and $\mathfrak{M}',w'\Vdash^-\psi$ for any $w'\in\{w'_0,w'_1\}$. But then, $\mathfrak{M},w_0\nVdash^+\psi$ and $\mathfrak{M},w_0\Vdash^-\psi$ by the induction hypothesis. Hence, $\mathfrak{M},w_0\nVdash^+\lozenge\psi$ and $\mathfrak{M},w_0\Vdash^-\lozenge\psi$, as required. For (2), let $\mathfrak{M}',w'_0\Vdash^+\lozenge\psi$ and $\mathfrak{M}',w'_0\nVdash^-\lozenge\psi$. We have four options: 1. $\mathfrak{M}',w'_0\Vdash^+\psi$ and $\mathfrak{M}',w'_0\nVdash^-\psi$, or 2. $\mathfrak{M}',w'_1\Vdash^+\psi$ and $\mathfrak{M}',w'_1\nVdash^-\psi$, or 3. $\mathfrak{M}',w'_1\Vdash^+\psi$, $\mathfrak{M}',w'_1\Vdash^-\psi$, $\mathfrak{M}',w'_0\nVdash^+\psi$ and $\mathfrak{M}',w'_0\nVdash^-\psi$, or 4. $\mathfrak{M}',w'_0\Vdash^+\psi$, $\mathfrak{M}',w'_0\Vdash^-\psi$, $\mathfrak{M}',w'_1\nVdash^+\psi$ and $\mathfrak{M}',w'_1\nVdash^-\psi$. The (a) case holds by the induction hypothesis. Case (d) holds trivially since there is no $\phi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M}',w'_0\Vdash^+\phi$ and $\mathfrak{M}',w'_0\Vdash^-\phi$ as we have just shown. Case (c) holds trivially as well because a similar inductive argument demonstrates that there is no $\phi\in\mathscr{L}_\Box$ s.t. $\mathfrak{M}',w'_1\Vdash^+\phi$ and $\mathfrak{M}',w'_1\Vdash^-\phi$. Finally, we reduce (b) to (a) by proving that for any $\psi\in\mathscr{L}_\Box$, it holds that (i) if $\mathfrak{M}',w'_1\Vdash^+\psi$ and $\mathfrak{M}',w'_1\nVdash^-\psi$, then $\mathfrak{M}',w'_0\Vdash^+\psi$ and $\mathfrak{M}',w'_0\nVdash^-\psi$ as well, and that (ii) if $\mathfrak{M}',w'_1\nVdash^+\psi$ and $\mathfrak{M}',w'_1\Vdash^-\psi$, then $\mathfrak{M}',w'_0\nVdash^+\psi$ and $\mathfrak{M}',w'_0\Vdash^-\psi$. We proceed by induction on $\psi$. The basis case of a propositional variable holds trivially. The propositional cases are also straightforward. Now, if $\psi=\lozenge\chi$, we use the observation above to obtain that (i) and (ii) hold as well. The result follows. ◻ [^1]: The research of the first author was funded by the grant ANR JCJC 2019, project PRELAP (ANR-19-CE48-0006). The authors wish to thank two anonymous reviewers for their helpful comments and remarks. [^2]: It is customary in doxastic and epistemic logics to use $\mathbf{K}$ for the knowledge modality and $\mathbf{B}$ for the belief modality. We do not follow this tradition as the only difference between knowledge and belief modalities that we consider is that they are defined over different classes of frames. The semantics of the modalities themselves is the same and they exhibit the same 'box-like' behaviour. [^3]: Note that it is more customary to treat $\triangle$ (it is *non-contingent* that or *the agent knows whether* the given proposition is true) as the basic operator and define $\triangledown\phi$ as $\neg\triangle\phi$. [^4]: Again, just as with $\triangledown$, $\bullet\phi$ ($\phi$ is accidental) is treated as a shorthand for $\neg\circ\phi$ ($\phi$ is not essential). 
[^5]: Henceforth, when we deal with $\mathsf{BD}$ and its expansions, we reserve the word "true" to mean "at least true"; we use "exactly true" to stand for "true and non-false". A similar convention is applied for "false" and "exactly false". [^6]: In this paper, we use Odintsov's and Wansing's [@OdintsovWansing2010; @OdintsovWansing2017] presentation of semantics of non-classical modal logics which uses *two* valuations on a frame --- $v^+$ (support of truth) and $v^-$ (support of falsity). [^7]: We will call these values "Belnapian". [^8]: Read "if the agent knows that $\phi$ is true, they know whether $\phi$ is true" or "if the agent believes in $\phi$, they are opinionated w.r.t. $\phi$". [^9]: We refer our readers to [@BlackburndeRijkeVenema2010] for the detailed presentation of the classical semantics of modal logic. [^10]: This is very close to the "being wrong" modality $\mathbb{W}$ proposed in [@Steinsvold2011] that is defined as follows: $$\mathfrak{M},w\Vdash\mathbb{W}\phi\text{ iff }\mathfrak{M},w\nVdash\phi\text{ and }\forall w'\in R(w):\mathfrak{M},w'\Vdash\phi$$ I.e., $\phi$ is false but the agent believes that it is true. Note, however, that in contrast to $\mathbb{I}$, $\mathbb{W}$ uses the whole accessibility relation and *does not exclude* $w$, i.e., $\mathbb{W}\phi\coloneqq\neg\phi\wedge\Box\phi$. Thus, $\mathbb{W}\phi$ is *always false* on a reflexive frame. We direct the reader to [@GilbertKubyshkinaPetroloVenturi2022] for a detailed comparison between $\mathbb{I}$ and $\mathbb{W}$. [^11]: I.e., there is no formula $\phi$ and no state $w$ s.t. one of the following holds: - $\mathfrak{M},w\Vdash^+\phi$ and $\mathfrak{M},w\Vdash^-\phi$, or - $\mathfrak{M},w\nVdash^+\phi$ and $\mathfrak{M},w\nVdash^-\phi$. [^12]: Note that the dual model swaps $\mathbf{B}$ and $\mathbf{N}$ which mimics the behaviour of *conflation* presented in [@Fitting1994]. We chose against 'conflated model' for two reasons: first, to preserve the terminology from [@KozhemiachenkoVashentseva2023] where such models are also called 'dual'. Second, 'conflated model' might sound confusing. Note, finally, that $(\mathfrak{M}^\partial)^\partial=\mathfrak{M}$. [^13]: Reflexivity is crucial here since $R(u)\neq\varnothing$ for every $u\in W$ which guarantees that $\blacksquare p$ is *both true and false* in every state. [^14]: The completeness proof in Dunn's paper contained a mistake that was addressed in [@Drobyshevich2020]. [^15]: The main difference between tableaux and analytic cut calculi is the presence of the eponymous rule in the latter. In classical logic, this rule internalises the principle of excluded middle and is formulated as $\dfrac{}{\phi\mid\neg\phi}$ with $\phi$ being a subformula of a formula occurring on the branch. In general, if the $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$ is the set of truth values of a given logic, the analytic cut rule can be given, for example, as follows: $\dfrac{}{\mathbf{v}_1[\phi]\mid\ldots\mid\mathbf{v}_n[\phi]}$. Our analytic cut rule ($\mathfrak{v}\overline{\mathfrak{v}}$ in Fig. [\[fig:Tknowignorules\]](#fig:Tknowignorules){reference-type="ref" reference="fig:Tknowignorules"}) is an adaptation of the analytic cut rule of the $\mathbf{RE}_\mathrm{fde}$ calculus from [@DAgostino1990]. [^16]: $\mathbf{D45}$ frames are definable since both $\mathbf{D}$ and $\mathbf{45}$ frames are definable. 
[^17]: We remind the readers again that $\mathbf{S4}$ frames are definable even with $\blacktriangle$ (knowledge whether) operator [@KozhemiachenkoVashentseva2023 Theorem 5.4]. The present result while not new is important because $\mathbf{S4}$ and $\mathbf{D4}$ frames are definable *in a standard manner*. [^18]: A logic expanding $\mathsf{BD}$ with $\Box$ and a four-valued "Nelsonian" implication. [^19]: Observe that $w\mathsf{R}w$, actually, does not occur in $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ proofs since whenever $w\mathsf{R}w'$ is introduced, $w'$ is fresh. [^20]: Again, observe that the models produced from the complete open branches of $\mathcal{AC}_{\blacksquare,\mathbf{I}}$ proofs are *irreflexive*, whence $R^!(w)=R(w)$ for all $w$'s.
arxiv_math
{ "id": "2309.01449", "title": "Knowledge and ignorance in Belnap--Dunn logic", "authors": "Daniil Kozhemiachenko and Liubov Vashentseva", "categories": "math.LO", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
author:
- Komi AGBALENYO\*
- Vincent CAILLIEZ\*
- Jonathan CAILLIEZ
bibliography:
- biblio.bib
title: A novel Newton-Raphson style root finding algorithm
---

Root finding, Newton, Numerical Methods

# Abstract {#abstract .unnumbered}

Many problems in applied mathematics require root finding algorithms. Unfortunately, root finding methods have limitations. Firstly, regarding convergence, there is a trade-off between the size of its domain and its rate. Secondly, the numerous evaluations of the function and its derivatives penalize the efficiency of high order methods. In this article, we present a family of high order methods that require few functional evaluations (one for each step, plus one for each considered derivative at the start of the method), thus increasing the efficiency of the methods.

# Introduction

Finding the root of a function, that is solving the equation $$g(x)=0,$$ is a widely studied topic in numerical analysis. The methods developed to solve this kind of equation are employed widely in the different fields of science. They are used in decision science [@decision_applications_2019], computer aided design [@computer_aided_3_2005], simulation of physical phenomena [@zhang_physics_simulation_1_2016] and other domains. One of the first and best known methods developed to solve this class of problems is the Newton-Raphson method [@ypma1995historical]. This method has then been refined further in order to increase its order of convergence, its domain of convergence, or its efficiency index [@said_solaiman_optimal_2019_king; @ARGYROS2015336; @BEHL201589; @CORDERO20102969; @SOLEYMANI2012847]. The efficiency index of a method has been defined in [@kung_optimal_1974] as $n^{\frac{1}{q}}$ where $q$ is the number of functional evaluations at each step and $n$ is the order of convergence of the method. Unfortunately, high order methods require more evaluations of the function and its derivatives, so their efficiency is limited. For example, the methods presented in [@said_solaiman_optimal_2019_king; @argyros_convergence_2015_convergence_fourth_order; @behl_new_2016_optimal_eighth_order; @chun_comparison_2016_comparison_of_eighth_order_methods; @amat_third-order_2007_kantorovich_conditions] have a high order of convergence, but the numerous evaluations of $f$ and its derivatives reduce their efficiency. [@budzko_modifications_2014] shows that there is a trade-off between the size of the domain of convergence and the rate of convergence of a method. It shows that for some functions, such as $\arctan$, the traditional methods do not converge for an initial guess greater than $1.39$. The tricks used to extend the domain of convergence reduce the rate of convergence of these methods. Most of the methods in the literature assume no prior knowledge about the function $g$ at the solution point $l$ where $g(l)=0$. However, under certain circumstances, it is possible to compute the successive derivatives of the function at $l$, be it by physical reasoning in the case where the function $g$ is derived from a physical simulation, or for a class of functions where the derivative can be computed from the value of the function. For instance, let $g(x)=\sqrt{x}$; then $g'(x)=\frac{1}{2\sqrt{x}}=\frac{1}{2g(x)}$. The work of this paper is divided as follows: the first part presents the proofs and construction of the methods; the second part presents numerical tests of the methods and compares them to other methods present in the literature.
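To fix ideas, here is a small illustration of this prior knowledge (our own example, chosen to match the test function used in the Results section below): for $g=\arctan$, the root is $l=0$ and the derivatives of $g$ at the root are known exactly, without any functional evaluation, $$g'(0)=1,\qquad g''(0)=0,\qquad g'''(0)=-2,$$ so the constants required by the methods constructed below are available from the start. In terms of the efficiency index $n^{\frac{1}{q}}$ recalled above, a method of order $n$ that only needs the single evaluation $g(U_n)$ at each step reaches an index of $n$, whereas for instance the classical Newton-Raphson method, with order $2$ and $q=2$ evaluations ($g$ and $g'$) per step, reaches only $2^{\frac{1}{2}}\approx 1.41$.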
# Methods and Proof

## Proof for the method using the first derivative (second order of convergence)

Given an equation $$g(x) = 0 \label{eq1_j}$$ we denote by $l$ one of its solutions and we consider $g$ to have a Taylor expansion at $l$ with a nonzero convergence radius. Let $$f(x) = g(x) + x;$$ then $l$ is a fixed point of $f$: $$f(l) = g(l) + l=l.$$ The proposed method starts from the following sequence, whose order of convergence we then try to improve: $$U_{n+1} = f(U_n).$$ Let $U_n = l(1 + \xi)$, then $$U_{n+1} = f(U_n) = f(l(1 + \xi)). \label{eq2_j}$$ Using the Taylor series of $f(x)$ near $l$, the equation [\[eq2_j\]](#eq2_j){reference-type="ref" reference="eq2_j"} becomes $$U_{n+1} = f(l) + l\xi f'(l) + \frac{l^2\xi^2}{2}f''(l) + o(l^2\xi^2)$$ $$U_{n+1} - l = l\xi f'(l) + \frac{l^2\xi^2}{2}f''(l) + o(l^2\xi^2)$$ In order to accelerate the convergence, we will reduce to zero the first order error $l\xi f'(l)$. To do so, a corrective term is added, and thus the sequence becomes: $$U_{n+1} = f(U_n) + \alpha(f(U_n) -U_{n})$$ which gives the following result using the Taylor series of $f$: $$U_{n+1} = f(l) + l\xi \left[ f'(l) + \alpha(f'(l) - 1)\right]+ o(l\xi) \label{eq3_j1}$$ In order to cancel out the $l\xi$ term (thus making the method of order 2), we get $$\alpha = \frac{f'(l)}{1 - f'(l)}.$$ This gives the following result: $$U_{n+1} = f(U_n) + [f(U_n) -U_n]\frac{f'(l)}{1 - f'(l)}.$$ We replace $f(x)$ by $g(x) + x$ and $f'(x)$ by $g'(x) + 1$; the sequence to approximate the roots of $g(x)$ can then be expressed as $$U_{n+1} = U_n - \frac{g(U_n)}{g'(l)}.$$ This method is in theory of order 2 while requiring only one functional evaluation at each step ($g'(l)$ only has to be computed once at the start); it therefore has an efficiency index of 2.

## Proof for the method using the first and second derivatives (third order of convergence)

Given an equation $$g(x) = 0$$ we denote by $l$ one of its solutions and we consider $g$ to have a Taylor expansion at $l$ with a nonzero radius of convergence. Let $$f(x) = g(x) + x;$$ then $l$ is a fixed point of $f$: $$f(l) = g(l) + l=l.$$ The proposed third order method starts from the following sequence, whose order of convergence we then try to improve: $$U_{n+1} = f(U_n).$$ Let $U_n = l(1 + \xi)$, then $$U_{n+1} = f(U_n) = f(l(1 + \xi)). \label{eq3_j2}$$ Using the Taylor series of $f(x)$ near $l$, the equation [\[eq3_j2\]](#eq3_j2){reference-type="ref" reference="eq3_j2"} becomes $$U_{n+1}-U_n = l\xi (f'(l) - 1) + \frac{l^2\xi^2}{2}f''(l) + o(l^2\xi^2)$$ $$(U_{n+1}-U_n)^2 = l^2\xi^2 (f'(l) - 1)^2 + o(l^2\xi^2)$$ In order to accelerate the convergence, we will reduce to zero the first order error $l\xi f'(l)$ and the second order error $l^2\xi^2 f''(l)$. To do so, the method becomes: $$U_{n+1} = f(U_n) + \alpha(f(U_n) - U_n) + \beta(f(U_n) - U_n)^2$$ $$U_{n+1} = f(l) + l\xi[f'(l) + \alpha(f'(l) -1)] + \frac{l^2\xi^2}{2}[f''(l)(1 + \alpha) + 2\beta(f'(l) - 1)^2 ] + o(l^2\xi^2)$$ One deduces the following values for $\alpha$ and $\beta$ in order to reduce to zero the $l\xi$ and the $l^2\xi^2$ terms.
$$\begin{array}{cc} \alpha = \frac{f'(l)}{1-f'(l)}& \beta = \frac{-f''(l)}{2(1- f'(l) )^3}\\ \end{array}$$ The final sequence is consequently $$U_{n+1} = f(U_n) +[f(U_n) -U_n]\frac{f'(l)}{1-f'(l)}+ [f(U_n) -U_n]^2\frac{-f''(l)}{2(1 - f'(l))^3}.$$ By replacing $f(x)$ by $g(x) + x$, $f'(x)$ by $g'(x) + 1$ and $f''(x)$ by $g''(x)$, we obtain the sequence $$U_{n+1} = U_n - \frac{g(U_n)}{g'(l)} + \frac{g(U_n)^2g''(l)}{2g'(l)^3}.$$ This method is in theory of order 3 while requiring only one functional evaluation at each step ($g'(l)$ and $g''(l)$ only have to be computed once at the start); it therefore has an efficiency index of 3.

## Proof for the method using the first, second and third derivatives (fourth order of convergence)

Given an equation $$g(x) = 0 \label{eq1_j2}$$ we denote by $l$ one of its solutions and we consider $g$ to have a Taylor expansion at $l$ with a nonzero radius of convergence. Let $$f(x) = g(x) + x;$$ then $l$ is a fixed point of $f$: $$f(l) = g(l) + l=l.$$ The proposed method starts from the following sequence, whose order of convergence we then try to improve: $$U_{n+1} = f(U_n).$$ Let $U_n = l(1 + \xi)$, then $$U_{n+1} = f(U_n) = f(l(1 + \xi)). \label{eq4_j}$$ Using the Taylor series of $f(x)$ near $l$, the equation [\[eq4_j\]](#eq4_j){reference-type="ref" reference="eq4_j"} becomes $$U_{n+1} = f(l) + l\xi f'(l) + \frac{l^2\xi^2}{2}f''(l) + \frac{l^3\xi^3}{6}f'''(l) +o(l^3\xi^3)$$ $$U_{n+1}-U_n = l\xi(f'(l)-1)+ \frac{l^2\xi^2}{2}f''(l) + \frac{l^3\xi^3}{6}f'''(l) + o(l^3\xi^3)$$ $$(U_{n+1}-U_n)^2 = l^2\xi^2(f'(l)-1)^2 + l^3\xi^3 f''(l)(f'(l)-1) + o(l^3\xi^3)$$ $$(U_{n+1}-U_n)^3 = l^3\xi^3(f'(l) - 1)^3 + o(l^3\xi^3)$$ In order to accelerate the convergence, we will reduce to zero the first, the second and the third order errors $l\xi f'(l)$, $l^2\xi^2 f''(l)$ and $l^3\xi^3 f'''(l)$. To do so, the method becomes: $$U_{n+1} = f(U_{n}) + \alpha(f(U_{n}) - U_{n}) + \beta(f(U_{n}) - U_{n})^2 + \gamma(f(U_{n}) - U_{n})^3$$ $$\begin{gathered} U_{n+1} = f(l) + l\xi[f'(l) + \alpha(f'(l) - 1)] + \\ \quad \frac{l^2\xi^2}{2}[f''(l)(1 + \alpha) + 2\beta(f'(l) - 1)^2] + \\ \frac{l^3\xi^3}{6}[f'''(l)(1 + \alpha) +6\beta f''(l)(f'(l) - 1) + 6\gamma(f'(l)-1)^3] + o(l^3\xi^3) \end{gathered}$$ In order to cancel out the first, the second and the third order errors (thus making the method of order 4), we get $$\begin{array}{c} \alpha = \frac{f'(l)}{1-f'(l)} \\ \\ \beta = \frac{-f''(l)}{2(1-f'(l))^3} \\ \\ \gamma = \frac{f'''(l)(1 - f'(l)) + 3f''(l)^2}{6(1 -f'(l))^5} \end{array}$$ We replace $f(x)$ by $g(x) + x$ and $f'(x)$ by $g'(x) + 1$, and we do the same with the other derivatives, which gives the sequence $$U_{n+1} = U_n - \frac{g(U_n)}{g'(l)} + \frac{g(U_n)^2g''(l)}{2g'(l)^3} - \frac{g(U_n)^3[3g''(l)^2 -g'(l)g'''(l)]}{6g'(l)^5}.$$ This method is in theory of order 4 while requiring only one functional evaluation at each step ($g'(l)$, $g''(l)$, $g'''(l)$ only have to be computed once at the start); it therefore has an efficiency index of 4, which is higher than the optimal index of $4^\frac{1}{3}$ presented in [@said_solaiman_optimal_2019_king].

## Generalization of the proof

The same method can be applied up to any order of convergence desired. Given an equation $$g(x) = 0 \label{eq1_j3}$$ we denote by $l$ one of its solutions and we consider $g$ to have a Taylor expansion at $l$ with a nonzero radius of convergence. Let $$f(x) = g(x) + x;$$ then $l$ is a fixed point of $f$: $$f(l) = g(l) + l=l.$$
We assume that the function $f$ has an $n$-th order Taylor expansion at $l$. With $U_{n+1}=f(U_n)$ and $U_n=l(1+\xi)$ as before, we have $$U_{n+1}-U_{n} = \sum_{k=0}^{n-1} \frac{(l\xi)^k}{k!}f^{(k)}(l) + o((l\xi)^{n-1}) - l(1 + \xi).$$ We then define the sequence $$V_{n+1} = U_{n+1} + \sum_{k=1}^{n-1} C_{k}(U_{n+1}-U_{n} )^{k}$$ where $C_1$, $C_2$,\..., $C_{n-1}$ are constants that we will seek in order to reduce to zero all the errors of order smaller than $n$. Once the formulas for all these constants are found, we can replace $f(l)$ by $g(l) +l$ and compute the derivatives accordingly to get the final sequence. This final sequence gives an approximation of the solution $l$ of the equation we want to solve. It has an efficiency of $n$ because at each step we only need to compute $f(U_n)$ while the order of convergence is $n$.

# Results

We test here our methods against other methods present in the literature for different functions and different starting points. The computations have been carried out on a Windows 10 computer with an Intel(R) Core(TM) i5-4300M CPU.

## Comparison of methods for $f(x) = \arctan(x)$ and $x_0 = -0.9$

| Method | Time | Steps | Converges |
|--------|------|-------|-----------|
| Second order | 1 $\mu s$ | 5 | yes |
| Third order | 1 $\mu s$ | 5 | yes |
| Fourth order | 2 $\mu s$ | 4 | yes |
| Newton-Raphson [@ehiwario_comparative_2014_comp_new_bisec_secant] | 8 $\mu s$ | 6 | yes |
| Newton two-step [@amat_third-order_2007_kantorovich_conditions] | $<$ 1 $\mu s$ | 4 | yes |
| Halley [@amat_third-order_2007_kantorovich_conditions] | | 2 | No |
| Chebyshev [@amat_third-order_2007_kantorovich_conditions] | 2 $\mu s$ | 6 | Yes |
| Fourth order [@said_solaiman_optimal_2019_king] | | | No |
| Eighth order [@said_solaiman_optimal_2019_king] | | | No |

## Comparison of methods for $f(x) = \arctan(x)$ and $x_0 = -10^6$

| Method | Time | Steps | Converges |
|--------|------|-------|-----------|
| Second order method | 50694 $\mu s$ | 636630 | yes |
| Third order method | 46893 $\mu s$ | 636630 | yes |
| Fourth order method | 59850 $\mu s$ | 349327 | yes |
| Newton-Raphson [@ehiwario_comparative_2014_comp_new_bisec_secant] | | | No |
| Newton two-step [@amat_third-order_2007_kantorovich_conditions] | | | No |
| Halley [@amat_third-order_2007_kantorovich_conditions] | | | No |
| Chebyshev [@amat_third-order_2007_kantorovich_conditions] | | | No |
| Fourth order [@said_solaiman_optimal_2019_king] | | | No |
| Eighth order [@said_solaiman_optimal_2019_king] | | | No |

## Results of the methods for $f(x) = \sqrt{|x|}-4$ and $x_0 = -10^{-6}$

| Method | Time | Steps | Converges |
|--------|------|-------|-----------|
| Second order method | 11 $\mu s$ | 258 | yes |
| Third order method | $<1$ $\mu s$ | 2 | yes |
| Fourth order method | 1 $\mu s$ | 2 | yes |
| Newton-Raphson [@ehiwario_comparative_2014_comp_new_bisec_secant] | 30 $\mu s$ | 252 | Yes |
| Newton two-step [@amat_third-order_2007_kantorovich_conditions] | | | No |
| Halley [@amat_third-order_2007_kantorovich_conditions] | | | No |
| Chebyshev [@amat_third-order_2007_kantorovich_conditions] | 1 $\mu s$ | 2 | Yes |
| Fourth order [@said_solaiman_optimal_2019_king] | | | No |
| Eighth order [@said_solaiman_optimal_2019_king] | | | No |
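The results above can be reproduced with a few lines of code. The following Python sketch is our own illustration, not the authors' implementation: the stopping criterion, the tolerance and hence the exact step counts and timings are assumptions. It applies the fourth order method to $g(x)=\arctan(x)$, whose root $l=0$ satisfies $g'(0)=1$, $g''(0)=0$ and $g'''(0)=-2$.

```python
import math

def fourth_order_root(g, g1, g2, g3, x0, tol=1e-14, max_iter=10_000_000):
    """Iterate U_{n+1} = U_n - g/g1 + g^2*g2/(2*g1^3)
                         - g^3*(3*g2^2 - g1*g3)/(6*g1^5),
    where g1, g2, g3 stand for g'(l), g''(l), g'''(l),
    computed once and for all at the (unknown) root l."""
    x = x0
    for step in range(1, max_iter + 1):
        gx = g(x)  # the only functional evaluation of the step
        delta = (-gx / g1
                 + gx**2 * g2 / (2 * g1**3)
                 - gx**3 * (3 * g2**2 - g1 * g3) / (6 * g1**5))
        x += delta
        if abs(delta) < tol:  # assumed stopping rule
            return x, step
    return x, max_iter

# arctan has its root at l = 0, where g'(0) = 1, g''(0) = 0, g'''(0) = -2.
root, steps = fourth_order_root(math.atan, 1.0, 0.0, -2.0, x0=-0.9)
print(root, steps)
```

Dropping the last two correction terms in `delta` gives the second order method $U_{n+1}=U_n-\frac{g(U_n)}{g'(l)}$, with the same single evaluation of $g$ per step; starting from $x_0=-10^6$ the iterations still converge, but only after a very large number of steps, in line with the second table.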
# Conclusion

We have presented in this article a family of methods with high efficiency. Their advantage is that they require at each step only the evaluation of $g$; the other components of the formulas are computed only once. This gives them, for an order of convergence $n$, an efficiency of $n$. Moreover, these methods can exhibit a wide domain of convergence. Because they use only one functional evaluation at each step, these methods are particularly well suited to root finding problems where the evaluation of $g$ and its derivatives is costly, for instance for physical models.

# Acknowledgments

Vincent CAILLIEZ and Komi AGBALENYO contributed equally to this work.
arxiv_math
{ "id": "2309.00698", "title": "A novel Newton-Raphson style root finding algorithm", "authors": "Komi Agbalenyo, Vincent Cailliez, Jonathan Cailliez", "categories": "math.NA cs.NA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  In this paper we introduce new spaces of holomorphic functions on the pointed unit disc of $\mathbb C$ that generalize classical Bergman spaces. We prove some fundamental properties of these spaces and their dual spaces. We finish the paper by extending the Hardy-Littlewood and Fejér-Riesz inequalities to these spaces, with an application to Toeplitz operators.
author:
- Noureddine Ghiloufi $^\star$
- Mohamed Zaway $^{\star\star}$
title: Meromorphic Bergman spaces
---

$^\star$ University of Gabes, Faculty of Sciences of Gabes, LR17ES11 Mathematics and Applications laboratory, 6072, Gabes, Tunisia.\
$^{\star\star}$ Mathematics Department, Faculty of Sciences and Humanities in Dawadmi, Shaqra University, 11911, Saudi Arabia.\
Irescomath Laboratory, Gabes University, 6072 Zrig Gabes, Tunisia.

# Introduction and preliminary results

Since the seventies of the last century, the notion of Bergman spaces has seen increasing use in mathematics, essentially in complex analysis and geometry. The fundamental concept of this notion is the Bergman kernel. This kernel was first computed for the unit disc $\mathbb D$ in $\mathbb C$ and was then determined for any simply connected domain thanks to the famous Riemann mapping theorem. However, the determination of the Bergman kernels of domains in $\mathbb C^n$ is more delicate: it is known for some types of domains and still unknown in general to this day. In this paper we generalize most properties of Bergman spaces of the unit disk by introducing new spaces of holomorphic functions on the pointed unit disc $\mathbb D^*$ that are square integrable with respect to a probability measure $d\mu_{\alpha,\beta}$ for some $\alpha,\beta>-1$. In fact the classical Bergman space corresponds to the case $\beta=0$ (see [@He-KO-Zh] for more details). We call these new spaces meromorphic Bergman spaces; indeed any element of such a space is a meromorphic function which has $0$ as a pole of order controlled by the parameter $\beta$. The originality of our idea is that the Bergman kernels of these spaces may have zeros in the unit disk, essentially when $\beta$ is not an integer. This problem will be discussed in a separate paper as a continuation of the present one. For this reason we concentrate here on the topological properties of these spaces and prove some well-known inequalities.\
Throughout this paper, $\mathbb D(a,r)$ will be the disc of $\mathbb C$ with center $a$ and radius $r>0$. In case $a=0$, we use $\mathbb D(r)$ (resp. $\mathbb D$) instead of $\mathbb D(0,r)$ (resp. $\mathbb D(0,1)$). We set $\mathbb S(r):=\partial \mathbb D(r)$ the circle and $\mathbb D^*:=\mathbb D\smallsetminus\{0\}$.
For every $-1<\alpha,\beta<+\infty$, we consider the positive measure $\mu_{\alpha,\beta}$ on $\mathbb D$ defined by $$d\mu_{\alpha,\beta}(z):=\frac{1}{\mathscr B(\alpha+1,\beta+1)}|z|^{2\beta}(1-|z|^2)^\alpha dA(z)$$ where $\mathscr B$ is the beta function defined by $$\mathscr B(s,t)=\int_0^1x^{s-1}(1-x)^{t-1}dx=\frac{\Gamma(s)\Gamma(t)}{\Gamma(s+t)},\quad \forall\; s,t>0$$ and $$dA(z)=\frac{1}{\pi}dx dy=\frac{1}{\pi}rdrd\theta,\quad z=x+iy=re^{i\theta}$$ is the normalized area measure on $\mathbb D$.\
The general aim of this paper is to study the properties of the Bergman type space $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ defined for $0<p<+\infty$ as the set of holomorphic functions on $\mathbb D^*$ that belong to the space $$L^p(\mathbb D,d\mu_{\alpha,\beta})=\{f:\mathbb D\longrightarrow \mathbb C;\hbox{ measurable function such that } \|f\|_{\alpha,\beta,p}<+\infty\}$$ where $$\|f\|_{\alpha,\beta,p}^p:=\int_{\mathbb D}|f(z)|^pd\mu_{\alpha,\beta}(z).$$ When $1\leq p<+\infty$, the space $\left(L^p(\mathbb D,d\mu_{\alpha,\beta}),\|.\|_{\alpha,\beta,p}\right)$ is a Banach space; however for $0<p<1$, the space $L^p(\mathbb D,d\mu_{\alpha,\beta})$ is a complete metric space where the metric is given by $d(f,g)=\|f-g\|_{\alpha,\beta,p}^p$. The following proposition will be useful throughout the paper.

**Proposition 1**. *For every $0<r<1$ and $0<\varepsilon<1$ there exists $c_\varepsilon(r)=c_{\varepsilon,\alpha,\beta}(r)>0$ such that for any $0<p<+\infty$ and $f\in \mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ we have $$|f(z)|^p\leq \frac{ \mathscr B(\alpha+1,\beta+1) }{c_\varepsilon(r)}\|f\|_{\alpha,\beta,p}^p,\qquad \forall z\in \mathbb S(r).$$ One can choose $c_\varepsilon(r)=r_\varepsilon^2\mathfrak{a}_\varepsilon(r)\mathfrak{b}_\varepsilon(r)$ with $r_\varepsilon=\varepsilon \min(r,1-r)$, $$\mathfrak{a}_\varepsilon(r):=\left\{ \begin{array}{lcl} \displaystyle\left[1-(r+r_\varepsilon)^2\right]^\alpha& if& \alpha\geq 0\\ \\ \displaystyle\left[1-(r-r_\varepsilon)^2\right]^\alpha& if& -1<\alpha<0 \end{array}\right.$$ and $$\mathfrak{b}_\varepsilon(r):=\left\{ \begin{array}{lcl} \displaystyle(r-r_\varepsilon)^{2\beta}& if& \beta\geq 0\\ \\ \displaystyle(r+r_\varepsilon)^{2\beta}& if& -1<\beta<0 \end{array}\right.$$*

*Proof.* Let $0<r<1$, $0<\varepsilon<1$ and $0<p<+\infty$ be fixed reals. Let $f\in \mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $z\in \mathbb S(r)$. We set $r_\varepsilon=\varepsilon \min(r,1-r)$. It is easy to see that $\overline{\mathbb D}(z,r_\varepsilon)\subset \mathbb D^*$; so thanks to the subharmonicity of $|f|^p$, we obtain $$\label{eq 1.1} \begin{array}{lcl} \displaystyle|f(z)|^p&\leq& \displaystyle\frac{1}{r_\varepsilon^2}\int_{\mathbb D(z,r_\varepsilon)}|f(w)|^pdA(w)\\ &\leq & \displaystyle\frac{\mathscr B(\alpha+1,\beta+1)}{r_\varepsilon^2}\int_{\mathbb D(z,r_\varepsilon)} \frac{|f(w)|^p}{|w|^{2\beta}(1-|w|^2)^\alpha} d\mu_{\alpha,\beta}(w)\\ \end{array}$$ If $w\in\mathbb D(z,r_\varepsilon)$ then $r-r_\varepsilon\leq |w|\leq r+r_\varepsilon$.
Thus we obtain $$|w|^{2\beta}\geq \mathfrak{b}_\varepsilon(r):=\left\{ \begin{array}{lcl} \displaystyle(r-r_\varepsilon)^{2\beta}& if& \beta\geq 0\\ \\ \displaystyle(r+r_\varepsilon)^{2\beta}& if& -1<\beta<0 \end{array}\right.$$ and $$(1-|w|^2)^\alpha\geq\mathfrak{a}_\varepsilon(r):=\left\{ \begin{array}{lcl} \displaystyle\left[1-(r+r_\varepsilon)^2\right]^\alpha& if& \alpha\geq 0\\ \\ \displaystyle\left[1-(r-r_\varepsilon)^2\right]^\alpha& if& -1<\alpha<0 \end{array}\right.$$ It follows that Inequality ([\[eq 1.1\]](#eq 1.1){reference-type="ref" reference="eq 1.1"}) gives $$\begin{array}{lcl} \displaystyle|f(z)|^p&\leq& \displaystyle\frac{\mathscr B(\alpha+1,\beta+1)}{r_\varepsilon^2\mathfrak{a}_\varepsilon(r)\mathfrak{b}_\varepsilon(r)}\int_{\mathbb D(z,r_\varepsilon)} |f(w)|^p d\mu_{\alpha,\beta}(w)\\ &\leq& \displaystyle\frac{\mathscr B(\alpha+1,\beta+1)}{r_\varepsilon^2\mathfrak{a}_\varepsilon(r)\mathfrak{b}_\varepsilon(r)}\|f\|_{\alpha,\beta,p}^p. \end{array}$$ ◻ Using the previous proof, one can improve the previous proposition as follows ***Remark** 1*. For any $n\in\mathbb N$ and $0<r<1$, there exists $c=c(n,r,\alpha,\beta)>0$ such that for every $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ we have $$|f^{(n)}(z)|^p\leq c\|f\|_{\alpha,\beta,p}^p,\qquad \forall z\in \mathbb S(r).$$ As a first consequence of Proposition [Proposition 1](#p1){reference-type="ref" reference="p1"}, we have **Corollary 1**. *For every $-1<\alpha,\beta<+\infty$ and $0<p<+\infty$, the space $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ is closed in $L^p(\mathbb D,\mu_{\alpha,\beta})$ and for any $z\in\mathbb D^*$, the linear form $\delta_z:\mathcal A_{\alpha,\beta}^p(\mathbb D^*)\longrightarrow \mathbb C$ defined by $\delta_z(f)=f(z)$ is bounded on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$.* *Proof.* As $L^p(\mathbb D,d\mu_{\alpha,\beta})$ is complete, it suffices to consider a sequence $(f_n)_n\subset \mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ that converges to $f\in L^p(\mathbb D,d\mu_{\alpha,\beta})$ and to prove that $f\in \mathcal A_{\alpha,\beta}^p(\mathbb D^*)$. Thanks to Proposition [Proposition 1](#p1){reference-type="ref" reference="p1"}, the sequence $(f_n)_n$ converges uniformly to $f$ on every compact subset of $\mathbb D^*$. Hence the function $f$ is holomorphic on $\mathbb D^*$ and we conclude that $f\in \mathcal A_{\alpha,\beta}^p(\mathbb D^*)$.\ For the second statement, one can see that $\delta_z$ is a linear functional well defined on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$. For the continuity of $\delta_z$, thanks to Proposition [Proposition 1](#p1){reference-type="ref" reference="p1"}, for every $z\in\mathbb D^*$, there exists $c>0$ such that for every $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ we have $|\delta_z(f)|=|f(z)|\leq c\|f\|_{\alpha,\beta,p}$. Thus the linear functional $\delta_z$ is continuous on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$. ◻ In the following we give some immediate properties:\ - if $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ then $0$ can't be an essential singularity for $f$, hence either $0$ is removable for $f$ (so $f$ is holomorphic on $\mathbb D$) or $0$ is a pole for $f$ with order $\nu_f=\nu_f(0)$ that satisfies $$\label{eq 1.2} \nu_f\leq m_{p,\beta}=\left\{\begin{array}{lcl} \displaystyle\left\lfloor\frac{2(\beta+1)}{p}\right\rfloor & if & \displaystyle\frac{2(\beta+1)}{p}\not\in\mathbb N\\ \displaystyle\frac{2(\beta+1)}{p}-1& if & \displaystyle\frac{2(\beta+1)}{p}\in\mathbb N \end{array}\right.$$ where $\lfloor .\rfloor$ is the integer part. 
- If we set $\widetilde{f}(z)=z^{\nu_f}f(z)$ then $\widetilde{f}$ is a holomorphic function on $\mathbb D$ and $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ if and only if $\widetilde{f}\in\mathcal A_{\alpha,\beta-\frac{p\nu_f}{2}}^p(\mathbb D^*)$ and $$\|f\|_{\alpha,\beta,p}=\left(\frac{\mathscr B(\alpha+1,\beta-\frac{p\nu_f}{2}+1)}{\mathscr B(\alpha+1,\beta+1)}\right)^{\frac1p}\|\widetilde{f}\|_{\alpha,\beta-\frac{p\nu_f}{2},p}.$$ Using the two previous properties, if we replace $f$ by $z^{m_{p,\beta}}f$ in the proof of Proposition [Proposition 1](#p1){reference-type="ref" reference="p1"}, we can obtain a more sharp estimate in Proposition [Proposition 1](#p1){reference-type="ref" reference="p1"}. - If $-1<\beta<\beta'$ and $-1<\alpha<\alpha'$ then $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)\subseteq\mathcal A_{\alpha',\beta'}^p(\mathbb D^*)$ and the canonical injection is continuous. This is a consequence of the fact that we have $$\mathscr B(\alpha'+1,\beta'+1)\|f\|_{\alpha',\beta',p}\leq \mathscr B(\alpha+1,\beta+1)\|f\|_{\alpha,\beta,p}$$ for every $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$. Claim that if we set $\mathbb D_\zeta:=\mathbb D\smallsetminus\{\zeta\}$ for any $\zeta\in\mathbb D$, then all results on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ can be extended to the space $\mathcal A_{\alpha,\beta}^p(\mathbb D_\zeta)$ of holomorphic functions on $\mathbb D_\zeta$ that are $p-$integrable with respect to the positive measure $|z-\zeta|^{2\beta}(1-|z|^2)^\alpha dA(z)$. Indeed $h\in\mathcal A_{\alpha,\beta}^p(\mathbb D_\zeta)$ if and only if $h\circ\varphi_\zeta\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ where $\varphi_\zeta(z)=\frac{\zeta-z}{1-\overline{\zeta}z}$. # Meromorphic Bergman Kernels In the case $p=2$ we have $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ is a Hilbert space and $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)=\mathcal A_{\alpha,m}^2(\mathbb D^*)$ for every $\beta\in]m-1,m]$ with $m\in\mathbb N$. If we set $$\label{eq 2.1} e_n(z)=\sqrt{\frac{\mathscr B(\alpha+1,\beta+1)}{\mathscr B(\alpha+1,n+\beta+1)}}\ z^n$$ for every $n\geq -m$, then the sequence $(e_n)_{n\geq -m}$ is a Hilbert basis of $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$. Furthermore, if $f,g\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ with $$f(z)=\sum_{n= -m}^{+\infty}a_nz^n,\quad g(z)=\sum_{n= -m}^{+\infty}b_nz^n$$ then $$\langle f,g\rangle_{\alpha,\beta}=\sum_{n= -m}^{+\infty}a_n\overline{b}_n\frac{\mathscr B(\alpha+1,n+\beta+1)}{\mathscr B(\alpha+1,\beta+1)}$$ where $\langle .,.\rangle_{\alpha,\beta}$ is the inner product in $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ inherited from $L^2(\mathbb D,d\mu_{\alpha,\beta})$. **Lemma 1**. *Let $-1<\alpha<+\infty$ and $m\in\mathbb N$. 
Then the reproducing (Bergman) kernel $\mathbb{K}_{\alpha,m}$ of $\mathcal A_{\alpha,m}^2(\mathbb D^*)$ is given by $$\displaystyle\mathbb{K}_{\alpha,m}(w,z)=\frac{(\alpha+1)\mathscr B(\alpha+1,m+1)}{(w\overline{z})^m(1-w\overline{z})^{2+\alpha}}.$$* *Proof.* The sequence $(e_n)_{n\geq -m}$ given by ([\[eq 2.1\]](#eq 2.1){reference-type="ref" reference="eq 2.1"}) is a Hilbert basis of $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$, hence the reproducing kernel of $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ is given by $$\begin{array}{lcl} \mathbb{K}_{\alpha,\beta}(w,z)&=&\displaystyle\sum_{n=-m}^{+\infty}e_n(w)\overline{e_n(z)}=\sum_{n=-m}^{+\infty}\frac{\mathscr B(\alpha+1,\beta+1)}{\mathscr B(\alpha+1,n+\beta+1)}w^n\overline{z}^n\\ &=&\displaystyle\frac{1}{(w\overline{z})^m}\sum_{n=0}^{+\infty}\frac{\mathscr B(\alpha+1,\beta+1)}{\mathscr B(\alpha+1,n+\beta-m+1)}(w\overline{z})^n \end{array}$$ The computation of this kernel in the general case is more complicated. However, in our case for $\beta=m$, we obtain $$\begin{array}{lcl} \mathbb{K}_{\alpha,m}(w,z)&=&\displaystyle\frac{1}{(w\overline{z})^m}\sum_{n=0}^{+\infty}\frac{\mathscr B(\alpha+1,m+1)}{\mathscr B(\alpha+1,n+1)}(w\overline{z})^n\\ &=&\displaystyle\frac{(\alpha+1)\mathscr B(\alpha+1,m+1)}{(w\overline{z})^m(1-w\overline{z})^{2+\alpha}}. \end{array}$$ ◻ Here we give some fundamental properties of the Bergman kernel as consequences of Lemma [Lemma 1](#l1){reference-type="ref" reference="l1"}. **Corollary 2**. *Let $-1<\alpha<+\infty$ and $m\in\mathbb N$. Let $\mathbb{P}_{\alpha,m}$ be the orthogonal projection from $L^2(\mathbb D,d\mu_{\alpha,m})$ onto $\mathcal A_{\alpha,m}^2(\mathbb D^*)$. Then for every $f\in L^2(\mathbb D,d\mu_{\alpha,m})$ we have $$\mathbb{P}_{\alpha,m}f(z)=(\alpha+1)\mathscr B(\alpha+1,m+1)\int_{\mathbb D}\frac{f(w)} {(z\overline{w})^m(1-z\overline{w})^{2+\alpha}}d\mu_{\alpha,m}(w).$$* *Proof.* This is a simple consequence of Lemma [Lemma 1](#l1){reference-type="ref" reference="l1"} and the fact that for every $f\in L^2(\mathbb D,d\mu_{\alpha,m})$ we have $\mathbb{P}_{\alpha,m}f(z)=\langle f,\mathbb{K}_{\alpha,m}(.,z)\rangle_{\alpha,m}.$ ◻ Using the density of $\mathcal A_{\alpha,m}^2(\mathbb D^*)$ in $\mathcal A_{\alpha,m}^1(\mathbb D^*)$, one can prove the following corollary: **Corollary 3**. *Let $-1<\alpha<+\infty$ and $m\in\mathbb N$. Then for every $f\in \mathcal A_{\alpha,m}^1(\mathbb D^*)$ we have $$f(z)=(\alpha+1)\mathscr B(\alpha+1,m+1)\int_{\mathbb D}\frac{f(w)}{(z\overline{w})^m(1-z\overline{w})^{2+\alpha}}d\mu_{\alpha,m}(w).$$* The following result is well known in general, its proof is based essentially on the fact that $\mathbb{K}_{\alpha,\beta}(z,z)\neq 0$ for every $z\in\mathbb D^*$. **Proposition 2**. *Let $-1<\alpha,\beta<+\infty$ and $\mathbb{K}_{\alpha,\beta}$ be the reproducing (Bergman) kernel of $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$. 
Then for every $z\in\mathbb D^*$ we have $\mathbb{K}_{\alpha,\beta}(z,z)>0$ and satisfies $$\label{eq 2.2} \begin{array}{lcl} \displaystyle\mathbb{K}_{\alpha,\beta}(z,z)&=&\displaystyle\sup\left\{|f(z)|^2;\ f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*),\ \|f\|_{\alpha,\beta,2}\leq1\right\}\\ &=&\displaystyle\sup\left\{\frac{1}{\|f\|_{\alpha,\beta,2}};\ f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*),\ f(z)=1\right\} \end{array}$$ In particular, the norm of the Dirac form $\delta_z$ on $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ is given by $$\|\delta_z\|=\|\mathbb{K}_{\alpha,\beta}(.,z)\|_{\alpha,\beta,2}=\sqrt{\mathbb{K}_{\alpha,\beta}(z,z)}.$$* One can find the proof of the first equality in Krantz book [@Kr], however the second one is due to Kim [@Ki]. For the completeness of our paper we give the proof. *Proof.* Thanks to the proof of Lemma [Lemma 1](#l1){reference-type="ref" reference="l1"}, we have $\mathbb{K}_{\alpha,\beta}(z,z)>0$ for every $z\in\mathbb D^*$. To prove the first equality in ([\[eq 2.2\]](#eq 2.2){reference-type="ref" reference="eq 2.2"}), we fix $z\in\mathbb D^*$ and we consider $$\mathscr Q(z):=\sup\left\{|f(z)|^2;\ f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*),\ \|f\|_{\alpha,\beta,2}\leq1\right\}.$$ Let $f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ such that $\|f\|_{\alpha,\beta,2}\leq1$. Then thanks to the Cauchy-Schwarz inequality, $$|f(z)|^2=|\langle f,\mathbb{K}_{\alpha,\beta}(.,z)\rangle_{\alpha,\beta}|^2\leq \|f\|_{\alpha,\beta,2}^2\|\mathbb{K}_{\alpha,\beta}(.,z)\|_{\alpha,\beta,2}^2\leq \mathbb{K}_{\alpha,\beta}(z,z).$$ It follows that $\mathscr Q(z)\leq \mathbb{K}_{\alpha,\beta}(z,z)$. Conversely, we set $$g(\xi)=\frac{\mathbb{K}_{\alpha,\beta}(\xi,z)}{\sqrt{\mathbb{K}_{\alpha,\beta}(z,z)}},\quad \xi\in\mathbb D^*.$$ Hence we have $g\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$, $\|g\|_{\alpha,\beta,2}=1$ and $|g(z)|^2=\mathbb{K}_{\alpha,\beta}(z,z)$ and the converse inequality $\mathscr Q(z)\geq \mathbb{K}_{\alpha,\beta}(z,z)$ is proved.\ Now to prove the second equality in ([\[eq 2.2\]](#eq 2.2){reference-type="ref" reference="eq 2.2"}), we let $$\mathscr M(z):=\inf\left\{\|f\|_{\alpha,\beta,2};\ f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*),\ f(z)=1\right\}$$ If we set $$h(\xi)=\frac{\mathbb{K}_{\alpha,\beta}(\xi,z)}{\mathbb{K}_{\alpha,\beta}(z,z)},\quad \xi\in\mathbb D^*$$ then $h(z)=1$ and $h\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$. Indeed, $$\begin{array}{lcl} \|h\|_{\alpha,\beta,2}^2&=&\displaystyle\int_{\mathbb D}|h(\xi)|^2d\mu_{\alpha,\beta}(\xi)=\int_{\mathbb D} \frac{\mathbb{K}_{\alpha,\beta}(\xi,z)}{\mathbb{K}_{\alpha,\beta}(z,z)} \frac{\mathbb{K}_{\alpha,\beta}(z,\xi)}{\mathbb{K}_{\alpha,\beta}(z,z)}d\mu_{\alpha,\beta}(\xi) \\ &=&\displaystyle\frac{1}{\mathbb{K}_{\alpha,\beta}(z,z)^2}\mathbb{K}_{\alpha,\beta}(z,z)=\frac{1}{\mathbb{K}_{\alpha,\beta}(z,z)}. 
\end{array}$$ It follows that $$\mathscr M(z)\leq \frac{1}{\mathbb{K}_{\alpha,\beta}(z,z)}.$$ Conversely, for every $f\in\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ such that $f(z)=1$ we have $$|f(\zeta)|^2=|\langle f,\mathbb{K}_{\alpha,\beta}(.,\zeta)\rangle_{\alpha,\beta}|^2\leq \|f\|_{\alpha,\beta,2}^2\mathbb{K}_{\alpha,\beta}(\zeta,\zeta).$$ Thus we obtain $$\frac{|f(\zeta)|^2}{\mathbb{K}_{\alpha,\beta}(\zeta,\zeta)}\leq \|f\|_{\alpha,\beta,2}^2,\quad \forall\; \zeta\in\mathbb D^*.$$ In particular, for $\zeta=z$, $$\frac{1}{\mathbb{K}_{\alpha,\beta}(z,z)}\leq \|f\|_{\alpha,\beta,2}^2.$$ We conclude that $$\frac{1}{\mathbb{K}_{\alpha,\beta}(z,z)}\leq \mathscr M(z).$$ ◻ # Duality of meromorphic Bergman spaces The aim of this part is to prove that the dual of $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ is related to $\mathcal A_{\alpha,\beta}^q(\mathbb D^*)$ with $\frac1p+\frac1q=1$. This will be a consequence of the main result (Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"}). But to prove the main result we need the following lemma: **Lemma 2**. *For every $-1<\sigma, \gamma<+\infty$ we set $$I_\omega(z)=\int_{\mathbb D}\frac{(1-|w|^2)^{\sigma}|w|^{2\gamma}}{|1-z\overline{w}|^{2+\sigma+\omega}}dA(w).$$ Then $I_\omega$ is continuous on $\mathbb D$ and $$I_\omega(z)\sim \left\{ \begin{array}{lcl} 1&if& \omega<0\\ \displaystyle\log\frac{1}{1-|z|^2}&if& \omega=0\\ \displaystyle\frac{1}{(1-|z|^2)^\omega}&if& \omega>0 \end{array}\right.$$ when $|z|\longrightarrow 1^-$ where $\varphi\sim \psi$ means that there exist $0<c_1<c_2$ such that we have $c_1\varphi(z)\leq \psi(z)\leq c_2\varphi(z)$.* *Proof.* The proof is similar to [@He-KO-Zh Theo.1.7]. ◻ **Theorem 1**. *For every $-1<\alpha, a, b<+\infty$ and $m\in\mathbb N$, we consider the two integral operators $T$ and $S$ defined by $$\begin{array}{lcl} Tf(z)&=&\displaystyle\frac{1}{z^m}\int_{\mathbb D}\frac{f(w)(1-|w|^2)^{\alpha-a} w^m}{|w|^{2b}(1-z\overline{w})^{2+\alpha}}d\mu_{a,b}(w)\\ Sf(z)&=&\displaystyle\frac{1}{|z|^m}\int_{\mathbb D}\frac{f(w)(1-|w|^2)^{\alpha-a} |w|^{m-2b}}{|1-z\overline{w}|^{2+\alpha}}d\mu_{a,b}(w). \end{array}$$ Then for every $1\leq p<+\infty$, the following assertions are equivalent:* 1. *$T$ is bounded on $L^p(\mathbb D,d\mu_{a,b})$* 2. *$S$ is bounded on $L^p(\mathbb D,d\mu_{a,b})$* 3. *$p(\alpha+1)>a+1$ and $\left\{\begin{array}{lcl} \displaystyle m-2<2b\leq m& if& p=1\\ \displaystyle mp-2<2b<mp-2+2p& if& p>1 \end{array}\right.$* *Proof.* $"(2)\Longrightarrow (1)"$ is obvious.\ $"(1)\Longrightarrow (2)"$ can be deduced using the transformation $$\varOmega_z f(w)=\frac{(1-z\overline{w})^{2+\alpha}|w|^m}{|1-z\overline{w}|^{2+\alpha}w^m}f(w).$$ $"(2)\Longrightarrow (3)"$ Now assume that $S$ is bounded on $L^p(\mathbb D,d\mu_{a,b})$. If we apply $S$ to $f_N(z)=(1-|z|^2)^N$ for $N$ large enough, we obtain $$\|Sf_N\|_{a,b,p}^p=\displaystyle\int_{\mathbb D}(1-|z|^2)^a|z|^{2b-mp}\left(\int_{\mathbb D}\frac{(1-|w|^2)^{\alpha+N} |w|^{m}}{|1-z\overline{w}|^{2+\alpha}}dA(w)\right)^pdA(z)$$ is finite. Thanks to Lemma [Lemma 2](#l2){reference-type="ref" reference="l2"}, we obtain $b>\frac{mp}{2}-1$.\ To prove the other inequalities, we need $S^\star$ the adjoint operator of $S$ with respect to the inner product $\langle.,.\rangle_{a,b}$. 
It is given by $$\begin{array}{lcl} S^\star g(w)&=&\displaystyle(1-|w|^2)^{\alpha-a} |w|^{m-2b}\int_{\mathbb D}\frac{g(z)}{|z|^m|1-z\overline{w}|^{2+\alpha}}d\mu_{a,b}(z)\\ &=&\displaystyle(1-|w|^2)^{\alpha-a} |w|^{m-2b}\int_{\mathbb D}\frac{g(z)|z|^{2b-m}(1-|z|^2)^a}{|1-z\overline{w}|^{2+\alpha}}dA(z). \end{array}$$ We distinguish two cases: 1. First case $p=1$: $S$ is bounded on $L^1(\mathbb D,d\mu_{a,b})$ gives $S^\star$ is bounded on $L^\infty(\mathbb D,d\mu_{a,b})$. By applying $S^\star$ on the constant function $g\equiv 1$, we obtain $$\sup_{w\in\mathbb D^*}(1-|w|^2)^{\alpha-a} |w|^{m-2b}\int_{\mathbb D}\frac{|z|^{2b-m}(1-|z|^2)^a}{|1-z\overline{w}|^{2+\alpha}}dA(z)<+\infty.$$ Thanks to Lemma [Lemma 2](#l2){reference-type="ref" reference="l2"}, we get $m-2b\geq0$ and $\alpha-a>0$. The desired inequalities are proved. 2. Second case $p>1$: Let $q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. Again by applying $S^\star$ on the function $f_N$ for $N$ large enough, we obtain $$\begin{array}{l} \|S^\star f_N\|_{a,b,q}^q=\\ =\displaystyle\int_{\mathbb D}(1-|w|^2)^{a+q(\alpha-a)} |w|^{2b+(m-2b)q}\left(\int_{\mathbb D}\frac{|z|^{2b-m}(1-|z|^2)^{a+N}}{|1-z\overline{w}|^{2+\alpha}}dA(z)\right)^qdA(w) \end{array}$$ is finite and hence all inequalities $$\displaystyle\frac{mp}{2}-1<b<\frac{mp}{2}-1+p$$ and $p(\alpha+1)>a+1$ hold. $"(3)\Longrightarrow (2)"$ We start by the case $p=1$. We assume that $m-2<2b\leq m$ and $\alpha>a$. Using Lemma [Lemma 2](#l2){reference-type="ref" reference="l2"}, one can prove easily the boundedness of $S$ on $L^1(\mathbb D,d\mu_{a,b})$.\ Now for $p>1$, to prove the boundedness of $S$ on $L^p(\mathbb D,d\mu_{a,b})$ we will use the Schur's test. We set $$h(z)=\frac1{|z|^t(1-|z|^2)^s},\quad \hbox{and}\quad \kappa(z,w)= \frac{(1-|w|^2)^{\alpha-a} |w|^{m-2b}}{|z|^m|1-z\overline{w}|^{2+\alpha}}.$$ Thanks to Lemma [Lemma 2](#l2){reference-type="ref" reference="l2"}, if $$\label{eq 3.1} \frac mq\leq t<\frac{m+2}q,\quad 0<s<\frac{\alpha+1}q$$ then $$\begin{array}{lcl} \displaystyle\int_{\mathbb D} \kappa(z,w)h(w)^qd\mu_{a,b}(w)&=&\displaystyle\frac1{|z|^m}\int_{\mathbb D}\frac{(1-|w|^2)^{\alpha-sq} |w|^{m-tq}}{|1-z\overline{w}|^{2+\alpha}}dA(w)\\ &\leq&\displaystyle\frac{c_1}{|z|^m(1-|z|^2)^{sq}}=c_1.|z|^{tq-m}h(z)^q\leq c_1.h(z)^q \end{array}$$ for some positive constant $c_1>0$.\ Similarly, if $$\label{eq 3.2} \frac{2b-m}p\leq t<\frac{2b-m+2}p,\quad \frac{a-\alpha}{p}<s<\frac{a+1}p$$ then $$\begin{array}{l} \displaystyle\int_{\mathbb D} \kappa(z,w)h(z)^pd\mu_{a,b}(z)\\ =\displaystyle(1-|w|^2)^{\alpha-a} |w|^{m-2b}\int_{\mathbb D} \frac{|z|^{2b-m-tp}(1-|z|^2)^{a-sp}}{|1-z\overline{w}|^{2+\alpha}}dA(z)\\ \leq\displaystyle c_2.\frac{|w|^{m-2b}}{(1-|w|^2)^{sp}}=c_2.|w|^{m-2b+tp}h(w)^p\leq c_2.h(w)^p \end{array}$$ with $c_2>0$. Thanks to the hypothesis given in $(3)$, we have $$\left]\frac{m}{q},\frac{m+2}q\right[\cap\left]\frac{2b-m}p,\frac{2b-m+2}p\right[\neq\emptyset,\quad \left]0,\frac{\alpha+1}q\right[\cap \left]\frac{a-\alpha}{p},\frac{a+1}p\right[\neq\emptyset.$$ This proves the existence of $t$ and $s$ satisfying ([\[eq 3.1\]](#eq 3.1){reference-type="ref" reference="eq 3.1"}) and ([\[eq 3.2\]](#eq 3.2){reference-type="ref" reference="eq 3.2"}). Thanks to Schur's test, $S$ is bounded on $L^p(\mathbb D,d\mu_{a,b})$. ◻ **Theorem 2**. 
*For every $1<p<+\infty$ and $-1<a,b<+\infty$, the topological dual of $\mathcal A_{a,b}^p(\mathbb D^*)$ is the space $\mathcal A_{a,b}^q(\mathbb D^*)$ under the integral pairing $$\langle f,g\rangle_{a,b}=\int_{\mathbb D}f(z)\overline{g(z)}d\mu_{a,b}(z),\quad \forall\; f\in \mathcal A_{a,b}^p(\mathbb D^*),\ g\in \mathcal A_{a,b}^q(\mathbb D^*),$$ where $q$ is the conjugate exponent of $p$.* *Proof.* Thanks to Hölder's inequality, every function $g\in \mathcal A_{a,b}^q(\mathbb D^*)$ defines a bounded linear form on $\mathcal A_{a,b}^p(\mathbb D^*)$ via the above integral pairing. Conversely, let $G$ be a bounded linear form on $\mathcal A_{a,b}^p(\mathbb D^*)$. Then, thanks to the Hahn-Banach extension theorem, one can extend $G$ to a bounded linear form on $L^p(\mathbb D, d\mu_{a,b})$ (still denoted by $G$) with the same norm. By duality, there exists $\psi\in L^q(\mathbb D, d\mu_{a,b})$ such that $$G(f)=\langle f,\psi\rangle_{a,b},\quad \forall\; f\in \mathcal A_{a,b}^p(\mathbb D^*).$$ Note that if $m=m_{p,b}$ is given by ([\[eq 1.2\]](#eq 1.2){reference-type="ref" reference="eq 1.2"}), then, thanks to Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"}, $\mathbb P_{a,m}$ maps $L^p(\mathbb D, d\mu_{a,b})$ continuously onto $\mathcal A_{a,b}^p(\mathbb D^*)$ and $\mathbb P_{a,m}f=f$ for every $f\in\mathcal A_{a,b}^p(\mathbb D^*)$. It follows that $$G(f)=\langle f,\psi\rangle_{a,b}=\langle \mathbb P_{a,m}f,\psi\rangle_{a,b}=\langle f,\mathbb P_{a,m}^\star\psi\rangle_{a,b},\quad \forall\; f\in \mathcal A_{a,b}^p(\mathbb D^*).$$ If we set $g=\mathbb P_{a,m}^\star\psi$, then $g\in \mathcal A_{a,b}^q(\mathbb D^*)$ and $G(f)=\langle f,g\rangle_{a,b}$ for every $f\in \mathcal A_{a,b}^p(\mathbb D^*).$ ◻ # Inequalities on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ The aim here is to extend the two famous Hardy-Littlewood and Fejér-Riesz inequalities to our new spaces. These inequalities were first proved for Hardy spaces and later for Bergman spaces, together with some applications; here we give a single application, to Toeplitz operators. To this end, for a holomorphic function $f$ on $\mathbb D^*$ and $0<r<1$, we consider the integral means on the circle of radius $r$: $$\begin{array}{lcl} M_p(r,f)&:=&\displaystyle\left(\frac{1}{2\pi}\int_0^{2\pi}|f(re^{i\theta})|^pd\theta\right)^{\frac1p},\\ M_\infty(r,f)&:=&\displaystyle\sup_{\theta\in[0,2\pi]}|f(re^{i\theta})|. \end{array}$$ We set $$\mathscr J(r)=\mathscr J_{\alpha,\beta,p}(r):=\frac{2r^{pm_{p,\beta}}}{\mathscr B(\alpha+1,\beta+1)}\int_r^1t^{2\beta-pm_{p,\beta}+1}(1-t^2)^\alpha dt.$$ ## Hardy-Littlewood inequality To prove the Hardy-Littlewood inequality on $\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$, we first need the following lemma: **Lemma 3**. *For every $p>1$, $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $0<r<1$, we have $$M_p(r,f)\leq \frac{\|f\|_{\alpha,\beta,p}}{\mathscr J(r)^{\frac1p}}.$$ In particular, we have $$M_p(r,f)\leq \frac{\kappa_1\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m_{p,\beta})}(1-r^2)^{\frac{\alpha+1}p}},$$ where $\kappa_1=\left((\alpha+1)\mathscr B(\alpha+1,\beta+1)\right)^{\frac1p}.$* *Proof.* Let $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $0<r<1$. We set $F(z)=z^mf(z)$ with $m=m_{p,\beta}$.
Since $F$ is holomorphic on $\mathbb D$, the integral means $t\mapsto M_p(t,F)$ are nondecreasing, and we obtain $$\begin{array}{lcl} \displaystyle\|f\|_{\alpha,\beta,p}^p&=&\displaystyle\frac{1}{\pi \mathscr B(\alpha+1,\beta+1)}\int_0^1\int_0^{2\pi}|f(te^{i\theta})|^p(1-t^2)^{\alpha}t^{2\beta+1} dtd\theta\\ &=&\displaystyle\frac{2}{\mathscr B(\alpha+1,\beta+1)}\int_0^1M_p^p(t,f)(1-t^2)^{\alpha}t^{2\beta+1}dt\\ &=&\displaystyle\frac{2}{\mathscr B(\alpha+1,\beta+1)}\int_0^1M_p^p(t,F)(1-t^2)^{\alpha}t^{2\beta-pm+1}dt\\ &\geq&\displaystyle\frac{2}{\mathscr B(\alpha+1,\beta+1)}M_p^p(r,F)\int_r^1(1-t^2)^{\alpha}t^{2\beta-pm+1}dt\\ &=&\displaystyle M_p^p(r,f)\mathscr J(r), \end{array}$$ and the first inequality is proved. The particular case can be deduced from the following inequality: $$\begin{array}{lcl} \mathscr J(r)&=&\displaystyle\frac{r^{pm}}{\mathscr B(\alpha+1,\beta+1)}\int_{r^2}^1(1-t)^{\alpha}t^{\beta-pm/2}dt\\ & & \\ &\geq &\displaystyle\left\{\begin{array}{lcl} \displaystyle\frac{r^{2\beta}(1-r^2)^{\alpha+1}}{(\alpha+1)\mathscr B(\alpha+1,\beta+1)}& if & 2\beta \geq pm\\ & & \\ \displaystyle\frac{r^{pm}(1-r^2)^{\alpha+1}}{(\alpha+1)\mathscr B(\alpha+1,\beta+1)}& if & 2\beta<pm \end{array}\right.\\ & & \\ & \geq& \displaystyle\frac{r^{\max(2\beta,pm)}(1-r^2)^{\alpha+1}}{(\alpha+1)\mathscr B(\alpha+1,\beta+1)}. \end{array}$$ ◻ Now we can prove the Hardy-Littlewood inequality on meromorphic Bergman spaces: **Theorem 3**. *For every $1<p\leq \tau\leq \infty$, there exists a positive constant $\kappa$ such that for every $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $0<r<1$ we have $$M_\tau(r,f)\leq \frac{\kappa\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m)}(1-r^2)^{\frac{\alpha+2}p-\frac1\tau}},$$ where $m=m_{p,\beta}$.* The Hardy-Littlewood inequality is proved in [@So] for classical Bergman spaces ($\beta=0$). *Proof.* The case $\tau=p$ is simply the previous lemma. We start with the case $\tau=\infty$. Let $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $0<r<1$. We set $F(z)=z^{m}f(z)$. Again, since $F$ is holomorphic on $\mathbb D$, the Cauchy formula gives $$F(re^{i\theta})=\frac{s}{2\pi}\int_0^{2\pi}\frac{s^{m}e^{im t}f(s e^{it})}{s e^{it}-re^{i\theta}}e^{it}dt,$$ where $s=\frac{1+r}2$. Applying Hölder's inequality $(\frac1p+\frac1{q}=1)$ and Lemma [Lemma 3](#l3){reference-type="ref" reference="l3"}, we obtain $$\begin{array}{lcl} r^{m}|f(re^{i\theta})|&\leq & \displaystyle\left(\frac1{2\pi}\int_0^{2\pi}s^{pm}|f(s e^{it})|^pdt\right)^{\frac1p} \left(\frac1{2\pi}\int_0^{2\pi}\frac{s^{q}}{|s e^{it}-re^{i\theta}|^{q}}dt\right)^{\frac1{q}}\\ &\leq&\displaystyle s^{m}M_p(s,f)\left(\frac{\kappa_2}{\left(\frac{1-r}{1+r}\right)^{q-1}}\right)^{\frac1{q}}\\ &\leq&\displaystyle s^{m} \frac{\kappa_1\|f\|_{\alpha,\beta,p}}{s^{\max(\frac{2\beta}p,m)}(1-s^2)^{\frac{\alpha+1}p}} \frac{\kappa_3}{(1-r^2)^{1-\frac1{q}}}\\ &\leq&\displaystyle\frac{\kappa_4\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p-m,0)}(1-r^2)^{\frac{\alpha+2}p}}. \end{array}$$ It follows that $$M_\infty(r,f)\leq \frac{\kappa_4\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m)}(1-r^2)^{\frac{\alpha+2}p}}.$$ Let now $p<\tau<\infty$.
We have $$\begin{array}{lcl} M_\tau(r,f)&= &\displaystyle\left(\frac1{2\pi}\int_0^{2\pi}|f(re^{it})|^p|f(re^{it})|^{\tau-p}dt\right)^{\frac1\tau}\\ &\leq&\displaystyle M_\infty^{1-\frac p\tau}(r,f)M_p^{\frac p\tau}(r,f)\\ &\leq&\displaystyle\left( \frac{\kappa_4\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m)}(1-r^2)^{\frac{\alpha+2}p}}\right)^{1-\frac p\tau} \left(\frac{\kappa_1\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m)}(1-r^2)^{\frac{\alpha+1}p}}\right)^{\frac p\tau}\\ &=& \displaystyle\frac{\kappa\|f\|_{\alpha,\beta,p}}{r^{\max(\frac{2\beta}p,m)}(1-r^2)^{\frac{\alpha+2}p-\frac1\tau}}. \end{array}$$ ◻ ## Fejér-Riesz inequality The aim here is to prove a generalization of the following lemma to meromorphic Bergman spaces. **Lemma 4**. *(See [@Du]) Let $g$ be a holomorphic function in the Hardy space $H^p(\mathbb D)$. Then for any $\xi\in\mathbb C$ with $|\xi|=1$, we have $$\int_{-1}^1|g(t\xi)|^pdt\leq \frac12 \|g\|_{H^p}^p:=\frac12\int_0^{2\pi}|g(e^{i\theta})|^pd\theta.$$* **Theorem 4** (Fejér-Riesz inequality). *For every $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $\xi\in\mathbb C$ with $|\xi|=1$, we have $$\int_{-1}^1|f(t\xi)|^p\mathscr J(|t|)dt\leq \pi\|f\|_{\alpha,\beta,p}^p.$$* Note that if $\beta=0$ we recover the result of Zhu [@Zh]. *Proof.* Let $f\in\mathcal A_{\alpha,\beta}^p(\mathbb D^*)$ and $F(z)=z^{m}f(z)$ where $m=m_{p,\beta}$. If we set $F_r(z)=F(rz)$ for $0<r<1$, then $F_r\in H^p(\mathbb D)$, and thanks to Lemma [Lemma 4](#l4){reference-type="ref" reference="l4"}, for every $\xi\in\mathbb C,\ |\xi|=1$, $$\int_{-1}^1|F_r(t\xi)|^pdt\leq \frac12\int_0^{2\pi}|F_r(e^{i\theta})|^pd\theta.$$ That is, $$\label{eq 4.1} \int_{-1}^1|F(rt\xi)|^pdt\leq \frac12\int_0^{2\pi}|F(re^{i\theta})|^pd\theta.$$ Thanks to Inequality ([\[eq 4.1\]](#eq 4.1){reference-type="ref" reference="eq 4.1"}) and Fubini's theorem, we have $$\begin{array}{l} \|f\|_{\alpha,\beta,p}^p\\ =\displaystyle\frac{1}{\pi \mathscr B(\alpha+1,\beta+1)}\int_0^1\int_0^{2\pi}|f(re^{i\theta})|^pr^{2\beta+1}(1-r^2)^{\alpha} drd\theta\\ =\displaystyle\frac{1}{\pi \mathscr B(\alpha+1,\beta+1)}\int_0^1\left(\int_0^{2\pi}|F(re^{i\theta})|^pd\theta\right)r^{2\beta-pm+1}(1-r^2)^{\alpha} dr\\ \geq\displaystyle\frac{2}{\pi \mathscr B(\alpha+1,\beta+1)}\int_0^1\left(\int_{-1}^1|F(rt\xi)|^pdt\right)r^{2\beta-pm+1}(1-r^2)^{\alpha} dr\\ =\displaystyle\frac{2}{\pi \mathscr B(\alpha+1,\beta+1)}\int_0^1\left(\int_{-r}^r|F(s\xi)|^pds\right)r^{2\beta-pm}(1-r^2)^{\alpha} dr\\ =\displaystyle\frac{2}{\pi \mathscr B(\alpha+1,\beta+1)}\int_{-1}^1|F(s\xi)|^p\left(\int_{|s|}^1r^{2\beta-pm}(1-r^2)^{\alpha} dr\right)ds\\ =\displaystyle\frac{2}{\pi \mathscr B(\alpha+1,\beta+1)}\int_{-1}^1|f(s\xi)|^p\left(|s|^{pm}\int_{|s|}^1r^{2\beta-pm}(1-r^2)^{\alpha} dr\right)ds\\ \geq\displaystyle\frac{1}{\pi}\int_{-1}^1|f(s\xi)|^p\mathscr J(|s|)ds. \end{array}$$ ◻ As an application of the Fejér-Riesz inequality to Toeplitz operators, we have the following result: **Theorem 5**. *For every $\xi\in\mathbb D^*$, consider the Toeplitz operator $\mathcal T$ defined by $$\mathcal Tf(z)=\int_{-1}^1f(\xi x)\mathbb K_{\alpha,\beta}(z,\xi x)\mathscr J_{\alpha,\beta,2}(|x|)dx.$$ Then $\mathcal T$ is a positive, bounded linear operator on $\mathcal A_{\alpha,\beta}^2(\mathbb D^*)$.* When $\beta=0$, this result is due to Andreev [@An], who proved it in a restricted case.
*Proof.* Thanks to Fubini's theorem, for every $f\in \mathcal A_{\alpha,\beta}^2(\mathbb D^*)$ one has $$\begin{array}{lcl} \langle \mathcal Tf,f\rangle_{\alpha,\beta}&=&\displaystyle\int_{\mathbb D}\mathcal Tf(z)\overline{f(z)}d\mu_{\alpha,\beta}(z)\\ &=&\displaystyle\int_{\mathbb D}\left(\int_{-1}^1f(\xi x)\mathbb K_{\alpha,\beta}(z,\xi x)\mathscr J_{\alpha,\beta,2}(|x|)dx\right) \overline{f(z)}d\mu_{\alpha,\beta}(z)\\ &=&\displaystyle\int_{-1}^1f(\xi x)\overline{\int_{\mathbb D}\mathbb K_{\alpha,\beta}(\xi x,z)f(z)d\mu_{\alpha,\beta}(z)}\mathscr J_{\alpha,\beta,2}(|x|)dx\\ &=&\displaystyle\int_{-1}^1f(\xi x)\overline{f(\xi x)}\mathscr J_{\alpha,\beta,2}(|x|)dx\\ &\leq & \pi\|f\|_{\alpha,\beta,2}^2. \end{array}$$ The last inequality is the Fejér-Riesz inequality in the particular case $p=2$.\ This proves that the operator $\mathcal T$ is positive; thus it is self-adjoint and bounded, with norm $\|\mathcal T\|\leq\pi$. Indeed, $$\|\mathcal T\|=\sup\{|\langle \mathcal Tf,f\rangle_{\alpha,\beta}|;\ \|f\|_{\alpha,\beta,2}=1\}\leq \pi.$$ ◻ # References - **V. V. Andreev**, Fejér-Riesz type inequalities for Bergman spaces, Rend. Circ. Mat. Palermo 61 (2012), 385-392.
- **P. L. Duren**, Theory of $H^p$ spaces, Academic Press (1970).
- **H. Hedenmalm, B. Korenblum and K. Zhu**, Theory of Bergman spaces, Graduate Texts in Mathematics 199 (2000).
- **H. Kim**, On the localization of the minimum integral related to the weighted Bergman kernel and its application, C. R. Acad. Sci. Paris, Ser. I 355 (2017), 420-425.
- **S. G. Krantz**, Geometric analysis of the Bergman kernel and metric, Graduate Texts in Mathematics 268 (2013).
- **P. Sobolewski**, Inequalities on Bergman spaces, Ann. Univ. Mariae Curie-Skłodowska Lublin Polonia Vol. LXI (2007), 137-143.
- **K. Zhu**, Translating inequalities between Hardy and Bergman spaces, Amer. Math. Monthly 111 (2004), 520-525.
--- abstract: | We show that for $\mathrm{C}^*$-algebras with the Global Glimm Property, the rank of every operator can be realized as the rank of a soft operator, that is, an element whose hereditary sub-$\mathrm{C}^*$-algebra has no nonzero, unital quotients. This implies that the radius of comparison of such a $\mathrm{C}^*$-algebra is determined by the soft part of its Cuntz semigroup. Under a mild additional assumption, we show that every Cuntz class dominates a (unique) largest soft Cuntz class. This defines a retract from the Cuntz semigroup onto its soft part, and it follows that the covering dimensions of these semigroups differ by at most $1$. address: - M. Ali Asadi-Vasfi, Department of Mathematics, University of Toronto, Toronto, Ontario, Canada. - Hannes Thiel, Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Gothenburg SE-412 96, Sweden. - Eduard Vilalta, The Fields Institute for Research in Mathematical Sciences, Toronto, Ontario M5T 3J1, Canada. author: - M. Ali Asadi-Vasfi - Hannes Thiel - Eduard Vilalta title: "Ranks of soft operators in nowhere scattered $\\mathrm{C}^*$-algebras" --- # Introduction Realizing every strictly positive, lower-semicontinuous, affine function on the tracial state space of a separable, simple, nuclear, non-elementary $\mathrm{C}^*$-algebra as the rank of an operator in its stabilization is a deep and open problem, first studied in [@DadTom10Ranks]. A positive solution to this problem would imply that every separable, simple, non-elementary $\mathrm{C}^*$-algebra of locally finite nuclear dimension and strict comparison of positive elements is $\mathcal{Z}$-stable, thus proving the remaining implication of the prominent Toms-Winter conjecture ([@Win18ICM Section 5]) in this case; see, for example, [@Thi20RksOps Section 9] and the discussion in [@CasEviTikWhi22UniPropGamma Section 5]. When the $\mathrm{C}^*$-algebra $A$ is not simple, the problem is still of much interest, but one needs to replace the tracial state space by the cone $\mathop{\mathrm{QT}}(A)$ of lower-semicontinuous, extended-valued $2$-quasitraces on $A$. Each such quasitrace extends canonically to the stabilization $A\otimes{\mathbb{K}}$, and the *rank* of an operator $a \in (A\otimes{\mathbb{K}})_+$ is defined as the map $\widehat{[a]}\colon\mathop{\mathrm{QT}}(A)\to[0,\infty]$ given by $$\widehat{[a]}(\tau) := d_\tau(a) := \lim_{n\to\infty}\tau(a^{1/n})$$ for $\tau \in \mathop{\mathrm{QT}}(A)$. The *rank problem* is then to determine which functions on $\mathop{\mathrm{QT}}(A)$ arise as the rank of a positive operator in $A$ or $A\otimes{\mathbb{K}}$. A natural obstruction arises if $A$ has a nonzero elementary ideal-quotient, that is, if there are closed ideals $I \subseteq J \subseteq A$ such that $J/I$ is $\ast$-isomorphic to ${\mathbb{K}}(H)$ for some Hilbert space $H$. In this case, the natural trace on ${\mathbb{K}}(H)$ induces a quasitrace $\tau \in \mathop{\mathrm{QT}}(A)$ that is discrete in the sense that $d_\tau(a) \in \{0,1,2,\ldots,\infty\}$ for every $a \in (A\otimes{\mathbb{K}})_+$. A similar obstruction arises in the representation of interpolation groups by continuous, affine functions on their state space; see [@Goo86GpsInterpolation Chapter 8].
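For instance (this is a standard computation, recorded here only to make the discreteness explicit): if $A = {\mathbb{K}}(H)$ and $\mathrm{Tr}$ denotes the canonical trace, then for every positive element $a \in (A\otimes{\mathbb{K}})_+$ with nonzero eigenvalues $(\lambda_j)_j$, repeated according to multiplicity, one has $$d_{\mathrm{Tr}}(a) = \lim_{n\to\infty}\mathrm{Tr}(a^{1/n}) = \lim_{n\to\infty}\sum_j \lambda_j^{1/n} = \operatorname{rank}(a) \in \{0,1,2,\ldots,\infty\},$$ so only the discrete values above occur as ranks with respect to this quasitrace.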
To avoid this obstruction, it is therefore natural to assume that $A$ has no nonzero elementary ideal-quotients, a condition termed *nowhere scatteredness* in [@ThiVil21arX:NowhereScattered]. Building on the results from [@Thi20RksOps], the rank problem was solved in [@AntPerRobThi22CuntzSR1] for nowhere scattered $\mathrm{C}^*$-algebras that have stable rank one: Every function on $\mathop{\mathrm{QT}}(A)$ that satisfies the 'obvious' conditions arises as the rank of an operator in $A\otimes{\mathbb{K}}$; see [@AntPerRobThi22CuntzSR1 Theorem 7.13] for the precise statement. Moreover, one can arrange for the operator to be *soft*, which means that it generates a hereditary sub-$\mathrm{C}^*$-algebra that has no nonzero unital quotients; see [@ThiVil23arX:Soft Definition 3.1]. As a consequence, in a nowhere scattered, stable rank one $\mathrm{C}^*$-algebra, the rank of every operator can be realized as the rank of a *soft* operator. The aim of this paper is to study this phenomenon in greater generality and, more concretely, to investigate when the rank of every operator in a $\mathrm{C}^*$-algebra $A$ can be realized as the rank of a soft element. We show that this holds whenever $A$ satisfies the *Global Glimm Property* --- a notion conjectured to be equivalent to nowhere scatteredness; see . Namely, we prove: **Theorem 1** ([Theorem 1](#prp:SoftRankCuGGP){reference-type="ref" reference="prp:SoftRankCuGGP"}). Let $A$ be a stable $\mathrm{C}^*$-algebra with the Global Glimm Property. Then, for any $a\in A_+$ there exists a soft element $b \in A_+$ with $b \precsim a$ and such that $$d_\tau(a) = d_\tau(b)$$ for every $\tau \in \mathop{\mathrm{QT}}(A)$. Above, we use $\precsim$ to denote the *Cuntz subequivalence*, a relation between positive elements introduced by Cuntz in [@Cun78DimFct]. This relation allows one to define the *Cuntz semigroup*, an object that has played an important role in the structure and classification theory of $\mathrm{C}^*$-algebras; see and [@CowEllIva08CuInv; @Tom08ClassificationNuclear; @Win12NuclDimZstable; @Thi20RksOps; @AntPerRobThi22CuntzSR1]. As explained in , the study of the Cuntz semigroup has often gone hand in hand with the development of abstract Cuntz semigroups, also known as *$\ensuremath{\mathrm{Cu}}$-semigroups*; see [@Rob13Cone; @AntPerThi20CuntzUltraproducts; @AntPerThi20AbsBivariantCu; @AntPerThi20AbsBivarII; @Vil22LocCharAI; @CanVil23arX:FraisseCu] among many others. If an operator $a$ is soft, then its Cuntz class $[a]$ is strongly soft (we recall the definition at the beginning of ). If $A$ has the Global Glimm Property, then every strongly soft Cuntz class arises this way, and it follows that the submonoid $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ of strongly soft Cuntz classes agrees with the subset of Cuntz classes with a soft representative; see . The cone $\mathop{\mathrm{QT}}(A)$ is naturally isomorphic to the cone $F(\mathop{\mathrm{Cu}}(A))$ of functionals on the Cuntz semigroup $\mathop{\mathrm{Cu}}(A)$; see [@EllRobSan11Cone Theorem 4.4]. As an application of , we show that the same is true for the cone of functionals on $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$. **Theorem 1** ([Theorem 1](#prp:QTAEqFCuASof){reference-type="ref" reference="prp:QTAEqFCuASof"}). Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property. Then, $\mathop{\mathrm{QT}}(A)$ is naturally isomorphic to $F(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})$. We introduce in a weak notion of cancellation for Cuntz semigroups, which we term *left-soft separativity*; see .
Whenever a $\mathrm{C}^*$-algebra with the Global Glimm Property has a left-soft separative Cuntz semigroup, the relation between arbitrary and soft elements from can be made more precise: **Theorem 1** ([Proposition 1](#prp:sigmaSoft){reference-type="ref" reference="prp:sigmaSoft"}, [Theorem 1](#prp:SsoftGenCuMor){reference-type="ref" reference="prp:SsoftGenCuMor"}). Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property. Assume that $\mathop{\mathrm{Cu}}(A)$ is left-soft separative. Then; - For every element $x\in \mathop{\mathrm{Cu}}(A)$ there exists a greatest element in $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ below $x$, denoted by $\sigma (x)$. - We have $\lambda (\sigma (x))=\lambda (x)$ for every $x \in \mathop{\mathrm{Cu}}(A)$ and $\lambda \in F( \mathop{\mathrm{Cu}}(A))$. - The map $\sigma\colon \mathop{\mathrm{Cu}}(A)\to \mathop{\mathrm{Cu}}(A)_{\rm{soft}}$, defined by $x\mapsto \sigma (x)$, preserves order, suprema of increasing sequences, and is superadditive. We show in that the Cuntz semigroup is left-soft separative whenever the $\mathrm{C}^*$-algebra has stable rank one or strict comparison of positive elements. Under these assumptions, we also show that $\sigma$ is subadditive and, consequently, a generalized Cu-morphism; see . Then $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ is a retract of $\mathop{\mathrm{Cu}}(A)$, as defined in [@ThiVil22DimCu]. Using structure results of retracts and soft elements, we study the covering dimension ([@ThiVil22DimCu]) and the radius of comparison ([@BlaRobTikTomWin12AlgRC]) of $\mathrm{C}^*$-algebras with the Global Glimm Property in terms of their soft elements. **Theorem 1** ([Theorem 1](#prp:SsoftDimGen){reference-type="ref" reference="prp:SsoftDimGen"}). Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property. Assume one of the following: - $A$ has strict comparison of positive elements; - $A$ has stable rank one; - $A$ has topological dimension zero, and $\mathop{\mathrm{Cu}}(A)$ is left-soft separative. Then $\dim(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})\leq \dim(\mathop{\mathrm{Cu}}(A))\leq \dim(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})+1$. **Theorem 1** ([Theorem 1](#prp:SsoftRcCa){reference-type="ref" reference="prp:SsoftRcCa"}). Let $A$ be a unital, separable $\mathrm{C}^*$-algebra with the Global Glimm Property. Assume that $A$ has stable rank one. Then $$\mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A),[1] \big) = \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A)_{\rm{soft}}, \sigma ([1]) \big).$$ We finish the paper with some applications of Theorems [Theorem 1](#thmIntro:SsoftDim){reference-type="ref" reference="thmIntro:SsoftDim"} and [Theorem 1](#thmIntro:SsoftRC){reference-type="ref" reference="thmIntro:SsoftRC"} to crossed products; see and . ## Acknowledgements {#acknowledgements .unnumbered} Part of this research was conducted during the Cuntz Semigroup Workshop 2021 at WWU Münster, and during the short research visit of the third author to the first author at the Institute of Mathematics of the Czech Academy of Sciences. They thank both of the institutions for their hospitality. This work was completed while the authors were attending the Thematic Program on Operator Algebras and Applications at the Fields Institute for Research in Mathematical Sciences in August and September 2023, and they gratefully acknowledge the support and hospitality of the Fields Institute. # Preliminaries In this section we recall definitions and results that will be used in the sections that follow. 
The reader is referred to [@AraPerTom11Cu], [@AntPerThi18TensorProdCu] and [@GarPer23arX:ModernCu] for an extensive introduction to the theory of $\ensuremath{\mathrm{Cu}}$-semigroups and their interplay with Cuntz semigroups. Given a $\mathrm{C}^*$-algebra $A$, we use $A_+$ to denote the set of its positive elements. ** 1** (The Cuntz semigroup). Let $A$ be a $\mathrm{C}^*$-algebra. Given $a, b \in A_+$, one says that $a$ is *Cuntz subequivalent* to $b$, written $a\precsim b$, if there exists a sequence $(v_n)_n$ in $A$ such that $a=\lim_n v_n b v_n^*$. Further, one says that $a$ is *Cuntz equivalent* to $b$, written $a\sim b$, if $a\precsim b$ and $b \precsim a$. The *Cuntz semigroup* of $A$, denoted by $\mathop{\mathrm{Cu}}(A)$, is the positively ordered monoid defined as the quotient $(A\otimes \mathcal{K})_+/{\sim }$ equipped with the order induced by $\precsim$ and the addition induced by addition of orthogonal elements. For further details we refer to [@AraPerTom11Cu; @AntPerThi18TensorProdCu; @GarPer23arX:ModernCu]. ** 1** ($\ensuremath{\mathrm{Cu}}$-semigroups). Let $(P, \leq)$ be a partially ordered set. Suppose that every increasing sequence in $P$ has a supremum. Given two elements $x,y$ in $P$, one says that $x$ is *way-below* $y$, denoted $x\ll y$, if for every increasing sequence $(z_n)_n$ in $P$ satisfying $y\leq \sup_n z_n$, there exists some $m\in{\mathbb{N}}$ such that $x\leq z_m$. As defined in [@CowEllIva08CuInv], a *$\ensuremath{\mathrm{Cu}}$-semigroup* is a positively ordered monoid $S$ satisfying two domain-type conditions and two compatibility conditions: 1. Every increasing sequence in $S$ has a supremum. 2. For every element $x$ in $S$, there exists a sequence $(x_n)_{n}$ in $S$ such that $x_0 \ll x_1 \ll x_2 \ll \cdots$ and such that $x = \sup_n x_n$. 3. The addition is compatible with the way-below relation, that is, for every $x', x, y', y \in S$ satisfying $x' \ll x$ and $y' \ll y$, we have $x' + y' \ll x + y$. 4. The addition is compatible with suprema of increasing sequences, that is, for every increasing sequences $(x_n)_n$ and $(y_n)_n$ in $S$, we have $\sup_n (x_n+y_n) = \sup_nx_n + \sup_ny_n$. It follows from [@CowEllIva08CuInv] that the Cuntz semigroup of any $\mathrm{C}^*$-algebra always satisfies (O1)-(O4). Specifically, the Cuntz semigroup of any $\mathrm{C}^*$-algebra is a $\ensuremath{\mathrm{Cu}}$-semigroup. Given a monoid morphism $\varphi$ between two $\ensuremath{\mathrm{Cu}}$-semigroups, we say that $\varphi$ is a *$\ensuremath{\mathrm{Cu}}$-morphism* if it preserves the order, suprema of increasing sequences, and the way-below relation. A *generalized $\ensuremath{\mathrm{Cu}}$-morphism* is a monoid map that preserves order and suprema of increasing sequences (but not necessarily the way-below relation). The following properties, which will often be considered throughout the paper, are also satisfied in the Cuntz semigroup of any $\mathrm{C}^*$-algebra; see [@AntPerThi18TensorProdCu Proposition 4.6] and its precursor [@RorWin10ZRevisited Lemma 7.1] for (O5), [@Rob13Cone Proposition 5.1.1] for (O6), and [@AntPerRobThi21Edwards Proposition 2.2] for (O7). - For every $x, y, x', y', z\in S$ satisfying $x+y\leq z$ and $x'\ll x$ and $y'\ll y$, there exists $c \in S$ such that $y'\ll c$ and $x'+c\leq z \leq x+c$. This property is often applied with $y'=y=0$. 
- For every $x, x', y, z\in S$ satisfying $x'\ll x\ll y+z$, there exist $v, w \in S$ such that $$v \leq x,y, \quad w \leq x,z, \,\,\,\text{ and }\,\,\, x'\leq v+w.$$ - For every $x, x', y, y', w \in S$ satisfying $x'\ll x\leq w$ and $y'\ll y\leq w$, there exists $z \in S$ such that $x',y'\ll z\leq w,x+y$. Given an element $x$ in a $\ensuremath{\mathrm{Cu}}$-semigroup, we denote by $\infty x$ the supremum of the increasing sequence $(nx)_n$. ** 1** (The Global Glimm Property and nowhere scatteredness). A $\mathrm{C}^*$-algebra $A$ is said to be *nowhere scattered* if no hereditary sub-$\mathrm{C}^*$-algebra of $A$ has a nonzero one-dimensional representation. Equivalently, $A$ is nowhere scattered if and only if $A$ has no nonzero elementary ideal-quotients; see [@ThiVil21arX:NowhereScattered Definition A] and [@ThiVil21arX:NowhereScattered Theorem 3.1]. We say that $A$ has the *Global Glimm Property* (in the sense of [@KirRor02InfNonSimpleCalgAbsOInfty Definition 4.12]) if, for every $a\in A_+$ and $\varepsilon >0$, there exists a square-zero element $r\in\overline{aAa}$ such that $(a-\varepsilon )_+\in\overline{\rm span}ArA$; see [@ThiVil23Glimm Section 3]. A $\mathrm{C}^*$-algebra satisfying the Global Glimm Property is always nowhere scattered. The converse remains open, and is known as the *Global Glimm Problem*. The problem has been answered affirmatively under the additional assumption of real rank zero ([@EllRor06Perturb]) or stable rank one ([@AntPerRobThi22CuntzSR1]). A $\ensuremath{\mathrm{Cu}}$-semigroup is said to be $(2,\omega )$-divisible if, for every pair $x',x\in S$ with $x'\ll x$, there exists $y\in S$ such that $2y\leq x$ and $x'\leq \infty y$; see [@RobRor13Divisibility Definition 5.1]. For a detailed study of the Global Glimm Problem and its relation with the Cuntz semigroup we refer to [@ThiVil23Glimm]; see also [@Vil23pre:MultNSCa]. Among other results, it follows from [@ThiVil23Glimm Theorem 3.6] that a $\mathrm{C}^*$-algebra $A$ has the Global Glimm Property if and only if $\mathop{\mathrm{Cu}}(A)$ is $(2,\omega )$-divisible. # Soft operators and strongly soft Cuntz classes {#sec:Soft} In this section, we first recall the definitions of (completely) soft operators in $\mathrm{C}^*$-algebras and of strongly soft elements in $\ensuremath{\mathrm{Cu}}$-semigroups. We then connect these notions and show that, for a $\mathrm{C}^*$-algebra $A$ with the Global Glimm Property, an element in the Cuntz semigroup $\mathop{\mathrm{Cu}}(A)$ is strongly soft if and only if it has a soft representative; see and . As defined in [@ThiVil23arX:Soft Definition 4.2], an element $x$ in a $\ensuremath{\mathrm{Cu}}$-semigroup $S$ is *strongly soft* if for all $x' \in S$ with $x' \ll x$ there exists $t\in S$ such that $$x'+t \ll x, \,\,\,\text{ and }\,\,\, x'\ll \infty t.$$ This notion of softness is stronger than the one considered in [@AntPerThi18TensorProdCu Definition 5.3.1]. However, if $S$ is residually stably finite, both notions agree; see [@ThiVil23arX:Soft Proposition 4.6]. In particular, this applies to weakly cancellative $\ensuremath{\mathrm{Cu}}$-semigroups (see below). As mentioned in the introduction, a positive element $a$ in a $\mathrm{C}^*$-algebra $A$ is said to be *soft* if its hereditary sub-$\mathrm{C}^*$-algebra has no nonzero unital quotients. This definition can be seen as a generalization of *pure positivity*, a notion introduced in [@PerTom07Recasting Definition 2.1] for simple $\mathrm{C}^*$-algebras. 
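To get some feeling for these notions, consider the degenerate example $S=\overline{\mathbb{N}}=\{0,1,2,\ldots,\infty\}\cong\mathop{\mathrm{Cu}}(\mathbb{C})$, which we record purely as an illustration: an element $x\in\overline{\mathbb{N}}$ is strongly soft if and only if $x\in\{0,\infty\}$. Indeed, for a finite $x\neq 0$ one may take $x'=x\ll x$, and then $x'+t\ll x$ forces $t=0$, so $x'\ll \infty t$ fails; for $x=\infty$, every $x'\ll\infty$ is finite, and any finite $t\geq 1$ satisfies $$x'+t\ll\infty, \,\,\,\text{ and }\,\,\, x'\ll\infty t.$$ This matches the picture on the $\mathrm{C}^*$-level: a nonzero finite-rank positive operator generates a unital (matrix algebra) hereditary sub-$\mathrm{C}^*$-algebra and hence is not soft, whereas a positive compact operator of infinite rank generates a copy of ${\mathbb{K}}(\ell^2)$, which has no nonzero unital quotients.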
An element $a \in A_+$ is said to be *completely soft* if $(a-\varepsilon)_+$ is soft for every $\varepsilon>0$, where $(a-\varepsilon)_+$ denotes the 'cut-down' of $a$ given by applying functional calculus to $a$ with the function $f(t)=\max\{t-\varepsilon,0\}$. As in [@ThiVil23arX:Soft Definition 5.2], we say that a $\mathrm{C}^*$-algebra $A$ has an *abundance of soft elements* if, for every $a\in A_+$ and $\varepsilon >0$, there exists a positive, soft element $b \in \overline{aAa}$ such that $(a-\varepsilon )_+\in\overline{\rm span}AbA$. By [@ThiVil23arX:Soft Proposition 7.7], any $\mathrm{C}^*$-algebra with the Global Glimm Property has an abundance of soft elements. If $a \in A_+$ is soft, then its Cuntz class $[a]$ is strongly soft; see [@ThiVil23arX:Soft Proposition 4.16]. Conversely, we prove in below that if $A$ has an abundance of soft elements (in particular, if $A$ has the Global Glimm Property), then every strongly soft Cuntz class arises this way, that is, a Cuntz class $[b] \in \mathop{\mathrm{Cu}}(A)$ is strongly soft if and only if there exists a soft element $a \in (A\otimes{\mathbb{K}})_+$ with $b\sim a$. It remains unclear if this also holds for general $\mathrm{C}^*$-algebras; see [@ThiVil23arX:Soft Question 4.17]. Given $a,b\in A_+$, we will write $a\vartriangleleft b$ whenever $a\in\overline{\rm span}AbA$. We say that two positive operators $a$ and $b$ in a $\mathrm{C}^*$-algebra are *orthogonal* if $ab=0$. The next result is the $\mathrm{C}^*$-algebraic analog of [@ThiVil23arX:Soft Theorem 4.14(2)]. **Proposition 1**. *Let $a$ and $b$ be orthogonal positive elements in a $\mathrm{C}^*$-algebra such that $a \lhd b$, and such that $b$ is soft. Then $a+b$ is soft.* *Proof.* By [@ThiVil23arX:Soft Proposition 3.6], a positive element $c$ in a $\mathrm{C}^*$-algebra is soft if and only if for every $\varepsilon>0$ there exists $r \in (\overline{cAc})_+$ such that $r$ is orthogonal to $(c-\varepsilon)_+$ and such that $c \lhd r$. Using this characterization for $b$, we show that it is satisfied for $a+b$. To verify that $a+b$ is soft, let $\varepsilon>0$. Using that $b$ is soft, we obtain $r \in (\overline{bAb})_+$ such that $r$ is orthogonal to $(b-\varepsilon)_+$ and such that $b \lhd r$. Since $a$ and $b$ are orthogonal, we have $$((a+b)-\varepsilon)_+ = (a-\varepsilon)_+ + (b-\varepsilon)_+.$$ Since $r$ belongs to $\overline{bAb}$, it is also orthogonal to $a$, and thus also orthogonal to $((a+b)-\varepsilon)_+$. Further, we have $a+b \lhd b \lhd r$, as desired. ◻ **Lemma 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with an abundance of soft elements, let $a \in A_+$ be such that $x:=[a]\in\mathop{\mathrm{Cu}}(A)$ is strongly soft, and let $x' \in \mathop{\mathrm{Cu}}(A)$ satisfy $x'\ll x$. Then there exists a positive, completely soft element $b \in \overline{aAa}$ such that $$x' \ll [b] \ll x.$$* *Proof.* Choose $x''\in \mathop{\mathrm{Cu}}(A)$ such that $x'\ll x''\ll x$. Using that $x$ is strongly soft, we know that there exists $t\in\mathop{\mathrm{Cu}}(A)$ such that $x''\ll \infty t$ and $x''+t\ll x$.
Choose orthogonal positive elements $c,d \in A\otimes{\mathbb{K}}$ and $\varepsilon>0$ such that $$x'' = [c], \quad t = [d], \quad x' \ll [(c-\varepsilon)_+], \,\,\,\text{ and }\,\,\, x'' \ll \infty[(d-\varepsilon)_+].$$ Using that $c+d \precsim a$, we can apply Rørdam's lemma (see, for example, [@Thi17:CuLectureNotes Theorem 2.30]) to obtain $y \in A\otimes{\mathbb{K}}$ such that $$((c+d)-\varepsilon)_+ = yy^*, \,\,\,\text{ and }\,\,\, y^*y \in \overline{aAa}.$$ Set $$c' := y^*(c-\varepsilon)_+y, \,\,\,\text{ and }\,\,\, d' := y^*(d-\varepsilon)_+y.$$ Then $c',d' \in \overline{aAa}$. Since $c$ and $d$ are orthogonal, we have $$((c+d)-\varepsilon)_+ = (c-\varepsilon)_+ + (d-\varepsilon)_+.$$ It follows that $c'$ and $d'$ are orthogonal, and that $c' \sim (c-\varepsilon)_+$ and $d' \sim (d-\varepsilon)_+$. In particular, we have $x'' \ll \infty[(d-\varepsilon)_+] = \infty[d']$, and we obtain $\delta>0$ such that $x'' \ll \infty[(d'-\delta)_+]$. Applying the assumption that $A$ has an abundance of soft elements to $d'$ and $\delta$, we obtain a soft element $e \in (\overline{d'Ad'})_+$ such that $(d'-\delta)_+ \lhd e$. Since $c'$ and $d'$ are orthogonal, and $e$ belongs to $\overline{d'Ad'}$, it follows that $c'$ and $e$ are orthogonal. Using that positive elements $g,h$ in a $\mathrm{C}^*$-algebra satisfy $g \lhd h$ if and only if $[g] \leq \infty[h]$, we have $$[c'] = [(c-\varepsilon)_+] \leq [c] = x'' \leq \infty[(d'-\delta)_+] \leq \infty[e]$$ and thus $c' \lhd e$. By , $c'+e$ is soft. Note that $c'$ and $e$ belong to $\overline{aAa}$. In particular, $c'+e$ belongs to $A_+$, and we can apply [@ThiVil23arX:Soft Theorem 6.9] to obtain a completely soft element $f \in A_+$ such that $\overline{fAf} = \overline{(c'+e)A(c'+e)} \subseteq \overline{aAa}$. Then $f \in \overline{aAa}$, and therefore $[f] \leq [a] = x$. Further, we have $$x' \ll [(c-\varepsilon)_+] = [c'] \leq [c'+e] = [f].$$ Choose $\delta'>0$ such that $$x' \ll [(f-\delta')_+],$$ and set $b := (f-\delta')_+$. Since cut-downs of $(f-\delta')_+$ are also cut-downs of $f$, we see that $b$ is completely soft. Further, we have $$x' \ll [b] = [(f-\delta')_+] \ll [f] \leq x,$$ which shows that $b$ has the desired properties. ◻ A unital $\mathrm{C}^*$-algebra is said to have *stable rank one* if its invertible elements are norm-dense; and a general $\mathrm{C}^*$-algebra is said to have stable rank one if its minimal unitization does; see [@Bla06OpAlgs Section V.3.1]. A $\mathrm{C}^*$-algebra is said to have *weak stable rank one* if $A\subseteq \overline{{\rm Gl}(\tilde{A})}$. Any stable $\mathrm{C}^*$-algebra has weak stable rank one; see [@BlaRobTikTomWin12AlgRC Lemma 4.3.2]. **Theorem 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with an abundance of soft elements, and let $a\in A_+$ be such that $[a]\in\mathop{\mathrm{Cu}}(A)$ is strongly soft. Then there exists a sequence $(a_n)_n$ of completely soft elements in $(\overline{aAa})_+$ such that $([a_n])_n$ in $\mathop{\mathrm{Cu}}(A)$ is $\ll$-increasing with $[a] = \sup_n [a_n]$.* *If, moreover, $A$ has weak stable rank one, then $[a]$ is strongly soft if and only if there exists a completely soft element $b\in A_+$ such that $[a]=[b]$.* *Proof.* Choose a $\ll$-increasing sequence $(x_n)_n$ in $\mathop{\mathrm{Cu}}(A)$ with supremum $[a]$. We will inductively choose completely soft elements $a_n \in (\overline{aAa})_+$ such that $$x_n \ll [a_n] \ll [a], \,\,\,\text{ and }\,\,\, [a_n] \ll [a_{n+1}]$$ for $n\in{\mathbb{N}}$.
To start, apply for $x_0 \ll [a]$ to obtain a completely soft element $a_0 \in (\overline{aAa})_+$ such that $x_0 \ll [a_0] \ll [a]$. Assuming we have chosen $a_0,\ldots,a_n$, find $x_n' \in \mathop{\mathrm{Cu}}(A)$ such that $[a_n],x_n \ll x_n' \ll [a]$. Applying for $x_n' \ll [a]$, we obtain a completely soft element $a_{n+1} \in (\overline{aAa})_+$ such that $x_n' \ll [a_{n+1}] \ll [a]$. Proceeding inductively, we obtain the desired sequence $(a_n)_n$. Next, assume that $A$ has weak stable rank one. By [@ThiVil23arX:Soft Proposition 4.16], soft operators have strongly soft Cuntz classes. Conversely, assuming that $[a]$ is strongly soft, we will show that $[a] = [b]$ for some completely soft element $b\in A_+$. Let $(a_n)_n$ be as above. We will show that $\sup_n [a_n]$ (which is $[a]$) has a soft representative. Given $c,d \in A_+$ we will write $c \sim_u d$ if there exists a unitary $u\in\tilde{A}$ such that $c=udu^*$; and we write $c \subseteq d$ if $\overline{cAc} \subseteq \overline{dAd}$. Using [@Thi17:CuLectureNotes §2.5], one can find a sequence $(\delta_n)_n$ in $(0,\infty)$, and a sequence of contractive elements $(b_n)_n$ in $A_+$ such that $$\begin{matrix} a_1 & \precsim & a_2 & \precsim & a_3 & \precsim & \ldots\\ \rotatebox{90}{$\leq$} & & \rotatebox{90}{$\leq$} & & \rotatebox{90}{$\leq$} & \\ (a_1-\delta_1)_+ & & (a_2-\delta_2)_+ & & (a_3-\delta_3)_+ & & \ldots \\ \rotatebox{90}{$\sim_u$} & & \rotatebox{90}{$\sim_u$} & & \rotatebox{90}{$\sim_u$} & \\ b_1 & \subseteq & b_2 & \subseteq & b_3 & \subseteq & \ldots \end{matrix}$$ and, setting $b_\infty:=\sum_n \frac{1}{2^n \Vert b_n \Vert } b_n$, such that $[b_\infty]=\sup_n [a_n]$. For each $n\in{\mathbb{N}}$, since $a_n$ is completely soft, so is the element $(a_n-\delta_n )_+$. Since $(a_n-\delta_n )_+$ and $b_n$ are unitarily equivalent, they generate $\ast$-isomorphic hereditary sub-$\mathrm{C}^*$-algebras of $A$, and it follows that $b_n$ is completely soft as well. Further, since $b_1\subseteq b_2\subseteq\ldots$ and $b_\infty=\sum_n \frac{1}{2^n \Vert b_n \Vert } b_n$, the sequence of hereditary sub-$\mathrm{C}^*$-algebras $\overline{b_{n}Ab_{n}}$ is increasing with $\overline{b_\infty A b_\infty} = \overline{\bigcup_n \overline{b_n A b_n}}$. Since each $\overline{b_n A b_n}$ has no nonzero unital quotients, it follows from [@ThiVil23arX:Soft Proposition 2.17] that neither does $\overline{b_\infty A b_\infty}$. This proves that $b_\infty$ is soft. Note that $b_\infty$ belongs to $A_+$. Applying [@ThiVil23arX:Soft Theorem 6.9], we obtain a completely soft element $b \in A_+$ such that $\overline{bAb} = \overline{b_\infty A b_\infty}$. Then $[b]=[b_\infty]=[a]$, as desired. ◻ **Corollary 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property, and let $x \in \mathop{\mathrm{Cu}}(A)$. Then $x$ is strongly soft if and only if there exists a soft element $a \in (A\otimes{\mathbb{K}})_+$ with $x = [a]$.* *Proof.* It follows from [@ThiVil23Glimm Theorem 3.6] that $A\otimes{\mathbb{K}}$ has the Global Glimm Property. Hence, $A\otimes{\mathbb{K}}$ has an abundance of soft elements by [@ThiVil23Glimm Proposition 7.7]. Further, $A\otimes{\mathbb{K}}$ has weak stable rank one by [@BlaRobTikTomWin12AlgRC Lemma 4.3.2]. Now the result follows from . ◻ ** 1** (The strongly soft subsemigroup). Given a $\ensuremath{\mathrm{Cu}}$-semigroup $S$, we let $S_{\rm{soft}}$ denote the set of strongly soft elements in $S$.
By , given a $\mathrm{C}^*$-algebra $A$ with the Global Glimm Property, we have $$\mathop{\mathrm{Cu}}(A)_{\rm{soft}}= \big\{ [a] : a \in (A\otimes{\mathbb{K}})_+ \text{ soft} \big\}.$$ In particular, if $A$ is stably finite, simple, and unital, it follows from [@ThiVil23arX:Soft Proposition 4.16] that the subset $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}\setminus\{ 0\}$ coincides with $\mathop{\mathrm{Cu}}_{+} (A)$, the set of Cuntz classes of purely positive elements as introduced in [@PerTom07Recasting Definition 2.1]; see also [@AsaGolPhi21RadCompCrProd Definition 3.8]. Given a $\ensuremath{\mathrm{Cu}}$-semigroup $S$, a *sub-$\ensuremath{\mathrm{Cu}}$-semigroup* in the sense of [@ThiVil21DimCu2 Definition 4.1] is a submonoid $T \subseteq S$ that is a $\ensuremath{\mathrm{Cu}}$-semigroup for the inherited order, and such that the inclusion map $T \to S$ is a $\ensuremath{\mathrm{Cu}}$-morphism. **Proposition 1**. *Let $S$ be a $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup that satisfies (O5). Then, $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup that also satisfies (O5).* *If $S$ also satisfies (O6) (respectively (O7)), then so does $S_{\rm{soft}}$.* *Proof.* By [@ThiVil23arX:Soft Proposition 7.7], if a $\ensuremath{\mathrm{Cu}}$-semigroup is $(2,\omega)$-divisible and satisfies (O5), then it has an abundance of soft elements, which then by [@ThiVil23arX:Soft Proposition 5.6] implies that its strongly soft elements form a sub-$\ensuremath{\mathrm{Cu}}$-semigroup. Thus, $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup. Let us verify that $S_{\rm{soft}}$ satisfies (O5). By [@AntPerThi18TensorProdCu Theorem 4.4(1)] it suffices to show that for all $x',x,y',y,z',z \in S_{\rm{soft}}$ satisfying $$\label{eq:SoftPartAxioms:O5Assumption} x' \ll x, \quad y' \ll y, \,\,\,\text{ and }\,\,\, x+y \ll z' \ll z,$$ there exist $c',c \in S_{\rm{soft}}$ such that $$\label{eq:SoftPartAxioms:O5Conclusion} x'+c \ll z, \quad z' \ll x+c', \,\,\,\text{ and }\,\,\, y' \ll c' \ll c.$$ So let $x',x,y',y,z',z \in S_{\rm{soft}}$ satisfy [\[eq:SoftPartAxioms:O5Assumption\]](#eq:SoftPartAxioms:O5Assumption){reference-type="eqref" reference="eq:SoftPartAxioms:O5Assumption"}. Choose $v',v \in S_{\rm{soft}}$ such that $z' \ll v' \ll v \ll z$. Applying (O5), we obtain $b \in S$ such that $$x'+b \leq v' \leq x+b, \,\,\,\text{ and }\,\,\, y' \ll b.$$ Using that $v' \ll v$ and that $v$ is strongly soft, we apply [@ThiVil23arX:Soft Proposition 4.13] to find $t \in S_{\rm{soft}}$ such that $v'+t \leq v \leq \infty t$. Set $c := b+t$. Since $b \leq v' \leq v \leq \infty t$ and $t$ is strongly soft, we have $c \in S_{\rm{soft}}$ by [@ThiVil23arX:Soft Theorem 4.14(2)]. Thus, one gets $$x'+c = x'+b+t \leq v'+t \leq v \ll z,$$ and $$z' \ll v' \leq x+b \leq x+c, \,\,\,\text{ and }\,\,\, y' \ll b \leq c.$$ Using also that $S_{\rm{soft}}$ is a $\ensuremath{\mathrm{Cu}}$-semigroup and $c \in S_{\rm{soft}}$, we can find $c' \in S_{\rm{soft}}$ such that $$c' \ll c, \quad z' \ll x+c', \,\,\,\text{ and }\,\,\, y' \ll c'.$$ This shows that $c'$ and $c$ satisfy [\[eq:SoftPartAxioms:O5Conclusion\]](#eq:SoftPartAxioms:O5Conclusion){reference-type="eqref" reference="eq:SoftPartAxioms:O5Conclusion"}, as desired. That $S_{\rm{soft}}$ satisfies (O6) (respectively (O7)) whenever $S$ does is proven analogously.
◻ # Separative $\ensuremath{\mathrm{Cu}}$-semigroups {#sec:SepCu} We introduce in the notion of left-soft separativity, a weakening of weak cancellation () that is satisfied in the Cuntz semigroup of every $\mathrm{C}^*$-algebra with stable rank one or strict comparison of positive elements; see and respectively. We also prove in that, among strongly soft elements, the notions of unperforation and almost unperforation coincide. ** 1** (Cuntz semigroups of stable rank one $\mathrm{C}^*$-algebras). Let $A$ be a stable rank one $\mathrm{C}^*$-algebra. As shown in [@RorWin10ZRevisited Theorem 4.3], the Cuntz semigroup $\mathop{\mathrm{Cu}}(A)$ satisfies a cancellation property termed *weak cancellation*: If $x,y,z \in \mathop{\mathrm{Cu}}(A)$ satisfy $x+z\ll y+z$, then $x\ll y$. If $A$ is also separable, then $\mathop{\mathrm{Cu}}(A)$ is *inf-semilattice ordered*, that is, for every pair of elements $x,y \in \mathop{\mathrm{Cu}}(A)$ their infimum $x \wedge y$ exists, and for every $x,y,z \in \mathop{\mathrm{Cu}}(A)$ one has $(x+z) \wedge (y+z) = (x \wedge y) +z$; see [@AntPerRobThi22CuntzSR1 Theorem 3.8]. As defined in [@ThiVil21arX:ZeroDimCu], a $\ensuremath{\mathrm{Cu}}$-semigroup is *separative* if $x \ll y$ whenever $x+t\ll y+t$ with $t\ll\infty x,\infty y$. This and other cancellation properties will be studied in more detail in [@ThiVil21arX:ZeroDimCu]. For the results in this paper, we will need the following tailored definition: **Definition 1**. We say that a $\ensuremath{\mathrm{Cu}}$-semigroup $S$ is *left-soft separative* if, for any triple of elements $y,t\in S$ and $x\in S_{\rm{soft}}$ satisfying $$x+t \ll y+t,\quad t \ll \infty x,\,\,\,\text{ and }\,\,\, t \ll \infty y,$$ we have $x\ll y$. **Proposition 1**. *Every weakly cancellative $\ensuremath{\mathrm{Cu}}$-semigroup is separative, and every separative $\ensuremath{\mathrm{Cu}}$-semigroup is left-soft separative.* *In particular, the Cuntz semigroup of every stable rank one $\mathrm{C}^*$-algebra is left-soft separative.* *Proof.* It follows directly from the definitions that weak cancellation is stronger than left-soft separativity. By [@RorWin10ZRevisited Theorem 4.3], the Cuntz semigroup of a stable rank one $\mathrm{C}^*$-algebra is weakly cancellative. ◻ **Lemma 2**. *Let $S$ be a $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5). Then, $S$ is left-soft separative if and only if for all $y,t',t\in S$ and $x\in S_{\rm{soft}}$ satisfying $$x+t\leq y+t',\quad t'\ll t,\quad t'\ll \infty y,\,\,\,\text{ and }\,\,\, t'\ll \infty x,$$ we have $x \leq y$.* *Proof.* The backwards implication is straightforward to verify and even holds for general $\ensuremath{\mathrm{Cu}}$-semigroups. To show the forward implication, assume that $S$ is left-soft separative, and let $x,y,t',t\in S$ be as in the statement. By , we know that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup. In particular, $x$ can be written as the supremum of a $\ll$-increasing sequence of strongly soft elements. Take $x'\in S_{\rm{soft}}$ such that $x'\ll x$. We have $$x'+t'\ll x+t\leq y+t',\quad t' \ll \infty x,\,\,\,\text{ and }\,\,\, t' \ll \infty y.$$ By left-soft separativity, we deduce $x'\ll y$. Since $x$ is the supremum of such $x'$, one gets $x\leq y$, as required. ◻ **Lemma 3**.
*Let $S$ be a left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5), and let $x,t \in S_{\rm{soft}}$ and $y,t' \in S$ satisfy $$x+t \leq y+t', \quad t' \ll t, \quad t' \ll \infty y.$$* *Then $x \leq y$.* *Proof.* Take $t''\in S$ such that $t'\ll t''\ll t$. Using that $t$ is strongly soft, one finds $s\in S_{\rm{soft}}$ such that $t''+s\leq t\leq \infty s$; see [@ThiVil23arX:Soft Proposition 4.13]. Note that, since $x$ and $s$ are strongly soft, so is $x+s$ by [@ThiVil23arX:Soft Theorem 4.14]. We get $$(x+s)+t'' = x+(s+t'') \leq x+t \leq y+t'.$$ Further, we have $t'\ll \infty y$ and $t'\ll t''\leq \infty s\leq \infty (x+s)$. An application of shows that $x+s\leq y$ and, therefore, that $x\leq y$. ◻ The following result shows that three different versions of unperforation coincide for the semigroup of strongly soft elements in a $\ensuremath{\mathrm{Cu}}$-semigroup. Given elements $x$ and $y$ in a partially ordered monoid, one writes $x<_sy$ if there exists $n \geq 1$ such that $(n+1)x \leq ny$; and one writes $x \leq_p y$ if there exists $n_0 \in {\mathbb{N}}$ such that $nx \leq ny$ for all $n \geq n_0$. We refer to [@AntPerThi18TensorProdCu Chapter 5] for details regarding these definitions. **Proposition 1**. *Let $S$ be a $\ensuremath{\mathrm{Cu}}$-semigroup. The following are equivalent:* - *$S_{\rm{soft}}$ is unperforated: If $x,y \in S_{\rm{soft}}$ and $n \geq 1$ satisfy $nx \leq ny$, then $x \leq y$.* - *$S_{\rm{soft}}$ is nearly unperforated: If $x,y \in S_{\rm{soft}}$ satisfy $x \leq_p y$, then $x \leq y$.* - *$S_{\rm{soft}}$ is almost unperforated: If $x,y \in S_{\rm{soft}}$ satisfy $x <_s y$, then $x \leq y$.* *Proof.* In general, (1) implies (2), which implies (3); see [@AntPerThi18TensorProdCu Proposition 5.6.3]. To verify that (3) implies (1), let $x,y\in S_{\rm{soft}}$ and $n\geq 1$ satisfy $nx\leq ny$. Then $\widehat{x} \leq \widehat{y}$; see . By [@ThiVil23arX:Soft Proposition 4.5], $x$ is functionally soft. Thus, we deduce from [@AntPerThi18TensorProdCu Theorem 5.3.12] that $x \leq y$, as desired. ◻ **Lemma 4**. *Every almost unperforated $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5) is left-soft separative.* *Proof.* Let $S$ be an almost unperforated $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5). To verify that $S$ is left-soft separative, let $y,t\in S$ and $x\in S_{\rm{soft}}$ satisfy $x+t\ll y+t$ and $t\ll\infty x,\infty y$. Choose $y'\in S$ such that $$x+t\ll y'+t, \quad t\ll\infty y', \,\,\,\text{ and }\,\,\,y'\ll y.$$ Then $x \leq_p y'$ by [@AntPerThi18TensorProdCu Proposition 5.6.8(ii)]. In particular, there exists $k\in{\mathbb{N}}$ such that $kx\leq ky'$, and thus $\widehat{x} \leq \widehat{y'}$; see . By [@ThiVil23arX:Soft Proposition 4.5], $x$ is functionally soft. Using that $S$ is almost unperforated, we obtain that $x \leq y' \ll y$, by [@AntPerThi18TensorProdCu Theorem 5.3.12]. ◻ A $\mathrm{C}^*$-algebra $A$ is said to have *strict comparison of positive elements* if, for all $a,b \in (A\otimes{\mathbb{K}})_+$ and $\varepsilon>0$, one has that $d_\tau(a) \leq (1-\varepsilon)d_\tau(b)$ for all $\tau$ implies $a \precsim b$. **Proposition 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with strict comparison of positive elements. Then $\mathop{\mathrm{Cu}}(A)$ is left-soft separative.* *Proof.* A $\mathrm{C}^*$-algebra has strict comparison of positive elements if and only if its Cuntz semigroup is almost unperforated; see [@EllRobSan11Cone Proposition 6.2]. 
Since every Cuntz semigroup satisfies (O5), the result follows from . ◻ Since every $\mathcal{Z}$-stable $\mathrm{C}^*$-algebra has strict comparison of positive elements (see [@Ror04StableRealRankZ Theorem 4.5]), one gets the following: **Corollary 1**. *The Cuntz semigroup of every $\mathcal{Z}$-stable $\mathrm{C}^*$-algebra is left-soft separative.* # Ranks and soft elements {#sec:SoftRanks} Given a $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup $S$ satisfying (O5)-(O7) (for example, the Cuntz semigroup of a $\mathrm{C}^*$-algebra with the Global Glimm Property) and an element $x\in S$, we show in that there exists a strongly soft element $w$ below $x$ which agrees with $x$ at the level of functionals, that is, the rank of $x$ coincides with the rank of $w$; see . Paired with , this implies that the rank of any positive element in a $\mathrm{C}^*$-algebra satisfying the Global Glimm Property is the rank of a soft element (). Using , we also prove that $F(S)$, the set of functionals on $S$, is homeomorphic to $F(S_{\rm{soft}})$; see . ** 1** (Functionals and ranks). Given a $\ensuremath{\mathrm{Cu}}$-semigroup $S$, we will denote by $F(S)$ the set of its *functionals*, that is to say, the set of monoid morphisms $S\to [0,\infty ]$ that preserve the order and suprema of increasing sequences. If $S$ satisfies (O5), then $F(S)$ becomes a compact, Hausdorff space -- and even an algebraically ordered compact cone [@AntPerRobThi21Edwards Section 3] -- when equipped with a natural topology [@EllRobSan11Cone; @Rob13Cone; @Kei17CuSgpDomainThy]. Given a $\mathrm{C}^*$-algebra, the cone $\mathop{\mathrm{QT}}(A)$ of lower-semicontinuous 2-quasitraces on $A$ is naturally isomorphic to $F(\mathop{\mathrm{Cu}}(A))$, as shown in [@EllRobSan11Cone Theorem 4.4]. We let $\mathop{\mathrm{LAff}}(F(S))$ denote the monoid of lower-semicontinuous, affine functions $F(S)\to(-\infty,\infty]$, equipped with pointwise order and addition. For $x\in S$, the *rank* of $x$ is defined as the map $\widehat{x} \colon F(S) \to [0,\infty]$ given by $$\widehat{x}(\lambda) := \lambda(x)$$ for $\lambda \in F(S)$. The function $\widehat{x}$ belongs to $\mathop{\mathrm{LAff}}(F(S))$ and the *rank problem* of determining which functions in $\mathop{\mathrm{LAff}}(F(S))$ arise this way has been studied extensively in [@Thi20RksOps] and [@AntPerRobThi22CuntzSR1]. Sending an element $x \in S$ to its rank $\widehat{x}$ defines a monoid morphism from $S$ to $\mathop{\mathrm{LAff}}(F(S))$ which preserves both the order and suprema of increasing sequences. **Lemma 5**. *Let $S$ be a $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5), and let $u\in S_{\rm{soft}}$ and $u',x\in S$ be such that $$u'\ll u\ll x.$$* *Then, there exists $c\in S_{\rm{soft}}$ satisfying $$u'+2c\leq x\leq \infty c.$$* *Proof.* Let $u''\in S$ be such that $u'\ll u''\ll u$. By [@ThiVil23arX:Soft Proposition 4.13], there exists $s\in S$ satisfying $$u''+s\leq u\leq \infty s.$$ Since $u''\ll u\leq \infty s$, there exists $s' \in S$ such that $$s' \ll s, \,\,\,\text{ and }\,\,\, u''\ll \infty s'.$$ We have $$u''+s \leq x, \quad u' \ll u'', \,\,\,\text{ and }\,\,\, s'\ll s.$$ Applying (O5), we obtain $d\in S$ such that $u'+d\leq x\leq u''+d$ with $s'\leq d$. Since $u''\leq \infty s'$, it follows that $x\leq \infty d$. Finally, apply [@ThiVil23arX:Soft Proposition 7.7] to $d$ in order to obtain $c\in S_{\rm{soft}}$ such that $2c\leq d\leq \infty c$. This element satisfies the required conditions.
◻ A $\ensuremath{\mathrm{Cu}}$-semigroup $S$ is said to be *countably based* if it contains a countable subset $D\subseteq S$ such that every element in $S$ can be written as the supremum of an increasing sequence of elements in $D$. Separable $\mathrm{C}^*$-algebras have countably based Cuntz semigroups; see, for example, [@AntPerSan11PullbacksCu]. **Lemma 6**. *Let $S$ be a countably based, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $x\in S$. Consider the set $$L_x := \big\{ u'\in S : u'\ll u\ll x \text{ for some } u\in S_{\rm{soft}}\big\}.$$* *Then, for every $k \in {\mathbb{N}}$, $x'\in S$ such that $x'\ll x$, and $u',v'\in L_x$, there exists a strongly soft element $w'\in L_x$ such that $$u'\ll w',\quad x'\ll \infty w',\,\,\,\text{ and }\,\,\, \frac{k}{k+1}\widehat{v'}\leq \widehat{w'} \text{ in $\mathop{\mathrm{LAff}}(F(S))$}.$$* *If, additionally, $S$ is left-soft separative, $w'$ may be chosen so that $v' \ll w'$.* *Proof.* Let $u',v'\in L_x$, let $x' \in S$ satisfy $x' \ll x$, and let $k \in {\mathbb{N}}$. By definition, there exist $u,v\in S_{\rm{soft}}$ such that $$u'\ll u\ll x,\,\,\,\text{ and }\,\,\,v'\ll v\ll x.$$ Choose $y',y\in S$ such that $$x' \ll y' \ll y \ll x, \quad v \ll y', \,\,\,\text{ and }\,\,\, u \ll y'.$$ Using that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup by , we can choose elements $u'',u''',v'' \in S_{\rm{soft}}$ such that $$u' \ll u'' \ll u''' \ll u, \,\,\,\text{ and }\,\,\, v' \ll v'' \ll v.$$ Applying for $u'''\ll u\ll y$ and $v''\ll v\ll y$, we obtain $c,d \in S_{\rm{soft}}$ such that $$u'''+c\leq y\leq \infty c,\,\,\,\text{ and }\,\,\, v''+2d\leq y\leq \infty d.$$ Then, applying [@ThiVil23Glimm Proposition 4.10] for $y'\ll y\leq\infty c,\infty d$, we get $e\in S$ such that $$y' \ll \infty e, \,\,\,\text{ and }\,\,\,e\ll c,d.$$ By [@ThiVil23arX:Soft Proposition 7.7], there exists a strongly soft element $e_0$ such that $e_0 \leq e \leq \infty e_0$. Replacing $e$ by $e_0$, we may assume that $e \in S_{\rm{soft}}$. Using again that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, we can find $e',e''\in S_{\rm{soft}}$ satisfying $$y' \ll \infty e', \,\,\,\text{ and }\,\,\, e'\ll e''\ll e.$$ By [@ThiVil23arX:Soft Proposition 4.13], there exists $r\in S$ such that $$e''+r\leq e\leq \infty r.$$ Since $e''\ll e$, we can find $r' \in S$ such that $$r' \ll r, \,\,\,\text{ and }\,\,\, e''\leq \infty r'.$$ Thus, one has $$e''+(r+u''')\leq e+u'''\leq c+u'''\leq y, \quad e' \ll e'', \,\,\,\text{ and }\,\,\, r'+u'' \ll r+u'''.$$ Applying (O5), we obtain $z\in S$ such that $$e'+z \leq y \leq e''+z, \,\,\,\text{ and }\,\,\, r'+u''\leq z.$$ Using again that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, choose $d' \in S_{\rm{soft}}$ such that $$e \ll d' \ll d.$$ We have $$\label{eq:Separative} (v''+d)+d=v''+2d\leq y \leq z+e'' \leq z+d',$$ with $v''+d\in S_{\rm{soft}}$. Note that $$d'\ll d\leq \infty (v''+d),\,\,\,\text{ and }\,\,\, d'\ll d\leq y\leq z+e''\leq z+\infty r'\leq \infty z.$$ In particular, since $d'\ll \infty z$, there exists $M\in{\mathbb{N}}$ such that $d'\leq Mz$. Set $$l:= \infty (u''+v''), \,\,\,\text{ and }\,\,\, w:=e'+ (z\wedge l),$$ where $z\wedge l$ exists because $l$ is idempotent, and $S$ is countably based and satisfies (O7); see [@AntPerRobThi21Edwards Theorem 2.4]. Note that, since $l\leq \infty y'\leq \infty e'$ and $e'\in S_{\rm{soft}}$, it follows from [@ThiVil23arX:Soft Theorem 4.14] that $w\in S_{\rm{soft}}$. 
We get $$w \leq e'+z\leq y\ll x,\quad x'\ll y'\leq \infty e'\leq \infty w,\,\,\,\text{ and }\,\,\, u'\ll u''\leq z\wedge l\leq w.$$ By [@AntPerRobThi21Edwards Theorem 2.5], the map $S \to S$, $s \mapsto s\wedge l$, is additive. Using this at the second and fourth step, we get $$\begin{aligned} v'' + 2(d'\wedge l) &= (v'' \wedge l) + 2(d'\wedge l) = (v''+2d')\wedge l \\ &\leq (z+d')\wedge l = (z \wedge l) + (d'\wedge l) \leq w + (d'\wedge l).\end{aligned}$$ We also have $d'\wedge l\leq (Mz)\wedge l = M(z\wedge l)\leq Mw$, and this implies that $$\widehat{v''}\leq \widehat{w}.$$ Now, since $v'\ll v''$ and $\frac{k}{k+1}<1$, we can apply [@Rob13Cone Lemma 2.2.5] to obtain $$\frac{k}{k+1} \widehat{v'} \ll \widehat{v''}\leq \widehat{w}.$$ Since $w$ is strongly soft and $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, there exists a $\ll$-increasing sequence of soft elements with supremum $w$. Using that the rank map $x\mapsto \widehat{x}$ preserves suprema of increasing sequences, we can find $w' \in S_{\rm{soft}}$ such that $$w'\ll w, \quad \frac{k}{k+1} \widehat{v'} \leq \widehat{w'}, \quad x'\ll \infty w',\,\,\,\text{ and }\,\,\, u'\ll w'.$$ Further, we have $w'\ll w\ll x$. This shows that $w'$ is a strongly soft element in $L_x$, as desired. If, additionally, $S$ is left-soft separative, we can apply on [\[eq:Separative\]](#eq:Separative){reference-type="eqref" reference="eq:Separative"} to obtain that $v''+d \leq z$, and so $v'' \leq z$. We also have $v'' \leq l$ and thus $$v' \ll v'' \leq z \wedge l \leq w.$$ We also have $u' \ll u'' \leq w$ and $x' \ll \infty w$. Using that $w$ is strongly soft and that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, we can find $w' \in S_{\rm{soft}}$ such that $u',v' \ll w' \ll w$ and $x' \ll \infty w'$. Then $w'$ has the desired properties. ◻ **Remark 1**. The assumption of $S$ being countably based in is only used to prove the existence of the infimum $z\wedge l$. If $S$ is the Cuntz semigroup of a $\mathrm{C}^*$-algebra, this infimum always exists; see [@CiuRobSan10CuIdealsQuot]. Thus, the first part of holds for every $\mathrm{C}^*$-algebra with the Global Glimm Property. **Proposition 1**. *Let $S$ be a countably based, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), let $x',x\in S$ with $x' \ll x$, let $k \in {\mathbb{N}}$, and let $u' \in L_x$. Then, for every finite subset $C\subseteq L_x$, there exists a strongly soft element $w'\in L_x$ such that $$u'\ll w',\quad x'\ll \infty w',\,\,\,\text{ and }\,\,\, \frac{k}{k+1}\widehat{v'} \leq \widehat{w'} \text{ in $\mathop{\mathrm{LAff}}(F(S))$}$$ for every $v'\in C$.* *Proof.* We will prove the result by induction on $\vert C\vert$, the size of $C$. If $\vert C\vert =1$, the result follows from . Thus, fix $n\in{\mathbb{N}}$ with $n \geq 2$, and assume that the result holds for any finite subset of $n-1$ elements. Given $C\subseteq L_x$ with $\vert C\vert =n$, pick some $v_0\in C$. Applying the induction hypothesis, we get an element $w''\in L_x$ such that $$u' \ll w'',\quad x' \ll \infty w'',\,\,\,\text{ and }\,\,\, \frac{k}{k+1}\widehat{v'} \leq \widehat{w''}$$ for every $v'\in C \setminus \{v_0\}$. Now, applying to $x'$, $w''$ and $v_0$, we get a strongly soft element $w' \in L_x$ such that $$w'' \ll w',\quad x' \ll \infty w',\,\,\,\text{ and }\,\,\, \frac{k}{k+1}\widehat{v_0} \leq \widehat{w'}.$$ Then $\widehat{w''}\leq \widehat{w'}$, which shows that $w'$ satisfies the required conditions. ◻ **Proposition 1**. 
*Let $S$ be a countably based, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), let $x \in S$, and let $u' \in L_x$. Then there exists $w\in S_{\rm{soft}}$ such that $$u'\ll w\leq x\leq \infty w,\,\,\,\text{ and }\,\,\, \lambda (w)=\sup_{v'\in L_x} \lambda (v'),$$ for every $\lambda \in F(S)$.* *Proof.* By definition of $L_x$, we obtain $u \in S_{\rm{soft}}$ such that $u' \ll u \ll x$. Let $(x_n)_n$ be a $\ll$-increasing sequence with supremum $x$, and such that $u \ll x_0$. Note that the sets $L_{x_n}$ form an increasing sequence of subsets of $S$ with $L_x = \bigcup_n L_{x_n}$. Let $B$ be a countable basis for $S$. Then $$B \cap L_x = \bigcup_n (B \cap L_{x_n}),$$ and we can choose a $\subseteq$-increasing sequence $(C_n)_n$ of finite subsets of $B \cap L_x$ such that $$B \cap L_x = \bigcup_n C_n, \,\,\,\text{ and }\,\,\, C_n \subseteq B \cap L_{x_n} \text{ for each $n$}.$$ We have $u'\in L_{x_0} \subseteq L_{x_1}$. Apply to $k=1,(0 \ll x_1),u',C_1$ to obtain a strongly soft element $w_1'\in L_{x_1}$ such that $$u' \ll w_1',\quad 0 \ll \infty w_1', \,\,\,\text{ and }\,\,\, \frac{1}{2}\widehat{v'} \leq \widehat{w_1'}$$ for every $v' \in C_1$. We have $w_1' \in L_{x_2}$. Applying again to $k=2,(x_1 \ll x_2),w_1',C_2$, we obtain a strongly soft element $w_2' \in L_{x_2}$ such that $$w_1' \ll w_2',\quad x_1 \ll \infty w_2',\,\,\,\text{ and }\,\,\, \frac{2}{3} \widehat{v'}\leq \widehat{w_2'}$$ for every $v'\in C_2$. Proceeding inductively, we get a $\ll$-increasing sequence of strongly soft elements $(w_n')_n$ such that $$w_n'\in L_{x_n},\quad x_{n-1}\ll \infty w_{n}'\,\,\,\text{ and }\,\,\, \frac{n}{n+1}\widehat{v'}\leq \widehat{w_n'}$$ for every $v'\in C_n$ and $n \geq 2$. Set $w:=\sup_n w_n'$, which is strongly soft by [@ThiVil23arX:Soft Theorem 4.14]. Note that we get $u' \ll w_1' \leq w \leq x$ by construction. Further, since $x_n\leq \infty w_{n+1}'\leq \infty w$ for each $n \geq 2$, we deduce that $x\leq \infty w$. Now take $\lambda \in F(S)$. Given $v'\in B \cap L_x$, choose $n_0 \geq 2$ such that $v' \in C_{n_0}$. We have $$\frac{n}{n+1} \lambda(v') \leq \lambda(w_n') \leq \lambda(w)$$ for every $n\geq n_0$. Thus, it follows that $\lambda (v') \leq \lambda (w)$ for every $v' \in B \cap L_x$. Since $L_x$ is downward-hereditary, every element in $L_x$ is the supremum of an increasing sequence from $B \cap L_x$. Using also that functionals preserve suprema of increasing sequences, we obtain $$\sup_{v'\in L_x} \lambda (v') \leq \sup_{v'\in B\cap L_x} \lambda (v') \leq \lambda (w) = \sup_n \lambda (w_n')\leq \sup_{v'\in L_x} \lambda (v'),$$ which shows that $w$ has the desired properties. ◻ **Lemma 7**. *Let $S$ be a $(2, \omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $x',x,t\in S$ be such that $x'\ll x\leq \infty t$. Then there exists a strongly soft element $u'\in L_x$ such that $$x'\ll u'+t.$$* *Proof.* Choose $x'' \in S$ such that $x'\ll x''\ll x$. Applying [@ThiVil23Glimm Proposition 4.10] to $$x''\ll x\leq\infty x,\infty t,$$ we get $s\in S$ such that $$x'' \ll \infty s, \,\,\,\text{ and }\,\,\,s\ll x,t.$$ By [@ThiVil23arX:Soft Proposition 7.7], we can choose $s'\in S_{\rm{soft}}$ such that $$x'' \leq \infty s', \,\,\,\text{ and }\,\,\,s'\ll s.$$ Then $x'' \ll \infty s'$. Applying (O5) to $s'\ll s\leq x$, we obtain $v\in S$ satisfying $$v+s'\leq x\leq v+s.$$ In particular, one has $x''\ll v+s$. 
Applying (O6) to $x' \ll x'' \leq v+s$, we find $u\in S$ such that $$x'\ll u+s,\,\,\,\text{ and }\,\,\,u\ll x'' , v.$$ Since $u\ll x''\leq \infty s'$, it follows from [@ThiVil23arX:Soft Theorem 4.14] that $u+s'$ is soft. Further, we get $$x' \ll u+s \leq u+t \leq (u+s')+t, \,\,\,\text{ and }\,\,\, u+s'\leq v+s'\leq x.$$ Using that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup by , we can find $u'\in S_{\rm{soft}}$ such that $$x'\ll u'+t,\,\,\,\text{ and }\,\,\, u'\ll u+s'\leq x.$$ Then $u'\in L_x$, which shows that $u'$ has the desired properties. ◻ **Lemma 8**. *Let $S$ be a $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $t\in S_{\rm{soft}}$ and $t',x',x\in S$ be such that $$x'\ll x\leq \infty t,\,\,\,\text{ and }\,\,\, t'\ll t.$$* *Then, there exists a strongly soft element $v'\in L_x$ such that $$x'+t'\leq v'+t.$$* *Proof.* By [@ThiVil23arX:Soft Proposition 4.13], there exists $s\in S_{\rm{soft}}$ such that $$t'+s\leq t\leq \infty s.$$ Applying to $x'\ll x\leq \infty s$, we obtain a strongly soft element $v'\in L_x$ satisfying $x'\leq v'+s$. Consequently, we obtain $$x'+t'\leq v'+s+t'\leq v'+t. \qedhere$$ ◻ We refer to [@ThiVil21DimCu2 Section 5] for an introduction to the basic technique to reduce certain proofs about $\ensuremath{\mathrm{Cu}}$-semigroups to the countably based setting. In particular, a property $\mathcal{P}$ for $\ensuremath{\mathrm{Cu}}$-semigroups is said to satisfy the Löwenheim-Skolem condition if, for every $\ensuremath{\mathrm{Cu}}$-semigroup $S$ satisfying $\mathcal{P}$, there exists a $\sigma$-complete and cofinal subcollection of countably based sub-$\ensuremath{\mathrm{Cu}}$-semigroups of $S$ satisfying $\mathcal{P}$. **Lemma 9**. *Let $S$ be a $\ensuremath{\mathrm{Cu}}$-semigroup, let $u \in S_{\rm{soft}}$, and let $\mathcal{R}$ be the family of countably based sub-$\ensuremath{\mathrm{Cu}}$-semigroups $T \subseteq S$ containing $u$ and such that $u$ is strongly soft in $T$. Then $\mathcal{R}$ is $\sigma$-complete and cofinal.* *Proof.* Strong softness is preserved under $\ensuremath{\mathrm{Cu}}$-morphisms, and the inclusion map of a sub-$\ensuremath{\mathrm{Cu}}$-semigroup is a $\ensuremath{\mathrm{Cu}}$-morphism. Hence, given sub-$\ensuremath{\mathrm{Cu}}$-semigroups $T_1 \subseteq T_2 \subseteq S$ containing $u$, if $u$ is strongly soft in $T_1$ then it is also strongly soft in $T_2$. This implies in particular that $\mathcal{R}$ is $\sigma$-complete. To show that $\mathcal{R}$ is cofinal, let $T_0 \subseteq S$ be a countably based sub-$\ensuremath{\mathrm{Cu}}$-semigroup, and let $B_0 \subseteq T_0$ be a countable basis, that is, a countable subset such that every element in $T_0$ is the supremum of an increasing sequence from $B_0$. Let $(u_n)_n$ be a $\ll$-increasing sequence in $S$ with supremum $u$. Since $u$ is strongly soft in $S$, for each $n$ we obtain $t_n \in S$ such that $$u_n + t_n \ll u, \,\,\,\text{ and }\,\,\, u_n \ll \infty t_n.$$ By [@ThiVil21DimCu2 Lemma 5.1], there exists a countably based sub-$\ensuremath{\mathrm{Cu}}$-semigroup $T \subseteq S$ containing $$B_0 \cup \{u_0,u_1,\ldots\} \cup \{t_0,t_1,\ldots\}.$$ One checks that $T_0 \subseteq T$, and that $u$ is strongly soft in $T$. ◻ **Theorem 1**. *Let $S$ be a $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), let $x\in S$, and let $u'\in L_x$.
Then there exists $w\in S_{\rm{soft}}$ such that $$u'\ll w\leq x\leq \infty w,\,\,\,\text{ and }\,\,\, \widehat{w} = \widehat{x}.$$* *Proof.* We first prove the result under the additional assumption that $S$ is countably based. Use to obtain $w\in S_{\rm{soft}}$ such that $$u' \ll w \leq x \leq \infty w, \,\,\,\text{ and }\,\,\, \lambda (w)=\sup_{v'\in L_x} \lambda (v'),$$ for every $\lambda \in F(S)$. Since $w\leq x$, we have $\widehat{w} \leq \widehat{x}$. To show the reverse inequality, let $\lambda\in F(S)$. We need to prove that $\lambda(x) \leq \lambda(w)$. Take $x',w'\in S$ such that $x' \ll x$ and $w'\ll w$. Applying , we obtain an element $v'\in L_x$ such that $$x'+w'\leq v'+w.$$ Since $v'$ belongs to $L_x$, we have $\lambda (v')\leq \lambda (w)$. This implies $$\lambda (x')+\lambda (w')\leq \lambda (v')+\lambda (w) \leq 2\lambda (w).$$ Passing to the supremum over all $x'$ way-below $x$, and all $w'$ way-below $w$, we get $$\lambda (x)+\lambda (w) \leq 2\lambda (w).$$ This proves $\lambda (x)\leq \lambda (w)$. Indeed, if $\lambda (w)=\infty$, then there is nothing to prove. If $\lambda (w)\neq \infty$, we can cancel $\lambda (w)$ from the previous inequality. We now consider the case that $S$ is not countably based. Choose $u \in S_{\rm{soft}}$ such that $u' \ll u \ll x$. Since $(2,\omega)$-divisibility and (O5)-(O7) each satisfy the Löwenheim-Skolem condition, and using also , we can use the technique from [@ThiVil21DimCu2 Section 5] to deduce that there exists a countably based, $(2,\omega )$-divisible sub-$\ensuremath{\mathrm{Cu}}$-semigroup $H \subseteq S$ satisfying (O5)-(O7), containing $x$, $u$ and $u'$, and such that $u$ is strongly soft in $H$. Applying the first part of the proof to $H$, we find $w\in H_{\rm{soft}}$ such that $$u'\ll w\leq x\leq \infty w,\,\,\,\text{ and }\,\,\, \lambda (x)=\lambda (w)$$ for every $\lambda \in F(H)$. Since the inclusion $\iota\colon H\to S$ is a $\ensuremath{\mathrm{Cu}}$-morphism, it follows that $w$ is strongly soft in $S$. Further, any functional $\lambda$ on $S$ induces the functional $\lambda\iota$ on $H$. This shows that $w$ satisfies the required conditions. ◻ **Theorem 1**. *Let $A$ be a stable $\mathrm{C}^*$-algebra with the Global Glimm Property. Then, for any $a\in A_+$ there exists a soft element $b \in A_+$ with $b\precsim a$ and such that $$d_\tau(a) = d_\tau(b)$$ for every $\tau \in \mathop{\mathrm{QT}}(A)$.* *Proof.* Let $a\in A_+$. Since $A$ has the Global Glimm Property, it follows from [@ThiVil23Glimm Theorem 3.6] that $\mathop{\mathrm{Cu}}(A)$ is $(2,\omega )$-divisible. Using , find $w\in \mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ such that $w\leq [a]$ and $\lambda (w)=\lambda ([a])$ for every $\lambda\in F(\mathop{\mathrm{Cu}}(A))$. By , there exists a soft element $b\in A_+$ such that $w=[b]$. The result now follows from the fact that the map $$\tau\mapsto \left([a]\mapsto d_\tau(a)\right)$$ is a natural bijection from ${\rm QT} (A)$ to $F(\mathop{\mathrm{Cu}}(A))$; see [@EllRobSan11Cone Theorem 4.4]. ◻ **Lemma 10**. *Let $S$ be a $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup $S$ satisfying (O5), let $x\in S$, and let $\lambda \in F(S)$. Then $$\sup_{\{ v\in S_{\rm{soft}}: v\leq x \}}\lambda (v)=\sup_{v'\in L_x} \lambda (v')$$* *Proof.* Given $v' \in L_x$, there exists $v \in S_{\rm{soft}}$ with $v' \leq v \leq x$, which shows the inequality '$\geq$'. Conversely, let $v \in S_{\rm{soft}}$ with $v \leq x$. 
Since $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup by , there exists a $\ll$-increasing sequence $(v_n')_n$ in $S_{\rm{soft}}$ with supremum $v$. Each $v_n'$ belongs to $L_x$, and one gets $$\lambda(v) = \sup_n \lambda(v_n') \leq \sup_{v'\in L_x} \lambda (v').$$ This shows the inequality '$\leq$'. ◻ We will prove in that the inclusion $\iota\colon S_{\rm{soft}}\to S$ induces a homeomorphism $\iota^*\colon F(S)\to F(S_{\rm{soft}})$. The inverse of $\iota^*$ is constructed in the next result. **Proposition 1**. *Let $S$ be a $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $\lambda \in F(S_{\rm{soft}})$. Then $\lambda_{\rm{soft}}\colon S \to [0,\infty]$ given by $$\lambda_{\rm{soft}}(x) := \sup_{\{ v\in S_{\rm{soft}}: v\leq x \}} \lambda (v)$$ for $x \in S$, is a functional on $S$.* *Proof.* It is easy to see that $\lambda_{\rm{soft}}$ preserves order. Further, given an increasing sequence $(x_n)_n$ with supremum $x$ in $S$, we have that for every $v'\in L_x$ there exists $n\in{\mathbb{N}}$ with $v'\in L_{x_n}$. Thus, using , we get $$\lambda_{\rm{soft}}(x) = \sup_{v'\in L_x}\lambda (v') \leq \sup_n \left(\sup_{v'\in L_{x_n}}\lambda (v') \right) =\sup_n \lambda_{\rm{soft}}(x_n).$$ Since $\lambda_{\rm{soft}}$ is order-preserving, we also have $\sup_n \lambda_{\rm{soft}}(x_n)\leq \lambda_{\rm{soft}}(x)$, which shows that $\lambda_{\rm{soft}}$ preserves suprema of increasing sequences. Given $x,y\in S$ and $u,v\in S_{\rm{soft}}$ such that $u\leq x$ and $v\leq y$, we have $u+v\in S_{\rm{soft}}$ and $u+v\leq x+y$. This implies that $$\lambda_{\rm{soft}}(x)+\lambda_{\rm{soft}}(y)\leq \lambda_{\rm{soft}}(x+y).$$ Thus, $\lambda_{\rm{soft}}$ is superadditive. Finally, we show that $\lambda_{\rm{soft}}$ is subadditive. Given $x,y\in S$ and $w'\in L_{x+y}$, take $x',x'',y',y''\in S$ such that $$x'\ll x''\ll x,\quad y'\ll y''\ll y,\,\,\,\text{ and }\,\,\, w'\ll x'+y'.$$ By [@ThiVil23arX:Soft Proposition 7.7], there exist $s,t\in S_{\rm{soft}}$ such that $$s\leq x''\leq \infty s, \,\,\,\text{ and }\,\,\, t\leq y''\leq \infty t.$$ Take $s',t'\in S$ such that $s'\ll s$ and $t'\ll t$. Using , we find $u'\in L_x$ and $v'\in L_y$ such that $$x'+s'\leq u'+s,\,\,\,\text{ and }\,\,\, y'+t'\leq v'+t.$$ Consequently, one has $$w'+s'+t' \leq x'+y'+s'+t' \leq u'+s+v'+t.$$ Applying , find $u,v\in S_{\rm{soft}}$ such that $$u'\ll u\leq x\leq \infty u,\,\,\,\text{ and }\,\,\, v'\ll v\leq y\leq \infty v.$$ This implies $$w'+s'+t' \leq u+s+v+t$$ and, therefore, $$\lambda (w')+\lambda (s' + t') \leq \lambda (u)+\lambda (v)+\lambda (s+t).$$ Passing to the suprema over all $s'$ way-below $s$, and all $t'$ way-below $t$, we deduce that $$\lambda (w')+\lambda (s + t) \leq \lambda (u)+\lambda (v)+\lambda (s+t).$$ Note that $s+t\leq x''+y'' \ll x+y \leq \infty (u+v)$. This allows us to cancel $\lambda (s + t)$, and we obtain $$\lambda (w') \leq \lambda (u)+\lambda (v) \leq \lambda_{\rm{soft}}(x)+\lambda_{\rm{soft}}(y).$$ Since this holds for every $w'\in L_{x+y}$, we can apply to get $$\lambda_{\rm{soft}}(x+y) = \sup_{\{ w\in S_{\rm{soft}}: w \leq x+y \}}\lambda(w) = \sup_{w'\in L_{x+y}} \lambda(w') \leq \lambda_{\rm{soft}}(x)+\lambda_{\rm{soft}}(y).$$ This shows that $\lambda_{\rm{soft}}$ is subadditive, and thus a functional. ◻ **Theorem 1**. *Let $S$ be a $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Let $\iota\colon S_{\rm{soft}}\to S$ be the canonical inclusion.
*Then the map $\iota^*\colon F(S)\to F(S_{\rm{soft}})$ given by $\iota^*(\lambda):=\lambda\circ\iota$ is a natural homeomorphism.* *Proof.* Given $\lambda \in F(S_{\rm{soft}})$, let $\lambda_{\rm{soft}}\in F(S)$ be defined as in . This defines a map $\phi \colon F(S_{\rm{soft}})\to F(S)$ by $\phi(\lambda) :=\lambda_{\rm{soft}}$. We verify that $\iota^*\phi={\operatorname{id}}_{F(S_{\rm{soft}})}$ and $\phi\iota^*={\operatorname{id}}_{F(S)}$. Given $\lambda \in F(S_{\rm{soft}})$ and $w\in S_{\rm{soft}}$, we have $$\iota^*\phi(\lambda )(w) = \iota^*\lambda_{\rm{soft}}(w) = \lambda_{{\rm{soft}}} (\iota (w)) = \sup_{\{ v\in S_{\rm{soft}}: v\leq w \}} \lambda (v) = \lambda (w),$$ which shows $\iota^*\phi={\operatorname{id}}_{F(S_{\rm{soft}})}$. Conversely, if $\lambda \in F(S)$ and $x\in S$, we can use at the last step to obtain $$\phi\iota^*(\lambda)(x) = \phi (\lambda \iota )(x) = \sup_{\{ v\in S_{\rm{soft}}: v\leq x \}}\lambda (v) = \lambda (x).$$ This shows that $\iota^*$ is a bijective, continuous map. Since $F(S)$ and $F(S_{\rm{soft}})$ are both compact Hausdorff spaces, it follows that $\iota^*$ is a homeomorphism. ◻ Since simple, nonelementary $\mathrm{C}^*$-algebras automatically have the Global Glimm Property, the next result can be considered as a generalization of [@Phi14arX:LargeSub Lemma 3.8] to the non-simple setting. **Theorem 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property. Then $\mathop{\mathrm{QT}}(A)$ is naturally homeomorphic to $F(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})$.* *Proof.* The result follows from and the fact that $\mathop{\mathrm{QT}}(A)$ is naturally homeomorphic to $F(\mathop{\mathrm{Cu}}(A))$; see [@EllRobSan11Cone Theorem 4.4]. ◻ # Retraction onto the soft part of a Cuntz semigroup {#sec:retract} Let $S$ be a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Given any $x\in S$, we have seen in that $L_x$ is upward-directed. It then follows from [@AntPerThi18TensorProdCu Remarks 3.1.3] that $L_x$ has a supremum, which justifies the following: **Definition 1**. Let $S$ be a countably based, left-soft separative, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). We define $\sigma \colon S \to S$ by $$\sigma (x) := \sup L_x = \sup \big\{ u'\in S : u' \ll u \ll x \text{ for some } u\in S_{\rm{soft}}\big\}$$ for $x \in S$. We will see in that $\sigma(x)$ is the largest strongly soft element dominated by $x$. Therefore, we often view $\sigma$ as a map $S \to S_{\rm{soft}}$. In we show that $\sigma$ is close to being a generalized $\ensuremath{\mathrm{Cu}}$-morphism, and in we give sufficient conditions ensuring that it is. If $A$ is a separable $\mathrm{C}^*$-algebra satisfying the Global Glimm Property and with left-soft separative Cuntz semigroup, then $\mathop{\mathrm{Cu}}(A)$ satisfies the assumptions of . If $A$ also has stable rank one or strict comparison of positive elements, then $\sigma \colon \mathop{\mathrm{Cu}}(A) \to \mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ is a generalized $\ensuremath{\mathrm{Cu}}$-morphism; see . Then $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ is a *retract* of $\mathop{\mathrm{Cu}}(A)$; see . This generalizes the construction of predecessors in the context of simple $\mathrm{C}^*$-algebras from [@Eng14PhD], as well as the constructions from [@AntPerThi18TensorProdCu Section 5.4] and [@Thi20RksOps Proposition 2.9]. **Remark 1**.
Let $S$ be a weakly cancellative $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7) (for instance, the Cuntz semigroup of a stable rank one $\mathrm{C}^*$-algebra). Take $x\in S$, and consider the set $$L_x':= \big\{ u' : u'\ll u \leq \infty s, \text{and } u+s\ll x \text{ for some } u,s\in S \big\}.$$ A slight modification of shows that $L_x'$ is upward directed. If $S$ is countably based and $(2,\omega )$-divisible, it is readily checked that $\sigma (x) = \sup L_x = \sup L_x'$. However, if $S$ is not $(2,\omega )$-divisible, $\sup L_x'$ may not be strongly soft. For example, the Cuntz semigroup of $\mathbb{C}$ is $\overline{{\mathbb{N}}}= {\mathbb{N}}\cup\{\infty \}$, which is weakly cancellative. One can check that $$\sup L_x' = \begin{cases} 0, & \text{ if } x=0 \\ x-1, & \text{ if } x\neq 0,\infty \\ \infty, & \text{ if } x=\infty \\ \end{cases}.$$ In particular, if $x\neq 0, \infty$, we get $\sup L_x'=x-1$, which is not strongly soft. As another example, there are $\ensuremath{\mathrm{Cu}}$-semigroups whose order structure is deeply related to its soft elements but where $\sup L_x'$ is rarely strongly soft: Let $S$ be a $\ensuremath{\mathrm{Cu}}$-semigroup of the form $\mathop{\mathrm{Lsc}}(X,\overline{{\mathbb{N}}})$ for some $T_1$-space $X$ (these were called *Lsc-like* in [@Vil21arX:CommCuAI]). An element $f \in \mathop{\mathrm{Lsc}}(X,\overline{{\mathbb{N}}})$ is strongly soft if and only if $f = \infty \chi_U$ for the indicator function $\chi_U$ of some open subset $U \subseteq X$. Thus, if $x\in S$ satisfies $x\ll \infty$, we have $\sup L_x'\ll \infty$, which implies that $\sup L_x'$ is not strongly soft, unless it is zero. **Proposition 1**. *Let $S$ be a countably based, left-soft separative, $(2, \omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $x\in S$. Then:* 1. *The element $\sigma (x)$ is the largest strongly soft element dominated by $x$.* 2. *We have $\infty x=\infty\sigma (x)$.* 3. *We have $x=\sigma (x)$ if and only if $x$ is strongly soft.* 4. *We have $x\leq \sigma (x)+t$ for all $t \in S$ with $x\leq \infty t$.* *Proof.* To verify (1), note that the members of $L_x$ are bounded by $x$, and consequently $\sigma(x) \leq x$. To see that $\sigma (x)$ is strongly soft, let $s\in S$ be such that $s\ll \sigma (x)$. We will find $t \in S$ such that $s + t \ll \sigma(x)$ and $s \ll \infty t$. Since $\sigma(x)=\sup L_x$, there exists $u' \in L_x$ such that $s \ll u' \leq \sigma (x)$. Using that $u'\in L_x$, we find $u\in S_{\rm{soft}}$ with $u'\ll u\ll x$. By , $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, and we obtain $u'' \in S_{\rm{soft}}$ such that $$s \ll u' \ll u'' \ll u \ll x.$$ Then $s \ll u'' \in S_{\rm{soft}}$ and by definition we obtain $t \in S$ such that $s+t\ll u''$ and $s \ll \infty t$. We have $u'' \in L_x$ and therefore $u'' \leq \sigma(x)$, which shows that $t$ has the desired properties. Thus, $\sigma(x)$ is a strongly soft element dominated by $x$. To show that it is the largest element with these properties, let $w \in S_{\rm{soft}}$ satisfy $w\leq x$. We can use once again that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup to find a $\ll$-increasing sequence $(w_n)_n$ of strongly soft elements with supremum $w$. Then $w_n \in L_x$ for each $n$, and consequently $$w = \sup_n w_n \leq \sup L_x = \sigma (x).$$ This also shows that $x=\sigma (x)$ if and only if $x$ is strongly soft. We have proved (1) and (3). 
To verify (2), we first note that $\infty \sigma(x) \leq \infty x$ since $\sigma(x) \leq x$. For the converse inequality, use to obtain $w\in S_{\rm{soft}}$ with $w\leq x\leq \infty w$. By (1), we have $w\leq \sigma (x)$, and we get $$\infty x= \infty w\leq \infty \sigma(x).$$ Finally, to prove (4), let $t \in S$ satisfy $x \leq \infty t$. Let $x' \in S$ satisfy $x' \ll x$. Applying , we obtain $u' \in L_x$ such that $x' \ll u'+t$. Then $$x' \ll u'+t \leq \sigma(x)+t.$$ Passing to the supremum over all $x'$ way-below $x$, we get $x \leq \sigma(x)+t$, as desired. ◻ **Example 1**. Let $A$ be a separable, $\mathcal{W}$-stable $\mathrm{C}^*$-algebra, that is, $A \cong A \otimes \mathcal{W}$ where $\mathcal{W}$ denotes the Jacelon-Razak algebra. Then, every element in $\mathop{\mathrm{Cu}}(A)$ is strongly soft. Thus implies that $\sigma (x)=x$ for every $x\in \mathop{\mathrm{Cu}}(A)$. We refer to [@AntPerThi18TensorProdCu Section 7.5] for details. Similarly, given a separable $\mathcal{Z}$-stable $\mathrm{C}^*$-algebra $A$, where $\mathcal{Z}$ denotes the Jiang-Su algebra, then it follows from [@AntPerThi18TensorProdCu Theorem 7.3.11] that $\mathop{\mathrm{Cu}}(A)$ has $Z$-multiplication. Here, $Z=(0,\infty]\sqcup {\mathbb{N}}$ is the Cuntz semigroup of $\mathcal{Z}$, and $(0,\infty]$ is the subsemigroup of nonzero, strongly soft elements. Let $1' \in Z$ be the strongly soft element corresponding to $1\in [0,\infty ]$. As noted in [@AntPerThi18TensorProdCu Proposition 7.3.16], one has $$1'\mathop{\mathrm{Cu}}(A) = \mathop{\mathrm{Cu}}(A)_{\rm{soft}} \cong \mathop{\mathrm{Cu}}(A)\otimes [0,\infty ].$$ This implies that $\sigma (x) = 1'x$ for each $x\in\mathop{\mathrm{Cu}}(A)$. **Lemma 11**. *Let $S$ be a countably based, left-soft separative, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $x\in S$. Then $$2\sigma (x) = x+\sigma (x).$$* *Proof.* Using that $\sigma (x)\leq x$, we have $2\sigma (x)\leq x+\sigma (x)$. To show the reverse inequality, let $w \in S$ satisfy $w\ll \sigma (x)$. Since $\sigma (x)$ is strongly soft, it follows from [@ThiVil23arX:Soft Proposition 4.13] that there exists $t\in S$ with $w+t\leq \sigma (x)\leq \infty t$. We have $x \leq \infty \sigma (x)$ by  (2), and thus $x \leq \infty t$. Therefore, $x\leq \sigma (x)+t$ by  (4). Thus, we have $$x+w\leq \sigma (x) + t +w \leq 2\sigma (x).$$ Passing to the supremum over all $w$ way-below $\sigma (x)$, we get $x+\sigma (x)\leq 2\sigma (x)$. ◻ **Theorem 1**. *Let $S$ be a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Then, the map $\sigma\colon S\to S_{\rm{soft}}$ preserves order, suprema of increasing sequences, and is superadditive. Further, we have $$2\sigma (x+y) = \sigma (x+y) + \big( \sigma (x) + \sigma (y) \big) = 2\big( \sigma (x) + \sigma (y) \big)$$ for every $x,y\in S$.* *Proof.* To show that $\sigma$ is order-preserving, let $x,y\in S$ satisfy $x\leq y$. Then $L_x \subseteq L_y$, and thus $$\sigma(x) = \sup L_x \leq \sup L_y = \sigma(y).$$ To show that $\sigma$ preserves suprema of increasing sequences, let $(x_n)_n$ be an increasing sequence in $S$ with supremum $x$. Since $\sigma$ is order-preserving, one gets $\sup_n\sigma(x_n) \leq \sigma(x)$. Conversely, given $u' \in L_x$, choose $u \in S_{\rm{soft}}$ with $u' \ll u \ll x$. Then there exists $n\in {\mathbb{N}}$ such that $u \ll x_n$, and thus $u' \in L_{x_n}$. 
We deduce that $$u' \leq \sup L_{x_n} = \sigma(x_n) \leq \sup_n \sigma(x_n).$$ Hence, $\sigma(x) = \sup L_x \leq \sup_n \sigma(x_n)$, as desired. To see that $\sigma$ is superadditive, let $x,y\in S$. Note that $\sigma (x)+\sigma (y)$ is a strongly soft element bounded by $x+y$. Using  (1), we get $\sigma (x)+\sigma (y)\leq \sigma (x+y)$. Next, given $x,y \in S$, let us show that $2\sigma (x+y) \leq 2\sigma (x) + 2\sigma (y)$. To prove this, let $w \in S$ satisfy $w \ll \sigma(x+y)$. By [@ThiVil23arX:Soft Proposition 4.13], there exists $s\in S$ satisfying $$w + s \leq \sigma(x+y) \leq \infty s.$$ Applying [@ThiVil23arX:Soft Proposition 7.7], we find $t \in S$ such that $2t \leq s\leq \infty t$. Using also  (2), we deduce that $$w+2t \leq w+s \leq \sigma(x+y), \,\,\,\text{ and }\,\,\, x,y \leq \infty (x+y) = \infty \sigma (x+y) \leq \infty s \leq \infty t.$$ Using  (4) at the second step, and at last step, we get $$\begin{split} \sigma (x+y) + w &\leq x+y+w\leq \sigma (x)+\sigma (y)+w+2t\leq \sigma (x)+\sigma (y) + \sigma (x+y)\\ &\leq \sigma (x)+\sigma (y) + x + y = 2\sigma (x) + 2\sigma (y). \end{split}$$ Passing to the supremum over all elements $w$ way-below $\sigma(x+y)$, we obtain $$2\sigma (x+y) \leq 2\sigma (x) + 2\sigma (y).$$ Next, given $x,y \in S$, using the above inequality together with the established superadditivity of $\sigma$, we get $$2\sigma (x+y) \leq 2\sigma (x) + 2\sigma (y) \leq \sigma (x+y) + \big( \sigma (x) + \sigma (y) \big) \leq 2\sigma (x+y),$$ as desired. ◻ Recall that a *generalized $\ensuremath{\mathrm{Cu}}$-morphism* is a monoid morphism between $\ensuremath{\mathrm{Cu}}$-semigroups that preserves order and suprema of increasing sequences. We recall the definition of *retract* from [@ThiVil22DimCu Definition 3.14]. **Definition 1**. Let $S, T$ be $\ensuremath{\mathrm{Cu}}$-semigroups. We say that $S$ is a *retract* of $T$ if there exist a $\ensuremath{\mathrm{Cu}}$-morphism $\iota\colon S\to T$ and a generalized $\ensuremath{\mathrm{Cu}}$-morphism $\sigma \colon T\to S$ such that $\sigma\circ \iota = {\operatorname{id}}_S$. **Proposition 1**. *Let $S$ be a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Additionally, assume one of the following:* - *$S$ is almost unperforated;* - *$S$ is inf-semilattice ordered;* - *$S\otimes \{ 0,\infty \}$ is algebraic.* *Then, $\sigma$ is a generalized $\ensuremath{\mathrm{Cu}}$-morphism and $S_{\rm{soft}}$ is a retract of $S$.* *Proof.* By , we only need to check that $\sigma$ is subadditive. (i): If $S$ is almost unperforated, then it follows from that $S_{\rm{soft}}$ is unperforated. Given any pair $x,y\in S$, we know from that $$2\sigma (x+y) = 2\big(\sigma (x) + \sigma (y)\big).$$ Since this equality is in $S_{\rm{soft}}$, it follows that $\sigma (x+y)=\sigma (x) + \sigma (y)$. For (ii) and (iii), note that it is enough to prove that $\sigma (x+y)\leq x+\sigma (y)$ for all $x,y \in S$. Indeed, if this inequality holds, one can use it at the second and last steps to get $$\sigma (x+y) = \sigma (\sigma (x+y))\leq \sigma (x+\sigma (y)) = \sigma (\sigma (y) +x)\leq \sigma (y)+\sigma (x),$$ as required. Given $x,y \in S$, we proceed to verify that $\sigma (x+y)\leq x+\sigma (y)$. Let $w\in S$ satisfy $w \ll \sigma(x+y)$. 
Choose $y'\in S$ such that $$y'\ll y, \,\,\,\text{ and }\,\,\, w \ll x+y'.$$ Since $\sigma(x+y)$ is strongly soft, it follows from [@ThiVil23arX:Soft Proposition 4.13] that there exists $r\in S_{\rm{soft}}$ such that $$w+r \leq \sigma (x+y) \leq \infty r.$$ Applying  (2), one gets $$y' \ll y \leq \infty \sigma (x+y)\leq \infty r.$$ Applying [@ThiVil23Glimm Proposition 4.7], we obtain $t',t\in S$ such that $$y'\leq\infty t', \,\,\,\text{ and }\,\,\, t'\ll t\ll r,y.$$ Using that $S$ is $(2,\omega )$-divisible, it follows from [@ThiVil23arX:Soft Proposition 5.6] that we may assume both $t'$ and $t$ to be strongly soft. Thus, as in the proof of , we can apply (O5) to obtain an element $b$ satisfying $$t'+b\leq y\leq t+b,\,\,\,\text{ and }\,\,\, y\leq \infty b,$$ which implies $$w+r\leq\sigma(x+y)\leq x+y \leq x+t+b$$ with $t\ll r \leq \infty (x+y)=\infty (x+b)$. Thus, since both $w$ and $r$ are strongly soft, left-soft separativity (in the form of ) implies that $w\leq x+b$. Since $S$ is countably based and satisfies (O7), the infimum $(b\wedge\infty t')$ exists. Note that $(b\wedge\infty t') + t'$ is soft because $(b\wedge\infty t') \leq \infty t'$; see [@ThiVil23arX:Soft Theorem 4.14]. Then $$(b\wedge\infty t') + t' \leq b+t' \leq y,$$ and thus $b\wedge\infty t' \leq (b\wedge\infty t') + t' \leq \sigma(y)$ by  (1). (ii): Assuming that $S$ is inf-semilattice ordered, it now follows that $$w\leq (x+b)\wedge(x+\infty t') = x+(b\wedge\infty t') \leq x+\sigma(y).$$ Passing to the supremum over all $w$ way-below $\sigma(x+y)$, we get $\sigma(x+y)\leq x+\sigma(y)$, as desired. This proves case (ii). (iii): Let us additionally assume that $y\ll \infty y$. Then, given $w$ and $r$ as before, we have that $y\ll \infty y\leq \infty r$. This implies that there exists $r'\in S$ such that $r'\ll r$ and $y\leq \infty r'$. Using at the last step, one gets $$w+r\leq \sigma (x+y)\leq x+y\leq x+\sigma (y)+r'$$ with $r'\ll r \leq \infty (x+y)=\infty (x+\sigma (y))$. Therefore, we can use to deduce that $w\leq x+\sigma (y)$. Since this holds for every $w$ way-below $\sigma (x+y)$, it follows that $\sigma(x+y)\leq x+\sigma(y)$ whenever $y\ll \infty y$. If $S\otimes \{ 0,\infty \}$ is algebraic, then by [@ThiVil23Glimm Lemma 4.16] every $y \in S$ is the supremum of an increasing sequence $(y_n)_n$ of elements $y_n \in S$ such that $y_n \ll \infty y_n$. Using the above for each $y_n$, and using that $\sigma$ preserves suprema of increasing sequences, we get $$\sigma(x+y) = \sup_n \sigma(x+y_n) \leq \sup_n \big( x+\sigma(y_n) \big) = x + \sigma(y),$$ as desired. ◻ **Theorem 1**. *Let $A$ be a separable $\mathrm{C}^*$-algebra with the Global Glimm Property. Additionally, assume one of the following:* - *$A$ has strict comparison of positive elements;* - *$A$ has stable rank one;* - *$A$ has topological dimension zero, and $\mathop{\mathrm{Cu}}(A)$ is left-soft separative.* *Then, $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$ is a retract of $\mathop{\mathrm{Cu}}(A)$.* *Proof.* The Cuntz semigroup $\mathop{\mathrm{Cu}}(A)$ is countably based and satisfies (O5)-(O7). Since $A$ has the Global Glimm Property, it follows from [@ThiVil23Glimm Theorem 3.6] that $\mathop{\mathrm{Cu}}(A)$ is $(2,\omega )$-divisible. We check that the additional conditions of are satisfied: (i): Assume that $A$ has strict comparison of positive elements. Then $\mathop{\mathrm{Cu}}(A)$ is almost unperforated by [@EllRobSan11Cone Proposition 6.2], and left-soft separative by . This verifies  (i). (ii): Assume that $A$ has stable rank one.
Then $\mathop{\mathrm{Cu}}(A)$ is inf-semilattice ordered by [@AntPerRobThi22CuntzSR1 Theorem 3.8], and left-soft separative by . This verifies  (ii). (iii): Assume that $A$ has topological dimension zero, and $\mathop{\mathrm{Cu}}(A)$ is left-soft separative. Then $\mathop{\mathrm{Cu}}(A)\otimes\{ 0,\infty \}$ is algebraic by [@ThiVil23Glimm Proposition 4.18]. This verifies  (iii). ◻ **Question 1**. Let $S$ be a countably based, weakly cancellative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Is the map $\sigma\colon S\to S_{\rm{soft}}$ subadditive? With view towards the proof of subadditivity in , we ask: **Question 1**. Let $S$ be the Cuntz semigroup of a $\mathrm{C}^*$-algebra. Let $x,y,z,w\in S$ satisfy $$w=2w, \quad x\leq y+z, \,\,\,\text{ and }\,\,\,x\leq y+w.$$ We know that $z\wedge w$ exists. Does it follow that $x\leq y+(z\wedge w)$? above has a positive answer if $S$ satisfies the *interval axiom*, as defined in [@ThiVil21arX:NowhereScattered Definition 9.3]. # Dimension of a Cuntz semigroup and its soft part {#sec:DimSoft} Let $S$ be a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and assume that $\sigma\colon S\to S_{\rm{soft}}$ is a generalized $\ensuremath{\mathrm{Cu}}$-morphism. We show that the (covering) dimension of $S$ and $S_{\rm{soft}}$, as defined in [@ThiVil22DimCu Definition 3.1], are closely related: We have $\dim(S_{\rm{soft}})\leq\dim(S)\leq\dim(S_{\rm{soft}})+1$; see . Using the technique developed in [@ThiVil21DimCu2 Section 5], we remove the assumption that the $\ensuremath{\mathrm{Cu}}$-semigroup is countably based; see . The result applies, in particular, to the Cuntz semigroup of every $\mathrm{C}^*$-algebra with the Global Glimm Property that has either strict comparison of positive elements, stable rank one, or topological dimension zero; see . We also study the dimension of the fixed-point algebra $A^\alpha$ for a finite group action $\alpha$; see . ** 1** (Dimension of $\ensuremath{\mathrm{Cu}}$-semigroups). Recall from [@ThiVil22DimCu Definition 3.1] that, given a $\ensuremath{\mathrm{Cu}}$-semigroup $S$ and $n\in{\mathbb{N}}$, we say that $S$ has *dimension* $n$, in symbols $\dim (S)=n$, if $n$ is the least integer such that, whenever $x'\ll x\ll y_1+\ldots +y_r$, there exist elements $z_{j,k}\in S$ with $j=1,\ldots ,r$ and $k=0,\ldots ,n$ such that - $z_{j,k}\ll y_j$ for every $j$ and $k$; - $x'\ll \sum_{j,k}z_{j,k}$; - $\sum_{j}z_{j,k}\ll x$ for each $k$. If no such $n$ exists, we say that $S$ has dimension $\infty$, in symbols $\dim (S)=\infty$. The next result generalizes [@ThiVil22DimCu Proposition 3.17] to the nonsimple setting. **Proposition 1**. *Let $S$ be a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and assume that $\sigma\colon S\to S_{\rm{soft}}$ is a generalized $\ensuremath{\mathrm{Cu}}$-morphism. Then, $$\dim(S_{\rm{soft}}) \leq \dim(S) \leq \dim(S_{\rm{soft}})+1.$$* *Proof.* Since $\sigma$ is a generalized $\ensuremath{\mathrm{Cu}}$-morphism, the first inequality follows from [@ThiVil22DimCu Proposition 3.15]. To show the second inequality, set $n:=\dim(S_{\rm{soft}})$, which we may assume to be finite. To verify that $\dim(S)\leq n+1$, let $x'\ll x\ll y_1+\ldots+y_r$ in $S$. 
We need to find $z_{j,k}\in S$ for $j=1,\ldots,r$ and $k=0,\ldots,n+1$ such that - $z_{j,k}\ll y_j$ for each $j$ and $k$; - $x'\ll\sum_{j,k}z_{j,k}$; - $\sum_j z_{j,k}\ll x$ for each $k$. First, choose $x'',x'''\in S$ such that $x'\ll x''\ll x''' \ll x$. Applying that $S$ satisfies (O6) for $x''\ll x'''\leq y_1+\ldots+y_r$, we obtain $s_1,\ldots,s_r\in S$ such that $$x''\ll s_1+\ldots+s_r, \,\,\,\text{ and }\,\,\,s_j\ll x''',y_j \ \text{ for each } j=1,\ldots,r.$$ Choose $s_1',\ldots,s_r' \in S$ such that $$x''\ll s_1'+\ldots+s_r', \,\,\,\text{ and }\,\,\,s_j'\ll s_j \ \text{ for each } j=1,\ldots,r.$$ Using that $S$ is $(2,\omega)$-divisible (and consequently also $(r,\omega)$-divisible by [@ThiVil23Glimm Paragraph 2.4]), we obtain $v\in S$ such that $$rv\leq x, \,\,\,\text{ and }\,\,\,x'''\leq\infty v.$$ For each $j$, we have $s_j \ll x''' \leq \infty v$. Applying [@ThiVil23Glimm Proposition 4.10] to $s_j'\ll s_j \ll \infty v, \infty y_j$, we obtain $v_j\in S$ such that $$s_j' \ll \infty v_j, \,\,\,\text{ and }\,\,\,v_j\ll v,y_j.$$ Note that $$x'' \ll s_1'+\ldots+s_r' \leq \infty (v_1+\ldots+v_r), \,\,\,\text{ and }\,\,\, v_1+\ldots+v_r \ll rv \leq x.$$ Now, applying at the second step, we have $$x' \ll x'' \leq \sigma(x'')+(v_1+\ldots+v_r).$$ Using that $S_{\rm{soft}}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup by , we can choose an element $w\in S_{\rm{soft}}$ such that $$x' \ll w+(v_1+\ldots+v_r), \,\,\,\text{ and }\,\,\, w\ll\sigma(x'').$$ Applying that $\dim(S_{\rm{soft}})\leq n$ for $w\ll\sigma(x'')\leq\sigma(y_1)+\ldots+\sigma(y_r)$, we obtain $z_{j,k}\in S_{\rm{soft}}$ for $j=1,\ldots,r$ and $k=0,\ldots,n$ such that - $z_{j,k}\ll \sigma(y_j)$ for each $j$ and $k=0,\ldots,n$; - $w\ll\sum_{j}\sum_{k=0}^n z_{j,k}$; - $\sum_j z_{j,k}\ll \sigma(x'')$ for each $k=0,\ldots,n$. Set $z_{j,n+1}:=v_j$ for each $j$. These elements satisfy conditions (i) and (iii). To verify (ii), we note that $$x' \ll w+(v_1+\ldots+v_r) \ll (\sum_{j}\sum_{k=0}^n z_{j,k} )+(v_1+\ldots+v_r) = \sum_{j}\sum_{k=0}^{n+1} z_{j,k},$$ as desired. ◻ **Theorem 1**. *Let $S$ be a left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Additionally, assume one of the following:* - *$S$ is almost unperforated;* - *$S$ satisfies the Riesz Interpolation Property, and the interval axiom;* - *$S\otimes \{ 0,\infty \}$ is algebraic.* *Then, $\dim(S_{\rm{soft}})\leq \dim(S)\leq \dim(S_{\rm{soft}})+1$.* *Proof.* By [@ThiVil21DimCu2 Proposition 5.3], properties (O5), (O6) and (O7) each satisfy the Löwenheim-Skolem condition. Similarly, one can see that left-soft separativity, $(2,\omega )$-divisibility, and the properties listed in (i)-(iii) each satisfy the Löwenheim-Skolem condition. (For (iii), one can use [@ThiVil23Glimm Lemma 4.16].) The proof is now analogous to [@ThiVil21DimCu2 Proposition 5.9] using . ◻ **Corollary 1**. *Let $A$ be a $\mathrm{C}^*$-algebra with the Global Glimm Property. Additionally, assume one of the following:* - *$A$ has strict comparison of positive elements;* - *$A$ has stable rank one;* - *$A$ has topological dimension zero, and $\mathop{\mathrm{Cu}}(A)$ is left-soft separative.* *Then, $\dim(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})\leq \dim(\mathop{\mathrm{Cu}}(A))\leq \dim(\mathop{\mathrm{Cu}}(A)_{\rm{soft}})+1$.* *Proof.* As in the proof of , we see that $\mathop{\mathrm{Cu}}(A)$ satisfies the corresponding assumptions of , from which the result follows. ◻ **Notation 1**.
Let $A$ be a $\mathrm{C}^*$-algebra, and let $\alpha\colon G\to {\rm Aut}(A)$ be an action of a finite group $G$ on $A$. We will denote by $C^* (G,A,\alpha )$ the induced crossed product. The *fixed-point algebra* $A^\alpha$ is defined as $$A^\alpha := \big\{ a\in A : \alpha_g(a)=a\text{ for all }g\in G \big\}.$$ ** 1** (Fixed-point semigroups). For a group action $\alpha$ on a $\mathrm{C}^*$-algebra $A$, there are three natural objects that may be seen as the fixed-point semigroup of $\mathop{\mathrm{Cu}}(A)$: The Cuntz semigroup $\mathop{\mathrm{Cu}}(A^\alpha)$, the fixed-point semigroup $\mathop{\mathrm{Cu}}(A)^\alpha$, and the fixed-point $\ensuremath{\mathrm{Cu}}$-semigroup $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$. We give some details. The *fixed-point semigroup* $\mathop{\mathrm{Cu}}(A)^\alpha$ is defined as $$\mathop{\mathrm{Cu}}(A)^\alpha := \big\{ x \in \mathop{\mathrm{Cu}}(A) : \mathop{\mathrm{Cu}}(\alpha_g) (x) = x \text{ for all } g\in G \big\}.$$ This is a submonoid of $\mathop{\mathrm{Cu}}(A)$ that is closed under passing to suprema of increasing sequences. In general, it is not known if or when $\mathop{\mathrm{Cu}}(A)^\alpha$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$. An indexed collection $(x_t)_{t\in (0,1]}$ of elements in a $\ensuremath{\mathrm{Cu}}$-semigroup $S$ is a *path* if $x_r\ll x_t$ whenever $r<t$ and $x_t = \sup_{r<t}x_r$ for every $t\in (0,1]$. The *fixed-point $\ensuremath{\mathrm{Cu}}$-semigroup*, as defined in [@GarSan16EquivHomoRokhlin Definition 2.8], is $$\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )} =\left\{ x\in\mathop{\mathrm{Cu}}(A) : \exists (x_t)_{t\in (0,1]}\text{ path in }\mathop{\mathrm{Cu}}(A) : \begin{array}{l} x_1=x,\text{ and }\\ \mathop{\mathrm{Cu}}(\alpha_g) (x_t) = x_t \,\,\forall t, g \end{array}\!\! \right\}.$$ Using [@GarSan16EquivHomoRokhlin Lemma 2.9], one can show that $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$ is always a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$. Note that $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$ is contained in $\mathop{\mathrm{Cu}}(A)^{\alpha}$. In we will see a situation in which $\mathop{\mathrm{Cu}}(A)^\alpha$ and $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$ agree. **Lemma 12**. *Let $S$ be an inf-semilattice ordered $\ensuremath{\mathrm{Cu}}$-semigroup, and let $\alpha$ be an action of a finite group $G$ on $S$ by $\mathop{\mathrm{Cu}}$-isomorphisms on $S$. Then the fixed-point semigroup $S^\alpha := \{ x \in S : \alpha_g(x)=x \text{ for all } g\in G \}$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $S$.* *Moreover, if $S$ satisfies weak cancellation (resp. (O5), (O6), (O7)), then so does $S^\alpha$.* *Proof.* Define $\Phi \colon S \to S^\alpha$ by $$\Phi(x) := \bigwedge_{g \in G} \alpha_g(x)$$ for $x \in S$. For each $x \in S$, we have $\Phi(\Phi(x)) = \Phi(x) \leq x$; and we have $\Phi(x)=x$ if and only if $x \in S^\alpha$. It is straightforward to verify that $S^\alpha$ is a submonoid that is closed under suprema of increasing sequences. To show that $S^\alpha$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup, it remains to verify that for given $x \in S^\alpha$ and $y \in S$ with $y \ll x$, there exists $x' \in S^\alpha$ with $y \leq x' \ll x$. Let $(x_n)_n$ be a $\ll$-increasing sequence in $S$ with supremum $x$. For each $g \in G$, we have $x = \alpha_g(x) = \sup_n \alpha_g(x_n)$, and it follows that $$x = \Phi(x) = \sup_n \Phi(x_n).$$ Hence, there exists $n_0$ such that $y \leq \Phi(x_{n_0})$.
Set $x' := \Phi(x_{n_0})$. Then $x' \in S^\alpha$ and $$y \leq x' \leq x_{n_0} \ll x,$$ which shows that $x'$ has the desired properties. Thus $S^\alpha$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup. Since $S^\alpha$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $S$, it follows that $S^\alpha$ is weakly cancellative whenever $S$ is. Assuming that $S$ satisfies (O5), let us verify that so does $S^\alpha$. Let $x',x,y',y,z \in S^\alpha$ satisfy $$x' \ll x, \quad y' \ll y, \,\,\,\text{ and }\,\,\, x+y \leq z.$$ Choose $y'' \in S^\alpha$ satisfying $y' \ll y'' \ll y$. Applying (O5) in $S$, we obtain $c \in S$ such that $$x' + c \leq z \leq x + c, \,\,\,\text{ and }\,\,\, y'' \ll c.$$ We claim that $\Phi(c)$ has the desired properties. Indeed, for each $g \in G$, we have $$z = \alpha_g(z) \leq \alpha_g(x+c) = x + \alpha_g(c).$$ Using that $S$ is inf-semilattice ordered, we get $$z \leq \bigwedge_{g \in G} \big( x + \alpha_g(c) \big) = x + \bigwedge_{g \in G} \alpha_g(c) = x + \Phi(c).$$ We also have $$x'+\Phi(c) \leq x'+c \leq z, \,\,\,\text{ and }\,\,\, y' \ll y'' = \Phi(y'') \leq \Phi(c).$$ Assuming that $S$ satisfies (O6), let us verify that so does $S^\alpha$. Let $x',x,y,z \in S^\alpha$ satisfy $$x' \ll x \leq y + z.$$ It suffices to find $\tilde{e} \in S^\alpha$ such that $$x' \leq \tilde{e} + z, \,\,\,\text{ and }\,\,\, \tilde{e} \leq x,y.$$ (One can then apply this argument with the roles of $y$ and $z$ reversed to verify (O6).) Applying (O6) in $S$, we obtain $e \in S$ such that $$x' \leq e + z, \,\,\,\text{ and }\,\,\, e \leq x,y.$$ For each $g \in G$, we have $$x' = \alpha_g(x') \leq \alpha_g(e+z) = \alpha_g(e)+z.$$ Using that $S$ is inf-semilattice ordered, we get $$x' \leq \bigwedge_{g \in G} \big( \alpha_g(e) + z \big) = \left(\bigwedge_{g \in G} \alpha_g(e) \right) + z = \Phi(e) + z.$$ Further, we have $$\Phi(e) \leq e \leq x,y,$$ which shows that $\tilde{e} := \Phi(e) \in S^\alpha$ has the desired properties. Similarly, one shows that (O7) passes from $S$ to $S^\alpha$. ◻ We refer to [@GarHirSan21RokDim Definition 2.2] for the definition of the weak tracial Rokhlin property. The first isomorphism in the statement below is well known, but we add it here for the convenience of the reader. **Proposition 1**. *Let $A$ be a non-elementary, stably finite, simple, unital $\mathrm{C}^*$-algebra, and let $\alpha$ be a finite group action on $A$ that has the weak tracial Rokhlin property. Then we have $$\mathop{\mathrm{Cu}}(C^*(G, A, \alpha)) \cong \mathop{\mathrm{Cu}}(A^\alpha), \,\,\,\text{ and }\,\,\, \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} = \mathop{\mathrm{Cu}}(A)^\alpha.$$ Restricting to the soft parts, we obtain: $$\mathop{\mathrm{Cu}}(C^*(G, A, \alpha))_{\rm{soft}} \cong \mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}} \cong \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}} = \mathop{\mathrm{Cu}}(A)^\alpha \cap \mathop{\mathrm{Cu}}(A)_{\rm{soft}}.$$* *If, moreover, $A$ is separable and has stable rank one, then $\mathop{\mathrm{Cu}}(A)^\alpha$ is a simple, countably based, weakly cancellative, $(2,\omega )$-divisible sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$ satisfying (O5)-(O7).* *Proof.* For any action of a finite group on a unital $\mathrm{C}^*$-algebra, the fixed-point algebra is $\ast$-isomorphic to a corner of the crossed product; see [@AsaGolPhi21RadCompCrProd Lemma 4.3(4)].
By [@HirOro13TraciallyZstable Corollary 5.4], $C^*(G, A, \alpha)$ is simple, which implies that $C^*(G, A, \alpha)$ and $A^\alpha$ are Morita equivalent and therefore have isomorphic Cuntz semigroups. As noted in , $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$ is contained in $\mathop{\mathrm{Cu}}(A)^\alpha$ in general, and $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$ is always a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$. Let $\iota \colon A ^\alpha \to A$ denote the inclusion map, and note that $\mathop{\mathrm{Cu}}(\iota)$ takes image in $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$. To show that $\mathop{\mathrm{Cu}}(A)^\alpha$ is contained in $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha )}$, let $x \in \mathop{\mathrm{Cu}}(A)^\alpha$. If $x$ is compact in $\mathop{\mathrm{Cu}}(A)$, then we can use the constant path $x_t = x$ to see that $x \in \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$. On the other hand, if $x$ is soft, then we can apply [@AsaGolPhi21RadCompCrProd Lemma 5.4] to obtain $y \in \mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}}$ such that $x = \mathop{\mathrm{Cu}}(\iota)(y)$. Since $\mathop{\mathrm{Cu}}(\iota)$ takes image in $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$, we have $x \in \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$. Since $A$ is simple and stably finite, every Cuntz class is either compact or soft, and we have $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} = \mathop{\mathrm{Cu}}(A)^\alpha$. We have shown $$\mathop{\mathrm{Cu}}(C^*(G, A, \alpha)) \cong \mathop{\mathrm{Cu}}(A^\alpha), \,\,\,\text{ and }\,\,\, \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} = \mathop{\mathrm{Cu}}(A)^\alpha.$$ We know from [@AsaGolPhi21RadCompCrProd Theorem 5.5] that $\mathop{\mathrm{Cu}}(\iota)$ induces an order-isomorphism between the soft part of $\mathop{\mathrm{Cu}}(A^\alpha)$ and $\mathop{\mathrm{Cu}}(A)^\alpha \cap \mathop{\mathrm{Cu}}(A)_{\rm{soft}}$, the $\alpha$-invariant elements in $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}$. It is easy to see that $\mathop{\mathrm{Cu}}(\iota)$ maps $\mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}}$ into $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}}$, and that $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}}$ is contained in $\mathop{\mathrm{Cu}}(A)^\alpha \cap \mathop{\mathrm{Cu}}(A)_{\rm{soft}}$. Together, we get $$\mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}} \xrightarrow[\mathop{\mathrm{Cu}}(\iota)]{\cong} \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}} = \mathop{\mathrm{Cu}}(A)^\alpha \cap \mathop{\mathrm{Cu}}(A)_{\rm{soft}}.$$ Since $A^\alpha$ is a simple, nonelementary $\mathrm{C}^*$-algebra, $\mathop{\mathrm{Cu}}(A^\alpha)$ is a simple, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). It follows from that $\mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}}$ is a $\ensuremath{\mathrm{Cu}}$-semigroup that also satisfies (O5)-(O7). Finally, assume that $A$ is also separable and has stable rank one. Then $\mathop{\mathrm{Cu}}(A)$ is a $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). Further, $\mathop{\mathrm{Cu}}(A)$ is weakly cancellative and inf-semilattice ordered by [@RorWin10ZRevisited Theorem 4.3] and [@AntPerRobThi22CuntzSR1 Theorem 3.8]. Hence, $\mathop{\mathrm{Cu}}(A)^\alpha$ satisfies (O5)-(O7) by . We have seen that $\mathop{\mathrm{Cu}}(A)^\alpha$ is a sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$.
Thus, since $\mathop{\mathrm{Cu}}(A)$ is simple and weakly cancellative, so is $\mathop{\mathrm{Cu}}(A)^\alpha$. To verify $(2,\omega)$-divisibility, let $x \in \mathop{\mathrm{Cu}}(A)^\alpha$. Since $A$ is simple and non-elementary, we know from that $\mathop{\mathrm{Cu}}(A)$ is $(2,\omega)$-divisible. Hence, there exists $y \in \mathop{\mathrm{Cu}}(A)$ such that $2y\leq x\leq\infty y$. Using [@AsaGolPhi21RadCompCrProd Lemma 5.2], we find a nonzero element $z \in \mathop{\mathrm{Cu}}(A)^\alpha$ satisfying $z\leq y$. Then $2z \leq x \leq \infty z$, a priori in $\mathop{\mathrm{Cu}}(A)$, but then also in $\mathop{\mathrm{Cu}}(A)^\alpha$ since the inclusion $\mathop{\mathrm{Cu}}(A)^\alpha \to \mathop{\mathrm{Cu}}(A)$ is an order-embedding. ◻ **Theorem 1**. *Let $A$ be a non-elementary, separable, simple, unital $\mathrm{C}^*$-algebra of stable rank one, and let $\alpha$ be a finite group action on $A$ that has the weak tracial Rokhlin property. Then $$\begin{aligned} \label{prp:DimWTRP:Eq1} \dim \big( \mathop{\mathrm{Cu}}(C^*(G, A, \alpha) ) \big) = \dim \big( \mathop{\mathrm{Cu}}(A^\alpha ) \big),\end{aligned}$$ and $$\begin{aligned} \dim \big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} \big) -1 \leq \dim \big( \mathop{\mathrm{Cu}}(A^\alpha) \big) \leq \dim \big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} \big) +1.\end{aligned}$$* *Proof.* By , we have $$\mathop{\mathrm{Cu}}(C^*(G, A, \alpha)) \cong \mathop{\mathrm{Cu}}(A^\alpha),$$ which immediately proves [\[prp:DimWTRP:Eq1\]](#prp:DimWTRP:Eq1){reference-type="eqref" reference="prp:DimWTRP:Eq1"}. It also follows from that $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$ is a simple, weakly cancellative (hence left-soft separative), $(2,\omega )$-divisible sub-$\ensuremath{\mathrm{Cu}}$-semigroup of $\mathop{\mathrm{Cu}}(A)$ satisfying (O5)-(O7). Since $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$ is simple, $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}\otimes\{0,\infty\}$ is algebraic. (In fact, $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}\otimes\{0,\infty\} \cong \{0,\infty\}$.) Therefore, we can apply  (iii) to obtain $$\begin{aligned} \dim \big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}}\big) \leq \dim \big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} \big) \leq \dim \big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}}\big) + 1.\end{aligned}$$ Further, since $A^{\alpha }$ is simple and stably finite, we know from [@ThiVil22DimCu Remark 3.18] that $$\begin{aligned} \dim \big( \mathop{\mathrm{Cu}}(A^{\alpha})_{\rm{soft}}\big) \leq \dim \big( \mathop{\mathrm{Cu}}(A^{\alpha}) \big) \leq \dim \big( \mathop{\mathrm{Cu}}(A^{\alpha})_{\rm{soft}}\big) + 1.\end{aligned}$$ The result now follows since $\mathop{\mathrm{Cu}}(A^{\alpha})_{\rm{soft}}\cong \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}}$; see . ◻ **Example 1**. Let $n \geq 2$, and let $G$ be $S_n$, the symmetric group on the set $\{1,\ldots,n\}$. Let $A=\mathcal{Z}^{\otimes n}\cong \mathcal{Z}$, and let $\alpha \colon G \to {\mathrm{Aut}}(A)$ be the permutation action given by $$\alpha_{\theta}(a_1 \otimes a_2\otimes \ldots \otimes a_n) = a_{\theta^{-1}(1)} \otimes a_{\theta^{-1}(2)} \otimes \ldots \otimes a_{\theta^{-1}(n)}.$$ It follows from [@HirOro13TraciallyZstable Example 5.10] that $\alpha$ has the weak tracial Rokhlin property.
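The inverse in the indices is exactly what makes $\theta \mapsto \alpha_\theta$ compatible with composition. As a purely illustrative aside (and not part of the argument), the following Python sketch models the factors of a simple tensor as the entries of a tuple (a simplifying assumption) and checks the left-action law $\alpha_{\sigma\tau} = \alpha_{\sigma}\circ\alpha_{\tau}$ for all $\sigma,\tau \in S_3$.

```python
from itertools import permutations

n = 3  # check the action law for S_3

def inverse(p):
    # inverse of a permutation p, encoded as a tuple with p[i] = theta(i)
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def act(p, a):
    # slot i of alpha_theta(a_1 ⊗ ... ⊗ a_n) carries a_{theta^{-1}(i)}
    pinv = inverse(p)
    return tuple(a[pinv[i]] for i in range(len(a)))

a = ("a1", "a2", "a3")  # stands in for the simple tensor a_1 ⊗ a_2 ⊗ a_3
for s in permutations(range(n)):
    for t in permutations(range(n)):
        assert act(compose(s, t), a) == act(s, act(t, a))
print("theta -> alpha_theta is a left action on simple tensors")
```

Using $a_{\theta(i)}$ in slot $i$ instead would only give an anti-homomorphism (a right action), which is why the inverse appears in the displayed formula above.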
Thus, using , one has $$\dim \big(\mathop{\mathrm{Cu}}(A^\alpha)\big) =\dim \big(\mathop{\mathrm{Cu}}(C^*(G, A, \alpha))\big).$$ The crossed product $C^*(G, A, \alpha)$ is simple and $\mathcal{Z}$-stable; see Corollaries 5.4 and 5.7 from [@HirOro13TraciallyZstable]. Therefore, it follows from [@ThiVil22DimCu Proposition 3.22] that $$\dim \big(\mathop{\mathrm{Cu}}(A^\alpha)\big) = \dim \big(\mathop{\mathrm{Cu}}(C^*(G, A, \alpha))\big) \leq 1$$ and, moreover, we have $\dim ( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)})\leq 2$ by . # Radius of comparison of a Cuntz semigroup and its soft part {#sec:RC} In this section we show that, under the assumptions of , the radius of comparison of a $\ensuremath{\mathrm{Cu}}$-semigroup is equal to that of its soft part; see . We deduce that the radius of comparison of a $\mathrm{C}^*$-algebra $A$ is equal to that of the soft part of its Cuntz semigroup whenever $A$ is unital and separable, satisfies the Global Glimm Property, and has either stable rank one or strict comparison of positive elements; see . This can be seen as a generalization of [@Phi14arX:LargeSub Theorem 6.14] to the setting of non-simple $\mathrm{C}^*$-algebras; see . We also study in the radius of comparison of certain crossed products. **Proposition 1**. *Let $S$ be a countably based, left-soft separative, $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $x\in S$. Then $\widehat{x}=\widehat{\sigma (x)}$.* *Proof.* By , there exists $w \in S_{\rm{soft}}$ such that $w \leq x$ and $\widehat{x}=\widehat{w}$. Since $\sigma (x)$ is the largest strongly soft element dominated by $x$ (), we get $w \leq \sigma(x)$, and so $$\widehat{x}=\widehat{w}\leq \widehat{\sigma (x)}\leq \widehat{x},$$ as required. ◻ With the homeomorphism from at hand, we can now relate the radius of comparison of $S$ and $S_{\rm{soft}}$. Let us first recall the definition of the radius of comparison of $\ensuremath{\mathrm{Cu}}$-semigroups from Section 3.3 of [@BlaRobTikTomWin12AlgRC]. **Definition 1**. Given a $\ensuremath{\mathrm{Cu}}$-semigroup $S$, a full element $e \in S$ and $r>0$, one says that the pair $(S,e)$ satisfies condition (R1) for $r$ if, for all $x,y \in S$, one has $x \leq y$ whenever $$\lambda (x) + r\lambda (e)\leq \lambda (y)$$ for all $\lambda \in F(S)$. The *radius of comparison* of $(S,e)$, denoted by $\mathop{\mathrm{rc}}(S, e)$, is the infimum of all $r>0$ such that $(S,e)$ satisfies (R1) for $r$. **Remark 1**. In [@BlaRobTikTomWin12AlgRC Definition 3.3.2], for a $\mathrm{C}^*$-algebra $A$ and a full element $a \in (A \otimes \mathcal{K})_+$, the notation $r_{A, a}$ is used for $\mathop{\mathrm{rc}}(\mathop{\mathrm{Cu}}(A),[a])$. Also, it was shown in [@BlaRobTikTomWin12AlgRC Proposition 3.2.3] that for unital $\mathrm{C}^*$-algebras all of whose quotients are stably finite, the radius of comparison $\mathop{\mathrm{rc}}(\mathop{\mathrm{Cu}}(A),[ 1_A])$ coincides with the original notion of radius of comparison $\mathop{\mathrm{rc}}(A)$ as introduced in [@Tom06FlatDimGrowth Definition 6.1]. **Proposition 1**. *Let $\varphi \colon S \to T$ be a generalized $\ensuremath{\mathrm{Cu}}$-morphism between $\ensuremath{\mathrm{Cu}}$-semigroups that is also an order embedding, and let $e \in S$ be a full element such that $\varphi(e)$ is full in $T$. Then, $\mathop{\mathrm{rc}}(S, e) \leq \mathop{\mathrm{rc}}(T, \varphi(e))$.* *Proof.* Take $r>0$.
We show that $(S,e)$ satisfies condition (R1) for $r$ whenever $(T, \varphi(e))$ does, which readily implies the claimed inequality. Thus, assume that $(T, \varphi(e))$ satisfies condition (R1) for $r$. In order to verify that $(S, e)$ satisfies (R1) for $r$ as well, let $x,y \in S$ satisfy $$\lambda (x)+r\lambda (e) \leq \lambda (y)$$ for all $\lambda \in F(S)$. Note that, for every $\rho \in F(T)$, we have that $\rho \circ \varphi \in F(S)$. Thus, we get $$\rho (\varphi(x))+r\rho (\varphi(e)) \leq \rho (\varphi(y))$$ for every $\rho \in F(T)$. It follows from our assumption that $\varphi (x) \leq \varphi (y)$ and, since $\varphi$ is an order-embedding, we deduce that $x \leq y$, as desired. ◻ **Theorem 1**. *Let $S$ be a $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and let $e\in S$ be a full element. Then, there exists $w \in S_{\rm{soft}}$ such that $$\mathop{\mathrm{rc}}(S,e) = \mathop{\mathrm{rc}}(S_{\rm{soft}}, w), \quad w\leq e \leq \infty w, \,\,\,\text{ and }\,\,\, \widehat{e} = \widehat{w}.$$* *If $S$ is also countably based and left-soft separative, we have $$\mathop{\mathrm{rc}}(S,e) = \mathop{\mathrm{rc}}(S_{\rm{soft}}, \sigma (e)).$$* *Proof.* By , we can pick $w \in S_{\rm{soft}}$ such that $$w \leq e \leq \infty w, \,\,\,\text{ and }\,\,\,\widehat{e} = \widehat{w}.$$ Using at the first step that the inclusion map $\iota \colon S_{\rm{soft}}\to S$ is a $\ensuremath{\mathrm{Cu}}$-morphism and an order-embedding and applying , and using at the last step that $\widehat{e}=\widehat{w}$, we get $$\mathop{\mathrm{rc}}(S_{\rm{soft}}, w) \leq \mathop{\mathrm{rc}}(S, \iota (w)) = \mathop{\mathrm{rc}}(S, w) = \mathop{\mathrm{rc}}(S, e).$$ To prove the converse inequality, let $r>0$ and assume that $(S_{\rm{soft}}, w)$ satisfies condition (R1) for $r$. Take $\varepsilon >0$. We will show that $(S,e)$ satisfies (R1) for $r+\varepsilon$. Now let $x,y\in S$ be such that $\lambda (x)+(r+\varepsilon)\lambda (e)\leq \lambda (y)$ for every $\lambda \in F(S)$ or, equivalently, such that $$\widehat{x}+(r+\varepsilon )\widehat{e}\leq \widehat{y}$$ in $\mathop{\mathrm{LAff}}(F(S))$. Applying [@ThiVil23arX:Soft Proposition 7.7], we find $k\in{\mathbb{N}}$ and then $t \in S_{\rm{soft}}$ such that $$kt \leq e \leq \infty t,\,\,\,\text{ and }\,\,\, 1\leq k\varepsilon .$$ Thus, we get $$\widehat{x+t}+r\widehat{e} \leq \widehat{x}+k\varepsilon\widehat{t}+r\widehat{e} \leq \widehat{x}+\varepsilon\widehat{e}+r\widehat{e} = \widehat{x} + (\varepsilon +r)\widehat{e} \leq \widehat{y}.$$ Note that, since $e$ is full in $S$, so is $t$. By [@ThiVil23arX:Soft Theorem 4.14(2)], this implies that $x+t$ is strongly soft. By , there exists $v\in S_{\rm{soft}}$ such that $v\leq y$ and $\widehat{v}=\widehat{y}$. One gets $$\widehat{x+t}+r\widehat{w} = \widehat{x+t}+r\widehat{e} \leq \widehat{y} = \widehat{v}$$ or, equivalently, that $$\lambda(x+t) + r\lambda(w) \leq \lambda(v)$$ for every $\lambda \in F(S)$. Using that $F(S)\cong F(S_{\rm{soft}})$ () and that $(S_{\rm{soft}}, w)$ satisfies condition (R1) for $r$, it follows that $$x \leq x+t \leq v \leq y.$$ This shows that, given any $\varepsilon >0$, $(S,e)$ satisfies condition (R1) for $r+\varepsilon$ whenever $(S_{\rm{soft}},w)$ satisfies (R1) for $r$. Consequently, we have $\mathop{\mathrm{rc}}(S,e) \leq \mathop{\mathrm{rc}}(S_{\rm{soft}}, w)$, as required. Finally, if $S$ is also countably based and left-soft separative, then we can use $w:=\sigma(e)$ by . ◻ **Theorem 1**.
*Let $A$ be a unital, separable $\mathrm{C}^*$-algebra with the Global Glimm Property. Assume that $A$ has stable rank one. Then $$\mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A),[1] \big) = \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A)_{\rm{soft}}, \sigma ([1]) \big).$$* *Proof.* Proceeding as in the proof of , we see that the assumptions on $A$ imply that $\mathop{\mathrm{Cu}}(A)$ is a countably based, left-soft separative, $(2,\omega)$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7), and that $[1]$ is full. Hence, the result follows from . ◻ **Corollary 1**. *Let $A$ be a unital, separable, nowhere scattered $\mathrm{C}^*$-algebra of stable rank one. Then $$\mathop{\mathrm{rc}}(A) = \mathop{\mathrm{rc}}\big(\mathop{\mathrm{Cu}}(A)_{\rm{soft}},\sigma ([ 1 ] ) \big).$$* *Proof.* By [@ThiVil23Glimm Proposition 7.3], $A$ has the Global Glimm Property; see also [@AntPerRobThi22CuntzSR1 Section 5]. Further, by [@BlaRobTikTomWin12AlgRC Proposition 3.2.3], we have $\mathop{\mathrm{rc}}(A) = \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A),[1] \big)$, and so the result follows from . ◻ **Remark 1**. For a large subalgebra $B$ of a simple, unital, stably finite, non-elementary $\mathrm{C}^*$-algebra $A$, it is shown in [@Phi14arX:LargeSub Theorem 6.8] that $\mathop{\mathrm{Cu}}(A)_{\rm{soft}}\cong \mathop{\mathrm{Cu}}(B)_{\rm{soft}}$; see also . Thus, using at the first and last steps, one gets $$\mathop{\mathrm{rc}}(A)=\mathop{\mathrm{rc}}(\mathop{\mathrm{Cu}}(A)_{\rm{soft}},\sigma_A([1]))=\mathop{\mathrm{rc}}(\mathop{\mathrm{Cu}}(B)_{\rm{soft}},\sigma_B([1]))=\mathop{\mathrm{rc}}(B),$$ which recovers [@Phi14arX:LargeSub Theorem 6.14]. Note that in this case the existence of $\sigma$ is provided by [@Eng14PhD]. **Example 1**. Let $A$ be a non-elementary, separable, simple, unital $\mathrm{C}^*$-algebra of stable rank one, real rank zero, and such that the order of projections over $A$ is determined by traces, and let $\alpha$ be an action of a finite group $G$ on $A$ that has the tracial Rokhlin property. Then $$\mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A^\alpha) ,[1] \big) = \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} ,[1] \big).$$ Indeed, by [@Arc11CrProdTrRP], the crossed product $C^*(G, A, \alpha)$ has stable rank one, and then so does the fixed point algebra $A^\alpha$ by [@AsaGolPhi21RadCompCrProd Lemma 4.3]. The question of when stable rank one passes to crossed products by a finite group action with the (weak) tracial Rokhlin property is discussed after Corollary 5.6 in [@AsaGolPhi21RadCompCrProd]. One can also see that $A^\alpha$ is non-elementary, separable, simple and unital. Therefore, $\mathop{\mathrm{Cu}}(A^\alpha)$ is a countably based, weakly cancellative (hence, left-soft separative), $(2,\omega )$-divisible $\ensuremath{\mathrm{Cu}}$-semigroup satisfying (O5)-(O7). By , the $\ensuremath{\mathrm{Cu}}$-semigroup $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$ has the same properties. Further, the soft parts of $\mathop{\mathrm{Cu}}(A^\alpha)$ and $\mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}$ are isomorphic by .
This allows us to apply at the first and last steps, and we get $$\begin{aligned} \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A^\alpha) ,[1] \big) &= \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A^\alpha)_{\rm{soft}},\sigma([1]) \big) \\ &= \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)}_{\rm{soft}},\sigma([1]) \big) = \mathop{\mathrm{rc}}\big( \mathop{\mathrm{Cu}}(A)^{\mathop{\mathrm{Cu}}(\alpha)} ,[1] \big).\end{aligned}$$ Other examples where our results might be applicable are those obtained in [@Asa23RadCompCrossedPro]. CETW22 [R. Antoine, [F. Perera, [L. Robert, and [H. Thiel, Edwards' condition for quasitraces on $\mathrm{C}^*$-algebras, *Proc. Roy. Soc. Edinburgh Sect. A* **151** (2021), 525--547.]{.smallcaps}]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, [L. Robert, and [H. Thiel, $\mathrm{C}^*$-algebras of stable rank one and their Cuntz semigroups, *Duke Math. J.* **171** (2022), 33--99.]{.smallcaps}]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, and [L. Santiago, Pullbacks, $C(X)$-algebras, and their Cuntz semigroup, *J. Funct. Anal.* **260** (2011), 2844--2880.]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, and [H. Thiel, Tensor products and regularity properties of Cuntz semigroups, *Mem. Amer. Math. Soc.* **251** (2018), viii+191.]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, and [H. Thiel, Abstract bivariant Cuntz semigroups, *Int. Math. Res. Not. IMRN* (2020), 5342--5386.]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, and [H. Thiel, Abstract bivariant Cuntz semigroups II, *Forum Math.* **32** (2020), 45--62.]{.smallcaps}]{.smallcaps}]{.smallcaps} [R. Antoine, [F. Perera, and [H. Thiel, Cuntz semigroups of ultraproduct $\mathrm{C}^*$-algebras, *J. Lond. Math. Soc. (2)* **102** (2020), 994--1029.]{.smallcaps}]{.smallcaps}]{.smallcaps} [P. Ara, [F. Perera, and [A. S. Toms, $K$-theory for operator algebras. Classification of $\mathrm{C}^*$-algebras, in *Aspects of operator algebras and applications*, *Contemp. Math.* **534**, Amer. Math. Soc., Providence, RI, 2011, pp. 1--71.]{.smallcaps}]{.smallcaps}]{.smallcaps} [D. Archey, Crossed product $\mathrm{C}^*$-algebras by finite group actions with the tracial Rokhlin property, *Rocky Mountain J. Math.* **41** (2011), 1755--1768.]{.smallcaps} [M. A. Asadi-Vasfi, The radius of comparison of the crossed product by a weakly tracially strictly approximately inner action, *Studia Math.* **271** (2023), 241--285.]{.smallcaps} [M. A. Asadi-Vasfi, [N. Golestani, and [N. C. Phillips, The Cuntz semigroup and the radius of comparison of the crossed product by a finite group, *Ergodic Theory Dynam. Systems* **41** (2021), 3541--3592.]{.smallcaps}]{.smallcaps}]{.smallcaps} [B. Blackadar, *Operator algebras*, *Encyclopaedia of Mathematical Sciences* **122**, Springer-Verlag, Berlin, 2006, Theory of $\mathrm{C}^*$-algebras and von Neumann algebras, Operator Algebras and Non-commutative Geometry, III.]{.smallcaps} [B. Blackadar, [L. Robert, [A. P. Tikuisis, [A. S. Toms, and [W. Winter, An algebraic approach to the radius of comparison, *Trans. Amer. Math. Soc.* **364** (2012), 3657--3674.]{.smallcaps}]{.smallcaps}]{.smallcaps}]{.smallcaps}]{.smallcaps} [L. Cantier and [E. Vilalta, Fraïssé theory for Cuntz semigroups, preprint (arXiv:2308.15792 \[math.OA\]), 2023.]{.smallcaps}]{.smallcaps} [J. Castillejos, [S. Evington, [A. Tikuisis, and [S. White, Uniform property $\Gamma$, *Int. Math. Res. Not. 
IMRN* (2022), 9864--9908.]{.smallcaps}]{.smallcaps}]{.smallcaps}]{.smallcaps} [A. Ciuperca, [L. Robert, and [L. Santiago, The Cuntz semigroup of ideals and quotients and a generalized Kasparov stabilization theorem, *J. Operator Theory* **64** (2010), 155--169.]{.smallcaps}]{.smallcaps}]{.smallcaps} [K. T. Coward, [G. A. Elliott, and [C. Ivanescu, The Cuntz semigroup as an invariant for $\mathrm{C}^*$-algebras, *J. Reine Angew. Math.* **623** (2008), 161--193.]{.smallcaps}]{.smallcaps}]{.smallcaps} [J. Cuntz, Dimension functions on simple $\mathrm{C}^*$-algebras, *Math. Ann.* **233** (1978), 145--153.]{.smallcaps} [M. Dadarlat and [A. S. Toms, Ranks of operators in simple $\mathrm{C}^*$-algebras, *J. Funct. Anal.* **259** (2010), 1209--1229.]{.smallcaps}]{.smallcaps} [G. A. Elliott, [L. Robert, and [L. Santiago, The cone of lower semicontinuous traces on a $\mathrm{C}^*$-algebra, *Amer. J. Math.* **133** (2011), 969--1005.]{.smallcaps}]{.smallcaps}]{.smallcaps} [G. A. Elliott and [M. Rørdam, Perturbation of Hausdorff moment sequences, and an application to the theory of $\mathrm{C}^*$-algebras of real rank zero, in *Operator Algebras: The Abel Symposium 2004*, *Abel Symp.* **1**, Springer, Berlin, 2006, pp. 97--115.]{.smallcaps}]{.smallcaps} [M. Engbers, Decomposition of simple Cuntz semigroups, Ph.D Thesis, WWU Münster, 2014.]{.smallcaps} [E. Gardella, [I. Hirshberg, and [L. Santiago, Rokhlin dimension: duality, tracial properties, and crossed products, *Ergodic Theory Dynam. Systems* **41** (2021), 408--460.]{.smallcaps}]{.smallcaps}]{.smallcaps} [E. Gardella and [F. Perera, The modern theory of Cuntz semigroups of $\mathrm{C}^*$-algebras, preprint (arXiv:2212.02290 \[math.OA\]), 2023.]{.smallcaps}]{.smallcaps} [E. Gardella and [L. Santiago, Equivariant $*$-homomorphisms, Rokhlin constraints and equivariant UHF-absorption, *J. Funct. Anal.* **270** (2016), 2543--2590.]{.smallcaps}]{.smallcaps} [K. R. Goodearl, *Partially ordered abelian groups with interpolation*, *Mathematical Surveys and Monographs* **20**, American Mathematical Society, Providence, RI, 1986.]{.smallcaps} [I. Hirshberg and [J. Orovitz, Tracially $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras, *J. Funct. Anal.* **265** (2013), 765--785.]{.smallcaps}]{.smallcaps} [K. Keimel, The Cuntz semigroup and domain theory, *Soft Comput.* **21** (2017), 2485--2502.]{.smallcaps} [E. Kirchberg and [M. Rørdam, Infinite non-simple $\mathrm{C}^*$-algebras: absorbing the Cuntz algebra $\mathcal{O}_\infty$, *Adv. Math.* **167** (2002), 195--264.]{.smallcaps}]{.smallcaps} [F. Perera and [A. S. Toms, Recasting the Elliott conjecture, *Math. Ann.* **338** (2007), 669--702.]{.smallcaps}]{.smallcaps} [N. C. Phillips, Large subalgebras, preprint (arXiv:1408.5546 \[math.OA\]), 2014.]{.smallcaps} [L. Robert, The cone of functionals on the Cuntz semigroup, *Math. Scand.* **113** (2013), 161--186.]{.smallcaps} [L. Robert and [M. Rørdam, Divisibility properties for $\mathrm{C}^*$-algebras, *Proc. Lond. Math. Soc. (3)* **106** (2013), 1330--1370.]{.smallcaps}]{.smallcaps} [M. Rørdam, The stable and the real rank of $\mathcal{Z}$-absorbing $\mathrm{C}^*$-algebras, *Internat. J. Math.* **15** (2004), 1065--1084.]{.smallcaps} [M. Rørdam and [W. Winter, The Jiang-Su algebra revisited, *J. Reine Angew. Math.* **642** (2010), 129--155.]{.smallcaps}]{.smallcaps} [H. Thiel, The Cuntz semigroup, lecture notes available at hannesthiel.org/publications, 2017.]{.smallcaps} [H. 
Thiel, Ranks of operators in simple $\mathrm{C}^*$-algebras with stable rank one, *Comm. Math. Phys.* **377** (2020), 37--76.]{.smallcaps} [H. Thiel and [E. Vilalta, Covering dimension of Cuntz semigroups II, *Internat. J. Math.* **32** (2021), 27 p., Paper No. 2150100.]{.smallcaps}]{.smallcaps} [H. Thiel and [E. Vilalta, Nowhere scattered $\mathrm{C}^*$-algebras, J. Noncommut. Geom. (to appear), preprint (arXiv:2112.09877 \[math.OA\]), 2021.]{.smallcaps}]{.smallcaps} [H. Thiel and [E. Vilalta, Zero-dimensional Cuntz semigroups, in preparation, 2021.]{.smallcaps}]{.smallcaps} [H. Thiel and [E. Vilalta, Covering dimension of Cuntz semigroups, *Adv. Math.* **394** (2022), 44 p., Article No. 108016.]{.smallcaps}]{.smallcaps} [H. Thiel and [E. Vilalta, Soft $\mathrm{C}^*$-algebras, preprint (arXiv:2304.11644 \[math.OA\]), 2022.]{.smallcaps}]{.smallcaps} [H. Thiel and [E. Vilalta, The Global Glimm Property, *Trans. Amer. Math. Soc.* **376** (2023), 4713--4744.]{.smallcaps}]{.smallcaps} [A. S. Toms, Flat dimension growth for $\mathrm{C}^*$-algebras, *J. Funct. Anal.* **238** (2006), 678--708.]{.smallcaps} [A. S. Toms, On the classification problem for nuclear $\mathrm{C}^*$-algebras, *Ann. of Math. (2)* **167** (2008), 1029--1044.]{.smallcaps} [E. Vilalta, The Cuntz semigroup of unital commutative AI-algebras, Canad. J. Math. (to appear), DOI: 10.4153/S0008414X22000542, preprint (arXiv:2104.08165 \[math.OA\]), 2021.]{.smallcaps} [E. Vilalta, A local characterization for the Cuntz semigroup of AI-algebras, *J. Math. Anal. Appl.* **506** (2022), Paper No. 125612, 47.]{.smallcaps} [E. Vilalta, Nowhere scattered multiplier algebras, preprint (arXiv:2304.14007 \[math.OA\]), 2023.]{.smallcaps} [W. Winter, Nuclear dimension and $\mathcal{Z}$-stability of pure $\mathrm{C}^*$-algebras, *Invent. Math.* **187** (2012), 259--342.]{.smallcaps} [W. Winter, Structure of nuclear $\mathrm{C}^*$-algebras: from quasidiagonality to classification and back again, in *Proceedings of the International Congress of Mathematicians---Rio de Janeiro 2018. Vol. III. Invited lectures*, World Sci. Publ., Hackensack, NJ, 2018, pp. 1801--1823.]{.smallcaps}
arxiv_math
{ "id": "2310.00663", "title": "Ranks of soft operators in nowhere scattered C*-algebras", "authors": "M. Ali Asadi-Vasfi, Hannes Thiel, Eduard Vilalta", "categories": "math.OA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The linear transport equation allows to advect level-set functions to represent moving sharp interfaces in multiphase flows as zero level-sets. A recent development in computational fluid dynamics is to modify the linear transport equation by introducing a nonlinear term to preserve certain geometrical features of the level-set function, where the zero level-set must stay invariant under the modification. The present work establishes mathematical justification for a specific class of modified level-set equations on a bounded domain, generated by a given smooth velocity field in the framework of the initial/boundary value problem of Hamilton-Jacobi equations. The first main result is the existence of smooth solutions defined in a time-global tubular neighborhood of the zero level-set, where an infinite iteration of the method of characteristics within a fixed small time interval is demonstrated; the smooth solution is shown to possess the desired geometrical feature. The second main result is the existence of time-global viscosity solutions defined in the whole domain, where standard Perron's method and the comparison principle are exploited. In the first and second main results, the zero level-set is shown to be identical with the original one. The third main result is that the viscosity solution coincides with the local-in-space smooth solution in a time-global tubular neighborhood of the zero level-set, where a new aspect of localized doubling the number of variables is utilized. linear transport equation; level-set equation; Hamilton-Jacobi equation; method of characteristics; viscosity solution; partial regularity of viscosity solution 35Q49; 35F21; 35A24; 35D40; 35R37 author: - "Dieter Bothe[^1],   Mathis Fricke[^2]   and    Kohei Soga[^3]" title: | **Mathematical analysis of\ modified level-set equations** --- # Introduction The linear transport equation $\partial_t f+v\cdot\nabla f=0$ describes the passive advection of a scalar quantity $f$ by a velocity field $v$. We start with a brief overview of the fundamental role of the linear transport equation in fluid dynamics. Suppose that a domain $\Omega\subset{\mathbb R}^3$ is occupied by a fluid. The Lagrangian specification of a fluid flow is to look at the position of each fluid element, i.e., for each time $t\in[0,\infty)$ the position of the fluid element being at $\xi\in\Omega$ at time $\tau\in[0,\infty)$, which is called the "fluid element $(\tau,\xi)$", is denoted by $$X(t,\tau,\xi).$$ Then, the velocity of each fluid element $(\tau,\xi)$ is defined for each $t\ge0$ as $$\frac{\partial}{\partial t}X(t,\tau,\xi).$$ Assuming that $X(t,\tau,\xi)=x$ is equivalent to $\xi=X(\tau,t,x)$ for all $t,\tau\in[0,\infty)$ and $x,\xi\in\Omega$, one obtains the Eulerian specification of the fluid flow, i.e., the velocity field $v$ defined as $$\begin{aligned} v:[0,\infty)\times\Omega\to{\mathbb R}^3,\quad v(t,x):=\frac{\partial}{\partial t}X(t,0,\xi)\Big|_{\xi=X(0,t,x)},\end{aligned}$$ which leads to $$\begin{aligned} \label{1velocity} \frac{\partial}{\partial t}X(t,0,\xi)=v(t,X(t,0,\xi)),\quad \forall\, t\ge0,\,\,\,\forall\,\xi\in\Omega.\end{aligned}$$ Hence, $X$ can be seen as the flow of the kinematic ordinary differential equation (ODE) $$\begin{aligned} \label{1ODE} x'(s)=v(s,x(s)),\quad s\ge 0.\end{aligned}$$ Let $F(t,\xi)$ be a scalar quantity at time $t$ that is associated with the fluid element $(0,\xi)$. 
The Eulerian description $f$ of $F$ is defined as $$f: [0,\infty)\times\Omega\to{\mathbb R},\quad f(t,x):=F(t,\xi)\Big|_{\xi=X(0,t,x)}.$$ Suppose that $F(t,\xi)\equiv\phi^0(\xi)$, i.e., each fluid element $(0,\xi)$ preserves the quantity $F(0,\xi)=\phi^0(\xi)$. Then, noting that $f(t,X(t,0,\xi))\equiv F(t,\xi)$, we have the following identity $$\begin{aligned} \frac{\rm d}{{\rm d} t} f(t,X(t,0,\xi))=0,\quad \forall\, t\ge0,\,\,\,\forall\,\xi\in\Omega.\end{aligned}$$ With [\[1velocity\]](#1velocity){reference-type="eqref" reference="1velocity"}, we find that $f$ satisfies the linear transport equation $$\begin{aligned} \label{transport} \left\{ \begin{array}{lll} &\displaystyle\frac{\partial f}{\partial t}(t,x)+v(t,x)\cdot\nabla f(t,x)=0 \mbox{\quad in $(0,\infty)\times\Omega$},\medskip \\ &\displaystyle f(0,\cdot)=\phi^0\mbox{\quad on $\Omega$}, \end{array} \right.\end{aligned}$$ where $\nabla=(\partial_{x_1},\partial_{x_2},\partial_{x_3})$. It is intuitively clear (and mathematically true as well) that the solution of [\[transport\]](#transport){reference-type="eqref" reference="transport"} is given as $$\begin{aligned} \label{1sol} f(t,x)=\phi^0(X(0,t,x)).\end{aligned}$$ This observation leads to the method of characteristics for more general first order PDEs. Note that if $v$ and $\phi^0$ are not $C^1$-smooth, the meaning of solution must be generalized. A typical example of $F$ being preserved by each fluid element is the density in an incompressible fluid, where the velocity $v$ comes from the incompressible Navier-Stokes equations. We refer to [@Lions] and [@DM] for recent development of mathematical analysis for the system of the linear transport equation and the incompressible Navier-Stokes equations. We refer also to [@DiPerna-Lions] and [@AC] for generalization of ODE-based classical theory of the linear transport equations to the case with velocity fields being less regular. We now discuss the transport equation in the context of the level-set method in two-phase flow problems. Suppose that $\Omega$ is occupied by two immiscible fluids (distinguished by the superscript $\pm$) in such a way that at $t=0$ the domain $\Omega$ is divided into two disjoint connected open sets and their interface: $\Omega^+(0)\subsetneq\Omega$ with $\partial\Omega^+(0)\cap\partial\Omega=\emptyset$ is a connected open set filled by fluid${}^+$, $\Omega^-(0):=\Omega\setminus \overline{\Omega^+(0)}$ is filled by fluid${}^-$ and $\Sigma(0):=\partial\Omega^+(0)\cap\partial\Omega^-(0)=\partial\Omega^+(0)$ is the interface. Note that, for now, we discuss the case where $\Sigma(0)$ does not touch $\partial\Omega$, while in Subsection 2.2 we will consider the other case. We suppose that for each $t>0$ the open set $$\begin{aligned} \label{1interior} \Omega^+(t):=X(t,0,\Omega^+(0))\qquad(\mbox{resp. $\Omega^-(t):=X(t,0,\Omega^-(0))$})\end{aligned}$$ is occupied by fluid${^+}$ (resp. fluid$^{-}$) and the common interface of fluid$^{\pm}$ is given as $$\begin{aligned} \label{1interface} \Sigma(t):=X(t,0, \Sigma(0)),\end{aligned}$$ where the continuity of the flow $X$ implies that [\[1interface\]](#1interface){reference-type="eqref" reference="1interface"} is well-defined even though no unique fluid elements are associated to the points on $\Sigma(0)$. 
In other words, the velocity field $v$ coming from the two-phase Navier-Stokes equations is assumed to be such that [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} generates a proper flow from $\bar{\Omega}$ to itself; see Section 2 below for more details. The interface given as [\[1interface\]](#1interface){reference-type="eqref" reference="1interface"} is called a material interface, as opposed to a non-material interface formed by a two-phase flow with phase change (see [@DB] for investigations of [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} in the case of non-material interfaces). Let $\phi^0:\bar{\Omega}\to{\mathbb R}$ be a smooth function such that $\phi^0>0$ on $\Omega^+(0)$ and $\phi^0<0$ on $\Omega^-(0)$, which implies that $\phi^0=0$ only on $\Sigma(0)$. We assign to each fluid element $(0,\xi)$ the number $\phi^0(\xi)$. Let $F(t,\xi)$ be the label of the fluid element $(0,\xi)$ at time $t$, which must be equal to $\phi^0(\xi)$ for any $t\ge0$. Then, the Eulerian description $f$ of $F$, i.e., $f(t,x):=F(t,\xi)|_{\xi=X(0,t,x)}=\phi^0(X(0,t,x))$, satisfies the transport equation [\[transport\]](#transport){reference-type="eqref" reference="transport"} with the representation [\[1sol\]](#1sol){reference-type="eqref" reference="1sol"}. In particular, we have $$\begin{aligned} &&\Omega^+(t)=\{x\in\Omega\,|\, f(t,x)>0\},\quad \Omega^-(t)=\{x\in\Omega\,|\, f(t,x)<0\},\\ &&\Sigma(t)=\{x\in\Omega\,|\, f(t,x)=0 \},\quad \forall\, t\ge0.\end{aligned}$$ We call $f$ a *level-set function* and the linear transport equation for level-set functions the *level-set equation*. Throughout the paper, the level-set means the zero level of a level-set function. Suppose that $\Sigma(0)$ is equal to the level-set of a $C^2$-function $\phi^0$ such that $\nabla \phi^0\neq0$ on $\Sigma(0)$, where $\Sigma(0)$ is a $C^2$-smooth closed surface (compact manifold without boundary). If $X(t,\tau,\cdot):\Omega^+(\tau)\cup\Sigma(\tau)\cup\Omega^-(\tau)\to\Omega^+(t)\cup\Sigma(t)\cup\Omega^-(t)$ is a $C^2$-diffeomorphism for each $t,\tau\ge0$, we see that $$\begin{aligned} \label{1non} \nabla f(t,x)\neq0 \mbox{\quad on $\Sigma(t)$, $\forall\,t\ge0$},\end{aligned}$$ and $\Sigma(t)$ keeps being a $C^2$-smooth closed surface for all $t>0$. In particular, the unit normal vector $\nu(t,x)$ and the total (twice the mean) curvature $\kappa(t,x)$ of $\Sigma(t)$ at each point $x$ are well-defined and represented as $$\begin{aligned} \nu(t,x)=\frac{\nabla f(t,x)}{|\nabla f(t,x)|},\quad \kappa(t,x)= -\nabla\cdot \nu(t,x),\end{aligned}$$ where $|\cdot|$ denotes the Euclidean norm. In a two-phase flow problem, the Navier-Stokes equations for the velocity field are coupled with the level-set equation on the interface through $\nu$ and $\kappa$. We refer to [@Abels] and [@pruss] for recent developments of mathematical analysis of multiphase flow problems and to [@Giga] for mathematical analysis of level-set methods beyond fluid dynamics. In computational fluid dynamics, the level-set equation is often used to represent a moving interface. In this context, the level-set approach has several advantages, such as a very accurate approximation of the mean curvature and a straightforward handling of topological changes of the interface (e.g., breakup and coalescence of droplets). 
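For instance (this simple special case is only meant to illustrate the formulas above and is not tied to the two-phase configuration), if at some instant $t$ the interface $\Sigma(t)$ is the sphere of radius $r>0$ centered at the origin and $f(t,\cdot)$ coincides near $\Sigma(t)$ with the signed distance $f(t,x)=r-|x|$ (positive in the enclosed region), then $$\nu(t,x)=\frac{\nabla f(t,x)}{|\nabla f(t,x)|}=-\frac{x}{|x|},\qquad \kappa(t,x)=-\nabla\cdot\nu(t,x)=\nabla\cdot\frac{x}{|x|}=\frac{2}{|x|}=\frac{2}{r}\quad\mbox{on $\Sigma(t)$},$$ i.e., $\kappa$ is twice the mean curvature $1/r$ of the sphere, in accordance with the convention above.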
In a numerical simulation, it is common to choose an initial level-set function $\phi^0$ that coincides locally with the signed distance function of a given closed surface $\Sigma(0)$, where $\phi^0$ is characterized by $|\nabla \phi^0| \equiv 1$ in a neighborhood of $\Sigma(0)$. However, it is known that the local signed distance property is not preserved by [\[transport\]](#transport){reference-type="eqref" reference="transport"}, i.e., $f(t,\cdot)$ does not coincide even locally with the signed distance function of $\Sigma(t)$ for $t>0$ in general. In fact, a short calculation [@F] shows that, along each curve $x(\cdot)$ determined by [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} (it is called a *characteristic curve*) such that $x(t)\in\Sigma(t)$, $$\begin{aligned} \frac{\rm d}{{\rm d} t}|\nabla f(t,x(t))| = -|\nabla f|\langle(\nabla v) \nu , \nu \rangle \, (t,x(t))\end{aligned}$$ holds for a classical solution $f$ of the standard level-set equation [\[transport\]](#transport){reference-type="eqref" reference="transport"}. Here, $\langle\cdot , \cdot \rangle$ stands for the inner product of ${\mathbb R}^3$. Unfortunately, problems with the numerical accuracy emerge if $|\nabla f|$ becomes too small or too large, which is the case in general, even though the non-degeneracy condition [\[1non\]](#1non){reference-type="eqref" reference="1non"} is mathematically guaranteed. This is an important point in practice: on the one hand, it must be possible to resolve $|\nabla f|$ by the computational mesh, which implies an upper limit for $|\nabla f|$ related to the mesh size; on the other hand, too small values of $|\nabla f|$ lead to an inaccurate positioning of the interface, the normal field, the mean curvature field, etc. in the numerical algorithm. In order to keep the norm of the gradient approximately constant, so-called "reinitialization" methods [@Sussman1994; @Sethian1996; @Sussman1999] have been developed. Typically, an additional PDE is solved that computes a new function $\tilde{f}$ with the same zero contour but with a predefined gradient norm (e.g., $|\nabla \tilde{f}|=1$ on the level-set). In [@SOG], the authors developed an alternative numerical method to control the size of the gradient based on the level-set equation with a suitable source term that is determined by an extra equation, where the reinitialization procedure was no longer necessary. These methods might be computationally expensive. Moreover, it is known that many reinitialization methods struggle with extra difficulties if the interface touches the domain boundary $\partial\Omega$ [@DellaRocca2014] (i.e., if a so-called "contact line" is formed; see Section [2.2](#section:contact-line-case){reference-type="ref" reference="section:contact-line-case"}). In order to control the norm of the gradient within a single PDE, the following nonlinear modification of the level-set equation has been introduced[^4] in the literature of computational fluid dynamics (see [@F; @H] for details): $$\begin{aligned} \label{m-transport} && \left\{ \begin{array}{lll} &\displaystyle\!\!\!\!\frac{\partial\phi}{\partial t}(t,x)+v(t,x)\cdot\nabla \phi(t,x)=\phi(t,x)R(t,x,\nabla \phi(t,x)) \mbox{\quad in $\Theta\subseteq (0,\infty)\times\Omega$} ,\medskip \\ &\displaystyle\!\!\!\! \phi(0,\cdot)=\phi^0 \mbox{\quad on $\Omega$}. \end{array} \right.\end{aligned}$$ Note that, from here on, we rather use $\phi$ instead of $f$ to stress the fact that we deal with a modified level-set equation. 
Since the source term on the right-hand side is chosen proportional to the level-set function $\phi$, the modification term vanishes on the zero interface and, as seen in Section 2, one can show that the evolution of the zero level-set is unaffected by the modification (in fact, the configuration component of the characteristic ODEs for [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} becomes [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} on the level-set). Moreover, a suitable choice of the nonlinear function $R$ allows to control the evolution of $|\nabla\phi|$ (at least locally at the level-set). A formal calculation [@F] shows that, by choosing $$\begin{aligned} \label{original-source-term} R(t,x,p) := \left\langle\nabla v(t,x) \frac{p}{|p|} , \frac{p}{|p|} \right\rangle,\end{aligned}$$ we indeed obtain $$\begin{aligned} \label{1modify} \frac{\rm d}{{\rm d} t}|\nabla \phi(t,x(t))|\equiv 0, \quad \forall\,t\ge0\end{aligned}$$ along each characteristic curve $x(t)$ of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} such that $x(t)\in\Sigma(t)$ for all $t\ge0$ or, equivalently, $$\begin{aligned} \displaystyle\forall\, t\in[0, \infty),\,\,\,\forall\,x\in \Sigma(t),\,\,\,\exists\, \xi\in \Sigma(0)\mbox{ such that } |\nabla \phi(t,x)|=|\nabla\phi^0(\xi)|.\end{aligned}$$ We will prove this statement rigorously using the method of characteristics (see problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} in Section [2](#section:classical_solution){reference-type="ref" reference="section:classical_solution"}). Notice that, in general, the property [\[1modify\]](#1modify){reference-type="eqref" reference="1modify"} only holds locally at the level-set. The signed distance function of $\Sigma(t)$ itself does not solve [\[transport\]](#transport){reference-type="eqref" reference="transport"} nor [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} in general, but rather a non-local PDE, cf. Lemma 3.1 in [@H]. From the numerical perspective, it is of interest to study a formulation like [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} because the advection of the interface and the preservation of the norm of the gradient are combined into one single PDE, i.e., into a monolithic approach. In addition to the choice [\[original-source-term\]](#original-source-term){reference-type="eqref" reference="original-source-term"}, we will also study a variant in which a cut-off function is applied such that the nonlinear source term is only active in a neighborhood of the level-set (see problem [\[RRRR\]](#RRRR){reference-type="eqref" reference="RRRR"} below) and another simpler modified level-set equation, which only keeps the norm of the gradient within given bounds; see the initial value problem [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"} below. We refer to [@F] for a numerical investigation of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"}. Now, we move to the mathematical analysis of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"}. 
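Before doing so, a one-dimensional caricature may help to illustrate the effect of the choice [\[original-source-term\]](#original-source-term){reference-type="eqref" reference="original-source-term"}; this toy example is only for illustration and ignores the boundary of the domain as well as the hypotheses formulated in Section 2. Take $v(t,x)=\lambda x$ on ${\mathbb R}$ with a constant $\lambda>0$, so that $R(t,x,p)=\lambda$ for every $p\neq0$. The unmodified equation [\[transport\]](#transport){reference-type="eqref" reference="transport"} is then solved by $f(t,x)=\phi^0(e^{-\lambda t}x)$, whose gradient $\partial_x f(t,x)=e^{-\lambda t}(\phi^0)'(e^{-\lambda t}x)$ decays exponentially in time, while the modified equation [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} becomes $\partial_t\phi+\lambda x\,\partial_x\phi=\lambda\phi$ and is solved by $$\phi(t,x)=e^{\lambda t}\phi^0(e^{-\lambda t}x),\qquad \partial_x\phi(t,x)=(\phi^0)'(e^{-\lambda t}x),$$ so that $|\partial_x\phi|$ is merely transported; in particular, if $\phi^0$ is the signed distance function of $\Sigma(0)=\{0\}$, then $|\partial_x\phi|\equiv1$ for all times, and $f$ and $\phi$ share the same zero level-set $\Sigma(t)=\{0\}$.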
It is important to note that, due to the nonlinear source term in [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"}, the ODE [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} is no longer the characteristic ODE of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"}; instead, the system of ODEs [\[2chara\]](#2chara){reference-type="eqref" reference="2chara"}-[\[2chara3\]](#2chara3){reference-type="eqref" reference="2chara3"} defines the characteristic curves of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"}. See Appendix 1 for more details on the method of characteristics as applied to Hamilton-Jacobi equations. Furthermore, since [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} is a first order fully nonlinear PDE, the mathematical analysis of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} is not at all as simple as that of [\[transport\]](#transport){reference-type="eqref" reference="transport"}, even if $v$ and $R$ are smooth enough. Existence of a classical solution on the whole domain within an arbitrary time interval is no longer possible in general, i.e., the notion of viscosity solutions is necessary. Then, it is expected that the following statements hold true for $R$ given by [\[original-source-term\]](#original-source-term){reference-type="eqref" reference="original-source-term"} or its variants: 1. [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} provides a level-set that is identical to the original one provided by [\[transport\]](#transport){reference-type="eqref" reference="transport"} for all $t\ge0$; 2. [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} admits a unique classical solution $\phi$ at least in a $t$-global tubular neighborhood of the level-set (see its definition in Section 2) so that the normal field and mean curvature field are well-defined by $\phi$ and the property [\[1modify\]](#1modify){reference-type="eqref" reference="1modify"} (or, less restrictively, an a priori bound of $|\nabla\phi|$) holds on the level-set for all $t\ge0$; 3. [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} admits a unique global-in-time viscosity solution defined on $[0,\infty)\times\bar{\Omega}$; 4. If initial data is $C^2$-smooth, the viscosity solution $\tilde{\phi}$ coincides with the local-in-space classical solution $\phi$ in a $t$-global tubular neighborhood of the level-set, i.e., partial $C^2$-regularity of $\tilde{\phi}$. The purpose of the current paper is to provide full proofs of (i)-(iv) for the problem [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} with a given smooth velocity field $v$ and the above-mentioned $R$, where mathematical analysis on the system of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} and Navier-Stokes type equations for $v$ is an interesting future work. 
We will exploit the method of characteristics to show (ii) and (i) for the smooth solution; usually, the method of characteristics works only within a short time interval; however, since the nonlinearity of [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} becomes arbitrarily small near the level-set, on which $|\nabla \phi|$ is appropriately controlled as well, we may iterate the method of characteristics countably many times with a shrinking neighborhood of the level-set to construct a time global solution defined in a $t$-global tubular neighborhood of the level-set. To show (iii) and (i) for the viscosity solution, we will apply the standard theory of viscosity solutions to [\[m-transport\]](#m-transport){reference-type="eqref" reference="m-transport"} with a boundary condition arising formally from the classical solutions. To prove (iv), we adapt the idea of localized doubling the numbers of variables for the comparison principle of viscosity solutions within a cone of dependence; the difficulty is that we cannot have a cone of dependence that contains a $t$-global tubular neighborhood of the level-set; we will demonstrate a new version of localized doubling the numbers of variables with an unusual choice of a penalty function in a $t$-global tubular neighborhood of the level-set. We emphasize that the result (iv) is particularly important from application points of view in the sense that, once a continuous viscosity solution is obtained, it provides the level-set, its normal field and mean curvature field with the necessary regularity being guaranteed; numerical construction of a viscosity solution on the whole domain would be easier than that of a local-in-space smooth solution; there is huge literature pioneered by [@Crandall-Lions] on rigorous numerical methods of viscosity solutions. Finally, we compare our results on (i)--(iv) with the work [@H]. In [@H], the author formulated a modification of the initial value problem of a general Hamilton-Jacobi equation with an autonomous Hamiltonian (including the linear transport equation with $v=v(x)$) on the whole space and proved the existence of a unique viscosity solution, where the modification is essentially the same as [\[original-source-term\]](#original-source-term){reference-type="eqref" reference="original-source-term"}; owing to the modification, he showed that the (continuous) viscosity solution of the modified equation stays close to the signed distance function of its own level-set with good upper/lower estimates, from which he obtained differentiability of the viscosity solution on the level-set with the norm of the derivative to be one. Additional regularity of the viscosity solution away from the level-set remained open. Our current paper provides a stronger partial regularity property of viscosity solutions in the same context as [@H]. # $C^2$-solution on tubular neighborhood of level-set {#section:classical_solution} Let $\Omega\subset{\mathbb R}^3$ be a bounded connected open set and $v=v(t,x)$ be a given smooth function defined in $[0,\infty)\times\bar{\Omega}$. Our investigation relies on certain properties of the flow generated by $v$ that satisfies certain conditions; in particular, we require *flow invariance of $\bar{\Omega}$ and $\partial\Omega$, i.e., the solution $x(s)=x(s;s_0,\xi)$ of [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"} with initial condition $x(s_0)=\xi$, $(s_0,\xi)\in [0,\infty)\times\bar{\Omega}$ (resp. 
$(s_0,\xi)\in [0,\infty)\times\partial\Omega$) uniquely exists and stays in $\bar{\Omega}$ (resp. $\partial\Omega$) for all $s\in[0,\infty)$*. Note that since $\bar{\Omega}$ is compact, local-in-time invariance implies global-in-time invariance. Typical examples of $v$ fulfilling our requirements are - $\partial\Omega$ is smooth and $v(t,x)\cdot \nu(x)=0$ for all $x\in \partial\Omega$ and $t\ge 0$, where $\nu$ is the unit normal of $\partial\Omega$ (cf., non-penetration condition in fluid dynamics), - $v(t,x)=0$ for all $x\in \partial\Omega$ and $t\ge 0$ (cf., non-slip condition in fluid dynamics). In the current paper, we consider a more general situation based on the theory of ODEs on closed sets and flow invariance. For this purpose, we introduce the so-called Bouligand contingent cone $T_K(x)$ of an arbitrary closed set $K\subset{\mathbb R}^d$ at $x\in K$ as $$\label{eq:10} T_K(x):=\left\{z \in {\mathbb R}^d\,\Big|\, \liminf\limits_{h \to 0+} \frac{ {\rm dist}\, (x+ hz,K)}{h}=0\right\} \text{ for } x \in K.$$ We say that $y\in{\mathbb R}^d$ is subtangential to $K$ at a point $x\in K$, if $y\in T_K(x)$. Note that $T_K(x)={\mathbb R}^d$ for $x$ in the interior of $K$. We shall employ the following result on flow invariance. **Lemma 1**. *Let $J=(a,b)\subset {\mathbb R}$, $K \subset {\mathbb R}^d$ compact and $g: J \times K \to {\mathbb R}^d$ (jointly) continuous and locally Lipschitz in $x\in K$. Then, the following holds true:* 1. *Suppose that $\pm g$ are subtangential to $K$, i.e., $$\pm g(s,x) \in T_K(x), \quad\forall\, s \in J, \,\,\,\forall\, x \in K.$$ Then, given any $s_0 \in J$ and $x_0 \in K$, the initial value problem $$x'(s)=g(s,x(s)),\quad x(s_0)=x_0$$ has a unique solution defined on $J$ that stays in $K$.* 2. *Suppose that $\pm g$ are subtangential to $\partial K$, i.e., $$\pm g(s,x) \in T_{\partial K}(x), \quad\forall\, s \in J, \,\,\,\forall\, x \in \partial K.$$ Then, the sets $K$, $\partial K$ and $K\setminus \partial K$ are flow invariant.* See Appendix 2 for more on flow invariance and Lemma [Lemma 1](#flow-invariance){reference-type="ref" reference="flow-invariance"}. Now we state the hypothesis on the velocity field $v$: - $v\in C^0([0,\infty)\times\bar{\Omega};{\mathbb R}^3)\cap C^1([0,\infty)\times\Omega;{\mathbb R}^3)$ is locally Lipschitz in $x\in \bar{\Omega}$; $v$ is three times partially differentiable in $x$; all of the partial derivatives of $v$ belong to $C^0([0,\infty)\times \Omega;{\mathbb R}^3)$, - $\pm v(s,x) \in T_{\partial\Omega}(x)$ for all $s \in [0,\infty),\,\, x \in \partial\Omega$, - $\displaystyle |\frac{\partial v_i}{\partial x_j}|$ ($i,j=1,2,3$) are bounded on $[0,\infty)\times\Omega$. We remark that the upcoming nonlinear modification of the linear transport equation requires $C^3$-smoothness of $v$ in $x$ so that its characteristic ODEs are properly defined; due to Lemma [Lemma 1](#flow-invariance){reference-type="ref" reference="flow-invariance"}, $\bar{\Omega}$, $\partial\Omega$ and $\Omega$ are flow invariant with respect to the flow $X$ of [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"}; $X(s,\tau,\cdot)$ is continuous on $\bar{\Omega}$ and $C^3$-smooth in $\Omega$. If $\bar{\Omega}$ is a cube, for instance, (H2) implies: at each vertex, $v$ must be equal to zero, while on each edge, $v$ may take non-zero values parallel to the edge. ## Case 1: problem with level-set being away from $\partial\Omega$ Let $\Sigma(0)\subset \Omega$ be a closed $C^2$-smooth surface. 
Let $\phi^0:\Omega\to{\mathbb R}$ be a $C^2$-smooth function such that $$\mbox{$\{x\in\Omega\,|\,\phi^0(x)=0\}=\Sigma(0)$,\quad $\nabla \phi^0\neq 0$ on $\Sigma(0)$}.$$ Let $f$ be the solution of the original level-set equation [\[transport\]](#transport){reference-type="eqref" reference="transport"}. We keep the notation and configuration in [\[1interior\]](#1interior){reference-type="eqref" reference="1interior"} and [\[1interface\]](#1interface){reference-type="eqref" reference="1interface"}, where we repeat $$\Sigma(t):=\{x\in\Omega\,|\, f(t,x)=0\}=X(t,0,\Sigma(0)).$$ Note that due to (H2) we have $$\Sigma(t)\subset \Omega,\quad \forall\,t\ge0.$$ For each $t\ge0$, let $\Sigma_\varepsilon (t)$ with $\varepsilon >0$ be the $\varepsilon$-neighborhood of $\Sigma(t)$, i.e., $$\Sigma_\varepsilon (t):=\bigcup_{x\in\Sigma(t)}\{y\in{\mathbb R}^3\,|\,|x-y|<\varepsilon \},$$ where we always consider $\varepsilon >0$ such that $\Sigma_\varepsilon (t)\subset \Omega$. We say that *a set $\Theta\subset [0,\infty)\times\Omega$ is a ($t$-global) tubular neighborhood of the level-set $\{\Sigma(t)\}_{t\ge0}$, if $\Theta$ contains $$\bigcup_{t\ge0}\Big(\{t\}\times \Sigma(t)\Big),$$ and there exists a nonincreasing function $\varepsilon :[0,\infty)\to{\mathbb R}_{>0}$ such that $$\{t\}\times\Sigma_{\varepsilon (t)}(t)\subset\Theta,\quad \forall\,t\ge0.$$* The problem under consideration is to find a tubular neighborhood $\Theta$ of $\{\Sigma(t)\}_{t\ge0}$ and a $C^2$-function $\phi$ satisfying $$\begin{aligned} \label{problem1} &&\left\{ \begin{array}{lll} &&\displaystyle\frac{\partial\phi}{\partial t} +v\cdot \nabla \phi=\phi \Big<\big( \nabla v \big) \frac{\nabla \phi}{|\nabla \phi|}, \frac{\nabla \phi}{|\nabla \phi|}\Big>\mbox{\quad in $\Theta$},\medskip\\ &&\displaystyle\phi(0,\cdot)=\phi^0\mbox{\quad on $\Theta|_{t=0}$}. \end{array} \right.\end{aligned}$$ Note that this problem makes sense with $\phi^0$ being defined only in a neighborhood of $\Sigma(0)$, e.g., $\phi^0$ is given as the local signed distance function of $\Sigma(0)$. We state the first main result of this paper. **Theorem 2**. 
*There exists a tubular neighborhood $\Theta$ of the level-set $\{\Sigma(t)\}_{t\ge0}$ for which [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} admits a unique $C^2$-solution $\phi$ satisfying $$\begin{aligned} &&\displaystyle\Sigma^\phi(t):= \{x\in\Omega\,|\, \phi(t,x)=0\}=\Sigma(t),\quad \forall\,t\in[0,\infty),\medskip\\ &&\displaystyle\forall\, t\in[0, \infty),\,\,\,\forall\,x\in \Sigma(t),\,\,\,\exists\, \xi\in \Sigma(0)\mbox{ such that } |\nabla \phi(t,x)|=|\nabla\phi^0(\xi)|.\end{aligned}$$* *Proof.* We treat the PDE in [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} as the Hamilton-Jacobi equation $$\begin{aligned} \label{HJ22} \partial_t\phi+H(t,x,\nabla \phi,\phi)=0,\end{aligned}$$ generated by the Hamiltonian $H: [0,\infty)\times\Omega\times{\mathbb R}^3\times{\mathbb R}\to{\mathbb R}$ defined as $$\begin{aligned} H(t,x,p,\Phi)&:=&v(t,x)\cdot p-\Phi\Big<\nabla v(t,x) \frac{p}{|p|}, \frac{p}{|p|}\Big>\\ &=&v(t,x)\cdot p-\frac{1}{2} \Phi\Big<\Big(\nabla v)+(\nabla v)^{\sf T}\Big) \frac{p}{|p|}, \frac{p}{|p|}\Big>.\end{aligned}$$ Set $$\begin{aligned} D(v):=\frac{\nabla v+(\nabla v)^{\sf T}}{2},\quad \tilde{D}(v,p):=\frac{\partial}{\partial x}\Big<\big( \nabla v(t,x) \big) \frac{p}{|p|}, \frac{p}{|p|}\Big>.\end{aligned}$$ The characteristic ODEs of [\[HJ22\]](#HJ22){reference-type="eqref" reference="HJ22"} are given as (see Appendix 1 for more details) $$\begin{aligned} \label{2chara} x'(s)&=& \frac{\partial H}{\partial p}(s,x(s),p(s),\Phi(s))\\\nonumber &=&v(s,x(s))\\\nonumber && -2\frac{\Phi(s)}{|p(s)|}\Big[ D(v(s,x(s)))\frac{p(s)}{|p(s)|} -\Big<D(v(s,x(s)))\frac{p(s)}{|p(s)|},\frac{p(s)}{|p(s)|}\Big> \frac{p(s)}{|p(s)|} \Big], \\\label{2chara2} p'(s)&=&-\frac{\partial H}{\partial x} (s,x(s),p(s),\Phi(s))-\frac{\partial H}{\partial\Phi}(s,x(s),p(s),\Phi(s))p(s)\\\nonumber &=&-(\nabla v(s,x(s)))^{\sf T}p(s) +\Big<D(v(s,x(s))) \frac{p(s)}{|p(s)|}, \frac{p(s)}{|p(s)|}\Big>p(s)\\\nonumber &&+\Phi(s) \tilde{D}(v(s,x(s)),p(s)),\\\label{2chara3} \ \Phi'(s)&=& \frac{\partial H}{\partial p}(s,x(s),p(s),\Phi(s))\cdot p(s)-H(s,x(s),p(s),\Phi(s)) \\\nonumber &=& \Phi(s)\Big<D( v(s,x(s))) \frac{p(s)}{|p(s)|}, \frac{p(s)}{|p(s)|}\Big> , \\\label{2chara4} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!x(0)=\xi,\quad p(0)=\nabla\phi^0(\xi),\quad \Phi(0)=\phi^0(\xi)\quad \mbox{(except $\xi$ such that $p(0)=0$).}\end{aligned}$$ Note that $\tilde{D}(v(t,x),p)$ is still $C^1$-smooth because of (H1), which is required for the method of characteristics. We sometimes use the notation $x(s;\xi),p(s;\xi),\Phi(s;\xi)$ to specify the initial point. Our proof is based on the investigation of the variational equations of the characteristic ODEs [\[2chara\]](#2chara){reference-type="eqref" reference="2chara"}--[\[2chara3\]](#2chara3){reference-type="eqref" reference="2chara3"} for each $\xi \in\Sigma(0)$ to ensure the invertibility of $x(s;\cdot)$ in a small neighborhood of $\Sigma(s)$ for each $s\ge0$. In a general argument of the method of characteristics, such invertibility is proven only within a small time interval. Below, we will show an iterative scheme to extend the time interval in which the invertibility holds with a shrinking neighborhood of $\Sigma(s)$ as $s$ becomes larger. 
For each $\xi \in\Sigma(0)$, as long as $x(s;\xi),p(s;\xi),\Phi(s;\xi)$ exist, it holds that $$\begin{aligned} \label{1xsxs} \Phi(s;\xi)&\equiv&\Phi(0;\xi)=0,\\\label{2xsxs} x'(s) &=&v(s,x(s)), \quad x(s)\in\Sigma(s),\\\label{3xsxs} p'(s)&=&-(\nabla v(s,x(s)))^{\sf T}p(s) +\Big<D(v(s,x(s))) \frac{p(s)}{|p(s)|}, \frac{p(s)}{|p(s)|}\Big>p(s),\\\label{4xsxs} p'(s)\cdot p(s)&=&\frac{1}{2} \frac{\rm d}{{\rm d} s}|p(s)|^2=0,\quad |p(s)|^2\equiv|p(0)|^2=|\nabla \phi^0(\xi)|^2\neq0,\end{aligned}$$ where we note that due to (H2) applied to [\[2xsxs\]](#2xsxs){reference-type="eqref" reference="2xsxs"}, the above equalities [\[1xsxs\]](#1xsxs){reference-type="eqref" reference="1xsxs"}--[\[4xsxs\]](#4xsxs){reference-type="eqref" reference="4xsxs"} hold for all $s\ge0$ together with $$0<\inf_{\Sigma(0)}|\nabla \phi^0|\le |p(s)|\le \sup_{\Sigma(0)}|\nabla \phi^0|<\infty,\quad\forall\,s\ge0.$$ Hence, for each $\xi \in\Sigma(0)$, there exist $\frac{\partial x}{\partial\xi}(s)=\frac{\partial x}{\partial\xi}(s;\xi)$ and $\frac{\partial\Phi}{\partial\xi}(s)=\frac{\partial\Phi}{\partial\xi}(s;\xi)$ for all $s\ge0$ satisfying $$\begin{aligned} \label{1vvv2} &&\frac{\rm d}{{\rm d} s}\frac{\partial x}{\partial\xi}(s)=\nabla v(s,x(s))\frac{\partial x}{\partial\xi}(s)-\frac{2}{|p(s)|} b(s)\otimes \frac{\partial\Phi}{\partial\xi}(s),\quad \frac{\partial x}{\partial\xi}(0)=id,\\\label{2vvv2} &&\frac{\rm d}{{\rm d} s}\frac{\partial\Phi}{\partial\xi}(s)=\frac{\partial\Phi}{\partial\xi}(s)\Big<D( v(s,x(s))) \frac{p(s)}{|p(s)|}, \frac{p(s)}{|p(s)|}\Big> , \quad \frac{\partial\Phi}{\partial\xi}(0)=\nabla \phi^0(\xi),\end{aligned}$$ where $$b(s):=D(v(s,x(s)))\frac{p(s)}{|p(s)|} -\Big<D(v(s,x(s)))\frac{p(s)}{|p(s)|},\frac{p(s)}{|p(s)|}\Big> \frac{p(s)}{|p(s)|},\quad |p(s)|\equiv|p(0)|.$$ Hereafter, $V_0,V_1,V_2,V_3,V_4$ will denote constants depending only on $v$, $\sup_{\Sigma(0)}|\nabla\phi^0|<\infty$ and $\inf_{\Sigma(0)}|\nabla\phi^0|>0$. 
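For the reader's convenience, we record why the conservation of $|p(s)|$ in [\[4xsxs\]](#4xsxs){reference-type="eqref" reference="4xsxs"} follows from [\[3xsxs\]](#3xsxs){reference-type="eqref" reference="3xsxs"}: since $\big<(\nabla v)^{\sf T}p,p\big>=\big<p,(\nabla v)p\big>=\big<D(v)p,p\big>$, taking the inner product of [\[3xsxs\]](#3xsxs){reference-type="eqref" reference="3xsxs"} with $p(s)$ yields $$p'(s)\cdot p(s)=-\big<D(v(s,x(s)))p(s),p(s)\big>+\Big<D(v(s,x(s)))\frac{p(s)}{|p(s)|},\frac{p(s)}{|p(s)|}\Big>|p(s)|^2=0.$$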
Due to (H3), it holds that for any $\xi\in\Sigma(0)$, $$\begin{aligned} &&|\nabla v(s,x(s)) y|\le V_0|y|,\quad \forall\,y\in{\mathbb R}^3,\\ &&\Big| D(v(s,x(s)))\frac{p(s)}{|p(s)|} \Big|\le V_1,\quad \Big| \Big<D(v(s,x(s)))\frac{p(s)}{|p(s)|},\frac{p(s)}{|p(s)|}\Big> \frac{p(s)}{|p(s)|} \Big|\le V_1,\quad |b(s)|\le V_1.\end{aligned}$$ Observe that for each $\xi \in\Sigma(0)$, we have from [\[2vvv2\]](#2vvv2){reference-type="eqref" reference="2vvv2"}, $$\begin{aligned} &&\frac{\rm d}{{\rm d} s}\frac{\partial\Phi}{\partial\xi}(s)=\frac{\partial\Phi}{\partial\xi}(s)\Big<D( v(s,x(s))) \frac{p(s)}{|p(s)|}, \frac{p(s)}{|p(s)|}\Big>,\\ &&\Big|\frac{\partial\Phi}{\partial\xi}(s)\Big|=|\nabla \phi^0(\xi)|\, e^{\int_0^s \big\langle D(v(\tau,x(\tau))) \frac{p(\tau)}{|p(\tau)|}, \frac{p(\tau)}{|p(\tau)|}\big\rangle\, d\tau }\le V_2e^{V_1s }, \quad \forall\,s\ge0,\\ &&\Big|\frac{\partial\Phi}{\partial\xi_i}(s) b(s)\Big|\le V_1V_2e^{V_1s }\quad (i=1,2,3),\quad \forall\,s\ge0.\end{aligned}$$ For each $i=1,2,3$ and $\xi \in\Sigma(0)$, we have from [\[1vvv2\]](#1vvv2){reference-type="eqref" reference="1vvv2"} $$\begin{aligned} \frac{\rm d}{{\rm d} s}\frac{\partial x}{\partial\xi_i}(s)&=&\nabla v(s,x(s))\frac{\partial x}{\partial\xi_i}(s)-\frac{2}{|p(0)|}\frac{\partial\Phi}{\partial\xi_i}(s) b(s),\quad \frac{\partial x}{\partial\xi_i}(0)=e^i,\\ \frac{\rm d}{{\rm d} s}\frac{\partial x}{\partial\xi_i}(s)\cdot \frac{\partial x}{\partial\xi_i}(s)&=&\nabla v(s,x(s))\frac{\partial x}{\partial\xi_i}(s)\cdot \frac{\partial x}{\partial\xi_i}(s)-\frac{2}{|p(0)|}\frac{\partial\Phi}{\partial\xi_i}(s) b(s)\cdot \frac{\partial x}{\partial\xi_i}(s),\\ \frac{1}{2} \frac{\rm d}{{\rm d} s}\Big|\frac{\partial x}{\partial\xi_i}(s)\Big|^2 &\le& V_1\Big|\frac{\partial x}{\partial\xi_i}(s)\Big|^2+ \frac{2}{\displaystyle\inf_{\Sigma(0)}|\nabla\phi^0|} V_1V_2e^{V_1s}\Big|\frac{\partial x}{\partial\xi_i}(s)\Big|\\ &\le& (V_1+ 1) \Big|\frac{\partial x}{\partial\xi_i}(s)\Big|^2+ \Big(\frac{V_1V_2e^{V_1s}}{\displaystyle\inf_{\Sigma(0)}|\nabla\phi^0|}\Big)^2.\end{aligned}$$ Gronwall's inequality implies $$\begin{aligned} \label{est21} \Big|\frac{\partial x}{\partial\xi_i}(s;\xi)\Big|^2&\le& e^{2(1+V_1)s}\Big\{1+V_1\Big(\frac{V_2}{\displaystyle\inf_{\Sigma(0)}|\nabla\phi^0|}\Big)^2(e^{2V_1s} -1)\Big\}\\\nonumber &\le& V^2_3,\quad \forall\, s\in[0,1],\,\,\,\forall\,\xi\in\Sigma(0).\end{aligned}$$ It follows from [\[est21\]](#est21){reference-type="eqref" reference="est21"} that $$\begin{aligned} \label{est22} &\displaystyle\Big|\frac{\rm d}{{\rm d} s}\frac{\partial x}{\partial\xi_i}(s;\xi)\Big|\le V_0 \Big|\frac{\partial x}{\partial\xi_i}(s;\xi)\Big|+2V_1V_2 e^{V_1s}\le V_4, \quad \forall\, s\in[0,1],\,\,\,\forall\,\xi\in\Sigma(0).\end{aligned}$$ Due to the continuity in [\[2chara\]](#2chara){reference-type="eqref" reference="2chara"}--[\[2chara4\]](#2chara4){reference-type="eqref" reference="2chara4"} and [\[est21\]](#est21){reference-type="eqref" reference="est21"}--[\[est22\]](#est22){reference-type="eqref" reference="est22"}, we find $\varepsilon _1>0$ such that $$\begin{aligned} \nonumber && \Sigma_{\varepsilon _1}(0)\subset \Omega,\quad \mbox{ \eqref{2chara}-\eqref{2chara4} is solvable for all $s\in[0,1]$ and $\xi\in \Sigma_{\varepsilon _1}(0)$},\\\label{est23} && \Big|\frac{\rm d}{{\rm d} s}\frac{\partial x}{\partial\xi_i}(s;\xi)\Big|< 2V_4, \quad \forall\, s\in[0,1],\,\,\,\forall\,\xi\in\Sigma_{\varepsilon _1}(0).\end{aligned}$$
Fix a number $t_\ast\in(0,1]$ such that $$\begin{aligned} \label{t-star} 1- \{3\cdot (2V_4 t_\ast)+6\cdot (2V_4 t_\ast)^2+6\cdot(2V_4 t_\ast)^3\}>0.\end{aligned}$$ We will show that $$\begin{aligned} \label{2contra} &\mbox{\it \quad $\exists\, \tilde{\varepsilon }_1\in(0,\varepsilon _1]$ such that $\Sigma_{\tilde{\varepsilon }_1}(0)\ni\xi\mapsto x(s;\xi)$ is injective for each $0\le s\le t_\ast$}.\end{aligned}$$ First, we give an auxiliary explanation on how to prove [\[2contra\]](#2contra){reference-type="eqref" reference="2contra"}. In order to prove the injectivity of $x(s;\cdot):\Sigma_{\varepsilon }(0)\to x(s;\Sigma_{\varepsilon }(0))$ for each fixed $s\in[0,s_\ast]$ ($\varepsilon >0$ and $s_\ast>0$ are some constants), we would take one of the following strategies: (i) fix $\varepsilon >0$ and take a sufficiently small $s_\ast>0$, or (ii) fix $s_\ast>0$ and take a sufficiently small $\varepsilon >0$; in either case, we note that $\det D_\xi x(s;\xi)\neq0$ everywhere on $\Sigma_{\varepsilon }(0)$ is not enough for the injectivity in general. In our proof of Theorem [Theorem 2](#Thm1){reference-type="ref" reference="Thm1"}, we need to repeat the argument from $[0,s_\ast]$ within $[s_\ast,2s_\ast]$ to demonstrate an infinite iteration with the fixed $s_\ast>0$. Hence, we take the strategy (ii) with $s_\ast=t_\ast$ given in [\[t-star\]](#t-star){reference-type="eqref" reference="t-star"} and sufficiently small $\varepsilon =\tilde{\varepsilon }_1\in (0,\varepsilon _1]$. Then, what we need to show is that $$x(s;\tilde{\zeta})=x(s;\zeta)\mbox{ for $\zeta,\tilde{\zeta}\in \Sigma_{\varepsilon }(0)$}\Rightarrow \tilde{\zeta}=\zeta.$$ If the line segment joining $\zeta$ and $\tilde{\zeta}$ is included in $\Sigma_{\varepsilon }(0)$, we may immediately apply Taylor's approximation to have $$0=x(s;\tilde{\zeta})-x(s;\zeta)=\Lambda (\tilde{\zeta}-\zeta) \quad \mbox{with some $(3\times3)$-matrix $\Lambda$}$$ and get $\tilde{\zeta}=\zeta$ via $\det \Lambda>0$, where $t_\ast$ in [\[t-star\]](#t-star){reference-type="eqref" reference="t-star"} is given so that $\det \Lambda>0$ holds in $\Sigma_{\varepsilon _1}(0)$. However, it is not a priori clear if the line segment joining $\zeta$ and $\tilde{\zeta}$ is included in $\Sigma_{\varepsilon }(0)$ for each $s\in[0,t_\ast]$. This is the major difficulty in proving [\[2contra\]](#2contra){reference-type="eqref" reference="2contra"} directly. Now, we prove [\[2contra\]](#2contra){reference-type="eqref" reference="2contra"} by contradiction. Suppose that [\[2contra\]](#2contra){reference-type="eqref" reference="2contra"} does not hold. Then, we find a sequence $\{\delta_j\}_{j\in{\mathbb N}}\subset(0,\varepsilon _1]$ with $\delta_j\to0$ as $j\to\infty$ and $\{t_j\}_{j\in{\mathbb N}}\subset(0,t_\ast]$ for which $$\begin{aligned} \label{22contra} \exists\,\zeta,\tilde{\zeta}\in \Sigma_{\delta_j}(0)\mbox{ such that }\zeta\neq\tilde{\zeta},\,\,\,x(t_j;\zeta)=x(t_j;\tilde{\zeta})\mbox{ for each $j\in{\mathbb N}$}.\end{aligned}$$ Define $d_j :=\sup\{ |\tilde{\zeta}-\zeta|\,|\, \eqref{22contra}\mbox{ holds} \}$. We prove that $d_j\to0$ as $j\to\infty$. If not, we find $\eta>0$ and a subsequence of $\{d_j\}_{j\in{\mathbb N}}$ (still denoted by the same symbol) such that $d_j\ge \eta$ for all $j$. We further take out subsequences so that $t_j\to s\in[0,t_\ast]$ as $j\to\infty$.
Then, for each $j\in{\mathbb N}$, there exist $\zeta_j$ and $\tilde{\zeta}_j$ such that $$|\zeta_j-\tilde{\zeta}_j|\ge \frac{\eta}{2},\quad {\rm dist}(\zeta_j,\Sigma(0))\le \delta_j,\quad {\rm dist}(\tilde{\zeta}_j,\Sigma(0))\le \delta_j.$$ Taking subsequences if necessary, we see that $\zeta_j$, $\tilde{\zeta}_j$ converge to some $\xi,\tilde{\xi}\in \Sigma(0)$ as $j\to\infty$, respectively, where $|\xi-\tilde{\xi}|\ge \frac{\eta}{2}$. The limit $j\to\infty$ in $x(t_j;\zeta_j)=x(t_j;\tilde{\zeta}_j)$ implies $x(s;\xi)=x(s;\tilde{\xi})$ with $\xi,\tilde{\xi}\in\Sigma(0)$ and $\xi\neq\tilde{\xi}$. This is a contradiction, because $x(s;\cdot)|_{\Sigma(0)}$ is given by the flow of [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"}. Consequently, $d_j\to0$ as $j\to\infty$. Hence, for all $j$ sufficiently large, we find that $\zeta,\tilde{\zeta}$ in [\[22contra\]](#22contra){reference-type="eqref" reference="22contra"}, denoted by $\zeta_j$, $\tilde{\zeta}_j$, are included in the $\varepsilon _1$-neighborhood of some $\xi_j\in\Sigma(0)$; for $i=1,2,3$, Taylor's approximation within the $\varepsilon _1$-neighborhood of $\xi_j$ yields $$\begin{aligned} 0&=&x_i(t_j;\tilde{\zeta}_j)-x_i(t_j,\zeta_j)\\ &=&x_i(0;\tilde{\zeta}_j)-x_i(0;\zeta_j)+(x_i(t_j;\tilde{\zeta}_j)- x_i(t_j,\zeta_j))-(x_i(0;\tilde{\zeta}_j)-x_i(0;\zeta_j))\\ &=&(\tilde{\zeta}_j-\zeta_j)_i+\frac{dx_i}{ds}(\lambda_{ji}t_j;\tilde{\zeta}_j)t_j- \frac{dx_i}{ds}(\lambda_{ji}t_j,\zeta_j))t_j, \quad \lambda_{ji}\in (0,1),\\ &=&(\tilde{\zeta}_j-\zeta_j)_i+\frac{\rm d}{{\rm d} s}\frac{\partial x_i}{\partial\xi}(\lambda_{ji}t_j;\tilde{\zeta}_j +\lambda'_{ji}(\tilde{\zeta}_j -\zeta_j ) )t_j\cdot (\tilde{\zeta}_j-\zeta_j), \quad \lambda'_{ji}\in (0,1).\end{aligned}$$ Therefore, by [\[est23\]](#est23){reference-type="eqref" reference="est23"}, we obtain $$\begin{aligned} &&(I+A)(\tilde{\zeta}_j-\zeta_j)=0,\\ &&\mbox{ $I$ is the ($3\times3$)-identity matrix, $A$ is a ($3\times3$)-matrix with $|A_{kl}|<2V_4 t_j\le 2V_4 t_\ast$},\\ &&\det(I+A)\ge1- \{3\cdot (2V_4 t_\ast)+6\cdot (2V_4 t_\ast)^2+6\cdot(2V_4 t_\ast)^3\}>0,\end{aligned}$$ which leads to $\tilde{\zeta}_j=\zeta_j$. This is a contradiction and we conclude [\[2contra\]](#2contra){reference-type="eqref" reference="2contra"}. Note that we also obtain $\det \frac{\partial x}{\partial\xi}(s;\xi)>0$ for all $s\in[0,t_\ast]$ and $\xi\in\Sigma_{\tilde{\varepsilon }_1}(0)$; this follows from $\frac{\partial x}{\partial\xi}(s;\xi)= \frac{\partial x}{\partial\xi}(0;\xi)+ \frac{\partial x}{\partial\xi}(s;\xi)- \frac{\partial x}{\partial\xi}(0;\xi)=I+[\frac{\rm d}{{\rm d} s} \frac{\partial x_k}{\partial\xi_l}(\lambda_{kl}s;\xi)s]$, $\lambda_{kl}\in(0,1)$. 
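For completeness, let us indicate where the lower bound for $\det(I+A)$ (and hence the choice of $t_\ast$ in [\[t-star\]](#t-star){reference-type="eqref" reference="t-star"}) comes from; this is an elementary estimate, not specific to the present setting. Writing $a:=2V_4 t_\ast$ and expanding the determinant as a sum over permutations, the term of the identity permutation satisfies $(1+A_{11})(1+A_{22})(1+A_{33})\ge 1-(3a+3a^2+a^3)$, each of the three transposition terms is bounded in modulus by $(1+a)a^2=a^2+a^3$, and each of the two cyclic terms is bounded by $a^3$, so that $$\det(I+A)\ge 1- \{3a+6a^2+6a^3\}\quad \mbox{whenever $|A_{kl}|< a$ ($k,l=1,2,3$)},$$ which is exactly the quantity required to be positive in [\[t-star\]](#t-star){reference-type="eqref" reference="t-star"}.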
Define $$\begin{aligned} && O_{\tilde{\varepsilon }_1}(t_\ast) :=\bigcup_{0\le s\le t_\ast}\Big( \{s\}\times \{x(s;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_1}(0)\}\Big),\\ &&\psi_1: [0,t_\ast]\times \Sigma_{\tilde{\varepsilon }_1}(0)\to O_{\tilde{\varepsilon }_1}(t_\ast),\quad \psi_1(s,\xi):=(s, x(s;\xi)),\\ &&\psi_1(t,\cdot): \Sigma_{\tilde{\varepsilon }_1}(0)\to O_{\tilde{\varepsilon }_1}(t_\ast)|_{s=t}=\{t\}\times \{x(t;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_1}(0)\},\quad 0\le t\le t_\ast.\end{aligned}$$ The invertibility of $\xi\mapsto x(s;\xi)$ and $\det \frac{\partial x}{\partial\xi}(s;\xi)>0$ imply that $\psi_1$ and $\psi_1(t,\cdot)$ are $C^1$-diffeomorphic and that there exists $\varepsilon _2>0$ such that $$\begin{aligned} \label{ep2} \{x(t_\ast;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_1}(0)\}\supset \Sigma_{\varepsilon _2}(t_\ast).\end{aligned}$$ Let $\varphi_1: O_{\tilde{\varepsilon }_1}(t_\ast)\to \Sigma_{\tilde{\varepsilon }_1}(0)$ be defined as $$\psi_1(t,\xi)=(t,x)\Leftrightarrow (t,\xi)=\psi^{-1}_1(t,x)=:(t,\varphi_1(t,x)),$$ where we note that $\varphi_1(t,\Sigma(t))=\Sigma(0)$. Then, it follows from the general results of the method of characteristics (see Appendix 1) that the function $$\phi_1: O_{\tilde{\varepsilon }_1}(t_\ast) \to{\mathbb R},\quad \phi_1(t,x):=\Phi(t; \varphi_1(t,x))$$ is $C^2$-smooth and satisfies $$\begin{aligned} &&\frac{\partial\phi_1}{\partial t} +v\cdot \nabla \phi_1=\phi_1 \Big<\big( \nabla v \big)\frac{\nabla \phi_1}{|\nabla \phi_1|}, \frac{\nabla \phi_1}{|\nabla \phi_1|}\Big>,\quad (t,x)\in O_{\tilde{\varepsilon }_1}(t_\ast),\\ &&\phi_1(0,x)=\phi^0(x),\quad x\in \Sigma_{\tilde{\varepsilon }_1}(0),\\ &&\{x\,|\,\phi_1(s,x)=0\}=\Sigma(s),\quad \forall\,s\in[0,t_\ast],\\ && \nabla \phi_1(s,x)=p(s;\varphi_1(s,x)),\quad \forall\,(s,x)\in O_{\tilde{\varepsilon }_1}(t_\ast),\\ &&|\nabla \phi_1(s,x)|=|p(s;\varphi_1(s,x))|=|\nabla\phi^0(\varphi_1(s,x))| ,\quad \forall\, s\in[0,t_\ast],\,\,\, \forall\, x\in \Sigma(s).\end{aligned}$$ The argument up to now has used the $C^2$-smoothness of $\phi^0$ on a neighborhood of $\Sigma(0)$ and the upper/lower bound of $|\nabla \phi^0|$ on $\Sigma(0)$, where we note that $|\nabla \phi_1(t_\ast,\cdot)|$ on $\Sigma(t_\ast)$ has exactly the same upper/lower bound as $|\nabla \phi^0|$ on $\Sigma(0)$. 
Therefore, *we may replace $\phi^0$ with $\phi_1(t_\ast,\cdot)$ to demonstrate the same kind of estimates in terms of the above constants $V_0,V_1,V_2,V_3,V_4$ and $t_\ast$ as well as $\varepsilon _2 >0$ appropriately chosen in [\[ep2\]](#ep2){reference-type="eqref" reference="ep2"} for the characteristic ODEs [\[2chara\]](#2chara){reference-type="eqref" reference="2chara"}-[\[2chara3\]](#2chara3){reference-type="eqref" reference="2chara3"} for $s\in [t_\ast,2t_\ast]$ with the initial condition $$x(t_\ast)=\xi,\quad p(t_\ast)=\nabla\phi_1(t_\ast,\xi),\quad \Phi(t_\ast)= \phi_1(t_\ast,\xi),\quad \xi\in \{x(t_\ast;\tilde{\xi})\,|\,\tilde{\xi}\in\Sigma_{\tilde{\varepsilon }_1}(0)\}.$$* Furthermore, we find a constant $\tilde{\varepsilon }_2\in(0,\varepsilon _2]$ such that $$\begin{aligned} &&O_{\tilde{\varepsilon }_2}(2t_\ast) :=\bigcup_{t_\ast\le s\le 2t_\ast} \Big(\{s\}\times \{x(s;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_2}(t_\ast)\}\Big),\\ &&\psi_2: [t_\ast,2t_\ast]\times \Sigma_{\tilde{\varepsilon }_2}(t_\ast)\to O_{\tilde{\varepsilon }_2}(2t_\ast),\quad \psi_2(s,\xi):=(s, x(s;\xi)),\\ &&\psi_2(t,\cdot): \Sigma_{\tilde{\varepsilon }_2}(t_\ast)\to O_{\tilde{\varepsilon }_2}(2t_\ast)|_{s=t}=\{t\}\times \{x(t;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_2}(t_\ast)\},\quad t_\ast\le t\le 2t_\ast,\end{aligned}$$ are $C^1$-diffeomorphic. Also, there exists $\varepsilon _3>0$ such that $$\{x(2t_\ast;\xi)\,|\,\xi\in\Sigma_{\tilde{\varepsilon }_2}(t_\ast)\}\supset \Sigma_{\varepsilon _3}(2t_\ast).$$ Let $\varphi_2: O_{\tilde{\varepsilon }_2}(2t_\ast)\to \Sigma_{\tilde{\varepsilon }_2}(t_\ast)$ be defined as $$\psi_2(t,\xi)=(t,x)\Leftrightarrow (t,\xi)=\psi_2^{-1}(t,x)=:(t,\varphi_2(t,x)),$$ where we note that $\varphi_2(t,\Sigma(t))=\Sigma(t_\ast)$. Then, the function $$\phi_2: O_{\tilde{\varepsilon }_2}(2t_\ast) \to{\mathbb R},\quad \phi_2(t,x):=\Phi(t; \varphi_2(t,x))$$ is $C^2$-smooth and satisfies $$\begin{aligned} &&\frac{\partial\phi_2}{\partial t} +v\cdot \nabla \phi_2=\phi_2 \Big< \big( \nabla v \big) \frac{\nabla \phi_2}{|\nabla \phi_2|}, \frac{\nabla \phi_2}{|\nabla \phi_2|}\Big>,\quad (t,x)\in O_{\tilde{\varepsilon }_2}(2t_\ast),\\ &&\phi_2(0,x)=\phi_1(t_\ast, x),\quad x\in \Sigma_{\tilde{\varepsilon }_2}(t_\ast),\\ &&\{x\,|\,\phi_2(s,x)=0\}=\Sigma(s),\quad \forall\,s\in[t_\ast,2t_\ast],\\ && \nabla \phi_2(s,x)=p(s;\varphi_2(s,x)),\quad \forall\,(s,x)\in O_{\tilde{\varepsilon }_2}(2t_\ast),\\ &&|\nabla \phi_2(s,x)|=|p(s;\varphi_2(s,x))|=|\nabla\phi_1(t_\ast, \varphi_2(s,x))|\\ &&\qquad =|\nabla\phi^0(\varphi_1(t_\ast,\varphi_2(s,x)))| ,\quad \forall\, s\in[t_\ast,2t_\ast],\,\,\, \forall\, x\in \Sigma(s).\end{aligned}$$ Note that $\phi_1$ and $\phi_2$ are smoothly connected at $t=t_\ast$. With the common constant $t_\ast$, we may repeat this process with $\varepsilon _1,\tilde{\varepsilon }_1, \varepsilon _2,\tilde{\varepsilon }_2, ,\varepsilon _3, \tilde{\varepsilon }_3, \cdots$ (note that it is possible that $\varepsilon _k,\tilde{\varepsilon }_k,\to0$ as $k\to \infty$). We conclude the proof with defining $\Theta:= \cup_{l\in{\mathbb N}} O_{\tilde{\varepsilon }_l}(lt_\ast)$. 
◻ If we choose $\phi^0$ which coincides with the (local) signed distance function of $\Sigma(0)$, then the solution $\phi$ obtained in Theorem [Theorem 2](#Thm1){reference-type="ref" reference="Thm1"} satisfies $$|\nabla \phi(t,x)|\equiv1 \quad\forall\, t\ge0,\,\,\,\forall\,x\in\Sigma(t).$$ ## Case 2: problem with level-set touching $\partial\Omega$ {#section:contact-line-case} We consider the problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with the level-set touching the boundary of $\Omega$. Let $K\subset {\mathbb R}^3$ be a bounded connected open set such that $K\cap \Omega\neq\emptyset$, $\partial K\cap \partial\Omega\neq\emptyset$ and $\partial K$ is a closed $C^2$-smooth surface. Define $$\Sigma(0):=\partial K \cap \bar{\Omega}.$$ Let $\phi^0$ be a $C^2$-smooth ${\mathbb R}$-valued function defined in an open set containing $\bar{K}\cup\bar{\Omega}$ such that $$\phi^0>0\mbox{ in $\bar{K}$},\quad \phi^0<0\mbox{ outside $ \bar{K}$},\quad \nabla \phi^0\neq0 \mbox{ on $\partial K$}.$$ Then, we have $$\phi^0>0\quad\mbox{in $K\cap\Omega$},\quad\{x\in\bar{\Omega}\,|\,\phi^0(x)=0\}=\Sigma(0),\quad \nabla \phi^0\neq0 \mbox{ on $\Sigma(0)$}.$$ Let $f$ be the solution of the original level-set equation [\[transport\]](#transport){reference-type="eqref" reference="transport"}. Define $$\Sigma(t):=\{x\in\bar{\Omega}\,|\, f(t,x)=0\}=X(t,0,\Sigma(0)),\quad \forall\,t\ge0,$$ where we note that $\partial\Omega$ is invariant under the flow $X$ of [\[1ODE\]](#1ODE){reference-type="eqref" reference="1ODE"}, and hence $\Sigma(t)$ always touches $\partial\Omega$. If we follow the same argument as given in Subsection 2.1, we would face non-trivial issues at/near $\Sigma(t)\cap\partial\Omega$ coming from the behavior of the variational equations of the characteristic ODEs on $\partial\Omega$. Hence, we modify the reasoning of Subsection 2.1 so that $\Sigma(t)\cap\partial\Omega$ is not involved. Let $\{\Omega^k\}_{k\in{\mathbb N}}$ be a monotone approximation of $\Omega$, i.e., each $\Omega^k$ is an open subset of $\Omega$; $\Omega^k\subset \Omega^{k+1}$ for all $k\in{\mathbb N}$; for any $G\subset \Omega$ compact, there exists $k=k(G)$ such that $G\subset\Omega^k$. Introduce $$\{ \Sigma^k(0) \}_{k\in{\mathbb N}}, \quad \Sigma^k(0):=\Sigma(0)\cap\Omega^k,\quad \Sigma^k(t):=X(t,0,\Sigma^k(0)),$$ where for each $k\in{\mathbb N}$ we have $\varepsilon >0$ depending on $k$ such that $$\Sigma^k_{\varepsilon }(0):=\bigcup_{x\in \Sigma^k(0)}\{ y\in{\mathbb R}^3\,|\, |y-x|<\varepsilon \}\subset \Omega.$$ Now, we may follow the same argument as given in Subsection 2.1 with $\Sigma^k(0)$ in place of $\Sigma(0)$ to obtain the following objects: $$\begin{aligned} &&O_{\tilde{\varepsilon }_1(k)}(t_\ast(k)) =\bigcup_{0\le s\le t_\ast(k)} \Big(\{s\}\times \{x(s;\xi)\,|\,\xi\in\Sigma^k_{\tilde{\varepsilon }_1(k)}(0)\}\Big),\\ &&O_{\tilde{\varepsilon }_2(k)}(2t_\ast(k)) =\bigcup_{t_\ast(k)\le s\le 2t_\ast(k)} \Big(\{s\}\times \{x(s;\xi)\,|\,\xi\in\Sigma^k_{\tilde{\varepsilon }_2(k)}(t_\ast(k))\}\Big),\ldots,\\ &&\Theta^k:= \bigcup_{l\in{\mathbb N}} O_{\tilde{\varepsilon }_l(k)}(lt_\ast(k)),\\ &&\mbox{a unique $C^2$-solution $\phi^k$ of \eqref{problem1}}|_{\Theta=\Theta^k}.\end{aligned}$$ The method of characteristics implies that $\phi^k\equiv\phi^{k'}$ on $\Theta^k\cap\Theta^{k'}$ for every $k,k'\in{\mathbb N}$. 
Therefore, setting $$\begin{aligned} \label{tube22} \Theta:=\bigcup_{k\in{\mathbb N}} \Theta^k,\end{aligned}$$ we obtain a unique $C^2$-solution $\phi$ of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"}. We remark that $$\Theta\cap(\{t\}\times\Sigma(t))=\{t\}\times(\Sigma(t)\setminus\partial\Omega),\quad\forall\,t\ge0.$$ We summarize the result: **Theorem 3**. *There exists a tubular neighborhood $\Theta$ in the sense of [\[tube22\]](#tube22){reference-type="eqref" reference="tube22"} of the level-set $\{\Sigma(t)\}_{t\ge0}$ touching $\partial\Omega$ for which [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} admits a unique $C^2$-solution $\phi$ satisfying $$\begin{aligned} &&\displaystyle\Sigma^\phi(t):= \{x\in\Omega\,|\, \phi(t,x)=0\}=\Sigma(t)\setminus \partial\Omega,\quad \forall\,t\in[0,\infty),\medskip\\ &&\displaystyle\forall\, t\in[0, \infty),\,\,\,\forall\,x\in \Sigma(t)\setminus \partial\Omega,\,\,\,\exists\, \xi\in \Sigma(0)\setminus \partial\Omega\mbox{ such that } |\nabla \phi(t,x)|=|\nabla\phi^0(\xi)|.\end{aligned}$$* Let us note in passing that another method to investigate a problem with the level-set touching $\partial\Omega$ would run via smooth extension of $v$ outside $\Omega$. The following steps would suffice: - **Step 1.** Extend the velocity field $v$ to ${\mathbb R}\times{\mathbb R}^3$ as a $C^3$-function by means of Whitney's extension theorem [@W] or the extension operators in Sobolev spaces (see, e.g., Chapter 5 of [@Adams]) together with the Sobolev embedding theorem, where additional conditions on $v$ and $\Omega$ are required accordingly. - **Step 2.** Extend the flow $X(t,\tau,\xi)$ to $t,\tau\in{\mathbb R}$, $\xi\in{\mathbb R}^3$ and the problem [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} to a tubular neighborhood $\tilde{\Theta}$ of the level-set $\{ X(t,0,\partial K) \}_{t\ge0}$. - **Step 3.** Solve the extended problem in the same way as Subsection 2.1, where each characteristic curve starting at a point of $\Omega$ stays inside $\Omega$ forever due to the flow invariance of $\bar{\Omega}$ and $\partial\Omega$ under $X$. The restriction of the solution obtained in Step 3 to $\Theta:=\tilde{\Theta}\cap ([0,\infty)\times\bar{\Omega})$ then is the desired object. ## Simpler nonlinear modification The nonlinear modification in [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} is designed to preserve $|\nabla \phi|$ along each characteristic curve on the level-set. If we relax the requirement, i.e., we only ask for an a priori bound of $|\nabla \phi|$ on the level-set, we may use a much simpler modification. We take the same configuration of the level-set considered in Subsection 2.1. 
Let $\alpha>0$ be a constant such that $$-\alpha|p|^2\le \langle\!(\nabla v(t,x))p,p\rangle\le \alpha|p|^2,\quad \forall\,p\in{\mathbb R}^3.$$ With a constant $\beta>\alpha$, we consider $$\begin{aligned} \frac{\partial\phi}{\partial t}+v\cdot\nabla\phi=\phi(\beta-|\nabla\phi|),\quad \phi(0,\cdot)=\phi^0,\end{aligned}$$ which can be viewed as the Hamilton-Jacobi equation [\[HJ22\]](#HJ22){reference-type="eqref" reference="HJ22"} with $$H(t,x,p,\Phi):=v\cdot p-\Phi(\beta-|p|).$$ The characteristic ODEs in this case are given as $$\begin{aligned} x'(s)&=&\frac{\partial H}{\partial p}(s,x(s),p(s),\Phi(s))=v(s,x(s))+\Phi(s)\frac{p(s)}{|p(s)|},\\ p'(s)&=&-\frac{\partial H}{\partial x}(s,x(s),p(s),\Phi(s))-\frac{\partial H}{\partial\Phi}(s,x(s),p(s),\Phi(s))p(s)\\ &=&(\nabla v(s,x(s)))^{\sf T} p(s)-(|p(s)|-\beta)p(s),\\ \Phi'(s)&=&\frac{\partial H}{\partial p}(s,x(s),p(s),\Phi(s))\cdot p(s)-H(s,x(s),p(s),\Phi(s))=\Phi(s),\\ && x(0)=\xi,\quad p(0)=\nabla\phi^0(\xi),\quad \Phi(0)=\phi^0(\xi)\quad \mbox{(except $\xi$ such that $p(0)=0$).}\end{aligned}$$ As long as the above characteristic ODEs have solutions, $|p(s)|^2$ evolves as $$\frac{1}{2} \frac{\rm d}{{\rm d} s}|p(s)|^2=\langle\!(\nabla v(s,x(s)))p(s),p(s)\rangle-(|p(s)|-\beta)|p(s)|^2,$$ which leads to $$\begin{aligned} \frac{\rm d}{{\rm d} s}|p(s)|^2\le -2(|p(s)|-\beta-\alpha)|p(s)|^2, \quad \frac{\rm d}{{\rm d} s}|p(s)|^2\ge -2(|p(s)|-\beta+\alpha)|p(s)|^2.\end{aligned}$$ If $\beta-\alpha\le |\nabla \phi^0|\le \beta+\alpha$ on $\Sigma(0)$, we have for each $\xi\in\Sigma(0)$, $$\beta-\alpha\le |p(s)|\le \beta+\alpha\quad\mbox{as long as $p(s)$ exists}.$$ In fact, suppose that there exists $\tau>0$ such that $|p(\tau)|>\beta+\alpha$; set $s^\ast:=\sup\{ s\le \tau\,|\, |p(s)|=\beta+\alpha \}$; the continuity of $|p(\cdot)|$ implies that $|p(s^\ast)|=\beta+\alpha$ and $|p(s)|> \beta+\alpha$ for all $s\in(s^\ast,\tau]$; then, we necessarily have $$|p(\tau)|^2\le |p(s^\ast)|^2-2\int^\tau_{s^\ast} (|p(s)|-\beta-\alpha)|p(s)|^2ds <|p(s^\ast)|^2,$$ which is a contradiction; a fully analogous argument yields the lower bound. Therefore, we may apply the reasoning of Subsection 2.1 to the following problem: find a tubular neighborhood $\Theta$ of $\{\Sigma(t)\}_{t\ge0}$ and a $C^2$-function $\phi$ satisfying $$\begin{aligned} \label{problem2-2} &&\left\{ \begin{array}{lll} &&\displaystyle\frac{\partial\phi}{\partial t} +v\cdot \nabla \phi=\phi (\beta -|\nabla\phi|)\mbox{\quad in $\Theta$},\medskip\\ &&\displaystyle\phi(0,\cdot)=\phi^0\mbox{\quad on $\Theta|_{t=0}$}. \end{array} \right.\end{aligned}$$ We obtain the following result.

**Theorem 4**. *Suppose that $\beta-\alpha\le |\nabla \phi^0|\le \beta+\alpha$ on $\Sigma(0)$. Then, there exists a tubular neighborhood $\Theta$ of the level-set $\{\Sigma(t)\}_{t\ge0}$ for which [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"} admits a unique $C^2$-solution $\phi$ satisfying $$\begin{aligned} &&\Sigma^\phi(t):= \{x\,|\, \phi(t,x)=0\}=\Sigma(t),\quad \forall\,t\in[0,\infty),\medskip\\ &&\beta-\alpha \le |\nabla \phi(t,x)|\le \beta+\alpha,\quad \forall\, t\in[0, \infty),\,\,\,\forall\,x\in \Sigma(t).\end{aligned}$$*

Note that Theorem [Theorem 4](#Thm1-1){reference-type="ref" reference="Thm1-1"} is interesting for numerical purposes because it is usually not important to keep $|\nabla \phi| \equiv 1$ exactly, but rather to stay away from extreme values of the gradient norm.
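Although numerics is beyond the scope of this paper, the following minimal sketch illustrates [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"} on a two-dimensional periodic toy problem rather than the bounded three-dimensional setting above; the discretization (first-order upwind transport, explicit Euler in time) and all parameters are illustrative assumptions and not taken from the analysis. The test velocity is the rigid rotation $v=(-y,x)$, for which $\langle(\nabla v)p,p\rangle=0$, so that any $\alpha>0$ works and $\beta=1$ is admissible for $|\nabla\phi^0|\equiv1$ on $\Sigma(0)$.

```python
import numpy as np

# Minimal illustrative sketch (not from the paper): explicit first-order scheme for
#   phi_t + v . grad(phi) = phi * (beta - |grad(phi)|)
# on a 2D periodic grid with a rigid-rotation velocity field v = (-y, x).
N, L = 128, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

vx, vy = -Y, X                                  # rigid rotation: grad(v) is skew-symmetric
phi = 0.5 - np.sqrt((X - 0.3) ** 2 + Y ** 2)    # signed distance to a circle, |grad phi| = 1
beta, dt, T = 1.0, 0.2 * h, 1.0                 # hypothetical parameters; dt respects a CFL-type restriction

def upwind_gradient(phi, vx, vy, h):
    """One-sided differences chosen according to the sign of the velocity."""
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    return np.where(vx > 0, dxm, dxp), np.where(vy > 0, dym, dyp)

t = 0.0
while t < T:
    px, py = upwind_gradient(phi, vx, vy, h)
    # central differences for the gradient norm in the source term
    gx = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * h)
    gy = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * h)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    phi += dt * (-(vx * px + vy * py) + phi * (beta - grad_norm))
    t += dt

# Near the zero level-set, |grad phi| is expected to stay of order beta (= 1 here).
band = np.abs(phi) < 2 * h
print("|grad phi| near the level-set: min %.3f, max %.3f"
      % (grad_norm[band].min(), grad_norm[band].max()))
```

The printed values are only meant to indicate whether $|\nabla\phi|$ stays away from $0$ and from large values near the zero level-set, in the spirit of Theorem [Theorem 4](#Thm1-1){reference-type="ref" reference="Thm1-1"}.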
Therefore, the problem [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"} should be investigated in more detail from the numerical perspective in the future. # Viscosity solution on the whole domain {#section:viscosity-solution} Let $\Omega\subset{\mathbb R}^3$ be a bounded connected open set. Let $v=v(t,x)$ be a given function belonging to $C^0([0,\infty)\times\bar{\Omega};{\mathbb R}^3)\cap C^1([0,\infty)\times\Omega;{\mathbb R}^3)$ and satisfying (H2)--(H3) (we do not need $C^3$-regularity in $x$). Let $\Sigma(0)\subset \Omega$ be a closed $2$-dimensional surface (topological manifold) and let $\phi^0:\bar{\Omega}\to{\mathbb R}$ be a $C^0$-function such that $\{x\in\Omega\,|\,\phi^0(x)=0\}=\Sigma(0)$, where $f(t,x):=\phi^0(X(0,t,x))$ is not necessarily a classical solution of the original linear transport equation. We keep the notation and configuration in [\[1interior\]](#1interior){reference-type="eqref" reference="1interior"} and [\[1interface\]](#1interface){reference-type="eqref" reference="1interface"} with the current $X$ and $\phi^0$, where we repeat $$\Sigma(t):=\{x\in\Omega\,|\, f(t,x)=0\}=X(t,0,\Sigma(0)),\quad \Sigma(t)\cap\partial\Omega=\emptyset,\quad \forall\,t\ge0.$$ If we deal with the Hamilton-Jacobi equation in [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} on the whole space $\Omega$, the nonlinear term would suffer from singularity, i.e., $\langle(\nabla v) p , p \rangle |p|^{-2}$ cannot be continuous at $p=0$. Hence, also for a simpler structure near $\partial\Omega$, we introduce a smooth cut-off so that the nonlinearity is smooth and effective only around the level-set $\{\Sigma(t)\}_{t\ge0}$, where we note carefully that the level-set $\{\Sigma(t)\}_{t\ge0}$ is not the one determined by the upcoming viscosity solution, i.e., such cut-off can be given independently from the unknown function. We also note that $\nabla v$ is not required on $\partial\Omega$ due to the cut-off. We explain two ways to introduce a suitable cut-off. The first way is simpler but available only for the problem within each finite time interval. Fix an arbitrary terminal time $T>0$. For $\varepsilon >0$, let $K_{\varepsilon } \subset \bar{\Omega}$ denote the intersection of $\bar{\Omega}$ and the $\varepsilon$-neighborhood of $\partial\Omega$. Then, since $\Sigma(t)$ never touches $\partial\Omega$ within $[0,T]$, we find $\varepsilon >0$ such that $$K_{3\varepsilon } \cap \Sigma(t)=\emptyset\quad(\mbox{or, equivalently, } K_{3\varepsilon }\subset \Omega^-(t) ),\quad \forall\, t \in[0,T].$$ Define the continuous function $R_T:[0,T]\times\bar{\Omega}\times{\mathbb R}^3\to{\mathbb R}$ as $$\begin{aligned} &&R_T(t,x,p) :=\eta_0(x)\eta_2(|p|)\Big<\big( \nabla v(t,x)\big) \frac{p}{|p|},\frac{p}{|p|}\Big> , \\ &&\eta_0(x):=\left\{ \begin{array}{cll}\medskip 1&&\mbox{ for $x\not\in K_{3\varepsilon }$}, \\\nonumber \medskip 0&&\mbox{ for $x\in K_{2\varepsilon }$}, \\\nonumber \medskip \mbox{non-negative smooth transition from $1$ to $0$}&&\mbox{ otherwise}, \end{array} \right.\\ &&\eta_2(r):=\left\{ \begin{array}{cll}\medskip 1&&\mbox{ for $\displaystyle\frac{2}{3}\le r\le \frac{4}{3} $}, \\\nonumber\medskip 0&&\mbox{ for $\displaystyle r\le \frac{1}{3}$ or $\displaystyle\frac{5}{3}\le r$}, \\\medskip \mbox{monotone smooth transition from $1$ to $0$}&&\mbox{ otherwise}. 
\end{array} \right.\end{aligned}$$ This choice of cut-off is uncomplicated since we do not need detailed information on asymptotics of dist$(\Sigma(t),\partial\Omega)$ as $t\to\infty$; however, $R_T$ depends on the terminal time and a time global analysis is impossible. Note that the above specific choice of $1/3,2/3$, etc., in $\eta_2$ is not essential. The second way is to allow $t$-dependency for cut-off. Since $\Sigma(t)$ never touches $\partial\Omega$ within $[0,\infty)$, we find a smooth function $\varepsilon :[0,\infty)\to{\mathbb R}_{>0}$ (possibly $\varepsilon (t)\to0$ as $t\to\infty$) such that $$K_{3\varepsilon (t)} \cap \Sigma(t)=\emptyset\quad(\mbox{or, equivalently, } K_{3\varepsilon (t)}\subset \Omega^-(t) ),\quad \forall\, t \in[0,\infty).$$ Then, we take a smooth function $\eta_1:[0,\infty)\times\bar{\Omega}\to[0,1]$ such that $$\begin{aligned} &&\eta_1(t,x)=\left\{ \begin{array}{cll}\medskip 1&&\mbox{ for $x\not\in K_{3\varepsilon (t)}$}, \\\nonumber \medskip 0&&\mbox{ for $x\in K_{2\varepsilon (t)}$}, \\\nonumber \medskip \mbox{non-negative smooth transition from $1$ to $0$}&&\mbox{ otherwise}, \end{array} \right.\end{aligned}$$ where we omit an explicit formula of such $\eta_1$, and define the continuous function $R$ as $$\begin{aligned} \label{RRRR} &&R(t,x,p) :=\eta_1(t,x)\eta_2(|p|)\Big<\big( \nabla v(t,x)\big) \frac{p}{|p|},\frac{p}{|p|}\Big>,\end{aligned}$$ where $\eta_2$ is the one in $R_T$. Due to (H3), there exists a constant $V_0>0$ such that $$\begin{aligned} \label{bbb} \sup|R_T|\le V_0,\quad \sup|R|\le V_0.\end{aligned}$$ In the rest of the paper, we take $R$ given as in [\[RRRR\]](#RRRR){reference-type="eqref" reference="RRRR"}. Note that all upcoming results hold also for $R_T$ as long as the terminal time $T$ is unchanged. For an arbitrary $T>0$, we discuss existence of a unique viscosity solution $\phi$ of $$\begin{aligned} \label{HJ} && \left\{ \begin{array}{rll} \displaystyle\!\!\! \frac{\partial\phi}{\partial t}(t,x)+v(t,x)\cdot\nabla \phi (t,x)\!\!\!&=&\!\!\! \phi (t,x)R(t,x,\nabla \phi(t,x)) \mbox{ in $(0,T)\times\Omega$},\\ \displaystyle\phi(0,x)\!\!\!&=&\!\!\!\phi^0(x) \mbox{ on $\Omega$},\\[0.5ex] \displaystyle\phi(t,x)\!\!\!&=&\!\!\!\phi^0(X(0,t,x)) \mbox{ on $[0,T]\times\partial\Omega$}. \end{array} \right.\end{aligned}$$ Before going on with [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"}, we recall the definition of viscosity (sub/super)solutions of a general first order Hamilton-Jacobi equation of the form $$\begin{aligned} \label{HJ2} G(z,u(z),\nabla_z u(z))=0 \mbox{\quad in $O$},\end{aligned}$$ where $O\subset{\mathbb R}^N$ is an open set, $G=G(z,u,q):O\times{\mathbb R}\times{\mathbb R}^N\to{\mathbb R}$ is a given continuous function and $u:O\to{\mathbb R}$ is the unknown function. Our evolutional Hamilton-Jacobi equation is also seen in this form with $z=(t,x)$. In the literature, [\[HJ2\]](#HJ2){reference-type="eqref" reference="HJ2"} is often treated as a typical example of degenerate second order PDEs (i.e., the second order term is completely degenerate to be $0$). 
To state the definition, we introduce the upper semicontinuous envelope $u^\ast:O\to{\mathbb R}$ and the lower semicontinuous envelope $u_\ast:O\to{\mathbb R}$ of a locally bounded function $u:O\to{\mathbb R}$ as $$\begin{aligned} &&u^\ast(z):=\lim_{r\to0}\,\sup\{ u(y)\,|\,y\in O,\,\,\,0\le|y-z|\le r\},\\ &&u_\ast(z):=\lim_{r\to0}\,\inf\{ u(y)\,|\,y\in O,\,\,\,0\le|y-z|\le r\}.\end{aligned}$$ Note that $u^\ast$ is upper semicontinuous and $u_\ast$ is lower semicontinuous; if $u$ is upper semicontinuous (resp. lower semicontinuous), we have $u=u^\ast$ (resp. $u=u_\ast$).

*A function $u:O\to{\mathbb R}$ is a viscosity subsolution (resp. supersolution) of [\[HJ2\]](#HJ2){reference-type="eqref" reference="HJ2"}, provided*

- $u^\ast$ is bounded from above (resp. $u_\ast$ is bounded from below);

- If $(\varphi,z)\in C^1(O;{\mathbb R})\times O$ satisfies $$\max_{y\in O} (u^\ast(y)-\varphi(y))=u^\ast(z)-\varphi(z) \quad(\mbox{resp. }\min_{y\in O} (u_\ast(y)-\varphi(y))=u_\ast(z)-\varphi(z)) ,$$ we have $$G(z,u^\ast(z),\nabla_z \varphi(z))\le0 \quad(\mbox{resp. } G(z,u_\ast(z),\nabla_z \varphi(z))\ge0) .$$

A function $u:O\to{\mathbb R}$ is a viscosity solution of [\[HJ2\]](#HJ2){reference-type="eqref" reference="HJ2"} if it is both a viscosity subsolution and a viscosity supersolution of [\[HJ2\]](#HJ2){reference-type="eqref" reference="HJ2"}. It is well-known that $\max,\min$ in the definition can be replaced by a local maximum and a local minimum, respectively.

## Existence of viscosity solution

We state the main result of this subsection.

**Theorem 5**. *Let $T>0$ be arbitrary. There exists a viscosity solution $\phi\in C^0([0,T)\times\bar{\Omega};{\mathbb R})$ of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"}, i.e., $\phi$ satisfies the first equation in [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} in the sense of the above definition and the initial/boundary condition strictly. Such a viscosity solution is unique. Furthermore, it holds that $$\Sigma^\phi(t):=\{x\in\Omega \,|\,\phi(t,x)=0 \}=\Sigma(t),\quad \forall\,t\in[0,T).$$*

*Theorem [Theorem 5](#Thm2){reference-type="ref" reference="Thm2"} implies the existence of a unique viscosity solution $\phi\in C^0([0,\infty)\times\bar{\Omega};{\mathbb R})$ of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} with $\Sigma^\phi(t)=\Sigma(t)$ for all $t\in[0,\infty)$.*

**Proof of Theorem [Theorem 5](#Thm2){reference-type="ref" reference="Thm2"}.** Introduce the function $S:[0,T]\times\bar{\Omega}\to{\mathbb R}$ as $$\begin{aligned} &&S(t,x):=\left\{ \begin{array}{cll} &-V_0\mbox{\qquad for $ x\in\overline{\Omega^+(t)}$, $t\in[0,T]$},\\ &V_0\mbox{\,\,\,\,\,\qquad for $ x\in\Omega^-(t)\setminus K_{\varepsilon(T)}$, $t\in[0,T]$},\\ &0\mbox{\qquad \,\,\quad for $ x\in K_{\varepsilon(T)/2}$, $t\in[0,T]$},\\ &\mbox{non-negative smooth transition from $V_0$ to $0$}\mbox{\, otherwise}, \end{array} \right.\end{aligned}$$ where $\varepsilon (t)\ge \varepsilon (T)$ for all $t\in[0,T]$ and $S$ is such that $S\equiv0$ near $\partial\Omega$ for all $t\in[0,T]$. Note that $S$ is smooth except on $\cup_{0\le t\le T}(\{t\}\times\Sigma(t))$.
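For concreteness, the smooth transitions appearing in the cut-offs $\eta_0,\eta_1,\eta_2$ and in $S$ above can be realized by the following standard construction (any other smooth monotone transition would do equally well): with $$\begin{aligned} &&f(r):=\left\{ \begin{array}{cl} e^{-1/r}&\mbox{ for $r>0$},\\ 0&\mbox{ for $r\le0$}, \end{array} \right. \qquad \theta_{a,b}(r):=\frac{f(b-r)}{f(b-r)+f(r-a)}\quad (a<b),\end{aligned}$$ the function $\theta_{a,b}$ is $C^\infty$, non-increasing, identically $1$ on $(-\infty,a]$ and identically $0$ on $[b,\infty)$; for instance, $\eta_2(r)=(1-\theta_{1/3,2/3}(r))\,\theta_{4/3,5/3}(r)$ is one admissible choice.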
**Step 1: viscosity subsolution/supersolution.** We claim that the function $\rho:[0,T)\times\bar{\Omega}\to{\mathbb R}$ defined as $$\rho(t,x):=\phi^0(X(0,t,x))e^{\int_0^t S(s,X(s,t,x))ds}\quad (\mbox{$X$ is the flow of \eqref{1ODE}})$$ is a viscosity subsolution of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} satisfying the initial/boundary condition strictly. For this purpose, we mollify $\phi^0$ by the standard Friedrichs' mollifier in ${\mathbb R}^3$ with the parameter $\delta>0$, the result being denoted by $\phi_\delta^0$. We consider the function $$\rho_\delta(t,x):=\phi^0_\delta(X(0,t,x))e^{\int_0^t S(s,X(s,t,x))ds},$$ where $\rho_\delta\to\rho$ locally uniformly on $[0,T)\times\Omega$ as $\delta\to0+$. For each point $(t,x)\in(0,T)\times \Omega\setminus \cup_{0\le s<T}(\{s\}\times\Sigma(s))$, there exists a sufficiently small neighborhood of $(t,x)$ on which $\rho_\delta$ is $C^1$-smooth and satisfies $$\begin{aligned} \frac{\partial\rho_\delta}{\partial t}(t,x)+v(t,x)\cdot\nabla \rho_\delta (t,x)&=& \rho_\delta(t,x) S(t,x)\end{aligned}$$ for all sufficiently small $\delta>0$.

**Case 1:** fix any $(t,x)\in (0,T)\times\Omega \setminus \cup_{0\le s<T}(\{s\}\times\Sigma(s))$ such that $x\in \Omega^+(t)$. Then, $\rho(t,x)>0$, and hence $\rho_\delta(t,x)>0$ for all sufficiently small $\delta>0$. Let $\varphi$ be any test function satisfying the condition in the definition of viscosity subsolution at $(t,x)$, i.e., $\rho-\varphi$ attains its maximum at $(t,x)$ within the closed $r$-ball $\overline{B_r(t,x)}$ with the center $(t,x)$ and some $r>0$. Let $(t_\delta,x_\delta)$ be the maximum point of $\rho_\delta-\varphi$ within $\overline{B_r(t,x)}$, where $(t_\delta,x_\delta)\to (t,x)$ as $\delta\to0+$. Since $\rho_\delta$ is $C^1$-smooth near $(t_\delta,x_\delta)$, we see that $\varphi (s,y)+(\rho_\delta(t_\delta,x_\delta)-\varphi(t_\delta,x_\delta))$ is tangent to $\rho_\delta$ at $(s,y)=(t_\delta,x_\delta)$ from above. This implies that for all sufficiently small $\delta>0$, $$\begin{aligned} &&\frac{\partial\varphi}{\partial t}(t_\delta,x_\delta)+v(t_\delta,x_\delta)\cdot\nabla \varphi(t_\delta,x_\delta) = \frac{\partial\rho_\delta}{\partial t}(t_\delta,x_\delta)+v(t_\delta,x_\delta)\cdot\nabla \rho_\delta (t_\delta,x_\delta)\\ &&\quad = \rho_\delta(t_\delta,x_\delta)S(t_\delta,x_\delta) =-\rho_\delta(t_\delta,x_\delta)V_0 \le \rho_\delta(t_\delta,x_\delta)R(t_\delta,x_\delta,\nabla \varphi(t_\delta,x_\delta)).\end{aligned}$$ Sending $\delta\to0+$, we obtain $$\begin{aligned} &&\frac{\partial\varphi}{\partial t}(t,x)+v(t,x)\cdot\nabla \varphi(t,x) = \frac{\partial\rho}{\partial t}(t,x)+v(t,x)\cdot\nabla \rho(t,x) \\ &&\quad = \rho(t,x)S(t,x) =-\rho(t,x)V_0 \le \rho(t,x)R(t,x,\nabla \varphi(t,x)).\end{aligned}$$

**Case 2:** fix any $(t,x)\in (0,T)\times\Omega \setminus \cup_{0\le s<T}(\{s\}\times\Sigma(s))$ such that $x\in \Omega^-(t)$. Then, $\rho(t,x)<0$, and hence $\rho_\delta(t,x)<0$ for all sufficiently small $\delta>0$. Let $\varphi$ be any test function satisfying the condition in the definition of viscosity subsolution at $(t,x)$, i.e., $\rho-\varphi$ attains its maximum at $(t,x)$ within the ball $\overline{B_r(t,x)}$ with some $r>0$. Let $(t_\delta,x_\delta)$ be the maximum point of $\rho_\delta-\varphi$ within $\overline{B_r(t,x)}$, where $(t_\delta,x_\delta)\to (t,x)$ as $\delta\to0+$.
Since $\rho_\delta$ is $C^1$-smooth near $(t_\delta,x_\delta)$, we see that $\varphi (s,y)+(\rho_\delta(t_\delta,x_\delta)-\varphi(t_\delta,x_\delta))$ is tangent to $\rho_\delta$ at $(s,y)=(t_\delta,x_\delta)$ from above. This implies that for all sufficiently small $\delta>0$, $$\begin{aligned} &&\frac{\partial\varphi}{\partial t}(t_\delta,x_\delta)+v(t_\delta,x_\delta)\cdot\nabla \varphi(t_\delta,x_\delta) = \frac{\partial\rho_\delta}{\partial t}(t_\delta,x_\delta)+v(t_\delta,x_\delta)\cdot\nabla \rho_\delta (t_\delta,x_\delta)\\ &&\quad = \rho_\delta(t_\delta,x_\delta)S(t_\delta,x_\delta) =\rho_\delta(t_\delta,x_\delta)V_0\le \rho_\delta(t_\delta,x_\delta)R(t_\delta,x_\delta,\nabla \varphi(t_\delta,x_\delta)).\end{aligned}$$ Sending $\delta\to0+$, we obtain $$\begin{aligned} && \frac{\partial\varphi}{\partial t}(t,x)+v(t,x)\cdot\nabla \varphi(t,x) = \frac{\partial\rho}{\partial t}(t,x)+v(t,x)\cdot\nabla \rho(t,x) \\ &&\quad =\rho(t,x)S(t,x) \le \rho(t,x)R(t,x,\nabla \varphi(t,x)).\end{aligned}$$

**Case 3:** fix any $(t,x)\in \cup_{0< s<T}(\{s\}\times\Sigma(s))$. Then, $\rho(t,x)=0$. Let $\varphi$ be any test function satisfying the condition in the definition of viscosity subsolution at $(t,x)$. We see that $$\begin{aligned} \label{superdiff} &&\limsup_{(s,y)\to(t,x)}\frac{\rho(s,y)-\rho(t,x)-(\varphi(s,y)-\varphi(t,x))}{|(s,y)-(t,x)|}\\\nonumber &&=\limsup_{(s,y)\to(t,x)}\frac{\rho(s,y)-\rho(t,x)-D\varphi(t,x)\cdot((s,y)-(t,x))}{|(s,y)-(t,x)|}\le 0,\end{aligned}$$ i.e., $D\varphi(t,x)=(\frac{\partial\varphi}{\partial t}(t,x),\nabla\varphi(t,x))\in D^+\rho(t,x)$, where $D^+\rho(t,x)$ stands for the superdifferential[^5] of $\rho$ at $(t,x)$. Since $\rho(s,X(s,t,x))=\rho(t,x)= 0$ for all $0< s\le t$ and $$\lim_{s\to t-0}\frac{X(s,t,x)-x}{s-t}=\lim_{s\to t-0}\frac{X(s,t,x)-X(t,t,x)}{s-t}=v(t,x),$$ [\[superdiff\]](#superdiff){reference-type="eqref" reference="superdiff"} with $y=X(s,t,x)$ and $s\to t-0$ implies that $$\begin{aligned} D\varphi(t,x)\cdot (1,v(t,x))\le0, \end{aligned}$$ from which we obtain $$\frac{\partial\varphi}{\partial t}(t,x)+v(t,x)\cdot\nabla \varphi(t,x)\le 0=\rho(t,x)R(t,x,\nabla \varphi(t,x)).$$ Therefore, we conclude that $\rho$ is a viscosity subsolution of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"}. A similar reasoning with $$\tilde{\rho}_\delta(t,x):=\phi_\delta^0(X(0,t,x))e^{\int_0^t -S(s,X(s,t,x))ds}$$ shows that the function $\tilde{\rho}:[0,T)\times\Omega\to{\mathbb R}$ defined as $$\tilde{\rho}(t,x):=\phi^0(X(0,t,x))e^{\int_0^t -S(s,X(s,t,x))ds}$$ is a viscosity supersolution of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} satisfying the initial/boundary condition strictly, where we look at the subdifferential[^6] of $\tilde{\rho}$ in Case 3.

**Step 2: application of Perron's method and comparison principle.** Recall that $\partial\Omega$ is invariant under $X$ and that $S$ is defined to satisfy $S\equiv0$ near $\partial\Omega$. Hence, there exist $\varepsilon _1,\varepsilon _2$ with $0<\varepsilon _2\ll\varepsilon _1\ll\varepsilon$ such that $S\equiv0$ on $[0,T)\times K_{\varepsilon _1}$ and $X(s,t,x) \in K_{\varepsilon _1}$ for all $s\in[0,t]$ and all $(t,x)\in [0,T)\times K_{\varepsilon _2}$.
Therefore, the viscosity subsolution and supersolution $\rho ,\tilde{\rho}\in C^0([0,T)\times\bar{\Omega})$ are such that $$\begin{aligned} &&\rho\le\tilde{\rho}\mbox{ in $[0,T)\times\bar{\Omega}$},\quad \rho=\tilde{\rho}=\phi^0(X(0,t,x))\mbox{ in $[0,T)\times K_{\varepsilon _2}$},\\ &&\rho(0,x)=\tilde{\rho}(0,x)=\phi^0(x) \mbox{ on }\bar{\Omega},\quad \rho=\tilde{\rho}=0\mbox{ on }\bigcup_{0\le s<T}\Big(\{s\}\times\Sigma(s)\Big).\end{aligned}$$ Applying Perron's method (Theorem 3.1 in [@Ishii]), we obtain a viscosity solution $\phi:(0,T)\times\Omega\to{\mathbb R}$ of the first equation in [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} such that $$\rho\le \phi\le \tilde{\rho} \mbox{ in $(0,T)\times\Omega$}.$$ Since the level-sets of $\rho(t,\cdot),\tilde{\rho}(t,\cdot)$ are equal to $\Sigma(t)$, the level-sets of $\phi(t,\cdot)$ must be equal to $\Sigma(t)$ as well. It is clear that $\phi$ can be continuously extended up to $([0,T)\times\partial\Omega)\cup(\{0\}\times\bar{\Omega})$ satisfying $$\begin{aligned} \label{boundary} \rho(t,x)=\phi(t,x)=\tilde{\rho}(t,x)=\phi^0(X(0,t,x))\mbox{ on $([0,T)\times\partial\Omega)\cup(\{0\}\times\bar{\Omega})$};\end{aligned}$$ the same holds for $\phi^\ast,\phi_\ast$, the lower/upper semicontinuous envelop of $\phi$: $$\phi^\ast(t,x)=\phi_\ast(t,x)=\phi^0(X(0,t,x))\mbox{ on $([0,T)\times\partial\Omega)\cup(\{0\}\times\bar{\Omega})$}.$$ By definition, we have $\phi_\ast\le \phi^\ast$. On the other hand, $\phi$ being both a viscosity subsolution and supersolution implies that $\phi^\ast$ is an upper semicontinuous viscosity subsolution (note that $(\phi^{\ast})^{\ast}=\phi^\ast$) and $\phi_\ast$ is a lower semicontinuous viscosity supersolution (note that $(\phi_{\ast})_\ast=\phi_\ast$); the comparison principle (Theorem 8.2 in [@CIL]) implies that $\phi^\ast\le \phi_\ast$ on $[0,T)\times\Omega$. Here, we remark that [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} in the form of [\[HJ2\]](#HJ2){reference-type="eqref" reference="HJ2"} does not directly satisfy the monotonicity property ($u\mapsto G(z,u,q)$ must be nondecreasing), but [\[bbb\]](#bbb){reference-type="eqref" reference="bbb"} implies that one can verify the monotonicity property through the change of variable $u=e^{V_0t}\tilde{u}$ (see the following subsection and Chapter 2 of [@Giga]). Thus, we conclude that $\phi$ is continuous on $[0,T)\times\bar{\Omega}$ satisfying the initial/boundary condition strictly. Furthermore, such a viscosity solution is unique. In fact, if $\tilde{\phi}$ is a viscosity solution of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} in the current sense, the comparison principle implies $\phi\le\tilde{\phi}$ by regarding $\phi$ as a viscosity subsolution and $\tilde{\phi}$ as a viscosity supersolution; $\phi\ge\tilde{\phi}$ by regarding $\phi$ as a viscosity supersolution and $\tilde{\phi}$ as a viscosity subsolution. ◻ We remark that the result and reasoning of this subsection hold also for the Hamilton-Jacobi equation corresponding to [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"}, where we need to add suitable cut-off to the nonlinearity in [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"}. 
To see this, observe that the mapping $u\mapsto u(\beta-|p|)$ is not monotone for all $p\in{\mathbb R}^3$; hence, we define $$\begin{aligned} \label{3simple} &R(t,x,p):=\eta_1(t,x)\eta_3(|p|)(\beta-|p|),\\\nonumber &\eta_3(r):=\left\{ \begin{array}{cll}\medskip 1&&\mbox{ for $\displaystyle 0\le r\le 2(\beta+\alpha) $}, \\\nonumber\medskip 0&&\mbox{ for $\displaystyle r\ge 3(\beta+\alpha)$}, \\\medskip \mbox{monotone smooth transition from $1$ to $0$}&&\mbox{ otherwise}, \end{array} \right.\\\nonumber &\mbox{$\beta>\alpha>0$ are the constants given in Subsection 2.3}\end{aligned}$$ to confirm $\sup|R|\le \tilde{V}_0:=2\beta+3\alpha$. Then, with this $R$, Theorem [Theorem 5](#Thm2){reference-type="ref" reference="Thm2"} still holds. ## $C^2$-regularity of viscosity solution near the level-set We prove that the viscosity solution of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} coincides with the classical solution of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} in a $t$-global tubular neighborhood of the level-set. Note that $|\nabla\phi|$ is nicely controlled on the level-set in [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} and there exists a tubular neighborhood $\Theta$ in which $\eta_1(t,x)\eta_2(|\nabla\phi(t,x)|)\equiv 1$, i.e., the solution of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} satisfies the Hamilton-Jacobi equation in [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} within $\Theta$. Our proof is based on the technique known as *doubling the number of variables*, which is standard in proofs of comparison principles for viscosity solutions; more precisely, we adapt the localized version of this technique to our situation with an unusual choice of a penalty function. The outcome of standard *"localized doubling the number of variables"* (see, e.g., Theorem 3.12 of [@B-CD]) for a Hamilton-Jacobi equation $$\begin{aligned} \label{3HJ-0} \partial_tu+ H(t,x,\nabla u)=0\end{aligned}$$ states the following: - *Let $u,\tilde{u}$ be viscosity solutions of [\[3HJ-0\]](#3HJ-0){reference-type="eqref" reference="3HJ-0"} defined in a cone $$\mathcal{C}:= \{ (t,x)\in [0,b]\times{\mathbb R}^N\,|\, |x-z|\le C(b-t) \},$$ where $C>0$ is a constant such that $$\begin{aligned} &|H(t,x,p)-H(t,x,q)|\le C|p-q| ,\\ &|H(t,x,p)-H(s,y,p)|\le C(1+|p|)|(t,x)-(s,y)|.\end{aligned}$$ If $u(0,\cdot)=\tilde{u}(0,\cdot)$ on $\{x\,|\,|x-z|\le Cb\}$ (the bottom of $\mathcal{C}$), then $u\equiv \tilde{u}$ on $\mathcal{C}$.* The cone is the region of dependence for general first order Hamilton-Jacobi equations, i.e., the value $u(b,z)$ is determined by the information only on the bottom of $\mathcal{C}$ (the speed of propagation is finite). If we directly apply the result to our case, we have to take such a cone contained in the tubular neighborhood $\Theta$, which implies that the time interval $[0,b]$ must be small. This is the nontrivial aspect of this subsection. In order to overcome the difficulty, we will introduce an unusual penalty function (i.e., $h(\bar{u}^2)$ below) in *localized doubling the number of variables* that consists of the classical solution itself. Therefore, our technique is specialized for local comparison of a viscosity solution and a classical solution. We emphasize that if both solutions are defined on the whole domain, the issue is obvious, but otherwise not. Suppose that $v$ satisfies (H1)--(H3). 
We consider [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} with initial data $\phi^0\in C^2(\bar{\Omega};{\mathbb R})$ such that $\frac{2}{3}<|\nabla \phi^0|<\frac{4}{3}$ on $\Sigma(0)$ (this ensures $\eta_1\eta_2\equiv1$ near the level-set). Then, Theorem [Theorem 2](#Thm1){reference-type="ref" reference="Thm1"} yields a tubular neighborhood $\Theta$ of the level-set $\{\Sigma(t)\}_{t\ge0}$ and a unique $C^2$-solution $\phi:\Theta\to{\mathbb R}$ of [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with $\eta_1(t,x)\eta_2(|\nabla\phi(t,x)|)\equiv 1$ in $\Theta$ (hence, from here, we rewrite [\[problem1\]](#problem1){reference-type="eqref" reference="problem1"} with $R$ given in [\[RRRR\]](#RRRR){reference-type="eqref" reference="RRRR"}), while Theorem [Theorem 5](#Thm2){reference-type="ref" reference="Thm2"} yields a continuous viscosity solution $\tilde{\phi}:[0,\infty)\times\bar{\Omega}\to{\mathbb R}$ of [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"}. We fix an arbitrary $T>0$ and set $\Theta_T:=\Theta_{0\le t\le T}$. We need to convert the Hamilton-Jacobi equation in [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} into $$\begin{aligned} \label{3HJ} \partial_tu+ G(t,x,\nabla u, u)=0\end{aligned}$$ with $G$ such that $u\mapsto G(t,x,p,u)$ is nondecreasing. This is done by the change of the functions as $$\begin{aligned} \label{3convert} w(t,x):=e^{-V_0t}\phi(t,x),\quad \tilde{w}(t,x):=e^{-V_0t}\tilde{\phi}(t,x)\end{aligned}$$ with the constant $V_0$ given in [\[bbb\]](#bbb){reference-type="eqref" reference="bbb"}. Since $\phi$ is a $C^2$-solution of the original Hamilton-Jacobi equation, it is clear that $w$ satisfies the new Hamilton-Jacobi equation $$\begin{aligned} \label{33HJ} \partial_t u(t,x)+ v(t,x)\cdot\nabla u(t,x)+u(t,x)\Big(V_0-R(t,x,e^{V_0t}\nabla u(t,x))\Big)=0.\end{aligned}$$ Note that $w$ satisfies [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"} also in the sense of viscosity solutions. In the case of $\tilde{\phi}$ being a viscosity solution, one can also show that $\tilde{w}:=e^{-V_0t}\tilde{\phi}$ is a viscosity solution of [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"}. 
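The first of these claims, that the classical solution $w$ satisfies [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"}, amounts to a one-line computation: by [\[3convert\]](#3convert){reference-type="eqref" reference="3convert"} and the equation satisfied by $\phi$ in $\Theta$, $$\begin{aligned} \frac{\partial w}{\partial t}+v\cdot\nabla w &=&e^{-V_0t}\Big(\frac{\partial\phi}{\partial t}+v\cdot\nabla\phi\Big)-V_0e^{-V_0t}\phi\\ &=&e^{-V_0t}\phi\,R(t,x,\nabla\phi)-V_0w =-w\Big(V_0-R(t,x,e^{V_0t}\nabla w)\Big),\end{aligned}$$ which is exactly [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"}. The corresponding statement for the viscosity solution $\tilde{w}$ requires testing with smooth functions.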
For the readers' convenience, we briefly explain how to do it: suppose that a test function $\psi$ is such that $\tilde{w}-\psi$ has a local maximum at $(t_0,x_0)$; then, setting the constant $r:=\tilde{w}(t_0,x_0)-\psi(t_0,x_0)$, we have $$\begin{aligned} &\tilde{w}(t,x)-\psi(t,x)\le \tilde{w}(t_0,x_0)-\psi(t_0,x_0)=r\mbox{\quad near $(t_0,x_0)$},\\ &\tilde{w}(t,x)-(\psi(t,x)+r)\le \tilde{w}(t_0,x_0)-(\psi(t_0,x_0)+r)=0\mbox{\quad near $(t_0,x_0)$},\end{aligned}$$ from which we obtain $$\begin{aligned} &e^{V_0t}\tilde{w}(t,x)-e^{V_0t}(\psi(t,x)+r)\le 0=\tilde{w}(t_0,x_0)-(\psi(t_0,x_0)+r)\mbox{\quad near $(t_0,x_0)$},\\ &e^{V_0t}\tilde{w}(t,x)-e^{V_0t}(\psi(t,x)+r)\le 0=e^{V_0t_0}\tilde{w}(t_0,x_0)-e^{V_0t_0}(\psi(t_0,x_0)+r)\mbox{\quad near $(t_0,x_0)$},\\ &\tilde{\phi}(t,x)-e^{V_0t}(\psi(t,x)+r)\le \tilde{\phi}(t_0,x_0)-e^{V_0t_0}(\psi(t_0,x_0)+r)\mbox{\quad near $(t_0,x_0)$};\end{aligned}$$ since $\tilde{\phi}$ is a viscosity subsolution, it holds that $$\begin{aligned} &\partial_t\{e^{V_0t}(\psi(t,x)+r)\}+v(t,x)\cdot\nabla\{e^{V_0t}(\psi(t,x)+r)\} \\ &\qquad - \tilde{\phi}(t,x)R\Big(t,x,\nabla\{e^{V_0t}(\psi(t,x)+r)\}\Big)|_{t=t_0,x=x_0}\\ & =e^{V_0t_0}V_0(\psi(t_0,x_0)+r) + e^{V_0t_0}\partial_t\psi(t_0,x_0)+e^{V_0t_0}v(t_0,x_0)\cdot \nabla\psi(t_0,x_0) \\ &\qquad - \tilde{\phi}(t_0,x_0)R\Big(t_0,x_0,e^{V_0t_0}\nabla\psi(t_0,x_0)\}\Big) \le 0;\end{aligned}$$ hence, noting that $\psi(t_0,x_0)+r=\tilde{w}(t_0,x_0)$ and $\tilde{\phi}(t_0,x_0)=e^{V_0t_0}\tilde{w}(t_0,x_0)$, we obtain $$\begin{aligned} \partial_t \psi(t_0,x_0)+v(t_0,x_0)\cdot\nabla \psi(t_0,x_0)+\tilde{w}(t_0,x_0)\Big(V_0 -R(t_0,x_0,e^{V_0t_0}\nabla\psi(t_0,x_0)) \Big)\le 0;\end{aligned}$$ therefore, we conclude that $\tilde{w}$ is a viscosity subsolution of [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"}; similar argument shows that $\tilde{w}$ is a viscosity supersolution of [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"}. We rewrite [\[33HJ\]](#33HJ){reference-type="eqref" reference="33HJ"} in the form of [\[3HJ\]](#3HJ){reference-type="eqref" reference="3HJ"} with $$G(t,x,p,u):=v(t,x)\cdot p+ u\tilde{R}(t,x,p),\quad \tilde{R}(t,x,p):= (V_0-R(t,x,e^{V_0t}p)).$$ Due to the condition of $v$ and the definition of $R$ (see [\[RRRR\]](#RRRR){reference-type="eqref" reference="RRRR"}), there exists a constant $C>0$ such that $$\begin{aligned} \label{3con} \begin{array}{lll} &|\tilde{R}(t,x,p)-\tilde{R}(t,x,q)|\le C|p-q| ,\quad \forall\,(t,x)\in\Theta_T,\,\,\, \forall\,p,q\in {\mathbb R}^3,\\ &|\tilde{R}(t,x,p)-\tilde{R}(s,y,p)|\le C(1+|p|)|(t,x)-(s,y)|,\,\,\, \forall\,(t,x)\in\Theta_T,\,\,\, \forall\,p\in {\mathbb R}^3. 
\end{array}\end{aligned}$$ For a technical reason, we consider another tubular neighborhood $\tilde{\Theta}\subset\Theta$ of $\{\Sigma(t)\}_{t\ge0}$ such that $\phi$ is defined on the closure of $\tilde{\Theta}$, and (partly) describe $\tilde{\Theta}_T:=\tilde{\Theta}|_{0\le t \le T}$ as a family of cylinders, i.e., a foliation: - Let $\alpha>0$ be a constant such that $$\begin{aligned} \label{3alpha} \alpha-2V_0-C\sup_{\tilde{\Theta}_T}|\nabla w|>0\mbox{\,\,\, with } w(t,x)=e^{-V_0t}\phi(t,x);\end{aligned}$$ - Consider the function $$\bar{u}(t,x):= e^{\alpha t}w(t,x) ,$$ where $\bar{u}$ solves in the classical sense $$\begin{aligned} \label{3pen} \partial_t u+v\cdot\nabla u+u\tilde{R}(t,x,e^{-\alpha t}\nabla u)= \alpha u ,\quad u(0,\cdot)=\phi^0;\end{aligned}$$ - With a constant $m_0>0$, define $$\begin{aligned} &A_m:=\{ (t,x)\in[0,T]\times\Omega \,|\,\bar{u}(t,x)=m\},\quad-m_0\le m\le m_0,\quad \\ &\Gamma_T:= \bigcup_{-m_0\le m\le m_0} A_m.\end{aligned}$$ Since $\nabla \bar{u}\neq0$ near $\cup_{0\le t\le T} (\{t\}\times\Sigma(t))$, we may choose $m_0>0$ (possibly very small) so that $\Gamma_T$ is contained in $\tilde{\Theta}_T$ and $\partial\Gamma_T\setminus \Gamma_T|_{t=0,T}=(A_{m_0}\cup A_{-m_0})|_{0<t<T}$, while $\Gamma_T$ contains the $[0,T]$-part of a tubular neighborhood of $\{\Sigma(t)\}_{0\le t\le T}$. Note that such an $m_0$ depends, in general, on $T$. **Theorem 6**. *It holds that $\phi\equiv \tilde{\phi}$ on $\Gamma_T$, where $T>0$ is arbitrary.* *$\Gamma_T$ would become narrower as $T$ gets larger. In order to find a more optimal region on which $\phi\equiv\tilde{\phi}$, one can iterate Theorem [Theorem 6](#Theorem3){reference-type="ref" reference="Theorem3"} for $T=T_0$, $T=2T_0$, $T=3T_0$, $\ldots$, with a small $T_0>0$.* **Proof of Theorem [Theorem 6](#Theorem3){reference-type="ref" reference="Theorem3"}.*.* We proceed by contradiction. Suppose that the assertion does not hold. We assume $$\mbox{$\displaystyle\max_{\Gamma_T}(\phi-\tilde{\phi})>0$ (in the other case, we switch $\phi$ and $\tilde{\phi}$).}$$ From now on, we deal with $w,\tilde{w}$ given as [\[3convert\]](#3convert){reference-type="eqref" reference="3convert"}. Then, we find an interior point $(t^\ast,x^\ast)$ of $\Gamma_T$ such that $$\sigma:=w(t^\ast,x^\ast)-\tilde{w}(t^\ast,x^\ast)>0.$$ Since the level-set of $w$ and that of $\tilde{w}$ are identical, we see that $(t^\ast,x^\ast)\not\in A_0=\cup_{0\le t\le T}(\{t\}\times\Sigma(t))$. Let $m^\ast\in(-m_0,m_0)$ be such that $$\bar{u}(t^\ast,x^\ast)=m^\ast,\,\,\,\mbox{or equivalently, }(t^\ast,x^\ast)\in A_{m^\ast}.$$ Let $\delta>0$ be a constant such that $$%0<m^\ast-2\delta,\quad (m^\ast)^2+2\delta<(m_0)^2.$$ Take a constant $M>0$ such that $$M\ge\max_{(t,x,s,y)\in \Gamma_T\times\Gamma_T} |w(t,x)-\tilde{w}(s,y)|$$ and a monotone increasing $C^1$-function $h: {\mathbb R}\to [0,3M]$ such that $$\begin{aligned} h(r) =\left\{ \begin{array}{cll} &3M, &\mbox{ if $(m^\ast)^2 + 2\delta \le r$},\\ &0,&\mbox{ if $ r\le (m^\ast)^2+\delta$},\\ &\mbox{monotone transition between $0$ and $3M$},& \mbox{ otherwise}. 
\end{array} \right.\end{aligned}$$ For each $\varepsilon >0$, $\lambda>0$ (they are independent of each other; $\lambda>0$ will be appropriately fixed at some point, while $\varepsilon$ will be sent to $0$ at the end), define the function $F_{\varepsilon ,\lambda}:\Gamma_T\times\Gamma_T\to{\mathbb R}$ as $$\begin{aligned} F_{\varepsilon ,\lambda}(t,x,s,y):=&w(t,x)-\tilde{w}(s,y)-\lambda(t+s)-\frac{1}{\varepsilon ^2}\Big(|x-y|^2+|t-s|^2\Big)\\ &- h(\bar{u}(t,x)^2)-h(\bar{u}(s,y)^2).\end{aligned}$$ Let $(t_0,x_0,s_0,y_0)=(t_0(\varepsilon ,\lambda),x_0(\varepsilon ,\lambda),s_0(\varepsilon ,\lambda),y_0(\varepsilon ,\lambda))\in \Gamma_T\times \Gamma_T$ be such that $$\begin{aligned} %\label{300} F_{\varepsilon ,\lambda}(t_0,x_0,s_0,y_0)=\max_{ \Gamma_T\times \Gamma_T}F_{\varepsilon ,\lambda}.\end{aligned}$$ We first claim that there exists a sufficiently small $\lambda>0$ for which we have $$\begin{aligned} \label{322-1} & |x_0-y_0|=O(\varepsilon ),\,\, |t_0-s_0|=O(\varepsilon )\mbox{ \quad as $\varepsilon \to0+$},\\\label{322-12} & \mbox{$t_0=0$ or $s_0=0$ or $(t_0,x_0,s_0,y_0)\in\,$[interior of $\Gamma_T\times \Gamma_T$],\,\,\,\, for all $0<\varepsilon \ll1$}.\end{aligned}$$ In fact, observe that $$\begin{aligned} F_{\varepsilon ,\lambda}(t_0,x_0,s_0,y_0)\ge F_{\varepsilon ,\lambda}(t^\ast,x^\ast,t^\ast,x^\ast),\end{aligned}$$ that is, $$\begin{aligned} &w(t_0,x_0)-\tilde{w}(s_0,y_0)-\lambda(t_0+s_0) -\frac{1}{\varepsilon ^2}\Big(|x_0-y_0|^2+|t_0-s_0|^2\Big)\\ &\qquad- h(\bar{u}(t_0,x_0)^2)-h(\bar{u}(s_0,y_0)^2) \ge w(t^\ast,x^\ast)-\tilde{w}(t^\ast,x^\ast)-2\lambda t^\ast-2h(\bar{u}(t^\ast,x^\ast)^2).\end{aligned}$$ Hence, with the definition of $h$, we may fix $\lambda>0$ small enough to obtain for all sufficiently small $\varepsilon >0$, $$\begin{aligned} &\frac{1}{\varepsilon ^2}\Big(|x_0-y_0|^2+|t_0-s_0|^2\Big) + h(\bar{u}(t_0,x_0)^2)+h(\bar{u}(s_0,y_0)^2)\\ &\quad \le w(t_0,x_0)-\tilde{w}(s_0,y_0)-(w(t^\ast,x^\ast)-\tilde{w}(t^\ast,x^\ast)) +2\lambda t^\ast+2h((m^\ast)^2)\\ &\quad =w(t_0,x_0)-\tilde{w}(s_0,y_0)-(w(t^\ast,x^\ast)-\tilde{w}(t^\ast,x^\ast)) +2\lambda t^\ast < 3M,\end{aligned}$$ and $$\begin{aligned} \label{831} F_{\varepsilon ,\lambda}(t_0,x_0,s_0,y_0) &\ge w(t^\ast,x^\ast)-\tilde{w}(t^\ast,x^\ast)-2\lambda t^\ast-2h(\bar{u}(t^\ast,x^\ast)^2)\\\nonumber &=\sigma -2\lambda t^\ast \ge \frac{\sigma}{2}>0.\end{aligned}$$ Since $h$ is nonnegative, [\[322-1\]](#322-1){reference-type="eqref" reference="322-1"} is clear. Suppose that $(t_0,x_0)$ is on the "lateral surface" of $\Gamma_T$, i.e., $(t_0,x_0)\in A_{\pm m_0}$, or equivalently, $|\bar{u}(t_0,x_0)|=m_0$. Then, with the definition of $h$, we see that $$\begin{aligned} F_{\varepsilon ,\lambda}(t_0,x_0,s_0,y_0) &=w(t_0,x_0)-\tilde{w}(s_0,y_0)-\lambda(t_0+s_0) -\frac{1}{\varepsilon ^2}\Big(|x_0-y_0|^2+|t_0-s_0|^2\Big)\\ &\quad - h(m_0^2)-h(\bar{u}(s_0,y_0)^2) \le M-h(m_0^2)=-2M <0,\end{aligned}$$ which contradicts to [\[831\]](#831){reference-type="eqref" reference="831"}; the same to $(s_0,y_0)$. Thus, [\[322-12\]](#322-12){reference-type="eqref" reference="322-12"} is confirmed. Next, we sharpen the estimates [\[322-1\]](#322-1){reference-type="eqref" reference="322-1"} to be of small order. 
Since $F_{\varepsilon ,\lambda}(t_0,x_0,s_0,y_0)\ge F_{\varepsilon ,\lambda}(t_0,x_0,t_0,x_0)$, we have $$\begin{aligned} &w(t_0,x_0)-\tilde{w}(s_0,y_0)-\lambda(t_0+s_0) -\frac{1}{\varepsilon ^2}\Big(|x_0-y_0|^2+|t_0-s_0|^2\Big)\\ &\quad - h(\bar{u}(t_0,x_0)^2)-h(\bar{u}(s_0,y_0)) \ge w(t_0,x_0)-\tilde{w}(t_0,x_0)-2\lambda t_0 - 2h(\bar{u}(t_0,x_0)^2).\end{aligned}$$ Hence, by [\[322-1\]](#322-1){reference-type="eqref" reference="322-1"} and the continuity of $\tilde{w}$ and $h$, we obtain $$\begin{aligned} &\frac{1}{\varepsilon ^2}\Big(|x_0-y_0|^2+|t_0-s_0|^2\Big) \le \tilde{w}(t_0,x_0)-\tilde{w}(s_0,y_0)+\lambda (t_0-s_0) \\ &\qquad +h(\bar{u}(t_0,x_0)^2)-h(\bar{u}(s_0,y_0)^2) \to0\quad \mbox{as $\varepsilon \to0+$},\end{aligned}$$ which implies $$\begin{aligned} \label{323-1} |x_0-y_0|=o(\varepsilon ),\,\,\, |t_0-s_0|=o(\varepsilon )\mbox{ \quad as $\varepsilon \to0+$}.\end{aligned}$$ We claim that the case $t_0=0$ or $s_0=0$ is impossible in [\[322-12\]](#322-12){reference-type="eqref" reference="322-12"} for all sufficiently small $\varepsilon >0$. In fact, suppose that $t_0=0$. By [\[831\]](#831){reference-type="eqref" reference="831"}, [\[323-1\]](#323-1){reference-type="eqref" reference="323-1"} and $w(0,\cdot)\equiv\tilde{w}(0,\cdot)$ on $\Gamma_T|_{t=0}$, we see that $$\begin{aligned} \frac{\sigma}{2}&\le w(t_0,x_0)-\tilde{w}(s_0,y_0)= \tilde{w}(t_0,x_0)-\tilde{w}(s_0,y_0)\le \tilde{\omega}(|t_0-s_0|+|x_0-y_0|)=\tilde{\omega}(o(\varepsilon )),\end{aligned}$$ where $\tilde{\omega}(\cdot)$ stands for the modulus of continuity of $\tilde{w}$. If $\varepsilon >0$ is sufficiently small, we face a contradiction. The same to $s_0$. We confirmed that the point $(t_0,x_0)$ is an interior point of $\Gamma_T$, or $t_0=T$ and $x_0$ is an interior point of $\Gamma_T|_{t=T}$; $( s_0,y_0)$ is an interior point of $\Gamma_T$, or $s_0=T$ and $y_0$ is an interior point of $\Gamma_T|_{t=T}$, which enables us to test $w$ and $\tilde{w}$ at $(t_0,x_0)$, $( s_0,y_0)$ in terms of viscosity sub-/supersolutions (see Lemma in Section 10.2 of [@Evans-book] for a remark on the case $t_0=T$ or $s_0=T$). 
Observe that the mapping $(t,x)\mapsto F_{\varepsilon ,\lambda}(t,x,s_0,y_0)$ takes the maximum at the point $(t_0,x_0)$; then, introducing the $C^1$-function $\psi$ as $$\begin{aligned} \psi(t,x)&:= \tilde{w}(s_0,y_0)+\lambda(t+s_0) +\frac{1}{\varepsilon ^2}\Big(|x-y_0|^2+|t-s_0|^2\Big) +h(\bar{u}(t,x)^2) +h(\bar{u}(s_0,y_0)^2),\end{aligned}$$ we see that the definition of $F_{\varepsilon ,\lambda}$ implies that $$w-\psi\mbox{ takes a maximum at $(t_0,x_0)$}.$$ Since $w$ satisfies the equation in the sense of viscosity subsolutions, we have $$\partial_t \psi(t_0,x_0)+G(t_0,x_0,\nabla_x \psi(t_0,x_0),w(t_0,x_0))\le 0.$$ Therefore, we obtain $$\begin{aligned} \label{3sub} &\lambda +\frac{2}{\varepsilon ^2}(t_0-s_0) +2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\partial_t \bar{u}(t_0,x_0)\\\nonumber & +G\Big(t_0,x_0, \frac{2}{\varepsilon ^2}(x_0-y_0) +2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\nabla \bar{u}(t_0,x_0),w(t_0,x_0) \Big)\le 0.\end{aligned}$$ Similarly, the mapping $(s,y)\mapsto -F_{\varepsilon ,\lambda}(t_0,x_0,s,y)$ takes a minimum at the point $(s_0,y_0)$; then, introducing the $C^1$-function $\tilde{\psi}$ as $$\begin{aligned} \tilde{\psi}(s,y)&:= w(t_0,x_0)-\lambda(t_0+s) -\frac{1}{\varepsilon ^2}\Big(|x_0-y|^2+|t_0-s|^2\Big) -h(\bar{u}(t_0,x_0)^2)-h(\bar{u}(s,y)^2),\end{aligned}$$ we see that $$\tilde{w}-\tilde{\psi}\mbox{ takes a minimum at $(s_0,y_0)$}.$$ Since $\tilde{w}$ satisfies the equation in the sense of viscosity supersolutions, we have $$\partial_s \tilde{\psi}(s_0,y_0)+G(s_0,y_0,\nabla_y \tilde{\psi}(s_0,y_0),\tilde{w}(s_0,y_0))\ge 0.$$ Consequently, we obtain $$\begin{aligned} \label{3super} &-\lambda +\frac{2}{\varepsilon ^2}(t_0-s_0) -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\partial_tu(s_0,y_0)\\\nonumber & +G\Big(s_0,y_0, \frac{2}{\varepsilon ^2}(x_0-y_0) -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\nabla \bar{u}(s_0,y_0),\tilde{w}(s_0,y_0) \Big)\ge 0.\end{aligned}$$ It follows from [\[831\]](#831){reference-type="eqref" reference="831"} that $$\begin{aligned} w(t_0,x_0)-\tilde{w}(s_0,x_0)> 0.\end{aligned}$$ Since $G(t,x,p,u)$ is nondecreasing with respect to $u$, [\[3super\]](#3super){reference-type="eqref" reference="3super"} yields $$\begin{aligned} \label{3super2} &-\lambda +\frac{2}{\varepsilon ^2}(t_0-s_0) -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\partial_tu(s_0,y_0)\\\nonumber & +G\Big(s_0,y_0, \frac{2}{\varepsilon ^2}(x_0-y_0) -h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\nabla u(s_0,y_0),w(t_0,x_0) \Big)\ge 0.\end{aligned}$$ By [\[3sub\]](#3sub){reference-type="eqref" reference="3sub"} and [\[3super2\]](#3super2){reference-type="eqref" reference="3super2"}, we obtain $$\begin{aligned} 2\lambda &\le -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\Big(\partial_t\bar{u}(s_0,y_0) +v(s_0,y_0)\cdot \nabla \bar{u}(s_0,y_0)\Big)\\ &\quad -2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\Big(\partial_t\bar{u}(t_0,x_0) +v(t_0,x_0)\cdot \nabla \bar{u}(t_0,x_0)\Big)\\ &\quad + 2(v(s_0,y_0)-v(t_0,x_0))\cdot\frac{x_0-y_0}{\varepsilon ^2}\\ &\quad + w(t_0,x_0)\tilde{R}\Big(s_0,y_0, \frac{2}{\varepsilon ^2}(x_0-y_0) -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\nabla \bar{u}(s_0,y_0) \Big)\\ &\quad -w(t_0,x_0)\tilde{R}\Big(t_0,x_0, \frac{2}{\varepsilon ^2}(x_0-y_0)+2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\nabla \bar{u}(t_0,x_0) \Big).\end{aligned}$$ Since $\bar{u}$ solves [\[3pen\]](#3pen){reference-type="eqref" reference="3pen"} and $|\tilde{R}|\le 2V_0$, we obtain with [\[3con\]](#3con){reference-type="eqref" reference="3con"}, $h'\ge0$ and [\[323-1\]](#323-1){reference-type="eqref" reference="323-1"}, 
$$\begin{aligned} &2\lambda \le -2h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0) \cdot \bar{u}(s_0,y_0)\Big(\alpha-\tilde{R}(s_0,y_0,e^{-\alpha s_0}\nabla \bar{u}(s_0,y_0))\Big)\\ &\quad -2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0) \cdot \bar{u}(t_0,x_0)\Big(\alpha-\tilde{R}(t_0,x_0,e^{-\alpha t_0}\nabla \bar{u}(t_0,x_0))\Big)\\ &\quad + 2\mbox{[Lipschitz constant of $v$ within $\Theta_T$]} |(t_0,x_0)-(s_0,y_0)| \frac{|x_0-y_0|}{\varepsilon ^2}\\ &\quad +2C|w(t_0,x_0)|| h'(\bar{u}(s_0,y_0)^2)\bar{u}(s_0,y_0)\nabla \bar{u}(s_0,y_0)+h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\nabla \bar{u}(t_0,x_0)| \\ &\quad + C|w(t_0,x_0)|\Big(1+ \Big| \frac{2}{\varepsilon ^2}(x_0-y_0) +2h'(\bar{u}(t_0,x_0)^2)\bar{u}(t_0,x_0)\nabla \bar{u}(t_0,x_0) \Big| \Big)|(t_0,x_0)-(s_0,x_0)|\\ &\le -2e^{2\alpha t_0}h'(\bar{u}(s_0,y_0)^2)w(t_0,x_0) ^2\Big(\alpha-2V_0\Big)+o(\varepsilon ) \\ &\quad -2e^{2\alpha t_0}h'(\bar{u}(t_0,x_0)^2)w(t_0,x_0)^2\Big(\alpha-2V_0\Big)\\ &\quad +2e^{2\alpha t_0}C h'(\bar{u}(s_0,y_0)^2)w(t_0,x_0)^2|\nabla w(s_0,y_0)|+o(\varepsilon )\\ &\quad+2e^{2\alpha t_0}Ch'(\bar{u}(t_0,x_0)^2)w(t_0,x_0)^2|\nabla w(t_0,x_0)|+\frac{o(\varepsilon )^2}{\varepsilon ^2}\\ &= -2e^{2\alpha t_0}h'(\bar{u}(s_0,y_0)^2)w(t_0,x_0)^2\Big( \alpha-2V_0 - C |\nabla w(s_0,y_0)| \Big)\\ &\quad -2e^{2\alpha t_0}h'(\bar{u}(t_0,x_0)^2)w(t_0,x_0)^2\Big( \alpha-2V_0 - C |\nabla w(t_0,x_0)| \Big)+\frac{o(\varepsilon )^2}{\varepsilon ^2}.\end{aligned}$$ The right-hand side of this inequality is bounded from above by $\frac{o(\varepsilon )^2}{\varepsilon ^2}$ due to the choice of $\alpha$ in [\[3alpha\]](#3alpha){reference-type="eqref" reference="3alpha"}. Thus, we reach a contradiction by sending $\varepsilon \to0+$. ◻ We remark that the result and reasoning of this subsection hold also for the problems [\[problem2-2\]](#problem2-2){reference-type="eqref" reference="problem2-2"} and [\[HJ\]](#HJ){reference-type="eqref" reference="HJ"} with $R$ given in [\[3simple\]](#3simple){reference-type="eqref" reference="3simple"} and $C^2$-initial data $\phi^0$ such that $\beta-\alpha\le |\nabla \phi^0|\le \beta+\alpha$. The authors thank Prof. Qing Liu in Okinawa Institute of Science and Technology for his pointing out Theorem 3.12 of [@B-CD]. Dieter Bothe and Mathis Fricke are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 265191195 -- SFB 1194. This work was done mostly during Kohei Soga's one-year research stay at the department of mathematics, Technische Universität Darmstadt, Germany, with the grant Fukuzawa Fund (Keio Gijuku Fukuzawa Memorial Fund for the Advancement of Education and Research). Kohei Soga is also supported by JSPS Grant-in-aid for Young Scientists \#18K13443 and JSPS Grants-in-Aid for Scientific Research (C) \#22K03391. Data sharing not applicable to this article as no datasets were generated or analysed during the current study. The authors state that there is no conflict of interest. # We explain the method of characteristics for Hamilton-Jacobi equations. 
Consider a function $H=H(t,x,p,\Phi)$ such that $$\begin{aligned} &H:[0,\infty)\times{\mathbb R}^N\times{\mathbb R}^N\times{\mathbb R}\to{\mathbb R}\mbox{ is $C^1$ in $(t,x,p,\Phi)$};\\ %&\mbox{$H$ is $C^1$-smooth in $(t,x,p,\Phi)$};\\ &\mbox{$H$ is twice partial differentiable in $(x,p,\Phi)$;}\\ &\mbox{ the partial derivatives of $H$ are continuous in $(t,x,p,\Phi)$}.\end{aligned}$$ Note that the upcoming argument is available also for a function $H$ defined on $[0,\infty)\times K$ with an open subset $K\subset {\mathbb R}^{2N+1}$. Let $w:{\mathbb R}^n\to{\mathbb R}$ be a given $C^2$-function. We will construct a unique $C^2$-solution $u$ of $$\begin{aligned} \label{gHJ} \frac{\partial u}{\partial t}+H(t,x,\nabla u,u)=0 \mbox{ in $O$},\quad u(0,x)=w(x)\mbox{ on $O|_{t=0}$},\end{aligned}$$ where $O\subset{\mathbb R}\times{\mathbb R}^N$ is a neighborhood of a point $(0,\xi)\in{\mathbb R}\times{\mathbb R}^N$. We remark that $\xi$ can be arbitrary, but the size of $O$ depends on $\xi$; there exists $\varepsilon >0$ such that $[-\varepsilon ,\varepsilon ]\times B_\varepsilon (\xi)\subset O$ (here, $B_\varepsilon (\xi)\subset{\mathbb R}^N$ is the $\varepsilon$-ball with the center $\xi$) and [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} is solvable for negative time; $O$ is determined by the inverse map theorem around the point $(0,\xi)$; if one wants to solve [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} in $O$ larger than an open set coming from the inverse map theorem around a single point, one needs more arguments, in addition to the upcoming discussion, to confirm injectivity. We consider autonomization by introducing $$\tilde{H}(t,x,r,p,\Phi):=r+H(t,x,p,\Phi)$$ with the extended configuration variable $(t,x)$. Then, we see that [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} can be seen as $$\tilde{H}(t,x,\mbox{$\frac{\partial u}{\partial t}$},\nabla u,u)=0.$$ The characteristic ODE-system of [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} is given as the autonomous (contact) Hamiltonian system generated by $\tilde{H}(t,x,r,p,\Phi)$, i.e., setting $Q:=(t,x)$, $P=(r,p)$, $\tilde{H}=\tilde{H}(Q,P,\Phi)$, $$\begin{aligned} &&Q'(s)=\frac{\partial\tilde{H}}{\partial P}(Q(s),P(s),\Phi(s)), \\ &&P'(s)=-\frac{\partial\tilde{H}}{\partial Q}(Q(s),P(s),\Phi(s))-\frac{\partial\tilde{H}}{\partial\Phi}(Q(s),P(s),\Phi(s))P(s),\\ &&\Phi'(s)=\frac{\partial\tilde{H}}{\partial P}(Q(s),P(s),\Phi(s))\cdot P(s)-\tilde{H}(Q(s),P(s),\Phi(s)),\\ &&Q(0)=(0,\xi),\,\,\,\,P(0)=(-H(0,\xi,\nabla w(\xi),w(\xi)), \nabla w(\xi)),\,\,\,\,\Phi(0)=w(\xi),\quad \xi\in{\mathbb R}^N,\end{aligned}$$ where the solution is written as $Q(s;\xi),P(s;\xi),\Phi(s;\xi)$. 
This system is equivalent to $$\begin{aligned}
\label{CH1}
&&x'(s)=\frac{\partial H}{\partial p}(t(s),x(s),p(s),\Phi(s)),\\\nonumber
&&x(0)=\xi,\\\label{CH2}
&&p'(s)=-\frac{\partial H}{\partial x}(t(s),x(s),p(s),\Phi(s))-\frac{\partial H}{\partial\Phi}(t(s),x(s),p(s),\Phi(s))p(s),\\\nonumber
&& p(0)=\nabla w(\xi),\\\label{CH3}
&&\Phi'(s)=\frac{\partial H}{\partial p}(t(s),x(s),p(s),\Phi(s))\cdot p(s)-H(t(s),x(s),p(s),\Phi(s)),\\\nonumber
&&\Phi(0)=w(\xi),\\\label{CH4}
&&t'(s)=1,\\\nonumber
&& t(0)=0,\\\label{CH5}
&&r'(s)=-\frac{\partial H}{\partial t} (t(s),x(s),p(s),\Phi(s))-\frac{\partial H}{\partial\Phi}(t(s),x(s),p(s),\Phi(s))r(s),\\\nonumber
&&r(0)=-H(0,\xi,\nabla w(\xi), w(\xi)).\end{aligned}$$ Observe that $$\begin{aligned}
\label{A10}
t(s;\xi)&\equiv& s,\quad \forall\,\xi,\,\, \forall\,s,\\\label{A11}
\tilde{H}(Q(0;\xi),P(0;\xi),\Phi(0;\xi))&=&0,\quad \forall\,\xi,
\\\label{A12}
\frac{\partial}{\partial s}\{\tilde{H}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\} &=&\frac{\partial\tilde{H}}{\partial Q}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\cdot Q'(s;\xi)\\\nonumber
&&+\frac{\partial\tilde{H}}{\partial P}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\cdot P'(s;\xi)+\frac{\partial\tilde{H}}{\partial\Phi}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\Phi'(s;\xi) \\\nonumber
&=&-\tilde{H}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\frac{\partial\tilde{H}}{\partial\Phi}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)).\end{aligned}$$ [\[A10\]](#A10){reference-type="eqref" reference="A10"} implies that $t(s)=s$ serves as the non-autonomous factor in [\[CH1\]](#CH1){reference-type="eqref" reference="CH1"}, [\[CH2\]](#CH2){reference-type="eqref" reference="CH2"}, [\[CH3\]](#CH3){reference-type="eqref" reference="CH3"}, [\[CH5\]](#CH5){reference-type="eqref" reference="CH5"}; hence, our regularity assumption on $H$ is sufficient to provide smooth dependence on the initial data (note that this is not immediately clear if we only look at the autonomous system with $\tilde{H}(Q,P,\Phi)$, as $C^2$-regularity in $Q$ is missing). [\[A11\]](#A11){reference-type="eqref" reference="A11"} and [\[A12\]](#A12){reference-type="eqref" reference="A12"} imply that $$\begin{aligned}
\label{A13}
&\tilde{H}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\equiv 0 \mbox{ as long as $Q(s;\xi),P(s;\xi),\Phi(s;\xi)$ exist}.\end{aligned}$$ The map $F(s,\xi):=Q(s;\xi)=(s,x(s;\xi))$ is $C^1$-smooth as long as the characteristic ODEs have solutions. Furthermore, it holds that $$\begin{aligned}
\label{inverse}
F(0,\xi)=(0,\xi),\quad \det D_{(s,\xi)}F(0,\xi)=1,\end{aligned}$$ where $D_{(s,\xi)}F$ is the Jacobian matrix of $F$. Therefore, for each $\xi\in{\mathbb R}^N$, the inverse map theorem guarantees that there exist two sets, a neighborhood $O_1$ of $(0,\xi)$ and a neighborhood $O_2$ of $F(0,\xi)=(0,\xi)$, such that $F:O_1\to O_2$ is a $C^1$-diffeomorphism. We obtain the inverse map of $F$ as $$F^{-1}:O_2\to O_1,\quad F^{-1}(t,x)=(t,\varphi(t,x)),$$ where $\xi=\varphi(t,x)$ is the point for which $x(s;\xi)$ passes through $x$ at $s=t$.
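The identities [\[A10\]](#A10){reference-type="eqref" reference="A10"} and [\[A13\]](#A13){reference-type="eqref" reference="A13"} lend themselves to a quick numerical sanity check. The following sketch is not part of the argument: it integrates the characteristic system above for the illustrative choices $H(t,x,p,\Phi)=\frac12 p^2+\Phi$ in one space dimension and $w(x)=\sin x$; these choices, as well as all function and variable names, are assumptions made only for this example.

```python
# Numerical sanity check of the characteristic system (a sketch, not part of the proof).
# Illustrative assumptions: one space dimension, H(t, x, p, Phi) = p**2 / 2 + Phi,
# and the initial datum w(x) = sin(x).
import numpy as np
from scipy.integrate import solve_ivp

def w(xi):
    return np.sin(xi)

def dw(xi):
    return np.cos(xi)

def H(t, x, p, Phi):
    return 0.5 * p ** 2 + Phi

def rhs(s, y):
    # y = (x, p, Phi, r); the variable t equals s and need not be integrated separately.
    x, p, Phi, r = y
    H_x, H_p, H_Phi, H_t = 0.0, p, 1.0, 0.0      # partial derivatives of the chosen H
    dx = H_p                                     # x'   =  H_p
    dp = -H_x - H_Phi * p                        # p'   = -H_x - H_Phi p
    dPhi = H_p * p - H(s, x, p, Phi)             # Phi' =  H_p . p - H
    dr = -H_t - H_Phi * r                        # r'   = -H_t - H_Phi r
    return [dx, dp, dPhi, dr]

for xi in [-1.0, 0.0, 0.7]:
    y0 = [xi, dw(xi), w(xi), -H(0.0, xi, dw(xi), w(xi))]
    sol = solve_ivp(rhs, (0.0, 0.5), y0, t_eval=np.linspace(0.0, 0.5, 11),
                    rtol=1e-9, atol=1e-12)
    x, p, Phi, r = sol.y
    # \tilde H = r + H should vanish along each characteristic, cf. (A13).
    print(f"xi = {xi:5.2f},  max |r + H| = {np.max(np.abs(r + H(sol.t, x, p, Phi))):.2e}")
```

Along each characteristic the printed value of $|r+H|$ stays at the level of the integration tolerances, in agreement with [\[A13\]](#A13){reference-type="eqref" reference="A13"}.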
The essential point is to obtain open sets $O_1$ and $O_2=F(O_1)$ such that $F:O_1\to O_2$ is a $C^1$-diffeomorphism, no matter how we find them. The easiest way is to use the inverse map theorem around a single point $(0,\xi)$ based on [\[inverse\]](#inverse){reference-type="eqref" reference="inverse"}. The invertibility of $F$, or the invertibility of $x(s;\xi)$ for fixed $s$, on a wider region requires injectivity, which will be an additional issue to be verified. We proceed, assuming that $O_1\subset{\mathbb R}\times{\mathbb R}^N$ is a neighborhood of a subset of $\{0\}\times{\mathbb R}^N$ and $F:O_1\to O_2=F(O_1)$ is a $C^1$-diffeomorphism. By [\[A13\]](#A13){reference-type="eqref" reference="A13"}, we have $$\frac{\partial\Phi}{\partial s}(s;\xi)=\frac{\partial\tilde{H}}{\partial P}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))\cdot P(s;\xi)=P(s;\xi)\cdot \frac{\partial Q}{\partial s}(s;\xi).$$ It holds that $$\frac{\partial\Phi}{\partial\xi_i}(s;\xi)=P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi),\quad i=1,2,\ldots N.$$ In fact, we have $$\begin{aligned} &&\frac{\partial\Phi}{\partial\xi_i}(0;\xi)-P(0;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(0;\xi)=\frac{\partial w}{\partial\xi_i}(\xi)-\nabla w(\xi) \cdot \frac{\partial\xi}{\partial\xi_i}=0,\\ &&\frac{\partial}{\partial s}\Big( \frac{\partial\Phi}{\partial\xi_i}(s;\xi)-P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi) \Big) = \frac{\partial}{\partial s}\frac{\partial\Phi}{\partial\xi_i}(s;\xi)- \frac{\partial}{\partial s}\Big(P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\Big)\\ &&\quad =\frac{\partial}{\partial\xi_i}\frac{\partial\Phi}{\partial s}(s;\xi)- \frac{\partial}{\partial s}\Big(P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\Big)\\ &&\quad =\frac{\partial}{\partial\xi_i}\Big(P(s;\xi)\cdot \frac{\partial Q}{\partial s}(s;\xi)\Big)- \frac{\partial}{\partial s}\Big(P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\Big)\\ &&\quad =\frac{\partial Q}{\partial s}(s;\xi)\cdot \frac{\partial P}{\partial\xi_i}(s;\xi)- \frac{\partial P}{\partial s}(s;\xi)\frac{\partial Q}{\partial\xi_i}(s;\xi)\\ &&\quad=\frac{\partial\tilde{H}}{\partial P}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)) \cdot \frac{\partial P}{\partial\xi_i}(s;\xi)+\frac{\partial\tilde{H}}{\partial Q}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)) \cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\\ &&\qquad +\frac{\partial\tilde{H}}{\partial\Phi}(Q(s;\xi),P(s;\xi),\Phi(s;\xi))P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\\ &&\quad = \frac{\partial}{\partial\xi_i}\tilde{H}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)) \\ &&\qquad -\frac{\partial\tilde{H}}{\partial\Phi}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)) \Big( \frac{\partial\Phi}{\partial\xi_i}(s;\xi)-P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi) \Big)\\ &&\quad = -\frac{\partial\tilde{H}}{\partial\Phi}(Q(s;\xi),P(s;\xi),\Phi(s;\xi)) \Big( \frac{\partial\Phi}{\partial\xi_i}(s;\xi)-P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi) \Big),\\ && \frac{\partial\Phi}{\partial\xi_i}(s;\xi)-P(s;\xi)\cdot \frac{\partial Q}{\partial\xi_i}(s;\xi)\equiv 0,\quad \forall\,\xi,\,\, \forall\,s.\end{aligned}$$ Hence, we see that $$D_{(s;\xi)}\Phi(s;\xi)= P(s;\xi) D_{(s;\xi)}Q(s;\xi)= P(s;\xi) D_{(s;\xi)} F(s,\xi).$$ Now we define $$u(t,x):=\Phi(F^{-1}(t,x))=\Phi(t;\varphi(t,x)):O_2\to {\mathbb R},$$ which is apparently $C^1$-smooth. 
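Before the verification below that $u$ indeed solves [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"}, here is a minimal numerical illustration of the construction $u=\Phi\circ F^{-1}$. Purely for the sake of the example it assumes the Hamiltonian $H(t,x,p,\Phi)=\frac12 p^2$ in one space dimension, for which the characteristics are explicit, and the initial datum $w(x)=\sin x$; the grids, the helper names, and the interpolation-based inversion of $F$ are ours.

```python
# A sketch of the construction u = Phi o F^{-1} (illustration only).
# Assumptions for this example: one space dimension, H(t, x, p, Phi) = p**2 / 2,
# initial datum w(x) = sin(x), and times t < 1 so that no characteristics cross.
import numpy as np

w, dw = np.sin, np.cos

def char_x(t, xi):      # x(t; xi) = xi + t w'(xi)              (x' = H_p = p, p constant)
    return xi + t * dw(xi)

def char_Phi(t, xi):    # Phi(t; xi) = w(xi) + t w'(xi)**2 / 2  (Phi' = H_p . p - H)
    return w(xi) + 0.5 * t * dw(xi) ** 2

def u(t, x):
    # Invert xi -> x(t; xi) by monotone interpolation; this realizes phi(t, .) numerically.
    xi_grid = np.linspace(-2 * np.pi, 2 * np.pi, 20001)
    return np.interp(x, char_x(t, xi_grid), char_Phi(t, xi_grid))

x = np.linspace(-3.0, 3.0, 301)
print("max |u(0,x) - w(x)|    =", np.max(np.abs(u(0.0, x) - w(x))))

# PDE residual u_t + u_x**2 / 2 at t = 0.25, by centred finite differences.
t, dt, dx = 0.25, 1e-3, 1e-3
u_t = (u(t + dt, x) - u(t - dt, x)) / (2 * dt)
u_x = (u(t, x + dx) - u(t, x - dx)) / (2 * dx)
print("max |u_t + u_x**2 / 2| =", np.max(np.abs(u_t + 0.5 * u_x ** 2)))
```

For $t<1$ the map $\xi\mapsto x(t;\xi)$ is strictly increasing, so the interpolation realizes $\varphi(t,\cdot)$ on the range covered by the $\xi$-grid; both printed quantities are small, limited only by the grid spacing and the difference steps.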
Observe that $$\begin{aligned}
(\partial_t u(t,x),\nabla u(t,x))&=&D_{(t,x)}u(t,x)=D_{(s;\xi)}\Phi(F^{-1}(t,x))D_{(t,x)}F^{-1}(t,x)\\
&=&P(F^{-1}(t,x)) D_{(s;\xi)} F(F^{-1}(t,x))D_{(t,x)}F^{-1}(t,x)\\
&=&P(F^{-1}(t,x)) =P(t;\varphi(t,x)).\end{aligned}$$ Therefore, with [\[A10\]](#A10){reference-type="eqref" reference="A10"} and [\[A13\]](#A13){reference-type="eqref" reference="A13"}, we see that $u$ is in fact $C^2$-smooth and satisfies $$\begin{aligned}
0&=&\tilde{H}(Q(s;\xi), P(s;\xi),\Phi(s;\xi))|_{(s,\xi)=F^{-1}(t,x)=(t,\varphi(t,x))}\\
&=&\tilde{H}(t,x,\partial_t u(t,x),\nabla u(t,x),u(t,x))\\
&=&\partial_t u(t,x)+H(t,x,\nabla u(t,x),u(t,x)),\,\,\,\forall\,(t,x)\in O_2,\\
u(0,x)&=&\Phi(0;x)=w(x),\,\,\,\forall\,x\in O_2|_{t=0}.\end{aligned}$$ The uniqueness of [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} follows from the uniqueness of solutions of the characteristic ODEs. In conclusion, *all one needs to do to solve [\[gHJ\]](#gHJ){reference-type="eqref" reference="gHJ"} by the method of characteristics is the following:*

- Analyze [\[CH1\]](#CH1){reference-type="eqref" reference="CH1"}, [\[CH2\]](#CH2){reference-type="eqref" reference="CH2"} and [\[CH3\]](#CH3){reference-type="eqref" reference="CH3"} with $t(s)=s$, where [\[CH4\]](#CH4){reference-type="eqref" reference="CH4"} and [\[CH5\]](#CH5){reference-type="eqref" reference="CH5"} are not explicitly necessary;

- For the map $F(s,\xi):=(s,x(s,\xi))$, find an open set $O_1\subset{\mathbb R}\times{\mathbb R}^N$ with $O_1|_{t=0}\neq \emptyset$ such that $F:O_1\to O_2=F(O_1)$ is a $C^1$-diffeomorphism;

- Define $u(t,x):=\Phi(t;\varphi(t,x)):O_2\to{\mathbb R}$ with $F^{-1}(t,x)=(t,\varphi(t,x))$.

# Flow invariance and ordinary differential equations on closed sets

Let $J \subset {\mathbb R}$ be an open interval, $V \subset {\mathbb R}^N$ open and $g: J \times V \to {\mathbb R}^N$ continuous. By Peano's theorem, the initial value problem (IVP for short) $$\label{eq:1}
x'(t)= g\big(t, x(t)\big) \text{ on } J, \quad x(t_0)=x_0$$ has a local (classical) solution for every $(t_0, x_0) \in J \times V$, i.e. there is $\varepsilon=\varepsilon(t_0, x_0) >0$ and a $C^1$-function $x: I_\varepsilon\to {\mathbb R}^N$, with $I_\varepsilon=(t_0 -\varepsilon, t_0 + \varepsilon)$, such that [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} is satisfied at every point. In this situation, a closed set $K \subset V$ is said to be positive (negative) flow invariant for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} if every solution of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} that starts at time $t_0$ at a point $x_0 \in K$ stays inside $K$ for all (admissible) $t > t_0$ ($t < t_0$). In this case, one also speaks of forward (backward) invariance of $K$ for the right-hand side $g$. A closed set $K$ is called flow invariant, or just invariant, for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} if $K$ is both positive and negative flow invariant for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. Since classical solutions for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} with (only) continuous right-hand side need not be unique, it might happen that, for given closed $K \subset V$, one solution starting at $x_0$ stays in $K$, while another solution leaves $K$. The standard autonomous example of non-uniqueness, namely $g(x)=2\sqrt{|x|}$ on $V={\mathbb R}$, already shows this behavior with $K:=\{0\}$ and $x_0=0$.
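The standard example just mentioned can be verified by direct substitution; the following sketch (with our own variable names) does so numerically and makes explicit that $K=\{0\}$ is only weakly flow invariant here.

```python
# The non-uniqueness example g(x) = 2 sqrt(|x|) with x(t0) = 0, checked by substitution.
# Both x(t) = 0 and x(t) = (t - t0)**2 (for t >= t0) solve x' = g(x).
import numpy as np

def g(x):
    return 2.0 * np.sqrt(np.abs(x))

t0 = 0.0
t = np.linspace(t0, 1.0, 201)

x_stay = np.zeros_like(t)        # the solution that stays in K = {0}
x_leave = (t - t0) ** 2          # the solution that leaves K immediately

# residual x'(t) - g(x(t)), with the derivatives computed in closed form
res_stay = np.max(np.abs(np.zeros_like(t) - g(x_stay)))
res_leave = np.max(np.abs(2.0 * (t - t0) - g(x_leave)))
print("residual for x(t) = 0          :", res_stay)
print("residual for x(t) = (t - t0)^2 :", res_leave)
```

Both residuals vanish, so the IVP has one solution that stays in $K$ and another that leaves it immediately.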
One therefore calls a closed $K \subset V$ weakly (positively or negatively) flow invariant for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}, if $t_0 \in J$ and $x_0 \in K$ implies the existence of one solution staying in $K$ (for $t>t_0$ or $t<t_0$, respectively). We are interested in situations of unique solvability of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}, which holds true if $g$ is jointly continuous and locally Lipschitz continuous in $x$, i.e. for every $(t_0, x_0) \in J \times V$ there is $\delta= \delta(t_0, x_0) >0$ and $L=L(t_0, x_0)>0$ such that $$\label{eq:2} |g(t,x)-g(t,\bar{x})| \leq L |x-\bar{x}| \text{ for all } t \in I_\delta \cap J, \; x,\bar{x} \in B_\delta (x_0)\cap V.$$ In this case, the Picard-Lindelöf theorem yields unique solvability of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. Therefore, weak flow invariance then is the same as flow invariance. Let us note in passing that forward (backward) unique solvability of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} for continuous $g$ holds under weaker additional assumptions such as one-sided Lipschitz continuity in $x$. More precisely, if for every $(t_0, x_0) \in J \times V$ there is $\delta=\delta(t_0, x_0) >0$ and a $k=k_{t_0,x_0} \in L^1(J)$ such that $$\label{eq:3} \left\langle g(t,x) -g(t,\bar{x}), x-\bar{x}\right\rangle \leq k(t) |x-\bar{x}|^2 \text{ for } t \in J, \; x, \bar{x} \in B_\delta(x_0) \cap V,$$ then forward uniqueness holds for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}. Evidently, flow invariance of $K$ requires conditions on $g$. The extreme case is $K=\{x_0\}$, where $g(t, x_0)=0$ for $t \in I_\varepsilon$ is required. In the general case, if $x_0 \in K$ and $x(t) \in K$ on $[t_0, t_0 +\varepsilon)$ holds, then $$\label{eq:4} x(t_0 +h)= x_0 +h g(t_0, x_0) + e (h; t_0, x_0) \in K \text{ for } 0 \leq h < \varepsilon$$ with a remainder term $e(\cdot; t_0, x_0)$, which satisfies $$\label{eq:5} \lim\limits_{h \to 0+} \frac{1}{h} e (h; t_0, x_0)=0.$$ Hence $$\label{eq:6} {\rm dist}\,\big(x_0+h g(t_0, x_0), K\big) \leq \left|e (h; t_0, x_0)\right|= o(h) \text{ as } h \to 0+.$$ Consequently, the condition $$\label{eq:7} g(t_0, x_0) \in \widetilde{T}_K(x_0)$$ with the so-called "tangent cone" $$\label{eq:8} \widetilde{T}_K(x)=\left\{z \in X: \lim\limits_{h \to 0+} h^{-1} {\rm dist}\,(x+h z, K)=0\right\} \text{ for } x \in K$$ is a necessary condition, where $X={\mathbb R}^N$. Actually, it turns out that the apparently weaker condition $$\label{eq:9} g(t,x) \in T_K(x) \quad\text{ for } t \in J, \; x \in K$$ with the so-called Bouligand contingent cone $$%\label{eq:10} T_K(x)=\left\{z \in X: \liminf\limits_{h \to 0+} h^{-1} {\rm dist}\, (x+ hz,K)=0\right\} \text{ for } x \in K$$ is sufficient for positive flow invariance of the closed set $K \subset V$; as above, $X={\mathbb R}^N$ here. This is a direct consequence of the following result on existence of solutions for ordinary differential equations (ODEs) on closed sets. *Let $J=(a,b) \subset {\mathbb R}$, $K\subset {\mathbb R}^N$ closed and $g:J \times K \to {\mathbb R}^N$ continuous. Then, given any $(t_0, x_0) \in J \times K$, the IVP [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} has a local (classical) forward solution if and only if $g$ satisfies the subtangential condition [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}.* Proofs of Theorem A1 can be found in [@AmannODEs], [@DeLN] or [@Jan-Wilke-ODEs]. 
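The cone condition [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} can also be probed numerically. In the sketch below the closed set $K$ is the closed unit disk in ${\mathbb R}^2$ and the two vector fields are chosen only for illustration: the rotation field is tangent to $\partial K$ and satisfies the condition, while the outward field violates it at boundary points.

```python
# Probing the subtangential condition for K = closed unit disk in R^2 (a sketch).
# dist(y, K) = max(|y| - 1, 0); the condition asks that dist(x + h g(x), K) = o(h).
import numpy as np

def dist_to_disk(y):
    return max(np.linalg.norm(y) - 1.0, 0.0)

def subtangential_ratio(g, x, h):
    # h^{-1} dist(x + h g(x), K); this should tend to 0 as h -> 0+ for subtangential g(x).
    return dist_to_disk(np.array(x) + h * np.array(g(x))) / h

rotation = lambda x: (-x[1], x[0])   # tangent to circles centred at the origin
outward = lambda x: (x[0], x[1])     # points out of K at boundary points

x_boundary = (1.0, 0.0)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"h = {h:7.0e}   rotation: {subtangential_ratio(rotation, x_boundary, h):.2e}"
          f"   outward: {subtangential_ratio(outward, x_boundary, h):.2e}")
# The rotation ratios decay like h/2, so the limit is 0; the outward ratios stay near 1.
```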
We call elements from $T_K(x)$ subtangential vectors (to $K$ at $x$). Observe that in Theorem A1 the right-hand side $g(t,x)$ is only defined for $x \in K$, hence the constraint $x(t) \in K$ is incorporated into the domain of definition of $g$. If $g$ is defined on the larger domain $J \times V$ with $V \supset K$, then Theorem A1 yields weak positive flow invariance of $K$ as only continuity of $g$ is assumed. A solution which stays inside $K$ is then also called viable [@AuViab]. Positive flow invariance follows if forward uniqueness for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} holds. Since backward solutions of [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} are equivalent to forward solutions for $\widetilde{g} (t,x):= -g(2t_0-t,x)$, we also obtain a characterisation of negative flow invariance. Together, this is contained in *Let $J=(a,b) \subset {\mathbb R}$, $K \subset {\mathbb R}^N$ closed and $g: J \times K \to {\mathbb R}^N$ continuous. Then the following statements hold true:* 1. The IVP [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} admits local solutions on some $I_\varepsilon$ (i.e. forward & backward) iff $$g(t,x) \in T_K(x) \text{ and } -g(t,x) \in T_K(x) \text{ on } J \times K.\vspace{-0.1in}$$ 2. If $g$ also satisfies the one-sided Lipschitz condition [\[eq:3\]](#eq:3){reference-type="eqref" reference="eq:3"} and satisfies [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"}, then IVP [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} has a unique local forward solution for every $t_0 \in J$ and $x_0 \in K$. 3. If $g$ is locally Lipschitz continuous in $x$ (in the sense of [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} above), with $\pm g(t,x) \in T_K(x)$ for all $t \in J$, $x \in K$, then IVP [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} has a unique local solution on some $I_\varepsilon$ (i.e., forward & backward) for every $t_0 \in J$ and $x_0 \in K$. If $g$ is defined on $J \times V$ with some open $V \supset K$, then $K$ is positive flow invariant for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} in the situation of (b), while $K$ is flow invariant for [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} in the situation as described in (c). Theorems A1 and A2 have been generalized in several directions. See [@DeLN] for extensions to ODEs in Banach space, [@MDE] for extensions to differential inclusions (in ${\mathbb R}^N$ as well as in real Banach spaces), [@Bo2] for time-dependent constraints, i.e. IVP [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} with the additional condition that $x(t) \in K(t)$ is to hold and [@Bo9], [@Bo_Cara] for extensions to more general evolution problems. Due to Theorem A1 and A2, information on flow invariance of given sets can be -- to a large extent -- obtained from properties of the set of subtangential vectors and their calculation. 9 H. Abels, On generalized solutions of two-phase flows for viscous incompressible fluids, Interfaces Free Bound. **9** (2007), no. 1, pp. 31-65. R. A. Adams and J. J. F. Fournier, Sobolev spaces, Second edition. Pure and Applied Mathematics (Amsterdam) **140**, Elsevier/Academic Press, Amsterdam, 2003. H. Amann, *Ordinary Differential Equations,* de Gruyter Studies in Mathematics Vol. **13**, Walter de Gruyter & Co., Berlin 1985. L. Ambrosio and G. Crippa, Continuity equations and ODE flows with non-smooth velocity, Proc. R. Soc. Edinb., Sect. A, Math. **144** (2014), no. 6, pp. 1191-1244. J. P. 
Aubin, *Viability Theory*, Birkhäuser 1991. M. Bardi and I. Capuzzo-Dolcetta, *Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations*, Birkhäuser Boston, Inc., Boston MA., 1997. D. Bothe, Multivalued differential equations on graphs, Nonlinear Analysis **18** (1992), pp. 245-252. D. Bothe, Flow invariance for perturbed nonlinear evolution equations, Abstract and Applied Analysis **1** (1996), pp. 417-433. D. Bothe, Nonlinear evolutions with Carathéodory forcing, J. Evol. Equations **3** (2003), pp. 375-394. D. Bothe, On moving hypersurfaces and the discontinuous ODE-system associated with two-phase flows, Nonlinearity **33** (2020), pp. 5425-5456. M. G. Crandall, H. Ishii and P. L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) **27** (1992), no. 1, pp. 1-67. M. G. Crandall and P. L. Lions, Two approximations of solutions of Hamilton-Jacobi equations, Math. Comp. **43 (1984), No. 167, pp. 1-19. R. Danchin and P. Mucha, The incompressible Navier-Stokes equations in vacuum, Comm. Pure Appl. Math. **72** (2019), no. 7, pp. 1351-1385. K. Deimling, *Ordinary Differential Equations in Banach Spaces*, Lect. Notes Math. **596**, Springer 1977. K. Deimling, *Multivalued Differential Equations*, De Gruyter 1992. R. J. DiPerna and P. L. Lions, Ordinary differential equations, transport theory and Sobolev spaces, Invent. Math. **98** (1989), no. 3, pp. 511-547. L. C. Evans, *Partial Differential Equations*, 2nd edition, American Mathematical Society (2010). M. Fricke, T. Marić, A. Vučković, I. Roisman and D. Bothe, A locally signed-distance preserving level set method (SDPLS) for moving interfaces, preprint (arXiv:2208.01269v2). Y. Giga, *Surface evolution equations. A level set approach*, Monographs in Mathematics **99**, Birkhäuser Verlag, Basel, 2006. N. Hamamuki, An improvement of level set equations via approximation of a distance function, Applicable Analysis **98** (2019), no 10, pp. 1901-1915. H. Ishii, Perron's method for Hamilton-Jacobi equations, Duke Math. J. **55** (1987), no. 2, pp. 369-384. P. L. Lions, *Mathematical topics in fluid mechanics. Vol. 1. Incompressible models*, Oxford Lecture Series in Mathematics and its Applications, 3. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York (1996). J. Prüss and G. Simonett, *Moving interfaces and quasilinear parabolic evolution equations*, Monographs in Mathematics **105**, Birkhäuser/Springer (2016). J. Prüss, M. Wilke, *Gewöhnliche Differentialgleichungen und dynamische Systeme*, Springer 2010. G. Della Rocca and G. Blanquart, Level set reinitialization at a contact line, Journal of Computational Physics **265** (2014), pp. 34-49. V. Sabelnikov, A. Yu. Ovsyannikov and M. Gorokhovski, Modified level set equation and its numerical assessment, Journal of Computational Physics **278** (2014), pp. 1-30. J. A. Sethian, A fast marching level set method for monotonically advancing fronts, Proceedings of the National Academy of Sciences **93** (1996), no. 4, pp. 1591-1595. M. Sussman, P. Smereka and S. Osher, A level set approach for computing solutions to incompressible two-phase flow, Journal of Computational Physics **114** (1994), no. 1, pp. 146-159. M. Sussman and E. Fatemi, An efficient, interface-preserving level set redistancing algorithm and its application to interfacial incompressible fluid flow, SIAM Journal on Scientific Computing **20** (1999), no. 4, pp. 1165-1191. H. 
Whitney, Analytic extensions of differentiable functions defined in closed sets, Trans. Amer. Math. Soc. **36** (1934), pp. 63-89.** [^1]: Department of Mathematics, Technische Universität Darmstadt, Germany. E-mail: bothe\@mma.tu-darmstadt.de [^2]: Department of Mathematics, Technische Universität Darmstadt, Germany. E-mail: fricke\@mma.tu-darmstadt.de [^3]: Department of Mathematics, Faculty of Science and Technology, Keio University, Japan. E-mail: soga\@math.keio.ac.jp [^4]: This modification has first been introduced by Ilia Roisman in the lecture entitled *Implicit surface method for numerical simulations of moving interfaces*, given at the international workshop on 'Transport Processes at Fluidic Interfaces -- from Experimental to Mathematical Analysis', Aachen, Germany, December 2011. [^5]: $\displaystyle D^+\rho(z):=\Big\{a\in{\mathbb R}^4\,\Big|\, \limsup_{\tilde{z}\to z}\frac{\rho(\tilde{z})-\rho(z)- a\cdot(\tilde{z}-z) }{|\tilde{z}-z|}\le0\Big\}$ with $z=(t,x)$. [^6]: $\displaystyle D^-\tilde{\rho}(z):=\Big\{a\in{\mathbb R}^4\,\Big|\, \liminf_{\tilde{z}\to z}\frac{\tilde{\rho}(\tilde{z})-\tilde{\rho}(z)- a\cdot(\tilde{z}-z) }{|\tilde{z}-z|}\ge0\Big\}=-D^+(-\tilde{\rho})(z)$ with $z=(t,x)$.
arxiv_math
{ "id": "2310.05111", "title": "Mathematical analysis of modified level-set equations", "authors": "Dieter Bothe, Mathis Fricke, Kohei Soga", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  Nordhaus and Gaddum proved in 1956 that the sum of the chromatic number $\chi$ of a graph $G$ and that of its complement is at most $|G|+1$. The Nordhaus-Gaddum graphs are the class of graphs satisfying this inequality with equality, and are well-understood. In this paper we consider a hereditary generalization: graphs $G$ for which all induced subgraphs $H$ of $G$ satisfy $\chi(H) + \chi(\overline{H}) \ge |H|$. We characterize the forbidden induced subgraphs of this class and find its intersection with a number of common classes, including line graphs. We also discuss $\chi$-boundedness and algorithmic results.
author:
- |
  Vaidy Sivaraman\
  Dept. of Mathematics and Statistics\
  Mississippi State University\
  Starkville, MS 39762\
  `vs692@msstate.edu `
- |
  Rebecca Whitman\
  Department of Mathematics\
  University of California, Berkeley\
  Berkeley, CA 94720\
  `rebecca_whitman@berkeley.edu `
title: Hereditary Nordhaus-Gaddum Graphs
---

AMS subject classifications: 05C75 05C17

Keywords: Nordhaus-Gaddum graph, hereditary graph class, sum-perfect graph, perfect graph, claw-free graph, line graph

# Introduction

In this paper we define a new family of graph classes generalizing the Nordhaus-Gaddum graphs, and introduce the largest hereditary subclasses of each of these classes. Throughout, all graphs are finite and simple, with neither loops nor multiple edges. We begin by introducing four graph invariants used throughout the paper. Let $G = (V(G), E(G))$ be a graph. Let $n$ denote $|V(G)|$ and $\overline{G}$ denote the complement of $G$. The *clique number* $\omega(G)$ is the largest positive integer $k$ such that the complete graph on $k$ vertices, denoted $K_k$, is a subgraph of $G$, and the *independence number* $\alpha(G)$ is the largest positive integer $l$ such that $\overline{K_l}$ is a subgraph of $G$. The *chromatic number* $\chi(G)$ is the smallest positive integer $k$ such that the vertices of $G$ can be properly colored with $k$ colors (a proper coloring is one where adjacent vertices receive different colors). The *clique cover number* $\theta(G)$ is the smallest positive integer $l$ such that $V(G)$ can be partitioned into $l$ sets, each inducing a clique. It follows from the definitions that $\omega(\overline{G}) = \alpha(G)$, $\chi(\overline{G}) = \theta(G)$, $\omega(G) \le \chi(G) \le n$, and $\alpha(G) \le \theta(G) \le n$. The relationship between the chromatic number of a graph and that of its complement was first studied by Nordhaus and Gaddum in 1956 [@NoGa56]. They proved that the sum of the two invariants is bounded above by $n + 1$: $$\label{eq:NG_inequality} \chi(G) + \theta(G) \le n + 1$$ Since then several researchers have studied the same problem for various graph invariants. See [@AoHa13] for a survey. The class of graphs satisfying Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"} with equality is called the class of Nordhaus-Gaddum graphs (we denote this class $0$-$NG$). The class $0$-$NG$ has been widely studied, with three increasingly elegant structural characterizations [@Fi66] [@StTu08] [@CoTr13]. Cheng, Collins, and Trenk gave a degree sequence characterization of $0$-$NG$ and an enumeration of the class [@ChCoTr16]. Notably, $0$-$NG$ is not a hereditary class, that is, it is not closed under taking induced subgraphs.
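Since the graphs appearing in our examples are small, these invariants can be computed by brute force. The following sketch (the encodings and helper names are ours and are not part of the paper) verifies Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"} and checks the membership claims for $C_5$ and $P_4$ discussed in the example that follows.

```python
# Brute-force chi and theta for small graphs (a sketch for experimentation only).
from itertools import combinations, product

def chi(n, edges):
    """Chromatic number of a graph on vertices 0..n-1, edges given as frozensets."""
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in map(tuple, edges)):
                return k
    return 0   # empty graph

def complement(n, edges):
    return {frozenset(e) for e in combinations(range(n), 2)} - set(edges)

def theta(n, edges):   # clique cover number = chromatic number of the complement
    return chi(n, complement(n, edges))

def cycle(n):
    return n, {frozenset((i, (i + 1) % n)) for i in range(n)}

def path(n):
    return n, {frozenset((i, i + 1)) for i in range(n - 1)}

for name, (n, E) in [("C_5", cycle(5)), ("P_4", path(4))]:
    c, t = chi(n, E), theta(n, E)
    print(f"{name}: chi = {c}, theta = {t}, n + 1 = {n + 1}, "
          f"Nordhaus-Gaddum equality: {c + t == n + 1}")
# C_5 attains chi + theta = n + 1 while its induced subgraph P_4 does not,
# which is exactly the non-hereditary behaviour discussed below.
```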
As an example, the cycle graph on $5$ vertices, $C_5$, has $\chi = \theta = 3$ and $n = 5$, so satisfies Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"} with equality. Its induced subgraph $P_4$, the path graph on four vertices, has $\chi = \theta = 2$ and $n = 4$, so is not in $0$-$NG$. A natural question is to understand the largest hereditary subclass of $0$-$NG$, i.e. the set of graphs for which all induced subgraphs are Nordhaus-Gaddum. It turns out that this class coincides with a class well-studied in the context of induced subgraphs, namely, the threshold graphs. We provide a proof in Proposition [Proposition 3](#prop:threshold){reference-type="ref" reference="prop:threshold"}, later in this section. Threshold graphs have been relaxed and generalized in several different ways (see for instance [@MaPe95] [@YCC] [@BSZ]), and this paper will give another.

The class $0$-$NG$ is small and contains few important subclasses, aside from the threshold graphs. Nevertheless, a number of classes have interesting intersections with it. One is the class of $C_4, 2K_2$-free graphs (the pseudo-split graphs), defined in [@Bl93]. Pseudo-split graphs containing an induced $C_5$ satisfy Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"} with equality, and form one of three subclasses of $0$-$NG$ in Collins and Trenk's characterization [@CoTr13]. Blázsik [@Bl93] showed that all pseudo-split graphs satisfy the relaxed bound $\chi(G) + \theta(G) \ge n$. We build on this generalization to introduce a parameter $a$ to Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"} and consider graphs satisfying the following inequality for each choice of $a \ge 0$: $$\label{eq:generalized_ng} \chi(G) + \theta(G) \ge n + 1 - a.$$ Given $a \ge 0$, a graph $G$ is *a-Nordhaus-Gaddum* if $G$ satisfies Inequality [\[eq:generalized_ng\]](#eq:generalized_ng){reference-type="ref" reference="eq:generalized_ng"}. The generalized classes of $a$-Nordhaus-Gaddum graphs are again not hereditary; as an example, $C_{2a+5}$ has $\chi = 3$ and $\theta = a+3$, so is an element of $a$-$NG$, but its induced subgraph $P_{2a+4}$ has $\chi = 2$ and $\theta = a+2$, so is not an element of $a$-$NG$. We hence define $G$ to be *a-hereditary-Nordhaus-Gaddum* if every induced subgraph $H$ of $G$ satisfies [\[eq:generalized_ng\]](#eq:generalized_ng){reference-type="ref" reference="eq:generalized_ng"}. We denote the class of $a$-Nordhaus-Gaddum graphs by $a$-$NG$ and the class of $a$-hereditary-Nordhaus-Gaddum graphs by $a$-$HNG$. For all $a \ge 0$, $a$-$NG$ and $a$-$HNG$ are both closed under complementation, and $a$-$HNG$ is hereditary. We have the following chain of inclusions:

**Proposition 1**. *$\text{$0$-$HNG$} \subset \text{$0$-$NG$} \subset \text{$1$-$HNG$} \subset \text{$1$-$NG$} \subset \text{$2$-$HNG$} \subset \ldots$*

*Proof.* Fix $a \ge 0$. By definition $a$-$HNG$ $\subset$ $a$-$NG$. Suppose for a contradiction that $a$-$NG$ $\not \subset$ $(a+1)$-$HNG$, so there exists $G \in$ $a$-$NG$ with an induced subgraph $H \not \in$ $(a+1)$-$NG$. Let $K$ be the induced subgraph with vertex set $V(G) - V(H)$. Then $\chi(G) + \theta(G) \ge |G| + 1 - a$ and $\chi(H) + \theta(H) < |H| + 1 - (a+1)$. By Inequality [\[eq:NG_inequality\]](#eq:NG_inequality){reference-type="ref" reference="eq:NG_inequality"}, $\chi(K) + \theta(K) \le |K| + 1$.
As a result: $$\chi(G) + \theta(G) \le \chi(H) + \chi(K) + \theta(H) + \theta(K) < |H| + |K| + 1 - a = |G| + 1 - a,$$ contradicting $G$'s inclusion in $a$-$NG$. Thus instead $a$-$NG$ $\subset$ $(a+1)$-$HNG$. Since $C_{2a+5}$ is in $a$-$NG$ but not $a$-$HNG$, and $P_{2a+4}$ is in $(a+1)$-$HNG$but not $a$-$NG$, all inclusions are strict. ◻ We introduce more notation used throughout the paper, then discuss a number of important graph classes and their relation to $a$-$NG$ and $a$-$HNG$. Given positive integers $k, l$, we denote the complete bipartite graph with $k$ vertices in one half of the bipartition and $l$ in the other by $K_{k,l}$. We use $+$ to denote the disjoint union of two graphs. Given a vertex $v$ (resp. induced subgraph $H$), let $G - \{v\}$ (resp., $G - H$) denote the induced subgraph on vertex set $V(G) - \{v\}$ (resp., $V(G) - V(H)$). A vertex $v$ is *complete to* $H \subseteq V(G)$ if $v$ is adjacent to all vertices in $H$. A vertex subset $A$ is complete to $H$ if all $v \in A$ are complete to $H$. The neighborhood of a vertex $v$ in $G$ (resp., in induced subgraph $H$) is denoted $N(v)$ (resp., $N_H(v)$). A vertex $v$ is *isolated* in $G$ (resp., in $H$) if $N(v)$ (resp. $N_H(v)$) is empty, and *dominating* if $N(v) = V(G) - \{v\}$. We refer to two induced subgraphs as isolated from one another where there are no edges with one endpoint in each subgraph. A graph $G$ is said to be *threshold* if there exist a function $w: V(G) \to \mathbb{R}$ and a real number $t$ such that there is an edge between two distinct vertices $u$ and $v$ if and only if $w(u) + w(v) > t$ [@ChHa73]. The class of threshold graphs has been studied in great detail. See Mahadev and Peled's book for more information [@MaPe95]. Threshold graphs are identifiable by forbidden induced subgraphs and with a vertex ordering ([@ChHa77]), among myriad other characterizations. **Theorem 2**. *Given a graph $G$, the following are equivalent:* 1. *$G$ is threshold.* 2. *$G$ contains no induced $2K_2, P_4, C_4$.* 3. *$V(G)$ can be given an ordering $v_1, \ldots, v_n$ such that for every $i$, $1 \le i \le n$, $v_i$ is adjacent to all or none of $v_1, \ldots, v_{i-1}$.* Together, these two characterizations show that threshold graphs are exactly $0$-$HNG$. **Proposition 3**. *The class $0$-$HNG$ is exactly the class of threshold graphs.* *Proof.* Given $a \ge 0$ and a graph $G$ with an isolated or dominating vertex $v$, $G$ satisfies Inequality [\[eq:generalized_ng\]](#eq:generalized_ng){reference-type="ref" reference="eq:generalized_ng"} with equality if and only if $G-\{v\}$ satisfies Inequality [\[eq:generalized_ng\]](#eq:generalized_ng){reference-type="ref" reference="eq:generalized_ng"} with equality. Hence Theorem [Theorem 2](#prop:threshold_char){reference-type="ref" reference="prop:threshold_char"} (iii) implies that threshold graphs are contained in $0$-$HNG$. For the converse it suffices to see from Theorem [Theorem 2](#prop:threshold_char){reference-type="ref" reference="prop:threshold_char"}(ii), that none of $P_4, 2K_2, C_4$, the three forbidden induced subgraphs for the class of threshold graphs, is in $0$-$HNG$. ◻ A graph $G$ is *split* if its vertices can be partitioned into a clique and a stable set [@FoHa77]. A graph $G$ is *chordal* if every induced cycle in it is a triangle [@Di61]. It is *weakly chordal* if every induced cycle in it or its complement is either a triangle or a square [@Ha85]. A graph $G$ is *perfect* if $\chi(H) = \omega(H)$ for all induced subgraphs $H$ of $G$ [@Be61]. 
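Before comparing these classes further, we note that the ordering in Theorem [Theorem 2](#prop:threshold_char){reference-type="ref" reference="prop:threshold_char"}(iii) also yields a simple recognition procedure: reading the ordering backwards, a graph is threshold exactly when it can be dismantled by repeatedly deleting a vertex that is currently isolated or dominating. The following sketch implements this peeling (graph encodings and names are ours); by Proposition [Proposition 3](#prop:threshold){reference-type="ref" reference="prop:threshold"} it can equally be read as a membership test for $0$-$HNG$.

```python
# Threshold-graph recognition by peeling isolated or dominating vertices (a sketch).
def is_threshold(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    while adj:
        for v in list(adj):
            if not adj[v] or len(adj[v]) == len(adj) - 1:   # isolated or dominating
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False   # no isolated or dominating vertex exists
    return True

# A threshold example (the star K_{1,3}) and the three forbidden graphs of Theorem 2(ii).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
twoK2 = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
for name, graph in [("K_{1,3}", star), ("P_4", P4), ("C_4", C4), ("2K_2", twoK2)]:
    print(name, "threshold?", is_threshold(graph))
```

As expected, the star is accepted while $P_4$, $C_4$ and $2K_2$ are rejected.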
The Strong Perfect Graph Theorem [@CRST] shows that a graph is perfect if and only if it contains none of $\{C_5, C_7, \overline{C_7}, C_9, \overline{C_9}, \ldots\}$ as an induced subgraph. We also define a new class generalizing perfect graphs: a graph is *apex-perfect* if it contains a vertex whose deletion results in a perfect graph. Although this class is hereditary, its forbidden induced subgraph characterization remains unknown. By the forbidden induced subgraph characterizations, we have the following chain of inclusions on graph classes: $$\text{Threshold} \subset \text{Split} \subset \text{Chordal} \subset \text{Weakly Chordal} \subset \text{Perfect} \subset \text{Apex-Perfect}.$$ We compare the hereditary classes $a$-$HNG$ to the above hereditary classes. First, $0$-$HNG$ is exactly the class of threshold graphs. In a split graph, $\omega(G) + \alpha(G) \ge |G|$ [@HaSi81]. Since split graphs are perfect, $\chi(G) + \theta(G) = \omega(G) + \alpha(G) \ge |G|$; as every induced subgraph of a split graph is again split, it follows that split graphs are a subclass of $1$-$HNG$. For $a \ge 1$, $a$-$HNG$ is not a subclass of chordal, weakly chordal, or perfect graphs, since it contains $C_5$. The converse also holds: chordal, weakly chordal, and perfect graphs do not constitute subclasses of $a$-$HNG$, since $P_{2a+4}$ is in each of these classes but not in $a$-$HNG$. We prove later in Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"} that all graphs in $1$-$HNG$ are apex-perfect. Another generalization of perfect graphs is to $\chi$-bounding functions. For a hereditary class $\mathcal{G}$, $f: \mathbb{N} \rightarrow \mathbb{N}$ is a $\chi$-bounding function on $\mathcal{G}$ if for all $G \in \mathcal{G}$, $\chi(G) \le f(\omega(G))$. For perfect graphs, the function $f(x) = x$ is $\chi$-bounding, and this is of course best possible. See [@ScSe20] for a survey of hereditary classes with known $\chi$-bounding functions. Since all graphs in $1$-$HNG$ are apex-perfect, it follows, as Theorem [Theorem 13](#prop:chi_bound_Vizing){reference-type="ref" reference="prop:chi_bound_Vizing"}, that $1$-$HNG$ is $\chi$-bounded by the function $f(x) = x+1$, which is best possible. Given a graph $G$, it holds that $\omega(G) + \alpha(G) \le n + 1$. If this bound holds with equality for all induced subgraphs $H$ of $G$, then $G$ is threshold [@LiPoSi19]. More interesting is the generalization of threshold graphs to the class of sum-perfect graphs. A graph is *sum-perfect* if for all induced subgraphs $H$, $\omega(H) + \alpha(H) \ge |H|$. One of the authors, together with Litjens and Polak, provides a forbidden induced subgraph characterization of the class [@LiPoSi19]. It follows from the characterization that the class of sum-perfect graphs is a subclass of the weakly chordal graphs, and hence perfect. Since $\omega(G) \le \chi(G)$ and $\alpha(G) \le \theta(G)$, sum-perfect graphs are also in $1$-$HNG$. We make use of this inclusion in Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, our forbidden induced subgraph characterization of $1$-$HNG$. The line graph of $G$, denoted $L(G)$, has the edges of $G$ as its vertices. Two vertices in $L(G)$ are adjacent if, as edges of $G$, they share an endpoint. Line graphs translate questions about edges into questions about vertices, and so are an important tool for simplifying otherwise-intractable problems.
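Before turning to line graphs in more detail, we note that the hereditary conditions above can be tested exhaustively on small graphs. The sketch below (our own encodings and names; it is exponential and intended only for experiments) checks sum-perfection and membership in $1$-$HNG$ over all induced subgraphs: $P_4$ satisfies both conditions, $C_5$ lies in $1$-$HNG$ but is not sum-perfect, and $P_6$ satisfies neither, illustrating that the sum-perfect graphs form a proper subclass of $1$-$HNG$.

```python
# Brute-force hereditary checks of sum-perfection and membership in 1-HNG (a sketch).
from itertools import combinations, product

def induced(vertices, edges):
    vs = list(vertices)
    idx = {v: i for i, v in enumerate(vs)}
    return len(vs), {frozenset((idx[u], idx[v])) for u, v in map(tuple, edges)
                     if u in idx and v in idx}

def chi(n, edges):
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in map(tuple, edges)):
                return k
    return 0

def complement(n, edges):
    return {frozenset(e) for e in combinations(range(n), 2)} - set(edges)

def omega(n, edges):
    return max((len(S) for r in range(n + 1) for S in combinations(range(n), r)
                if all(frozenset(e) in edges for e in combinations(S, 2))), default=0)

def holds_hereditarily(n, edges, test):
    return all(test(*induced(S, edges))
               for r in range(1, n + 1) for S in combinations(range(n), r))

sum_perfect = lambda n, E: omega(n, E) + omega(n, complement(n, E)) >= n   # omega + alpha >= |H|
one_hng = lambda n, E: chi(n, E) + chi(n, complement(n, E)) >= n           # chi + theta >= |H|

P4 = (4, {frozenset((i, i + 1)) for i in range(3)})
C5 = (5, {frozenset((i, (i + 1) % 5)) for i in range(5)})
P6 = (6, {frozenset((i, i + 1)) for i in range(5)})
for name, (n, E) in [("P_4", P4), ("C_5", C5), ("P_6", P6)]:
    print(name, " sum-perfect:", holds_hereditarily(n, E, sum_perfect),
          " in 1-HNG:", holds_hereditarily(n, E, one_hng))
```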
The class of line graphs is hereditary and there is a beautiful characterization of this class in terms of its forbidden induced subgraphs [@LWB]. We characterize the intersection of $1$-$HNG$ and line graphs in Theorem [Theorem 14](#prop:hng_line_graph){reference-type="ref" reference="prop:hng_line_graph"}. A *claw* is an induced subgraph that is isomorphic to $K_{1,3}$; its trivalent vertex is called its *center*. A graph is *claw-free* if it contains no claw. Since the claw is one of the forbidden induced subgraphs of line graphs, claw-free graphs are an important generalization of line graphs. They also have strong algorithmic properties (see the survey [@CFGS]). We characterize the intersection of $1$-$HNG$ and claw-free graphs in Theorem [Theorem 16](#prop:hng_claw_free){reference-type="ref" reference="prop:hng_claw_free"}. Section 5 ends with characterizations of the intersection of $1$-$HNG$ with two simple, essential classes: bipartite graphs and triangle-free graphs. The paper is structured as follows. In Section 2 we study the effect of vertex deletion on $\chi$ and $\theta$, which is connected to forbidden induced subgraphs of $a$-$HNG$. From here, the remaining sections of the paper pertain only to the smallest class $1$-$HNG$. In Section 3 we extend the forbidden induced subgraph characterization of sum-prefect graphs [@LiPoSi19] to characterize the forbidden induced subgraphs of $1$-$HNG$. In Section 4 we show graphs in $1$-$HNG$ are apex-perfect, thereby giving a best possible $\chi$-bounding function on the class. Section 5 centers on the intersection of $1$-$HNG$ with, respectively, the classes of line graphs, claw-free graphs, and triangle-free graphs. In Section 6 we provide a number of optimization results about $1$-$HNG$, showing that inclusion in $1$-$HNG$, $\omega(G), \alpha(G), \chi(G),$ and $\theta(G)$ can all be computed in polynomial time. # Vertex Deletion in Minimum Colorings and Clique Coverings In this section we present several results about vertex deletion and forbidden induced subgraphs of $a$-$HNG$. First, since $a$-$HNG$ is closed under complementation, so too is its set of forbidden induced subgraphs, which we record as a remark. **Remark 4**. *For all $a \ge 0$, the set of forbidden induced subgraphs of $a$-$HNG$ is closed under complementation.* Second, each vertex in a graph can contribute at most one color to a coloring and add at most one clique to a clique covering. Since a proper coloring of a graph $G$ is formally a partition of $V(G)$ into stable sets, we call each stable set a *color class*, that is, the set of vertices with the same color. A vertex $v$ is *$\chi$-distinct* in $G$ if there exists a minimum proper coloring of $G$ where $v$ is the only vertex in its color class. Similarly, $v$ is *$\theta$-distinct* in $G$ if there exists a minimum proper clique covering where $v$ is the only vertex in its clique. Such a minimum proper coloring or clique covering is called *$v$-distinct*. Deleting a vertex $v$ from $G$ will decrement $\chi$ or $\theta$ if and only if $v$ is $\chi$-distinct or $\theta$-distinct, respectively, which we record in the following proposition. **Proposition 5**. *Given $v \in G$, $\chi(G - \{v\}) = \chi(G) - 1$ if and only if $v$ is $\chi$-distinct. Otherwise $\chi(G - \{v\}) = \chi(G)$. Analogously, $\theta(G - \{v\}) = \theta(G) - 1$ if and only if $v$ is $\theta$-distinct, and otherwise $\theta(G - \{v\}) = \theta(G)$.* *Proof.* Let $v \in G$. Clearly $\chi(G - \{v\}) = \chi(G)$ or $\chi(G) - 1$. 
If $v$ is $\chi$-distinct, then choose a $v$-distinct coloring. The restriction of this coloring to $G - \{v\}$ gives a proper coloring of $G - \{v\}$ with $\chi(G) - 1$ colors, so $\chi(G - \{v\}) \le \chi(G) - 1$. Hence they are equal. For the reverse direction, if $\chi(G - \{v\}) = \chi(G) - 1$, then fix a proper coloring of $G - \{v\}$ using $\chi(G) - 1$ colors. Add a new color class containing only $v$ to produce a proper coloring of $G$ using $\chi(G)$ colors. Hence the coloring is minimal in $G$, and we conclude that $v$ is $\chi$-distinct. An analogous proof holds for $\theta(G)$. ◻ The following result is used to identify $\theta$-distinct vertices; an analogous result holds for $\chi$-distinct vertices. **Proposition 6**. *For any $v \in V(G)$ such that $N(v)$ induces a stable set, $v$ is $\theta$-distinct in $G$ if its neighbors are not $\theta$-distinct in $G - \{v\}$.* *Proof.* Let $v \in V(G)$ with $N(v)$ inducing a stable set. We prove the contrapositive. Suppose $v$ is not $\theta$-distinct in $G$. Then for all minimum proper clique coverings of $G$, $v$ shares a clique with one of its neighbors. Choose a minimum proper clique covering. This clique covering restricted to $G - \{v\}$ is then a proper clique covering with some $w \in N(v)$ in a distinct clique. Proposition [Proposition 5](#prop:vertex_deletion){reference-type="ref" reference="prop:vertex_deletion"} implies that $\theta(G - \{v\}) = \theta(G)$, so the clique covering restricted to $G - \{v\}$ is minimum. Thus $w$ is $\theta$-distinct in $G - \{v\}$. ◻ Identifying distinct vertices is critical because forbidden induced subgraphs cannot contain distinct vertices of either type. **Proposition 7**. *If $G$ is a forbidden induced subgraph of $a$-$HNG$, then $G \in$ $(a+1)$-$HNG$, and no vertex of $G$ is $\chi$-distinct or $\theta$-distinct.* *Proof.* Let $v \in V(G)$. Since $G$ is a forbidden induced subgraph of $a$-$HNG$, it follows that $G - \{v\} \in$ $a$-$HNG$. Thus $\chi(G) + \theta(G) \le n-a$ and $\chi(G - \{v\}) + \theta(G - \{v\}) \ge n-a$. Since $\chi(G) \ge \chi(G - \{v\})$ and $\theta(G) \ge \theta(G - \{v\})$, it follows that $\chi(G) = \chi(G-\{v\})$, $\theta(G) = \theta(G - \{v\}$, and $\chi(G) + \theta(G) = n-a$. Thus $G \in$ $(a+1)$-$HNG$, and by Proposition [Proposition 5](#prop:vertex_deletion){reference-type="ref" reference="prop:vertex_deletion"}, $v$ is not $\chi$-distinct or $\theta$-distinct. ◻ The above two propositions combine to give the following corollary, which we use extensively in our characterization of $1$-$HNG$ in Section 3. **Corollary 8**. *If there exists $v \in V(G)$ such that $N(v)$ induces a stable set and no vertex in $N(v)$ is $\theta$-distinct in $G - \{v\}$, then $G$ is not a forbidden induced subgraph of $a$-$HNG$for any $a \ge 0$.* # A Forbidden Induced Subgraph Characterization of $1$-$HNG$ Let $\mathcal{F}$ be the set of $52$ forbidden induced subgraphs of $1$-$HNG$ found computationally on $6$ to $8$ vertices, shown in Figure [1](#fig:fis_for_1hng){reference-type="ref" reference="fig:fis_for_1hng"}. Let $\mathcal{F_S}$ denote those graphs in $\mathcal{F}$ containing no induced $C_5$, and $\mathcal{F_C}$ denote graphs in $\mathcal{F}$ containing an induced $C_5$. The subscript $S$ will denote that these are forbidden induced subgraphs of the sum-perfect graphs; see Theorem [Theorem 10](#prop:sum_perfect){reference-type="ref" reference="prop:sum_perfect"}. The set $\mathcal{F}_S$ contains 24 graphs on 6 vertices and 2 graphs on 7 vertices. 
Twelve of the 24 graphs on 6 vertices are bipartite graphs with matching number $\nu = 3$. Note that a bipartite graph has $\nu = 3$ if and only if it has an induced perfect matching on $6$-vertices, if and only if it contains a $3K_2$ subgraph. The other 12 are complements of these graphs. The two graphs on 7 vertices are the sun graph with a pendant (degree one) vertex attached to a degree two vertex, and its complement. ![These graphs, together with their complements, comprise the set $\mathcal{F}$ of forbidden induced subgraphs of $1$-$HNG$.](FIS.png){#fig:fis_for_1hng height="10cm"} The set $\mathcal{F_C}$ contains 22 graphs on 7 vertices and 4 graphs on 8 vertices, and is also by necessity closed under complementation. From here, we prove that $\mathcal{F}$ is exactly the set of forbidden induced subgraphs for $1$-$HNG$. **Theorem 9**. *A graph $G$ is in $1$-$HNG$ if and only if it contains no element of $\mathcal{F}$ as an induced subgraph.* This work is made much simpler for graphs in $1$-$HNG$ without $C_5$ by the fact that these graphs are exactly the sum-perfect graphs, and their forbidden induced subgraph characterization is known. We present the characterization by Litjens, Polak, and Sivaraman [@LiPoSi19]. **Theorem 10**. *A graph $G$ is sum-perfect if and only if it is $\mathcal{F_S} \cup \{C_5\}$-free.* Our proof of Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"} will separately address graphs that do and do not contain an induced $C_5$. In anticipation of the former, we introduce notation and a proposition summarizing the contents of $\mathcal{F_C}$. If a graph $G$ contains no element of $\mathcal{F}$ as an induced subgraph, then the possible coexisting vertices outside $C_5$ and their adjacencies are limited. Let $G$ be a graph containing vertices $C = \{c_1, c_2, c_3, c_4, c_5\} \in V(G)$ that induce a copy of $C_5$ along edges $v_1v_2, v_2v_3, v_3v_4, v_4v_5,$ and $v_1v_5$. (We will choose such a subgraph $C$ frequently throughout this paper, and always with the given edge set.) Let $D = V(G) - C$; we say that a vertex $v \in D$ is of type $N(v) \cap C$, according to its adjacencies in $C_5$, such as $\{c_1, c_2, c_4\}, \{c_3, c_5\}$, or even $\{\emptyset\}$. Multiple vertices in $D$ may be of the same type. The automorphism group of $C_5$ is the dihedral group on $5$ elements. Throughout, we will work up to symmetry of this automorphism group, and relabel $C$, up to symmetry, so that any chosen vertex or vertices are of lowest possible type. For instance, if we assume the existence of a vertex with one neighbor in $C$, we would call it of type $\{c_1\}$, up to symmetry. With a vertex of type $\{c_2, c_4\}$ or $\{c_2, c_5\}$, we would apply the requisite relabeling of $C$ to have a vertex of type $\{c_1, c_3\}$. The following remark summarizes the pairs of vertices, up to symmetry, not in $C_5$ that can occur within $D$ without containing an induced graph from $\mathcal{F}$. **Proposition 11**. *Let $G$ be a graph containing no element of $\mathcal{F}$ as an induced subgraph, and let $C = \{c_1, c_2, c_3, c_4, c_5\} \in V(G)$ induce a copy of $C_5$ with edges $c_1c_2,$ $c_2c_3,$ $c_3c_4,$ $c_4c_5,$ and $c_1c_5$. 
Let $D = V(G) - C$ and let $v \in D$.* *If $v$ is of type $\{\emptyset\}$, then for all $w \in D$, either $w$ is adjacent to $v$ and of types $\{c_1, c_2, c_3\},$$\{c_1, c_2, c_3, c_4\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, up to symmetry, or $w$ is non-adjacent to $v$ and of any type.* *If $v$ is of type $\{c_1\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{c_3, c_4\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\},\{c_1\},$ $\{c_1, c_2\},$ $\{c_1, c_3\},$ $\{c_1, c_4\},$ $\{c_1, c_5\},$ $\{c_3, c_4\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_5\},$ $\{c_1, c_3, c_4\},$ $\{c_1, c_3, c_5\}$, or $\{c_1, c_2, c_3, c_4, c_5\}$.* *If $v$ is of type $\{c_1, c_2\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{c_4\},$ $\{c_1, c_2\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_5\},$ $\{c_1, c_2, c_3, c_4\},$ $\{c_1, c_2, c_3, c_5\},$ $\{c_1, c_2, c_4, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\},$ $\{c_1\},$ $\{c_4\},$ $\{c_1, c_2\},$$\{c_1, c_4\},$ $\{c_2, c_4\}$, or $\{c_1, c_2, c_4\}$.* *If $v$ is of type $\{c_1, c_3\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{c_1, c_3, c_4, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\},$ $\{c_1\},$ $\{c_3\},$ $\{c_1, c_3\},$ $\{c_1, c_4\},$ $\{c_1, c_5\},$ $\{c_3, c_4\},$ $\{c_3, c_5\},$ $\{c_1, c_3, c_4\},$ $\{c_1, c_3, c_5\}$, or $\{c_1, c_2, c_3, c_4, c_5\}$.* *If $v$ is of type $\{c_1, c_2, c_3\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{\emptyset\},$ $\{c_1, c_2\},$ $\{c_2, c_3\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_5\},$ $\{c_2, c_3, c_4\},$ $\{c_2, c_3, c_5\},$ $\{c_1, c_2, c_3, c_4\},$ $\{c_1, c_2, c_3, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\}$ or $\{c_2\}$.* *If $v$ is of type $\{c_1, c_2, c_4\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{c_1, c_2\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_5\},$ $\{c_1, c_2, c_3, c_4\},$ $\{c_1, c_2, c_3, c_5\},$ $\{c_1, c_2, c_4, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\},$ $\{c_1\},$ $\{c_2\},$ $\{c_4\},$ $\{c_1, c_2\},$ $\{c_1, c_4\},$ $\{c_2, c_4\},$ $\{c_1, c_2, c_4\}$, or $\{c_1, c_2, c_3, c_5\}$.* *If $v$ is of type $\{c_1, c_2, c_3, c_4\}$, up to symmetry, then for all $w \in D$, either $w$ is adjacent to $v$ and of type $\{c_1, c_2\},$ $\{c_1, c_4\},$ $\{c_2, c_3\},$ $\{c_3, c_4\},$ $\{c_1, c_2, c_3\},$ $\{c_2, c_3, c_4\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_3, c_4\},$ $\{c_2, c_3, c_5\},$ $\{c_1, c_2, c_3, c_4\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\}$ or $\{c_2, c_3, c_5\}$.* *If $v$ is of type $\{c_1, c_2, c_3, c_4, c_5\}$, then for all $w \in D$, either $w$ is adjacent to $v$ and of any type, or $w$ is non-adjacent to $v$ and of type $\{\emptyset\},$ $\{c_1\},$ $\{c_2\},$ $\{c_3\},$ $\{c_4\},$ $\{c_5\},$ $\{c_1, c_3\},$ $\{c_1, c_4\},$ $\{c_2, c_4\},$ $\{c_2, c_5\}$, or $\{c_3, c_5\}$.* *Proof.* It is straightforward to verify the result (by hand or computer). 
For $v$ of any type out of $\{\emptyset\},$ $\{c_1\},$ $\{c_1, c_2\},$ $\{c_1, c_3\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_3, c_4\},$ or $\{c_1, c_2, c_3, c_4, c_5\}$, if $w$ is of a listed type and adjacency to $v$, then the subgraph induced on vertices $\{v, w, c_1, c_2, c_3, c_4, c_5\}$ is neither isomorphic to nor contains an element of $\mathcal{F}$. If $w$ is not of a listed type and adjacency to $v$, then the subgraph induced on vertices $\{v, w, c_1, c_2, c_3, c_4, c_5\}$ is isomorphic to or contains an element of $\mathcal{F}$. ◻

We will use this result profusely throughout our proof of Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}.

*Proof.* The reverse direction is straightforward to check: each of the $52$ graphs in $\mathcal{F}$ has $\chi(G) + \theta(G) = |G|-1$, but for all proper induced subgraphs $H$, $\chi(H) + \theta(H) \ge |H|$. For the forward direction, let $G$ be a graph containing none of the graphs in $\mathcal{F}$ as an induced subgraph. We consider separately graphs that do and do not contain an induced subgraph isomorphic to $C_5$. Suppose, first, that $G$ does not contain $C_5$ as an induced subgraph. Hence by Theorem [Theorem 10](#prop:sum_perfect){reference-type="ref" reference="prop:sum_perfect"}, $G$ is sum-perfect, so every induced subgraph $H$ of $G$ satisfies $\omega(H) + \alpha(H) \ge |H|$. Since $\omega(H) \le \chi(H)$ and $\alpha(H) \le \theta(H)$, it follows that $\chi(H) + \theta(H) \ge |H|$ for every induced subgraph $H$, and therefore $G \in$ $1$-$HNG$.

Otherwise, let $G$ contain an induced copy of $C_5$. Suppose that vertices $\{c_1, c_2, c_3, c_4, c_5\} \subseteq V(G)$ induce a copy of $C_5$ with edges $c_1c_2, c_2c_3, c_3c_4, c_4c_5,$ and $c_1c_5$. Let $C = \{c_1,c_2,c_3,c_4,c_5\}$ and $D = V(G) - C$. In any graph, isolated vertices are $\theta$-distinct and dominating vertices are $\chi$-distinct. Assume by Proposition [Proposition 7](#prop:vertex_deletion_mfis){reference-type="ref" reference="prop:vertex_deletion_mfis"} that $G$ has neither isolated nor dominating vertices. It is also straightforward to check that if $|D| \le 2$, then $G \in$ $1$-$HNG$, so we assume $|D| \ge 3$. We will exploit Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} to identify possible vertex types in $G$. Then, we show either (i), there exists a $\theta$-distinct vertex in $G$, or (ii), there exists some $v \in V(G)$ whose neighborhood induces a stable set and no vertex in it is $\theta$-distinct in $G - \{v\}$. If (i) holds, $G$ cannot be a forbidden induced subgraph by Proposition [Proposition 7](#prop:vertex_deletion_mfis){reference-type="ref" reference="prop:vertex_deletion_mfis"}, and if (ii) holds, Corollary [Corollary 8](#prop:theta_distinct_mfis){reference-type="ref" reference="prop:theta_distinct_mfis"} implies the same. The proof consists of four cases: first, where there exists a vertex of type $\{\emptyset\}$. Since the forbidden induced subgraphs of $1$-$HNG$ are closed under complementation (see Remark [Remark 4](#prop:FIS_closed_complement){reference-type="ref" reference="prop:FIS_closed_complement"}), we can assume for subsequent cases that no vertex is of type $\{\emptyset\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$. The second case is where there exists a vertex of type $\{c_1\}$, up to symmetry.
Again, for subsequent cases, we assume there are no vertices of types $\{c_1\},$ $\{c_2\},$ $\{c_3\},$ $\{c_4\},$ $\{c_5\},$ $\{c_1, c_2, c_3, c_4\},$ $\{c_1, c_2, c_3, c_5\},$ $\{c_1, c_2, c_4, c_5\},$ $\{c_1, c_3, c_4, c_5\},$ or $\{c_2, c_3, c_4, c_5\}$. The third case is where there exists a vertex of type $\{c_1, c_3\}$, up to symmetry. For the fourth case, we assume there are only vertices of types $\{c_1, c_2\},$ $\{c_1, c_5\},$ $\{c_2, c_3\},$ $\{c_3, c_4\},$ $\{c_4, c_5\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_3, c_4\},$ $\{c_1, c_3, c_5\},$ $\{c_2, c_3, c_5\},$ or $\{c_2, c_4, c_5\}$, and suppose up to symmetry that there exists a vertex of type $\{c_1, c_2\}$. **Case 1: $D$ contains a vertex of type $\{\emptyset\}$.** Let $E$ be the set of vertices of type $\{\emptyset\}$ and let $N(E)$ be the set of vertices with a neighbor in $E$. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $E$ induces a stable set, and by assumption, both $E$ and $N(E)$ are nonempty (else a vertex in $E$ would be isolated). By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $N(E)$ can contain vertices of types $\{c_1, c_2, c_3\}, \{c_1, c_2, c_3, c_4\}$, and $\{c_1, c_2, c_3, c_4, c_5\}$, up to symmetry. Any vertices of these types must be adjacent by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, so $N(E)$ induces a clique. We now show $N(E)$ is complete to $B = D - E - N(E)$. First, let $w \in N(E)$ be a vertex of type $\{c_1, c_2, c_3\}$, up to symmetry. Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} implies $w$ is complete to $B$ except for vertices of type $\{c_2\}$. However, if there exists a vertex of type $\{c_2\}$, $G$ contains $F_2$ as an induced subgraph. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $D$ contains no vertices of types $\{c_1\},\{c_3\},\{c_4\},\{c_5\},\{c_1, c_3\},\{c_1, c_4\},\{c_2, c_4\},\{c_2, c_5\},$ or $\{c_3, c_5\}$, so any vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ in $N(E)$ is also complete to $B$. Second, let $w \in N(E)$ be of type $\{c_1, c_2, c_3, c_4\}$. Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} implies $w$ is complete to $B$ except for possibly vertices of type $\{c_2, c_3, c_5\}$. However, given a vertex $x$ of type $\{c_2, c_3, c_5\}$ not adjacent to $w$, $G$ contains $F_{24}$ as an induced subgraph. Furthermore, by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $D$ contains no vertices of types $\{c_1\},$ $\{c_2\}$ $\{c_3\},$ $\{c_4\},$ $\{c_5\},$ $\{c_1, c_3\},$ $\{c_2, c_4\},$ $\{c_2, c_5\},$ or $\{c_3, c_5\}$. More subtly, $D$ also does not contain a vertex of type $\{c_1, c_4\}$ that is not adjacent to some vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$, else $G$ contains an induced $\overline{F_2}$. Thus any vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ in $N(E)$ is complete to $B$. The third and final option is that $N(E)$ contains only vertices of type $\{c_1, c_2, c_3, c_4, c_5\}$. 
By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, any vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ is adjacent to all other vertices in $B$ except possibly those of type $\{c_1\}$ and $\{c_1, c_3\}$, up to symmetry. If there exists a vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ that is non-adjacent to a vertex of type $\{c_1\}$ in $D$, up to symmetry, then $\overline{G}$ contains a vertex of type $\{\emptyset\}$ with a neighbor of type $\{c_1, c_2, c_3\}$. If there exists a vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ that is non-adjacent to a vertex of type $\{c_1, c_3\}$ in $D$, up to symmetry, then $\overline{G}$ contains a vertex of type $\{\emptyset\}$ with a neighbor of type $\{c_1, c_2, c_3, c_4\}$. Since $1$-$HNG$ is closed under complementation, we can assume $N(E)$ contains a vertex of type $\{c_1, c_2, c_3\}$ or $\{c_1, c_2, c_3, c_4\}$, both resolved above. Hence we can proceed with the assumption that $N(E)$ is complete to $B$. Fix a minimum proper clique covering of $C \cup B$. We can extend this to a minimum proper clique covering of $C \cup B \cup N(E)$ by adding $N(E)$ to the clique containing $c_2$. Since $N(E)$ is complete to $B$, the clique covering is proper. This extends to a proper clique covering of $G$ by giving each vertex in $E$ a distinct clique, so $\theta(G) \le \theta(C \cup B) + |E|$. Since $E$ induces a stable set and is isolated to $C \cup B$, $\theta(G) \ge \theta(C \cup B) + |E|$. We conclude the specified clique covering is minimum. Since it is $v$-distinct for any $v \in E$, Proposition [Proposition 7](#prop:vertex_deletion_mfis){reference-type="ref" reference="prop:vertex_deletion_mfis"} implies that $G$ is not a forbidden induced subgraph of $1$-$HNG$. **Case $2$: $D$ contains a vertex of type $\{c_1\}$.** Suppose that $D$ contains no vertices of type $\{\emptyset\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, but does contain a vertex of type $\{c_1\}$, up to symmetry. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $D$ may also contain vertices of type $\{c_1\},$ $\{c_1, c_2\},$ $\{c_1, c_3\},$ $\{c_1, c_4\},$ $\{c_1, c_5\},$ $\{c_3, c_4\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_2, c_5\},$ $\{c_1, c_3, c_4\},$ and $\{c_1, c_3, c_5\}$. Of these, vertices of type $\{c_1\}$ may only be adjacent to vertices of type $\{c_3, c_4\}$. Suppose there exists $v$ of type $\{c_1\}$ not adjacent to any vertex of type $\{c_3, c_4\}$, so $N(v) = \{c_1\}$ is a stable set. Suppose for a contradiction there exists a $c_1$-distinct clique covering of $H = G - \{v\}$. Since $N(c_2) \subseteq N(c_1) - \{c_3\}$ with the exception of $c_3$, it follows that $c_2$ and $c_3$ share a clique. By an analogous argument, $c_4$ and $c_5$ share a clique, and so no vertex in $H - C_5$ shares a clique with any vertex in $C_5$. By assumption, $|D| \ge 3$, so $|H - C_5|$ is nonempty. Choose a clique $K$ in $H - C_5$; there are three possibilities: 1. $K$ has no vertex of type $\{c_3, c_4\}$. If so, $c_1$ is complete to $K$ and can be added to it, contradicting the clique covering's minimality. 2. $K$ has a vertex of type $\{c_3, c_4\}$ but no vertex of type $\{c_1\}$. If so, $K$ contains only vertices of types $\{c_3, c_4\}$ and $\{c_1, c_3, c_4\}$, so we add $c_3$ to $K$ and combine $c_1$ and $c_2$ into one clique, contradicting the assumption that the clique covering is minimum. 3. $K$ has a vertex of type $\{c_3, c_4\}$ and a vertex of type $\{c_1\}$. 
These two vertices, together with $C_5$ and $v$, induce $F_{25}$ or $F_{26}$ in $G$, for a contradiction. Instead, we conclude $H$ has no $c_1$-distinct clique covering, and Corollary [Corollary 8](#prop:theta_distinct_mfis){reference-type="ref" reference="prop:theta_distinct_mfis"} implies $G$ is not a forbidden induced subgraph of $1$-$HNG$. Otherwise, every vertex of type $\{c_1\}$ is adjacent to a vertex of type $\{c_3, c_4\}$. If there exist two vertices of type $\{c_1\}$, they cannot be adjacent by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}. If they are adjacent to a common vertex of type $\{c_3, c_4\}$, $G$ contains an induced $F_{26}$, and if they are adjacent to different vertices of type $\{c_3, c_4\}$, $G$ contains an induced $F_{25}$. Thus $G$ has exactly one vertex $v$ of type $\{c_1\}$. If there exist two vertices of type $\{c_3, c_4\}$ adjacent to $v$, then $G$ contains either $F_3$ or $F_4$ as an induced subgraph. Hence $v$ is adjacent to exactly one vertex $w$ of type $\{c_3, c_4\}$. The vertex $w$ is adjacent to all other vertices of type $\{c_3, c_4\}$, else $G$ contains an induced $F_3$, and to all vertices of type $\{c_1, c_3, c_4\}$, else $G$ contains an induced $F_{17}$. Hence $w$ is complete to $D$. Here, $N(v) = \{c_1, w\}$, which induces a stable set. We claim that neither $c_1$ nor $w$ is $\theta$-distinct in $H = G - \{v\}$. Suppose for a contradiction there exists a $w$-distinct clique covering of $H$. Thus $D - \{v\}$ shares a clique with $c_1$; $c_2$ and $c_3$ share a clique; and $c_4$ and $c_5$ share a clique. However, the clique covering is not minimum: put $c_1$ with $c_2$ and add $c_3$ and $w$ to $D - \{v\}$ to produce a strictly smaller clique covering of $H$. The covering is proper since $w$ is complete to $D$. Suppose for a contradiction there exists a $c_1$-distinct clique covering of $H$. Thus $c_2$ and $c_3$ share a clique in that covering, and $c_4$ and $c_5$ share a clique. Put $c_3$ in $w$'s clique and add $c_1$ to $c_2$'s clique to produce a proper, strictly smaller clique covering. Instead, we conclude $H$ has no $c_1$-distinct or $w$-distinct clique covering, and Corollary [Corollary 8](#prop:theta_distinct_mfis){reference-type="ref" reference="prop:theta_distinct_mfis"} implies $G$ is not a forbidden induced subgraph of $1$-$HNG$. **Case 3: $D$ contains a vertex of type $\{c_1, c_3\}$.** By ruling out the previous cases, $D$ contains no vertices with zero, one, four, or five neighbors in $C$, but does contain a vertex $v$ of type $\{c_1, c_3\}$, up to symmetry and complementation. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $D$ may also contain vertices of type $\{c_1, c_3\}, \{c_1, c_4\}, \{c_1, c_5\}, \{c_3, c_4\}, \{c_3, c_5\}, \{c_1, c_3, c_4\},$ and $\{c_1, c_3, c_5\}$. Any vertices of type $\{c_1, c_3\}$, $\{c_1, c_4\}$, or $\{c_3, c_5\}$ are isolated in $D$. However, $D$ contains vertices of type $\{c_1, c_4\}, \{c_3, c_4\}$, or $\{c_1, c_3, c_4\}$ if and only if it does not contain vertices of type $\{c_1, c_5\}, \{c_3, c_5\},$ or $\{c_1, c_3, c_5\}$. Up to symmetry, we assume the former. Here, $N(v) = \{c_1, c_3\}$, which induces a stable set. We claim that neither $c_1$ nor $c_3$ is $\theta$-distinct in $H = G - \{v\}$. Suppose for a contradiction there exists a $c_1$-distinct clique covering of $H$. 
Thus $c_2$ and $c_3$ share a clique, $c_4$ and $c_5$ share a clique, $D$ contains no vertex of type $\{c_1, c_4\}$, and no vertex in $H - C_5$ shares a clique with any vertex in $C_5$. Since $H - C_5$ is nonempty and complete to $c_3$, join $c_3$ to any clique in $H - C_5$ and combine $c_1$ and $c_2$ to produce a proper clique covering of $H$ with fewer cliques. If there exists a $c_3$-distinct clique covering of $H$, then $c_1$ and $c_2$ share a clique. Since $c_3$ cannot be added to any other clique, $H - C_5$ contains only vertices of type $\{c_1, c_4\}$ and by assumption must have at least $2$ such vertices. However, the proper clique covering of $H$ with $c_2$ and $c_3$ sharing a clique, $c_1$ and $c_5$ sharing a clique, and $c_4$ in a clique with some vertex of type $\{c_1, c_4\}$ contains fewer cliques. Instead, we conclude $H$ has no $c_1$-distinct or $c_3$-distinct clique covering, and Corollary [Corollary 8](#prop:theta_distinct_mfis){reference-type="ref" reference="prop:theta_distinct_mfis"} implies $G$ is not a forbidden induced subgraph of $1$-$HNG$. **Case 4: $D$ contains a vertex of type $\{c_1, c_2\}$.** Here, $D$ contains only vertices of types $\{c_1, c_2\},$ $\{c_1, c_5\},$ $\{c_2, c_3\},$ $\{c_3, c_4\},$ $\{c_4, c_5\},$ $\{c_1, c_2, c_4\},$ $\{c_1, c_3, c_4\},$ $\{c_1, c_3, c_5\},$ $\{c_2, c_3, c_5\},$ and $\{c_2, c_4, c_5\}$. Without loss of generality, suppose that $D$ contains a vertex $v$ of type $\{c_1, c_2\}$. Thus $D$ contains only vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, which may or may not be adjacent to one another. With $D$ so limited, we move directly to a clique covering. The set $N(c_3) = \{c_2, c_4\}$ induces a stable set, and we claim that neither $c_2$ nor $c_4$ is $\theta$-distinct in $H = G - \{c_3\}$. Since $c_2$ is complete to $\{v\} \cup N(v)$, no clique covering of $H$ is $c_2$-distinct, else $c_2$ could be joined to $v$'s clique. If a clique covering is $c_4$-distinct, then $c_5$ and $c_1$ share a clique. Since $c_1$ is complete to $\{v\} \cup N(v)$, add $c_1$ to $v$'s clique and combine $c_4$ and $c_5$ into one clique to produce a proper clique covering of $H$ with fewer cliques, for a contradiction. We conclude by Corollary [Corollary 8](#prop:theta_distinct_mfis){reference-type="ref" reference="prop:theta_distinct_mfis"} that $G$ is not a forbidden induced subgraph of $1$-$HNG$. ◻ # Graphs in $1$-$HNG$ are Apex-Perfect We prove that graphs in $1$-$HNG$ are apex-perfect, that is, that the deletion of some vertex yields a perfect graph. This is sufficient to show that $1$-$HNG$ is $\chi$-bounded by the function $f(x) = x+1$. **Theorem 12**. *If $G \in$ $1$-$HNG$, then there exists $v \in G$ such that $G - \{v\}$ is a perfect graph.* *Proof.* Let $G \in$ $1$-$HNG$. The graphs $C_6, \overline{C_6}, P_6,$ and $\overline{P_6}$ are elements of $\mathcal{F}$, so $G$ contains none of these, and hence $G$ contains no induced cycle $C_m$ or cycle-complement $\overline{C_m}$ for $m \ge 6$. Thus $G$ is perfect if and only if $G$ contains no induced $C_5$. From here, we consider the structure of induced copies of $C_5$ in $G$ to show that all induced copies of $C_5$ must intersect at a single vertex. Assume that $G$ is imperfect (i.e., not a perfect graph) and let $C = \{c_1, c_2, c_3, c_4, c_5\} \subseteq V(G)$ induce $C_5$ with edges $c_1c_2, c_2c_3, c_3c_4, c_4c_5,$ and $c_1c_5$. If $G$ has exactly one induced copy of $C_5$, then the theorem holds, so we assume another exists. 
In the remainder of the proof, we consider vertex types that can be included in another $C_5$, and show that in each case $G$ is apex-perfect. The neighbor set of a vertex of type $\{\emptyset\}$ induces a clique, as shown in the proof of Case 1 of Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}. Hence no vertices of type $\{\emptyset\}$ or, by complement, of type $\{c_1, c_2, c_3, c_4, c_5\}$ are elements of any induced $C_5$. Suppose there exists a vertex $v$ of type $\{c_1\}$, up to symmetry and complementation, contained in an induced subgraph $X$ isomorphic to $C_5$. The only neighbors of $v$ in $G$ are $c_1$ and possibly some vertices of type $\{c_3, c_4\}$. Since our work in the proof of Case 2 of Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"} showed that a vertex of type $\{c_1\}$ cannot have two (nonadjacent) neighbors of type $\{c_3, c_4\}$, we conclude that $v$'s neighbors in $X$ are $c_1$ and a vertex $w$ of type $\{c_3, c_4\}$. If, for a contradiction, there exists a subset $Y \subseteq V(G)$ inducing $C_5$ with $c_1 \not \in Y$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, the elements of $Y$ must be some of $c_2, c_3, c_4, c_5$ and vertices of types $\{c_3, c_4\}$ and $\{c_1, c_3, c_4\}$. Let $U$ denote the set of these possible elements. Since $c_2$ and $c_5$ each have only one neighbor in $U$, they cannot be in $Y$, and since $c_3$ and $c_4$ are adjacent to all but one element of $U$, they also cannot be in $Y$. Hence $Y$ consists entirely of vertices of types $\{c_3, c_4\}$ and $\{c_1, c_3, c_4\}$. Suppose up to complementation that at least three vertices of $Y$ are of type $\{c_3, c_4\}$. If three consecutive vertices of $Y$, say $v_1, v_2, v_3$, are of type $\{c_3, c_4\}$, then the induced subgraph of $G$ on vertex set $\{v_1, v_2, v_3, c_1, c_2, c_5\}$ is forbidden by Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}. Otherwise, up to symmetry $v_1, v_2,$ and $v_4$ are of type $\{c_3, c_4\}$ and $v_3, v_5$ are of type $\{c_1, c_3, c_4\}$. Here, the induced subgraph on $\{v_2, v_3, v_4, c_1, c_2, c_5\}$ is forbidden by Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}. In either case, we obtain the contradiction that $G \not \in$ $1$-$HNG$, so conclude instead that all induced copies of $C_5$ in $G$ contain vertex $c_1$. Thus its deletion renders the graph perfect. Hence if $G$ contains a vertex of type $\{c_1\}$ or $\{c_1, c_2, c_3, c_4\}$, up to symmetry, contained in an induced $C_5$, the graph is apex-perfect. Assume for the remainder of the proof that $G$ contains no such vertex. If a vertex $v$ of type $\{c_1, c_3\}$, up to symmetry and complementation, is in an induced $C_5$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, its two neighbors in the $C_5$ must be $c_1$ and $c_3$. If, for a contradiction, there exists a subset $Y \subseteq V(G)$ inducing $C_5$ with $c_1 \not \in Y$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, the elements of $Y$ must be some of $c_2, c_3, c_4, c_5$ and vertices of types $\{c_1, c_3\}$, $\{c_1, c_4\}$, $\{c_1, c_5\}$, $\{c_3, c_4\}$, $\{c_3, c_5\}$, $\{c_1, c_3, c_4\}$, and $\{c_1, c_3, c_5\}$. 
Let $U$ denote the set of these possible elements. Since $c_2$ has only one neighbor in $U$, it cannot be in $Y$. Since we have shown that any vertex of type $\{c_x, c_{x+2}\}$, with subscripts taken modulo $5$, must have as its $C_5$ neighbors $c_x$ and $c_{x+2}$, it follows that $Y$ does not contain vertices of type $\{c_1, c_3\}$ or $\{c_1, c_4\}$. If $Y$ contains a vertex of type $\{c_3, c_5\}$, then Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} restricts $Y$ to containing $c_3, c_5$, and vertices of types $\{c_1, c_5\}$ and $\{c_1, c_3, c_5\}$. Since $c_5$ is adjacent to all but $c_3$, $Y$ does not induce $C_5$. Thus $Y$ does not contain a vertex of type $\{c_3, c_5\}$. If instead $Y$ contains a vertex of type $\{c_3, c_4\}$ or $\{c_1, c_3, c_4\}$, then $Y$ is restricted to containing $c_3, c_4$ and vertices of types $\{c_3, c_4\}$ or $\{c_1, c_3, c_4\}$, which was shown above to not be possible. Lastly, if $Y$ contains a vertex of type $\{c_1, c_5\}$ or $\{c_1, c_3, c_5\}$, then in order to induce $C_5$, $Y$ must contain two vertices $v_1, v_2$ of type $\{c_1, c_5\}$, two vertices $v_3, v_5$ of type $\{c_1, c_3, c_5\}$, and $c_3$. However, $G$ then contains the induced subgraph $\overline{F_{22}}$, for a contradiction. Instead, all induced copies of $C_5$ in $G$ contain vertex $c_1$, and its deletion renders the graph perfect. Hence if $G$ contains a vertex of type $\{c_1, c_3\}$ or $\{c_1, c_2, c_3\}$, up to symmetry, contained in an induced $C_5$, the graph is apex-perfect. Assume for the remainder of the proof that $G$ contains no such vertex. Lastly, if there is a vertex $v$ of type $\{c_1, c_2\}$, up to symmetry and complementation, in an induced $C_5$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, the other vertices in that induced $C_5$ must be some of $c_1, c_2$, and vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$. Since $c_1$ and $c_2$ are adjacent to all but $c_4$, they do not occur in this $C_5$. As shown above, the $C_5$ cannot contain only vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, so $c_4$ must be included. This $C_5$ then contains two vertices each of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, and $G$ contains $\overline{F_{22}}$ as an induced subgraph, for a contradiction. ◻ It follows that graphs in $1$-$HNG$ are $\chi$-bounded by the function $f(x) = x+1$, which is best possible. **Theorem 13**. *If $G \in$ $1$-$HNG$, then $\chi(G) \le \omega(G) + 1$.* *Proof.* Let $G \in$ $1$-$HNG$. By Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, there exists $v \in G$ such that $\chi(G - \{v\}) = \omega(G - \{v\})$. By Proposition [Proposition 5](#prop:vertex_deletion){reference-type="ref" reference="prop:vertex_deletion"}, $\chi(G) \le \chi(G - \{v\}) + 1$. Either $\omega(G - \{v\}) = \omega(G)$ or $\omega(G - \{v\}) = \omega(G) - 1$. In the first case, $\chi(G) \le \chi(G - \{v\}) + 1 = \omega(G - \{v\}) + 1 = \omega(G) + 1$, and in the second case, $\chi(G) \le \chi(G - \{v\}) + 1 = \omega(G - \{v\}) + 1 = \omega(G)$. Both yield the desired result. ◻ # The Intersection of $1$-$HNG$ with Several Common Graph Classes In this section we provide equivalent structural and forbidden (induced) subgraph characterizations of $1$-$HNG$ intersected with line graphs, claw-free graphs, and triangle-free graphs. 
Beginning with line graphs, we define two sets that will provide our structural and forbidden subgraph characterizations. Given graphs $A_1$ to $A_6$ shown in Figure [2](#fig:line_graphs){reference-type="ref" reference="fig:line_graphs"}, let $\mathcal{A} = \{A_1, \ldots, A_6, 3P_3, P_5 + P_3, P_7, C_6, K_4, C_4+P_3, 2K_3, 2K_{1,3}, K_{2,3}, K_3+K_{1,3}\}$. Given families $L_1$ to $L_{11}$ in Figure [2](#fig:line_graphs){reference-type="ref" reference="fig:line_graphs"}, let $\mathcal{L}$ be the set of all graphs in one of families $L_1$ to $L_{11}$. We characterize the set of graphs whose line graphs are in $1$-$HNG$ as the set of graphs containing none of $\mathcal{A}$ as a (not necessarily induced) subgraph, and equivalently as the set of graphs contained in an element of $\mathcal{L}$. ![The set $\mathcal{A}$ comprises $A_1$ to $A_6$ as well as ten other small graphs. The set $\mathcal{L}$ comprises all graphs in families $L_1$ to $L_{11}$.](Line_graphs.png){#fig:line_graphs height="7cm"} **Theorem 14**. *Given a graph $G$, the following are equivalent:* 1. *$L(G) \in$ $1$-$HNG$.* 2. *$G$ contains none of $\mathcal{A}$ as a subgraph.* 3. *$G$ is a subgraph of a graph in $\mathcal{L}$.* *Proof.* (i) $\rightarrow$ (ii) It is straightforward to verify that for all $A \in \mathcal{A}$, the line graph $L(A) \in \mathcal{F}$. Thus if a graph $G$ contains $A$ as a subgraph, $L(G)$ contains the forbidden $L(A)$ as an induced subgraph, and by Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, $L(G) \not \in$ $1$-$HNG$. \(ii\) $\rightarrow$ (iii) Let $G$ be a graph containing none of $\mathcal{A}$ as a subgraph. Since $C_6, P_7 \in \mathcal{A}$, it follows that $G$'s largest cycle has $5$ or fewer vertices, should one exist. It is clear that $G$ can contain arbitrarily many connected components isomorphic to $K_2$, and so we restrict the subsequent analysis to edgewise nontrivial components. We split into cases according to the number of components containing at least two edges and the size of the largest cycle. Case 1: The graph $G$ has one edgewise nontrivial component, which contains a $5$-cycle. Any added vertex with two neighbors in the $5$-cycle either produces $C_6 \in \mathcal{A}$ as a subgraph if the neighbors are adjacent or $A_4 \in \mathcal{A}$ if the neighbors are not adjacent. Hence any vertex beyond the $5$-cycle must have at most one neighbor in the $5$-cycle. If two different vertices in the $5$-cycle have added neighbors, then $G$ contains either $P_7 \in \mathcal{A}$ (if the vertices are adjacent) or $A_2 \in \mathcal{A}$ (if not), so at most one vertex in the $5$-cycle has an added neighbor. If there exists an edge $e$ not incident to the $5$-cycle, then $e$ together with its path to the $5$-cycle and the $5$-cycle itself contain $P_7 \in \mathcal{A}$, so the only possible addition to the $5$-cycle is a set of pendant vertices attached to one vertex. If the $5$-cycle has two chords, then whether or not they are crossing chords, $G$ contains $A_1 \in \mathcal{A}$ as a subgraph. It follows that the $5$-cycle has at most one chord. A chord can be present only if it is incident to the vertex with pendant vertices, else $G$ contains $A_2$ or $A_4 \in \mathcal{A}$; such a chord can be added freely, and we conclude that $G$ is a subgraph of a graph in family $L_1$. Case 2: The graph $G$ has one edgewise nontrivial component, which contains no $5$-cycle but does contain a $4$-cycle. 
Any added vertex with two neighbors in the $4$-cycle either produces $C_5$ (handled in Case 1) as a subgraph if the neighbors are adjacent or $K_{2,3} \in \mathcal{A}$ if the neighbors are not adjacent. Hence any added vertex has at most one neighbor in the $4$-cycle. If two different vertices in the $4$-cycle have added neighbors, then those two vertices must be adjacent, else $G$ contains $A_4 \in \mathcal{A}$. If two adjacent vertices in the $4$-cycle each have two added neighbors, then $G$ contains $2K_{1,3} \in \mathcal{A}$, so instead only one vertex in the $4$-cycle can have more than one additional neighbor. If there exists an edge $e$ not incident to the $4$-cycle and two vertices in the $4$-cycle have additional neighbors, then $e$ together with its path to the $4$-cycle, the $4$-cycle itself, and the additional neighbor contain $P_7 \in \mathcal{A}$. Hence either all additions to the $4$-cycle are pendant vertices and all but one are appended to the same vertex, creating a subgraph of a graph in family $L_1$, or all additions to the $4$-cycle are appended to the same vertex, which we call $v$. However, we are limited in the subgraphs we can append to $v$: the set of vertices not included in the $4$-cycle cannot contain $P_3$, else $G$ contains $C_4+P_3 \in \mathcal{A}$. Hence we are limited to appending isolated copies of $P_3, K_3$, and pendant vertices to $v$. With two copies of $P_3$ appended to $v$, $G$ contains $P_5 + P_3 \in \mathcal{A}$, so we can append either one $K_3$ or one $P_3$ (and unlimited pendant vertices) to $v$. If the $4$-cycle contains two chords, then $G$ contains $K_4 \in \mathcal{A}$, so it has at most one chord. This chord must be incident to $v$, else $G$ contains $A_1$. If there is a chord, then $K_3$ cannot be appended to $v$, else $G$ contains $A_3 \in \mathcal{A}$. With a chord, $G$ is thus a subgraph of a graph in family $L_2$, and without a chord, $G$ is a subgraph of a graph in family $L_3$. Case 3: The graph $G$ has one edgewise nontrivial component, which contains $K_3$ but no larger cycle. Let $C = \{v_1, v_2, v_3\} \subseteq V(G)$ induce $K_3$. There is exactly one path from any other vertex to $C$, else $G$ contains $2K_3 \in \mathcal{A}$ or some larger cycle, addressed above. Hence the subgraphs appended to each vertex in $C$ are isolated from one another. If $P_4$ is appended with a vertex in $C$ as an endpoint and $G$ contains any other edge not incident to that vertex, then $G$ contains either $P_7$ or $P_5+P_3$. Hence if $P_4$ is appended to a vertex in $C$, $G$ is a subgraph of a graph in family $L_4$. A claw cannot be appended at a spoke to a vertex in $C$, since this would create $A_3 \in \mathcal{A}$ as a subgraph. Hence the subgraph appended to any vertex of $C$ contains only isolated copies of $P_3, K_3$, and pendant vertices. If $K_3$ is appended to a vertex in $C$, then appending neighbors to any other vertex in (either copy of) $K_3$ yields $A_3$ as a subgraph. Any number of copies of $K_3$ can be appended at the same vertex, and we conclude that $G$ is a subgraph of a graph in family $L_5$. If two copies of $P_3$ are appended to $C$ at the same vertex, then no other vertex in $C$ has an additional neighbor, else $G$ contains $P_5 + P_3$. If two copies of $P_3$ are appended to different vertices, then $G$ contains $P_7$. If two vertices in $C$ each have two additional neighbors, then $G$ contains $A_2$. 
If one vertex has a copy of $P_3$ appended and the other two vertices in $C$ have additional neighbors, then $G$ contains $A_6 \in \mathcal{A}$. Thus with a $P_3$ appended to a vertex in $C$, either that vertex can have a cluster of pendant vertices and $G$ is a subgraph of a graph in $L_6$ or a different vertex has a cluster of pendant vertices and $G$ is a subgraph of a graph in $L_7$. With no $P_3$ appended to a vertex in $C$, $G$ is a subgraph of a graph in $L_8$. Case 4: The graph $G$ has one edgewise nontrivial component, which contains no cycles. If $G$ contains $P_6$ on vertices $v_1, \ldots, v_6$ in that order, then it can have a cluster of pendant vertices appended to any non-endpoint vertex. No $P_3$ can be appended, else $G$ contains $P_7$ or $P_5+P_3$. Furthermore, two non-adjacent vertices cannot have appended vertices, else $G$ contains $A_2$. If $v_2$ has a pendant vertex, then $v_3$ cannot, else $G$ contains $P_5 + P_3$. If $v_3$ and $v_4$ each have two or more pendant vertices, then $G$ contains $2K_{1,3}$. Thus $G$ is isomorphic to a subgraph of $P_6$ together with a collection of pendant vertices attached to $v_2$, which is a subgraph of a graph in $L_7$; or $G$ has a collection of pendant vertices attached to $v_3$ and a pendant vertex attached to $v_4$ (up to symmetry), and hence is a subgraph of a graph in $L_9$. If $G$ contains no induced $P_6$ but does contain an induced $P_5$ on vertices $v_1, \ldots, v_5$ in that order, then it can have adjoined copies of $P_3$, but only to $v_3$. The resulting graph, with a cluster of copies of $P_3$ adjoined to $v_3$, is a subgraph of a graph in $L_5$. Without an adjoined $P_3$, again $G$ can only have one cluster of pendant vertices, and only neighboring vertices of the $P_5$ can have pendant vertices. The result is a subgraph of a graph in $L_9$. Case 5: The graph $G$ has at least two edgewise nontrivial components. Since $G$ cannot contain $3P_3 \in \mathcal{A}$ as a subgraph, $G$ has at most two components with at least two edges. Components containing a single edge or vertex are trivial, and ignored henceforth. The nontrivial components cannot contain $C_4$ or $P_5$, since $C_4 + P_3$ and $P_5 + P_3$ are in $\mathcal{A}$. If one component has a triangle, there cannot be a copy of $P_3$ appended to one triangle vertex, additional neighbors appended to two triangle vertices, or a vertex of degree two or more appended to the triangle, since each would create a $C_4$ or $P_5$ subgraph. Hence a component with a triangle consists of, at most, a triangle with a cluster of pendant vertices adjoined to one vertex. The other component cannot contain a triangle or a claw since $2K_3, K_3 + K_{1,3} \in \mathcal{A}$, so the other component must be a subgraph of $P_4$. Thus $G$ is a subgraph of a graph in $L_{10}$. Otherwise, both components are trees. If both components are subgraphs of $P_4$, $G$ is a subgraph of a graph in $L_{10}$. Otherwise, exactly one component contains a claw. This component contains $P_4$ with a pendant vertex attached to one of the center vertices. Since the two center vertices cannot both have two or more pendant vertices without producing a $2K_{1,3}$ subgraph, the component is a subgraph of $P_4$ together with a cluster of pendant vertices at one center vertex and a single pendant vertex attached to the other center vertex. The other nontrivial component is again a subgraph of $P_4$, and we conclude that $G$ is a subgraph of a graph in $L_{11}$. \(iii\) $\rightarrow$ (i) Compute the line graph of each graph in $\mathcal{L}$. 
By Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, each contains no forbidden induced subgraph, so is in $1$-$HNG$. The same holds for any subgraph $G$ of a graph in $\mathcal{L}$, since $L(G)$ is then an induced subgraph of one of these line graphs and $1$-$HNG$ is closed under induced subgraphs. ◻ In preparation for characterizing claw-free and triangle-free subsets of $1$-$HNG$, we show that three families of graphs built from a copy of $C_5$ are in $1$-$HNG$. The three families, $D_1, D_2,$ and $D_3$, are shown in Figure [3](#fig:claw_triangle_free){reference-type="ref" reference="fig:claw_triangle_free"}. Each consists of an induced $C_5$ on vertices labeled $\{c_1, c_2, c_3, c_4, c_5\}$ together with a stable set of added vertices. In $D_1$, the added vertices may be of types $\{c_1, c_3\},$ $\{c_1, c_4\},$ and $\{c_1, c_3, c_4\}$. In $D_2$, the added vertices may be of types $\{c_4\},$ $\{c_1, c_4\},$ and $\{c_1, c_3, c_4\}$. In $D_3$, the added vertices may be of types $\{c_1\},$ $\{c_1, c_3\},$ and $\{c_1, c_4\}$. ![A claw-free graph in $1$-$HNG$ containing an induced $C_5$ is an induced subgraph of a graph in family $\overline{D_1}, \overline{D_2}$, or $\overline{D_3}$, possibly with dominating vertices. A triangle-free graph in $1$-$HNG$ is a subgraph of $K_{2, n-2}$ or $S_{m, n-m}$ for some $3 \le m \le n$, or an induced subgraph of $D_3$, possibly with isolated vertices.](Claw_free.png){#fig:claw_triangle_free height="3cm"} **Lemma 15**. *All graphs in families $D_1, D_2$, or $D_3$ are in $1$-$HNG$.* *Proof.* Let $G$ be a graph in one of these families, with a fixed induced $C_5$ on vertex set $C = \{c_1, c_2, c_3, c_4, c_5\}$ in that order. By inspection of Figure [1](#fig:fis_for_1hng){reference-type="ref" reference="fig:fis_for_1hng"}, the addition to $C$ of any three vertices from $D$ does not induce an element of $F_C$. The choice of $C_5$ is immaterial: all induced copies of $C_5$ contain $c_1, c_3, c_4$, either $c_2$ or a vertex of type $\{c_1, c_3\}$, and either $c_5$ or a vertex of type $\{c_1, c_4\}$. With respect to any of these induced $C_5$, the types of vertices in $D = V(G) - C$ remain constant. Therefore $G$ contains no element of $F_C$ on any vertex subset. Graphs in $D_1$ and $D_2$ may contain induced triangles, but all triangles share two common vertices, $c_3$ and $c_4$. Graphs in $D_3$ contain no induced triangles. In either case, $G$ cannot contain an induced $\overline{F_1}, \ldots, \overline{F_{12}}, F_{13},$ or $\overline{F_{13}}$. See Figure [1](#fig:fis_for_1hng){reference-type="ref" reference="fig:fis_for_1hng"} for these graphs. If $G$ is an element of $D_1$ or $D_3$, then the largest stable set containing $c_1$ has size $2$. Hence if $G$ contains an induced $F_1, \ldots, F_{11},$ or $F_{12}$, it cannot include $c_1$. Without $c_1$, however, every edge in $G$ is incident to $c_3$ or $c_4$, so $\nu(G - \{c_1\}) = 2$. Hence $G$ contains no induced $F_1, \ldots, F_{12}$. If $G$ is an element of $D_2$, an analogous argument holds with the exchange of $c_1$ and $c_4$. Therefore $G$ contains no induced element of $\mathcal{F}$, and Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"} implies that $G \in$ $1$-$HNG$. ◻ Given graphs $F_i$, $1 \le i \le 26$, shown in Figure [1](#fig:fis_for_1hng){reference-type="ref" reference="fig:fis_for_1hng"}, let $\mathcal{B} = \{K_{1,3}, F_1, F_2, F_3, F_5, F_8, F_{13},\overline{F_{21}}\} \cup \{\overline{F_1}, \overline{F_2}, \ldots, \overline{F_{13}}\}$, as defined in Figure [1](#fig:fis_for_1hng){reference-type="ref" reference="fig:fis_for_1hng"}. 
We use $\mathcal{B}$ as well as Lemma [Lemma 15](#prop:three_hng_families){reference-type="ref" reference="prop:three_hng_families"} to characterize the claw-free graphs in $1$-$HNG$. **Theorem 16**. *Given a graph $G$, the following are equivalent:* 1. *$G \in$ $1$-$HNG$ and $G$ is claw-free.* 2. *$G$ contains no element of $\mathcal{B}$ as an induced subgraph.* 3. *Either $G$ is perfect, claw-free, and in $1$-$HNG$; or $\overline{G}$ is a graph in family $D_1, D_2,$ or $D_3$, possibly with the addition of some dominating vertices.* *Proof.* (i) $\rightarrow$ (iii) Let $G \in$ $1$-$HNG$ be claw-free. Assume that $G$ contains no isolated vertices; such vertices correspond to the dominating vertices of $\overline{G}$ permitted in (iii). By Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ is perfect if and only if it is $C_5$-free. If so, we are done, and if not, suppose $G$ contains an induced $C_5$ on vertex set $C = \{c_1, c_2, c_3, c_4, c_5\}$ in that order. Let $D = V(G) - C$. Since $G$ is claw-free, $D$ cannot contain any vertex $v$ of type $\{c_1\}, \{c_1, c_3\},$ or $\{c_1, c_3, c_4\}$, up to symmetry, else $\{c_1, c_2, c_5, v\}$ induces a claw. Additionally, $D$ contains no vertex $v$ of type $\{\emptyset\}$, since then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, its neighbor set contains a vertex $w$ with two non-adjacent neighbors in $C$, inducing a claw. By virtue of the remaining permissible types, Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} implies $D$ must induce a clique. If $D$ contains a vertex $v$ of type $\{c_1, c_2\}$, up to symmetry, then Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} allows $D$ to contain vertices of types $\{c_1, c_2\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_5\},$ $\{c_1, c_2, c_3, c_4\},$ $\{c_1, c_2, c_3, c_5\},$ $\{c_1, c_2, c_4, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$. However, if $D$ contains a vertex $w$ of type $\{c_1, c_2, c_3, c_5\}$ or $\{c_1, c_2, c_3, c_4, c_5\},$ then $\{v, w, c_3, c_5\}$ induces a claw. Since $D$ cannot simultaneously contain vertices of types $\{c_1, c_2, c_5\}$ and $\{c_1, c_2, c_3, c_4\}$ or $\{c_1, c_2, c_4, c_5\}$ and $\{c_1, c_2, c_3, c_4\}$ by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, it follows that $D$ contains vertices of types from the set $\{\{c_1, c_2\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_5\}\}$ or $\{\{c_1, c_2\},$ $\{c_1, c_2, c_3\},$ $\{c_1, c_2, c_3, c_4\}\}$. If the former, then $\overline{G}$ is in $D_1$, and if the latter, $\overline{G}$ is in $D_2$. If $D$ contains no vertices of type $\{c_1, c_2\}$ up to symmetry, then the vertices of $D$ are of types from the set $\{\{c_1, c_2, c_3\}, \{c_1, c_2, c_5\}, \{c_1, c_2, c_3, c_4\}, \{c_1, c_2, c_3, c_5\}, \{c_1, c_2, c_3, c_4, c_5\}\}$, up to symmetry. However, if $D$ contains a vertex of type $\{c_1, c_2, c_3, c_4\},$ it cannot contain vertices of types $\{c_1, c_2, c_5\}$ or $\{c_1, c_2, c_3, c_5\}$. Up to symmetry, we conclude the vertices of $D$ are from the set $\{\{c_1, c_2, c_3\}$, $\{c_1, c_2, c_5\}$, $\{c_1, c_2, c_3, c_5\}$, $\{c_1, c_2, c_3, c_4, c_5\}\}$, and so $\overline{G}$ is in family $D_3$. \(iii\) $\rightarrow$ (ii) Let $G$ be a graph such that $\overline{G}$ is in family $D_1, D_2$, or $D_3$, possibly with dominating vertices. 
Since dominating and isolated vertices do not affect inclusion in $1$-$HNG$, suppose $G$ has none. Since $1$-$HNG$ is closed under complementation, Lemma [Lemma 15](#prop:three_hng_families){reference-type="ref" reference="prop:three_hng_families"} implies $G \in$ $1$-$HNG$, and hence $G$ contains no element of $\mathcal{B}$ as an induced subgraph, except possibly the claw. We now show that if $G$ contains an induced $C_5$, it is claw-free. Suppose $\overline{G}$ is in $D_1$ or $D_2$. Since $D$ induces a clique, the only stable sets on at least three vertices are of the form $\{v, c_3, c_5\}$ with $v \in D$, and no vertex is adjacent to all three of these vertices. Thus $G$ contains no induced claw. Otherwise, if $\overline{G}$ is in $D_3$, then since $D$ induces a clique, each vertex in $D$ has at most two non-neighbors, which are adjacent. Thus $\alpha(G) = 2$ and $G$ contains no induced claw. \(ii\) $\rightarrow$ (i) Since $\mathcal{B}$ is precisely the subset of graphs in $\mathcal{F}$ not containing an induced claw, together with the claw, the result follows by Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}. ◻ In anticipation of characterizing triangle-free graphs of $1$-$HNG$, we provide a lemma about the structure of bipartite graphs with no $3K_2$ subgraph. By Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, this result also characterizes bipartite graphs in $1$-$HNG$. Let $S_{m, n-m}$ denote the double star graph with center vertices of degree $m$ and $n-m$; see Figure [3](#fig:claw_triangle_free){reference-type="ref" reference="fig:claw_triangle_free"}. **Lemma 17**. *A graph $G$ is bipartite and has no $3K_2$ subgraph if and only if $G$ is a subgraph of $K_{2,n-2}$ or $S_{m, n-m}$.* *Proof.* Suppose first that $G$ is a subgraph of a double star graph with non-pendant vertices $v$ and $w$; hence $G$ is bipartite. Since all edges of $G$ are incident to $v$ or $w$, $G$ does not contain $3K_2$ as a subgraph. If instead $G$ is a subgraph of $K_{2,n-2}$, then every edge is incident to one of the two vertices in the part of size two, so again $G$ contains no $3K_2$ subgraph. Alternatively, suppose $G$ is bipartite and does not contain $3K_2$ as a subgraph. If one side of any possible bipartition has two or fewer vertices, then $G$ is a subgraph of $K_{2,n-2}$. Otherwise, in any bipartition $V(G) = A \cup B$ of $G$, both sides have three or more vertices, and suppose for a contradiction that $G$ is not a subgraph of a double star graph. Thus two vertices in $A$ must have different neighbors in $B$; suppose $E(G)$ contains $a_1b_1$ and $a_2b_2$. If there exists an edge with endpoints outside $a_1, a_2, b_1, b_2$, $G$ must contain $3K_2$; else, all edges are incident to one of these vertices. If there exists $a_3$ adjacent to $b_1$ and $b_3$ adjacent to $a_1$, then $a_1b_3, a_2b_2, a_3b_1$ form $3K_2$, and analogously if there exists $a_4$ adjacent to $b_2$ and $b_4$ adjacent to $a_2$. Otherwise, without loss of generality, all vertices in $A$ besides $a_1$ and $a_2$ are adjacent at most to $b_1$, and all vertices in $B$ besides $b_1$ and $b_2$ are adjacent at most to $a_2$. This is a subgraph of a double star graph unless $E(G)$ contains $a_1b_2$. In that case, since $A$ and $B$ each contain a third non-isolated vertex $a_3$ and $b_3$, the edges $a_1b_2, a_2b_3, a_3b_1$ form $3K_2$. ◻ To end this section, we characterize the triangle-free graphs in $1$-$HNG$. The structural characterization in (iii) is illustrated in Figure [3](#fig:claw_triangle_free){reference-type="ref" reference="fig:claw_triangle_free"}. **Theorem 18**. *Given a graph $G$, the following are equivalent:* 1. *$G \in$ $1$-$HNG$ and $\omega(G) = 2$.* 2. 
*$G$ contains no induced $K_3, F_1, \ldots, F_{12}$.* 3. *$G$ is either (1) a subgraph of $K_{2,n-2}$, (2) a subgraph of $S_{m, n-m}$, or (3) a graph in family $D_3$, possibly with isolated vertices.* *Proof.* (i) $\rightarrow$ (iii) Let $G \in$ $1$-$HNG$ with $\omega(G) = 2$. By Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ is perfect if and only if it is $C_5$-free. If so, $\omega = \chi = 2$ and so $G$ is bipartite. By Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, $G$ has no $3K_2$ subgraph, and so by Lemma [Lemma 17](#prop:bipartite_doublestar){reference-type="ref" reference="prop:bipartite_doublestar"}, $G$ is a subgraph of $K_{2,n-2}$ or a double star graph. Otherwise, $G$ is imperfect and contains an induced $C_5$ on some vertex subset $\{c_1, c_2, c_3, c_4, c_5\}$. Any other vertices must be of types $\{\emptyset\}, \{c_1\}$, and $\{c_1, c_3\}$, up to symmetry, since other types would induce $K_3$. The result follows from Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}. \(iii\) $\rightarrow$ (ii) If $G$ is a subgraph of $K_{2,n-2}$ or the double star graph, it clearly is bipartite and contains no induced $K_3$, and Lemma [Lemma 17](#prop:bipartite_doublestar){reference-type="ref" reference="prop:bipartite_doublestar"} implies $\nu(G) \le 2$, so $G$ contains no induced $F_1, \ldots, F_{12}$. If $G$ is a graph in family $D_3$, possibly with isolated vertices, then $G$ contains no induced $K_3$. By Lemma [Lemma 15](#prop:three_hng_families){reference-type="ref" reference="prop:three_hng_families"}, $G \in$ $1$-$HNG$. \(ii\) $\rightarrow$ (i) Clearly $\omega(G) = 2$. All forbidden induced subgraphs of $1$-$HNG$, as laid out in Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"}, are given in (ii) or contain $K_3$, so we conclude that $G \in$ $1$-$HNG$. ◻ # Optimization In this section we show that membership in $1$-$HNG$, as well as several key invariants of its graphs, can be determined in polynomial time. These results largely follow from the forbidden induced subgraph characterization and work in Section 4 on apex-perfection. **Theorem 19**. *Graphs in $1$-$HNG$ can be recognized in polynomial time, specifically $O(n^8)$.* *Proof.* It suffices to test whether the given graph contains one of the $52$ graphs in $\mathcal{F}$ as an induced subgraph; since each has at most $8$ vertices, checking all vertex subsets of size at most $8$ takes $O(n^8)$ time. (A brute-force sketch of this test is included after the references.) ◻ Furthermore, the clique number and independence number of graphs in $1$-$HNG$ can be determined in polynomial time. Where the graphs are perfect, the results are immediate from the result by Chudnovsky et al. [@Ch05] that perfect graphs can be identified in polynomial time and from Grötschel, Lovász, and Schrijver's earlier work [@GrLoSc84] that perfect graphs can be given a minimum proper coloring in polynomial time. For imperfect graphs, we begin with a lemma refining the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"} to add an additional requirement to the deleted vertex, then continue with the full result in Theorem [Theorem 21](#prop:cliquenumber_polynomialtime){reference-type="ref" reference="prop:cliquenumber_polynomialtime"}. **Lemma 20**. *If $G \in$ $1$-$HNG$ and is imperfect, then there exists a vertex $v$ such that $G - \{v\}$ is perfect and $\omega(G - \{v\}) = \omega(G)$.* *Proof.* Let $G \in$ $1$-$HNG$ and suppose $G$ is imperfect. 
By Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ is apex-perfect, and contains at least one induced $C_5$ (note again that $G$ cannot contain any of the other forbidden induced subgraphs of perfect graphs). Let $C = \{c_1, c_2, c_3, c_4, c_5\}$ induce a copy of $C_5$ in $G$ with edge set $\{c_1c_2, c_2c_3, c_3c_4, c_4c_5, c_1c_5\}$ and let $D = V(G) - C$. If $D = \emptyset$, then $G$ is isomorphic to $C_5$ and $\omega(G) = \omega(G - \{v\})$ for all $v \in V(G)$. If $\omega(G) = 2$, then since $G$ contains $C_5$, $\omega(G - \{v\}) = 2$ for all $v \in V(G)$. Otherwise, suppose that $D$ is nonempty and $\omega(G) \ge 3$. If $D$ only has vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, up to symmetry, then by the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ cannot contain an induced $C_5$ other than $C$. Since $c_3$ is contained in no triangles, its deletion verifies the lemma. Thus, assume going forward that $G$ contains an induced $C_5$ other than $C$. Suppose next that $D$ may also have vertices of types $\{c_1, c_3\}$ and $\{c_1, c_2, c_3\}$, up to symmetry. If an induced $C_5$ contains a vertex of type $\{c_1, c_3\}$, then its neighbors in the copy of $C_5$ must be $c_1$ and $c_3$, by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}. The deletion of either verifies apex-perfection, so delete whichever of the two is not contained in every maximum clique. If an induced $C_5$ contains a vertex of type $\{c_1, c_2, c_3\}$, then its non-neighbors must analogously be $c_4$ and $c_5$. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $D$ cannot contain a vertex adjacent to both, so each is in a clique of size at most two, which is non-maximum by assumption. Delete either to reach the desired conclusion. Suppose next that $D$ may also have vertices of types $\{c_1\}$ and $\{c_1, c_2, c_3, c_4\}$, up to symmetry. If an induced $C_5$ contains neither, then the above argument holds to verify the lemma. If an induced $C_5$ contains a vertex of type $\{c_1\}$, then its neighbors in the $C_5$ must be $c_1$ and a vertex of type $\{c_3, c_4\}$. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $c_1$ may neighbor vertices of types $\{c_1\}, \{c_1, c_3\}, \{c_1, c_4\}$ and $\{c_1, c_3, c_4\}$. All of these are isolated to one another, $c_2$, and $c_5$, with the exception of vertices of type $\{c_1, c_3, c_4\}$, which may be adjacent to one another. Hence if $c_1$ is in a maximum clique, the clique must comprise $c_1$ and at least two adjacent vertices of type $\{c_1, c_3, c_4\}$. Replace $c_1$ with $c_3$ and $c_4$ to obtain a strictly larger clique, for a contradiction. We conclude that $c_1$ is in no maximum clique, so its deletion verifies apex-perfection, by the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, and $\omega(G - \{c_1\}) = \omega(G)$. If an induced $C_5$ contains a vertex of type $\{c_1, c_2, c_3, c_4\}$, then its non-neighbors in the $C_5$ must be $c_5$ and a vertex of type $\{c_2, c_3, c_5\}$. 
Since this case is the complement of the case in the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"} where an induced $C_5$ contains a vertex of type $\{c_1\}$, it follows that $G - \{c_5\}$ is perfect. By Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, $c_5$ is in a clique of size at least $3$ only if all other clique vertices are of type $\{c_2, c_3, c_5\}$, but these vertices together with $c_2$ and $c_3$ form a strictly larger clique. Hence $\omega(G - \{c_5\}) = \omega(G)$. Lastly, suppose $D$ may also contain vertices of types $\{\emptyset\}$ and $\{c_1, c_2, c_3, c_4, c_5\}$. Neither can be in any induced copies of $C_5$. Any vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ is included in all maximum cliques in $G$ and its induced subgraphs $G - \{v\}$ for all other $v \in V(G)$, since its only possible non-neighbors, vertices of types $\{c_1\}$ or $\{c_2, c_5\}$, up to symmetry, are never included in any maximum cliques. (Any two adjacent neighbors of a vertex of these types are adjacent to both $c_3$ and $c_4$, which would produce a strictly larger clique). Hence a vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ cannot impact the relative size of a maximum clique, and we proceed assuming none exist in $G$. Any vertex of type $\{\emptyset\}$ is included in no maximum cliques in $G$ or its induced subgraphs $G - \{v\}$ for all other $v \in V(G)$, since its neighbor set (if nonempty) forms a clique dominating two adjacent vertices of $C$. Hence a vertex of type $\{\emptyset\}$ cannot change the clique number in $G$, and we can assume no such vertex exists in $G$. ◻ Note also that the complementary result pertaining to $\alpha(G)$ holds, since $1$-$HNG$, perfect graphs, and all processes used in the proof are invariant under complementation. **Theorem 21**. *If $G \in$ $1$-$HNG$, then $\omega(G)$ and $\alpha(G)$ can be determined in polynomial time.* *Proof.* If $G$ is perfect, the result holds immediately, as shown in [@GrLoSc84]. Otherwise, assume $G$ is imperfect. We show $\omega(G)$ can be determined in polynomial time; by complementation, the same holds for $\alpha(G)$. As established in the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ contains an induced copy of $C_5$. In $O(n^5)$ time, identify a subset of vertices $\{v_1, v_2, v_3, v_4, v_5\}$ inducing $C_5$. For each $v_i$, $1 \le i \le 5$, determine whether $G - \{v_i\}$ is perfect, which can be done in polynomial time by [@Ch05], and if so compute $\omega_i = \omega(G - \{v_i\})$, which is again possible in polynomial time. In all cases, $\omega_i = \omega(G)$ or $\omega(G) - 1$. By Lemma [Lemma 20](#prop:apexperfect_cliquenumber){reference-type="ref" reference="prop:apexperfect_cliquenumber"}, $\omega_i = \omega(G)$ for at least one such $i$, so we conclude that $\omega(G) = \max\{\omega_i\}$. ◻ We now perform a similar analysis to exactly determine the chromatic number of a graph in $1$-$HNG$, and a complementary result allows us to calculate $\theta(G)$. These allow for $\chi(G)$ and $\theta(G)$ to be determined in polynomial time; see Corollary [Corollary 23](#prop:chromaticnumber_polynomialtime){reference-type="ref" reference="prop:chromaticnumber_polynomialtime"}. **Theorem 22**. 
*If $G \in$ $1$-$HNG$, then $\chi(G) = \omega(G)$ unless $G$ comprises an induced $C_5$ and a stable set of vertices of types either (i) a subset of $\{c_1\}$, $\{c_1, c_3\},$ $\{c_1, c_4\}$, and $\{c_1, c_2, c_3, c_4, c_5\}$, or (ii) both $\{c_1, c_2, c_4\}$ and $\{c_1, c_2, c_3, c_5\}$, up to symmetry. If either of these holds, $\chi(G) = \omega(G) + 1$.* *Proof.* Let $G \in$ $1$-$HNG$. If $G$ is perfect, then $\chi(G) = \omega(G)$, so suppose $G$ is imperfect. By Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $G$ is apex-perfect, and contains at least one induced $C_5$ (note again that $G$ cannot contain any of the other forbidden induced subgraphs of perfect graphs). Let $C = \{c_1, c_2, c_3, c_4, c_5\}$ induce a copy of $C_5$ in $G$ with edge set $\{c_1c_2, c_2c_3, c_3c_4, c_4c_5, c_1c_5\}$ and let $D = V(G) - C$. Suppose $D$ matches none of the conditions given in the theorem and $|D| \ge 2$ (the result is straightforward to check if $|D| = 1$). If $\omega(D) = 1$ (i.e., $D$ induces a stable set), then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"}, it follows that $D$ comprises vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, up to symmetry. The argument in the following paragraph holds. Note that by the proof of Theorem [Theorem 12](#prop:apex_perfect){reference-type="ref" reference="prop:apex_perfect"}, $D$ induces a perfect graph. First, suppose $D$ has only vertices of types $\{c_1, c_2\}$ and $\{c_1, c_2, c_4\}$, up to symmetry. If so, a minimum proper coloring of $D$ can be extended to a proper coloring of $G$ using $\chi(D) + 2$ colors by giving vertices $c_1$ and $c_2$ new colors and coloring $c_4$ with $c_1$'s color, $c_5$ with $c_2$'s color, and $c_3$ with any color used in $D$. Hence $\chi(G) \le \chi(D) + 2$. However, since $D$ is nonempty and complete to $c_1$ and $c_2$, it follows that $\omega(G) = \omega(D) + 2$. Since $D$ is perfect, we have $\chi(G) \ge \omega(G) = \omega(D) + 2 = \chi(D) + 2$, and conclude that $\chi(G) = \omega(G)$. Second, suppose $D$ may also have vertices of types $\{c_1, c_3\}$ and $\{c_1, c_2, c_3\}$, up to symmetry. If $D$ contains a vertex of type $\{c_1, c_3\}$, then $D$ contains only vertices of types $\{c_1, c_3\},$ $\{c_1, c_4\},$ $\{c_1, c_3, c_4\},$ and $\{c_3, c_4\}$, of which there must be at least two vertices of types $\{c_1, c_3, c_4\}$ and $\{c_3, c_4\}$ to produce a clique number of at least two. Vertices of types $\{c_1, c_3\}$ and $\{c_1, c_4\}$ can be freely colored with the same color as a vertex of type $\{c_1, c_3, c_4\}$ or $\{c_3, c_4\}$, so the above argument holds to show $\chi(G) = \omega(G)$. If $D$ contains a vertex of type $\{c_1, c_2, c_3\}$, then $D$ comprises either (i) vertices of types $\{c_1, c_2\}$, $\{c_1, c_2, c_3\}$, $\{c_1, c_2, c_4\}$, $\{c_1, c_2, c_5\}$, or (ii) vertices of types $\{c_2, c_3\}$, $\{c_1, c_2, c_3\}$, $\{c_2, c_3, c_4\}$, $\{c_2, c_3, c_5\}$. Assuming the former, without loss of generality, $D$ is complete to $c_1$ and $c_2$, so a minimum proper coloring of $D$ extends to $G$ with two more colors: one is used for $c_1$ and $c_3$, one is used for $c_2$ and $c_4$, and $c_5$ can be colored with the same color as a vertex of type $\{c_1, c_2, c_3\}$. Note the coloring is proper since $D$ induces a clique, so no neighbor of $c_5$ in $D$ shares the same color as a vertex of type $\{c_1, c_2, c_3\}$. The above argument holds to show that $\chi(G) = \omega(G)$. 
Third, suppose $D$ may also have vertices of types $\{c_1\}$ and $\{c_1, c_2, c_3, c_4\}$, up to symmetry. If $D$ contains a vertex of type $\{c_1\}$ not in a maximum clique, then $D$ without its vertices of type $\{c_1\}$ is as described in one of the above three paragraphs. Since the vertices of type $\{c_1\}$ are not adjacent to one another and not in a maximum clique in $D$, they can be added back into any proper coloring without using new colors. If a vertex of type $\{c_1\}$ is in a maximum clique, then since $\omega(D) \ge 2$, this maximum clique must also have one vertex of type $\{c_3, c_4\}$. Recall from the proof of Theorem [Theorem 9](#prop:FIS_for_1_hng){reference-type="ref" reference="prop:FIS_for_1_hng"} that this maximum clique cannot contain two vertices of type $\{c_3, c_4\}$. Hence $\omega(D) = 2$, so all vertices of types $\{c_3, c_4\}$ and $\{c_1, c_3, c_4\}$ are isolated from one another. Here $G$ can be properly colored with three colors: the first is used on $c_2$, $c_5$, and all vertices of types $\{c_3, c_4\}$ and $\{c_1, c_3, c_4\}$; the second is used on $c_3$ and all vertices of type $\{c_1\}$; and the third is used on $c_1$ and $c_4$. If $D$ contains a vertex of type $\{c_1, c_2, c_3, c_4\}$ and no vertex of type $\{c_2, c_3, c_5\}$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} it follows that $c_5$ has no neighbors in $D$. Hence any proper coloring of $D$ can be extended to a proper coloring of $G$ with two new colors, one for $c_1$ and $c_3$ and the other for $c_2$ and $c_4$, with a color from $D$ given to $c_5$. Furthermore, Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} implies that $\omega(G) = \omega(D) + 2$, so it follows that $\chi(G) = \omega(G)$, as reasoned above. If $D$ does have a vertex of type $\{c_2, c_3, c_5\}$, then by Proposition [Proposition 11](#prop:allowable_vertex_list){reference-type="ref" reference="prop:allowable_vertex_list"} all vertices in $D$ are adjacent to $c_2$ and $c_3$. As a result $\omega(G) = \omega(D) + 2$. Moreover, any minimum proper coloring of $D$ can be extended to a proper coloring of $G$ by giving $c_2$ and $c_4$ one new color and $c_1$ and $c_3$ a second new color. If there exists a vertex $v$ of type $\{c_1, c_2, c_3, c_4\}$ adjacent to all vertices of type $\{c_2, c_3, c_5\}$, then $v$ dominates $D$ and its color can be given to $c_5$ so that the coloring on $G$ is proper. Otherwise every vertex of type $\{c_1, c_2, c_3, c_4\}$ has a non-neighbor of type $\{c_2, c_3, c_5\}$. If there are two vertices of type $\{c_1, c_2, c_3, c_4\},$ then $G$ contains $\overline{F_{25}}$ or $\overline{F_{26}}$ as an induced subgraph. If there are two adjacent vertices of type $\{c_2, c_3, c_5\}$, then these two vertices together with $c_1, c_2, c_5$, and the vertex of type $\{c_1, c_2, c_3, c_4\}$ induce an element of $\mathcal{F}$. Otherwise all vertices of type $\{c_2, c_3, c_5\}$ are non-adjacent to one another, and the vertex of type $\{c_1, c_2, c_3, c_4\}$ is adjacent to at least one of them. Here, $\omega(D) = 2, \omega(G) = 4, \chi(D) = 2$, and $\chi(G) = 4$. Lastly, suppose $D$ also has vertices of types $\{\emptyset\}$ and/or $\{c_1, c_2, c_3, c_4, c_5\}$. 
We showed in the proof of Lemma [Lemma 20](#prop:apexperfect_cliquenumber){reference-type="ref" reference="prop:apexperfect_cliquenumber"} that a vertex of type $\{\emptyset\}$ is never in a maximum clique in $G$ or $D$, and a vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ is in all maximum cliques. Let $G'$ be the induced subgraph with no vertices of types $\{\emptyset\}$ or $\{c_1, c_2, c_3, c_4, c_5\}$, so by the above arguments, $\chi(G') = \omega(G')$. Suppose $D$ has $k$ vertices of type $\{c_1, c_2, c_3, c_4, c_5\}$. A minimum proper coloring of $G'$ can be extended to a proper coloring of $G$ with $\chi(G') + k$ colors, and $\omega(G) = \omega(G') + k$. Thus $\chi(G) \ge \omega(G) = \omega(G') + k = \chi(G') + k \ge \chi(G)$, so we conclude $\chi(G) = \omega(G)$. For the second claim, let $G$ comprise an induced $C_5$ and a stable set $D$ of vertices of types either (i) a subset of $\{c_1\}$, $\{c_1, c_3\},$ $\{c_1, c_4\}$, and $\{c_1, c_2, c_3, c_4, c_5\}$, or (ii) both $\{c_1, c_2, c_4\}$ and $\{c_1, c_2, c_3, c_5\}$, up to symmetry. In either case, if $D$ is nonempty, then $\chi(G) = \omega(G) + 1$: if $D$ contains a vertex of type $\{c_1, c_2, c_3, c_4, c_5\}$ or falls under case (ii), then $\omega(G) = 3$ and $\chi(G) = 4$, and otherwise $\omega(G) = 2$ and $\chi(G) = 3$. If $D$ is empty, then $G$ is isomorphic to $C_5$, with $\omega(G) = 2$ and $\chi(G) = 3$. ◻ **Corollary 23**. *If $G \in$ $1$-$HNG$, then $\chi(G)$ and $\theta(G)$ can be determined in polynomial time.* *Proof.* By Theorem [Theorem 21](#prop:cliquenumber_polynomialtime){reference-type="ref" reference="prop:cliquenumber_polynomialtime"}, we can determine $\omega(G)$ in polynomial time. Subsequently, in $O(n^5)$ time, identify a subset of vertices $\{v_1, v_2, v_3, v_4, v_5\}$ inducing $C_5$ (if no induced $C_5$ exists, then $G$ is perfect and $\chi(G) = \omega(G)$), and check the condition in Theorem [Theorem 22](#prop:chromaticnumber){reference-type="ref" reference="prop:chromaticnumber"} to determine if $\chi(G) = \omega(G)$ or $\chi(G) = \omega(G) + 1$. The value of $\theta(G)$ can be determined analogously by applying the same argument to $\overline{G}$. ◻ M. Aouchiche and P. Hansen, A survey of Nordhaus-Gaddum type relations, *Discrete Appl. Math.* **161** (4-5) (2013), 466--546. R. Behr, V. Sivaraman, and T. Zaslavsky, Mock threshold graphs, *Discrete Math.* **341** (2018), 2159--2178. L. W. Beineke, Characterizations of derived graphs, *J. Combin. Theory* **9** (1970), 129--135. Z. Blázsik et al., Graphs with no induced $C_4$ and $2K_2$, *Discrete Math.* **115** (1993), 51--55. C. Berge, Färbung von Graphen, deren sämtliche bzw. deren ungerade Kreise starr sind, *Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg Math.-Natur. Reihe* **10** (1961), 114 (in German). A. Brandstädt, V. B. Le, and J. P. Spinrad, Graph Classes: A Survey, SIAM Monographs Discrete Math. Appl., Society for Industrial and Applied Mathematics, Philadelphia (1999). C. Cheng, K. L. Collins, and A. N. Trenk, Split graphs and Nordhaus--Gaddum graphs, *Discrete Math.* **339** (2016), 2345--2356. M. Chudnovsky et al., Recognizing Berge graphs, *Combinatorica* **25** (2005), 143--186. M. Chudnovsky, N. Robertson, P. Seymour, and R. Thomas, The strong perfect graph theorem, *Ann. Math.* **164** (2006) (1), 51--229. V. Chvátal and P. L. Hammer, Set-packing and threshold graphs, Univ. Waterloo Res. Rep. CORR 73-21 (1973). V. Chvátal and P. L. Hammer, Aggregation of inequalities in integer programming, in Annals of Discrete Mathematics 1: Studies in Integer Programming, ed. P. L. Hammer, E. I. Johnson, B. H. Korte, and G. L. Nemhauser, North Holland (1977), 145--162. K. L. Collins and A. N. Trenk, Nordhaus-Gaddum Theorem for the Distinguishing Chromatic Number, *Electron. J. of Combin.* **20** (3) (2013). R. Diestel, Graph Theory, 4th ed., in: Grad. Texts in Math. **173**, Springer, Heidelberg (2010). G. A. 
Dirac, On rigid circuit graphs, *Abh. Math. Sem. Univ. Hamburg* **25** (1961), 71--76. R. Faudree, E. Flandrin, and Z. Ryjáček, Claw-free graphs--a survey, *Discrete Math.* **164** (1997), 87--147. H. J. Finck, On the chromatic number of a graph and its complements, Theory of Graphs, Proceedings of the Colloquium, Tihany, Hungary (1966), 99--113. S. Földes and P. Hammer, Split graphs, *Congr. Numer.* **19** (1977), 311--315. M. C. Golumbic, Algorithmic Graph Theory and Perfect Graphs, Academic Press, New York, 1980; 2nd ed., Ann. Discrete Math. **57** (2004). M. Grötschel, L. Lovász, and A. Schrijver, Polynomial algorithms for perfect graphs, Ann. Discrete Math. **21** (1984), 325--356. P. Hammer and B. Simeone, The splittance of a graph, *Combinatorica* **1** (1981), 275--284. R. B. Hayward, Weakly triangulated graphs, *J. Combin. Theory Ser. B* **39** (3) (1985), 200--208. B. Litjens, S. Polak, and V. Sivaraman, Sum-perfect graphs, *Discrete Appl. Math.* **259** (2019), 232--239. L. Lovász, A characterization of perfect graphs, *J. Combin. Theory Ser. B* **13** (1972), 95--98. N. V. R. Mahadev and U. N. Peled, *Threshold Graphs and Related Topics,* in: Ann. Discrete Math. **56**, North-Holland, Amsterdam (1995). E. A. Nordhaus and J. W. Gaddum, On complementary graphs, *Amer. Math. Monthly* **63** (1956), 175--177. A. Scott and P. Seymour, A Survey of $\chi$-boundedness, *J. Graph Theory* **95** (3) (2020), 473--504. C. L. Starr and G. E. Turner III, Complementary graphs and the chromatic number, *Missouri J. Math. Sci.* **20** (1) (2008), 19--26. D. B. West, Introduction to Graph Theory, 2nd ed., Prentice Hall, Upper Saddle River, N.J. (2001). J.-H. Yan, J.-J. Chen, and G. J. Chang, Quasi-threshold graphs, *Discrete Appl. Math.* **69** (3) (1996), 247--255.
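As promised in the proof of Theorem 19, the following brute-force sketch makes the recognition procedure concrete. The code is our illustration, not the authors'; the list `forbidden` is a placeholder standing in for the $52$ graphs of $\mathcal{F}$ (each on at most $8$ vertices) and is not reproduced here.

```python
# Hedged sketch of the O(n^8) membership test behind Theorem 19.
# `forbidden` is assumed to be the list of the 52 forbidden graphs,
# each with at most 8 vertices; supplying that list is left to the reader.
import itertools
import networkx as nx

def contains_induced(G: nx.Graph, F: nx.Graph) -> bool:
    """Return True if G contains F as an induced subgraph."""
    k = F.number_of_nodes()
    return any(nx.is_isomorphic(G.subgraph(S), F)
               for S in itertools.combinations(G.nodes, k))

def in_1_hng(G: nx.Graph, forbidden: list) -> bool:
    """G is in 1-HNG iff no graph in `forbidden` occurs as an induced subgraph."""
    return not any(contains_induced(G, F) for F in forbidden)
```

Since every forbidden graph has at most $8$ vertices, the outer loop ranges over $O(n^8)$ vertex subsets, and each isomorphism check is on graphs of bounded size, matching the bound stated in Theorem 19.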
--- abstract: | Let $G$ be a bipartite graph with bipartition $(X,Y)$. Inspired by a hypergraph problem, we seek an upper bound on the number of disjoint paths needed to cover all the vertices of $X$. We conjecture that a Hall-type sufficient condition holds based on the maximum value of $|S|-|{\mathsf{\Lambda}}(S)|$, where $S\subseteq X$ and ${\mathsf{\Lambda}}(S)$ is the set of all vertices in $Y$ with at least two neighbors in $S$. This condition is also a necessary one for a hereditary version of the problem, where we delete vertices from $X$ and try to cover the remaining vertices by disjoint paths. The conjecture holds when $G$ is a forest, has maximum degree $3$, or is regular with high girth, and we prove those results in this paper. author: - | Mikhail Lavrov Jennifer Vandenbussche\ Department of Mathematics\ Kennesaw State University\ Kennesaw, GA\ `{mlavrov,jvandenb}@kennesaw.edu` title: A Hall-type condition for path covers in bipartite graphs --- # Introduction ## Path covers of bipartite graphs Problems regarding path covers of graphs are ubiquitous in graph theory. A *path cover* of $G$ is a collection of vertex-disjoint paths in $G$ where the union of the vertices of the paths is $V(G)$. Certainly the most well-studied example looks for a single path covering all vertices of $G$, i.e. a Hamiltonian path. Graphs with such a path are also called *traceable*. See [@gould2003advances] for a survey of results in this area. Determining whether a graph has a Hamiltonian path is NP-complete even for very restrictive classes of graphs; for example, Akiyama et al. [@akiyama1980np] prove that it is NP-complete for 3-regular bipartite graphs. In graphs that are not traceable, we may seek a path cover with as few paths as possible. For example, Magnant and Martin [@Magnant2009pathcover] conjecture that a $d$-regular graph $G$ can be covered with at most $|V(G)|/(d+1)$ paths, and prove this when $d \leq 5$. Feige and Fuchs [@feige20226regular] extend the result to $d=6$. In [@Magnant2016pathcovermindegree], Magnant et al. conjecture that a graph with maximum degree $\Delta$ and minimum degree $\delta$ needs at most $\max\left\{\frac{1}{\delta+1},\frac{\Delta-\delta}{\Delta+\delta}\right\}\cdot |V(G)|$ paths to cover its vertices, which they verify for $\delta \in \{1,2\}$ and which Kouider and Zamime [@kouider2022preprint] prove for $\Delta \ge 2\delta$. For dense $d$-regular bipartite graphs, Han [@han2018] proves that a collection of $|V(G)|/(2d)$ vertex-disjoint paths covers all but $o(|V(G)|)$ vertices. In this paper, we focus on a variant of the path cover problem for bipartite graphs: collections of vertex-disjoint paths that cover one partite set of the bipartite graph. Let an *$(X,Y)$-bigraph* be a bipartite graph with a specified ordered bipartition $(X,Y)$. If $G$ is an $(X,Y)$-bigraph, a *path $X$-cover* of $G$ is a set of pairwise vertex-disjoint paths in $G$ that cover all of $X$. We seek a Hall-type condition for the existence of a path $X$-cover of $G$ with at most $k$ paths. Let $S \subseteq X$, and let ${\mathsf{\Lambda}}_G(S)$ be the set of all vertices in $Y$ that have at least two neighbors in $S$; in cases where there is only one graph $G$ under consideration, we will write ${\mathsf{\Lambda}}_G(S)$ simply as ${\mathsf{\Lambda}}(S)$. 
We define the *${\mathsf{\Lambda}}$-deficiency of $S$* to be $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S) := |S|-|{\mathsf{\Lambda}}(S)|$, and the *${\mathsf{\Lambda}}$-deficiency of $G$* to be $$\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) := \max\{\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S) : S \subseteq X\}.$$ We conjecture the following:

**Conjecture 1**. *Every $(X,Y)$-bigraph $G$ has a path $X$-cover by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ paths.*

If this conjecture holds, then for every $S \subseteq X$, there is a set of at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ vertex-disjoint paths whose intersection with $X$ is precisely $S$. To see this, just delete all the vertices in $X-S$ from $G$, which can only decrease the ${\mathsf{\Lambda}}$-deficiency.

Conversely, suppose it is true that for every $S \subseteq X$, there is a set of at most $k$ vertex-disjoint paths whose intersection with $X$ is precisely $S$. Then for every $S$, these paths have at least $|S|-k$ internal vertices in $Y$ that are all elements of ${\mathsf{\Lambda}}(S)$; therefore $|{\mathsf{\Lambda}}(S)| \ge |S|-k$ for all $S$, which implies that $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) \le k$. It follows that the condition in our conjecture is a *necessary* one if we would like to draw the stronger conclusion in the preceding paragraph.

Our conjecture is a slightly weakened form of a conjecture on cycle covers proposed in [@kostochka2021conditions]:

**Conjecture 2**. *Let $G$ be an $(X,Y)$-bigraph with the property that for all $S \subseteq X$ with $|S|>2$, $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S) \le 0$. Then $G$ contains a cycle that covers all of $X$.*

We claim Conjecture [Conjecture 2](#conjecture:super-cyclic){reference-type="ref" reference="conjecture:super-cyclic"} implies Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"}. Let $H$ be the graph obtained from $G$ by adding ${\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)}$ more vertices to $Y$, each of which is adjacent to every vertex in $X$. Then for all $S \subseteq X$ with $|S|>2$ (and even with $|S|=2$), we have $\mathop{\mathrm{{\mathsf\Lambda}-def}}(H,S) \le 0$, since all the new vertices of $H$ are in ${\mathsf{\Lambda}}_H(S)$. Now a cycle in $H$ covering all of $X$ yields a path $X$-cover of $G$ by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ paths by deleting all the new vertices.

## Hypergraphs and the Gallai--Milgram theorem

The setting of Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} can be translated into the language of hypergraphs and Berge paths in hypergraphs, and here we see the motivation for focusing on path covers of $X$. Following the terminology of Berge [@berge73], a *hypergraph* $H$ consists of a set of vertices $V(H)$ and a set of edges $E(H)$ where each edge $e \in E(H)$ is a subset of $V(H)$. (We allow edges of any size.) The *subhypergraph of $H$ generated by a set $S \subseteq V(H)$* is the hypergraph with vertex set $S$ and edge set $$\{e \cap S : e \in E(H),\ e \cap S \ne \varnothing\}.$$ There are several notions of paths in hypergraphs that generalize paths in graphs. One such notion is that of a *Berge path*: a sequence $$(v_0, e_1, v_1, e_2, v_2, \dots, e_\ell, v_\ell)$$ where $v_0, v_1, \dots, v_\ell$ are distinct vertices in $V(H)$, $e_1, e_2, \dots, e_\ell$ are distinct edges in $E(H)$, and $\{v_{i-1}, v_i\} \subseteq e_i$ for all $i=1, \dots, \ell$.
Given a hypergraph $H$, we can define its *incidence graph* to be the $(X,Y)$-bigraph $G$ with $X = V(H)$ and $Y = E(H)$ such that $xy \in E(G)$ if and only if $x \in X$, $y \in Y$, and $x \in y$. Berge paths in $H$ correspond to paths in $G$ that begin and end in $X$; these are vertex-disjoint in $G$ if and only if they are both vertex-disjoint and edge-disjoint in $H$. If we define a *Berge path cover* of the hypergraph $H$ to be a set of pairwise vertex- and edge-disjoint paths that cover all of $V(H)$, then Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} proposes a sufficient condition for $H$ to have a Berge path cover of size at most $k$. Moreover, the proposed sufficient condition is a necessary condition for every subhypergraph of $H$ to have a Berge path cover of size at most $k$. This statement is reminiscent of the Gallai--Milgram theorem ([@gallai1960], p. 298 in [@berge73]), which states that the vertices of any directed graph $D$ can be covered by at most $\alpha(D)$ disjoint paths, where $\alpha(D)$ is the independence number of $D$. (The weaker statement for undirected graphs clearly follows.) For a hypergraph $H$, let a set $I \subseteq V(H)$ be *strongly independent* (following the terminology of Berge) if $|e \cap I| \le 1$ for all $e \in E(H)$; let $\alpha(H)$, the *strong independence number of $H$*, be the size of a largest strongly independent set in $H$. It would be natural to hope that $H$ has a path cover by at most $\alpha(H)$ pairwise-disjoint paths. In [@muller1981oriented], Müller proves such a generalization of the Gallai--Milgram theorem (and, in fact, a generalization of it to directed hypergraphs), but in a slightly different setting: Müller does not require the edges of a path to be distinct, and does not require the paths in the cover to be edge-disjoint, merely vertex-disjoint. In our setting, the corresponding generalization is false. Translating from hypergraphs back into the language of graphs: a set $I \subseteq V(H)$ is strongly independent if and only if, in the incidence graph of $H$, ${\mathsf{\Lambda}}(I) = \varnothing$. Generalizing to an arbitrary $(X,Y)$-bigraph $G$, let $S \subseteq X$ be *${\mathsf{\Lambda}}$-independent* if ${\mathsf{\Lambda}}(S) = \varnothing$, and let the *${\mathsf{\Lambda}}$-independence number* $\alpha_{\mathsf{\Lambda}}(G)$ be the size of a largest ${\mathsf{\Lambda}}$-independent set. Note that if $S$ is ${\mathsf{\Lambda}}$-independent, then $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S)=|S|$, so $\alpha_{\mathsf{\Lambda}}(G)$ is always at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$. To see that an $(X,Y)$-bigraph $G$ may not have a path $X$-cover with at most $\alpha_{\mathsf{\Lambda}}(G)$ paths, even if $G$ is balanced and has a high connectivity, consider the following family of examples. Fix an integer $k$ between 1 and $n$, and let $X = \{x_1, \dots, x_n\}$ and $Y = \{y_1, \dots, y_n\}$ with $x_i y_j \in E(G)$ when $i \le k$ or $j \le k$. Then $\alpha_{\mathsf{\Lambda}}(G)=1$, since any two vertices share the neighbor $y_1$, but $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) = n-2k+1$ (choose $S=\{x_k,x_{k+1}, \dots, x_n\}$), and in fact it can be checked that a minimum path $X$-cover contains $n-2k+1$ paths. 
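The family just described is small enough to verify by brute force. The sketch below (ours, not from the paper) computes $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ and $\alpha_{\mathsf{\Lambda}}(G)$ directly from their definitions and confirms that $\alpha_{\mathsf{\Lambda}}(G) = 1$ while $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) = n - 2k + 1$ for small $n$ and $2k \le n$ (the regime in which $n - 2k + 1 \ge 1$); the $1$-based indexing and helper names are our own.

```python
from itertools import combinations

def nbrs(i, n, k):
    """Indices j such that x_i y_j is an edge: i <= k or j <= k."""
    return {j for j in range(1, n + 1) if i <= k or j <= k}

def big_lambda(S, n, k):
    """Lambda(S): indices j such that y_j has at least two neighbours in S."""
    return {j for j in range(1, n + 1)
            if sum(1 for i in S if j in nbrs(i, n, k)) >= 2}

def lambda_def(n, k):
    X = range(1, n + 1)
    return max(len(S) - len(big_lambda(S, n, k))
               for r in range(1, n + 1) for S in combinations(X, r))

def alpha_lambda(n, k):
    X = range(1, n + 1)
    return max(len(S) for r in range(1, n + 1) for S in combinations(X, r)
               if not big_lambda(S, n, k))

for n in range(2, 7):
    for k in range(1, n + 1):
        assert alpha_lambda(n, k) == 1               # any two x's share the neighbour y_1
        if 2 * k <= n:                               # regime where n - 2k + 1 >= 1
            assert lambda_def(n, k) == n - 2 * k + 1
print("checked")
```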
However, in all the cases of Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} we consider where $G$ is a *regular* graph, $\alpha_{\mathsf{\Lambda}}(G)$ paths suffice for a path $X$-cover of $G$. Whether this holds for all regular bigraphs $G$ is an open question that would have far-reaching consequences. For example, a result of Singer [@singer1938theorem] states that the incidence graph of any classical projective plane is Hamiltonian. The proof relies on algebra over finite fields, but the claim above would give a purely graph-theoretic reason that these incidence graphs are always traceable, since the incidence graph $G$ of any projective plane must have $\alpha_{\mathsf{\Lambda}}(G)=1$. More generally, a hypergraph $H$ is *covering* if every pair of vertices of $H$ lie on a common edge: in other words, $\alpha(H)=1$. Lu and Wang [@lu2021hamiltonian] prove that every $\{1,2,3\}$-uniform covering hypergraph has a Hamiltonian Berge cycle. This implies Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} for $(X,Y)$-bigraphs $G$ with maximum degree $3$ in $Y$ and $\alpha_{\mathsf{\Lambda}}(G)=1$. ## Our results Our first result states that Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} holds for forests: **Proposition 3**. *If $G$ is an $(X,Y)$-bigraph with no cycles, then $G$ has a path $X$-cover of size at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$.* To strengthen Proposition [Proposition 3](#result:forest){reference-type="ref" reference="result:forest"}, we go in two directions: we consider graphs with low maximum degree and graphs with high girth. In the first case, we begin by proving: **Theorem 4**. *If $G$ is a $3$-regular $(X,Y)$-bigraph, then $G$ has a path $X$-cover of size at most $\alpha_{\mathsf{\Lambda}}(G)$.* The proof of Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"} begins by taking a $2$-factor of $G$, covering the graph (and, in particular, $X$) with pairwise vertex-disjoint cycles. If we generalize to graphs with maximum degree $3$, we are unable to do this, but if we cover as much of $G$ with cycles as possible, we are left with a forest. Once we deal with the interaction between the forest and the cycles, we can combine the arguments of Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"} with Proposition [Proposition 3](#result:forest){reference-type="ref" reference="result:forest"} to prove a result for all graphs with maximum degree $3$: **Theorem 5**. *If $G$ is an $(X,Y)$-bigraph with maximum degree at most $3$, then $G$ has a path $X$-cover of size at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$.* It is particularly interesting to strengthen Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"} to Theorem [Theorem 5](#result:max degree 3){reference-type="ref" reference="result:max degree 3"} because if $G$ has maximum degree at most $3$, then so does every subgraph of $G$. As a result, we obtain a necessary and sufficient condition for an $(X,Y)$-bigraph $G$ of maximum degree $3$ to have the property that for all $S \subseteq X$, there is a set of at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ pairwise vertex-disjoint paths whose intersection with $X$ is precisely $S$. 
Conjecture [Conjecture 1](#conjecture:path cover){reference-type="ref" reference="conjecture:path cover"} holds for regular bigraphs of any degree if we add a condition on the *girth* of $G$, that is, the length of the shortest cycle in $G$. **Theorem 6**. *Let $G$ be an $(X,Y)$-bigraph with maximum degree at most $d$ and girth at least $4ed^2+1$, and assume that there exists a collection of pairwise vertex-disjoint cycles in $G$ that cover all of $X$. (In particular, such a collection is guaranteed to exist if $G$ is $d$-regular.)* *Then $G$ has a path $X$-cover of size at most $\alpha_{\mathsf{\Lambda}}(G)$.* # Forests *Proof of Proposition [Proposition 3](#result:forest){reference-type="ref" reference="result:forest"}.* We may assume that $G$ has no leaves in $Y$, since a vertex in $Y$ of degree $1$ does not contribute to $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S)$ for any $S$, and it does not help cover more of $X$ by paths. We may also assume that $G$ is a tree; if $G$ has multiple components, we can solve the problem on each component separately. We induct on $|X|$. When $|X|=1$, we have $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G, X) = 1$, and we can cover $X$ by a single path of length $0$. When $|X|>1$, consider $G$ as a rooted tree with an arbitrary root in $X$. Let $x \in X$ be a leaf of $G$ at the furthest distance possible from the root, and let $y \in Y$ be the parent vertex of $x$. **Case 1:** $y$ has other children. Let $x_1, \dots, x_k$ be all the children of $y$ (including $x$); by the case, $k\ge 2$. Since $x$ was chosen to be as far from the root as possible, each $x_i$ must be a leaf. Delete $x_1, x_2, \dots, x_k, y$ from $G$ to get $G'$. Let $S \subseteq X - \{x_1, \dots, x_k\}$ be the set such that $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G') = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G', S)$. We claim that $$\mathop{\mathrm{{\mathsf\Lambda}-def}}(G, S \cup \{x_1, \dots, x_k\}) = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G', S) + k - 1.$$ On one hand, $|S \cup \{x_1, \dots, x_k\}| = |S|+k$. On the other hand, ${\mathsf{\Lambda}}_G(S \cup \{x_1, \dots, x_k\}) = {\mathsf{\Lambda}}_{G'}(S) \cup \{y\}$, so $|{\mathsf{\Lambda}}_G(S \cup \{x_1, \dots, x_k\})| = |{\mathsf{\Lambda}}_{G'}(S)|+1$. In particular, $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) \ge \mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S) + k-1$. By the inductive hypothesis, $G'$ has a path $X$-cover by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S)$ paths. Add $k-1$ more paths to that set: the path $(x_1, y, x_2)$ and the length-$0$ paths $(x_3), \dots, (x_k)$. This is a path $X$-cover of $G$ by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S) + k-1 \le \mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ paths, completing the case. **Case 2:** $y$ has no other children. Let $x^* \in X$ be the parent vertex of $y$. **Case 2a:** $\deg(x^*) \le 2$ (this includes the case where $x^*$ is the root and $\deg(x^*)=1$). Delete $x$ and $y$ from $G$ to get $G'$. For every $S \subseteq X - \{x\}$, we have $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S) = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S)$, since $y$ cannot be in ${\mathsf{\Lambda}}_G(S)$ and all other vertices of $Y$ are still in $G'$. Therefore $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) \ge \mathop{\mathrm{{\mathsf\Lambda}-def}}(G')$. By the inductive hypothesis, $G'$ has a path $X$-cover by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G')$ paths. 
By the case, $x^*$ is a leaf of $G'$ (or an isolated vertex), so the path that covers $x^*$ must begin or end at $x^*$. Extend that path to go through $y$ and $x$, and we get a path $X$-cover of $G$ by $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G') \le \mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ paths, completing the case. **Case 2b:** $\deg(x^*) \ge 3$. Let $y_1, \dots, y_k$ be all of the children of $x^*$ (including $y$); by the case, $k\ge 2$. No vertices of $y$ are leaves, so each has a child. By our choice of $x$, those children are all as far from the root as possible, so they must all be leaves. If any of $y_1, \dots, y_k$ have multiple children, then we can proceed as in **Case 1**, so assume each $y_i$ has a single child $x_i$. Delete $x_k$ and $y_k$ from $G$ to get $G'$. Let $S \subseteq X - \{x_k\}$ be the set such that $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G') = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S)$. We may assume that $x^* \notin S$ by one of the following modifications: - If $x^* \in S$ and $x_1 \notin S$, replace $S$ by $S' = S \cup \{x_1\} - \{x^*\}$. Then $|S'| = |S|$ and $|{\mathsf{\Lambda}}_{G'}(S')| \le |{\mathsf{\Lambda}}_{G'}(S)|$: $y_1$ is in neither ${\mathsf{\Lambda}}_{G'}(S)$ nor ${\mathsf{\Lambda}}_{G'}(S')$, and no other vertices in $Y$ have any neighbors in $S'$ that they did not have in $S$. So $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S') \ge \mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S)$. - If $x^* \in S$ and $x_1 \in S$, replace $S$ by $S' := S-\{x^*\}$. Then $|S'| = |S|-1$, but $|{\mathsf{\Lambda}}_{G'}(S')| \le |{\mathsf{\Lambda}}_{G'}(S)|-1$ as well, since $y_1 \in {\mathsf{\Lambda}}_{G'}(S)$ but $y_1 \notin {\mathsf{\Lambda}}_{G'}(S')$. (No other vertices in $Y$ have any neighbors in $S'$ that they did not have in $S$.) So $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S') \ge \mathop{\mathrm{{\mathsf\Lambda}-def}}(G',S)$. When $x^* \notin S$, we have $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G, S \cup \{x_k\}) = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G', S)+1$, because $|S \cup \{x_k\}| = |S|+1$, while ${\mathsf{\Lambda}}_{G}(S \cup \{x_k\})={\mathsf{\Lambda}}_{G'}(S)$. Therefore $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G) \ge \mathop{\mathrm{{\mathsf\Lambda}-def}}(G') + 1$. By the inductive hypothesis, $G'$ has a path $X$-cover by at most $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G')$ paths. Add the path $(x_k)$ to get a path $X$-cover of $G$ by $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G') + 1 \le \mathop{\mathrm{{\mathsf\Lambda}-def}}(G)$ paths, completing the case and the proof. ◻ # 3-regular graphs It is a standard result (Corollary 3.1.13 in [@west1996introduction]) that every regular bipartite graph has a perfect matching. Removing a perfect matching from a $d$-regular bipartite graph leaves a $(d-1)$-regular bipartite graph, which also has a perfect matching. The union of the two matchings provides a cover of $G$ by vertex-disjoint cycles, giving the following lemma (which is also well-known): **Lemma 7**. *If $G$ is a regular bipartite graph, then $G$ has a cycle cover.* The existence of this lemma is the primary reason that this proof is simpler than the proof of Theorem [Theorem 5](#result:max degree 3){reference-type="ref" reference="result:max degree 3"} in the next section. That proof begins with the same ideas, but must deal with vertices of $X$ that are not part of the initially chosen collection of cycles. 
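The construction behind Lemma 7 is easy to run: extract one perfect matching, remove its edges, extract a second perfect matching from what remains, and read off the cycles of their union. The sketch below is ours (the use of `networkx` and the helper name `cycle_cover` are our assumptions, not anything the paper relies on) and illustrates the construction for a $d$-regular bipartite graph with $d \ge 2$; for $K_{3,3}$ the union of the two matchings is a single $6$-cycle.

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def cycle_cover(G, top_nodes):
    """Cover a d-regular bipartite graph (d >= 2) by vertex-disjoint cycles:
    the union of two edge-disjoint perfect matchings, as in the proof of Lemma 7."""
    M1 = hopcroft_karp_matching(G, top_nodes)   # first perfect matching
    H = G.copy()
    H.remove_edges_from(M1.items())             # leaves a (d-1)-regular graph
    M2 = hopcroft_karp_matching(H, top_nodes)   # second perfect matching
    union = nx.Graph()
    union.add_nodes_from(G)
    union.add_edges_from(M1.items())
    union.add_edges_from(M2.items())
    # every vertex has degree 2 in `union`, so each connected component is a cycle
    return [list(nx.dfs_preorder_nodes(union, source=next(iter(comp))))
            for comp in nx.connected_components(union)]

# Example: K_{3,3}, a 3-regular bipartite graph.
X = [("x", i) for i in range(3)]
Y = [("y", j) for j in range(3)]
G = nx.Graph()
G.add_edges_from((x, y) for x in X for y in Y)
for cycle in cycle_cover(G, top_nodes=X):
    print(cycle)
```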
*Proof of Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"}.* By Lemma [Lemma 7](#lemma:cycle cover){reference-type="ref" reference="lemma:cycle cover"}, we can take a cycle cover $\mathcal C$ of $G$. Let $S$ be any maximal ${\mathsf{\Lambda}}$-independent subset of $X$ such that each cycle in $\mathcal C$ contains at most one vertex of $S$. To prove the claim, it suffices to construct a path cover of $G$ with exactly $|S|$ paths. We give an algorithm for this below. Let $H$ be a subgraph of $G$ that will change over the course of the algorithm; initially, $H$ will consist of the $|S|$ cycles in $\mathcal C$ containing a vertex of $S$. We will extend $H$ to a spanning subgraph of $G$, while maintaining the properties (1) $H$ has $|S|$ components, and (2) each component of $H$ is traceable. At each step of the algorithm, choose a cycle $C \in \mathcal C$ that is not yet contained in $H$, and $x(C) \in V(C) \cap X$. In most cases, we make this choice arbitrarily. Occasionally, we will want to make sure that a particular vertex $w$ on a cycle $C$ not yet in $H$ will never become $x(C)$. To do so, we select $C$ to be processed next, and choose an arbitrary vertex in $V(C) \cap X$ other than $w$ to be $x(C)$. To indicate that we do this, we say that we *Do Something Else* with $w$; we provide details about this choice later. Suppose we have selected $C$ and $x(C)$. By the maximality of $S$, we have ${\mathsf{\Lambda}}(S \cup \{x(C)\}) \ne \varnothing$, so there is some vertex $s \in S$ such that $x(C)$ and $s$ have a common neighbor $y$. Let $C(s)$ be the cycle in $\mathcal C$ containing $s$. The vertex $y$ must lie on either $C$ or $C(s)$, since otherwise $y$ would have four neighbors: $x(C)$, $s$, and its two neighbors on the cycle in $\mathcal C$ containing $y$. We extend $H$ by adding cycle $C$ to $H$, and either the edge $x(C)y$ (if $y$ lies on $C(s)$) or $sy$ (if $y$ lies on $C$). This ends one step of the algorithm. This step maintains the property that $H$ has $|S|$ components, since cycle $C$ has been joined to an existing component of $H$. To maintain the property that each component of $H$ is traceable, we must clarify when we Do Something Else. Consider an arbitrary $s \in S$; let $C(s)$ be the cycle of $\mathcal C$ containing $s$, and let $y_1, y_2$ be the two neighbors of $s$ along $C(s)$. Initially, the component of $H$ containing $s$ is just $C(s)$. There are three ways that $C(s)$ can potentially be added to $H$, namely via an edge from any of $s$, $y_1$, or $y_2$ going to another cycle in $\mathcal C$. The component remains traceable if any one of these edges is used to extend it: in that case, we can extend that edge to a Hamiltonian path by going the long way around both cycles. The component also remains traceable if it is extended both using an edge from $s$ and using an edge from $y_1$. In that case, delete edge $sy_1$, obtaining a long path containing $s$ and $y_1$ joining two cycles; extend that path by going the long way around both of those cycles. The same is true if $y_1$ is replaced by $y_2$. However, we must ensure that the component of $H$ containing $s$ is never extended by using edges from both $y_1$ and $y_2$. Suppose that a step of the algorithm extended the component of $H$ containing $s$ via an external edge to $y_1$, and $y_2$ has a neighbor $w$ in some $C \in \mathcal C$ not yet contained in $H$. In this situation, we Do Something Else with $w$. 
This ensures the component of $H$ containing $s$ cannot be extended using edges from both $y_1$ and $y_2$, because one of those edges goes to $C$, and $C$ will become part of $H$ in the next step of the algorithm and hence will not be considered at later stages of the algorithm. As a result, no component of $H$ is ever prevented from being traceable. At the end of the algorithm, we have a spanning subgraph $H$ with $|S|$ traceable components. By taking a Hamiltonian path in each component, we obtain a path cover of $G$ with $|S|$ paths, completing the proof. ◻ # Graphs with maximum degree 3 *Proof of Theorem [Theorem 5](#result:max degree 3){reference-type="ref" reference="result:max degree 3"}.* We will prove the theorem by describing an algorithm that constructs a path $X$-cover $\mathcal P$ and a set $S\subseteq X$ with $|\mathcal P| = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G, S)$. To begin the algorithm, let $\mathcal C$ be a collection of vertex-disjoint cycles in $G$ satisfying the following conditions: 1. The union of the cycles contains as many vertices of $G$ as possible. 2. Subject to condition 1, there are as few cycles as possible. As a consequence of condition 1, deleting the vertices in $\mathcal C$ from $G$ leaves a forest, which we call $F$. In the next phase of the algorithm, we process the cycles in $\mathcal C$, one at a time. This phase has two goals. First, for each $C \in \mathcal C$, we will choose a designated vertex $x(C) \in V(C) \cap X$. Intuitively, $x(C)$ will be the only vertex of $C$ which *may* become part of the high-${\mathsf{\Lambda}}$-deficiency set $S$ we construct. We define $y^+(C)$ and $y^-(C)$ to be the two neighbors of $x(C)$ along $C$. Second, we will split $\mathcal C$ into three sets: $\mathcal C_{\mathsf{good}}$, $\mathcal C_{\mathsf{bad}}$, and $\mathcal C_{\mathsf{ugly}}$. Intuitively, if $C \in \mathcal C_{\mathsf{good}}$, then $x(C)$ is far from any problems; if $C \in \mathcal C_{\mathsf{bad}}$, then $x(C)$ is too close to the forest $F$; finally, if $C \in \mathcal C_{\mathsf{ugly}}$, then $x(C)$ is too close to $x(D)$ for some $D \in \mathcal C_{\mathsf{good}}\cup \mathcal C_{\mathsf{bad}}$. In most cases, we arbitrarily choose an unprocessed cycle $C$ to process next, and arbitrarily choose $x(C) \in V(C) \cap X$. Occasionally, as in the proof of Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"}, we will want to make sure that a particular vertex $w$ on an unprocessed cycle $C$ will never become $x(C)$. To do so, we select $C$ to be processed next, and choose an arbitrary vertex in $V(C) \cap X$ other than $w$ to be $x(C)$. As in Theorem [Theorem 4](#result:3-regular){reference-type="ref" reference="result:3-regular"}, to indicate that we do this, we say that we *Do Something Else* with $w$. To decide what to do with a cycle $C$ as we process it, we consider the following cases, *in order*, choosing the first that applies: **Case 1:** $x(C)$ has a common neighbor with $x(D)$ for some $D \in \mathcal C_{\mathsf{good}}\cup \mathcal C_{\mathsf{bad}}$, and that common neighbor lies on either $C$ or $D$. In other words, at least one edge $$e(C) \in \{x(C)y^+(D), x(C)y^-(D), y^+(C)x(D), y^-(C) x(D)\}$$ must exist in $G$. (If multiple choices of $D$ or of $e(C)$ are possible, then fix one of them.) In this case, we place $C$ in $\mathcal C_{\mathsf{ugly}}$; we say that *$C$ attaches to $D$ at $u$*, where $u$ is the endpoint of $e(C)$ in $D$. 
We save the edge $e(C)$ for reference; later, we will use it to extend a path covering $D$ to also cover $C$. Additionally, if $e(C) = x(C) y^{\pm}(D)$ and the vertex $y^{\mp}(D)$ (that is, whichever of $y^+(D), y^-(D)$ is not an endpoint of $e(C)$) is adjacent to a vertex $w$ on an unprocessed cycle, we Do Something Else with $w$. **Case 2:** At least one of $x(C)$, $y^+(C)$, or $y^-(C)$ has a neighbor in $F$. In this case, we place $C$ in $\mathcal C_{\mathsf{bad}}$. Additionally, if $y^{\pm}(C)$ has a neighbor in $F$ and $y^{\mp}(C)$ has a neighbor $w$ on an unprocessed cycle, we Do Something Else with $w$. **Case 3:** Neither case 1 nor case 2 occurs. In this case, we simply place $C$ in $\mathcal C_{\mathsf{good}}$. This concludes the second phase (or the *processing phase*) of the algorithm. In the third phase of the algorithm, we create an auxiliary graph $F^*$ (which is not precisely a subgraph of $G$) containing $F$ and some extra vertices representing the elements of $\mathcal C_{\mathsf{bad}}$. For each $C \in \mathcal C_{\mathsf{bad}}$: - We add $x(C)$ to $F^*$, together with the edge to its neighbor in $F$, if there is one. - We add an artificial vertex $y^*(C)$ to $F^*$ that is adjacent to $x(C)$ and to the neighbors of both $y^+(C)$ and $y^-(C)$ in $F$, if these exist. Before we continue, we must show that $F^*$ is a forest. Suppose for the sake of contradiction that $F^*$ contains a cycle. Since $F$ is acyclic, this cycle must contain either $x(C)$ or $y^*(C)$ for at least one $C \in \mathcal C_{\mathsf{bad}}$. First, consider the case that the cycle only includes the vertex $y^*(C)$ for a single $C \in \mathcal C_{\mathsf{bad}}$. This means that there is a path $P$ from $y^+(C)$ to $y^-(C)$ of length at least $3$, whose internal vertices are in $F$. Now we can modify $C$, replacing $x(C)$ and the edges $y^+(C)x(C), x(C)y^-(C)$ by $P$. The resulting cycle contains more vertices that $C$, violating condition 1 in the definition of $\mathcal C$. Similarly, if the cycle in $F^*$ includes only the vertices $x(C)$ and $y^*(C)$ for a single $C \in \mathcal C_{\mathsf{bad}}$, we can expand $C$ to include some vertices in $F$. This also violates condition 1 in the definition of $\mathcal C$. Finally, consider the case that the cycle in $F^*$ includes vertices $x(C)$ and/or $y^*(C)$ for multiple $C \in \mathcal C_{\mathsf{bad}}$. In this case, we can extend it to a cycle in $G$: every time the cycle in $F^*$ visits $y^*(C)$, we can replace that visit by a path that enters $C$ via $y^{\pm}(C)$, goes around $C$, and leaves via either $y^{\mp}(C)$ or $x(C)$. This cycle in $G$ contains at least as many vertices as the cycles from $\mathcal C$ it uses: it misses at most the vertex $x(C)$ from each of them, but includes a vertex in $F$ between any two of the cycles in $\mathcal C$. Therefore, we can replace multiple cycles in $\mathcal C$ by a single cycle through at least as many vertices, violating condition 2 in the definition of $\mathcal C$. In all cases, we arrive at a contradiction, so we can conclude that $F^*$ is a forest. 
We give it the structure of an $(X',Y')$-bigraph by defining: $$\begin{aligned} X' &= (X \cap V(F)) \cup \{x(C) : C \in \mathcal C_{\mathsf{bad}}\}, \\ Y' &= (Y \cap V(F)) \cup \{y^*(C) : C \in \mathcal C_{\mathsf{bad}}\}.\end{aligned}$$ By Proposition [Proposition 3](#result:forest){reference-type="ref" reference="result:forest"}, we can find a path $X'$-cover $\mathcal P'$ of $F^*$ and a set $S' \subseteq X'$ such that $\mathop{\mathrm{{\mathsf\Lambda}-def}}(F^*,S') \ge |\mathcal P'|$. We may assume that both endpoints of every path in $\mathcal P'$ are in $X'$, not $Y'$. Because $X' \subseteq X$, we have $S' \subseteq X$ as well; moreover, none of the vertices $x(C)$ for $C \in \mathcal C_{\mathsf{bad}}$ have common neighbors outside $F$, or else one of the cycles would have been handled by Case 1 of the processing phase instead. Therefore $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S') = \mathop{\mathrm{{\mathsf\Lambda}-def}}(F^*,S')$. We are now ready to construct the path $X$-cover $\mathcal P$ in $G$ and a set $S \subseteq X$ with $|\mathcal P| \leq \mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S)$. Let $S = S' \cup \{x(C) : C \in \mathcal C_{\mathsf{good}}\}$. The vertices $\{x(C) : C \in \mathcal C_{\mathsf{good}}\}$ have no common neighbors with each other or with any vertex in $S'$. This is ensured by Case 1 and Case 2 of the processing phase, where any cycle $C$ for which $x(C)$ did have such a common neighbor would be placed in $\mathcal C_{\mathsf{bad}}$ or $\mathcal C_{\mathsf{ugly}}$ instead. Therefore $\mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S) = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G, S') + |\mathcal C_{\mathsf{good}}|$. We transform $\mathcal P'$ into $\mathcal P$ with $|\mathcal C_{\mathsf{good}}|$ additional paths. For each $C \in \mathcal C_{\mathsf{good}}$, we add a single path to $\mathcal P$ that covers the vertices of $X$ on both $C$ and any cycles in $\mathcal C_{\mathsf{ugly}}$ that attach to $C$. There are three possibilities: - If there are no cycles in $\mathcal C_{\mathsf{ugly}}$ that attach to $C$, we take a path that goes around $C$. - If there is one cycle $C_1 \in \mathcal C_{\mathsf{ugly}}$ that attaches to $C$, we obtain a path that covers both $C$ and $C_1$ by taking both cycles, adding edge $e(C_1)$, and deleting an edge incident on $e(C_1)$ from both cycles. - If there are two cycles $C_1, C_2 \in \mathcal C_{\mathsf{ugly}}$ that attach to $C$, then $e(C_1)$ and $e(C_2)$ cannot both have endpoints in $\{y^+(C), y^-(C)\}$, because once one such cycle is added to $\mathcal C_{\mathsf{ugly}}$ in Case 1, we Do Something Else to prevent a second cycle of this type from appearing. (This is also why three cycles in $\mathcal C_{\mathsf{ugly}}$ cannot attach to $C$.) Without loss of generality, $C_1$ and $C_2$ attach to $C$ at $x(C)$ and $y^+(C)$. We obtain a path that covers $C$, $C_1$, and $C_2$ by taking all three cycles, adding edges $e(C_1)$ and $e(C_2)$, deleting edge $x(C) y^+(C)$ from $C$, and deleting an edge from each of $C_1$ and $C_2$ incident on $e(C_1)$ and $e(C_2)$, respectively. Finally, we must transform the paths in $\mathcal P'$ that use the vertices $x(C)$ or $y^*(C)$ for some $C \in \mathcal C_{\mathsf{bad}}$ into paths in $G$ that cover the cycles in $\mathcal C_{\mathsf{bad}}$, as well as any cycles in $\mathcal C_{\mathsf{ugly}}$ that attach to them. For each $C \in \mathcal C_{\mathsf{bad}}$, there is a path $P_x \in \mathcal P'$ that covers $x(C)$, and possibly a different path $P_y \in \mathcal P'$ that passes through $y^*(C)$. 
Without changing these paths outside $\{x(C), y^*(C)\}$, we modify them to cover $C$ and any cycles in $\mathcal C_{\mathsf{ugly}}$ that attach to $C$. During this process, the host graph of the paths in $\mathcal P'$ is unclear, but once we have considered all of $\mathcal C_{\mathsf{bad}}$, the paths will all be paths in $G$. There are multiple possibilities for how the modification is done: - $P_x$ contains $x(C)$ but not $y^*(C)$, and $P_y$ does not exist. Then $x(C)$ must be an endpoint of $P_x$, since $x(C)$ has only one neighbor in $F^*$ other than $y^*(C)$. We extend $P_x$ to go around the cycle $C$, ending at either $y^+(C)$ or $y^-(C)$. If there is a cycle $C_1 \in \mathcal C_{\mathsf{ugly}}$ that attaches to $C$ at $y^{\pm}(C)$, we choose $P_x$ to go around $C$ so that it ends at the endpoint of $e(C_1)$; then, extend $P_x$ to use $e(C_1)$ and go around $C_1$. As before, there cannot be two cycles $C_1, C_2 \in \mathcal C_{\mathsf{ugly}}$ that attach to $C$ at $y^+(C)$ and $y^-(C)$, since we Do Something Else to prevent this. - $P_x$ goes from $x(C)$ to $y^*(C)$ and continues to $F$, and $P_y$ does not exist. The edge used from $y^*(C)$ comes from an edge in $G$ from $y^{\pm}(C)$ to $F$; we modify $P_x$ to go around the cycle $C$ from $x(C)$ to $y^{\pm}(C)$. If there is a cycle $C_1 \in \mathcal C_{\mathsf{ugly}}$ that attaches to $C$ at $x(C)$, then $x(C)$ cannot have a neighbor in $F$, so it is an endpoint of $P_x$. We extend $x(C)$ in the other direction, prepending a path that goes around $C_1$ and takes edge $e(C_1)$ to $x(C)$. There cannot be a cycle $C_2 \in \mathcal C_{\mathsf{ugly}}$ that attaches to $C$ at $y^{\mp}(C)$, again because we Do Something Else to prevent it. - $P_x$ contains $x(C)$, and a different path $P_y$ contains $y^*(C)$. In this case, $x(C)$ must be an endpoint of $P_x$. We leave $P_x$ unchanged, unless there is a cycle $C_1 \in \mathcal C_{\mathsf{ugly}}$ that attaches to $C$ at $x(C)$. In this case, $x(C)$ has no neighbors in $F$, so $P_x$ must be a path of length $0$. We replace $P_x$ by a path that covers $C_1$ and ends with $e(C_1)$, also covering $x(C)$. Meanwhile, $P_y$ must enter and leave $y^*(C)$ by edges other than $x(C)y^*(C)$; in $G$, these correspond to two edges between $\{y^+(C), y^-(C)\}$ and $F$. We modify $P_y$, replacing $y^*(C)$ by the $y^+(C),y^-(C)$-path that goes around $C$, covering all its vertices except the previously covered $x(C)$. Once this process is complete, we add these modified paths to $\mathcal P$, along with the paths that covered the cycles in $\mathcal C_{\mathsf{good}}$. The collection $\mathcal P$ is a path $X$-cover; just as $\mathcal P'$ did, it still covers all vertices of $X'$, but now it also covers all vertices of the cycles in $\mathcal C$. This completes the proof, since $|\mathcal P| = |\mathcal P'| + |\mathcal C_{\mathsf{good}}| \le \mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S') + |\mathcal C_{\mathsf{good}}| = \mathop{\mathrm{{\mathsf\Lambda}-def}}(G,S)$. ◻ # Graphs with high girth We will use the Lovász Local Lemma [@erdos1975problems] to prove Theorem [Theorem 6](#result:high girth){reference-type="ref" reference="result:high girth"}, in the form stated below. **Lemma 8** (The Local Lemma; Lemma 5.1.1 in [@alon08]). *Let $A_1, A_2, \dots, A_N$ be events in an arbitrary probability space. 
A directed graph $D = (V,E)$ on the set of vertices $V = \{1,2,\dots,N\}$ is called a *dependency digraph* for the events $A_1, \dots, A_N$ if for each $i$, $1 \le i \le N$, the event $A_i$ is mutually independent of all the events $\{A_j : (i,j) \notin E\}$. Suppose that $D = (V,E)$ is a dependency digraph for the above events and suppose there are real numbers $x_1, \dots, x_N$ such that $0 \le x_i < 1$ and $\Pr[A_i] \le x_i \prod_{(i,j) \in E} (1-x_j)$ for all $1 \le i \le N$. Then $$\Pr\left[ \bigwedge_{i=1}^N \overline {A_i}\right] \ge \prod_{i=1}^N (1-x_i).$$ In particular, with positive probability, no event $A_i$ holds.* A symmetric version of Lemma [Lemma 8](#local-lemma){reference-type="ref" reference="local-lemma"} is often used, where $\Pr[A_i] = p$ for all $i$; by setting $x_i = e \cdot \Pr[A_i]$ for all $i$, and using the inequality $(1 - \frac1{x+1})^x \ge \frac1e$, valid for all $x \ge0$, the hypotheses of the lemma are satisfied. In our case, the probabilities of our events $A_i$ will vary, but we will pursue mostly the same strategy. We will still set $x_i = e \cdot \Pr[A_i]$ for all $i$; because event $A_i$ will depend on two types of other events, we will use the inequality $(1 - \frac1{2x+1})^x \ge e^{-1/2}$ (also valid for all $x \ge 0$) on two parts of the product, instead. *Proof of Theorem [Theorem 6](#result:high girth){reference-type="ref" reference="result:high girth"}.* Let $G$ be an $(X,Y)$-bigraph with maximum degree at most $d$ and girth $g \ge 4ed^2 + 1$. Additionally, let $\mathcal C$ be a collection of pairwise vertex-disjoint cycles in $G$ that cover $X$. This collection exists either by assumption or (if $G$ is taken to be $d$-regular) by Lemma [Lemma 7](#lemma:cycle cover){reference-type="ref" reference="lemma:cycle cover"}. The girth condition on $G$ guarantees that the cycles in $\mathcal C$ are relatively long, so there cannot be too many of them; in fact, we will show that there cannot be more than $\alpha_{\mathsf{\Lambda}}(G)$ of them. If $v \in X$, we will write $C(v)$ for the cycle in $\mathcal C$ containing $v$. Furthermore, if $v_1, v_2 \in X$, we will write $v_1 \sim v_2$ to mean that $C(v_1) \ne C(v_2)$ and $v_1$ and $v_2$ have a common neighbor in $Y$. We choose a set $S$ randomly, by selecting one vertex of $X$ uniformly at random from each cycle in $\mathcal C$. Let $\{u_1, v_1\}, \dots, \{u_N, v_N\}$ be an enumeration of the (unordered) pairs of vertices in $X$ such that $u_i \sim v_i$. For each $i$, $1 \le i \le N$, we let $A_i$ be the event that $u_i \in S$ and $v_i \in S$. As a result, the conjunction $\overline{A_1} \land \dots \land \overline{A_n}$ is exactly the claim that $S$ is ${\mathsf{\Lambda}}$-independent. If we can satisfy the hypotheses of Lemma [Lemma 8](#local-lemma){reference-type="ref" reference="local-lemma"} for the events $A_1, \dots, A_N$, then this conjunction occurs with positive probability, and therefore $|\mathcal C| = |S| \le \alpha_{\mathsf{\Lambda}}(G)$, proving the theorem. We define the dependency digraph $D$ to include an edge $(i,j)$ whenever the four vertices $u_i$, $v_i$, $u_j$, and $v_j$ do *not* lie on four distinct cycles. If $C(u_i)$ has length $2\ell_1$ and $C(v_i)$ has length $2\ell_2$, we define $x_i = \frac{e}{\ell_1 \ell_2}$; for reference, $\Pr[A_i] = \frac1{\ell_1 \ell_2}$. We will show that the hypotheses of Lemma [Lemma 8](#local-lemma){reference-type="ref" reference="local-lemma"} hold with the choices made above. 
To do this, we must put a lower bound on $$x_i \prod_{(i,j) \in E(D)} (1 - x_j)$$ for an arbitrary event $A_i$. This product consists of two types of events $A_j$. The first type consists of those $A_j$ for which either $u_j$ or $v_j$ lies on $C(u_i)$ (including $u_j=u_i$ or $v_j=u_i$). There are at most $\ell_1 d^2$ events $A_j$ of this type. For each of them, one of $C(u_j), C(v_j)$ is the same as $C(u_i)$ and has length $\ell_1$, and the other has length at least $g$. Therefore $1 - x_j \ge 1 - \frac{2e}{\ell_1 g}$, for an overall product of at least $(1 - \frac{2e}{\ell_1 g})^{\ell_1 d^2}$. The second type of events $A_j$ such that $(i,j) \in E(D)$ consists of those $A_j$ for which either $u_j$ or $v_j$ lies on $C(v_i)$. By a similar argument, there are at most $\ell_2 d^2$ events $A_j$ of this type, and for each of them, $1 - x_j \ge 1 - \frac{2e}{\ell_2 g}$, so $$x_i \prod_{(i,j) \in E(D)} (1 - x_j) \ge \frac e{\ell_1 \ell_2} \left(1 - \frac {2e}{\ell_1 g}\right)^{\ell_1 d^2} \left(1 - \frac {2e}{\ell_2 g}\right)^{\ell_2 d^2}.$$ Recall that $g \ge 4ed^2+1$; a lower bound on $\frac{\ell_1 g}{2e} \geq 2\ell_1 d^2 + \frac{\ell_1}{2e}$ is $2\ell_1 d^2 + 1$. Applying $(1 - \frac1{2x+1})^x \ge e^{-1/2}$, we get $$\left(1 - \frac {2e}{\ell_1 g}\right)^{\ell_1 d^2} \ge \left(1 - \frac1{2\ell_1 d^2+1}\right)^{\ell_1 d^2} \ge e^{-1/2}$$ and similarly $\left(1 - \frac {2e}{\ell_2 g}\right)^{\ell_2 d^2} \ge e^{-1/2}$. Therefore $$\frac e{\ell_1 \ell_2} \left(1 - \frac {2e}{\ell_1 g}\right)^{\ell_1 d^2} \left(1 - \frac {2e}{\ell_2 g}\right)^{\ell_2 d^2} \ge \frac e{\ell_1 \ell_2} \cdot e^{-1/2} \cdot e^{-1/2} = \frac1{\ell_1 \ell_2} = \Pr[A_i]$$ and the conditions of Lemma [Lemma 8](#local-lemma){reference-type="ref" reference="local-lemma"} are satisfied. We conclude that with positive probability, $S$ is ${\mathsf{\Lambda}}$-independent, and therefore $|\mathcal C| = |\mathcal S| \le \alpha_{\mathsf{\Lambda}}(G)$. The theorem follows, since we can find a path $X$-cover of $G$ of size $|\mathcal C|$ by removing a vertex of $Y$ from each cycle in $\mathcal C$. ◻ 10 Takanori Akiyama, Takao Nishizeki, and Nobuji Saito. NP-completeness of the Hamiltonian cycle problem for bipartite graphs. , 3(2):73--76, 1980. N. Alon and J. Spencer. . John Wiley & Sons, Hoboken, NJ, 2008. Claude Berge. . North-Holland Publishing Company, Amsterdam- London, 1973. Paul Erdős and László Lovász. Problems and results on 3-chromatic hypergraphs and some related questions. , 10(2):609--627, 1975. Tibor Gallai and Arthur Norton Milgram. Verallgemeinerung eines graphentheoretischen Satzes von Rédei. , 21:181--186, 1960. Uriel Feige and Ella Fuchs. On the path partition number of 6-regular graphs. , 101(3):345--378, 2022. <https://doi.org/10.1002/jgt.22830>. Ronald J. Gould. Advances on the Hamiltonian problem---a survey. , 19(1):7--52, 2003. <https://doi.org/10.1007/s00373-002-0492-x>. Jie Han. On vertex-disjoint paths in regular graphs. , 25(2):P1.12, 2018. <https://doi.org/10.37236/7109>. Alexandr Kostochka, Mikhail Lavrov, Ruth Luo, and Dara Zirlin. Conditions for a bigraph to be super-cyclic. , 28(1):P1.2, 2021. <https://doi.org/10.37236/9683>. Mekkia Kouider and Mohamed Zamime. On the path partition of graphs. [`arXiv:2212.12793`](https://arxiv.org/abs/2212.12793), 2022. Linyuan Lu and Zhiyu Wang. On Hamiltonian Berge cycles in $[3]$-uniform hypergraphs. , 344(8):112462, 2021. <https://doi.org/10.1016/j.disc.2021.112462>. Colton Magnant and Daniel M. Martin. , 43:211-217, 2009. Colton Magnant, Hua Wang and Shuai Yuan. 
, 64:334--340, 2016. Heinrich Müller. Oriented hypergraphs, stability numbers and chromatic numbers. *Discrete Math.*, 34(3):319--320, 1981. <https://doi.org/10.1016/0012-365X(81)90011-X>. James Singer. A theorem in finite projective geometry and some applications to number theory. *Trans. Amer. Math. Soc.*, 43(3):377--385, 1938. <https://doi.org/10.2307/1990067>. D.B. West. *Introduction to Graph Theory*. Prentice Hall, Englewood Cliffs, NJ, 1996.
--- abstract: | In this paper, we introduce Topological Quantum Field Theories (TQFTs) generalizing the arithmetic computations done by Hausel and Rodríguez-Villegas and the geometric construction done by Logares, Muñoz, and Newstead to study cohomological invariants of $G$-representation varieties and $G$-character stacks. We show that these TQFTs are related via a natural transformation that we call the 'arithmetic-geometric correspondence' generalizing the classical formula of Frobenius on the irreducible characters of a finite group. We use this correspondence to extract some information on the character table of finite groups using the geometric TQFT, and vice versa, we greatly simplify the geometric calculations in the case of upper triangular matrices by lifting its irreducible characters to the geometric setting. author: - Ángel González-Prieto - Márton Hablicsek - Jesse Vogel bibliography: - bibliography.bib title: | **Arithmetic-Geometric Correspondence of Character Stacks\ via Topological Quantum Field Theory** --- # Introduction In 1896, Frobenius [@frobenius1896gruppencharaktere] introduced a beautiful formula (for $g = 1$) relating the number of group homomorphisms $\rho : \pi_1(\Sigma_g) \to G$ from the fundamental group of a closed orientable surface $\Sigma_g$ of genus $g$ into a finite group $G$ with the irreducible characters $\hat{G}$ of $G$, $$\label{eq:frobenius_formula-intro} |\textup{Hom}(\Sigma_g, G)| = |G|\sum_{\chi \in \hat{G}} \left( \frac{|G|}{\chi(1)} \right)^{2g - 2}.$$ More than a century later, Hausel and Rodríguez-Villegas [@hausel2008mixed] breathed new life into this formula using a result by Katz that shows these counts can be used to compute Hodge-theoretic invariants associated to a complex variety known as the representation variety. To be precise, given an algebraic group $G$ and a compact connected smooth manifold $M$, the collection of group homomorphisms from the fundamental group $\pi_1 (M)$ of the manifold into $G$ forms an algebraic variety induced by the algebraic variety structure on $G$, $$R_G(M) = \textup{Hom}(\pi_1(M), G)$$ called the $G$-representation variety of $M$. Explicitly, in the case of a closed orientable surface $\Sigma_g$ of genus $g$, the $G$-representation variety is the closed subvariety of $G^{2g}$ given by $$\label{eq:presentation_representation_variety_closed_surface} R_G(\Sigma_g) = \left\{ (A_1, B_1, \ldots, A_g, B_g) \in G^{2g} \;\bigg|\; \prod_{i = 1}^{g} [A_i, B_i] = 1 \right\} .$$ Questions about the geometry of the $G$-representation variety have attracted researchers in various fields of mathematics, as the theory combines the algebraic geometry of the group $G$, the topology of $M$, and the group theory and representation theory of both $\pi_1 (M)$ and $G$ (see, for instance, [@cooper1994plane; @cooper1998representation; @culler1983varieties; @dunfield2004non; @munoz2016geometry]). There are several successful frameworks to study cohomological invariants of representation and character varieties, the main two representatives are the following. - *The arithmetic method*: It is the method developed by Hausel and Rodríguez-Villegas to relate the complex structure of the $G(\mathbb{C})$-representation variety with the point count of the $G(\mathbb{F}_q)$-representation varieties over finite fields $\mathbb{F}_q$. 
This method strongly relies on a result by Katz, inspired by the Weil conjectures, that shows that if the point count of the varieties over $\mathbb{F}_q$ is a certain polynomial in $q$, then such polynomial is the so-called $E$-polynomial of the associated complex variety, a polynomial collecting alternating sums of the Hodge numbers in the spirit of an Euler characteristic. In this way, using Frobenius' formula ([\[eq:frobenius_formula-intro\]](#eq:frobenius_formula-intro){reference-type="ref" reference="eq:frobenius_formula-intro"}), in [@hausel2008mixed] the authors managed to compute the $E$-polynomials of $\textup{GL}_n(\mathbb{C})$-character varieties in purely combinatorial terms. Subsequent works have extended this method to other scenarios such as $G = \textup{SL}_r(\mathbb{C})$ [@mereb2015polynomials], $G=\textup{GL}_r(\mathbb{C})$ with a generic parabolic structure [@mellit2020poincare], non-orientable surfaces [@letellier2020series], and connected split reductive groups [@bridger2022character]. - *The geometric method*: It was started by Logares, Muñoz, and Newstead in the seminar paper [@logmunnew13]. The goal of this method is to compute cohomological invariants of the representation variety by cutting it into smaller, simpler pieces whose cohomological invariants can be easily computed. From them, they use motivic techniques to compile this local information and assemble it for the whole variety. Thanks to this approach, more subtle invariants can be computed, such as the virtual class of the representation variety in the Grothendieck ring $\textup{K}_0(\textup{\bfseries Var}_k)$ of algebraic varieties. Its drawback is that it requires very involved calculations that can only be conducted for particular examples [@marmun16; @mar17]. The aim of this work is to show that these two completely unrelated methods can be actually encompassed into a common framework, by means of Topological Quantum Field Theories (TQFTs). Furthermore, we shall push Frobenius' formula further in several directions to bring new striking connections between point counting, the character theory of finite groups, and the motivic theory of complex manifolds. The use of TQFTs to understand representation varieties has been proven to be very successful. In [@gonlogmun20], the first version of a TQFT computing the virtual Hodge structure of the complex representation variety was constructed. This construction has been extended several times to virtual classes in the Grothendieck ring of varieties [@gon20], for manifolds with singularities [@gonzalez2023character], and for character stacks [@gonzalez2022virtual]. However, instead of the usual category $\textup{\bfseries Bord}_n$ of $n$-dimensional bordisms, all these constructions are forced to work in the weaker category of 'pointed' bordisms, with an arbitrary choice of basepoints on it. With this method, the motivic classes of the representation varieties of surface groups were computed for $\textup{SL}_2(\mathbb{C})$ and for groups of upper triangular matrices [@gonlogmun20; @hablicsek2022virtual; @vogel2023motivic]. Notice that, with this method, in contrast to the arithmetic method, not just the $E$-polynomial of the representation variety can be computed, but much more, its class in the Grothendieck ring of varieties computing its universal cohomology theory. It is worth mentioning that the study of the geometry of representation varieties is actually intrinsically tied to mathematical physics through the so-called $G$-character varieties. 
In the case that the group $G$ is reductive, they are defined using Geometric Invariant Theory [@mumford1994geometric] as the GIT quotient of the representation variety $R_G(M)$ with respect to the adjoint action of $G$ given by conjugation $$\mathcal{M}_G(M) = R_G(M) \sslash G.$$ The character variety parametrizes representations $\pi_1(M) \to G$ up to isomorphism, in contrast to the representation variety which only parametrizes raw representations. These character varieties are widely studied in mirror symmetry [@hausel2003mirror; @mauri2021topological], in string theory [@diaconescu2018bps], and in the context of Yang-Mills and Chern-Simons quantum field theories, as in Witten's quantization of the Jones polynomial of knots [@witten1991quantum]. Furthermore, when $M = \Sigma_g$ is a compact orientable surface of genus $g$, character varieties are also called Betti moduli spaces, and they are one of the three main moduli spaces studied in non-Abelian Hodge Theory, alongside the Dolbeault moduli space parametrizing semistable $G$-Higgs bundles on a smooth projective curve $C$ of genus $g$, and the de Rham moduli space parametrizing flat $G$-connections on the curve $C$. Interestingly, with some assumptions, these moduli spaces are smooth varieties with canonically identified underlying differentiable manifolds via the Riemann-Hilbert, Hitchin-Kobayashi and Simpson correspondences. However, their algebraic structures and their cohomological invariants are completely different: for instance, the celebrated and recently resolved $P=W$ conjecture states the weight filtration on the cohomology of the character variety coincides with the perverse filtration on the cohomology of the Dolbeault moduli space [@de2022hitchin; @de2012topology; @hausel2022p; @maulik2022p; @mauri2022geometric]. #### Arithmetic results. Let us discuss the results obtained in this work in the arithmetic setting first. Fix a finite group $G$, and let us focus on the point count of the $G$-representation variety for a compact manifold $M$, $|R_G(M)| = |\textup{Hom}(\pi_1(M), G)|$. Our first result in this direction is that this point count can be quantized by means of a Topological Quantum Field Theory (TQFT). A TQFT is a symmetric monoidal functor $$Z: \textup{\bfseries Bord}_n \to \textup{\bfseries Vect}_k$$ from the category of $n$-dimensional bordisms to the category of $k$-vector spaces (or, more generally, the category $R\textup{-}\textup{\bfseries Mod}$ of modules over a ring $R$). Roughly speaking, the functor $Z$ transforms closed $(n-1)$-dimensional manifolds $M$ into a vector space $Z(M)$ and bordisms $W$ between manifolds $M_1$ and $M_2$ into linear maps $Z(W): Z(M_1) \to Z(M_2)$. An important use of these functors is to compute invariants. For instance, suppose that we are interested in a certain invariant $\chi(W)$ of closed connected $n$-dimensional manifolds $W$ with values in a field $k$. Such closed manifolds can be seen as bordisms $W: \varnothing \to \varnothing$ and since $Z(\varnothing) = k$, under a TQFT they correspond to $k$-linear maps $Z(W): k \to k$, that is, multiplication by an element $Z(W)(1) \in k$. If we manage to construct a TQFT such that this element $Z(W)(1) = \chi(W)$ is precisely the invariant we want to study, we can use this gadget to decompose $W$ into simpler pieces, and to use the functoriality of $Z$ to re-assemble the invariant $\chi(W)$ from simpler algebraic information. In this situation, we shall say that $Z$ quantizes the invariant $\chi$. 
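As a quick numerical illustration of the counts that such a TQFT is meant to organize (this check is ours, not taken from the paper): for $g = 1$ the Frobenius formula reduces to $|\textup{Hom}(\pi_1(\Sigma_1), G)| = |G|\,|\hat{G}|$, and since $\pi_1(\Sigma_1) = \mathbb{Z}^2$, the left-hand side is just the number of commuting pairs in $G$, so that number equals $|G|$ times the number of conjugacy classes. The sketch below verifies this for $G = S_3$; the permutation encoding and all names are our own.

```python
from itertools import permutations, product

def compose(a, b):
    """Composition of permutations given as tuples of images: (a*b)(i) = a(b(i))."""
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = list(permutations(range(3)))  # the symmetric group S_3, |G| = 6

# Homomorphisms pi_1(torus) = Z^2 -> G are exactly the commuting pairs (A, B).
commuting_pairs = sum(1 for a, b in product(G, repeat=2)
                      if compose(a, b) == compose(b, a))

# Conjugacy classes of G; their number equals the number of irreducible characters.
classes = {frozenset(compose(compose(g, a), inverse(g)) for g in G) for a in G}

# Frobenius' formula for g = 1: |Hom| = |G| * sum over characters of (|G|/chi(1))^0.
assert commuting_pairs == len(G) * len(classes)  # 18 == 6 * 3
print(commuting_pairs, len(G), len(classes))     # 18 6 3
```

The TQFT formalism assembles exactly this kind of count by composing linear maps along a decomposition of the surface into elementary bordisms, as the next example shows.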
For instance, in the case of the genus $g$ surface $\Sigma_g$, we can decompose it as $$\Sigma_g = \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \circ \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} ^g \circ \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture}$$ and hence the invariant $\chi(\Sigma_g)$ can be computed through the composition of three linear maps $$\label{eq:decomposition-TQFT-surface} k = Z(\varnothing) \xleftarrow{Z \left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right)} Z(S^1) \xleftarrow{Z\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g} Z(S^1) \xleftarrow{Z\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right)} Z(\varnothing) = k : Z(\Sigma_g) .$$ This is, in abstract terms, the central idea of the geometric method developed by Logares, Muñoz and Newstead. Our first result in this direction is to formally construct a TQFT quantizing the point count of the $G$-representation variety. **Theorem 1** (Proposition [Proposition 49](#prop:invariant-arithmetic){reference-type="ref" reference="prop:invariant-arithmetic"}). *For any finite group $G$ and any $n \geq 1$, there exists a TQFT $$Z^\#_G : \textup{\bfseries Bord}_n \to \textup{\bfseries Vect}_\mathbb{C}$$ such that $Z^\#_G(W)(1) = |R_G(W)|/|G|$ for any closed connected $n$-dimensional manifold $W$.* This TQFT has a natural interpretation in terms of characters of the group $G$ that connects directly with Frobenius' formula ([\[eq:frobenius_formula-intro\]](#eq:frobenius_formula-intro){reference-type="ref" reference="eq:frobenius_formula-intro"}). It is well-known [@kock] that $2$-dimensional TQFTs with values in $\textup{\bfseries Vect}_\mathbb{C}$ are in one-to-one correspondences with commutative Frobenius algebras over $\mathbb{C}$. The correspondence is given by assigning to the TQFT $Z$ the vector space $A = Z(S^1)$, which inherits a natural commutative Frobenius algebra structure from the pair-of-pants bordisms. 
In the case of the TQFT of Theorem [Theorem 1](#thm:arithmetic-tqft-intro){reference-type="ref" reference="thm:arithmetic-tqft-intro"}, the associated Frobenius algebra is exactly $Z^\#_G(S^1) = R_\mathbb{C}(G)$, the representation ring of $G$ generated by the characters of $G$, with convolution of class functions as product and the usual inner product of characters as bilinear form, as proven in Proposition [Proposition 51](#prop:equiv-frobenius){reference-type="ref" reference="prop:equiv-frobenius"}. In this way, $Z^\#_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ is an endomorphism of $R_\mathbb{C}(G)$ whose eigenvectors are precisely the irreducible characters $\chi \in \hat{G}$, with eigenvalues $|G|^2/\chi(1)^2$. In light of these observations, we realize that Frobenius' formula ([\[eq:frobenius_formula-intro\]](#eq:frobenius_formula-intro){reference-type="ref" reference="eq:frobenius_formula-intro"}) is nothing but a direct corollary of the decomposition ([\[eq:decomposition-TQFT-surface\]](#eq:decomposition-TQFT-surface){reference-type="ref" reference="eq:decomposition-TQFT-surface"}).

#### Geometric results.

Regarding the geometric techniques, our aim is to understand the geometry of the representation variety $R_G(M)$ for an algebraic group $G$ and a compact manifold $M$. A key accomplishment of this work in this direction is a definition of the character stack in a functorial way. In the literature, the $G$-character stack of a manifold $M$ is usually defined as the quotient stack $[R_G(M) / G]$ of the $G$-representation variety modded out by the action of $G$ by conjugation. However, this definition has a critical problem that prevents it from generalizing well to gluings: The representation variety $R_G(M) = \textup{Hom}(\pi_1(M), G)$ depends on the fundamental group of $M$, which is not a functor out of the category of topological spaces, but out of the category of *pointed* topological spaces. This is the reason why all the previously studied TQFTs, such as [@gonlogmun20; @gonzalez2023character; @gonzalez2022virtual], needed to deal with pointed bordisms, adding extra spurious information. To address this problem, instead of considering representations of the fundamental group of $M$, we shall consider representations $\rho: \Pi(M) \to G$ of the fundamental groupoid $\Pi(M)$ of $M$, modded out by the action of the 'local gauge' group $\mathcal{G}_M = \prod_{x \in M} G$, given by $(\overline{g} \cdot \rho)(\gamma) = g_y \; \rho(\gamma) \; g_x^{-1}$ for a path $\gamma$ joining $x, y \in M$ and $\overline{g} = (g_x)_{x \in M} \in \mathcal{G}_M$. In this way, we define the $G$-character stack of $M$ as the quotient stack $$\mathfrak{X}_G(M) = [\textup{Hom}(\Pi(M), G) / \mathcal{G}_M].$$ Notice that, in principle, such a quotient is not well-defined since both $\textup{Hom}(\Pi(M), G)$ and $\mathcal{G}_M$ are infinite-dimensional.
However, we will show that we can provide a precise meaning to the above expression by considering finitely generated groupoids equivalent to $\Pi(M)$ (see Corollary [Corollary 26](#cor:fin-gen-character-stack){reference-type="ref" reference="cor:fin-gen-character-stack"} and the discussion therein). Since we are now using fundamental groupoids instead of fundamental groups, this construction inherits much better continuity properties. For instance, the functor $\mathfrak{X}_G$ sends colimits into limits. In particular, if $W \cup_M W'$ is the gluing of two compact manifolds $W$ and $W'$ along a common boundary $M$, we have $\mathfrak{X}_G(W \cup_M W') = \mathfrak{X}_G(W) \times_{\mathfrak{X}_G(M)} \mathfrak{X}_G(W')$. In this direction, the main result we obtain in this work is that we can use the continuity of the local gauge character stack to actually construct a TQFT computing the virtual class $[\mathfrak{X}_G(W)]\in \textup{K}_0(\textup{\bfseries Stck}_k)$ in the Grothendieck ring $\textup{K}_0(\textup{\bfseries Stck}_k)$ of algebraic stacks with affine stabilizers. To the best of our knowledge, this is the first genuine TQFT constructed in the literature computing these virtual classes in the unpointed setting. **Theorem 2** (Theorem [Theorem 40](#thm:character_stack_TQFT){reference-type="ref" reference="thm:character_stack_TQFT"}). *For any algebraic group $G$ and any $n \geq 1$, there exists a lax TQFT $$Z_G: \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$$ quantizing the virtual class of the $G$-character stack.* Notice that this TQFT is not monoidal, but only lax monoidal, that is, there exists a morphism $Z_G(M_1)\otimes Z_G(M_2) \to Z_G(M_1 \sqcup M_2)$ but this morphism is not an isomorphism. This implies in particular that $Z_G(M)$ may not be finitely generated for an object $M$ (and, in general, it is not so), and thus, the classification theorem of $2$-dimensional TQFTs does not apply. In this sense, the algebra $Z_G(S^1) = \textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ should be seen as the infinite-dimensional analogue of the representation ring for general algebraic groups $G$. It is worth mentioning that the result of Theorem [Theorem 2](#thm:geometric-tqft-intro){reference-type="ref" reference="thm:geometric-tqft-intro"} is sharp since the no-go theorem [@gonzalez2023quantization Theorem 4.21] proves that no genuinely monoidal TQFT exists for this purpose. #### The arithmetic-geometric correspondence. Apart from the results obtained in this paper in the arithmetic and geometric setting, the main aim of this work is precisely to relate both constructions. The first result in this direction is that, in fact, we have a natural transformation connecting the geometry with the arithmetics. **Theorem 3** (Corollary [Corollary 58](#cor:natural_transformation_geometric_arithmetic){reference-type="ref" reference="cor:natural_transformation_geometric_arithmetic"}). *Let $G$ be a connected algebraic group over a finite field $\mathbb{F}_q$. Then there exists a natural transformation $Z_G \Rightarrow Z^\#_{G(\mathbb{F}_q)}$ as functors $\textup{\bfseries Bord}_n \to \textup{\bfseries Ab}$.* It is worth mentioning that the assumption of $G$ being connected is crucial here, since otherwise such natural transformation may not exist. 
This is related to the fact that, for non-connected $G$, the groupoid of $\mathbb{F}_q$-points $\mathfrak{X}_G(M)(\mathbb{F}_q)$ of the character stack is not the same as the action groupoid of $G(\mathbb{F}_q)$ on the representation variety $R_{G(\mathbb{F}_q)}(M)$, as already happens for $G = \mathbb{Z}/ 2\mathbb{Z}$. In some sense, the geometric TQFT is studying the former groupoid, whereas the arithmetic TQFT is studying the latter. However, we can go one step further and encompass both constructions under a general TQFT by using the so-called Landau-Ginzburg (LG) models. Roughly speaking, an LG model over a stack $\mathfrak{S}$ is a pair $(\mathfrak{X}, f : \mathfrak{X} \to k)$ of a representable $\mathfrak{S}$-stack $\mathfrak{X}$ together with a scalar function $f$ on its closed points. Using them and mimicking the construction of Theorem [Theorem 2](#thm:geometric-tqft-intro){reference-type="ref" reference="thm:geometric-tqft-intro"}, we obtain the following result. **Theorem 4** (Theorem [Theorem 63](#thm:tqft_LG){reference-type="ref" reference="thm:tqft_LG"}). *For any algebraic group $G$ and any $n \geq 1$, there exists a lax monoidal TQFT $$Z^{\textup{\bfseries LG}}_G : \textup{\bfseries Bord}_n \rightarrow \textup{K}_0(\textup{\bfseries LG}_k)\textup{-}\textup{\bfseries Mod}$$ computing the virtual class of the character stack in the Grothendieck ring $\textup{K}_0(\textup{\bfseries LG}_k)$ of Landau-Ginzburg models over $k$.* This TQFT encodes both the arithmetic and the geometric methods via natural transformations. For this purpose, we consider the forgetful, inclusion, and 'integration along the fibers' maps, which are respectively ring morphisms $$\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}: \textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})\rightarrow \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S}), \qquad \iota: \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S}) \rightarrow \textup{K}_0(\textup{\bfseries LG}_\mathfrak{S}) , \qquad \int: \textup{K}_0(\textup{\bfseries LG}_\mathfrak{S}) \rightarrow \mathbb{C}^{\mathfrak{S}(k)}.$$ Here $\mathfrak{S}$ is a fixed stack and $\mathbb{C}^{\mathfrak{S}(k)}$ denotes the ring of (set-theoretic) functions on the objects of the groupoid $\mathfrak{S}(k)$ which are invariant under isomorphism. The forgetful map $\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}$ operates by sending the class $[(\mathfrak{X} \to \mathfrak{S}, f)]$ of a Landau-Ginzburg model to the virtual class $[\mathfrak{X} \to \mathfrak{S}]$ of its underlying $\mathfrak{S}$-stack, whereas the inclusion map $\iota$ sends the class of a stack $[\mathfrak{X} \to \mathfrak{S}]$ to the class of the Landau-Ginzburg model $[(\mathfrak{X} \to \mathfrak{S}, 1_{\mathfrak{X}})]$, where $1_{\mathfrak{X}}$ denotes the constant function $1$. Furthermore, if $k$ is a finite field, the 'integration along the fibers' map $\int$ sends a Landau-Ginzburg model $[(\pi: \mathfrak{X} \to \mathfrak{S}, f)]$ to the function $\mathfrak{S}(k) \to \mathbb{C}$ given by $s \mapsto \sum_{x\in \pi^{-1}(s)} f(x)$. Using these maps, we obtain the following result, which we call the *arithmetic-geometric correspondence* and which represents the main theorem of this paper. In the following, we take $G$ to be any algebraic group defined over a base ring $R$ (typically $R=\mathbb{Z}$) and $\mathbb{F}_q$ to be any finite field 'extending' $R$, in the sense that there exists a ring homomorphism $R \to \mathbb{F}_q$. **Theorem 5**.
*For any connected algebraic group $G$, there exist natural transformations $$\begin{tikzcd}[row sep=0.5em] & Z^\textup{\bfseries LG}_G\arrow[rd,shift left=2, "\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}",bend left,Rightarrow] \arrow[Rightarrow]{ld}[swap]{\int} & \\ Z^{\#}_{G(\mathbb{F}_q)} & & Z_G\arrow[bend left, Rightarrow]{lu}[shift left]{i} \end{tikzcd}$$* As an application of these results, we use this correspondence in two ways. First, assuming that the eigenvectors of point-counting TQFT $Z_{G(\mathbb{F}_q)}^{\#}$ lift to the Landau-Ginzburg TQFT, the eigenvectors of the geometric TQFT $Z_G$ can be easily computed. We apply this idea in the case of $G$ being the group of unitriangular matrices and we significantly simplify the TQFT used in [@hablicsek2022virtual] computing the virtual classes of representation varieties corresponding to the group of unitriangular matrices. Second, assuming that a universal Landau-Ginzburg TQFT exists, we derive information about the character table of the family of finite groups $G(\mathbb{F}_q)$. To do so, we need to impose an extra finiteness condition. Recall that, since $Z_G^{\textup{\bfseries LG}}$ is not monoidal, $Z_G^{\textup{\bfseries LG}}(S^1) = \textup{K}_0(\textup{\bfseries LG}_{[G/G]})$ is not in general finitely generated as $\textup{K}_0(\textup{\bfseries LG}_k)$-module. Let us denote by $[{\textup{B}G}]_{[G/G]}$ the virtual class of the Landau-Ginzburg model $({\textup{B}G}\to [G / G], 1_{{\textup{B}G}})$ over $[G / G]$. We shall say that $\mathcal{V} \subseteq Z_G^{\textup{\bfseries LG}}(S^1)$ is a 'core submodule' if it is a finitely generated $\textup{K}_0(\textup{\bfseries LG}_k)$-module such that $[{\textup{B}G}]_{[G/G]} \in \mathcal{V}$ and $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)(\mathcal{V}) \subseteq \mathcal{V}$. Recall we have a natural identification of the ring of iso-invariant functions $\mathbb{C}^{[G/G](\mathbb{F}_q)}$ with the representation ring $R_{\mathbb{C}}(G(\mathbb{F}_q))$. In this way, the 'integration along fibers' map provides morphisms $$\int_{R} : \textup{K}_0(\textup{\bfseries LG}_{k}) \to \mathbb{C}, \qquad \int_{[G/G]} : \textup{K}_0(\textup{\bfseries LG}_{[G/G]}) \to R_{\mathbb{C}}(G(\mathbb{F}_q)) .$$ **Theorem 6** (Corollary [Corollary 67](#cor:relation_eigenvalues_geometric_arithmetic){reference-type="ref" reference="cor:relation_eigenvalues_geometric_arithmetic"}). *Let $G$ be an algebraic group over a finitely generated $\mathbb{Z}$-algebra $R$, and let $\mathbb{F}_q$ be a finite field extending $R$. Denote by $\textup{\bfseries 1}_G \in \textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ the class of the inclusion of the identity in $G$. If the $\textup{K}_0(\textup{\bfseries Stck}_k)$-module $\mathcal{V} = \langle Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. 
(0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g(\textup{\bfseries 1}_G) \textup{ for } g = 0, 1, 2, \ldots \rangle$ is finitely generated, then:* (i) *The dimensions of the complex irreducible characters of $G(\mathbb{F}_q)$ are precisely given by $$d_i = \frac{|G(\mathbb{F}_q)|}{\sqrt{\lambda_i}}$$ for $\lambda_i \in \mathbb{Z}$ the eigenvalues of $\int_{R} A$, where $A$ is any matrix representing the linear map $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ with respect to a generating set of $\mathcal{V}$.* (ii) *Write $\int_{R} \textup{\bfseries 1}_G = \sum_i v_i$, where $v_i$ are eigenvectors of $\int_{R} A$ with eigenvalues $\lambda_i$. Then $$v_i =\frac{d_i}{|G(\mathbb{F}_q)|} \sum_{\substack{\chi \in \hat{G} \textup{\ s.t.} \\ \chi(1) = d_i}} \chi$$ for every $i$.* Notice that this is a remarkable result since it relates the character theory of the infinite family $G(\mathbb{F}_q)$ of finite groups of Lie type to a single linear map associated to the geometry of the stack $[G/G]$. Indeed, suppose that, in the case of the base field $k = \overline{\mathbb{Q}}$, we manage to compute a set of eigenvectors $\mathfrak{X}_1, \ldots, \mathfrak{X}_m$ of the core submodule $\mathcal{V} \subseteq Z_G^{\textup{\bfseries LG}}(S^1)$. For this purpose, since we are in the characteristic zero algebraically closed setting, we can apply 'heavy' technology such as Weyl's complete reducibility theorem and Jordan forms for the elements of $G$. However, since for this computation we use only finitely many morphisms and stacks, the same calculation works over a certain finitely generated ring $R$. Therefore, the eigenvectors $\mathfrak{X}_1, \ldots, \mathfrak{X}_m$ and their eigenvalues can be used in Theorem [Theorem 6](#thm:applications-intro){reference-type="ref" reference="thm:applications-intro"} to provide information about the character table of $G(\mathbb{F}_q)$ for any finite field $\mathbb{F}_q$ extending $R$. We illustrate this strategy with a couple of examples in which we can fully determine the character table of some families of finite groups from the simpler calculation of the TQFT over a characteristic zero algebraically closed field.

#### Structure of the document.

The paper is organized as follows. In Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"} we describe a $2$-dimensional TQFT by means of the representation ring of a finite group, and its connection to Frobenius' formula. In Section [3](#sec:character-stack-tqft){reference-type="ref" reference="sec:character-stack-tqft"} we develop the theory of local gauge character stacks and use it to construct a TQFT computing their virtual classes.
In Section [4](#sec:arithmetic_tqft){reference-type="ref" reference="sec:arithmetic_tqft"} we discuss the arithmetic counterpart of the previous TQFT, and we prove it to be isomorphic to the one induced by the representation ring (Section [4.2](#sec:arithmetic-tqft-frobenius){reference-type="ref" reference="sec:arithmetic-tqft-frobenius"}) and to be covered by the geometric TQFT (Section [4.3](#sec:arithmetic-tqft-character-stack){reference-type="ref" reference="sec:arithmetic-tqft-character-stack"}) in the case of connected groups. The Landau-Ginzburg theory and its corresponding TQFT are described in Section [5](#sec:tqftLG){reference-type="ref" reference="sec:tqftLG"}. We finish the paper with some applications of the arithmetic-geometric correspondence, as discussed in Section [6](#sec:app){reference-type="ref" reference="sec:app"}. **Acknowledgements.** The first-named author thanks the hospitality of the University of Leiden, where part of this work was developed during a research stay. The second and third authors thank the hospitality of the Universidad Complutense de Madrid, where the initial stage of this work was done. The second author thanks Professor Kremnitzer for fruitful conversations about TQFTs. The first-named author has been partially supported by Severo Ochoa excellence programme SEV-2015-0554, Spanish Ministerio de Ciencia e Innovación project PID2019-106493RB-I00, and by the Madrid Government (*Comunidad de Madrid* -- Spain) under the Multiannual Agreement with the Universidad Complutense de Madrid in the line Research Incentive for Young PhDs, in the context of the V PRICIT (Regional Programme of Research and Technological Innovation) through the project PR27/21-029.

# Representation ring and Frobenius algebras {#sec:representation_ring}

Let $G$ be a finite group, and denote by $A = R_\mathbb{C}(G)$ the representation ring of $G$, that is, the complex algebra of $\mathbb{C}$-valued class functions on $G$ (spanned by the characters of $G$). Of great importance in the representation theory of $G$ is the inner product defined on $A$, which we will denote by $\beta : A \otimes_\mathbb{C}A \to \mathbb{C}$ and which is given by $$\beta(a \otimes b) = \frac{1}{|G|} \sum_{g \in G} a(g) b(g^{-1}) \quad \textup{ for } a, b \in A .$$ A lesser-known but equally important operation on $A$ is the convolution operation $\mu : A \otimes_\mathbb{C}A \to A$, which is given by $$\mu(a \otimes b)(g) = \sum_{h \in G} a(h) b(h^{-1} g) \quad \textup{ for } a, b \in A ,$$ and is related to the inner product via $\beta(a \otimes b) = \frac{1}{|G|}\,\mu(a \otimes b)(1)$ for $a, b \in A$. The unit $\eta : \mathbb{C}\to A$ with respect to $\mu$ is given by $\eta(1) = \mathbbm{1}_1$, where $\mathbbm{1}_1$ is the characteristic function of $1$, that is, $\mathbbm{1}_1(1) = 1$ and $\mathbbm{1}_1(g) = 0$ for $g \ne 1$. We write $\eta$ for this unit, rather than $1$, so as not to confuse it with the constant class function $1$, which is the unit for the usual point-wise multiplication on $A$. Alternatively, $\eta$ can be expressed as $$\eta(1) = \frac{1}{|G|} \sum_{\chi \in \hat{G}} \chi(1) \, \chi ,$$ where $\hat{G}$ denotes the set of complex irreducible characters of $G$.
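Since the operations introduced so far are completely explicit, they can be experimented with directly. The following sketch (an illustration of ours, not taken from the text) encodes class functions on $G = S_3$ as dictionaries, implements $\mu$, $\beta$ and $\eta$, and checks the orthogonality of the irreducible characters, the unit property of $\eta$, and the expression of $\eta(1)$ as $\frac{1}{|G|}\sum_\chi \chi(1)\chi$; the hard-coded characters of $S_3$ are an assumption of the example.

```python
# A minimal computational illustration (not from the paper) of the convolution
# mu, the pairing beta and the unit eta on class functions of G = S_3, with the
# three irreducible characters written down by hand.
from itertools import permutations

def compose(p, q):                     # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))       # S_3
e = tuple(range(3))

def sign(p):                           # parity of a permutation via its cycle type
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j); j = p[j]; length += 1
        s *= (-1) ** (length - 1)
    return s

triv = {g: 1 for g in G}                                             # trivial character
sgn  = {g: sign(g) for g in G}                                       # sign character
std  = {g: sum(i == gi for i, gi in enumerate(g)) - 1 for g in G}    # 2-dimensional character
irr  = [triv, sgn, std]

def mu(a, b):                          # convolution: mu(a (x) b)(g) = sum_h a(h) b(h^{-1} g)
    return {g: sum(a[h] * b[compose(inverse(h), g)] for h in G) for g in G}

def beta(a, b):                        # beta(a (x) b) = (1/|G|) sum_g a(g) b(g^{-1})
    return sum(a[g] * b[inverse(g)] for g in G) / len(G)

eta = {g: 1 if g == e else 0 for g in G}   # convolution unit, the characteristic function of 1

# orthogonality of irreducible characters and the unit property of eta
assert all(beta(x, y) == (1 if x is y else 0) for x in irr for y in irr)
assert all(mu(eta, chi) == chi for chi in irr)
# eta(1) = (1/|G|) sum_chi chi(1) chi
assert eta == {g: sum(chi[e] * chi[g] for chi in irr) / len(G) for g in G}
```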
To complete our setup, we will also consider the map $\gamma : \mathbb{C}\to A \otimes_\mathbb{C}A$ given by $$\gamma(1) = \sum_{\chi \in \hat{G}} \chi \otimes \chi .$$ Note that $\gamma(1)$ can be seen as an inner product on the conjugacy classes of $G$, as the corresponding function $G \times G \to \mathbb{C}$ is given by the cardinality $$\gamma(1)(g_1, g_2) = \left|\left\{ h \in G \mid h g_1 h^{-1} = g_2 \right\}\right| .$$ It turns out that these operations give $R_\mathbb{C}(G)$ the structure of a *Frobenius algebra* over $\mathbb{C}$. **Definition 1**. A *Frobenius algebra* over a field $k$ is an algebra $A$ over $k$ equipped with a bilinear form $\beta : A \otimes_k A \to k$, which is - associative, that is, $\beta(ab \otimes c) = \beta(a \otimes bc)$ for all $a, b, c \in A$, - non-degenerate, that is, there exists a $k$-linear map $\gamma : k \to A \otimes_k A$ such that $(\beta \otimes \textup{id}_A) (a \otimes \gamma(1)) = a = (\textup{id}_A \otimes \beta)(\gamma(1) \otimes a)$ for all $a \in A$. **Proposition 2**. *The representation ring $R_\mathbb{C}(G)$ is a commutative Frobenius algebra over $\mathbb{C}$ with multiplication $\mu$ and bilinear form $\beta$.* *Proof.* First note that $\mu$ is associative as $$\begin{aligned} \mu(a \otimes \mu(b \otimes c))(g) &= \sum_{h_1, h_2 \in G} a(h_1) b(h_2) c(h_2^{-1} h_1^{-1} g) \\ &= \sum_{h_1, h_2 \in G} a(h_2) b(h_2^{-1} h_1) c(h_1^{-1} g) = \mu(\mu(a \otimes b) \otimes c)(g) \end{aligned}$$ for all $a, b, c \in A$ and $g \in G$. Similarly, $\mu$ is commutative as $$\begin{aligned} \mu(a \otimes b)(g) &= \sum_{h \in G} a(h) b(h^{-1}g) = \sum_{h' \in G} a(h'^{-1}g) b(g^{-1}h'g) = \sum_{h' \in G} a(h'^{-1}g) b(h') = \mu(b \otimes a)(g),\end{aligned}$$ for all $a, b \in A$ and $g \in G$, where in the second equality we set $h' = g h^{-1}$ and in the third we use that $b$ is a class function. Furthermore, $\beta$ is associative as $$\beta(\mu(a \otimes b) \otimes c) = \frac{1}{|G|} \sum_{g, h \in G} a(h) b(h^{-1} g) c(g^{-1}) = \frac{1}{|G|} \sum_{g, h \in G} a(g) b(g^{-1} h) c(h^{-1}) = \beta(a \otimes \mu(b \otimes c))$$ for all $a, b, c \in A$. Finally, $\beta$ is non-degenerate as $\gamma : \mathbb{C}\to A \otimes_\mathbb{C}A$ satisfies $$(\beta \otimes \textup{id}_A)(a \otimes \gamma(1)) = \sum_{\chi \in \hat{G}} \beta(a \otimes \chi) \chi = a$$ by the orthogonality of irreducible characters [@fulton2013representation Theorem 2.12], and similarly $(\textup{id}_A \otimes \beta)(\gamma(1) \otimes a) = a$, for all $a \in A$. ◻ Frobenius algebras naturally carry even more structure: the structure of a coalgebra. In particular, there is a coassociative comultiplication $\delta : A \to A \otimes_\mathbb{C}A$ with a counit $\varepsilon : A \to \mathbb{C}$, which are given by $$\delta(a) = (\mu \otimes \textup{id}_A)(a \otimes \gamma(1)) \quad \textup{ and } \quad \varepsilon(a) = \beta(a \otimes 1_A) = \beta(1_A \otimes a) ,$$ where $1_A$ denotes the unit of $A$ with respect to $\mu$. The algebra and coalgebra structures are compatible in a certain sense; see [@kock] for details. In the case of the representation ring $A = R_\mathbb{C}(G)$, one can check that the comultiplication and counit are explicitly given by $$\delta(a) = \sum_{\chi \in \hat{G}} \mu(a \otimes \chi) \otimes \chi \quad \textup{ and } \quad \varepsilon(a) = \frac{1}{|G|} a(1) \quad \textup{ for all } a \in A .$$ There is a surprising relation between the $G$-representation varieties of closed surfaces and the character table of $G$, given by the following proposition, which goes back to Frobenius [@frobenius1896gruppencharaktere].
What is remarkable is that this formula can be written purely in terms of the operations of the Frobenius algebra structure on the representation ring of $G$. **Proposition 3** (Frobenius' formula, [@frobenius1896gruppencharaktere]). *The number of points of the $G$-representation variety of $\Sigma_g$ is given by $$\label{eq:frobenius_formula} \frac{|R_G(\Sigma_g)|}{|G|} = (\varepsilon \circ (\mu \circ \delta)^g \circ \eta)(1) = \sum_{\chi \in \hat{G}} \left( \frac{|G|}{\chi(1)} \right)^{2g - 2} .$$* In order to prove this proposition, we will make use of the following lemma. **Lemma 4**. *In any Frobenius algebra $A$, one has $\mu \circ \delta = \mu \circ (\textup{id}_A \otimes (\mu \circ \delta \circ \eta)(1))$.* *Proof.* Let us start with the right-hand side. By definition of $\delta$, this is equal to $$\mu \circ (\textup{id}_A \otimes (\mu \circ (\mu \otimes \textup{id}_A)(\eta(1) \otimes \gamma(1)))) .$$ Since $\eta$ is the unit for $\mu$, this reduces to $$\mu \circ (\textup{id}_A \otimes (\mu \circ \gamma(1))) .$$ By associativity of $\mu$, this is equal to $$\mu \circ (\mu \otimes \textup{id}_A) \circ (\textup{id}_A \otimes \gamma(1)) .$$ Finally, by definition of $\delta$, this is equal to the left-hand side. ◻ *Proof of Proposition [Proposition 3](#prop:frobenius_formula){reference-type="ref" reference="prop:frobenius_formula"}.* Let $f : G \to \mathbb{C}$ be the class function given by $f(g) = |\{ (A, B) \in G^2 \mid [A, B] = g \}|$. From the explicit expression of the representation variety [\[eq:presentation_representation_variety_closed_surface\]](#eq:presentation_representation_variety_closed_surface){reference-type="eqref" reference="eq:presentation_representation_variety_closed_surface"} and the definition of the convolution $a * b = \mu(a \otimes b)$ on $R_\mathbb{C}(G)$, it follows immediately that the number of points of the representation variety is given by $$|R_G(\Sigma_g)| = (\underbrace{f * \cdots * f}_{g \textup{ times}})(1) .$$ Therefore, using Lemma [Lemma 4](#lemma:for_frobenius_formula){reference-type="ref" reference="lemma:for_frobenius_formula"}, it is sufficient to show that $f$ is equal to $(\mu \circ \delta \circ \eta)(1) = \sum_{\chi \in \hat{G}} \frac{|G|}{\chi(1)} \chi$, or equivalently, that $\beta(f \otimes \chi) = \frac{|G|}{\chi(1)}$ for any irreducible complex character $\chi$ of $G$. Note that $$\beta(f \otimes \chi) = \frac{1}{|G|} \sum_{g \in G} f(g) \chi(g^{-1}) = \frac{1}{|G|} \sum_{A, B \in G} \chi([A, B]^{-1}) = \frac{1}{|G|} \sum_{A, B \in G} \chi(B A B^{-1} A^{-1}) .$$ Let $\rho : G \to \textup{GL}(V)$ be a representation with character $\chi$. From Schur's lemma it follows that, for any $A \in G$, the operator $T_A = \sum_{B \in G} \rho(B A B^{-1})$ is a scalar multiple of the identity, that is, $T_A = \frac{\mathop{\mathrm{tr}}(T_A)}{\chi(1)} \, \textup{id}_V = \frac{|G| \, \chi(A)}{\chi(1)} \, \textup{id}_V$. Hence, it follows that $$\beta(f \otimes \chi) = \frac{1}{|G|} \sum_{A \in G} \mathop{\mathrm{tr}}\big(T_A \, \rho(A^{-1})\big) = \frac{1}{|G|} \sum_{A \in G} \frac{|G|}{\chi(1)} \chi(A) \chi(A^{-1}) = \frac{|G|}{\chi(1)} ,$$ where the last equality uses the orthogonality relation $\sum_{A \in G} \chi(A) \chi(A^{-1}) = |G|$. The second equality in ([\[eq:frobenius_formula\]](#eq:frobenius_formula){reference-type="ref" reference="eq:frobenius_formula"}) follows directly from the definitions of $\varepsilon, \mu, \delta$ and $\eta$.
◻ ## Topological Quantum Field Theories {#sec:tqft} While Frobenius' formula, Proposition [Proposition 3](#prop:frobenius_formula){reference-type="ref" reference="prop:frobenius_formula"}, can be beautifully expressed in terms of the operations on a Frobenius algebra, it is possible to view this formula in an even more general framework, that of *Topological Quantum Field Theories*, or TQFTs for short. In this section, we shall review the main concepts and notations we will use around TQFTs. For a more thorough treatment of TQFTs, see [@atiyah1988topological; @kock]. Let us recall the definition of an (oriented) bordism. **Definition 5**. Let $i : M \to \partial W$ be a smooth embedding of a closed oriented $(n - 1)$-dimensional manifold $M$ into the boundary of a compact oriented $n$-dimensional manifold $W$. Then $i$ is an *in-boundary* (resp. *out-boundary*) if for all $x \in M$, positively oriented bases $v_1, \ldots, v_{n - 1}$ for $T_x M$, and $w \in T_{i(x)} W$ pointing inwards (resp. outwards) compared to $W$, the basis $di_x(v_1), \ldots, di_x(v_{n - 1}), w$ for $T_{i(x)} W$ is positively oriented. Given two closed oriented $(n - 1)$-dimensional manifolds $M_1$ and $M_2$, a *bordism* from $M_1$ to $M_2$ is a diagram $$M_2 \xrightarrow{i_2} W \xleftarrow{i_1} M_1$$ consisting of a compact oriented manifold $W$ with boundary $\partial W$, an in-boundary $i_1 : M_1 \to \partial W$ and an out-boundary $i_2 : M_2 \to \partial W$, such that $i_1(M_1) \cap i_2(M_2) = \varnothing$ and $\partial W = i_1(M_1) \cup i_2(M_2)$. Two bordisms $(W, i_1, i_2)$ and $(W', i_1', i_2')$ between $M_1$ and $M_2$ are *equivalent* if there exists a diffeomorphism $\alpha : W \to W'$ such that the diagram $$\begin{tikzcd}[row sep=0.5em] & W \arrow{dd}{\alpha} & \\ M_2 \arrow{ur}{i_2} \arrow[swap]{dr}{i_2'} & & M_1 \arrow[swap]{ul}{i_1} \arrow{dl}{i_1'} \\ & W' & \end{tikzcd}$$ commutes. **Definition 6**. Let $n \ge 1$ be an integer. The *category of $n$-dimensional bordisms*, denoted $\textup{\bfseries Bord}_n$, is the category whose objects are closed oriented $(n - 1)$-dimensional manifolds (possibly empty). A morphism $W : M_1 \to M_2$ between two manifolds is an equivalence class of bordisms between $M_1$ and $M_2$. Composition in $\textup{\bfseries Bord}_n$ is given by gluing along the common boundary, which can be shown to be a well-defined operation on equivalence classes of bordisms [@milnor]. Furthermore, the category $\textup{\bfseries Bord}_n$ is endowed with a symmetric monoidal structure given by the disjoint union of both objects and bordisms. **Remark 7**. From Section [3.4](#sec:field_theory_and_quantization){reference-type="ref" reference="sec:field_theory_and_quantization"} on, we consider $\textup{\bfseries Bord}_n$ as a $2$-category rather than a $1$-category, where $2$-morphisms are given by equivalences of bordisms. However, for the purposes of this section, it suffices to consider $\textup{\bfseries Bord}_n$ as the 'truncated' $1$-category. Given a commutative ring $R$, denote by $R\textup{-}\textup{\bfseries Mod}$ the category of $R$-modules. When $R = k$ is a field, this category will also be denoted by $\textup{\bfseries Vect}_k$, the category of $k$-vector spaces. These categories are symmetric monoidal categories with monoidal structure given by tensor product over $R$. **Definition 8**. 
An *$n$-dimensional Topological Quantum Field Theory (TQFT)* is a symmetric monoidal functor $$Z : \textup{\bfseries Bord}_n \to R\textup{-}\textup{\bfseries Mod}.$$ When $Z$ is not monoidal, but only lax monoidal, $Z$ is called a *lax TQFT*. **Remark 9**. In this context, recall that monoidality for $Z : \textup{\bfseries Bord}_n \to R\textup{-}\textup{\bfseries Mod}$ means that $Z(\varnothing) = R$ and we have natural isomorphisms $Z(M_1 \sqcup M_2) \cong Z(M_1) \otimes_R Z(M_2)$, for objects $M_1$ and $M_2$. Lax monoidality, on the other hand, means that $Z(\varnothing) = R$ and there exists a homomorphism $Z(M_1) \otimes_R Z(M_2) \to Z(M_1 \sqcup M_2)$, but this map may not be an isomorphism. TQFTs have an important application in algebraic topology, as they can be used to compute algebraic invariants of closed manifolds. Suppose that we are interested in a particular $R$-valued invariant $\chi(W)$ associated to a closed $n$-dimensional manifold $W$. For instance, $\chi(W)$ might be some (co)homological information of a moduli space attached to $W$. In that case, we will say that a TQFT $Z : \textup{\bfseries Bord}_n \to R\textup{-}\textup{\bfseries Mod}$ *quantizes* $\chi$ if, for any closed $n$-dimensional manifold $W$, we have that $Z(W)(1) = \chi(W)$. Here, we view $W$ as a bordism $W : \varnothing \to \varnothing$, so that by monoidality of the TQFT, $Z(W)$ will be a morphism $R \to R$ of $R$-modules, that is, $Z(W)$ is simply multiplication by some element of $R$.

## Frobenius TQFT {#subsec:point_counting_tqft}

TQFTs have been studied in depth in the literature; see for instance [@atiyah1988topological; @kock]. A key observation in this direction is that monoidality of the functor $Z: \textup{\bfseries Bord}_n \to R\textup{-}\textup{\bfseries Mod}$ implies that dualizable objects in $\textup{\bfseries Bord}_n$ are sent to dualizable objects in $R\textup{-}\textup{\bfseries Mod}$. In particular, since every object $M$ of $\textup{\bfseries Bord}_n$ is dualizable, $Z(M)$ must be a dualizable object in $R\textup{-}\textup{\bfseries Mod}$, and these are precisely the finitely generated projective modules. This suggests that TQFTs should be characterizable in terms of additional algebraic structures imposed on a finitely generated projective module. One of the earlier results in this direction came from Dijkgraaf [@Dijkgraaf1989], who observed that $2$-dimensional TQFTs are essentially the same as commutative Frobenius algebras. More precise proofs of this statement were later given by Abrams [@abrams1996two] (see also [@kock]). **Theorem 10** ([@Dijkgraaf1989; @kock]). *Let $k$ be a field. There is an equivalence of categories $$\textbf{\textup{2-TQFTs}} \simeq \textup{\bf CommFrobAlg}_k$$ between the category of $\textup{\bfseries Vect}_k$-valued $2$-dimensional TQFTs and the category of commutative Frobenius algebras over $k$, given by sending a TQFT to its value on the circle.* In Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"} we have seen that the representation ring $R_\mathbb{C}(G)$ of a finite group $G$ carries the structure of a Frobenius algebra. Therefore, it is natural to consider the corresponding $2$-dimensional TQFT, and to ask which invariant of closed surfaces is quantized by this TQFT.
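Anticipating the explicit computation carried out below, the following sketch (our illustration, not part of the text) spells out this TQFT in the basis of irreducible characters: there $\eta$ and $\varepsilon$ have coordinates $\chi(1)/|G|$, the handle bordism acts diagonally with eigenvalues $|G|^2/\chi(1)^2$, and composing these maps reproduces Frobenius' formula. The degrees are those of $S_3$, hard-coded as an assumption of the example.

```python
# A sketch (not from the paper) of the 2-dimensional TQFT attached to the
# Frobenius algebra R_C(G) via Theorem 10, written in the basis of irreducible
# characters of G.  In this basis, eta and epsilon have coordinates chi(1)/|G|,
# and the 'handle' operator mu o delta is diagonal with eigenvalues
# (|G|/chi(1))^2.  The degrees below are those of S_3, an illustrative choice.
import numpy as np

order   = 6                       # |G| for G = S_3
degrees = np.array([1, 1, 2])     # chi(1) for the irreducible characters of S_3

eta     = degrees / order                       # coordinates of eta(1)
epsilon = degrees / order                       # epsilon(chi) = chi(1)/|G|
handle  = np.diag((order / degrees) ** 2)       # mu o delta in the character basis

def Z(g):
    """Value of the Frobenius TQFT on the genus-g surface:
    epsilon o (mu o delta)^g o eta."""
    return epsilon @ np.linalg.matrix_power(handle, g) @ eta

for g in range(4):
    frobenius = sum((order / d) ** (2 * g - 2) for d in degrees)
    print(g, Z(g), frobenius)     # Z(g) reproduces Frobenius' formula

# recovering the degrees from the eigenvalues, in the spirit of Theorem 6(i):
# d_i = |G| / sqrt(lambda_i), with lambda_i the eigenvalues of the handle operator
print(order / np.sqrt(np.diagonal(handle)))     # -> [1. 1. 2.]
```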
Let us denote the TQFT associated to the representation ring $R_\mathbb{C}(G)$, under the correspondence of Theorem [Theorem 10](#thm:equivalence_tqft_frobenius){reference-type="ref" reference="thm:equivalence_tqft_frobenius"}, by $$\label{eq:tqft_finite} Z_G^\textup{Fr}: \textup{\bfseries Bord}_2 \to \textup{\bfseries Vect}_\mathbb{C}$$ which we call the *Frobenius TQFT*. It is explicitly given by $Z_G^\textup{Fr}(S^1) = R_\mathbb{C}(G)$ and $$\quad Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right) = \eta, \quad Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (1,0.5) ellipse (0.2cm and 0.4cm); \draw (1,-0.5) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (0,0.4) and (0,0.9) .. (1,0.9); \draw (-1,-0.4) .. controls (0,-0.4) and (0,-0.9) .. (1,-0.9); \draw (1,0.1) .. controls (0.2,0.1) and (0.2,-0.1) .. (1,-0.1); \end{scope} \end{tikzpicture} \right) = \mu, \quad Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0.5) ellipse (0.2cm and 0.4cm); \draw (0,-0.5) ellipse (0.2cm and 0.4cm); \draw (0,0.9) arc (90:270:1.1cm and 0.9cm); \draw (0,0.1) arc (90:270:0.3cm and 0.1cm); \end{scope} \end{tikzpicture} \right) = \beta,$$ $$Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right) = \varepsilon, \quad Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0.5) ellipse (0.2cm and 0.4cm); \draw (-1,-0.5) ellipse (0.2cm and 0.4cm); \draw (1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.9) .. controls (0,0.9) and (0,0.4) .. (1,0.4); \draw (-1,-0.9) .. controls (0,-0.9) and (0,-0.4) .. (1,-0.4); \draw (-1,0.1) .. controls (-0.2,0.1) and (-0.2,-0.1) .. (-1,-0.1); \end{scope} \end{tikzpicture} \right) = \delta, \quad Z_G^\textup{Fr}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0.5) ellipse (0.2cm and 0.4cm); \draw (0,-0.5) ellipse (0.2cm and 0.4cm); \draw (0,0.9) arc (90:-90:1.1cm and 0.9cm); \draw (0,0.1) arc (90:-90:0.3cm and 0.1cm); \end{scope} \end{tikzpicture} \right) = \gamma .$$ **Remark 11**. The compatibility relations of the algebra and coalgebra structures of a Frobenius algebra translate to compatibilities on the bordisms. We refer the reader to [@kock]. Naturally, we are now interested in the $\mathbb{C}$-valued invariant of closed surfaces that this TQFT quantizes. For any surface $\Sigma_g$ of genus $g$, we have $$\begin{array}{ccrcccl}\label{eq:frtqftpt} Z_G^\textup{\textup{Fr}}(\Sigma_g) & = & Z_G^\textup{\textup{Fr}} \Bigg( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} & \circ & \underbrace{ \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. 
(0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \circ \cdots \circ \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} }_{g \textup{ times}} & \circ & \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \Bigg) \\[25pt] & = & Z_G^\textup{\textup{Fr}}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right) & \circ & Z_G^\textup{\textup{Fr}}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g & \circ & Z_G^\textup{\textup{Fr}}\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} \right) \\[15pt] & = & \varepsilon & \circ & (\mu \circ \delta)^g & \circ & \eta . \end{array}$$ From Proposition [Proposition 3](#prop:frobenius_formula){reference-type="ref" reference="prop:frobenius_formula"} it now becomes apparent how to interpret the invariant quantized by this TQFT. **Corollary 12**. *The TQFT $Z_G^\textup{\textup{Fr}}$ quantizes the number of points of the $G$-representation variety $R_G(\Sigma_g)$ divided by $|G|$. 0◻* # Character stack TQFT {#sec:character-stack-tqft} Let $G$ be an algebraic group over a field $k$. In this section, we shall construct a lax TQFT $$Z_G : \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$$ computing the virtual classes of character stacks in the Grothendieck ring of algebraic stacks over $k$. This formulation extends and improves the previous version of this functor [@gonzalez2022virtual] and sets a proper framework for the non-stacky versions [@arXiv181009714; @gon20]. ## Character groupoids Before introducing the $G$-character stack, we will forget all geometry, and let $G$ simply be a group. Recall that a groupoid $\Gamma$ is a category all of whose morphisms are invertible. A groupoid $\Gamma$ is *finitely generated* if it has finitely many objects, all of whose automorphism groups are finitely generated. If, in addition, all automorphism groups are finite we shall say that $\Gamma$ is *finite*. Furthermore, we shall say $\Gamma$ is *essentially finitely generated* (resp. *essentially finite*) if it is equivalent to a finitely generated (resp. finite) groupoid. **Definition 13**. 
Given a groupoid $\Gamma$, the *$G$-character groupoid* of $\Gamma$ is the groupoid $$\mathfrak{X}_G(\Gamma) = \textbf{Fun}(\Gamma, G)$$ whose objects are functors $\rho : \Gamma \to G$ (where $G$ is seen as a groupoid with a single object), and morphisms $\rho_1 \to \rho_2$ are given by natural transformations $\mu : \rho_1 \Rightarrow \rho_2$. The map $\mathfrak{X}_G$ can naturally be extended to a $2$-functor $\mathfrak{X}_G : \textup{\bfseries Grpd}\to \textup{\bfseries Grpd}^\textup{op}$. Explicitly, - for any functor $f : \Gamma' \to \Gamma$ between groupoids, let $\mathfrak{X}_G(f) : \mathfrak{X}_G(\Gamma) \to \mathfrak{X}_G(\Gamma')$ be the functor given by precomposition $\mathfrak{X}_G(f)(\rho) = \rho \circ f$ for any $\rho \in \mathfrak{X}_G(\Gamma)$, and $\mathfrak{X}_G(f)(\mu) = \mu f$ for any morphism $\mu : \rho_1 \to \rho_2$. - for any natural transformation $\eta : f_1 \Rightarrow f_2$ between functors $f_1, f_2 : \Gamma' \to \Gamma$, let $\mathfrak{X}_G(\eta) : \mathfrak{X}_G(f_1) \Rightarrow \mathfrak{X}_G(f_2)$ be the natural transformation given by $(\mathfrak{X}_G(\eta)_\rho)_{x'} = \rho(\eta_{x'})$ for all $\rho \in \mathfrak{X}_G(\Gamma)$ and $x' \in \Gamma'$. Indeed, this defines a natural transformation as the square $$\begin{tikzcd} \rho(f_1(x')) \arrow{r}{\rho(\eta_{x'})} \arrow[swap]{d}{\rho(f_1(\gamma'))} & \rho(f_2(x')) \arrow{d}{\rho(f_2(\gamma'))} \\ \rho(f_1(y')) \arrow[swap]{r}{\rho(\eta_{y'})} & \rho(f_2(y')) \end{tikzcd}$$ commutes for every $\gamma' : x' \to y'$ in $\Gamma'$ by naturality of $\eta$, and this is natural in $\rho$. Note that $\mathfrak{X}_G$ strictly preserves the composition of $1$-morphisms and $2$-morphisms, and therefore defines a strict $2$-functor. **Corollary 14**. *Any equivalence between groupoids $\Gamma$ and $\Gamma'$ naturally induces an equivalence between the $G$-character groupoids $\mathfrak{X}_G(\Gamma)$ and $\mathfrak{X}_G(\Gamma')$. ◻* Now, if $G$ is a finite group and $\Gamma$ is a finitely generated groupoid, then it can easily be seen that $\mathfrak{X}_G(\Gamma)$ is a finite groupoid. Hence, Corollary [Corollary 14](#cor:equivalent_character_groupoids){reference-type="ref" reference="cor:equivalent_character_groupoids"} implies that $\mathfrak{X}_G(\Gamma)$ is essentially finite for $\Gamma$ essentially finitely generated. Therefore, for $G$ a finite group, we can restrict $\mathfrak{X}_G$ to a 2-functor $$\mathfrak{X}_G : \textup{\bfseries FGGrpd}\to \textup{\bfseries FinGrpd}^\textup{op}$$ from the 2-category of essentially finitely generated groupoids to (the opposite of) the 2-category of essentially finite groupoids. **Definition 15**. Let $M$ be a compact manifold. The *fundamental groupoid* of $M$ is the groupoid $\Pi(M)$ whose objects are the points of $M$, and morphisms $x \to y$ are given by homotopy classes of paths from $x$ to $y$. Note that if $M$ is a compact manifold, then $\Pi(M)$ is essentially finitely generated since $M$ is homotopy equivalent to a finite CW-complex [@whitehead1940c1]. Moreover, for any smooth map of manifolds $f : M \to N$, there is an induced functor $\Pi(f) : \Pi(M) \to \Pi(N)$. In particular, one can think of $\Pi$ as a functor $\Pi : \textup{\bfseries Mnfd}_c \to \textup{\bfseries FGGrpd}$ out of the category of compact smooth manifolds. Furthermore, $\Pi$ can be promoted to a $2$-functor if one considers $\textup{\bfseries Mnfd}_c$ as a $2$-category where $2$-morphisms are given by smooth homotopies. **Definition 16**. Let $M$ be a compact manifold.
The *$G$-character groupoid* of $M$, denoted $\mathfrak{X}_G(M)$, is defined as $\mathfrak{X}_G(\Pi(M))$. In particular, if $G$ is finite, $\mathfrak{X}_G(M)$ is essentially finite. Let us think about the groupoid $\mathfrak{X}_G(M)$ a bit more closely. An object $\rho$ can be identified with a map from the set of homotopy classes of paths on $M$ to $G$ which preserves composition. A morphism from $\rho_1$ to $\rho_2$ is a natural transformation $\mu : \rho_1 \Rightarrow \rho_2$, which can be thought of as a function $\mu : M \to G$ such that $\rho_2(\gamma) = \mu(y) \rho_1(\gamma) \mu(x)^{-1}$ for any path $\gamma : x \to y$ in $\Pi(M)$. Such transformations are known in physics as *local gauge transformations*. With this intuition, there is an alternative way to think about the $G$-character groupoid. Defining $\mathcal{G}_\Gamma = \prod_{x \in \Gamma} G$, which we call the *group of local gauge transformations*, it acts on the set $X = \textup{Hom}(\Gamma, G)$ by $$((g_x)_{x \in \Gamma} \cdot \rho)(\gamma) = g_y \, \rho(\gamma) \, g_x^{-1}$$ for any $\rho \in X$ and $\gamma : x \to y$ in $\Gamma$. Now the $G$-character groupoid $\mathfrak{X}_G(\Gamma)$ is equivalent to the action groupoid $[X/\mathcal{G}_\Gamma]$. This alternative description will be the key to defining the $G$-character stacks. **Definition 17**. The *groupoid cardinality* of an essentially finite groupoid $\mathfrak{X}$ is $$|\mathfrak{X}| = \sum_{x \in [\mathfrak{X}]} \frac{1}{|\mathop{\mathrm{Aut}}(x)|} \in \mathbb{Q},$$ where $[\mathfrak{X}]$ denotes the quotient category where all the isomorphic objects have been identified and $\mathop{\mathrm{Aut}}(x)$ is the automorphism group of an object $x$ in $[\mathfrak{X}]$ (in other words, $x\in \pi_0(\mathfrak{X})$, i.e, $x$ is an equivalence class of isomorphic objects in $\mathfrak{X}$). **Proposition 18**. *Let $M$ be a connected compact manifold. Then we have $$|\mathfrak{X}_G(M)| = \frac{|\textup{Hom}(\pi_1(M), G)|}{|G|}.$$* *Proof.* Since $M$ is connected, its fundamental groupoid is equivalent to the groupoid $\Gamma_M$ with a single object and $\pi_1(M)$ automorphisms. As shown above, in this setting we have that $\mathfrak{X}_G(M)$ is equivalent to the action groupoid $[\textup{Hom}(\Gamma_M, G) / \mathcal{G}_{\Gamma_M}]$. This is precisely a groupoid with $\textup{Hom}(\Gamma_M, G) = \textup{Hom}(\pi_1(M), G)$ objects and the action of $G$ by conjugation. The result follows then from the orbit-stabilizer formula. ◻ ## Character stacks {#sec:charstacks} Let $G$ be an algebraic group over a field or, more generally, a finitely generated commutative algebra over $\mathbb{Z}$. We construct the $G$-character stack as the geometric analogue of the $G$-character groupoid. First, we briefly discuss what we mean by a stack in the next couple of paragraphs. Following [@eke09], in this paper, a stack will refer to an algebraic (Artin) stack of finite type over a field (or more generally, over a finitely generated ring over $\mathbb{Z}$) with affine stabilizers at every closed point. Similarly, we define the category of stacks, $\textup{\bfseries Stck}_{\mathfrak{S}}$, over a base stack $\mathfrak{S}$ (here, $\mathfrak{S}$ is an algebraic stack, finite type over a field with affine stabilizers) as the slice category of the category of stacks over $\mathfrak{S}$. **Definition 19**. The category of stacks over $\mathfrak{S}$, $\textup{\bfseries Stck}_{\mathfrak{S}}$, is defined as follows. 
- The objects are pairs $(\mathfrak{X}, \pi)$, where $\mathfrak{X}$ is an algebraic stack of finite type over $k$ with affine stabilizers, and $\pi \colon \mathfrak{X} \to \mathfrak{S}$ is a $1$-morphism of stacks. If the $1$-morphism $\pi$ is understood from the context, we denote the object simply by $\mathfrak{X}$. - A $1$-morphism $(f, \alpha) \colon (\mathfrak{X}, \pi) \to (\mathfrak{X'}, \pi')$ consists of a $1$-morphism of stacks $f \colon \mathfrak{X} \to \mathfrak{X'}$ and a $2$-morphism of stacks $\alpha \colon \pi \Rightarrow \pi' \circ f$. - A $2$-morphism $\mu \colon (f, \alpha) \Rightarrow (g, \beta)$ in $\textup{\bfseries Stck}_\mathfrak{S}$ is a natural isomorphism such that $\pi'(\mu) \circ \alpha = \beta$. $$\begin{tikzcd}[column sep=large, row sep = huge] & \mathfrak{S} & \\ \mathfrak{X} \arrow[dr, swap, "\pi"{name=U}] \arrow[ur, "\pi"{name=W}] \arrow[rr, swap, bend right=15, "f"{name=A}] \arrow[rr, bend left=15, "g"{name=B}] \arrow[Rightarrow, shorten=10mm, from=W, to=rr, shift left=1.5ex, "\beta"] \arrow[Rightarrow, shorten = 1.5mm, from=A, to=B, "\mu"] & & \mathfrak{X}' \arrow[dl, "\pi'"]\arrow[ul, swap, "\pi'"] \\ & \mathfrak{S} \arrow[Rightarrow, swap, shorten=10mm, from=U, to=ru, shift right=2ex, "\alpha"] & \end{tikzcd}$$ **Remark 20**. Global quotient stacks of the form $[X/G]$ where $G$ is a linear algebraic group acting on a variety $X$ are examples of algebraic stacks of finite type with affine stabilizers. In fact, any reduced algebraic stack of finite type with affine stabilizers can be stratified by global quotient stacks [@kres99]. The remark above justifies why we would only work with stacks with affine stabilizers, namely, character stacks are stacks with affine stabilizers. We will use the following lemma about stacks with affine stabilizers several times in the paper. **Lemma 21**. *The category of stacks over $\mathfrak{S}$, $\textup{\bfseries Stck}_{\mathfrak{S}}$, is closed under products. In fact, if $\mathfrak{X}\to \mathfrak{S}$ and $\mathfrak{Y}\to \mathfrak{S}$ are morphisms of stacks with affine stabilizers, then the fiber product $\mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}$ also has affine stabilizers.* *Proof.* Let $\star$ be a point of the fiber product $\mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}$. The induced points on $\mathfrak{X}$, $\mathfrak{Y}$, and $\mathfrak{S}$ are also denoted by $\star$. Furthermore, let us denote the corresponding affine stabilizers by $G_{ \mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}}$, $G_\mathfrak{X}$, $G_\mathfrak{Y}$, and $G_\mathfrak{S}$ respectively. Consider the following diagram where all squares (with horizontal and vertical arrows) are Cartesian diagrams. $$\begin{tikzcd} G_{\mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}}\ar[r]\ar[d]&G_{\mathfrak{Y}}\ar[r]\ar[d]&\star\ar[dd]\ar[rd,equals]&\\ G_{\mathfrak{X}}\ar[r]\ar[d]&G_{\mathfrak{S}}\ar[rr]\ar[dd]&& \star\ar[d]\\ \star\ar[rd,equals]\ar[rr]&&\mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}\ar[r]\ar[d]& \mathfrak{X}\ar[d]\\ &\star\ar[r]&\mathfrak{Y} \ar[r]& \mathfrak{S} \end{tikzcd}$$ This diagram shows that the stabilizer of the point $\star$ of $\mathfrak{X}\times_{\mathfrak{S}}\mathfrak{Y}$ is the fiber product of the stabilizers of corresponding points of $\mathfrak{X}$, $\mathfrak{Y}$, and $\mathfrak{S}$, implying that the stabilizer is affine. ◻ After our brief introduction of stacks, we turn our attention to defining character stacks. First, we define $G$-representation varieties. **Definition 22**. 
Let $\Gamma$ be a finitely generated groupoid. The *$G$-representation variety* of $\Gamma$ is the variety over $k$ whose functor of points is given by $$R_G(\Gamma)(T) = \textup{Hom}(\Gamma, G(T)) ,$$ where the group $G(T)$ is seen as a groupoid with a single object. Functoriality on $T$ is inherited from the functoriality of $G$ and $\textup{Hom}$. Note that $R_G(\Gamma)$ is indeed representable: choosing generators $\gamma_1, \ldots, \gamma_n$ of $\Gamma$, the $G$-representation variety can be realized as a closed subvariety of $G^n$, and this variety structure can be shown to be independent of the chosen generators. However, $R_G(\Gamma)$ is not well-defined up to equivalence of $\Gamma$. This will be fixed once we pass to the $G$-character stack. Similar to the previous section, we define the group of *local gauge transformations* to be the algebraic group $$\mathcal{G}_\Gamma = \prod_{x \in \Gamma} G .$$ There is a natural action of $\mathcal{G}_\Gamma$ on $R_G(\Gamma)$, which is pointwise given by $$((g_x)_{x \in \Gamma} \cdot \rho)(\gamma) = g_y \, \rho(\gamma) \, g_x^{-1}$$ for all $(g_x)_{x \in \Gamma} \in \mathcal{G}_\Gamma(T)$ and $\rho \in R_G(\Gamma)(T)$ and $\gamma : x \to y$ in $\Gamma$. **Definition 23**. Let $\Gamma$ be a finitely generated groupoid. The *$G$-character stack* of $\Gamma$ is the quotient stack $$\mathfrak{X}_G(\Gamma) = \left[ R_G(\Gamma) \,/\, \mathcal{G}_\Gamma \right] .$$ **Remark 24**. The $G$-character stack is indeed a stack with affine stabilizers (see Definition [Definition 19](#def:stacks){reference-type="ref" reference="def:stacks"}) as the stabilizer of a point of $\mathfrak{X}_G(\Gamma)$ is a closed subgroup of $G$. **Remark 25**. It is possible to promote $\mathfrak{X}_G(-)$ to a $2$-functor $\textup{\bfseries FGGrpd}\to \textup{\bfseries Stck}_k^\textup{op}$. Let $f : \Gamma' \to \Gamma$ be a functor between finitely generated groupoids. Such a functor induces a morphism between the representation varieties, given by pullback $$f^* : R_G(\Gamma) \to R_G(\Gamma'), \quad \rho \mapsto \rho \circ f \quad \textup{ for all } \rho \in R_G(\Gamma)(T) ,$$ and also a morphism of groups $$\mathcal{G}_f : \mathcal{G}_\Gamma \to \mathcal{G}_{\Gamma'} , \quad (g_x)_{x \in \Gamma} \mapsto (g_{f(x')})_{x' \in \Gamma'} .$$ In particular, there is an induced map on character stacks $\mathfrak{X}_G(\Gamma) \to \mathfrak{X}_G(\Gamma')$ given by sending a $\mathcal{G}_\Gamma$-torsor $P$ to the $\mathcal{G}_{\Gamma'}$-torsor $\mathcal{G}_{\Gamma'} \times_{\mathcal{G}_\Gamma} P$. This is easily seen to be functorial in $f$. Next, let $\eta : f_1 \Rightarrow f_2$ be a natural transformation between functors $f_1, f_2 : \Gamma' \to \Gamma$. To this natural transformation, we want to assign a $2$-morphism $\mathfrak{X}_G(f_1) \Rightarrow \mathfrak{X}_G(f_2)$, which amounts to, for every $\mathcal{G}_\Gamma$-torsor $P$ over $T$ with $\mathcal{G}_\Gamma$-equivariant map $\rho : P \to R_G(\Gamma)$, a morphism of $\mathcal{G}_{\Gamma'}$-torsors (as indicated by the dashed arrow) such that the diagram $$\begin{tikzcd}[row sep=1em, column sep=6em] \mathcal{G}_{\Gamma'} \times_{\mathcal{G}_\Gamma} P \arrow[dashed]{dd} \arrow[bend left=10]{dr}{(g', p) \mapsto g' \cdot f_1^*(\rho(p))} & \\ & R_G(\Gamma') \\ \mathcal{G}_{\Gamma'} \times_{\mathcal{G}_\Gamma} P \arrow[swap, bend right=10]{ur}{(g', p) \mapsto g' \cdot f_2^*(\rho(p))} & \end{tikzcd}$$ commutes. Analogous to the case for $G$-character groupoids, this morphism is given by $(g', p) \mapsto (g' \rho(p)(\eta_{x'}), p)$. 
Note that this map is well-defined (that is, it respects the $\mathcal{G}_\Gamma$-action on both sides), as can be checked by unfolding the definitions. **Corollary 26**. *Any equivalence between finitely generated groupoids $\Gamma$ and $\Gamma'$ naturally induces an isomorphism between the $G$-character stacks $\mathfrak{X}_G(\Gamma)$ and $\mathfrak{X}_G(\Gamma')$. ◻* This observation allows us to extend the definition of the $G$-character stack to groupoids $\Gamma$ which are only essentially finitely generated. In particular, this allows us to define the $G$-character stack of a compact manifold. **Definition 27**. Let $M$ be a compact manifold. The *$G$-character stack* of $M$ is defined as $$\mathfrak{X}_G(M) = \mathfrak{X}_G(\Gamma)$$ where $\Gamma$ is any finitely generated groupoid equivalent to the fundamental groupoid $\Pi(M)$ of $M$. By the above corollary, this definition is, up to isomorphism, independent of the choice of $\Gamma$. However, to be exact, one needs to make a choice of $\Gamma$ for every $M$. **Remark 28**. There is a crucial difference between this definition and the one considered in [@gonzalez2022virtual]. In the latter, only an action of $G$ on $R_G(\Gamma)$ is considered, given by $(g \cdot \rho) (\gamma) = g^{-1} \rho(\gamma) g$. This can be seen as a "global" gauge action, where the coordinate system is changed simultaneously at all the vertices. In sharp contrast, in this work, we will consider a *local* gauge action, where the gauge is allowed to change differently at each of the vertices of the groupoid. **Remark 29**. Note that the $G$-character stack cannot be described, via its functor of points, by the $G$-character groupoids, since $$\label{eq:chargroupoidnotfunctor} \mathfrak{X}_G(\Gamma)(T) \ne \left[ R_G(\Gamma)(T) \Big/ \mathcal{G}_\Gamma(T) \right] .$$ For example, let $G = \mathbb{Z}/ 2\mathbb{Z}$ and let $\Gamma$ be the trivial groupoid. Then $\mathfrak{X}_G(\Gamma) = {\textup{B}G}$. However, over a finite field $k$, we have that $|\pi_0({\textup{B}G}(\mathop{\mathrm{Spec}}k))|=2$, since there are two principal $\mathbb{Z}/2\mathbb{Z}$-bundles over $\mathop{\mathrm{Spec}}k$: the trivial bundle and the one corresponding to the degree 2 extension of $k$. On the other hand, $|R_G(\Gamma)(\mathop{\mathrm{Spec}}k)|=1$, showing that equality in ([\[eq:chargroupoidnotfunctor\]](#eq:chargroupoidnotfunctor){reference-type="ref" reference="eq:chargroupoidnotfunctor"}) fails in general. **Lemma 30**. *$\mathfrak{X}_G(-)$ sends finite colimits in $\textup{\bfseries FGGrpd}$ to finite limits in $\textup{\bfseries Stck}_k$.* *Proof.* Since colimits are defined up to equivalence, without loss of generality, we can suppose that we are working in $\textup{\bfseries FGGrpd}$. Let $\Gamma = \operatorname{colim}\limits_{i \in I} \Gamma_i$ be a colimit in $\textup{\bfseries FGGrpd}$. A $T$-point of $\lim_{i \in I} \mathfrak{X}_G(\Gamma_i)$ is a collection of $\mathcal{G}_{\Gamma_i}$-torsors $P_i$ over $T$ with $\mathcal{G}_{\Gamma_i}$-equivariant morphisms $\rho_i : P_i \to R_G(\Gamma_i)(T)$, which are compatible in the sense that there are natural isomorphisms $\mathcal{G}_{\Gamma_i} \times_{\mathcal{G}_{\Gamma_j}} P_j \cong P_i$ in the groupoid $\mathfrak{X}_G(\Gamma_i)(T)$ for every $i \to j$ in $I$. On the other hand, a $T$-point of $\mathfrak{X}_G(\Gamma)$ is a $\mathcal{G}_\Gamma$-torsor $P$ over $T$ with a $\mathcal{G}_\Gamma$-equivariant morphism $\rho : P \to R_G(\Gamma)(T)$.
Note that $\rho$, on $T$-points, is given by $$\rho : P \to R_G(\Gamma)(T) = \textup{Hom}(\operatorname{colim}\limits_{i \in I} \Gamma_i, G(T)) = \lim_{i \in I} \textup{Hom}(\Gamma_i, G(T)),$$ so $\rho$ is equivalently described by compatible morphisms $\rho_i : P \to R_G(\Gamma_i)(T)$ which are $\mathcal{G}_{\Gamma_i}$-equivariant, where $\mathcal{G}_{\Gamma_i}$ acts on $P$ via $\mathcal{G}_\Gamma$. These two descriptions are related as follows. From the $\mathcal{G}_\Gamma$-torsor $P$, one constructs the $\mathcal{G}_{\Gamma_i}$-torsors $P_i = \mathcal{G}_{\Gamma_i} \times_{\mathcal{G}_\Gamma} P$, which are naturally compatible. Conversely, from the $P_i$ one constructs $\lim_{i \in I} P_i$, where the limit is taken as schemes over $T$, which naturally comes with the structure of a $(\lim_{i \in I} \mathcal{G}_{\Gamma_i})$-torsor, and one puts $P = \mathcal{G}_\Gamma \times_{\left(\lim_{i \in I} \mathcal{G}_{\Gamma_i}\right)} \lim_{i \in I} P_i$. This induces the desired isomorphism between $\lim_{i \in I} \mathfrak{X}_G(\Gamma_i)$ and $\mathfrak{X}_G(\Gamma)$. ◻ ## Grothendieck ring of stacks {#sec:groth} Recall (see Definition [Definition 19](#def:stacks){reference-type="ref" reference="def:stacks"}) that all the stacks considered in this paper are algebraic stacks of finite type over a field $k$ (or more generally over a finitely generated ring over $\mathbb{Z}$) with affine stabilizers at every closed point. Following [@eke09], we define the Grothendieck ring of stacks as follows. **Definition 31**. Let $\mathfrak{S}$ be an algebraic stack of finite type over a field $k$ with affine stabilizers. We define the *Grothendieck ring of stacks over $\mathfrak{S}$*, denoted by $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$, as the free abelian group generated by the isomorphism classes $[\mathfrak{X}]$ of stacks $\mathfrak{X}$ over $\mathfrak{S}$, modulo the *scissor relations* $$[\mathfrak{X}] = [\mathfrak{Z}] + [\mathfrak{X} \setminus \mathfrak{Z}],$$ for every closed substack $\mathfrak{Z} \subset \mathfrak{X}$ with open complement $\mathfrak{X} \setminus \mathfrak{Z}$. Note that $\mathfrak{Z}$ and $\mathfrak{X}\setminus \mathfrak{Z}$ are considered as stacks over $\mathfrak{S}$ via $\mathfrak{X}$. The multiplicative structure is given by the fiber product (see Lemma [Lemma 21](#lem:prodaffinestabilizer){reference-type="ref" reference="lem:prodaffinestabilizer"}) $$[\mathfrak{X}] \cdot [\mathfrak{Y}] = [ \mathfrak{X} \times_{\mathfrak{S}} \mathfrak{Y} ] ,$$ for any algebraic stacks $\mathfrak{X}$ and $\mathfrak{Y}$ making $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$ a ring with unit $[(\mathfrak{S}, \textup{id}_{\mathfrak{S}})]$ and zero element $[\varnothing]$. Given a stack $\mathfrak{X}$ over $\mathfrak{S}$, its class $[\mathfrak{X}] \in \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$ will be called the *virtual class* of $\mathfrak{X}$. **Remark 32**. In [@eke09], Ekedahl defines a Grothendieck ring of stacks $\widetilde{\textup{K}_0}(\textup{\bfseries Stck}_\mathfrak{S})$ as above with an additional relation that the class of every vector bundle is the same as the trivial bundle, meaning that $$[\mathfrak{E}] = [ \mathbb{A}^n_{\mathfrak{S}} \times \mathfrak{X} ]$$ for every vector bundle $\mathfrak{E} \to \mathfrak{X}$ of rank $n$. We choose to omit this additional relation as we would like to keep track of the group action. 
Of course, there is a natural quotient map (by quotienting out the additional relation) $$\label{eq:KStck->KStck} \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})\to \widetilde{\textup{K}_0}(\textup{\bfseries Stck}_\mathfrak{S}).$$ Ekedahl's version of the Grothendieck ring of stacks over $k$ is isomorphic to the localization of the Grothendieck ring of varieties, $\textup{K}_0(\textup{\bfseries Var}_{k})$, by inverting the class of the affine line $q = [\mathbb{A}_k^1]$ and the classes of the form $q^n-1$ (coming from the classes of the general linear groups) providing a natural map $$\label{eq:KStck->KVar} \widetilde{\textup{K}_0}(\textup{\bfseries Stck}_k)\to \widehat{\textup{K}_0}(\textup{\bfseries Var}_{k})$$ where $\widehat{\textup{K}_0}(\textup{\bfseries Var}_{k})$ denotes the completion of the ring $\textup{K}_0(\textup{\bfseries Var}_{k})[q^{-1}]$ with the filtration given by the powers of the class of the affine line. By Lemma [Lemma 21](#lem:prodaffinestabilizer){reference-type="ref" reference="lem:prodaffinestabilizer"}, any morphism $\mathfrak{X} \to \mathfrak{S}$ of algebraic stacks induces a $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$-module structure on $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X})$, where the module structure is given on the generators by $$[\mathfrak{T}] \cdot [\mathfrak{Y}] = [\mathfrak{T} \times_\mathfrak{S} \mathfrak{Y}].$$ Similarly, since the property of having affine stabilizers is an absolute notion, any morphism of algebraic stacks $f \colon \mathfrak{X} \to \mathfrak{Y}$ over $\mathfrak{S}$ induces a functor $$f_!: \textup{\bfseries Stck}_\mathfrak{X}\to \textup{\bfseries Stck}_\mathfrak{Y}$$ given by composing with $f$ that descends to a $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$-module morphism $$\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}) \to \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Y})$$ which we will denote by $f_!$ as well. Finally, by Lemma [Lemma 21](#lem:prodaffinestabilizer){reference-type="ref" reference="lem:prodaffinestabilizer"}, any morphism of algebraic stacks $f \colon \mathfrak{X} \to \mathfrak{Y}$ over $\mathfrak{S}$ induces a functor $$f^*: \textup{\bfseries Stck}_\mathfrak{Y} \to \textup{\bfseries Stck}_\mathfrak{X}$$ given by the fiber product along $f$. This descends to a $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$-algebra morphism $$f^* \colon \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Y}) \to \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}).$$ ## Field theory and quantization {#sec:field_theory_and_quantization} The goal of this section is to construct a lax monoidal TQFT $$Z_G : \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$$ quantizing the virtual class of $G$-character stacks. This lax TQFT will be constructed as the composition of two functors, the *field theory* and the *quantization functor*. While the field theory will be symmetric monoidal, the quantization functor will only be symmetric lax monoidal. In fact, our construction will be slightly more general, since we shall construct a lax monoidal functor of $2$-categories, in such a way that the usual functor of $1$-categories appears after collapsing the $2$-morphisms. For this purpose, we need to consider the $2$-categorical version of $\textup{\bfseries Bord}_n$, which we shall also denote by $\textup{\bfseries Bord}_n$. As in the classical setting, the objects of this $2$-category are $(n-1)$-dimensional closed oriented manifolds. 
A $1$-morphism $(W, i_1, i_2): M_1 \to M_2$ between manifolds $M_1$ and $M_2$ is a bordism between $M_1$ and $M_2$, with boundary inclusions $i_1: M_1 \to W$ and $i_2: M_2 \to W$. Now, a $2$-morphism $\alpha: W \Rightarrow W'$ between two bordisms $(W, i_1, i_2), (W', i_1', i_2'): M_1 \to M_2$ is a boundary-preserving diffeomorphism $\alpha: W \to W'$ such that the diagram $$\begin{tikzcd}[row sep=0.5em] & W \arrow{dd}{\alpha} & \\ M_2 \arrow{ur}{i_2} \arrow[swap]{dr}{i_2'} & & M_1 \arrow[swap]{ul}{i_1} \arrow{dl}{i_1'} \\ & W' & \end{tikzcd}$$ commutes. Notice that the classical category of bordisms used in Section [2.1](#sec:tqft){reference-type="ref" reference="sec:tqft"} can be obtained from $\textup{\bfseries Bord}_n$ by collapsing $2$-morphisms: we declare two $1$-morphisms to be equivalent if there exists a $2$-morphism between them. **Definition 33**. Let $\mathcal{C}$ be a $2$-category with pullbacks. The *$2$-category of spans* over $\mathcal{C}$, denoted $\textup{Span}(\mathcal{C})$, is the $2$-category defined as follows. The objects of $\textup{Span}(\mathcal{C})$ are the same as the objects of $\mathcal{C}$, and a $1$-morphism between the two objects $X$ and $Y$ is a diagram $X \xleftarrow{f} Z \xrightarrow{g} Y$, called a *span*, in $\mathcal{C}$. A $2$-morphism from a span $X \xleftarrow{f} Z \xrightarrow{g} Y$ to $X \xleftarrow{f'} Z' \xrightarrow{g'} Y$ is a morphism $h : Z \to Z'$ in $\mathcal{C}$ together with $2$-isomorphisms $\alpha : f \to f' \circ h$ and $\beta : g \to g' \circ h$ in $\mathcal{C}$. $$\begin{tikzcd}[row sep=0.5em, column sep=3.0em, execute at end picture={ \node at (-0.5, 0) {$\Downarrow$}; \node at (0.5, 0) {$\Downarrow$}; \node at (-0.75, 0) {\scriptsize $\alpha$}; \node at (0.75, 0) {\scriptsize $\beta$}; }] & Z \arrow[swap]{ld}{f} \arrow{rd}{g} \arrow{dd}{h} & \\ X & & Y \\ & Z' \arrow{lu}{f'} \arrow[swap]{ru}{g'} & \end{tikzcd}$$ Composition of $1$-morphisms is given by pullback: given spans $X \xleftarrow{f} Z \xrightarrow{g} X'$ and $X' \xleftarrow{f'} Z' \xrightarrow{g'} X''$, their composite is the outer span of the diagram $$\begin{tikzcd}[row sep=0.5em] & & Z \times_{X'} Z' \arrow{dr} \arrow{dl} & & \\ & Z \arrow{dr}{g} \arrow[swap]{dl}{f} & & Z' \arrow{dr}{g'} \arrow[swap]{dl}{f'} & \\ X & & X' & & X'' \end{tikzcd}$$ Composition of $2$-morphisms is given by vertical composition. **Definition 34**. The *geometric field theory* is the $2$-functor $$\mathcal{F}_G : \textup{\bfseries Bord}_n \to \textup{Span}(\textup{\bfseries Stck}_k)$$ defined as follows. - To any compact oriented $(n - 1)$-dimensional manifold $M$, assign the the $G$-character stack $\mathcal{F}_G(M) = \mathfrak{X}_G(M)$, which is an Artin stack of finite type over $k$ with affine stabilizers. - For any bordism $W : M_1 \to M_2$, the inclusions $i_i : M_i \to W$ induce morphisms of fundamental groupoids $\Pi(M_i) \to \Pi(W)$, which in turn induce morphisms $$\mathfrak{X}_G(M_2) \xleftarrow{i_2^*} \mathfrak{X}_G(W) \xrightarrow{i_1^*} \mathfrak{X}_G(M_1)$$ as explained in Remark [Remark 25](#rem:functorcharstack){reference-type="ref" reference="rem:functorcharstack"}. Define $\mathcal{F}_G(W)$ as the span $(\mathfrak{X}_G(W), i_1^*, i_2^*)$. - For any boundary-preserving diffeomorphism $\alpha: W \to W'$ between bordisms $W$ and $W'$, we consider the induced isomorphism $\alpha^*: \mathfrak{X}_G(W') \to \mathfrak{X}_G(W)$ between the associated character stacks. 
Then, $\mathcal{F}_G(\alpha)$ is the $2$-morphism of $\textup{Span}(\textup{\bfseries Stck}_k)$ given by $$\begin{tikzcd}[row sep=0.5em] & \mathfrak{X}_G(W) \arrow{rd} \arrow[swap]{ld} & \\ \mathfrak{X}_G(M_1) & & \mathfrak{X}_G(M_2) \\ & \mathfrak{X}_G(W') \arrow[swap]{uu}{\alpha^*} \arrow[swap]{ru} \arrow{lu} & \\ \end{tikzcd}$$ **Proposition 35**. *The map $\mathcal{F}_G$ is a symmetric monoidal $2$-functor.* *Proof.* The proposition is an adaptation of [@gonzalez2022virtual Proposition 4.7] to a base-point-free setting. In fact, the proof of [@gonzalez2022virtual Proposition 4.7] goes through line-by-line using the Seifert-van Kampen theorem [@brown1967groupoids] and Lemma [Lemma 30](#lemma:character_stack_colimits_to_limits){reference-type="ref" reference="lemma:character_stack_colimits_to_limits"}. ◻ Now, let us consider the category $R\textup{-}\textup{\bfseries Mod}$ of modules over a ring $R$, which we promote to a $2$-category by considering only identity $2$-morphisms between module homomorphisms. **Definition 36**. The *quantization functor* is the $2$-functor $$\mathcal{Q} : \textup{Span}(\textup{\bfseries Stck}_k) \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$$ defined as follows. - To an object $\mathfrak{X}$ of $\textup{Span}(\textup{\bfseries Stck}_k)$, we associate the $\textup{K}_0(\textup{\bfseries Stck}_k)$-module $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X})$. - To a span $\mathfrak{X} \xleftarrow{f} \mathfrak{Z} \xrightarrow{g} \mathfrak{Y}$, we associate the morphism $g_! \circ f^* : \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}) \to \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Y})$ of $\textup{K}_0(\textup{\bfseries Stck}_k)$-modules. As in [@gonlogmun20], the quantization functor is symmetric lax monoidal, but not monoidal since the natural map $$\begin{array}{ccc} \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}) \otimes_{\textup{K}_0(\textup{\bfseries Stck}_k)} \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Y}) & \to & \textup{K}_0(\textup{\bfseries Stck}_{\mathfrak{X} \times \mathfrak{Y}}) \\[5pt] {[\mathfrak{U} \rightarrow \mathfrak{X}]} \otimes {[\mathfrak{V} \rightarrow \mathfrak{Y}]} & \mapsto & {[ \mathfrak{U} \times \mathfrak{V} \to \mathfrak{X} \times \mathfrak{Y} ]} \end{array}$$ is generally not an isomorphism. For a proof that $\mathcal{Q}$ is indeed a symmetric lax monoidal functor, see [@gonzalez2022virtual]. **Remark 37**. Note that the construction of $\mathcal{Q}$ is well-defined, because two spans related by a $2$-morphism are sent to the same module morphism. Indeed, given a $2$-morphism of spans $$\begin{tikzcd}[row sep=0.5em] & \mathfrak{Z} \arrow{dd}{h} \arrow{rd}{g} \arrow[swap]{ld}{f} & \\ \mathfrak{X} & & \mathfrak{Y}\\ & \mathfrak{Z}' \arrow[swap]{ru}{g'} \arrow{lu}{f'} & \\ \end{tikzcd}$$ observe that, since $h$ is an isomorphism, the square $$\begin{tikzcd} \mathfrak{Z} \arrow[swap]{d}{\textup{id}} \arrow{r}{\textup{id}} & \mathfrak{Z} \arrow{d}{h} \\ \mathfrak{Z} \arrow{r}{h} & \mathfrak{Z}' \end{tikzcd}$$ is cartesian, so that $g_! \circ f^* = (g')_! \circ h_! \circ h^* \circ (f')^* = (g')_! \circ \textup{id}^* \circ \textup{id}_! \circ (f')^* = (g')_! \circ (f')^*$. **Definition 38**. The *character stack TQFT* $Z_G : \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$ is the composition $\mathcal{Q} \circ \mathcal{F}_G$, which is a lax monoidal $2$-functor.
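Spelling out the two constituent functors (this is nothing more than unravelling the composition, recorded here for convenience), on a closed oriented $(n-1)$-manifold $M$ and on a bordism $W : M_1 \to M_2$ the character stack TQFT is given by $$Z_G(M) = \textup{K}_0(\textup{\bfseries Stck}_{\mathfrak{X}_G(M)}) , \qquad Z_G(W) = (i_2^*)_! \circ (i_1^*)^* : \textup{K}_0(\textup{\bfseries Stck}_{\mathfrak{X}_G(M_1)}) \to \textup{K}_0(\textup{\bfseries Stck}_{\mathfrak{X}_G(M_2)}) ,$$ where $i_1^*$ and $i_2^*$ denote the two legs of the span $\mathcal{F}_G(W)$.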
**Remark 39**. Since in $R\textup{-}\textup{\bfseries Mod}$ we only allow identities as $2$-morphisms, any $2$-functor $Z: \textup{\bfseries Bord}_n \to R\textrm{-}\textup{\bfseries Mod}$ must send any two diffeomorphic bordisms to the same homomorphism of $R\textup{-}\textup{\bfseries Mod}$. Hence, any such functor induces a regular functor between the underlying $1$-categories as discussed in Section [2.1](#sec:tqft){reference-type="ref" reference="sec:tqft"}, where in $\textup{\bfseries Bord}_n$ the morphisms are classes of bordisms up to boundary-preserving diffeomorphism. In particular, $Z_G: \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}$ can also be seen as a regular lax monoidal TQFT. **Theorem 40**. *Given a closed $n$-dimensional manifold $W : \varnothing \to \varnothing$, the character stack TQFT quantizes the virtual class of the $G$-character stack of $W$, that is, $$Z_G(W)(1) = [\mathfrak{X}_G(W)] \in \textup{K}_0(\textup{\bfseries Stck}_k) .$$* *Proof.* The field theory $\mathcal{F}_G(W)$ associated to $W : \varnothing \to \varnothing$ is the span $$\mathop{\mathrm{Spec}}k \xleftarrow{t} \mathfrak{X}_G(W) \xrightarrow{t} \mathop{\mathrm{Spec}}k ,$$ and hence, applying $\mathcal{Q}$, we obtain $$Z_G(W)(1) = t_! t^* (1) = t_! [\mathfrak{X}_G(W)]_{\mathfrak{X}_G(W)} = [\mathfrak{X}_G(W)] \in \textup{K}_0(\textup{\bfseries Stck}_k). \qedhere$$ ◻ **Remark 41**. Theorem [Theorem 40](#thm:character_stack_TQFT){reference-type="ref" reference="thm:character_stack_TQFT"} generalizes the results of [@gonzalez2022virtual] to the basepoint-free setting. To be precise, in [@gonzalez2022virtual], an auxiliary choice of a finite set of points on the bordism is required to make the field theory functorial. However, using the local gauge action explained in Section [3.2](#sec:charstacks){reference-type="ref" reference="sec:charstacks"}, instead of the global gauge used in [@gonzalez2022virtual], the results can be extended to the basepoint-free case. # Arithmetic TQFT {#sec:arithmetic_tqft} The goal of this section is to construct the *arithmetic TQFT*, a higher-dimensional analogue of the Frobenius TQFT of Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"}. Throughout this section, we will fix a finite group $G$. In applications, this finite group will usually be the $\mathbb{F}_q$-points of an algebraic group. While the TQFT of Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"} is defined in an ad-hoc manner, in terms of specific operations on the representation ring $R_\mathbb{C}(G)$, the construction of the arithmetic TQFT will be very similar to that of the character stack TQFT: it arises as the composition of a field theory and a quantization functor. ## Field theory and quantization {#field-theory-and-quantization} **Definition 42**. The *arithmetic field theory* is the 2-functor $\mathcal{F}^\#_G : \textup{\bfseries Bord}_n \to \operatorname{Span}(\textup{\bfseries FinGrpd})$ which assigns to a closed $(n - 1)$-dimensional manifold $M$ the $G$-character groupoid $$\mathcal{F}^\#_G(M) = \mathfrak{X}_G(M),$$ to a 1-morphism, i.e. to a bordism $W : M_1 \to M_2$, the span $$\mathcal{F}^\#_G(W) = \left( \mathfrak{X}_G(M_1) \xleftarrow{i_1^*} \mathfrak{X}_G(W) \xrightarrow{i_2^*} \mathfrak{X}_G(M_2) \right)$$ and to a 2-morphism, i.e. to a diffeomorphism, the 2-cell induced by the diffeomorphism. **Remark 43**.
Here we are using the character groupoid $\mathfrak{X}_G(M)$ of $M$ on the finite group $G$, that is, the groupoid $\mathfrak{X}_G(M) = \textbf{Fun}(\Gamma, G)$, where $\Gamma$ is any finitely generated groupoid equivalent to the fundamental groupoid $\Pi(M)$. When $G$ is the group of $\mathbb{F}_q$-points of an algebraic group, this groupoid is in general different from the groupoid of $\mathbb{F}_q$-points of the $G$-character stack, as seen in Remark [Remark 29](#rmk:comparison-stack-groupoid){reference-type="ref" reference="rmk:comparison-stack-groupoid"}. **Proposition 44**. *$\mathcal{F}^\#_G$ is a symmetric monoidal functor.* *Proof.* Completely analogous to the proof of Proposition [Proposition 35](#prop:field_theory_symmetric_monoidal_functor){reference-type="ref" reference="prop:field_theory_symmetric_monoidal_functor"}. ◻ **Definition 45**. Let $\Gamma$ be a groupoid. Denote by $\mathbb{C}^\Gamma$ the complex vector space of complex-valued functions on the objects of $\Gamma$ which are invariant under isomorphism. In other words, an element of $\mathbb{C}^\Gamma$ can be seen as a function $[\Gamma] \to \mathbb{C}$, where $[\Gamma]$ denotes the set of isomorphism classes of $\Gamma$. Given a functor $f : \Gamma \to \Gamma'$ of groupoids, we can pull back functions via $$f^* : \mathbb{C}^{\Gamma'} \to \mathbb{C}^\Gamma, \quad \varphi \mapsto \varphi \circ f .$$ Moreover, if $\mu : f \Rightarrow g$ is a natural transformation between functors $f, g : \Gamma \to \Gamma'$, then $f^* = g^*$. In particular, if $\Gamma$ and $\Gamma'$ are equivalent groupoids, then $\mathbb{C}^\Gamma$ and $\mathbb{C}^{\Gamma'}$ are naturally isomorphic. Furthermore, given a functor $f : \Gamma \to \Gamma'$ of essentially finite groupoids, we define push-forward along $f$ as $$f_! : \mathbb{C}^\Gamma \to \mathbb{C}^{\Gamma'}, \quad \varphi \mapsto \left( \gamma' \mapsto \sum_{[(\gamma, \alpha)] \in [f^{-1}(\gamma')]} \frac{\varphi(\gamma)}{|\mathop{\mathrm{Aut}}(\gamma, \alpha)|} \right) ,$$ where $f^{-1}(\gamma')$ denotes the fiber product $\Gamma \times_{\Gamma'} \{ \gamma' \}$ of groupoids. **Definition 46**. The *arithmetic quantization functor* is the 2-functor $$\mathcal{Q}^\# : \operatorname{Span}(\textup{\bfseries FinGrpd}) \to \textup{\bfseries Vect}_\mathbb{C}$$ which assigns to an essentially finite groupoid $\Gamma$ the vector space $\mathbb{C}^\Gamma$ and assigns to a span of essentially finite groupoids $\Gamma' \xleftarrow{f} \Gamma \xrightarrow{g} \Gamma''$ the morphism $g_! \circ f^* : \mathbb{C}^{\Gamma'} \to \mathbb{C}^{\Gamma''}$. **Lemma 47**. *$\mathcal{Q}^\#$ is a symmetric monoidal functor.* *Proof.* Let $A' \xleftarrow{f} B \xrightarrow{g} A$ and $A \xleftarrow{h} C \xrightarrow{i} A''$ be two spans of essentially finite groupoids. The relevant diagram in $\textup{\bfseries Vect}_\mathbb{C}$ is given by $$\begin{tikzcd}[row sep=0.5em] & & \mathbb{C}^{B \times_A C} \arrow{dr}{(\pi_C)_!} & & \\ & \mathbb{C}^B \arrow{ur}{\pi_B^*} \arrow{dr}{g_!} & & \mathbb{C}^C \arrow{dr}{i_!} & \\ \mathbb{C}^{A'} \arrow{ur}{f^*} & & \mathbb{C}^{A} \arrow{ur}{h^*} & & \mathbb{C}^{A''} \end{tikzcd}$$ where $\pi_B : B \times_A C \to B$ and $\pi_C : B \times_A C \to C$ are the projections. To show that $\mathcal{Q}^\#$ preserves compositions, it suffices to show that $h^* \circ g_! = (\pi_C)_! \circ \pi_B^*$. First observe that, for any $\tilde{c} \in C$, the groupoids $\pi_C^{-1}(\tilde{c}) = (B \times_A C) \times_C \{ \tilde{c} \}$ and $g^{-1}(h(\tilde{c})) = B \times_A \{ h(\tilde{c}) \}$ are equivalent.
Indeed explicitly, an object of $\pi_C^{-1}(\tilde{c})$ is a tuple $(b, c, \alpha, \gamma)$ with $(b, c, \alpha) \in B \times_A C$ and $\gamma : c \to \tilde{c}$ a morphism in $C$. A morphism $(b', c', \alpha', \gamma') \to (b, c, \alpha, \gamma)$ is given by a tuple of morphisms $(\beta : b' \to b, \zeta : c' \to c)$ such that $\alpha \circ g(\beta) = h(\zeta) \circ \alpha'$ and $\gamma \circ \zeta = \gamma'$. By appropriate choice of $\zeta$, this is equivalent to the groupoid whose objects are $(b, \alpha)$ with $b \in B$ and $\alpha : g(b) \to h(\tilde{c})$ and morphisms $(b', \alpha') \to (b, \alpha)$ are morphisms $\beta : b' \to b$ such that $\alpha' \circ g(\beta) = \alpha$. But this is precisely $g^{-1}(h(\tilde{c}))$. Now, for any $\varphi \in \mathbb{C}^B$ and any $c \in C$, it follows that $$((\pi_C)_! \pi_B^* \varphi)(c) = \sum_{[(b, c, \alpha, \gamma)] \in [\pi_C^{-1}(c)]} \frac{\varphi(b)}{|\mathop{\mathrm{Aut}}(b, c, \alpha, \gamma)|} = \sum_{[(b, \alpha)] \in [g^{-1}(h(c))]} \frac{\varphi(b)}{|\mathop{\mathrm{Aut}}(b, \alpha)|} = (h^* g_! \varphi)(c) . \qedhere$$ ◻ **Definition 48**. The *arithmetic TQFT* $Z^\#_G : \textup{\bfseries Bord}_n \to \textup{\bfseries Vect}_\mathbb{C}$ is the composition $\mathcal{Q}^\# \circ \mathcal{F}^\#_G$. **Proposition 49**. *Given a closed $n$-dimensional manifold $W : \varnothing \to \varnothing$, the arithmetic TQFT quantizes the groupoid cardinality of the $G$-character groupoid of $W$, that is, $$Z^\#_G(W)(1) = |\mathfrak{X}_G(W)| = \sum_{[x] \in [\mathfrak{X}_G(W)]} \frac{1}{|\mathop{\mathrm{Aut}}(x)|} ,$$ where the sum runs over the isomorphism classes $[x]$ of the groupoid $\mathfrak{X}_G(W)$.* *Proof.* The field theory associated to $W : \varnothing \to \varnothing$ is the span $$\begin{tikzcd} \star & \mathfrak{X}_G(W) \arrow[swap]{l}{c} \arrow{r}{c} & \star \end{tikzcd}$$ where $\star$ denotes the trivial groupoid. Note that $Z^\#_G(\star)$ can naturally be identified with $\mathbb{C}$. Under this identification, $c^* : \mathbb{C}\to \mathbb{C}^{\mathfrak{X}_G(W)}$ is given by $c^*(1) = 1_{\mathfrak{X}_G(W)}$, where $1_{\mathfrak{X}_G(W)}$ denotes the constant function one. Hence, $$Z^\#_G(W)(1) = c_! c^*(1) = c_!(1_{\mathfrak{X}_G(W)}) = \sum_{[x] \in [\mathfrak{X}_G(W)]} \frac{1_{\mathfrak{X}_G(W)}(x)}{|\mathop{\mathrm{Aut}}(x)|} = \sum_{[x] \in [\mathfrak{X}_G(W)]} \frac{1}{|\mathop{\mathrm{Aut}}(x)|} ,$$ where in the third equality we used that $c^{-1}(\star) = \mathfrak{X}_G(W)$. ◻ **Remark 50**. The whole construction of this section can be repeated considering $\mathbb{Q}$-valued functions, leading to a functor $\textup{\bfseries Bord}_n \to \textup{\bfseries Vect}_\mathbb{Q}$. However, as we shall show later, we want to emphasize the similarities of this construction with group characters, and for this reason, it is preferable to consider complex coefficients. ## Comparison with the Frobenius TQFT {#sec:arithmetic-tqft-frobenius} Let us return to the case $n = 2$. For a finite group $G$, we have $\mathfrak{X}_G(S^1) = [G/G]$ where $G$ acts on itself by conjugation. In particular, $Z^\#_G(S^1)$ is the complex vector space of complex functions on $G$ which are invariant under conjugation, which can be naturally identified with the vector space of the representation ring $R_\mathbb{C}(G)$. That is, there is an isomorphism $$\label{eq:isomorphism_representation_ring_equivariant_functions_G} Z_G^\#(S^1) = \mathbb{C}^{[G/G]} \cong R_\mathbb{C}(G) = Z_G^\textup{Fr}(S^1)$$ **Proposition 51**. *Let $G$ be a finite group. 
For $n = 2$, there is a natural isomorphism $Z^\#_G \cong Z_G^\textup{Fr}$ from the arithmetic TQFT to the Frobenius TQFT of Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"}.* *Proof.* Since both $Z^\#_G$ and $Z_G^\textup{Fr}$ are TQFTs, by Theorem [Theorem 10](#thm:equivalence_tqft_frobenius){reference-type="ref" reference="thm:equivalence_tqft_frobenius"}, it is enough to compute the Frobenius algebra associated to $Z^\#_G(S^1)$ and to check that it is isomorphic to $R_\mathbb{C}(G)$ through [\[eq:isomorphism_representation_ring_equivariant_functions_G\]](#eq:isomorphism_representation_ring_equivariant_functions_G){reference-type="ref" reference="eq:isomorphism_representation_ring_equivariant_functions_G"}. From ([\[eq:isomorphism_representation_ring_equivariant_functions_G\]](#eq:isomorphism_representation_ring_equivariant_functions_G){reference-type="ref" reference="eq:isomorphism_representation_ring_equivariant_functions_G"}), we know the underlying vector spaces are isomorphic. For the rest of the structure, we have - Ring structure. It is given by the image of $W = \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (1,0.5) ellipse (0.2cm and 0.4cm); \draw (1,-0.5) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (0,0.4) and (0,0.9) .. (1,0.9); \draw (-1,-0.4) .. controls (0,-0.4) and (0,-0.9) .. (1,-0.9); \draw (1,0.1) .. controls (0.2,0.1) and (0.2,-0.1) .. (1,-0.1); \end{scope} \end{tikzpicture}$. The field theory $\mathcal{F}^\#_G(W)$ is given by $$\begin{tikzcd}[column sep=4em] {[G/G]^2} & {[G^2 / G]} \arrow[swap]{l}{\pi_1 \times \pi_2} \arrow{r}{m} & {[G/G]} \end{tikzcd}$$ where $m : G \times G \to G$ is the multiplication map of $G$. Hence, the morphism $Z^\#_G(W) : R_\mathbb{C}(G) \otimes_\mathbb{C}R_\mathbb{C}(G) \to R_\mathbb{C}(G)$ is given by $a \otimes b \mapsto m_! (\pi_1 \times \pi_2)^* (a \otimes b)$, which a direct check shows to be precisely $\mu(a \otimes b)$. - Bilinear form. It is given by the image of $W = \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0.5) ellipse (0.2cm and 0.4cm); \draw (0,-0.5) ellipse (0.2cm and 0.4cm); \draw (0,0.9) arc (90:270:1.1cm and 0.9cm); \draw (0,0.1) arc (90:270:0.3cm and 0.1cm); \end{scope} \end{tikzpicture}$. The field theory is given by $$\begin{tikzcd}[column sep=4em] {[G/G]^2} & {[G^2 / G]} \arrow[swap]{l}{\pi_1 \times \pi_2} \arrow{r}{c} & \star, \end{tikzcd}$$ where $c$ is the terminal map. Hence, the morphism $Z^\#_G(W) : R_\mathbb{C}(G) \otimes_\mathbb{C}R_\mathbb{C}(G) \to \mathbb{C}$ is given by $a \otimes b \mapsto c_! (\pi_1 \times \pi_2)^* (a \otimes b)$ which is precisely $\beta(a \otimes b) = \mu(a \otimes b)(1)$. - Unit. It is given by the image of $\begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture}$. The corresponding field theory is given by $$\begin{tikzcd}[column sep=4em] \star & {[{\textup{B}G}]} \arrow[swap]{l}{c} \arrow{r}{e} & {[G/G]} \end{tikzcd}$$ where the map $e$ is induced by the embedding of the identity element $\star\to G$ and $c$ is again the terminal map. Hence, the corresponding morphism $Z^\#_G(W) = e_! \circ c^* :\mathbb{C}\to R_\mathbb{C}(G)$ is the unit $\eta$.  ◻ **Remark 52**.
The use of Theorem [Theorem 10](#thm:equivalence_tqft_frobenius){reference-type="ref" reference="thm:equivalence_tqft_frobenius"} can be avoided in Proposition [Proposition 51](#prop:equiv-frobenius){reference-type="ref" reference="prop:equiv-frobenius"} if instead we check that under the natural identification $Z_G^\#(S^1) = R_\mathbb{C}(G)$, we have that the images of the generating bordisms $$\begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} , \qquad \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,0.4) arc (90:270:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} , \qquad \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (1,0.5) ellipse (0.2cm and 0.4cm); \draw (1,-0.5) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (0,0.4) and (0,0.9) .. (1,0.9); \draw (-1,-0.4) .. controls (0,-0.4) and (0,-0.9) .. (1,-0.9); \draw (1,0.1) .. controls (0.2,0.1) and (0.2,-0.1) .. (1,-0.1); \end{scope} \end{tikzpicture} , \qquad \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0.5) ellipse (0.2cm and 0.4cm); \draw (-1,-0.5) ellipse (0.2cm and 0.4cm); \draw (1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.9) .. controls (0,0.9) and (0,0.4) .. (1,0.4); \draw (-1,-0.9) .. controls (0,-0.9) and (0,-0.4) .. (1,-0.4); \draw (-1,0.1) .. controls (-0.2,0.1) and (-0.2,-0.1) .. (-1,-0.1); \end{scope} \end{tikzpicture} , \qquad \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0.5) ellipse (0.2cm and 0.4cm); \draw (-1,-0.5) ellipse (0.2cm and 0.4cm); \draw (1,0.5) ellipse (0.2cm and 0.4cm); \draw (1,-0.5) ellipse (0.2cm and 0.4cm); \draw (-1,0.9) .. controls (0,0.9) and (0,-0.1) .. (1,-0.1); \draw (-1,0.1) .. controls (0,0.1) and (0,-0.9) .. (1,-0.9); \draw (-1,-0.9) .. controls (0,-0.9) and (0,0.1) .. (1,0.1); \draw (-1,-0.1) .. controls (0,-0.1) and (0,0.9) .. (1,0.9); \end{scope} \end{tikzpicture} ,$$ coincide with the ones of $Z_G^\textup{Fr}$. This can be checked through a straightforward calculation similar to the ones above. **Remark 53**. Putting together Propositions [Proposition 51](#prop:equiv-frobenius){reference-type="ref" reference="prop:equiv-frobenius"}, [Proposition 49](#prop:invariant-arithmetic){reference-type="ref" reference="prop:invariant-arithmetic"} and [Proposition 18](#prop:count-character-groupoid){reference-type="ref" reference="prop:count-character-groupoid"}, we can re-interpret the left-hand side of Frobenius formula ([\[eq:frobenius_formula\]](#eq:frobenius_formula){reference-type="ref" reference="eq:frobenius_formula"}) in a more natural way. Indeed, the mysterious quotient $|R_G(\Sigma)|/|G|$ should be understood as the point count of the character groupoid $|\mathfrak{X}_G(\Sigma)|$. ## Comparison with the character stack TQFT {#sec:arithmetic-tqft-character-stack} In this section, we prove Theorem [Theorem 5](#introthm:nattrans){reference-type="ref" reference="introthm:nattrans"}. Throughout this section, fix an algebraic group $G$ defined over a finitely generated ring $R$ (typically, $R = \mathbb{Z}$ for $G = \textup{GL}_n, \textup{SL}_n$). Furthermore, let $\mathbb{F}_q$ be a finite field *extending* $R$, that is, fix a ring morphism $R \to \mathbb{F}_q$. 
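For concreteness, here is a minimal instance of this setup (the specific choices of $\textup{SL}_2$ and of the prime $p$ are ours, purely for illustration): take $R = \mathbb{Z}$ and $G = \textup{SL}_2$; a finite field extending $R$ is then any $\mathbb{F}_q$ with $q = p^r$, together with the reduction morphism $$\mathbb{Z} \longrightarrow \mathbb{F}_p \subseteq \mathbb{F}_q , \qquad \text{so that} \quad G(\mathbb{F}_q) = \textup{SL}_2(\mathbb{F}_q), \qquad |G(\mathbb{F}_q)| = q(q^2-1) .$$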
Such a morphism allows us to talk about the $\mathbb{F}_q$-rational points of any stack $\mathfrak{X}$ over $R$, denoted $\mathfrak{X}(\mathbb{F}_q)$. Now, the bridge from the geometric to the arithmetic world is given by counting $\mathbb{F}_q$-rational points. **Definition 54**. For any stack $\mathfrak{X}$ over $R$, define the map $$\int_\mathfrak{X} : \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}) \to \mathbb{C}^{\mathfrak{X}(\mathbb{F}_q)}, \quad [\mathfrak{Y} \xrightarrow{f} \mathfrak{X}] \mapsto (x \mapsto |f^{-1}(x)|) ,$$ where $|f^{-1}(x)|$ denotes the groupoid cardinality of $f^{-1}(x)$ as in Definition [Definition 17](#defn:groupoid-cardinality){reference-type="ref" reference="defn:groupoid-cardinality"}. Note that $\int_\mathfrak{X}$ can be thought of as integration along the fibers. This map is easily seen to be a ring homomorphism. In particular, note that the map $\int_R : \textup{K}_0(\textup{\bfseries Stck}_R) \to \mathbb{C}^{R(\mathbb{F}_q)} = \mathbb{C}$ induces a restriction-of-scalars functor $(\int_R)^* : \textup{\bfseries Vect}_\mathbb{C}\to \textup{K}_0(\textup{\bfseries Stck}_R)\textup{-}\textup{\bfseries Mod}$. This allows us to compare the geometric and arithmetic quantization functors. **Proposition 55**. *The morphisms $\int_\mathfrak{X}$ define a natural transformation $$\int : \mathcal{Q} \Rightarrow \left(\int_R\right)^* \circ \mathcal{Q}^\# \circ (-)(\mathbb{F}_q) .$$ In particular, this induces a natural transformation of TQFTs $$\int : Z_G \Rightarrow \left(\int_R\right)^* \circ \mathcal{Q}^\# \circ (-)(\mathbb{F}_q) \circ \mathcal{F}_G.$$* *Proof.* For any span $\mathfrak{X} \xleftarrow{f} \mathfrak{Z} \xrightarrow{g} \mathfrak{Y}$ of stacks over $R$, the relevant diagram of $\textup{K}_0(\textup{\bfseries Stck}_R)$-modules is: $$\begin{tikzcd} \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{X}) \arrow{r}{f^*} \arrow{d}{\int_\mathfrak{X}} & \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Z}) \arrow{r}{g_!} \arrow{d}{\int_\mathfrak{Z}} & \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{Y}) \arrow{d}{\int_\mathfrak{Y}} \\ \mathbb{C}^{\mathfrak{X}(\mathbb{F}_q)} \arrow{r}{f^*} & \mathbb{C}^{\mathfrak{Z}(\mathbb{F}_q)} \arrow{r}{g_!} & \mathbb{C}^{\mathfrak{Y}(\mathbb{F}_q)} \end{tikzcd}$$ To show that the left square commutes, let $\mathfrak{T} \xrightarrow{j} \mathfrak{X}$ be a morphism of stacks and consider the cartesian diagram: $$\label{di:cartesianinstacks} \begin{tikzcd} \mathfrak{T}\times_{\mathfrak{X}}\mathfrak{Z} \arrow{r} \arrow{d}{h} & \mathfrak{T}\arrow{d}{j} \\ \mathfrak{Z} \arrow{r}{f} & \mathfrak{X} \end{tikzcd}$$ Then, for any $\mathbb{F}_q$-point $z \in \mathfrak{Z}(\mathbb{F}_q)$ of $\mathfrak{Z}$, we find $$\int_\mathfrak{Z}(f^*\mathfrak{T})(z) = |h^{-1}(z)| = |j^{-1}(f(z))| = f^* \left(\int_{\mathfrak{X}} \mathfrak{T} \right)(z) .$$ To show that the right square commutes, let $\mathfrak{T} \xrightarrow{j} \mathfrak{Z}$ be a morphism of stacks. Then, for any $\mathbb{F}_q$-point $y \in \mathfrak{Y}(\mathbb{F}_q)$ of $\mathfrak{Y}$, we find $$g_!\left(\int_\mathfrak{Z} \mathfrak{T}\right)(y) = \sum_{[(z, \alpha)] \in [g^{-1}(y)]} \frac{|j^{-1}(z)|}{|\mathop{\mathrm{Aut}}(z, \alpha)|} = |(g \circ j)^{-1}(y)| = \int_\mathfrak{Y}(g_!\mathfrak{T})(y) . \qedhere$$ ◻ Next, let us show how the geometric and arithmetic field theories are related. Crucially, we need to assume that $G$ is connected. **Proposition 56**. *Suppose $G$ is connected. 
Then there is a natural isomorphism $$(-)(\mathbb{F}_q) \circ \mathcal{F}_G \cong \mathcal{F}^\#_{G(\mathbb{F}_q)} .$$ In particular, this induces a natural isomorphism of TQFTs $$\mathcal{Q}^\# \circ (-)(\mathbb{F}_q) \circ \mathcal{F}_G \cong Z^\#_{G(\mathbb{F}_q)} .$$* *Proof.* Since the field theories $\mathcal{F}_G$ and $\mathcal{F}^\#_{G(\mathbb{F}_q)}$, see Definitions [Definition 34](#def:geometric_field_theory){reference-type="ref" reference="def:geometric_field_theory"} and [Definition 42](#def:arithmetic_field_theory){reference-type="ref" reference="def:arithmetic_field_theory"}, are constructed completely analogously, it suffices to give a natural isomorphism between the groupoid of $\mathbb{F}_q$-rational points of the character stack $\mathcal{F}_G(M)(\mathbb{F}_q) = \mathfrak{X}_G(M)(\mathbb{F}_q)$ and the character groupoid $\mathcal{F}_{G(\mathbb{F}_q)}^{\#}(M) = \mathfrak{X}_{G(\mathbb{F}_q)}(M)$. Since both field theories are monoidal, we may assume that $M$ is connected, so $\mathfrak{X}_{G}(M) = [R_G(M) / G]$. In this case, the objects of $\mathfrak{X}_G(M)(\mathbb{F}_q)$ are $G$-principal bundles $P \to \mathop{\mathrm{Spec}}(\mathbb{F}_q)$ with an equivariant map $P \to R_G(M)$. However, by Lang's theorem [@Lang1956] for connected groups, every $G$-principal bundle over $\mathop{\mathrm{Spec}}(\mathbb{F}_q)$ is trivial, so $P = \mathop{\mathrm{Spec}}(\mathbb{F}_q) \times G$ and the equivariant map is the same as a map $\mathop{\mathrm{Spec}}(\mathbb{F}_q) \to R_G(M)$. Hence, the objects of $\mathfrak{X}_G(M)(\mathbb{F}_q)$ are $\textup{Hom}(\mathop{\mathrm{Spec}}(\mathbb{F}_q), R_G(M)) = R_G(M)(\mathbb{F}_q) = R_{G(\mathbb{F}_q)}(M)$, by Definition [Definition 22](#defn:representation-variety){reference-type="ref" reference="defn:representation-variety"}. The morphisms between these objects are given by the action of $G(\mathbb{F}_q)$ on $R_{G(\mathbb{F}_q)}(M)$ by conjugation. But this is exactly the action groupoid $[R_{G(\mathbb{F}_q)}(M) / G(\mathbb{F}_q)]$, which is precisely the character groupoid $\mathfrak{X}_{G(\mathbb{F}_q)}(M)$. ◻ **Example 57**. If $G$ is not connected, there cannot even be a natural transformation $(-)(\mathbb{F}_q) \circ \mathcal{F}_G \Rightarrow \mathcal{F}^\#_{G(\mathbb{F}_q)}$. For instance, consider the $2$-sphere $S^2$ as a bordism $\varnothing \to \varnothing$. On one hand, since $S^2$ is simply connected, we have that $\mathcal{F}^\#_{G(\mathbb{F}_q)}(S^2)$ is the action groupoid $\mathfrak{X}_{G(\mathbb{F}_q)}(\star) = [\star / G(\mathbb{F}_q)]$, which is a groupoid with a single object and $|G(\mathbb{F}_q)|$ automorphisms. On the other hand, the character stack of $S^2$ is $\mathfrak{X}_G(S^2) = {\textup{B}G}$, so $((-)(\mathbb{F}_q) \circ \mathcal{F}_G)(S^2) = {\textup{B}G}(\mathbb{F}_q)$. These two groupoids are not equivalent in general, as can be seen for $G = \mathbb{Z}/ 2\mathbb{Z}$. **Corollary 58**. *Suppose $G$ is connected. Then there is a natural transformation of TQFTs $$Z_G \Rightarrow \left(\int_R\right)^* \circ Z^\#_{G(\mathbb{F}_q)} .$$ 0◻*
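Concretely, evaluating this natural transformation on a closed $n$-manifold $W : \varnothing \to \varnothing$ and combining Theorem [Theorem 40](#thm:character_stack_TQFT){reference-type="ref" reference="thm:character_stack_TQFT"} with Proposition [Proposition 49](#prop:invariant-arithmetic){reference-type="ref" reference="prop:invariant-arithmetic"} (this is merely the composite of results already stated, spelled out for convenience), counting the $\mathbb{F}_q$-points of the virtual class recovers the groupoid count: $$\int_R [\mathfrak{X}_G(W)] = Z^\#_{G(\mathbb{F}_q)}(W)(1) = |\mathfrak{X}_{G(\mathbb{F}_q)}(W)| = \sum_{[x] \in [\mathfrak{X}_{G(\mathbb{F}_q)}(W)]} \frac{1}{|\mathop{\mathrm{Aut}}(x)|} .$$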
The situation can be summarized by the following diagram. $$\label{eq:diagram_natural_transformations} \begin{tikzcd}[execute at end picture={ \node at (1.5, 0) {$\Downarrow$}; \node at (-2.0, 0) {$\Downarrow\!\wr$}; }] & \operatorname{Span}(\textup{\bfseries Stck}_k) \arrow{r}{\mathcal{Q}} \arrow{dd}{(-)(\mathbb{F}_q)} & \textup{K}_0(\textup{\bfseries Stck}_k)\textup{-}\textup{\bfseries Mod}\\ \textup{\bfseries Bord}_n \arrow{ur}[name=FG]{\mathcal{F}_G} \arrow[swap]{dr}[name=FGk]{\mathcal{F}^\#_{G(\mathbb{F}_q)}} & & \\ & \operatorname{Span}(\textup{\bfseries FinGrpd}) \arrow[swap]{r}{\mathcal{Q}^\#} & \textup{\bfseries Vect}_\mathbb{C}\arrow[swap]{uu}{\left(\int_R\right)^*} \end{tikzcd}$$ From these results, we observe that for non-connected $G$, the two functors $Z_{G(\mathbb{F}_q)}^\# = \mathcal{Q}^\# \circ \mathcal{F}_{G(\mathbb{F}_q)}^\#$ and $\mathcal{Q}^\# \circ (-)(\mathbb{F}_q) \circ \mathcal{F}_{G}$ are different TQFTs. For further reference, we shall give a name to the latter functor, which will play an important role for Landau-Ginzburg models. **Definition 59**. The TQFT $$Z_G^{\mathbb{F}_q} = \mathcal{Q}^\# \circ (-)(\mathbb{F}_q) \circ \mathcal{F}_{G}: \textup{\bfseries Bord}_n \to \textup{\bfseries Vect}_\mathbb{C}$$ will be called the *TQFT of $\mathbb{F}_q$-closed points*. Note that, regardless of whether $G$ is connected, there always is a natural transformation $Z_G \Rightarrow Z_G^{\mathbb{F}_q}$ due to Proposition [Proposition 55](#prop:comparisontqft_arith){reference-type="ref" reference="prop:comparisontqft_arith"}. When $G$ is connected, we have a natural isomorphism $Z_G^{\mathbb{F}_q} \cong Z_{G(\mathbb{F}_q)}^{\#}$ ($\cong Z_{G(\mathbb{F}_q)}^{\textup{Fr}}$). # Landau--Ginzburg models TQFT {#sec:tqftLG} In this section, we give a different point of view on the natural transformations given in Corollary [Corollary 58](#cor:natural_transformation_geometric_arithmetic){reference-type="ref" reference="cor:natural_transformation_geometric_arithmetic"} by generalizing Theorem [Theorem 40](#thm:character_stack_TQFT){reference-type="ref" reference="thm:character_stack_TQFT"} to Landau--Ginzburg models. The generalization is relatively straightforward, so we will only sketch the key steps. We begin with the definition of Landau--Ginzburg models. ## Landau--Ginzburg models Let $\mathfrak{S}$ be a finite type algebraic stack over a field $k$ (with affine stabilizers). A Landau--Ginzburg (LG) model over $\mathfrak{S}$ is an algebraic stack $\mathfrak{X}$ (with affine stabilizers) over $\mathfrak{S}$ equipped with a function called the potential. In this paper, we will consider two cases, namely (1) when the potential is algebraic (see Definition [Definition 60](#def:lgmodel){reference-type="ref" reference="def:lgmodel"}), and (2) when the potential can be any function on the closed points (see Definition [Definition 65](#def:lgmodel2){reference-type="ref" reference="def:lgmodel2"}). Until Section [5.2.2](#sec:LGarith){reference-type="ref" reference="sec:LGarith"}, we will use the former definition; however, the results of this section hold for both definitions. **Definition 60**.
The category $\textup{\bfseries LG}_\mathfrak{S}$ of *Landau--Ginzburg models* over $\mathfrak{S}$ is the category - whose objects are pairs $(\mathfrak{X}\rightarrow \mathfrak{S},f)$ where $\mathfrak{X}$ is an algebraic stack over $\mathfrak{S}$ of finite type (with affine stabilizers) and $f:\mathfrak{X}\to \mathbb{A}^1_k$ is a map of algebraic stacks, and - morphisms between the objects $(\mathfrak{X}\to \mathfrak{S}, f)$ and $(\mathfrak{Y}\to \mathfrak{S}, g)$ are morphisms $\pi:\mathfrak{X}\to \mathfrak{Y}$ commuting with the structure maps (a) $\mathfrak{X}\to \mathfrak{S}$ and $\mathfrak{Y}\to \mathfrak{S}$, and (b) $\mathfrak{X}\to \mathbb{A}^1_k$ and $\mathfrak{Y}\to \mathbb{A}^1_k$. The definition above gives rise to the Grothendieck ring of Landau--Ginzburg models. **Definition 61**. Let $\mathfrak{S}$ be an algebraic stack of finite type over a field $k$ (with affine stabilizers). The *Grothendieck ring of Landau--Ginzburg models* over $\mathfrak{S}$, denoted $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$, is defined as the quotient of the free abelian group on the set of isomorphism classes of Landau--Ginzburg models over $\mathfrak{S}$, by relations of the form $$[(\mathfrak{X}\to \mathfrak{S}, f)] = [(\mathfrak{Z}\to \mathfrak{S}, f|_{\mathfrak{Z}})] + [(\mathfrak{U}\to \mathfrak{S}, f|_{\mathfrak{U}})]$$ where $\mathfrak{Z}$ is a closed substack of $\mathfrak{X}$ and $\mathfrak{U}$ is its open complement, equipped with the restrictions $f|_{\mathfrak{Z}}$ and $f|_{\mathfrak{U}}$ of the potential $f$. Note that the Grothendieck ring of Landau--Ginzburg models is indeed a ring: the multiplication is induced by the fibre product $$[(\mathfrak{X} \to \mathfrak{S}, f)] \cdot [(\mathfrak{Y}\to \mathfrak{S}, g)] = [(\mathfrak{X} \times_{\mathfrak{S}} \mathfrak{Y}\to \mathfrak{S}, h)].$$ Here $h$ is defined as the composition $$h: (\mathfrak{X} \times_{\mathfrak{S}} \mathfrak{Y})\xrightarrow{(f,g)} \mathbb{A}^1_k\times \mathbb{A}^1_k\xrightarrow{\cdot} \mathbb{A}^1_k$$ where the last map uses the multiplicative structure on $\mathbb{A}^1_k$. This multiplication is indeed associative and commutative with the identity element being $[(\mathfrak{S}\xrightarrow{\textup{id}} \mathfrak{S}, 1)]$. The following lemma relates the rings $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$ and $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$ and is straightforward to prove. **Lemma 62**. *The constant function map $$\iota:\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})\rightarrow \textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$$ induced by the map on virtual classes $[\mathfrak{X}\to \mathfrak{S}]\mapsto [(\mathfrak{X}\to \mathfrak{S}, 1)]$ is a ring homomorphism, making $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$ a $\textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$-algebra.* *Similarly, the forgetful map $$\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}:\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})\to \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$$ induced by the map on virtual classes $[(\mathfrak{X}\to \mathfrak{S}, f)]\mapsto [\mathfrak{X}\to \mathfrak{S}]$ is a ring homomorphism as well. 0◻* As in Section [3.3](#sec:groth){reference-type="ref" reference="sec:groth"}, a morphism of stacks $\mathfrak{X}\to \mathfrak{S}$ induces a $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$-module structure on $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{X})$ defined on the classes as follows.
Let $[(\mathfrak{Y}\to \mathfrak{X}, g:\mathfrak{Y}\to \mathbb{A}^1_k)]$ and $[(\mathfrak{T}\to \mathfrak{S}, f:\mathfrak{T}\to \mathbb{A}^1_k)]$ be two classes. Then we define the action as $$[(\mathfrak{T}\to \mathfrak{S}, f:\mathfrak{T}\to \mathbb{A}^1_k)]\cdot [(\mathfrak{Y}\to \mathfrak{X}, g:\mathfrak{Y}\to \mathbb{A}^1_k)]:=[(\mathfrak{Y}\times_\mathfrak{X} (\mathfrak{T}\times_\mathfrak{S} \mathfrak{X})\to \mathfrak{X},\ h:\mathfrak{Y}\times_\mathfrak{X} (\mathfrak{T}\times_\mathfrak{S} \mathfrak{X})\to \mathbb{A}^1_k)]$$ where $h=\tilde{f}\cdot\tilde{g}$, as illustrated in the diagram below. $$\begin{tikzcd}[column sep=3em, row sep=3em] \mathfrak{Y}\times_\mathfrak{X} (\mathfrak{T}\times_\mathfrak{S} \mathfrak{X}) \arrow[dd, bend right=60, "\tilde{g}"] \arrow[rrr, bend left=60, "\tilde{f}"] \arrow[r] \arrow[d] & \mathfrak{T}\times_\mathfrak{S} \mathfrak{X} \arrow[r] \arrow[d] & \mathfrak{T} \arrow[r, "f"] \arrow[d] & \mathbb{A}^1_k \\ \mathfrak{Y} \arrow[r] \arrow[d, "g"] & \mathfrak{X} \arrow[r] & \mathfrak{S} & \\ \mathbb{A}^1_k & & & \end{tikzcd}$$ It is easy to see that this action extends to a $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$-module structure on $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{X})$. Furthermore, any morphism of stacks $a : \mathfrak{X} \to \mathfrak{Y}$ over $\mathfrak{S}$ yields a functor $$\begin{aligned} \textup{\bfseries LG}_\mathfrak{X} &\to \textup{\bfseries LG}_\mathfrak{Y} \\ (\mathfrak{Z} \xrightarrow{h} \mathfrak{X}, f:\mathfrak{Z}\to \mathbb{A}^1_k) & \mapsto (\mathfrak{Z}\xrightarrow {a \circ h} \mathfrak{Y}, f:\mathfrak{Z}\to \mathbb{A}^1_k)\end{aligned}$$ inducing a $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$-module map $a_!: \textup{K}_0(\textup{\bfseries LG}_\mathfrak{X})\to \textup{K}_0(\textup{\bfseries LG}_\mathfrak{Y})$. Note that this map will in general not be a ring morphism. Similarly, pulling back along $a$ yields a functor $$\textup{\bfseries LG}_\mathfrak{Y} \to \textup{\bfseries LG}_\mathfrak{X}$$ sending $(\mathfrak{Z}\to \mathfrak{Y}, f:\mathfrak{Z}\to \mathbb{A}^1_k)$ to $(\mathfrak{Z} \times_\mathfrak{Y} \mathfrak{X}\to \mathfrak{X}, \overline{f})$, as illustrated in the diagram below. $$\begin{tikzcd} \mathfrak{Z} \times_\mathfrak{Y} \mathfrak{X} \arrow{r} \arrow{d} \arrow[bend left=20]{rr}{\overline{f}} & \mathfrak{Z} \arrow[swap]{r}{f} \arrow{d} & \mathbb{A}^1_k \\ \mathfrak{X} \arrow{r}{a} & \mathfrak{Y} & \end{tikzcd}$$ This induces a ring homomorphism $$a^* : \textup{K}_0(\textup{\bfseries LG}_\mathfrak{Y}) \to \textup{K}_0(\textup{\bfseries LG}_\mathfrak{X}) ,$$ which is also a $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})$-algebra map. ## Landau--Ginzburg TQFT We alter the construction of Section [3.4](#sec:field_theory_and_quantization){reference-type="ref" reference="sec:field_theory_and_quantization"} to establish a TQFT with values in Landau--Ginzburg models, computing the virtual classes of character stacks. Namely, the symmetric monoidal functor called *field theory* $$\mathcal{F} : \textup{\bfseries Bord}_n \to \textup{Span}(\textup{\bfseries Stck}_k)$$ is unchanged, defined as in Section [3.4](#sec:field_theory_and_quantization){reference-type="ref" reference="sec:field_theory_and_quantization"}.
However, the *quantization functor* $$\mathcal{Q}^\textup{\bfseries LG}: \textup{Span}(\textup{\bfseries Stck}_k) \to \textup{K}_0(\textup{\bfseries LG}_k)\textup{-}\textup{\bfseries Mod}$$ is altered accordingly by assigning to an object $\mathfrak{X}$ the $\textup{K}_0(\textup{\bfseries LG}_k)$-module $\textup{K}_0(\textup{\bfseries LG}_\mathfrak{X})$, and to a span $\mathfrak{X} \xleftarrow{f} \mathfrak{Z} \xrightarrow{g} \mathfrak{Y}$ the morphism $g_! \circ f^* : \textup{K}_0(\textup{\bfseries LG}_\mathfrak{X}) \to \textup{K}_0(\textup{\bfseries LG}_\mathfrak{Y})$ of $\textup{K}_0(\textup{\bfseries LG}_k)$-modules. As before, the quantization functor is not monoidal; however, it is symmetric lax monoidal. The composition of the field theory and the quantization functor defines the TQFT in LG-models $$Z^\textup{\bfseries LG}_G = \mathcal{Q}^\textup{\bfseries LG}\circ \mathcal{F} : \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries LG}_k)\textup{-}\textup{\bfseries Mod}.$$ This lax monoidal functor assigns to the empty set $\varnothing$ the ring $\textup{K}_0(\textup{\bfseries LG}_k)$ and to a closed connected manifold $W$ of dimension $n$, thought of as a bordism $W : \varnothing \to \varnothing$, the endomorphism of $\textup{K}_0(\textup{\bfseries LG}_k)$ given by multiplication by $[(\mathfrak{X}_G(W), \mathfrak{X}_G(W) \xrightarrow{1} \mathbb{A}_k^1)]$, yielding the following theorem. **Theorem 63**. *Let $G$ be an algebraic group. There exists a lax TQFT $$Z^\textup{\bfseries LG}_G : \textup{\bfseries Bord}_n \to \textup{K}_0(\textup{\bfseries LG}_k)\textup{-}\textup{\bfseries Mod}$$ computing the virtual classes of $G$-character stacks. 0◻* ### Geometric method via LG-models {#sec:geommet} In this short section, we show that the TQFT $Z_G$ defined in Theorem [Theorem 40](#thm:character_stack_TQFT){reference-type="ref" reference="thm:character_stack_TQFT"} can be recovered from the Landau-Ginzburg TQFT. Let $\mathfrak{S}$ be an algebraic stack of finite type over $k$. Consider the forgetful map $$\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}: \textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})\rightarrow \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$$ sending the class of a Landau-Ginzburg model $[(\mathfrak{X}\to \mathfrak{S}, f:\mathfrak{X}\to \mathbb{A}^1)]$ to the virtual class $[\mathfrak{X}\to \mathfrak{S}]\in \textup{K}_0(\textup{\bfseries Stck}_\mathfrak{S})$ (see Lemma [Lemma 62](#lem:forgetfulLG){reference-type="ref" reference="lem:forgetfulLG"}). The following statement is immediate because of the forgetful nature of $\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}$. **Theorem 64**. *The forgetful map induces a natural transformation $\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}:Z_G^{\textup{\bfseries LG}} \Rightarrow Z_G$ as functors $\textup{\bfseries Bord}_n\to \textup{\bfseries Ab}$. In particular, if $W$ is a closed manifold, seen as a bordism $\varnothing\rightarrow \varnothing$, then $$\ooalign{\hidewidth $\int$\hidewidth\cr\rule[0.7ex]{1ex}{.4pt}}(Z_G^{\textup{\bfseries LG}}(W))(1) = Z_G(W)(1) = [\mathfrak{X}_G(W)]$$ in $\textup{K}_0(\textup{\bfseries Stck}_k)$. 0◻* ### Arithmetic method via LG-models {#sec:LGarith} In this section, we compare the Landau-Ginzburg TQFT with the arithmetic TQFT. We first alter the definition of a Landau-Ginzburg model (LG-model for short) as follows (see Definition [Definition 60](#def:lgmodel){reference-type="ref" reference="def:lgmodel"}).
**Definition 65**. The category $\textup{\bfseries LG}_\mathfrak{S}$ of Landau-Ginzburg models over an algebraic stack $\mathfrak{S}$ of finite type over a finite field $k$ (with affine stabilizers) is the category - whose objects are pairs $(\mathfrak{X}\rightarrow \mathfrak{S},f)$ where $\mathfrak{X}$ is a finite type algebraic stack over $\mathfrak{S}$ (with finite stabilizers) and $f:\mathfrak{X}(k)\to \mathbb{C}$ is any $\mathbb{C}$-valued function on the $k$-points of $\mathfrak{X}$, and - morphisms between the objects $(\mathfrak{X}\to \mathfrak{S}, f)$ and $(\mathfrak{Y}\to \mathfrak{S}, g)$ are algebraic morphisms $\pi:\mathfrak{X}\to \mathfrak{Y}$ commuting with the structure maps (a) $\mathfrak{X}\to \mathfrak{S}$ and $\mathfrak{Y}\to \mathfrak{S}$, and the maps on $k$-points (b) $f:\mathfrak{X}(k)\to \mathbb{C}$ and $g:\mathfrak{Y}(k)\to \mathbb{C}$. We consider the integration-over-the-fibres map $$\int:\textup{\bfseries LG}_\mathfrak{X}\to \mathbb{C}^{\mathfrak{X}(k)}$$ sending a Landau-Ginzburg model $(\mathfrak{Y}\xrightarrow{\pi} \mathfrak{X}, f:\mathfrak{Y}(k)\to \mathbb{C})$ to $\left(x\mapsto \sum_{[y]\in [\pi^{-1}(x)]}\frac{f(y)}{|\mathop{\mathrm{Aut}}(y)|}\right)$. It is easy to see that in the case of $\mathfrak{S}=[S/G]$ this provides a map $\int:\textup{K}_0(\textup{\bfseries LG}_\mathfrak{S})\rightarrow \mathbb{C}^{\mathfrak{S}(k)}$ (the integration map) from the Grothendieck ring of Landau-Ginzburg models to the complex vector space $\mathbb{C}^{\mathfrak{S}(k)}$ of functions on the objects of the character groupoid $\mathfrak{S}(k)$. We have the following statement, completely similar to Proposition [Proposition 55](#prop:comparisontqft_arith){reference-type="ref" reference="prop:comparisontqft_arith"}. **Proposition 66**. *Let $\mathbb{F}_q$ be a finite field extension of $k$. There is a natural transformation $$\int: \mathcal{Q}^\textup{\bfseries LG}\Rightarrow \mathcal{Q}^\# \circ (-)(\mathbb{F}_q).$$ In particular, this induces a natural transformation of TQFTs $$Z_G^\textup{\bfseries LG}\Rightarrow Z_G^{\mathbb{F}_q} .$$ As a consequence, if $W$ is a closed manifold, seen as a bordism $\varnothing\rightarrow \varnothing$, then $$\int Z_G^{\textup{\bfseries LG}}(W)(1) = Z^{\mathbb{F}_q}_G(W)(1) = |\mathfrak{X}_G(W)(\mathbb{F}_q)|.$$ 0◻* Combining Proposition [Proposition 56](#prop:compGconn){reference-type="ref" reference="prop:compGconn"} and the proposition above, we see that if $G$ is connected, then there is a natural transformation $Z_G^\textup{\bfseries LG}\Rightarrow Z^\#_{G(\mathbb{F}_q)}$ comparing the LG-valued TQFT with the arithmetic one. # Applications {#sec:app} In the previous sections, we provided a comparison between the TQFTs considered in this paper, in terms of natural transformations, summarized in diagram [\[eq:diagram_natural_transformations\]](#eq:diagram_natural_transformations){reference-type="eqref" reference="eq:diagram_natural_transformations"}. We refer to this as the *arithmetic-geometric correspondence*: the TQFT $Z^\#_G$ has an arithmetic flavour as it counts solutions to algebraic equations over finite fields, whereas the TQFT $Z_G$ has an intrinsic geometric nature, in the sense that it computes a subtle algebraic invariant of the character stack. In this section, we analyze two types of implications of the correspondence. In one implication, we use the geometric TQFT $Z_G$ in order to extract information about the character table of the algebraic group $G$ over finite fields.
For example, the first column of the character table, consisting of the dimensions of the irreducible characters and their multiplicities, can easily be determined, but also some other sums of columns. In the other implication, we attempt to lift the eigenvectors of the arithmetic TQFT $Z^\#_G$ to eigenvectors of the geometric TQFT $Z_G$. For $G$ the group of upper triangular matrices (of small rank), we show there exist canonical such lifts and, moreover, that these lifts immensely simplify the computations for $Z^\#_G$, as performed in [@hablicsek2022virtual]. That is, in [@hablicsek2022virtual], computed-assisted computations were performed on a submodule with $16$ generators, while with the simplification, only $2$ or $3$ generators are needed. The bridge between these two worlds is Corollary [Corollary 67](#cor:relation_eigenvalues_geometric_arithmetic){reference-type="ref" reference="cor:relation_eigenvalues_geometric_arithmetic"}, which follows directly from the arithmetic-geometric correspondence [\[eq:diagram_natural_transformations\]](#eq:diagram_natural_transformations){reference-type="eqref" reference="eq:diagram_natural_transformations"}. To state it properly, recall that $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ is an endomorphism of the $\textup{K}_0(\textup{\bfseries Stck}_R)$-module $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$. However, suppose that the $\textup{K}_0(\textup{\bfseries Stck}_R)$-submodule $$\mathcal{V} = \langle Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g(\textup{\bfseries 1}_G) \textup{ for } g \in \mathbb{Z}_{\ge 0} \rangle \subseteq \textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$$ is finitely generated. Then $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ is an endomorphism of $\mathcal{V}$. Thus, if $x_1, \ldots, x_n \in \mathcal{V}$ is a set of generators, we have that for all $i = 1, \ldots, n$ we can write $$Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. 
(0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)(x_i) = \sum_{j=1}^n a_{ij} x_j, \quad \textrm{with } a_{ij} \in \textup{K}_0(\textup{\bfseries Stck}_R).$$ With this information, in analogy with the vector space case, we can form the matrix $A = (a_{ij})$ of coefficients representing $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$. Note that $A$ is not necessarily unique. With this matrix, given a ring morphism $R \to \mathbb{F}_q$, using the construction of Definition [Definition 54](#defn:integral-fibers){reference-type="ref" reference="defn:integral-fibers"}, we can form the $n \times n$ matrix of complex numbers $$\int_{R} A = \left(\int_{R} a_{ij}\right)_{ij} \quad \textup{ where } \quad \int_{R} a_{ij} \in \mathbb{C}^{R(\mathbb{F}_q)} = \mathbb{C}.$$ With this notation, the main result of this section is the following. **Corollary 67**. *Let $G$ be a connected algebraic group over a finitely generated $\mathbb{Z}$-algebra $R$, and let $\mathbb{F}_q$ be a finite field extending $R$. Denote by $\textup{\bfseries 1}_G \in \textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ the class of the inclusion of the identity in $G$. If the $\textup{K}_0(\textup{\bfseries Stck}_R)$-module $\mathcal{V} = \langle Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g(\textup{\bfseries 1}_G) \textup{ for } g \in \mathbb{Z}_{\geq 0} \rangle$ is finitely generated, then:* (i) *The submodule $\int_{[G/G]} \mathcal{V}$ of $\mathbb{C}^{[G/G](\mathbb{F}_q)}$, both seen as $\textup{K}_0(\textup{\bfseries Stck}_R)$-modules, is generated by the sums of equidimensional irreducible characters.* (ii) *The dimensions of the complex irreducible characters of $G(\mathbb{F}_q)$ are precisely given by $$d_i = \frac{|G(\mathbb{F}_q)|}{\sqrt{\lambda_i}}$$ for $\lambda_i \in \mathbb{Z}$ the eigenvalues of $\int_R A$, where $A$ is any matrix representing the linear map $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ with respect to a generating set of $\mathcal{V}$.* (iii) *Write $\int_R \textup{\bfseries 1}_G = \sum_i v_i$, where $v_i$ are eigenvectors of $\int_R A$ with eigenvalues $\lambda_i$. 
Then each $v_i$ is a scalar multiple of the sum of equidimensional characters $$v_i = \frac{d_i}{|G(\mathbb{F}_q)|}\sum_{\substack{\chi \in \hat{G}\textup{\ s.t.} \\ \chi(1) = d_i}} \chi .$$* *Proof.* *(i)* Since $G$ is connected, by Corollary [Corollary 58](#cor:natural_transformation_geometric_arithmetic){reference-type="ref" reference="cor:natural_transformation_geometric_arithmetic"}, we get a commutative square $$\begin{tikzcd}[column sep=5em] \textup{K}_0(\textup{\bfseries Stck}_{[G/G]}) \arrow{r} \arrow[swap]{d}{\int_{[G/G]}} & \textup{K}_0(\textup{\bfseries Stck}_{[G/G]}) \arrow{d}{\int_{[G/G]}} \\ \mathbb{C}^{[G(\mathbb{F}_q)/G(\mathbb{F}_q)]} \arrow{r} & \mathbb{C}^{[G(\mathbb{F}_q)/G(\mathbb{F}_q)]} \end{tikzcd}$$ where the top and bottom map are given by $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ and $Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$, respectively. Also, we see $\mathbb{C}^{[G/G](\mathbb{F}_q)} = R_\mathbb{C}(G(\mathbb{F}_q))$ as a $\textup{K}_0(\textup{\bfseries Stck}_{R})$-module through restriction of scalars via $\int_{R}$. Now, observe that $\int_{[G/G]}\textup{\bfseries 1}_G \in R_\mathbb{C}(G(\mathbb{F}_q))$ is the delta-function over the identity of $G(\mathbb{F}_q)$, so we can write $$\int_{[G/G]}\textup{\bfseries 1}_G=\frac{1}{|G(\mathbb{F}_q)|}\sum_{\chi\in \hat{G}} \chi(1)\chi.$$ More in general, using that the irreducible characters are eigenvectors of $Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ with eigenvalue $|G(\mathbb{F}_q)|^2/\chi(1)^2$, we have $$Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. 
(0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g \left(\int_{[G/G]}\textup{\bfseries 1}_G \right) =\frac{1}{|G(\mathbb{F}_q)|}\sum_{\chi\in \hat{G}} \frac{|G(\mathbb{F}_q)|^{2g}}{\chi(1)^{2g-1}}\chi(1)\chi,$$ which shows that the space generated by the vectors $Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)^g \left(\int_{[G/G]}\textup{\bfseries 1}_G \right) \in R_\mathbb{C}(G(\mathbb{F}_q))$ for $g = 0, 1, \ldots$ is generated by the sums $\sum_{\chi(1) = n} \chi$ of irreducible characters of dimension $n \geq 1$. But, by the naturality of $\int_{[G/G]}$, this space is exactly $\int_{[G/G]} \mathcal{V}$. *(ii)* By naturality, $Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ is an endomorphism of the submodule $\int_{[G/G]} \mathcal{V} \subseteq R_\mathbb{C}(G(\mathbb{F}_q))$. Furthermore, we see that $\int_{[G/G]} A$ represents this restriction written in the set of generators $\int_{[G/G]} x_i$ of $\int_{[G/G]} \mathcal{V}$, where $x_1, \ldots, x_n$ are generators of $\mathcal{V}$. Hence, the non-zero eigenvalues and eigenvectors of this matrix are also eigenvalues and eigenvectors of $Z^\#_{G(\mathbb{F}_q)}\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$, whose eigenvalues are of the form $\frac{|G(\mathbb{F}_q)|^2}{\chi(1)^2}$. Since all these dimensions appear in one of the generators of $\int_{[G/G]} \mathcal{V}$, this proves the statement. *(iii)* We can write $\int_{[G/G]}\textup{\bfseries 1}_G$ in two ways: first as $\frac{1}{|G(\mathbb{F}_q)|}\sum_{\chi\in \hat{G}} \chi(1)\chi$, second as $\sum_i v_i$ where the $v_i$ are eigenvectors of $\int_R A$. The statement follows by comparing these two expressions eigenspace by eigenspace: the component of the first expression in the eigenspace for the eigenvalue $|G(\mathbb{F}_q)|^2/d_i^2$ is precisely $\frac{d_i}{|G(\mathbb{F}_q)|}\sum_{\chi(1) = d_i} \chi$. ◻ **Remark 68**. In a previous paper [@gonzalez2022virtual], the authors of this paper considered a different computational framework to investigate the virtual classes of character stacks. In that framework, additional basepoints were chosen on the manifolds and the bordisms to make the field theory functorial. We note that the main result of this paper, Theorem [Theorem 5](#introthm:nattrans){reference-type="ref" reference="introthm:nattrans"}, also holds with some natural alterations in the case of that framework. 
This allows us to compare the endomorphism $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ of the geometric TQFT to the character table of $G$ when $G$ is not necessarily connected. ## From geometry to characters Let us illustrate how the geometry of $Z_G$ implies, through the arithmetic-geometric correspondence, information on the arithmetic side: the representation theory of $G$ over finite fields $\mathbb{F}_q$. As a simple example, we take $G$ to be the affine linear group of rank 1, $$\textup{AGL}_1(k) = \left\{ \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} : a \ne 0 \right\} .$$ We introduce the following two locally closed subvarieties of $\textup{AGL}_1$: $$I = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right\}, \qquad J = \left\{ \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix} : b \ne 0 \right\}$$ whose natural inclusions $I \to G$ and $J \to G$ induce elements $[I/G], [J/G] \in \textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$. The fact that the elements of the commutator subgroup have ones on the diagonal translates into the fact that the $\textup{K}_0(\textup{\bfseries Stck}_k)$-submodule generated by $[I/G]$ and $[J/G]$ will be invariant under $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$. Denoting $q = [\mathbb{A}^1_k]$, one easily shows (as in [@gonzalez2022virtual] or [@gologmun05]) that $$Z_G\left( \begin{tikzpicture}[semithick, scale=1*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right) = \left( \begin{array}{cc} q^2 (q - 1) & q^2 (q - 2) (q - 1) \\ q^2 (q - 2) & q^2 (q^2 - 3 q + 3) \end{array} \right)$$ with respect to $[I/G]$ and $[J/G]$. This matrix can be diagonalized with eigenvectors $(q - 1)([I/G] + [J/G])$ and $(q - 1)[I/G] - [J/G]$ and eigenvalues $q^2 (q - 1)^2$ and $q^2$, respectively. As the computations for $Z_G\left( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. 
(0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} \right)$ can be performed over $\mathbb{Z}$, they are also valid over any finite field $k = \mathbb{F}_q$. Under the natural transformation of Corollary [Corollary 58](#cor:natural_transformation_geometric_arithmetic){reference-type="ref" reference="cor:natural_transformation_geometric_arithmetic"}, the eigenvectors transform to the following functions on $\textup{AGL}_1(\mathbb{F}_q)$: $$\left(\int_{[G/G]} (q - 1) ([I/G] + [J/G]) \right)(g) = \begin{cases} q - 1 & \textup{ if } g = \left(\begin{smallmatrix} 1 & b \\ 0 & 1 \end{smallmatrix}\right) , \\ 0 & \textup{ otherwise} , \end{cases}$$ and $$\left(\int_{[G/G]} (q - 1) [I/G] - [J/G]\right)(g) = \begin{cases} (q - 1) & \textup{ if } g = 1, \\ -1 & \textup{ if } g \in J, \\ 0 & \text{otherwise} . \end{cases}$$ From Section [2](#sec:representation_ring){reference-type="ref" reference="sec:representation_ring"} we know that the eigenvalues, which are $q^2 (q - 1)^2$ and $q^2$, correspond to the values $\left(\frac{|\textup{AGL}_1(\mathbb{F}_q)|}{\chi(1)}\right)^2$ for the irreducible characters $\chi$. Hence, the irreducible characters are of dimension $1$ or $q - 1$. The above eigenvectors are already properly normalized, meaning $\eta(1) = \frac{1}{|G|} (v_1 + (q - 1) v_{q - 1})$. Therefore, the first function is the sum of the $(q - 1)$ $1$-dimensional irreducible characters of $\textup{AGL}_1(\mathbb{F}_q)$ and the second function is the character of the unique $(q - 1)$-dimensional representation of $\textup{AGL}_1(\mathbb{F}_q)$. Note that in the special case of $q = 2$, there are 2 irreducible characters both of dimension $1$, which are precisely given by the two above functions. ## From characters to geometry Let us illustrate how the arithmetic information, i.e. the representation theory of $G$ over $\mathbb{F}_q$, provides geometric insight into the TQFT $Z_G$. We will show how the representation theory of the groups of unipotent upper triangular matrices over $\mathbb{F}_q$ can be used to simplify the corresponding geometric TQFT $Z_G$, which was also considered and computed in [@hablicsek2022virtual]. In particular, we provide a new smaller set of generators for this TQFT motivated by the arithmetic side. More precisely, we obtain this new generating set by canonically lifting the sums of equidimensional characters to the Grothendieck ring of varieties. These generators will be given by classes of locally closed subvarieties of $G$. **Example 69** (Unipotent $3 \times 3$ matrices). Consider the group $\mathbb{U}_3(\mathbb{F}_q)$ of unipotent matrices of rank $3$ over a finite field $\mathbb{F}_q$, $$\label{eq:presentation_U3} \mathbb{U}_3(\mathbb{F}_q) = \left\{ \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} : x, y, z \in \mathbb{F}_q \right\} .$$ The irreducible complex characters of $\mathbb{U}_3(\mathbb{F}_q)$ are of dimension $1$ or $q$. Denote the set of $1$-dimensional characters by $X_1$ and of the $q$-dimensional characters by $X_q$. 
Summing the $1$-dimensional characters, we find that $$v_1 = \sum_{\chi \in X_1} \chi \quad \textup{ is given by } \quad v_1 \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} = \begin{cases} q & \textup{ if } x = z = 0 , \\ 0 & \textup{ otherwise} , \end{cases}$$ and summing the $q$-dimensional characters, we find that $$v_2 = \sum_{\chi \in X_q} \chi \quad \textup{ is given by } \quad v_2 \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} = \begin{cases} -q & \textup{ if } x = z = 0 \textup{ and } y \ne 0, \\ q(q - 1) & \textup{ if } x = y = z = 0, \\ 0 & \textup{ otherwise}. \end{cases}$$ These eigenvectors can be canonically lifted to the elements in $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ for $G = \mathbb{U}_3$ given by $$q [\{ x = z = 0 \}] \quad \textup{ and } \quad -q [\{ x = z = 0, \; y \ne 0 \}] + q (q - 1) [\{ x = y = z = 0 \}] ,$$ in terms of classes of $G$-equivariant subvarieties of $\mathbb{U}_3$, where $x, y, z$ are as in the presentation [\[eq:presentation_U3\]](#eq:presentation_U3){reference-type="eqref" reference="eq:presentation_U3"}. Indeed, from the computations of [@hablicsek2022virtual] it can be seen that these elements are eigenvectors of $Z_G$. The eigenvalues are $q^6$ and $q^4$, respectively, where now $q = [\mathbb{A}^1_k]$. In fact, the submodule of $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ generated by these two eigenvectors contains $Z_G( \begin{tikzpicture}[semithick, scale=0.9*0.5, baseline=-0.5ex] \begin{scope} \draw (0,0) ellipse (0.2cm and 0.4cm); \draw (0,-0.4) arc (-90:90:0.75cm and 0.4cm); \end{scope} \end{tikzpicture} )(1)$ and is invariant under $Z_G( \begin{tikzpicture}[semithick, scale=0.7*0.5, baseline=-0.5ex] \begin{scope} \draw (-1,0) ellipse (0.2cm and 0.4cm); \draw (-1,0.4) .. controls (-0.5,0.6) and (0.5,0.6) .. (1,0.4); \draw (-1,-0.4) .. controls (-0.5,-0.6) and (0.5,-0.6) .. (1,-0.4); \draw (-0.5,0.1) .. controls (-0.5,-0.125) and (0.5,-0.125) .. (0.5,0.1); \draw (-0.4,0.0) .. controls (-0.4,0.0625) and (0.4,0.0625) .. (0.4,0.0); \draw (1,0) ellipse (0.2cm and 0.4cm); \end{scope} \end{tikzpicture} )$, and is therefore a simplification of the submodule used in [@hablicsek2022virtual], which used $5$ generators. **Example 70** (Unipotent $4 \times 4$ matrices). Consider the group $\mathbb{U}_4(\mathbb{F}_q)$ of unipotent matrices of rank $4$ over a finite field $\mathbb{F}_q$. This group has three families of irreducible complex characters: the $1$-dimensional characters $X_1$, the $q$-dimensional characters $X_q$ and the $q^2$-dimensional characters $X_{q^2}$. Summing characters of the same dimension, we find $$\begin{aligned} \sum_{\chi \in X_1} \chi \begin{pmatrix} 1 & a & b & c \\ 0 & 1 & d & e \\ 0 & 0 & 1 & f \\ 0 & 0 & 0 & 1 \end{pmatrix} &= \begin{cases} q^2 & \textup{ if } a = d = f = 0, \\ 0 & \textup{ otherwise} , \end{cases} \\ \sum_{\chi \in X_q} \chi \begin{pmatrix} 1 & a & b & c \\ 0 & 1 & d & e \\ 0 & 0 & 1 & f \\ 0 & 0 & 0 & 1 \end{pmatrix} &= \begin{cases} q^4 & \textup{ if } a = b = d = e = f = 0, \\ -q^2 & \textup{ if } a = d = f = 0 \textup{ and } (b, e) \ne (0, 0) , \\ 0 & \textup{ otherwise} , \end{cases} \\ \sum_{\chi \in X_{q^2}} \chi \begin{pmatrix} 1 & a & b & c \\ 0 & 1 & d & e \\ 0 & 0 & 1 & f \\ 0 & 0 & 0 & 1 \end{pmatrix} &= \begin{cases} q^3 (q - 1) & \textup{ if } a = b = c = d = e = f = 0, \\ -q^3 & \textup{ if } a = b = d = e = f = 0 \textup{ and } c \ne 0 , \\ 0 & \textup{ otherwise} . 
\end{cases}\end{aligned}$$ Again, we can canonically lift these functions to elements in $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ for $G = \mathbb{U}_4$ given by $$\begin{aligned} & q^2 [\{ a = d = f = 0 \}], \\ & q^4 [\{ a = b = d = e = f = 0 \}] - q^2 [\{ a = d = f = 0 \}] , \\ \textup{ and } \quad & q^3 (q - 1) [\{ a = b = c = d = e = f = 0 \}] - q^3 [\{ a = b = d = e = f = 0 \textup{ and } c \ne 0 \}] .\end{aligned}$$ From the computations of [@hablicsek2022virtual] it can be seen that these elements are indeed eigenvectors of $Z_G$. As before, we can use these three eigenvectors to generate a submodule of $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ replacing the one from [@hablicsek2022virtual], which used $16$ generators, a significant simplification. **Example 71** ($\mathbb{G}_m \rtimes \mathbb{Z}/2\mathbb{Z}$). A very interesting phenomenon arises in the case of the group $G = \mathbb{G}_m \rtimes \mathbb{Z}/2\mathbb{Z}$, where the group $\mathbb{Z}/2\mathbb{Z}$ acts on $\mathbb{G}_m$ by negation: it is impossible for the geometric TQFT to admit a set of generators given by classes of subvarieties of $G$! Indeed, consider the group over a finite field $\mathbb{F}_q$, and assume for simplicity that $q$ is odd. Using Remark [Remark 68](#rem:nonconnected){reference-type="ref" reference="rem:nonconnected"}, we can study the geometric TQFT from the character table. The sums of equidimensional characters can be expressed as $${\arraycolsep=1em \def\arraystretch{1.5} \begin{array}{c|ccc} & \{ 1 \} & \{ t \in G(\mathbb{F}_q) \mid t \textup{ is a square} \} & \{ t \in G(\mathbb{F}_q) \mid t \textup{ is not a square} \} \\ \hline v_1 =\sum_{\varepsilon, \delta} \rho_{\varepsilon, \delta} & 4 & 4 & 0 \\ v_2=\sum_k \tau_k & q - 3 & -2 & 0 \end{array}}$$ In this case, lifting the eigenvectors $v_1$ and $v_2$ to elements of the Grothendieck ring is slightly more involved: there exists no 'subvariety of squares' of $G$. However, we can instead consider the variety $X = \mathbb{G}_m$ over $\mathbb{G}_m$ given by $x \mapsto x^2$. Counting the points of the fiber $X(\mathbb{F}_q)_t$ of $X \to \mathbb{G}_m$ over a point $t \in \mathbb{G}_m(\mathbb{F}_q)$, we find $$\left| X(\mathbb{F}_q)_t \right| = \left\{ \begin{array}{cl} 2 & \textup{ if $t$ is a square}, \\ 0 & \textup{ if $t$ is not a square} . \end{array}\right.$$ This suggests that eigenvectors of $Z_G$ over $R = \mathbb{Z}[\frac{1}{2}]$ are given by $$2 [X] \quad \textup{ and } \quad (q - 1) [\{ 1 \}] - [X] .$$ Indeed, these elements agree with the generators for the submodule of $\textup{K}_0(\textup{\bfseries Stck}_{[G/G]})$ as used for the computations in [@gonzalez2022virtual].
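To close this section, here is a purely illustrative numerical check (a sketch added for convenience, not part of the computations above) of the relation $d_i = |G(\mathbb{F}_q)|/\sqrt{\lambda_i}$ from Corollary 67(ii), using the explicit matrix obtained for $G = \textup{AGL}_1$ in the previous subsection. The helper name `agl1_matrix` is ours; the matrix entries and the group order $|\textup{AGL}_1(\mathbb{F}_q)| = q(q-1)$ are taken from the discussion above, with the class $q = [\mathbb{A}^1_k]$ specialized to the point count of $\mathbb{A}^1$ over $\mathbb{F}_q$.

```python
import numpy as np

def agl1_matrix(q):
    # Matrix representing the genus-adding bordism on the submodule generated
    # by [I/G] and [J/G] for G = AGL_1, copied from the computation above,
    # with the class q = [A^1] specialized to the number of points of F_q.
    return np.array([
        [q**2 * (q - 1),  q**2 * (q - 2) * (q - 1)],
        [q**2 * (q - 2),  q**2 * (q**2 - 3 * q + 3)],
    ], dtype=float)

for q in [3, 5, 7, 9, 11, 13]:
    lam = np.linalg.eigvals(agl1_matrix(q)).real   # eigenvalues are real here
    dims = sorted(q * (q - 1) / np.sqrt(lam))      # |AGL_1(F_q)| = q(q - 1)
    # Corollary 67(ii): the recovered character dimensions are 1 and q - 1.
    assert np.allclose(dims, [1, q - 1]), (q, dims)
    print(q, dims)
```

The eigenvalues computed numerically agree with $q^2(q-1)^2$ and $q^2$, so the script recovers the character dimensions $1$ and $q-1$ of $\textup{AGL}_1(\mathbb{F}_q)$, as stated in the text.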
arxiv_math
{ "id": "2309.15331", "title": "Arithmetic-Geometric Correspondence of Character Stacks via Topological\n Quantum Field Theory", "authors": "\\'Angel Gonz\\'alez-Prieto, M\\'arton Hablicsek, Jesse Vogel", "categories": "math.AG math-ph math.CT math.MP math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We revisit the study of the sum and difference sets of a subset of $\mathbb{N}$ drawn from a binomial model, proceeding under the following more general setting. Given $A \subseteq \{0, 1, \dots, N\}$, an integer $h \geq 2$, and a linear form $L: \mathbb{Z}^h \to \mathbb{Z}$ given by $$L(x_1, \dots, x_h) = u_1x_1 + \cdots + u_hx_h, \quad u_i \in \mathbb{Z}_{\neq 0} \text{ for all } i \in [h],$$ we study the size of $$L(A) = \left\{u_1a_1 + \cdots + u_ha_h : a_i \in A \right\}$$ and its complement $L(A)^c$ when each element of $\{0, 1, \dots, N\}$ is independently included in $A$ with probability $p(N)$. We identify two phase transition phenomena. The first concerns the relative sizes of $L(A)$ and $L(A)^c$, with $p(N) = N^{-(h-1)/h}$ as the threshold. Asymptotically almost surely, it holds below the threshold that almost all sums generated in $L(A)$ are distinct and almost all possible sums are in $L(A)^c$, and above the threshold that almost all possible sums are in $L(A)$. This generalizes work of Hegarty and Miller and settles their conjecture. The second, which may be expressed in terms of a stochastic process on hypergraphs, concerns the asymptotic behavior of the number of distinct representations in $L(A)$ of a given value, with $p(N) = N^{-\frac{h-2}{h-1}}$ as the threshold. address: - Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, UK - Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267 author: - Ryan Jeong - Steven J. Miller bibliography: - references.bib title: Phase Transitions for Binomial Sets Under Linear Forms --- # Introduction ## Motivation {#subsec:motivation} Some of the most basic objects that one can study in additive combinatorics are sum and difference sets of subsets of the integers. Specifically, letting $A \subset \mathbb{Z}$ be a finite subset of the integers, the *[sumset $A+A$]{style="color: red"}* and the *[difference set $A-A$]{style="color: red"}* of $A$ are given by $$\begin{aligned} & A+A := \left\{a_1 + a_2 : a_1, a_2 \in A\right\}, & A-A := \left\{a_1 - a_2 : a_1, a_2 \in A\right\}.\end{aligned}$$ Sum and difference sets lie at the core of many of the most important conjectures and results in number theory. For example, if we let $P$ denote the set of all prime numbers and $A_k$ the set of all $k$^th^ powers of positive integers, then Goldbach's Conjecture, the Twin Primes Conjecture, and Fermat's Last Theorem are respectively equivalent to the statements that $P+P$ contains all even numbers that are at least $4$, that $2$ can be generated in $P-P$ via infinitely many representations, and that $(A_k + A_k) \cap A_k = \emptyset$ if $k$ is an integer that is at least $3$. The most fundamental statistic that one can associate with a finite combinatorial object is its size. Since addition is commutative and subtraction is not, a typical pair of integers generates two distinct differences and one distinct sum. It is therefore natural to conjecture that a generic finite subset $A \subset \mathbb{Z}$ will satisfy $|A-A| \geq |A+A|$. *[More sums than differences (MSTD) sets]{style="color: red"}*, or finite sets $A \subseteq Z$ such that $|A+A| > |A-A|$, exist: both the first such example $\{0, 2, 3, 4, 7, 11, 12, 14\}$ and the claim that there does not exist an MSTD set with fewer elements than this example are attributed in folklore to Conway in the 1960s. 
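As a quick concrete check of these definitions (an illustrative snippet, not drawn from the literature discussed here), the following Python code computes the sumset and difference set of Conway's example and confirms that it indeed has more sums than differences.

```python
# Conway's example of an MSTD set, as quoted above.
A = {0, 2, 3, 4, 7, 11, 12, 14}

sumset = {a1 + a2 for a1 in A for a2 in A}
diffset = {a1 - a2 for a1 in A for a2 in A}

print(len(sumset), len(diffset))   # prints 26 25: more sums than differences
assert len(sumset) > len(diffset)  # so A is an MSTD set
```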
Since the affirmative resolution of the question of whether MSTD sets exist, the construction of families of MSTD sets and studying their structural properties has been a productive direction of research: see [@asada2017fringe; @chu2019infinite; @chu2020generalizations; @hegarty2006some; @iyer2014finding; @miller2010explicit; @nathanson2006sets; @penman2013sets; @zhao2010constructing; @zhao2011sets]. The study of MSTD sets has also been pursued under certain algebraic [@ascoli2022sum; @miller2014most; @nathanson2018mstd; @zhao2010counting], geometric [@do2015sets; @miller2019geometric], complexity-theoretic [@mathur2018computational], and probabilistic [@chu2022generalizing; @harvey2021distribution; @lazarev2013distribution] contexts. The work [@chu2020sets] pursues similar inquiries under the context of product and quotient sets. In yet another extension of the original problem, the setup has also been generalized to high-dimensional settings in the following way. For a subset $A \subseteq \mathbb{Z}$, we define its *[generalized sumset with $s$ sums and $d$ differences $A_{s,d}$]{style="color: red"}* to be $$\begin{aligned} A_{s,d} := \{a_1 + \dots + a_s - b_1 - \dots - b_d : a_1, \dots, a_s, b_1, \dots, b_d \in A\}.\end{aligned}$$ Of course, $A_{2,0} = A+A$ and $A_{1,1} = A-A$ are the usual sum and difference sets of $A$. It is similarly natural to motivate the study of these objects with classical number theory: for example, letting $B_k$ denote the set of all $k$^th^ powers of nonnegative integers, Waring's problem asks whether for each $k \in \mathbb{Z}_{\geq 0}$, there exists a positive integer $s$ for which $(B_k)_{s,0} = \mathbb{Z}_{\geq 0}$. Following similar intuition as before, we might expect that given two generalized sumsets $A_{s_1, d_1}$, $A_{s_2,d_2}$ for which $$\begin{aligned} \label{eq:generalized_sumset_assumptions} s_1 + d_1 = s_2 + d_2 = h, \ \ \ \ \ s_1 \geq d_1, \ \ \ \ \ s_2 \geq d_2, \ \ \ \ \ s_1 \leq s_2,\end{aligned}$$ it should usually be the case that $|A_{s_1, d_1}| \geq |A_{s_2, d_2}|$, since the former set has more minus signs. For fixed values of $s_1, d_1, s_2, d_2$ satisfying [\[eq:generalized_sumset_assumptions\]](#eq:generalized_sumset_assumptions){reference-type="eqref" reference="eq:generalized_sumset_assumptions"}, we say that a finite set $A \subseteq \mathbb{Z}$ is *[MSTD]{style="color: red"}* in this context if $|A_{s_1, d_1}| < |A_{s_2, d_2}|$. A number of articles construct MSTD sets under this generalized setting and study their properties: see [@iyer2012generalized; @kim2022constructions; @miller2012explicit]. Despite the fact that sets $A \subset \mathbb{Z}$ satisfying $|A+A| > |A-A|$ exist, one might guess that it should be the case that they ought to be, in a certain sense, uncommon. This is captured by Nathanson [@nathanson2006problems], as he wrote "Even though there exist sets $A$ that have more sums than differences, such sets should be rare, and it must be true with the right way of counting that the vast majority of sets satisfies $|A-A| > |A+A|$.\" One intuitive way to interpret Nathanson's quote is as follows. 
Letting $I_N := \{0, \dots, N\}$, we would like to find a natural family of probability measures $\left\{\mathbb{P}_N\right\}_{N=1}^\infty$, defined respectively on the sample spaces $2^{I_N}$, for which it holds that $$\begin{aligned} \label{eq:desired_prob_msr} \lim_{N \to \infty} \mathbb{P}_N\left( \left\{A \subseteq I_N : |A-A| > |A+A|\right\} \right) = 1.\end{aligned}$$ Perhaps the most natural such model is that in which $\mathbb{P}_N$ is taken to be the uniform measure on $2^{I_N}$: we can think of this equivalently as including each element of $I_N$ in our random subset $A$ with probability $1/2$. Perhaps counterintuitively, [@martin2006many Theorem 1] states that $$\begin{aligned} \label{eq:uniform_model} \mathbb{P}_N\left( \left\{A \subseteq I_N : |A+A| > |A-A|\right\} \right) = \Omega(1).\end{aligned}$$ In other words, MSTD sets not only exist, but are ubiquitous. A similar phenomenon has been observed when studying generalized sumsets: see [@iyer2012generalized Theorem 1.1]. This proves that, in the sense of [\[eq:desired_prob_msr\]](#eq:desired_prob_msr){reference-type="eqref" reference="eq:desired_prob_msr"}, the uniform model is not the "right way of counting.\" We mention that the LHS of [\[eq:uniform_model\]](#eq:uniform_model){reference-type="eqref" reference="eq:uniform_model"} is known to have a limit as $N \to \infty$, but at the time of writing, the question of what this limit is remains open. The proof of this statement and the best known lower bound for this limit are both due to [@zhao2011sets]. Some thought reveals that [\[eq:uniform_model\]](#eq:uniform_model){reference-type="eqref" reference="eq:uniform_model"} is a consequence of the nature of the model. Indeed, with high probability, it will be that $|A|$ is roughly $N/2$, so $A$ is quite dense: $A+A$ and $A-A$ will generate nearly all possible sums and differences that they possibly can, so the relative ordering of $|A+A|$ and $|A-A|$ is dictated by the elements in the fringes of the intervals $[0, 2N]$ and $[-N, N]$ that $A+A$ and $A-A$ respectively generate. The proof of [\[eq:uniform_model\]](#eq:uniform_model){reference-type="eqref" reference="eq:uniform_model"}, as well as the construction of infinite families of MSTD sets, often relies on carefully selecting the fringe elements of $A$ to manipulate the fringe elements of $A+A$ and $A-A$. As such, one natural adjustment to the uniform model is to sparsify the random subset $A$ by letting the inclusion probability $p$ vanish as a function of $N$. Specifically, we assume that $p: \mathbb{N}\to (0,1)$ is a function such that $$\begin{aligned} \label{eq:p_assumptions} p(N) \lll 1 \lll Np(N).\end{aligned}$$ In other words, we assume that the random subset $A$ is drawn from a binomial model on $I_N$, where the inclusion probability $p(N)$ vanishes, but the size of the random subset $A$ almost surely grows (sublinearly in $N$) to infinity. It was conjectured in [@martin2006many Conjecture 21] that [\[eq:desired_prob_msr\]](#eq:desired_prob_msr){reference-type="eqref" reference="eq:desired_prob_msr"} would hold under such a model, which was confirmed by [@hegarty2009almost Theorem 1.1]. 
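Before stating the threshold result precisely, here is a small simulation sketch of the sparse regime (illustrative only; the function name `estimate_ratio` and the particular parameters are our own choices, not part of the argument): drawing $A$ from the binomial model with $p(N) = N^{-0.7}$, which decays faster than the threshold $N^{-1/2}$ identified below, the empirical ratio $|A-A|/|A+A|$ approaches the limiting value $2$ as $N$ grows.

```python
import random

def estimate_ratio(N, exponent=0.7, trials=10, seed=0):
    """Monte Carlo estimate of the ratio |A - A| / |A + A| (averaged over
    trials) when A is a binomial random subset of {0, ..., N} with inclusion
    probability p = N**(-exponent).  Illustrative sketch only."""
    rng = random.Random(seed)
    p = N ** (-exponent)
    total_sums, total_diffs = 0, 0
    for _ in range(trials):
        A = [x for x in range(N + 1) if rng.random() < p]
        total_sums += len({a + b for a in A for b in A})
        total_diffs += len({a - b for a in A for b in A})
    return total_diffs / total_sums

# With p(N) = N**(-0.7), below the threshold N**(-1/2), the estimate
# should already be close to the limiting value 2 for N around 10**6.
print(estimate_ratio(10**6))
```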
Specifically, this result identifies $p(N) = N^{-1/2}$ as a threshold function, in the sense of [@janson2011random]: $\frac{|A-A|}{|A+A|}$ converges in probability to $2$ if $p(N) = o(N^{-1/2})$, this ratio converges in probability to a value converging to $1$ as $c \to \infty$ when $p(N) = cN^{-1/2}$, and $\frac{|(A+A)^c|}{|(A-A)^c|}$ converges in probability to $2$ if $N^{-1/2} = o(p(N))$. This result was later generalized in the same paper to binary linear forms (see [@nathanson2007binary] for some motivation on why studying these objects is natural), with the specific statement given in [@hegarty2009almost Theorem 3.1]. It is natural to ask what happens if we include three or more summands: [@hegarty2009almost Conjecture 4.2] asks for the correct generalization of [@hegarty2009almost Theorem 3.1] to linear forms in $h \geq 3$ variables. Since then, progress in this direction has been elusive. The article [@hogan2013generalized], which studies generalized sumsets, proceeds under restrictive assumptions on the inclusion probability $p$. There, they identify $p(N) = N^{-\frac{h-1}{h}}$ as the appropriate threshold probability, but they failed to find a tractable form for the relevant limits in probability even for the setting of fast decay in this restricted setting. Our main objective in this paper is to break this lull and complete the story. In Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}, we assert that all relevant quantities are strongly concentrated about their expectations, and provide asymptotic formulae for these expectations. ## Notation and Conventions {#subsec:notation} For a natural number $N$ (we will include $0$ in the natural numbers), we will denote $[N] := \{1, \dots, N\}$ and (as defined earlier) will denote $I_N = \{0, \dots, N\}$. For a positive integer[^1] $h$, a *[$\mathbb{Z}$-linear form in $h$ variables]{style="color: red"}* is a function $L: \mathbb{Z}^h \to \mathbb{Z}$ of the form $$\begin{aligned} L(x_1, \dots, x_h) = u_1x_1 + \cdots + u_hx_h, \ \ \ u_i \in \mathbb{Z}_{\neq 0} \text{ for all } i \in [h].\end{aligned}$$ Throughout this paper, we understand intervals to be their intersections with $\mathbb{Z}$. Say we are given a $\mathbb{Z}$-linear form $L$ in $h$ variables with coefficients $u_1, \dots, u_h \in \mathbb{Z}_{\neq 0}$. We define $$\begin{aligned} n_L := \left|\{i \in [h]: u_i > 0\}\right|, \ \ \ \ \ \ \ \ s_L := \sum_{i: u_i > 0} u_i, \ \ \ \ \ \ \ \ d_L := \sum_{j: u_j < 0} |u_j|, \ \ \ \ \ \ \ \ m_L := s_L+d_L.\end{aligned}$$ Without loss of generality, we always assume that $u_1 \geq \cdots \geq u_n > 0 > u_{n+1} \geq \cdots \geq u_h$. If we are also given a subset $A \subseteq I_N$, we let $$\begin{aligned} \label{eq:L(A)} L(A) := \left\{u_1a_1 + \cdots + u_ha_h : a_i \in A \right\}.\end{aligned}$$ Notably, we observe that $L(A)$ corresponds to $A_{s_L,d_L}$ if $|u_i| = 1$ for all $i \in [h]$. From these definitions, it follows that $$\begin{aligned} \label{eq:L(A)_range} L(A) \subseteq \left[ -d_LN, s_LN \right].\end{aligned}$$ As such, we will define the *[complement of $L(A)$]{style="color: red"}* to be $$\begin{aligned} \label{eq:L(A)_complement} L(A)^c := \left[ -d_LN, s_LN \right] \setminus L(A).\end{aligned}$$ Thus, $|L(A)^c|$ is $|L(A)|$ subtracted from the maximum possible value of $|L(A)|$. The setting where the multisets $\left[u_1, \dots, u_h \right]$ and $\left[-u_1, \dots, -u_h \right]$ are equal will occasionally need to be treated separately in our arguments. 
We call such linear forms *[balanced]{style="color: red"}*. With the conventions and notation that we have established, it follows for a balanced linear form that $h$ is even, $n_L = h/2$, $s_L = d_L = m_L/2$, and $u_i = -u_{h-i+1}$ for all $i \in [h]$. We let $\mathfrak{S}_h$ denote the symmetric group on $h$ elements, and let $\sigma_\textup{rev}\in \mathfrak{S}_h$ denote the permutation given by $\sigma_\textup{rev}(i) = h-i+1$ for all $i \in [h]$. Let $\mathfrak{R}_h(L), \mathfrak{R}_h^\textup{rev}(L)$ denote subgroups of $\mathfrak{S}_h$ given by $$\begin{aligned} \mathfrak{R}_h(L) & := \left\{\sigma \in \mathfrak{S}_h : (u_{\sigma(1)}, \dots, u_{\sigma(h)}) = (u_1, \dots, u_h)\right\}, \label{eq:symmetric_subgroup} \\ \mathfrak{R}_h^\textup{rev}(L) & := \left\langle \mathfrak{R}_h(L), \sigma_\textup{rev}\right\rangle. \label{eq:symmetric_subgroup_balanced}\end{aligned}$$ Let $\theta_L := \left| \mathfrak{R}_h(L) \right| = \left| \mathfrak{R}_h^\textup{rev}(L) \right|/2$. We are interested in the number of distinct ways in which a value is generated in $L(A)$. Towards this, let $\approx_L$ denote the equivalence relation on $\mathbb{N}^h$ for which $(a_1, \dots, a_h) \approx_L (b_1, \dots, b_h)$ if and only if there exists $\sigma \in \mathfrak{R}_h(L)$ such that $$\begin{aligned} \label{eq:expression_relation_condition} (a_{\sigma(1)}, \dots, a_{\sigma(h)}) = (b_1, \dots, b_h).\end{aligned}$$ We think of $\approx_L$ as modding out redundancies (hence the notation [\[eq:symmetric_subgroup\]](#eq:symmetric_subgroup){reference-type="eqref" reference="eq:symmetric_subgroup"}, [\[eq:symmetric_subgroup_balanced\]](#eq:symmetric_subgroup_balanced){reference-type="eqref" reference="eq:symmetric_subgroup_balanced"}) related to permuting the order in which we pass elements into $L$. That $\approx_L$ is an equivalence relation and that $(a_1, \dots, a_h) \approx_L (b_1, \dots, b_h) \implies L(a_1, \dots, a_h) \approx_L L(b_1, \dots, b_h)$ both follow easily from $\mathfrak{R}_h(L)$ being a subgroup of $\mathfrak{S}_h$. For an equivalence class $\Lambda \in \mathbb{N}^h \setminus \! \approx_L$, let $L(\Lambda)$ denote the mapping of any representative of $\Lambda$ under $L$. We let $\mathfrak{D}_L(N,k)$ denote the collection of equivalence classes $\Lambda \in \mathbb{N}^h \setminus \! \approx_L$ satisfying[^2] $$\begin{aligned} & L(\Lambda) = -d_LN+k, & S(\Lambda) \subset I_N.\end{aligned}$$ We now demonstrate the sense in which balanced linear forms serve as an edge case in this article. Say that the $\mathbb{Z}$-linear form $L$ is balanced. It is not hard to see that $$\begin{aligned} L(a_1, \dots, a_h) = 0 \iff L(a_{\sigma_\textup{rev}(1)}, \dots, a_{\sigma_\textup{rev}(h)}) = 0,\end{aligned}$$ so $\mathfrak{D}_{L}(N,m_LN/2)$ fails to capture all redundancies amongst $h$-tuples related to permuting the order in which we pass elements into $L$. This leads us to consider the subgroup $\mathfrak{R}_h^\textup{rev}(L)$ of $\mathfrak{S}_h$, using which we may define the collection $\mathfrak{D}_L^\textup{rev}(N,m_LN/2)$ of equivalence classes on $h$-tuples in $\mathbb{N}^h$ mapping under $L$ to $0$ entirely analogously. We now define $$\begin{aligned} & \mathscr{R}_L(N,k) := \begin{cases} \mathfrak{R}_h^\textup{rev}(L) & L \text{ balanced, } k = m_LN/2, \\ \mathfrak{R}_h(L) & \text{otherwise}; \end{cases} \label{eq:redundancies} \\ & \mathscr{D}_L(N,k) := \begin{cases} \mathfrak{D}_L^\textup{rev}(N,m_LN/2) & L \text{ balanced, } k = m_LN/2, \\ \mathfrak{D}_L(N,k) & \text{otherwise}. 
\end{cases} \label{eq:distinct_expressions}\end{aligned}$$ Following the preceding discussion, we think of the subgroup $\mathscr{R}_L(N,k)$ of $\mathfrak{S}_h$ as capturing the redundancies related to permuting the order in which we pass elements into $L$ to generate $-d_LN+k$, and the collection $\mathscr{D}_L(N,k)$ as capturing the distinct (hence the notation) ways we can map $h$-tuples under $L$ to generate $-d_LN+k$. We call elements of $\mathscr{D}_L(N,k)$ *[$L$-expressions]{style="color: red"}*. For an $L$-expression with representative $(a_1, \dots, a_h)$, we call $L(\Lambda) := L(a_1, \dots, a_h)$ the *[$L$-evaluation]{style="color: red"}* of $\Lambda$, and we call $S(\Lambda) := \{a_1, \dots, a_h\}$ the *[ground set]{style="color: red"}* of $\Lambda$.[^3] These notions are clearly well-defined. In order to estimate or bound the probability that a particular collection of values is excluded from a sum or difference set, past papers on MSTD sets (e.g., see [@zhao2010counting; @lazarev2013distribution]) introduced *[forbiddance graphs.]{style="color: red"}* Specifically, for a collection of forbidden values, the corresponding forbiddance graph is the graph whose vertex set is $I_N$, with edges between vertices whose sum/difference is forbidden: the probability that no forbidden values are included in the sum/difference set is then equal to the probability that, if we choose each vertex of the forbiddance graph with probability $p(N)$, an independent set is drawn. We generalize by letting the *[forbiddance hypergraph $\mathcal G_L(N,k)$]{style="color: red"}* be the hypergraph on the vertex set $I_N$ such that $e \subset I_N$ is a hyperedge of $\mathcal G_L(N,k)$ if and only if $e = S(\Lambda)$ for some $\Lambda \in \mathscr{D}_L(N,k)$. This article studies what happens when we apply $\mathbb{Z}$-linear forms to random subsets $A \subseteq I_N$ drawn from a binomial model: we include each element from $I_N$ in $A$ with probability $p(N): \mathbb{N}\to (0,1)$, which we take to be a function satisfying [\[eq:p_assumptions\]](#eq:p_assumptions){reference-type="eqref" reference="eq:p_assumptions"}. Formally, we set $\Omega_N := \{0, 1\}^{N+1}$, so subsets $A \subseteq I_N$ correspond exactly with elements $(a_0, \dots, a_N) \in \Omega_N$ in the natural way (that is, $a_i = 1$ corresponds to $i \in A$ for all $i \in I_N$): we refer to $A$ and $(a_0, \dots, a_N)$ interchangeably. We will let $\mu$ denote the product of $N+1$ instances of the Bernoulli measure with parameter $p$, so that we are working in the probability space $\big(\Omega_N, 2^{\Omega_N}, \mu\big)$. We use $d_\textup{TV}(\mu_1, \mu_2)$ to denote the total variation distance between probability measures $\mu_1$ and $\mu_2$ defined on the same space. For random variables $X, Y$ defined on the same space, we will write $d_\textup{TV}(X, Y)$ as shorthand for the total variation distance between the laws of $X$ and $Y$. We now introduce some random variables and stochastic processes which will be of central importance throughout the article. For $\Lambda \in \mathscr{D}_L(N,k)$, we let $X_\Lambda$ be the Bernoulli random variable corresponding to the event $S(\Lambda) \subseteq A$, so $p_\Lambda := \mathbb{E}[X_\Lambda] = p^{|S(\Lambda)|}$. We also let $Y_\Lambda :\stackrel{d}{=} \textup{Pois}(p_\Lambda)$ be mutually independent. 
We let $\mathscr{X}_L(N,k) := (X_\Lambda)_{\Lambda \in \mathscr{D}_L(N,k)}$ be the dependent Bernoulli process, and let $\mathscr{Y}_L(N,k) := (Y_\Lambda)_{\Lambda \in \mathscr{D}_L(N,k)}$ be the Poisson process on $\mathscr{D}_L(N,k)$ with intensity $p_{\mathscr{D}_L(N,k)}$: this is a measure on $2^{\mathscr{D}_L(N,k)}$ defined by $p_{\mathscr{D}_L(N,k)}(B) := \sum_{\Lambda \in B} p_\Lambda$ for $B \subseteq \mathscr{D}_L(N,k)$. Similarly, for $e \in E(\mathcal G_L(N,k))$, we let $X_{e}$ be the Bernoulli random variable corresponding to the event $V(e) \subseteq A$, so $p_{e} := \mathbb{E}[X_{e}] = p^{|V(e)|}$. We also let $Y_{e} :\stackrel{d}{=} \textup{Pois}(p_{e})$ be mutually independent. Let $\mathfrak{X}_k := (X_{e})_{e \in E(\mathcal G_L(N,k))}$ be the dependent Bernoulli process, and let $\mathfrak{Y}_k := (Y_{e})_{e \in E(\mathcal G_L(N,k))}$ be the Poisson process on $E(\mathcal G_L(N,k))$ with intensity $p_{\mathcal G_L(N,k)}$: this is a measure on $2^{E(\mathcal G_L(N,k))}$ defined by $p_{\mathcal G_L(N,k)}(B) := \sum_{\Lambda \in B} p_\Lambda$ for $B \subseteq E(\mathcal G_L(N,k))$. The asymptotic formulae that we derive in Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii) involve the density of the Irwin-Hall distribution of order $h$ (e.g., see [@feller1991introduction; @laplace1814theorie]), which is the distribution for the sum of $h$ independent random variables uniformly distributed on $[0,1]$. We denote the density of the Irwin-Hall distribution of order $h$ at a real number $x$ by $\mathsf{IH}_h(x)$. Letting $x_+ := \max\{0, x\}$ denote the positive part of the real number $x$, this density is given by $$\begin{aligned} \label{eq:irwin_hall_density} \mathsf{IH}_h(x) := \begin{cases} \frac{1}{(h-1)!}\sum_{j=0}^h (-1)^j \binom{h}{j}(x-j)_+^{h-1} & x \leq h, \\ 0 & x > h. \end{cases}\end{aligned}$$ We now record those conventions that we assume, unless stated otherwise, throughout the rest of the paper. Throughout the analysis, we assume that we have fixed an arbitrary integer $h \geq 2$ and a $\mathbb{Z}$-linear form $L: \mathbb{Z}^h \to \mathbb{Z}$ with coefficients $u_1, \dots, u_h \in \mathbb{Z}_{\neq 0}$ satisfying $\gcd(u_1, \dots, u_h) = 1$. We employ standard asymptotic notation throughout the article: for sake of completeness, we describe this notation in Appendix [9](#sec:asymptotic_notation){reference-type="ref" reference="sec:asymptotic_notation"}. We also assume that $N$ is a large positive integer which tends to infinity, and that all asymptotic notation is with respect to $N$. As such, we henceforth abbreviate much of the notation introduced here by dropping arguments or subscripts of $L$, $N$, and $h$. In this section, we were careful to make it clear what the quantities we defined depended on; whenever we use such an abbreviation, it will be unambiguous what it is referring to. We omit floor and ceiling symbols when doing so does not affect asymptotics. In Sections [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}-[7](#sec:point_processes){reference-type="ref" reference="sec:point_processes"}, we will assume that [\[eq:p_assumptions\]](#eq:p_assumptions){reference-type="eqref" reference="eq:p_assumptions"} holds, and frequently abbreviate $p(N)$ to $p$. ## Main Results {#subsec:main_results} Our main interest is to understand what happens to $|L(A)|$ and $|L(A)^c|$ as we vary the rate at which the inclusion probability $p(N)$ decays as $N$ grows. 
Our techniques yield two broad classes of results. ### Phase Transition Our main findings can be summarized by the following theorem. **Theorem 1**. *Let $p : \mathbb{N}\to (0,1)$ be a function satisfying [\[eq:p_assumptions\]](#eq:p_assumptions){reference-type="eqref" reference="eq:p_assumptions"}. Fix an integer $h \geq 2$ and a $\mathbb{Z}$-linear form $L: \mathbb{Z}^h \to \mathbb{Z}$ with coefficients $u_1, \dots, u_h \in \mathbb{Z}_{\neq 0}$ such that $\gcd(u_1, \dots, u_h) = 1$. Let $A \subseteq I_N$ be a random subset where each element of $I_N$ is independently included in $A$ with probability $p(N)$. The following three situations arise.* 1. *If $p(N) \lll N^{-\frac{h-1}{h}}$, then $$\begin{aligned} \label{eq:fast_decay} |L(A)| \sim \frac{\left( N \cdot p(N) \right)^h}{\theta_L}. \end{aligned}$$* 2. *If $p(N) = cN^{-\frac{h-1}{h}}$ for some constant $c > 0$, then $$\begin{gathered} |L(A)| \sim \left(\sum_{i=1}^h |u_i| - 2\int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx\right)N, \label{eq:critical_decay} \\ |L(A)^c| \sim \left(2\int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx \right) N. \label{eq:critical_decay_complement} \end{gathered}$$* 3. *If $p(N) \ggg N^{-\frac{h-1}{h}}$, then $$\begin{aligned} \label{eq:slow_decay} |L(A)^c| \sim \frac{2 \cdot \Gamma\left( \frac{1}{h-1} \right) \sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^{h} |u_i|}}{(h-1) \cdot p(N)^{\frac{h}{h-1}}}. \end{aligned}$$* Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}, which identifies $N^{-\frac{h-1}{h}}$ as a threshold function (in the sense of [@janson2011random]), asserts that the following macroscopic behavior holds. It is not hard to see that the number of $L$-expressions we can form in $L(A)$ is $\sim |A|^h/\theta_L \sim (Np)^h/\theta_L$: if we are below the threshold, it follows that $L(A)$ is asymptotically almost surely "basically Sidon,\" i.e., that almost all pairs of such $L$-expressions generated in $L(A)$ correspond to distinct values. If we are above the threshold, it follows from $p(N)^{-\frac{h}{h-1}} \lll N$ that asymptotically almost surely, almost all possible values are generated in $L(A)$. On the threshold, as $c$ increases from zero to infinity, it follows from the expressions given in Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii) that the asymptotic almost sure behavior of $|L(A)|$ shifts from $L(A)$ missing almost all sums to $L(A)$ having almost all sums. Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} may also be interpreted as corroboration for some of the ideas presented in our opening discussion on MSTD sets. Indeed, say that we are given two $\mathbb{Z}$-linear forms $L, \Tilde{L}: \mathbb{Z}^h \to \mathbb{Z}$. It follows in all three cases of Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} that if $\theta_L < \theta_{\Tilde{L}}$, then $|L(A)|$ is asymptotically almost surely larger than $|\Tilde{L}(A)|$. 
In particular, consider the setting in which the coefficients of $L$ and $\Tilde{L}$ all have absolute value $1$, and it is true for both $L$ and $\Tilde{L}$ that there are at least as many instances of $1$ as $-1$ amongst its coefficients. Then this statement on the asymptotic relative size of $L(A)$ and $\Tilde{L}(A)$ is consistent with the heuristic that for two generalized sumsets which have at least as many pluses as minuses, the generalized sumset with more minus signs will usually be larger. Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} also completely resolves [@hegarty2009almost Conjecture 4.2]. Specifically, replacing their conjectured threshold function of $N^{-\frac{1}{h}}$ with $N^{-\frac{h-1}{h}}$, we derive their conjectured statement in the regime where $p(N)$ is below the threshold. By taking $$\begin{aligned} R(x_0, \dots, x_h) & = \frac{x_0^h}{\theta_L \cdot \prod_{i=1}^h |x_i|}, \\ g_{u_1, \dots, u_h}(y) & = \int_0^{\sum_{i=1}^h |u_i|} \exp \left(- y \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)\right) dx,\end{aligned}$$ we observe that Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii) establishes [@hegarty2009almost Equation (4.4)], so we also derive their conjectured statement in the regime where $p(N)$ is on the threshold. In the regime where $p(N)$ is above the threshold, we disprove their conjecture and replace it with the correct statement. By considering $\mathbb{Z}$-linear forms $L$ for which all the coefficients $u_1, \dots, u_h$ have absolute value $1$, we extract the following result, which is Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} specifically for the setting of generalized sumsets. **Theorem 2**. *Let $p : \mathbb{N}\to (0,1)$ be any function satisfying [\[eq:p_assumptions\]](#eq:p_assumptions){reference-type="eqref" reference="eq:p_assumptions"}. Fix nonnegative integers $s, d, h$ such that $h \geq 2$ and $s + d = h$. Let $A \subseteq I_N$ be a random subset where each element of $I_N$ is independently included in $A$ with probability $p(N)$. Then the following three situations arise.* 1. *If $p(N) \lll N^{-\frac{h-1}{h}}$, then $$\begin{aligned} |A_{s,d}| \sim \frac{(N\cdot p(N))^h}{s!d!}. \end{aligned}$$* 2. *If $p(N) = cN^{-\frac{h-1}{h}}$ for some constant $c > 0$, then $$\begin{aligned} & |A_{s,d}| \sim \left(h - 2\int_0^{h/2} \exp \left(- \frac{c^h}{s!d!} \mathsf{IH}_h(x) \right) dx\right)N, & |A_{s,d}^c| \sim \left(2\int_0^{h/2} \exp \left(- \frac{c^h}{s!d!} \mathsf{IH}_h(x) \right) dx\right)N. \end{aligned}$$* 3. *If $p(N) \ggg N^{-\frac{h-1}{h}}$, then $$\begin{aligned} |A_{s,d}^c| \sim \frac{2 \cdot \Gamma\left( \frac{1}{h-1} \right) \sqrt[h-1]{(h-1)! s! d!}}{(h-1) \cdot p(N)^{\frac{h}{h-1}}}. \end{aligned}$$* Theorem [Theorem 2](#thm:generalized_sumsets){reference-type="ref" reference="thm:generalized_sumsets"} generalizes [@hegarty2009almost Theorem 1.1] from sum and difference sets with two summands to generalized sumsets with $h$ summands, under any number $0 \leq s \leq h$ of sums. It is also naturally of interest to understand how the relative asymptotic almost sure sizes of generalized sumsets and their complements change when we modify their structure, i.e., the number of sums amongst their $h$ summands. 
In this direction, we have Corollary [Corollary 3](#cor:generalized_sumsets_ratios){reference-type="ref" reference="cor:generalized_sumsets_ratios"} as an immediate corollary of Theorem [Theorem 2](#thm:generalized_sumsets){reference-type="ref" reference="thm:generalized_sumsets"}(i, iii). We note that Corollary [Corollary 3](#cor:generalized_sumsets_ratios){reference-type="ref" reference="cor:generalized_sumsets_ratios"}(i) is a strictly more general statement than the first part of [@hogan2013generalized Theorem 1.2]. We omit the corresponding result for Theorem [Theorem 2](#thm:generalized_sumsets){reference-type="ref" reference="thm:generalized_sumsets"}(ii) (which would yield a strictly more general statement than the second part of [@hogan2013generalized Theorem 1.2]), as it is less natural. **Corollary 3**. *Fix nonnegative integers $s_1, d_1, s_2, d_2, h$ such that $h \geq 2$ and $s_1 + d_1 = s_2 + d_2 = h$. Let $A \subseteq I_N$ be a random subset where each element of $I_N$ is independently included in $A$ with probability $p(N)$. Then the following two situations arise.* 1. *If $p(N) \lll N^{-\frac{h-1}{h}}$, then $$\begin{aligned} & \frac{|A_{s_1,d_1}|}{|A_{s_2,d_2}|} \sim \frac{s_2!d_2!}{s_1!d_1!}, & \frac{|A_{s_1,d_1}^c|}{|A_{s_2,d_2}^c|} \sim 1. \end{aligned}$$* 2. *If $p(N) \ggg N^{-\frac{h-1}{h}}$, then $$\begin{aligned} & \frac{|A_{s_1,d_1}|}{|A_{s_2,d_2}|} \sim 1, & \frac{|A_{s_1,d_1}^c|}{|A_{s_2,d_2}^c|} \sim \sqrt[h-1]{\frac{s_1!d_1!}{s_2!d_2!}}. \end{aligned}$$* ### Limiting Point Process Behavior In order to estimate the probability that a particular candidate value is included in $L(A)$, which is the key step in our expectation computations, we invoke the Stein-Chen method based on dependency graphs: see, for example, [@arratia1989two; @arratia1990poisson; @chen2004normal; @chen2004stein; @chen2011poisson]. As such, it is natural to ask if there is some kind of interesting local point process limit. In this direction, the tools and arguments used to prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} quickly lend themselves to the following result, which presents a phase transition concerning the distribution of the dependent Bernoulli process corresponding to the number of distinct representations in $L(A)$ of a given value. **Theorem 4**. *Assume the setup of Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. Let $C > 0$ be a constant. The following two situations arise.* 1. *If $p(N) \lll N^{-\frac{h-2}{h-1}}$, then uniformly over $k \in \left[ 0, mN \right]$, $d_\textup{TV}\left(\mathscr{X}_k, \mathscr{Y}_k \right) \lll 1$.* 2. *If $p(N) \gtrsim N^{-\frac{h-2}{h-1}}$, then uniformly over $k \in \left[CN, (m-C)N\right]$, $d_\textup{TV}\left(\mathscr{X}_k, \mathscr{Y}_k \right) = \Omega(1)$.* We may interpret Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}, which identifies $p(N) = N^{-\frac{h-2}{h-1}}$ as a threshold function for this Poisson convergence (observe that this dominates the threshold function $N^{-\frac{h-1}{h}}$ in Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}) holding over $k \in \left[ 0, mN \right]$, as follows. If we are below the threshold, we may think of the number of ways that $-dN+k$ is generated in $L(A)$ as being modeled by a collection of several independent ways of representing $-dN+k$ under $L$. 
On the threshold, the extent to which this interpretation is faithful to the truth weakens as the constant $c$ grows, until it collapses everywhere outside of the fringes of $\left[ 0, mN \right]$ as we continue to move above the threshold. The condition $\gcd(u_1, \dots, u_h) = 1$ assumed in the statements of Theorems [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} and [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"} can be relaxed. Indeed, let $L(x_1, \dots, x_h) = u_1x_1 + \cdots + u_hx_h$ be a $\mathbb{Z}$-linear form on $h$ variables for which $\gcd(u_1, \dots, u_h) > 1$. Let $\Tilde{L}(x_1, \dots, x_h) = v_1x_1 + \cdots + v_hx_h$ be the $\mathbb{Z}$-linear form on $h$ variables for which $v_i = \frac{u_i}{\gcd(u_1, \dots, u_h)}$ for all $i \in [h]$. It is straightforward to show that $$\begin{aligned} \label{eq:expression_collection_not_coprime} \mathscr{D}_L(N,k) = \begin{cases} \emptyset & \gcd(u_1, \dots, u_h) \nmid k, \\ \mathscr{D}_{\Tilde{L}}\left(N, k/\gcd(u_1, \dots, u_h)\right) & \gcd(u_1, \dots, u_h) \mid k, \end{cases} \end{aligned}$$ and that $$\begin{aligned} & |L(A)| = |\Tilde{L}(A)|, & |L(A)^c| = |\Tilde{L}(A)^c| - \left( \gcd(u_1, \dots, u_h) - 1 \right)\sum_{i=1}^h |u_i|. \end{aligned}$$ In this sense, we will have solved the problem for all $\mathbb{Z}$-linear forms once we have solved it under the setting in which $\gcd(u_1, \dots, u_h) = 1$. We have thus assumed this condition in our main results. By considering the case $h=2$, we observe that Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(i) and Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(iii) are respectively consistent with [@hegarty2009almost Theorem 3.1(i)] and [@hegarty2009almost Theorem 3.1(iii)]. Letting $L(x_1, x_2) = u_1x_1 + u_2x_2$ be a $2$-linear form for which $|u_1| \geq |u_2|$, $\gcd(u_1, u_2) = 1$, and $(u_1, u_2) \neq (1,1)$, combining Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii) with [@hegarty2009almost Theorem 3.1(ii)] yields the following identity, which appears to be new: $$\begin{gathered} 2\int_0^{\frac{|u_1| + |u_2|}{2}} \exp \left(- \frac{c^2 \sum_{t_1=0}^{|u_1|-1} \sum_{t_2=0}^{|u_2|-1} \mathsf{IH}_2\left(x - t_1 - t_2\right)}{|u_1u_2|} \right) dx \\ = \frac{2|u_1u_2|\left(1-e^{-c^2/|u_1|}\right)}{c^2} + \left( |u_1| - |u_2| \right)e^{-c^2/|u_1|}. \end{gathered}$$ The validity of this identity is easy to confirm if $|u_1| = |u_2| = 1$. Finally, we note that there are other natural choices for the definitions of our main objects of study in Subsection [1.2](#subsec:notation){reference-type="ref" reference="subsec:notation"}: we have proceeded as such to be consistent with the presentation of [@hegarty2009almost], which pursues the $h=2$ case of our problem. We mention some of these alternatives here, and discuss the changes they would induce to our main results. One natural modification is to define a coarser equivalence relation on $\mathbb{N}^h$ for which two $h$-tuples are equivalent if and only if they have the same ground set, and then proceed exactly as before. We may think of this as studying minimal subsets of $I_N$ which generate a candidate sum $-dN+k$ in $L(A)$ instead of distinct representations in $L(A)$ generating $-dN+k$. 
By tracing our arguments, it can be shown that in this setting, Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"} translates to the following result concerning a dependent Bernoulli process on hypergraphs, also with $p(N) = N^{-\frac{h-2}{h-1}}$ as the corresponding threshold function. **Theorem 5**. *Assume the setup of Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}. The following two situations arise.* 1. *If $p(N) \lll N^{-\frac{h-2}{h-1}}$, then uniformly over $k \in \left[ 0, mN \right]$, $d_\textup{TV}\left(\mathfrak{X}_k, \mathfrak{Y}_k \right) \lll 1$.* 2. *If $p(N) \gtrsim N^{-\frac{h-2}{h-1}}$, then uniformly over $k \in \left[CN, (m-C)N \right]$, $d_\textup{TV}\left(\mathfrak{X}_k, \mathfrak{Y}_k \right) = \Omega(1)$.* Another natural modification is to instead define $L(A)$ by $$\begin{aligned} \label{eq:L(A)_distinct} L(A) := \left\{u_1a_1 + \cdots + u_ha_h : a_i \in A, a_1, \dots, a_h \text{ distinct} \right\}.\end{aligned}$$ By tracing our arguments, it can be checked that all of our main results follow over without issue to this setting (indeed, much of the analysis becomes simpler, as we no longer have to deal with repeated summands). Notably, if we proceed under this definition, the forbiddance hypergraphs we define will be $h$-regular, and Theorem [Theorem 5](#thm:L_hypergraph_hyperedge_process){reference-type="ref" reference="thm:L_hypergraph_hyperedge_process"} translates to a result concerning stochastic processes on regular hypergraphs. The rest of the paper is organized as follows. In Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}, we establish much of the machinery that we will invoke in the proofs of our main results. In Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, we perform some standard computations that we will make use of in later sections. In Sections [4](#sec:fast_decay){reference-type="ref" reference="sec:fast_decay"}-[6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, we prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. In Section [7](#sec:point_processes){reference-type="ref" reference="sec:point_processes"}, we prove Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}. We conclude in Section [8](#sec:future_directions){reference-type="ref" reference="sec:future_directions"} with directions for future research. # Preliminaries {#sec:preliminaries} In this section, we gather several results that we invoke in the proofs of our main results. ## Asymptotic Enumeration In this subsection, we present a number of results concerning the asymptotics of $L$-expressions which we invoke later in our arguments. The enumerative combinatorial arguments we used to prove these results do not provide much insight for the rest of the main body of the paper. Thus, we simply give the statements here, and defer proofs of these results to Appendix [10](#sec:preliminaries_proofs){reference-type="ref" reference="sec:preliminaries_proofs"} to avoid distracting from the flavor of this article. We first introduce the following quantities, which we abbreviate as $\lambda_k$, and which will resurface in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}. 
Set $$\begin{aligned} \label{eq:lambda_k} \lambda_L(N,k) := \frac{\sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(k/N - \sum_{i=1}^h t_i\right)}{2^{\mathbf{1}\left\{ L \text{ balanced, } k = mN/2 \right\}} \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}.\end{aligned}$$ **Lemma 6**. *Fix $g: \mathbb{N}\to \mathbb{N}$ satisfying $1 \lll g(N) \lll N$. Uniformly over $k \in \left[0, g(N)\right]$, $$\begin{aligned} \label{eq:number_of_subsets_fringe_asymptotics} |\mathscr{D}_k| = |\mathscr{D}_{mN-k}| \lll N^{h-1}. \end{aligned}$$ Uniformly over $k \in \left[g(N), mN/2 \right]$, $$\begin{aligned} \label{eq:number_of_subsets_asymptotics} |\mathscr{D}_k| = |\mathscr{D}_{mN-k}| \sim \lambda_k N^{h-1}. \end{aligned}$$* **Lemma 7**. *The following statements hold uniformly over $k \in \left[ 0, mN/2 \right]$.* 1. *The number of $L$-expressions $\Lambda \in \mathscr{D}_k$ such that $|S(\Lambda)| = h$ is $\gtrsim k^{h-1}$.* 2. *The number of $2$-tuples $\left(\Lambda(1), \Lambda(2) \right) \in \mathscr{D}_k^2$ such that $$\begin{aligned} & S(\Lambda(1)) \cap S(\Lambda(2)) \neq \emptyset, & \left| S(\Lambda(1)) \cup S(\Lambda(2)) \right| = 2h-1 \end{aligned}$$ is $\gtrsim k^{2h-3}$.* **Lemma 8**. *Fix positive integers $t, \ell$ such that $t \in [4]$ and $\ell \leq \max\{h,th-1\}$. Uniformly over $k \in \left[0, mN \right]$, the number of $t$-tuples $\left(\Lambda(1), \dots, \Lambda(t) \right) \in \mathscr{D}_k^t$ such that $\Lambda(1), \dots, \Lambda(t)$ satisfy $$\begin{aligned} \label{eq:L_expressions_tuples_upper_bounds} & S(\Lambda_i) \cap \left( \bigcup_{j\neq i} S(\Lambda_j) \right) \neq \emptyset \text{ for all } i \in [t], & \left|\bigcup_{i=1}^t S(\Lambda_i)\right| = \ell \end{aligned}$$ is $\lesssim k^{\ell-\lceil (\ell+1)/h \rceil}$ if $t \geq 2$, and $\lesssim k^{\ell-1}$ if $t=1$.* ## Poisson Approximation Over the course of our calculations in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, we will make use of the following definitions and theorems. Note that it is very natural to suspect some kind of Poisson weak limit in this setting: loosely speaking, the possible ways in which a candidate sum $-dN+k$ may be generated in $L(A)$ correspond to a collection of weakly dependent events, rendering the Stein-Chen method a promising line of attack. **Definition 9** ([@barbour1992poisson], Equation (1.1)). Let $I$ be a finite index set. For each $\alpha \in I$, let $X_\alpha$ be a Bernoulli random variable. We say that the random variables $\left\{X_\alpha: \alpha \in I\right\}$ are *positively related* if, for every $\alpha \in I$, there exist random variables $\left\{Y_{\beta \alpha}: \beta \in I \setminus \{\alpha\}\right\}$ defined on the same probability space such that $$\begin{aligned} & \mathcal L\left(Y_{\beta \alpha}; \beta \in I\right) = \mathcal L\left(X_\beta; \beta \in I \ | \ X_\alpha = 1 \right), & Y_{\beta \alpha} \geq X_\beta \text{ for all } \beta \in I \setminus \{\alpha\}. \end{aligned}$$ **Theorem 10** ([@arratia1989two], Theorem 1). *Let $I$ be a finite index set. For each $\alpha \in I$, let $X_\alpha$ be a Bernoulli random variable with parameter $p_\alpha$, and choose a subset $B_\alpha$ of $I$. 
Define $$\begin{aligned} & b_1 := \sum_{\alpha \in I} \sum_{\beta \in B_\alpha} p_\alpha p_\beta, \\ & b_2 := \sum_{\alpha \in I} \sum_{\alpha \neq \beta \in B_\alpha} \Pr[X_\alpha X_\beta = 1], \\ & b_3 := \sum_{\alpha \in I} \mathbb{E}\left[ \left| \mathbb{E}[X_\alpha \ | \ X_\beta: \beta \notin B_\alpha] - p_\alpha \right| \right]. \end{aligned}$$ Let $W := \sum_{\alpha \in I} X_\alpha$, and let $Z :\stackrel{d}{=} \textup{Pois}(\lambda)$ with rate $\lambda := \mathbb{E}[W] = \sum_{\alpha \in I} p_\alpha$. Then $$\begin{aligned} \left|\Pr[W = 0] - e^{-\lambda} \right| < \min\{1, \lambda^{-1}\}(b_1+b_2+b_3). \end{aligned}$$* Whenever Theorem [Theorem 10](#thm:stein_chen){reference-type="ref" reference="thm:stein_chen"} is invoked in this article, we will take $B_\alpha$ to be the *dependency set* of $\alpha$ for every $\alpha \in I$: $\beta \in B_\alpha$ if and only if $X_\alpha$ and $X_\beta$ are dependent. With this choice of the subsets $B_\alpha$, we may interpret $b_1$ as measuring the total size of the dependence sets and $b_2$ as twice the expected number of dependent pairs that arise. It also follows immediately that $b_3 = 0$ (in general, $b_3$ might be thought of as measuring how honest we were in selecting $B_\alpha$ to be the dependency set of $\alpha$). We now state the theorems that we will invoke when proving our results at the process level. **Theorem 11** ([@chen2011poisson], Corollary 2.3). *Assume the setup of Theorem [Theorem 10](#thm:stein_chen){reference-type="ref" reference="thm:stein_chen"}, with the additional assumption that for each $\alpha \in I$, $B_\alpha$ is the dependency set of $\alpha$. For each $\alpha \in I$, let $Y_\alpha :\stackrel{d}{=} \textup{Pois}(p_\alpha)$ be mutually independent. Let $\mathscr{X} := (X_\alpha)_{\alpha \in I}$ denote the dependent Bernoulli process, and $\mathscr{Y} := (Y_\alpha)_{\alpha \in I}$ denote the Poisson process on $I$ with intensity $p_{(\cdot)}$. Then $$\begin{aligned} d_\textup{TV}\left(\mathscr{X}, \mathscr{Y}\right) \leq \min\{1, \lambda^{-1} \} \left(b_1 + b_2\right). \end{aligned}$$* **Theorem 12** ([@barbour1992poisson], Theorem 3.E). *Let $\left\{X_\alpha: \alpha \in I\right\}$ be positively related, with $p_\alpha := \mathbb{E}[X_\alpha]$. Let $W := \sum_{\alpha \in I} X_\alpha$, and let $\lambda := \mathbb{E}[W]$. Set $$\begin{aligned} & \epsilon := \frac{\textup{Var}(W)}{\lambda} - 1,& \gamma := \frac{\mathbb{E}\left[(W-\mathbb{E}[W])^4 \right]}{\lambda} - 1, \end{aligned}$$ and also[^4] $$\begin{aligned} \psi := \left(\frac{\gamma}{\lambda \epsilon}\right)_+ + 3\epsilon + \frac{\sum_{\alpha \in I} p_\alpha^2}{\lambda^2 \epsilon} + \frac{3\textup{Var}(W)\max_{\alpha \in I}p_\alpha}{\lambda\epsilon}. \end{aligned}$$ If $\epsilon > 0$, then $$\begin{aligned} d_\textup{TV}\left(W, \textup{Pois}(\lambda) \right) \geq \frac{\epsilon}{11+3\psi}. \end{aligned}$$* ## Martingale Concentration When proving strong concentration results in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, we will rely on the martingale machinery developed in [@vu2002concentration]. 
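Informally, the role of this machinery in our setting is to control $|L(A)^c|$ through the effect that changing a single element of $A$ has on it. The following hypothetical Python sketch makes that one-element effect concrete; the coefficients $(1,1,-1)$, the values of $N$ and $p$, and the helper names are arbitrary choices for illustration, and the complement is taken inside the integer range $[-dN, sN]$. The functions $C_n$, $V_n$, and $\Delta_n$ introduced next are conditional-expectation versions of precisely this kind of quantity.

```python
import random

def L_values(A, coeffs):
    # All values u_1*a_1 + ... + u_h*a_h with each a_i ranging over A (repeated summands allowed).
    vals = {0}
    for u in coeffs:
        vals = {v + u * a for v in vals for a in A}
    return vals

def missing_count(A, coeffs, N):
    # |L(A)^c|: values in the integer range [-dN, sN] that L does not attain on A.
    s = sum(u for u in coeffs if u > 0)
    d = -sum(u for u in coeffs if u < 0)
    attained = L_values(A, coeffs)
    return sum(1 for v in range(-d * N, s * N + 1) if v not in attained)

rng = random.Random(7)
N, p, coeffs = 300, 0.2, (1, 1, -1)   # illustrative choices only
A = {a for a in range(N + 1) if rng.random() < p}

# Effect on |L(A)^c| of forcing a single element n out of / into A.
for n in (0, N // 2, N):
    effect = missing_count(A - {n}, coeffs, N) - missing_count(A | {n}, coeffs, N)
    print(f"n = {n:>3}: effect of toggling n on |L(A)^c| = {effect}")
```

For parameters like these one expects the effect to vanish for $n$ well inside $I_N$ and to be largest near the endpoints; the middle/fringe decompositions used in the concentration proofs below quantify exactly this.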
For each $A \in \Omega$, $n \in I_N$, and $x \in \{0,1\}$, we define the functions[^5] $$\begin{aligned} C_n(x,A) & := \left| \mathbb{E}\left[ |L(A)^c| \ \Big| \ a_0, \dots, a_{n-1}, a_n = x\right] - \mathbb{E}\left[ |L(A)^c| \ \Big| \ a_0, \dots, a_{n-1}\right] \right|, \\ V_n(A) & := \int_0^1 C_n^2(x,A) \ d\mu_n(x) = (1-p)C_n^2(0,A) + pC_n^2(1,A).\end{aligned}$$ We also define the functions $$\begin{aligned} & C(A) := \mathop{\max_{n \in I_N,}}_{x \in \{0,1\}} C_n(x,A), & V(A) := \sum_{n=0}^N V_n(A).\end{aligned}$$ With this setup in place, we can extract the following result. **Theorem 13** ([@vu2002concentration], Lemma 3.1). *For any $\lambda, V, C > 0$ such that $\lambda \leq 4V/C^2$, we have that $$\begin{aligned} \Pr\left( \left| |L(A)^c| - \mathbb{E}\left[ |L(A)^c| \right] \right| \geq \sqrt{\lambda V} \right) \leq 2e^{-\lambda/4} & + \Pr(C(A) \geq C) + \Pr(V(A) \geq V). \end{aligned}$$* To make some simplifications, we also introduce, for each $A \in \Omega$ and $n \in I_N$, the function $$\label{eq:delta_function} \begin{aligned} \Delta_n(A):= & \ \mathbb{E}\left[ |L(A)^c| \ \Big| \ a_0, \dots, a_{n-1}, a_n = 0\right] - \mathbb{E}\left[ |L(A)^c| \ \Big| \ a_0, \dots, a_{n-1}, a_n = 1\right] \\ = & \ \mathbb{E}\left[ \big| L\left(A \setminus \{n\}\right)^c \big| - \big| L\left(A \cup \{n\}\right)^c \big| \ \Big| \ a_0, \dots, a_{n-1} \right]. \end{aligned}$$ Here, $\Delta_n(A)$ is the change in the conditional expectation of $|L(A)^c|$, conditioning on $a_0, \dots, a_n$, when we flip the value of $a_n$ from $1$ to $0$. Proceeding as in [@hegarty2009almost Equations (2.34)-(2.40)] now yields that we can write the functions $C(A)$ and $V(A)$ in terms of the functions $\Delta_n(A)$, namely as $$\begin{aligned} & C(A) = (1-p) \max_{n \in I_N} \Delta_n(A) \sim \max_{n \in I_N} \Delta_n(A), \label{eq:C(A)_reformulation}\\ & V(A) = p(1-p) \sum_{n=0}^N \left( \Delta_n(A) \right)^2 \sim p\sum_{n=0}^N \left( \Delta_n(A) \right)^2. \label{eq:V(A)_reformulation}\end{aligned}$$ We will make use of these functions in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}. # Standard Computations {#sec:computations} In this section, we isolate those computations which we will need later. Our computations are over values $k \in \left[ 0, mN/2 \right]$. It is not hard to show (e.g., via [\[eq:number_of_subsets_left_right\]](#eq:number_of_subsets_left_right){reference-type="eqref" reference="eq:number_of_subsets_left_right"} with a standard argument) that $W_k \stackrel{d}{=} W_{mN-k}$. Indeed, under this correspondence, it holds at the process level that we have the equality in distribution $$\begin{aligned} \label{eq:left_right_process_equivalence} \left( X_\Lambda : \Lambda \in \mathscr{D}_k \right) \stackrel{d}{=} \left( X_\Lambda : \Lambda \in \mathscr{D}_{mN-k}\right).\end{aligned}$$ These observations allow us to extend these computations to values $k \in \left[ mN/2, mN \right]$. We now fix some $k \in \left[ 0, mN/2 \right]$. We take $\mathscr{D}_k$ as our finite index set of interest. Up to constants, we bound the expectation, variance, third moment, and fourth central moment of $W_k$. All asymptotic statements are to be understood as holding uniformly over all $k \in \left[ 0, mN/2 \right]$. Asymptotic statements involving a condition on $k$ are to be understood as holding uniformly over such $k$ when working over a regime where the condition applies. 
We have $$\begin{aligned} \mu_k := \mathbb{E}[W_k] & = \sum_{\Lambda \in \mathscr{D}_k} p^{|S(\Lambda)|} = \sum_{\ell=1}^h \mathop{\sum_{\Lambda \in \mathscr{D}_k}}_{|S(\Lambda)| = \ell} p^\ell \begin{cases} \gtrsim k^{h-1}p^h \\ \lesssim \sum_{\ell=1}^h k^{\ell-1}p^\ell \stackrel{k \gtrsim 1/p}{\lesssim} k^{h-1}p^h \end{cases}, \nonumber \\ \textup{Var}(W_k) & = \mathop{\sum_{\left(\Lambda(1),\Lambda(2)\right) \in \mathscr{D}_k^2}}_{\Lambda(1) \neq \Lambda(2)} \textup{Cov}(X_{\Lambda(1)},X_{\Lambda(2)}) + \sum_{\Lambda \in \mathscr{D}_k} \textup{Var}(X_\Lambda) \label{eq:basic_computation_1} \\ & = \mathop{\sum_{\left(\Lambda(1),\Lambda(2)\right) \in \mathscr{D}_k^2}}_{S(\Lambda(1)) \cap S(\Lambda(2)) \neq \emptyset} p^{|S(\Lambda(1)) \cup S(\Lambda(2)) |} - p^{|S(\Lambda(1))|+|S(\Lambda(2))|} + \mathbb{E}[W_k] \nonumber \\ & \sim \sum_{\ell=1}^{2h-1} \mathop{\mathop{\sum_{\left(\Lambda(1),\Lambda(2)\right) \in \mathscr{D}_k^2}}_{S(\Lambda(1)) \cap S(\Lambda(2)) \neq \emptyset}}_{|S(\Lambda(1)) \cup S(\Lambda(2))|=\ell} p^\ell + \mathbb{E}[W_k] \begin{cases} \gtrsim k^{2h-3}p^{2h-1} + k^{h-1}p^h \\ \lesssim \sum_{\ell=h}^{2h-1} k^{\ell-2}p^\ell + \sum_{\ell=1}^{h-1} k^{\ell-1}p^\ell \end{cases} \nonumber \\ & \begin{cases} \stackrel{k \gtrsim (1/p)^{\frac{h-1}{h-2}}}{\gtrsim} k^{2h-3}p^{2h-1} \\ \stackrel{k \gtrsim 1/p}{\lesssim} k^{2h-3}p^{2h-1} + k^{h-1}p^h \stackrel{k \gtrsim (1/p)^{\frac{h-1}{h-2}}}{\lesssim} k^{2h-3}p^{2h-1} \end{cases}, \nonumber \\ \mathbb{E}[W_k^3] & = \mathop{\sum_{\left(\Lambda(1), \Lambda(2), \Lambda(3)\right) \in \mathscr{D}_k^3}}_{\left|\cup_{i=1}^3 S(\Lambda(i))\right| = \ell} \mathbb{E}\left[ X_{\Lambda(1)}X_{\Lambda(2)}X_{\Lambda(3)} \right] \lesssim \sum_{\ell=1}^{3h-2} k^{\ell-\lceil (\ell+1)/h \rceil} p^\ell + \mathbb{E}[ W_k^2 ] \label{eq:basic_computation_2} \\ & \lesssim \sum_{\ell=2h}^{3h-2} k^{\ell-3}p^\ell + \sum_{\ell=h}^{2h-1} k^{\ell-2}p^\ell + \sum_{\ell=1}^{h-1} k^{\ell-1}p^\ell + \textup{Var}(W_k) + \mathbb{E}[W_k] \nonumber \\ & \stackrel{k \gtrsim 1/p}{\asymp} k^{3h-5}p^{3h-2} + k^{2h-3}p^{2h-1} + k^{h-1}p^h, \nonumber \\ \mathbb{E}\left[(W_k-\mathbb{E}[W_k])^4\right] & \leq \sum_{\left(\Lambda(1), \dots, \Lambda(4)\right) \in \mathscr{D}_k^4} \left| \mathbb{E}\left[ (X_{\Lambda(1)}-p_{\Lambda(1)}) \cdots (X_{\Lambda(4)}-p_{\Lambda(4)}) \right] \right| \nonumber \\ & = \sum_{\ell=1}^{4h} \mathop{\sum_{\left(\Lambda(1), \dots, \Lambda(4)\right) \in \mathscr{D}_k^4}}_{\left|\cup_{i=1}^4 S(\Lambda(i))\right| = \ell} \left| \mathbb{E}\left[ (X_{\Lambda(1)}-p_{\Lambda(1)}) \cdots (X_{\Lambda(4)}-p_{\Lambda(4)}) \right] \right| \nonumber \\ & \lesssim \sum_{\ell=1}^{4h-2} \mathop{\sum_{\left(\Lambda(1), \dots, \Lambda(4)\right) \in \mathscr{D}_k^4}}_{\left|\cup_{i=1}^4 S(\Lambda(i))\right| = \ell} \mathbb{E}\left[ X_{\Lambda(1)}X_{\Lambda(2)}X_{\Lambda(3)}X_{\Lambda(4)} \right] \lesssim \sum_{\ell=1}^{4h-2} k^{\ell-\lceil (\ell+1)/h \rceil} p^\ell + \mathbb{E}[ W_k^3 ] \label{eq:basic_computation_3} \\ & \lesssim \sum_{\ell=3h}^{4h-2} k^{\ell-4}p^\ell + \sum_{\ell=2h}^{3h-1} k^{\ell-3}p^\ell + \sum_{\ell=h}^{2h-1} k^{\ell-2}p^\ell + \sum_{\ell=1}^{h-1} k^{\ell-1}p^\ell + \mathbb{E}[ W_k^3 ] \nonumber \\ & \stackrel{k \gtrsim 1/p}{\lesssim} k^{4h-6}p^{4h-2} + k^{3h-4}p^{3h-1} + k^{2h-3}p^{2h-1} + k^{h-1}p^h \nonumber \\ & \stackrel{k \gtrsim (1/p)^{\frac{h-1}{h-2}}}{\lesssim} k^{4h-6}p^{4h-2}. 
\nonumber\end{aligned}$$ We have invoked Lemmas [Lemma 7](#lem:L_expression_tuples_lower_bounds){reference-type="ref" reference="lem:L_expression_tuples_lower_bounds"} and [Lemma 8](#lem:L_expressions_tuples_upper_bounds){reference-type="ref" reference="lem:L_expressions_tuples_upper_bounds"} several times in these computations. In particular, observe that all summands in [\[eq:basic_computation_1\]](#eq:basic_computation_1){reference-type="eqref" reference="eq:basic_computation_1"}, [\[eq:basic_computation_2\]](#eq:basic_computation_2){reference-type="eqref" reference="eq:basic_computation_2"}, and [\[eq:basic_computation_3\]](#eq:basic_computation_3){reference-type="eqref" reference="eq:basic_computation_3"} corresponding to tuples which fail to satisfy the former condition of [\[eq:L_expressions_tuples_upper_bounds\]](#eq:L_expressions_tuples_upper_bounds){reference-type="eqref" reference="eq:L_expressions_tuples_upper_bounds"} vanish, so we take the upper limits of the sums in [\[eq:basic_computation_2\]](#eq:basic_computation_2){reference-type="eqref" reference="eq:basic_computation_2"} and [\[eq:basic_computation_3\]](#eq:basic_computation_3){reference-type="eqref" reference="eq:basic_computation_3"} to be $3h-2$ and $4h-2$, respectively. For use in Section [7](#sec:point_processes){reference-type="ref" reference="sec:point_processes"}, we also record the following, which is easily observed by proceeding slightly more frugally in the variance computations starting from [\[eq:basic_computation_1\]](#eq:basic_computation_1){reference-type="eqref" reference="eq:basic_computation_1"}. $$\begin{aligned} \label{eq:frugal_variance_bounds} \textup{Var}(W_k) - (1-p)\mu_k \geq \sum_{\ell=1}^{2h-1} \mathop{\mathop{\sum_{\left(\Lambda(1),\Lambda(2)\right) \in \mathscr{D}_k^2}}_{S(\Lambda(1)) \cap S(\Lambda(2)) \neq \emptyset}}_{|S(\Lambda(1)) \cup S(\Lambda(2))|=\ell} p^\ell \gtrsim k^{2h-3}p^{2h-1} + k^{h-1}p^h.\end{aligned}$$ Finally, we bound the quantities that were introduced in Theorem [Theorem 10](#thm:stein_chen){reference-type="ref" reference="thm:stein_chen"}. Let $B_\Lambda \subseteq \mathscr{D}_k$ denote the dependency set of $\Lambda$ for every $\Lambda \in \mathscr{D}_k$, so that $\Lambda' \in B_\Lambda$ if and only if $S(\Lambda) \cap S(\Lambda') \neq \emptyset$. 
Recall that $b_3 \equiv 0$ in this setting: we may bound the other two constants by $$\begin{aligned} b_1(k) & = \sum_{\Lambda \in \mathscr{D}_k} \sum_{\Lambda' \in B_\Lambda} p_\Lambda p_{\Lambda'} = \sum_{t=1}^h \mathop{\sum_{\left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2}}_{|S(\Lambda) \cap S(\Lambda')| = t} p_\Lambda p_{\Lambda'} \leq \sum_{t=1}^h \mathop{\sum_{\left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2}}_{|S(\Lambda) \cap S(\Lambda')| = t} \Pr\left[X_\Lambda X_{\Lambda'} = 1\right] \\ & \leq \sum_{t=1}^h \sum_{\ell=2}^{2h-t} \mathop{\mathop{\sum_{\left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2}}_{S(\Lambda) \cap S(\Lambda') \neq \emptyset}}_{|S(\Lambda) \cup S(\Lambda')|=\ell} \Pr\left[X_\Lambda X_{\Lambda'} = 1\right] \lesssim \sum_{t=1}^h \sum_{\ell=2}^{2h-t} k^{\ell-\lceil (\ell+1)/h \rceil} p^\ell = \sum_{\ell=h}^{2h-1} k^{\ell-2}p^\ell + \sum_{\ell=2}^{h-1} k^{\ell-1}p^\ell \\ & \stackrel{k \gtrsim 1/p}{\lesssim} k^{2h-3}p^{2h-1} + k^{h-2}p^{h-1} \stackrel{k \gtrsim (1/p)^{\frac{h}{h-1}}}{\lesssim} k^{2h-3}p^{2h-1}, \\ b_2(k) & \leq \sum_{\Lambda \in \mathscr{D}_k} \sum_{\Lambda' \in B_\Lambda} \Pr\left[X_\Lambda X_{\Lambda'} = 1\right] = \sum_{t=1}^h \mathop{\sum_{\left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2}}_{|S(\Lambda) \cap S(\Lambda')| = t} \Pr\left[X_\Lambda X_{\Lambda'} = 1\right] \\ & \stackrel{k \gtrsim 1/p}{\lesssim} k^{2h-3}p^{2h-1} + k^{h-2}p^{h-1} \stackrel{k \gtrsim (1/p)^{\frac{h}{h-1}}}{\lesssim} k^{2h-3}p^{2h-1}.\end{aligned}$$ Combining these two computations yields $$\begin{aligned} \label{eq:AGG_1_not_dN} b_1(k) + b_2(k) + b_3(k) \stackrel{k \gtrsim 1/p}{\lesssim} k^{2h-3}p^{2h-1} + k^{h-2}p^{h-1} \stackrel{k \gtrsim (1/p)^{\frac{h}{h-1}}}{\lesssim} k^{2h-3}p^{2h-1}.\end{aligned}$$ # Subcritical Decay {#sec:fast_decay} We commence the proof of Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. In this section, we prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(i). Throughout this section, we assume that $$\begin{aligned} \label{eq:fast_decay_assumption} Np^{\frac{h}{h-1}} \lll 1.\end{aligned}$$ We prove that an upper bound and a lower bound for $|L(A)|$ are both asymptotically equivalent to $(Np)^h/\theta_L$. Certainly, $A^h$ corresponds to a subset $\Lambda(A)$ of $\mathbb{N}^h / \! \approx$ (in the natural way), and $$\begin{aligned} \label{eq:fast_decay_upper_bound} |L(A)| \leq |\Lambda(A)| = \frac{|A|(|A|-1) \cdots (|A|-h+1)}{\theta_L} + O\left(|A|^{h-1}\right)\sim \frac{(Np)^h}{\theta_L},\end{aligned}$$ where the $O(|A|^{h-1})$ term corresponds to $L$-expressions in $\Lambda(A)$ whose ground sets have size less than $h$, and the asymptotic equivalence follows from (for example) a Chernoff bound since $A$ is a binomial random subset of $I_N$ whose expected size, $(N+1)p$, tends to infinity. We obtain a lower bound for $|L(A)|$ by subtracting, from $|\Lambda(A)|$, the sum of the number of $L$-expressions in $\Lambda(A)$ with $L$-evaluation $0$ (i.e., $| \mathscr{S}_{dN} \cap \Lambda(A) |$) and the number of ordered pairs of $L$-expressions in $\Lambda(A)$ with the same nonzero $L$-evaluation. 
For this latter summand, we count, for each $k \in \left[0, mN\right] \setminus \{dN\}$ and $t \in [h]$, the number of ordered pairs $\left(\Lambda, \Lambda' \right) \in \mathscr{D}_k^2$ of such $L$-expressions satisfying $$\begin{aligned} \label{eq:pair_condition} \Lambda, \Lambda' \in \Lambda(A), \ \ \ \ \ \Lambda \neq \Lambda', \ \ \ \ \ |S(\Lambda) \cap S(\Lambda')| = t.\end{aligned}$$ Using the computations in Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, the expectation of the value that we subtract is $$\begin{aligned} & \mathbb{E}\left[ \left| \mathscr{S}_{dN} \cap \Lambda(A) \right| \right] + \sum_{k \in I_{mN} \setminus \{dN\}} \sum_{t=1}^h \mathbb{E}\big[ \text{num. ordered pairs } \left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2 \text{ satisfying } \eqref{eq:pair_condition} \big] \nonumber \\ & \lesssim \mathbb{E}\left[\left| \mathscr{D}_{dN} \cap \Lambda(A) \right|\right] + \sum_{k=0}^{mN/2} \sum_{t=1}^h \mathop{\sum_{\left(\Lambda, \Lambda'\right) \in \mathscr{D}_k^2}}_{|S(\Lambda) \cap S(\Lambda')| = t} \Pr[X_\Lambda X_{\Lambda'} = 1] \nonumber \\ & \lesssim \mathbb{E}[W_{dN}] + \sum_{k=0}^{mN/2} \sum_{t = 1}^h \sum_{\ell=2}^{2h-t} k^{\ell-\lceil (\ell+1)/h \rceil}p^\ell \nonumber \\ & \lesssim N^{h-1}p^h + \sum_{k=0}^{mN/2} \sum_{t = 1}^h \sum_{\ell=2}^{2h-t} N^{\ell-\lceil (\ell+1)/h \rceil}p^\ell \lesssim N^{h-1}p^h + \sum_{k=0}^{mN/2} N^{2h-3}p^{2h-1} + N^{h-2}p^{h-1} \nonumber \\ & \lesssim N^{h-1}p^h + N^{2h-2}p^{2h-1} + (Np)^{h-1} \lll N^{2h-1}p^{2h} + (Np)^{h-1} \lll (Np)^h. \label{eq:fast_decay_lower_bound_1}\end{aligned}$$ It follows from [\[eq:fast_decay_lower_bound_1\]](#eq:fast_decay_lower_bound_1){reference-type="eqref" reference="eq:fast_decay_lower_bound_1"} and a standard argument using Markov's inequality that asymptotically almost surely, the number we subtract to obtain the lower bound is negligible compared to $(Np)^h$. We conclude that this lower bound for $|L(A)|$ is asymptotically almost surely equivalent to $(Np)^h/\theta_L$, establishing Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(i). # Critical Decay {#sec:critical_decay} We now prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii). In this section, we assume that $$\begin{aligned} \label{eq:critical_decay_assumption} N^{\frac{h-1}{h}}p = c,\end{aligned}$$ where $c > 0$ is a constant. Our basic strategy in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"} is as follows. We will show that the number of ways that an element in $\left[-dN, sN\right]$ is generated in $L(A)$ converges in distribution to a Poisson, from which we can find asymptotic expressions for both the expected number of elements and the expected number of missing elements in $L(A)$. These are the relevant expressions from Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. We then show that the relevant random variables are strongly concentrated about their expectations, allowing us to promote these results on expectations to the latter two parts of Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. To begin this endeavor, we set up some expressions that we use throughout. 
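Before doing so, we illustrate the first step of this strategy with a small hypothetical simulation. The coefficients $(2, 1, -1)$, the constant $c$, the target value, and the number of trials below are arbitrary choices made only for speed; since the coefficients are pairwise distinct, counting ordered solutions should essentially agree with counting distinct representations. At the critical scaling, the Poisson heuristic predicts that the probability that the target is missing from $L(A)$ is close to $\exp(-\text{mean number of representations})$.

```python
import math
import random

def count_representations(A, N, target):
    # Ordered solutions of 2*a1 + a2 - a3 = target with a1, a2, a3 in A.
    A_set = set(A)
    count = 0
    for a1 in A:
        for a2 in A:
            a3 = 2 * a1 + a2 - target
            if 0 <= a3 <= N and a3 in A_set:
                count += 1
    return count

rng = random.Random(1)
N, c, trials, h = 2000, 1.5, 2000, 3
p = c * N ** (-(h - 1) / h)          # critical scaling p = c * N^{-(h-1)/h}
target = N                           # a value in the middle of the range [-N, 3N]

counts = []
for _ in range(trials):
    A = [a for a in range(N + 1) if rng.random() < p]
    counts.append(count_representations(A, N, target))

mean = sum(counts) / trials
var = sum((x - mean) ** 2 for x in counts) / trials
print(f"mean = {mean:.3f}, variance = {var:.3f}  (Poisson would give variance = mean)")
print(f"P(target missing) = {counts.count(0) / trials:.3f}  vs  exp(-mean) = {math.exp(-mean):.3f}")
```

One should not expect perfect agreement from a single experiment of this size; the arguments below quantify the Poisson approximation error via the Stein-Chen bounds from Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}.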
In Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, we take $g: \mathbb{N}\to \mathbb{R}_{\geq 0}$ to be a function such that[^6] $$\begin{aligned} \label{eq:g_asymptotics} 1 \lll 1/p \lll g(N) \lll (1/p)^{\frac{h}{h-1}} \lesssim N.\end{aligned}$$ Let $\Tilde{L}: \mathbb{Z}^h \to \mathbb{Z}$ be the $\mathbb{Z}$-linear form with coefficients $-u_1, \dots, -u_h$. It follows that $L(A) = -\Tilde{L}(A)$, and for any $k$, $$\begin{aligned} \label{eq:left_right_symmetry} sN-k \notin L(A) \iff -sN+k \notin \Tilde{L}(A).\end{aligned}$$ In many forthcoming arguments, we will prove statements over values $-dN+k$ for $k \in \left[ 0, mN/2 \right]$, then take advantage of the symmetry apparent in [\[eq:left_right_symmetry\]](#eq:left_right_symmetry){reference-type="eqref" reference="eq:left_right_symmetry"} to prove the corresponding statement over values $-dN+k$ for $k \in \left[ mN/2, mN \right]$. Additionally, in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, we define the function $\delta: \mathbb{N}\to \mathbb{R}_{\geq 0}$ as follows, which we think of as the worst-case multiplicative margin of error for the asymptotic equivalence given by Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"}: $$\begin{aligned} \label{eq:delta(N)} \delta(N) := \max_{k \in \left[g(N), mN/2 \right]} \left| \frac{\mu_k}{\lambda_kN^{h-1}p^h} - 1 \right|.\end{aligned}$$ By Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"} and Section [3](#sec:computations){reference-type="ref" reference="sec:computations"} (the first $\sim$ is since all summands $\ell<h$ are of a lower order and $g(N) \ggg 1/p$), it holds uniformly over $k \in \left[ g(N), mN/2 \right]$ that $$\begin{aligned} \label{eq:critical_gamma_1} \mu_k = \sum_{\ell=1}^h \mathop{\sum_{\Lambda \in \mathscr{D}_k}}_{|S(\Lambda)| = \ell} p^\ell \sim |\mathscr{D}_k| \cdot p^h \sim \lambda_k N^{h-1}p^h.\end{aligned}$$ Therefore, it follows from Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"} that $\delta(N) \lll 1$. ## Expectation In this subsection, we compute $\mathbb{E}\left[ | L(A) | \right]$ and $\mathbb{E}\left[ | L(A)^c | \right]$. We begin with the following observation, which we invoke in the computation. We exclude $k = mN/2$ from the range where [\[eq:critical_approximation_error\]](#eq:critical_approximation_error){reference-type="eqref" reference="eq:critical_approximation_error"} holds uniformly: this is for convenience, as it ensures $\mathscr{D}_k = \mathfrak{D}_k$ for all such $k$. **Lemma 14**. *Uniformly over all $k \in \left[ g(N), mN/2 \right)$, $$\begin{aligned} \label{eq:critical_approximation_error} \left|\Pr\left[-dN+k \notin L(A)\right] - e^{-c^h\lambda_k} \right| \lll e^{-c^h\lambda_k}. \end{aligned}$$* *Proof.* By definition, $-dN+k \notin L(A)$ and $W_k = 0$ are the same event. 
It follows from Theorem [Theorem 10](#thm:stein_chen){reference-type="ref" reference="thm:stein_chen"} that uniformly over $k \in \left[ g(N), mN/2 \right)$, $$\begin{gathered} e^{c^h\lambda_k}\left|\Pr\left[-dN+k \notin L(A) \right]- e^{-c^h\lambda_k} \right| = e^{c^h\lambda_k} \left|(\Pr\left[W_k = 0 \right] - e^{-\mu_k}) + (e^{-\mu_k} - e^{-c^h\lambda_k}) \right| \nonumber \\ \leq \exp\left(\max_{k \in \left[ g(N), mN/2 \right)} c^h\lambda_k \right) \left(b_1(k)+b_2(k)+b_3(k)\right) + e^{c^h\lambda_k}\left|e^{-\mu_k} - e^{-c^h\lambda_k}\right| \nonumber \\ \lesssim k^{2h-3}p^{2h-1} + k^{h-2}p^{h-1} + \left|e^{c^h\lambda_k\big(1 - \frac{\mu_k}{c^h\lambda_k}\big)} - 1\right| \lesssim N^{2h-3}p^{2h-1} + N^{h-2}p^{h-1} + o(1) \label{eq:critical_poisson_conv} \\ \lll N^{2h-2}p^{2h} + N^{h-1}p^h + o(1) \lesssim 1. \nonumber\end{gathered}$$ In order, the claims in [\[eq:critical_poisson_conv\]](#eq:critical_poisson_conv){reference-type="eqref" reference="eq:critical_poisson_conv"} follow from the fact that $\mathsf{IH}_h(x)$ attains a maximum on $x \in [0, m/2]$ and from the computations in Section [3](#sec:computations){reference-type="ref" reference="sec:computations"} (note that $k \gtrsim 1/p$ due to [\[eq:g_asymptotics\]](#eq:g_asymptotics){reference-type="eqref" reference="eq:g_asymptotics"}), and uniformly over $k \in \left[ g(N), mN/2 \right)$, it holds that $\left|c^h\lambda_k\big(1 - \frac{\mu_k}{c^h\lambda_k}\big)\right| \leq c^h\lambda_k \delta(N) \lll 1$. This proves the lemma. ◻ Equipped with Lemma [Lemma 14](#lem:critical_poisson_conv){reference-type="ref" reference="lem:critical_poisson_conv"}, we are ready to compute the expectations that we seek. **Proposition 15**. *We have that $$\begin{gathered} \mathbb{E}\left[ |L(A)| \right] \sim \left(\sum_{i=1}^h |u_i| - 2\int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx\right)N, \\ \mathbb{E}\left[ |L(A)^c| \right] \sim \left(2\int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx \right) N. \end{gathered}$$* *Proof.* It is clear that it suffices to prove one of the two statements. We prove the latter. We write $$\begin{aligned} \label{eq:critical_expectation_left_right} \mathbb{E}\left[ |L(A)^c| \right] = \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN, -dN + mN/2 \right] \right| \right] + \mathbb{E}\left[ \left| L(A)^c \cap \left[sN - mN/2, sN \right] \right| \right]. \end{aligned}$$ Using [\[eq:critical_decay_assumption\]](#eq:critical_decay_assumption){reference-type="eqref" reference="eq:critical_decay_assumption"}, the first summand in [\[eq:critical_expectation_left_right\]](#eq:critical_expectation_left_right){reference-type="eqref" reference="eq:critical_expectation_left_right"} can be written as $$\begin{aligned} \label{eq:critical_fringes_expectation_1} o(N) + \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN + g(N), -dN+mN/2 \right] \right| \right]. 
\end{aligned}$$ We express the latter summand of [\[eq:critical_fringes_expectation_1\]](#eq:critical_fringes_expectation_1){reference-type="eqref" reference="eq:critical_fringes_expectation_1"} as $$\begin{aligned} & \sum_{k = g(N)}^{mN/2} \Pr\left[-dN+k \notin L(A) \right] = O(1) + \sum_{k = g(N)}^{mN/2} \Pr\left[-dN+k \notin L(A) \right] - e^{-c^h\lambda_k} + e^{-c^h\lambda_k} \nonumber \\ & \sim O(1) + \sum_{k=g(N)}^{mN/2} e^{-c^h\lambda_k} \label{eq:critical_fringes_expectation_2} \\ & \sim O(1) + \int_{g(N)}^{mN/2} \exp\left(-\frac{c^h\sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x/N - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) \ dx \nonumber \\ & \sim N \int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx. \label{eq:critical_fringes_expectation_3} \end{aligned}$$ We have invoked Lemma [Lemma 14](#lem:critical_poisson_conv){reference-type="ref" reference="lem:critical_poisson_conv"} in [\[eq:critical_fringes_expectation_2\]](#eq:critical_fringes_expectation_2){reference-type="eqref" reference="eq:critical_fringes_expectation_2"}. It follows from [\[eq:critical_fringes_expectation_1\]](#eq:critical_fringes_expectation_1){reference-type="eqref" reference="eq:critical_fringes_expectation_1"} that [\[eq:critical_fringes_expectation_3\]](#eq:critical_fringes_expectation_3){reference-type="eqref" reference="eq:critical_fringes_expectation_3"} is asymptotically equivalent to the first summand of [\[eq:critical_expectation_left_right\]](#eq:critical_expectation_left_right){reference-type="eqref" reference="eq:critical_expectation_left_right"}. We now handle the second summand of [\[eq:critical_expectation_left_right\]](#eq:critical_expectation_left_right){reference-type="eqref" reference="eq:critical_expectation_left_right"}. Proceeding as in [\[eq:critical_fringes_expectation_1\]](#eq:critical_fringes_expectation_1){reference-type="eqref" reference="eq:critical_fringes_expectation_1"} and invoking [\[eq:left_right_symmetry\]](#eq:left_right_symmetry){reference-type="eqref" reference="eq:left_right_symmetry"}, this is $$\begin{aligned} & o(N) + \sum_{k=g(N)}^{mN/2} \Pr\left[ sN-k \notin L(A) \right] = o(N) + \sum_{k=g(N)}^{mN/2} \Pr\left[ -sN+k \notin \Tilde{L}(A) \right] \label{eq:critical_fringes_expectation_4} \\ & \sim N \int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx. 
\label{eq:critical_fringes_expectation_5} \end{aligned}$$ We have applied [\[eq:critical_fringes_expectation_3\]](#eq:critical_fringes_expectation_3){reference-type="eqref" reference="eq:critical_fringes_expectation_3"} to $\Tilde{L}(A)$.[^7] Using [\[eq:critical_expectation_left_right\]](#eq:critical_expectation_left_right){reference-type="eqref" reference="eq:critical_expectation_left_right"}, [\[eq:critical_fringes_expectation_3\]](#eq:critical_fringes_expectation_3){reference-type="eqref" reference="eq:critical_fringes_expectation_3"} and [\[eq:critical_fringes_expectation_5\]](#eq:critical_fringes_expectation_5){reference-type="eqref" reference="eq:critical_fringes_expectation_5"}, we conclude that $$\begin{aligned} \mathbb{E}\left[ |L(A)^c| \right] \sim 2N \int_0^{m/2} \exp \left(- \frac{c^h \sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(x - \sum_{i=1}^h t_i\right)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right) dx, \end{aligned}$$ which suffices to prove the proposition. ◻ ## Concentration We now show that the random variables $|L(A)|$ and $|L(A)^c|$ are strongly concentrated about their expectations. Together with Proposition [Proposition 15](#prop:critical_expectations){reference-type="ref" reference="prop:critical_expectations"}, this proves Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii). **Proposition 16**. *If [\[eq:critical_decay_assumption\]](#eq:critical_decay_assumption){reference-type="eqref" reference="eq:critical_decay_assumption"} holds, then $|L(A)| \sim \mathbb{E}\left[ |L(A)| \right]$ and $|L(A)^c| \sim \mathbb{E}\left[ |L(A)^c| \right]$.* *Proof.* We fix $n \in I_N$. We define $$\begin{aligned} \label{eq:interval_I} \mathcal I_n := \left[-dN+\min\{n,N-n\}, sN-\min\{n,N-n\}\right]. \end{aligned}$$ We also define $$\begin{aligned} \label{eq:g_tilde} \Tilde{g}(N) := (1/p)^{\frac{h}{h-1}}\log(1/p), \end{aligned}$$ and abbreviate $\Tilde{g}(N)$ to $\Tilde{g}$. We think of $\left[-dN+\Tilde{g}, sN-\Tilde{g}\right]$ and $\left[-dN+\Tilde{g}, sN-\Tilde{g}\right]^c$, respectively, as the "middle" and "fringes" of the interval $\left[-dN, sN\right]$.[^8] Since $A \setminus \{n\} \subset A \cup \{n\}$, $L\left(A \setminus \{n\}\right) \subseteq L\left(A \cup \{n\}\right)$, and thus it follows that the quantity inside the conditional expectation in [\[eq:delta_function\]](#eq:delta_function){reference-type="eqref" reference="eq:delta_function"} can be written as $$\begin{aligned} \label{eq:adding_n} \big| L\left(A \setminus \{n\}\right) ^c \big| - \big| L\left(A \cup \{n\}\right)^c \big| & = \left| L\left(A \setminus \{n\}\right)^c \setminus L\left(A \cup \{n\}\right)^c \right| = \left| L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right|. \end{aligned}$$ The final expression in [\[eq:adding_n\]](#eq:adding_n){reference-type="eqref" reference="eq:adding_n"} can be understood as the number of new elements that are added to $L(A)$ due to the inclusion of $n$. Any such new element certainly must use $n$ as a summand in any sum that generates it. Therefore, it holds that $$\begin{aligned} L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \subseteq \mathcal I_n. 
\end{aligned}$$ From the linearity of conditional expectation, $$\begin{aligned} \label{eq:delta_reexpressed} \Delta_n(A) & = \mathbb{E}\left[ \left| L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right| \ \Big| \ a_0, \dots, a_{n-1} \right] \nonumber \\ & = \mathbb{E}\left[ \left| \left(L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right) \cap (\mathcal I_n \cap [-dN+\Tilde{g}, sN-\Tilde{g}]) \right| \ \Big| \ a_0, \dots, a_{n-1} \right] \nonumber \\ & \quad + \mathbb{E}\left[ \left| \left(L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right) \cap (\mathcal I_n \setminus [-dN+\Tilde{g}, sN-\Tilde{g}]) \right| \ \Big| \ a_0, \dots, a_{n-1} \right] \nonumber \\ & =: \Delta_{n,1}(A) + \Delta_{n,2}(A). \end{aligned}$$ From [\[eq:critical_decay_assumption\]](#eq:critical_decay_assumption){reference-type="eqref" reference="eq:critical_decay_assumption"}, we see that $\Tilde{g} \ggg N$, so it certainly holds that $\Delta_{n,1}(A) = 0$. We now turn to showing that $\Delta_{n,2}(A)$ is modest with high probability. If $n \in [\Tilde{g}, N - \Tilde{g}]$, $$\begin{aligned} \min\{n, N-n\} \geq \Tilde{g}, \end{aligned}$$ from which it follows from [\[eq:interval_I\]](#eq:interval_I){reference-type="eqref" reference="eq:interval_I"} that $$\begin{aligned} \mathcal I_n \setminus [-dN+\Tilde{g}, sN-\Tilde{g}] = \left[-dN+\min\{n,N-n\}, sN-\min\{n,N-n\}\right] \setminus [-dN+\Tilde{g}, sN-\Tilde{g}] = \emptyset. \end{aligned}$$ This implies $$\begin{aligned} \label{eq:delta_2_middle_bound} \Delta_{n,2}(A) = 0 \text{ for all } n \in [\Tilde{g}, N - \Tilde{g}]. \end{aligned}$$ Combining [\[eq:delta_2\_middle_bound\]](#eq:delta_2_middle_bound){reference-type="eqref" reference="eq:delta_2_middle_bound"} with a union bound yields that with probability $1-o(1)$,[^9] $$\begin{aligned} \label{eq:delta_bound_middle} \Delta_n(A) = \Delta_{n,1}(A) + \Delta_{n,2}(A) = 0 \text{ for all } n \in [\Tilde{g}, N - \Tilde{g}]. \end{aligned}$$ We now consider values $n \notin [\Tilde{g}, N - \Tilde{g}]$. Here, we observe that $$\begin{aligned} \label{eq:candidate_summands} \Delta_{n,2}(A) \leq \left(\big|A \cap [0, \Tilde{g}]\big| + \big|A \cap [N-\Tilde{g}, N]\big| \right)^{h-1}. \end{aligned}$$ Indeed, including the element $n$ in $A \setminus \{n\}$ generates no more than $$\begin{aligned} \left(\big|A \cap [0, \Tilde{g}]\big| + \big|A \cap [N-\Tilde{g}, N]\big| \right)^{h-1} \end{aligned}$$ new elements from $\mathcal I_n \setminus [-dN+\Tilde{g}, sN-\Tilde{g}]$: as before, any new element in $L(A)$ resulting from including $n$ in $A \setminus \{n\}$ must use $n$ as a summand in any sum which generates it, and it is not hard to see that the remaining $h-1$ summands in any such sum must lie in $$\begin{aligned} (A \cap [0, \Tilde{g}]) \cup (A \cap [N-\Tilde{g}, N]), \end{aligned}$$ since the resulting sum would lie in $[-dN+\Tilde{g}, sN-\Tilde{g}]$ otherwise. By a Chernoff bound, $$\label{eq:chernoff_fringes} \begin{aligned} \Pr\left( |A \cap [0, \Tilde{g}]| \leq 2(1/p)^{\frac{1}{h-1}}\log(1/p) \right) & \geq 1 - \exp\left( - (1/p)^{\frac{1}{h-1}}\log(1/p)/2 \right), \\ \Pr\left( |A \cap [N-\Tilde{g}, N]| \leq 2(1/p)^{\frac{1}{h-1}}\log(1/p) \right) & \geq 1 - \exp\left( -(1/p)^{\frac{1}{h-1}}\log(1/p)/2 \right). 
\end{aligned}$$ From [\[eq:chernoff_fringes\]](#eq:chernoff_fringes){reference-type="eqref" reference="eq:chernoff_fringes"} with a union bound, we deduce that with probability at most $$\begin{aligned} 2\exp\left( -(1/p)^{\frac{1}{h-1}}\log(1/p)/2 \right), \end{aligned}$$ we have that $$\begin{aligned} \label{eq:first_sum_bound_1} \left(\big|A \cap [0, \Tilde{g}]\big| + \big|A \cap [N-\Tilde{g}, N]\big| \right)^{h-1} & > \left(4(1/p)^{\frac{1}{h-1}}\log(1/p) \right)^{h-1} = 4^{h-1}(1/p)\left[ \log(1/p) \right]^{h-1}. \end{aligned}$$ Now, [\[eq:candidate_summands\]](#eq:candidate_summands){reference-type="eqref" reference="eq:candidate_summands"}, [\[eq:first_sum_bound_1\]](#eq:first_sum_bound_1){reference-type="eqref" reference="eq:first_sum_bound_1"}, and a union bound over $n \notin [\Tilde{g}, N-\Tilde{g}]$ together imply that $$\begin{aligned} & \Pr\left( \Delta_{n,2}(A) > 4^{h-1}(1/p)\left[ \log(1/p) \right]^{h-1} \text{ for some } n \notin [\Tilde{g}, N-\Tilde{g}] \right) \\ & \leq \Pr\left( \left(\big|A \cap [0, \Tilde{g}]\big| + \big|A \cap [N-\Tilde{g}, N]\big| \right)^{h-1} > 4^{h-1}(1/p)\left[ \log(1/p) \right]^{h-1} \text{ for some } n \notin [\Tilde{g}, N-\Tilde{g}] \right) \\ & \lesssim \Tilde{g} \exp\left( - \frac{\log(1/p)}{2p^{\frac{1}{h-1}}} \right) \lesssim \frac{\log (1/p)}{p^{\frac{h}{h-1}}} \exp\left( - \frac{\log(1/p)}{2p^{\frac{1}{h-1}}} \right) \leq \left( \frac{\log (1/p)}{p^{\frac{1}{h-1}}} \right)^h \exp\left( - \frac{\log(1/p)}{2p^{\frac{1}{h-1}}} \right) \lll 1. \end{aligned}$$ Therefore, with probability $1-o(1)$, $$\begin{aligned} \label{eq:delta_2_fringe_bound} \Delta_{n,2}(A) \lesssim (1/p) \left[ \log (1/p)\right]^{h-1} \text{ for all } n \notin [\Tilde{g}, N - \Tilde{g}]. \end{aligned}$$ Combining [\[eq:delta_2\_fringe_bound\]](#eq:delta_2_fringe_bound){reference-type="eqref" reference="eq:delta_2_fringe_bound"} with a union bound yields that with probability $1 - o(1)$, $$\label{eq:delta_bound_fringes} \begin{aligned} \Delta_n(A) = \Delta_{n,1}(A) + \Delta_{n,2}(A) & \lesssim o(1) + (1/p) \left[ \log (1/p)\right]^{h-1} \\ & \lesssim (1/p) \left[ \log (1/p)\right]^{h-1} \text{ for all } n \notin [\Tilde{g}, N - \Tilde{g}]. \end{aligned}$$ Therefore, by combining [\[eq:delta_bound_middle\]](#eq:delta_bound_middle){reference-type="eqref" reference="eq:delta_bound_middle"} and [\[eq:delta_bound_fringes\]](#eq:delta_bound_fringes){reference-type="eqref" reference="eq:delta_bound_fringes"} under a union bound and recalling [\[eq:C(A)\_reformulation\]](#eq:C(A)_reformulation){reference-type="eqref" reference="eq:C(A)_reformulation"}, we deduce that with probability $1-o(1)$, $$\begin{aligned} \label{eq:C(A)} C(A) \sim \max_{n \in I_N} \Delta_n(A) \lesssim (1/p) \left[ \log (1/p)\right]^{h-1}. 
\end{aligned}$$ On this event with $1-o(1)$ probability, we also deduce from [\[eq:delta_bound_middle\]](#eq:delta_bound_middle){reference-type="eqref" reference="eq:delta_bound_middle"} and [\[eq:delta_bound_fringes\]](#eq:delta_bound_fringes){reference-type="eqref" reference="eq:delta_bound_fringes"} that, recalling [\[eq:V(A)\_reformulation\]](#eq:V(A)_reformulation){reference-type="eqref" reference="eq:V(A)_reformulation"}, $$\label{eq:V(A)} \begin{aligned} V(A) & \sim p\sum_{n=0}^N \left(\Delta_n(A)\right)^2 = p\sum_{n=\Tilde{g}}^{N-\Tilde{g}} \left(\Delta_n(A)\right)^2 + p\sum_{n=[\Tilde{g}, N-\Tilde{g}]^c} \left(\Delta_n(A)\right)^2 \\ & \lesssim p \cdot o(1) + p \cdot \Tilde{g} \left( (1/p) \left[ \log (1/p)\right]^{h-1} \right)^2 \\ & \lesssim o(1) + \frac{\log (1/p)}{p^{\frac{1}{h-1}}} (1/p)^2 \left( \log (1/p) \right)^{2(h-1)} \lesssim (1/p)^{2+\frac{1}{h-1}} \left(\log (1/p)\right)^{2h-1}. \end{aligned}$$ We finish the proof by invoking Theorem [Theorem 13](#thm:vu_lemma){reference-type="ref" reference="thm:vu_lemma"}. Specifically, we take $$\begin{aligned} \label{eq:vu_parameters} \lambda \asymp \log (1/p), \ \ V \asymp (1/p)^{2+\frac{1}{h-1}} \left(\log (1/p)\right)^{2h-1}, \ \ C \asymp (1/p) \left( \log (1/p)\right)^{h-1}, \end{aligned}$$ and take the constant factors implicit in our expressions for $C$ and $V$ to agree with those implied in [\[eq:C(A)\]](#eq:C(A)){reference-type="eqref" reference="eq:C(A)"} and [\[eq:V(A)\]](#eq:V(A)){reference-type="eqref" reference="eq:V(A)"}, respectively. It follows from our choices in [\[eq:vu_parameters\]](#eq:vu_parameters){reference-type="eqref" reference="eq:vu_parameters"} that $$\begin{aligned} & \lambda \lesssim \log (1/p) \lesssim V/C^2 = (1/p)^{\frac{1}{h-1}}\log (1/p), \label{eq:lambda} \\ & \sqrt{\lambda V} \asymp \sqrt{\log (1/p) \cdot (1/p)^{2+\frac{1}{h-1}} \left(\log (1/p)\right)^{2h-1}} = (1/p)^{1+\frac{1}{2(h-1)}} \left[\log (1/p)\right]^h \lll (1/p)^{\frac{h}{h-1}}. \label{eq:root_lambda_V} \end{aligned}$$ From [\[eq:critical_decay_assumption\]](#eq:critical_decay_assumption){reference-type="eqref" reference="eq:critical_decay_assumption"} and [\[eq:root_lambda_V\]](#eq:root_lambda_V){reference-type="eqref" reference="eq:root_lambda_V"}, it follows that $\sqrt{\lambda V} \lll N$. Since the right-hand sides of [\[eq:critical_decay\]](#eq:critical_decay){reference-type="eqref" reference="eq:critical_decay"} and [\[eq:critical_decay_complement\]](#eq:critical_decay_complement){reference-type="eqref" reference="eq:critical_decay_complement"} are $\Omega(N)$, it follows from Theorem [Theorem 13](#thm:vu_lemma){reference-type="ref" reference="thm:vu_lemma"} and [\[eq:root_lambda_V\]](#eq:root_lambda_V){reference-type="eqref" reference="eq:root_lambda_V"} that to prove the theorem, it suffices to show $$\begin{aligned} \label{eq:vanishing_RHS} 2e^{-\lambda/4} & + \Pr(C(A) \geq C) + \Pr(V(A) \geq V) \lll 1. \end{aligned}$$ Certainly, $2e^{-\lambda/4} \lll 1$, since it is immediate from [\[eq:vu_parameters\]](#eq:vu_parameters){reference-type="eqref" reference="eq:vu_parameters"} that $\lambda \ggg 1$. The latter two terms in the LHS of [\[eq:vanishing_RHS\]](#eq:vanishing_RHS){reference-type="eqref" reference="eq:vanishing_RHS"} vanish due to [\[eq:C(A)\]](#eq:C(A)){reference-type="eqref" reference="eq:C(A)"} and [\[eq:V(A)\]](#eq:V(A)){reference-type="eqref" reference="eq:V(A)"}. 
◻ Propositions [Proposition 15](#prop:critical_expectations){reference-type="ref" reference="prop:critical_expectations"} and [Proposition 16](#prop:critical_concentration){reference-type="ref" reference="prop:critical_concentration"} together prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(ii). # Supercritical Decay {#sec:slow_decay} Finally, we prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(iii). The $h=2$ case was done by [@hegarty2009almost Theorem 3.1(iii)], so we are free to assume $h \geq 3$. We assume in this section that $$\begin{aligned} \label{eq:slow_decay_assumption} Np^{\frac{h}{h-1}} \ggg 1.\end{aligned}$$ ## Reduction {#subsec:reductions} We set up expressions for the setting in which [\[eq:slow_decay_assumption\]](#eq:slow_decay_assumption){reference-type="eqref" reference="eq:slow_decay_assumption"} holds. We let $f: \mathbb{N}\to \mathbb{R}_{\geq 0}$ be a function satisfying $$\begin{aligned} & 1 \lll f(N) \lll \min\left\{Np^{\frac{h}{h-1}}, \left(1/\delta(N)\right)^{\frac{1}{h-1}}\right\}, \label{eq:f_asymptotics} \\ & \exp\left(\frac{f(N)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right)f(N)^{2h-3}p^{\frac{1}{(h-1)(h-2)}} \lll 1; \nonumber\end{aligned}$$ by taking $f$ to be sufficiently slowly growing, it is easy to see that such a function satisfying all of the conditions in [\[eq:f_asymptotics\]](#eq:f_asymptotics){reference-type="eqref" reference="eq:f_asymptotics"} exists. We define $$\begin{aligned} \label{eq:tau} \tau(N) := \frac{f(N)}{Np^{\frac{h}{h-1}}}.\end{aligned}$$ From [\[eq:f_asymptotics\]](#eq:f_asymptotics){reference-type="eqref" reference="eq:f_asymptotics"} and [\[eq:tau\]](#eq:tau){reference-type="eqref" reference="eq:tau"}, it follows that $\tau \lll 1$. We will frequently abbreviate $\tau(N)$ to $\tau$. We think of $[(-d+\tau)N, (s-\tau)N]$ as the "middle" of the interval $[-dN, sN]$, and $[(-d+\tau)N, (s-\tau)N]^c$ as the "fringes." In this subsection, we reduce the computation of $\mathbb{E}\left[ | L(A)^c | \right]$ to the fringes. We begin by deriving an asymptotic lower bound for the expected number of elements in $[-dN, sN]$ which are missing from $L(A)$. **Lemma 17**. *It holds that $$\begin{aligned} \mathbb{E}\left[ | L(A)^c | \right] \gtrsim \left( 1/p \right)^{\frac{h}{h-1}}. \end{aligned}$$* *Proof.* It follows from [\[eq:slow_decay_assumption\]](#eq:slow_decay_assumption){reference-type="eqref" reference="eq:slow_decay_assumption"} that $N \ggg (1/p)^{\frac{h}{h-1}}$, so $$\begin{aligned} \mathcal I := \left[-dN, -dN + (1/p)^{\frac{h}{h-1}} \right] \subseteq [-dN, sN]. \end{aligned}$$ Certainly, any sum in $L(A)$ which is generated by adding a term greater than $(1/p)^{\frac{h}{h-1}}$ or subtracting a term less than $N-(1/p)^{\frac{h}{h-1}}$ fails to lie in $\mathcal I$. Thus, any sum in $L(A)$ which lands in $\mathcal I$ must add only terms that are at most $(1/p)^{\frac{h}{h-1}}$ and subtract only terms that are at least $N-(1/p)^{\frac{h}{h-1}}$. Now, it follows from a Chernoff bound and the latter statement of [\[eq:p_assumptions\]](#eq:p_assumptions){reference-type="eqref" reference="eq:p_assumptions"} that for all $i \in [n]$ and $j \in \{n+1, \dots, h\}$, $$\begin{aligned} \label{eq:missing_fringes_a} & \left| A \cap \left[ 0, (1/p)^{\frac{h}{h-1}}/u_i \right] \right| \sim (1/p)^{\frac{1}{h-1}}/u_i, & \left| A \cap \left[ N- (1/p)^{\frac{h}{h-1}}/|u_j|, N \right] \right| \sim (1/p)^{\frac{1}{h-1}}/|u_j|. 
\end{aligned}$$ By considering the number of $L$-expressions whose $L$-evaluations lie in $\mathcal I$, we deduce that with probability $1-o(1)$, the number of elements in $\mathcal I$ that lie in $L(A)$ is at most[^10] $$\begin{aligned} \label{eq:missing_fringes_1} \frac{1+o(1)}{\theta_L} \cdot \prod_{i=1}^n \left| A \cap \left[ 0, (1/p)^{\frac{h}{h-1}}/u_i \right] \right| \cdot \prod_{j = n+1}^h \left| A \cap \left[ N- (1/p)^{\frac{h}{h-1}}/|u_j|, N \right] \right|. \end{aligned}$$ Furthermore, on this high probability event, [\[eq:missing_fringes_1\]](#eq:missing_fringes_1){reference-type="eqref" reference="eq:missing_fringes_1"} is bounded by $$\begin{aligned} \frac{1+o(1)}{\theta_L} \cdot \prod_{i=1}^n \left(\frac{(1+o(1))(1/p)^{\frac{1}{h-1}}}{u_i} \right) \cdot \prod_{j = n+1}^h \left( \frac{(1+o(1))(1/p)^{\frac{1}{h-1}}}{|u_j|} \right) = \frac{(1+o(1))(1/p)^{\frac{h}{h-1}}}{\theta_L \cdot \prod_{i=1}^h |u_i|}. \end{aligned}$$ Therefore, with probability $1-o(1)$, the number of elements in $\mathcal I$ missing in $L(A)$ is at least $$\begin{aligned} \label{eq:missing_fringes_4} (1/p)^{\frac{h}{h-1}} - \frac{(1+o(1))(1/p)^{\frac{h}{h-1}}}{\theta_L \cdot \prod_{i=1}^h |u_i|} = \left[1 - \frac{1+o(1)}{\theta_L \cdot \prod_{i=1}^h |u_i|} \right](1/p)^{\frac{h}{h-1}} \gtrsim (1/p)^{\frac{h}{h-1}}, \end{aligned}$$ since $\theta_L \cdot \prod_{i=1}^h |u_i| \geq 2$ (because $h \geq 3$, $|u_i| = 1$ for all $i \in [h]$ would imply that $\theta_L \geq 2$, since either $1$ or $-1$ appears as a coefficient at least twice). This establishes the lemma. ◻ We now prove Lemma [Lemma 18](#lem:missing_middle){reference-type="ref" reference="lem:missing_middle"}, which shows that the lower bound of Lemma [Lemma 17](#lem:missing_fringes){reference-type="ref" reference="lem:missing_fringes"} dominates the number of elements in the middle of $[-dN, sN]$ that are missing from $L(A)$. **Lemma 18**. *The expected number of elements in the interval $[(-d+\tau)N, (s-\tau)N]$ that are missing from $L(A)$ satisfies $$\begin{aligned} \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right] \right| \right] \lll 1/p. \end{aligned}$$* *Proof.* Fix $k \in \left[\tau N, mN/2 \right]$: it follows from [\[eq:f_asymptotics\]](#eq:f_asymptotics){reference-type="eqref" reference="eq:f_asymptotics"} and [\[eq:tau\]](#eq:tau){reference-type="eqref" reference="eq:tau"} that $k^{h-1}p^h \gtrsim 1$ uniformly over such $k$. We take $\mu_k$ and $b_2(k)$ as defined in the proof of Lemma [Lemma 19](#lem:slow_poisson_conv){reference-type="ref" reference="lem:slow_poisson_conv"} and Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, respectively. Uniformly over $k \in \left[\tau N, mN/2 \right] \setminus \{dN\}$, $$\begin{aligned} \label{eq:missing_middle_1} -\frac{\mu_k^2}{2b_2(k)} \lesssim -\frac{k^{2h-2}p^{2h}}{k^{2h-3}p^{2h-1}} \lesssim -kp. \end{aligned}$$ If there exists $\Lambda \in \mathscr{S}_k$ for which $S(\Lambda) \subseteq A$, then $-dN+k \in L(A)$. Let $E_\Lambda$ denote the event $S(\Lambda) \subseteq A$. 
Now, [\[eq:missing_middle_1\]](#eq:missing_middle_1){reference-type="eqref" reference="eq:missing_middle_1"} and the extended Janson inequality (e.g., see [@alon2016probabilistic Theorem 8.1.2]) together imply that for some constant $C > 0$, $$\begin{aligned} \label{eq:missing_middle_3} \Pr\left[-dN+k \notin L(A)\right] \leq \Pr\left[ \bigwedge_{\Lambda \in \mathscr{S}_k} \overline{E_\Lambda} \right] \leq e^{-\frac{\mu^2}{2\Delta}} \leq e^{-Ckp}, \end{aligned}$$ where this holds uniformly over $k \in \left[\tau N, mN/2 \right]$. We deduce that $$\begin{aligned} \label{eq:missing_middle_4} \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, \left(s-m/2\right)N \right] \right| \right] = \sum_{k = \tau N}^{mN/2} \Pr\left[-dN+k \notin L(A)\right] \leq \sum_{k = \tau N}^{mN/2} e^{-Ckp} \nonumber \\ \leq \int_{\tau N-1}^\infty e^{-Cxp} \ dx = \frac{1}{Cp}\exp\left( -Cp(\tau N-1)\right) \lesssim (1/p) \exp\left( C(p - f(N))\right) \lll 1/p, \end{aligned}$$ with the final statement due to [\[eq:f_asymptotics\]](#eq:f_asymptotics){reference-type="eqref" reference="eq:f_asymptotics"}. It now follows from [\[eq:left_right_symmetry\]](#eq:left_right_symmetry){reference-type="eqref" reference="eq:left_right_symmetry"} that $$\begin{aligned} \mathbb{E}\left[ \left| L(A)^c \cap \left[\left(-d+m/2\right)N, (s-\tau)N \right] \right| \right] & = \sum_{k \in [\tau N, mN/2]} \Pr[sN-k \notin L(A)] \nonumber \\ & = \sum_{k \in [\tau N, mN/2]} \Pr[-sN+k \notin \Tilde{L}(A)] \lll 1/p, \label{eq:missing_middle_5} \end{aligned}$$ where we have applied [\[eq:missing_middle_4\]](#eq:missing_middle_4){reference-type="eqref" reference="eq:missing_middle_4"} with respect to $\Tilde{L}$ to observe [\[eq:missing_middle_5\]](#eq:missing_middle_5){reference-type="eqref" reference="eq:missing_middle_5"}. Altogether, we conclude that $$\begin{aligned} & \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right] \right| \right] \\ & = \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, \left(s-m/2\right)N \right] \right| \right] + \mathbb{E}\left[ \left| L(A)^c \cap \left[\left(-d+m/2\right)N, (s-\tau)N \right] \right| \right] \lll 1/p. \end{aligned}$$ This yields the desired statement. ◻ It now follows from Lemmas [Lemma 17](#lem:missing_fringes){reference-type="ref" reference="lem:missing_fringes"} and [Lemma 18](#lem:missing_middle){reference-type="ref" reference="lem:missing_middle"} that $$\label{eq:reduction_to_fringes_obj_2} \begin{aligned} \mathbb{E}\left[ | L(A)^c | \right] & = \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right] \right| \right] + \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right]^c \right| \right] \\ & \sim \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right]^c \right| \right], \end{aligned}$$ which provides the desired reduction. [\[rmk:forbidden_summand\]]{#rmk:forbidden_summand label="rmk:forbidden_summand"} It is straightforward to adapt Lemma [Lemma 18](#lem:missing_middle){reference-type="ref" reference="lem:missing_middle"} to include the condition that some particular value $n \in I_N$ cannot be in any subset of $\mathscr{S}_k$.
Indeed, it is not hard to see that the asymptotic claim implicit in [\[eq:missing_middle_1\]](#eq:missing_middle_1){reference-type="eqref" reference="eq:missing_middle_1"} would still hold uniformly over $k \in \left[ \tau N, mN/2 \right]$, since for any such $k$, the number of $h$-tuples $(a_1, \dots, a_h)$ with at least one instance of $n$ such that $L(a_1, \dots, a_h) = -dN+k$ is $O(k^{h-2})$. More specifically, by tracing and adapting the proof of Lemma [Lemma 18](#lem:missing_middle){reference-type="ref" reference="lem:missing_middle"}, we can show that there exists a constant $C > 0$ (also independent of $n$) for which it holds for all $k \in \left[(1/p)^{\frac{h}{h-1}}, mN/2 \right]$ that $$\begin{aligned} & \Pr\left[-dN + k \notin L\left(A \setminus \{n\}\right)\right] \leq e^{-Ckp}, & \Pr\left[sN - k \notin L\left(A \setminus \{n\}\right)\right] \leq e^{-Ckp}. \end{aligned}$$ We make use of this remark in the proof of Theorem [Proposition 21](#prop:slow_concentration){reference-type="ref" reference="prop:slow_concentration"}. ## Expectation {#subsec:slow_expectation} In this subsection, we will compute $\mathbb{E}\left[ | L(A)^c | \right]$. We begin with the following observation, which is the analogue of Lemma [Lemma 14](#lem:critical_poisson_conv){reference-type="ref" reference="lem:critical_poisson_conv"} for this regime. **Lemma 19**. *Uniformly over all $k \in \left[ g(N), \tau N \right]$, $$\begin{aligned} \left|\Pr\left[-dN+k \notin L(A)\right] - e^{-\lambda_kN^{h-1}p^h} \right| \lll e^{-\lambda_kN^{h-1}p^h}. \end{aligned}$$* *Proof.* We proceed as in the proof of Lemma [Lemma 14](#lem:critical_poisson_conv){reference-type="ref" reference="lem:critical_poisson_conv"}. Uniformly over $k \in \left[ g(N), \tau N \right]$, $$\begin{aligned} \mu_k \sim |\mathscr{D}_k| \cdot p^h \sim \lambda_k N^{h-1}p^h = \frac{p^hk^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}. \label{eq:gamma_1}\end{aligned}$$ Here, [\[eq:gamma_1\]](#eq:gamma_1){reference-type="eqref" reference="eq:gamma_1"} follows as in Lemma [Lemma 14](#lem:critical_poisson_conv){reference-type="ref" reference="lem:critical_poisson_conv"}, with the final equality due to the definition of $\mathsf{IH}_h$. It now follows from Theorem [Theorem 10](#thm:stein_chen){reference-type="ref" reference="thm:stein_chen"} that uniformly over $k \in \left[ g(N), \tau N\right]$, $$\begin{aligned} & e^{\lambda_kN^{h-1}p^h}\left|\Pr\left[-dN+k \notin L(A) \right] - e^{-\lambda_kN^{h-1}p^h} \right| \nonumber \\ & = e^{\lambda_kN^{h-1}p^h} \left|(\Pr\left[W_k = 0 \right] - e^{-\mu_k}) + (e^{-\mu_k} - e^{-\lambda_kN^{h-1}p^h}) \right| \nonumber \\ & \leq \exp\left(\frac{p^hk^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) \left(b_1(k)+b_2(k)+b_3(k)\right) + e^{\lambda_kN^{h-1}p^h}\left|e^{-\mu_k} - e^{-\lambda_kN^{h-1}p^h}\right| \nonumber \\ & \lesssim \exp\left(\frac{\left(\tau N p^{\frac{h}{h-1}} \right)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) \left( k^{2h-3}p^{2h-1} + k^{h-1}p^h \right) \nonumber \\ & \quad + \left|\exp\left(\lambda_kN^{h-1}p^h\left(1 - \frac{\mu_k}{\lambda_kN^{h-1}p^h}\right)\right) - 1\right| \nonumber \\ & \leq \exp\left(\frac{f(N)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) k^{h-1}p^h \left(k^{h-2}p^{h-1} + 1 \right) + o(1) \label{eq:poisson_conv_1} \\ & \leq \exp\left(\frac{f(N)^{h-1}}{(h-1)! 
\cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) \left(\tau N p^{\frac{h}{h-1}} \right)^{h-1} \left( \left( \tau Np^{\frac{h-1}{h-2}} \right)^{h-2} + 1 \right) + o(1) \nonumber \\ & = \exp\left(\frac{f(N)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) f(N)^{h-1} \left( \left( f(N) p^{\frac{h-1}{h-2} - \frac{h}{h-1}} \right)^{h-2} + 1 \right) + o(1) \nonumber \\ & \leq \exp\left(\frac{f(N)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) f(N)^{2h-3} \left( p^{\frac{1}{(h-1)(h-2)}} + p \right) + o(1) \nonumber \\ & \lesssim \exp\left(\frac{f(N)^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) f(N)^{2h-3} p^{\frac{1}{(h-1)(h-2)}} + o(1) \lll 1. \nonumber\end{aligned}$$ Here, [\[eq:poisson_conv_1\]](#eq:poisson_conv_1){reference-type="eqref" reference="eq:poisson_conv_1"} follows since uniformly over $k \in \left[g(N), \tau N \right]$, $$\begin{aligned} \left| \lambda_kN^{h-1}p^h\left(1 - \frac{\mu_k}{\lambda_kN^{h-1}p^h}\right) \right| \leq f(N)^{h-1} \delta(N) \lll 1.\end{aligned}$$ This proves the lemma. ◻ Equipped with Lemma [Lemma 19](#lem:slow_poisson_conv){reference-type="ref" reference="lem:slow_poisson_conv"}, we are ready to compute the expectation that we seek. **Proposition 20**. *We have that $$\begin{aligned} \mathbb{E}\left[ |L(A)^c| \right] \sim \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right]^c \right| \right] \sim \frac{2 \cdot \Gamma\left( \frac{1}{h-1} \right) \sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^{h} |u_i|}}{(h-1) \cdot p(N)^{\frac{h}{h-1}}}. \end{aligned}$$* *Proof.* The first asymptotic equivalence is simply a restating of [\[eq:reduction_to_fringes_obj_2\]](#eq:reduction_to_fringes_obj_2){reference-type="eqref" reference="eq:reduction_to_fringes_obj_2"}. We prove the latter. We begin with the left fringe. Using [\[eq:g_asymptotics\]](#eq:g_asymptotics){reference-type="eqref" reference="eq:g_asymptotics"}, we write $$\begin{aligned} & \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN, (-d+\tau)N \right] \right| \right] \nonumber \\ & = o\left( (1/p)^{\frac{h}{h-1}} \right) + \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN + g(N), (-d+\tau)N \right] \right| \right]. \label{eq:fringes_expectation_1} \end{aligned}$$ We express the latter term as $$\begin{aligned} & \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN + g(N), (-d+\tau)N \right] \right| \right] = \sum_{k = g(N)}^{\tau N} \Pr\left[-dN+k \notin L(A) \right] \nonumber\\ & = \sum_{k = g(N)}^{\tau N} \left( \Pr\left[-dN+k \notin L(A) \right] - e^{-\lambda_kN^{h-1}p^h} + e^{-\lambda_kN^{h-1}p^h} \right) \sim \sum_{k=g(N)}^{\tau N} e^{-\lambda_kN^{h-1}p^h} \label{eq:fringes_expectation_2} \\ & = \sum_{k=g(N)}^{\tau N} \exp\left(-\frac{p^hk^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) \sim \int_{g(N)}^{\tau N} \exp\left(-\frac{p^hx^{h-1}}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|} \right) \ dx \label{eq:fringes_expectation_3} \\ & \sim \sqrt[h-1]{\frac{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}{p^h}} \int_{g(N) \sqrt[h-1]{\frac{p^h}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}}^{\tau N \sqrt[h-1]{\frac{p^h}{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}} e^{-x^{h-1}} \ dx \nonumber \\ & \sim \frac{\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{p^{\frac{h}{h-1}}} \int_0^\infty e^{-x^{h-1}} \ dx \label{eq:fringes_expectation_4} \\ & = \frac{\sqrt[h-1]{(h-1)!
\cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{p^{\frac{h}{h-1}}} \int_0^\infty x^{h-2} x^{-(h-2)} e^{-x^{h-1}} \ dx \nonumber \\ & = \frac{\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{p^{\frac{h}{h-1}}} \int_0^\infty x^{-\frac{h-2}{h-1}} e^{-x} \ dx = \frac{\Gamma\left( \frac{1}{h-1} \right)\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{(h-1) \cdot p^{\frac{h}{h-1}}}. \label{eq:fringes_expectation_5} \end{aligned}$$ We used Lemma [Lemma 19](#lem:slow_poisson_conv){reference-type="ref" reference="lem:slow_poisson_conv"} in [\[eq:fringes_expectation_2\]](#eq:fringes_expectation_2){reference-type="eqref" reference="eq:fringes_expectation_2"} and [\[eq:fringes_expectation_3\]](#eq:fringes_expectation_3){reference-type="eqref" reference="eq:fringes_expectation_3"}, and the asymptotic equivalence in [\[eq:fringes_expectation_4\]](#eq:fringes_expectation_4){reference-type="eqref" reference="eq:fringes_expectation_4"} follows from the dominated convergence theorem, since [\[eq:g_asymptotics\]](#eq:g_asymptotics){reference-type="eqref" reference="eq:g_asymptotics"} and [\[eq:f_asymptotics\]](#eq:f_asymptotics){reference-type="eqref" reference="eq:f_asymptotics"} imply that the lower and upper limits of the integral respectively tend to zero and infinity. It follows from [\[eq:fringes_expectation_1\]](#eq:fringes_expectation_1){reference-type="eqref" reference="eq:fringes_expectation_1"} and [\[eq:fringes_expectation_5\]](#eq:fringes_expectation_5){reference-type="eqref" reference="eq:fringes_expectation_5"} that $$\begin{aligned} \mathbb{E}\left[ \left| L(A)^c \cap \left[-dN, (-d+\tau)N \right] \right| \right] \sim \frac{\Gamma\left( \frac{1}{h-1} \right)\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{(h-1) \cdot p^{\frac{h}{h-1}}}. \end{aligned}$$ We now handle the right fringe. Proceeding like [\[eq:fringes_expectation_1\]](#eq:fringes_expectation_1){reference-type="eqref" reference="eq:fringes_expectation_1"} in [\[eq:fringes_expectation_6\]](#eq:fringes_expectation_6){reference-type="eqref" reference="eq:fringes_expectation_6"} and invoking [\[eq:left_right_symmetry\]](#eq:left_right_symmetry){reference-type="eqref" reference="eq:left_right_symmetry"}, $$\begin{aligned} & \mathbb{E}\left[ \left| L(A)^c \cap \left[(s-\tau)N, sN \right] \right| \right] = o\left( (1/p)^{\frac{h}{h-1}} \right) + \sum_{k=g}^{\tau N} \Pr\left[ sN-k \notin L(A) \right] \label{eq:fringes_expectation_6} \\ & = o\left( (1/p)^{\frac{h}{h-1}} \right) + \sum_{k=g}^{\tau N} \Pr\left[ -sN+k \notin \Tilde{L}(A) \right] \sim \frac{\Gamma\left( \frac{1}{h-1} \right)\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{(h-1) \cdot p^{\frac{h}{h-1}}}, \label{eq:fringes_expectation_7} \end{aligned}$$ where we have applied [\[eq:fringes_expectation_5\]](#eq:fringes_expectation_5){reference-type="eqref" reference="eq:fringes_expectation_5"} to $\Tilde{L}(A)$. Using [\[eq:fringes_expectation_5\]](#eq:fringes_expectation_5){reference-type="eqref" reference="eq:fringes_expectation_5"} and [\[eq:fringes_expectation_7\]](#eq:fringes_expectation_7){reference-type="eqref" reference="eq:fringes_expectation_7"}, we conclude that $$\begin{aligned} & \mathbb{E}\left[ \left| L(A)^c \cap \left[(-d+\tau)N, (s-\tau)N \right]^c \right| \right] \sim \frac{2 \cdot \Gamma\left( \frac{1}{h-1} \right)\sqrt[h-1]{(h-1)! \cdot \theta_L \cdot \prod_{i=1}^h |u_i|}}{(h-1) \cdot p^{\frac{h}{h-1}}}, \end{aligned}$$ which was the desired result. 
◻ ## Concentration {#subsec:concentration} We now show that the random variable $|L(A)^c|$ is strongly concentrated about its expectation. Together with Proposition [Proposition 20](#prop:slow_fringes_expectation){reference-type="ref" reference="prop:slow_fringes_expectation"}, this will prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}(iii). **Proposition 21**. *If [\[eq:slow_decay_assumption\]](#eq:slow_decay_assumption){reference-type="eqref" reference="eq:slow_decay_assumption"} holds, then $|L(A)^c| \sim \mathbb{E}\left[ |L(A)^c| \right]$.* *Proof.* We proceed exactly as in the proof of Theorem [Proposition 16](#prop:critical_concentration){reference-type="ref" reference="prop:critical_concentration"}, but with the following modifications. First, we show that $\Delta_{n,1}(A)$ is modest with high probability. The expectation of $\Delta_{n,1}(A)$ satisfies $$\begin{aligned} \mathbb{E}\left[ \Delta_{n,1}(A) \right] & = \mathbb{E}\left[\mathbb{E}\left[ \left| \left(L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right) \cap (\mathcal I_n \cap [-dN+\Tilde{g}, sN-\Tilde{g}]) \right| \ \Big| \ a_0, \dots, a_{n-1} \right]\right] \nonumber \\ & = \mathbb{E}\left[ \left| \left(L\left(A \cup \{n\}\right) \setminus L\left(A \setminus \{n\}\right) \right) \cap (\mathcal I_n \cap [-dN+\Tilde{g}, sN-\Tilde{g}]) \right| \right] \nonumber \\ & = \sum_{k \in \mathcal I_n \cap [-dN+\Tilde{g}, sN-\Tilde{g}]} \Pr\left[ \left( k \in L\left(A \cup \{n\}\right) \right) \land \left( k \notin L\left(A \setminus \{n\}\right)\right) \right] \nonumber \\ & \leq \sum_{k=\max\left\{\Tilde{g}, \min\{n,N-n\}\right\}}^{mN-\Tilde{g}} \Pr\left[ -dN+k \notin L\left(A \setminus \{n\}\right) \right] \leq \sum_{k=\max\left\{\Tilde{g}, \min\{n,N-n\}\right\}}^{mN-\Tilde{g}} e^{-Ckp} \label{eq:restricted_sum_1} \\ & \lesssim (1/p)\exp\left( -Cp\left(\max\left\{\Tilde{g}, \min\{n,N-n\}\right\}-1\right) \right) \label{eq:restricted_sum_2} \\ & \lesssim (1/p)\exp\left( -(C/2)p\max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right). \nonumber \end{aligned}$$ We appealed to Remark [\[rmk:forbidden_summand\]](#rmk:forbidden_summand){reference-type="ref" reference="rmk:forbidden_summand"} to observe the latter inequality in [\[eq:restricted_sum_1\]](#eq:restricted_sum_1){reference-type="eqref" reference="eq:restricted_sum_1"} (we may do so, since it follows from [\[eq:g_tilde\]](#eq:g_tilde){reference-type="eqref" reference="eq:g_tilde"} and [\[eq:g_asymptotics\]](#eq:g_asymptotics){reference-type="eqref" reference="eq:g_asymptotics"} that $\Tilde{g} \ggg (1/p)^{\frac{h}{h-1}}$), where $C > 0$ is an appropriate constant. The former inequality in [\[eq:restricted_sum_2\]](#eq:restricted_sum_2){reference-type="eqref" reference="eq:restricted_sum_2"} follows from computations entirely analogous to those performed in the proof of Lemma [Lemma 18](#lem:missing_middle){reference-type="ref" reference="lem:missing_middle"}. 
Therefore, we have from [\[eq:restricted_sum_2\]](#eq:restricted_sum_2){reference-type="eqref" reference="eq:restricted_sum_2"} and Markov's inequality that $$\label{eq:markov_1} \begin{aligned} & \Pr\left( \Delta_{n,1}(A) \gtrsim (1/p)\exp\left( -(C/4)p\max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right) \right) \\ & \lesssim \frac{(1/p)\exp\left( - (C/2) p \max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right)}{(1/p)\exp\left( -(C/4)p \max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right)} \lesssim \exp\left( -(C/4)p \max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right). \end{aligned}$$ By a union bound, it now follows from [\[eq:markov_1\]](#eq:markov_1){reference-type="eqref" reference="eq:markov_1"} that $$\begin{aligned} & \Pr\left( \Delta_{n,1}(A) \geq \exp\left( -(C/4)p\max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right) \text{ for some } n \in I_N \right) \nonumber \\ & \lesssim \sum_{n = 0}^{N} \exp\left( -(C/4)p\max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right) \nonumber \\ & = \sum_{n=0}^{\Tilde{g}} \exp\left( -\frac{Cp\Tilde{g}}{4} \right) + \sum_{n=N-\Tilde{g}}^N \exp\left( -\frac{Cp\Tilde{g}}{4} \right) + \sum_{n=\Tilde{g}+1}^{N-\Tilde{g}-1} \exp\left( -\frac{Cpn}{2}\right) \nonumber \\ & \lesssim \frac{\log(1/p)}{p^{\frac{h}{h-1}}}\exp\left(-\frac{C \log(1/p)}{4p^{\frac{1}{h-1}}} \right) + \int_{\Tilde{g}}^\infty \exp\left( -\frac{Cpx}{2}\right) \ dx \nonumber \\ & \lesssim \left((1/p)^{\frac{1}{h-1}} \log(1/p) \right)^h \exp\left(-(1/p)^{\frac{1}{h-1}}(C/4) \log(1/p) \right) + (1/p) \exp\left(-(1/p)^{\frac{1}{h-1}}(C/4) \log(1/p) \right) \nonumber \\ & \leq o(1) + \left((1/p)^{\frac{1}{h-1}} \log(1/p) \right)^{h-1} \exp\left(-(1/p)^{\frac{1}{h-1}}(C/2) \log(1/p) \right) \lll 1. \end{aligned}$$ So with probability $1-o(1)$, $$\label{eq:delta_1_bound} \begin{aligned} \Delta_{n,1}(A) & \leq \exp\left( -(C/4)p\max\left\{\Tilde{g}, \min\{n,N-n\}\right\} \right) \\ & \leq \exp\left(-(1/p)^{\frac{1}{h-1}}(C/4) \log(1/p) \right) \lll 1 \text{ for all } n \in I_N. \end{aligned}$$ Furthermore, on this event with $1-o(1)$ probability, it holds that $$\label{eq:delta_1_sum_bound} \begin{aligned} & p \sum_{n=0}^N \left( \Delta_{n,1}(A) \right)^2 \\ & \leq p \left[ \sum_{n=0}^{\Tilde{g}} \exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right) + \sum_{n=N-\Tilde{g}}^N \exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right) + \sum_{n=\Tilde{g}+1}^{N-\Tilde{g}} \exp\left( -Cpn \right) \right] \\ & \lesssim p\Tilde{g}\exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right) + p \cdot \frac{1}{p} \exp\left(-\frac{C \log(1/p)}{p^{\frac{1}{h-1}}} \right) \\ & = p\Tilde{g}\exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right) + o(1) \lesssim \frac{ \log(1/p)}{p^{\frac{1}{h-1}}}\exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right) + o(1) \lll 1. \end{aligned}$$ Finally, if we assume [\[eq:slow_decay_assumption\]](#eq:slow_decay_assumption){reference-type="eqref" reference="eq:slow_decay_assumption"}, [\[eq:root_lambda_V\]](#eq:root_lambda_V){reference-type="eqref" reference="eq:root_lambda_V"} and Lemma [Lemma 17](#lem:missing_fringes){reference-type="ref" reference="lem:missing_fringes"} together imply the sufficiency of showing [\[eq:vanishing_RHS\]](#eq:vanishing_RHS){reference-type="eqref" reference="eq:vanishing_RHS"}. With these adjustments, tracing the proof of Theorem [Proposition 16](#prop:critical_concentration){reference-type="ref" reference="prop:critical_concentration"} proves the desired. 
◻ This completes the proof of Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"}. # Limiting Poisson Behavior {#sec:point_processes} The methods we used to prove Theorem [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} lend themselves readily to another phase transition concerning the asymptotic behavior of certain locally dependent Bernoulli processes of interest, which is captured in Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}. In the following computations, we assume that $k \in \left[ 0, mN/2 \right]$. As remarked in Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, we may extend these results to $k \in \left[ mN/2, mN \right]$ by appealing to [\[eq:left_right_process_equivalence\]](#eq:left_right_process_equivalence){reference-type="eqref" reference="eq:left_right_process_equivalence"}. We let $F: \mathbb{N}\to \mathbb{R}_{\geq 0}$ be a function satisfying $$\begin{aligned} \label{eq:F_asymptotics} p^{\frac{2}{2h-3}} \lll F(N) \lll 1.\end{aligned}$$ We begin by proving an upper bound for the total variation distance between the processes $\mathscr{X}_k$ and $\mathscr{Y}_k$. By invoking Theorem [Theorem 11](#thm:poisson_process_convergence){reference-type="ref" reference="thm:poisson_process_convergence"} and substituting results from Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, entirely straightforward manipulations show that if $p(N) \lll N^{-\frac{h-2}{h-1}}$ (this assumption is needed only for the second case below), then, since [\[eq:F_asymptotics\]](#eq:F_asymptotics){reference-type="eqref" reference="eq:F_asymptotics"} implies $F(N)(1/p)^{\frac{2h-1}{2h-3}} \ggg 1/p$, $$\begin{aligned} d_\textup{TV}\left(\mathscr{X}_k, \mathscr{Y}_k \right) & \leq \min\{1, \mu_k^{-1} \} \left(b_1(k) + b_2(k)\right) \nonumber \\ & \lesssim \begin{cases} \sum_{\ell=h}^{2h-1} k^{\ell-2}p^\ell + \sum_{\ell=2}^{h-1} k^{\ell-1}p^\ell & k \leq F(N)(1/p)^{\frac{2h-1}{2h-3}} \\ \frac{k^{2h-3}p^{2h-1} + k^{h-2}p^{h-1}}{k^{h-1}p^h} & k > F(N)(1/p)^{\frac{2h-1}{2h-3}} \end{cases} \lll 1. \label{eq:total_var_upper_bound}\end{aligned}$$ This proves Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}(i). We now turn to proving a lower bound for the total variation distance, which will show that the regime of $p(N)$ for which we established the convergence of $\mathscr{X}_k$ to a Poisson process for all $k \in \left[ 0, mN \right]$, in the sense of [\[eq:total_var_upper_bound\]](#eq:total_var_upper_bound){reference-type="eqref" reference="eq:total_var_upper_bound"}, is sharp. In what follows, we assume that $$\begin{aligned} \label{eq:local_critical_decay} p(N) \geq cN^{-\frac{h-2}{h-1}}\end{aligned}$$ for some constant $c > 0$. We fix another constant $C > 0$, and we study values $k \in \left[CN, mN/2\right]$, for which we will be able to obtain a meaningful lower bound. For bookkeeping purposes, we record $$\begin{aligned} \label{eq:process_above_threshold_asymptotics} & N \gtrsim (1/p)^{\frac{h-1}{h-2}}, & k \asymp N.\end{aligned}$$ To derive such a lower bound on the total variation distance, we invoke Theorem [Theorem 12](#thm:positively_related_lower_bound){reference-type="ref" reference="thm:positively_related_lower_bound"} to bound $d_\textup{TV}\left(W_k, \textup{Pois}(\mu_k)\right)$ from below.
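Before doing so, we record why a lower bound on $d_\textup{TV}\left(W_k, \textup{Pois}(\mu_k)\right)$ suffices: total variation distance is non-increasing under measurable maps, and applying this to the map sending a point configuration to its total number of points (the number of points of $\mathscr{Y}_k$ being $\textup{Pois}(\mu_k)$-distributed) yields $$\begin{aligned} d_\textup{TV}\left(\mathscr{X}_k, \mathscr{Y}_k \right) \geq d_\textup{TV}\left( W_k, \textup{Pois}(\mu_k) \right), \end{aligned}$$ which is precisely the first inequality used in the final display below.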
First, observe that the collection of random variables $\{X_\Lambda: \Lambda \in \mathscr{D}_k\}$ is positively related. Indeed, for $\Lambda, \Lambda' \in \mathscr{D}_k$, let $Y_{\Lambda'\Lambda}$ be the Bernoulli random variable corresponding to the event $S(\Lambda') \setminus S(\Lambda) \subseteq A$; if $\Lambda = \Lambda'$, then $Y_{\Lambda'\Lambda} \equiv 1$. It is not hard to see that $$\begin{aligned} & \mathcal L\left(Y_{\Lambda'\Lambda}; \Lambda' \in \mathscr{D}_k\right) = \mathcal L\left(X_{\Lambda'}; \Lambda' \in \mathscr{D}_k \ | \ X_\Lambda = 1 \right), & Y_{\Lambda'\Lambda} \geq X_{\Lambda'} \text{ for all } \Lambda' \in \mathscr{D}_k \setminus \{\Lambda\}.\end{aligned}$$ We define the following. The statements for $\epsilon_k$ hold uniformly over $k \in \left[ CN, mN/2\right]$ and follow quickly from computations in Section [3](#sec:computations){reference-type="ref" reference="sec:computations"}, notably using [\[eq:frugal_variance_bounds\]](#eq:frugal_variance_bounds){reference-type="eqref" reference="eq:frugal_variance_bounds"} to prove the $\gtrsim$ implicit in the $\asymp$: $$\begin{aligned} & \gamma_k := \frac{\mathbb{E}\left[(W_k-\mathbb{E}[W_k])^4 \right]}{\mu_k} - 1; & \epsilon_k := \frac{\textup{Var}(W_k)}{\mu_k} - 1 \asymp N^{h-2}p^{h-1} = \Omega(1).\end{aligned}$$ Thus, we may invoke Theorem [Theorem 12](#thm:positively_related_lower_bound){reference-type="ref" reference="thm:positively_related_lower_bound"}. Appealing to [\[eq:process_above_threshold_asymptotics\]](#eq:process_above_threshold_asymptotics){reference-type="eqref" reference="eq:process_above_threshold_asymptotics"}, we proceed with the relevant computations. $$\begin{aligned} \left( \frac{\gamma_k}{\mu_k \epsilon_k} \right)_+ & \leq \frac{\gamma_k+1}{\mu_k \epsilon_k} \lesssim \frac{N^{4h-6}p^{4h-2} + N^{3h-4}p^{3h-1} + N^{2h-3}p^{2h-1} + N^{h-1}p^h}{N^{3h-4}p^{3h-1}} \\ & = N^{h-2}p^{h-1} + 1 + \frac{1}{N^{h-1}p^h} + \frac{1}{N^{2h-3}p^{2h-1}} \asymp N^{h-2}p^{h-1} + 1 \asymp \epsilon_k; \\ \frac{\sum_{\Lambda \in \mathscr{D}_k} p_\Lambda^2}{\mu_k^2 \epsilon_k} & \leq \frac{\sum_{\Lambda \in \mathscr{D}_k} p_\Lambda}{\mu_k^2 \epsilon_k} = \frac{\mu_k}{\mu_k^2 \epsilon_k} = \frac{1}{\mu_k\epsilon_k} \asymp \frac{1}{N^{2h-3}p^{2h-1}} \lll 1; \\ \frac{3\textup{Var}(W_k) \max_{\Lambda \in \mathscr{D}_k} p_\Lambda}{\mu_k\epsilon_k} & \lesssim \frac{3p N^{2h-3}p^{2h-1}}{N^{2h-3}p^{2h-1}} = 3p \lll 1.\end{aligned}$$ Therefore, it follows that $$\begin{aligned} \psi_k := \left(\frac{\gamma_k}{\mu_k \epsilon_k}\right)_+ + 3\epsilon_k + \frac{\sum_{\Lambda \in \mathscr{D}_k} p_\Lambda^2}{\mu_k^2 \epsilon_k} + \frac{3\textup{Var}(W_k) \max_{\Lambda \in \mathscr{D}_k} p_\Lambda}{\mu_k\epsilon_k} \asymp \epsilon_k. \end{aligned}$$ Altogether, we conclude that uniformly over $k \in \left[CN, mN/2\right]$, $$\begin{aligned} d_\textup{TV}\left(\mathscr{X}_k, \mathscr{Y}_k \right) \geq d_\textup{TV}\left( W_k, \textup{Pois}(\mu_k) \right) \geq \frac{\epsilon_k}{11+3\psi_k} \geq \frac{\epsilon_k}{11+ O(1)\epsilon_k} = \Omega(1),\end{aligned}$$ which proves Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"}(ii). # Future Directions {#sec:future_directions} ## Further Generalizations Our work 1. presents and utilizes the Stein-Chen method as a powerful tool for the study of sum/difference sets and some of their natural generalizations, as it yields tight estimates on the probability that certain values are missing from said sets; 2. 
provides sharp statements concerning when we can control the dependence resulting from several overlapping ways of representing the same sum. The second author has been involved in several projects investigating MSTD sets, some of which were mentioned in the introduction. These projects took place in a variety of settings, ranging from working with different groups (e.g., we may generalize our present setting by assuming that we draw elements independently from subsets $G_N$ of a group $G$; here, we took $G_N = I_N$ and $G = \mathbb{Z}$) to working with different binary operations. In some of these settings, the desired outcome was to show that a positive proportion of sets were MSTD or to show that asymptotically almost all sets failed to be MSTD, and the main rigidity arose due to dependencies resulting from overlapping representations. It may thus be fruitful to revisit these works and see if our techniques may be quickly adapted to yield improvements on known results. A slightly less immediate generalization is to introduce dependencies into the binomial model itself. Specifically, we might assume that $A$ is drawn from a probability measure $\mathbb{P}_N$ on $\big(\Omega_N, 2^{\Omega_N}\big)$ for which the marginal laws are all $\textup{Ber}\left(p(N)\right)$, but the joint law need not be the product measure. Natural generalizations of Theorems [Theorem 1](#thm:Z_linear_forms){reference-type="ref" reference="thm:Z_linear_forms"} and [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"} for this setting would be nice. ## Global Process Behavior Theorem [Theorem 4](#thm:poisson_process){reference-type="ref" reference="thm:poisson_process"} provides a sharp threshold for when the total variation distance between $\mathscr{X}_k$ and $\mathscr{Y}_k$ vanishes uniformly across all $k \in \left[ 0, mN \right]$. In this sense, we have shown that the dependent Bernoulli process corresponding to the number of ways a sum is generated in $L(A)$ converges to a Poisson point process. It might be worth asking if a nice point process limit can be obtained across the entire set $L(A)$, i.e., for the dependent Bernoulli process $(X_k)_{k \in [0, mN]}$, where $X_k$ corresponds to the event $k \in L(A)$. # Acknowledgments {#acknowledgments .unnumbered} This research was supported, in part, by NSF Grants DMS1947438 and DMS2241623. The first author was introduced to this topic while he was a student at the 2022 SMALL REU at Williams College, and he thanks Professor Eyvindur Ari Palsson and the second author for giving him a chance to participate in the program. We thank the University of Pennsylvania and Williams College for their support over the course of this project. Parts of this work were carried out when the first author was attending the 2023 Princeton Machine Learning Theory Summer School, and he thanks the organizers for providing a hospitable environment to work in. The first author also thanks Professor Robin Pemantle for a productive conversation on the problem. # Asymptotic Notation {#sec:asymptotic_notation} Let $X$ be a real-valued random variable depending on some positive integer parameter $N$, and let $f(N)$ be some real-valued function. 
We write $X \sim f(N)$ to denote the fact that, for any $\epsilon_1, \epsilon_2 > 0$, it holds for all sufficiently large values of $N$ that $$\begin{aligned} \Pr\left( X \notin [(1-\epsilon_{1})f(N), (1+\epsilon_{1})f(N)] \right) < \epsilon_{2}.\end{aligned}$$ We will also use this notation for deterministic functions $f(N), g(N)$: here, we write $f(N) \sim g(N)$ if $\lim_{N \to \infty} f(N)/g(N) = 1$. We write $f(N) \gtrsim g(N)$ to indicate that there exists a constant $C > 0$ for which $f(N) \geq Cg(N)$ for all sufficiently large $N$, and $f(N) \lesssim g(N)$ to indicate that there exists a constant $C > 0$ for which $f(N) \leq Cg(N)$ for all sufficiently large $N$. By $f(N) = O(g(N))$, we mean that there exists a constant $C > 0$ for which $f(N) \leq Cg(N)$ for all sufficiently large $N$. We write $f(N) = \Omega(g(N))$ if $g(N) = O(f(N))$. By $f(N) = \Theta(g(N))$, we mean that both $f(N) = O(g(N))$ and $g(N) = O(f(N))$ hold. Finally, if $\lim_{N\to\infty} f(N) / g(N) = 0$ then we write $f(N) = o(g(N))$, which is equivalent to $f(N) \lll g(N)$. We write $f(N) = \omega(g(N))$ if $g(N) = o(f(N))$, which is equivalent to $f(N) \ggg g(N)$. # Proofs for Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} {#sec:preliminaries_proofs} We work towards a proof of Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"}. We begin with the following standard definitions. **Definition 22**. A *[partition]{style="color: red"}* of a positive integer $k$ is a finite nonincreasing sequence of positive integers $\lambda_1, \dots, \lambda_h$ such that $\sum_{i=1}^h \lambda_i = k$. A *[weak composition]{style="color: red"}* of a nonnegative integer $k$ with $h$ parts is an $h$-tuple $(\lambda_1, \dots, \lambda_h)$ of nonnegative integers such that $\sum_{i=1}^h \lambda_i = k$. **Definition 23**. A sequence $a_0, a_1, \dots, a_n$ of real numbers is *[unimodal]{style="color: red"}* if there exists an index $0 \leq j \leq n$ for which it holds that $$\begin{aligned} a_0 \leq a_1 \leq \cdots \leq a_j \geq a_{j+1} \geq \cdots \geq a_n. \end{aligned}$$ The sequence is *[symmetric]{style="color: red"}* if $a_i = a_{n-i}$ for all $0 \leq i \leq n$. For later use, observe that a sequence $a_0, a_1, \dots, a_n$ that is both unimodal and symmetric must have that $a_{\lfloor n/2 \rfloor} = a_{\lceil n/2 \rceil}$, and that this value must be the maximum of the sequence. **Lemma 24**. *Fix a function $g: \mathbb{N}\to \mathbb{N}$ satisfying $1 \lll g(N)$. For $k \in I_{hN}$, let $p_h(k,N)$ denote the number of partitions of $k$ into at most $h$ parts, each at most $N$. Uniformly over $k \in \left[g(N), hN - g(N) \right]$, $$\begin{aligned} \label{eq:partition_asymptotics_desired} p_h\left(k,N\right) \sim \frac{\mathsf{IH}_h\left(k/N\right)}{h!}N^{h-1}. \end{aligned}$$* *Proof.* We handle the cases $k \notin \left[N, (h-1)N\right]$ and $k \in \left[N, (h-1)N \right]$ separately. It is well known that the sequence $$\begin{aligned} \label{eq:partition_asymptotics_sequence} p_h(0, N), \ p_h(1, N), \ \dots, \ p_h(hN-1, N), \ p_h(hN, N) \end{aligned}$$ has the Gaussian binomial coefficient $\binom{N+h}{h}_q$ as its generating function (e.g., see [@andrews1998theory Theorem 3.1]), and that this sequence is both unimodal and symmetric (e.g., see [@melczer2020counting; @sylvester1878xxv]).
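To illustrate these facts on the smallest nontrivial case (a sanity check only, not needed in the argument), take $h = N = 2$: $$\begin{aligned} \binom{4}{2}_q = \frac{(1-q^{3})(1-q^{4})}{(1-q)(1-q^{2})} = 1 + q + 2q^{2} + q^{3} + q^{4}, \end{aligned}$$ so $p_2(k,2)$ for $k = 0, 1, \dots, 4$ is the sequence $1, 1, 2, 1, 1$, which is indeed unimodal and symmetric.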
Appealing to [@stanley2016some Theorem 2.4] and [@janson2013euler Theorem 3.2], we observe that, where the first statement holds uniformly over all integers $k \geq g(N)$ and $\alpha$ is a nonnegative real number, $$\begin{aligned} \label{eq:q_binomial_coeff_asymptotics} & [ q^k ] \binom{k+h}{h}_q \sim \frac{k^{h-1}}{(h-1)!h!} = \frac{\mathsf{IH}_h(k/N)}{h!}N^{h-1}, & \left[ q^{\lfloor \alpha N \rfloor} \right]\binom{N+h}{h}_q \sim \frac{\mathsf{IH}_h\left( \alpha \right)}{h!}N^{h-1}. \end{aligned}$$ Since any partition of an integer $k$ satisfying $g(N) \leq k \leq N$ certainly has parts at most $k$, it follows for such values of $k$ that $p_h(k,N)$ is simply the number of partitions of $k$ into $h$ parts. Thus, the first statement of [\[eq:q_binomial_coeff_asymptotics\]](#eq:q_binomial_coeff_asymptotics){reference-type="eqref" reference="eq:q_binomial_coeff_asymptotics"} implies that [\[eq:partition_asymptotics_desired\]](#eq:partition_asymptotics_desired){reference-type="eqref" reference="eq:partition_asymptotics_desired"} holds uniformly over all integers $k$ satisfying $g(N) \leq k \leq N$, from which the symmetry of the sequence [\[eq:partition_asymptotics_sequence\]](#eq:partition_asymptotics_sequence){reference-type="eqref" reference="eq:partition_asymptotics_sequence"} implies that [\[eq:partition_asymptotics_desired\]](#eq:partition_asymptotics_desired){reference-type="eqref" reference="eq:partition_asymptotics_desired"} holds uniformly over all $k \in [N, (h-1)N]^c \cap \left[g(N), hN - g(N)\right]$. We now turn to the case where $k \in \left[N, (h-1)N\right]$. We observe that $\mathsf{IH}(x)$ is uniformly continuous on $x \in [1,h-1]$ and positive on $x \in (0,h)$ (the positivity can be deduced inductively using [@janson2013euler Remark 3.3], for example). It thus follows that $\mathsf{IH}_h(k/N)$ has a fixed positive minimum on $k \in \left[N, (h-1)N\right]$. From these observations and the latter statement of [\[eq:q_binomial_coeff_asymptotics\]](#eq:q_binomial_coeff_asymptotics){reference-type="eqref" reference="eq:q_binomial_coeff_asymptotics"}, it is straightforward to show using a simple continuity argument[^11] that uniformly over $k \in \left[N, (h-1)N\right]$, $$\begin{aligned} \label{eq:q_binomial_coeff_asymptotics_middle} [ q^k ]\binom{N+h}{h}_q \sim \frac{\mathsf{IH}_h\left( k/N \right)}{h!}N^{h-1}. \end{aligned}$$ Combining [\[eq:q_binomial_coeff_asymptotics_middle\]](#eq:q_binomial_coeff_asymptotics_middle){reference-type="eqref" reference="eq:q_binomial_coeff_asymptotics_middle"} with the result for $k \notin \left[N, (h-1)N\right]$ proves the lemma. ◻ [\[rmk:partition_asymptotics_small_k\]]{#rmk:partition_asymptotics_small_k label="rmk:partition_asymptotics_small_k"} For $k \leq N$, the expression in Lemma [Lemma 24](#lem:partition_asymptotics){reference-type="ref" reference="lem:partition_asymptotics"} can be written using [\[eq:irwin_hall_density\]](#eq:irwin_hall_density){reference-type="eqref" reference="eq:irwin_hall_density"} as $$\begin{aligned} \frac{\mathsf{IH}_h\left(k/N\right)}{h!}N^{h-1} = \frac{\left(k/N\right)^{h-1}}{(h-1)!h!} N^{h-1} = \frac{k^{h-1}}{(h-1)!h!}. \end{aligned}$$ This is consistent with classical results on the number of partitions of a positive integer with a bounded number of parts (e.g., see [@erdos1941distribution Theorem 4.1]). **Lemma 25**. *Fix a function $g: \mathbb{N}\to \mathbb{N}$ satisfying $1 \lll g(N) \lll N$, jointly coprime nonzero integers $u_1, \dots, u_h$, and integers $b_1, \dots, b_h$. 
Let $u = (u_1, \dots, u_h)$ and $b = (b_1, \dots, b_h)$. For $k \in I_{hN}$, let $c_{h,u,b}(k, N)$ denote the number of weak compositions $(a_1, \dots, a_h)$ of $k$ with $h$ parts, each of which is at most $N$, that satisfy $$\begin{aligned} \label{eq:ordered_tuples_conditions} a_i \equiv b_i \pmod{u_i} \text{ for all } i \in [h]. \end{aligned}$$ Uniformly over $k \in \left[g(N), hN-g(N) \right]$, $$\begin{aligned} \label{eq:num_ordered_tuples_asymptotics} c_{h,u,b}(k, N) \sim \frac{\mathsf{IH}_h(k/N)}{\prod_{i=1}^h |u_i|}N^{h-1}. \end{aligned}$$ Additionally, uniformly over $k_1 < g(N)$ and $k_2 > hN-g(N)$, $$\begin{aligned} \label{eq:ordered_tuples_fringes} & \mathsf{IH}_h(k_1/N) \lll \mathsf{IH}_h(1+k_1/N), & \mathsf{IH}(k_2/N) \lll \mathsf{IH}_h(1-k_2/N). \end{aligned}$$* *Proof.* Let $c_h(k,N)$ denote the number of weak compositions $(a_1, \dots, a_h)$ of $k$ with $h$ parts, each of which is at most $N$. It follows from [@zhong2022combinatorial Theorem 3] that the sequence $$\begin{aligned} \label{eq:ordered_tuples_sequence} c_h(0, N), \ c_h(1, N), \ \dots, \ c_h(hN, N) \end{aligned}$$ is unimodal and symmetric. Additionally, the sequence[^12] $$\begin{aligned} \label{eq:weak_compositions_sequence} c_{h,u,b}(0, N), \ c_{h,u,b}(1, N), \ \dots, \ c_{h,u,b}(hN, N) \end{aligned}$$ can easily be deduced to be symmetric from the observation that for $k \in [0, hN]$, $$\begin{aligned} a_1 + \cdots + a_h = k \iff (N-a_1) + \cdots + (N-a_h) = hN-k. \end{aligned}$$ Lemma [Lemma 24](#lem:partition_asymptotics){reference-type="ref" reference="lem:partition_asymptotics"} implies that uniformly over $k \in \left[g(N), hN/2\right]$, $$\begin{aligned} \label{eq:unconstrained_compositions_asymptotics} c_h(k, N) \sim \mathsf{IH}_h(k/N) N^{h-1}, \end{aligned}$$ since it is easy to show that $O(k^{h-2})$ partitions of $k$ into at most $h$ parts are such that its parts are not all distinct.[^13] The asymptotic equivalence [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"} is now seen to hold uniformly over $k \in \left[g(N), hN - g(N)\right]$ by the symmetry of [\[eq:ordered_tuples_sequence\]](#eq:ordered_tuples_sequence){reference-type="eqref" reference="eq:ordered_tuples_sequence"}. We deviate from our usual practice of assuming a fixed $h \geq 2$, and prove [\[eq:num_ordered_tuples_asymptotics\]](#eq:num_ordered_tuples_asymptotics){reference-type="eqref" reference="eq:num_ordered_tuples_asymptotics"} by induction on $h \geq 2$. We begin with the induction basis $h=2$. For all $k \in \left[ g(N), 2N-g(N)\right]$, the values of $a_1$ yielding a (unique) weak composition $(a_1, a_2)$ of $k$ with parts at most $N$ comprise an interval with at least $g = \omega(1)$ elements. It is not hard to show that uniformly over all such $k$, $\sim \frac{1}{|u_1u_2|}$ of them satisfy [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"} (e.g., appeal to the result of [@dummit2004abstract Exercise 8.1.4(b)]). Combined with [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"}, this establishes [\[eq:num_ordered_tuples_asymptotics\]](#eq:num_ordered_tuples_asymptotics){reference-type="eqref" reference="eq:num_ordered_tuples_asymptotics"} for $h=2$. Now consider $h \geq 3$. 
Let $\Tilde{g}: \mathbb{N}\to \mathbb{N}$ be a function satisfying $1 \lll \Tilde{g} \lll g$. We can reformulate the condition that $(a_1, \dots, a_h)$ is a weak composition of $k$ with $h$ parts via $$\begin{aligned} \label{eq:ordered_tuples_induction} \sum_{i=1}^h a_i = k \iff \sum_{i=2}^h a_i = k - a_1. \end{aligned}$$ It is easy to see that for all $k \in \left[ g(N), hN/2\right]$, the possible choices of $a_1$ yielding the existence of $a_2, \dots, a_h$ for which $(a_1, \dots, a_h)$ is a weak composition of $k$ with $h$ parts, each at most $N$, comprise an interval with at least $g = \omega(1)$ elements. For such an integer $k$, let $\mathcal I_k$ denote this interval of possible choices for $a_1$, but with both endpoints compressed by an additive margin of $\Tilde{g}$. In particular, $k-a_1 \in \left[\Tilde{g}(N), hN/2\right]$ for all $a_1 \in \mathcal I_k$. The following statements are to be understood as holding uniformly over $k \in \left[ g(N), hN/2\right]$. Since $\Tilde{g} \lll g$, $\mathcal I_k$ has $\omega(1)$ elements. Every[^14] $|u_1|$^th^ element $a_1 \in \mathcal I_k$ satisfies [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"} for $a_1$: each such $a_1$ yields $c_{h-1}(k-a_1, N)$ weak compositions of $k$ with $h$ parts, each at most $N$. From [\[eq:ordered_tuples_sequence\]](#eq:ordered_tuples_sequence){reference-type="eqref" reference="eq:ordered_tuples_sequence"}, we observe the unimodality and symmetry of the sequence $$\begin{aligned} \label{eq:compositions_induction_sequence} c_{h-1}(\Tilde{g}(N), N), \dots, c_{h-1}(hN-\Tilde{g}(N), N). \end{aligned}$$ Using this observation alongside [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"}, it follows from a simple continuity argument that $\sim \frac{1}{|u_1|}$ of the weak compositions $(a_1, \dots, a_h)$ satisfying [\[eq:ordered_tuples_induction\]](#eq:ordered_tuples_induction){reference-type="eqref" reference="eq:ordered_tuples_induction"} for which $a_1 \in \mathcal I_k$ also satisfy [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"} for $a_1$. The induction hypothesis (which may be invoked since $1 \lll \Tilde{g} \lll N$) with [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"} implies that uniformly over all $a_1 \in \mathcal I_k$, $\sim \frac{1}{\prod_{i=2}^h|u_i|}$ of the weak compositions $(a_2, \dots, a_h)$ of $k-a_1$ with $h-1$ parts at most $N$ satisfy [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"}. Together with the preceding deduction, we conclude that $\sim \frac{1}{\prod_{i=1}^h|u_i|}$ of the weak compositions $(a_1, \dots, a_h)$ of $k$ with $h$ parts, each at most $N$, and with $a_1 \in \mathcal I_k$ satisfy [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"}. Since $\Tilde{g} \lll g \leq k$, it follows that the number of weak compositions $(a_1, \dots, a_h)$ of $k$ with $h$ parts, each at most $N$, for which $a_1 \notin \mathcal I_k$ is $o(k^{h-1})$.
Altogether, with [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"}, we conclude that $$\begin{aligned} \label{eq:compositions_desired_1} c_{h,u,b}(k,N) \sim \frac{\mathsf{IH}_h(k/N)}{\prod_{i=1}^h |u_i|} N^{h-1}, \end{aligned}$$ uniformly over $k \in \left[ g(N), hN/2 \right]$.[^15] Since the sequence [\[eq:weak_compositions_sequence\]](#eq:weak_compositions_sequence){reference-type="eqref" reference="eq:weak_compositions_sequence"} is symmetric, we conclude that [\[eq:num_ordered_tuples_asymptotics\]](#eq:num_ordered_tuples_asymptotics){reference-type="eqref" reference="eq:num_ordered_tuples_asymptotics"} holds uniformly over $k \in \left[ g(N), hN-g(N) \right]$, completing the induction. Now, [\[eq:ordered_tuples_fringes\]](#eq:ordered_tuples_fringes){reference-type="eqref" reference="eq:ordered_tuples_fringes"} can be seen to hold from an entirely elementary continuity argument. ◻ We are now ready to prove Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"}. *Proof of Lemma [Lemma 6](#lem:number_of_subsets_asymptotics){reference-type="ref" reference="lem:number_of_subsets_asymptotics"}.* Fix $k \in \left[0, mN/2\right]$. The fact that $|\mathscr{D}_k| = |\mathscr{D}_{mN-k}|$ follows quickly from $$\begin{aligned} \label{eq:number_of_subsets_left_right} L\left(a_1, \dots, a_h\right) = -dN+k \iff L\left(N-a_1,\dots,N-a_h\right) = -dN+(mN-k) = sN-k, \end{aligned}$$ which naturally lends itself to a bijection between $\mathscr{D}_k$ and $\mathscr{D}_{mN-k}$. We now prove [\[eq:number_of_subsets_fringe_asymptotics\]](#eq:number_of_subsets_fringe_asymptotics){reference-type="eqref" reference="eq:number_of_subsets_fringe_asymptotics"} and [\[eq:number_of_subsets_asymptotics\]](#eq:number_of_subsets_asymptotics){reference-type="eqref" reference="eq:number_of_subsets_asymptotics"} for $|\mathscr{D}_k|$, after which we are done. Rearranging the LHS of [\[eq:number_of_subsets_left_right\]](#eq:number_of_subsets_left_right){reference-type="eqref" reference="eq:number_of_subsets_left_right"} gives $$\begin{aligned} \label{eq:number_of_subsets_1} \sum_{i=1}^n u_ia_i + \sum_{i=n+1}^h |u_i| \left(N-a_i \right) = k. \end{aligned}$$ For values $a_1, \dots, a_h \in I_N$, [\[eq:number_of_subsets_1\]](#eq:number_of_subsets_1){reference-type="eqref" reference="eq:number_of_subsets_1"} is equivalent to the statement that $$\begin{aligned} \label{eq:desired_rearranged} \left(u_1a_1, \dots, u_na_n, |u_{n+1}|(N-a_{n+1}), \dots, |u_h|(N-a_h)\right) \end{aligned}$$ forms a weak composition of $k$. For any $k \in \left[0, g(N)\right]$, there are at most $g(N)^{h-1} \lll N^{h-1}$ weak compositions with $h$ parts of $k$, so [\[eq:number_of_subsets_fringe_asymptotics\]](#eq:number_of_subsets_fringe_asymptotics){reference-type="eqref" reference="eq:number_of_subsets_fringe_asymptotics"} follows. We now fix $k \in \left[g(N), mN/2\right]$, and prove [\[eq:number_of_subsets_asymptotics\]](#eq:number_of_subsets_asymptotics){reference-type="eqref" reference="eq:number_of_subsets_asymptotics"} by counting the number of $h$-tuples $(a_1, \dots, a_h)$ in $I_N^h$ for which [\[eq:desired_rearranged\]](#eq:desired_rearranged){reference-type="eqref" reference="eq:desired_rearranged"} forms a weak composition of $k$, then arguing that for essentially all such $h$-tuples, all $h$ parts are distinct. 
Dividing our initial count by $2^{\mathbf{1}\left\{ L \text{ balanced, } k = mN/2 \right\}} \cdot \theta_L$ to correct for overcounting (resulting from counting $h$-tuples rather than sets) now yields an asymptotic expression for $|\mathscr{D}_k|$. All of this will be done using an argument which applies uniformly over $k \in [g(N), mN/2]$. We begin with the initial count. We partition the collection of $h$-tuples $(a_1, \dots, a_h) \in I_N^h$ for which [\[eq:desired_rearranged\]](#eq:desired_rearranged){reference-type="eqref" reference="eq:desired_rearranged"} is a weak composition of $k$ into classes based on the $h$-tuple $$\begin{aligned} \label{eq:class_inclusion} \left(\lfloor u_1a_1/N \rfloor, \dots, \lfloor u_na_n/N \rfloor, \lfloor |u_{n+1}|(N-a_{n+1})/N \rfloor, \dots, \lfloor |u_h|(N-a_h)/N \rfloor \right). \end{aligned}$$ Set $t_i \in I_{|u_i|-1}$ for $i \in [h]$, so that those $h$-tuples $(a_1, \dots, a_h)$ in the class where [\[eq:class_inclusion\]](#eq:class_inclusion){reference-type="eqref" reference="eq:class_inclusion"} is $(t_1, \dots, t_h)$ are exactly those which satisfy the inequalities $$\begin{aligned} \label{eq:class_ineqs} & 0 \leq u_ia_i - t_iN \leq N-1 \text{ for } i \in [n], & 0 \leq |u_i|(N-a_i) - t_iN \leq N-1 \text{ for } i > n. \end{aligned}$$ This suggests, for all such $h$-tuples $(a_1, \dots, a_h)$, the reformulation of [\[eq:number_of_subsets_1\]](#eq:number_of_subsets_1){reference-type="eqref" reference="eq:number_of_subsets_1"} via $$\begin{aligned} \sum_{i=1}^n \left( u_ia_i - t_iN \right) + \sum_{i=n+1}^h \left( |u_i| \left(N-a_i \right) - t_iN \right) = k - \left(\sum_{i=1}^h t_i\right)N, \end{aligned}$$ from which we observe that the number of $h$-tuples $(a_1, \dots, a_h)$ in the class where [\[eq:class_inclusion\]](#eq:class_inclusion){reference-type="eqref" reference="eq:class_inclusion"} is $(t_1, \dots, t_h)$ is exactly the number of weak compositions of $k-\left(\sum_{i=1}^h t_i\right)N$ with $h$ parts, each at most $N-1$ (due to [\[eq:class_ineqs\]](#eq:class_ineqs){reference-type="eqref" reference="eq:class_ineqs"}), and satisfying [\[eq:ordered_tuples_conditions\]](#eq:ordered_tuples_conditions){reference-type="eqref" reference="eq:ordered_tuples_conditions"} for the values $b_1 = -t_1N, \dots, b_h = -t_hN$. Invoking[^16] Lemma [Lemma 25](#lem:ordered_tuples_asymptotics){reference-type="ref" reference="lem:ordered_tuples_asymptotics"} for all choices of $t_i \in I_{|u_i|-1}$ for $i \in [h]$ and summing yields that the corresponding number of $h$-tuples $(a_1, \dots, a_h) \in I_N^h$ for which [\[eq:desired_rearranged\]](#eq:desired_rearranged){reference-type="eqref" reference="eq:desired_rearranged"} is a weak composition of $k$ is $$\begin{aligned} \label{eq:ordered_seq_count} \sim \frac{\sum_{t_1=0}^{|u_1|-1} \cdots \sum_{t_h=0}^{|u_h|-1} \mathsf{IH}_h\left(k/N - \sum_{i=1}^h t_i\right)}{\prod_{i=1}^h |u_i|} N^{h-1}. \end{aligned}$$ The number of these $h$-tuples $(a_1, \dots, a_h)$ for which either the $h$ parts are not distinct or $t_i = |u_i|$ for some $i \in [h]$ is $O(k^{h-2})$. 
Combining this observation with [\[eq:ordered_seq_count\]](#eq:ordered_seq_count){reference-type="eqref" reference="eq:ordered_seq_count"}, we conclude that [\[eq:number_of_subsets_asymptotics\]](#eq:number_of_subsets_asymptotics){reference-type="eqref" reference="eq:number_of_subsets_asymptotics"} holds uniformly over $k \in \left[g(N), mN/2\right]$.[^17] ◻ We now prove Lemmas [Lemma 7](#lem:L_expression_tuples_lower_bounds){reference-type="ref" reference="lem:L_expression_tuples_lower_bounds"} and [Lemma 8](#lem:L_expressions_tuples_upper_bounds){reference-type="ref" reference="lem:L_expressions_tuples_upper_bounds"}. *Proof of Lemma [Lemma 7](#lem:L_expression_tuples_lower_bounds){reference-type="ref" reference="lem:L_expression_tuples_lower_bounds"}.* Since $\gcd(u_1, \dots, u_h) = 1$, there is a solution to the Diophantine equation $$\begin{aligned} \label{eq:L_expression_num_elmts} L(a_1, \dots, a_h) = -dN+k. \end{aligned}$$ It is not hard to show (briefly, add appropriate multiples of $\textup{lcm}(u_1, \dots, u_h)$ to the entries of a particular solution of [\[eq:L_expression_num_elmts\]](#eq:L_expression_num_elmts){reference-type="eqref" reference="eq:L_expression_num_elmts"} to form new solutions of [\[eq:L_expression_num_elmts\]](#eq:L_expression_num_elmts){reference-type="eqref" reference="eq:L_expression_num_elmts"}) that uniformly over $k \in \left[ 0, mN/2 \right]$, we may form $\gtrsim k^{h-1}$ solutions to [\[eq:L_expression_num_elmts\]](#eq:L_expression_num_elmts){reference-type="eqref" reference="eq:L_expression_num_elmts"} such that the $h$ elements are distinct and lie in $I_N$. Since there are $O(1)$ such solutions in each equivalence class of $\mathscr{D}_k$, this proves $(1)$. We prove $(2)$ by deriving the lower bound for the number of ordered pairs of $h$-tuples $$\begin{aligned} \label{eq:ordered_tuples} \left( (a_1, \dots, a_h), (b_1, \dots, b_h) \right) \in (I_N^h)^2 \end{aligned}$$ such that $a_1 = b_1$, the two $h$-tuples have a union of size $2h-1$, and both map under $L$ to $-dN+k$. This is since $O(1)$ such ordered pairs of $h$-tuples [\[eq:ordered_tuples\]](#eq:ordered_tuples){reference-type="eqref" reference="eq:ordered_tuples"} correspond[^18] to an ordered pair $\left(\Lambda(1), \Lambda(2)\right) \in \mathscr{D}_k^2$ such that $|S(\Lambda(1)) \cup S(\Lambda(2))| = 2h-1$ and $S(\Lambda(1)) \cap S(\Lambda(2)) \neq \emptyset$. It can be similarly shown that uniformly over $k \in \left[ 0, mN/2 \right]$, the number of such ordered pairs we may form is $\gtrsim k^{2h-3}$ (e.g., fix $a_1=b_1$ such that $u_1a_1 \leq k/2$ and the resulting Diophantine equation [\[eq:L_expression_num_elmts\]](#eq:L_expression_num_elmts){reference-type="eqref" reference="eq:L_expression_num_elmts"} has a solution, then choose the remaining $2(h-1)$ values). 
◻ *Proof of Lemma [Lemma 8](#lem:L_expressions_tuples_upper_bounds){reference-type="ref" reference="lem:L_expressions_tuples_upper_bounds"}.* It suffices to derive these asymptotic upper bounds for the number of matrices $$\begin{aligned} \mathcal A = \left(a_{i,j} \right)_{1 \leq i \leq t, 1 \leq j \leq h} \in I_N^{t \times h}, \end{aligned}$$ where the matrices $\mathcal A$ that we consider are those such that there are exactly $\ell$ distinct values amongst the entries of $\mathcal A$, there do not exist $i_1, i_2 \in [t]$ and $\sigma \in \mathscr{R}_k$ such that $$\begin{aligned} \label{eq:L_expressions_tuples_asymptotics_condition_1} (a_{i_1, 1}, \dots, a_{i_1, h}) = (a_{i_2, \sigma(1)}, \dots, a_{i_2, \sigma(h)}), \end{aligned}$$ and it holds for all $i \in [t]$ that $$\begin{aligned} \label{eq:L_expressions_tuples_asymptotics_condition_2} \{a_{i,1}, \dots, a_{i,h}\} \cap \left( \bigcup_{j \neq i} \{a_{j,1}, \dots, a_{j,h}\} \right) \neq \emptyset \text{ for all } i \in [t], \end{aligned}$$ and that $$\begin{aligned} \label{eq:L_expressions_tuples_asymptotics_equations_1} L\left(a_{i,1}, \dots, a_{i,h}\right) = -dN+k. \end{aligned}$$ Indeed, there is an injective map[^19] from the $t$-tuples we would like to count to the collection of all such matrices $\mathcal A$. Observe that for any such $\mathcal A$, at least $2\ell - th$ of the $\ell$ values occur once in the entries of $\mathcal A$, so the number of rows of $\mathcal A$ with a value that is included exactly once in $\mathcal A$ is at least $$\begin{aligned} \label{eq:lower_bound_multiplicity_1} \lceil (2\ell - th)/h \rceil = \lceil 2\ell/ h \rceil-t. \end{aligned}$$ We partition the collection of all such matrices $\mathcal A$ based on the $\ell$ subsets[^20] of $\left(a_{i,j} \right)_{1 \leq i \leq t, 1 \leq j \leq h}$ corresponding to the $\ell$ distinct values. We break into cases based on the values of $\ell$ and $t$. The following discussion is to be understood as having fixed one of the blocks of the partition. We will occasionally assume that we are working with a specific fixed block of the partition. #### **Case 1: $1 \leq \ell \leq h-1$ or $t=1$ and $\ell=h$.** Let $r=1$, and let $v_1$ be one of the $\ell$ distinct subsets. #### **Case 2: $t \geq 2$ and $h \leq \ell \leq 2h-1$.** We break into two subcases. 1. If there exist values $v_1, v_2$ such that $v_2$ is in a row that $v_1$ is not in, then let $r=2$. 2. If no such $v_1, v_2$ exist, then necessarily $\ell = h$, and all $\ell$ values lie in all $t$ rows of $\mathcal A$. The entries of any such matrix $\mathcal A$ must satisfy, for some $\sigma \notin \mathscr{R}_k$ (due to [\[eq:L_expressions_tuples_asymptotics_condition_1\]](#eq:L_expressions_tuples_asymptotics_condition_1){reference-type="eqref" reference="eq:L_expressions_tuples_asymptotics_condition_1"}), $$\begin{aligned} \label{eq:L_expressions_tuples_asymptotics_system} L\left(a_{1,1}, \dots, a_{1,h}\right) = L(a_{1,\sigma(1)}, \dots, a_{1,\sigma(h)}) = -dN+k. \end{aligned}$$ If such a permutation $\sigma \notin \mathscr{R}_k$ exists, it must be that the two equations $$\begin{aligned} & u_1a_{1,1} + \cdots + u_ha_{1,h} = -dN+k, & u_{\sigma^{-1}(1)}a_{1,1} + \cdots + u_{\sigma^{-1}(h)}a_{1,h} = -dN+k \end{aligned}$$ are multiples of each other. By the definition of $\mathscr{R}_k$, there must exist $j \in [h]$ for which $u_j \neq u_{\sigma^{-1}(j)}$, so it follows that $-dN+k = 0$. 
It is not hard to see that $|u_j| = |u_{\sigma^{-1}(j)}|$ (since $[u_1, \dots, u_h] \neq [u_{\sigma(1)}, \dots, u_{\sigma(h)}]$ otherwise), so $u_i = -u_{\sigma^{-1}(i)}$ for all $i \in [h]$. Thus, $L$ must also be balanced, and it therefore holds that $\sigma \in \mathscr{R}_k$. We conclude that no such matrices $\mathcal A$ satisfying the conditions of this subcase exist. #### **Case 3: $2h \leq \ell \leq 3h-1$.** We are safe to assume $t \geq 3$. We break into two subcases. 1. If there exist values $v_1, v_2, v_3$ such that $v_2$ is in a row that $v_1$ is not in and $v_3$ is in a row that neither $v_1$ nor $v_2$ are in, then let $r=3$. It is not hard to show that if there exists a value occurring in exactly one row of $\mathcal A$, such values $v_1, v_2, v_3$ exist. It follows from [\[eq:lower_bound_multiplicity_1\]](#eq:lower_bound_multiplicity_1){reference-type="eqref" reference="eq:lower_bound_multiplicity_1"} that this holds for all $\ell$ such that $2h+1 \leq \ell \leq 3h-1$, and for $\ell = 2h$ if $t = 3$. 2. Otherwise, it must hold that $\ell = 2h$, $t=4$, every value occurs exactly twice in $\mathcal A$, and these $2h$ values are arranged in such a way that $h$ values lie once in each of two rows of $\mathcal A$, and the other $h$ values lie once in each of the other two rows of $\mathcal A$. Arguing as in Case $2(2)$, it follows that no such matrices $\mathcal A$ satisfying the conditions of this subcase exist. #### **Case 4: $\ell \geq 3h$.** We are safe to assume $t = 4$. We break into two subcases. 1. If there exist values $v_1, \dots, v_4$ such that for every $j \in [4]$, $v_j$ is in a row of $\mathcal A$ that no $v_i$ for $i < j$ is in, then let $r=4$. By [\[eq:lower_bound_multiplicity_1\]](#eq:lower_bound_multiplicity_1){reference-type="eqref" reference="eq:lower_bound_multiplicity_1"}, this can be done for all $\ell \geq 3h+1$. 2. Otherwise, $\ell = 3h$. By [\[eq:lower_bound_multiplicity_1\]](#eq:lower_bound_multiplicity_1){reference-type="eqref" reference="eq:lower_bound_multiplicity_1"}, two rows of $\mathcal A$ have values occurring exactly once in $\mathcal A$. The remaining two rows of $\mathcal A$ thus contain the same values. Arguing as in Case $2(2)$, it follows that no such matrices $\mathcal A$ satisfying the conditions of this subcase exist. In all cases where $r$ was defined, we count the matrices $\mathcal A$ by choosing the $\ell-r$ values that are not $v_1, \dots, v_r$, where we have $O(k)$ choices for each value. Afterwards, [\[eq:L_expressions_tuples_asymptotics_equations_1\]](#eq:L_expressions_tuples_asymptotics_equations_1){reference-type="eqref" reference="eq:L_expressions_tuples_asymptotics_equations_1"} determines the values of these remaining $r$ subsets. It is easy to check that in all of these cases, this gives the bounds given in the statement of the lemma. The lemma follows since there are $O(1)$ blocks in this partition. ◻ # References {#references .unnumbered} [^1]: We use $h$ for the number of variables of a $\mathbb{Z}$-linear form to be consistent with the notation used by [@hogan2013generalized]. [^2]: Think of this as the "offset\" of size $k$ from the minimum possible value of $L(a_1, \dots, a_h)$ when $a_1, \dots, a_h \in I_N$. [^3]: Of course, the ground set of $\Lambda$ may have fewer than $h$ elements if a representative of $\Lambda$ has a repeated element. [^4]: *The expression for $\psi$ we provide is not the same as that given in [@barbour1992poisson Theorem 3.E]. 
Indeed, our expression is at least the expression they provide, which quickly follows by applying Jensen's inequality to their final summand.* [^5]: Our expression for $V_n(A)$ deviates from that in [@hegarty2009almost Equation (2.27)], which contains a mistake. [^6]: Of course, $g$ need not be the same function in Sections [5](#sec:critical_decay){reference-type="ref" reference="sec:critical_decay"} and [6](#sec:slow_decay){reference-type="ref" reference="sec:slow_decay"}, as our assumptions on how quickly $p$ decays vary. We will assume in both sections that $g(N)$ refers to a function satisfying [\[eq:g_asymptotics\]](#eq:g_asymptotics){reference-type="eqref" reference="eq:g_asymptotics"} within the relevant context. [^7]: This is valid, since we established [\[eq:critical_fringes_expectation_3\]](#eq:critical_fringes_expectation_3){reference-type="eqref" reference="eq:critical_fringes_expectation_3"} for an arbitrary $\mathbb{Z}$-linear form on $h$ variables. [^8]: Our choice of $\Tilde{g}$, as well as much of the following argument, will seem unnatural. In the proof of Theorem [Proposition 21](#prop:slow_concentration){reference-type="ref" reference="prop:slow_concentration"}, we will follow, with some modifications mentioned there, exactly the same argument to prove the strong concentration of $|L(A)^c|$ about its expectation. As such, many of our choices will make more sense later. [^9]: We write like this so that the argument can be adapted without modification to the proof of Theorem [Proposition 21](#prop:slow_concentration){reference-type="ref" reference="prop:slow_concentration"}. The same comment applies for [\[eq:delta_bound_fringes\]](#eq:delta_bound_fringes){reference-type="eqref" reference="eq:delta_bound_fringes"}. [^10]: The $o(1)$ term in [\[eq:missing_fringes_1\]](#eq:missing_fringes_1){reference-type="eqref" reference="eq:missing_fringes_1"} corresponds to $h$-tuples $(a_1, \dots, a_h)$ with nondistinct elements, which are of a lower order. [^11]: Briefly, appeal to the latter statement of [\[eq:q_binomial_coeff_asymptotics\]](#eq:q_binomial_coeff_asymptotics){reference-type="eqref" reference="eq:q_binomial_coeff_asymptotics"} at finitely many points $\alpha \in [1, h-1]$, including $h/2$. Use the uniform continuity of $\mathsf{IH}_h(x)$ on $x \in [1,h-1]$ so that values of $\mathsf{IH}_h(\alpha)$ between consecutive points $\alpha$ are small compared to $\min_{x \in [1,h-1]} \mathsf{IH}_h(x)$. The unimodality and symmetry of [\[eq:partition_asymptotics_sequence\]](#eq:partition_asymptotics_sequence){reference-type="eqref" reference="eq:partition_asymptotics_sequence"} now imply that we can estimate $[ q^k ]\binom{N+h}{h}_q$ by the values of $\frac{\mathsf{IH}_h(\alpha)}{h!}N^{h-1}$ for those points $\alpha$ above and below $k/N$ in a manner that is uniform over $k \in \left[ N, (h-1)N \right]$. [^12]: Of course, the sequence [\[eq:weak_compositions_sequence\]](#eq:weak_compositions_sequence){reference-type="eqref" reference="eq:weak_compositions_sequence"} is not unimodal in general. Take $b_i = 0$ and $u_i = 2$ for all $i \in [h]$, for instance. [^13]: To derive [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"} from here, treat the cases $k \in \left[g(N), N\right]$ and $k \in \left[N, hN/2\right]$ separately. 
Appeal to Remark [\[rmk:partition_asymptotics_small_k\]](#rmk:partition_asymptotics_small_k){reference-type="ref" reference="rmk:partition_asymptotics_small_k"} for the former, and the existence of a fixed positive minimum for $\mathsf{IH}_h(k/N)$ with $k = O(N)$ for the latter. [^14]: Technically, this is starting from one of the first $|u_1|$ elements of $\mathcal I_k$. [^15]: As was done to prove [\[eq:unconstrained_compositions_asymptotics\]](#eq:unconstrained_compositions_asymptotics){reference-type="eqref" reference="eq:unconstrained_compositions_asymptotics"}, [\[eq:compositions_desired_1\]](#eq:compositions_desired_1){reference-type="eqref" reference="eq:compositions_desired_1"} can be observed by treating $k \in \left[g(N), N\right]$ and $k \in \left[N, hN/2\right]$ separately. [^16]: Appeal to [\[eq:num_ordered_tuples_asymptotics\]](#eq:num_ordered_tuples_asymptotics){reference-type="eqref" reference="eq:num_ordered_tuples_asymptotics"} if $k/N - \sum_{i=1}^h t_i \in \left[g(N), hN-g(N)\right]$ and [\[eq:ordered_tuples_fringes\]](#eq:ordered_tuples_fringes){reference-type="eqref" reference="eq:ordered_tuples_fringes"} if not. Of course, $(N-1)^{h-1} \sim N^{h-1}$. [^17]: As in the proof of Lemma [Lemma 25](#lem:ordered_tuples_asymptotics){reference-type="ref" reference="lem:ordered_tuples_asymptotics"}, treat $k \in \left[g(N), N\right]$ and $k \in \left[N, mN/2\right]$ separately to see this. [^18]: Surjectively map such ordered pairs of $h$-tuples to the ordered pairs of their equivalence classes in $\mathscr{D}_k$. [^19]: Given $\left(\Lambda_1, \dots, \Lambda_t \right)$, take representatives and make them the rows of the matrix $\mathcal A$ that it is mapped to. [^20]: More precisely, the partition is based on the $\ell$ subsets of $[t] \times [h]$ corresponding to these $\ell$ values.
arxiv_math
{ "id": "2309.01801", "title": "Phase Transitions for Binomial Sets Under Linear Forms", "authors": "Ryan Jeong, Steven J. Miller", "categories": "math.NT math.CO math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper we propose an $\infty$-categorical definition of abstract six-functor formalisms on varieties. Our definition is a variation on Mann's definition, with the additional requirement of having Grothendieck and Wirthmüller contexts, and recollements. Using Nagata's compactification theorem, we show that such a six-functor formalism can be given by just specifying adjoint triples on open immersions and on proper maps, satisfying certain compatibilities. Moreover, the existence of recollements is equivalent to a sheaf condition for a Grothendieck topology on the category of "varieties and spans of open immersions and proper maps". We show that our definition of six-functor formalism is equivalent to a full subcategory of lax symmetric monoidal functors from the category of smooth and complete schemes to the category of stable $\infty$-categories and adjoint triples, and characterise which lax monoidal functors on complete varieties extend to six-functor formalisms. author: - Josefien Kuijper bibliography: - bibliography.bib title: An axiomatization of six-functor formalisms --- # Introduction A six-functor formalism is meant to formalise the structure that is often present in sheaf theories in algebraic geometry, which underlie cohomology theories. Roughly speaking, a six-functor formalism is a structure that assigns to every scheme $X$ (or every variety/topological space/\...) a closed symmetric monoidal category $D(X)$. This gives, for $A$ in $D(X)$, an adjunction $$A\otimes(-) \dashv \underline{\mathrm{Hom}}(A,-).$$ Moreover, a morphism of schemes (varieties/topological spaces/\...) $f:X \longrightarrow Y$ should give rise to two adjunctions $f^*\dashv f_*$ and $f_!\dashv f^!$. Collectively this structure is known as *Grothendieck's six operations*. Many facts about cohomology theories are formal consequences of basic properties of the underlying six-functor formalism, as laid out for example in Section 1.5 of Gallauer's lecture notes [@gallauer]. Six-functor formalisms have been constructed and studied for example for manifolds and topological spaces in [@sheaves_on_manifolds] and [@volpe], for motivic homotopy theory in [@ayoub_i], [@ayoub_ii], for equivariant motivic homotopy theory in [@hoyois] and for étale cohomology of Artin stacks in [@liu_zheng_six_ops]. A classical example of a six-functor formalism is the assignment of the bounded derived category of constructible sheaves on a scheme $X$, together with the direct and inverse images $f_*$ and $f^*$, and the direct and inverse images with compact support $f_!$ and $f^!$. Six-functor formalisms that occur in nature look like "the category of (some adjective) sheaves on $X$". These generally satisfy additional properties, such as base change and the projection formula. Another common feature is the existence of Grothendieck and Wirthmüller contexts. This means that for some class $P$ of morphisms, one has $f_*\cong f_!$ (a *Grothendieck context*), and for another class $I$, $f^!\cong f^*$ (*a Wirthmüller context*). As the notation suggests, the morphisms in $P$ are often proper maps and those in $I$ open immersions. In that case, for $j:U\longrightarrow X$ an open immersion with closed complement $i:X\setminus U\longrightarrow X$, there is a diagram $$\label{eq:recollement_intro} \begin{tikzcd} D(X\setminus U) \arrow[r, "i_!"
description] & D(X) \arrow[l, "i^ !", shift left=2] \arrow[l, "i^ *"', shift right=2] \arrow[r, "j^*" description] & D(U) \arrow[l, "j_!"', shift right=2] \arrow[l, "j_*", shift left=2] \end{tikzcd}$$ When this diagram satisfies a list of properties, including that $j^* i_*=0$ and that $i_!$, $j_!$ and $j_*$ are embeddings, we say that the six-functor formalism has *recollements*. ## A comparison of different $\infty$-categorical approaches In recent years, there have been several $\infty$-categorical axiomatizations of the notion of a six-functor formalism. A convenient way to wrap up all of the structure present in a six-functor formalism, and capture some of the aforementioned properties, is by defining a six-functor formalism to be a lax symmetric monoidal functor of $\infty$-categories $$\label{eq:intro_6ff} D^*_!:\textup{Corr}(\mathcal{S}) \longrightarrow\textup{Cat}_\infty.$$ Here $\textup{Cat}_\infty$ is the cartesian symmetric monoidal $\infty$-category of $\infty$-categories and functors which have a right adjoint. The category $\mathcal{S}$ is some category of geometric objects, such as varieties, (derived) schemes, stacks or sufficiently nice topological spaces. The category $\textup{Corr}(\mathcal{S})$ is the $\infty$-category of correspondences in $\mathcal{S}$, where the objects are objects in $\mathcal{S}$, and the morphisms are diagrams (spans) $$X\xleftarrow{f} Y \xrightarrow{g} Z$$ in $\mathcal{S}$. The image of such a span under $D^*_!$ is $$D(X)\xrightarrow{f^*}D(Y) \xrightarrow{g_!} D(Z).$$ Composition in $\textup{Corr}(\mathcal{S})$ is given by pullbacks. Base change then follows from functoriality, and the projection formula comes from being a lax monoidal functor. This idea is attributed to Lurie and worked out in [@gaitsgory]. An $(\infty,2)$-categorical approach is developed by Gaitsgory and Rozenblyum in [@gaitsgory_rozenblyum]. An alternative approach, avoiding $(\infty,2)$-categories, is taken by Liu and Zheng in [@liu_zheng_gluing] and [@liu_zheng_six_ops], where they construct a six-functor formalism encoding étale cohomology of Artin stacks. In [@Hormann], Hörmann uses the theory of (op)fibrations of 2-multicategories to define abstract six-functor formalisms. In his thesis [@mann_thesis], Mann defines the category of abstract six-functor formalisms as lax monoidal functors out of the $(\infty,1)$-category of correspondences, encoding both base change and the projection formulas. Grothendieck and Wirthmüller contexts are not part of his definition, but in [@mann_thesis Proposition A.5.10] he shows how six-functor formalisms that have them can be constructed from just a lax symmetric monoidal functor $$D^*:\mathcal{S}^\textup{op}\longrightarrow\textup{Cat}_\infty$$ satisfying a number of conditions, involving the existence of certain adjoints. In a recent preprint [@khan] Khan axiomatizes six-functor formalisms on derived schemes with Grothendieck contexts for proper morphisms and Wirthmüller contexts for smooth morphisms, which he calls *weaves*. Drew and Gallauer have a similar approach in [@drew_gallauer] using *coefficient systems*, which satisfy localization (similar to the existence of recollements). The goal of this text is to give yet another $\infty$-categorical definition of six-functor formalisms on the category of varieties over a field $k$. One motivation for our definition is that a six-functor formalism as described above is an extremely over-determined object.
For example, the six operations come in pairs of adjoints, and therefore instead of giving all of them explicitly, giving only half of them should suffice. In the existing $\infty$-categorical frameworks, the data of a six-functor formalism is a functor ([\[eq:intro_6ff\]](#eq:intro_6ff){reference-type="ref" reference="eq:intro_6ff"}), encoding only the left adjoints $f^*$ and $f_!$ explicitly; this already cuts down the amount of "data" significantly. Now suppose that a six-functor formalism has Grothendieck and Wirthmüller contexts. Then for morphisms $f$ in $P$ and $I$, in theory giving only one out of the four functors $f^*, f_*, f_!$ and $f^!$ should suffice. So in this case it is to be expected that we can do even better, especially if $P$ and $I$ form a suitable factorisation system of all morphisms in $\mathcal{S}$. We show that for $\mathcal{S}$ the category of varieties over a field $k$, and $P$ and $I$ proper maps and open immersions respectively, we can indeed define a category of six-functor formalisms characterised by a minimal amount of data; see Proposition [Proposition 2](#propx:intro1){reference-type="ref" reference="propx:intro1"} and Theorem [Theorem 3](#thmx:intro){reference-type="ref" reference="thmx:intro"}. Moreover, our definition brings to light a curious analogy between six-functor formalisms and compactly supported cohomology theories. In a sense, the assignment $X\mapsto D_{\textup{constr}}^b(X)$ may be thought of as a higher version of cohomology with compact support. ## Outline For convenience, we work with algebraic varieties over a base field $k$. However, much of what we do can be carried out over an arbitrary noetherian base scheme. In Section [2](#sect:prel){reference-type="ref" reference="sect:prel"} we discuss some notation and conventions. In Section [3](#sect:recollements){reference-type="ref" reference="sect:recollements"} we recall some terminology around stable $\infty$-categories. In Section [4](#sect:reducingtospan){reference-type="ref" reference="sect:reducingtospan"} we define an $\infty$-category $\textup{Pre}\mathbf{6FF}$ of six-functor formalisms on varieties taking values in stable $\infty$-categories, with Grothendieck contexts for proper morphisms and Wirthmüller contexts for open immersions. We call these *pre-six-functor formalisms*. This definition is similar to Khan's definition of weaves, which is a subcategory of Mann's category of 6-functor formalisms. We observe that any morphism of varieties factors as an open immersion followed by a proper morphism, by Nagata's compactification theorem. Therefore to give an entire pre-six-functor formalism, it should be enough to only give the adjoint triples $i_!\dashv (i^!=i^*) \dashv i_*$ for $i$ an open immersion, and $p^* \dashv (p_*=p_!)\dashv p^!$ for $p$ proper; and each adjoint triple is of course determined by any one of the three functors in the triple. **Definition 1**. We define $\mathbf{Span}$ to be the 1-category with as objects algebraic varieties over $k$, where a morphism $X\longrightarrow Y$ is a span $$X\hookleftarrow U \xrightarrow{p} Y$$ where $U$ is an open subvariety of $X$ and $p$ is a proper morphism. The composition of two spans $X\hookleftarrow U \rightarrow Y$ and $Y\hookleftarrow V \rightarrow Z$ is defined by taking the pullback of $U \longrightarrow Y$ along $V \hookrightarrow Y$. Let $\textup{St}_{\infty}^{LL}$ be the $\infty$-category with as objects stable $\infty$-categories, and as morphisms the functors that occur as leftmost adjoint in an adjoint triple.
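To make the composition in Definition [Definition 1](#defn:span){reference-type="ref" reference="defn:span"} explicit, here is a small sketch (the names $U$, $V$, $p$ and $q$ below are ours, chosen only for illustration). Given spans $X\hookleftarrow U \xrightarrow{p} Y$ and $Y\hookleftarrow V \xrightarrow{q} Z$, the pullback $U\times_Y V$ can be identified with the open subvariety $p^{-1}(V)\subseteq U\subseteq X$, and the composite span is $$X\hookleftarrow p^{-1}(V) \xrightarrow{q\circ p|_{p^{-1}(V)}} Z,$$ whose right leg is indeed proper: $p|_{p^{-1}(V)}:p^{-1}(V)\longrightarrow V$ is a base change of the proper map $p$, and $q$ is proper.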
Using methods from [@mann_thesis] and [@liu_zheng_gluing],[@liu_zheng_six_ops] we can show the following. **Proposition 1** (Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}). *The $\infty$-category $\mathrm{Pre}\mathbf{6FF}$ is equivalent to the $\infty$-category of lax symmetric monoidal functors $$F^*_!:\mathbf{Span}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$$ which have the additional property of sending certain types of squares to adjointable squares (the *BC property*, see Definition [Definition 22](#defn:bc_span){reference-type="ref" reference="defn:bc_span"}) and $F(\emptyset)=*$.* Here $F^*_!$ explicitly encodes $(-)^*$ for proper morphisms, and $(-)_!$ for open immersions. In a sense, Proposition [Proposition 1](#propx:intro0){reference-type="ref" reference="propx:intro0"} looks like a dual to a special case of Theorem 2.49 in [@khan] and the conjecture stated on page 47 in [@scholze_notes]. There, six-functor formalisms are characterised by the data of $f^*$ for all morphisms (encoded by a functor $D^*:\mathbf{Var}^\textup{op}\longrightarrow\textup{Cat}_\infty$), satisfying certain conditions involving the left adjoint $i_!\dashv i^*$ for $i$ an open immersion, and the right adjoint $p^*\dashv p_*$ when $p$ is a proper morphism. Let us consider the following composable morphisms in $\mathbf{Span}$: $$\label{eq:localisation} X\setminus U \xhookleftarrow{=} X\setminus U \xrightarrow{i} X \xhookleftarrow{j} U \xrightarrow{=} U$$ In our definition of six-functor formalisms, we want to include the property that such a sequence is sent to a recollement ([\[eq:recollement_intro\]](#eq:recollement_intro){reference-type="ref" reference="eq:recollement_intro"}). This defines a full subcategory $$\mathbf{6FF} \subseteq \mathrm{Pre}\mathbf{6FF}$$ which we will call the $\infty$-category of *six-functor formalisms*. **Proposition 2** (Proposition [Proposition 36](#prop:6ff){reference-type="ref" reference="prop:6ff"}). *The $\infty$-category $\mathbf{6FF}$ is equivalent to the $\infty$-category of symmetric monoidal functors $$\mathbf{Span}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$$ with the BC-property, that in addition are hypersheaves for a certain topology on $\mathbf{Span}$.* We also see that six-functor formalisms satisfy descent for *abstract blowups* in Lemma [Lemma 32](#lem:localisation_implies_descent){reference-type="ref" reference="lem:localisation_implies_descent"}. Every variety $U$ has a compactification $X$, where the complement $X\setminus U$ is complete. Therefore morally, a six-functor formalism with recollements should be completely determined by its restriction to complete varieties. We can make this precise as follows. Note that the 1-category $\mathbf{Comp}$ of complete varieties is a subcategory of $\mathbf{Span}$. **Theorem 3** (Theorem [Theorem 38](#thm){reference-type="ref" reference="thm"}). *Restriction to $\mathbf{Comp}$ gives an equivalence between $\mathbf{6FF}$ and the $\infty$-category of symmetric monoidal functors $$\mathbf{Comp}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$$ which satisfy descent for abstract blowups (this is again a hypersheaf condition for a topology on $\mathbf{Comp}$) and satisfy a BC-property, see definition [Definition 35](#defn:bc_comp){reference-type="ref" reference="defn:bc_comp"}.* In other words, a six-functor formalism is uniquely determined by the $(-)^*$-part restricted to complete varieties. 
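To sketch the idea behind this statement (an informal reformulation, in the notation of ([\[eq:recollement_intro\]](#eq:recollement_intro){reference-type="ref" reference="eq:recollement_intro"}) and ([\[eq:localisation\]](#eq:localisation){reference-type="ref" reference="eq:localisation"})): given any variety $U$, choose a compactification $U\hookrightarrow X$ with boundary $Z=X\setminus U$, so that $X$ and $Z$ are complete and $i:Z\longrightarrow X$ is a proper map between complete varieties. Sending the sequence ([\[eq:localisation\]](#eq:localisation){reference-type="ref" reference="eq:localisation"}) to a recollement means in particular that $j_!$ identifies $D(U)$ with the full subcategory $$\{A\in D(X) \mid i^*A\simeq 0\}\subseteq D(X),$$ which is built entirely out of the values of the six-functor formalism on complete varieties and proper maps between them.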
Moreover, we can characterise exactly which functors $\mathbf{Comp}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ give rise to fully-fledged six-functor formalisms. Lastly, under the additional assumption that our base field $k$ has characteristic zero, using resolution of singularities we can show the following. **Corollary 4** (Corollary [Corollary 39](#cor){reference-type="ref" reference="cor"}). *Restriction to smooth and complete varieties induces a fully faithful embedding of $\mathbf{6FF}$ into the $\infty$-category of lax symmetric monoidal functors $$\mathbf{SmComp}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$$ that satisfy descent for blowups (this is a hypersheaf condition for a topology on $\mathbf{SmComp}$).* This shows that a six-functor formalism is even uniquely determined by the $(-)^*$-part on smooth and complete varieties. It is not clear whether there is a good characterisation of which lax monoidal functors $\mathbf{SmComp}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ give rise to six-functor formalisms. There is an interesting analogy between Theorem [Theorem 3](#thmx:intro){reference-type="ref" reference="thmx:intro"} and Corollary [Corollary 4](#corx:intro){reference-type="ref" reference="corx:intro"}, and results about "abstract cohomology theories with compact support" in our previous work [@kuij_descent]. For $F:\mathbf{Span}^\textup{op}\longrightarrow\mathcal{S}\textup{pectra}$ a functor, denote $H^n_c(X)=\pi_{-n}(F(X))$. If $F$ sends sequences ([\[eq:localisation\]](#eq:localisation){reference-type="ref" reference="eq:localisation"}) to fiber sequences, then these groups satisfy the long exact sequence $$\dots \longrightarrow H^n_c(U)\longrightarrow H^n_c(X) \longrightarrow H^n_c(X\setminus U) \longrightarrow H^{n+1}_c(U)\longrightarrow\dots$$ and can therefore reasonably be called a cohomology theory with compact supports. This condition on $F$ is the same hypersheaf condition that makes six-functor formalisms send sequences ([\[eq:localisation\]](#eq:localisation){reference-type="ref" reference="eq:localisation"}) to recollements, and therefore the category of abstract compactly supported cohomology theories is given by $\textup{HSh}(\mathbf{Span};\mathcal{S}\textup{pectra})_\emptyset$. By [@kuij_descent Theorem 7.2], restriction induces equivalences between $\infty$-categories of (not necessarily symmetric monoidal) sheaves $$\textup{HSh}(\mathbf{Span}; \mathcal{S}\textup{pectra})_\emptyset \simeq \textup{HSh}(\mathbf{Comp};\mathcal{S}\textup{pectra}) \simeq \textup{HSh}(\mathbf{SmComp};\mathcal{S}\textup{pectra})$$ where the Grothendieck topologies are those occuring in Propositon [Proposition 2](#propx:intro1){reference-type="ref" reference="propx:intro1"}, Theorem [Theorem 3](#thmx:intro){reference-type="ref" reference="thmx:intro"} and Corollary [Corollary 4](#corx:intro){reference-type="ref" reference="corx:intro"}. In other words, abstract compactly supported cohomology theories are determined by their restriction to $\mathbf{SmComp}$, and we can characterise which functors on $\mathbf{Comp}$, and (in contrast to the situation with six-functor formalisms) even which functors on $\mathbf{SmComp}$, extend to compactly supported cohomology theories. Our results on six-functor formalisms can be seen as a higher, more structured analogue of this. 
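As an aside, let us spell out the elementary step behind the long exact sequence above (nothing beyond the definitions already given is used): a functor $F$ sending the sequence ([\[eq:localisation\]](#eq:localisation){reference-type="ref" reference="eq:localisation"}) to a fibre sequence of spectra $F(U)\longrightarrow F(X)\longrightarrow F(X\setminus U)$ yields a long exact sequence of homotopy groups $$\dots \longrightarrow\pi_{-n}F(U)\longrightarrow\pi_{-n}F(X)\longrightarrow\pi_{-n}F(X\setminus U)\longrightarrow\pi_{-n-1}F(U)\longrightarrow\dots$$ which is precisely the displayed sequence under the identification $H^n_c(-)=\pi_{-n}(F(-))$.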
## Acknowledgements I want to thank Dan Petersen for sharing with me the idea that one should be able to recover a six-functor formalism from a functor defined on $\mathbf{Span}$, and also for many comments on earlier versions of this manuscript. I also thank Adeel Khan and Lucas Mann for comments. I thank Fabian Hebestreit for suggesting a proof of Lemma [Lemma 33](#lem:dual_of_nine){reference-type="ref" reference="lem:dual_of_nine"}, and for ever so kindly pointing out to me that in this paper there is no need to invoke *presentable* stable $\infty$-categories. # Preliminaries {#sect:prel} In this short section we recall some terminology and notation that we will use later on. No new material is presented. For $\mathcal{C}^\otimes \longrightarrow\textup{Fin}_\textup{part}$ a symmetric monoidal $\infty$-category, the opposite $\mathcal{C}^\textup{op}$ of the underlying $\infty$-category does not in general inherit the structure of a symmetric monoidal $\infty$-category. However, for $\mathcal{C}$ a symmetric monoidal 1-category, the opposite category $\mathcal{C}^\textup{op}$ is a symmetric monoidal 1-category too, and we can give classify it by a cocartesian fibration $(\mathcal{C}^\textup{op})^\otimes \longrightarrow\textup{Fin}_\textup{part}$. We introduce the following notation for the case where the symmetric monoidal 1-category is cartesian. **Notation 2**. For $\mathcal{C}$ a 1-category with products, let $$\mathcal{C}^\textup{op}_\times \longrightarrow\textup{Fin}_\textup{part}$$ denote the cocartesian fibration which classifies the symmetric monoidal $\infty$-category given by $(\mathcal{C}^\textup{op}, \times)$. Explicitly $\mathcal{C}_\times$ will have as objects tuples $(X_i)_I$ indexed by a finite set, with $X_i$ in $\mathcal{C}$. A morphism $$(X_i)_I \longrightarrow(Y_j)_J$$ is given by a partial map $\alpha:J \longrightarrow I$ and for $i\in I$ a map $$X_i \longrightarrow\prod_{j\in \alpha^{-1}(i)} Y_j.$$ **Notation 3**. Let $\mathcal{C}$ be a 1-category with products. - For $A$ a set of morphisms in $\mathcal{C}$, let $A_\times$ denote the set of morphisms in $\mathcal{C}_\times$ consisting of all $$(X_i)_I \longrightarrow(Y_j)_J$$ lying over $\alpha:J\dashrightarrow I$ such that for all $i \in I$, the map $X_i \longrightarrow\prod_{j\in \alpha^{-1}(i)} Y_j$ is in $A$. - For $A$ a set of morphisms in $\mathcal{C}$, let $A_-$ denote the subset of $A_\times$ consisting of only morphisms over identities in $\textup{Fin}^\textup{part}$ - For $B$ a set of morphisms in any $\infty$-category $\mathcal{D}$, let $B^\textup{op}$ denote the corresponding set of morphisms in $\mathcal{D}^\textup{op}$. In [@liu_zheng_gluing] the following type of multisimplicial set is defined. **Definition 4** ([@liu_zheng_gluing Definition 3.16]). For $\mathcal{C}$ an $\infty$-category and $I_1,\dots, I_k$ sets of morphisms in $\mathcal{C}$ or $\mathcal{C}^\textup{op}$, let $\mathcal{C}(I_1,\dots,I_k)$ denote the k-simplicial set whose $n$-simplices are $n$-dimensional cubes in $I_1\times \dots \times I_k$ (roughly), where all squares are cartesian squares in $\mathcal{C}$. **Notation 5**. For $\mathcal{C}$ an $\infty$-category and $I_1,\dots, I_k$ sets of morphisms in $\mathcal{C}$ or $\mathcal{C}^\textup{op}$, we denote by $\delta^* \mathcal{C}(I_1,\dots,I_k)$ the simplicial set whose $n$-simplices are $(n,\dots,n)$-vertices in $\mathcal{C}(I_1,\dots,I_k)$, in other words, the diagonal of the $k$-simplicial set $\mathcal{C}(I_1,\dots,I_k)$. 
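To unwind Notation [Notation 2](#nota:Cotimes){reference-type="ref" reference="nota:Cotimes"} in a small case (the tuples and the map $\alpha$ below are chosen purely for illustration): in $\mathbf{Var}_\times$, a morphism $$(X_1,X_2)\longrightarrow(Y_1,Y_2,Y_3)$$ lying over the map $\alpha:\{1,2,3\}\longrightarrow\{1,2\}$ with $\alpha(1)=\alpha(2)=1$ and $\alpha(3)=2$ consists of morphisms of varieties $X_1\longrightarrow Y_1\times Y_2$ and $X_2\longrightarrow Y_3$. For a set of morphisms $A$, this morphism lies in $A_\times$ precisely when both of these morphisms of varieties are in $A$, and it does not lie in $A_-$, since $\alpha$ is not an identity.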
The following notation is taken from Lecture 3 in [@scholze_notes]. Let $(\Delta_+^n)^2\subseteq (\Delta^n)^\textup{op}\times \Delta^n$ be the full subcategory on objects $(i,j)$ with $i\geq j$; in other words, as a graph this 1-category looks like the above-diagonal part of an $(n\times n)$-grid. For example, $(\Delta^1_+)^2$ looks like the three-object category $$(0,0) \longleftarrow (1,0) \longrightarrow (1,1).$$ **Definition 6**. In the special case where $k=2$, and $I_1$ and $I_2$ sets of morphisms in $\mathcal{C}$, let $\delta_+^*\mathcal{C}(I_1^\textup{op},I_2)$ be the simplicial set whose $n$-simplices are functors from $(\Delta^n_+)^2$ to $\mathcal{C}$ such that the vertical arrows are in $I_1$ and the horizontal arrows are in $I_2$, and such that the little squares are pullback squares. The following examples will be of interest to us. **Example 7**. Taking for $\mathcal{C}$ the 1-category of varieties $\mathbf{Var}$, and $I$ the set of open immersions and $P$ the set of proper morphisms, the simplicial set $$\delta^*_+\mathbf{Var}(I^\textup{op}, P)$$ is isomorphic to the nerve of the 1-category $\mathbf{Span}$ from Definition [Definition 1](#defn:span){reference-type="ref" reference="defn:span"}. **Remark 8**. In general $\delta_+^*\mathcal{C}(I_1^\textup{op},I_2)$ need not be a 1-category even if $\mathcal{C}$ is. However, in the example above, $\delta^*_+\mathbf{Var}(I^\textup{op}, P)$ and therefore $\mathbf{Span}$ is in fact a 1-category. Indeed, a morphism $$X \xhookleftarrow{i} U \longrightarrow Y$$ in $\mathbf{Span}$ is uniquely isomorphic to a span where $i$ is the inclusion of an open subvariety $U$ in $X$. A 2-morphism between two such spans amounts to an isomorphism $f:U_0\xrightarrow{\cong} U_1$ making the diagram commute. If $i_0$, $i_1$ are inclusions of open subvarieties, then such an $f$ is necessarily the identity, and in particular unique. **Example 9**. Consider the 1-category $\mathbf{Var}_\times$ that results from Notation [Notation 2](#nota:Cotimes){reference-type="ref" reference="nota:Cotimes"} applied to the 1-category of varieties. Let $I$ and $P$ be as in the previous Example [Example 7](#ex:span){reference-type="ref" reference="ex:span"}. Then $$\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op}, I_-)$$ is a symmetric monoidal 1-category, and in fact equivalent to $\mathbf{Span}_\times^\textup{op}$, as we will show in Lemma [Lemma 11](#lem:spaniscorr){reference-type="ref" reference="lem:spaniscorr"}. **Example 10**. We will also consider the symmetric monoidal $\infty$-category $$\delta^*_+\mathbf{Var}_\times(All_\times^\textup{op},All_-)$$ where $All$ denotes the set of all morphisms of varieties. This $\infty$-category will be the domain of our six-functor formalisms. Note that it contains the symmetric monoidal $\infty$-category $\mathbf{Var}_\times^\textup{op}$ as wide subcategory. **Lemma 11**. *There is an equivalence of symmetric monoidal categories $\mathbf{Span}_\times^\textup{op}\simeq \delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$. In particular, the latter is equivalent to a 1-category.* *Proof.* By [@liu_zheng_six_ops Lemma 6.1.3], $\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$ is a symmetric monoidal $\infty$-category with the $\infty$-category of correspondences $\mathbf{Var}(P^\textup{op},I)$ as underlying $\infty$-category. But this is the same as $\mathbf{Span}^\textup{op}$, which is a 1-category. It is obvious that for both $\mathbf{Span}_\times^\textup{op}$ and $\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$ the objects are tuples $(X_i)_I$ of varieties indexed by a finite set.
We give a functor $$\mathbf{Span}_\times^\textup{op}\longrightarrow\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$$ which is the identity on objects, as follows. A morphism $(X_i)_I\longrightarrow(Y_j)_J$ in $\mathbf{Span}_\times^\textup{op}$ is given by a partial map $\alpha:I\dashrightarrow J$ and for $j\in J$ a map $$Y_j \hookleftarrow U_j \rightarrow \prod_{\alpha^{-1}(j)} X_i$$ in $\mathbf{Span}$. But this is equivalent to giving the tuple $(U_j)_J$, a map $(U_j)_J\longrightarrow(Y_j)_J$ in $I_-$ and a map $(U_j)_J\longrightarrow(X_i)_I$ in $P_\times$. This gives a map in $\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$, and it is clear that this assignment gives a bijection on the morphisms in both categories. ◻ We single out the following "simple edges\" in simplicial sets of the form $\delta^*\mathcal{C}(I_1,\dots,I_k)$ and $\delta^*_+\mathcal{C}(I_1^\textup{op},I_2)$ for $\mathcal{C}$ and $I_1,\dots, I_k$ as in Notation [Notation 5](#nota:diagonal){reference-type="ref" reference="nota:diagonal"}. We observe that a 1-simplex of $\delta^*\mathcal{C}(I_1,\dots,I_k)$ is a $k$-dimensional commutative cube in $\mathcal{C}$ where the edges in direction $i$ are maps in $I_i$ for $i=1,\dots, k$. **Definition 12**. A 1-simplex of the simplicial set $\delta^*\mathcal{C}(I_1,\dots,I_k)$ is called $i$-*simple* if the maps in all directions except direction $i$ are identities. Therefore such an edge can be identified with a morphism in $I_i$. For $\mathcal{C}$, $I_1$ and $I_2$ as in Definition [Definition 6](#defn:diagonal_plus){reference-type="ref" reference="defn:diagonal_plus"}, we note that a 1-simplex of $\delta^*\mathcal{C}(I_1^\textup{op},I_2)$ is just a roof with an arrow in $I_1$ (in the "wrong" direction) an an arrow in $I_2$. **Definition 13**. A 1-simplex of the simplicial set $\delta^*_+\mathcal{C}(I_1^\textup{op},I_2)$ is called *simple* if either the morphism in $I_1$ or the morphism in $I_2$ is the identity. Therefore such an edge can be identified with a morphism in $I_2$ or $I_1^\textup{op}$. # Recollements and adjointable squares {#sect:recollements} In this section we recall some more terminology and prove a small technical lemma. **Notation 14**. Let $\textup{St}_{\infty}^L$ denote the $\infty$-category of stable $\infty$-categories and functors which have a right adjoint, and $\textup{St}_{\infty}^R$ the $\infty$-category of stable $\infty$-categories and functors which have a left adjoint. We define $\textup{St}_{\infty}^{LR}\subseteq \textup{St}_{\infty}^R$ to be the subcategory with as objects stable $\infty$-categories and as morphisms the functors that have both a left and a right adjoint. In other words, $\textup{St}_{\infty}^{LR} = \textup{St}_{\infty}^{L}\cap \textup{St}_{\infty}^R$. We define $(\textup{St}_{\infty}^{LL})^\textup{op}\subseteq (\textup{St}_{\infty}^L)^\textup{op}$ to be the image of $\textup{St}_{\infty}^{LR}$ under the anti-equivalence $$\textup{St}_{\infty}^R \xrightarrow{\sim} (\textup{St}_{\infty}^L)^\textup{op}$$ given by passing to the left adjoint of a functor. Hence $\textup{St}_{\infty}^{LL}$ is the $\infty$-category of stable $\infty$-categories and functors which are left adjoints and whose right adjoint has a right adjoint, or in other words, functors which occur as leftmost functor in an adjoint triple. 
It is clear that there is an anti-equivalence $$\textup{St}_{\infty}^{LR} \xrightarrow{\sim} (\textup{St}_{\infty}^{LL})^\textup{op}$$ given by sending the middle functor in an adjoint triple to the leftmost functor in an adjoint triple. The categories $\textup{St}_{\infty}^{LR}$ and $\textup{St}_{\infty}^{LL}$ have all small limits and colimits, and the inclusions into $\textup{Cat}_\infty$ preserve small limits; see [@nine_1 Proposition 6.1.1]. The $\infty$-category $\textup{St}_{\infty}^{LR}$ (and thus $\textup{St}_{\infty}^{LL}$) is pointed with as zero object the trivial one-object $\infty$-category. **Definition 15**. A recollement is a sequence $$\mathcal{A}'\xrightarrow{i_\circ} \mathcal{A}\xrightarrow{j^\circ}\mathcal{A}''$$ in $\textup{St}_{\infty}^{LR}$ that is both a fibre and a cofibre sequence. For a sequence as above, we denote the left adjoint of $i_\circ$ by $i^*$ and its right adjoint by $i^!$. We denote the left and right adjoint of $j^\circ$ by $j_!$ and $j_*$ respectively. From results in [@nine Section A.2] it follows that all the classical conditions for a recollement of triangulated categories are satisfied, such as 1. for $A$ in $\mathcal{A}$, there are bifibre sequences $$i_\circ i^! A \longrightarrow A \longrightarrow j_*j^\circ A$$ and $$j_!j^\circ A \longrightarrow A \longrightarrow i_\circ i^*A,$$ 2. the unit $\textup{id}\longrightarrow j^\circ j_!$ and counit $j^\circ j_*\longrightarrow\textup{id}$ are isomorphisms, 3. the unit $\textup{id}\longrightarrow i^!i_\circ$ and counit $i^*i_\circ \longrightarrow\textup{id}$ are isomorphisms. This implies for example that the functors $i_\circ$, $j_!$ and $j_*$ are fully faithful, and the functors $i^*$, $i^!$ and $j^\circ$ are essentially surjective. Occasionally we will refer to a sequence $$\mathcal{A}''\xrightarrow{j_! } \mathcal{A}\xrightarrow{i^*}\mathcal{A}'$$ in $\textup{St}_{\infty}^{LL}$ as a recollement, if for the right adjoints $j^*$ and $i_*$ of $j_!$ and $i^*$ respectively, the sequence $$\mathcal{A}'\xrightarrow{i_*} \mathcal{A}\xrightarrow{j^*}\mathcal{A}''$$ in $\textup{St}_{\infty}^{LR}$ is a recollement. We recall the following definitions from [@htt Section 7.3.1]. **Definition 16**. Consider a commutative square of categories $$\begin{tikzcd} \bullet \arrow[r, "f^*"] \arrow[d, "g^*"'] & \bullet \arrow[d, "h^*"] \\ \bullet \arrow[r, "j^*"'] & \bullet \end{tikzcd}$$ and suppose that $g^*$ and $h^*$ have right adjoints $g_*$ and $h_*$ respectively. In that case the counit of the adjunction $g^* g_*\implies \textup{id}$ gives a natural transformation $j^*g^*g_*\implies j^*$. Commutativity of the square implies that there is a natural transformation $h^*f^*g_*\longrightarrow j^*$. By the adjunction $h^* \dashv h_*$ this gives a natural transformation $f^*g_*\implies h_*j^*$. We say that the square is *vertically right adjointable* if the natural transformation $f^*g_*\implies h_*j^*$ is an isomorphism. Vertically/horizontally left/right adjointable squares are defined similarly. **Definition 17**. A square of $\infty$-categories is vertically/horizontally left/right adjointable if the corresponding square of homotopy categories is. The following lemma, and its proof, are a special case of Theorem 2.5 in [@parshall_scott]. **Lemma 18**.
*Suppose we have the following diagram of adjoint triples and stable $\infty$-categories $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAB12BBAchAF9S6TLnyEUARnJVajFm05cBQkBmx4CRAEzTq9Zq0QduPPoOHqxRMhJn75RzgCEzKtaM2TStvXMPGnZQsPcWQdH1kDBXYXMxkYKABzeCJQADMAJwgAWyQyEBwIJABmXyijDIB9TgBjLAyakGoGOgAjGAYABRENcRAsMGxYIJBMnLzqQqQpSIcQACtq9jqGppAW9q6eqyMBodZzUazcxBmpxAAWMrm0JZXG5raO7stPfsGsYcOxk7OixB0s38WAAerV6g91k8tq8+ntPgcVD9ppN-vl7P55iCAFRrDbPbZvDJYRIACxwazgpKwaQpRi0I2Rp1REyBbCxAEI8dCXiE2AwYLTKdShYgALRab7HJCA84AVmu-gAjncIdzNrzemx4V8kdKASzmWzdpVcY8NYS+gKhdQqTS6RKpeMDQV-jMMdqueaCbC2MSyRTbSK6SAGU6TqVXUgrsb1qrVt6YXzdh9del9ZHzui-H6QV6oRbfRUSeThfa2GG9c7M2jFbmzQWfcn1oLAyA7aLHVWTjH5XWjABrMHLNWJzU7d77Rn63v-SMeowMPPqptaxetsudyXd6OG+c5xc4ldJtcgf2loPl+nT50KqOXfsgAeVfP4k8T89tjshyvp2+GmMFyfU1j3HN5rS-YMkC7P8Tjvc5ASApVlzHS0-RLSCr1DG84MNRCDxAZCGzfMCrQ3S8t3DXd7xmb8ZVQoszwwtYgLQZcqIfGiKIdfDylUI9+AofggA \begin{tikzcd} \mathcal{A}' \arrow[d, "r_\circ" description] \arrow[r, "j_\circ" description] & \mathcal{A}\arrow[d, "p_\circ" description] \arrow[r, "i^\circ" description] \arrow[l, "j^*"', shift right=2] \arrow[l, "j^!", shift left=2] & \mathcal{A}'' \arrow[d, "q_\circ" description] \arrow[l, "i_*", shift left=2] \arrow[l, "i_!"', shift right=2] \\ \mathcal{B}' \arrow[r, "l_\circ" description] \arrow[u, "r^!"', shift right=2] \arrow[u, "r^*", shift left=2] & \mathcal{B}\arrow[r, "k^\circ" description] \arrow[l, "l^!", shift left=2] \arrow[l, "l^*"', shift right=2] \arrow[u, "p^!"', shift right=2] \arrow[u, "p^*", shift left=2] & \mathcal{B}'' \arrow[l, "k_!"', shift right=2] \arrow[l, "k_*", shift left=2] \arrow[u, "q^!"', shift right=2] \arrow[u, "q^*", shift left=2] \end{tikzcd}$$ where the rows are recollements. We assume that the diagram of maps in $\textup{St}_{\infty}^{LL}$ commutes, in other words, $(r^*,p^*)$ commutes with $(j^*, l^*)$ and $(p^*,q^*)$ commutes with $(i_!,k_!)$.* - *if $(r_\circ, p_\circ)$ commutes with $(j^*,l^*)$, then $(p_\circ,q_\circ)$ commutes with $(i_!,k_!)$.* - *if $(r^*,p^*)$ commutes with $(j_\circ,l_\circ)$, then $(p^*,q^*)$ commutes with $(i^\circ,k^\circ)$.* *Proof.* For part (1), since the upper row is a recollement, for $A$ an object of $\mathcal{A}$ there is a bifibre sequence $$i_!i^\circ A \longrightarrow A \longrightarrow j_\circ j^* A$$ in $\mathcal{A}$. Applying $p_\circ$ gives a bifibre squence $$p_\circ i_!i^\circ A \longrightarrow p_\circ A \longrightarrow p_\circ j_\circ j^* A.$$ Using that $(r^*,p^*)$ commutes with $(j^*, l^*)$, we see that $(r_\circ,p_\circ)$ commutes with $(j_\circ,l_\circ)$, and hence we can rewrite $p_\circ j_\circ j^*A = l_\circ r_\circ j^* A$. Since $(r_\circ,p_\circ)$ commutes with $(j^*,l^*)$, we can again rewrite this as $l_\circ l^* p_\circ A$. Since the lower row is a recollement, we also have a bifibre sequence $$k_!k^\circ p_\circ A \longrightarrow p_\circ A \longrightarrow l_\circ l^* p_\circ A.$$ This shows that $p_\circ i_!i^\circ A = k_!k^\circ p_\circ A$. It follows from the assumptions that $(p_\circ,q_\circ)$ commutes with $(i^\circ,k^\circ)$, hence we can rewrite this as $k_!q_\circ i^\circ$. Using that $i^\circ$ is surjective on objects, it follows that $p_\circ i_!=k_!q_\circ$, as desired. The proof of part (2) is similar, but we write it out nonetheless. 
For $B$ an object in $\mathcal{B}$, there is a bifibre sequence $$k_!k^\circ B \longrightarrow B \longrightarrow l_\circ l^* B.$$ We apply $p_*$, then for the rightmost term use that $(r^*,p^*)$ commutes with $(j_\circ,l_\circ)$ and that $(r^*,p^*)$ commutes with $(j^*, l^*)$, and for the leftmost term we use that $(p^*,q^*)$ commutes with $(i_!,k_!)$. This gives us a bifibre sequence $$i_! q^*k^\circ B \longrightarrow p^*B \longrightarrow j_\circ j^* p^* B.$$ We know that such a bifibre sequence also exists where the leftmost term is $i_!i^\circ p^* B$. Since $i_!$ is an embedding, this implies $q^*k^\circ B = i^\circ p^* B$, as desired. ◻ # Reducing to proper maps and open immersions {#sect:reducingtospan} The symmetric monoidal structure that we consider on $\textup{St}_{\infty}^{L}$ is cartesian; therefore by [@HA Proposition 2.4.1.7], the $\infty$-category of lax symmetric monoidal functors $$\mathbf{Var}_\times(All_\times^\textup{op},All_-)\longrightarrow(\textup{St}_{\infty}^{L})^\otimes$$ is equivalent to the $\infty$-category $\textup{Fun}^\textup{lax}(\mathbf{Var}_\times(All_\times^\textup{op},All_-),\textup{St}_{\infty}^{L})$ of *lax cartesian structures*, which are functors $$F:\mathbf{Var}_\times(All_\times^\textup{op},All_-) \longrightarrow\textup{St}_{\infty}^L$$ such that for $(X_i)_I$ in $\mathbf{Var}_\times(All_\times^\textup{op},All_-)$, the canonical inert morphisms $(X_i)_I\longrightarrow X_i$ induce an isomorphism $$\label{eq:lax_cartesian_structure} F((X_i)_I) \longrightarrow\prod_I F(X_i).$$ The $\infty$-category of six-functor formalisms $\mathbf{6FF}$ will be a full subcategory of the $\infty$-category of lax cartesian structures $$\textup{Fun}^\textup{lax}(\mathbf{Var}_\times(All_\times^\textup{op}, All_-), \textup{St}_{\infty}^L)$$ and hence equivalent to a full subcategory of the $\infty$-category of lax symmetric monoidal functors $$\textup{Fun}^{\otimes}(\mathbf{Var}(All^\textup{op},All), \textup{St}_{\infty}^L).$$ For a functor $$D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L},$$ and $f:X\longrightarrow Y$ a morphism of varieties, let $f^*$ denote the image under $D^*_!$ of the span $$Y \xleftarrow{f} X = X$$ and $f_!$ the image of the span $$X=X\xrightarrow{f} Y.$$ Since $(-)^*$ is the restriction of $D^*_!$ to $\mathbf{Var}_\times^\textup{op}$ and hence a lax cartesian structure, this implies that $f^*$ is a lax monoidal functor. We will denote the right adjoint of $f^*$ by $f_*$, and the right adjoint of $f_!$ by $f^!$. **Definition 19**. We denote by $\mathrm{Pre}\mathbf{6FF}$ the full subcategory of lax cartesian structures $$D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}$$ together with 1. a natural isomorphism between the functors from the category of varieties and proper maps given by sending $p:X\longrightarrow Y$ to $p_*$ and to $p_!$ respectively, 2. and a natural isomorphism between the functors from the category of varieties and open immersions given by sending $i:U\longrightarrow X$ to $i^*$ and to $i^!$ respectively. We call such a functor a *pre-six-functor formalism*, or pre-6ff for short. **Remark 20**. In our definition, a pre-six-functor formalism is given by a lax cartesian structure $D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}$ with the additional *data* of two natural isomorphisms of functors.
However, as we will see in Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}, every pre-six-functor formalism is equivalent to a lax cartesian structure $\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}$ that restricts to a functor $$\mathbf{Span}^\textup{op}_\times\longrightarrow\textup{St}_{\infty}^{LL}$$ which takes values in the category of "adjoint triples", in other words, where the natural isomorphisms between $p_!$ and $p_*$ and between $i^*$ and $i^!$ are in fact identities. Therefore, if $D^*_!$ is a lax cartesian structure for which natural isomorphisms as specified in Definition [Definition 19](#defn:pre6ff){reference-type="ref" reference="defn:pre6ff"} exist, these natural isomorphisms are essentially unique. Note that what we call a pre-6ff (when the lax cartesian structure is interpreted as lax symmetric monoidal functor) is a six-functor formalism according to [@mann_thesis] with an additional property; we could call these "six-functor formalisms with (Wirthmüller and Grothendieck) contexts". Other classical properties of six-functor formalisms, such as base change and the projection formula, hold for 6ff's according to Mann's definitions, and therefore also for our definition of pre-6ff's; see [@mann_thesis Proposition A.5.8]. Our pre-6ff's are exactly the six-functor formalisms that can arise from an application of [@mann_thesis Proposition A.5.10]. Eventually we will consider an even narrower notion of six-functor formalisms, with contexts and *recollements*, in the next section. **Definition 21**. We call a commutative square $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAAoANAfSwEpuASRABfUuky58hFAEZyVWoxZt2ATW4ArAQClR4kBmx4CRMrMX1mrRBwBa3ANYCA0vonHpReRepWVtuwAatwMAgAyooowUADm8ESgAGYAThAAtkhkIDgQSABM1AxYYDYgUHRwABYx7iCpGVnUuUjyIMWlbBXVtWLJaZmIbS2IAMxFJWXdNVB1DYOFOXljE522070UIkA \begin{tikzcd} (X_i)_I \arrow[d] \arrow[r] & (Y_j)_J \arrow[d] \\ (Z_k)_K \arrow[r] & (V_l)_L \end{tikzcd}$$ in $\mathbf{Var}_\times$ a *square of type* $(P_\times,I_-)$ if it is a pullback square in $\mathbf{Var}_\times$, and the pair of **vertical** maps is in $P_\times$ and the pair of **horizontal** maps is in $I_-$. Squares of types $(I_-,P_-)$, $(P_\times,P_-)$ and $(I_-,I_-)$ are defined similarly. For $X$ an arbitrary simplicial set, by a *square* in $X$ we mean a map of simplicial set $\Delta^1\times \Delta^1 \longrightarrow X$. Via the inclusions $$\Delta^1 \cong \Delta^1\times \Delta^0 \longrightarrow\Delta^1\times \Delta^1$$ and the inclusions $$\Delta^1 \cong \Delta^0\times \Delta^1 \longrightarrow\Delta^1\times \Delta^1$$ induced by the two inclusions of $\Delta^0$ in $\Delta^1$, a square in $X$ determines four 1-simplices in $X$. We note that the morphisms in $P_\times, P_-$ and $I_-$ can be considered as simple morphisms in $\delta^*_+\mathbf{Var}(P_\times^\textup{op}, I_-)$ and $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-)$. Therefore we can speak of squares of types $(P_\times,I_-)$, $(I_-,P_-)$, $(P_\times,P_-)$ and $(I_-,I_-)$ in $\delta^*_+\mathbf{Var}_\times(P_\times^\textup{op},I_-)$ and $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-)$, and using the identification of Lemma [Lemma 11](#lem:spaniscorr){reference-type="ref" reference="lem:spaniscorr"}, also in $\mathbf{Span}_\times$. **Definition 22**. 
Let $\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op},\textup{St}_{\infty}^{LL})$ denote the subcategory of lax cartesian structures (i.e., functors for which the maps of the form ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}) are isomorphisms) that in addition send all squares in $\mathbf{Span}_\times$ of type $(I_-,I_-)$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ and $(P_-^\textup{op},I_-)$ to vertically right adjointable squares in $\textup{St}_{\infty}^{LL}$. We will call these *BC lax cartesian structures*, or *lax cartesian structures with the BC-property*, where BC stands for either Beck-Chevalley or base change. In this section, using the ideas in the proof of [@mann_thesis Proposition A.5.10], we prove the following. **Proposition 23**. *There is an equivalence of categories $$\Phi:\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL}) \xrightarrow[]{\sim} \mathrm{Pre}\mathbf{6FF}.$$* Let $$\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-, P_-^\textup{op}),\textup{St}_{\infty}^{LL})$$ denote the $\infty$-category of functors $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-, P_-^\textup{op})\longrightarrow\textup{St}_{\infty}^{LL}$ for which the maps of the form ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}) are isomorphisms, and that in addition send all squares in $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-, P_-^\textup{op})$ of type $(P_\times,I_- )$, $(I_-,P_-^\textup{op})$, $(P_\times,P_-)$ an $(I_-,I_-)$ to vertically right adjointable squares in $\textup{St}_{\infty}^{LL}$. We will prove Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"} in three steps. **Lemma 24**. *There is an equivalence of categories $$\alpha:\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL}) \xrightarrow{\sim} \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-, P_-^\textup{op}),\textup{St}_{\infty}^{LL}).$$* *Proof.* Recall that by Lemma [Lemma 11](#lem:spaniscorr){reference-type="ref" reference="lem:spaniscorr"}, we can identify $\mathbf{Span}^\textup{op}$ with $\delta^*_+\mathbf{Var}_\times(P^\textup{op}_\times,I_-)$. By [@liu_zheng_gluing Theorem 4.27], the restriction map $$\phi_0:\delta^*\mathbf{Var}_\times(P^\textup{op}_\times,I_-) \longrightarrow\delta^*_+\mathbf{Var}_\times(P^\textup{op}_\times,I_-)$$ is a categorical equivalence. Let $$\phi_1:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},P_-^\textup{op},I_-, I_-) \longrightarrow \delta^*\mathbf{Var}_\times(P_\times^\textup{op}, I_-)$$ be the map given in degree $n$ by precomposition with the product of diagonals $$(\Delta^n)^\textup{op}\times \Delta^n \longrightarrow (\Delta^n)^\textup{op}\times (\Delta^n)^\textup{op}\times \Delta^n \times \Delta^n.$$ We apply [@liu_zheng_gluing Theorem 5.4] twice, with respect to the sets of morphisms $P_\times^\textup{op}, P_-^\textup{op}$ and $I_-, I_-$. It is clear that conditions 1, 2, 3, 4 and 6 are fulfilled. 
For condition 4, we observe that for a square if all the maps are open immersions then the natural map $V\longrightarrow U\times_X Y$ is an open immersion since composing with $U\times_X Y\longrightarrow U$ yields an open immersion, and if all the maps are proper, then $V\longrightarrow U\times_X Y$ is proper since composing with $U\times_X Y\longrightarrow U$ yields a proper morphism. Lastly, there is an isomorphism of simplicial sets $$\phi_2: \delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \xrightarrow{\cong} \delta^*\mathbf{Var}_\times(P_\times^\textup{op},P_-^\textup{op},I_-, I_-)$$ given in degree $n$ by precomposition with the map $$(\Delta^n)^\textup{op}\times (\Delta^n)^\textup{op}\times \Delta^n \times \Delta^n \longrightarrow (\Delta^n)^\textup{op}\times \Delta^n \times \Delta^n \times (\Delta^n)^\textup{op}$$ which simply permutes the factors in the product of categories in the obvious way (i.e. interchanging the second and the fourth factor). Composing these maps gives a categorical equivalence $$\phi:=\phi_0\circ\phi_1\circ\phi_2: \delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \longrightarrow\mathbf{Span}^\textup{op}_\times.$$ and precomposing with this map gives an equivalence of functor categories $$\phi^*:\textup{Fun}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL}) \xrightarrow{\sim} \textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}),\textup{St}_{\infty}^{LL}).$$ Let us denote the restriction of $\phi^*$ to $\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})$ by $\alpha$. In order to characterise the image of $\alpha$, we remark that a 1-simplex of $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op})$ is a commutative 4-cube in $\mathbf{Var}_\times$, where the morphisms in the four different directions are in $P_\times ^\textup{op}$, $I_-$, $I_-$ and $P_-^\textup{op}$ respectively. However, the statement of this lemma only deals with 1-simplices which have identities in all but one of the four directions, and can therefore be identified with a morphism in $P_\times^\textup{op}$, $I_-$ or $P_-^\textup{op}$. Let $f = \alpha(g):\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \longrightarrow\textup{St}_{\infty}^{LL}$ be in the image of $\alpha$. For a square in $\mathbf{Var}_\times$ of types $(I_-,I_-)$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ and $(P_-^\textup{op},I_-)$, it is clear that $f$ sends these to vertically right adjointable squares; indeed, all these morphisms and therefore the squares made up of them, also live in $\mathbf{Span}^\textup{op}$. Therefore $g$ sends them to pullback squares, and $f$ does the same. On the other hand, let $f:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \longrightarrow\textup{St}_{\infty}^{LL}$ be a morphisms in $$\textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$$ that sends said squares to vertically right adjointable squares. 
Since there is an equivalence $$\phi^*:\textup{Fun}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL}) \xrightarrow{\sim} \textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}),\textup{St}_{\infty}^{LL}),$$ we can write $f=\phi^*(g)$ for some $g$ in $\textup{Fun}(\delta^*\mathbf{Var}_\times(P^\textup{op}_\times,I_-), \textup{St}_{\infty}^{LL})$ and it is clear that $g$ sends squares of the given types to vertically right adjointable squares when $f$ does. Similarly, the existence of isomorphisms ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}), is preserved and reflected by $\alpha$, since the morphisms $(X_i)_I \longrightarrow X_i$ are present in each of the difference source categories. This shows that the image of $\alpha$ is the full subcategory $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$ of $\textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$, on functors for which the maps of the form ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}) are isomorphisms, and that in addition send all squares in $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-, P_-^\textup{op})$ of type $(I_-,I_-)$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ and $(P_-^\textup{op},I_-)$ to vertically right adjointable squares in $\textup{St}_{\infty}^{LL}$. ◻ In the next step we change the variance of some of the inputs of our functors that will grow into six-functor formalisms, by passing to adjoints. **Definition 25**. Let $$\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})$$ denote the full subcategory of $\textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})$ on functors for which - the maps ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}) are isomorphisms - simple morphisms in $P_\times^\textup{op}$ and $I_-$ are sent to $\textup{St}_{\infty}^{LL}$ and simple morphisms in $I_-^\textup{op}$ and $P_-$ to $\textup{St}_{\infty}^{LR}$ - squares of types $(I_-^\textup{op},I_-)$, $(I_-^\textup{op},P_\times^\textup{op})$, $(P_-,P_\times^\textup{op})$ and $(P_-,I_-)$ are sent to vertically left adjointable squares. **Lemma 26**. 
*There is an equivalence of categories $$\beta:\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL}) \longrightarrow \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L}).$$* *Proof.* The key point of the proof is that we "make Lemma 1.4.4 [@liu_zheng_six_ops] functorial", so that we can apply it to construct a map $$\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL}) \longrightarrow \textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})$$ by sending a functor $f$ in $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$, which by definition satisfies the conditions of [@liu_zheng_six_ops Lemma 1.4.4], to the functor $f'$ that is the result of applying said lemma. The functor $f'$ is obtained from $f$ by passing to the right adjoint functor of the image of edges in $I_-$ and $P_-^\textup{op}$, changing the variance in these edges. As a result, the functor $f'$ takes values in $\textup{St}_{\infty}^L$ instead of $\textup{St}_{\infty}^{LL}$. We make this precise using the idea of [@liu_zheng_six_ops Remark 1.4.5(5)]. We apply [@liu_zheng_six_ops Lemma 1.4.4] to the evaluation functor $$\textup{ev}:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \times \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL}) \longrightarrow\textup{St}_{\infty}^{LL}$$ where the relevant squares are constant in the argument $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$ and in the other argument correspond to a square in $\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op})$ of one of the types $(I_-,I_-)$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ and $(P_-^\textup{op},I_-)$; the evaluation map sends these squares to vertically right adjointable squares by construction. The result of applying [@liu_zheng_six_ops Lemma 1.4.4] to this evaluation map is now a functor $$\textup{ev}':\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-) \times \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL}) \longrightarrow\textup{St}_{\infty}^L$$ which induces a functor $$\beta:\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL}) \longrightarrow \textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L}).$$ From the construction it is clear that $\beta$ lands in $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})$. To show that $\beta$ is an equivalence, we simply construct an inverse. 
It is clear that we can apply [@liu_zheng_six_ops Lemma 1.4.4] to $$\textup{ev}:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-) \times \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L}) \longrightarrow\textup{St}_{\infty}^{L}$$ to obtain a map $$\textup{ev}'':\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \times \textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L}) \longrightarrow\textup{St}_{\infty}^L$$ which in fact lands in $\textup{St}_{\infty}^{LL}$, by passing to left adjoints for the image on morphisms in $I_-^\textup{op}$ and $P_-$. This gives a map $$\beta':\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})\longrightarrow\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$$ which is the inverse of $\beta$. ◻ **Remark 27**. For $F$ in $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L})$, let us say that it sends an opposite map of varieties $f: Y\leftarrow X$ to $f^*$ and a map $f:X \rightarrow Y$ to $f_!$, where $f$ can be a proper map or an open immersion. By assumption, for $i$ an open immersion $i^*$ has a left adjoint, which we for now denote by $i_?$, and for $p$ proper $p_!$ has a left adjoint which for now we denote by $p^?$. Then the image under $\beta'$ $$\beta'(F):\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}) \longrightarrow\textup{St}_{\infty}^{LL}$$ in $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op}), \textup{St}_{\infty}^{LL})$ encodes $p^*$, $i_?$, $i_!$ and $p^?$ for $p$ proper and $i$ open. By Lemma 4.5, $\beta'(F)$ is naturally equivalent to a functor $G:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-, I_-,P_-^\textup{op})\longrightarrow\textup{St}_{\infty}^{LL}$ obtained from some functor $H:\mathbf{Span}_\times^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ in $\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL})$ by precomposing with the categorical equivalence $$\phi:\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-,I_-,P_-^\textup{op}) \longrightarrow\mathbf{Span}_\times.$$ This implies that for the $F$ we started with, there are natural isomorphisms $i_?\cong i_!$ for $i$ proper, and $p^?\cong p^*$ for $p$. As a consequence of that, $i^*$ is naturally isomorphic to the right adjoint of $i_!$, which we denote by $i^!$ as usual, and $p_!$ is naturally isomorphic the right adjoint of $p^*$, which we denote by $p_*$ as usual. In the last step in proving Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}, we again precompose with a categorical equivalence. **Lemma 28**. 
*There is an equivalence of categories $$\mathrm{Pre}\mathbf{6FF}\xrightarrow{\sim}\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^L).$$*

*Proof.* We observe that there is a categorical equivalence $$\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-) \xlongrightarrow{\sim} \delta^*\mathbf{Var}_\times(All_\times^\textup{op}, All_-).$$ We see this by applying [@liu_zheng_gluing Theorem 5.4] twice, with respect to the sets of morphisms $P_\times^\textup{op}, I_-^\textup{op}$ and $I_-,P_-$. To check condition 1 in the theorem, let $f:(X_i)_I\longrightarrow(Y_j)_J$ be a morphism in $All_\times$ over a partial map $\alpha:J \dashrightarrow I$, given by maps $f_i:X_i \longrightarrow\prod_{\alpha^{-1}(i)} Y_j$. By Nagata's compactification theorem, every $f_i$ can be factored as an open immersion followed by a proper map $$X_i \hookrightarrow Z_i \rightarrow \prod_{\alpha^{-1}(i)} Y_j,$$ and therefore $f$ factors as a map $(X_i)_I\longrightarrow(Z_i)_I$ in $I_-$ and a map $(Z_i)_I\longrightarrow(Y_j)_J$ in $P_\times$. Similarly, every map in $All_-$ factors as a map in $I_-$ followed by a map in $P_-$. Conditions 2, 3, 4, 6 are obvious. For condition 4, we observe that in a square the natural map $V\longrightarrow U\times_X Y$ is proper since composing with $U\times_X Y\longrightarrow U$ yields a proper map, and an open immersion since composing with $U\times_X Y \longrightarrow Y$ yields an open immersion. Using [@liu_zheng_gluing Theorem 4.27] again, we see that there is a categorical equivalence of simplicial sets $\delta^*\mathbf{Var}_\times(All_\times^\textup{op}, All_-) \longrightarrow\delta^*_+\mathbf{Var}_\times(All_+^\textup{op}, All_-)$. Precomposing with this categorical equivalence and the one above, we get an equivalence of $\infty$-categories which we denote by $$\gamma':\textup{Fun}(\delta_+^*\mathbf{Var}_\times(All_+^\textup{op}, All_-), \textup{St}_{\infty}^{L}) \xrightarrow{\sim} \textup{Fun}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^{L}).$$ Let us denote the preimage of $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^L)$ under $\gamma'$ by $\mathcal{A}$, and let $\gamma$ be the restriction of $\gamma'$ $$\gamma:\mathcal{A} \xrightarrow{\sim}\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^L),$$ so that $\gamma$ is itself an equivalence.

Now we show that $\mathcal{A}$ is the $\infty$-category $\mathrm{Pre}\mathbf{6FF}$. Let $D^*_!$ be in $\mathrm{Pre}\mathbf{6FF}\subseteq \textup{Fun}(\delta_+^*\mathbf{Var}_\times(All_+^\textup{op}, All_-), \textup{St}_{\infty}^{L})$. It is clear that for $F=\gamma'(D^*_!)$, the maps ([\[eq:lax_cartesian_structure\]](#eq:lax_cartesian_structure){reference-type="ref" reference="eq:lax_cartesian_structure"}) are isomorphisms. Moreover, $F$ sends a morphism $f$ in $P_\times^\textup{op}$ to $f^*$, $i$ in $I_-^\textup{op}$ to $i^\circ$, $j$ in $I_-$ to $j_!$ and $g$ in $P_-$ to $g_\circ$. This shows that morphisms in $P_\times^\textup{op}$ and $I_-$ end up in $\textup{St}_{\infty}^{LL}$, since these are the leftmost functors in adjoint triples, and morphisms in $I_-^\textup{op}$ and $P_-$ end up in $\textup{St}_{\infty}^{LR}$. Lastly, we check adjointability.
For a square of type $(I_-^\textup{op}, I_-)$ this follows from the $(-)_!$-part of $D^*_!$ being functorial in $All_-$, since the left adjoint of $i^\circ$ is $i_!$. Similarly, for squares of type $(P_-, P_\times^\textup{op})$, vertical left adjointability follows from $(-)^*$ being functorial in all morphisms in $All_+$. For squares of types $(I_-^\textup{op},P_\times^\textup{op})$ and $(P_-,I_-)$, vertical left adjointability follows from $D^*_!$ being functorial in spans; recall that an $I_-$-morphism $(U_i)_I \hookleftarrow (X_i)_I$ and a $P_\times^\textup{op}$-morphism $(X_i)_I \leftarrow (Y_j)_J$ compose by taking the pullback, which results in a square of type $(I_-,P_\times^\textup{op})$.

Now let $D^*_!$ be an arbitrary functor in $\textup{Fun}(\delta_+^*\mathbf{Var}_\times(All_+^\textup{op}, All_-), \textup{St}_{\infty}^{L})$, such that $F=\gamma'(D^*_!)$ is in $\textup{Fun}^\textup{lax}_{BC}(\delta^*\mathbf{Var}_\times(P_\times^\textup{op},I_-^\textup{op}, I_-,P_-), \textup{St}_{\infty}^L)$. By Remark [Remark 27](#rmk:triples){reference-type="ref" reference="rmk:triples"}, we see that $i^*=i^!$ and $p_!=p_*$; in other words, $D^*_!$ is a pre-6ff. This shows that $\mathcal{A}=\mathrm{Pre}\mathbf{6FF}$ as desired. ◻

*Proof of Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}.* This proposition now follows from Lemma [Lemma 24](#lem:w6FF_step_1){reference-type="ref" reference="lem:w6FF_step_1"}, Lemma [Lemma 26](#lem:w6FF_step_2.1){reference-type="ref" reference="lem:w6FF_step_2.1"} and Lemma [Lemma 28](#lem:w6FF_step_3){reference-type="ref" reference="lem:w6FF_step_3"}. ◻

# Reducing to complete varieties {#sect:reducing_to_comp}

In this section we will define a subcategory $$\mathbf{6FF}\subseteq \mathrm{Pre}\mathbf{6FF}$$ of pre-six-functor formalisms with *recollements*, which we will call six-functor formalisms. Under the equivalence $$\mathrm{Pre}\mathbf{6FF}\simeq \textup{Fun}^{\textup{lax}}_{BC}(\mathbf{Span}^\textup{op},\textup{St}_{\infty}^{LL})$$ from Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}, the six-functor formalisms will correspond to BC lax cartesian structures $$F:\mathbf{Span}^\textup{op}_\times \longrightarrow\textup{St}_{\infty}^{LL}$$ for which moreover the restriction $F:\mathbf{Span}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ is a hypersheaf for the topology $\tau^c_{A\cup L}$ studied in [@kuij_descent]. The following definition can be thought of as defining "six-functor formalisms with contexts and recollements".

**Definition 29**. We denote by $\mathbf{6FF}$ the subcategory of pre-six-functor formalisms $$D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}$$ where for $i:X\setminus U\longrightarrow X$ a closed embedding, and $j:U\longrightarrow X$ the complementary open immersion, the diagram $$\label{eq:recollement} \begin{tikzcd} D(X\setminus U) \arrow[r, "i_\circ" description] & D(X) \arrow[l, "i^ !", shift left=2] \arrow[l, "i^ *"', shift right=2] \arrow[r, "j^\circ" description] & D(U) \arrow[l, "j_!"', shift right=2] \arrow[l, "j_*", shift left=2] \end{tikzcd}$$ is a recollement. In particular, this implies that $D(\emptyset)$ is the trivial one-object stable $\infty$-category. By [@nine Proposition A.2.10], a bifiber sequence in $\textup{St}_{\infty}^{LR}$ of the shape [\[eq:recollement\]](#eq:recollement){reference-type="ref" reference="eq:recollement"} is a recollement if $i_\circ$ and $j_*$ are full embeddings.

**Lemma 30**.
*For $D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}$ a pre-six functors formalism, if $D^*_!$ sends a sequences of the form $$\label{eq:localisationseq} X \setminus U \xrightarrow{i} X \xhookleftarrow{j} U$$ to a cofibre sequences $$D(X\setminus U) \xrightarrow{i_!} D(X) \xrightarrow{j^*} D(U)$$ in $\textup{St}_{\infty}^{LR}$, then the diagrams $$\label{eq:recollement2} \begin{tikzcd} D(X\setminus U) \arrow[r, "i_\circ" description] & D(X) \arrow[l, "i^ !", shift left=2] \arrow[l, "i^ *"', shift right=2] \arrow[r, "j^\circ" description] & D(U) \arrow[l, "j_!"', shift right=2] \arrow[l, "j_*", shift left=2] \end{tikzcd}$$ are recollements. In particular $D^*_!$ is a six functors formalism.* *Proof.* Let us consider the sequence $$X\setminus U \longrightarrow X \leftarrow X \setminus U$$ in the $\infty$-category $\textup{Corr}(\mathbf{Var})_{All,All}$, which has a unique (up to unique isomorphism) composition $$X\setminus U \xrightarrow{\textup{id}} X\setminus U.$$ Therefore the composition $$F(X\setminus U) \xrightarrow{i_\circ} F(X)\xrightarrow{i^*} F(X\setminus U)$$ is the identity, so the counit $i^*i_\circ \Rightarrow \textup{id}$ is an isomorphism by [@elephant Lemma A.1.1.1], which implies that $i_\circ$ is fully faithful. On the other hand, the sequence $$X\setminus U \longrightarrow X \leftarrow X \setminus U$$ composes to $$X \setminus U \leftarrow\emptyset \longrightarrow U$$ which shows that $$F(X\setminus U)\xrightarrow{i_\circ} F(X) \xrightarrow{j^\circ} F(U)$$ composes to $0$. By [@nine Lemma A.2.5(ii)], this implies that the diagram ([\[eq:recollement2\]](#eq:recollement2){reference-type="ref" reference="eq:recollement2"}) is a recollement. ◻ **Definition 31**. A pullback square of algebraic varieties is called an *abstract blowup square* if $i$ is a closed immersion, $p$ a proper map, and the induced map $p:Y\setminus E \longrightarrow X \setminus C$ an isomorphism. The following lemma essentially says that descent for abstract blowups follows from base change (i.e., the Beck-Chevally property) and localisation; see for example [@cisinski_13 Proposition 3.7], [@gallauer Section 1.5.5]. **Lemma 32**. *For $F:\mathbf{Span}_\times^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ in $\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op}_\times, \textup{St}_{\infty}^{LL})$, if $F$ sends sequences of the form ([\[eq:localisationseq\]](#eq:localisationseq){reference-type="ref" reference="eq:localisationseq"}) to fibre sequences, then it sends abstract blowup squares to pullback squares.* We note that a functor $\mathbf{Span}^\textup{op}\longrightarrow\mathcal{C}$ taking values in a stable $\infty$-category $\mathcal{C}$, it is immediate that localisation implies descent for abstract blowup squares, see for example [@kuij_descent Proposition 7.9]. The $\infty$-category $\textup{St}_{\infty}^{LR}$ is not stable, but is still nice enough to make this true. For the proof we need the following lemma, which is the dual to a case of [@nine Lemma 1.5.3]. **Lemma 33**. 
*For a diagram* *in $\textup{St}_{\infty}^{LR}$ where the rows are recollements and the rightmost vertical map an equivalence, and the square on the left is a vertically left *and* right adjointable, it follows that the square on the left is a pushout square.* *Proof.* First we note that for $C$ a stable $\infty$-category, there is a fully faithful functor $$\textup{Hom}(-,C):(\textup{St}_{\infty}^{LR})^\textup{op}\longrightarrow\textup{St}_{\infty}^{LR}$$ that reflects limits (see for example [@riehl_verity Proposition 5.20]). For $f:A\longrightarrow B$ in $\textup{St}_{\infty}^{LR}$, with $L$ the left adjoint and $R$ the right adjoint, the induced functor $$f^*:\textup{Hom}(B,C)\longrightarrow\textup{Hom}(A,C),$$ has a left adjoint given by $R^*$ and a right adjoint given by $L^*$. Hence, it suffices to check that in the diagram in $\textup{St}_{\infty}^{LR}$ the square on the right is a pullback. Adjointability of the original square implies that this square is adjointable as well. Therefore by [@nine Lemma 1.5.3] it is a pullback square, as desired. ◻ *Proof of Lemma [Lemma 32](#lem:localisation_implies_descent){reference-type="ref" reference="lem:localisation_implies_descent"}.* By Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}, $F$ extends to a pre-six-functor formalism $$D^*_!:\delta^*_+\mathbf{Var}_\times(All^\textup{op}_\times, All)\longrightarrow\textup{St}_{\infty}^{L}.$$ By assumption $F$ sends a sequence $$U\xhookrightarrow{j}X \xleftarrow{i} X\setminus U$$ to a fibre sequence $$F(U) \xrightarrow{j_!} F(X) \xrightarrow{i^*} F(X\setminus U)$$ in $\textup{St}_{\infty}^{LL}$. This implies that the corresponding sequence of right adjoints $$D(X\setminus U) \xrightarrow{i_\circ} D(X) \xrightarrow{j^\circ} D(U)$$ is a cofibre sequence in $\textup{St}_{\infty}^{LR}$. Therefore $D^*_!$ satisfies the condition of Lemma [Lemma 30](#lem:recollement){reference-type="ref" reference="lem:recollement"} and sends localisation sequences to recollements. Now consider an abstract blowup square $$\begin{tikzcd} \tilde Z \arrow[r, "\tilde i"]\arrow[d, "\tilde p"] & \tilde X\arrow[d, "p" ]\\ Z \arrow[r, "i"] & X. \end{tikzcd}$$ We observe that in the the diagram $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRADEAKAHW7wdgACAFoBKEAF9S6TLnyEUARnJVajFmy69+QgBripM7HgJEATCur1mrRBx58sAmIN3anQ3nBg4AtljAmOBEDaRAMY3kiZUVVaw07LldQozlTFDJYq3VbezFJMIi0hWQLLLUbTU5XQS8ff0Dg-IlVGCgAc3giUAAzACcIXyQyEBwIJAAWagY6ACMYBgAFWRMFED6sdoALHBBsyrt3Z0E0AH0AKgLegaHEKdHxxABmabmF5cj09c2dvYqEkCCLAXP44OhONhbCAQADWVxA-UGSGUDyQL3+uTOl0MCJuw2oY2R+wBRyEwMuBPBDEh0LhOMRtxRhMQFgxbFJLgAVgA9bFhBlogmPACsxNygh5FJAM3mSxWUTsG22u3peJZQqQorZh24AGMCO1JBQJEA \begin{tikzcd} D(\tilde Z) \arrow[d, "\tilde p_\circ"'] \arrow[r, "\tilde i_\circ", hook] & D(\tilde X) \arrow[d, "p_\circ"] \arrow[r, "\tilde j^\circ"] & D(\tilde X\tilde \setminus Z) \arrow[d, "\cong"] \\ D(Z) \arrow[r, " i_\circ"', hook] & D( X) \arrow[r, " j^\circ"'] & D( X \setminus Z) \end{tikzcd}$$ in $\textup{St}_{\infty}^{LR}$, the rows are recollements, and the square on the left is vertically and horizontally left adjointable, since $D^!_*$ satisfies base change. In particular if we take left adjoints of the horizontal maps in the square on the left, we get a commutative square (with maps $\tilde i^*, i^*$ and $(\tilde p_\circ,p_\circ)$). 
Now taking right adjoints of all these maps gives a commutative square with maps $\tilde i_\circ, i_\circ$ and $p^*,p^*$, which shows that the square is vertically right adjointable as well. Therefore Lemma [Lemma 33](#lem:dual_of_nine){reference-type="ref" reference="lem:dual_of_nine"} implies that the square on the left is a pushout square. This implies that the corresponding square in $\textup{St}_{\infty}^{LL}$ is a pullback square. ◻ In particular the Lemma above applies to functors in $\textup{Fun}^{\textup{lax}}_{BC}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})_\emptyset$ that correspond to six-functor formalisms under the equivalence of categories shown in Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"}. Indeed, the sequence $$F(X\setminus U) \xrightarrow{i_!} F(X) \xrightarrow{j^*} F(U)$$ is a cofibre sequence in $\textup{St}_{\infty}^{LR}$ if and only if the sequence of left adjoints $$F(U)\xrightarrow{j_!} F(X) \xrightarrow{i^*}F(X\setminus U)$$ is a fibre sequence in $\textup{St}_{\infty}^{LL}$. This implies that for any six-functor formalism, the functor $(-)^*$ satisfies descent for abstract blowup squares. For a lax cartesian structure $F:\mathbf{Span}^\textup{op}_\times \longrightarrow\textup{St}_{\infty}^{LL}$, sending localisation sequences to fibre sequences and abstract blowups to pullbacks, can be encoded by a sheaf condition. **Definition 34**. We denote by $A$ the set of abstract blowup squares of varieties, considered as commutative squares in the 1-category $\mathbf{Span}$. We denote by $L$ the commutative squares in $\mathbf{Span}$ of the form for $i:U\longrightarrow X$ an open immersion. We call such squares *localisation squares*. We consider the coarse topology $\tau^c_{A\cup L}$ generated by the set of squares $A\cup L$, see [@kuij_descent Section 2]. **Definition 35**. We denote by $\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})_\emptyset$ the $\infty$-category of lax cartesian structures $F:\mathbf{Span}_\times^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ such that - the underlying presheaf $F|_{\mathbf{Span}}:\mathbf{Span}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ is a hypersheaf for the topology $\tau^c_{A\cup L}$, - for the empty variety $\emptyset$ we have $F(\emptyset) = *$, the trivial one-object stable $\infty$-category, - and $F$ send squares of types $(I_-,I_- )$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ an $(P_-^\textup{op},I_-)$ to vertically left adjointable-squares. **Proposition 36**. *The equivalence $$\Phi:\textup{Fun}^\textup{lax}_{BC}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL}) \xrightarrow{\sim} \mathrm{Pre}\mathbf{6FF}$$ in Proposition [Proposition 23](#prop:w6FF){reference-type="ref" reference="prop:w6FF"} restricts to an equivalence $$\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})_\emptyset \xrightarrow{\sim } \mathbf{6FF}.$$* *Proof.* The cd-structure $A\cup L$ is c-regular and c-complete and compatible with a dimension function. Therefore if a presheaf $F:\mathbf{Span}_\times^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ in $\textup{Fun}^{\textup{lax}}_{BC}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})$ is a hypersheaf for the topology $\tau^c_{A\cup L}$, then it sends abstract blowup squares and localisation squares to pullback squares. 
If in addition $F(\emptyset)=*$, then the pre-six-functor formalism $\Phi(F)$ corresponding to $F$ satisfies the conditions of Lemma [Lemma 30](#lem:recollement){reference-type="ref" reference="lem:recollement"} and is therefore in $\mathbf{6FF}$. On the other hand, given a six-functor formalism $D$, by assumption the BC lax cartesian structure $\Phi^{-1}(F)$ satisfies $F(\emptyset)=*$ and sends localisation squares to cartesian squares. Moreover by Lemma [Lemma 32](#lem:localisation_implies_descent){reference-type="ref" reference="lem:localisation_implies_descent"}, $\Phi^{-1}(D)$ sends abstract blowup squares to pullback squares, and this implies that $\Phi^{-1}(F)$ is a hypersheaf for $\tau^c_{A\cup L}$. ◻ Somewhat surprisingly, a 6ff can be recovered entirely from its restriction to complete varieties. Indeed, by results in [@kuij_monoidal], for varieties over any field, there is an equivalence of categories of hypersheaves $$\textup{HSh}_{\tau^c_{M_\mathbf{Span}\cup (A\cup L)_\otimes}}(\mathbf{Span}_\times;\textup{St}_{\infty}^{LL}) \simeq\textup{HSh}_{\tau^c_{M_\mathbf{Comp}\cup AC_\otimes}}(\mathbf{Comp}_\times;\textup{St}_{\infty}^{LL}).$$ Restricting to hypersheaves that send $0$ and $\emptyset$ to $*$, this implies $$\textup{Fun}^\textup{lax}_{A\cup L}(\mathbf{Span}^\textup{op},\textup{St}_{\infty}^{LL})_\emptyset \simeq \textup{Fun}^\textup{lax}_{AC}(\mathbf{Comp}^\textup{op},\textup{St}_{\infty}^{LL}).$$ In the next proposition we will show that under this equivalence, the subcategory of lax monoidal $A\cup L$-hypersheaves with the BC-property corresponds to the subcategory of lax monoidal $AC$-hypersheaves that send squares of type $(P^\textup{op}_-, P^\textup{op}_\times)$ to vertically right adjointable squares. We denote the $\infty$-category of such $AC$-hypersheaves by $\textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op},\textup{St}_{\infty}^{LL})$. **Proposition 37**. *The equivalence $\textup{res}:\textup{Fun}^{\textup{lax}}_{A\cup L}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL})_\emptyset \simeq \textup{Fun}^{\textup{lax}}_{AC}(\mathbf{Comp}^\textup{op}_\times,\textup{St}_{\infty}^{LL})$ restricts to an equivalence $$\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}^\textup{op}_\times,\textup{St}_{\infty}^{LL})_\emptyset \simeq \textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op}_\times,\textup{St}_{\infty}^{LL})$$* *Proof.* It is clear that the restriction of $\textup{res}$ to $\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})$ lands in $\textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op}_\times,\textup{St}_{\infty}^{LL})$. What we need to show is that for any $F$ in $\textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op}_\times,\textup{St}_{\infty}^{LL})$, the corresponding $\tilde F:\mathbf{Span}_\times^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ is in $\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})$. For this we need to check that $\tilde F$ sends pullback squares of the types $(I_-,I_- )$, $(I_-,P_\times^\textup{op})$, $(P_-^\textup{op},P_\times^\textup{op})$ an $(P_-^\textup{op},I_-)$ to vertically left adjointable-squares. We consider the four types of squares separately. 
**Squares of type $(P_-^\textup{op},P_\times^\textup{op})$.** We consider an arbitrary pullback square of type $(P_-^\textup{op},P_\times^\textup{op})$ in $\mathbf{Span}^\textup{op}_\times$ $$\begin{tikzcd} (X_i)_I \arrow[r, ] \arrow[d, ] & (Y_j)_J \arrow[d, ] \\ (Z_i)_I \arrow[r, ] & (V_j)_J. \end{tikzcd}$$ where the horizontal morphisms lie over a partial map $\alpha:J \dashrightarrow I$. Let $(\overline{V}_j)_J$ be a compactification of $(V_j)_J$ (here this means: there is an $I_-$-map $(V_j)_J \hookrightarrow (\overline{V}_j)_J$ and $(\overline{V}_j)_J$ is in $\mathbf{Comp}_\times$). Then we can factor each $Y_j \rightarrow V_j \hookrightarrow \overline{V_j}$ as a dense open morphism followed by a proper morphism $Y_j \hookrightarrow \overline{Y_j} \rightarrow \overline{V_j}$. This gives a dense compactification $(\overline{Y_i})_J$ of $(Y_j)_J$ which admits a $P_-$-morphism $(\overline{Y_j})_J\rightarrow (\overline{V_j})_J$. Similarly, for each $i$ we can factor $Z_i \rightarrow \prod_{\alpha^{-1}(i)} V_j \hookrightarrow \prod_{\alpha^{-1}(i)} \overline{V_j}$ as a dense open morphism followed by a proper morphism $Z_i \hookrightarrow \overline{Z_i} \rightarrow \prod_{\alpha^{-1}(i)} \overline{V_j}$. This gives a dense compactification $(\overline{Z_i})_I$ of $(Z_i)_I$ which admits a $P_\times$-morphism $(\overline{Z_i})_I \rightarrow (\overline{V_j})_J$. Now for each $i$ let $\overline{X_i}$ be the pullback $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAA0B9LEAX1PSZc+QijIBGKrUYs2AHTkQaMAE4MsYGMABa3XnwEgM2PASLjyU+s1aIQCtCuidgCxmgAWdAHrAAtOK8ABRYAJS8AAQKSqrqmsAAmnoGgiYi5qSS1Naydg5OUC5uDJ4+-oEh4VGKymoaWgBqybxSMFAA5vBEoABmTgC2SBYgOBBIAMzUHjB0UGw4AO4Q07MI-L0DSABM1KMTUzNzdovLh2uGfRCDiGQjY4g7ICtHI0vP5xtXSLd7iMPP8zeZz4FF4QA \begin{tikzcd} \overline{X_i} \arrow[r, ] \arrow[d, ] & \prod_{\alpha^{-1}(i)} \overline{Y_i} \arrow[d, ] \\ \overline{Z_i} \arrow[r, ] & \prod_{\alpha^{-1}(i)} \overline{V_i} . \end{tikzcd}$$ Then $(\overline{X_i})$ is the pullback of $(\overline{Y_j})_J\rightarrow (\overline{V_j})_J$ along $(\overline{Z_i})_I \rightarrow (\overline{V_j})_J$. In the cube $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZAJgBoBGAXVJADcBDAGwFcYkQAKAHS4lpgBOjLGBjAAGgH0sAXwCUkgJIgZpdJlz5CKAAwVqdJq3YcpWBctXrseAkT0BmAwxZtEnAFrSLKtSAw2WkRkTjQuxu7cvPxCImJesj5W-hq22sjkpMTORm6cAGqSAFYKAFK+1pp2KJk6Oa4mAJrFZRUpgdXIDqR1YbkmPHyCwqLAzUXykuXJAVXp3dl9DZGDMSNihROtMgYwUADm8ESgAGYCEAC2SACsNDgQSABsd-RYjOwAFhAQANZtZ5ckJkQPckHoQa93u5IKJ-ucrogyCCHohuhC3p9vn9kgCEcDQYiaB8YPQoOwcAB3CDE0kIHHwoF3FG3EA0snuSnUklQOl+XFIJEEgAsRO55KpbN5pwZiBZwtFpPFXNpcMBiHBBOerLFHIl3KlIH56qZSDRbKVktVCK1BIA7Ar2SC9Sr6Wq0XaHRb9VakCLkUh7eioaysSpKDIgA \begin{tikzcd} & (Y_j)_J \arrow[rr, hook] \arrow[dd, ] & & (\overline{Y_j})_J \arrow[dd, ] \\ (X_i)_I \arrow[rr] \arrow[dd, ] \arrow[ru, ] & & (\overline{X_i})_I \arrow[ru, ] \arrow[dd, ] & \\ & (V_j)_J \arrow[rr, hook] & & (\overline{V_j})_J \\ (Z_i)_I \arrow[rr, hook] \arrow[ru, ] & & (\overline{Z_i})_I \arrow[ru, ] & \end{tikzcd}$$ the left and right faces are pullbacks. It follows from [@kuij_descent Lemma 6.15] that the bottom and back faces are pullbacks, since $(\overline{Z_i})_I$ and $(\overline{Y_j})_J$ are dense compactifications. By the pasting law for pullbacks, it follows that the front and top face are pullbacks as well. This implies that $(X_i)_I \longrightarrow(\overline{X_i})_I$ is an $(I_-)$-morphism. 
Now by taking complements, we can extend the cube to the left by $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZAJgBoBGAXVJADcBDAGwFcYkQAKAHS4lpgBOjLGBjAAGgH0sAXwCUkgJIgZpdJlz5CKAAwVqdJq3YcpWBctXrseAkT0BmAwxZtEnAFrSLKtSAw2WkRkTjQuxu7cvPxCImJesj5W-hq22sjkpMTORm6cAGqSAFYKAFK+1pp2KJk6Oa4mAJrFZRUpgdXIDqR1YbkmPHyCwqLAzUXykuXJAVXp3dl9DZGDMSNihROtM6lBKACsPfURnKvDcWPFMjxwMDgAtiLMcAAE49t+s2lEACz6SycokNYqMzNcuLcHk9XmYkp9dp1DotDMtTtFzqNNuDIY8wM8Xps5G0vntkH9QijAWcQfFpNi7rj8QkiTIDDAoABzeBEUAAMwEEHuSEOIBwECQADYaDh6FhGOwABYQCAAaza-MFSEyovFiD0otl8vckFE6oFQsQZB1SG6BrliuVauSGot2rFSCtCpg9Cg7BwAHcIF6fQhneatdLdSLg773AGg96oKG-C6PZGkH8QDG-YGY8m+eHECL3YhM9m47nE-mQKm9enEFKs4mcwmQ2bNXXrYhbeXRZW22GO42SwB2Gi9+N59sW22j8fNiutpPTjP1sd2o1Zx0rxDkN26nsLvtL6u1gAc9cbE-7y8HFoAnPX9deTzvyPq502fS2p3ekI+uwvL9Y2PX8U0LICS3fedv0XMCCw7ACoPIP9d33LVnyPScqxUSgZCAA \begin{tikzcd} & (Y_j)_J \arrow[rr, hook] \arrow[dd, ] & & (\overline{Y_j})_J \arrow[dd, ] & & (\overline{Y_j}\setminus Y_j)_J \arrow[ll, ] \arrow[dd, ] \\ (X_i)_I \arrow[rr,hook] \arrow[dd, ] \arrow[ru, ] & & (\overline{X_i})_I \arrow[ru, ] \arrow[dd, ] & & (\overline{X_i}\setminus X_i)_I \arrow[ll, ] \arrow[ru, ] \arrow[dd, ] & \\ & (V_j)_J \arrow[rr, hook] & & (\overline{V_j})_J & & (\overline{V_j}\setminus V_j)_J \arrow[ll, ] \\ (Z_i)_I \arrow[rr, hook] \arrow[ru, ] & & (\overline{Z_i})_I \arrow[ru, ] & & (\overline{Z_i}\setminus Z_i)_I. \arrow[ll, ] \arrow[ru, ] & \end{tikzcd}$$ where all the faces of the cube on the right are pullbacks. Evaluating $\tilde F$ on this diagram gives rows of recollements. All the *vertical* faces in the cube on the right are sent to vertically right adjointable squares, since they are squares of type $(P^\textup{op}_-, P^\textup{op}_\times)$. By Lemma [Lemma 18](#lem:parshall_scott){reference-type="ref" reference="lem:parshall_scott"}, this implies that the vertical squares on the left, apart from the leftmost one, are sent to vertically right adjointable squares. Lastly, a diagram chase shows that the vertical leftmost squares is vertically right adjointable as well (using the fact that the open immersions in the cube on the left are sent to embeddings, and that in the image under $\tilde F$, the other vertical faces are vertically right adjointable). **Squares of type $(P_-^\textup{op}, I_-)$** Let be a square of type $(P_-^\textup{op}, I_-)$. As above, we can extend this with a pullback square of type $(P_-^\textup{op},P_-^\textup{op})$ as follows. $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBoAGAXVJADcBDAGwFcYkQAKADQH0sBKHgEkQAX1LpMufIRRli1Ok1bsOATR4ArQQCkxEkBmx4CRchUUMWbRJwCqfQSPGTjMs6QU0rK2xwBqWrr6rtKmKABMFt7KNpy8WAA6iXAwOAC2WGDMcAAEDgLCIYZSJrLIUV5K1qoamsmpGVk5uYHaPHqiijBQAObwRKAAZgBOEOlIUSA4EEjm0-RYjOwAFhAQANbFo+NIAMw0M0hkC0ur61suIDsTiPNHiCcrMPRQ7DgA7hDPrwhXN5NDrNEAcQD83rZPt8XlA-gYAYgACxAuY0cHvL7guHDMa3ACsKMeaJhGOhv22uKQyOmwIJYJJkMxML+lFEQA \begin{tikzcd} (U_i)_I \arrow[r, hook] \arrow[d, ] & (X_i)_I \arrow[d, ] & (X_i\setminus U_i)_I \arrow[l, ] \arrow[d, ] \\ (V_i)_I \arrow[r, hook] & (Y_i)_I & (Y_i\setminus V_i)_I. \arrow[l, ] \end{tikzcd}$$ The rows in the diagram are sent to recollements, and because of the previous step we know that the square on the right is sent to a vertically right adjointable square. Lemma [Lemma 18](#lem:parshall_scott){reference-type="ref" reference="lem:parshall_scott"} implies that image of the sqaure of the left is vertically right adjointable. **Squares of type $(I_-,P_\times^\textup{op})$.**. 
Let $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAAoANAfSwEpuASRABfUuky58hFGQCMVWoxZt2AVV4DhYidjwEic8ovrNWiDgE1uAKwEApUeJAY90w6QXVTKi+wBqtg6iijBQAObwRKAAZgBOEAC2SEYgOBBIZGl0WAxsABYQEADWTrEJyYgAzNTpSABMtTl5FoUlZSDxSZm1GYiNIPkwdFBsOADuEEMjCDqdFSm9SDWDw6MWE1NrsxQiQA \begin{tikzcd} (X_i)_I \arrow[r, ] & (Y_j)_J \\ (U_i)_I \arrow[u, hook] \arrow[r, ] & (V_j)_J \arrow[u, hook] \end{tikzcd}$$ be a square of type$(I_-,P_\times^\textup{op})$ , where the horizontal morphisms lie over a partial map $\alpha:J \dashrightarrow I$. Then we rotate this to a $(P_\times^\textup{op}, I_-)$-square and extend the diagram with a pullback square of type $(P_\times,P_-)$ as follows. $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBoAGAXVJADcBDAGwFcYkQAKADQH0sBKHgEkQAX1LpMufIRRli1Ok1bsOATR4ArQQCkxEkBmx4CRchUUMWbRJwCqfQSPGTjMs6QU0rK2xwBqWrr6rtKmKABMFt7KNpy8WAA6iXAwOAC2WGDMcAAEDgLCIYZSJrLIUV5K1qoamsmpGVk5uYHaPHqiijBQAObwRKAAZgBOEOlIUSA4EEjm0-RYjOwAFhAQANbFo+NIAMw0M0hkC0ur61suIDsTiPNHiCcrMPRQ7DgA7hDPrwhXN5NDrNEAcQD83rZPt8XlA-gYAYgACxAuY0cHvL7guHDMa3ACsKMeaJhGOhv22uKQyOmwIJYJJkMxML+lFEQA \begin{tikzcd} (U_i)_I \arrow[r, hook] \arrow[d, ] & (X_i)_I \arrow[d, ] & (X_i\setminus U_i)_I \arrow[l, ] \arrow[d, ] \\ (V_j)_J \arrow[r, hook] & (Y_j)_J & (Y_j\setminus V_j)_J \arrow[l, ] \end{tikzcd}$$ Again rows in the diagram are sent to recollements, and because of the first step in this proof, we know that the square on the right is sent to a horizontally right adjointable square. By Lemma [Lemma 18](#lem:parshall_scott){reference-type="ref" reference="lem:parshall_scott"} this implies that the image of the square on the left is horizontally right adjointable, so the original square is sent to a vertically left adjointable square. **Squares of type $(I_-,I_-)$.** Lastly we consider a pullback square $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBGAXVJADcBDAGwFcYkQAKANQH0sBKHgEkQAX1LpMufIRRli1Ok1bsOADT6CR4ydjwEi5UgpoMWbRJwCqm4WIkgMemYYqKzKyxwDqt7YpgoAHN4IlAAMwAnCABbJDIQHAgkI0T6LEZ2AAsICABrewjouMQAJhoklIr0zMsc-KYcQpAo2KQAZgrksuqM7NyCnRbijq743tqQerzGsUpRIA \begin{tikzcd} (X_i)_I & (V_i)_I \arrow[l, hook'] \\ (U_i)_I \arrow[u, hook] & (W_i)_I \arrow[u, hook] \arrow[l, hook'] \end{tikzcd}$$ of type $(I_-,I_-)$ in $\mathbf{Span}_\times^\textup{op}$. We can extend this with another a pullback square of type $(I_-,P_-)$: $$% https://tikzcd.yichuanshen.de/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBpiBdUkANwEMAbAVxiRAAoBVAfSwEpuASRABfUuky58hFGQAMVWoxZt2ADV4DhYidjwEic0gur1mrRBwBqmoaPEgMe6YfKKzKy+wDqt7Q6cpAxQAJmN3ZQsODSwAHVi4GBwAWywwJjgAAht+Ox1HSX0ZZDDKU0jVHjiEpNT0rN9c7UUYKABzeCJQADMAJwhkpCMQHAgkMhG6LAY2AAsICABrex7+wcQAZmpRpDDJ6bmF5fy+gd3tscQJnCmZy3mllZBT9a2Ry+Gbg-ujp5ekAAsF3G1FmMDoUDYOAA7hAwRCECc1kgAKzAxDDeGQywwuHgqCIhz-RBo96A7a3Q6PEQUERAA \begin{tikzcd} (V_i)_I \arrow[r, hook] & (X_i)_I & (X_i\setminus V_i)_I \arrow[l, ] \\ (W_i)_I \arrow[u, hook] \arrow[r, hook] & (U_i)_I \arrow[u, hook] & (U_i\setminus W_i)_I \arrow[l, ] \arrow[u, hook] \end{tikzcd}$$ The rows are sent to recollements and by the previous step, the square on the right is sent to a vertically right adjointable square square. By Lemma [Lemma 18](#lem:parshall_scott){reference-type="ref" reference="lem:parshall_scott"}, this implies that the square on the left is sent to a vertically right adjointable square. ◻ The following theorem is now an easy corollary. **Theorem 38**. 
*There is an equivalence of categories $$\mathbf{6FF}\simeq \textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op},\textup{St}_{\infty}^{LL})$$* From now on we assume that the base field $k$ has characteristic zero. Let $\mathbf{SmComp}$ be the 1-category of smooth and complete varieties. Let $B$ be the the set of pullback squares in $\mathbf{SmComp}$ where $C \hookrightarrow X$ is a closed immersion and $Bl_C X$ the blowup of $X$ in $C$. We consider the topology $\tau_B$ on the 1-category $\mathbf{SmComp}$. Then [@kuij_monoidal] gives an equivalence $$\label{eq:smooth_vs_smoothcomp} \textup{Fun}^\textup{lax}_{AC}(\mathbf{Comp}^\textup{op},\textup{St}_{\infty}^{LL}) \simeq \textup{Fun}^\textup{lax}_B(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL}).$$ Since $\mathbf{6FF}$ is defined as a full subcategory of the $\infty$-category on the left, this equivalence gives the following. **Corollary 39**. Restricting the $(-)^*$-part of a six functors formalism to $\mathbf{SmComp}$, gives a fully faithful functor $$\mathbf{6FF} \longrightarrow\textup{Fun}^\textup{lax}_B(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL}).$$ **Remark 40**. Let $\textup{Fun}^{\textup{lax},BC}_B(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL})$ denote the $\infty$-category of lax monoidal $\tau_B$-hypersheaves on $\mathbf{SmComp}$ that send squares of type $(P_-^\textup{op}, P_\times^\textup{op})$ to vertically right adjointable squares. We do not expect the equivalence ([\[eq:smooth_vs_smoothcomp\]](#eq:smooth_vs_smoothcomp){reference-type="ref" reference="eq:smooth_vs_smoothcomp"}) to restrict on both sides to hypersheaves that send squares of type $(P_-^\textup{op}, P_\times^\textup{op})$ to vertically right adjointable squares. Therefore it is not expected that the six-functor formalisms $\mathbf{6FF}$ to be equivalent to the $\infty$-category $\textup{Fun}^{\textup{lax},BC}_B(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL}).$ The reason for this is that the 1-category $\mathbf{SmComp}$ does not have a lot of pullbacks, so if $F:\mathbf{SmComp}^\textup{op}\longrightarrow\textup{St}_{\infty}^{LL}$ sends all squares of type $(P_-^\textup{op}, P_\times^\textup{op})$ to vertically right adjointable squares, then this might not be true for the extension of $F$ to the 1-category $\mathbf{Comp}$ which has a lot more pullbacks.
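For orientation, the reductions established in this section can be summarised in a single chain; the display below merely restates Proposition 36, Theorem 38 and Corollary 39 and adds nothing new:
$$\mathbf{6FF}
\;\simeq\;
\textup{Fun}^{\textup{lax},BC}_{A\cup L}(\mathbf{Span}_\times^\textup{op},\textup{St}_{\infty}^{LL})_\emptyset
\;\simeq\;
\textup{Fun}^{\textup{lax},BC}_{AC}(\mathbf{Comp}^\textup{op},\textup{St}_{\infty}^{LL})
\;\hookrightarrow\;
\textup{Fun}^{\textup{lax}}_{B}(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL}),$$
where the last arrow is the fully faithful restriction functor of Corollary 39, which, by Remark 40, is not expected to restrict to an equivalence onto $\textup{Fun}^{\textup{lax},BC}_B(\mathbf{SmComp}^\textup{op},\textup{St}_{\infty}^{LL})$.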
# Exact recovery for graphs with the $\cscc$ property {#sec:proof-deterministicrecovery}

In this section, let $G$ be a graph that satisfies the $\cscc$ property. Our goal is to extend the results in Section [\[sec:disjointunion\]](#sec:disjointunion){reference-type="ref" reference="sec:disjointunion"}, and show that $(\ac,\ks)$ is also the unique solution to [\[eq:lovasz_lambdamax\]](#eq:lovasz_lambdamax){reference-type="eqref" reference="eq:lovasz_lambdamax"} for graphs satisfying the $\cscc$ condition, provided $c$ is appropriately small. Following our discussion in Section [\[sec:disjointunion\]](#sec:disjointunion){reference-type="ref" reference="sec:disjointunion"}, the matrix $\xc$ remains an extreme point of the feasible region of [\[eq:lovasz_lambdamax\]](#eq:lovasz_lambdamax){reference-type="eqref" reference="eq:lovasz_lambdamax"} if $c<1$. Hence, by Theorem [\[thm:constraintqualification\]](#thm:constraintqualification){reference-type="ref" reference="thm:constraintqualification"}, it remains to produce a suitable dual witness $Z$ satisfying the requirements listed in Theorem [\[thm:constraintqualification\]](#thm:constraintqualification){reference-type="ref" reference="thm:constraintqualification"}.

## Incoherence-type result {#sec:incoherence}

The certificate we will use for graphs with the $\cscc$ property is $\zc = P_{\tilde{\mathcal{K}} \cap \tilde{\mathcal{L}}} (\zz)$, defined as the projection of $\zz$ onto ${\tilde{\mathcal{K}} \cap \tilde{\mathcal{L}}}$ with respect to the Frobenius norm, i.e., $$\zc ~~:=~~ \underset{X}{\arg \min} ~\| X - \zz \|_F^2 \quad \mathrm{s.t.} \quad X \in \tilde{\mathcal{K}} \cap \tilde{\mathcal{L}}.$$ Using the first-order optimality conditions, we have that $$\zz - \zc = \tilde{L} + \tilde{K},$$ where $\tilde{L}$ lies in the normal cone of $\tilde{\mathcal{L}}$ at $\zc$ and $\tilde{K}$ lies in the normal cone of $\tilde{\mathcal{K}}$ at $\zc$. Recalling that the normal cone of a subspace is its orthogonal complement, we have that $\tilde{L}\in \tilde{\mathcal{L}}^\perp$ and $\tilde{K}\in \tilde{\mathcal{K}}^\perp$. Our main result can be viewed as extending the orthogonality property for graphs that are disjoint unions of cliques to graphs with the $\cscc$ property. The proof is broken up into a sequence of results that follow.

[\[thm:incoherence_1\]]{#thm:incoherence_1 label="thm:incoherence_1"} Let $G$ be a graph satisfying the $\cscc$ property. Then,

- For any $\tilde{K} \in \tilde{\mathcal{K}}^\perp$ and $\tilde{L} \in \tilde{\mathcal{L}}^\perp$ we have $$| \langle \tilde{K}, \tilde{L} \rangle | \leq 2\sqrt{c}\|\tilde{K}\|_F \|\tilde{L}\|_F .$$
- For any $\tilde{L} \in \tilde{\mathcal{L}}^\perp$ we have $$\|P_{\mathcal{K}^\perp}(\tilde{L})\|_F \leq (2 \sqrt{c})^{1/2} \|\tilde{L}\|_F.$$

#### A direct sum decomposition for $\tilde{\mathcal{L}}^\perp$.

The first step is to provide a decomposition of the space $\tilde{\mathcal{L}}^\perp$ into simpler subspaces, on which it is easier to prove the near orthogonality property. We use these results as basic ingredients to build up to our near orthogonality property later. Define the matrices $\{ F_{x,y,z} : 1 \leq x \neq y \leq k, 1\leq z \leq |\ccs_x | \}$ so that (i) the $(\ccs_x,\ccs_x)$-th block is a diagonal matrix whose $z$-th entry is set equal to $-2(|\ccs_x|-1)$ and all remaining entries equal to $2$, (ii) the $(\ccs_x,\ccs_y)$-th block ($(\ccs_y,\ccs_x)$-th block) is such that entries in the $z$-th row (column) are equal to $|\ccs_x|-1$ and all other entries equal to $-1$, and (iii) all other entries are zero.
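Since the definition of $F_{x,y,z}$ is purely combinatorial, these matrices are easy to generate programmatically. The following sketch is illustrative only (NumPy, 0-based indices, block sizes of our own choosing); it builds $F_{x,y,z}$ directly from conditions (i)--(iii), and the $k=2$ instance displayed below can be used to spot-check it.

```python
import numpy as np

def build_F(block_sizes, x, y, z):
    """Build F_{x,y,z} from conditions (i)-(iii).

    block_sizes[i] = |C_{i+1}|; x, y are 0-based clique indices with x != y,
    and z is a 0-based index inside clique C_{x+1}.
    """
    starts = np.concatenate(([0], np.cumsum(block_sizes)))
    F = np.zeros((starts[-1], starts[-1]))
    sx, sy = starts[x], starts[y]
    nx, ny = block_sizes[x], block_sizes[y]
    # (i) the (C_x, C_x) block is diagonal: z-th entry -2(|C_x|-1), the rest 2
    d = 2.0 * np.ones(nx)
    d[z] = -2.0 * (nx - 1)
    F[sx:sx + nx, sx:sx + nx] = np.diag(d)
    # (ii) the (C_x, C_y) block has z-th row equal to |C_x|-1 and -1 elsewhere;
    #      the (C_y, C_x) block is its transpose
    B = -np.ones((nx, ny))
    B[z, :] = nx - 1
    F[sx:sx + nx, sy:sy + ny] = B
    F[sy:sy + ny, sx:sx + nx] = B.T
    # (iii) all other entries remain zero
    return F

# two cliques with |C_1| = 4 and |C_2| = 3 reproduce the matrix F_{1,2,1} below
print(build_F([4, 3], x=0, y=1, z=0))
```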
As an example, in the case where $k=2$ the matrix $F_{1,2,1}$ is given by: $$F_{1,2,1} := \left( \begin{array}{cccc|ccc} -2(|\ccs_1|-1) &&&& |\ccs_1|-1 & \ldots & |\ccs_1|-1 \\ & 2 &&& -1 & \ldots & -1 \\ && \ddots &&\vdots &&\vdots \\ &&&2& -1 & \ldots & -1 \\ \hline |\ccs_1|-1 & -1 & \ldots & -1 &&& \\ \vdots & \vdots & & \vdots &&& \\ |\ccs_1|-1 & -1 & \ldots & -1 &&& \\ \end{array} \right),$$ where omitted entries are zero. Second, consider the following subspaces: Name Description Dimension ------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------- $\mathcal{T}_{1}$ $\left \{ \left( \begin{array}{c|c|c|c|c} \gamma_1 I & \theta_{1,2} E & \theta_{1,3} E & \ldots & \theta_{1,n} E \\ \hline \theta_{2,1} E & \gamma_2 I & \theta_{2,3} E & \ldots & \theta_{2,n} E \\ \hline \theta_{3,1} E & \theta_{3,2} E & & & \\ \hline \vdots & \vdots & & & \\ \hline \theta_{n,1} E & \theta_{n,2} E & & & \gamma_n I \end{array}\right) : \sum \gamma_i + \sum \theta_{i,j} = 0 \right\}$ ${k+1 \choose 2}-1$ $\mathcal{T}_{2}$ $\mathrm{Span} \left\{ F_{x,y,z} : 1 \leq x \neq y \leq k, 1\leq z \leq n \right\}$ $(|V| - k) \times (k-2)$ [\[thm:description_of_lperp\]]{#thm:description_of_lperp label="thm:description_of_lperp"} We have that $$\tilde{\mathcal{L}}^\perp=\mathcal{T}_{1}\oplus \mathcal{T}_{2}.$$ *Proof of Proposition [\[thm:description_of_lperp\]](#thm:description_of_lperp){reference-type="ref" reference="thm:description_of_lperp"}.* The fact that $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ are orthogonal is easy to check. Define the matrix $L_{x,y,z} \in \mathbb{R}^{|\V|\times|\V|}$, where $1 \leq x \neq y \leq k$, and $1 \leq z \leq |\ccs_x|$ such that (i) the $z$-th entry of the $(\ccs_x,\ccs_x)$-th block is equal to $2$, and (ii) the entries of the $z$-th row (column) of the $(\ccs_x,\ccs_y)$-th block ($(\ccs_y,\ccs_x)$-th block) are all equal to $-1$, and (iii) all other entries are $0$. As an example, in the case of two cliques (so $k=2$) the matrix $L_{1,2,1}$ is given by: $$L_{1,2,1} := \left( \begin{array}{cc|ccc} 2 & & -1 & \ldots & -1 \\ & & & & \\ \hline -1 & & & & \\ \vdots & & & & \\ -1 & & & & \\ \end{array}\right),$$ where all omitted entries are zero. First, observe that the matrices $\{ L_{x,y,z} \}$ specify all the linear equalities in the subspace $\tilde{\mathcal{L}}$, and thus, $\tilde{\mathcal{L}}^\perp$ is precisely the span of $\{ L_{x,y,z} \}$. As such, to show that $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$ span $\tilde{\mathcal{L}}^\perp$, it suffices to show that every matrix $L_{x,y,z}$ is expressible as the sum of matrices belonging to each of these subspaces. As a concrete example, we show this is true for $L_{1,2,1}$ -- the construction for other matrices are similar. Indeed, one checks that $L_{1,2,1}$ is the linear sum of these matrices $$L_{1,2,1}= \underbrace{\frac{1}{|\ccs_1|} \left( \begin{array}{c|c|c|c} 2I & -E & 0 & \ldots \\ \hline -E & 0 & \ldots & \\ \hline 0 & \vdots & \ddots & \\ \hline \vdots & & & \end{array}\right) }_{\mathcal{T}_1} -\frac{1}{|\ccs_1|} \underbrace{F_{1,2,1}}_{\mathcal{T}_2}.$$ This completes the proof. 
◻ In view of Proposition [\[thm:description_of_lperp\]](#thm:description_of_lperp){reference-type="ref" reference="thm:description_of_lperp"}, to bound the inner product between vectors in $\tilde{\mathcal{K}}^\perp$ and $\tilde{\mathcal{L}}^\perp$ respectively, it suffices to bound inner product between $(i) \ \tilde{\mathcal{K}}^\perp$ and $\mathcal{T}_1$ and $(ii) \ \tilde{\mathcal{K}}^\perp$ and $\mathcal{T}_2$. Next, we describe a result that shows how incoherence computation for (a small number of) orthogonal subspaces can be put together to obtain incoherence computations. [\[thm:orthogonalsubspace_incoherence\]]{#thm:orthogonalsubspace_incoherence label="thm:orthogonalsubspace_incoherence"} Let $\{ \mathcal{T}_i \}_{i=1}^{r}$ be orthogonal subspaces of $\R^d$. Consider $s\in \R^d$ such that $$| \langle s,t_i \rangle | \leq \epsilon \| s \|_2 \| t_i \|_2 \quad \text{ for all } t_i \in \mathcal{T}_i.$$ Then, for $t\in \oplus_i \mathcal{T}_i$ we have that $$| \langle s,t \rangle | \leq \epsilon \sqrt{r} \| s \|_2 \| t \|_2.$$ *Proof of Lemma [\[thm:orthogonalsubspace_incoherence\]](#thm:orthogonalsubspace_incoherence){reference-type="ref" reference="thm:orthogonalsubspace_incoherence"}.* For any $t= \sum t_i\in \oplus \mathcal{T}_i$ we have $$| \langle s, t \rangle | = | \langle s, \sum_{i=1}^r t_i \rangle | \leq \sum_{i=1}^r | \langle s, t_i \rangle | \leq \epsilon \sum_{i=1}^r \| s \|_2 \| t_i \|_2 \leq \epsilon \sqrt{r} \|s\|_2(\sum_{i=1}^r \| t_i \|_2^2)^{1/2} = \epsilon\sqrt{r}\|s\|_2 \| t \|_2,$$ where the second last inequality follows from Cauchy-Schwarz, while the last equality uses the fact that $\mathcal{T} = \oplus \mathcal{T}_i$. ◻ #### Bounding inner product between $\tilde{\mathcal{K}}^\perp$ and $\mathcal{T}_2$. Define the column vectors $\{\bf_i\}_{i=1}^{n}$ by $$\bf_i = n \be_i - \be = (\ldots, \underbrace{-1}_{i-1},\underbrace{n-1}_{i}, \underbrace{-1}_{i+1},\ldots)^T \in \mathbb{R}^{n}.$$ Note that $$\bf_i \be^T=\begin{pmatrix} f_i & \ldots & f_i\end{pmatrix} \text{ and } \be \bf_i^T=\begin{pmatrix} f_i^T \\ \vdots \\ f_i^T\end{pmatrix}.$$ With a slight abuse of notation, we define the subspace of matrices in $\mathbb{R}^{n_1 \times n_2}$ $$\mathcal{N}_{C} ={\rm span}( \bf_1 \be^T,\ldots, \bf_n \be^T ) \qquad \text{ and } \qquad \mathcal{N}_{R}={\rm span}( \be \bf_1^T, \ldots, \be \bf_n^T ).$$ The abuse of notation arises the vectors $\bf_i$ in the definitions of $\mathcal{N}_{C}$ and $\mathcal{N}_{R}$ are different. Note that because $\bf_i^T \be = 0$, the subspaces $\mathcal{N}_{C}$ and $\mathcal{N}_{R}$ have dimensions $n_1-1$ and $n_2-1$ respectively and are orthogonal. $\mathcal{N}_{C}$ and $\mathcal{N}_{R}$ are relevant for our problem as the block off-diagonal entries of any matrix in $\mathcal{T}_{2}$ belong to $\mathcal{N}_{C} \oplus \mathcal{N}_{R}.$ [\[thm:ncnr_space_incoherence\]]{#thm:ncnr_space_incoherence label="thm:ncnr_space_incoherence"} Let $K \in \mathbb{R}^{n_1\times n_2}$ such that each row has at most $c n_2$ non-zero entries for some $0 \leq c \leq 1$. Then, for any $L \in \mathcal{N}_{C}$ we have that $$| \langle K, L \rangle | \leq \sqrt{c} \| L \|_F \| K \|_F.$$ The same conclusion holds if each column of $K$ has at most $c n_1$ non-zero entries and $L\in \mathcal{N}_R$. *Proof of Lemma [\[thm:ncnr_space_incoherence\]](#thm:ncnr_space_incoherence){reference-type="ref" reference="thm:ncnr_space_incoherence"}.* Since $L \in \mathcal{N}_{C}$, it has constant columns. 
Suppose its column entries are $(\theta_1,\ldots,\theta_{n_1})$, i.e., $L_{x,y}=\theta_x$ for all $y$. We then have $$\langle K, L \rangle = \mathrm{tr}(KL^T) = \sum_{x} (\sum_{y} K_{x,y} L_{x,y}) = \sum_{x} \theta_x \sum_{y} K_{x,y} {\leq} \sum_{x} \theta_x \sqrt{cn_2} (\sum_{y} K_{x,y}^2)^{1/2},$$ where the last inequality follows from Cauchy-Schwarz, and since there are at most $c n_2$ non-zero entries in each column of $K$. Finally, we have $$\sqrt{cn_2} \sum_{x} \theta_x (\sum_{y} K_{x,y}^2)^{1/2} \leq \sqrt{cn_2} (\sum_x \theta_x^2)^{1/2} (\sum_{x,y} K_{x,y}^2)^{1/2} = \sqrt{c} \|L\|_F \|K\|_F,$$ where in the last equality we use that $\|L\|_F = \sqrt{n_2}(\sum \theta_i^2)^{1/2}$. ◻ [\[thm:ncnr_combined\]]{#thm:ncnr_combined label="thm:ncnr_combined"} Consider $K \in \mathbb{R}^{n_1\times n_2}$ where each column has at most $c n_1$ non-zero entries, and each row has at most $c n_2$ non-zero entries for some $0 \leq c \leq 1$. For any $L \in \mathcal{N}_{C}\oplus \mathcal{N}_{R}$ we have: $$| \langle K, L \rangle | \leq \sqrt{2c} \| L \|_F \| K \|_F.$$ *Proof of Corollary [\[thm:ncnr_combined\]](#thm:ncnr_combined){reference-type="ref" reference="thm:ncnr_combined"}.* By Lemma [\[thm:ncnr_space_incoherence\]](#thm:ncnr_space_incoherence){reference-type="ref" reference="thm:ncnr_space_incoherence"} for $L \in \mathcal{N}_C,$ or $L\in \mathcal{N}_R$ we have that $$| \langle K, L \rangle | \leq \sqrt{c} \| L \|_F \| K \|_F.$$ As $\be^T \bf_j=0$, $\mathcal{N}_{C}$ and $\mathcal{N}_{R}$ are orthogonal subspaces. The result follows by Lemma [\[thm:orthogonalsubspace_incoherence\]](#thm:orthogonalsubspace_incoherence){reference-type="ref" reference="thm:orthogonalsubspace_incoherence"}. ◻ [\[thm:incoherence_efsum\]]{#thm:incoherence_efsum label="thm:incoherence_efsum"} Suppose $G$ is a graph satisfying the $\cscc$ property. Then, for any $K \in \tilde{\mathcal{K}}^\perp$ and $L\in \mathcal{T}_2$ we have that $$| \langle K,L \rangle| \leq \sqrt{2c} \|K\|_F \|L\|_F.$$ *Proof.* As $K \in \tilde{\mathcal{K}}^\perp$, its diagonal blocks are zero, and also, entries corresponding to non-edges of $G$ are zero. Moreover, as $G$ has the $c$-SCC property, each row (and column) of $K$ has at most $cn$ non-zero entries. Let the block matrices be indexed by $(x,y)$, and let $L_{xy}$ and $K_{xy}$ denote the $xy$-th block. Recall that the block off-diagonal entries of any matrix in $\mathcal{T}_{2}$ belong to $\mathcal{N}_{C} \oplus \mathcal{N}_{R}.$ By Corollary [\[thm:ncnr_combined\]](#thm:ncnr_combined){reference-type="ref" reference="thm:ncnr_combined"} we have $| \langle K_{xy}, L_{xy} \rangle | \leq \sqrt{2c} \| L_{xy} \|_F \| K_{xy} \|_F$. By summing over the blocks and by applying Cauchy-Schwarz, we have $$\begin{aligned} | \langle K , L \rangle | & \leq \sum_{x,y} |\langle K_{xy} , L_{xy} \rangle |= \sum_{x\ne y} |\langle K_{xy} , L_{xy} \rangle | \\ & \leq \sqrt{2c} \sum_{x\ne y} \| K_{xy} \|_F \| L_{xy} \|_F \leq \sqrt{2c} (\sum_{x,y} \| K_{xy} \|_F^2)^{1/2} (\sum_{x,y} \| L_{xy} \|_F^2)^{1/2} = \sqrt{2c} \| K \|_F \| L \|_F. \end{aligned}$$ ◻ #### Bounding the inner product between $\tilde{\mathcal{K}}^\perp$ and $\mathcal{T}_1$. This case is easier and the required result is given in the next lemma. [\[thm:incoherence_Espan\]]{#thm:incoherence_Espan label="thm:incoherence_Espan"} Suppose $G$ is a graph satisfying the $\cscc$ property. 
Then, for any $K \in \tilde{\mathcal{K}}^\perp$ and $L\in \mathcal{T}_1$ we have that $$|\langle K, L \rangle | \leq \sqrt{c} \| K \|_F \| L \|_F .$$ *Proof of Lemma [\[thm:incoherence_Espan\]](#thm:incoherence_Espan){reference-type="ref" reference="thm:incoherence_Espan"}.* Note that $K$ has no entries in each block diagonal. Consider the $(i,j)$-th block matrix where $i \neq j$. We denote the coordinates in this block by $\mathcal{B}_{i,j}$. Then $$\sum_{x,y \in \mathcal{B}_{i,j}} K_{x,y} L_{x,y} = \theta_{i,j} \left( \sum_{x,y \in \mathcal{B}_{i,j}} K_{x,y} \right) \leq \sqrt{c |\ccs_i| |\ccs_j|} \theta_{i,j} \left( \sum_{x,y \in \mathcal{B}_{i,j}} K_{x,y}^2 \right)^{1/2}.$$ The last inequality follows from Cauchy-Schwarz, and by noting that $K$ has at most $c |\ccs_i| |\ccs_j|$ non-zero entries within the block $\mathcal{B}_{i,j}$. Then by summing over the blocks $(i,j)$ we have $$|\langle K,L \rangle| \leq \sum_{i,j} \sqrt{c|\ccs_i| |\ccs_j|} \theta_{i,j} ( \sum_{x,y \in \mathcal{B}_{i,j}} K_{x,y}^2 )^{1/2} \leq \sqrt{c} (\sum_{i,j} |\ccs_i| |\ccs_j|\theta_{i,j}^2)^{1/2} ( \sum_{x,y} K_{x,y}^2 )^{1/2}.$$ The last inequality follows from Cauchy-Schwarz. Now note that $( \sum_{x,y} K_{x,y}^2 )^{1/2} = \|K\|_F$, and that $(\sum_{i,j} |\ccs_i| |\ccs_j| \theta_{i,j}^2)^{1/2} = \|L\|_F$, from which the result follows. ◻ Finally, we are ready to prove Theorem [\[thm:incoherence_1\]](#thm:incoherence_1){reference-type="ref" reference="thm:incoherence_1"}. #### Proof of Theorem [\[thm:incoherence_1\]](#thm:incoherence_1){reference-type="ref" reference="thm:incoherence_1"}. Consider $\tilde{K} \in \tilde{\mathcal{K}}^\perp$ and $\tilde{L} \in \tilde{\mathcal{L}}^\perp$. By Proposition [\[thm:description_of_lperp\]](#thm:description_of_lperp){reference-type="ref" reference="thm:description_of_lperp"} we have that $\tilde{\mathcal{L}}^\perp=\mathcal{T}_1\oplus \mathcal{T}_2$ and let $\tilde{L} =\tilde{L}_1+\tilde{L}_2$, where $\tilde{L}_i\in \mathcal{T}_i$. **Part $(i)$.** By Lemma [\[thm:incoherence_Espan\]](#thm:incoherence_Espan){reference-type="ref" reference="thm:incoherence_Espan"} we have that $$| \langle \tilde{K}, \tilde{L}_1 \rangle | \leq \sqrt{c}\|\tilde{K}\|_F \|\tilde{L}_1 \|_F.$$ Let $\tilde{L}_{2,\mathrm{off}}$ be the block off-diagonal component of $\tilde{L}_2$ and $\tilde{L}_{2,\mathrm{diag}}$ be the block diagonal component of $\tilde{L}_2$ so that $\tilde{L}_2 = \tilde{L}_{2,\mathrm{off}} + \tilde{L}_{2,\mathrm{diag}}$. By definition, the matrix $\tilde{L}_{2,\mathrm{off}}$ is zero on the diagonal blocks and each block belongs to $\mathcal{N}_{C} \oplus \mathcal{N}_{R}$. Hence by Lemma [\[thm:incoherence_efsum\]](#thm:incoherence_efsum){reference-type="ref" reference="thm:incoherence_efsum"}, we have $$| \langle \tilde{K}, \tilde{L}_{2,\mathrm{off}} \rangle | \leq \sqrt{2c} \| \tilde{K} \|_F \| \tilde{L}_{2,\mathrm{off}} \|_F.$$ Moreover, as the diagonal blocks of $\tilde{K}$ are zero we have that $$\langle \tilde{K}, \tilde{L}_{2,\mathrm{diag}} \rangle =0.$$ Putting everything together, $$| \langle \tilde{K}, \tilde{L}_2 \rangle | = | \langle K, \tilde{L}_{2,\mathrm{off}} \rangle | \leq \sqrt{2c} \| \tilde{K} \|_F \| \tilde{L}_{\mathrm{2,off}} \|_F \leq \sqrt{2c} \| \tilde{K} \|_F \| \tilde{L} \|_F,$$ where the last inequality follows by noting that $\| L \|_F^2 = \| L_{\mathrm{off}} \|_F^2 + \| L_{\mathrm{diag}} \|_F^2$. 
Finally, by Proposition [\[thm:description_of_lperp\]](#thm:description_of_lperp){reference-type="ref" reference="thm:description_of_lperp"}, the subspaces $\mathcal{T}_1$ and $\mathcal{T}_2$ are orthogonal. Hence by Lemma [\[thm:orthogonalsubspace_incoherence\]](#thm:orthogonalsubspace_incoherence){reference-type="ref" reference="thm:orthogonalsubspace_incoherence"} applied to the subspaces $\mathcal{T}_1$ and $\mathcal{T}_2$ we have $| \langle \tilde{K}, \tilde{L} \rangle | \leq \sqrt{2} \times \sqrt{2c} \|\tilde{K}\|_F \|\tilde{L}\|_F$.

**Part $(ii)$.** Note that $$\| P_{\mathcal{K}^\perp}(\tilde{L}) \|_F^2 = \langle P_{\mathcal{K}^\perp}(\tilde{L}), P_{\mathcal{K}^\perp}(\tilde{L}) \rangle = \langle P_{\mathcal{K}^\perp} (P_{\mathcal{K}^\perp}(\tilde{L})), \tilde{L} \rangle = \langle P_{\mathcal{K}^\perp}(\tilde{L}), \tilde{L} \rangle,$$ where the second and third equalities follow from the fact that orthogonal projections are self-adjoint and idempotent. Since $P_{\mathcal{K}^\perp}(\tilde{L}) \in \mathcal{K}^\perp$, by Proposition [\[thm:incoherence_1\]](#thm:incoherence_1){reference-type="ref" reference="thm:incoherence_1"}, we have $$|\langle P_{\mathcal{K}^\perp}(\tilde{L}), \tilde{L} \rangle| \leq 2\sqrt{c} \|P_{\mathcal{K}^\perp}(\tilde{L})\|_F \|\tilde{L}\|_F \leq 2\sqrt{c} \|\tilde{L}\|_F^2,$$ from which the result follows. ◻
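As a quick numerical sanity check of the Cauchy--Schwarz computations above (a minimal illustrative sketch, not part of the original development; the dimensions and the sparsity level below are arbitrary choices), one can verify the bound $|\langle K, L \rangle| \leq \sqrt{c}\, \|L\|_F \|K\|_F$ for a matrix $L$ with constant columns and a matrix $K$ with at most $c n_2$ non-zero entries per row:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, c = 40, 60, 0.2

# L with constant columns: L[x, y] = theta[x] for all y, as in the proof above.
theta = rng.standard_normal(n1)
L = np.tile(theta[:, None], (1, n2))

# K with at most c * n2 non-zero entries in each row.
K = np.zeros((n1, n2))
k = int(c * n2)
for x in range(n1):
    cols = rng.choice(n2, size=k, replace=False)
    K[x, cols] = rng.standard_normal(k)

inner = np.trace(K @ L.T)                                    # <K, L>
bound = np.sqrt(c) * np.linalg.norm(L) * np.linalg.norm(K)   # sqrt(c) ||L||_F ||K||_F
print(abs(inner) <= bound + 1e-9)                            # expected: True
```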
arxiv_math
{ "id": "2310.00257", "title": "The Lov\\'asz Theta Function for Recovering Planted Clique Covers and\n Graph Colorings", "authors": "Jiaxin Hou and Yong Sheng Soh and Antonios Varvitsiotis", "categories": "math.OC cs.DS cs.IT math.CO math.IT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
---
abstract: |
  Accuracy certificates for convex minimization problems allow for online verification of the accuracy of approximate solutions and provide a theoretically valid online stopping criterion. When solving the Lagrange dual problem, accuracy certificates produce a simple way to recover an approximate primal solution and estimate its accuracy. In this paper, we generalize accuracy certificates to the setting of an inexact first-order oracle, including the setting of a primal and Lagrange dual pair of problems. We further propose an explicit way to construct accuracy certificates for a large class of cutting plane methods based on polytopes. As a by-product, we show that the considered cutting plane methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we illustrate the behavior of the proposed certificates in numerical experiments, highlighting that they provide a tight upper bound on the objective residual.
author:
- Egor Gladin
- Alexander Gasnikov
- Pavel Dvurechensky
bibliography:
- references.bib
title: Accuracy Certificates for Convex Minimization with Inexact Oracle
---

# Introduction

The authors of [@nemirovski2010accuracy] introduced the notion of accuracy certificates for convex minimization and other problems with a convex structure. These certificates verify the accuracy of an approximate solution at any stage of an optimization algorithm's execution. Although many algorithms have convergence rate estimates, those often involve parameters unknown in practice, e.g., a constant of Lipschitz continuity of the objective, the distance from the starting point to the closest solution, and so on. Accuracy certificates, on the contrary, verify the accuracy of an approximate solution without additional a priori information about the particular problem. Moreover, the accuracy can be verified online and on the fly using the already available information generated by the algorithm. Thus, accuracy certificates provide a theoretically valid and practical stopping criterion. Furthermore, certificates allow an external recipient to verify the accuracy guarantees without knowing how the algorithm works. This can be useful in some cases where privacy is a priority.

Certificates are extremely useful when an algorithm is applied to the Lagrange dual optimization problem. In this case, they can be used to convert an $\epsilon$-optimal dual solution into an $\epsilon$-optimal solution to the primal problem. Moreover, this approach allows one to reuse the information already generated by the algorithm, and the approximate primal solution is reconstructed in a direct way, without knowledge of additional problem parameters. Remarkably, the certificate-based approach allows one to circumvent the following disadvantages of the approach based on the regularization of the primal problem [@devolder2012double; @gasnikov2016efficient].

- The regularization approach uses an upper bound on the norm of a primal solution. In many cases, such a bound is not available or is overestimated, which leads to slower convergence.
- It requires the target accuracy to be fixed in advance. This raises difficulties when the time limit is exceeded before this accuracy is reached, or when the user decides in the middle of the process that a higher target accuracy is needed.
- In some cases, regularization is not enough to reconstruct the primal solution.
For example, one needs to impose a $\beta$-ergodicity assumption when applying a dual approach to optimizing a constrained Markov decision process [@gladin2023algorithm].
- In some cases, the accuracy deteriorates when one converts a dual point to a primal solution in this way. For example, accuracy $\epsilon$ of a dual solution might result in accuracy $\sqrt{\epsilon}$ of the respective primal solution [@gladin2023algorithm].

**Related literature.** The paper [@nemirovski2010accuracy] provides a way to compute accuracy certificates for the ellipsoid method. After $\tau$ iterations of the method applied to an $n$-dimensional problem, computation of certificates requires $O(n^3)+O(\tau n^2)$ arithmetic operations (a.o.). Moreover, the authors mention that one can compute certificates for other cutting-plane methods in a similar fashion by approximating localizer sets with John ellipsoids (for a notion of John ellipsoid see, e.g., [@boyd2004convex], Chapter 8.4). However, we didn't find any uses of this procedure with algorithms other than the ellipsoid method. A possible reason for this is the high computational cost of approximating John ellipsoids [@khachiyan1990complexity; @nemirovski1999self; @anstreicher2002improved; @kumar2005minimum; @todd2007khachiyan; @cohen2019near]. Fortunately, we show that there is a way to build certificates for polytope-based cutting plane algorithms in a straightforward way by approximately solving a single linear program, which according to [@van2020deterministic] takes only $\widetilde{O}\left(n^{\omega} \right)$ a.o., where $\widetilde{O}$ hides polylog$(n)$ factors. We also note that the certificates proposed in [@nemirovski2010accuracy] are constructed under the assumption that the first-order information, i.e., subgradients, in the problem is available exactly, which may not always happen in practice.

**Contributions** of this paper are as follows:

- Generalizing the work [@nemirovski2010accuracy], we investigate the properties of accuracy certificates in the setting of minimization problems with an *inexact first-order oracle*;
- We develop a simple and efficient way to obtain accuracy certificates for a large class of cutting plane methods, including Vaidya's method [@vaidya1989new; @vaidya1996new], the Atkinson-Vaidya algorithm [@atkinson1995cutting] and many others;
- We show that the considered methods can be efficiently used with a noisy oracle even if they were originally designed to be used with an exact oracle;
- We consider convex problems with (possibly nonlinear) convex inequality constraints and establish a straightforward way to obtain an approximate primal solution based on the information obtained by a method with certificates applied to the dual problem. Generalizing [@nemirovski2010accuracy], we consider nonlinear constraints and allow for inexact solutions of auxiliary problems in each iteration.

The rest of the paper is organized as follows. In Section [2](#S:certificates){reference-type="ref" reference="S:certificates"}, we state the minimization problem and define the separation oracle and the inexact first-order oracle that are used in the algorithms. We also formally define certificates and prove their main property, namely, an upper bound for the objective residual based on a certificate. Section [3](#S:primal-dual){reference-type="ref" reference="S:primal-dual"} is devoted to the primal-dual setting where we consider convex optimization problems with convex inequality constraints.
In particular, we construct separation and inexact oracles for this setting and propose a way to reconstruct an approximate primal solution based on a certificate for the dual problem, with the same accuracy both in terms of the primal objective and constraint violation. In Section [4](#S:certificates_construction){reference-type="ref" reference="S:certificates_construction"}, we describe a wide class of cutting plane methods and propose a way to construct accuracy certificates for these methods. Finally, in Section [5](#S:experiments){reference-type="ref" reference="S:experiments"}, we illustrate the practical efficiency of the proposed certificates.

# Certificates and Their Properties {#S:certificates}

## Problem Formulation

Consider a convex minimization problem (CMP) $$\label{eq:cmp} \mathrm{Opt}= \min_{x\in X} F(x),$$ where

- $X \subset \mathbb{R}^n$ is a solid (convex compact set with a nonempty interior) represented by a *Separation oracle* -- a black box which, given on input a point $x \in \mathbb{R}^n$, reports whether or not $x \in \operatorname{int}X$, and in the case of $x \notin \operatorname{int}X$, returns a *separator* -- a vector $e \neq 0$ such that $\langle e, y-x\rangle \leq 0$ for all $y \in X$.
- $F: X \rightarrow \mathbb{R}\cup\{+\infty\}$ is a convex function with $\operatorname{Dom}(F)=\{x: F(x)<\infty\} \supseteq \operatorname{int}X$; this function is represented by a $\delta$-*oracle* -- a black box which, given on input a point $x \in \operatorname{int} X$, returns a value $\tilde{F}(x)$ such that $|\tilde{F}(x)-F(x)|<\delta$, and a $\delta$-subgradient $\tilde{F}^{\prime}(x)\in \partial_\delta F(x)$ of $F$ at $x$, i.e., a vector $\tilde{F}^{\prime}(x)$ satisfying $$\label{eq:delta_subgrad} F(y) \geq F(x) + \langle\tilde{F}^{\prime}(x), y-x\rangle-\delta\quad \forall y \in X.$$

A point $x \in \operatorname{int} X$ is called a *strictly feasible* solution to [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"}. A proximity measure for such a point $x$ to optimality is defined by $$\epsilon_{\mathrm{opt}}(x)=F(x)-\inf _{y \in X} F(y)=F(x)-\mathrm{Opt}.$$ A strictly feasible point $x$ is called $\epsilon$-optimal for [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"}, if $\epsilon_{\mathrm{opt}}(x) \leq \epsilon$, that is, if $F(x) \leq \mathrm{Opt}+\epsilon$.

## Certificates for Convex Minimization Problems

A computational method for solving the problem [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"} within a prescribed accuracy $\epsilon>0$ produces execution protocols $P_\tau=\left\{\left(x_t, e_t\right)\right\}_{t=1}^\tau$, where

- $\tau \in \mathbb{N}$ is the current number of steps,
- $x_t \in \mathbb{R}^n$ are the search points generated so far,
- $e_t$ is either a nonzero vector, reported by the Separation oracle and separating $x_t$ and $X$ (this is the case at *nonproductive* steps $t$ -- those with $\left.x_t \notin \operatorname{int} X\right)$, or is a $\delta$-subgradient $\tilde{F}^{\prime}\left(x_t\right)$ of $F$ at $x_t$ reported by the $\delta$-oracle (this is the case at *productive* steps $t$ -- those with $x_t \in \operatorname{int} X$).

The range $1 \leq t \leq \tau$ of the values of $t$ associated with an execution protocol $P_\tau$ is split into the sets $I_\tau, J_\tau$ of indices of productive, resp., nonproductive steps, and the protocol is augmented by the approximate values $\tilde{F}\left(x_t\right)$ of the objective at productive search points $x_t$ -- those with $t \in I_\tau$.
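For concreteness, an execution protocol can be stored as a plain log of the run; the following sketch (a hypothetical container, not code from the paper) records the search points $x_t$, the vectors $e_t$, the productive/nonproductive split $I_\tau, J_\tau$, and the approximate values $\tilde{F}(x_t)$ at productive steps.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExecutionProtocol:
    """P_tau = {(x_t, e_t)} plus the split into productive (I_tau) and
    nonproductive (J_tau) steps and the values F~(x_t) at productive steps."""
    xs: List = field(default_factory=list)            # search points x_t
    es: List = field(default_factory=list)            # separators or delta-subgradients e_t
    productive: List[bool] = field(default_factory=list)
    f_tilde: List[Optional[float]] = field(default_factory=list)

    def record(self, x, e, is_productive, f_val=None):
        self.xs.append(x)
        self.es.append(e)
        self.productive.append(is_productive)
        self.f_tilde.append(f_val if is_productive else None)

    @property
    def I_tau(self):
        return [t for t, p in enumerate(self.productive, start=1) if p]
```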
We are about to demonstrate a natural way, offered by certificates, to certify $\epsilon$-optimality of a strictly feasible solution; certificates are defined as follows. Let $P_\tau$ be an execution protocol. A *certificate* for this protocol is a collection $\xi=\left\{\xi_t\right\}_{t=1}^\tau$ of weights such that

- $\xi_t \geq 0$ for each $t=1, \ldots, \tau$,
- $\sum_{t \in I_\tau} \xi_t=1$.

Note that certificates exist only for protocols with nonempty sets $I_\tau$. Given a solid $\mathbf{B}$ known to contain $X$, an execution protocol $P_\tau$ and a certificate $\xi$ for this protocol, we define the quantity $$\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, \mathbf{B}\right) \equiv \max _{x \in \mathbf{B}} \sum_{t=1}^\tau \xi_t\left\langle e_t, x_t-x\right\rangle$$ which we call the *residual of* the certificate $\xi$ on $\mathbf{B}$. Moreover, we define *the approximate solution induced by* $\xi$ $$x^\tau[\xi] := \sum_{t \in I_\tau} \xi_t x_t$$ which clearly is a strictly feasible solution to [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"}. The role of the quantities just defined in certifying accuracy of approximate solutions to [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"} stems from the following

[\[prop:resid_def\]]{#prop:resid_def label="prop:resid_def"} Let $P_\tau$ be a $\tau$-point execution protocol associated with the CMP [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"}, $\xi$ be a certificate for $P_\tau$ and $\mathbf{B} \supset X$ be a solid. Then $x^\tau=x^\tau[\xi]$ is a strictly feasible solution of the given CMP, with $$\epsilon_{\mathrm{opt}}\left(x^\tau\right) \leq \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, \mathbf{B}\right) + \delta.$$

Proof can be found in Appendix [7.1](#proof:resid_def){reference-type="ref" reference="proof:resid_def"}.

# Recovering Approximate Primal Solution from Dual {#S:primal-dual}

Consider a convex optimization problem with constraints $$\label{eq:primal} \mathrm{Opt}= \min_{u \in U} \{f(u) : g(u) \leq 0\},$$ where $g(u)=[g_1(u),\ldots, g_n(u)]^\top$, the $g_j(u)$ are convex functions, and $U$ is a closed convex set. We assume the problem to be bounded below. A natural way to solve it is to consider its Lagrange dual problem: $$\label{eq:dual} \min_{x \geq 0} F(x), \quad F(x)=-\min_{u \in U}\{\underbrace{f(u)+\langle x, g(u)\rangle}_{\phi(u,x)} \}.$$ Assuming that [\[eq:primal\]](#eq:primal){reference-type="eqref" reference="eq:primal"} satisfies the Slater condition (so that [\[eq:dual\]](#eq:dual){reference-type="eqref" reference="eq:dual"} is solvable) and that we have at our disposal an upper bound $L$ on the norm $\left\|x_*\right\|_p$ of an optimal solution $x_*$ to [\[eq:dual\]](#eq:dual){reference-type="eqref" reference="eq:dual"}, we can reduce the problem to solving the following CMP: $$\label{eq:dual2} \min_{x \in X} F(x), \quad X = \left\{x \geq 0:\|x\|_p \leq L+1\right\}.$$ We further assume that $\phi(\cdot,x)$ is bounded from below for every $x\in X$, i.e., $X \subseteq \mathrm{dom}F \equiv \{x\in \mathbb{R}^n: \min_{u \in U}\phi(u,x) > -\infty \}$. This is the case, for example, if $U$ is compact or if $f(u)$ is strongly convex.

## Separation and $\delta$-oracles

A separation oracle for $X$ is easily constructed in the following way: let $x'\notin \operatorname{int}X$. If $x_i' \leq 0$, then the vector $-e_i$ (having $-1$ in position $i$ and $0$ elsewhere) is a separator since for any $x\in X$ it holds $-e_i^\top x \leq 0 \leq -x_i' = -e_i^\top x'$.
If $\|x'\|_p \geq L+1$, then let $a\in \mathbb{R}^n$ be the vector satisfying $$\operatorname{sign} a_i = \operatorname{sign} x_i',\; |a_i|^q=\textstyle \frac{|x_i'|^p}{\|x'\|_p^p},\; i=1,\ldots, n,$$ so that Hölder's inequality for $a$ and $x'$ becomes an equality: $a^\top x' = \|a\|_q \|x'\|_p \geq L+1$ since $\|a\|_q=1$. Thus, $a$ is a separator since for any $x\in X$ it holds $a^\top x \leq \|a\|_q \|x\|_p \leq L+1 \leq a^\top x'$.

Let us now show that it is easy to equip $F$ with a $\delta$-oracle provided that the aforementioned assumptions hold and that an efficient first-order method for solving the convex problem $\min_{u \in U}\phi(u,x)$ up to a prescribed accuracy $\delta$ is available. Let $u_x$ be the point returned by such a method, i.e., $\phi(u_x,x) - \min_{u \in U}\phi(u,x) \leq \delta$ (we also write: $u_x \in \arg\underset{u\in U}{\min}^\delta\phi(u,x)$). It follows from the argument on page 132 of [@polyak1983intro] that $-g(u_x)\in \partial_\delta F(x)$. We provide this argument below: $$\begin{aligned} \forall x'\in X,\quad F(x')&=-\min_{u \in U}\phi(u,x')\geq -\phi(u_x,x') = -\phi(u_x,x)-\langle g(u_x), x'-x\rangle\\ &\geq F(x)-\langle g(u_x), x'-x\rangle-\delta.\end{aligned}$$

## Reconstructing Primal Solution

With separation and $\delta$-oracles at hand, we can solve the dual problem [\[eq:dual2\]](#eq:dual2){reference-type="eqref" reference="eq:dual2"}. It turns out that accuracy certificates allow us to recover a nearly feasible and nearly optimal solution for [\[eq:primal\]](#eq:primal){reference-type="eqref" reference="eq:primal"}. The following statement generalizes Proposition 5.1 from [@nemirovski2010accuracy].

[\[prop:primal_dual\]]{#prop:primal_dual label="prop:primal_dual"} Let [\[eq:dual2\]](#eq:dual2){reference-type="eqref" reference="eq:dual2"} be solved by a black-box-oriented method, $P_\tau=\left\{I_\tau, J_\tau,\left\{x_t, e_t\right\}_{t=1}^\tau\right\}$ be the execution protocol upon termination, with $$e_t = -g(u_t),\quad u_t \in \arg\underset{u\in U}{\min}^\delta\phi(u,x_t),\quad t \in I_\tau.$$ Let also $\xi$ be an accuracy certificate for this protocol. Set $\hat{u}=\sum_{t \in I_\tau} \xi_t u_{t}$, then $\hat{u} \in U$ and $$\begin{aligned} \left\|[g(\hat{u})]_{+}\right\|_q &\leq \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, X\right) + \delta, \label{eq:constr_viol} \\ f(\hat{u})-\mathrm{Opt}&\leq \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, X\right) + \delta, \label{eq:opt_gap} \end{aligned}$$ where $[g(\hat{u})]_{+}$ is the "vector of constraint violations" obtained from $g(\hat{u})$ by replacing the negative components with 0, and $q=p /(p-1)$ is the conjugate exponent of $p$.

Proof can be found in Appendix [7.2](#proof:primal_dual){reference-type="ref" reference="proof:primal_dual"}. Proposition [\[prop:primal_dual\]](#prop:primal_dual){reference-type="ref" reference="prop:primal_dual"} shows that the vector $\hat{u}=\sum_{t \in I_\tau} \xi_t u_{t}$ is nearly feasible and nearly optimal for [\[eq:primal\]](#eq:primal){reference-type="eqref" reference="eq:primal"}, provided that $\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, X\right)$ is small.
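A minimal sketch of the two oracles just described (an illustration under the assumptions above, not the authors' implementation). The routine `inner_solver`, which minimizes $\phi(\cdot, x)$ over $U$ to accuracy $\delta$, is assumed to be supplied by the user.

```python
import numpy as np

def separation_oracle(x, p, L):
    """X = {x >= 0 : ||x||_p <= L + 1}. Returns (True, None) if x in int X,
    otherwise (False, e) with a separator e such that <e, y - x> <= 0 for all y in X."""
    if np.any(x <= 0):
        i = int(np.argmin(x))
        e = np.zeros_like(x)
        e[i] = -1.0                                   # the separator -e_i from the text
        return False, e
    if np.linalg.norm(x, ord=p) >= L + 1:
        # Hoelder separator a with ||a||_q = 1, q = p / (p - 1)
        a = np.sign(x) * np.abs(x) ** (p - 1) / np.linalg.norm(x, ord=p) ** (p - 1)
        return False, a
    return True, None

def delta_oracle(x, f, g, inner_solver):
    """delta-oracle for F(x) = -min_{u in U} phi(u, x), phi(u, x) = f(u) + <x, g(u)>.
    Returns the approximate value F~(x), the delta-subgradient -g(u_x), and u_x."""
    u_x = inner_solver(x)                             # delta-accurate minimizer of phi(., x)
    F_tilde = -(f(u_x) + x @ g(u_x))
    return F_tilde, -g(u_x), u_x
```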
# Accuracy Certificates for Cutting Plane Methods {#S:certificates_construction} ## Generic Cutting Plane Algorithm with $\delta$-Oracle A generic cutting plane algorithm with $\delta$-oracle, as applied to a CMP [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"}, builds a sequence of search points $x_t \in \mathbb{R}^n$ along with a sequence of localizers $Q_t$ -- solids such that $x_t \in \operatorname{int} Q_t, t=1,2, \ldots$. The algorithm is as follows: **Initialization:** Choose a solid $Q_1 \supset X$ and a point $x_1 \in \operatorname{int} Q_1$. **Step** $t=1,2, \ldots$: given $x_t, Q_t$, 1. Call Separation oracle, $x_t$ being the input. If the oracle reports that $x_t \in \operatorname{int} X$ (productive step), go to 2. Otherwise (nonproductive step) the oracle reports a separator $e_t \neq 0$ such that $\left\langle e_t, x-x_t\right\rangle \leq 0$ for all $x \in X$. Go to 3. 2. Call $\delta$-oracle to compute $e_t=\tilde{F}^{\prime}(x_t)\in \partial_\delta F(x_t)$. If $e_t=0$, terminate, otherwise go to 3. 3. [\[step_3\]]{#step_3 label="step_3"} Set $$\widehat{Q}_{t+1}=\left\{x \in Q_t:\left\langle e_t, x-x_t\right\rangle \leq 0\right\} .$$ Choose as $Q_{t+1}$, a solid which contains the solid $\widehat{Q}_{t+1}$. Choose $x_{t+1} \in \operatorname{int} Q_{t+1}$ and loop to step $t+1$. For a solid $B \subset \mathbb{R}^n$, let $\rho(B)$ be the radius of Euclidean ball in $\mathbb{R}^n$ with the same $n$-dimensional volume as the one of $B$. A cutting plane algorithm with $\delta$-oracle applied to the problem [\[eq:cmp\]](#eq:cmp){reference-type="eqref" reference="eq:cmp"} is called converging if for the associated localizers $Q_t$ one has $\rho\left(Q_t\right) \rightarrow 0,\, t \rightarrow \infty$. Some examples of converging cutting plane algorithms are the center of gravity method [@levin1965minimization; @newman1965location], the ellipsoid method [@yudin1976informational; @shor1977cutting], the inscribed ellipsoid algorithm [@khachiyan1988method], the circumscribed simplex algorithm [@bulatov1982method; @yamnitsky1982old], Vaidya's algorithm [@vaidya1989new; @vaidya1996new]. ## Polytope-Based Cutting Plane Algorithms {#subsec:polytope_algo} Recall that a full-dimensional polytope is a bounded set with nonempty interior of the form $$Q(A, b) = \{x\in\mathbb{R}^n: a_i^\top x \leq b_i, i=1,\ldots,m\} = \{x\in\mathbb{R}^n: A x\leq b\}$$ for given $$A=\left[\begin{array}{c} a_1^\top \\ \vdots \\ a_{m}^\top \end{array}\right],\; a_1, \ldots, a_{m}\in\mathbb{R}^n,\; b\in\mathbb{R}^{m}.$$ In what follows, we consider implementations of a generic cutting plane algorithm with $\delta$-oracle where localizers are full-dimensional polytopes, i.e., $Q_t = Q(A_t, b_t)$. In what follows, we omit the subscript $t$ for brevity when it doesn't cause ambiguity, i.e., we write $Q_t=Q(A,b)$ and implicitly assume that $m, A$ and $b$ depend on $t$. Moreover, we assume that if the constraint $a_i^\top x \leq b_i$ was added at the step $t(i)$, then $a_i = e_{t(i)}$. Note that item [\[step_3\]](#step_3){reference-type="ref" reference="step_3"} in the description of the generic cutting plane algorithm with $\delta$-oracle implies that $a_i^\top x_{t(i)} \leq b_i$, i.e., when a new constraint is added, the current iterate satisfies it. We will refer to the group of methods described above as *polytope-based cutting plane algorithms with $\delta$-oracle*. 
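The generic scheme above can be sketched as follows (illustrative code, not a particular method from the literature). Here `separation_oracle` and `delta_oracle` are callables in the sense of Section 2, e.g., obtained by fixing the extra arguments of the sketches given earlier, while the localizer update and the choice of the next point are method-specific (Vaidya's method, for instance, maintains a polytope and uses its volumetric center) and are assumed to be supplied.

```python
import numpy as np

def cutting_plane(x1, Q1, separation_oracle, delta_oracle,
                  update_localizer, choose_next, max_iter=500):
    """Generic cutting plane algorithm with delta-oracle (steps 1-3 above)."""
    x, Q = x1, Q1
    protocol = []                                     # records (x_t, e_t, productive?)
    for _ in range(max_iter):
        in_interior, sep = separation_oracle(x)
        if in_interior:                               # productive step: x_t in int X
            _, e, _ = delta_oracle(x)                 # e_t = F~'(x_t), a delta-subgradient
            if not np.any(e):                         # e_t = 0: terminate
                break
            protocol.append((x, e, True))
        else:                                         # nonproductive step
            e = sep
            protocol.append((x, e, False))
        # Q_{t+1}: any solid containing {z in Q_t : <e_t, z - x_t> <= 0}
        Q = update_localizer(Q, x, e)
        x = choose_next(Q)                            # x_{t+1} in int Q_{t+1}
    return protocol, Q
```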
## Building Accuracy Certificates

Consider a *nonterminal* step $\tau$ (i.e., one with $e_\tau \neq 0$) of a polytope-based cutting plane algorithm with $\delta$-oracle. The respective localizer $Q_{\tau+1}$ is formed by the set of constraints $a_i^\top x \leq b_i, i=1,\ldots,m$, which can be divided into three disjoint sets: $\{1, \ldots, m\} = \mathcal{I}_\tau\cup \mathcal{P}_\tau\cup \mathcal{N}_\tau$, where

- $\mathcal{I}_\tau$ (not to be confused with $I_\tau$) corresponds to **I**nitial constraints that were present in $Q_1$,
- $\mathcal{P}_\tau$ (not to be confused with $P_\tau$) corresponds to constraints added during **P**roductive steps of the algorithm,
- $\mathcal{N}_\tau$ corresponds to constraints added during **N**onproductive steps.

Note that if a constraint was removed during execution of the algorithm, it does not appear in any of the sets $\mathcal{I}_\tau, \mathcal{P}_\tau, \mathcal{N}_\tau$. The following LP problem will play a crucial role in building certificates: $$\begin{aligned} \label{eq:lp} \max_{\lambda \in \mathbb{R}^{m}}\, D_{\tau}(\lambda) &:= \sum_{i\in \mathcal{P}_\tau} \lambda_i \| a_i \|_2, \\ \text{s.t. } \lambda &\geq 0,\nonumber \\ A^\top \lambda &= 0,\nonumber \\ b^\top \lambda &\in [0,2].\nonumber\end{aligned}$$

[\[lem:boundedness\]]{#lem:boundedness label="lem:boundedness"} The LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} is feasible and bounded.

Proof can be found in Appendix [7.3](#proof:boundedness_lemma){reference-type="ref" reference="proof:boundedness_lemma"}.

[\[def:cert\]]{#def:cert label="def:cert"} If $\lambda$ is a feasible point in the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} and $d_\tau:= \sum_{i\in \mathcal{P}_\tau} \lambda_i >0$, define $\xi=\{\xi_t\}_1^\tau$ as follows:

1. For every $i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau$, set $\xi_{t(i)}:=\frac{\lambda_i}{d_\tau}$, where $t(i)$ is the step when the constraint $a_i^\top x\leq b_i$ was added,
2. For all other steps $t$, set $\xi_{t}:=0$.

Observe that this definition implies $$\label{eq:D_tau} D_\tau(\lambda) = \sum_{i\in \mathcal{P}_\tau} \lambda_i \| a_i \|_2 = d_\tau \sum_{i\in \mathcal{P}_\tau} \xi_{t(i)} \| e_{t(i)} \|_2 = d_\tau \sum_{t\in I_\tau} \xi_{t} \| e_{t} \|_2.$$ In what follows, we sometimes write $D_\tau$ in place of $D_\tau(\lambda)$ for brevity.

[\[lem:resid\]]{#lem:resid label="lem:resid"} If $\lambda$ is a feasible point for the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} with $d_\tau>0$, then $\xi$ from Definition [\[def:cert\]](#def:cert){reference-type="ref" reference="def:cert"} is a certificate. If $\epsilon_\tau := \frac{2}{D_\tau}<r$, then $$\label{eq:resid2} \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)\leq \frac{\epsilon_\tau}{r-\epsilon_\tau} W_\tau,$$ where $$\label{eq:variation} W_\tau :=\max _{t \in I_\tau} \max _{x \in X}\left\langle e_t, x-x_t\right\rangle,$$ and $r=r(X)$ is the largest of the radii of Euclidean balls contained in $X$.

Proof can be found in Appendix [7.4](#proof:resid_lemma){reference-type="ref" reference="proof:resid_lemma"}. Informally speaking, inequality [\[eq:resid2\]](#eq:resid2){reference-type="eqref" reference="eq:resid2"} shows that the larger $D_\tau$ is, the more accurate the estimate $\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)$ is, provided that $W_\tau$ is bounded.
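The following sketch (illustrative, not the authors' code) shows how a certificate in the sense of Definition [\[def:cert\]](#def:cert){reference-type="ref" reference="def:cert"} could be assembled from the current localizer by solving the LP [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} with `scipy.optimize.linprog`. The bookkeeping arrays `prod_rows` (rows in $\mathcal{P}_\tau$) and `t_of_row` (the map $i \mapsto t(i)$, with a value below $1$ for initial constraints) are assumptions about how the cutting plane method stores its constraints.

```python
import numpy as np
from scipy.optimize import linprog

def build_certificate(A, b, prod_rows, t_of_row, tau):
    """Solve the LP (eq:lp) over the current localizer Q_{tau+1} = {x : A x <= b}
    and turn a solution lambda into certificate weights xi_1, ..., xi_tau."""
    m, n = A.shape
    w = np.zeros(m)
    w[prod_rows] = np.linalg.norm(A[prod_rows], axis=1)      # objective: sum_{i in P} lambda_i ||a_i||_2
    res = linprog(
        c=-w,                                                # linprog minimizes, so negate
        A_eq=A.T, b_eq=np.zeros(n),                          # A^T lambda = 0
        A_ub=np.vstack([b, -b]), b_ub=np.array([2.0, 0.0]),  # 0 <= b^T lambda <= 2
        bounds=[(0, None)] * m,                              # lambda >= 0
    )
    lam = res.x
    d_tau = lam[prod_rows].sum()
    assert d_tau > 0, "the certificate is only defined when d_tau > 0"
    xi = np.zeros(tau)
    for i in range(m):
        if t_of_row[i] >= 1:                                 # rows added at steps in P_tau or N_tau
            xi[t_of_row[i] - 1] = lam[i] / d_tau             # xi_{t(i)} = lambda_i / d_tau
    return xi, lam
```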
[\[thm:D_lower\]]{#thm:D_lower label="thm:D_lower"} An optimal solution $\lambda^*$ for the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} satisfies $$\label{eq:D_lower} D_\tau(\lambda^*) \geq D^{-1}\left(Q_1\right)\left(\frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1\right),$$ where $D\left(Q_1\right)$ is the Euclidean diameter of $Q_1$.

Proof can be found in Appendix [8](#proof:D_lower){reference-type="ref" reference="proof:D_lower"}. Since the quantity $D_\tau$ is always nonnegative, the inequality [\[eq:D_lower\]](#eq:D_lower){reference-type="eqref" reference="eq:D_lower"} can only be useful when $\rho\left(Q_{\tau+1}\right) < \frac{r}{2n}$. Before we move on to the most important corollary, let us mention that the convergence rate of a cutting plane method is basically described by how fast $\rho\left(Q_{\tau+1}\right)$ decreases as $\tau$ grows. For example, for Vaidya's method $$\rho\left(Q_{\tau+1}\right) \leq C_1\cdot \rho\left(Q_{1}\right) e^{-C_2\tau/n}$$ for some $C_1, C_2>0$. It can be shown that this estimate implies $\epsilon_{\mathrm{opt}}\left(x^\tau\right) = O\left(e^{-C_2\tau/n}\right)$, where $x^\tau$ is a point returned by Vaidya's method after $\tau$ iterations.

[\[cor:convergence\]]{#cor:convergence label="cor:convergence"} Let $\alpha\in [0, 1)$ be the relative accuracy in the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"}. If $\tau$ is a nonterminal iteration number of a polytope-based cutting plane algorithm with $\delta$-oracle such that $$\rho\left(Q_{\tau+1}\right) \leq \frac{(1-\alpha) r^2}{16 n D\left(Q_1\right)},$$ and $\lambda$ is a feasible point for the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} with $D_\tau(\lambda) \geq (1-\alpha) D_\tau(\lambda^*)$, then the respective certificate $\xi$ is well defined, and $$\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)\leq \frac{16 n D\left(Q_1\right) W_\tau}{(1-\alpha) r^2} \rho\left(Q_{\tau+1}\right).$$ In particular, if $\sup_{x,y\in X}\left(F(x) - F(y) \right) \leq C < \infty$, then $W_\tau \leq C+\delta$ and $$\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)\leq \frac{16 n D\left(Q_1\right) (C+\delta)}{(1-\alpha) r^2} \rho\left(Q_{\tau+1}\right).$$

Proof can be found in Appendix [8.4](#proof:convergence){reference-type="ref" reference="proof:convergence"}. The parameter $\alpha\in [0, 1)$ provides a trade-off between the number of iterations performed by a cutting plane method and the accuracy of solving the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"}. The complexity of constructing certificates is, in essence, the complexity of solving the LP [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} up to a chosen relative accuracy $\alpha$ (say, $\alpha=1/2$). When a method uses polytopes formed by $\widetilde{O}(n)$ constraints (which is the case, for example, for Vaidya's method), the LP [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"} can be solved in $\widetilde{O}\left(n^\omega \log (n / \alpha)\right)$ arithmetic operations [@van2020deterministic]. Here $\widetilde{O}$ hides polylog$(n)$ factors, and $O\left(n^\omega\right)$ is the time required to multiply two $n \times n$ matrices.
Corollary [\[cor:convergence\]](#cor:convergence){reference-type="ref" reference="cor:convergence"} implies that all polytope-based cutting plane algorithms can be used with a $\delta$-oracle, $\delta \in [0, \varepsilon)$, to achieve an $\varepsilon$-optimal solution provided that their localizers' volumes converge to zero. # Numerical Experiments {#S:experiments} We present the results of numerical experiments which aim to showcase the performance of certificates described in the previous section and compare it to that of the certificates from the paper [@nemirovski2010accuracy]. Vaidya's cutting plane method [@vaidya1989new; @vaidya1996new] is chosen to demonstrate the certificates from Definition [\[def:cert\]](#def:cert){reference-type="ref" reference="def:cert"} in action. Such a choice is made because it fulfills requirements of Subsection [4.2](#subsec:polytope_algo){reference-type="ref" reference="subsec:polytope_algo"}, in particular, its localizers are polytopes. Moreover, it is the first optimal cutting plane method in terms of the number of oracle calls. The ellipsoid method is used to show the performance of the certificates based on Algorithm 4.2 from [@nemirovski2010accuracy] which was designed for this method. Although authors mention that it is possible to adapt the certificate computation procedure to other methods, details are omitted. Furthermore, we didn't find any uses of this procedure with algorithms other than the ellipsoid method. Consider the following nonsmooth convex optimization problem taken from the book [@nesterov2018lectures] (subsection 3.2.1): $$\label{eq:experim} \min_{x\in \mathbb{R}^n} \Bigl\{ F(x):= \max_{i=1,\ldots,n}x_i + \frac{\mu}{2}\|x\|_2^2 \Bigr\}.$$ As proposed in the book, we take the initial point to be $x^0=0$ and let the first-order oracle called at a point $x$ return (apart from the function value) a subgradient $f'(x) = e_{i_*} + \mu x$, where $i_*:=\min\{j\, |\, x_j= \displaystyle\max_{i=1,\ldots,n}x_i \}$. Note that the problem has a closed-form solution $x_* = -\frac{1}{\mu n}\mathbf{1}$ (see [@nesterov2018lectures]), which makes it possible to compute the quantities $\epsilon_{\mathrm{opt}}(x^\tau)$ in the experiment. We turn [\[eq:experim\]](#eq:experim){reference-type="eqref" reference="eq:experim"} into a problem on a solid by setting $X$ to be a Euclidean ball of radius $10\cdot \|x_*\|_2$ centered at the origin. [\[fig:exp\]]{#fig:exp label="fig:exp"} ![Performance of Vaidya's and the ellipsoid methods for problem [\[eq:experim\]](#eq:experim){reference-type="eqref" reference="eq:experim"}. The rows correspond to different dimensions of the problem ($n=10,20,30$). The left and right columns present the results with small ($\mu=0.01$) and medium ($\mu =0.1$) regularization, respectively. X-axis represents number of oracle calls. Solid and dashed lines depict $\epsilon_{\mathrm{opt}}$ for Vaidya's and the ellipsoid methods, respectively. Dotted and dash-dotted lines depict $\epsilon_{\mathrm{cert}}$ for Vaidya's and the ellipsoid methods, respectively.](Fig1.eps "fig:"){#fig:exp width="\\textwidth"} Figure [1](#fig:exp){reference-type="ref" reference="fig:exp"} presents the results of the experiments. The rows represent different dimensions of the problem [\[eq:experim\]](#eq:experim){reference-type="eqref" reference="eq:experim"} ($n=10,20,30$). The left and right columns correspond to small ($\mu=0.01$) and medium ($\mu =0.1$) regularization, respectively. X-axis depicts the number of oracle calls. 
Solid and dashed lines represent $\epsilon_{\mathrm{opt}}$ for Vaidya's and the ellipsoid methods, respectively. Dotted and dash-dotted lines depict $\epsilon_{\mathrm{cert}}$ for Vaidya's and the ellipsoid methods, respectively. As we see from Figure [1](#fig:exp){reference-type="ref" reference="fig:exp"}, the certificates from Definition [\[def:cert\]](#def:cert){reference-type="ref" reference="def:cert"} for Vaidya's method provide an upper bound $\epsilon_{\mathrm{cert}}$ on the optimality gap $\epsilon_{\mathrm{opt}}$ which becomes tight after a few hundred iterations. The certificates based on Algorithm 4.2 from [@nemirovski2010accuracy] for the ellipsoid method yield a less tight bound. We refer the reader to Section [6](#sec:concl){reference-type="ref" reference="sec:concl"} for a discussion of this phenomenon. Moreover, the figure illustrates the fact that Vaidya's method scales much better with the dimension $n$.

## Implementation Details

We use the version of Vaidya's cutting plane method from the paper [@anstreicher1997vaidya] since it is more practical than the original version. The parameters used are $\varepsilon=5\cdot 10^{-3}$, $\tau=1$; see the aforementioned paper for details. Certificates and their residuals were computed after each iteration for illustration purposes. The experiments were conducted using the Python programming language (version 3.11.5) with the packages numpy v1.26.0 and scipy v1.11.3. The source code is available at <https://github.com/egorgladin/vaidya-with-certificates>.

# Conclusions {#sec:concl}

The present paper generalizes the notion of accuracy certificates to the case of convex optimization problems with inexact oracle and establishes properties of such certificates. In particular, we show how they provide a simple way to recover primal solutions when solving a wide class of Lagrange dual problems. Additionally, we develop a new recipe to construct certificates suitable for cutting plane methods which use polytopes as localizers. A prominent example is Vaidya's method, which is asymptotically optimal in terms of the number of oracle calls. The arithmetic complexity of our recipe is equivalent to that of approximately solving an LP problem. Notably, the requirements for the accuracy of such approximate solutions are very mild. As of this writing, this can be done in current matrix multiplication time.

As an important by-product of our analysis, we conclude that all polytope-based cutting plane algorithms can be used with an inexact oracle to achieve a near-optimal solution provided that their localizers' volumes converge to zero. Numerical experiments show that the proposed procedure for computing certificates may be superior to the existing approach which we build on. A possible reason for this phenomenon is that we look for certificates that directly maximize a function used to bound the residual. The previous approach, in contrast, simply produces a point with a sufficiently large value of that function. Although we use such a point in the analysis to bound the optimal value from below, the maximum may turn out to be considerably larger, which leads to better residuals.

A possible way to improve the presented approach for computing certificates is to introduce warm starts. Namely, one could solve the LP problem once, and then update the solution after each iteration as the problem is slightly modified between two consecutive iterations. This may further reduce the complexity of the approach.
Another important direction of future research is the exploration of accuracy certificates for variational inequalities with an inexact oracle.

The work of E. Gladin and P. Dvurechensky is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). The work of A. Gasnikov was supported by Ministry of Science and Higher Education grant No. 075-10-2021-068.

# Proofs of Propositions and Lemmas

## Proof of Proposition [\[prop:resid_def\]](#prop:resid_def){reference-type="ref" reference="prop:resid_def"} {#proof:resid_def}

Since the points $x_t$ with $t \in I_\tau$ belong to $\operatorname{int} X$ and $X$ is convex, $x^\tau$ (which is a convex combination of these points) belongs to $\operatorname{int} X$ and thus is a strictly feasible solution. Define $$\begin{aligned} F_*\left(\xi \mid P_\tau, \mathbf{B}\right) & \equiv \min _{x \in \mathbf{B}}\left[\sum_{t \in I_\tau} \xi_t\left[F\left(x_t\right)+\left\langle e_t, x-x_t\right\rangle\right]+\sum_{t \in J_\tau} \xi_t\left\langle e_t, x-x_t\right\rangle\right] \nonumber \\ & =\sum_{t \in I_\tau} \xi_t F\left(x_t\right)-\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, \mathbf{B}\right) . \label{eq:def_Fstar}\end{aligned}$$ We will first show that $$\label{eq:F_star} F_*\left(\xi \mid P_\tau, \mathbf{B}\right) \leq \text{Opt} + \delta.$$ Let $x \in X$. Then, due to the origin of the vectors $e_t$, we have $\left\langle e_t, x-x_t\right\rangle \leq 0$ for $t \in J_\tau$ and $F\left(x_t\right)+\left\langle e_t, x-x_t\right\rangle \leq F(x) + \delta$ for $t \in I_\tau$. Taking the weighted sum of these inequalities with the weights determined by the certificate $\xi$, we get $$\sum_{t \in I_\tau} \xi_t\left[F\left(x_t\right)+\left\langle e_t, x-x_t\right\rangle\right]+\sum_{t \in J_\tau} \xi_t\left\langle e_t, x-x_t\right\rangle \leq F(x) + \delta.$$ Hence, taking the infimum of both sides over $x \in X \cap \operatorname{Dom} F$, $$\min _{x \in X}\left[\sum_{t \in I_\tau} \xi_t\left[F\left(x_t\right)+\left\langle e_t, x-x_t\right\rangle\right]+\sum_{t \in J_\tau} \xi_t\left\langle e_t, x-x_t\right\rangle\right] \leq \mathrm{Opt} + \delta.$$ It remains to note that the left hand side in this inequality is $\geq F_*\left(\xi \mid P_\tau, \mathbf{B}\right)$ due to $X \subseteq \mathbf{B}$. Now, observe that $$\begin{aligned} \epsilon_{\mathrm{opt}}\left(x^\tau\right) &= F(x^\tau)-\mathrm{Opt} \stackrel{\text{convexity}}{\leq} \sum_{t \in I_\tau}\xi_t F\left(x_t\right) -\mathrm{Opt} \\ &\stackrel{\eqref{eq:F_star}}{\leq} \sum_{t \in I_\tau}\xi_t F\left(x_t\right) - F_*\left(\xi \mid P_\tau, \mathbf{B}\right) + \delta \\ &\stackrel{\eqref{eq:def_Fstar}}{=} \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, \mathbf{B}\right) + \delta.\end{aligned}$$

## Proof of Proposition [\[prop:primal_dual\]](#prop:primal_dual){reference-type="ref" reference="prop:primal_dual"} {#proof:primal_dual}

First, let us provide a lower bound on the certificate residual.
$$\begin{aligned} \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, X\right) =& \max _{x \in X} \sum_{t=1}^\tau \xi_t\left\langle e_t, x_t-x\right\rangle && \\ \geqslant& \max _{x \in X} \sum_{I_\tau} \xi_t\left\langle e_t, x_t-x\right\rangle&& \Big|\; \left\langle e_t, x-x_t\right\rangle\leq 0\; \forall t \in J_\tau, x\in X \\ =&-\sum_{I_\tau} \xi_t\left\langle g\left(u_t\right), x_t\right\rangle+\max _{x \in X}\Bigl\langle\sum_{I_\tau} \xi_t g\left(u_t\right), x\Bigr\rangle&& \Big|\; e_t=-g\left(u_t\right)\; \forall t \in I_\tau \\ \geqslant&-\sum_{I_\tau} \xi_t\left\langle g\left(u_t\right), x_t\right\rangle+\max _{x \in X}\langle g(\hat{u}), x\rangle&& \Big|\; g_i(\underbrace{\textstyle\sum_{I_\tau} \xi_t u_t}_{\hat{u}}) \stackrel{\text{conv.}}{\leqslant} \sum_{I_\tau} \xi_t g_i\left(u_t\right) \\ =&-\sum_{I_\tau} \xi_t\left\langle g\left(u_t\right), x_t\right\rangle+ (L+1)\left\|[g(\hat{u})]_{+}\right\|_q. && \Big|\; \max _{x \in X}\langle g(\hat{u}), x\rangle\stackrel{\text{Hölder}}{=} (L+1)\left\|[g(\hat{u})]_{+}\right\|_q\end{aligned}$$ Since $u_t \in \arg\underset{u\in U}{\min}^\delta\phi(u,x_t), t\in I_\tau$, it holds $$\phi(u_t,x_t) = f\left(u_t\right)+\left\langle g\left(u_t\right), x_t\right\rangle \leq f(u)+\left\langle g(u), x_t\right\rangle+\delta,\quad \forall t \in I_\tau,\, u\in U.$$ Taking the weighted sum of these inequalities over $t\in I_\tau$ with weights $\xi_t$ and using the inequality $f(\hat{u})\leq \sum_{I_\tau} \xi_t f\left(u_t\right)$, we get $$\label{eq:primal_upper_bound} f(\hat{u})-f(u)-\langle g(u), \bar{x}\rangle \leqslant-\sum_{I_\tau} \xi_t\left\langle g\left(u_t\right), x_t\right\rangle+\delta \quad \forall u \in U,$$ where $\bar{x}:=\sum_{I_\tau} \xi_t x_t$. Combining the lower bound on the certificate residual and [\[eq:primal_upper_bound\]](#eq:primal_upper_bound){reference-type="eqref" reference="eq:primal_upper_bound"} where $u=u_*$ (an optimal solution to [\[eq:primal\]](#eq:primal){reference-type="eqref" reference="eq:primal"}), we arrive at $$\begin{aligned} \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, X\right) + \delta &\geqslant (L+1)\left\|[g(\hat{u})]_{+}\right\|_q+f(\hat{u})-f\left(u_*\right) - \left\langle g\left(u_*\right), \bar{x}\right\rangle \label{eq:primal_dual_bound} \\ &\geqslant f(\hat{u})-f\left(u_*\right). \nonumber\end{aligned}$$ Thus, [\[eq:opt_gap\]](#eq:opt_gap){reference-type="eqref" reference="eq:opt_gap"} is established. By strong duality (which holds due to the Slater condition), $$\begin{aligned} f(u_*)=-F(x_*)=\min_{u \in U}\{f(u)+\left\langle g(u), x_*\right\rangle\} &\leq f(\hat{u})+\left\langle g(\hat{u}), x_*\right\rangle\\ &\leq f(\hat{u})+L\left\|[g(\hat{u})]_{+}\right\|_q.\end{aligned}$$ Combining this inequality with [\[eq:primal_dual_bound\]](#eq:primal_dual_bound){reference-type="eqref" reference="eq:primal_dual_bound"}, we arrive at [\[eq:constr_viol\]](#eq:constr_viol){reference-type="eqref" reference="eq:constr_viol"}.

## Proof of Lemma [\[lem:boundedness\]](#lem:boundedness){reference-type="ref" reference="lem:boundedness"} {#proof:boundedness_lemma}

Feasibility is evident since the zero vector satisfies all constraints. Suppose that the feasible set is unbounded; we will show that there then exists a vector $\nu \in \mathbb{R}^m$ such that $$\label{eq:ray} \nu\geq 0,\; A^\top\nu=0,\; b^\top\nu = 0,\; \|\nu\|_2=1.$$ Indeed, let $\{\lambda_k\}_1^\infty$ be an unbounded sequence of feasible points.
Specifically, $$\lambda_k \geq 0,\; A^\top\lambda_k=0,\; b^\top\lambda_k\in [0,2],\; \|\lambda_k\|_2\geq k \quad \forall k \in \mathbb{N}.$$ Define $\nu_k:=\frac{\lambda_k}{\|\lambda_k\|_2}$, then $$\nu_k \geq 0,\; A^\top\nu_k=0,\; b^\top\nu_k\in \bigl[0,\textstyle\frac{2}{k}\bigr],\; \|\nu_k\|_2=1 \quad \forall k \in \mathbb{N}.$$ Let $\nu_{k_j}$ be a convergent subsequence, then its limit $\nu$ satisfies [\[eq:ray\]](#eq:ray){reference-type="eqref" reference="eq:ray"}. Now, let $Q_{\tau+1}=\{x\in \mathbb{R}^n:\; Ax\leq b\}$ be the current localizer. Since it has nonempty interior, there exists $x_+ \in Q_{\tau+1}$ such that $b-A x_+>0$. At the same time, $$\label{eq:ray_dot} \nu^\top(b-Ax_+)=b^\top\nu-x_+^\top A^\top \nu = 0.$$ Since $\nu \geq 0$ and $b-Ax_+>0$, this implies $\nu=0$, which contradicts $\|\nu\|_2=1$. Therefore, the feasible set is bounded.

## Proof of Lemma [\[lem:resid\]](#lem:resid){reference-type="ref" reference="lem:resid"} {#proof:resid_lemma}

The fact that $\xi$ is a certificate follows from its construction. Let us first show that $\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)\leq \frac{2}{d_\tau}$. For any $x\in Q_1$, it holds $a_i^\top x \leq b_i,\, \forall i\in \mathcal{I}_\tau$. Therefore, $$\lambda^\top(b-Ax) = \sum_{i\in \mathcal{I}_\tau\cup \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i (b_i - a_i^\top x) \geq \sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i (b_i - a_i^\top x).$$ On the other hand, $\lambda^\top(b-Ax) = \lambda^\top b \leq 2$ since $\lambda$ is a feasible point for the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"}. Thus, $$\begin{aligned} \sum_{t =1}^\tau \xi_{t} \langle e_t, x_t-x\rangle&= d_\tau^{-1} \sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i a_i^\top (x_{t(i)}-x) \nonumber \\ &\leq d_\tau^{-1} \sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i (b_i-a_i^\top x) \leq \frac{2}{d_\tau}, \label{eq:almost_resid}\end{aligned}$$ where we used $a_i^\top x_{t(i)} \leq b_i, i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau$, see the end of subsection [4.2](#subsec:polytope_algo){reference-type="ref" reference="subsec:polytope_algo"}. We maximize the left-hand side of the last inequality with respect to $x\in Q_1$ to obtain $\epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)\leq \frac{2}{d_\tau}$. To prove [\[eq:resid2\]](#eq:resid2){reference-type="eqref" reference="eq:resid2"}, let $\tau$ be such that $\epsilon := \epsilon_\tau<r$, and let $\bar{x}$ be the center of a Euclidean ball $B$ of radius $r$ which is contained in $X$. Observe that the definition [\[eq:variation\]](#eq:variation){reference-type="eqref" reference="eq:variation"} of $W_\tau$ implies $$t \in I_\tau \Rightarrow\left\langle e_t, x-x_t\right\rangle \leq W_\tau\; \forall x \in B,$$ hence $\left\langle e_t, \bar{x}-x_t\right\rangle \leq W_\tau-r\left\|e_t\right\|_2$. Recalling what $e_t$ is for $t \in J_\tau \equiv\{1, \ldots, \tau\} \backslash I_\tau$, we get the relations $$\left\langle e_t, \bar{x}-x_t\right\rangle \leq \begin{cases}W_\tau-r\left\|e_t\right\|_2, & t \in I_\tau \\ 0, & t \in J_\tau\end{cases}.$$ In particular, $$\label{eq:ball_center_ineq} \langle a_i, \bar{x}-x_{t(i)}\rangle\leq \begin{cases}W_\tau-r\left\|a_i\right\|_2, & i \in \mathcal{P}_\tau\\ 0, & i \in \mathcal{N}_\tau\end{cases}.$$ Now let $x \in Q_1$, and let $y=\frac{(r-\epsilon) x+\epsilon \bar{x}}{r}$, so that $y \in Q_1$.
By [\[eq:almost_resid\]](#eq:almost_resid){reference-type="eqref" reference="eq:almost_resid"} we have $$\sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i \langle a_i, x_{t(i)}-y\rangle\leq 2.$$ Multiplying this inequality by $r$ and adding weighted sum of inequalities [\[eq:ball_center_ineq\]](#eq:ball_center_ineq){reference-type="eqref" reference="eq:ball_center_ineq"}, the weights being $\lambda_{i} \epsilon$, we get $$\sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau}\lambda_i \langle a_i, \underbrace{r x_{t(i)}-r y+\epsilon \bar{x}-\epsilon x_{t(i)}}_{(r-\epsilon)\left(x_{t(i)}-x\right)}\rangle \leq 2 r+\epsilon W_\tau d_\tau-r \epsilon D_\tau .$$ The right hand side in this inequality, by the definition of $\epsilon$, is $\epsilon W_\tau d_\tau$, and we arrive at the relation $$(r-\epsilon)\cdot \sum_{i\in \mathcal{P}_\tau\cup \mathcal{N}_\tau} \lambda_i \left\langle a_i, x_{t(i)}-x\right\rangle \leq \epsilon W_\tau d_\tau \iff \sum_{t=1}^\tau \xi_t\left\langle e_t, x_t-x\right\rangle \leq \frac{\epsilon W_\tau}{r-\epsilon} .$$ This relation holds true for every $x \in Q_1$, and [\[eq:resid2\]](#eq:resid2){reference-type="eqref" reference="eq:resid2"} follows. # Proof of Theorem [\[thm:D_lower\]](#thm:D_lower){reference-type="ref" reference="thm:D_lower"} and Corollary [\[cor:convergence\]](#cor:convergence){reference-type="ref" reference="cor:convergence"} {#proof:D_lower} The proof of Theorem [\[thm:D_lower\]](#thm:D_lower){reference-type="ref" reference="thm:D_lower"} is divided into three parts. First, we "lift" the original space $\mathbb{R}^n$, treating it as a hyperplane $E=\{(x,s)\in\mathbb{R}^{n+1} \mid s=1\}$, and introduce a set $Q_{\tau+1}^{+}$. In the second part, we describe the polar of this set. Both $Q_{\tau+1}^{+}$ and its polar play an important role in the third part of the proof, where we provide a lower bound on the optimal value. ## "Lifting" the Original Space Let us treat the original space $\mathbb{R}^n$ as a hyperplane in $\mathbb{R}^{n+1}$, that is, $E=\{(x,s)\in\mathbb{R}^{n+1} \mid s=1\}$. Define the set $Q_{\tau+1}^+\subset \mathbb{R}^{n+1}$ as a convex hull of the origin $0\in\mathbb{R}^{n+1}$ and $Q_{\tau+1}$ (treated as a subset of a hyperplane $E\subset \mathbb{R}^{n+1}$). Let $\bar{A}:=\left[\begin{array}{cc}A & -b \end{array}\right]$ represent the constraints that form $Q_{\tau+1}$. We will now show that $$\label{eq:conv_hull} Q_{\tau+1}^+ = \left\{z \in \mathbb{R}^{n+1}: \bar{A}z \leq {\mathbf 0}\right\} \cap \left\{z \in \mathbb{R}^{n+1}: \bigl\langle\left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right], z \bigr\rangle\leq 1 \right\}.$$ First note that since $Q_{\tau+1}$ is a bounded polytope, the system of inequalities $Ay \leq 0$ only has a trivial solution. Indeed, if we had a nonzero solution $y$, then for any $x\in Q_{\tau+1}$, the ray $x+\alpha y, \alpha\geq 0$ would belong to $Q_{\tau+1}$: $A(x+\alpha y) = Ax + \alpha Ay \leq b$, which contradicts the boundedness of $Q_{\tau+1}$. Let $M$ be the right-hand side of [\[eq:conv_hull\]](#eq:conv_hull){reference-type="eqref" reference="eq:conv_hull"}. Let us show that if $\left[\begin{smallmatrix} x \\ s \end{smallmatrix}\right] \in M$, then $s\geq 0$. Indeed, if $s<0$, then $$Ax \leq sb \Rightarrow A\frac{x}{|s|} \leq -b.$$ Since $Q_{\tau+1}$ has a nonempty interior, there exists a point $y\in \mathbb{R}^n$ with $Ay < b$, therefore, $A(y+\frac{x}{|s|}) < b-b = 0$, which is impossible. 
It is evident that $M$ is convex and contains both $0$ and $Q_{\tau+1}$ (treated as a subset of a hyperplane $E\subset \mathbb{R}^{n+1}$). What is left to prove is that any point $\left[\begin{smallmatrix} x \\ s \end{smallmatrix}\right] \in M$ is a convex combination of a point in $Q_{\tau+1}$ and $0$. If $s=0$, then $Ax\leq 0$ and we conclude that $x=0$. In the opposite case, we have $s\in (0,1]$. The vector $y:=s^{-1}x$ satisfies $Ay =s^{-1}Ax \leq b$ since $Ax \leq sb$, i.e., $\left[\begin{smallmatrix} y \\ 1 \end{smallmatrix}\right] \in Q_{\tau+1}$. Thus, $$\left[\begin{smallmatrix} x \\ s \end{smallmatrix}\right] = s \left[\begin{smallmatrix} y \\ 1 \end{smallmatrix}\right] + (1-s) 0,$$ which concludes the proof of [\[eq:conv_hull\]](#eq:conv_hull){reference-type="eqref" reference="eq:conv_hull"}. ## Polar of a Set The polar of a set $P\subseteq \mathbb{R}^{n+1}$ is defined as $$\operatorname{Polar}P:=\left\{z \in \mathbb{R}^{n+1} \mid \langle z, p\rangle\leq 1\; \forall p \in P\right\}.$$ We will now show that $\operatorname{Polar}Q_{\tau+1}^+$ has the form $$\label{eq:polar} % \pol Q_t^+ = \left\{ z = (A_t^+)^\top \alpha + s\last: \alpha \in \R_+^m,\, s \in [0,1] \right\}. \operatorname{Polar}Q_{\tau+1}^+ = \left\{ \left[\begin{smallmatrix} x \\ g \end{smallmatrix}\right] \in \mathbb{R}^{n+1} \mid x = A^\top \lambda,\, g=-b^\top \lambda + s,\, \lambda \in \mathbb{R}_+^{m},\, s \in [0,1] \right\}.$$ To do so, we will use the following [\[Lm:nemirovski2010accuracy\]]{#Lm:nemirovski2010accuracy label="Lm:nemirovski2010accuracy"} Let $P, Q$ be two closed convex sets in $\mathbb{R}^{n+1}$ containing the origin and such that $P$ is a cone, and let $\operatorname{int}P \cap \operatorname{int}Q \neq \emptyset$. Then $\operatorname{Polar}(P \cap Q)=\operatorname{Polar}Q+P_*$, where $$P_*=\left\{z \in \mathbb{R}^{n+1}: \langle z, u\rangle\leq 0\; \forall u \in P\right\}.$$ Observe that $P:=\left\{z \in \mathbb{R}^{n+1}: \bar{A} z \leq {\mathbf 0}\right\}$ and $Q:=\left\{z \in \mathbb{R}^{n+1}: \bigl\langle\left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right], z \bigr\rangle\leq 1 \right\}$ are closed convex sets containing the origin, $P$ is a cone, and $\operatorname{int}P \cap \operatorname{int}Q \neq \emptyset$. Thus, the lemma applies. Polar of a polyhedral cone $P$ is a finitely generated cone (see, for example, Lemma 1.12 (4) in [@paffenholz2010polyhedral]), i.e., $$\label{eq:fin_gen_con} P_*=\left\{ z = \bar{A}^\top \alpha: \alpha \in \mathbb{R}_+^m \right\}.$$ Let us show that $$\label{eq:pol_halfspace} \operatorname{Polar}Q = \left\{ s \left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right]: s \in [0,1] \right\}.$$ Denote the right hand side of [\[eq:pol_halfspace\]](#eq:pol_halfspace){reference-type="eqref" reference="eq:pol_halfspace"} by $\tilde{Q}$. Let $y \in \tilde{Q}$, i.e., $y = s \left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right],\, s \in [0,1]$, then for any $z\in Q$, we have $\langle y, z\rangle= s \langle\left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right], z \rangle\leq s \leq 1 \Rightarrow y \in \operatorname{Polar}Q$. Now let $y = \left[\begin{smallmatrix} x \\ s \end{smallmatrix}\right] \notin \tilde{Q}$, i.e., $s \notin [0,1]$ or $x \neq {\mathbf 0}$. - If $s<0$, then for $z = \left[\begin{smallmatrix} {\mathbf 0}\\ s^{-1}-1 \end{smallmatrix}\right]$ it holds $z \in Q$ and $\langle y, z\rangle= 1-s>1 \Rightarrow y \notin \operatorname{Polar}Q$. 
- If $s>1$, then for $z = \left[\begin{smallmatrix} {\mathbf 0}\\ 1 \end{smallmatrix}\right]$ it holds $z \in Q$ and $\langle y, z\rangle= s>1 \Rightarrow y \notin \operatorname{Polar}Q$.
- If $x \neq {\mathbf 0}$, then for $z = \left[\begin{smallmatrix} 2x/\|x\|_2^2\\ 0 \end{smallmatrix}\right]$ it holds $z \in Q$ and $\langle y, z\rangle= 2 > 1 \Rightarrow y \notin \operatorname{Polar}Q$.

Thus, $\operatorname{Polar}Q = \tilde{Q}$ and the formula [\[eq:polar\]](#eq:polar){reference-type="eqref" reference="eq:polar"} follows from [\[eq:fin_gen_con\]](#eq:fin_gen_con){reference-type="eqref" reference="eq:fin_gen_con"}, [\[eq:pol_halfspace\]](#eq:pol_halfspace){reference-type="eqref" reference="eq:pol_halfspace"} and Lemma [\[Lm:nemirovski2010accuracy\]](#Lm:nemirovski2010accuracy){reference-type="ref" reference="Lm:nemirovski2010accuracy"}.

## Lower Bound on an Optimal Value

Consider the ellipsoid $\mathcal{E}$ of maximal volume contained in $Q_{\tau+1}$. It is called the John ellipsoid, and it has the property that $$\label{eq:john} \mathcal{E} \subset Q_{\tau+1} \subset \hat{\mathcal{E}} := \{nx\mid x \in \mathcal{E}\},$$ see, e.g., [@boyd2004convex], Chapter 8.4. As was shown in [@nemirovski2010accuracy] (subsection 4.3), for an ellipsoid $\hat{\mathcal{E}}$ there exists a vector $h'\in \mathbb{R}^n$ with $\|h'\|_2 \geq \frac{1}{2 \rho(\hat{\mathcal{E}})}$ such that $$\max_{x \in \hat{\mathcal{E}}}\langle h', x\rangle-\min_{x \in \hat{\mathcal{E}}}\langle h', x\rangle \leq 1.$$ As follows from [\[eq:john\]](#eq:john){reference-type="eqref" reference="eq:john"}, $\|h'\|_2 \geq \frac{1}{2n \rho(\mathcal{E})} \geq \frac{1}{2n \rho(Q_{\tau+1})}$ and $$\label{eq:narrowest_stripe} \max_{x \in Q_{\tau+1}}\langle h', x\rangle-\min_{x \in Q_{\tau+1}}\langle h', x\rangle \leq 1.$$ The last formula implies that both vectors $$h^{+}=\left[\begin{smallmatrix} h' \\ -\langle h', x_{\tau+1} \rangle\end{smallmatrix}\right],\; h^{-}=-h^{+}$$ belong to $\operatorname{Polar}\left(Q_{\tau+1}^{+}\right)$ since for any $z\in Q_{\tau+1}^{+}$ it holds $$z=\left[\begin{smallmatrix} sx \\ s \end{smallmatrix}\right]\quad \text{for some } x\in Q_{\tau+1},\, s\in[0,1],$$ therefore, $$\begin{aligned} \langle h^+, z\rangle= \langle h', sx\rangle- \langle h', x_{\tau+1}\rangle\cdot s \leq \max_{x \in Q_{\tau+1}}\langle h', x\rangle-\min_{x \in Q_{\tau+1}}\langle h', x\rangle \stackrel{\eqref{eq:narrowest_stripe}}{\leq} 1,\end{aligned}$$ and similarly for $h^-$. According to [\[eq:polar\]](#eq:polar){reference-type="eqref" reference="eq:polar"}, there exist $\mu,\eta \in \mathbb{R}^m_+$ and $u,v \in [0,1]$ such that $$\label{eq:h_decomp} h^{+} = \left[\begin{smallmatrix} A^\top \mu \\ -b^\top \mu+u \end{smallmatrix}\right],\; h^{-} = \left[\begin{smallmatrix} A^\top \eta \\ -b^\top \eta+v \end{smallmatrix}\right].$$ Observe that $$0 = h^{+} + h^{-} = \left[\begin{smallmatrix} A^\top(\mu+\eta) \\ -b^\top(\mu+\eta)+u+v \end{smallmatrix}\right],\; u+v\in [0,2],$$ i.e., $\lambda := \mu+\eta$ is a feasible point for the LP problem [\[eq:lp\]](#eq:lp){reference-type="eqref" reference="eq:lp"}. Let $\bar{x}$ be the center of a Euclidean ball $B$ of radius $r$ which is contained in $X$. Consider first the case when $\left\langle h', \bar{x}-x_{\tau+1}\right\rangle \geq 0$.
Multiplying $h^+$ by $x^{+}=[\bar{x}+r e ; 1]$ with $e \in \mathbb{R}^n,\|e\|_2 \leq 1$, we get $$\label{eq:h+x+0} \langle h^{+}, x^{+}\rangle= \langle h', r e\rangle+ \langle h', \bar{x}-x_{\tau+1}\rangle\geq \langle h', r e\rangle.$$ At the same time, $$\label{eq:h+x+} \begin{aligned} \langle h^{+}, x^{+}\rangle& \stackrel{\eqref{eq:h_decomp}}{=} \Bigl\langle\sum_{i=1}^m \mu_i a_i, \bar{x}+r e \Bigr\rangle- \sum_{i=1}^m \mu_i b_i + u \\ &\leq \sum_{i \in \mathcal{P}_\tau\cup \mathcal{N}_\tau} \mu_i \langle a_i, \bar{x}+r e-x_{t(i)}\rangle+ \sum_{i \in \mathcal{I}_\tau} \mu_i \left(a_i^\top(\bar{x}+re)-b_i\right) + u, \end{aligned}$$ where we used $a_i^\top x_{t(i)}\leq b_i,\, i \in \mathcal{P}_\tau\cup \mathcal{N}_\tau$ (see the end of subsection [4.2](#subsec:polytope_algo){reference-type="ref" reference="subsec:polytope_algo"}). Further, since $u\leq 1$ and $\bar{x}+re \in Q_1 \Rightarrow a_i^\top(\bar{x}+re)\leq b_i, i\in \mathcal{I}_\tau$, it holds $$\begin{aligned}\label{eq:h+x+2} \langle h^{+}, x^{+}\rangle&\stackrel{\eqref{eq:h+x+}}{\leq} \sum_{i \in \mathcal{P}_\tau\cup \mathcal{N}_\tau} \mu_i\langle a_i, \bar{x}+r e-x_{t(i)}\rangle+ 1, \\ &\leq \sum_{i \in \mathcal{P}_\tau} \mu_i \langle a_i, \bar{x}+r e-x_{t(i)} \rangle+ 1, \end{aligned}$$ where the last inequality is due to the fact that $a_i$ separates $x_{t(i)}$ and $X$ for $i \in \mathcal{N}_\tau$ which means $a_i^\top x \leq a_i^\top x_{t(i)}$ for all $x\in X$. Note that $x_{t(i)}\in \operatorname{int}X\subseteq Q_1$ for all $i\in \mathcal{P}_\tau$. Thus, combining [\[eq:h+x+0\]](#eq:h+x+0){reference-type="eqref" reference="eq:h+x+0"} and [\[eq:h+x+2\]](#eq:h+x+2){reference-type="eqref" reference="eq:h+x+2"}, we obtain $$\langle h', r e\rangle \leq 1+\sum_{i \in \mathcal{P}_\tau} \mu_i \left\|a_i\right\|_2 D\left(Q_1\right) \leq 1 + D_\tau D\left(Q_1\right).$$ The resulting inequality holds true for all unit vectors $e$; maximizing the left hand side over these $e$, we get $D_\tau \geq \frac{r\|h'\|_2-1}{D\left(Q_1\right)}$. Recalling that $\|h'\|_2 \geq \frac{1}{2 n \rho\left(Q_{\tau+1}\right)}$, we arrive at [\[eq:D_lower\]](#eq:D_lower){reference-type="eqref" reference="eq:D_lower"}. We have established it in the case of $\left\langle h', \bar{x}-x_{\tau+1}\right\rangle \geq 0$; in the opposite case we can use the same reasoning with $h^-$ in the role of $h^+$. ## Proof of Corollary [\[cor:convergence\]](#cor:convergence){reference-type="ref" reference="cor:convergence"} {#proof:convergence} The certificate is well-defined since $$\label{eq:small_rho} \rho\left(Q_{\tau+1}\right) \leq \frac{(1-\alpha) r^2}{16 n D\left(Q_1\right)} < \frac{r}{2n}$$ implies $$D_\tau(\lambda) \geq (1-\alpha) D_\tau(\lambda^*) \stackrel{\text{Thm} \ref{thm:D_lower}}{\geq} (1-\alpha) D^{-1}\left(Q_1\right)\left(\frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1\right) \stackrel{\eqref{eq:small_rho}}{>} 0,$$ therefore, $\lambda_i>0$ for some $i\in \mathcal{P}_\tau\Rightarrow d_\tau>0$. Let us now show that $\epsilon_\tau := \frac{2}{D_\tau}<r$. 
$$\begin{aligned} \rho\left(Q_{\tau+1}\right) &\leq \frac{(1-\alpha) r^2}{16 n D\left(Q_1\right)} = \frac{r}{2n}\left( \frac{8D\left(Q_1\right)}{(1-\alpha) r} \right)^{-1} < \frac{r}{2n}\left( \frac{2D\left(Q_1\right)}{(1-\alpha) r}+1 \right)^{-1} \\ & \Rightarrow (1-\alpha) D^{-1}\left(Q_1\right)\left(\frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1\right) > \frac{2}{r} \\ & \Rightarrow D_\tau(\lambda) \geq (1-\alpha) D_\tau(\lambda^*) \stackrel{\text{Thm} \ref{thm:D_lower}}{\geq} (1-\alpha) D^{-1}\left(Q_1\right)\left(\frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1\right) > \frac{2}{r}.\end{aligned}$$ Applying Lemma [\[lem:resid\]](#lem:resid){reference-type="ref" reference="lem:resid"}, we get $$\begin{aligned} \epsilon_{\mathrm{cert}}\left(\xi \mid P_\tau, Q_1\right)&\leq \frac{\epsilon_\tau}{r-\epsilon_\tau} W_\tau = \left( \frac{r D_\tau}{2}-1 \right)^{-1} W_\tau \\ &\stackrel{\text{Thm} \ref{thm:D_lower}}{\leq} \left( \frac{(1-\alpha) r}{2 D\left(Q_1\right)}\left(\frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1\right) - 1 \right)^{-1} W_\tau \\ &\leq \left( \frac{(1-\alpha) r^2}{8 n \rho\left(Q_{\tau+1}\right) D\left(Q_1\right)} - 1 \right)^{-1} W_\tau,\end{aligned}$$ where the last inequality follows from $$\rho\left(Q_{\tau+1}\right) \leq \frac{r}{4n} \Rightarrow \frac{r}{2 n \rho\left(Q_{\tau+1}\right)}-1 \geq \frac{r}{4 n \rho\left(Q_{\tau+1}\right)}.$$ Finally, using $$\rho\left(Q_{\tau+1}\right) \leq \frac{(1-\alpha) r^2}{16 n D\left(Q_1\right)} \Rightarrow \frac{(1-\alpha) r^2}{8 n \rho\left(Q_{\tau+1}\right) D\left(Q_1\right)} - 1 \geq \frac{(1-\alpha) r^2}{16 n \rho\left(Q_{\tau+1}\right) D\left(Q_1\right)},$$ we arrive at the first result. The second result now follows directly from the definition of $\delta$-subgradient [\[eq:delta_subgrad\]](#eq:delta_subgrad){reference-type="eqref" reference="eq:delta_subgrad"}.
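As a quick illustrative check (ours, not part of the source argument) of the John ellipsoid containment [\[eq:john\]](#eq:john){reference-type="eqref" reference="eq:john"} that drives the lower bound above, the following Python sketch verifies $\mathcal{E} \subset Q \subset n\mathcal{E}$ numerically for the simplest nontrivial body, an equilateral triangle in $\mathbb{R}^2$ (so $n = 2$), whose maximal-volume inscribed ellipsoid is its incircle.

```python
import numpy as np

# Illustrative sanity check (not from the source) of E ⊂ Q ⊂ n·E in eq. (eq:john)
# for Q = equilateral triangle in R^2 (n = 2). Here the John ellipsoid is the
# incircle of radius r = R/2, where R is the circumradius.

rng = np.random.default_rng(0)
R = 1.0                                                       # circumradius
angles = np.deg2rad([90.0, 210.0, 330.0])
V = R * np.column_stack([np.cos(angles), np.sin(angles)])     # CCW vertices, centroid at 0
r = R / 2.0                                                   # inradius = John-ellipsoid radius

def in_triangle(p, tol=1e-12):
    # p is inside iff it lies to the left of every (counterclockwise-oriented) edge
    for i in range(3):
        a, b = V[i], V[(i + 1) % 3]
        ex, ey = b - a
        dx, dy = p - a
        if ex * dy - ey * dx < -tol:                          # 2-D cross-product sign test
            return False
    return True

# (1) E ⊂ Q: uniform samples from the incircle all lie in the triangle.
u = rng.normal(size=(5000, 2))
disc = r * u / np.linalg.norm(u, axis=1, keepdims=True) * np.sqrt(rng.uniform(size=(5000, 1)))
assert all(in_triangle(p) for p in disc)

# (2) Q ⊂ 2E: convex combinations of the vertices stay in the disc of radius 2r = R.
w = rng.dirichlet(np.ones(3), size=5000)                      # barycentric coordinates
tri = w @ V
assert np.all(np.linalg.norm(tri, axis=1) <= 2.0 * r + 1e-12)

print("Containment E in Q in 2E verified numerically for the equilateral triangle.")
```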
---
abstract: |
  In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the objective functions are smooth but possibly nonconvex in both levels, and the variables are restricted to closed convex sets. As a first step, we study the landscape of BO through the lens of penalty methods, in which the upper- and lower-level objectives are combined in a weighted sum with penalty parameter $\sigma > 0$. In particular, we establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be $O(\sigma)$-close. A by-product of our analysis is the explicit formula for the gradient of the hyper-objective when the lower-level problem has multiple solutions under minimal conditions, which could be of independent interest. Next, viewing the penalty formulation as an $O(\sigma)$-approximation of the original BO, we propose first-order algorithms that find an $\epsilon$-stationary solution by optimizing the penalty formulation with $\sigma = O(\epsilon)$. When the perturbed lower-level problem uniformly satisfies the *small-error* proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\epsilon$-stationary point of the penalty function, using in total $O(\epsilon^{-3})$ and $O(\epsilon^{-7})$ accesses to first-order (stochastic) gradient oracles when the oracles are deterministic and noisy, respectively. Under an additional assumption on the stochastic oracles, we show that the algorithm can be implemented in a fully *single-loop* manner, *i.e.,* with $O(1)$ samples per iteration, and achieves the improved oracle-complexity of $O(\epsilon^{-3})$ and $O(\epsilon^{-5})$, respectively.
author:
- Jeongyeol Kwon
- Dohyun Kwon
- Stephen Wright
- Robert Nowak
bibliography:
- main.bib
title: "**On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation**"
---

# Introduction

Bilevel Optimization (BO) [@dempe2003annotated; @colson2007overview; @dempe2015bilevel] is a versatile framework for optimization problems in many applications arising in economics, transportation, operations research, and machine learning, among others [@sinha2017review]. In this work, we consider the following formulation of BO: $$\label{problem:bilevel} \tag{\textbf{P}} \begin{aligned} \min_{x \in \mathcal{X}, y^*\in\mathcal{Y}} \quad f(x,y^*) & := \mathbb{E}_{\zeta} [f(x,y^*;\zeta)] \nonumber \\ \text{s.t.} \quad y^* \in \arg \min_{y \in \mathcal{Y}} \ g(x,y) & := \mathbb{E}_{\xi} [g(x,y; \xi)], \end{aligned}$$ where $f$ and $g$ are continuously differentiable and smooth functions, $\mathcal{X}\subseteq \mathbb{R}^{d_x}$ and $\mathcal{Y}\subseteq \mathbb{R}^{d_y}$ are closed convex sets, and $\zeta$ and $\xi$ are random variables (*e.g.,* indexes of batched samples in empirical risk minimization). That is, [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} minimizes $f$ over $x\in\mathcal{X}$ and $y\in\mathcal{Y}$ (the upper-level problem) when $y$ must be one of the minimizers of $g(x,\cdot)$ over $y\in\mathcal{Y}$ (the lower-level problem).
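To make the nesting in [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} concrete, here is a minimal toy instance (our illustration, not an example from the paper) solved by brute-force nested grid search; the particular $f$, $g$, and grid are arbitrary placeholder choices, and this is of course not a scalable method.

```python
import numpy as np

# A minimal toy instance of (P), purely for illustration (not from the paper):
#   upper level: f(x, y) = (x - 1)^2 + (y - 1)^2
#   lower level: g(x, y) = (y - x^2)^2, so S(x) = argmin_y g(x, y) = {x^2}
#   X = Y = [-2, 2].
# Brute-force nested grid search, just to make the nested structure explicit.

f = lambda x, y: (x - 1.0) ** 2 + (y - 1.0) ** 2
g = lambda x, y: (y - x ** 2) ** 2

xs = np.linspace(-2.0, 2.0, 801)
ys = np.linspace(-2.0, 2.0, 801)

best_val, best_x, best_y = np.inf, None, None
for x in xs:
    gy = g(x, ys)
    S_x = ys[np.isclose(gy, gy.min())]          # lower-level solution set on the grid
    fy = f(x, S_x)
    if fy.min() < best_val:                     # upper-level objective over x and y* in S(x)
        best_val, best_x, best_y = fy.min(), x, S_x[np.argmin(fy)]

print(f"approximate bilevel optimum: x = {best_x:.3f}, y = {best_y:.3f}, f = {best_val:.4f}")
# Expected: x ~ 1, y ~ 1, f ~ 0, since y*(x) = x^2 and f(x, x^2) is minimized at x = 1.
```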
Scalable optimization methods for solving [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} are in high demand to handle increasingly large-scale applications in machine-learning, including meta-learning [@rajeswaran2019meta], hyper-parameter optimization [@franceschi2018bilevel; @bao2021stability], model selection [@kunapuli2008bilevel; @giovannelli2021bilevel], adversarial networks [@goodfellow2020generative; @gidel2018variational], game theory [@stackelberg1952theory], and reinforcement learning [@konda1999actor; @sutton2018reinforcement]. There is particular interest in developing (stochastic) gradient-descent-based methods due to their simplicity and scalability to large-scale problems [@ghadimi2018approximation; @chen2021closing; @hong2023two; @khanduri2021near; @chen2022single; @dagreou2022framework; @guo2021randomized; @sow2022constrained; @ji2021bilevel; @yang2021provably]. One popular approach is to perform a direct gradient-descent on the hyper-objective $\psi(x)$ defined as follows: $$\begin{aligned} \label{eq:psi} \psi (x) := \min_{y\in \mathcal{S}(x)} \, f(x,y), \quad \text{ where } \mathcal{S}(x) = \arg\min_{y \in \mathcal{Y}} g(x,y),\end{aligned}$$ but this requires the estimation of $\nabla\psi(x)$, which we refer to as an *implicit* gradient. Some existing works obtain this gradient under the assumptions that $g(x,y)$ is strongly convex in $y$ and the lower-level problem is unconstrained, *i.e.,* $\mathcal{Y}= \mathbb{R}^{d_y}$. There is no straightforward extension of these approaches to nonconvex and/or constrained lower-level problems (*i.e.,* $\mathcal{Y}\subset \mathbb{R}^{d_y}$ is defined by some constraints) due to the difficulty in estimating implicit gradients; see Section [1.2](#subsec:prior_art){reference-type="ref" reference="subsec:prior_art"} for more discussion on the technical challenges. The goal of this paper is to extend our knowledge of solving BO with unconstrained strongly-convex lower-level problems to a broader class of BO with possibly constrained and nonconvex lower-level problems. In general, however, when there is not enough curvature around the lower-level solution, the problem can be highly ill-conditioned and no known algorithms can handle it even for the simpler case of min-max optimization [@chen2023bilevel; @jin2020local] (see also Example [Example 1](#example:bilinear_LL){reference-type="ref" reference="example:bilinear_LL"}). In such cases, it is hard even to identify *tractable* algorithms for solving [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"}, since the issue is fundamental [@daskalakis2021complexity]. To circumvent the fundamental hardness, there have been many recent works on BO with nonconvex lower-level problems (nonconvex inner-level maximization in min-max optimization literature) under the uniform Polyak-Łojasiewicz (PL)-condition [@karimi2016linear; @yang2020global; @yang2022faster; @xiao2023generalized; @li2022nonsmooth]. While this assumption does not cover all interesting cases of BO, it can cover situations in which the lower-level problem is nonconvex and where it does not have a unique solution. It can thus be viewed as a significant generalization of the strong convexity condition. Furthermore, several recent results show that the uniform PL condition can be satisfied by practical and complicated functions such as over-parameterized neural networks [@frei2021proxy; @song2021subquadratic; @liu2022loss]. 
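As a toy illustration of this regime (our example, not one from the paper), the hyper-objective in [\[eq:psi\]](#eq:psi){reference-type="eqref" reference="eq:psi"} can remain perfectly well-behaved even when the lower-level solution is not unique: take $\mathcal{Y}= \mathbb{R}^2$, $g(x,y) = \tfrac{1}{2}(y_1+y_2-x)^2$, and $f(x,y) = \|y\|^2$. The lower-level objective is a least-squares function, so it satisfies the PL condition without being strongly convex, and its solution set $\mathcal{S}(x) = \{y \in \mathbb{R}^2 : y_1 + y_2 = x\}$ is an entire line; nevertheless, $$\psi(x) = \min_{y_1+y_2=x} \|y\|^2 = \frac{x^2}{2},$$ attained at $y_1 = y_2 = x/2$, so $\psi$ is smooth with $\nabla\psi(x) = x$.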
Nevertheless, to our best knowledge, no algorithm is known to reach the stationary point of $\psi(x)$ (*i.e.,* to find $x$ where $\nabla\psi(x) = 0$) under PL conditions alone. In fact, even the landscape of $\psi(x)$ has not been studied precisely when the lower-level problem can have multiple solutions and constraints. We take a step forward in this direction under the proximal error-bound (EB) condition that is analogous to PL but more suited for constrained problems[^1]. ## Overview of Main Results Since it is difficult to work directly with implicit gradients when the lower-level problem is nonconvex, we consider a common alternative that converts the BO [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} into an equivalent constrained single-level problem, namely, $$\begin{aligned} &\min_{x \in \mathcal{X}, y \in \mathcal{Y}} \, f(x,y) \qquad \ \text{s.t. } \ g(x,y) \le \min_{z \in \mathcal{Y}} g(x,z), \tag{$\textbf{P}_{\text{con}}$} \label{problem:bilevel_single_level}\end{aligned}$$ and finds a stationary solution of this formulation, also known as an (approximate)-KKT solution [@lu2023first; @liu2023averaged; @ye2022bome]. [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"} suggests the penalty formulation $$\begin{aligned} \min_{x \in \mathcal{X}, y \in \mathcal{Y}} \, \sigma f(x,y) + (g(x,y) - \min_{z \in \mathcal{Y}} g(x,z)), \tag{$\textbf{P}_{\text{pen}}$} \label{problem:penalty_formulation}\end{aligned}$$ with some sufficiently small $\sigma > 0$. Our fundamental goal in this paper is to describe algorithms for finding approximate stationary solutions of [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"}, as explored in several previous works [@white1993penalty; @ye1997exact; @shen2023penalty; @kwon2023fully]. Although finding the solution of [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"} is a more familiar task than the solution of [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"}, the former problem is merely an approximation of the latter, so it is important to understand the relationship between their respective landscapes, which have remained elusive in the literature [@chen2023bilevel]. Furthermore, it is still challenging to solve [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"}, since it still involves a nested optimization structure (solving a inner minimization problem over $z$), and typically involves a very small value of the penalty parameter $\sigma > 0$. #### Landscape Analysis. Our first goal is to bridge the gap between landscapes of the two problems [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} and [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"}. 
By scaling [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"}, we define the penalized hyper-objective $\psi_{\sigma}(x)$: $$\begin{aligned} \label{eq:psis} \psi_{\sigma}(x) := \frac{1}{\sigma} \left(\min_{y \in \mathcal{Y}} \left( \sigma f(x,y) + g(x,y) \right) - \min_{z \in \mathcal{Y}} g(x,z) \right).\end{aligned}$$ As mentioned earlier, without any assumptions on the lower-level problem, it is not possible to make any meaningful connections between $\psi_{\sigma}(x)$ and $\psi(x)$, since the original landscape $\psi(x)$ itself may not be well-defined. Proximal operators are key to defining assumptions that guarantee nice behavior of the lower-level problem. **Definition 1**. *The proximal operator with parameter $\rho$ and function $f(\theta)$ over a domain $\Theta$ is defined as $$\begin{aligned} \normalfont\textbf{prox}_{\rho f} (\theta) := \arg \min_{z \in \Theta} \, \big\{ \rho f(z) + \tfrac{1}{2} \| z - \theta\|^2 \big\}. \end{aligned}$$* We now state the proximal-EB assumption, which is crucial to our approach in this paper. **Assumption 1** (Proximal-EB). *Let $h_{\sigma}(x,\cdot) := \sigma f(x,\cdot) + g(x,\cdot)$ and $T(x,\sigma) := \arg\min_{y\in\mathcal{Y}} h_{\sigma}(x,y)$. We assume that for all $x \in \mathcal{X}$ and $\sigma \in [0,\sigma_0]$, $h_{\sigma}(x,\cdot)$ satisfies the $(\mu,\delta)$-proximal error bound: $$\begin{aligned} \rho^{-1} \cdot \left\|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)} (y) \right\| \ge \mu \cdot \normalfont\mathrm{\bf dist}(y, T(x,\sigma)), \end{aligned}$$ for all $y \in \mathcal{Y}$ that satisfies $\rho^{-1} \cdot \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)} (y) \| \le \delta$ with some positive parameters $\mu, \delta, \sigma_0 > 0$.* As we discuss in detail in Section [3](#section:functional_analysis){reference-type="ref" reference="section:functional_analysis"}, the crux of Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} is the guaranteed (Lipschitz) continuity of solution sets, under which we prove our key landscape analysis result: **Theorem 1** (Informal). *Under Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} (with additional smoothness assumptions), for all $x \in \mathcal{X}$ such that at least one sufficiently regular solution path $y^*(\sigma)$ exists for $\sigma \in [0,\sigma_0]$, we have $$\begin{aligned} |\psi_{\sigma}(x) - \psi(x)| & = O(\sigma/\mu), \\ \|\nabla\psi_{\sigma}(x) - \nabla\psi(x)\| &= O(\sigma/\mu^3). \end{aligned}$$* As a corollary, our result implies global $O(\sigma)$-approximability of $\psi_{\sigma}(x)$ for the special case studied in several previous works (*e.g.,* [@ghadimi2018approximation; @chen2021closing; @ye2022bome; @kwon2023fully]), where $g(x,\cdot)$ is (locally) strongly-convex and the lower-level problem is unconstrained. To our best knowledge, such a connection, and even the differentiability of $\psi(x)$, is not fully understood for BO with nonconvex lower-level problems with possibly multiple solutions and constraints. In particular, the case with possibly multiple solutions (Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"}) is discussed under nearly minimal assumptions. #### Algorithm. 
Once we show that $\nabla\psi_{\sigma}(x)$ is an $O(\sigma)$-approximation of $\nabla\psi(x)$ in most desirable circumstances, it suffices to find an $\epsilon$-stationary solution of $\psi_{\epsilon}(x)$. However, still directly optimizing $\psi_{\sigma}(x)$ is not possible since the exact minimizers (in $y$) of $h_{\sigma}(x,y):=\sigma f(x,y) + g(x,y)$ and $g(x,y)$ are unknown. Thus, we use the alternative min-max formulation: $$\begin{aligned} \min_{x \in \mathcal{X}, y \in \mathcal{Y}} \max_{z \in \mathcal{Y}} \, \psi_{\sigma} (x,y,z) := \frac{h_{\sigma}(x,y) - g(x,z)}{\sigma}. \tag{$\textbf{P}_{\text{saddle}}$} \label{problem:saddle_point_def}\end{aligned}$$ Once we reduce the problem to finding an $\epsilon$-stationary point of the saddle-point problem [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"}, we may invoke the rich literature on min-max optimization. However, even when we assume that $g(x,\cdot)$ satisfies the PL conditions *globally* for all $y \in \mathbb{R}^{d_y}$, a plug-in min-max optimization method (*e.g.,* [@yang2022faster]) yields an oracle-complexity that cannot be better than $O\left(\sigma^{-4}\epsilon^{-4}\right)$ with stochastic oracles [@li2021complexity], resulting in an overall $O(\epsilon^{-8})$ complexity bound when $\sigma = O(\epsilon)$. As we pursue an $\epsilon$-saddle point specifically in the form of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"}, we show that we can achieve a better complexity bound under Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}. We list below our algorithmic contributions. - In contrast to previous work on bilevel or min-max optimization, (*e.g.,* [@ghadimi2018approximation; @ye2022bome; @yang2020global]), Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} holds only in a neighborhood of the lower-level solution. In fact, we show that we only need a nice lower-level landscape within the neighborhood of solutions with $O(\delta)$-proximal error. - While we eventually set $\sigma = O(\epsilon)$, it can be overly conservative to choose such a small value of $\sigma$ from the first iteration, resulting in a slower convergence rate. By gradually decreasing penalty parameters $\{\sigma_k\}$ polynomially as $k^{-s}$ for some $s > 0$, we save an $O(\epsilon^{-1})$-order of oracle-complexity, improving the overall complexity to $O(\epsilon^{-7})$ with stochastic oracles. - If stochastic oracles satisfy a mild stochastic smoothness condition that admits the momentum-assistance technique, we show a version of our algorithm that uses this technique can be implemented in a fully single-loop manner (*i.e.,* only $O(1)$ calls to stochastic oracles before updating the outer variables $x$) with an improved oracle-complexity of $O(\epsilon^{-5})$ with stochastic oracles. Our results match the best-known complexity results for fully first-order methods in stochastic bilevel optimization [@kwon2023fully] with strongly convex and unconstrained lower-level problems, and can also be applied to BO where the lower-level problem satisfies the PL condition, as studied in [@ye2022bome]. 
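Purely as a reading aid (ours, and deliberately simplified --- it is *not* the algorithm proposed in this paper, and all objectives, step sizes, and the schedule exponent below are placeholder choices), the following sketch shows the kind of single-loop update that the saddle-point formulation [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"} suggests: alternating projected gradient steps on $\psi_{\sigma}(x,y,z)$ in $(x,y)$ and on $g(x,z)$ in $z$, with a polynomially decreasing penalty $\sigma_k \asymp k^{-s}$.

```python
import numpy as np

# Schematic sketch (ours, NOT the paper's proposed algorithm): projected gradient
# descent-ascent on the saddle formulation (P_saddle),
#   min_{x,y} max_{z}  psi_sigma(x,y,z) = (sigma*f(x,y) + g(x,y) - g(x,z)) / sigma,
# with penalty schedule sigma_k ~ k^{-s}.  The toy scalar objectives, the box
# X = Y = [-2, 2], the step sizes, and s are placeholders.

def proj(v, lo=-2.0, hi=2.0):                # projection onto the box constraint
    return np.clip(v, lo, hi)

# toy objectives f(x,y) = (x-1)^2 + (y-1)^2, g(x,y) = (y-x)^2, via their partials
f_x = lambda x, y: 2.0 * (x - 1.0)           # df/dx
f_y = lambda x, y: 2.0 * (y - 1.0)           # df/dy
g_x = lambda x, y: -2.0 * (y - x)            # dg/dx
g_y = lambda x, y: 2.0 * (y - x)             # dg/dy

x, y, z = -1.5, 0.0, 0.0
eta_x, eta_yz, s = 0.05, 0.2, 0.5            # placeholder step sizes and exponent

for k in range(1, 2001):
    sigma = 1.0 / k ** s                     # shrinking penalty sigma_k ~ k^{-s}
    # descent in (x, y); we multiply by sigma (i.e. step on sigma*f + g) so that
    # step lengths stay bounded as sigma_k -> 0
    grad_x = (sigma * f_x(x, y) + g_x(x, y) - g_x(x, z)) / sigma
    grad_y = (sigma * f_y(x, y) + g_y(x, y)) / sigma
    # ascent in z, i.e. descent on g(x, z) (the inner max over z)
    grad_z = g_y(x, z)                       # dg(x, z)/dz
    x = proj(x - eta_x * sigma * grad_x)
    y = proj(y - eta_yz * sigma * grad_y)
    z = proj(z - eta_yz * grad_z)

print(f"x = {x:.3f}, y = {y:.3f}, z = {z:.3f}")   # for this toy, x, y, z all approach 1
```

Here the toy lower level is strongly convex only to keep the sketch well-posed; the methods and guarantees developed in this paper instead cover the much weaker proximal-EB setting of Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}.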
See Section [4](#section:algorithm){reference-type="ref" reference="section:algorithm"} for more details on the algorithm. ## Related Work {#subsec:prior_art} Since its introduction in [@bracken1973mathematical], Bilevel optimization has been an important research topic in many scientific disciplines. Classical results tend to focus on the asymptotic properties of algorithms once in neighborhoods of global/local minimizers (see *e.g.,* [@white1993penalty; @vicente1994descent; @colson2007overview]). In contrast, recent results are more focused on studying numerical optimization methods and non-asymptotic analysis to obtain an approximate stationary solution of Bilevel problems (see *e.g.,* [@ghadimi2018approximation; @chen2021closing]). Our work falls into this category. Due to the vast volume of literature on Bilevel optimization, we only discuss some relevant lines of work. #### Implicit-Gradient Descent As mentioned earlier, initiated by [@ghadimi2018approximation], a flurry of recent works (see *e.g.,* [@chen2021closing; @hong2023two; @khanduri2021near; @chen2022single; @dagreou2022framework; @guo2021randomized]) study stochastic-gradient-descent (SGD)-based iterative procedures and their finite-time performance for solving [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} when the lower-level problem is strongly-convex and unconstrained. In such cases, the implicit gradient of hyper-objective $\psi(x)$ is given by $$\begin{aligned} \nabla_x f(x,y^*(x)) - \nabla^2_{xy} g(x,y^*(x)) \left(\nabla^2_{yy} g(x,y^*(x)) \right)^{-1} \nabla_y f(x,y^*(x)),\end{aligned}$$ where $y^*(x) = \arg \min_{w \in \mathbb{R}^{d_y}} g(x,w)$. Two main challenges in performing (implicit) gradient descent are (a) to evaluate lower-level solution $y^*(x)$, and (b) to estimate Hessian inverse $(\nabla^2_{yy} g(x,y^*(x)))^{-1}$. For (a), it is now well-understood that instead of exactly solving for $y^*(x_k)$ for every $k^{th}$ iteration, we can incrementally solve for $y^*(x_k)$, *e.g.,* run a few more gradient steps on the current estimate $y_k$, and use it as a proxy for the lower-level solution [@chen2021closing]. As long as the contraction toward the true solution (with the strong convexity of $g(x_k,\cdot)$) is large enough to compensate for the change of lower-level solution (due to the movement in $x_k$), $y_k$ will eventually stably stay around $y^*(x_k)$, and can be used as a proxy for the lower-level solution to compute the implicit gradient. Then for (b), with the Hessian of $g$ being invertible for all given $y$, we can exploit the Neumann series approximation [@ghadimi2018approximation] to estimate the true Hessian inverse using $y_k$ as a proxy for $y^*(x_k)$. Unfortunately, the above results are not easily extendable to nonconvex lower-level objectives with potential constraints $\mathcal{Y}$. One obstacle is, again, to estimate the Hessian-inverse: now that for some $y \in \mathcal{Y}$, the Hessian of $g$ may not be invertible even if $\nabla^2_{yy} g(x,y^*(x))$ is invertible at the exact solution. Therefore, in order to use an approximate $y_k$ as a proxy to $y^*(x_k)$, we need a certain high-probability guarantee (or some other complicated arguments) to ensure that the algorithm remains stable with the inversion operation. The other obstacle, which is more complicated to resolve, is that the implicit gradient formula may no longer be the same if the solution is found at the boundary of $\mathcal{Y}$. 
In such a case, explicitly estimating $\nabla\psi(x)$ would not only require an approximate solution but also require the optimal dual variables for the lower-level solutions which are unknown (see Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"} for the exact formula). However, computing the optimal solution as well as the optimal dual variables is even more challenging when we only access objective functions through stochastic oracles. Therefore, it is essential to develop a first-order method that does not rely on the explicit estimation of the implicit gradient to solve a broader class of BO, aside from the cost of using second-order derivatives in large-scale applications. #### Nonconvex Lower-Level Objectives In general, BO with nonconvex lower-level objectives is not computationally tractable without further assumptions, even for the special case of min-max optimization [@daskalakis2021complexity]. Therefore, additional assumptions on the lower-level problem are necessary. Arguably, the minimal assumption would be the continuity of lower-level solution sets, otherwise, any local-search algorithms are likely to fail due to the hardness of (approximately) tracking the lower-level problem. The work in [@chen2023bilevel] considers several growth conditions for the lower-level objectives (including PL), which guarantee Lipschitz continuity of lower-level solution sets, and proposes a zeroth-order method for solving [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"}. The work in [@shen2023penalty] assumes the PL condition, and studies the complexity of the penalty method in deterministic settings. The work in [@arbel2022non] introduces a notion of parameteric Morse-Bott functions, and studies some asymptotic properties of their proposed gradient flow under the proposed condition. In all these works as well as in our work, underlying assumptions involve some growth conditions of the lower-level problem, which is essential for the continuity of lower-level solution sets. #### Penalty Methods Studies on penalty methods date back to 90s [@marcotte1996exact; @white1993penalty; @anandalingam1990solution; @ye1997exact; @ishizuka1992double], when the equivalence is established between two formulations [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} and [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"} for sufficiently small $\sigma > 0$. However, these results are often limited to relations within *infinitesimally* small neighborhoods of global/local minimizers. As we aim to obtain a stationary solution from arbitrary initial points, we need a comprehensive understanding of approximating the *global* landscape, rather than only around infinitesimally small neighborhoods of global/local minimizers. The most closely related work to ours is a recent work in [@shen2023penalty], where the authors study the penalty method under lower-level PL-like conditions with constraints $\mathcal{Y}$ as in ours. 
However, the connection established in [@shen2023penalty] only relates [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"} and $\epsilon$-relaxed version of [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"}, and only concerns infinitesimally small neighborhoods of global/local minimizers as in older works. Furthermore, their analysis is restricted to the double-loop implementation of algorithms in deterministic settings. In contrast, we establish a direct connection between $\nabla\psi(x)$ and $\nabla\psi_{\sigma}(x)$ when they are well-defined, and our analysis can be applied to both double-loop and single-loop algorithms with explicit oracle-complexity bounds in stochastic settings. #### Implicit Differentiation Methods Another popular approach to side-step the computation of implicit gradients is to construct a chain of lower-level variables via gradient descents, a technique often called automatic implicit differentiation (AID), or iterative differentiation (ITD) [@pedregosa2016hyperparameter; @yang2021provably; @li2022fully; @ji2021bilevel; @grazzi2023bilevel]. The benefit of this technique is that now we do not require the estimation of Hessian-inverse. In fact, this construction can be seen as one constructive way of approximating the Hessian-inverse. However, when the lower-level problem is constrained by compact $\mathcal{Y}$, more complicated operations such as the projection may prevent the use of the implicit differentiation technique. # Preliminaries We state several assumptions on [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"} to specify the problem class of interest. Our focus is on smooth objectives whose values are bounded below: **Assumption 2**. *$f$ and $g$ are twice continuously-differentiable and $l_{f,1}$, $l_{g,1}$-smooth jointly in $(x,y)$ respectively, *i.e.,* $\|\nabla^2 f(x,y)\| \le l_{f,1}$ and $\|\nabla^2 g(x,y)\| \le l_{g,1}$ for all $x \in \mathcal{X}$, $y \in \mathcal{Y}$.* **Assumption 3**. *The following conditions hold for objective functions $f$ and $g$:* 1. *$f$ and $g$ are bounded below and coercive, *i.e.,* for all $x \in \mathcal{X}$, $f(x,y), g(x,y) > -\infty$ for all $y \in \mathcal{Y}$, and $f(x,y), g(x,y) \rightarrow +\infty$ as $\|y\| \rightarrow \infty$.* 2. *$\|\nabla_y f(x,y)\| \le l_{f,0}$ for all $x \in \mathcal{X}, y \in \mathcal{Y}$.* We also make technical assumptions on the domains: **Assumption 4**. *The following conditions hold for domains $\mathcal{X}$ and $\mathcal{Y}$:* 1. *$\mathcal{X}, \mathcal{Y}$ are convex and closed.* 2. *$\mathcal{Y}$ is bounded, *i.e.,* $\max_{y \in \mathcal{Y}} \|y\| \le D_{\mathcal{Y}}$ for some $D_\mathcal{Y}= O(1)$. Furthermore, we assume that $C_f := \max_{x \in \mathcal{X}, y \in \mathcal{Y}} |f(x,y)| = O(1)$ is bounded.* 3. *The domain $\mathcal{Y}$ can be compactly expressed with at most $m_1 \ge 0$ inequality constraints $\{g_i(y) \le 0\}_{i \in [m_1]}$ with convex and twice continuously-differentiable $g_i$, and at most $m_2 \ge 0$ equality constraints $\{h_i(y) = 0\}_{i \in [m_2]}$ with linear functions $h_i$.* We note here that the expressiveness of inner domain constraints $\mathcal{Y}$ is only required for the analysis, and not required in our algorithms as long as there exist efficient projection operators. 
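For instance (our illustration, not an example from the paper), a box $\mathcal{Y}= [-1,1]^{d_y}$ fits Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}(3) with $m_1 = 2d_y$ affine inequality constraints $g_{2i-1}(y) = y_i - 1 \le 0$ and $g_{2i}(y) = -y_i - 1 \le 0$ for $i \in [d_y]$ and $m_2 = 0$, while the projection $\Pi_{\mathcal{Y}}$ required by the algorithms is simply the coordinatewise clip $[\Pi_{\mathcal{Y}}(y)]_i = \min\{1, \max\{-1, y_i\}\}$.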
While there could be many possible representations of constraints, only the most compact representation would matter in our analysis. We denote by $\Pi_{\mathcal{X}}$ and $\Pi_{\mathcal{Y}}$ the projection operators onto sets $\mathcal{X}$ and $\mathcal{Y}$, respectively. $\mathcal{N}_{\mathcal{X}}(x)$ denotes the normal cone of $\mathcal{X}$ at a point $x \in \mathcal{X}$. Next, we define the distance measure between sets: **Definition 2** (Hausdorff Distance). *Let $S_1$ and $S_2$ be two sets in $\mathbb{R}^d$. The Hausdorff distance between $S_1$ and $S_2$ is given as $$\begin{aligned} \normalfont\mathrm{\bf dist}(S_1, S_2) = \max \left\{\sup_{\theta_1 \in S_1} \inf_{\theta_2 \in S_2} \|\theta_1 - \theta_2\|, \sup_{\theta_2 \in S_2} \inf_{\theta_1 \in S_1} \|\theta_1 - \theta_2\| \right\}. \end{aligned}$$ For distance between a point $\theta$ and a set $S$, we denote $\normalfont\mathrm{\bf dist}(\theta, S) := \normalfont\mathrm{\bf dist}(\{\theta\}, S)$.* Throughout the paper, we use Definition [Definition 2](#def:hausdorff){reference-type="ref" reference="def:hausdorff"} as a measure of distances between sets. We define the notion of (local) Lipschitz continuity of solution sets introduced in [@chen2023bilevel]. **Definition 3** (Lipschitz Continuity of Solution Sets [@chen2023bilevel]). *For a differentiable and smooth function $f(w,\theta)$ on $\mathcal{W} \times \Theta$, we say the solution set $S(w) := \arg\min_{\theta \in \Theta} f(w,\theta)$ is locally Lipschitz continuous at $w \in \mathcal{W}$ if there exists an open-ball of radius $\delta > 0$ and a constant $L_S < \infty$ such that for any $w' \in \mathbb{B}(w,\delta)$, we have $\normalfont\mathrm{\bf dist}(S(w), S(w')) \le L_S \|w-w'\|$.* #### Constrained Optimization We introduce some standard notions of regularities from nonlinear constrained optimization [@bertsekas1997nonlinear]. For a general constrained optimization problem $\textbf{Q}: \min_{\theta \in \Theta} f(\theta)$, suppose $\Theta$ can be compactly expressed with $m_1 \ge 0$ inequality constraints $\{g_i(\theta) \le 0\}_{i\in[m_1]}$ with convex and twice continuously differentiable $g_i$, and $m_2 \ge 0$ equality constraints $\{h_i(\theta) = 0\}_{i\in[m_2]}$ with linear functions $h_i$. **Definition 4** (Active Constraints). *We denote $\mathcal{I}(\theta) \subseteq [m_1]$ the index of active inequality constraints of $\normalfont {\textbf{Q}}$ at $\theta \in \Theta$, *i.e.,* $\mathcal{I}(\theta) := \{ i \in [m_1] \, : \, g_i(\theta) = 0\}$.* **Definition 5** (Linear Independence Constraint Qualification (LICQ)). 
*We say $\normalfont {\textbf{Q}}$ is regular at a feasible point $\theta \in \Theta$ if the set of vectors consisting of all equality constraint gradients $\nabla h_i(\theta)$, $\forall i \in [m_2]$ and the active inequality constraint gradients $\nabla g_i(\theta)$, $\forall i \in \mathcal{I}(\theta)$ is a linearly independent set.* A solution $\theta^*$ of $\normalfont {\textbf{Q}}$ satisfies the so-called KKT conditions when LICQ holds at $\theta^*$: the KKT conditions are that there exist unique Lagrangian multipliers $\lambda_i^* \ge 0$ for $i \in \mathcal{I}(\theta^*)$ and $\nu_i^* \in \mathbb{R}$ for $i \in [m_2]$ such that $$\label{eq:kkt.1} \mbox{$\theta^* \in \Theta$ and } \ \; \nabla f(\theta^*) + \textstyle \sum_{i\in \mathcal{I}(\theta^*)} \lambda_i^* \nabla g_i(\theta^*) + \textstyle \sum_{i \in [m_2]} \nu_i^* \nabla h_i(\theta^*) = 0.$$ For such a solution, we define the *strict* complementary slackness condition: **Definition 6** (Strict Complementarity). *Let $\theta^*$ be a solution of $\normalfont {\textbf{Q}}$ satisfying LICQ and the KKT condition above. We say that the strict complementary condition is satisfied at $\theta^*$ if there exist multipliers $\lambda^*, \nu^*$ that satisfy [\[eq:kkt.1\]](#eq:kkt.1){reference-type="eqref" reference="eq:kkt.1"}, and further $\lambda_i^* > 0$ for all $i \in \mathcal{I}(\theta^*)$.* #### Other Notation We say $a_k \asymp b_k$ if $a_k$ and $b_k$ decreases (or increases) in the same rate as $k \rightarrow \infty$, *i.e.,* $\lim_{k\rightarrow \infty} a_k/b_k = \Theta(1)$. Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm for vectors, and operator norm for matrices. $[n]$ with a natural number $n \in \mathbb{N}_+$ denotes a set $\{1,2,...,n\}$. Let $\Pi_{\mathcal{S}} (\theta)$ be the projection of a point $\theta$ onto a convex set $\mathcal{S}$. We denote $\normalfont\textbf{Ker}(M)$ and $\normalfont\textbf{Im}(M)$ to mean the kernel (nullspace) and the image (range) of a matrix $M$ respectively. For a symmetric matrix $M$, we define the pseudo-inverse of $M$ as $M^{\dagger} := U (U^\top M U)^{-1} U^\top$ where the columns of $U$ consist of eigenvectors corresponding to all non-zero eigenvalues of $M$. # Landscape Analysis and Penalty Method {#section:functional_analysis} In this section, we establish the relationship between the landscapes of the penalty formulation [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"} and the original problem [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"}. Recalling the definition of the perturbed lower-level problem $h_{\sigma}(x,y) := \sigma f(x,y) + g(x,y)$ from Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, we introduce the following notation for its solution set: For $\sigma \in [0,\sigma_0]$ with sufficiently small $\sigma_0 > 0$, we define $$\label{eq:perturbed_problems} l(x,\sigma) := \min_{y \in \mathcal{Y}} \, h_{\sigma}(x,y), \quad T(x,\sigma) := \arg\min_{y\in\mathcal{Y}} \, h_{\sigma}(x,y).$$ We call $l(x,\sigma)$ the value function, and $T(x,\sigma)$ the corresponding solution set. 
Then, the minimization problem [\[problem:penalty_formulation\]](#problem:penalty_formulation){reference-type="eqref" reference="problem:penalty_formulation"} over the penalty function and $\psi_\sigma$ defined in [\[eq:psis\]](#eq:psis){reference-type="eqref" reference="eq:psis"} can be rewritten as $$\begin{aligned} \label{problem:pf2} \min_{x\in\mathcal{X}} \, \psi_\sigma(x) \ \hbox{ where }\ \psi_\sigma(x) = \frac{l(x,\sigma) - l(x,0)}{\sigma}.\end{aligned}$$ We can view $\psi_{\sigma} (x)$ as a sensitivity measure of how the optimal value $\min_{y\in\mathcal{Y}} \, g(x,y)$ changes when we impose a perturbation of $\sigma f(x,y)$ in the objective. In fact, it can be easily shown that $$\begin{aligned} \lim_{\sigma \rightarrow 0} \psi_{\sigma}(x) = \frac{\partial}{\partial \sigma} l(x,\sigma)|_{\sigma = 0^+} = \psi(x). \end{aligned}$$ However, this formula provides only a pointwise asymptotic equivalence of the two functions and does not imply the equivalence of the *gradients* $\nabla\psi_{\sigma}(x)$ and $\nabla\psi(x)$ of the two hyper-objectives. In the limit setting, we check whether $$\nabla\psi(x) = \frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma) |_{\sigma = 0^+} \overset{?}{=} \frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma) |_{\sigma = 0^+} = \lim_{\sigma \rightarrow 0} \nabla\psi_{\sigma}(x).$$ Unfortunately, this relation does not always hold, and the gradient $\nabla\psi(x)$ may not even be well-defined, as illustrated in the examples below.

![](Figures/example1_pic.png){width="40%"} ![](Figures/example1_pic_s.png){width="40%"}

![](Figures/example2_pic.png){width="40%"} ![](Figures/example2_pic_s.png){width="40%"}

[]{#fig:example_plots} *Figure: $\psi(x)$ and $\psi_{\sigma}(x)$ in the examples: **(left)** Example [Example 1](#example:bilinear_LL){reference-type="ref" reference="example:bilinear_LL"}, **(right)** Example [Example 2](#example:sc_non_smooth){reference-type="ref" reference="example:sc_non_smooth"}; each example has one $\psi(x)$ panel and one $\psi_{\sigma}(x)$ panel. Blue dashed lines compare $\psi_{\sigma}(x)$ to the original hyper-objective $\psi(x)$.*

**Example 1**. *Consider the bilevel problem with $\mathcal{X}= \mathcal{Y}= [-1,1]$, $f(x,y) = x^2 + y^2$, and $g(x,y) = xy$. Note that $\lim_{\sigma\rightarrow 0} \psi_\sigma(x) = x^2+1$ for all $x \in [-1,1]\setminus\{0\}$, whereas $\lim_{\sigma\rightarrow 0} \psi_\sigma(0) = 0$. See Figure [4](#fig:example_plots){reference-type="ref" reference="fig:example_plots"}. Therefore $\psi(x)$ is the pointwise limit of $\psi_{\sigma} (x)$. However, neither $\psi(x)$ nor $\psi_{\sigma}(x)$ is differentiable at $x = 0$.*

This example also implies that the order of (partial) differentiation may not be swapped in general. Even when the lower-level objective $g(x,y)$ is strongly convex in $y$, the inclusion of a constraint set $\mathcal{Y}$, even when compact, can lead to a nondifferentiable $\psi$, as we see next.

**Example 2**. *Consider an example with $\mathcal{X}= [-2, 2]$, $\mathcal{Y}= [-1,1]$, $f(x,y) = -y$, and $g(x,y) = (y-x)^2$. In this example, $\psi(x) = 1$ if $x < -1$, $\psi(x) = -x$ if $x \in [-1,1]$, and $\psi(x) = -1$ otherwise. Thus $\psi(x)$ is not differentiable at $x=-1$ and $1$, while $\nabla\psi_{\sigma}(1)=0$ and $\nabla\psi_{\sigma}(-1)=-1$ for all $\sigma>0$.*

We conclude that additional conditions are required to claim $\lim_{\sigma \rightarrow 0} \nabla\psi_{\sigma}(x) \rightarrow \nabla\psi(x)$, and even just to ensure that $\nabla\psi(x)$ exists. There are two reasons for the poor behavior of $\nabla\psi(x)$. First, in Example [Example 1](#example:bilinear_LL){reference-type="ref" reference="example:bilinear_LL"}, the solution set moves discontinuously at $x=0$. When the solution abruptly changes due to a small perturbation in $x$, the problem is highly ill-conditioned --- a fundamental difficulty [@daskalakis2021complexity]. Second, in Example [Example 2](#example:sc_non_smooth){reference-type="ref" reference="example:sc_non_smooth"}, even though the solution set is continuous thanks to the strong convexity, the solution can move nonsmoothly when the set of active constraints for the lower-level problem changes. The result is frequently nonsmoothness of $\psi(x)$, so $\nabla\psi(x)$ may not be defined at such $x$. In summary, we need regularity conditions to ensure that these two cases do not happen.
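The nonsmoothness in Example [Example 2](#example:sc_non_smooth){reference-type="ref" reference="example:sc_non_smooth"} and the smoothing effect of a positive $\sigma$ are easy to reproduce numerically; the short sketch below (ours) evaluates $\psi$ and $\psi_{\sigma}$ for Example 2 by a brute-force grid over $y$ and compares slopes around the kinks at $x = \pm 1$.

```python
import numpy as np

# Numerical illustration (ours) of Example 2: f(x,y) = -y, g(x,y) = (y - x)^2,
# X = [-2, 2], Y = [-1, 1].  psi is piecewise linear with kinks at x = -1 and x = 1,
# while psi_sigma is differentiable for every sigma > 0, with grad psi_sigma(1) = 0
# and grad psi_sigma(-1) = -1 as stated in the example.

Y = np.linspace(-1.0, 1.0, 200001)               # fine grid over the compact set Y
f = lambda x, y: -y
g = lambda x, y: (y - x) ** 2

def psi(x):
    gy = g(x, Y)
    S = Y[gy == gy.min()]                        # lower-level solution set S(x) on the grid
    return f(x, S).min()                         # hyper-objective, eq. (eq:psi)

def psi_sigma(x, sigma):
    h = sigma * f(x, Y) + g(x, Y)                # perturbed lower-level objective h_sigma
    return (h.min() - g(x, Y).min()) / sigma     # penalized hyper-objective, eq. (eq:psis)

sigma, eps = 0.1, 1e-3
for x0 in (-1.0, 1.0):
    lpsi = (psi(x0) - psi(x0 - eps)) / eps       # one-sided slopes of psi around the kink
    rpsi = (psi(x0 + eps) - psi(x0)) / eps
    dps = (psi_sigma(x0 + eps, sigma) - psi_sigma(x0 - eps, sigma)) / (2 * eps)
    print(f"x = {x0:+.0f}: psi slopes L/R = {lpsi:+.2f} / {rpsi:+.2f}, "
          f"central-difference grad of psi_sigma = {dps:+.2f}")
# Expected: the one-sided slopes of psi disagree at x = -1 and x = 1 (kinks), while
# psi_sigma is smooth there, with gradient close to -1 at x = -1 and close to 0 at
# x = 1 (up to O(eps/sigma) discretization error).
```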
## Sufficient Conditions for Differentiability We first consider assumptions that obviate solution-set discontinuity so situations like those appearing in Example [Example 1](#example:bilinear_LL){reference-type="ref" reference="example:bilinear_LL"} will not occur. **Assumption 5**. *The solution set $T(x,\sigma) := \arg\min_{y\in\mathcal{Y}} h_{\sigma}(x,y)$ is locally Lipschitz continuous, *i.e.,* $T(x,\sigma)$ satisfies Definition [Definition 3](#definition:lipschitz_continuity_solution){reference-type="ref" reference="definition:lipschitz_continuity_solution"} at $(x,\sigma)$.* We note here that the differentiability of the value function $l(x,\sigma)$ requires Lipschitz continuity of solution sets (not just continuity). When the solution set is locally Lipschitz continuous, it is known (see Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"}) that the value function $l(x,\sigma)$ is (locally) differentiable and smooth. As we have seen in Example [Example 2](#example:sc_non_smooth){reference-type="ref" reference="example:sc_non_smooth"}, however, the Lipschitz-continuity of the solution set alone may not be sufficient in the constrained setting. We need additional regularity assumptions for constrained lower-level problems. Recalling the algebraic definition of $\mathcal{Y}$ in Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}, we define the Lagrangian of the constrained ($\sigma-$ perturbed) lower-level optimization problem: **Definition 7**. *Given the lower-level feasible set $\mathcal{Y}$ satisfying Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}, let $\{\lambda_i\}_{i=1}^{m_1} \subset \mathbb{R}_+$ and $\{\nu_i\}_{i=1}^{m_2} \subset \mathbb{R}$ be some Lagrangian multipliers. The Lagrangian function $\mathcal{L}(\cdot,\cdot,\cdot | x,\sigma): (\mathbb{R}_+^{m_1} \times \mathbb{R}^{m_2} \times \mathbb{R}^{d_y}) \rightarrow \mathbb{R}$ at $(x,\sigma)$ is defined by $$\begin{aligned} \mathcal{L}(\lambda, \nu, y | x,\sigma) = \sigma f(x,y) + g(x,y) + \textstyle \sum_{i \in [m_1]} \lambda_i g_i(y) + \textstyle \sum_{i \in [m_2]} \nu_i h_i(y). \end{aligned}$$ We also define the Lagrangian restricted to active constraints, denoted as $\mathcal{L}_{\mathcal{I}} (\cdot | x,\sigma)$, only in terms of $\lambda_{\mathcal{I}}, \nu, y$: $$\begin{aligned} \mathcal{L}_{\mathcal{I}} (\lambda_{\mathcal{I}} ,\nu, y | x,\sigma) = \sigma f(x,y) + g(x,y) + \textstyle \sum_{i \in \mathcal{I}(y)} \lambda_i g_i(y) + \textstyle \sum_{i \in [m_2]} \nu_i h_i(y). \end{aligned}$$* The required assumption is the existence of a regular and stable solution that satisfies the following: **Assumption 6**. 
*The solution set $T(x,\sigma)$ contains at least one $y^* \in T(x,\sigma)$ such that LICQ (Definition [Definition 5](#definition:LICQ){reference-type="ref" reference="definition:LICQ"}) and strict complementary condition (Definition [Definition 6](#definition:strict_slackness){reference-type="ref" reference="definition:strict_slackness"}) hold at $y^*$, and $\nabla^2 \mathcal{L}_{\mathcal{I}}(\cdot,\cdot, \cdot| x,\sigma)$ (the matrix of second derivatives with respect to all variables $\lambda_{\mathcal{I}}, \nu, y$) is continuous at $(\lambda_{\mathcal{I}}^*, \nu^*, y^*)$.* Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"} helps ensure that the active set $\mathcal{I}(y^*)$ does not change when $x$ or $\sigma$ is perturbed slightly. ## Asymptotic Landscape We show that Assumptions [Assumption 5](#assumption:solution_lipschitz_continuity){reference-type="ref" reference="assumption:solution_lipschitz_continuity"} and [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"} are nearly minimal to ensure the twice-differentiability of the value function $l(x,\sigma)$ which, in turn, guarantees asymptotic equivalence of $\nabla\psi_{\sigma}(x)$ and $\nabla\psi(x)$. In the sequel, we state our main (local) landscape analysis result only under the two assumptions at a given point $(x,\sigma)$. We first show that when the solution set is Lipschitz continuous, the following proposition assures that the perturbation of $(x,\sigma)$ must not perturb the gradient of Lagrangian in the kernel space of the Hessian: **Proposition 2** (Necessary Condition for Lipschitz Continuity). *Suppose $T(x,\sigma)$ satisfies Assumption [Assumption 5](#assumption:solution_lipschitz_continuity){reference-type="ref" reference="assumption:solution_lipschitz_continuity"} at $(x,\sigma)$. For any $y^* \in T(x,\sigma)$ that satisfies Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"}, the following must hold: $$\begin{aligned} \forall v \in \texttt{\normalfont span}(\normalfont\textbf{Im}(\nabla^2_{yx} h_{\sigma} (x,y^*)), \nabla_y f(x,y^*)): \begin{bmatrix} 0 \\ v \end{bmatrix} \in \normalfont\textbf{Im}( \nabla^2 \mathcal{L}_{\mathcal{I}} (\lambda_{\mathcal{I}}^*, \nu^*, y^* | x, \sigma)), \label{eq:necessary_image_condition} \end{aligned}$$* Geometrically speaking, perturbations in $(x,\sigma)$ must not tilt the flat directions of the lower-level landscape for $T(x,\sigma)$ to be continuous. While it is easy to make up examples that do not meet the condition (*e.g.* Example [Example 1](#example:bilinear_LL){reference-type="ref" reference="example:bilinear_LL"}), several recent results show that the solution landscape may be stabilized for complicated functions such as over-parameterized neural networks [@frei2021proxy; @song2021subquadratic; @liu2022loss]. Under a slightly stronger condition on the Lipschitz-continuity of solution sets, we show that $\frac{\partial^2}{\partial x \partial \sigma } l(x,\sigma)$ exists and the order of differentiation commutes. **Theorem 3**. *Suppose $T(\cdot,\cdot)$ satisfies Assumption [Assumption 5](#assumption:solution_lipschitz_continuity){reference-type="ref" reference="assumption:solution_lipschitz_continuity"} in a neighborhood of $(x,\sigma)$. 
If there exists at least one $y^* \in T(x,\sigma)$ that satisfies Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"}, then $\frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)$ exists and can be given explicitly by $$\begin{aligned} \frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma) &= \frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma) \nonumber \\ &= \nabla_x f(x,y^*) - \begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma} (x,y^*) \end{bmatrix} (\nabla^2 \mathcal{L}_{\mathcal{I}}(\lambda_{\mathcal{I}}^*, \nu^*, y^*|x,\sigma))^{\dagger} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*) \end{bmatrix}. \label{eq:conjecture_pseudo_inverse} \end{aligned}$$ If this equality holds at $\sigma = 0^+$, then $\psi(x)$ is differentiable at $x$, and $\lim_{\sigma \rightarrow 0} \nabla\psi_{\sigma}(x) = \nabla\psi(x)$.* Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"} generalizes the expression of $\nabla\psi(x)$ from the case of a unique solution to the one with multiple solutions, significantly enlarging the scope of tractable instances of BO. Up to our best knowledge, there are no previous results that provide an explicit formula of $\nabla\psi(x)$, even when the solution set is Lipschitz continuous, though conjectures have been made in the literature [@xiao2023generalized; @arbel2022non] under similar conditions. In Appendix [8.4](#appendix:proof:conjecture_pseudo_inverse){reference-type="ref" reference="appendix:proof:conjecture_pseudo_inverse"}, we prove a more general version of Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"} (see Theorem [Theorem 23](#theorem:general_pseudo_inverse_form){reference-type="ref" reference="theorem:general_pseudo_inverse_form"}), of possible broader interest, concerning the Hessian of $l(x,\sigma)$. **Remark 4** (Set Lipschitz Continuity). *While we require the entire solution set $T(x,\sigma)$ to be Lipschitz continuous, the proof indicates that we need only Lipschitz continuity of solution paths passing through $y^*$ (that defines the first-order derivative of $l(x,\sigma)$ and satisfies other regularity conditions) in all possible perturbation directions of $(x,\sigma)$. Nonetheless, we stick to a stronger requirement of Definition [Definition 3](#definition:lipschitz_continuity_solution){reference-type="ref" reference="definition:lipschitz_continuity_solution"}, since our algorithm requires a stronger condition that implies the continuity of entire solution sets.* **Remark 5** (Lipschitz Continuity in $\sigma$). *While we require $T(x,\sigma)$ to be Lipschitz continuous in both $x$ and $\sigma$, well definedness of $\nabla\psi(x)$ requires only Lipschitz continuity of $T(x,0^+)$ in $x$ (which sometimes can be implied by the PL condition only on $g$ as in [@chen2023bilevel; @shen2023penalty; @xiao2023generalized]). Still, for implementing a stable and efficient algorithm with stochastic oracles, we conjecture that it is essential to have the Lipschitz continuity assumption on the additional axis $\sigma$.* ### Special Case: Unique Solution and Invertible Hessian When the Hessian is invertible at the unique solution, the statement can be made stronger since we can deduce solution-set Lipschitz continuity from the well-understood solution sensitivity analysis in constrained optimization [@bonnans2013perturbation]. 
That is, we can provide a strong sufficiency guarantee for the Lipschitz continuity of solution-sets. **Proposition 6** (Sufficient Condition for Lipschitz Continuity). *Suppose $y^* \in T(x,\sigma)$ is the unique lower-level solution at $(x,\sigma)$. Suppose that Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"} holds at $y^*$ with corresponding Lagrangian multipliers $\lambda^*, \nu^*$. Further, suppose that $\nabla^2 \mathcal{L}_{\mathcal{I}}^*$ is invertible. Then $T(x,\sigma)$ is locally Lipschitz continuous at $(x,\sigma)$.* Thus, the uniqueness of the solution along with LICQ, strict complementarity, and invertibility of the Hessian is strong enough to guarantee the Lipschitz continuity of solution sets. Therefore, we can conclude that $\frac{\partial^2}{\partial x \partial \sigma } l(x,\sigma)$ exists and that the order of differentiation commutes under Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"}. **Theorem 7**. *Suppose $y^* \in T(x,\sigma)$ is the unique lower-level solution at $(x,\sigma)$. If Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"} holds at $y^*$, and $\nabla^2 \mathcal{L}_{\mathcal{I}}^*$ is invertible, then $\frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)$ exists and can be given explicitly by $$\begin{aligned} \frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma) &= \frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma) \\ &= \nabla_x f(x,y^*) - \begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma} (x,y^*) \end{bmatrix} (\nabla^2 \mathcal{L}_{\mathcal{I}} (\lambda^*_{\mathcal{I}}, \nu^*, y^*|x,\sigma))^{-1} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*) \end{bmatrix}. \end{aligned}$$ If this equality holds at $\sigma = 0^+$, then $\psi(x)$ is differentiable at $x$, and $\lim_{\sigma \rightarrow 0} \nabla\psi_{\sigma}(x) = \nabla\psi(x)$.* ## Landscape Approximation with $\sigma>0$ We can view $\nabla\psi_{\sigma}(x)$ as an approximation of $\frac{\partial}{\partial \sigma} \left(\frac{\partial}{\partial x} l(x,\sigma)\right)|_{\sigma = 0^+}$ via finite differentation with respect to $\sigma$. Assuming that $\frac{\partial^2}{\partial {\sigma} \partial {x} } l(x,\sigma)$ exists and is continuous for all small values of $\sigma$, we can apply mean-value theorem and conclude that $\nabla\psi_{\sigma}(x) = \frac{\partial^2}{\partial {\sigma} \partial {x} } l(x,\sigma')$ for some $\sigma' \in [0,\sigma]$. Thus, $\|\nabla\psi_{\sigma}(x) - \nabla\psi (x)\|$ is $O(\sigma)$ whenever $\frac{\partial^2}{\partial {x} \partial {\sigma}} l(x,\sigma)$ is well-defined and Lipschitz-continuous over $[0,\sigma]$. To work with nonzero constant $\sigma > 0$, we need the regularity assumptions to hold in significantly larger regions. The crux of Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} is the guaranteed Lipschitz continuity of solution sets (see also Assumption [Assumption 5](#assumption:solution_lipschitz_continuity){reference-type="ref" reference="assumption:solution_lipschitz_continuity"}) for every given $x$ and $\sigma$, which is also crucial for the tractability of lower-level solutions by local search algorithms whenever upper-level variable changes: **Lemma 8**. 
*Under Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, $T(x,\sigma)$ is $(l_{g,1}/\mu)$-Lipschitz continuous in $x$ and $(l_{f,0}/\mu)$-Lipschitz continuous in $\sigma$ for all $x \in \mathcal{X},\sigma \in [0,\delta/C_f]$.* An additional consequence of Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"} is that, by Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"}, $l (x, \sigma)$ is continuously differentiable and smooth for all $x \in \mathcal{X}$ and $\sigma \in [0,\delta/C_f]$. This fact in turn guarantees that $\psi_{\sigma}(x)$ is differentiable and smooth (though $\psi(x)$ does not necessarily have these properties). While Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} is sufficient to ensure $\psi_{\sigma}(x)$ is well-behaved, we need additional regularity conditions to ensure that $\psi(x)$ is also well-behaved. Therefore, we make two more *local* assumptions when we connect $\psi(x)$ and $\psi_{\sigma}(x)$. The first concerns Hessian Lipschitzness and regularity of solutions. **Assumption 7**. *For a given $x$, there exists at least one $y^* \in T(x,0)$ such that if we follow the solution path $y^*(\sigma)$ along the interval $\sigma \in [0,\sigma_0]$,* 1. *all $y^*(\sigma)$ satisfy Assumption [Assumption 6](#assumption:strict_slackness){reference-type="ref" reference="assumption:strict_slackness"} with active constraint indices $\mathcal{I}$ and Lagrangian multipliers $\lambda^*_{\mathcal{I}}(\sigma), \nu^*(\sigma)$ of size $O(1)$ and* 2. *$\nabla^2 f, \nabla^2 g, \{\nabla^2 g_i \}_{i=1}^{m_1}$ are $l_{h,2}$-Lipschitz continuous at all $(x, y^*(\sigma))$.* In the unconstrained setting with Hessian-Lipschitz objectives, Assumption [Assumption 7](#assumption:additional_regularity){reference-type="ref" reference="assumption:additional_regularity"} is implied by Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} for all $x \in \mathcal{X}, \sigma \in [0,\delta/C_f]$. The second assumption is on the minimum nonzero singular value of active constraint gradients. **Assumption 8**. *For a given $x$, there exists at least one $y^* \in T(x,0)$ such that if we follow the solution path $y^*(\sigma)$ along the interval $\sigma \in [0,\sigma_0]$, all solutions $y^*(\sigma)$ satisfy Definition [Definition 5](#definition:LICQ){reference-type="ref" reference="definition:LICQ"} with minimum singular value $s_{\min} > 0$. That is, for all $\sigma \in [0,\sigma_0]$, we have $$\begin{aligned} \min_{v: \|v\|_2=1} \left\| \begin{bmatrix} \nabla g_i(y^*(\sigma)), \forall i \in \mathcal{I}&| \ \nabla h_i(y^*(\sigma)), \forall i \in [m_2] \end{bmatrix} v \right\| \ge s_{\min}. \end{aligned}$$* Note that $s_{\min}$ in the constrained setting depends purely on the LICQ condition, Definition [Definition 5](#definition:LICQ){reference-type="ref" reference="definition:LICQ"}. **Theorem 9**. 
*Under Assumptions [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} - [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}, we have $$\begin{aligned} | \psi_\sigma (x) - \psi(x) | \le O\left( l_{f,0}^2/\mu \right) \cdot \sigma, \end{aligned}$$ for all $x \in \mathcal{X}$ and $\sigma \in [0, \delta / C_f]$. If, in addition, Assumptions [Assumption 7](#assumption:additional_regularity){reference-type="ref" reference="assumption:additional_regularity"} and [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"} hold at a given $x$, then $$\begin{aligned} \| \nabla\psi_\sigma (x) - \nabla\psi(x) \| \le O\left( \frac{l_{g,1}^4 l_{f,0}^3}{\mu^3 s_{\min}^3} + \frac{l_{h,2} l_{g,1}^2 l_{f,0}^3 }{\mu^3 s_{\min}^2} \right) \cdot \sigma. \end{aligned}$$* The proof of Theorem [Theorem 9](#proposition:uniform_convergence_QG){reference-type="ref" reference="proposition:uniform_convergence_QG"} is given in Appendix [8.5](#proof:uniform_convergence_QG){reference-type="ref" reference="proof:uniform_convergence_QG"}. **Remark 10** (Change in Active Sets). *A slightly unsatisfactory conclusion of Theorem [Theorem 9](#proposition:uniform_convergence_QG){reference-type="ref" reference="proposition:uniform_convergence_QG"} is that when $\nabla\psi(x)$ is not well-defined due to the non-smooth movement of the solution set as in Example [Example 2](#example:sc_non_smooth){reference-type="ref" reference="example:sc_non_smooth"}, it does not relate $\nabla\psi_{\sigma}(x)$ to any alternative measure for $\nabla\psi(x)$. Around the point where $\psi(x)$ is non-smooth, some concurrent work attempts to find a so-called $(\epsilon,\delta)$-Goldstein stationary point [@chen2023bilevel], which can be seen as an approximation of gradients via localized smoothing (but only in the upper-level variables $x$). While this is an interesting direction, we do not pursue it here. Instead, we conclude this section by stating that an $\epsilon$-stationary solution of $\psi_{\sigma}(x)$ is an $O(\epsilon + \sigma)$-KKT point of [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"}. (This claim is fairly straightforward to check; see, for example, Theorem [Theorem 24](#theorem:theorem_kkt_condition){reference-type="ref" reference="theorem:theorem_kkt_condition"} in Appendix [\[appendix:theorem_kkt_condition\]](#appendix:theorem_kkt_condition){reference-type="ref" reference="appendix:theorem_kkt_condition"}.)* # Algorithm {#section:algorithm} In the previous section, we investigated how and when the penalty-based method can yield an approximate solution of the original bilevel optimization problem [\[problem:bilevel\]](#problem:bilevel){reference-type="eqref" reference="problem:bilevel"}. In this section, we describe algorithms that find a stationary point of the penalty function using only access to first-order (stochastic) gradient oracles. We make the following assumptions on the first-order (stochastic) oracles and the projection operators required to develop our algorithms. **Assumption 9**. *The projection operations $\Pi_{\mathcal{X}}, \Pi_{\mathcal{Y}}$ onto sets $\mathcal{X}, \mathcal{Y}$, respectively, can be implemented efficiently.* **Assumption 10**. 
*We access first-order information about the objective functions via unbiased estimators $\nabla f(x,y; \zeta), \nabla g(x,y; \xi)$, where $\mathbb{E}[\nabla f(x,y; \zeta)] = \nabla f(x, y)$ and $\mathbb{E}[\nabla g(x,y; \xi)] = \nabla g(x, y)$. The variances of the stochastic gradient estimators are bounded as follows: $$\begin{aligned} \mathbb{E}[\|\nabla f(x,y;\zeta) - \nabla f(x, y)\|^2] \le \sigma_f^2, \quad \mathbb{E}[\|\nabla g(x,y;\xi) - \nabla g(x, y)\|^2] \le \sigma_g^2, \end{aligned}$$ for some universal constants $\sigma_f^2, \sigma_g^2 \ge 0$.* Throughout the rest of the paper, we assume that Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} holds. ## Stationarity Measures Since we showed in the previous section that $\nabla\psi_{\sigma}(x)$ is an $O(\sigma)$-approximation of $\nabla\psi(x)$ in most desirable circumstances, we now consider finding a stationary point of $\psi_{\sigma}(x)$. Under Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, we can show that this is equivalent to finding a stationary point $(x^*,y^*,z^*)$ of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"}, defined as the following: $$\label{eq:exact_minmax_stationary_point} y^* = \normalfont\textbf{prox}_{\rho h_{\sigma}(x^*,\cdot)} (y^*), \quad z^* = \normalfont\textbf{prox}_{\rho g (x^*,\cdot)} (z^*), \quad -\left(\nabla_x h_{\sigma} (x^*, y^*) - \nabla_x g(x^*,z^*)\right) \in \mathcal{N}_{\mathcal{X}}(x^*),$$ where $\mathcal{N}_{\mathcal{X}}(x^*)$ is the normal cone of $\mathcal{X}$ at $x^*$. We define a notion of *approximate* stationary points as follows.  **Definition 8**. *We say $(x,y,z)$ is an $\epsilon$-stationary point of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"} if it satisfies the following: $$\begin{aligned} \frac{1}{\rho} \| y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y) \| \le \sigma \epsilon, \quad \frac{1}{\rho} \| z - \normalfont\textbf{prox}_{\rho g(x,\cdot)}(z) \| \le \sigma \epsilon, \\ \frac{1}{\rho} \left\|x - \Pi_{ \mathcal{X} } \left\{ x - \rho \left(\nabla_x h_{\sigma} (x,y) - \nabla_x g (x,z)\right) \right\} \right\| \le \sigma \epsilon. \end{aligned}$$* The lemma below relates the $\epsilon$-stationarity of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"} to the landscape of $\psi_{\sigma}(x)$: **Lemma 11**. *Let $(x^*,y^*,z^*)$ be an $\epsilon$-stationary point of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"}.* 1. *For all $x \in \mathcal{X}$, $\nabla\psi_{\sigma}(x)$ is well-defined, and $x^*$ is a $(1 + l_{g,1}/\mu) \epsilon$-stationary point of $\psi_{\sigma}(x)$.* 2. 
*If, in addition, Assumptions [Assumption 7](#assumption:additional_regularity){reference-type="ref" reference="assumption:additional_regularity"} and [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"} hold at $x^*$, then $x^*$ is a $((1+ l_{g,1}/\mu)\epsilon + L_{\sigma} \sigma)$-stationary point of $\psi(x)$, where $L_{\sigma} = O\left(l_{g,1}^4l_{f,0}^3 / (\mu^3 s_{\min}^3) \right)$.* The first part of the lemma is a consequence of Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"}, while the second part follows from Theorem [Theorem 9](#proposition:uniform_convergence_QG){reference-type="ref" reference="proposition:uniform_convergence_QG"}. Henceforth, we aim to find a saddle point of formulation [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"}. **Input:** total outer-loop iterations: $K$, step sizes: $\{\alpha_k, \gamma_{k}\}$, proximal-smoothing parameters: $\{\beta_k: \beta_k \in (0,1]\}$, inner-loop iteration counts: $\{T_k\}$, outer-loop batch sizes: $\{M_k\}$, penalty parameters: $\{\sigma_k\}$, proximal parameter: $\rho$, initializations: $x_0 \in \mathcal{X}, y_0, z_0 \in \mathcal{Y}$ ## First-Order Method with Large Batches We first consider solving a stochastic saddle-point problem by applying (projected) stochastic gradient descent-ascent, alternating between upper-level and lower-level variables, with multiple iterations for the lower-level variables ($y, z$) per single iteration in the upper-level variable ($x$). There are two technical challenges that we aim to tackle specifically for the form [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"} with Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}. 1. Technically speaking, the main difference from many previous works (*e.g.,* [@hong2023two; @chen2021closing; @chen2022single]) is that we no longer have a global contraction property of inner iterations toward solution sets. To be more specific, when the lower-level objective satisfies the PL condition for all $y \in \mathbb{R}^{d_y}$, the distance between the current (lower-level) iterates and the solution set contracts globally after applying inner gradient steps, *i.e.,* if we update $z_{k+1} \leftarrow z_k - \beta_k \nabla_y g (x_k,z_k)$ at the $k^{th}$ iteration before updating $x_k$, we get $$\begin{aligned} \mathbb{E}[\normalfont\mathrm{\bf dist}(z_{k+1}, T(x_k, 0)) | \mathcal{F}_k] \le (1-\lambda_k) \cdot \normalfont\mathrm{\bf dist}(z_{k}, T(x_k,0)), \end{aligned}$$ for some $\lambda_k \in (0,1]$. However, we assume that the error-bound condition holds only at points with $O(\delta)$ proximal error. That is, unless $y$ and $z$ remain close to the solution set (with high probability if the gradient oracles are stochastic), we cannot guarantee that $\normalfont\mathrm{\bf dist}(z_{k}, T(x_k,0))$ is improved (in expectation) as the outer iteration $k$ proceeds. 2. Eventually, we want $\sigma = O(\epsilon)$ since $\psi_{\sigma}(x)$ is ideally an $O(\sigma)$-approximation of $\psi(x)$ up to first order. However, setting $\sigma = O(\epsilon)$ from the first iteration is overly conservative, resulting in an overall slowdown of convergence. 
We therefore decrease the penalty parameters $\{\sigma_k\}$ gradually to improve the overall convergence rates and the gradient oracle complexity. To address issue 1, we propose a smoothed surrogate of $\psi_{\sigma}(x,y,z)$ via the proximal envelope (often referred to as the Moreau envelope [@moreau1965proximite]) with a sufficiently small $\rho \ll 1/l_{g,1}$: $$\begin{aligned} \label{eq:def_three_variable_hg} h^*_{\sigma,\rho} (x,y) &= \min_{w \in \mathcal{Y}} \left( h_{\sigma}(x,y,w) := h_{\sigma}(x,w) + \frac{1}{2\rho} \|w - y\|^2 \right) , \nonumber \\ g^*_\rho (x,z) &= \min_{w \in \mathcal{Y}} \left( g(x,z,w) := g(x, w) + \frac{1}{2\rho} \|w - z\|^2 \right),\end{aligned}$$ and consider the following alternative saddle-point problem with proximal envelopes: $$\begin{aligned} \min_{x\in \mathcal{X}, y\in \mathcal{Y}} \, \max_{z \in \mathcal{Y}} \, \psi_{\sigma,\rho} (x,y,z) := \frac{h^*_{\sigma,\rho} (x,y) - g_{\rho}^* (x,z)}{\sigma}. \label{problem:envelop_saddle_point_def}\end{aligned}$$ This formulation is convenient because the inner-minimization problem is strongly convex, so we always have a unique and well-defined lower-level optimizer to chase. Note that $\nabla h_{\sigma,\rho}^*(x,y) = \nabla h_{\sigma}(x,w_y^*)$ where $w_y^* = \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)$, and similarly, $\nabla g^*_\rho (x,z) = \nabla g(x,w_z^*)$ where $w_z^* = \normalfont\textbf{prox}_{\rho g(x,\cdot)}(z)$. That is, to apply gradient descent-ascent on $\psi_{\sigma,\rho}(x,y,z)$, we only need to solve the proximal operations associated with $h_{\sigma}(x,\cdot)$ and $g(x,\cdot)$. While we may not be able to compute the proximal operators exactly, we can introduce intermediate variables $w_{y,k}, w_{z,k}$ that chase the solutions of the proximal operations. We then design the inner loop of the algorithm to solve the proximal operation using $T_k$ inner iterations. Later, we make particular choices of the number of inner iterations $T_k$ to achieve the best oracle complexity and convergence rates. To address issue 2 above, we simply choose $\sigma_k = k^{-s}$ for some chosen constant $s > 0$. This rate of decrease of $\sigma_k$ is optimized to achieve the best oracle complexity and convergence rates to reach an $\epsilon$-stationary point of $\psi_{\epsilon,\rho}(x,y,z)$. We summarize the overall double-loop implementation in Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"}, where we define: $$\begin{aligned} {3} f_{wy}^{k,t} &= \nabla_y f(x_k, u_{t};\zeta_{wy}^{k,t}), \quad & g_{wy}^{k,t} &= \nabla_y g(x_k, u_{t}; \xi_{wy}^{k,t}), \quad & g_{wz}^{k,t} &= \nabla_y g(x_k, v_{t}; \xi_{wz}^{k,t}), \\ f_{x}^{k,m} &= \nabla_x f(x_k, w_{y,k+1}; \zeta_{x}^{k,m}), \quad & g_{xy}^{k,m} &= \nabla_x g(x_k, w_{y,k+1}; \xi_{xy}^{k,m}), \quad & g_{xz}^{k,m} & = \nabla_x g(x_k, w_{z,k+1}; \xi_{xz}^{k,m}).\end{aligned}$$ We mention here that one may try $T_k = M_k = O(1)$, in which case Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} becomes a single-loop algorithm. However, as we see in the analysis, the optimal schedules of $T_k$ and $M_k$ should increase with $k$ (see also Remark [Remark 14](#remark:double_to_single){reference-type="ref" reference="remark:double_to_single"}). 
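To make the double-loop update pattern concrete, the following is a minimal deterministic sketch of the scheme just described (inner prox-chasing loop for $w_{y,k}, w_{z,k}$, followed by envelope-gradient steps in $y, z$ and an upper-level step in $x$). It is not a faithful transcription of Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"}: it assumes exact gradients, $\mathcal{X}= \mathbb{R}^{d_x}$ and $\mathcal{Y}= \mathbb{R}^{d_y}$ (so projections are omitted), absorbs the $1/\sigma$ scaling of the envelope gradients in $y$ and $z$ into the step size, and all constants, schedules, and the oracle interface `grad_f`/`grad_g` are illustrative assumptions.

```python
import numpy as np

def double_loop_penalty_sketch(grad_f, grad_g, x0, y0, z0,
                               K=1000, rho=0.1, s=1.0/3.0,
                               alpha=0.01, gamma=0.05, T=10):
    """Illustrative sketch of the double-loop prox-penalty method.

    grad_f(x, y) and grad_g(x, y) are assumed to return the pair
    (gradient w.r.t. x, gradient w.r.t. y) of the exact objectives.
    """
    x, y, z = x0.copy(), y0.copy(), z0.copy()
    w_y, w_z = y.copy(), z.copy()
    for k in range(K):
        sigma = (k + 1.0) ** (-s)  # decreasing penalty sigma_k = k^{-s}
        # Inner loop: chase the proximal points
        #   w_y ~ prox_{rho h_sigma(x,.)}(y),  w_z ~ prox_{rho g(x,.)}(z)
        for _ in range(T):
            gy = sigma * grad_f(x, w_y)[1] + grad_g(x, w_y)[1] + (w_y - y) / rho
            gz = grad_g(x, w_z)[1] + (w_z - z) / rho
            w_y -= gamma * gy
            w_z -= gamma * gz
        # Envelope-gradient steps: y-descent and z-ascent both move the
        # variables toward the corresponding (approximate) prox points.
        y -= alpha * (y - w_y) / rho
        z -= alpha * (z - w_z) / rho
        # Upper-level step on psi_{sigma,rho}:
        #   (grad_x h_sigma(x, w_y) - grad_x g(x, w_z)) / sigma
        gx = sigma * grad_f(x, w_y)[0] + grad_g(x, w_y)[0] - grad_g(x, w_z)[0]
        x -= alpha * gx / sigma
    return x, y, z
```

In a faithful implementation, stochastic minibatches of size $M_k$, projections onto $\mathcal{X}$ and $\mathcal{Y}$, and the increasing schedules for $T_k$ and $M_k$ discussed above would replace the fixed illustrative constants used here.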
**Input:** total outer-loop iterations: $K$, step sizes: $\{\alpha_k, \gamma_{k}\}$, proximal-smoothing parameters: $\{\beta_k: \beta_k \in (0,1]\}$, penalty parameters: $\{\sigma_k\}$, momentum schedulers: $\{\eta_k: \eta_k \in (0,1] \}$, proximal parameter: $\rho$, initializations: $x_0 \in \mathcal{X}, y_0, z_0 \in \mathcal{Y}$ ## A Fully Single-Loop First-Order Algorithm A drawback of the double-loop implementation is that we have to wait for an increasingly large number of samples to be collected (since we design $T_k$ and $M_k$ to increase in $k$) before we can improve the objective. A natural question is whether we can keep incrementally updating the upper-level variable $x$ without waiting for too many inner iterations or for the evaluation of large batches. While a single-loop implementation (which replaces the inner loops with a single step and avoids the use of batches) may in general result in slower convergence (or worse sample complexity), we show that, with momentum assistance and an additional mild assumption on the stochastic oracles, it achieves faster convergence and improved sample complexity. Specifically, we consider the following assumption. **Assumption 11**. *Assumption [Assumption 2](#assumption:nice_functions){reference-type="ref" reference="assumption:nice_functions"} holds for $f(x,y;\zeta)$ and $g(x,y;\xi)$ with probability $1$.* We define momentum-assisted gradient estimators recursively for the inner-loop proximal solvers as follows: $$\begin{aligned} \widetilde{g}_{wz}^{k} &:= \nabla_y g(x_k, w_{z,k}; \xi_{wz}^{k}) + (1 - \eta_{k}) \left( \widetilde{g}_{wz}^{k-1} - \nabla_y g(x_{k-1}, w_{z,k-1}; \xi_{wz}^{k}) \right), \nonumber \\ \widetilde{f}_{wy}^{k} &:= \nabla_y f(x_k, w_{y,k}; \zeta_{wy}^{k}) + (1 - \eta_{k}) \left( \widetilde{f}_{wy}^{k-1} - \nabla_y f(x_{k-1}, w_{y,k-1}; \zeta_{wy}^{k}) \right), \nonumber \\ \widetilde{g}_{wy}^{k} &:= \nabla_y g (x_k, w_{y,k}; \xi_{wy}^{k}) + (1 - \eta_{k}) \left( \widetilde{g}_{wy}^{k-1} - \nabla_y g (x_{k-1}, w_{y,k-1}; \xi_{wy}^{k}) \right),\end{aligned}$$ where $\eta_k \in (0,1]$ and $\eta_0 = 1$ (so that the $(k-1)^{th}$ terms are ignored at $k=0$). The estimators used to update the upper-level variable $x$ are defined similarly. A single-loop alternative to Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} can be defined as in Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"}. Our analysis shows that the momentum-assisted technique leads to improved sample-complexity upper bounds. # Analysis {#section:analysis} In this section, we provide our main convergence results for Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} and Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"}. 
## Analysis of Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} We first define the proximal error of $y$ and $z$ at the $k^{th}$ iteration as: $$\begin{aligned} \Delta_k^y := \rho^{-1} \cdot (y_k - \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)), \quad \Delta_k^z := \rho^{-1} \cdot (z_k - \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k)).\end{aligned}$$ For measuring the error in $x$, we define [^2] $$\begin{aligned} \hat{x}_k & := \Pi_{ \mathcal{X} } \left\{ x_k - \alpha_k \left(\nabla_x h_{\sigma_k} (x_k,\normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k) ) - \nabla_x g(x_k, \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k) ) \right) \right\}, \\ \Delta_k^x & := \alpha_k^{-1} (x_k - \hat{x}_k). \end{aligned}$$ Next, we define $$\begin{aligned} \Phi_{\sigma,\rho}(x,y,z) := \frac{h^*_{\sigma,\rho}(x,y) - g_{\rho}^*(x,z)}{\sigma} + \frac{C}{\sigma} (g_{\rho}^*(x,z) - g^*(x)), \label{eq:potential_definition}\end{aligned}$$ with some universal constant $C \ge 4$, and finally we define the potential function as $$\begin{aligned} \mathbb{V}_k & := \Phi_{\sigma_k, \rho}(x_k,y_k,z_k) + \frac{C_w \lambda_k}{\sigma_k \rho} \left(\|w_{y,k} - \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)\|^2 + \|w_{z,k} - \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k)\|^2 \right), \label{Eq:potential_def}\end{aligned}$$ where $C_w > 0$ is some sufficiently large universal constant, and $\lambda_k := T_k \gamma_k / (4\rho)$ is a target improvement rate for chasing the proximal operators per outer iteration. We are now ready to state our main convergence theorem. **Theorem 12**. *Suppose that Assumptions [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}-[Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"} and [Assumption 9](#assumption:efficient_projection){reference-type="ref" reference="assumption:efficient_projection"}-[Assumption 10](#assumption:gradient_variance){reference-type="ref" reference="assumption:gradient_variance"} hold, with parameters and stepsizes satisfying the following bounds, for all $k \ge 0$: $$\label{eq:stepsize_theorem_general} \begin{split} &\rho < c_2 / l_{g,1}, \quad \sigma_k < c_1 l_{g,1} / l_{f,1}, \quad T_k \gamma_k < c_3 \rho, \quad \beta_k \le c_4 \ll 1, \quad \alpha_k \le c_5 \rho (1+l_{g,1}/\mu)^{-1}, \\ & \alpha_k \le c_6 \rho^{3} \min(\mu^{2}, \delta^2 / D_\mathcal{Y}^2) \cdot \beta_k, \quad \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} \le c_7 \rho^2 \min(\mu^{2}, \delta^2 / D_\mathcal{Y}^2) \cdot \beta_k, \end{split}$$ with some universal constants $c_1, c_2, c_3, c_4, c_5, c_6, c_7 > 0$ as well as the following: $$\begin{aligned} &\rho \beta_k + \alpha_k \le c_8 T_k^2 \gamma_k^2, \quad \forall k, \label{eq:stepsize_theorem_compactY} \end{aligned}$$ with some universal constant $c_8 > 0$. 
Then the iterates of Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} satisfy $$\begin{aligned} &\mathbb{E}\left[ \sum_{k=0}^{K-1} \frac{\alpha_k}{16\sigma_k} \|\Delta_k^x\|^2 + \frac{\rho\beta_k}{16\sigma_k} (\|\Delta_k^y\|^2 + \|\Delta_k^z\|^2) \right] \le \mathbb{E}[\mathbb{V}_0 - \mathbb{V}_K] \label{eq:final_convergence_bound} \\ &\quad + O(C_f) \cdot \sum_{k=0}^{K-1} \left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}\right) + \frac{O(l_{g,1} / \mu + C_w)}{\rho} \left(\sum_{k=0}^{K-1} \sigma_k^{-1} \left(\frac{\alpha_k}{M_k} + \rho^{-1} T_k^2 \gamma_k^3\right) (\sigma_k^2 \sigma_f^2 + \sigma_g^2) \right). \nonumber \end{aligned}$$* The proof of Theorem [Theorem 12](#theorem:smoothed_convergence){reference-type="ref" reference="theorem:smoothed_convergence"} is given in Appendix [9](#Appendix:convergence_analysis){reference-type="ref" reference="Appendix:convergence_analysis"}. We mention here that the problem-dependent constants may not be fully optimized and could be improved with a more careful analysis. Still, there are two major considerations for the stepsizes: (i) the relation between $\beta_k$ and $\alpha_k$, and (ii) the relation between $\alpha_k$ (or $\rho\beta_k$) and $T_k\gamma_k$. Regarding (i), the conditions [\[eq:stepsize_theorem_general\]](#eq:stepsize_theorem_general){reference-type="eqref" reference="eq:stepsize_theorem_general"} require $\alpha_k / \beta_k \asymp \rho^2 \min(\mu, \delta/D_{\mathcal{Y}})^2$. Effectively, this relation determines the number of updates of the $y_k$ and $z_k$ variables for each update of $x_k$. The condition is necessary to ensure that $y_k$ and $z_k$ always remain relatively close to the solution sets $T(x_k,\sigma_k)$ and $T(x_k,0)$ *in expectation*, which is crucial for convergence to a stationary point of the saddle-point problem [\[problem:envelop_saddle_point_def\]](#problem:envelop_saddle_point_def){reference-type="eqref" reference="problem:envelop_saddle_point_def"}. Regarding (ii), the relation between $\alpha_k$ and $T_k\gamma_k$ in [\[eq:stepsize_theorem_compactY\]](#eq:stepsize_theorem_compactY){reference-type="eqref" reference="eq:stepsize_theorem_compactY"} is required for approximately evaluating the proximal operators without solving from scratch at every outer iteration. As a corollary, with a proper design of step-sizes, we can give a finite-time convergence guarantee for reaching an approximate stationary point of $\psi_{\sigma} (x)$. To simplify the statement, we treat all problem-dependent parameters as $O(1)$ quantities. **Corollary 13**. *Let $\alpha_k = c_\alpha \rho (k+k_0)^{-a}$, $\beta_k = c_{\beta} (k+k_0)^{-b}$, $\gamma_k = c_{\gamma} (k+k_0)^{-c}$, and $\sigma_k = c_{\sigma} (k+k_0)^{-s}$, $T_k = (k+k_0)^{t}$, $M_k = (k+k_0)^{m}$ with some proper problem-dependent constants $c_{\alpha}, c_{\beta}, c_{\gamma}$, $c_{\sigma}$, and $k_0$. Let $R$ be a random variable drawn from a uniform distribution over $\{0, ..., K-1\}$, and let $\epsilon = \sigma_K$. Under the same conditions as in Theorem [Theorem 12](#theorem:smoothed_convergence){reference-type="ref" reference="theorem:smoothed_convergence"}, the following holds after $K$ iterations of Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"}: for the optimal design of rates, we set $a = b = 0, s = 1/3$, and* 1. 
*If stochastic noise is present in both the upper-level objective $f$ and the lower-level objective $g$ (*i.e.,* $\sigma_f^2, \sigma_g^2 > 0$), then let $c = t = m = 4/3$.* 2. *If stochastic noise is present only in $f$ (*i.e.,* $\sigma_f^2 > 0$, $\sigma_g^2 = 0$), then let $c = t = m = 2/3$.* 3. *If we have access to exact information about $f$ and $g$ (*i.e.,* $\sigma_f^2 = \sigma_g^2 = 0$), then let $c = t = m = 0$.* *Then, we have $\|\nabla\psi_{\epsilon}(x_R)\| \asymp \frac{\log K}{K^{1/3}}$ with probability at least $2/3$. If Assumptions [Assumption 7](#assumption:additional_regularity){reference-type="ref" reference="assumption:additional_regularity"} and [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"} additionally hold at $x_R$, then we also have $\|\nabla\psi(x_R)\| \asymp \frac{\log K}{K^{1/3}}$.* Note that the overall gradient oracle complexity (or simply sample complexity) to have $\mathbb{E}[\|\nabla\psi_{\epsilon}(x_R)\|] = O(\epsilon)$ is given by $O(K \cdot (M_K + T_K))$ with $K = O(\epsilon^{-1/s})$ and $M_K = T_K = O(\epsilon^{-t/s})$. Thus, we have $O(\epsilon^{-7})$, $O(\epsilon^{-5})$, and $O(\epsilon^{-3})$ sample-complexity upper bounds for the fully stochastic, upper-level-stochastic-only, and deterministic cases, respectively. **Remark 14** (Single-Loop Implementation with Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"}). *While we design $T_K = M_K = O(\epsilon^{-4})$ to achieve the best complexity bound in stochastic scenarios, we can also find a different rate schedule for which $T_K = M_K = O(1)$. For instance, when $\mathcal{X}= \mathbb{R}^{d_x}$, we can change the coefficients of the noise-variance terms from $O(\alpha_k/M_k)$ to $O(\alpha_k^2)$, and schedule the rates of the step-sizes such that the left-hand side of [\[eq:final_convergence_bound\]](#eq:final_convergence_bound){reference-type="eqref" reference="eq:final_convergence_bound"} converges. However, we found that such a single-loop design may result in overall worse complexity bounds unless momentum-assistance techniques are deployed.* ## Analysis of Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"} In addition to the quantities defined before, we should also track the noise-variance terms in the momentum-assisted gradient estimators. We first define the expected gradients $G_{wy}^k$, $G_{wz}^k, G_{x}^k$ as follows: $$\begin{aligned} G_{wz}^k &:= \nabla_y g(x_k, w_{z,k}), \quad G_{wy}^k := \sigma_k \nabla_y f(x_k, w_{y,k}) + \nabla_y g(x_k, w_{y,k}), \\ G_{x}^k &:= \sigma_k \nabla_x f(x_k, w_{y,k+1}) + \nabla_x g(x_k, w_{y,k+1}) - \nabla_x g(x_k, w_{z,k+1}). 
\end{aligned}$$ Next, we define error terms $e_{wz}^k, e_{wy}^k, e_x^k$ in these gradient estimators as follows: $$\begin{aligned} e_{wz}^k &:= \widetilde{g}_{wz}^k - G_{wz}^k, \quad e_{wy}^k := \sigma_k \widetilde{f}_{wy}^k + \widetilde{g}_{wy}^k - G_{wy}^k, \quad e_{x}^k := \sigma_k \widetilde{f}_{x}^k + (\widetilde{g}_{xy}^k - \widetilde{g}_{xz}^k) - G_{x}^k.\end{aligned}$$ Finally, we redefine the potential function: $$\begin{aligned} \mathbb{V}_k &:= \Phi_{\sigma_k, \rho}(x_k,y_k,z_k) + \frac{C_w}{\sigma_k \rho} \left(\|w_{y,k} - \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)\|^2 + \|w_{z,k} - \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k)\|^2 \right) \nonumber \\ &\qquad + \frac{C_\eta \rho^2}{\sigma_k \gamma_{k-1}} \left(\|e_x^{k-1}\|^2 + \|e_{wy}^k\|^2 + \|e_{wz}^k\|^2 \right), \label{Eq:potential_def_momentum}\end{aligned}$$ with some properly set universal constants $C_w, C_\eta > 0$. For technical reasons, we require here one additional assumption on the boundedness of the movement in $w_{y,k}$. **Assumption 12**. *For all $x \in \mathcal{X}$ and $y,z \in \mathcal{Y}$, let $w_y^* := \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)} (y) = \arg\min_{w \in \mathcal{Y}} h_{\sigma} (x,y,w)$ and $w_{z}^* := \normalfont\textbf{prox}_{\rho g(x,\cdot)} (z) = \arg\min_{w \in \mathcal{Y}} g(x,z,w)$ where $h_{\sigma}(x,y,w)$ and $g(x,z,w)$ are defined in [\[eq:def_three_variable_hg\]](#eq:def_three_variable_hg){reference-type="eqref" reference="eq:def_three_variable_hg"}. We assume that $$\begin{aligned} \|\nabla_w h_{\sigma}(x,y, w_y^*)\| \le M_w, \quad \|\nabla_w g(x,z,w_z^*)\| \le M_w, \end{aligned}$$ for some (problem-dependent) constant $M_w = O(1)$.* We are now ready to state the convergence guarantee for the momentum-assisted fully single-loop implementation. **Theorem 15**. 
*Suppose that Assumptions [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}-[Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}, [Assumption 9](#assumption:efficient_projection){reference-type="ref" reference="assumption:efficient_projection"}-[Assumption 12](#assumption:bounded_w_gradients){reference-type="ref" reference="assumption:bounded_w_gradients"} hold, with parameters and step-sizes satisfying [\[eq:stepsize_theorem_general\]](#eq:stepsize_theorem_general){reference-type="eqref" reference="eq:stepsize_theorem_general"} as well as the following relations for all $k \ge 0$: $$\label{eq:stepsize_theorem_momentum} \rho \beta_k + \alpha_k \le c_8 \gamma_k, \quad \eta_{k+1} \ge c_9 \rho^{-2} \cdot \max\left( (l_{g,1}/\mu) \alpha_{k}\gamma_k, \gamma_k^2 \right).$$ Then the iterates of Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"} satisfy the following inequality: $$\begin{aligned} &\mathbb{E}\left[ \sum_{k=0}^{K-1} \frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 +\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \right] \le \mathbb{E}[\mathbb{V}_0 - \mathbb{V}_K] \\ &\qquad + \sum_{k=0}^{K-1} \left( \left(\frac{\sigma_{k} - \sigma_{k+1}}{\sigma_k}\right) \cdot O(C_f) + \frac{O(M_w^2) \rho^2}{ C_w \sigma_k \gamma_k} \left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}\right)^2 + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) \right) \\ &\qquad + C_{\eta} O(\rho^2 l_{g,1}^2) \left( \frac{h_{\sigma_0} (x_0, y_0, w_{y,0}) - h_{\sigma_0,\rho}^*(x_0,y_0)}{\sigma_0} + \frac{g (x_0, z_0, w_{z,0}) - g_{\rho}^*(x_0,z_0)}{\sigma_0} \right). \end{aligned}$$* We then give a corollary analogous to Corollary [Corollary 13](#corollary:final_convergence_result){reference-type="ref" reference="corollary:final_convergence_result"}, with a proper design of step-sizes. As before, to simplify the statement, we treat all problem-dependent parameters as $O(1)$ quantities. **Corollary 16**. *Let $\alpha_k = c_\alpha \rho (k+k_0)^{-a}$, $\beta_k = c_{\beta} (k+k_0)^{-b}$, $\gamma_k = c_{\gamma} (k+k_0)^{-c}$, $\sigma_k = c_{\sigma} (k+k_0)^{-s}$ and $\eta_k = (k+k_0)^{-n}$ with some proper problem-dependent constants $c_{\alpha}, c_{\beta}, c_{\gamma}$, $c_{\sigma}$, and $k_0$. Let $R$ be a random variable drawn from a uniform distribution over $\{0, ..., K-1\}$, and let $\epsilon = \sigma_K$. Under the same conditions as in Theorem [Theorem 15](#theorem:convergence_momentum){reference-type="ref" reference="theorem:convergence_momentum"}, the following claims hold after $K$ iterations of Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"}.* 1. *If stochastic noise is present in both the upper-level objective $f$ and the lower-level objective $g$ (*i.e.,* $\sigma_f^2, \sigma_g^2 > 0$), then let $a = b = c = 2/5$, $s = 1/5$, and $n = 4/5$. Then $\|\nabla\psi_{\epsilon}(x_R)\| \asymp \frac{\log K}{K^{1/5}}$ with probability at least $2/3$.* 2. *If stochastic noise is present only in $f$, let $a = b = c = 1/4$, $s = 1/4$, and $n = 1/2$. Then $\|\nabla\psi_{\epsilon}(x_R)\| \asymp \frac{\log K}{K^{1/4}}$ with probability at least $2/3$.* 3. *If we have access to exact gradient information, let $a=b=c=0$, $s=1/3$, $n=0$. 
Then $\|\nabla\psi_{\epsilon}(x_R)\| \asymp \frac{\log K}{K^{1/3}}$ with probability at least $2/3$.* *If Assumptions [Assumption 7](#assumption:additional_regularity){reference-type="ref" reference="assumption:additional_regularity"} and [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"} additionally hold at $x_R$, then the same conclusion holds for $\|\nabla\psi(x_R)\|$.* Note that since Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"} only uses $O(1)$ samples per iteration, the overall sample complexity is upper-bounded by $O(\epsilon^{-5})$, $O(\epsilon^{-4})$, and $O(\epsilon^{-3})$ for the fully stochastic, upper-level-stochastic-only, and deterministic cases, respectively. That is, momentum assistance not only enables a single-loop implementation, but also improves the overall sample complexity. # Conclusion This paper studies a first-order algorithm for solving bilevel optimization when the lower-level problem and perturbed versions of it satisfy a proximal error-bound condition whenever the errors are small. We establish an $O(\sigma)$-closeness relationship between the penalty formulation $\psi_{\sigma}(x)$ and the hyper-objective $\psi(x)$ under the proximal error-bound condition; we then develop a fully first-order stochastic approximation scheme for finding a stationary point of $\psi_{\sigma}(x)$ and study its non-asymptotic performance guarantees. We believe our algorithm to be simple and general, and useful in many large-scale scenarios that involve nested optimization problems. Below, we discuss several issues not addressed in this paper that may become subjects of fruitful future research. #### Tightness of Results. Can our complexity result be improved in terms of its dependence on $\epsilon$ while using only first-order oracles? Recent work in [@chen2023near] shows that when the lower-level problem is unconstrained and strongly convex, the oracle complexity can be improved to $O(\epsilon^{-2})$ with deterministic first-order gradient oracles. Can similar improvements be found in the complexity when stochastic oracles and constraints are present in the formulation? #### Lower-Level $x$-Dependent Constraints. When the lower-level constraints depend on $x$, it is still possible to derive an implicit gradient formula provided the lower-level problem is non-degenerate. For instance, [@xiao2023alternating] has studied the case in which the lower-level objective is strongly convex and there are lower-level linear equality constraints that depend on $x$. In general, with $x$-dependent constraints, we cannot avoid estimating Lagrangian multipliers, as they are needed in the implicit gradient formula. Even to find a stationary point of the penalty function, $\nabla\psi_{\sigma} (x)$ requires Lagrangian multipliers (see the Envelope Theorem [@milgrom2002envelope]). An interesting future direction would be to develop an efficient first-order algorithm for this case. #### General Convex Lower-Level. One interesting special case is when $g(x,\cdot)$ is merely convex, not necessarily strongly convex. There have been recent advances in min-max optimization for nonconvex-concave problems; see for example [@boroun2023accelerated; @thekumparampil2019efficient; @kovalev2022first; @kong2021accelerated; @ostrovskii2021efficient; @zhang2020single]. 
We note that when $g(x,\cdot)$ is convex, an $\epsilon$-stationary point of [\[problem:saddle_point_def\]](#problem:saddle_point_def){reference-type="eqref" reference="problem:saddle_point_def"} is also an $\epsilon$-KKT solution of [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"}. The first paper to investigate this direction in deterministic settings is [@lu2023first], to our knowledge. An important future direction would be to extend their results to stochastic settings. #### Nonsmooth Objectives. We could also consider nonsmooth objectives at both levels when efficient proximal operators are available for handling the nonsmoothness. It would be interesting future work to see whether the analysis in this paper needs to change significantly in order to handle nonsmooth objectives. # Auxiliary Lemmas Throughout this section, we take $\rho \le c_1/l_{g,1}$ and $\sigma < c_2 l_{g,1} / l_{f,1}$ with sufficiently small universal constants $c_1, c_2 \in (0, 0.01]$. We also assume that Assumptions [Assumption 2](#assumption:nice_functions){reference-type="ref" reference="assumption:nice_functions"}-[Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"} hold by default. **Theorem 17** (Danskin's Theorem). *Let $f(w, \theta)$ be a continuously differentiable and smooth function on $\mathcal{W} \times \Theta$. Let $l^*(w) := \min_{\theta \in \Theta} f(w,\theta)$ and $S(w) := \arg\min_{\theta \in \Theta} f(w,\theta)$, and assume $S(w)$ is compact for all $w$. Then the directional derivative of $l^*(w)$ in direction $v$ with $\|v\|=1$ is given by: $$\begin{aligned} D_v l^*(w) := \lim_{\delta\rightarrow 0} \frac{l^*(w+\delta v) - l^*(w)}{\delta} = \min_{\theta \in S(w)} \langle v, \nabla_w f(w,\theta) \rangle. \end{aligned}$$* **Lemma 18** (Proposition 5 in [@shen2023penalty]). *For a continuously-differentiable and $L$-smooth function $f(w,\theta)$ in $\mathcal{W} \times \Theta$, consider a minimizer function $l^*(w) = \min_{\theta \in \Theta} f(w,\theta)$ and a solution map $S(w) = \arg\min_{\theta\in\Theta} f(w,\theta)$. If $S(w)$ is $L_S$-Lipschitz continuous at $w$, then $l^*(w)$ is differentiable and $L(1+L_S)$-smooth at $w$, and $\nabla l^*(w) = \nabla_w f(w,\theta^*)$ for any $\theta^* \in S(w)$.* **Lemma 19**. *For any $x_1, x_2 \in \mathcal{X}$, and $y_1, y_2 \in \mathcal{Y}$, the following holds: $$\begin{aligned} \| \normalfont\textbf{prox}_{\rho g(x_1, \cdot)}(y_1) - \normalfont\textbf{prox}_{\rho g(x_2,\cdot)} (y_2) \| \le O(\rho l_{g,1}) \|x_1 - x_2\| + \|y_1 - y_2\|. \end{aligned}$$ The same property holds with $h_{\sigma}(x,\cdot)$ instead of $g(x,\cdot)$.* **Lemma 20**. *For any $x \in \mathcal{X}$, $y \in \mathcal{Y}$ and $\sigma_1, \sigma_2 \in [0,\sigma]$, the following holds: $$\begin{aligned} \| \normalfont\textbf{prox}_{\rho h_{\sigma_1}(x, \cdot)}(y) - \normalfont\textbf{prox}_{\rho h_{\sigma_2} (x,\cdot)} (y) \| \le O(\rho l_{f,0}) |\sigma_1 - \sigma_2|. \end{aligned}$$* **Lemma 21**. 
*For the choice of $\rho < 1/(4l_{g,1})$ and $\sigma < c \cdot l_{g,1}/l_{f,1}$ with sufficiently small $c > 0$, $h_{\sigma,\rho}^*(x,y)$ is continuously differentiable and $2\rho^{-1}$-smooth jointly in $(x,y)$.* # Deferred Proofs in Section [3](#section:functional_analysis){reference-type="ref" reference="section:functional_analysis"} {#deferred-proofs-in-section-sectionfunctional_analysis} We first prove the more classical results, Proposition [Proposition 6](#lem:dsc){reference-type="ref" reference="lem:dsc"} and Theorem [Theorem 7](#theorem:differentiability_sufficiency_condition){reference-type="ref" reference="theorem:differentiability_sufficiency_condition"}, for completeness. ## Proof of Proposition [Proposition 6](#lem:dsc){reference-type="ref" reference="lem:dsc"} {#proof-of-proposition-lemdsc} The proof is based on the celebrated implicit function theorem. See Appendix C.7 in [@evans2022partial], for instance. We first show that if $y^*(x,\sigma) \in T(x,\sigma)$ is unique, then there exist $\delta > 0$ and $\delta_y > 0$ such that for all $\|(x',\sigma') - (x,\sigma)\| < \delta$, the solution set satisfies $T(x',\sigma') \subset \mathbb{B}(y^*, \delta_y)$ and is a singleton, where $\mathbb{B}(y^*, \delta_y)$ is the open ball of radius $\delta_y$ centered at $y^*$. When the context is clear, we simply denote $y^*(x,\sigma)$ as $y^*$. To begin with, we argue that we can take $\delta$ small enough that solutions cannot occur outside the neighborhood of $y^*$. Note that the union of all solution sets is contained in $\mathbb{B}(0,R)$ for some finite $R < \infty$ due to Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}. For any $\delta_y>0$, let $q^* = \min_{y \in (\mathcal{Y}\cup \mathbb{B}(0,R)) \setminus \mathbb{B}(y^*, \delta_y)} \sigma f(x,y) + g(x,y)$, and let $M = \max_{y \in \mathbb{B}(0,R)} (\|\sigma \nabla_x f(x,y) + \nabla_x g(x,y) \| + f(x,y))$ (the finite maximum exists since $f,g$ are smooth). Since $T(x,\sigma)$ is a singleton, we have $q^* > l(x,\sigma)$. Thus, there exists $0 < \delta_0 \ll (q^* - l(x,\sigma)) / M$ such that for all $(x',\sigma') \in \mathbb{B}((x,\sigma), \delta_0)$, we have $T(x',\sigma') \subseteq \mathbb{B}(y^*,\delta_y)$. Next, by regularity and strict complementary slackness of $y^*$, there exists a unique $(\lambda^*, \nu^*)$ such that $\lambda_i^* > 0$ for all $i \in \mathcal{I}(y^*)$, $\lambda_i^*=0$ for all $i \notin \mathcal{I}(y^*)$, and $\nabla\mathcal{L}(\lambda^*_{\mathcal{I}}, \nu^*, y^* | x, \sigma) = 0$. Since we assumed that $\nabla^2 \mathcal{L}(\lambda^*_{\mathcal{I}}, \nu^*, y^* | x,\sigma)$ is invertible, we can apply the implicit function theorem. That is, there exists a sufficiently small $\delta > 0$ such that for all $(x',\sigma')$ with $\|(x',\sigma') - (x,\sigma) \| < \delta$, we can take $\delta_{\lambda, \nu}, \delta_y > 0$ such that there is a unique $(\lambda_{\mathcal{I}}', \nu', y')$ satisfying $$\begin{aligned} \nabla\mathcal{L}_{\mathcal{I}} (\lambda_{\mathcal{I}}', \nu', y' | x', \sigma') = 0,\end{aligned}$$ inside the local region $\|(\lambda_{\mathcal{I}}',\nu') - (\lambda^*,\nu^*)\| < \delta_{\lambda,\nu}$ and $\|y' - y^*\|< \delta_y$. Thus, we can take $\delta > 0$ sufficiently small such that $\delta_{\lambda,\nu}$ is small enough to keep $\lambda'_{\mathcal{I}}$ non-negative. 
Furthermore, in this local region, $T(x',\sigma') \subseteq \mathbb{B}(y^*,\delta_y)$ for all $(x',\sigma') \in \mathbb{B}((x,\sigma), \delta')$ where $\delta' = \min(\delta_0, \delta)$, which in turn implies that $T(x',\sigma')$ is a singleton and uniquely given by the implicit function theorem. Therefore, $T(x,\sigma)$ is differentiable and thus locally Lipschitz continuous. In addition, $T(x,\sigma)$ is always a singleton over $\mathbb{B}((x,\sigma), \delta')$.$\Box$ ## Proof of Theorem [Theorem 7](#theorem:differentiability_sufficiency_condition){reference-type="ref" reference="theorem:differentiability_sufficiency_condition"} {#proof:differentiability_sufficiency_condition} Recall the local region given in the proof of Proposition [Proposition 6](#lem:dsc){reference-type="ref" reference="lem:dsc"}. We note that the implicit function theorem further implies that, in this local region, we can differentiate $y^*(x,\sigma)$ with respect to $x$ and $\sigma$, with $$\begin{aligned} \frac{d y^*(x,\sigma)}{d\sigma} &= - \begin{bmatrix} 0 & I \end{bmatrix} \nabla^2 \mathcal{L}_{\mathcal{I}}(\lambda_{\mathcal{I}}^*, \nu^*, y^* | x, \sigma)^{-1} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*) \end{bmatrix}, \\ \nabla_x y^*(x,\sigma) &= - \begin{bmatrix} 0 & I \end{bmatrix} \nabla^2 \mathcal{L}_{\mathcal{I}}(\lambda_{\mathcal{I}}^*, \nu^*, y^* | x, \sigma)^{-1} \begin{bmatrix} 0 \\ \nabla_{yx}^2 h_{\sigma}(x,y^*) \end{bmatrix}.\end{aligned}$$ As a consequence, $\frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma)$ is given by $$\begin{aligned} \frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma) &= \frac{\partial}{\partial \sigma} (\sigma \nabla_x f(x,y^*(x,\sigma)) + \nabla_x g(x,y^*(x,\sigma))) \\ &= \nabla_x f(x,y^*(x,\sigma)) + \sigma \nabla_{xy}^2 f(x,y^*(x,\sigma)) \frac{dy^*(x,\sigma)}{d\sigma} + \nabla_{xy}^2 g(x,y^*(x,\sigma)) \frac{dy^*(x,\sigma)}{d\sigma} \\ &= \nabla_x f(x,y^*(x,\sigma)) - \begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma} (x,y^*(x,\sigma)) \end{bmatrix} (\nabla^2 \mathcal{L}_{\mathcal{I}}^*)^{-1} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(x,\sigma)) \end{bmatrix}.\end{aligned}$$ Similarly, differentiation in the swapped order is given by $$\begin{aligned} \frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma) &= \frac{\partial}{\partial x} f(x,y^*(x,\sigma)) \\ &= \nabla_x f(x,y^*(x,\sigma)) + \nabla_x y^*(x,\sigma))^\top \nabla_y f(x,y^*(x,\sigma)) \\ &= \nabla_x f(x,y^*(x,\sigma)) - \begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma} (x,y^*(x,\sigma)) \end{bmatrix} (\nabla^2 \mathcal{L}_{\mathcal{I}}^*)^{-1} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(x,\sigma)) \end{bmatrix}.\end{aligned}$$ Hence $\frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)$ exists. If this holds at $\sigma = 0^+$, then we have $\lim_{\sigma \rightarrow 0} \nabla\psi_{\sigma}(x) = \nabla\psi(x)$. $\Box$ ## Proof of Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"} {#proof:key_set_continuity_necessary} We instead prove the general version of Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"}: **Theorem 22**. *Suppose that $f(w,\theta)$ in $\mathcal{W} \times \Theta$ is continuously-differentiable and $L$-smooth with $\mathcal{W}, \Theta$ satisfying Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}. Let $l^*(w) = \min_{\theta \in \Theta} f(w,\theta)$. 
Assume the solution map $S(w) = \arg\min_{\theta\in\Theta} f(w,\theta)$ is locally Lipschitz continuous at $w$. Let $\theta^* \in S(w)$ be such that (1) $\theta^*$ satisfies Definition [Definition 5](#definition:LICQ){reference-type="ref" reference="definition:LICQ"} and [Definition 6](#definition:strict_slackness){reference-type="ref" reference="definition:strict_slackness"} with Lagrangian multipliers $\lambda^*$, and (2) $\nabla^2 \mathcal{L}|_{\mathcal{I}}^*$ is locally continuous at $(w, \lambda^*, \theta^*)$ jointly in $(w, \lambda, \theta)$. Then the following must hold: $$\begin{aligned} \forall v \in \normalfont\textbf{Im}(\nabla^2_{\theta w} f(w,\theta^*)): \begin{bmatrix} 0 \\ v \end{bmatrix} \in \normalfont\textbf{Im}( \nabla^2 \mathcal{L}(\lambda^*, \theta^* | w)). \end{aligned}$$* Then Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"} follows as a corollary. #### *Proof.* We show this by contradiction. Let $\{g_i\}_{i \in \mathcal{I}(\theta^*)}$ be the set of active constraints of $\Theta$ at $\theta^*$. To simplify the discussion, we assume there are no equality constraints (the argument is unchanged otherwise). Suppose there exists $v \in \normalfont\textbf{Im}(\nabla^2_{\theta w} f(w,\theta^*))$ such that $\|v\|=1$ and $(0,v)$ is not in the image of the Lagrangian Hessian. Let $(0,v) = v_{\normalfont\textbf{Ker}} + v_{\normalfont\textbf{Im}}$ be the orthogonal decomposition of $(0,v)$ into the kernel and image of the Lagrangian Hessian. Note that we can take $v$ such that $\|v_{\normalfont\textbf{Ker}}\| > 0$. Let $dw$ be such that $\Omega(\delta) \cdot v = \nabla^2_{\theta w} f(w, \theta^*) dw$. Since $S(w)$ is locally Lipschitz continuous, there exist $\delta > 0$ and $L_T < \infty$ such that for all $\|dw\| < \delta$, there exists $d\theta$ with $\|d\theta\| < L_T \delta$ such that $\theta^* + d\theta \in S(w+dw)$. We can take $\delta$ small enough that inactive inequality constraints stay inactive under the change $d\theta$. Thus, when considering $d\lambda$, we do not change the coordinates that correspond to inactive constraints. We claim that there cannot exist $(d\lambda, d\theta)$ with $\|d\theta\| < L_T \delta$ satisfying $$\begin{aligned} \nabla\mathcal{L}(\lambda^* + d\lambda, \theta^* + d\theta | w+dw) = 0. \end{aligned}$$ First, we show that $d\lambda$ cannot be too large. Note that $$\begin{aligned} \nabla\mathcal{L}(\lambda^* + d\lambda, \theta^*| w) - \nabla\mathcal{L}(\lambda^*, \theta^*| w) &= \sum_{i \in \mathcal{I}(\theta^*)} (d\lambda_i) \nabla g_i(\theta^*) \\ &= \underbrace{[\nabla g_i(\theta^*), \ i \in \mathcal{I}(\theta^*)]}_{B} [d\lambda_i, \ i \in \mathcal{I}(\theta^*)] \end{aligned}$$ Since we assumed that $B$ has full column rank, the minimum (right) singular value $s_{\min}$ of $B$ is strictly positive, *i.e.,* $s_{\min} > 0$. On the other hand, by the Lipschitz continuity of all gradients, perturbations in $w$ and $\theta$ can change the gradient of the Lagrangian only by order $O(\delta)$: $$\begin{aligned} \left\| \nabla\mathcal{L}(\lambda^* + d\lambda, \theta^* + d\theta| w+dw) - \nabla\mathcal{L}(\lambda^* + d\lambda, \theta^*| w) \right\| \le \underbrace{L \|dw\|}_{\text{perturbed by $dw$}} + \underbrace{\sum_{i \in \mathcal{I}(\theta^*)} (\lambda^*_i + d\lambda_i) L \|d\theta\| }_{\text{perturbed by $d\theta$}}. 
\end{aligned}$$ Thus, since $(dw,d\theta) = O(\delta)$, we have $$\begin{aligned} (s_{\min} - O(\delta)) \|d\lambda\| \le O(\delta) (1 + \|\lambda^*\|). \label{eq:derive_dlambda_cond} \end{aligned}$$ By taking $\delta < s_{\min}$ small enough, and since the Lagrange multipliers exist with $\|\lambda^*\| < \infty$, we have proven that $\|d\lambda\| = O(\delta)$ for sufficiently small $\delta$. Next, we check that $$\begin{aligned} &\nabla\mathcal{L}(\lambda^* + d\lambda, \theta^* + d\theta| w+dw) - \nabla\mathcal{L}(\lambda^*, \theta^*| w) \\ &= \Omega(\delta) \cdot (v_{\normalfont\textbf{Ker}} + v_{\normalfont\textbf{Im}}) + \nabla^2 \mathcal{L}(\lambda^*, \theta^*| w) \begin{bmatrix} d\lambda \\ d\theta \end{bmatrix} + o(\delta). \end{aligned}$$ However, $v_{\normalfont\textbf{Ker}}$ is not in the image of $\nabla^2 \mathcal{L}$, and the $o(\delta)$ terms cannot cancel $\Omega(\delta) \cdot v_{\normalfont\textbf{Ker}}$ if $\delta \ll \|v_{\normalfont\textbf{Ker}}\|$. Thus, $\nabla\mathcal{L}(\lambda^* + d\lambda, \theta^* + d\theta| w+dw)$ cannot be $0$, which implies that there is no feasible optimal solution in an $O(\delta)$-ball around $\theta^*$ if we perturb $w$ in the direction $dw$. This contradicts $S(w)$ being locally Lipschitz continuous. Thus, [\[eq:necessary_image_condition\]](#eq:necessary_image_condition){reference-type="eqref" reference="eq:necessary_image_condition"} is necessary for $S(w)$ to be locally Lipschitz continuous. $\square$ ## Proof of Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"} {#appendix:proof:conjecture_pseudo_inverse} Similarly to the proof of Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"}, we prove the following general version: **Theorem 23**. *Suppose that $f(w,\theta)$ in $\mathcal{W} \times \Theta$ is continuously-differentiable and $L$-smooth with $\mathcal{W}, \Theta$ satisfying Assumption [Assumption 4](#assumption:constrained_set){reference-type="ref" reference="assumption:constrained_set"}. Let $l^*(w) = \min_{\theta \in \Theta} f(w,\theta)$. Assume the solution map $S(w) = \arg\min_{\theta\in\Theta} f(w,\theta)$ is locally (uniformly) Lipschitz continuous in a neighborhood of $w$. For $w \in \mathcal{W}$, if there exists at least one $\theta^* \in S(w)$ such that (1) $\theta^*$ satisfies Definition [Definition 5](#definition:LICQ){reference-type="ref" reference="definition:LICQ"} and [Definition 6](#definition:strict_slackness){reference-type="ref" reference="definition:strict_slackness"} with Lagrangian multipliers $\lambda^*$, and (2) $\nabla^2 f, \nabla^2 \mathcal{L}_{\mathcal{I}}^*$ are locally continuous at $(w, \lambda_{\mathcal{I}}^*, \theta^*)$ jointly in $(w, \lambda_{\mathcal{I}}, \theta)$, then $\nabla^2 l^*(w)$ exists and is given by $$\begin{aligned} \label{eq:hs} \nabla^2 l^*(w) = \nabla_{ww}^2 f(w,\theta^*) - \begin{bmatrix} 0 & \nabla_{w\theta}^2 f(w,\theta^*) \end{bmatrix} (\nabla^2 \mathcal{L}_\mathcal{I}(\lambda^*_{\mathcal{I}}, \theta^* | w))^{\dagger} \begin{bmatrix} 0 \\ \nabla_{\theta w}^2 f(w,\theta^*) \end{bmatrix}. \end{aligned}$$* Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"} follows as a corollary by taking only the $\frac{\partial^2}{\partial x \partial \sigma}$ and $\frac{\partial^2}{\partial \sigma \partial x}$ blocks with $w = (\sigma, x)$. 
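Before the proof, we note that the pseudo-inverse formula [\[eq:hs\]](#eq:hs){reference-type="eqref" reference="eq:hs"} is easy to sanity-check numerically. The following is a minimal sketch on a toy *unconstrained* quadratic (so that the Lagrangian Hessian reduces to $\nabla_{\theta\theta}^2 f$ and the pseudo-inverse coincides with the ordinary inverse); the matrices, the finite-difference check, and all constants are illustrative assumptions, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d_w, d_theta = 3, 4

# Toy quadratic: f(w, theta) = 0.5 theta^T A theta + theta^T B w + 0.5 w^T C w,
# with A positive definite so the lower-level solution is unique.
A = rng.normal(size=(d_theta, d_theta)); A = A @ A.T + d_theta * np.eye(d_theta)
B = rng.normal(size=(d_theta, d_w))
C = rng.normal(size=(d_w, d_w)); C = C @ C.T

def l_star(w):
    theta = np.linalg.solve(A, -B @ w)  # argmin_theta f(w, theta)
    return 0.5 * theta @ A @ theta + theta @ B @ w + 0.5 * w @ C @ w

# Hessian formula: no constraints, so the Lagrangian Hessian is just A.
hess_formula = C - B.T @ np.linalg.pinv(A) @ B

# Finite-difference Hessian of l^* for comparison.
w0, eps = rng.normal(size=d_w), 1e-5
hess_fd = np.zeros((d_w, d_w))
for i in range(d_w):
    for j in range(d_w):
        e_i, e_j = np.eye(d_w)[i], np.eye(d_w)[j]
        hess_fd[i, j] = (l_star(w0 + eps * e_i + eps * e_j) - l_star(w0 + eps * e_i)
                         - l_star(w0 + eps * e_j) + l_star(w0)) / eps**2

# Should be small (finite-difference and rounding error only).
print(np.max(np.abs(hess_formula - hess_fd)))
```

In this unconstrained case the formula reduces to the classical Schur-complement expression $\nabla_{ww}^2 f - \nabla_{w\theta}^2 f\, (\nabla_{\theta\theta}^2 f)^{-1} \nabla_{\theta w}^2 f$; the point of Theorem [Theorem 23](#theorem:general_pseudo_inverse_form){reference-type="ref" reference="theorem:general_pseudo_inverse_form"} is that, under the stated conditions, the pseudo-inverse version remains valid when the Lagrangian Hessian is singular.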
#### *Proof.* Let $\theta_t^* \in S(w + tv)$ be the closest solution to $\theta^* \in S(w)$, and let $\lambda_t^*$ be the corresponding Lagrangian multiplier. Let $\mathcal{I}:= \mathcal{I}(\theta^*)$ be the set of active constraints of $\Theta$ at $\theta^*$. As in the proof of Theorem [Theorem 22](#theorem:key_set_continuity_necessary){reference-type="ref" reference="theorem:key_set_continuity_necessary"}, to simplify the discussion, we assume there are no equality constraints (including equality constraints requires only a straightforward modification). We first show that the set of active constraints $\mathcal{I}$ does not change under the perturbation $tv$ in $w$ when the solution set is Lipschitz continuous. To see this, note that all inactive inequality constraints remain strictly negative, $g_i(\theta) < 0$ for all $i \notin \mathcal{I}(\theta^*)$. For active constraints, due to Definition [Definition 6](#definition:strict_slackness){reference-type="ref" reference="definition:strict_slackness"}, we have $\lambda_i^* > 0$ for all $g_i(\theta^*) = 0$ with $i \in \mathcal{I}$. By the solution-set continuity given as an assumption, we have $\|\theta_t^* - \theta^*\| = O(t)$. Thus, by the same argument as in deriving [\[eq:derive_dlambda_cond\]](#eq:derive_dlambda_cond){reference-type="eqref" reference="eq:derive_dlambda_cond"}, we also have $\|\lambda_t^* - \lambda^*\| = O(t)$ for sufficiently small $t$. Hence the active constraints remain the same under a perturbation of size $O(t)$ as long as $t \ll \min_{i \in \mathcal{I}} \lambda_i^*$. Now, by the Lipschitzness of the solution map $S(w)$ and Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"}, we have $$\begin{aligned} \nabla l^*(w) = \nabla_w f(w, \theta), \qquad \forall \theta \in S(w). \end{aligned}$$ To begin with, for any unit vector $v$ and arbitrarily small $t > 0$, we consider $$\begin{aligned} \frac{\nabla l^*(w+tv) - \nabla l^*(w)}{t}, \end{aligned}$$ which approximates $\nabla^2 l^*(w) v$. Furthermore, due to Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"} and the local continuity of $\nabla^2 f$, it holds that $$\begin{aligned} \frac{\nabla l^*(w+tv) - \nabla l^*(w)}{t} &= \frac{\nabla_w f(w+tv, \theta_t^*) - \nabla_w f (w, \theta^*)}{t} \\ &= \frac{t \nabla_{ww}^2 f(w, \theta^*) v + \nabla_{w\theta}^2 f (w, \theta^*) (\theta_t^* - \theta^*)}{t} + o(1). \end{aligned}$$ If $\nabla^2 \mathcal{L}_{\mathcal{I}}^* := \nabla^2 \mathcal{L}_{\mathcal{I}} ((\lambda^*)_{\mathcal{I}}, \theta^* | w, \Theta)$ is invertible, then by the implicit function theorem, $$\begin{aligned} \begin{bmatrix} (\lambda_t^*)_{\mathcal{I}} - (\lambda^*)_{\mathcal{I}} \\ \theta_t^* - \theta^* \end{bmatrix} = (\nabla^2 \mathcal{L}_{\mathcal{I}}^*)^{-1} \begin{bmatrix} 0 \\ \nabla_{\theta w}^2 f(w,\theta^*) (tv) \end{bmatrix} + o(t). \end{aligned}$$ In general, let $\nabla^2 \mathcal{L}_{\mathcal{I}}^* = Q \Sigma Q^{\top}$ be the eigendecomposition and let $r$ be the rank of $\nabla^2 \mathcal{L}_{\mathcal{I}}^*$. Without loss of generality, assume that the first $r$ columns of $Q$ correspond to non-zero eigenvalues. Let $\mu_{\min} > 0$ be the smallest *absolute* value of a *non-zero* eigenvalue, let $U := Q_r$ be the first $r$ columns of $Q$, and let $U_{\perp}$ be the orthogonal complement of $U$, *i.e.,* $U_{\perp}$ is a basis of the kernel of $\nabla^2 \mathcal{L}_{\mathcal{I}}^*$. 
We fix $U$ henceforth. Our goal is to show that $$\begin{aligned} U^\top \begin{bmatrix} (\lambda_t^*)_{\mathcal{I}} - (\lambda^*)_{\mathcal{I}} \\ \theta_t^* - \theta^* \end{bmatrix} &= -t U^\top (\nabla^2 \mathcal{L}_{\mathcal{I}}^*)^{\dagger} \begin{bmatrix} 0 \\ \nabla_{\theta w}^2 f(w,\theta^*) \end{bmatrix} v + o(t). \label{eq:constrained_projected_solution} \end{aligned}$$ If this holds, then we can plug it into the original differentiation formula, yielding $$\begin{aligned} &\frac{\nabla l^*(w+tv) - \nabla l^*(w)}{t} = \frac{t \cdot \nabla_{ww}^2 f(w, \theta^*) v + \nabla_{w\theta}^2 f (w, \theta^*) (\theta_t^* - \theta^*)}{t} + o(1) \\ &= \frac{t \cdot \nabla_{ww}^2 f(w, \theta^*) v + \begin{bmatrix} 0 & \nabla_{w\theta}^2 f (w, \theta^*) \end{bmatrix} \begin{bmatrix} (\lambda_t^* - \lambda^*)_{\mathcal{I}} \\ \theta_t^* - \theta^* \end{bmatrix}}{t} + o(1) \\ &= \nabla_{ww}^2 f(w, \theta^*) v - \left( \begin{bmatrix} 0 & \nabla_{w\theta}^2 f (w, \theta^*) \end{bmatrix} U \right) (U^\top \nabla^2 \mathcal{L}_{\mathcal{I}}^* U)^{-1} \left( U^\top \begin{bmatrix} 0 \\ \nabla_{\theta w}^2 f(w,\theta^*) \end{bmatrix} \right)v + o(1). \end{aligned}$$ Since $\normalfont\textbf{Im}(\begin{bmatrix} 0 & \nabla_{w\theta}^2 f (w, \theta^*) \end{bmatrix}^\top ) \subseteq \textbf{span}(U)$, sending $t \rightarrow 0$, the limit is given by $$\begin{aligned} \nabla^2 l^*(w) v = \nabla_{ww}^2 f(w,\theta^*) v - \begin{bmatrix} 0 & \nabla_{w\theta}^2 f(w,\theta^*) \end{bmatrix} (\nabla^2 \mathcal{L}_\mathcal{I}^*)^{\dagger} \begin{bmatrix} 0 \\ \nabla_{\theta w}^2 f(w,\theta^*) \end{bmatrix} v. \end{aligned}$$ The above holds for any unit vector $v$, and we conclude [\[eq:hs\]](#eq:hs){reference-type="eqref" reference="eq:hs"}. Note that this holds for any $\theta^* \in S(w)$ where $\mathcal{L}(\lambda^*,\theta^* | w, \Theta)$ is locally Hessian-Lipschitz (jointly in $w$, $\lambda$ and $\theta$), concluding the proof. We are left with showing [\[eq:constrained_projected_solution\]](#eq:constrained_projected_solution){reference-type="eqref" reference="eq:constrained_projected_solution"}. For simplicity, let $y = \begin{bmatrix} \lambda_{\mathcal{I}} \\ \theta \end{bmatrix}$, and we simply denote $\mathcal{L}(w,y) := \mathcal{L}_{\mathcal{I}} (\lambda_{\mathcal{I}}, \theta|w, \Theta)$. Consider $\mathcal{L}_U(w, z) := \mathcal{L}(w, U z + y_0)$ where $y_0$ is the projection of $y^*$ onto the kernel of $\nabla^2 \mathcal{L}_{\mathcal{I}}^*$. Note that since the kernel and the image are orthogonal complements of each other, $y^* = Uz^* + y_0$ where $z^* = U^\top y^*$. We list a few properties of $\mathcal{L}_U(w,z)$: $$\begin{aligned} &\nabla_z \mathcal{L}_U(w,z) = U^\top \nabla_{y} \mathcal{L}(w, Uz + y_0), \\ & \nabla_{zz}^2 \mathcal{L}_U(w,z) = U^\top \nabla_{yy}^2 \mathcal{L}(w, Uz + y_0) U, \\ & \nabla_{wz}^2 \mathcal{L}_U(w,z) = \nabla_{wy}^2 \mathcal{L}(w, Uz + y_0) U, \end{aligned}$$ and $\nabla^2 \mathcal{L}_U$ is locally uniformly Lipschitz continuous at $(w,z^*)$ jointly in $(w,z)$. A crucial observation is that $z^*$ is a critical point of $\mathcal{L}_U(w, z)$, *i.e.,* $\nabla_z \mathcal{L}_U(w,z^*) = U^\top \nabla_y \mathcal{L}(w,y^*) = 0$, and at $(w,z^*)$, $$\begin{aligned} \nabla_{zz}^2 \mathcal{L}_U(w,z^*) = U^\top \nabla_{yy}^2 \mathcal{L}(w, Uz^* + y_0) U = U^\top \nabla_{yy}^2 \mathcal{L}(w, y^*) U, \end{aligned}$$ and $\min_{u: \|u\|=1} \|\nabla_{zz}^2 \mathcal{L}_U(w,z^*) u\| \ge \mu_{\min}$.
Tracking the movement from $z^*$ to $z_t^*$ with respect to $(tv)$ perturbations in $w$, by the implicit function theorem, we have $$\begin{aligned} z_t^* - z^* = - t (\nabla_{zz}^2 \mathcal{L}_U(w,z^*))^{-1} (\nabla_{zw}^2 \mathcal{L}_U(w,z^*)) v + o(t), \end{aligned}$$ where $z_t^*$ is the only point in an $O(t)$-neighborhood of $z^*$ that satisfies $\nabla_z \mathcal{L}_U(w+tv, z_t^*) = 0$. Note that for any $z$ in the neighborhood of $z_t^*$, $$\begin{aligned} \| \nabla_z \mathcal{L}_U(w+tv,z) - \nabla_z \mathcal{L}_U(w+tv,z_t^*) \| &= \| \nabla_{zz}^2 \mathcal{L}_U(w+tv,z_t^*) (z - z_t^*) \| + O(\|z - z_t^*\|^2) \\ &\ge (\mu_{\min} - O(t) - O(\|z^* - z_t^*\|)) \|z - z_t^*\| - O(\|z - z_t^*\|^2). \end{aligned}$$ Now let $y_0^t$ be the projection of $y_t^*$ onto the kernel of $\nabla^2 \mathcal{L}(w, y^*)$. Since $y_t^* = (\lambda_t^*, \theta_t^*) \in S(w+tv)$ is a global solution for $w + tv$ (with the active constraints unchanged, thanks to $\theta^*$ satisfying Definition [Definition 6](#definition:strict_slackness){reference-type="ref" reference="definition:strict_slackness"}), we have $$\begin{aligned} 0 &= \|\nabla_y \mathcal{L}(w+tv, y_t^*)\| \ge \frac{1}{\sqrt{2}} \|U^\top \nabla_y \mathcal{L}(w+tv, y_t^*)\| + \frac{1}{\sqrt{2}} \|U_{\perp}^\top \nabla_y \mathcal{L}(w+tv, y_t^*)\| \\ &= \frac{1}{\sqrt{2}} \underbrace{\|U^\top \nabla_{yy}^2 \mathcal{L}(w+tv, Uz_t^* + y_0) (U (z^*_t - U^\top y_t^*) + (y_0 - y_0^t) )\|}_{(i)} + \frac{1}{\sqrt{2}} \underbrace{\|U_{\perp}^\top \nabla_{y} \mathcal{L}(w+tv, y_t^*)\|}_{(ii)} + o(t), \end{aligned}$$ where we used $U^\top \nabla_{y} \mathcal{L}(w+tv, Uz_t^* + y_0) = \nabla_z \mathcal{L}_U(w+tv, z_t^*) = 0$, the continuity of $\nabla^2 \mathcal{L}$, and $\|y_t^* - y^*\| = O(t)$ in the last equality. To bound $(i)$, we observe that $$\begin{aligned} (i) &\ge \|U^\top \nabla_{yy}^2 \mathcal{L}(w+tv, Uz_t^* + y_0) U (z_t^* - U^\top y_t^*) \| - \|U^\top \nabla_{yy}^2 \mathcal{L}(w+tv, Uz_t^* + y_0) (y_0 - y_0^t) \| \\ &= \|\nabla_{zz}^2 \mathcal{L}_U(w+tv, z_t^*) (z_t^* - U^\top y^*_t) \| - \|U^\top (\nabla_{yy}^2 \mathcal{L}(w+tv, Uz_t^* + y_0) - \nabla_{yy}^2 \mathcal{L}(w, y^*)) (y_0^t - y_0) \| \\ &\ge \left((\mu_{\min} - O(t) - O(\|z_t^* - z^*\|)) \|z_t^* - U^\top y_t^*\| - o(t)\right) - o\left(t\right) \\ &= \Omega(\mu_{\min}) \|z_t^* - U^\top y_t^*\| - o(t), \end{aligned}$$ where we used $\|y_0^t - y_0\| \le \|y_t^* - y^*\| = O(t)$ and assumed $t \ll \mu_{\min}$. On the other hand, $$\begin{aligned} (ii) &= \|U_{\perp}^\top (\nabla_{y} \mathcal{L}(w+tv, y_t^*) - \nabla_{y} \mathcal{L}(w, y^*)) \| \\ &\le \|U_{\perp}^\top \left(t \nabla_{y w}^2 \mathcal{L}(w, y^*) v + \nabla_{yy}^2 \mathcal{L}(w, y^*) (y_t^* - y^*)\right) \| + o(t) = o(t), \end{aligned}$$ where the first equality follows from the optimality condition of $\theta^*$, and the last equality is due to the necessity condition (Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"}) for the Lipschitz continuity of solution maps. Therefore, we conclude that $$\begin{aligned} 0 = \|\nabla_y \mathcal{L}(w+tv, y_t^*)\| \ge \Omega(\mu_{\min}) \|z_t^* - U^\top y_t^*\| - o(t), \end{aligned}$$ which can only be true if $\|z_t^* - U^\top y_t^*\| = o(t)$.
This means $$\begin{aligned} U^\top (y^*_t - y^*) &= (U^\top y^*_t - z_t^*) + (z_t^* - z^*) \\ &= -t(\nabla^2_{zz} \mathcal{L}_U(w,z^*)^{-1} \nabla_{zw}^2 \mathcal{L}_U(w,z^*)) v + o(t), \end{aligned}$$ and thus we get $$\begin{aligned} \label{eq:projected_solution2} U^\top (y_t^* - y^*) &= - t\left( (U^\top \nabla^2_{yy} \mathcal{L}(w,y^*) U)^{-1} (U^\top \nabla_{y w}^2 \mathcal{L}(w,y^*)) \right) v + o(t). \end{aligned}$$ Note that the constraint does not depend on $w$, and thus $$\begin{aligned} \nabla_{y w}^2 \mathcal{L}(w,y^*) = \begin{bmatrix} 0 \\ \nabla^2_{\theta w} f(w,\theta^*) \end{bmatrix}. \end{aligned}$$ On the other hand, the necessity condition given in Theorem [Theorem 22](#theorem:key_set_continuity_necessary){reference-type="ref" reference="theorem:key_set_continuity_necessary"} implies $$\normalfont\textbf{Im}(\nabla^2_{y w} \mathcal{L}(w,y^*)) \subseteq \normalfont\textbf{Im}(\nabla^2_{yy} \mathcal{L}(w,y^*)) = \textbf{span}(U).$$ From the above inclusion and [\[eq:projected_solution2\]](#eq:projected_solution2){reference-type="eqref" reference="eq:projected_solution2"}, we conclude [\[eq:constrained_projected_solution\]](#eq:constrained_projected_solution){reference-type="eqref" reference="eq:constrained_projected_solution"}. $\square$

## Proof of Theorem [Theorem 9](#proposition:uniform_convergence_QG){reference-type="ref" reference="proposition:uniform_convergence_QG"} {#proof:uniform_convergence_QG}

#### *Proof.*

For simplicity, fix $y_\sigma^* \in T(x, \sigma)$ and let $z^*_p$ be a projection of $y_{\sigma}^*$ onto $S(x) := T(x,0)$. To bound $|\psi_{\sigma}(x) - \psi(x)|$, we first see that $$\begin{aligned} \psi_\sigma(x) & = & \min_{y \in \mathcal{Y}} \left( f(x,y) + g(x,y)/\sigma \right) - \min_{z \in \mathcal{Y}} g(x,z)/\sigma \\ & \le & \min_{y \in S(x)} \left( f(x,y) + g(x,y)/\sigma \right) - \min_{z \in \mathcal{Y}} g(x,z)/\sigma \ = \ \min_{z \in S(x)} f(x,z) = \psi(x). \end{aligned}$$ We next show that $g(x,y_{\sigma}^*) - g(x, z_p^*) \le \delta$. To see this, note that $$\begin{aligned} \sigma f(x, y_{\sigma}^*) + g(x,y_{\sigma}^*) \le \sigma f(x,z_p^*) + g(x,z_p^*), \end{aligned}$$ and thus $g(x,y_{\sigma}^*) - g(x,z_p^*) \le \sigma (f(x,z_p^*) - f(x,y_{\sigma}^*)) \le \sigma C_f$. As long as $\sigma \le \delta / C_f$, we have $g(x,y_{\sigma}^*) - g(x, z_p^*) \le \delta$. Then, since we have Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, we get $$\begin{aligned} \psi_{\sigma}(x) = f(x,y_\sigma^*) + \frac{g(x,y_{\sigma}^*) - g(x,z_p^*)}{\sigma} \ge f(x,y_{\sigma}^*) + \frac{\mu \|y_{\sigma}^* - z^*_p \|^2 }{2 \sigma}. \end{aligned}$$ We can further observe that $$\begin{aligned} f(x,y_\sigma^*) + \mu \frac{\|y_{\sigma}^* - z_p^*\|^2}{2\sigma} &\ge f(x,z_p^*) + \mu \frac{\|y_{\sigma}^* - z_p^*\|^2}{2\sigma} - l_{f,0} \|y_\sigma^* - z_p^*\| \\ &\ge f(x,z_p^*) - \frac{l_{f,0}^2}{2\mu} \sigma \ge \psi(x) - \frac{l_{f,0}^2}{2\mu} \sigma. \end{aligned}$$ Thus, we conclude that $$\begin{aligned} 0 \le \psi(x) - \psi_{\sigma}(x) \le \frac{l_{f,0}^2}{2\mu}\sigma. \end{aligned}$$

#### Gradient Convergence.

As long as the active constraint set does not change, we only need to consider $\lambda_{\mathcal{I}}^*$.
By $(l_{f,0}/\mu)$-Lipschitz continuity of the solution sets, for all $\sigma_1, \sigma_2 \in [0,\sigma]$, we can find $y^*(\sigma_1) \in T(x, \sigma_1), y^*(\sigma_2) \in T(x, \sigma_2)$ such that $$\begin{aligned} \|y^*(\sigma_1) - y^*(\sigma_2)\| = O(l_{f,0} / \mu) \cdot |\sigma_1 - \sigma_2|. \end{aligned}$$ On the other hand, we check that $$\begin{aligned} &\nabla\mathcal{L}(\lambda^*(\sigma_2),\nu^*(\sigma_2), y^*(\sigma_1) | x, \sigma_1) - \nabla\mathcal{L}(\lambda^*(\sigma_1),\nu^*(\sigma_1),y^*(\sigma_1) | x,\sigma_1) \\ &= \sum_{i \in \mathcal{I}} (\lambda_i^*(\sigma_2) - \lambda_i^*(\sigma_1)) \nabla g_i(y^*(\sigma_1)) + \sum_{i \in [m_2]} (\nu_i^*(\sigma_2) - \nu_i^*(\sigma_1)) \nabla h_i(y^*(\sigma_1)) \\ &= \nabla^2 \mathcal{L}(\lambda^*(\sigma_1),\nu^*(\sigma_1),y^*(\sigma_1) | x,\sigma_1) \begin{bmatrix} \lambda^*(\sigma_2) - \lambda^*(\sigma_1) \\ \nu^*(\sigma_2) - \nu^*(\sigma_1) \\ 0 \end{bmatrix}. \end{aligned}$$ At the same time, we also know that $$\begin{aligned} &\left\|\nabla\mathcal{L}(\lambda^*(\sigma_2),\nu^*(\sigma_2),y^*(\sigma_2) | x,\sigma_2) - \nabla\mathcal{L}(\lambda^*(\sigma_2),\nu^*(\sigma_2),y^*(\sigma_1) | x,\sigma_1)\right\| \\ &\le l_{f,0} |\sigma_2-\sigma_1| + O(l_{g,1}) (\|\lambda^*(\sigma_2)\| + \|\nu^*(\sigma_2)\|) \|y^*(\sigma_2) - y^*(\sigma_1)\|. \end{aligned}$$ Since the two displayed differences must sum up to $0$ (the gradient of the Lagrangian vanishes at both solutions), we have $$\begin{aligned} \left\|\nabla^2 \mathcal{L}(\lambda^*,\nu^*,y^* | x, \sigma_1) \begin{bmatrix} \lambda^*(\sigma_2) - \lambda^*(\sigma_1) \\ \nu^*(\sigma_2) - \nu^*(\sigma_1) \\ 0 \end{bmatrix}\right\| = O(l_{g,1} l_{f,0} / \mu) |\sigma_2 - \sigma_1|. \end{aligned}$$ Thus, with Assumption [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"}, we have $$\begin{aligned} \|\lambda^*_{\mathcal{I}}(\sigma_2) - \lambda_{\mathcal{I}}^*(\sigma_1)\|, \|\nu_{\mathcal{I}}^*(\sigma_2) - \nu_{\mathcal{I}}^*(\sigma_1)\| = O(l_{f,0} l_{g,1} / (\mu s_{\min})) |\sigma_2-\sigma_1|. \end{aligned}$$ Thus, we can conclude that $$\begin{aligned} \|\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2) - \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)\| &\lesssim \left(\frac{l_{f,0}l_{g,1}^2}{\mu s_{\min}} + \frac{l_{h,2} l_{f,0}}{\mu}\right) |\sigma_2 - \sigma_1|, \end{aligned}$$ where $\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma)$ is a short-hand for $\nabla^2 \mathcal{L}(\lambda_{\mathcal{I}}^*(\sigma), \nu^*(\sigma), y^*(\sigma) | x, \sigma, \mathcal{Y})$.
To check whether $\nabla\psi_{\sigma}(x)$ well-approximates $\nabla\psi(x) = \frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)|_{\sigma = 0^+}$, we first check that for any $\sigma_1, \sigma_2 \in [0,\sigma]$, $$\begin{aligned} &\left\| \frac{\partial^2}{\partial x\partial \sigma} l(x,\sigma_2) - \frac{\partial^2}{\partial x\partial \sigma} l(x,\sigma_1) \right\| \le \|\nabla_x f(x,y^*(\sigma_2)) - \nabla_x f(x, y^*(\sigma_1))\| \\ &\quad + \underbrace{\left\|\nabla_{xy}^2 h_{\sigma_2} (x,y^*(\sigma_2)) - \nabla_{xy}^2 h_{\sigma_1}(x,y^*(\sigma_1))\right\| \cdot \left\| \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)^{\dagger} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(\sigma_1) \end{bmatrix} \right\|}_{(i)} \\ &\quad + \underbrace{\left\|\begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma_2} (x,y^*(\sigma_2)) \end{bmatrix} \left(\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2))^{\dagger} - (\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)^{\dagger} \right) \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(\sigma_1) \end{bmatrix} \right\|}_{(ii)} \\ &\quad + \underbrace{\left\|\begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma_2} (x,y^*(\sigma_2)) \end{bmatrix} \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2))^{\dagger}\right\| \|\nabla_y f(x,y^*(\sigma_2)) - \nabla_y f(x,y^*(\sigma_1))\|}_{(iii)}. \end{aligned}$$ Here, we use the explicit formula of $\frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)$ given in Theorem [Theorem 3](#theorem:conjecture_pseudo_inverse){reference-type="ref" reference="theorem:conjecture_pseudo_inverse"}. To bound $(i)$, note the meaning of the latter term: $$\begin{aligned} \begin{bmatrix} d\lambda / d\sigma \\ dy / d\sigma \end{bmatrix} = \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)^{\dagger} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(\sigma_1)) \end{bmatrix}, \end{aligned}$$ where $\begin{bmatrix} d\lambda / d\sigma \\ dy / d\sigma \end{bmatrix}$ is the movement of $y^*(\sigma_1)$ to the nearest solution by perturbing $\sigma$ projected to the image of $\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)$. By Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"}, $\|dy/d \sigma\|$ must not exceed $O(l_{f,0} / \mu)$. Consequently, $$\begin{aligned} \begin{bmatrix} \nabla g_i(y^*(\sigma)), \forall i \in \mathcal{I}&| \ \nabla h_i(y^*(\sigma)), \forall i \in [m_2] \end{bmatrix} \frac{d\lambda}{d\sigma} + \nabla_{yy}^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1) \frac{dy}{d\sigma} = \nabla_y f(x,y^*(\sigma_1)), \end{aligned}$$ which enforces that $\|d\lambda/ d\sigma\| \le \frac{l_{g,1} l_{f,0}}{\mu s_{\min}}$ by Assumption [Assumption 8](#assumption:non_zero_singular_value){reference-type="ref" reference="assumption:non_zero_singular_value"}. Thus, $$\begin{aligned} (i) \lesssim \frac{l_{h,2}l_{f,0}}{\mu} \frac{l_{g,1}l_{f,0}}{\mu s_{\min}} |\sigma_1 - \sigma_2| = \frac{l_{h,2} l_{g,1}l_{f,0}^2}{\mu^2 s_{\min}} |\sigma_1 - \sigma_2|. \end{aligned}$$ Similarly, we can show that $$\begin{aligned} (iii) \lesssim \frac{l_{f,1} l_{g,1} l_{f,0}}{\mu^2 s_{\min}} |\sigma_1 - \sigma_2|. 
\end{aligned}$$ For $(ii)$, note that $$\begin{aligned} &\left\|\begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma_2} (x,y^*(\sigma_2)) \end{bmatrix} \left(\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2))^{\dagger} - (\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)^{\dagger} \right) \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(\sigma_1)) \end{bmatrix} \right\| \\ &= \left\|\begin{bmatrix} 0 & \nabla_{xy}^2 h_{\sigma_2} (x,y^*(\sigma_2)) \end{bmatrix} \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2)^{\dagger} \left(\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2) - \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1) \right) \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1)^{\dagger} \begin{bmatrix} 0 \\ \nabla_y f(x,y^*(\sigma_1) \end{bmatrix} \right\| \\ &\le \frac{l_{g,1}^2 l_{f,0}^2}{\mu^2 s_{\min}^2} \| \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2) - \nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1) \| \\ &\lesssim \frac{l_{g,1}^2 l_{f,0}^2}{\mu^2 s_{\min}^2} \left(\frac{l_{f,0}l_{g,1}^2}{\mu s_{\min}} + \frac{l_{h,2} l_{f,0}}{\mu}\right) |\sigma_2 - \sigma_1|, \end{aligned}$$ where the first equality comes from the fact that $$\begin{aligned} \normalfont\textbf{Im}\left( \begin{bmatrix} 0 \\ \nabla_{yx}^2 h_{\sigma_2} (x,y^*(\sigma_2)) \end{bmatrix} \right) \subseteq \normalfont\textbf{Im}\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_2), \end{aligned}$$ and similarly, $$\begin{aligned} \begin{bmatrix} 0 \\ \nabla_{y} f (x,y^*(\sigma_1)) \end{bmatrix} \in \normalfont\textbf{Im}\nabla^2 \mathcal{L}_{\mathcal{I}}^*(\sigma_1), \end{aligned}$$ by Proposition [Proposition 2](#proposition:key_set_continuity_necessary){reference-type="ref" reference="proposition:key_set_continuity_necessary"}. Therefore, $\frac{\partial^2}{\partial x \partial \sigma} l(x,\sigma)$ is Lipschitz-continuous in $\sigma$, and by Mean-Value Theorem, we can conclude that $$\begin{aligned} \|\nabla\psi_{\sigma}(x) - \nabla\psi(x)\| &\le O(\sigma / \mu^3 ) \cdot \left( \frac{l_{g,1}^4 l_{f,0}^3}{s_{\min}^3} + \frac{l_{h,2} l_{g,1}^2 l_{f,0}^3}{s_{\min}^2} \right), \end{aligned}$$ counting the dominating term. $\square$ ## $\epsilon$-Stationary Point and $\epsilon$-KKT Solution To simplify the argument, we assume that $\mathcal{X}= \mathbb{R}^{d_x}$. Then, define an $\epsilon$-KKT condition of [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"} as: $$\begin{aligned} \| \nabla_x f(x,y) + \lambda_x (\nabla_x g(x,y) - \nabla g^*(x)) \| & \le \epsilon, \\ \| \nabla_y f(x,y) + \lambda_x \nabla_y g(x,y) + \textstyle \sum_{i\in [m_1]} \lambda_i^* \nabla g_i(y) + \textstyle \sum_{i \in [m_2]} \nu_i^* \nabla h_i(y) \| & \le \epsilon, \\ g(x,y) - g^*(x) & \le \epsilon. \end{aligned}$$ for some Lagrangian multipliers $\lambda_x \ge 0$ and $\lambda^* \ge 0, \nu^*$ with some $y \in \mathcal{Y}$. [\[appendix:theorem_kkt_condition\]]{#appendix:theorem_kkt_condition label="appendix:theorem_kkt_condition"} **Theorem 24**. *Suppose Assumptions [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}-[Assumption 3](#assumption:regulaity_on_fg){reference-type="ref" reference="assumption:regulaity_on_fg"} hold, $\mathcal{X}= \mathbb{R}^{d_x}$. Then $\nabla\psi_{\sigma}(x)$ is well-defined with $\sigma \le \sigma_0$. 
If $x$ is an $\epsilon$-stationary point of $\psi_{\sigma}(x)$, that is, $$\begin{aligned} \left\|\nabla\psi_{\sigma}(x) \right\| \le \epsilon, \end{aligned}$$ then $x$ is an $O(\epsilon + \sigma)$-KKT solution of [\[problem:bilevel_single_level\]](#problem:bilevel_single_level){reference-type="eqref" reference="problem:bilevel_single_level"}.*

#### *Proof.*

This comes almost immediately from Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"}. Let $y$ and $z$ be minimizers: $$\begin{aligned} &y \in \arg \min_{w\in\mathcal{Y}} \sigma f(x,w) + g(x,w), \\ &z \in \arg\min_{w\in\mathcal{Y}} g(x,w). \end{aligned}$$ Then, by the optimality condition of $y$, the $\epsilon$-optimality condition with respect to $\nabla_y$ is automatically satisfied with $\lambda_x = 1/\sigma$. Furthermore, since $T(x,\sigma)$ is Lipschitz-continuous due to Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} with $L_T = l_{f,0}/\mu$, we have $\nabla g^*(x) = \nabla_x g(x,z)$ and $\nabla h^*_{\sigma}(x) = \sigma \nabla_x f(x,y) + \nabla_x g(x,y)$. Finally, by the optimality condition, we know that $y$ is the optimal solution of the proximal operation $\normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)} (y)$: $$\begin{aligned} \sigma f(x,y) + g(x,y) \le \sigma f(x,z) + g(x,z) + \frac{1}{2\rho} \|z - y\|^2. \end{aligned}$$ Using $|f(x,y) - f(x,z)| \le l_{f,0} \|y-z\|$ and $\|y-z\| \le \sigma L_T$, we have $$\begin{aligned} g(x,y) - g(x,z) = g(x,y) - g^*(x) \le \sigma l_{f,0} L_T + \frac{\sigma^2 L_T^2}{2\rho} = O(\sigma), \end{aligned}$$ as claimed. $\square$

# Analysis for Algorithm [\[algo:algo_double_loop\]](#algo:algo_double_loop){reference-type="ref" reference="algo:algo_double_loop"} {#Appendix:convergence_analysis}

For simplicity, let $w_{y, k}^* = \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)$ and $w_{z,k}^* = \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k)$.

## Descent Lemma for $w_{y,k}, w_{z,k}$

We first analyze $\|w_{y,k} - \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)\|^2$. We start by observing that $$\begin{aligned} \|w_{y,k+1} - w_{y,k+1}^*\|^2 &= {\|w_{y,k+1} - w_{y, k}^*\|^2} + {\|w_{y, k+1}^* - w_{y, k}^*\|^2} - 2\langle w_{y,k+1} - w_{y, k}^*, w_{y, k+1}^* - w_{y, k}^* \rangle \nonumber \\ &\le \left(1 + \frac{\lambda_k}{4} \right) \underbrace{\|w_{y,k+1} - w_{y,k}^* \|^2}_{(i)} + \left(1 + \frac{4}{\lambda_k} \right) \underbrace{\|w_{y, k+1}^* - w_{y, k}^*\|^2}_{(ii)}, \label{eq:w_contraction_bound}\end{aligned}$$ where we used $\langle a, b \rangle \le c \|a\|^2 + \frac{1}{4c} \|b\|^2$, and $\lambda_k = T_k \gamma_k / (4\rho)$ as defined. These two terms are bounded in the following two lemmas.

**Lemma 25**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \mathbb{E}[\|w_{y,k+1} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] &\le \left(1 - \frac{\gamma_k}{4\rho} \right)^{T_k} \mathbb{E}[\|w_{y,k} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] + 2 \left( T_k \gamma_{k}^2\right) (\sigma_k^2 \cdot \sigma_f^2 + \sigma_g^2).
\label{eq:wy_descent} \end{aligned}$$ Similarly, we also have that $$\begin{aligned} \mathbb{E}[\|w_{z,k+1} - w_{z,k}^*\|^2 | \mathcal{F}_{k}] &\le \left(1 - \frac{\gamma_k}{4\rho} \right)^{T_k} \mathbb{E}[\|w_{z,k} - w_{z,k}^*\|^2 | \mathcal{F}_{k}] + 2 (T_k \gamma_k^2) (\sigma_k^2 \cdot \sigma_f^2 + \sigma_g^2), \label{eq:wz_descent} \end{aligned}$$* #### *Proof.* We use the linear convergence of projected gradient steps. To simplify the notation, let $$\begin{aligned} \widetilde{G}_t = \nabla_y (\sigma_k f(x_k,u_t; \zeta_{wy}^{k,t}) + g(x_k, u_t; \xi_{wy}^{k,t}) ) + \rho^{-1}(u_t - y_k), \end{aligned}$$ and $G_t = \mathbb{E}[\widetilde{G}_t]$. Also let $G^* = \nabla h_{\sigma_k}(x, w_{y,k}^*) + \rho^{-1}(w_{y,k}^* - y_k)$. We first check that $$\begin{aligned} \|u_{t+1} - w_{y,k}^*\|^2 &= \left\|\Pi_{ \mathcal{Y} } \left\{ u_t - \gamma_k \widetilde{G}_t \right\} - \Pi_{ \mathcal{Y} } \left\{ w_{y,k}^* - \gamma_{k} G^* \right\} \right\|^2 \\ &\le \left\|u_t - \gamma_{k} \widetilde{G}_t - (w_{y,k}^* - \gamma_k G^*) \right\|^2 \\ &= \left\|u_t - w_{y,k}^*\right\|^2 + \gamma_{k}^2 \left\| \widetilde{G}_t - G^* \right\|^2 - 2 \gamma_{k} \langle u_t - w_{y,k}^*, \widetilde{G}_t - G^* \rangle. \end{aligned}$$ Taking expectation conditioned on $\mathcal{F}_{k,t}$ yields: $$\begin{aligned} \mathbb{E}[\|u_{t+1} - w_{y,k}^*\|^2 | \mathcal{F}_{k,t}] &\le \mathbb{E}[\|u_t - w_{y,k}^*\|^2 | \mathcal{F}_{k,t}] + \gamma_{k}^2 \mathbb{E}[\|\widetilde{G}_t - G^*\|^2 | \mathcal{F}_{k,t}] \\ &\quad - 2 \gamma_k \langle u_t - w_{y,k}^*, G_t - G^* \rangle.\end{aligned}$$ Note that $$\begin{aligned} \mathbb{E}[\|\widetilde{G}_t - G^*\|^2 | \mathcal{F}_{k}] \le 2 \|G_t - G^*\|^2 + 2\mathbb{E}[\|\widetilde{G}_t - G_t\|^2 | \mathcal{F}_{k,t}]. \end{aligned}$$ By co-coercivity of strongly convex function, since the inner minimization is $(1/(3\rho))$-strongly convex and $(1/\rho)$-smooth, we have $$\begin{aligned} \|G_t - G^*\|^2 &\le (1/\rho) \cdot \langle u_t - w_{y,k}^*, G_t - G^* \rangle, \\ \frac{1}{3\rho} \cdot \|u_t - w_{y,k}^*\|^2 &\le \langle u_t - w_{y,k}^*, G_t - G^* \rangle.\end{aligned}$$ Given $\gamma_k \ll \rho$, we have $$\begin{aligned} \mathbb{E}[\|u_{t+1} - w_{y,k}^*\|^2 | \mathcal{F}_{k,t}] &\le \left(1 - \frac{\gamma_{k}}{4\rho} \right) \mathbb{E}[\|u_t - w_{y,k}^*\|^2 | \mathcal{F}_{k,t}] + 2\gamma_{k}^2 (\sigma_k^2 \cdot \sigma_f^2 + \sigma_g^2). \end{aligned}$$ Applying this for $T_k$ steps, we get the lemma. $\square$ **Lemma 26**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \mathbb{E}[\|w_{y,k+1} - w_{y,k+1}^*\|^2 | \mathcal{F}_k] &\le \left( 1 + \frac{\lambda_k}{4} \right) \mathbb{E}[\| w_{y,k+1} - w_{y,k}^*\|^2 | \mathcal{F}_k] + O\left(\frac{\rho^2 l_{f,0}^2}{\lambda_k} \right) |\sigma_{k} - \sigma_{k+1}|^2 \nonumber \\ &\quad + O\left( \frac{\rho l_{g,1}}{\lambda_k} \right) \|x_{k+1} - x_k\|^2 + \frac{8}{\lambda_k} \|y_{k+1} - y_k\|^2. \label{eq:wy_potential_descent} \end{aligned}$$ Similarly, we have $$\begin{aligned} \mathbb{E}[\|w_{z,k+1} - w_{z,k+1}^*\|^2 | \mathcal{F}_k] &\le \left( 1 + \frac{\lambda_k}{4} \right) \mathbb{E}[\| w_{z,k+1} - w_{z,k}^*\|^2 | \mathcal{F}_k] + O\left(\frac{\rho^2 l_{f,0}^2}{\lambda_k} \right) |\sigma_{k} - \sigma_{k+1}|^2 \nonumber \\ &\quad + O\left( \frac{\rho l_{g,1}}{\lambda_k} \right) \|x_{k+1} - x_k\|^2 + \frac{8}{\lambda_k} \|z_{k+1} - z_k\|^2. 
\label{eq:wz_potential_descent} \end{aligned}$$*

#### *Proof.*

By Lemmas [Lemma 19](#lemma:prox_diff_bound){reference-type="ref" reference="lemma:prox_diff_bound"} and [Lemma 20](#lemma:prox_sigma_diff_bound){reference-type="ref" reference="lemma:prox_sigma_diff_bound"}, we have $$\begin{aligned} \|w_{y,k+1}^* - w_{y,k}^*\| &\le O(\rho l_{g,1}) \|x_{k+1} - x_k\| + \|y_{k+1} - y_k\| + O(\rho l_{f,0}) |\sigma_k - \sigma_{k+1}|.\end{aligned}$$ Taking squares and conditional expectations, and plugging this into the bounds for $(i), (ii)$ in [\[eq:w_contraction_bound\]](#eq:w_contraction_bound){reference-type="eqref" reference="eq:w_contraction_bound"}, we get the lemma. $\square$

## Descent Lemma for $\Phi_{\sigma,\rho}$ {#appendix:descent_lemma_Phi_general}

**Proposition 27**. *At every $k^{th}$ iteration, we have $$\begin{aligned} &\sigma_k \left(\Phi_{\sigma_{k+1},\rho}(x_{k+1}, y_{k+1}, z_{k+1}) - \Phi_{\sigma_k,\rho}(x_k, y_k, z_k) \right) \nonumber\\ & \leq C_1 \rho^{-1} \left\{ \|y_k - y_{k+1}\|^2 + \frac{l_{f,0}^2}{\mu^2} |\sigma_k - \sigma_{k+1}|^2 + \normalfont\mathrm{\bf dist}^2(z_{k}, T(x_{k}, 0)) + \normalfont\mathrm{\bf dist}^2(y_{k}, T(x_{k},\sigma_{k})) \right\} \nonumber\\ & \quad + \left(\frac{C_1 l_{g,1}^2}{\rho \mu^2} -\frac{1}{4\alpha_k} \right) \|x_k - x_{k+1}\|^2 + \left( \frac{C_1}{\rho} + O(\rho^{-2}) \alpha_k \right) \left(\|z_k - z_{k+1}\|^2 + \normalfont\mathrm{\bf dist}^2 (z_k, T(x_k,0)) \right) \nonumber\\ & \quad - \frac{\beta_k}{4\rho} (\|y_k - w_{y,k}^*\|^2 +\|y_k - w_{y,k+1}\|^2) - \frac{\beta_k}{\rho} (\|z_k - w_{z,k}^*\|^2 + \|z_k - w_{z,k+1}\|^2) \nonumber \\ &\quad + O\left( l_{g,1}^2 \alpha_k + \rho^{-1} \beta_k \right) \left(\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) + \alpha_k \|\widetilde{G} - G\|^2 + \sigma_{k+1} C_1 C_f \label{eq:descent_phi} \end{aligned}$$ where $C_1 = O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} \right)$.*

#### *Proof.*

To start with, note that $$\begin{aligned} \Phi_{\sigma_{k+1},\rho}(x_{k+1}, y_{k+1}, z_{k+1}) - \Phi_{\sigma_k,\rho}(x_k, y_k, z_k) &= \underbrace{\Phi_{\sigma_{k},\rho}(x_{k+1}, y_{k+1}, z_{k+1}) - \Phi_{\sigma_k,\rho}(x_k, y_k, z_k)}_{(i)} \\ &\quad + \underbrace{\Phi_{\sigma_{k+1},\rho}(x_{k+1}, y_{k+1}, z_{k+1}) - \Phi_{\sigma_{k},\rho}(x_{k+1}, y_{k+1}, z_{k+1})}_{(ii)}. \end{aligned}$$ Note that by Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"}, we have $\normalfont\mathrm{\bf dist}(T(x_1,\sigma_1),T(x_2,\sigma_2)) \le \frac{l_{g,1}}{\mu} \|x_1 - x_2\| + \frac{l_{f,0}}{\mu} |\sigma_1 - \sigma_2|$ for all $x_1, x_2 \in \mathcal{X}$, $\sigma_1,\sigma_2 \in [0,\delta/C_f]$. Applying this to [\[eq:bound_on_ii\]](#eq:bound_on_ii){reference-type="eqref" reference="eq:bound_on_ii"} in the subsequent subsection, we obtain that $$\begin{aligned} (ii) &\leq O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k\sigma_{k+1}} \right)\rho^{-1} \left\{ \|y_k - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2 + \frac{l_{g,1}^2}{\mu^2} \|x_k - x_{k+1}\|^2 + \frac{l_{f,0}^2}{\mu^2} |\sigma_k - \sigma_{k+1}|^2 \right. \\ &\quad + \normalfont\mathrm{\bf dist}^2(z_{k}, T(x_{k}, 0)) + \normalfont\mathrm{\bf dist}^2(y_{k}, T(x_{k},\sigma_{k})) \Bigg\} + O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k} \right) C_f.\end{aligned}$$ Combining this with the estimate of $(i)$ given in [\[eq:final_phi_bound\]](#eq:final_phi_bound){reference-type="eqref" reference="eq:final_phi_bound"}, we conclude.
$\square$

### Bounding $(ii)$

For $(ii)$, we observe that for any $x,y,z$ with $\sigma_{k+1} < \sigma_k$, $$\begin{aligned} \Phi_{\sigma_{k+1},\rho}(x,y,z) - \Phi_{\sigma_{k},\rho}(x,y,z) &= \frac{h^*_{\sigma_{k+1},\rho}(x,y) - g_{\rho}^*(x,z)}{\sigma_{k+1}} - \frac{h^*_{\sigma_k,\rho}(x,y) - g_{\rho}^*(x,z)}{\sigma_k} \\ &\quad + \left(\frac{C}{\sigma_{k+1}} - \frac{C}{\sigma_{k}}\right) \left(g_{\rho}^*(x,z) - g^*(x)\right) \\ &\le \underbrace{\frac{h^*_{\sigma_{k+1},\rho}(x,y) - g_{\rho}^*(x,y)}{\sigma_{k+1}} - \frac{h^*_{\sigma_k,\rho}(x,y) - g_{\rho}^*(x,y)}{\sigma_k}}_{(iii)} \\ &\quad + \underbrace{\left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_k \sigma_{k+1}} \right) \left(g_{\rho}^*(x,y) - g^*(x)\right)}_{(iv)} \\ &\quad + \underbrace{\left(\frac{C(\sigma_k - \sigma_{k+1})}{\sigma_{k+1}\sigma_k} \right) \left(g_{\rho}^*(x,z) - g^*(x)\right)}_{(v)}.\end{aligned}$$ To bound $(iii)$, for any $\sigma_1 > \sigma_2$, note that $$\begin{aligned} h^*_{\sigma_{1},\rho}(x,y) \le \sigma_1 f(x,w_2^*) + g(x,w_2^*) + \frac{\|w_2^* - y\|^2}{2\rho} = h^*_{\sigma_{2},\rho}(x,y) + (\sigma_1 - \sigma_2) f(x,w_2^*),\end{aligned}$$ where $w_2^* = \arg\min_{w \in \mathcal{Y}} \sigma_2 f(x,w) + g(x,w) + \frac{\|w - y\|^2}{2\rho}$. Thus, $$\begin{aligned} (iii) &\le \left(\frac{1}{\sigma_{k+1}} - \frac{1}{\sigma_k}\right) (h^*_{\sigma_{k+1},\rho}(x,y) - h^*_{0,\rho}(x,y)) - \frac{1}{\sigma_k} (h^*_{\sigma_{k},\rho}(x,y) - h^*_{\sigma_{k+1},\rho}(x,y)) \\ &\le 2 \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k}} \cdot \max_{w \in \mathcal{Y}} |f(x,w)| \le \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k}} \cdot O(C_f). \end{aligned}$$ In order to bound $(iv)$, note that for any $y_{\sigma}^* \in T(x,\sigma)$, $$\begin{aligned} g_{\rho}^*(x,y) - g^*(x) &= (g_{\rho}^*(x,y) - h_{\sigma}^*(x)) + (h_{\sigma}^*(x) - g^*(x)) \\ &\le (h_{\sigma, \rho}^*(x,y) - h_{\sigma, \rho}^*(x, y_{\sigma}^*)) + O(\sigma C_f) \\ &\le \langle \underbrace{\nabla_y h_{\sigma, \rho}^*(x, y_{\sigma}^*)}_{=0}, y - y_{\sigma}^* \rangle + O(\rho^{-1}) \|y - y_{\sigma}^*\|^2 + O(\sigma C_f),\end{aligned}$$ where $\nabla_y h_{\sigma,\rho}^*(x, y_{\sigma}^*) = \rho^{-1}(y_{\sigma}^* - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y_{\sigma}^*)) = 0$ since $y_{\sigma}^*$ is a fixed point of the $\normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}$ operation.
Taking $y^*_{\sigma}$ the closest element to $y$, we get $$\begin{aligned} (iv) \le \frac{\sigma_{k} - \sigma_{k+1}}{\sigma_k \sigma_{k+1}} \left(\rho^{-1} \normalfont\mathrm{\bf dist}^2(y_{k+1}, T(x_{k+1},\sigma_{k+1})) + O(\sigma_{k+1} C_f) \right).\end{aligned}$$ Similarly, we can also show that $$\begin{aligned} (v) \le \frac{C (\sigma_{k} - \sigma_{k+1})}{\sigma_k \sigma_{k+1}} \cdot \rho^{-1} \normalfont\mathrm{\bf dist}^2(z_{k+1}, T(x_{k+1}, 0)).\end{aligned}$$ Thus, we can conclude that $$\begin{aligned} (ii) &\le O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k \sigma_{k+1}} \right) \left(\sigma_{k+1} C_f + \rho^{-1} \normalfont\mathrm{\bf dist}^2(y_{k+1}, T(x_{k+1},\sigma_{k+1})) + \rho^{-1} \normalfont\mathrm{\bf dist}^2(z_{k+1}, T(x_{k+1}, 0))\right) \nonumber \\ &\le O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k\sigma_{k+1}} \right)\rho^{-1} \left( \|z_{k+1} - z_k\|^2 + \normalfont\mathrm{\bf dist}^2(z_{k}, T(x_{k}, 0)) + \normalfont\mathrm{\bf dist}^2(T(x_{k+1},0), T(x_{k}, 0)) \right) \nonumber \\ &\quad + O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k\sigma_{k+1}} \right) \rho^{-1} \left(\|y_{k+1} - y_k\|^2 + \normalfont\mathrm{\bf dist}^2(y_{k}, T(x_{k},\sigma_{k})) + \normalfont\mathrm{\bf dist}^2(T(x_{k+1},\sigma_{k+1}), T(x_{k},\sigma_{k})) \right) \nonumber \\ &\quad + O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k} \right) C_f \label{eq:bound_on_ii}.\end{aligned}$$ ### Bounding $(i)$ Henceforth, to simplify the notation, we simply denote $\sigma = \sigma_k$. $$\begin{aligned} (i) &= \frac{1}{\sigma} \underbrace{\left(h^*_{\sigma,\rho}(x_{k+1},y_{k+1}) - g_{\rho}^*(x_{k+1}, z_{k+1}) - (h^*_{\sigma,\rho}(x_{k},y_{k}) - g_{\rho}^*(x_{k}, z_k)) \right)}_{(a) = \sigma \cdot (\psi_{\sigma,\rho}(x_{k+1},y_{k+1},z_{k+1}) - \psi_{\sigma,\rho}(x_k,y_k,z_k))} \\ &\quad \quad + \frac{C}{\sigma} \underbrace{\left( (g^*_{\rho} (x_{k+1}, z_{k+1}) - g^*(x_{k+1})) - (g^*_{\rho} (x_{k}, z_{k}) - g^*(x_{k})) \right)}_{(b)}.\end{aligned}$$ #### Bounding $(a)$. 
It is easy to check using Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"} that $$\begin{aligned} \nabla_y h_{\sigma,\rho}^*(x_{k},y_{k}) &= \rho^{-1} (y_k - w_{y,k}^*), \\ \nabla_z g_{\rho}^*(x_{k},z_{k}) &= \rho^{-1} (z_k - w_{z,k}^*),\end{aligned}$$ Note that $y_{k+1} - y_k = -\beta_k (y_k - w_{y,k+1})$, and thus, $$\begin{aligned} \langle \nabla_y h_{\sigma,\rho}^* (x_k,y_k), y_{k+1} - y_k \rangle &= \frac{-\beta_k}{\rho} \langle y_k - w_{y,k}^*, y_k - w_{y,k+1} \rangle \\ &= \frac{-\beta_k }{2\rho} \left( \|y_k - w_{y,k}^* \|^2 + \|y_k - w_{y,k+1}\|^2 - \|w_{y,k}^* - w_{y,k+1}\|^2 \right).\end{aligned}$$ Similarly, $z_{k+1} - z_k = -\beta_k (z_k - w_{z,k+1})$, and thus $$\begin{aligned} \langle \nabla_z g_{\rho}^*(x_k, z_k), z_{k+1} - z_k \rangle &= \frac{-\beta_k}{\rho} \langle z_k - w_{z,k}^*, z_k - w_{z,k+1} \rangle \\ &= \frac{-\beta_k}{2\rho} \left( \|z_k - w_{z,k}^* \|^2 + \|z_k - w_{z,k+1}\|^2 - \|w_{z,k}^* - w_{z,k+1}\|^2 \right).\end{aligned}$$ Using smoothness of $h_{\sigma,\rho}^*$ and $g_{\rho}^*$, and noting that $$\begin{aligned} \nabla_x (h_{\sigma,\rho}^*(x_{k},y_{k}) - g_{\rho}^*(x_k,z_k)) &= \nabla_x \psi_{\sigma,\rho} (x_k,y_k,z_k) = \nabla_x (h_{\sigma}(x_k, w_{y,k}^*) - g (x_k, w_{z,k}^*)),\end{aligned}$$ we get (caution on the sign of $g_{\rho}^*(x_k,z_k)$ terms): $$\begin{aligned} (a) &\le \langle \sigma \cdot \nabla_x \psi_{\sigma,\rho} (x_k,y_k,z_k), x_{k+1} - x_k \rangle + \frac{O(1)}{\rho} \|x_{k+1} - x_k\|^2 \nonumber \\ &\qquad -\frac{\beta_k}{2 \rho} \left( \|y_k - w_{y,k}^*\|^2 + \frac{1}{2} \|y_k - w_{y,k+1}\|^2 \right) + \frac{\beta_k}{2\rho} \|w_{y,k}^* - w_{y,k+1}\|^2 \nonumber \\ &\qquad +\frac{\beta_k}{2 \rho} \left( \|z_k - w_{z,k}^*\|^2 + 2 \|z_k - w_{z,k+1}\|^2 \right) - \frac{\beta_k}{2 \rho} \|w_{z,k}^* - w_{z,k+1}\|^2. \label{eq:descent_phi_restart_a}\end{aligned}$$ where we assume $\beta_k \ll 1$. For terms regarding $x$, let $$\begin{aligned} \widetilde{G} := \frac{1}{M_k} \sum_{m=1}^{M_k} \nabla_x \left( {\sigma_k} f(x_k, w_{y,k+1}; \zeta_{x}^{k,m}) + g(x_k, w_{y,k+1}; \xi_{xy}^{k,m}) - g (x_k, w_{z,k+1}; \xi_{xz}^{k,m})\right),\end{aligned}$$ and $G = \mathbb{E}[\widetilde{G}]$. By projection lemma, we have $$\begin{aligned} \langle (x_{k} - \alpha_k \widetilde{G}) - x_{k+1}, x - x_{k+1} \rangle \le 0, \qquad \forall x \in \mathcal{X},\end{aligned}$$ and therefore $\langle \widetilde{G}, x_{k+1} - x \rangle \le -\frac{1}{\alpha_k} \langle x_k - x_{k+1}, x - x_{k+1} \rangle$ for all $x \in \mathcal{X}$. Plugging $x = x_k$ here, we have $$\begin{aligned} \langle G^*, x_{k+1} - x_k \rangle &\le -\frac{1}{\alpha_k} \|x_k - x_{k+1}\|^2 + \langle G^* - \widetilde{G}, x_{k+1} - x_{k} \rangle \\ &\le -\frac{1}{2\alpha_k} \|x_k - x_{k+1}\|^2 + \alpha_k \left( \|G^* - G\|^2 + \|\widetilde{G} - G\|^2 \right).\end{aligned}$$ Note that $$\begin{aligned} \|G^* - G\| \le l_{g,1} (\|w_{y,k}^* - w_{y,k+1}\| + \|w_{z,k}^* - w_{z,k+1}\|).\end{aligned}$$ In conclusion, omitting expectations on both sides, we have $$\begin{aligned} (a) &\le -\frac{1}{2\alpha_k} \|x_{k+1} - x_k\|^2 - \frac{\beta_k}{4\rho} (\|y_k - w_{y,k}^*\|^2 + \|y_k - w_{y,k+1}\|^2) \nonumber \\ &\quad + \frac{O(1)}{\rho} \|x_{k+1} - x_k\|^2 + \frac{\beta_k}{\rho} (\|z_k - w_{z,k}^*\|^2 + \|z_k - w_{z,k+1}\|^2) \nonumber \\ &\quad + \left(O(l_{g,1}^2) \alpha_k + \frac{\beta_k}{2\rho} \right) (\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 ) + \alpha_k \|\widetilde{G} - G\|^2. 
\label{eq:psisigma_decrease}\end{aligned}$$

#### Bounding (b).

We observe that in [\[eq:psisigma_decrease\]](#eq:psisigma_decrease){reference-type="eqref" reference="eq:psisigma_decrease"}, the coefficients of the proximal-error terms in $z$, *i.e.,* of $\|z_k - w_{z,k}^*\|^2$, are positive, unlike the corresponding terms in $y_k$. We show that these terms are canceled by $(b)$ when Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"} holds. Using Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"}, $$\begin{aligned} (b) &= (g_{\rho}^*(x_{k+1},z_{k+1}) - g^*(x_{k+1})) - (g_{\rho}^*(x_{k},z_{k+1}) - g^*(x_{k})) + (g_{\rho}^*(x_{k},z_{k+1}) - g_{\rho}^*(x_{k},z_k)) \nonumber \\ &\le \langle \nabla_x g_{\rho}^*(x_k,z_{k+1}) - \nabla_x g^*(x_k), x_{k+1} - x_k \rangle + O\left( \frac{l_{g,1}}{\mu} \right) \|x_{k+1} - x_k\|^2 \nonumber \\ &\quad + \langle \nabla_z g_{\rho}^* (x_k, z_k), z_{k+1} - z_k \rangle + O(\rho^{-1}) \|z_{k+1} - z_k\|^2. \label{eq:descent_phi_restart_b}\end{aligned}$$ Taking conditional expectation on both sides and using $z_{k+1} - z_k = -\beta_k (z_k - w_{z,k+1})$ and $\nabla_z g_{\rho}^*(x_k,z_k) = \rho^{-1} (z_k - w_{z,k}^*)$, $$\begin{aligned} \mathbb{E}[(b)|\mathcal{F}_k'] &\le \langle \nabla_x g_{\rho}^*(x_k,z_{k+1}) - \nabla_x g^*(x_k), x_{k+1} - x_k \rangle + O\left( \frac{l_{g,1}}{\mu} \right) \mathbb{E}[\|x_{k+1} - x_k\|^2|\mathcal{F}_k'] \\ &\quad -\beta_k \langle \nabla_z g_{\rho}^* (x_k, z_k), z_k - w_{z,k+1} \rangle + O(\beta_k^2 \rho^{-1}) \|z_{k} - w_{z,k+1}\|^2 \\ &\le \left( O(\rho^{-2}) \alpha_k C \cdot \normalfont\mathrm{\bf dist}^2 (z_{k+1}, T(x_k,0)) + \frac{\|x_k - x_{k+1}\|^2}{16 C \alpha_k} \right) + O\left( \frac{l_{g,1}}{\mu} \right) \mathbb{E}[\|x_{k+1} - x_k\|^2|\mathcal{F}_k'] \\ &\quad - \frac{\beta_k}{2\rho} \left(\|z_k - w_{z,k}^*\|^2 + \|z_k - w_{z,k+1}\|^2 - \|w_{z,k}^* - w_{z,k+1}\|^2 \right) + O \left( \frac{\beta_k^2}{\rho} \right) \|z_{k} - w_{z,k+1}\|^2.\end{aligned}$$

#### Combining (a) and (b).

We take $C \ge 4$. Given that $\alpha_k^{-1} \gg \max(\rho^{-1}, l_{g,1}/\mu)$, we can conclude that $$\begin{aligned} \sigma_k \cdot (i) &\le -\frac{1}{4\alpha_k} \|x_{k+1} - x_k\|^2 - \frac{\beta_k}{4\rho} (\|y_k - w_{y,k}^*\|^2 + \|y_k - w_{y,k+1}\|^2) - \frac{\beta_k}{\rho} (\|z_k - w_{z,k}^*\|^2 + \|z_k - w_{z,k+1}\|^2) \nonumber \\ &\quad + O\left( l_{g,1}^2 \alpha_k + \rho^{-1} \beta_k \right) \left(\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ &\quad + O(\rho^{-2}) \alpha_k \left(\normalfont\mathrm{\bf dist}^2 (z_k, T(x_k,0)) + \|z_k - z_{k+1}\|^2 \right) + \alpha_k \|\widetilde{G} - G\|^2.
\label{eq:final_phi_bound}\end{aligned}$$ ## Proof of Theorem [Theorem 12](#theorem:smoothed_convergence){reference-type="ref" reference="theorem:smoothed_convergence"} {#appendix:descent_potential_general} Note that $$\begin{aligned} \|x_{k+1} - \hat{x}_k\|^2 &= \|\Pi_{ \mathcal{X} } \left\{ x_k - \alpha_k \widetilde{G} \right\} - \Pi_{ \mathcal{X} } \left\{ x_k - \alpha_k G^* \right\}\|^2 \le \alpha_k^2 \|\widetilde{G} - G^*\|^2 \\ &\le O(l_{g,1}^2) \alpha_k^2 (\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2) + 2 \alpha_k^2 \mathbb{E}[ \|\widetilde{G} - G\|^2], \end{aligned}$$ and also note that $$\begin{aligned} \|x_{k} - x_{k+1}\|^2 &\ge \frac{1}{2} \|x_k - \hat{x}_k\|^2 - 2 \|\hat{x}_k - x_{k+1}\|^2, \\ \mathbb{E}[ \|\widetilde{G} - G\|^2] &\le \frac{1}{M_k} (\sigma_k^2 \sigma_f^2 + \sigma_g^2).\end{aligned}$$ The following lemma is also useful: **Lemma 28**. *Under Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, for all $x\in\mathcal{X}, y\in\mathcal{Y}$ and $\sigma \in [0,\delta/C_f]$, we have $$\begin{aligned} \normalfont\mathrm{\bf dist}(y, T(x,\sigma)) \le \left(\frac{1}{\mu} + \frac{D_{\mathcal{Y}}}{\delta} \right) \rho^{-1} \left\|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\right\|. \end{aligned}$$* #### *Proof.* This can be shown with a simple algebra: $$\begin{aligned} \normalfont\mathrm{\bf dist}(y, T(x,\sigma)) &\le \frac{1}{\mu} \cdot \rho^{-1} \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\| \cdot \mathds{1}\left\{ \rho^{-1} \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\| \le \delta \right\} \\ &\quad + D_{\mathcal{Y}} \cdot \mathds{1}\left\{ \rho^{-1} \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\| > \delta \right\}, \end{aligned}$$ and noting that $\mathds{1}\left\{ \rho^{-1} \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\| > \delta \right\} < \frac{1}{\delta} \left(\rho^{-1} \|y - \normalfont\textbf{prox}_{\rho h_{\sigma}(x,\cdot)}(y)\|\right)$. 
$\square$ We now combine results in Proposition [Proposition 27](#prop:descent_phi){reference-type="ref" reference="prop:descent_phi"}, Lemma [Lemma 26](#lemma:w_potential_descent){reference-type="ref" reference="lemma:w_potential_descent"} and Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"}, we have (omitting expectations): $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{1}{16\sigma_k \alpha_k} \|x_k - \hat{x}_k\|^2 - \frac{1}{8\sigma_k\alpha_k} \|x_k - x_{k+1}\|^2 -\frac{\beta_k}{4 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \\ &\quad + O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k \sigma_{k+1}} \right) \rho^{-1} (\normalfont\mathrm{\bf dist}^2(y_k, T(x_{k},\sigma_k)) + \normalfont\mathrm{\bf dist}^2(z_k, T(x_k,0)))\nonumber \\ &\quad + O\left(\frac{\alpha_k}{\rho^2 \sigma_k}\right) \normalfont\mathrm{\bf dist}^2(z_k, T(x_k,0)) + O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_k} \right) C_f \\ &\quad + O\left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_k \sigma_{k+1}} \right)\rho^{-1} \left(\|y_k - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2 + \frac{l_{g,1}^2}{\mu^2} \|x_k - x_{k+1}\|^2 + \frac{l_{f,0}^2}{\mu^2} |\sigma_k - \sigma_{k+1}|^2 \right) \nonumber \\ &\quad + \frac{O(1 + l_{g,1}/\mu + C_w \rho l_{g,1})}{\sigma_k\rho} \|x_{k+1} - x_k\|^2 + \frac{O(C_w \rho l_{f,0}^2)}{\sigma_k} |\sigma_k - \sigma_{k+1}|^2 \nonumber \\ &\quad - \frac{1}{\sigma_k \rho} \left( \frac{1}{4\beta_k} - 16C_w - \rho^{-1}\alpha_k \right) (\|y_k - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) \nonumber \\ &\quad + \frac{C_w\lambda_k}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w \lambda_k} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ &\quad - \frac{C_w \lambda_k}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) + \frac{2 \alpha_k}{\sigma_k} \|\widetilde{G} - G\|^2.\end{aligned}$$ Note that $w_{y,k}^* = \normalfont\textbf{prox}_{\rho h_{\sigma_k}(x_k, \cdot)} (y_k)$ and $w_{z,k}^* = \normalfont\textbf{prox}_{\rho g(x_k, \cdot)} (z_k)$. 
Using Lemma [Lemma 28](#lemma:dist_to_prox_error){reference-type="ref" reference="lemma:dist_to_prox_error"} and rearranging the terms in the above inequality, we obtain that $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{1}{16\sigma_k \alpha_k} \|x_k - \hat{x}_k\|^2 + \left( \frac{O(1 + l_{g,1}/\mu + C_w \rho l_{g,1})}{\sigma_k\rho} - \frac{1}{8\sigma_k\alpha_k} + \frac{d_k l_{g,1}^2}{\sigma_k \rho \mu^2} \right) \|x_k - x_{k+1}\|^2 \\ &\quad + \left(-\frac{\beta_k}{4 \sigma_k \rho} + \frac{d_k C_\delta}{\rho\sigma_k} \right) \|y_k - w_{y,k}^*\|^2 + \left( \frac{l_{f,0}^2}{\mu^2} + \frac{O(C_w \rho l_{f,0}^2)}{\sigma_k} \right) |\sigma_k - \sigma_{k+1}|^2 \nonumber \\ &\quad + \left(-\frac{\beta_k}{4 \sigma_k \rho} + \frac{d_k C_\delta}{\rho\sigma_k} + C_\delta O\left(\frac{\alpha_k}{\rho^2 \sigma_k}\right) \right) \|z_k - w_{z,k}^*\|^2 + d_k C_f \\ &\quad + \left( \frac{d_k}{\sigma_k \rho} - \frac{1}{\sigma_k \rho} \left( \frac{1}{4\beta_k} - 16C_w - \rho^{-1}\alpha_k \right) \right) \left(\|y_k - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2 \right) \nonumber \\ &\quad + \frac{C_w\lambda_k}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w \lambda_k} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ &\quad - \frac{C_w \lambda_k}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) + \frac{2 \alpha_k}{\sigma_k} \|\widetilde{G} - G\|^2,\end{aligned}$$ where $d_k = O\left( \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} \right)$, and $C_\delta = \left(\frac{1}{\mu} + \frac{D_{\mathcal{Y}}}{\delta} \right)^2 \rho^{-2}$. We state several step-size conditions that keep the target quantities bounded via a telescoping sum; a numerical sketch of these conditions for generic decaying schedules follows the list.

1. To keep the $\|x_k - x_{k+1}\|^2$ term negative, we need $\alpha_k \ll \rho (1+l_{g,1}/\mu + C_w \rho l_{g,1})^{-1}$.

2. To keep the $\|y_k - w_{y,k}^*\|^2$ term negative, along with Lemma [Lemma 28](#lemma:dist_to_prox_error){reference-type="ref" reference="lemma:dist_to_prox_error"}, we require $$\begin{aligned} \beta_k \gg \left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}\right) \rho^{-2} \left( \mu^{-2} + D_\mathcal{Y}^2 / \delta^2 \right). \end{aligned}$$

3. To keep the $\|z_k - w_{z,k}^*\|^2$ term negative, we additionally require $$\begin{aligned} \beta_k \gg \rho^{-3} \left( \mu^{-2} + D_\mathcal{Y}^2 / \delta^2 \right) \alpha_k. \end{aligned}$$

4. To keep the terms on $\|y_k - y_{k+1}\|^2$ and $\|z_k - z_{k+1}\|^2$ negative, we first require $$\begin{aligned} \beta_k \ll C_w^{-1}, \ \rho \alpha_k^{-1}, \end{aligned}$$ and then $$\begin{aligned} \frac{1}{\beta_k} \gg \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}, \end{aligned}$$ which trivially holds as $\beta_k = o(1)$ and $(\sigma_k - \sigma_{k+1})/\sigma_k = O(1/k)$.
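The following is the numerical sketch of the step-size conditions referenced above. All constants ($\mu$, $\rho$, $\delta$, $D_{\mathcal{Y}}$, $C_w$, $l_{g,1}$) and the decay exponents of the schedules are placeholder assumptions for this sketch, not values prescribed by the analysis; each requirement is reported as a ratio that should tend to zero as $k$ grows (conditions stated with "$\gg$" are inverted before forming the ratio).

```python
# A hedged numerical sketch of the four step-size conditions above, with
# placeholder constants and generic polynomially decaying schedules.
mu, rho, delta, D_Y, C_w, l_g1 = 1.0, 1.0, 1.0, 1.0, 4.0, 1.0   # placeholder constants
Q = mu**-2 + (D_Y / delta) ** 2                                  # recurring factor

sigma = lambda k: float(k) ** -0.5      # sigma_k = k^{-s} with s = 1/2 (assumed)
alpha = lambda k: float(k) ** -0.75     # assumed outer step size
beta  = lambda k: float(k) ** -0.25     # assumed averaging step size

for k in (10, 10**2, 10**3, 10**4, 10**5):
    d_k = (sigma(k) - sigma(k + 1)) / sigma(k + 1)               # = O(1/k)
    r1 = alpha(k) * (1 + l_g1 / mu + C_w * rho * l_g1) / rho     # condition 1
    r2 = (d_k * Q / rho**2) / beta(k)                            # condition 2 (inverted)
    r3 = (Q / rho**3) * alpha(k) / beta(k)                       # condition 3 (inverted)
    r4 = beta(k) * max(C_w, alpha(k) / rho)                      # condition 4
    print(f"k={k:>7d}  r1={r1:.1e}  r2={r2:.1e}  r3={r3:.1e}  r4={r4:.1e}")
```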
Once the above are satisfied, we get $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \nonumber \\ & - \frac{1}{16\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) \nonumber \\ & + O\left( \frac{1 + l_{g,1}/\mu + C_w\rho l_{g,1}}{\sigma_k \rho} \right) \frac{(\alpha_k^2 + \alpha_k \rho)}{M_k} \cdot (\sigma_k^2 \sigma_f^2 + \sigma_g^2) + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) \nonumber \\ & + \frac{C_w \lambda_k}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w \lambda_k} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ &- \frac{C_w \lambda_k}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) + o(1/k), \label{eq:potential_decrease_double_loop} \end{aligned}$$ where $o(1/k)$-term collectively represents the terms asymptotically smaller than $\frac{\sigma_k - \sigma_{k+1}}{\sigma_k} = O(1/k)$ since we use polynomially decaying penalty parameters $\{\sigma_k\}$. Now we can apply Lemma [Lemma 25](#lemma:w_descent_lemma){reference-type="ref" reference="lemma:w_descent_lemma"}, and plug $\lambda_k = \frac{T_k \gamma_k}{4 \rho}$, and using the step-size condition: $$\begin{aligned} \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} \ll \lambda_k, \ \max \left( \rho l_{g,1}^2 \alpha_k, \beta_k \right) \ll C_w \lambda_k^2, \end{aligned}$$ we get $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{\alpha_k}{16 \sigma_k} \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \nonumber \\ &\quad + O\left( \frac{1 + l_{g,1}/\mu + C_w\rho l_{g,1}}{\rho} \right) \frac{\alpha_k}{\sigma_k M_k} (\sigma_k^2 \sigma_f^2 + \sigma_g^2) + O\left( \frac{C_w}{\rho^2} \right) \frac{T_k^2\gamma_k^3}{\sigma_k} (\sigma_k^2 \sigma_f^2 + \sigma_g^2). \label{eq:potential_diff}\end{aligned}$$ Arranging terms and sum over $k = 0$ to $K-1$, we have $$\begin{aligned} & \mathbb{E}\left[ \sum_{k=0}^{K-1} \frac{\alpha_k}{16\sigma_k} \|\Delta_k^x\|^2 + \frac{\rho\beta_k}{16\sigma_k} (\|\Delta_k^y\|^2 + \|\Delta_k^z\|^2) \right] \\ &\le (\mathbb{V}_0 - \mathbb{V}_K) + O(C_f) \cdot \sum_{k=0}^{K-1} \left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}\right) \\ &+ \frac{O(l_{g,1} / \mu + C_w)}{\rho} \left(\sum_{k=0}^{K-1} \sigma_k^{-1} \left( \frac{\alpha_k}{M_k} + \rho^{-1} T_k^2 \gamma_k^3 \right) (\sigma_k^2 \sigma_f^2 + \sigma_g^2) \right). \end{aligned}$$ ## Proof of Corollary [Corollary 13](#corollary:final_convergence_result){reference-type="ref" reference="corollary:final_convergence_result"} {#appendix:corollary_final_convergence} The remaining part is to show that $\mathbb{V}_K$ is lower-bounded by $O(1)$, and to check the rates. To see this, recall our definition in [\[eq:potential_definition\]](#eq:potential_definition){reference-type="eqref" reference="eq:potential_definition"}, and note that $$\begin{aligned} \mathbb{V}_K \ge \frac{h_{\sigma,\rho}^*(x,y) - g^*(x)}{\sigma}, \end{aligned}$$ as long as $C \ge 1$. 
Then, $$\begin{aligned} h_{\sigma,\rho}^*(x,y) \ge \sigma f(x, w_{y,k}^*) + g(x, w_{y,k}^*) \ge \sigma f(x,w^*_{y,k}) + g^*(x), \end{aligned}$$ and therefore $\mathbb{V}_K \ge f(x,w_{y,k}^*) > -C_f$ by Assumption [Assumption 3](#assumption:regulaity_on_fg){reference-type="ref" reference="assumption:regulaity_on_fg"} on the lower-boundedness of $f$. Now, since $\sigma_k = k^{-s}$ for some $s > 0$, we know that $$\begin{aligned} \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} = O(1/k), \end{aligned}$$ and thus, $$\begin{aligned} & \mathbb{E}\left[ \sum_{k=0}^{K-1} \frac{\alpha_k}{16\sigma_k} \|\Delta_k^x\|^2 + \frac{\rho\beta_k}{16\sigma_k} (\|\Delta_k^y\|^2 + \|\Delta_k^z\|^2) \right] \\ &\le O(\log K) + O \left(\sum_{k=0}^{K-1} \sigma_k^{-1} \left( \frac{\alpha_k}{M_k} + \rho^{-1} T_k^2 \gamma_k^3 \right) (\sigma_k^2 \sigma_f^2 + \sigma_g^2) \right). \end{aligned}$$ Plugging in the step-size rates, the right-hand side is bounded by $O(\log K)$, and thus we get the corollary.

# Analysis for Algorithm [\[algo:algo_single_loop\]](#algo:algo_single_loop){reference-type="ref" reference="algo:algo_single_loop"}

In addition to the descent lemmas for $w_{y,k}$ and $w_{z,k}$, we also need descent lemmas for the noise variances of the momentum-assisted gradient estimators. We define the outer-variable gradient estimators as follows: $$\begin{aligned} \widetilde{f}_{x}^{k} &:= \nabla_x f(x_k, w_{y,k+1}; \zeta_{x}^{k}) + (1 - \eta_{k}) \left( \widetilde{f}_{x}^{k-1} - \nabla_x f(x_{k-1}, w_{y,k}; \zeta_{x}^{k}) \right), \nonumber \\ \widetilde{g}_{xy}^{k} &:= \nabla_x g(x_k, w_{y,k+1}; \xi_{xy}^{k}) + (1 - \eta_{k}) \left( \widetilde{g}_{xy}^{k-1} - \nabla_x g(x_{k-1}, w_{y,k}; \xi_{xy}^{k}) \right), \nonumber \\ \widetilde{g}_{xz}^{k} &:= \nabla_x g (x_k, w_{z,k+1}; \xi_{xz}^{k}) + (1 - \eta_{k}) \left( \widetilde{g}_{xz}^{k-1} - \nabla_x g (x_{k-1}, w_{z,k}; \xi_{xz}^{k}) \right).\end{aligned}$$

## Descent Lemma for Noise-Variances

We first show that the noise variances of $e_{wy}^k$ and $e_{wz}^k$ decay.

**Lemma 29**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \mathbb{E}[\|e_{wz}^{k+1}\|^2] &\le (1 - \eta_{k+1})^2 \mathbb{E}[\|e_{wz}^k\|^2] + 2\eta_{k+1}^2 \sigma_g^2 \\ &\quad + O(l_{g,1}^2) \left(\|x_{k+1} - x_k\|^2 + \|w_{z,k+1} - w_{z,k}\|^2\right), \end{aligned}$$ and similarly, $$\begin{aligned} \mathbb{E}[\|e_{wy}^{k+1}\|^2] &\le (1 - \eta_{k+1})^2 \mathbb{E}[\|e_{wy}^k\|^2] + 4 \eta_{k+1}^2 (\sigma_k^2 \sigma_f^2 + \sigma_g^2) + 2 (\sigma_k - \sigma_{k+1})^2 \sigma_f^2 \\ &\quad + O(l_{g,1}^2) \left(\|x_{k+1} - x_k\|^2 + \|w_{y,k+1} - w_{y,k}\|^2 \right).
\end{aligned}$$*

#### *Proof.*

We start with $$\begin{aligned} \mathbb{E}\left[\|e_{wz}^{k+1}\|^2 \right] &= \mathbb{E}\left[\|\widetilde{g}_{wz}^{k+1} - G_{wz}^{k+1} \|^2 \right] \\ &= \mathbb{E}\left[ \| (\nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - G_{wz}^{k+1}) + (1-\eta_{k+1}) (\widetilde{g}_{wz}^k - \nabla_y g(x_{k}, w_{z,k}; \xi_{wz}^{k+1})) \|^2 \right] \\ &= (1-\eta_{k+1})^2 \mathbb{E}[ \| e_{wz}^k \|^2] \\ &\quad + \mathbb{E}\left[ \| (\nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - G_{wz}^{k+1}) + (1-\eta_{k+1}) (G_{wz}^k - \nabla_y g(x_{k}, w_{z,k}; \xi_{wz}^{k+1})) \|^2 \right], \end{aligned}$$ where the last equality holds since the gradient oracles are unbiased: $$\begin{aligned} \mathbb{E}[\langle e_{wz}^k, \nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - G_{wz}^{k+1} \rangle |\mathcal{F}_{k+1}] = 0, \\ \mathbb{E}[\langle e_{wz}^k, \nabla_y g(x_{k}, w_{z,k}; \xi_{wz}^{k+1}) - G_{wz}^{k} \rangle |\mathcal{F}_{k+1}] = 0. \end{aligned}$$ The remaining part is to bound $$\begin{aligned} & \mathbb{E}\left[ \| (\nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - G_{wz}^{k+1}) + (1-\eta_{k+1}) (G_{wz}^k - \nabla_y g(x_{k}, w_{z,k}; \xi_{wz}^{k+1})) \|^2 \right] \\ &\le 2\eta_{k+1}^2 \mathbb{E}[\| \nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - G_{wz}^{k+1} \|^2] \\ &\quad + 2(1-\eta_{k+1})^2 \mathbb{E}\left[ \| (\nabla_y g(x_{k+1}, w_{z,k+1}; \xi_{wz}^{k+1}) - \nabla_y g(x_{k}, w_{z,k}; \xi_{wz}^{k+1})) + (G_{wz}^k - G_{wz}^{k+1}) \|^2 \right] \\ &\le 2\eta_{k+1}^2 \sigma_g^2 + O(l_{g,1}^2) \left( \|x_{k+1} - x_k\|^2 + \|w_{z,k+1} - w_{z,k}\|^2 \right). \end{aligned}$$ Similarly, we can show the analogous result for $e_{wy}^k$. Let $\bar{G}_{wy}^k = \sigma_{k+1} \nabla_y f(x_k, w_{y,k}) + \nabla_y g(x_k, w_{y,k})$, and we have that $$\begin{aligned} &\mathbb{E}\left[\|e_{wy}^{k+1}\|^2 \right] = \mathbb{E}\left[\|\sigma_{k+1} \widetilde{f}_{wy}^{k+1} + \widetilde{g}_{wy}^{k+1} - G_{wy}^{k+1} \|^2 \right] \\ &\le (1-\eta_{k+1})^2 \mathbb{E}[ \| e_{wy}^k \|^2] + 2 (\sigma_k - \sigma_{k+1})^2 \mathbb{E}[ \| \nabla_y f(x_k, w_{y,k}; \xi_{wy}^{k+1}) - \nabla_y f(x_k, w_{y,k}) \|^2] \\ &\quad + 2 \mathbb{E}\left[ \| (\nabla_y h_{\sigma_{k+1}} (x_{k+1}, w_{y,k+1}; \xi_{wy}^{k+1}) - \nabla_y h_{\sigma_{k+1}} (x_{k}, w_{y,k}; \xi_{wy}^{k+1})) + (1-\eta_{k+1}) (G_{wy}^{k+1} - \bar{G}_{wy}^k) \|^2 \right] \\ &\le (1-\eta_{k+1})^2 \mathbb{E}[ \| e_{wy}^k \|^2] + 2 (\sigma_k - \sigma_{k+1})^2 \sigma_f^2 + 4\eta_{k+1}^2 (\sigma_k^2 \sigma_f^2 + \sigma_g^2) \\ &\quad + O(l_{g,1}^2) \left( \|x_{k+1} - x_k\|^2 + \|w_{y,k+1} - w_{y,k}\|^2\right). \end{aligned}$$ $\square$

We can state a similar descent lemma for $e_{x}^k$:

**Lemma 30**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \mathbb{E}[\|e_{x}^{k+1}\|^2] &\le (1 - \eta_{k+1})^2 \mathbb{E}[\|e_{x}^k\|^2] + O(\eta_{k+1}^2) (\sigma_k^2 \sigma_f^2 + \sigma_g^2) + O\left((\sigma_{k+1} - \sigma_k)^2\right) \sigma_f^2 + O(l_{g,1}^2) \|x_{k+1} - x_k\|^2 \\ &\quad + O(l_{g,1}^2) \left( \|w_{y,k+2} - w_{y,k+1}\|^2 + \|w_{z,k+2} - w_{z,k+1}\|^2 \right). \end{aligned}$$*

The proof follows exactly the same procedure as for bounding $e_{wy}^{k+1}$, and thus we omit it.
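To illustrate why the recursions in Lemmas 29 and 30 contract, the following toy sketch tracks a momentum-assisted estimator of the same recursive form along a slowly drifting iterate: because the same fresh sample is reused in both gradient evaluations of the correction term, the estimation error satisfies a contraction of the type appearing in the lemmas. The problem instance, constants, and variable names below are illustrative assumptions, not part of the algorithm's specification.

```python
# Toy illustration of the momentum-assisted (recursive) estimator:
# estimate the gradient of E_xi[0.5 * (x - xi)^2] = x (with E[xi] = 0)
# along a slowly drifting iterate.
import numpy as np

rng = np.random.default_rng(0)
n_steps, eta, drift, noise = 2000, 0.05, 1e-3, 1.0   # illustrative constants

def stoch_grad(x, xi):
    # unbiased oracle: gradient of 0.5 * (x - xi)^2 with respect to x
    return x - xi

x_prev = 0.0
g_tilde = stoch_grad(x_prev, rng.normal(0.0, noise))     # initialize with one sample
sq_err_momentum, sq_err_plain = [], []

for k in range(n_steps):
    x = x_prev + drift                                   # iterate moves slowly, as in the analysis
    xi = rng.normal(0.0, noise)                          # one fresh sample, reused in both terms
    # recursive update: g^k = grad(x_k; xi^k) + (1 - eta) * (g^{k-1} - grad(x_{k-1}; xi^k))
    g_tilde = stoch_grad(x, xi) + (1 - eta) * (g_tilde - stoch_grad(x_prev, xi))
    sq_err_momentum.append((g_tilde - x) ** 2)           # true gradient at x is exactly x
    sq_err_plain.append((stoch_grad(x, rng.normal(0.0, noise)) - x) ** 2)
    x_prev = x

print("MSE, momentum estimator:", np.mean(sq_err_momentum[500:]))
print("MSE, plain stochastic gradient:", np.mean(sq_err_plain[500:]))
```

In this linear toy case the correction term cancels exactly, so the steady-state error variance is driven only by the $O(\eta^2)$ noise term; in general the cancellation is inexact and contributes the $O(l_{g,1}^2)$ movement terms of the lemmas.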
## Descent Lemma for $w_{y,k}$, $w_{z,k}$ The strategy is again to start with [\[eq:w_contraction_bound\]](#eq:w_contraction_bound){reference-type="eqref" reference="eq:w_contraction_bound"}: $$\begin{aligned} \|w_{y,k+1} - w_{y,k+1}^*\|^2 &= {\|w_{y,k+1} - w_{y, k}^*\|^2} + {\|w_{y, k+1}^* - w_{y, k}^*\|^2} - 2\langle w_{y,k+1} - w_{y, k}^*, w_{y, k+1}^* - w_{y, k}^* \rangle \nonumber \\ &\le \left(1 + \frac{\lambda_k}{4} \right) \underbrace{\|w_{y,k+1} - w_{y,k}^* \|^2}_{(i)} + \left(1 + \frac{4}{\lambda_k} \right) \underbrace{\|w_{y, k+1}^* - w_{y, k}^*\|^2}_{(ii)}, \end{aligned}$$ For bounding $(ii)$, we can recall Lemma [Lemma 26](#lemma:w_potential_descent){reference-type="ref" reference="lemma:w_potential_descent"}. For bounding $(i)$, we can slightly modify Lemma [Lemma 25](#lemma:w_descent_lemma){reference-type="ref" reference="lemma:w_descent_lemma"}. **Lemma 31**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \mathbb{E}[\|w_{y,k+1} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] &\le \left(1 - \frac{\gamma_k}{4\rho} \right) \mathbb{E}[\|w_{y,k} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] + O(\gamma_k\rho) \mathbb{E}[\|e_{wy}^k\|^2|\mathcal{F}_k]. \label{eq:wy_descent_momentum} \end{aligned}$$ Similarly, we also have that $$\begin{aligned} \mathbb{E}[\|w_{z,k+1} - w_{z,k}^*\|^2 | \mathcal{F}_{k}] &\le \left(1 - \frac{\gamma_k}{4\rho} \right) \mathbb{E}[\|w_{z,k} - w_{z,k}^*\|^2 | \mathcal{F}_{k}] + O(\gamma_k\rho) \mathbb{E}[\|e_{wz}^k\|^2|\mathcal{F}_k], \label{eq:wz_descent_momentum} \end{aligned}$$* #### *Proof.* We use the linear convergence of projected gradient steps. To simplify the notation, let $\widetilde{G} = \sigma_k \widetilde{f}_{wy}^k + \widetilde{g}_{wy}^k + \rho^{-1}(w_{y,k} - y_k)$ and $G = \nabla_y h_{\sigma_k}(x_k,w_{y,k}) + \rho^{-1}(w_{y,k} - y_k)$. Also let $G^* = \nabla h_{\sigma_k}(x, w_{y,k}^*) + \rho^{-1}(w_{y,k}^* - y_k)$. We first check that $$\begin{aligned} \|w_{y,k+1} - w_{y,k}^*\|^2 &= \left\|\Pi_{ \mathcal{Y} } \left\{ w_{y,k} - \gamma_k \widetilde{G} \right\} - \Pi_{ \mathcal{Y} } \left\{ w_{y,k}^* - \gamma_{k} G^* \right\} \right\|^2 \\ &\le \left\|w_{y,k} - \gamma_{k} \widetilde{G} - (w_{y,k}^* - \gamma_k G^*) \right\|^2 \\ &= \left\|w_{y,k} - w_{y,k}^*\right\|^2 + \gamma_{k}^2 \left\| \widetilde{G} - G^* \right\|^2 - 2 \gamma_{k} \langle w_{y,k} - w_{y,k}^*, \widetilde{G} - G^* \rangle. 
\end{aligned}$$ Taking expectation conditioned on $\mathcal{F}_{k}$ yields: $$\begin{aligned} \mathbb{E}[\|w_{y,k+1} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] &\le \mathbb{E}[\|w_{y,k} - w_{y,k}^*\|^2 | \mathcal{F}_{k}] + \gamma_{k}^2 \mathbb{E}[\|\widetilde{G} - G^*\|^2 | \mathcal{F}_{k}] \\ &\quad - 2 \gamma_k \langle w_{y,k} - w_{y,k}^*, G - G^* \rangle - 2 \gamma_k \mathbb{E}[\langle w_{y,k} - w_{y,k}^*, G - \widetilde{G} \rangle | \mathcal{F}_k].\end{aligned}$$ Note that we have $$\begin{aligned} &\mathbb{E}[\|\widetilde{G} - G^*\|^2 | \mathcal{F}_{k}] \le 2 \|G - G^*\|^2 + 2\mathbb{E}[\|\widetilde{G} - G\|^2 | \mathcal{F}_{k}], \\ &\gamma_k \mathbb{E}[| \langle w_{y,k} - w_{y,k}^*, G - \widetilde{G} \rangle | | \mathcal{F}_{k}] \le \frac{\gamma_k}{8\rho} \|w_{y,k} - w_{y,k}^*\|^2 + (2\gamma_k\rho) \mathbb{E}[\|\widetilde{G} - G\|^2 | \mathcal{F}_k],\end{aligned}$$ Now again using the co-coercivity of strongly convex function, since the inner minimization is $(1/(3\rho))$-strongly convex and $(1/\rho)$-smooth, we have $$\begin{aligned} \|G - G^*\|^2 &\le (1/\rho) \cdot \langle w_{y,k} - w_{y,k}^*, G - G^* \rangle, \\ \frac{1}{3\rho} \cdot \|w_{y,k} - w_{y,k}^*\|^2 &\le \langle w_{y,k} - w_{y,k}^*, G - G^* \rangle.\end{aligned}$$ Given $\gamma_k \ll \rho$, and noting that $\widetilde{G} - G = e_{wy}^k$, we have $$\begin{aligned} \mathbb{E}[\|w_{y,k+1} - w_{y,k}^*\|^2] &\le \left(1 - \frac{\gamma_{k}}{4\rho} \right) \mathbb{E}[\|w_{y,k} - w_{y,k}^*\|^2] + O(\gamma_{k}\rho)\mathbb{E}[\|e_{wy}^k\|^2]. \end{aligned}$$ Similar arguments can show the bound on $\|w_{z,k+1} - w_{z,k}^*\|$. $\square$ In addition to the contraction of $w_{y,k}$ toward the proximal operators, we will also use the following on the bounds on expected movements: **Lemma 32**. *At every $k^{th}$ iteration, the following holds: $$\begin{aligned} \frac{1}{2 \gamma_k } \mathbb{E}[\|w_{y,k+1} - w_{y,k}\|^2] \le \mathbb{E}[h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k} (x_k, y_k, w_{y,k+1}) + 4\gamma_k \|e_{wy}^k\|^2]. \label{eq:wy_movement_bound_eq} \end{aligned}$$ Similarly for $w_{z,k}$, we have $$\begin{aligned} \frac{1}{2 \gamma_k } \mathbb{E}[\|w_{z,k+1} - w_{z,k}\|^2] \le \mathbb{E}[g (x_k, z_k, w_{z,k}) - g (x_k, z_k, w_{z,k+1}) + 4\gamma_k \|e_{wz}^k\|^2]. \label{eq:wz_movement_bound_eq} \end{aligned}$$* #### *Proof.* By $2\rho^{-1}$-smoothness of $h_{\sigma_k}(x,y,w)$, we have $$\begin{aligned} h_{\sigma_k} (x_k, y_k, w_{y,k+1}) &\le h_{\sigma_k} (x_k, y_k, w_{y,k}) + \langle \nabla_w h_{\sigma_k}(x_k, y_k, w_{y,k}), w_{y,k+1} - w_{y,k} \rangle + \frac{1}{\rho} \|w_{y,k+1} - w_{y,k}\|^2. \end{aligned}$$ Let $\bar{w}_k = w_{y,k} - \gamma_k (\nabla_w h_{\sigma_k}(x_k,y_k,w_{y,k}) + e_{wy}^k)$, and thus $\nabla_w h_{\sigma_k}(x_k, y_k, w_{y,k}) = \frac{1}{\gamma_k} (w_{y,k} - \bar{w}_k) - e_{wy}^k$. Plugging this back, we have $$\begin{aligned} h_{\sigma_k} (x_k, y_k, w_{y,k+1}) &\le h_{\sigma_k} (x_k, y_k, w_{y,k}) + \gamma_k^{-1} \langle w_{y,k} - \bar{w}_k, w_{y,k+1} - w_{y,k} \rangle \\ &\quad - \langle e_{wy}^k, w_{y,k+1} - w_{y,k} \rangle + \frac{1}{\rho} \|w_{y,k+1} - w_{y,k}\|^2 \\ &\le h_{\sigma_k} (x_k, y_k, w_{y,k}) - \gamma_k^{-1} \|w_{y,k} - w_{y,k+1}\|^2 + \gamma_k^{-1} \langle w_{y,k+1} - \bar{w}_k, w_{y,k+1} - w_{y,k} \rangle \\ &\quad + 4\gamma_k \|e_{wy}^k\|^2 + \frac{1}{16\gamma_k} \|w_{y,k+1} - w_{y,k}\|^2 + \frac{1}{\rho} \|w_{y,k+1} - w_{y,k}\|^2. 
\end{aligned}$$ By projection lemma, $\langle w_{y,k+1} - \bar{w}_k, w_{y,k+1} - w_{y,k} \rangle \le 0$, and since $\gamma_k \ll \rho$, we have $$\begin{aligned} h_{\sigma_k} (x_k, y_k, w_{y,k+1}) &\le h_{\sigma_k} (x_k, y_k, w_{y,k}) - \frac{1}{2\gamma_k} \|w_{y,k+1} - w_{y,k}\|^2 + 4\gamma_k \|e_{wy}^k\|^2. \end{aligned}$$ Arranging this, we get [\[eq:wy_movement_bound_eq\]](#eq:wy_movement_bound_eq){reference-type="eqref" reference="eq:wy_movement_bound_eq"}. [\[eq:wz_movement_bound_eq\]](#eq:wz_movement_bound_eq){reference-type="eqref" reference="eq:wz_movement_bound_eq"} can be obtained similarly, and hence we omit the details. $\square$ ## Descent Lemma for $\Phi_{\sigma,\rho}$ {#descent-lemma-for-phi_sigmarho} For simplicity, let $G = G_x^k = \nabla_x (\sigma_{k} f(x_k, w_{y,k+1}) + g(x_k, w_{y,k+1}) - g(x_k, w_{z,k+1}))$ and $\widetilde{G} = e_x^k + G_x^k$. This part follows exactly the same as Appendix [9.2](#appendix:descent_lemma_Phi_general){reference-type="ref" reference="appendix:descent_lemma_Phi_general"}, yielding the similar result to [\[eq:final_phi_bound\]](#eq:final_phi_bound){reference-type="eqref" reference="eq:final_phi_bound"}: $$\begin{aligned} \sigma_k \cdot (i) &\le -\frac{1}{4\alpha_k} \|x_{k+1} - x_k\|^2 - \frac{\beta_k}{4\rho} (\|y_k - w_{y,k}^*\|^2 + \|y_k - w_{y,k+1}\|^2) - \frac{\beta_k}{\rho} (\|z_k - w_{z,k}^*\|^2 + \|z_k - w_{z,k+1}\|^2) \nonumber \\ &\quad + O\left( l_{g,1}^2 \alpha_k + \rho^{-1} \beta_k \right) \left(\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ &\quad + O(\rho^{-2}) \alpha_k \left(\normalfont\mathrm{\bf dist}^2 (z_k, T(x_k,0)) + \|z_k - z_{k+1}\|^2 \right) + \alpha_k \|\widetilde{G} - G\|^2.\end{aligned}$$ ## Descent in Potentials Note again that $$\begin{aligned} \mathbb{E}[\|x_{k+1} - \hat{x}_k\|^2] &= \mathbb{E}\left[\|\Pi_{ \mathcal{X} } \left\{ x_k - \alpha_k \widetilde{G} \right\} - \Pi_{ \mathcal{X} } \left\{ x_k - \alpha_k G^* \right\}\|^2\right] \le \alpha_k^2 \|\widetilde{G} - G^*\|^2 \\ &\le O(l_{g,1}^2) \alpha_k^2 (\|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2) + 2 \alpha_k^2 \mathbb{E}[ \|\widetilde{G} - G\|^2], \end{aligned}$$ and also note that $$\begin{aligned} \|x_{k} - x_{k+1}\|^2 &\ge \frac{1}{2} \|x_k - \hat{x}_k\|^2 - 2 \|\hat{x}_k - x_{k+1}\|^2, \\ \mathbb{E}[ \|\widetilde{G} - G\|^2] &= \mathbb{E}[\|e_x^k\|^2].\end{aligned}$$ Similarly to the proof of Appendix [9.3](#appendix:descent_potential_general){reference-type="ref" reference="appendix:descent_potential_general"}, using Lemma [Lemma 26](#lemma:w_potential_descent){reference-type="ref" reference="lemma:w_potential_descent"}, [\[eq:bound_on_ii\]](#eq:bound_on_ii){reference-type="eqref" reference="eq:bound_on_ii"}, [\[eq:final_phi_bound\]](#eq:final_phi_bound){reference-type="eqref" reference="eq:final_phi_bound"}, and Lemma [Lemma 28](#lemma:dist_to_prox_error){reference-type="ref" reference="lemma:dist_to_prox_error"}, and using the step-size conditions, we obtain a similar inequality to [\[eq:potential_decrease_double_loop\]](#eq:potential_decrease_double_loop){reference-type="eqref" reference="eq:potential_decrease_double_loop"}, with extra terms on noise-variances: $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \\ & - \frac{1}{16\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) \\ & 
+ \frac{C_w}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \\ & - \frac{C_w}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) - \frac{1}{16\sigma_k \alpha_k} \|x_k - x_{k+1}\|^2 + o(1/k) \\ & + \underbrace{(C_\eta \rho^2) \left( \frac{1}{\sigma_{k+1} \gamma_{k}} \|e_x^{k}\|^2 - \frac{1}{\sigma_k \gamma_{k-1}} \|e_x^{k-1}\|^2 \right) + O\left( \frac{1 + l_{g,1}/\mu + C_w\rho l_{g,1}}{\sigma_k} \right) (\alpha_k + \rho^{-1} \alpha_k^2) \cdot \|e_x^k\|^2}_{(i)} \\ & + \underbrace{(C_\eta \rho^2) \left( \frac{1}{\sigma_{k+1} \gamma_{k}} \|e_{wy}^{k+1}\|^2 - \frac{1}{\sigma_k \gamma_{k-1}} \|e_{wy}^{k}\|^2 \right)}_{(ii)} \\ & + \underbrace{(C_\eta \rho^2) \left( \frac{1}{\sigma_{k+1} \gamma_{k}} \|e_{wz}^{k+1}\|^2 - \frac{1}{\sigma_k \gamma_{k-1}} \|e_{wz}^{k}\|^2 \right)}_{(iii)}.\end{aligned}$$ In order to bound $e_{x}^k$ term, given that $$\begin{aligned} \eta_{k+1} \gg \left( \frac{\sigma_k \gamma_{k-1}}{\sigma_{k+1} \gamma_{k}} - 1 + \frac{l_{g,1}/\mu + C_w}{C_\eta \rho^2} \alpha_k \gamma_{k-1} \right), \end{aligned}$$ using Lemma [Lemma 30](#lemma:momentum_variance_x_descent){reference-type="ref" reference="lemma:momentum_variance_x_descent"}, we have $$\begin{aligned} (i) &\le - \frac{C_{\eta}\rho^2 \eta_k}{\sigma_k \gamma_{k-1}} \|e_{x}^{k-1}\|^2 + C_{\eta}\rho^2 \cdot \frac{O(\eta_{k}^2) (\sigma_{k-1}^2 \sigma_f^2 + \sigma_g^2) + O(l_{g,1}^2) \|x_{k} - x_{k-1}\|^2}{\sigma_k \gamma_{k-1}} \\ &\quad + C_{\eta}\rho^2 \cdot \frac{O(l_{g,1}^2) (\|w_{y,k+1} - w_{y,k}\|^2 + \|w_{z,k+1} - w_{z,k}\|^2)}{\sigma_k \gamma_{k-1}}.\end{aligned}$$ Similarly, by Lemma [Lemma 29](#lemma:momentum_variance_yz_descent){reference-type="ref" reference="lemma:momentum_variance_yz_descent"}, $$\begin{aligned} (ii) &\le - \frac{C_{\eta}\rho^2 \eta_{k+1}}{\sigma_k \gamma_{k-1}} \|e_{wy}^{k}\|^2 + C_{\eta}\rho^2 \cdot \frac{O(\eta_{k+1}^2) (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) + O(l_{g,1}^2) \|x_{k+1} - x_{k}\|^2}{\sigma_k \gamma_{k-1}} \\ &\quad + C_{\eta}\rho^2 \cdot \frac{O(l_{g,1}^2) \|w_{y,k+1} - w_{y,k}\|^2}{\sigma_k \gamma_{k-1}}, \\ (iii) &\le - \frac{C_{\eta}\rho^2 \eta_{k+1}}{\sigma_k \gamma_{k-1}} \|e_{wz}^{k}\|^2 + C_{\eta}\rho^2 \cdot \frac{O(\eta_{k+1}^2) \sigma_g^2 + O(l_{g,1}^2) \|x_{k+1} - x_{k}\|^2}{\sigma_k \gamma_{k-1}} \\ &\quad + C_{\eta}\rho^2 \cdot \frac{O(l_{g,1}^2) \|w_{z,k+1} - w_{z,k}\|^2}{\sigma_k \gamma_{k-1}}.\end{aligned}$$ Now setting $C_\eta > 0$ and $\rho \ll 1 / l_{g,1}$ properly, with $\alpha_k \ll \gamma_k$, we can keep $\|x_{k+1} - x_k\|^2$ terms negative, and have $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \\ & - \frac{1}{16\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) \\ & + \frac{C_w}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \\ & - \frac{C_w}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) - \frac{1}{32\sigma_k \alpha_k} \|x_k - x_{k+1}\|^2 + \frac{O(\rho^2 l_{g,1}^2)}{\sigma_k\gamma_{k-1}} \|x_k - 
x_{k-1}\|^2 \\ & - \frac{C_\eta \rho^2 \eta_{k+1}}{\sigma_k \gamma_{k-1}} (\|e_{wy}^k\|^2 + \|e_{wz}^k\|^2) + C_{\eta} O(\rho^2 l_{g,1}^2) \frac{\|w_{y,k+1} - w_{y,k}\|^2 + \|w_{z,k+1} - w_{z,k}\|^2}{\sigma_k \gamma_{k-1}} \\ & + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) + o(1/k). \end{aligned}$$ Given that $$\begin{aligned} \eta_{k+1} \gg O(l_{g,1}^2) \gamma_k^2,\end{aligned}$$ using Lemma [Lemma 32](#lemma:w_movement_bound_lemma){reference-type="ref" reference="lemma:w_movement_bound_lemma"}, we can manipulate $\|w_{y,k+1} - w_{y,k}\|$ and $\|w_{z,k+1} - w_{z,k}\|$ terms to be bounded by $$\begin{aligned} \mathbb{V}_{k+1} - \mathbb{V}_k &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \nonumber \\ & - \frac{1}{16\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) \nonumber \\ & + \frac{C_w}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2) \rho \alpha_k + 2 \beta_k}{C_w} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ & - \frac{C_w}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) - \frac{1}{32\sigma_k \alpha_k} \|x_k - x_{k+1}\|^2 + \frac{O(\rho^2 l_{g,1}^2)}{\sigma_k\gamma_{k-1}} \|x_k - x_{k-1}\|^2 \nonumber \\ & - \frac{C_\eta \rho^2 \eta_{k+1}}{2 \sigma_k \gamma_{k-1}} (\|e_{wy}^k\|^2 + \|e_{wz}^k\|^2) + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) + o(1/k) \nonumber \\ &+ \frac{C_{\eta} O(\rho^2 l_{g,1}^2)}{\sigma_k} \left( \underbrace{h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k}(x_k, y_k, w_{y,k+1})}_{(iv)} + \underbrace{g (x_k, z_k, w_{z,k}) - g(x_k, z_k, w_{z,k+1})}_{(v)} \right). \label{eq:potential_bound_interim2}\end{aligned}$$ To proceed, we note that $$\begin{aligned} (iv) &= (h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)) - (h_{\sigma_k}(x_k, y_k, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_k,y_k)) \\ &= (h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)) - (h_{\sigma_k}(x_{k+1}, y_{k+1}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k+1},y_{k+1})) \\ &\quad + \underbrace{h_{\sigma_k}(x_{k+1}, y_{k+1}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k+1},y_{k+1})) - (h_{\sigma_k}(x_{k}, y_{k}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k},y_{k}))}_{(a)},\end{aligned}$$ and the term $(a)$ is bounded as $$\begin{aligned} (a) &\le \langle \nabla_x (h_{\sigma_k}(x_k,y_k,w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_k,y_k)), x_{k+1} - x_k \rangle + l_{g,1} \|x_{k+1} - x_k\|^2 \\ &\quad + \langle \nabla_y (h_{\sigma_k}(x_k,y_k,w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_k,y_k)), y_{k+1} - y_k \rangle + \rho^{-1} \|y_{k+1} - y_k\|^2 \\ &= \langle \nabla_x (h_{\sigma_k} (x_k,w_{y,k+1}) - h_{\sigma_k} (x_k,w_{y,k}^*)), x_{k+1} - x_k \rangle + l_{g,1} \|x_{k+1} - x_k\|^2 \\ &\quad + \rho^{-1} \langle (y_k - w_{y,k+1}) - (y_k - w_{y,k}^*), y_{k+1} - y_k \rangle + \rho^{-1} \|y_{k+1} - y_k\|^2 \\ &\le 32 l_{g,1}^2 \alpha_k \|w_{y,k+1} - w_{y,k}^*\|^2 + \frac{1}{128\alpha_k} \|x_{k+1} - x_k\|^2 + \frac{16 \beta_k}{\rho} \|w_{y,k+1} - w_{y,k}^*\|^2 + \frac{1}{64\beta_k\rho} \|y_{k+1} - y_k\|^2. 
\end{aligned}$$ Thus, we can conclude that $$\begin{aligned} (iv) &\le (h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)) - (h_{\sigma_k}(x_{k+1}, y_{k+1}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k+1},y_{k+1})) \\ &\quad + 32 l_{g,1}^2 \alpha_k \|w_{y,k+1} - w_{y,k}^*\|^2 + \frac{1}{128\alpha_k} \|x_{k+1} - x_k\|^2 \\ &\quad + \frac{16 \beta_k}{\rho} \|w_{y,k+1} - w_{y,k}^*\|^2 + \frac{1}{64\beta_k\rho} \|y_{k+1} - y_k\|^2.\end{aligned}$$ Similarly, we have $$\begin{aligned} (v) &\le (g (x_k, z_k, w_{z,k}) - g_{\rho}^*(x_k,z_k)) - (g(x_{k+1}, z_{k+1}, w_{z,k+1}) - g_{\rho}^*(x_{k+1},z_{k+1})) \\ &\quad + 32 l_{g,1}^2 \alpha_k \|w_{z,k+1} - w_{z,k}^*\|^2 + \frac{1}{128\alpha_k} \|x_{k+1} - x_k\|^2 \\ &\quad + \frac{16 \beta_k}{\rho} \|w_{z,k+1} - w_{z,k}^*\|^2 + \frac{1}{64\beta_k\rho} \|z_{k+1} - z_k\|^2.\end{aligned}$$ Now plugging this back, we have reduced [\[eq:potential_bound_interim2\]](#eq:potential_bound_interim2){reference-type="eqref" reference="eq:potential_bound_interim2"} to $$\begin{aligned} & \mathbb{V}_{k+1} - \mathbb{V}_k \nonumber \\ &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \nonumber \\ & - \frac{1}{32\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k} \cdot O(C_f) \nonumber \\ & + \frac{C_w}{\sigma_k \rho} \left( 1 + \frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}} + \frac{\lambda_k}{4} + \frac{O(l_{g,1}^2 \rho) \alpha_k + O(1) \beta_k}{C_w} \right) \left( \|w_{y,k}^* - w_{y,k+1}\|^2 + \|w_{z,k}^* - w_{z,k+1}\|^2 \right) \nonumber \\ & - \frac{C_w}{\sigma_k \rho} \left( \|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 \right) - \frac{1}{64 \sigma_k \alpha_k} \|x_k - x_{k+1}\|^2 + \frac{O(\rho^2 l_{g,1}^2)}{\sigma_k\gamma_{k-1}} \|x_k - x_{k-1}\|^2 \nonumber \\ & - \frac{C_\eta \rho^2 \eta_{k+1}}{2 \sigma_k \gamma_{k-1}} (\|e_{wy}^k\|^2 + \|e_{wz}^k\|^2) + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) + o(1/k) \nonumber \\ &+ C_{\eta} O(\rho^2 l_{g,1}^2) \left( \frac{h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)}{\sigma_k} - \frac{h_{\sigma_k} (x_{k+1}, y_{k+1}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k+1},y_{k+1})}{\sigma_k} \right) \nonumber \\ & + C_{\eta} O(\rho^2 l_{g,1}^2) \left(\frac{g (x_k, z_k, w_{z,k}) - g_{\rho}^*(x_k,z_k)}{\sigma_k} - \frac{g(x_{k+1}, z_{k+1}, w_{z,k+1}) - g_{\rho}^*(x_{k+1},z_{k+1})}{\sigma_k} \right). 
\label{eq:potential_bound_interim3}\end{aligned}$$ Now applying Lemma [Lemma 31](#lemma:w_descent_lemma_momentum){reference-type="ref" reference="lemma:w_descent_lemma_momentum"}, along with $$\begin{aligned} \lambda_k = \frac{\gamma_k}{4\rho} \gg \frac{O(l_{g,1}^2\rho) \alpha_k + O(\beta_k)}{C_w}, \end{aligned}$$ and $$\begin{aligned} \eta_{k+1} \gg \frac{C_w}{C_\eta} \frac{\lambda_k \gamma_k}{\rho} \gg \frac{C_w \gamma_k^2}{4 C_\eta \rho^2},\end{aligned}$$ we can further bound [\[eq:potential_bound_interim3\]](#eq:potential_bound_interim3){reference-type="eqref" reference="eq:potential_bound_interim3"} by $$\begin{aligned} & \mathbb{V}_{k+1} - \mathbb{V}_k \nonumber \\ &\le -\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right)\nonumber \\ & - \frac{1}{32\sigma_k \beta_k\rho} (\|y_{k} - y_{k+1}\|^2 + \|z_k - z_{k+1}\|^2) - \frac{1}{64\sigma_k \alpha_k} \|x_k - x_{k+1}\|^2 + \frac{O(\rho^2l_{g,1}^2)}{\sigma_k\gamma_{k-1}} \|x_k - x_{k-1}\|^2 \nonumber \\ & + \frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) + o(1/k) \nonumber \\ &- \frac{C_w \lambda_k}{16 \sigma_k \rho} (\|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 ) \nonumber \\ &+ C_{\eta} O(\rho^2 l_{g,1}^2) \left( \frac{h_{\sigma_k} (x_k, y_k, w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)}{\sigma_k} - \frac{h_{\sigma_k} (x_{k+1}, y_{k+1}, w_{y,k+1}) - h_{\sigma_k,\rho}^*(x_{k+1},y_{k+1})}{\sigma_k} \right) \nonumber \\ & + C_{\eta} O(\rho^2 l_{g,1}^2) \left(\frac{g (x_k, z_k, w_{z,k}) - g_{\rho}^*(x_k,z_k)}{\sigma_k} - \frac{g(x_{k+1}, z_{k+1}, w_{z,k+1}) - g_{\rho}^*(x_{k+1},z_{k+1})}{\sigma_k} \right). 
\label{eq:potential_bound_interim4}\end{aligned}$$ ## Proof of Theorem [Theorem 15](#theorem:convergence_momentum){reference-type="ref" reference="theorem:convergence_momentum"} {#proof-of-theorem-theoremconvergence_momentum} Summing the bound [\[eq:potential_bound_interim4\]](#eq:potential_bound_interim4){reference-type="eqref" reference="eq:potential_bound_interim4"} for $k= 0$ to $K-1$, we can cancel out $\|x_{k} - x_{k+1}\|^2$ terms given that $$\begin{aligned} O(\rho^2 l_{g,1}^2) \alpha_k \ll \frac{\sigma_k}{\sigma_{k+1}} \gamma_k.\end{aligned}$$ Leaving only relevant terms in the final bound, we get $$\begin{aligned} & \mathbb{V}_K - \mathbb{V}_0 \\ &\le \sum_{k=0}^{K-1} \left(-\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \right) \\ &+ \sum_{k=0}^{K-1} \left(\frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) \right) \\ &+ \sum_{k=0}^{K-1} \left( - \frac{C_w \lambda_k}{16 \sigma_k \rho} (\|w_{y,k}^* - w_{y,k}\|^2 + \|w_{z,k}^* - w_{z,k}\|^2 ) \right) \\ &+ \sum_{k=0}^{K-2} \left(\left(\frac{1}{\sigma_{k+1}} - \frac{1}{\sigma_{k}} \right) \left(h_{\sigma_k}(x_k,y_k,w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k) + g(x_k,z_k,w_{z,k}) - g_{\rho}^*(x_k,z_k) \right)\right) \\ &+ C_{\eta} O(\rho^2 l_{g,1}^2) \left( \frac{h_{\sigma_0} (x_0, y_0, w_{y,0}) - h_{\sigma_0,\rho}^*(x_0,y_0)}{\sigma_0} - \frac{h_{\sigma_K} (x_{K}, y_{K}, w_{y,K}) - h_{\sigma_K,\rho}^*(x_{K},y_{K})}{\sigma_{K-1}} \right) \\ & + C_{\eta} O(\rho^2 l_{g,1}^2) \left(\frac{g (x_0, z_0, w_{z,0}) - g_{\rho}^*(x_0,z_0)}{\sigma_0} - \frac{g(x_{K}, z_{K}, w_{z,K}) - g_{\rho}^*(x_{K},z_{K})}{\sigma_{K-1}} \right),\end{aligned}$$ where the last two lines come from the telescoping sum. 
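For the reader's convenience, the telescoping rearrangement referred to above is, schematically, the following elementary identity, stated here for a generic sequence $A_k$ and positive weights $\sigma_k$ (in our case $A_k$ plays the role of the bracketed differences $h_{\sigma_k}(x_k,y_k,w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k)$ and $g(x_k,z_k,w_{z,k}) - g_{\rho}^*(x_k,z_k)$, up to the index bookkeeping on $\sigma$): $$\begin{aligned} \sum_{k=0}^{K-1}\left(\frac{A_k}{\sigma_k} - \frac{A_{k+1}}{\sigma_k}\right) = \frac{A_0}{\sigma_0} - \frac{A_K}{\sigma_{K-1}} + \sum_{k=0}^{K-2}\left(\frac{1}{\sigma_{k+1}} - \frac{1}{\sigma_k}\right) A_{k+1}. \end{aligned}$$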
Note that $$\begin{aligned} h_{\sigma_K} (x_{K}, y_{K}, w_{y,K}) - h_{\sigma_K,\rho}^*(x_{K},y_{K}) \ge 0, \\ g(x_{K}, z_{K}, w_{z,K}) - g_{\rho}^*(x_{K},z_{K}) \ge 0,\end{aligned}$$ by definition, and furthermore, $$\begin{aligned} h_{\sigma_k}(x_k,y_k,w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k) &\le \langle \nabla_{w} h_{\sigma_k}(x_k,y_k,w_{y,k}^*), w_{y,k} - w_{y,k}^* \rangle, \\ g(x_k,z_k,w_{z,k}) - g_{\rho}^*(x_k,z_k) &\le \langle \nabla_{w} g (x_k,z_k,w_{z,k}^*), w_{z,k} - w_{z,k}^* \rangle.\end{aligned}$$ Here we rely on Assumption [Assumption 12](#assumption:bounded_w_gradients){reference-type="ref" reference="assumption:bounded_w_gradients"} (along with $\mathcal{Y}$ being compact) to bound them: $$\begin{aligned} h_{\sigma_k}(x_k,y_k,w_{y,k}) - h_{\sigma_k,\rho}^*(x_k,y_k) &\le M_w \|w_{y,k} - w_{y,k}^*\|, \\ g(x_k,z_k,w_{z,k}) - g_{\rho}^*(x_k,z_k) &\le M_w \|w_{z,k} - w_{z,k}^*\|.\end{aligned}$$ Plugging this back, and using $-a x^2 + bx \le \frac{b^2}{4a}$, we have $$\begin{aligned} \mathbb{V}_K - \mathbb{V}_0 &\le \sum_{k=0}^{K-1} \left(-\frac{\alpha_k}{16 \sigma_k } \|\Delta_k^x\|^2 -\frac{\beta_k}{16 \sigma_k \rho} \left( \|y_k - w_{y,k}^*\|^2 + \|z_k - w_{z,k}^*\|^2 \right) \right) \\ &+ \sum_{k=0}^{K-1} \left(\frac{(\sigma_{k} - \sigma_{k+1})}{\sigma_k } \cdot O(C_f) + C_\eta \rho^2 \frac{O(\eta_{k+1}^2)}{\sigma_k \gamma_{k}} (\sigma_{k}^2 \sigma_f^2 + \sigma_g^2) \right) \\ &+ \sum_{k=0}^{K-1} \left( \frac{O(M_w^2) \rho^2}{ C_w \sigma_k \gamma_k} \left(\frac{\sigma_k - \sigma_{k+1}}{\sigma_{k+1}}\right)^2 \right) \\ &+ C_{\eta} O(\rho^2 l_{g,1}^2) \left( \frac{h_{\sigma_0} (x_0, y_0, w_{y,0}) - h_{\sigma_0,\rho}^*(x_0,y_0)}{\sigma_0} + \frac{g (x_0, z_0, w_{z,0}) - g_{\rho}^*(x_0,z_0)}{\sigma_0} \right).\end{aligned}$$ Note that $w_{y,0} = y_0, w_{z,0} = z_0$, and thus $h_{\sigma_0} (x_0, y_0, w_{y,0}) = h_{\sigma_0} (x_0,y_0), g(x_0,z_0,w_{z,0}) = g(x_0, z_0)$. Arranging the terms, we have the theorem. ## Proof of Corollary [Corollary 16](#corollary:final_convergence_result_momentum){reference-type="ref" reference="corollary:final_convergence_result_momentum"} {#proof-of-corollary-corollaryfinal_convergence_result_momentum} The proof is almost identical to the proof of Corollary [Corollary 13](#corollary:final_convergence_result){reference-type="ref" reference="corollary:final_convergence_result"} in Appendix [9.4](#appendix:corollary_final_convergence){reference-type="ref" reference="appendix:corollary_final_convergence"}, and thus we omit the proof. # Proofs of Auxiliary Lemmas ## Proof of Lemma [Lemma 8](#lemma:proximal_error_to_lipschitz_continuity){reference-type="ref" reference="lemma:proximal_error_to_lipschitz_continuity"} {#proof-of-lemma-lemmaproximal_error_to_lipschitz_continuity} The Lemma essentially follows the proof of Proposition 4 in [@shen2023penalty]. It suffices to show the *local* Lipschitz-continuity of solution-sets. For every $x_1, x_2 \in \mathcal{X}$ such that $\|x_1 - x_2\| \le c_x \mu \delta / l_{g,1}$ and $|\sigma_1 - \sigma_2| \le c_s \mu \delta / l_{f,0}$ with sufficiently small constants $c_x, c_s > 0$, let $y_1 \in T(x_1,\sigma_1)$ and $\bar{y} = \normalfont\textbf{prox}_{\rho h_{\sigma_2}(x_2,\cdot)} (y_1)$. Note that $y_1 = \normalfont\textbf{prox}_{\rho h_{\sigma_1}(x_1,\cdot)}(y_1)$. 
By Lemma [Lemma 19](#lemma:prox_diff_bound){reference-type="ref" reference="lemma:prox_diff_bound"} and Lemma [Lemma 20](#lemma:prox_sigma_diff_bound){reference-type="ref" reference="lemma:prox_sigma_diff_bound"}, $$\begin{aligned} \| y_1 - \bar{y} \| &= \| \normalfont\textbf{prox}_{\rho h_{\sigma_1}(x_1,\cdot)}(y_1) - \normalfont\textbf{prox}_{\rho h_{\sigma_2}(x_2,\cdot)} (y_1) \| \\ &\le O(\rho l_{g,1}) \|x_1 - x_2\| + O(\rho l_{f,0}) |\sigma_1 - \sigma_2| \le \rho \delta. \end{aligned}$$ Thus, $\rho^{-1} \|y_1 - \normalfont\textbf{prox}_{\rho h_{\sigma_2}(x_2, \cdot)} (y_1)\| \le \delta$, and applying Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, $$\begin{aligned} \normalfont\mathrm{\bf dist}(y_1, T(x_2, \sigma_2)) \le \mu^{-1} \cdot \left(O(l_{g,1}) \|x_1 - x_2\| + O(l_{f,0}) |\sigma_1 - \sigma_2| \right),\end{aligned}$$ proving the (local) $O(l_{g,1}/\mu)$-Lipschitz continuity in $x$ and $O(l_{f,0}/\mu)$-Lipschitz continuity in $\sigma$ of $T(x,\sigma)$. ## Proof of Lemma [Lemma 11](#lemma:stationary_sigmarho_to_sigmapsi){reference-type="ref" reference="lemma:stationary_sigmarho_to_sigmapsi"} {#proof:stationary_sigmarho_to_sigmapsi} #### *Proof.* By Danskin's theorem (Theorem [Theorem 17](#theorem:danskins){reference-type="ref" reference="theorem:danskins"}), if $\nabla h_{\sigma}^*(x)$ and $\nabla g^*(x)$ exist, then they are given by $$\begin{aligned} \nabla h_{\sigma}^*(x) = \nabla_x h_{\sigma}(x, w_{\sigma}^*), \ \nabla g^*(x) = \nabla_x g(x, w^*), \end{aligned}$$ for any $w_{\sigma}^* \in T(x,\sigma), w^* \in T(x,0)$. Let $w_{y}^* = \normalfont\textbf{prox}_{\rho h_{\sigma}(x^*,\cdot)}(y^*), w_{z}^* = \normalfont\textbf{prox}_{\rho g(x^*,\cdot)}(z^*)$. By Assumption [Assumption 1](#assumption:small_gradient_proximal_error_bound){reference-type="ref" reference="assumption:small_gradient_proximal_error_bound"}, there exists $w_{\sigma}^* \in T(x^*,\sigma)$ that satisfies $$\begin{aligned} \sigma\epsilon \ge \rho^{-1}\|y^* - w_{y}^*\| \ge \mu \|y^* - w_{\sigma}^* \|, \end{aligned}$$ and thus $\|y^* - w_{\sigma}^*\| \le \frac{\sigma \epsilon}{\mu}$. Similarly, $\|z^* - w^*\| \le \frac{\sigma \epsilon}{\mu}$. Therefore, we have $$\begin{aligned} \|\nabla_x \psi_{\sigma}(x^*,y^*,z^*) - \nabla\psi_{\sigma}(x^*)\| &\le \frac{\|\nabla_x h_{\sigma}(x^*, y^*) - \nabla_x h_{\sigma}(x^*,w_{\sigma}^*)\| + \|\nabla_x g(x^*, z^*) - \nabla_x g (x^*,w^*)\|}{\sigma} \\ &\le \frac{l_{g,1}}{\sigma} \left(\|y^* - w_{\sigma}^*\| + \|{z}^* - w^*\|\right) \le \frac{2 l_{g,1} \epsilon}{\mu}. \end{aligned}$$ To bound the projection error, note that $$\begin{aligned} \frac{1}{\rho} \| \Pi_{ \mathcal{X} } \left\{ x^* - \rho \nabla\psi_{\sigma}(x^*) \right\} - \Pi_{ \mathcal{X} } \left\{ x^* - \rho \nabla_x \psi_{\sigma}(x^*, y^*, z^*) \right\} \| \le \|\nabla_x \psi_{\sigma}(x^*,y^*,z^*) - \nabla\psi_{\sigma}(x^*)\| \le \frac{2 l_{g,1}}{\mu} \epsilon, \end{aligned}$$ by non-expansiveness of projection operators. Thus, $$\begin{aligned} \frac{1}{\rho} \left\|x^* - \Pi_{ \mathcal{X} } \left\{ x^* - \rho \nabla\psi_{\sigma}(x^*) \right\} \right\| \le (1 + 2 l_{g,1} / \mu) \cdot \epsilon, \end{aligned}$$ concluding that $x^*$ is an $O(\epsilon)$-stationary point of $\nabla\psi_{\sigma}(x)$. 
The second part comes from the mean-value theorem: by the assumption, there exists $\sigma' \in [0,\sigma]$ such that $$\begin{aligned} \nabla\psi_{\sigma}(x) = \frac{\partial_x l(x,\sigma) - \partial_x l(x,0)}{\sigma} = \frac{\partial^2}{\partial \sigma \partial x} l(x,\sigma') = \frac{\partial^2}{\partial x \partial \sigma } l(x,\sigma'). \end{aligned}$$ We also assumed $L_{\sigma}$-Lipschitz continuity of the second cross-partial derivatives, and thus $$\begin{aligned} \left\| \frac{\partial^2}{\partial x \partial \sigma } l(x,\sigma') - \frac{\partial^2}{\partial x \partial \sigma } l(x,0) \right\| \le L_{\sigma} \sigma' \le L_{\sigma} \sigma, \end{aligned}$$ which implies $\|\nabla\psi_{\sigma}(x) - \nabla\psi(x) \| \le L_{\sigma} \sigma$. Using a similar projection non-expansiveness argument, we can show that $$\begin{aligned} \frac{1}{\rho} \left\|x - \Pi_{ \mathcal{X} } \left\{ x - \rho \nabla\psi(x) \right\} \right\| \le (1 + 2 l_{g,1} / \mu) \cdot \epsilon + L_{\sigma} \sigma, \end{aligned}$$ concluding the proof. $\square$ ## Proof of Lemma [Lemma 19](#lemma:prox_diff_bound){reference-type="ref" reference="lemma:prox_diff_bound"} {#proof:prox_diff_bound} #### *Proof.* Note that $$\begin{aligned} \| \normalfont\textbf{prox}_{\rho g(x_1, \cdot)}(y_1) - \normalfont\textbf{prox}_{\rho g(x_2,\cdot)} (y_2) \| &\le \| \normalfont\textbf{prox}_{\rho g(x_1, \cdot)}(y_1) - \normalfont\textbf{prox}_{\rho g(x_2,\cdot)} (y_1) \| \\ &+ \| \normalfont\textbf{prox}_{\rho g(x_2, \cdot)}(y_1) - \normalfont\textbf{prox}_{\rho g(x_2,\cdot)} (y_2) \|. \end{aligned}$$ Due to the non-expansiveness of proximal operators, the second term is at most $\|y_1 - y_2\|$. For bounding the first term, define $$\begin{aligned} w_1^* &= \normalfont\textbf{prox}_{\rho g(x_1,\cdot)} (y_1) = \arg\min_{w \in \mathcal{Y}} g(x_1, w) + \frac{1}{2\rho} \|w - y_1\|^2, \\ w_2^* &= \normalfont\textbf{prox}_{\rho g(x_2,\cdot)} (y_1) = \arg\min_{w \in \mathcal{Y}} g(x_2, w) + \frac{1}{2\rho} \|w - y_1\|^2. \end{aligned}$$ Due to the optimality condition, with $\beta = \rho/4$ we can check that $$\begin{aligned} w_1^* = \Pi_{ \mathcal{Y} } \left\{ w_1^* - \beta (\nabla_y g(x_1, w_1^*) + \rho^{-1} (w_1^* - y_1)) \right\}. \end{aligned}$$ On the other hand, define $w' = \Pi_{ \mathcal{Y} } \left\{ w_1^* - \beta (\nabla_y g(x_2, w_1^*) + \rho^{-1} (w_1^* - y_1)) \right\}$. This is one projected gradient-descent (PGD) step for finding $w_2^*$ starting from $w_1^*$. By the linear convergence of PGD for strongly convex functions over a convex domain [@bubeck2015convex], since the inner function is $1/(3\rho)$-strongly convex and $1/\rho$-smooth for a suitable choice of $\rho$, we have $$\begin{aligned} \|w' - w_2^*\| \le \frac{9}{10} \|w_1^* - w_2^*\|. \end{aligned}$$ Thus, $$\begin{aligned} & \|w_1^* - w_2^*\| \\ &\le \|w_1^* - w'\| + \|w' - w_2^*\| \\ &\le \|\Pi_{ \mathcal{Y} } \left\{ w_1^* - \beta (\nabla_y g(x_1, w_1^*) + \rho^{-1} (w_1^* - y_1)) \right\} - \Pi_{ \mathcal{Y} } \left\{ w_1^* - \beta (\nabla_y g(x_2, w_1^*) + \rho^{-1} (w_1^* - y_1)) \right\}\| \\ & \quad + \frac{9}{10} \|w_1^* - w_2^*\| \\ &\le \beta \| \nabla_y g(x_1, w_1^*) - \nabla_y g(x_2, w_1^*) \| + \frac{9}{10} \|w_1^* - w_2^*\| \\ &\le \beta O(l_{g,1}) \|x_1 - x_2\| + \frac{9}{10} \|w_1^* - w_2^*\|, \end{aligned}$$ where in the third inequality we used the non-expansiveness of the projection onto convex sets. Rearranging and plugging in $\beta = \rho / 4$, we get $$\begin{aligned} \|w_1^* - w_2^* \| \le O\left(\rho l_{g,1} \right) \|x_1 - x_2\|. 
\end{aligned}$$ $\square$ ## Proof of Lemma [Lemma 20](#lemma:prox_sigma_diff_bound){reference-type="ref" reference="lemma:prox_sigma_diff_bound"} {#proof:prox_sigma_diff_bound} #### *Proof.* Again, let $w_1^* = \normalfont\textbf{prox}_{\rho h_{\sigma_1}(x,\cdot)} (y)$ and $w_2^* = \normalfont\textbf{prox}_{\rho h_{\sigma_2}(x,\cdot)} (y)$. Using similar arguments to the proof in Appendix [11.3](#proof:prox_diff_bound){reference-type="ref" reference="proof:prox_diff_bound"}, we have $$\begin{aligned} \|w_1^* - w_2^*\| &\le \beta \|\nabla_y h_{\sigma_1} (x, w_1^*) - \nabla_y h_{\sigma_2} (x,w_1^*) \| + \frac{9}{10} \|w_1^* - w_2^*\| \\ &\le \beta \|\sigma_1 \nabla_y f(x, w_1^*) - \sigma_2 \nabla_y f(x,w_1^*) \| + \frac{9}{10} \|w_1^* - w_2^*\|, \end{aligned}$$ where $\beta = \rho/4$. Rearranging terms and using $\|\nabla_y f(x,w_1^*)\| \le l_{f,0}$, we get the lemma. $\square$ ## Proof of Lemma [Lemma 21](#lemma:smoothed_smoothness){reference-type="ref" reference="lemma:smoothed_smoothness"} {#proof:smoothed_smoothness} #### *Proof.* By Lemma [Lemma 19](#lemma:prox_diff_bound){reference-type="ref" reference="lemma:prox_diff_bound"}, we know that the solution of $\min_{w \in \mathcal{Y}} h_{\sigma}(x,w) + \frac{1}{2\rho} \|w-y\|^2$ is $O(\rho l_{g,1})$-Lipschitz continuous in $x$ and $1$-Lipschitz continuous in $y$. Therefore, by Lemma [Lemma 18](#lemma:lipscthiz_solution_smooth_minimizer){reference-type="ref" reference="lemma:lipscthiz_solution_smooth_minimizer"}, since the inner minimization problem is $\rho^{-1}$-smooth, $h_{\sigma,\rho}^*(x,y)$ is $2\rho^{-1}$-smooth. $\square$ [^1]: In the unconstrained setting, EB and PL are known to be equivalent conditions, see *e.g.,* [@karimi2016linear] [^2]: Note that $\Delta_k^x$ is a stricter stationarity measure on $x$ than Definition [Definition 8](#definition:saddle_point_def){reference-type="ref" reference="definition:saddle_point_def"} as long as $\alpha_k \le \rho$, since the function $g:[0,\infty) \to \mathbb{R}$ defined by $g(s) := \| x - \Pi_{ \mathcal{X} } \left\{ x + s w \right\}\|/s$ with any $w \in \mathbb{R}^{d_x}$ is monotonically nonincreasing (see *e.g.,* Lemma 2.3.1 in [@Ber99])
arxiv_math
{ "id": "2309.01753", "title": "On Penalty Methods for Nonconvex Bilevel Optimization and First-Order\n Stochastic Approximation", "authors": "Jeongyeol Kwon, Dohyun Kwon, Steve Wright, Robert Nowak", "categories": "math.OC cs.LG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Building on work of Williams, Wieler proved that every irreducible Smale space with totally disconnected stable sets can be realized via a stationary inverse limit. Using this result, the first and fourth listed authors of the present paper showed that the stable $C^*$-algebra associated to such a Smale space can be obtained from a stationary inductive limit of a Fell algebra. Its spectrum is typically non-Hausdorff and admits a self-map related to the stationary inverse limit. With the goal of understanding the fine structure of the stable algebra and the stable Ruelle algebra, we study said self-map on the spectrum of the Fell algebra as a dynamical system in its own right. Our results can be summarized into the statement that this dynamical system is an expansive, surjective, local homeomorphism of a compact, locally Hausdorff space and from its $K$-theory we can compute $K$-theoretical invariants of the stable and unstable Ruelle algebra of a Smale space with totally disconnected stable sets. address: - Robin J. Deeley, Department of Mathematics, University of Colorado Boulder Campus Box 395, Boulder, CO 80309-0395, USA - Menevşe Eryüzlü, Department of Mathematics, University of Colorado Boulder Campus Box 395, Boulder, CO 80309-0395, USA - Magnus Goffeng, Centre for Mathematical Sciences, Lund University, Box 118, 221 00 LUND, Sweden - Allan Yashinski, Department of Mathematics, University of Maryland, College Park, MD 20742-4015, USA author: - Robin J. Deeley - Menevşe Eryüzlü - Magnus Goffeng - Allan Yashinski title: "Wieler solenoids: non-Hausdorff expansiveness, Cuntz-Pimsner models, and functorial properties" --- [^1] # Introduction {#introduction .unnumbered} In [@Wil] Williams proved that every expanding attractor can be realized as a solenoid (i.e., a stationary inverse limit). The space in Williams' construction is a branched manifold and the map satisfies natural axioms. The expanding attractors studied by Williams are examples of Smale spaces that have totally disconnected stable sets. Building on Williams' axioms, Wieler [@Wie] proved that every Smale space with totally disconnected stable sets can be realized as a solenoid. Roughly speaking, Wieler replaces the differential geometric axioms of Williams with axioms that only rely on topological/metric space properties. This is necessary to include examples such as shifts of finite type where the solenoid is constructed from a Cantor set. In terms of notation, we let $(Y, g)$ denote the Wieler pre-solenoid and $(X, \varphi)$ denote the associated Smale space; as mentioned $(X, \varphi)$ is the stationary inverse limit of $(Y, g)$. The stable $C^*$-algebra associated to a Smale space is introduced in [@Put] and [@PutSpi]. Building on work of Gonçalves [@Gon1; @Gon2] (also see [@Min] and [@GonRamSol]) the first and fourth listed authors [@DeeYas] showed that the stable algebra is isomorphic to a stationary inductive limit when the Smale space is obtained from a Wieler pre-solenoid. The relevant $C^*$-algebra in the inductive limit is particularly nice; it is the Fell algebra associated to a local homeomorphism and furthermore the spectrum of this algebra is related to but typically not equal to the space in Wieler's construction. This paper aims at clarifying this relation. More to the point, the spectrum of the Fell algebra is typically non-Hausdorff. However, it is compact and locally Hausdorff. In addition, the original map on the pre-solenoid lifts to a map on this compact, locally Hausdorff space. 
We denote this space and map as a pair $(X^u({\bf P})/{\sim_0}, \tilde{g})$ where the notation is introduced in more detail in Section [2](#SecCstarWieler){reference-type="ref" reference="SecCstarWieler"}. Importantly for us, the map $\tilde{g}$ is a local homeomorphism and (in a non-Hausdorff sense that is introduced in the present paper) expansive. In this sense, the dynamical system $(X^u({\bf P})/{\sim_0}, \tilde{g})$ is a "non-Hausdorff resolution" of the failure of $(Y,g)$ to be an expansive local homeomorphism. Thus, at the purely dynamical level, there is a trade-off between the Hausdorffness of the space of the pre-solenoid and the map being a local homeomorphism and expansive. For $C^*$-algebraic applications, the map being a local homeomorphism and expansive is often more important than the space being Hausdorff. Using $(X^u({\bf P})/{\sim_0}, \tilde{g})$ as our primary example, we study the following three areas:\ *The notion of forward orbit expansiveness in the context of non-Hausdorff spaces.* This builds on the work of Achigar, Artigue, and Monteverde [@MR3501269] who study expansiveness for homeomorphisms on non-Hausdorff spaces. We obtain natural conditions on the pre-solenoid (which is non-Hausdorff) that ensure that the associated solenoid is Hausdorff. In our prototypical example, we show that the solenoid associated with $(X^u({\bf P})/{\sim_0}, \tilde{g})$ is the original Smale space. Thus any Smale space with totally disconnected stable sets can be realized as a solenoid where the map of the pre-solenoid is an expansive, local homeomorphism; of course, the space cannot always be taken to be Hausdorff, see [@DeeYas Example 10.3]. This area is discussed in Section [3](#ExpDSNonHausSec){reference-type="ref" reference="ExpDSNonHausSec"}.\ *The inductive limit and Cuntz-Pimsner models for the stable and stable Ruelle algebra respectively.* Building on previous work in [@DGMW; @DeeYas], we relate stationary inductive limits to Cuntz-Pimsner algebras in Section [5](#cpsectionone){reference-type="ref" reference="cpsectionone"}. Our main result is a very general construction of a Cuntz-Pimsner model of a stationary inductive limit. Furthermore, we discuss the construction of unital Cuntz-Pimsner models under the assumption of the existence of a full projection. In our prototypical example $(X^u({\bf P})/{\sim_0}, \tilde{g})$, the relevant Fell algebra always has a full projection, as we show in Section [4](#fullprojectioninfell){reference-type="ref" reference="fullprojectioninfell"}, and the stable algebra of the original Smale space is a stationary inductive limit of this Fell algebra and hence is the core of our Cuntz-Pimsner model of the stable Ruelle algebra.\ *The functorial properties of the $K$-theory of a compact, locally Hausdorff space.* The $K$-theory of a compact, locally Hausdorff space is defined using Fell algebras, so functorial properties are subtle. In particular, a continuous map between compact, locally Hausdorff spaces need not induce a $*$-homomorphism at the $C^*$-algebra level. Nevertheless, we show that there are well-behaved right way and wrong way functors despite the failure of functoriality at the Fell algebra level. In our prototypical example, the map induced by $\tilde{g}$ plays a key role in computing the $K$-theory of the stable algebra and the stable Ruelle algebra of the original Smale space. 
These computations use the stationary inductive limit in the case of the stable algebra and the Cuntz-Pimsner models in the case of the stable Ruelle algebra; an explicit description of the wrong way map associated to $\tilde{g}$ is the key to such computations. This area is discussed in Section [6](#SecKthFun){reference-type="ref" reference="SecKthFun"} where a number of illustrative examples are discussed at the end of the section.\ For each of the three areas, we work at a quite general level. However, everything we consider is rooted and motivated by the dynamical system $(X^u({\bf P})/{\sim_0}, \tilde{g})$ associated with a Wieler pre-solenoid. Before entering the above mentioned areas, we present some preliminary material in Section [1](#prelsection){reference-type="ref" reference="prelsection"}, on Smale spaces, Fell algebras, groupoids and correspondences, as well as in Section [2](#SecCstarWieler){reference-type="ref" reference="SecCstarWieler"}, on $C^*$-algebras associated with Wieler solenoids. The succeeding four sections of the paper treat the areas listed above and can be read independently of one another. Even if these last four sections of the paper are logically independent of each other, we believe they form a coherent picture of Wieler-Smale spaces. Such a description is missing in the literature except in the case of a Wieler-Smale space defined from a local homeomorphism [@DGMW], see also Subsection [1.2](#SubSecWielerBasic){reference-type="ref" reference="SubSecWielerBasic"} below. To avoid confusion about Hausdorffness, we indicate spaces that are possibly non-Hausdorff with a tilde. For instance, $X$ is Hausdorff while $\tilde{X}$ is possibly non-Hausdorff. The exception to this notation is $X^u({\bf P})/{\sim_0}$ which in general can be non-Hausdorff.\ **Acknowledgements** The authors wish to thank Jamie Gabe for the examples leading up to Remark [Remark 62](#expaneeded){reference-type="ref" reference="expaneeded"}. The first listed author thanks the University of Hawaii and the Fields Institute for visits during which time the paper was completed. # Preliminaries {#prelsection} In this section, we will present some preliminary material from the literature. As the paper aims for a rather broad view on Wieler-Smale spaces, we carefully review the known results. ## Smale spaces {#Section-SmaleSpaces} Although we are only interested in Wieler solenoids some definitions and basic properties of general Smale spaces are required. The reader can find more on Smale spaces in [@Kil; @Put; @PutSpi; @Rue]. **Definition 1**. A Smale space is a metric space $(X, d)$ along with a homeomorphism $\varphi: X\rightarrow X$ with the following additional structure: there exists global constants $\epsilon_X>0$ and $0< \lambda < 1$ and a continuous map, called the bracket map, $$[ \ \cdot \ , \ \cdot \ ] :\{(x,y) \in X \times X : d(x,y) \leq \epsilon_X\}\to X$$ such that the following axioms hold - $\left[ x, x \right] = x$; - $\left[x,[y, z] \right] = [x, z]$ assuming both sides are defined; - $\left[[x, y], z \right] = [x,z]$ assuming both sides are defined; - $\varphi[x, y] = [ \varphi(x), \varphi(y)]$ assuming both sides are defined; - For $x,y \in X$ such that $[x,y]=y$, $d(\varphi(x),\varphi(y)) \leq \lambda d(x,y)$; - For $x,y \in X$ such that $[x,y]=x$, $d(\varphi^{-1}(x),\varphi^{-1}(y)) \leq \lambda d(x,y)$. A Smale space is denoted simply by $(X,\varphi)$ and to avoid certain trivialities, throughout the paper we assume that $X$ is infinite. **Definition 2**. 
Suppose $(X, \varphi)$ is a Smale space and $x$, $y$ are in $X$. We write $$x \sim_s y \hbox{ if }\lim_{n \rightarrow \infty} d(\varphi^n(x), \varphi^n(y)) =0$$ and we write $$x\sim_u y \hbox{ if } \lim_{n\rightarrow \infty}d(\varphi^{-n}(x) , \varphi^{-n}(y))=0.$$ The $s$ and $u$ stand for stable and unstable respectively. The global stable and unstable sets of a point $x \in X$ are defined as follows: $$X^s(x) = \{ y \in X \: | \: y \sim_s x \} \hbox{ and } X^u(x)=\{ y \in X \: | \: y \sim_u x \}$$ Furthermore, the local stable and unstable sets of $x$ are defined as follows: Given $0< \epsilon \le \epsilon_X$, we have $$\begin{aligned} X^s(x, \epsilon) & = \{ y \in X \: | \: [x, y ]= y \hbox{ and }d(x,y)< \epsilon \} \hbox{ and } \\ X^u(x, \epsilon) & = \{ y \in X \: | \: [y, x]= y \hbox{ and } d(x,y)< \epsilon \}.\end{aligned}$$ The following result is standard; see for example [@Put; @Rue]. **Theorem 3**. *Suppose $(X, \varphi)$ is a Smale space and $x$, $y$ are in $X$ with $d(x,y)< \epsilon_X$. Then the following hold: for any $0< \epsilon \le \epsilon_X$* 1. *$X^s(x, \epsilon) \cap X^u(y, \epsilon)=\{[x, y]\}$ or is empty;* 2. *$\displaystyle X^s(x) = \bigcup_{n\in \mathbb{N}} \varphi^{-n}(X^s(\varphi^n(x), \epsilon))$;* 3. *$\displaystyle X^u(x) = \bigcup_{n\in \mathbb{N}} \varphi^n(X^u(\varphi^{-n}(x), \epsilon))$.* A Smale space is mixing if for each pair of non-empty open sets $U$, $V$, there exists $N$ such that $\varphi^n(U)\cap V \neq \emptyset$ for all $n\geq N$. When $(X, \varphi)$ is mixing, $X^u(x)$ and $X^s(x)$ are each dense as subsets of $X$. However, one can use the previous theorem to give $X^u(x)$ and $X^s(x)$ locally compact, Hausdorff topologies, see for example [@Kil Theorem 2.10]. ## Background on Wieler solenoids {#SubSecWielerBasic} Inspired by work of Williams [@Wil], Wieler [@Wie] proved that every irreducible Smale space with totally disconnected stable sets $X^s(x)$ can be realized as a solenoid. Such Smale spaces will be referred to as Wieler solenoids or Wieler-Smale spaces. More precisely, she characterized Wieler-Smale spaces in terms of the following axioms on the pre-solenoid. **Definition 4** (Wieler's Axioms). Let $(Y, \mathrm{d}_Y)$ be a compact metric space, and $g: Y \rightarrow Y$ be a continuous surjective map. Then, the triple $(Y, \mathrm{d}_Y, g)$ satisfies Wieler's axioms if there exist global constants $\beta>0$, $K\in \mathbb{N}^+$, and $0< \gamma < 1$ such that the following hold: Axiom 1 : If $x,y\in Y$ satisfy $\mathrm{d}_Y(x,y)\le \beta$, then $$\mathrm{d}_Y(g^K(x),g^K(y))\leq\gamma^K \mathrm{d}_Y(g^{2K}(x),g^{2K}(y)).$$ Axiom 2 : For all $x\in Y$ and $0<\epsilon\le \beta$ $$g^K(B(g^K(x),\epsilon))\subseteq g^{2K}(B(x,\gamma\epsilon)).$$ **Definition 5**. Suppose $(Y,\mathrm{d}_Y, g)$ satisfies Wieler's axioms and form the inverse limit space $$X:= \varprojlim (Y, g) = \{ (y_n)_{n\in \mathbb{N}} = (y_0, y_1, y_2, \ldots ) \: | \: g(y_{i+1})=y_i \hbox{ for each }i\ge0 \}.$$ Consider the map $\varphi: X \rightarrow X$ defined via $$\varphi(x_0, x_1, x_2, \ldots ) = (g(x_0), g(x_1), g(x_2), \ldots) = (g(x_0), x_0, x_1, \ldots ).$$ Following Wieler, we take a metric on $X$, $\mathrm{d}_X$, given by $$\mathrm{d}_X((x_n)_{n\in \mathbb{N}}, (y_n)_{n\in \mathbb{N}} ) = \sum_{i=0}^K \gamma^{i} \mathrm{d}^{\prime}_X( \varphi^{i}(x_n)_{n\in \mathbb{N}}, \varphi^{i}(y_n)_{n\in \mathbb{N}}),$$ where $\mathrm{d}^{\prime}_X ( (x_n)_{n\in \mathbb{N}}, (y_n)_{n\in \mathbb{N}} )= \operatorname{sup}_{n\in \mathbb{N}} (\gamma^n \mathrm{d}_Y(x_n, y_n))$. 
We note that the topology induced by $\mathrm{d}_X$ is the product topology. The triple $(X, \mathrm{d}_X, \varphi)$ is called a Wieler solenoid. *Remark 6*. We will assume that $Y$ is infinite. In particular, this ensures that $X$ is infinite and that $g$ is not a homeomorphism, see [@DeeYas]. The pair $(Y, g)$ will be called a presolenoid and $(X,\varphi)$ the associated solenoid or Smale space. **Theorem 7**. *[@Wie Theorems A and B on page 4] [\[WielerTheorem\]]{#WielerTheorem label="WielerTheorem"} Suppose that $(Y, \mathrm{d}_Y, g)$ satisfies Wieler's axioms. Then the associated Wieler solenoid $(X, \mathrm{d}_X, \varphi)$ is a Smale space with totally disconnected stable sets. The constants in Wieler's definition give Smale space constants: $\epsilon_X=\frac{\beta}{2}$ and $\lambda=\gamma$. Moreover, if ${\bf x}=(x_n)_{n\in\mathbb{N}} \in X$ and $0< \epsilon \le \frac{\beta}{2}$, the locally stable and unstable sets of $(X, \mathrm{d}_X, \varphi)$ are given by $$X^s( {\bf x}, \epsilon)= \{ {\bf y}=(y_n)_{n\in \mathbb{N}} \: | \: y_m= x_m \hbox{ for }0\le m \le K-1 \hbox{ and } \mathrm{d}_X({\bf x},{\bf y}) \le \epsilon \}$$ and $$X^u({\bf x}, \epsilon) =\{ {\bf y}=(y_n)_{n\in \mathbb{N}} \: | \: \mathrm{d}_Y(x_n,y_n)< \epsilon \: \forall n \hbox{ and } \mathrm{d}_X({\bf x},{\bf y}) \le \epsilon \}$$ respectively.* *Conversely, if $(X, \varphi)$ is an irreducible Smale space with totally disconnected stable sets, then there exists a triple $(Y, \mathrm{d}_Y, g)$ satisfying Wieler's axioms such that $(X, \varphi)$ is conjugate to the Wieler solenoid associated to $(Y, \mathrm{d}_Y, g)$.* *Remark 8*. Wieler's axioms and the previous theorem should be compared with work of Williams [@Wil]. As mentioned in the introduction, an important difference between the two is that Wieler's are purely metric space theoretic. If a triple $(Y, \mathrm{d}_Y, g)$ satisfies Williams' axioms the inverse limit space $X$ with $\varphi$ as in definition [Definition 5](#WieSolenoid){reference-type="ref" reference="WieSolenoid"} is also Smale space and we will refer to such Smale spaces as Williams solenoids. However, we are most concerned with the more general case of a Wieler solenoid so we will not review Williams' axioms but rather direct the reader to [@Wil] for more details. An important special case occurs when $g$ is a local homeomorphism that satisfies Wieler's axioms. This special case was studied in detail in [@DGMW] (also see [@ThoAMS Section 4.5]). We recall a salient characterization of how refinements of Wieler's axioms (Definition [Definition 4](#WielerAxioms){reference-type="ref" reference="WielerAxioms"}) are equivalent to $g$ being a local homeomorphism, more detailed statements can be found in [@DGMW Section 3]. **Theorem 9** (Lemma 3.7 and 3.8 of [@DGMW]). *Let $(Y, \mathrm{d}_Y)$ be a compact metric space, and $g: Y \rightarrow Y$ be a continuous surjective map. The following are equivalent:* - *$(Y,\mathrm{d}_Y, g)$ satisfies Wieler's axioms and $g$ is a local homeomorphism.* - *$(Y,\mathrm{d}_Y, g)$ satisfies Wieler's axiom 1 and $g$ is open.* - *$(Y,\mathrm{d}_Y, g)$ satisfies Wieler's axiom $2$ and $g^K$ is locally expanding (for the $K$ in Wieler's axiom 2).* *Remark 10*. The reader is encouraged to compare Theorem [Theorem 9](#equiforlocalhom){reference-type="ref" reference="equiforlocalhom"} to the more satisfying situation arising from going to a non-Hausdorff setting in Theorem [Theorem 45](#WielerNonHausOrbExp){reference-type="ref" reference="WielerNonHausOrbExp"}. 
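Before turning to explicit examples, the following small Python sketch (our own illustration, not taken from the literature) makes Definition [Definition 5](#WieSolenoid){reference-type="ref" reference="WieSolenoid"} concrete for the circle-doubling pre-solenoid of Example 11 below with $n=2$: points of the inverse limit are approximated by finite backward orbits, and one can watch $\varphi$ contract a pair that agrees in the zeroth coordinate (the stable direction) and expand a pair lying in a common local unstable set, in line with Theorem [\[WielerTheorem\]](#WielerTheorem){reference-type="ref" reference="WielerTheorem"}. The truncation depth and the constants $K=1$, $\gamma=1/2$ are choices made only for this toy computation.

```python
# Toy model (illustration only) of the Wieler solenoid of Definition 5 for the
# doubling map on the circle R/Z, truncating backward orbits at finite depth N.
import random

GAMMA, K, N = 0.5, 1, 25

def d_Y(a, b):
    """Arc-length metric on the circle R/Z."""
    t = abs(a - b) % 1.0
    return min(t, 1.0 - t)

def g(y):
    """The doubling map z -> z^2, written additively on R/Z."""
    return (2.0 * y) % 1.0

def backward_orbit(y0, branches):
    """Truncated backward orbit (y0, y1, ..., yN) with g(y_{i+1}) = y_i;
    branches[i] in {0.0, 0.5} selects one of the two g-preimages at each step."""
    ys = [y0]
    for b in branches:
        ys.append(ys[-1] / 2.0 + b)
    return ys

def phi(ys):
    """phi(y0, y1, ...) = (g(y0), y0, y1, ...), truncated back to length N + 1."""
    return [g(ys[0])] + ys[:-1]

def d_X(xs, ys):
    """The metric of Definition 5 (ignoring the truncation error)."""
    def d_prime(us, vs):
        return max(GAMMA ** n * d_Y(a, b) for n, (a, b) in enumerate(zip(us, vs)))
    total, us, vs = 0.0, xs, ys
    for i in range(K + 1):
        total += GAMMA ** i * d_prime(us, vs)
        us, vs = phi(us), phi(vs)
    return total

random.seed(1)
branches = [random.choice([0.0, 0.5]) for _ in range(N)]
x = backward_orbit(0.10, branches)
u = backward_orbit(0.10 + 1e-3, branches)              # same branches: unstable perturbation
s = backward_orbit(0.10, [0.5 - b for b in branches])  # same base point: stable direction
print("unstable pair: d_X =", round(d_X(x, u), 6), "-> after phi:", round(d_X(phi(x), phi(u)), 6))
print("stable pair  : d_X =", round(d_X(x, s), 6), "-> after phi:", round(d_X(phi(x), phi(s)), 6))
```

In the output the distance between the unstable pair roughly doubles under $\varphi$, while the distance between the stable-direction pair is halved, matching the constants $\epsilon_X=\beta/2$ and $\lambda=\gamma$ of Theorem [\[WielerTheorem\]](#WielerTheorem){reference-type="ref" reference="WielerTheorem"} in this toy setting.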
A list of examples of Wieler solenoids can be found in [@DeeYas]. Three explicit examples that are relevant and illustrative of the results in this paper are the following: *Example 11* ($n$-solenoid). Let $S^1 \subseteq \mathbb{C}$ be the unit circle. Take $n > 1$ and define $g: S^1 \rightarrow S^1$ via $z \mapsto z^n$. Since $g$ is open and expansive (notice that $|g'(z)|=n$), one readily verifies Wieler's axioms for $(S^1, g)$ (see Theorem [Theorem 9](#equiforlocalhom){reference-type="ref" reference="equiforlocalhom"}). Hence the associated inverse limit is a Smale space. It is worth emphasizing that in this case $g$ is a local homeomorphism. *Example 12* ($ab/ab$-solenoid). Let $Y = S^1 \vee S^1$ be the wedge sum of two circles as in Figure [\[Figure-ab/ab-PreSolenoid\]](#Figure-ab/ab-PreSolenoid){reference-type="ref" reference="Figure-ab/ab-PreSolenoid"}. The map $g: Y \rightarrow Y$ is defined using Figure [\[Figure-ab/ab-PreSolenoid\]](#Figure-ab/ab-PreSolenoid){reference-type="ref" reference="Figure-ab/ab-PreSolenoid"}. In Figure [\[Figure-ab/ab-PreSolenoid\]](#Figure-ab/ab-PreSolenoid){reference-type="ref" reference="Figure-ab/ab-PreSolenoid"}, we consider the outer circle to be the $a$-circle and the inner circle to be the $b$-circle. Each line segment labelled with $a$ in Figure [\[Figure-ab/ab-PreSolenoid\]](#Figure-ab/ab-PreSolenoid){reference-type="ref" reference="Figure-ab/ab-PreSolenoid"} is mapped onto the $a$-circle (i.e., the outer circle); while, each line segment labelled with $b$ in Figure [\[Figure-ab/ab-PreSolenoid\]](#Figure-ab/ab-PreSolenoid){reference-type="ref" reference="Figure-ab/ab-PreSolenoid"} is mapped onto the $b$-circle (i.e., the inner circle). The mapping is done in an orientation-preserving way, provided we have oriented both circles the same way, say clockwise. Note that $g$ is not a local homeomorphism in this example. For more details on this specific example and one-solenoids in general, see [@ThoSol; @WilOneDim; @Yi]. The next example is also of this form. *Example 13* ($aab/ab$-solenoid). Again, we take $Y = S^1 \vee S^1$ to be the wedge sum of two circles but with labels as in Figure [\[Figure-aab/ab-PreSolenoid\]](#Figure-aab/ab-PreSolenoid){reference-type="ref" reference="Figure-aab/ab-PreSolenoid"}. This example has been studied in [@DeeGofYasFellAlgPaper; @DeeYas]. The map $g: Y \rightarrow Y$ is defined from Figure [\[Figure-aab/ab-PreSolenoid\]](#Figure-aab/ab-PreSolenoid){reference-type="ref" reference="Figure-aab/ab-PreSolenoid"} via the same process as in Example [Example 12](#ababSolEx){reference-type="ref" reference="ababSolEx"}. The map $g$ is of course different than the one in Example [Example 12](#ababSolEx){reference-type="ref" reference="ababSolEx"} because the labels are different. Again, the resulting map $g$ is not a local homeomorphism and is an example of a one-solenoid, again see [@ThoSol; @WilOneDim; @Yi] for more on this class of examples. ## Fell algebras and their spectrum {#fellalgsubsec} Let us recall the basic facts about Fell algebras that we make use of. The spectrum of a separable $C^*$-algebra $A$ is defined as the set $\hat{A}$ of equivalence classes of irreducible representations (see [@Dix:C* Chapter 3]). The spectrum $\hat{A}$ can be topologized in several different ways, for instance in the Fell topology or the Jacobson topology. For a Fell algebra, the topologies coincide. The spectrum $\hat{A}$ is locally quasi-compact by [@Dix:C* Corollary 3.3.8]. 
A $C^*$-algebra $A$ is a Fell algebra if every $[\pi_0]\in \hat{A}$ admits a neighborhood $U$ and an element $b\in A$ such that $\pi(b)$ is a rank one projection for all $[\pi]\in U$. An equivalent definition of a Fell algebra is that $A$ is generated by its abelian elements. For details, see [@HKS Chapter 3]. A Fell algebra has locally Hausdorff spectrum (i.e. any $[\pi]\in \hat{A}$ has a Hausdorff neighborhood) by [@archsom Corollary 3.4]. The spectrum of a $C^*$-algebra is always locally quasi-compact. The properties of the spectrum of a Fell algebra can be summarized as being locally Hausdorff and locally locally compact (see [@CHR Chapter 3]). **Definition 14**. Let $\tilde{Y}$ be a topological space. A Hausdorff resolution of $\tilde{Y}$ is a surjective local homeomorphism $\psi:X\to \tilde{Y}$ from a locally compact, Hausdorff space $X$. *Example 15*. The main example of a Fell algebra that we will concern ourselves with arises in a rather explicit way from a Hausdorff resolution. The construction can be found in [@CHR Corollary 5.4]. Suppose that $\psi:X\to \tilde{Y}$ is a Hausdorff resolution of a topological space $\tilde{Y}$. It follows that $\tilde{Y}$ is locally Hausdorff and locally locally compact, and second countable if $X$ is. We define the equivalence groupoid $$R(\psi):=X\times_\psi X:=\{(y_1,y_2)\in X\times X: \psi(y_1)=\psi(y_2)\}.$$ By declaring the domain and range mappings $d(y_1,y_2):=y_2$ and $r(y_1,y_2):=y_1$ to be local homeomorphisms, $R(\psi)$ becomes an etale groupoid over $X$. By [@CHR Corollary 5.4], $C^*(R(\psi))$ is a Fell algebra with vanishing Dixmier-Douady invariant and spectrum $\tilde{Y}$. We also note that $R(\psi)$ is amenable so $C^*(R(\psi))=C^*_r(R(\psi))$. The theory of Dixmier-Douady invariants of Fell algebras was introduced and developed in [@HKS], also see [@CHR]. We only need to consider Fell algebras with vanishing Dixmier-Douady class in which case the following theorem (see [@HKS; @CHR]) reduces the problem to a more manageable situation. **Theorem 16**. *We have the following relationship between Fell algebras and non-Hausdorff spaces.* 1. *Let $A$ be a separable Fell algebra with vanishing Dixmier-Douady invariant. Then the locally Hausdorff and locally locally compact space $\hat{A}$ determines $A$ up to stable isomorphism in the sense that whenever $A'$ is a separable Fell algebra with vanishing Dixmier-Douady invariant then a homeomorphism $h:\hat{A}\to \widehat{A'}$ can be lifted to a stable isomorphism $A\otimes \mathbb{K}\cong A'\otimes \mathbb{K}$.* 2. *A topological space $\tilde{Y}$ is locally Hausdorff and locally locally compact if and only if it admits a Hausdorff resolution $\psi:X\to \tilde{Y}$. If $\tilde{Y}$ is second countable then $X$ can also be choosen second countable. In particular, any second countable, locally Hausdorff, locally locally compact topological space is the spectrum of a separable Fell algebra with vanishing Dixmier-Douady invariant.* *Remark 17*. One can view the previous theorem as follows. Taking the spectra defines an equivalence between the category of stable isomorphism classes of separable Fell algebras with vanishing Dixmier-Douady invariants and that of second countable, locally Hausdorff and locally locally compact spaces. However, it should be emphasized that the morphisms in the category in the previous sentence are homeomorphisms; one cannot generalize to the case of continuous map, even in the case of locally Hausdorff and compact spaces. 
For a proof of the first statement on stable uniqueness, see [@HKS Theorem 7.13]. The existence result in the second statement can be found in [@CHR Corollary 5.5]. ## Correspondences Results from this section and the next will not be needed until Section [5](#cpsectionone){reference-type="ref" reference="cpsectionone"}. Furthermore, the reader only interested in the purely dynamical results of the present paper can skip to Section [2](#SecCstarWieler){reference-type="ref" reference="SecCstarWieler"} without issue. A $C^*$-correspondence ${}_{A}E_B$ is a right Hilbert $B$-module equipped with a left action given by a homomorphism $\varphi_E: A\rightarrow \mathcal{L}(E)$, where $\mathcal{L}(E)$ denotes the $C^*$-algebra of adjointable operators on $E$. In the literature one also finds the term $A-B$-Hilbert $C^*$-module for a $C^*$-correspondence from $A$ to $B$. A $C^*$-correspondence homomorphism ${}_{A}E_B \rightarrow {}_{A}F_B$ is a $B$-linear map $\Phi: E\rightarrow F$ satisfying $$\Phi(a\cdot \xi)=a\cdot \Phi(\xi) \hbox{ and } \langle\xi, \nu\rangle_C = \langle\Phi(\xi), \Phi(\nu)\rangle_C,$$ for all $a\in A$, and $\xi, \nu\in E.$ **Definition 18**. An $A-B$ bimodule $E_0$ is called a *pre-correspondence* if it has a $B$-valued semi-inner product satisfying $$\langle\xi, \nu\cdot b\rangle = \langle\xi,\nu\rangle b , \hspace{.5cm} \langle\xi,\nu\rangle^*=\langle\nu ,\xi\rangle$$ and $\langle a\cdot \xi, a\cdot \xi\rangle \leq \norm{a}^2\langle\xi,\xi\rangle$ for all $a\in A, b\in B$ and $\xi,\nu \in E_0$. Modding out by the elements of length $0$ and completing gives a $C^*$-correspondence ${}_{A}E_B$ . We call ${}_{A}E_B$ the *completion* of the pre-correspondence $E_0$. **Proposition 19**. *[[@enchilada Lemma 1.23]]{.nodecor} Let $E_0$ be an $A-B$ pre-correspondence given with the completion ${}_{A}E_B$ , and let $F$ be an $A-B$ correspondence. 
If there is a map $\Phi : E_0 \rightarrow F$ satisfying $$\Phi (a\cdot \xi) = \varphi_Z (a) \Phi(\xi) \qquad\text{and }\qquad \langle\Phi(\xi), \Phi(\nu) \rangle_B = \langle\xi, \nu\rangle_{B} ,$$ for all $a\in A$ and $\xi,\nu\in E_0$, then $\Phi$ extends uniquely to an injective $A-B$ correspondence homomorphism $\tilde{\Phi}: E \rightarrow F.$* The *balanced tensor product* $E\otimes_BF$ of an $A-B$ correspondence $E$ and a $B-C$ correspondence $F$ is formed as follows: the algebraic tensor product $E\odot F$ is a pre-correspondence with the $A-C$ bimodule structure satisfying $$a(\xi\otimes \nu)c=a\xi\otimes \nu c \qquad\text{for }a\in A,\xi\in E,\nu\in F,c\in C,$$ and the unique $C$-valued semi-inner product whose values on elementary tensors are given by $$\langle\xi_1\otimes \nu_1 ,\xi_2 \otimes \nu_2\rangle_C=\langle\nu_1,\langle\xi_1,\xi_2\rangle_B\cdot \nu_2\rangle_C \qquad\text{for }\xi_1, \xi_2 \in E,\nu_1,\nu_2\in F.$$ This semi-inner product defines a $C$-valued inner product on the quotient $E{\odot}_BF$ of $E\odot F$ by the subspace generated by elements of form $$\xi\cdot b \otimes \nu - \xi\otimes \varphi_Y(b)\nu \qquad\text{($\xi\in E$, $\nu\in F$, $b\in B$) }.$$ The completion $E\otimes_B F$ of $E{\odot}_BF$ with respect to the norm coming from the $C$-valued inner product is an $A-B$ correspondence, where the left action is given by $$A\rightarrow \mathcal{L}(E\otimes_BF), \qquad\text{$a\mapsto \varphi_E(a)\otimes 1_F,$ }$$ for $a\in A.$ In other words, the $A-C$ correspondence $E\otimes_BF$ is the completion of the pre-correspondence $E\odot F$ (as in Definition [Definition 18](#precor){reference-type="ref" reference="precor"}). We denote the canonical image of $\xi\otimes \nu$ in $E\otimes_BF$ by $\xi\otimes_B\nu$. The term *balanced* refers to the property $$\xi\cdot b\otimes_B\nu=\xi\otimes_Bb\cdot \nu \qquad\text{for }\xi\in E,b\in B,\nu\in F,$$ which is a consequence of the construction. **Definition 20**. [@Sk Definition 5.7][\[mor\]]{#mor label="mor"} Hilbert modules $E_A$ and $F_B$ are Morita equivalent if there exists an imprimitivity bimodule ${}_{B}M_C$ such that $E\otimes_BM \cong F$ as Hilbert C-modules. We now put Definition [\[mor\]](#mor){reference-type="ref" reference="mor"} in the setting of $C^*$-correspondences: **Definition 21**. $C^*$-correspondences ${}_{A}E_B$ and ${}_{A}F_C$ are Morita equivalent if there exists an imprimitivity bimodule ${}_{B}M_C$ such that $E\otimes_BM \cong F$ as $A-C$ correspondences. ## Groupoid Actions and Equivalence The correspondences reviewed in the last section will in this paper arise from groupoids and their actions. Again, the reader only interested in the purely dynamical results can skip to Section [2](#SecCstarWieler){reference-type="ref" reference="SecCstarWieler"} without issue. **Definition 22**. Suppose $G$ is a groupoid and that $X$ is a set together with a map $\rho: X\rightarrow G^{(0)}$ called the *moment map*. Then a left action of $G$ on $X$ is a map $(g,x)\mapsto g\cdot x$ from $G*X=\{ (g,x)\in G\times X: s(g)=\rho(x)\}$ to $X$ such that - $\rho(x)\cdot x = x$, for all $x\in X$ and - if $(g,g')\in G^{(2)}$ and $(g',x)\in G*X$, then $(g, g'\cdot x)\in G*X$ and $gg'\cdot x= g\cdot(g'\cdot x).$ *Right actions are defined analogously, and we denote by $\sigma$ the moment map for a right action.* **Definition 23**. 
[@M Definition 1.2] [\[groupoidcorr\]]{#groupoidcorr label="groupoidcorr"} Let $G_1$ and $G_2$ be second countable locally compact Hausdorff groupoids and $Z$ a second countable locally compact Hausdorff space. The space $Z$ is a groupoid correspondence from $G_1$ to $G_2$ if it satisfies the following properties: 1. there exists a left proper action of $G_1$ on $Z$ such that $\rho$ is an open map; 2. there exists a right proper action of $G_2$ on Z; 3. the $G_1$ and $G_2$ actions commute; 4. the map $\rho$ induces a bijection of $Z/G_2$ onto $G_1^{(0)}$. **Theorem 24**. *[[@M Theorem 1.4]]{.nodecor}[\[corGroupoid\]]{#corGroupoid label="corGroupoid"} Let $G_1$, $G_2$ be second countable locally compact Hausdorff etale groupoids; and let $Z$ be a groupoid correspondence from $G_1$ to $G_2$. Then the pre-correspondence ${}_{C_c(G_2)}C_c(Z)_{C_c(G_1)}$ extends to a correspondence from $C^*(G_2)$ to $C^*(G_1)$ with the actions $$\begin{aligned} (\xi\cdot a)(z) &= \sum_{\substack{g\in G_1 with \\s(g)=\rho(z)}} \xi(g\cdot z)a(g)\\ (b\cdot \xi)(z)&=\sum_{\substack{g\in G_2 with \\ \sigma(z)=r(g^{-1})}} b(g^{-1})\xi(z\cdot g^{-1})\end{aligned}$$ for $\xi\in C_c(Z), a\in C_c(G_1), b\in C_c(G_2), z\in Z.$ The inner product is defined by $$\langle\xi_1, \xi_2\rangle(g) = \sum_{\substack{h^{-1}\in G_2 with \\ r(h^{-1})=\sigma(z)}} \overline{\xi_1(z\cdot h^{-1})}\xi_2 (g^{-1}\cdot z\cdot h^{-1}),$$ where $g\in G_1, \xi_1, \xi_2\in C_c(Z),$ and $z\in Z$ such that $r(g)=\rho(z)$.* # $C^*$-algebras associated to a Wieler solenoid {#SecCstarWieler} The fine structure of the stable algebra of a Wieler solenoid was studied in [@DeeYas], extending results from [@DGMW] in the case of the pre-solenoid being defined from a local homeomorphism. We here review and refine the relevant points of [@DeeYas], leading up to the stable algebra of a Wieler solenoid being a stationary inductive limit of a Fell algebra defined from the dynamics. This Fell algebra plays an important role in the paper, as its spectrum will be the non-Hausdorff dynamical system $(X^u({\bf P}))/{\sim_0},\tilde{g})$ of main interest in this paper. ## The stable algebra of a Smale space Following [@PutSpi], we construct the stable groupoid of $(X, \varphi)$. Let ${\bf P}$ denote a finite $\varphi$-invariant set of periodic points of $\varphi$ and define $$X^u({\bf P})=\{ x \in X \: | \: x \sim_u p \hbox{ for some }p \in {\bf P}\}$$ and $$G^s({\bf P}) := \{ (x, y) \in X^u({\bf P}) \times X^u({\bf P}) \: | \: x \sim_s y \}.$$ Still following [@PutSpi], a topology is defined on $G^s({\bf P})$ by constructing a neighborhood base. Suppose $(x,y)\in G^s({\bf P})$. Then there exists $k\in \mathbb{N}$ such that $$\varphi^k(x) \in X^s\left(\varphi^k(y), \frac{\epsilon_X}{2}\right).$$ Since $\varphi$ is continuous there exists $\delta>0$ such that $$\varphi^k( X^u(y, \delta)) \subseteq X^u\left(\varphi^k(y), \frac{\epsilon_X}{2}\right).$$ Using this data, we define a function $h_{(x,y,\delta)} : X^u(y, \delta) \rightarrow X^u(x, \epsilon_X)$ via $$z \mapsto \varphi^{-k} ( [ \varphi^k(z) , \varphi^k(x) ])$$ and have the following result from [@PutSpi]: **Theorem 25**. *The function $h=h_{(x,y,\delta)}$ is a homeomorphism onto its image and (by letting $x$, $y$, and $\delta$ vary) the sets $$V(x,y,h, \delta) := \{ ( h(z), z ) \: | \: z \in X^u(y, \delta) \}$$ forms a neighborhood base for an étale topology on the groupoid $G^s({\bf P})$. Moreover, the groupoid $G^s({\bf P})$ is amenable, second countable, locally compact, and Hausdorff.* **Definition 26**. 
The stable Ruelle groupoid is the groupoid $G^s({\bf P}) \rtimes \mathbb{Z}$ where the $\mathbb{Z}$-actions is the one induced from $\varphi|_{X^u(P)}$; the associated $C^*$-algebra is call the stable Ruelle algebra. It is worth noting that this definition requires that ${\bf P}$ is $\varphi$-invariant (as was assumed above). ## The open subrelation, Fell algebra and its spectrum {#tildeZeroSection} We review the construction of $\sim_0$ and the associated groupoid $C^*$-algebra studied in [@DeeYas]. Let $\epsilon_Y>0$ be the global constant defined in [@DeeYas] and $\pi_0 : X^u({\bf P}) \rightarrow Y$ denote the map defined via $${\bf x}= (x_n)_{n\in \mathbb{N}} \mapsto x_0.$$ **Definition 27**. Suppose ${\bf x}$ and ${\bf y}$ are in $X^u({\bf P})$. Then ${\bf x}\sim_0 {\bf y}$ if 1. $\pi_0({\bf x})=\pi_0({\bf y})$ (i.e., $x_0 = y_0$); 2. there exists $0< \delta_{{\bf x}} < \epsilon_Y$ and open set $U \subseteq X^u({\bf y}, \epsilon_Y)$ such that $$\pi_0 ( X^u({\bf x}, \delta_{{\bf x}})) =\pi_0 (U).$$ Let $G_0({\bf P}) = \{ ({\bf x}, {\bf y}) \: | \: {\bf x}\sim_0 {\bf y}\}$. To parse the requirements of Definition [Definition 27](#tildeZeroDef){reference-type="ref" reference="tildeZeroDef"}, the reader can return to Theorem [\[WielerTheorem\]](#WielerTheorem){reference-type="ref" reference="WielerTheorem"} for a description of the local unstable sets. Results in [@DeeYas] imply that $G_0({\bf P})$ is an open subgroupoid of $G^s({\bf P})$ and hence that $C^*(G_0({\bf P}))$ is a subalgebra of $C^*(G^s({\bf P}))$. Furthermore, we can define $$G_k({\bf P}) = \{ ({\bf x}, {\bf y}) \: | \: \varphi^k({\bf x}) \sim_0 \varphi^k({\bf y}) \}$$ and one of the main results of [@DeeYas] is the following: **Theorem 28**. *Using the notation above, there is a nested sequence of étale subgroupoids $$G_0({\bf P}) \subset G_1({\bf P}) \subset G_2({\bf P}) \subset \ldots$$ of $G^s({\bf P})$ such that $G^s({\bf P}) = \bigcup_{k=0}^\infty G_k({\bf P})$ and each $G_k({\bf P})$ is isomorphic to $G_0({\bf P})$ in the natural way $({\bf x}, {\bf y}) \mapsto (\varphi^{k}({\bf x}),\varphi^{k}({\bf y}))$.* In practical terms, this theorem reduces the study of $G^s({\bf P})$ to $G_0({\bf P})$ and likewise the study of the $C^*$-algebra $C^*(G^s({\bf P}))$ to $C^*(G_0({\bf P}))$. Since $C^*(G_0({\bf P}))$ is a type I $C^*$-algebra many of its properties can be determined from its spectrum, $X^u({\bf P}))/{\sim_0}$. Furthermore, again following [@DeeYas], we define $\tilde{g} : X^u({\bf P}))/{\sim_0} \rightarrow X^u({\bf P}))/{\sim_0}$ via $$[{\bf x}] \mapsto [(g(x_0), g(x_1), \ldots )]$$ and $r: X^u({\bf P}))/{\sim_0} \rightarrow Y$ via $$[{\bf x}] \mapsto x_0.$$ Proofs that $\tilde{g}$ and $r$ are well-defined can be found in [@DeeYas]. Moreover, there is a commutative diagram $$\begin{CD} X^u({\bf P}) @>\varphi >> X^u({\bf P}) \\ @Vq VV @Vq VV \\ X^u({\bf P})/{\sim_0} @>\tilde{g} >> X^u({\bf P})/{\sim_0} \\ @Vr VV @Vr VV \\ Y @>g>> Y \end{CD}$$ **Proposition 29**. *The maps $q: X^u({\bf P}) \rightarrow X^u({\bf P})/{\sim_0}$ and $\tilde{g} : X^u({\bf P}))/{\sim_0} \rightarrow X^u({\bf P}))/{\sim_0}$ are each a local homeomorphisms (but not in general covering maps). In particular, $q: X^u({\bf P}) \rightarrow X^u({\bf P})/{\sim_0}$ is a Hausdorff resolution of $X^u({\bf P})/{\sim_0}$.* Next, we discuss $G_0({\bf P})$ and $X^u({\bf P})/{\sim_0}$ for the three examples considered in Section [1.2](#SubSecWielerBasic){reference-type="ref" reference="SubSecWielerBasic"}. *Example 30*. 
Recall the setup of Example [Example 11](#nSolEx){reference-type="ref" reference="nSolEx"}. The space $Y$ is the unit circle, $S^1 \subseteq \mathbb{C}$ and (with fixed $n > 1$) $g: S^1 \rightarrow S^1$ is defined via $z \mapsto z^n$. In this case, we take ${\bf P}$ to be the set containing the single point $(1,1,1, \ldots)$. Then $X^u({\bf P})$ is homeomorphic to $\mathbb{R}$, $\sim_0$ can be identified with the equivalence relation $x \sim y$ when $x-y\in \mathbb{Z}$ and $(X^u({\bf P})/{\sim_0}, \tilde{g})$ can be identified with the original system $(S^1, g)$. In fact the results in this example generalize to the case when $g: Y \rightarrow Y$ is a local homeomorphism, in which case $(X^u({\bf P})/{\sim_0}, \tilde{g})=(Y, g)$, see [@DeeYas] for details. *Example 31*. The relations $\sim_0$ and $\sim_1$ associated to the $aab/ab$-solenoid defined in Example [Example 13](#aababSolEx){reference-type="ref" reference="aababSolEx"} can be viewed as in Figures [\[Figure-aab/ab-Tilde_0\]](#Figure-aab/ab-Tilde_0){reference-type="ref" reference="Figure-aab/ab-Tilde_0"} and [\[Figure-aab/ab-Tilde_1\]](#Figure-aab/ab-Tilde_1){reference-type="ref" reference="Figure-aab/ab-Tilde_1"}. The details for $G_0({\bf P})$ (i.e., $\sim_0$), which is illustrated in Figure [\[Figure-aab/ab-Tilde_0\]](#Figure-aab/ab-Tilde_0){reference-type="ref" reference="Figure-aab/ab-Tilde_0"}, are as follows. We take ${\bf P}$ to be the set containing the fixed point associated to the wedge point (that is, $(p, p, p, \ldots)$ see Figure [\[Figure-aab/ab-PreSolenoid\]](#Figure-aab/ab-PreSolenoid){reference-type="ref" reference="Figure-aab/ab-PreSolenoid"}) and then have that $X^u({\bf P})$ is homeomorphic to the real line. Intervals labelled with $a$ (resp. $b$) are mapped by $\pi_0=g\circ q$ to the outer (resp. inner) circle in $Y$. Identifying the endpoints of these intervals as $\mathbb{Z}$, we have that two integer points are equivalent if and only if the intervals to the left and right are labelled the same. While non-integer points are equivalent if and only if they are in intervals with the same label and their difference is in $\mathbb{Z}$. Figure [\[Figure-aab/ab-Tilde_1\]](#Figure-aab/ab-Tilde_1){reference-type="ref" reference="Figure-aab/ab-Tilde_1"} illustrates $G_1({\bf P})$ in a similar way. The reader can find more details in both cases in [@DeeYas]. We will show that in general $G_0({\bf P})$ is not closed in $G_1({\bf P})$ using this example. Notice that the points $p$ and $q$ are equivalent with respect to $\sim_1$ but not with respect to $\sim_0$. However there is a sequence of points (namely $p+\frac{1}{n} \sim_0 q+\frac{1}{n}$ where $(p+\frac{1}{n}, q+\frac{1}{n})$ converge to $(p, q)$). It follows that $\sim_0$ is not closed in $\sim_1$. Continuing with this example, we discuss $(X^u({\bf P})/{\sim_0}, \tilde{g})$. Notice that the map $g: Y \rightarrow Y$ is not a local homeomorphism. Furthermore, the discussion of $G_0({\bf P})$ above shows that $X^u({\bf P})/{\sim_0}$ is given as follows. The point $p \in Y$ splits into three non-Hausdorff points, denoted $ab, ba, aa$. This space is illustrated in Figure [\[Figure-aab/ab-QuotientSpace\]](#Figure-aab/ab-QuotientSpace){reference-type="ref" reference="Figure-aab/ab-QuotientSpace"}. These points correspond to the three different $\sim_0$-equivalence classes for "integer\" points, as seen in Figure [\[Figure-aab/ab-Tilde_0\]](#Figure-aab/ab-Tilde_0){reference-type="ref" reference="Figure-aab/ab-Tilde_0"}. 
Open neighborhoods of the three points (that is, $ab, ba, aa$) are pictured in Figure [\[Figure-aab/ab-OpenNeighborhoods\]](#Figure-aab/ab-OpenNeighborhoods){reference-type="ref" reference="Figure-aab/ab-OpenNeighborhoods"}. Next, we discuss the map $\tilde{g} : X^u({\bf P})/{\sim_0} \rightarrow X^u({\bf P})/{\sim_0}$. The map $\tilde{g}$ takes points labelled $ab$, $ba$, and $aa$ to $ba$. It takes the point labelled $q_1$ to $aa$, the point labelled $q_2$ to $ab$, and the point labelled $q_3$ to $ab$. The other points (i.e., the ones without labels) are maps in the same way as $g: Y \rightarrow Y$. Some important properties to notice about $\tilde{g} : X^u({\bf P})/{\sim_0} \rightarrow X^u({\bf P})/{\sim_0}$ are the following: 1. $\tilde{g}$ is a local homeomorphisms, in particular it is open; 2. $\tilde{g}(r^{-1}(p))=\{ ba \}$ where $r : X^u({\bf P})/{\sim_0} \rightarrow Y$ and $p$ is the wedge point of $Y$; 3. informally, $\tilde{g}$ is locally expanding. The importance of the second and third properties will be seen in Section [3](#ExpDSNonHausSec){reference-type="ref" reference="ExpDSNonHausSec"}. In particular, $\tilde{g}$ will be shown to be "forward orbit expansive\" in Section [3](#ExpDSNonHausSec){reference-type="ref" reference="ExpDSNonHausSec"}, which makes precise the third item in the list above. *Example 32*. When $(Y, g)$ is the ab/ab-solenoid the map $g$ is not a local homeomorphism. We take the set ${\bf P}$ to be the set containing the single element $(p,p, \ldots)$ where $p$ is the wedge point. Then, using a similar method to the one in the previous example, one can show that $X^u({\bf P})$ is homeomorphic to $\mathbb{R}$, $X^u({\bf P})/{\sim_0}$ is homeomorphic to the circle and $\tilde{g}$ is the two-fold cover of the circle. The main point of this example is that $X^u({\bf P})/{\sim_0}$ can be Hausdorff even when $g$ is not a local homeomorphism. We restate [@DeeYas Lemma 4.4] here for ease of the reader; it will be used a number of times below. **Lemma 33**. *There exists constant $K_0>0$ such that if ${\bf x}$ and ${\bf y}$ are in $X^u(P)$ and $x_i = y_i$ for $0\le i \le K_0$, then ${\bf x}\sim_0 {\bf y}$.* **Theorem 34**. *Suppose that $K_0$ is as in the previous lemma, $[{\bf y}]_0\in X^u({\bf P})/{\sim_0}$ and there exists $V$ open in $Y$ such that* 1. *$\pi_0({\bf y})=y_0\in V$ and* 2. *$g^{K_0}|_V$ is injective.* *Then $r^{-1}(y_0)=\{ [{\bf y}]_0\}$.* *Proof.* Take $\delta>0$ such that $B(y_0, \delta) \subseteq V$. Suppose that $\hat{{\bf y}}=(y_0, \hat{y_1}, \hat{y_2}, \ldots) \in X^u({\bf P})$. We must show ${\bf y}\sim_0 \hat{{\bf y}}$. By the previous lemma, $$(g^{K_0}(y), \ldots, g(y), y, y_1, \ldots) \sim_0 (g^{K_0}(y), \ldots, g(y), y, \hat{y}_1, \ldots).$$ By the definition of $\sim_0$, there exists open sets in $X^u(P)$, $U\subseteq X^u(\varphi^{K_0}({\bf y}), \delta)$ and $\hat{U}\subseteq X^u(\varphi^{K_0}(\hat{{\bf y}}), \delta)$, such that 1. $\varphi^{K_0}({\bf y})=(g^{K_0}(y), \ldots, g(y), y, y_1, \ldots) \in U$; 2. $\varphi^{K_0}(\hat{{\bf y}})=(g^{K_0}(y), \ldots, g(y), y, \hat{y}_1, \ldots) \in \hat{U}$; 3. $\pi_0(U)=\pi(\hat{U})$. Since $\varphi|_{X^u({\bf P})}$ is a homeomorphism, both $\varphi^{-K_0}(U)$ and $\varphi^{-K_0}(\hat{U})$ are open in $X^u({\bf P})$. Furthermore, ${\bf y}\in \varphi^{-K_0}(U)$ and $\hat{{\bf y}} \in \varphi^{-K_0}(\hat{U})$. Notice that $\varphi^{-K_0}(U) \subseteq X^u({\bf y}, \delta)$ and $\varphi^{-K_0}(\hat{U}) \subseteq X^u(\hat{{\bf y}}, \delta)$. 
We show that $\pi_0(\varphi^{-K_0}(U)) = \pi_0(\varphi^{-K_0}(\hat{U}))$. Take ${\bf z}\in \varphi^{-K_0}(U)$. Then $$(g^{K_0}(z_0), \ldots, g(z_0), z_0, \ldots) \in U$$ and by the third item in the list above, there exists $(\bar{z}_0, \bar{z}_1, \ldots) \in \hat{U} \subseteq X^u(\hat{{\bf y}}, \delta)$ with $\bar{z}_0=g^{K_0}(z_0)$. Since $(\bar{z}_0, \bar{z}_1, \ldots) \in X^u(\varphi^{K_0}(\hat{{\bf y}}), \delta)$, $\mathrm{d}_Y(z_{K_0}, y_0)< \delta$. However, $g^{K_0}$ is injective on $B(y_0, \delta) \subseteq V$ hence $z_{K_0}=z_0$ (i.e., $z_0$ is the unique preimage of $g^{K_0}(z_0)$ inside $V$). Hence $\pi_0({\bf z})=\pi_0(z_{K_0}, z_{K_0+1}, \ldots)$ and the result follows. ◻ **Corollary 35**. *If $(Y,g)$ is a Williams presolenoid, then $r$ is one-to-one on a dense open set of $X^u({\bf P})/{\sim_0}$.* *Proof.* The conditions of the previous theorem hold for any non-branched point in $Y$. Since the set of branched points is closed and nowhere dense the result follows. (The reader can find the precise definition of branched point in [@Wil]. This is the only proof in the present paper that uses this term.) ◻ To summarize, we believe the results of this section along with the three examples gives sufficient motivation to consider the properties of the dynamical systems $(X^u({\bf P}))/{\sim_0}, \tilde{g})$ and its connection to $(X, \varphi)$ and $(Y, g)$. # Expansive dynamics in the non-Hausdorff setting {#ExpDSNonHausSec} Since $X^u({\bf P})/{\sim_0}$ is a non-Hausdorff space and we are interested in the dynamics on it, we discuss the general situation for expansive maps in the non-Hausdorff setting. Much of the general theory is based on work of Achigar, Artigue, and Monteverde [@MR3501269], who consider the case of expansive homeomorphisms in the non-Hausdorff setting. The work in this section differs from [@MR3501269] in that we relax the homeomorphism assumption and consider a pair $(\tilde{Y}, \tilde{g})$ of a compact topological space $\tilde{Y}$ and a continuous surjective map $\tilde{g}: \tilde{Y} \rightarrow \tilde{Y}$; additional conditions on $\tilde{g}$ (such as expansiveness and being a local homeomorphism) will be explicitly stated as required. We emphasize that $\tilde{Y}$ need not be Hausdorff. It might be useful for the reader to have a copy of [@MR3501269] while reading the present section, so they compare results here to those in [@MR3501269]. ## Expansive maps on non-Hausdorff spaces {#exponnonhaus} **Definition 36**. Suppose that $\mathcal{U}=\{ U_i \}_{i\in I}$ is an open cover of a topological space and $(x_n)_{n\in \mathbb{N}}$ and $(y_n)_{n\in \mathbb{N}}$ are sequences in the space. Then we write $$\{ x_n, y_n\}_{n\in \mathbb{N}} \prec \mathcal{U}$$ if for each $n\in \mathbb{N}$, there exists $i_n\in I$ such that $$x_n \hbox{ and }y_n \hbox{ are elements of }U_{i_n}.$$ We use similar notation for two-sided sequences (that is, for $(x_n)_{n\in \mathbb{Z}}$ and $(y_n)_{n\in \mathbb{Z}}$) In [@MR3501269 Definition 2.1], the reader can find a notion of expansiveness for homeomorphisms of compact, non-Hausdorff spaces. Upon replacing the conditions of [@MR3501269] involving the $\mathbb{Z}$-action associated to a homeomorphism, with an $\mathbb{N}$-action we arrive at a completely analogous definition of orbit expansive maps that need not be invertible. **Definition 37**. 
We say that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive if there exists a finite open cover of $\tilde{Y}$, $\mathcal{U}=\{ U_i \}_{i=1}^l$, such that if $x$, $y$ are in $Y$ with $$\{ \tilde{g}^n(x), \tilde{g}^n(y) \}_{n\in \mathbb{N}} \prec \mathcal{U},$$ then $x=y$. **Proposition 38**. *Suppose that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive and $(\tilde{X}, \tilde{\varphi})$ is the associated solenoid. Then $(\tilde{X}, \tilde{\varphi})$ is orbit expansive in the sense of [@MR3501269 Definition 2.1].* *Proof.* Take an open cover of $\tilde{Y}$, $\mathcal{U}_{\tilde{Y}}=\{ U_i \}_{i=1}^l$, as in the definition of forward orbit expansive. Form $$\mathcal{U}_X=\{ U_i\times \tilde{Y} \times \tilde{Y}\times \ldots \}_{i=1}^l,$$ which is an open cover of $\tilde{X}$. Suppose ${\bf x}=(x_0, x_1, \ldots)$ and ${\bf y}=(y_0, y_1, \ldots )$ are in $\tilde{X}$ with $$\{ \varphi^n({\bf x}), \varphi^n({\bf y}) \}_{n\in \mathbb{Z}} \prec \mathcal{U}_X.$$ Then there exists $i_0 \in \{ 1, \ldots l \}$ such that ${\bf x}$ and ${\bf y}$ are in $U_{i_0}\times \tilde{Y} \times \tilde{Y}\times \ldots$. Hence, $x_0$ and $y_0$ are in $U_{i_0}$. Using the fact that $\tilde{\varphi}({\bf x})=( \tilde{g}(x_0), \tilde{g}(x_1), \ldots)$ and an induction argument give us $$\{ \tilde{g}^n(x_0), \tilde{g}^n(y_0) \} \prec \mathcal{U}_Y.$$ The definition of forward orbit expansive then implies that $x_0=y_0$. Using the fact that $\tilde{\varphi}^{-1}({\bf x})=( x_1, x_2, \ldots)$ allows us to repeat the above argument to get $x_1=y_1$. The proof is then completed by showing that $x_n=y_n$ for each $n\in \mathbb{N}$ in a similar way; this implies that ${\bf x}={\bf y}$. ◻ **Proposition 39**. *(compare with [@MR3501269 Proposition 2.5]) Suppose that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive, then $\tilde{Y}$ is $T_1$.* *Proof.* Take an open cover of $\tilde{Y}$, $\mathcal{U}=\{ U_i \}_{i=1}^l$, as in the definition of forward orbit expansive. Suppose $x\neq y$ are in $\tilde{Y}$. Then, using the definition of forward orbit expansive, there exists $n\in \mathbb{N}$ and $i_x \neq i_y$ such that $\tilde{g}^n(x) \in U_{i_x}$ but $\tilde{g}^n(x)\not\in U_{i_y}$ and $\tilde{g}^n(y)\in U_{i_y}$ but $\tilde{g}^n(y)\not\in U_{i_x}$. Since $\tilde{g}$ is continuous, the sets $\tilde{g}^{-n}(U_{i_x})$ and $\tilde{g}^{-n}(U_{i_y})$ are open and the properties in the previous sentence imply that these sets are a $T_1$-separation of $x$ and $y$. ◻ **Proposition 40**. *(compare with [@MR3501269 Theorem 2.7]) Suppose that $Y$ is compact and Hausdorff, $(Y, g)$ is forward orbit expansive, and $g$ is an open map. Then $Y$ is metrizable and $(Y, g)$ is forward expansive.* *Proof.* To begin, we note that $g$, in addition to being open, is also a closed map. Let $\mathcal{U}$ be an open cover of $Y$ as in the definition of forward orbit expansive. Given $U\in \mathcal{U}$ and $y\in U$, there exists $V_y$ open such that $$y \in V_y \subseteq \overline{V}_y \subseteq U.$$ Using this fact and the compactness of $Y$, there exists an open $\mathcal{V}=\{V_1, \ldots , V_m\}$ such that for each $V_i$ there exists $U \in \mathcal{U}$ with $\overline{V}_i \subseteq U$. This property and forward orbit expansiveness imply that $${\rm card}\left( \bigcap_{i\ge0} g^i(\overline{V}_{k_i}) \right) \le 1$$ for any $(k_i)_{i\ge0} \in \{ 1, \ldots m\}^{\mathbb{N}}$. Now suppose that $y \in Y$ and $W$ is an open set with $y\in W$. 
Since $\mathcal{V}$ is a cover and $g$ is onto, there exists $(k_i)_{i\ge0} \in \{ 1, \ldots m\}^{\mathbb{N}}$ such $$y \in \bigcap_{i\ge0} g^i(V_{k_i}).$$ It follows that $\bigcap_{i\ge0} g^i(\overline{V}_{k_i})=\{y\}$ and hence that $$W^c \cap \left( \bigcap_{i\ge0} g^i(\overline{V}_{k_i}) \right) = \emptyset.$$ By the reformulation of compactness via the finite intersection property (this is also where we use the fact that $g$ is a closed map) there exists $N\in \mathbb{N}$ such that $$W^c \cap \left( \bigcap_{i\ge0}^N g^i(\overline{V}_{k_i}) \right) = \emptyset$$ and hence $\bigcap_{i\ge0}^N g^i(V_{k_i}) \subseteq W$. It now follows that the collection $$\left\{ \bigcap_{i\ge0}^N g^i(V_{k_i}) \mid N\in \mathbb{N}\hbox{ and }k_i \in \{ 1, \ldots, m\} \right\}$$ is a basis for the topology on $Y$. Since this collection is countable (and $Y$ is compact and Hausdorff), the topology on $Y$ is metrizable. For the second part of the theorem, again let $\mathcal{U}$ be an open cover of $Y$ as in the definition of forward orbit expansive. Fix a metric on $Y$ that induces the given topology. Let $\delta>0$ be the Lebesgue number of $\mathcal{U}$ with respect to $d$. One can show that $\delta$ is an expansive constant for $(Y, g)$; the details are omitted. ◻ **Proposition 41**. *(compare with [@MR3501269 Proposition 2.14]) [\[propPowerIsExpansive\]]{#propPowerIsExpansive label="propPowerIsExpansive"} Suppose that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive, and $\tilde{g}$ is an open map. Then, for each $n\in \mathbb{N}$, $\tilde{g}^n$ is forward orbit expansive.* *Proof.* Take an open cover of $\tilde{Y}$, $\mathcal{U}=\{ U_i \}_{i=1}^l$, as in the definition of forward orbit expansive for $\tilde{g}$. Using the openness of the map $\tilde{g}$ we have that $$\mathcal{U}_n= \{ U_{i_1} \cap \tilde{g}(U_{i_2}) \cap \ldots \cap \tilde{g}^{n-1}(U_{i_{n-1}}) \mid i_1, i_2, \ldots , i_{n-1} \in \{ 1, \ldots, l\} \}$$ is an open cover of $\tilde{Y}$. Moreover, the fact that $\mathcal{U}$ is forward orbit expansive for $\tilde{g}$ implies that $\mathcal{U}_n$ is forward orbit expansive for $\tilde{g}^{n}$; the details are omitted. ◻ **Proposition 42**. *Suppose that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive with $\tilde{g}$ open, $(\tilde{Y}_{{\rm Haus}}, \tilde{g}_{{\rm Haus}})$ denotes its Hausdorffization (see [@munsterthe]), and $\tilde{r}: \tilde{Y} \rightarrow \tilde{Y}_{{\rm Haus}}$ denotes the natural map. Furthermore suppose that there exists $L\in \mathbb{N}$ such that for each $y\in \tilde{Y}_{{\rm Haus}}$, $\tilde{g}^L(\tilde{r}^{-1}(y))$ is a singleton. Then $\tilde{Y}$ is locally Hausdorff.* *Proof.* By Proposition [\[propPowerIsExpansive\]](#propPowerIsExpansive){reference-type="ref" reference="propPowerIsExpansive"} and replacing $\tilde{g}$ with $\tilde{g}^L$, we can assume that $L=1$. Take $\mathcal{U}=\{ U_i \}_{i=1}^l$, as in the definition of forward orbit expansive and $x \in \tilde{Y}$. Let $U=U_{i_0}$ where $x\in U_{i_0}$. We will show that $U$ with the subspace topology is Hausdorff. Suppose $y_1$ and $y_2$ in $U$ cannot be separated. We will show that $y_1=y_2$. Since $\tilde{g}$ is continuous, $\tilde{g}(y_1)$ and $\tilde{g}(y_2)$ cannot be separated, but then $\tilde{g}(y_1)=\tilde{g}(y_2)$ (since $L=1$). Hence $\{ \tilde{g}^n(x), \tilde{g}^n(y) \} \prec \mathcal{U}$ and by the definition of forward expansive, $y_1=y_2$ as required. 
◻ The next example shows that an additional condition (e.g., the assumption that $\tilde{g}^L(\tilde{r}^{-1}(y))$ is a singleton in the previous result) is required to ensure the solenoid is Hausdorff. *Example 43*. Let $\tilde{Y}$ be the unit circle in $\mathbb{C}$ with two "1\"s, see Figure [\[CircleTwoOnes\]](#CircleTwoOnes){reference-type="ref" reference="CircleTwoOnes"}. We define $\tilde{g}: \tilde{Y} \rightarrow \tilde{Y}$ to be the two fold covering map for points other than $-1$, $1^a$ and $1^b$; for those points we define $$-1 \mapsto 1^a \hbox{ and }1^a\mapsto 1^b \hbox{ and }1^b \mapsto 1^a.$$ One can check that $\tilde{g}$ is continuous, onto, and open. Furthermore, the solenoid associated to $(\tilde{Y}, \tilde{g})$ is not Hausdorff since the points $$(1^a, 1^b, 1^a, \ldots ) \hbox{ and }(1^b, 1^a, 1^b, \ldots)$$ cannot be separated. **Theorem 44**. *Suppose that $(\tilde{Y}, \tilde{g})$ is forward orbit expansive with $\tilde{g}$ open (and continuous and onto as usual in this section) and there exists $(Y, g)$ and $r: \tilde{Y} \rightarrow Y$ with* 1. *$Y$ compact and Hausdorff;* 2. *$g$ is continuous and onto;* 3. *$r$ continuous, onto, and $r\circ \tilde{g}=g \circ r$;* 4. *there exists $L\in \mathbb{N}$ such that for each $y\in Y$, $g^L(r^{-1}(y))$ is a singleton.* *Then the solenoids associated to $(\tilde{Y}, \tilde{g})$ and $(Y, g)$ are conjugate. In particular, the solenoid associated to $(\tilde{Y}, \tilde{g})$ is Hausdorff.* *Proof.* Following [@Wil], we will define a shift equivalence between $(\tilde{Y}, \tilde{g})$ and $(Y, g)$. In addition to the map $r: \tilde{Y} \rightarrow Y$ in the statement of the theorem, we have the map $s: Y \rightarrow \tilde{Y}$ defined via $$y \mapsto \tilde{g}^L(r^{-1}(y))$$ where we have abused notation in the sense that $\tilde{g}^L(r^{-1}(y))$ is really a set that contains a single element. The map $s$ is onto since $r$ and $\tilde{g}$ are onto. One can also show that $s$ is continuous. To show that $r$ and $s$ define a shift equivalence, we must show that 1. $r \circ \tilde{g} = g\circ r$; 2. $s\circ g = \tilde{g} \circ s$; 3. $s\circ r=\tilde{g}^L$; 4. $r\circ s=g^L$. The first item is an assumption of the theorem. The fourth is similar to the third so only the proofs of the second and third will be considered in detail. To see that $s\circ g = \tilde{g} \circ s$, we have that $$(s\circ g)(y)=\tilde{g}^L(r^{-1}(g(y))) \hbox{ and }(\tilde{g}\circ s)(y)=\tilde{g}^{L+1}(r^{-1}(y)).$$ Using $r\circ \tilde{g}=g \circ r$, it follows that $\tilde{g}(r^{-1}(y)) \subseteq r^{-1}(g(y))$. Apply $\tilde{g}^L$ to both sides, we find that $\tilde{g}^L(r^{-1}(g(y)))$ is a singleton by assumption. Hence $\tilde{g}^{L+1}(r^{-1}(y))$ is the same singleton because it is non-empty and contained in the singleton set $\tilde{g}^L(r^{-1}(g(y)))$. To see that $s\circ r=\tilde{g}^L$, we have $$(s\circ r)(\tilde{y})=\tilde{g}^L(r^{-1}(r(\tilde{y}))).$$ Now, $\tilde{y}\in r^{-1}(r(y))$, so $\tilde{g}^L(\tilde{y})\in \tilde{g}^L(r^{-1}(r(\tilde{y})))$. The set $\tilde{g}^L(r^{-1}(r(\tilde{y})))$ is a singleton, so it must be equal to $\{ \tilde{g}^L(\tilde{y}) \}$ as required. Using work in [@Wil], it follows that the map $S: X \rightarrow \tilde{X}$ defined via $$(y_0, y_1, y_2, \ldots) \mapsto (s(y_0), s(y_1), s(y_2), \ldots )$$ is a conjugacy from $(X, \varphi)$ (the solenoid associated to $(Y, g)$) to $(\tilde{X}, \tilde{\varphi})$ (the solenoid associated to $(\tilde{Y}, \tilde{g})$). 
Its inverse is given by $R : \tilde{X} \rightarrow X$ defined via $$(\tilde{y}_0, \tilde{y}_1, \tilde{y}_2, \ldots) \mapsto (r(\tilde{y}_0), r(\tilde{y}_1), r(\tilde{y}_2), \ldots ).$$ For the final part of the theorem, $X$ is Hausdorff because $Y$ is Hausdorff. Hence, the solenoid associated to $\tilde{X}$ is Hausdorff as well because $X$ and $\tilde{X}$ are homeomorphic. ◻ ## The dynamics on the locally Hausdorff quotient space We now come to the main application of Subsection [3.1](#exponnonhaus){reference-type="ref" reference="exponnonhaus"} on Wieler-Smale spaces. **Theorem 45**. *The dynamical system $(X^u(P)/{\sim_0}, \tilde{g})$ is forward orbit expansive. Moreover, $\tilde{g}$ is a local homeomorphism and there exists $L\in \mathbb{N}$ such that for each $y\in \tilde{Y}_{{\rm Haus}}$, $\tilde{g}^L(r^{-1}(y))$ is a singleton.* *Proof.* Recall the constants $\beta>0$ and $0<\lambda<1$ in the definition of a Wieler presolenoid, see Definition [Definition 4](#WielerAxioms){reference-type="ref" reference="WielerAxioms"}. Using the nature of the local stable and unstable sets for a Wieler solenoid (see Definition [Definition 5](#WieSolenoid){reference-type="ref" reference="WieSolenoid"}) there exists $0<\delta< \frac{\beta}{2}$ such that if ${\bf x}\neq {\bf y}$ are in $X^u({\bf z}, \delta)$ (for some ${\bf z}$) then $x_0\neq y_0$. Using the compactness of $X^u(P)/{\sim_0}$, we have an open cover of the form $$\mathcal{U}= \{ \pi_0(X^u({\bf z}_i, \delta)) \cap r^{-1}(B(w_j, \frac{\beta}{2})) \}_{i\in I, j\in J}.$$ where ${\bf z}_i \in X^u(P)$, $w_j\in Y$, $I$ and $J$ are finite index sets, and the fact that $\pi_0$ is an open map and $r$ is continuous ensures that the given sets are open. Let $[{\bf x}]_0 \neq [{\bf y}]_0$ be in $X^u(P)/{\sim_0}$. We will find $N\in \mathbb{N}$ such that for each $i$, the two element set $\{ \tilde{g}^N([{\bf x}]_0), \tilde{g}^N([{\bf y}]_0) \}$ is not a subset of $\pi_0(X^u({\bf z}_i, \delta))\cap r^{-1}(B(w_j, \frac{\beta}{2}))$. We are done unless there exists $i_0$ such that $$[{\bf x}]_0 \hbox{ and } [{\bf y}]_0 \hbox{ are both in }\pi_0(X^u({\bf z}_{i_0}, \delta)).$$ Let ${\bf x}$ and ${\bf y}$ be points in $X^u({\bf z}_{i_0}, \delta)$ representing $[{\bf x}]_0$ and $[{\bf y}]_0$ respectively. By assumption, $x_0 \neq y_0$ and $$\mathrm{d}_Y(x_K, y_K)< 2\delta < \beta$$ where $K$ is the constant in Wieler's Axioms. Using the first of Wieler's Axioms, $$0<\mathrm{d}_Y(x_0, y_0) \le \gamma^K \mathrm{d}_Y(g^K(x_0), g^K(y_0)).$$ Using this inequality and the fact that $x_0 \neq y_0$, we have that $g^K(x_0) \neq g^K(y_0)$. If $\mathrm{d}_Y(g^K(x_0), g^K(y_0))> \beta$, then we stop. Otherwise, since $$\mathrm{d}_Y(x_0, y_0)< 2\delta < \beta,$$ we can again use the first of Wieler's Axioms. We get that $$\mathrm{d}_Y(g^K(x_0), g^K(y_0)) \le \gamma^{K} \mathrm{d}_Y(g^{2K}(x_0), g^{2K}(y_0)).$$ Using this inequality and the previous one, we obtain $$0<\mathrm{d}_Y(x_0, y_0) \le \gamma^{2K} \mathrm{d}_Y(g^{2K}(x_0), g^{2K}(y_0)).$$ Using the fact that $0<\gamma<1$ and possibly repeating this process, there exists $N\in \mathbb{N}$ such that $d(g^N(x_0), g^N(y_0))> \beta$. Now, $$r(\tilde{g}^N([{\bf x}]_0))=g^N(x_0) \hbox{ and }r(\tilde{g}^N([{\bf y}]_0))=g^N(y_0).$$ Therefore, $\tilde{g}^N([{\bf x}]_0)$ and $\tilde{g}^N([{\bf y}]_0)$ cannot both be in $r^{-1}(B(w_j, \frac{\beta}{2}))$ for any $j$. Thus, $\tilde{g}$ is forward orbit expansive. 
For the second part of the theorem, it was already noted that $\tilde{g}$ is a local homeomorphism and the existence of the required $L$ follows from Lemma [Lemma 33](#Lemma44inDeeYas){reference-type="ref" reference="Lemma44inDeeYas"}. ◻ ## Inverse limit space associated to the spectrum The main goal of this section is to show that the inverse limit formed from $(X^u(P)/{\sim_0}, \tilde{g})$ also gives the Wieler solenoid associated to $(Y, g)$. This is perhaps somewhat surprising in light of the fact that $X^u(P)/{\sim_0}$ is often non-Hausdorff. To fix notation, $(Y, g)$ is assumed to satisfy Wieler's axioms, $(X, \varphi)$ is the associated solenoid and $(X^u(P)/{\sim_0}, \tilde{g})$ is as in the previous section. We let $$\tilde{X}:= \varprojlim (X^u(P)/{\sim_0}, \tilde{g}) = \{ (\tilde{y}_n)_{n\in \mathbb{N}} = (\tilde{y}_0, \tilde{y}_1, \tilde{y}_2, \ldots ) \: | \: \tilde{g}(\tilde{y}_{i+1})=\tilde{y}_i \hbox{ for each }i\ge0 \}$$ with the map $\tilde{\varphi}: \tilde{X} \rightarrow \tilde{X}$ be defined via $$\tilde{\varphi}(\tilde{y}_0, \tilde{y}_1, \tilde{y}_2, \ldots ) = (\tilde{g}(\tilde{y}_0), \tilde{g}(\tilde{y}_1), \tilde{g}(\tilde{y}_2), \ldots) = (\tilde{g}(\tilde{y}_0), \tilde{y}_0, \tilde{y}_1, \ldots ).$$ Again we will make use of Lemma 4.4 in [@DeeYas], which we restate here for the reader. We have the two maps. The first one was defined in [@DeeYas]; it is $$r: X^u(P)/{\sim_0} \rightarrow Y \hbox{ defined via } [{\bf x}]_0 \mapsto x_0$$ where we note that the definition of $\sim_0$ implies that $r$ is well-defined. Furthermore, it is continuous and surjective. The second map is $$s: Y \rightarrow X^u(P)/{\sim_0} \hbox{ defined via }y \mapsto [g^{K_0}(y), g^{K_0-1}(y), \ldots , g(y), y, \ldots ]_0$$ where we note that the Lemma 4.4 in [@DeeYas] implies that $s$ is well-defined. **Theorem 46**. *Using the notation in the previous paragraphs, the maps $r$ and $s$ define a shift equivalence. That is, they satisfy $$r \circ \tilde{g}=g\circ r, s \circ g =g \circ s, r \circ s = g^{K_0} \hbox{ and } s \circ r = \tilde{g}^{K_0}.$$ Moreover, the map $S: X \rightarrow \tilde{X}$ defined via $$(y_0, y_1, y_2, \ldots) \mapsto (s(y_0), s(y_1), s(y_2), \ldots )$$ is a conjugacy from $(X, \varphi)$ to $(\tilde{X}, \tilde{\varphi})$.* *Proof.* This follows from the statement and proof of Theorem [Theorem 44](#PreNonHausSolHaus){reference-type="ref" reference="PreNonHausSolHaus"}. We note that the assumptions of Theorem [Theorem 44](#PreNonHausSolHaus){reference-type="ref" reference="PreNonHausSolHaus"} hold by Theorem [Theorem 45](#WielerNonHausOrbExp){reference-type="ref" reference="WielerNonHausOrbExp"}. Also, the reader can check that the map $s$ defined just before the statement of the current theorem is equal to the map $s$ consider in Theorem [Theorem 44](#PreNonHausSolHaus){reference-type="ref" reference="PreNonHausSolHaus"}. ◻ **Corollary 47**. *If $(X, \varphi)$ is an irreducible Smale space with totally disconnected stable sets, then there exists $(\tilde{Y}, \tilde{g})$ such that $\tilde{g}$ is a surjective local homeomorphism that is forward orbit expansive and $(X, \varphi)$ is conjugate the solenoid associated to $(\tilde{Y}, \tilde{g})$.* *Proof.* This follows from the previous result and Wieler's theorem. 
◻ ## The relationship between $(X, \varphi)$, $(X^u({\bf P})/{\sim_0}, \tilde{g})$ and $(Y, g)$ It follows from Theorem [Theorem 46](#InverseLimitXUP){reference-type="ref" reference="InverseLimitXUP"} that there is a continuous surjection from $X$ to $X^u(P)/{\sim_0}$. However, we can write it is explicitly without reference to the conjugacy in Theorem [Theorem 46](#InverseLimitXUP){reference-type="ref" reference="InverseLimitXUP"}: **Theorem 48**. *Define $p : X \rightarrow X^u(P)/{\sim_0}$ via $$x \mapsto [y]_0$$ where $y\in X^u(P) \cap X^s(x, \frac{\epsilon_X}{2})$. Then $p$ is a continuous surjection and for each $z\in X^u(P)/{\sim_0}$, $p^{-1}(z)$ is a Cantor set.* *Proof.* To begin, we must show that $p$ is well-defined. Suppose that $y'$ is another element in $X^u(P) \cap X^s(x, \frac{\epsilon_X}{2})$. Firstly, since both $y$ and $y'$ are in $X^s(x, \frac{\epsilon_X}{2})$ we have that $\pi_0(x)=\pi_0(y)=\pi_0(y')$. Moreover, properties of the bracket implies that the map $h : X^u(y, \epsilon_X) \rightarrow X^u(y', \epsilon_X)$ defined via $$z \mapsto [z, y']$$ is well defined and that $\pi_0(z)=\pi_0(h(z))$ for each $z \in X^u(y, \epsilon_X)$. This implies that $[y]_0=[y']_0$. An equivalent definition of $p$ is the following: Let $U(x, \frac{\epsilon_X}{2})$ denote the image of the bracket of the set $X^s(x, \frac{\epsilon_X}{2}) \times X^u(x, \frac{\epsilon_X}{2})$. Given $x \in X$ and $w \in X^u(P) \cap U(x, \frac{\epsilon_X}{2})$ then $p(x)=[x, w]$. To see this is the same as the previous definition, we need only check that $[x, w]$ is an element of $X^u(P) \cap X^s(x, \frac{\epsilon_X}{2})$ but this follows from the definitions of the bracket and the set $U(x, \frac{\epsilon_X}{2})$. ◻ *Remark 49*. If $g$ is a local homeomorphism satisfying Wieler's axioms, then $X^u(P)/{\sim_0}=Y$ and the map $p:X\to Y$ is a locally trivial bundle of Cantor sets. This fact follows from [@DGMW Theorem 3.12]. Already for Williams solenoids, local triviality of $p$ fails to hold in general [@FJ2]. Moving forward we will use an abuse of notation to identify sets of the form $X^s(x, \delta_1) \times X^u(x, \delta_2)$ with their image in $X$ under the bracket map. For example, using this convention, there would be no reference to the set $U(x, \frac{\epsilon_X}{2})$ in the proof of the previous theorem. **Lemma 50**. *Suppose $x\in X$ and $0< \delta<\delta'< \epsilon'$. Then $$p(X^s(x, \delta)\times X^u(x, \epsilon'))=p(X^s(x, \delta')\times X^u(x, \epsilon')).$$* *Proof.* Let $z\in X^s(x, \delta')\times X^u(x, \epsilon')$. Then $[z, x] \in X^u(x, \epsilon') \subseteq X^s(x, \delta)\times X^u(x, \epsilon')$. Furthermore, taking $w\in X^u(P) \cap U(x, \frac{\epsilon_X}{2})$, we have that $$p(z)=[z, w] = [ [z, x], w] = p([z, x])$$ as required. ◻ **Theorem 51**. *Using the notation in the past few paragraphs, we have that the following diagram commutes:* *$\begin{CD} X @>\varphi >> X \\ @Vp VV @Vp VV \\ X^u(P)/{\sim_0} @>\tilde{g} >> X^u(P)/{\sim_0} \\ @Vr VV @Vr VV \\ Y @>g>> Y \end{CD}$* *where $r$ is defined above (see Section [2.2](#tildeZeroSection){reference-type="ref" reference="tildeZeroSection"}) and $r \circ p$ is equal to the projection map $\pi_0 : X \rightarrow Y$.* *Proof.* The result follows directly from the definitions of the relevant maps; the details are omitted. ◻ **Theorem 52**. *The map $p$ induces a bijection between periodic points of $\varphi$ and periodic points of $\tilde{g}$ and hence $\varphi$ and $\tilde{g}$ have the same zeta function. 
Furthermore, if the set of periodic points with respect to $\varphi$ is dense in $X$, then the set of periodic points with respect to $\tilde{g}$ is dense in $X^u(P)/{\sim_0}$.* *Proof.* The first statement using Theorem [Theorem 46](#InverseLimitXUP){reference-type="ref" reference="InverseLimitXUP"} and applying the proof of [@Wil Lemma 5.3] to our situation. The second follows since $p$ is onto so the image of a dense set in $X$ under $p$ is dense in $X^u(P)/{\sim_0}$. ◻ **Theorem 53**. *The map $r: X^u(P)/{\sim_0} \rightarrow Y$ induces a bijection between the periodic points of $\tilde{g}$ and the periodic points of $g$. Hence $\tilde{g}$ and $g$ have the same zeta function.* *Proof.* This follows from Theorem [Theorem 46](#InverseLimitXUP){reference-type="ref" reference="InverseLimitXUP"} and [@Wil Theorem 5.2 Part (3)], see in particular the argument just before Lemma 5.3 on page 189 of [@Wil]. ◻ **Theorem 54**. *The following are equivalent* 1. *$(X, \varphi)$ is mixing,* 2. *$(X^u(P)/{\sim_0}, \tilde{g})$ is mixing,* 3. *$(Y, g)$ is mixing.* *Proof.* We only prove the case $(X, \varphi)$ is mixing implies $(X^u(P)/{\sim_0}, \tilde{g})$ is mixing in detail. Let $U$ and $V$ be non-empty open sets in $X^u(P)/{\sim_0}$. Then $p^{-1}(U)$ and $p^{-1}(V)$ are non-empty open sets in $X$ and since $(X, \varphi)$ is mixing, there exists $N\in \mathbb{N}$ such that $$\varphi^n(p^{-1}(U)) \cap p^{-1}(V) \neq \emptyset \hbox{ for each }n\ge N$$ Then, using the fact that $p$ is onto, $$\begin{aligned} p(\varphi^n(p^{-1}(U)) \cap p^{-1}(V)) & \subseteq (p(\varphi^n(p^{-1}(U)))) \cap p(p^{-1}(V)) \\ & \subseteq (\tilde{g}^n( p(p^{-1}(U)))) \cap p(p^{-1}(V)) \\ & = \tilde{g}^n(U) \cap V\end{aligned}$$ This implies the result. Notice that we have only used the following properties: $p$ is onto, continuous and $\tilde{g} \circ p = p \circ \varphi$. Thus to see that $(X^u(P)/{\sim_0}, \tilde{g})$ is mixing implies that $(Y, g)$ is mixing, one replaces $p$ with $r$ in the previous argument. ◻ # Full projections {#fullprojectioninfell} The main goal of this section is to prove that $C^*(G_0({\bf P}))$ contains a full projection. This result is used in Section [5](#cpsectionone){reference-type="ref" reference="cpsectionone"} to construct unital Cuntz-Pimsner models. In light of [@DeeGofYasFellAlgPaper], the existence of a full projection in $C^*(G_0({\bf P}))$ puts restrictions on the type of Fell algebras that appear as $C^*(G_0({\bf P}))$. ## Dynamical results **Lemma 55**. *Suppose $(X^u(P)/{\sim_0}, \tilde{g})$ is mixing and $U$ is a non-empty open set in $X^u(P)/{\sim_0}$ such that there exists $L\in \mathbb{N}$ that satisfies $\tilde{g}^L(U) \subseteq U$. Then $U$ is dense in $X^u(P)/{\sim_0}$.* *Proof.* The result will follow by showing that if $V$ is a non-empty open set in $X^u(P)/{\sim_0}$ then $U \cap V \neq \emptyset$. Since $\tilde{g}$ is mixing, there exists $N \in \mathbb{N}$ such that for each $n\ge N$, $$\tilde{g}^n(U) \cap V \neq \emptyset$$ Thus $\emptyset \neq \tilde{g}^{N\cdot L}(U) \cap V \subseteq U \cap V$. ◻ **Lemma 56**. *Suppose that $(X, \varphi)$ is mixing and $\emptyset \neq U \subseteq X^u(P)/{\sim_0}$ is open. Then there exists $\emptyset \neq V \subseteq U$ open and $N \in \mathbb{N}$ such that $V \subseteq \tilde{g}^N(V)$.* *Proof.* Since periodic points of $(X, \varphi)$ are dense, there exists $x \in p^{-1}(U)$ and $N \in \mathbb{N}$ such that $\varphi^N(x)=x$. 
Since $\varphi$ and $p$ are both continuous, there exists $\delta>0$ and $0<\delta'< \epsilon'$ such that $\varphi^N(X^u(x, \delta)) \subseteq X^u(x, \delta')$ and $p(X^s(x, \delta) \times X^u(x, \delta))\subseteq U$. The set $X^s(x, \delta) \times X^u(x, \delta)$ is open in $X$ and so $p(X^s(x, \delta) \times X^u(x, \delta))$ is open in $X^u(P)/{\sim_0}$. We also have that $X^u(x, \delta) \subseteq \varphi^N(X^u(x, \delta))$. By Lemma [Lemma 50](#stableSetNotImportant){reference-type="ref" reference="stableSetNotImportant"} and the fact that $\varphi$ contracts in the local stable direction $$p(\varphi^N(X^s(x, \delta) \times X^u(x, \delta)))=p(X^s(x, \delta) \times \varphi^N(X^u(x, \delta)))$$ and using $X^u(x, \delta) \subseteq \varphi^N(X^u(x, \delta))$, we have that $$p(X^s(x, \delta) \times X^u(x, \delta)) \subseteq p(\varphi^N(X^s(x, \delta) \times X^u(x, \delta)))=\tilde{g}^N(p(X^s(x, \delta) \times X^u(x, \delta)))$$ ◻ **Theorem 57**. *Suppose that $(X, \varphi)$ is mixing and $U \subseteq X^u(P)/{\sim_0}$ is a non-empty, open set. Then there exists $N\in \mathbb{N}$ such that $\tilde{g}^N(U) = X^u(P)/{\sim_0}$.* *Proof.* By Lemma [Lemma 56](#locallyExpansiveOfgmap){reference-type="ref" reference="locallyExpansiveOfgmap"}, there exists a nonempty open set $V$ such that $V \subseteq U$ and $V\subseteq \tilde{g}^K(V)$. Moreover, the $V$ constructed in Lemma [Lemma 56](#locallyExpansiveOfgmap){reference-type="ref" reference="locallyExpansiveOfgmap"} depended on $\delta>0$ and so we denote the set associated to $\delta>0$ as $V_{\delta}$. In more detail, $V_{\delta}= p (X^s(z, \delta) \times X^u(z, \delta))$ where $z$ is a periodic point of period $K$ in $U$. Form $$G_{\delta}= \bigcup_{n\in \mathbb{N}} \tilde{g}^{n \cdot K}(V_{\delta}).$$ Notice that $$G_{\delta}=p\left(\bigcup_{n\in \mathbb{N}} \varphi^{n\cdot K}(X^s(z, \delta) \times X^u(z, \delta)) \right)$$ and that since $(X, \varphi)$ is mixing $$\bigcup_{n\in \mathbb{N}} \varphi^{n\cdot K}(X^s(z, \delta) \times X^u(z, \delta))$$ is dense in $X$. Since $p$ is onto, it follows that $G_{\delta}$ is dense for each valid $\delta>0$. In particular, $G_{\delta/2}$ is dense. Next, suppose that $\tilde{y}\in X^u(P)/{\sim_0}$ is a limit point of the set $G_{\delta/2}$. We show $\tilde{y} \in G_{\delta}$. From this, it follows that $G_{\delta}=X^u(P)/{\sim_0}$. Using the compactness of $X$ and the fact that $$\bigcup_{n\in \mathbb{N}} \varphi^{n\cdot K}(X^s(z, \delta) \times X^u(z, \delta))$$ is dense in $X$, there exists a sequence $(x_k)_{k\in \mathbb{N}}$ in $X$ converging to $x \in X$ such that $$x_k \in \bigcup_{n\in \mathbb{N}} \varphi^{n\cdot K}(X^s(z, \delta/2) \times X^u(z, \delta/2))$$ for each $n$ and $p(x)=\tilde{y}$. There exists $N \in \mathbb{N}$ such that $x_k \in X^s(x, \frac{\delta}{2}) \times X^u(x, \frac{\delta}{2})$ for $k \ge N$. In particular, $[x, x_N]$ is well-defined and is an element in $X^u(x_N, \frac{\delta}{2})$. Lemma [Lemma 50](#stableSetNotImportant){reference-type="ref" reference="stableSetNotImportant"} implies that $p([x, x_N])=p(x)=\tilde{y}$. 
Since $x_N \in \bigcup_{n\in \mathbb{N}} \varphi^{n\cdot K}(X^s(z, \delta/2) \times X^u(z, \delta/2))$ there exists $l \in \mathbb{N}$ such that $$x_N \in \varphi^{l \cdot K}(X^s(z, \delta/2) \times X^u(z, \delta/2))$$ which implies that $$\varphi^{-l \cdot K}(x_N) \in X^s(z, \delta/2) \times X^u(z, \delta/2).$$ Since $[x, x_N]$ is an element in $X^u(x_N, \frac{\delta}{2})$, we have that $$\varphi^{-l \cdot K}(x) \in X^u(\varphi^{-l\cdot K}(x_N), \frac{\delta}{2}).$$ The triangle inequality and the previous two statements imply that $$\varphi^{-l \cdot K}(x) \in X^s(z, \delta) \times X^u(z, \delta).$$ and hence that $$x \in \varphi^{l \cdot K}(X^s(z, \delta) \times X^u(z, \delta)) \hbox{ and } \tilde{y}=p(x) \in G_{\delta}.$$ In summary, we have shown that $G_{\delta}=X^u(P)/{\sim_0}$. Using this along with the fact that $X^u(P)/{\sim_0}$ is compact and $V_{\delta} \subseteq \tilde{g}^K(V_{\delta})$ imply that there exists $N\in \mathbb{N}$ such that $\tilde{g}^N(V_{\delta})=X^u(P)/{\sim_0}$. The required result follows since $V_{\delta} \subseteq U$. ◻ ## Existence of a full projection **Lemma 58**. *Let $V$ be a basic set of $G_0({\bf P})$. Then, there exists $N\in \mathbb{N}$ such that for any $f \in C_c(G_0({\bf P}))$ supported in another basic set and nonzero on $V$, we have that the following two properties hold:* 1. *For each $x \in X^u(P)$ there exists $(\hat{x}, \hat{w}) \in {\rm supp}(\alpha^N(f))$ such that $x \sim_0 \hat{x}$;* 2. *likewise for each $b\in X^u(P)$ there exists $(\hat{a}, \hat{b}) \in {\rm supp}(\alpha^N(f))$ such that $b \sim_0 \hat{b}$.* *Proof.* This follows by applying Theorem [Theorem 57](#gExpOnXU){reference-type="ref" reference="gExpOnXU"} to the open set $s(V)$ and $r(V)$ and taking the max of the two $N$s. ◻ **Lemma 59**. *Let $V$ be a basic set of $G_0({\bf P})$. Then, there exists $N\in \mathbb{N}$ such that for any $f \in C_c(G_0({\bf P}))$ supported in another basic set with $||f|_V||>0$, we have that for each $k \in C_c(G_0({\bf P}))$ supported in yet another basic set, there exists $f_1$, $f_2 \in C_c(G_0({\bf P}))$ such that $$f_1 \alpha^K(f) f_2 = k \hbox{ and } ||f_1||\leq ||k|| , ||f_2||= \frac{1}{||f|_V||}$$ Moreover, in this case, we have that $\overline{C^*(G_0({\bf P})) \alpha^N(f) C^*(G_0({\bf P}))}=C^*(G_0({\bf P}))$.* *Proof.* Let $A=C^*(G_0({\bf P}))$. The second part of the theorem follows from the first part. To see this note that $\overline{A \alpha^N(f) A}$ denotes the closed linear span of $A \alpha^N(f) A$ and that closed linear span of the set of functions of compact support with support contained in a single basic set is $A$. We now prove the first part. Recall that $q: X^u({\bf P}) \rightarrow X^u({\bf P})/{\sim_0}$ denotes the quotient map. A basic set, $U$, is of the form $$\{ (h_U(x), x) \mid x \in X^u(z, \delta) \}$$ where $z\in X^u({\bf P})$, $\delta>0$ is small, and $h_U$ is a local homeomorphism onto its image. Because we are dealing with a very specific groupoid, we have that $h_U= (q|_{r(U)})^{-1} \circ q|_{s(U)}$. This in particular holds for the basic set $U_k$ that contains the support of $k$. We likewise let $U_f$ denote the basic set containing the support of $f$. By the previous lemma, there exists $N\in \mathbb{N}$ such that 1. $q(s(\varphi^N \times \varphi^N(U_f)))=X^u({\bf P})/{\sim_0}$ and 2. $q(r(\varphi^N \times \varphi^N(U_f)))=X^u({\bf P})/{\sim_0}$. It follows that we have $V_1 \subseteq s(\varphi^N \times \varphi^N(U_f))$ and $V_2 \subseteq r(\varphi^N \times \varphi^N(U_f))$ such that 1. 
$q|_{s(U_k)}$ is a homeomorphism from $s(U_k)$ to $V_1$ and 2. $q|_{V_2}$ is a homeomorphism from $V_2$ to $r(U_k)$. Since $q(r(U_k))=q(s(U_k))$ and all the relevant maps are homeomorphism (because the domains have been restricted) we have $$(q|_{r(U_k)})^{-1} \circ q|_{s(U_k)} = (q|_{r(U_k)})^{-1} \circ q|_{V_2} \circ (q|_{V_2})^{-1} \circ q|_{V_1} \circ (q|_{V_1})^{-1} \circ q|_{s(U)}.$$ Using this setup we can define $f_1$ and $f_2$ as follows. For $y\in s(U_k)$, let $$f_1( ((q|_{V_1})^{-1} \circ q|_{s(U_k)})(y), y)=k(h_{U_k}(y), y)$$ and otherwise define $f_1$ to be zero. For $z\in V_2$, let $$f_2( ((q|_{r(U_k)})^{-1} \circ q|_{V_2})(z), z) = \frac{1}{ f(\varphi^{-N}(z), \varphi^{-N}( ((q|_{V_1})^{-1} \circ q|_{V_2})(z)))}$$ and then extend $f_2$ so that its support is contained in a slightly large basic set. We note that $f_2$ is well-defined because $f$ is non-zero on $V$. A short computation using the convolution product shows that $f_1$ and $f_2$ satisfy requirements in the statement of the theorem. ◻ **Theorem 60**. *Suppose that $a \in A=C^*(G_0({\bf P}))$ is nonzero, then there exists $N\in \mathbb{N}$ such that $\alpha^N(a)$ is full in A. In particular, $C^*(G_0({\bf P}))$ contains a full projection.* *Proof.* Let $a\in A=C^*(G_0({\bf P}))$ be nonzero. Without loss of generality we can and will assume that $||a||=1$. There exists a basic set $U \in \mathcal{G}$ with the following property. If $c \in C_c(\mathcal{G})$ with $||c||=1$ and $$||a-c||< \frac{1}{4}$$ then $|c(x,y)|>\frac{1}{2}$ for each $(x,y) \in U$. Take a basic set such that $V\subseteq \overline{V} \subseteq U$ and also take $\tilde{f}$ a continuous bump function with support contained in $U$ that is one on $V$. Let $N\in \mathbb{N}$ be as in the previous lemma where $N$ depends only on $V$. Let $0\neq k \in C_c(\mathcal{G})$ be supported in a basic set and $\epsilon>0$. Take $b \in C_c(\mathcal{G})$ such that $$||a-b||< {\rm min}\left\{ \frac{1}{4} , \frac{\epsilon}{||k||} \right\} \hbox{ and } ||b||=1$$ By construction, the previous lemma can be applied to $\tilde{f} b \tilde{f}$. We obtain $f_1$ and $f_2$ such that $$f_1 \alpha^N(\tilde{f} b \tilde{f} ) f_2 =k$$ Therefore $$\begin{aligned} ||f_1 \alpha^N(\tilde{f}) \alpha^N(a) \alpha^N(\tilde{f}) f_2 -k|| & = ||f_1 \alpha^N(\tilde{f}) \alpha^N(a) \alpha^N(\tilde{f}) f_2 - f_1 \alpha^N(\tilde{f}) \alpha^N(b) \alpha^N( \tilde{f} ) f_2 || \\ & \le ||f_1|| || \alpha^N(\tilde{f})|| || a - b|| || \alpha^N(\tilde{f})|| ||f_2|| \\ & < \epsilon\end{aligned}$$ where we have used the estimates for $||f_i||$ from the previous lemma and the fact that $||\tilde{f}||=1$. In summary we have shown that the set of compactly supported functions with support in a single basic set is contained in $\overline{A \alpha^N(a) A}$ and this implies the result. For the second part, by the main result of [@DGY], $C^*(G_0({\bf P}))$ contains a non-zero projection. Using the first part it therefore contains a full projection. ◻ **Corollary 61**. *There exists a projection $p\in C_c(G_0({\bf P}))$ which is full in $C^*(G_0({\bf P}))$. In particular, the subalgebra $A_p:=pC^*(G_0({\bf P}))p$ is a unital Fell algebra with spectrum $X^u({\bf P})/{\sim_0}$ and the dense unital subalgebra $pC_c(G_0({\bf P}))p\subseteq A_p$ is closed under holomorphic functional calculus.* *Proof.* By Theorem [Theorem 60](#existfull){reference-type="ref" reference="existfull"}, there exists a full projection $p_0\in C^*(G_0({\bf P}))$. 
The $*$-subalgebra $C_c(G_0({\bf P}))\subseteq C^*(G_0({\bf P}))$ is closed under holomorphic functional calculus by [@DeeYas Proposition 7.4], so by standard density and functional calculus arguments there exists a projection $p\in C_c(G_0({\bf P}))$ and a unitary $u\in 1+C^*(G_0({\bf P}))$ such that $p=up_0u^*$. Since $p_0$ is full in $C^*(G_0({\bf P}))$, it follows that $p$ is full in $C^*(G_0({\bf P}))$. It is clear that $pC_c(G_0({\bf P}))p$ is closed under holomorphic functional calculus from [@DeeYas Proposition 7.4]. ◻ *Remark 62*. The authors were surprised by the length of the proof of Theorem [Theorem 60](#existfull){reference-type="ref" reference="existfull"} for the following reason. The stable algebra $S=\varinjlim (C^*(G_0({\bf P})),\alpha)$ is a simple $C^*$-algebra arising as an inductive limit, and by [@DGY] $S$ admits a full projection. The existence of projections in $S$ readily produces projections in $C^*(G_0({\bf P}))$, but fullness is not immediate even if it is conceivably provable from some soft argument. By the results of [@DeeGofYasFellAlgPaper], there is no gain from $C^*(G_0({\bf P}))$ being a Fell algebra. As the following example shows, something extra is needed beyond the inductive limit, e.g., the proof we gave above relied on expansiveness. For a stationary inductive limit $S=\varinjlim (A,\alpha)$ where $\alpha$ is an injective, nondegenerate $*$-homomorphism, such that $S$ is simple and admits a projection (which is automatically full), one can ask if there is a full projection in $A$? The following example due to Jamie Gabe answers the question in the negative. Consider $A:=C_0(\mathbb{N},\mathbb{K}(H))$ for an infinite-dimensional, separable Hilbert space $H$. Take a pair of Cuntz isometries $s$ and $t$ on $H$, i.e. $s$ and $t$ are isometries such that $ss^*+tt^*=1$. We define $\alpha:A\to A$ by $$\alpha(f)(n):=\begin{cases} sf(0)s^*+tf(1)t^*, \; &n=0,\\ f(n+1), \; &n>0.\end{cases}$$ It is clear from the construction that $\alpha$ is an injective, nondegenerate $*$-homomorphism. Moreover, consider two non-zero elements $f_1,f_2\in S$ in the image of any of the functorial embeddings $C_c(\mathbb{N},\mathbb{K}(H))\hookrightarrow S$. Then for sufficiently large $N$, $\alpha^N(f_1),\alpha^N(f_2)\in S$ are both embeddings of non-zero elements in $A$ supported in $\{0\}\subseteq \mathbb{N}$. Since $\mathbb{K}(H)$ is simple, we have that $\overline{Sf_1S}=\overline{Sf_2S}$. From this observation, one can show that $S$ is simple. In fact, one can show that $S\cong \mathbb{K}(H)$. Since $A$ admits no full projection, we can conclude our negative answer to the above question. One can note that in contrast to the expansive scenario of Theorem [Theorem 60](#existfull){reference-type="ref" reference="existfull"}, the $*$-homomorphism in this counterexample is contracting on the spectrum of $A$. # Cuntz-Pimsner models for stationary inductive limits {#cpsectionone} In this section we consider a purely $C^*$-algebraic set up. However, the prototypical example is the case of the $C^*$-algebras associated to a Wieler solenoid. The general set up is as follows. We consider a non-degenerate self-$*$-monomorphism $\alpha:A\to A$ of a $C^*$-algebra $A$. The prototypical example is $A=C^*(G_0({\bf P}))$ with $\alpha$ given by the composition of $C^*(G_0({\bf P})) \subseteq C^*(G_1({\bf P}))$ and the canonical isomorphism $C^*(G_1({\bf P})) \cong C^*(G_0({\bf P}))$ from Theorem [Theorem 28](#MainDeeYas){reference-type="ref" reference="MainDeeYas"}. 
Before proceeding to Cuntz-Pimsner models, we make a remark about the stationary inductive limits. **Proposition 63**. *Write $S:=\varinjlim (A,\alpha)$ and let $\phi:S\to S$ denote the right shift in the direct limit. Then $\phi$ is a well defined $*$-automorphism of $S$.* *Proof.* The proof of this proposition is trivial once having parsed its statement. So parsing the statement we shall. By construction, $S:=\bigoplus_{k\in \mathbb{N}}A/I$ where $I$  is the ideal generated by $a\delta_k-\alpha(a)\delta_{k+1}$ for $a\in A$. Here we write $a\delta_k\in \bigoplus_{k\in \mathbb{N}}A$ for the element $a$ placed in position $k$. In this notation $\phi(a\delta_k\mod I)= a\delta_{k+1}\mod I$. The map $\phi$ is well defined since it is induced from the right shift mapping on $\bigoplus_{k\in \mathbb{N}}A$ that preserves $I$. The map $\phi$ is an inverse to the left shift mapping that coincides with the mapping $\alpha(a\delta_k+I):=\alpha(a)\delta_k+I$. Thus $\phi$ is a well defined $*$-automorphism. ◻ Associated with the $*$-monomorphism $\alpha:A\to A$ there is an $A$-Hilbert bimodule $E:={}_\alpha A_{\rm id}$. That is $E=A$ is a vector space with the action $a.\xi.b:=\alpha(a)\xi b$ for $a,b\in A$ and $\xi\in E$. The right $A$-inner product is given by $$\langle \xi_1,\xi_2\rangle:=\xi_1^*\xi_2\in A.$$ It is readily verified that $E$ is a right $A$-Hilbert module and that $A$ acts as adjointable operators on the left. Consider the associated Cuntz-Pimsner algebra $O_E$ and its core $\mathcal{C}_E:=O_E^{U(1)}$ for the standard gauge action. **Proposition 64**. *The functorial property of inductive limits and the inclusion $A\hookrightarrow \mathcal{C}_E$ induces isomorphisms $S\cong \mathcal{C}_E$ and $S\rtimes_\phi \mathbb{Z}\cong O_E$.* *Proof.* By [@MR1426840] it holds that $$\mathcal{C}_E=\varinjlim \mathbb{K}_A(E^{\otimes k}).$$ Since $\mathbb{K}_A(E^{\otimes k})=A$ and $\mathbb{K}_A(E^{\otimes k})\to \mathbb{K}_A(E^{\otimes k+1})$ coincides with $\alpha$, it follows that $S\cong \mathcal{C}_E$. The construction of $S\cong \mathcal{C}_E$ is by definition the map functorially constructed from the inclusion $A\hookrightarrow \mathcal{C}_E$. Following the standard procedure of "extension of scalars\" (see [@MR1426840] or [@MR4031052 Section 3.1]) let $E_\infty:=E\otimes_A \mathcal{C}_E$ denote the self-Morita equivalence of $\mathcal{C}_E$ induced by $E$. For the details of the left action of $\mathcal{C}_E$ on $E_\infty$, see [@MR4031052 Proposition 3.2]. There are natural isomorphisms $$O_E\cong O_{E_\infty}\cong \mathcal{C}_E\ltimes E_\infty.$$ For details, see [@MR4031052 Proposition 3.2]. Here $\mathcal{C}_E\ltimes E_\infty$ denotes the generalized crossed product by the the self-Morita equivalence $E_\infty$. In terms of the isomorphism $S\cong \mathcal{C}_E$ we can identify the $\mathcal{C}_E$-self Morita equivalence $E_\infty$ with the $S$-self Morita equivalence ${}_\phi S_{\rm id}$. Therefore $$\mathcal{C}_E\ltimes E_\infty\cong S\rtimes {}_\phi S_{\rm id}=S\rtimes_\phi \mathbb{Z}.$$ ◻ Proposition [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"} has implications for the stable Ruelle algebra of a Wieler solenoid. It was proven in [@DeeYas Theorem 1.2] that $C^*(G^s({\bf P}))\cong \varinjlim(C^*(G_0({\bf P})),\alpha)$ for the Fell subalgebra $C^*(G_0({\bf P}))\subseteq C^*(G^s({\bf P}))$ with spectrum $X^u({\bf P})/{\sim_0}$. 
The next result extends this to the stable Ruelle algebra, and is an immediate consequence of Proposition [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"} and [@DeeYas Theorem 1.2]. **Corollary 65**. *Consider a Wieler solenoid, and write $E$ for the $C^*(G_0({\bf P}))-C^*(G_0({\bf P}))$-correspondence defined as $E:=C^*(G_0({\bf P}))$ as a right Hilbert module with left action defined from $\alpha$. The stable Ruelle algebra $C^*(G^s({\bf P}))\rtimes \mathbb{Z}$ is isomorphic to the Cuntz-Pimsner algebra $O_E$.* We are also interested in unital Cuntz-Pimsner models. We first describe the general setup and return to the specifics of Wieler solenoids at the end of the section. They can be constructed by choosing a full projection $p\in A$. For such a projection, define the unital $C^*$-algebra $A_p:=pAp$. Set $E_{p,k}:=\alpha^k(p)Ap$ equipped with the structure of an $A_p$-Hilbert bimodule defined by $a.\xi.b:=\alpha^k(a)\xi b$ for $a,b\in A_p$ and $\xi\in E_{p,k}$. The right $A_p$-inner product on $E_{p,k}$ is given by $$\langle \xi_1,\xi_2\rangle:=\xi_1^*\xi_2\in A_p.$$ We set $E_p:=E_{p,1}$. **Proposition 66**. *Let $p\in A$ be a full projection and define the unital $C^*$-algebra $A_p:=pAp$. It then holds that* 1. *$E_{p,k}=E_p^{\otimes_A k}$* 2. *$\mathbb{K}_{A_p}(E_p^{\otimes_A k})=A_{\alpha^k(p)}$* 3. *$\mathcal{C}_{E_p}=\varinjlim (A_{\alpha^k(p)},\alpha)$ is a unital $C^*$-algebra which is stably isomorphic to $S$* 4. *$O_{E_p}$ is a unital $C^*$-algebra which is $U(1)$-equivariantly stably isomorphic to $S\rtimes_\phi \mathbb{Z}$.* *Proof.* For (1), we have for any $k,l\in \mathbb{N}$ that $$E_{p,k}\otimes_A E_{p,l}=\overline{\alpha^l(\alpha^k(p)Ap)\alpha^l(p)Ap}=\alpha^{k+l}(p)Ap=E_{p,k+l}.$$ For (2), we have that $$\mathbb{K}_{A_p}(E_p^{\otimes_A k})=E_p^{\otimes_A k}\otimes_A (E_p^{\otimes_A k})^*=\alpha^k(p)A\alpha^k(p)=A_{\alpha^k(p)}.$$ Item (3) follows from (2) and the fact that $\varinjlim (A_{\alpha^k(p)},\alpha)$ coincides with $pSp$, which is Morita equivalent to $S$. Item (4) follows from (3) and Proposition [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"}. ◻ We also note the following consequence on traces and KMS-weights that relies on [@MR2056837]. **Theorem 67**. *Let $(A,\alpha)$ be a stationary inductive system, $p\in A$ a full projection and $\beta\in \mathbb{R}$. There are bijections between the sets of the following objects:* 1. *Tracial weights $\tau$ on $A$ such that $\tau\circ \alpha=\mathrm{e}^\beta \tau$ normalized by $\tau(p)=1$.* 2. *Tracial states $\tau_p$ on $A_p$ such that $\mathrm{Tr}_{\tau_p}^{E_p}=\mathrm{e}^\beta \tau_p$.* 3. *KMS${}_\beta$-weights $\Phi$ on $S\rtimes_\phi \mathbb{Z}$ normalized by $\Phi(p)=1$.* *The bijection from the set of objects in (1) to the set of objects in (2) is given by restriction along $A_p\hookrightarrow A$. The bijection from the set of objects in (3) to the set of objects in (1) is given by restriction along $A\hookrightarrow S\rtimes_\phi \mathbb{Z}$.* It is worth noting that the results from the previous section on the existence of full projections in the case of the Fell algebra associated to a Wieler solenoid imply that the previous theorem can be applied to the case of $(C^*(G_0({\bf P})), \alpha)$ where $\alpha$ is the composition of $C^*(G_0({\bf P})) \subseteq C^*(G_1({\bf P}))$ with the isomorphism $C^*(G_1({\bf P})) \cong C^*(G_0({\bf P}))$. In the case of Wieler solenoids, we can summarize the results of this section in the following corollary.
Recall that Corollary [Corollary 61](#existefullforcom){reference-type="ref" reference="existefullforcom"} ensures the existence of a projection $p\in C_c(G_0({\bf P}))$ which is full in $C^*(G_0({\bf P}))$. **Corollary 68**. *Let $(Y,\mathrm{d}_Y,g)$ be a Wieler pre-solenoid with associated Wieler solenoid $X:=\varprojlim (Y,g)$. We fix a projection $p\in C_c(G_0({\bf P}))$ which is full in $C^*(G_0({\bf P}))$. Consider the unital Fell algebra $A_p:=pC^*(G_0({\bf P}))p$ and define the $A_p$-bimodule $E_p:=\alpha(p)C^*(G_0({\bf P}))p$. It then holds that* 1. *The Cuntz-Pimsner algebra $O_{E_p}$ is a unital $C^*$-algebra which is $U(1)$-equivariantly stably isomorphic to the stable Ruelle algebra $C^*(G^s({\bf P}))\rtimes \mathbb{Z}$* 2. *The core of the Cuntz-Pimsner algebra $\mathcal{C}_{E_p}=(O_{E_p})^{U(1)}$ is a unital $C^*$-algebra which is stably isomorphic to the stable algebra $C^*(G^s({\bf P}))$.* 3. *If $(Y,\mathrm{d}_Y,g)$ is mixing, the equation of traces on $A_p$ $$\mathrm{Tr}_{\tau}^{E_p}=\mathrm{e}^\beta \tau$$ only has a solution when $\beta$ is the topological entropy of $X$, in which case there exists a one-dimensional space of solutions determined from the Bowen measure on $X$ and the correspondence in Theorem [Theorem 67](#kmsything){reference-type="ref" reference="kmsything"} combined with item 1).* For more details on the Bowen measure and how it determines traces and weights on the algebras associated with a Smale space, see [@DGY; @MR3272757; @Put]. # The $K$-theory as a functor for compact, locally Hausdorff spaces {#SecKthFun} The goal of this section is to extend the $K$-theory functor from compact, Hausdorff spaces to compact, locally Hausdorff spaces. As mentioned in Remark [Remark 17](#remarkonfellchar){reference-type="ref" reference="remarkonfellchar"}, the process of taking spectra of Fell algebras does not define a functor when the morphisms between compact, locally Hausdorff spaces are taken to be continuous maps. Nevertheless, the $K$-theory of the associated Fell algebra is better behaved than the algebra itself. In addition to the contravariant functor associated to (at least some) continuous maps between compact, locally Hausdorff spaces, there is also a wrong-way functor for self local homeomorphisms. Wrong way functoriality is particularly important for our study of the Fell algebra associated to a Wieler solenoid because it appears both in the inductive limit structure of the stable algebra and the Cuntz--Pimsner model of the stable Ruelle algebra. We start by defining the $K$-theory of a compact, locally Hausdorff space $\tilde{Y}$. As in Subsection [1.3](#fellalgsubsec){reference-type="ref" reference="fellalgsubsec"}, we choose a Hausdorff resolution $\psi:X\to \tilde{Y}$. The associated groupoid $R(\psi)$ is defined in Example [Example 15](#fellex){reference-type="ref" reference="fellex"}. We define $$K^*(\tilde{Y}):=K_*(C^*(R(\psi))).$$ This definition a priori depends on the choice of $\psi$. We shall in the next subsection study functoriality and in the subsequent subsection show that, up to canonical isomorphism, $K^*(\tilde{Y})$ is independent of the choice of $\psi$. ## Correspondences and maps between locally Hausdorff spaces We will in this subsection study how maps between compact, locally Hausdorff spaces give rise to correspondences between the related Fell algebras $C^*(R(\psi))$.
Recall the terminology of a Hausdorff resolution of a compact, locally Hausdorff space $\tilde{Y}$ from Definition [Definition 14](#hausres){reference-type="ref" reference="hausres"}. We shall make use of another terminology. **Definition 69**. Let $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$ be Hausdorff resolutions of two compact, locally Hausdorff spaces. A proper morphism $(\Pi,\pi):p_1\to p_2$ is a pair of continuous maps fitting into the commuting diagram such that $\Pi:X_1\to X_2$ is a proper mapping. We say that $\pi$ lifts to a proper morphism if there is a proper mapping $\Pi:X_1\to X_2$ making $(\Pi,\pi)$ into a proper morphism. The results of this subsection can be summarized in the following theorem. **Theorem 70**. *Let $\tilde{Y}_1$ and $\tilde{Y}_2$ be compact, locally Hausdorff spaces, and assume that $$\pi:\tilde{Y}_1\to \tilde{Y}_2,$$ is a continuous mapping. Fix Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$.* 1. *Associated with this data there is a canonically associated $C^*(R(p_2))-C^*(R(p_1))$-correspondence $\operatorname{Corr}(\pi,p_1,p_2)$, see Definition [Definition 75](#correfrompi){reference-type="ref" reference="correfrompi"}.* 2. *The left action of $C^*(R(p_2))$ on $\operatorname{Corr}(\pi,p_1,p_2)$ is via $C^*(R(p_1))$-compact operators if $\pi$ lifts to a proper morphism for some Hausdorff resolutions of $\tilde{Y}_1$ and $\tilde{Y}_2$, for instance if $\pi$ is a local homeomorphism. In fact, if $\pi$ is a homeomorphism then $\operatorname{Corr}(\pi,p_1,p_2)$ is a Morita equivalence.* 3. *If $\pi':\tilde{Y}_2\to \tilde{Y}_3$ is another continuous mapping to a space with a Hausdorff resolution $p_3:X_3\to \tilde{Y}_3$, then there is a unitary isomorphism of $C^*(R(p_3))-C^*(R(p_1))$-correspondences $$\operatorname{Corr}(\pi',p_2,p_3)\otimes_{C^*(R(p_2))}\operatorname{Corr}(\pi,p_1,p_2)\cong \operatorname{Corr}(\pi'\circ \pi,p_1,p_3).$$* We shall start with an easy result for proper morphisms of Hausdorff resolutions. **Proposition 71**. *Given a proper morphism $(\Pi,\pi):p_1\to p_2$ of Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$, the pullback map $$\Pi^*:C_c(R(p_2))\to C_c(R(p_1)),$$ along the proper groupoid homomorphism $(x,x')\mapsto(\Pi(x),\Pi(x'))$, is a well defined $*$-homomorphism that induces a $*$-homomorphism $$\Pi^*:C^*(R(p_2))\to C^*(R(p_1)).$$* Recall the notion of a groupoid correspondence from Definition [\[groupoidcorr\]](#groupoidcorr){reference-type="ref" reference="groupoidcorr"}. **Definition 72**. For Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$, we consider a diagram $\mathfrak{X}$ of the form where $\pi$ is a continuous map. From the diagram $\mathfrak{X}$, we define the subspace $Z_\mathfrak{X}\subseteq X_1\times X_2$ by $$Z_\mathfrak{X}= \{ (x,y)\in X_1\times X_2: \pi\circ p_1(x) = p_2(y)\}.$$ **Lemma 73**. *The space $Z_\mathfrak{X}$ defined above is a groupoid correspondence from $R(p_1)$ to $R(p_2)$.* *Proof.* The moment map for the right action will be defined as $\sigma: Z_\mathfrak{X}\rightarrow G_2^{(0)}$, $(x,y)\mapsto (y,y)$. The map $\sigma$ is an open map. Define the right action of $R(p_2)$ on $Z_\mathfrak{X}$ by $(x,y)\cdot (y,y')=(x,y')$ for $(x,y)\in Z_\mathfrak{X}$ and $(y,y')\in R(p_2)$. This action is free and proper. Define the moment map $\rho: Z_\mathfrak{X}\rightarrow R(p_1)^{(0)}$ for the left action by $(x,y)\mapsto (x,x)$, which is open and surjective. 
Define the action of $R(p_1)$ on $Z_\mathfrak{X}$ by $(x_1,x_2)\cdot (x_2,y)=(x_1,y),$ for $(x_1,x_2)\in R(p_1), (x_2,y)\in Z_\mathfrak{X}$, which is free and proper. Moreover, $\rho$ induces a homeomorphism $\overline{\rho}: Z_\mathfrak{X} / R(p_2) \rightarrow R(p_1)^{(0)}.$ ◻ The following lemma follows from Theorem [\[corGroupoid\]](#corGroupoid){reference-type="ref" reference="corGroupoid"}. **Proposition 74**. *Let $Z_\mathfrak{X}$ be the groupoid correspondence defined from a diagram $\mathfrak{X}$ as in Definition [Definition 72](#def){reference-type="ref" reference="def"}. Then, the space $C_c(Z_\mathfrak{X})$ becomes a pre-correspondence from $C_c(R(p_2))$ to $C_c(R(p_1))$ with the operations given as in Theorem [\[corGroupoid\]](#corGroupoid){reference-type="ref" reference="corGroupoid"}.* **Definition 75**. Given a diagram $\mathfrak{X}$ as in Definition [Definition 72](#def){reference-type="ref" reference="def"}, we define the correspondence $\operatorname{Corr}(\pi,p_1,p_2)$ from $C^*(R(p_2))$ to $C^*(R(p_1))$ as the completion of the pre-correspondence $C_c(Z_\mathfrak{X})$. In other words $\operatorname{Corr}(\pi,p_1,p_2)$ is a $C^*(R(p_2))-C^*(R(p_1))$-Hilbert $C^*$-module. **Lemma 76**. *Let $Z_\mathfrak{X}$ be the groupoid correspondence defined from a diagram $\mathfrak{X}$ as in Definition [Definition 72](#def){reference-type="ref" reference="def"}. If $\pi$ is a continuous bijection then $\operatorname{Corr}(\pi,p_1,p_2)$ is an imprimitivity bimodule from $C^*(R(p_2))$ to $C^*(R(p_1))$ with the left $C^*(R(p_2))$-valued inner product defined as follows. The $C_c(R(p_2))$-valued inner product $\langle\langle\xi_1, \xi_2\rangle\rangle\in C_c(R(p_2))$ of $\xi_1, \xi_2 \in C_c(Z_\mathfrak{X})$ is defined as $$\label{left} \langle\langle\xi_1, \xi_2\rangle\rangle(y,y') = \sum_{\substack{ x\in X_1 \text{ with } \\ (x,x')\in R(p_1)}} \xi_1(x,y')\overline{\xi_2(x,y)}, \quad (y,y')\in R(p_2),$$ for $x'\in X_1$ with $(x',y)\in Z_\mathfrak{X}$, and this left inner product is extended to a left $C^*(R(p_2))$-valued inner product on $\operatorname{Corr}(\pi,p_1,p_2)$ by continuity.* *Proof.* Notice that $\sigma: Z_\mathfrak{X}\rightarrow R(p_2)^{(0)}$ is an open map; and since $\pi$ is a bijection, the map $\sigma$ induces a bijection from $R(p_1)\backslash Z_\mathfrak{X}$ onto $R(p_2)^{(0)}$. Moreover, $Z_\mathfrak{X}$ is a left principal $R(p_1)$-space and a right principal $R(p_2)$-space. Now, as in [@MRW], the operation ([\[left\]](#left){reference-type="ref" reference="left"}) defines a left semi-inner product, and the correspondence ${}_{C_c(R(p_2))}C_c(Z_\mathfrak{X})_{C_c(R(p_1))}$ becomes a pre-imprimitivity bimodule. ◻ *Example 77*. Let $X_1,X_2$ be compact Hausdorff spaces and let $\pi: X_1\rightarrow X_2$ be a continuous map. The map $\pi$ induces a unital homomorphism via pullback $$\pi^*: C(X_2)\rightarrow C(X_1), \hspace{1cm} \pi^*(g)(x)=g\left(\pi(x)\right),$$ for $g\in C(X_2),$ $x\in X_1$. The homomorphism $\pi^*$ allows one to view $C(X_1)$ as a $C(X_2)-C(X_1)$ correspondence. We show that this correspondence is isomorphic to the correspondence ${}_{C(X_2)}C(Z_\mathfrak{X})_{C(X_1)}$ associated to the following diagram. We have $Z_\mathfrak{X}= \{ (x,\pi(x))\in X_1\times X_2: x\in X_1\}$. Notice that $R(\operatorname{id}_{X_j})=\{(x,x):x\in X_j\}$ for $j=1,2$. Therefore, we may identify the groupoid $R(\operatorname{id}_{X_j})$ with the trivial groupoid $X_j$, for $j=1,2$.
The diagram above gives us the correspondence ${}_{C(X_2)}C(Z_\mathfrak{X})_{C(X_1)}$ with the operations $$\begin{aligned} \xi\cdot f \left(x,\pi(x)\right) &=\xi\left(x,\pi(x)\right)f(x)\\ g\cdot \xi(x,\pi(x)) &=g\left(\pi(x)\right)\xi\left(x,\pi(x)\right)\\ \langle\xi_1,\xi_2\rangle(x) &= \overline{\xi_1\left(x,\pi(x)\right)}\xi_2\left(x,\pi(x)\right)\end{aligned}$$ for $\xi, \xi_1,\xi_2\in C(Z_\mathfrak{X}), g\in C(X_2),$ and $f\in C(X_1)$. We can define the isomorphism of correspondences $\phi: C(Z_\mathfrak{X})\rightarrow C(X)$ by $$\phi(\xi)(x)=\xi(x,\pi(x)).$$ Indeed, it is clear that $\phi$ has the inverse $\phi^{-1}(f)(x,\pi(x))=f(x)$ and a short computation shows that $\phi$ is compatible with the correspondence structure. **Lemma 78**. *Assume we have the following diagram* *where $p_1:X_1\to \tilde{Y}_1$, $p_2:X_2\to \tilde{Y}_2$ and $p_3:X_3\to \tilde{Y}_3$ are Hausdorff resolutions; and $\pi_1$ and $\pi_2$ are continuous maps. Write $\mathfrak{X}_1$ for the left part of the diagram and $\mathfrak{X}_2$ for the right part and $\mathfrak{X}_3$ for the diagram* *For $f\in C_c(Z_{\mathfrak{X}_2})$, $\xi\in C_c(Z_{\mathfrak{X}_1})$, the map $F(f,\xi)$ defined by $$F(f,\xi)(x,r)=\sum_{\substack{y\in Y with \\\pi\circ p_1(x)=p_2(y)}} \xi(x,y)f(y,r)$$ is an element of $C_c(Z_{\mathfrak{X}_3})$.* *Proof.* Consider the closed subset $$Z_{\mathfrak{X}_1} * Z_{\mathfrak{X}_2}=\left\{ \big((x,y),(y,r)\big): (x,y) \in Z_{\mathfrak{X}_1} \hspace{.2cm} and \hspace{.2cm} (y,r) \in Z_{\mathfrak{X}_2} \right\}$$ of $Z_{\mathfrak{X}_1}\times Z_{\mathfrak{X}_2}$, and the continuous surjective map $$V: Z_{\mathfrak{X}_1} * Z_{\mathfrak{X}_2} \rightarrow Z_{\mathfrak{X}_3}, \hspace{.5cm} \left((x,y),(y,r)\right) \mapsto (x,r).$$ Give $f$ and $\xi$ as in the statement of the lemma, we define the map $g: Z_{\mathfrak{X}_1}* Z_{\mathfrak{X}_2}\rightarrow \mathbb{C}$ by $$g\left((x,y),(y,r)\right)=\xi(x,y)f(y,r).$$ Then $\mathrm{supp}(g)$ is contained in the compact $\mathrm{supp}(\xi)\times\mathrm{supp}(f)$, and thus $g$ is compactly supported. We now show that $\mathrm{supp}(F(f,\xi))\subset V(\mathrm{supp}g)$. Assume $(x,r)\notin V(\mathrm{supp}g)$. Then, we have $g(m)=0$ for any $m\in V^{-1}\left((x,r)\right).$ Since $$F(f,\xi)(x,r)= \sum_{m\in V^{-1}\left((x,r)\right)}g(m),$$ we have $F(f,\xi)(x,r)=0,$ which completes the proof. ◻ **Theorem 79**. *In the notation of Lemma [Lemma 78](#complemma){reference-type="ref" reference="complemma"}, the map $\Phi$ defined by $$C_c(Z_{\mathfrak{X}_2}) \otimes C_c(Z_{\mathfrak{X}_1}) \rightarrow C_c(Z_{\mathfrak{X}_3}), \hspace{.5cm} \text{$f\otimes \xi \mapsto F(f,\xi)$}$$ is a $C_c(R(p_3)) - C_c(R(p_1))$ pre-correspondence isomorphism. The map $\Phi$ induces an isomorphism of $C^*(R(p_3)) - C^*(R(p_1))$-correspondences $$\operatorname{Corr}(\pi_2,p_2,p_3)\otimes_{C^*(R(p_2))}\operatorname{Corr}(\pi_1,p_1,p_2)\cong \operatorname{Corr}(\pi_2\circ \pi_1,p_1,p_3).$$* The proof is omitted as it only consists of long computations verifying that the module structure and inner products are respected by $\Phi$. The following immediate corollary of Theorem [Theorem 79](#comp){reference-type="ref" reference="comp"} shows that the correspondence $\operatorname{Corr}(\pi,p_1,p_2)$ only depends on $\pi$ up to canonical Morita equivalence. **Corollary 80**. *Assume that $\pi:\tilde{Y}\to \tilde{Y}'$ is a continuous mapping between compact, locally Hausdorff spaces. 
Let $p_j: X_j\rightarrow \tilde{Y}$ and $p_j': X_j'\rightarrow \tilde{Y'}$ be surjective local homeomorphisms from locally compact, Hausdorff spaces, for $j=1,2$. Then, for $j=1,2$, $\operatorname{Corr}(\pi, p_1,p_j')$ and $\operatorname{Corr}(\pi, p_2, p_j')$ are Morita equivalent via $\operatorname{Corr}(\operatorname{id}_{\tilde{Y}},p_1,p_2)$, and $\operatorname{Corr}(\pi, p_j,p_1')$ and $\operatorname{Corr}(\pi, p_j, p_2')$ are Morita equivalent via $\operatorname{Corr}(\operatorname{id}_{\tilde{Y}'},p_1',p_2')$.* **Lemma 81**. *In the notation of Lemma [Lemma 78](#complemma){reference-type="ref" reference="complemma"}, if $\pi_1$ lifts to a proper morphism $(\Pi_1,\pi_1):p_1\to p_2$ then $$\operatorname{Corr}(\pi_2,p_2,p_3)\otimes_{\Pi_1^*}C^*(R(p_2))\cong \operatorname{Corr}(\pi_2\circ \pi_1,p_1,p_3).$$ Similarly, if $\pi_2$ lifts to a proper morphism $(\Pi_2,\pi_2):p_2\to p_3$ then $$C^*(R(p_3))\otimes_{\Pi_2^*}\operatorname{Corr}(\pi_1,p_1,p_2)\cong \operatorname{Corr}(\pi_2\circ \pi_1,p_1,p_3).$$ In particular, if $(\Pi,\pi):p_1\to p_2$ is a proper morphism then there is a Morita equivalence of correspondences from $\operatorname{Corr}(\pi,p_1,p_2)$ to the $C^*(R(p_2))-C^*(R(p_1))$-correspondence ${}_{\Pi^*}C^*(R(p_1))$, i.e. the right $C^*(R(p_1))$-Hilbert module $C^*(R(p_1))$ with left action defined from $\Pi^*$.* *Proof.* The proof of the first two isomorphisms is analogous to Lemma [Lemma 78](#complemma){reference-type="ref" reference="complemma"} and Theorem [Theorem 79](#comp){reference-type="ref" reference="comp"} and is omitted. To prove the final statement, we note that $\operatorname{Corr}(\operatorname{id}_{\tilde{Y}_1},p_1,p_1)=C^*(R(p_1))$ and so $${}_{\Pi^*}C^*(R(p_1))=C^*(R(p_2))\otimes_{\Pi^*}\operatorname{Corr}(\operatorname{id}_{\tilde{Y}_1},p_1,p_1)\cong\operatorname{Corr}(\pi,p_1,p_2).$$ ◻ **Lemma 82**. *Consider a diagram as in Definition [Definition 72](#def){reference-type="ref" reference="def"}. The left action of $C^*(R(p_2))$ on $\operatorname{Corr}(\pi,p_1,p_2)$ is via $C^*(R(p_1))$-compact operators if $\pi$ lifts to a proper morphism for some Hausdorff resolutions of $\tilde{Y}_1$ and $\tilde{Y}_2$.* *Proof.* The statement that the left action of $C^*(R(p_2))$ on $\operatorname{Corr}(\pi,p_1,p_2)$ is via $C^*(R(p_1))$-compact operators is Morita invariant, so by Lemma [Lemma 76](#moritainvarcorr){reference-type="ref" reference="moritainvarcorr"} it suffices to prove the lemma for particular choices of Hausdorff resolutions $p_1$ and $p_2$. If it is the case that $\pi$ lifts to a proper morphism for some Hausdorff resolutions of $\tilde{Y}_1$ and $\tilde{Y}_2$, we can therefore assume that $\pi$ lifts to a proper morphism $p_1\to p_2$. In this case, the left action of $C^*(R(p_2))$ on $\operatorname{Corr}(\pi,p_1,p_2)$ is via $C^*(R(p_1))$-compact operators by the final statement of Lemma [Lemma 81](#propcomplemma){reference-type="ref" reference="propcomplemma"}. ◻ ## Functoriality in $K$-theory of compact, locally Hausdorff spaces {#funcink} In this section we consider compact, locally Hausdorff spaces. We are interested in their $K$-theory. First we show that the $K$-theory of compact, locally Hausdorff spaces is well-defined. **Proposition 83**. *Consider a compact, locally Hausdorff space $\tilde{Y}$.
Defining the $K$-theory of $\tilde{Y}$ as $$K^*(\tilde{Y}):=K_*(C^*(R(\psi))),$$ for some Hausdorff resolution $\psi:X\to\tilde{Y}$, produces a group uniquely determined up to canonical isomorphism.* *Proof.* If $\psi_1$ and $\psi_2$ are two different Hausdorff resolutions of $\tilde{Y}$, Lemma [Lemma 76](#moritainvarcorr){reference-type="ref" reference="moritainvarcorr"} shows that $\operatorname{Corr}(\operatorname{id}_{\tilde{Y}},\psi_1,\psi_2)$ is a Morita equivalence producing the sought-after isomorphism $K_*(C^*(R(\psi_1)))\cong K_*(C^*(R(\psi_2)))$. ◻ Next, we study the contravariant properties of $K$-theory for a sub-class of continuous mappings. In compliance with the results of the last subsection, we say that a continuous map $$\pi:\tilde{Y}_1\to \tilde{Y}_2,$$ is HRP (Hausdorff Resolution Proper) if $\pi$ lifts to a proper morphism $p_1\to p_2$ for some Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$. We note the following consequence of Lemma [Lemma 81](#propcomplemma){reference-type="ref" reference="propcomplemma"} and [Lemma 82](#nicemaps){reference-type="ref" reference="nicemaps"}. **Theorem 84**. *If the continuous map of compact, locally Hausdorff spaces $$\pi:\tilde{Y}_1\to \tilde{Y}_2,$$ is HRP, there is an associated class $[\pi]\in KK_0(C^*(R(p_2)),C^*(R(p_1)))$ for any Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$. Moreover, if $\pi':\tilde{Y}_2\to \tilde{Y}_3$ is another HRP-map, then we have the following Kasparov product $$[\pi']\otimes_{C^*(R(p_2))}[\pi]=[\pi'\circ \pi]\in KK_0(C^*(R(p_3)),C^*(R(p_1))),$$ for any Hausdorff resolution $p_3:X_3\to \tilde{Y}_3$.* Contravariance of $K$-theory of compact, locally Hausdorff spaces under HRP-maps is now immediate. **Corollary 85**. *Taking $K$-theory of compact, locally Hausdorff spaces defines a contravariant functor from the category of compact, locally Hausdorff spaces with morphisms being HRP-maps to the category of $\mathbb{Z}/2$-graded abelian groups.* We now turn to wrong way functoriality. Assume that $\tilde{g}: \tilde{Y}_1 \rightarrow \tilde{Y}_2$ is a surjective local homeomorphism. Our goal is to define a wrong way map from the $K$-theory of $\tilde{Y}_1$ to that of $\tilde{Y}_2$. Take a Hausdorff resolution $p: X \rightarrow \tilde{Y}_1$. The following diagram is clearly commutative: $$\begin{CD} X@>\operatorname{id}_X >> X \\ @V p VV @VV \tilde{g}\circ p V \\ \tilde{Y}_1 @>\tilde{g} >> \tilde{Y}_2 \\ \end{CD}.$$ Since the vertical maps are both surjective local homeomorphisms, $\tilde{g}\circ p: X \rightarrow \tilde{Y}_2$ is a Hausdorff resolution. We can form the étale groupoids $R(p)$ and $R(\tilde{g}\circ p)$ over $X$, whose associated groupoid algebras have spectra $\tilde{Y}_1$ and $\tilde{Y}_2$, respectively. Furthermore, $R(p) \subseteq R(\tilde{g}\circ p)$ as an open subgroupoid. This inclusion at the groupoid level leads to an inclusion of $C^*$-algebras, $C^*(R(p)) \subseteq C^*(R(\tilde{g} \circ p))$. The induced map on $K$-theory will be denoted by $$\tilde{g}!: K^*(\tilde{Y}_1)=K_*(C^*(R(p))) \rightarrow K_*(C^*(R(\tilde{g} \circ p)))=K^*(\tilde{Y}_2).$$ Following the same argument as in the proof of Proposition [Proposition 83](#geenknd){reference-type="ref" reference="geenknd"}, we arrive at the following. **Proposition 86**.
*If $\tilde{g}: \tilde{Y}_1 \rightarrow \tilde{Y}_2$ is a surjective local homeomorphism of compact, locally Hausdorff spaces, then the map $$\tilde{g}!: K^*(\tilde{Y}_1)\rightarrow K^*(\tilde{Y}_2)$$ is well-defined, i.e. independent of the choice of Hausdorff resolution.* The next proposition shows that we indeed have wrong way functoriality for surjective local homeomorphisms of compact, locally Hausdorff spaces. **Proposition 87**. *If $\tilde{g}_1: \tilde{Y}_1 \rightarrow \tilde{Y}_2$ and $\tilde{g}_2: \tilde{Y}_2 \rightarrow \tilde{Y}_3$ are surjective local homeomorphisms of compact, locally Hausdorff spaces, then $$\tilde{g}_2!\circ \tilde{g}_1!=(\tilde{g}_2\circ \tilde{g}_1)!: K^*(\tilde{Y}_1)\rightarrow K^*(\tilde{Y}_3).$$* *Proof.* The claim follows from the fact that both sides can be defined from the chain of inclusions $$C^*(R(p)) \subseteq C^*(R(\tilde{g}_1 \circ p))\subseteq C^*(R(\tilde{g}_2 \circ \tilde{g}_1 \circ p))$$ where $p:X\to \tilde{Y}_1$ is a Hausdorff resolution. The first inclusion defines $\tilde{g}_1!$, the second inclusion defines $\tilde{g}_2!$ and the composed inclusion defines $(\tilde{g}_2\circ \tilde{g}_1)!$. ◻ *Remark 88*. We note that for wrong way maps too, we could work at the level of $KK$-theory. Associated with a surjective local homeomorphism $\tilde{g}: \tilde{Y}_1 \rightarrow \tilde{Y}_2$, we can for any Hausdorff resolutions $p_1:X_1\to \tilde{Y}_1$ and $p_2:X_2\to \tilde{Y}_2$ define a wrong way class $$[g!]:=\iota^*\operatorname{Corr}(\operatorname{id}_{\tilde{Y}_2},p_2,\tilde{g}\circ p_1)\in KK_0(C^*(R(p_1)),C^*(R(p_2))),$$ where $\iota$ denotes the ideal inclusion $C^*(R(p_1)) \subseteq C^*(R(\tilde{g} \circ p_1))$. We can then identify $g!$ with the Kasparov product from the right $$\cdot\otimes_{C^*(R(p_1))} [g!]:KK_*(\mathbb{C},C^*(R(p_1)))\to KK_*(\mathbb{C},C^*(R(p_2))),$$ under the natural isomorphisms $$KK_*(\mathbb{C},C^*(R(p_1)))\cong K_*(C^*(R(p_1)))\quad\mbox{and}\quad KK_*(\mathbb{C},C^*(R(p_2)))\cong K_*(C^*(R(p_2))).$$ In this setting, the analogue of Proposition [Proposition 87](#wrongwayfuncfunc){reference-type="ref" reference="wrongwayfuncfunc"} is the Kasparov product statement that $$[g_1!]\otimes_{C^*(R(p_2))}[g_2!]=[(g_2\circ g_1)!]\in KK_0(C^*(R(p_1)),C^*(R(p_3))),$$ which can be verified using Theorem [Theorem 79](#comp){reference-type="ref" reference="comp"} and Lemma [Lemma 81](#propcomplemma){reference-type="ref" reference="propcomplemma"}. *Example 89*. If we consider the non-Hausdorff dynamical system $(X^u({\bf P})/{\sim_0},\tilde{g})$ associated with a Wieler solenoid, and the Hausdorff resolution $q:X^u({\bf P})\to X^u({\bf P})/{\sim_0}$, then as an element of $KK_0(C^*(G_0({\bf P})),C^*(G_0({\bf P})))$ we have that $$[g!]:=\iota^*\operatorname{Corr}(\operatorname{id}_{X^u({\bf P})/{\sim_0}},q,\tilde{g}\circ q)=[E],$$ where $E$ is the $C^*(G_0({\bf P}))-C^*(G_0({\bf P}))$-correspondence defined as $E:=C^*(G_0({\bf P}))$ as a right Hilbert module with left action defined from $\alpha$. The correspondence $E$ was considered in Proposition [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"} and will play a role in $K$-theory computations below. *Example 90*. Let $g:Y_1\to Y_2$ be a surjective local homeomorphism of compact, Hausdorff spaces. Write $E_g$ for the finitely generated projective $C(Y_2)$-module $C(Y_1)$, with right action defined from $g^*$.
We can equipp $E_g$ with the $C(Y_2)$-valued right inner product $$\langle f_1,f_2\rangle(x):=\sum_{y\in g^{-1}(x)} \overline{f_1(y)}f_2(y), \quad x\in Y_2, \; f_1,f_2\in E_g=C(Y_1).$$ The algebra $C(Y_1)$ acts adjointably from the left on $E_g=C(Y_1)$ by pointwise multiplication. With these structures, $E_g$ is a $C(Y_1)-C(Y_2)$-correspondence in which $C(Y_1)$ acts as $C(Y_2)$-compact operators. It is immediate from the definition of the inner product on $E_g$ that $\mathbb{K}_{C(Y_2)}(E_g)=C^*(R(g))$. By taking $p=\operatorname{id}_{Y_1}$ in the constructions above, it follows that $$[g!]=[E_g]\in KK_0(C(Y_1),C(Y_2)).$$ In the case that $Y=Y_1=Y_2$, the class $[E_g]\in KK_0(C(Y),C(Y))$ plays an important role in the $K$-theory, or more generally $KK$-theory, of the Cuntz-Pimsner algebra associated with the module $E_g$ [@GMR; @MR1426840]. For Wieler solenoids this is of interest due to the results of the next subsection. ## Wrong way maps and $K$-theory of the stable and unstable Ruelle algebra In Corollary [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"} we saw that the stable Ruelle algebra $C^*(G^s({\bf P}))\rtimes \mathbb{Z}$ of a Wieler solenoid is a Cuntz-Pimsner algebra over $C^*(G_0({\bf P}))$. From the general theory of Cuntz-Pimsner algebras, the $K$-theory of the stable Ruelle algebra can therefore be computed from $K^*(X^u({\bf P})/{\sim_0})$ -- the $K$-theory of $C^*(G_0({\bf P}))$. Using Kaminker-Putnam-Whittaker duality result [@KPW], the $K$-theory of the unstable Ruelle algebra can similarly be computed from the $K$-homology group $K_*(X^u({\bf P})/{\sim_0}):=K^*(C^*(G_0({\bf P})))$. Throughout the subsection, we tacitly identify the stable Ruelle algebra $C^*(G^s({\bf P}))\rtimes \mathbb{Z}$ with the Cuntz-Pimsner algebra $O_E$ over $C^*(G_0({\bf P}))$ in Corollary [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"}. **Theorem 91**. *Assume that $(X,\phi)$ is a Wieler solenoid and write $(X^u({\bf P})/{\sim_0},\tilde{g})$ for the associated non-Hausdorff dynamical system. The $K$-theory of the stable Ruelle algebra $K_*(C^*(G^s({\bf P}))\rtimes \mathbb{Z})$ fits into a six term exact sequence with the $K$-theory of $X^u({\bf P})/{\sim_0}$: $$\begin{CD} K^0(X^u({\bf P})/{\sim_0}) @>1-\tilde{g}!>> K^0(X^u({\bf P})/{\sim_0}) @>j_S>> K_0(C^*(G^s({\bf P}))\rtimes \mathbb{Z})\\ @A\beta_S AA @. @VV\beta_S V \\ K_1(C^*(G^s({\bf P}))\rtimes \mathbb{Z}) @<j_S<< K^1(X^u({\bf P})/{\sim_0}) @<1-\tilde{g}!<< K^1(X^u({\bf P})/{\sim_0}) \end{CD}$$ where $j_S:C^*(G_0({\bf P}))\hookrightarrow C^*(G^s({\bf P}))\rtimes \mathbb{Z}$ denotes the inclusion and $\beta_S$ is the Pimsner boundary map in $KK_1(C^*(G^s({\bf P}))\rtimes \mathbb{Z},C^*(G_0({\bf P})))$, cf. [@MR4031052; @GMR; @MR1426840].* *Proof.* By standard results for Cuntz-Pimsner algebras [@MR4031052; @GMR; @MR1426840], we have a six term exact sequence $$\begin{CD} K_0(C^*(G_0({\bf P}))) @>1-\cdot\otimes_{C^*(G_0({\bf P}))}[E]>> K_0(C^*(G_0({\bf P}))) @>j_S>> K_0(C^*(G^s({\bf P}))\rtimes \mathbb{Z})\\ @A\beta_S AA @. @VV\beta_S V \\ K_1(C^*(G^s({\bf P}))\rtimes \mathbb{Z}) @<j_S<< K_1(C^*(G_0({\bf P}))) @<1-\cdot\otimes_{C^*(G_0({\bf P}))}[E]<< K_1(C^*(G_0({\bf P}))) \end{CD}$$ where $E$ is the $C^*(G_0({\bf P}))-C^*(G_0({\bf P}))$-correspondence from Corollary [Proposition 64](#nonunitalcp){reference-type="ref" reference="nonunitalcp"}. 
The theorem now follows from the identity $[\tilde{g}!]=[E]\in KK_0(C^*(G_0({\bf P})),C^*(G_0({\bf P})))$ in Example [Example 89](#wrongforwiel){reference-type="ref" reference="wrongforwiel"}. ◻ Using Kaminker-Putnam-Whittaker duality result [@KPW], we can derive an analogous result for the $K$-theory of the unstable Ruelle algebra $C^*(G^u({\bf P}))\rtimes \mathbb{Z}$. Here one uses $K$-homology of $X^u({\bf P})$ instead of $K$-theory, and the theory for $K$-homology of compact, locally Hausdorff spaces can be developed in the same way as that of $K$-theory in Subsection [6.2](#funcink){reference-type="ref" reference="funcink"}. **Theorem 92**. *Assume that $(X,\phi)$ is a Wieler solenoid and write $(X^u({\bf P})/{\sim_0},\tilde{g})$ for the associated non-Hausdorff dynamical system. The $K$-theory of the unstable Ruelle algebra $K_*(C^*(G^u({\bf P}))\rtimes \mathbb{Z})$ fits into a six term exact sequence the $K$-homology of $X^u({\bf P})/{\sim_0}$: $$\begin{CD} K_0(X^u({\bf P})/{\sim_0}) @>1-[\tilde{g}!]\otimes\cdot>> K_0(X^u({\bf P})/{\sim_0}) @>\beta_U>> K_0(C^*(G^u({\bf P}))\rtimes \mathbb{Z})\\ @Aj_UAA @. @VVj_U V \\ K_1(C^*(G^u({\bf P}))\rtimes \mathbb{Z}) @<\beta_U<< K_1(X^u({\bf P})/{\sim_0}) @<1-[\tilde{g}!]\otimes\cdot<< K_1(X^u({\bf P})/{\sim_0}) \end{CD}$$ where $j_U$ is Kaminker-Putnam-Whittaker dual to $j_S$ and $\beta_U$ is Kaminker-Putnam-Whittaker dual to the Pimsner boundary map in $KK_1(C^*(G^s({\bf P}))\rtimes \mathbb{Z},C^*(G_0({\bf P})))$, cf. [@MR4031052; @GMR; @MR1426840].* ## Groupoid homology Although we have only discussed the $K$-theory of stable and stable Ruelle groupoid $C^*$-algebras, one can also use the results in this paper to compute the homology of these groupoids. For the stable groupoid, one uses the main result of [@DeeYas] (stated above in Theorem [Theorem 28](#MainDeeYas){reference-type="ref" reference="MainDeeYas"}) and [@MR4030921 Proposition 4.7 ]. Again, determining the map (now on groupoid homology) associated to the open inclusion $G_0({\bf P}) \subseteq G_1({\bf P})$ is the key to explicit computations. For the stable Ruelle groupoid, one uses the homology of the stable groupoid and [@MR4170644 Lemma 1.3] (also see [@MR4283513 Theorem 3.8]). ## Examples In this section of the paper we discuss a few explicit examples. In the first few, the original map $g$ is a local homeomorphism and hence $X^u({\bf P})/{\sim_0}=Y$ and $\tilde{g}=g$. Nevertheless, these examples illustrate the importance of computing the wrong-way map in $K$-theory computation. We also summarize the case of the aab/ab-solenoid where $X^u({\bf P})/{\sim_0} \neq Y$ and $\tilde{g} \neq g$, which is discussed in much more detail in [@DeeYas]. In fact, typically $g: Y \rightarrow Y$ is not a local homeomorphism, so that $X^u({\bf P})/{\sim_0} \neq Y$ and $\tilde{g} \neq g$, see for example [@DGMW Page 14] where tiling spaces are discussed. *Example 93*. Suppose $Y$ is a manifold and $g: Y \rightarrow Y$ is an expanding endomorphism as defined by Shub [@ShubExp]. In this case, $g$ is a covering map and hence a local homeomorphism. In [@DeeREU] it is shown that the map $g!$ is the transfer map associated to the cover, compare to Example [Example 90](#localhomeoexample){reference-type="ref" reference="localhomeoexample"} above. In particular, $g!$ is a rational isomorphism in the case of $K$-theory and in the case of homology has even better properties, see [@DeeREU] for details. 
The case when $Y$ is the circle and $g$ is the two-fold cover from the circle to itself is a special case of this situation. It is well known, or one can check directly, that $g!$ is given by multiplication by 2 in degree zero and the identity in degree one. Hence, $$K_*(C^*(G^s({\bf P})))\cong \left\{ \begin{array}{cc} \mathbb{Z}\left[ \frac{1}{2} \right] & *=0 \\ \mathbb{Z}& *=1 \end{array} \right.$$ and $$K_*(C^*(G^s({\bf P}))\rtimes \mathbb{Z})\cong \left\{ \begin{array}{cc} \mathbb{Z}& *=0 \\ \mathbb{Z}& *=1 \end{array} \right.$$ The reader can see [@DeeREU] for other explicit computations of the transfer map in the case of flat manifolds. The unstable and unstable Ruelle algebras associated to a Wieler solenoid are relevant in the study of the HK-conjecture. However, the stable and stable Ruelle algebras are not relevant for the HK-conjecture because the unit spaces of the relevant groupoids are not totally disconnected (except for the case of shifts of finite type). For example, in the case of a flat manifold $Y$, we have that the unit space is $\mathbb{R}^{{\rm dim}(Y)}$. *Example 94*. When the relevant Smale space is a two-sided shift of finite type, the relevant Wieler pre-solenoid is the one-sided shift of finite type, see [@Wie]. In this case, the map $g$ is the shift map and $g!$ is not a rational isomorphism even though $g$ is a local homeomorphism. This fact is in contrast with the previous example. *Example 95*. We will discuss the case of the $p/q$-solenoid [@MR3628917] briefly. The details will be published elsewhere and were obtained at the same time as [@DeeREU]. As such, we give a short summary of the results. Let $S^1$ be the unit circle in the complex plane and $1<q<p$ be positive integers with $\gcd(p, q)=1$. The $p/q$-solenoid can be realized as an inverse limit where $Y$ is the $q$-solenoid. That is, $$Y=S_q= \{ (z_0, z_1, z_2, \ldots) \mid z_i \in S^1 \hbox{ and }z_{i+1}^q=z_i \}.$$ The map $g$ is defined via $$g(z_0, z_1, z_2, \ldots)=(z_1^p, z_2^p, \ldots)$$ where $(z_0, z_1, z_2, \ldots) \in Y=S_q$. Since $g$ is a local homeomorphism in this example, $K_*(C^*(G_0({\bf P})))\cong K^*(S_q)$. Furthermore, the $K$-theory of $S_q$ is known and given by $$K^*(S_q) \cong \left\{ \begin{array}{cc} \mathbb{Z}& *=0 \\ \mathbb{Z}\left[ \frac{1}{q} \right] & *=1 \end{array} \right.$$ Then one computes that $g!$ is multiplication by $p$ in degree zero and the identity in degree one. Hence, $$K_*(C^*(G^s({\bf P})))\cong \left\{ \begin{array}{cc} \mathbb{Z}\left[ \frac{1}{p} \right] & *=0 \\ \mathbb{Z}\left[ \frac{1}{q} \right] & *=1 \end{array} \right.$$ and $$K_*(C^*(G^s({\bf P}))\rtimes \mathbb{Z})\cong \left\{ \begin{array}{cc} \mathbb{Z}\left[ \frac{1}{q} \right] & *=0 \\ \mathbb{Z}\left[ \frac{1}{q} \right] \oplus \mathbb{Z}/ (p-1)\mathbb{Z}& *=1 \end{array} \right.$$ *Example 96*. When $(Y, g)$ is the aab/ab-solenoid, the map $g$ is not a local homeomorphism. The $K$-theory of $C^*(G_0({\bf P}))$ and the induced map $\tilde{g}!$ were computed in [@DeeYas]. A summary is as follows. We have that $$K_*(C^*(G_0({\bf P}))) \cong \left\{ \begin{array}{cc} \mathbb{Z}\oplus \mathbb{Z}& *=0 \\ \mathbb{Z}& *=1 \end{array} \right.$$ with $\tilde{g}!$ given by $\begin{bmatrix} 2 & 1\\ 1 & 1 \end{bmatrix}$ in degree zero and the identity in degree one.
Hence, $$K_*(C^*(G^s({\bf P}))) \cong \left\{ \begin{array}{cc} \mathbb{Z}\oplus \mathbb{Z}& *=0 \\ \mathbb{Z}& *=1 \end{array} \right.$$ and $$K_*(C^*(G^s({\bf P}))\rtimes \mathbb{Z})\cong \left\{ \begin{array}{cc} \mathbb{Z}& *=0 \\ \mathbb{Z}& *=1 \end{array} \right.$$ Other similar computations, both for one dimensional solenoids and more general constructions from tiling spaces, can be found in [@Gon1; @Gon2; @RenAP; @ThoAMS; @ThoSol; @Yi]. 10 M. Achigar, A. Artigue, and I. Monteverde. Expansive homeomorphisms on non-Hausdorff spaces. , 207:109--122, 2016. R. J. Archbold and D. W. B. Somerset. Transition probabilities and trace functions for $C^*$-algebras. , 73(1):81--111, 1993. F. Arici and A. Rennie. The Cuntz-Pimsner extension and mapping cone exact sequences. , 125(2):291--319, 2019. N. D. Burke and I. F. Putnam. Markov partitions and homology for $n/m$-solenoids. , 37(3):716--738, 2017. R. Chaiser, M. Coates-Welsh, R. J. Deeley, A. Farhner, J. Giornozi, R. Huq, L. Lorenzo, J. Oyola-Cortes, M. Reardon, and A. M. Stocker. Invariants for the Smale space associated to an expanding endomorphism of a flat manifold. , 16(1):177--199, 2023. L. O. Clark, A. an Huef, and I. Raeburn. The equivalence relations of local homeomorphisms and Fell algebras. , 19:367--394, 2013. V. Deaconu. On groupoids and $C^*$-algebras from self-similar actions. , 27:923--942, 2021. R. J. Deeley, M. Goffeng, B. Mesland, and M. F. Whittaker. Wieler solenoids, Cuntz-Pimsner algebras and $K$-theory. , 38(8):2942--2988, 2018. R. J. Deeley, M. Goffeng, and A. Yashinski. Smale space $C^*$-algebras have nonzero projections. , 148(4):1625--1639, 2020. R. J. Deeley, M. Goffeng, and A. Yashinski. Fell algebras, groupoids, and projections. , 150(11):4891--4907, 2022. R. J. Deeley and A. Yashinski. The stable algebra of a Wieler solenoid: inductive limits and $K$-theory. , 40(10):2734--2768, 2020. J. Dixmier. . North Holland, Amsterdam, 1982. S. Echterhoff, S. Kaliszewski, J. Quigg, and I. Raeburn. A categorical approach to imprimitivity theorems for $C^*$-dynamical systems. , 180(850):viii+169, 2006. F. T. Farrell and L. E. Jones. Expanding immersions on branched manifolds. , 103(1):41--101, 1981. C. Farsi, A. Kumjian, D. Pask, and A. Sims. Ample groupoids: equivalence, homology, and Matui's HK conjecture. , 12(2):411--451, 2019. M. Goffeng, B. Mesland, and A. Rennie. Shift-tail equivalence and an unbounded representative of the Cuntz-Pimsner extension. , 38(4):1389--1421, 2018. D. Gonçalves. New $C^*$-algebras from substitution tilings. , 57(2):391--407, 2007. D. Gonçalves. On the $K$-theory of the stable $C^*$-algebras from substitution tilings. , 260(4):998--1019, 2011. D. Gonçalves and M. Ramirez-Solano. On the $K$-theory of $C^*$-algebras associated to substitution tilings. , 551:133, 2020. A. a. Huef, A. Kumjian, and A. Sims. A Dixmier-Douady theorem for Fell algebras. , 260(5):1543--1581, 2011. J. Kaminker, I. F. Putnam, and M. F. Whittaker. K-theoretic duality for hyperbolic dynamical systems. , 730:263--299, 2017. D. B. Killough. . PhD thesis, University of Victoria, 2009. D. B. Killough and I. F. Putnam. Trace asymptotics for $C^*$-algebras from Smale spaces. , 143(1):317--325, 2015. M. Laca and S. Neshveyev. KMS states of quasi-free dynamics on Pimsner algebras. , 211(2):457--482, 2004. M. Macho Stadler and M. O'uchi. Correspondence of groupoid $C^\ast$-algebras. , 42(1):103--119, 1999. J. A. Mingo. -algebras associated with one-dimensional almost periodic tilings. , 183(2):307--337, 1997. P. S. Muhly, J. N. 
Renault, and D. P. Williams. Equivalence and isomorphism for groupoid $C^\ast$-algebras. , 17(1):3--22, 1987. B. v. Munster. The Hausdorff quotient. Bachelor Thesis, Mathematisch Instituut, Universiteit Leiden, 6 2014. the thesis can be found on the link:\ https://math.leidenuniv.nl/scripties/BachVanMunster.pdf. E. Ortega. The homology of the Katsura-Exel-Pardo groupoid. , 14(3):913--935, 2020. M. V. Pimsner. A class of $C^*$-algebras generalizing both Cuntz-Krieger algebras and crossed products by ${\bf Z}$. In *Free probability theory (Waterloo, ON, 1995)*, volume 12 of *Fields Inst. Commun.*, pages 189--212. Amer. Math. Soc., Providence, RI, 1997. I. F. Putnam. -algebras from Smale spaces. , 48(1):175--195, 1996. I. F. Putnam and J. Spielberg. The structure of $C^\ast$-algebras associated with hyperbolic dynamical systems. , 163(2):279--299, 1999. J. Renault. The Radon-Nikodým problem for appoximately proper equivalence relations. , 25(5):1643--1672, 2005. D. Ruelle. . Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2004. The mathematical structures of equilibrium statistical mechanics. M. Shub. Endomorphisms of compact differentiable manifolds. , 91:175--199, 1969. M. Skeide. Unit vectors, Morita equivalence and endomorphisms. , 45(2):475--518, 2009. K. Thomsen. -algebras of homoclinic and heteroclinic structure in expansive dynamics. , 206(970):x+122, 2010. K. Thomsen. The homoclinic and heteroclinic $C^*$-algebras of a generalized one-dimensional solenoid. , 30(1):263--308, 2010. S. Wieler. Smale spaces via inverse limits. , 34(6):2066--2092, 2014. R. F. Williams. Classification of one dimensional attractors. In *Global Analysis (Proc. Sympos. Pure Math., Vol. XIV, Berkeley, Calif., 1968)*, pages 341--361. Amer. Math. Soc., Providence, R.I., 1970. R. F. Williams. Expanding attractors. , (43):169--203, 1974. I. Yi. -theory of $C^\ast$-algebras from one-dimensional generalized solenoids. , 50(2):283--295, 2003. [^1]: RJD was partially supported by NSF Grants DMS 2000057 and DMS 2247424. MG was supported by the Swedish Research Council Grant VR 2018-0350.
arxiv_math
{ "id": "2310.00415", "title": "Wieler solenoids: non-Hausdorff expansiveness, Cuntz-Pimsner models, and\n functorial properties", "authors": "Robin J. Deeley, Menevse Ery\\\"uzl\\\"u, Magnus Goffeng, Allan Yashinski", "categories": "math.OA math.DS math.KT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Let ${\mathbf K}$ be a finite field, $X$ and $Y$ two curves over ${\mathbf K}$, and $Y\rightarrow X$ an unramified abelian cover with Galois group $G$. Let $D$ be a divisor on $X$ and $E$ its pullback on $Y$. Under mild conditions the linear space associated with $E$ is a free ${\mathbf K}[G]$-module. We study the algorithmic aspects and applications of these modules. address: - Jean-Marc Couveignes, Univ. Bordeaux, CNRS, INRIA, Bordeaux-INP, IMB, UMR 5251, F-33400 Talence, France. - Jean Gasnier, Univ. Bordeaux, CNRS, INRIA, Bordeaux-INP, IMB, UMR 5251, F-33400 Talence, France. author: - Jean-Marc Couveignes - Jean Gasnier bibliography: - cc.bib title: Explicit Riemann-Roch spaces in the Hilbert class field --- # Introduction Given a curve $Y$ over a field ${\mathbf K}$, and two divisors $E$ and $Q$ on $Y$, with $Q$ effective and disjoint from $E$, the evaluation map $e : H^0(\mathcal O_Y (E), Y)\rightarrow H^0 (\mathcal O_Q,Q)$ is a natural ${\mathbf K}$-linear datum of some importance for various algorithmic problems such as efficient computing in the Picard group of $Y$ (see [@mak1; @mak2]), constructing good error correcting codes [@gop1; @gop2; @tvc], or bounding the bilinear complexity of multiplication in finite fields [@STV; @SH; @BR; @BAL; @CHA; @RAN]. Assume $G$ is a finite group of automorphisms of $Y/{\mathbf K}$, and the divisors $E$ and $Q$ are $G$-equivariant (they are equal to their pullback by any element of $G$). The evaluation map $e$ is then a ${\mathbf K}[G]$-linear map between two ${\mathbf K}[G]$-modules. In some cases these modules can be shown to be both free, and their rank as ${\mathbf K}[G]$-modules is then smaller than their dimension as ${\mathbf K}$-vector spaces, by a factor $\mathfrak o$, the order of $G$. This is of quite some help when $G$ is abelian, because multiplication in ${\mathbf K}[G]$ is achieved in quasi-linear time using discrete Fourier transform, and the advantage of lowering dimension is much stronger than the disadvantage of dealing with a larger ring of scalars. In this work we review basic algebraic and algorithmic properties of ${\mathbf K}[G]$-modules when $G$ is a finite group. We then focus on free ${\mathbf K}[G]$-modules arising from abelian groups acting freely on a curve. We will see that this special case has a rich mathematical background and produces interesting constructions. In Section [2](#sec:duakg){reference-type="ref" reference="sec:duakg"} we review elementary properties of ${\mathbf K}[G]$-modules when ${\mathbf K}$ is a commutative field and $G$ a finite group. We recall in Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} how unramified fibers of Galois covers of curves produce free ${\mathbf K}[G]$-modules and we introduce natural bases for these modules and their duals. We study the abelian unramified case in Section [4](#sec:comm){reference-type="ref" reference="sec:comm"} and see that Riemann-Roch spaces associated to $G$-equivariant divisors tend to be free ${\mathbf K}[G]$-modules then. Evaluating at another $G$-equivariant divisor then produces a ${\mathbf K}[G]$-linear map between two free ${\mathbf K}[G]$-modules. This makes it possible to treat evaluation and interpolation as ${\mathbf K}[G]$-linear problems. We introduce the matrices associated to these problems. Section [5](#sec:pade){reference-type="ref" reference="sec:pade"} is devoted to the definition and computation of Padé approximants in this context. 
The complexity of arithmetic operations in ${\mathbf K}[G]$ is bounded in Section [6](#sec:ga){reference-type="ref" reference="sec:ga"} using various classical discrete Fourier transforms. In Section [7](#sec:constf){reference-type="ref" reference="sec:constf"} we use effective class field theory and the algorithmics of curves and jacobian varieties to compute the evaluation and interpolation matrices introduced in Section [4](#sec:comm){reference-type="ref" reference="sec:comm"}. Section [8](#sec:interpol){reference-type="ref" reference="sec:interpol"} provides two applications of interpolation with ${\mathbf K}[G]$-modules: multiplication in finite fields and geometric codes. The asymptotic properties of the codes constructed this way are studied in Section [9](#sec:gc){reference-type="ref" reference="sec:gc"}. # Duality for ${\mathbf K}[G]$-modules {#sec:duakg} In this section ${\mathbf K}$ is a commutative field and $G$ is a finite group. We state elementary properties of ${\mathbf K}[G]$-modules and their duals. In Section [2.1](#sec:inf){reference-type="ref" reference="sec:inf"} we describe the natural correspondence between $G$-invariant ${\mathbf K}$-bilinear forms and ${\mathbf K}[G]$-bilinear forms. We see in Section [2.2](#sec:orth){reference-type="ref" reference="sec:orth"} that the orthogonal of a ${\mathbf K}[G]$-submodule for either form is the same. Sections [2.3](#sec:duamo){reference-type="ref" reference="sec:duamo"} and [2.4](#sec:freedi){reference-type="ref" reference="sec:freedi"} concern the canonical bilinear form relating a ${\mathbf K}[G]$-module and its dual. ## Invariant bilinear forms {#sec:inf} Let $M$ be a right ${\mathbf K}[G]$-module. Let $N$ be a left ${\mathbf K}[G]$-module. Let $$<.,.> \, : M\times N \rightarrow {\mathbf K}$$ be a ${\mathbf K}$-bilinear form. We assume that this form is invariant by the action of $G$ in the sense that $$<m.\sigma , n>\, = \, <m, \sigma . n >$$ for every $m$ in $M$, $n$ in $N$, and $\sigma$ in $G$. We define a map $$\label{eq:inf} \xymatrix@R-2pc{ (.,.) & \relax : & N\times M \ar@{->}[rr] && {\mathbf K}[G]\\ &&n,m \ar@{|->}[rr] && (n,m) = \sum_{\sigma \in G} <m.\sigma^{-1},n>\sigma }$$ **Proposition 1**. *The map $(.,.)$ in Equation ([\[eq:inf\]](#eq:inf){reference-type="ref" reference="eq:inf"}) is ${\mathbf K}[G]$-bilinear.* **Proof    ** Indeed for any $\tau$ in $G$, $m$ in $M$, and $n$ in $N$ $$\begin{aligned} (\tau . n, m)&=&\sum_{\sigma \in G} <m.\sigma^{-1},\tau .n> \sigma\\ &=& \sum_{\sigma \in G} <m.\sigma^{-1}\tau^{-1},\tau .n> \tau \sigma\\ &=& \sum_{\sigma \in G} <m.\sigma^{-1}, n> \tau \sigma\\ &=& \tau \sum_{\sigma \in G} <m.\sigma^{-1}, n> \sigma\\ &=& \tau (n,m).\end{aligned}$$ And $$\begin{aligned} (n, m . \tau )&=&\sum_{\sigma \in G} <m.\tau \sigma^{-1}, n> \sigma\\ &=& \sum_{\sigma \in G} <m.\tau \tau^{-1}\sigma^{-1}, n> \sigma\tau\\ &=& \sum_{\sigma \in G} <m.\sigma^{-1}, n> \sigma \tau \\ &=& (n,m)\tau.\end{aligned}$$ $\Box$ ## Orthogonality {#sec:orth} In the situation of Section [2.1](#sec:inf){reference-type="ref" reference="sec:inf"} we consider a right ${\mathbf K}[G]$-submodule $U$ of $M$. Call $$U^\perp = \lbrace n\in N \,\, | \, <U,n>=0\rbrace$$ the orthogonal to $U$ in $N$ for the $<.,.>$ form. This is a ${\mathbf K}$-vector space. Since $U$ is stable by the action of $G$, its orthogonal $U^\perp$ is a left ${\mathbf K}[G]$-module. 
And $U^\perp$ is the orthogonal to $U$ for the $(.,.)$ form: $$U^\perp = \lbrace n\in N \,\, | \,\, (n,U)=0\rbrace .$$ We consider similarly a left ${\mathbf K}[G]$-submodule $V$ of $N$ and call $$V^\circ = \lbrace m\in M \,\, | \, <m,V>\, = 0\rbrace$$ the orthogonal to $V$ in $M$ for the $<.,.>$ form. This is a right ${\mathbf K}[G]$-module. And $V^\circ$ is the orthogonal to $V$ for the $(.,.)$ form: $$V^\circ = \lbrace m\in M \,\, | \,\, (V,m)=0\rbrace .$$ We have $U\subset (U^\perp)^\circ$ and $V\subset (V^\circ )^\perp$. These inclusions are equalities when $M$ and $N$ are finite dimensional and $<.,.>$ is perfect. ## The dual of a ${\mathbf K}[G]$-module {#sec:duamo} Let $N$ be a left ${\mathbf K}[G]$-module. We can see $N$ as a ${\mathbf K}$-vector space and call $\hat N$ its dual. This is naturally a right ${\mathbf K}[G]$-module. For every $\varphi$ in $\hat N$ and $\sigma$ in $G$ we set $\varphi.\sigma = \varphi \circ \sigma$. We consider the canonical ${\mathbf K}$-bilinear form defined by $$<\varphi , n > \, =\, \varphi (n)$$ for every $n$ in $N$ and $\varphi$ in $\hat N$. For every $\sigma$ in $G$ we have $$<\varphi .\sigma , n> \, = \, \varphi (\sigma . n) \, = \, <\varphi, \sigma .n>$$ so $<.,.>$ is invariant by $G$. Following Section [2.1](#sec:inf){reference-type="ref" reference="sec:inf"} we define a ${\mathbf K}[G]$-bilinear form $$%\label{eq:inf} (.,.) : N\times \hat N \rightarrow {\mathbf K}[G]$$ by $$\label{eq:isodu}(n,\varphi )=\sum_{\sigma \in G} \varphi(\sigma^{-1}.n)\sigma.$$ We define a map from $\hat N$ to the dual $\check N$ of $N$ as a ${\mathbf K}[G]$-module, by sending $\varphi$ to the map $$\label{eq:phi_G}\varphi^G : n\mapsto (n,\varphi).$$ We prove that this map is a bijection. First, $\varphi \mapsto \varphi^G$ is trivially seen to be an injection. As for surjectivity, we let $\psi : N \rightarrow {\mathbf K}[G]$ be a ${\mathbf K}[G]$-linear map. Writing $$\psi (n)=\sum_{\sigma \in G} \psi_\sigma (n)\sigma$$ we define a ${\mathbf K}$-linear coordinate form $\psi_\sigma$ on $N$ for every $\sigma$ in $G$. From the ${\mathbf K}[G]$-linearity of $\psi$ we deduce that $\psi_\sigma (n)=\psi_1(\sigma^{-1} . n)$ where $1$ is the identity element in $G$. So $\psi (n) = (n, \psi_1)$ for every $n$ in $N$. So $\psi = (\psi_1)^G$. ## Free submodules of a ${\mathbf K}[G]$-module {#sec:freedi} The ring ${\mathbf K}[G]$ may not be semisimple. Still, free ${\mathbf K}[G]$-submodules of finite rank have a supplementary module. **Proposition 2**. *Let $G$ be a finite group, ${\mathbf K}$ a commutative field, $N$ a left ${\mathbf K}[G]$-module, $V$ a submodule of $N$. If $V$ is free of finite rank then it is a direct summand.* **Proof    ** Let $r$ be the rank of $V$. Let $v_1$, $v_2$, ..., $v_r$ be a basis of $V$. Let $\varphi_1$, $\varphi_2$, ..., $\varphi_r$ be the dual basis. For every $i$ such that $1\leqslant i\leqslant r$, the coordinate form $\varphi_{i,e}$ associated to the identity element in $G$ belongs to $\hat V$. Let $\psi_i$ be a ${\mathbf K}$-linear form on $N$ whose restriction to $V$ is $\varphi_{i,e}$. Let $\psi_i^G \in \check N$ be the associated ${\mathbf K}[G]$-linear form according to Equations ([\[eq:phi_G\]](#eq:phi_G){reference-type="ref" reference="eq:phi_G"}) and ([\[eq:isodu\]](#eq:isodu){reference-type="ref" reference="eq:isodu"}). The restriction of $\psi_i^G$ to $V$ is $\varphi_i$. The map $$%\label{eq:} \xymatrix@R-2pc{ \psi & \relax : & N \ar@{->}[r] & V\\ &&n \ar@{|->}[r] & \sum_{1\leqslant i\leqslant r} \psi_i^G (n) . 
is a ${\mathbf K}[G]$-linear projection onto $V$. Its kernel is a supplementary ${\mathbf K}[G]$-submodule to $V$. $\Box$ # Curves with a group action {#sec:curveact} Let ${\mathbf K}$ be a commutative field. Let $p$ be the characteristic of ${\mathbf K}$. Let $X$ and $Y$ be two smooth, projective, absolutely integral curves over ${\mathbf K}$. Let $g_X$ be the genus of $X$. And similarly $g_Y$. Let $\tau : Y \rightarrow X$ be a Galois cover with Galois group $G$. Let $\mathfrak o$ be the order of $G$. There is a natural left action of $G$ on ${\mathbf K}(Y)$ defined by $$\sigma . f = f\circ \sigma ^{-1} \text{\,\,\,\, for\,\,} f\in {\mathbf K}(Y) \text{\,\, and \,\,} \sigma \in G.$$ There is a natural right action of $G$ on meromorphic differentials defined by $$\omega . \sigma = \sigma^\star \omega \text{\,\,\,\, for\,\,} \omega \in \Omega^1_{{\mathbf K}(Y)/{\mathbf K}} \text{\,\, and \,\,} \sigma \in G.$$ These are ${\mathbf K}(X)$-linear actions. And the two actions are compatible in the sense that $$\label{eq:comp} (\omega . \sigma)(\sigma^{-1}. f)=(\omega f).\sigma$$ We study some free ${\mathbf K}[G]$-modules that arise naturally in this context. ## The residue ring of a non-ramified fiber {#sec:rrf} Let $P$ be a prime divisor (a place) on $X$. Let $t_P$ be a uniformizing parameter at $P$. Let $$a = \deg (P).$$ This is the degree over ${\mathbf K}$ of the residue field $${\mathbf K}_P=H^0(\mathcal O_P, P).$$ We assume that $\tau$ is not ramified above $P$ and let $Q_1$ be a place above $P$. We call $G_1$ the decomposition group of $Q_1$. This is the stabilizer of $Q_1$ in $G$. Places above $P$ are parameterized by left cosets in $G/G_1$. We write the fiber above $P$ as $$Q=\sum_{\sigma \in G/G_1} Q_\sigma \text{\,\,\,\, with \,\,\,} Q_\sigma =\sigma (Q_1).$$ We call $$b=[G:G_1]$$ the number of places above $P$ and let $$c=\mathfrak o/b=|G_1|$$ be the residual degree, that is the degree of $${\mathbf K}_\sigma =H^0(\mathcal O_{Q_\sigma},Q_\sigma )$$ over ${\mathbf K}_P$ for all $\sigma \in G/G_1$. We call $${\mathbf R}_Q = H^0(\mathcal O_Q,Q)$$ the residue ring at $Q$. The action of $G$ on ${\mathbf R}_Q$ makes it a free left ${\mathbf K}[G]$-module of rank $a$. Indeed it is a free ${\mathbf K}_P [G]$-module of rank $1$. A basis for it consists of any normal element $\theta$ in ${\mathbf K}_{1}/{\mathbf K}_P$. If $m$ is a positive integer, Taylor expansion provides an isomorphism of ${\mathbf K}_P [G]$-modules $$H^0(\mathcal O_Y/\mathcal O_Y(-mQ),Y) \simeq {\mathbf R}_Q[t_P]/t_P^m$$ between the residue ring at $mQ$ and the ring of truncated series in $t_P$. So the former is a free left ${\mathbf K}_P [G]$-module of rank $m$. A basis for it is made of the $\theta t_P^k$ for $0\leqslant k < m$. ## The residue ring of a non-ramified $G$-equivariant divisor {#sec:rrd} We take $P$ an effective divisor on $X$. We assume that $\tau$ does not ramify above $P$ and call $Q$ the pullback of $P$ by $\tau$. We write $$P=\sum_{1\leqslant i\leqslant I}m_iP_i.$$ We let $t_i$ be a uniformizing parameter at $P_i$. We call $a_i$ the degree of the place $P_i$. We call $b_i$ the number of places of $Y$ above $P_i$. We let $c_i=\mathfrak o/b_i$. For every $1\leqslant i\leqslant I$ we choose a place $Q_{i,1}$ above $P_i$ and call $G_{i,1}$ the decomposition group at $Q_{i,1}$. We call $Q_i$ the pullback of $P_i$ by $\tau$ and write $$Q_i=\sum_{\sigma \in G/G_{i,1}}Q_{i,\sigma} \text{\,\,\,\, with \,\,\,} Q_{i,\sigma}=\sigma (Q_{i,1})$$ for its decomposition as a sum of $b_i$ places.
We call ${\mathbf K}_{i,\sigma}$ the residue field at $Q_{i,\sigma}$. Taylor expansion induces an isomorphism of ${\mathbf K}$-algebras $$\label{eq:rr} H^0(\mathcal O_Q,Q)\simeq \bigoplus_{i=1}^I\, \bigoplus_{\sigma \in G/G_{i,1}}\, {\mathbf K}_{i,\sigma }[t_i]/t_i^{m_i}$$ which is compatible with the action of $G$. In the special case when all the places $P_i$ have degree one, a basis for the ${\mathbf K}[G]$-module $H^0(\mathcal O_Q,Q)$ is made of the $\theta_it_i^{k_i}$ for $1\leqslant i\leqslant I$ and $0\leqslant k_i < m_i$ where $\theta_i$ is a normal element in the extension ${\mathbf K}_{i,1}/{\mathbf K}$. The proposition below follows from the discussion in this section and the previous one. **Proposition 3**. *Assume the hypotheses at the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"}. Let $P$ be an effective divisor on $X$. Assume that $\tau$ is not ramified above $P$ and let $Q$ be the pullback of $P$ by $\tau$. The residue ring $H^0(\mathcal O_Q,Q)$ is a free ${\mathbf K}[G]$-module of rank equal to the degree of $P$.* ## Duality {#sec:duaa} We denote by ${\mathbf A}$ the right hand side of Equation ([\[eq:rr\]](#eq:rr){reference-type="ref" reference="eq:rr"}). We need a dual of ${\mathbf A}$ as a ${\mathbf K}$-vector space. We set $$\label{eq:hA} \hat {\mathbf A}=\bigoplus_{i=1}^I\, \bigoplus_{\sigma \in G/G_{i,1}}\, \left( {\mathbf K}_{i,\sigma }[t_i]/t_i^{m_i} \right) \frac{dt_i}{t_i^{m_i}} \simeq H^0(\Omega^1_{Y/{\mathbf K}}(-Q)/\Omega^1_{Y/{\mathbf K}},Y).$$ For $f\in {\mathbf A}$ and $\omega \in \hat{\mathbf A}$ we write $<\omega,f>$ for the sum of the residues of $\omega f$ at all the geometric points of $Q$. This is a ${\mathbf K}$-bilinear form. We deduce from Equation ([\[eq:comp\]](#eq:comp){reference-type="ref" reference="eq:comp"}) that this form is invariant by the action of $G$ $$<\omega . \sigma , f > \, = \, <\omega, \sigma . f >$$ We define a ${\mathbf K}[G]$-bilinear form using the construction in Section [2.1](#sec:inf){reference-type="ref" reference="sec:inf"} $$\label{eq:bil} (f,\omega)=\sum_{\sigma \in G}<\omega . \sigma^{-1}, f>\sigma \in {\mathbf K}[G].$$ These two bilinear forms turn $\hat{\mathbf A}$ into the dual of ${\mathbf A}$ as a ${\mathbf K}$-vector space (resp. as a ${\mathbf K}[G]$-module). In the special case when all the places $P_i$ have degree one, the dual basis to the basis introduced before Proposition [Proposition 3](#prop:freeres){reference-type="ref" reference="prop:freeres"} is made of the $\mu_i \, dt_i/t_i^{k_i+1}=\mu_i t_i^{m_i-1-k_i}\, dt_i/t_i^{m_i}$ for $1\leqslant i\leqslant I$ and $0\leqslant k_i < m_i$ where $\mu_i$ is the dual to the normal element $\theta_i$ in the extension ${\mathbf K}_{i,1}/{\mathbf K}$. # Free commutative actions {#sec:comm} We study the situation at the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} in the special case when the Galois cover $\tau : Y\rightarrow X$ is abelian and unramified. We prove that large enough equivariant Riemann-Roch spaces are free ${\mathbf K}[G]$-modules. To this end we prove in Section [4.2](#sec:rrsp){reference-type="ref" reference="sec:rrsp"} that evaluation at some fibers induces an isomorphism with some of the ${\mathbf K}[G]$-modules studied in Section [3.2](#sec:rrd){reference-type="ref" reference="sec:rrd"}. We will need non-special equivariant divisors on $Y$. We first prove in Section [4.1](#sec:sid){reference-type="ref" reference="sec:sid"} that such divisors exist.
We introduce in Section [4.3](#sec:orths){reference-type="ref" reference="sec:orths"} the evaluation, interpolation and checking matrices whose existence results from the freeness of the considered modules. ## Special invariant divisors {#sec:sid} The pullback by $\tau$ of a degree $g_X-1$ divisor on $X$ is a degree $g_Y-1$ divisor on $Y$. We need the following criterion from [@coez §14] for the latter divisor to be special. **Proposition 4**. *Assume the hypotheses at the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} with $\tau$ abelian and unramified and ${\mathbf K}$ algebraically closed. Let $c$ be a divisor class of degree $g_X-1$ on $X$ and let $\tau^\star(c)$ be its pullback on $Y$. If the class $\tau^\star(c)$ is effective then $c$ is the sum of an effective class of degree $g_X-1$ and a class of degree $0$ annihilated by $\tau^\star$ and by $\mathfrak o$.* **Proof    ** Let $D$ be a divisor in $c$ and let $E$ be the pullback of $D$ by $\tau$. We assume that $\tau^\star(c)$ is effective. The space $H^0(\mathcal O_Y(E),Y)$ is non-zero and is acted on by $G$. Let $f$ be an eigenvector for this action. The divisor of $f$ is $J-E$ where $J$ is effective and stable under the action of $G$. So there exists an effective divisor $I$ on $X$ such that $J$ is the pullback of $I$ by $\tau$. And the class of $I-D$ is annihilated by $\tau^\star$. It is also annihilated by $\mathfrak o$ because $f^\mathfrak o$ is invariant by $G$. $\Box$ ## Riemann-Roch spaces {#sec:rrsp} Let $E$ be a divisor on $Y$ defined over ${\mathbf K}$ and invariant by $G$. The Riemann-Roch space $H^0(\mathcal O_Y(E),Y)$ is a ${\mathbf K}[G]$-module. This module is free provided the degree of $E$ is large enough. **Proposition 5**. *Assume the hypotheses at the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} with $\tau$ abelian and unramified. Let $D$ be a divisor on $X$ with degree $\geqslant 2g_X-1$. Let $E$ be the pullback of $D$ by $\tau$. The ${\mathbf K}$-vector space $H^0(\mathcal O_Y(E),Y)$ is a free ${\mathbf K}[G]$-module of rank $\deg (D)-g_X+1$.* **Proof    ** We may assume that ${\mathbf K}$ is algebraically closed because of the Noether-Deuring theorem [@bouralg8 §2, Section 5]. Let $k=\deg (D)-g_X+1$. We note that $k\geqslant g_X$. So there exist $k$ points $$P_1, P_2, \ldots, P_k \text{\,\,\, on \,\,} X$$ such that the class of $D-P_1-P_2-\dots -P_k$ is not the sum of an effective class of degree $g_X-1$ and a class annihilated by $$\tau^\star :\mathop{\rm{Pic}}\nolimits (X)\rightarrow \mathop{\rm{Pic}}\nolimits (Y).$$ Let $P$ be the divisor sum of all $P_i$ and let $Q$ be its pullback by $\tau$. According to Proposition [Proposition 4](#prop:special){reference-type="ref" reference="prop:special"} the class of $E-Q$ is ineffective. Thus the evaluation map $$H^0(\mathcal O_Y(E),Y)\rightarrow {\mathbf K}[Q]$$ is an isomorphism. And ${\mathbf K}[Q]$ is a free ${\mathbf K}[G]$-module of rank $k$ according to Proposition [Proposition 3](#prop:freeres){reference-type="ref" reference="prop:freeres"}. $\Box$ ## The orthogonal submodule {#sec:orths} In the situation of the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} and assuming that $\tau$ is abelian and unramified we let $D$ and $P$ be divisors on $X$ with $P$ effective. We assume that $D$ and $P$ are disjoint. 
We assume that $$\label{eq:inte}2g_X-1\leqslant \deg (D) \leqslant \deg (P) -1.$$ We call $E$ the pullback of $D$ by $\tau$ and $Q$ the pullback of $P$. We write $$\mathcal L(E)=H^0(\mathcal O_Y (E),Y) \text{\,\,\, and \,\,\,} {\mathit \Omega}(-Q+E)= H^0(\Omega_{Y/{\mathbf K}}(-Q+E),Y).$$ Proposition [Proposition 5](#prop:free){reference-type="ref" reference="prop:free"} and Equation ([\[eq:inte\]](#eq:inte){reference-type="ref" reference="eq:inte"}) imply that these two ${\mathbf K}[G]$-modules are free. And the evaluation maps $$\mathcal L(E)\longrightarrow {\mathbf A}\text{\,\,\, and \,\,\,} {\mathit \Omega}(-Q+E) \longrightarrow \hat{\mathbf A}\text{\,\,\, are injective.}$$ So $\mathcal L(E)$ can be seen as a free submodule of ${\mathbf A}$ and ${\mathit \Omega}(-Q+E)$ as a free submodule of $\hat{\mathbf A}$. These two ${\mathbf K}[G]$-modules are orthogonal to each other for the form introduced in Equation ([\[eq:bil\]](#eq:bil){reference-type="ref" reference="eq:bil"}). Proposition [Proposition 2](#prop:ds){reference-type="ref" reference="prop:ds"} implies that $\mathcal L(E)$ has a supplementary submodule in ${\mathbf A}$ that is isomorphic to ${\mathit \Omega}(-Q+E)$ and is thus a free submodule also. Similarly ${\mathit \Omega}(-Q+E)$ has a free supplementary submodule in $\hat{\mathbf A}$ that is isomorphic to $\mathcal L(E)$. In the special case when all the places $P_i$ have degree one, we have introduced a natural basis for ${\mathbf A}$ before Proposition [Proposition 3](#prop:freeres){reference-type="ref" reference="prop:freeres"} and its dual basis $\hat{\mathbf A}$ in Section [3.3](#sec:duaa){reference-type="ref" reference="sec:duaa"}, using Taylor expansions at the places above the $P_i$. We choose ${\mathbf K}[G]$-bases for $\mathcal L(E)$ and ${\mathit \Omega}(-Q+E)$. We denote $\mathcal E_E$ the $\deg (P)\times (\deg (D)-g_X+1)$ matrix with coefficients in ${\mathbf K}[G]$ of the evaluation map $\mathcal L(E)\rightarrow {\mathbf A}$ in the chosen bases. We denote $\mathcal C_E$ the $\deg (P)\times (\deg (P)-\deg (D)+g_X-1)$ matrix of the map ${\mathit \Omega}(-Q+E)\rightarrow \hat{\mathbf A}$ in the chosen bases. The matrix $\mathcal C_E$ checks that a vector in ${\mathbf A}$ belongs to $\mathcal L(E)$. Its left kernel is the image of $\mathcal E_E$. So $$\mathcal C_E^t\times \mathcal E_E = 0$$ the zero $(\deg (P)-\deg (D)+g_X-1)\times (\deg (D)-g_X+1)$ matrix with coefficients in ${\mathbf K}[G]$. We choose a ${\mathbf K}[G]$-linear projection ${\mathbf A}\rightarrow \mathcal L(E)$ and denote $\mathcal I_E$ the $(\deg (D)-g_X+1)\times \deg (P)$ matrix of this projection. This is an interpolation matrix since it recovers a function in $\mathcal L(E)$ from its evaluation at $Q$. Equivalently $$\mathcal I_E\times \mathcal E_E=1$$ the $(\deg (D)-g_X+1) \times (\deg (D)-g_X+1)$ identity matrix with coefficients in ${\mathbf K}[G]$. We note that applying either of the matrices $\mathcal E_E$, $\mathcal C_E$, $\mathcal I_E$ requires at most a constant times $\deg (P)^2$ operations in ${\mathbf K}[G]$. # Padé approximants {#sec:pade} In the situation of the beginning of Section [3](#sec:curveact){reference-type="ref" reference="sec:curveact"} and assuming that $\tau$ is abelian and unramified we let $D_0$, $D_1$ and $P$ be divisors on $X$ with $P$ effective. We assume that $D_0$ and $D_1$ are disjoint from $P$. We call $E_0$, $E_1$, and $Q$ the pullbacks of $D_0$, $D_1$, and $P$ by $\tau$. 
We assume that $$\label{eq:pade}2g_X-1\leqslant \deg (D_0) \leqslant \deg (D_1) \leqslant \deg (P) -1.$$ As a consequence, the ${\mathbf K}[G]$-modules $\mathcal L(E_0)$, $\mathcal L(E_1)$, ${\mathit \Omega}(-Q+E_0)$, and ${\mathit \Omega}(-Q+E_1)$ are free and the evaluation maps into ${\mathbf A}$ and $\hat{\mathbf A}$ are injective. Given $r$ in ${\mathbf A}$, $a_0\not = 0$ in $\mathcal L(E_0)$ and $a_1$ in $\mathcal L(E_1)$ such that $$a_0r-a_1=0 \in {\mathbf A},$$ we say that $(a_0, a_1)$ is a Padé approximant of $r$ and call $a_0$ a **denominator** for $r$. Denominators for $r$ are non-zero $a_0$ in $\mathcal L(E_0)\subset {\mathbf A}$ such that $$a_0r\in \mathcal L(E_1).$$ Equivalently $$\label{eq:coliK} (a_0r,\omega)=0 \text{\,\,\, for every \,\,\,} \omega \in {\mathit \Omega}(-Q+E_1).$$ Denominators are thus non-zero solutions of a ${\mathbf K}$-linear system of equations. We note that this is not a ${\mathbf K}[G]$-linear system in general. In Section [5.1](#sec:splitc){reference-type="ref" reference="sec:splitc"} we show that one can be a bit more explicit in some cases. We consider the problem of computing Padé approximants in Section [5.2](#sec:wied){reference-type="ref" reference="sec:wied"}. ## The split case {#sec:splitc} Assume that $P=P_1+\dots+P_n$ is a sum of $n$ pairwise distinct rational points over ${\mathbf K}$. Assume that the fiber of $\tau$ above each $P_i$ decomposes as a sum of $\mathfrak o$ rational points over ${\mathbf K}$. We choose a point $Q_{i,1}$ above each $P_i$ and set $$Q_{i,\sigma }= \sigma (Q_{i,1}) \text{\,\,\, for every \,\, } \sigma \in G.$$ For every $1\leqslant i\leqslant n$ we call $\alpha_i$ the function in ${\mathbf A}$ that takes value $1$ at $Q_{i,1}$ and zero everywhere else. We thus form a basis $$\mathcal A_G= (\alpha_i)_{1\leqslant i \leqslant n}$$ of ${\mathbf A}$ over ${\mathbf K}[G]$. We note $\hat\mathcal A_G$ its dual basis. For every $1\leqslant i\leqslant n$ and $\sigma \in G$ we call $$\alpha_{i,\sigma } =\sigma . \alpha_i = \alpha_i \circ \sigma^{-1}$$ the function in ${\mathbf A}$ that takes value $1$ at $Q_{i,\sigma }$ and zero everywhere else. We thus form a basis $$\mathcal A_{\mathbf K}=(\alpha_{i,\sigma })_{1\leqslant i\leqslant n, \, \sigma \in G}$$ of ${\mathbf A}$ over ${\mathbf K}$. The coordinates of $r$ in the ${\mathbf K}[G]$-basis $\mathcal A_G$ are $$r_G=(\sum_{\sigma \in G }r(Q_{i,\sigma})\sigma )_{1\leqslant i \leqslant n}$$ and the coordinates of $r\in {\mathbf A}$ in the ${\mathbf K}$-basis $\mathcal A_{\mathbf K}$ are $$r_{\mathbf K}=(r(Q_{i,\sigma }))_{1\leqslant i\leqslant n, \, \sigma \in G}.$$ Multiplication by $r$ is a ${\mathbf K}$-linear map from ${\mathbf A}$ to ${\mathbf A}$. We call $$\mathcal R_{\mathbf K}\in \mathcal M_{\mathfrak o. n, \mathfrak o. n}({\mathbf K})$$ the $\mathfrak o. n\times \mathfrak o. n$ diagonal matrix of this map in the basis $\mathcal A_{\mathbf K}$. We choose a ${\mathbf K}[G]$-basis $\mathcal Z_G$ for $\mathcal L(E_0)$ and denote $\mathcal E_G^0$ the $\deg (P)\times (\deg (D_0)-g_X+1)$ matrix of the ${\mathbf K}[G]$-linear map $$\label{eq:cL} \mathcal L(E_0)\rightarrow {\mathbf A}$$ in the bases $\mathcal Z_G$ and $\mathcal A_G$. We denote $\mathcal Z_{\mathbf K}$ the ${\mathbf K}$-basis of $\mathcal L(E_0)$ obtained by letting $G$ act on $\mathcal Z_G$. Call $\mathcal E_{\mathbf K}^0$ the matrix of the map ([\[eq:cL\]](#eq:cL){reference-type="ref" reference="eq:cL"}) in the bases $\mathcal Z_{\mathbf K}$ and $\mathcal A_{\mathbf K}$. 
The matrix $\mathcal E_{\mathbf K}^0$ is obtained from $\mathcal E_G^0$ by replacing each ${\mathbf K}[G]$ entry by the corresponding $\mathfrak o\times \mathfrak o$ circulant-like matrix with entries in ${\mathbf K}$. We choose a ${\mathbf K}[G]$-basis $\mathcal U_G$ for ${\mathit \Omega}(-Q+E_1)$ and denote $\mathcal C_G^1$ the matrix of the injective map $$\label{eq:omeg} {\mathit \Omega}(-Q+E_1)\rightarrow \hat{\mathbf A}$$ in the bases $\mathcal U_G$ and $\hat \mathcal A_G$. This is a $\deg (P)\times (\deg (P)-\deg (D_1)+g_X-1)$ matrix with entries in ${\mathbf K}[G]$. We denote $\mathcal U_{\mathbf K}$ the ${\mathbf K}$-basis of ${\mathit \Omega}(-Q+E_1)$ obtained by letting $G$ act on $\mathcal U_G$. The matrix of the map ([\[eq:omeg\]](#eq:omeg){reference-type="ref" reference="eq:omeg"}) in the bases $\mathcal U_{\mathbf K}$ and $\hat\mathcal A_{\mathbf K}$ is called $\mathcal C_{\mathbf K}^1$. Let $a_0$ be in $\mathcal L(E_0)$ and let $x_G$ be the coordinates of $a_0$ in the ${\mathbf K}[G]$-basis $\mathcal Z_G$. This is a column of height $\deg(D_0)-g_X+1$. We call $x_{\mathbf K}$ the coordinates of $a_0$ in the ${\mathbf K}$-basis $\mathcal Z_{\mathbf K}$. This is a column of height $\mathfrak o. (\deg(D_0)-g_X+1)$ obtained from $x_G$ by replacing each entry by its $\mathfrak o$ coefficients in the canonical basis of ${\mathbf K}[G]$. We deduce from Equation ([\[eq:coliK\]](#eq:coliK){reference-type="ref" reference="eq:coliK"}) that $a_0$ is a denominator for $r$ if and only if $x_{\mathbf K}$ is in the kernel of the matrix $$\mathcal D_r=(\mathcal C_{\mathbf K}^1)^t\times \mathcal R_{\mathbf K}\times \mathcal E^0_{\mathbf K} \in \mathcal M_{\mathfrak o. (\deg P -\deg D_1+g_X-1)\times \mathfrak o. (\deg D_0-g_X+1)}({\mathbf K}).$$ **Proposition 6**. *Assume we are in the context of the beginning of Section [5](#sec:pade){reference-type="ref" reference="sec:pade"}. In particular assume Equation ([\[eq:pade\]](#eq:pade){reference-type="ref" reference="eq:pade"}), assume that $P$ is a sum of $n$ pairwise distinct ${\mathbf K}$-rational points, and that the $n$ corresponding fibers of $\tau$ split over ${\mathbf K}$. Assume we are given the matrices $\mathcal E^0_{\mathbf K}$ and $\mathcal C_{\mathbf K}^1$. On input an $r = (r(Q_{i,\sigma }))_{1\leqslant i\leqslant n, \, \sigma \in G}$ in ${\mathbf A}$ and some $a_0$ in $\mathcal L(E_0)$, given by its coordinates $x_{\mathbf K}$ in the basis $\mathcal Z_{\mathbf K}$, one can check if $a_0r\in \mathcal L(E_1)$ at the expense of ${\mathcal Q}. n^2$ operations in ${\mathbf K}[G]$ (addition, multiplication) and ${\mathcal Q}. \mathfrak o. n$ operations in ${\mathbf K}$ (addition, multiplication) where ${\mathcal Q}$ is some absolute constant.* **Proof    ** We first multiply $x_{\mathbf K}$ by $\mathcal E^0_{\mathbf K}$. This requires less than $2\deg (P)\times (\deg (D_0)-g_X+1)$ operations in ${\mathbf K}[G]$. We then multiply the result by $\mathcal R_{\mathbf K}$. This requires less than $\mathfrak o. \deg (P)$ operations in ${\mathbf K}$. We finally multiply the result by $(\mathcal C_{\mathbf K}^1)^t$. This requires less than $2\deg (P)\times (\deg (P)-\deg (D_1)+g_X-1)$ operations in ${\mathbf K}[G]$. $\Box$ ## Computing Padé approximants {#sec:wied} Being able to check a denominator, we can find a random one (if there is one) using an iterative method as in [@Wied; @Kal].
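For concreteness, the test of Proposition [Proposition 6](#prop:testde){reference-type="ref" reference="prop:testde"} can be organized as in the following purely illustrative sketch. It assumes that the matrices $\mathcal E^0_{\mathbf K}$ and $\mathcal C^1_{\mathbf K}$ have already been computed and are stored as integer matrices over ${\mathbf K}={\mathbf Z}/p{\mathbf Z}$; the names used are ours and not part of the construction above, and this naive version does not exploit the ${\mathbf K}[G]$-block structure underlying the operation count of Proposition [Proposition 6](#prop:testde){reference-type="ref" reference="prop:testde"}.

```python
import numpy as np

# Illustrative sketch of the denominator test over K = Z/pZ.  E0 is the matrix
# of L(E_0) -> A and C1 the matrix of Omega(-Q+E_1) -> \hat A, both in the
# K-bases of Section 5.1 and stored as NumPy integer arrays.  The names
# p, E0, C1, r_values, x are assumptions made for this sketch only.

def is_denominator(x, r_values, E0, C1, p):
    """Return True if the function a_0 with K-coordinates x (in L(E_0))
    satisfies a_0 * r in L(E_1), i.e. (C1^t . diag(r) . E0) x = 0 mod p."""
    y = E0.dot(x) % p           # values of a_0 at the points Q_{i,sigma}
    y = (r_values * y) % p      # pointwise multiplication by r (the matrix R_K)
    z = C1.T.dot(y) % p         # apply the transposed check matrix
    return not z.any()
```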
Recall that an $\ell\times n$ **black box** matrix $A$ with coefficients in a field ${\mathbf K}$ is an oracle that on input an $n\times 1$ vector $x$ returns $Ax$. **Proposition 7** (Wiedemann, Kaltofen, Saunders). *There exists a probabilistic (Las Vegas) algorithm that takes as input an $\ell\times n$ black box matrix $A$ and an $\ell \times 1$ vector $b$ with entries in a finite field ${\mathbf K}$ and returns a uniformly distributed random solution $x$ to the system $Ax=b$ with probability of success $\geqslant 1/2$ at the expense of ${\mathcal Q}. m . \log m$ calls to the black box for $A$ and ${\mathcal Q}. m^2 . (\log (m))^2$ operations in ${\mathbf K}$ (addition, multiplication, inversion, picking a random element) where $m=\max (\ell, n)$ and ${\mathcal Q}$ is some absolute constant.* From Propositions [Proposition 6](#prop:testde){reference-type="ref" reference="prop:testde"} and [Proposition 7](#prop:wks){reference-type="ref" reference="prop:wks"} we deduce **Proposition 8**. *Under the hypotheses of Proposition [Proposition 6](#prop:testde){reference-type="ref" reference="prop:testde"} and on input a vector $r = (r(Q_{i,\sigma }))_{1\leqslant i\leqslant n, \, \sigma \in G}$ in ${\mathbf A}$ one can find a uniformly distributed random denominator (if there is one) for $r$ with probability of success $\geqslant 1/2$ at the expense of ${\mathcal Q}. \mathfrak o. n^3. \log (\mathfrak o. n)$ operations in ${\mathbf K}[G]$ (addition, multiplication) and ${\mathcal Q}. (\mathfrak o. n . \log (\mathfrak o. n))^2$ operations in ${\mathbf K}$ (addition, multiplication, inversion, picking a random element) where ${\mathcal Q}$ is some absolute constant.* Once we have found a denominator $a_0$ for $r$ we set $a_1=ra_0$ and recover the coordinates of $a_1$ by applying the interpolation matrix associated to $E_1$. # Computing in the group algebra {#sec:ga} Given a finite commutative group $G$ and a finite field ${\mathbf K}$ we will need efficient algorithms to multiply in ${\mathbf K}[G]$. This is classically achieved using discrete Fourier transform when $G$ is cyclic and ${\mathbf K}$ contains enough roots of unity. The complexity analysis requires some care in general. This is the purpose of this section. We recall in Section [6.1](#sec:ft){reference-type="ref" reference="sec:ft"} the definition of Fourier transform in the setting of commutative finite groups. The most classical case of cyclic groups is studied in Section [6.2](#sec:unift){reference-type="ref" reference="sec:unift"} from an algorithmic point of view. The general case follows by induction as explained in Section [6.3](#sec:multiv){reference-type="ref" reference="sec:multiv"}. The complexity of the resulting multiplication algorithm in ${\mathbf K}[G]$ is bounded in Section [6.4](#sec:fasmul){reference-type="ref" reference="sec:fasmul"}. ## Fourier transform {#sec:ft} Let $G$ be a finite commutative group. Let $\mathfrak o$ be the order of $G$. Let $e$ be its exponent. Let ${\mathbf K}$ be a commutative field containing a primitive $e$-th root of unity. In particular $e$ and $\mathfrak o$ are non-zero in ${\mathbf K}$. Let $\hat G$ be the dual of $G$ defined as the group of characters $\chi : G\rightarrow {\mathbf K}^*$. We define a map from the group algebra of $G$ to the algebra of functions on $G$ $$%\label{eq:} \xymatrix@R-2pc{ \top & \relax : & {\mathbf K}[G] \ar@{->}[r] & \mathop{\rm{Hom}}\nolimits (G, {\mathbf K})\\ &&\sum_{\sigma \in G} a_\sigma \sigma \ar@{|->}[r] & \sigma \mapsto a_\sigma }$$ This is an isomorphism of ${\mathbf K}$-vector spaces.
We call $\bot : \mathop{\rm{Hom}}\nolimits (G, {\mathbf K})\rightarrow {\mathbf K}[G]$ the reciprocal map. We dually define $$%\label{eq:} \xymatrix@R-2pc{ \hat \top & \relax : & {\mathbf K}[\hat G] \ar@{->}[r] & \mathop{\rm{Hom}}\nolimits (\hat G, {\mathbf K})\\ &&\sum_{\chi \in \hat G} \, a_\chi \chi \ar@{|->}[r] & \chi \mapsto a_\chi }$$ and its reciprocal map $\hat \bot$. We call $$\iota_G : {\mathbf K}[G]\rightarrow {\mathbf K}[G]$$ the ${\mathbf K}$-linear involution that maps $\sigma$ onto $\sigma^{-1}$. We define the Fourier transform $$%\label{eq:} \xymatrix@R-2pc{ \mathop{\rm{FT}}\nolimits _G & \relax : & {\mathbf K}[ G] \ar@{->}[r] & \mathop{\rm{Hom}}\nolimits (\hat G, {\mathbf K})\\ &&\sum_{\sigma \in G} a_\sigma \sigma \ar@{|->}[r] & \chi \mapsto \sum_{\sigma}a_\sigma \chi (\sigma) }$$ The Fourier transform evaluates an element in the group algebra at every character. The Fourier transform of the dual group $$%\label{eq:} \xymatrix@R-2pc{ \mathop{\rm{FT}}\nolimits _{\hat G} & \relax : & {\mathbf K}[ \hat G] \ar@{->}[r] & \mathop{\rm{Hom}}\nolimits (G, {\mathbf K})\\ &&\sum_{\chi \in \hat G} a_\chi \chi \ar@{|->}[r] & \sigma \mapsto \sum_{\chi}a_\chi \chi (\sigma) }$$ provides an inverse for $\mathop{\rm{FT}}\nolimits _G$ in the sense that $$\bot\circ \mathop{\rm{FT}}\nolimits _{\hat G}\circ \hat\bot \circ \mathop{\rm{FT}}\nolimits _G = \mathfrak o. \iota_G$$ is the ${\mathbf K}$-linear invertible map that sends $\sigma$ to $\mathfrak o. \sigma^{-1}$. Let $M$ be a finite dimensional ${\mathbf K}$-vector space. We set $$M[G]=M\otimes_{\mathbf K}{\mathbf K}[G]$$ and note that $$\mathop{\rm{Hom}}\nolimits (\hat G,M)=M\otimes_{\mathbf K}\mathop{\rm{Hom}}\nolimits (\hat G, {\mathbf K}).$$ We define a Fourier transform on $M$ $$%\label{eq:} \xymatrix@R-2pc{ \mathop{\rm{FT}}\nolimits _M & \relax : & M [ G] \ar@{->}[r] & \mathop{\rm{Hom}}\nolimits (\hat G, M)\\ &&\sum_{\sigma \in G} m_\sigma \otimes \sigma \ar@{|->}[r] & \chi \mapsto \sum_{\sigma} \chi (\sigma) m_\sigma }$$ ## Univariate Fourier transform {#sec:unift} We assume in this section that the group $G$ is cyclic of order $\mathfrak o$. We choose a primitive $\mathfrak o$-th root of unity $\omega$ in ${\mathbf K}$. We choose a generator in $G$ and deduce the following identifications $$\mathop{\rm{Hom}}\nolimits (G, {\mathbf K})={\mathbf K}^\mathfrak o\text{\,\,\, and \,\,\,} {\mathbf K}[G]={\mathbf K}[x]/(x^\mathfrak o-1).$$ Let $M$ be a finite dimensional ${\mathbf K}$-vector space. Setting $$M[x]=M\otimes_{\mathbf K}{\mathbf K}[x] \text{\,\,\, and \,\,\,} M[G]=M\otimes_{\mathbf K}{\mathbf K}[x]/(x^\mathfrak o-1),$$ the Fourier transform is $$%\label{eq:} \xymatrix@R-2pc{ \mathop{\rm{FT}}\nolimits _M & \relax : & M[G] \ar@{->}[r] & M^{\mathfrak o}\\ & & m \ar@{|->}[r] & (m(1),m(\omega), m(\omega^2), \ldots, m(\omega^{\mathfrak o-1}))}$$ Given $m$ in $M[G]=M\otimes_{\mathbf K}{\mathbf K}[x]/(x^\mathfrak o-1)$ the computation of $\mathop{\rm{FT}}\nolimits _M(m)$ reduces to the multiplication of a polynomial of degree $2\mathfrak o-2$ in ${\mathbf K}[x]$ and a polynomial of degree $\mathfrak o-1$ in $M[x]$. This is the key for the proof of the proposition below. **Proposition 9**. *Let ${\mathbf K}$ be a commutative field. Let $M$ be a finite dimensional ${\mathbf K}$-vector space. Let $\mathfrak o\geqslant 2$ be an integer. Assume that ${\mathbf K}$ contains a primitive $\mathfrak o$-th root of unity $\omega$ and a primitive root of unity of order a power of two that is bigger than $3\mathfrak o-3$.
Let $$m=m_0\otimes 1 +m_1 \otimes x+\dots +m_{\mathfrak o-1}\otimes x^{\mathfrak o-1}\bmod x^\mathfrak o-1 \in M\otimes_{\mathbf K}{\mathbf K}[x]/(x^\mathfrak o-1).$$ One can compute $\mathop{\rm{FT}}\nolimits _M(m)$ at the expense of ${\mathcal Q}. \mathfrak o. \log \mathfrak o$ additions, multiplications and inversions in ${\mathbf K}$, additions and scalar multiplications in $M$, where ${\mathcal Q}$ is an absolute constant.* **Proof    ** We adapt the proof from [@bost I.5.4, Proposition 5.10]. For every $0\leqslant i \leqslant 2\mathfrak o-2$ let $$t_i=i(i-1)/2 \text{ \,\,\, and \,\,\, } \beta_i=\omega^{t_i}.$$ We note that $$t_{i+1}=t_i+i \text{\,\,\, and \,\,\, }\beta_{i+1}=\beta_i\omega^i.$$ So one can compute the $\beta_i$ for $0\leqslant i\leqslant 2\mathfrak o-2$ at the expense of $4\mathfrak o$ operations in ${\mathbf K}$. We then compute the inverse of every $\beta_i$. For every $0\leqslant i \leqslant \mathfrak o-1$ let $$n_i=\beta_i^{-1}m_i.$$ These can be computed at the expense of $\mathfrak o$ scalar multiplications in $M$. Let $$n(x)=n_{\mathfrak o-1}\otimes 1+n_{\mathfrak o-2}\otimes x +\dots+n_{0}\otimes x^{\mathfrak o-1} \in M[x]$$ and let $$b(x)=\beta_0+\beta_1x+\dots +\beta_{2\mathfrak o-2}x^{2\mathfrak o-2} \in {\mathbf K}[x].$$ Let $$r(x)=b(x). n(x) = \sum_{0\leqslant i\leqslant 3\mathfrak o-3} r_i \otimes x^i \in M[x].$$ From the identity $$t_{i+j}=t_i+t_j+ij$$ we deduce $$\omega^{ij}\beta_i\beta_j=\beta_{i+j} \text{\,\, for \,\,} 0\leqslant i, j \leqslant \mathfrak o-1$$ and $$\sum_{j=0}^{\mathfrak o-1}\omega^{ij}m_j= \beta_i^{-1}\sum_{j=0}^{\mathfrak o-1}\beta_{i+j}n_j.$$ We deduce that $\mathop{\rm{FT}}\nolimits _M(m)=(\beta_0^{-1}r_{\mathfrak o-1}, \beta_1^{-1}r_{\mathfrak o}, \beta_2^{-1}r_{\mathfrak o+1}, \dots, \beta_{\mathfrak o-1}^{-1}r_{2\mathfrak o-2})$. Since ${\mathbf K}$ contains a primitive root of unity of order a power of two that is bigger than $3\mathfrak o-3$, the coefficients in the product $r(x)=b(x).n(x)$ can be computed at the expense of ${\mathcal Q}.\mathfrak o.\log \mathfrak o$ operations in ${\mathbf K}$, additions in $M$ and products of a vector in $M$ by a scalar in ${\mathbf K}$. See [@bost I.2.4, Algorithme 2.3]. $\Box$ ## Multivariate Fourier transform {#sec:multiv} Let $(\mathfrak o_i)_{1\leqslant i\leqslant I}$ be integers such that $2\leqslant \mathfrak o_1 | \mathfrak o_2 | \dots | \mathfrak o_I$. Let $G=\prod_{1\leqslant i\leqslant I}({\mathbf Z}/\mathfrak o_i{\mathbf Z})$. Let $\mathfrak o=\prod_i \mathfrak o_i$ be the order of $G$. Let $e=\mathfrak o_I$ be the exponent of $G$. Let ${\mathbf K}$ be a commutative field containing a primitive root of unity of order $e$. The group algebra ${\mathbf K}[G]$ is isomorphic to ${\mathbf K}[x_1, \ldots, x_I]/(x_1^{\mathfrak o_1}-1, \ldots, x_I^{\mathfrak o_I}-1)$. We set $A_i={\mathbf K}[x_i]/(x_i^{\mathfrak o_i}-1)$ and write ${\mathbf K}[G]$ as a tensor product of ${\mathbf K}$-algebras $${\mathbf K}[G]=A_1\otimes_{\mathbf K}A_2 \otimes_{\mathbf K}\dots \otimes_{\mathbf K} A_I.$$ For every $1\leqslant i\leqslant I$ we call $M_i$ the tensor product of all $A_j$ but $A_i$. This is a ${\mathbf K}$-vector space of dimension $\mathfrak o/\mathfrak o_i$. We can see ${\mathbf K}[G]$ as $M_i\otimes {\mathbf K}[x]/(x^{\mathfrak o_i}-1)$. We denote $\mathop{\rm{FT}}\nolimits _i$ the corresponding univariate Fourier transform as defined in Section [6.2](#sec:unift){reference-type="ref" reference="sec:unift"}.
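Before stating how these univariate transforms assemble, here is a purely illustrative sketch of the idea: an element of the group algebra is stored as an $I$-dimensional coefficient array and a univariate transform is applied along each axis in turn. For simplicity the sketch works over the complex numbers with NumPy's built-in DFT standing in for the finite-field transform of Proposition [Proposition 9](#prop:ffcy){reference-type="ref" reference="prop:ffcy"}; over ${\mathbf K}$ one would substitute that transform.

```python
import numpy as np

# Illustrative sketch over the complex numbers: an element of C[G] for
# G = Z/o_1 x ... x Z/o_I is stored as an I-dimensional coefficient array,
# and its Fourier transform (evaluation at all characters) is obtained by
# applying a univariate DFT along each axis, one cyclic factor at a time.

def group_algebra_ft(a):
    """a: I-dimensional array indexed by G = prod Z/o_i Z; returns FT_G(a)."""
    A = np.asarray(a, dtype=complex)
    for axis in range(A.ndim):
        A = np.fft.fft(A, axis=axis)   # univariate transform along one factor
    return A

# Sanity check: multiplication in C[G] (cyclic convolution along every axis)
# becomes pointwise multiplication after the transform.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=(4, 6)), rng.normal(size=(4, 6))   # G = Z/4 x Z/6
    conv = np.real(np.fft.ifftn(np.fft.fftn(a) * np.fft.fftn(b)))
    assert np.allclose(group_algebra_ft(conv),
                       group_algebra_ft(a) * group_algebra_ft(b))
```

The final assertion illustrates the multiplication strategy of Section [6.4](#sec:fasmul){reference-type="ref" reference="sec:fasmul"}: after the transform, multiplication in the group algebra becomes pointwise multiplication.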
We have $$\mathop{\rm{FT}}\nolimits _G= \mathop{\rm{FT}}\nolimits _1 \circ \mathop{\rm{FT}}\nolimits _2 \circ \dots \circ \mathop{\rm{FT}}\nolimits _I.$$ Using Proposition [Proposition 9](#prop:ffcy){reference-type="ref" reference="prop:ffcy"} we deduce **Proposition 10**. *Let $(\mathfrak o_i)_{1\leqslant i\leqslant I}$ be integers such that $2\leqslant \mathfrak o_1 | \mathfrak o_2 | \dots | \mathfrak o_I$. Let $G=\prod_{1\leqslant i\leqslant I}({\mathbf Z}/\mathfrak o_i{\mathbf Z})$. Let $\mathfrak o$ be the order of $G$. Let $e=\mathfrak o_I$ be the exponent of $G$. Let ${\mathbf K}$ be a commutative field containing a primitive root of unity of order $e$ and a primitive root of unity of order a power of two that is bigger than $3e-3$. Given an element $a = \sum_{\sigma \in G}a_\sigma \sigma$ in ${\mathbf K}[G]$ one can compute $\mathop{\rm{FT}}\nolimits _G(a)$ in $\mathop{\rm{Hom}}\nolimits (\hat G, {\mathbf K})$ at the expense of ${\mathcal Q}.\mathfrak o.\log \mathfrak o$ additions, multiplications and inversions in ${\mathbf K}$. Here ${\mathcal Q}$ is some absolute constant.* ## Fast multiplication in ${\mathbf K}[G]$ {#sec:fasmul} Let $G$, $\mathfrak o$, $e$ be as in Section [6.3](#sec:multiv){reference-type="ref" reference="sec:multiv"}. Let ${\mathbf K}$ be a commutative field. In this section we study the algorithmic complexity of computing the product of two given elements $$\label{eq:ab}a =\sum_{\sigma \in G}a_\sigma \sigma\text{ \,\,\, and \,\, } b =\sum_{\sigma \in G}b_\sigma \sigma \text{\,\, in \,\, } {\mathbf K}[G].$$ It will depend on the field ${\mathbf K}$. We first treat the case when ${\mathbf K}$ has enough roots of unity. **Proposition 11**. *In the context of the beginning of Section [6.4](#sec:fasmul){reference-type="ref" reference="sec:fasmul"} assume that ${\mathbf K}$ contains a primitive root of unity of order $e$ and a primitive root of unity of order a power of two that is bigger than $3e-3$. One can compute the product $ab\in {\mathbf K}[G]$ at the expense of ${\mathcal Q}.\mathfrak o.\log \mathfrak o$ operations in ${\mathbf K}$ where ${\mathcal Q}$ is some absolute constant.* **Proof    ** We compute $A=\mathop{\rm{FT}}\nolimits _G(a)$ and $B=\mathop{\rm{FT}}\nolimits _G(b)$ as in Section [6.3](#sec:multiv){reference-type="ref" reference="sec:multiv"}. We then compute $C=AB$ in $\mathop{\rm{Hom}}\nolimits (\hat G, {\mathbf K})$ at the expense of $\mathfrak o$ multiplications in ${\mathbf K}$. We then deduce $c=ab$ by applying $\mathop{\rm{FT}}\nolimits _G^{-1}$ to $C$. The cost of this computation is bounded using Proposition [Proposition 10](#prop:ffG){reference-type="ref" reference="prop:ffG"}.$\Box$ We now consider the case when ${\mathbf K}$ is ${\mathbf Z}/p{\mathbf Z}$ where $p$ is a prime integer. In general ${\mathbf K}$ lacks the required roots of unity. So we transport the problem into another ring using non-algebraic maps. We let $t$ be the smallest power of $2$ that is bigger than $3e-3$. Let $p'$ be the smallest prime integer congruent to $1$ modulo $\mathfrak o. (p-1)^2.t$. We set ${\mathbf K}'={\mathbf Z}/p'{\mathbf Z}$ and note that ${\mathbf K}'$ contains a primitive root of unity of order $e$ and a primitive root of unity of order a power of two bigger than $3e-3$. Also $$\label{eq:pq} p' > \mathfrak o. (p-1)^2.$$ By a result of Heath-Brown, the exponent in Linnik's theorem for primes in arithmetic progressions can be taken to be $11/2$. See [@hb] and the recent improvement [@xyl].
We deduce that there exists an absolute constant ${\mathcal Q}$ such that $$\label{eq:linn} p' \leqslant {\mathcal Q} (\mathfrak o. p)^{11}.$$ For $c$ a congruence class in ${\mathbf K}={\mathbf Z}/p{\mathbf Z}$ we denote $\ell (c)$ the lift of $c$, that is the unique integer in the intersection of $c$ with the interval $[0,p[$. We write $$\label{eq:up}\mathop{{\uparrow}}\nolimits (c)=\ell (c)\bmod p'.$$ We thus define maps $\ell : {\mathbf K}\rightarrow {\mathbf Z}$ and $\mathop{{\uparrow}}\nolimits : {\mathbf K}\rightarrow {\mathbf K}'$. We similarly define the lifting map $\ell' : {\mathbf K}' \rightarrow {\mathbf Z}$ and $\mathop{{\downarrow}}\nolimits : {\mathbf K}'\rightarrow {\mathbf K}$ by $$\label{eq:down}\mathop{{\downarrow}}\nolimits (c)=\ell' (c)\bmod p \text{ \,\,\, for \,\,} c \in {\mathbf K}'.$$ These four maps can be extended to the corresponding group algebras by coefficientwise application. Given $a$ and $b$ as in Equation ([\[eq:ab\]](#eq:ab){reference-type="ref" reference="eq:ab"}) we define $$A = \ell (a)= \sum_{\sigma \in G}\ell (a_\sigma)\sigma\text{ \,\,\, and \,\, } B = \ell (b)=\sum_{\sigma \in G}\ell (b_\sigma) \sigma \text{\,\, in \,\, } {\mathbf Z}[G] \text{ \,\,\, and \,\, } C=AB.$$ The coefficients in $C$ belong to the interval $[0,\mathfrak o. (p-1)^2 ]$, which is contained in $[0,p'[$ by Equation ([\[eq:pq\]](#eq:pq){reference-type="ref" reference="eq:pq"}). So $$C= \ell'( (A\bmod p')\times (B\bmod p')) \text{\,\,\, and \,\,\,} ab = \mathop{{\downarrow}}\nolimits (\mathop{{\uparrow}}\nolimits (a)\mathop{{\uparrow}}\nolimits (b)).$$ Using Proposition [Proposition 11](#prop:enro){reference-type="ref" reference="prop:enro"} we deduce **Proposition 12**. *There exists an absolute constant ${\mathcal Q}$ such that the following is true. Let $G$, $\mathfrak o$, $e$ be as in Section [6.3](#sec:multiv){reference-type="ref" reference="sec:multiv"}. Let ${\mathbf K}= {\mathbf Z}/p{\mathbf Z}$. There exists a prime integer $p'\leqslant {\mathcal Q}(\mathfrak o. p)^{11}$ and a straight-line program of length smaller than ${\mathcal Q}.\mathfrak o.\log \mathfrak o$ that computes the product $c=\sum_g c_g[g]$ of two elements $a=\sum_g a_g[g]$ and $b=\sum_g b_g[g]$ in ${\mathbf K}[G]$ given by their coefficients $(a_g)_g$ and $(b_g)_g$. The operations in this straight-line program are additions and multiplications in $({\mathbf Z}/p'{\mathbf Z})$ and evaluations of the maps $\mathop{{\uparrow}}\nolimits$ and $\mathop{{\downarrow}}\nolimits$ defined in Equations ([\[eq:up\]](#eq:up){reference-type="ref" reference="eq:up"}) and ([\[eq:down\]](#eq:down){reference-type="ref" reference="eq:down"}).* Now let ${\mathbf L}$ be a field extension of degree $d$ of ${\mathbf K}={\mathbf Z}/p{\mathbf Z}$. We assume that elements in ${\mathbf L}$ are represented by their coordinates in some ${\mathbf K}$-basis of ${\mathbf L}$. Work by Shparlinsky, Tsfasmann, Vladut [@STV], Shokrollahi [@SH], Ballet and Rolland [@BR; @BAL], Chaumine [@CHA], Randriambololona [@RAN] and others implies that the ${\mathbf K}$-bilinear complexity of ${\mathbf L}$ is bounded by an absolute constant times $d$. We deduce the following proposition. **Proposition 13**. *There exists an absolute constant ${\mathcal Q}$ such that the following is true. Let $G$, $\mathfrak o$, $e$ be as in Section [6.3](#sec:multiv){reference-type="ref" reference="sec:multiv"}. Let ${\mathbf K}= {\mathbf Z}/p{\mathbf Z}$ and ${\mathbf L}$ a field extension of degree $d$ of ${\mathbf K}$. There exists a prime integer $p'\leqslant {\mathcal Q}(\mathfrak o. p)^{11}$
and a straight-line program of length $\leqslant {\mathcal Q}(d.\mathfrak o.\log \mathfrak o+d^2.\mathfrak o)$ that computes the product $c=\sum_g c_g[g]$ of two elements $a=\sum_g a_g[g]$ and $b=\sum_g b_g[g]$ in ${\mathbf L}[G]$ given by their coefficients $(a_g)_g$ and $(b_g)_g$. The operations in this straight-line program are additions and multiplications in $({\mathbf Z}/p{\mathbf Z})$ and in $({\mathbf Z}/p'{\mathbf Z})$ and evaluations of the maps $\mathop{{\uparrow}}\nolimits$ and $\mathop{{\downarrow}}\nolimits$ defined in Equations ([\[eq:up\]](#eq:up){reference-type="ref" reference="eq:up"}) and ([\[eq:down\]](#eq:down){reference-type="ref" reference="eq:down"}).* # Constructing functions in the Hilbert class field {#sec:constf} We have defined in Section [4](#sec:comm){reference-type="ref" reference="sec:comm"} matrices $\mathcal E$, $\mathcal C$ and $\mathcal I$ for the evaluation and interpolation of functions in the linear space of global sections of a $G$-equivariant invertible sheaf on a curve $Y$. We have seen in Sections [4](#sec:comm){reference-type="ref" reference="sec:comm"}, [5](#sec:pade){reference-type="ref" reference="sec:pade"}, and [6](#sec:ga){reference-type="ref" reference="sec:ga"} how to efficiently compute with these matrices. In this section we address the problem of computing these matrices. We recall in Section [7.1](#sec:cftj){reference-type="ref" reference="sec:cftj"} the necessary background from class field theory of function fields over a finite field. We illustrate the constructive aspects of class fields on a small example in Section [7.2](#sec:excftj){reference-type="ref" reference="sec:excftj"}. An important feature of this method is that we only work with divisors and functions on $X$. This is of some importance since in the applications we have in mind the genus of $Y$ is much larger (e.g. exponentially larger) than the genus of $X$. ## Class field theory and the jacobian variety {#sec:cftj} We start from a projective curve $X$ over a finite field ${\mathbf K}$ of characteristic $p$. We assume that $X$ is smooth and absolutely integral. We let ${\bar {\mathbf K}}$ be an algebraic closure of ${\mathbf K}$. We need an abelian cover $\tau : Y \rightarrow X$, with $Y$ absolutely integral. We will require that $Y$ have a ${\mathbf K}$-rational point $Q_1$. This implies that $\tau$ is completely split above $P_1=\tau (Q_1)$. According to class field theory [@gacc; @rosen] there is a maximal abelian unramified cover of $X$ over ${\mathbf K}$ that splits totally above $P_1$. We briefly recall its geometric construction. Let $J_X$ be the jacobian variety of $X$ and let $$j_X : X\rightarrow J_X$$ be the Jacobi map with origin $P_1$. Let $$F_{\mathbf K}: J_X\rightarrow J_X$$ be the Frobenius endomorphism of degree $|{\mathbf K}|$, the cardinality of ${\mathbf K}$. The endomorphism $$\wp = F_{\mathbf K}-1 : J_X\rightarrow J_X$$ is an unramified Galois cover between ${\mathbf K}$-varieties with Galois group $J_X({\mathbf K})$. We denote $${\tau_{{\mathrm{max}}}}: {Y_{{\mathrm{max}}}}\rightarrow X$$ the pullback of $\wp$ along $j_X$. This is the maximal abelian unramified cover of $X$ that splits totally above $P_1$. Any such cover $\tau : Y \rightarrow X$ is thus a quotient of ${\tau_{{\mathrm{max}}}}$ by some subgroup $H$ of $J_X({\mathbf K})$. We set $G=J_X({\mathbf K})/H$ and notice that $G$ is at the same time the fiber of $\tau$ above $P_1$ and its Galois group, acting by translations in $J_X/H$.
$$%\label{eq:} \begin{tikzcd} J_X({\mathbf K}) \ar[r,hook] \ar[d] & {Y_{{\mathrm{max}}}}\ar[r,hook] \ar[d] & J_X \ar[d,"H"] \ar[dd,bend left = 60, "\wp"] \\ G=J_X({\mathbf K})/H \ar[r,hook] \ar[d] & Y \ar[r,hook] \ar[d,"\tau "] & J_X/H \ar[d,"G"]\\ 0=P_1 \ar[r,hook] &X \ar[r,hook] &J_X & \end{tikzcd}$$ Let $P$ be a ${\mathbf K}$-rational point on $X$ and let ${Q_{{\mathrm{max}}}}$ be any point on ${Y_{{\mathrm{max}}}}({\bar {\mathbf K}})$ such that $${\tau_{{\mathrm{max}}}}({Q_{{\mathrm{max}}}})=\wp({Q_{{\mathrm{max}}}})=P.$$ We have $F_{\mathbf K}({Q_{{\mathrm{max}}}})={Q_{{\mathrm{max}}}}+P$. So the decomposition group of any place on $Y$ above $P$ is the subgroup of $G=J_X({\mathbf K})/H$ generated by the class of $P-P_1$. In particular the fiber of $\tau$ above $P$ splits over ${\mathbf K}$ if and only if $P$ is sent into $H$ by the Jacobi map. Equivalently the class of $P-P_1$ belongs to $H$. ## An example {#sec:excftj} In this section ${\mathbf K}$ is the field with three elements and $X$ is the smooth projective model of the plane curve with equation $$Y^2Z^3=X(X-Z)(X^3+X^2Z+2Z^3).$$ This is an absolutely integral curve of genus $2$. The characteristic polynomial of the Frobenius of $X/{\mathbf K}$ is $$\label{eq:lpol}\chi_{\mathbf K}(t)=t^4 + t^3 + 2t^2 + 3t + 9.$$ The characteristic polynomial of the Frobenius of a curve over a finite field (given by a reasonable model) can be computed in time polynomial in $p.g.n$ where $p$ is the characteristic of the field, $n$ its degree over the prime field, and $g$ the genus of the curve, using the so-called $p$-adic methods introduced by Kato-Lubkin [@kl], Satoh [@satoh], Kedlaya [@ked], Lauder and Wan [@lw] and widely extended since then. When the genus of the curve is fixed, the characteristic polynomial of the Frobenius can be computed in time polynomial in the logarithm of the cardinality of ${\mathbf K}$, using the $\ell$-adic method introduced by Schoof [@schoof] and generalized by Pila [@pila]. We deduce from Equation [\[eq:lpol\]](#eq:lpol){reference-type="ref" reference="eq:lpol"} that the jacobian variety $J_X$ of $X$ has $$\chi_{\mathbf K}(1)=16$$ rational points. There are $5$ places of degree $1$ on $X$. We call $P_1$ the unique place at $(0,1,0)$ and let $$P_2=(0,0,1), \,\, P_3=(1,0,1), \,\, P_4=(2,2,1), \,\, P_5=(2,1,1).$$ The Picard group $J_X({\mathbf K})$ is the direct sum of a subgroup of order $8$ generated by the class of $P_4-P_1$ and a subgroup of order $2$ generated by the class of $P_2-P_1$. The class of $4(P_4-P_1)$ is the class of $P_3-P_1$. The classes of $P_2-P_1$ and $P_3-P_1$ generate a subgroup $H$ of $\mathop{\rm{Pic}}\nolimits ^0(X)$ isomorphic to $({\mathbf Z}/2{\mathbf Z})^2$. The quotient group $$G=J_X({\mathbf K})/H=\mathop{\rm{Pic}}\nolimits ^0(X)/H$$ is cyclic of order $4$ generated by $P_4-P_1$. So the subcover $\tau : Y \rightarrow X$ of ${Y_{{\mathrm{max}}}}$ associated with $H$ is cyclic of order $4$. And the fibers above $P_1$, $P_2$, and $P_3$ in this cover are split over ${\mathbf K}$. We will work with this cover. According to Kummer theory, there is a duality between the prime to $p$ part of $\mathop{\rm{Pic}}\nolimits ^0(X)$ and the étale part of the kernel of $F_{\mathbf K}-p$. Associated to the quotient $G = \mathop{\rm{Pic}}\nolimits ^0(X)/H$ there must be a cyclic subgroup $C_4$ of order $4$ inside the latter kernel. This cyclic subgroup is isomorphic to $\mu_4$, the group of fourth roots of unity.
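As a quick sanity check (our own illustration, not part of the construction), the point counts used above can be verified by direct enumeration on the affine model $y^2=x(x-1)(x^3+x^2+2)$ over the field with three elements:

```python
# Purely illustrative check of the point counts in this example.
p = 3
f = lambda x: (x * (x - 1) * (x**3 + x**2 + 2)) % p
affine = [(x, y) for x in range(p) for y in range(p) if (y * y) % p == f(x)]
print(affine)                       # [(0, 0), (1, 0), (2, 1), (2, 2)]
assert len(affine) + 1 == 5         # the 5 places of degree 1 on X
assert 1 + 1 + 2 + 3 + 9 == 16      # chi_K(1) = 16 rational points on J_X
```

Four affine points together with the place at infinity account for the five rational places listed above, and $\chi_{\mathbf K}(1)=1+1+2+3+9=16$ is the announced order of $J_X({\mathbf K})$.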
We let $\zeta$ be a primitive fourth root of unity in ${\bar {\mathbf K}}$ and denote ${\mathbf L}$ the degree two extension of ${\mathbf K}$ generated by $\zeta$. In order to find the group $C_4$ we are interested in, we use algorithms to compute the kernels of $F_{\mathbf K}-1$ and $F_{\mathbf K}-p$ described in [@booktau Chapter 13]. The basic idea is to pick random elements in $J_X({\mathbf L})$ and project them onto the relevant characteristic subspaces for the action of $F_{\mathbf K}$, using our knowledge of the characteristic polynomial $\chi_{\mathbf K}$. We set $$P_6=(2\zeta, 2) \text{\,\,\, and \,\,\,} \Gamma = 2(P_6-P_4)$$ and find that the class $\gamma$ of $\Gamma$ is of order $4$ and satisfies $$F_{\mathbf K}(\gamma)=3\gamma.$$ Thus $\gamma$ generates the group $C_4$ we were looking for. There is a unique function $R$ in ${\mathbf L}(X)$ with divisor $4\Gamma$ and taking value $1$ at $P_1$. The cover $\tau : Y \rightarrow X$ we are interested in is obtained by adding a $4$-th root $r$ of $R$ to ${\mathbf L}(X)$. To be quite precise this construction produces the base change to ${\mathbf L}$ of the cover we are interested in. This will be fine for our purpose. So we let $$r=R^{1/4}$$ be the $4$-th root of $R$ taking value $1$ at $Q_1$. Equivalently we define $Q_1$ to be the point over $P_1$ where $r$ takes the value $1$. With the notation of Section [4.3](#sec:orths){reference-type="ref" reference="sec:orths"} we take $$D=2P_5 \text{ \,\,\, and \,\,\, } P=P_1+P_2+P_3.$$ We call $E$ the pullback of $D$ by $\tau$ and $Q$ the pullback of $P$. We expect $$\mathcal L(E)=H^0(\mathcal O_Y (E),Y)$$ to be a free ${\mathbf K}[G]$-module of rank $$\deg(D)-g_X+1=1.$$ This will be confirmed by our computations. Because the fibers above $P_1$, $P_2$ and $P_3$ all split over ${\mathbf K}$, the evaluation map $\mathcal L(E) \rightarrow {\mathbf A}$ is described by a $3\times 1$ matrix with coefficients in ${\mathbf K}[G]$. For every $2\leqslant i\leqslant 3$ we choose a $4$-th root of $R(P_i)$ in ${\mathbf L}$. This amounts to choosing a point $Q_i$ in the fiber of $\tau$ above $P_i$. We call $\sigma$ the unique element in $G$ that sends $r$ to $\zeta.r$ so $$G \owns \sigma : r \mapsto \zeta .r .$$ The ${\mathbf K}$-vector space $\mathcal L(E)$ decomposes over ${\mathbf L}$ as a sum of four eigenspaces associated to the four eigenvalues $1$, $\zeta$, $\zeta^2=-1$, $\zeta^3=-\zeta$ of $\sigma$. Let $0\leqslant j\leqslant 3$ and let $f$ be an eigenfunction in $\mathcal L(E)$ associated with the eigenvalue $\zeta^j$. Then the quotient $f/r^j$ is invariant by $G$ and its divisor satisfies $$(f/r^j) \geqslant -E - j.(r)= -E-j.\tau^\star(\Gamma).$$ So $f/r^j$ can be seen as a function on $X$ with divisor bigger than or equal to $-D -j\Gamma$. The eigenspace $\mathcal L(E)_j$ associated to $\zeta^j$ is thus obtained as the image of the map $$%\label{eq:} \xymatrix@R-2pc{ H^0(X, \mathcal O_X(D+j\Gamma)) \ar@{->}[r] & \mathcal L(E)_j\\ F \ar@{|->}[r] & f= Fr^j}$$ Evaluating $f$ at $Q_i$ for $1\leqslant i\leqslant 3$ then reduces to evaluating $F=f/r^j$ at $P_i$ and multiplying the result by the chosen $4$-th root of $R(P_i)$, raised to the power $j$. This remark enables us to compute a ${\mathbf K}$-basis of $\mathcal L(E)$ consisting of eigenfunctions of $\sigma$ and to evaluate the functions in this basis at the $(Q_i)_{1\leqslant i\leqslant 3}$ without ever writing equations for $Y$.
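The following schematic sketch (our own illustration) summarizes this evaluation strategy. The helpers `rr_basis` (returning a basis of $H^0(X,\mathcal O_X(D+j\Gamma))$), `ev` (evaluating a function of ${\mathbf L}(X)$ at a point of $X$) and `root_R` (the list of chosen $4$-th roots of $R(P_i)$) are hypothetical placeholders, e.g. supplied by a computer algebra system; nothing here requires a model of $Y$.

```python
# Schematic sketch only: rr_basis, ev, root_R are assumed helpers, not part of
# the construction above.  Everything is computed on X.

def eigenbasis_values(D, Gamma, points, root_R, rr_basis, ev):
    """Values at Q_1, Q_2, Q_3 of a basis of L(E) made of eigenfunctions of sigma.

    An eigenfunction for the eigenvalue zeta^j is f = F * r^j with F in
    H^0(X, O_X(D + j*Gamma)); its value at Q_i is F(P_i) * root_R[i]**j.
    """
    rows = []
    for j in range(4):                         # eigenvalues 1, zeta, -1, -zeta
        for F in rr_basis(D + j * Gamma):      # Riemann-Roch space on X only
            rows.append([ev(F, P) * root_R[i] ** j
                         for i, P in enumerate(points)])
    return rows
```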
We only need to compute the Riemann-Roch spaces associated to the divisors $D+j\Gamma$ on $X$ for $0\leqslant j\leqslant 3$. The Riemann-Roch space of a divisor $D=D_+-D_{-}$ on a curve $X$ is computed in time polynomial in the genus of $X$ and the degrees of the positive and negative parts $D_+$ and $D_{-}$ of $D$, using the Brill-Noether algorithm and its many variants. The most efficient general algorithm is due to Makdisi [@mak1; @mak2]. In case the exponent of $G$ is large, we may have to compute linear spaces like $H^0(X,\mathcal O_X(D+j\Gamma))$ for large $j$. In that case, one should use the method introduced by Menezes, Okamoto, and Vanstone [@MOV] in the context of pairing computation, in order to replace $j$ by its logarithm in the complexity. Passing from the values of the eigenfunctions to the evaluation matrix $\mathcal E$ reduces to applying a Fourier transform. We find $$\label{eq:cE} \mathcal E= \begin{pmatrix} 1 \\ e_{1,2} \\ e_{1,3}\end{pmatrix} \text{\,\,\, with \,\,} e_{1,1} = 1, \,\, e_{1,2}=1+2\sigma +2\sigma^2+2\sigma^3, \,\, e_{1,3} = 2+2\sigma +2\sigma^2+\sigma^3.$$ Having a unit for $e_{1,1}$ is quite convenient. In general one says that $\mathcal E$ is systematic when the top square submatrix is the identity. This is possible when evaluation at the first points $Q_{i,1}$ provides a basis of the dual of $\mathcal L(E)$. This situation is generic in some sense but not guaranteed. Having a systematic matrix $\mathcal E$ makes it trivial to deduce the checking and interpolation matrices $$\label{eq:cC} \mathcal C= \begin{pmatrix} e_{1,2} & e_{1,3}\\ -1 & 0 \\ 0&-1 \end{pmatrix} \text{\,\,\, and \,\,} \mathcal I= \begin{pmatrix}1 &0&0 \end{pmatrix}.$$ # Interpolation on algebraic curves {#sec:interpol} In this section we recall two classical applications of interpolation on algebraic curves over finite fields and detail the benefit of ${\mathbf K}[G]$-module structures in this context. Section [8.1](#sec:multens){reference-type="ref" reference="sec:multens"} is concerned with the multiplication tensor in finite fields. In Sections [8.2](#sec:geocod){reference-type="ref" reference="sec:geocod"} and [8.3](#sec:deco){reference-type="ref" reference="sec:deco"} we see that geometric codes associated to $G$-equivariant divisors can be encoded in quasi-linear time and decoded in quasi-quadratic time if $G$ is abelian, acts freely, and is big enough. ## The complexity of multiplication in finite fields {#sec:multens} The idea of using Lagrange interpolation over an algebraic curve to multiply two elements in a finite field is due to Chudnovsky [@CC] and has been developed by Shparlinski, Tsfasmann and Vladut [@STV], Ballet and Rolland [@BR], Chaumine [@CHA], Randriambololona [@RAN] and others. Let ${\mathbf K}$ be a finite field and let $\mathfrak o\geqslant 2$ be an integer. Let $Y$ be a smooth, projective, absolutely integral curve over ${\mathbf K}$ and $B$ an irreducible divisor of degree $\mathfrak o$ on $Y$. We call ${\mathbf L}= H^0( \mathcal O_B,B)$ the residue field at $B$. We choose a divisor $E$ disjoint from $B$ and assume that the evaluation map $$e_B : H^0(\mathcal O_Y(E),Y)\rightarrow {\mathbf L}$$ is surjective so that elements in ${\mathbf L}$ can be represented by functions in $H^0(\mathcal O_Y(E),Y)$. The latter functions will be characterized by their values at a collection $(Q_i)_{1\leqslant i\leqslant N}$ of ${\mathbf K}$-rational points on $Y$.
We denote $$e_Q : H^0(\mathcal O_Y(2E),Y) \rightarrow {\mathbf K}^N$$ the evaluation map at these points which we assume to be injective. The multiplication of two elements $e_B(f_1)$ and $e_B(f_2)$ in ${\mathbf L}$ can be achieved by evaluating $f_1$ and $f_2$ at the $Q_i$, then multiplying each $f_1(Q_i)$ by the corresponding $f_2(Q_i)$, then finding the unique function $f_3$ in $H^0(\mathcal O_Y(2E),Y)$ taking value $f_1(Q_i)f_2(Q_i)$ at $Q_i$, then computing $e_B(f_3)$. The number of bilinear multiplications in ${\mathbf K}$ in the whole process is equal to $N$. This method uses curves over ${\mathbf K}$ with arbitrarily large genus having a number of ${\mathbf K}$-points bigger than some positive constant times their genus. It bounds the ${\mathbf K}$-bilinear complexity of multiplication in ${\mathbf L}$ by an absolute constant times the degree $\mathfrak o$ of ${\mathbf L}$ over ${\mathbf K}$, but it says little about the linear part of the algorithm : evaluation of the maps $e_B$ and $e_Q$ and their right (resp. left) inverses. Now assume that the group of ${\mathbf K}$-automorphisms of $Y$ contains a cyclic subgroup $G$ of order $\mathfrak o$ acting freely on $Y$. We call $\tau : Y\rightarrow X$ the quotient by $G$ map. Assume that $B$ is the fiber of $\tau$ above some rational point $a$ on $X$. Assume that $E$ (resp. $Q$) is the pullback by $\tau$ of a divisor $D$ (resp. $P$) on $X$. Under mild conditions, all the linear spaces above become free ${\mathbf K}[G]$-modules and the evaluation maps are $G$-equivariant. A computational consequence is that the linear part in the Chudnovsky algorithm becomes quasi-linear in the degree $\mathfrak o$ of the extension ${\mathbf L}/{\mathbf K}$. This remark has been exploited in [@coez] to bound the complexity of multiplication of two elements in a finite field given by their coordinates in a normal basis. The decompositions of the multiplication tensor that are proven to exist in [@coez] can actually be computed using the techniques presented in Section [7](#sec:constf){reference-type="ref" reference="sec:constf"}. ## Geometric codes {#sec:geocod} The construction of error correcting codes by evaluating functions on algebraic curves of higher genus is due to Goppa [@gop1; @go2]. Let $Y$ be a smooth, projective, absolutely integral curve over a finite field ${\mathbf K}$ of characteristic $p$. Let $d$ be the degree of ${\mathbf K}$ over the prime field ${\mathbf Z}/p{\mathbf Z}$. Let $g_Y$ be the genus of $Y$. Let $Q_1$, ..., $Q_N$ be pairwise distinct ${\mathbf K}$-rational points on $Y$. Let $t_i$ be a uniformizing parameter at $Q_i$. Let $E$ be a divisor that is disjoint from $Q = Q_1+\dots +Q_N$. Assume that $$\label{eq:encad} 2g_Y-1\leqslant \deg (E)\leqslant \deg (Q)-1.$$ Let $${\mathbf A}= H^0(\mathcal O_Q,Q) ={\mathbf K}^N$$ be the residue algebra at $Q$. Let $$\hat{\mathbf A}= H^0(\Omega^1_{Y/{\mathbf K}}(-Q)/\Omega^1_{Y/{\mathbf K}},Y) = \bigoplus_{i=1}^N{\mathbf K}\frac{dt_i}{t_i}={\mathbf K}^N$$ be the dual of ${\mathbf A}$. Evaluation at the $Q_i$ defines an injective linear map $$\mathcal L(E)=H^0(\mathcal O_Y(E),Y)\rightarrow {\mathbf A}.$$ We similarly define an injective linear map $${\mathit \Omega}(-Q+E)=H^0(\Omega_{Y/{\mathbf K}}(-Q+E),Y)\rightarrow \hat{\mathbf A}.$$ The two vector subspaces $\mathcal L(E)$ and ${\mathit \Omega}(-Q+E)$ are orthogonal to each other. They can be considered as linear codes over ${\mathbf K}$ and denoted $C_\mathcal L$ and $C_\Omega$ respectively.
The code $C_\mathcal L$ has length $N$, dimension $$K=\deg (E)-g_Y+1$$ and minimum distance $\geqslant N-\deg(E)$. Given a basis of $\mathcal L(E)$ one defines the generating matrix $\mathcal E_E$ of the code $C_\mathcal L$ to be the $N\times K$-matrix of the injection $\mathcal L(E)\rightarrow {\mathbf A}={\mathbf K}^N$. One similarly defines the parity-check matrix $\mathcal C_E$ to be the $N\times (N-K)$-matrix of ${\mathit \Omega}(-Q+E)\rightarrow \hat{\mathbf A}$. We finally call $\mathcal I_E$ the $K\times N$-matrix of some projection of ${\mathbf A}$ onto $C_\mathcal L$. A message of length $K$ is encoded by multiplying the corresponding column on the left by $\mathcal E_E$. The received word is checked by multiplying it on the left by the transpose of $\mathcal C_E$. And the initial message is recovered from a correct codeword applying the interpolation matrix $\mathcal I_E$. In full generality, coding, testing and interpolating respectively require $2NK$, $2N(N-K)$ and $2KN$ operations in ${\mathbf K}$. Assume now that the group of ${\mathbf K}$-automorphisms of $Y$ contains a finite commutative subgroup $G$ of order $\mathfrak o$ acting freely on $Y$. Let $\tau : Y\rightarrow X$ be the quotient by $G$ map. Assume that $\mathfrak o$ divides $N$ and let $$n=N/\mathfrak o.$$ Assume that $Q$ is the pullback by $\tau$ of a divisor $$P=P_1+\dots+P_n$$ on $X$. Assume that $E$ is the pullback of some divisor $D$ on $X$. We are thus in the situation of Section [4](#sec:comm){reference-type="ref" reference="sec:comm"}. The code $C_\mathcal L$ is a free ${\mathbf K}[G]$-submodule of ${\mathbf A}$ of rank $$k=K/\mathfrak o$$ and $\mathcal C_\Omega$ is its orthogonal module for the ${\mathbf K}[G]$-bilinear form defined in Section [3.3](#sec:duaa){reference-type="ref" reference="sec:duaa"}. The matrices $\mathcal E_E$, $\mathcal C_E$, and $\mathcal I_E$ can be seen as matrices with coefficients in ${\mathbf K}[G]$ of respective sizes $n\times k$, $n\times (n-k)$, and $k\times n$. Coding now requires $2nk$ operations in ${\mathbf K}[G]$ rather than $2NK$ operations in ${\mathbf K}$. According to Proposition [Proposition 13](#prop:fifi){reference-type="ref" reference="prop:fifi"}, each such operation requires less than ${\mathcal Q}. d^2. \mathfrak o. \log \mathfrak o$ operations in ${\mathbf Z}/p{\mathbf Z}$ and ${\mathbf Z}/p'{\mathbf Z}$ where $p'\leqslant {\mathcal Q}. (\mathfrak o. p)^{11}$ for some absolute constant ${\mathcal Q}$. The total cost of coding is thus bounded by a constant times $$\frac{NK}{\mathfrak o^2}.d^2.\mathfrak o.\log \mathfrak o(\log p+\log \mathfrak o)$$ elementary operations. Assuming that $$\label{cond:1} \log \mathfrak o\text{\,\,\, is bigger than a positive constant times \,\,\,}k\log p$$ we bound the encoding complexity by a constant times $$N(\log N)^3d^2$$ elementary operations, where $d$ is the degree of ${\mathbf K}$ over the prime field ${\mathbf Z}/p{\mathbf Z}$ and $N$ is the length of the code. We obtain the same complexity estimate for parity-checking and interpolating. ## Basic decoding {#sec:deco} In the situation of the beginning of Section [8.2](#sec:geocod){reference-type="ref" reference="sec:geocod"} we assume that we have received a message $r$ in ${\mathbf A}={\mathbf K}^N$. Let $c$ be the closest codeword to $r$ in $C_\mathcal L$ for the Hamming distance in ${\mathbf K}^N$. Write $$r=c+\epsilon$$ and call $\epsilon$ the error vector. Let $f$ be the unique function in $\mathcal L(E)$ such that $f=c\bmod Q$. 
The support of the error vector $\epsilon$ is the effective divisor ${\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)}$ consisting of all points $Q_i$ where $\epsilon$ is nonzero. The degree of ${\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)}$ is the number of errors in $r$. The principle of the basic decoding algorithm [@jlkh; @svl] is: if $a_0$ is a function of small degree vanishing at every point in the support ${\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)}$ then $a_0r=a_0c \bmod Q$ is the residue modulo $Q$ of an algebraic function $a_0f$ of not too large degree. This function can be recovered from its values at $Q$ if $N$ is large enough. More concretely, we let $E_0$ be some auxiliary divisor on $Y$ and set $$E_1=E+E_0.$$ We call $\mathcal P$ the subspace of $\mathcal L(E_0)$ consisting of all $a_0$ such that there exists $a_1$ in $\mathcal L(E_1)$ with $a_0r=a_1\bmod Q$. Non-zero elements in $\mathcal P$ are denominators for $r$ in the sense of Section [5](#sec:pade){reference-type="ref" reference="sec:pade"}. We just saw that every function in $\mathcal L(E_0)$ vanishing at every point in the support of $\epsilon$ belongs to $\mathcal P$. Conversely, if $a_0$ is in $\mathcal P$ then $a_0r$ belongs to $\mathcal L(E_1)$ modulo $Q$. But $a_0c$ belongs to $\mathcal L(E_1)$ modulo $Q$ also because $a_0$ is in $\mathcal L(E_0)$ modulo $Q$ and $c$ is in $\mathcal L(E)$ modulo $Q$. So $a_0(r-c)=a_0\epsilon$ belongs to $\mathcal L(E_1)$ modulo $Q$. There is a function in $\mathcal L(E_1)$ that is $a_0\epsilon$ modulo $Q$. This function has $N-\deg ({\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)})$ zeros and degree $\leqslant \deg(E_1)=\deg (E)+\deg(E_0)$. If we assume that $$\label{eq:basic} \deg ({\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)})\leqslant N-1 -\deg (E) -\deg (E_0)$$ then the latter function must be zero. So $a_0$ vanishes at ${\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)}$. Assuming Equation ([\[eq:basic\]](#eq:basic){reference-type="ref" reference="eq:basic"}) we thus have $\mathcal P= \mathcal L(E_0-{\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)})$. Assuming further that $$\label{eq:basic2} \deg ({\mathop{{\mathrm{Supp}}}\nolimits (\epsilon)})\leqslant \deg(E_0)-g_Y$$ this space is non-zero. Computing it is a matter of linear algebra and requires a constant times $N^3$ operations in ${\mathbf K}$. Given any non-zero element $a_0$ in $\mathcal P$ we denote $A_0$ the divisor consisting of all $Q_i$ where $a_0$ vanishes. The degree of $A_0$ is bounded by $\deg E_0$. The error $\epsilon$ is an element in ${\mathbf A}$ with support contained in $A_0$ and such that $r-\epsilon$ belongs to $C_\mathcal L$. Finding $\epsilon$ is a linear problem in $\leqslant \deg E_0$ unknowns and $N-\deg (E)+g_Y-1$ equations. The solution is unique because the difference of two solutions is in $C_\mathcal L$ and has at least $N-\deg (E_0)$ zeros. And this is strictly greater than $\deg (E)$ by Equation ([\[eq:basic\]](#eq:basic){reference-type="ref" reference="eq:basic"}). Combining Equations ([\[eq:basic\]](#eq:basic){reference-type="ref" reference="eq:basic"}) and ([\[eq:basic2\]](#eq:basic2){reference-type="ref" reference="eq:basic2"}) we see that the basic decoding algorithm corrects up to ${d_{{\mathrm{basic}}}}$ errors where $$\label{eq:dbasic} {d_{{\mathrm{basic}}}}= \frac{N-\deg (E)-1-g_Y}{2}.$$ Assume now that the group of ${\mathbf K}$-automorphisms of $Y$ contains a finite commutative subgroup $G$ of order $\mathfrak o$ acting freely on $Y$. Let $\tau : Y\rightarrow X$ be the quotient by $G$ map.
Assume that $\mathfrak o$ divides $N$ and let $n=N/\mathfrak o$. Assume that $Q$ is the pullback by $\tau$ of a divisor $$P=P_1+\dots+P_n$$ on $X$. Assume that $E$ is the pullback of some divisor $D$ on $X$. According to Proposition [Proposition 8](#prop:finddenom){reference-type="ref" reference="prop:finddenom"} we can find a denominator $a_0$ at the expense of ${\mathcal Q}. (\mathfrak o. n . \log (\mathfrak o. n))^2$ operations in ${\mathbf K}$ and ${\mathcal Q}.\mathfrak o. n^3\log (\mathfrak o. n)$ operations in ${\mathbf K}[G]$. According to Proposition [Proposition 13](#prop:fifi){reference-type="ref" reference="prop:fifi"}, each operation in ${\mathbf K}[G]$ requires less than $${\mathcal Q}. d^2.\mathfrak o. \log \mathfrak o(\log p+\log \mathfrak o)$$ elementary operations. The total cost of finding a denominator is thus bounded by a constant times $$N^2.n.d^2.\log^3 (\mathfrak o.n.p)$$ elementary operations. Assuming Condition ([\[cond:1\]](#cond:1){reference-type="ref" reference="cond:1"}) and $$\label{cond:2} \log \mathfrak o\text{\,\,\, is bigger than a positive constant times \,\,\,}n-\log n$$ we obtain a complexity of a constant times $$N^2(\log N)^4 d^2$$ elementary operations where $d$ is the degree of ${\mathbf K}$ over the prime field ${\mathbf Z}/p{\mathbf Z}$ and $N$ is the length of the code. Once a denominator has been obtained, the error can be found at the same cost.

# Good geometric codes with quasi-linear encoding {#sec:gc}

In this section we specialize the constructions presented in Sections [8.2](#sec:geocod){reference-type="ref" reference="sec:geocod"} and [8.3](#sec:deco){reference-type="ref" reference="sec:deco"} using curves with many points and their class fields. We quickly review in Section [9.1](#sec:artin){reference-type="ref" reference="sec:artin"} some standard useful results and observations which we apply in Section [9.2](#sec:asym){reference-type="ref" reference="sec:asym"} to the construction of families of good geometric codes having quasi-linear encoding and a quasi-quadratic decoder. Recall that a family of codes over a fixed alphabet is said to be good when the length tends to infinity while both the rate and the relative minimum distance have a strictly positive liminf.

## Controlling the class group and the Artin map {#sec:artin}

We keep the notation in Section [7.1](#sec:cftj){reference-type="ref" reference="sec:cftj"}. In particular $P_1$ is a ${\mathbf K}$-rational point on $X$ and $$j_X : X\rightarrow J_X$$ is the Jacobi map with origin $P_1$. For the applications we have in mind we need some control on the ${\mathbf K}$-rational points on $X$, on the group $\mathop{\rm{Pic}}\nolimits ^0 (X)$ and most importantly on the image of $X({\mathbf K})$ in $\mathop{\rm{Pic}}\nolimits ^0 (X)$ by the Jacobi map. A typical advantageous situation would be:

1. $X$ has enough ${\mathbf K}$-rational points, that is, at least a fixed positive constant times its genus $g_X$,

2. a fixed positive proportion of these points are mapped by $j_X$ into a subgroup $H$,

3. $H$ is not too large, i.e., the quotient $\log |H|/\log |\mathop{\rm{Pic}}\nolimits ^0(X)|$ is smaller than a fixed constant smaller than $1$.

A range of geometric techniques relevant to that problem is presented in Serre's course [@serreratio] with the related motivation of constructing maximal curves. One says that (a family of) curves over a fixed finite field of cardinality $q$ have many points when the ratio of the number of rational points by the genus tends to $\sqrt q -1$.
Modular curves $X_0(N)$ have many points over finite fields with $p^2$ elements, corresponding to supersingular moduli, as was noticed by Ihara [@ihara] and by Tzfasman, Vladut, and Zink [@tvz]. These authors also find families of Shimura curves having many points over fields with cardinality a square. Garcia and Stichtenoth [@garsti] construct for every square $q$ an infinite tower of algebraic curves over ${\mathbf F}_q$ such that the quotient of the number of ${\mathbf F}_q$-points by the genus converges to $\sqrt q -1$, and the quotient of the genera of two consecutive curves converges to $q$. As for conditions (2) and (3) above, it is noted in [@serreratio 5.12.4] that the images by $j_X$ of $P_2$, ..., $P_{n}$ generate a subgroup $H$ with at most $n-1$ invariant factors. If the class group $J_X({\mathbf K})$ has $I\geqslant n-1$ invariant factors then the size of the quotient $G$ is bigger than or equal to the product of the $I-(n-1)$ smallest invariant factors of $J_X({\mathbf K})$. Another favourable situation exploited in [@queb; @nixi; @vdg; @guxi] is when ${\mathbf K}$ has a strict subfield ${\mathbf k}$ and $X$ is defined over ${\mathbf k}$ and $P_1$ is ${\mathbf k}$-rational. Then the Jacobi map sends the points in $X({\mathbf k})$ into the subgroup $H=J_X({\mathbf k})$ of $J_X({\mathbf K})$. We will use this remark in the next section.

## A construction {#sec:asym}

Let ${\mathbf k}$ be a finite field with characteristic $p$. Let $q$ be the cardinality of ${\mathbf k}$. We assume that $q$ is a square. We consider a family of curves $(X_k)_{k \geqslant 1}$ over ${\mathbf k}$ having many points over ${\mathbf k}$. For example, we may take $X_k$ to be the $k$-th curve in the Garcia-Stichtenoth tower associated with $q$. We denote $g_{X}$ the genus of $X_k$. We omit the index $k$ in the sequel because there is no risk of confusion. We denote $n$ the number of ${\mathbf k}$-rational points on $X$. We denote these points $P_1$, ..., $P_n$ and let $P$ be the effective divisor sum of all these points. We let ${\mathbf K}$ be a non-trivial extension of ${\mathbf k}$. We will assume that the degree of ${\mathbf K}$ over ${\mathbf k}$ is $2$ because higher values seem to bring nothing but disadvantages. We set $$H=J_X({\mathbf k}) \text{\,\,\, and \,\,\, } G=J_X({\mathbf K})/H.$$ We let $\mathfrak o$ be the order of $G$. We note that $$\mathfrak o\geqslant \left( \sqrt{q}-1 \right)^{2g_X}$$ grows exponentially in $g_X$ provided $q\geqslant 9$. We find ourselves in the situation of Section [7.1](#sec:cftj){reference-type="ref" reference="sec:cftj"}. We call ${Y_{{\mathrm{max}}}}$ the maximal unramified cover of $X$ over ${\mathbf K}$ which is totally decomposed over ${\mathbf K}$ above $P_1$. We call $Y$ the quotient of ${Y_{{\mathrm{max}}}}$ by $H$. The fibers of $$\tau : Y\rightarrow X$$ above the points $P_1$, ..., $P_n$ all split over ${\mathbf K}$. We call $Q$ the pullback of $P$ by $\tau$. This is a divisor on $Y$ of degree $$N=\mathfrak o. n.$$ We choose a real number ${\varrho}$ such that $$\label{eq:encad2} 0 < {\varrho}< \frac{\sqrt q}{2} -2.$$ Our goal is to correct up to ${\varrho}. \mathfrak o. g_X$ errors. Let $D$ be a divisor on $X$ that is disjoint from $P$ and such that $$\deg (D)= \lfloor (\sqrt q - 2-2{\varrho})g_X \rceil,$$ the closest integer to $(\sqrt q - 2-2{\varrho})g_X$. Let $E$ be the pullback of $D$ by $\tau$.
We deduce from Equation ([\[eq:encad2\]](#eq:encad2){reference-type="ref" reference="eq:encad2"}) that condition ([\[eq:encad\]](#eq:encad){reference-type="ref" reference="eq:encad"}) is met at least asymptotically. From $X$, $Y$, $E$, and $Q$ the construction in Section [8.2](#sec:geocod){reference-type="ref" reference="sec:geocod"} produces a code $C_\mathcal L$ over the field ${\mathbf K}$ with $q^2$ elements, having length $$N= \mathfrak o. n \simeq (\sqrt q -1) . \mathfrak o. g_X$$ and dimension $$K = \mathfrak o. (\deg (D)-g_X+1)\simeq (\sqrt q -3- 2{\varrho}) . \mathfrak o. g_X.$$ The code $C_\mathcal L$ can be encoded and parity-checked in quasi-linear time in its length $N$. One can decode with the same complexity when there are no errors. Using the basic decoding algorithm as in Section [8.3](#sec:deco){reference-type="ref" reference="sec:deco"} one can decode in the presence of errors in quasi-quadratic time up to the distance $${d_{{\mathrm{basic}}}}= \frac{N-\deg (E)-1-g_Y}{2} \simeq {\varrho}. \mathfrak o. g_X$$ defined by Equation ([\[eq:dbasic\]](#eq:dbasic){reference-type="ref" reference="eq:dbasic"}). We denote ${\delta_{{\mathrm{basic}}}}$ the relative distance ${d_{{\mathrm{basic}}}}/N$.

**Proposition 14**. *Let $p$ be a prime integer and let $q$ be a power of $p$. Assume that $q$ is a square and $$\label{eq:25}q\geqslant 25.$$ Let ${\varrho}$ be a real number such that $$\label{eq:daleth} 0 < {\varrho}< \frac{\sqrt q}{2} -2.$$ The construction above produces a family of error correcting codes over the field with $q^2$ elements having length $N$ tending to infinity and such that*

1. *the codes can be encoded in quasi-linear time in their length,*

2. *the rate $R$ satisfies $$\label{eq:rate} \lim R = \frac{\sqrt q -3-2{\varrho}}{\sqrt q -1}$$*

3. *the codes can be decoded in quasi-quadratic time in $N$ up to the relative distance ${\delta_{{\mathrm{basic}}}}$ and $$\lim {\delta_{{\mathrm{basic}}}}= \frac{{\varrho}}{\sqrt q-1}.$$*

We may want to use the general purpose algorithm of Beelen, Rosenkilde, Solomatov [@beelen] to decode up to half the Goppa designed minimum distance. Inequalities ([\[eq:25\]](#eq:25){reference-type="ref" reference="eq:25"}) and ([\[eq:daleth\]](#eq:daleth){reference-type="ref" reference="eq:daleth"}) are then replaced by $$\label{eq:16}q\geqslant 16 \text{\,\,\,\,\,\, and \,\,\,\,\,\,} 0 < {\varrho}< \frac{\sqrt q -3}{2},$$ and the limit of the rate is now $$\lim R = \frac{\sqrt q -2-2{\varrho}}{\sqrt q -1}.$$ However the complexity of decoding is then of order $\mu^{\omega-1}(N+g_Y)$ where $N$ is the length of the code, $\mu$ is the gonality of $Y$, and $\omega$ is the exponent in the complexity of matrix multiplication. Curves with many points have large gonality. In particular $\mu\geqslant N/(q^2+1)$ in our situation, so that for fixed $q$, the complexity of this decoder is of order greater than $N^{\omega}$. It is known [@laser] that $2\leqslant \omega < 2.37286$ but it is not guaranteed that $\omega =2$. Power decoding [@schsidbos] seems attractive in our situation because of its purely linear nature. However the rigorous analysis of its performance is delicate in general [@prb] and particularly in our situation because we fix the base field, let the genus tend to infinity and use a rather rigid construction.
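For orientation, here is a small numerical sketch (an added illustration, not part of the construction) that simply evaluates the limit formulas of Proposition 14; the choice $q=25$ and ${\varrho}=0.2$ is arbitrary.

```python
# Hedged sketch: evaluate the limits of Proposition 14, namely
# lim R = (sqrt(q)-3-2*rho)/(sqrt(q)-1) and lim delta_basic = rho/(sqrt(q)-1).
# The parameters q = 25, rho = 0.2 are illustrative choices, not data from the paper.
from math import sqrt

def limits(q, rho):
    # Proposition 14 assumes q is a square, q >= 25 and 0 < rho < sqrt(q)/2 - 2.
    assert q >= 25 and 0 < rho < sqrt(q) / 2 - 2
    rate = (sqrt(q) - 3 - 2 * rho) / (sqrt(q) - 1)
    delta = rho / (sqrt(q) - 1)
    return rate, delta

if __name__ == "__main__":
    R, delta = limits(25, 0.2)
    # codes live over the field with q^2 = 625 elements
    print(f"lim R = {R:.3f}, lim delta_basic = {delta:.3f}")
```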
--- abstract: | Let $W$ be a quasi-homogeneous polynomial of general type and $\langle J\rangle$ be the cyclic symmetry group of $W$ generated by the exponential grading element $J$. We study the quantum spectrum and asymptotic behavior in Fan-Jarvis-Ruan-Witten theory of the Landau-Ginzburg pair $(W, \langle J\rangle)$. Inspired by Galkin-Golyshev-Iritani's Gamma conjectures for quantum cohomology of Fano manifolds, we propose Gamma conjectures for Fan-Jarvis-Ruan-Witten theory of general type. We prove the quantum spectrum conjecture and the Gamma conjectures for Fermat homogeneous polynomials and the mirror simple singularities. The Gamma structures in Fan-Jarvis-Ruan-Witten theory also provide a bridge from the category of matrix factorizations of the Landau-Ginzburg pair (the algebraic aspect) to its analytic aspect. We will explain the relationship among the Gamma structures, Orlov's semiorthogonal decompositions, and the Stokes phenomenon. author: - Yefeng Shen - Ming Zhang title: Quantum spectrum and Gamma structures for quasi-homogeneous polynomials of general type --- # Introduction ## FJRW theory for admissible Landau-Ginzburg pairs ### A Landau-Ginzburg pair Let $W: {\mathbb C}^N\to {\mathbb C}$ be a quasi-homogeneous polynomial of degree $d\in\mathbb{Z}_{\geq2}$. The weight of the variables, denoted by ${\rm wt}(x_i)=w_i$, are positive integers such that for all $\lambda\in {\mathbb C}$, $$\lambda^d W(x_1, \ldots, x_N)=W(\lambda^{w_1}x_1, \ldots, \lambda^{w_N}x_N).$$ Throughout this paper, we assume that the polynomial $W$ is *nondegenerate* and $$\label{gcd-condition} \gcd(w_1, \ldots, w_N)=1.$$ The nondegenerate condition means that $W$ has an isolated critical point only at the origin and the choices of ${w_i\over d}$ are unique. We sometimes abuse the language and call a nondegenerate quasi-homogeneous polynomial $W$ the *singularity* $W$. The singularity $W$ is invariant under the action of the *maximal diagonal symmetry group of $W$* defined by $$\label{maximal-group} G_W:=\{g\in ({\mathbb C}^*)^N\mid W(g\cdot (x_1, \ldots, x_N))=W(x_1, \ldots, x_N)\}.$$ Here the action $g\cdot (x_1, \ldots, x_N)$ is given by the coordinate-wise multiplication. In particular, the group $G_W$ contains the *exponential grading element* $$J=\left(e^{2\pi\sqrt{-1}w_1/d}, \ldots, e^{2\pi\sqrt{-1}w_N/d}\right).$$ By condition [\[gcd-condition\]](#gcd-condition){reference-type="eqref" reference="gcd-condition"}, the subgroup $\langle J\rangle\subset G_W$ is a cyclic group of order $d$. In general, we may consider a subgroup $G$ of $G_W$ that contains the element $J$. Such a pair $(W, G)$ is called an *admissible* Landau-Ginzburg (LG) pair. **Definition 1**. For an admissible LG pair $(W, \langle J\rangle)$, we define its *weight system* by the $(N+1)$-tuple $$\label{weight-system} (d; w_1, \ldots, w_N)\in ({\mathbb Z}_{>0})^{N+1}.$$ We define the *index* of the LG pair $(W, \langle J\rangle)$ by the integer $$\label{gorenstein-parameter} \nu:=d-\sum_{i=1}^Nw_i\in \mathbb{Z}$$ We say the pair $(W, \langle J\rangle)$ is *Fano/Calabi-Yau/general type* if $\nu$ is negative/zero/positive. *Remark 2*. Usually, the integer $-\nu$ is called the Gorenstein parameter of the LG model [@Orl; @BFK2]. We choose to use $\nu$ in this paper simply because we focus on the LG pair of general type and a positive parameter is convenient to deal with in many situations. 
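As a quick illustration of Definition 1 (a hedged sketch added here, not a computation from the paper; the sample weight systems are standard examples), the index and the exponential grading element can be read off directly from the weight system:

```python
# Hedged sketch: index nu and exponential grading element J from a weight system
# (d; w_1, ..., w_N). The examples below (the Fermat quintic x1^5+...+x5^5 and
# W = x^3 + y^5) are illustrative choices only.
import cmath
from math import gcd
from functools import reduce

def index_and_J(d, weights):
    assert reduce(gcd, weights) == 1            # the condition gcd(w_1,...,w_N) = 1
    nu = d - sum(weights)                        # index of the pair (W, <J>)
    J = tuple(cmath.exp(2j * cmath.pi * w / d) for w in weights)
    kind = "Fano" if nu < 0 else ("Calabi-Yau" if nu == 0 else "general type")
    return nu, J, kind

print(index_and_J(5, [1, 1, 1, 1, 1]))   # Fermat quintic: nu = 0, Calabi-Yau
print(index_and_J(15, [5, 3]))           # x^3 + y^5: nu = 7, general type
```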
### Fan-Jarvis-Ruan-Witten theory Based on a proposal of Witten [@Wit], for any admissible LG pair $(W, G)$, Fan, Jarvis, and Ruan constructed an intersection theory on the moduli space of $W$-spin structures of $G$-orbifold curves in a series of works [@FJR-book; @FJR]. This theory is called Fan-Jarvis-Ruan-Witten (FJRW) theory nowadays. It is analogous to Gromov-Witten (GW) theory and generalizes the theory of $r$-spin curves, where $W=x^r$ and $G=\langle J\rangle$. Briefly speaking, the main ingredient of FJRW theory consists of a state space and a *cohomological field theory* (CohFT). The state space, denoted by ${\mathcal H }_{W,G}$, is the $G$-invariant subspace of the middle-dimensional relative cohomology for $W$ with a nondegenerate pairing $\langle \cdot, \cdot\rangle$; see [@FJR Section 3.2]. The CohFT is a set of multilinear maps $\{\Lambda_{g,k}^{W,G}\}$ [\[cohft-notation\]](#cohft-notation){reference-type="eqref" reference="cohft-notation"} mapping from ${\mathcal H }_{W,G}^{\otimes k}$ to the cohomology of the moduli spaces of stable curves of genus $g$ with $k$ markings. These maps satisfy a collection of axioms listed in [@FJR Theorem 4.2.2]. The CohFT is used to define the genus-$g$, $k$-pointed *FJRW invariants* $$\Big\langle\gamma_1\psi_1^{\ell_1},\ldots,\gamma_k\psi_k^{\ell_k}\Big\rangle_{g,k}^{W,G};$$ see formula [\[fjrw-inv\]](#fjrw-inv){reference-type="eqref" reference="fjrw-inv"}. Here $\{\gamma_i\in {\mathcal H }_{W, G}\}$ are called *insertions*, $\psi_i$'s are the psi classes on the moduli space of stable curves, and $\ell_i\in{\mathbb Z}_{\geq 0}$. We will review some properties of the FJRW theory in Section [2.1](#sec:fjrw){reference-type="ref" reference="sec:fjrw"}. *Remark 3*. There are two other versions of CohFTs for the LG pair $(W, G)$. One version was constructed by Polishchuk and Vaintrob using matrix factorizations [@PV-cohft]. Another version using cosection localized Gysin maps was initiated by Chang, Li, and Li in [@CLL] and completed by Kiem and Li [@KL]. There is a folklore conjecture that all these three CohFTs are equivalent. Such an equivalence is only partially verified in [@CLL]. ### A quantum product We only consider genus-zero invariants in this paper. There is an associative product $\star$ on ${\mathcal H }_{W, G}$, determined by the nondegenerate pairing and the genus-zero three-pointed FJRW invariants via $$\langle\gamma_1\star\gamma_2, \gamma_3\rangle=\Big\langle\gamma_1, \gamma_2, \gamma_3\Big\rangle_{0,3}^{W,G}.$$ The genus-zero $k$-pointed FJRW invariants with $k\geq 3$ produce a *quantum product* $\star_\gamma$ (see [\[quantum-product\]](#quantum-product){reference-type="eqref" reference="quantum-product"}), which can be viewed as a deformation of the product $\star$ along the element $\gamma\in{\mathcal H }_{W, G}$. ## Quantum spectrum for FJRW theory of general type In this paper, we will focus on the enumerative geometry of the LG pair $(W, \langle J\rangle)$. ### The element $\tau(t)$ Let $(d; w_1, \ldots, w_N)$ be the weight system of an admissible LG pair $(W, \langle J\rangle)$. 
We define its set of *narrow indices* by $$\label{narrow-index} {\bf Nar}=\left\{m\in \mathbb{Z}_{>0} \mid 0< m<d, \text{ and } d\nmid w_j\cdot m, \ \forall \ 1\leq j\leq N\right\}.$$ As a vector space, ${\mathcal H }_{W, \langle J\rangle}$ admits a *narrow-broad decomposition* ${\mathcal H }_{W, \langle J\rangle}={\mathcal H }_{\rm nar}\oplus {\mathcal H }_{\rm bro}$ where $${\mathcal H }_{\rm nar}:=\bigoplus_{m\in {\bf Nar}}{\mathcal H }_{J^m}$$ is the narrow subspace and ${\mathcal H }_{\rm bro}$ is the broad subspace. For each $m\in {\bf Nar}$, the subspace ${\mathcal H }_{J^m}$ is one-dimensional and spanned by a *standard generator* ${\bf e}_m$ defined in [\[standard-generator\]](#standard-generator){reference-type="eqref" reference="standard-generator"}. Let $\{r\}$ be the fractional part of the number $r\in \mathbb{R}$ and $\Gamma(x)$ be the *gamma function*. Following [@Aco; @CIR; @Gue; @RR], we introduce a *small $I$-function* of the FJRW theory for the LG pair $(W, \langle J\rangle)$ by $$\label{small-i-function} I_{\rm FJRW}^{\rm sm}(t,z) =z\sum_{m\in {\bf Nar}} \sum_{\ell=0}^{\infty} \frac {t^{d\ell+m}} {z^{d\ell+m-1}\Gamma(d\ell+m)} \prod_{j=1}^{N} {z^{\lfloor {w_j\over d}(d\ell+m)\rfloor} \Gamma({w_j\over d}(d\ell+m))\over \Gamma\left(\left\{{w_j\over d}\cdot m\right\}\right)}{\bf e}_m.$$ By definition, we have $I_{\rm FJRW}^{\rm sm}(t,z)\in {\mathcal H }_{\rm nar}[\![t]\!][z, z^{-1}]\!].$ Let $t\tau(t)$ be the coefficient of $z^0$ of $I_{\rm FJRW}^{\rm sm}(t,z)$, namely, $$\label{def-tau} \tau(t)={1\over t}\left[I_{\rm FJRW}^{\rm sm}(t,z)\right]_{z^0}\in {\mathcal H }_{\rm nar}[\![t]\!].$$ ### Quantum spectrum conjecture for $(W, \langle J\rangle)$ of general type Now we focus on LG pair $(W, \langle J\rangle)$ of general type. We have $\tau(t)\in {\mathcal H }_{\rm nar}[t].$ Let $\tau'(t)={d\over dt}\tau(t).$ We set $\tau:=\tau(1)$ and $\tau':=\tau'(1)$. We will consider the quantum multiplication $$\label{quantum-J2} {\nu\over d}\tau'\star_{\tau}={\nu\over d}t\tau'(t)\star_{\tau(t)}\Big\vert_{t=1}.$$ We denote the set of eigenvalues of the quantum multiplication [\[quantum-J2\]](#quantum-J2){reference-type="eqref" reference="quantum-J2"} by $$\label{def-evalue} {\mathfrak E}:=\left\{\lambda\in {\mathbb C}\mid \lambda \text{ is an eigenvalue of } {\nu\over d}\tau'\star_{\tau}\in {\rm End}({\mathcal H }_{W, \langle J\rangle})\right\}.$$ Inspired by the *Conjecture $\mathcal{O}$* for *Fano* manifolds proposed by Galkin-Golyshev-Iritani in [@GGI], we propose a quantum spectrum conjecture for FJRW theory *of general type*. **Conjecture 4** (*Quantum spectrum conjecture* for an admissible LG pair $(W, \langle J\rangle)$ of general type). Let $W$ be a nondegenerate quasihomogeneous polynomial of general type, with weight system $(d; w_1, \ldots, w_N)$. Then the set ${\mathfrak E}$ in [\[def-evalue\]](#def-evalue){reference-type="eqref" reference="def-evalue"} contains a number $$\label{value-principal-spectrum} T=\nu\left(d^{-d}\prod_{j=1}^Nw_j^{w_j}\right)^{1\over \nu}\in \mathbb{R}_{>0}$$ that satisfies the following properties: 1. For any $\lambda\in {\mathfrak E}$, the inequality $|\lambda|\leq T$ holds. 2. The following two sets are the same: $$\{\lambda\in {\mathfrak E}\mid T= |\lambda|\}=\{e^{2\pi\sqrt{-1}j/\nu} \cdot T\mid j=0, \ldots, \nu-1\}.$$ 3. The multiplicity of each $\lambda\in {\mathfrak E}$ with $|\lambda|=T$ is one. *Remark 5*. 1. 
In Section [2.3](#sec-variant){reference-type="ref" reference="sec-variant"}, a similar conjecture (Conjecture [Conjecture 36](#quantum-spectrum-conj-invariant){reference-type="ref" reference="quantum-spectrum-conj-invariant"}) is proposed to study the quantum spectrum on a $G_W$-invariant subspace. 2. For any admissible LG pairs $(W, G)$, we can study the quantum spectrum of [\[quantum-J2\]](#quantum-J2){reference-type="eqref" reference="quantum-J2"}. The multiplicity-one property in Conjecture  [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} could fail if $G\neq \langle J\rangle$. 3. The study of quantum spectrum plays an important role in Katzarkov-Kontsevich-Pantev-Yu's proposal to extract birational invariants from quantum cohomology [@Kon19; @Kon20; @Kon21] and Iritani's decomposition theorem of the quantum cohomology $D$-module of blowups [@Iri23]. The quantum spectrum conjectures in this paper can be seen as an analogy in FJRW theory. This can be generalized to gauge linear sigma models. ### Main results on quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} {#main-results-on-quantum-spectrum-conjecture-conjecture-c} We will verify Conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} for the following cases. In Section [7.1](#sec-mirror-simple){reference-type="ref" reference="sec-mirror-simple"}, direct calculations of the quantum product show **Theorem 6**. *Quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} holds for the following singularities: $$\label{mirror-ade-singularity} \begin{dcases} A_n: & W=x^{n+1}, n\geq 1;\\ D_n^{T}: & W=x^{n-1}y+y^2, n\geq 4;\\ E_6: & W= x^3+y^4;\\ E_7: & W=x^3y+y^3;\\ E_8: & W=x^3+y^5. \end{dcases}$$* These polynomials are mirror to simple singularities (or $ADE$ singularities), whose central charge satisfies $\widehat{c}_W<1$. We call them *mirror simple singularities* or *mirror ADE-singularities*. Note that an $A$-type singularity or an $E$-type singularity is mirror to itself. In Section [7.3](#sec-spectrum-fermat){reference-type="ref" reference="sec-spectrum-fermat"}, we will use mirror symmetry to prove **Theorem 7**. *Quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} holds for $W=\sum\limits_{i=1}^{N}x_i^d$ if $d-N>1$.* ## Asymptotic expansion in FJRW theory Next we study certain asymptotic expansion in the FJRW theory of $(W, \langle J\rangle)$ of general type, inspired by the work [@GGI]. The key observation is that the coefficient functions of the small $I$-function $I_{\rm FJRW}^{\rm sm}(t,z)$ in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} can be transformed into certain *generalized hypergeometric functions*; see Proposition [Proposition 68](#i-function-via-ghf){reference-type="ref" reference="i-function-via-ghf"}. Then many results on asymptotic expansions of generalized hypergeometric functions studied in the literature since Barnes, Meijer, and many others [@Bar; @Mei; @Luk; @Fie; @dlmf] can be applied here. ### A Dubrovin connection in FJRW theory Let $\{\phi_i\}$ be a homogeneous basis of ${\mathcal H }_{W, \langle J\rangle}$ and $\{\phi^i\}$ be a dual basis with respect to the nondegenerate pairing $\langle\cdot, \cdot\rangle$. Let $t_i$ be the coordinate of $\phi_i$ and let $\mathbf{t}:=\sum_i t_i\phi_i$. 
The FJRW invariants produce a Dubrovin-Frobenius manifold structure on ${\mathcal H }_{W, \langle J\rangle}$, with a *flat identity* $\mathbf{1}:={\bf e}_1\in {\mathcal H }_{J}$, a *Hodge grading operator* $\mathrm{Gr}\in {\rm End}({\mathcal H }_{W, \langle J\rangle})$ defined in [\[eq:modified-hodge-operator\]](#eq:modified-hodge-operator){reference-type="eqref" reference="eq:modified-hodge-operator"}, and an *Euler vector field* $$\label{euler-vector} {\mathscr E}(\mathbf{t})=\sum_{i}\Big(1-\deg_{\mathbb C}(\phi_i)\Big)\, t_i{\partial \over \partial t_i}.$$ Here the complex degree $\deg_{\mathbb C}\phi_i\in\mathbb{Q}$ is defined in [\[complex-degree\]](#complex-degree){reference-type="eqref" reference="complex-degree"}. We can identify the vector space ${\mathcal H }_{W, \langle J\rangle}$ with its tangent space by sending $\phi_i$ to ${\partial\over \partial t_i}$. The quantum product induces a Dubrovin-type connection (or a quantum connection) $\nabla$ on ${\mathcal H }_{W, \langle J\rangle}\times\mathbb{C}^*$, given by $$\label{quantum-connection} \begin{dcases} \nabla_{\partial\over \partial t_i}&=\frac{\partial }{\partial t_i}+\frac{1}{z}\phi_i\star_{\mathbf{t}},\\ \nabla_{z{\partial\over \partial z}}&=z{\partial\over \partial z}-{1\over z} {\mathscr E}(\mathbf{t})\star_{\mathbf{t}}+\mathrm{Gr}. \end{dcases}$$ By Proposition [\[fundamental-solution-infty\]](#fundamental-solution-infty){reference-type="ref" reference="fundamental-solution-infty"}, the quantum connection [\[quantum-connection\]](#quantum-connection){reference-type="eqref" reference="quantum-connection"} admits flat sections $\{S(\mathbf{t},z)z^{-\mathrm{Gr}}\alpha\}$, where $S(\mathbf{t},z)\in{\rm End}({\mathcal H }_{W, \langle J\rangle})\otimes {\mathbb C}[\![\mathbf{t}]\!][\![z^{-1}]\!]$ is an operator defined by $$S(\mathbf{t},z)\alpha=\alpha +\sum_{i=1}^{r}\sum_{n\geq1}\sum_{k\geq 0}\frac{\phi^i}{n!(-z)^{k+1}} \Big\langle \alpha\psi_1^k, \mathbf{t},\dots,\mathbf{t}, \phi_i \Big\rangle_{0,n+2}, \quad \forall\alpha\in {\mathcal H }_{W, \langle J\rangle}.$$ ### Weak asymptotic classes We will focus on the asymptotic behavior of the flat sections $\{S(\mathbf{t},z)z^{-\mathrm{Gr}}\alpha\}$. Let us consider the quantum connection by restricting to $\mathbf{t}=\tau$. According to Proposition [Proposition 21](#prop-euler-tau){reference-type="ref" reference="prop-euler-tau"}, the restriction of $\nabla_{z{\partial\over \partial z}}$ to $\tau$ is given by $$\label{intro-meromorphic} \nabla_{z{\partial\over \partial z}}=z{\partial\over \partial z}-{1\over z}\cdot {\nu\over d}\tau'\star_{\tau} +\mathrm{Gr}.$$ When $W$ is of general type, this is a meromorphic connection that has an *irregular* singularity at $z=0$ and a *regular* singularity at $z=\infty$. In the study of asymptotic analysis of irregular meromorphic connections, it is natural to consider the *smallest asymptotic expansion* along a certain ray when $z$ approaches to the irregular singularity. **Definition 8**. We say $\alpha\in {\mathcal H }_{\rm nar}\subset {\mathcal H }_{W, \langle J\rangle}$ is a *weak asymptotic class* with respect to $\lambda\in {\mathbb C}^*$ if there exists $m\in \mathbb{R}$, such that when $\arg(z)=\arg(\lambda)\in [0, 2\pi)$, we have $$\left|e^{\lambda\over z}\cdot \langle S(\tau, z)z^{-\mathrm{Gr}}\alpha, \mathbf{1}\rangle\right|=O(|z|^{m})\quad\mathrm{as}\ |z|\to 0.$$ Such asymptotic classes will play the central role in this paper. ### A mirror conjecture {#sec-intro-wall} The operator $S(\mathbf{t}, z)$ is invertible. 
We define the *$J$-function* $$J(\mathbf{t}, z)=z S(\mathbf{t}, z)^{-1}(\mathbf{1}).$$ Let $I_{\rm FJRW}^{\rm sm}(t, z)$ be the small $I$-function defined in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} and $\tau(t)$ be defined in [\[def-tau\]](#def-tau){reference-type="eqref" reference="def-tau"}. These two functions are conjectured to be related by a *Givental-type mirror formula*. In particular, we have **Conjecture 9**. If the singularity $W$ is of general type, then $$\label{I-J-relation} I_{\rm FJRW}^{\rm sm}(t, -z)=t J(\tau(t), -z).$$ A general version of this conjecture, where $W$ is not necessarily of general type, has been proved for many examples including the Fermat polynomials [@Aco; @CIR; @RR; @Gue]. See Proposition [Proposition 65](#mirror-theorem-fermat){reference-type="ref" reference="mirror-theorem-fermat"} for a more explicit description. ### Calculation of weak asymptotic classes via a Barnes' formula When the mirror formula [\[I-J-relation\]](#I-J-relation){reference-type="eqref" reference="I-J-relation"} holds, we can calculate the weak asymptotic classes using the small $I$-function. It is more convenient to consider a modified $I$-function $$\widetilde{I}(t,z)=z^{{\widehat{c}_W\over 2}-1}z^{\mathrm{Gr}} I_{\rm FJRW}^{\rm sm}(t,z).$$ After a change of variables $x=d^{-d}\prod\limits_{j=1}^{N}w_j^{w_j} z^{-\nu}$ and restricting to $t=1$, we will show in Proposition [Proposition 68](#i-function-via-ghf){reference-type="ref" reference="i-function-via-ghf"} that the coefficients of the modified $I$-function satisfies a generalized hypergeometric equation $$\label{ghe-equation-intro} \Big(x{\partial \over \partial x}\prod_{j=1}^{q}(x{\partial \over \partial x}+\rho_j-1)-x\prod_{i=1}^{p}(x{\partial \over \partial x}+\alpha_i)\Big) \widetilde{I}(1,z(x)) =0,$$ where $q+1=|{\bf Nar}|=p+\nu$, and $\rho_j, \alpha_i$'s are determined by the weight system completely. See Section [4.2](#sec-basic-ghe){reference-type="ref" reference="sec-basic-ghe"} for the details. For each $\ell \in {\mathbb Z}$, we consider the cohomology class $$\label{asymptotic-class-formula} {\mathcal A}_\ell:=(2\pi)^N\sum_{m\in {\bf Nar}} \exp\left({-2\pi\sqrt{-1}{\ell\cdot m\over d}}\right)\prod\limits_{j=1}^{N} {1\over \Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)}\cdot {\bf e}_{m}\in {\mathcal H }_{W, \langle J\rangle}.$$ Using an asymptotic expansion formula of generalized hypergeometric functions discovered by Barnes [@Bar] in 1906, we obtain **Proposition 10** (Proposition [Proposition 74](#theorem-asymptotic-classes){reference-type="ref" reference="theorem-asymptotic-classes"}). Let $(W, \langle J\rangle)$ be an admissible LG pair of general type, with a weight system $(d; w_1, \ldots, w_N)$. If the mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, then for each $\ell=0, 1,\ldots, \nu-1$, the class ${\mathcal A}_\ell$ in [\[asymptotic-class-formula\]](#asymptotic-class-formula){reference-type="eqref" reference="asymptotic-class-formula"} is a weak asymptotic class with respect to $Te^{2\pi\sqrt{-1}\ell/\nu}$, where $T$ is given in [\[value-principal-spectrum\]](#value-principal-spectrum){reference-type="eqref" reference="value-principal-spectrum"}. 
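The quantities entering Proposition 10 are completely explicit in the weight system. The following hedged computational sketch (added for illustration; the Fermat weight system $(7;1,1,1)$ is an arbitrary example of general type, not a computation carried out in the paper) lists the narrow indices, the value $T$ from Conjecture 4, and the coordinates of the classes ${\mathcal A}_\ell$ in the basis $\{{\bf e}_m\}$.

```python
# Hedged sketch: narrow indices, the radius T of Conjecture 4, and the
# coordinates of the classes A_ell from the displayed formula above.
# Sample weight system: W = x1^7 + x2^7 + x3^7, i.e. (d; w) = (7; 1, 1, 1).
import cmath
from math import gamma, pi

def narrow_indices(d, weights):
    # m is narrow iff 0 < m < d and d does not divide w_j * m for all j
    return [m for m in range(1, d)
            if all((w * m) % d != 0 for w in weights)]

def radius_T(d, weights):
    nu = d - sum(weights)
    prod = 1.0
    for w in weights:
        prod *= w ** w
    return nu * (prod / d ** d) ** (1.0 / nu)    # T = nu * (d^{-d} prod w^w)^{1/nu}

def A_class(ell, d, weights):
    N = len(weights)
    coords = {}
    for m in narrow_indices(d, weights):
        c = (2 * pi) ** N * cmath.exp(-2j * pi * ell * m / d)
        for w in weights:
            c /= gamma((w * m / d) % 1.0)        # 1 / Gamma({w_j m / d})
        coords[m] = c                            # coefficient of e_m
    return coords

d, weights = 7, [1, 1, 1]
print(narrow_indices(d, weights))   # expected: [1, 2, 3, 4, 5, 6]
print(radius_T(d, weights))         # nu = 4, so T = 4 * 7^(-7/4)
print(A_class(0, d, weights)[1])    # coordinate of A_0 on e_1
```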
### Strong asymptotic classes The result in Proposition [Proposition 10](#wall-to-asymptotic){reference-type="ref" reference="wall-to-asymptotic"} suggests that the asymptotic behavior and the conjectured quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ in Conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} are closely related. Notice that in Definition [Definition 8](#def-weak-intro){reference-type="ref" reference="def-weak-intro"} of weak asymptotic classes, we only use the asymptotic expansion of $\langle S(\tau,z)z^{-\mathrm{Gr}}\alpha, \mathbf{1}\rangle$. In general, we follow [@GGI] to consider asymptotic classes (which we call *strong asymptotic classes* in this paper) using the asymptotic behavior of the flat section $S(\tau,z)z^{-\mathrm{Gr}}\alpha$. See Definition [Definition 46](#def-asymptotic-class){reference-type="ref" reference="def-asymptotic-class"} for the details. The two types of asymptotic classes are expected to be the same up to rescaling. This is true if both quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} and mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} hold, c.f. Proposition [Proposition 80](#weak=strong){reference-type="ref" reference="weak=strong"}. *Remark 11*. Inspired from the work [@Aco; @AS], we expect that the asymptotic classes considered in this paper will correspond to the massive vacuum solutions in the physics literature. ## $\Gamma$-structures in FJRW theory In [@CIR], Chiodo, Iritani, and Ruan introduced Gamma structures in FJRW theory. These structures can be used to define Gamma classes for *matrix factorizations* [@CIR] and D-brane central charge in LG models [@KRS]. We propose a Gamma conjecture which relates certain Gamma classes to the asymptotic classes of the quantum connection. ### A Gamma map The key construction in [@CIR] is a linear map, called a *Gamma map*, that gives integral structures for the FJRW state space. We slightly modify the definition in [@CIR] and define a Gamma map for the LG pair $(W, G)$ by $$\label{gamma-map-intro} \widehat{\Gamma}_{W, G}:=\bigoplus_{g\in G} (-1)^{-\mu(g)} \prod_{j=1}^{N}\Gamma(1-\theta_j(g))\cdot {\rm Id}_{{\mathcal H }_{g}}\in {\rm End}({\mathcal H }_{W, G}).$$ Here ${\rm Id}_{{\mathcal H }_{g}}$ is the identity map on ${\mathcal H }_g$, $\mu(g)$ is the Hodge grading number of $g$ [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"}, and the numbers $\theta_j(g)\in [0, 1)\cap\mathbb{Q}$ are defined by $$g=\left(e^{2\pi\sqrt{-1}\theta_1(g)}, \ldots, e^{2\pi\sqrt{-1}\theta_N(g)}\right)\in({\mathbb C}^*)^N.$$ ### Gamma classes Let ${\rm MF}_{G}(W)$ be the category of *$G$-equivariant matrix factorizations* of $W$ and ${\rm HH}_*({\rm MF}_G(W))$ be the *Hochschild homology group* of the category ${\rm MF}_{G}(W)$. According to [@PV Theorem 2.5.4], there is an isomorphism $${\rm HH}_*({\rm MF}_G(W))\cong {\mathcal H }_{W, G}.$$ The map $\widehat{\Gamma}_{W, G}$ can be also viewed as a bridge between ${\rm MF}_G(W)$ and the FJRW theory of $(W, G)$. 
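Before turning to Chern characters, here is a small numerical illustration (added; not a computation from the paper) of the Gamma map on the sectors ${\mathcal H }_{J^m}$. Since $J^m=(e^{2\pi\sqrt{-1}mw_1/d},\ldots,e^{2\pi\sqrt{-1}mw_N/d})$, we have $\theta_j(J^m)=\{m w_j/d\}$, so the scalar $\prod_j\Gamma(1-\theta_j(J^m))$ can be evaluated directly from the weight system; the sign $(-1)^{-\mu(J^m)}$ is omitted below because the Hodge grading number is not reproduced in this sketch.

```python
# Hedged sketch: the factor prod_j Gamma(1 - theta_j(J^m)) of the Gamma map on
# the sector H_{J^m}. The sign (-1)^{-mu(J^m)} is deliberately left out.
# Sample weight system: (d; w) = (15; 5, 3), i.e. W = x^3 + y^5, an illustrative choice.
from math import gamma

def gamma_factor(m, d, weights):
    thetas = [(m * w % d) / d for w in weights]   # theta_j(J^m) = {m w_j / d}
    prod = 1.0
    for t in thetas:
        prod *= gamma(1.0 - t)
    return prod

d, weights = 15, [5, 3]
for m in range(d):
    print(m, gamma_factor(m, d, weights))
```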
In [@PV], for an object $\overline{E}$ in the category ${\rm MF}_{G}(W)$, Polishchuk and Vaintrob constructed a Chern character ${\rm ch}_{G}(\overline{E})\in {\rm HH}_0({\rm MF}_{G}(W)).$ We define the *Gamma class of the object $\overline{E}$* to be $$\widehat{\Gamma}_{W, G}({\rm ch}_{G}(\overline{E}))\in{\mathcal H }_{W, G}.$$ Furthermore, Polishchuk and Vaintrob proved a Hirzebruch-Riemann-Roch formula for matrix factorizations, generalizing the work of Walcher [@Wal]. The formula expresses the Euler characteristic $\chi(\overline{E}, \overline{F})$ of the Hom space ${\rm Hom}^*(\overline{E}, \overline{F})^G$ in terms of a canonical pairing [\[PV-pairing\]](#PV-pairing){reference-type="eqref" reference="PV-pairing"} between ${\rm ch}_{G}(\overline{E})$ and ${\rm ch}_{G}(\overline{F})$. We will define a non-symmetric pairing $\left[\cdot, \cdot \right)$ on ${\mathcal H }_{W, G}$ [\[non-symmetric-pairing-qst\]](#non-symmetric-pairing-qst){reference-type="eqref" reference="non-symmetric-pairing-qst"}. Using the HRR formula and the Gamma map [\[gamma-map-intro\]](#gamma-map-intro){reference-type="eqref" reference="gamma-map-intro"}, we show in Corollary [Corollary 93](#cor-gamma-pairing){reference-type="ref" reference="cor-gamma-pairing"} that $$\left[\widehat{\Gamma}_{W, G}({\rm ch}_{G}(\overline{E})), \widehat{\Gamma}_{W, G}({\rm ch}_{G}(\overline{F}))\right)=\chi(\overline{E}, \overline{F})\in {\mathbb Z}.$$ ### Gamma conjectures for admissible LG pairs Let $R={\mathbb C}[x_1, \ldots, x_N]$. Let $\{e_j\}$ be a standard basis of $R^N$ and $\iota(e_j^*)$ be the contraction of the dual element $e_j^*$. We consider the *Koszul matrix factorization* $${\mathbb C}^{\rm st}:=\Big(\bigoplus_{k=0}\wedge^k_R R^N, \quad \delta:=\sum_{j=1}^{N}x_j e_j\wedge+\sum_{j=1}^{N}{\partial W\over \partial x_j}\iota(e_j^*)\Big).$$ Following [@Dyc], this matrix factorization ${\mathbb C}^{\rm st}$ is called the *stabilization of the residue field ${\mathbb C}$*. The Gamma classes of ${\mathbb C}^{\rm st}$ and its twist ${\mathbb C}(\ell)^{\rm st}$ can be calculated using Polishchuk-Vaintrob's Chern character formula [@PV Theorem 3.3.3]. Direct calculation shows that these Gamma classes match the classes defined in [\[asymptotic-class-formula\]](#asymptotic-class-formula){reference-type="eqref" reference="asymptotic-class-formula"} $$\widehat{\Gamma}_{W, \langle J\rangle}\left({\rm ch}_{\langle J\rangle}({\mathbb C}(\ell)^{\rm st})\right)={\mathcal A}_\ell, \quad \forall \ell\in {\mathbb Z}.$$ If the LG pair $(W, \langle J\rangle)$ is of general type, some of these Gamma classes has asymptotic meanings. Inspired by Galkin-Golyshev-Iritani's *Gamma Conjecture I* for quantum cohomology of Fano manifold [@GGI], we propose a Gamma conjecture in FJRW theory. **Conjecture 12** (The weak/strong Gamma Conjecture). Let $(W, \langle J\rangle)$ be an admissible LG pair of general type, with index $\nu>0$. For integers $\ell=0, \ldots, \nu-1$, the Gamma class of the matrix factorization ${\mathbb C}(\ell)^{\rm st}$ is a weak/strong asymptotic class with respect to $Te^{2\pi\sqrt{-1}\ell/ \nu}$. By Proposition [Proposition 10](#wall-to-asymptotic){reference-type="ref" reference="wall-to-asymptotic"}, Mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} implies the weak Gamma Conjecture [Conjecture 12](#algebraic-analytic){reference-type="ref" reference="algebraic-analytic"}. 
Using Proposition [Proposition 65](#mirror-theorem-fermat){reference-type="ref" reference="mirror-theorem-fermat"}, Proposition [Proposition 80](#weak=strong){reference-type="ref" reference="weak=strong"}, and Theorem [Theorem 7](#main-thm-intro){reference-type="ref" reference="main-thm-intro"}, we have **Theorem 13**. *Let $(W, \langle J\rangle)$ be an admissible LG pair of general type. Then* 1. *The weak Gamma Conjecture [Conjecture 12](#algebraic-analytic){reference-type="ref" reference="algebraic-analytic"} holds if $W$ is a Fermat polynomial of the form $W=\sum\limits_{i=1}^{N} x_i^{a_i}$ with $a_i\geq 2$ for all $i$.* 2. *The strong Gamma Conjecture [Conjecture 12](#algebraic-analytic){reference-type="ref" reference="algebraic-analytic"} holds if $W$ is a Fermat homogeneous polynomial of the form $W=\sum\limits_{i=1}^{N}x_i^d$ with $d-N>1$.* ## Orlov's SOD and Stokes phenomenon Now we consider the relation between asymptotic classes and *Orlov's semiorthogonal decomposition (SOD)*. ### Orlov's SOD and a Gram matrix Let $(W, \langle J\rangle)$ be an admissible LG pair of general type. Let ${\mathcal X}_W=(W=0)$ be the hypersurface in the weighted projective space ${\mathbb P}^{N-1}(w_1, \ldots, w_N)$ and ${\bf D}^b({\mathcal X}_W)$ be the derived category of bounded complex of coherent sheaves on ${\mathcal X}_W.$ According to [@Orl; @BFK2], the triangulated category ${\rm HMF}_{\langle J\rangle}(W)$ admits a semiorthogonal decomposition $$\label{intro-sod} {\rm HMF}_{\langle J\rangle}(W)\simeq\left<{\mathbb C}^{\rm st}, {\mathbb C}(-1)^{\rm st}, \ldots, {\mathbb C}(1-\nu)^{\rm st}, {\bf D}^b({\mathcal X}_W)\right>.$$ The matrix of the Euler pairing $\chi(\cdot, \cdot)$ of pairs in $\{{\mathbb C}^{\rm st}, {\mathbb C}(-1)^{\rm st}, \ldots, {\mathbb C}(1-\nu)^{\rm st}\}$ forms an upper triangular matrix $$M:=\Big(\chi({\mathbb C}(-i)^{\rm st}, {\mathbb C}(-j)^{\rm st})\Big)_{0\leq i, j\leq\nu-1}.$$ This matrix can be calculated by the Hirzebruch-Riemann-Roch formula in [@PV]. Let $\left[f(x)\right]_n$ be the coefficient of $x^n$ in the formal power series $f(x)$. We have **Proposition 14** (Proposition [Proposition 98](#inverse-gram-formula){reference-type="ref" reference="inverse-gram-formula"}). The matrix $M^{-1}=(a^{i,j})$ is an upper triangular matrix with integer coefficients. Its entries are determined by the weight system of $(W, \langle J\rangle)$: $$a^{i, i+n}=\left[\prod_{j=1}^{N}(1-x^{w_j})^{-1}\right]_n, \quad \forall \ 1\leq i\leq \nu, 0\leq n\leq \nu-i.$$ ### Relation to Stokes phenomenon The topological aspect of irregular meromorphic connections can be described using *Stokes matrices*, see [@DM; @CV; @Dub; @Dub-conj; @DHMS] for example. Following the ideas of Dubrovin [@Dub] and Kontsevich, the SOD of ${\rm HMF}_{\langle J\rangle}(W)$ is closely related to the Stokes phenomenon of the meromorphic connection [\[intro-meromorphic\]](#intro-meromorphic){reference-type="eqref" reference="intro-meromorphic"}. Similar phenomenon in quantum cohomology has been discussed in the literature [@Guz; @GGI; @CDG; @SS]. In a certain sector of $\arg(x)\in (a, b)\cap\mathbb{R}$, the ordinary differential equation [\[ghe-equation-intro\]](#ghe-equation-intro){reference-type="eqref" reference="ghe-equation-intro"} admits a basis of analytic solutions given by Meijer $G$-functions; see Proposition [Proposition 99](#meijer-basis){reference-type="ref" reference="meijer-basis"}. The transformation laws between such functions in different sectors is linear [@Luk; @Fie]. 
These transformations characterize the Stokes phenomenon of the ODE system. We find one transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} which characterizes the behavior of certain Meijer $G$-functions with exponential asymptotic expansions. We call certain coefficients in [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} the *Stokes coefficients of exponential type*.

**Proposition 15** (Proposition [Proposition 108](#theorem-gram-asymptotic){reference-type="ref" reference="theorem-gram-asymptotic"}). The nonzero entries of the upper triangular matrix $M^{-1}$ are Stokes coefficients of exponential type of the equation [\[ghe-equation-intro\]](#ghe-equation-intro){reference-type="eqref" reference="ghe-equation-intro"}.

For the $r$-spin case, the Stokes matrix can be calculated by the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"}. We recover the calculation of the Stokes matrix in [@CV]. The computation is similar to that for quantum cohomology of projective spaces in [@Guz].

## Plan of the paper

In Section 2, we introduce the construction of the quantum product in FJRW theory and then propose a quantum spectrum conjecture for admissible LG pairs $(W, \langle J\rangle)$ of general type. In Section 3, we define the strong asymptotic classes in FJRW theory using the quantum connection. In Section 4, we use a formula of Barnes on the asymptotic expansions of generalized hypergeometric functions to calculate weak asymptotic classes. In Section 5, we review the Gamma structures in FJRW theory, and discuss the relation to the HRR formula and asymptotic classes. We propose a Gamma conjecture which relates the Gamma classes of certain $\langle J\rangle$-equivariant matrix factorizations to the weak/strong asymptotic classes. In Section 6, we discuss the relation between the Gamma structures and Orlov's semiorthogonal decompositions for the category of $\langle J\rangle$-equivariant matrix factorizations. We also relate a Gram submatrix to Stokes coefficients of the ODE from the quantum connection. In Section 7, we prove the quantum spectrum conjecture for mirror ADE singularities and Fermat homogeneous polynomials.

## Acknowledgments

We thank Weiqiang He for helpful discussions on Gamma structures and Yang Zhou for helpful discussions on the mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"}. Y.S. would also like to thank Changzheng Li, Alexander Polishchuk, and Mark Shoemaker for helpful discussions. Part of the work was completed while Y.S. was visiting the Simons Center for Geometry and Physics during the program "Integrability, Enumerative Geometry and Quantization". Y.S. would like to thank the organizers of the program and the hospitality of the Simons Center for Geometry and Physics. Y.S. would also like to thank the hospitality of the Institute for Advanced Study in Mathematics (IASM), Zhejiang University. Y.S. is partially supported by Simons Collaboration Grant 587119.
# Quantum spectrum in FJRW theory {#sec:spectrum}

## A brief review of FJRW theory {#sec:fjrw}

### Admissible Landau-Ginzburg pairs

Let $W: \mathbb{C}^N\to \mathbb{C}$ be a degree $d$ quasi-homogeneous polynomial, such that the weights of the variables $w_j:={\rm wt}(x_j)$ satisfy $\gcd(w_1, \ldots, w_N)=1$. Following [@FJR], we only consider the case where $W$ is nondegenerate, that is, (1) $W$ has an isolated critical point only at the origin and (2) the choices of $q_j:={w_j\over d}\in\left(0, {1\over 2}\right]\cap \mathbb{Q}$ are unique. The FJRW theory constructed in [@FJR-book; @FJR] is an intersection theory on the moduli space of $W$-spin structures of $G$-orbifold curves, where $G$ is a certain group of symmetries of $W$. There are a few choices of the symmetry group $G$, including the group $G_W$ defined in [\[maximal-group\]](#maximal-group){reference-type="eqref" reference="maximal-group"}, called the *maximal diagonal symmetry group* of $W$.

**Definition 16**. We say that a subgroup $G\leq G_{W}$ is admissible if it contains the exponential grading element $$J:=\left(e^{2\pi \sqrt{-1} q_1}, \ldots, e^{2\pi \sqrt{-1} q_N}\right).$$ We call such a pair $(W,G)$ an *admissible LG pair*. We denote the group generated by $J$ by $\langle J\rangle$, and call it the minimal (admissible) group of $W$.

According to [@Kra Proposition 3.4], the definition here is equivalent to the original definition in [@FJR Definition 2.3.2], which says there exists a Laurent polynomial $Z$, quasi-homogeneous with the same weights $q_i$ as $W$, but with no monomials in common with $W$, such that $G=G_{W+Z}$.

### The state space

For each group element $g\in G$, let ${\rm Fix}(g)\subset {\mathbb C}^N$ be the $g$-fixed locus and $W_g$ be the restriction of $W$ on ${\rm Fix}(g)$. Following [@FJR Definition 3.2.1], the *$g$-twisted sector* ${\mathcal H }_g$ is defined to be the $G$-invariant part of the middle-dimensional relative cohomology for $W_g$. The *state space* of the admissible LG pair $(W, G)$ is the *FJRW vector space* associated with a nondegenerate pairing $\langle\cdot , \cdot \rangle$ defined by intersecting Lefschetz thimbles [@FJR Section 3.2] $$\label{state-space} \bigg({\mathcal H }_{W,G}:=\bigoplus_{g\in G}{\mathcal H }_{g}, \quad \langle \cdot , \cdot \rangle\bigg).$$ In this paper, we will use an equivalent description of the state space. According to [@FJR Formula (74)], there is a pairing-preserving graded isomorphism $$\label{fjrw=orbifold} \bigg({\mathcal H }_{W,G}, \langle \cdot , \cdot \rangle\bigg)\cong\bigg(\bigoplus_{g\in G}\left({\rm Jac}(W_g)\cdot \omega_g\right)^G, \langle \cdot , \cdot \rangle^{\rm Res}\bigg).$$ Let us explain the meaning of the right-hand side. Let $\mathfrak{I}_W$ be the *Jacobian ideal* in ${\mathbb C}[x_1, \ldots, x_N]$, generated by the partial derivatives ${\partial{W}/ \partial x_j}$ of $W$. Let $${\rm Jac}(W):={\mathbb C}[x_1, \ldots, x_N]/\mathfrak{I}_W$$ be the *Jacobian algebra* (or the *Milnor ring*, or the *local algebra*) of the singularity $W$. Let ${\bf x}=(x_{1},\dots, x_{N})$ be a coordinate system on ${\mathbb C}^{N}$. An element $h\in G$ acts on ${\bf x}$ and the form $\{d{\bf x}\}$ by the scalar products $h\cdot{\bf x}$ and $h\cdot d{\bf x}=d(h\cdot{\bf x})$. For each $g\in G$, let $\omega_g$ be the standard top form of ${\rm Fix}(g)$.
That is, $$\omega_g:=\bigwedge_{\{i\mid g\cdot x_i=x_i\}} dx_{i}.$$ We denote the $G$-invariant part of the Jacobian algebra on the fixed locus ${\rm Fix}(g)$ by $$\label{state-space-component} {\mathcal H }_g:=\left( {\rm Jac}(W_g)\cdot \omega_g \right)^G.$$ Here if ${\rm Fix}(g)$ only contains the origin, that is, ${\rm Fix}(g)=\{0\}$, then we say that ${\mathcal H }_{g}$ is a *narrow* sector. Otherwise, ${\mathcal H }_{g}$ is called a *broad* sector. Each narrow sector ${\mathcal H }_{g}$ is defined to be one-dimensional, and spanned by the *standard generator* $1\vert g\rangle.$ In particular, if ${\rm Fix}(J^m)=\{0\}$, then we denote the standard generator of ${\mathcal H }_{J^m}$ by $$\label{standard-generator} {\bf e}_m:=1\vert J^m\rangle\in {\mathcal H }_{J^m}.$$ In general, we can denote an element $\phi_g\in{\mathcal H }_g$ for any $g\in G$ by $$\label{broad-element-notation} \phi_g:= f\cdot \omega_g \vert g\rangle, \quad \text{with} \quad f\in {\rm Jac}(W_g).$$ The direct sum of all narrow (resp. broad) sectors is denoted by ${\mathcal H }_{\rm nar}$ (resp. ${\mathcal H }_{\rm bro}$). The state space admits a *narrow-broad decomposition* $$\label{G-orb-space} {\mathcal H }_{W, G}=\bigoplus_{g\in G}{\mathcal H }_g={\mathcal H }_{\rm nar}\bigoplus{\mathcal H }_{\rm bro}.$$ The vector space ${\rm Jac}(W)$ (and ${\rm Jac}(W)\cdot d{\bf x}$) has a *Grothendieck residue pairing* $$\label{residue-pairing} \langle f_1, f_2\rangle^{\rm Res}=\langle f_1\cdot d{\bf x}, f_2\cdot d{\bf x}\rangle^{\rm Res}:={\rm Res}_{{\mathbb C}[{\bf x}]/{\mathbb C}} \begin{bmatrix} f_1f_2d{\bf x}\\ \partial_1W, \ldots, \partial_{n}W \end{bmatrix}.$$ See [@Har III.9]. We denote this pairing on each ${\rm Jac}(W_g)\cdot \omega_g$ by $\langle\cdot, \cdot\rangle^{\rm Res}_{W_g}$. This induces a canonical pairing $\langle\cdot, \cdot\rangle^{\rm Res}$ on ${\mathcal H }_{W, G}=\bigoplus_{g\in G}{\mathcal H }_g$ [@PV], which is the direct sum of the pairings ${\mathcal H }_g\otimes{\mathcal H }_{g^{-1}}\rightarrow {\mathbb C}$ defined by $$\label{residue-dual} \langle f_1\cdot \omega_g, f_2\cdot \omega_{g^{-1}}\rangle^{\rm Res}:=\langle\zeta_*(f_1\cdot \omega_g), f_2\cdot \omega_{g^{-1}}\rangle^{\rm Res}_{W_g}.$$ Here $\zeta:=(e^{\pi \sqrt{-1} q_1}, \ldots, e^{\pi \sqrt{-1} q_N})\in({\mathbb C}^*)^{N}$ is a special square root of $J$. See [@FJR Definition 3.1.1]. *Remark 17*. According to [@PV], the pairing is nondegenerate. In FJRW theory [@FJR], there is a state space, given by the space of $G$-invariant Lefschetz thimbles of $W: {\mathbb C}^N\to {\mathbb C}$, ? with a nondegenerate intersection pairing [@FJR]. We denote this pairing by $\langle, \rangle_{0,2}^{\rm FJRW}$. According to [@FJR; @CIR], this pair $({\mathcal H }_{W, G}^{\rm FJRW}, \langle, \rangle_{0,2}^{\rm FJRW})$ is isomorphic to $({\mathcal H }_{W, G}, \langle , \rangle_{0,2}^{\rm res})$. ### Fan-Jarvis-Ruan-Witten invariants The FJRW theory of the admissible pair $(W, G)$ produces a CohFT [@FJR Definition 4.2.1] $$\label{cohft-notation}\bigg\{\Lambda_{g,k}^{W,G}: {\mathcal H }_{W,G}^{\otimes k}\to H^*(\overline{\mathcal{M}}_{g,k}, \mathbb{Q})\bigg\}_{2g-2+k>0},$$ where each $\Lambda_{g,k}^{W,G}$ is a multi-linear map from ${\mathcal H }_{W,G}^{\otimes k}$ to the cohomology of moduli spaces of stable curves of genus $g$ with $k$-markings. These maps are compatible under gluing morphisms. Let $\gamma_i\in {\mathcal H }_{W,G}$ and let $\psi_i$ be psi classes in $H^*(\overline{\mathcal{M}}_{g,k}, \mathbb{Q})$. 
We consider the genus-$g$ FJRW invariants defined in [@FJR Definition 4.2.6] $$\label{fjrw-inv} \Big\langle\gamma_1\psi_1^{\ell_1},\ldots,\gamma_k\psi_k^{\ell_k}\Big\rangle_{g,k}^{W,G} :=\int_{\overline{\mathcal{M}}_{g,k}}\Lambda_{g,k}^{W,G}(\gamma_1,\ldots,\gamma_k) \prod_{i=1}^k\psi_i^{\ell_i}, \quad \text{where} \quad\ell_{i}\in {\mathbb Z}_{\geq 0}.$$ Now we recall some nonvanishing criteria for the (genus-zero) FJRW invariants in Lemma [Lemma 18](#nonvanishing-lemma){reference-type="ref" reference="nonvanishing-lemma"} below. Later it will be used frequently. For more properties, see [@FJR Theorem 4.2.2]. For each $g\in G\subset G_W$, there are unique numbers $\theta_j(g)\in \mathbb{Q}\cap [0, 1)$, such that $$g=\left(\exp(2\pi\sqrt{-1} \theta_1(g)), \ldots, \exp(2\pi\sqrt{-1} \theta_N(g))\right)\in ({\mathbb C}^*)^N.$$ We denote *the degree shifting number* of $g\in G$ by $$\label{degree-shifting-number} \iota(g):=\sum_{j=1}^{N}(\theta_{j}(g)-q_j).$$ Let $N_g:=\dim_{\mathbb C}{\rm Fix}(g).$ The *complex degree* of a homogeneous element $\phi_g\in {\mathcal H }_g$ in [\[broad-element-notation\]](#broad-element-notation){reference-type="eqref" reference="broad-element-notation"} is defined by $$\label{complex-degree} \deg_{\mathbb C}\phi_g:={N_g\over 2}+\iota(g).$$ Note that the $g$-twisted sector ${\mathcal H }_g$ (and any element in it) is *narrow* if and only if $N_g=0$. We will denote the *central charge* of the singularity $W$ by $$\label{central} \widehat{c}_W:=\sum_{j=1}^{N}\left(1-{2w_j\over d}\right) \in \mathbb{Q}_{\geq 0}.$$

**Lemma 18**. *If the genus-zero FJRW invariant $$\Big\langle\phi_{g_1}\psi_1^{\ell_1},\ldots,\phi_{g_k}\psi_k^{\ell_k}\Big\rangle_{0, k}^{W, G}\neq 0,$$ then $$\begin{aligned} \widehat{c}_W-3+k=\sum_{i=1}^{k}(\deg_{\mathbb C}\phi_{g_i}+\ell_i),\quad\text{and} \label{virtual-degree}\\ {w_j\over d}\left(k-2\right)-\sum_{i=1}^{k}\theta_{j}(g_i)\in {\mathbb Z}. \label{selection-rule}\end{aligned}$$*

Here the equation [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"} follows from [@FJR Theorem 4.1.8 (1)]. We call it the *homogeneity property* or the degree constraint (in genus zero). The equation [\[selection-rule\]](#selection-rule){reference-type="eqref" reference="selection-rule"} is called the *selection rule* [@FJR Proposition 2.2.8].

### A quantum product

We denote the rank of ${\mathcal H }_{W, G}$ by $r$ and fix a homogeneous basis $\{\phi_i\}_{i=1}^{r}$ of ${\mathcal H }_{W, G}$. Let $${\bf t}(z):=\sum_{m\geq 0}\sum_{i=1}^{r}t_{i, m}\phi_i z^m\in {\mathcal H }_{W,G}[\![z]\!].$$ It is convenient to use the following double bracket notation to define a *correlation function* (as a formal power series of variables $\{t_{i, m}\}$) $$\big\langle\!\big\langle\gamma_1\psi_1^{\ell_1},\ldots,\gamma_k\psi_k^{\ell_k}\big\rangle\!\big\rangle_{g,k}^{W,G}(\mathbf{t}(\psi)) :=\sum_{n\geq 0}{1\over n!}\Big\langle\gamma_1\psi_1^{\ell_1},\ldots,\gamma_k\psi_k^{\ell_k}, \mathbf{t}(\psi_{k+1}), \ldots, \mathbf{t}(\psi_{k+n})\Big\rangle_{g,k+n}^{W,G}.$$ In this paper, we only consider genus zero correlation functions.
We restrict to $$\mathbf{t}:=\mathbf{t}(0)=\sum\limits_{i=1}^{r}t_i\phi_i, \quad \text{with} \quad t_i:=t_{i, 0}.$$ The FJRW invariants induce a *quantum product* $\star_{\mathbf{t}}$ defined by $$\label{quantum-product} \langle\phi_i\star_{\mathbf{t}}\phi_j, \phi_k\rangle%={\partial^3\over \partial t_{0}^{a}\partial t_{0}^{b}\partial t_{0}^{c} }F_0^{\clubsuit, (W, G)}(t) =\savebox{\@brx}{\(\m@th{\big\langle}\)}% \mathopen{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}\phi_i, \phi_j, \phi_k\savebox{\@brx}{\(\m@th{\big\rangle}\)}% \mathclose{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}^{W, G}_{0,3}(\mathbf{t})\in {\mathbb C}[\![t_1, \ldots, t_r]\!].$$ By the WDVV equations, the quantum product is associative. Recall that the narrow element ${\bf e}_1$ is the unit of the quantum product $\star$ [@FJR Theorem 4.2.2]. We sometimes denoted it by $\mathbf{1}:={\bf e}_1$. An important invariant for quasi-homogeneous polynomial is the **Definition 19** (The weight system and the Gorenstein parameter). Let $q_j\in \mathbb{Q}$ be the weight of $x_i$ of a quasi-homogeneous polynomial $W(x_1, \ldots, x_N)$. There is an $(N+1)$-tuple of positive integers, called the *weight system* of $W$ (or the LG pair $(W, \langle J\rangle)$), given by $$(d; w_1, \ldots, w_N)\in \mathbb{N}^{N+1}$$ such that 1. $d={\rm lcm}(w_1, \ldots, w_N),$ 2. $\gcd(w_1, \ldots, w_N)=1,$ 3. $w_j=d\cdot q_j$ for any $j$. The *Gorenstein parameter* of the polynomial $W$ (or the LG pair $(W, \langle J\rangle)$) is defined to be $$\label{index} a_{W, G}:=-\nu_{W, G}:=\sum_{j=1}^N w_j-d.$$ There is a classification of the admissible LG pairs $(W, \langle J\rangle)$ by the Gorenstein parameter. The admissible LG pair $(W, \langle J\rangle)$ is called of *Fano/Calabi-Yau/general type* if the parameter $\nu$ is negative/zero/positive. Such a classification is used in [@HV; @Orl; @HR; @Aco]. If $W$ is of Fano type and $N\geq 2$, then the hypersurface given by $W=0$ is a Fano manifold (orbifold) of Fano index $-\nu>0.$ In this paper, we mostly focus on the singularities of general type, that is when $\nu>0.$ ## A quantum spectrum conjecture in FJRW theory Now we consider the admissible LG pairs $(W, \langle J\rangle)$. We recall that for an admissible LG pair $(W, \langle J\rangle)$, its weight system is defined in [\[weight-system\]](#weight-system){reference-type="eqref" reference="weight-system"}. Let us denote its *weight system* by $$(d; w_1, \ldots, w_N)\in (\mathbb{Z}_{>0})^{N+1}.$$ Recall According to Definition [Definition 1](#index-lg-pair){reference-type="ref" reference="index-lg-pair"}, the *index* of the LG pair $(W, \langle J\rangle)$ is given by $$\nu=d-\sum\limits_{j=1}^{N}w_j,$$ and we say that the LG pair is *Fano/CY/general type* if $\nu$ is negative/zero/positive. ### The element $\tau(t)$ Recall that the element $\tau(t)$ is defined in [\[def-tau\]](#def-tau){reference-type="eqref" reference="def-tau"}. We write $$\label{tau-component} \tau(t):=\sum_{m\in {\bf Mir}}\tau_m(t)\ {\bf e}_m.$$ Here ${\bf Mir}=\{m\in {\bf Nar}\mid \tau_m(t)\neq 0\}$. 
In particular, it follows from formula [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} that $1\in {\bf Mir}$ if and only if $\nu=1.$ Let $$\label{condition-m-in-tau} {\bf Mir}_{\geq 2}:=\left\{m\in {\bf Nar}\ \bigg\vert\ m-1-\sum_{j=1}^{N}\left \lfloor {w_j\cdot m\over d}\right\rfloor=1\right\}\subset {\bf Nar}.$$ Then we have $${\bf Mir}= \begin{dcases} \{1\}\cup {\bf Mir}_{\geq 2}, & \text{ if } \nu=1;\\ {\bf Mir}_{\geq 2}, & \text{ if } \nu>1. \end{dcases}$$ We also have $$\label{coefficient-tau} \tau_m(t)= \begin{dcases} {t^{d}\over d!}\prod_{j=1}^{N} {\Gamma(w_j+{w_j\over d})\over \Gamma\left({w_j\over d}\right)}, & \text{ if } m=1;\\ {t^{m-1}\over (m-1)!}\prod_{j=1}^{N} {\Gamma({w_j\cdot m\over d})\over \Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)}, & \text{ if } m>1. \end{dcases}$$ **Example 20**. If $W=\sum\limits_{i=1}^{N}x_i^d$ is a Fermat homogeneous polynomial of general type, then $$\label{tau-fermat} \tau(t)= \begin{dcases} t{\bf e}_2, & \textit{ if } d>N+1;\\ t{\bf e}_2+\frac{t^{d}}{d!d^N}{\bf e}_1, & \textit{ if } d=N+1>2;\\ {t\over 4}{\bf e}_1, & \textit{ if } d=N+1=2. \end{dcases}$$ The set ${\bf Mir}$ can contain many elements. For example, the $Q_{11}$-singularity $W=x_1^2x_2+x_2^3x_3+x_3^3$ has the weight system $(18; 7, 4, 6)$ and the index $\nu=1.$ We have $${\bf Mir}=\{1, 2, 4, 5, 7, 10, 11, 13, 14, 16\}={\bf Nar}\backslash\{17\}.$$ Using formulas [\[degree-shifting-number\]](#degree-shifting-number){reference-type="eqref" reference="degree-shifting-number"} and [\[complex-degree\]](#complex-degree){reference-type="eqref" reference="complex-degree"}, we have $$\label{degree-narrow} 1-\deg_{\mathbb C}{\bf e}_m=1-\sum_{j=1}^{N}\left(\{{w_j\cdot m\over d}\}-{w_j\over d}\right) =\begin{dcases} 1, & \text{ if } m=1;\\ {\nu(m-1)\over d}, & \text{ if } m\in {\bf Mir}_{\geq 2}. \end{dcases}$$ Here the last equality follows from the equality [\[condition-m-in-tau\]](#condition-m-in-tau){reference-type="eqref" reference="condition-m-in-tau"} when $m\neq 1$. **Proposition 21**. If $\nu>0$, then the Euler vector field ${\mathscr E}(\tau(t))$ takes the form $$\label{euler-tau}{\mathscr E}(\tau(t))={\nu\over d}t\tau'(t).$$ *Proof.* By the definition in  [\[euler-vector\]](#euler-vector){reference-type="eqref" reference="euler-vector"}, we have $${\mathscr E}(\tau(t))=\sum_{m\in {\bf Mir}}(1-\deg_{\mathbb C}{\bf e}_m)\tau_m(t){\bf e}_m.$$ Now the result follows directly from [\[degree-narrow\]](#degree-narrow){reference-type="eqref" reference="degree-narrow"} and [\[coefficient-tau\]](#coefficient-tau){reference-type="eqref" reference="coefficient-tau"}. ◻ ### A quantum product ${\nu\over d}\tau'\star_\tau$ For simplicity, we will write $$\tau:=\tau(1) \quad \text{and} \quad \tau':=\tau'(1).$$ In the following, we will focus on the quantum multiplication ${(\nu/d)}\tau'\star_\tau$. The following proposition shows that $$\label{c1-avatar} {\nu\over d}\tau'\star_\tau\in {\rm End}({\mathcal H }_{W, \langle J\rangle}).$$ **Proposition 22**. 
Let $(W, \langle J\rangle)$ be an admissible LG pair of general type, then $${\nu\over d}t\tau'(t)\star_{\tau(t)} \in {\mathbb C}[t]\otimes_{{\mathbb C}} {\rm End}({\mathcal H }_{W, \langle J\rangle}).$$ *Proof.* For any pair of homogeneous elements $\phi_i, \phi_j\in {\mathcal H }_{W, \langle J\rangle}$, we will prove $$\savebox{\@brx}{\(\m@th{\big\langle}\)}% \mathopen{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}\tau'(t), \phi_i, \phi_j\savebox{\@brx}{\(\m@th{\big\rangle}\)}% \mathclose{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}_{0,3}(\tau(t))\in {\mathbb C}[t].$$ Using [\[tau-component\]](#tau-component){reference-type="eqref" reference="tau-component"} and [\[coefficient-tau\]](#coefficient-tau){reference-type="eqref" reference="coefficient-tau"}, it suffices to show that for any $m_1, \ldots, m_n\in {\bf Mir}$, $$\label{polynomiality-tau} \Big\langle{\bf e}_{m_1}, \phi_i, \phi_j, {\bf e}_{m_2}, \ldots, {\bf e}_{m_n}\Big\rangle_{0, n+2}$$ vanishes if $n$ is large enough. According to the *string equation* [@FJR Theorem 4.2.9], this is true if $n\geq 2$ and at least one of $m_1, \ldots, m_n$ is $1.$ So we can assume $m_k\geq 2$ for each $k.$ Now according to the homogeneity property [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"}, if such an FJRW invariant [\[polynomiality-tau\]](#polynomiality-tau){reference-type="eqref" reference="polynomiality-tau"} is nonzero, then we have $$\begin{aligned} \widehat{c}_W-3+n+2 &=&\deg_{\mathbb C}\phi_i+\deg_{\mathbb C}\phi_j+\sum_{k=1}^{n}\deg_{\mathbb C}{\bf e}_{m_k}\\ &=&\deg_{\mathbb C}\phi_i+\deg_{\mathbb C}\phi_j+n-\sum_{k=1}^{n}{\nu(m_k-1)\over d}\\ &\leq&\deg_{\mathbb C}\phi_i+\deg_{\mathbb C}\phi_j+n-{\nu\over d}n. %&\leq&2\widehat{c}_W+n-{\nu\over d}n.\end{aligned}$$ Here the second equality follows from the degree formula [\[degree-narrow\]](#degree-narrow){reference-type="eqref" reference="degree-narrow"}, and the last inequality holds as we assume $m_k\geq 2$ for each $k$. Since $\nu>0$, we have $$n\leq {d\over \nu}\left(\deg_{\mathbb C}\phi_i+\deg_{\mathbb C}\phi_j+1-\widehat{c}_W\right).$$ Thus $n$ is bounded. ◻ ### A conjecture on quantum spectrum for admissible LG pairs In [\[def-evalue\]](#def-evalue){reference-type="eqref" reference="def-evalue"}, we denote the set of eigenvalues of the quantum multiplication ${\nu\over d}\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}$ by ${\mathfrak E}$ and refer to it as the *quantum spectrum*. Inspired by Galkin-Golychev-Iritani's Conjecture $\mathcal{O}$ for quantum cohomology of Fano manifolds [@GGI], we propose the following *quantum spectrum conjecture for admissible LG pair $(W, \langle J\rangle)$ of general type*. **Conjecture 23**. Let $(W, \langle J\rangle)$ be an admissible LG pair with $W$ a nondegenerate quasihomogeneous polynomial of general type. Let $(d; w_1, \ldots, w_N)$ be its weight system. Then the set ${\mathfrak E}$ contains the number $$\label{largest-positive-number} T:=\nu\left(d^{-d}\prod_{j=1}^Nw_j^{w_j}\right)^{1\over \nu}\in \mathbb{R}_{>0}$$ such that: 1. for any $\lambda\in {\mathfrak E}$, we have $|\lambda|\leq T$; 2. the following two sets are the same: $$\Big\{\lambda\in {\mathfrak E}\mid |\lambda|=T\Big\}=\Big\{Te^{2\pi\sqrt{-1}j/\nu}\mid j=0, 1, \ldots, \nu-1\Big\};$$ 3. the multiplicity of each $\lambda\in {\mathfrak E}$ with $|\lambda|=T$ is one. 
If the conjecture holds, we call $T$ in [\[largest-positive-number\]](#largest-positive-number){reference-type="eqref" reference="largest-positive-number"} the *principal* eigenvalue of ${\nu\over d}\tau'\star_{\tau}$. Part (3) of Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} is also called the *multiplicity-one conjecture*. [ The *multiplicity-one Conjecture* fails when the weight system is $(3; 1,1)$. There are three cases: $W=x^3+y^3, x^2y+xy^2$ or a $D_4$-singularity $x^3+xy^2$. In each case, the characteristic polynomial of the quantum multiplication ${\bf e}_2$ (up to scaling) is $$(x^2-36^2a^4t^4)^2, \quad 72^2a^4={2\over 27}.$$ The largest real positive eigenvalue has multiplicity two.]{style="color: red"} This would hold for all the cases when ${\nu\over d}{\bf e}_2 \star_{t=1}$ makes sense: Fano, CY, and general type. According to [\[c1-avatar\]](#c1-avatar){reference-type="eqref" reference="c1-avatar"}, for CY type, we simply have ${\nu\over d}{\bf e}_2 \star_{t=1}=0.$ For Fano cases, the unique eigenvalue is zero??? ### Evidence of Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} {#evidence-of-conjecture-quantum-spectrum-conj} In Section [7](#sec-evidence){reference-type="ref" reference="sec-evidence"}, we will check Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} in special cases. If $W$ is a mirror simple singularity as in [\[mirror-ade-singularity\]](#mirror-ade-singularity){reference-type="eqref" reference="mirror-ade-singularity"}, the conjecture follows from direct calculations of the quantum products. If $W$ is a Fermat homogeneous polynomial with $\nu>1$, the conjecture follows from the result below: **Proposition 24**. For the Fermat homogeneous polynomial of general type $$W=x_1^d+\ldots +x_N^d, \quad \text{with} \quad \nu=d-N>1,$$ the quantum spectrum of ${\nu \over d}\tau'\star_{\tau}$ for the LG pair $(W, \langle J\rangle)$ is given by $${\mathfrak E}= %\left({\nu \over d}\sJ_2\star_{t=1}\right)= \begin{dcases} %\{0\}, & \text{if } W \textcolor{red}{ is Fano or Calabi-Yau};\\ \{\nu d^{-d/\nu}e^{2\pi\sqrt{-1}j/\nu} \mid j=0, 1, \ldots, \nu-1\}, & \text{if } d>N=1,\\ \{0, \nu d^{-d/\nu}e^{2\pi\sqrt{-1}j/\nu} \mid j=0, 1, \ldots, \nu-1\}, & \text{if } d >N>1. %\left\{{(d-1)!-1\over d!d^{d-2}}, -{1\over d!d^{d-2}}, ?\right\}, & \text{if } d -1=N>1. \end{dcases}$$ Moreover, all nonzero eigenvalues have multiplicity one. The proof of this statement uses mirror symmetry and will be given in Section [7.3](#sec-spectrum-fermat){reference-type="ref" reference="sec-spectrum-fermat"}. ## A variant of the quantum spectrum conjecture on a subalgebra {#sec-variant} The quantum multiplication on the state space ${\mathcal H }_{W, \langle J\rangle}$ is difficult to handle in general, especially when the subspace of broad sectors has a large dimension. When studying the Gromov-Witten theory of a hypersurface $X$, instead of considering the quantum product on the whole cohomology space $H^*(X)$, one often consider the quantum product on the subspace $H^*_{\rm amb}(X)$ of ambient cohomology classes. Here in FJRW theory, we will also consider the quantum product on a special subspace of ${\mathcal H }_{W, \langle J\rangle}$. ### The $G_W$-invariant subspace For any admissible LG pair $(W, G)$, we notice that the group $G_W$ acts on the state space ${\mathcal H }_{W, G}$. 
We denote the $G_W$-invariant subspace by $${\mathcal H }_{W, G}^{G_W}:=\{\phi\in {\mathcal H }_{W, G}\mid g\cdot\phi=\phi, \forall g\in G_W\}.$$ **Lemma 25**. *By viewing ${\mathcal H }_{W, G}$ and ${\mathcal H }_{W, G_W}$ as subspaces of $\bigoplus_{g\in G_{W}}{\rm Jac}(W_g)\cdot \omega_g$, we have $${\mathcal H }_{W, G}^{G_W}={\mathcal H }_{W, G}\cap{\mathcal H }_{W, G_W}.$$ In particular, we have ${\mathcal H }_{W, G_W}^{G_W}={\mathcal H }_{W, G_W}.$* *Proof.* Since $G\subset G_{W}$, each $G_W$-invariant element $\phi_g\in {\mathcal H }_{W, G}$ belongs to ${\mathcal H }_{W, G_{W}}$. Thus $${\mathcal H }_{W, G}^{G_{W}}\subset{\mathcal H }_{W, G}\cap {\mathcal H }_{W, G_{W}}.$$ On the other hand, if $\phi_g\in {\rm Jac}(W_g)\cdot \omega_g\cap {\mathcal H }_{W, G}\cap {\mathcal H }_{W, G_{W}}$, then $g\in G\cap G_W=G$, and $\phi_g\in {\mathcal H }_{W, G}$ is also $G_W$-invariant. ◻ Using the isomorphism [\[fjrw=orbifold\]](#fjrw=orbifold){reference-type="eqref" reference="fjrw=orbifold"}, we obtain **Corollary 26**. We have $${\mathcal H }_{W, G}^{G_{W}}\cong{\mathcal H }_{W, G}\cap {\mathcal H }_{W, G_{W}}.$$ Here $\cap$ is slightly abused as we consider the sets intersect when the images under the isomorphisms [\[fjrw=orbifold\]](#fjrw=orbifold){reference-type="eqref" reference="fjrw=orbifold"} intersect. **Example 27**. The vector space ${\mathcal H }_{W, G}^{G_{W}}$ depends on choices of both $G$ and $G_{W}$. In fact, the $G_{W}$-invariant subspaces ${\mathcal H }_{W_{1}, \langle J\rangle}^{G_{W_{1}}},{\mathcal H }_{W_{2}, \langle J\rangle}^{G_{W_{2}}}$ may be different even if the LG pairs $(W_{1}, \langle J\rangle),(W_{2}, \langle J\rangle)$ have the same weight system. For example, we consider the following two LG pairs, $$\begin{dcases} (W_1=x_1^4+x_2^8, \langle J\rangle), \\ (W_2=x_1^4+x_1x_2^6, \langle J\rangle). \end{dcases}$$ Both LG models have the weight system $(8;2,1)$. However, $$\dim_{\mathbb C}{\mathcal H }_{W_1, \langle J\rangle}^{G_{W}}=6,\quad \dim_{\mathbb C}{\mathcal H }_{W_2, \langle J\rangle}^{G_{W}}=7.$$ Both state spaces contain narrow elements ${\bf e}_1, {\bf e}_2, {\bf e}_3, {\bf e}_5, {\bf e}_6, {\bf e}_7$. The latter also contains a broad element $x_2^5dx_1dx_2\vert J^0\rangle.$ Here is another example. Consider two LG pairs, $$\begin{dcases} (W_1=x_1^5+x_1x_2^2, \langle J\rangle), \\ (W_2=x_1^3x_2+x_1x_2^2, \langle J\rangle=G_{W_{2}}). \end{dcases}$$ Both LG pairs have the weight system $(5; 1, 2)$. Both state spaces contain narrow elements ${\bf e}_1, {\bf e}_2, {\bf e}_3, {\bf e}_4$. However, ${\mathcal H }_{W_1, \langle J\rangle}$ is 5-dimensional as it contains one broad element $x_2dx_1dx_2\vert J^0\rangle$, while ${\mathcal H }_{W_2, \langle J\rangle}$ is 6-dimensional as it contains two broad elements: $x_1^2dx_1dx_2\vert J^0\rangle$ and $x_2dx_1dx_2\vert J^0\rangle$. Furthermore, the following three examples have the same weight system $(3;1,1)$. The associated spaces ${\mathcal H }_{W, \langle J\rangle}$ have the same dimension, whereas their $G_W$-invariant subspaces ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ do not. 
Notice that $\omega_{J^0}=dx_1\wedge dx_2.$ [\[table-3-1-1\]]{#table-3-1-1 label="table-3-1-1"} $W$ basis of ${\mathcal H }_{W, \langle J\rangle}$ basis of ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ --------------------- ------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------- $x_1^3+x_2^3$ $\{{\bf e}_1,{\bf e}_2,x_1\omega_{J^0}\vert J^0\rangle,x_2\omega_{J^0}\vert J^0\rangle\}$ $\{{\bf e}_1,{\bf e}_2\}$ $x_1^3+x_1x_2^2$ $\{{\bf e}_1,{\bf e}_2,x_1\omega_{J^0}\vert J^0\rangle,x_2\omega_{J^0}\vert J^0\rangle\}$ $\{{\bf e}_1,{\bf e}_2, x_2\omega_{J^0}\vert J^0\rangle\}$ $x_1^2x_2+x_1x_2^2$ $\{{\bf e}_1,{\bf e}_2,x_1\omega_{J^0}\vert J^0\rangle,x_2\omega_{J^0}\vert J^0\rangle\}$ $\{{\bf e}_1,{\bf e}_2,x_1\omega_{J^0}\vert J^0\rangle,x_2\omega_{J^0}\vert J^0\rangle\}$ ### A comparison between the narrow subspace and ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$ Let $(d; w_1, \ldots, w_N)$ be the weight system of the LG pair $(W,\langle J\rangle)$. Recall that ${\bf Nar}$ is the set of narrow indices defined in [\[narrow-index\]](#narrow-index){reference-type="eqref" reference="narrow-index"}. We denote the space of narrow sectors by $${\mathcal H }_{\rm nar}=\bigoplus_{m\in {\bf Nar}}{\mathcal H }_{J^m}.$$ Let ${\mathcal H }_{k}\subset {\mathcal H }_{W, G}$ be the direct sum of subspaces ${\mathcal H }_{g\in G}$ such that $\dim_{{\mathbb C}}{\rm Fix}(g)=k,$ and ${\mathcal H }_{>0}$ be the subspace of broad elements. We have the following decompositions $${\mathcal H }_{W, \langle J\rangle}=\bigoplus_{k\in \mathbb{Z}_{\geq 0}}{\mathcal H }_{k}={\mathcal H }_0\bigoplus {\mathcal H }_{>0}.$$ It is easy to see **Proposition 28**. We have an inclusion ${\mathcal H }_{\rm nar}\subset {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. In general, ${\mathcal H }_{\rm nar}\neq {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$; here is an example. **Example 29**. For the $E_7$-singularity $W=x_1^3x_2+x_2^3$, we have $${\mathcal H }_{W, \langle J\rangle}^{G_{W}}={\mathcal H }_{\rm nar}\bigoplus {\mathbb C}\cdot x_2^2dx_1\wedge dx_2\vert J^0\rangle.$$ For Fermat polynomials, the two spaces are the same. **Proposition 30**. If $W=x_1^{a_1}+\cdots +x_N^{a_N}$, then ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}={\mathcal H }_{\rm nar}$. *Proof.* It is sufficient to show ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}\subset{\mathcal H }_{\rm nar}$. We assume there exists some $f\neq 0$ and $g=J^m$, $m\notin {\bf Nar}$, such that $f\cdot \omega_g\vert g\rangle\in{\mathcal H }_g\subset {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. Since $m\notin {\bf Nar}$, without loss of generality, we can assume ${\rm Fix}(J^m)={\rm Span} \{x_1, \cdots, x_k\}$ for some $k\geq 1$. Since $W$ is a Fermat polynomial, we have $${\rm Jac}(W_g)={\mathbb C}[x_1, \ldots, x_k]/(x_1^{a_1-1}, \cdots, x_k^{a_k-1}).$$ So $f\cdot \omega_g$ must be a scalar multiple of $\prod\limits_{i=1}^{k}x_i^{b_i}dx_i$ for some $b_i\in{\mathbb Z}$ such that $0\leq b_i\leq a_i-2$. But $\prod\limits_{i=1}^{k}x_i^{b_i}dx_i$ is not $G_W$-invariant as the action of the element $(e^{2\pi\sqrt{-1}/a_1}, 1, \cdots, 1)$ in $G_W$ sends $\prod\limits_{i=1}^{k}x_i^{b_i}dx_i$ to $$e^{2\pi\sqrt{-1}(b_1+1)/a_1} \cdot \prod\limits_{i=1}^{k}x_i^{b_i}dx_i\neq \prod\limits_{i=1}^{k}x_i^{b_i}dx_i.$$ This is a contradiction. Hence we must have ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}\subset{\mathcal H }_{\rm nar}$. 
◻ ### A quantum multiplication on ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ The following vanishing result holds for any admissible LG pair $(W, G)$. **Lemma 31**. *For any admissible pair $(W, G)$, let $\alpha\in {\mathcal H }_{W, G}$ be a homogeneous element which is not $G_W$-invariant. If $\gamma_j \in {\mathcal H }_{W, G}^{G_{W}}$ for all $j=1, \ldots, n$, then $$\label{vanishing-non-invariant} \langle\gamma_1, \ldots, \gamma_n, \alpha\rangle_{0, n+1}=0.$$* *Proof.* By the choice of the insertions, the FJRW invariant $\langle\gamma_1, \ldots, \gamma_n, \alpha\rangle_{0, n+1}$ is not $G_W$-invariant unless it vanishes. On the other hand, according to the *$G_{W}$-invariance axiom* in [@FJR Theorem 4.1.8 (10)], the underlying virtual cycle is invariant under the $G_W$-action. So the invariant must vanish. ◻ *Remark 32*. Lemma [Lemma 31](#gmax-inv-inv){reference-type="ref" reference="gmax-inv-inv"} is similar to Proposition 2.8 in [@CIR]. The result there is stated for any admissible LG pair $(W, \langle J\rangle)$ with $W$ a Fermat polynomial. The narrow subspace in [@CIR Proposition 2.8] is replaced by the $G_W$-invariant subspace ${\mathcal H }_{W, G}^{G_{W}}$ in Lemma [Lemma 31](#gmax-inv-inv){reference-type="ref" reference="gmax-inv-inv"}. Now we use Lemma [Lemma 31](#gmax-inv-inv){reference-type="ref" reference="gmax-inv-inv"} to study the spectrum of the quantum multiplication ${\nu\over d}\tau'\star_\tau$ on the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ when the admissible LG pair $(W, \langle J\rangle)$ is of general type. We remark that by the definition [\[tau-component\]](#tau-component){reference-type="eqref" reference="tau-component"} and the explicit formulas [\[coefficient-tau\]](#coefficient-tau){reference-type="eqref" reference="coefficient-tau"}, we have $\tau,\tau'\in {\mathcal H }_{W, \langle J\rangle}^{G_W}.$ Let $\alpha\in {\mathcal H }_{W, \langle J\rangle}$ be a homogeneous element which is not $G_W$-invariant. Because both $\tau', \tau\in {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$, we see that for any $\gamma\in {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$, $$\label{fake-broad-vanishing} \savebox{\@brx}{\(\m@th{\big\langle}\)}% \mathopen{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}\tau', \gamma, \alpha\savebox{\@brx}{\(\m@th{\big\rangle}\)}% \mathclose{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}_{0, 3}(\tau)=0.$$ Since the flat identity $\mathbf{1}={\bf e}_1\in {\mathcal H }_{W, \langle J\rangle}^{G_{W}}$ and $\langle\cdot, \cdot \rangle=\langle\mathbf{1}, \cdot, \cdot\rangle_{0,3}$ [@FJR Section 4.2 (68)], the pairing $\langle\cdot, \cdot \rangle$ is also $G_W$-invariant. Hence the pairing [\[residue-dual\]](#residue-dual){reference-type="eqref" reference="residue-dual"} induces a perfect pairing on ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. Using Lemma [Lemma 31](#gmax-inv-inv){reference-type="ref" reference="gmax-inv-inv"} and a similar argument as in Proposition [Proposition 22](#prop-convergence){reference-type="ref" reference="prop-convergence"}, we have **Proposition 33**. $\left({\mathcal H }_{W, \langle J\rangle}^{G_{W}}, \star_\tau\right)$ is a subalgebra of $\left({\mathcal H }_{W, \langle J\rangle}, \star_\tau\right)$. Combining [\[fake-broad-vanishing\]](#fake-broad-vanishing){reference-type="eqref" reference="fake-broad-vanishing"} with [\[quantum-product\]](#quantum-product){reference-type="eqref" reference="quantum-product"}, we obtain **Proposition 34**. 
The multiplication $\tau'\star_\tau$ is closed on ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. That is, $\tau'\star_{\tau}\in {\rm End}({\mathcal H }_{W, \langle J\rangle}^{G_{W}}).$ Similar to [@GI Proposition 7.1], we can consider the quantum spectrum of $\tau'\star_{\tau}$ on the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. **Proposition 35**. If $\lambda\in {\mathbb C}$ is an eigenvalue of $\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}$, it is also an eigenvalue of $\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. *Proof.* We have an injective algebra homomorphism ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}\hookrightarrow {\rm End}({\mathcal H }_{W, \langle J\rangle})$ given by $a\mapsto a\star_{\tau}$. Let $\lambda\in {\mathbb C}$ be an eigenvalue of $\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}$. Then as an endomorphism of ${\mathcal H }_{W, \langle J\rangle}$, $\tau'\star_{\tau}-\lambda$ is not invertible. This implies that $\tau'-\lambda {\bf 1}$ is not invertible in ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$ as well. Hence $\lambda$ is an eigenvalue of $\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}^{G_{W}}$. ◻ ### A quantum spectrum conjecture on the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ Based on Proposition [Proposition 35](#quantum-spectrum-invariant){reference-type="ref" reference="quantum-spectrum-invariant"}, we propose a variant of the quantum spectrum conjecture for the LG pair $(W, \langle J\rangle)$ on the $G_W$-invariant subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$. We denote the set of eigenvalues of ${\nu\over d}\tau'\star_{\tau}$ on the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ by ${\mathfrak E}_{G_W}$. **Conjecture 36**. Let $(W, \langle J\rangle)$ be an admissible LG pair with $W$ a nondegenerate quasihomogeneous polynomial of general type. The set ${\mathfrak E}_{G_W}$ contains the positive real number $$T:=\nu\left(d^{-d}\prod_{j=1}^Nw_j^{w_j}\right)^{1\over \nu} %\in \mathbb{R}_{>0}$$ that satisfies the following conditions: 1. For any $\lambda\in {\mathfrak E}_{G_W}$, we have $|\lambda|\leq T$. 2. The following two sets are the same: $$\{\lambda\in {\mathfrak E}_{G_W}\mid |\lambda|=T\}=\{Te^{2\pi\sqrt{-1}j/\nu}\vert j=0, \ldots, \nu-1\}.$$ 3. The multiplicity of each $\lambda\in {\mathfrak E}_{G_W}$ with $|\lambda|=T$ is one. This conjecture holds if $W$ is a Fermat homogeneous polynomial of general type, that is, $W=x_1^d+\ldots +x_N^d$ and $\nu=d-N>0$. A proof will be given in Proposition [Proposition 134](#quantum-spectrum-fermat-invariant){reference-type="ref" reference="quantum-spectrum-fermat-invariant"}. If $\nu>1$, then the stronger version Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds. # Strong asymptotic classes in FJRW theory of general type {#sec-asymptotic} In this section, we study the asymptotic expansions in FJRW theory. The eigenvalues that appear in the quantum spectrum conjectures will lead to some interesting asymptotic classes in the state space. In Section [3.1](#sec:connection){reference-type="ref" reference="sec:connection"}, we consider the quantum connection in FJRW theory and its basic properties. In Section [3.2](#sec-asymptotic-classes){reference-type="ref" reference="sec-asymptotic-classes"}, we define the *strong asymptotic classes* in FJRW theory using the asymptotic behavior of the quantum connection. 
We also describe some properties of these asymptotic classes. ## Quantum connection and the $J$-function {#sec:connection} The genus-zero FJRW theory defines a formal Dubrovin-Frobenius manifold structure on ${\mathcal H }_{W, G}$ [@FJR Corollary 4.2.8]. Similar to the *Dubrovin connection* in GW theory, there is a quantum connection in FJRW theory, defined using the quantum product $\star_{\mathbf{t}}$ in [\[quantum-product\]](#quantum-product){reference-type="eqref" reference="quantum-product"}. Let us describe this quantum connection and its properties. ### Hodge grading operator For each $g\in G$ and a homogeneous element $\phi_g\in {\mathcal H }_g$, we define the *Hodge grading number* by $$\label{hodge-grading-operator} \mu(\phi_g)=\mu(g)%={\mu_+(\phi)+\mu_-(\phi)\over 2} :={N_g\over 2}+\iota(g)-{\widehat{c}_W\over 2} ={N_g-N\over 2}+\sum_{j=1}^{N}\theta_j(g).$$ We define the *Hodge grading operator* $\mathrm{Gr}\in {\rm End}({\mathcal H }_{W, G})$ by extending the map $\mathrm{Gr}\in {\rm End}({\mathcal H }_g)$ below to ${\mathcal H }_{W, G}$ linearly: $$\label{eq:modified-hodge-operator} \mathrm{Gr}(\phi_g):=\mu(g)\phi_g.$$ According to [\[complex-degree\]](#complex-degree){reference-type="eqref" reference="complex-degree"}, we have an identity $$\label{degree-hodge} \deg_{\mathbb C}\phi_g={\widehat{c}_W\over 2}+\mu(g).$$ **Lemma 37**. *Let $\phi_g\in {\mathcal H }_g$ and $\phi_{g'}\in{\mathcal H }_{g'}$ be a pair of homogeneous elements. If the pairing $\langle\phi_g, \phi_{g'}\rangle\neq 0$, then $g'=g^{-1}$ and $$\label{adjoint-hodge-operator} \mu(g)+\mu(g')=0.$$* *Proof.* The first equality $g'=g^{-1}$ follows from the definition of the pairing. We also have $N_g=N_{g^{-1}}$ and $$\theta_j(g)+\theta_j(g^{-1})= \begin{dcases} 1, & \text{ if } g \text{ does not fix } x_j;\\ 0, & \text{ if } g \text{ fixes } x_j. \end{dcases}$$ Using the Hodge grading number formula [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"}, we must have $$\mu(g)+\mu(g')={N_g-N\over 2}+\sum_{j=1}^{N}\theta_j(g)+{N_{g^{-1}}-N\over 2}+\sum_{j=1}^{N}\theta_j(g^{-1}).$$ The result follows from the fact that $\theta_j(g)+\theta_j(g^{-1})=0$ if $g$ fixes $x_j$ and $\theta_j(g)+\theta_j(g^{-1})=1$ otherwise. ◻ ### A non-symmetric pairing in FJRW theory Recall that ${\mathbb C}^g\subset {\mathbb C}^N$ be the $g$-invariant subspace and ${\mathbb C}^N/{\mathbb C}^g$ be the quotient space for any $g\in G$. Let $\phi_g\in {\mathcal H }_g$ be the component of $\phi\in{\mathcal H }_{W, G}$ in the sector ${\mathcal H }_g$. 
We define a non-symmetric pairing on the state space ${\mathcal H }_{W, G}$ $$\left[ \cdot, \cdot \right): {\mathcal H }_{W,G}\times{\mathcal H }_{W,G}\to {\mathbb C},$$ given by $$\label{non-symmetric-pairing-qst} \left[\phi, \phi'\right)={1\over |G| }\sum_{g\in G}{(-1)^{-\mu(g)}\over (-2\pi)^{N-N_g}}\left\langle\phi_{g^{-1}}, \phi'_{g}\right\rangle.$$ In particular, if both $\phi, \phi'\in {\mathcal H }_{\bf Nar}$, we obtain $$\left[\phi, \phi'\right)={1\over |G|\cdot (-2\pi)^N}\sum_{g\in {\bf Nar}}(-1)^{-\mu(g)}\left\langle\phi_{g^{-1}}, \phi'_{g}\right\rangle.$$ ### A quantum connection For a homogeneous basis $\{\phi_i\}_{i=1}^{r}$ of ${\mathcal H }_{W, G}$, we will denote its dual basis by $\{\phi^i\}_{i=1}^{r}$, such that $\langle\phi_i, \phi^j\rangle=\delta_i^j.$ Let $$\label{basis-element-form} \mathbf{t}=\sum_{i=1}^{r}t_i\phi_i\in {\mathcal H }_{W, G}.$$ Let $\mathrm{Gr}$ be the Hodge grading operator in [\[eq:modified-hodge-operator\]](#eq:modified-hodge-operator){reference-type="eqref" reference="eq:modified-hodge-operator"} and ${\mathscr E}(\mathbf{t})$ be the Euler vector field in [\[euler-vector\]](#euler-vector){reference-type="eqref" reference="euler-vector"}. There is a quantum connection $\nabla$ on the trivial bundle $T{\mathcal H }_{W, G}\times \mathbb{C}^*\to {\mathcal H }_{W, G}\times \mathbb{C}^*$, given by $$\label{quantum-connection-formula} \begin{dcases} \nabla_{\partial\over \partial t_i}&=\frac{\partial }{\partial t_i}+\frac{1}{z}\phi_i\star_{\mathbf{t}},\\ \nabla_{z{\partial\over \partial z}}&=z{\partial\over \partial z}-{1\over z} {\mathscr E}(\mathbf{t})\star_{\mathbf{t}}+\mathrm{Gr}. \end{dcases}$$ Here we write the complex number $z\in {\mathbb C}^*$ as $$\label{multivalue-z} z=|z|e^{\sqrt{-1}\arg(z)}\in\mathbb{C}^*. % \quad \text{for} \quad (|z|, \arg(z)). %\in \mathbb{R}_+\times \mathbb{R}.$$ For any $\mathbf{t}$ in [\[basis-element-form\]](#basis-element-form){reference-type="eqref" reference="basis-element-form"}, the operator $z^{-\mathrm{Gr}}=e^{-(\log z)\mathrm{Gr}}$ takes the form $$\label{zmu-formula} z^{-\mathrm{Gr}}(\mathbf{t})=\sum_{i=1}^{r}t_i |z|^{-\mu(\phi_i)} e^{-\sqrt{-1}\mu(\phi_i)\arg(z)} \phi_i.$$ We denote the so-called *$S$-operator* in FJRW theory by $$S(\mathbf{t},z)\in\operatorname{End}(\mathcal{H}_{W,G})\otimes \mathbb{C}[\![\mathbf{t}]\!][\![z^{-1}]\!],$$ where[^1] $$\label{S-operator} S(\mathbf{t},z)\alpha=\alpha +\sum_{i=1}^{r}\sum_{n\geq1}\sum_{k\geq 0}\frac{\phi^i}{n!(-z)^{k+1}} \Big\langle \alpha\psi_1^k, \mathbf{t},\dots,\mathbf{t}, \phi_i \Big\rangle_{0,n+2}.$$ By the *homogeneity property* [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"} and genus zero *Topological Recursion Relation* [@FJR Theorem 4.2.9], the operator $S(\mathbf{t}, z)$ provides solutions of the quantum connection. **Proposition 38**. [@CIR Proposition 2.7] [\[fundamental-solution-infty\]]{#fundamental-solution-infty label="fundamental-solution-infty"} The set $\{S(\mathbf{t},z)z^{-\mathrm{Gr}}\alpha\mid \alpha\in{\mathcal H }_{W,G}\}$ is a set of flat sections of $\nabla$. That is, $$\label{flat-section-nabla} \begin{dcases} \nabla_{\partial\over \partial t_i}(S(\mathbf{t},z)z^{-\mathrm{Gr}}\alpha)=0,\\ \nabla_{\partial\over \partial z}(S(\mathbf{t},z)z^{-\mathrm{Gr}}\alpha)=0. 
\end{dcases}$$ For $\alpha, \beta\in\mathcal{H}_{W,G}$, we have $$\label{adjoint-S-operator} \langle S(\mathbf{t},-z)\alpha, S(\mathbf{t},z)\beta\rangle=\langle\alpha,\beta\rangle.$$ ### The FJRW $J$-function and Lagrangian cone {#sec-small-j-function} Using the genus zero topological recursion relation, the inverse operator of $S(\mathbf{t}, z)$ is given by $$\label{inverse-S-operator} S(\mathbf{t},z)^{-1}\alpha=\alpha +\sum_{i=1}^{r}\sum_{n\geq1}\sum_{k\geq 0}\frac{\phi^i}{n!z^{k+1}} \langle \phi_i\psi_1^k, \mathbf{t},\dots,\mathbf{t}, \alpha \rangle^{\mathrm{FJRW}}_{0,n+2}.$$ Applying the string equation and [@FJR-book (68)] to [\[inverse-S-operator\]](#inverse-S-operator){reference-type="eqref" reference="inverse-S-operator"}, we have $$\label{S-operator-unit} z\cdot S({\bf t}, z)^{-1}(\mathbf{1})=z\mathbf{1}+{\bf t}+\sum_{i=1}^{r}\sum_{n\geq 2}{\phi^{i}\over n!}\Big\langle{\phi_i\over z-\psi_1}, \mathbf{t}, \ldots, \mathbf{t}\Big\rangle_{0, n+1}.$$ We define the *FJRW $J$-function* of an LG pair $(W, G)$ by $$\begin{aligned} J(\mathbf{t},-z)=-z\mathbf{1}+\mathbf{t} +\sum_{i=1}^{r}\sum_{n\geq2}\sum_{k\geq0}\frac{\phi^i}{n!(-z)^{k+1}} \Big\langle\mathbf{t},\ldots,\mathbf{t}, \phi_i\psi_{n+1}^k\Big\rangle_{0,n+1}. \end{aligned}$$ As a consequence of [\[S-operator-unit\]](#S-operator-unit){reference-type="eqref" reference="S-operator-unit"}, we have $$\label{J-equal-S} J(\mathbf{t}, -z)=-z S({\bf t}, -z)^{-1}(\mathbf{1}).$$ Sometimes it is more convenient to consider *the modified $J$-function* $$\label{modified-j-function} \widetilde{J}(\mathbf{t}, z):=z^{\widehat{c}_W\over 2} z^{\mathrm{Gr}} S({\bf t}, z)^{-1}(\mathbf{1}).$$ Following Givental [@Giv], the *loop space* ${\mathcal H }_{W, G}(\!(z^{-1})\!)$ consisting of Laurent series in ${1\over z}$ with vector coefficients carries a symplectic form induced by the pairing $\langle\cdot, \cdot\rangle$. This infinite dimensional symplectic vector space can be identified with $T^*{\mathcal H }_{W, G}[z]$, which is the cotangent bundle of ${\mathcal H }_{W, G}[z]$. Using the *dilaton shift* $${\bf q}(z)=-z\mathbf{1}+{\bf t}(z)$$ and the Darboux coordinates $({\bf p}, {\bf q})\in T^*{\mathcal H }_{W, G}[z]$, the graph of the differential of the genus zero generating function $$\mathcal{F}_0(\mathbf{t}(z))=\sum_{n}{1\over n!}\savebox{\@brx}{\(\m@th{\big\langle}\)}% \mathopen{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}\mathbf{t}(z), \ldots, \mathbf{t}(z)\savebox{\@brx}{\(\m@th{\big\rangle}\)}% \mathclose{\copy\@brx\kern-0.5\wd\@brx\usebox{\@brx}}_{0, n}(\mathbf{t}(z))$$ defines a formal germ of Lagrangian submanifold $\mathcal{L}$ in ${\mathcal H }_{W, G}(\!(z^{-1})\!)$, where $$\mathcal{L}=\{({\bf p}, {\bf q})\in T^*{\mathcal H }_{W, G}[z]: {\bf p}=d_{\bf q}\mathcal{F}_0\}.$$ Following [@Giv Theorem 1], the FJRW $J$-function lies on the Lagrangian cone. **Proposition 39**. The $J$-function $J(\mathbf{t}, -z)$ belongs to the Lagrangian cone $\mathcal{L}.$ ### Restriction to $\tau(t)$. We want to restrict the meromorphic connection to $\mathbf{t}=\tau(t)$, we obtain the *small $J$-function* $J(\tau(t), z)$ and the *modified small $J$-function* $$\widetilde{J}(\tau(t),z):=\widetilde{J}(\mathbf{t}, z)\vert_{\mathbf{t}=\tau(t)}=z^{\widehat{c}_W\over 2} z^{\mathrm{Gr}} S(\tau(t), z)^{-1}(\mathbf{1}).$$ **Lemma 40**. 
*If $\tau(t)=t{\bf e}_2,$ we have a formula for the modified small $J$-function $$\label{modified-j-formula} \widetilde{J}(\tau(t), z) = %\left( \mathbf{1} + ({t\cdot z^{-{\nu\over d}}}){\bf e}_2 +\sum_{i=1}^{r}\sum_{n\geq 2}{({t\cdot z^{-{\nu\over d}}})^{n}\over n!} \Big\langle\phi_i\psi_1^{k}, {\bf e}_2, \ldots, {\bf e}_2\Big\rangle_{0, n+1} \ \phi^i. %\right).$$ where the integer $k$ satisfies $$\label{value-k-jfunc} k=-\mu(\phi_i)+{n\nu\over d}+{\widehat{c}_W\over 2}-2.$$* *Proof.* Using the homogeneity property [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"} and the relation [\[degree-hodge\]](#degree-hodge){reference-type="eqref" reference="degree-hodge"}, we see if the FJRW invariant $\langle\phi_i\psi_1^{k}, {\bf e}_2, \ldots, {\bf e}_2\rangle_{0, n+1}\neq 0$, we must have $$\label{modify-j-degree} \widehat{c}_W-3+n+1={\widehat{c}_W\over 2}+\mu(\phi_i)+k+n\left({\widehat{c}_W\over 2}+\mu(J^2)\right).$$ By [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"}, we have $$\mu(J^2)%=-{N\over 2}+\sum_{j=1}^{N}2q_j =-{\widehat{c}_W\over 2}+1-{\nu\over d}.$$ Plugging it into [\[modify-j-degree\]](#modify-j-degree){reference-type="eqref" reference="modify-j-degree"}, we see that the integer $k$ is determined by $\phi_i$ and $n$, as in [\[value-k-jfunc\]](#value-k-jfunc){reference-type="eqref" reference="value-k-jfunc"}. So for a fixed $n$, the coefficients of $\phi^i$ has a factor of $$z^{{\widehat{c}_W\over 2}+\mu(\phi^i)-1-(k+1)}=z^{-{n\nu\over d}}.$$ Here the equality follows from [\[adjoint-hodge-operator\]](#adjoint-hodge-operator){reference-type="eqref" reference="adjoint-hodge-operator"} and [\[value-k-jfunc\]](#value-k-jfunc){reference-type="eqref" reference="value-k-jfunc"}. ◻ *Remark 41*. In fact, we can simplify the formula further by using the *selection rule* [\[selection-rule\]](#selection-rule){reference-type="eqref" reference="selection-rule"}. The nonvanishing condition $\langle\phi_i\psi_1^{k}, {\bf e}_2, \ldots, {\bf e}_2\rangle_{0, n+1}\neq 0$ implies that $\phi_i\in {\mathcal H }_{J^{-n-1}}.$ ### Adjoint operators For any operator $$A(z)\in {\rm End}({\mathcal H }_{W,G})\otimes_{{\mathbb C}}{\mathbb C}[z][\![z^{-1}]\!],$$ we denote by $A^*(z)$ its *adjoint operator* with respect to the pairing $\langle\cdot, \cdot\rangle.$ That is $$\label{adjoint-operator} \langle A(z)\alpha, \beta\rangle=\langle\alpha, A^*(z)\beta\rangle.$$ **Lemma 42**. 1. *The quantum product ${\nu\over d}\tau'\star_{\tau}$ is self-adjoint, i.e., $$\left \langle {\nu\over d}\tau'\star_{\tau} \alpha, \beta \right\rangle = \left\langle \alpha, {\nu\over d}\tau'\star_{\tau} \beta \right\rangle.$$* 2. *The Hodge grading operator $\mathrm{Gr}$ is skew-adjoint, i.e, $$\left \langle \mathrm{Gr} (\alpha), \beta \right\rangle = - \left\langle \alpha, \mathrm{Gr} (\beta) \right\rangle.$$* *Proof.* (1) follows easily from the definition of quantum product and (2) follows from formula [\[adjoint-hodge-operator\]](#adjoint-hodge-operator){reference-type="eqref" reference="adjoint-hodge-operator"}. ◻ **Lemma 43**. *Let $\lambda\in {\mathbb C}^*$ be an eigenvalue of ${\nu\over d}\tau'\star_{\tau}$ and let $E(\lambda)\subset {\mathcal H }_{W, \langle J\rangle}$ be its eigenspace. Suppose that $E(\lambda)$ is one-dimensional. 
Then $H'_{\lambda}:=\mathrm{Im}(\lambda-{\nu\over d}\tau'\star_{\tau})$ is the orthogonal complement to $E(\lambda)$ in ${\mathcal H }_{W, \langle J\rangle}$ with respect to the pairing $\langle \cdot, \cdot\rangle$ and the Hodge grading operator $\mathrm{Gr}$ satisfies $\mathrm{Gr}(E(\lambda))\subset H'_{\lambda}$.* *Proof.* The same arguments as in the proof of [@GGI Lemma 3.2.2] work here. The proof relies on the self-adjointness of the quantum product and the skew-adjointness of the Hodge grading operator, which are stated in Lemma [Lemma 42](#adjoint-hodge){reference-type="ref" reference="adjoint-hodge"}. ◻ *Remark 44*. The above lemma shows that for any nonzero $\lambda$-eigenvector $\alpha$, we have $\langle\alpha,\alpha \rangle\neq 0$. The element $\alpha^{*}:=\alpha/\langle\alpha,\alpha\rangle$ is dual to $\alpha$ in the sense that $\langle\alpha^{*},\alpha\rangle=1$ and $\alpha$ is orthogonal to any element in the orthogonal complement of ${\mathbb C}\psi_{\lambda}$. ## Strong asymptotic classes {#sec-asymptotic-classes} Now we consider strong asymptotic classes in FJRW theory of $(W, \langle J\rangle)$ when $W$ is of general type. The discussion here is parallel to [@GGI]. One minor difference is that we consider the asymptotic classes with respect to any eigenvalue with the maximal norm. However, the proof easily follows from those for $\lambda\in \mathbb{R}_+$ (see Proposition [Proposition 47](#weak-to-strong){reference-type="ref" reference="weak-to-strong"}). ### Strong asymptotic classes {#strong-asymptotic-classes-1} According to Proposition [Proposition 21](#prop-euler-tau){reference-type="ref" reference="prop-euler-tau"}, when we restrict the quantum connection in [\[quantum-connection-formula\]](#quantum-connection-formula){reference-type="eqref" reference="quantum-connection-formula"} to $\mathbf{t}=\tau$, we obtain a meromorphic connection on ${\mathbb C}$ which takes the form $$\label{meromorphic-connection-tau} \nabla_{z{\partial\over \partial z}}=z{\partial\over \partial z}-{1\over z}\cdot {\nu\over d}\tau'\star_{\tau} +\mathrm{Gr}.$$ Proposition [\[fundamental-solution-infty\]](#fundamental-solution-infty){reference-type="ref" reference="fundamental-solution-infty"} implies **Lemma 45**. *For each $\alpha\in {\mathcal H }_{W, \langle J\rangle},$ the (multi-valued) section $$\label{flat-section-formula} \Phi(\alpha):=(2\pi)^{-{{\widehat{c}_W}\over 2}}S(\tau, z)z^{-\mathrm{Gr}}\alpha$$ is a flat section of the meromorphic connection.* Using the notations in [\[multivalue-z\]](#multivalue-z){reference-type="eqref" reference="multivalue-z"} and [\[zmu-formula\]](#zmu-formula){reference-type="eqref" reference="zmu-formula"}, it is convenient to fix $\arg(z)=\theta\in \mathbb{R}$ and consider a single-valued flat section $$\Phi_\theta(\alpha):=\Phi(\alpha)\vert_{\arg(z)=\theta} %: \mathbb{R}_+\to \cH_{W, \<J\>}.$$ defined over the ray $L_{\theta}=\mathbb{R}_{>0} \exp({\sqrt{-1}\theta})$. Now we define an asymptotic class based on the asymptotic behavior of this flat section. **Definition 46** (Strong asymptotic classes). Let $\lambda\in {\mathbb C}^*$ be an eigenvalue of ${\nu\over d}\tau'\star_{\tau}$ with the maximal norm. 
We say a nonzero element $\alpha\in {\mathcal H }_{W, \langle J\rangle}$ is a *strong asymptotic class* with respect to $\lambda$ if there exists some $m\in\mathbb{R}$ such that $$\label{asymp-class-behavior} |\!| e^{\lambda \over z} \Phi_{\arg(\lambda)}(\alpha) |\!| = O(|z|^{-m})$$ as $z\to 0$ along the ray $L_{\arg(\lambda)}=\mathbb{R}_{>0} \exp({\sqrt{-1}\arg(\lambda)})$. Such a strong asymptotic class is called *principal* if $\lambda\in \mathbb{R}_+$. Let $\lambda\in{\mathbb C}^*$ be an eigenvalue of ${\nu\over d}\tau'\star_{\tau}\in {\rm End}({\mathcal H }_{W, \langle J\rangle})$ with the maximal norm. We denote the distinct eigenvalues of ${\nu\over d}\tau'\star_{\tau}$ by $$\lambda_1=\lambda, \lambda_2, \ldots, \lambda_k,$$ where $\lambda_i$ has multiplicity $N_i$. Note that $N_{1}+\dots+N_{k}=r$. We assume that the multiplicity of $\lambda$ is one, i.e. $N_{1}=1$. Define $$U_\lambda= \begin{bmatrix} u_1&&&\\ &u_2&&\\ &&\ddots&\\ &&&u_r \end{bmatrix} = \begin{bmatrix} \lambda&&&\\ &\lambda_2 {\rm Id}_{N_2}&&\\ &&\ddots&\\ &&&\lambda_k {\rm Id}_{N_k} \end{bmatrix} ,$$ where ${\rm Id}_{N_i}$ is the identity matrix of size $N_i$. Let $\Psi:{\mathbb C}^{r}\rightarrow {\mathcal H }_{W, \langle J\rangle}$ be a linear isomorphism that transforms ${\nu\over d}\tau'\star_{\tau}$ to the block-diagonal form $$\Psi^{-1} \left( {\nu\over d}\tau'\star_{\tau} \right) \Psi = \begin{bmatrix} A_1&&&\\ &A_2&&\\ &&\ddots&\\ &&&A_k \end{bmatrix},$$ where $A_{i}$ is a matrix of size $N_{i}$ and $A_{i}-\lambda_{i} I_{N_{i}}$ is nilpotent. **Proposition 47**. With the notation as above, there exists a fundamental matrix solution for the quantum connection [\[meromorphic-connection-tau\]](#meromorphic-connection-tau){reference-type="eqref" reference="meromorphic-connection-tau"} of the form $$%\label{eq:fundamental-solution} P(z)e^{-U_{\lambda}/z} \begin{bmatrix} 1&&&\\ &F_2(z)&&\\ &&\ddots&\\ &&&F_k(z) \end{bmatrix}$$ over a sufficiently small sector $$\mathcal{S}_\lambda=\{z\in {\mathbb C}^*\mid |z|\leq 1, |\arg(z)-\arg(\lambda)|\leq \epsilon\}$$ with $\epsilon>0$ such that 1. $P(z)$ has an asymptotic expansion $P(z)\sim \Psi+P_1 z+P_2 z^2+\dots$ as $|z|\to 0$ in $\mathcal{S}_{\lambda}$. 2. $F_i(z)$ is a $\mathrm{GL}_{N_i}({\mathbb C})$-valued function satisfying the estimate $$\max\left( |\!|F_i(z)|\!|, |\!|F_i(z)^{-1}|\!| \right)\leq C e^{\delta |z|^{-p}}$$ on $\mathcal{S}_\lambda$ for some $C, \delta>0,$ and $0<p<1$. *Proof.* In the case when $\lambda$ is a positive real number, the same arguments as in the proof of [@GGI Proposition 3.2.1] work here. The proof relies on Sibuya and Wasow's basic results on ODEs and Lemma [Lemma 43](#lem:orthogonal-complement){reference-type="ref" reference="lem:orthogonal-complement"} on the $\lambda$-eigenspace of ${\nu\over d}\tau'\star_{\tau}$. For the general case, let $\theta=\arg(z)$. We consider the change of coordinate $z=e^{2\pi \sqrt{-1}\theta}\tilde{z}$, and the quantum connection [\[meromorphic-connection-tau\]](#meromorphic-connection-tau){reference-type="eqref" reference="meromorphic-connection-tau"} becomes $$\label{eq:new-quantum-connection} \tilde{z}{\partial\over \partial \tilde{z}}-{e^{-2\pi\sqrt{-1}\theta}\over \tilde{z}}\cdot {\nu\over d}\tau'\star_{\tau} +\mathrm{Gr}.$$ Note that the eigenvalues of $e^{-2\pi\sqrt{-1}\theta} ({\nu\over d}\tau'\star_{\tau})$ are $e^{-2\pi\sqrt{-1}\theta}\lambda_{i}$, and $|\lambda|$ is the one with the maximal norm. 
Hence according to the previous special case of the proposition, there exists a fundamental matrix solution for [\[eq:new-quantum-connection\]](#eq:new-quantum-connection){reference-type="eqref" reference="eq:new-quantum-connection"} of the form $$%\label{eq:fundamental-solution} \tilde{P}(\tilde{z})e^{-U/\tilde{z}} \begin{bmatrix} 1&&&\\ &\tilde{F}_2(\tilde{z})&&\\ &&\ddots&\\ &&&\tilde{F}_k(\tilde{z}) \end{bmatrix}$$ over $\mathcal{S}=\{\tilde{z}\in {\mathbb C}^*\mid |\tilde{z}|\leq 1, |\arg(\tilde{z})|\leq \epsilon\}$, where $U=e^{-2\pi\sqrt{-1}\theta}U_{\lambda}$, and $\tilde{P}(\tilde{z})$ and $\tilde{F}_{i}(\tilde{z})$ satisfy asymptotic conditions similar to (1) and (2) over $\mathcal{S}$. This implies the proposition in the original $z$-coordinate. ◻ Figure out how to phrase the following remark *Remark 48*. By Definition [Definition 46](#def-asymptotic-class){reference-type="ref" reference="def-asymptotic-class"}, we see if $\lambda=T$ is the principal spectrum, then the flat section with respect to the principle asymptotic class will have the smallest asymptotic expansion when $z$ approaches to $+0$. [Discuss the major difference here.]{style="color: red"} The following proposition is the counterpart of [@GGI Proposition 3.3.1] in the LG setting and is an easy consequence of Proposition [Proposition 47](#weak-to-strong){reference-type="ref" reference="weak-to-strong"}. **Proposition 49**. Let $\lambda\in{\mathbb C}^*$ be an eigenvalue of ${\nu\over d}\tau'\star_{\tau}\in {\rm End}({\mathcal H }_{W, \langle J\rangle})$ with the maximal norm. If the multiplicity of $\lambda$ is one, then 1. the space of strong asymptotic classes with respect to $\lambda$ is 1-dimensional. 2. For a strong symptotic classes $\alpha$ with respect to $\lambda$, the limit $\lim\limits_{|z|\to+0}e^{\lambda \over z} \Phi_{\arg(\lambda)}(\alpha)$ exists and lies in the $\lambda$-eigenspace $E(\lambda)$ of ${\nu\over d}\tau'\star_{\tau}$. ### A formula for strong asymptotic classes We introduce a *dual quantum connection* $\nabla^{\vee}$ for the meromorphic connection [\[meromorphic-connection-tau\]](#meromorphic-connection-tau){reference-type="eqref" reference="meromorphic-connection-tau"}, given by $$\label{dual-connection} \nabla^\vee_{z{\partial\over \partial z}}=z{\partial\over \partial z}+{1\over z} \left({\nu\over d}\tau'\star_{\tau} \right)^*- \mathrm{Gr}^*.$$ We identity ${\mathcal H }_{W,G}$ with its dual $({\mathcal H }_{W,G})^{\vee}$ via the nondegenerate pairing $\langle\cdot, \cdot\rangle$. Under this identification, Lemma [Lemma 42](#adjoint-hodge){reference-type="ref" reference="adjoint-hodge"} implies that $$\label{eq:sym-anti-sym} \left({\nu\over d}\tau'\star_{\tau} \right)^*={\nu\over d}\tau'\star_{\tau},\quad \mathrm{Gr}^*=- \mathrm{Gr},$$ and formula [\[adjoint-S-operator\]](#adjoint-S-operator){reference-type="ref" reference="adjoint-S-operator"} implies $$\label{eq:S-operator-adjoint} S^*(\mathbf{t}, z)=S^{-1}(\mathbf{t}, -z).$$ For each $\alpha\in {\mathcal H }_{W, \langle J\rangle},$ we consider the inverse adjoint of the flat section [\[flat-section-formula\]](#flat-section-formula){reference-type="eqref" reference="flat-section-formula"} given by $$\label{dual-section} \Phi^\vee(\alpha) :=(2\pi)^{{\widehat{c}_W\over 2}}(S^{-1})^{*}(\tau,z) z^{\mathrm{Gr}^{*}}\alpha =(2\pi)^{{\widehat{c}_W\over 2}}S(\tau,-z) z^{-\mathrm{Gr}}\alpha.$$ Here the second equality follows from [\[eq:sym-anti-sym\]](#eq:sym-anti-sym){reference-type="eqref" reference="eq:sym-anti-sym"}. **Proposition 50**. 
The following properties hold: 1. The section $\Phi^\vee(\alpha)$ is a flat section of the connection $\nabla^{\vee}$. 2. We have $\Phi^\vee=(\Phi^{-1})^{*}$. Equivalently, we have $$\langle\Phi^\vee(\alpha), \Phi(\beta)\rangle=\langle\alpha, \beta\rangle, \quad \forall \quad \alpha, \beta \in {\mathcal H }_{W, \langle J\rangle}.$$ *Proof.* (2) follows from the definition [\[dual-section\]](#dual-section){reference-type="eqref" reference="dual-section"} of $\Phi^\vee$. Using the identities [\[eq:sym-anti-sym\]](#eq:sym-anti-sym){reference-type="eqref" reference="eq:sym-anti-sym"}, we can identify the dual flat connection $\nabla^{\vee}$ with the quantum connection $\nabla$ with the opposite sign of $z$. Since $\Phi$ and $\Phi^{\vee}$ agree up to a constant after changing the sign of $z$ (see [\[flat-section-formula\]](#flat-section-formula){reference-type="eqref" reference="flat-section-formula"}, [\[dual-section\]](#dual-section){reference-type="eqref" reference="dual-section"}), we obtain the proof of statement (1). ◻ **Lemma 51**. *We have a map $$\Phi^\vee: {\mathcal H }_{W, G}\longrightarrow\{s^\vee: \mathbb{R}_{>0}\to {\mathcal H }_{W, G}\mid \nabla^\vee s^\vee(z)=0\}.$$ That is, for each $\alpha\in {\mathcal H }_{W, G},$ $\Phi^\vee(\alpha)$ is a flat section of the meromorphic connection $\nabla^\vee$.* *Proof.* We can check that $${d\over dz}\langle s_1(z), s_2(z)\rangle=\langle\nabla^\vee s_1(z), s_2(z)\rangle+\langle s_1(z), \nabla s_2(z)\rangle.$$ ◻ Similar to [@GGI Definition 3.6.3], we have the following nice relation between the dual flat sections and the modified small $J$-function. **Lemma 52**. *For any $\alpha\in {\mathcal H }_{W, \langle J\rangle}$, we have $$\left({z\over 2\pi}\right)^{{\widehat{c}_W\over 2}}\langle\Phi^\vee(\alpha), \mathbf{1}\rangle=\langle\alpha, \widetilde{J}(\tau, z)\rangle.$$* *Proof.* We have $$\begin{aligned} \left({z\over 2\pi}\right)^{{\widehat{c}_W\over 2}}\langle\Phi^\vee(\alpha), \mathbf{1}\rangle &=&z^{{\widehat{c}_W\over 2}}\langle S(\tau, -z)z^{-\mathrm{Gr}}\alpha, \mathbf{1}\rangle\\ &=&z^{{\widehat{c}_W\over 2}}\langle\alpha, z^{-\mathrm{Gr}^*}S^*(\tau, -z)\mathbf{1}\rangle\\ &=&\langle\alpha, z^{{\widehat{c}_W\over 2}}z^{\mathrm{Gr}}S(\tau, z)^{-1}\mathbf{1}\rangle\\ &=&\langle\alpha, \widetilde{J}(\tau, z)\rangle. %\\&=&\left\<\alpha, z^{\mu-1} I_{\rm FJRW}(t,z)\right\>.\end{aligned}$$ Here the first equality is the definition of the section $\Phi^\vee(\alpha)$. The third equality uses Lemma [Lemma 42](#adjoint-hodge){reference-type="ref" reference="adjoint-hodge"} and [\[eq:S-operator-adjoint\]](#eq:S-operator-adjoint){reference-type="eqref" reference="eq:S-operator-adjoint"}. The last equality follows from the definition [\[modified-j-function\]](#modified-j-function){reference-type="eqref" reference="modified-j-function"} of the modified $J$-function. ◻ Similar to [@GGI Theorem 3.6.8], the strong asymptotic classes are related to the modified $J$-function under some mild assumption. **Proposition 53**. Let $\lambda\in{\mathbb C}^*$ be a multiplicity-one eigenvalue of ${\nu\over d}\tau'\star_{\tau}\in {\rm End}({\mathcal H }_{W, \langle J\rangle})$ with the maximal norm. Let $A_\lambda$ be a strong asymptotic class with respect to $\lambda$. 
If $\langle\mathbf{1}, A_\lambda\rangle\neq 0$, then $$\label{limit-j-function} \lim_{\substack{\arg(z)=\arg(\lambda)\\|z|\to+0}} {\widetilde{J}(\tau, z) \over \langle\mathbf{1}, \widetilde{J}(\tau, z) \rangle} ={A_\lambda \over \langle\mathbf{1}, A_\lambda\rangle}.$$ *Proof.* By Proposition [Proposition 49](#prop:existence-of-limit){reference-type="ref" reference="prop:existence-of-limit"}, we have $$\psi_\lambda:= \lim_{|z|\to+0}e^{\lambda \over z} \Phi_{\arg(\lambda)}(A_\lambda)\in E(\lambda).$$ By Remark [Remark 44](#rem:nonzero-norm){reference-type="ref" reference="rem:nonzero-norm"}, the element $\psi_\lambda^*:=\psi_\lambda/\langle \psi_\lambda,\psi_\lambda\rangle\in {\mathcal H }_{W, \langle J\rangle}$ is dual to $\psi_\lambda$. Follow the argument in the proof of [@GGI Proposition 3.6.2], for each $\alpha\in {\mathcal H }_{W, \langle J\rangle}$, the following limit exists and $$\label{leading-term-negative} \lim_{\substack{\arg(z)=\arg(\lambda)\\|z|\to+0}} e^{-{\lambda\over z}}\Phi^\vee(\alpha)(z)=\langle\alpha, A_\lambda\rangle\psi_\lambda^*.$$ Thus we have $$\lim_{\substack{\arg(z)=\arg(\lambda)\\|z|\to+0}} {\langle \alpha, \widetilde{J}(\tau, z) \rangle\over \langle\mathbf{1}, \widetilde{J}(\tau, z) \rangle} =\lim_{\substack{\arg(z)=\arg(\lambda)\\|z|\to+0}} {e^{-{\lambda\over z}}\langle\Phi^\vee(\alpha), \mathbf{1}\rangle\over e^{-{\lambda\over z}}\langle\Phi^\vee(\mathbf{1}), \mathbf{1}\rangle} ={ \langle\alpha, A_\lambda\rangle\langle\psi_\lambda^*, \mathbf{1}\rangle\over \langle\mathbf{1}, A_\lambda\rangle\langle\psi_\lambda^*, \mathbf{1}\rangle} ={\langle\alpha, A_\lambda\rangle \over \langle\mathbf{1}, A_\lambda\rangle}.$$ Here the first equality uses Lemma [Lemma 52](#dual-flat-section-via-j-function){reference-type="ref" reference="dual-flat-section-via-j-function"}. The second equality uses the formula [\[leading-term-negative\]](#leading-term-negative){reference-type="eqref" reference="leading-term-negative"}. The last equality uses Lemma [Lemma 54](#dual-eigenvector-nonzero){reference-type="ref" reference="dual-eigenvector-nonzero"} below. ◻ Follow the argument in the proof of [@GGI Lemma 3.6.10], we have **Lemma 54**. *The element $\psi_\lambda^*$ satisfies $\langle\psi_\lambda^*, \mathbf{1}\rangle\neq 0$.* # Weak asymptotic classes via mirror symmetry {#sec-weak} The asymptotic condition [\[asymp-class-behavior\]](#asymp-class-behavior){reference-type="eqref" reference="asymp-class-behavior"} in the definition of strong asymptotic classes is a strong constraint. Now we consider elements that satisfies a different asymptotic condition concerning only the coefficient of ${\bf e}_{d-1}$ in $e^{\lambda \over z} \Phi_{\arg(\lambda)}(\alpha)$. **Definition 55** (Weak asymptotic classes). We say $\alpha\in H_{\rm nar}$ is a *weak asymptotic class* with respect to $\lambda\in {\mathbb C}^*$ if there exists some $m\in\mathbb{R}$, such that when $$\arg(z)=\arg(\lambda)\in [0, 2\pi),$$ we have $$\label{leading-asymp-class-behavior} \left|e^{\lambda\over z}\cdot \langle S(\tau, z)z^{-\mathrm{Gr}}\alpha, \mathbf{1}\rangle\right|=O(|z|^{m})\quad\mathrm{as}\ |z|\to 0.$$ We say a weak asymptotic class is principal if $\lambda\in {\mathbb R}_+$. *Remark 56*. At a first glance, the two asymptotic classes (strong and weak) may have different properties. For example, neither we require the complex number $\lambda$ to be an eigenvalue of the quantum multiplication in the definition of weak asymptotic classes, nor we require the strong asymptotic class to be a narrow element. 
However, we expect they are the same (up to a scalar). See Proposition [Proposition 80](#weak=strong){reference-type="ref" reference="weak=strong"}. In this section, we will calculate the weak asymptotic classes using mirror symmetry (without the assumption on the quantum spectrum conjecture). Here is the plan. In Section [4.1](#sec-i-function){reference-type="ref" reference="sec-i-function"}, we study the differential equations for the small $I$-function and state a mirror conjecture. In Section [4.2](#sec-basic-ghe){reference-type="ref" reference="sec-basic-ghe"}, we recall some basics of generalized hypergeometric functions. After a change of coordinates, we prove the coefficients of the modified small $I$-function satisfy a generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}. In Section [4.3](#sec-barnes-formula){reference-type="ref" reference="sec-barnes-formula"}, we review an asymptotic expansion formula of generalized hypergeometric functions, found by Barnes [@Bar] in the year of 1906. When the mirror conjecture holds, Barnes' formula allows us to calculate the weak asymptotic classes. In Section [4.4](#sec-dominance-order){reference-type="ref" reference="sec-dominance-order"}, we explain the meaning of the asymptotic classes in terms of the dominance order of asymptotic expansions. ## The small $I$-function and mirror symmetry {#sec-i-function} We now study some properties of the small $I$-function defined in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} $$I_{\rm FJRW}^{\rm sm}(t,z)=z\sum_{m\in {\bf Nar}}\sum_{\ell=0}^{\infty}\frac{t^{d\ell+m}}{z^{d\ell+m-1}\Gamma(d\ell+m)}\prod_{j=1}^{N}{z^{\lfloor {w_j\over d}(d\ell+m)\rfloor}\Gamma({w_j\over d}(d\ell+m))\over \Gamma\left(\left\{{w_j\over d}\cdot m\right\}\right)}{\bf e}_m.$$ ### Differential equations for the small $I$-function We denote the cardinality of the set $${\bf Nar}=\left\{m\in \mathbb{Z}_{>0} \mid 0< m<d, \text{ and } d\nmid w_j\cdot m, \ \forall \ 1\leq j\leq N\right\}$$ in [\[narrow-index\]](#narrow-index){reference-type="eqref" reference="narrow-index"} by $$\label{definition-q} q+1=|{\bf Nar}|.$$ It is convenient to reindex the set by the increasing bijection $$\label{narrow-bijection} \mathscr{N}: {\bf Nar}\to \{0, 1, \ldots, q\}.$$ For each integer $j$ such that $0\leq j\leq q$, we write $m=\mathscr{N}^{-1}(j)\in {\bf Nar}$. Now we define a set of $q$ positive rational numbers $$\label{rho-tuple} \rho_Q=\{\rho_1, \ldots, \rho_q\},$$ where each $\rho_j$ is given by $$\label{reindex-narrow} \rho_j=\rho_{\mathscr{N}(m)}={1\over d}+1-{m\over d}, \quad m\in{\bf Nar}.$$ This formula includes the case $\rho_0=1$. The set $\{\rho_0, \rho_1, \ldots, \rho_q\}$ is arranged in a decreasing order. Now we consider a collection of $\sum\limits_{j=1}^{N}w_j$ positive rational numbers $$\label{alpha-tilde} \widetilde{\alpha}=\left\{{1\over d}+{k\over w_j} \mid \quad j=1, 2, \ldots, N, \quad k=0, 1, \ldots, w_j-1 \right\}.$$ Numbers may appear repeatedly in $\widetilde{\alpha}$. For each $n\in \{0, 1, \ldots, d-1\}\backslash{\bf Nar},$ by the definition of ${\bf Nar}$, there exists some pair $(k, w_j)$ such that $k d=nw_j$. For each such $n$, we delete the number $$\label{common-numbers} {1\over d}+{k\over w_j}={n+1\over d}$$ once from the set $\widetilde{\alpha}$. 
We eventually obtain a reduced set of $p$ numbers, denoted by $$\label{alpha-tuple} \alpha_P=\{\alpha_1, \ldots, \alpha_p\}.$$ According to the construction, the integer $p$ satisfies $$\label{definition-p} p=\sum_{j=1}^{N}w_j-(d-|{\bf Nar}|)=|{\bf Nar}|-\nu.$$ Using [\[definition-q\]](#definition-q){reference-type="eqref" reference="definition-q"} and [\[definition-p\]](#definition-p){reference-type="eqref" reference="definition-p"}, we have $$\label{p-q-nu-relation} q+1-p=\nu.$$ We provide a list of examples in Table [\[table-indicies\]](#table-indicies){reference-type="ref" reference="table-indicies"} below. Using the notation above, we have **Proposition 57**. The function $I_{\rm FJRW}^{\rm sm}(t,z)$ in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} satisfies the differential equation $$\label{i-function-ode} \left(t^{d}z^p\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}}\prod_{i=1}^{p}\left(t{\partial\over\partial t}+\alpha_i d-1\right) -z^{q+1}\prod_{j=0}^{q}\left(t{\partial \over \partial t}+(\rho_j-1)d-1\right)\right) I_{\rm FJRW}^{\rm sm}(t,z)=0.$$ *Proof.* We first prove that $I_{\rm FJRW}^{\rm sm}(t,z)$ satisfies the differential equation $$\label{eq:dfq-quasihomogeneous-Fermat} \left(t^{d}\prod_{j=1}^{N}\prod_{c=0}^{w_j-1}z\left({w_j\over d}t{\partial\over\partial t}+c\right) \Big/\left(zt{\partial\over\partial t}\right) -\prod_{c=1}^{d-1}z\left(t{\partial \over \partial t}-c\right)\right) I_{\rm FJRW}^{\rm sm}(t,z)=0.$$ The proof is straightforward. We have $$\left({w_j\over d}t{\partial\over\partial t}+c\right){t^{d\ell+m}}=\left({w_j\over d}(d\ell+m)+c\right){t^{d\ell+m}}.$$ Since $m\in {\bf Nar}, \ell\geq 0, c\geq 0$, the coefficient ${w_j\over d}(d\ell+m)+c$ is always positive. Thus when we apply the first differential operator in [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"} to the small $I$-function, the coefficient of the term $$\label{i-function-monomial} t^{d\ell+d+m}z^{1+\left(\sum_{j=1}^{N}w_j\right)-1+\sum_{j=1}^{N}\lfloor {w_j\over d}(d\ell+m)\rfloor-(d\ell+m-1)}$$ is given by $$\label{coefficient-monomial-1st} {1\over \Gamma(d\ell+m)}\cdot\prod_{j=1}^{N}{\Gamma({w_j\over d}(d\ell+m))\over \Gamma\left(\left\{{w_j\over d}\cdot m\right\}\right)} \cdot {\prod\limits_{j=1}^{N}\prod\limits_{c=0}^{w_j-1}\left({w_j\over d}(d\ell+m)+c\right)\over d\ell+m}.$$ Now let us apply the second differential operator in [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"} to the small $I$-function. When we take the same $m\in {\bf Nar}$ but $\ell+1$, the exponent of $t$ is $d(\ell+1)+m$ while the exponent of $z$ is $$1+\sum_{j=1}^{N}\lfloor {w_j\over d}\left(d(\ell+1)+m\right)\rfloor-(d(\ell+1)+m-1)+d-1.$$ We obtain the same monomial in [\[i-function-monomial\]](#i-function-monomial){reference-type="eqref" reference="i-function-monomial"} and the coefficient of the monomial is $${1\over \Gamma(d\ell+d+m)}\cdot\prod_{j=1}^{N}{\Gamma({w_j\over d}(d\ell+d+m))\over \Gamma\left(\left\{{w_j\over d}\cdot m\right\}\right)} \cdot \prod_{c=1}^{d-1}(d\ell+d+m-c).$$ By the property $\Gamma(x+1)=x\Gamma(x)$ for $x>0$, it is the same as the coefficient in [\[coefficient-monomial-1st\]](#coefficient-monomial-1st){reference-type="eqref" reference="coefficient-monomial-1st"}.
Finally, if we take $m\in {\bf Nar}$ and $\ell=0$, then $t^m$ is annihilated by the second differential operator $\prod\limits_{c=1}^{d-1}\left(t{\partial \over \partial t}-c\right)$. This completes the proof of the equation [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"}. Now we reduce the order to prove the equation [\[i-function-ode\]](#i-function-ode){reference-type="eqref" reference="i-function-ode"}. From the proof of the equation [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"}, we can see that for an integer $k$ such that $0<k<w_j$, if ${k\cdot d\over w_j}$ is also an integer, then we can reduce the order of the differential equation by one by deleting the factor $z\left(t{\partial \over \partial t}+{k\cdot d\over w_j}\right)$ from the first differential operator and a factor $z\left(t{\partial \over \partial t}-(d-{k\cdot d\over w_j})\right)$ from the second differential operator. In particular, we see that $d-{k\cdot d\over w_j}\notin {\bf Nar},$ so this does not affect the annihilation of $t^m$ with $m\in {\bf Nar}$. We can repeat the process until the order of the differential equation reaches $|{\bf Nar}|=q+1$, which gives the equation [\[i-function-ode\]](#i-function-ode){reference-type="eqref" reference="i-function-ode"}. ◻ *Remark 58*. We make a few remarks on the small $I$-function and the differential equation. 1. The small $I$-function formula in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} takes the same form as formula (15) in [@Aco] and formula (102) in [@Gue] (up to a sign). In [@Aco], the polynomial $W$ needs to satisfy the Gorenstein condition $w_j\vert d$ for any $j$. The element ${\bf e}_m$ here is the same as the element $\phi_{m-1}$ in [@Aco]. In [@Gue], the polynomial $W$ is assumed to be of chain type. 2. The differential equation [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"} becomes a *Picard-Fuchs* equation if there is a mirror B-model, and the period integrals in the $B$-model satisfy the equation. ### A genus-zero mirror conjecture Now the *mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"}* proposed in Section [1.3.3](#sec-intro-wall){reference-type="ref" reference="sec-intro-wall"} is a special case of the following *Givental type genus-zero mirror symmetry conjecture for small $I$-function of $(W, \langle J\rangle)$*. **Conjecture 59**. The small $I$-function $I_{\rm FJRW}^{\rm sm}(t, -z)$ lies on the Lagrangian cone. In particular, let $t\tau(t)$ be the coefficient of $z^{0}$ in the expansion of the small $I$-function; then the $J$-function $J(\tau(t),z)$ is determined by an equation $$tJ(\tau(t), -z)=I_{\rm FJRW}^{\rm sm}(t, -z)+c(t, z)z{\partial I_{\rm FJRW}^{\rm sm}(t, -z)\over \partial t},$$ where $c(t, z)$ is a formal power series in $t$ and $z$. Let $(W, \langle J\rangle)$ be an admissible LG pair of general type. Using $d-\sum\limits_{j=1}^Nw_j>0$ and the degree calculation, we see that the only term in $I_{\rm FJRW}^{\rm sm}(t,z)$ with a positive power of $z$ is the leading term $zt\mathbf{1}$. **Corollary 60**.
If Conjecture [Conjecture 59](#mirror-conjecture){reference-type="ref" reference="mirror-conjecture"} holds, then we have $$I_{\rm FJRW}^{\rm sm}(t,z)=tJ(\tau(t),z).$$ Similar to Lemma [Lemma 52](#dual-flat-section-via-j-function){reference-type="ref" reference="dual-flat-section-via-j-function"}, we can express $\langle\Phi(\alpha), \mathbf{1}\rangle$ in terms of the small $J$-function. **Proposition 61**. For any element $\alpha\in {\mathcal H }_{W, G}$, we have $$(-2\pi z)^{{\widehat{c}_W\over 2}}\langle\Phi(\alpha), \mathbf{1}\rangle=\left\langle e^{\pi\sqrt{-1}\mathrm{Gr}} \alpha, (-z)^{{\widehat{c}_W\over 2}-1}(-z)^{\mathrm{Gr}} J(\tau, -z)\right\rangle.$$ *Proof.* We have $$\begin{aligned} (2\pi)^{{\widehat{c}_W\over 2}}\langle\Phi(\alpha), \mathbf{1}\rangle &=&\left\langle S(\tau,z)z^{-\mathrm{Gr}}\alpha, \mathbf{1}\right\rangle\\ &=&\left\langle z^{-\mathrm{Gr}}\alpha, S(\tau,-z)^{-1}\mathbf{1}\right\rangle\\ &=&\left\langle(-z)^{-\mathrm{Gr}} e^{\pi\sqrt{-1}\mathrm{Gr}}\alpha, S(\tau,-z)^{-1}\mathbf{1}\right\rangle\\ &=&\left\langle e^{\pi\sqrt{-1}\mathrm{Gr}}\alpha, (-z)^{\mathrm{Gr}}S(\tau, -z)^{-1}\mathbf{1}\right\rangle\\ &=&\left\langle e^{\pi\sqrt{-1}\mathrm{Gr}}\alpha, (-z)^{\mathrm{Gr}}(-z)^{-1} J(\tau,-z)\right\rangle.\end{aligned}$$ The second equality uses the property [\[adjoint-S-operator\]](#adjoint-S-operator){reference-type="eqref" reference="adjoint-S-operator"} of $S(\tau,z)$. The fourth equality uses the skew-adjointness of the Hodge grading operator. The last equality follows from the definition of $J$-function [\[J-equal-S\]](#J-equal-S){reference-type="eqref" reference="J-equal-S"}. Multiplying both sides by $(-z)^{{\widehat{c}_W\over 2}}$ gives the stated identity. ◻ By Proposition [Proposition 61](#weak-asymptotic-i-function){reference-type="ref" reference="weak-asymptotic-i-function"}, we have **Corollary 62**. If the mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, then $$(-2\pi z)^{{\widehat{c}_W\over 2}}\langle\Phi(\alpha), \mathbf{1}\rangle=\left\langle e^{\pi\sqrt{-1}\mathrm{Gr}}\alpha, (-z)^{{\widehat{c}_W\over 2}-1}(-z)^{\mathrm{Gr}} I^{\rm sm}_{\rm FJRW}(1, -z)\right\rangle.$$ Moreover, we have the following consequences. **Corollary 63**. Let $(W, \langle J\rangle)$ be an admissible LG pair of general type. If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, then the FJRW invariant [fix $k, \phi_i$]{style="color: red"} $$\left\langle\phi_i\psi_1^{k}, {\bf e}_2, \ldots, {\bf e}_2\right\rangle_{0, n+1}=?$$ *Remark 64*. We can define a big $I$-function on ${\mathcal H }_{W, G}^{G_W}$. For the big $I$-function, one expects a *Coates-Givental style genus zero mirror conjecture*; we do not formulate it here. ### Mirror symmetry for invertible polynomials A polynomial is called *invertible* if it can be linearly rescaled to the form $$\label{invertible} W=\sum_{i=1}^{N}\prod_{j=1}^{N}x_j^{a_{ij}},$$ such that the *exponent matrix* $E_W:=\left(a_{ij}\right)_{N\times N}$ is an invertible matrix. According to [@KrS], each invertible polynomial can be written as a disjoint sum of three *atomic types* of polynomials (up to permutation of variables): 1. *Fermat type*: $x_1^{a_1}+\dots+x_m^{a_m}$; 2. *Chain type*: $x_1^{a_1}x_2+\dots+x_{m-1}^{a_{m-1}}x_m+x_m^{a_m}$; 3.
*Loop type*: $x_1^{a_1}x_2+\dots+x_{m-1}^{a_{m-1}}x_m+x_m^{a_m}x_1$. In the literature, the computations in the FJRW theory of $(W, G)$ have mostly been carried out when the polynomial $W$ is invertible, see [@FJR; @Kra; @CIR; @HLSW]. In particular, when the group $G=G_W$ is the maximal group, the mirror symmetry for invertible polynomials can be upgraded to any genus. In [@HLSW], it has been proved that if $W$ has no $x^2$ monomial, the FJRW invariants of $(W, G_W)$ at any genus are equal to B-model invariants from the *Saito-Givental theory* of the *mirror polynomial* $$W^T=\sum\limits_{i=1}^{N}\prod\limits_{j=1}^{N}x_j^{a_{ji}}.$$ Furthermore, the underlying Dubrovin-Frobenius manifolds are semisimple. In general, when $G=\langle J\rangle\neq G_W$, a Dubrovin-Frobenius manifold structure has not been constructed on the mirror side. Mirror conjecture [Conjecture 59](#mirror-conjecture){reference-type="ref" reference="mirror-conjecture"} has only been proved for the following invertible polynomials, in [@Aco Theorem 1.2] and [@Gue Theorem 4.2]. **Proposition 65**. The mirror conjecture [Conjecture 59](#mirror-conjecture){reference-type="ref" reference="mirror-conjecture"} holds if 1. $W$ is a Fermat polynomial; 2. $W$ is a chain polynomial, $\langle J\rangle=G_W$ and $W$ has no $x^2$ monomial when $N\geq 3$. ## Generalized hypergeometric functions in FJRW theory {#sec-basic-ghe} Using the Pochhammer symbol $$(c)_\ell:={\Gamma(c+\ell)\over \Gamma(c)}=c(c+1)\cdots(c+\ell-1),$$ we consider a generalized hypergeometric function $$\begin{aligned} \, _pF_q\left( \begin{array}{l} a_1, \ldots, a_{p}\\ b_1, \ldots, b_{q} \end{array}; x\right) :=\sum_{\ell\geq0} {\prod\limits_{i=1}^{p}(a_i)_\ell\over \prod\limits_{i=1}^{q}(b_i)_\ell}\cdot {x^\ell\over \ell !}.\end{aligned}$$ We will recall some basics on generalized hypergeometric functions. See [@Luk; @Fie; @dlmf] for more details. ### Generalized hypergeometric equations and its solutions near $x=0$ Let $\{\rho_j\}$ and $\{\alpha_i\}$ be the rational numbers introduced in [\[rho-tuple\]](#rho-tuple){reference-type="eqref" reference="rho-tuple"} and [\[alpha-tuple\]](#alpha-tuple){reference-type="eqref" reference="alpha-tuple"}. We consider the generalized hypergeometric equation $$\label{ghe-equation} \Big(\vartheta_x\prod_{j=1}^{q}(\vartheta_x+\rho_j-1)-x\prod_{i=1}^{p}(\vartheta_x+\alpha_i)\Big) f(x) =0, \quad\text{where}\quad \vartheta_x:=x{\partial\over \partial x}.$$ We will focus on the cases when $p\leq q$, since the polynomial $W$ under consideration is of general type. When $p\leq q$, the equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} has a *regular singularity* at $x=0$ and an *irregular singularity* at $x=\infty$. The order of irregularity is one. We remark that when $W$ is of Calabi-Yau type, namely when $p=q+1$, the equation has regular singularities at $x=0, 1$, and $\infty$. See [@dlmf Section 16.8]. Now we discuss the formal solutions of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} near $x=0$. We notice by [\[reindex-narrow\]](#reindex-narrow){reference-type="eqref" reference="reindex-narrow"} that 1. no two $\rho_j$'s differ by an integer, $\rho_0=1$, and $\rho_j\notin\mathbb{Z}$ $(\forall \ 1\leq j\leq q)$; 2. for any pair $(i, j)$, $\rho_j\neq\alpha_i$.
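To fix ideas, here is a small worked example of this setup; it is our own illustration and is not taken from Table [\[table-indicies\]](#table-indicies){reference-type="ref" reference="table-indicies"}, and we assume it defines an admissible LG pair of general type. Take the Fermat cubic $W=x_1^3+x_2^3$ with weights $w_1=w_2=1$ and $d=3$, so that $\nu=d-w_1-w_2=1$. Then $${\bf Nar}=\{1, 2\},\qquad q+1=2,\qquad \rho_0=1,\quad \rho_1={2\over 3},$$ while $\widetilde{\alpha}=\{{1\over 3}, {1\over 3}\}$; the unique $n\in\{0,1,2\}\backslash{\bf Nar}$ is $n=0$, so one copy of ${1\over 3}$ is deleted and $$\alpha_P=\left\{{1\over 3}\right\},\qquad p=1,\qquad q+1-p=1=\nu.$$ The generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} becomes $$\Big(\vartheta_x\big(\vartheta_x-\tfrac{1}{3}\big)-x\big(\vartheta_x+\tfrac{1}{3}\big)\Big)f(x)=0,$$ the two conditions listed above hold since $\rho_1-\rho_0=-{1\over 3}\notin\mathbb{Z}$ and $\alpha_1={1\over 3}\neq\rho_0, \rho_1$, and the two formal solutions near $x=0$ described in [\[ghe-standard\]](#ghe-standard){reference-type="eqref" reference="ghe-standard"} below are the confluent hypergeometric series $$f_0(x)={}_1F_1\Big(\tfrac{1}{3};\tfrac{2}{3};x\Big),\qquad f_1(x)=x^{1\over 3}\,{}_1F_1\Big(\tfrac{2}{3};\tfrac{4}{3};x\Big).$$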
According to [@dlmf Section 16.8], near $x=0$, there is a fundamental set of formal solutions of the equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}, denoted by $\{f_{j}(x) \vert j=0, 1, \ldots, q\}$, where $$\label{ghe-standard} f_{j}(x)= x^{1-\rho_j}\, _pF_q\left( \begin{array}{l} 1+\alpha_1-\rho_j, \ldots, 1+\alpha_p-\rho_j\\ 1+\rho_0-\rho_j, \ldots, (1+\rho_j-\rho_j)^*, \ldots, 1+\rho_q-\rho_j \end{array}; x\right).$$ Here $(1+\rho_j-\rho_j)^*$ means the term $1+\rho_j-\rho_j$ is deleted. **Lemma 66**. *The following identity holds: $$\label{difference-alpha-rho} \sum_{h=1}^{p}\alpha_h-\sum_{h=1}^{q}\rho_h =-{\nu(d+2)\over 2d}+{3-N\over 2}.$$* *Proof.* By adding all the numbers except $\rho_0=1$ that are deleted in [\[common-numbers\]](#common-numbers){reference-type="eqref" reference="common-numbers"} to both sums on the left-hand side of [\[difference-alpha-rho\]](#difference-alpha-rho){reference-type="eqref" reference="difference-alpha-rho"}, we obtain $$\begin{split} \sum_{h=1}^{p}\alpha_h-\sum_{h=1}^{q}\rho_h &=\sum_{j=1}^{N}\sum_{k=0}^{w_j-1}\left({1\over d}+{k\over w_j}\right)-\sum_{m=0}^{d-2}{m+1\over d}\\ &=\sum_{j=1}^{N}\left({w_j\over d}+{w_j-1\over 2}\right)-{d-1\over 2}. \end{split}$$ Now the result follows from the formula [\[gorenstein-parameter\]](#gorenstein-parameter){reference-type="eqref" reference="gorenstein-parameter"} of $\nu$. ◻ For each $m\in {\bf Nar}$, we introduce a $p$-tuple $\alpha_P^{(m)}$ and a $q$-tuple $\rho_Q^{(m)}$ such that $$\label{alpha-rho-m} \begin{dcases} \alpha_P^{(m)}=\left(\alpha_1^{(m)}, \ldots, \alpha_p^{(m)}\right):=(1+\alpha_1-\rho_{\mathscr{N}(m)},\ldots, 1+\alpha_p-\rho_{\mathscr{N}(m)});\\ \rho_Q^{(m)}=\left(\rho_1^{(m)}, \ldots, \rho_q^{(m)}\right) :=(1+\rho_0-\rho_{\mathscr{N}(m)}, \ldots, 1^*, \ldots, 1+\rho_{q}-\rho_{\mathscr{N}(m)}). \end{dcases}$$ **Lemma 67**. *For each $m\in {\bf Nar}$, we have an identity $$\label{difference-alpha-rho-m} \sum_{h=1}^{p}\alpha_h^{(m)}-\sum_{h=1}^{q}\rho_h^{(m)} =-\nu\left({1\over 2}+{m\over d}\right)+{3-N\over 2}.$$* *Proof.* By definition, we have $$\begin{aligned} \sum_{h=1}^{p}\alpha_h^{(m)}-\sum_{h=1}^{q}\rho_h^{(m)} &=&(1-\rho_{\mathscr{N}(m)})(p-q-1)+\sum_{h=1}^{p}\alpha_h-\sum_{h=1}^{q}\rho_h.\end{aligned}$$ Now the result follows from [\[reindex-narrow\]](#reindex-narrow){reference-type="eqref" reference="reindex-narrow"} and [\[difference-alpha-rho\]](#difference-alpha-rho){reference-type="eqref" reference="difference-alpha-rho"}. ◻ ### The modified small $I$-function For each $m\in {\bf Nar}$, write $F^{(m)}(x):={}_pF_q\left(\begin{array}{l}\alpha_P^{(m)}\\ \rho_Q^{(m)}\end{array}; x\right)$ for the generalized hypergeometric series appearing in [\[ghe-standard\]](#ghe-standard){reference-type="eqref" reference="ghe-standard"}, so that $f_{\mathscr{N}(m)}(x)=x^{1-\rho_{\mathscr{N}(m)}}F^{(m)}(x)$. Then, using the Legendre duplication formula [\[legendre-prod\]](#legendre-prod){reference-type="eqref" reference="legendre-prod"} below, we have $$\begin{aligned} {\Gamma(\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})}\cdot F^{(m)}(x) &=&\sum_{\ell=0}^{\infty}{\prod\limits_{j=1}^{N}\prod\limits_{k=0}^{w_j-1}\Gamma\left({m\over d}+{k\over w_j}+\ell\right)\over \prod\limits_{k=0}^{d-1}\Gamma\left({m+k\over d}+\ell\right)}\cdot x^{\ell}\\ &=&\sum_{\ell=0}^{\infty} (2\pi)^{1-N-\nu\over 2}\cdot {\prod\limits_{j=1}^{N}w_j^{{1\over 2}-w_j({m\over d}+\ell)}\over d^{{1\over 2}-m-d\ell}}\cdot{\prod\limits_{j=1}^{N}\Gamma\left({w_j\over d}(m+d\ell)\right)\over \Gamma(m+d\ell)} \cdot x^{\ell}.\end{aligned}$$ Now we relate the small $I$-function in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} to the solutions of the generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}.
For each $m\in {\bf Nar}$, the degree shifting number of ${\bf e}_m$ defined by [\[degree-shifting-number\]](#degree-shifting-number){reference-type="eqref" reference="degree-shifting-number"} has the expression $$\iota({\bf e}_m)=\sum_{j=1}^{N}\left(\left\{{w_j\cdot m\over d}\right\}-{w_j\over d}\right).$$ According to [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"}, we have $$\label{hodge-J^m} \mu(J^m)=-{N\over 2}+\sum_{j=1}^{N}\left\{{w_j\cdot m\over d}\right\}.$$ We define a *modified small $I$-function* $$\label{modified-i-function} \widetilde{I}(t,z):=z^{{\widehat{c}_W\over 2}-1}z^{\mathrm{Gr}} I_{\rm FJRW}^{\rm sm}(t,z).$$ Combining this with the formula [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"}, for fixed $\ell\in{\mathbb Z}_{\geq 0}$, we compute the power of $z$ in the coefficient of ${\bf e}_m$, which is $${\widehat{c}_W\over 2}+\mu(J^m)-1+1+\sum\limits_{j=1}^{N}\lfloor {w_j\over d}(d\ell+m)\rfloor-(d\ell+m-1) =-{\nu\over d}(d\ell+m-1).$$ Thus we have $$\label{modified-formula} \widetilde{I}(t,z)=\sum_{m\in {\bf Nar}} \sum_{\ell=0}^{\infty} \frac {t^{d\ell+m}} {z^{{\nu\over d}(d\ell+m-1)}\Gamma(d\ell+m)} \prod_{j=1}^{N} {\Gamma({w_j\over d}(d\ell+m))\over \Gamma\left(\left\{{w_j\over d}\cdot m\right\}\right)}{\bf e}_m.$$ We consider the following change of variables $$\label{change-of-variable-formula} x=\left({1\over d}\right)^{d}\cdot\left(\prod\limits_{j=1}^{N}w_j^{w_j}\right) z^{-\nu}.$$ When we restrict $\widetilde{I}(t,z)$ to $t=1$ and use the change of variables [\[change-of-variable-formula\]](#change-of-variable-formula){reference-type="eqref" reference="change-of-variable-formula"}, the coefficient functions become generalized hypergeometric series that appear in [\[ghe-standard\]](#ghe-standard){reference-type="eqref" reference="ghe-standard"}. **Proposition 68**.
For the modified small $I$-function in [\[modified-i-function\]](#modified-i-function){reference-type="eqref" reference="modified-i-function"}, we have $$\begin{aligned} \widetilde{I}(1, z) &=& {(2\pi)^{N+\nu-1\over 2}\over d^{1\over 2}} \prod_{j=1}^{N}w_j^{{w_j\over d}-{1\over 2}}\sum_{m\in {\bf Nar}}{{\Gamma(\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})}\, f_{\mathscr{N}(m)}(x)\over \prod\limits_{j=1}^{N}\Gamma(\left\{{w_j\cdot m\over d}\right\})}\,{\bf e}_m.\label{I-function-hypergeometric}\end{aligned}$$ *Proof.* Applying the *Legendre duplication formula* $$\label{legendre-prod}\prod_{k=0}^{d-1}\Gamma\left(y+{k\over d}\right)=(2\pi)^{d-1\over 2}d^{{1\over 2}-d\cdot y}\Gamma(d\cdot y),$$ we obtain $$\begin{split} &{\Gamma(\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})}\cdot f_{\mathscr{N}(m)}(x)\\ =&\sum_{\ell=0}^{\infty}{\prod\limits_{j=1}^{N}\prod\limits_{k=0}^{w_j-1}\Gamma\left({m\over d}+{k\over w_j}+\ell\right)\over \prod\limits_{k=0}^{d-1}\Gamma\left({m+k\over d}+\ell\right)}\cdot x^{\ell+{m-1\over d}} \\ =&\sum_{\ell=0}^{\infty} (2\pi)^{1-N-\nu\over 2}\cdot{\prod\limits_{j=1}^{N}w_j^{{1\over 2}-w_j({m\over d}+\ell)}\over d^{{1\over 2}-m-d\ell}}\cdot{\prod\limits_{j=1}^{N}\Gamma\left({w_j\over d}(m+d\ell)\right)\over \Gamma(m+d\ell)}\cdot x^{\ell+{m-1\over d}}\\ =&{d^{{1\over 2}}\over (2\pi)^{N+\nu-1\over 2}} \cdot \prod_{j=1}^{N}w_j^{{1\over 2}-{w_j\over d}} \cdot\sum_{\ell=0}^{\infty} {t^{d\ell+m-1}\over z^{\nu\cdot(\ell+{m-1\over d})}} \cdot{\prod\limits_{j=1}^{N}\Gamma\left({w_j\over d}(m+d\ell)\right) \over \Gamma(m+d\ell)}. \end{split}$$ The last equality is a direct consequence of the change of variable formula [\[change-of-variable-formula\]](#change-of-variable-formula){reference-type="eqref" reference="change-of-variable-formula"}. Using [\[modified-formula\]](#modified-formula){reference-type="eqref" reference="modified-formula"}, this implies the formula [\[I-function-hypergeometric\]](#I-function-hypergeometric){reference-type="eqref" reference="I-function-hypergeometric"} immediately. ◻ ## Barnes asymptotic formula and weak asymptotic classes {#sec-barnes-formula} Now we introduce a Barnes formula [@Bar], which can be used to describe the smallest asymptotic expansion as well as the dominant asymptotic expansion of certain generalized hypergeometric functions. ### Barnes' $Q$-function For each $m\in {\bf Nar}$, we introduce $$\label{barnes-Q} Q_{\mathscr{N}(m)}(x):={\prod\limits_{i=0}^{q}\Gamma(\rho_{\mathscr{N}(m)}-\rho_i)^*\over \prod\limits_{i=1}^{p}\Gamma(\rho_{\mathscr{N}(m)}-\alpha_i)} \cdot x^{1-\rho_{\mathscr{N}(m)}}\cdot \ _pF_q\left( \begin{array}{l} \alpha_P^{(m)}\\ \rho_Q^{(m)} \end{array}; (-1)^{\nu}x\right).$$ Here the notation $\Gamma(\rho_{\mathscr{N}(m)}-\rho_i)^*$ means the term is omitted if $\rho_{\mathscr{N}(m)}-\rho_i=0.$ Since we work with the cases when $\rho_0=1$, these functions are exactly Barnes' functions $\{Q_r(x)\}$ in [@Bar], where $r=\mathscr{N}(m)\in \{0, 1,\ldots, q\}$ according to [\[narrow-bijection\]](#narrow-bijection){reference-type="eqref" reference="narrow-bijection"}.
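Continuing the illustrative Fermat-cubic example from Section [4.2](#sec-basic-ghe){reference-type="ref" reference="sec-basic-ghe"} (again, an example of ours rather than one taken from [@Bar]), we have $\nu=1$ and $p=q=1$, so $(-1)^{\nu}x=-x$ and the formula [\[barnes-Q\]](#barnes-Q){reference-type="eqref" reference="barnes-Q"} specializes to $$Q_0(x)={\Gamma(1/3)\over \Gamma(2/3)}\,{}_1F_1\Big(\tfrac{1}{3};\tfrac{2}{3};-x\Big),\qquad Q_1(x)={\Gamma(-1/3)\over \Gamma(1/3)}\, x^{1\over 3}\,{}_1F_1\Big(\tfrac{2}{3};\tfrac{4}{3};-x\Big).$$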
These functions are multi-valued. For any integer $k$, we have $$\label{Q-multivalued} Q_{\mathscr{N}(m)}(x\cdot e^{2\pi\sqrt{-1}k})=e^{2\pi\sqrt{-1}k(1-\rho_{\mathscr{N}(m)})}Q_{\mathscr{N}(m)}(x).$$ For each $m\in {\bf Nar}$, we define a constant $$\label{constant-upsilon} \Upsilon(m):=(-1)^{\nu(1-\rho_{\mathscr{N}(m)})} {\Gamma(\alpha^{(m)}_P) \Gamma(1-\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})\Gamma(1-\rho_Q^{(m)})}.$$ Using the definitions in [\[ghe-standard\]](#ghe-standard){reference-type="eqref" reference="ghe-standard"} and [\[alpha-rho-m\]](#alpha-rho-m){reference-type="eqref" reference="alpha-rho-m"}, we immediately obtain $$Q_{\mathscr{N}(m)}(x)={1\over \Upsilon(m) }\cdot {\Gamma(\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})}\cdot f_{\mathscr{N}(m)}\Big((-1)^{\nu}x\Big).$$ We denote the primitive $d$-th root of unity by $$\label{d-th-root} \omega:=\exp\left({2\pi\sqrt{-1}/d}\right).$$ **Lemma 69**. *The following identity holds: $$\label{upsilon-2nd-formula} \Upsilon(m) =d(2\pi)^{1-\nu}e^{\pi\sqrt{-1}\left(-{\nu\over d}-1-{N\over 2}\right)}\prod_{j=1}^{N}{1\over 1-\omega^{mw_j}}.$$* *Proof.* Using the relation [\[reindex-narrow\]](#reindex-narrow){reference-type="eqref" reference="reindex-narrow"} and the *Euler reflection formula* $$\label{euler-reflection} \Gamma(x)\Gamma(1-x)(1-e^{-2\pi\sqrt{-1}x})=2\pi\sqrt{-1}e^{-\pi\sqrt{-1}x},$$ we have $$\begin{aligned} \Upsilon(m) &=&(-1)^{\nu({m-1\over d})}\prod_{h=1}^{p}{2\pi\sqrt{-1}e^{-\pi \sqrt{-1}\alpha_h^{(m)}}\over 1-e^{-2\pi\sqrt{-1}\alpha_h^{(m)}}}\prod_{i=1}^{q}{1-e^{-2\pi\sqrt{-1}\rho_i^{(m)}}\over 2\pi\sqrt{-1}e^{-\pi \sqrt{-1}\rho_i^{(m)}}}\\ &=& (2\pi)^{p-q}e^{\pi\sqrt{-1}\left({\nu({m-1\over d})}+{p-q\over 2}+\nu\left({1\over 2}+{m\over d}\right)-{3-N\over 2}\right)}\prod\limits_{\substack{1\leq i\leq d\\ i\neq d-\rho_{\mathscr{N}(m)}}}\left(1-e^{2\pi\sqrt{-1}\left(\rho_{\mathscr{N}(m)}-1+{i\over d}\right)}\right)\\ && \cdot\prod\limits_{j=1}^{N}\prod\limits_{k=0}^{w_j-1}\left(1-e^{2\pi\sqrt{-1}\left(\rho_{\mathscr{N}(m)}-1-({1\over d}+{k\over w_j})\right)}\right)^{-1}\\ &=& (2\pi)^{1-\nu}e^{\pi\sqrt{-1}\left({2m\nu\over d}-{\nu\over d}-1+{N\over 2}\right)}d \prod\limits_{j=1}^{N}\prod\limits_{k=0}^{w_j-1}\left(1-e^{2\pi\sqrt{-1}(-{m\over d}-{k\over w_j})}\right)^{-1}\\ &=&d(2\pi)^{1-\nu}e^{\pi\sqrt{-1}\left({2m\nu\over d}-{\nu\over d}-1+{N\over 2}\right)} \prod_{j=1}^{N}(1-\omega^{-mw_j})^{-1}\\ &=&d(2\pi)^{1-\nu}e^{\pi\sqrt{-1}\left(-{\nu\over d}-1-{N\over 2}\right)}\prod_{j=1}^{N}\left(1-\omega^{mw_j}\right)^{-1}.\end{aligned}$$ Here in the second equality, we use the identities [\[alpha-rho-m\]](#alpha-rho-m){reference-type="eqref" reference="alpha-rho-m"}, [\[difference-alpha-rho-m\]](#difference-alpha-rho-m){reference-type="eqref" reference="difference-alpha-rho-m"}, and multiply both the denominator and the numerator with the product of terms of the form $1-e^{2\pi\sqrt{-1}\left(\rho_{\mathscr{N}(m)}-1-{n+1\over d}\right)}$, where $n$ is given by
[\[common-numbers\]](#common-numbers){reference-type="eqref" reference="common-numbers"}. ◻ According to formula [\[I-function-hypergeometric\]](#I-function-hypergeometric){reference-type="eqref" reference="I-function-hypergeometric"} and [\[Q-multivalued\]](#Q-multivalued){reference-type="eqref" reference="Q-multivalued"}, we have $$\widetilde{I}(t, -z)\vert_{t=1} =(2\pi)^{N+\nu-1\over 2}d^{-{1\over 2}} \prod_{j=1}^{N}w_j^{{w_j\over d}-{1\over 2}}\sum_{m\in {\bf Nar}} {\omega^{-\nu(m-1)} \Upsilon(m) Q_{\mathscr{N}(m)}(x)\over\prod\limits_{j=1}^{N}\Gamma(\left\{{w_j\cdot m\over d}\right\})}\, {\bf e}_m. \label{i-function-via-upsilon}$$ ### Barnes' asymptotic formula In [@Bar Section 26], Barnes considered an analytic function $_pS_q(s)$ with certain poles in the $s$-plane. If $${\rm Re}(s)>{\rm Re}\left(\sum\limits_{r=1}^{p}\alpha_r-\sum\limits_{r=1}^{q}\rho_r+{\nu-1\over 2}\right)/\nu,$$ the function is represented by the convergent *factorial series* $$\sum_{k=0}^{\infty}{\prod\limits_{r=0}^{q}\Gamma(1-\rho_r+{k\over \nu}-s)\over \prod\limits_{r=1}^{\nu}\Gamma({k+r\over \nu})\prod\limits_{r=1}^{p}\Gamma(1-\alpha_r+{k\over \nu}-s)}.$$ Barnes obtained an identity in [@Bar Section 46], $$\sum_{r=0}^{q}Q_r(x)=\exp(-\nu x^{1\over \nu})(2\pi)^{{1\over 2}(\nu-1)}\nu^{-{1\over 2}}\left(-{1\over 2\pi\sqrt{-1}}\int_L\ _pS_q(s)x^sds\right)$$ where $L$ is a certain contour in the $s$-plane. By analyzing the contour integral using residues and asymptotic expansions of Gamma functions, Barnes proved an asymptotic formula for the linear combination of $\{Q_r(x)\}$ [@Bar Section 47]. Applying the equality [\[difference-alpha-rho\]](#difference-alpha-rho){reference-type="eqref" reference="difference-alpha-rho"} for the generalized hypergeometric functions in Section [4.2](#sec-basic-ghe){reference-type="ref" reference="sec-basic-ghe"}, we can rewrite Barnes' formula as follows. **Theorem 70**. *[@Bar][\[Barnes-theorem\]]{#Barnes-theorem label="Barnes-theorem"} If $|\arg(x)|<(\nu+1-{1\over 2}\delta_1^\nu)\pi$, then the following linear combination of $\{Q_r(x)\}$ admits the asymptotic expansion $$\label{Barnes-formula} \sum_{r=0}^{q}Q_r(x) =\exp(-\nu x^{1\over \nu})(2\pi)^{{1\over 2}(\nu-1)} \nu^{-{1\over 2}}x^{-{1\over d}+{2-N\over 2\nu}}\left(\sum_{n=0}^{K}\lambda_n x^{-{n\over\nu}}+J_K x^{-{K\over \nu}}\right),$$ where the quantity $\lambda_n$ is the residue of $_pS_q(s)$ at the pole $-{1\over d}+{2-N\over 2\nu}-{n\over \nu}$ and the quantity $|J_K|$ approaches $0$ as $|x|$ approaches $\infty$.* *Remark 71*. Barnes' formula shows that this particular linear combination has the smallest asymptotic expansion in a small sector containing the ray $\mathbb{R}_{>0}\cdot e^{2\pi\sqrt{-1}\ell/\nu}$. In the original paper, Barnes mainly considered the application to dominant asymptotic expansions [@Bar Theorem (A), Theorem (B)], generalizing earlier results of Stokes and Orr.
See Lemma [Lemma 78](#lem-smallest-dominant){reference-type="ref" reference="lem-smallest-dominant"} for more details. ### Calculation of weak asymptotic classes {#calculation-weak-strong} If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds for the admissible LG pair $(W, \langle J\rangle)$, we can calculate its weak asymptotic classes in the following steps. 1. We express the coefficients of the small $I$-function in terms of generalized hypergeometric functions after a change of variable formula. 2. The term $\langle\Phi(\alpha), \mathbf{1}\rangle$ is a linear combination of Barnes' functions $\{Q_r(x)\}$. 3. We can choose classes such that the linear combinations match those in Barnes' asymptotic expansion formulas [\[Barnes-formula\]](#Barnes-formula){reference-type="eqref" reference="Barnes-formula"}. Barnes' result will imply that these classes are the weak asymptotic classes. For any $\ell\in {\mathbb Z}$, we consider the element $$\label{asymptotic-class-definition} {\mathcal A}_\ell:=(2\pi)^N \sum_{m\in {\bf Nar}} \omega^{-\ell\cdot m} \prod\limits_{j=1}^{N}{1 \over \Gamma(\left\{{w_j\cdot m\over d}\right\})}\,{\bf e}_m\in {\mathcal H }_{W, \langle J\rangle}.$$ **Lemma 72**. *If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, then for any $\ell\in{\mathbb Z}$, we obtain $$(-2\pi z)^{{\widehat{c}_W\over 2}}\langle\Phi({\mathcal A}_\ell), \mathbf{1}\rangle = C \sum_{m\in {\bf Nar}} \omega^{\ell\cdot m}Q_{\mathscr{N}(m)}(x),$$ where the constant $$C=(2\pi)^{N-\nu+1\over 2}\cdot d^{{1\over 2}}\cdot e^{\pi\sqrt{-1}\left({\nu\over d}-1-{3N\over 2}\right)} \cdot \prod_{j=1}^{N}w_j^{{w_j\over d}-{1\over 2}}$$ is independent of the choice of $\ell$.* *Proof.* Using Corollary [Corollary 62](#weak-via-i-function){reference-type="ref" reference="weak-via-i-function"} and [\[i-function-via-upsilon\]](#i-function-via-upsilon){reference-type="eqref" reference="i-function-via-upsilon"}, the contribution of $(-2\pi z)^{{\widehat{c}_W\over 2}}\langle\Phi({\mathcal A}_\ell), \mathbf{1}\rangle$ from each $m\in {\bf Nar}$ is given by the product of the constant in [\[i-function-via-upsilon\]](#i-function-via-upsilon){reference-type="eqref" reference="i-function-via-upsilon"} and $$\label{component-weak} e^{\pi\sqrt{-1}\mu(J^{d-m})}\cdot {(2\pi)^N \omega^{-\ell\cdot (d-m)} \over \prod\limits_{j=1}^{N}\Gamma\left(\left\{{w_j(d-m)\over d}\right\}\right)}\cdot {\omega^{-\nu(m-1)} \Upsilon(m) Q_{\mathscr{N}(m)}(x)\over\prod\limits_{j=1}^{N}\Gamma(\left\{{w_j\cdot m\over d}\right\})}.$$ Using formulas [\[hodge-J\^m\]](#hodge-J^m){reference-type="eqref" reference="hodge-J^m"}, [\[upsilon-2nd-formula\]](#upsilon-2nd-formula){reference-type="eqref" reference="upsilon-2nd-formula"}, and [\[euler-reflection\]](#euler-reflection){reference-type="eqref" reference="euler-reflection"}, the formula [\[component-weak\]](#component-weak){reference-type="eqref" reference="component-weak"} can be simplified to $$d(2\pi)^{1-\nu}e^{\pi\sqrt{-1}\left(-{\nu\over d}-1-{N\over 2}\right)}\cdot (-1)^{-N}\cdot \omega^{\nu+\ell m}\cdot Q_{\mathscr{N}(m)}(x).$$ Now the result follows from further simplification. ◻ According to [\[reindex-narrow\]](#reindex-narrow){reference-type="eqref" reference="reindex-narrow"} and [\[Q-multivalued\]](#Q-multivalued){reference-type="eqref" reference="Q-multivalued"}, we have $$\label{Barnes-formula-shift} Q_{\mathscr{N}(m)}(x\cdot e^{2\pi\sqrt{-1}\ell})=e^{2\pi\sqrt{-1}\ell\cdot {m-1\over d}}Q_{\mathscr{N}(m)}(x)=\omega^{\ell(m-1)}Q_{\mathscr{N}(m)}(x).$$ **Corollary 73**. Let $\nu\geq1$ and $\ell$ be an integer such that $0\leq \ell\leq \nu-1$. If $|\arg(x)+2\pi\ell |<(\nu+1-{1\over 2}\delta_1^\nu)\pi$, then the linear combination $\sum\limits_{m\in {\bf Nar}}\omega^{\ell\cdot m}Q_{\mathscr{N}(m)}(x)$ admits the asymptotic expansion $$\exp(-\nu e^{2\pi\sqrt{-1}\ell/\nu} x^{1\over \nu}) e^{{2\pi\sqrt{-1}\ell\over \nu}(1-{N\over 2})}(2\pi)^{{1\over 2}(\nu-1)} \nu^{-{1\over 2}} x^{-{1\over d}+{2-N\over 2\nu}}\left(\sum_{n=0}^{K}\lambda_n x^{-{n\over\nu}}+J_K x^{-{K\over \nu}}\right).$$ Now the following result on weak asymptotic classes is a consequence of Barnes' formula [\[Barnes-formula\]](#Barnes-formula){reference-type="eqref" reference="Barnes-formula"} and Corollary [Corollary 73](#barnes-corollary){reference-type="ref" reference="barnes-corollary"}. **Proposition 74**.
If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds and $\nu\geq1$, then the class ${\mathcal A}_\ell$ defined in [\[asymptotic-class-definition\]](#asymptotic-class-definition){reference-type="eqref" reference="asymptotic-class-definition"} is a weak asymptotic class with respect to $T e^{2\pi\sqrt{-1}\ell/\nu}$ for each integer $\ell=0, \ldots, \nu-1$. In particular, ${\mathcal A}_0$ is the principal weak asymptotic class. *Proof.* Using the change-of-variables formula [\[change-of-variable-formula\]](#change-of-variable-formula){reference-type="eqref" reference="change-of-variable-formula"} and formula [\[largest-positive-number\]](#largest-positive-number){reference-type="eqref" reference="largest-positive-number"}, we have $$-\nu e^{2\pi\sqrt{-1}\ell/\nu} x^{1\over \nu}= -\nu\cdot d^{-{d\over \nu}}\cdot\left(\prod\limits_{j=1}^{N}w_j^{w_j}\right)^{1\over \nu}\cdot {e^{2\pi\sqrt{-1}\ell/\nu}\over z}=-{T e^{2\pi\sqrt{-1}\ell/\nu}\over z}.$$ If $\arg(z)=2\pi\ell/\nu$, then $\arg(x)=-\nu \arg(z)=-2\pi\ell$, and $$\arg(x\cdot e^{2\pi\sqrt{-1}\ell})=\arg(x)+2\pi\ell=-\nu \arg(z)+2\pi\ell=0.$$ Hence, when $\arg(z)=2\pi\ell/\nu$, Corollary [Corollary 73](#barnes-corollary){reference-type="ref" reference="barnes-corollary"} applies to the right-hand side of Lemma [Lemma 72](#weak-class-via-q-function){reference-type="ref" reference="weak-class-via-q-function"}: the exponential factor $\exp(-\nu e^{2\pi\sqrt{-1}\ell/\nu}x^{1\over \nu})=e^{-{Te^{2\pi\sqrt{-1}\ell/\nu}\over z}}$ cancels the factor $e^{\lambda\over z}$ in [\[leading-asymp-class-behavior\]](#leading-asymp-class-behavior){reference-type="eqref" reference="leading-asymp-class-behavior"} for $\lambda=Te^{2\pi\sqrt{-1}\ell/\nu}$, while the remaining factors are bounded by a power of $|z|$ as $|z|\to 0$. Thus ${\mathcal A}_\ell$ is a weak asymptotic class with respect to $Te^{2\pi\sqrt{-1}\ell/\nu}$. ◻ Now the result below is a consequence of Proposition [Proposition 65](#mirror-theorem-fermat){reference-type="ref" reference="mirror-theorem-fermat"}. **Corollary 75**. Let $W$ be an invertible polynomial of either Fermat type or chain type. If $\nu>1$, then for each integer $\ell=0, \ldots, \nu-1$, the class ${\mathcal A}_\ell$ defined in [\[asymptotic-class-definition\]](#asymptotic-class-definition){reference-type="eqref" reference="asymptotic-class-definition"} is a weak asymptotic class with respect to $T e^{2\pi\sqrt{-1}\ell/\nu}$. ## Smallest asymptotic expansion versus dominant asymptotic expansion {#sec-dominance-order} Now we study solutions of the equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} near $x=\infty$. Using the dominance order for complex functions, we introduce the smallest asymptotic expansion and the dominant asymptotic expansion. Then we relate these concepts to the weak asymptotic classes and the strong asymptotic classes. ### Solution basis of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} near $x=\infty$ Now we consider the solutions of the generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} near $x=\infty$. Using the formula [\[barnes-Q\]](#barnes-Q){reference-type="eqref" reference="barnes-Q"}, we see that the asymptotic expansion of $\sum\limits_{r=0}^{q}Q_r((-1)^{\nu}x)$ in Theorem [\[Barnes-theorem\]](#Barnes-theorem){reference-type="ref" reference="Barnes-theorem"} gives a formal solution of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}. In general, we can write down a basis of asymptotic solutions explicitly. We follow the notation in [@Luk Section 5.11]. Let $$\label{exponential-term} K_{p,q}(x):=\exp(\nu x^{1\over \nu})(2\pi)^{{1\over 2}(\nu-1)} \nu^{-{1\over 2}}x^{\gamma}\left(1+\sum_{n=1}^{\infty}N_n x^{-{n\over\nu}}\right)$$ be the formal function given in [@Luk Section 5.11 (19)] and $$K_{p,q}^{(s)}(x):=K_{p,q}(x\cdot e^{2\pi\sqrt{-1}s}).$$ Let $L_{p, q}^{(t)}(x)$ be the formal function considered in [@Luk Section 5.11] for each $t=1, 2, \ldots, p$.
If $\alpha_i-\alpha_j\notin \mathbb{Z}$ for any $i\neq j$, then $$\label{t-th-algebraic-term} L_{p,q}^{(t)}(x)={x^{-\alpha_t}\Gamma(\alpha_t)\Gamma(\alpha_P-\alpha_t)^*\over \Gamma(\rho_Q-\alpha_t)} \ _{q+1}F_{p-1}\left( \begin{array}{l} \alpha_t,1+\alpha_t-\rho_Q\\ (1+\alpha_t-\alpha_P)^* \end{array} ; {(-1)^{q-p}\over x} \right).$$ In general, $L_{p, q}^{(t)}$ can be defined by a limit process as in [@Luk Section 5.1 (27-30)]. According to [@Luk Section 5], we have **Proposition 76**. Near $x=\infty$, the set $$\mathcal{B}_{\infty}:=\{K_{p,q}^{(s)}(x), L^{(t)}_{p, q}(x) \mid 0\leq s\leq \nu-1, \quad 1\leq t\leq p\}$$ forms a basis of solutions of the generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}. ### Dominance order in asymptotic functions For each $\theta\in [0, 2\pi)$, we consider a partial order $\preccurlyeq_\theta$ (and a strict order $\prec_\theta$) for complex functions in asymptotic expansions. **Definition 77**. Fix $\arg(z):=\theta\in [0, 2\pi)$. We say $f(z)\preccurlyeq_\theta g(z)$ if $$f(z)=O(g(z)), \quad \text{as} \quad z\to 0.$$ We say $f(z)\prec_\theta g(z)$ if $$f(z)=o(g(z)), \quad \text{as} \quad z\to 0.$$ The order $\preccurlyeq_\theta$ (or $\prec_\theta$) is usually called the *dominance order*. It is clear that the order depends on the choice of $\theta$. For the asymptotic functions in the set $\mathcal{B}_{\infty}$, the dominance order is determined by the real part of $\nu x^{1\over \nu}e^{2\pi\sqrt{-1}s/\nu}$. It is easy to observe the following results. **Lemma 78**. *Let $\arg(z)=\theta=2\pi\ell/\nu$. Then* 1. *$K_{p,q}^{(\ell)}(x)$ is the dominant asymptotic solution of the equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} with respect to the order $\prec_\theta$. That is, for any $g(x)\in \mathcal{B}_{\infty}\backslash\{K_{p,q}^{(\ell)}(x)\}$, we have $$g(x)\prec_\theta K_{p,q}^{(\ell)}(x).$$* 2. *$K_{p,q}^{(\ell)}((-1)^{\nu}x)$ is the smallest asymptotic expansion with respect to $\prec_\theta$; that is, for any $g(x)\in \mathcal{B}_{\infty}\backslash\{K_{p,q}^{(\ell)}(x)\}$, we have $$K_{p,q}^{(\ell)}((-1)^{\nu}x)\prec_\theta g((-1)^{\nu}x).$$* ### Smallest asymptotic expansion and weak asymptotic classes Now the weak asymptotic class ${\mathcal A}_\ell$ in Proposition [Proposition 74](#theorem-asymptotic-classes){reference-type="ref" reference="theorem-asymptotic-classes"} can be interpreted as the class such that when $\arg(z)=\theta=2\pi\ell/\nu,$ $\langle\Phi({\mathcal A}_\ell), \mathbf{1}\rangle$ has the smallest asymptotic expansion with respect to $\preccurlyeq_\theta$ among $$\{\langle\Phi(\alpha), \mathbf{1}\rangle\mid \alpha\in {\mathcal H }_{W, \langle J\rangle}\cap H_{\rm nar}\}.$$ Now we show the following. **Proposition 79**. If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds and $\alpha_i-\alpha_j\notin \mathbb{Z}$ for any $i\neq j$, then for each $\ell=0, 1, \ldots, \nu-1$, the space of weak asymptotic classes with respect to $T e^{2\pi\sqrt{-1}\ell/\nu}$ is spanned by ${\mathcal A}_\ell$. *Proof.* We construct a linear map $$\Lambda_{\theta}: \bigoplus_{s=0}^{\nu-1}{\mathbb C}\cdot K_{p,q}^{(s)}((-1)^{\nu} x)\oplus \bigoplus_{t=1}^{p}{\mathbb C}\cdot L^{(t)}_{p, q}((-1)^{\nu} x) \longrightarrow \bigoplus_{j=0}^{q}{\mathbb C}\cdot Q_j(x)$$ as follows: 1.
using Barnes' formulas in Theorem [\[Barnes-theorem\]](#Barnes-theorem){reference-type="ref" reference="Barnes-theorem"} and Corollary [Corollary 73](#barnes-corollary){reference-type="ref" reference="barnes-corollary"}, we take $$K_{p,q}^{(s)}((-1)^{\nu} x)\mapsto \sum\limits_{m\in {\bf Nar}}\omega^{s\cdot m}Q_{\mathscr{N}(m)}(x).$$ 2. using Barnes' formula (A) in [@Bar Section 7], we take $$(-1)^{-\nu\alpha_t} L^{(t)}_{p, q}((-1)^{\nu} x)\mapsto \sum_{m\in {\bf Nar}}\Gamma(\rho_{\mathscr{N}(m)}-\alpha_t)\Gamma(\alpha_t-\rho_{\mathscr{N}(m)}+1)Q_{\mathscr{N}(m)}(x).$$ If $\alpha_i-\alpha_j\notin \mathbb{Z}$ for any $i\neq j$, the linear map $\Lambda_{\theta}$ is an isomorphism of vector spaces. By Definition [Definition 55](#def-weak-asymptotic-class){reference-type="ref" reference="def-weak-asymptotic-class"}, the linear combination $\langle\Phi(\alpha), \mathbf{1}\rangle$ for a weak asymptotic class $\alpha$ with respect to $T e^{2\pi\sqrt{-1}\ell/\nu}$ must lie in the image of ${\mathbb C}\cdot K_{p,q}^{(\ell)}((-1)^{\nu} x)$ under $\Lambda_{\theta}$. So the subspace of the weak asymptotic classes is one-dimensional and the result follows from Proposition [Proposition 74](#theorem-asymptotic-classes){reference-type="ref" reference="theorem-asymptotic-classes"}. ◻ ### Dominant asymptotic expansion and strong asymptotic classes Now we return to the strong asymptotic classes discussed in Section [3.2](#sec-asymptotic-classes){reference-type="ref" reference="sec-asymptotic-classes"}. According to Proposition [Proposition 53](#prop-strong-class){reference-type="ref" reference="prop-strong-class"}, when the quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} holds, the strong asymptotic classes with respect to the eigenvalue $T e^{2\pi\sqrt{-1}\ell/\nu}$ will be determined by the dominant asymptotic expansion of $\widetilde{J}(\tau,z)$ as $z\to 0$ and $\arg(z)=2\pi\ell/\nu$. Furthermore, if mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, then $$\widetilde{J}(\tau,z)=z^{\widehat{c}_W\over 2} z^{\mathrm{Gr}} S(\tau, z)^{-1}(\mathbf{1})=\widetilde{I}(t, z)\Big\vert_{t=1}.$$ According to [\[I-function-hypergeometric\]](#I-function-hypergeometric){reference-type="eqref" reference="I-function-hypergeometric"}, the coordinate function of $\widetilde{J}(\tau,z)$ with respect to ${\bf e}_m$ is a scalar multiple of $f_{\mathscr{N}(m)}(x)$. So let us consider the dominant asymptotic expansion of $f_{\mathscr{N}(m)}(x)$ for each $m\in {\bf Nar}$. Using the change-of-coordinates formula [\[change-of-variable-formula\]](#change-of-variable-formula){reference-type="eqref" reference="change-of-variable-formula"}, we consider $\widetilde{x}:=e^{2\pi\sqrt{-1}\ell}x$.
If $\arg(z)=2\pi \ell/ \nu$ and $z\to 0$, we have $\arg(\widetilde{x})=0$ and $\widetilde{x}\to\infty$. According to Lemma [Lemma 78](#lem-smallest-dominant){reference-type="ref" reference="lem-smallest-dominant"} and [@Luk Section 5.11.2 (4)], for each $m\in {\bf Nar}$, the dominant asymptotic expansion of $f_{\mathscr{N}(m)}(x)$ is given by $$\begin{aligned} f_{\mathscr{N}(m)}(x) =\omega^{\ell(1-m)}f_{\mathscr{N}(m)}(\widetilde{x}) \sim \omega^{\ell(1-m)} {\Gamma(\rho_Q^{(m)})\over {\Gamma(\alpha^{(m)}_P)}} K_{p, q}(\widetilde{x}).\end{aligned}$$ Applying [\[I-function-hypergeometric\]](#I-function-hypergeometric){reference-type="eqref" reference="I-function-hypergeometric"}, this implies $$\begin{aligned} &\lim_{\substack{\arg(z)=2\pi \ell/\nu\\|z|\to+0}}{\widetilde{J}(\tau, z) \over \langle\mathbf{1}, \widetilde{J}(\tau, z) \rangle}\\ &=\omega^{d\ell} \prod\limits_{j=1}^{N}\Gamma(\{{w_j\cdot (d-1)\over d}\}) \sum_{m\in {\bf Nar}} \omega^{\ell\cdot (1-m)} \prod\limits_{j=1}^{N}{1 \over \Gamma(\{{w_j\cdot m\over d}\})}\,{\bf e}_m\\ &={\omega^{\ell}{\mathcal A}_\ell\over(2\pi)^{N}}\prod\limits_{j=1}^{N}\Gamma(\{{w_j\cdot (d-1)\over d}\}).\end{aligned}$$ According to Proposition [Proposition 53](#prop-strong-class){reference-type="ref" reference="prop-strong-class"}, we have **Proposition 80**. If both quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} and mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} hold, then for each $\ell=0, 1, \ldots, \nu-1$, the subspace of strong asymptotic classes with respect to $T e^{2\pi\sqrt{-1}\ell/\nu}$ is spanned by ${\mathcal A}_\ell$. # Matrix factorization and Gamma structures Let $R={\mathbb C}[x_1, \ldots, x_N]$ be a polynomial ring with $\deg(x_j)=w_j\in \mathbb{N}.$ Let $(W, G)$ be an admissible LG pair. Let ${\rm MF}(R, W)$ be the category of *matrix factorizations of $W$* and ${\rm MF}_{G}(R, W)$ be the category of *$G$-equivariant matrix factorizations of $W$* [@Wal; @Orl; @PV]. These categories and their homotopy categories have been used to describe the $D$-branes in topological Landau-Ginzburg models [@KaL; @Orl1; @Orl]. In [@CIR], Chiodo, Iritani, and Ruan constructed Gamma structures in FJRW theory and used the Gamma structure to build a connection from the category of matrix factorizations ${\rm MF}_{G}(R, W)$ to the FJRW theory of $(W, G)$. We review these results and then relate the Gamma structures to the asymptotic expansions in Section [3](#sec-asymptotic){reference-type="ref" reference="sec-asymptotic"} and Section [4](#sec-weak){reference-type="ref" reference="sec-weak"}. The only new result in this section is Theorem [Theorem 13](#corollary-chern-asymptotic){reference-type="ref" reference="corollary-chern-asymptotic"}. Later in Section [6](#sec-orlov){reference-type="ref" reference="sec-orlov"}, we will explain the relation between the asymptotic classes and Orlov's semiorthogonal decomposition of the category of matrix factorizations.
## The category of $G$-equivariant matrix factorizations of $W$ Following [@PV], an object in the category ${\rm MF}_{G}(R, W)$ is a pair $$(E, \delta_E)=\Big(E^0 \underset{\delta_1}{\overset{\delta_0}{\rightleftarrows}} E^1\Big),$$ where $E=E^0\oplus E^1$ is a $G$-equivariant ${\mathbb Z}/2$-graded finitely generated projective $R$-module and $\delta_E\in {\rm End}_R^1(E)$ is an odd $G$-equivariant endomorphism of $E$, such that $$\delta_E^2=W\cdot {\rm id}_E.$$ The morphisms between the $G$-equivariant matrix factorizations $\overline{E}:=(E, \delta_E)$ and $\overline{F}:=(F, \delta_F)$ are given by ${\rm Hom}_W(\overline{E}, \overline{F})^G$, which is the $G$-equivariant part of the ${\mathbb Z}/2$-graded module of $R$-linear homomorphisms $${\rm Hom}_W(\overline{E}, \overline{F}):= {\rm Hom}_{{\mathbb Z}/2-Mod_R}(E,F)\oplus {\rm Hom}_{{\mathbb Z}/2-Mod_R}(E, F[1]).$$ We remark that the category possesses a dg-structure given by a differential $d$ defined on $f\in {\rm Hom}_W(\overline{E}, \overline{F})$ as $$df=\delta_F\circ f-(-1)^{|f|}f\circ\delta_E.$$ The homotopy category associated with the category ${\rm MF}(R, W)$ (or ${\rm MF}_{G}(R, W)$) will be denoted by ${\rm HMF}(R, W)$ (or ${\rm HMF}_{G}(R, W)$ respectively). Both the categories ${\rm HMF}(R,W)$ and ${\rm HMF}_G(R,W)$ are triangulated categories with a natural translation functor $[1]$, where $$\overline{E}[1]=\Big(E^1 \underset{-\delta_0}{\overset{-\delta_1}{\rightleftarrows}} E^0\Big).$$ In [@Orl1; @Orl], the homotopy category ${\rm HMF}_{\langle J\rangle}(R, W)$ is called the category of $D$-branes of type $B$ in Landau-Ginzburg models. *Remark 81*. [Consider the ${\mathbb C}^*$-action on ${\mathbb C}^N={\rm Spec}(R)$ by $$\lambda\cdot (x_1, \ldots, x_N)=\left(\lambda^{w_1} x_1, \ldots, \lambda^{w_N} x_N\right).$$ Thus we may consider the action of $\langle J\rangle$ on ${\mathbb C}^N$ via the embedding $$J\mapsto e^{2\pi\sqrt{-1}/d}\in {\mathbb C}^*.$$ ]{style="color: red"} ### The singularity category We consider $R={\mathbb C}[x_1, \ldots, x_n]$ as a finitely generated commutative graded algebra over the field ${\mathbb C}$. Consider the *hypersurface algebra* of $W$, which is a quotient graded algebra $$S=R/(W)={\mathbb C}[x_1, \ldots, x_n]/(W).$$ Let ${\mathbf D}^b(S)$ be the bounded derived category of all complexes of $S$-modules with finitely generated cohomology and ${\mathbf D}^b_{\rm per}(S)$ be the full triangulated subcategory of ${\mathbf D}^b(S)$ of perfect complexes (i.e. finite complexes of finitely generated projective $S$-modules). The quotient $${\mathbf D}^{\rm gr}_{\rm Sg}(S): ={\mathbf D}^b(S)/{\mathbf D}^b_{\rm per}(S)$$ is called the *stabilized derived category* of $S$ [@Buc], or the *triangulated category of singularities* [@Orl1]. The category carries a natural triangulated structure.
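As a toy illustration of these notions (a standard example, and not an admissible pair of general type in the sense above), take one variable and $W=x^2$, so that $S={\mathbb C}[x]/(x^2)$. The residue field ${\mathbb C}=S/(x)$ has the infinite $2$-periodic free resolution $$\cdots\xrightarrow{\;x\;}S\xrightarrow{\;x\;}S\longrightarrow{\mathbb C}\longrightarrow 0,$$ so ${\mathbb C}$ is not a perfect complex and defines a nonzero object of ${\mathbf D}^{\rm gr}_{\rm Sg}(S)$; under the equivalence recalled below it corresponds to the matrix factorization of $x^2$ with $E^0=E^1=R$ and $\delta_0=\delta_1=x$, whose cokernel is ${\mathbb C}$.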
According to [@Eis; @Buc; @Orl1], the functor Cok: ${\rm MF}(R,W)\to {\rm gr-}S$ given by ${\rm Cok}(\overline{E})={\rm Coker} (\delta^{1})$ induces an exact functor $F: {\rm HMF}(R,W)\to {\mathbf D}^{\rm gr}_{\rm Sg}(S)$ between the two triangulated categories, which fits into the commutative diagram $$\label{cokernel-diagram} \begin{CD} {\rm MF}(R,W)@>{\rm Cok}>> {\rm gr-}S\\ @VV V @VV V\\ {\rm HMF}(R,W) @> F >> {\mathbf D}^{\rm gr}_{\rm Sg}(S). \end{CD}$$ Here the vertical arrows are given by projections. Since the algebra $R$ has finite homological dimension, the functor $F$ is an equivalence [@Orl1 Theorem 3.9]. Let $q$ be the natural projection $q: {\mathbf D}^b({\rm gr-}S)\to {\mathbf D}^{\rm gr}_{\rm Sg}(S)$. For each graded $S$-module $M$, we obtain a matrix factorization $$\label{stabilization-def} M^{\rm st}:=F^{-1}q(M),$$ which is called the *stabilization of* $M$ in [@Dyc]. In general, there exists an equivalence $F^G$ between the triangulated category ${\rm HMF}_G(R,W)$ and the $G$-equivariant version of the singularity category, when $G$ is a finite group [@Qui Theorem 7.3]. The two equivalences $F$ and $F^G$ commute with the natural forgetful functors. *Remark 82*. Explain the degree of $p_1$ and $p_0$ in [@Orl]. Construct the corresponding quasi-periodic infinite sequence $$\underline{K}^\bullet:=\{\ldots \to K^i\xrightarrow{k^i}K^{i+1}\xrightarrow{k^{i+1}}K^{i+2}\to \ldots\}$$ ### Koszul matrix factorizations Let ${\bf a}=(a_1, \ldots, a_n)$ and ${\bf b}=(b_1, \ldots, b_n)$ be two $n$-tuples of elements of $R$, such that $$W={\bf a}\cdot{\bf b}=\sum_{j=1}^n a_j b_j.$$ The Koszul matrix factorization $\{{\bf a}, {\bf b}\}\in {\rm MF}(R, W)$ corresponding to the pair $({\bf a}, {\bf b})$ is isomorphic as a ${\mathbb Z}/2$-graded $R$-module to the Koszul complex $$\label{koszul-complex} K_\bullet= \left(\land_R^{\bullet} (R^n), \quad \delta=\sum_{j=1}^{n}a_j e_j\wedge +\sum_{j=1}^{n} b_j\iota( e_j^*)\right).$$ Here $\{e_j\}$ is the standard basis of $R^n$, $\{e_j^*\}$ is the dual basis, and $\iota( e_j^*)$ is the contraction with $e_j^*$. If both elements $\sum_{j=1}^{n}a_j e_j$ and $\sum_{j=1}^{n} b_j\iota( e_j^*)$ are $G$-invariant, we use the same symbol $\{{\bf a}, {\bf b}\}\in {\rm MF}_{G}(R, W)$ to denote the $G$-equivariant Koszul matrix factorization of the pair $({\bf a}, {\bf b})$. *Remark 83*. The $G$-invariance of both elements is required before formula (2.11) in [@PV]. **Example 84**. Since $W$ is quasihomogeneous, we write $W=\sum_{j=1}^{N}q_j\partial_j W \cdot x_j.$ The *Koszul matrix factorization* $$\label{stablization-pair} {\mathbb C}^{\rm st}:= \left\{(q_1\partial_1 W, \ldots, q_N\partial_N W); (x_1, \ldots, x_N)\right\}$$ is also called the *stabilization of the residue field* ${\mathbb C}=R/(x_1, \ldots, x_N)$. In fact, the cokernel of this matrix factorization is the residue field ${\mathbb C}=R/(x_1, \ldots, x_N)$. Thus the notation ${\mathbb C}^{\rm st}=F^{-1}(q({\mathbb C}))$ is consistent with the definition in [\[stabilization-def\]](#stabilization-def){reference-type="eqref" reference="stabilization-def"}. We write $M(j)$ for the twisted graded $S$-module with $M(j)_i:=M_{j+i}$. In particular, let ${\mathbb C}(j)$ be the twist of the residue field ${\mathbb C}$ by $j$. We will also be interested in objects ${\mathbb C}(j)^{\rm st}=F^{-1}(q({\mathbb C}(j)))$. We remark that each ${\mathbb C}(j)^{\rm st}$ is $\langle J\rangle$-equivariant. So we will view ${\mathbb C}(j)^{\rm st}$ as an object in ${\rm HMF}_{\langle J\rangle}(R,W)$.
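For instance, for the Fermat cubic $W=x_1^3+x_2^3$ used as a running illustration earlier (our own example, with $q_1=q_2={1\over 3}$), we have $q_j\partial_j W=x_j^2$, so $${\mathbb C}^{\rm st}=\left\{(x_1^2, x_2^2); (x_1, x_2)\right\}$$ is the Koszul matrix factorization on $\land_R^{\bullet}(R^2)$ with $$\delta=x_1^2\, e_1\wedge+x_2^2\, e_2\wedge+x_1\,\iota(e_1^*)+x_2\,\iota(e_2^*),\qquad \delta^2=(x_1\cdot x_1^2+x_2\cdot x_2^2)\,{\rm id}=W\cdot {\rm id},$$ so that, as in Example 84, it is the stabilization of the residue field ${\mathbb C}$.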
### Hochschild homology and Chern character It is known that the equivariant Hochschild homology of the category ${\rm MF}_{G}(R, W)$ is isomorphic to the state space ${\mathcal H }_{W, G}$ of the LG pair $(W, G)$ defined in [\[state-space\]](#state-space){reference-type="eqref" reference="state-space"} [@PV Theorem 2.5.4], $$\label{hochshild-decomposition} {\rm HH}_*({\rm MF}_{G}(R, W))\simeq \left(\bigoplus_{g\in G}{\rm Jac}(W_g)\cdot \omega_g\right)^G.$$ For each $\overline{E}=(E, \delta_E)$, Polishchuk and Vaintrob constructed a $G$-equivariant *Chern character* $${\rm ch}_G(\overline{E})\in {\rm HH}_0({\rm MF}_{G}(R, W))$$ given by a canonical map $\tau^{\overline{E}}: {\rm Hom}_G^*(\overline{E}, \overline{E})\to {\rm HH}_*({\rm MF}_{G}(R, W))$ called the *boundary-bulk map* [@PV Proposition 1.2.4], such that ${\rm ch}_G(\overline{E})=\tau^{\overline{E}}({\rm id}_E)$. We can write $$\label{chern-character-component} {\rm ch}_G(\overline{E})=\sum_{g\in G}{\rm ch}_G(\overline{E})_g,$$ where each component ${\rm ch}_G(\overline{E})_g\in {\mathcal H }_g$ has an explicit super-trace formula expression as follows. Let $x_{k+1}, \ldots, x_n$ be the $g$-invariant variables, $\partial_j \delta_E$ be the derivative of $\delta_E$ with respect to the variable $x_j$, and ${\rm str}_{R^g}(\cdot)$ be the supertrace of an operator on the quotient space $R^g:=R/(x_1, \ldots, x_k)$; then by [@PV Theorem 3.3.3], $$\label{chern-character-supertrace} {\rm ch}_G(\overline{E})_g={\rm str}_{R^g}\left([\partial_n\delta_E\circ \ldots \circ \partial_{k+1}\delta_E\circ g]\vert_{x_1=\ldots=x_k=0}\right)\, dx_{k+1}\wedge\ldots\wedge dx_n.$$ [ We follow [@PV formula (2.37)] to consider the action of $G\subset G_{W}$ on $R$ introduced by [\[maximal-group\]](#maximal-group){reference-type="eqref" reference="maximal-group"}. For $J\in G$ and a homogeneous element $y\in E$ of degree $\deg(y)$, we have $$\label{positive-action} J\cdot y=\omega^{w_j\cdot \deg(y)} y.$$ ]{style="color: red"} *Remark 85*. In [@CIR Example 4.5], it is claimed that the Chern character of the Koszul matrix factorization $\{a, b\}_q$ of $$W=\sum_{j=1}^{N}q_j\partial_j W \cdot x_j$$ is supported on the narrow subspace. This is true if the polynomial $W$ is of Fermat type. In general, the Chern character of $\{a, b\}_q$ is supported on the ambient subspace ${\mathcal H }_{W, \langle J\rangle}^{\rm amb}$. For example, this is the case when $W=x^3y+y^3$ is the $E_7$-singularity. The following result is a direct consequence of [@PV Proposition 4.3.4]. **Lemma 86**. *Let $(W, \langle J\rangle)$ be an admissible LG pair. The $\langle J\rangle$-equivariant Chern character of ${\mathbb C}(\ell)^{\rm st}$ is supported on ${\mathcal H }_{\rm nar}$. More explicitly, we have $$\label{stablization-chern} {\rm ch}_{\langle J\rangle}({\mathbb C}(\ell)^{\rm st})=\sum_{m\in {\bf Nar}}\omega^{-\ell\cdot m}\prod_{j=1}^{N}(1-\omega^{w_j\cdot m})\ {\bf e}_m.$$* *Remark 87*. Check the calculations of the Chern character for examples in Example [Example 125](#example-weight-system){reference-type="ref" reference="example-weight-system"}. Tell the differences if we choose $G=\langle J\rangle$ or $G=G_{W}$. The element $m$ should belong to a set that depends on the weight system, but the set ${\rm Nar}$ is just for the case $G=G_{W}$. We should choose a certain invariant part of ${\rm Nar}$. If we consider a more general admissible group $G$, the main question is: is there any change for the formula $${\rm ch}_{G}(k^{\rm st}(-\ell))?$$ Yes.
For example, it may have a non-trivial contribution in ${\mathcal H }_g$ for some $g\in G_W\backslash \langle J\rangle$. ## Gamma structures in FJRW theory Here is the plan of this subsection. - Introduce the Gamma class in QST. - Define the Gamma map from the category of matrix factorizations to QST. - Prove that the Gamma class is the principal asymptotic class. - Explain that the Hirzebruch-Riemann-Roch formula in [@PV-hrr Theorem 4.2.1] implies that the Euler pairing matches the non-symmetric pairing defined earlier. This generalizes the cases discussed in [@CIR]. - Show an example: calculate $\chi(E_0, E_0)$ explicitly. Gamma structures have been explored in the study of integral structures in Gromov-Witten theory [@KKP; @Iri]. In [@CIR], Chiodo, Iritani, and Ruan introduced Gamma structures in FJRW theory. The Gamma structures allowed them to discover integral structures in FJRW theory, as well as the connection between the Landau-Ginzburg/Calabi-Yau correspondence and the Orlov equivalence for Fermat type Calabi-Yau hypersurface singularities. We apply the construction in [@CIR] to admissible pairs $(W, \langle J\rangle)$ of general type and reveal the connections between the category ${\rm MF}_{\langle J\rangle}(R, W)$ and the asymptotic behavior of the corresponding FJRW theory discussed in Section [3](#sec-asymptotic){reference-type="ref" reference="sec-asymptotic"}. ### Gamma map and Gamma class Following [@CIR], we introduce an endomorphism on ${\mathcal H }_{W,G}$ for any admissible LG pair $(W, G)$. This generalizes the construction for Fermat polynomials in [@CIR]; see also [@KRS]. **Definition 88**. Let $(W, G)$ be an admissible LG pair. We define a *Gamma map* $\widehat{\Gamma}$ on the state space ${\mathcal H }_{W, G}$, given by $$\label{qst-gamma} \widehat{\Gamma}:=\bigoplus_{g\in G}{(-1)^{-\mu(g)}} \prod_{j=1}^{N}\Gamma(1-\theta_j(g))\cdot {\rm Id}_{{\mathcal H }_{g}}\in {\rm End}({\mathcal H }_{W,G}).$$ For each object $\overline{E}\in {\rm MF}_{G}(R, W)$, we call the cohomology class $\widehat{\Gamma}({\rm ch}_{G}(\overline{E}))$ the *Gamma class* of $\overline{E}$. In particular, we define the *Gamma class* of the pair $(W, G)$ to be the Gamma class of ${\mathbb C}^{\rm st}$, and denote it by $$\label{standard=gamma} \widehat{\Gamma}(W, G):=\widehat{\Gamma}\left({\rm ch}_G({\mathbb C}^{\rm st})\right).$$ Applying Lemma [Lemma 86](#chern-residue-field){reference-type="ref" reference="chern-residue-field"} to the LG pair $(W, \langle J\rangle)$, we immediately have **Proposition 89**. Let $(W, \langle J\rangle)$ be an admissible LG pair, not necessarily of general type. For any $\ell\in{\mathbb Z}$, the Gamma class of ${\mathbb C}(\ell)^{\rm st}$ is the class ${\mathcal A}_{\ell}$ in [\[asymptotic-class-definition\]](#asymptotic-class-definition){reference-type="eqref" reference="asymptotic-class-definition"}.
Namely, $$\widehat{\Gamma}\left({\rm Ch}_{\langle J\rangle}({\mathbb C}(\ell)^{\rm st})\right)={\mathcal A}_{\ell}.$$ *Proof.* Using the Gamma map [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"} and the Chern character formula [\[stablization-chern\]](#stablization-chern){reference-type="eqref" reference="stablization-chern"}, we have $$\begin{aligned} &&\widehat{\Gamma}\left({\rm Ch}_{\langle J\rangle}({\mathbb C}(\ell)^{\rm st})\right)\\ &=&\widehat{\Gamma}\left(\sum_{m\in {\bf Nar}} \omega^{-\ell\cdot m}\prod_{j=1}^{N}(1-\omega^{w_j\cdot m})\, {\bf e}_m\right)\\ &=&\sum_{m\in {\bf Nar}}{(-1)^{-\mu(J^m)}} \prod_{j=1}^{N}\Gamma\left(1-\left\{{w_j\cdot m\over d}\right\}\right) \omega^{-\ell\cdot m} \prod_{j=1}^{N}(1-\omega^{w_j\cdot m})\, {\bf e}_m\\ &=&(2\pi)^N\sum_{m\in {\bf Nar}} {\omega^{-\ell\cdot m}\over \prod\limits_{j=1}^{N}\Gamma(\left\{{w_j\cdot m\over d}\right\})}\, {\bf e}_m.\end{aligned}$$ The third equality uses the formula [\[hodge-J\^m\]](#hodge-J^m){reference-type="eqref" reference="hodge-J^m"} and the Euler reflection formula [\[euler-reflection\]](#euler-reflection){reference-type="eqref" reference="euler-reflection"}. ◻ *Remark 90*. We remark that the Gamma map defined in [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"} differs from the Gamma map $\widehat{\Gamma}\in {\rm End}({\mathcal H }_{W,G})$ in [@CIR Definition 2.17] by a factor of ${(-1)^{-\mu(g)}}$. The modification is caused by the identification of the Gamma classes with the asymptotic classes in Proposition [Proposition 89](#theorem-chern-asymptotic){reference-type="ref" reference="theorem-chern-asymptotic"}. ### Gamma classes and results on Gamma Conjecture [Conjecture 12](#algebraic-analytic){reference-type="ref" reference="algebraic-analytic"} {#gamma-classes-and-results-on-gamma-conjecture-algebraic-analytic} According to the results in Proposition [Proposition 74](#theorem-asymptotic-classes){reference-type="ref" reference="theorem-asymptotic-classes"} and Corollary [Corollary 75](#corollary-weak-classes){reference-type="ref" reference="corollary-weak-classes"}, we obtain Theorem [Theorem 13](#corollary-chern-asymptotic){reference-type="ref" reference="corollary-chern-asymptotic"} on Gamma Conjecture [Conjecture 12](#algebraic-analytic){reference-type="ref" reference="algebraic-analytic"}. *Proof.* We calculate ${\rm Ch}_G({\mathbb C}^{\rm st})$ directly using the definition in [@PV-hrr], in two steps: - Expand the calculation using the supertrace formula. - Show that it is supported on the ambient subspace ${\mathcal H }_{W, \langle J\rangle}^{\rm amb}$.
We have $$\begin{aligned} {\rm Ch}_G({\mathbb C}^{\rm st})=\sum_{\gamma\in {\bf Nar}}\det\left[1-\gamma\right]\phi_\gamma=\sum_{\gamma\in {\bf Nar}}\prod_{j=1}^{N}\left(1-e^{2\pi\sqrt{-1}\theta_j(\gamma)}\right)\phi_\gamma.\end{aligned}$$ Applying the Gamma map [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"} and the identity [\[todd-identity\]](#todd-identity){reference-type="eqref" reference="todd-identity"} for gamma functions, we have $$\begin{aligned} \widehat{\Gamma}({\rm Ch}_G({\mathbb C}^{\rm st})) &=\sum_{\gamma\in {\bf Nar}}\prod_{j=1}^{N}\left(1-e^{2\pi\sqrt{-1}\theta_j(\gamma)}\right)\Gamma(1-\theta_j(\gamma))\phi_\gamma\\ &=\sum_{\gamma\in {\bf Nar}}(2\pi)^N(-1)^{\bf Gr}\prod_{j=1}^{N}\Gamma^{-1}(\theta_j(\gamma))\phi_\gamma.\end{aligned}$$ Recall that here ${\bf Gr}$ is the Hodge grading operator such that for $\phi_\gamma=\alpha\vert\gamma\rangle\in {\mathcal H }_{W, G}$, not necessarily narrow, $${\bf Gr}(\phi_\gamma)=\mu^+_\gamma(\phi_\gamma)=\left({\rm wt}(\alpha)+\sum_{j=1}^{N}\theta_j(\gamma)-{N\over 2}\right)\phi_\gamma.$$ Now the result follows from [\[principle-class-general-case\]](#principle-class-general-case){reference-type="eqref" reference="principle-class-general-case"}. ◻ ## Gamma structure and the Hirzebruch-Riemann-Roch formula ### A canonical pairing on ${\rm HH}_*({\rm MF}_G(R, W))$ In general, for a proper dg-category $\mathscr{C}$, there is a canonical pairing on the Hochschild homology [@Shk] $$\langle\cdot, \cdot\rangle_{\mathscr{C}}: {\rm HH}_*(\mathscr{C}^{op})\otimes {\rm HH}_*(\mathscr{C})\to {\mathbb C}.$$ In [@PV Theorem 4.2.1], an explicit formula of this pairing for the category ${\rm MF}_G(R, W)$ has been worked out in terms of the residue pairing in [\[residue-pairing\]](#residue-pairing){reference-type="eqref" reference="residue-pairing"}. Using the identification $${\rm HH}_*({\rm MF}_G(R, -W))= {\rm HH}_*({\rm MF}_G(R, W)),$$ we will denote this canonical pairing by $$( \ , \ )^{\rm PV}: {\rm HH}_*({\rm MF}_G(R, W))\times {\rm HH}_*({\rm MF}_G(R, W))\to {\mathbb C}.$$ Recall that in [\[hochshild-decomposition\]](#hochshild-decomposition){reference-type="eqref" reference="hochshild-decomposition"}, we have ${\rm HH}_*({\rm MF}_G(R, W))={\mathcal H }_{W, G}.$ For $A, B\in {\mathcal H }_{W, G}$, the canonical pairing takes the form $$\label{PV-pairing} (A, B)^{\rm PV}={1\over |G|}\sum_{g\in G}{1\over \det\left[1-g; {\mathbb C}^N/{\mathbb C}^g\right]}\langle A_{g^{-1}}, B_{g}\rangle.$$ Here ${\mathbb C}^{g}\subset{\mathbb C}^N$ is the $g$-invariant subspace, ${\mathbb C}^N/{\mathbb C}^g$ is the quotient space, $A_g\in {\mathcal H }_g$ is the projection of $A$ on ${\mathcal H }_g$, and $\langle\cdot, \cdot\rangle$ is the pairing in [\[fjrw=orbifold\]](#fjrw=orbifold){reference-type="eqref" reference="fjrw=orbifold"}.
### Euler pairing and Hirzebruch-Riemann-Roch formula For a pair of objects $\overline{E}$ and $\overline{F}$ in the category ${\rm MF}_G(R, W)$, the *Euler pairing* $\chi(\overline{E}, \overline{F})$ is defined to be the Euler characteristic of the Hom-space $$\label{euler-pairing} \chi(\overline{E}, \overline{F}):=\chi({\rm Hom}_W(\overline{E}, \overline{F})^G)\in \mathbb{Z}.$$ In [@PV Theorem 4.2.1 (ii)], Polishchuk and Vaintrob proved a *Hirzebruch-Riemann-Roch formula* for the category of $G$-equivariant matrix factorizations of $W$, which identifies the Euler pairing with the canonical pairing using the Chern character. That is, $$\label{PV-HRR-formula} \chi(\overline{E}, \overline{F})=({\rm ch}_G(\overline{E}), {\rm ch}_G(\overline{F}))^{\rm PV}.$$ This generalizes the earlier work of Walcher [@Wal], where the HRR formula was proven in some particular cases. ### Gamma class and an analog of Todd class in LG models By the isomorphisms in [\[hochshild-decomposition\]](#hochshild-decomposition){reference-type="eqref" reference="hochshild-decomposition"}, we may consider the Gamma map in [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"} as an isomorphism $$\widehat{\Gamma}:{\rm HH}_*({\rm MF}_G(R, W))\to {\mathcal H }_{W, G}, \quad A\mapsto \widehat{\Gamma}(A).$$ **Proposition 91**. The non-symmetric pairing defined in [\[non-symmetric-pairing-qst\]](#non-symmetric-pairing-qst){reference-type="eqref" reference="non-symmetric-pairing-qst"} and the canonical pairing [\[PV-pairing\]](#PV-pairing){reference-type="eqref" reference="PV-pairing"} are compatible under the Gamma structure. That is, $$\left[ \widehat{\Gamma}(A), \widehat{\Gamma}(B)\right)=(A, B)^{\rm PV}.$$ *Proof.* By definition of the Gamma map [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"}, we can rewrite $$\widehat{\Gamma}(A)=\sum_{g\in G}(-1)^{-\mu(g)} \prod_{\substack{1\leq j\leq N\\ \theta_j(g)\neq0}}\Gamma(1-\theta_j(g))A_g.$$ Recall that $N_g$ is the dimension of the $g$-fixed locus in ${\mathbb C}^N$. Applying [\[non-symmetric-pairing-qst\]](#non-symmetric-pairing-qst){reference-type="eqref" reference="non-symmetric-pairing-qst"}, we have $$\begin{aligned} \left[\widehat{\Gamma}(A), \widehat{\Gamma}(B)\right) =&{1\over |G|}\sum_{g\in G}{(-1)^{-\mu(g)}\over (-2\pi)^{N-N_g}} \prod_{\substack{1\leq j\leq N\\ \theta_j(g)\neq0}}\Gamma(\theta_j(g))\Gamma(1-\theta_j(g))\langle A_{g^{-1}}, B_{g}\rangle\\ =&{1\over |G|}\sum_{g\in G}{(-1)^{-\mu(g)}\over (-2\pi)^{N-N_g}}\prod_{\substack{1\leq j\leq N\\ \theta_j(g)\neq0}} {-2\pi\sqrt{-1}e^{\pi\sqrt{-1}\theta_j(g)} \over 1-e^{2\pi\sqrt{-1}\theta_j(g)}} \langle A_{g^{-1}}, B_{g}\rangle\\ =&{1\over |G|}\sum_{g\in G}\prod_{\substack{1\leq j\leq N\\ \theta_j(g)\neq0}} {\langle A_{g^{-1}}, B_{g}\rangle \over 1-e^{2\pi\sqrt{-1}\theta_j(g)}}\\ =&(A, B)^{\rm PV}.\end{aligned}$$ Here the second equality follows from the Euler reflection formula [\[euler-reflection\]](#euler-reflection){reference-type="eqref" reference="euler-reflection"}, and the third equality follows from the definition of the Hodge grading number in [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"}. ◻ *Remark 92*. 1. The result in Proposition [Proposition 91](#non-symmetric pairing-identity){reference-type="ref" reference="non-symmetric pairing-identity"} is similar to [@CIR Theorem 4.6], where the weighted homogeneous polynomial $W$ is assumed to be Gorenstein. 2.
According to [@Wal Section 5], in LG models, the factor ${1\over \det\left[1-g; {\mathbb C}^N/{\mathbb C}^g\right]}$ in [\[PV-pairing\]](#PV-pairing){reference-type="eqref" reference="PV-pairing"} can be viewed as an analog of the Todd class of manifolds. So we may view the Gamma map as a non-symmetric "square root" of this Todd class analog. As a consequence of [\[PV-HRR-formula\]](#PV-HRR-formula){reference-type="eqref" reference="PV-HRR-formula"} and Proposition [Proposition 91](#non-symmetric pairing-identity){reference-type="ref" reference="non-symmetric pairing-identity"}, we have **Corollary 93**. The Euler pairing in [\[euler-pairing\]](#euler-pairing){reference-type="eqref" reference="euler-pairing"} is compatible with the non-symmetric pairing in [\[non-symmetric-pairing-qst\]](#non-symmetric-pairing-qst){reference-type="eqref" reference="non-symmetric-pairing-qst"} via the Gamma map [\[qst-gamma\]](#qst-gamma){reference-type="eqref" reference="qst-gamma"}, $$\chi(\overline{E}, \overline{F}) =\left[ \widehat{\Gamma}({\rm ch}_G(\overline{E})), \widehat{\Gamma}({\rm ch}_G(\overline{F}))\right).$$ # Orlov's SOD and Stokes phenomenon {#sec-orlov} For quasi-homogeneous polynomials of general type, there is a remarkable connection between Orlov's work [@Orl] on semiorthogonal decompositions of the triangulated category ${\rm HMF}_{\langle J\rangle}(R,W)$ and the Stokes phenomenon that appears in the asymptotic expansions in the FJRW theory of $(W, \langle J\rangle)$. This is similar to the case of Fano hypersurfaces, where semiorthogonal decompositions of the derived category of coherent sheaves on the Fano hypersurface [@KKP; @GGI; @SS] are related to the Stokes phenomenon that appears in its Gromov-Witten theory. ## Orlov's SOD for matrix factorizations ### Semiorthogonal decompositions We collect a few definitions and facts on semiorthogonal decompositions of a triangulated category [@BO; @Orl]. Let ${\mathcal D}$ be a triangulated category. Let ${\mathcal N}\subset {\mathcal D}$ be a full subcategory. The *right orthogonal to ${\mathcal N}$* is the full subcategory $${\mathcal N}^\perp=\{M\in {\mathcal D}\mid {\rm Hom}(N, M)=0, \forall N\in {\mathcal N}\}.$$ The orthogonal ${\mathcal N}^\perp$ is also a triangulated category. The subcategory ${\mathcal N}$ is called *right admissible* if there is a right adjoint functor for the embedding ${\mathcal N}\hookrightarrow {\mathcal D}.$ The left orthogonal and left admissibility are defined analogously. The subcategory ${\mathcal N}$ is called *admissible in ${\mathcal D}$* if it is left admissible and right admissible. A sequence of full triangulated categories $({\mathcal N}_1, \ldots, {\mathcal N}_n)$ in ${\mathcal D}$ is called a *semiorthogonal decomposition of the category ${\mathcal D}$* (SOD for short) if - there is a sequence of left admissible subcategories $${\mathcal D}_1={\mathcal N}_1\subset {\mathcal D}_2\subset \ldots \subset {\mathcal D}_n={\mathcal D}$$ such that ${\mathcal N}_p$ is left orthogonal to ${\mathcal D}_{p-1}$ in ${\mathcal D}_p$; - each ${\mathcal N}_p$ is admissible in ${\mathcal D}$; - the sequence $({\mathcal N}_1, \ldots, {\mathcal N}_n)$ is *full* in ${\mathcal D}$, i.e., the sequence generates the category ${\mathcal D}$.
Such a semiorthogonal decomposition of the category ${\mathcal D}$ will be denoted by $${\mathcal D}=\langle{\mathcal N}_1, \ldots, {\mathcal N}_n\rangle.$$ Recall that an object $E$ is said to be *exceptional* if, for $p\in{\mathbb Z}$, $${\rm Hom}(E, E[p]) =\begin{dcases} 0, & p\neq0;\\ {\mathbb C}, & p=0. \end{dcases}$$ ### Orlov equivalence Consider the hypersurface $${\mathcal X}_W:=(W=0)\subset {\mathbb P}^{N-1}(w_1, \ldots, w_N).$$ If $N=1$, then ${\mathcal X}_W$ is the empty set. If $N>1$, then the hypersurface has complex dimension $N-2$. In [@Orl] Orlov established a beautiful connection between the triangulated category ${\rm HMF}_{\langle J\rangle}(R,W)$ and the derived category ${\bf D}^b({\mathcal X}_W)$ of coherent sheaves on the projective variety ${\mathcal X}_W$ using semiorthogonal decompositions. The details for quasihomogeneous polynomials have been developed in [@BFK1; @BFK2]. For our purpose, we only consider the cases when the admissible LG pair $(W, \langle J\rangle)$ is of general type. This implies that either ${\mathcal X}_W$ is an empty set (if $N=1$) or it is a projective variety of general type (if $N>1$). The Orlov equivalence can be summarized as follows: **Theorem 94**. *[@Orl Theorem 40][@BFK2 Theorem 3.5] [\[orlov-equivalence\]]{#orlov-equivalence label="orlov-equivalence"} Consider the admissible LG model $(W, \langle J\rangle)$, where $W$ is of general type with index $\nu>0$. Then the triangulated category ${\rm HMF}_{\langle J\rangle}(R,W)$ admits a semiorthogonal decomposition $$\label{orlov-sod-lg} {\rm HMF}_{\langle J\rangle}(R,W)\simeq\left< {\mathbb C}(\nu-1)^{\rm st}, \ldots, {\mathbb C}^{\rm st}, {\bf D}^b({\mathcal X}_W)\right>.$$* Here the category ${\bf D}^b({\mathcal X}_W)$ is viewed as the bounded derived category ${\mathbf D}^b({\rm qgr}S)$ under the equivalence of [@Orl Theorem 28], where ${\rm qgr}S$ is the quotient of the abelian category of graded finitely generated $S$-modules by the subcategory of torsion modules. When $W$ is of general type, there is a fully faithful functor between the bounded derived category ${\mathbf D}^b({\rm qgr}S)$ and the category[^2] ${\mathbf D}^{\rm gr}_{\rm Sg}(S)$ [@Orl Theorem 16 (ii)]. The sequence $({\mathbb C}(\nu-1)^{\rm st}, \ldots, {\mathbb C}^{\rm st})$ in the semiorthogonal decomposition [\[orlov-sod-lg\]](#orlov-sod-lg){reference-type="eqref" reference="orlov-sod-lg"} is an *exceptional collection*. That is, each ${\mathbb C}(j)^{\rm st}$ is exceptional, and the sequence satisfies the semiorthogonal condition that for all $p\in {\mathbb Z}$, when $i>j$, $${\rm Hom}({\mathbb C}(j)^{\rm st}, {\mathbb C}(i)^{\rm st}[p])=0.$$ In fact, for any $\ell\in{\mathbb Z}$, the sequence $\big({\mathbb C}(\ell)^{\rm st}, {\mathbb C}(\ell-1)^{\rm st}, \ldots, {\mathbb C}(\ell-\nu+1)^{\rm st}\big)$ is an exceptional collection, and there is a semiorthogonal decomposition $${\rm HMF}_{\langle J\rangle}(R,W)\simeq\left< {\mathbb C}(\ell)^{\rm st}, {\mathbb C}(\ell-1)^{\rm st}, \ldots, {\mathbb C}(\ell-\nu+1)^{\rm st}, {\bf D}^b({\mathcal X}_W)\right>.$$ ## Combinatorics of the Gram matrix {#sec-gram-matrix} Let the admissible LG pair $(W, \langle J\rangle)$ be of general type. For any integer $\ell$, we denote the Gram matrix of the sequence $\big({\mathbb C}(\ell)^{\rm st}, {\mathbb C}(\ell-1)^{\rm st}, \ldots, {\mathbb C}(\ell-\nu+1)^{\rm st}\big)$ by $M_\ell$. It is a $\nu\times\nu$ square matrix whose entries are given by the Euler pairing.
We write $$\label{sub-gram-matrix} M_\ell:=\left(\chi\big({\mathbb C}(\ell-j+1)^{\rm st}, {\mathbb C}(\ell-i+1)^{\rm st}\big)\right)_{\nu \times \nu}.$$ ### A polynomial of the weight system Now we investigate the properties of this matrix and its inverse (if it exists). First of all, we need some combinatorial preparation. Let $\vec{w}=(w_1, \ldots, w_N)$ be the $N$-tuple of weights. We define a set of coefficients $$\{a(n)\mid n\in{\mathbb Z}, 0\leq n<d\}$$ using the polynomial $$\label{weight-product-polynomial} P^A(x):=\sum_{n=0}^{d-1} a(n) x^n:=\prod_{j=1}^{N}(1-x^{w_j})=1+\ldots+(-1)^N x^{\sum_{j=1}^{N}w_j}.$$ By definition, we immediately have $$\label{property-coefficients-polynomial} a(n)= \begin{dcases} 1, & n=0;\\ (-1)^N, & n= d-\nu;\\ 0, & d-\nu< n< d. \end{dcases}$$ Using $$\begin{split} \prod_{j=1}^{N}(1-x^{w_j}) &=\prod_{j=1}^{N}\left(-x^{w_j}(1-x^{-w_j})\right)\\ &=(-1)^Nx^{\sum_{j=1}^{N}w_j}\sum_{n=0}^{d-\nu} a(n) x^{-n}\\ &=\sum_{n=0}^{d-\nu} (-1)^Na(n) x^{d-\nu-n}, \end{split}$$ we obtain the following symmetry of the coefficients. **Lemma 95**. *The coefficients $a(n)$ of the degree $d-\nu$ polynomial $P^A(x)$ satisfy $$a(d-\nu-n)=(-1)^Na(n), \quad \forall \quad 0\leq n\leq d-\nu.$$* On the other hand, we have the following combinatorial identity. **Lemma 96**. *For each $n$ such that $0\leq n\leq d-1$, we have $$\label{magic-identity} {1\over d}\sum_{m\in {\bf Nar}}\omega^{n\cdot m}\prod_{j=1}^{N}(1-\omega^{-w_j\cdot m})=a(n).$$* *Proof.* Consider the polynomial $$P(x)={1\over d}\sum_{m=0}^{d-1}\prod_{j=1}^{N}(1-\omega^{-m\cdot w_j})\sum_{n=0}^{d-1}(\omega^mx)^n-\prod_{j=1}^{N}(1-x^{w_j}).$$ For any $m$ such that $0\leq m\leq d-1$, if $m\notin {\bf Nar},$ then there exists some $j$ such that $1\leq j \leq N$ and $(1-\omega^{-m\cdot w_j})=0$. So we can rewrite $$P(x)=\sum_{n=0}^{d-1}x^n \left({1\over d}\sum_{m\in {\bf Nar}}\omega^{n\cdot m}\prod_{j=1}^{N}(1-\omega^{-w_j\cdot m})-a(n)\right).$$ We see $$\deg P(x)\leq{\rm Max}\{d-1, \sum_{j=1}^N w_j\}=d-1.$$ Now for each integer $k=0,1,\ldots, d-1$, using the following identities $$\label{unity-identity} \sum_{n=0}^{d-1}(\omega^mx)^n= \begin{dcases} d, & \text{if } x=\omega^{-m};\\ 0, & \text{if } x^d=1, x\neq\omega^{-m}, \end{dcases}$$ we have $$P(\omega^{-k})={1\over d}\prod_{j=1}^{N}(1-\omega^{-k\cdot w_j})d-\prod_{j=1}^{N}(1-\omega^{-k\cdot w_j})=0.$$ Thus $P(x)$ has $d$ distinct roots and we must have $P(x)=0$. So the equality [\[magic-identity\]](#magic-identity){reference-type="eqref" reference="magic-identity"} follows. ◻ Using the canonical pairing [\[PV-pairing\]](#PV-pairing){reference-type="eqref" reference="PV-pairing"} and Lemma [Lemma 96](#lemma-magic-identity){reference-type="ref" reference="lemma-magic-identity"}, we have **Proposition 97**. Let $W$ be an invertible polynomial of general type. Consider the admissible LG pair $(W, \langle J\rangle)$. The Gram matrix $M_\ell$ of the exceptional collection $\big({\mathbb C}(\ell)^{\rm st}, {\mathbb C}(\ell-1)^{\rm st}, \ldots, {\mathbb C}(\ell-\nu+1)^{\rm st}\big)$ has the following properties: 1. It does not depend on $\ell$. 2. It is upper-triangular and all the diagonal entries are $1$. 3. For each $1\leq j\leq \nu$ and $0\leq n\leq \nu-j$, the $(j, j+n)$-th entry of $M_\ell$ is $a(n)$.
*Proof.* Using the HRR formula [\[PV-HRR-formula\]](#PV-HRR-formula){reference-type="eqref" reference="PV-HRR-formula"}, the Chern character formula [\[stablization-chern\]](#stablization-chern){reference-type="eqref" reference="stablization-chern"}, and the canonical pairing formula [\[PV-pairing\]](#PV-pairing){reference-type="eqref" reference="PV-pairing"}, for any pair $(j, j+n)$ such that $1\leq j, j+n\leq \nu$, we can calculate the Euler pairing explicitly $$\begin{split} &\chi\Big({\mathbb C}(\ell-(j+n)+1)^{\rm st}, {\mathbb C}(\ell-j+1)^{\rm st}\Big)\\ =&\left({\rm ch}_{\langle J\rangle}({\mathbb C}(\ell-(j+n)+1)^{\rm st}), {\rm ch}_{\langle J\rangle}({\mathbb C}(\ell-j+1)^{\rm st})\right)^{\rm PV}\\ =&{1\over d}\sum_{m\in {\bf Nar}}\langle\omega^{-(\ell-j-n+1)(d-m)}\prod_{j=1}^{N}(1-\omega^{w_j\cdot (d-m)}){\bf e}_{d-m},\ \omega^{-(\ell-j+1)m}\prod_{j=1}^{N}(1-\omega^{w_j\cdot m}){\bf e}_{m}\rangle\\ &\cdot \prod_{j=1}^{N}(1-\omega^{w_j\cdot m})^{-1}\\ =&{1\over d}\sum_{m\in {\bf Nar}}\omega^{n\cdot m}\prod_{j=1}^{N}(1-\omega^{-w_j\cdot m}). \end{split}$$ The entries are independent of the choice of $\ell$. According to the formula [\[magic-identity\]](#magic-identity){reference-type="eqref" reference="magic-identity"}, the $(j,j+n)$-th entry of $M_\ell$ is given by $$a_{j, j+n}= \begin{dcases} a(d+n), & n<0;\\ a(n), & n\geq 0. \end{dcases}$$ Now the remaining properties follow from the facts listed in [\[property-coefficients-polynomial\]](#property-coefficients-polynomial){reference-type="eqref" reference="property-coefficients-polynomial"}. ◻ ### A Laurent polynomial of the weight system Let $L_{\vec{w}}(n)$ be the number of partitions of $n$ using only the parts from $\vec{w}$. That is, the number of nonnegative integer $N$-tuples $(k_1, k_2, \ldots, k_N)$ satisfying $$\sum_{j=1}^{N}k_jw_j=n.$$ We have a generating function $$\label{weight-product-inverse-polynomial} \sum_{n=0}^{\infty} L_{\vec{w}}(n) x^n=\prod_{j=1}^{N}(1-x^{w_j})^{-1}.$$ It is straightforward to obtain **Proposition 98**. The Gram matrix $M_\ell$ has an inverse $M_\ell^{-1}=(a^{i,j})$. The inverse matrix $M_\ell^{-1}$ satisfies the following properties. 1. It does not depend on $\ell$. 2. It is upper-triangular and all the diagonal entries are $1$. 3. For each $1\leq j\leq \nu$ and $0\leq n\leq \nu-j$, we have $a^{j, j+n}=L_{\vec{w}}(n)$. ## Relation to Stokes phenomenon via Meijer G-functions {#sec-asymptotic-luke} In this subsection, we will show in Proposition [Proposition 108](#theorem-gram-asymptotic){reference-type="ref" reference="theorem-gram-asymptotic"} that the entries of the matrix $M_\ell^{-1}$ match certain coefficients in the asymptotic expansion of generalized hypergeometric functions. We need some knowledge of Meijer $G$-functions [@Mei], which are integral solutions of the generalized hypergeometric equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"}. We mainly follow the work of Luke [@Luk] and Fields [@Fie].
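Before turning to Meijer $G$-functions, we record a small computational illustration of Propositions 97 and 98 (an informal sketch; the weight system $(d; w_1, w_2)=(9; 2, 3)$ with $\nu=4$, which is that of $W=x_1^3x_2+x_2^3$, is chosen only as an example). It builds $M_\ell$ from the coefficients $a(n)$ of $P^A(x)$ and checks that the inverse matrix has entries $L_{\vec{w}}(n)$.

```python
# Illustration of Propositions 97-98: build the Gram matrix M_l from the
# coefficients a(n) of P^A(x) = prod_j (1 - x^{w_j}) and check that its
# inverse has entries L_w(n) (partition counts with parts in w).
# The weight system (d; w) = (9; 2, 3), nu = 4, is a sample choice.
import sympy as sp

x = sp.symbols('x')
w, d = [2, 3], 9
nu = d - sum(w)                                       # index nu = d - sum(w_j)

PA = sp.expand(sp.Mul(*[1 - x**wj for wj in w]))      # P^A(x), degree d - nu
a = [PA.coeff(x, n) for n in range(d)]                # a(0), ..., a(d-1)

# Taylor coefficients of 1/P^A(x) are the partition counts L_w(n).
series = sp.series(1/PA, x, 0, nu + 1).removeO()
Lw = [series.coeff(x, n) for n in range(nu + 1)]

M = sp.Matrix(nu, nu, lambda i, j: a[j - i] if j >= i else 0)      # Prop. 97
Minv = sp.Matrix(nu, nu, lambda i, j: Lw[j - i] if j >= i else 0)  # Prop. 98
assert M * Minv == sp.eye(nu)
print("M      =", M.tolist())
print("M^{-1} =", Minv.tolist())
```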
### Meijer $G$-function Following [@Luk Section 5.2], for an arbitrary $p$-tuple $(a_1, \ldots, a_p)\in \mathbb{R}^p$ and an arbitrary $q$-tuple $(b_1, \ldots, b_q)\in \mathbb{R}^q,$ such that $$\{a_k-b_j\mid 1\leq k\leq p, 1\leq j\leq q\}\cap {\mathbb Z}_+=\emptyset,$$ we define a Meijer $G$-function by the integral $$\label{meijer-def} G_{p,q}^{m,n} \left( \begin{array}{l} a_1, \ldots, a_p\\ b_1, \ldots, b_q \end{array} ; x \right) ={1\over 2\pi\sqrt{-1}} \int_L{\prod\limits_{j=1}^{m}\Gamma(b_j-s)\prod\limits_{j=1}^{n}\Gamma(1-a_j+s)\over\prod\limits_{j=m+1}^{q}\Gamma(1-b_j+s)\prod\limits_{j=n+1}^{p}\Gamma(a_j-s)}x^s ds,$$ where $L$ is a path going from $-\sqrt{-1}\infty$ to $\sqrt{-1}\infty$ such that all poles of $\Gamma(b_j-s)$, $j=1, 2, \ldots, m$, lie to the right of the path, and all poles of $\Gamma(1-a_k+s)$, $k=1,2,\ldots, n,$ lie to the left of the path. The Meijer $G$-function in [\[meijer-def\]](#meijer-def){reference-type="eqref" reference="meijer-def"} is a solution of the linear differential equation (see [@Luk Section 5.8 (1)]) $$\label{diff-eqn-meijer} \Big(\prod_{j=1}^{q}(\vartheta_x-b_j)+(-1)^{p+1-m-n}\,x\prod_{i=1}^{p}(\vartheta_x+1-a_i)\Big) f(x) =0.$$ ### Solutions of the generalized hypergeometric equation  [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} via Meijer $G$-functions We can evaluate the integral appearing in [\[meijer-def\]](#meijer-def){reference-type="eqref" reference="meijer-def"} as a sum of residues by deforming the path to a loop beginning and ending at $+\infty$ and encircling all poles of $\Gamma(b_j-s)$, $j=1, 2, \ldots, m$, once in the negative direction, but none of the poles of $\Gamma(1-a_k+s)$, $k=1,2,\ldots, n.$ The residue expressions of the Meijer $G$-function are generalized hypergeometric functions [@Luk Section 5.2]. Since we are only interested in quasi-homogeneous polynomials of general type in this paper, we consider the cases when $p\leq q+1$ and no two of the $b_j$'s differ by an integer. Let $\{\alpha_i\mid 1\leq i\leq p\}$ and $\{\rho_j\mid 1\leq j\leq q\}$ be the sets of rational numbers defined in [\[alpha-tuple\]](#alpha-tuple){reference-type="eqref" reference="alpha-tuple"} and [\[rho-tuple\]](#rho-tuple){reference-type="eqref" reference="rho-tuple"}. We set $$\label{replace-index} \begin{dcases} a_i:=1-\alpha_i, & i=1,2,\ldots, p; \\ b_j:=1-\rho_j, & j=0,1,\ldots, q.
\end{dcases}$$ According to [@Luk Section 5.2 (14)], when $|x|<1$, the generalized hypergeometric function is the residue expression of a Meijer $G$-function $$\label{ghe-meijer} _pF_q\left( \begin{array}{l} \alpha_1, \ldots, \alpha_p\\ \rho_1,\ldots, \rho_q \end{array}; x\right)= {\prod\limits_{j=1}^{q}\Gamma(\rho_j)\over \prod\limits_{j=1}^{p}\Gamma(\alpha_j)} G_{p,q+1}^{1,p}\left( \begin{array}{l} 1-\alpha_1, \ldots, 1-\alpha_p\\ 0, 1-\rho_1,\ldots, 1-\rho_q \end{array}; -x\right).$$ We may also rewrite the equation [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} as $$\label{ghe-meijer-form} \Big(\prod_{i=0}^{q}(\vartheta_x-b_j)-x\prod_{j=1}^{p}(\vartheta_x+1-a_i)\Big) f(x) =0.$$ ### Asymptotic expansions of Meijer $G$-functions near $w=\infty$ Following [@Luk Section 5.7] and [@Fie], now we investigate two special types of Meijer $G$-functions for the fixed tuples $(a_1, \ldots, a_p)$ and $(b_0, b_1, \ldots, b_q)$, $$\label{asymptotic-solution-meijer} G(w)= G_{p,q+1}^{q+1,0} \left( \begin{array}{l} a_1, \ldots, a_p\\ b_0, b_1,\ldots, b_q \end{array} ; w \right) %={1\over 2\pi\sqrt{-1}}\int_L{\prod\limits_{j=0}^{q}\Gamma(b_j-s)\over\prod\limits_{j=1}^{p}\Gamma(a_j-s)}x^s ds,$$ and $$\label{algebraic-solution-meijer} L_j(w)=G_{p, q+1}^{q+1, 1} \left( \begin{array}{l} a_j, a_1, \ldots, a_{j-1}, a_{j+1}, \ldots, a_p\\ b_0, b_1,\ldots, b_q \end{array} ; w \right), \quad j=1,\ldots, p.$$ The asymptotic behavior of these functions in [\[asymptotic-solution-meijer\]](#asymptotic-solution-meijer){reference-type="eqref" reference="asymptotic-solution-meijer"} and [\[algebraic-solution-meijer\]](#algebraic-solution-meijer){reference-type="eqref" reference="algebraic-solution-meijer"} near $w=\infty$ has been studied by Barnes [@Bar] and Meijer [@Mei]. Recall that $K_{p,q}(x)$ and $L_{p,q}^{(j)}(x)$ are defined in [\[exponential-term\]](#exponential-term){reference-type="eqref" reference="exponential-term"} and [\[t-th-algebraic-term\]](#t-th-algebraic-term){reference-type="eqref" reference="t-th-algebraic-term"}. Following [@Luk Theorem 5, Theorem 1] and [@Fie Theorem 2], we summarize the asymptotic results as below: 1. If $|\arg(w)|<(\nu+1-{1\over 2}\delta_1^\nu)\pi$, then as $|w|\to \infty$, $$\label{asymptotic-exponential} G(w)={1\over 2\pi\sqrt{-1}} \int_L{\prod\limits_{j=0}^{q}\Gamma(b_j-s)\over\prod\limits_{j=1}^{p}\Gamma(a_j-s)}w^s ds \sim K_{p,q}(w).$$ 2. If $|\arg(w)|< ({\nu\over 2}+1)\pi,$ then as $|w|\to \infty$, $$\label{asymptotic-algebraic} L_j(w)\sim L_{p,q}^{(j)}(w).$$ We make a remark that the asymptotic expansion formula [\[asymptotic-exponential\]](#asymptotic-exponential){reference-type="eqref" reference="asymptotic-exponential"} is consistent with Theorem [\[Barnes-theorem\]](#Barnes-theorem){reference-type="ref" reference="Barnes-theorem"}. In fact, Barnes' formula is obtained by writing the linear combinations in [\[Barnes-formula\]](#Barnes-formula){reference-type="eqref" reference="Barnes-formula"} and in Corollary [Corollary 73](#barnes-corollary){reference-type="ref" reference="barnes-corollary"} as specific Meijer $G$-functions. The other formula [\[asymptotic-algebraic\]](#asymptotic-algebraic){reference-type="eqref" reference="asymptotic-algebraic"} is obtained similarly. 
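As a quick numerical sanity check of the representation [\[ghe-meijer\]](#ghe-meijer){reference-type="eqref" reference="ghe-meijer"} (our own illustration; the parameter values are an arbitrary sample and we rely on mpmath's conventions for `hyper` and `meijerg`), one can compare the two sides for $|x|<1$:

```python
# Numerical check of formula (ghe-meijer): for |x| < 1,
#   pFq(alpha; rho; x) = (prod Gamma(rho) / prod Gamma(alpha))
#                        * G^{1,p}_{p,q+1}(1-alpha; 0, 1-rho | -x).
# Illustrative only; the parameters below are a sample choice.
from mpmath import mp, hyper, meijerg, gamma, fprod

mp.dps = 30
alpha = [mp.mpf(1)/3, mp.mpf(2)/3]     # upper parameters alpha_1, ..., alpha_p
rho = [mp.mpf(1)/2]                    # lower parameters rho_1, ..., rho_q
x = mp.mpf('0.37')

lhs = hyper(alpha, rho, x)
pref = fprod([gamma(r) for r in rho]) / fprod([gamma(a) for a in alpha])
# G^{1,p}_{p,q+1}: m = 1, n = p; a-parameters 1-alpha_j, b-parameters 0, 1-rho_k.
rhs = pref * meijerg([[1 - a for a in alpha], []],
                     [[0], [1 - r for r in rho]],
                     -x)
print(lhs, rhs)   # the two values should agree to working precision
```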
For the equation [\[ghe-meijer-form\]](#ghe-meijer-form){reference-type="eqref" reference="ghe-meijer-form"}, we already discussed a basis of generalized hypergeometric solutions $\{f_j(x)\}$ near $x=0$ in [\[ghe-standard\]](#ghe-standard){reference-type="eqref" reference="ghe-standard"}. Using the asymptotic expansions of the Meijer $G$-functions in [\[asymptotic-exponential\]](#asymptotic-exponential){reference-type="eqref" reference="asymptotic-exponential"} and [\[asymptotic-algebraic\]](#asymptotic-algebraic){reference-type="eqref" reference="asymptotic-algebraic"}, one can form a basis of solutions of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} near $x=\infty$. The following result is stated in [@Luk Section 5.8, Case 1]. **Proposition 99**. For any value of $\arg(x)$, there exist integers $\lambda$ and $\omega$ such that $$\begin{dcases} |\arg(x)+(\nu-2\lambda+1)\pi|<({\nu\over 2}+1)\pi,\\ |\arg(x)+(\nu-2\psi)\pi|<(\nu+1-{1\over 2}\delta_1^{\nu})\pi, \quad \psi=\omega, \omega+1, \ldots, \omega+\nu-1. \end{dcases}$$ If $\alpha_i-\alpha_j\notin \mathbb{Z}$ for any $i\neq j$, then the set $$\left\{ L_j(x e^{\pi\sqrt{-1}(\nu+1-2\lambda)}), G(x e^{\pi\sqrt{-1}(\nu-2\psi)})\mid 1\leq j\leq p,\ \psi=\omega, \omega+1, \ldots, \omega+\nu-1 \right\}$$ forms a fundamental system of solutions of the equation [\[ghe-meijer-form\]](#ghe-meijer-form){reference-type="eqref" reference="ghe-meijer-form"} near $x=\infty$. Otherwise, a fundamental system of solutions can be obtained by replacing certain $L_j$'s using the limit processes in [@Luk Section 5.1]. The following result is a consequence of the asymptotic expansion formulas (see the discussion after [@Fie Theorem 2]). **Proposition 100**. For any pair of integers $(r, s)$, the functions $$\label{meijer-basis-ghe} \left\{ L_j(ze^{\pi\sqrt{-1}(\nu+1-2r)}), G(ze^{\pi\sqrt{-1}(\nu+2-2s-2h)})\mid 1\leq j\leq p, 1\leq h\leq \nu \right\}$$ form a basis of solutions of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} in the sector $|\arg(z)|<???$ *Remark 101*. The extra factor $e^{\pi\sqrt{-1}\nu}$ in the basis elements of [\[meijer-basis-ghe\]](#meijer-basis-ghe){reference-type="eqref" reference="meijer-basis-ghe"} is important when we match the two expansions from Luke and Fields. We see that the Meijer functions are multi-valued functions and the asymptotic expansions [\[asymptotic-exponential\]](#asymptotic-exponential){reference-type="eqref" reference="asymptotic-exponential"} and [\[asymptotic-algebraic\]](#asymptotic-algebraic){reference-type="eqref" reference="asymptotic-algebraic"} depend on the choice of sectors. The transformation of Meijer $G$-functions in [\[asymptotic-solution-meijer\]](#asymptotic-solution-meijer){reference-type="eqref" reference="asymptotic-solution-meijer"} and [\[algebraic-solution-meijer\]](#algebraic-solution-meijer){reference-type="eqref" reference="algebraic-solution-meijer"} between different sectors has been well studied in the literature [@Mei; @Luk; @Fie]. Such transformation laws provide a powerful tool to describe the Stokes phenomenon of the ODE system, generalizing the seminal work of Stokes [@Sto]. ### A partial fraction decomposition We now introduce a transformation law of Meijer $G$-functions using the method described in [@Fie Proposition 2].
We first consider the partial fraction decomposition of the rational function $$\label{pfd-rational} {Q(x)\over P(x)} = {\prod\limits_{j=0}^{q}\left(x-e^{2\pi\sqrt{-1} (\rho_j-{1\over d})}\right) \over \prod\limits_{j=1}^{p}\left(x-e^{2\pi\sqrt{-1} (\alpha_j-{1\over d})}\right)}.$$ Since the polynomial $x-e^{2\pi\sqrt{-1}(\alpha_j-{1\over d})}$ is irreducible in ${\mathbb C}[x]$ and only depends on the value of $\alpha_j$ modulo ${\mathbb Z}$, we can rewrite the denominator of the rational function [\[pfd-rational\]](#pfd-rational){reference-type="eqref" reference="pfd-rational"} as a product of powers of distinct irreducible polynomials, $$P(x)=\prod\limits_{i=1}^{k}(x-p_i)^{n_i}, \quad p_i\in \{e^{2\pi\sqrt{-1}(\alpha_j-{1\over d})}\mid j=1, \ldots, p\},$$ where $n_i$ is the multiplicity of $p_i$ and $k$ is the number of distinct $p_i$'s. A well-known result on partial fraction decompositions says **Proposition 102**. There are unique coefficients $$\{d_h\in {\mathbb C}\mid h=0, 1, \ldots, \nu\}\cup\{c_{i,j}\in {\mathbb C}\mid i=1, \ldots k; 1\leq j\leq n_i\}$$ such that the following fraction decomposition holds. $$\label{classical-pfd} {Q(x)\over P(x)}=\sum_{h=0}^{\nu}d_h x^h+\sum_{i=1}^{k}\sum_{r=1}^{n_i}{c_{i,r}\over (x-p_i)^{r}}.$$ Recall that $L_{\vec{w}}(n)$ is defined in [\[weight-product-inverse-polynomial\]](#weight-product-inverse-polynomial){reference-type="eqref" reference="weight-product-inverse-polynomial"}. We have **Corollary 103**. The partial fraction decomposition [\[classical-pfd\]](#classical-pfd){reference-type="eqref" reference="classical-pfd"} can be written as $$\label{simple-D_h-formula} {Q(x)\over P(x)}=\sum_{h=1}^{\nu}d_h x^h -(-1)^{N} +x\sum_{i=1}^{k}{b_i(x)\over (x-p_i)^{n_i}}$$ where $b_i(x)$ is a polynomial determined by $$\label{pfd-linear} xb_i(x)=\sum_{r=1}^{n_i}c_{i,r}(x-p_i)^{n_i-r}\left(1-(1-{x\over p_i})^r\right)$$ and $$\label{stokes-exponential} d_h=L_{\vec{w}}(\nu-h)=a^{1, \nu-h+1}, \quad \forall \quad h=1, \ldots, \nu.$$ *Proof.* Using [\[difference-alpha-rho\]](#difference-alpha-rho){reference-type="eqref" reference="difference-alpha-rho"}, we obtain $${Q(0)\over P(0)}=-(-1)^{N}.$$ We evaluate the equation [\[classical-pfd\]](#classical-pfd){reference-type="eqref" reference="classical-pfd"} at $x=0$ and obtain $$-(-1)^N=d_0+\sum_{i=1}^{k}\sum_{r=1}^{n_i}{c_{i,r}\over (-p_i)^{r}}.$$ Using this equality, we obtain the formula [\[pfd-linear\]](#pfd-linear){reference-type="eqref" reference="pfd-linear"} by taking the difference of [\[classical-pfd\]](#classical-pfd){reference-type="eqref" reference="classical-pfd"} and [\[simple-D_h-formula\]](#simple-D_h-formula){reference-type="eqref" reference="simple-D_h-formula"}. Next we multiply the equation [\[simple-D_h-formula\]](#simple-D_h-formula){reference-type="eqref" reference="simple-D_h-formula"} by $P^A(x)=\prod_{j=1}^{N}(1-x^{w_j})$.
Using the definitions of the $\rho_k$'s and $\alpha_j$'s, we obtain $$\label{type-cd-stokes-coefficients} (-1)^N(x^d-1)=P^A(x)\left(\sum_{h=1}^{\nu}d_h x^h-(-1)^N\right)+x\sum_{i=1}^{k} b_i(x) P^A_i(x),$$ where $P^A_i(x)$ is the polynomial given by $$P^A_i(x)={P^A(x)\over (x-p_i)^{n_i}}.$$ We have $$\deg (x b_i(x) P^A_i(x))=\deg P^A(x)=d-\nu.$$ By Lemma [Lemma 95](#symmetry-coefficients){reference-type="ref" reference="symmetry-coefficients"}, we have $$P^A(x)=\sum_{n=0}^{d-\nu} (-1)^Na(n) x^{d-\nu-n}.$$ Now we compare the coefficients of $x^{d-j}$ in [\[type-cd-stokes-coefficients\]](#type-cd-stokes-coefficients){reference-type="eqref" reference="type-cd-stokes-coefficients"} for $j=0, 1, \ldots, \nu-1$ and obtain $\nu$ equalities: $$\label{recusion-d-type} \sum_{i=0}^{j} a(i) d_{\nu-j+i}=\delta_j^0, \quad j=0, 1,\ldots, \nu-1.$$ Since $a(0)=1$, these $\nu$ equalities determine $\{d_h\mid h=\nu, \nu-1, \ldots, 1\}$ recursively. On the other hand, the definitions of [\[weight-product-polynomial\]](#weight-product-polynomial){reference-type="eqref" reference="weight-product-polynomial"} and [\[weight-product-inverse-polynomial\]](#weight-product-inverse-polynomial){reference-type="eqref" reference="weight-product-inverse-polynomial"} imply that $$1=\left(\prod_{j=1}^N(1-x^{w_j})\right)\left(\prod_{j=1}^{N}(1-x^{w_j})^{-1}\right)=\left(\sum_{n=0}^{d-\nu}a(n)x^n\right)\left(\sum_{n=0}^{\infty}L_{\vec{w}}(n)x^n\right).$$ So the sequence $\{L_{\vec{w}}(\nu-h)\mid h=\nu, \ldots, 1\}$ satisfies the same recursive formula [\[recusion-d-type\]](#recusion-d-type){reference-type="eqref" reference="recusion-d-type"} as $\{d_h \mid h=\nu, \ldots, 1\}$. Now the rest of the result follows from Proposition [Proposition 98](#inverse-gram-formula){reference-type="ref" reference="inverse-gram-formula"}. ◻ *Remark 104*. The polynomial $b_i(x)$ in [\[pfd-linear\]](#pfd-linear){reference-type="eqref" reference="pfd-linear"} has another expression $$b_i(x)=\sum_{r=1}^{n_i}c_{i,r}(x-p_i)^{n_i-r}\sum_{m=0}^{r-1}(-1)^m{r\choose m+1}p_i^{-m-1} x^m\in {\mathbb C}[x].$$ ### Transformation laws of Meijer $G$-functions and Stokes coefficients Now we use the partial fraction decomposition formula [\[simple-D_h-formula\]](#simple-D_h-formula){reference-type="eqref" reference="simple-D_h-formula"} to derive a transformation law of Meijer $G$-functions. We set $x:=e^{-2\pi\sqrt{-1}({1\over d}+s)}$, then multiply the equation [\[simple-D_h-formula\]](#simple-D_h-formula){reference-type="eqref" reference="simple-D_h-formula"} by $$w^s{ \prod\limits_{j=0}^{q}\Gamma(1-\rho_j-s) \over \prod\limits_{j=1}^{p}\Gamma(1-\alpha_j-s) }$$ and then integrate along the designated contour $L$ in [\[meijer-def\]](#meijer-def){reference-type="eqref" reference="meijer-def"}. The new equation will imply our transformation law. **Proposition 105**. The Meijer $G$-functions satisfy $$\label{transformation-exponential-type} (-1)^N G(w)=\sum_{h=1}^{\nu}d_h e^{-2\pi\sqrt{-1}{h\over d}}G(we^{-2\pi\sqrt{-1}h}) -\sum_{i=1}^{k}{c_{i,1}e^{-\pi\sqrt{-1}\alpha_j}\over 2\pi\sqrt{-1} p_i}L_j(we^{-\pi\sqrt{-1}}).$$ *Proof.* First, let us consider the LHS of the new equation.
Applying the *Euler reflection formula* $$\Gamma(y)\Gamma(1-y)={-2\pi\sqrt{-1}e^{\pi\sqrt{-1}y}\over (1-e^{2\pi\sqrt{-1}y})},$$ we obtain $$\begin{split} {Q(x)\over P(x)} &=(-2\pi\sqrt{-1}x)^{\nu} e^{\pi\sqrt{-1}\left(\sum_{j=0}^{q}(\rho_j+s)-\sum_{j=1}^{p}(\alpha_j+s)\right)} {\prod\limits_{j=1}^{p}\Gamma(\alpha_j+s)\Gamma(1-\alpha_j-s) \over \prod\limits_{j=0}^{q}\Gamma(\rho_j+s)\Gamma(1-\rho_j-s) }. \end{split}$$ So the LHS of the new equation is a scalar multiple of the Meijer $G$-function $$G_{p,q+1}^{0,p} \left( \begin{array}{l} 1-\alpha_1, \ldots, 1-\alpha_p\\ 1-\rho_0,\ldots, 1-\rho_q \end{array} ; w e^{-\pi\sqrt{-1}\nu}\right).$$ This Meijer $G$-function vanishes, as no poles of $\Gamma(1-\rho_j-s)$, $j=1,\ldots, m$, lie to the right of the contour [@Luk Section 5.2 (8)]. Secondly, the contribution from the term $\sum_{h=1}^{\nu}d_h x^h -(-1)^{N}$ is $$\sum_{h=1}^{\nu}d_h e^{-2\pi\sqrt{-1}{h\over d}}G(we^{-2\pi\sqrt{-1}h})-(-1)^N G(w).$$ Finally, let us consider the contribution from the term ${x^n\over (x-p_i)^r}$ for $1\leq r\leq n_i$. Recall that $p_i=e^{2\pi\sqrt{-1}(\alpha_j-{1\over d})}$ for some $j$. If $r\geq2$, then the integrand has poles of $\Gamma(\alpha_j+s)$, of order $r$, at each $s=-\alpha_j-k$, $k=0, 1, \ldots,$ lying to the left of the contour, so the contribution vanishes. The only possibly non-vanishing contribution comes from the cases when $r=1$. According to Remark [Remark 104](#linear-term-algebraic){reference-type="ref" reference="linear-term-algebraic"}, the contribution is $$\sum_{i=1}^{k}-{c_{i,1}\over p_i}{e^{-\pi\sqrt{-1}\alpha_j}\over 2\pi\sqrt{-1}}L_j(we^{-\pi\sqrt{-1}}).$$ This completes the proof. ◻ The coefficients $\{d_h\}$ in the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} will be called *Stokes coefficients of exponential type*. *Remark 106*. If $\alpha_i-\alpha_j\notin \mathbb{Z}$ for any $i\neq j$, then the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} is the formula (2.8) in [@Fie] in the case of $k=1$: $$-{\prod\limits_{k=0}^{q}(1-y e^{-2\pi\sqrt{-1}(\rho_k-{1\over d})})\over \prod\limits_{j=1}^{p}(1-y e^{-2\pi\sqrt{-1}(\alpha_j-{1\over d})})} =\sum_{j=1}^{p}{c_j y\over 1-y e^{-2\pi\sqrt{-1}(\alpha_j-{1\over d})}}-1+\sum_{h=1}^{\nu}(-1)^N d_h y^h.$$ **Corollary 107**.
If we are in the situation ???, then the coefficients $\{d_h\}$ satisfy $$d_h=L_{\vec{w}}(\nu-h)=a^{1, \nu-h+1}, \quad \forall \quad h=1, \ldots, \nu.$$ By the definitions in [\[replace-index\]](#replace-index){reference-type="eqref" reference="replace-index"}, we set $$\label{B1A1} B_1:=\sum_{k=0}^{q} b_k=q+1-\sum_{k=0}^{q}\rho_k, \quad A_1:=\sum_{j=1}^{p} a_j=p-\sum_{j=1}^{p}\alpha_j.$$ Following [@Fie Formula (2.4)], we define $$C_j(1)={e^{\pi\sqrt{-1}a_j}\over 2\pi\sqrt{-1}}{\prod\limits_{k=0}^{q}(1-e^{2\pi\sqrt{-1}(b_k-a_j)})\over \prod\limits_{k\neq j}(1-e^{2\pi\sqrt{-1}(a_k-a_j)})}, \quad D_{\nu}(1)=(-1)^{\nu+1}e^{2\pi\sqrt{-1}(B_1-A_1)},$$ and the other coefficients $\{D_{h}(1)\mid h=1,2,\ldots, \nu-1\}$ are determined by $$\label{D_h-formula} \begin{split} -&(-2\pi\sqrt{-1})^{\nu}e^{\pi\sqrt{-1}(B_1-A_1-\nu t)}{\prod\limits_{j=1}^{p}\Gamma(1-\alpha_j-t)\Gamma(\alpha_j+t)\over \prod\limits_{k=0}^{q}\Gamma(1-\rho_k-t)\Gamma(\rho_k+t)}\\ &=\sum_{j=1}^{p}C_j(1)e^{-\pi\sqrt{-1}t}\Gamma(1-\alpha_j-t)\Gamma(\alpha_j+t)+\left(-1+\sum_{h=1}^{\nu}D_h(1)e^{-2\pi\sqrt{-1}ht}\right). \end{split}$$ According to [@Fie Proposition 2], these functions satisfy the relations $$\begin{aligned} L_{j}(w)&=&e^{2\pi\sqrt{-1}a_j}L_j(we^{-2\pi\sqrt{-1}})-2\pi\sqrt{-1}e^{\pi\sqrt{-1}a_j}G(we^{-\pi\sqrt{-1}}), \label{fields-relation-L}\\ G(w)&=&\sum_{j=1}^{p}C_j(1)L_{j}(we^{\pi\sqrt{-1}(1-2l)})+\sum_{h=1}^{\nu}D_h(1)G(we^{-2\pi\sqrt{-1}h}), \label{fields-relation-G}\end{aligned}$$ Similar formulas can be found also in [@Luk] and [@DM]. We notice that the condition (1.1) in [@Fie Proposition 2] is not satisfied in general for the FJRW theories we consider, as we may have $a_i=a_j$. ### Stokes coefficients Now we adjust the coefficients appropriately: $$\label{change-of-stokes-coefficients} \begin{dcases} d_h:=(-1)^Ne^{2\pi\sqrt{-1}{h\over d}}D_h(1);\\ c_j:=2\pi\sqrt{-1}e^{-\pi\sqrt{-1}(\alpha_j-{2\over d})}C_j(1). \end{dcases}$$ In particular, by [\[difference-alpha-rho\]](#difference-alpha-rho){reference-type="eqref" reference="difference-alpha-rho"}, we have $d_\nu=1.$ We define ${\mathfrak G}_h(w)$ and ${\mathfrak L}_{j,k}(w)$ by $$\begin{dcases} {\mathfrak G}_h(w):=(-1)^N\cdot e^{-2\pi\sqrt{-1}{h\over d}}\cdot G(w e^{-2\pi\sqrt{-1}});\\ {\mathfrak L}_{j,k}(w):=??? \end{dcases}$$ By Proposition [Proposition 100](#meijer-basis-ode){reference-type="ref" reference="meijer-basis-ode"}, the functions $\{{\mathfrak G}_h\}$ and $\{{\mathfrak L}_{j,k}\}$ form bases of solutions of [\[ghe-equation\]](#ghe-equation){reference-type="eqref" reference="ghe-equation"} in adjacent sectors. According to  [\[fields-relation-L\]](#fields-relation-L){reference-type="eqref" reference="fields-relation-L"} and [\[fields-relation-G\]](#fields-relation-G){reference-type="eqref" reference="fields-relation-G"}, the transformations between these bases are given by $$\label{stokes-transformation} \begin{dcases} (-1)^N G_0(w)&=\sum_{j=1}^{p}c_jL_{j}(1)+\sum_{h=1}^{\nu}d_h G_h(w);\\ L_j(k)&=L_{j}(k+1)+e_kG_k. \end{dcases}$$ We call the coefficients $c_j, d_h, e_k$ the *Stokes coefficients* of the system. We focus on the $d$-type Stokes coefficients here. As a consequence of Proposition [Proposition 98](#inverse-gram-formula){reference-type="ref" reference="inverse-gram-formula"} and the formula [\[stokes-exponential\]](#stokes-exponential){reference-type="eqref" reference="stokes-exponential"}, we have **Proposition 108**.
The Gram matrix $M_\ell$ of the exceptional collection $\big({\mathbb C}(\ell)^{\rm st}, {\mathbb C}(\ell-1)^{\rm st}, \ldots, {\mathbb C}(\ell-\nu+1)^{\rm st}\big)$ is determined by the Stokes coefficients via $$\label{inverse-gram-stokes} M_\ell^{-1}= \begin{bmatrix} 1& d_{\nu-1}& \cdots &d_2 & d_1\\ &1& \cdots&d_3&d_2\\ &&\ddots&\vdots&\vdots\\ &&&1&d_{\nu-1}\\ &&&&1 \end{bmatrix}_{\nu\times\nu}.$$ ### Stokes matrices for $r$-spin curves The Stokes coefficients in the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} can be used to calculate the *Stokes matrix* [@Dub] of the underlying Dubrovin-Frobenius manifold. If $N=1$ and $d=r$, this is the $r$-spin case $W=x^r$. The Stokes matrix was first calculated in [@CV]. The coefficients in the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"} determine the Stokes matrix completely. The calculation of the Stokes matrix using the original definition in [@Dub] is much more involved. Once we replace formula (16) in [@Guz Lemma 5] by the transformation law [\[transformation-exponential-type\]](#transformation-exponential-type){reference-type="eqref" reference="transformation-exponential-type"}, we can obtain the calculation for the $r$-spin case almost verbatim, as for the projective space $\mathbb{P}^{r-2}$ case in [@Guz]. Similarly to [@Guz Theorem 2], both the matrices $M_\ell$ and $M_{\ell}^{-1}$ can be realized as Stokes matrices, and they are equivalent to each other under a certain braid group action. Moreover, the symmetric matrix $M_\ell+M_{\ell}^T$ is the Cartan matrix of type $A_{r-1}$ [@CV]. The calculation of Stokes matrices for general cases is much more complicated. We plan to discuss the general situations in the future. *Proof.* Applying the Euler reflection formula [\[euler-reflection\]](#euler-reflection){reference-type="eqref" reference="euler-reflection"} to the equation [\[D_h-formula\]](#D_h-formula){reference-type="eqref" reference="D_h-formula"}, and using the formulas in [\[change-of-stokes-coefficients\]](#change-of-stokes-coefficients){reference-type="eqref" reference="change-of-stokes-coefficients"}, we obtain $$-{\prod\limits_{k=0}^{q}(1-e^{-2\pi\sqrt{-1}(\rho_k+t)})\over \prod\limits_{j=1}^{p}(1-e^{-2\pi\sqrt{-1}(\alpha_j+t)})} =\sum_{j=1}^{p}{c_j e^{-2\pi\sqrt{-1}({1\over d}+t)}\over 1-e^{-2\pi\sqrt{-1}(\alpha_j+t)}}-1+\sum_{h=1}^{\nu}(-1)^N d_h e^{-2\pi\sqrt{-1}h({1\over d}+t)}.$$ ◻ The Stokes phenomenon can be calculated using the transformation formula [\[stokes-transformation\]](#stokes-transformation){reference-type="eqref" reference="stokes-transformation"}. For simplicity, we give an example of how it works in the case of $(W=x^d, \langle J\rangle).$ According to [@FJR], the FJRW theory of this pair matches the Saito-Givental theory of the $d$-spin singularity. The Stokes matrix of the $d$-spin singularity has been calculated in the literature (see [@AT] and earlier work). Here we provide an alternative proof, following the method used in [@Guz; @CDG]. The Dubrovin-Frobenius manifold corresponding to the FJRW theory of $(W=x^d, \langle J\rangle)$ is semisimple. We only need to calculate the braid group action and show **Proposition 109**.
The inverse Gram matrix [\[inverse-gram-stokes\]](#inverse-gram-stokes){reference-type="eqref" reference="inverse-gram-stokes"} is a Stokes matrix corresponding to a full exceptional collection ???. # Evidence of the quantum spectrum conjecture [Conjecture 4](#conjecture-C){reference-type="ref" reference="conjecture-C"} {#sec-evidence} ## Quantum spectrum of mirror simple singularities {#sec-mirror-simple} Now we prove that the quantum spectrum conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds if the polynomial $W$ is of one of the forms below: 1. $A_{n-1}$ singularity $W=x_1^{n}$, $n\geq 2.$ 2. $D^T_{n+1}$-singularity $W=x_1^nx_2+x_2^2$, $n\geq 3.$ 3. $E$-type simple singularities: $W=x_1^3+x_2^4,$ or $x_1^3+x_2^5$, or $x_1^3x_2+x_2^3$. Here the transpose mirror polynomial $W^T$ is a simple singularity of $ADE$-type. We will calculate the quantum multiplication $\tau'\star_\tau$ explicitly in each case. *Remark 110*. For all the cases above, the quantum spectrum conjectures [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} and [Conjecture 36](#quantum-spectrum-conj-invariant){reference-type="ref" reference="quantum-spectrum-conj-invariant"} are equivalent because $\langle J\rangle=G_W$ always holds. Recall that $\tau(t)$ is the element defined by the $I$-function via [\[def-tau\]](#def-tau){reference-type="eqref" reference="def-tau"}. We denote $$\label{power-of-x} (\tau'(t))^{j}:=\underbrace{\tau'(t)\star_{\tau(t)} \tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{j\text{-copies}}.$$ ### Quantum spectrum of $A_{n-1}$-singularities We start with the $n$-spin singularity $$W=x_1^n, \quad n\geq 2.$$ The weight system is $(d; w_1)=(n;1)$ and the index is $\nu=n-1$. The state space ${\mathcal H }_{W, \langle J\rangle}$ has a basis $\{{\bf e}_1, {\bf e}_2, \ldots, {\bf e}_{n-1}\}$, with the dual basis $\{{\bf e}_{n-1}, {\bf e}_{n-2}, \ldots, {\bf e}_{1}\}.$ There are two situations. If $n=2$, we have $\tau=\tau'={{\bf e}_1\over 4}$, and $${\nu\over n}\tau'\star_{\tau}{\bf e}_1={1\over 8} {\bf e}_1.$$ So Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $\{{1\over 8}\}$. If $n\geq 3$, we have $\tau=\tau'={\bf e}_2.$ Let us calculate ${\bf e}_2\star_{{\bf e}_2}.$ **Lemma 111**. *For each ${\bf e}_i\in {\mathcal H }_{W, \langle J\rangle}$, we have the quantum multiplication $$\begin{split} {\bf e}_2\star_{{\bf e}_2}{\bf e}_i&=\sum_{j=1}^{n-1}\sum_{k=0}^{\infty}{1\over k!}\Big\langle{\bf e}_2, {\bf e}_i, {\bf e}_j, {\bf e}_2, \ldots, {\bf e}_2\Big\rangle_{0, k+3}{\bf e}_{n-j} =\begin{dcases} {\bf e}_{i+1}, & \text{ if } i<n-1;\\ {1\over n} {\bf e}_1, & \text{ if } i=n-1. \end{dcases} \end{split}$$* *Proof.* By the degree constraint [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"}, if $\Big\langle{\bf e}_2, {\bf e}_i, {\bf e}_j, {\bf e}_2, \ldots, {\bf e}_2\Big\rangle_{0, k+3}\neq0$, we will have $(n-1)k=i+j+1-n\leq n-1$. So if $k=0$, then $n-j=i+1$; if $k=1$, then $i=j=n-1$. Now the result follows from the computation of the following FJRW invariants $$\begin{dcases} \Big\langle{\bf e}_2, {\bf e}_i, {\bf e}_{n-1-i}\Big\rangle_{0,3}=1; \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{n-1}, {\bf e}_{n-1}\Big\rangle_{0,4}={1\over n}.
\end{dcases}$$ Both invariants are obtained by the *concavity axiom* in [@FJR Theorem 4.1.8]. ◻ We summarize the result as follows. **Proposition 112**. For the LG pair $(W=x_1^n, \langle J\rangle)$ with $n\geq 3$, we have 1. The set $\{{\bf e}_1, \tau', \ldots, (\tau')^{n-2}\}$ is a basis of ${\mathcal H }_{W, \langle J\rangle}$. 2. There is a quantum relation $$\label{A-relation} (\tau')^{n-1}={1\over n}{\bf e}_1.$$ 3. Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $${\mathfrak E}=\left\{{n-1\over n}\left({1\over n}\right)^{1\over n-1} e^{2\pi\sqrt{-1}j\over n-1}\mid j=0, 1, \ldots, n-2\right\}.$$ In the basis of item (1), the matrix of $\tau'\star_{\tau}$ is $$\begin{bmatrix} 0 & 0 & \cdots & 0 & {1\over n} \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots &\vdots &\ddots & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ \end{bmatrix}.$$ ### Quantum spectrum of $D^T_{n+1}$-singularities Now we consider the $D^T_{n+1}$-singularity $$W=x_1^nx_2+x_2^2, \quad n\geq 3.$$ The weight system is $(d; w_1, w_2)=(2n; 1, n)$. The index is $\nu=n-1\geq 2$. We have a basis of ${\mathcal H }_{W, \langle J\rangle}$: $$\left\{{\bf e}_{2k-1}, {\bf e}_0:=x_1^{n-1}dx_1 dx_2\vert J^0\rangle \mid 1\leq k\leq n\right\}.$$ We have $$\tau(t)={t^2\over 4}{\bf e}_3, \quad \tau=\tau(1)={1\over 4}{\bf e}_3, \quad \tau'=\tau'(1)={1\over 2}{\bf e}_3.$$ Using the degree constraint [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"}, the nontrivial contributions to ${\bf e}_3\star_{{\bf e}_3}$ are obtained from the following genus zero FJRW invariants $$\begin{dcases} \langle{\bf e}_3, {\bf e}_{2k-1},{\bf e}_{2n-2k-1}\rangle_{0,3}=1, &\text{ if } 1\leq k\leq n-1;\\ \langle{\bf e}_3, {\bf e}_3, {\bf e}_{2n-3},{\bf e}_{2n-1}\rangle_{0,4}={1\over 2n}. & \end{dcases}$$ Here the values of the FJRW invariants can be found in [@FJR Section 6.3.7]. So we see that the multiplication ${\bf e}_3\star_{{\bf e}_3}$ on the basis $$\{{\bf e}_1, {\bf e}_3, \ldots, {\bf e}_{2n-1}, {\bf e}_0\}$$ is given by the matrix $$\begin{bmatrix} 0 & 0 & \cdots & {1\over 2n}& 0 & 0 \\ 1 & 0 & \cdots & 0 & {1\over 2n} & 0 \\ 0 & 1 & \cdots & 0 & 0 & 0 \\ \vdots &\vdots &\ddots & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$ So we have quantum relations $$(\tau')^k= \begin{dcases} 2^{2-k}{\bf e}_{2k+1}, & \text{ if } 1\leq k\leq n-2;\\ 2^{1-n}\left({\bf e}_{2n-1}+{1\over 8n}{\bf e}_1\right), & \text{ if } k=n-1;\\ 2^{-2-n}{1\over n}{\bf e}_{3}, & \text{ if } k=n. \end{dcases}$$ Now these calculations imply **Proposition 113**. For the LG pair $(W=x_1^nx_2+x_2^2, \langle J\rangle)$ with $n\geq 3$, we have 1. The set $\{{\bf e}_1, \tau', \ldots, (\tau')^{n-1}, {\bf e}_0\}$ is a basis of ${\mathcal H }_{W, \langle J\rangle}$. 2. There are quantum relations $$\label{mirror-D-relation} \begin{dcases} (\tau')^n={1\over 2^{n+1} n}\tau'; \\ \tau'\star_{\tau} {\bf e}_0=0. \end{dcases}$$ 3.
Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $${\mathfrak E}=\left\{0^2, {n-1\over 2n}\left({n^n\over (2n)^{2n-(n-1)}}\right)^{1\over n-1}e^{2\pi\sqrt{-1}j\over n-1}\mid j=0, 1, \ldots, n-2\right\}.$$ ### Quantum spectrum of $D^T_{3}$-singularity $W=x_1^2x_2+x_2^2$ We have the following nonvanishing FJRW invariants $$\label{D3-invariants} \begin{dcases} \Big\langle{\bf e}_3, {\bf e}_1, {\bf e}_{1}\Big\rangle_{0,3}=1, \quad \Big\langle{\bf e}_1, {\bf e}_0, {\bf e}_0\Big\rangle_{0,3}=-{1\over 2}, \\ \Big\langle{\bf e}_3, {\bf e}_3, {\bf e}_0, {\bf e}_0\Big\rangle_{0,4}=\pm{1\over 8}, \\ \Big\langle{\bf e}_3, {\bf e}_3, {\bf e}_{3}, {\bf e}_{3}, {\bf e}_{3}\Big\rangle_{0,5}={1\over 8}. \end{dcases}$$ We have $$\tau(t)={t^2\over 4}{\bf e}_3\pm{t^4\over 128}{\bf e}_1.$$ The linear independence follows from $$\tau={1\over 4}{\bf e}_3\pm{1\over 128}{\bf e}_1, \quad \tau'={1\over 2}{\bf e}_3\pm{1\over 32}{\bf e}_1.$$ The relation [\[mirror-D-broad-vanish\]](#mirror-D-broad-vanish){reference-type="eqref" reference="mirror-D-broad-vanish"} follows from the definition of the quantum multiplication and the FJRW invariants in [\[D3-invariants\]](#D3-invariants){reference-type="eqref" reference="D3-invariants"}. ### Quantum spectrum of $E_6$-singularity We first discuss the cases when $W$ is a two-variable Fermat polynomial with coprime exponents. That is, $W=x^p+y^q$, where $p, q\geq 3$ are coprime. In this case, $\langle J\rangle=G_{W}$. The $I$-function calculation shows the set of eigenvalues is a subset of $$\{0, T\xi^j \mid j=0, \ldots, pq-p-q-1\}.$$ **Lemma 114**. *Each element in the set is an eigenvalue (of multiplicity one).* *Proof.* It is enough to show that $\{1, X, \ldots, X^{pq-p-q-1}\}$ is a basis of ${\mathcal H }_{W, \langle J\rangle}$. If we cannot show it for general cases, we just show it for $E_6$ and $E_8$. ◻ We see that the $E_6$ and $E_8$ singularities are special cases, where $W=x^3+y^4$ for the $E_6$-singularity and $W=x^3+y^5$ for the $E_8$-singularity. For the $E_6$-singularity $W=x_1^3+x_2^4$, the weight system of $(W, \langle J\rangle)$ is $(d; w_1, w_2)=(12; 4, 3)$ and the index is $\nu=5$. The set $$\{{\bf e}_1, {\bf e}_2, {\bf e}_5, {\bf e}_7, {\bf e}_{10}, {\bf e}_{11}\}$$ is a basis of the state space ${\mathcal H }_{W, \langle J\rangle}$. We have $$\tau(t)=t{\bf e}_2, \quad \tau=\tau'={\bf e}_2.$$ Using the degree constraint [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"}, the nonzero FJRW invariants involved in the quantum multiplication ${\nu\over d}\tau'\star_{\tau}$ are listed below: $$\begin{dcases} \Big\langle{\bf e}_2, {\bf e}_1, {\bf e}_{10}\Big\rangle_{0,3}=1; \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{5}, {\bf e}_{5}\Big\rangle_{0,4}=a={1\over 3};\\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{2}, {\bf e}_{7}\Big\rangle_{0,5}=b={1\over 6}; \quad \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{10}, {\bf e}_{11}\Big\rangle_{0,5}=c={1\over 12}. \end{dcases}$$ These invariants are calculated in [@FJR Section 6]. Using these invariants, we obtain the relations $$(\tau')^2={b\over 2}{\bf e}_5, \quad (\tau')^3={ab\over 2}{\bf e}_7, \quad (\tau')^4={ab^2\over 4}{\bf e}_2, \quad (\tau')^5={ab^2\over 4}{\bf e}_{11}+{ab^2c\over 8}{\bf e}_1, \quad (\tau')^6={ab^2c\over 4}\tau'.$$ **Proposition 115**. For the $E_6$-singularity $W=x_1^3+x_2^4$, we have: 1.
The set $\{{\bf e}_1, \tau', (\tau')^2, (\tau')^3, (\tau')^4, (\tau')^5\}$ is a basis of ${\mathcal H }_{W, \langle J\rangle}$; 2. There is a quantum relation $$(\tau')^6%={ab^2c\over 4}\tau' ={1\over 5184}\tau'={4^43^3\over 12^{12-5}}\tau'.$$ 3. Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $${\mathfrak E}=\left\{0, {5\over 12}\left({4^43^3\over 12^{12-5}}\right)^{1\over 5} e^{2\pi\sqrt{-1}j\over 5}\mid j=0, 1, 2, 3, 4\right\}.$$ ### Quantum spectrum of $E_7$-singularity {#sec-E7} For the $E_7$-singularity $W=x_1^3x_2+x_2^3,$ the weight system of $(W, \langle J\rangle)$ is $(d; w_1, w_2)=(9; 2, 3)$ and the index is $\nu=4$. The set $$\{{\bf e}_1, {\bf e}_2, {\bf e}_4, {\bf e}_5, {\bf e}_7, {\bf e}_8, {\bf e}_0\}$$ is a basis of the state space ${\mathcal H }_{W, \langle J\rangle}$. We have $$\tau(t)=t{\bf e}_2, \quad \tau=\tau'={\bf e}_2.$$ Using WDVV equation and the results in [@FJR], the nonzero FJRW invariants involved in the quantum multiplication ${\nu\over d}\tau'\star_{\tau}$ can be calculated: $$\begin{dcases} \Big\langle{\bf e}_2, {\bf e}_1, {\bf e}_{7}\Big\rangle_{0,3}=1, \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{5}\Big\rangle_{0,4}%=a ={1\over 3}, \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{2}, {\bf e}_{4}\Big\rangle_{0,5}%=b ={2\over 27}, \quad \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{7}, {\bf e}_{8}\Big\rangle_{0,5}%=c ={2\over 27}. \end{dcases}$$ To compute the quantum multiplication ${4\over 9}{\bf e}_2\star_t$, the FJRW invariants are important $$\begin{dcases} \langle{\bf e}_2, {\bf e}_2, {\bf e}_2, {\bf e}_5\rangle_{0,4}=a={1\over 3}; \\ \langle{\bf e}_2, {\bf e}_2, {\bf e}_2, {\bf e}_2, {\bf e}_4\rangle_{0,5}= \langle{\bf e}_2, {\bf e}_2, {\bf e}_2, {\bf e}_7, {\bf e}_8\rangle_{0,5}={2\over 9\times 3}. \end{dcases}$$ We remark that the vanishing of $\Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{7}, {\bf e}_{0}\Big\rangle_{0,4}$ can not be achieved using the degree constraint [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"}. The matrix of the quantum multiplication $\tau'\star_{\tau}$ with respect to the basis $$\{{\bf e}_1, {\bf e}_2, {\bf e}_4, {\bf e}_5, {\bf e}_7, {\bf e}_8, {\bf e}_0\}$$ is given by $$\begin{bmatrix} 0 & 0 & 0 & 0 & \frac{1}{27} & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & \frac{1}{27} & 0 \\ 0 & \frac{1}{3} & 0 & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{27} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{27} & \frac{1}{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}.$$ **Proposition 116**. For $E_7$-singularity $W=x_1^3x_2+x_2^3$, we have: 1. The state space ${\mathcal H }_{W, \langle J\rangle}$ has a basis $$\label{E7-basis} \{{\bf e}_1, \tau', (\tau')^2, (\tau')^3, (\tau')^4-{4\over 2187}{\bf e}_1, 9{\bf e}_4-{\bf e}_5, {\bf e}_0\}.$$ 2. There are quantum relations $$\label{E7-relation} \begin{dcases} (\tau')^5={4\over 2187}\tau'={2^2 3^3\over 9^{9-4}}\tau'; \\ \tau'\star_{\tau} {\bf e}_0=0. \end{dcases}$$ 3. Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $${\mathfrak E}=\left\{0^3, {4\over 9}\left({2^2 3^3\over 9^{9-4}}\right)^{1\over 4} e^{2\pi\sqrt{-1}j\over 4}\mid j=0, 1, 2, 3\right\}.$$ *Remark 117*. 
In this case, we see that the set $\{\mathbf{1}, \tau', (\tau')^2, (\tau')^3, (\tau')^4, (\tau')^5\}$ is linearly dependent and the last three elements in [\[E7-basis\]](#E7-basis){reference-type="eqref" reference="E7-basis"} span the kernel of ${\nu\over d}\tau'\star_{\tau}$. ### Quantum spectrum of $E_8$-singularity For the $E_8$-singularity $W=x_1^3+x_2^5$, the weight system of $(W, \langle J\rangle)$ is $(d; w_1, w_2)=(15; 5, 3)$ and the index is $\nu=5$. The set $$\{{\bf e}_1, {\bf e}_2, {\bf e}_4, {\bf e}_7, {\bf e}_{8}, {\bf e}_{11}, {\bf e}_{13}, {\bf e}_{14}\}$$ is a basis of the state space ${\mathcal H }_{W, \langle J\rangle}$. Again, we have $$\tau(t)=t{\bf e}_2, \quad \tau=\tau'={\bf e}_2.$$ The nonzero FJRW invariants involved in the quantum multiplication ${\nu\over d}\tau'\star_{\tau}$ are $$\begin{dcases} \Big\langle{\bf e}_2, {\bf e}_1, {\bf e}_{13}\Big\rangle_{0,3}= \Big\langle{\bf e}_2, {\bf e}_7, {\bf e}_7\Big\rangle_{0,3}=1, \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{11}\Big\rangle_{0,4}=a={1\over 3}, \\ \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{4}, {\bf e}_{8}\Big\rangle_{0,5}=b={2\over 15}, \quad \Big\langle{\bf e}_2, {\bf e}_2, {\bf e}_{2}, {\bf e}_{13}, {\bf e}_{14}\Big\rangle_{0,5}=c={1\over 15}. \end{dcases}$$ Using these invariants, we obtain quantum relations $$\begin{dcases} (\tau')^2=a{\bf e}_4, \quad (\tau')^3={ab\over 2}{\bf e}_7,\quad (\tau')^4={ab\over 2}{\bf e}_8, \quad (\tau')^5={ab^2\over 4}{\bf e}_{11}, \\ (\tau')^6={a^2b^2\over 4}{\bf e}_{13}, \quad (\tau')^7={a^2b^2\over 4}{\bf e}_{14}+{a^2b^2c\over 8}{\bf e}_1, \quad (\tau')^8={a^2b^2c\over 4}{\bf e}_2. \end{dcases}$$ **Proposition 118**. For $E_8$-singularity $W=x_1^3+x_2^5$, we have: 1. The set $\{{\bf e}_1, \tau', (\tau')^2, (\tau')^3, (\tau')^4, (\tau')^5, (\tau')^6, (\tau')^7\}$ is a basis of ${\mathcal H }_{W, \langle J\rangle}$; 2. There is a quantum relation $$(\tau')^8%={ab^2c\over 4}\tau' ={1\over 30375}\tau'={5^5 3^3\over 15^{15-7}}\tau'.$$ 3. Conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds because the quantum spectrum of ${\nu\over d}\tau'\star_{\tau}$ is $${\mathfrak E}=\left\{0, {7\over 15}\left({5^5 3^3\over 15^{15-7}}\right)^{1\over 7} e^{2\pi\sqrt{-1}j\over 7}\mid j=0, 1, 2, 3, 4, 5, 6\right\}.$$ ### A different approach We have a graded vector space isomorphism $${\mathcal H }_{D_{n+1}, \langle J\rangle}^{\langle J\rangle}\cong {\rm Jac}(D_{n+1}^T)^{{\mathbb Z}_2}\cong {\rm Jac}(A_{2n-1})^{{\mathbb Z}_2}.$$ The later two vector spaces are ${\mathbb Z}_2$-invariant subspaces. We write the rank of the vector space as $$\dim_{\mathbb C}{\mathcal H }_{D_{n+1}, \langle J\rangle}^{\langle J\rangle}=n=\nu+{n-1\over 2}+1.$$ *Remark 119*. The vector space ${\mathcal H }_{D_{n+1}, \langle J\rangle}$ contains an element $x_1^{n-1\over 2}dx_1dx_2$ which is not $G_{W}$-invariant. So $$\dim_{\mathbb C}{\mathcal H }_{D_{n+1}, \langle J\rangle}=n+1.$$ **Proposition 120**. The characteristic polynomial of the quantum multiplication on ${\mathcal H }_{D_{n+1}, \langle J\rangle}^{\langle J\rangle}$ is given by $$-x(x^{n-1\over 2}-a)(x^{n-1\over 2}+b).$$ **Example 121**. We give some examples: - $(D_4, \langle J\rangle)$, the matrix of $(D_4, G_{W})$ is given by ?? This case is very special as it has no zero eigenvalue. 
- $(D_6, \langle J\rangle)$, the matrix of $(D_6, G_{W})$ is given by $$\left( \begin{array}{ccccccccc} 0 & 0 & -2 & 0 & 0 & 0 & 12 & 0 & 0 \\ 0 & 0 & 0 & -6 & 0 & 0 & 0 & 18 & 0 \\ 0 & 0 & 0 & 0 & -12 & 0 & 0 & 0 & 12 \\ 0 & -1 & 0 & 0 & 0 & 7 & 0 & 0 & 0 \\ 0 & 0 & -2 & 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 & 0 & -6 & 0 \\ 1 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)$$ The characteristic polynomial is $-x (432 - 72 x^2 + x^4)^2$. If we can take the ${\mathbb Z}_2$-invariant part, $$\left( \begin{array}{ccccc} 0 & -2 & 0 & 12 & 0 \\ 0 & 0 & -12 & 0 & 12 \\ 0 & -2 & 0 & 6 & 0 \\ 1 & 0 & 4 & 0 & -2 \\ 0 & 1 & 0 & 0 & 0 \\ \end{array} \right)$$ The characteristic polynomial is $-x (432 - 72 x^2 + x^4)$. **Corollary 122**. For the LG pairs $(D_{n+1}, \langle J\rangle)$, the quantum multiplication along $J^2$ is always semisimple. *Proof.* wrong statement. ◻ ### A proof of Conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} for invertible polynomials. {#a-proof-of-conjecture-conjecture-i-function-formula-for-invertible-polynomials.} Recall in Definition [Definition 19](#def-weight-system){reference-type="ref" reference="def-weight-system"}, we only consider the cases when $$\label{coprime-condition} \gcd(d, w_1, \ldots, w_N)=1.$$ **Lemma 123**. *The condition [\[coprime-condition\]](#coprime-condition){reference-type="eqref" reference="coprime-condition"} implies $$G=\langle J\rangle={\mathbb Z}_d.$$* **Example 124**. Take $W=x^3+y^3$. - If we take $G=\langle J\rangle={\mathbb Z}_3$, then $(d; w_1, w_2)=(3; 1,1)$. - If we take $G=G_{W}={\mathbb Z}_3\times {\mathbb Z}_3$, then $(d; w_1, w_2)=(9; 3,3)$? ### Examples **Example 125**. Consider the chain polynomial $W=x_1^3x_2+x_2^3x_3+x_3^4$. There are two choices of the admissible group $G$: $G=\langle J\rangle$ or $G=G_{W}$. 1. If $G=\langle J\rangle\cong {\mathbb Z}/4{\mathbb Z}$, then the weight system is $(1,1,1;4)$. 2. If $G=G_{W}\cong {\mathbb Z}/36{\mathbb Z}$, then the weight system is $(9,9,9; 36)$. Consider the loop polynomial $W=x_1^3x_2+x_2^3x_3+x_3^3x_1$. There are also two choices of the admissible group $G$: $G=\langle J\rangle$ or $G=G_{W}$. 1. If $G=\langle J\rangle\cong {\mathbb Z}/4{\mathbb Z}$, then the weight system is $(1,1,1;4)$. 2. If $G=G_{W}\cong {\mathbb Z}/28{\mathbb Z}$, then the weight system is $(7,7,7; 28)$. Both polynomials above are homogeneous polynomials of degree $4$, thus the hypersurfaces $(W=0)\subset \mathbb{P}^2$ are genus three curves. The first curve has an automorphism $G_{W}/\langle J\rangle\cong {\mathbb Z}/9{\mathbb Z}$ and the second curve has an automorphism $G_{W}/\langle J\rangle\cong {\mathbb Z}/7{\mathbb Z}$. 
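The explicit matrices and weight systems above lend themselves to a quick machine check. The following short script is a sketch using `sympy` (the helper `weight_system` and its normalization are ours and do not appear in the text): it recomputes the characteristic polynomial of the $\mathbb{Z}_2$-invariant $(D_6, G_{W})$ matrix from Example 121, and the weight systems of the chain and loop polynomials from Example 125 by solving the quasi-homogeneity equations $E\,q=(1,\ldots,1)^{T}$ for the exponent matrix $E$.

```python
import sympy as sp
from functools import reduce

x = sp.symbols('x')

# Z_2-invariant part of the (D_6, G_W) multiplication matrix from Example 121.
M = sp.Matrix([
    [0, -2,   0, 12,  0],
    [0,  0, -12,  0, 12],
    [0, -2,   0,  6,  0],
    [1,  0,   4,  0, -2],
    [0,  1,   0,  0,  0],
])
# Characteristic polynomial det(M - x*Id); Example 121 states -x(432 - 72x^2 + x^4).
print(sp.factor((M - x * sp.eye(5)).det()))

def weight_system(rows):
    """Solve E*q = (1,...,1) for the charges q_j = w_j/d of a quasi-homogeneous
    polynomial with exponent matrix E, and return (d, [w_1,...,w_N], det E)."""
    E = sp.Matrix(rows)
    q = E.solve(sp.ones(E.rows, 1))
    d = reduce(sp.lcm, [sp.fraction(c)[1] for c in q])  # common denominator
    return d, [d * c for c in q], E.det()

# Example 125: chain W = x1^3 x2 + x2^3 x3 + x3^4 and loop W = x1^3 x2 + x2^3 x3 + x3^3 x1.
print(weight_system([[3, 1, 0], [0, 3, 1], [0, 0, 4]]))  # expect (4, [1, 1, 1], 36)
print(weight_system([[3, 1, 0], [0, 3, 1], [1, 0, 3]]))  # expect (4, [1, 1, 1], 28)
```

Both weight systems come out as $(1,1,1;4)$, and the determinants $36$ and $28$ agree with the orders of $G_{W}$ recorded in Example 125.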
## Auxiliary formulas for smallest asymptotic expansions For $\ell=0, 1, \ldots, \nu-1,$ we consider the following narrow element $$\begin{aligned} {\mathcal A}_\ell&=& \sum_{m\in {\bf Nar}}{\red (-1)^{\mu(J^m)}}\phi_{m}\cdot \omega^{\ell\cdot m}\cdot \prod_{j=1}^{N}\Gamma\left(1-\left\{{w_j\cdot m\over d}\right\}\right)(1-\omega^{w_j\cdot m})\label{asymptotic-class-definition} \\&=&\sum_{m\in {\bf Nar}}{\red (-1)^{-\mu(J^m)}}\phi_{d-m}\cdot \omega^{-\ell\cdot m}\cdot \prod_{j=1}^{N}\Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)(1-\omega^{-w_j\cdot m})\nonumber \\&=&(2\pi)^N\sum_{m\in {\bf Nar}}\phi_{m}\cdot \omega^{\ell\cdot m}\cdot \prod_{j=1}^{N}{1\over \Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)}\nonumber\end{aligned}$$ The last formula may has some problem. We have $$g_{d-m}({\mathcal A}_\ell)= {\red (-1)^{\mu(J^m)}}\omega^{-\ell\cdot m}\cdot \prod_{j=1}^{N}\Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)(1-\omega^{-w_j\cdot m}).$$ Now we see the cancellation of ${\red (-1)^{-\mu(J^m)}}$ and ${\red (-1)^{\mu(J^m)}}$. **Proposition 126**. We have a principal weak asymptotic class $${\mathcal A}_0=\sum_{m\in {\bf Nar}}(-1)^{\mu(J^{m})} { \prod\limits_{j=1}^{N}\Gamma\left(1-\left\{{w_j\cdot m\over d}\right\}\right)\over \Upsilon(d-m)} \mathbf{1}_{J^m} \propto\sum_m \prod_{j=1}^{N}{1\over \Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)}.$$ *Proof.* If $$\alpha(d-m)\propto(-1)^{\mu(J^{d-m})} { \prod\limits_{j=1}^{N}\Gamma(\left\{{w_j\cdot m\over d}\right\})\over \Upsilon(m)},$$ ◻ ### The weak asymptotic class of $r$-spin is strong ### Some other formulas For a weight system ${\bf d}=(d; w_1, \ldots, w_N)$, we denote by $$C({\bf d}):=(2\pi)^{N+\nu-1\textcolor{red}{-\widetilde{c}_W}\over 2}\cdot d^{-{1\over 2}}\cdot \prod_{j=1}^{N}w_j^{-{1\over 2}+{w_j\over d}}$$ and $$\label{change-of-variable-formula-negative} \widetilde{x}={\left(t\over d\right)^d}\cdot\left(\prod\limits_{j=1}^{N}w_j^{w_j}\right)\cdot ({\red -z})^{-d+\sum\limits_{j=1}^{N}w_j}.$$ Applying [\[I-function-hypergeometric\]](#I-function-hypergeometric){reference-type="eqref" reference="I-function-hypergeometric"}, we have $$\langle\Phi(\alpha), \mathbf{1}\rangle=t\cdot (-z)^{-{\widehat{c}_W\over 2}}\cdot C({\bf d})\cdot \sum_{m\in {\bf Nar}}{\red (-1)^{-\mu(J^m)}}c_m\cdot \alpha(d-m)\cdot f_{\mathscr{N}(m)}(\widetilde{x}). %\sum\limits_{m\in {\bf Nar}}{g_{d-m}(\alpha)\over \prod\limits_{j=1}^{N}\Gamma\left(\left\{{w_j\cdot m\over d}\right\}\right)}\cdot x^{m-1\over d}\cdot {\Gamma(\alpha^{(m)}_P)\over \Gamma(\rho_Q^{(m)})}\cdot F^{(m)}(x).$$ $$\label{quantum-product-gorenstein} X^{d-1}=d^{-w}\left(\prod\limits_{i=1}^{N}w_i^{w_i}\right) t^wX^{w -1}.$$ $$\begin{aligned} &&t^{d}z^{w-2}\prod_{j=1}^{N}\prod_{c=0}^{w_j-1} \left(q_jt{\partial\over \partial t}+c\right) /\left(t{\partial\over \partial t}\right) I_{\rm FJRW}^{\rm sm}(t,z)\\ =&&t^d\frac{\prod_{j=1}^Nw_j^{w_j}}{d^w}\mathrm{S}(\tau(t),z) \big( t^w(\tau'(t))^{w-1}+O(z) \big) \end{aligned}$$ and $$z^{d-2}\prod_{c=1}^{d-1}\left(t{\partial \over \partial t}-c\right) I_{\rm FJRW}^{\rm sm}(t,z) = \mathrm{S}(\tau(t),z) \big(t^{d} (\tau'(t))^{d-1} +O(z)\big).$$ Hence we get $$(\tau'(t))^{d-1}=d^{\nu-d}\left(\prod_{j=1}^Nw_j^{w_j}\right)t^w(\tau'(t))^{w-1}.$$ ## Quantum spectrum of Fermat homogeneous polynomials {#sec-spectrum-fermat} Now we prove quantum spectrum conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} for any Fermat homogeneous polynomial $W=\sum_{i=1}^N x_i^d$ with $\nu=d-N>1$. 
The proof is similar to the proof of the Gamma conjecture $\mathcal{O}$ for Fano hypersurfaces in $\mathbb{P}^{N-1}$ [@GGI; @Ke]. We need some preparation. ### A quantum relation We first recall a well-known property of the quantum product. **Lemma 127**. *For any admissible LG pair $(W, G)$, let $t_i$ be the coordinate of $\phi_i\in {\mathcal H }_{W, G}$. We have $$\frac{\partial}{\partial t_i}S^{-1}(\mathbf{t}, z) =\frac{1}{z}S^{-1}(\mathbf{t}, z)(\phi_i\star_{\mathbf{t}}).$$* Using this property, we obtain a quantum relation under a mirror symmetry assumption. *Proof.* Equivalently, we need to prove $$\frac{\partial}{\partial t_i}\mathrm{S}^{-1}(\mathbf{t}, -z) =-\frac{1}{z}\mathrm{S}^{-1}(\mathbf{t}, -z)(\phi_i\star_{\mathbf{t}}).$$ This equation holds, because for any pair of homogeneous element $\alpha, \beta\in{\mathcal H }_{W, G}$, we have $$\begin{aligned} \langle \frac{\partial}{\partial t_i}(\mathrm{S}^{-1}(\mathbf{t}, -z)\alpha), z^{-\mathrm{Gr}}\beta \rangle &= \frac{\partial}{\partial t_i} \langle \mathrm{S}^{-1}(\mathbf{t}, -z)\alpha, z^{-\mathrm{Gr}}\beta \rangle\\ &= \frac{\partial}{\partial t_i} \langle \alpha, \mathrm{S}(\mathbf{t}, z) z^{-\mathrm{Gr}}\beta \rangle\\ &= \langle \alpha, \frac{\partial}{\partial t_i} \mathrm{S}(\mathbf{t}, z) z^{-\mathrm{Gr}}\beta \rangle\\ &= \langle \alpha, -\frac{1}{z} \phi_i\star_{\mathbf{t}}\mathrm{S}(\mathbf{t}, z) z^{-\mathrm{Gr}}\beta \rangle \\ &= -\frac{1}{z} \langle \phi_i\star_{\mathbf{t}}\alpha, \mathrm{S}(\mathbf{t}, z) z^{-\mathrm{Gr}}\beta \rangle\\ &=-\frac{1}{z} \langle \mathrm{S}^{-1}(\mathbf{t}, -z)( \phi_i\star_{\mathbf{t}}\alpha), z^{-\mathrm{Gr}}\beta \rangle. \end{aligned}$$ Here the second equality and the last equality follows from the adjointness [\[adjoint-S-operator\]](#adjoint-S-operator){reference-type="eqref" reference="adjoint-S-operator"} of the $S$-operator. The fourth equality follows from the definition of the quantum connection [\[quantum-connection\]](#quantum-connection){reference-type="eqref" reference="quantum-connection"} and the property [\[flat-section-nabla\]](#flat-section-nabla){reference-type="eqref" reference="flat-section-nabla"}. The fifth equality follows from the associativity of the quantum product. ◻ **Proposition 128**. 
If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} is true, the following relation of the quantum product $\tau'(t)\star_{\tau(t)}$ holds: $$\label{eq-quantum-relation} \left(\prod_{j=1}^Nw_j^{w_j}\right)\left({t\over d}\right)^{d-\nu}(\tau'(t))^{p}=(\tau'(t))^{q+1}.$$ *Proof.* Using [\[tau-component\]](#tau-component){reference-type="eqref" reference="tau-component"} and Lemma [Lemma 127](#lem-derivative-multiplication){reference-type="ref" reference="lem-derivative-multiplication"}, we have $$\label{s-derivative-tau} \frac{\partial}{\partial t}zS^{-1}(\tau(t), z)=S^{-1}(\tau(t), z)(\tau'(t)\star_{\tau(t)}).$$ If mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds, we can apply the equation [\[s-derivative-tau\]](#s-derivative-tau){reference-type="eqref" reference="s-derivative-tau"} repeatedly to obtain $$\begin{aligned} &t^{d}z^p\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}}\prod_{i=1}^{p}\left(t{\partial\over\partial t}+\alpha_i d-1\right) I_{\rm FJRW}^{\rm sm}(t,z)\\ =&t^{d}z^p\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}}\prod_{i=1}^{p}\left(t{\partial\over\partial t}+\alpha_i d-1\right)\big(tz S^{-1}(\tau(t),z)\mathbf{1}\big)\\ =&t^{d}z^{p-1}\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}}\prod_{i=2}^{p}\left(t{\partial\over\partial t}+\alpha_i d-1\right) t S^{-1}(\tau(t),z) \big(t\tau'(t) +O(z)\big)\\ %&=t^{d}{z^{N-3}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-3}\mathrm{S}^{-1}(\tau(t),z)\big(t^3\tau'(t)\star_{\tau(t)}\tau'(t)+O(z)\big)\\ % &= t^{d}{z^{N-4}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-4} \mathrm{S}(\tau(t),z)\big(t^4(\tau'(t))^3+O(z) \big)\\ &\dots\\ =&t^{d}\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}} S^{-1}(\tau(t),z)\big(t^{p+1} \big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{p\text{-copies}}\big)+O(z)\big).\end{aligned}$$ Similarly, we have $$\begin{aligned} &z^{q+1}\prod_{j=0}^{q}\left(t{\partial \over \partial t}+(\rho_j-1)d-1\right) I_{\rm FJRW}^{\rm sm}(t,z) \\ =& S^{-1}(\tau(t),z)\big(t^{q+2} \big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{q+1\text{-copies}}\big)+O(z)\big).\end{aligned}$$ Applying the differential equation [\[i-function-ode\]](#i-function-ode){reference-type="eqref" reference="i-function-ode"} and take the leading term, we have $$\left(\prod_{j=1}^N{w_j^{w_j}\over d^{w_j}}\right) t^{d+p+1}(\tau'(t))^{p}=t^{q+2}(\tau'(t))^{q+1}.$$ This implies the quantum relation [\[eq-quantum-relation\]](#eq-quantum-relation){reference-type="eqref" reference="eq-quantum-relation"}. ◻ The quantum relation [\[eq-quantum-relation\]](#eq-quantum-relation){reference-type="eqref" reference="eq-quantum-relation"} can be used to calculate the eigenvalues of ${\nu\over d}\tau'\star_{\tau}$. **Corollary 129**. For an admissible LG pair $(W, \langle J\rangle)$, if 1. mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds for the LG pair $(W, \langle J\rangle)$, and 2. the set $\{1, \tau', \ldots, (\tau')^{q}\}$ is linearly independent; then the set $$\{0^p, Te^{2\pi\sqrt{-1}j\over \nu}\mid j=0, 1, \ldots, \nu-1 \}$$ is a subset of the set of eigenvalues of ${\nu\over d}\tau'\star_{\tau}$ on ${\mathcal H }_{W, \langle J\rangle}$. 
Here the notation $0^p$ means the multiplicity of the eigenvalue $0$ is $p$, and $T$ is the positive real number defined in [\[largest-positive-number\]](#largest-positive-number){reference-type="eqref" reference="largest-positive-number"}. *Remark 130*. The set $\{1, \tau', \ldots, (\tau')^{q}\}$ may be linearly dependent even if mirror conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} holds. In particular, $(W=x_1^3x_2+x_2^3, \langle J\rangle)$ is such an example. See Remark [Remark 117](#remark-E7){reference-type="ref" reference="remark-E7"}. The following result is a consequence of Proposition [Proposition 128](#mirror-quantum-relation){reference-type="ref" reference="mirror-quantum-relation"}. **Proposition 131**. For the Fermat pair $(W=x_1^d+\ldots +x_N^d, \langle J\rangle)$ of general type, we have a quantum relation $$(\tau')^{d-1}=d^{-N}(\tau')^{N-1}.$$ Here $\tau(t)$ is given by the formulas in [\[tau-fermat\]](#tau-fermat){reference-type="eqref" reference="tau-fermat"}. $$\tau(t)= \begin{dcases} t{\bf e}_2, & \textit{ if } d>N+1;\\ t{\bf e}_2+\frac{t^{d}}{d!d^N}{\bf e}_1, & \textit{ if } d=N+1>2;\\ {t\over 4}{\bf e}_1, & \textit{ if } d=N+1=2. \end{dcases}$$ $$\label{quantum-relation} \begin{dcases} \frac{t^{N}}{d^N}{\bf e}_2^{ N-1}={\bf e}_2^{d-1}, & \text{if } d\geq N+2;\\ \frac{t^{N}}{d^N} \Big(\frac{t^{d-1}}{d^N(d-1)!}\mathbf{1}+{\bf e}_2\Big)^{ N-1}=\Big(\frac{t^{d-1}}{d^N(d-1)!}\mathbf{1}+{\bf e}_2\Big)^{d-1}, & \text{if } d\geq N+1. \end{dcases}$$ *Proof.* According to the formula [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"}, the small I-function in this case is given by $$\label{fermat-small} I_{\rm FJRW}^{\rm sm}(t,z):= z\sum_{k=0}^{d-2}\sum_{\ell=0}^{\infty} {t^{d\ell+k+1}\over z^{(d-N)\ell+k}(d\ell+k)!}\cdot {\Gamma^N(\ell+{k+1\over d})\over \Gamma^N({1+k\over d})}\cdot {\bf e}_{k-1}.$$ The small $I$-function satisfies the differential equation $$\label{fermat-picard-fuchs} \left(t^{d}{z^N\over d^N}\left(t{\partial\over \partial t}\right)^N - z^d\prod_{c=1}^{d}\left(t{\partial \over \partial t}-c\right)\right) I_{\rm FJRW}^{\rm sm}(t,z)=0.$$ It also satisfies the following asymptotic properties: 1. If $d\geq N+2$, then $$I_{\rm FJRW}^{\rm sm}(t,z)=zt\mathbf{1} +t^2 {\bf e}_2+O(z^{-1}).$$ 2. If $d=N+1$, then $$I_{\rm FJRW}^{\rm sm}(t,z)=\left(zt +\frac{t^{d+1}}{d!d^N}\right)\mathbf{1} +t^2 {\bf e}_2+O(z^{-1}).$$ By the mirror statement in Proposition [Proposition 65](#mirror-theorem-fermat){reference-type="ref" reference="mirror-theorem-fermat"} and formula [\[J-equal-S\]](#J-equal-S){reference-type="eqref" reference="J-equal-S"}, we have $$\label{mirror-formula-proof} I_{\rm FJRW}^{\rm sm}(t,z)=tJ(\tau(t),z)=tz\mathrm{S}^{-1}(\tau(t),z)\mathbf{1},$$ where the mirror map $\tau(t)$ is defined by $$\label{eq:mirror-map} \tau(t)= \begin{cases} t {\bf e}_2 &\mbox{if}\ d\geq N +2; \\ \frac{t^{d}}{d!d^N}\mathbf{1} +t {\bf e}_2 & \mbox{if}\ d=N+1. 
\end{cases}$$ By applying the formula [\[s-derivative-tau\]](#s-derivative-tau){reference-type="eqref" reference="s-derivative-tau"} repeatedly to [\[mirror-formula-proof\]](#mirror-formula-proof){reference-type="eqref" reference="mirror-formula-proof"}, we obtain $$\begin{aligned} t^{d}{z^{N-2}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-1} I_{\rm FJRW}^{\rm sm}(t,z) &=t^{d}{z^{N-2}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-1}\big(tz\mathrm{S}^{-1}(\tau(t),z)\mathbf{1}\big)\\ &=t^{d}{z^{N-2}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-2}t\mathrm{S}^{-1}(\tau(t),z) \big(t\tau'(t) +z\mathbf{1}\big)\\ &=t^{d}{z^{N-3}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-3}\mathrm{S}^{-1}(\tau(t),z)\big(t^3\tau'(t)\star_{\tau(t)}\tau'(t)+O(z)\big)\\ % &= t^{d}{z^{N-4}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-4} \mathrm{S}(\tau(t),z)\big(t^4(\tau'(t))^3+O(z) \big)\\ &\dots\\ & = \frac{t^{d}}{d^N} \mathrm{S}^{-1}(\tau(t),z)\big(t^{N} \big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{N-1\text{-copies}}\big)+O(z)\big).\end{aligned}$$ Using the expression [\[eq:mirror-map\]](#eq:mirror-map){reference-type="eqref" reference="eq:mirror-map"} of $\tau(t)$, we have $\alpha\star_{\tau(t)}\beta=\alpha\star_{t{\bf e}_2}\beta$. In particular, if $d=N+1$, $$\begin{aligned} \alpha\star_{\tau(t)}\beta&=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}{\phi^i\over k!}\langle\alpha,\beta,\phi_i, \frac{t^{d}}{d!d^N}\mathbf{1} +t {\bf e}_2,\ldots, \frac{t^{d}}{d!d^N}\mathbf{1} +t {\bf e}_2\rangle_{0,k+3}\\ &=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}\sum_{j=0}^{k}{\phi^i\over k!}{k\choose j}t^{k-j}\left(\frac{t^{d}}{d!d^N}\right)^{j}\langle\alpha,\beta,\phi_i, \underbrace{{\bf e}_2, \ldots {\bf e}_2}_{(k-j)\text{-copies}}, \underbrace{\mathbf{1}, \ldots \mathbf{1}}_{j\text{-copies}}\rangle_{0,k+3}\\ &=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}{\phi^i\over k!}t^{k}\langle\alpha,\beta,\phi_i, \underbrace{{\bf e}_2, \ldots {\bf e}_2}_{k\text{-copies}}\rangle_{0,k+3}\\ &=&\alpha\star_{t{\bf e}_2}\beta.\end{aligned}$$ Here in the third equality, all the terms with $j>0$ vanishes by the string equation. Thus $$t^{d}{z^{N-2}\over d^{N}}\left(t{\partial\over \partial t}\right)^{N-1} I_{\rm FJRW}^{\rm sm}(t,z) = \frac{t^{d}}{d^N} \mathrm{S}^{-1}(\tau(t),z)\big(t^{N} (\tau'(t))^{ N-1}+O(z)\big).$$ Similarly, we obtain $$z^{d-2}\prod_{c=1}^{d-1}\left(t{\partial \over \partial t}-c\right) I_{\rm FJRW}^{\rm sm}(t,z)= %\mathrm{S}(\tau(t),z)\big(t^{d}\big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{d-1\text{-copies}}\big)+O(z)\big). \mathrm{S}^{-1}(\tau(t),z)\big(t^{d}(\tau'(t))^{d-1}+O(z)\big).$$ Applying the differential equation [\[fermat-picard-fuchs\]](#fermat-picard-fuchs){reference-type="eqref" reference="fermat-picard-fuchs"}, we obtain the following relation $$%\frac{t^{N}}{d^N}\big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{N-1\text{-copies}}\big)= \big(\underbrace{\tau'(t)\star_{\tau(t)} \ldots \star_{\tau(t)} \tau'(t)}_{d-1\text{-copies}}\big). \frac{t^{N}}{d^N} (\tau'(t))^{ N-1}=(\tau'(t))^{d-1}.$$ Plugging in the mirror map formulas [\[eq:mirror-map\]](#eq:mirror-map){reference-type="eqref" reference="eq:mirror-map"} into this relation, we get [\[quantum-relation\]](#quantum-relation){reference-type="eqref" reference="quantum-relation"}. 
◻ ### Bases of ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ According to Proposition [Proposition 30](#fermat-broad){reference-type="ref" reference="fermat-broad"}, the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ has a basis $$\{{\bf e}_{m}\mid m=1, 2, \ldots, d-1\}.$$ We consider two other (possibly different) bases. **Proposition 132**. Let $W=x_1^d+\ldots +x_N^d$ be a Fermat polynomial of general type, then $$\{{\bf e}_2^{m-1}\mid m=1, 2, \ldots, d-1\}$$ is a basis of the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$. *Proof.* We now prove that the change-of-basis matrix between the ordered sets $$\{{\bf e}_{m}\mid m=1, 2, \ldots, d-1\}\ \quad \text{and}\quad \{{\bf e}_2^{m-1}\mid m=1, 2, \ldots, d-1\}$$ is upper triangular and the result follows. It is easy to see that ${\bf e}_j$ is dual to ${\bf e}_{d-j}$, $\langle{\bf e}_2, {\bf e}_{m-1}, {\bf e}_{d-m}\rangle_{0,3}=1$, and $${\bf e}_2\star_{t{\bf e}_2}{\bf e}_{m}={\bf e}_{m+1}+\sum_{j=1}^{d-1}{\bf e}_{d-j} \sum_{n\geq 1}{t^n\over n!}\langle{\bf e}_2, {\bf e}_{m}, {\bf e}_j, {\bf e}_2, \ldots, {\bf e}_2\rangle_{0,n+3}.$$ If the coefficient of $t^n$ is nonzero, then by the homogeneity property [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"} and the selection rule  [\[selection-rule\]](#selection-rule){reference-type="eqref" reference="selection-rule"}, we must have $$\begin{dcases} {(d-2)N\over d}+n={N(n+1)\over d}+{N(m-1)\over d}+{N(j-1)\over d}, \\ -{n+1\over d}-{m+j\over d}\equiv 0 \mod \mathbb{Z}. \end{dcases}$$ The second formula says $m+j+n+1=kd$ for some $k\in\mathbb{Z}_+$. This implies the RHS of the first formula becomes ${N(kd-2)\over d}$ and thus we obtain $nd=N(k-1)d.$ This implies $$d-m-j-1=d-kd+n=n-{nd\over N}<0.$$ So we must have $d-j<m+1$. Thus the change-of-basis matrix is upper triangular. ◻ **Proposition 133**. Let $W=x_1^d+\ldots +x_N^d$ be a Fermat polynomial of general type, then $$\{(\tau')^{m-1}\mid m=1, 2, \ldots, d-1\}$$ is a basis of the subspace ${\mathcal H }_{W, \langle J\rangle}^{G_W}$. *Proof.* It suffices to prove the statement for the cases when $d=N+1>2$. We have $$\begin{aligned} \alpha\star_{\tau(t)}\beta&=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}{\phi^i\over k!}\Big\langle\alpha,\beta,\phi_i, t {\bf e}_2+\frac{t^{d}}{d!d^N}{\bf e}_1, \ldots, t{\bf e}_2+\frac{t^{d}}{d!d^N}{\bf e}_1 \Big\rangle_{0,k+3}\\ &=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}\sum_{j=0}^{k}{\phi^i\over k!}{k\choose j}t^{k-j}\left(\frac{t^{d}}{d!d^N}\right)^{j}\Big\langle\alpha,\beta,\phi_i, \underbrace{{\bf e}_2, \ldots {\bf e}_2}_{(k-j)\text{-copies}}, \underbrace{{\bf e}_1, \ldots {\bf e}_1}_{j\text{-copies}}\Big\rangle_{0,k+3}\\ &=&\sum_{i=1}^{r}\sum_{k=0}^{\infty}{\phi^i\over k!}t^{k}\Big\langle\alpha,\beta,\phi_i, \underbrace{{\bf e}_2, \ldots {\bf e}_2}_{k\text{-copies}}\Big\rangle_{0,k+3}\\ &=&\alpha\star_{t{\bf e}_2}\beta.\end{aligned}$$ Here in the third equality, all the terms with $j>0$ vanishes by the string equation. Now the result follows from the formula $\tau'={\bf e}_2+\frac{1}{d!d^{N-1}}{\bf e}_1$ and Proposition [Proposition 132](#basis-fermat-subspace){reference-type="ref" reference="basis-fermat-subspace"}. ◻ Now according to Corollary [Corollary 129](#cor-linearlity){reference-type="ref" reference="cor-linearlity"}, Proposition [Proposition 65](#mirror-theorem-fermat){reference-type="ref" reference="mirror-theorem-fermat"}, and Proposition [Proposition 133](#basis-fermat){reference-type="ref" reference="basis-fermat"}, we have **Proposition 134**. 
Quantum spectrum conjecture [Conjecture 36](#quantum-spectrum-conj-invariant){reference-type="ref" reference="quantum-spectrum-conj-invariant"} holds true for Fermat homogenous polynomials of general type. Now the calculation of the quantum spectrum of ${\bf e}_2\star_t$ on ${\mathcal H }_{W, \langle J\rangle}^{G_W}$ is a consequence of the formula [\[quantum-relation\]](#quantum-relation){reference-type="eqref" reference="quantum-relation"} and Lemma [Proposition 133](#basis-fermat){reference-type="ref" reference="basis-fermat"}. There are two situations: ### A proof of Proposition [Proposition 24](#prop-quantum-spectrum-inv){reference-type="ref" reference="prop-quantum-spectrum-inv"} {#a-proof-of-proposition-prop-quantum-spectrum-inv} Now we consider the cases when $\nu>1$. For these cases, we can prove that quantum spectrum conjecture [Conjecture 23](#quantum-spectrum-conj){reference-type="ref" reference="quantum-spectrum-conj"} holds. It suffices to show that Proposition [Proposition 24](#prop-quantum-spectrum-inv){reference-type="ref" reference="prop-quantum-spectrum-inv"} holds as the action ${\bf e}_2\star_{t{\bf e}_2}$ on the broad subspace ${\mathcal H }_{\rm bro}$ is trivial. **Lemma 135**. *Let $W=x_1^d+\ldots +x_N^d$. If $\nu>1$, then we have ${\bf e}_2\star_t\phi=0$ for each homogeneous element $\phi\in {\mathcal H }_{\rm bro}$. That is, $${\bf e}_2\star_t\vert_{{\mathcal H }_{J^0}}=0.$$* *Proof.* According to Proposition [Proposition 30](#fermat-broad){reference-type="ref" reference="fermat-broad"}, each homogeneous element $\phi\in {\mathcal H }_{\rm bro}$ is a broad element in ${\mathcal H }_{J^0}$. Using the Hodge grading operator formula [\[hodge-grading-operator\]](#hodge-grading-operator){reference-type="eqref" reference="hodge-grading-operator"} and degree formula [\[degree-hodge\]](#degree-hodge){reference-type="eqref" reference="degree-hodge"}, we must have $$\deg_{\mathbb C}\phi={\widehat{c}_W\over 2}={N\over 2}-{N\over d}.$$ Now we consider the quantum product $$\label{fermat-broad-vanishi} {\bf e}_2\star_{t}\phi=\sum_{j=1}^{r}\sum_{n=0}^{\infty}{t^n\phi^j\over n!}\Big\langle{\bf e}_2, \phi, \phi_j, \underbrace{{\bf e}_2, \ldots {\bf e}_2}_{n\text{-copies}}\Big\rangle_{0,n+3}.$$ Here $\{\phi_j\}$ is a homogeneous basis of ${\mathcal H }_{W, \langle J\rangle}$ and $\{\phi^j\}$ is the dual basis. Using Lemma [Lemma 31](#gmax-inv-inv){reference-type="ref" reference="gmax-inv-inv"}, we observe that the nonzero invariant $\Big\langle{\bf e}_2, \phi, \phi_j, {\bf e}_2, \ldots, {\bf e}_2\Big\rangle_{0,n+3}$ must come from some element $\phi_j\in {\mathcal H }_{\rm bro}.$ Thus we must have $$\deg_{\mathbb C}\phi_j={\widehat{c}_W\over 2}.$$ Since we have the degree formula $\deg_{\mathbb C}{\bf e}_2=1-{\nu\over d}$, we can apply the homogeneity property [\[virtual-degree\]](#virtual-degree){reference-type="eqref" reference="virtual-degree"} to the invariant $\Big\langle{\bf e}_2, \phi, \phi_j, {\bf e}_2, \ldots, {\bf e}_2\Big\rangle_{0,n+3}$ and obtain $${n+1\over d}={1\over d-N}.$$ On the other hand, the selection rule [\[selection-rule\]](#selection-rule){reference-type="eqref" reference="selection-rule"} implies that $${n+1\over d}\equiv 0 \mod {\mathbb Z}.$$ But we assume $d-N=\nu>1$, this is a contradiction. So all the FJRW invariants in the formula [\[fermat-broad-vanishi\]](#fermat-broad-vanishi){reference-type="eqref" reference="fermat-broad-vanishi"} vanish and the result follows. 
◻ ## Miscellaneous examples of $\nu=1$ ### A special case $W=x_1^3+x_2^3$ We consider this special case with the weight system $(3; 1, 1)$ and the index $\nu=1$. The state space ${\mathcal H }_{W, \langle J\rangle}$ has a basis $$\{{\bf e}_1, {\bf e}_2, x_1dx_1dx_2, x_2dx_1dx_2\}$$ ### Quantum spectrum of $D_{4}$-singularity $W=x_1^3+x_1x_2^2$ ### Quantum spectrum of a loop singularity $W=x_1^2x_2+x_1x_2^2$ ### A formula for general cases For each weight system $(d; w_1, \ldots, w_N)$, we can define a function $$\label{maximal-evalue} T(t):=\nu \left(\left({t\over d}\right)^{d}\prod\limits_{i=1}^{N}w_i^{w_i}\right)^{1\over \nu}.$$ We observe that if $W$ is of general type, then there exists some $\tau(t)\in {\mathcal H }_0[[t]]$, such that the small $I$-function in [\[small-i-function\]](#small-i-function){reference-type="eqref" reference="small-i-function"} has the form $$I_{\rm FJRW}^{\rm sm}(t,z)=zt\mathbf{1}+t\tau(t)+O\left(\frac{1}{z}\right).$$ **Proposition 136**. If Conjecture [Conjecture 9](#conjecture-i-function-formula){reference-type="ref" reference="conjecture-i-function-formula"} is true, then the quantum product of $X$ satisfies $$\label{quantum-product-gorenstein} X^{d-1}=d^{-w}\left(\prod\limits_{i=1}^{N}w_i^{w_i}\right) t^wX^{w -1}, \quad w:=\sum_{j=1}^{N}w_j.$$ *Proof.* The proof is similar to that of Lemma [Proposition 131](#lem-quantum-relation){reference-type="ref" reference="lem-quantum-relation"}. By applying Lemma [Lemma 127](#lem-derivative-multiplication){reference-type="ref" reference="lem-derivative-multiplication"}, we obtain $$\begin{aligned} &&t^{d}z^{w-2}\prod_{j=1}^{N}\prod_{c=0}^{w_j-1} \left(q_jt{\partial\over \partial t}+c\right) \Big/\left(t{\partial\over \partial t}\right) I_{\rm FJRW}^{\rm sm}(t,z)\\ =&&t^d\frac{\prod_{j=1}^Nw_j^{w_j}}{d^w}\mathrm{S}(\tau(t),z) \big( t^w(\tau'(t))^{w-1}+O(z) \big) \end{aligned}$$ and $$z^{d-2}\prod_{c=1}^{d-1}\left(t{\partial \over \partial t}-c\right) I_{\rm FJRW}^{\rm sm}(t,z) = \mathrm{S}(\tau(t),z) \big(t^{d} (\tau'(t))^{d-1} +O(z)\big).$$ The small $I$-function satisfies the differential equation [\[eq:dfq-quasihomogeneous-Fermat\]](#eq:dfq-quasihomogeneous-Fermat){reference-type="eqref" reference="eq:dfq-quasihomogeneous-Fermat"}. Hence we get $$(\tau'(t))^{d-1}=d^{-d}\left(\prod_{j=1}^Nw_j^{w_j}\right)t^w(\tau'(t))^{w-1}.$$ ◻ 999 Pedro Acosta, *Asymptotic Expansion and the LG/(Fano, General Type) Correspondence.* arXiv:1411.4162 Pedro Acosta, Mark Shoemaker, *Quantum cohomology of toric blowups and Landau-Ginzburg correspondences.* Algebr. Geom.5(2018), no.2, 239--263. Ragnar-Olaf Buchweitz, *Maximal Cohen-Macaulay modules and Tate cohomology.* Math. Surveys Monogr., 262 American Mathematical Society, Providence, RI, 2021, xii+175 pp. Matthew Ballard, David Favero, Ludmil Katzarkov, *A category of kernels for equivariant factorizations and its implications for Hodge theory.* Publ. Math. Inst. Hautes Études Sci. 120 (2014), 1--111. Matthew Ballard, David Favero, Ludmil Katzarkov, *A category of kernels for equivariant factorizations, II: further implications.* J. Math. Pures Appl. (9) 102 (2014), no. 4, 702--757. E. W. Barnes, *The Asymptotic Expansion of Integral Functions Defined by Generalised Hypergeometric Series.* Proc. London Math. Soc. (2) 5 (1907), 59--116. Bra **B. L. J. Braaksma,** *Asymptotic expansions and analytic continuations for a class of Barnes-integrals.* Compositio Math. 15 (1964), 239--341 (1964). BBRS **Balser, W.; Braaksma, B. L. 
J.; Ramis, J.-P.; Sibuya, Y.** *Multisummability of formal power series solutions of linear ordinary differential equations.* Asymptotic Anal. 5 (1991), no. 1, 27--45. **Boalch, P. P.** *Geometry and braiding of Stokes data; fission and wild character varieties.* Ann. of Math. (2) 179 (2014), no. 1, 301--365. **Philip Boalch,** *Topology of the Stokes phenomenon.* arXiv:1903.12612 \[math.AG\] A. Bondal, D. Orlov, *Semiorthogonal decomposition for algebraic varieties,* arXiv:alg-geom/9506012. Sergio Cecotti, Cumrun Vafa, *On classification of N=2 supersymmetric theories,* Comm. Math. Phys. 158 (1993), no. 3, 569--644. Huai-Liang Chang, Jun Li, Wei-Ping Li, *Witten's top Chern class via cosection localization.* Invent. Math. 200 (2015), no. 3, 1015--1063. Alessandro Chiodo, Hiroshi Iritani, Yongbin Ruan, *Landau-Ginzburg/Calabi-Yau correspondence, global mirror symmetry and Orlov equivalence.* Publ. Math. Inst. Hautes Etudes Sci. 119 (2014), 127--216. Giordano Cotti, Boris Dubrovin, Davide Guzzetti, *Helix Structures in Quantum Cohomology of Fano Varieties.* arXiv:1811.09235 \[math.AG\] Andrea D'Agnolo, Marco Hien, Giovanni Morando, Claude Sabbah, *Topological computation of some Stokes phenomena on the affine line.* Ann. Inst. Fourier (Grenoble) 70 (2020), no. 2, 739--808. Boris Dubrovin, *Geometry of 2D topological field theories.* Integrable systems and quantum groups (Montecatini Terme, 1993), 120--348, Lecture Notes in Math., 1620, Fond. CIME/CIME Found. Subser., Springer, Berlin, 1996. Boris Dubrovin, *Geometry and analytic theory of Frobenius manifolds.* Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998). Doc. Math. 1998, Extra Vol. II, 315--326. Anne Duval, Claude Mitschi, *Matrices de Stokes et groupe de Galois des équations hypergéométriques confluentes généralisées.* (French) Pacific J. Math. 138 (1989), no. 1, 25--56. Tobias Dyckerhoff, *Compact generators in categories of matrix factorizations.* Duke Math. J. 159 (2011), no. 2, 223--274. David Eisenbud, *Homological algebra on a complete intersection, with an application to group representations.* Trans. Amer. Math. Soc.260(1980), no.1, 35--64. Huijun Fan, Tyler J. Jarvis, Yongbin Ruan, *The Witten equation and its virtual fundamental cycle.* arXiv:0712.4025 \[math.AG\] Huijun Fan, Tyler J. Jarvis, Yongbin Ruan, *The Witten equation, mirror symmetry and quantum singularity theory.* Ann. Math. 178(1) (2013), 1--106. Jerry L. Fields, *The asymptotic expansion of the Meijer G-function.* Math. Comp. 26 (1972), 757--765. Sergey Galkin, Hiroshi Iritani, *Gamma conjecture via mirror symmetry.* Adv. Stud. Pure Math. Primitive Forms and Related Subjects -- Kavli IPMU 2014, K. Hori, C. Li, S. Li and K. Saito, eds. (Tokyo: Mathematical Society of Japan, 2019), 55 -- 115 Sergey Galkin, Vasily Golyshev, Hiroshi Iritani, *Gamma classes and quantum cohomology of Fano manifolds: gamma conjectures.* Duke Math. J. 165 (2016), no. 11, 2005--2077. Jeremy Guéré, *A Landau-Ginzburg mirror theorem without concavity.* Duke Math. J. 165 (2016), 2461--2527. Alexander Givental, *Symplectic geometry of Frobenius structures.* Frobenius manifolds, 91--112, Aspects Math., E36, Friedr. Vieweg, Wiesbaden, 2004. Davide Guzzetti, *Stokes matrices and monodromy of the quantum cohomology of projective spaces.* Comm. Math. Phys. 207 (1999), no. 2, 341--383. Robin Hartshorne, *Residues and Duality.* Lecture Notes in Math. 20, Springer-Verlag, Berlin, 1966. 
Weiqiang He, Si Li, Yefeng Shen, Rachel Webb, *Landau-Ginzburg mirror symmetry conjecture.* J. Eur. Math. Soc. 24 (2022), no. 8, pp. 2915--2978 Hiroshi Iritani, *An integral structure in quantum cohomology and mirror symmetry for toric orbifolds.* Adv. Math. 222 (2009), no. 3, 1016--1079 Hiroshi Iritani, *Quantum cohomology of blowups.* arXiv:2307.13555 \[math.AG\] Anton Kapustin, Yi Li, *D-branes in Landau-Ginzburg models and algebraic geometry.* J. High Energy Phys. 2003, no. 12, 005, 44 pp. L. Katzarkov, M. Kontsevich, T. Pantev, *Hodge theoretic aspects of mirror symmetry. From Hodge theory to integrability and TQFT $tt*$-geometry,* 87--174, Proc. Sympos. Pure Math., 78, Amer. Math. Soc., Providence, RI, 2008. Hua-Zhong Ke, *On Conjecture $\mathcal{O}$ for projective complete intersections.* arXiv:1809.10869 \[math.AG\] Young-Hoon Kiem, Jun Li, *Quantum singularity theory via cosection localization.* J. Reine Angew. Math. 766 (2020), 73--107. Johanna Knapp, Mauricio Romo, Emanuel Scheidegger, *D-brane central charge and Landau-Ginzburg orbifolds.* Comm. Math. Phys. 384 (2021), no. 1, 609--697. Maxim Kontsevich, *Birational invariants from quantum cohomology,* Talk at Higher School of Economics on 27 May 2019, as part of Homological Mirror Symmetry at HSE, 27 May -- 1 June 2019. Maxim Kontsevich, *Birational invariants from quantum cohomology,* Talk at Higher School of Economics on 27 May 2019, as part of Homological Mirror Symmetry at HSE, 27 May -- 1 June 2019. Maxim Kontsevich, *Quantum spectrum in algebraic geometry I,II,III,* Talks at Homological Mirror Symmetry and Topological Recursion, Miami, 27 January -- 1 February, 2020, Maxim Kontsevich, *Blow-up equivalence,* Talks at Simons Collaboration on Homological Mirror Symmetry Annual Meeting 2021, 18-19 November, 2021. Marc Krawitz, *FJRW rings and Landau-Ginzburg Mirror Symmetry.* arXiv:0906.0796 \[math.AG\] Maximillian Kreuzer, Harald Skarke *On the classification of quasihomogeneous functions.* Comm. Math. Phys. 150 (1992), no. 1, 137--147. Yudell L. Luke, *The special functions and their approximations, Vol. I.* Mathematics in Science and Engineering, Vol. 53 Academic Press, New York-London 1969 xx+349 pp. C S Meijer, *On the G -function. I-IV,* Nederl. Akad. Wetensch., Proc. 49 (1946), 227--237, 344--356, 457--469, 632--641. *NIST Digital Library of Mathematical Functions.* https://dlmf.nist.gov/ Dmitri Orlov, *Triangulated categories of singularities and D-branes in Landau-Ginzburg models.* (Russian) Tr. Mat. Inst. Steklova 246 (2004), Algebr. Geom. Metody, Svyazi i Prilozh., 240--262; translation in Proc. Steklov Inst. Math. 2004, no. 3(246), 227--248 Dmitri Orlov, *Derived categories of coherent sheaves and triangulated categories of singularities,* in: Algebra, arithmetic, and geometry. In honor of Yu. I. Manin. Vol. II, Birkhäuser-Verlag, Boston (2009), 503--531. Alexander Polishchuk, Arkady Vaintrob, *Chern characters and Hirzebruch-Riemann-Roch formula for matrix factorizations.* Duke Math. J. 161 (2012), no. 10, 1863--1926. Alexander Polishchuk, Arkady Vaintrob, *Matrix factorizations and cohomological field theories.* J. Reine Angew. Math. 714 (2016), 1--122. Alexander Quintero Vélez, *McKay correspondence for Landau-Ginzburg models,* Commun. Number Theory Phys.3 (2009), Volume 3, Number 1, 173--208. Dustin Ross, Yongbin Ruan, *Wall-crossing in genus zero Landau-Ginzburg theory.* J. Reine Angew. Math. 733 (2017), 183--201. Fumihiko Sanda, Yota Shamoto, *An analogue of Dubrovin's conjecture.* Ann. Inst. 
Fourier (Grenoble) 70 (2020), no. 2, 621--682. D Shklyarov, *Hirzebruch-Riemann-Roch-type formula for DG algebras.* Proc. Lond. Math. Soc. (3) 106 (2013), no. 1, 1--32. George Gabriel Stokes, *On the discontinuity of arbitrary constants which appear in divergent developments,* Transactions of the Cambridge Philosophical Society, X (I): (1858) 105--128. Johannes Walcher, *Stability of Landau-Ginzburg branes.* J. Math. Phys. 46 (2005), no. 8, 082305, 29 pp. E Witten, *Algebraic geometry associated with matrix models of two-dimensional gravity.* Topological methods in modern mathematics (Stony Brook, NY, 1991), 235--269, Publish or Perish, Houston, TX, 1993. .1in .1in [^1]: This is the $L$-operator in [@CIR Section 2, formula (20)]. [^2]: Here the symbol ${\mathrm{Gr}}$ means graded, so the category is a $\langle J\rangle$-equivariant version.
arxiv_math
{ "id": "2309.07446", "title": "Quantum spectrum and Gamma structures for quasi-homogeneous polynomials\n of general type", "authors": "Yefeng Shen, Ming Zhang", "categories": "math.AG hep-th", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We show that de Rham--Witt forms are naturally isomorphic to $p$-typical curves on $p$-adic Tate twists, which revisits a question of Artin--Mazur pursued in the earlier work of Bloch and Kato. We show this by more generally equipping a related result of Hesselholt on topological cyclic homology with the motivic filtrations introduced by Bhatt--Morrow--Scholze. address: - - author: - Sanath Devalapurkar - Shubhodip Mondal bibliography: - main.bib title: $p$-typical curves on $p$-adic Tate twists and de Rham--Witt forms --- # Introduction Let $k$ be a perfect field of characteristic $p>0.$ For a smooth proper scheme $X$ over $k,$ Artin and Mazur [@AM] constructed certain formal groups $\Phi^r(X)$ associated to $X$, with the property that $p$-typical curves on $\Phi^r(X)$ recover the slope $[0,1)$ part of crystalline cohomology $H^r_{\mathop{\mathrm{crys}}} (X/W(k))_{\mathbf{Q}}$ (see [@AM Cor. 3.3]). Roughly speaking, this relies on the fact that the Dieudonné module of $p$-typical curves on the formal group $\widehat{\mathbb G}_m$ is $W(k)$ with the usual $F$ and $V$-operators. In [@AM Qn. (b)], the authors raised the question of whether there is a way to recover the slope $[i,i+1)$ part of $H^r_{\mathop{\mathrm{crys}}} (X/W(k))_{\mathbf{Q}}$ via the formalism of $p$-typical curves on certain group valued functors. This question was answered by Bloch [@bloch] under the hypothesis that $p > \dim X$ by studying $p$-typical curves on symbolic part of the higher algebraic $K$-groups, generalizing the role played by ${{\mathbb G}}_m$ when $i=0.$ The possibility of removing some of these assumptions was expressed in [@Kato p. 635, Rmk. 2]. The goal of our paper is to revisit the above question of Artin and Mazur. We show that instead of using the algebraic $K$-groups, one may simply use the $p$-adic Tate twists $\mathbf{Z}_p (n)[n]$ from [@BMS2], which, in some sense, generalizes the role played by the $p$-adic completion of $\mathbb G_m$ when $n=1.$ More precisely, we prove the following: **Theorem 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then for every $n \ge 0,$ we have a natural isomorphism $$\prod_{I_p}\mathbb{L}W\Omega^{n-1}_S \simeq \mathrm{fib} \left(\varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k)[n] \to \mathbf{Z}_p(n)(S)[n]\right),$$ where $I_p$ denotes the set of positive integers coprime to $p.$* In the above, $\mathbb{L}W\Omega^{n-1}_S$ denotes the animated de Rham--Witt forms as in [Definition 1](#derhamw){reference-type="ref" reference="derhamw"}. By analogy with the classical work of Cartier [@Cart], the right hand side may be regarded as curves on the functor $\mathbf{Z}_p (n)[n]$; which is further equipped with Frobenius and Verschiebung operators $F_m, V_m$ for all $m \ge 0$ (see [Construction 1](#constrans){reference-type="ref" reference="constrans"}). Let $\mathbb{D}(\mathbf{Z}_p(n)[n]_S)\coloneqq \bigcap_{(m,p)=1, m>1} \mathrm{fib}(F_m)$ (see [Construction 1](#ptypica){reference-type="ref" reference="ptypica"}), which may be regarded as $p$-typical curves on the functor $\mathbf{Z}_p(n)[n].$ Note that $\mathbb{D}(\mathbf{Z}_p(n)[n]_S)$ is naturally equipped with the operators $F\coloneqq F_p$ and $V\coloneqq V_p$. As a corollary of [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"}, one obtains **Corollary 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. 
Then for each $n \ge 0,$ we have a natural isomorphism $$\mathbb{L}W \Omega_S^{n-1} \simeq \mathbb{D}(\mathbf{Z}_p(n)[n]_S),$$ which is compatible with the $F$ and $V$ defined on both sides.* **Remark 1**. Let us explain [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"} and [Corollary 1](#coro){reference-type="ref" reference="coro"} in the case $n=1.$ For a quasisyntomic $\mathbf{F}_p$-algebra $S,$ one has $\mathbf{Z}_p(1) (S)[1]\simeq R\Gamma_{\mathrm{\acute{e}t}} (S, \mathbb G_m)^{\wedge_p}.$ More or less by definition, it then follows that the right hand side of [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"} is isomorphic to the ring of big Witt vectors of $S;$ which is isomorphic to $\prod_{I_p}W(S),$ where $W(S)$ denotes the ring of $p$-typical Witt vectors of $S$ (see, e.g., [@bloch Prop. 3.6]). Thus [Corollary 1](#coro){reference-type="ref" reference="coro"} in this case says that $W(S) \simeq \mathbb{D}(\mathbf{Z}_p(1)[1]_S).$ **Remark 1**. Note that by the degeneration of the slope spectral sequence as proved in [@Luc1], the slope $[i, i+1)$ part of $H^r_{\mathop{\mathrm{crys}}}(X/W(k))_{\mathbf{Q}}$ identifies with $H^{r-i}(X, W\Omega^i_X )_\mathbb{Q}$. Thus, our formula recovers the slopes desired in [@AM Qn. (b)]. Also, [Corollary 1](#coro){reference-type="ref" reference="coro"} gives a way to reconstruct the de Rham--Witt forms purely from the $p$-adic Tate twists $\mathbf{Z}_p(n).$ Let us now mention the main ideas that go into the proof of [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"}, which, in particular, uses some of the recent techniques introduced in $p$-adic geometry by Bhatt--Morrow--Scholze. In [@Hess], Hesselholt proved that there is a natural isomorphism $$\label{hesselholt} \mathrm{TR}(S)[1]\simeq \mathrm{fib} (\varprojlim_k \mathrm{TC}(S[t]/t^k) \to \mathrm{TC}(S)),$$where $\mathrm{TR}$ denotes topological restriction homology and $\mathrm{TC}$ denotes topological cyclic homology[^1]. Further, Hesselholt [@Hess Thm. C] showed that if $S$ is smooth, then $\pi_* \mathrm{TR}(S,p) \simeq W\Omega^*_S,$ where $\mathrm{TR}(S,p)$ denotes $p$-typical part of $\mathrm{TR}(S).$ In [@BMS2], Bhatt--Morrow--Scholze constructed a "motivic\" filtration $\mathop{\mathrm{Fil}}^* \mathrm{TC}(S)$ on $\mathrm{TC}(S)$ where the graded pieces $\mathrm{gr}^n \mathrm{TC}(S)$ are given by $\mathbf{Z}_p (n)(S)[2n].$ Using techniques similar to [@BMS2] and animating the theory of de Rham Witt forms ([Definition 1](#derhamw){reference-type="ref" reference="derhamw"}), we obtain the following: **Proposition 1**. *Let $A$ be an $\mathbf{F}_p$-algebra. There is a descending exhaustive complete $\mathbf{Z}$-indexed filtration $\mathrm{Fil}^*_\mathrm{} \mathrm{TR}(A,p)$ on $\mathrm{TR}(A,p)$ such that $\mathrm{gr}^n_\mathrm{} \mathrm{TR}(A,p) \simeq \mathbb{L}W\Omega^n_A [n].$* Therefore, we pursue the more general question of obtaining a filtered version of [\[hesselholt\]](#hesselholt){reference-type="ref" reference="hesselholt"}. More precisely, we prove the following: **Theorem 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then we have a natural isomorphism $$(\mathop{\mathrm{Fil}}^{*-1} \mathrm{TR}(S))[1] \simeq \varprojlim_k \mathrm{fib} \left(\mathop{\mathrm{Fil}}^* \mathrm{TC}(S[t]/t^k) \to \mathop{\mathrm{Fil}}^* \mathrm{TC}(S)\right).$$* Note that passing to graded pieces yields [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"}. 
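To spell out this passage to graded pieces: taking $\mathrm{gr}^n$ of the isomorphism in [Theorem 1](#mainthm2intro){reference-type="ref" reference="mainthm2intro"} and using $\mathrm{gr}^n \mathrm{TC}(R) \simeq \mathbf{Z}_p(n)(R)[2n]$, together with the fact that limits and shifts commute with fibers, gives $$\mathrm{gr}^{n-1}\, \mathrm{TR}(S)[1] \simeq \mathrm{fib}\left(\varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k) \to \mathbf{Z}_p(n)(S)\right)[2n].$$ Granting the identification $\mathrm{gr}^{n-1}\, \mathrm{TR}(S) \simeq \prod_{I_p} \mathbb{L}W\Omega^{n-1}_S[n-1]$ (this is implicit in comparing the two statements; it combines the $p$-typical filtration above with the decomposition of $\mathrm{TR}$ into its $p$-typical pieces), shifting by $[-n]$ recovers the isomorphism of [Theorem 1](#mainthmintro){reference-type="ref" reference="mainthmintro"}.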
In order to prove [Theorem 1](#mainthm2intro){reference-type="ref" reference="mainthm2intro"}, we crucially use the technique of quasisyntomic descent introduced in [@BMS2]. This allows one to reduce to the case when $S$ is a quasiregular semiperfect algebra. In this case, the filtration $\mathop{\mathrm{Fil}}^* \mathrm{TC}(S)$ is understood concretely by the work of [@BMS2]. In [Corollary 1](#cor3){reference-type="ref" reference="cor3"}, by studying the animated de Rham Witt forms, we prove that if $S$ is a quasiregular semiperfect algebra, then $\mathop{\mathrm{Fil}}^n \mathrm{TR}(S)$ is given by $\tau_{\ge 2n} \mathrm{TR}(S)$. Relatedly, we use the animated de Rham--Witt forms to give a different proof of the following result of Darrell and Riggenbach [@Noah Thm. A]. **Proposition 1**. *Let $S$ be a quasiregular semiperfect algebra. Then $\pi_* \mathrm{TR}(S)$ is concentrated in even degrees.* Back to [Theorem 1](#mainthm2intro){reference-type="ref" reference="mainthm2intro"}, one further needs to understand $\varprojlim_{k} \mathop{\mathrm{Fil}}^* \mathrm{TC}(S[t]/t^k),$ when $S$ is quasiregular semiperfect. We show that this is given by the "odd filtration\": $$\label{sharpbound} \varprojlim_k \mathop{\mathrm{Fil}}^n \mathrm{TC}(S[t]/t^k) \simeq \tau_{\ge 2n-1} \varprojlim_k \mathrm{TC}(S[t]/t^k)$$(see [Proposition 1](#prop1){reference-type="ref" reference="prop1"}). In order to show this, we require certain estimates on $\varprojlim _k \mathbf{Z}_p(n)(S[t]/t^k).$ To this end, we show the following, which, along with [Lemma 1](#filtrationlemma){reference-type="ref" reference="filtrationlemma"}, implies [\[sharpbound\]](#sharpbound){reference-type="ref" reference="sharpbound"}. **Proposition 1**. *Let $S$ be a quasiregular semiperfect ring. Then $$\varprojlim_k \mathbf{Z}_p(i)(S[t]/t^k) \in D_{[-1,0]} (\mathbf{Z}_p).$$* The arguments for proving the above proposition relies on studying properties of Nygaard filtration on derived crystalline cohomology as well as Hodge and conjugate filtration on derived de Rham cohomology. This result is the content of [3](#sec3){reference-type="ref" reference="sec3"}. In [4](#finalsec){reference-type="ref" reference="finalsec"}, we put together the knowledge of all these filtrations to prove [Theorem 1](#mainthm2intro){reference-type="ref" reference="mainthm2intro"}. ## Conventions: {#conventions .unnumbered} We will freely use the language of $\infty$-categories as in [@Lur09], more specifically, the language of stable $\infty$-categories [@luriehigher]. For an ordinary commutative ring $R,$ we will let $D(R)$ denote the derived $\infty$-category of $R$-modules, so that it is naturally equipped with a $t$-structure and $D_{\ge 0}(R)$ (resp. $D_{\le 0} (R)$) denotes the connective (resp. coconnective) objects. For a map $W \to V$, $V/W$ will mean the cofiber unless otherwise mentioned. If $\mathcal C$ is a stable $\infty$-category, its associated category of pro-objects $\mathrm{Pro}(\mathcal{C})$ is also a stable $\infty$-category[^2]. Let $\mathrm{Poly}_R$ denote the category of finitely generated polynomial algebras and $\mathrm{Alg}_R$ denote the category of ordinary commutative $R$-algebras. Then any functor $F: \mathrm{Poly}_R \to \mathcal{D}$ can be left Kan extended to a functor $\underline{F}: \mathrm{Ani}(\mathrm{Alg}_R) \to \mathcal{D},$ where $\mathrm{Ani}(\mathrm{Alg}_R)$ denotes the $\infty$-category of animated $R$-algebras. 
The functor $\underline{F},$ or $\underline{F} \mid_{\mathrm{Alg}_R}$ will be called the animation of $F.$ If $A$ is an algebra over a field of characteristic $p>0$, we will write $A^{(p^n)}$ to denote its $n$-fold Frobenius twist. $W(A)$ will denote the ring of $p$-typical Witt vectors. ## Acknowledgement {#acknowledgement .unnumbered} We are very thankful to Bhargav Bhatt and Peter Scholze for helpful conversations and suggestions. We would further like to thank Luc Illusie, Akhil Mathew, Alexander Petrov, Noah Riggenbach, Arpon Raksit and Lucy Yang for helpful discussions regarding the paper. We are especially grateful to Luc Illusie for many detailed comments and questions on a draft version of this paper. The first author is supported by the PD Soros Fellowship and NSF DGE-2140743. The second author acknowledges support from the Max Planck Institute for Mathematics, Bonn and the University of British Columbia. # Animated de Rham--Witt theory and filtration on $p$-typical $\mathrm{TR}$ {#sec2} Let $k$ be a perfect field of characteristic $p>0.$ In this section, we discuss the "motivic\" filtration on $\mathrm{TR}(A,p)$ for a $k$-algebra $A.$ In order to do so, we discuss an animated form of the theory of de Rham--Witt forms from [@Luc1]. As demonstrated in [@Luc1], the theory of Cartier operators (see [@Luc1 § 2]) play an important role in analyzing the de Rham--Witt forms via devissage. To this end, we will begin by discussing an animated form of Cartier operators, which would play a similar role in analyzing the animated de Rham--Witt forms. We will focus on understanding the animated Cartier operators in the case of quasiregular semiperfect algebras, which determines the entire theory via quasisyntomic descent. **Construction 1** (Animated Cartier operators). Let $A \in \mathrm{Alg}_k.$ For each $i \ge 0,$ we define $Z_n \mathbb{L}\Omega^i_A$ to be the animation of the functor $Z_n \Omega^i_{(\cdot)}: \mathrm{Poly}_k \to D(k)$ from [@Luc1 § 2.2.2]. Similarly, we define $B_n \mathbb{L}\Omega^i_A$ to be the animation of the functor $B_n \Omega^i_{(\cdot)}: \mathrm{Poly}_k \to D(k)$ from [@Luc1 § 2.2.2]. There are canonical maps $B_n \mathbb{L}\Omega^i_A \to Z_n \mathbb{L}\Omega^i_A$ and $Z_n \mathbb{L}\Omega^i_A \to \mathbb{L}\Omega^i_A.$ Animating the Cartier isomorphism (see [@Luc1 § 2]), we obtain a fiber sequence $$\label{buyfan} B_n \mathbb{L}\Omega^i_A \to Z_n \mathbb{L}\Omega^i_A \to \mathbb{L}\Omega^i _{A^{(p^n)}}.$$By construction, for each $n \ge 0,$ we have the following natural fiber sequences in $D(\mathbf{Z})$: $$\label{cart1} B_1 \mathbb{L}\Omega^i_A \to Z_{n+1} \mathbb{L}\Omega^i_A \xrightarrow[]{C} Z_{n} \mathbb{L}\Omega^i_{A}$$$$\label{cart2} B_1 \mathbb{L}\Omega^i_A \to B_{n+1} \mathbb{L}\Omega^i_A \xrightarrow[]{C} B_{n} \mathbb{L}\Omega^i_A.$$ Note that we set $B_0 \mathbb{L}\Omega^i_A = 0$ and $Z_0 \mathbb{L}\Omega^i_A = \mathbb{L}\Omega^i_A.$ By construction, it follows that for all $n \ge 0$, we have $B_n \mathbb{L}\Omega^0_A \simeq 0$ and $Z_n \mathbb{L}\Omega^0_A \simeq A^{(p^n)}.$ Again, by construction, we have a fiber sequence $$\label{cartier3} Z_1 \mathbb{L}\Omega^i_A \to \mathbb{L}\Omega^i_A \xrightarrow[]{d} B_1 \mathbb{L}\Omega^{i+1}_A.$$ **Proposition 1**. 
*The functors from $\mathrm{Alg}_k^{\mathrm{op}}$ to $D(\mathbf{Z})$ determined by $A \mapsto B_n \mathbb{L}\Omega^i_A$ and $A \mapsto Z_n \mathbb{L}\Omega^i_A$ are fpqc sheaves for all $n \ge 0$.* *Proof.* Note that the claim holds when $i=0.$ Let us suppose that for a fixed $i \ge 0,$ we have shown that the functors $A \mapsto B_n \mathbb{L}\Omega^i_A$ and $A \mapsto Z_n \mathbb{L}\Omega^i_A$ are fpqc sheaves. Since the functor $A \mapsto \mathbb{L}\Omega_A^j$ satisfies fpqc descent for all $j$ (see [@BMS2 Thm. 3.1]), by [\[cartier3\]](#cartier3){reference-type="ref" reference="cartier3"}, $A \mapsto B_1 \mathbb{L}\Omega_A^{i+1}$ is an fpqc sheaf. By [\[cart2\]](#cart2){reference-type="ref" reference="cart2"}, $A \mapsto B_n \mathbb{L}\Omega_A^{i+1}$ is an fpqc sheaf for all $n.$ By [\[buyfan\]](#buyfan){reference-type="ref" reference="buyfan"}, $A \mapsto Z_n \mathbb{L}\Omega_A^{i+1}$ is an fpqc sheaf for all $n.$ Therefore, by induction on $i$, we are done. ◻ **Remark 1**. For $A \in \mathrm{Alg}_k,$ the objects $Z_n \mathbb{L}\Omega^i_A$ and $B_n \mathbb{L}\Omega^i_A$ can be naturally viewed as objects of $D(A^{(p^n)}).$ The fiber sequence [\[buyfan\]](#buyfan){reference-type="ref" reference="buyfan"} lifts to a fiber sequence in $D(A^{(p^n)}).$ In order to further understand $Z_1\mathbb{L}\Omega^i_A$ via descent, it would be useful to relate it to the conjugate filtration $\mathrm{Fil}^*_{\mathrm{conj}} \mathbb{L}\Omega^*_A$ and the Hodge filtration $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A$ on derived de Rham cohomology. Recall that when $A$ is a polynomial algebra, then $Z_1\mathbb{L}\Omega^i_A \simeq Z_1 \Omega^i_A := \mathrm{Ker}(\Omega^i_A \xrightarrow{d} \Omega^{i+1}_A).$ We begin by observing the following: **Proposition 1**. *Let $A$ be a polynomial algebra over $k.$ Then $$Z_1\Omega^i_A[-i] \simeq \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \times_{\mathbb{L}\Omega^*_A} \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A.$$* *Proof.* Since $A$ is a polynomial algebra over $k,$ we have $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \simeq \tau_{\ge -i} \Omega^*_A$ and $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega_A^* \simeq \Omega_A^{\ge i}.$ Note that there is a natural map $Z_1\Omega_A^i[-i] \simeq \tau_{\ge -i} \Omega_A^{\ge i} \to \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \times_{\mathbb{L}\Omega^*_A} \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A.$ Since we have a fiber sequence $\Omega_A^{\ge i} \to \Omega_A^* \to \Omega_A^{\le i-1},$ we obtain a natural isomorphism $$\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \times_{\mathbb{L}\Omega^*_A} \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A \xrightarrow{\simeq} \mathrm{fib} (\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega_A^* \to \Omega_A^{\le i-1}).$$ By computing homotopy groups, one sees that the map $Z_1\Omega_A^i[-i] \to \mathrm{fib} (\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega_A^* \to \Omega_A^{\le i-1})$ is an isomorphism, which finishes the proof. ◻ By animating [Proposition 1](#intersection){reference-type="ref" reference="intersection"}, we obtain the following: **Corollary 1**. 
*Let $A \in \mathrm{Alg}_k.$ Then $$Z_1\mathbb{L}\Omega^i_A[-i] \simeq \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \times_{\mathbb{L}\Omega^*_A} \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A.$$* **Proposition 1**. *Let $A$ be a perfect ring. Then $B_n \mathbb{L}\Omega^i_A \simeq Z_n \mathbb{L}\Omega^i_A \simeq 0$ for all $i >0.$* *Proof.* Since $A$ is perfect, we have $\mathbb{L}\Omega_A^i =0$ for $i>0.$ Therefore, using [\[buyfan\]](#buyfan){reference-type="ref" reference="buyfan"}, it suffices to prove that $B_n \mathbb{L}\Omega^i_A \simeq 0$ for $i>0.$ By [\[cart2\]](#cart2){reference-type="ref" reference="cart2"}, this reduces to the case $n=1.$ Using [\[cartier3\]](#cartier3){reference-type="ref" reference="cartier3"} and descending induction on $i,$ it suffices to show that the map $\mathrm{can}: Z_1 \mathbb{L}\Omega^0_A \to \mathbb{L}\Omega^0_A$ is an isomorphism, which follows because $A$ is perfect. ◻ Before proceeding further, let us recall two definitions from [@BMS2]. **Definition 1** ([@BMS2 Def. 4.10]). Let $f: A \to B$ be a map of $\mathbf{F}_p$-algebras. Then $f$ is said to be quasisyntomic if it is flat and the cotangent complex $\mathbb{L}_{B/A}$ has Tor amplitude in homological degrees $[0,1].$ **Definition 1** ([@BMS2 Def. 8.8]). *An $\mathbf{F}_p$-algebra $S$ is said to be semiperfect if the natural map $S^\flat:= \varprojlim_{x \to x^p} S \to S$ is surjective. $S$ is called quasiregular semiperfect if $S$ is semiperfect and $\mathbb{L}_{S/\mathbf{F}_p}$ is a flat $S$-module supported in homological degree 1.* **Proposition 1**. *Let $A$ be a quasiregular semiperfect algebra. Then $Z_1 \mathbb{L}\Omega^i_A[-i]$ is discrete for all $i \ge 0.$* *Proof.* Let $I:= \mathrm{Ker}(A^\flat \to A).$ By [@BMS2], one knows that $\mathbb{L}\Omega^*_A \simeq D_{A^\flat}(I).$ Further, the conjugate filtration $\mathrm{Fil}^i_{\mathrm{conj}}\mathbb{L}\Omega^*_A$ is also discrete and is given by the $A^\flat$-submodule of $D_{A^\flat}(I)$ generated by elements of the form $a_1^{[l_1]} \cdots a_m^{[l_m]}$ such that $m \ge 0,$ $a_i \in I$ and $\sum _{u=1}^{m} l_u < (i+1)p.$ The Hodge filtration on $\mathbb{L}\Omega^*_A$ identifies with the divided power filtration on $D_{A^\flat}(I),$ i.e., $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega_A^*$ is the ideal generated by elements of the form $a_1^{[l_1]} \cdots a_m^{[l_m]}$ such that $m \ge 0,$ $a_i \in I$ and $\sum _{u=1}^{m} l_u \ge i.$ In particular, note that the map $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A \to \mathbb{L}\Omega_A^*$ is injective. From the above description, we see that the composite map $$\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \to \mathbb{L}\Omega^*_A \to \mathbb{L}\Omega^*_A / \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A$$ is surjective. This implies that the fiber of $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \to \mathbb{L}\Omega^*_A / \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A$ must be discrete. By [Corollary 1](#sunday){reference-type="ref" reference="sunday"}, the fiber identifies with $Z_1\mathbb{L}\Omega_A^i [-i],$ which finishes the proof. ◻ **Proposition 1**. *Let $A$ be a quasiregular semiperfect algebra.
Then $B_n \mathbb{L}\Omega^i_A[-i]$ and $Z_n \mathbb{L}\Omega_A^i[-i]$ are discrete for all $n, i \ge 0.$* *Proof.* Note that $B_0 \mathbb{L}\Omega^i_A = 0$ by construction, and $Z_0 \mathbb{L}\Omega^i_A [-i]\simeq \mathbb{L}\Omega^i_A[-i] \simeq \Gamma^i_A (I/I^2),$ where $I := \mathrm{Ker}(A^\flat \to A),$ thus the claim holds when $n=0.$ By the proof of [Proposition 1](#coollemma){reference-type="ref" reference="coollemma"}, we see that if $A$ is a quasiregular semiperfect algebra, one has a natural identification $Z_1 \mathbb{L}\Omega_A^i[-i] \simeq \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A \cap \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A$ owing to the discreteness of all objects involved. The explicit description of $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A$ and $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \mathbb{L}\Omega^*_A$ in this case implies that the natural map $Z_1 \mathbb{L}\Omega^i_A[-i] \to \mathrm{gr}^i_{\mathop{\mathrm{conj}}} \mathbb{L}\Omega^*_A$ is surjective. Using the fiber sequence [\[buyfan\]](#buyfan){reference-type="ref" reference="buyfan"}, it follows that $B_1 \mathbb{L}\Omega^i_A[-i]$ is discrete. By [\[cart2\]](#cart2){reference-type="ref" reference="cart2"}, it inductively follows that $B_n \mathbb{L}\Omega^i_A[-i]$ is discrete for all $n \ge 1.$ Applying [\[buyfan\]](#buyfan){reference-type="ref" reference="buyfan"} again, and using the fact that $\mathbb{L}\Omega^i_A [-i]$ is discrete, we see that $Z_n \mathbb{L}\Omega^i_A[-i]$ is discrete for all $n \ge 1.$ ◻ Now, we will discuss animation of the de Rham--Witt theory from [@Luc1]. **Definition 1**. Let $\mathrm{Alg}_k$ denote the category of $k$-algebras. For each $i \ge 0,$ we define $$\mathbb{L}W_n \Omega^i_{(\cdot)}\colon \mathrm{Alg}_k \to D(W_n(k))$$ to be the animation of the functor $W_n \Omega^i_{(\cdot)}\colon \mathrm{Poly}_k \to D(W_n(k))$ defined in [@Luc1]. For $A \in \mathrm{Alg}_k,$ the object $\mathbb{L}W_n \Omega^i_A$ is naturally an object of $D(W_n(A))$; further, the usual operators on the de Rham--Witt complex extend to $\mathbb{L}W_n \Omega^i_A.$ In particular, we have $F\colon \mathbb{L}W_n \Omega^i _A\to \mathbb{L}W_{n-1} \Omega^{i}_A$, $V\colon \mathbb{L}W_n \Omega^i_A\to \mathbb{L}W_{n+1} \Omega^i_A$ which extend the Frobenius and Verschiebung maps. We also have a map $R\colon \mathbb{L}W_n \Omega^i_A \to \mathbb{L}W_{n-1} \Omega^{i}_A$ extending the restriction maps which allows one to obtain a pro-object $\mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_{A}$ in the derived $\infty$-category of $W(k)$-modules. 
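For instance (this is only a reminder of the classical picture and is not needed later), in weight $i=0$ one has $\mathbb{L}W_n \Omega^0_A \simeq W_n(A)$ (see Example 1 below), and the operators just introduced are the usual ones on $p$-typical Witt vectors: in Witt coordinates, $$R(a_0, \dots, a_{n-1}) = (a_0, \dots, a_{n-2}), \qquad V(a_0, \dots, a_{n-1}) = (0, a_0, \dots, a_{n-1}), \qquad F(a_0, \dots, a_{n-1}) = (a_0^p, \dots, a_{n-2}^p),$$ the formula for $F$ taking this simple form because $A$ is an $\mathbf{F}_p$-algebra.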
The operators $F$ and $V$ may be lifted to maps $F \colon \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_{A} \to \mathbb{L}W_{{% \mathord{\mathpalette\smallbullet@{0.7}}% }-1} \Omega^i _{A}$ and $V \colon \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_{A} \to \mathbb{L}W_{{% \mathord{\mathpalette\smallbullet@{0.7}}% }+1} \Omega^i _{A}.$ When $A$ is a polynomial algebra, it follows that $FV= VF= p;$ by animation, this gives natural fiber sequences of pro-objects $$\label{eq100} \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/F \to \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/p \to \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/V$$ and $$\label{eq101} \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/V \to\mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/p \to \mathbb{L}W_{% \mathord{\mathpalette\smallbullet@{0.7}}% } \Omega^i_A/F.$$ **Definition 1**. Let $\mathrm{Alg}_k$ denote the category of $k$-algebras. For each $i \ge 0,$ we define $$\mathbb{L}W \Omega^i_{(\cdot)}\colon \mathrm{Alg}_k \to D(W(k))$$ to be the functor determined by sending a $k$-algebra $A$ to $\mathbb{L}W\Omega^i_A\colon= \varprojlim _{n,R} \mathbb{L}W_n \Omega^i_A.$ **Example 1**. Note that $\mathbb{L}W_n \Omega_A^0 \simeq W_n(A)$ and therefore, $\mathbb{L}W\Omega_A^0 \simeq W(A),$ the usual ring of $p$-typical Witt vectors. We have natural maps $F,V\colon \mathbb{L}W\Omega^i_A \to \mathbb{L}W\Omega^i_A$ which are obtained by passing to the limit of the map of pro-objects above. We also obtain the following fiber sequences $$\label{eq200} \mathbb{L}W \Omega^i_A /F \to \mathbb{L}W \Omega^i_A/p \to \mathbb{L}W \Omega^i_A/V$$and $$\label{eq201} \mathbb{L}W \Omega^i_A/V \to \mathbb{L}W \Omega^i_A/p \to \mathbb{L}W \Omega^i_A /F.$$ The following proposition will allow us to calculate the animated de Rham--Witt sheaves by devissage. **Proposition 1**. *Let $A \in \mathrm{Alg}_k.$ For $r \ge 1,$ we have a fiber sequence $$\mathbb{L}W\Omega^{i-1}_A/F^r \to \mathbb{L}W\Omega^i_A/V^r \to \mathbb{L}W_r\Omega^i_A$$* *Proof.* When $r=1,$ the proposition follows from animating [@Luc1 Prop. 3.18] and passing to inverse limits. In general, our claim is a consequence of the following: ◻ **Lemma 1**. *Let $A \in \mathrm{Alg}_k$ Then we have a fiber sequence of pro-objects $$\mathbb{L}W_{{% \mathord{\mathpalette\smallbullet@{0.7}}% }-r}\Omega^{i-1}_A/F^r \xrightarrow[]{dV^r} \mathbb{L}W_{{% \mathord{\mathpalette\smallbullet@{0.7}}% }}\Omega^i_A/V^r \to \mathbb{L}W_r\Omega^i_A$$* *Proof.* Suppose that $A$ is a polynomial algebra. Let $\underline{R}: W_{n+1} \Omega^i_A [p^r] \to W_{n} \Omega^i_A [p^r]$ be the induced restriction maps. By [@Luc1 Prop. 3.4], $\underline{R}^r = 0;$ in particular, the same holds for $W_{n+1}\Omega^i_A[F^r]$ and $W_{n+1}\Omega^i_A[V^r]$. Therefore, the desired result follows from animation and passing to pro-objects, once we show that for a polynomial algebra $A,$ the following sequence is exact for $n \ge 2r,$ where the quotients are taken in the discrete sense: $$0 \to V^r W_{n-r} \Omega^{i-1}_A /p^r W_n \Omega^{i-1}_A \xrightarrow[]{d} W_n \Omega^i_A/V^r W_{n-r}\Omega^i_A \to W_r\Omega^i_A \to 0.$$ Using [@Luc1 Prop. 3.2], one obtains the exactness in the middle and right. 
For the injectivity of $d,$ let us suppose that there is an $x \in W_{n-r}\Omega^{i-1}_A$ such that $dV^r x = V^r y$ for some $y \in W_{n-r}\Omega^i_A.$ Applying $F^r$ on both sides (also using $FdV= d$ and $FV= p$), we obtain $dx = p^r y.$ Let $\overline{x} \in W_r \Omega^{i-1}_A$ be the restriction of $x.$ Therefore, we have $d \overline{x} = 0.$ It follows from [@Luc1 Prop. 3.21][^3] that there exists an $\alpha \in W_{2r}\Omega^{i-1}_A$ such that $\overline{x} = F^r \alpha.$ Now let $\widetilde{\alpha}, \widetilde{x} \in W\Omega^{i-1}_A$ be elements that restrict to $\alpha, x$ respectively. Then it follows that $\widetilde{x} - F^r \widetilde{\alpha}$ restricts to zero in $W_r \Omega^{i-1}_A.$ Since the restriction maps induce a quasi-isomorphism of complexes $W\Omega^*_A/p^r \to W_r\Omega^*_A$ (see [@Luc1 Cor. 3.17]), it follows that $\widetilde{x} - F^r\widetilde{\alpha} \in p^r W\Omega^{i-1}_A + d W\Omega^{i-2}.$ This implies that $V^r \widetilde{x} = p^r z$ for some $z \in W\Omega^{i-1}_A.$ By restriction, we obtain $V^rx = p^r \overline{z}$, where $\overline{z} \in W_n\Omega^{i-1}_A.$ This finishes the proof. ◻ To further analyze the animated de Rham--Witt sheaves, we use the following notion from [@Ben]. **Definition 1**. Let $M \in D(\mathbf{Z})$ and $V: M \to M$ be an endomorphism. We will say that $M$ is derived $V$-complete if $\varprojlim_V M =0.$ Let $M/V^n:= \mathrm{cofib} (M \xrightarrow[]{V^n}M).$ Then we have a fiber sequence $\varprojlim_{V}M \to M \to \varprojlim M/V^n,$ which implies that $M$ is derived $V$-complete if and only if the natural map $M \to \varprojlim M/V^n$ is an isomorphism. The $\infty$-category of derived $V$-complete objects are stable under limits. **Lemma 1**. *Let $M \in D(\mathbf{Z})$ and $V: M \to M$ be such that $M$ is derived $V$-complete. If $M/V$ is discrete then so is $M.$* *Proof.* We will prove that $M/V^n$ is discrete by induction on $n$. The diagram $M \xrightarrow{V} M \xrightarrow{V^{n-1}}M$ yields a fiber sequence $$M/V \to M/V^n \to M/V^{n-1}.$$ It follows inductively from the hypothesis that $M/V^n$ is discrete for all $n \ge 1$ and the natural maps $M/V^n \to M/V^{n-1}$ are surjective. Since $M \simeq \varprojlim M/V^n$ by derived $V$-completeness of $M$, the conclusion follows. ◻ **Lemma 1**. *Let $A \in \mathrm{Alg}_k.$ Then $\mathbb{L}W \Omega^i_A$ is derived $V$-complete for each $i \ge 0.$* *Proof.* Let us define $V'\colon \mathbb{L}W_{n} \Omega^i_A \to \mathbb{L}W_{n} \Omega^i_A$ as the composite of $V\colon \mathbb{L}W_n\Omega^i_A \to \mathbb{L}W_{n+1} \Omega^i_A$ with $R\colon \mathbb{L}W_{n+1} \Omega^i_A \to \mathbb{L}W_{n} \Omega^i_A.$ When $A$ is a polynomial algebra, it follows that $V'^n=0.$ Therefore, $\mathbb{L}W_n \Omega^i_A$ is derived $V'$-complete after animation. Passing to inverse limit over $n$, it follows that $\mathbb{L}W\Omega^i_A$ is derived $V$-complete. ◻ **Proposition 1**. *The functor $\mathrm{Alg}_k^{\mathrm{op}} \to D(\mathbf{Z})$ determined by $A \mapsto \mathbb{L}W \Omega^i_A$ is an fpqc sheaf.* *Proof.* We will use induction on $i;$ the case $i=0$ is clear. By [Lemma 1](#derV){reference-type="ref" reference="derV"}, we reduce to checking that $A \mapsto \mathbb{L}W\Omega^i_A/V$ is an fpqc sheaf. The latter follows inductively from [@BMS2 Theorem 3.1] and [Proposition 1](#Illusieprop){reference-type="ref" reference="Illusieprop"} (in the case $r=1$). ◻ **Remark 1**. 
The functor $\mathrm{Alg}_k^{\mathrm{op}} \to D(\mathbf{Z})$ determined by $A \mapsto \mathbb{L}W_r \Omega^i_A$ is also an fpqc sheaf; this follows from [Proposition 1](#Illusieprop){reference-type="ref" reference="Illusieprop"} and [Proposition 1](#flatdescent){reference-type="ref" reference="flatdescent"}. The importance of Cartier operators for our purposes is reflected in the following proposition. **Proposition 1**. *Let $A \in \mathrm{Alg}_k.$ Then $\mathbb{L}W \Omega^i_A/ V \simeq \varprojlim_{C} Z_n \mathbb{L}\Omega^i_A$ and $\mathbb{L}W\Omega^i_A/ F \simeq \varprojlim_{C} B_{n} \mathbb{L}\Omega^{i+1}_A.$* *Proof.* Let $A$ be a polynomial algebra over $k.$ Then by [@Luc1 Prop. 3.11], there is a natural map $\mathrm{cofib}(W_{n} \Omega^i_A \xrightarrow{V} W_{n+1} \Omega^i_A ) \xrightarrow{F^n} Z_n \Omega^i_A$ of $\mathbb{N}$-indexed (via the restriction maps $W_{n+1}\Omega^i_A \to W_{n}\Omega^i_A$ on the left-hand side and via $C$ on the right-hand side) objects whose fiber is an $\mathbb{N}$-indexed object whose transition maps are all zero. By animation and taking inverse limit over $n \in \mathbb{N},$ we obtain $$\mathbb{L}W \Omega^i_A/ V \simeq \varprojlim_{C} Z_n \mathbb{L}\Omega^i_A.$$ The other assertion is deduced similarly from loc. cit. ◻ **Proposition 1**. *Let $S$ be a perfect ring. Then $\mathbb{L}W \Omega^i_S = 0$ for $i>0.$* *Proof.* Follows from [Proposition 1](#perfectringcart){reference-type="ref" reference="perfectringcart"} and [Proposition 1](#llll){reference-type="ref" reference="llll"}. ◻ **Proposition 1**. *Let $S$ be a quasiregular semiperfect algebra. Then $\mathbb{L}W\Omega^i_S[-i]$ is discrete for each $i \ge 0$.* *Proof.* Using [Lemma 1](#lemma1){reference-type="ref" reference="lemma1"}, it suffices to prove that $(\mathbb{L}W \Omega^i_S/V) [-i]$ is discrete. By [Proposition 1](#llll){reference-type="ref" reference="llll"}, $\mathbb{L}W \Omega^i_S/V \simeq \varprojlim Z_n \mathbb{L}\Omega^i_S.$ By [Proposition 1](#prop45){reference-type="ref" reference="prop45"}, $Z_n \mathbb{L}\Omega^i_S[-i]$ is discrete. Since (by [Proposition 1](#prop45){reference-type="ref" reference="prop45"}), $B_1 \mathbb{L}\Omega^i_S[-i]$ is also discrete, using the fiber sequence [\[cart1\]](#cart1){reference-type="ref" reference="cart1"}, we see that the maps $Z_{n+1} \mathbb{L}\Omega^i_S[-i] \xrightarrow{C} Z_n \mathbb{L}\Omega^i_S[-i]$ are surjections. This finishes the proof. ◻ Finally, we begin our discussion of $\mathrm{TR}.$ **Construction 1**. Note that for a $k$-algebra $A,$ the functor $A \mapsto \mathrm{TR}^r (A,p)$ is the animation of its restriction to the full subcategory of finitely generated polynomial algebras. Further, when $A$ is smooth, by [@Hess Thm.
B], we have $$\pi_n \mathrm{TR}^r(A,p) \simeq \bigoplus_{\substack{0 \le i \le n\\ n-i\,\text{is even}}} W_r\Omega^i_A.$$ Therefore, by animating the decreasing Postnikov filtration on $\mathrm{TR}^r(A,p)$ (given by $\tau_{\ge *}\mathrm{TR}^r(A,p)$) from the category of polynomial algebras, one can equip $\mathrm{TR}^r(A,p)$ with the structure of a filtered object $\mathrm{Fil}^*_{\mathrm{M}} \mathrm{TR}^r(A,p)$ such that $$\label{eq1} \mathrm{gr}^n_{\mathrm{M}} \mathrm{TR}^r(A,p) \cong \displaystyle{\bigoplus_{\substack{0 \le i \le n\\ n-i\,\text{is even}}}\mathbb{L}W_r\Omega^i_A[n]}$$ Note that there are natural restriction maps $R\colon \mathrm{TR}^{r+1}(A,p) \to \mathrm{TR}^{r}(A,p)$, and one sets $$\mathrm{TR}(A,p) \colon= \varprojlim_{r,R} \mathrm{TR}^r(A,p).$$ By passing to the inverse limit over the restriction maps, one can equip $\mathrm{TR}(A,p)$ with the structure of a filtered object $\mathrm{Fil}^*_{\mathrm{M}} \mathrm{TR}(A,p)$. Note that by [@Hess Thm. B], the map $R\colon \mathrm{TR}^{r+1}(A,p) \to \mathrm{TR}^{r}(A,p)$ induces a map $R_{n,r,i}\colon \mathbb{L}W_{r+1} \Omega^i_A[n] \to \mathbb{L}W_{r} \Omega^i_A [n]$ at the level of the $i$-th summand of $\mathrm{gr}^n$ that is equivalent to $(p \lambda_{r+1})^{\frac{n-i}{2}}R[n]$, where $\lambda_{r+1} \in (\mathbf{Z}/p^{r+1} \mathbf{Z})^\times.$ It follows from this description that $\varprojlim _{r,R_{n,r,i}} \mathbb{L}W_{r}\Omega^i_A [n] \simeq 0$ if $n>i.$ Therefore, we see that $\mathrm{gr}^n \mathrm{TR}(A,p) \simeq \mathbb{L}W\Omega^n_A [n].$ Let us summarize this construction in the following proposition. **Proposition 1**. *Let $A$ be a $k$-algebra. There is a descending exhaustive complete $\mathbf{Z}$-indexed filtration $\mathrm{Fil}^* \mathrm{TR}(A,p)$ on $\mathrm{TR}(A,p)$ such that $\mathrm{gr}^n \mathrm{TR}(A,p) \simeq \mathbb{L}W\Omega^n_A [n].$ This may be called the "motivic" filtration on $\mathrm{TR}(A,p).$* *Proof.* The construction of the filtration and the description of the graded pieces follow from the above discussion. In order to prove the completeness of the filtration $\mathop{\mathrm{Fil}}^* \mathrm{TR}(A,p)$, by construction, it would be enough to show the completeness of $\mathop{\mathrm{Fil}}^*\mathrm{TR}^r (A,p).$ To this end, it is enough to show that $\mathop{\mathrm{Fil}}^k \mathrm{TR}^r(A,p)$ is $k$-connective. By considering sifted colimits, this reduces to the case when $A$ is a polynomial algebra, in which case the result follows since the filtration is given by the Postnikov filtration. ◻ **Corollary 1** ([@Ben Thm. 6.14]). *Let $S$ be a perfect ring. Then $\mathrm{TR}(S,p) \simeq W(S).$* *Proof.* Follows from [Proposition 1](#perfectwitt){reference-type="ref" reference="perfectwitt"} and [Proposition 1](#trr){reference-type="ref" reference="trr"}. ◻ Let us now consider the category of quasisyntomic $k$-algebras $\mathrm{qSyn}_k,$ thought of as a Grothendieck site equipped with the quasisyntomic topology. If $A$ is in $\mathrm{qSyn}_k,$ we will give a different construction of $\mathop{\mathrm{Fil}}^* \mathrm{TR}(A,p)$ by quasisyntomic descent that will be important later in this paper. By [@BMS2 Prop. 4.31] any quasisyntomic sheaf on $\mathrm{qSyn}_k$ is determined by its values on quasiregular semiperfect algebras.
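For instance (a standard example, recorded only to illustrate the descent strategy), the quasisyntomic $k$-algebra $k[x]$ admits the quasisyntomic cover $k[x] \to k[x^{1/p^\infty}]$, and the terms of the associated Čech nerve, such as $k[x^{1/p^\infty}] \otimes_{k[x]} k[x^{1/p^\infty}]$, are quasiregular semiperfect; the value of a quasisyntomic sheaf on $k[x]$ is then computed as the totalization of its values on this Čech nerve.
**Proposition 1**.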
*Let $A \in \mathrm{qSyn}_k.$ The functor $A \mapsto \mathrm{Fil}^* \mathrm{TR}(A,p)$ is a quasisyntomic sheaf with values in filtered spectra.* *Proof.* Since $\mathop{\mathrm{Fil}}^* \mathrm{TR}(A,p)$ is a complete descending filtration on $\mathrm{TR}(A,p)$, by considering limits, it would be enough to prove that $A \mapsto \mathrm{gr}^n\mathrm{TR}(A,p) \simeq \mathbb{L}W\Omega^n_A [n]$ is a sheaf for all $n.$ The latter follows from [Proposition 1](#flatdescent){reference-type="ref" reference="flatdescent"}. ◻ **Proposition 1**. *Let $R$ be a quasiregular semiperfect algebra. Then $\pi_* \mathrm{TR}(R,p)$ is concentrated in even degrees.* *Proof.* Follows from [Proposition 1](#trr){reference-type="ref" reference="trr"} and [Proposition 1](#fixthis){reference-type="ref" reference="fixthis"}. ◻ The following proposition, along with [Proposition 1](#descent){reference-type="ref" reference="descent"}, gives an alternative way to understand the filtration on $p$-typical $\mathrm{TR}$ via quasisyntomic descent (see [@BMS2 Prop. 4.31]). **Proposition 1**. *Let $R$ be a quasiregular semiperfect algebra. Then $$\mathop{\mathrm{Fil}}^n \mathrm{TR}(R,p) \simeq \tau_{\ge 2n} \mathrm{TR}(R,p).$$* *Proof.* Follows because $\mathop{\mathrm{gr}}^n \mathrm{TR}(R,p)[-2n]$ is discrete. ◻ Note that for any $\mathbf{F}_p$-algebra $A,$ there is a canonical product decomposition $\mathrm{TR}(A) \simeq \prod_{(k,p)=1} \mathrm{TR}(A,p).$ One may define a filtration $\mathop{\mathrm{Fil}}^* \mathrm{TR}(A) \coloneqq \prod_{(k,p)=1} \mathop{\mathrm{Fil}}^* \mathrm{TR}(A,p).$ This equips $\mathrm{TR}(A)$ with a descending complete exhaustive filtration such that $\mathop{\mathrm{gr}}^n \mathrm{TR}(A) \simeq \prod_{(k,p)=1}\mathbb{L}W\Omega^n_A [n].$ Our previous discussion on $\mathrm{TR}(A,p)$ implies the following corollaries. **Corollary 1**. *Let $A \in \mathrm{qSyn}_k.$ The functor $A \mapsto \mathrm{Fil}^* \mathrm{TR}(A)$ is a quasisyntomic sheaf with values in filtered spectra.* **Corollary 1**. *Let $R$ be a quasiregular semiperfect algebra. Then $\pi_* \mathrm{TR}(R)$ is concentrated in even degrees, and $$\mathop{\mathrm{Fil}}^n \mathrm{TR}(R) \simeq \tau_{\ge 2n} \mathrm{TR}(R).$$* # Pro-system of truncated polynomial rings {#sec3} Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. In [@BMS2], Bhatt--Morrow--Scholze constructed a "motivic\" filtration $\mathop{\mathrm{Fil}}^* \mathrm{TC}(S)$ on $\mathrm{TC}(S)$ where the graded pieces $\mathrm{gr}^n \mathrm{TC}(S)$ are given by $\mathbf{Z}_p (n)(S)[2n].$ The goal of this section is to identify the induced motivic filtration on $\varprojlim \mathrm{TC}(R[t]/t^k)$, when $R$ is a quasiregular semiperfect algebra, with the "odd filtration\" (see [Proposition 1](#prop1){reference-type="ref" reference="prop1"}). We will use the description of the Tate twists $\mathbf{Z}_p(n)$ in terms of Nygaard filtration on derived crystalline cohomology. To this end, we recall a few notations and basic properties. **Notation 1**. Let $A$ be an $\mathbf{F}_p$-algebra. We will use $\mathop{\mathrm{dR}}$, $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}$ and $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}$ to denote the functors $\mathbb{L}\Omega^*_{(\cdot)},$ $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}}\mathbb{L}\Omega^*_{(\cdot)}$ and $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{conj}}}\mathbb{L}\Omega^*_{(\cdot)}$. 
Let $\widehat{\mathop{\mathrm{dR}}}$ denote the completion of $\mathop{\mathrm{dR}}$ with respect to $\mathrm{Fil}^*_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}$, so that it is naturally equipped with a filtration $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}} \widehat {\mathop{\mathrm{dR}}}.$ By construction, $\mathrm{gr}^n_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}(A) \simeq \mathrm{gr}^n_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(A) \simeq \wedge^n\mathbb{L}_{A/\mathbf{F}_p}[-n].$ By animating the Cartier isomorphism, $\mathrm{gr}^n_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(A) \simeq \wedge^n \mathbb{L}_{A^{(p)}/ \mathbf{F}_p}[-n]$ (see [@Bha12 Prop. 3.5]). The following proposition discusses certain monoidal properties of these functors that will be useful later. **Proposition 1**. *Let $A$ and $B$ be two $\mathbf{F}_p$-algebras. Then,* 1. *$\mathop{\mathrm{dR}}(A \otimes_{\mathbf{F}_p} B) \simeq \mathop{\mathrm{dR}}(A) \otimes_{\mathbf{F}_p} \mathop{\mathrm{dR}}(B).$* 2. *$\mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{Hodge}}}\mathop{\mathrm{dR}}(A \otimes_{\mathbf{F}_p} B) \simeq \mathop{\mathrm{colim}}_{j+k \ge n} \mathop{\mathrm{Fil}}^{j}_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}(A) \otimes_{\mathbf{F}_p} \mathop{\mathrm{Fil}}^{k}_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}(B).$* 3. *$\mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{conj}}}\mathop{\mathrm{dR}}(A \otimes_{\mathbf{F}_p} B) \simeq \mathop{\mathrm{colim}}_{j+k \le n} \mathop{\mathrm{Fil}}^{j}_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(A) \otimes_{\mathbf{F}_p} \mathop{\mathrm{Fil}}^{k}_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(B).$* *Proof.* By animation, these can be checked by reducing to polynomial algebras. For polynomial algebras, one can further reduce it to checking on graded pieces and use [@BMS2 Lem. 5.2] (*cf.* [@fild]). ◻ **Remark 1**. In the language of filtered derived categories $DF(\mathbf{F}_p)$ as in [@BMS2], the construction appearing on the right hand side is called the Day convolution, which turns $DF(\mathbf{F}_p)$ into a symmetric monoidal stable $\infty$-category. One also has the completed filtered derived category $\widehat{DF}(\mathbf{F}_p)$, equipped with an induced monoidal structure. It follows from [Proposition 1](#convv){reference-type="ref" reference="convv"} that $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(A \otimes_{\mathbf{F}_p} B) \simeq \mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(A) \hat{\otimes} \mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(B),$ where the right hand side denotes the monoidal operation on $\widehat{DF}(\mathbf{F}_p).$ **Notation 1**. Let $A$ be an $\mathbf{F}_p$-algebra. We let $R\Gamma_{\mathop{\mathrm{crys}}}(A)$ denote derived crystalline cohomology, and $\mathop{\mathrm{Fil}}^*_{\mathrm{Nyg}} R\Gamma_{\mathop{\mathrm{crys}}}(A)$ denote the Nygaard filtration; they are both defined by animation from polynomial algebras. The associated Nygaard completed object will be denoted by $\widehat{R\Gamma}_{\mathop{\mathrm{crys}}} (A)$, which is naturally equipped with a filtration $\mathop{\mathrm{Fil}}^*_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathop{\mathrm{crys}}} (A).$ We will only apply these notions in the case when $A$ is a quasisyntomic $\mathbf{F}_p$-algebra, and we will assume $A$ to be quasisyntomic from now on for simplicity. The proposition below lists some basic properties of the Nygaard filtration.
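To orient the reader (this special case is included only as an illustration and will not be used directly), note that if $A$ is perfect, then $R\Gamma_{\mathop{\mathrm{crys}}}(A) \simeq W(A)$ and $\mathbb{L}_{A/\mathbf{F}_p} \simeq 0$, and the properties listed below then force $$\mathop{\mathrm{Fil}}^n_{\mathrm{Nyg}} R\Gamma_{\mathop{\mathrm{crys}}}(A) \simeq p^n W(A), \qquad \mathrm{gr}^n_{\mathrm{Nyg}} R\Gamma_{\mathop{\mathrm{crys}}}(A) \simeq A \simeq \mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(A) \quad \text{for all } n \ge 0.$$
**Proposition 1**.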
*Let $A$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then,* 1. *$\widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A)/p \simeq \widehat{\mathop{\mathrm{dR}}}(A).$* 2. *$\mathrm{gr}^n_{\mathrm{Nyg}} R\Gamma_{\mathop{\mathrm{crys}}}(A) \simeq \mathrm{gr}^n_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A) \simeq \mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(A).$* 3. *Multiplication by $p$ induces a natural map $p: \mathop{\mathrm{Fil}}^{n-1}_{\mathrm{Nyg}}R\Gamma_{\mathop{\mathrm{crys}}}(A) \to \mathop{\mathrm{Fil}}^{n}_{\mathrm{Nyg}}R\Gamma_{\mathop{\mathrm{crys}}}(A)$ whose cofiber is naturally isomorphic to $\mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{Hodge}}} \mathop{\mathrm{dR}}(A).$* 4. *Multiplication by $p$ induces a natural map $p: \mathop{\mathrm{Fil}}^{n-1}_{\mathrm{Nyg}}\widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A) \to \mathop{\mathrm{Fil}}^{n}_{\mathrm{Nyg}}\widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A)$ whose cofiber is naturally isomorphic to $\mathop{\mathrm{Fil}}^n_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(A).$* 5. *There is a divided Frobenius map $\varphi_n: \mathrm{Fil}^n_{\mathrm{Nyg}} R\Gamma_{\mathop{\mathrm{crys}}} (A) \to R\Gamma_{\mathop{\mathrm{crys}}}(A)$ which gives a fiber sequence $$\mathbf{Z}_p(n)(A) \xrightarrow{}\mathrm{Fil}^n_{\mathrm{Nyg}}{R\Gamma}_{\mathop{\mathrm{crys}}} (A) \xrightarrow{\varphi_n - \mathrm{can}} R\Gamma_{\mathop{\mathrm{crys}}}(A).$$* 6. *There is a divided Frobenius map $\varphi_n: \mathrm{Fil}^n_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathop{\mathrm{crys}}} (A) \to \widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A)$ which gives a fiber sequence $$\mathbf{Z}_p(n) (A)\xrightarrow{}\mathrm{Fil}^n_{\mathrm{Nyg}}\widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A) \xrightarrow{\varphi_n - \mathrm{can}} \widehat{R\Gamma}_{\mathop{\mathrm{crys}}}(A).$$* *Proof.* See [@BMS2 § 8] for the case when $A$ is a quasiregular semiperfect $\mathbf{F}_p$-algebra. The proposition follows by quasisyntomic descent. ◻ Let us define $\mathbf{F}_p (n) (A) := \mathbf{Z}_p (n)(A)/p.$ We show that $\mathbf{F}_p (n) (A)$ may be described purely in terms of the animated Cartier theory from [2](#sec2){reference-type="ref" reference="sec2"}. **Proposition 1**. *Let $A$ be quasisyntomic $\mathbf{F}_p$-algebra. We have a natural fiber sequence $$\label{cartierbloch} \mathbf{F}_p(n)(A)[n] \to Z_1 \mathbb{L}\Omega^n_A \xrightarrow[]{\mathrm{can} - C} \mathbb{L}\Omega^n_A,$$* *Proof.* By animation, it is enough to prove the claim when $A$ is a polynomial algebra. By [@BMS2 Prop. 8.21] and quasisyntomic descent, it follows that $$\mathbf{Z}_p (n)(A)[n] \simeq R\Gamma_{\mathrm{pro\acute{e}t}}(\mathop{\mathrm{Spec}}A, W\Omega^n_{\mathop{\mathrm{Spec}}A, \mathrm{log}}),$$ where $W\Omega^n_{\mathop{\mathrm{Spec}}A, \mathrm{log}} := \varprojlim W_r\Omega^n_{\mathop{\mathrm{Spec}}A, \mathrm{log}}$ (see [@BMS2 Prop. 8.4]). The claim now follows from going modulo $p$, and using [@Luc1 2.1.20, 2.4.1.1, Thm. 2.4.2, Cor. 5.7.5]. ◻ **Remark 1**. Let $R$ be a quasiregular semiperfect algebra. By [@BMS2 Lem. 8.19], $\mathbf{Z}_p(i) (R)$ is discrete for $i>0.$ This may also be seen by reducing modulo $p$ and using the sequence [\[cartierbloch\]](#cartierbloch){reference-type="ref" reference="cartierbloch"}. 
Further, $\mathbf{Z}_p (0)(R) \in D_{[-1,0]}(\mathbf{Z}_p)$ and $\mathbf{Z}_p (i) (S) = 0$ for $i<0.$ Using [Lemma 1](#filtrationlemma){reference-type="ref" reference="filtrationlemma"}, we see that the filtration $\mathop{\mathrm{Fil}}^n\mathrm{TC}(R)$ constructed in [@BMS2] is simply given by $\tau_{\ge 2n-1} \mathrm{TC}(R).$ Having discussed these basic properties, we now focus on the behavior of these invariants for the pro-system of truncated polynomial rings. **Proposition 1**. *Let $R$ be a perfect ring. Then $\varprojlim \mathrm{dR}(R[t]/t^n) \simeq R[[t]]^{(p)} \oplus R[[t]]^{(p)}[-1]$ as an $R[[t]]^{(p)}$-module.* *Proof.* Let $n = p^k.$ Then $\mathbb{L}_{R[t]/t^n} \simeq (t^n/t^{2n})[1] \oplus R[t]/t^n$ as an $R[t]/t^n$-module. Let $I_n:= (t^n/t^{2n}),$ which is free of rank $1$ as an $R[t]/t^n$-module. For $s \ge 0,$ one has $$\wedge^s \mathbb{L}_{R[t]/t^n}[-s] \simeq \Gamma^s(I_n) \oplus \Gamma^{s-1}(I_n)[-1].$$ Now, we note that since $R[t]/t^n$ is liftable to $\mathbf{Z}/p^2 \mathbf{Z}$, along with a lift of the Frobenius, the conjugate filtration on $\mathrm{dR}(R[t]/t^n)$ splits [@Bha12 Prop. 3.17]; this gives $$\mathrm{dR}(R[t]/t^n) \simeq \Gamma^* (I_n) \oplus \Gamma^* (I_n)[-1].$$ Finally, note that the natural map $R[t]/t^{p^{k+1}} \to R[t]/t^{p^{k}}$ induces the zero map $I_{p^{k+1}} \to I_{p^k}.$ This shows that the $\mathbf N$-indexed objects $\mathrm{dR}(R[t]/t^n)$ and $(R[t]/t^n)^{(p)} \oplus (R[t]/t^n)^{(p)}[-1]$ are isomorphic as pro-objects. This yields the desired claim. ◻ **Proposition 1**. *Let $R$ be a perfect ring. Then $Z_1 \mathbb{L}\Omega^i_{R[t]/t^n} \simeq 0$ as a pro-object for $i \ge 2.$* *Proof.* First we prove that the natural map $\mathrm{dR}(R[t]/t^n) \to \widehat{\mathrm{dR}}(R[t]/t^n)$ is a pro-isomorphism. To this end, note that the pro-object $\widehat{\mathrm{dR}}(R[t]/t^n)$ admits a complete descending filtration (induced by the Hodge filtration) $\mathrm{Fil}^*_{\mathrm{Hodge}}\widehat{\mathrm{dR}}(R[t]/t^n)$ whose graded pieces are described as $$\mathrm{gr}^0 \widehat{\mathrm{dR}}(R[t]/t^n) \simeq R[t]/t^n,\,\, \mathrm{gr}^1 \widehat{\mathrm{dR}}(R[t]/t^n) \simeq \Omega^1_{R[t]/t^n}[-1],\,\, \mathrm{gr}^i\widehat{\mathrm{dR}}(R[t]/t^n) \simeq 0\, \text{for}\, i >1$$ as pro-objects. This implies that the pro-object $\widehat{\mathrm{dR}}(R[t]/t^n)$ is pro-isomorphic to $\Omega^*_{R[t]/t^n},$ where the latter denotes the classical de Rham complex. Now let $n= p^k.$ Then $\Omega^*_{R[t]/t^n},$ as an $(R[t]/ t^n)^{(p)}$-module is naturally isomorphic to $$\Omega^*_{R[t]} \otimes_{R[t]^{(p)}} (R[t]/ t^n)^{(p)}.$$ Considering that $R[t]$ lifts to $\mathbf{Z}/p^2 \mathbf{Z}$ along with a lift of the Frobenius, we see that the pro-object $\widehat{\mathrm{dR}}(R[t]/t^n) \simeq(R[t]/t^n)^{(p)} \oplus (R[t]/t^n)^{(p)}[-1] \simeq \mathrm{dR}(R[t]/t^n),$ where the latter isomorphism follows from the proof of [Proposition 1](#ven){reference-type="ref" reference="ven"}. Since for $i \ge 2,$ we have $\widehat{\mathrm{dR}}(R[t]/t^n) \simeq \widehat{\mathrm{dR}}(R[t]/t^n)/ \mathop{\mathrm{Fil}}^i_{\mathrm{Hodge}},$ and the natural map $\mathrm{dR}(R[t]/t^n) \to \widehat{\mathrm{dR}}(R[t]/t^n)$ is a pro-isomorphism, it now follows that the natural map $$\mathrm{dR}(R[t]/t^n) \to \mathrm{dR}(R[t]/t^n)/ \mathop{\mathrm{Fil}}^i_{\mathrm{Hodge}}$$ is a pro-isomorphism. 
Further, note that the natural map $$\mathrm{Fil}^j_{\mathrm{conj}} \mathrm{dR}(R[t]/t^n) \to \mathrm{dR}(R[t]/t^n)$$ is a pro-isomorphism for $j \ge 1.$ Therefore, the natural map $$\mathrm{Fil}^i_{\mathrm{conj}} \mathrm{dR}(R[t]/t^n) \to \mathrm{dR}(R[t]/t^n)/ \mathop{\mathrm{Fil}}^i_{\mathrm{Hodge}}$$ is a pro-isomorphism for $i \ge 2.$ Thus the fiber, which is naturally isomorphic to $Z_1\mathbb{L}\Omega^i_{R[t]/t^n}[-i],$ is pro-zero for $i \ge 2$. ◻ **Proposition 1**. *Let $R$ be a perfect ring. Then $\varprojlim \mathbf{Z}_p(i) (R[t]/t^n) = 0$ for $i>1.$* *Proof.* By derived $p$-completeness, it is enough to prove that $\varprojlim \mathbf{F}_p(i) (R[t]/t^n) = 0$ for $i>1.$ Now the fiber sequence (see [\[cartierbloch\]](#cartierbloch){reference-type="ref" reference="cartierbloch"}) $$\mathbf{F}_p(i)(R[t]/t^n)[i] \to Z_1 \mathbb{L}\Omega^i_{R[t]/t^n} \xrightarrow[]{\mathrm{can} - C} \mathbb{L}\Omega^i_{R[t]/t^n}$$ yields the desired vanishing since $Z_1 \mathbb{L}\Omega^i_{R[t]/t^n}$ and $\mathbb{L}\Omega^i_{R[t]/t^n}$ are both pro-zero for $i>1.$ ◻ Now we focus our attention to $\varprojlim \mathbf{Z}_p(i) (R[t]/t^n),$ where $R$ is a quasiregular semiperfect algebra. For this purpose, it will be convenient to work with Nygaard filtration on derived crystalline cohomology. **Lemma 1**. *Let $R$ be a quasiregular semiperfect ring. Then $$\varprojlim \mathop{\mathrm{Fil}}^{i}_{\mathop{\mathrm{conj}}}\mathrm{dR} (R[t]/t^n)\in D_{[-1,0]} (\mathbf{Z}_p).$$* *Proof.* Note that $\mathrm{dR} (R[t]/t^n) \simeq \mathrm{dR}(R) \otimes_{\mathbf{F}_p} \mathrm{dR} (\mathbf{F}_p [t]/t^n).$ By the proof of [Proposition 1](#ven){reference-type="ref" reference="ven"}, $\mathop{\mathrm{Fil}}^0_{\mathop{\mathrm{conj}}} \mathrm{dR}(\mathbf{F}_p [t]/t^n) \simeq (\mathbf{F}_p [t]/t^n)^{(p)}$ and $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}} \mathrm{dR}(\mathbf{F}_p [t]/t^n) \simeq (\mathbf{F}_p [t]/t^n)^{(p)} \oplus (\mathbf{F}_p [t]/t^n)^{(p)}[-1]$ as $n$-indexed pro-objects for each $i \ge 1$. For fixed $i,n$, we have $$\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{conj}}}\mathrm{dR}(R[t]/t^n) \simeq \mathop{\mathrm{colim}}_{u+v \le i} \mathop{\mathrm{Fil}}^u_{\mathop{\mathrm{conj}}} \mathrm{dR}(R) \otimes_{\mathbf{F}_p}\mathop{\mathrm{Fil}}^v_{\mathop{\mathrm{conj}}} \mathrm{dR}(\mathbf{F}_p [t]/t^n).$$ The above formula gives a natural map $$\label{kt} \left( \mathrm{Fil}^*_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(R) \oplus \mathrm{Fil}^{*-1}_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(R) [-1] \right) \otimes_{\mathbf{F}_p} (\mathbf{F}_p[t] /t^n)^{(p)} \to \mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{conj}}}\mathrm{dR}(R[t]/t^n)$$ To prove that this induces an isomorphism in the category of pro-objects, it is enough to prove that the graded pieces are pro-isomorphic. To this end, let $I:= \mathrm{Ker}(R^\flat \to R);$ then we have $\mathbb{L}_{R/ \mathbf{F}_p} \simeq I/I^2 [1].$ One computes that in the pro-category, we have $$\mathbb{L}_{(R[t]/t^n)/\mathbf{F}_p} \simeq (I/I^2 \otimes_R R[t]/t^n)[1] \oplus R[t]/t^n.$$ Computing wedge powers, we see that [\[kt\]](#kt){reference-type="ref" reference="kt"} is indeed an isomorphism. Since $\wedge^i \mathbb{L}_{R/\mathbf{F}_p}[-i] \simeq \Gamma ^i_R (I/I^2)$ is discrete, $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{conj}}} \mathop{\mathrm{dR}}(R)$ is also discrete. The maps induced on $\pi_{-1}$ on the left hand side of [\[kt\]](#kt){reference-type="ref" reference="kt"} are surjections and therefore the proposition follows. ◻ **Lemma 1**. 
*Let $R$ be a quasiregular semiperfect ring. Then $$\varprojlim\mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \in D_{[-1,0]} (\mathbf{Z}_p).$$* *Proof.* The left hand side is equipped with a complete descending filtration $\varprojlim\mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+*} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)$, where the graded pieces are computed by $\varprojlim \mathop{\mathrm{Fil}}^{i+*}_{\mathop{\mathrm{conj}}}\mathrm{dR} (R[t]/t^n).$ Therefore, the claim follows from the above lemma. ◻ **Lemma 1**. *Let $R$ be a quasiregular semiperfect ring. Then $$\varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n) \simeq \varprojlim \left(\widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} (\mathbf{F}_p[t] /t^n)^{(p)} \oplus \widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} (\mathbf{F}_p[t] /t^n)^{(p)}[-1]\right).$$* *Proof.* It follows that $\widehat{\mathop{\mathrm{dR}}}(R[t]/t^n)$ may be computed by completing $\widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} \widehat{\mathop{\mathrm{dR}}} (\mathbf{F}_p [t]/t^n)$ with respect to the Day convolution filtration induced from the Hodge filtration $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(R)$ and $\mathop{\mathrm{Fil}}^*_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(\mathbf F_p[t]/t^n).$ However, as an $\mathbf N$-indexed pro-object, $\mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(\mathbf{F}_p[t]/t^n) = 0$ for $i \ge 2$ (see [Proposition 1](#pack){reference-type="ref" reference="pack"}); therefore, one may ignore the completion step in order to compute the inverse limit, i.e., $\varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n) \simeq \varprojlim \left(\widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} \widehat{\mathop{\mathrm{dR}}} (\mathbf{F}_p [t]/t^n)\right).$ This yields the desired statement. ◻ **Proposition 1**. *Let $R$ be a quasiregular semiperfect ring. Then $$\varprojlim \mathbf{Z}_p(i)(R[t]/t^n) \in D_{[-1,0]} (\mathbf{Z}_p).$$* *Proof.* When $i=0,$ one may check the claim by reducing modulo $p$ and using the Artin--Schreier sequence. For $i=1,$ we argue as follows: note that for any quasisyntomic $\mathbf{F}_p$-algebra $S,$ we have $\mathbf{Z}_p(1) (S)[1]\simeq R\Gamma_{\mathrm{\acute{e}t}} (S, \mathbb G_m)^{\wedge_p}.$ This implies that we have a fiber sequence $$\prod_{(r,p)=1} W(S) \to \varprojlim_n \mathbf{Z}_p(1) (R[t]/t^n)[1] \to \mathbf{Z}_p(1) (R)[1].$$Since $R$ is quasiregular semiperfect, $\mathbf{Z}_p(1) (R)$ is discrete, which gives the claim. 
Let us now suppose that $i>1.$ Since we have a fiber sequence of derived $p$-complete objects $$\varprojlim \mathbf{Z}_p(i) (R[t]/t^n) \to \varprojlim \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \xrightarrow{\varphi_i - \iota} \varprojlim \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n),$$ it would be enough to prove that the map ${\varphi_i - \iota} \colon \varprojlim \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p \to \varprojlim \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p$ induces a surjection on $\pi_{-1}.$ Note that the composition $$\mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+1} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \xrightarrow{\varphi_i - \iota} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p \simeq \widehat{\mathop{\mathrm{dR}}} (R[t]/t^n)$$ is homotopic to the canonical map $\iota$ and the latter factors as $$\label{wh} \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+1} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \mathop{\mathrm{Fil}}^{i+1}_{\mathrm{Hodge}} \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n) \to \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n).$$ Since we have a fiber sequence $$\mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \xrightarrow{p} \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+1} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \mathop{\mathrm{Fil}}^{i+1}_{\mathrm{Hodge}} \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n),$$it follows from [Lemma 1](#ktt){reference-type="ref" reference="ktt"} that the map $\varprojlim \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+1} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \varprojlim \mathop{\mathrm{Fil}}^{i+1}_{\mathrm{Hodge}} \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n)$ is a surjection on $\pi_{-1}.$ Therefore, the image under $\pi_{-1}$ of the composite map $\varprojlim \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i+1} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n)$ coming from [\[wh\]](#wh){reference-type="ref" reference="wh"} is the same as image of $\pi_{-1}$ induced by the map $$\alpha\colon \varprojlim \mathop{\mathrm{Fil}}^{i+1}_{\mathrm{Hodge}} \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n) \to \varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n).$$ On the other hand, note that the composition $$\mathop{\mathrm{Fil}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \xrightarrow{p} \mathop{\mathrm{Fil}}_{\mathrm{Nyg}}^{i} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \xrightarrow{\varphi_i - \iota} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p$$ is homotopic to $\varphi_{i-1}.$ Furthermore, since $i>0,$ the map $\varphi_{i-1}\colon\mathop{\mathrm{Fil}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p$ factors as $$\mathop{\mathrm{Fil}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \mathop{\mathrm{gr}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p .$$Passing to inverse limits over $n$, we see that the map $\mathop{\mathrm{Fil}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \mathop{\mathrm{gr}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)$ induces surjection on $\pi_{-1}.$ Therefore, the image of the map induced on $\pi_{-1}$ by the composite map $\varprojlim \mathop{\mathrm{Fil}}^{i-1}_{\mathrm{Nyg}} 
\widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \varprojlim \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p$ is the same as the image of the map induced on $\pi_{-1}$ by $\varprojlim \mathop{\mathrm{gr}}^{i-1}_{\mathrm{Nyg}} \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n) \to \varprojlim \widehat{R\Gamma}_{\mathrm{crys}}(R[t]/t^n)/p.$ The latter map identifies with the map $$\beta\colon\varprojlim \mathop{\mathrm{Fil}}^{i-1}_{\mathop{\mathrm{conj}}} \mathrm{dR} (R[t]/t^n) \to \varprojlim \widehat{\mathop{\mathrm{dR}}} (R[t]/t^n).$$ It would be enough to prove that image of $\pi_{-1}(\alpha)$ and $\pi_{-1}(\beta)$ generates $\pi_{-1} (\varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n))$ under addition. Note that there is a natural map $$\varprojlim \mathop{\mathrm{Fil}}^{i}_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} \mathop{\mathrm{Fil}}^1_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(\mathbf{F}_p[t]/t^n) \to \varprojlim \mathop{\mathrm{Fil}}^{i+1}_{\mathrm{Hodge}} \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n).$$Composing with $\alpha,$ we get a map $\varprojlim \mathop{\mathrm{Fil}}^{i}_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(R) \otimes_{\mathbf{F}_p} \mathop{\mathrm{Fil}}^1_{\mathop{\mathrm{Hodge}}}\widehat{\mathop{\mathrm{dR}}}(\mathbf{F}_p[t]/t^n) \to \varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n).$ Note that by [Lemma 1](#fight){reference-type="ref" reference="fight"}, we have an isomorphism $\pi_{-1} (\varprojlim \widehat{\mathop{\mathrm{dR}}}(R[t]/t^n)) \simeq \widehat{\mathop{\mathrm{dR}}}(R)[[s]].$ It follows that under the latter isomorphism, the image of $\pi_{-1}(\alpha)$ contains all elements of the form $\sum_i x_i s^i$, where $x_i \in \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(R)$ and image of $\pi_{-1}(\beta)$ contains all elements of the form $\sum_i y_i s^i$, where $y_i \in \mathop{\mathrm{Fil}}^{i-2}_{\mathop{\mathrm{conj}}}{\mathop{\mathrm{dR}}}(R)$ (see [\[kt\]](#kt){reference-type="ref" reference="kt"}). For $i >1,$ we have $$\mathop{\mathrm{Fil}}^{i-2}_{\mathop{\mathrm{conj}}}{\mathop{\mathrm{dR}}}(R) + \mathop{\mathrm{Fil}}^i_{\mathop{\mathrm{Hodge}}} \widehat{\mathop{\mathrm{dR}}}(R) = \widehat{\mathop{\mathrm{dR}}}(R),$$ which finishes the proof. ◻ **Lemma 1**. *Let us suppose that a spectrum $S$ admits a descending complete and exhaustive $\mathbf{Z}$-indexed filtration $\mathop{\mathrm{Fil}}^* S$ such that the graded pieces $\mathop{\mathrm{gr}}^n S \in \mathrm{Sp}_{[2n,2n-1]}.$ Then there is a natural isomorphism $\mathop{\mathrm{Fil}}^n S \simeq \tau_{\ge 2n-1} S.$* *Proof.* Let us fix an integer $n.$ Let us choose another integer $j \ge 2n-1.$ By the description of graded pieces, it follows that $\pi_j (\mathop{\mathrm{Fil}}^n S) = \pi_j (\mathop{\mathrm{Fil}}^{n-1}S) = \ldots.$ Therefore, $\pi_j (\mathop{\mathrm{Fil}}^n S) \simeq \pi_j (S),$ since the filtration is exhaustive. 
By completeness of the filtration, we have $\mathop{\mathrm{Fil}}^n S \simeq \varprojlim _{k \in \mathbf N} \mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k}S$ By the description of the graded pieces, it follows that if $j < 2n-1,$ then $\pi_j (\mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k}) \simeq 0.$ Moreover, using the fiber sequence $$\mathop{\mathrm{gr}}^{n+k} S \to \mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k+1} S \to \mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k} S ,$$ we see that the maps $\pi_{2n-1} (\mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k+1}) \to \pi_{2n-1} (\mathop{\mathrm{Fil}}^n S/ \mathop{\mathrm{Fil}}^{n+k})$ are surjections for $k \ge 1.$ By using Milnor sequences, it follows that $\pi_j (\mathop{\mathrm{Fil}}^n S) = 0$ for $j < 2n-1.$ Since we have a natural map $\mathop{\mathrm{Fil}}^n S[-2n+1] \to S[-2n+1],$ and $\mathop{\mathrm{Fil}}^n S[-2n+1]$ is connective, we obtain a map $\mathop{\mathrm{Fil}}^n S \to \tau_{\ge 2n-1} S.$ Since we know that this map induces isomorphism on all homotopy groups, we obtain the desired claim. ◻ Now we can summarize the observations in this section in the following manner: let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Let us define $$\mathop{\mathrm{Fil}}^* \varprojlim_k \mathrm{TC}(S[t]/t^k)\coloneqq \varprojlim_k \mathrm{Fil}_{\mathrm{BMS}}^* \mathrm{TC}(S[t]/t^k).$$ It follows that the graded pieces of this filtration are computed as $$\mathrm{gr}^n \varprojlim_k \mathrm{TC}(S[t]/t^k) \simeq \varprojlim_{k} \mathbf{Z}_p(n) (S[t]/t^k)[2n].$$ It also follows that $\mathop{\mathrm{Fil}}^* \varprojlim_k \mathrm{TC}(S[t]/t^k)$ is a complete exhaustive filtration and the functor determined by $S \mapsto \mathop{\mathrm{Fil}}^* \varprojlim_k \mathrm{TC}(S[t]/t^k)$ is a quasisyntomic sheaf of spectra. The proposition below gives a concrete description of this filtration for quasiregular semiperfect algebras. **Proposition 1**. *Let $R$ be a quasiregular semiperfect algebra. Then $$\mathop{\mathrm{Fil}}^n \varprojlim_k \mathrm{TC}(R[t]/t^k) \simeq \tau_{\ge 2n-1} \varprojlim_k \mathrm{TC}(R[t]/t^k).$$* *Proof.* Follows from the above description of the graded pieces along with [Proposition 1](#LY){reference-type="ref" reference="LY"} and [Lemma 1](#filtrationlemma){reference-type="ref" reference="filtrationlemma"}. ◻ **Remark 1**. Let us point out that certain computations of topological cyclic homology of $R[t]/t^k$ where $R$ is a perfect(oid) ring appeared in [@Yuri] and [@Noah1] (*cf.* [@akhill Thm. 10.4]). # Proof of the main result {#finalsec} In this section, we will enhance Hesselholt's isomorphism [\[hesselholt\]](#hesselholt){reference-type="ref" reference="hesselholt"} with the motivic filtrations studied above. We recall some notations first. Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Let $\mathop{\mathrm{Fil}}^* \mathrm{TR}(S)$ be the filtration constructed before [Corollary 1](#cor1){reference-type="ref" reference="cor1"}. Let $\mathop{\mathrm{Fil}}^* \mathrm{TC}(S[t]/t^k)$ and $\mathop{\mathrm{Fil}}^* \mathrm{TC}(S)$ be the motivic filtrations as constructed by Bhatt--Morrow--Scholze. **Proposition 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then we have a natural isomorphism $$(\mathop{\mathrm{Fil}}^{*-1} \mathrm{TR}(S))[1] \simeq \varprojlim_k \mathrm{fib} \left(\mathop{\mathrm{Fil}}^* \mathrm{TC}(S[t]/t^k) \to \mathop{\mathrm{Fil}}^* \mathrm{TC}(S)\right).$$* *Proof.* Let $R$ be a quasiregular semiperfect algebra. 
Using Hesselholt's result [\[hesselholt\]](#hesselholt){reference-type="ref" reference="hesselholt"}, we obtain a natural fiber sequence $$\varprojlim_k \mathrm{TC}(R[t]/t^k) \to \mathrm{TC}(R) \to \mathrm{TR}(R)[2].$$ By [Corollary 1](#cor3){reference-type="ref" reference="cor3"} and [Remark 1](#BMSsimple){reference-type="ref" reference="BMSsimple"}, we obtain a fiber sequence $$\tau_{\ge 2n-1} \varprojlim_k \mathrm{TC}(R[t]/t^k) \to \tau_{\ge 2n-1} \mathrm{TC}(R) \to (\tau_{\ge 2n-2}\mathrm{TR}(R))[2].$$ Using [Corollary 1](#cor3){reference-type="ref" reference="cor3"} and [Proposition 1](#prop1){reference-type="ref" reference="prop1"}, this implies that we have a fiber sequence $$\mathop{\mathrm{Fil}}^n \varprojlim_k \mathrm{TC}(R[t]/t^k) \to \mathop{\mathrm{Fil}}^n \mathrm{TC}(R) \to \mathop{\mathrm{Fil}}^{n-1} \mathrm{TR}(R) [2].$$ Applying quasisyntomic descent produces a natural isomorphism $$(\mathop{\mathrm{Fil}}^{n-1} \mathrm{TR}(S))[1] \simeq \varprojlim_k \mathrm{fib} \left(\mathop{\mathrm{Fil}}^n \mathrm{TC}(S[t]/t^k) \to \mathop{\mathrm{Fil}}^n \mathrm{TC}(S)\right);$$ this finishes the proof. ◻ **Proposition 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then we have a natural isomorphism $$\prod_{(u,p)=1}\mathbb{L}W\Omega^{n-1}_S \simeq \mathrm{fib} \left(\varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k)[n] \to \mathbf{Z}_p(n)(S)[n]\right).$$* *Proof.* By passing to the graded pieces in the filtered isomorphism in [Proposition 1](#mainthm1){reference-type="ref" reference="mainthm1"}, we obtain a natural isomorphism $$\prod_{(u,p)=1}\mathbb{L}W\Omega^{n-1}_S[n-1][1] \simeq \mathrm{fib} \left(\varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k)[2n] \to \mathbf{Z}_p(n)(S)[2n]\right),$$ which gives the desired result. ◻ **Construction 1** (Frobenius and Verschiebung). Let $S$ be a quasisyntomic $\mathbf F_p$-algebra. Let $$C (\mathbf{Z}_p(n)[n]_S) \coloneqq \mathrm{fib} \left(\varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k)[n] \to \mathbf{Z}_p(n)(S)[n]\right),$$ which we regard as curves on $\mathbf Z_p(n)[n].$ Let $m \ge 0$ be an integer. The assignment $t \mapsto t^m$ determines an endomorphism $V_m\colon C (\mathbf{Z}_p(n)[n]_S) \to C (\mathbf{Z}_p(n)[n]_S),$ which we call the $m$-th *Verschiebung*. We will now construct the $m$-th Frobenius maps. Let us first assume that $S$ is quasiregular semiperfect. We will construct a "transfer endomorphism\" $$\label{transfer} \Phi_m\colon \varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k) \to \varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k)$$To do so, one notes that there are transfer maps $\mathrm{TC}(S[t]/t^{km}) \to \mathrm{TC}(S[t]/t^k)$ induced by the map $S[t]/t^{k} \to S[t]/t^{km}$ determined by $t \mapsto t^m.$ This induces a map $$\Tilde{\Phi}_m\colon \varprojlim_k \mathrm{TC}(S[t]/t^k) \to \varprojlim_k \mathrm{TC}(S[t]/t^k).$$ Now, using the assumption that $S$ is quasiregular semiperfect, [Proposition 1](#prop1){reference-type="ref" reference="prop1"}, and passing to graded pieces produces the desired transfer map [\[transfer\]](#transfer){reference-type="ref" reference="transfer"} on the $p$-adic Tate twists. By quasisyntomic descent, for any quasisyntomic $\mathbf F_p$-algebra, we obtain a "transfer map\" $$\label{transfer} \Phi_m\colon \varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k) \to \varprojlim_k \mathbf{Z}_p(n)(S[t]/t^k).$$ The latter induces an endomorphism $$F_m \colon C (\mathbf{Z}_p(n)[n]_S) \to C (\mathbf{Z}_p(n)[n]_S),$$ that we call the $m$-th *Frobenius*. **Remark 1**. 
Note that the Frobenius and Verschiebung operators on $\mathrm{TR}(S)$ induce the operators $F_m$ and $V_m$ on the left hand side of [Proposition 1](#prop98){reference-type="ref" reference="prop98"} as well. Using [@Jonas Rmk. 2.4.6], it follows that [Proposition 1](#prop98){reference-type="ref" reference="prop98"} is compatible with the $F_m$ and $V_m$ defined on both sides. **Construction 1** ($p$-typicalization). Let $S$ be a quasisyntomic $\mathbf F_p$-algebra. Using [Construction 1](#constrans){reference-type="ref" reference="constrans"}, we obtain natural maps $\eta_m\colon \mathrm{fib} (F_m) \to C (\mathbf{Z}_p(n)[n]_S),$ which may be viewed as an object of $D(\mathbf Z_p)_{/C (\mathbf{Z}_p(n)[n]_S)}.$ We define $$\mathbb{D}(\mathbf{Z}_p(n)[n]_S) \coloneqq \prod_{(m,p)=1, m>1}\eta_m \in D(\mathbf Z_p)_{/C (\mathbf{Z}_p(n)[n]_S)},$$ where the product is taken in $D(\mathbf Z_p)_{/C (\mathbf{Z}_p(n)[n]_S)}$. Naively, one may think of $\mathbb{D}(\mathbf{Z}_p(n)[n]_S)$ as "$\bigcap_{(m,p)=1, m>1} \mathrm{fib}(F_m)$", where the latter should be suitably interpreted as above. By analogy with the classical situation, we will call $\mathbb{D}(\mathbf{Z}_p(n)[n]_S)$ the *$p$-typical curves* on $\mathbf Z_p(n)[n]$ over $S.$ Note that $\mathbb{D}(\mathbf{Z}_p(n)[n]_S)$ is naturally equipped with the operators $F\coloneqq F_p$ and $V\coloneqq V_p$. **Corollary 1**. *Let $S$ be a quasisyntomic $\mathbf{F}_p$-algebra. Then we have a natural isomorphism $$\mathbb{L}W \Omega_S^{n-1} \simeq \mathbb{D}(\mathbf{Z}_p(n)[n]_S),$$ which is compatible with the $F$ and $V$ defined on both sides.* *Proof.* This follows from [Proposition 1](#prop98){reference-type="ref" reference="prop98"}, the previous discussion and the description of the Frobenius on $\mathrm{TR}(S)$ following [@Hess Prop. 3.3.1]. ◻ [^1]: Combine [@Hess Thm. 3.1.9] and [@Hess Thm. 3.1.10] to obtain this statement. [^2]: Combine [@LurX Rmk. 2.13] and [@LurX Thm. 4.5] to obtain this statement. [^3]: Illusie pointed out that there was a gap in the proof of [@Luc1 Prop. 3.21] which was fixed in [@Luc11 II, 1.3].
--- abstract: | Let ${\mathfrak g}$ be a finite dimensional simple Lie algebra over ${\mathbb{C}}$, and let $\ell$ be a positive integer. In this paper, we construct the quantization $K_{\hat{\mathfrak g},\hbar}^\ell$ of the parafermion vertex algebra $K_{\hat{\mathfrak g}}^\ell$ as an $\hbar$-adic quantum vertex subalgebra inside the simple quantum affine vertex algebra $L_{\hat{\mathfrak g},\hbar}^\ell$. We show that $L_{\hat{\mathfrak g},\hbar}^\ell$ contains an $\hbar$-adic quantum vertex subalgebra isomorphic to the quantum lattice vertex algebra $V_{\sqrt\ell Q_L}^{\eta_\ell}$, where $Q_L$ is the lattice generated by the long roots of ${\mathfrak g}$. Moreover, we prove the double commutant property of $K_{\hat{\mathfrak g},\hbar}^\ell$ and $V_{\sqrt\ell Q_L}^{\eta_\ell}$ in $L_{\hat{\mathfrak g},\hbar}^\ell$. address: Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education), School of Mathematics and Statistics, Hunan Normal University, Changsha, China 410081 author: - Fei Kong$^1$ title: Quantization of parafermion vertex algebras --- [^1] # Introduction Let ${\mathfrak g}$ be a finite dimensional simple Lie algebra over ${\mathbb{C}}$, and let $L_{\hat{\mathfrak g}}^\ell$ be the simple affine vertex operator algebra of positive integer level $\ell$. The parafermion vertex operator algebra $K_{\hat{\mathfrak g}}^\ell$ is the commutant of the Heisenberg vertex operator algebra in $L_{\hat{\mathfrak g}}^\ell$, and can also be regarded as the commutant of the lattice vertex operator algebra $V_{\sqrt\ell Q_L}$ in $L_{\hat{\mathfrak g}}^\ell$ ([@DW-para-structure-double-comm]), where $Q_L$ is the lattice spanned by the long roots of ${\mathfrak g}$. A set of generators of the parafermion vertex operator algebra $K_{\hat{\mathfrak g}}^\ell$ was determined in [@DLY-para-gen-1; @DLW-para-gen-2; @DW-para-structure-gen; @DR-para-rational]. The $C_2$-cofiniteness of $K_{\hat{\mathfrak g}}^\ell$ was proved in [@DW-para-cofiniteness-1; @ALY-para-cofiniteness-2], and the rationality was proved in [@DR-para-rational] with the help of a result in [@CM-orbifold] on the abelian orbifolds for rational and $C_2$-cofinite vertex operator algebras. The irreducible modules of $K_{\hat{\mathfrak{sl}}_2}^\ell$ were classified in [@ALY-para-cofiniteness-2] and their fusion rules and quantum dimensions ([@DJX-qdim-qGalois]) were computed in [@DW-para-irr-mods-fusion-ssl2]. For general ${\mathfrak g}$, the irreducible modules of $K_{\hat{\mathfrak g}}^\ell$ were classified in [@DR-para-rational; @ADJR-para-irr-mods-fusion], the fusion rules were determined in [@ADJR-para-irr-mods-fusion] with the help of quantum dimensions, and their trace functions were computed in [@DKR-trace-para]. Parafermion vertex operator algebras also have a close relationship with $W$-algebras ([@DLY-para-gen-1; @ALY-para-W-algebra]). It is well known that vertex algebras are closely related to affine Kac-Moody Lie algebras ([@FZ; @Li-local; @LL; @DL; @MP1; @MP2; @DLM]). In [@Dr-hopf-alg] and [@JimboM], Drinfeld and Jimbo independently introduced the notion of quantum enveloping algebras, which are formal deformations of the universal enveloping algebras of Kac-Moody Lie algebras. The quantum enveloping algebras of affine types, which are called quantum affine algebras, are one of the most important subclasses. Like the affinization realization of affine Kac-Moody Lie algebras, Drinfeld provided a quantum affinization realization of quantum affine algebras in [@Dr-new].
Based on the Drinfeld presentation, Frenkel and Jing constructed vertex representations for simply-laced untwisted quantum affine algebras in [@FJ-vr-qaffine] and formulated a fundamental problem of developing certain "quantum vertex algebra theory" associated to quantum affine algebras, in parallel with the connection between vertex algebras and affine Kac-Moody Lie algebras. In [@EK-qva], Etingof and Kazhdan developed a theory of quantum vertex operator algebras in the sense of formal deformations of vertex algebras. Partly motivated by the work of Etingof and Kazhdan, H. Li conducted a series of studies. While vertex algebras are analogues of commutative associative algebras, H. Li introduced the notion of nonlocal vertex algebras [@Li-nonlocal], which are analogues of noncommutative associative algebras. A weak quantum vertex algebra [@Li-nonlocal] is a nonlocal vertex algebra satisfying the $S$-locality. In addition, it becomes a quantum vertex algebra [@Li-nonlocal] if the $S$-locality is controlled by a rational quantum Yang-Baxter operator. The $\hbar$-adic counterparts of these notions were introduced in [@Li-h-adic]. In this framework, a quantum vertex operator algebra in the sense of Etingof-Kazhdan is an $\hbar$-adic quantum vertex algebra whose classical limit is a vertex algebra. In the same paper [@EK-qva], Etingof and Kazhdan constructed quantum vertex operator algebras as formal deformations of $V_{\hat{\mathfrak gl}_n}^\ell$ and $V_{\hat{\mathfrak{sl}}_n}^\ell$, by using the $R$-matrix type relations given in [@RS-RTT]. Butorac, Jing and Kožić ([@BJK-qva-BCD]) extended Etingof-Kazhdan's construction to type $B$, $C$ and $D$ rational $R$-matrices. The quantum vertex operator algebras associated with trigonometric $R$-matrices of type $A$, $B$, $C$ and $D$ were constructed in [@Kozic-qva-tri-A; @K-qva-phi-mod-BCD]. In [@JKLT-Defom-va], we developed a method to construct quantum vertex operator algebras by using vertex bialgebras. By using this method, we constructed the quantum lattice vertex algebras, which are a family of quantum vertex operator algebras arising as deformations of lattice vertex algebras. Moreover, based on Drinfeld's quantum affinization construction ([@Dr-new; @J-KM; @Naka-quiver]), we constructed the quantum affine vertex algebras $V_{\hat{\mathfrak g},\hbar}^\ell$ and $L_{\hat{\mathfrak g},\hbar}^\ell$ for all symmetric Kac-Moody Lie algebras ${\mathfrak g}$. When ${\mathfrak g}$ is of finite type, we proved that $L_{\hat{\mathfrak g},\hbar}^\ell/\hbar L_{\hat{\mathfrak g},\hbar}^\ell\cong L_{\hat{\mathfrak g}}^\ell$. In this paper, we study the quantization of parafermion vertex algebras, that is, the commutant $K_{\hat{\mathfrak g},\hbar}^\ell$ of the quantum Heisenberg vertex algebra in $L_{\hat{\mathfrak g},\hbar}^\ell$. We give a set of generators of $K_{\hat{\mathfrak g},\hbar}^\ell$ in Section [6.2](#sec:qpara){reference-type="ref" reference="sec:qpara"}. By using these generators, we prove that $K_{\hat{\mathfrak g},\hbar}^\ell/\hbar K_{\hat{\mathfrak g},\hbar}^\ell\cong K_{\hat{\mathfrak g}}^\ell$. Furthermore, we show that $L_{\hat{\mathfrak g},\hbar}^\ell$ contains a quantum lattice vertex algebra $V_{\sqrt\ell Q_L}^{\eta_\ell}$ as a subalgebra, and prove that $K_{\hat{\mathfrak g},\hbar}^\ell$ and $V_{\sqrt\ell Q_L}^{\eta_\ell}$ are commutants of each other in $L_{\hat{\mathfrak g},\hbar}^\ell$.
The main difficulty is to prove the injectivity of the map $V_{\sqrt\ell Q_L}^{\eta_\ell}\to L_{\hat{\mathfrak g},\hbar}^\ell$ (see Section [6.1](#sec:qlattice-in-qaffva){reference-type="ref" reference="sec:qlattice-in-qaffva"}). We first construct coproducts of the quantum affine vertex algebras (see Section [5.3](#sec:coprod){reference-type="ref" reference="sec:coprod"}), which are injective $\hbar$-adic quantum vertex algebra homomorphisms $\Delta$ from $L_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}$ to $L_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^{\ell'}$ for any positive integers $\ell$ and $\ell'$. By using the injection $\Delta:L_{\hat{\mathfrak g},\hbar}^\ell\to L_{\hat{\mathfrak g},\hbar}^{\ell-1}\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^1$, we can reduce the level $\ell$ case to the level $\ell-1$ case. It should be noted that in the construction of coproducts, the tensor product $\hbar$-adic quantum vertex algebra $L_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^{\ell'}$ is a twisted tensor product ([@LS-twisted-tensor; @S-iter-twisted-tensor]) rather than the usual tensor product. So in Section [3.3](#subsec:construct-n-qvas){reference-type="ref" reference="subsec:construct-n-qvas"}, we extend the method given in [@JKLT-Defom-va] and derive a construction method of twisted tensor products by using vertex bialgebras. Using this construction, we construct the twisted tensor products of quantum affine vertex algebras in Section [5.2](#subsec:construct-n-qvas-qaffva){reference-type="ref" reference="subsec:construct-n-qvas-qaffva"}. The paper is organized as follows. Section [2](#sec:VAs){reference-type="ref" reference="sec:VAs"} provides an introduction to vertex algebras, affine vertex algebras, lattice vertex algebras and parafermion vertex algebras. Section [3](#sec:qvas){reference-type="ref" reference="sec:qvas"} presents the basics about $\hbar$-adic quantum vertex algebras, the theory of twisted tensor products and a construction method for twisted tensor products by using vertex bialgebras. Section [4](#sec:qlattice){reference-type="ref" reference="sec:qlattice"} discusses the construction of quantum lattice vertex algebras based on the work presented in [@JKLT-Defom-va]. It provides a universal description of these $\hbar$-adic quantum vertex algebras and establishes an equivalence between the category of their modules and the category of modules for the corresponding lattice vertex algebras. Section [5](#sec:qaff-va){reference-type="ref" reference="sec:qaff-va"} presents the construction of quantum affine vertex algebras as described in [@K-Quantum-aff-va], along with their twisted tensor products and coproducts. Section [6](#chap:qpara){reference-type="ref" reference="chap:qpara"} demonstrates that $L_{\hat{\mathfrak g},\hbar}^\ell$ contains an $\hbar$-adic quantum vertex subalgebra that is isomorphic to the quantum lattice vertex algebra $V_{\sqrt\ell Q_L}^{\eta_\ell}$. Moreover, it focuses on identifying a generating subset of the quantum parafermion vertex algebra $K_{\hat{\mathfrak g},\hbar}^\ell$, namely the commutant of the quantum Heisenberg vertex algebra in $L_{\hat{\mathfrak g},\hbar}^\ell$. Additionally, this section establishes the double commutant property of $K_{\hat{\mathfrak g},\hbar}^\ell$ and $V_{\sqrt\ell Q_L}^{\eta_\ell}$. Throughout this paper, we denote by ${\mathbb{Z}}_+$ and ${\mathbb{N}}$ the sets of positive and nonnegative integers, respectively.
For a vector space $W$ and $g(z)\in W[[z^{\pm 1}]]$, we denote by $g(z)^+$ (resp. $g(z)^-$) the regular (singular) part of $g(z)$. # Parafermion vertex algebras {#sec:VAs} ## Vertex algebras {#subsec:VA} A *vertex algebra* (VA) is a vector space $V$ together with a *vacuum vector* ${{\bf 1}}\in V$ and a vertex operator map $$\begin{aligned} Y(\cdot,z):&V\longrightarrow {\mathcal{E}}(V):=\mathop{\mathrm{Hom}}(V,V((z)))\\ &v\mapsto Y(v,z)=\sum_{n\in{\mathbb{Z}}}v_nz^{-n-1},\nonumber\end{aligned}$$ such that $$\begin{aligned} \label{eq:vacuum-property} Y({{\bf 1}},z)v=v,\quad Y(v,z){{\bf 1}}\in V[[z]],\quad \lim_{z\to 0}Y(v,z){{\bf 1}}=v, \quad\mbox{for }v\in V,\end{aligned}$$ and that $$\begin{aligned} \label{eq:Jacobi} &z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)Y(u,z_1)Y(v,z_2) -z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)Y(v,z_2)Y(u,z_1)\\ &\quad=z_1^{-1}\delta\left(\frac{z_2+z_0}{z_1}\right)Y(Y(u,z_0)v,z_2) \quad\quad\mbox{for }u,v\in V.\nonumber\end{aligned}$$ A *module* $W$ of a VA $V$ is a vector space $W$ together with a vertex operator map $$\begin{aligned} Y_W(\cdot,z):&V\longrightarrow {\mathcal{E}}(W)\\ &v\mapsto Y_W(v,z)=\sum_{n\in{\mathbb{Z}}}v_nz^{-n-1},\nonumber\end{aligned}$$ such that $Y_W({{\bf 1}},z)=1_W$ and $$\begin{aligned} &z_0^{-1}\delta\left(\frac{z_1-z_2}{z_0}\right)Y_W(u,z_1)Y_W(v,z_2) -z_0^{-1}\delta\left(\frac{z_2-z_1}{-z_0}\right)Y_W(v,z_2)Y_W(u,z_1)\\ &\quad=z_1^{-1}\delta\left(\frac{z_2+z_0}{z_1}\right)Y_W(Y(u,z_0)v,z_2) \quad\quad\mbox{for }u,v\in V.\nonumber\end{aligned}$$ Let $\mathcal B$ be a set, and let $N:\mathcal B\times \mathcal B\to {\mathbb{N}}$ be a symmetric function called *locality function* ([@R-free-conformal-free-va]). Define $\mathcal V(\mathcal B,N)$ to be the category consisting of VAs $V$, containing $\mathcal B$ as a subset and $$\begin{aligned} (z_1-z_2)^{N(a,b)}[Y(a,z_1),Y(b,z_2)]=0\quad \mbox{for }a,b\in\mathcal B.\end{aligned}$$ The following result is given in [@R-free-conformal-free-va]. **Proposition 1**. *There exists a unique VA $V(\mathcal B,N)$ such that for any $V\in\mathcal V(\mathcal B,N)$, there exists a unique VA homomorphism $f:V(\mathcal B,N)\to V$ such that $$\begin{aligned} f(a)=a\quad\mbox{for }a\in \mathcal B.\end{aligned}$$* ## Affine VAs {#subsec:AffVA} Let $A=(a_{ij})_{i,j\in I}$ be a Cartan matrix, and let ${\mathfrak g}={\mathfrak g}(A)$ be the corresponding finite dimensional simple Lie algebra over ${\mathbb{C}}$. We fix a realization $({\mathfrak h},\Pi,\Pi^\vee)$ of ${\mathfrak g}$, where ${\mathfrak h}$ is a Cartan subalgebra of ${\mathfrak g}$, $\Pi={ \left.\left\{ {\alpha_i} \,\right|\, {i\in I} \right\} }\subset{\mathfrak h}^\ast$ is the set of simple roots and $\Pi^\vee={ \left.\left\{ {h_i} \,\right|\, {i\in I} \right\} }\subset{\mathfrak h}$ is the set of simple coroots. Define $$\begin{aligned} I_L={ \left.\left\{ {i\in I} \,\right|\, {\alpha_i\,\,\mbox{is a long root}} \right\} },\quad I_S={ \left.\left\{ {i\in I} \,\right|\, {\alpha_i\,\,\mbox{is a short root}} \right\} }.\end{aligned}$$ And define $$\begin{aligned} r=\begin{cases} 1,&\mbox{if ${\mathfrak g}$ is simply-laced},\\ 2,&\mbox{if ${\mathfrak g}$ is of type $B_n,C_n,F_4$},\\ 3,&\mbox{if ${\mathfrak g}$ is of type $G_2$}, \end{cases} \quad r_i=\begin{cases} 1,&\mbox{if }i\in I_S,\\ r,&\mbox{if }i\in I_L. 
\end{cases}\end{aligned}$$ Then we get from [@Kac-book]\*(6.22) and [@Kac-book]\*Table Aff that $$\begin{aligned} r_ia_{ij}=r_ja_{ji}\quad\mbox{for }i,j\in I.\end{aligned}$$ Define a bilinear map ${\langle}\cdot,\cdot{\rangle}$ on ${\mathfrak h}^\ast$ by $$\begin{aligned} {\langle}\alpha_i,\alpha_j{\rangle}=r_ia_{ij}/r\quad\mbox{for }i,j\in I.\end{aligned}$$ Then ${\langle}\cdot,\cdot{\rangle}$ is the normalized symmetric bilinear form on ${\mathfrak h}^\ast$, such that ${\langle}\alpha,\alpha{\rangle}=2$ for any long root $\alpha$. Since ${\langle}\cdot,\cdot{\rangle}$ is non-degenerate, we can identify ${\mathfrak h}^\ast$ with ${\mathfrak h}$ via ${\langle}\cdot,\cdot{\rangle}$. To be more precise, we identify $\alpha_i$ with $r_ih_i /r$ for $i\in I$. Then $$\begin{aligned} {\langle}h_i,h_j{\rangle}=ra_{ij}/r_j\quad\mbox{for }i,j\in I.\end{aligned}$$ Let $\hat{\mathfrak g}={\mathfrak g}\otimes{\mathbb{C}}[t,t^{-1}]\oplus{\mathbb{C}}c$ be the affinization of ${\mathfrak g}$. One has that ([@Gar-loop-alg]): **Proposition 2**. *The Lie algebra $\hat{\mathfrak g}$ is isomorphic to the Lie algebra generated by $$\begin{aligned} { \left.\left\{ {h_i(m),\,x_i^\pm(m)} \,\right|\, {i\in I} \right\} }\end{aligned}$$ and a central element $c$, subject to the relations written in terms of generating functions in $z$: $$\begin{aligned} h_i(z)=\sum_{m\in{\mathbb{Z}}}h_i(m)z^{-m-1},\quad x_i^\pm(z)=\sum_{m\in{\mathbb{Z}}}x_i^\pm(m)z^{-m-1},\quad i\in I.\end{aligned}$$ The relations are $(i,j\in I)$: $$\begin{aligned} \mbox{(L1)}\quad &[h_i(z_1),h_j(z_2)]=\frac{a_{ij}}{r_j}rc\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\\ \mbox{(L2)}\quad &[h_i(z_1),x_j^\pm(z_2)]=\pm a_{ij}x_j^\pm(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\\ \mbox{(L3)}\quad &\left[x_i^+(z_1),x_j^-(z_2)\right]=\delta_{ij}\left(h_i(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)+rc\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right) \right),\\ \mbox{(L4)}\quad&(z_1-z_2)^{n_{ij}}\left[x_i^\pm(z_1),x_j^\pm(z_2)\right]=0,\\ \mbox{(S)}\quad &\left[ x_i^\pm(z_1),\left[ x_i^\pm(z_2),\dots,\left[ x_i^\pm(z_{m_{ij}}),x_j^\pm(z_0) \right]\cdots \right] \right]=0,\quad \mbox{if }a_{ij}<0,\end{aligned}$$ where $n_{ij}=1-\delta_{ij}$ for $i,j\in I$ and $m_{ij}=1-a_{ij}$ for $i,j\in I$ with $a_{ij}<0$.* Introduce a set $\mathcal B={ \left.\left\{ {h_i,\,x_i^\pm} \,\right|\, {i\in I} \right\} }$, and define a function $N:\mathcal B\times\mathcal B\to{\mathbb{N}}$ by $$\begin{aligned} N(h_i,h_j)=2,\quad N(h_i,x_j^\pm)=1,\quad N(x_i^\pm,x_j^\pm)=n_{ij},\quad N(x_i^+,x_j^-)=2\delta_{ij}.\end{aligned}$$ **Definition 3**.
For $\ell\in {\mathbb{C}}$, we let $F_{\hat{\mathfrak g}}^\ell$ be the quotient VA of $V(\mathcal B,N)$ modulo the ideal generated by $$\begin{aligned} (h_i)_0(h_j),\quad (h_i)_1(h_j)-\frac{a_{ij}}{r_j}r\ell{{\bf 1}},\quad (h_i)_0(x_j^\pm)\mp a_{ij} x_j^\pm \quad \mbox{for }i,j\in I.\end{aligned}$$ Furthermore, let $V_{\hat{\mathfrak g}}^\ell$ be the quotient VA of $F_{\hat{\mathfrak g}}^\ell$ modulo the ideal generated by $$\begin{aligned} &(x_i^+)_0(x_j^-)-\delta_{ij}h_i,\quad (x_i^+)_1(x_j^-)-\delta_{ij}r\ell/r_i {{\bf 1}}\quad\mbox{for }i,j\in I,\\ &\left(x_i^\pm\right)_0^{m_{ij}}(x_j^\pm)\quad\mbox{for }i,j\in I\,\,\mbox{with }a_{ij}<0.\end{aligned}$$ If $\ell\in{\mathbb{Z}}_+$, we define $L_{\hat{\mathfrak g}}^\ell$ to be the quotient VA of $V_{\hat{\mathfrak g}}^\ell$ modulo the ideal generated by $$\begin{aligned} \left(x_i^\pm\right)_{-1}^{r\ell/r_i}(x_i^\pm)\quad\mbox{for }i\in I.\end{aligned}$$ **Remark 4**. **Set $\hat{\mathfrak g}_+={\mathfrak g}\otimes{\mathbb{C}}[t]\oplus{\mathbb{C}}c$. For $\ell\in {\mathbb{C}}$, we let ${\mathbb{C}}_\ell:={\mathbb{C}}$ be a $\hat{\mathfrak g}_+$-module, with ${\mathfrak g}\otimes{\mathbb{C}}[t].{\mathbb{C}}_\ell=0$ and $c=\ell$. Then there are $\hat{\mathfrak g}$-module structures on both $V_{\hat{\mathfrak g}}^\ell$ and $L_{\hat{\mathfrak g}}^\ell$, such that $$\begin{aligned} h_i(z)=Y(h_i,z),\quad x_i^\pm(z)=Y(x_i^\pm,z),\quad i\in I.\end{aligned}$$ In addition, as $\hat{\mathfrak g}$-modules, we have that $$\begin{aligned} V_{\hat{\mathfrak g}}^\ell\cong {\mathcal{U}}(\hat{\mathfrak g})\otimes_{{\mathcal{U}}(\hat{\mathfrak g}_+)}{\mathbb{C}}_\ell.\end{aligned}$$ And $L_{\hat{\mathfrak g}}^\ell$ is the unique simple quotient $\hat{\mathfrak g}$-module of $V_{\hat{\mathfrak g}}^\ell$, when $\ell\in{\mathbb{Z}}_+$.** ## Lattice VAs {#subsec:LatticeVA} Let $Q=\oplus_{i\in J}{\mathbb{Z}}\beta_i$ be a non-degenerate even lattice of finite rank, i.e., a free abelian group of finite rank equipped with a non-degenerate symmetric ${\mathbb{Z}}$-valued bilinear form ${\langle}\cdot,\cdot{\rangle}$ such that ${\langle}\alpha,\alpha{\rangle}\in 2{\mathbb{Z}}$ for any $\alpha\in Q$. Set $$\begin{aligned} {\mathfrak h}_Q={\mathbb{C}}\otimes_{\mathbb{Z}}Q\end{aligned}$$ and extend ${\langle}\cdot,\cdot{\rangle}$ to a symmetric ${\mathbb{C}}$-valued bilinear form on ${\mathfrak h}_Q$. We form an affine Lie algebra $\hat{\mathfrak h}_Q$, where $$\begin{aligned} \hat{\mathfrak h}_Q={\mathfrak h}_Q\otimes{\mathbb{C}}[t,t^{-1}]\oplus{\mathbb{C}}c\end{aligned}$$ as a vector space, and where $c$ is central and $$\begin{aligned} [\alpha(m),\beta(n)]=m\delta_{m+n,0}{\langle}\alpha,\beta{\rangle}c\end{aligned}$$ for $\alpha,\beta\in{\mathfrak h}_Q,\,m,n\in{\mathbb{Z}}$ with $\alpha(m)$ denoting $\alpha\otimes t^m$. Define the following two abelian Lie subalgebras $$\begin{aligned} \hat{\mathfrak h}_Q^\pm={\mathfrak h}_Q\otimes t^{\pm 1}{\mathbb{C}}[t^{\pm 1}],\end{aligned}$$ and identify ${\mathfrak h}_Q$ with ${\mathfrak h}_Q\otimes t^0$. Set $$\begin{aligned} \hat{\mathfrak h}'_Q=\hat{\mathfrak h}_Q^+\oplus\hat{\mathfrak h}_Q^-\oplus{\mathbb{C}}c\end{aligned}$$ which is a Heisenberg algebra. Then $\hat{\mathfrak h}_Q=\hat{\mathfrak h}'_Q\oplus {\mathfrak h}_Q$, a direct sum of Lie algebras. Let $$\begin{aligned} Q^0={ \left.\left\{ {\beta\in{\mathfrak h}_Q} \,\right|\, {{\langle}\beta,Q{\rangle}\in{\mathbb{Z}}} \right\} }\end{aligned}$$ be the dual lattice of $Q$.
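For concreteness, here is a small rank-one check (an illustrative aside, not taken from the source): if $Q={\mathbb{Z}}\beta$ with ${\langle}\beta,\beta{\rangle}=2\ell$ for some $\ell\in{\mathbb{Z}}_+$ (the shape of lattice that arises later as $\sqrt\ell Q_L$ when ${\mathfrak g}={\mathfrak{sl}}_2$), then $c\beta$ pairs integrally with $Q$ if and only if $2\ell c\in{\mathbb{Z}}$, so that $$\begin{aligned} Q^0=\frac{1}{2\ell}{\mathbb{Z}}\beta=\frac{1}{2\ell}Q,\qquad Q^0/Q\cong{\mathbb{Z}}/2\ell{\mathbb{Z}},\end{aligned}$$ and the minimal $d$ chosen next is $d=2\ell$.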
Choose a minimal $d\in{\mathbb{Z}}_+$, such that $$\begin{aligned} Q^0\subset (1/d)Q.\end{aligned}$$ We fix a total order $<$ on the index set $J$. Let $\epsilon:(1/d)Q\times (1/d)Q\to {\mathbb{C}}^\times$ be the bimultiplicative map defined by $$\begin{aligned} \epsilon(1/d\beta_i,1/d\beta_j)=\begin{cases} 1,&\mbox{if }i\le j,\\ \exp(\pi{\langle}\beta_i,\beta_j{\rangle}\sqrt{-1}/d^2),&\mbox{if }i>j. \end{cases}\end{aligned}$$ Then $\epsilon$ is a $2$-cocycle of $(1/d)Q$ satisfying the following relations: $$\begin{aligned} \epsilon(\beta,\gamma)\epsilon(\gamma,\beta)^{-1}=(-1)^{{\langle}\beta,\gamma{\rangle}},\quad \epsilon(\beta,0)=1=\epsilon(0,\beta)\quad \mbox{for }\beta,\gamma\in Q.\end{aligned}$$ Denote by ${\mathbb{C}}_\epsilon[(1/d)Q]$ the $\epsilon$-twisted group algebra of $(1/d)Q$, which by definition has a designated basis ${ \left.\left\{ {e_\beta} \,\right|\, {\beta\in (1/d)Q} \right\} }$ with relations $$\begin{aligned} e_\beta\cdot e_\gamma=\epsilon(\beta,\gamma)e_{\beta+\gamma}\quad\mbox{for }\beta,\gamma\in (1/d)Q.\end{aligned}$$ For any subset $S\subset (1/d)Q$, we denote $$\begin{aligned} {\mathbb{C}}_\epsilon[S]=\mathop{\mathrm{Span}}_{\mathbb{C}}{ \left.\left\{ {e_\beta} \,\right|\, {\beta\in S} \right\} }.\end{aligned}$$ Then ${\mathbb{C}}_\epsilon[Q]$ is a subalgebra of ${\mathbb{C}}_\epsilon[(1/d)Q]$, and ${\mathbb{C}}_\epsilon[Q^0]$ becomes a ${\mathbb{C}}_\epsilon[Q]$-module under the left multiplication action. Moreover, for each coset $Q+\gamma\in Q^0/Q$, ${\mathbb{C}}_\epsilon[Q+\gamma]$ is a simple ${\mathbb{C}}_\epsilon[Q]$-submodule of ${\mathbb{C}}_\epsilon[Q^0]$, and $$\begin{aligned} {\mathbb{C}}_\epsilon[Q^0]=\bigoplus_{Q+\gamma\in Q^0/Q}{\mathbb{C}}_\epsilon[Q+\gamma]\end{aligned}$$ as ${\mathbb{C}}_\epsilon[Q]$-modules. Make ${\mathbb{C}}_{\epsilon}[Q^0]$ an $\hat {\mathfrak h}_Q$-module by letting $\hat{\mathfrak h}'_Q$ act trivially and letting ${\mathfrak h}_Q$ act by $$\begin{aligned} h e_\beta={\langle}h,\beta{\rangle}e_\beta\ \ \mbox{ for }h\in {\mathfrak h}_Q,\ \beta\in Q^0.\end{aligned}$$ Note that $S(\hat{\mathfrak h}_Q^-)$ is naturally an $\hat{\mathfrak h}_Q$-module of level $1$. For each coset $Q+\gamma\in Q^0/Q$, we set $$\begin{aligned} V_{Q+\gamma}=S(\hat{\mathfrak h}_Q^-)\otimes {\mathbb{C}}_{\epsilon}[Q+\gamma],\end{aligned}$$ the tensor product of $\hat {\mathfrak h}_Q$-modules, which is an $\hat {\mathfrak h}_Q$-module of level $1$. Set $${{\bf 1}}=1\otimes e_0\in V_Q.$$ Identify ${\mathfrak h}_Q$ and ${\mathbb{C}}_{\epsilon}[Q+\gamma]$ as subspaces of $V_{Q+\gamma}$ via the correspondence $$h\mapsto h(-1)\otimes 1\ (h\in{\mathfrak h}_Q) \quad\mbox{and}\quad e_\alpha\mapsto 1\otimes e_\alpha\ (\alpha\in Q+\gamma).$$ For $h\in {\mathfrak h}_Q$, set $$\begin{aligned} h(z)=\sum_{n\in{\mathbb{Z}}}h(n)z^{-n-1}.\end{aligned}$$ On the other hand, for $\alpha\in Q$ set $$\begin{aligned} \label{eq:def-E} E^\pm(\alpha,z)=\exp\left(\sum_{n\in{\mathbb{Z}}_+} \frac{\alpha(\mp n)}{\pm n}z^{\pm n} \right)\end{aligned}$$ on $V_{Q+\gamma}$. For $\alpha\in Q$, define $z^{\alpha}:\ {\mathbb{C}}_{\epsilon}[Q+\gamma]\rightarrow {\mathbb{C}}_{\epsilon}[Q+\gamma][z,z^{-1}]$ by $$\begin{aligned} z^\alpha\cdot e_\beta=z^{{\langle}\alpha,\beta{\rangle}}e_\beta\ \ \ \mbox{ for }\beta\in Q+\gamma.\end{aligned}$$ Then **Theorem 5**. 
*There exists a VA structure on $V_Q$, which is uniquely determined by the conditions that ${{\bf 1}}$ is the vacuum vector and that $$\begin{aligned} &Y_Q(h,z)=h(z),\quad Y_Q(e_\alpha,z)=E^+(\alpha,z)E^-(\alpha,z)e_\alpha z^\alpha,\quad h\in{\mathfrak h}_Q,\,\alpha\in Q.\end{aligned}$$ Moreover, every $V_Q$-module is completely reducible, and for each simple $V_Q$-module $W$, there exists a coset $Q+\gamma\in Q^0/Q$, such that $W\cong V_{Q+\gamma}$.* ## Parafermion VAs {#subsec:ParaVA} **Definition 6**. Let $V$ be a VA, and let $S\subset V$. Define $$\begin{aligned} \mathop{\mathrm{Com}}_V(S)={ \left.\left\{ {v\in V} \,\right|\, {[Y(u,z_1),Y(v,z_2)]=0\quad\mbox{for any }u\in S } \right\} }.\end{aligned}$$ **Remark 7**. **It is immediate from [\[eq:Jacobi\]](#eq:Jacobi){reference-type="eqref" reference="eq:Jacobi"} that $\mathop{\mathrm{Com}}_V(S)$ is a subVA of $V$.** Let ${\mathfrak g},{\mathfrak h}$ be as in Subsection [2.2](#subsec:AffVA){reference-type="ref" reference="subsec:AffVA"}. In this subsection, we fix an $\ell\in{\mathbb{Z}}_+$. Define $$\begin{aligned} K_{\hat{\mathfrak g}}^\ell=\mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g}}^\ell}({\mathfrak h}).\end{aligned}$$ It is proved in [@DW-para-structure-gen] (see [@DR-para-rational]) that **Theorem 8**. *For each $i\in I$, define $$\begin{aligned} W_i^2=&(x_i^+)_{-1}x_i^--\frac{1}{2}\partial h_i-\frac{r_i}{2r\ell}(h_i)_{-1}h_i,\\ W_i^3=&(x_i^+)_{-2}x_i^--(x_i^+)_{-1}\partial x_i^--\frac{2r_i}{r\ell}(h_i)_{-1}(x_i^+)_{-1}x_i^-\\ &+\frac{1}{6}\partial^2h_i+\frac{r_i}{r\ell}(h_i)_{-1}\partial h_i +\frac{2r_i^2}{3r^2\ell^2}(h_i)_{-1}^2h_i.\nonumber\end{aligned}$$ Then $K_{\hat{\mathfrak g}}^\ell$ is generated by the set $$\begin{aligned} { \left.\left\{ {W_i^2,\,W_i^3} \,\right|\, {i\in I} \right\} }.\end{aligned}$$* **Remark 9**. * *For each $i\in I$, we have that $$\begin{aligned} \partial W_i^2+W_i^3=&(x_i^+)_{-2}x_i^-+(x_i^+)_{-1}\partial x_i^--\frac{1}{2}\partial^2h_i\nonumber\\ &-\frac{r_i}{2r\ell}(h_i)_{-2}h_i-\frac{r_i}{2r\ell}(h_i)_{-1}\partial h_i\nonumber\\ &+(x_i^+)_{-2}x_i^--(x_i^+)_{-1}\partial x_i^--\frac{2r_i}{r\ell}(h_i)_{-1}(x_i^+)_{-1}x_i^-\nonumber\\ &+\frac{1}{6}\partial^2h_i+\frac{r_i}{r\ell}(h_i)_{-1}\partial h_i +\frac{2r_i^2}{3r^2\ell^2}(h_i)_{-1}^2h_i\nonumber\\ =&2(x_i^+)_{-2}x_i^--\frac{2r_i}{r\ell}(h_i)_{-1}(x_i^+)_{-1}x_i^--\frac{1}{3}\partial^2h_i\nonumber\\ &-\frac{r_i}{2r\ell}(h_i)_{-2}h_i+\frac{r_i}{2r\ell}(h_i)_{-1}\partial h_i +\frac{2r_i^2}{3r^2\ell^2}(h_i)_{-1}^2h_i\nonumber\\ =&2(x_i^+)_{-2}x_i^--\frac{2r_i}{r\ell}(h_i)_{-1}(x_i^+)_{-1}x_i^--\frac{1}{3}\partial^2h_i +\frac{2r_i}{3r^2\ell^2}(h_i)_{-1}^2h_i.\end{aligned}$$ The parafermion VA $K_{\hat{\mathfrak g}}^\ell$ is generated by ${ \left.\left\{ {W_i^2,\,\partial W_i^2+W_i^3} \,\right|\, {i\in I} \right\} }$.* * Let $Q_L$ be the lattice generated by long roots of ${\mathfrak g}$. We get from [@Kac-book]\*§6.7 that $$\begin{aligned} Q_L=\oplus_{i\in I}{\mathbb{Z}}r/r_i\alpha_i.\end{aligned}$$ Then $\sqrt\ell Q_L$ is a positive definite even lattice of finite rank for any $\ell\in{\mathbb{Z}}_+$. It is proved in [@DW-para-structure-double-comm] that **Theorem 10**. 
*$L_{\hat{\mathfrak g}}^\ell$ has a subVA isomorphic to $V_{\sqrt\ell Q_L}$, and $$\begin{aligned} \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g}}^\ell}\left(V_{\sqrt\ell Q_L}\right)=K_{\hat{\mathfrak g}}^\ell,\quad \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g}}^\ell}\left(K_{\hat{\mathfrak g}}^\ell\right)=V_{\sqrt\ell Q_L}.\end{aligned}$$* # Quantum vertex algebras {#sec:qvas} In this paper, we let $\hbar$ be a formal variable, and let ${\mathbb{C}}[[\hbar]]$ be the ring of formal power series in $\hbar$. A ${\mathbb{C}}[[\hbar]]$-module $V$ is said to be *torsion-free* if $\hbar v\ne 0$ for every $0\ne v\in V$, and said to be *separated* if $\cap_{n\ge1}\hbar^n V=0$. For a ${\mathbb{C}}[[\hbar]]$-module $V$, using the subsets $v+\hbar^nV$ for $v\in V$, $n\ge 1$ as a basis of open sets, one obtains a topology on $V$, which is called the *$\hbar$-adic topology*. A ${\mathbb{C}}[[\hbar]]$-module $V$ is said to be *$\hbar$-adically complete* if every Cauchy sequence in $V$ with respect to this $\hbar$-adic topology has a limit in $V$. A ${\mathbb{C}}[[\hbar]]$-module $V$ is *topologically free* if $V=V_0[[\hbar]]$ for some vector space $V_0$ over ${\mathbb{C}}$. It is known that a ${\mathbb{C}}[[\hbar]]$-module is topologically free if and only if it is torsion-free, separated, and $\hbar$-adically complete (see [@Kassel-topologically-free]). For another topologically free ${\mathbb{C}}[[\hbar]]$-module $U=U_0[[\hbar]]$, we recall the completed tensor product $$\begin{aligned} U\widehat{\otimes}V=(U_0\otimes V_0)[[\hbar]].\end{aligned}$$ ## $\hbar$-adic quantum VAs We view a vector space as a ${\mathbb{C}}[[\hbar]]$-module by letting $\hbar=0$. Fix a ${\mathbb{C}}[[\hbar]]$-module $W$. For $k\in{\mathbb{Z}}_+$ and formal variables $z_1,\dots,z_k$, we define $$\begin{aligned} {\mathcal{E}}^{(k)}(W;z_1,\dots,z_k)=\mathop{\mathrm{Hom}}_{{\mathbb{C}}[[\hbar]]}\left(W,W((z_1,\dots,z_k))\right).\end{aligned}$$ We will denote ${\mathcal{E}}^{(k)}(W;z_1,\dots,z_k)$ by ${\mathcal{E}}^{(k)}(W)$ if there is no ambiguity, and will denote ${\mathcal{E}}^{(1)}(W)$ by ${\mathcal{E}}(W)$. An ordered sequence $(a_1(z),\dots,a_k(z))$ in ${\mathcal{E}}(W)$ is said to be *compatible* ([@Li-nonlocal]) if there exists an $m\in{\mathbb{Z}}_+$, such that $$\begin{aligned} \left(\prod_{1\le i<j\le k}(z_i-z_j)^m\right)a_1(z_1)\cdots a_k(z_k)\in{\mathcal{E}}^{(k)}(W).\end{aligned}$$ Now, we assume that $W=W_0[[\hbar]]$ is topologically free. Define $$\begin{aligned} {\mathcal{E}}_\hbar^{(k)}(W;z_1,\dots,z_k)=\mathop{\mathrm{Hom}}_{{\mathbb{C}}[[\hbar]]}\left(W,W_0((z_1,\dots,z_k))[[\hbar]]\right).\end{aligned}$$ Similarly, we denote ${\mathcal{E}}_\hbar^{(k)}(W;z_1,\dots,z_k)$ by ${\mathcal{E}}_\hbar^{(k)}(W)$ if there is no ambiguity, and denote ${\mathcal{E}}_\hbar^{(1)}(W)$ by ${\mathcal{E}}_\hbar(W)$ for short. We note that ${\mathcal{E}}_\hbar^{(k)}(W)={\mathcal{E}}^{(k)}(W_0)[[\hbar]]$ is topologically free.
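To illustrate the difference between ${\mathcal{E}}(W)$ and ${\mathcal{E}}_\hbar(W)$ (a simple example added here for orientation, not taken from the source), consider the scalar series $$\begin{aligned} a(z)=\sum_{m\ge 0}\hbar^m z^{-m}1_W.\end{aligned}$$ For every $w\in W=W_0[[\hbar]]$, the coefficient of each power of $\hbar$ in $a(z)w$ is a Laurent polynomial in $z$ with coefficients in $W_0$, so $a(z)w\in W_0((z))[[\hbar]]$ and hence $a(z)\in{\mathcal{E}}_\hbar(W)$; on the other hand, $a(z)w$ involves infinitely many negative powers of $z$ whenever $w\ne 0$, so $a(z)\notin{\mathcal{E}}(W)$.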
For $n,k\in{\mathbb{Z}}_+$, the quotient map from $W$ to $W/\hbar^nW$ induces the following ${\mathbb{C}}[[\hbar]]$-module map $$\begin{aligned} \widetilde{\pi}_n^{(k)}:\mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W)[[z_1^{\pm 1},\dots,z_k^{\pm 1}]] \to \mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W/\hbar^nW)[[z_1^{\pm 1},\dots,z_k^{\pm 1}]].\end{aligned}$$ For $A(z_1,z_2),B(z_1,z_2)\in\mathop{\mathrm{Hom}}_{{\mathbb{C}}[[\hbar]]}(W,W_0((z_1))((z_2))[[\hbar]])$, we write $A(z_1,z_2)\sim B(z_2,z_1)$ if for each $n\in{\mathbb{Z}}_+$ there exists $k\in{\mathbb{N}}$, such that $$\begin{aligned} (z_1-z_2)^k\widetilde{\pi}_n^{(2)}(A(z_1,z_2))=(z_1-z_2)^k\widetilde{\pi}_n^{(2)}(B(z_2,z_1)).\end{aligned}$$ Let $Z(z_1,z_2):{\mathcal{E}}_\hbar(W)\widehat{\otimes}{\mathcal{E}}_\hbar(W)\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]\to \mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W)[[z_1^{\pm 1},z_2^{\pm 1}]]$ be defined by $$\begin{aligned} Z(z_1,z_2)(a(z)\otimes b(z)\otimes f(z))=\iota_{z_1,z_2}f(z_1-z_2)a(z_1)b(z_2).\end{aligned}$$ For each $k\in{\mathbb{Z}}_+$, the inverse system $$\begin{aligned} \xymatrix{ 0&W/\hbar W\ar[l]&W/\hbar^2W\ar[l]&W/\hbar^3W\ar[l]&\cdots\ar[l] }\end{aligned}$$ induces the following inverse system $$\begin{aligned} \label{eq:E-h-inv-sys} \xymatrix{ 0&{\mathcal{E}}^{(k)}(W/\hbar W)\ar[l]&{\mathcal{E}}^{(k)}(W/\hbar^2W)\ar[l]&\cdots\ar[l] }\end{aligned}$$ Then ${\mathcal{E}}_\hbar^{(k)}(W)$ is isomorphic to the inverse limit of [\[eq:E-h-inv-sys\]](#eq:E-h-inv-sys){reference-type="eqref" reference="eq:E-h-inv-sys"}. The map $\widetilde{\pi}_n^{(k)}$ induces a ${\mathbb{C}}[[\hbar]]$-module map $\pi_n^{(k)}:{\mathcal{E}}_\hbar^{(k)}(W)\to {\mathcal{E}}^{(k)}(W/\hbar^nW)$. It is easy to verify that $\ker \pi_n^{(k)}=\hbar^n{\mathcal{E}}_\hbar^{(k)}(W)$. We will denote $\pi_n^{(1)}$ by $\pi_n$ for short. An ordered sequence $(a_1(z),\dots,a_r(z))$ in ${\mathcal{E}}_\hbar(W)$ is called *$\hbar$-adically compatible* if for every $n\in{\mathbb{Z}}_+$, the sequence $(\pi_n(a_1(z)),\dots,\pi_n(a_r(z)))$ in ${\mathcal{E}}(W/\hbar^nW)$ is compatible. A subset $U$ of ${\mathcal{E}}_\hbar(W)$ is called *$\hbar$-adically compatible* if every finite sequence in $U$ is $\hbar$-adically compatible. And a subset $U$ of ${\mathcal{E}}_\hbar(W)$ is called *$\hbar$-adically $S$-local* if for any $a(z),b(z)\in U$, there is $A(z)\in ({\mathbb{C}}U\otimes{\mathbb{C}}U\otimes{\mathbb{C}}((z)))[[\hbar]]$ such that $$\begin{aligned} a(z_1)b(z_2)\sim Z(z_2,z_1)(A(z)),\end{aligned}$$ where ${\mathbb{C}}U$ denotes the subspace spanned by $U$. It is easy to see that every $\hbar$-adically $S$-local subset is $\hbar$-adically compatible. Let $(a(z),b(z))$ in ${\mathcal{E}}_\hbar(W)$ be $\hbar$-adically compatible.
That is, for any $n\in{\mathbb{Z}}_+$, we have $$\begin{aligned} (z_1-z_2)^{k_n}\pi_n(a(z_1))\pi_n(b(z_2))\in{\mathcal{E}}^{(2)}(W/\hbar^nW)\quad\mbox{for some }k_n\in{\mathbb{N}}.\end{aligned}$$ We recall the following vertex operator map ([@Li-h-adic]): $$\begin{aligned} Y_{\mathcal{E}}(a(z)&,z_0)b(z)=\sum_{n\in{\mathbb{Z}}}a(z)_nb(z)z_0^{-n-1}\\ =&\varinjlim_{n>0}z_0^{-k_n}\left.\left((z_1-z)^{k_n}\pi_n(a(z_1))\pi_n(b(z))\right)\right|_{z_1=z+z_0}.\nonumber\end{aligned}$$ An *$\hbar$-adic nonlocal VA* ([@Li-h-adic]) is a topologically free ${\mathbb{C}}[[\hbar]]$-module $V$ equipped with a vacuum vector ${{\bf 1}}$ such that the vacuum property [\[eq:vacuum-property\]](#eq:vacuum-property){reference-type="eqref" reference="eq:vacuum-property"} hold, and a vertex operator map $Y(\cdot,z):V\to {\mathcal{E}}_\hbar(V)$ such that $$\begin{aligned} { \left.\left\{ {Y(u,z)} \,\right|\, {u\in V} \right\} }\subset{\mathcal{E}}_\hbar(V)\quad\mbox{is $\hbar$-adically compatible},\end{aligned}$$ and that $$\begin{aligned} Y_{\mathcal{E}}(Y(u,z),z_0)Y(v,z)=Y(Y(u,z_0)v,z)\quad\mbox{for }u,v\in V.\end{aligned}$$ We denote by $\partial$ the canonical derivation of $V$: $$\begin{aligned} u\to\partial u=\lim_{z\to 0}\frac{d}{dz}Y(u,z){{\bf 1}}.\end{aligned}$$ In addition, a *$V$-module* is a topologically free ${\mathbb{C}}[[\hbar]]$-module $W$ equipped with a vertex operator map $Y_W(\cdot,z):V\to {\mathcal{E}}_\hbar(W)$, such that $Y_W({{\bf 1}},z)=1_W$, $$\begin{aligned} { \left.\left\{ {Y_W(u,z)} \,\right|\, {u\in V} \right\} }\subset{\mathcal{E}}_\hbar(W)\quad\mbox{is $\hbar$-adically compatible},\end{aligned}$$ and that $$\begin{aligned} Y_{\mathcal{E}}(Y_W(u,z),z_0)Y_W(v,z)=Y_W(Y(u,z_0)v,z)\quad\mbox{for }u,v\in V.\end{aligned}$$ An *$\hbar$-adic weak quantum VA* ([@Li-h-adic]) is an $\hbar$-adic nonlocal VA $V$, such that ${ \left.\left\{ {Y(u,z)} \,\right|\, {u\in V} \right\} }$ is $\hbar$-adically $S$-local. And an *$\hbar$-adic quantum VA* is an $\hbar$-adic weak quantum VA $V$ equipped with a ${\mathbb{C}}[[\hbar]]$-module map (called a *quantum Yang-Baxter operator*) $$\begin{aligned} S(z):V\widehat{\otimes}V\to V\widehat{\otimes}V\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]],\end{aligned}$$ which satisfies the *shift condition*: $$\begin{aligned} \label{eq:qyb-shift} [\partial\otimes 1,S(z)]=-\frac{d}{dz}S(z),\end{aligned}$$ the *quantum Yang-Baxter equation*: $$\begin{aligned} \label{eq:qyb} S^{12}(z_1)S^{13}(z_1+z_2)S^{23}(z_2)=S^{23}(z_2)S^{13}(z_1+z_2)S^{12}(z_1),\end{aligned}$$ and the *unitarity condition*: $$\begin{aligned} \label{eq:qyb-unitary} S^{21}(z)S(-z)=1,\end{aligned}$$ satisfying the following conditions: \(1\) The *vacuum property*: $$\begin{aligned} \label{eq:qyb-vac-id} S(z)({{\bf 1}}\otimes v)={{\bf 1}}\otimes v,\quad \mbox{for }v\in V. \end{aligned}$$ \(2\) The *$S$-locality*: For any $u,v\in V$, one has $$\begin{aligned} \label{eq:qyb-locality} Y(u,z_1)Y(v,z_2)\sim Y(z_2)(1\otimes Y(z_1))S(z_2-z_1)(v\otimes u). \end{aligned}$$ \(3\) The *hexagon identity*: $$\begin{aligned} \label{eq:qyb-hex-id} S(z_1)Y^{12}(z_2)=Y^{12}(z_2)S^{23}(z_1)S^{13}(z_1+z_2). \end{aligned}$$ **Remark 11**. **For $u,v\in V$, we have that $$\begin{aligned} &Y(u,z_1)Y(v,z_2)-Y(z_2)(1\otimes Y(z_1))S(z_2-z_1)(v\otimes u)\\ &\quad=Y(Y(u,z_1-z_2)^-v-Y(u,-z_2+z_1)^-v,z_2).\end{aligned}$$** **Remark 12**. * *Let $(V,S(z))$ be a quantum VA. 
Then $$\begin{aligned} &S(z)(v\otimes{{\bf 1}})=v\otimes{{\bf 1}}\quad \mbox{for }v\in V,\label{eq:qyb-vac-id-alt}\\ &[1\otimes\partial,S(z)]=\frac{d}{dz}S(z),\label{eq:qyb-shift-alt}\\ &S(z_1)(1\otimes Y(z_2))=(1\otimes Y(z_2))S^{12}(z_1-z_2)S^{13}(z_1)\label{eq:qyb-hex-id-alt},\\ &S(z)f(\partial\otimes 1)=f\left(\partial\otimes 1+\frac{\partial}{\partial {z}}\right)S(z)\quad \mbox{for }f(z)\in{\mathbb{C}}[z][[\hbar]],\label{eq:qyb-shift-total1}\\ &S(z)f(1\otimes\partial)=f\left(1\otimes\partial-\frac{\partial}{\partial {z}}\right)S(z)\quad \mbox{for }f(z)\in{\mathbb{C}}[z][[\hbar]].\label{eq:qyb-shift-total2}\end{aligned}$$* * For a topologically free ${\mathbb{C}}[[\hbar]]$-module $V$ and a submodule $U\subset V$, we denote by $\bar U$ the closure of $U$ and set $$\begin{aligned} [U]={ \left.\left\{ {u\in V} \,\right|\, {\hbar^n u\in U\,\,\mbox{for some }n\in{\mathbb{Z}}_+} \right\} }.\end{aligned}$$ Then $\overline{[U]}$ is the minimal closed submodule containing $U$ that is invariant under the operation $[\cdot]$. Suppose further that $V$ is an $\hbar$-adic nonlocal VA, and ${{\bf 1}}\in U$. We set $$\begin{aligned} U'=\mathop{\mathrm{Span}}_{{\mathbb{C}}[[\hbar]]}{ \left.\left\{ {u(m)v} \,\right|\, {u,v\in U,\,m\in{\mathbb{Z}}} \right\} },\quad\mbox{and}\quad \widehat{U}=\overline{[U']},\end{aligned}$$ where $u(m)$ is defined as follows $$\begin{aligned} \sum_{m\in{\mathbb{Z}}}u(m)z^{-m-1}=Y(u,z).\end{aligned}$$ For a subset $S\subset V$, we define $$\begin{aligned} S^{(1)}=\overline{\left[\mathop{\mathrm{Span}}_{{\mathbb{C}}[[\hbar]]}S\cup\{{{\bf 1}}\}\right]},\end{aligned}$$ and then inductively define $$\begin{aligned} S^{(n+1)}=\widehat{S^{(n)}}\quad\mbox{for }n\ge 1.\end{aligned}$$ Then we have $$\begin{aligned} {{\bf 1}}\in S^{(1)}\subset S^{(2)}\subset\cdots.\end{aligned}$$ Set $$\begin{aligned} {\langle}S{\rangle}=\overline{\left[\cup_{n\ge 1}S^{(n)}\right]}.\end{aligned}$$ Then ${\langle}S{\rangle}$ is the unique minimal closed $\hbar$-adic nonlocal vertex subalgebra of $V$ containing $S$, such that $[{\langle}S{\rangle}]={\langle}S{\rangle}$. In addition, we say that $V$ is generated by $S$ if ${\langle}S{\rangle}=V$. We list some technical results that will be used later on. **Proposition 13**. *Let $(V,Y,{{\bf 1}})$ be an $\hbar$-adic quantum VA with quantum Yang-Baxter operator $S(z)$, and let $(W,Y_W)$ be a $V$-module. Assume that $U$ is a subset of $V$, such that $$\begin{aligned} \label{eq:s-closed} S(z)(U \widehat{\otimes}U)\subset U\widehat{\otimes}U\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]\end{aligned}$$ and $V$ is generated by $U$. Suppose further that there exists $w^+\in W$, such that $$\begin{aligned} \label{eq:vacuum-like} Y_W(u,z)w^+\in W[[z]]\quad\mbox{for }u\in U.\end{aligned}$$ Then the relation [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} holds for all $u\in V$.* *Proof.* We note that for any subset $S$ of $V$ such that the two relations [\[eq:s-closed\]](#eq:s-closed){reference-type="eqref" reference="eq:s-closed"} and [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} hold for $S$, these two relations hold for $$\begin{aligned} S\cup \{{{\bf 1}}\},\quad \mathop{\mathrm{Span}}_{{\mathbb{C}}[[\hbar]]}S,\quad [S],\quad\mbox{and}\quad\bar S.\end{aligned}$$ It is easy to see that [\[eq:s-closed\]](#eq:s-closed){reference-type="eqref" reference="eq:s-closed"} and [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} hold for $U^{(1)}$.
Suppose these two relations hold for $U^{(n)}$. It is immediate from [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"} and [\[eq:qyb-hex-id-alt\]](#eq:qyb-hex-id-alt){reference-type="eqref" reference="eq:qyb-hex-id-alt"} that the relation [\[eq:s-closed\]](#eq:s-closed){reference-type="eqref" reference="eq:s-closed"} holds for $U^{(n+1)}$. Next we show that [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} also holds for all $u\in U^{(n+1)}$. Let $u,v\in U^{(n)}$. From the $S$-locality, we get that for each $n\ge 0$, there exists $k\ge 0$, such that $$\begin{aligned} &(z_1-z_2)^kY_W(u,z_1)Y_W(v,z_2)w^+\\ \equiv& (z_1-z_2)^kY_W^{12}(z_2)Y_W^{23}(z_1)S^{12}(z_2-z_1)(v\otimes u\otimes w^+)\end{aligned}$$ modulo $\hbar^n W$. From the relation [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"}, we have that $$\begin{aligned} (z_1-z_2)^kY_W(u,z_1)Y_W(v,z_2)w^+\in W[[z]]+\hbar^n W[[z,z^{-1}]].\end{aligned}$$ Then from the weak associativity, we get that $$\begin{aligned} Y_W(Y(u,z_0)v,z_2)w^+\equiv z_0^{-k}\left((z_1-z_2)^kY_W(u,z_1)Y_W(v,z_2)w^+\right)|_{z_1=z_2+z_0}\end{aligned}$$ lies in $W[[z_2]]((z_0))+\hbar^n W[[z_0^{\pm 1},z_2^{\pm 1}]]$. Since $n$ is arbitrary, we have that $$\begin{aligned} Y_W(Y(u,z_0)v,z_2)w^+\in W[[z_2,z_0,z_0^{-1}]].\end{aligned}$$ Therefore, the relation [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} holds for all $u\in \left(U^{(n)}\right)'$, and hence holds for all $u\in \widehat{U^{(n)}}=U^{(n+1)}$. As $$\begin{aligned} U^{(n)}\subset U^{(n+1)}\quad\mbox{for }n\ge 1,\quad\mbox{and}\quad V={\langle}U{\rangle}=\overline{\left[ \cup_{n\ge 1}U^{(n)} \right]},\end{aligned}$$ we get that the relation [\[eq:vacuum-like\]](#eq:vacuum-like){reference-type="eqref" reference="eq:vacuum-like"} holds for all $u\in V$. ◻ **Proposition 14**. *Let $(U,Y_U,{{\bf 1}}_U)$ and $(V,Y_V,{{\bf 1}}_V)$ be two $\hbar$-adic nonlocal VAs, and let $S$ be a ${\mathbb{C}}[[\hbar]]$-submodule of $U$ such that $U$ is generated by $S$. Assume that there exists a $U$-module structure $Y_V^U$ on $V$ such that $$\begin{aligned} \label{eq:vacuum-like-total} Y_V^U(u,z){{\bf 1}}_V\in V[[z]].\end{aligned}$$ Suppose further that there exists a ${\mathbb{C}}[[\hbar]]$-module map $\psi^0:S\to V$ such that $$\begin{aligned} \label{eq:Y-V-U=Y-V-psi} Y_V^U(s,z)=Y_V\left(\psi^0(s),z\right)\quad\mbox{for }s\in S.\end{aligned}$$ Then there exists an $\hbar$-adic nonlocal VA homomorphism $\psi:U\to V$, such that $$\begin{aligned} \psi(s)=\psi^0(s)\quad\mbox{for }s\in S.\end{aligned}$$* *Proof.* From [\[eq:vacuum-like-total\]](#eq:vacuum-like-total){reference-type="eqref" reference="eq:vacuum-like-total"}, we define $\psi:U\to V$ by $$\begin{aligned} \psi(u)=\lim_{z\to 0} Y_V^U(u,z){{\bf 1}}_V\quad\mbox{for }u\in U.\end{aligned}$$ We deduce from [\[eq:Y-V-U=Y-V-psi\]](#eq:Y-V-U=Y-V-psi){reference-type="eqref" reference="eq:Y-V-U=Y-V-psi"} that $$\begin{aligned} \psi(s)=\lim_{z\to 0}Y_V^U(s,z){{\bf 1}}_V=\lim_{z\to 0}Y_V(\psi^0(s),z){{\bf 1}}_V=\psi^0(s)\quad\mbox{for }s\in S.\end{aligned}$$ Let $u,v\in U$, and let $n$ be an arbitrary non-negative integer. 
From the weak associativity, we get that there exists $k\ge 0$, such that $$\begin{aligned} \label{eq:vacuum-like-sp-temp1} &(z_0+z_2)^kY_V^U(u,z_0+z_2)Y_V^U(v,z_2){{\bf 1}}_V\nonumber\\ \equiv &(z_0+z_2)^kY_V^U(Y_U(u,z_0)v,z_2){{\bf 1}}_V \mod \hbar^n V[[z_0^{\pm 1},z_2^{\pm 1}]].\end{aligned}$$ From [\[eq:vacuum-like-total\]](#eq:vacuum-like-total){reference-type="eqref" reference="eq:vacuum-like-total"}, we get that $$\begin{aligned} Y_V^U(Y_U(u,z_0)v,z_2){{\bf 1}}_V\in V((z_0,z_2))+\hbar^n V[[z_0^{\pm 1},z_2^{\pm 1}]].\end{aligned}$$ Combining this with [\[eq:vacuum-like-sp-temp1\]](#eq:vacuum-like-sp-temp1){reference-type="eqref" reference="eq:vacuum-like-sp-temp1"}, we get that $$\begin{aligned} Y_V^U(u,z_0+z_2)Y_V^U(v,z_2){{\bf 1}}_V\equiv Y_V^U(Y_U(u,z_0)v,z_2){{\bf 1}}_V \mod \hbar^n V[[z_0^{\pm 1},z_2^{\pm 1}]].\end{aligned}$$ Since the choice of $n$ is arbitrary, we have that $$\begin{aligned} Y_V^U(u,z_0+z_2)Y_V^U(v,z_2){{\bf 1}}_V= Y_V^U(Y_U(u,z_0)v,z_2){{\bf 1}}_V.\end{aligned}$$ Taking $z_2\to 0$, we get that $$\begin{aligned} Y_V^U(u,z_0)\psi(v)=\psi\left(Y_U(u,z_0)v\right).\end{aligned}$$ For any $s\in S$ and $v\in U$, we have that $$\begin{aligned} Y_V(\psi(s),z)\psi(v)=Y_V(\psi^0(s),z)\psi(v)=Y_V^U(s,z)\psi(v)=\psi\left(Y_U(s,z)v\right).\end{aligned}$$ Since $U$ is generated by $S$, $\psi$ is an $\hbar$-adic nonlocal VA homomorphism. ◻ ## Twisted tensor products In this subsection, we first present the direct $\hbar$-adic analogue of the *twisted tensor products* given in [@LS-twisted-tensor; @S-iter-twisted-tensor]. Then we list some formulas that will be used later on. **Definition 15**. Let $V_1$ and $V_2$ be two $\hbar$-adic nonlocal VAs. A *twisting operator* for the ordered pair $(V_1,V_2)$ is a ${\mathbb{C}}[[\hbar]]$-module map $$\begin{aligned} S(z):V_1\widehat{\otimes}V_2\to V_1\widehat{\otimes}V_2\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]],\end{aligned}$$ satisfying the following conditions: $$\begin{aligned} &S(z)({{\bf 1}}\otimes v)={{\bf 1}}\otimes v\quad\mbox{for }v\in V_2,\\ &S(z)(u\otimes{{\bf 1}})=u\otimes{{\bf 1}}\quad\mbox{for }u\in V_1,\\ &S(z_1)Y_1^{12}(z_2)=Y_1^{12}(z_2) S^{23}(z_1)S^{13}(z_1+z_2),\\ &S(z_1)Y_2^{23}(z_2)=Y_2^{23}(z_2)S^{12}(z_1-z_2)S^{13}(z_1),\end{aligned}$$ where $Y_1$ and $Y_2$ are the vertex operators of $V_1$ and $V_2$, respectively. **Remark 16**. **Given a twisting operator $S(z)$ for the ordered pair $(V_1,V_2)$, $S(z)\sigma$ is a twisting operator defined in [@LS-twisted-tensor Definition 2.2], where $\sigma:V_1\widehat{\otimes}V_2\to V_2\widehat{\otimes}V_1$ is the flip map. Conversely, let $R(z)$ be a twisting operator for the ordered pair $(V_1,V_2)$ in the sense of [@LS-twisted-tensor Definition 2.2]. Then $R(z)\sigma$ is a twisting operator for $(V_1,V_2)$.** $S(z)$ is said to be *invertible* if it is invertible as a ${\mathbb{C}}((z))[[\hbar]]$-linear map on $V_1\widehat{\otimes}V_2\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]$. Then $S^{21}(-z)^{-1}$ is an invertible twisting operator for the ordered pair $(V_2,V_1)$ ([@LS-twisted-tensor Lemma 2.3]). **Theorem 17**. *Define $Y_S(z):(V_1\widehat{\otimes}V_2)\otimes(V_1\widehat{\otimes}V_2)\to (V_1\widehat{\otimes}V_2)[[z,z^{-1}]]$ by $$\begin{aligned} Y_S(z)=(Y_1(z)\otimes Y_2(z))S^{23}(-z)\sigma,\end{aligned}$$ where $Y_1$ and $Y_2$ are the vertex operators of $V_1$ and $V_2$, respectively. Then $(V_1\widehat{\otimes}V_2,Y_S,{{\bf 1}}\otimes{{\bf 1}})$ carries the structure of a nonlocal VA.
Moreover, we denote this nonlocal VA by $V_1\widehat{\otimes}_S V_2$.* **Theorem 18**. *Let $V_1,V_2$ and $V_3$ be nonlocal VAs, $S_1(z),S_2(z)$ and $S_3(z)$ twisting operators for ordered pairs $(V_1,V_2)$, $(V_2,V_3)$ and $(V_1,V_3)$, respectively. Suppose that $S_1(z),S_2(z)$ and $S_3(z)$ satisfy the following hexagon equation $$\begin{aligned} \label{eq:twisting-op-qybeq} S_1^{12}(z_1)S_3^{13}(z_1+z_2)S_2^{23}(z_2)=S_2^{23}(z_2)S_3^{13}(z_1+z_2)S_1^{12}(z_1).\end{aligned}$$ Then we have that $$\begin{aligned} T_1(z)=S_2^{23}(z)S_3^{13}(z)\quad\mbox{and}\quad T_2(z)=S_1^{12}(z)S_3^{13}(z)\end{aligned}$$ are twisting operators for ordered pairs $(V_1\widehat{\otimes}_{S_1} V_2, V_3)$ and $(V_1, V_2\widehat{\otimes}_{S_2}V_3)$, respectively. Moreover, the two twisted tensor products $(V_1\widehat{\otimes}_{S_1}V_2)\widehat{\otimes}_{T_1}V_3$ and $V_1\widehat{\otimes}_{T_2}(V_2\widehat{\otimes}_{S_2}V_3)$ are equivalent.* **Theorem 19**. *Let $V_1,\dots,V_n$ be $\hbar$-adic nonlocal VAs, and let $S_{ij}(z):V_i\widehat{\otimes}V_j\to V_i\widehat{\otimes}V_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]$ be twisting operators for every $1\le i<j\le n$, such that for any $i<j<k$ the twisting operators $S_{ij}(z)$, $S_{jk}(z)$ and $S_{ik}(z)$ satisfy the hexagon equation [\[eq:twisting-op-qybeq\]](#eq:twisting-op-qybeq){reference-type="eqref" reference="eq:twisting-op-qybeq"}. Let $$\begin{aligned} S_{i,\{n-1,n\}}(z)=S_{i,n-1}^{12}(z)S_{in}^{13}(z)\end{aligned}$$ be twisting operators for ordered pairs $(V_i, V_{n-1}\widehat{\otimes}_{S_{n-1,n}}V_n)$, ($i=1,2,\dots,n-2$). Then for every $1<i<j\le n-2$, the twisting operators $S_{ij}(z)$, $S_{j,\{n-1,n\}}(z)$ and $S_{i,\{n-1,n\}}(z)$ satisfy the hexagon equation [\[eq:twisting-op-qybeq\]](#eq:twisting-op-qybeq){reference-type="eqref" reference="eq:twisting-op-qybeq"}. Moreover, for disjoint subsets $K_1,K_2,K_3\subset\{1,2,\dots,n\}$, such that $k_1<k_2<k_3$ for all $k_i\in K_i$, one has the (inductively defined) twisting operator $$\begin{aligned} &S_{K_1\cup K_2,K_3}(z)=S_{K_2,K_3}^{23}(z)S_{K_1,K_3}^{13}(z),\\ &S_{K_1,K_2\cup K_3}(z)=S_{K_1,K_2}^{12}(z)S_{K_1,K_3}^{13}(z),\end{aligned}$$ and the (inductively defined) $\hbar$-adic nonlocal VA $$\begin{aligned} V_{K_1\cup K_2\cup K_3}:=& V_{K_1}\widehat{\otimes}_{S_{K_1,K_2\cup K_3}} \left(V_{K_2}\widehat{\otimes}_{S_{K_2,K_3}}V_{K_3}\right)\\ =&\left(V_{K_1}\widehat{\otimes}_{S_{K_1,K_2}}V_{K_2}\right)\widehat{\otimes}_{S_{K_1\cup K_2,K_3}}V_{K_3}.\end{aligned}$$ Furthermore, we also denote the $\hbar$-adic nonlocal VA $V_{\{1,2,\dots,n\}}$ by $$\begin{aligned} V_1\widehat{\otimes}_{S_{12}}V_2\widehat{\otimes}_{S_{23}}\cdots\widehat{\otimes}_{S_{n-1,n}}V_n.\end{aligned}$$* We generalize the notion of $\hbar$-adic quantum VAs as follows. **Definition 20**. Let $(V_i,Y_i,{{\bf 1}})$ ($1\le i\le n$) be $\hbar$-adic nonlocal VAs. 
We call a family of ${\mathbb{C}}[[\hbar]]$-module maps $S_{ij}(z):V_i\widehat{\otimes}V_j\to V_i\widehat{\otimes}V_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]$ ($1\le i,j\le n$) *quantum Yang-Baxter operators* of the $n$-tuple $(V_1,\dots, V_n)$, if $$\begin{aligned} &S_{ij}(z)(v\otimes{{\bf 1}})=v\otimes{{\bf 1}}\quad\mbox{for }v\in V_i,\label{eq:multqyb-vac1}\\ &S_{ij}(z)({{\bf 1}}\otimes u)={{\bf 1}}\otimes u\quad\mbox{for }u\in V_j,\label{eq:multqyb-vac2}\\ &[\partial\otimes 1,S_{ij}(z)]=-\frac{d}{dz}S_{ij}(z), \quad [1\otimes\partial, S_{ij}(z)]=\frac{d}{dz}S_{ij}(z),\label{eq:multqyb-der-shift}\\ &S_{ji}^{21}(z)S_{ij}(-z)=1,\label{eq:multqyb-unitary}\\ &S_{ij}(z_1)Y_i^{12}(z_2)=Y_i^{12}(z_2)S_{ij}^{23}(z_1)S_{ij}^{13}(z_1+z_2),\label{eq:multqyb-hex1}\\ &S_{ij}(z_1)Y_j^{23}(z_2)=Y_j^{23}(z_2)S_{ij}^{12}(z_1-z_2)S_{ij}^{13}(z_1),\label{eq:multqyb-hex2}\\ &S_{ij}^{12}(z_1)S_{ik}^{13}(z_1+z_2)S_{jk}^{23}(z_2)\label{eq:multqyb-qybeq}\\ &\quad=S_{jk}^{23}(z_2)S_{ik}^{13}(z_1+z_2)S_{ij}^{12}(z_1)\quad\mbox{for } 1\le i,j,k\le n.\nonumber\end{aligned}$$ We call the pair $((V_i)_{i=1}^n,(S_{ij}(z))_{i,j=1}^n)$ an *$\hbar$-adic $n$-quantum VA*. Moreover, let $((V'_i)_{i=1}^n,(S'_{ij}(z))_{i,j=1}^n)$ be another $\hbar$-adic $n$-quantum VA, and let $f_i:V_i\to V'_i$ ($1\le i\le n$) be $\hbar$-adic nonlocal VA homomorphisms. We call $(f_1,\dots,f_n)$ an *$\hbar$-adic $n$-quantum VA homomorphism* if $$\begin{aligned} (f_i\otimes f_j)\circ S_{ij}(z)=S'_{ij}(z)\circ(f_i\otimes f_j)\quad\mbox{for }1\le i,j\le n.\end{aligned}$$ **Remark 21**. * *The notion of $\hbar$-adic $1$-quantum VAs is identical to the notion of $\hbar$-adic quantum VAs.* * **Remark 22**. * *Let $(V_i,Y_i,{{\bf 1}})$ ($1\le i\le n$) be $\hbar$-adic nonlocal VAs, and let $(S_{ij}(z))_{i,j=1}^n$ be quantum Yang-Baxter operators of the $n$-tuple $(V_1,\dots,V_n)$. Then for any subset $\{i_1,i_2,\dots,i_k\}$ of $\{1,2,\dots,n\}$, $(S_{i_a,i_b}(z))_{a,b=1}^k$ are quantum Yang-Baxter operators of the $k$-tuple $(V_{i_1},\dots,V_{i_k})$.* * **Lemma 23**. *For each $1\le i\le n$, let $U_i\subset V_i$ be a generating subset of $V_i$, and let $M_i\subset V_i$ be a closed ideal of $V_i$ such that $[M_i]=M_i$ and $$\begin{aligned} &S_{ij}(z)(M_i\otimes U_j),\,S_{ij}(z)(U_i\otimes M_j)\\ &\qquad\subset M_i\widehat{\otimes}V_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]+V_i\widehat{\otimes}M_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]\end{aligned}$$ for $1\le i,j\le n$. Then $S_{ij}(z)$ induces a ${\mathbb{C}}[[\hbar]]$-module map on $$V_i/M_i\widehat{\otimes}V_j/M_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]],$$ which is still denoted by $S_{ij}(z)$. And $((V_i/M_i)_{i=1}^n,(S_{ij}(z))_{i,j=1}^n)$ is an $\hbar$-adic $n$-quantum VA.* **Lemma 24**. *For $K=\{i_1<i_2<\cdots<i_k\}\subset\{1,2,\dots,n\}$, we denote by $Y_K$ the vertex operator of $V_K$. Then we have that $$\begin{aligned} Y_K(z)=&Y_{i_1}^{12}(z) Y_{i_2}^{34}(z)\cdots Y_{i_k}^{2k-1,2k}(z)\\ \times&\prod_{\substack{a-b=-1\\ 1\le a,b\le k}}S_{i_a,i_b}^{2a,2b-1}(-z) \prod_{\substack{a-b=-2\\ 1\le a,b\le k}}S_{i_a,i_b}^{2a,2b-1}(-z)\cdots \prod_{\substack{a-b=1-k\\ 1\le a,b\le k}}S_{i_a,i_b}^{2a,2b-1}(-z)\\ \times&\sigma^{23}\sigma^{345}\cdots \sigma^{k,k+1,\dots,2k-1},\end{aligned}$$ where $\sigma^{a,a+1,\dots,b}=\sigma^{a,a+1}\sigma^{a+1,a+2}\cdots\sigma^{b-1,b}$.* *Proof.* Let $K_1=\{i_1<i_2<\cdots<i_{k-1}\}$.
We see that $$\begin{aligned} &S_{K_1,i_k}(z)=S_{i_{k-1},i_k}^{k-1,k}(z)S_{i_{k-2},i_k}^{k-2,k}(z)\cdots S_{i_1,i_k}^{1,k}(z).\end{aligned}$$ Suppose that the lemma holds for $k-1$. Then we have that $$\begin{aligned} &Y_K(z) =Y_{K_1}^{\{1,2,\dots,k-1\},\{k,k+1,\dots,2k-2\}}(z) Y_{i_k}^{2k-1,2k}(z)\\ &\times S_{i_{k-1},i_k}^{2k-2,2k-1}(-z)\cdots S_{i_1,i_k}^{k,2k-1}(-z)\sigma^{k,k+1,\dots,2k-1}\\ =&Y_{i_1}^{12}(z) Y_{i_2}^{34}(z)\cdots Y_{i_{k-1}}^{2k-3,2k-2}(z)\\ &\times \prod_{\substack{a-b=-1\\ 1\le a,b\le k-1}}S_{i_a,i_b}^{2a,2b-1}(-z)\cdots \prod_{\substack{a-b=2-k\\ 1\le a,b\le k-1}}S_{i_a,i_b}^{2a,2b-1}(-z)\\ &\times \sigma^{23}\sigma^{345}\cdots \sigma^{k-1,k,\dots,2k-3}Y_{i_k}^{2k-1,2k}(z)\\ &\times S_{i_{k-1},i_k}^{2k-2,2k-1}(-z)\cdots S_{i_1,i_k}^{k,2k-1}(-z)\sigma^{k,k+1,\dots,2k-1}\\ =&Y_{i_1}^{12}(z) Y_{i_2}^{34}(z)\cdots Y_{i_k}^{2k-1,2k}(z)\\ &\times \prod_{\substack{a-b=-1\\ 1\le a,b\le k-1}}S_{i_a,i_b}^{2a,2b-1}(-z)\cdots \prod_{\substack{a-b=2-k\\ 1\le a,b\le k-1}}S_{i_a,i_b}^{2a,2b-1}(-z)\\ &\times S_{i_{k-1},i_k}^{2k-2,2k-1}(-z)S_{i_{k-2},i_k}^{2k-4,2k-1}(-z)\cdots S_{i_1,i_k}^{2,2k-1}(-z)\\ &\times \sigma^{23}\sigma^{345}\cdots \sigma^{k,k+1,\dots,2k-1}\\ =&Y_{i_1}^{12}(z) Y_{i_2}^{34}(z)\cdots Y_{i_k}^{2k-1,2k}(z)\\ &\times \prod_{\substack{a-b=-1\\ 1\le a,b\le k}}S_{i_a,i_b}^{2a,2b-1}(-z)\cdots \prod_{\substack{a-b=2-k\\ 1\le a,b\le k}}S_{i_a,i_b}^{2a,2b-1}(-z)\\ &\times \sigma^{23}\sigma^{345}\cdots \sigma^{k,k+1,\dots,2k-1},\end{aligned}$$ as desired. ◻ For two subsets $K=\{i_1<i_2<\cdots<i_k\}$ and $L=\{j_1<j_2<\cdots<j_l\}$ of $\{1,2,\dots,n\}$, we set $$\begin{aligned} S_{K,L}(z)=\prod_{\substack{a-b=k-1\\ 1\le a\le k\\ 1\le b\le l}}S_{i_a,j_b}^{a,k+b}(z) \prod_{\substack{a-b=k-2\\ 1\le a\le k\\ 1\le b\le l}}S_{i_a,j_b}^{a,k+b}(z)\cdots \prod_{\substack{a-b=1-l\\ 1\le a\le k\\ 1\le b\le l}}S_{i_a,j_b}^{a,k+b}(z).\end{aligned}$$ **Lemma 25**. *Let $(V_1,V_2,V_3)$ be an $\hbar$-adic $3$-quantum VA with quantum Yang-Baxter operators $(S_{ij}(z))_{i,j=1}^3$. Set $K_1=\{1,2\}$ and $K_2=\{3\}$. Then $(V_{K_1},V_{K_2})$ is an $\hbar$-adic $2$-quantum VA with quantum Yang-Baxter operators $(S_{K_a,K_b}(z))_{a,b=1}^2$.* *Proof.* The equations [\[eq:multqyb-vac1\]](#eq:multqyb-vac1){reference-type="eqref" reference="eq:multqyb-vac1"}, [\[eq:multqyb-vac2\]](#eq:multqyb-vac2){reference-type="eqref" reference="eq:multqyb-vac2"} and [\[eq:multqyb-der-shift\]](#eq:multqyb-der-shift){reference-type="eqref" reference="eq:multqyb-der-shift"} are clear. 
From relations [\[eq:multqyb-unitary\]](#eq:multqyb-unitary){reference-type="eqref" reference="eq:multqyb-unitary"} and [\[eq:multqyb-qybeq\]](#eq:multqyb-qybeq){reference-type="eqref" reference="eq:multqyb-qybeq"}, we have that $$\begin{aligned} &S_{K_1,K_1}^{\{34\},\{12\}}(z)S_{K_1,K_1}(-z)=\sigma^{13}\sigma^{24}S_{K_1,K_1}(z)\sigma^{13}\sigma^{24}S_{K_1,K_1}(-z)\\ =&\sigma^{13}\sigma^{24}S_{21}^{23}(z)S_{11}^{13}(z)S_{22}^{24}(z)S_{12}^{14}(z)\sigma^{13}\sigma^{24} S_{21}^{23}(-z)S_{11}^{13}(-z)S_{22}^{24}(-z)S_{12}^{14}(-z)\\ =&S_{21}^{41}(z)S_{11}^{31}(z)S_{22}^{42}(z)S_{12}^{32}(z)S_{21}^{23}(-z)S_{11}^{13}(-z)S_{22}^{24}(-z)S_{12}^{14}(-z)\\ =&S_{21}^{41}(z)S_{22}^{42}(z)S_{11}^{31}(z)S_{12}^{32}(z)S_{21}^{23}(-z)S_{11}^{13}(-z)S_{22}^{24}(-z)S_{12}^{14}(-z)=1,\\ &S_{K_2,K_1}^{3,\{12\}}(z)S_{K_1,K_2}(-z) =\sigma^{23}\sigma^{12}S_{21}^{12}(z)S_{22}^{13}(z)\sigma^{12}\sigma^{23}S_{22}^{23}(-z)S_{12}^{13}(-z)\\ =&S_{21}^{31}(z)S_{22}^{32}(z)S_{22}^{23}(-z)S_{12}^{13}(-z)=1,\\ &S_{K_1,K_2}^{\{12\},3}(z)S_{K_2,K_1}(-z) =\sigma^{12}\sigma^{23}S_{22}^{23}(z)S_{12}^{13}(z)\sigma^{23}\sigma^{12}S_{21}^{12}(-z)S_{22}^{13}(-z)\\ =&S_{22}^{31}(z)S_{12}^{21}(z)S_{21}^{12}(-z)S_{22}^{13}(-z)=1.\end{aligned}$$ Then [\[eq:multqyb-unitary\]](#eq:multqyb-unitary){reference-type="eqref" reference="eq:multqyb-unitary"} holds for $(S_{K_a,K_b}(z))_{a,b=1}^2$. From Lemma [Lemma 24](#lem:vertex-op-K){reference-type="ref" reference="lem:vertex-op-K"}, we have that $$\begin{aligned} Y_{\{12\}}(z)=Y_1^{12}(z)Y_2^{34}(z)S_{12}^{23}(-z)\sigma^{23}.\end{aligned}$$ Then by using [\[eq:multqyb-hex1\]](#eq:multqyb-hex1){reference-type="eqref" reference="eq:multqyb-hex1"} and [\[eq:multqyb-hex2\]](#eq:multqyb-hex2){reference-type="eqref" reference="eq:multqyb-hex2"} we have that $$\begin{aligned} &S_{K_1,K_1}^{\{12\},\{34\}}(z_1)Y_{K_1}^{\{12\},\{34\}}(z_2)\nonumber\\ =& S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{22}^{24}(z_1)S_{12}^{14}(z_1) Y_1^{12}(z_2)Y_2^{34}(z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{22}^{24}(z_1)Y_1^{12}(z_2)S_{12}^{25}(z_1)\nonumber\\ &\times S_{12}^{15}(z_1+z_2)Y_2^{34}(z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)Y_1^{12}(z_2)S_{22}^{35}(z_1) Y_2^{34}(z_2)\nonumber\\ &\times S_{12}^{26}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&S_{21}^{23}(z_1)Y_1^{12}(z_2)S_{11}^{24}(z_1)S_{11}^{14}(z_1+z_2) Y_2^{34}(z_2)\nonumber\\ &\times S_{22}^{46}(z_1)S_{22}^{36}(z_1+z_2)S_{12}^{26}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&Y_1^{12}(z_2)S_{21}^{34}(z_1)Y_2^{34}(z_2)S_{11}^{25}(z_1)S_{11}^{15}(z_1+z_2) S_{22}^{46}(z_1)\nonumber\\ &\times S_{22}^{36}(z_1+z_2)S_{12}^{26}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&Y_1^{12}(z_2)Y_2^{34}(z_2)S_{21}^{45}(z_1)S_{21}^{35}(z_1+z_2)S_{11}^{25}(z_1)\nonumber\\ &\times S_{11}^{15}(z_1+z_2)S_{22}^{46}(z_1) S_{22}^{36}(z_1+z_2)S_{12}^{26}(z_1)\nonumber\\ &\times S_{12}^{16}(z_1+z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2)\sigma^{23}S_{21}^{32}(z_2) S_{21}^{45}(z_1)S_{21}^{35}(z_1+z_2)\label{eq:3-qva-to-2-qva-temp0}\\ &\times S_{11}^{25}(z_1)S_{11}^{15}(z_1+z_2) S_{22}^{46}(z_1)S_{22}^{36}(z_1+z_2)\nonumber\\ &\times S_{12}^{26}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{23}(-z_2)\sigma^{23}\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{21}^{23}(z_2)S_{21}^{25}(z_1+z_2)\nonumber\\ &\times S_{11}^{35}(z_1)S_{11}^{15}(z_1+z_2) S_{22}^{46}(z_1)S_{22}^{26}(z_1+z_2)\nonumber\\ &\times 
S_{12}^{36}(z_1)S_{12}^{32}(-z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{11}^{35}(z_1)S_{21}^{25}(z_1+z_2)\nonumber\\ &\times S_{21}^{23}(z_2) S_{11}^{15}(z_1+z_2) S_{22}^{46}(z_1)S_{12}^{32}(-z_2)\nonumber\\ &\times S_{12}^{36}(z_1)S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{11}^{35}(z_1)S_{21}^{25}(z_1+z_2)\label{eq:3-qva-to-2-qva-temp1}\\ &\times S_{21}^{23}(z_2) S_{11}^{15}(z_1+z_2) S_{22}^{46}(z_1)S_{12}^{32}(-z_2)\nonumber\\ &\times S_{12}^{36}(z_1)S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{11}^{35}(z_1)S_{21}^{25}(z_1+z_2)\nonumber\\ &\times S_{21}^{23}(z_2)S_{12}^{32}(-z_2) S_{22}^{46}(z_1)S_{12}^{36}(z_1) \nonumber\\ &\times S_{11}^{15}(z_1+z_2)S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{11}^{35}(z_1)S_{21}^{25}(z_1+z_2)S_{22}^{46}(z_1) \label{eq:3-qva-to-2-qva-temp2}\\ &\times S_{12}^{36}(z_1) S_{11}^{15}(z_1+z_2)S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2) S_{21}^{45}(z_1)S_{11}^{35}(z_1) S_{22}^{46}(z_1) S_{12}^{36}(z_1) \nonumber\\ &\times S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2)S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)\nonumber\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2)S_{K_1,K_1}^{\{34\},\{56\}}(z_1)S_{K_1,K_1}^{\{12\},\{56\}}(z_1+z_2).\end{aligned}$$ where equations [\[eq:3-qva-to-2-qva-temp0\]](#eq:3-qva-to-2-qva-temp0){reference-type="eqref" reference="eq:3-qva-to-2-qva-temp0"} and [\[eq:3-qva-to-2-qva-temp2\]](#eq:3-qva-to-2-qva-temp2){reference-type="eqref" reference="eq:3-qva-to-2-qva-temp2"} follows from [\[eq:multqyb-unitary\]](#eq:multqyb-unitary){reference-type="eqref" reference="eq:multqyb-unitary"} and the equation [\[eq:3-qva-to-2-qva-temp1\]](#eq:3-qva-to-2-qva-temp1){reference-type="eqref" reference="eq:3-qva-to-2-qva-temp1"} follows from [\[eq:multqyb-qybeq\]](#eq:multqyb-qybeq){reference-type="eqref" reference="eq:multqyb-qybeq"}. Similarly, we have that $$\begin{aligned} &S_{K_1,K_1}^{\{12\},\{34\}}(z_1)Y_{K_1,K_1}^{\{34\},\{56\}}(z_2)\\ =&Y_{K_1,K_1}^{\{34\},\{56\}}(z_2)S_{K_1,K_1}^{\{12\},\{34\}}(z_1-z_2) S_{K_1,K_1}^{\{12\},\{56\}}(z_1),\\ &S_{K_1,K_2}^{\{12\},3}(z_1)Y_{K_1}^{\{12\},\{34\}}(z_2)\\ =&Y_{K_1}^{\{12\},\{34\}}(z_2)S_{K_1,K_2}^{\{34\},5}(z_1) S_{K_1,K_2}^{\{12\},5}(z_1+z_2),\\ &S_{K_1,K_2}^{\{12\},3}(z_1)Y_{K_2}^{34}(z_2)\\ =&Y_{K_2}^{34}(z_2)S_{K_1,K_2}^{\{12\},3}(z_1-z_2) S_{K_1,K_2}^{\{12\},4}(z_1),\\ &S_{K_2,K_1}^{1,\{23\}}(z_1)Y_{K_2}^{12}(z_2)\\ =&Y_{K_2}^{12}(z_2)S_{K_2,K_1}^{2,\{34\}}(z_1)S_{K_2,K_1}^{1,\{34\}}(z_1+z_2),\\ &S_{K_2,K_1}^{1,\{23\}}(z_1)Y_{K_1}^{\{23\},\{45\}}(z_2)\\ =&Y_{K_1}^{\{23\},\{45\}}(z_2)S_{K_2,K_1}^{1,\{23\}}(z_1-z_2)S_{K_2,K_1}^{1,\{45\}}(z_1),\\ &S_{K_2,K_2}^{12}(z_1)Y_{K_2}^{12}(z_2)\\ =&Y_{K_2}^{12}(z_2)S_{K_2,K_2}^{23}(z_1)S_{K_2,K_2}^{13}(z_1+z_2),\\ &S_{K_2,K_2}^{12}(z_1)Y_{K_2}^{23}(z_2)\\ =&Y_{K_2}^{23}(z_2)S_{K_2,K_2}^{12}(z_1-z_2)S_{K_2,K_2}^{13}(z_1).\end{aligned}$$ Hence, the equations [\[eq:multqyb-hex1\]](#eq:multqyb-hex1){reference-type="eqref" reference="eq:multqyb-hex1"} and [\[eq:multqyb-hex2\]](#eq:multqyb-hex2){reference-type="eqref" reference="eq:multqyb-hex2"} hold for $(S_{K_a,K_b}(z))_{a,b=1}^2$. Finally, we check the equations [\[eq:multqyb-qybeq\]](#eq:multqyb-qybeq){reference-type="eqref" reference="eq:multqyb-qybeq"} for $(S_{K_a,K_b}(z))_{a,b=1}^2$. We only check the most complex case. 
From [\[eq:multqyb-qybeq\]](#eq:multqyb-qybeq){reference-type="eqref" reference="eq:multqyb-qybeq"}, we have that $$\begin{aligned} &S_{K_1,K_1}^{\{12\},\{34\}}(z_1)S_{K_1,K_1}^{\{12\},\{56\}}(z_1+z_2)S_{K_1,K_1}^{\{34\},\{56\}}(z_2)\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{22}^{24}(z_1)S_{12}^{14}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2) S_{21}^{45}(z_2)S_{11}^{35}(z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2)\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{22}^{24}(z_1) S_{21}^{25}(z_1+z_2)S_{12}^{14}(z_1)S_{11}^{15}(z_1+z_2)S_{21}^{45}(z_2)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2) S_{11}^{35}(z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2)\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{22}^{24}(z_1) S_{21}^{25}(z_1+z_2)S_{21}^{45}(z_2)S_{11}^{15}(z_1+z_2)S_{12}^{14}(z_1)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2) S_{11}^{35}(z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2)\\ =&S_{21}^{23}(z_1)S_{11}^{13}(z_1)S_{21}^{45}(z_2) S_{21}^{25}(z_1+z_2)S_{22}^{24}(z_1) S_{11}^{15}(z_1+z_2)S_{12}^{14}(z_1)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2) S_{11}^{35}(z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2)\\ =&S_{21}^{45}(z_2)S_{21}^{23}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{13}(z_1) S_{11}^{15}(z_1+z_2)S_{22}^{24}(z_1)S_{22}^{26}(z_1+z_2)\\ &\times S_{12}^{14}(z_1)S_{12}^{16}(z_1+z_2) S_{22}^{46}(z_2)S_{11}^{35}(z_2)S_{12}^{36}(z_2)\\ %%%%%%%%%%%%%%%%%%%% =&S_{21}^{45}(z_2)S_{21}^{23}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{13}(z_1) S_{11}^{15}(z_1+z_2)S_{22}^{24}(z_1)S_{22}^{26}(z_1+z_2)\\ &\times S_{22}^{46}(z_2)S_{12}^{16}(z_1+z_2)S_{12}^{14}(z_1) S_{11}^{35}(z_2)S_{12}^{36}(z_2)\\ =&S_{21}^{45}(z_2)S_{21}^{23}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{13}(z_1) S_{11}^{15}(z_1+z_2)S_{22}^{46}(z_2)S_{22}^{26}(z_1+z_2)\\ &\times S_{22}^{24}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{14}(z_1) S_{11}^{35}(z_2)S_{12}^{36}(z_2)\\ %%%%%%%%%%%%%%%%%%%%%%%% =&S_{21}^{45}(z_2)S_{21}^{23}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{13}(z_1) S_{11}^{15}(z_1+z_2)S_{11}^{35}(z_2)S_{22}^{46}(z_2)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)S_{22}^{24}(z_1)S_{12}^{14}(z_1) S_{12}^{36}(z_2)\\ =&S_{21}^{45}(z_2)S_{21}^{23}(z_1) S_{21}^{25}(z_1+z_2)S_{11}^{35}(z_2) S_{11}^{15}(z_1+z_2)S_{11}^{13}(z_1)S_{22}^{46}(z_2)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)S_{22}^{24}(z_1)S_{12}^{14}(z_1) S_{12}^{36}(z_2)\\ =&S_{21}^{45}(z_2)S_{11}^{35}(z_2) S_{21}^{25}(z_1+z_2)S_{21}^{23}(z_1)S_{11}^{15}(z_1+z_2)S_{11}^{13}(z_1)S_{22}^{46}(z_2)\\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)S_{22}^{24}(z_1)S_{12}^{14}(z_1) S_{12}^{36}(z_2)\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&S_{21}^{45}(z_2)S_{11}^{35}(z_2) S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2)S_{22}^{46}(z_2)S_{21}^{23}(z_1)S_{22}^{26}(z_1+z_2)\\ &\times S_{11}^{13}(z_1)S_{12}^{16}(z_1+z_2)S_{12}^{36}(z_2)S_{22}^{24}(z_1)S_{12}^{14}(z_1) \\ =&S_{21}^{45}(z_2)S_{11}^{35}(z_2) S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2)S_{22}^{46}(z_2)S_{21}^{23}(z_1)S_{22}^{26}(z_1+z_2)\\ &\times S_{12}^{36}(z_2)S_{12}^{16}(z_1+z_2)S_{11}^{13}(z_1)S_{22}^{24}(z_1)S_{12}^{14}(z_1) \\ =&S_{21}^{45}(z_2)S_{11}^{35}(z_2) S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2) S_{22}^{26}(z_1+z_2)\\ &\times S_{21}^{23}(z_1)S_{12}^{16}(z_1+z_2)S_{11}^{13}(z_1)S_{22}^{24}(z_1)S_{12}^{14}(z_1) \\ %%%%%%%%%%%%%%%%%%%%%%%%%%%% =&S_{21}^{45}(z_2)S_{11}^{35}(z_2)S_{22}^{46}(z_2)S_{12}^{36}(z_2) S_{21}^{25}(z_1+z_2)S_{11}^{15}(z_1+z_2) \\ &\times S_{22}^{26}(z_1+z_2)S_{12}^{16}(z_1+z_2)S_{21}^{23}(z_1) S_{11}^{13}(z_1)S_{22}^{24}(z_1)S_{12}^{14}(z_1) \\ =&S_{K_1,K_1}^{\{34\},\{56\}}(z_2) 
S_{K_1,K_1}^{\{12\},\{56\}}(z_1+z_2)S_{K_1,K_1}^{\{12\},\{34\}}(z_1).\end{aligned}$$ Therefore, we complete the proof. ◻ **Proposition 26**. *Let $((V_i)_{i=1}^n,(S_{ij}(z))_{i,j=1}^n)$ be an $\hbar$-adic $n$-quantum VA, and let $(K_1,\dots,K_k)$ be a partition of the set $\{1,2,\dots,n\}$. Then $$((V_{K_i})_{i=1}^k,(S_{K_i,K_j}(z))_{i,j=1}^k)$$ is an $\hbar$-adic $k$-quantum VA.* *Proof.* This proposition is immediate from induction on the size of each subset $K_i$ and Lemma [Lemma 25](#lem:3-qva-to-2-qva){reference-type="ref" reference="lem:3-qva-to-2-qva"}. ◻ Finally, we list some formulas for later use. Let $((V_i)_{i=1}^n,(S_{ij}(z))_{i,j=1}^n$ be an $\hbar$-adic $n$-quantum VA. It is immediate from [\[eq:multqyb-der-shift\]](#eq:multqyb-der-shift){reference-type="eqref" reference="eq:multqyb-der-shift"}, we have that **Lemma 27**. *For any $f(z)\in{\mathbb{C}}[z][[\hbar]]$, we have that $$\begin{aligned} &S_{ij}(z)f(\partial\otimes 1)=f\left(\partial\otimes 1+\frac{\partial}{\partial {z}}\right)S_{ij}(z),\label{eq:multqyb-shift-total1}\\ &S_{ij}(z)f(1\otimes\partial)=f\left(1\otimes\partial-\frac{\partial}{\partial {z}}\right)S_{ij}(z).\label{eq:multqyb-shift-total2}\end{aligned}$$* **Lemma 28**. *Let $v\in V_i,u\in V_j$, $f(z)\in{\mathbb{C}}((z))[[\hbar]]$.* - *If $S_{ij}(z)(v\otimes u)=v\otimes u+{{\bf 1}}\otimes{{\bf 1}}\otimes f(z)$. Then $$\begin{aligned} &S_{ij}(z)((v_{-1})^n{{\bf 1}}\otimes u)\\ &\quad=v_{-1}^n{{\bf 1}}\otimes u+nv_{-1}^{n-1}{{\bf 1}}\otimes{{\bf 1}}\otimes f(z),\\ &S_{ij}(z)(v\otimes(u_{-1})^n{{\bf 1}})\\ &\quad=v\otimes u_{-1}^n{{\bf 1}}+n{{\bf 1}}\otimes u_{-1}^{n-1}{{\bf 1}}\otimes f(z),\\ &S_{ij}(z)\left(\exp ( v_{-1}){{\bf 1}}\otimes u\right)\\ &\quad=\exp ( v_{-1}){{\bf 1}}\otimes u+\exp (v_{-1}){{\bf 1}}\otimes{{\bf 1}}\otimes f(z),\quad \mbox{if }v\in \hbar V,\\ &S_{ij}(z)\left(v\otimes\exp( u_{-1}){{\bf 1}}\right)\\ &\quad=v\otimes\exp( u_{-1}){{\bf 1}}+{{\bf 1}}\otimes\exp( u_{-1}){{\bf 1}}\otimes f(z),\quad \mbox{if }u\in \hbar V.\end{aligned}$$* - *If $S_{ij}(z)(v\otimes u)=v\otimes u+{{\bf 1}}\otimes u\otimes f(z)$. Then $$\begin{aligned} &S_{ij}(z)(v_{-1}^n{{\bf 1}}\otimes u)=\sum_{i=0}^n\binom{n}{i}v_{-1}^i{{\bf 1}}\otimes u\otimes f(z)^{n-i},\\ &S_{ij}(z)\left(\exp ( v_{-1}){{\bf 1}}\otimes u\right)=\exp ( v_{-1}){{\bf 1}}\otimes u\otimes\exp\left( f(z)\right),\\ &\quad\mbox{if }v\in\hbar V_j, f(z)\in\hbar{\mathbb{C}}((z))[[\hbar]].\end{aligned}$$* - *If $S_{ij}(z)(v\otimes u)=v\otimes u+v\otimes{{\bf 1}}\otimes f(z)$. Then $$\begin{aligned} &S_{ij}(z)(v\otimes u_{-1}^n{{\bf 1}})=\sum_{i=0}^n\binom{n}{i}v\otimes u_{-1}^i{{\bf 1}}\otimes f(z)^{n-i},\\ &S_{ij}(z)\left(v\otimes\exp( u_{-1}){{\bf 1}}\right)=v\otimes\exp\left( u_{-1}\right){{\bf 1}}\otimes\exp( f(z)),\\ &\quad \mbox{if }u\in\hbar V_i, f(z)\in\hbar{\mathbb{C}}((z))[[\hbar]].\end{aligned}$$* **Lemma 29**. *Let $v\in V_i$, $u,u'\in V_j$, $f_1(z),f_2(z)\in{\mathbb{C}}((z))[[\hbar]]$.* - *If $$\begin{aligned} &S_{ij}(z)(v\otimes u)=v\otimes u+{{\bf 1}}\otimes{{\bf 1}}\otimes f_1(z),\\ &S_{ij}(z)(v\otimes u')=v\otimes u'+{{\bf 1}}\otimes u'\otimes f_2(z). \end{aligned}$$ Then $$\begin{aligned} &S_{ij}(z)(v\otimes u_{-1}^nu')=v\otimes u_{-1}^nu'\\ &\quad+n{{\bf 1}}\otimes u_{-1}^{n-1}u'\otimes f_1(z) +{{\bf 1}}\otimes u_{-1}^nu'\otimes f_2(z),\\ &S_{ij}(z)(v\otimes\exp\left(u_{-1}\right)u')=v\otimes\exp\left(u_{-1}\right)u'\\ &\quad+{{\bf 1}}\otimes\exp\left(u_{-1}\right)u'\otimes(f_1(z)+f_2(z)),\quad\mbox{if }u\in \hbar V_j. 
\end{aligned}$$*

- *If $$\begin{aligned} &S_{ij}(z)(v\otimes u)=v\otimes u+v\otimes{{\bf 1}}\otimes f_1(z),\\ &S_{ij}(z)(v\otimes u')=v\otimes u'\otimes f_2(z). \end{aligned}$$ Then $$\begin{aligned} &S_{ij}(z)(v\otimes u_{-1}^n u')\\ &\quad=\sum_{i=0}^n\binom{n}{i}v\otimes u_{-1}^iu'\otimes f_1(z)^{n-i}f_2(z),\\ &S_{ij}(z)(v\otimes\exp\left(u_{-1}\right)u')\\ &\quad=v\otimes\exp\left(u_{-1}\right)u'\otimes\exp\left(f_1(z)\right)f_2(z),\\ &\quad\quad\mbox{if }u\in \hbar V_j, f_1(z)\in \hbar{\mathbb{C}}((z))[[\hbar]]. \end{aligned}$$*

**Lemma 30**. *Let $v,v'\in V_i$, $u\in V_j$, $f_1(z),f_2(z)\in{\mathbb{C}}((z))[[\hbar]]$.*

- *If $$\begin{aligned} &S_{ij}(z)(v\otimes u)=v\otimes u+{{\bf 1}}\otimes{{\bf 1}}\otimes f_1(z),\\ &S_{ij}(z)(v'\otimes u)=v'\otimes u+v'\otimes{{\bf 1}}\otimes f_2(z). \end{aligned}$$ Then $$\begin{aligned} &S_{ij}(z)(v_{-1}^nv'\otimes u)=v_{-1}^nv'\otimes u\\ &\quad+nv_{-1}^{n-1}v'\otimes{{\bf 1}}\otimes f_1(z)+v_{-1}^nv'\otimes{{\bf 1}}\otimes f_2(z),\\ &S_{ij}(z)(\exp\left(v_{-1}\right)v'\otimes u)=\exp\left(v_{-1}\right)v'\otimes u\\ &\quad+\exp\left(v_{-1}\right)v'\otimes{{\bf 1}}\otimes(f_1(z)+f_2(z)),\quad\mbox{if }v\in \hbar V_i. \end{aligned}$$*

- *If $$\begin{aligned} &S_{ij}(z)(v\otimes u)=v\otimes u+v\otimes{{\bf 1}}\otimes f_1(z),\\ &S_{ij}(z)(v'\otimes u)=v'\otimes u\otimes f_2(z). \end{aligned}$$ Then $$\begin{aligned} &S_{ij}(z)(v_{-1}^n v'\otimes u)\\ &\quad=\sum_{i=0}^n\binom{n}{i}v_{-1}^iv'\otimes u\otimes f_1(z)^{n-i}f_2(z),\\ &S_{ij}(z)(\exp\left(v_{-1}\right)v'\otimes u)\\ &\quad=\exp\left(v_{-1}\right)v'\otimes u\otimes\exp\left(f_1(z)\right)f_2(z),\\ &\quad\quad\mbox{if }v\in \hbar V_i, f_1(z)\in \hbar{\mathbb{C}}((z))[[\hbar]]. \end{aligned}$$*

## Construction of $\hbar$-adic $n$-quantum VAs {#subsec:construct-n-qvas}

This subsection extends the construction of $\hbar$-adic quantum VAs introduced in [@JKLT-Defom-va] to the construction of $\hbar$-adic $n$-quantum VAs. We first recall the smash product introduced in [@Li-smash]. An *$\hbar$-adic (nonlocal) vertex bialgebra* [@Li-smash] is an $\hbar$-adic (nonlocal) VA $H$ equipped with a classical coalgebra structure $(\Delta,\varepsilon)$ such that (the coproduct) $\Delta:H\to H\widehat{\otimes}H$ and (the counit) $\varepsilon:H\to{\mathbb{C}}[[\hbar]]$ are homomorphisms of $\hbar$-adic nonlocal VAs. An *$H$-module (nonlocal) VA* [@Li-smash] is an $\hbar$-adic nonlocal VA $V$ equipped with a module structure $\tau$ on $V$ for $H$ viewed as an $\hbar$-adic nonlocal VA such that $$\begin{aligned} &\tau(h,z)v\in V\otimes{\mathbb{C}}((z)),\qquad \tau(h,z){{\bf 1}}_V=\varepsilon(h){{\bf 1}}_V,\label{eq:mod-va-for-vertex-bialg1-2}\\ &\tau(h,z_1)Y(u,z_2)v=\sum Y(\tau(h_{(1)},z_1-z_2)u,z_2)\tau(h_{(2)},z_1)v \label{eq:mod-va-for-vertex-bialg3}\end{aligned}$$ for $h\in H$, $u,v\in V$, where ${{\bf 1}}_V$ denotes the vacuum vector of $V$ and $\Delta(h)=\sum h_{(1)}\otimes h_{(2)}$ is the coproduct in Sweedler notation. The following result is given in [@Li-smash].

**Theorem 31**. *Let $H$ be an $\hbar$-adic nonlocal vertex bialgebra and let $(V,\tau)$ be an $H$-module nonlocal VA. Set $V\sharp H=V\widehat{\otimes}H$.
For $u,v\in V$, $h,k\in H$, define $$\begin{aligned} Y^\sharp(u\otimes h,z)(v\otimes k)=\sum Y(u,z)\tau(h_{(1)},z)v\otimes Y(h_{(2)},z)k.\end{aligned}$$ Then $V\sharp H$ is an $\hbar$-adic nonlocal VA.*

*Moreover, let $W$ be a $V$-module and an $H$-module such that $$\begin{aligned} &Y_W(h,z)w\in W\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]],\\ &Y_W(h,z_1)Y_W(v,z_2)w=\sum Y_W(\tau(h_{(1)},z_1-z_2)v,z_2)Y_W(h_{(2)},z_1)w\end{aligned}$$ for $h\in S$, $v\in V$, $w\in W$, where $S$ is a generating subset of $H$ as an $\hbar$-adic nonlocal VA. Then $W$ is a module for $V\sharp H$ with $$\begin{aligned} Y_W^\sharp(v\otimes h,z)w=Y_W(v,z)Y_W(h,z)w\quad \mbox{for }h\in H,v\in V,w\in W.\end{aligned}$$*

A *(right) $H$-comodule nonlocal VA* ([@JKLT-Defom-va Definition 2.22]) is a nonlocal VA $V$ equipped with a homomorphism $\rho:V\to V\widehat{\otimes}H$ of $\hbar$-adic nonlocal VAs such that $$\begin{aligned} (\rho\otimes 1)\rho=(1\otimes\Delta)\rho,\quad (1\otimes\varepsilon)\rho=\mbox{Id}_V.\end{aligned}$$ Given an $H$-module nonlocal VA structure $\tau$ on $V$, $\rho$ and $\tau$ are *compatible* ([@JKLT-Defom-va Definition 2.24]) if $$\begin{aligned} \rho(\tau(h,z)v)=(\tau(h,z)\otimes 1)\rho(v)\quad \mbox{for }h\in H,\,v\in V.\end{aligned}$$

**Definition 32**. Let $V$ be an $\hbar$-adic nonlocal VA. A *deforming triple* is a triple $(H,\rho,\tau)$, where $H$ is a cocommutative $\hbar$-adic nonlocal vertex bialgebra, $(V,\rho)$ is an $H$-comodule nonlocal VA and $(V,\tau)$ is an $H$-module nonlocal VA, such that $\rho$ and $\tau$ are compatible.

The following represents the direct $\hbar$-adic analogue of [@JKLT-Defom-va Theorem 2.25 and Proposition 2.26].

**Theorem 33**. *Let $V$ be an $\hbar$-adic nonlocal VA, and let $(H,\rho,\tau)$ be a deforming triple. Define $$\begin{aligned} \mathfrak D_\tau^\rho (Y)(a,z)=\sum Y(a_{(1)},z)\tau(a_{(2)},z)\quad\mbox{for }a\in V,\end{aligned}$$ where $\rho(a)=\sum a_{(1)}\otimes a_{(2)}\in V\otimes H$. Then $(V,\mathfrak D_\tau^\rho (Y),{{\bf 1}})$ forms an $\hbar$-adic nonlocal VA, which is denoted by $\mathfrak D_\tau^\rho (V)$. Moreover, $\rho$ is an $\hbar$-adic nonlocal VA homomorphism from $\mathfrak D_\tau^\rho (V)$ to $V\sharp H$. Furthermore, $(\mathfrak D_\tau^\rho(V),\rho)$ is also a right $H$-comodule nonlocal VA.*

**Remark 34**. *Let $H$ be a cocommutative $\hbar$-adic nonlocal vertex bialgebra, and let $(V,\rho)$ be an $H$-comodule nonlocal VA. We view the counit $\varepsilon$ as an $H$-module nonlocal VA structure on $V$ as follows: $$\begin{aligned} \varepsilon(h,z)v=\varepsilon(h)v\quad \mbox{for }h\in H,\,v\in V.\end{aligned}$$ Then it is easy to verify that $\varepsilon$ and $\rho$ are compatible, and $\mathfrak D_\varepsilon^\rho(V)=V$.*

Now, we fix a cocommutative $\hbar$-adic nonlocal vertex bialgebra $H$, and an $H$-comodule nonlocal VA $(V,\rho)$. Note that $$\mathop{\mathrm{Hom}}\left(H,\mathop{\mathrm{Hom}}(V,V\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]])\right)$$ is a unital associative algebra, where the multiplication is defined by $$\begin{aligned} (f\ast g)(h,z)=\sum f(h_{(1)},z)g(h_{(2)},z),\end{aligned}$$ for $f,g\in\mathop{\mathrm{Hom}}\left(H,\mathop{\mathrm{Hom}}(V,V\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]])\right)$, and whose unit is $\varepsilon$. For $f,g\in\mathop{\mathrm{Hom}}\left(H,\mathop{\mathrm{Hom}}(V,V\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]])\right)$, we say that $f$ and $g$ *commute* if $$\begin{aligned} [f(h,z_1),g(k,z_2)]=0\quad \mbox{for }h,k\in H.\end{aligned}$$ Let $\mathfrak L_H^\rho(V)$ be the set of $H$-module nonlocal VA structures on $V$ that are compatible with $\rho$.
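For example, the convolution $\ast$ takes a particularly simple form on grouplike and primitive elements of $H$ (such as the elements $e^\alpha$ and $h(-n)$ of the vertex bialgebra $B_Q$ introduced below): if $\Delta(h)=h\otimes h$, then $$\begin{aligned} (f\ast g)(h,z)=f(h,z)g(h,z),\end{aligned}$$ while if $\Delta(a)=a\otimes{{\bf 1}}+{{\bf 1}}\otimes a$, then $$\begin{aligned} (f\ast g)(a,z)=f(a,z)g({{\bf 1}},z)+f({{\bf 1}},z)g(a,z),\end{aligned}$$ which reduces to $f(a,z)+g(a,z)$ whenever $f$ and $g$ send the vacuum ${{\bf 1}}$ of $H$ to the identity operator on $V$, as is the case for elements of $\mathfrak L_H^\rho(V)$.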
The following is a direct $\hbar$-adic analogue of [@JKLT-Defom-va Proposition 3.3 and Proposition 3.4].

**Proposition 35**. *Let $\tau$ and $\tau'$ be commuting elements in $\mathfrak L_H^\rho(V)$. Then $\tau\ast\tau'\in\mathfrak L_H^\rho(V)$ and $\tau\ast\tau'=\tau'\ast\tau$. Moreover, $\tau\in \mathfrak L_H^\rho\left(\mathfrak D_{\tau'}^\rho(V)\right)$ and $$\begin{aligned} \mathfrak D_\tau^\rho\left(\mathfrak D_{\tau'}^\rho(V)\right)=\mathfrak D_{\tau\ast\tau'}^\rho(V).\end{aligned}$$*

Recall that an element $\tau\in\mathfrak L_H^\rho(V)$ is said to be invertible if there exists $\tau^{-1}\in \mathfrak L_H^\rho(V)$ such that $\tau$ and $\tau^{-1}$ commute and $\tau\ast\tau^{-1}=\varepsilon$. We have the following immediate $\hbar$-adic analogue of [@JKLT-Defom-va Theorem 3.6].

**Theorem 36**. *Let $V_0$ be a VA, and let $V=V_0[[\hbar]]$ be the corresponding $\hbar$-adic VA. Let $\tau$ be an invertible element in $\mathfrak L_H^\rho(V)$. Then $\mathfrak D_\tau^\rho(V)$ is an $\hbar$-adic quantum VA with quantum Yang-Baxter operator $S(z)$ defined by $$\begin{aligned} S(z)(v\otimes u)=\sum \tau(u_{(2)},-z)v_{(1)}\otimes\tau^{-1}(v_{(2)},z)u_{(1)}\quad \mbox{for }u,v\in V,\end{aligned}$$ where $\rho(u)=\sum u_{(1)}\otimes u_{(2)}$ and $\rho(v)=\sum v_{(1)}\otimes v_{(2)}$.*

Let $V_1,\dots,V_n$ be $\hbar$-adic VAs and let $H$ be a commutative and cocommutative $\hbar$-adic vertex bialgebra. Assign an $H$-comodule VA structure $\rho_i$ to $V_i$ for each $1\le i\le n$. Choose an invertible element $\lambda_{ij}\in\mathfrak L_H^{\rho_i}(V_i)$ for $1\le i,j\le n$. Suppose further that $$\begin{aligned} [\lambda_{ij}(h,z_1),\lambda_{ik}(h',z_2)]=0\quad\mbox{for }1\le i,j,k\le n,\,h,h'\in H.\end{aligned}$$ Define $$\begin{aligned} \label{eq:S-twisted-def} S_{ij}(z):&V_i\widehat{\otimes}V_j\to V_i\widehat{\otimes}V_j\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]\\ &v\otimes u\mapsto \sum \lambda_{ij}(u_{(2)},-z)v_{(1)}\otimes\lambda_{ji}^{-1}(v_{(2)},z)u_{(1)},\nonumber\end{aligned}$$ where $\rho_i(v)=\sum v_{(1)}\otimes v_{(2)}$ and $\rho_j(u)=\sum u_{(1)}\otimes u_{(2)}$ are written in Sweedler notation. We first have the following result.

**Lemma 37**.
*For $1\le i,j\le n$, we have that $$\begin{aligned} &\sigma S_{ij}(-z)^{-1}\sigma=S_{ji}(z).\end{aligned}$$* *Proof.* For $v\in V_i$ and $u\in V_j$, we have that $$\begin{aligned} &S_{ij}(z)\sigma S_{ji}(-z)\sigma (v\otimes u) =S_{ij}(z)\sigma S_{ji}(-z)(u\otimes v)\\ =&S_{ij}(z)\sigma \sum \lambda_{ji}(v_{(2)},z)u_{(1)}\otimes\lambda_{ij}^{-1}(u_{(2)},-z)v_{(1)}\\ =&\sum S_{ij}(z)\lambda_{ij}^{-1}(u_{(2)},-z)v_{(1)}\otimes\lambda_{ji}(v_{(2)},z)u_{(1)}\\ =&\sum \lambda_{ij}\left(\left(\lambda_{ji}(v_{(2)},z)u_{(1)}\right)_{(2)},-z\right)\left(\lambda_{ij}^{-1}(u_{(2)},-z)v_{(1)}\right)_{(1)}\\ &\qquad\otimes\lambda_{ji}^{-1}\left(\left(\lambda_{ij}^{-1}(u_{(2)},-z)v_{(1)}\right)_{(2)},z\right)\left(\lambda_{ji}(v_{(2)},z)u_{(1)}\right)_{(1)}\\ =&\sum \lambda_{ij}\left(u_{(1),(2)},-z\right)\lambda_{ij}^{-1}(u_{(2)},-z)v_{(1),(1)}\\ &\qquad\otimes\lambda_{ji}^{-1}\left(v_{(1),(2)},z\right)\lambda_{ji}(v_{(2)},z)u_{(1),(1)}\\ =&\sum \lambda_{ij}\left(u_{(2),(1)},-z\right)\lambda_{ij}^{-1}(u_{(2),(2)},-z)v_{(1)}\\ &\qquad\otimes\lambda_{ji}^{-1}\left(v_{(2),(1)},z\right)\lambda_{ji}(v_{(2),(2)},z)u_{(1)}\\ =&\sum \varepsilon(u_{(2)})v_{(1)}\otimes\varepsilon(v_{(2)})u_{(1)}=v\otimes u.\end{aligned}$$ Similarly, we have that $$\begin{aligned} \sigma S_{ji}(-z)\sigma S_{ij}(z)(v\otimes u)=v\otimes u.\end{aligned}$$ Therefore, $$\begin{aligned} \sigma S_{ji}(-z)\sigma=S_{ij}(z)^{-1}.\end{aligned}$$ Consequently, $\sigma S_{ij}(-z)^{-1}\sigma=S_{ji}(z)$. ◻ Similar to the proof of [@JKLT-Defom-va Theorem 3.6], we have that **Proposition 38**. *$((\mathfrak D_{\lambda_{ii}}^{\rho_i}(V_i))_{i=1}^n,(S_{ij}(z))_{i,j=1}^n)$ is an $\hbar$-adic $n$-quantum VA.* Combining Lemma [Lemma 37](#lem:Sij-Sji){reference-type="ref" reference="lem:Sij-Sji"} and Propositions [Proposition 26](#prop:n-qva-to-k-qva){reference-type="ref" reference="prop:n-qva-to-k-qva"}, [Proposition 38](#prop:qva-twisted-tensor-deform){reference-type="ref" reference="prop:qva-twisted-tensor-deform"}, we have that **Corollary 39**. *The twisting tensor product $$\mathfrak D_{\lambda_{11}}^{\rho_1}(V_1)\widehat{\otimes}_{S_{12}}\cdots \widehat{\otimes}_{S_{n-1,n}}\mathfrak D_{\lambda_{nn}}^{\rho_n}(V_n)$$ is an $\hbar$-adic quantum VA with the quantum Yang-Baxter operator $$\begin{aligned} S(z)=\prod_{i-j=n-1}S_{ij}^{i,n+j}(z) \prod_{i-j=n-2}S_{ij}^{i,n+j}(z) \cdots \prod_{i-j=1-n}S_{ij}^{i,n+j}(z).\end{aligned}$$* # Quantum lattice vertex algebras {#sec:qlattice} ## Construction of quantum lattice VA Let us refer to the notations provided in Subsection [2.3](#subsec:LatticeVA){reference-type="ref" reference="subsec:LatticeVA"}. Set $$\begin{aligned} B_Q=\left(S(\hat{\mathfrak h}_Q^-)\otimes{\mathbb{C}}[Q]\right)[[\hbar]].\end{aligned}$$ Then $B_Q$ is a bialgebra over ${\mathbb{C}}[[\hbar]]$, where the coproduct $\Delta$ and counit $\varepsilon$ are uniquely determined by $$\begin{aligned} &\Delta(h(-n))=h(-n)\otimes 1+1\otimes h(-n),\quad \Delta(e^\alpha)=e^\alpha\otimes e^\alpha,\\\ &\varepsilon(h(-n))=0,\quad \varepsilon(e^\alpha)=1\end{aligned}$$ for $h\in{\mathfrak h}_Q$, $n\in{\mathbb{Z}}_+$, $\alpha\in Q$. Moreover, $B_Q$ admits a derivation $\partial$ which is uniquely determined by $$\begin{aligned} \partial(h(-n))=nh(-n-1),\quad \partial(u\otimes e^\alpha)=\alpha(-1)u\otimes e^\alpha+\partial u\otimes e^\alpha\end{aligned}$$ for $h\in{\mathfrak h}_Q$, $n\in{\mathbb{Z}}_+$, $u\in S(\hat{\mathfrak h}^-)$ and $\alpha\in Q$. We identify ${\mathfrak h}_Q$ (resp. 
${\mathbb{C}}[Q]$) with ${\mathfrak h}_Q(-1)\otimes 1\subset B_Q$ (resp. $1\otimes{\mathbb{C}}[Q]\subset B_Q$). Then $B_Q$ becomes a cocommutative and commutative $\hbar$-adic vertex bialgebra with the vertex operator map $Y_{B_Q}(\cdot,z)$ determined by $$\begin{aligned} &Y_{B_Q}(h,z)=\sum_{n>0}h(-n)z^{n-1},\quad Y_{B_Q}(e^\alpha,z)=e^\alpha E^+(\alpha,z)\end{aligned}$$ for $h\in{\mathfrak h}_Q$ and $\alpha\in Q$. We view $V_Q[[\hbar]]$ as an $\hbar$-adic VA in the obvious way. For a ${\mathbb{C}}[[\hbar]]$-module $W$, and a linear map $\varphi(\cdot,z):{\mathfrak h}_Q\to W[[z,z^{-1}]]$, we define (see [@EK-qva]) $$\begin{aligned} \label{eq:def-Phi} \Phi(h\otimes f(z),\varphi)=\mathop{\mathrm{Res}}_{z_1}\varphi(h,z_1)f(z-z_1).\end{aligned}$$ Define $\Phi(G(z),\varphi)$ for $G(z)\in {\mathfrak h}_Q\otimes{\mathbb{C}}((z))[[\hbar]]$ in the obvious way. And extend the bilinear form ${\langle}\cdot,\cdot{\rangle}$ on ${\mathfrak h}_Q$ to a ${\mathbb{C}}((z))[[\hbar]]$-linear form on ${\mathfrak h}_Q\otimes{\mathbb{C}}((z))[[\hbar]]$. **Definition 40**. Denote by $\mathcal H({\mathfrak h}_Q,z)$ the space of linear maps $\eta: {\mathfrak h}_Q\rightarrow {\mathfrak h}_Q\otimes {\mathbb{C}}((z))[[\hbar]]$ such that $$\begin{aligned} &\eta_0(\beta_i,z):=\eta(\beta_i,z)|_{\hbar=0}\in {\mathfrak h}\otimes z{\mathbb{C}}[[z]]\quad\mbox{for }i\in J,\label{eq:eta-h=0-cond}\\ &\eta(\beta_i,z)^-\in {\mathfrak h}_Q\otimes\hbar{\mathbb{C}}[z^{-1}][[\hbar]]\quad\mbox{for }i\in J,\,n\in{\mathbb{N}}.\label{eq:eta-neg-cond}\end{aligned}$$ Fix an $\eta\in \mathcal H({\mathfrak h}_Q,z)$. The following four results are proved in [@JKLT-Defom-va]. **Theorem 41**. *There exists a deforming triple $(B_Q,\rho,\eta)$ of $V_Q[[\hbar]]$, where $\rho:V_Q[[\hbar]]\to V_Q[[\hbar]]\widehat{\otimes}B_Q$ and $\eta(\cdot,z):B_Q\to {\mathcal{E}}_\hbar(V_Q[[\hbar]])$ are uniquely determined by $$\begin{aligned} &\rho(h)=h\otimes 1+1\otimes h,&& \rho(e_\alpha)=e_\alpha\otimes e^\alpha,\\ &\eta(h,z)=\Phi(\eta'(h,z),Y_Q),&& \eta(e^\alpha,z)=\exp\left( \Phi(\eta(\alpha,z),Y_Q) \right)\end{aligned}$$ for $h\in{\mathfrak h}_Q$, $\alpha\in Q$. Denote the deformed $\hbar$-adic nonlocal VA $\mathfrak D_\eta^\rho(V_Q[[\hbar]])$ by $V_Q^\eta$, and denote its vertex operator map by $Y_Q^\eta$. Then $Y_Q^\eta$ is uniquely determined by $$\begin{aligned} &Y_Q^\eta(h,z)=Y_Q(h,z)+\Phi(\eta'(h,z),Y_Q)\quad \mbox{for }h\in{\mathfrak h}_Q,\\ &Y_Q^\eta(e_\alpha,z)=Y_Q(e_\alpha,z)\exp\left( \Phi(\eta(\alpha,z),Y_Q) \right)\quad \mbox{for }\alpha\in Q.\end{aligned}$$ Moreover, for $h,h'\in{\mathfrak h}_Q$ and $\alpha,\beta\in Q$, we have that $$\begin{aligned} &Y_Q^\eta(h,z)^-h'=\left({\langle}h,h'{\rangle}z^{-2}-{\langle}\eta''(h,z)^-,h'{\rangle}\right){{\bf 1}},\\ &Y_Q^\eta(h,z)^-e_\alpha=\left( {\langle}h,\alpha{\rangle}z^{-1}+{\langle}\eta'(h,z)^-,\alpha{\rangle} \right)e_\alpha,\\ &Y_Q^\eta(e_\alpha,z)e_\beta=\epsilon(\alpha,\beta)z^{{\langle}\alpha,\beta{\rangle}}e^{{\langle}\eta(\alpha,z),\beta{\rangle}}E^+(\alpha,z)e_{\alpha+\beta}.\end{aligned}$$* **Theorem 42**. 
*$\eta$ is invertible, so that $V_Q^\eta$ is an $\hbar$-adic quantum VA with quantum Yang-Baxter operator $S_Q^\eta(z)$ determined by ($h,h'\in{\mathfrak h}_Q$, $\alpha,\beta\in Q$): $$\begin{aligned} &S_Q^\eta(z)(h'\otimes h)=h'\otimes h+{{\bf 1}}\otimes{{\bf 1}}\otimes({\langle}\eta''(h',z),h{\rangle}-{\langle}\eta''(h,-z),h'{\rangle}),\\ &S_Q^\eta(z)(e_\alpha\otimes h)=e_\alpha\otimes h+e_\alpha\otimes{{\bf 1}}\otimes({\langle}\eta'(\alpha,z),h{\rangle}+{\langle}\eta'(h,-z),\alpha{\rangle}),\\ &S_Q^\eta(z)(h\otimes e_\alpha)=h\otimes e_\alpha-{{\bf 1}}\otimes e_\alpha\otimes({\langle}\eta'(\alpha,-z),h{\rangle}+{\langle}\eta'(h,z),\alpha{\rangle}),\\ &S_Q^\eta(z)(e_\beta\otimes e_\alpha)=e_\beta\otimes e_\alpha\otimes\exp\left( {\langle}\eta(\alpha,-z),\beta{\rangle}-{\langle}\eta(\beta,z),\alpha{\rangle} \right).\end{aligned}$$* **Proposition 43**. *There is a functor $\mathfrak D_\eta$ from the category of $V_Q[[\hbar]]$-modules to the category of $V_Q^\eta$-modules. More precisely, let $(W,Y_W)$ be a $V_Q[[\hbar]]$-module. Then there exists a $V_Q^\eta$-module structure $Y_W^\eta$ on $W$, which is uniquely determined by $$\begin{aligned} &Y_W^\eta(h,z)=Y_W(h,z)+\Phi(\eta'(h,z),Y_W)\quad \mbox{for }h\in{\mathfrak h},\\ &Y_W^\eta(e_\alpha,z)=Y_W(e_\alpha,z)\exp\left(\Phi(\eta(\alpha,z),Y_W)\right)\quad \mbox{for }\alpha\in Q.\end{aligned}$$ We denote this module by $W^\eta$. In addition, let $W_1$ be another $V_Q[[\hbar]]$-module, and let $f:W\to W_1$ be a $V_Q[[\hbar]]$-module homomorphism. Then $f$ is also a $V_Q^\eta$-module homomorphism.* **Definition 44**. Define $\mathcal A_\hbar^\eta(Q)$ to be the category, where the objects are topologically free ${\mathbb{C}}[[\hbar]]$-modules $W$ equipped with fields $\beta_{i,\hbar}(z),\,e_{i,\hbar}^\pm(z)\in{\mathcal{E}}_\hbar(W)$ ($i\in J$), satisfying the conditions that ($i,j\in J$): $$\begin{aligned} \mbox{(AQ1)}\quad &[\beta_{i,\hbar}(z_1),\beta_{j,\hbar}(z_2)]={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)\\ &\quad-{\langle}\eta''(\beta_i,z_1-z_2),\beta_j{\rangle}+{\langle}\eta''(\beta_j,z_2-z_1),\beta_i{\rangle},\\ \mbox{(AQ2)}\quad &[\beta_{i,\hbar}(z_1),e_{j,\hbar}^\pm(z_2)]=\pm{\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)\\ &\quad\pm{\langle}\eta'(\beta_i,z_1-z_2),\beta_j{\rangle} e_{j,\hbar}^\pm(z_2)\pm{\langle}\eta'(\beta_j,z_2-z_1),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2),\\ \mbox{(AQ3)}\quad &\iota_{z_1,z_2}e^{-{\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}}P_{ij}(z_1-z_2) e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\pm(z_2)\\ &\quad=\iota_{z_2,z_1}e^{-{\langle}\eta(\beta_j,z_2-z_1),\beta_i{\rangle}}P_{ij}(-z_2+z_1)e_{j,\hbar}^\pm(z_2)e_{i,\hbar}^\pm(z_1),\\ &\quad\quad\mbox{for some }P_{ij}(z)\in{\mathbb{C}}(z)[[\hbar]],\\ \mbox{(AQ4)}\quad &\iota_{z_1,z_2}e^{{\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}}Q_{ij}(z_1-z_2) e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\mp(z_2)\\ &\quad=\iota_{z_2,z_1}e^{{\langle}\eta(\beta_j,z_2-z_1),\beta_i{\rangle}}Q_{ij}(z_1-z_2)e_{j,\hbar}^\mp(z_2)e_{i,\hbar}^\pm(z_1),\\ &\quad\quad\mbox{for some }Q_{ij}(z)\in{\mathbb{C}}(z)[[\hbar]],\\ \mbox{(AQ5)}\quad &\frac{d}{dz}e_{i,\hbar}^\pm(z) =\pm(\beta_i)_\hbar(z)^+e_{i,\hbar}^\pm(z)\pm e_{i,\hbar}^\pm(z)(\beta_i)_\hbar(z)^-\\ &\quad-{\langle}\eta'(\beta_i,0)^+,\beta_i{\rangle}e_{i,\hbar}^\pm(z),\\ \mbox{(AQ6)}\quad &\iota_{z_1,z_2}e^{{\langle}\eta(\beta_i,z_1-z_2),\beta_i{\rangle}}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}} e_{i,\hbar}^+(z_1)e_{i,\hbar}^-(z_2)\\ 
&\quad=\iota_{z_2,z_1}e^{{\langle}\eta(\beta_i,z_2-z_1),\beta_i{\rangle}}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}} e_{i,\hbar}^-(z_2)e_{i,\hbar}^+(z_1),\\ &\mbox{and}\,\,\left.\left(e^{{\langle}\eta(\beta_i,z_1-z_2),\beta_i{\rangle}}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}} e_{i,\hbar}^+(z_1)e_{i,\hbar}^-(z_2)\right)\right|_{z_1=z_2}=1.\end{aligned}$$ The morphisms between two objects $W_1$ and $W_2$ are ${\mathbb{C}}[[\hbar]]$-module maps $f:W_1\to W_2$ such that $$\begin{aligned} \beta_{i,\hbar}(z)\circ f=f\circ \beta_{i,\hbar}(z),\quad e_{i,\hbar}^\pm(z)\circ f=f\circ e_{i,\hbar}^\pm(z)\quad \mbox{for }i\in J.\end{aligned}$$

**Proposition 45**. *There exists a functor $\mathfrak I$ from the category of $V_Q^\eta$-modules to the category $\mathcal A_\hbar^\eta(Q)$. More precisely, let $(W,Y_W^\eta)$ be a $V_Q^\eta$-module. Then the topologically free ${\mathbb{C}}[[\hbar]]$-module $W$ equipped with the following fields $$\begin{aligned} Y_W^\eta(\beta_i,z),\quad Y_W^\eta(e_{\pm \alpha_i},z)\quad\mbox{for }i\in J\end{aligned}$$ is an object in $\mathcal A_\hbar^\eta(Q)$. In particular, $P_{ij}(z)$ and $Q_{ij}(z)$ can be chosen as follows: $$\begin{aligned} P_{ij}(z)=z^{-{\langle}\beta_i,\beta_j{\rangle}},\quad Q_{ij}(z)=z^{{\langle}\beta_i,\beta_j{\rangle}}\quad \mbox{for }i,j\in J.\end{aligned}$$ Moreover, let $W_1$ be another $V_Q^\eta$-module, and let $f:W\to W_1$ be a $V_Q^\eta$-module homomorphism. Then $f$ is also a morphism in $\mathcal A_\hbar^\eta(Q)$.*

We will prove the following result in Subsection [4.2](#subsec:pf-undeform-qlattice){reference-type="ref" reference="subsec:pf-undeform-qlattice"}.

**Theorem 46**. *There exists a functor $\mathfrak U_\eta$ from the category $\mathcal A_\hbar^\eta(Q)$ to the category of $V_Q[[\hbar]]$-modules. Moreover, $\mathfrak U_\eta\circ\mathfrak I\circ \mathfrak D_\eta$ is the identity functor of the category of $V_Q[[\hbar]]$-modules, and $\mathfrak D_\eta\circ \mathfrak U_\eta\circ \mathfrak I$ is the identity functor of the category of $V_Q^\eta$-modules. Furthermore, the category of $V_Q[[\hbar]]$-modules, the category of $V_Q^\eta$-modules and the category $\mathcal A_\hbar^\eta(Q)$ are isomorphic.*

**Corollary 47**. *Let $(W,Y_W^\eta)$ be a $V_Q^\eta$-module. Suppose that there exist $w\in W$ and $\lambda\in{\mathfrak h}_Q$ such that $$\begin{aligned} Y_W^\eta(\beta_i,z)^-w={\langle}\lambda,\beta_i{\rangle}wz^{-1}+{\langle}\lambda,\eta'(\beta_i,z)^-{\rangle}w\quad\mbox{for }i\in J.\end{aligned}$$ Then $$\begin{aligned} &e^{\mp{\langle}\lambda,\eta(\beta_i,z){\rangle}}z^{\mp{\langle}\lambda,\beta_i{\rangle}}Y_W^\eta(e_{\pm\beta_i},z)w\in W[[z]].\end{aligned}$$*

*Proof.* Theorem [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"} provides a $V_Q[[\hbar]]$-module structure $Y_W$ on $W$, such that $$\begin{aligned} &Y_W^\eta(\beta_i,z)=Y_W(\beta_i,z)+\Phi(\eta'(\beta_i,z),Y_W),\label{eq:coro-h--to-e-temp1}\\ &Y_W^\eta(e_{\pm\beta_i},z)=Y_W(e_{\pm\beta_i},z)\exp\left(\pm\Phi(\eta(\beta_i,z),Y_W)\right),\end{aligned}$$ for $i\in J$.
Write $$\begin{aligned} Y_W(\beta_i,z)=\sum_{n\in{\mathbb{Z}}}\beta_i(n)z^{-n-1},\quad \eta(\beta_i,z)=\sum_{j\in J}\beta_j\otimes f_{ij}(z).\end{aligned}$$ Taking $\mathop{\mathrm{Res}}_zz^n$ on both hand sides of [\[eq:coro-h\--to-e-temp1\]](#eq:coro-h--to-e-temp1){reference-type="eqref" reference="eq:coro-h--to-e-temp1"} for $n\in{\mathbb{N}}$, we get that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^n Y_W^\eta(\beta_i,z)w\\ =&\mathop{\mathrm{Res}}_zz^nY_W(\beta_i,z)w+\mathop{\mathrm{Res}}_{z,z_1}z^n\frac{\partial}{\partial {z}}f_{ij}(z-z_1)Y_W(\beta_i,z_1)w \\ =&\beta_i(n)w+\sum_{k\ge 0}\sum_{j\in J}\binom{n}{k}\beta_j(k)w\mathop{\mathrm{Res}}_zz^{n-k}\frac{\partial}{\partial {z}}f_{ij}(z)\\ =&\beta_i(n)w+\sum_{k=0}^{n-1}\binom{n}{k}\sum_{j\in J}\beta_j(k)w\mathop{\mathrm{Res}}_zz^{n-k}\frac{\partial}{\partial {z}}f_{ij}(z).\end{aligned}$$ Viewing $\beta_i(n)w$ ($i\in J$, $n\in{\mathbb{N}}$) as indeterminates, the linear system above has the unique solution: $$\begin{aligned} \beta_i(n)w=\delta_{n,0}{\langle}\lambda,\beta_i{\rangle}w,\quad i\in J,\,n\in{\mathbb{N}}.\end{aligned}$$ Then $$\begin{aligned} &Y_W^\eta(e_{\pm\beta_i},z)w=Y_W(e_{\pm\beta_i},z)\exp\left(\pm\Phi(\eta(\beta_i,z),Y_W)\right)w\\ =&E^+(\pm\beta_i,z)e_{\pm\beta_i}E^-(\pm\beta_i,z)z^{\pm\beta_i}\exp\left(\pm\Phi(\eta(\beta_i,z),Y_W)\right)w\\ =&E^+(\pm\beta_i,z)e_{\pm\beta_i}w z^{\pm{\langle}\lambda,\beta_i{\rangle}}\exp\left(\pm{\langle}\lambda,\eta(\beta_i,z){\rangle}\right).\end{aligned}$$ As $E^+(\pm\beta_i,z)e_{\pm\beta_i}w\in W[[z]]$, we complete the proof. ◻ **Lemma 48**. *The category of $V_Q$-modules is equivalent to the category of $V_Q[[\hbar]]$-modules.* *Proof.* For each $V_Q$-module $W$, $W[[\hbar]]$ naturally carries a $V_Q[[\hbar]]$-module structure. Conversely, for each $V_Q[[\hbar]]$-module $W$, $W/\hbar W$ naturally carries a $V_Q$-module structure. Let $\mathfrak A$ be the functor from the category of $V_Q$-modules to the category of $V_Q[[\hbar]]$-modules, such that $\mathfrak A(W)=W[[\hbar]]$, and let $\mathfrak B$ be the functor from the category of $V_Q[[\hbar]]$-modules to the category of $V_Q$-modules, such that $\mathfrak B(W)=W/\hbar W$. It is easy to see that $\mathfrak B\circ \mathfrak A$ is isomorphic to the identity functor of the category of $V_Q$-modules. Finally, let $W$ be a $V_Q[[\hbar]]$-module. Then $W$ is naturally a $V_Q$-module. Since $W$ is completely reducible (see Theorem [Theorem 5](#thm:lattice-VA){reference-type="ref" reference="thm:lattice-VA"}) and $\hbar W$ is a $V_Q$-submodule, there is a $V_Q$-submodule $W_0$ of $W$, such that $W_0\oplus \hbar W=W$. Notice that $W_0\cong W/\hbar W$ as $V_Q$-modules and $W_0[[\hbar]]=W$ as $V_Q[[\hbar]]$-modules. Then we have that $$\begin{aligned} \mathfrak A\circ\mathfrak B(W)=(W/\hbar W)[[\hbar]]\cong W_0[[\hbar]]=W.\end{aligned}$$ Therefore, $\mathfrak A\circ\mathfrak B$ is isomorphic to the identity functor of the category of $V_Q[[\hbar]]$-modules. ◻ Combining this result with Theorems [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"} and [Theorem 5](#thm:lattice-VA){reference-type="ref" reference="thm:lattice-VA"}, we have that **Corollary 49**. *Every $V_Q^\eta$-module is completely reducible, and for any simple $V_Q^\eta$-module $W$, there exists $Q+\gamma\in Q^0/Q$, such that $W\cong V_{Q+\gamma}^\eta$.* **Proposition 50**. 
*Let $(V,Y,{{\bf 1}}_V)$ be an $\hbar$-adic nonlocal VA containing a subset ${ \left.\left\{ {\beta_i,e_i^\pm} \,\right|\, {i\in J} \right\} }$, such that the topologically free ${\mathbb{C}}[[\hbar]]$-module $V$ equipped with the fields $$\begin{aligned} Y(\beta_i,z),\quad Y(e_i^\pm,z)\quad \mbox{for }i\in J\end{aligned}$$ becomes an object in $\mathcal A_\hbar^\eta(Q)$. Then there is a unique $\hbar$-adic nonlocal VA homomorphism $\psi:V_Q^\eta\to V$, such that $$\begin{aligned} \psi(\beta_i)=\beta_i,\quad \psi(e_{\pm\beta_i})=e_i^\pm\quad\mbox{for }i\in J.\end{aligned}$$ Moreover, $\psi$ is injective.* *Proof.* Theorem [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"} provides a $V_Q^\eta$-module structure $Y_V^\eta(\cdot,z)$ on $V$, uniquely determined by $$\begin{aligned} \label{eq:prop-qlattice-universal-temp0} &Y_V^\eta(\beta_i,z)=Y(\beta_i,z),\quad Y_V^\eta(e_{\pm\beta_i},z)=Y(e_i^\pm,z)\quad\mbox{for }i\in J.\end{aligned}$$ Let $$\begin{aligned} U=\mathop{\mathrm{Span}}_{\mathbb{C}}{ \left.\left\{ {\beta_i,\,e_{\pm\beta_i},\,{{\bf 1}}} \,\right|\, {i\in J} \right\} }.\end{aligned}$$ Define $\psi^0:U\to V$ by $\psi^0({{\bf 1}})={{\bf 1}}_V$ and $$\begin{aligned} \psi^0(\beta_i)=\beta_i,\quad \psi^0(e_{\pm\beta_i})=e_i^\pm\quad \mbox{for }i\in J.\end{aligned}$$ Then $$\begin{aligned} \label{eq:prop-qlattice-universal-temp1} Y_V^\eta(u,z){{\bf 1}}_V=Y(\psi^0(u),z){{\bf 1}}_V\in V[[z]]\quad\mbox{for }u\in U.\end{aligned}$$ From Theorem [Theorem 42](#thm:qlatticeVAQYB){reference-type="ref" reference="thm:qlatticeVAQYB"}, we see that $$\begin{aligned} S_Q^\eta(z)(U\otimes U)\subset U\otimes U\otimes{\mathbb{C}}((z))[[\hbar]].\end{aligned}$$ Then we get from Proposition [Proposition 13](#prop:vacuum-like){reference-type="ref" reference="prop:vacuum-like"} that $$\begin{aligned} \label{eq:prop-qlattice-universal-temp2} Y_V^\eta(u,z){{\bf 1}}_V\in V[[z]]\quad\mbox{for }u\in V_Q^\eta.\end{aligned}$$ Notice that $V_Q^\eta$ is generated by ${ \left.\left\{ {\beta_i,e_{\pm\beta_i}} \,\right|\, {i\in J} \right\} }$. Combining this fact with [\[eq:prop-qlattice-universal-temp0\]](#eq:prop-qlattice-universal-temp0){reference-type="eqref" reference="eq:prop-qlattice-universal-temp0"}, [\[eq:prop-qlattice-universal-temp2\]](#eq:prop-qlattice-universal-temp2){reference-type="eqref" reference="eq:prop-qlattice-universal-temp2"} and Proposition [Proposition 14](#prop:vacuum-like-sp){reference-type="ref" reference="prop:vacuum-like-sp"}, we get an $\hbar$-adic nonlocal VA homomorphism $$\begin{aligned} \psi:V_Q^\eta\to V,\quad\mbox{such that }\psi(u)=\psi^0(u)\quad\mbox{for }u\in U.\end{aligned}$$ Finally, we show that $\psi$ is injective. View $V$ as a $V_Q^\eta$-module via $\psi$. Then $\psi$ is also a $V_Q^\eta$-module homomorphism. Since $V_Q^\eta$ itself is a simple $V_Q^\eta$-module and $\psi({{\bf 1}})={{\bf 1}}_V\ne 0$, we get that $\psi$ must be injective. ◻ ## Proof of Theorem [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"} {#subsec:pf-undeform-qlattice} In this subsection, we fix an object $$\begin{aligned} (W,\{\beta_{i,\hbar}(z)\}_{i\in J},\{e_{i,\hbar}^\pm(z)\}_{i\in J})\in\mathop{\mathrm{obj}}\mathcal A_\hbar^\eta(Q).\end{aligned}$$ **Lemma 51**. *Let $V$ be a ${\mathbb{C}}[[\hbar]]$-module. 
For each linear map $\varphi(\cdot,z):{\mathfrak h}\to V[[z^{\pm1}]]$, there exists a unique linear map $\bar\varphi(\cdot,z):{\mathfrak h}\to V[[z^{\pm1}]]$, such that $$\begin{aligned} \label{eq:linear-sys} \bar\varphi(\beta_i,z)+\Phi(\eta'(\beta_i,z),\bar\varphi)=\varphi(\beta_i,z)\quad\mbox{for }i\in J.\end{aligned}$$ Moreover, if $\varphi({\mathfrak h},z)^-\subset V[z^{-1}][[\hbar]]$, then $$\begin{aligned} \bar\varphi({\mathfrak h},z)^-\subset V[z^{-1}][[\hbar]].\end{aligned}$$*

*Proof.* Write $$\begin{aligned} \bar\varphi(\beta_i,z)=\sum_{n\in{\mathbb{Z}}}\bar\beta_i(n)z^{-n-1},\quad \eta(\beta_i,z)=\sum_{j\in J}\beta_j\otimes f_{ij}(z),\end{aligned}$$ and write $f_{ij}(z)=\sum_{n\in{\mathbb{Z}}}f_{ij}(n)z^{-n-1}$. We view $\bar\beta_i(n)$ ($i\in J$, $n\in{\mathbb{Z}}$) as indeterminates. Taking $\mathop{\mathrm{Res}}_zz^n$ on both sides of [\[eq:linear-sys\]](#eq:linear-sys){reference-type="eqref" reference="eq:linear-sys"} for $n\in{\mathbb{N}}$, we get the following linear system $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^n\varphi(\beta_i,z)\\ =&\bar\beta_i(n)+\sum_{j\in J}\sum_{k\in{\mathbb{N}}}\mathop{\mathrm{Res}}_zz^n\bar\beta_j(k)\frac{(-1)^k}{k!}\frac{\partial^{k+1}}{\partial z^{k+1}}f_{ij}(z)\\ =&\bar\beta_i(n)+\sum_{j\in J}\sum_{k\in{\mathbb{N}}}\binom{n}{k}\bar\beta_j(k)\mathop{\mathrm{Res}}_zz^{n-k}\frac{\partial}{\partial {z}}f_{ij}(z)\\ =&\bar\beta_i(n)-\sum_{j\in J}\sum_{k=0}^{n-1}\binom{n}{k}(n-k)f_{ij}(n-k-1)\bar\beta_j(k).\end{aligned}$$ It has a unique solution. So $\bar\beta_i(n)$ ($i\in J$, $n\in{\mathbb{N}}$) are uniquely determined. Taking $\mathop{\mathrm{Res}}_zz^{-n-1}$ on both sides of [\[eq:linear-sys\]](#eq:linear-sys){reference-type="eqref" reference="eq:linear-sys"} for $n\in{\mathbb{N}}$, we get that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^{-n-1}\varphi(\beta_i,z)\\ =&\bar\beta_i(-n-1)+\sum_{j\in J}\sum_{k\in{\mathbb{N}}}\mathop{\mathrm{Res}}_zz^{-n-1}\bar\beta_j(k)\frac{(-1)^k}{k!}\frac{\partial^{k+1}}{\partial z^{k+1}}f_{ij}(z)\\ =&\bar\beta_i(-n-1)-\sum_{j\in J}\sum_{k\in{\mathbb{N}}}(-1)^k\binom{n+k}{k}(n+k+1)f_{ij}(-n-k-3)\bar\beta_j(k).\end{aligned}$$ So $\bar\beta_i(-n-1)$ ($i\in J$, $n\in{\mathbb{N}}$) are also uniquely determined. The moreover statement follows from [\[eq:eta-neg-cond\]](#eq:eta-neg-cond){reference-type="eqref" reference="eq:eta-neg-cond"}. ◻

In view of Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"}, we introduce the following definition.

**Definition 52**. Let $\varphi(\cdot,z):{\mathfrak h}\to {\mathcal{E}}_\hbar(W)$ be the linear map determined by $$\begin{aligned} \label{eq:def-beta-i} \varphi(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi)=\beta_{i,\hbar}(z)\quad\mbox{for }i\in J.\end{aligned}$$

**Lemma 53**.
*For $i,j\in J$, we have that $$\begin{aligned} &[\varphi(\beta_i,z_1),\beta_{j,\hbar}(z_2)]={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right) +{\langle}\eta''(\beta_j,z_2-z_1),\beta_i{\rangle},\label{eq:bar-h-h-com}\\ &[\varphi(\beta_i,z_1),e_{j,\hbar}^\pm(z_2)]=\pm{\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right) \label{eq:bar-h-e-com}\\ &\qquad\qquad\qquad\qquad \pm{\langle}\eta'(\beta_j,z_2-z_1),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2),\nonumber\\ &[\varphi(\beta_i,z_1),\varphi(\beta_j,z_2)]={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right).\label{eq:AQ-1}\end{aligned}$$* *Proof.* Define $$\begin{aligned} \varphi_1(\cdot,z):{\mathfrak h}&\longrightarrow \mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W)[[z_2^{\pm 1}]][[z,z^{-1}]]\\ \beta_i&\mapsto [\varphi(\beta_i,z),\beta_{j,\hbar}(z_2)].\end{aligned}$$ From (AQ1), we get the following linear system $$\begin{aligned} &\varphi_1(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi_1)\\ =&[\varphi(\beta_i,z),\beta_{j,\hbar}(z_2)]+[\Phi(\eta'(\beta_i,z),\varphi),\beta_{j,\hbar}(z_2)]\\ =&[\beta_{i,\hbar}(z),\beta_{j,\hbar}(z_2)] ={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z^{-1}\delta\left(\frac{z_2}{z}\right)\\ &\quad-{\langle}\eta''(\beta_i,z-z_2),\beta_j{\rangle} +{\langle}\eta''(\beta_j,z_2-z),\beta_i{\rangle}.\end{aligned}$$ Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"} provides the following unique solution: $$\begin{aligned} \varphi_1(\beta_i,z)={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z^{-1}\delta\left(\frac{z_2}{z}\right) +{\langle}\eta''(\beta_j,z_2-z),\beta_i{\rangle},\end{aligned}$$ as desired. Define $$\begin{aligned} \varphi_2(\cdot,z):{\mathfrak h}&\longrightarrow \mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W)[[z_2^{\pm 1}]][[z,z^{-1}]]\\ \beta_i&\mapsto [\varphi(\beta_i,z),e_{j,\hbar}^\pm(z_2)].\end{aligned}$$ From (AQ2), we get the following linear system $$\begin{aligned} &\varphi_2(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi_2)\\ =&[\varphi(\beta_i,z),e_{j,\hbar}^\pm(z_2)]+[\Phi(\eta'(\beta_i,z),\varphi),e_{j,\hbar}^\pm(z_2)]\\ =&[\beta_{i,\hbar}(z),e_{j,\hbar}^\pm(z_2)] =\pm{\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)z_1^{-1}\delta\left(\frac{z_2}{z}\right)\\ &\pm {\langle}\eta'(\beta_i,z-z_2),\beta_j{\rangle}e_{j,\hbar}^\pm(z_2) \pm {\langle}\eta'(\beta_j,z_2-z),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2).\end{aligned}$$ Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"} provides the following unique solution: $$\begin{aligned} \varphi_2(\beta_i,z)=\pm{\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)z^{-1}\delta\left(\frac{z_2}{z}\right) \pm{\langle}\eta'(\beta_j,z_2-z),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2),\end{aligned}$$ as desired. 
Finally, we define $$\begin{aligned} \varphi_3(\cdot,z):{\mathfrak h}&\longrightarrow \mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}(W)[[z_2^{\pm 1}]][[z,z^{-1}]]\\ \beta_i&\mapsto [\varphi(\beta_i,z),\varphi(\beta_j,z_2)].\end{aligned}$$ Then we get from [\[eq:bar-h-h-com\]](#eq:bar-h-h-com){reference-type="eqref" reference="eq:bar-h-h-com"} that $$\begin{aligned} &\varphi_3(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi_3)\\ =&[\varphi(\beta_i,z),\varphi(\beta_j,z_2)]+[\Phi(\eta'(\beta_i,z),\varphi),\varphi(\beta_j,z_2)]\\ =& [\beta_{i,\hbar}(z),\varphi(\beta_j,z_2)]\\ =&{\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z^{-1}\delta\left(\frac{z_2}{z}\right) -{\langle}\eta''(\beta_i,z-z_2),\beta_j{\rangle}.\end{aligned}$$ Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"} provides the following unique solution: $$\begin{aligned} \varphi_3(\beta_i,z)={\langle}\beta_i,\beta_j{\rangle}\frac{\partial}{\partial {z_2}}z^{-1}\delta\left(\frac{z_2}{z}\right),\end{aligned}$$ as desired. ◻ **Proposition 54**. *For $i\in J$, we define $$\begin{aligned} &e_i^\pm(z)=e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right).\label{eq:def-e-i}\end{aligned}$$ Then we have that $$\begin{aligned} &[\varphi(\beta_i,z_1),e_j^\pm(z_2)] =\pm{\langle}\beta_i,\beta_j{\rangle}e_j^\pm(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\label{eq:AQ-2}\\ & \iota_{z_1,z_2}P_{ij}(z_1-z_2)e_i^\pm(z_1)e_j^\pm(z_2) =\iota_{z_2,z_1}P_{ij}(z_1-z_2)e_j^\pm(z_2)e_i^\pm(z_1),\label{eq:AQ-3}\\ & \iota_{z_1,z_2}Q_{ij}(z_1-z_2)e_i^\pm(z_1)e_j^\mp(z_2) =\iota_{z_2,z_1}Q_{ij}(z_1-z_2)e_j^\mp(z_2)e_i^\pm(z_1),\label{eq:AQ-4}\\ & \frac{d}{dz}e_i^\pm(z)=\pm\varphi(\beta_i,z)^+e_i^\pm(z)\pm e_i^\pm(z)\varphi(\beta_i,z)^-,\label{eq:AQ-5}\\ & \iota_{z_1,z_2}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_j^-(z_2)\label{eq:AQ-6}\\ &\quad =\iota_{z_2,z_1}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_j^-(z_2)e_i^+(z_1),\nonumber\\ & \left((z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_i^-(z_2)\right)|_{z_1=z_2}=1.\label{eq:AQ-7}\end{aligned}$$* *Proof.* From [\[eq:AQ-1\]](#eq:AQ-1){reference-type="eqref" reference="eq:AQ-1"}, we have that $$\begin{aligned} &[\Phi(\beta_i\otimes g(z_1),\varphi),e_{j,\hbar}^\pm(z_2)] =\pm{\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)g(z_1-z_2)\end{aligned}$$ for any $g(z)\in{\mathbb{C}}((z))[[\hbar]]$. 
By linearity, we get that $$\begin{aligned} &[\Phi(\eta'(\beta_i,z_1),\varphi),e_{j,\hbar}^\pm(z_2)] =\pm {\langle}\eta'(\beta_i,z_1-z_2),\beta_j{\rangle}e_{j,\hbar}^\pm(z_2),\label{eq:Phi-e-h-com-der}\\ &[\Phi(\eta(\beta_i,z_1),\varphi),e_{j,\hbar}^\pm(z_2)] =\pm {\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}e_{j,\hbar}^\pm(z_2).\label{eq:Phi-e-h-com}\end{aligned}$$ Then we have that $$\begin{aligned} &[\varphi(\beta_i,z_1),e_j^\pm(z_2)]\\ =&[\varphi(\beta_i,z_1),e_{j,\hbar}^\pm(z_2)\exp\left(\mp\Phi(\eta(\beta_i,z_2),\varphi)\right)]\\ =&[\varphi(\beta_i,z_1),e_{j,\hbar}^\pm(z_2)]\exp\left(\mp\Phi(\eta(\beta_i,z_2))\right)\\ &\quad+e_{j,\hbar}^\pm(z_2)[\varphi(\beta_i,z_1),\exp\left(\mp\Phi(\eta(\beta_i,z_2))\right)]\\ =&\pm {\langle}\beta_i,\beta_j{\rangle}e_{j,\hbar}^\pm(z_2)z_1\delta\left(\frac{z_2}{z_1}\right)\pm {\langle}\eta'(\beta_j,z_2-z_1),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2)\\ &\mp {\langle}\eta'(\beta_j,z_2-z_1),\beta_i{\rangle}e_{j,\hbar}^\pm(z_2)\\ =&\pm {\langle}\beta_i,\beta_j{\rangle} e_{j,\hbar}^\pm(z_2)z_1\delta\left(\frac{z_2}{z_1}\right),\end{aligned}$$ where the second equation follows from [\[eq:bar-h-e-com\]](#eq:bar-h-e-com){reference-type="eqref" reference="eq:bar-h-e-com"} and [\[eq:Phi-e-h-com\]](#eq:Phi-e-h-com){reference-type="eqref" reference="eq:Phi-e-h-com"}. It proves [\[eq:AQ-2\]](#eq:AQ-2){reference-type="eqref" reference="eq:AQ-2"}. From [\[eq:Phi-e-h-com\]](#eq:Phi-e-h-com){reference-type="eqref" reference="eq:Phi-e-h-com"}, we have that $$\begin{aligned} &e_i^{\epsilon_1}(z_1)e_j^{\epsilon_2}(z_2)\\ =&e_{i,\hbar}^{\epsilon_1}(z_1)\exp\left(-{\epsilon_1}\Phi(\eta(\beta_i,z_1),\varphi)\right) e_{j,\hbar}^{\epsilon_2}(z_2)\exp\left(-{\epsilon_2}\Phi(\eta(\beta_j,z_2),\varphi)\right)\\ =&e^{-{\epsilon_1}\epsilon_2{\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}} e_{i,\hbar}^{\epsilon_1}(z_1) e_{j,\hbar}^{\epsilon_2}(z_2)\\ &\quad\times\exp\left(-{\epsilon_1}\Phi(\eta(\beta_i,z_1),\varphi)\right) \exp\left(-{\epsilon_2}\Phi(\eta(\beta_j,z_2),\varphi)\right).\end{aligned}$$ Then we get from (AQ3) that $$\begin{aligned} &\iota_{z_1,z_2}P_{ij}(z_1-z_2)e_i^\pm(z_1)e_j^\pm(z_2)\\ &\quad-\iota_{z_2,z_1}P_{ij}(z_1-z_2)e_j^\pm(z_2)e_i^\pm(z_1)\\ =&\iota_{z_1,z_2}P_{ij}(z_1-z_2)e^{-{\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}}e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\pm(z_2)\\ &\quad\times\exp\left(\mp\Phi(\eta(\beta_i,z_1),\varphi)\right)\exp\left(\mp\Phi(\eta(\beta_j,z_2),\varphi)\right)\\ &-\iota_{z_2,z_1}P_{ij}(z_1-z_2)e^{-{\langle}\eta(\beta_j,z_2-z_1),\beta_j{\rangle}} e_{j,\hbar}^\pm(z_2)e_{i,\hbar}^\pm(z_1)\\ &\quad\times \exp\left(\mp\Phi(\eta(\beta_i,z_1),\varphi)\right)\exp\left(\mp\Phi(\eta(\beta_j,z_2),\varphi)\right)\\ =&0.\end{aligned}$$ And we get from (AQ4) that $$\begin{aligned} &\iota_{z_1,z_2}Q_{ij}(z_1-z_2)e_i^\pm(z_1)e_j^\mp(z_2)\\ &\quad-\iota_{z_2,z_1}Q_{ij}(z_1-z_2)e_j^\mp(z_2)e_i^\pm(z_1)\\ =&\iota_{z_1,z_2}Q_{ij}(z_1-z_2)e^{{\langle}\eta(\beta_i,z_1-z_2),\beta_j{\rangle}} e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\mp(z_2)\\ &\quad\times\exp\left(\mp\Phi(\eta(\beta_i,z_1),\varphi)\right)\exp\left(\pm\Phi(\eta(\beta_j,z_2),\varphi)\right)\\ &-\iota_{z_2,z_1}Q_{ij}(z_1-z_2)e^{{\langle}\eta(\beta_j,z_2-z_1),\beta{\rangle}}e_{j,\hbar}^\mp(z_2)e_{i,\hbar}^\pm(z_1)\\ &\quad\times \exp\left(\mp\Phi(\eta(\beta_i,z_1),\varphi)\right)\exp\left(\pm\Phi(\eta(\beta_j,z_2),\varphi)\right)\\ =&0.\end{aligned}$$ It proves [\[eq:AQ-3\]](#eq:AQ-3){reference-type="eqref" reference="eq:AQ-3"} and [\[eq:AQ-4\]](#eq:AQ-4){reference-type="eqref" reference="eq:AQ-4"}. 
Taking $Q_{ii}(z)=z^{{\langle}\beta_i,\beta_i{\rangle}}$, we see that (AQ6) provides [\[eq:AQ-6\]](#eq:AQ-6){reference-type="eqref" reference="eq:AQ-6"}. From (AQ5), we get that $$\begin{aligned} &\frac{d}{dz}e_i^\pm(z)=\frac{d}{dz}e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ =&\pm \beta_{i,\hbar}(z)^+e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &+e_{i,\hbar}^\pm(z)\beta_{i,\hbar}(z)^-\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &-{\langle}\eta'(\beta_i,0)^+,\beta_i{\rangle}e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &\mp e_{i,\hbar}^\pm(z)\Phi(\eta'(\beta_i,z),\varphi)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ =&\pm\left(\varphi(\beta_i,z)^++\Phi(\eta'(\beta_i,z)^+,\varphi)\right)e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &\pm e_{i,\hbar}^\pm(z)\left(\varphi(\beta_i,z)^-+\Phi(\eta'(\beta_i,z)^-,\varphi)\right)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &-{\langle}\eta'(\beta_i,0)^+,\beta_i{\rangle}e_{i,\hbar}^\pm(z)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &\mp e_{i,\hbar}^\pm(z)\Phi(\eta'(\beta_i,z),\varphi)\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ =&\pm\varphi(\beta_i,z)^+e_i^\pm(z)\pm e_i^\pm(z)\varphi(\beta_i,z)^-\\ &\pm[\Phi(\eta'(\beta_i,z)^+,\varphi),e_{i,\hbar}^\pm(z)]\exp\left(\mp\Phi(\eta(\beta_i,z),\varphi)\right)\\ &-{\langle}\eta'(\beta_i,0)^+,\beta_i{\rangle}e_i^\pm(z)\\ =&\pm\varphi(\beta_i,z)^+e_i^\pm(z)\pm e_i^\pm(z)\varphi(\beta_i,z)^-,\end{aligned}$$ where the second equation follows from [\[eq:def-beta-i\]](#eq:def-beta-i){reference-type="eqref" reference="eq:def-beta-i"}, and the last equation follows from [\[eq:Phi-e-h-com-der\]](#eq:Phi-e-h-com-der){reference-type="eqref" reference="eq:Phi-e-h-com-der"}. Then we complete the proof of [\[eq:AQ-5\]](#eq:AQ-5){reference-type="eqref" reference="eq:AQ-5"}. Notice that $$\begin{aligned} &(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_i^-(z_2)\\ =&e^{{\langle}\eta(\beta_i,z_1-z_2),\beta_i{\rangle}}(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_{i,\hbar}^+(z_1)e_{i,\hbar}^-(z_2)\\ &\quad\times \exp\left(-\Phi(\eta(\beta_i,z_1),\varphi)\right)\exp\left(\Phi(\eta(\beta_i,z_2),\varphi)\right).\end{aligned}$$ Then we get from (AQ6) that $$\begin{aligned} \left((z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_i^-(z_2)\right)|_{z_1=z_2}=1.\end{aligned}$$ We complete the proof of [\[eq:AQ-7\]](#eq:AQ-7){reference-type="eqref" reference="eq:AQ-7"}. ◻ Similar to [\[eq:def-E\]](#eq:def-E){reference-type="eqref" reference="eq:def-E"}, we let ($i\in J$, $\epsilon=\pm$): $$\begin{aligned} \varphi(\beta_i,z)=\sum_{n\in{\mathbb{Z}}}\beta_i(n)z^{-n-1},\quad E^\pm(\epsilon\beta_i,z)=\exp\left(\epsilon\sum_{n\in{\mathbb{Z}}_+}\frac{\beta_i(\mp n)}{\pm n}z^{\pm n}\right).\end{aligned}$$ Inspired by the theory of $Z$-operators, we have the following result. **Proposition 55**. *For each $i\in J$, we define $$\begin{aligned} Z_i^\pm(z)=E^+(\mp \beta_i,z)e_i^\pm(z)E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}.\end{aligned}$$ Then we have that $$\begin{aligned} \label{eq:Z-noz} Z_i^\pm(z)\in\mathop{\mathrm{End}}_{{\mathbb{C}}[[\hbar]]}W,\end{aligned}$$ that is, $Z_i^\pm(z)$ does not depend on $z$, so that we can write $Z_i^\pm$ for $Z_i^\pm(z)$. 
Moreover, $$\begin{aligned} &[\varphi(\beta_i,z),Z_j^\pm]=\pm{\langle}\beta_i,\beta_j{\rangle}Z_j^\pm z^{-1},\label{eq:Z-ops-rel-1}\\ &Z_i^\pm Z_j^\pm=(-1)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\pm Z_i^\pm,\label{eq:Z-ops-rel-2}\\ &Z_i^\pm Z_j^\mp=(-1)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\mp Z_i^\pm,\label{eq:Z-ops-rel-3}\\ &Z_i^+Z_i^-=1.\label{eq:Z-ops-rel-4}\end{aligned}$$* *Proof.* Notice that $$\begin{aligned} &\frac{d}{dz}E^+(\epsilon\beta_i,z)=\epsilon E^+(\epsilon\beta_i,z)\varphi(\beta_i,z)^+\\ &\frac{d}{dz}E^-(\epsilon\beta_i,z)z^{\epsilon\beta_i(0)} =\epsilon\varphi(\beta_i,z)^-E^-(\epsilon\beta_i,z)z^{\epsilon\beta_i(0)}.\end{aligned}$$ Combining these relations with [\[eq:AQ-5\]](#eq:AQ-5){reference-type="eqref" reference="eq:AQ-5"}, we get that $$\begin{aligned} \frac{d}{dz}Z_i^\pm(z)=&\left(\frac{d}{dz}E^+(\mp \beta_i,z)\right)e_i^\pm(z)E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ &+E^+(\mp \beta_i,z)\left(\frac{d}{dz}e_i^\pm(z)\right)E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ &+E^+(\mp \beta_i,z)e_i^\pm(z)\left(\frac{d}{dz}E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\right)\\ =&\mp E^+(\mp \beta_i,z)\varphi(\beta_i,z)^+e_i^\pm(z)E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ &\pm E^+(\mp \beta_i,z)\varphi(\beta_i,z)^+e_i^\pm(z)E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ &\pm E^+(\mp \beta_i,z)e_i^\pm(z)\varphi(\beta_i,z)^-E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ &\mp E^+(\mp \beta_i,z)e_i^\pm(z)\varphi(\beta_i,z)^-E^-(\mp\beta_i,z)z^{\mp\beta_i(0)}\\ =&0.\end{aligned}$$ It shows that $Z_i^\pm(z)$ does not depend on $z$. Notice that $$\begin{aligned} &E^-(\epsilon\beta_i,z_1)z_1^{\epsilon\beta_i}e_j^\pm(z_2)\\ =&e_j^\pm(z_2)E^-(\epsilon\beta_i,z_1)z_1^{\epsilon\beta_i}(z_1-z_2)^{\pm\epsilon{\langle}\beta_i,\beta_j{\rangle}},\\ &E^+(\epsilon\beta_i,z_1)e_j^\pm(z_2)\\ =&e_j^\pm(z_2)E^+(\epsilon\beta_i,z_1)\left(1-z_2/z_1\right)^{\mp\epsilon{\langle}\beta_i,\beta_j{\rangle}},\\ &E^-(\epsilon_1\beta_i,z_1)E^+(\epsilon_2\beta_j,z_2)\\ =&E^+(\epsilon_2\beta_j,z_2)E^-(\epsilon_1\beta_i,z_1)\left(1-z_2/z_1\right)^{\epsilon_1\epsilon_2{\langle}\beta_i,\beta_j{\rangle}}.\end{aligned}$$ Then for $i,j\in J$, we have that $$\begin{aligned} &\iota_{z_1,z_2}P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_i^\pm(z_1)Z_j^\pm(z_2)\nonumber\\ =&E^+(\mp \beta_i,z_1)e_i^\pm(z_1)E^-(\mp\beta_i,z_1)z_1^{\mp\beta_i} E^+(\mp \beta_j,z_2)\nonumber\\ &\quad\times e_j^\pm(z_2)E^-(\mp\beta_j,z_2)z_2^{\mp\beta_j}\nonumber\\ =&E^+(\mp \beta_i,z_1)E^+(\mp \beta_j,z_2) \iota_{z_1,z_2}P_{ij}(z_1-z_2)e_i^\pm(z_1)e_j^\pm(z_2)\label{eq:prop-Z-temp0}\\ &\quad\times E^-(\mp\beta_i,z_1)z_1^{\mp\beta_i}E^-(\mp\beta_j,z_2)z_2^{\mp\beta_j}.\nonumber\end{aligned}$$ Similarly, we have that $$\begin{aligned} &\iota_{z_2,z_1}P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\pm(z_2)Z_i^\pm(z_1)\\ =&(-1)^{{\langle}\beta_i,\beta_j{\rangle}}E^+(\mp \beta_i,z_1)E^+(\mp \beta_j,z_2) \iota_{z_2,z_1}P_{ij}(z_1-z_2)e_j^\pm(z_2)e_i^\pm(z_1)\\ &\quad\times E^-(\mp\beta_i,z_1)z_1^{\mp\beta_i}E^-(\mp\beta_j,z_2)z_2^{\mp\beta_j}.\end{aligned}$$ Then we get from [\[eq:AQ-3\]](#eq:AQ-3){reference-type="eqref" reference="eq:AQ-3"} that $$\begin{aligned} \label{eq:prop-Z-temp1} &\iota_{z_1,z_2}P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_i^\pm(z_1)Z_j^\pm(z_2)\\ =&(-1)^{{\langle}\beta_i,\beta_j{\rangle}}\iota_{z_2,z_1}P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\pm(z_2)Z_i^\pm(z_1).\nonumber\end{aligned}$$ Let $0\ne g(z)\in{\mathbb{C}}[z][[\hbar]]$, such that $$\begin{aligned} 
g(z)P_{ij}(z)z^{{\langle}\beta_i,\beta_j{\rangle}}\in{\mathbb{C}}[z][[\hbar]].\end{aligned}$$ By multiplying $g(z_1-z_2)$ on both sides of [\[eq:prop-Z-temp1\]](#eq:prop-Z-temp1){reference-type="eqref" reference="eq:prop-Z-temp1"}, we get that $$\begin{aligned} &g(z_1-z_2)P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_i^\pm(z_1)Z_j^\pm(z_2)\\ =&(-1)^{{\langle}\beta_i,\beta_j{\rangle}}g(z_1-z_2)P_{ij}(z_1-z_2)(z_1-z_2)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\pm(z_2)Z_i^\pm(z_1).\end{aligned}$$ It follows that $$\begin{aligned} Z_i^\pm Z_j^\pm=(-1)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\pm Z_i^\pm,\end{aligned}$$ since $Z_i^\pm(z)\in {\mathcal{E}}_{{\mathbb{C}}[[\hbar]]}(W)$ (see [\[eq:Z-noz\]](#eq:Z-noz){reference-type="eqref" reference="eq:Z-noz"}). Similarly, we have that $$\begin{aligned} Z_i^\pm Z_j^\mp=(-1)^{{\langle}\beta_i,\beta_j{\rangle}}Z_j^\mp Z_i^\pm.\end{aligned}$$ Finally, from a calculation similar to [\[eq:prop-Z-temp0\]](#eq:prop-Z-temp0){reference-type="eqref" reference="eq:prop-Z-temp0"}, we get that $$\begin{aligned} &Z_i^+(z_1)Z_i^-(z_2) =E^+(-\beta_i,z_1)E^+( \beta_i,z_2)\\ \times \iota_{z_1,z_2}&(z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_i^-(z_2) E^-(-\beta_i,z_1)z_1^{-\beta_i}E^-(\beta_i,z_2)z_2^{\beta_i}.\end{aligned}$$ Therefore, $$\begin{aligned} &Z_i^+Z_i^-=Z_i^+(z_1)Z_i^-(z_1)\\ =&\left((z_1-z_2)^{{\langle}\beta_i,\beta_i{\rangle}}e_i^+(z_1)e_i^-(z_2)\right) |_{z_1=z_2} =1,\end{aligned}$$ where the last equation follows from [\[eq:AQ-7\]](#eq:AQ-7){reference-type="eqref" reference="eq:AQ-7"}. ◻ **Corollary 56**. *There exists a unique ${\mathbb{C}}_\epsilon[Q]$-module action on $W$ such that $$\begin{aligned} e_{\pm\beta_i}.w=Z_i^\pm w\quad\mbox{for }i\in J,\,\,w\in W.\end{aligned}$$* **Proposition 57**. *There exists a unique $V_Q[[\hbar]]$-module structure $Y_W$ on $W$ such that $$\begin{aligned} Y_W(\beta_i,z)=\varphi(\beta_i,z),\quad Y_W(e_{\pm\beta_i},z)=e_i^\pm(z)\quad\mbox{for }i\in J.\end{aligned}$$* *Proof.* For $h=\sum_{i\in J}c_i\beta_i\in{\mathfrak h}_Q$, we define $$\begin{aligned} h(z)=\sum_{i\in J}c_i\varphi(\beta_i,z).\end{aligned}$$ In addition, if $\alpha\in Q$, we define $$\begin{aligned} e_\alpha(z)=E^+(\alpha,z)E^-(\alpha,z)e_\alpha z^{\alpha(0)},\end{aligned}$$ where the operator $e_\alpha\in \mathop{\mathrm{End}}(W)$ is defined as in Corollary [Corollary 56](#coro:group-alg-action){reference-type="ref" reference="coro:group-alg-action"}. One can straightforwardly verify that $$\begin{aligned} &e_0(z)=1,\\ &[h(z_1),h'(z_2)]={\langle}h,h'{\rangle}\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\\ &[h(z_1),e_\beta(z_2)]={\langle}h,\beta{\rangle}e_\beta(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\\ &(z_1-z_2)^{-{\langle}\alpha,\beta{\rangle}}e_\alpha(z_1)e_\beta(z_2)=(-z_2+z_1)^{-{\langle}\alpha,\beta{\rangle}}e_\beta(z_2)e_\alpha(z_1),\\ &\frac{d}{dz}e_\alpha(z)=\alpha(z)^+e_\alpha(z)+e_\alpha(z)\alpha(z)^-,\\ &(z_1-z_2)^{-{\langle}\alpha,\beta{\rangle}-1}e_\alpha(z_1)e_\beta(z_2) -(-z_2+z_1)^{-{\langle}\alpha,\beta{\rangle}-1}e_\beta(z_2)e_\alpha(z_1)\\ &\quad=\epsilon(\alpha,\beta)e_{\alpha+\beta}(z_2)z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\nonumber\end{aligned}$$ where $h,h'\in{\mathfrak h}_Q$ and $\alpha,\beta\in Q$. 
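In particular, for $\alpha=\pm\beta_i$ ($i\in J$) the fields just defined are consistent with [\[eq:def-e-i\]](#eq:def-e-i){reference-type="eqref" reference="eq:def-e-i"}: one checks directly, using that $[\beta_i(n),Z_i^\pm]=0$ for $n\ne 0$ by [\[eq:Z-ops-rel-1\]](#eq:Z-ops-rel-1){reference-type="eqref" reference="eq:Z-ops-rel-1"}, that $\beta_i(0)$ commutes with $\beta_i(n)$ for $n>0$, and that $E^\pm(\epsilon\beta_i,z)^{-1}=E^\pm(-\epsilon\beta_i,z)$, that solving for $e_i^\pm(z)$ in the definition of $Z_i^\pm(z)$ gives $$\begin{aligned} e_i^\pm(z)=E^+(\pm\beta_i,z)\,Z_i^\pm\,E^-(\pm\beta_i,z)z^{\pm\beta_i(0)} =E^+(\pm\beta_i,z)E^-(\pm\beta_i,z)Z_i^\pm z^{\pm\beta_i(0)}=e_{\pm\beta_i}(z),\end{aligned}$$ which explains the formula $Y_W(e_{\pm\beta_i},z)=e_i^\pm(z)$ in the statement of Proposition [Proposition 57](#prop:A-eta-mod-to-V-Q-mod){reference-type="ref" reference="prop:A-eta-mod-to-V-Q-mod"}.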
It is proved in [@LL Section 6.5] that $W$ carries a $V_Q$-module structure such that $$\begin{aligned} Y_W(h,z)=h(z),\quad Y_W(e_\alpha,z)=e_\alpha(z)\quad\mbox{for }h\in{\mathfrak h}_Q,\,\alpha\in Q.\end{aligned}$$ The uniqueness follows immediately from the fact that $V_Q$ is generated by ${\mathfrak h}_Q\cup{ \left.\left\{ {e_{\pm\beta_i}} \,\right|\, {i\in J} \right\} }$. ◻ $\quad$ *Proof of Theorem [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"}:* Proposition [Proposition 57](#prop:A-eta-mod-to-V-Q-mod){reference-type="ref" reference="prop:A-eta-mod-to-V-Q-mod"} provides a $V_Q[[\hbar]]$-module structure $Y_W(\cdot,z)$ on $W$. Denote this module by $\mathfrak U_\eta(W)$. Let $(W',\{\beta_{i,\hbar}(z)\}_{i\in J}, \{e_{i,\hbar}^\pm(z)\}_{i\in J})$ be another object of $\mathcal A_\hbar^\eta(Q)$, and let $f:W\to W'$ be a morphism. That is, $f$ is a ${\mathbb{C}}[[\hbar]]$-module map and $$\begin{aligned} \beta_{i,\hbar}(z)\circ f=f\circ \beta_{i,\hbar}(z),\quad e_{i,\hbar}^\pm(z)\circ f=f\circ e_{i,\hbar}^\pm(z)\quad \mbox{for }i\in J.\end{aligned}$$ Define $$\begin{aligned} \varphi_1:{\mathfrak h}&\longrightarrow \mathop{\mathrm{Hom}}_{{\mathbb{C}}[[\hbar]]}(W,W')[[z,z^{-1}]]\\ \beta_i&\mapsto \varphi(\beta_i,z)\circ f-f\circ \varphi(\beta_i,z).\end{aligned}$$ Then we get the following linear system $$\begin{aligned} &\varphi_1(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi_1)\\ =&\left(\varphi(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi)\right)\circ f-f\circ \left(\varphi(\beta_i,z)+\Phi(\eta'(\beta_i,z),\varphi)\right)\\ =&\beta_{i,\hbar}(z)\circ f-f\circ \beta_{i,\hbar}(z)=0\quad\mbox{for }i\in J,\end{aligned}$$ where the last equation follows from [\[eq:def-beta-i\]](#eq:def-beta-i){reference-type="eqref" reference="eq:def-beta-i"}. Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"} provides the following unique solution: $$\begin{aligned} \varphi(\beta_i,z)\circ f-f\circ \varphi(\beta_i,z)=\varphi_1(\beta_i,z)=0\quad\mbox{for }i\in J.\end{aligned}$$ From the definition of $e_i^\pm(z)$ (see [\[eq:def-e-i\]](#eq:def-e-i){reference-type="eqref" reference="eq:def-e-i"}), we also have that $$\begin{aligned} &e_i^\pm(z)\circ f=f\circ e_i^\pm(z)\quad\mbox{for }i\in J.\end{aligned}$$ Since $Y_W$ is uniquely determined by the following conditions (see Proposition [Proposition 57](#prop:A-eta-mod-to-V-Q-mod){reference-type="ref" reference="prop:A-eta-mod-to-V-Q-mod"}) $$\begin{aligned} &Y_W(\beta_i,z)=\varphi(\beta_i,z),\quad Y_W(e_{\pm\beta_i},z)=e_i^\pm(z)\quad \mbox{for }i\in J,\end{aligned}$$ $f:\mathfrak U_\eta(W)\to \mathfrak U_\eta(W')$ is a $V_Q[[\hbar]]$-module homomorphism. We get a functor $\mathfrak U_\eta$ from the category $\mathcal A_\hbar^\eta(Q)$ to the category of $V_Q[[\hbar]]$-modules. Next, we show that $\mathfrak U_\eta$ is the inverse of $\mathfrak I\circ\mathfrak D_\eta$. Let $(W,Y_W)$ be a $V_Q[[\hbar]]$-module. 
Proposition [Proposition 43](#prop:latticeVA-mod-to-qlatticeVA-mod){reference-type="ref" reference="prop:latticeVA-mod-to-qlatticeVA-mod"} provides a $V_Q^\eta$-module structure $Y_W^\eta$ on $W$ uniquely determined by $$\begin{aligned} &Y_W^\eta(\beta_i,z)=Y_W(\beta_i,z)+\Phi(\eta'(\beta_i,z),Y_W),\label{eq:Y-W-eta}\\ &Y_W^\eta(e_{\pm\beta_i},z)=Y_W(e_{\pm\beta_i},z)\exp\left(\pm \Phi(\eta(\beta_i,z),Y_W) \right)\quad\mbox{for }i\in J.\label{eq:Y-W-eta-2}\end{aligned}$$ From Proposition [Proposition 45](#prop:qlatticeVA-mod-to-A-mod){reference-type="ref" reference="prop:qlatticeVA-mod-to-A-mod"}, we have that $$\begin{aligned} (W,\{Y_W^\eta(\beta_i,z)\}_{i\in J},\{Y_W^\eta(e_{\pm\beta_i},z)\}_{i\in J})\in \mathop{\mathrm{obj}}\mathcal A_\hbar^\eta(Q).\end{aligned}$$ Then by using Proposition [Proposition 57](#prop:A-eta-mod-to-V-Q-mod){reference-type="ref" reference="prop:A-eta-mod-to-V-Q-mod"}, we get another $V_Q[[\hbar]]$-module structure $Y_W'(\cdot,z)$ on $W$, such that $$\begin{aligned} &Y_W'(\beta_i,z)+\Phi(\eta'(\beta_i,z),Y_W')=Y_W^\eta(\beta_i,z),\label{eq:Y-W-beta-i}\\ &Y_W'(e_{\pm\beta_i},z)=Y_W^\eta(e_{\pm\beta_i},z)\exp\left(\mp \Phi(\eta(\beta_i,z),Y_W') \right)\quad\mbox{for }i\in J.\label{eq:Y-W-e-i}\end{aligned}$$ In view of Lemma [Lemma 51](#lem:linear-sys){reference-type="ref" reference="lem:linear-sys"}, we get from [\[eq:Y-W-eta\]](#eq:Y-W-eta){reference-type="eqref" reference="eq:Y-W-eta"} and [\[eq:Y-W-beta-i\]](#eq:Y-W-beta-i){reference-type="eqref" reference="eq:Y-W-beta-i"} that $$\begin{aligned} Y_W(\beta_i,z)=Y_W'(\beta_i,z)\quad\mbox{for }i\in J.\end{aligned}$$ Consequently, we get from [\[eq:Y-W-eta-2\]](#eq:Y-W-eta-2){reference-type="eqref" reference="eq:Y-W-eta-2"} and [\[eq:Y-W-e-i\]](#eq:Y-W-e-i){reference-type="eqref" reference="eq:Y-W-e-i"} that $$\begin{aligned} &Y_W'(e_{\pm\beta_i},z)=Y_W^\eta(e_{\pm\beta_i},z)\exp\left(\mp \Phi(\eta(\beta_i,z),Y_W') \right)\\ =&Y_W^\eta(e_{\pm\beta_i},z)\exp\left(\mp \Phi(\eta(\beta_i,z),Y_W) \right)\\ =&Y_W(e_{\pm\beta_i},z)\qquad\mbox{for }i\in J.\end{aligned}$$ Therefore, $Y_W=Y_W'$, since $V_Q[[\hbar]]$ is generated by ${ \left.\left\{ {\beta_i,e_{\pm\beta_i}} \,\right|\, {i\in J} \right\} }$. This proves that $\mathfrak U_\eta\circ \mathfrak I\circ\mathfrak D_\eta$ is the identity functor of the category of $V_Q[[\hbar]]$-modules. Let $(W,Y_W^\eta)$ be a $V_Q^\eta$-module. 
Proposition [Proposition 45](#prop:qlatticeVA-mod-to-A-mod){reference-type="ref" reference="prop:qlatticeVA-mod-to-A-mod"} provides that $$\begin{aligned} (W,\{Y_W^\eta(\beta_i,z)\}_{i\in J},\{Y_W^\eta(e_{\pm\beta_i},z)\}_{i\in J})\in \mathop{\mathrm{obj}}\mathcal A_\hbar^\eta(Q).\end{aligned}$$ Using Proposition [Proposition 57](#prop:A-eta-mod-to-V-Q-mod){reference-type="ref" reference="prop:A-eta-mod-to-V-Q-mod"}, we get a $V_Q[[\hbar]]$-module structure $Y_W(\cdot,z)$ on $W$ uniquely determined by $$\begin{aligned} &Y_W(\beta_i,z)+\Phi(\eta'(\beta_i,z),Y_W)=Y_W^\eta(\beta_i,z),\\ &Y_W(e_{\pm\beta_i},z)=Y_W^\eta(e_{\pm\beta_i},z)\exp\left(\mp \Phi(\eta(\beta_i,z),Y_W) \right).\end{aligned}$$ Then Proposition [Proposition 43](#prop:latticeVA-mod-to-qlatticeVA-mod){reference-type="ref" reference="prop:latticeVA-mod-to-qlatticeVA-mod"} provides another $V_Q^\eta$-module structure $\widehat{Y}_W^\eta(\cdot,z)$ on $W$, such that $$\begin{aligned} &\widehat{Y}_W^\eta(\beta_i,z)=Y_W(\beta_i,z)+\Phi(\eta'(\beta_i,z),Y_W)=Y_W^\eta(\beta_i,z),\\ &\widehat{Y}_W^\eta(e_{\pm\beta_i},z)=Y_W(e_{\pm\beta_i},z)\exp\left( \pm\Phi(\eta(\beta_i,z),Y_W) \right)=Y_W^\eta(e_{\pm\beta_i},z).\end{aligned}$$ Since $V_Q^\eta$ is generated by ${ \left.\left\{ {\beta_i,e_{\pm\beta_i}} \,\right|\, {i\in J} \right\} }$, we get that $\widehat{Y}_W^\eta=Y_W^\eta$. Therefore, $\mathfrak D_\eta\circ\mathfrak U_\eta\circ \mathfrak I$ is the identity functor of the category of $V_Q^\eta$-modules. # Quantum affine vertex algebras {#sec:qaff-va} ## Construction of quantum affine VAs Let us refer to the notations provided in Subsection [2.2](#subsec:AffVA){reference-type="ref" reference="subsec:AffVA"}. In this subsection, we recall the construction of quantum affine VAs introduced in [@K-Quantum-aff-va]. 
Let $\mathfrak T$ be the set of tuples $$\begin{aligned} \tau=\big(\tau_{ij}(z),\tau_{ij}^{1,\pm}(z),\tau_{ij}^{2,\pm}(z),\tau_{ij}^{\epsilon_1,\epsilon_2}(z)\big)_{i,j\in I}^{\epsilon_1,\epsilon_2=\pm},\end{aligned}$$ where $\tau_{ij}(z),\,\tau_{ij}^{1,\pm}(z),\tau_{ij}^{2,\pm}(z),\tau_{ij}^{\epsilon_1,\epsilon_2}(z)\in {\mathbb{C}}((z))[[\hbar]]$, such that $$\begin{aligned} &\lim_{\hbar\to 0}\tau_{ij}(z)=\lim_{\hbar\to 0}\tau_{ji}(-z),\quad \lim_{\hbar\to 0}\tau_{ij}^{1,\pm}(z)=-\lim_{\hbar\to 0}\tau_{ji}^{2,\pm}(-z),\\ &\lim_{\hbar\to 0}\tau_{ij}^{\epsilon_1,\epsilon_2}(z)=\lim_{\hbar\to 0}\tau_{ji}^{\epsilon_2,\epsilon_1}(-z)\in{\mathbb{C}}[[z]]^\times.\end{aligned}$$ Then $\mathfrak T$ is a commutative group, where the multiplication $\tau\ast\sigma$ of $\sigma,\tau\in\mathcal T$ is defined by $$\begin{aligned} &(\tau\ast\sigma)_{ij}(z)=\tau_{ij}(z)+\sigma_{ij}(z),\quad (\tau\ast\sigma)_{ij}^{k,\pm}(z)=\tau_{ij}^{k,\pm}(z)+\sigma_{ij}^{k,\pm}(z),\\ &(\tau\ast\sigma)_{ij}^{\epsilon_1,\epsilon_2}(z)=\tau_{ij}^{\epsilon_1,\epsilon_2}(z)\sigma_{ij}^{\epsilon_1,\epsilon_2}(z), \quad\mbox{for }i,j\in I,\,k=1,2,\,\epsilon_1,\epsilon_2=\pm,\end{aligned}$$ the inverse of $\tau$ is defined by $$\begin{aligned} \tau^{-1}=(-\tau_{ij}(z),-\tau_{ij}^{1,\pm}(z),-\tau_{ij}^{2,\pm}(z),\tau_{ij}^{\epsilon_1,\epsilon_2}(z)^{-1})_{i,j\in I}^{\epsilon_1,\epsilon_2=\pm},\end{aligned}$$ and the identity $\varepsilon$ is defined by $$\begin{aligned} \varepsilon_{ij}(z)=0=\varepsilon_{ij}^{1,\pm}(z)=\varepsilon_{ij}^{2,\pm}(z),\quad \varepsilon_{ij}^{\epsilon_1,\epsilon_2}(z)=1\quad\mbox{for }i,j\in I, \epsilon_1,\epsilon_2=\pm.\end{aligned}$$ Define $\mathcal M_\tau({\mathfrak g})$ to be the category consisting of topologically free ${\mathbb{C}}[[\hbar]]$-modules $W$, equipped with fields $h_{i,\hbar}(z),x_{i,\hbar}^\pm(z)\in{\mathcal{E}}_\hbar(W)$ ($i\in I$) satisfying the following conditions $$\begin{aligned} &[h_{i,\hbar}(z_1),h_{j,\hbar}(z_2)]\label{eq:fqva-rel1}\\ =&\frac{a_{ij} r\ell}{r_j}\frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)+\tau_{ij}(z_1-z_2)-\tau_{ji}(z_2-z_1), \nonumber\\ &[h_{i,\hbar}(z_1),x_{j,\hbar}^\pm(z_2)]\label{eq:fqva-rel2}\\ =&\pm x_{j,\hbar}^\pm(z_2)\left(a_{ij} z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)+ \tau_{ij}^{1,\pm}(z_1-z_2)+\tau_{ji}^{2,\pm}(z_2-z_1)\right),\nonumber\\ &\tau_{ij}^{\pm,\pm}(z_1-z_2)(z_1-z_2)^{n_{ij}}x_{i,\hbar}^\pm(z_1)x_{j,\hbar}^\pm(z_2)\label{eq:fqva-rel3}\\ =&\tau_{ji}^{\pm,\pm}(z_2-z_1)(z_1-z_2)^{n_{ij}}x_{j,\hbar}^\pm(z_2)x_{i,\hbar}^\pm(z_1),\nonumber\\ &\tau_{ij}^{\pm,\mp}(z_1-z_2)(z_1-z_2)^{2\delta_{ij}}x_{i,\hbar}^\pm(z_1)x_{j,\hbar}^\mp(z_2)\label{eq:fqva-rel4}\\ =&\tau_{ji}^{\mp,\pm}(z_2-z_1)(z_1-z_2)^{2\delta_{ij}}x_{j,\hbar}^\mp(z_2)x_{i,\hbar}^\pm(z_1).\nonumber\end{aligned}$$ **Proposition 58**. 
*There is a unique $\hbar$-adic nonlocal VA $(F_\tau({\mathfrak g},\ell),Y_\tau,{{\bf 1}})$, such that* - *$F_\tau({\mathfrak g},\ell)$ is generated by ${ \left.\left\{ {h_i,\,x_i^\pm} \,\right|\, {i\in I} \right\} }$, and $$(F_\tau({\mathfrak g},\ell),\{Y_\tau(h_i,z)\}_{i\in I},\{Y_\tau(x_i^\pm,z)\}_{i\in I})$$ is an object of $\mathcal M_\tau({\mathfrak g})$.* - *For any $\hbar$-adic nonlocal VA $(V,Y,{{\bf 1}})$ containing $\bar h_i, \bar x_i^\pm$ ($i\in I$), such that $(V,\{Y(\bar h_i,z)\}_{i\in I},\{Y(\bar x_i^\pm,z)\}_{i\in I})$ is an object of $\mathcal M_\tau({\mathfrak g})$, then there exists a unique $\hbar$-adic nonlocal VA homomorphism $\varphi:F_\tau({\mathfrak g},\ell)\to V$, such that $\varphi(h_i)=\bar h_i$ and $\varphi(x_i^\pm)=\bar x_i^\pm$ for $i\in I$.* **Remark 59**. **Recall the VA $F_{\hat{\mathfrak g}}^\ell$ in Definition [Definition 3](#de:affVAs){reference-type="ref" reference="de:affVAs"}. View $F_{\hat{\mathfrak g}}^\ell[[\hbar]]$ as the natural $\hbar$-adic VA. Then $F_{\hat{\mathfrak g}}^\ell[[\hbar]]\cong F_{\varepsilon}({\mathfrak g},\ell)$.** Let $H_0$ be the symmetric algebra of the following vector space, and let $H=H_0[[\hbar]]$: $$\begin{aligned} \bigoplus_{i\in I}\left({\mathbb{C}}[\partial] \left({\mathbb{C}}\alpha^\vee_i\oplus{\mathbb{C}}e_i^+\oplus{\mathbb{C}}e_i^-\right)\right).\end{aligned}$$ Then $(H,\Delta,\varepsilon)$ carries a commutative and cocommutative bialgebra structure, such that ($i\in I$, $n\in{\mathbb{N}}$): $$\begin{aligned} &\Delta(\partial^n \alpha^\vee_i)=\partial^n \alpha^\vee_i\otimes 1+1\otimes\partial^n \alpha^\vee_i,\\ &\Delta(\partial^n e_i^\pm)=\sum_{k=0}^n\binom{n}{k} \partial^k e_i^\pm\otimes\partial^{n-k} e_i^\pm,\\ &\varepsilon(\partial^n \alpha^\vee_i)=0, \,\,\varepsilon(\partial^n e_i^\pm)=\delta_{n,0}.\end{aligned}$$ Let $\partial$ be the derivation on $H$ such that $$\begin{aligned} \partial (\partial^n \alpha^\vee_i)=\partial^{n+1} \alpha^\vee_i,\quad \partial(\partial^n e_i^\pm)=\partial^{n+1}e_i^\pm,\quad i\in I,\,0\le n\in{\mathbb{Z}}.\end{aligned}$$ Define a vertex operator map $Y(\cdot,z)$ on $H$ by $Y(u,z)v=(e^{z\partial}u)v$. Then $(H,Y,1,\Delta,\varepsilon)$ forms a cocommutative and commutative $\hbar$-adic vertex bialgebra. **Proposition 60**. *For $\sigma,\tau\in \mathfrak T$, there exists a deforming triple $(H,\rho,\sigma)$, where $(F_\tau({\mathfrak g},\ell),\rho)$ is the $H$-comodule nonlocal VA uniquely determined by $$\begin{aligned} \rho(h_i)=h_i\otimes 1+{{\bf 1}}\otimes\alpha^\vee_i,\quad \rho(x_i^\pm)=x_i^\pm\otimes e_i^\pm,\end{aligned}$$ and $(F_\tau({\mathfrak g},\ell),\sigma)$ is the $H$-module nonlocal VA uniquely determined by $$\begin{aligned} &\sigma(\alpha^\vee_i,z)h_j={{\bf 1}}\otimes\sigma_{ij}(z),\\ &\sigma(\alpha^\vee_i,z)x_j^\pm=\pm x_j^\pm\otimes\sigma_{ij}^{1,\pm}(z),\\ &\sigma(e_i^\pm,z)h_j=h_j\otimes 1\mp{{\bf 1}}\otimes\sigma_{ij}^{2,\pm}(z),\\ &\sigma(e_i^\pm,z)x_j^\epsilon=x_j^\epsilon\otimes\sigma_{ij}^{\pm,\epsilon}(z)^{-1}.\end{aligned}$$ Moreover, for $\tau,\sigma_1,\sigma_2\in\mathfrak T$, we have that $$\begin{aligned} \label{eq:F-deform-modVA-struct-com} [\sigma_1(h_1,z_1),\sigma_2(h_2,z_2)]=0\,\,\mbox{on }F_\tau({\mathfrak g},\ell)\quad\mbox{for }h_1,h_2\in H.\end{aligned}$$* **Proposition 61**. *Let $\tau,\sigma\in\mathfrak T$. 
Then $$\begin{aligned} \mathfrak D_\sigma^\rho(F_\tau({\mathfrak g},\ell))=F_{\tau\ast\sigma}({\mathfrak g},\ell).\end{aligned}$$ Moreover, $F_\tau({\mathfrak g},\ell)/\hbar F_\tau({\mathfrak g},\ell)\cong F_{\mathfrak g}^\ell$.* **Theorem 62**. *For any $\tau\in\mathfrak T$, $F_\tau({\mathfrak g},\ell)$ is an $\hbar$-adic quantum VA with the quantum Yang-Baxter operator $S_\tau(z)$ defined by $$\begin{aligned} S_\tau(z)(v\otimes u)=\sum \tau(u_{(2)},-z)v_{(1)}\otimes\tau^{-1}(v_{(2)},z)u_{(1)}\quad \mbox{for }u,v\in F_\tau({\mathfrak g},\ell).\end{aligned}$$ Moreover, for any $i,j\in I$ and $\epsilon_1,\epsilon_2=\pm$, we have that $$\begin{aligned} &S_\tau(z)(h_j\otimes h_i)=h_j\otimes h_i+{{\bf 1}}\otimes{{\bf 1}}\otimes\left(\tau_{ij}(-z)-\tau_{ji}(z)\right),\label{eq:S-tau-1}\\ &S_\tau(z)(x_j^\pm\otimes h_i)=x_j^\pm\otimes h_i\pm x_j^\pm\otimes{{\bf 1}}\otimes\left(\tau_{ij}^{1,\pm}(-z)+\tau_{ji}^{2,\pm}(z)\right),\label{eq:S-tau-2}\\ &S_\tau(z)(h_j\otimes x_i^\pm)=h_j\otimes x_i^\pm\mp{{\bf 1}}\otimes x_i^\pm \otimes\left(\tau_{ij}^{2,\pm}(-z)+\tau_{ji}^{1,\pm}(z)\right),\label{eq:S-tau-3}\\ &S_\tau(z)(x_j^{\epsilon_1}\otimes x_i^{\epsilon_2})=x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}\otimes\tau_{ji}^{\epsilon_1,\epsilon_2}(z) \tau_{ij}^{\epsilon_2,\epsilon_1}(-z)^{-1}.\label{eq:S-tau-4}\end{aligned}$$* For $0\ne P(z)\in{\mathbb{C}}(z)[[\hbar]]$ and $g(q)=\sum_{k=1}^na_kq^{m_k}\in{\mathbb{Z}}[q]$, we let $$\begin{aligned} P(z)^{g(q)}=\prod_{k=1}^n \left(q^{m_k\frac{\partial}{\partial {z}}}P(z)^{a_k}\right) \in {\mathbb{C}}(z)[[\hbar]].\end{aligned}$$ It is straightforward to check that $$\begin{aligned} P(z)^{g_1(q)}P(z)^{g_2(q)}=P(z)^{g_1(q)+g_2(q)},\quad \left(P(z)^{g_1(q)}\right)^{g_2(q)}=P(z)^{g_1(q)g_2(q)}\end{aligned}$$ for any $g_1(q),g_2(q)\in{\mathbb{Z}}[q]$, and $$\begin{aligned} \left(P(z_1)^{g(q)}\right)|_{z_1=-z}=\left(P(-z)\right)^{g(q^{-1})}.\end{aligned}$$ For an $\ell\in{\mathbb{C}}$, we define the element $\widehat{\ell}\in\mathfrak T$ as follows: $$\begin{aligned} &\widehat{\ell}_{ij}(z)=-[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{\partial^{2}}{\partial z^{2}}\log f(z) -\frac{a_{ij}r\ell}{r_j} z^{-2},\label{eq:tau-1}\\ &\widehat{\ell}_{ij}^{1,\pm}(z)=\widehat{\ell}_{ji}^{2,\pm}(z)=[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z) -a_{ij}z^{-1},\label{eq:tau-2}\\ %&\wh\ell_{ij}^{2,\pm}(z)=[a_{ji}]_{q^{r_j\pd{z}}}q^{r\ell\pd{z}}\pd{z}\log f(z) % -a_{ji}z\inv,\label{eq:tau-2-2}\\ &\widehat{\ell}_{ij}^{\pm,\pm}(z)=\begin{cases} f(z)^{q^{-r_ia_{ii}}-1},&\mbox{if }a_{ij}> 0,\\ z^{-1}f(z)^{q^{-r_ia_{ij}}},&\mbox{if }a_{ij}\le 0, \end{cases} \label{eq:tau-3}\\ &\widehat{\ell}_{ij}^{+,-}(z)=z^{-\delta_{ij}}(z+2r\ell\hbar)^{\delta_{ij}},\label{eq:tau-4}\\ &\widehat{\ell}_{ij}^{-,+}(z)=z^{-\delta_{ij}}(z-2r\ell\hbar)^{\delta_{ij}} f(z)^{q^{r_ia_{ij}}-q^{-r_ia_{ij}}},\label{eq:tau-5}\end{aligned}$$ where $$\begin{aligned} f(z)=e^{z/2}-e^{-z/2}\in{\mathbb{C}}[[z]].\end{aligned}$$ In the rest of this paper, we denote $F_{\widehat{\ell}}({\mathfrak g},\ell)$ by $$\begin{aligned} \label{eq:def-F-qva} F_{\hat{\mathfrak g},\hbar}^{\ell}.\end{aligned}$$ **Lemma 63**. 
*The category $\mathcal M_{\widehat{\ell}}({\mathfrak g})$ consists of topologically free ${\mathbb{C}}[[\hbar]]$-modules $W$ equipped with fields $h_{i,\hbar}(z),x_{i,\hbar}^\pm(z)\in{\mathcal{E}}_\hbar(W)$ ($i\in I$) satisfying the following relations $$\begin{aligned} &[h_{i,\hbar}(z_1),h_{j,\hbar}(z_2)] =[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}}\label{eq:local-h-1}\\ \times& \left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2),\nonumber\\ &[h_{i,\hbar}(z_1),x_{j,\hbar}^\pm(z_2)] =\pm x_{j,\hbar}^\pm(z_2)[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}\label{eq:local-h-2}\\ \times& \left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\log f(z_1-z_2),\nonumber\\ &\iota_{z_1,z_2}f(z_1-z_2)^{-\delta_{ij}+q^{-r_ia_{ij}}} x_{i,\hbar}^\pm(z_1)x_{j,\hbar}^\pm(z_2)\label{eq:local-h-3}\\ =&\iota_{z_2,z_1} f(-z_2+z_1)^{-\delta_{ij}+q^{r_ia_{ij}}} x_{j,\hbar}^\pm(z_2)x_{i,\hbar}^\pm(z_1),\nonumber\\ &\iota_{z_1,z_2}f(z_1-z_2)^{\delta_{ij}+\delta_{ij}q^{2r\ell}} x_{i,\hbar}^+(z_1)x_{j,\hbar}^-(z_2)\label{eq:local-h-4}\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{\delta_{ij}+\delta_{ij}q^{2r\ell}+q^{-r_ia_{ij}}-q^{r_ia_{ij}}} x_{j,\hbar}^-(z_2)x_{i,\hbar}^+(z_1).\nonumber\end{aligned}$$ Moreover, the topologically free ${\mathbb{C}}[[\hbar]]$-module $F_{\hat{\mathfrak g},\hbar}^\ell$ equipped with the fields $Y_{\widehat{\ell}}(h_i,z)$ and $Y_{\widehat{\ell}}(x_i^\pm,z)$ ($i\in I$) is an object of $\mathcal M_{\widehat{\ell}}({\mathfrak g})$.* Define $$\begin{aligned} f_0(z)=\frac{f(z)}{z}=\frac{e^{z/2}-e^{-z/2}}{z}=\sum_{n\ge 0}\frac{z^{2n}}{4^n(2n+1)!}\in 1+z^2{\mathbb{C}}[[z^2]].\end{aligned}$$ **Definition 64**. For $\ell\in{\mathbb{C}}$, we let $R_{1}^\ell$ be the minimal closed ideal of $F_{\hat{\mathfrak g},\hbar}^{\ell}$ such that $[R_{1}^\ell]=R_{1}^\ell$ and contains the following elements $$\begin{aligned} &\left(x_i^+\right)_0x_i^--\frac{1}{q^{r_i}-q^{-r_i}}\left({{\bf 1}}-E_\ell(h_i)\right)\quad\mbox{for } i\in I,\label{eq:x+1x-1}\\ &\left(x_i^+\right)_1x_i^-+\frac{2r\ell\hbar}{q^{r_i}-q^{-r_i}}E_\ell(h_i)\quad\mbox{for } i\in I,\label{eq:x+1x-2}\\ &\left(x_i^\pm\right)_0^{m_{ij}}x_j^\pm\quad\mbox{for } i,j\in I\,\mbox{with}\,a_{ij}<0,\label{eq:serre}\end{aligned}$$ where $$\begin{aligned} \label{eq:def-E-h} &E_\ell(h_i)=\left(\frac{f_0(2r_i\hbar+2r\ell\hbar)}{f_0(2r_i\hbar-2r\ell\hbar)}\right)^{\frac{1}{2}}\\ &\quad\times\exp\left(\left(-q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^\partial} h_i\right)_{-1}\right){{\bf 1}}.\nonumber\end{aligned}$$ Define $$\begin{aligned} V_{\hat{\mathfrak g},\hbar}^{\ell}=F_{\hat{\mathfrak g},\hbar}^{\ell}/R_{1}^\ell.\end{aligned}$$ **Definition 65**. Let $\ell\in {\mathbb{Z}}_+$, and let $R_{2}^\ell$ be the minimal closed ideal of $V_{\hat{\mathfrak g},\hbar}^{\ell}$ such that $[R_{2}^\ell]=R_{2}^\ell$ and contains the following elements $$\begin{aligned} & \left(x_i^\pm\right)_{-1}^{r\ell/r_i}x_i^\pm,\quad i\in I.\label{eq:integrable}\end{aligned}$$ Define $$\begin{aligned} L_{\hat{\mathfrak g},\hbar}^{\ell}=V_{\hat{\mathfrak g},\hbar}^{\ell}/R_{2}^\ell.\end{aligned}$$ **Remark 66**. * *For the notation $L_{\hat{\mathfrak g},\hbar}^{\ell}$, we always assume that $\ell\in{\mathbb{Z}}_+$.* * **Remark 67**. **Let $X=V,L$. 
The definition of $F_{\hat{\mathfrak g},\hbar}^\ell$ (resp. $X_{\hat{\mathfrak g},\hbar}^\ell$) is slightly different from the $\hbar$-adic quantum VA $F_{\widehat{\ell}}(A,\ell)$ (resp. $X_{\hat{\mathfrak g},\hbar}(\ell,0)$) given in [@K-Quantum-aff-va]. Recall from [@K-Quantum-aff-va Section 6] that $F_{\widehat{\ell}}(A,\ell)$ (resp. $X_{\hat{\mathfrak g},\hbar}(\ell,0)$) is generated by $h_{i,\hbar}$, $x_{i,\hbar}^\pm$ for $i\in I$. Then there is a unique $\hbar$-adic quantum VA isomorphism from $F_{\hat{\mathfrak g},\hbar}^\ell$ (resp. $X_{\hat{\mathfrak g},\hbar}^\ell$) to $F_{\widehat{\ell}}(A,\ell)$ (resp. $X_{\hat{\mathfrak g},\hbar}(\ell,0)$) such that $$\begin{aligned} h_i\mapsto [r_i]_{q^\partial}^{-1}h_{i,\hbar},\quad x_i^\pm\mapsto x_{i,\hbar}^\pm\quad \mbox{for }i\in I.\end{aligned}$$** Proposition [Proposition 58](#prop:universal-M-tau){reference-type="ref" reference="prop:universal-M-tau"} and Definitions [Definition 64](#de:V-tau){reference-type="ref" reference="de:V-tau"}, [Definition 65](#de:L-tau){reference-type="ref" reference="de:L-tau"} provide the following result. **Proposition 68**. *Let $(V,Y,{{\bf 1}})$ be an $\hbar$-adic nonlocal VA containing a subset ${ \left.\left\{ {\bar h_i,\bar x_i^\pm} \,\right|\, {i\in I} \right\} }$, such that $$\begin{aligned} (V,\{Y(\bar h_i,z)\}_{i\in I},\{Y(\bar x_i^\pm,z)\}_{i\in I})\in\mathop{\mathrm{obj}}\mathcal M_{\widehat{\ell}}({\mathfrak g}).\end{aligned}$$ Suppose that $$\begin{aligned} &(\bar x_i^+)_0\bar x_i^-=\frac{1}{q^{r_i}-q^{-r_i}}\left({{\bf 1}}-E_\ell(\bar h_i)\right)\quad\mbox{for }i\in I,\\ &(\bar x_i^+)_1\bar x_i^-=-\frac{2r\ell\hbar}{q^{r_i}-q^{-r_i}}E_\ell(\bar h_i)\quad\mbox{for }i\in I,\\ &(\bar x_i^\pm)_0^{m_{ij}}\bar x_j^\pm=0\quad\mbox{for }i,j\in I\,\mbox{with}\,a_{ij}<0.\end{aligned}$$ Then the unique $\hbar$-adic nonlocal VA homomorphism $\varphi:F_{\hat{\mathfrak g},\hbar}^\ell\to V$ provided in Proposition [Proposition 58](#prop:universal-M-tau){reference-type="ref" reference="prop:universal-M-tau"} factors through $V_{\hat{\mathfrak g},\hbar}^\ell$. Suppose further that $\ell\in{\mathbb{Z}}_+$ and $$\begin{aligned} (\bar x_i^\pm)_{-1}^{r\ell/r_i}\bar x_i^\pm=0\quad\mbox{for }i\in I.\end{aligned}$$ Then $\varphi$ also factors through $L_{\hat{\mathfrak g},\hbar}^\ell$.* It is straightforward to see that **Lemma 69**. *Let $W$ be a topologically free ${\mathbb{C}}[[\hbar]]$-module, and let $\zeta_i(z)\in{\mathcal{E}}_\hbar(W)$ for $i\in I$. 
Suppose that $$\begin{aligned} \label{eq:q-local} &\iota_{z_1,z_2}f(z_1-z_2)^{-\delta_{ij}+q^{-r_ia_{ij}}} \zeta_i(z_1)\zeta_j(z_2)\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{-\delta_{ij}+q^{r_ia_{ij}}} \zeta_j(z_2)\zeta_i(z_1).\nonumber\end{aligned}$$ For any $n\in{\mathbb{Z}}_+$ and any $i_1,\dots,i_n\in I$, we define $$\begin{aligned} \label{eq:def-normal-ordering} &\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }\zeta_{i_1}(z_1)\cdots \zeta_{i_n}(z_n)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ =&\left(\prod_{1\le s<t\le n} f(z_s-z_t)^{-\delta_{i_s,i_t}+q^{-r_{i_s}a_{i_s,i_t}}} \right) \zeta_{i_1}(z_1)\cdots \zeta_{i_n}(z_n).\nonumber\end{aligned}$$ Then $$\begin{aligned} &\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }\zeta_{i_{\sigma(1)}}(z_{\sigma(1)})\cdots \zeta_{i_{\sigma(n)}}(z_{\sigma(n)})\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ =&\left(\prod_{s<t:\sigma(s)>\sigma(t)}(-1)^{\delta_{i_s,i_t}-1}\right) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }\zeta_{i_1}(z_1)\cdots \zeta_{i_n}(z_n)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}},\nonumber\end{aligned}$$ and $$\begin{aligned} \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }\zeta_{i_1}(z_1)\cdots \zeta_{i_n}(z_n)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\in{\mathcal{E}}_\hbar^{(n)}(W).\end{aligned}$$* The following result is a generalization of results in [@K-Quantum-aff-va]. **Proposition 70**. *Let $V$ be an $\hbar$-adic nonlocal VA and let ${ \left.\left\{ {\zeta_i} \,\right|\, {i\in I} \right\} }\subset V$ such that ${ \left.\left\{ {Y(\zeta_i,z)} \,\right|\, {i\in I} \right\} }$ satisfies the relations [\[eq:q-local\]](#eq:q-local){reference-type="eqref" reference="eq:q-local"}. Then we have that\ (1) For $i,j\in I$ with $a_{ij}<0$, the operator $Y\left(\left(\zeta_i\right)_0^k\zeta_j,z\right)$ is equal to $$\begin{aligned} %&Y\(\(\zeta_i\)_0^{m_{ij}}\zeta_j,z\)= &\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y(\zeta_i,z+r_i((k-1)a_{ii}+a_{ij})\hbar) Y(\zeta_i,z+r_i((k-2)a_{ii}+ a_{ij})\hbar)\\ &\quad\cdots Y(\zeta_i,z+r_ia_{ij}\hbar)Y(\zeta_j,z)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}. \end{aligned}$$\ (2) For $i,j\in I$ with $a_{ij}<0$, we have that $$\begin{aligned} &\mathop{\mathrm{Sing}}_{z_1,\dots,z_k}Y(\zeta_i,z_1)\cdots Y(\zeta_i,z_k)\zeta_j\\ =&\prod_{a=1}^k\frac{1}{z_a-r_i((k-a)a_{ii}+a_{ij})\hbar}\left(\zeta_i\right)_0^k\zeta_j. \end{aligned}$$\ (3) For $i\in I$, the operator $Y\left(\left(\zeta_i\right)_{-1}^k{{\bf 1}},z\right)$ is equal to $$\begin{aligned} %&Y\(\(\zeta_i\)_{-1}^{r\ell/r_i}\zeta_i,z\)= \prod_{a=1}^{k-1}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y(\zeta_i,z+2(k-1)r_i\hbar) Y(\zeta_i,z+2(k-2)r_i\hbar)\cdots Y(\zeta_i,z)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}. \end{aligned}$$\ (4) For $i\in I$, we have that $$\begin{aligned} &\mathop{\mathrm{Rat}}_{z_1^{-1},\dots,z_k^{-1}}Y(\zeta_i,z_1)\cdots Y(\zeta_i,z_k){{\bf 1}}\\ =&\prod_{a=1}^k\frac{z_a}{z_a-2(k-a)r_i\hbar}\left(\zeta_i\right)_{-1}^k{{\bf 1}}. \end{aligned}$$* **Corollary 71**. *Let $V$ and ${ \left.\left\{ {\zeta_i} \,\right|\, {i\in I} \right\} }\subset V$ be as in Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}. Let $i,j\in I$ with $a_{ij}<0$. 
Then $\left(\zeta_i\right)_0^{m_{ij}}\zeta_j=0$ if and only if $$\begin{aligned} \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y(\zeta_i,z-r_ia_{ij}\hbar) Y(\zeta_i,z-r_i(a_{ij}-2)\hbar)\cdots Y(\zeta_i,z+r_ia_{ij}\hbar)Y(\zeta_j,z)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}=0.\end{aligned}$$ Suppose that $\ell\in{\mathbb{Z}}_+$. Then $\left(\zeta_i\right)_{-1}^{r\ell/r_i}\zeta_i=0$ if and only if $$\begin{aligned} \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y(\zeta_i,z+2r\ell\hbar) Y(\zeta_i,z+2(r\ell-r_i)\hbar)\cdots Y(\zeta_i,z)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}=0.\end{aligned}$$* **Proposition 72**. *Define $$\begin{aligned} A_{i}(z)=&Y_{\widehat{\ell}}(x_i^+,z)^-x_i^- -\frac{1}{q^{r_i}-q^{-r_i}}\left(\frac{{{\bf 1}}}{z}-\frac{E_\ell(h_i)}{z+2r\ell\hbar}\right)&&\mbox{for }i\in I,\\ Q_{ij}^\pm(z_1,&\dots,z_{m_{ij}})=%\Sing_{z_1,z_2,\dots,z_{m_{ij}}} Y_{\widehat{\ell}}(x_i^\pm,z_1)^- \cdots Y_{\widehat{\ell}}(x_i^\pm,z_{m_{ij}})^-x_j^\pm&&\mbox{for }(i,j)\in\mathbb I.\end{aligned}$$ Then $R_{1}^\ell$ is the minimal closed ideal of $F_{\hat{\mathfrak g},\hbar}^{\ell}$ such that $[R_{1}^\ell]=R_{1}^\ell$, and contains all coefficients of $A_{i}(z)$ for $i\in I$ and all coefficients of $Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})$ for $(i,j)\in\mathbb I$. Moreover, suppose $\ell\in{\mathbb{Z}}_+$. For $i\in I$, set $$\begin{aligned} &M_{i}^\pm(z_1,\dots,z_{r\ell/r_i})=\mathop{\mathrm{Sing}}_{z_1,\dots,z_{r\ell/r_i}}z_1^{-1}\cdots z_{r\ell/r_i}^{-1}\\ &\quad\times Y_{\widehat{\ell}}(x_i^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_i^\pm,z_{r\ell/r_i})x_i^\pm.\end{aligned}$$ Then $R_{2}^\ell$ is the minimal closed ideal of $V_{\hat{\mathfrak g},\hbar}^{\ell}$ such that $[R_{2}^\ell]=R_{2}^\ell$ and contains all coefficients of $M_{i}^\pm(z_1,\dots,z_{r\ell/r_i})$ for $i\in I$.* **Theorem 73**. *Let $\ell\in{\mathbb{C}}$. Then $V_{\hat{\mathfrak g},\hbar}^{\ell}$ is an $\hbar$-adic quantum VA. Moreover, if $\ell\in {\mathbb{Z}}_+$, then $L_{\hat{\mathfrak g},\hbar}^{\ell}$ is also an $\hbar$-adic quantum VA. Furthermore, the quantum Yang-Baxter operators $S_{\ell,\ell}(z)$ of both $V_{\hat{\mathfrak g},\hbar}^{\ell}$ and $L_{\hat{\mathfrak g},\hbar}^{\ell}$ satisfy the following relations $$\begin{aligned} &S_{\ell,\ell}(z)(h_j\otimes h_i)=h_j\otimes h_i\\ &\quad+{{\bf 1}}\otimes{{\bf 1}}\otimes \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}[r\ell]_q(q-q^{-1}) },\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell/r_j]_{q^{r_j\pd{z}}}^2\(q^{\pd{z}}-q^{-\pd{z}}\)\pdiff{z}{2}\log f(z),\\ &S_{\ell,\ell}(z)(x_j^\pm\otimes h_i)=x_j^\pm\otimes h_i\\ &\quad\pm x_j^\pm\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell]_q(q-q^{-1}) } ,\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell]_{q^{\pd{z}}}\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\\ &S_{\ell,\ell}(z)(h_j\otimes x_i^\pm)=h_j\otimes x_i^\pm\\ &\quad\mp{{\bf 1}}\otimes x_i^\pm \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}[r\ell]_q(q-q^{-1}) },\\ %[a_{ji}]_{q^{r_j\pd{z}}}[r\ell]_{q^{\pd{z}}}\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\\ &S_{\ell,\ell}(z)(x_j^{\epsilon_1}\otimes x_i^{\epsilon_2})=x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}\otimes f(z)^{q^{-\epsilon_1\epsilon_2r_ia_{ij}} -q^{\epsilon_1\epsilon_2r_ia_{ij}}}.\end{aligned}$$* ## Twisted tensor product of quantum affine VAs {#subsec:construct-n-qvas-qaffva} Let $\ell'$ be another complex number. 
We have $\hbar$-adic quantum VAs $F_{\hat{\mathfrak g},\hbar}^{\ell'}$, $V_{\hat{\mathfrak g},\hbar}^{\ell'}$ and $L_{\hat{\mathfrak g},\hbar}^{\ell'}$. Define the element $\widehat{\ell,\ell'}\in\mathfrak T$ as follows: $$\begin{aligned} &\widehat{\ell,\ell'}_{ij}(z)= \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}[r\ell']_q(q^{-1}-q) } ,\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell/r_j]_{q^{r_j\pd{z}}}[r\ell']_{q^{\pd{z}}} % \(q^{-\pd{z}}-q^{\pd{z}}\)\pdiff{z}{2}\log f(z),\\ &\widehat{\ell,\ell'}_{ij}^{1,\pm}(z)= \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell']_q(q-q^{-1}) },\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell']_{q^{\pd{z}}} % \(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\\ &\widehat{\ell,\ell'}_{ij}^{2,\pm}(z)= \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}[r\ell]_q(q-q^{-1}) },\\ %[a_{ji}]_{q^{r_j\pd{z}}}[r\ell]_{q^{\pd{z}}} % \(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\\ &\widehat{\ell,\ell'}_{ij}^{\epsilon_1,\epsilon_2}(z)= f(z)^{q^{-\epsilon_1\epsilon_2r_ia_{ij}} -q^{\epsilon_1\epsilon_2r_ia_{ij}}}.\end{aligned}$$ It is straightforward to check the following result. **Lemma 74**. *Define $$\begin{aligned} \label{eq:def-S-twisted} S_{\ell,\ell'}(z):F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}&\to F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]]\nonumber\\ v\otimes u&\mapsto \sum \widehat{\ell,\ell'}(u_{(2)},-z)v\otimes u_{(1)}.\end{aligned}$$ Then for $i,j\in I$ we have that $$\begin{aligned} &S_{\ell,\ell'}(z)(h_j\otimes h_i)=h_j\otimes h_i+{{\bf 1}}\otimes{{\bf 1}}\label{eq:S-twisted-1}\\ &\quad\otimes\frac{\partial^{2}}{\partial z^{2}}\log f(z)^{[a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}} [r\ell']_q(q-q^{-1}) } ,\nonumber\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell/r_j]_{q^{r_j\pd{z}}}[r\ell']_{q^{\pd{z}}} %\(q^{\pd{z}}-q^{-\pd{z}}\)\pdiff{z}{2}\log f(z),\nonumber\\ &S_{\ell,\ell'}(z)(x_j^\pm\otimes h_i)=x_j^\pm\otimes h_i\pm x_j^\pm\otimes{{\bf 1}}\label{eq:S-twisted-2}\\ &\quad\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell']_q(q-q^{-1}) },\nonumber\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell']_{q^{\pd{z}}} %\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\nonumber\\ &S_{\ell,\ell'}(z)(h_j\otimes x_i^\pm)=h_j\otimes x_i^\pm\mp{{\bf 1}}\otimes x_i^\pm\label{eq:S-twisted-3}\\ &\quad \otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}[r\ell]_q(q-q^{-1}) },\nonumber\\ %[a_{ji}]_{q^{r_j\pd{z}}}[r\ell]_{q^{\pd{z}}}\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z),\nonumber\\ &S_{\ell,\ell'}(z)(x_j^{\epsilon_1}\otimes x_i^{\epsilon_2})=x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}\otimes f(z)^{q^{-\epsilon_1\epsilon_2r_ia_{ij}} -q^{\epsilon_1\epsilon_2r_ia_{ij}}}. \label{eq:S-twisted-4}\end{aligned}$$* **Remark 75**. * *If $\ell=\ell'$, then the map $S_{\ell,\ell'}(z)$ defined in Lemma [Lemma 74](#lem:S-twisted){reference-type="ref" reference="lem:S-twisted"} is equivalent to the map $S_{\ell,\ell}(z)$ given in Theorem [Theorem 73](#thm:quotient-algs){reference-type="ref" reference="thm:quotient-algs"}.* * **Lemma 76**. 
*$\sigma S_{\ell,\ell'}(-z)^{-1}\sigma=S_{\ell',\ell}(z)$.* *Proof.* For $i,j\in I$, we have that $$\begin{aligned} &\sigma S_{\ell,\ell'}(-z)^{-1}\sigma(h_j\otimes h_i) =\sigma S_{\ell,\ell'}(-z)^{-1}(h_i\otimes h_j)=h_j\otimes h_i\\ &+{{\bf 1}}\otimes{{\bf 1}}\otimes \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}[r\ell']_q(q-q^{-1}) }\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell/r_j]_{q^{r_j\pd{z}}}[r\ell']_{q^{\pd{z}}} %\(q^{\pd{z}}-q^{-\pd{z}}\)\pdiff{z}2\log f(z)\\ =&S_{\ell',\ell}(z)(h_j\otimes h_i),\\ &\sigma S_{\ell,\ell'}(-z)^{-1}\sigma(x_j^\pm\otimes h_i) =\sigma S_{\ell,\ell'}(-z)^{-1}(h_i\otimes x_j^\pm)\\ =&x_j^\pm\otimes h_i\pm x_j^\pm\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell]_q(q-q^{-1}) }\\ %[a_{ij}]_{q^{r_i\pd{z}}}[r\ell]_{q^{\pd{z}}} %\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z)\\ =&S_{\ell',\ell}(z)(x_j^\pm\otimes h_i),\\ &\sigma S_{\ell,\ell'}(-z)^{-1}\sigma(h_j\otimes x_i^\pm) =\sigma S_{\ell,\ell'}(-z)^{-1}(x_i^\pm\otimes h_j)\\ =&h_j\otimes x_i^\pm\mp {{\bf 1}}\otimes x_i^\pm\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}[r\ell']_q(q-q^{-1}) } \\ %[a_{ji}]_{q^{r_j\pd{z}}}[r\ell']_{q^{\pd{z}}} %\(q^{\pd{z}}-q^{-\pd{z}}\)\pd{z}\log f(z)\\ =&S_{\ell',\ell}(z)(h_j\otimes x_i^\pm),\\ &\sigma S_{\ell,\ell'}(-z)^{-1}\sigma(x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}) =\sigma S_{\ell,\ell'}(-z)^{-1}(x_i^{\epsilon_2}\otimes x_j^{\epsilon_1})\\ =&x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}\otimes f(z)^{q^{-\epsilon_1\epsilon_2r_ia_{ij}} -q^{\epsilon_1\epsilon_2r_ia_{ij}}} =S_{\ell',\ell}(z)(x_j^{\epsilon_1}\otimes x_i^{\epsilon_2}).\end{aligned}$$ Notice that both $\sigma S_{\ell,\ell'}(-z)^{-1}\sigma$ and $S_{\ell',\ell}(z)$ are quantum Yang-Baxter operators for the ordered pair $(F_{\hat{\mathfrak g},\hbar}^{\ell'},F_{\hat{\mathfrak g},\hbar}^{\ell})$. Since both $F_{\hat{\mathfrak g},\hbar}^{\ell}$ and $F_{\hat{\mathfrak g},\hbar}^{\ell'}$ are generated by $\{h_i,\,x_i^\pm\,\mid\,i\in I\}$, one has $\sigma S_{\ell,\ell'}(-z)^{-1}\sigma=S_{\ell',\ell}(z)$. ◻ **Lemma 77**. *For $\ell_1,\ell_2,\dots,\ell_n\in{\mathbb{C}}$, we have that $$((F_{\hat{\mathfrak g},\hbar}^{\ell_i})_{i=1}^n,(S_{\ell_i,\ell_j}(z))_{i,j=1}^n)$$ is an $\hbar$-adic $n$-quantum VA.* *Proof.* For $1\le i,j\le n$, we define $\lambda_{ij}\in\mathfrak T$ as follows $$\begin{aligned} \lambda_{ij}=\begin{cases} \widehat{\ell}_i, & \mbox{if } i=j, \\ \widehat{\ell_i,\ell_j}, & \mbox{if } i<j, \\ \varepsilon, & \mbox{if }i>j. \end{cases}\end{aligned}$$ Proposition [Proposition 60](#prop:deform-datum){reference-type="ref" reference="prop:deform-datum"} provides deforming triples $(H,\rho,\lambda_{ij})$ of $F_\varepsilon({\mathfrak g},\ell_i)$ for any $1\le i,j\le n$. From [\[eq:F-deform-modVA-struct-com\]](#eq:F-deform-modVA-struct-com){reference-type="eqref" reference="eq:F-deform-modVA-struct-com"}, we see that $\lambda_{ij}$ and $\lambda_{ik}$ commute for any $1\le i,j,k\le n$. 
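For instance (purely to illustrate this choice), when $n=3$ the deforming data form the upper-triangular array $$\begin{aligned} \left(\lambda_{ij}\right)_{i,j=1}^{3}=\begin{pmatrix} \widehat{\ell}_1 & \widehat{\ell_1,\ell_2} & \widehat{\ell_1,\ell_3}\\ \varepsilon & \widehat{\ell}_2 & \widehat{\ell_2,\ell_3}\\ \varepsilon & \varepsilon & \widehat{\ell}_3 \end{pmatrix},\end{aligned}$$ so that all deforming data strictly below the diagonal is trivial.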
Then Proposition [Proposition 38](#prop:qva-twisted-tensor-deform){reference-type="ref" reference="prop:qva-twisted-tensor-deform"} provides an $\hbar$-adic $n$-quantum VA $$\begin{aligned} &\left(\left(\mathfrak D_{\lambda_{ii}}^\rho(F_\varepsilon({\mathfrak g},\ell_i))\right)_{i\in I},\left(S_{ij}(z)\right)_{i,j\in I}\right).\end{aligned}$$ Comparing [\[eq:S-twisted-def\]](#eq:S-twisted-def){reference-type="eqref" reference="eq:S-twisted-def"} with Lemmas [Lemma 74](#lem:S-twisted){reference-type="ref" reference="lem:S-twisted"}, [Lemma 76](#lem:S-twisted-inverse){reference-type="ref" reference="lem:S-twisted-inverse"} and Remark [Remark 75](#rem:ell=ell-prime-case){reference-type="ref" reference="rem:ell=ell-prime-case"}, we have that $$\begin{aligned} S_{ij}(z)=S_{\ell_i,\ell_j}(z)\quad\mbox{for }1\le i,j\le n.\end{aligned}$$ It follows from Proposition [Proposition 61](#prop:classical-limit){reference-type="ref" reference="prop:classical-limit"} that $$\begin{aligned} \mathfrak D_{\lambda_{ii}}^\rho(F_\varepsilon({\mathfrak g},\ell_i)) =\mathfrak D_{\widehat{\ell}_i}^\rho(F_\varepsilon({\mathfrak g},\ell_i)) =F_{\widehat{\ell}_i}({\mathfrak g},\ell_i)=F_{\widehat{{\mathfrak g}},\hbar}^{\ell_i}.\end{aligned}$$ Therefore, we complete the proof of lemma. ◻ **Lemma 78**. *For any $i,j\in I$, we have that $$\begin{aligned} &2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_j]_{q^{\frac{\partial}{\partial {z}}}}\widehat{\ell,\ell'}_{ij}(z) =\left( q^{-r\ell\frac{\partial}{\partial {z}}}-q^{r\ell\frac{\partial}{\partial {z}}} \right)\widehat{\ell,\ell'}_{ij}^{1,+}(z),\label{eq:special-tau-tech1-3}\\ &2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_i]_{q^{\frac{\partial}{\partial {z}}}}\widehat{\ell,\ell'}_{ij}(z) =\left( q^{-r\ell'\frac{\partial}{\partial {z}}}-q^{r\ell'\frac{\partial}{\partial {z}}} \right)\widehat{\ell,\ell'}_{ij}^{2,+}(z),\\ &2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_i]_{q^{\frac{\partial}{\partial {z}}}}\widehat{\ell,\ell'}_{ij}^{1,\pm}(z)\label{eq:special-tau-tech1-4} =%\(q^{-r\ell'\pd{z}}-q^{r\ell'\pd{z}}\)\(q^{-r_ia_{ij}\pd{z}}-q^{r_ia_{ij}\pd{z}}\) \log f(z)^{ (q^{r_ia_{ij}}-q^{-r_ia_{ij}})(q^{r\ell'}-q^{-r\ell'}) },\\ &2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_j]_{q^{\frac{\partial}{\partial {z}}}}\widehat{\ell,\ell'}_{ij}^{2,\pm}(z) =%&\(q^{-r\ell\pd{z}}-q^{r\ell\pd{z}}\)\(q^{-r_ia_{ij}\pd{z}}-q^{r_ia_{ij}\pd{z}}\) \log f(z)^{ (q^{r_ia_{ij}}-q^{-r_ia_{ij}})(q^{r\ell}-q^{-r\ell}) }.\end{aligned}$$* Similar to the proof of [@K-Quantum-aff-va Lemma 6.19, Lemma 6.20, Lemma 6.21], by using Lemma [Lemma 78](#lem:special-tau-tech1){reference-type="ref" reference="lem:special-tau-tech1"}, we have that **Lemma 79**. 
*For $i,j\in I$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z_1)(A_i(z_2)\otimes h_j)=A_i(z_2)\otimes h_j\\ &\quad+\mathop{\mathrm{Sing}}_{z_2}\left(A_i(z_2)\otimes{{\bf 1}}\otimes\left(e^{z_2\frac{\partial}{\partial {z_1}}}-1\right){\widehat{\ell,\ell'}}_{ij}^{1,+}(-z_1) \right),\\ &S_{\ell,\ell'}(z_1)(h_j\otimes A_i(z_2))=h_j\otimes A_i(z_2)\\ &\quad+\mathop{\mathrm{Sing}}_{z_2}\left({{\bf 1}}\otimes A_i(z_2)\otimes\left(1-e^{-z_2\frac{\partial}{\partial {z_1}}}\right){\widehat{\ell,\ell'}}_{ji}^{2,+}(-z_1)\right),\\ &S_{\ell,\ell'}(z_1)(A_i(z_2)\otimes x_j^\pm) =\mathop{\mathrm{Sing}}_{z_2}\Big(A_i(z_2)\otimes x_j^\pm\\ &\quad\quad\otimes \exp\left(\left(1-e^{z_2\frac{\partial}{\partial {z_1}}}\right) \log{\widehat{\ell,\ell'}}_{ji}^{\pm,+}(-z_1) \right)\Big),\\ &S_{\ell,\ell'}(z_1)(x_j^\pm\otimes A_i(z_2)) =\mathop{\mathrm{Sing}}_{z_2}\Big(x_j^\pm\otimes A_i(z_2)\\ &\quad\quad\otimes\exp\left(\left(1-e^{-z_2\frac{\partial}{\partial {z_1}}}\right) \log{\widehat{\ell,\ell'}}_{ij}^{+,\pm}(-z_1) \right)\Big).\end{aligned}$$ For $i,j,k\in I$ such that $a_{ij}<0$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes h_k\right) =Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes h_k\\ &\quad\pm \mathop{\mathrm{Sing}}_{z_1,\dots,z_{m_{ij}}}\Bigg(Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes{{\bf 1}}\\ &\quad\quad\otimes \left({\widehat{\ell,\ell'}}_{kj}^{1,\pm}(-z)+\sum_{a=1}^{m_{ij}}{\widehat{\ell,\ell'}}_{ki}^{1,\pm}(-z-z_a) \right)\Bigg),\\ &S_{\ell,\ell'}(z)\left(h_k\otimes Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\right) =h_k\otimes Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\\ &\quad\mp \mathop{\mathrm{Sing}}_{z_1,\dots,z_{m_{ij}}}\Bigg({{\bf 1}}\otimes Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\\ &\quad\quad\otimes\left({\widehat{\ell,\ell'}}_{jk}^{2,\pm}(-z) +\sum_{a=1}^{m_{ij}}{\widehat{\ell,\ell'}}_{ik}^{2,\pm}(-z+z_a)\right)\Bigg),\\ &S_{\ell,\ell'}(z)\left(Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes x_k^\epsilon\right) =\mathop{\mathrm{Sing}}_{z_1,\dots,z_{m_{ij}}}\\ &\quad\left(Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes x_k^\epsilon \otimes{\widehat{\ell,\ell'}}_{kj}^{\epsilon,\pm}(-z)^{-1} \prod_{a=1}^{m_{ij}}{\widehat{\ell,\ell'}}_{ki}^{\epsilon,\pm}(-z-z_a)^{-1}\right),\\ &S_{\ell,\ell'}(z)\left(x_k^\epsilon\otimes Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\right) =\mathop{\mathrm{Sing}}_{z_1,\dots,z_{m_{ij}}}\\ &\quad\left(x_k^\epsilon\otimes Q_{ij}^\pm(z_1,\dots,z_{m_{ij}})\otimes {\widehat{\ell,\ell'}}_{jk}^{\epsilon,\pm}(-z)^{-1} \prod_{a=1}^{m_{ij}}{\widehat{\ell,\ell'}}_{ik}^{\epsilon,\pm}(-z+z_a)^{-1}\right).\end{aligned}$$ For any $\ell\in {\mathbb{Z}}_+$ and $i,j\in I$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes h_j\right) =M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes h_j\\ &\quad\pm\mathop{\mathrm{Sing}}_{z_1,\dots,z_{r\ell/r_i}}z_1^{-1}\cdots z_{r\ell/r_i}^{-1}\\ &\quad\left( M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes{{\bf 1}}\otimes \sum_{a=1}^{r\ell/r_i+1}{\widehat{\ell,\ell'}}_{ji}^{1,\pm}(-z-z_a) \right),\\ &S_{\ell,\ell'}(z)\left(h_j\otimes M_i^\pm(z_1,\dots,z_{r\ell/r_i})\right) =h_j\otimes M_i^\pm(z_1,\dots,z_{r\ell/r_i})\\ &\quad\mp\mathop{\mathrm{Sing}}_{z_1,\dots,z_{r\ell/r_i}}z_1^{-1}\cdots z_{r\ell/r_i}^{-1}\\ &\quad {{\bf 1}}\otimes\left(M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes \sum_{a=1}^{r\ell/r_i+1}{\widehat{\ell,\ell'}}_{ij}^{1,\pm}(-z+z_a) \right),\\ &S_{\ell,\ell'}(z)\left(M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes x_j^\epsilon\right) =\mathop{\mathrm{Sing}}_{z_1,\dots,z_{r\ell/r_i}}z_1^{-1}\cdots z_{r\ell/r_i}^{-1}\\ 
&\quad\left( M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes x_j^\epsilon\otimes \prod_{a=1}^{r\ell/r_i+1}{\widehat{\ell,\ell'}}_{ji}^{\epsilon,\pm}(-z-z_a)^{-1} \right),\\ &S_{\ell,\ell'}(z)\left(x_j^\epsilon\otimes M_i^\pm(z_1,\dots,z_{r\ell/r_i})\right) =\mathop{\mathrm{Sing}}_{z_1,\dots,z_{r\ell/r_i}}z_1^{-1}\cdots z_{r\ell/r_i}^{-1}\\ &\quad\left( x_j^\epsilon\otimes M_i^\pm(z_1,\dots,z_{r\ell/r_i})\otimes \prod_{a=1}^{r\ell/r_i+1}{\widehat{\ell,\ell'}}_{ij}^{\epsilon,\pm}(-z+z_a)^{-1} \right),\end{aligned}$$ where $z_{r\ell/r_i+1}=0$.* From Proposition [Proposition 72](#prop:ideal-def-alt){reference-type="ref" reference="prop:ideal-def-alt"} and Lemma [Lemma 79](#lem:S-ell-ell-prime){reference-type="ref" reference="lem:S-ell-ell-prime"}, we have the following result. **Proposition 80**. *$S_{\ell,\ell'}(z)$ induces a ${\mathbb{C}}[[\hbar]]$-module map on $$V_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}V_{\hat{\mathfrak g},\hbar}^{\ell'}\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]].$$ If $\ell,\ell'\in{\mathbb{Z}}_+$, it further induces a ${\mathbb{C}}[[\hbar]]$-module map on $$L_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^{\ell'}\widehat{\otimes}{\mathbb{C}}((z))[[\hbar]].$$* Combining this with Lemma [Lemma 77](#lem:n-qva-F){reference-type="ref" reference="lem:n-qva-F"}, we get the main result of this section. **Theorem 81**. *Let $\ell_1,\ell_2,\dots,\ell_n\in{\mathbb{C}}$. Then $$((X_{\hat{\mathfrak g},\hbar}^{\ell_i})_{i=1}^n,(S_{\ell_i,\ell_j}(z))_{i,j=1}^n)$$ is an $\hbar$-adic $n$-quantum VA for $X=F,V,L$.* **Remark 82**. * *Let $\ell_1,\dots,\ell_n\in{\mathbb{C}}$. In this paper, we simply write the twisted tensor product $$\begin{aligned} X_{\hat{\mathfrak g},\hbar}^{\ell_1}\widehat{\otimes}_{S_{\ell_1,\ell_2}}\cdots \widehat{\otimes}_{S_{\ell_{n-1},\ell_n}}X_{\hat{\mathfrak g},\hbar}^{\ell_n}\end{aligned}$$ as $$\begin{aligned} X_{\hat{\mathfrak g},\hbar}^{\ell_1}\widehat{\otimes}\cdots \widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell_n}\end{aligned}$$ for $X=F,V,L$.* * ## Coproduct of quantum affine VAs {#sec:coprod} The purpose of this section is to prove the following result. **Theorem 83**. *Let $X=F,V,L$. For any $\ell,\ell'\in{\mathbb{C}}$, there is an $\hbar$-adic quantum VA homomorphism $\Delta:X_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}\to X_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell'}$ defined by $$\begin{aligned} &\Delta(h_i)=q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i,\label{eq:def-Delta-h}\\ &\Delta(x_i^+)=x_i^+\otimes{{\bf 1}}+q^{2r\ell\partial}E_\ell(h_i)\otimes q^{2r\ell\partial}x_i^+,\label{eq:def-Delta-x+}\\ &\Delta(x_i^-)=x_i^-\otimes{{\bf 1}}+{{\bf 1}}\otimes x_i^-.\label{eq:def-Delta-x-}\end{aligned}$$ Moreover, let $n\in{\mathbb{Z}}_+$ and $\ell_1,\dots,\ell_n\in{\mathbb{C}}$. 
For $1\le i<n$, we have $$\begin{tikzcd}[column sep=1em, row sep=1.5em] \Big(X_{\hat{\mathfrak g},\hbar}^{\ell_1}\ar[d,equal]&\dots&X_{\hat{\mathfrak g},\hbar}^{\ell_{i-1}}\ar[d,equal] &X_{\hat{\mathfrak g},\hbar}^{\ell_i+\ell_{i+1}}\ar[d,"\Delta"] &X_{\hat{\mathfrak g},\hbar}^{\ell_{i+2}}\ar[d,equal] &\dots&X_{\hat{\mathfrak g},\hbar}^{\ell_n}\Big)\ar[d,equal]\\ \Big(X_{\hat{\mathfrak g},\hbar}^{\ell_1}&\dots&X_{\hat{\mathfrak g},\hbar}^{\ell_{i-1}}& X_{\hat{\mathfrak g},\hbar}^{\ell_i}\widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell_{i+1}}& X_{\hat{\mathfrak g},\hbar}^{\ell_{i+2}}&\dots& X_{\hat{\mathfrak g},\hbar}^{\ell_n}\Big) \end{tikzcd}$$ is an $\hbar$-adic $(n-1)$-quantum VA homomorphism. Furthermore, for $\ell,\ell',\ell''\in{\mathbb{C}}$, we have that $$\begin{aligned} (\Delta\otimes 1)\circ\Delta=(1\otimes\Delta)\circ\Delta:X_{\hat{\mathfrak g},\hbar}^{\ell+\ell'+\ell''}\to X_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell'}\widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell''}\end{aligned}$$* We prove this theorem by proving the following results: **Proposition 84**. *Let $\ell,\ell',\ell''\in{\mathbb{C}}$. Set $$\begin{aligned} S_{\{\ell,\ell'\},\ell''}(z)=S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z),\quad S_{\ell,\{\ell',\ell''\}}(z)=S_{\ell,\ell'}^{12}(z)S_{\ell,\ell''}^{13}(z).\end{aligned}$$ Then $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\circ(\Delta\otimes 1)=(\Delta\otimes 1)\circ S_{\ell+\ell',\ell''}(z) \quad \mbox{on }F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell''},\\ &S_{\ell,\{\ell',\ell''\}}(z)\circ(1\otimes\Delta)=(1\otimes\Delta)\circ S_{\ell,\ell'+\ell''}(z) \quad\mbox{on }F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'+\ell''}.\end{aligned}$$* **Proposition 85**. *Denote by $Y_\Delta$ the vertex operator of $X_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}X_{\hat{\mathfrak g},\hbar}^{\ell'}$ for $X=F,V,L$. 
Then we have that ($i,j\in I$) $$\begin{aligned} &[Y_\Delta(\Delta(h_i),z_1),Y_\Delta(\Delta(h_j),z_2)] =[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z_2}}}}[r(\ell+\ell')]_{q^{\frac{\partial}{\partial {z_2}}}}\label{eq:prop-Y-Delta-1}\\ &\quad\times\left(\iota_{z_1,z_2}q^{-r(\ell+\ell')\frac{\partial}{\partial {z_2}}} -\iota_{z_2,z_1}q^{r(\ell+\ell')\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2),\nonumber\\ &[Y_\Delta(\Delta(h_i),z_1),Y_\Delta(\Delta(x_j^\pm),z_2)] =\pm Y_\Delta(\Delta(x_j^\pm),z_2)[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z_2}}}}\label{eq:prop-Y-Delta-2}\\ &\quad\left(\iota_{z_1,z_2}q^{-r(\ell+\ell')\frac{\partial}{\partial {z_2}}} -\iota_{z_2,z_1}q^{r(\ell+\ell')\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\log f(z_1-z_2),\nonumber\\ &f(z_1-z_2)^{\delta_{ij}+\delta_{ij}q^{2r\ell}} Y_\Delta(\Delta(x_i^+),z_1)Y_\Delta(\Delta(x_j^-),z_2)\label{eq:prop-Y-Delta-3-pre}\\ &\quad=\iota_{z_2,z_1}f(z_1-z_2)^{\delta_{ij}+\delta_{ij}q^{2r\ell}+q^{-r_ia_{ij}}-q^{r_ia_{ij}}}\nonumber\\ &\quad\times Y_\Delta(\Delta(x_j^-),z_2) Y_\Delta(\Delta(x_i^+),z_1)\nonumber\end{aligned}$$ on $F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}$, and $$\begin{aligned} &Y_\Delta(\Delta(x_i^+),z_1)Y_\Delta(\Delta(x_j^-),z_2)\label{eq:prop-Y-Delta-3}\\ &\quad-\iota_{z_2,z_1}f(z_1-z_2)^{q^{-r_ia_{ij}}-q^{r_ia_{ij}}} Y_\Delta(\Delta(x_j^-),z_2) Y_\Delta(\Delta(x_i^+),z_1)\nonumber\\ =&\frac{\delta_{ij}}{q^{r_i}-q^{-r_i}}\bigg(z_1^{-1}\delta\left(\frac{z_2}{z_1}\right)\nonumber\\ &\quad-Y_\Delta\left(E_{\ell+\ell'}(\Delta(h_i)),z_2\right) z_1^{-1}\delta\left(\frac{z_2-2r(\ell+\ell')\hbar}{z_1}\right)\bigg)\nonumber\end{aligned}$$ on $V_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}V_{\hat{\mathfrak g},\hbar}^{\ell'}$* **Proposition 86**. *For $i,j\in I$, we have that $$\begin{aligned} &\iota_{z_1,z_2}f(z_1-z_2)^{-\delta_{ij}+q^{-r_ia_{ij}}} Y_\Delta(\Delta(x_i^\pm),z_1) Y_\Delta(\Delta(x_j^\pm),z_2)\label{eq:prop-Y-Delta-locality}\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{-\delta_{ij}+q^{r_ia_{ij}}} Y_\Delta(\Delta(x_j^\pm),z_2)Y_\Delta(\Delta(x_i^\pm),z_1)\nonumber\end{aligned}$$ on $F_{\widehat{{\mathfrak g}},\hbar}^\ell\widehat{\otimes}F_{\widehat{{\mathfrak g}},\hbar}^{\ell'}$. In addition, if $a_{ij}<0$, then we have that $$\begin{aligned} \label{eq:prop-Y-Delta-serre} \left(\Delta(x_i^\pm)\right)_0^{1-a_{ij}}\Delta(x_j^\pm)=0\quad\mbox{on }V_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}V_{\hat{\mathfrak g},\hbar}^{\ell'}.\end{aligned}$$ Moreover, if $\ell\in{\mathbb{Z}}_+$, then we have that $$\begin{aligned} \label{eq:prop-Y-Delta-int} \left(\Delta(x_i^\pm)\right)_0^{r(\ell+\ell')/r_i}\Delta(x_i^\pm)=0\,\,\mbox{on }L_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^{\ell'}\quad \mbox{for }i\in I.\end{aligned}$$* **Proposition 87**. *For $\ell,\ell',\ell''\in{\mathbb{C}}$, we have that $$\begin{aligned} (\Delta\otimes 1)\circ \Delta(u)=(1\otimes\Delta)\circ\Delta(u)\in F_{\hat{\mathfrak g},\hbar}^\ell \widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'} \widehat{\otimes} F_{\hat{\mathfrak g},\hbar}^{\ell''}\end{aligned}$$ for $u\in F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'+\ell''}$.* Assume now that the above four Propositions hold. 
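Before turning to the proof, we record a sanity check on [\[eq:def-Delta-h\]](#eq:def-Delta-h){reference-type="eqref" reference="eq:def-Delta-h"}--[\[eq:def-Delta-x-\]](#eq:def-Delta-x-){reference-type="eqref" reference="eq:def-Delta-x-"}, under the convention (assumed here) that $q=e^{\hbar}$, so that $q^{a\partial}\equiv 1$ modulo $\hbar$: since the exponent in [\[eq:def-E-h\]](#eq:def-E-h){reference-type="eqref" reference="eq:def-E-h"} carries an overall factor $2\hbar$ and its scalar prefactor is $1+O(\hbar)$, one has $E_\ell(h_i)\equiv{{\bf 1}}\pmod{\hbar}$, and hence $$\begin{aligned} \Delta(h_i)\equiv h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes h_i,\qquad \Delta(x_i^\pm)\equiv x_i^\pm\otimes{{\bf 1}}+{{\bf 1}}\otimes x_i^\pm\pmod{\hbar},\end{aligned}$$ which are the generator formulas one expects in the classical ($\hbar\to 0$) limit.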
$\quad$ *Proof of Theorem [Theorem 83](#thm:coproduct){reference-type="ref" reference="thm:coproduct"}:* From [\[eq:prop-Y-Delta-1\]](#eq:prop-Y-Delta-1){reference-type="eqref" reference="eq:prop-Y-Delta-1"}, [\[eq:prop-Y-Delta-2\]](#eq:prop-Y-Delta-2){reference-type="eqref" reference="eq:prop-Y-Delta-2"}, [\[eq:prop-Y-Delta-3-pre\]](#eq:prop-Y-Delta-3-pre){reference-type="eqref" reference="eq:prop-Y-Delta-3-pre"} and [\[eq:prop-Y-Delta-locality\]](#eq:prop-Y-Delta-locality){reference-type="eqref" reference="eq:prop-Y-Delta-locality"}, we get that $$\begin{aligned} \left(F_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}, \left\{Y_\Delta\left(\Delta(h_i),z\right)\right\}_{i\in I}, \left\{Y_\Delta\left(\Delta\left(x_i^\pm\right),z\right)\right\}_{i\in I}\right) \in\mathop{\mathrm{obj}}\mathcal M_{\widehat{\ell+\ell'}}({\mathfrak g}).\end{aligned}$$ So Proposition [Proposition 58](#prop:universal-M-tau){reference-type="ref" reference="prop:universal-M-tau"} provides an $\hbar$-adic nonlocal VA homomorphism $\Delta$ from $F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}$ to $F_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}$. From Propositions [Proposition 84](#prop:S-Delta){reference-type="ref" reference="prop:S-Delta"} and [Proposition 87](#prop:Delta-coasso){reference-type="ref" reference="prop:Delta-coasso"}, we complete the proof for $X=F$. For the case $X=V$ or $L$, the theorem follows immediately from [\[eq:prop-Y-Delta-3\]](#eq:prop-Y-Delta-3){reference-type="eqref" reference="eq:prop-Y-Delta-3"} and Proposition [Proposition 86](#prop:Y-Delta-serre){reference-type="ref" reference="prop:Y-Delta-serre"}. ## Some formulas {#subsec:some-formulas} In this subsection, we collect some formulas that will be used later on. Denote by $Y(\cdot,z)$ the vertex operator map of the VA $F_{\hat{\mathfrak g}}^\ell$ (see Definition [Definition 3](#de:affVAs){reference-type="ref" reference="de:affVAs"}). From Remark [Remark 59](#rem:F-varepsilon=F){reference-type="ref" reference="rem:F-varepsilon=F"}, we have that $$\begin{aligned} \left(F_{\hat{\mathfrak g}}^\ell[[\hbar]],\{Y(h_i,z)\}_{i\in I},\{Y(x_i^\pm,z)\}_{i\in I}\right)\in\mathop{\mathrm{obj}}\mathcal M_\varepsilon({\mathfrak g}).\end{aligned}$$ In view of Proposition [Proposition 61](#prop:classical-limit){reference-type="ref" reference="prop:classical-limit"}, we have that $$\begin{aligned} F_{\widehat{{\mathfrak g}},\hbar}^\ell=\mathfrak D_{\widehat{\ell}}^\rho(F_\varepsilon({\mathfrak g},\ell))=\mathfrak D_{\widehat{\ell}}^\rho(F_{\hat{\mathfrak g}}^\ell).\end{aligned}$$ It follows that ($i\in I$): $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)=Y(h_i,z)+\widehat{\ell}(\alpha_i^\vee,z),\quad Y_{\widehat{\ell}}(x_i^\pm,z)=Y(x_i^\pm,z)\widehat{\ell}(e_i^\pm,z).\end{aligned}$$ **Lemma 88**. 
*For $i\in I$, we define $$\begin{aligned} &h_i^-(z)=Y(h_i,z)^- +\widehat{\ell}(\alpha^\vee_i,z),\quad h_i^+(z)=Y(h_i,z)^+.\end{aligned}$$ Then we have that $$\begin{aligned} &[h_i^-(z_1),h_j^+(z_2)]\label{eq:com-formulas-1} %[a_{ij}]_{q^{r_i\pd{z_2}}}[r\ell/r_j]_{q^{r_j\pd{z_2}}}q^{-r\ell\pd{z_2}} =\frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}q^{r\ell} },\\ &[h_i^-(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]=\pm Y_{\widehat{\ell}}(x_j^\pm,z_2)%[a_{ij}]_{q^{r_i\pd{z_2}}}q^{-r\ell\pd{z_2}} \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ij}]_{q^{r_i}}q^{r\ell}},\label{eq:com-formulas-2}\\ &[h_i^+(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]=\mp Y_{\widehat{\ell}}(x_j^\pm,z_2)%[a_{ij}]_{q^{r_i\pd{z_2}}}q^{r\ell\pd{z_2}} \frac{\partial}{\partial {z_1}}\log f(-z_2+z_1)^{[a_{ij}]_{q^{r_i}}q^{-r\ell}}.\label{eq:com-formulas-3}\end{aligned}$$* *Proof.* From [\[eq:fqva-rel1\]](#eq:fqva-rel1){reference-type="eqref" reference="eq:fqva-rel1"} and [\[eq:fqva-rel2\]](#eq:fqva-rel2){reference-type="eqref" reference="eq:fqva-rel2"}, we have that $$\begin{aligned} &[Y(h_i,z_1),Y(h_j,z_2)]=\frac{a_{ij}r\ell}{r_j} \frac{\partial}{\partial {z_2}}z_1^{-1}\delta\left(\frac{z_2}{z_1}\right),\\ &[Y(h_i,z_1),Y(x_j^\pm,z_2)]=\pm Y(x_j^\pm,z_2)a_{ij} z^{-1}\delta\left(\frac{z_2}{z_1}\right).\end{aligned}$$ Then we have that $$\begin{aligned} &[Y(h_i,z_1)^-,Y(h_j,z_2)^+]=\frac{a_{ij}r\ell}{r_j} (z_1-z_2)^{-2},\label{eq:com-formulas-temp1}\\ &[Y(h_i,z_1)^-,Y(x_j^\pm,z_2)]=\pm Y(x_j^\pm,z_2)a_{ij}(z_1-z_2)^{-1},\label{eq:com-formulas-temp2}\\ &[Y(h_i,z_1)^+,Y(x_j^\pm,z_2)]=\mp Y(x_j^\pm,z_2)a_{ij}(z_2-z_1)^{-1}.\label{eq:com-formulas-temp3}\end{aligned}$$ From the definition of $\widehat{\ell}$ (see Proposition [Proposition 60](#prop:deform-datum){reference-type="ref" reference="prop:deform-datum"}), we have that $$\begin{aligned} &[\widehat{\ell}(\alpha^\vee_i,z_1),Y(h_j,z_2)]=Y(\widehat{\ell}_i(z_1-z_2)h_j,z_2)=\widehat{\ell}_{ij}(z_1-z_2),\\ &[\widehat{\ell}(e_i^\pm,z_1),Y(h_j,z_2)]=Y(\widehat{\ell}_i^\pm(z_1-z_2)h_j,z_2)\widehat{\ell}_i^\pm(z_1)-Y(h_j,z_2)\widehat{\ell}_i^\pm(z_1) \\ &\quad\quad=\mp \widehat{\ell}(e_i^\pm,z_1)\widehat{\ell}_{ij}^{2,+}(z_1-z_2). 
% \\=&\mp\wh\ell(e_i^\pm,z_1)[r_ia_{ij}]_{q^{\pd{z_2}}}q^{-r\ell\pd{z_2}}\frac{1+e^{-z_1+z_2}}{2-2e^{-z_1+z_2}} % \pm \wh\ell(e_i^\pm,z_1)r_ia_{ij}(z_1-z_2)\inv.\end{aligned}$$ It follows that $$\begin{aligned} &[\widehat{\ell}(\alpha^\vee_i,z_1),Y(h_j,z_2)^-]=0,\\ &[\widehat{\ell}(\alpha^\vee_i,z_1),Y(h_j,z_2)^+]=\widehat{\ell}_{ij}(z_1-z_2),\\ &[\widehat{\ell}(e_i^\pm,z_1),Y(h_j,z_2)^-]=0,\\ &[\widehat{\ell}(e_i^\pm,z_1),Y(h_j,z_2)^+]=\mp\widehat{\ell}(e_i^\pm,z_1)\widehat{\ell}_{ij}^{2,+}(z_1-z_2).\end{aligned}$$ Combining these with equations [\[eq:com-formulas-temp1\]](#eq:com-formulas-temp1){reference-type="eqref" reference="eq:com-formulas-temp1"}, [\[eq:com-formulas-temp2\]](#eq:com-formulas-temp2){reference-type="eqref" reference="eq:com-formulas-temp2"} and [\[eq:com-formulas-temp3\]](#eq:com-formulas-temp3){reference-type="eqref" reference="eq:com-formulas-temp3"}, we get that $$\begin{aligned} &[h_i^-(z_1),h_j^+(z_2)]\\ =&[Y(h_i,z_1)^-+\widehat{\ell}(\alpha^\vee_i,z_1),Y(h_j,z_2)^+] \\ =&\widehat{\ell}_{ij}(z_1-z_2)+\frac{a_{ij}r\ell}{r_j}(z_1-z_2)^{-2}\\ =& %[a_{ij}]_{q^{r_i\pd{z_2}}}[r\ell/r_j]_{q^{r_j\pd{z_2}}}q^{-r\ell\pd{z_2}} \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2)^{[a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}q^{r\ell}},\\ &[h_i^-(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]\\ =&[Y(h_i,z_1)^-+\widehat{\ell}(\alpha^\vee_i,z_1),Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)]\\ =&[Y(h_i,z_1)^-,Y(x_j^\pm,z_2)]\widehat{\ell}(e_j^\pm,z_2)\\ &+[\widehat{\ell}(\alpha_i,z_1),Y(x_j^\pm,z_2)]\widehat{\ell}(e_j^\pm,z_2)\\ &-Y(x_j^\pm,z_2)[\widehat{\ell}(e_j^\pm,z_2),Y(h_i,z_1)^-]\\ =&\pm Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)r_ia_{ij}(z_1-z_2)^{-1}\\ & +Y(\widehat{\ell}_i(z_1-z_2)x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)\\ =&\pm Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)\left(r_ia_{ij}(z_1-z_2)^{-1} +\widehat{\ell}_{ij}^{1,+}(z_1-z_2)\right)\\ =&\pm Y_{\widehat{\ell}}(x_j^\pm,z_2)\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ij}]_{q^{r_i}}q^{r\ell}},\\ &[h_i^+(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]\\ =&[Y(h_i,z_1)^+,Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)]\\ =&[Y(h_i,z_1)^+,Y(x_j^\pm,z_2)]\widehat{\ell}(e_j^\pm,z_2)\\ &-Y(x_j^\pm,z_2)[\widehat{\ell}(e_j^\pm,z_2),Y(h_i,z_2)^+]\\ =&\mp Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)a_{ij}(z_1-z_2)^{-1}\\ &\pm Y(x_j^\pm,z_2)\widehat{\ell}(e_j^\pm,z_2)\widehat{\ell}_{ji}^{2,+}(z_2-z_1)\\ =&\mp Y_{\widehat{\ell}}(x_j^\pm,z_2)\frac{\partial}{\partial {z_1}}\log f(-z_2+z_1)^{[a_{ij}]_{q^{r_i}}q^{-r\ell}}.\end{aligned}$$ Therefore, we complete the proof. ◻ **Lemma 89**. *For $i\in I$, we define $$\begin{aligned} %&\wt h_i^\pm(w_1,w_2,z)=G_{w_1,w_2}\(\pd{z}\)[r\ell]_{q^{\pd{z}}}\inv h_i^\pm(z),\\ &\widetilde{h}_i^\pm(z)=-q^{-r\ell\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_i]_{q^{\frac{\partial}{\partial {z}}}} h_i^\pm(z). 
%&\wt h_i^{\pm,n}(z)=-q^{-nr\ell\pd{z}}[n]_{q^{nr\ell\pd{z}}}F\(\pd{z}\)h_i^\pm(z).\end{aligned}$$ Then we have that $$\begin{aligned} &[\widetilde{h}_i^-(z_1),h_j^+(z_2)]\label{eq:com-formulas-4} =\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_q \left(q-q^{-1}\right)},\\ &[h_i^-(z_1),\widetilde{h}_j^+(z_2)]\label{eq:com-formulas-5} =\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ij}]_{q^{r_i}} \left(q^{r\ell}-q^{-r\ell}\right)q^{ 2r\ell}},\\ &[\widetilde{h}_i^-(z_1),\widetilde{h}_j^+(z_2)]\label{eq:com-formulas-6} =\log f(z_1-z_2)^{( q^{2r\ell}-1 )( q^{-r_ia_{ij}}-q^{r_ia_{ij}} )},\\ &[\widetilde{h}_i^-(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]\label{eq:com-formulas-7} =\pm Y_{\widehat{\ell}}(x_j^\pm,z_2)%\( q^{r_ia_{ij}\pd{z_2}}-q^{-r_ia_{ij}\pd{z_2}} \) \log f(z_1-z_2)^{q^{-r_ia_{ij}}-q^{r_ia_{ij}}},\\ &[\widetilde{h}_i^+(z_1),Y_{\widehat{\ell}}(x_j^\pm,z_2)]\label{eq:com-formulas-8} =\pm Y_{\widehat{\ell}}(x_j^\pm,z_2)\log f(z_2-z_1)^{\left( q^{-r_ia_{ij}}-q^{r_ia_{ij}} \right)q^{2r\ell}}.\end{aligned}$$* *Proof.* From [\[eq:com-formulas-1\]](#eq:com-formulas-1){reference-type="eqref" reference="eq:com-formulas-1"}, we have that $$\begin{aligned} &[\widetilde{h}_i^-(z_1),h_j^+(z_2)]\\ =&-q^{-r\ell\frac{\partial}{\partial {z_1}}}2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z_1}}\right) \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_{q}q^{r\ell}}\\ =&-2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z_1}}\right) \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_{q}}\\ =&-\left(q^{\frac{\partial}{\partial {z_1}}}-q^{-\frac{\partial}{\partial {z_1}}}\right) \frac{\partial}{\partial {z_2}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_{q}}\\ =&\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_{q}(q-q^{-1})}.\end{aligned}$$ This proves [\[eq:com-formulas-4\]](#eq:com-formulas-4){reference-type="eqref" reference="eq:com-formulas-4"}. The proofs of the rest equations are similar. ◻ As an immediate consequence of Lemma [Lemma 89](#lem:com-formulas2){reference-type="ref" reference="lem:com-formulas2"}, we have that **Corollary 90**. 
*For $i,j\in I$, we have that $$\begin{aligned} &\left[\exp\left(\widetilde{h}_i^-(z_1)\right),h_j^+(z_2)\right] =\exp\left(\widetilde{h}_i^-(z_1)\right) \label{eq:com-formulas-9}\\ &\quad\times \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ji}]_{q^{r_j}}[r\ell]_q(q-q^{-1})},\nonumber\\ &\left[h_i^-(z_1),\exp\left(\widetilde{h}_j^+(z_2)\right)\right] =\exp\left(\widetilde{h}_j^+(z_2)\right) \label{eq:com-formulas-10}\\ &\quad\times \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[a_{ij}]_{q^{r_i}}[r\ell]_qq^{2r\ell} },\nonumber\\ &\exp\left(\widetilde{h}_i^-(z_1)\right)\exp\left(\widetilde{h}_j^+(z_2)\right) =\exp\left(\widetilde{h}_j^+(z_2)\right)\exp\left(\widetilde{h}_i^-(z_1)\right) \label{eq:com-formulas-11}\\ &\quad\times f(z_1-z_2)^{(q^{r_ia_{ij}}-q^{-r_ia_{ij}})(1-q^{2r\ell})}, \nonumber\\ &\exp\left(\widetilde{h}_i^-(z_1)\right)Y_{\widehat{\ell}}(x_j^\pm,z_2)\label{eq:com-formulas-12}\\ &\quad=Y_{\widehat{\ell}}(x_j^\pm,z_2)\exp\left(\widetilde{h}_i^-(z_1)\right) f(z_1-z_2)^{q^{\mp r_ia_{ij}}-q^{\pm r_ia_{ij}}}, \nonumber\\ &\exp\left(\widetilde{h}_i^+(z_1)\right)Y_{\widehat{\ell}}(x_j^\pm,z_2)\label{eq:com-formulas-13}\\ &\quad=Y_{\widehat{\ell}}(x_j^\pm,z_2)\exp\left(\widetilde{h}_i^+(z_1)\right) f(z_1-z_2)^{q^{2r\ell}(q^{\mp r_ia_{ij}}-q^{\pm r_ia_{ij}})}.\nonumber %&\exp\(\wt h_i^-(z_1)\)Y_{\wh\ell}\(\(x_j^\pm\)_{-1}^k\vac,z_2\) % =Y_{\wh\ell}\(\(x_j^\pm\)_{-1}^k\vac,z_2\)\label{eq:com-formulas-14}\\ % &\times\exp\(\wt h_i^-(z_1)\) % \prod_{a=1}^kg_{ij,\hbar}(z_1-z_2-2(k-a)r_j\hbar)^{\pm 1},\nonumber\\\end{aligned}$$* **Lemma 91**. *Let $W$ be a topologically free ${\mathbb{C}}[[\hbar]]$-module and let $$\begin{aligned} &\alpha(z)\in \mathop{\mathrm{Hom}}(W,W\widehat{\otimes}{\mathbb{C}}[z,z^{-1}][[\hbar]]),\quad \beta(z)\in\mathop{\mathrm{Hom}}(W,W[[z]]),\\ &\quad\xi(z)\in {\mathcal{E}}_\hbar(W)\end{aligned}$$ satisfying the condition that $$\begin{aligned} &[\alpha(z_1),\alpha(z_2)]=0=[\beta(z_1),\beta(z_2)],\\ &[\alpha(z_1),\beta(z_2)]=\iota_{z_1,z_2}\gamma(z_2-z_1),\\ &[\alpha(z_1),\xi(z_2)]=\xi(z_2) \iota_{z_1,z_2}\gamma_1(z_1-z_2),\end{aligned}$$ for some $\gamma(z)\in{\mathbb{C}}(z)[[\hbar]]$ and $\gamma_1(z)\in {\mathbb{C}}((z))[[\hbar]]$. Assume $\alpha(z),\beta(z)\in\hbar(\mathop{\mathrm{End}}W)[[z,z^{-1}]]$. Then $$\begin{aligned} \exp&\left((\alpha(z)+\beta(z))_{-1}\right)\xi(z)=\exp\left(\beta(z)\right)\xi(z)\exp\left(\alpha(z)\right)\\ &\times \exp\left({\frac{1}{2}}\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)+z^{-1}\gamma_1(z)\right).\end{aligned}$$* *Proof.* Set $U=\{\alpha(z),\beta(z),\xi(z)\}$. Then $U$ generates an $\hbar$-adic nonlocal VA ${\langle}U{\rangle}$. 
From the Baker-Campbell-Hausdorff formula, we obtain that $$\begin{aligned} &\exp\left((\alpha(z)+\beta(z))_{-1}\right)\\ =&\exp\left(\beta(z)_{-1}\right)\exp\left(\alpha(z)_{-1}\right)\exp\left({\frac{1}{2}}\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)\right).\end{aligned}$$ Note that $$\begin{aligned} \alpha(z_1)\xi(z_2)\alpha(z_2)^m=\xi(z_2)\alpha(z_1)\alpha(z_2)^m+\xi(z_2)\alpha(z_2)^m\iota_{z_1,z_2}\gamma_1(z_1-z_2).\end{aligned}$$ Then $$\begin{aligned} \alpha(z)_{-1}\xi(z)\alpha(z)^m=\xi(z)\alpha(z)^{m+1}+\xi(z)\alpha(z)^m\mathop{\mathrm{Res}}_zz^{-1}\gamma_1(z).\end{aligned}$$ So $$\begin{aligned} \exp\left(\alpha(z)_{-1}\right)\xi(z)=\xi(z)\exp\left(\alpha(z)\right)\exp\left(\mathop{\mathrm{Res}}_zz^{-1}\gamma_1(z)\right).\end{aligned}$$ Therefore, $$\begin{aligned} \exp&\left((\alpha(z)+\beta(z))_{-1}\right)\xi(z)=\exp\left(\beta(z)\right)\xi(z)\exp\left(\alpha(z)\right)\\ &\times\exp\left({\frac{1}{2}}\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)+z^{-1}\gamma_1(z)\right).\end{aligned}$$ We complete the proof. ◻ **Proposition 92**. *For each $i\in I$, we have that $$\begin{aligned} Y_{\widehat{\ell}}(E_\ell(h_i),z)=\exp(\widetilde{h}_i^+(z))\exp(\widetilde{h}_i^-(z)).\end{aligned}$$* *Proof.* From [\[eq:com-formulas-6\]](#eq:com-formulas-6){reference-type="eqref" reference="eq:com-formulas-6"}, we have that $$\begin{aligned} [\widetilde{h}_i^-(z_1),\widetilde{h}_i^+(z_2)]=\gamma(z_2-z_1),\end{aligned}$$ where $$\begin{aligned} &\gamma(z)=\log f(-z)^{ (q^{-2r_i}-q^{2r_i})(q^{2r\ell}-1) }.\end{aligned}$$ Notice that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)\\ =&\mathop{\mathrm{Res}}_zz^{-1}\log f(z)^{ (q^{2r_i}-q^{-2r_i})(q^{-2r\ell}-1) }\\ =&\mathop{\mathrm{Res}}_zz^{-1}\log f_0(z)^{ (q^{2r_i}-q^{-2r_i})(q^{-2r\ell}-1) }\\ =&\log\frac{f_0(2r_i\hbar-2r\ell\hbar)}{f_0(2r_i\hbar+2r\ell\hbar)}.\end{aligned}$$ Lemma [Lemma 91](#lem:exp-cal){reference-type="ref" reference="lem:exp-cal"} provides that $$\begin{aligned} &Y_{\widehat{\ell}}(E_\ell(h_i),z)\\ =&\left(\frac{f_0(2r_i\hbar+2r\ell\hbar)}{f_0(2r_i\hbar-2r\ell\hbar)}\right)^{\frac{1}{2}}Y_{\widehat{\ell}}\left(\exp\left(\left(-q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^\partial} h_i\right)_{-1}\right){{\bf 1}},z\right)\\ =&\left(\frac{f_0(2r_i\hbar+2r\ell\hbar)}{f_0(2r_i\hbar-2r\ell\hbar)}\right)^{\frac{1}{2}}\exp(\widetilde{h}_i^+(z))\exp(\widetilde{h}_i^-(z)) \left(\frac{f_0(2r_i\hbar-2r\ell\hbar)}{f_0(2r_i\hbar+2r\ell\hbar)}\right)^{\frac{1}{2}}\\ =&\exp(\widetilde{h}_i^+(z))\exp(\widetilde{h}_i^-(z)),\end{aligned}$$ as desired. ◻ **Lemma 93**.
*For $i,j\in I$, we have that $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^-E_\ell(h_j)\label{eq:Y-action-1}\\ =&E_\ell(h_j)\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}} \left(q^{r\ell\frac{\partial}{\partial {z}}}-q^{-r\ell\frac{\partial}{\partial {z}}}\right)q^{2r\ell\frac{\partial}{\partial {z}}} z^{-1},\nonumber\\ &Y_{\widehat{\ell}}(E_\ell(h_i),z)E_\ell(h_j)\label{eq:Y-action-2}\\ =& \exp\left(\widetilde{h}_i^+(z)\right)E_\ell(h_j)\otimes f(z)^{ (q^{r_ia_{ij}}-q^{-r_ia_{ij}})(1-q^{2r\ell}) } %\frac{ f(z+(2r\ell-r_ia_{ij})\hbar)f(z+r_ia_{ij}\hbar) } % { f(z+(2r\ell+r_ia_{ij})\hbar)f(z-r_ia_{ij}\hbar) } ,\nonumber\\ &Y_{\widehat{\ell}}(E_\ell(h_i),z)x_j^\pm\label{eq:Y-action-3} =\exp\left(\widetilde{h}_i^+(z)\right)x_j^\pm\otimes f(z)^{\mp (q^{r_ia_{ij}}-q^{-r_ia_{ij}}) } %\frac{f(z\mp r_ia_{ij}\hbar)}{f(z\pm r_ia_{ij}\hbar)} ,\\ &Y_{\widehat{\ell}}(x_i^\pm,z)E_\ell(h_j)\label{eq:Y-action-4} =\exp\left(\widetilde{h}_j(0)\right)e^{z\partial} x_i^\pm\otimes f(z)^{\pm (q^{r_ia_{ij}}-q^{-r_ia_{ij}})q^{2r\ell} } %\frac{f(z+(2r\ell\pm r_ia_{ij})\hbar)}{f(z+(2r\ell\mp r_ia_{ij})\hbar)} .\end{aligned}$$* *Proof.* From [\[eq:com-formulas-9\]](#eq:com-formulas-9){reference-type="eqref" reference="eq:com-formulas-9"}, we have that $$\begin{aligned} &h_i^-(z_1)Y_{\widehat{\ell}}(E_\ell(h_j),z_2){{\bf 1}} =[h_i^-(z_1),Y_{\widehat{\ell}}(E_\ell(h_j),z_2)]{{\bf 1}}\\ =&\left[h_i^-(z_1),\exp\left(\widetilde{h}_i^+(z_2)\right)\exp\left(\widetilde{h}_i^-(z_2)\right)\right]{{\bf 1}}\\ =&\exp\left(\widetilde{h}_i^+(z_2)\right)\exp\left(\widetilde{h}_i^-(z_2)\right){{\bf 1}}[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z_2}}}}\\ &\quad\times\left(q^{-r\ell\frac{\partial}{\partial {z_2}}}-q^{r\ell\frac{\partial}{\partial {z_2}}}\right)q^{- 2r\ell\frac{\partial}{\partial {z_2}}} \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)\\ =&\exp\left(\widetilde{h}_i^+(z_2)\right){{\bf 1}}[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z_1}}}} \left(q^{r\ell\frac{\partial}{\partial {z_1}}}-q^{-r\ell\frac{\partial}{\partial {z_1}}}\right)q^{2r\ell\frac{\partial}{\partial {z_1}}} \frac{\partial}{\partial {z_1}}\log f(z_1-z_2).\end{aligned}$$ Taking $z_2\to 0$, we have that $$\begin{aligned} &h_i^-(z)E_\ell(h_j) =E_\ell(h_j)[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z}}}} \left(q^{r\ell\frac{\partial}{\partial {z}}}-q^{-r\ell\frac{\partial}{\partial {z}}}\right)q^{2r\ell\frac{\partial}{\partial {z}}} \frac{\partial}{\partial {z}}\log f(z).\end{aligned}$$ Notice that $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)^-=h_i^-(z)^-,\quad \mathop{\mathrm{Sing}}_z\frac{\partial}{\partial {z}}\log f(z)=z^{-1}.\end{aligned}$$ Then we have that $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)^-E_\ell(h_j)=E_\ell(h_j)[r_ia_{ij}]_{q^{\frac{\partial}{\partial {z}}}} \left(q^{r\ell\frac{\partial}{\partial {z}}}-q^{-r\ell\frac{\partial}{\partial {z}}}\right)q^{2r\ell\frac{\partial}{\partial {z}}} z^{-1},\end{aligned}$$ which proves [\[eq:Y-action-1\]](#eq:Y-action-1){reference-type="eqref" reference="eq:Y-action-1"}. The proof of the rest equations are similar. ◻ **Lemma 94**. 
*For $i,j\in I$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(E_\ell(h_j)\otimes h_i\right)\label{eq:S-E-1} =E_\ell(h_j)\otimes h_i+E_\ell(h_j)\otimes{{\bf 1}}\\ &\quad\otimes\frac{\partial}{\partial {z}}\log f(z)^{-[a_{ij}]_{q^{r_i}}[r\ell']_{q}[r\ell]_{q} \left(q-q^{-1}\right)^2q^{-r\ell}},\nonumber\\ &S_{\ell,\ell'}(z)\left(h_j\otimes E_{\ell'}(h_i)\right) =h_j\otimes E_{\ell'}(h_i)+{{\bf 1}}\otimes E_{\ell'}(h_i)\label{eq:S-E-2}\\ &\quad\otimes\frac{\partial}{\partial {z}}\log f(z)^{-[a_{ji}]_{q^{r_j}}[r\ell']_{q}[r\ell]_{q} \left(q-q^{-1}\right)^2q^{r\ell'}},\nonumber\\ &S_{\ell,\ell'}(z)\left(E_\ell(h_j)\otimes E_{\ell'}(h_i)\right) =E_\ell(h_j)\otimes E_{\ell'}(h_i)\label{eq:S-E-3}\\ &\quad\otimes f(z)^{ (q^{r_ia_{ij}}-q^{-r_ia_{ij}})(q^{2r\ell'}+q^{-2r\ell}-1-q^{2r(\ell'-\ell)}) } %\frac{f(z-r_ia_{ij}\hbar)f(z+(2r(\ell'-\ell)-r_ia_{ij})\hbar)} % {f(z+r_ia_{ij}\hbar)f(z+(2r(\ell'-\ell)+r_ia_{ij})\hbar)}\nonumber\\ %&\quad\times \frac{f(z+(2r\ell'+r_ia_{ij})\hbar)f(z-(2r\ell-r_ia_{ij})\hbar)} % {f(z+(2r\ell'-r_ia_{ij})\hbar)f(z-(2r\ell+r_ia_{ij})\hbar)} %\frac{g_{ij,\hbar}(z)g_{ij,\hbar}(z+2r(\ell'-\ell)\hbar)}{g_{ij,\hbar}(z+2r\ell'\hbar) g_{ij,\hbar}(z-2r\ell\hbar)} ,\nonumber\\ &S_{\ell,\ell'}(z)\left(x_j^\pm\otimes E_{\ell'}(h_i)\right) =x_j^\pm\otimes E_{\ell'}(h_i)\label{eq:S-E-4}\\ &\quad\otimes f(z)^{\pm(q^{r_ia_{ij}} -q^{-r_ia_{ij}})(1-q^{2r\ell'}) } %\frac{f(z\pm r_ia_{ij}\hbar)f(z+(2r\ell'\mp r_ia_{ij})\hbar)} %{f(z\mp r_ia_{ij}\hbar) f(z+(2r\ell'\pm r_ia_{ij})\hbar) } %g_{ij,\hbar}(z+2r\ell'\hbar)^{\pm 1}g_{ij,\hbar}(z)^{\mp 1} ,\nonumber\\ &S_{\ell,\ell'}(z)\left(E_\ell(h_j)\otimes x_i^\pm\right) =E_\ell(h_j)\otimes x_i^\pm\label{eq:S-E-5}\\ &\quad\otimes f(z)^{\pm(q^{r_ia_{ij}} -q^{-r_ia_{ij}})(1-q^{-2r\ell}) } %\frac{f(z\pm r_ia_{ij}\hbar)f(z-(2r\ell\pm r_ia_{ij})\hbar)} %{f(z\mp r_ia_{ij}\hbar) f(z-(2r\ell\mp r_ia_{ij})\hbar) } %g_{ij,\hbar}(z-2r\ell\hbar)^{\pm 1}g_{ij,\hbar}(z)^{\mp 1} .\nonumber\end{aligned}$$* *Proof.* From [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"} and [\[eq:multqyb-der-shift\]](#eq:multqyb-der-shift){reference-type="eqref" reference="eq:multqyb-der-shift"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)(q^{-r\ell\partial}2\hbar f_0(2\hbar\partial)[r_j]_{q^\partial}h_j\otimes h_i)\nonumber\\ =&q^{-r\ell\partial\otimes 1-r\ell\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\left(\partial\otimes 1+1\otimes\frac{\partial}{\partial {z}}\right)\right)\nonumber\\ &\quad\times [r_j]_{q^{\partial+\frac{\partial}{\partial {z}}}}S_{\ell,\ell'}(z)(h_j\otimes h_i)\nonumber\\ =&q^{-r\ell\partial}2\hbar f_0(2\hbar\partial)[r_j]_{q^\partial}h_j\otimes h_i\nonumber\\ &\quad+{{\bf 1}}\otimes{{\bf 1}}\otimes q^{-r\ell\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)[r_j]_{q^{\frac{\partial}{\partial {z}}}}{\widehat{\ell,\ell'}}_{ij}(-z)\nonumber\\ =&q^{-r\ell\partial}2\hbar f_0(2\hbar\partial)[r_j]_{q^\partial}h_j\otimes h_i +{{\bf 1}}\otimes{{\bf 1}}\otimes\left(1-q^{-2r\ell\frac{\partial}{\partial {z}}}\right){\widehat{\ell,\ell'}}_{ij}^{1,+}(-z)\label{eq:S-E-temp-1}\\ =&q^{-r\ell\partial}2\hbar f_0(2\hbar\partial)[r_j]_{q^\partial}h_j\otimes h_i +{{\bf 1}}\otimes{{\bf 1}}\otimes\left(1-q^{-2r\ell\frac{\partial}{\partial {z}}}\right)\nonumber\\ &\times[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell']_{q^{\frac{\partial}{\partial {z}}}} \left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)\frac{\partial}{\partial 
{z}}f(z)\nonumber\\ =&q^{-r\ell\partial}2\hbar f_0(2\hbar\partial)[r_j]_{q^\partial}h_j\otimes h_i +{{\bf 1}}\otimes{{\bf 1}}\nonumber\\ &\otimes\frac{\partial}{\partial {z}}f(z)^{[a_{ij}]_{q^{r_i}}[r\ell']_{q} [r\ell]_{q}(q-q^{-1})^2q^{-r\ell} },\nonumber\end{aligned}$$ where the equation [\[eq:S-E-temp-1\]](#eq:S-E-temp-1){reference-type="eqref" reference="eq:S-E-temp-1"} follows from [\[eq:special-tau-tech1-3\]](#eq:special-tau-tech1-3){reference-type="eqref" reference="eq:special-tau-tech1-3"}. Then we complete the proof of [\[eq:S-E-1\]](#eq:S-E-1){reference-type="eqref" reference="eq:S-E-1"} by using Lemma [Lemma 28](#lem:S-special-tech-gen2){reference-type="ref" reference="lem:S-special-tech-gen2"}. The proofs of the remaining equations are similar. ◻

## Proof of Proposition [Proposition 84](#prop:S-Delta){reference-type="ref" reference="prop:S-Delta"} {#proof-of-proposition-props-delta}

**Lemma 95**.
Then $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(h_j)\otimes h_i\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left(q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\otimes h_i +{{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes h_i\right)\\ =&(q^{-r\ell'\hbar}\otimes 1\otimes 1) S_{\ell,\ell''}^{13}(z-r\ell'\hbar)\left(h_j\otimes{{\bf 1}}\otimes h_i\right)\\ &+(1\otimes q^{r\ell\hbar}\otimes 1) S_{\ell',\ell''}(z+r\ell\hbar)\left({{\bf 1}}\otimes h_j\otimes h_i\right)\\ =&q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\otimes h_i +{{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes h_i\\ &+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell''/r_j]_{q^{r_j}} ([r\ell]_{q}q^{-r\ell'} +[r\ell']_{q}q^{r\ell})(q-q^{-1}) }\\ =&q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\otimes h_i +{{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes h_i\\ &+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell''/r_j]_{q^{r_j}} [r(\ell+\ell')]_{q} \left(q-q^{-1}\right) }\\ =&\Delta(h_j)\otimes h_i+\Delta({{\bf 1}})\otimes{{\bf 1}}\otimes \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell''/r_j]_{q^{r_j}} [r(\ell+\ell')]_{q} \left(q-q^{-1}\right) }, %\\ % =&\(\Delta\ot 1\)S_{\ell+\ell',\ell''}(z)\(h_j\ot h_i\),\end{aligned}$$ where the second equation follows from [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"} and the third equation follows from [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"}. We also have that $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(h_j)\otimes x_i^\pm\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left(q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\otimes x_i^\pm +{{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes x_i^\pm\right)\\ =&\left(q^{-r\ell'\partial}\otimes 1\otimes 1\right)S_{\ell,\ell''}^{13}(z-r\ell'\hbar)\left(h_j\otimes{{\bf 1}}\otimes x_i^\pm\right)\\ &+\left(1\otimes q^{r\ell\partial}\otimes 1\right)S_{\ell',\ell''}^{23}(z+r\ell\hbar)\left({{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes x_i^\pm\right)\\ =&q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\otimes x_i^\pm +{{\bf 1}}\otimes q^{r\ell\partial}h_j\otimes x_i^\pm\mp {{\bf 1}}\otimes{{\bf 1}}\otimes x_i^\pm\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}([r\ell]_{q}q^{-r\ell'} +[r\ell']_{q}q^{r\ell}) (q-q^{-1}) }\\ =&\Delta(h_j)\otimes x_i^\pm\mp \Delta({{\bf 1}})\otimes x_i^\pm \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}[r(\ell+\ell')]_{q}(q-q^{-1}) },\end{aligned}$$ where the second equation follows from [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"} and the third equation follows from [\[eq:S-twisted-3\]](#eq:S-twisted-3){reference-type="eqref" reference="eq:S-twisted-3"}. 
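Here and several times below we use an elementary identity for the $q$-brackets; we record a quick verification as a sketch, assuming the convention $[m]_{q}=\frac{q^{m}-q^{-m}}{q-q^{-1}}$ fixed earlier in the paper: $$\begin{aligned} [a]_{q}q^{-b}+[b]_{q}q^{a} =\frac{(q^{a}-q^{-a})q^{-b}+(q^{b}-q^{-b})q^{a}}{q-q^{-1}} =\frac{q^{a+b}-q^{-a-b}}{q-q^{-1}}=[a+b]_{q}.\end{aligned}$$ With $a=r\ell$ and $b=r\ell'$ this justifies the simplification $[r\ell]_{q}q^{-r\ell'}+[r\ell']_{q}q^{r\ell}=[r(\ell+\ell')]_{q}$ used in the two computations above; the same identity with $q$ replaced by $q^{r_j}$ is used in the same way in the later computations.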
The "$-$" case of [\[eq:S3-Delta-x-h\]](#eq:S3-Delta-x-h){reference-type="eqref" reference="eq:S3-Delta-x-h"} follows from the following two relations: $$\begin{aligned} \label{eq:S3-x-vac-h} &S_{\{\ell,\ell'\},\ell''}(z)\left(x_j^\pm\otimes{{\bf 1}}\otimes h_i\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left(x_j^\pm\otimes{{\bf 1}}\otimes h_i\right)\nonumber\\ =&S_{\ell,\ell''}^{13}(z)\left(x_j^\pm\otimes{{\bf 1}}\otimes h_i\right)\nonumber\\ =&x_j^\pm\otimes{{\bf 1}}\otimes h_i\pm x_j^\pm\otimes{{\bf 1}}\otimes{{\bf 1}} \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell'']_q(q-q^{-1}) },\nonumber\end{aligned}$$ where the last equation follows from [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}; $$\begin{aligned} \label{eq:S3-vac-x-h} &S_{\{\ell,\ell'\},\ell''}(z)\left({{\bf 1}}\otimes x_j^\pm\otimes h_i\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left({{\bf 1}}\otimes x_j^\pm\otimes h_i\right)\nonumber\\ =&S_{\ell',\ell''}^{23}(z)\left({{\bf 1}}\otimes x_j^\pm\otimes h_i\right)\nonumber\\ =&{{\bf 1}}\otimes x_j^\pm\otimes h_i\pm {{\bf 1}}\otimes x_j^\pm\otimes{{\bf 1}} \otimes \frac{\partial}{\partial {z}}\log f(z)^{[a_{ij}]_{q^{r_i}}[r\ell'']_q (q-q^{-1}) },\nonumber\end{aligned}$$ where the last equation follows from [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}. In order to prove the "$+$" case of [\[eq:S3-Delta-x-h\]](#eq:S3-Delta-x-h){reference-type="eqref" reference="eq:S3-Delta-x-h"}, we need the following relation: $$\begin{aligned} \label{eq:S3-E-x-h} &S_{\{\ell,\ell'\},\ell''}(z)\left( E_\ell(h_j)\otimes x_j^+ \otimes h_i \right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left( E_\ell(h_j)\otimes x_j^+ \otimes h_i \right)\nonumber\\ =&S_{\ell',\ell''}^{23}(z)\left(E_\ell(h_j)\otimes x_j^+ \otimes h_i\right) -E_\ell(h_j)\otimes x_j^+\otimes{{\bf 1}}\nonumber\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell]_q [r\ell'']_q(q-q^{-1})^2q^{-r\ell} } \nonumber\\ =&E_\ell(h_j)\otimes x_j^+ \otimes h_i +E_\ell(h_j)\otimes x_j^+\otimes{{\bf 1}}\nonumber\\ &\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}} [r\ell'']_q\left(1-1+q^{-2r\ell}\right) \left(q-q^{-1}\right) }\nonumber\\ =&E_\ell(h_j)\otimes x_j^+ \otimes h_i +E_\ell(h_j)\otimes x_j^+\otimes{{\bf 1}}\nonumber\\ &\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}} [r\ell'']_qq^{-2r\ell}(q-q^{-1}) },\nonumber\end{aligned}$$ where the second equation follows from [\[eq:S-E-1\]](#eq:S-E-1){reference-type="eqref" reference="eq:S-E-1"} and the third equation follows from [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}. 
Then we have that $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(x_j^+)\otimes h_i\right)\\ =&S_{\{\ell,\ell'\},\ell''}(z)(x_j^+\otimes{{\bf 1}}\otimes h_i) +S_{\{\ell,\ell'\},\ell''}(z)( q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial} x_j^+\otimes h_i)\\ =&x_j^+\otimes{{\bf 1}}\otimes h_i+x_j^+\otimes{{\bf 1}}\otimes{{\bf 1}} \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell'']_q(q-q^{-1}) }\\ &+S_{\{\ell,\ell'\},\ell''}(z)( q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial} x_j^+\otimes h_i)\\ =&x_j^+\otimes{{\bf 1}}\otimes h_i+x_j^+\otimes{{\bf 1}}\otimes{{\bf 1}} \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell'']_q(q-q^{-1}) }\\ &+q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial} x_j^+\otimes h_i +q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial} x_j^+\otimes{{\bf 1}}\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell'']_q(q-q^{-1}) }\\ =&\Delta(x_j^+)\otimes h_i+\Delta(x_j^+)\otimes{{\bf 1}} \otimes\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell'']_q(q-q^{-1}) },\end{aligned}$$ where the second equation follows from [\[eq:S3-x-vac-h\]](#eq:S3-x-vac-h){reference-type="eqref" reference="eq:S3-x-vac-h"} and the third equation follows from [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"} and [\[eq:S3-E-x-h\]](#eq:S3-E-x-h){reference-type="eqref" reference="eq:S3-E-x-h"}. ◻ **Lemma 96**. *For $i,j\in I$ and $\epsilon\in\{\pm\}$, we have that $$\begin{aligned} \label{eq:S3-Delta-x-x} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(x_j^\pm)\otimes x_i^\epsilon\right) =\Delta(x_j^\pm)\otimes x_i^\epsilon\otimes f(z)^{ q^{\mp\epsilon r_ia_{ij}}-q^{\pm \epsilon r_ia_{ij}} }.\end{aligned}$$* *Proof.* Using [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"}, we have that $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(x_j^-)\otimes x_i^\epsilon\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left(x_j^-\otimes{{\bf 1}}\otimes x_i^\epsilon +{{\bf 1}}\otimes x_j^-\otimes x_i^\epsilon\right)\\ =&x_j^-\otimes{{\bf 1}}\otimes x_i^\epsilon\otimes f(z)^{ q^{\epsilon r_ia_{ij}}-q^{- \epsilon r_ia_{ij}} } +{{\bf 1}}\otimes x_j^-\otimes x_i^\epsilon\otimes f(z)^{ q^{\epsilon r_ia_{ij}}-q^{- \epsilon r_ia_{ij}} }\\ =&\Delta(x_j^-)\otimes x_i^\epsilon\otimes f(z)^{ q^{\epsilon r_ia_{ij}}-q^{- \epsilon r_ia_{ij}} }.\end{aligned}$$ Note that $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left(q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \right)\\ =&\left(q^{2r\ell\partial}\otimes q^{2r\ell\partial}\otimes 1\right)S_{\ell',\ell''}^{23}(z+2r\ell\hbar)S_{\ell,\ell''}^{13}(z+2r\ell\hbar) \left(E_\ell(h_j)\otimes x_j^+\otimes x_i^\epsilon\right)\\ =&q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \otimes f(z)^{ (q^{-\epsilon r_ia_{ij}}-q^{\epsilon r_ia_{ij}})(1-q^{2r\ell}+q^{2r\ell} }\\ =&q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \otimes f(z)^{ q^{-\epsilon r_ia_{ij}}-q^{\epsilon r_ia_{ij}} },\end{aligned}$$ where the second equation follows from [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"} and the third equation follows from 
[\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"} and [\[eq:S-E-5\]](#eq:S-E-5){reference-type="eqref" reference="eq:S-E-5"}. Combining this with [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"}, we get that $$\begin{aligned} &S_{\{\ell,\ell'\},\ell''}(z)\left(\Delta(x_j^+)\otimes x_i^\epsilon\right)\\ =&S_{\ell',\ell''}^{23}(z)S_{\ell,\ell''}^{13}(z)\left( x_j^+\otimes{{\bf 1}}\otimes x_i^\epsilon +q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \right)\\ =&x_j^+\otimes{{\bf 1}}\otimes x_i^\epsilon\otimes f(z)^{ q^{-\epsilon r_ia_{ij}}-q^{\epsilon r_ia_{ij}} }\\ &\quad+q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+\otimes x_i^\epsilon \otimes f(z)^{ q^{-\epsilon r_ia_{ij}}-q^{\epsilon r_ia_{ij}} }\\ =&\Delta(x_j^+)\otimes x_i^\epsilon\otimes f(z)^{ q^{-\epsilon r_ia_{ij}}-q^{\epsilon r_ia_{ij}} }.\end{aligned}$$ Therefore, we complete the proof of lemma. ◻ Note that $F_{\hat{\mathfrak g},\hbar}^{\ell}$ is generated by ${ \left.\left\{ {h_i,x_i^\pm} \,\right|\, {i\in I} \right\} }$. Combining this with Lemmas [Lemma 74](#lem:S-twisted){reference-type="ref" reference="lem:S-twisted"}, [Lemma 95](#lem:S3-1){reference-type="ref" reference="lem:S3-1"} and [Lemma 96](#lem:S3-2){reference-type="ref" reference="lem:S3-2"} we get that **Proposition 97**. *As operators on $F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell''}$, one has $$\begin{aligned} S_{\{\ell,\ell'\},\ell''}(z) (\Delta\otimes 1)=(\Delta\otimes 1) S_{\ell+\ell',\ell''}(z).\end{aligned}$$* Similarly, we have that **Proposition 98**. *As operators on $F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'+\ell''}$, one has $$\begin{aligned} S_{\ell,\{\ell',\ell''\}}(z)(1\otimes\Delta)=(1\otimes\Delta)S_{\ell,\ell'+\ell''}(z).\end{aligned}$$* Proposition [Proposition 84](#prop:S-Delta){reference-type="ref" reference="prop:S-Delta"} is immediate from Propositions [Proposition 97](#prop:S-Delta-1){reference-type="ref" reference="prop:S-Delta-1"} and [Proposition 98](#prop:S-Delta-2){reference-type="ref" reference="prop:S-Delta-2"}. ## Proof of Propositions [Proposition 85](#prop:Y-Delta-cartan){reference-type="ref" reference="prop:Y-Delta-cartan"} and [Proposition 87](#prop:Delta-coasso){reference-type="ref" reference="prop:Delta-coasso"} {#proof-of-propositions-propy-delta-cartan-and-propdelta-coasso} **Lemma 99**. 
*Denote by $S_\Delta(z)$ the quantum Yang-Baxter operator of $$F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}.$$ For $i,j\in I$, we have that $$\begin{aligned} &S_\Delta(z)\left(\Delta(h_j)\otimes\Delta(h_i)\right) =\Delta(h_j)\otimes\Delta(h_i)+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\label{eq:S-Delta-1-1}\\ \otimes& \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{[a_{ij}]_{q^{r_i}} [r(\ell+\ell')/r_j]_{q^{r_j}} (q-q^{-1}) },\nonumber\\ &S_\Delta(z)\left(\Delta(h_j)\otimes\Delta(x_i^\pm)\right)=\Delta(h_j)\otimes\Delta(x_i^\pm)\label{eq:S-Delta-3-1}\\ \mp&{{\bf 1}}\otimes{{\bf 1}}\otimes\Delta(x_i^\pm)\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ji}]_{q^{r_j}}(q^{r(\ell+\ell')}-q^{-r(\ell+\ell')}) },\nonumber\\ &S_\Delta(z)\left(\Delta(x_j^\pm)\otimes\Delta(h_i)\right)=\Delta(x_j^\pm)\otimes\Delta(h_i)\label{eq:S-Delta-3-2}\\ \pm&\Delta(x_j^\pm)\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}(q^{r(\ell+\ell')}-q^{-r(\ell+\ell')}) },\nonumber\\ &S_\Delta(z)\left(\Delta(x_j^{\epsilon_1})\otimes\Delta(x_i^{\epsilon_2})\right) =\Delta(x_j^{\epsilon_1})\otimes\Delta(x_i^{\epsilon_2})\label{eq:S-Delta-3-3} \otimes\frac{f(z)^{ q^{-\epsilon_1\epsilon_2r_ia_{ij}} }}{f(z)^{q^{\epsilon_1\epsilon_2r_ia_{ij}}} }.\end{aligned}$$* *Proof.* Notice that $$\begin{aligned} \label{eq:def-S-Delta} S_\Delta(z)=&S_{\ell',\ell}^{23}(z)S_{\ell,\ell}^{13}(z)S_{\ell',\ell'}^{24}(z)S_{\ell,\ell'}^{14}(z) =S_{\{\ell,\ell'\},\ell}^{\{12\},3}(z)S_{\{\ell,\ell'\},\ell'}^{\{12\},4}(z).\end{aligned}$$ Then as operators on $F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell+\ell'}$, we have that $$\begin{aligned} &S_\Delta(z)(\Delta\otimes\Delta)\\ =&S_{\{\ell,\ell'\},\ell}^{\{12\},3}(z)S_{\{\ell,\ell'\},\ell'}^{\{12\},4}(z) (\Delta\otimes 1\otimes 1)(1\otimes\Delta)\\ =&(\Delta\otimes 1\otimes 1)S_{\ell+\ell',\ell}^{12}(z)S_{\ell+\ell',\ell'}^{13}(z)(1\otimes\Delta)\\ =&(\Delta\otimes 1\otimes 1)S_{\ell+\ell',\{\ell,\ell'\}}(z)(1\otimes\Delta)\\ =&(\Delta\otimes 1\otimes 1)(1\otimes\Delta)S_{\ell+\ell',\ell+\ell'}(z)\\ =&(\Delta\otimes\Delta)S_{\ell+\ell',\ell+\ell'}(z),\end{aligned}$$ where the second equation follows from Proposition [Proposition 97](#prop:S-Delta-1){reference-type="ref" reference="prop:S-Delta-1"} and the forth equation follows from Proposition [Proposition 98](#prop:S-Delta-2){reference-type="ref" reference="prop:S-Delta-2"}. Combining this with Lemma [Lemma 74](#lem:S-twisted){reference-type="ref" reference="lem:S-twisted"}, we complete the proof of lemma. ◻ **Lemma 100**. 
*For $i,j\in I$, we have that $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^-h_j={{\bf 1}}[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}z^{-2},\label{eq:Sing-Y-1}\\ &Y_{\widehat{\ell}}(h_i,z)^-x_j^\pm =\pm x_j^\pm [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}z^{-1}.\label{eq:Sing-Y-2}\end{aligned}$$* *Proof.* From Proposition [Proposition 60](#prop:deform-datum){reference-type="ref" reference="prop:deform-datum"} and [\[eq:tau-1\]](#eq:tau-1){reference-type="eqref" reference="eq:tau-1"}, we get that $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^-h_j=Y(h_i,z)^-h_j+ {{\bf 1}}\widehat{\ell}_{ij}(z)^-\\ =&{{\bf 1}}\frac{a_{ij}r\ell}{r_j} z^{-2}+{{\bf 1}}[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}z^{-2} -{{\bf 1}}\frac{a_{ij}r\ell}{r_j} z^{-2}\\ =&{{\bf 1}}[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}z^{-2}.\end{aligned}$$ Similarly, [\[eq:Sing-Y-2\]](#eq:Sing-Y-2){reference-type="eqref" reference="eq:Sing-Y-2"} follows from Proposition [Proposition 60](#prop:deform-datum){reference-type="ref" reference="prop:deform-datum"} and [\[eq:tau-1\]](#eq:tau-1){reference-type="eqref" reference="eq:tau-1"}. ◻ **Lemma 101**. *We denote by $Y_\Delta$ the vertex operator map of the twisted tensor product $\hbar$-adic quantum VA $F_{\hat{\mathfrak g},\hbar}^{\ell}\widehat{\otimes}F_{\hat{\mathfrak g},\hbar}^{\ell'}$. For $i,j\in I$, we have that $$\begin{aligned} &Y_\Delta\left(\Delta(h_i),z\right)^-\Delta(h_j)\label{eq:Y-Delta-1-1}\\ =&{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}} \otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r(\ell+\ell')/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-2},\nonumber\\ &Y_\Delta\left(\Delta(h_i),z\right)^-(x_j^\pm\otimes{{\bf 1}})\label{eq:Y-Delta-1-2}\\ =&\pm x_j^\pm\otimes{{\bf 1}}\otimes [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-1},\nonumber\\ &Y_\Delta\left(\Delta(h_i),z\right)^-({{\bf 1}}\otimes x_j^\pm)\label{eq:Y-Delta-1-3}\\ =&\pm {{\bf 1}}\otimes x_j^\pm\otimes [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-1},\nonumber\\ &Y_\Delta\left(\Delta(h_i),z\right)^-(E_\ell(h_j)\otimes x_j^+)\label{eq:Y-Delta-1-4}\\ =&E_\ell(h_j)\otimes x_j^+ \otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}q^{2r\ell\frac{\partial}{\partial {z}}}z^{-1}.\nonumber\end{aligned}$$* *Proof.* From Lemma [Lemma 24](#lem:vertex-op-K){reference-type="ref" reference="lem:vertex-op-K"}, we have that $$\begin{aligned} \label{eq:Y-Delta} Y_\Delta(z)=Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z)\sigma.\end{aligned}$$ Recall from [\[eq:def-Delta-h\]](#eq:def-Delta-h){reference-type="eqref" reference="eq:def-Delta-h"} that $\Delta(h_i)=q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i$. 
Then we have that $$\begin{aligned} &Y_\Delta\left(\Delta(h_i),z\right)^-\Delta(h_j)\\ =&Y_\Delta\left(q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i,z\right)^- \left(q^{-r\ell'\partial}h_j\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_j\right)\\ =&Y_{\widehat{\ell}}(q^{-r\ell'\partial}h_i,z)^-q^{-r\ell'\partial}h_j\otimes{{\bf 1}}\\ &+{{\bf 1}}\otimes Y_{\widehat{\ell'}}(q^{r\ell\partial}h_i,z)^-q^{r\ell\partial}h_j +\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z){{\bf 1}}\\ &\quad\otimes S_{\ell,\ell'}(-z)\left(q^{-r\ell'\partial}h_j\otimes q^{r\ell\partial}h_i\right)\otimes{{\bf 1}}\\ =&Y_{\widehat{\ell}}(q^{-r\ell'\partial}h_i,z)^-q^{-r\ell'\partial}h_j\otimes{{\bf 1}} +{{\bf 1}}\otimes Y_{\widehat{\ell'}}(q^{r\ell\partial}h_i,z)^-q^{r\ell\partial}h_j\\ &+\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z){{\bf 1}}\\ &\quad\otimes\left(q^{-r\ell'\partial+r\ell'\frac{\partial}{\partial {z}}}\otimes q^{r\ell\partial+r\ell\frac{\partial}{\partial {z}}}\right) S_{\ell,\ell'}(-z)\left(h_j\otimes h_i\right)\otimes{{\bf 1}}\\ =&Y_{\widehat{\ell}}(q^{-r\ell'\partial}h_i,z)^-q^{-r\ell'\partial}h_j\otimes{{\bf 1}} +{{\bf 1}}\otimes Y_{\widehat{\ell'}}(q^{r\ell\partial}h_i,z)^-q^{r\ell\partial}h_j\\ &+\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}({{\bf 1}},z)q^{r\ell\partial}h_j\otimes Y_{\widehat{\ell'}}(q^{-r\ell'\partial}h_i,z){{\bf 1}}+\mathop{\mathrm{Sing}}_z {{\bf 1}}\otimes{{\bf 1}}\\ &\quad\otimes q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}[r\ell']_{q^{\frac{\partial}{\partial {z}}}} \left(q^{-\frac{\partial}{\partial {z}}}-q^{\frac{\partial}{\partial {z}}}\right)\frac{\partial^{2}}{\partial z^{2}}\log f(z)\\ =&{{\bf 1}}\otimes{{\bf 1}} \otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}\bigg([r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{-r\ell'\frac{\partial}{\partial {z}}}\\ &\quad+[r\ell'/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{-r\ell\frac{\partial}{\partial {z}}} +[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}\left(q^{r\ell'\frac{\partial}{\partial {z}}}-q^{-r\ell'\frac{\partial}{\partial {z}}}\right) \bigg)z^{-2}\\ =&{{\bf 1}}\otimes{{\bf 1}}\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r(\ell+\ell')/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-2},\end{aligned}$$ where the third equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"}, the fourth equation follows from [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"}, and the fifth equation follows from [\[eq:Sing-Y-1\]](#eq:Sing-Y-1){reference-type="eqref" reference="eq:Sing-Y-1"}. 
For the equation [\[eq:Y-Delta-1-2\]](#eq:Y-Delta-1-2){reference-type="eqref" reference="eq:Y-Delta-1-2"}, we have that $$\begin{aligned} &Y_\Delta\left(\Delta(h_i),z\right)^-(x_j^\pm\otimes{{\bf 1}})\\ =&Y_\Delta\left(q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i,z\right)^- (x_j^\pm\otimes{{\bf 1}})\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z)\\ &\quad\left( q^{-r\ell'\partial}h_i\otimes x_j^\pm\otimes{{\bf 1}}\otimes{{\bf 1}} +{{\bf 1}}\otimes x_j^\pm\otimes q^{r\ell\partial}h_i\otimes{{\bf 1}}\right)\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)\bigg( q^{-r\ell'\partial}h_i\otimes x_j^\pm\otimes{{\bf 1}}\otimes{{\bf 1}} +{{\bf 1}}\otimes x_j^\pm\otimes q^{r\ell\partial}h_i\otimes{{\bf 1}}\\ &\quad\pm{{\bf 1}}\otimes x_j^\pm \otimes{{\bf 1}}\otimes{{\bf 1}}\otimes q^{r\ell\frac{\partial}{\partial {z}}}[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}\left(q^{r\ell'\frac{\partial}{\partial {z}}}-q^{-r\ell'\frac{\partial}{\partial {z}}}\right) \frac{\partial}{\partial {z}}\log f(z) \bigg)\\ =&\pm x_j^\pm\otimes{{\bf 1}}\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}\left(q^{r(\ell-\ell')\frac{\partial}{\partial {z}}}+q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}-q^{r(\ell-\ell')\frac{\partial}{\partial {z}}}\right)z^{-1}\\ =&\pm x_j^\pm\otimes{{\bf 1}}\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-1},\end{aligned}$$ where the third equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}, and the forth equation follows from [\[eq:Sing-Y-2\]](#eq:Sing-Y-2){reference-type="eqref" reference="eq:Sing-Y-2"}. 
From the equation [\[eq:Sing-Y-2\]](#eq:Sing-Y-2){reference-type="eqref" reference="eq:Sing-Y-2"} we get that $$\begin{aligned} &Y_\Delta\left(\Delta(h_i),z\right)^-({{\bf 1}}\otimes x_j^\pm)\\ =&Y_\Delta\left(q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i,z\right)({{\bf 1}}\otimes x_j^\pm)^-\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z)\\ &\quad\left( q^{-r\ell'\partial}h_i\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes x_j^\pm +{{\bf 1}}\otimes{{\bf 1}}\otimes q^{r\ell\partial}h_i\otimes x_j^\pm\right)\\ =&{{\bf 1}}\otimes Y_{\widehat{\ell'}}(q^{r\ell\partial}h_i,z)^-x_j^\pm =\pm {{\bf 1}}\otimes x_j^\pm\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-1},\end{aligned}$$ Finally, we have that $$\begin{aligned} &Y_\Delta\left(\Delta(h_i),z\right)^-\left(E_\ell(h_j)\otimes x_j^+\right)\\ =&Y_\Delta\left(q^{-r\ell'\partial}h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{r\ell\partial}h_i,z\right)^- \left(E_\ell(h_j)\otimes x_j^+\right)\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z)\\ &\quad\left( q^{-r\ell'\partial}h_i\otimes E_\ell(h_j)\otimes{{\bf 1}}\otimes x_j^+ +{{\bf 1}}\otimes E_\ell(h_j)\otimes q^{r\ell\partial}h_i\otimes x_j^+ \right)\\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)\bigg( q^{-r\ell'\partial}h_i\otimes E_\ell(h_j)\otimes{{\bf 1}}\otimes x_j^+ \\ &\quad+{{\bf 1}}\otimes E_\ell(h_j)\otimes q^{r\ell\partial}h_i\otimes x_j^+ +{{\bf 1}}\otimes E_\ell(h_j)\otimes{{\bf 1}}\otimes x_j^+ \\ &\quad\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}} \left(q^{r\ell\frac{\partial}{\partial {z}}}-q^{-r\ell\frac{\partial}{\partial {z}}}\right) \left(q^{r\ell'\frac{\partial}{\partial {z}}}-q^{-r\ell'\frac{\partial}{\partial {z}}}\right)q^{2r\ell\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z) \bigg)\\ =&E_\ell(h_j)\otimes x_j^+\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}\\ &\quad\times\left(q^{2r(\ell-\ell')\frac{\partial}{\partial {z}}}-q^{-2r\ell'\frac{\partial}{\partial {z}}} +1+\left(q^{2r\ell\frac{\partial}{\partial {z}}}-1\right)\left(1-q^{-2r\ell'\frac{\partial}{\partial {z}}}\right) \right)z^{-1}\\ =&E_\ell(h_j)\otimes x_j^+\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}q^{2r\ell\frac{\partial}{\partial {z}}}z^{-1},\end{aligned}$$ where third equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and [\[eq:S-E-1\]](#eq:S-E-1){reference-type="eqref" reference="eq:S-E-1"}, and the forth equation follows from [\[eq:Sing-Y-2\]](#eq:Sing-Y-2){reference-type="eqref" reference="eq:Sing-Y-2"} and [\[eq:Y-action-1\]](#eq:Y-action-1){reference-type="eqref" reference="eq:Y-action-1"}. ◻ From the definition [\[eq:def-Delta-x+\]](#eq:def-Delta-x+){reference-type="eqref" reference="eq:def-Delta-x+"}, [\[eq:def-Delta-x-\]](#eq:def-Delta-x-){reference-type="eqref" reference="eq:def-Delta-x-"} and the equations [\[eq:Y-Delta-1-2\]](#eq:Y-Delta-1-2){reference-type="eqref" reference="eq:Y-Delta-1-2"}, [\[eq:Y-Delta-1-3\]](#eq:Y-Delta-1-3){reference-type="eqref" reference="eq:Y-Delta-1-3"} and [\[eq:Y-Delta-1-4\]](#eq:Y-Delta-1-4){reference-type="eqref" reference="eq:Y-Delta-1-4"}, we immediately get that **Lemma 102**. 
*For $i,j\in I$, we have that $$\begin{aligned} Y_\Delta\left(\Delta(h_i),z\right)^-\Delta(x_j^\pm)=\pm \Delta(x_j^\pm)\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}z^{-1}.\end{aligned}$$* **Lemma 103**. *In $V_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}V_{\hat{\mathfrak g},\hbar}^{\ell'}$, we have that ($i\in I$): $$\begin{aligned} &Y_\Delta\left(\Delta(x_i^+),z\right)^-\Delta(x_i^-)\\ =&\frac{1}{q^{r_i}-q^{-r_i}}\left({{\bf 1}}\otimes{{\bf 1}}z^{-1}-q^{-2r\ell'\partial}E_\ell(h_i)\otimes E_{\ell'}(h_i)(z+2r(\ell+\ell')\hbar)^{-1}\right).\end{aligned}$$* *Proof.* We have that $$\begin{aligned} &Y_\Delta\left(x_i^+\otimes{{\bf 1}}+q^{2r\ell\partial}E_\ell(h_i)\otimes q^{2r\ell\partial}x_i^+,z\right)^-\left(x_i^-\otimes{{\bf 1}}+{{\bf 1}}\otimes x_i^-\right)\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z) \bigg( x_i^+\otimes x_i^-\otimes{{\bf 1}}\otimes{{\bf 1}}\\ &\quad +q^{2r\ell\partial}E_\ell(h_i)\otimes x_i^-\otimes q^{2r\ell\partial}x_i^+\otimes{{\bf 1}}+x_i^+\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes x_i^-\\ &\quad +q^{2r\ell\partial}E_\ell(h_i)\otimes{{\bf 1}}\otimes q^{2r\ell\partial}x_i^+\otimes x_i^- \bigg)\\ =&\mathop{\mathrm{Sing}}_z Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z) \bigg( x_i^+\otimes x_i^-\otimes{{\bf 1}}\otimes{{\bf 1}}\\ &\quad+q^{2r\ell\partial}E_\ell(h_i)\otimes x_i^-\otimes q^{2r\ell\partial}x_i^+\otimes{{\bf 1}} \otimes f(z)^{ (q^{-2r_i}-q^{2r_i})q^{2r\ell} }\\ &\quad +x_i^+\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes x_i^- +q^{2r\ell\partial}E_\ell(h_i)\otimes{{\bf 1}}\otimes q^{2r\ell\partial}x_i^+\otimes x_i^- \bigg)\\ =&\frac{1}{q^{r_i}-q^{-r_i}}\left({{\bf 1}}\otimes{{\bf 1}}z^{-1}- E_\ell(h_i)\otimes{{\bf 1}}(z+2r\ell\hbar)^{-1}\right)\\ &+\mathop{\mathrm{Sing}}_z\exp\left(\widetilde{h}_i(z+2r\ell\hbar)\right)x_i^-\otimes e^{(z+2r\ell\hbar)\partial}x_i^+\\ &+\frac{1}{q^{r_i}-q^{-r_i}}\mathop{\mathrm{Sing}}_z\Big(e^{(z+2r\ell\hbar)\partial}E_\ell(h_i)\otimes{{\bf 1}}(z+2r\ell\hbar)^{-1}\\ &\quad -e^{(z+2r\ell\hbar)\partial}E_\ell(h_i)\otimes E_{\ell'}(h_i)(z+2r(\ell+\ell')\hbar)^{-1} \Big)\\ =&\frac{1}{q^{r_i}-q^{-r_i}}\Big({{\bf 1}}\otimes{{\bf 1}}z^{-1}\\ &\quad-E_\ell(h_i)\otimes{{\bf 1}}(z+2r\ell\hbar)^{-1}+E_\ell(h_i)\otimes{{\bf 1}}(z+2r\ell\hbar)^{-1}\\ &\quad-q^{-2r\ell'\partial}E_\ell(h_i)\otimes E_{\ell'}(h_i)(z+2r(\ell+\ell')\hbar)^{-1}\Big)\\ =&\frac{1}{q^{r_i}-q^{-r_i}}\left({{\bf 1}}\otimes{{\bf 1}}z^{-1} -q^{-2r\ell'\partial}E_\ell(h_i)\otimes E_{\ell'}(h_i)(z+2r(\ell+\ell')\hbar)^{-1}\right).\end{aligned}$$ where the first equation follows from [\[eq:Y-Delta\]](#eq:Y-Delta){reference-type="eqref" reference="eq:Y-Delta"} and the definition of $\Delta(x_i^\pm)$ (see [\[eq:def-Delta-x+\]](#eq:def-Delta-x+){reference-type="eqref" reference="eq:def-Delta-x+"}, [\[eq:def-Delta-x-\]](#eq:def-Delta-x-){reference-type="eqref" reference="eq:def-Delta-x-"}), the second equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and [\[eq:S-E-4\]](#eq:S-E-4){reference-type="eqref" reference="eq:S-E-4"}, and the third equation follows from Proposition [Proposition 72](#prop:ideal-def-alt){reference-type="ref" reference="prop:ideal-def-alt"} and [\[eq:Y-action-3\]](#eq:Y-action-3){reference-type="eqref" reference="eq:Y-action-3"}. ◻ **Lemma 104**. 
*For any $i\in I$, we have that $$\begin{aligned} E_{\ell+\ell'}\left(\Delta(h_i)\right) =q^{-2r\ell'\partial} E_\ell(h_i)\otimes E_{\ell'}(h_i).\end{aligned}$$* *Proof.* Set $$\begin{aligned} &a=-q^{-r(\ell+2\ell')\partial}2\hbar f_0(2\hbar\partial)[r_i]_{q^\partial} h_i\otimes{{\bf 1}},\\ &b=-{{\bf 1}}\otimes q^{-r\ell'\partial}2\hbar f_0(2\hbar\partial)[r_i]_{q^\partial} h_i\end{aligned}$$ and $\widetilde{h}_i=a+b$. Recall from [\[eq:def-S-Delta\]](#eq:def-S-Delta){reference-type="eqref" reference="eq:def-S-Delta"} that $S_\Delta(z)=S_{\ell',\ell}^{23}(z)S_{\ell,\ell}^{13}(z)S_{\ell',\ell'}^{24}(z)S_{\ell,\ell'}^{14}(z)$. Then $$\begin{aligned} &S_\Delta(z)(b\otimes a)\\ =&S_{\ell',\ell}^{23}(z)S_{\ell,\ell}^{13}(z)S_{\ell',\ell'}^{24}(z)S_{\ell,\ell'}^{14}(z)\\ &\quad\left({{\bf 1}}\otimes q^{-r\ell'\partial}2\hbar f_0(2\hbar\partial)h_i\otimes q^{-r(\ell+2\ell')\partial}2\hbar f_0(2\hbar\partial)h_i\otimes{{\bf 1}}\right)\\ =&\bigg(1\otimes q^{-r\ell'\partial-r\ell'\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\left(\partial+\frac{\partial}{\partial {z}}\right)\right)\otimes q^{-r(\ell+2\ell')\partial+r(\ell+2\ell')\frac{\partial}{\partial {z}}}\\ &\times 2\hbar f_0\left(2\hbar\left(\partial-\frac{\partial}{\partial {z}}\right)\right)\otimes 1\bigg) S_{\ell',\ell}^{23}(z)({{\bf 1}}\otimes h_i\otimes h_i\otimes{{\bf 1}})\\ =&b\otimes a+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes q^{r(\ell+\ell')\frac{\partial}{\partial {z}}}4\hbar^2f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)^2\\ &\times \frac{\partial^{2}}{\partial z^{2}}\log f(z)^{ [2r_i]_{q} [r\ell]_{q} [r\ell']_{q}(q-q^{-1}) }\\ =&b\otimes a+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes \log f(z)^{ (q^{2r_i}-q^{-2r_i})(q^{2r\ell}-1)(q^{2r\ell'}-1) },\end{aligned}$$ where the second equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and the third equation follows from [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"}. Then $$\begin{aligned} &[Y_\Delta(a,z_1),Y_\Delta(b,z_2)]w\\ =&Y_\Delta(a,z_1)Y_\Delta(b,z_2)w-Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)\\ &+Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-Y_\Delta(b,z_2)Y_\Delta(a,z_1)w\\ =&Y_\Delta(Y_\Delta(a,z_1-z_2)b-Y_\Delta(a,-z_2+z_1)b,z_2)\\ &+Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-Y_\Delta(b,z_2)Y_\Delta(a,z_1)w\\ =&Y_\Delta(z_2)Y_\Delta^{23}(z_1)\left(S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-b\otimes a\otimes w\right)\\ =&w\otimes\iota_{z_2,z_1}\log f(z_2-z_1)^{ (q^{2r_i}-q^{-2r_i})(q^{2r\ell}-1) (q^{2r\ell'}-1) },\end{aligned}$$ where the second equation follows from [@Li-h-adic (2.25)]. 
Applying $\mathop{\mathrm{Res}}_{z_1,z_2}z_1^{-1}z_2^{-1}$ to both sides, we get that $$\begin{aligned} &[a_{-1},b_{-1}]\\ =&\mathop{\mathrm{Res}}_{z_1,z_2}z_1^{-1}z_2^{-1} \log f(z_2-z_1)^{ (q^{2r_i}-q^{-2r_i})(q^{2r\ell}-1) (q^{2r\ell'}-1) }\\ =&\mathop{\mathrm{Res}}_{z_2}z_2^{-1}\log f(z_2)^{ (q^{2r_i}-q^{-2r_i})(q^{2r\ell}-1) (q^{2r\ell'}-1) }\\ =&\mathop{\mathrm{Res}}_{z_2}z_2^{-1}\log f_0(z_2)^{ (q^{2r_i}-q^{-2r_i})(q^{2r\ell}-1) (q^{2r\ell'}-1) }\\ =&\log \frac{f_0(2(r_i+r\ell+r\ell')\hbar)f_0(2r_i\hbar) }{f_0(2(r_i+r\ell)\hbar)f_0(2(r_i+r\ell')\hbar) } \frac{f_0(2(-r_i+r\ell)\hbar)f_0(2(-r_i+r\ell')\hbar) }{f_0(2(-r_i+r\ell+r\ell')\hbar)f_0(-2r_i\hbar) }\\ =&\log \frac{f_0(2(r_i+r\ell+r\ell')\hbar)}{f_0(2(r_i-r\ell-r\ell')\hbar) } \frac{f_0(2(r_i-r\ell)\hbar) }{f_0(2(r_i+r\ell)\hbar)} \frac{f_0(2(r_i-r\ell')\hbar)}{f_0(2(r_i+r\ell')\hbar) }.\end{aligned}$$ From the Baker-Campbell-Hausdorff formula, we have that $$\begin{aligned} &E_{\ell+\ell'}\left(\Delta(h_i)\right) =\left(\frac{f_0(2(r_i+r\ell+r\ell')\hbar)}{f_0(2(r_i-r\ell-r\ell')\hbar) }\right)^{\frac{1}{2}}\exp(a_{-1}+b_{-1}){{\bf 1}}\otimes{{\bf 1}}\\ =&\left(\frac{f_0(2(r_i+r\ell+r\ell')\hbar)}{f_0(2(r_i-r\ell-r\ell')\hbar) }\right)^{\frac{1}{2}} \exp(a_{-1}){{\bf 1}}\otimes\exp(b_{-1}){{\bf 1}}\otimes\exp(-[a_{-1},b_{-1}]/2) \\ =&\exp(a_{-1}){{\bf 1}}\otimes\exp(b_{-1}){{\bf 1}} \left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar) }\right)^{\frac{1}{2}} \left(\frac{f_0(2(r_i+r\ell')\hbar) }{f_0(2(r_i-r\ell')\hbar)}\right)^{\frac{1}{2}}\\ =& q^{-2r\ell'\partial} E_\ell(h_i)\otimes E_{\ell'}(h_i),\end{aligned}$$ which completes the proof. ◻

*Proof of Proposition [Proposition 85](#prop:Y-Delta-cartan){reference-type="ref" reference="prop:Y-Delta-cartan"}:* In view of Remark [Remark 11](#rem:Jacobi-S){reference-type="ref" reference="rem:Jacobi-S"}, the relation [\[eq:prop-Y-Delta-1\]](#eq:prop-Y-Delta-1){reference-type="eqref" reference="eq:prop-Y-Delta-1"} follows from [\[eq:S-Delta-3-1\]](#eq:S-Delta-3-1){reference-type="eqref" reference="eq:S-Delta-3-1"} and [\[eq:Y-Delta-1-1\]](#eq:Y-Delta-1-1){reference-type="eqref" reference="eq:Y-Delta-1-1"}, the relation [\[eq:prop-Y-Delta-2\]](#eq:prop-Y-Delta-2){reference-type="eqref" reference="eq:prop-Y-Delta-2"} follows from [\[eq:S-Delta-3-2\]](#eq:S-Delta-3-2){reference-type="eqref" reference="eq:S-Delta-3-2"} and Lemma [Lemma 102](#lem:Y-Delta-h-x){reference-type="ref" reference="lem:Y-Delta-h-x"}, and the relation [\[eq:prop-Y-Delta-3\]](#eq:prop-Y-Delta-3){reference-type="ref" reference="eq:prop-Y-Delta-3"} follows from [\[eq:S-Delta-3-3\]](#eq:S-Delta-3-3){reference-type="eqref" reference="eq:S-Delta-3-3"} and Lemmas [Lemma 103](#lem:Y-Delta-x+x-){reference-type="ref" reference="lem:Y-Delta-x+x-"} and [Lemma 104](#lem:E-ell+){reference-type="ref" reference="lem:E-ell+"}.
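Before turning to coassociativity, we record a quick consistency check (a sketch, not needed in the sequel). Assuming, as in the earlier sections, that $q\equiv 1\pmod{\hbar}$, and noting that $E_\ell(h_i)\equiv{{\bf 1}}\pmod{\hbar}$ (its defining exponential carries an overall factor of $2\hbar$; see the proof of Proposition 92), the coproduct reduces modulo $\hbar$ to the standard one on the generators: $$\begin{aligned} \Delta(h_i)\equiv h_i\otimes{{\bf 1}}+{{\bf 1}}\otimes h_i,\quad \Delta(x_i^\pm)\equiv x_i^\pm\otimes{{\bf 1}}+{{\bf 1}}\otimes x_i^\pm \pmod{\hbar},\end{aligned}$$ so that $\Delta$ is a deformation of the cocommutative coproduct.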
*Proof of Proposition [Proposition 87](#prop:Delta-coasso){reference-type="ref" reference="prop:Delta-coasso"}:* It is easy to see that $$\begin{aligned} (\Delta\otimes 1)\circ\Delta(x_i^-) =(1\otimes\Delta)\circ\Delta(x_i^-) \quad\mbox{for }i\in I.\end{aligned}$$ For $i\in I$, we have that $$\begin{aligned} &(\Delta\otimes 1)\circ \Delta(h_i)\\ =&(\Delta\otimes 1)\left(q^{-r\ell''\partial}h_i\otimes{{\bf 1}} +{{\bf 1}}\otimes q^{r(\ell+\ell')\partial}h_i\right)\\ =&q^{-r(\ell'+\ell'')\partial}h_i\otimes{{\bf 1}}\otimes{{\bf 1}} +{{\bf 1}}\otimes q^{r(\ell-\ell'')\partial}h_i\otimes{{\bf 1}}\\ &+{{\bf 1}}\otimes{{\bf 1}}\otimes q^{r(\ell+\ell')\partial}h_i,\end{aligned}$$ and that $$\begin{aligned} &(1\otimes\Delta)\circ\Delta(h_i)\\ =&(1\otimes\Delta) \left(q^{-r(\ell'+\ell'')\partial}h_i\otimes{{\bf 1}} +{{\bf 1}}\otimes q^{r\ell\partial}h_i\right)\\ =&q^{-r(\ell'+\ell'')\partial}h_i\otimes{{\bf 1}}\otimes{{\bf 1}} +{{\bf 1}}\otimes q^{r(\ell-\ell'')\partial}h_i\otimes{{\bf 1}}\\ &+{{\bf 1}}\otimes{{\bf 1}}\otimes q^{r(\ell+\ell')\partial}h_i.\end{aligned}$$ It follows that $$\begin{aligned} &(\Delta\otimes 1)\circ\Delta(h_i) =(1\otimes\Delta)\circ\Delta(h_i) \quad\mbox{for }i\in I.\end{aligned}$$ From Lemma [Lemma 104](#lem:E-ell+){reference-type="ref" reference="lem:E-ell+"}, we have that $$\begin{aligned} &(\Delta\otimes 1)\circ\Delta(x_i^+)\\ =&(\Delta\otimes 1)\left(x_i^+\otimes{{\bf 1}} +q^{2r(\ell+\ell')\partial}E_{\ell+\ell'}(h_i) \otimes q^{2r(\ell+\ell')\partial}x_i^+\right)\\ =&x_i^+\otimes{{\bf 1}}\otimes{{\bf 1}} +q^{2r\ell\partial}E_\ell(h_i) \otimes q^{2r\ell\partial}x_i^+\otimes{{\bf 1}}\\ &+q^{2r\ell\partial}E_\ell(h_i) \otimes q^{2r(\ell+\ell')\partial} E_{\ell'}(h_i) \otimes q^{2r(\ell+\ell')\partial}x_i^+,\end{aligned}$$ and that $$\begin{aligned} &(1\otimes\Delta)\circ\Delta(x_i^+)\\ =&(1\otimes\Delta)\left(x_i^+\otimes{{\bf 1}} +q^{2r\ell\partial}E_\ell(h_i)\otimes q^{2r\ell\partial}x_i^+\right)\\ =&x_i^+\otimes{{\bf 1}}\otimes{{\bf 1}} +q^{2r\ell\partial}E_\ell(h_i) \otimes q^{2r\ell\partial}x_i^+\otimes{{\bf 1}}\\ &+q^{2r\ell\partial}E_\ell(h_i) \otimes q^{2r(\ell+\ell')\partial}E_{\ell'}(h_i) \otimes q^{2r(\ell+\ell')\partial}x_i^+.\end{aligned}$$ It follows that $$\begin{aligned} &(\Delta\otimes 1)\circ\Delta(x_i^+) =(1\otimes\Delta)\circ\Delta(x_i^+)\quad \mbox{for }i\in I.\end{aligned}$$ Since $F_{\hat{\mathfrak g},\hbar}^\ell$ is generated by ${ \left.\left\{ {h_i,x_i^\pm} \,\right|\, {i\in I} \right\} }$, we complete the proof of Proposition [Proposition 87](#prop:Delta-coasso){reference-type="ref" reference="prop:Delta-coasso"}.

## Proof of Proposition [Proposition 86](#prop:Y-Delta-serre){reference-type="ref" reference="prop:Y-Delta-serre"} {#proof-of-proposition-propy-delta-serre}

**Lemma 105**. *For $i,j\in I$, we have that $$\begin{aligned} \mathop{\mathrm{Sing}}_z z^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_\Delta(\Delta(x_i^\pm),z)\Delta(x_j^\pm)=0.\end{aligned}$$* *Proof.* We only need to consider the "$+$" case.
From [\[eq:Y-Delta\]](#eq:Y-Delta){reference-type="eqref" reference="eq:Y-Delta"}, we have that $$\begin{aligned} &Y_\Delta(E_\ell(h_i)\otimes x_i^+,z)(E_\ell(h_j)\otimes x_j^+)\\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}^{23}(-z) \left(E_\ell(h_i)\otimes E_\ell(h_j)\otimes x_i^+\otimes x_j^+\right)\\ =&Y_{\widehat{\ell}}(E_\ell(h_i),z)E_\ell(h_j)\otimes Y_{\widehat{\ell'}}(x_i^+,z)x_j^+ \otimes f(z)^{(q^{-r_ia_{ij}}-q^{r_ia_{ij}})(1-q^{2r\ell})}\\ =&\exp\left(\widetilde{h}_i^+(z)\right)E_\ell(h_j)\otimes Y_{\widehat{\ell'}}(x_i^+,z)x_j^+,\end{aligned}$$ where the second equation follows from [\[eq:S-E-5\]](#eq:S-E-5){reference-type="eqref" reference="eq:S-E-5"} and the last equation follows from [\[eq:Y-action-2\]](#eq:Y-action-2){reference-type="eqref" reference="eq:Y-action-2"}. Then we have $$\begin{aligned} \mathop{\mathrm{Sing}}_z z^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_\Delta(E_\ell(h_i)\otimes x_i^+,z)(E_\ell(h_j)\otimes x_j^+)=0.\end{aligned}$$ Similarly, by using [\[eq:Y-action-4\]](#eq:Y-action-4){reference-type="eqref" reference="eq:Y-action-4"} we have that $$\begin{aligned} &Y_\Delta(x_i^+\otimes{{\bf 1}},z)(E_\ell(h_j)\otimes x_j^+)\\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}(-z)\left(x_i^+\otimes E_\ell(h_j)\otimes{{\bf 1}}\otimes x_j^+\right)\\ =&Y_{\widehat{\ell}}(x_i^+,z)E_\ell(h_j)\otimes x_j^+\\ =&\exp\left(\widetilde{h}_j^+(0)\right)x_i^+\otimes x_j^+\otimes f(z)^{(q^{r_ia_{ij}}-q^{-r_ia_{ij}})q^{2r\ell}},\end{aligned}$$ and by using [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"} and [\[eq:Y-action-3\]](#eq:Y-action-3){reference-type="eqref" reference="eq:Y-action-3"}, we have that $$\begin{aligned} &Y_\Delta(E_\ell(h_i)\otimes x_i^+,z)(x_j^+\otimes{{\bf 1}})\\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)S_{\ell,\ell'}(-z)\left( E_\ell(h_i)\otimes x_j^+\otimes x_i^+\otimes{{\bf 1}}\right)\\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{\ell'}}^{34}(z)\left( E_\ell(h_i)\otimes x_j^+\otimes x_i^+\otimes{{\bf 1}}\right)\otimes f(z)^{q^{r_ia_{ij}}-q^{-r_ia_{ij}}}\\ =&Y_{\widehat{\ell}}(E_\ell(h_i),z)x_j^+\otimes e^{z\partial}x_i\otimes f(z)^{q^{r_ia_{ij}}-q^{-r_ia_{ij}}}\\ =&\exp\left(\widetilde{h}_i^+(z)\right)x_j^+\otimes e^{z\partial}x_i^+.\end{aligned}$$ Then we have that $$\begin{aligned} &\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_\Delta(\Delta(x_i^+),z)\Delta(x_j^+)\\ =&\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_\Delta(x_i^+\otimes{{\bf 1}}+q^{2r\ell\partial} E_\ell(h_i)\otimes q^{2r\ell\partial} x_i^+,z)\\ &\qquad (x_j^+\otimes{{\bf 1}}+q^{2r\ell\partial}E_\ell(h_j)\otimes q^{2r\ell\partial}x_j^+)\\ =&\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_{\widehat{\ell}}(x_i^+,z)x_j^+\otimes{{\bf 1}}\\ &+q^{2r\ell(\partial\otimes 1+1\otimes\partial)}\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)\\ &\quad\times Y_\Delta(x_i^+\otimes{{\bf 1}},z-2r\ell\hbar)(E_\ell(h_j)\otimes x_j^+)\\ &+\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)Y_\Delta(E_\ell(h_i)\otimes x_i^+,z+2r\ell\hbar)(x_j^+\otimes{{\bf 1}})\\ &+q^{2r\ell(\partial\otimes 1+1\otimes\partial)}\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)\\ &\quad\times Y_\Delta(E_\ell(h_i)\otimes x_i^+,z)(E_\ell(h_j)\otimes x_j^+)\\ =&\exp\left(\widetilde{h}_j^+(2r\ell\hbar)\right)q^{2r\ell\partial}x_i^+\otimes q^{2r\ell\partial}x_j^+\\ 
&\quad\otimes\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z+r_ia_{ij}\hbar)f_0(z+r_ia_{ij}\hbar)f_0(z-r_ia_{ij}\hbar)^{-1}\\ &+\mathop{\mathrm{Sing}}_zz^{-\delta_{ij}}(z-r_ia_{ij}\hbar)\exp\left(\widetilde{h}_i^+(z+2r\ell\hbar)\right)x_j^+\otimes e^{z\partial}q^{2r\ell\partial}x_i^+\\ =&\delta_{ij}\exp\left(\widetilde{h}_j^+(2r\ell\hbar)\right)q^{2r\ell\partial}x_i^+\otimes q^{2r\ell\partial}x_i^+\\ &\quad\otimes z^{-1}\left((r_ia_{ii}\hbar)f_0(r_ia_{ii}\hbar)f_0(-r_ia_{ii}\hbar)^{-1}-r_ia_{ii}\hbar\right)\\ =&0,\end{aligned}$$ which completes the proof of the lemma. ◻

Combining Remark [Remark 11](#rem:Jacobi-S){reference-type="ref" reference="rem:Jacobi-S"}, Lemma [Lemma 105](#lem:Delta-x-pm-q-local){reference-type="ref" reference="lem:Delta-x-pm-q-local"} and [\[eq:S-Delta-3-3\]](#eq:S-Delta-3-3){reference-type="eqref" reference="eq:S-Delta-3-3"}, we immediately get that

**Lemma 106**. *For $i,j\in I$, we have that $$\begin{aligned} &\iota_{z_1,z_2} f(z_1-z_2)^{-\delta_{ij}+q^{-r_ia_{ij}}} Y_\Delta(\Delta(x_i^\pm),z_1) Y_\Delta(\Delta(x_j^\pm),z_2)\\ =&\iota_{z_2,z_1}f(-z_2+z_1)^{-\delta_{ij}+q^{r_ia_{ij}}}Y_\Delta(\Delta(x_j^\pm),z_2) Y_\Delta(\Delta(x_i^\pm),z_1).\end{aligned}$$*

**Lemma 107**. *For $n\in{\mathbb{Z}}_+$, we have that $$\begin{aligned} \label{eq:Y-Delta-n} &Y_\Delta^{12,34}(z_1)Y_\Delta^{34,56}(z_2)\cdots Y_\Delta^{2n-3,2n-2,2n-1,2n}(z_{n-1})\nonumber\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2)\cdots Y_{\widehat{\ell}}^{n-1,n}(z_{n-1})\\ &\times Y_{\widehat{\ell'}}^{n+1,n+2}(z_1)Y_{\widehat{\ell'}}^{n+2,n+3}(z_2)\cdots Y_{\widehat{\ell'}}^{2n-1,2n}(z_{n-1})\nonumber\\ &\times\prod_{a-b=n-1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a) \prod_{a-b=n-2}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ & \times \sigma^{n,n+1,n+2,\dots,2n-1}\sigma^{n-1,n,n+1,\dots,2n-3} \cdots\sigma^{23},\nonumber\end{aligned}$$ where $\sigma^{a,a+1,\dots,b}=\sigma^{a,a+1}\sigma^{a+1,a+2}\cdots\sigma^{b-1,b}$ and $z_n=0$.*

*Proof.* The case $n=2$ follows immediately from [\[eq:Y-Delta\]](#eq:Y-Delta){reference-type="eqref" reference="eq:Y-Delta"}. Suppose that the equation [\[eq:Y-Delta-n\]](#eq:Y-Delta-n){reference-type="eqref" reference="eq:Y-Delta-n"} holds for $n=k$.
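For orientation, we record the first case beyond [\[eq:Y-Delta\]](#eq:Y-Delta){reference-type="eqref" reference="eq:Y-Delta"}; specializing [\[eq:Y-Delta-n\]](#eq:Y-Delta-n){reference-type="eqref" reference="eq:Y-Delta-n"} to $n=3$ (so that $z_3=0$; the factors $S_{\ell,\ell'}^{24}$ and $S_{\ell,\ell'}^{35}$ act on disjoint tensor slots and commute), it reads $$\begin{aligned} &Y_\Delta^{12,34}(z_1)Y_\Delta^{34,56}(z_2)\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2) Y_{\widehat{\ell'}}^{45}(z_1)Y_{\widehat{\ell'}}^{56}(z_2) S_{\ell,\ell'}^{34}(-z_1) S_{\ell,\ell'}^{24}(-z_1+z_2)S_{\ell,\ell'}^{35}(-z_2) \sigma^{345}\sigma^{23}.\end{aligned}$$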
From the induction assumption, we have $$\begin{aligned} \label{eq:k+1} &Y_\Delta^{12,34}(z_1)Y_\Delta^{34,56}(z_2)\cdots Y_\Delta^{2k-1,2k,2k+1,2k+2}(z_k)\\ =&Y_\Delta^{12,34}(z_1)Y_{\widehat{\ell}}^{34}(z_2)Y_{\widehat{\ell}}^{45}(z_3)\cdots Y_{\widehat{\ell}}^{k+1,k+2}(z_k)\nonumber\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)\nonumber\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+2,k+3,k+4,\dots,2k+1}\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}\nonumber\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1)S_{\ell,\ell'}^{23}(-z_1)\sigma^{45} Y_{\widehat{\ell}}^{34}(z_2)\cdots Y_{\widehat{\ell}}^{k+1,k+2}(z_k)\nonumber\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)\nonumber\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+2,k+3,k+4,\dots,2k+1}\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}\nonumber\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1)S_{\ell,\ell'}^{23}(-z_1) Y_{\widehat{\ell}}^{23}(z_2)Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k)\nonumber\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)\nonumber\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+1,k+2}\sigma^{k,k+1}\cdots \sigma^{34}\sigma^{23} \sigma^{k+2,k+3,k+4,\dots,2k+1}\nonumber\\ &\times\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}.\nonumber\end{aligned}$$ By utilizing equations [\[eq:multqyb-hex1\]](#eq:multqyb-hex1){reference-type="eqref" reference="eq:multqyb-hex1"} and [\[eq:multqyb-hex2\]](#eq:multqyb-hex2){reference-type="eqref" reference="eq:multqyb-hex2"}, the equation [\[eq:k+1\]](#eq:k+1){reference-type="eqref" reference="eq:k+1"} becomes $$\begin{aligned} &Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1) Y_{\widehat{\ell}}^{23}(z_2)S_{\ell,\ell'}^{34}(-z_1)S_{\ell,\ell'}^{24}(-z_1+z_2)\\ &\times Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k) Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\\ %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+1,k+2}\sigma^{k,k+1}\cdots \sigma^{34}\sigma^{23} \sigma^{k+2,k+3,k+4,\dots,2k+1}\\ &\times\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1) Y_{\widehat{\ell}}^{23}(z_2)S_{\ell,\ell'}^{34}(-z_1)Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k)\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)S_{\ell,\ell'}^{2,k+2}(-z_1+z_2)\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\\ 
%\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+1,k+2}\sigma^{k,k+1}\cdots \sigma^{34}\sigma^{23}\sigma^{k+2,k+3,k+4,\dots,2k+1}\\ &\times\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1) Y_{\widehat{\ell}}^{23}(z_2)Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k)\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k)\\ &\times S_{\ell,\ell'}^{k+1,k+2}(-z_1+z_{k+1})\cdots S_{\ell,\ell'}^{3,k+2}(-z_1+z_3)S_{\ell,\ell'}^{2,k+2}(-z_1+z_2)\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\\ %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) &\times\cdots \prod_{a-b=1}S_{\ell,\ell'}^{a+1,k+2+b}(-z_{b+1}+z_{a+1})\nonumber\\ &\times \sigma^{k+1,k+2}\sigma^{k,k+1}\cdots \sigma^{34}\sigma^{23}\sigma^{k+2,k+3,k+4,\dots,2k+1}\\ &\times\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{45}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1) Y_{\widehat{\ell}}^{23}(z_2)Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k)\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k) \prod_{a-b=k}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a})\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a}) %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) \cdots \prod_{a-b=1}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a})\nonumber\\ &\times \sigma^{k+1,k+2} \sigma^{k+2,k+3,k+4,\dots,2k+1}\sigma^{k,k+1}\\ &\times\sigma^{k+1,k+2,k+3,\dots,2k-1} \cdots\sigma^{34}\sigma^{45}\sigma^{23}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell'}}^{34}(z_1) Y_{\widehat{\ell}}^{23}(z_2)Y_{\widehat{\ell}}^{34}(z_3)\cdots Y_{\widehat{\ell}}^{k,k+1}(z_k)\\ &\times Y_{\widehat{\ell'}}^{k+3,k+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2k+1,2k+2}(z_k) \prod_{a-b=k}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a})\\ &\times \prod_{a-b=k-1}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a}) %\prod_{a-b=k-2}S_{\ell,\ell'}^{a+2,k+2+b}(-z_{b+1}+z_{a+1}) \cdots \prod_{a-b=1}S_{\ell,\ell'}^{a,k+1+b}(-z_{b}+z_{a})\nonumber\\ &\times \sigma^{k+1,k+2,k+3,\dots,2k+1} \sigma^{k,k+1,k+2,\dots,2k-1} \cdots\sigma^{345}\sigma^{23}.\end{aligned}$$ This proves the equation [\[eq:Y-Delta\]](#eq:Y-Delta){reference-type="eqref" reference="eq:Y-Delta"} for $n=k+1$. ◻ **Proposition 108**. *Let $n\in{\mathbb{Z}}_+$, and $i_1,\dots,i_n\in I$. For a subset $J$ of $\{1,2,\dots,n\}$, we set $$\begin{aligned} u_{i_a}^{J}=\begin{cases} q^{2r\ell\partial}E_\ell(h_{i_a}),&\mbox{if }a\in J,\\ x_{i_a}^+,&\mbox{if }a\not\in J, \end{cases} \quad\mbox{and}\quad v_{i_a}^{J}=\begin{cases} q^{2r\ell\partial}x_{i_a}^+,&\mbox{if }a\in J,\\ {{\bf 1}},&\mbox{if }a\not\in J. \end{cases}\end{aligned}$$ Assume $J=\{a_1<a_2<\cdots<a_k\}$ and the complementary set $\{b_1<b_2<\cdots<b_{n-k}\}$. 
We set $$\begin{aligned} &Y_J^+(z_1,\dots,z_n)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_{i_{b_1}}^+,z_{b_1})Y_{\widehat{\ell}}(x_{i_{b_2}}^+,z_{b_2})\cdots Y_{\widehat{\ell}}(x_{i_{b_{n-k}}}^+,z_{b_{n-k}})\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ &{Y_J'}^+(z_1,\dots,z_n)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}'}(x_{i_{a_1}}^+,z_{a_1}+2r\ell\hbar) Y_{\widehat{\ell}'}(x_{i_{a_2}}^+,z_{a_2}+2r\ell\hbar)\cdots Y_{\widehat{\ell}'}(x_{i_{a_k}}^+,z_{a_k}+2r\ell\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}},\\ &Y_{i_1,\dots,i_n,J,\Delta}^+(z_1,\dots,z_n)\\ =&\iota_{z_1,\dots,z_n}\prod_{1\le a<b\le n} f(z_a-z_b)^{-\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}}} \\ %\frac{ f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar) }{f(z_a-z_b)^{\delta_{i_a,i_b}}}\\ &\times Y_\Delta(u_{i_1}^{J}\otimes v_{i_1}^{J},z_1)\cdots Y_\Delta(u_{i_n}^{J}\otimes v_{i_n}^{J},z_n).\end{aligned}$$ Then $$\begin{aligned} &Y_{i_1,\dots,i_n,J,\Delta}^+(z_1,\dots,z_n){{\bf 1}}\otimes{{\bf 1}}\\ =&\iota_{z_1,\dots,z_n}f_{i_1,\dots,i_n,J,\hbar}^+(z_1,\dots,z_n) X_{i_1,\dots,i_n,J,\Delta}^+(z_1,\dots,z_n),\end{aligned}$$ where $X_{i_1,\dots,i_n,J,\Delta}^+(z_1,\dots,z_n)\in \left(F_{\hat{\mathfrak g}}^\ell\widehat{\otimes}F_{\hat{\mathfrak g}}^{\ell'}\right)((z_1,\dots,z_n))[[\hbar]]$ is defined by $$\begin{aligned} &\prod_{\substack{a<b\\ a\not\in J,b\in J}}(-1)^{\delta_{i_a,i_b}+1} \prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right)\\ &\quad\times Y_J^+(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^+(z_1,\dots,z_n){{\bf 1}}\end{aligned}$$ and $$\begin{aligned} f_{i_1,\dots,i_n,J,\hbar}^+(z_1,\dots,z_n)=\prod_{a\in J,b\not\in J} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b} } }.\end{aligned}$$* *Proof.* From [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"}, [\[eq:multqyb-shift-total2\]](#eq:multqyb-shift-total2){reference-type="eqref" reference="eq:multqyb-shift-total2"}, [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"} and [\[eq:S-E-5\]](#eq:S-E-5){reference-type="eqref" reference="eq:S-E-5"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)(u_{i_a}^{J}\otimes v_{i_b}^{J})=u_{i_a}^{J}\otimes v_{i_b}^{J}\otimes g(u_{i_a}^{J},v_{i_b}^{J},z),\end{aligned}$$ where $$\begin{aligned} g(u_{i_a}^{J},v_{i_b}^{J},z) =\begin{cases} %\displaystyle\frac{f(z-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z-r_{i_a}a_{i_a,i_b}\hbar)} f(z)^{(q^{r_{i_a} a_{i_a,i_b} } -q^{-r_{i_a}a_{i_a,i_b}} )(1-q^{-2r\ell})} ,& \mbox{if }a,b\in J,\\ f(z)^{ q^{-2r\ell}(q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}}) } %\displaystyle\frac{f(z-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} ,&\mbox{if }a\not\in J,\,b\in J,\\ 1,&\mbox{otherwise}. 
\end{cases}\end{aligned}$$ From Lemma [Lemma 107](#lem:Y-Delta-type2){reference-type="ref" reference="lem:Y-Delta-type2"}, it follows that $$\begin{aligned} &Y_\Delta^{12,34}(z_1)\cdots Y_\Delta^{2n-1,2n,2n+1,2n+2}(z_n)\\ &\quad\left(u_{i_1}^{J}\otimes v_{i_1}^{J}\otimes\cdots \otimes u_{i_n}^{J}\otimes v_{i_n}^{J}\otimes{{\bf 1}}\otimes{{\bf 1}}\right)\nonumber\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2)\cdots Y_{\widehat{\ell}}^{n,n+1}(z_n) Y_{\widehat{\ell'}}^{n+2,n+3}(z_1)\nonumber\\ &\times Y_{\widehat{\ell'}}^{n+3,n+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2n+1,2n+2}(z_n) \prod_{a-b=n-1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ &\times\prod_{a-b=n-2}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a) \cdots \prod_{a-b=1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ &\quad\left(u_{i_1}^{J}\otimes\cdots\otimes u_{i_n}^{J}\otimes v_{i_1}^{J}\otimes\cdots v_{i_n}^{J}\otimes{{\bf 1}}\otimes{{\bf 1}}\right)\nonumber\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2)\cdots Y_{\widehat{\ell}}^{n,n+1}(z_n) Y_{\widehat{\ell'}}^{n+2,n+3}(z_1)\\ &\times Y_{\widehat{\ell'}}^{n+3,n+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2n+1,2n+2}(z_n)\nonumber\\ &\quad \left(u_{i_1}^{J}\otimes\cdots\otimes u_{i_n}^{J}\otimes{{\bf 1}}\otimes v_{i_1}^{J}\otimes\cdots v_{i_n}^{J}\otimes{{\bf 1}}\right)\\ &\times \prod_{b=1}^{n-1}\prod_{a=b+1}^n g(u_{i_a}^{J},v_{i_b}^{J},-z_b+z_a)\nonumber\\ =&Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}} \otimes Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\nonumber\\ &\times \prod_{\substack{b<a\\ a,b\in J}} f(-z_b+z_a)^{(q^{r_{i_a} a_{i_a,i_b} } -q^{-r_{i_a}a_{i_a,i_b}} )(1-q^{-2r\ell})}\\ %\frac{f(-z_b+z_a-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(-z_b+z_a+r_{i_a}a_{i_a,i_b}\hbar)} % {f(-z_b+z_a-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(-z_b+z_a-r_{i_a}a_{i_a,i_b}\hbar)}\\ &\times\prod_{\substack{b<a\\ b\in J,a\not\in J}} f(-z_b+z_a)^{ q^{-2r\ell}(q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}}) }\nonumber\\ %\frac{f(-z_b+z_a-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(-z_b+z_a-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)}\nonumber\\ =&Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}} \otimes Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\nonumber\\ &\times \prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{(q^{-r_{i_a} a_{i_a,i_b} } -q^{r_{i_a}a_{i_a,i_b}} )(1-q^{2r\ell})}\\ %\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)}\\ &\times\prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{(q^{r_{i_a} a_{i_a,i_b} } -q^{-r_{i_a}a_{i_a,i_b}} )q^{2r\ell}} %\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} \nonumber.\end{aligned}$$ The definition of normal ordering produces (see [\[eq:def-normal-ordering\]](#eq:def-normal-ordering){reference-type="eqref" reference="eq:def-normal-ordering"}) provides that $$\begin{aligned} &Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\\ = &\iota_{z_1,\dots,z_n}\prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{\delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} {Y_J'}^+(z_1,\dots,z_n){{\bf 1}}.\end{aligned}$$ Using [\[eq:com-formulas-11\]](#eq:com-formulas-11){reference-type="eqref" reference="eq:com-formulas-11"}, 
[\[eq:com-formulas-12\]](#eq:com-formulas-12){reference-type="eqref" reference="eq:com-formulas-12"}, [\[eq:com-formulas-13\]](#eq:com-formulas-13){reference-type="eqref" reference="eq:com-formulas-13"} and Proposition [Proposition 92](#prop:Y-E){reference-type="ref" reference="prop:Y-E"}, we get that $$\begin{aligned} &Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}}\\ =&\prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right) Y_J^+(z_1,\dots,z_n) \prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^+(z_a+2r\ell\hbar)\right){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{ (q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}})(1-q^{2r\ell}) }\\ %\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)}\\ %&\times %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)}\\ &\times \prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{ (q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}})q^{2r\ell} } %\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} \\ &\times \prod_{\substack{a<b\\ a,b\not\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } \prod_{\substack{a<b\\ a\not\in J,b\in J}} f(z_a-z_b)^{q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} \\ =&\prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right) Y_J^+(z_1,\dots,z_n){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{ (q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}})(1-q^{2r\ell}) }\\ %\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)}\\ &\times \prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{ (q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}})q^{2r\ell} }\\ %\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} &\times \prod_{\substack{a<b\\ a,b\not\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \prod_{\substack{a<b\\ a\not\in J,b\in J}} f(z_a-z_b)^{q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}}}. 
%\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)}.\end{aligned}$$ Combining these equations, we finally get that $$\begin{aligned} &Y_\Delta(u_{i_1}^{J}\otimes v_{i_1}^{J},z_1)\cdots Y_\Delta(u_{i_n}^{J}\otimes v_{i_n}^{J},z_n){{\bf 1}}\otimes{{\bf 1}}\\ =&\prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right) Y_J^+(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^+(z_1,\dots,z_n){{\bf 1}}\\ %&\times \prod_{\substack{a<b\\ a,b\in J}}\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)}\\ % &\times\prod_{\substack{a<b\\ a\in J,b\not\in J}}\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)}\nonumber\\ % &\times % \prod_{\substack{a<b\\ a,b\in J}}\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)}\\ &\times \prod_{\substack{a<b\\ a,b\not\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} %\prod_{\substack{a<b\\ a\in J,b\not\in J}}\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} \prod_{\substack{a<b\\ a\not\in J,b\in J}} f(z_a-z_b)^{q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} \\ &\times\prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ =&\prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right) Y_J^+(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^+(z_1,\dots,z_n){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} \prod_{\substack{a<b\\ a\not\in J,b\in J}} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} \\ &\times\prod_{a<b} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ %%%%%%%%%%%%%%%%%%%%%%% % =&\prod_{a\in J}\exp\(\wt h_{i_a,\hbar}^-(z_a+2r\ell\hbar)\) % Y_J(z_1,\dots,z_n)\vac \ot Y_J'(z_1,\dots,z_n)\vac\\ % &\times % \prod_{\substack{a<b\\ a\in J,b\not\in J}}\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} % \prod_{\substack{a<b\\ a\not\in J,b\in J}}(-1)^{\delta_{i_a,i_b}+1}\frac{f(z_b-z_a-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_b-z_a)^{\delta_{i_a,i_b}}}\\ % &\times\prod_{a<b} % \frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)}\\ %%%%%%%%%%%%%%%%%%%%% =&\prod_{a\in J}\exp\left(\widetilde{h}_{i_a}^-(z_a+2r\ell\hbar)\right) Y_J^+(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^+(z_1,\dots,z_n){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a\not\in J,b\in J}}(-1)^{\delta_{i_a,i_b}+1} \prod_{a<b} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ &\times \prod_{a\in J,b\not\in J} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} }. %\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}}.\end{aligned}$$ The proof of the proposition is now complete. ◻ **Proposition 109**. 
*Let $n\in{\mathbb{Z}}_+$, and $i_1,\dots,i_n\in I$. For a subset $J$ of $\{1,2,\dots,n\}$, we set $$\begin{aligned} u_{i_a}^J=\begin{cases} {{\bf 1}}, & \mbox{if }a\in J,\\ x_{i_a}^-,&\mbox{if }a\not\in J, \end{cases} \quad\mbox{and}\quad v_{i_a}^J=\begin{cases} x_{i_a}^-, & \mbox{if }a\in J,\\ {{\bf 1}},&\mbox{if }a\not\in J. \end{cases}\end{aligned}$$ Assume $J=\{a_1<a_2<\cdots<a_k\}$ and the complementary set $\{b_1<b_2<\cdots<b_{n-k}\}$. We set $$\begin{aligned} &Y_J^-(z_1,\dots,z_n)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_{i_{b_1}}^-,z_{b_1})Y_{\widehat{\ell}}(x_{i_{b_2}}^-,z_{b_2})\cdots Y_{\widehat{\ell}}(x_{i_{b_{n-k}}}^-,z_{b_{n-k}})\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ &{Y_J'}^-(z_1,\dots,z_n)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}'}(x_{i_{a_1}}^-,z_{a_1}) Y_{\widehat{\ell}'}(x_{i_{a_2}}^-,z_{a_2})\cdots Y_{\widehat{\ell}'}(x_{i_{a_k}}^-,z_{a_k})\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}},\\ &Y_{i_1,\dots,i_n,J,\Delta}^-(z_1,\dots,z_n)\\ =&\iota_{z_1,\dots,z_n}\prod_{1\le a<b\le n} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} } %\frac{ f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar) }{f(z_a-z_b)^{\delta_{i_a,i_b}}} \\ &\times Y_\Delta(u_{i_1}^{J}\otimes v_{i_1}^{J},z_1)\cdots Y_\Delta(u_{i_n}^{J}\otimes v_{i_n}^{J},z_n).\end{aligned}$$ Then $$\begin{aligned} &Y_{i_1,\dots,i_n,J,\Delta}^-(z_1,\dots,z_n){{\bf 1}}\otimes{{\bf 1}}\\ =&\iota_{z_1,\dots,z_n}f_{i_1,\dots,i_n,J,\hbar}^-(z_1,\dots,z_n) X_{i_1,\dots,i_n,J,\Delta}^-(z_1,\dots,z_n),\end{aligned}$$ where $X_{i_1,\dots,i_n,J,\Delta}^-(z_1,\dots,z_n)\in \left(F_{\hat{\mathfrak g}}^\ell\widehat{\otimes}F_{\hat{\mathfrak g}}^{\ell'}\right)((z_1,\dots,z_n))[[\hbar]]$ is defined by $$\begin{aligned} &\prod_{\substack{a<b\\ a\in J,b\not\in J}}(-1)^{\delta_{i_a,i_b}+1} Y_J^-(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^-(z_1,\dots,z_n){{\bf 1}}\end{aligned}$$ and $$\begin{aligned} f_{i_1,\dots,i_n,J,\hbar}^-(z_1,\dots,z_n)=\prod_{a\not\in J,b\in J} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} }. %\frac{ f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar) }{f(z_a-z_b)^{\delta_{i_a,i_b}}}.\end{aligned}$$* *Proof.* From [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)(u_{i_a}^J\otimes v_{i_b}^J)=u_{i_a}^J\otimes v_{i_b}^J\otimes g(u_{i_a}^J, v_{i_b}^J,z),\end{aligned}$$ where $$\begin{aligned} g(u_{i_a}^{J},v_{i_b}^{J},z) =\begin{cases} f(z)^{q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}} } %\displaystyle\frac{f(z-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} ,&\mbox{if }a\not\in J,\,b\in J,\\ 1,&\mbox{otherwise}. 
\end{cases}\end{aligned}$$ From Lemma [Lemma 107](#lem:Y-Delta-type2){reference-type="ref" reference="lem:Y-Delta-type2"}, it follows that $$\begin{aligned} &Y_\Delta^{12,34}(z_1)\cdots Y_\Delta^{2n-1,2n,2n+1,2n+2}(z_n)\\ &\quad\left(u_{i_1}^{J}\otimes v_{i_1}^{J}\otimes\cdots \otimes u_{i_n}^{J}\otimes v_{i_n}^{J}\otimes{{\bf 1}}\otimes{{\bf 1}}\right)\nonumber\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2)\cdots Y_{\widehat{\ell}}^{n,n+1}(z_n) Y_{\widehat{\ell'}}^{n+2,n+3}(z_1)\nonumber\\ &\times Y_{\widehat{\ell'}}^{n+3,n+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2n+1,2n+2}(z_n) \prod_{a-b=n-1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ &\times\prod_{a-b=n-2}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a) \cdots \prod_{a-b=1}S_{\ell,\ell'}^{a,n+b}(-z_b+z_a)\nonumber\\ &\quad\left(u_{i_1}^{J}\otimes\cdots\otimes u_{i_n}^{J}\otimes v_{i_1}^{J}\otimes\cdots v_{i_n}^{J}\otimes{{\bf 1}}\otimes{{\bf 1}}\right)\nonumber\\ =&Y_{\widehat{\ell}}^{12}(z_1)Y_{\widehat{\ell}}^{23}(z_2)\cdots Y_{\widehat{\ell}}^{n,n+1}(z_n) Y_{\widehat{\ell'}}^{n+2,n+3}(z_1)\\ &\times Y_{\widehat{\ell'}}^{n+3,n+4}(z_2)\cdots Y_{\widehat{\ell'}}^{2n+1,2n+2}(z_n)\nonumber\\ &\quad \left(u_{i_1}^{J}\otimes\cdots\otimes u_{i_n}^{J}\otimes{{\bf 1}}\otimes v_{i_1}^{J}\otimes\cdots v_{i_n}^{J}\otimes{{\bf 1}}\right)\\ &\times \prod_{b=1}^{n-1}\prod_{a=b+1}^n g(u_{i_a}^{J},v_{i_b}^{J},-z_b+z_a)\nonumber\\ =&Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}} \otimes Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\nonumber\\ &\times\prod_{\substack{b<a\\ b\in J,a\not\in J}} f(-z_b+z_a)^{ q^{-r_{i_a}a_{i_a,i_b}}-q^{r_{i_a}a_{i_a,i_b}} }\nonumber\\ %\frac{f(-z_b+z_a-(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(-z_b+z_a-(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)}\nonumber\\ =&Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}} \otimes Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\nonumber\\ &\times\prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{q^{r_{i_a} a_{i_a,i_b} } -q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} \nonumber.\end{aligned}$$ The definition of normal ordering produces (see [\[eq:def-normal-ordering\]](#eq:def-normal-ordering){reference-type="eqref" reference="eq:def-normal-ordering"}) provides that $$\begin{aligned} &Y_{\widehat{\ell'}}(v_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell'}}(v_{i_n}^{J},z_n){{\bf 1}}\\ = &\iota_{z_1,\dots,z_n}\prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{\delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} {Y_J'}^-(z_1,\dots,z_n){{\bf 1}},\\ &Y_{\widehat{\ell}}(u_{i_1}^{J},z_1)\cdots Y_{\widehat{\ell}}(u_{i_n}^{J},z_n){{\bf 1}}\\ =&\iota_{z_1,\dots,z_n}\prod_{\substack{a<b\\ a,b\not\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } Y_J^-(z_1,\dots,z_n){{\bf 1}}.\end{aligned}$$ Combining these equations, we finally get that $$\begin{aligned} &Y_\Delta(u_{i_1}^{J}\otimes v_{i_1}^{J},z_1)\cdots Y_\Delta(u_{i_n}^{J}\otimes v_{i_n}^{J},z_n){{\bf 1}}\otimes{{\bf 1}}\\ =&Y_J^-(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^-(z_1,\dots,z_n){{\bf 1}}\\ %&\times \prod_{\substack{a<b\\ a,b\in J}}\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)}\\ % &\times\prod_{\substack{a<b\\ a\in J,b\not\in 
J}}\frac{f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)}\nonumber\\ % &\times % \prod_{\substack{a<b\\ a,b\in J}}\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)}\\ &\times \prod_{\substack{a<b\\ a,b\not\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} %\prod_{\substack{a<b\\ a\in J,b\not\in J}}\frac{f(z_a-z_b+(2r\ell-r_{i_a}a_{i_a,i_b})\hbar)} % {f(z_a-z_b+(2r\ell+r_{i_a}a_{i_a,i_b})\hbar)} \prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{q^{r_{i_a}a_{i_a,i_b}}-q^{-r_{i_a}a_{i_a,i_b}}} %\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} \\ &\times\prod_{\substack{a<b\\ a,b\in J}} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ =&Y_J^-(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^-(z_1,\dots,z_n){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a\in J,b\not\in J}} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} \prod_{\substack{a<b\\ a\not\in J,b\in J}} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b+r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} \\ &\times\prod_{a<b} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ %%%%%%%%%%%%%%%%%%%%%%% % =&\prod_{a\in J}\exp\(\wt h_{i_a,\hbar}^-(z_a+2r\ell\hbar)\) % Y_J(z_1,\dots,z_n)\vac \ot Y_J'(z_1,\dots,z_n)\vac\\ % &\times % \prod_{\substack{a<b\\ a\in J,b\not\in J}}\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}} % \prod_{\substack{a<b\\ a\not\in J,b\in J}}(-1)^{\delta_{i_a,i_b}+1}\frac{f(z_b-z_a-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_b-z_a)^{\delta_{i_a,i_b}}}\\ % &\times\prod_{a<b} % \frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)}\\ %%%%%%%%%%%%%%%%%%%%% =&Y_J^-(z_1,\dots,z_n){{\bf 1}}\otimes{Y_J'}^-(z_1,\dots,z_n){{\bf 1}}\\ &\times \prod_{\substack{a<b\\ a\in J,b\not\in J}}(-1)^{\delta_{i_a,i_b}+1} \prod_{a<b} f(z_a-z_b)^{ \delta_{i_a,i_b}-q^{-r_{i_a}a_{i_a,i_b}} } %\frac{f(z_a-z_b)^{\delta_{i_a,i_b}}}{f(z_a-z_b-r_{i_a}a_{i_ai_b}\hbar)} \\ &\times \prod_{a\not\in J,b\in J} f(z_a-z_b)^{ -\delta_{i_a,i_b}+q^{-r_{i_a}a_{i_a,i_b}} }. %\frac{f(z_a-z_b-r_{i_a}a_{i_a,i_b}\hbar)} % {f(z_a-z_b)^{\delta_{i_a,i_b}}}.\end{aligned}$$ The proof of the proposition is now complete. ◻ **Proposition 110**. 
*For $i,j\in I$ with $a_{ij}<0$ and $k\ge 0$, we have that $$\begin{aligned} &\left(\Delta(x_i^+)\right)_0^k\Delta(x_j^+)=(x_i^+)_0^kx_j^+\otimes{{\bf 1}}\\ +&\sum_{t=0}^{k} r_i^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}\\ &\times \prod_{a=t+1}^k\exp\left(\widetilde{h}_i^+((2r_i(k-a)+r_ia_{ij}+2r\ell)\hbar)\right) \exp\left(\widetilde{h}_j^+(2r\ell\hbar)\right)\\ &\times q^{(2k+a_{ij}-2t)r_i\partial}(x_i^+)_{-1}^t{{\bf 1}}\otimes q^{2r\ell\partial}(x_i^+)_0^{k-t}x_j^+.\end{aligned}$$* *Proof.* From Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta\left(x_i^+\right)\right)_0^k \Delta\left(x_j^+\right),z\right)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^+\right),z+r_i(2(k-1)+a_{ij})\hbar\right)\\ &\times Y_\Delta\left(\Delta\left(x_i^+\right),z+r_i(2(k-2)+a_{ij})\hbar\right)\\ &\cdots Y_\Delta\left(\Delta\left(x_i^+\right),z+r_ia_{ij}\hbar\right) Y_\Delta\left(\Delta\left(x_j^+\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}.\end{aligned}$$ Fix a $(k+1)$-tuple $(i,i,\dots,i,j)$. For any $J\subset\{1,2,\dots,k+1\}$, we see that $$\begin{aligned} \label{eq:Delta-Serre-+temp1} &f_{i,\dots,i,j,J,\hbar}^+(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} &\prod_{\substack{a\in J,b\not\in J\\ a,b\le k}}f(2r_i(b-a-1)\hbar) \prod_{k+1\in J,b\not\in J}f(-2r_i(k-b+a_{ij})\hbar)\\ &\quad\times\prod_{a\in J,k+1\not\in J}f(2r_i(k-a)\hbar)\ne 0.\end{aligned}$$ So relation [\[eq:Delta-Serre-+temp1\]](#eq:Delta-Serre-+temp1){reference-type="eqref" reference="eq:Delta-Serre-+temp1"} holds only if $$\begin{aligned} J=\emptyset \quad\mbox{or}\quad J=\{t+1,\dots,k+1\}\quad\mbox{for some }0\le t\le k.\end{aligned}$$ For the case $J=\{t+1,\dots,k+1\}$, we have that $$\begin{aligned} &f_{i,\dots,i,j,J,\hbar}^+(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z)\\ =&\prod_{b\le t< a\le k}\frac{f(2r_i(b-a-1)\hbar)}{f(2r_i(b-a)\hbar)} \prod_{b\le t}f(-2r_i(k-b+a_{ij})\hbar)\\ =&\prod_{b=1}^t\frac{f(2r_i(b-k-1)\hbar)}{f(2r_i(b-t-1)\hbar)}\prod_{b\le t}f(-2r_i(k-b+a_{ij})\hbar)\\ =&(-1)^t\prod_{b=1}^t\frac{f(2r_i(k+1-b)\hbar)}{f(2r_i(t-b)\hbar)}\prod_{b=1}^tf(2r_i(k-b+a_{ij})\hbar)\\ =&(-r_i)^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}.\end{aligned}$$ Then we get from Proposition [Proposition 108](#prop:normal-ordering-J+){reference-type="ref" reference="prop:normal-ordering-J+"} that $$\begin{aligned} &\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^+\right),z+r_i(2(k-1)+a_{ij})\hbar\right)\\ &\cdots Y_\Delta\left(\Delta\left(x_i^+\right),z+r_ia_{ij}\hbar\right) Y_\Delta\left(\Delta\left(x_j^+\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ =&\sum_{J\subset\{1,2,\dots,k+1\}}Y_{J,\Delta}^+\left(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z\right)\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}\left(x_i^+,z+r_i(2(k-1)+a_{ij})\hbar\right)\cdots Y_{\widehat{\ell}}\left(x_i^+,z+r_ia_{ij}\hbar\right)\\ &\quad\times Y_{\widehat{\ell}}\left(x_j^+,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\otimes{{\bf 1}}\\ +&\sum_{t=0}^{k} 
r_i^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}\\ &\times q^{2r\ell\frac{\partial}{\partial {z}}} \prod_{a=t+1}^k\exp\left(\widetilde{h}_i^+(z+(2r_i(k-a)+r_ia_{ij})\hbar)\right) \exp\left(\widetilde{h}_j^+(z)\right)\\ &\times q^{(2k+a_{ij}-2t)r_i\frac{\partial}{\partial {z}}}\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}\left(x_i^+,z+2(t-1)r_i\hbar\right) \cdots Y_{\widehat{\ell}}\left(x_i^+,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ &\quad\otimes q^{2r\ell\frac{\partial}{\partial {z}}}\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell'}}\left(x_i^+,z+(2r_i(k-t-1)+r_ia_{ij})\hbar\right)\\ &\quad\quad \cdots Y_{\widehat{\ell'}}\left(x_i^+,z+r_ia_{ij}\hbar\right) Y_{\widehat{\ell'}}\left(x_j^+,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}.\end{aligned}$$ Taking $z\to 0$, we complete the proof of proposition. ◻ **Proposition 111**. *For $i,j\in I$ with $a_{ij}<0$ and $k\ge 0$, we have that $$\begin{aligned} &\left(\Delta(x_i^-)\right)_0^k\Delta(x_j^-)={{\bf 1}}\otimes(x_i^-)_0^kx_j^-\\ +&\sum_{t=0}^kr_i^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}\\ &\times (x_i^-)_0^{k-t}x_j^-\otimes q^{r_i(2(k-t)+a_{ij})\partial}(x_i^-)_{-1}^t{{\bf 1}}.\end{aligned}$$* *Proof.* From Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta\left(x_i^-\right)\right)_0^k \Delta\left(x_j^-\right),z\right)\\ =&\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^-\right),z+r_i(2(k-1)+a_{ij})\hbar\right)\\ &\times Y_\Delta\left(\Delta\left(x_i^-\right),z+r_i(2(k-2)+a_{ij})\hbar\right)\\ &\cdots Y_\Delta\left(\Delta\left(x_i^-\right),z+r_ia_{ij}\hbar\right) Y_\Delta\left(\Delta\left(x_j^-\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}.\end{aligned}$$ Fix a $(k+1)$-tuple $(i,i,\dots,i,j)$. 
For any $J\subset\{1,2,\dots,k+1\}$, we see that $$\begin{aligned} \label{eq:Delta-Serre-+temp1} &f_{i,\dots,i,j,J,\hbar}^-(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} &\prod_{\substack{a\not\in J,b\in J\\ a,b\le k}}f(2r_i(b-a-1)\hbar) \prod_{k+1\not\in J,b\in J}f(-2r_i(k-b+a_{ij})\hbar)\\ &\quad\times\prod_{a\not\in J,k+1\in J}f(2r_i(k-a)\hbar)\ne 0.\end{aligned}$$ So relation [\[eq:Delta-Serre-+temp1\]](#eq:Delta-Serre-+temp1){reference-type="eqref" reference="eq:Delta-Serre-+temp1"} holds only if $$\begin{aligned} J=\{1,2,\dots,k+1\} \quad\mbox{or}\quad J=\{1,2,\dots,t\}\quad\mbox{for some }0\le t\le k.\end{aligned}$$ For the case $J=\{1,2,\dots,t\}$, we have that $$\begin{aligned} &f_{i,\dots,i,j,J,\hbar}^-(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z)\\ =&\prod_{b\le t< a\le k}\frac{f(2r_i(b-a-1)\hbar)}{f(2r_i(b-a)\hbar)} \prod_{b\le t}f(-2r_i(k-b+a_{ij})\hbar)\\ =&\prod_{b=1}^t\frac{f(2r_i(b-k-1)\hbar)}{f(2r_i(b-t-1)\hbar)}\prod_{b\le t}f(-2r_i(k-b+a_{ij})\hbar)\\ =&(-1)^t\prod_{b=1}^t\frac{f(2r_i(k+1-b)\hbar)}{f(2r_i(t-b)\hbar)}\prod_{b=1}^tf(2r_i(k-b+a_{ij})\hbar)\\ =&(-r_i)^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}.\end{aligned}$$ Then we get from Proposition [Proposition 109](#prop:normal-ordering-J-){reference-type="ref" reference="prop:normal-ordering-J-"} that $$\begin{aligned} &\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^-\right),z+r_i(2(k-1)+a_{ij})\hbar\right)\\ &\cdots Y_\Delta\left(\Delta\left(x_i^-\right),z+r_ia_{ij}\hbar\right) Y_\Delta\left(\Delta\left(x_j^-\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ =&\sum_{J\subset\{1,2,\dots,k+1\}}Y_{J,\Delta}^-\left(z+r_i(2(k-1)+a_{ij})\hbar,\cdots,z+r_ia_{ij}\hbar,z\right)\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ =&{{\bf 1}}\otimes\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell'}}\left(x_i^-,z+r_i(2(k-1)+a_{ij})\hbar\right)\cdots Y_{\widehat{\ell'}}\left(x_i^-,z+r_ia_{ij}\hbar\right)\\ &\quad\times Y_{\widehat{\ell'}}\left(x_j^-,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ +&\sum_{t=0}^{k} r_i^t[t]_{q^{r_i}}!(q^{r_i}-q^{-r_i})^t\binom{k}{t}_{q^{r_i}}\binom{k-1+a_{ij}}{t}_{q^{r_i}}\\ &\times \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}\left(x_i^-,z+(2r_i(k-t-1)+r_ia_{ij})\hbar\right)\\ &\quad\quad \cdots Y_{\widehat{\ell}}\left(x_i^-,z+r_ia_{ij}\hbar\right) Y_{\widehat{\ell}}\left(x_j^-,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ &\quad\otimes q^{r_i(2(k-t)+a_{ij})\frac{\partial}{\partial {z}}} \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell'}}\left(x_i^-,z+2(t-1)r_i\hbar\right) \cdots Y_{\widehat{\ell'}}\left(x_i^-,z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}.\end{aligned}$$ Taking $z\to 0$, we complete the proof of proposition. ◻ **Proposition 112**. 
*For $i\in I$ and $k\ge 0$, we have that $$\begin{aligned} &\left(\Delta\left(x_i^+\right)\right)_{-1}^k({{\bf 1}}\otimes{{\bf 1}})\\ =&\sum_{t=0}^k f_0(2tr_i\hbar)\binom{k-1}{t}_{q^{r_i}} \binom{k}{t}_{q^{r_i}}\binom{k-1}{t}^{-1}\\ % \prod_{a=1}^{k-1}F(ar_i)\prod_{a=1}^{t-1}F(ar_i)\inv\prod_{a=1}^{k-t-1}F(ar_i)\inv\\ \times&\prod_{a=0}^{k-t-1}\exp\left(\widetilde{h}_i^+(2(ar_i+r\ell)\hbar)\right) q^{2r_i(k-t)\partial}\left(x_i^+\right)_{-1}^t{{\bf 1}}\\ &\quad\otimes q^{2r\ell\partial}\left(x_i^+\right)_{-1}^{k-t}{{\bf 1}}.\end{aligned}$$* *Proof.* From Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta\left(x_i^+\right)\right)_{-1}^k\left({{\bf 1}}\otimes{{\bf 1}}\right),z\right)\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^+\right),z+2(k-1)r_i\hbar\right)\\ &\times Y_\Delta\left(\Delta\left(x_i^+\right),z+2(k-2)r_i\hbar\right) \cdots Y_\Delta\left(\Delta\left(x_i^+\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}.\end{aligned}$$ Fix a $k$-tuple $(i,\dots,i)$. For any $J\subset \{1,2,\dots,k\}$, we see that $$\begin{aligned} f_{i,\dots,i,J,\hbar}^+(z+2r_i(k-1)\hbar,z+2r_i(k-2)\hbar,\dots,z)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} \prod_{a\in J,b\not\in J}f(2r_i(b-a-1)\hbar)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} J=\{t+1,t+2,\dots,k\}\quad\mbox{for some }0\le t\le k.\end{aligned}$$ For each $J=\{t+1,t+2,\dots,k\}$, we have $$\begin{aligned} &f_{i,\dots,i,J,\hbar}^+(z+2r_i(k-1)\hbar,z+2r_i(k-2)\hbar,\dots,z)\\ =&\prod_{a>t\ge b}\frac{f(2r_i(b-a-1)\hbar)}{f(2r_i(b-a))} =\binom{k}{t}_{q^{r_i}}.\end{aligned}$$ Using Proposition [Proposition 108](#prop:normal-ordering-J+){reference-type="ref" reference="prop:normal-ordering-J+"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta(x_i^+)\right)_{-1}^k\left({{\bf 1}}\otimes{{\bf 1}}\right),z\right)\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta(x_i^+),z+2(k-1)r_i\hbar\right)\\ &\times Y_\Delta\left(\Delta(x_i^+),z+2(k-2)r_i\hbar\right) \cdots Y_\Delta\left(\Delta(x_i^+),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k Y_{i,\dots,i,J,\Delta}^+(z+2(k-1)r_i\hbar,z+2(k-2)r_i\hbar,\dots,z){{\bf 1}}\otimes{{\bf 1}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k f_{i,\dots,i,J,\hbar}^+(z+2(k-1)r_i\hbar,z+2(k-2)r_i\hbar,\dots,z)\\ &\times \prod_{a=t+1}^k\exp\left(\widetilde{h}_i^-(z+2((k-a)r_i+r\ell)\hbar)\right)\\ &\times \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_i^+,z+2(k-1)r_i\hbar)\cdots Y_{\widehat{\ell}}(x_i^+,z+2(k-t)r_i\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ &\quad \otimes\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell'}}(x_i^+,z+2((k-t-1)r_i+r\ell)\hbar)\cdots Y_{\widehat{\ell'}}(x_i^+,z+2r\ell\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k \prod_{a=t+1}^k\exp\left(\widetilde{h}_i^-(z+2((k-a)r_i+r\ell)\hbar)\right)\\ &\times \prod_{a=1}^{t-1}f_0(2ar_i\hbar)^{-1}\prod_{a=1}^{k-t-1}f_0(2ar_i\hbar)^{-1} \binom{k}{t}_{q^{r_i}}\\ &\times Y_{\widehat{\ell}}\left(\left(x_i^+\right)_{-1}^t{{\bf 1}},z+2(k-t)r_i\hbar\right){{\bf 1}} 
\otimes Y_{\widehat{\ell'}}\left(\left(x_i^+\right)_{-1}^{k-t}{{\bf 1}},z+2r\ell\hbar\right) {{\bf 1}}.\end{aligned}$$ Taking $z=0$, we get that $$\begin{aligned} &\left(\Delta(x_i^+)\right)_{-1}^k({{\bf 1}}\otimes{{\bf 1}})\\ =&\sum_{t=0}^k\prod_{a=t+1}^k\exp\left(\widetilde{h}_i^-(2((k-a)r_i+r\ell)\hbar)\right) q^{2(k-t)r_i\partial}(x_i^+)_{-1}^t{{\bf 1}}\\ &\otimes q^{2r\ell\partial}(x_i^+)_{-1}^{k-t}{{\bf 1}} \otimes f_0(2tr_i\hbar)\binom{k-1}{t}_{q^{r_i}} \binom{k}{t}_{q^{r_i}}\binom{k-1}{t}^{-1},\end{aligned}$$ which completes the proof of the proposition. ◻

**Proposition 113**. *For $i\in I$ and $k\ge 0$, we have that $$\begin{aligned} &\left(\Delta(x_i^-)\right)_{-1}^k({{\bf 1}}\otimes{{\bf 1}}) =\sum_{t=0}^k(x_i^-)_{-1}^{k-t}{{\bf 1}} \otimes q^{2(k-t)r_i\partial}(x_i^-)_{-1}^t{{\bf 1}}\\ &\quad\otimes f_0(2tr_i\hbar)\binom{k-1}{t}_{q^{r_i}} \binom{k}{t}_{q^{r_i}}\binom{k-1}{t}^{-1}.\end{aligned}$$*

*Proof.* From Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta\left(x_i^-\right)\right)_{-1}^k\left({{\bf 1}}\otimes{{\bf 1}}\right),z\right)\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta\left(x_i^-\right),z+2(k-1)r_i\hbar\right)\\ &\times Y_\Delta\left(\Delta\left(x_i^-\right),z+2(k-2)r_i\hbar\right) \cdots Y_\Delta\left(\Delta\left(x_i^-\right),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}.\end{aligned}$$ Fix a $k$-tuple $(i,\dots,i)$. For any $J\subset \{1,2,\dots,k\}$, we see that $$\begin{aligned} f_{i,\dots,i,J,\hbar}^-(z+2r_i(k-1)\hbar,z+2r_i(k-2)\hbar,\dots,z)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} \prod_{a\not\in J,b\in J}f(2r_i(b-a-1)\hbar)\ne 0\end{aligned}$$ if and only if $$\begin{aligned} J=\{1,2,\dots,t\}\quad\mbox{for some }0\le t\le k.\end{aligned}$$ For each $J=\{1,2,\dots,t\}$, we have $$\begin{aligned} &f_{i,\dots,i,J,\hbar}^-(z+2r_i(k-1)\hbar,z+2r_i(k-2)\hbar,\dots,z)\\ =&\prod_{a>t\ge b}\frac{f(2r_i(b-a-1)\hbar)}{f(2r_i(b-a)\hbar)} =\binom{k}{t}_{q^{r_i}}.\end{aligned}$$ Using Proposition [Proposition 109](#prop:normal-ordering-J-){reference-type="ref" reference="prop:normal-ordering-J-"}, we have that $$\begin{aligned} &Y_\Delta\left(\left(\Delta\left(x_i^-\right)\right)_{-1}^k\left({{\bf 1}}\otimes{{\bf 1}}\right),z\right)\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_\Delta\left(\Delta(x_i^-),z+2(k-1)r_i\hbar\right)\\ &\times Y_\Delta\left(\Delta(x_i^-),z+2(k-2)r_i\hbar\right) \cdots Y_\Delta\left(\Delta(x_i^-),z\right)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k Y_{i,\dots,i,J,\Delta}^-(z+2(k-1)r_i\hbar,z+2(k-2)r_i\hbar,\dots,z){{\bf 1}}\otimes{{\bf 1}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k f_{i,\dots,i,J,\hbar}^-(z+2(k-1)r_i\hbar,z+2(k-2)r_i\hbar,\dots,z)\\ &\times \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_i^-,z+2(k-t-1)r_i\hbar)\cdots Y_{\widehat{\ell}}(x_i^-,z)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ &\quad \otimes\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell'}}(x_i^-,z+2(k-1)r_i\hbar)\cdots Y_{\widehat{\ell'}}(x_i^-,z+2(k-t)r_i\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}{{\bf 1}}\\ =& \prod_{a=1}^{k-1}f_0(2ar_i\hbar)\sum_{t=0}^k
f_{i,\dots,i,J,\hbar}^-(z+2(k-1)r_i\hbar,z+2(k-2)r_i\hbar,\dots,z)\\ &\times \prod_{a=1}^{t-1}f_0(2ar_i\hbar)^{-1}\prod_{a=1}^{k-t-1}f_0(2ar_i\hbar)^{-1} \binom{k}{t}_{q^{r_i}}\\ &\times Y_{\widehat{\ell}}\left((x_i^-)_{-1}^{k-t}{{\bf 1}},z\right){{\bf 1}} \otimes Y_{\widehat{\ell'}}\left((x_i^-)_{-1}^t{{\bf 1}},z+2(k-t)r_i\hbar\right){{\bf 1}}.\end{aligned}$$ Taking $z=0$, we get that $$\begin{aligned} &\left(\Delta(x_i^-)\right)_{-1}^k({{\bf 1}}\otimes{{\bf 1}}) =\sum_{t=0}^k(x_i^-)_{-1}^{k-t}{{\bf 1}} \otimes q^{2(k-t)r_i\partial}(x_i^-)_{-1}^t{{\bf 1}}\\ &\quad\otimes f_0(2tr_i\hbar)\binom{k-1}{t}_{q^{r_i}} \binom{k}{t}_{q^{r_i}}\binom{k-1}{t}^{-1},\end{aligned}$$ which completes the proof of the proposition. ◻

*Proof of Proposition [Proposition 86](#prop:Y-Delta-serre){reference-type="ref" reference="prop:Y-Delta-serre"}:* The equation [\[eq:prop-Y-Delta-locality\]](#eq:prop-Y-Delta-locality){reference-type="eqref" reference="eq:prop-Y-Delta-locality"} is a consequence of Lemma [Lemma 106](#lem:normal-ordering-Delta){reference-type="ref" reference="lem:normal-ordering-Delta"}, while the equation [\[eq:prop-Y-Delta-serre\]](#eq:prop-Y-Delta-serre){reference-type="eqref" reference="eq:prop-Y-Delta-serre"} can be deduced from Propositions [Proposition 110](#prop:delta-serre+){reference-type="ref" reference="prop:delta-serre+"} and [Proposition 111](#prop:delta-serre-){reference-type="ref" reference="prop:delta-serre-"}, and the equation [\[eq:prop-Y-Delta-int\]](#eq:prop-Y-Delta-int){reference-type="eqref" reference="eq:prop-Y-Delta-int"} can be derived from Propositions [Proposition 112](#prop:delta-int+){reference-type="ref" reference="prop:delta-int+"} and [Proposition 113](#prop:delta-int-){reference-type="ref" reference="prop:delta-int-"}.

# Quantum parafermion vertex algebras {#chap:qpara}

## Quantum lattice VA inside quantum affine VA {#sec:qlattice-in-qaffva}

We use the notation introduced in Subsections [2.2](#subsec:AffVA){reference-type="ref" reference="subsec:AffVA"} and [2.4](#subsec:ParaVA){reference-type="ref" reference="subsec:ParaVA"}. Let $$D=\mathop{\mathrm{diag}}{ \left.\left\{ {r/r_i} \,\right|\, {i\in I} \right\} }.$$ Then $AD$ is symmetric. Set $$\begin{aligned} A(z)=([a_{ij}]_{e^{r_iz}})_{i,j\in I},\quad D(z)=\mathop{\mathrm{diag}}{ \left.\left\{ {[r/r_i]_{e^{r_iz}}} \,\right|\, {i\in I} \right\} }.\end{aligned}$$ Since both $A$ and $D$ are invertible, we define $$\begin{aligned} \label{eq:def-C-ell-z} C^\ell(z)=(c_{ij}^\ell(z))_{i,j\in I} =\frac{1}{z}\left(e^{r\ell z}[\ell]_{e^{rz}}A(z)D(z)D^{-1}A^{-1}\ell^{-1}-I\right).\end{aligned}$$ Define $\eta_\ell(z)\in \mathcal H({\mathfrak h},z)$ as follows: $$\begin{aligned} \label{eq:def-eta} \eta_\ell(\sqrt\ell r/r_i\alpha_i,z)=&\hbar\sum_{j\in I}\sqrt\ell r/r_j\alpha_j\otimes c_{ij}^\ell\left(\hbar\frac{\partial}{\partial {z}}\right)\frac{\partial}{\partial {z}}\log f(z)\\ &+\sqrt\ell r/r_i\alpha_i\otimes\log f(z)/z.\nonumber\end{aligned}$$ The purpose of this section is to prove the following result.

**Theorem 114**.
*There exists a unique $\hbar$-adic quantum VA embedding from $V_{\sqrt\ell Q_L}^{\eta_\ell}$ to $L_{\hat{\mathfrak g},\hbar}^\ell$, such that $$\begin{aligned} \label{eq:qlattice-embedding} &\sqrt\ell r/r_i\alpha_i\mapsto h_i,\quad e_{\pm\sqrt\ell r/r_i\alpha_i}\mapsto \sqrt{c_{i,\ell}} q^{(-r\ell+r_i)\partial}\left(x_i^\pm\right)_{-1}^{r\ell/r_i}{{\bf 1}}\end{aligned}$$ where $$\begin{aligned} c_{i,\ell}=\frac{f_0(2r\ell \hbar)^{\frac{1}{2}}f_0(2(r\ell-r_i) \hbar)^{-{\frac{1}{2}}}}{([r\ell/r_i]_{q^{r_i}}!)^2 f_0(2r_i\hbar)^{r\ell/r_i} }.\end{aligned}$$* To enhance readability and convenience, we define $$\begin{aligned} &\beta_i=\sqrt\ell r/r_i\alpha_i,\quad f_i^\pm=\sqrt{c_{i,\ell}} q^{(r_i-r\ell)\partial}\left(x_i^\pm\right)_{-1}^{r\ell/r_i}{{\bf 1}} \quad\mbox{for }i\in I.\end{aligned}$$ **Lemma 115**. *For $i,j\in I$, we have that $$\begin{aligned} &{\langle}\eta_\ell(\beta_i,z),\beta_j{\rangle} =\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}q^{r\ell} }-a_{ij}r\ell/r_j \log z, \label{eq:eta-sp}\\ &e^{{\langle}\eta_\ell(\beta_i,z),\beta_j{\rangle}}=f(z)^{[a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}q^{r\ell} }z^{-a_{ij}r\ell/r_j}. \label{eq:eta-sp-exp}\end{aligned}$$* *Proof.* Based on the definition of $C^\ell(z)$ as shown in [\[eq:def-C-ell-z\]](#eq:def-C-ell-z){reference-type="eqref" reference="eq:def-C-ell-z"}, we see that $$\begin{aligned} z\ell C^\ell(z)AD=e^{r\ell z}[\ell]_{e^{rz}}A(z)D(z)-\ell AD.\end{aligned}$$ Consequently, we have that $$\begin{aligned} &{\langle}\eta_\ell(\beta_i,z),\beta_j{\rangle}\nonumber\\ =&\hbar\sum_{k\in I}\ell a_{kj}r/r_j c_{ik}^\ell\left(\hbar\frac{\partial}{\partial {z}}\right)\frac{\partial}{\partial {z}}\log f(z) +\ell r/r_j a_{ij}\log f(z)/z\nonumber\\ =&\log f(z)^{ q^{r\ell}[\ell]_{q^{r}}[a_{ij}]_{q^{r_i}}[r/r_j]_{q^{r_j}} }-r\ell/r_j a_{ij}\log z\nonumber\\ =&\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}q^{r\ell} }-r\ell/r_j a_{ij}\log z.\end{aligned}$$ Therefore, we complete the proof of [\[eq:eta-sp\]](#eq:eta-sp){reference-type="eqref" reference="eq:eta-sp"}. The relation [\[eq:eta-sp-exp\]](#eq:eta-sp-exp){reference-type="eqref" reference="eq:eta-sp-exp"} follows immediate from [\[eq:eta-sp\]](#eq:eta-sp){reference-type="eqref" reference="eq:eta-sp"}. ◻ For $i,j\in I$, we define $$\begin{aligned} &P_{ij}(z)=f(z)^{[r\ell/r_i+a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}}z^{-r\ell/r_ja_{ij}},\\ &Q_{ij}(z)=f(z)^{[r\ell/r_i-a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}} f(z)^{\delta_{ij}(1+q^{2r\ell})[r\ell/r_i]_{q^{r_i}}^2}z^{r\ell/r_ja_{ij}}.\end{aligned}$$ **Lemma 116**. 
*A topologically free ${\mathbb{C}}[[\hbar]]$-module $W$ equipped with fields $$\beta_{i,\hbar}(z), e_{i,\hbar}^\pm(z)\in{\mathcal{E}}_\hbar(W)\quad\mbox{for }i\in I$$ is an object of $\mathcal A_\hbar^{\eta_\ell}(\sqrt\ell Q_L)$ if and only if $$\begin{aligned} &[\beta_{i,\hbar}(z_1),\beta_{j,\hbar}(z_2)] =[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}}\label{eq:AQ-sp1}\\ &\quad\times\left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\frac{\partial}{\partial {z_2}}\log f(z_1-z_2),\nonumber\\ &[\beta_{i,\hbar}(z_1),e_{j,\hbar}^\pm(z_2)]=\pm e_{j,\hbar}^\pm(z_2) [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}}\label{eq:AQ-sp2}\\ &\quad\times\left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right) \frac{\partial}{\partial {z_1}}\log f(z_1-z_2),\nonumber\\ &\iota_{z_1,z_2}f(z_1-z_2)^{q^{-r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}} e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\pm(z_2)\label{eq:AQ-sp3}\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{q^{r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}} e_{j,\hbar}^\pm(z_2)e_{i,\hbar}^\pm(z_1),\nonumber\\ &\iota_{z_1,z_2}f(z_1-z_2)^{(q^{r_ia_{ij}}+\delta_{ij}+\delta_{ij}q^{2r\ell}) [r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}} e_{i,\hbar}^\pm(z_1)e_{j,\hbar}^\mp(z_2)\label{eq:AQ-sp4}\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{(q^{-r_ia_{ij}}+\delta_{ij}+\delta_{ij}q^{2r\ell}) [r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}} e_{j,\hbar}^\mp(z_2)e_{i,\hbar}^\pm(z_1),\nonumber\\ &\frac{d}{dz}e_{i,\hbar}^\pm(z)=\pm \beta_{i,\hbar}(z)^+e_{i,\hbar}^\pm(z) \pm e_{i,\hbar}^\pm(z)\beta_{i,\hbar}(z)^- %-\frac{e_{i,\hbar}^\pm(z)}{q^{r_i}-q^{-r_i}} \label{eq:AQ-sp5}\\ &-e_{i,\hbar}^\pm(z)\left(\left(f_0'(z)/f_0(z)\right)^{[2]_{q^{r_i}}[r\ell/r_i]_{q^{r_i}}q^{r\ell}}\right)|_{z=0} ,\nonumber\\ &\iota_{z_1,z_2}f(z_1-z_2)^{q^{r\ell}[2]_{q^{r_i}}[r\ell/r_i]_{q^{r_i}}} e_{i,\hbar}^+(z_1)e_{i,\hbar}^-(z_2)\label{eq:AQ-sp6}\\ =&\iota_{z_2,z_1}f(z_1-z_2)^{q^{-r\ell}[2]_{q^{r_i}}[r\ell/r_i]_{q^{r_i}}} e_{i,\hbar}^-(z_2)e_{i,\hbar}^+(z_1),\nonumber\\ &\left.\left(\iota_{z_1,z_2}f(z_1-z_2)^{q^{r\ell}[2]_{q^{r_i}}[r\ell/r_i]_{q^{r_i}}} e_{i,\hbar}^+(z_1)e_{i,\hbar}^-(z_2)\right)\right|_{z_2=z_1}=1.\label{eq:AQ-sp7}\end{aligned}$$* *Proof.* The relations [\[eq:AQ-sp1\]](#eq:AQ-sp1){reference-type="eqref" reference="eq:AQ-sp1"}, [\[eq:AQ-sp2\]](#eq:AQ-sp2){reference-type="eqref" reference="eq:AQ-sp2"} and [\[eq:AQ-sp5\]](#eq:AQ-sp5){reference-type="eqref" reference="eq:AQ-sp5"} are immediate consequences of [\[eq:eta-sp\]](#eq:eta-sp){reference-type="eqref" reference="eq:eta-sp"}, while the relations [\[eq:AQ-sp6\]](#eq:AQ-sp6){reference-type="eqref" reference="eq:AQ-sp6"} and [\[eq:AQ-sp7\]](#eq:AQ-sp7){reference-type="eqref" reference="eq:AQ-sp7"} follows immediate from [\[eq:eta-sp-exp\]](#eq:eta-sp-exp){reference-type="eqref" reference="eq:eta-sp-exp"}. 
Notice that $$\begin{aligned} &[r\ell/r_i\pm a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}\mp q^{r\ell}[a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}\\ =&\frac{1}{q^{r_i}-q^{-r_i}}[r\ell/r_j]_{q^{r_j}} \left(q^{r\ell\pm r_ia_{ij}}-q^{-r\ell\mp r_ia_{ij}}-q^{r\ell\pm r_ia_{ij}}+q^{r\ell\mp r_ia_{ij}}\right)\\ =&q^{\mp r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}},\\ &[r\ell/r_i\pm a_{ij}]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}\mp q^{-r\ell}[a_{ji}]_{q^{r_j}}[r\ell/r_i]_{q^{r_i}}\\ =&\frac{q^{r\ell}-q^{-r\ell}}{(q^{r_i}-q^{-r_i})(q^{r_j}-q^{-r_j})} \left(q^{r\ell\pm r_ia_{ij}}-q^{-r\ell\mp r_ia_{ij}}-q^{-r\ell\pm r_ja_{ji}}+q^{-r\ell\mp r_ja_{ji}} \right)\\ =&q^{\pm r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}.\end{aligned}$$ Then $$\begin{aligned} &e^{-{\langle}\eta_\ell(\beta_i,z),\beta_j{\rangle}}P_{ij}(z) =f(z)^{q^{-r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}},\\ &e^{-{\langle}\eta_\ell(\beta_j,-z),\beta_i{\rangle}}P_{ij}(z) =f(z)^{q^{r_ia_{ij}}[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}},\\ &e^{{\langle}\eta_\ell(\beta_i,z),\beta_j{\rangle}}Q_{ij}(z) =f(z)^{\left(q^{r_ia_{ij}}+\delta_{ij}+\delta_{ij}q^{2r\ell}\right)[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}},\\ &e^{{\langle}\eta_\ell(\beta_j,-z),\beta_i{\rangle}}Q_{ij}(z) =f(z)^{\left(q^{-r_ia_{ij}}+\delta_{ij}+\delta_{ij}q^{2r\ell}\right)[r\ell/r_i]_{q^{r_i}}[r\ell/r_j]_{q^{r_j}}}.\end{aligned}$$ By utilizing these equations, we successfully establish the proof for equations ([\[eq:AQ-sp3\]](#eq:AQ-sp3){reference-type="ref" reference="eq:AQ-sp3"}) and ([\[eq:AQ-sp4\]](#eq:AQ-sp4){reference-type="ref" reference="eq:AQ-sp4"}). ◻ **Definition 117**. Let $\widehat{\mathcal A}_\hbar^{\eta_\ell}(\sqrt\ell Q_L)$ be the category whose objects are topologically free ${\mathbb{C}}[[\hbar]]$-modules $W$, equipped with fields $\beta_{i,\hbar}(z),e_{i,\hbar}^\pm(z)\in{\mathcal{E}}_\hbar(W)$ ($i\in I$) satisfying the relations [\[eq:AQ-sp1\]](#eq:AQ-sp1){reference-type="eqref" reference="eq:AQ-sp1"}, [\[eq:AQ-sp2\]](#eq:AQ-sp2){reference-type="eqref" reference="eq:AQ-sp2"}, [\[eq:AQ-sp3\]](#eq:AQ-sp3){reference-type="eqref" reference="eq:AQ-sp3"} and [\[eq:AQ-sp4\]](#eq:AQ-sp4){reference-type="eqref" reference="eq:AQ-sp4"}. **Lemma 118**. *Let $X=F,V,L$. Then $X_{\hat{\mathfrak g},\hbar}^\ell$ equipped with fields $Y_{\widehat{\ell}}(h_i,z)$, $Y_{\widehat{\ell}}(f_i^\pm,z)$ ($i\in I$) is an object of $\widehat{\mathcal A}_\hbar^{\eta_\ell}(\sqrt\ell Q_L)$.* *Proof.* The equation [\[eq:AQ-sp1\]](#eq:AQ-sp1){reference-type="eqref" reference="eq:AQ-sp1"} is precisely the equation [\[eq:local-h-1\]](#eq:local-h-1){reference-type="eqref" reference="eq:local-h-1"}. 
According to Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"}, $$\begin{aligned} Y_{\widehat{\ell}}(f_i^\pm&,z) \label{eq:pf-AQ-sp-temp1} =\sqrt{c_{i,\ell}}\prod_{a=1}^{r\ell/r_i}f_0(2ar_i\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_i^\pm,z+(r\ell-r_i)\hbar)\\ &\times Y_{\widehat{\ell}}(x_i^\pm,z+(r\ell-2r_i)\hbar)\cdots Y_{\widehat{\ell}}(x_i^\pm,z-(r\ell-r_i)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}.\nonumber\end{aligned}$$ Then from [\[eq:local-h-2\]](#eq:local-h-2){reference-type="eqref" reference="eq:local-h-2"}, we have that $$\begin{aligned} &[Y_{\widehat{\ell}}(h_i,z_1),Y_{\widehat{\ell}}(f_j^\pm,z_2)]= \sqrt{c_{j,\ell}}\prod_{a=1}^{r\ell/r_j}f_0(2ar_j\hbar)\\ \times&\bigg[Y_{\widehat{\ell}}(h_i,z_1), \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_j^\pm,z_2+(r\ell-r_j)\hbar) %Y_{\wh\ell}(x_{j,\hbar}^\pm,z_2+(r\ell-2r_j)\hbar) \cdots Y_{\widehat{\ell}}(x_j^\pm,z_2-(r\ell-r_j)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\bigg]\\ =&\pm \sqrt{c_{j,\ell}}\prod_{a=1}^{r\ell/r_j}f_0(2ar_j\hbar) \mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_j^\pm,z_2+(r\ell-r_j)\hbar)\\ &\times Y_{\widehat{\ell}}(x_j^\pm,z_2+(r\ell-2r_j)\hbar)\cdots Y_{\widehat{\ell}}(x_j^\pm,z_2-(r\ell-r_j)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ &\times [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}} \left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right)\\ &\times\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)\\ =&\pm Y_{\widehat{\ell}}(f_j^\pm,z_2) [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z_2}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}} \left(\iota_{z_1,z_2}q^{-r\ell\frac{\partial}{\partial {z_2}}}-\iota_{z_2,z_1}q^{r\ell\frac{\partial}{\partial {z_2}}}\right)\\ &\times\frac{\partial}{\partial {z_1}}\log f(z_1-z_2),\end{aligned}$$ which proves [\[eq:AQ-sp2\]](#eq:AQ-sp2){reference-type="eqref" reference="eq:AQ-sp2"}. Finally, [\[eq:AQ-sp3\]](#eq:AQ-sp3){reference-type="eqref" reference="eq:AQ-sp3"} and [\[eq:AQ-sp4\]](#eq:AQ-sp4){reference-type="eqref" reference="eq:AQ-sp4"} can be derived from [\[eq:pf-AQ-sp-temp1\]](#eq:pf-AQ-sp-temp1){reference-type="eqref" reference="eq:pf-AQ-sp-temp1"}, [\[eq:local-h-3\]](#eq:local-h-3){reference-type="eqref" reference="eq:local-h-3"}, [\[eq:local-h-4\]](#eq:local-h-4){reference-type="eqref" reference="eq:local-h-4"} and Lemma [Lemma 69](#lem:normal-ordering-general){reference-type="ref" reference="lem:normal-ordering-general"}. ◻ The proof of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} for ${\mathfrak g}={\mathfrak{sl}}_2$ is presented in Subsection [6.3](#subsec:sl2-case){reference-type="ref" reference="subsec:sl2-case"}. Assuming that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} holds true for ${\mathfrak g}={\mathfrak{sl}}_2$, we can now complete the proof of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} for a general ${\mathfrak g}$. 
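Before turning to the proof, we record, as a sanity check, the simplest instance of the $q$-integer identities used in the proof of Lemma 116, with the convention $[n]_{q^{s}}=\frac{q^{sn}-q^{-sn}}{q^{s}-q^{-s}}$ that underlies the expansions displayed there. For ${\mathfrak g}={\mathfrak{sl}}_2$, where $r=r_i=r_j=1$ and $a_{ij}=2$, the first of those identities (taken with the upper sign) reads $$\begin{aligned} [\ell+2]_{q}[\ell]_{q}-q^{\ell}[2]_{q}[\ell]_{q} =\frac{q^{\ell+2}-q^{-\ell-2}-q^{\ell+2}+q^{\ell-2}}{q-q^{-1}}\,[\ell]_{q} =q^{-2}[\ell]_{q}^{2},\end{aligned}$$ which is precisely the exponent of $f(z_1-z_2)$ on the left-hand side of [\[eq:AQ-sp3\]](#eq:AQ-sp3){reference-type="eqref" reference="eq:AQ-sp3"} in this case.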
*Proof of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"}:* Proposition [Proposition 68](#prop:universal-qaff){reference-type="ref" reference="prop:universal-qaff"} yields a ${\mathbb{C}}[[\hbar]]$-linear quantum VA homomorphism $$\begin{aligned} \varsigma_i:L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i} \to L_{\hat{\mathfrak g},\hbar}^\ell.\end{aligned}$$ The map $$\begin{aligned} L_{\hat{\mathfrak{sl}}_2}^{r\ell/r_i}\cong L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i}/\hbar L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i} \to L_{\hat{\mathfrak g},\hbar}^\ell/\hbar L_{\hat{\mathfrak g},\hbar}^\ell\cong L_{\hat{\mathfrak g}}^\ell\end{aligned}$$ induced by $\varsigma_i$ is injective. As both $L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i}$ and $L_{\hat{\mathfrak g},\hbar}^\ell$ are topologically free, the map $\varsigma_i$ itself must be injective. Let $\{h_1,x_1^\pm\}$ be the standard generators of $L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i}$, and let ${\mathbb{Z}}\alpha_1$ be the root lattice of ${\mathfrak{sl}}_2$. Since Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} is assumed to hold for ${\mathfrak g}={\mathfrak{sl}}_2$, we can infer from Theorem [Theorem 46](#thm:Undeform-qlattice){reference-type="ref" reference="thm:Undeform-qlattice"} that $$\begin{aligned} &\left(L_{\hat{\mathfrak{sl}}_2,r_i\hbar}^{r\ell/r_i}, Y_{\widehat{r\ell/r_i}}(h_1,z), Y_{\widehat{r\ell/r_i}}(f_1^\pm,z)\right)\in\mathop{\mathrm{obj}}\mathcal A_{r_i\hbar}^{\eta_{r\ell/r_i}}(\sqrt{r\ell/r_i}{\mathbb{Z}}\alpha_1).\end{aligned}$$ As $\varsigma_i$ is an injective ${\mathbb{C}}[[\hbar]]$-linear quantum VA homomorphism, the fields $Y_{\widehat{\ell}}(h_i,z)$, $Y_{\widehat{\ell}}(f_i^\pm,z)$ satisfy the relations [\[eq:AQ-sp5\]](#eq:AQ-sp5){reference-type="eqref" reference="eq:AQ-sp5"}, [\[eq:AQ-sp6\]](#eq:AQ-sp6){reference-type="eqref" reference="eq:AQ-sp6"} and [\[eq:AQ-sp7\]](#eq:AQ-sp7){reference-type="eqref" reference="eq:AQ-sp7"} on $L_{\hat{\mathfrak g},\hbar}^\ell$. Combining this with Lemma [Lemma 118](#lem:AQ-sp1-4){reference-type="ref" reference="lem:AQ-sp1-4"}, we find that $$\begin{aligned} \left(L_{\hat{\mathfrak g},\hbar}^\ell,\{Y_{\widehat{\ell}}(h_i,z)\}_{i\in I},\{Y_{\widehat{\ell}}(f_i^\pm,z)\}_{i\in I}\right)\in \mathop{\mathrm{obj}}\mathcal A_\hbar^{\eta_\ell}(\sqrt\ell Q_L).\end{aligned}$$ Thus, by utilizing Proposition [Proposition 50](#prop:qlattice-universal-property){reference-type="ref" reference="prop:qlattice-universal-property"}, we complete the proof. ## Quantum parafermion VAs {#sec:qpara} In this section, we study the structure of the quantum parafermion VA, that is, the commutant of ${\mathfrak h}$ in a simple quantum affine VA $L_{\hat{\mathfrak g},\hbar}^\ell$. **Definition 119**. Let $V$ be an ($\hbar$-adic) nonlocal VA, and let $U\subset V$ be a subset. Define $$\begin{aligned} &\mathop{\mathrm{Com}}_V(U)={ \left.\left\{ {v\in V} \,\right|\, {[Y(u,z_1),Y(v,z_2)]=0\quad\mbox{for any }u\in U} \right\} }.\end{aligned}$$ **Remark 120**. * *For any $u\in U$ and $v\in \mathop{\mathrm{Com}}_V(U)$, we have that $$\begin{aligned} Y(u,z_1)Y(v,z_2)=Y(v,z_2)Y(u,z_1)\in{\mathcal{E}}_\hbar^{(2)}(V). \end{aligned}$$ Then $$\begin{aligned} Y(Y(u,z_0)v,z_2)=\left(Y(u,z_1)Y(v,z_2)\right)|_{z_1=z_2+z_0}\in \mathop{\mathrm{End}}(V)[[z_0,z_2,z_2^{-1}]]. \end{aligned}$$ Applying this to ${{\bf 1}}$ and taking $z_2\to 0$, we get that $$\begin{aligned} Y(u,z_0)v\in V[[z_0]]. 
\end{aligned}$$ Therefore, $$\begin{aligned} \label{eq:com-alt-rel} \mathop{\mathrm{Com}}_V(U)\subset{ \left.\left\{ {v\in V} \,\right|\, {Y(u,z)^-v=0\quad\mbox{for any }u\in U} \right\} }. \end{aligned}$$* * It is straightforward to verify that **Lemma 121**. *Let $V$ be an $\hbar$-adic nonlocal VA, and let $U\subset V$ be a ${\mathbb{C}}[[\hbar]]$-submodule of $V$. Then $$\begin{aligned} &\mathop{\mathrm{Com}}_V(\bar U)=\mathop{\mathrm{Com}}_V(U),\quad \mathop{\mathrm{Com}}_V([U])=\mathop{\mathrm{Com}}_V(U),\\ &\overline{\mathop{\mathrm{Com}}_V(U)}=\mathop{\mathrm{Com}}_V(U),\quad \left[\mathop{\mathrm{Com}}_V(U)\right]=\mathop{\mathrm{Com}}_V(U).\end{aligned}$$ Moreover, $\mathop{\mathrm{Com}}_V(U)$ is an $\hbar$-adic nonlocal vertex subalgebra of $V$.* **Lemma 122**. *Let $V$ be an $\hbar$-adic nonlocal VA, and let $U_1,U_2\subset V$ be closed ${\mathbb{C}}[[\hbar]]$-submodules of $V$, such that $[U_i]=U_i$ for $i=1,2$. Suppose further that $U_2\subset\mathop{\mathrm{Com}}_V(U_1)$ and $$\begin{aligned} U_2/\hbar U_2=\mathop{\mathrm{Com}}_{V/\hbar V}(U_1/\hbar U_1).\end{aligned}$$ Then $$\begin{aligned} U_2=\mathop{\mathrm{Com}}_V(U_1).\end{aligned}$$* *Proof.* Notice that $$\begin{aligned} \mathop{\mathrm{Com}}_V(U_1)/\hbar \mathop{\mathrm{Com}}_V(U_1)\subset \mathop{\mathrm{Com}}_{V/\hbar V}(U_1/\hbar U_1).\end{aligned}$$ Then $$\begin{tikzcd} U_2/\hbar U_2 \arrow[d, phantom, sloped, "\subset"] \arrow[r, equal] &\mathop{\mathrm{Com}}_{V/\hbar V}(U_1/\hbar U_1)\\ \mathop{\mathrm{Com}}_V(U_1)/\hbar \mathop{\mathrm{Com}}_V(U_1) \arrow[ur, phantom, sloped, "\subset"]& \end{tikzcd}$$ It implies that $$\begin{aligned} U_2/\hbar U_2= \mathop{\mathrm{Com}}_V(U_1)/\hbar \mathop{\mathrm{Com}}_V(U_1).\end{aligned}$$ It follows from Lemma [Lemma 121](#lem:Com-basic1){reference-type="ref" reference="lem:Com-basic1"} that all the ${\mathbb{C}}[[\hbar]]$-submodules are topologically free. Therefore, the injection $U_2\to \mathop{\mathrm{Com}}_V(U_1)$ must be an isomorphism. ◻ Recall from Theorem [Theorem 8](#thm:para-classical-gen){reference-type="ref" reference="thm:para-classical-gen"} that the parafermion VA $K_{\hat{\mathfrak g}}^\ell$ is generated by the set ${ \left.\left\{ {W_i^2,W_i^3} \,\right|\, {i\in I} \right\} }$. The following result gives the $\hbar$-adic analogues of $W_i^2$ and $W_i^3$ in quantum affine VAs, whose proof will be given in Section [6.4](#subsec:pf-prop-W){reference-type="ref" reference="subsec:pf-prop-W"}. **Proposition 123**. *Let $X=V$ or $L$. For each $i\in I$ and $\ell\in{\mathbb{C}}^\times$, we define $$\begin{aligned} W_i(z)=&\exp\left(\left(\frac{1-e^{z\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}} h_i\right)_{-1}\right)Y_{\widehat{\ell}}(x_i^+,z)x_i^--\frac{{{\bf 1}}}{(q^{r_i}-q^{-r_i})z}\\ & +\frac{f_0(2(r_i+r\ell)\hbar)^{\frac{1}{2}} f_0(2(r_i-r\ell)\hbar)^{-{\frac{1}{2}}} {{\bf 1}}}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}.\nonumber\end{aligned}$$ Then $$\begin{aligned} W_i(z)\in X_{\hat{\mathfrak g},\hbar}^\ell[[z]],\label{eq:prop-W-well-defined}\end{aligned}$$ and $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z_1)^-W_j(z)=0\quad\mbox{in }X_{\hat{\mathfrak g},\hbar}^\ell.\label{eq:prop-W-parafermion}\end{aligned}$$ Moreover, for another $\ell'\in{\mathbb{C}}$, we have that $$\begin{aligned} S_{\ell,\ell'}(z_1)(W_i(z)\otimes u)=W_i(z)\otimes u\quad\mbox{for any }u\in X_{\hat{\mathfrak g},\hbar}^{\ell'}. \label{eq:prop-W-S-invariant}\end{aligned}$$* **Remark 124**. 
* *For $i\in I$ and $\ell\in{\mathbb{C}}^\times$, we have that $$\begin{aligned} &W_i(0)=\left(x_i^+\right)_{-1}x_i^--\frac{1}{(q^{r_i}-q^{-r_i})2r\ell\hbar} \left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\\ &\quad\times \left(\exp\left(\left(-q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^{\partial}}h_i\right)_{-1}\right){{\bf 1}}-{{\bf 1}}\right)\nonumber\\ &\quad -\frac{1}{(q^{r_i}-q^{-r_i})[r\ell/r_i]_{q^{r_i\partial}}}h_i,\nonumber\\ &W_i(-2r\ell\hbar)= \exp\left(\left(q^{-r\ell\partial}2\hbar f_0(2\partial\hbar) [r_i]_{q^\partial} h_i\right)_{-1}\right) Y_{\widehat{\ell}}(x_i^+,-2r\ell\hbar)^+x_i^-\\ &\quad-\frac{1}{(q^{r_i}-q^{-r_i})2r\ell\hbar} \left(\exp\left(\left(q^{-r\ell\partial}2\hbar f_0(2\partial\hbar) [r_i]_{q^\partial} h_i\right)_{-1}\right){{\bf 1}}-{{\bf 1}} \right)\nonumber\\ &\quad+\left(\frac{f_0(2(r_i+r\ell)\hbar)} {f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}} \frac{q^{-2r\ell\partial}} {(q^{r_i}-q^{-r_i})[r\ell/r_i]_{q^{r_i\partial}}}h_i.\nonumber\end{aligned}$$* * **Lemma 125**. *Denote by $\pi$ the composition map $$\begin{aligned} \xymatrix{ L_{\hat{\mathfrak g},\hbar}^\ell\ar[r]& L_{\hat{\mathfrak g},\hbar}^\ell/\hbar L_{\hat{\mathfrak g},\hbar}^\ell\ar[r]& L_{\hat{\mathfrak g}}^\ell. }\end{aligned}$$ Then for each $i\in I$, we have that $$\begin{aligned} \pi\left(W_i(0)\right)=W_i^2,\quad \pi\left(\frac{W_i(0)-W_i(-2r\ell\hbar)}{r\ell\hbar}\right)=\partial W_i^2+W_i^3.\end{aligned}$$* *Proof.* Notice that $$\begin{aligned} &W_i(0)\equiv (x_i^+)_{-1}x_i^- -\frac{1}{2}\partial h_i -\frac{r_i}{2r\ell}(h_i)_{-1}h_{i,\hbar}\\ &\quad+\bigg(\frac{r_i}{6}h_i +\frac{r\ell}{3}\partial^2 h_i+\frac{r_i}{2} (h_i)_{-1}\partial h_i+\frac{r_i}{2} (h_i)_{-2}h_i +\frac{r_i^2}{3r\ell} (h_i)_{-1}^2h_i \bigg)\hbar,\\ &W_i(-2r\ell\hbar)\equiv\left(x_i^+\right)_{-1}x_i^- -\frac{1}{2}\partial h_i-\frac{r_i}{2r\ell}(h_i)_{-1}h_i\\ &\quad+\bigg(-2r\ell(x_i^+)_{-2}x_i^-+2r_i(h_i)_{-1}(x_i^+)_{-1}x_i^- +\frac{2r\ell}{3}\partial^2h_i\\ &\quad\quad+\frac{r_i}{2}(h_i)_{-1}\partial h_i+\frac{r_i}{2}(h_i)_{-2}h_i -\frac{r_i^2}{3r\ell}(h_i)_{-1}^2h_i+\frac{r_i}{6}h_i\bigg)\hbar,\end{aligned}$$ modulo $\hbar^2 L_{\hat{\mathfrak g},\hbar}^\ell$. Hence, $$\begin{aligned} &W_i(0)\equiv \left(x_i^+\right)_{-1}x_i^- -\frac{1}{2}\partial h_i-\frac{r_i}{2r\ell}(h_i)_{-1}h_i,\end{aligned}$$ and $$\begin{aligned} &\frac{W_i(0)-W_i(-2r\ell\hbar)}{\hbar} \equiv\frac{r_i}{6}h_i+\frac{r\ell}{3}\partial^2h_i +\frac{r_i}{2}(h_i)_{-1}\partial h_i\\ &+\frac{r_i}{2}(h_i)_{-2}h_i+\frac{r_i^2}{3r\ell}(h_i)_{-1}^2h_i\\ &+2r\ell(x_i^+)_{-2}x_i^--2r_i(h_i)_{-1}(x_i^+)_{-1}x_i^- -\frac{2r\ell}{3}\partial^2h_i\\ &-\frac{r_i}{2}(h_i)_{-1}\partial h_i-\frac{r_i}{2}(h_i)_{-2}h_i +\frac{r_i^2}{3r\ell}(h_i)_{-1}^2h_i-\frac{r_i}{6}h_i\\ \equiv&2r\ell(x_i^+)_{-2}x_i^--2r_i(h_i)_{-1}(x_i^+)_{-1}x_i^- -\frac{r\ell}{3}\partial^2h_i+\frac{2r_i^2}{3r\ell}(h_i)_{-1}^2h_i,\end{aligned}$$ modulo $\hbar L_{\hat{\mathfrak g},\hbar}^\ell$. Comparing this with Remark [Remark 9](#rem:para-gen-classical){reference-type="ref" reference="rem:para-gen-classical"}, we complete the proof. ◻ **Definition 126**. Define $$\begin{aligned} K_{\hat{\mathfrak g},\hbar}^\ell=\mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g},\hbar}^\ell}({\mathfrak h}).\end{aligned}$$ **Theorem 127**. 
*$K_{\hat{\mathfrak g},\hbar}^\ell$ is generated by the set $$\begin{aligned} \label{eq:para-gen} { \left.\left\{ {W_i(0),\,W_i(-2r\ell\hbar)} \,\right|\, {i\in I} \right\} }.\end{aligned}$$ Moreover, $$\begin{aligned} \label{eq:para-alt} K_{\hat{\mathfrak g},\hbar}^\ell={ \left.\left\{ {u\in L_{\hat{\mathfrak g},\hbar}^\ell} \,\right|\, {Y_{\widehat{\ell}}(h_i,z)^-u=0\quad\mbox{for any }i\in I} \right\} },\end{aligned}$$ and for any $\ell'\in{\mathbb{Z}}_+$, any $u\in K_{\hat{\mathfrak g},\hbar}^\ell$ and any $v\in L_{\hat{\mathfrak g},\hbar}^{\ell'}$, we have that $$\begin{aligned} \label{eq:para-S-inv} S_{\ell,\ell'}(z)(u\otimes v)=u\otimes v.\end{aligned}$$ Furthermore, we have that $$\begin{aligned} \label{eq:para-classical-limit} K_{\hat{\mathfrak g},\hbar}^\ell/\hbar K_{\hat{\mathfrak g},\hbar}^\ell\cong K_{\hat{\mathfrak g}}^\ell.\end{aligned}$$* *Proof.* Let $K_1={\langle}\{W_i(0),\,W_i(-2r\ell\hbar)\,|\,i\in I\}{\rangle}$ and let $K_2$ be the right hand side of [\[eq:para-alt\]](#eq:para-alt){reference-type="eqref" reference="eq:para-alt"}. Then we have that $$\begin{aligned} \label{eq:K-space-desc-temp1} K_1\subset K_{\hat{\mathfrak g},\hbar}^\ell\subset K_2,\end{aligned}$$ where the first "$\subset$" follows from Proposition [Proposition 123](#prop:W){reference-type="ref" reference="prop:W"} and the second "$\subset$" follows from Remark [Remark 120](#rem:com-alt-def){reference-type="ref" reference="rem:com-alt-def"}. For each $i\in I$, we denote $$\begin{aligned} \iota\left(W_i^2\right)=W_i(0),\quad \iota\left(W_i^3\right)=(r\ell\hbar)^{-1}(W_i(0)-W_i(-2r\ell\hbar))-\partial W_i(0).\end{aligned}$$ Then $K_1={\langle}\{\iota(W_i^2),\,\iota(W_i^3)\,|\,i\in I\}{\rangle}$. For each $u\in K_2\setminus\hbar L_{\hat{\mathfrak g},\hbar}^\ell$, we have that $\pi(u)\in K_{\hat{\mathfrak g}}^\ell$. Then from Theorem [Theorem 8](#thm:para-classical-gen){reference-type="ref" reference="thm:para-classical-gen"}, we get that $\pi(u)$ can be written as a finite sum of the form $$\begin{aligned} v^{(1)}_{n_1}\cdots v^{(k)}_{n_k},\quad\mbox{where }k\in{\mathbb{N}},\,v^{(a)}\in\{W_i^2,\,W_i^3\,|\,i\in I\},\,n_a\in{\mathbb{Z}},\,(1\le a\le k).\end{aligned}$$ Denote by $u'$ the corresponding finite sum with $W_i^j$ being replaced by $\iota(W_i^j)$ for $i\in I$, $j=2,3$. Then $u'\in K_1$. Moreover, we deduce from Lemma [Lemma 125](#lem:W-mod-h){reference-type="ref" reference="lem:W-mod-h"} that $\pi(u)=\pi(u')$. Now, we fix an arbitrary $u\in K_2$. Define $$\begin{aligned} N(u)=\sup{ \left\{ {n\in{\mathbb{N}}} \,\left|\, {u-v\in\hbar^nL_{\hat{\mathfrak g},\hbar}^\ell\,\,\mbox{for some }v\in K_1} \right\}\right. }.\end{aligned}$$ Suppose that $N(u)<+\infty$. Then there exists $v\in K_1$ such that $$\begin{aligned} u-v\in \hbar^{N(u)}L_{\hat{\mathfrak g},\hbar}^\ell.\end{aligned}$$ Let $w=\hbar^{-N(u)}(u-v)$. Then $w\in K_2\setminus\hbar L_{\hat{\mathfrak g},\hbar}^\ell$. From the argument above, we can choose $w'\in K_1$, such that $\pi(w)=\pi(w')$. That is, $w-w'\in\hbar L_{\hat{\mathfrak g},\hbar}^\ell$. Define $$\begin{aligned} v'=v+\hbar^{N(u)}w'\in K_1.\end{aligned}$$ Then $$\begin{aligned} u-v'=u-v-\hbar^{N(u)}w'=\hbar^{N(u)}(w-w')\in\hbar^{N(u)+1}L_{\hat{\mathfrak g},\hbar}^\ell,\end{aligned}$$ which contradicts the definition of $N(u)$. Hence, $N(u)$ must be $+\infty$. This implies that for each $n\in{\mathbb{N}}$, there exists $v_n\in K_1$ such that $u-v_n\in\hbar^n L_{\hat{\mathfrak g},\hbar}^\ell$. 
Since $K_1$ is closed in $L_{\hat{\mathfrak g},\hbar}^\ell$, we get that $$\begin{aligned} u=\lim_{n\to\infty}v_n\in K_1.\end{aligned}$$ Therefore, $K_2\subset K_1$. We complete the proof of [\[eq:para-gen\]](#eq:para-gen){reference-type="eqref" reference="eq:para-gen"} and [\[eq:para-alt\]](#eq:para-alt){reference-type="eqref" reference="eq:para-alt"}. The relation [\[eq:para-S-inv\]](#eq:para-S-inv){reference-type="eqref" reference="eq:para-S-inv"} follows immediately from [\[eq:prop-W-S-invariant\]](#eq:prop-W-S-invariant){reference-type="eqref" reference="eq:prop-W-S-invariant"} and [\[eq:para-gen\]](#eq:para-gen){reference-type="eqref" reference="eq:para-gen"}. Finally, since $\pi(\iota(W_i^j))=W_i^j$ for $i\in I$, $j=2,3$, we get from Theorem [Theorem 8](#thm:para-classical-gen){reference-type="ref" reference="thm:para-classical-gen"} that $$\begin{aligned} K_{\hat{\mathfrak g},\hbar}^\ell/\hbar K_{\hat{\mathfrak g},\hbar}^\ell=\pi(K_1)\supset K_{\hat{\mathfrak g}}^\ell.\end{aligned}$$ Since $\pi(K_2)\subset K_{\hat{\mathfrak g}}^\ell$, we get the reverse inclusion: $$\begin{aligned} K_{\hat{\mathfrak g},\hbar}^\ell/\hbar K_{\hat{\mathfrak g},\hbar}^\ell=\pi(K_2)\subset K_{\hat{\mathfrak g}}^\ell.\end{aligned}$$ Therefore, we complete the proof of [\[eq:para-classical-limit\]](#eq:para-classical-limit){reference-type="eqref" reference="eq:para-classical-limit"}. ◻ **Theorem 128**. *Viewing $V_{\sqrt\ell Q_L}^{\eta_\ell}$ as an $\hbar$-adic quantum vertex subalgebra of $L_{\hat{\mathfrak g},\hbar}^\ell$, we have that $$\begin{aligned} \label{eq:double-comutant} \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g},\hbar}^\ell}\left(V_{\sqrt\ell Q_L}^{\eta_\ell}\right)=K_{\hat{\mathfrak g},\hbar}^\ell,\quad \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g},\hbar}^\ell}\left(K_{\hat{\mathfrak g},\hbar}^\ell\right)=V_{\sqrt\ell Q_L}^{\eta_\ell}.\end{aligned}$$* *Proof.* Let $u\in K_{\hat{\mathfrak g},\hbar}^\ell$. From [\[eq:para-alt\]](#eq:para-alt){reference-type="eqref" reference="eq:para-alt"}, we have that $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)^-u=0\quad\mbox{for }i\in I.\end{aligned}$$ Then by utilizing Corollary [Corollary 47](#coro:h--to-e){reference-type="ref" reference="coro:h--to-e"}, we get that $$\begin{aligned} Y_{\widehat{\ell}}(f_i^\pm,z)^-u=0\quad\mbox{for }i\in I.\end{aligned}$$ Combining this with [\[eq:para-S-inv\]](#eq:para-S-inv){reference-type="eqref" reference="eq:para-S-inv"} and the fact that $V_{\sqrt\ell Q_L}^{\eta_\ell}$ is generated by the set ${ \left.\left\{ {h_i,f_i^\pm} \,\right|\, {i\in I} \right\} }$, we get that $$\begin{aligned} \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g},\hbar}^\ell}\left(V_{\sqrt\ell Q_L}^{\eta_\ell}\right)\supset K_{\hat{\mathfrak g},\hbar}^\ell,\quad \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g},\hbar}^\ell}\left(K_{\hat{\mathfrak g},\hbar}^\ell\right) \supset V_{\sqrt\ell Q_L}^{\eta_\ell}.\end{aligned}$$ Since $$\begin{aligned} L_{\hat{\mathfrak g},\hbar}^\ell/\hbar L_{\hat{\mathfrak g},\hbar}^\ell\cong L_{\hat{\mathfrak g}}^\ell,\quad K_{\hat{\mathfrak g},\hbar}^\ell/\hbar K_{\hat{\mathfrak g},\hbar}^\ell\cong K_{\hat{\mathfrak g}}^\ell,\quad V_{\sqrt\ell Q_L}^{\eta_\ell}/\hbar V_{\sqrt\ell Q_L}^{\eta_\ell}\cong V_{\sqrt\ell Q_L},\end{aligned}$$ we complete the proof by using Theorem [Theorem 10](#thm:classical-double-commutant){reference-type="ref" reference="thm:classical-double-commutant"} and Lemma [Lemma 122](#lem:Com-basic2){reference-type="ref" reference="lem:Com-basic2"}. ◻ 
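The theorem above may be viewed as an $\hbar$-adic counterpart of the classical double commutant property entering its proof: modulo $\hbar$, the three algebras appearing in [\[eq:double-comutant\]](#eq:double-comutant){reference-type="eqref" reference="eq:double-comutant"} reduce to $V_{\sqrt\ell Q_L}$, $K_{\hat{\mathfrak g}}^\ell$ and $L_{\hat{\mathfrak g}}^\ell$, and [\[eq:double-comutant\]](#eq:double-comutant){reference-type="eqref" reference="eq:double-comutant"} corresponds to the classical relations $$\begin{aligned} \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g}}^\ell}\left(V_{\sqrt\ell Q_L}\right)=K_{\hat{\mathfrak g}}^\ell,\quad \mathop{\mathrm{Com}}_{L_{\hat{\mathfrak g}}^\ell}\left(K_{\hat{\mathfrak g}}^\ell\right)=V_{\sqrt\ell Q_L}\end{aligned}$$ of Theorem [Theorem 10](#thm:classical-double-commutant){reference-type="ref" reference="thm:classical-double-commutant"}.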
## Proof of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} for ${\mathfrak g}={\mathfrak{sl}}_2$ {#subsec:sl2-case} Let ${\mathfrak g}={\mathfrak{sl}}_2$. Then $I=\{1\}$, $r=r_1=1$, $Q_L={\mathbb{Z}}\alpha_1$ and the Cartan matrix $A=(2)$. In this subsection, we prove Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} using induction on $\ell$. First, we prove that $L_{\hat{\mathfrak{sl}}_2,\hbar}^1\cong V_{{\mathbb{Z}}\alpha_1}^{\eta_1}$. Then we show that the case of $\ell$ implies the case of $\ell+1$, by using the following $\hbar$-adic quantum VA injection provided in Theorem [Theorem 83](#thm:coproduct){reference-type="ref" reference="thm:coproduct"}: $$\begin{aligned} \Delta:L_{\hat{\mathfrak{sl}}_2,\hbar}^{\ell+1}\longrightarrow L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1.\end{aligned}$$ Similarly to the definition of $E_\ell(h_i)$ (see [\[eq:def-E-h\]](#eq:def-E-h){reference-type="eqref" reference="eq:def-E-h"}), we define $$\begin{aligned} &E_\ell(\beta_1)=\left( \frac{f_0(2(1+\ell)\hbar)}{f_0(2(1-r\ell)\hbar)} \right)^{\frac{1}{2}}\\ &\quad\times\exp\left( \left(-q^{-\ell\partial}2\hbar f_0(2\hbar\partial)\beta_1\right)_{-1} \right){{\bf 1}}\in V_{\sqrt\ell {\mathbb{Z}}\alpha_1}^{\eta_\ell}.\end{aligned}$$ **Lemma 129**. *We have that $$\begin{aligned} E_\ell(\beta_1)=E^+(\beta_1,-\hbar-\ell\hbar) E^+(-\beta_1,\hbar-\ell\hbar){{\bf 1}}.\end{aligned}$$* *Proof.* Set $$\begin{aligned} &\widetilde{\beta}_1^-(z)=-q^{-\ell\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right)\left( Y_{\sqrt\ell {\mathbb{Z}}\alpha_1}(\beta_1,z)^-+\Phi(\eta_\ell'(\beta_1,z),Y_{\sqrt\ell {\mathbb{Z}}\alpha_1})\right),\\ &\widetilde{\beta}_1^+(z)=-q^{-\ell\frac{\partial}{\partial {z}}}2\hbar f_0\left(2\hbar\frac{\partial}{\partial {z}}\right) Y_{\sqrt\ell {\mathbb{Z}}\alpha_1}(\beta_1,z)^+.\end{aligned}$$ Following a proof technique similar to that of Proposition [Proposition 92](#prop:Y-E){reference-type="ref" reference="prop:Y-E"}, we have that $$\begin{aligned} &Y_{\sqrt\ell {\mathbb{Z}}\alpha_1}^{\eta_\ell}\left(E_\ell(\beta_1),z\right) =\exp\left(\widetilde{\beta}_1^+(z)\right)\exp\left(\widetilde{\beta}_1^-(z)\right).\end{aligned}$$ Notice that $$\begin{aligned} \exp\left(\widetilde{\beta}_1^+(z)\right)=E^+(\beta_1,z-\hbar-\ell\hbar) E^+(-\beta_1,z+\hbar-\ell\hbar).\end{aligned}$$ Then $$\begin{aligned} &E_\ell(\beta_1)=\lim_{z\to 0}Y_{\sqrt\ell {\mathbb{Z}}\alpha_1}^{\eta_\ell}(E_\ell(\beta_1),z){{\bf 1}}\\ =&\lim_{z\to 0}\exp\left(\widetilde{\beta}_1^+(z)\right)\exp\left(\widetilde{\beta}_1^-(z)\right){{\bf 1}}\\ =&\lim_{z\to 0}E^+(\beta_1,z-\hbar-\ell\hbar)E^+(-\beta_1,z+\hbar-\ell\hbar){{\bf 1}}\\ =&E^+(\beta_1,-\hbar-\ell\hbar)E^+(-\beta_1,\hbar-\ell\hbar){{\bf 1}},\end{aligned}$$ which completes the proof of the lemma. ◻ 
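For the computations below, it is also convenient to record the data of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} in the present setting. Substituting $r=r_1=1$ into [\[eq:qlattice-embedding\]](#eq:qlattice-embedding){reference-type="eqref" reference="eq:qlattice-embedding"} and into the definitions of $\beta_i$ and $f_i^\pm$ gives $\beta_1=\sqrt\ell\alpha_1$ and $$\begin{aligned} \sqrt\ell\alpha_1\mapsto h_1,\quad e_{\pm\sqrt\ell\alpha_1}\mapsto \sqrt{c_{1,\ell}}\, q^{(1-\ell)\partial}\left(x_1^\pm\right)_{-1}^{\ell}{{\bf 1}}=f_1^\pm, \quad\mbox{where }c_{1,\ell}=\frac{f_0(2\ell \hbar)^{\frac{1}{2}}f_0(2(\ell-1) \hbar)^{-{\frac{1}{2}}}}{([\ell]_{q}!)^2 f_0(2\hbar)^{\ell} }.\end{aligned}$$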
Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} is equivalent to the following result when $\ell=1$. **Proposition 130**. *There exists a unique $\hbar$-adic quantum VA isomorphism from $L_{\hat{\mathfrak{sl}}_2,\hbar}^1$ to $V_{{\mathbb{Z}}\alpha_1}^{\eta_1}$ such that $$\begin{aligned} h_1\mapsto \alpha_1,\quad x_1^\pm\mapsto e_{\pm\alpha_1}.\end{aligned}$$* *Proof.* From Proposition [Proposition 45](#prop:qlatticeVA-mod-to-A-mod){reference-type="ref" reference="prop:qlatticeVA-mod-to-A-mod"}, we have that $$\begin{aligned} \left(V_{{\mathbb{Z}}\alpha_1}^{\eta_1},Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(\alpha_1,z),Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\pm\alpha_1},z)\right)\in \mathop{\mathrm{obj}}\mathcal A_\hbar^{\eta_1}({\mathbb{Z}}\alpha_1).\end{aligned}$$ Comparing the relations [\[eq:AQ-sp1\]](#eq:AQ-sp1){reference-type="eqref" reference="eq:AQ-sp1"}, [\[eq:AQ-sp2\]](#eq:AQ-sp2){reference-type="eqref" reference="eq:AQ-sp2"}, [\[eq:AQ-sp3\]](#eq:AQ-sp3){reference-type="eqref" reference="eq:AQ-sp3"} and [\[eq:AQ-sp4\]](#eq:AQ-sp4){reference-type="eqref" reference="eq:AQ-sp4"} with the relations [\[eq:local-h-1\]](#eq:local-h-1){reference-type="eqref" reference="eq:local-h-1"}, [\[eq:local-h-2\]](#eq:local-h-2){reference-type="eqref" reference="eq:local-h-2"}, [\[eq:local-h-3\]](#eq:local-h-3){reference-type="eqref" reference="eq:local-h-3"} and [\[eq:local-h-4\]](#eq:local-h-4){reference-type="eqref" reference="eq:local-h-4"}, we get that $$\begin{aligned} \left(V_{{\mathbb{Z}}\alpha_1}^{\eta_1},Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(\alpha_1,z),Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\pm\alpha_1},z)\right)\in \mathop{\mathrm{obj}}\mathcal M_{\widehat{1}}({\mathfrak{sl}}_2).\end{aligned}$$ By utilizing Proposition [Proposition 58](#prop:universal-M-tau){reference-type="ref" reference="prop:universal-M-tau"}, we obtain a unique $\hbar$-adic nonlocal VA homomorphism $\psi:F_{\hat{\mathfrak{sl}}_2,\hbar}^1\to V_{{\mathbb{Z}}\alpha_1}^{\eta_1}$ such that $$\begin{aligned} \psi(h_1)=\alpha_1,\quad \psi(x_1^\pm)=e_{\pm\alpha_1}.\end{aligned}$$ From [\[eq:eta-sp-exp\]](#eq:eta-sp-exp){reference-type="eqref" reference="eq:eta-sp-exp"}, we get that $$\begin{aligned} &Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\alpha_1},z)e_{-\alpha_1} =f(z)^{-1-q^2}E^+(\alpha_1,z){{\bf 1}}.\end{aligned}$$ Then $$\begin{aligned} Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\alpha_1},z)^-e_{\alpha_1}=&0\quad\mbox{and}\\ Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\alpha_1},z)^-e_{-\alpha_1} =&\frac{1}{q-q^{-1}} \left(\frac{{{\bf 1}}}{z}-\frac{E^+(\alpha_1,-2\hbar){{\bf 1}}} {z+2\hbar}\right).\end{aligned}$$ By recalling Lemma [Lemma 129](#lem:E-beta){reference-type="ref" reference="lem:E-beta"}, we know that $$\begin{aligned} &E^+(\alpha_1,-2\hbar){{\bf 1}}=E_1(\alpha_1).\end{aligned}$$ Using [\[eq:eta-sp-exp\]](#eq:eta-sp-exp){reference-type="eqref" reference="eq:eta-sp-exp"} again, we derive that $$\begin{aligned} &Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\alpha_1},z)e_{\alpha_1} =f(z)^{1+q^2}E^+(\alpha_1,z)e_{2\alpha_1}.\end{aligned}$$ This implies that $$\begin{aligned} &(e_{\alpha_1})_{-1}e_{\alpha_1}=\mathop{\mathrm{Res}}_zz^{-1}Y_{{\mathbb{Z}}\alpha_1}^{\eta_1}(e_{\alpha_1},z)e_{\alpha_1}\\ =&\mathop{\mathrm{Res}}_zz^{-1}f(z)^{1+q^2}E^+(\alpha_1,z)e_{2\alpha_1}=0.\end{aligned}$$ By applying Proposition [Proposition 68](#prop:universal-qaff){reference-type="ref" reference="prop:universal-qaff"}, $\psi$ gives rise to a unique $\hbar$-adic nonlocal VA homomorphism $V_{\hat{\mathfrak{sl}}_2,\hbar}^1\to V_{{\mathbb{Z}}\alpha_1}^{\eta_1}$. 
Notice that the ${\mathbb{C}}[[\hbar]]$-module map $\psi$ induces a ${\mathbb{C}}$-isomorphism $$\begin{aligned} L_{\hat{\mathfrak{sl}}_2,\hbar}^1/\hbar L_{\hat{\mathfrak{sl}}_2,\hbar}^1\cong L_{\hat{\mathfrak{sl}}_2}^1\cong V_{{\mathbb{Z}}\alpha_1}\cong V_{{\mathbb{Z}}\alpha_1}^{\eta_1}/\hbar V_{{\mathbb{Z}}\alpha_1}^{\eta_1}.\end{aligned}$$ Therefore, $\psi$ must be an $\hbar$-adic nonlocal VA isomorphism. Finally, by comparing the actions of the quantum Yang-Baxter operators on generators (see Theorems [Theorem 73](#thm:quotient-algs){reference-type="ref" reference="thm:quotient-algs"} and [Theorem 42](#thm:qlatticeVAQYB){reference-type="ref" reference="thm:qlatticeVAQYB"}), we conclude that $\psi$ is an $\hbar$-adic quantum VA isomorphism. ◻ In the remainder of this subsection, we carry out the induction step. Recall the definitions of $h_1^\pm(z)$ and $\widetilde{h}_1^\pm(z)$ given in Subsection [5.4](#subsec:some-formulas){reference-type="ref" reference="subsec:some-formulas"}. The following conclusions can be inferred directly from Propositions [Proposition 112](#prop:delta-int+){reference-type="ref" reference="prop:delta-int+"} and [Proposition 113](#prop:delta-int-){reference-type="ref" reference="prop:delta-int-"}. **Lemma 131**. *In $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$, we have that $$\begin{aligned} &\Delta(f_1^+)\label{eq:Delta-f+} =q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\\ &\quad\times \frac{f_0(2(\ell-1)\hbar)^{\frac14}f_0(2\ell\hbar)^{\frac{1}{2}}f_0(2(\ell+1)\hbar)^{\frac 14}}{f_0(2\hbar)^{\frac{1}{2}}},\nonumber\\ &\Delta(f_1^-)\label{eq:Delta-f-} =q^{-\partial}f_1^-\otimes q^{\ell\partial}f_1^-\\ &\quad\times \frac{f_0(2(\ell-1)\hbar)^{\frac14}f_0(2\ell\hbar)^{\frac{1}{2}}f_0(2(\ell+1)\hbar)^{\frac 14}}{f_0(2\hbar)^{\frac{1}{2}}}.\nonumber\end{aligned}$$* **Lemma 132**. 
*We have that $$\begin{aligned} &S_{\ell,1}(z)\left(f_1^\pm\otimes h_1\right)\\ =&f_1^\pm\otimes h_1\pm f_1^\pm\otimes{{\bf 1}} \otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q-q^{-1})}.\nonumber\end{aligned}$$* *Proof.* We can utilize Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"} to derive the following expression: $$\begin{aligned} \label{eq:S-xk-temp1} &\mathop{\mathrm{Rat}}_{z_1^{-1},\dots z_\ell^{-1}}Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\\ =&\prod_{a=1}^\ell\frac{z_a}{z_a-2(\ell-a)\hbar}(x_1^\pm)_{-1}^\ell{{\bf 1}}.\nonumber\end{aligned}$$ By combining equations [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"} and [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}, we obtain the following result: $$\begin{aligned} &S_{\ell,1}(z)\left(Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes h_1\right)\\ =&Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes h_1\nonumber\\ &\pm Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes{{\bf 1}}\nonumber\\ &\otimes[2]_{q^{\frac{\partial}{\partial {z}}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)\sum_{a=1}^\ell e^{z_a\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z)\\ =&Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes h_1\nonumber\\ &\pm Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes{{\bf 1}}\nonumber\\ &\otimes[2]_{q^{\frac{\partial}{\partial {z}}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)\sum_{a=1}^\ell e^{z_a\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z).\end{aligned}$$ Combining this with [\[eq:S-xk-temp1\]](#eq:S-xk-temp1){reference-type="eqref" reference="eq:S-xk-temp1"}, we get that $$\begin{aligned} &S_{\ell,1}(z) \left(f_1^\pm\otimes h_1\right)= S_{\ell,1}(z)\left(\sqrt{c_{1,\ell}} q^{(1-\ell)\partial}(x_1^\pm)_{-1}^\ell{{\bf 1}}\otimes h_1\right)\\ =&\sqrt{c_{1,\ell}} \mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}z_1^{-1}\cdots z_\ell^{-1} S_{\ell,1}(z)\\ &\times\left(q^{(1-\ell)\partial}Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes h_1\right)\\ =&\sqrt{c_{1,\ell}} \mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}z_1^{-1}\cdots z_\ell^{-1}q^{(1-\ell)\partial} Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes h_1\\ &\pm \sqrt{c_{1,\ell}} \mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}z_1^{-1}\cdots z_\ell^{-1} q^{(1-\ell)\partial}Y_{\widehat{\ell}}(x_1^\pm,z_1)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_\ell){{\bf 1}}\otimes{{\bf 1}}\\ &\otimes[2]_{q^{\frac{\partial}{\partial {z}}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)q^{(1-\ell)\frac{\partial}{\partial {z}}}\sum_{a=1}^\ell e^{z_a\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z)\\ =&\mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}\prod_{a=1}^\ell\frac{1}{z_a-2(\ell-a)\hbar}f_1^\pm\otimes h_1\\ &\pm \mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}\prod_{a=1}^\ell\frac{1}{z_a-2(\ell-a)\hbar}f_1^\pm\otimes{{\bf 1}}\\ &\otimes[2]_{q^{\frac{\partial}{\partial {z}}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)q^{(1-\ell)\frac{\partial}{\partial 
{z}}}\sum_{a=1}^\ell e^{z_a\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z)\\ =&f_1^\pm\otimes h_1\pm f_1^\pm\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q-q^{-1})}.\end{aligned}$$ This completes the proof of the lemma. ◻ **Lemma 133**. *For $u\in L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell$, $h\in L_{\hat{\mathfrak{sl}}_2,\hbar}^1$ such that $$\begin{aligned} &\exp(\widetilde{h}_1^+(z))u=u\otimes g(z),\\ &S_{\ell,1}(z)(u\otimes h)=u\otimes h+u\otimes{{\bf 1}}\otimes f_1(z),\\ &S_{\ell,1}(z)(E_\ell(h_1)\otimes h)=E_\ell(h_1)\otimes h+E_\ell(h_1)\otimes{{\bf 1}}\otimes f_2(z),\end{aligned}$$ for some $f_1(z),f_2(z),g(z)\in {\mathbb{C}}((z))[[\hbar]]$ with $g(z)$ invertible. Then $$\begin{aligned} &S_{\ell,1}(z)\left(\exp(\widetilde{h}_1^+(z_1))u\otimes h\right)\\ =&\exp(\widetilde{h}_1^+(z_1))u\otimes h +\exp(\widetilde{h}_1^+(z_1))u\otimes{{\bf 1}}\otimes(f_1(z)+f_2(z+z_1)).\end{aligned}$$* *Proof.* By utilizing Proposition [Proposition 92](#prop:Y-E){reference-type="ref" reference="prop:Y-E"}, we get that $$\begin{aligned} \label{eq:S-Eu-h-temp1} &Y_{\widehat{\ell}}(E_\ell(h_1),z_1)u\\ =&\exp\left(\widetilde{h}_1^+(z_1)\right)\exp\left(\widetilde{h}_1^-(z_1)\right)u =\exp\left(\widetilde{h}_1^+(z_1)\right)u \otimes g(z_1).\nonumber\end{aligned}$$ In addition, we get from [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"} that $$\begin{aligned} &S_{\ell,1}(z)\left(Y_{\widehat{\ell}}(E_\ell(h_1),z_1)u\otimes h\right)\\ =&Y_{\widehat{\ell}}(z_1)S_{\ell,1}^{23}(z)S_{\ell,1}^{13}(z+z_1)(E_\ell(h_1)\otimes u\otimes h)\\ =&Y_{\widehat{\ell}}(E_\ell(h_1),z_1)u\otimes h+Y_{\widehat{\ell}}(E_\ell(h_1),z_1)u\otimes{{\bf 1}}\otimes\left( f_1(z)+f_2(z+z_1) \right).\end{aligned}$$ Combining this with [\[eq:S-Eu-h-temp1\]](#eq:S-Eu-h-temp1){reference-type="eqref" reference="eq:S-Eu-h-temp1"}, we can conclude that $$\begin{aligned} &S_{\ell,1}(z)\left(\exp\left(\widetilde{h}_1^+(z_1)\right)u\otimes h\right)\otimes g(z_1)\\ =&\exp\left(\widetilde{h}_1^+(z_1)\right)u\otimes h\otimes g(z_1)\\ &+\exp\left(\widetilde{h}_1^+(z_1)\right)u\otimes{{\bf 1}}\otimes g(z_1)(f_1(z)+f_2(z+z_1)).\end{aligned}$$ Since $g(z)$ is invertible, we complete the proof of the lemma. ◻ **Lemma 134**. *We have that $$\begin{aligned} &\left[\widetilde{h}_1^-(z_1),Y_{\widehat{\ell}}(f_1^\pm,z_2)\right]\nonumber\\ =&\pm Y_{\widehat{\ell}}(f_1^\pm,z_2) \log f(z_1-z_2)^{q^{-\ell-1}+q^{1-\ell}-q^{\ell-1}-q^{\ell+1}},\nonumber\\ &\exp\left(\widetilde{h}_1^-(z_1)\right)Y_{\widehat{\ell}}(f_1^\pm,z_2) \label{eq:com-formulas-15}\\ =&Y_{\widehat{\ell}}(f_1^\pm,z_2)\exp\left(\widetilde{h}_1^-(z_1)\right) f(z_1-z_2)^{\pm q^{-\ell-1}\pm q^{1-\ell}\mp q^{\ell-1}\mp q^{\ell+1}}.\nonumber\end{aligned}$$* *Proof.* From Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"} and the definition of $f_1^\pm$, we have that $$\begin{aligned} &Y_{\widehat{\ell}}(f_1^\pm,z)=\sqrt{c_{1,\ell}} Y_{\widehat{\ell}}\left(\left(x_1^\pm\right)_{-1}^\ell{{\bf 1}},z+(1-\ell)\hbar\right)\\ =&c\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_1^\pm,z+(\ell-1)\hbar)Y_{\widehat{\ell}}(x_1^\pm,z+(\ell-3)\hbar) \cdots Y_{\widehat{\ell}}(x_1^\pm,z+(1-\ell)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\end{aligned}$$ for some $c\in{\mathbb{C}}[[\hbar]]$. 
By utilizing [\[eq:com-formulas-7\]](#eq:com-formulas-7){reference-type="eqref" reference="eq:com-formulas-7"}, we get that $$\begin{aligned} &\left[\widetilde{h}_1^-(z_1),Y_{\widehat{\ell}}\left(f_1^\pm,z_2\right)\right]\\ =&c[\widetilde{h}_1^-(z_1),\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_1^\pm,z_2+(\ell-1)\hbar)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_2+(1-\ell)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}]\\ =&\pm c\mathopen{\overset{\circ}{ \mathsmaller{\mathsmaller{\circ}}} }Y_{\widehat{\ell}}(x_1^\pm,z_2+(\ell-1)\hbar)\cdots Y_{\widehat{\ell}}(x_1^\pm,z_2+(1-\ell)\hbar)\mathclose{\overset{\circ}{\mathsmaller{\mathsmaller{\circ}}}}\\ &\quad\times\log f(z_1-z_2)^{(q^{-2}-q^2)[\ell]_q}\\ =&\pm Y_{\widehat{\ell}}\left(f_1^\pm,z_2\right)\log f(z_1-z_2)^{q^{-\ell-1}+q^{1-\ell}-q^{\ell-1}-q^{\ell+1}}.\end{aligned}$$ The relation [\[eq:com-formulas-15\]](#eq:com-formulas-15){reference-type="eqref" reference="eq:com-formulas-15"} follows immediately from the preceding relation. ◻ **Lemma 135**. *We have that $$\begin{aligned} &S_{\ell,1}(z)\left(\exp(\widetilde{h}_1^+((\ell-1)\hbar))f_1^+\otimes h_1\right)\\ =&\exp(\widetilde{h}_1^+((\ell-1)\hbar))f_1^+\otimes h_1 +\exp(\widetilde{h}_1^+((\ell-1)\hbar))f_1^+\otimes{{\bf 1}}\nonumber\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^{-1}-q^{-3})}.\end{aligned}$$* *Proof.* Recall Lemma [Lemma 132](#lem:S-xk-h){reference-type="ref" reference="lem:S-xk-h"} which states that $$\begin{aligned} &S_{\ell,1}(z)(f_1^+\otimes h_1) =f_1^+\otimes h_1+f_1^+\otimes{{\bf 1}}\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q-q^{-1}) }.\end{aligned}$$ From [\[eq:S-E-1\]](#eq:S-E-1){reference-type="eqref" reference="eq:S-E-1"}, we get that $$\begin{aligned} &S_{\ell,1}(z)\left(E_\ell(h_1)\otimes h_1\right)\\ =&E_\ell(h_1)\otimes h_1 -E_\ell(h_1)\otimes{{\bf 1}}\\ &\quad\otimes[2]_{q^{\frac{\partial}{\partial {z}}}} [\ell]_{q^{\frac{\partial}{\partial {z}}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)^2 q^{-\ell\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z)\\ =&E_\ell(h_1)\otimes h_1 +E_\ell(h_1)\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log f(z)^{-[2]_q[\ell]_q(q-q^{-1})^2q^{-\ell}}.\end{aligned}$$ Applying these two equations to Lemma [Lemma 133](#lem:S-Eu-h){reference-type="ref" reference="lem:S-Eu-h"}, we complete the proof of the lemma. ◻ 
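The exponent in Lemma [Lemma 135](#lem:S-wt-h-f+-h){reference-type="ref" reference="lem:S-wt-h-f+-h"} can be traced explicitly. In the exponent notation used here, replacing the argument $z$ by $z+k\hbar$ amounts to multiplying the exponent by $q^{k}$ (as in the proof of Lemma [Lemma 137](#lem:h--exp-wt-h-f+){reference-type="ref" reference="lem:h--exp-wt-h-f+"} below). Since Lemma [Lemma 133](#lem:S-Eu-h){reference-type="ref" reference="lem:S-Eu-h"} is applied with $z_1=(\ell-1)\hbar$, the contribution coming from $E_\ell(h_1)$ enters with exponent $-[2]_q[\ell]_q(q-q^{-1})^2q^{-\ell}\cdot q^{\ell-1}=-[2]_q[\ell]_q(q-q^{-1})^2q^{-1}$, and $$\begin{aligned} [2]_q[\ell]_q(q-q^{-1})-[2]_q[\ell]_q(q-q^{-1})^2q^{-1} =[2]_q[\ell]_q(q-q^{-1})q^{-2} =[2]_q[\ell]_q(q^{-1}-q^{-3}).\end{aligned}$$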
**Lemma 136**. *In the $\hbar$-adic quantum VA $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell$, the equation [\[eq:AQ-sp5\]](#eq:AQ-sp5){reference-type="eqref" reference="eq:AQ-sp5"} is valid for $\ell$ if and only if the following equation holds true: $$\begin{aligned} \label{eq:AQ-sp5-alt1} &\frac{d}{dz}Y_{\widehat{\ell}}(f_1^\pm,z)=\pm h_1^+(z)Y_{\widehat{\ell}}(f_1^\pm,z)\pm Y_{\widehat{\ell}}(f_1^\pm,z)h_1^-(z)\end{aligned}$$ which can also be expressed as the equation: $$\begin{aligned} \label{eq:AQ-sp5-alt2} \partial f_1^\pm=\pm(h_1)_{-1}f_1^\pm -\left.\left(\frac{\partial}{\partial {z}}\log f_0(z)^{[2]_q[\ell]_qq^\ell}\right)\right|_{z=0}.\end{aligned}$$* *Proof.* Similarly to the proof of Lemma [Lemma 134](#lem:com-formulas3){reference-type="ref" reference="lem:com-formulas3"}, we have that $$\begin{aligned} \label{eq:com-h--f} &[h_1^-(z_1),Y_{\widehat{\ell}}(f_1^\pm,z_2)] =\pm Y_{\widehat{\ell}}(f_1^\pm,z_2)\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[2]_qq^\ell[\ell]_q}.\end{aligned}$$ Taking $\mathop{\mathrm{Rat}}_{z_1}$ on both sides of [\[eq:com-h\--f\]](#eq:com-h--f){reference-type="eqref" reference="eq:com-h--f"}, we get that $$\begin{aligned} &[h_1^-(z_1)^+,Y_{\widehat{\ell}}(f_1^\pm,z_2)] =\pm Y_{\widehat{\ell}}(f_1^\pm,z_2)\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[2]_qq^\ell[\ell]_q}\\ =&\pm Y_{\widehat{\ell}}(f_1^\pm,z_2)\mathop{\mathrm{Rat}}_{z_1}\frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[2]_qq^\ell[\ell]_q}\\ =&\pm Y_{\widehat{\ell}}(f_1^\pm,z_2)\frac{\partial}{\partial {z_1}}\log f_0(z_1-z_2)^{[2]_qq^\ell[\ell]_q}.\end{aligned}$$ Hence, the right-hand side of [\[eq:AQ-sp5\]](#eq:AQ-sp5){reference-type="eqref" reference="eq:AQ-sp5"} equals $$\begin{aligned} &\pm h_1^+(z)Y_{\widehat{\ell}}(f_1^\pm,z)\pm h_1^-(z)^+Y_{\widehat{\ell}}(f_1^\pm,z)\pm Y_{\widehat{\ell}}(f_1^\pm,z)h_1^-(z)^-\\ &\quad-Y_{\widehat{\ell}}(f_1^\pm,z)\left(\frac{\partial}{\partial {z}}\log f_0(z)^{[2]_q[\ell]_qq^\ell}\right)|_{z=0}\\ =&\pm h_1^+(z)Y_{\widehat{\ell}}(f_1^\pm,z)\pm Y_{\widehat{\ell}}(f_1^\pm,z)h_1^-(z).\end{aligned}$$ Therefore, the first assertion is valid. Letting both sides of [\[eq:com-h\--f\]](#eq:com-h--f){reference-type="eqref" reference="eq:com-h--f"} act on ${{\bf 1}}$ and taking $z_2=0$, we get that $$\begin{aligned} \label{eq:h--f} h_1^-(z)f_1^\pm =\pm f_1^\pm\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^\ell}.\end{aligned}$$ Then $$\begin{aligned} \label{eq:h--1-f} &(h_1)_{-1}f_1^\pm=\mathop{\mathrm{Res}}_zz^{-1}Y_{\widehat{\ell}}(h_1,z)f_1^\pm \nonumber\\ =&\mathop{\mathrm{Res}}_zz^{-1}(h_1^+(z)+h_1^-(z))f_1^\pm\nonumber\\ =&h_1^+(0)f_1^\pm \pm\mathop{\mathrm{Res}}_zz^{-1}\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^\ell}\nonumber\\ =&h_1^+(0)f_1^\pm \pm\left.\left( \frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^\ell}\right)\right|_{z=0}.\end{aligned}$$ Suppose that the relation [\[eq:AQ-sp5-alt1\]](#eq:AQ-sp5-alt1){reference-type="eqref" reference="eq:AQ-sp5-alt1"} holds true. Letting both sides of [\[eq:AQ-sp5-alt1\]](#eq:AQ-sp5-alt1){reference-type="eqref" reference="eq:AQ-sp5-alt1"} act on ${{\bf 1}}$ and taking $z=0$, we get that $$\begin{aligned} \label{eq:der-f} \partial f_1^\pm=\pm h_1^+(0)f_1^\pm.\end{aligned}$$ Combining this with [\[eq:h\--1-f\]](#eq:h--1-f){reference-type="eqref" reference="eq:h--1-f"}, we complete the proof of [\[eq:AQ-sp5-alt2\]](#eq:AQ-sp5-alt2){reference-type="eqref" reference="eq:AQ-sp5-alt2"}. 
On the other hand, notice that $$\begin{aligned} &Y_{\widehat{\ell}}(h_1,z_1)Y_{\widehat{\ell}}(f_1^\pm,z_2) =(h_1^+(z_1)+h_1^-(z_1))Y_{\widehat{\ell}}(f_1^\pm,z_2)\\ =&h_1^+(z_1)Y_{\widehat{\ell}}(f_1^\pm,z_2) +Y_{\widehat{\ell}}(f_1^\pm,z_2)h_1^-(z_1)\\ &\pm Y_{\widehat{\ell}}(f_1^\pm,z_2) \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{[2]_q[\ell]_qq^\ell},\end{aligned}$$ where the last equation follows from [\[eq:com-h\--f\]](#eq:com-h--f){reference-type="eqref" reference="eq:com-h--f"}. Then $$\begin{aligned} &Y_{\mathcal{E}}\left(Y_{\widehat{\ell}}(h_1,z),z_0\right)Y_{\widehat{\ell}}(f_1^\pm,z)\\ =&h_1^+(z+z_0)Y_{\widehat{\ell}}(f_1^\pm,z) +Y_{\widehat{\ell}}(f_1^\pm,z)h_1^-(z+z_0)\\ &\pm Y_{\widehat{\ell}}(f_1^\pm,z)\frac{\partial}{\partial {z_0}}\log f(z_0)^{[2]_q[\ell]_qq^\ell}.\end{aligned}$$ Taking $\mathop{\mathrm{Res}}_{z_0}z_0^{-1}$ on both sides, we get that $$\begin{aligned} &Y_{\widehat{\ell}}\left((h_1)_{-1}f_1^\pm,z\right) =h_1^+(z)Y_{\widehat{\ell}}(f_1^\pm,z)+Y_{\widehat{\ell}}(f_1^\pm,z)h_1^-(z)\\ &\pm Y_{\widehat{\ell}}(f_1^\pm,z)\left.\left(\frac{\partial}{\partial {z_0}}\log f(z_0)^{[2]_q[\ell]_qq^\ell}\right)\right|_{z_0=0}.\end{aligned}$$ Therefore, [\[eq:AQ-sp5-alt1\]](#eq:AQ-sp5-alt1){reference-type="eqref" reference="eq:AQ-sp5-alt1"} can be derived from [\[eq:AQ-sp5-alt2\]](#eq:AQ-sp5-alt2){reference-type="eqref" reference="eq:AQ-sp5-alt2"}. ◻ **Lemma 137**. *In the $\hbar$-adic quantum VA $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell$, we have that $$\begin{aligned} &h_1^-(z-2\hbar)\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\\ =&\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^\ell}.\end{aligned}$$* *Proof.* The following equation is a special case of [\[eq:com-formulas-10\]](#eq:com-formulas-10){reference-type="eqref" reference="eq:com-formulas-10"}: $$\begin{aligned} \left[h_1^-(z_1),\exp\left(\widetilde{h}_1^+(z_2)\right)\right]=\exp\left(\widetilde{h}_1^+(z_2)\right) \frac{\partial}{\partial {z_1}}\log f(z_1-z_2)^{ [2]_q(q^\ell-q^{-\ell})q^{2\ell} }.\end{aligned}$$ This implies that $$\begin{aligned} &\left[h_1^-(z-2\hbar),\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)\right]\\ =&\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right) \frac{\partial}{\partial {z}}\log f(z)^{ [2]_q(q^\ell-q^{-\ell})q^{\ell-1} }.\end{aligned}$$ Then we have that $$\begin{aligned} &h_1^-(z-2\hbar)\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\\ =&\left[h_1^-(z-2\hbar),\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)\right]f_1^+\\ &+\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right) h_1^-(z-2\hbar)f_1^+\\ =&\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^\ell-q^{\ell-2}+q^{\ell-2})}\\ =&\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^\ell},\end{aligned}$$ where the second equation follows from [\[eq:h\--f\]](#eq:h--f){reference-type="eqref" reference="eq:h--f"}. ◻ **Lemma 138**. *Let us assume that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} holds true for $\ell$. 
In $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$, we have that $$\begin{aligned} &\Delta(\partial f_1^+)=\Delta(h_1)_{-1}\Delta(f_1^+) -\left.\left(\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}}\right)\right|_{z=0}.\end{aligned}$$* *Proof.* Let $Y_\Delta$ be the vertex operator map of the twisted tensor product $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$. From [\[eq:Delta-f+\]](#eq:Delta-f+){reference-type="eqref" reference="eq:Delta-f+"}, we have that $$\begin{aligned} \Delta(f_1^+)=c\left(q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\right)\end{aligned}$$ for some invertible $c\in{\mathbb{C}}[[\hbar]]$. Then we have that $$\begin{aligned} &Y_\Delta\left(\Delta(h_1),z\right)\Delta(f_1^+)\\ =&cY_\Delta\left(q^{-\partial}h_1\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{\ell\partial}h_1,z\right) \left(q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\right)\\ =&cY_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z)S_{\ell,1}^{23}(-z)\\ &\times\Big( q^{-\partial}h_1\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^+\\ &\quad+{{\bf 1}}\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes q^{\ell\partial}h_1\otimes q^{\ell\partial}f_1^+\Big)\\ =&cY_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z) \Big( q^{-\partial}h_1\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^+\\ &\quad+{{\bf 1}}\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes q^{\ell\partial}h_1\otimes q^{\ell\partial}f_1^+\\ &\quad+{{\bf 1}}\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^+\\ &\quad\quad\otimes\frac{\partial}{\partial {(-z)}}\log f(-z)^{[2]_q[\ell]_q(q^{-1}-q^{-3})q^{1-\ell}} \Big)\\ =&cY_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z) \Big( q^{-\partial}h_1\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^+\\ &\quad+{{\bf 1}}\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes q^{\ell\partial}h_1\otimes q^{\ell\partial}f_1^+\\ &\quad-{{\bf 1}}\otimes q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^+\\ &\quad\quad\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^{\ell}-q^{\ell+2})} \Big)\\ =&cq^\partial Y_{\widehat{\ell}}(h_1,z-2\hbar)\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\\ &\quad+cq^\partial \exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}Y_{\widehat{1}}(h_1,z)f_1^+\\ &\quad-\Delta(f_1^+)\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^{\ell}-q^{\ell+2})}\\ %%%%%%%%%%%%%%%% =&cq^\partial \left(h_1^+(z-2\hbar)+h_1^-(z-2\hbar)\right)\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\\ &\quad+cq^\partial \exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}(h_1^+(z)+h_1^-(z))f_1^+\\ &\quad-\Delta(f_1^+)\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^{\ell}-q^{\ell+2})}\\ %%%%%%%%%%%%%%%% =&\left(h_1^+(z-\hbar)\otimes 1+1\otimes h_1^+(z+\ell\hbar)\right)\Delta(f_1^+)+\Delta(f_1^+)\\ &\quad\otimes\frac{\partial}{\partial {z}}\log 
f(z)^{[2]_q[\ell]_qq^{\ell+2}+[2]_qq},\end{aligned}$$ where the first equation follows from [\[eq:Delta-f+\]](#eq:Delta-f+){reference-type="eqref" reference="eq:Delta-f+"}, the third equation follows from Lemma [Lemma 135](#lem:S-wt-h-f+-h){reference-type="ref" reference="lem:S-wt-h-f+-h"} and the last equation follows from [\[eq:h\--f\]](#eq:h--f){reference-type="eqref" reference="eq:h--f"} and Lemma [Lemma 137](#lem:h--exp-wt-h-f+){reference-type="ref" reference="lem:h--exp-wt-h-f+"}. Applying $\mathop{\mathrm{Res}}_zz^{-1}$, we get that $$\begin{aligned} \label{eq:Delta-h--1-Delta-f+-temp} &\Delta(h_1)_{-1}\Delta(f_1^+)\nonumber\\ =&\left(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar)\right)\Delta(f_1^+)+\Delta(f_1^+)\nonumber\\ &\quad\otimes\mathop{\mathrm{Res}}_zz^{-1}\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_qq^{\ell+2}+[2]_qq}\nonumber\\ =&\left(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar)\right)\Delta(f_1^+)+\Delta(f_1^+)\\ &\quad\otimes\left.\left(\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}}\right)\right|_{z=0}. \nonumber\end{aligned}$$ On the other hand, we notice that $$\begin{aligned} \frac{\partial}{\partial {z}}\widetilde{h}_1^+(z) =h_1^+(z-(\ell+1)\hbar)-h_1^+(z+(1-\ell)\hbar).\end{aligned}$$ Then we have that $$\begin{aligned} \label{eq:der-Delta-f+-temp} &\Delta(\partial f_1^+)=(\partial\otimes 1+1\otimes\partial)\Delta(f_1^+)\nonumber\\ =&c(\partial\otimes 1+1\otimes\partial)q^\partial\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\nonumber\\ =&cq^\partial \left(h_1^+(-2\hbar)-h_1^+(0)\right)\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}f_1^+\nonumber\\ &+cq^\partial \exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)h_1^+(0)f_1^+\otimes q^{\ell\partial}f_1^+\nonumber\\ &+cq^\partial\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+\otimes q^{\ell\partial}h_1^+(0)f_1^+\nonumber\\ =&(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar)) \Delta(f_1^+),\end{aligned}$$ where the second equation follows from [\[eq:der-f\]](#eq:der-f){reference-type="eqref" reference="eq:der-f"}. Combining [\[eq:Delta-h\--1-Delta-f+-temp\]](#eq:Delta-h--1-Delta-f+-temp){reference-type="eqref" reference="eq:Delta-h--1-Delta-f+-temp"} and [\[eq:der-Delta-f+-temp\]](#eq:der-Delta-f+-temp){reference-type="eqref" reference="eq:der-Delta-f+-temp"}, we complete the proof of the lemma. ◻ **Lemma 139**. *Let us assume that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} holds true for $\ell$. In $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$, we have that $$\begin{aligned} &\Delta(\partial f_1^-)=-\Delta(h_1)_{-1}\Delta(f_1^-) -\left.\left(\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}}\right)\right|_{z=0}.\end{aligned}$$* *Proof.* Let $Y_\Delta$ be the vertex operator map of the twisted tensor product $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$. From [\[eq:Delta-f-\]](#eq:Delta-f-){reference-type="eqref" reference="eq:Delta-f-"}, we have that $$\begin{aligned} \Delta(f_1^-)=c\left(q^{-\partial}f_1^-\otimes q^{\ell\partial}f_1^-\right)\end{aligned}$$ for some invertible $c\in{\mathbb{C}}[[\hbar]]$. 
Then $$\begin{aligned} &Y_\Delta(\Delta(h_1),z)\Delta(f_1^-)\\ =&cY_\Delta\left(q^{-\partial}h_1\otimes{{\bf 1}}+{{\bf 1}}\otimes q^{\ell\partial}h_1,z\right)\left(q^{-\partial}f_1^-\otimes q^{\ell\partial}f_1^-\right)\\ =&cY_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z)S_{\ell,1}^{23}(-z)\\ &\times\left(q^{-\partial}h_1\otimes q^{-\partial}f_1^-\otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^- +{{\bf 1}}\otimes q^{-\partial}f_1^-\otimes q^{\ell\partial}h_1\otimes q^{\ell\partial}f_1^-\right)\\ =&cY_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z) \Big( q^{-\partial}h_1\otimes q^{-\partial}f_1^-\otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^-\\ &\quad+ {{\bf 1}}\otimes q^{-\partial}f_1^-\otimes q^{\ell\partial}h_1\otimes q^{\ell\partial}f_1^-\\ &\quad -{{\bf 1}}\otimes q^{-\partial}f_1^-\otimes{{\bf 1}}\otimes q^{\ell\partial}f_1^- \otimes\frac{\partial}{\partial {(-z)}}\log f(-z)^{[2]_q[\ell]_q(q-q^{-1})q^{-\ell-1}} \Big)\\ =&cq^{-\partial}Y_{\widehat{\ell}}(h_1,z)f_1^-\otimes q^{\ell\partial}f_1^- +cq^{-\partial}f_1^-\otimes q^{\ell\partial}Y_{\widehat{1}}(h_1,z)f_1^-\\ &-\Delta(f_1^-)\otimes\frac{\partial}{\partial {(-z)}}\log f(-z)^{[2]_q[\ell]_q(q-q^{-1})q^{-\ell-1}}\\ =&cq^{-\partial}(h_1^+(z)+h_1^-(z))f_1^-\otimes q^{\ell\partial}f_1^- +cq^{-\partial}f_1^-\otimes q^{\ell\partial}(h_1^+(z)+h_1^-(z))f_1^-\\ &+\Delta(f_1^-)\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^{-1}-q)q^{\ell+1}}\\ =&\left(h_1^+(z-\hbar)\otimes 1+1\otimes h_1^+(z+\ell\hbar)\right)\Delta(f_1^-) +\Delta(f_1^-)\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell]_q(q^\ell-q^{\ell+2}-q^\ell)-[2]_qq}\\ =&\left(h_1^+(z-\hbar)\otimes 1+1\otimes h_1^+(z+\ell\hbar)\right)\Delta(f_1^-) -\Delta(f_1^-)\\ &\otimes\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}},\end{aligned}$$ where the third equation follows from Lemma [Lemma 132](#lem:S-xk-h){reference-type="ref" reference="lem:S-xk-h"} and relations [\[eq:multqyb-shift-total1\]](#eq:multqyb-shift-total1){reference-type="eqref" reference="eq:multqyb-shift-total1"}, [\[eq:multqyb-shift-total2\]](#eq:multqyb-shift-total2){reference-type="eqref" reference="eq:multqyb-shift-total2"}, and the sixth equation follows from [\[eq:h\--f\]](#eq:h--f){reference-type="eqref" reference="eq:h--f"}. 
Applying $\mathop{\mathrm{Res}}_zz^{-1}$, we get that $$\begin{aligned} \label{eq:Delta-h--1-Delta-f--temp} &\Delta(h_1)_{-1}\Delta(f_1^-) =\left(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar)\right)\Delta(f_1^-)\nonumber\\ &\quad-\Delta(f_1^-) \otimes\mathop{\mathrm{Res}}_zz^{-1}\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}}\nonumber\\ =&\left(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar)\right)\Delta(f_1^-)\\ &\quad-\Delta(f_1^-) \otimes\left.\left(\frac{\partial}{\partial {z}}\log f(z)^{[2]_q[\ell+1]_qq^{\ell+1}}\right)\right|_{z=0}.\nonumber\end{aligned}$$ On the other hand, from [\[eq:der-f\]](#eq:der-f){reference-type="eqref" reference="eq:der-f"} we have that $$\begin{aligned} &\left(\partial\otimes 1+1\otimes\partial\right)\Delta(f_1^-)\\ =&cq^{-\partial}\partial f_1^-\otimes q^{\ell\partial}f_1^- +cq^{-\partial}f_1^-\otimes q^{\ell\partial}\partial f_1^-\\ =&-cq^{-\partial}h_1^+(0)f_1^-\otimes q^{\ell\partial}f_1^- -cq^{-\partial}f_1^-\otimes q^{\ell\partial}h_1^+(0)f_1^-\\ =&-(h_1^+(-\hbar)\otimes 1+1\otimes h_1^+(\ell\hbar))\Delta(f_1^-).\end{aligned}$$ Combining this with [\[eq:Delta-h\--1-Delta-f\--temp\]](#eq:Delta-h--1-Delta-f--temp){reference-type="eqref" reference="eq:Delta-h--1-Delta-f--temp"}, we complete the proof of the lemma. ◻ Since the $\hbar$-adic quantum VA homomorphism $\Delta:L_{\hat{\mathfrak{sl}}_2,\hbar}^{\ell+1}\to L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$ is injective, it follows from Lemmas [Lemma 136](#lem:AQ-sp5-alt){reference-type="ref" reference="lem:AQ-sp5-alt"}, [Lemma 138](#lem:Delta-AQ-sp-5+){reference-type="ref" reference="lem:Delta-AQ-sp-5+"} and [Lemma 139](#lem:Delta-AQ-sp-5-){reference-type="ref" reference="lem:Delta-AQ-sp-5-"} that **Proposition 140**. *Let us assume that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} holds true for $\ell$. Then the relation [\[eq:AQ-sp5\]](#eq:AQ-sp5){reference-type="eqref" reference="eq:AQ-sp5"} holds for $\ell+1$.* **Lemma 141**.
*We have that $$\begin{aligned} &Y_{\widehat{\ell}}\left(\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^\pm,z\right)\\ =&\exp\left(\widetilde{h}_1^+(z+(\ell-1)\hbar)\right)Y_{\widehat{\ell}}(f_1^\pm,z) \exp\left(\widetilde{h}_1^-(z+(\ell-1)\hbar)\right)\\ &\times \left(\frac{f_0(2(\ell-1)\hbar)}{f_0(2(\ell+1)\hbar)}\right)^{\frac{1}{2}} \left(\frac{f_0(2\hbar)}{f_0(2(\ell-1)\hbar) f_0(2\ell\hbar)}\right)^{\pm 1}.\end{aligned}$$* *Proof.* From [\[eq:com-formulas-6\]](#eq:com-formulas-6){reference-type="eqref" reference="eq:com-formulas-6"}, we have that $$\begin{aligned} =\gamma(z_2-z_1),\end{aligned}$$ where $$\begin{aligned} &\gamma(z) =\left(q^{2\frac{\partial}{\partial {z}}}-q^{-2\frac{\partial}{\partial {z}}}\right) \left(q^{-2\ell\frac{\partial}{\partial {z}}}-1 \right)\log f(-z).\end{aligned}$$ From Lemma [Lemma 134](#lem:com-formulas3){reference-type="ref" reference="lem:com-formulas3"}, we have that $$\begin{aligned} &[\widetilde{h}_1^-(z_1+(\ell-1)\hbar),Y_{\widehat{\ell}}(f_1^\pm,z_2)] =Y_{\widehat{\ell}}(f_1^\pm,z_2)\iota_{z_1,z_2}\gamma_1(z_1-z_2),\end{aligned}$$ where $$\begin{aligned} \gamma_1(z)=\pm \log f(z)^{q^{-2} +1-q^{2\ell-2}-q^{2\ell} }.\end{aligned}$$ Notice that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)\\ =&\mathop{\mathrm{Res}}_zz^{-1}\left(q^{-2\frac{\partial}{\partial {z}}}-q^{2\frac{\partial}{\partial {z}}}\right)\left(q^{2\ell\frac{\partial}{\partial {z}}}-1\right) \log f_0(z)\\ =&\log\frac{f_0(2(\ell-1)\hbar)}{f_0(2(\ell+1)\hbar)},\end{aligned}$$ and that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^{-1}\gamma_1(z)\\ =&\pm\mathop{\mathrm{Res}}_zz^{-1}\left( q^{-2\frac{\partial}{\partial {z}}}+1-q^{2(\ell-1)\frac{\partial}{\partial {z}}}-q^{2\ell\frac{\partial}{\partial {z}}} \right) \log f(z)\\ =&\pm\mathop{\mathrm{Res}}_zz^{-1}\left( q^{-2\frac{\partial}{\partial {z}}}+1-q^{2(\ell-1)\frac{\partial}{\partial {z}}}-q^{2\ell\frac{\partial}{\partial {z}}} \right) \log f_0(z)\\ =&\pm\log\frac{f_0(2\hbar)f_0(0)}{f_0(2(\ell-1)\hbar) f_0(2\ell\hbar)} =\pm\log\frac{f_0(2\hbar)}{f_0(2(\ell-1)\hbar) f_0(2\ell\hbar)}.\end{aligned}$$ The by utilizing Lemma [Lemma 91](#lem:exp-cal){reference-type="ref" reference="lem:exp-cal"}, we complete the proof. ◻ **Lemma 142**. 
*We have that $$\begin{aligned} &Y_{\widehat{\ell}}\left(\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+,z\right)f_1^-\\ =&\exp\left(\widetilde{h}_1^+(z+(\ell-1)\hbar)\right)Y_{\widehat{\ell}}(f_1^+,z)f_1^- \otimes f(z)^{q^{2\ell-2}+q^{2\ell}-q^{-2}-1}\\ &\times \frac{f_0(2\hbar)}{f_0(2(\ell-1)\hbar)^{\frac{1}{2}}f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)^{\frac{1}{2}}}.\end{aligned}$$* *Proof.* From Lemma [Lemma 141](#lem:Y-exp-wt-h-x){reference-type="ref" reference="lem:Y-exp-wt-h-x"}, we get that $$\begin{aligned} \label{eq:Y-E-wt-h-x+-x--temp} &Y_{\widehat{\ell}}\left(\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+,z\right)f_1^-\\ =&\exp\left(\widetilde{h}_1^+(z+(\ell-1)\hbar)\right)Y_{\widehat{\ell}}(f_1^+,z) \exp\left(\widetilde{h}_1^-(z+(\ell-1)\hbar)\right)f_1^-\nonumber\\ &\times \frac{f_0(2\hbar)}{f_0(2(\ell-1)\hbar)^{\frac{1}{2}}f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)^{\frac{1}{2}}}.\nonumber\end{aligned}$$ From Lemma [Lemma 134](#lem:com-formulas3){reference-type="ref" reference="lem:com-formulas3"}, we get that $$\begin{aligned} &\exp\left(\widetilde{h}_1^-(z_1+(\ell-1)\hbar)\right)Y_{\widehat{\ell}}(f_1^-,z_2){{\bf 1}}\\ =&Y_{\widehat{\ell}}(f_1^-,z_2){{\bf 1}}\otimes f(z_1-z_2)^{q^{2\ell-2}+q^{2\ell}-q^{-2}-1}.\end{aligned}$$ Taking $z_2=0$, we get that $$\begin{aligned} &\exp\left(\widetilde{h}_1^-(z+(\ell-1)\hbar)\right)f_1^- =f_1^-\otimes f(z)^{q^{2\ell-2}+q^{2\ell}-q^{-2}-1}.\end{aligned}$$ Combining with [\[eq:Y-E-wt-h-x+-x\--temp\]](#eq:Y-E-wt-h-x+-x--temp){reference-type="eqref" reference="eq:Y-E-wt-h-x+-x--temp"}, we complete the proof of lemma. ◻ **Lemma 143**. *We have that $$\begin{aligned} &S_{\ell,1}(z)(f_1^-\otimes f_1^+) =f_1^-\otimes f_1^+\otimes f(z)^{q^{\ell+1}+q^{\ell-1}-q^{1-\ell}-q^{-\ell-1}}.\end{aligned}$$* *Proof.* From [\[eq:multqyb-hex1\]](#eq:multqyb-hex1){reference-type="eqref" reference="eq:multqyb-hex1"} and [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"}, we have that $$\begin{aligned} &S_{\ell,1}(z)\left(Y(x_1^-,z_1)Y(x_1^-,z_2)\cdots Y(x_1^-,z_\ell){{\bf 1}}\otimes x_1^+\right)\\ =&Y(x_1^-,z_1)Y(x_1^-,z_2)\cdots Y(x_1^-,z_\ell){{\bf 1}}\otimes x_1^+ \otimes\prod_{a=1}^\ell e^{z_a\frac{\partial}{\partial {z}}}f(z)^{q^{2}-q^{-2}}.\end{aligned}$$ Recall from Proposition [Proposition 70](#prop:normal-ordering-rel-general){reference-type="ref" reference="prop:normal-ordering-rel-general"} (4), we have that $$\begin{aligned} &\mathop{\mathrm{Rat}}_{z_1^{-1},\dots,z_\ell^{-1}} Y(x_1^-,z_1)Y(x_1^-,z_2)\cdots Y(x_1^-,z_\ell){{\bf 1}}\\ =&\prod_{a=1}^\ell\frac{z_a}{z_a-2(\ell-a)\hbar}\left(x_1^-\right)_{-1}^\ell{{\bf 1}}.\end{aligned}$$ Then $$\begin{aligned} &S_{\ell,1}(z)(f_1^-\otimes f_1^+)=\sqrt{c_{1,\ell}} S_{\ell,1}(z)\left(q^{(1-\ell)\partial}\left(x_1^-\right)_{-1}^\ell{{\bf 1}}\otimes x_1^+\right)\\ =&\mathop{\mathrm{Res}}_{z_1,\dots,z_\ell}z_1^{-1}\cdots z_\ell^{-1}\sqrt{c_{1,\ell}}\\ &\quad S_{\ell,1}(z)\left(q^{(1-\ell)\partial}Y(x_1^-,z_1)Y(x_1^-,z_2)\cdots Y(x_1^-,z_\ell){{\bf 1}}\otimes x_1^+\right)\\ =&f_1^-\otimes f_1^+\otimes\prod_{a=1}^\ell f(z)^{(q^2-q^{-2})q^{2(\ell-a)}q^{1-\ell}}\\ =&f_1^- \otimes f_1^+\otimes f(z)^{q^{\ell+1}+q^{\ell-1}-q^{1-\ell}-q^{-\ell-1}}.\end{aligned}$$ We complete the proof of the lemma. 
◻ Similarly to the definition of $E^+(\beta_1,z)$, we define $$\begin{aligned} h_1^+(z)=\sum_{n\in{\mathbb{Z}}_+}h_1(-n)z^{n-1},\quad E^+(h_1,z)=\exp\left( \sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n}z^n \right).\end{aligned}$$ From the definition of $\widetilde{h}_1^+(z)$, we have that $$\begin{aligned} \label{eq:exp-wt-h=E-E} &\exp\left(\widetilde{h}_1^+(z)\right)=E^+(h_1,z-(\ell+1)\hbar)E^+(-h_1,z+(1-\ell)\hbar).\end{aligned}$$ **Lemma 144**. *For $a\in{\mathbb{C}}$, we have that $$\begin{aligned} &\exp\left(q^{a\partial}\left( \frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}} =q^{a\partial}E^+(h_1,z){{\bf 1}}\\ &\quad\times f_0(z)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)} \prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar).\end{aligned}$$* *Proof.* Set $$\begin{aligned} &\widehat{h}_1^\pm(z,w)=\frac{e^{w\frac{\partial}{\partial {z}}}-1}{\frac{\partial}{\partial {z}}}q^{a\frac{\partial}{\partial {z}}} h_1^\pm(z).\end{aligned}$$ Then we deduce from [\[eq:com-formulas-1\]](#eq:com-formulas-1){reference-type="eqref" reference="eq:com-formulas-1"} that $$\begin{aligned} &[\widehat{h}_1^-(z_1,w),\widehat{h}_1^+(z_2,w)] =[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}\\ &\times q^{-\ell\frac{\partial}{\partial {z_2}}}\left(e^{-w\frac{\partial}{\partial {z_2}}}-1\right)\left(e^{w\frac{\partial}{\partial {z_2}}}-1\right)\log f(z_1-z_2).\end{aligned}$$ That is $$\begin{aligned} &[\widehat{h}_1^-(z_1,w),\widehat{h}_1^+(z_2,w)] =\gamma(z_2-z_1),\end{aligned}$$ where $$\begin{aligned} \gamma(z)=&[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{-\ell\frac{\partial}{\partial {z}}}\left(2-e^{-w\frac{\partial}{\partial {z}}}-e^{w\frac{\partial}{\partial {z}}}\right)\log f(-z).\end{aligned}$$ Notice that $$\begin{aligned} &\mathop{\mathrm{Res}}_zz^{-1}\gamma(-z)\\ =&\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\left(2-e^{w\frac{\partial}{\partial {z}}}-e^{-w\frac{\partial}{\partial {z}}}\right)\log f(z)\\ =&\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\left(2-e^{w\frac{\partial}{\partial {z}}}-e^{-w\frac{\partial}{\partial {z}}}\right)\log f_0(z)\\ =&2\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\log f_0(z)\\ &-\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\log f_0(z+w)\\ &-\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\log f_0(w-z)\\ =&2\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{\ell\frac{\partial}{\partial {z}}}\log f_0(z)\\ &-\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {w}}}}[\ell]_{q^{\frac{\partial}{\partial {w}}}}q^{\ell\frac{\partial}{\partial {w}}}\log f_0(z+w)\\ &-\mathop{\mathrm{Res}}_zz^{-1}[2]_{q^{\frac{\partial}{\partial {w}}}}[\ell]_{q^{\frac{\partial}{\partial {w}}}}q^{-\ell\frac{\partial}{\partial {w}}}\log f_0(w-z)\\ =&2\log\prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar) -\log f_0(w)^{[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)}\end{aligned}$$ Then we get from Lemma [Lemma 
91](#lem:exp-cal){reference-type="ref" reference="lem:exp-cal"} that $$\begin{aligned} &Y_{\widehat{\ell}}\left(\exp\left(\left( q^{a\partial}\frac{e^{w\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}},z\right){{\bf 1}}\\ =&\exp\left(\left(\widehat{h}_1^+(z,w)+\widehat{h}_1^-(z,w)\right)_{-1}\right){{\bf 1}} =\exp\left(\widehat{h}_1^+(z,w)\right)\exp\left(\widehat{h}_1^-(z,w)\right){{\bf 1}}\\ &\quad\times f_0(w)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)} \prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar)\\ =&\exp\left(\widehat{h}_1^+(z,w)\right){{\bf 1}} f_0(w)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)} \prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar) .\end{aligned}$$ Taking $z=0$, we get that $$\begin{aligned} \label{eq:E+=exp-wh-h-temp} &\exp\left(q^{a\partial}\left( \frac{e^{w\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}}\\ =&\exp\left(\widehat{h}_1^+(0,w)\right){{\bf 1}} f_0(w)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)} \prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar)\nonumber\\ =&E^+(h_1,w+a\hbar)E^+(-h_1,a\hbar){{\bf 1}} f_0(w)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^\ell+q^{-\ell}\right)}\nonumber\\ &\quad\times \prod_{a=0}^{\ell-1}f_0(2a\hbar)f_0(2(a+1)\hbar),\nonumber\end{aligned}$$ where the last equation follows from the following fact $$\begin{aligned} &\widehat{h}_1^+(0,w)=q^{a\frac{\partial}{\partial {z}}}\frac{e^{w\frac{\partial}{\partial {z}}}-1}{\frac{\partial}{\partial {z}}}h_1^+(z)|_{z=0}\\ =&\frac{e^{(w+a\hbar)\frac{\partial}{\partial {z}}}-1}{\frac{\partial}{\partial {z}}}h_1^+(z)|_{z=0} -\frac{e^{a\hbar\frac{\partial}{\partial {z}}}-1}{\frac{\partial}{\partial {z}}}h_1^+(z)|_{z=0}\\ =&\sum_{n\in{\mathbb{Z}}_+}(w+a\hbar)^n \frac{1}{n!}\frac{\partial^{n-1}}{\partial z^{n-1}}h_1^+(z)|_{z=0} -\sum_{n\in{\mathbb{Z}}_+}(a\hbar)^n \frac{1}{n!}\frac{\partial^{n-1}}{\partial z^{n-1}}h_1^+(z)|_{z=0}\\ =& \sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n}(w+a\hbar)^n -\sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n}(a\hbar)^n.\end{aligned}$$ Notice that $$\begin{aligned} &q^{a\partial}\sum_{n\in{\mathbb{Z}}_+}h_1(-n) z^{n-1}=q^{a\partial}Y(h_1,z)^+=Y(h_1,z+a\hbar)^+ q^{a\partial}\\ =&\sum_{n\in{\mathbb{Z}}_+}h_1(-n)\sum_{k=0}^{n-1}\binom{n-1}{k}z^{n-1-k}(a\hbar)^kq^{a\partial}\\ =&\sum_{n\in{\mathbb{Z}}_+}\sum_{k\in{\mathbb{N}}}h_1(-n-k)\binom{n+k-1}{k}z^{n-1}(a\hbar)^kq^{a\partial}.\end{aligned}$$ It shows that $$\begin{aligned} &q^{a\partial}h_1(-n)=\sum_{k\in{\mathbb{N}}}h_1(-n-k)\binom{n+k-1}{k}(a\hbar)^kq^{a\partial}.\end{aligned}$$ Then we have that $$\begin{aligned} &q^{a\partial}\sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n}z^n\\ =&\sum_{n\in{\mathbb{Z}}_+} \sum_{k\in{\mathbb{N}}}\frac{h_1(-n-k)}{n}\binom{n+k-1}{k}(a\hbar)^k z^n q^{a\partial}\\ =&\sum_{n\in{\mathbb{Z}}_+} \sum_{k\in{\mathbb{N}}}\frac{h_1(-n-k)}{n+k}\binom{n+k}{k}(a\hbar)^k z^n q^{a\partial}\\ =&\sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n} \sum_{k=0}^{n-1}\binom{n}{k}(a\hbar)^k z^{n-k} q^{a\partial}\\ =&\sum_{n\in{\mathbb{Z}}_+}\frac{h_1(-n)}{n}\left((z+a\hbar)^n-(a\hbar)^n\right)q^{a\partial}.\end{aligned}$$ It follows that $$\begin{aligned} &q^{a\partial}E^+(h_1,z)q^{-a\partial}=E^+(h_1,z+a\hbar)E^+(-h_1,a\hbar).\end{aligned}$$ Combining this with [\[eq:E+=exp-wh-h-temp\]](#eq:E+=exp-wh-h-temp){reference-type="eqref" reference="eq:E+=exp-wh-h-temp"}, we complete the proof. ◻ **Lemma 145**. 
*Consider the $\hbar$-adic quantum VA homomorphism $\Delta:L_{\hat{\mathfrak g},\hbar}^{\ell+1}\to L_{\hat{\mathfrak g},\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak g},\hbar}^1$, we have that $$\begin{aligned} \Delta&\left(E^+(h_1,z){{\bf 1}}\right)= q^{-\partial}E^+(h_1,z){{\bf 1}}\otimes q^{\ell\partial} E^+(h_1,z){{\bf 1}}.\end{aligned}$$* *Proof.* Set $$\begin{aligned} a=q^{-\partial}\frac{e^{z\partial}-1}{\partial}h_1\otimes{{\bf 1}},\quad b={{\bf 1}}\otimes q^{\ell\partial}\frac{e^{z\partial}-1}{\partial}h_1\end{aligned}$$ and $\widehat{h}_1=a+b$. Recall the definition of $S_\Delta(z)=S_{1,\ell}^{23}(z)S_{\ell,\ell}^{13}(z)S_{1,1}^{24}(z)S_{\ell,1}^{14}(z)$ (see [\[eq:def-S-Delta\]](#eq:def-S-Delta){reference-type="eqref" reference="eq:def-S-Delta"}). Then we get that $$\begin{aligned} &S_\Delta(w)(b\otimes a)\\ =&S_{1,\ell}^{23}(w)S_{\ell,\ell}^{13}(w)S_{1,1}^{24}(w)S_{\ell,1}^{14}(w)\\ &\quad\left({{\bf 1}}\otimes q^{\ell\partial}\frac{e^{z\partial}-1}{\partial}h_1\otimes q^{-\partial}\frac{e^{z\partial}-1}{\partial}h_1 \otimes{{\bf 1}}\right)\\ =&\left(1\otimes q^{\ell\partial+\ell\frac{\partial}{\partial {w}}}\frac{e^{z\partial+z\frac{\partial}{\partial {w}}}-1}{\partial+\frac{\partial}{\partial {w}}}\otimes q^{-\partial+\frac{\partial}{\partial {w}}}\frac{e^{z\partial-z\frac{\partial}{\partial {w}}}-1}{\partial-\frac{\partial}{\partial {w}}}\otimes 1\right)\\ &\quad\times S_{1,\ell}^{23}(w)({{\bf 1}}\otimes h_1\otimes h_1\otimes{{\bf 1}})\\ =&b\otimes a+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes q^{(\ell+1)\frac{\partial}{\partial {w}}}\frac{e^{z\frac{\partial}{\partial {w}}}-1}{\frac{\partial}{\partial {w}}}\frac{e^{-z\frac{\partial}{\partial {w}}}-1}{-\frac{\partial}{\partial {w}}}\\ &\quad\times [2]_{q^{\frac{\partial}{\partial {w}}}}[\ell]_{q^{\frac{\partial}{\partial {w}}}}\left(q^{\frac{\partial}{\partial {w}}}-q^{-\frac{\partial}{\partial {w}}}\right)\frac{\partial^{2}}{\partial w^{2}}\log f(w)\\ =&b\otimes a+{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes{{\bf 1}}\otimes [2]_{q^{\frac{\partial}{\partial {w}}}}[\ell]_{q^{\frac{\partial}{\partial {w}}}}q^{(\ell+1)\frac{\partial}{\partial {w}}}\left(q^{\frac{\partial}{\partial {w}}}-q^{-\frac{\partial}{\partial {w}}}\right)\\ &\quad\times\left(e^{z\frac{\partial}{\partial {w}}}+e^{-z\frac{\partial}{\partial {w}}}-2\right) \log f(w),\end{aligned}$$ where the second equation follows from Lemma [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and the third equation follows from [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"}. 
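The last equality of the preceding computation only uses that all the operators in the scalar factor commute, so that the two formal inverses of $\frac{\partial}{\partial {w}}$ cancel against $\frac{\partial^{2}}{\partial w^{2}}$; explicitly, $$\frac{e^{z\frac{\partial}{\partial {w}}}-1}{\frac{\partial}{\partial {w}}}\cdot \frac{e^{-z\frac{\partial}{\partial {w}}}-1}{-\frac{\partial}{\partial {w}}}\cdot \frac{\partial^{2}}{\partial w^{2}} =-\left(e^{z\frac{\partial}{\partial {w}}}-1\right)\left(e^{-z\frac{\partial}{\partial {w}}}-1\right) =e^{z\frac{\partial}{\partial {w}}}+e^{-z\frac{\partial}{\partial {w}}}-2,$$ applied to $\log f(w)$.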
Then $$\begin{aligned} &[Y_\Delta(a,z_1),Y_\Delta(b,z_2)]w\\ =&Y_\Delta(a,z_1)Y_\Delta(b,z_2)w-Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)\\ &+Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-Y_\Delta(b,z_2)Y_\Delta(a,z_1)w\\ =&Y_\Delta(Y_\Delta(a,z_1-z_2)b-Y_\Delta(a,-z_2+z_1)b,z_2)\\ &+Y_\Delta(z_2)Y_\Delta^{23}(z_1)S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-Y_\Delta(b,z_2)Y_\Delta(a,z_1)w\\ =&Y_\Delta(z_2)Y_\Delta^{23}(z_1)\left(S_\Delta^{12}(z_2-z_1)(b\otimes a\otimes w)-b\otimes a\otimes w\right)\\ =&w\otimes\iota_{z_2,z_1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right)\\ &\quad\times\left(e^{z\frac{\partial}{\partial {z_2}}}+e^{-z\frac{\partial}{\partial {z_2}}}-2\right) \log f(z_2-z_1),\end{aligned}$$ where the second equation follows from [@Li-h-adic (2.25)]. By applying $\mathop{\mathrm{Res}}_{z_1,z_2}z_1^{-1}z_2^{-1}$, we get that $$\begin{aligned} &[a_{-1},b_{-1}]\\ =&\mathop{\mathrm{Res}}_{z_1,z_2}z_1^{-1}z_2^{-1}\iota_{z_2,z_1} [2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right)\\ &\times\left(e^{z\frac{\partial}{\partial {z_2}}}+e^{-z\frac{\partial}{\partial {z_2}}}-2\right) \log f(z_2-z_1)\\ =&\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right)\\ &\times\left(e^{z\frac{\partial}{\partial {z_2}}}+e^{-z\frac{\partial}{\partial {z_2}}}-2\right) \log f(z_2)\\ =&\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right) \log f_0(z_2+z)\\ &+\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right) \log f_0(z_2-z)\\ &-2\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right) \log f_0(z_2)\\ =&\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{(\ell+1)\frac{\partial}{\partial {z}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right) \log f_0(z_2+z)\\ &-\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z}}}}[\ell]_{q^{\frac{\partial}{\partial {z}}}}q^{-(\ell+1)\frac{\partial}{\partial {z}}}\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right) \log f_0(z-z_2)\\ &-2\mathop{\mathrm{Res}}_{z_2}z_2^{-1}[2]_{q^{\frac{\partial}{\partial {z_2}}}}[\ell]_{q^{\frac{\partial}{\partial {z_2}}}}q^{(\ell+1)\frac{\partial}{\partial {z_2}}}\left(q^{\frac{\partial}{\partial {z_2}}}-q^{-\frac{\partial}{\partial {z_2}}}\right) \log f_0(z_2)\\ =&2\log f_0(2\hbar)f_0(0)-2\log f_0(2(\ell+1)\hbar)f_0(2\ell\hbar)\\ &+\log f_0(z)^{ 
[2]_q\left(q^\ell-q^{-\ell}\right)\left(q^{\ell+1}-q^{-\ell-1}\right) }.\end{aligned}$$ From Baker-Campbell-Hausdorff formula, we have that $$\begin{aligned} &\Delta(E^+(h_1,z){{\bf 1}}) =\Delta\left(\exp\left(\left( \frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}}\right) \\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ =&\exp\left(a_{-1}+b_{-1}\right)\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ =&\exp\left(a_{-1}\right)\exp\left(b_{-1}\right) \exp\left(-[a_{-1},b_{-1}]/2\right)\left({{\bf 1}}\otimes{{\bf 1}}\right)\\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ =&\exp\left(\left( q^{-\partial}\frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}} \otimes\exp\left(\left( q^{\ell\partial}\frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}}\\ &\quad\times \frac{f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)}{f_0(0)f_0(2\hbar)} f_0(z)^{ -{\frac{1}{2}}[2]_q\left(q^\ell-q^{-\ell}\right)\left(q^{\ell+1}-q^{-\ell-1}\right) } \\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ =&\exp\left(\left( q^{-\partial}\frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}} \otimes\exp\left(\left( q^{\ell\partial}\frac{e^{z\partial}-1}{\partial}h_1 \right)_{-1}\right){{\bf 1}}\\ &\quad\times \frac{f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)}{f_0(0)f_0(2\hbar)} f_0(z)^{ -{\frac{1}{2}}[2]_q\left(q^\ell-q^{-\ell}\right)\left(q^{\ell+1}-q^{-\ell-1}\right) }\\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&q^{-\partial}E^+(h_1,z){{\bf 1}}\otimes q^{\ell\partial}E^+(h_1,z){{\bf 1}}\\ &\quad\times f_0(z)^{-{\frac{1}{2}}[2]_q[\ell]_q\left(q^{\ell}+q^{-\ell}\right)} \prod_{a=0}^{\ell-1} f_0(2a\hbar) f_0(2(a+1)\hbar) \\ &\quad\times f_0(z)^{-{\frac{1}{2}}[2]_q^2}f_0(0) f_0(2\hbar) \\ &\quad\times \frac{f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)}{f_0(0)f_0(2\hbar)} f_0(z)^{ -{\frac{1}{2}}[2]_q\left(q^\ell-q^{-\ell}\right)\left(q^{\ell+1}-q^{-\ell-1}\right) }\\ &\quad\times f_0(z)^{{\frac{1}{2}}[2]_q[\ell+1]_q\left(q^{\ell+1}+q^{-\ell-1}\right)} \prod_{a=0}^\ell f_0(2a\hbar)^{-1}f_0(2(a+1)\hbar)^{-1}\\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% =&q^{-\partial}E^+(h_1,z){{\bf 1}}\otimes q^{\ell\partial}E^+(h_1,z){{\bf 1}},\end{aligned}$$ where the first equation and the sixth equation follow from Lemma [Lemma 144](#lem:E+=exp-wh-h){reference-type="ref" reference="lem:E+=exp-wh-h"}. ◻ **Proposition 146**. *Let us assume that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} is valid for $\ell$. 
The relations [\[eq:AQ-sp6\]](#eq:AQ-sp6){reference-type="eqref" reference="eq:AQ-sp6"} and [\[eq:AQ-sp7\]](#eq:AQ-sp7){reference-type="eqref" reference="eq:AQ-sp7"} hold true for $\ell+1$.* *Proof.* From Lemma [Lemma 131](#lem:Delta-f){reference-type="ref" reference="lem:Delta-f"}, we have that $$\begin{aligned} &Y_\Delta(\Delta(f_1^+),z)\Delta(f_1^-) =Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z)S_{\ell,1}^{23}(-z)\\ &\quad\left( q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes q^{-\partial}f_1^- \otimes q^{\ell\partial}f_1^+ \otimes q^{\ell\partial}f_1^-\right)\\ &\quad\times \frac{f_0(2(\ell-1)\hbar)^{\frac{1}{2}}f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)^{\frac{1}{2}}}{f_0(2\hbar)} \\ =&Y_{\widehat{\ell}}^{12}(z)Y_{\widehat{1}}^{34}(z) \left( q^{\partial}\exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+ \otimes q^{-\partial}f_1^- \otimes q^{\ell\partial}f_1^+ \otimes q^{\ell\partial}f_1^-\right)\\ &\quad \otimes f(-z)^{1+q^{2}-q^{2\ell}-q^{2+2\ell}} \frac{f_0(2(\ell-1)\hbar)^{\frac{1}{2}}f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)^{\frac{1}{2}}}{f_0(2\hbar)}\\ =&q^{-\partial}Y_{\widehat{\ell}}\left( \exp\left(\widetilde{h}_1^+((\ell-1)\hbar)\right)f_1^+,z+2\hbar \right) f_1^-\otimes q^{\ell\partial}Y_{\widehat{1}}\left(f_1^+,z\right)f_1^- \\ &\quad \otimes f(z)^{1+q^{2}-q^{2\ell}-q^{2+2\ell}} \frac{f_0(2(\ell-1)\hbar)^{\frac{1}{2}}f_0(2\ell\hbar)f_0(2(\ell+1)\hbar)^{\frac{1}{2}}}{f_0(2\hbar)}\\ =&q^{-\partial}\exp\left(\widetilde{h}_1^+(z+(\ell+1)\hbar)\right)Y_{\widehat{\ell}}( f_1^+,z+2\hbar )f_1^- \otimes q^{\ell\partial}Y_{\widehat{1}}\left(f_1^+,z\right)f_1^-,\end{aligned}$$ where the second equation follows from Lemmas [Lemma 143](#lem:S-x-ell-x+){reference-type="ref" reference="lem:S-x-ell-x+"} and [Lemma 27](#lem:multqyb-shift-total){reference-type="ref" reference="lem:multqyb-shift-total"} and the last equation follows from Lemma [Lemma 142](#lem:Y-E-wt-h-x+-x-){reference-type="ref" reference="lem:Y-E-wt-h-x+-x-"}. Proposition [Proposition 130](#prop:level-1-ssl2-case){reference-type="ref" reference="prop:level-1-ssl2-case"} provides that $$\begin{aligned} &Y_{\widehat{1}}(f_1^+,z)f_1^-=E^+(h_1,z){{\bf 1}}\otimes f(z)^{-1-q^2}.\end{aligned}$$ From the induction assumption, we have that $$\begin{aligned} &Y_{\widehat{\ell}}( f_1^+,z+2\hbar )f_1^- =E^+(h_1,z+2\hbar){{\bf 1}} f(z)^{-q^{\ell+2}(q+q^{-1})[\ell]_q}.\end{aligned}$$ Combining these equations, we get that $$\begin{aligned} &Y_\Delta(\Delta(f_1^+),z)\Delta(f_1^-)\\ =&q^{-\partial}\exp\left(\widetilde{h}_1^+(z+(\ell+1)\hbar)\right)Y_{\widehat{\ell}}( f_1^+,z+2\hbar )f_1^- \otimes q^{\ell\partial}Y_{\widehat{1}}\left(f_1^+,z\right)f_1^-\\ =&q^{-\partial}\exp\left(\widetilde{h}_1^+(z+(\ell+1)\hbar)\right)E^+(h_1,z+2\hbar){{\bf 1}} \otimes q^{\ell\partial}E^+(h_1,z){{\bf 1}}\\ &\quad\otimes f(z)^{-1-q^2-q^{\ell+2}(q+q^{-1})[\ell]_q}\\ =&q^{-\partial}E^+(h_1,z){{\bf 1}}\otimes q^{\ell\partial}E^+(h_1,z){{\bf 1}} \otimes f(z)^{-[2]_q[\ell+1]_qq^{\ell+1}}\\ =&\Delta\left(E^+(h_1,z){{\bf 1}}\right)\otimes f(z)^{-[2]_q[\ell+1]_qq^{\ell+1}},\end{aligned}$$ where the third equation follows from [\[eq:exp-wt-h=E-E\]](#eq:exp-wt-h=E-E){reference-type="eqref" reference="eq:exp-wt-h=E-E"} and the last equation follows from Lemma [Lemma 145](#lem:Delta-E+){reference-type="ref" reference="lem:Delta-E+"}. Since the map $\Delta:L_{\hat{\mathfrak{sl}}_2,\hbar}^{\ell+1}\to L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell\widehat{\otimes}L_{\hat{\mathfrak{sl}}_2,\hbar}^1$ is injective, we complete the proof of the proposition. 
◻ **Proposition 147**. *Let us assume that Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} is valid for $\ell$. Then Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} holds true for $\ell+1$.* *Proof.* From Lemma [Lemma 118](#lem:AQ-sp1-4){reference-type="ref" reference="lem:AQ-sp1-4"} and Propositions [Proposition 140](#prop:AQ-sp5){reference-type="ref" reference="prop:AQ-sp5"}, [Proposition 146](#prop:AQ-sp6-7){reference-type="ref" reference="prop:AQ-sp6-7"}, we have that $$\begin{aligned} &\left( L_{\hat{\mathfrak{sl}}_2,\hbar}^{\ell+1},Y_{\widehat{\ell+1}}(h_1,z), Y_{\widehat{\ell+1}}(f_1^\pm,z) \right)\in\mathop{\mathrm{obj}}\mathcal A_\hbar^{\eta_{\ell+1}}(\sqrt{\ell+1}{\mathbb{Z}}\alpha_1).\end{aligned}$$ By utilizing Proposition [Proposition 50](#prop:qlattice-universal-property){reference-type="ref" reference="prop:qlattice-universal-property"}, we complete the proof of the proposition. ◻ Using Proposition [Proposition 130](#prop:level-1-ssl2-case){reference-type="ref" reference="prop:level-1-ssl2-case"} and [Proposition 147](#prop:level-ell-to-level-ell+1){reference-type="ref" reference="prop:level-ell-to-level-ell+1"}, we complete the proof of Theorem [Theorem 114](#thm:qlattice-inj){reference-type="ref" reference="thm:qlattice-inj"} for $L_{\hat{\mathfrak{sl}}_2,\hbar}^\ell$ ($\ell\in{\mathbb{Z}}_+$), through induction on $\ell$. ## Proof of Proposition [Proposition 123](#prop:W){reference-type="ref" reference="prop:W"} {#subsec:pf-prop-W} **Lemma 148**. *For $i\in I$ and $\ell\in{\mathbb{C}}^\times$, we have that $$\begin{aligned} \mathop{\mathrm{Sing}}_zW_i(z)=0.\end{aligned}$$* *Proof.* Notice that $$\begin{aligned} &\mathop{\mathrm{Sing}}_z\exp\left(\left(\frac{1-e^{z\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}}h_i\right)_{-1}\right)Y_{\widehat{\ell}}(x_i^+,z)x_i^-\nonumber\\ =&\mathop{\mathrm{Sing}}_z\exp\left(\left(\frac{1-e^{z\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}}h_i\right)_{-1}\right)Y_{\widehat{\ell}}(x_i^+,z)^-x_i^-\nonumber\\ =&-\mathop{\mathrm{Sing}}_z\frac{1}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\exp\left(\left(\frac{1-e^{z\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}}h_i\right)_{-1}\right)E_\ell(h_i)\nonumber\\ &+\mathop{\mathrm{Sing}}_z\frac{1}{(q^{r_i}-q^{-r_i})z}\exp\left(\left(\frac{1-e^{z\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}}h_i\right)_{-1}\right){{\bf 1}}\nonumber\\ =&-\frac{1}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\exp\left(\left(\frac{1-q^{-2r\ell\partial}}{\partial [r\ell/r_i]_{q^{r_i\partial}}}h_i\right)_{-1}\right)E_\ell(h_i)\nonumber\\ &+\frac{1}{(q^{r_i}-q^{-r_i})z}{{\bf 1}}\nonumber\\ =&-\frac{\exp\left(\left(q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^\partial}h_i\right)_{-1}\right) E_\ell(h_i)} {(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\nonumber\\ &+\frac{1}{(q^{r_i}-q^{-r_i})z}{{\bf 1}}\nonumber\\ =&-\frac{1}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)} \left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\label{eq:Sing-W-temp1}\\ &\quad\times\exp\left(\left(q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^\partial}h_i\right)_{-1}\right)\nonumber\\ &\quad\times\exp\left(\left(-q^{-r\ell\partial}2\hbar f_0(2\partial\hbar)[r_i]_{q^\partial}h_i\right)_{-1}\right){{\bf 1}}\nonumber\\ &+\frac{1}{(q^{r_i}-q^{-r_i})z}{{\bf 1}}\nonumber\\ =&-\left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}} \frac{{{\bf 1}}}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)} +\frac{1}{(q^{r_i}-q^{-r_i})z}{{\bf 
1}}\nonumber.\end{aligned}$$ where the equation [\[eq:Sing-W-temp1\]](#eq:Sing-W-temp1){reference-type="eqref" reference="eq:Sing-W-temp1"} follows from the definition of $E_\ell(h_i)$ (see [\[eq:def-E-h\]](#eq:def-E-h){reference-type="eqref" reference="eq:def-E-h"}). From the definition of $W_i(z)$, we complete the proof. ◻ **Lemma 149**. *For $i,j\in I$ and $\ell\in{\mathbb{C}}^\times$, we have that $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^- W_j(z_1)=0.\label{eq:W-parafermion-2}\end{aligned}$$* *Proof.* From [\[eq:local-h-2\]](#eq:local-h-2){reference-type="eqref" reference="eq:local-h-2"}, we have that $$\begin{aligned} &[Y_{\widehat{\ell}}(h_i,z)^-,Y_{\widehat{\ell}}(x_j^\pm,z_1)]\\ =&\pm \mathop{\mathrm{Sing}}_{z}Y_{\widehat{\ell}}(x_j^\pm,z_1)[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{\partial}{\partial {z}}\log f(z-z_1)\\ =&\pm \mathop{\mathrm{Sing}}_{z}Y_{\widehat{\ell}}(x_j^\pm,z_1)[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{1+e^{-z+z_1}}{2-2e^{-z+z_1}}\\ =&\pm Y_{\widehat{\ell}}(x_{j,\hbar}^\pm,z_1)[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{1}{z-z_1}.\end{aligned}$$ By applying on ${{\bf 1}}$ and taking $z_1\to 0$, we get that $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)^-x_j^\pm =\pm x_j^\pm [a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\frac{1}{z}.\end{aligned}$$ Then $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^-Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\nonumber\\ =&[Y_{\widehat{\ell}}(h_i,z)^-,Y_{\widehat{\ell}}(x_j^\pm,z_1)]x_j^- +Y_{\widehat{\ell}}(x_j^+,z_1)Y_{\widehat{\ell}}(h_i,z)^-x_j^-\nonumber\\ =&Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(\frac{1}{z-z_1} -\frac{1}{z}\right).\label{eq:W-parafermion-1}\end{aligned}$$ From [\[eq:local-h-1\]](#eq:local-h-1){reference-type="eqref" reference="eq:local-h-1"}, we have that $$\begin{aligned} &\left[Y_{\widehat{\ell}}(h_i,z)^-, \mathop{\mathrm{Res}}_{z_2}z_2^{-1}\frac{1-e^{z_1\frac{\partial}{\partial {z_2}}}}{\frac{\partial}{\partial {z_2}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z_2}}}}}Y_{\widehat{\ell}}(h_j,z_2)\right]\nonumber\\ =&\mathop{\mathrm{Sing}}_z[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(1-e^{-z_1\frac{\partial}{\partial {z}}}\right)\frac{\partial}{\partial {z}}\log f(z)\nonumber\\ =&\mathop{\mathrm{Sing}}_z[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(1-e^{-z_1\frac{\partial}{\partial {z}}}\right)\frac{1+e^{-z}}{2-2e^{-z}}\nonumber\\ =&[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(1-e^{-z_1\frac{\partial}{\partial {z}}}\right)\frac{1}{z}\nonumber\\ =&[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(\frac{1}{z}-\frac{1}{z-z_1}\right).\end{aligned}$$ Then $$\begin{aligned} &\left[Y_{\widehat{\ell}}(h_i,z)^-,\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial[r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)\right]\nonumber\\ =&\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial[r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right) \otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}q^{r\ell\frac{\partial}{\partial {z}}}\left(\frac{1}{z}-\frac{1}{z-z_1}\right).\label{eq:S-wtW-temp3}\end{aligned}$$ Combining [\[eq:W-parafermion-1\]](#eq:W-parafermion-1){reference-type="eqref" 
reference="eq:W-parafermion-1"} and [\[eq:S-wtW-temp3\]](#eq:S-wtW-temp3){reference-type="eqref" reference="eq:S-wtW-temp3"}, we get that $$\begin{aligned} &Y_{\widehat{\ell}}(h_i,z)^-\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial[r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^- =0.\end{aligned}$$ From the definition of $W_j(z)$ and the fact that $$\begin{aligned} Y_{\widehat{\ell}}(h_i,z)^-{{\bf 1}}=0,\end{aligned}$$ we complete the proof of the [\[eq:W-parafermion-2\]](#eq:W-parafermion-2){reference-type="eqref" reference="eq:W-parafermion-2"}. ◻ **Lemma 150**. *For $i,j\in I$ and $\ell,\ell'\in{\mathbb{C}}$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i\right) =Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i\label{eq:S-wtW-h}\\ &\quad+Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes{{\bf 1}}\otimes \frac{\partial}{\partial {z}}\log \left(f(z+z_1)/f(z)\right)^{[a_{ij}]_{q^{r_i}}[r\ell']_q(q-q^{-1})},\nonumber\\ &S_{\ell,\ell'}(z)\left(Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm\right)\label{eq:S-wtW-x}\\ =&Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm \otimes \left(f(z+z_1)/f(z)\right)^{\pm q^{-r_ia_{ij}}\mp q^{r_ia_{ij}}}.\nonumber %\frac{f(z+z_1\mp r_ia_{ij}\hbar)f(z\pm r_ia_{ij}\hbar)} % {f(z+z_1\pm r_ia_{ij}\hbar)f(z\mp r_ia_{ij}\hbar)}.\nonumber\end{aligned}$$* *Proof.* From [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"} and [\[eq:S-twisted-2\]](#eq:S-twisted-2){reference-type="eqref" reference="eq:S-twisted-2"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i\right)\\ =&Y_{\widehat{\ell}}^{12}(z_1)S_{\ell,\ell'}^{23}(z)S_{\ell,\ell'}^{13}(z+z_1)\left(x_j^+\otimes x_j^-\otimes h_i\right)\\ =&Y_{\widehat{\ell}}^{12}(z_1)\Big(x_j^+\otimes x_j^-\otimes h_i +x_j^+\otimes x_j^-\otimes{{\bf 1}}\\ &\quad\otimes \left(e^{z_1\frac{\partial}{\partial {z}}}-1\right)\frac{\partial}{\partial {z}}\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell']_{q}(q-q^{-1}) } \Big)\\ =&Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i +Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes{{\bf 1}}\\ &\quad\otimes\frac{\partial}{\partial {z}}\log \left(f(z+z_1)/f(z)\right)^{ [a_{ij}]_{q^{r_i}}[r\ell']_{q}(q-q^{-1}) },\end{aligned}$$ which proves [\[eq:S-wtW-h\]](#eq:S-wtW-h){reference-type="eqref" reference="eq:S-wtW-h"}. From [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"} and [\[eq:S-twisted-4\]](#eq:S-twisted-4){reference-type="eqref" reference="eq:S-twisted-4"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm\right)\\ =&Y_{\widehat{\ell}}^{12}(z_1)S_{\ell,\ell'}^{23}(z)S_{\ell,\ell'}^{13}(z+z_1)\left(x_j^+\otimes x_j^-\otimes x_i^\pm\right)\\ =&Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm\otimes \left(f(z+z_1)/f(z)\right)^{\pm q^{-r_ia_{ij}}\mp q^{r_ia_{ij}}},\end{aligned}$$ which proves [\[eq:S-wtW-x\]](#eq:S-wtW-x){reference-type="eqref" reference="eq:S-wtW-x"}. ◻ **Lemma 151**. 
*For $i,j\in I$ and $\ell,\ell'\in{\mathbb{C}}$ with $\ell\ne 0$, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)(W_j(z_1)\otimes h_i)=W_j(z_1)\otimes h_i,\label{eq:S-W-h}\\ &S_{\ell,\ell'}(z)(W_j(z_1)\otimes x_i^\pm)=W_j(z_1)\otimes x_i^\pm.\label{eq:S-W-x}\end{aligned}$$* *Proof.* From [\[eq:S-twisted-1\]](#eq:S-twisted-1){reference-type="eqref" reference="eq:S-twisted-1"} and [\[eq:qyb-shift-total1\]](#eq:qyb-shift-total1){reference-type="eqref" reference="eq:qyb-shift-total1"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes h_i\right)\\ =&\frac{1-e^{z_1\partial\otimes 1+z_1\frac{\partial}{\partial {z}}}}{(\partial\otimes 1+\frac{\partial}{\partial {z}}) [r\ell/r_j]_{q^{r_j\partial\otimes 1+r_j\frac{\partial}{\partial {z}}}}}S_{\ell,\ell'}(z)(h_j\otimes h_i)\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes h_i +{{\bf 1}}\otimes{{\bf 1}}\otimes[a_{ij}]_{q^{r_i\frac{\partial}{\partial {z}}}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}[r\ell']_{q^{\frac{\partial}{\partial {z}}}}\\ &\times\left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)\frac{1-e^{z_1\frac{\partial}{\partial {z}}}}{\frac{\partial}{\partial {z}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}}\frac{\partial^{2}}{\partial z^{2}}\log f(z)\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes h_i +{{\bf 1}}\otimes{{\bf 1}}\\ &\otimes\left(1-e^{z_1\frac{\partial}{\partial {z}}}\right)\frac{\partial}{\partial {z}}\log f(z)^{[a_{ij}]_{q^{r_i}}[r\ell']_q(q-q^{-1}) }.\end{aligned}$$ Combining this with [\[eq:S-wtW-h\]](#eq:S-wtW-h){reference-type="eqref" reference="eq:S-wtW-h"} and Lemma [Lemma 30](#lem:S-special-tech-gen4){reference-type="ref" reference="lem:S-special-tech-gen4"}, we get that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(W_j(z_1)\otimes h_i\right)\\ =&S_{\ell,\ell'}(z)\left(\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i\right)\\ &-\frac{{{\bf 1}}\otimes h_i}{(q^{r_i}-q^{-r_i})z} +\left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\frac{{{\bf 1}}\otimes h_i}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\\ =&\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes h_i\\ &+\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes{{\bf 1}} \otimes\\ &\quad\times \left(e^{z_1\frac{\partial}{\partial {z}}}-1+1-e^{z_1\frac{\partial}{\partial {z}}}\right)\log f(z)^{ [a_{ij}]_{q^{r_i}}[r\ell']_{q}(q-q^{-1}) }\\ &-\frac{{{\bf 1}}\otimes h_i}{(q^{r_i}-q^{-r_i})z} +\left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\frac{{{\bf 1}}\otimes h_i}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\\ =&W_j(z_1)\otimes h_i,\end{aligned}$$ which proves [\[eq:S-W-h\]](#eq:S-W-h){reference-type="eqref" reference="eq:S-W-h"}. 
From [\[eq:S-twisted-3\]](#eq:S-twisted-3){reference-type="eqref" reference="eq:S-twisted-3"} and [\[eq:qyb-shift-total1\]](#eq:qyb-shift-total1){reference-type="eqref" reference="eq:qyb-shift-total1"}, we have that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes x_i^\pm\right)\\ =&\frac{1-e^{z_1\partial\otimes 1+z_1\frac{\partial}{\partial {z}}}}{(\partial\otimes 1+\frac{\partial}{\partial {z}}) [r\ell/r_j]_{q^{r_j\partial\otimes 1+r_j\frac{\partial}{\partial {z}}}}}S_{\ell,\ell'}(z)(h_j\otimes x_i^\pm)\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes x_i^\pm \mp{{\bf 1}}\otimes x_i^\pm \otimes[a_{ji}]_{q^{r_j\frac{\partial}{\partial {z}}}}[r\ell]_{q^{\frac{\partial}{\partial {z}}}}\\ &\quad\times \left(q^{\frac{\partial}{\partial {z}}}-q^{-\frac{\partial}{\partial {z}}}\right)\frac{1-e^{z_1\frac{\partial}{\partial {z}}}}{\frac{\partial}{\partial {z}}[r\ell/r_j]_{q^{r_j\frac{\partial}{\partial {z}}}}}\frac{\partial}{\partial {z}}\log f(z)\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes x_i^\pm \mp{{\bf 1}}\otimes x_{i,\hbar}^\pm \otimes\left(1-e^{z_1\frac{\partial}{\partial {z}}}\right)\log f(z)^{q^{r_ja_{ji}}-q^{-r_ja_{ji}}}\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes x_i^\pm -{{\bf 1}}\otimes x_i^\pm \otimes\log \left(f(z+z_1)/f(z)\right)^{ \pm q^{-r_ia_{ij}}\mp q^{r_ia_{ij}} }\\ =&\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\otimes x_i^\pm -{{\bf 1}}\otimes x_i^\pm \otimes\log \left(f(z+z_1)/f(z)\right)^{\pm q^{-r_ia_{ij}}\mp q^{r_ia_{ij}}}.\end{aligned}$$ Combining this with [\[eq:S-wtW-x\]](#eq:S-wtW-x){reference-type="eqref" reference="eq:S-wtW-x"} and Lemma [Lemma 30](#lem:S-special-tech-gen4){reference-type="ref" reference="lem:S-special-tech-gen4"}, we get that $$\begin{aligned} &S_{\ell,\ell'}(z)\left(W_j(z_1)\otimes x_i^\pm\right)\\ =&S_{\ell,\ell'}(z)\left(\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm\right)\\ &-\frac{{{\bf 1}}\otimes x_i^\pm}{(q^{r_i}-q^{-r_i})z} +\left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\frac{{{\bf 1}}\otimes x_i^\pm}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\\ =&\exp\left(\left(\frac{1-e^{z_1\partial}}{\partial [r\ell/r_j]_{q^{r_j\partial}}}h_j\right)_{-1}\right)Y_{\widehat{\ell}}(x_j^+,z_1)x_j^-\otimes x_i^\pm\\ &\quad\otimes \left(f(z+z_1)/f(z)\right)^{\pm q^{-r_ia_{ij}}\mp q^{r_ia_{ij}}} \left(f(z+z_1)/f(z)\right)^{\mp q^{-r_ia_{ij}}\pm q^{r_ia_{ij}}}\\ &-\frac{{{\bf 1}}\otimes x_i^\pm}{(q^{r_i}-q^{-r_i})z} +\left(\frac{f_0(2(r_i+r\ell)\hbar)}{f_0(2(r_i-r\ell)\hbar)}\right)^{\frac{1}{2}}\frac{{{\bf 1}}\otimes x_i^\pm}{(q^{r_i}-q^{-r_i})(z+2r\ell\hbar)}\\ =&W_j(z_1)\otimes x_i^\pm,\end{aligned}$$ which proves [\[eq:S-W-x\]](#eq:S-W-x){reference-type="eqref" reference="eq:S-W-x"}. 
◻ *Proof of Proposition [Proposition 123](#prop:W){reference-type="ref" reference="prop:W"}.* The relation [\[eq:prop-W-well-defined\]](#eq:prop-W-well-defined){reference-type="eqref" reference="eq:prop-W-well-defined"} is an immediate consequence of Lemma [Lemma 148](#lem:W-well-defined){reference-type="ref" reference="lem:W-well-defined"}, while the relation [\[eq:prop-W-parafermion\]](#eq:prop-W-parafermion){reference-type="eqref" reference="eq:prop-W-parafermion"} follows immediately from Lemma [Lemma 149](#lem:W-parafermion){reference-type="ref" reference="lem:W-parafermion"}. Finally, by utilizing Lemma [Lemma 151](#lem:W-S-invariant){reference-type="ref" reference="lem:W-S-invariant"}, the relations [\[eq:qyb-hex-id\]](#eq:qyb-hex-id){reference-type="eqref" reference="eq:qyb-hex-id"}, [\[eq:qyb-hex-id-alt\]](#eq:qyb-hex-id-alt){reference-type="eqref" reference="eq:qyb-hex-id-alt"} and the fact that both $V_{\hat{\mathfrak g},\hbar}^{\ell'}$ and $L_{\hat{\mathfrak g},\hbar}^{\ell'}$ are generated by the set ${ \left.\left\{ {h_i,\,x_i^\pm} \,\right|\, {i\in I} \right\} }$, we complete the proof of [\[eq:prop-W-S-invariant\]](#eq:prop-W-S-invariant){reference-type="eqref" reference="eq:prop-W-S-invariant"}. # Acknowledgement {#acknowledgement .unnumbered} Part of this paper was finished during my visit to Xiamen University and the Tianyuan Mathematical Center in Southeast China in August 2023. I am very grateful to Professors Shaobin Tan, Fulin Chen, and Qing Wang for their hospitality. C. Ai, C. Dong, X. Jiao, and L. Ren. The irreducible modules and fusion rules for the Parafermion vertex operator algebras. , **370** (2018), 5963--5981. T. Arakawa, C. Lam, and H. Yamada. Zhu's algebra, ${C}_2$-algebra and ${C}_2$-cofiniteness of parafermion vertex operator algebras. , **264** (2014), 261--295. T. Arakawa, C. Lam, and H. Yamada. Parafermion vertex operator algebras and W-algebras. , **371** (2019), 4277--4301. M. Butorac, N. Jing, and S. Kožić. $\hbar$-adic quantum vertex algebras associated with rational ${R}$-matrix in types ${B}$, ${C}$ and ${D}$. , **109** (2019), 2439--2471. S. Carnahan and M. Miyamoto. Regularity of fixed-point vertex operator subalgebras. . C. Dong, X. Jiao, and F. Xu. Quantum dimensions and quantum Galois theory. , **365** (2013), 6441--6469. C. Dong, V. Kac, and L. Ren. Trace functions of the parafermion vertex operator algebras. , **348** (2019), 1--17. C. Dong and J. Lepowsky. , volume 112 of *Prog. Math.* Birkhäuser, Boston, 1993. C. Dong, H. Li, and G. Mason. . , **132** (1997), 148--166. C. Dong, C. Lam, and Q. Wang. The structure of parafermion vertex operator algebras. , **323** (2010), 371--381. C. Dong, C. Lam, and H. Yamada. W-algebras related to parafermion algebras. , **322** (2009), 2366--2403, . C. Dong and L. Ren. Representations of the parafermion vertex operator algebras. , **315** (2017), 88--101. V. Drinfeld. Hopf algebras and the quantum Yang-Baxter equation. , **283** (1985), 1060--1064, . V. Drinfeld. A new realization of Yangians and quantized affine algebras. In *Soviet Math. Dokl*, **36** (1988), 212--216. C. Dong and Q. Wang. The structure of parafermion vertex operator algebras: General case. , **299** (2010), 783--792. C. Dong and Q. Wang. On ${C}_2$-cofiniteness of parafermion vertex operator algebras. , **328** (2011), 420--431. C. Dong and Q. Wang. Parafermion vertex operator algebras. , **6** (2011), 567--579. C. Dong and Q. Wang. Quantum dimensions and fusion rules for parafermion vertex operator algebras. , **144** (2016), 1483--1492.
P. Etingof and D. Kazhdan. Quantization of Lie bialgebras, Part V: Quantum vertex operator algebras. , **6** (2000), 105. I. Frenkel and N. Jing. Vertex representations of quantum affine algebras. , **85** (1988), 9373--9377. I. Frenkel and Y. Zhu. Vertex operator algebras associated to representations of affine and Virasoro algebras. , **66** (1992), 123--168. H. Garland. The arithmetic theory of loop algebras. , **53** (1978), 480--551. N. Jing. Quantum Kac-Moody algebras and vertex representations. , **44** (1998), 261--271. N. Jing, F. Kong, H. Li, and S. Tan. Deforming vertex algebras by vertex bialgebras. , 2022. M. Jimbo and T. Miwa. , **85**. Amer. Math. Soc., 1994. V. Kac. . Cambridge University Press, 1994. C. Kassel. . Springer-Verlag, New York, 1995. S. Kožić. On the quantum affine vertex algebra associated with trigonometric ${R}$-matrix. , **27** (2021), 45. S. Kožić. $\hbar$-adic quantum vertex algebras in types ${B}$, ${C}$, ${D}$ and their $\phi$-coordinated modules. , **54** (2021), 485202. F. Kong. Quantum affine vertex algebras associated to untwisted quantum affinization algebras. **402** (2023), 2577--2625. H. Li. . , **109** (1996), 143--195. H. Li. Nonlocal vertex algebras generated by formal vertex operators. , **11** (2006), 349. H. Li. A smash product construction of nonlocal vertex algebras. , **9** (2007), 605--637. H. Li. . , **296** (2010), 475--523, . J. Lepowsky and H. Li. , **227**. Birkhäuser Boston, Inc., 2004. H. Li and J. Sun. Twisted tensor products of nonlocal vertex algebras. , **345** (2011), 266--294. A. Meurman and M. Primc. . , **44** (1996), 207--215. A. Meurman and M. Primc. . , **652** (1999). H. Nakajima. Quiver varieties and finite dimensional representations of quantum affine algebras. , **14** (2001), 145--238. M. Roitman. On free conformal and vertex algebras. , **217** (1999), 496--527. Y. Reshetikhin and A. Semenov-Tian-Shansky. Central extensions of quantum current groups. , **19** (1990), 133--142. J. Sun. Iterated twisted tensor products of nonlocal vertex algebras. , **381** (2013), 233--259. [^1]: $^1$Partially supported by the NSF of China (No.12371027).
--- author: - - - bibliography: - sn-bibliography.bib title: Stability of the persistence transformation --- # Introduction ## Motivation and related work Topological Data Analysis (TDA) stands at the intersection of mathematics, data science, and various application domains, promising a unique perspective on understanding complex datasets (e.g., see [@carlsson2009topology], [@edelsbrunner2022computational]). TDA is a cutting-edge approach that harnesses the power of topology, a branch of mathematics that studies the shape and structure of spaces, to analyze and extract valuable insights from data. In the fast-paced evolution of technical achievements, one of the most valuable resources is data. In scientific, economic, and political areas, data are being gathered at a progressively faster pace, which amounts to an exponential increase of new information each year. Traditional analytical methods often fall short in capturing the underlying patterns and structures buried within these data. This is where TDA steps in with its innovative approach. By considering data as a collection of points in a high-dimensional space, TDA aims to unveil the intrinsic geometry and relationships present in these datasets. The significance of TDA spans a wide range of fields, making it a transformative tool across diverse disciplines. In data science, TDA offers an alternative approach to dimensionality reduction, helping to project datasets onto representations that retain essential information (e.g., see [@nicolau2011topology], [@yu2021shape]). In biology, TDA aids in understanding complex biological systems, such as protein structures or neural connectivity, by revealing the underlying topological features that govern their behavior (e.g., [@koseki2023topological], [@das2023topological]). Moreover, in image analysis, TDA can decode intricate patterns within images, going beyond pixel-level analysis to uncover hidden structures that might represent critical information (e.g., see [@ver2023primer]). This is particularly relevant in medical imaging, where TDA's ability to highlight significant features can aid in disease diagnosis and treatment planning (e.g., [@singh2023topological]). TDA's strength lies in its ability to capture essential characteristics of data that might be overlooked by traditional techniques. By focusing on the shape, connectivity, and arrangement of data points, TDA can identify clusters, holes, voids, and other topological features that convey crucial insights about the data's underlying structure. This capability is especially powerful when dealing with noisy, high-dimensional, or incomplete datasets, where conventional methods often struggle. The given data types can range from point clouds to time series to visual images and are highly dependent on the subject and the method of obtaining the data. In this paper, we focus on spectral data, which are typical in fields such as biology and, in particular, medicine. The most obvious choice for analyzing this type of data with tools of TDA is applying a levelset filtration (e.g., see [@edelsbrunner2008persistent]), which extracts important topological information of the underlying space in the form of persistent homology. The results can be represented as barcodes (e.g., see [@ghrist2008barcodes]) or persistence diagrams (e.g., see [@cohen2005stability]), which can be interpreted individually, depending on the application.
However, the level set filtration has difficulties in differentiating symmetric spectra, sharing the same significant peaks at different locations. In such scenarios, the generated persistence diagrams might appear identical, disrupting the analysis. One example for this disturbance can be observed in MALDI imaging, where the importance of positional information in spectral data is linked with the persistence of signal peaks. This insight inspired the closely connected paper \"Supervised topological data analysis for MALDI mass spectrometry imaging applications\" by G. Klaila, V. Vutov and A. Stefanou ([@klaila2023supervised]). In this paper, MALDI imaging was used to classify cancer cells utilizing machine learning and feature extraction with the persistence transformation. This tool helped in pre-processing the signal peaks and their positional information in order to improve the training of the machine learning algorithm. In this example, the position of the peaks could be related to specific molecules which in turn gave information about the type of cancer cell. Central to our research is the profound significance of stability. While our paper indeed delves into the discrimination of persistence and peaks, a core facet also centers on substantiating the stability of our innovative methodology. In the realm of analytical approaches, stability emerges as a pivotal element, indicating the reliability of outcomes in the face of uncertainties and fluctuations. Stability of the analysis implies robust results related to small perturbations in the input. By demonstrating the stability of our method, we reinforce its credibility and applicability, rendering it a robust tool capable of withstanding the challenges of real-world data. The first TDA stability theorem by Cohen-Steiner et al. ([@cohen2005stability]) proved the stability of persistence diagrams for certain continuous functions. This theorem was expanded to the application of the interleaving distance on persistence modules by Chazal et al. ([@chazal2009stability]), and in succession to other applications. Examples for these are the generalizations of the interleaving distance and thus generalizations of the original stability result (see [@bubenik2015metrics; @stefanou2018interleaving]) or the stability of the Euler characteristic curves and its multi-parameter extension, the Euler characteristic profile (see [@dlotko2022euler]). In this paper, we want to contribute to this already established framework of TDA by stating a stability result for the case of the persistence transformation. Providing this stability is not just an academic pursuit, but a practical necessity. It enhances the credibility and applicability of our approach, making it a robust tool for real-world data analysis. Our exploration into stability promises to be an essential step forward in the dynamic landscape of modern data analysis, where data complexities are met with steadfast methodologies. ## Overview of our results During the course of this paper, we explore the utilization of the persistence transformation, a novel method in topological data analysis, and delve into its potential benefits. The paper is organized into several chapters, each contributing to a comprehensive understanding of this methodology. Section [2](#section_morse_sets){reference-type="ref" reference="section_morse_sets"} provides the necessary background and lays the foundation for our analysis. 
We define the space of critical points, called Morse Sets, in [2.2](#subsection_morse_sets){reference-type="ref" reference="subsection_morse_sets"} and establish a $\ell_p$-metric to support our subsequent developments in [2.3](#subsection_lp_metrics){reference-type="ref" reference="subsection_lp_metrics"}. In Section [3](#section_persistence_transformation){reference-type="ref" reference="section_persistence_transformation"}, we use this groundwork in order to introduce the persistence transformation in [3.2](#subsection_persistence_transformation){reference-type="ref" reference="subsection_persistence_transformation"}, showcasing its formulation through the introduction of a matching mechanism on the set of local maxima. Moreover, we place significant emphasis on proving a stability theorem in [3.3](#subsection_stability){reference-type="ref" reference="subsection_stability"}, which stands as the pivotal result of our work. To provide context, we conclude this chapter by providing a brief comparison between the persistence transformation and the conventional persistence diagram in [3.4](#subsection_comparison){reference-type="ref" reference="subsection_comparison"}, along with a description of a potential implementation in [3.5](#subsection_implementation){reference-type="ref" reference="subsection_implementation"}. Our exploration continues in Section [4](#section_reduced_persistence_transformation){reference-type="ref" reference="section_reduced_persistence_transformation"}, where we unveil the reduced persistence transformation - a modified version that offers dimensionality reduction. We carefully describe its structure in [4.1](#subsection_reduced_definition){reference-type="ref" reference="subsection_reduced_definition"} and demonstrate its stability in [4.2](#subsection_reduced_stability){reference-type="ref" reference="subsection_reduced_stability"}. Additionally, we compare its performance against the traditional persistence diagram in [4.3](#subsection_reduced_comparison){reference-type="ref" reference="subsection_reduced_comparison"}, shedding light on its strengths and limitations. Intriguingly, the Section [5](#section_higher_dimensions){reference-type="ref" reference="section_higher_dimensions"} offers a conceptual sketch for extending the persistence transformation to higher-dimensional input. We explore the exciting possibilities and inherent challenges that arise in this endeavor, and we discuss the stability of this extension in light of these new conditions. As we approach the conclusion, in Section [6](#section_summary){reference-type="ref" reference="section_summary"} we summarize the key findings presented throughout the paper. We emphasize the significance of the persistence transformation as a powerful and stable tool in topological data analysis. Additionally, we give an outlook by considering potential enhancements and novel applications for this transformative method. Through these chapters, we aim to establish the persistence transformation as a crucial addition to the arsenal of topological data analysis techniques. We showcase its potential to unlock new insights in various domains and illuminate its stability in diverse scenarios. # Morse Sets {#section_morse_sets} ## Background {#subsection_background} In numerous modern applications, significant data are represented as images of real-valued functions (e.g., science, politics, economy, healthcare, engineering), known as time series. 
These fields also include domains such as medicine (e.g., [@radtke2016multiparametric]), robotics (e.g., [@atkeson1986robot]), and climate science (e.g., [@ge2010temperature]). While studying these functions, one can conclude that relevant information is often given by the critical values of the functions, i.e., the minima and maxima. For example, in the application of MALDI-Imaging (see [@aichler2015maldi]), the local maxima of MALDI-Spectra can be associated with specific molecules. This association can help the tumor sub-typing process (see [@klaila2023supervised]). On the other hand, values in between these critical points often carry little to no significant information for the analysis of the data. Expanding upon this observation, we establish an abstract space of critical points derived from real-valued functions to simplify the data analysis. In the described context, the abstract space contains similar significant information in a more compact form compared to the original data. Another notable benefit of this abstraction is that the analysis of the data can be performed without any prior knowledge of the original function, thus enabling the application of these methods to any (data-) set contained within this abstract space. We call these abstract sets *Morse sets*.

## Definition of Morse sets {#subsection_morse_sets}

The term 'Morse set' draws its inspiration from its application in the context of Morse functions. In numerous scenarios, time series data lends itself to interpretation as a real-valued function that satisfies all the prerequisites of a Morse function, as detailed in [@milnor1963morse]. However, it is essential to note that the concept of Morse sets can also extend beyond Morse functions and be defined for a broader class of real-valued functions. In this subsection, we provide a formal definition of the Morse set. For a compact subset $M\subseteq \overline{\mathbb{R}}:=\mathbb{R}\cup \{-\infty, +\infty\}$, we define the space $\mathscr{M}:=M\times \overline{\mathbb{R}}$ and equip it with the co-lexicographic order $<_{\mathscr{M}}$, which is induced by $<_M$ as follows: For all $(x, y), (x', y)$ and $(x', y')\in \mathscr{M}$ with $x\neq x'$ and $y\neq y'$ we have: [\[def_M\]]{#def_M label="def_M"}

- $(x,y)<_{\mathscr{M}} (x', y')\Leftrightarrow y<_{\overline{\mathbb{R}}}y'$
- $(x, y)<_{\mathscr{M}} (x', y)\phantom{'}\Leftrightarrow x<_Mx'$.

For sets $K$ of elements of this space, we denote the elements either by $k\in K$ or by $(x,y)\in K$, where $x\in M$ and $y\in \overline{\mathbb{R}}$. The set $K\subseteq \mathscr{M}$ is referred to as a *Morse set* if the following conditions are satisfied:

1. Injectivity: For all $x\in M, y, y'\in \overline{\mathbb{R}}$ there is: $(x, y), (x, y')\in K\Rightarrow y=y'$.
2. Disjunction: The set $K$ is a disjoint union of two subsets, i.e., $K=K^+\sqcup K^-$. To highlight the affiliation to one of these sets, we denote the elements with the corresponding symbol, e.g., $k_i^+\in K^+$ or $(x^-, y^-)\in K^-$.
3. Ordered: For all $i<j$, it holds that $k_i^+>_{\mathscr{M}}k_j^+$ with $k_i^+, k_j^+\in K^+$ and $k_i^-<_{\mathscr{M}}k_j^-$ with $k_i^-, k_j^-\in K^-$.
4. Alternation: For all $(x,y),(x', y')\in K^+$ holds: If there is no element $(x^*, y^*)\in K^+$ such that $x^*\in [x,x']$, then there is exactly one element $(\hat{x}, \hat{y})\in K^-$ such that $(\hat{x}, \hat{y})< (x,y),(x', y')$ and $\hat{x}\in [x,x']$, and vice versa.
For all $(x,y),(x', y')\in K^-$ holds: If there is no element $(x^*, y^*)\in K^-$ such that $x^*\in [x,x']$, then there is exactly one element $(\hat{x}, \hat{y})\in K^+$ such that $(\hat{x}, \hat{y})> (x,y),(x', y')$ and $\hat{x}\in [x,x']$.

5. Critical Boundary: For all $x\in \partial M$ (i.e., the boundary of $M\subset \mathbb{R}$) there is $k=(x,y)\in K$, with $y\in \overline{\mathbb{R}}$.

Let $\mathscr{K}:=\{K\subset \mathscr{M}|K \text{ Morse set}\}$ be the space of all Morse sets. One can think of a Morse set as being the set of non-degenerate critical points for the graph of a Morse function (see [@milnor1963morse]) $f:M\rightarrow \overline{\mathbb{R}}$ together with the boundary points, where the $(x, f(x))\in K^+$ are the maxima, and the $(x', f(x'))\in K^-$ are the minima. We denote the cardinality of the subsets $K^+, K^-\subset K$ with $\kappa^+:=|K^+|$ and $\kappa^-:=|K^-|$. The condition (4) of [\[def_K\]](#def_K){reference-type="ref" reference="def_K"} directly implies the following statement: $$\begin{aligned} \label{balanc} &&||K^+|-|K^-||&\leq 1\\ \Leftrightarrow && |\kappa^+ -\kappa^-|&\leq 1.\end{aligned}$$ A Morse set can be obtained directly from a Morse function $f$ by classifying the non-degenerate critical points into the corresponding subsets $K^+$ and $K^-$. We denote this Morse set by $K_f$ to indicate the relationship with $f$. The benefits of employing Morse sets over the original Morse functions become more apparent when considering isotopy, as Morse sets encapsulate information regarding isotopy classes of Morse functions, making them more versatile. Furthermore, when Morse sets are identical, they exhibit shared characteristics related to persistent homology. According to Kudryavtseva (see [@kudryavtseva2009uniform]), two Morse functions are defined to be isotopic, if

- there are diffeomorphisms $h_1$ and $h_2$ such that $f = h_2 \circ g \circ h_1$,
- $h_1$ preserves the numbering of critical points, and
- $h_1$ is homotopic to the identity mapping.

Furthermore, if the critical points are identical, we call the functions *Morse isotopic*. This directly implies the identity of the Morse sets, i.e., if two Morse functions $f$ and $g$ are Morse isotopic, then $K_f = K_g$. The subsequent theorem demonstrates that working with Morse sets not only encodes more information but also serves as a foundation for deriving concepts such as isotopy and persistent homology for similar Morse functions.

**Theorem 1**. *If two Morse functions $f$ and $g$ are Morse isotopic with $K_f = K_g$, then the following two statements are satisfied:*

1. *The Morse functions $f$ and $g$ are isotopic.*
2. *The persistent homology classes of the upper levelset filtrations are identical.*

*Proof.* 1. For $K_f$, let $G_f$ be the graph obtained by connecting all points $k\in K_f$ with straight lines. The graph $G_f$ is isotopic to the graph $G_f^*$ of $f$, since the straight lines of $G_f$ can be continuously deformed to the graph $G_f^*$. Since the critical points of $G_f$ and $G_f^*$ are identical, the deformation does not introduce any additional critical points. Similarly, a graph $G_g$ for $K_g$ can be constructed, which is isotopic to the graph $G_g^*$ of $g$. Since $K_f=K_g$, there is equality between $G_f$ and $G_g$, hence there is an isotopy between $G_f^*$ and $G_g^*$. Finally, Theorem 2 of Kudryavtseva (see [@kudryavtseva2009uniform]) implies the isotopy of $f$ and $g$. 2. The persistent homology class of a levelset filtration depends on the path-connected components of the levelsets.
For the levelset filtration, a new path-connected component is born when a critical point is passed. The merging of two path-connected components also occurs when a critical point is passed. For example, in the upper levelset filtration, a new path-connected component is born at each maximum, and two components merge at each minimum. Since the critical points of $f$ and $g$ are identical, i.e., $K_f=K_g$, their path-connected components are also identical at every step. Hence their persistent homology classes are equal, resulting in the same persistence diagram.  ◻ Utilizing Morse sets allows for the handling of Morse isotopy classes of Morse functions rather than dealing with each Morse function individually. This aids in categorizing Morse functions within a broader context, ensuring that similar Morse functions exhibit comparable behavior in terms of their persistent homology.

## $\ell_p$-Metrics on Morse sets {#subsection_lp_metrics}

This work aims at proving stability of the persistence transformation, a method working on the above defined space $\mathscr{K}$. Any stability theorem depends on the choice of metrics on the input and output spaces. One metric often utilized when working with functions is the supremum norm, i.e., the maximal distance between two functions, as can be seen in the proof of the stability theorem of the persistence diagram (see [@cohen2005stability]). Translated to two sets $K_f, K_g\in \mathscr{K}$, this would be the maximal distance for the minimal matching between elements $k_f\in K_f$ and $k_g\in K_g$. However, the persistence diagram considers stability only in one dimension; therefore it is enough for the metric to control distance in a single dimension. In contrast, the persistence transformation defined later will consider stability not only in the height, but also in the position of the relevant peaks in accordance with the elder rule (see [@edelsbrunner2022computational]). To satisfy these conditions, we define an $\ell_p$ metric on $\mathscr{K}$ respecting the total order $<_{\mathscr{M}}$ of the elements and controlling the height as well as the position of similar significant peaks in the input. For this metric, these peaks are matched to each other, resulting in a measurable distance for Morse sets. Define the matching $m^*$ between the Morse sets $K$ and $\hat{K}$ as follows:

- For all $i = 1, \ldots, \min\{\kappa^+, \hat{\kappa}^+\}$ there is $(k_i^+, \hat{k}_i^+)\in m^*$.
- For all $j = 1, \ldots, \min\{\kappa^-, \hat{\kappa}^-\}$ there is $(k_j^-, \hat{k}_j^-)\in m^*$.
- For all $i=\min\{\kappa^+, \hat{\kappa}^+\} + 1, \ldots,\max\{\kappa^+, \hat{\kappa}^+\}$ there is $(0_{\mathscr{M}}, \hat{k}_i^+)\in m^*$ (with $0_{\mathscr{M}}=(0,0)$).
- For all $j=\min\{\kappa^-, \hat{\kappa}^-\} + 1, \ldots, \max\{\kappa^-, \hat{\kappa}^-\}$ there is $(0_{\mathscr{M}}, \hat{k}_j^-)\in m^*$.

With this matching $m^*$, an $\ell_p$ metric on $\mathscr{K}$ can be established.

**Definition 1**. For all $K, \hat{K}\in \mathscr{K}$ with the matching $m^*$, the distance between $K$ and $\hat{K}$ is defined to be: $$\begin{aligned} d_{\mathscr{K}, p}(K, \hat{K}):&=\sqrt[p]{\sum_{(k, \hat{k})\in m^*}||k-\hat{k}||_{\infty}^p}\\ &=\sqrt[p]{\sum_{((x,y), (\hat{x},\hat{y}))\in m^*}||(x,y)-(\hat{x},\hat{y})||_{\infty}^p}.\end{aligned}$$

**Theorem 2**. *The metric $d_{\mathscr{K}, p}(K, \hat{K})$ is well-defined.*

*Proof.* The metric $d_{\mathscr{K}, p}(K, \hat{K})$ is well-defined since the following four conditions are satisfied:

- Non-negativity.
For all $K, \hat{K}\in \mathscr{K}$, there is: $$\begin{aligned} d_{\mathscr{K}, p}(K, \hat{K})&=\sqrt[p]{\sum_{(k,\hat{k})\in m^*}||k-\hat{k}||_{\infty}^p}\\ &\geq \sqrt[p]{\sum_{(k, \hat{k})\in m^*}0}\\ &= \sqrt[p]{0}\\ &=0, \end{aligned}$$ since $||\cdot||_{\infty}$ is a norm and therefore non-negative.

- Definiteness.

For all $K, \hat{K}\in \mathscr{K}$ there is: $$\begin{aligned} K=\hat{K} \Rightarrow d_{\mathscr{K}, p}(K, \hat{K})=0, \end{aligned}$$ since $m^*$ is the identity matching on $K=\hat{K}$. On the other hand, assume that $$\begin{aligned} d_{\mathscr{K}, p}(K, \hat{K})=0. \end{aligned}$$ Then there is a matching $m^*$, such that for all $i=1, \dots, \min \{\kappa^+, \hat{\kappa}^+\}$ there is $(k_i^+, \hat{k_i}^+)\in m^*$ and for all $j=1, \dots, \min\{\kappa^-, \hat{\kappa}^-\}$ there is $(k_j^-, \hat{k_j}^-)\in m^*$. The zero distance between $K$ and $\hat{K}$ implies equality between all these elements, i.e., $k_i^+=\hat{k_i}^+$ and $k_j^-=\hat{k_j}^-$. Assume now without loss of generality that $\kappa^+\geq \hat{\kappa}^+$ and $\kappa^-\geq \hat{\kappa}^-$. It holds that for all $i=\hat{\kappa}^++1, \dots, \kappa^+$ the elements $k_i^+\in K^+$ are matched to $0_{\mathscr{M}}$, and similarly for all $j=\hat{\kappa}^-+1, \dots, \kappa^-$ the elements $k_j^-\in K^-$ are matched to $0_{\mathscr{M}}$. However, since the distance between $K$ and $\hat{K}$ is zero, the elements must be equal to $0_{\mathscr{M}}$. Uniqueness ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (1)) of the sets implies the existence of at most one such element $0_{\mathscr{M}} \in K$. Consequently, there must be equality between all elements $k\in K$ and $\hat{k}\in \hat{K}$, except for at most one element $0_{\mathscr{M}} \in K$. Without loss of generality let $0_{\mathscr{M}}\in K^+$. This element cannot lie on the boundary of $M$, because the critical boundary ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (5)) of $\hat{K}$ implies the existence of $\hat{k}\in \hat{K}$ which is in the boundary. Equality of the elements then states the existence of a corresponding element $k\in K$ in the boundary, different from $0_{\mathscr{M}}$ and hence contradicting the uniqueness ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (1)) of the boundary element. If the element $0_{\mathscr{M}}$ is not in the boundary of $M$, the alternation of $K$ ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (4)) implies the existence of neighboring elements $k=(x,y), k'=(x', y')\in K^-$ such that $(0_M, 0_{\mathbb{R}})=0_{\mathscr{M}}\in K^+$ is a unique element in $K^+$ with $0_M\in [x,x']$. Per assumption, the elements of $K$ and $\hat{K}$ are the same except for $0_{\mathscr{M}}$, so there are elements $\hat{k}, \hat{k}'\in \hat{K}^-$ such that $k=\hat{k}$ and $k'=\hat{k}'$. The alternation of $\hat{K}$ ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (4)) implies the existence of at least one element $\hat{k}^*=(\hat{x}^*, \hat{y}^*)\in \hat{K}^+$ such that $\hat{x}^*\in [x, x']$. By equality of the elements, there must be an element $k^*=(x^*, y^*)\in K^+$ such that $k^*\neq 0_{\mathscr{M}}$ and $x^*\in [x, x']$, contradicting the uniqueness of $0_{\mathscr{M}}$ in this interval. This contradicts the assumption that there is an element $0_{\mathscr{M}}\in K$ with $0_{\mathscr{M}}\notin \hat{K}$, hence all the elements must be the same, proving: $$\begin{aligned} d_{\mathscr{K}, p}(K, \hat{K})=0 \Rightarrow K=\hat{K}. \end{aligned}$$

- Symmetry.
For all $K, \hat{K}\in \mathscr{K}$ there is: $$\begin{aligned} d_{\mathscr{K}, p}(K, \hat{K})^p&=\sum_{(k, \hat{k})\in m^*}||k-\hat{k}||_{\infty}^p\\ &=\sum_{(k, \hat{k})\in m^*}||\hat{k}-k||_{\infty}^p\\ &=\sum_{(\hat{k}, k)\in m^*}||\hat{k}-k||_{\infty}^p\\ &= d_{\mathscr{K}, p}(\hat{K}, K)^p. \end{aligned}$$

- Triangle inequality.

For all $K, \hat{K}, \Tilde{K}\in \mathscr{K}$ assume without loss of generality $\kappa^+=\hat{\kappa}^+=\Tilde{\kappa}^+$ and $\kappa^-=\hat{\kappa}^-=\Tilde{\kappa}^-$. If not, add elements $0_{\mathscr{M}}$ to the sets with fewer elements. Then: $$\begin{aligned} &d_{\mathscr{K}, p}(K, \Tilde{K})^p\\ =&\sum_{(k, \Tilde{k})\in m^*_{K, \Tilde{K}}}||k-\Tilde{k}||_{\infty}^p\\ =&\sum_{i =1}^{\kappa^+}||k^+_i-\Tilde{k}^+_i||_{\infty}^p+\sum_{j=1}^{\kappa^-}||k_j^--\Tilde{k}^-_j||_{\infty}^p\\ = &\sum_{i =1}^{\kappa^+}||k^+_i-\hat{k}^+_i+\hat{k}^+_i -\Tilde{k}^+_i||_{\infty}^p+\sum_{j=1}^{\kappa^-}||k_j^--\hat{k}^-_j+\hat{k}^-_j-\Tilde{k}^-_j||_{\infty}^p\\ \leq &\sum_{i =1}^{\kappa^+}||k^+_i-\hat{k}^+_i||_{\infty}^p+||\hat{k}^+_i -\Tilde{k}^+_i||_{\infty}^p+\sum_{j=1}^{\kappa^-}||k_j^--\hat{k}^-_j||_{\infty}^p+||\hat{k}^-_j-\Tilde{k}^-_j||_{\infty}^p\\ =& \sum_{(k, \hat{k})\in m^*_{K, \hat{K}}}||k-\hat{k}||_{\infty}^p+\sum_{(\hat{k}, \Tilde{k})\in m^*_{\hat{K}, \Tilde{K}}}||\hat{k}-\Tilde{k}||_{\infty}^p\\ =& d_{\mathscr{K}, p}(K, \hat{K})^p+d_{\mathscr{K}, p}(\hat{K}, \Tilde{K})^p. \end{aligned}$$  ◻

# Persistence Transformation {#section_persistence_transformation}

One main objective of topological data analysis is to track the persistence of topological features in the data. Each feature gets assigned a birth value, i.e., the value where the feature arises, and a death value, i.e., the value where the feature merges with other features. In the persistence transformation, the merging process follows the elder rule (see [@edelsbrunner2022computational]). The persistence of a feature is its lifespan, in other words, the difference between birth and death.

## Matching {#subsection_matching}

In this paper, we consider features to be the peaks of a Morse function, or in the notation from before, the elements $k^+=(x^+, y^+)\in K^+$, i.e., the local maxima. The birth of these features is given naturally by their height, i.e., by $y^+$. On the other hand, the merging of two features always happens at a local minimum, or the $(x^-, y^-)=k^-\in K^-$. For each feature, there is a unique $k^-\in K^-$ at which it dies, and the death value is given by $y^-$. To track the death of the feature, we match to each feature $k^+\in K^+$ the unique element of merging $k^-\in K^-$. The largest feature cannot be merged with another feature according to the elder rule (see [@edelsbrunner2022computational]), therefore its death is defined to be $-\infty$. The matching is denoted by $\mu:K^+\rightarrow K^-\cup M\times \{-\infty\}$ and is defined sequentially and individually on each path-connected component. The results can later be joined for the persistence transformation.

- For all $i=\kappa^+, \dots, 2$: $$\begin{aligned} \mu(k_i):=\sup &\{(x^-, y^-)=k^-\in K^-|k^-< k_i\\ \land &\exists k_i\leq k^+=(x^+,y^+)\in K^+: x^-\in [\min\{x_i, x^+\}, \max \{x_i, x^+\}]\\ \land & \forall j = i+1, \dots, \kappa^+: k^-\neq \mu(k_j) \}. \end{aligned}$$
- For $i=1$: $$\begin{aligned} \mu(k_1):=(x_1, -\infty).
\end{aligned}$$ This functional determination approach presents an innovative method for calculating persistence values in accordance with the elder rule by utilizing a semi-algorithmic procedure, which departs from the conventional complex algorithms typically employed. An important aspect of this is the simplification of the process of obtaining persistence values, making it more accessible and efficient.

**Theorem 3**. *The matching $\mu$ is injective and well-defined.*

*Proof.* Via natural induction over $\kappa^+$. Induction start for $\kappa^+=2$, i.e., $K^+=\{k_1, k_2\}$: The Alternation ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (4)) implies the existence of an element $k^-\in K^-$, such that $k^-< k_2$ and $x^-\in [\min \{x_1, x_2\}, \max \{x_1, x_2\}]$. Hence this element is suitable for $\mu(k_2)$. The element $k_1$ is matched to $\mu(k_1)=(x_1, -\infty)$, resulting in an injective matching. For the induction step assume that we can always find an injective matching for $\kappa^+=n$. For $\kappa^+=n+1$, the following holds: The Alternation ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (4)) implies, with the same reasoning as in the induction start, the existence of a suitable element $k^-$, such that $\mu(k_{\kappa^+})=k^-$. This element is by ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}, (4)) a unique element of $K^-$ for a suitable $k_i\in K$ with $x^-\in [\min \{x_{\kappa^+}, x_i\}, \max \{x_{\kappa^+}, x_i\}]$, such that there are no other elements $k^+\in K^+$ in that interval. Therefore, the points $k^-, k_{\kappa^+}$ can be removed from the set $K$, resulting in a set $\hat{K}$ with $\hat{\kappa}^+=n$, which satisfies all conditions of an ordered critical set ([\[def_K\]](#def_K){reference-type="ref" reference="def_K"}). The induction assumption applied to $\hat{K}$ completes the proof. ◻

## Definition of the Persistence Transformation {#subsection_persistence_transformation}

With the defined matching, the persistence transformation can be defined. It is constructed in such a way that it tracks the position, the birth and the death of each feature $k\in K^+$.

**Definition 2**. The **persistence transformation** is a map $$\begin{aligned} T:\mathscr{K}&\rightarrow \mathscr{M}\times \overline{\mathbb{R}}\\ K &\mapsto T_K, \end{aligned}$$ where each element $k=(x,y)\in K^+$ with $\mu(k)=\overline{k}=(\overline{x}, \overline{y})$ is mapped to $t_K:=(x,y,\overline{y})$. The elements $k^-=(x^-, y^-)\in K^-$ are mapped to $t_{K^-}=(x^-, y^-, y^-)$ on a diagonal plane.

A graphical representation of the persistence transformation can be seen in Fig. [1](#fig:example_pt){reference-type="ref" reference="fig:example_pt"}. The first coordinate of each element $t_K\in T_K$ corresponds to the position $x\in M$ of a feature. The second coordinate denotes the birth value $y\in\overline{\mathbb{R}}$, while the last coordinate describes the death value, i.e., $\overline{y}\in \overline{\mathbb{R}}$. This results in trivial features for all elements $k^-\in K^-$, since their death value equals their birth value. They are located on the diagonal plane in Fig. [1](#fig:example_pt){reference-type="ref" reference="fig:example_pt"}, which can be compared to the diagonal in the persistence diagram. ![Example of the persistence transformation. The blue line represents all the trivial features, which vanish since they lie on the diagonal plane. The red dots represent the relevant features.
The distance of the dots to the diagonal plane indicates their persistence.](grafics/fig1_pt.png){#fig:example_pt}

## Stability {#subsection_stability}

A necessary characteristic of a useful method in topological data analysis is the stability of the method. Stability is a measure of how changes in the input influence the output of the analysis and indicates, in this sense, how robustly the applied method captures the topological information of the data. A stable method ensures that the variation in the output is bounded by the variation in the input (see [@cohen2005stability]). The subsequent section provides evidence for the stability of the persistence transformation. For this, the $p$-Wasserstein distance is utilized as the metric on the output of the persistence transformation. This distance is defined as follows (see [@memoli2011gromov; @berwald2018computing]): $$\begin{aligned} d_{W_p}(A, B)^p =\min_{\text{matchings }m\subseteq A\times B}\sum_{(a,b)\in m}||a-b||_{\infty}^p.\end{aligned}$$ As $p$ approaches infinity, this metric converges to the bottleneck distance, the metric employed in assessing the stability of persistence diagrams (refer to [@cohen2005stability]). The bottleneck distance exhibits notable computational efficiency and robustness to outliers when compared to finite $p$-Wasserstein distances. Furthermore, it offers a more intuitive interpretation by quantifying the maximum separation between elements. Importantly, by establishing the theorem's validity for arbitrary values of $p$, we ensure that the obtained results are equally applicable to the bottleneck distance.

**Theorem 4**. *The persistence transformation is stable using the $p$-Wasserstein distance on $\mathscr{M}\times \overline{\mathbb{R}}$: $$\begin{aligned} d_{W_p}(T_K, T_L)\leq d_{\mathscr{K}, p}(K, L). \end{aligned}$$*

*Proof.* Let $P=T_K\times T_L$ be the set of all matchings for $t_K=(x,y,\overline{y})\in T_K$ and $t_L=(x', y', \overline{y'})\in T_L$, given $k=(x,y)\in K$ and $l=(x', y')\in L$. Without loss of generality let $\kappa^+:=\max\{\kappa_K^+, \kappa_L^+\}$ and $\kappa^-=\max\{\kappa_K^-,\kappa_L^-\}$. The stability of the persistence transformation is given by the following inequality: $$\begin{aligned} &d_{W_p}(T_K, T_L)^p\\ =&\min_{m\in P}\sum_{(t_K,t_L)\in m}||t_K-t_L||_{\infty}^p\\ =&\min_{m\in P}\sum_{(t_K, t_L)\in m}||(x,y, \overline{y})-(x', y', \overline{y'})||_{\infty}^p\\ =&\min_{m\in P}\sum_{(t_K, t_L)\in m}\max\{||(x,y)-(x', y')||_{\infty}^p, ||\overline{y}-\overline{y'}||_{\infty}^p\}\\ \leq &\min_{m\in P}\sum_{(t_K, t_L)\in m}\max\{||(x,y)-(x', y')||_{\infty}^p, ||(\overline{x}, \overline{y})-(\overline{x'},\overline{y'})||_{\infty}^p\}\\ \leq &\sum_{(k, l)\in m^*}\max\{||(x,y)-(x', y')||_{\infty}^p, ||(\overline{x}, \overline{y})-(\overline{x'},\overline{y'})||_{\infty}^p\}\\ \overset{\ref{injective}}{\leq} &\sum_{i=1}^{\kappa^+}||k_i^+-l_i^+||_{\infty}^p +\sum_{j=1}^{\kappa^-} ||k_j^--l_j^-||_{\infty}^p\\ =&\sum_{(k,l)\in m^*}||k-l||_{\infty}^p\\ =& d_{\mathscr{K}, p}(K, L)^p. \end{aligned}$$ The inequality of (7) holds due to the injectivity ([Theorem 3](#injective){reference-type="ref" reference="injective"}) of the matching function. This implies that each element $k^-\in K^-$ and each element $l^-\in L^-$ are matched at most once. ◻

## Comparison {#subsection_comparison}

We conclude this section with a comprehensive comparison between the introduced persistence transformation and the persistence diagram.
The persistence diagram captures the information of the persistence in a compact and comprehensive way by displaying the homology classes of specific filtration complexes, e.g., the sub-level set filtration or the upper levelset filtration of a real-valued function. For more information about the standard persistence diagram we refer the reader to [@cohen2005stability; @edelsbrunner2002topological]. The method has demonstrated successful applications on various occasions (e.g., [@edelsbrunner2002topological; @otter2017roadmap; @nicolau2011topology]). Nonetheless, in some settings, e.g., MALDI-Images ([@klaila2023supervised]), it exhibits a significant drawback: while tracking the persistence of the signal peaks, the persistence diagram forfeits the positional information of the signal. For instance, the standard persistence diagram of the upper levelset filtration cannot distinguish symmetric inputs, see Figure [2](#fig:xample_pd_fail){reference-type="ref" reference="fig:xample_pd_fail"}. ![Problem of the persistence diagram: the middle column displays two symmetric functions $f,g:\mathbb{R}\rightarrow \mathbb{R}$. The first column displays the persistence transformation of these functions, with the positional information on the $x$-axis and the birth values on the $y$-axis. The death values are encoded in the color scheme. The persistence transformations of the functions can be distinguished. The right-most column shows the persistence diagram of the upper levelset filtration with the birth value on the $y$-axis and the death value on the $x$-axis. The two diagrams are identical. (Graphic: [@klaila2023supervised])](grafics/fig2_com_pt.png){#fig:xample_pd_fail} In contrast to the persistence diagram of the upper levelset filtration, the persistence transformation is specifically designed to overcome these limitations of the persistence diagram by tracking the position of significant signal peaks as well as their persistence. Moreover, the persistence transformation captures all the information the persistence diagram of the upper levelset filtration captures. This implies that it is a strictly stronger method of capturing the necessary information for distinguishing data. However, this increase in performance comes at the cost of higher dimensionality. While the results of the persistence diagram are subsets of $\overline{\mathbb{R}}^2$, i.e., information about the birth and the death, the results of the persistence transformation are subsets of $M\times \overline{\mathbb{R}}^2$. This poses a significant challenge when computational efficiency of the analysis is a priority. A decision must be made regarding whether to prioritize fast computation or more comprehensive information. In the upcoming section, we introduce a modification to the persistence transformation that decreases its dimensionality, albeit at the expense of some information.

## Implementation {#subsection_implementation}

In this section, we present the pseudo-code for a potential implementation of the persistence transformation. Given the critical points $K$ as input, the algorithm efficiently computes the output $T_K$ with quadratic complexity. The resulting $T_K$ can then be further processed in linear time to represent the reduced persistence transformation of the next section or the persistence diagram. Additionally, the output is sorted based on persistence values.
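To make this description concrete, the following is a minimal, self-contained Python sketch of how such an output could be computed directly from a sampled one-dimensional signal. It is our own illustration of the elder rule via a union-find sweep over upper level sets, not the authors' implementation; the function name `persistence_transformation` and the toy signal are purely hypothetical. Peaks receive triples $(x, y, \overline{y})$ of position, birth and death, minima and boundary samples produce the trivial diagonal entries $(x^-, y^-, y^-)$, and the output is sorted by persistence.

```python
import math


def persistence_transformation(values):
    """Elder-rule persistence of the peaks of a sampled 1-D signal.

    Returns a list of triples (position, birth, death): `position` is the
    index of a sample, `birth` its height, and `death` the height at which
    its superlevel-set component merges into an older (higher) one.  The
    globally highest peak never dies and receives death = -inf; minima and
    boundary samples yield trivial entries with birth = death.  The list is
    sorted by persistence (birth - death), largest first.
    """
    n = len(values)
    # parent[i] is the union-find parent of sample i; -1 means "not yet born".
    parent = [-1] * n

    def find(i):
        # Find the representative (the oldest, i.e. highest, peak) of i.
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    triples = []
    # Sweep the samples from the highest value downwards (upper level sets).
    for i in sorted(range(n), key=lambda j: values[j], reverse=True):
        parent[i] = i
        for j in (i - 1, i + 1):  # try to merge with already-born neighbours
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # Elder rule: the component with the lower peak dies here.
                old, young = (ri, rj) if values[ri] >= values[rj] else (rj, ri)
                triples.append((young, values[young], values[i]))
                parent[young] = old
    root = find(0)
    triples.append((root, values[root], -math.inf))  # the oldest peak survives
    triples.sort(key=lambda t: t[1] - t[2], reverse=True)
    return triples


if __name__ == "__main__":
    signal = [0.0, 2.0, 1.0, 3.0, 0.5, 2.5, 0.0]
    for pos, birth, death in persistence_transformation(signal):
        print(f"x={pos}  birth={birth}  death={death}  "
              f"persistence={birth - death}")
```

Filtering the sorted list at a persistence threshold yields the denoising behaviour described below, and replacing each triple $(x, y, \overline{y})$ by $(x, y-\overline{y})$ gives the reduced persistence transformation of the next section.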
By considering low persistence features as noise, it becomes feasible to denoise the output by removing elements below a specific threshold. For a similar algorithm, its proof, and its practical application, we refer to [@klaila2023supervised].

**Input:** $K^+=\{k_0^+, \ldots, k_{n}^+\}; K^-=\{k_0^-, \ldots, k_{n\pm 1}^-\}$
**Return:** $T_K=\{(x_i,y_i,\overline{y_i})|i=0,\ldots,n \}$
$T_K\leftarrow \varnothing$
$k'=(x', y') \leftarrow K^+\text{.pop}(0)$
$T_K\leftarrow (x', y',-\infty)$
setOne, setTwo $\leftarrow \varnothing$
minimum $\leftarrow \infty$
maximum $\leftarrow -\infty$
setOne $\leftarrow k$
minimum $\leftarrow x$
setTwo $\leftarrow k$
maximum $\leftarrow x$
RecursionStep(minimum$, x'$, setOne, $K^-$.copy(), $T_K$)
RecursionStep(maximum$, x'$, setTwo, $K^-$.copy(), $T_K$)
$T_K$

**RecursionStep:**
**Input:** start, end, $K^+$, $K^-$, $T_K$
$K^+\leftarrow K^+\backslash k$
$k'=(x', y')\leftarrow K^+\text{.pop}(0)$
RecursionStep(start, $x'$, $K^+$.copy(), $K^-$.copy(), $T_K$)
$K^-\leftarrow K^-\backslash k$
$k^-=(x^-,y^-)\leftarrow K^-$.pop$(0)$
$T_K\leftarrow (x', y', y^-)$
RecursionStep($x^-, x'$, $K^+$.copy(), $K^-$.copy(), $T_K$)
RecursionStep($x^-$, end, $K^+$.copy(), $K^-$.copy(), $T_K$)

# Reduced Persistence Transformation {#section_reduced_persistence_transformation}

## Motivation and Definition {#subsection_reduced_definition}

As stated in [3.4](#subsection_comparison){reference-type="ref" reference="subsection_comparison"}, the advantages of the persistence transformation are accompanied by an increase in dimensionality. However, certain applications dealing with large amounts of data suffer from prolonged computational time due to the higher dimensionality of the analyzed data, even though they do not require the full extent of the more comprehensive information gathered. Nevertheless, these applications could still benefit from the positional information provided by the persistence transformation. In response to this concern, we develop an adapted version of the persistence transformation known as the reduced persistence transformation. This modified method aims at reducing the dimensionality of the output data while maintaining the positional information of signal peaks. To accomplish the intended adaptation, we opt for storing the persistence of the features, given by the difference of birth and death, instead of storing these values separately.

**Definition 3**. Building upon the previous definitions of $\mathscr{M}$ ([\[def_M\]](#def_M){reference-type="ref" reference="def_M"}) and the matching $\mu$ ([\[def_mu\]](#def_mu){reference-type="ref" reference="def_mu"}), we define the **reduced persistence transformation** as a map $$\begin{aligned} \Tilde{T}:\mathscr{K}&\rightarrow \mathscr{M}\\ K^+&\mapsto \Tilde{T}_K \end{aligned}$$ where each element $k=(x,y)\in K^+$ with $\mu(k)=\overline{k}=(\overline{x}, \overline{y})$ is mapped to $\Tilde{t}_K:=(x,y-\overline{y})$.

An example of the reduced persistence transformation can be seen in Fig. [3](#fig:example_rpt){reference-type="ref" reference="fig:example_rpt"}. ![Example of the reduced persistence transformation.
The original spectrum is displayed in black, while the values of the reduced persistence transformation, shown in red, illustrate the persistence of each feature.](grafics/fig3_rpt.png){#fig:example_rpt}

## Stability {#subsection_reduced_stability}

Similar to the stability of the persistence transformation in [3.3](#subsection_stability){reference-type="ref" reference="subsection_stability"}, the stability theorem can also be applied to the reduced persistence transformation.

**Theorem 5**. *The reduced persistence transformation satisfies the stability condition using the $p$-Wasserstein distance in $\mathscr{M}$: $$\begin{aligned} d_{W_p}(\Tilde{T}_K,\Tilde{T}_L)\leq d_{\mathscr{K}, p}(K, L). \end{aligned}$$*

*Proof.* Let $P=\Tilde{T}_K\times \Tilde{T}_L$ be the set of all matchings for $\Tilde{t}_K=(x,y-\overline{y})\in \Tilde{T}_K$ and $\Tilde{t}_L=(x', y'- \overline{y'})\in \Tilde{T}_L$, given $k=(x,y)\in K$ and $l=(x', y')\in L$. Without loss of generality, let $\kappa^+:=\max\{\kappa_K^+,\kappa_L^+\}$ and $\kappa^-:=\max\{\kappa_K^-,\kappa_L^-\}$. The stability of the reduced persistence transformation is given by the following inequality: $$\begin{aligned} &d_{W_p}(\Tilde{T}_K, \Tilde{T}_L)^p\\ =&\min_{m\in P}\sum_{(\Tilde{t}_K,\Tilde{t}_L)\in m}||\Tilde{t}_K-\Tilde{t}_L||_{\infty}^p\\ =&\min_{m\in P}\sum_{(\Tilde{t}_K,\Tilde{t}_L)\in m}||(x,y- \overline{y})-(x', y'- \overline{y'})||_{\infty}^p\\ \leq & \min_{m\in P}\sum_{(\Tilde{t}_K,\Tilde{t}_L)\in m}||(x,y)-(x', y')||_{\infty}^p+||\overline{y}-\overline{y'}||^p_{\infty}\\ \leq & \min_{m\in P}\sum_{(\Tilde{t}_K,\Tilde{t}_L)\in m}||(x,y)-(x', y')||_{\infty}^p+||(\overline{x}, \overline{y})-(\overline{x'}, \overline{y'})||^p_{\infty}\\ \leq & \sum_{(k,l)\in m^*}||(x,y)-(x', y')||_{\infty}^p+||(\overline{x}, \overline{y})-(\overline{x'}, \overline{y'})||^p_{\infty}\\ \overset{\ref{injective}}{\leq} &\sum_{i=1}^{\kappa^+}||k_i^+-l_i^+||_{\infty}^p +\sum_{j=1}^{\kappa^-} ||k_j^--l_j^-||_{\infty}^p\\ =&\sum_{(k,l)\in m^*}||k-l||_{\infty}^p\\ =& d_{\mathscr{K}, p}(K, L)^p. \end{aligned}$$ The inequality of line (16) is given by the injectivity ([Theorem 3](#injective){reference-type="ref" reference="injective"}) of the matching function, which ensures the unique occurrence of each element $k^-\in K^-$ and $l^-\in L^-$. ◻

## Comparison {#subsection_reduced_comparison}

As mentioned in [3.4](#subsection_comparison){reference-type="ref" reference="subsection_comparison"}, while the persistence transformation provides more comprehensive information than the persistence diagram of the upper levelset filtration, it comes at the cost of increased dimensionality. Conversely, the reduced persistence transformation reduces this dimensionality but loses some information in the process. Consequently, it is crucial to conduct a comparison between the reduced persistence transformation and the persistence diagram. After careful examination, it becomes evident that neither of the mentioned methods outperforms the other, and they share the same dimensionality in the output, i.e., $\mathbb{R}^2$. Giving them a total order is not feasible, as there exist scenarios where one outperforms the other, and vice versa (see Fig. [4](#fig:example_reduced){reference-type="ref" reference="fig:example_reduced"}). The key distinction is that the persistence diagram captures the total height of the birth and death values of a feature, whereas the reduced persistence transformation incorporates the positional information alongside its relative height, i.e., its persistence.
This implies that in any application where positional information is crucial, the reduced persistence transformation is a more suitable choice as an analysis method. An instance of such an application can be observed in [@klaila2023supervised], where the reduced persistence transformation is employed to enhance the accuracy of tumor tissue classification. In novel applications, it is imperative to assess the significance of positional relevance to determine the potential benefits of employing the modified methodology in comparison to the original persistence transformation or the persistence diagram, as each approach possesses its distinct area of efficiency. ![Example of the reduced persistence transformation: In the middle, two symmetric functions $f,g:\mathbb{R}\rightarrow \mathbb{R}$ are displayed. The left-hand side illustrates the reduced persistence transformation of these graphs, with the positional information on the $x$-axis and the persistence information on the $y$-axis. The graphs can be distinguished. The right-hand side depicts the persistence diagram of the upper levelset filtration, with the birth value on the $y$-axis and the death value on the $x$-axis. The diagrams are identical. (Graphic: [@klaila2023supervised])](grafics/fig4_com_rpt.png){#fig:example_reduced}

# Extension to higher dimensions {#section_higher_dimensions}

Until this point, we have made the assumption that $M\subseteq \overline{\mathbb{R}}$; however, it is feasible to relax this condition. In this section, we present a concise overview of the approach to extend the persistence transformation to an arbitrary higher-dimensional compact metric space $M$, equipped with a total order and a notion of path connectivity. Additionally, $M$ must be a subset of a space including a neutral element, which is essential for the defined metric on $\mathscr{M}$ ([Definition 1](#metric_M){reference-type="ref" reference="metric_M"}). These requirements can be satisfied by $M\subseteq \overline{\mathbb{R}}^n$, if we equip $\overline{\mathbb{R}}^n$ with a total order, e.g., the lexicographic order. One crucial adaptation for the increase in dimension is the definition of the ordered critical set. While in $\overline{\mathbb{R}}$ each path $[x,x']$ is unique for all elements $x,x'\in\overline{\mathbb{R}}$, there may exist multiple paths for elements $x,x'\in \overline{\mathbb{R}}^n$. This property breaks the uniqueness of elements $\hat{k}=(\hat{x},\hat{y})\in K$ such that $\hat{x}\in [x, x']$, resulting in cases where no minimal element $k\in K^-$ lies on the path. Another issue that needs to be addressed is the presence of areas with the same height, where all elements within that area are critical compared to any element outside that area. Such situations could lead to an imbalance in the number of elements in the sets $K^+$ and $K^-$. To address this, we can utilize the total order of $M$ to select only one element from this area, thus ensuring the satisfaction of the balance equation ([\[balanc\]](#balanc){reference-type="ref" reference="balanc"}). Finally, we also need to consider the adaptation of saddle points, which are degenerate critical points in Morse theory (see [@milnor1963morse]). Although they may not possess a unique gradient direction, they can still act as the highest valley that needs to be traversed to reach the next peak, or in other words, they could be crucial for defining the persistence of peaks.
One possibility for handling them is to consider these saddle points as maxima and minima, including them in the sets $K^+$ and $K^-$, respectively. In this approach, elements $k \in K$ may have a multiplicity to account for their presence in the analysis. After implementing all adaptations of the space of critical points, the definitions [Definition 1](#metric_M){reference-type="ref" reference="metric_M"}, [\[def_mu\]](#def_mu){reference-type="ref" reference="def_mu"}, [Definition 2](#def_persistence_transformation){reference-type="ref" reference="def_persistence_transformation"} and [Definition 3](#def_reduced_persistence_transformation){reference-type="ref" reference="def_reduced_persistence_transformation"} will hold, resulting in a stable persistence transformation on the higher dimensional set $M$. The proofs of the theorems are presented in a way that they are independent of the dimensionality of $M$. However, for the sake of simplicity and clarity, the specific details of the extension to higher dimensions are omitted in this paper. It is important to note that increasing the dimension of $M$ will also lead to an increase in the dimension of the output space. As previously mentioned, the reduced persistence transformation and the persistence transformation produce an output of dimensions $M\times \overline{\mathbb{R}}$ and $M\times \overline{\mathbb{R}}^2$, respectively. For cases where $M\subseteq \overline{\mathbb{R}}$, the output dimension of the reduced persistence transformation can match the output dimension of the persistence diagram, i.e., $\overline{\mathbb{R}}^2$. However, for any $M\subseteq \overline{\mathbb{R}}^n$, the output dimension of the reduced persistence transformation will increase by $n-1$. One should consider the potential increase in computational time for further processing steps and carefully evaluate the significance of the positional information obtained from the increased output dimensions.

# Summary and Outlook {#section_summary}

In conclusion, this paper has successfully achieved its research objectives by constructing a stable method for analyzing time series data arising from Morse functions in a topological manner while preserving the positional information of signal peaks. We have defined the persistence transformation, established its stability, and demonstrated that it nicely complements the existing machinery of persistent homology on levelset filtrations. Furthermore, we have introduced the reduced persistence transformation as a valuable side result, effectively tracking positional information with reduced dimensionality. The implications of this research are far-reaching, as the proposed method can be applied across various fields where data is represented as the image of a real-valued function. Its potential to deliver promising results, particularly in domains where positional information is crucial, underscores its significance. Our hypothesis regarding the successful application of the reduced persistence transformation to real-world problems has been confirmed in a previous study about MALDI-MSI data ([@klaila2023supervised]). However, we acknowledge certain limitations in the persistence transformation, particularly in terms of its computational complexity. As the dimension of the data set $M$ increases, the output dimension of the persistence transformation also grows, resulting in longer computational times for subsequent processing tasks. Looking ahead, the outlook for this research is promising.
The complexity of calculating the persistence transformation is quadratic in the number of critical points of the input. Furthermore, by treating low persistence features as noise, we can adapt the algorithm to filter out such peaks with a single hyper-parameter, leading to a denoising functionality of the persistence transformation and a more concise output. In addition to the outlook presented earlier, exploring the characteristics of critical points offers promising avenues for further improving the algorithms and matchings. Applying Morse theory ([@milnor1963morse]) to analyze critical points could provide valuable insights and refinements to our approach. Additionally, leveraging Prof. Kozlov's work on optimizing matchings (see [@kozlov2020combinatorial]) could lead to significant enhancements in the overall concept. By delving deeper into the nature of critical points and their interplay with the proposed methods, we have the potential to unlock novel techniques and strategies to advance topological data analysis. Such investigations will undoubtedly contribute to the robustness and versatility of our approach and pave the way for even more impactful applications in various domains. Lastly, the adaptability of this work to specific applications allows for customized implementations that precisely address the unique demands of various domains. As we continue to refine and extend this method, we are confident in its ability to contribute significantly to advancing topological data analysis in diverse real-world scenarios.

# Acknowledgements {#acknowledgements .unnumbered}

The authors extend their sincere gratitude to Prof. Dmitry Feichtner-Kozlov at the University of Bremen for supervising GK and LR. Also, we sincerely thank Lukas Mentz for his constructive criticism of the manuscript. The financial support by the German Research Foundation via the RTG 2224, titled \"$\pi^{3}$: Parameter Identification - Analysis, Algorithms, Implementations\" is gratefully acknowledged.

# Authors' information {#authors-information .unnumbered}

- Gideon Klaila, klailag\@uni-bremen.de, ORCID: 0009-0002-2861-2095
- Anastasios Stefanou, stefanou\@uni-bremen.de, ORCID: 0000-0002-5408-9317
- Lena Ranke, lranke\@uni-bremen.de, ORCID: 0009-0003-4258-1608

# Authors' contributions {#authors-contributions .unnumbered}

This paper is based on joint work of the three main authors as equal contributors. Gideon Klaila developed the theoretical formalism, contributed the main conceptual ideas and performed the analytic calculations. Both Anastasios Stefanou and Lena Ranke verified the results and contributed to the final version of the manuscript while Anastasios Stefanou helped supervise the project.

# Competing interests {#competing-interests .unnumbered}

The authors declare that they have no competing interests.

# Declarations {#declarations .unnumbered}

## Ethical Approval {#ethical-approval .unnumbered}

Not applicable.

## Funding {#funding .unnumbered}

Deutsche Forschungsgemeinschaft. Grant Number: RTG 2224

## Availability of data and materials {#availability-of-data-and-materials .unnumbered}

Not applicable.
arxiv_math
{ "id": "2310.05559", "title": "Stability of the persistence transformation", "authors": "Gideon Klaila, Anastasios Stefanou, Lena Ranke", "categories": "math.AT", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | We continue our previous work [@HLP1] on the limit of the spatially homogeneous quantum Boltzmann equation as the Planck constant $\epsilon$ tends to zero, also known as the semi-classical limit. For general interaction potential, we prove the following: (i). The spatially homogeneous quantum Boltzmann equations are locally well-posed in some weighted Sobolev spaces with quantitative estimates uniformly in $\epsilon$. (ii). The semi-classical limit can be further described by the following asymptotic expansion formula: $$f^\epsilon(t,v)=f_L(t,v)+O(\epsilon^{\vartheta}).$$ This holds locally in time in Sobolev spaces. Here $f^\epsilon$ and $f_L$ are solutions to the quantum Boltzmann equation and the Fokker-Planck-Landau equation with the same initial data. The convergence rate $0<\vartheta \leq 1$ depends on the integrability of the Fourier transform of the particle interaction potential. Our new ingredients lie in a detailed analysis of the Uehling-Uhlenbeck operator from both angular cutoff and non-cutoff perspectives. address: - | Department of Mathematical Sciences, Tsinghua University\ Beijing 100084, P. R. China. - | Department of Mathematical Sciences, Tsinghua University\ Beijing 100084, P. R. China. - Dipartimento di Matematica, Università di Roma La Sapienza, Piazzale Aldo Moro 5, 00185 Rome, Italy; International Research Center M $\&$ MOCS, Università dell'Aquila, Palazzo Caetani, Cisterna di Latina, (LT) 04012, Italy - School of Mathematics, Sun Yat-Sen University, Guangzhou, 510275, P. R. China. author: - Ling-Bing He - Xuguang Lu - Mario Pulvirenti - Yu-Long Zhou title: "On semi-classical limit of spatially homogeneous quantum Boltzmann equation: asymptotic expansion" ---

# Introduction

The quantum Boltzmann equations for Fermi-Dirac and Bose-Einstein statistics proposed by Uehling and Uhlenbeck in [@UU] (after Nordheim [@N]) should be derived from the evolution of real Fermions and Bosons in the so-called weak-coupling limit (see [@Bal] and [@BCEP]). Since the Fokker-Planck-Landau equation is the effective equation associated with a dense and weakly interacting gas of classical particles (see [@ICM], [@BPS]), it is not surprising that the semi-classical limits of the solutions to quantum Boltzmann equations are expected to be solutions to the Fokker-Planck-Landau equation. The weak convergence of the limit is justified in the paper [@HLP1]. The main purpose of this article is to provide a detailed asymptotic expansion formula to describe the semi-classical limit in the classical sense.

## Setting of the problem

The Cauchy problem of the spatially homogeneous quantum Boltzmann equation reads $$\begin{aligned} \label{BUU} \partial_t f =Q_{UU}^{\epsilon}(f), \quad f|_{t=0} =f_0, \end{aligned}$$ which describes the time evolution of the gas given the initial datum $f_0$. Here the solution $f=f(t, v)\ge 0$ is the density of the gas.
The Uehling-Uhlenbeck operator $Q_{UU}^{\epsilon}$ in the weakly coupled regime is defined by $$\begin{aligned} \quad Q_{UU}^{\epsilon}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}(|v-v_*|,\cos\theta) \big(f_*'f'(1\pm\epsilon^3f_*) (1\pm\epsilon^3f) -f_*f(1\pm\epsilon^3f_*')(1\pm\epsilon^3f')\big)\mathrm{d}\sigma \mathrm{d}v_*,\label{OPQUU} \end{aligned}$$ where $$\begin{aligned} \label{DefB} B^{\epsilon}(|v-v_*|,\cos\theta) \colonequals \epsilon^{-4} |v-v_*| \left(\hat{\phi}\left(\epsilon^{-1}|v-v_*|\sin (\theta/2)\right) \pm \hat{\phi}\left(\epsilon^{-1}|v-v_*|\cos (\theta/2)\right) \right)^2. \end{aligned}$$

$\bullet$ *Some explanation on the model.* On the derivation of [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"} and [\[DefB\]](#DefB){reference-type="eqref" reference="DefB"} in the weak-coupling limit, we refer to [@BCEP; @BCEP04; @BCEP08; @ESY04]. To make [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"} and [\[DefB\]](#DefB){reference-type="eqref" reference="DefB"} clear, we have the following remarks:

1. The parameter $\epsilon$ is the Planck constant $\hbar$. Note that for simplicity, we already drop the factor $2\pi$ that appeared in [@HLP1]. As our goal is to study the semi-classical limit, we always assume $0<\epsilon<1$.
2. The sign $"+"$ and the sign $"-"$ correspond to Bose-Einstein particles and Fermi-Dirac particles, respectively.
3. The real-valued function $\hat{\phi}$ is the Fourier transform of the particle interaction potential $\phi(|x|)$.
4. The deviation angle $\theta$ is defined through $\cos\theta\colonequals\frac{v-v_*}{|v-v_*|}\cdot \sigma$. Thanks to the symmetric property of the collision kernel, we can assume that $\theta\in [0,\pi/2]$.
5. We use the standard shorthand $h=h(v)$, $g_*=g(v_*)$, $h'=h(v')$, $g'_*=g(v'_*)$ where $v'$, $v_*'$ are given by $$\begin{aligned} v'=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma, \quad v'_{*}=\frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma, \quad \sigma\in\mathop{\mathbb S\kern 0pt}\nolimits^{2}. \end{aligned}$$

$\bullet$ *Basic assumptions on the potential function $\phi$.* For $a \geq 0$, let $$\begin{aligned} \label{notation-I-Iprime} I_{a} \colonequals \int_0^\infty \hat{\phi}^2(r) r^a \mathrm{d}r, \quad I^{\prime}_{a} \colonequals \int_0^\infty |r (\hat{\phi})^{\prime}(r)|^2 r^{a} \mathrm{d}r. \end{aligned}$$ Our basic assumptions on $\hat{\phi}$ are $$\begin{aligned} &&{\bf (A1)} \quad I_{0} + I_{3} + I^{\prime}_{3} < \infty. \\ &&{\bf(A2)} \quad I_{3+\vartheta} + I^{\prime}_{3+\vartheta} <\infty \text{ for some }\vartheta\in(0,1]. \end{aligned}$$ Several remarks are in order:

1. The condition $I_{0}<\infty$ in **(A1)** is used to bound $\int_{\mathop{\mathbb S\kern 0pt}\nolimits^2} B^\epsilon (|v-v_*|,\cos\theta)\mathrm{d}\sigma<\epsilon^{-3}I_{0}$, see [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"} for details. This is the key point in proving the *global existence* of the mild solution for the Fermi-Dirac particles.
In the weak coupling regime [\[DefB\]](#DefB){reference-type="eqref" reference="DefB"}, finiteness of the $\sigma$-integral holds even for some inverse power law potentials. Indeed, taking $\phi(|x|)=|x|^{-p} (0<p<3)$, one can check that $\int_{\mathop{\mathbb S\kern 0pt}\nolimits^2} B^\epsilon (|v-v_*|,\cos\theta)\mathrm{d}\sigma$ is finite for $p>2$ and infinite for $p \leq 2$, see [@Z-2023] for more details on this. For the infinite case, one may need some angular cutoff. The condition $I_{0}<\infty$ is reminiscent of Grad's angular cutoff assumption for inverse power law potentials in the low density regime, which allows one to separate the Boltzmann operator into gain and loss terms. From now on, we will refer to such a mathematical treatment as the \"angular cutoff\" view. However, such a treatment is not enough since the upper bound blows up as $\epsilon\to 0$.

2. $I_{3}$ is derived by computing the momentum transfer which is defined as follows: $$\begin{aligned} \label{MometumTrans} M_o^\epsilon (|v-v_*|) \colonequals \int_{\mathop{\mathbb S\kern 0pt}\nolimits^2} B^\epsilon (|v-v_*|,\cos\theta)(1-\cos\theta)\mathrm{d}\sigma. \end{aligned}$$ We will show later that $M_o^\epsilon\sim I_{3}|v-v_*|^{-3}$ when $\epsilon$ is sufficiently small. This is also related to the diffusion coefficient of the Fokker-Planck-Landau collision operator (see [\[OPQL\]](#OPQL){reference-type="eqref" reference="OPQL"} and [\[def-a\]](#def-a){reference-type="eqref" reference="def-a"}). The condition $I_{3}<\infty$ is reminiscent of angular non-cutoff kernels for inverse power law potentials, as one always relies on an additional factor $\theta^2$ to deal with the angular singularity. From now on, we will refer to such a mathematical treatment as the \"angular non-cutoff\" view.
3. The condition $I_{3} + I^{\prime}_{3} < \infty$ in **(A1)** allows us to derive the cancellation lemma from the angular non-cutoff point of view. It plays an essential role in obtaining the uniform-in-$\epsilon$ estimate.
4. Assumption **(A1)** is used to prove uniform-in-$\epsilon$ local well-posedness and propagation of regularity. To get the asymptotic expansion for the semi-classical limit, technically we need assumption **(A2)**.
5. We do not impose any point-wise condition on $\hat{\phi}$. All the constants in this article that depend on $\phi$ do so only through the quantities in **(A1)** and **(A2)**.

By formal computation (see [@BP]), the Cauchy problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} will converge to that of the Fokker-Planck Landau equation $$\begin{aligned} \label{landau} \partial_t f = Q_L(f,f), \quad f|_{t=0} =f_0. \end{aligned}$$ Here the Fokker-Planck Landau operator $Q_L$ reads $$\begin{aligned} \label{OPQL} Q_L(g,h)(v) \colonequals \nabla \cdot \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} a(v-v_*) \{ g(v_*) \nabla h(v) - \nabla g(v_*) h(v) \} \, \mathrm{d}v_*, \end{aligned}$$ where $a$ is a matrix-valued function that is symmetric and positive semi-definite. It depends on the interaction potential between particles, and is defined by (for $i,j =1, 2, 3$) $$\begin{aligned} \label{def-a} a_{ij}(z) =2\pi I_{3}|z|^{-1}\, \Pi_{ij} (z), \quad \Pi_{ij}(z)= \delta_{ij} - \frac{z_i z_j}{|z|^2}, \end{aligned}$$ where $I_{3}$ is defined in [\[notation-I-Iprime\]](#notation-I-Iprime){reference-type="eqref" reference="notation-I-Iprime"}. Our goal is to study the semi-classical limit from [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} to [\[landau\]](#landau){reference-type="eqref" reference="landau"} in some weighted Sobolev space.
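Before turning to the strategy of the proof, we record a quick illustrative sanity check (a hypothetical example of ours, not taken from the references above): the assumptions **(A1)** and **(A2)** are satisfied, for instance, by the Gaussian profile $\hat{\phi}(r)=e^{-r^2/2}$. Indeed, $\hat{\phi}^2(r)=e^{-r^2}$ and $|r(\hat{\phi})^{\prime}(r)|^2=r^{4}e^{-r^2}$, so that $$\begin{aligned} I_{a} = \int_0^\infty e^{-r^2} r^{a} \mathrm{d}r = \frac{1}{2}\Gamma\Big(\frac{a+1}{2}\Big)<\infty, \quad I^{\prime}_{a} = \int_0^\infty e^{-r^2} r^{a+4} \mathrm{d}r = \frac{1}{2}\Gamma\Big(\frac{a+5}{2}\Big)<\infty \end{aligned}$$ for every $a\geq 0$; hence **(A1)** holds and **(A2)** holds with any $\vartheta\in(0,1]$.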
To study this limit, we separate our proof into two parts: well-posedness results for [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} with uniform-in-$\epsilon$ estimates and the asymptotic expansion formula with explicit error estimates.

## Main results

Before introducing the main results, we list some facts about notation.

$\bullet$ As usual, $a\lesssim b$ is used to denote that there is a universal constant $C$ such that $a \leq C b$. The notation $a\sim b$ means $a\lesssim b$ and $b\lesssim a$. We denote by $C(\lambda_1,\lambda_2,\cdots, \lambda_n)$ or $C_{\lambda_1,\lambda_2,\cdots, \lambda_n}$ some constant depending on parameters $\lambda_1,\lambda_2,\cdots, \lambda_n$. The notation $a\lesssim_{\lambda_1,\lambda_2,\cdots, \lambda_n} b$ is interpreted as $a \leq C_{\lambda_1,\lambda_2,\cdots, \lambda_n} b$.

$\bullet$ We recall the $L^{p}$ space for $1 \leq p \leq \infty$ through the norm $$\begin{aligned} \|f\|_{L^p} \colonequals \left(\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} |f(v)|^p \mathrm{d}v \right)^{1/p} \text{ for } 1 \leq p < \infty; \quad \|f\|_{L^\infty} \colonequals \mathop{\mathrm{ess\,sup}}_{v \in \mathop{\mathbb R\kern 0pt}\nolimits^3} |f(v)|. \end{aligned}$$ Denote the weight function by $W_{l}(v) \colonequals (1+|v|^2)^{\frac{l}{2}}$ for $l \in \mathop{\mathbb R\kern 0pt}\nolimits$ and write $W = W_{1}$ for simplicity. Then the weighted $L^{p}_{l}$ space is defined through the norm $\|f\|_{L^p_l} \colonequals \|W_l f\|_{L^p}$. We denote the multi-index $\alpha =(\alpha _1,\alpha _2,\alpha _3) \in \mathop{\mathbb N\kern 0pt}\nolimits^3$ with $|\alpha |=\alpha _1+\alpha _2+\alpha _3$. For derivatives up to order $N \in \mathop{\mathbb N\kern 0pt}\nolimits$, the weighted Sobolev space $W^{N,p}_l$ on $\mathop{\mathbb R\kern 0pt}\nolimits^3$ with $p\in [1, \infty], l \in \mathop{\mathbb R\kern 0pt}\nolimits$ is defined through the following norm $$\begin{aligned} \|f\|_{W^{N,p}_l} \colonequals \sum_{|\alpha|\le N} \|\partial^\alpha f\|_{L^p_l}. \end{aligned}$$ If $p = 2$, denote by $H^{N}_l$ the Hilbert space with $\|f\|_{H^{N}_l} = \|f\|_{W^{N,2}_l}$.

$\bullet$ For simplicity, for $T>0$, let $\mathcal{A}_{T} \colonequals L^{\infty}([0,T]; L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3))$ associated with the norm $\|f\|_{T} \colonequals \sup_{0 \leq t \leq T} \|f(t)\|_{L^{1}}$. Let $\mathcal{A}_{\infty} \colonequals L^{\infty}([0,\infty); L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3))$ associated with the norm $\|f\|_{\infty} \colonequals \sup_{t \geq 0} \|f(t)\|_{L^{1}}$.

$\bullet$ Given a non-negative initial datum $f_0 \in L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3) \cap L^{\infty}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$, we consider the initial value problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} for Bose-Einstein particles with $0<\epsilon<1$ and for Fermi-Dirac particles with $0< \epsilon\leq \min \{ 1, \|f_0\|_{L^{\infty}}^{-1/3}\}$. Set $\|f_0\|_{L^{\infty}}^{-1/3} = \infty$ in the case $\|f_0\|_{L^{\infty}} = 0$, where the problem trivially has the zero solution.

$\bullet$ For simplicity, we will use the shorthand $\int (\cdots) \mathrm{d}V =\int_{v, v_* \in \mathop{\mathbb R\kern 0pt}\nolimits^3, \sigma \in \mathop{\mathbb S\kern 0pt}\nolimits^2, (v - v_*) \cdot \sigma \geq 0} (\cdots) \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma$. We drop the integration domain in most integrals when there is no risk of confusion.

Next we introduce the definition of a mild solution to [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"}.

**Definition 1**.
*For $T>0$, set $\mathcal{A}_{T} \colonequals L^{\infty}([0,T]; L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3) \cap L^{\infty}(\mathop{\mathbb R\kern 0pt}\nolimits^3))$. A measurable non-negative function $f \in \mathcal{A}_{T}$ on $[0, T] \times \mathop{\mathbb R\kern 0pt}\nolimits^3$ is called a local (if $T < \infty$) or global (if $T = \infty$) mild solution of the initial value problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} if there is a null set $Z \subset \mathop{\mathbb R\kern 0pt}\nolimits^3$ such that, for all $t \in [0, T]$ and $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$, $$\begin{aligned} % \label{mild-solution-def} f(t,v) = f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(f)(\tau, v) \mathrm{d}\tau, \end{aligned}$$ and additionally, for Fermi-Dirac particles, it holds that $$\begin{aligned} \label{upper-bound} \|f(t)\|_{L^{\infty}} \leq \epsilon^{-3}. \end{aligned}$$* Our first result is the global well-posedness and propagation of regularity of the Cauchy problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} for Fermi-Dirac particles. **Theorem 1** (Fermi-Dirac particles). *Let $\hat{\phi}$ satisfy **(A1)** and $0\le f_0 \in L^1 \cap L^{\infty}$. Suppose that $0< \epsilon\leq \min \{1, \|f_0\|_{L^{\infty}}^{-1/3}\}$.* 1. ***(Global well-posedness)** The Cauchy problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} for Fermi-Dirac particles admits a unique global mild solution $f^\epsilon$.* 2. ***(Propagation of regularity uniformly in $\epsilon$)** If $f_0 \in H^{N}_{l}$ for $N, l \geq 2$, there exists a lifespan $T^*=T^*(N, l, \phi, \|f_0\|_{H^{N}_{l}})>0$ independent of $\epsilon$, such that the family of solutions $\{f^{\epsilon}\}_{\epsilon}$ is uniformly bounded in $L^{\infty}([0, T^*]; H^{N}_{l}) \cap C([0, T^*]; H^{N-2}_{l})$. More precisely, uniformly in $\epsilon$, $$\begin{aligned} \label{bounded-by-initial-data} \sup_{t\in[0,T^*]} \|f^{\epsilon}(t)\|_{H^{N}_{l}} \le 2\|f_0\|_{H^{N}_{l}}, \end{aligned}$$ and for $0 \leq t_1 \leq t_2 \leq T^*$, $$\begin{aligned} \label{continuous-in-time} \|f^{\epsilon}(t_2) - f^{\epsilon}(t_1)\|_{H^{N-2}_{l}} \le C(N, l, \phi, \|f_0\|_{H^{N}_{l}}) (\|f_0\|_{H^{N}_{l}}^2 + \|f_0\|_{H^{N}_{l}}^3) (t_2 - t_1). \end{aligned}$$* Our second result is the local well-posedness and propagation of $H^{N}_{l}$ regularity for the Cauchy problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} for Bose-Einstein particles. **Theorem 2** (Bose-Einstein particles). *Let $\hat{\phi}$ satisfy **(A1)** and let $0 \leq f_0 \in H^{N}_{l}$ for $N, l \geq 2$. Then for any $0<\epsilon<1$, the Cauchy problem [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} for Bose-Einstein particles admits a unique local mild solution $f^\epsilon\in L^{\infty}([0, T^*]; H^{N}_{l}) \cap C([0, T^*]; H^{N-2}_{l})$, where $T^*=T^*(N, l, \phi, \|f_0\|_{H^{N}_{l}})>0$ is independent of $0< \epsilon<1$. Moreover, the family of solutions $\{f^{\epsilon}\}_{0< \epsilon< 1}$ satisfies [\[bounded-by-initial-data\]](#bounded-by-initial-data){reference-type="eqref" reference="bounded-by-initial-data"} and [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"} uniformly in $\epsilon$.* **Remark 1**. *Note that [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"} implies that [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} holds in the space $H^{N-2}_{l}$ for almost all $t \in [0, T]$.
Thanks to the weak convergence result in [@HLP1], similar local well-posedness also holds for the Landau equation [\[landau\]](#landau){reference-type="eqref" reference="landau"} with the estimates [\[bounded-by-initial-data\]](#bounded-by-initial-data){reference-type="eqref" reference="bounded-by-initial-data"} and [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"}.* Our last result is on the asymptotic expansion for the semi-classical limit. **Theorem 3** (Semi-classical limit with convergence rate). *Let $0 \leq N \in \mathop{\mathbb N\kern 0pt}\nolimits, 2 \leq l \in \mathop{\mathbb R\kern 0pt}\nolimits$. Suppose that (i). $\hat{\phi}$ satisfies **(A1)** and **(A2)**; (ii). $0\le f_0 \in H^{N+3}_{l+5}$; (iii). For Fermi-Dirac particles, $0< \epsilon\leq \min \{1, \|f_0\|_{L^{\infty}}^{-1/3}\}$. Let $f^\epsilon$ and $f_L$ be the solutions to [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} and [\[landau\]](#landau){reference-type="eqref" reference="landau"} respectively with the initial datum $f_0$ on $[0, T^*]$, where $T^*=T^*(N, l, \phi, \|f_0\|_{H^{N+3}_{l+5}})$ is the lifespan given in Theorems [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"} and [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"}. Then for $t\in [0,T^*]$, it holds that $$\begin{aligned} \label{AsEx}f^\epsilon(t,v)=f_L(t,v)+\epsilon^{\vartheta} R^\epsilon(t,v), \end{aligned}$$where $$\begin{aligned} \label{error-estimate} \sup_{t\in[0,T^*]} \|R^\epsilon\|_{H^{N}_{l}} \leq C(\|f_0\|_{H^{N+3}_{l+5}}; N, l, \phi). \end{aligned}$$* Some comments on these results are in order: **Remark 2**. *Owing to the fact that Fermi-Dirac particles enjoy the $L^\infty$ upper bound [\[upper-bound\]](#upper-bound){reference-type="eqref" reference="upper-bound"}, we can in fact prove the global propagation of regularity with the following quantitative estimates:* - ***(Global propagation of regularity)** If $f_0 \in L^2_{l}$ for $l \geq 2$, then for any $t \geq 0$, $$\begin{aligned} \label{F-D-propgation-L2-l} \| f^\epsilon(t)\|_{L^{2}_l} \leq \| f_0\|_{L^{2}_l} \exp \left(t C_{\epsilon,\phi,l}(\|f_0\|_{L^1} + \epsilon^{-3}) \right). \end{aligned}$$ If $f_0 \in L^1_l \cap L^{\infty}_l \cap H^1_l$ for $l \geq 2$, then for any $t \geq 0$, $$\begin{aligned} \label{F-D-propgation-H1-l} \| f^\epsilon(t)\|_{L^1_l \cap L^{\infty}_l \cap H^1_l} \leq C(\| f_0\|_{L^1_l \cap L^{\infty}_l \cap H^1_l}, t; \epsilon, \phi, l). \end{aligned}$$ If $f_0 \in W^{1,1}_l \cap W^{1,\infty}_l \cap H^{N}_l$ for $N, l \geq 2$, then for any $t \geq 0$, $$\begin{aligned} \label{F-D-propgation-HN-l} \| f^\epsilon(t)\|_{W^{1,1}_l \cap W^{1,\infty}_l \cap H^{N}_l} \leq C(\| f_0\|_{W^{1,1}_l \cap W^{1,\infty}_l \cap H^{N}_l}, t; \epsilon, \phi, l, N). \end{aligned}$$* *We cannot expect similar results for Bose-Einstein particles because of the Bose-Einstein condensation phenomenon. The proofs of these propagation results are based on the fact that derivatives and weights can be suitably distributed across the non-linear terms so that the target high-order norm grows at most linearly, under the premise that the lower-order norms are already propagated globally. Since these results deviate somewhat from the main purpose of this article, we omit their proofs for brevity.* **Remark 3**.
*The propagation of regularity uniformly in $\epsilon$ in equation [\[bounded-by-initial-data\]](#bounded-by-initial-data){reference-type="eqref" reference="bounded-by-initial-data"} and the temporal continuity described in equation [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"} are applicable to both Fermi-Dirac and Bose-Einstein particles. This enables us to delve deeper into the semi-classical limit, as presented in Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}.* **Remark 4**. *To get the upper bound of the error term $R^\epsilon$ uniformly in $\epsilon$, it is necessary to impose high regularity on the solutions $f^\epsilon$ and $f_L$, since we need to estimate the error between $Q_{UU}^{\epsilon}(f^\epsilon)$ and $Q_L(f_L)$; see [\[upper-bound-3-order-higher\]](#upper-bound-3-order-higher){reference-type="eqref" reference="upper-bound-3-order-higher"} for the error equation and Lemma [Lemma 20](#Q1-and-QL){reference-type="ref" reference="Q1-and-QL"} for the main estimate.* **Remark 5**. *We emphasize that the asymptotic expansion [\[AsEx\]](#AsEx){reference-type="eqref" reference="AsEx"} is sharp. This can be easily seen from the proof of Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. Roughly speaking, to get the factor $\epsilon^{\vartheta}$, we need to kill the singularity which behaves like the Riesz potential $|x|^{-2-\vartheta}$. Obviously this singularity can be removed in the case $\vartheta <1$. For the borderline case $\vartheta=1$, we further check that the corresponding part in fact behaves as $\frac{K(x)}{|x|^3}$, which is the kernel of a typical Calderón-Zygmund operator. From this point of view, the expansion $f^\epsilon=f_L+O(\epsilon)$ is sharp for any smooth potential function $\phi$.* **Remark 6**. *The asymptotic formula [\[AsEx\]](#AsEx){reference-type="eqref" reference="AsEx"} holds locally in time for any initial data in $H^{N+3}_{l+5}$. We may expect a large-time or even global-in-time result for some special initial data, considering the recent progress on global well-posedness in the homogeneous [@LiLu] and inhomogeneous [@bae2021relativistic; @JXZ; @ouyang2021quantum; @Z-2023] cases.*

## Short review

The quantum Boltzmann equation has been widely investigated by many authors. In this subsection, we first give a short review of the existing results. Then we explain the main difficulty in the semi-classical limit problem. In most of the literature on the quantum Boltzmann equation, the authors usually take $\epsilon=1$ in the definition of the Uehling-Uhlenbeck operator [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"}. In this situation, for Fermi-Dirac particles, we have the a priori bound $f\le 1$ for the solution $f$. We refer readers to [@Lu2001; @Lu2008] for the existence result. For Bose-Einstein particles, we refer readers to [@Lu2004; @Briant-Einav] for the existence of measure-valued solutions and the local well-posedness in weighted $L^\infty$ spaces. For the Bose-Einstein condensation at low temperature, we refer to [@EV1; @EV2] and also [@CaiLu; @LiLu; @Lu2016] for the recent progress. As for the semi-classical limit of [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"}, in [@BP], Benedetto and Pulvirenti proved the convergence of the operator.
More precisely, for a suitable class of integrable functions $f$ and any Schwartz function $\varphi$, $$\begin{aligned} \label{asyOP} \lim\limits_{\epsilon\to 0}\langle Q_{UU}^{\epsilon}(f), \varphi \rangle=\langle Q_{L}(f,f), \varphi\rangle . \end{aligned}$$ The notation $\langle f, g\rangle \colonequals \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}f(v)g(v)\mathrm{d}v$ is used to denote the inner product in the $v$ variable. In [@HLP1], under some assumptions on $\hat{\phi}$, the following results are proved: (1). Starting from Eq.(Fermi-Dirac), up to subsequences, the *isotropic weak solution* to Eq.(Fermi-Dirac) will converge to the *isotropic weak solution* to Eq.(Fokker-Planck-Landau); (2). Starting from Eq.(Bose-Einstein), up to subsequences, the *measure-valued isotropic weak solution* to Eq.(Bose-Einstein) will converge to the *measure-valued isotropic weak solution* to Eq.(Fokker-Planck-Landau). Here *isotropic solution* means that the solution $f(t,v)$ is a radial function with respect to $v$, that is, $f(t,v)=f(t,|v|)$. To achieve these results, the main idea is to reformulate the equations in the isotropic setting and then make full use of the cancellation hidden in the cubic terms.

## Difficulties, strategies and new ideas

The main difficulty is induced by the singular scaling factor in the Uehling-Uhlenbeck operator [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"}. One may attempt to use some normalization technique to deal with the parameter $\epsilon$. For instance, if $\tilde{f}(t,v) \colonequals \epsilon^3 f(\epsilon^3t,\epsilon v)$, one can easily verify that $\tilde{f}$ is a solution to the following equation: $$\begin{aligned} \label{BUU1} \partial_t \tilde{f} = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{1}(|v-v_*|,\cos\theta) \big(\tilde{f}_*'\tilde{f}'(1\pm \tilde{f}_*) (1\pm \tilde{f}) -\tilde{f}_*\tilde{f}(1\pm \tilde{f}_*')(1\pm \tilde{f}')\big)\mathrm{d}\sigma \mathrm{d}v_*. \end{aligned}$$ Now [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} is reduced to [\[BUU1\]](#BUU1){reference-type="eqref" reference="BUU1"} with the initial data $\tilde{f}|_{t=0}\colonequals \epsilon^3 f_0(\epsilon v)$. The good side is that the equation [\[BUU1\]](#BUU1){reference-type="eqref" reference="BUU1"} itself contains no $\epsilon$. The bad side, however, is that the initial data $\tilde{f}|_{t=0}$ becomes very large in weighted Sobolev spaces when $\epsilon$ is small. It is challenging to establish a uniform lifespan for nonlinear equations like [\[BUU1\]](#BUU1){reference-type="eqref" reference="BUU1"} starting from arbitrarily large initial data. Usually, the lifespan vanishes as the initial data blows up. Therefore, we will directly consider [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"}. Let us explain our strategy, starting from the analysis of the collision operator. It is easy to see that we can decompose $Q_{UU}^{\epsilon}$ into two parts: $$\begin{aligned} \label{OPUU} Q_{UU}^{\epsilon}(f) = Q(f,f)+R(f,f,f), \end{aligned}$$ where $Q(f,f)$ contains the quadratic terms and $R(f,f,f)$ contains the cubic terms.
More precisely, $$\begin{aligned} \label{OPQ} Q(g,h)\colonequals\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}(|v-v_*|, \cos\theta) (g_*'h'-g_*h)\mathrm{d}\sigma \mathrm{d}v_* = \sum_{i=1}^3 Q_i(g,h); \\\label{OPR} R(g,h,\rho)\colonequals \pm\epsilon^3\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}(|v-v_*|, \cos\theta) ( g_*'h'(\rho+\rho_*)- g_*h(\rho'+\rho_*'))\mathrm{d}\sigma \mathrm{d}v_*.\end{aligned}$$ Here in [\[OPQ\]](#OPQ){reference-type="eqref" reference="OPQ"} for $i=1,2,3$, $Q_{i}$ is defined by $$\begin{aligned} %\label{OPQi} Q_{i}(g,h) \colonequals \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}_i(|v-v_*|, \cos\theta) (g_*'h'-g_*h)\mathrm{d}\sigma \mathrm{d}v_*, \end{aligned}$$ where $B^{\epsilon}_{i}$ is defined by $$\begin{aligned} \label{DefB1} B^{\epsilon}_{1}(|v-v_*|,\cos\theta) &\colonequals& \epsilon^{-4} |v-v_*|\hat{\phi}^2\left(\epsilon^{-1}|v-v_*|\sin (\theta/2)\right), \\ \label{DefB2} B^{\epsilon}_{2}(|v-v_*|,\cos\theta) &\colonequals& \pm 2\epsilon^{-4} |v-v_*|\hat{\phi}\left(\epsilon^{-1}|v-v_*|\sin (\theta/2)\right)\hat{\phi} \left(\epsilon^{-1}|v-v_*|\cos(\theta/2) \right), \\ \label{DefB3} B^{\epsilon}_{3}(|v-v_*|,\cos\theta) &\colonequals& \epsilon^{-4} |v-v_*|\hat{\phi}^2\left(\epsilon^{-1}|v-v_*|\cos (\theta/2)\right). \end{aligned}$$ **Remark 7**. *Note that $B^{\epsilon} = B^{\epsilon}_1+B^{\epsilon}_2+B^{\epsilon}_3$ and $|B^{\epsilon}_2| = 2 \sqrt{B^{\epsilon}_1 B^{\epsilon}_3}$. The divergence in $B^{\epsilon}_1$ has a physical origin: the intensity of the collisions increases together with the effective domain of $\hat{\phi}^2(\cdot)$ due to the vanishing of the scattering angle. More precisely, the argument $\epsilon^{-1}|v-v_*|\sin (\theta/2) \in [0, \epsilon^{-1}|v-v_*|\sqrt{2}/2]$ covers the dominant part of $\hat{\phi}^2(\cdot)$ as $\epsilon\to 0$. For $B^{\epsilon}_3$, on the other hand, the argument $\epsilon^{-1}|v-v_*|\cos (\theta/2) \in [\epsilon^{-1}|v-v_*|\sqrt{2}/2, \epsilon^{-1}|v-v_*|]$ goes to infinity and plays a minor role as $\epsilon\to 0$ because of the integrability conditions **(A1)** and **(A2)** on $\hat{\phi}^2$. In a word, in the limiting process $\epsilon\to 0$, $B^{\epsilon}_1$ is the dominant part. If $B^{\epsilon}$ were replaced by $B^{\epsilon}_1$ and the cubic terms were ignored, we would expect the same limiting behavior. See also Remark 1.1 in [@HLP1] for another treatment of the kernel and relevant discussions.* In what follows, we will show that $B^{\epsilon}_1$ and $B^{\epsilon}_3$ should be treated in different ways. Roughly speaking, we will treat $B^{\epsilon}_1$ from the non-cutoff view and $B^{\epsilon}_3$ from the cutoff view. This can easily be seen from the following computations. [(i).]{.ul} It is not difficult to derive that $$\int_{\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}_{3}(|v-v_*|,\cos\theta) \mathrm{d}\sigma\sim I_3|v-v_*|^{-3},$$ where we use the fact that $\cos(\theta/2)\sim1$ and the change of variables from $\cos(\theta/2)$ to $r\colonequals\epsilon^{-1}|v-v_*|\cos (\theta/2)$. The strong singularity $|v-v_*|^{-3}$ in the relative velocity is consistent with the Landau collision operator [\[OPQL\]](#OPQL){reference-type="eqref" reference="OPQL"}. The singularity can be removed by sacrificing a little regularity of the solution.
[(ii).]{.ul} Again, by a similar calculation, we have $$\int_{\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon}_{1}(|v-v_*|,\cos\theta) \mathrm{d}\sigma\sim \epsilon^{-2}I_1|v-v_*|^{-1}.$$ To kill the singular factor $\epsilon^{-2}$, we resort to the momentum transfer [\[MometumTrans\]](#MometumTrans){reference-type="eqref" reference="MometumTrans"}. Technically, if we expand $f(v')-f(v)$ by Taylor expansion up to second order, then we arrive at (see [\[order-2-cancelation-B1\]](#order-2-cancelation-B1){reference-type="eqref" reference="order-2-cancelation-B1"} for details) $$\begin{aligned} \langle Q_1(g,h),f\rangle&\sim& \int B_1^\epsilon(|v-v_*|,\cos\theta)g_*h\bigg((v'-v)\cdot \nabla_vf+(v'-v)\otimes(v'-v):\nabla_v^2f\bigg)\mathrm{d}\sigma dv_*dv \\&\sim & I_3\int |g_*h|(|\nabla f||v-v_*|^{-2}+|\nabla^2 f||v-v_*|^{-1})dv_*dv. \end{aligned}$$ On the one hand, this suggests that if we aim to obtain a uniform-in-$\epsilon$ estimate, we should deal with $Q_1$ from the non-cutoff view. On the other hand, this approach results in a loss of derivatives, particularly for the solution. Now we are in a position to state our main strategy and the key ideas to overcome the difficulties. The strategy can be outlined in three steps. [*Step 1.*]{.ul} We construct a local mild solution in the space $L^1\cap L^\infty$ via the contraction mapping theorem. Here the lifespan $T_*$ depends heavily on the parameter $\epsilon$ since we deal with [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} from the angular cutoff view. For Fermi-Dirac particles, we obtain the propagation of the $L^\infty$ upper bound $f(t)\le \epsilon^{-3}$ for any $t\in[0,T_*]$. [*Step 2.*]{.ul} We prove the propagation of regularity uniformly in $\epsilon$ in weighted Sobolev spaces. This is motivated by the convergence [\[asyOP\]](#asyOP){reference-type="eqref" reference="asyOP"}. We expect that the Uehling-Uhlenbeck operator [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"} will behave like a diffusive operator when $\epsilon$ is sufficiently small. Thus an $L^2$ framework for proving the propagation of regularity is reasonable. To implement the main idea, we develop some tools as follows. $\bullet$[ Explicit formula for the change of variable.]{.ul} As stated before, to kill the singularity, we will use the Taylor expansion of $f(v')-f(v)$ up to second order. Technically, we will encounter the intermediate points $\kappa(v)=\kappa v'+(1-\kappa)v, \iota(v_*)=\iota v'_*+(1-\iota)v_*$ where $\kappa, \iota \in[0,1]$. As a result, the changes of variables $v \to \kappa(v)$ and $v_{*} \to \iota(v_*)$ are unavoidable. Since our kernel is not of the factorized form $\Phi(|v-v_{*}|) b(\cos\theta)$ in the relative velocity and the deviation angle, a rough treatment is not enough. For this reason, we explicitly compute the changes of variables $v \to \kappa(v)$ and $v_{*} \to \iota(v_*)$ in Lemma [Lemma 2](#usual-change){reference-type="ref" reference="usual-change"} and use them carefully in Lemma [Lemma 4](#UPQ3R){reference-type="ref" reference="UPQ3R"} for $B^{\epsilon}_{3}$ and Lemma [Lemma 7](#Technical-lemma){reference-type="ref" reference="Technical-lemma"} for $B^{\epsilon}_{1}$. $\bullet$[ Coercivity estimate and the cancellation lemma.]{.ul} As explained before, the operator $Q_1$ is expected to produce dissipation.
Indeed, if $f \geq 0$, $$\begin{aligned} \nonumber && \langle Q_1(f, \partial^{\alpha}f) , \partial^\alpha f \rangle \\ \label{into-two-terms} &=& - \frac 12 \int B^{\epsilon}_1 f_* ((\partial^\alpha f)' - (\partial^\alpha f))^2 \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma + \frac 12 \int B^{\epsilon}_1 f_*(((\partial^\alpha f)^2)' - (\partial^\alpha f)^2) \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma \\ \label{non-negativity-solution} & \leq & \frac 12 \int B^{\epsilon}_1 f_*(((\partial^\alpha f)^2)' - (\partial^\alpha f)^2) \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma. \end{aligned}$$ The first term in [\[into-two-terms\]](#into-two-terms){reference-type="eqref" reference="into-two-terms"} is non-positive and corresponds to the coercivity of the operator. Unfortunately, since we consider a general interaction potential, we are unable to obtain an explicit description of the coercivity, which is closely related to that of the Landau collision operator. As a result, we only make use of its sign. To treat the second term in [\[into-two-terms\]](#into-two-terms){reference-type="eqref" reference="into-two-terms"}, we establish the cancellation lemma (see Lemma [Lemma 6](#cancellem){reference-type="ref" reference="cancellem"} for details) to balance the loss of derivatives. $\bullet$[ Estimate the collision operators from the cutoff and non-cutoff perspectives.]{.ul} To estimate the collision operator $Q_{UU}$ in weighted Sobolev spaces $H^{N}_{l}$, we first point out that $Q_1, Q_2, Q_3$ and $R$ behave quite differently and each of them has its own difficulties. Moreover, since the kernels $B^{\epsilon}_{i}$ (where $i=1,2,3$) cannot be expressed in the product form $\Phi(|v-v_{*}|) b(\cos\theta)$, this leads to numerous technical difficulties in the analysis. Our main approach is based on combining two distinct perspectives: the cutoff view and the non-cutoff view. These enable us to balance the regularity to get the uniform-in-$\epsilon$ estimates. We refer readers to Sect. 2 and Sect. 3 for details. $\bullet$[ Integration by parts formulas for the penultimate order terms.]{.ul} Since we have no explicit description of the dissipation mechanism, the main obstruction to proving the propagation of regularity uniformly in $\epsilon$ lies in the estimates for the penultimate order terms. To bound the penultimate order terms, we have to sacrifice regularity to kill the singular factor. To balance the regularity, we borrow the idea from [@chaturvedi2021stability] to establish the integration by parts formulas (see Lemma [Lemma 16](#integration-by-parts-formula){reference-type="ref" reference="integration-by-parts-formula"} for details). [*Step 3.*]{.ul} By the Sobolev embedding theorem, the uniform-in-$\epsilon$ estimate implies that the $L^\infty$ norm of the solution is controlled by the Sobolev norm of the initial data. This in particular enables a continuity argument to extend the lifespan $T_*$ to an $O(1)$ time which is independent of $\epsilon$.

## Organization of the paper

Sections 2 and 3 aim to obtain a precise energy estimate of $Q_{UU}^{\epsilon}$ in the space $H^{N}_{l}$ through a comprehensive analysis of the bi-linear operators $Q_1, Q_2, Q_3$, and the tri-linear operator $R$. In Section 4, we prove the results in Theorems [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"} and [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"} that hold uniformly in $\epsilon$.
Finally, Section 5 contains the proof of Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. # Analysis of Uehling-Uhlenbeck operator In this section, we will examine the upper bounds of $Q$ and $R$, and investigate the commutator estimates between these operators and the weight function $W_{l}$. The operators will be considered from the perspective of both angular cutoff view and angular non-cutoff view. ## Some elementary facts In this subsection, we will introduce some fundamental formulas that are commonly employed in the analysis of the Boltzmann operator. These formulas are particularly useful for studying the Boltzmann operator from the perspective of angular non-cutoff view. ### Taylor expansion When evaluating the difference $f'-f$ (or $f'_*-f_*$) before and after collision, various Taylor expansions are often used. We first introduce the order-1 expansion as $$\begin{aligned} \label{Taylor1-order-1} f'-f=\int_0^1 (\nabla f)(\kappa(v)) \cdot (v'-v)\mathrm{d}\kappa, \quad f'_*-f_*= \int_0^1 (\nabla f)(\iota(v_*)) \cdot (v'_*-v_*)\mathrm{d}\iota, \end{aligned}$$ where for $\kappa, \iota \in[0,1]$, the intermediate points are defined as $$\begin{aligned} \label{Defkappav} \kappa(v)=\kappa v'+(1-\kappa)v, \quad \iota(v_*)=\iota v'_*+(1-\iota)v_*. \end{aligned}$$ Observing that $|v'-v| = |v'_*-v_*| = |v-v_*| \sin\frac{\theta}{2}$, we have $$f'-f\sim C(\nabla f) \theta; \quad f'_*-f_*\sim C(\nabla f) \theta.$$ As we emphasize in the introduction, expansion of $f'-f$ up to the second order is compulsory. We have $$\begin{aligned} &&\label{Taylor1} f'-f=(\nabla f)(v)\cdot(v'-v)+\int_0^1(1-\kappa) (\nabla^2 f)(\kappa(v)):(v'-v)\otimes(v'-v)\mathrm{d}\kappa; \\ &&\label{Taylor2} f'-f=(\nabla f)(v')\cdot(v'-v)-\int_0^1 \kappa(\nabla^2 f)(\kappa(v)):(v'-v)\otimes(v'-v)\mathrm{d}\kappa. \end{aligned}$$ Thanks to the symmetry property of $\sigma$-integral, the first terms in the formulas can be computed as follows $$\begin{aligned} \label{cancell1} \int B(|v-v_*|, \frac{v-v_*}{|v-v_*|}\cdot\sigma) (v'-v) \mathrm{d}\sigma = \int B(|v-v_*|, \frac{v-v_*}{|v-v_*|}\cdot\sigma) \sin^{2}\frac{\theta}{2} (v_* - v) \mathrm{d}\sigma, \\ \label{cancell2} \int B(|v-v_*|,\frac{v-v_*}{|v-v_*|} \cdot \sigma) (v'-v) h(v') \mathrm{d}\sigma \mathrm{d}v =0. \end{aligned}$$ We remark that the formula [\[cancell1\]](#cancell1){reference-type="eqref" reference="cancell1"} holds for fixed $v, v_*$ and [\[cancell2\]](#cancell2){reference-type="eqref" reference="cancell2"} holds for fixed $v_*$. Therefore, [\[Taylor1\]](#Taylor1){reference-type="eqref" reference="Taylor1"} and [\[cancell1\]](#cancell1){reference-type="eqref" reference="cancell1"} lead to $O(\theta^2)$ for the quantity $\int B g_*h(f'-f)\mathrm{d}\sigma \mathrm{d}v_* \mathrm{d}v$; so do [\[Taylor2\]](#Taylor2){reference-type="eqref" reference="Taylor2"} and [\[cancell2\]](#cancell2){reference-type="eqref" reference="cancell2"} for $\int B g_*h'(f'-f)\mathrm{d}\sigma \mathrm{d}v_* \mathrm{d}v$. ### Momentum transfer We claim that the kernels $B^{\epsilon}_{i}$ defined in [\[DefB1\]](#DefB1){reference-type="eqref" reference="DefB1"}-[\[DefB3\]](#DefB3){reference-type="eqref" reference="DefB3"} satisfy the estimate: $$\begin{aligned} \label{order-2-cancellation} \int B^{\epsilon} \sin^{2} \frac{\theta}{2} \mathrm{d}\sigma \leq \int (B^{\epsilon}_{1} + |B^{\epsilon}_2| + B^{\epsilon}_3) \sin^{2} \frac{\theta}{2} \mathrm{d}\sigma \lesssim I_{3} |v-v_{*}|^{-3}. 
\end{aligned}$$ Indeed, for $B^{\epsilon}_1$, using the change of variable $r = \epsilon^{-1} |v-v_*| \sin(\theta/2)$, we have $$\begin{aligned} \label{order-2-cancelation-B1} \int B^{\epsilon}_1 \sin^{2} \frac{\theta}{2} \mathrm{d}\sigma = 8 \pi \int_{0}^{\pi/2} \epsilon^{-4} |v-v_*| \sin^3(\theta/2) \hat{\phi}^2\left( \epsilon^{-1} |v-v_*| \sin(\theta/2) \right) \mathrm{d}\sin(\theta/2) \\ \nonumber = 8 \pi \int_{0}^{2^{-1/2}} \epsilon^{-4} \hat{\phi}^2 \left( \epsilon^{-1} |v-v_*| t \right) t^{3} \mathrm{d}t = 8 \pi |v-v_*|^{-3} \int_{0}^{2^{-1/2} \epsilon^{-1} |v-v_*|} \hat{\phi}^2 (r) r^{3} \mathrm{d}r \leq 8 \pi I_{3} |v-v_*|^{-3}. \end{aligned}$$ For $B^{\epsilon}_3$, using the fact $\sqrt{2}/2 \leq \cos \frac{\theta}{2}$ for $0 \leq \theta \leq \pi/2$ and the change of variable $r = \epsilon^{-1} |v-v_*| \cos(\theta/2)$, we can similarly get that $\int B^{\epsilon}_3 \sin^{2} \frac{\theta}{2} \mathrm{d}\sigma \leq 8 \pi I_{3} |v-v_*|^{-3}$. For $B^{\epsilon}_2$, the desired result follows the fact that $|B^{\epsilon}_2| \leq B^{\epsilon}_1 + B^{\epsilon}_3$. ### Estimates for the Riesz potentials We list the following lemma without proof. **Lemma 1**. *It holds that $$\begin{aligned} \label{minus-1} \int |g_* h f||v-v_*|^{-1} \mathrm{d}v \mathrm{d}v_* \lesssim \|g\|_{L^1 \cap L^2} \|h\|_{L^2} \|f\|_{L^2}. \end{aligned}$$ Let $\delta>0, s_{1}, s_2, s_3 \geq 0, s_{1}+ s_2+ s_3 = \frac 12 + \delta$, then $$\begin{aligned} \label{minus-2} \int |g_* h f||v-v_*|^{-2} \mathrm{d}v \mathrm{d}v_* \lesssim_{\delta} \|g\|_{H^{s_1}} \|h\|_{H^{s_2}} \|f\|_{H^{s_3}}. \end{aligned}$$ Let $\delta>0, s_{1}, s_2 \geq 0, s_{1}+ s_2 = \frac 12 + \delta$, then $$\begin{aligned} \label{2-2-minus-1-s1-s2} \int |v-v_{*}|^{-1} |g_*|^{2} |h|^{2} \mathrm{d}v \mathrm{d}v_{*} \lesssim_{\delta} \|g\|_{H^{s_1}}^2 \|h\|_{H^{s_2}}^2. \end{aligned}$$* ## A change of variable In order to deal with the intermediate variables $\kappa(v)$ and $\iota(v_*)$ defined in [\[Defkappav\]](#Defkappav){reference-type="eqref" reference="Defkappav"}, we derive a useful formula involving the change of variable $v \to \kappa(v)$ and $v_{*} \to \iota(v_*)$. It is quite important for the estimates of the integrals involving the kernels $B^{\epsilon}_i(i=1,2,3)$. **Lemma 2**. *For $\kappa \in [0, 2]$, let us define $$\begin{aligned} \label{general-Jacobean} \psi_{\kappa}(\theta) \colonequals (\cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac{\theta}{2})^{-1/2}. \end{aligned}$$ For any $0 \leq \kappa \leq 1, v_{*} \in \mathop{\mathbb R\kern 0pt}\nolimits^{3}$, it holds that $$\begin{aligned} \label{change-of-variable} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} B(|v-v_{*}|, \cos\theta) f(\kappa(v)) \mathrm{d}v \mathrm{d}\sigma = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} B(|v-v_{*}|\psi_{\kappa}(\theta), \cos\theta) f(v) \psi_{\kappa}^{3}(\theta) \mathrm{d}v \mathrm{d}\sigma. 
\end{aligned}$$ Here $\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}\colonequals\{\sigma\in\mathop{\mathbb S\kern 0pt}\nolimits^2|(v-v_{*}) \cdot \sigma \geq 0\}$. For any $0 \leq \kappa, \iota \leq 1$, it holds that $$\begin{aligned} \label{change-of-variable-2} && \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} B(|v-v_{*}|, \cos\theta) g(\iota(v_{*})) f(\kappa(v)) \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma \\ \nonumber &=& \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} B(|v-v_{*}|\psi_{\kappa+\iota}(\theta), \cos\theta) g(v_{*}) f(v) \psi_{\kappa+\iota}^{3}(\theta) \mathrm{d}v \mathrm{d}v_* \mathrm{d}\sigma . \end{aligned}$$* *Proof.* Recalling [\[Defkappav\]](#Defkappav){reference-type="eqref" reference="Defkappav"}, we set $\cos\beta_{\kappa}\colonequals\sigma\cdot (\kappa(v)-v_{*})/|\kappa(v)-v_{*}|$. To express $\beta_{\kappa}$ in terms of $\theta$, we notice that $\kappa(v) - v_{*} = v^{\prime}-v_{*} + (\kappa-1)(v^{\prime}-v),$ which implies that $$\begin{aligned} |\kappa(v) - v_{*}|^{2} = |v^{\prime}-v_{*}|^{2} + (\kappa-1)^{2}|v^{\prime}-v|^{2} = (\cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac{\theta}{2})|v-v_{*}|^{2}=\psi_{\kappa}^{-2}(\theta)|v-v_{*}|^{2}.\end{aligned}$$ From this, together with the fact that $\left( \kappa(v) - v_{*} \right) \cdot \sigma = \left( \cos^{2}\frac{\theta}{2}+(\kappa-1)\sin^{2}\frac{\theta}{2} \right)|v-v_{*}|$, we have $$\cos \beta_{\kappa} = \frac{ \cos^{2}\frac{\theta}{2}+(\kappa-1)\sin^{2}\frac{\theta}{2}} {\left( \cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac{\theta}{2} \right)^{1/2} } = \varphi_{\kappa}(\sin\frac{\theta}{2}),$$ where $\varphi_{\kappa}(x) = \frac{ 1 - x^{2}+(\kappa-1)x^2} {\left( 1 - x^{2} +(1-\kappa)^{2}x^{2} \right)^{1/2} }.$ The above relation yields that the map $\theta \in [0, \frac{\pi}{2}] \to \beta_{\kappa} \in [0, \delta_{\kappa}]$ is a bijection, where $\delta_{\kappa} \colonequals \arccos( \frac{\sqrt{2}} {2} \frac{ \kappa } {\sqrt{1 + (1-\kappa)^{2} } })$. Now we are in a position to prove [\[change-of-variable\]](#change-of-variable){reference-type="eqref" reference="change-of-variable"}.
By the fact that $$\begin{aligned} \label{Jacobi} \det (\frac{\partial u}{\partial v}) = (1-\frac{\kappa}{2})^2 \left( (1-\frac{\kappa}{2})+\frac{\kappa}{2} \cos\theta \right) \colonequals \alpha_{\kappa}(\theta), \end{aligned}$$ we get that $$\begin{aligned} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} B(|v-v_{*}|, \cos\theta) f(\kappa(v)) \mathrm{d}v \mathrm{d}\sigma = 2 \pi \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{3}} \int_{0}^{\delta_{\kappa}} B(|v-v_{*}|\psi_{\kappa}(\theta), \cos\theta) f(v) \alpha_{\kappa}^{-1}(\theta) \sin \beta_{\kappa} \mathrm{d}v \mathrm{d}\beta_{\kappa}.\end{aligned}$$ Then the desired result follows the computation $$\sin \beta_{\kappa} \mathrm{d}\beta_{\kappa} = - \mathrm{d} \cos \beta_{\kappa} = -\frac{1}{4} \varphi^{\prime}_{\kappa}(\sin\frac{\theta}{2}) \sin^{-1}\frac{\theta}{2} \sin\theta \mathrm{d}\theta,\quad -\frac{1}{4} \varphi^{\prime}_{\kappa}(\sin\frac{\theta}{2}) \sin^{-1}\frac{\theta}{2} \alpha_{\kappa}^{-1}(\theta) = \psi_{\kappa}^{3}(\theta).$$ As for [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"}, the case $\kappa = \iota =1$ is obviously given by the natural change of variable $(v, v_*, \sigma) \to (v', v'_*, \sigma')$ where $\sigma' = (v - v_*)/|v - v_*|$. If $\kappa + \iota < 2$, we can similarly repeat the above derivation with $\kappa$ replaced by $\kappa + \iota$. Indeed, one can derive $$\begin{aligned} \det (\frac{\partial (\kappa(v), \iota(v_*))}{\partial (v, v_*)}) = \alpha_{\kappa+\iota}(\theta), \quad |v-v_{*}| = |\kappa(v) - \iota(v_*)| \psi_{\kappa+\iota}(\theta). \end{aligned}$$ Let $\beta_{\kappa+\iota}$ be the angle between $\kappa(v) - \iota(v_*)$ and $\sigma$, then $\cos \beta_{\kappa+\iota} = \varphi_{\kappa+\iota}(\sin\frac{\theta}{2})$. If $\kappa + \iota < 2$, then $\delta_{\kappa+\iota}>0$ and the function: $\theta \in [0, \frac{\pi}{2}] \to \beta_{\kappa+\iota} \in [0,\delta_{\kappa + \iota}]$ is a bijection. These facts are enough to obtain [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} for $\kappa + \iota < 2$. ◻ ## Integrals involving $B^{\epsilon}_3$ We first derive the upper bound of the integrals involving $B^{\epsilon}_3$ from the cutoff perspective. We remark that in this situation the estimates depend on $\epsilon$. **Lemma 3**. *Let $a \geq 0$, then $$\begin{aligned} \label{B-3-a-order-velocity} \int B^{\epsilon}_3 |v-v_*|^a |g_* h| \mathrm{d}V \leq 8 \pi (\sqrt{2})^{(a-1)_{+}} \epsilon^{a-3} I_a \|g\|_{L^1} \|h\|_{L^1}. 
\end{aligned}$$* *Proof.* As $\sqrt{2}/2 \leq \cos (\theta/2) \leq 1$, we have $$\begin{aligned} \quad \label{B3-integral-independent-of-v} &&\int |z|^{a} B^{\epsilon}_3(z, \sigma) \mathrm{d}\sigma = 8 \pi \epsilon^{-4}|z|^{a+1} \int_0^{\pi/2} \hat{\phi}^2(\epsilon^{-1}|z|\cos \frac{\theta}{2}) \cos \frac{\theta}{2}\mathrm{d}\cos \frac{\theta}{2} \\ \nonumber &&\leq 8 \pi (\sqrt{2})^{(a-1)_{+}}\epsilon^{-4}|z|^{a+1} \int_0^{\pi/2} \hat{\phi}^2(\epsilon^{-1}|z|\cos \frac{\theta}{2}) \cos^a \frac{\theta}{2}\mathrm{d}\cos \frac{\theta}{2} = 8 \pi (\sqrt{2})^{(a-1)_{+}} \epsilon^{a-3} \int_{\epsilon^{-1}|z|/\sqrt{2}}^{\epsilon^{-1}|z|} \hat{\phi}^2(t) t^a \mathrm{d}t \\ \nonumber &&\leq 8 \pi (\sqrt{2})^{(a-1)_{+}} \epsilon^{a-3} I_a \lesssim \epsilon^{a-3} \int_0^{\infty} \hat{\phi}^2(r) r^a \mathrm{d}r, \end{aligned}$$ which implies [\[B-3-a-order-velocity\]](#B-3-a-order-velocity){reference-type="eqref" reference="B-3-a-order-velocity"}. ◻ By taking $a=0$ and replacing $\cos\frac{\theta}{2}$ by $\sin\frac{\theta}{2}$ in [\[B3-integral-independent-of-v\]](#B3-integral-independent-of-v){reference-type="eqref" reference="B3-integral-independent-of-v"}, we have $$\begin{aligned} \label{upper-bound-of-A-eps} A^{\epsilon} \colonequals \sup_{z \in \mathop{\mathbb R\kern 0pt}\nolimits^3} \int B^{\epsilon}(z, \sigma) \mathrm{d}\sigma \leq 2\sup_{z \in \mathop{\mathbb R\kern 0pt}\nolimits^3} \int B^{\epsilon}_1(z, \sigma) \mathrm{d}\sigma + 2\sup_{z \in \mathop{\mathbb R\kern 0pt}\nolimits^3} \int B^{\epsilon}_3(z, \sigma) \mathrm{d}\sigma \lesssim \epsilon^{-3} I_{0}. \end{aligned}$$ The above inequality shows that the $L^{\infty}$-norm of $\int B^{\epsilon}_1(\cdot, \sigma) \mathrm{d}\sigma$ and $\int B^{\epsilon}_3(\cdot, \sigma) \mathrm{d}\sigma$ is bounded by $\epsilon^{-3} I_{0}$ which tends to $\infty$ as $\epsilon\to 0$. Considering the $L^{1}$-norm of $\int B^{\epsilon}_3(\cdot, \sigma) \mathrm{d}\sigma$, we find it is bounded uniformly in $\epsilon$ by the following computation: $$\begin{aligned} \label{bounded-l1-norm-simple}\qquad \iint B^{\epsilon}_3(z, \sigma) \mathrm{d}\sigma \mathrm{d}z = 4 \pi \int_{0}^{\infty} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \epsilon^{-4}r^{3} \hat{\phi}^2(\epsilon^{-1}r \cos(\theta/2)) \mathrm{d}r \mathrm{d}\sigma =4 \pi I_{3} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \cos^{-4}(\theta/2) \mathrm{d}\sigma = 16\pi^{2} I_{3}. \end{aligned}$$ Here $\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}$ stands for $0 \leq \theta \leq \pi/2$. Based on the above uniform $L^{1}$ upper bound, we can easily obtain uniform-in-$\epsilon$ estimates for various integrals involving $B^{\epsilon}_3$ with the change of variables in [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"}. **Lemma 4**. *Fix $\kappa \in [0,1]$, either $u = \kappa(v_{*})$ or $u = \kappa(v)$. Then $$\begin{aligned} \label{general-L1} \int B^{\epsilon}_3 |g(u)| \mathrm{d}V \lesssim I_{3} \|g\|_{L^1}. \end{aligned}$$ As a direct result, fix an integer $k \geq 2$ and $\iota_{i}, \kappa_{i} \in [0,1]$ for $1 \leq i \leq k$, let $u_i \in \{\iota_{i}(v_{*}), \kappa_{i}(v) : 1\leq i \leq k\}$ for $1 \leq i \leq k$, then $$\begin{aligned} \label{general-k-functions-infty-norm} \int B^{\epsilon}_3 \prod_{i=1}^{k} |f_i(u_i)| \mathrm{d}V \lesssim I_{3} \prod_{i=1}^{k} \|f_i\|_{X_i}, \end{aligned}$$ where two of $X_i$ are taken by $L^2$-norm and the others are taken by $L^\infty$-norm. 
Let $0 \leq s_i < \frac 32$ for $1 \leq i \leq k$ and $\sum_{i=1}^{k} s_i = \frac{3k}{2} - 3$, then $$\begin{aligned} \label{general-k-functions-h-norm} \int B^{\epsilon}_3 \prod_{i=1}^{k} |f_i(u_i)| \mathrm{d}V \lesssim_{s_1, \cdots, s_k} I_{3} \prod_{i=1}^{k} \|f_i\|_{H^{s_i}}. \end{aligned}$$* *Proof.* Applying [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"}, we have $\int B^{\epsilon}_3 |g(u)| \mathrm{d}V = \int J_{\kappa, \epsilon}(v-v_{*}) |g(v)| \mathrm{d}v \mathrm{d}v_{*}$, where $$\begin{aligned} \label{def-J-kappa-eps} J_{\kappa, \epsilon}(z) \colonequals \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \epsilon^{-4}|z|\psi^{4}_{\kappa}(\theta) \hat{\phi}^2(\epsilon^{-1}|z|\psi_{\kappa}(\theta)\cos(\theta/2)) \mathrm{d}\sigma. \end{aligned}$$ Similarly to [\[bounded-l1-norm-simple\]](#bounded-l1-norm-simple){reference-type="eqref" reference="bounded-l1-norm-simple"}, it is easy to see that the $L^{1}$-norm of $J_{\kappa, \epsilon}$ is bounded (uniformly in $\kappa, \epsilon$) as follows: $$\begin{aligned} \label{bounded-l1-norm} \|J_{\kappa, \epsilon}\|_{L^{1}} &=& 4 \pi \int_{0}^{\infty} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \epsilon^{-4}r^{3} \psi^4_{\kappa}(\theta) \hat{\phi}^2(\epsilon^{-1}r\psi_{\kappa}(\theta)\cos(\theta/2)) \mathrm{d}r \mathrm{d}\sigma \\ \nonumber &=& 4 \pi I_{3} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \cos^{-4}(\theta/2) \mathrm{d}\sigma = 16\pi^{2} I_{3}, \end{aligned}$$ which yields [\[general-L1\]](#general-L1){reference-type="eqref" reference="general-L1"}. As a direct result, [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"} follows easily from Hölder's inequality. To prove [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"}, for $2 \leq p_i < \infty$ and $\sum_{i=1}^{k} p_i^{-1} = 1$, we have $$\begin{aligned} \int B^{\epsilon}_3 \prod_{i=1}^{k} |f_i(u_i)| \mathrm{d}V \lesssim \prod_{i=1}^{k} \left(\int B^{\epsilon}_3 |f_i(u_i)|^{p_i} \mathrm{d}V \right)^{1/p_i} \lesssim I_{3} \prod_{i=1}^{k} \|f_i\|_{L^{p_i}} \lesssim_{s_1, \cdots, s_k} I_{3} \prod_{i=1}^{k} \|f_i\|_{H^{s_i}}, \end{aligned}$$ where $\frac{s_i}{3} = \frac{1}{2} - \frac{1}{p_i}$ thanks to the Sobolev embedding theorem. ◻ With the estimates in Lemma [Lemma 4](#UPQ3R){reference-type="ref" reference="UPQ3R"}, we derive upper bounds of $Q_3$ in weighted Sobolev spaces. **Proposition 1**. *Let $l \geq 0, \delta>0$. For $0 \leq s_1, s_2, s_3$ with $s_1 + s_2 + s_3= \frac{3}{2} + \delta$, $$\begin{aligned} \label{Q-3-upper-bound} |\langle Q_3(g,h), W_{l}f \rangle| \lesssim_{l, \delta} I_{3} \|W_{l}g\|_{H^{s_1}} \|W_{l}h\|_{H^{s_2}} \|f\|_{H^{s_3}}. \end{aligned}$$* *Proof.* For $l \geq 0$ and $0 \leq \iota_{1}, \kappa_{1}, \iota_{2}, \kappa_{2} \leq 1$, it is easy to check that $$\begin{aligned} \label{weight-formula-1} W_l(\kappa_1(v)) + W_{l}(\iota_1(v_*)) \lesssim_l W_l(\kappa_2(v)) + W_{l}(\iota_2(v_*)). \end{aligned}$$ As a result, we get that $$\begin{aligned} \label{direct-weighted} |\langle Q_3(g,h), W_{l}f \rangle| \lesssim_{l} \int B^{\epsilon}_3 (|(W_{l}g)'_* (W_{l}h)'| + |(W_{l}g)_* W_{l}h|) |f| \mathrm{d}V. \end{aligned}$$ Then the desired result follows from [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"}.
◻ By taking $\delta = \frac 12$ in Proposition [Proposition 1](#Q-3){reference-type="ref" reference="Q-3"}, we easily close the energy estimate for $Q_3$ in $H^{N}_{l}$. **Lemma 5**. *Let $l \geq 0, N \geq 2$ and $m = |\alpha| \le N$. Then $$\begin{aligned} \sum_{\alpha_1+\alpha_2 = \alpha}|\langle Q_3(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{N,l} I_{3} \|g\|_{H^{N}_{l}} \|f\|_{H^{N}_{l}}^2. \end{aligned}$$* *Proof.* If $|\alpha_1| \ge 2$, then $|\alpha_2| \leq m -2$. Take $\delta = \frac 12$ in Proposition [Proposition 1](#Q-3){reference-type="ref" reference="Q-3"} and $s_1 = s_3=0$, $s_2 = 2$ in [\[Q-3-upper-bound\]](#Q-3-upper-bound){reference-type="eqref" reference="Q-3-upper-bound"} to get $$\begin{aligned} |\langle Q_3(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{l} I_{3} \|\partial^{\alpha_1}g\|_{L^{2}_l} \|\partial^{\alpha_2}f\|_{H^{2}_l} \|\partial^\alpha f\|_{L^2_{l}} \lesssim_{l} I_{3} \|g\|_{H^{m}_l} \|f\|_{H^{m}_l}^2. \end{aligned}$$ If $|\alpha_1| = 1$, then $|\alpha_2| \leq m - 1$, and the desired result follows by taking $s_1 = s_2 = 1$ and $s_3 =0$ in [\[Q-3-upper-bound\]](#Q-3-upper-bound){reference-type="eqref" reference="Q-3-upper-bound"}. A similar argument can be applied to the case $|\alpha_1| = 0$. We complete the proof of the lemma. ◻ Relying on more regularity, we can get the weighted upper bound of $Q_3$ with a small factor $\epsilon^\vartheta$. Such estimates will be used in the last section to derive the asymptotic formula in Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. **Proposition 2**. *Let $\vartheta \in [0,1]$. Then $$\begin{aligned} \label{Q-3-vartheta} |\langle Q_3(g,h), W_l f \rangle| \lesssim \epsilon^\vartheta I_{3+\vartheta} \|W_l g\|_{H^{\frac 34 + \frac{\vartheta}{2}}} \|W_l h\|_{H^{\frac 34 + \frac{\vartheta}{2}}} \|f\|_{L^2}. \end{aligned}$$* *Proof.* Recalling [\[direct-weighted\]](#direct-weighted){reference-type="eqref" reference="direct-weighted"}, by Hölder's inequality and the change of variable [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} (with $\kappa =\iota =1$), we have $$|\langle Q_3(g,h), W_{l}f \rangle| \lesssim \left( \int B^{\epsilon}_3 |v-v_{*}|^{-\vartheta} |(W_{l}g)_*|^4 \mathrm{d}V\right)^{1/4} \left( \int B^{\epsilon}_3 |v-v_{*}|^{-\vartheta} |W_{l}h|^4 \mathrm{d}V\right)^{1/4} \left( \int B^{\epsilon}_3 |v-v_{*}|^{\vartheta} |f|^2 \mathrm{d}V\right)^{1/2}$$$$\le \| (J_{0,\epsilon} |\cdot|^{-\vartheta}) * (W_{l}^4g^4) \|_{L^1}^{1/4} \| (J_{0,\epsilon} |\cdot|^{-\vartheta}) * (W_{l}^4h^4) \|_{L^1}^{1/4} \| (J_{0,\epsilon} |\cdot|^{\vartheta}) * f^2 \|_{L^1}^{1/2},$$ where we use the notation [\[def-J-kappa-eps\]](#def-J-kappa-eps){reference-type="eqref" reference="def-J-kappa-eps"}.
Similarly to [\[bounded-l1-norm\]](#bounded-l1-norm){reference-type="eqref" reference="bounded-l1-norm"}, we derive that $$\begin{aligned} \label{L1-vartheta} \|J_{0, \epsilon}|\cdot|^{\vartheta}\|_{L^{1}} = 4 \pi \int_{0}^{\infty} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \epsilon^{-4}r^{3+\vartheta} \hat{\phi}^2(\epsilon^{-1}r\cos(\theta/2)) \mathrm{d}r \mathrm{d}\sigma \lesssim \epsilon^{\vartheta} I_{3+\vartheta}, \end{aligned}$$ from which, together with Hardy's inequality $\int |v-v_{*}|^{-2\vartheta}|F(v)|^4 \mathrm{d}v \lesssim \|F^2\|_{H^{\vartheta}}^2$, we get $$\begin{aligned} |\langle Q_3(g,h), W_{l}f \rangle| \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|W_{l}^2g^2 \|_{H^{\vartheta}}^{1/2} \|W_{l}^2h^2 \|_{H^{\vartheta}}^{1/2} \|f \|_{L^2}. \end{aligned}$$ Using the fact that $\|F^2\|_{H^{\vartheta}} \lesssim \|F\|_{H^{3/4+\vartheta/2}}^2$, we conclude the desired result [\[Q-3-vartheta\]](#Q-3-vartheta){reference-type="eqref" reference="Q-3-vartheta"}. ◻

## Cancellation Lemma

In this subsection, we prove the cancellation lemma for $Q_1$ and $Q_2$, which is used to transfer regularity from one function to another. **Lemma 6** (Cancellation Lemma). *Let $\delta>0$ and $a,b,c \geq 0$ satisfying $a+b+c =\frac 32+\delta$. For $i=1,2$ and functions $g, h, f$, we set $\mathcal{I}_i \colonequals \int B^{\epsilon}_i(|v-v_*|,\cos\theta)g_*\big((hf)'-hf\big)\mathrm{d}V.$ Then* *(i). For $\mathcal{I}_1$, it holds that $$\begin{aligned} \label{convolution-form} \mathcal{I}_1=\int (J_{\epsilon} * g)(v) h(v)f(v)\mathrm{d}v, \end{aligned}$$ where $J_{\epsilon}(u)= 8 \pi \int_{\frac{\sqrt{2}}2}^1 \epsilon^{-4} |u| \hat{\phi}^2 (\epsilon^{-1}|u|r)r \mathrm{d}r$ and $\|J_{\epsilon}\|_{L^1}= 16 \pi^{2} I_{3}$. As a result, we have $$\begin{aligned} \label{Q1-cancealltion-g-h-f} |\mathcal{I}_1| \lesssim_{\delta} I_{3} \|g\|_{H^{a}} \|h\|_{H^{b}}\|f\|_{H^{c}}. \end{aligned}$$ (ii). For $\mathcal{I}_2$, it holds that $$\begin{aligned} \label{convolution-form-B2} \mathcal{I}_2=\int (K_{\epsilon} * g)(v) h(v)f(v)\mathrm{d}v, \end{aligned}$$ where $K_{\epsilon}(u) = K_{\epsilon,1}(u) + K_{\epsilon,2}(u)$ with $$\begin{aligned} \label{kernel-K-1} K_{\epsilon,1}(u) &=& 16 \pi \int_0^{\frac{\sqrt{2}}2} \epsilon^{-4} |u| \hat{\phi}(\epsilon^{-1}|u|r) \left( \hat{\phi}(\epsilon^{-1}|u|) - \hat{\phi}(\epsilon^{-1}|u|\sqrt{1-r^2}) \right) r \mathrm{d}r, \\ \label{kernel-K-2} K_{\epsilon,2}(u) &=& 16 \pi \int_{\frac{\sqrt{2}}2}^{1} \epsilon^{-4} |u| \hat{\phi}(\epsilon^{-1}|u|r) \hat{\phi}(\epsilon^{-1}|u|) r \mathrm{d}r. \end{aligned}$$ Moreover, $\|K_{\epsilon}\|_{L^1} \leq 64 \pi^{2} (I_{3} + I^{\prime}_{3})$, which implies that $$\begin{aligned} \label{Q2-cancealltion-g-h-f} |\mathcal{I}_2| \lesssim_{\delta} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{a}} \|h\|_{H^{b}}\|f\|_{H^{c}}. \end{aligned}$$ In general, for $0 \leq \vartheta \leq 1$, $\|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^1} \lesssim \epsilon^{\vartheta} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta})$ and $$\begin{aligned} \label{eps-vartheta-B2-cancellation} |\mathcal{I}_2| \lesssim_{\delta} \epsilon^{\vartheta} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta}) \|g\|_{H^{\frac{3}{4} + \frac{\vartheta}{2} + \delta}} \|h\| _{H^{\frac{3}{4} + \frac{\vartheta}{2} + \delta}} \|f\|_{L^{2}}. \end{aligned}$$* *Proof.* We first prove the estimate of $\mathcal{I}_1$.
By [\[change-of-variable\]](#change-of-variable){reference-type="eqref" reference="change-of-variable"}, we have $$\begin{aligned} \mathcal{I}_1&=& 2 \pi \int \big(B^{\epsilon}_1(\frac{|v-v_*|}{\cos(\theta/2)},\cos\theta)(\cos(\theta/2))^{-3}-B^{\epsilon}_1(|v-v_*|,\cos\theta)\big) g_* hf \sin\theta \mathrm{d}\theta \mathrm{d}v_* \mathrm{d}v \\ &=& 8 \pi \int \int_0^{\frac{\sqrt{2}}2} \epsilon^{-4} |v-v_*| \big[\hat{\phi}^2(\epsilon^{-1}|v-v_*|\frac{r}{\sqrt{1-r^2}})(1-r^2)^{-2}-\hat{\phi}^2(\epsilon^{-1}|v-v_*|r)\big] g_* hf r \mathrm{d}r \mathrm{d}v_* \mathrm{d}v. \end{aligned}$$ By the change of variable $\mathcal{r}\colonequals \frac{r}{\sqrt{1-r^2}}$ which implies $(1-r^2)^{-2} r \mathrm{d}r = \mathcal{r} \mathrm{d}\mathcal{r}$, we get $$\begin{aligned} \mathcal{I}_1 = 8 \pi \int \int_{\frac{\sqrt{2}}2}^{1} \epsilon^{-4} |v-v_*| \hat{\phi}^2(\epsilon^{-1}|v-v_*|r) g_* hf r \mathrm{d}r \mathrm{d}v_* \mathrm{d}v, \end{aligned}$$ which is exactly [\[convolution-form\]](#convolution-form){reference-type="eqref" reference="convolution-form"}. Since $J_{\epsilon}$ is radial, let $\mathcal{r}\colonequals |u|$, $$\begin{aligned} \|J_{\epsilon}\|_{L^1} = 32 \pi^{2} \int_{0}^\infty \int_{\frac{\sqrt{2}}2}^1 \epsilon^{-4} \mathcal{r}^{3} \hat{\phi}^2 (\epsilon^{-1} \mathcal{r} r) r \mathrm{d}r \mathrm{d}\mathcal{r} =32 \pi^{2} \left(\int_{0}^\infty s^{3} \hat{\phi}^2 (s) \mathrm{d}s \right) \left( \int_{\frac{\sqrt{2}}2}^1 r^{-3} dr \right) = 16 \pi^{2} I_{3}. \end{aligned}$$ We turn to the estimate of $\mathcal{I}_2$. Following the same argument in the above, we can get the formula [\[convolution-form-B2\]](#convolution-form-B2){reference-type="eqref" reference="convolution-form-B2"} with $K_{\epsilon}$ as the sum of [\[kernel-K-1\]](#kernel-K-1){reference-type="eqref" reference="kernel-K-1"} and [\[kernel-K-2\]](#kernel-K-2){reference-type="eqref" reference="kernel-K-2"}. Let us compute the $L^{1}$-norm of $K_{\epsilon,1}$ and $K_{\epsilon,2}$. Let $\mathcal{r}\colonequals |u|$, then $$\begin{aligned} &&\|K_{\epsilon,2}\|_{L^1} = 64 \pi^{2} \int_{0}^\infty \left|\int_{\frac{\sqrt{2}}2}^1 \epsilon^{-4} \mathcal{r}^{3} \hat{\phi}(\epsilon^{-1}r\mathcal{r}) \hat{\phi}(\epsilon^{-1}\mathcal{r}) r \mathrm{d}r \right| \mathrm{d}\mathcal{r} \\ &&\le 64 \pi^{2} \int_{\frac{\sqrt{2}}2}^1 \left(\int_{0}^\infty \epsilon^{-4} \mathcal{r}^{3} \hat{\phi}^{2}(\epsilon^{-1}r\mathcal{r}) \mathrm{d}\mathcal{r} \right)^{1/2} \left(\int_{0}^\infty \epsilon^{-4} \mathcal{r}^{3} \hat{\phi}^{2}(\epsilon^{-1}\mathcal{r}) \mathrm{d}\mathcal{r} \right)^{1/2} r \mathrm{d}r \leq 32 \pi^{2} I_{3}, \end{aligned}$$ where for fixed $r$, we used the change of variables $\mathcal{r} \to \epsilon^{-1} r\mathcal{r}$ and $\mathcal{r} \to \epsilon^{-1} \mathcal{r}$. 
To kill the singularity at $r=0$ in $K_{\epsilon,1}$, by Taylor expansion, we have $$\begin{aligned} K_{\epsilon,1}(u) = 16 \pi \int_0^{\frac{\sqrt{2}}2} \int_{0}^{1} \epsilon^{-5} |u|^{2} \hat{\phi}(\epsilon^{-1}|u|r) (\hat{\phi})^{\prime}(\epsilon^{-1}|u|(\tau + (1-\tau)\sqrt{1-r^2})) (1-\sqrt{1-r^2}) r \mathrm{d}r \mathrm{d}\tau, \end{aligned}$$ which implies that $$\begin{aligned} &&\|K_{\epsilon,1}\|_{L^1} = 64 \pi^{2} \int_{0}^\infty \left| \int_0^{\frac{\sqrt{2}}2} \int_{0}^{1} \epsilon^{-5} \mathcal{r}^{4} \hat{\phi}(\epsilon^{-1}\mathcal{r}r) (\hat{\phi})^{\prime}(\epsilon^{-1}\mathcal{r}(\tau + (1-\tau)\sqrt{1-r^2})) (1-\sqrt{1-r^2}) r \mathrm{d}r \mathrm{d}\tau \right| \mathrm{d}\mathcal{r} \\ &&\le 64 \pi^{2} \int_0^{\frac{\sqrt{2}}2} \int_{0}^{1} \left(\int_{0}^\infty \epsilon^{-4} \mathcal{r}^{3} \hat{\phi}^{2}(\epsilon^{-1}r\mathcal{r}) \mathrm{d}\mathcal{r}\right)^{1/2} \left(\int_{0}^\infty \epsilon^{-6} \mathcal{r}^{5} |(\hat{\phi})^{\prime}(\epsilon^{-1}\mathcal{r}(\tau + (1-\tau)\sqrt{1-r^2}))|^{2} \mathrm{d}\mathcal{r} \right)^{1/2} \\ &&\times (1-\sqrt{1-r^2}) r \mathrm{d}r \mathrm{d}\tau \le 64 \pi^{2} \left(\int_{0}^\infty s^{3} \hat{\phi}^{2}(s) \mathrm{d}s \right)^{1/2} \left(\int_{0}^\infty s^{5} |(\hat{\phi})^{\prime}(s)|^{2} \mathrm{d}s \right)^{1/2} \int_0^{\frac{\sqrt{2}}2} \int_{0}^{1} (1-\sqrt{1-r^2}) r^{-1} \\&&\times (\tau + (1-\tau)\sqrt{1-r^2})^{-3} dr \mathrm{d}\tau \leq 16 \sqrt{2} \pi^{2} (I_{3} + I^{\prime}_{3}), \end{aligned}$$ where the estimates $1-\sqrt{1-r^2} \leq r^{2}$, $\sqrt{2}/2 \leq \tau + (1-\tau) \sqrt{1-r^2} \leq 1$ are used. Now we have $$\begin{aligned} \|K_{\epsilon}\|_{L^1} \leq \|K_{\epsilon,1}\|_{L^1} + \|K_{\epsilon,2}\|_{L^1} \leq 16 \sqrt{2} \pi^{2} (I_{3} + I^{\prime}_{3}) + 32 \pi^{2} I_{3} \leq 64 \pi^{2} (I_{3} + I^{\prime}_{3}), \end{aligned}$$ which gives [\[Q2-cancealltion-g-h-f\]](#Q2-cancealltion-g-h-f){reference-type="eqref" reference="Q2-cancealltion-g-h-f"}. The same argument can be applied to get that $\|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^1} \lesssim \epsilon^{\vartheta} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta})$, which gives [\[eps-vartheta-B2-cancellation\]](#eps-vartheta-B2-cancellation){reference-type="eqref" reference="eps-vartheta-B2-cancellation"}. We end the proof. ◻

## Integrals involving $B^{\epsilon}_1$

We shall use [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} to estimate the integrals involving $B^{\epsilon}_1$. **Lemma 7**. *Let $a, b \in \mathop{\mathbb R\kern 0pt}\nolimits$ with $b \geq 0$. If $0 \leq \kappa, \iota \leq 1$, then $$\begin{aligned} \qquad\label{Q1-result-general} \int B^{\epsilon}_1 | g(\iota(v_*)) h(\kappa(v))| |v-v_*|^{a+b} \sin^{b-1}(\theta/2) \mathrm{d}V \leq (\sqrt{2})^{(a+1)_{+}} 8 \pi \epsilon^{b-3} I_{b} \int |v-v_*|^{a} | g_* h| \mathrm{d}v_* \mathrm{d}v. \end{aligned}$$* *Proof.* Applying [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"}, we have $$\begin{aligned} && \int B^{\epsilon}_1 | g(\iota(v_*)) h(\kappa(v))| |v-v_*|^{a+b} \sin^{b-1}(\theta/2) \mathrm{d}V \\ &=& \int \epsilon^{-4} |v-v_*|^{a+b+1} \psi^{a+b+4}_{\kappa+\iota}(\theta) \sin^{b-1}(\theta/2) \hat{\phi}^2\bigg( \epsilon^{-1} |v-v_*| \psi_{\kappa+\iota}(\theta) \sin(\theta/2) \bigg) | g_* h| \mathrm{d}V.
\end{aligned}$$ Recalling [\[general-Jacobean\]](#general-Jacobean){reference-type="eqref" reference="general-Jacobean"}, if we set $\mathcal{r}: = \psi_{\kappa+\iota}(\theta) \sin(\theta/2)$ then $\mathrm{d} \mathcal{r} = \psi_{\kappa+\iota}^{3}(\theta) \mathrm{d}\sin(\theta/2)$. Since $1 \leq \psi_{\kappa+\iota} \leq \sqrt{2}$, we have $$\begin{aligned} \label{explicit-computation} && \int \epsilon^{-4} \psi^{a+b+4}_{\kappa+\iota}(\theta) \sin^{b-1}(\theta/2) \hat{\phi}^2\bigg( \epsilon^{-1} |v-v_*| \psi_{\kappa+\iota}(\theta) \sin(\theta/2) \bigg) \mathrm{d}\sigma \\ \nonumber &=& 8 \pi \int_{0}^{\pi/2} \epsilon^{-4} \psi^{a+b+4}_{\kappa+\iota}(\theta) \sin^b(\theta/2) \hat{\phi}^2\bigg( \epsilon^{-1} |v-v_*| \psi_{\kappa+\iota}(\theta) \sin(\theta/2) \bigg) \mathrm{d}\sin(\theta/2) \\ \nonumber &=& 8 \pi \int_{0}^{(2-2(\kappa+\iota)+(\kappa+\iota)^{2})^{-1/2}} \epsilon^{-4} \psi^{a+1}_{\kappa+\iota}(\theta) \hat{\phi}^2\bigg( \epsilon^{-1} |v-v_*| \mathcal{r} \bigg) \mathcal{r}^{b} \mathrm{d}\mathcal{r} \\ \nonumber &\leq& (\sqrt{2})^{(a+1)_{+}} 8 \pi \int_{0}^{1} \epsilon^{-4} \hat{\phi}^2\bigg( \epsilon^{-1} |v-v_*| \mathcal{r} \bigg) \mathcal{r}^{b} \mathrm{d}\mathcal{r} \leq (\sqrt{2})^{(a+1)_{+}} 8 \pi \epsilon^{b-3} I_{b} |v-v_*|^{-b-1}, \end{aligned}$$ which yields [\[Q1-result-general\]](#Q1-result-general){reference-type="eqref" reference="Q1-result-general"}. ◻ **Remark 8**. *If we borrow the same idea used here to the estimate of integrals involving $B^{\epsilon}_3$, we will use the change of variable: $\mathcal{r} = \psi_{\kappa+\iota}(\theta) \cos(\theta/2)$ which indicates that $\mathrm{d} \mathcal{r} = (1-\kappa-\iota)^2\psi_{\kappa+\iota}^{3}(\theta) \mathrm{d}\cos(\theta/2)$. As a result, the ending estimate will have a singular factor $(1-\kappa-\iota)^{-2}$ as $\kappa + \iota \to 1$. For this reason, we always avoid the change of variable $v \to \kappa(v)$ or $v_* \to \iota(v_*)$ for integrals involving $B^{\epsilon}_3$.* **Remark 9**. *Lemma [Lemma 7](#Technical-lemma){reference-type="ref" reference="Technical-lemma"} and its proof are highly versatile, making them applicable to the majority of integrals involving $B^{\epsilon}_1$ that will arise in this article. To facilitate future reference, we provide several examples of their use below.* *$\bullet$ Using [\[change-of-variable\]](#change-of-variable){reference-type="eqref" reference="change-of-variable"} and the computation in [\[explicit-computation\]](#explicit-computation){reference-type="eqref" reference="explicit-computation"}, we have $$\begin{aligned} \label{B1-sigma-vStar} \int B^{\epsilon}_1(|v-v_{*}|, \cos\theta) f_*' \mathrm{d}\sigma \mathrm{d}v_* = \int B^{\epsilon}_1(|v-v_{*}|\psi_{1}(\theta), \cos\theta) f_* \psi_{1}^{3}(\theta) \mathrm{d}\sigma \mathrm{d}v_* \lesssim \epsilon^{-3} I_{0} \| f \|_{L^1} . \end{aligned}$$* *$\bullet$ By taking $a=0$ in [\[Q1-result-general\]](#Q1-result-general){reference-type="eqref" reference="Q1-result-general"}, for $b \geq 0$, we have $$\begin{aligned} \label{general-result-cutoff} \int B^{\epsilon}_1 | g(\iota(v_*)) h(\kappa(v))| |v-v_*|^b \sin^{b-1} \frac{\theta}{2} \mathrm{d}V \leq 8 \sqrt{2} \pi \epsilon^{b-3} I_{b} \|g\|_{L^1} \|h\|_{L^1}. \end{aligned}$$* *$\bullet$ Let $0 \leq \vartheta \leq 1$. Let $c, d \in \mathop{\mathbb R\kern 0pt}\nolimits$ with $d \geq 2+\vartheta$. 
Thanks to the fact that $|v-v_*|^c \sin^d(\theta/2) \leq |v-v_*|^{c - 3-\vartheta} |v-v_*|^{3+\vartheta} \sin^{2+\vartheta}(\theta/2)$, if we take $a = c - 3-\vartheta$ and $b = 3+\vartheta$ in [\[Q1-result-general\]](#Q1-result-general){reference-type="eqref" reference="Q1-result-general"}, then $$\begin{aligned} \label{Q1-result-with-factor-a-minus3-with-small-facfor}\qquad \int B^{\epsilon}_1 | g(\iota(v_*)) h(\kappa(v))| |v-v_*|^c \sin^d(\theta/2) \mathrm{d}V \leq 8 \pi (\sqrt{2})^{(c-2-\vartheta)_+} \epsilon^\vartheta I_{3+\vartheta} \int |v-v_*|^{c-3-\vartheta} | g_* h| \mathrm{d}v_* \mathrm{d}v. \end{aligned}$$ In particular, if $\vartheta = 0, d = 2$ in [\[Q1-result-with-factor-a-minus3-with-small-facfor\]](#Q1-result-with-factor-a-minus3-with-small-facfor){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-with-small-facfor"}, then we get that $$\begin{aligned} \label{Q1-result-with-factor-a-minus3-type-2}\qquad \int B^{\epsilon}_1 | g(\iota(v_*)) h(\kappa(v))| |v-v_*|^c \sin^2(\theta/2) \mathrm{d}V \leq 8 \pi (\sqrt{2})^{(c-2)_{+}} I_{3} \int |v-v_*|^{-3+c} | g_* h| \mathrm{d}v_* \mathrm{d}v. \end{aligned}$$* Using [\[Q1-result-with-factor-a-minus3-type-2\]](#Q1-result-with-factor-a-minus3-type-2){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-type-2"} properly, we derive the following flexible estimate allowing some balance of weight. **Lemma 8**. *For $0 \leq \iota, \kappa_{1}, \kappa_{2} \leq 1$, it holds that $$\begin{aligned} \label{general-B1-h-f} \int B^{\epsilon}_{1} |g(\iota(v_*)) h(\kappa_1(v)) f(\kappa_2(v))| |v' - v|^{2} \mathrm{d}V \lesssim I_{3} (\|g\|_{L^1_{a_1}} + \|g\|_{L^2}) \|h\|_{L^2_{a_2}} \|f\|_{L^2_{a_3}}, \end{aligned}$$ where $-1 \leq a_1 \leq 0 \leq a_2, a_3$ and $a_1+ a_2 + a_3 = 0$.* *Proof.* We divide the integration domain into two parts: $\mathcal{U} \colonequals \{(v, v_*, \sigma) : |v-v_*| \leq 1, (v-v_*) \cdot \sigma \geq 0\}$ and $\mathcal{F} \colonequals \{(v, v_*, \sigma) : |v-v_*| \ge 1, (v-v_*) \cdot \sigma \geq 0\}$ and denote the associated integrals by $\mathcal{I}_{\leq}$ and $\mathcal{I}_{\geq}$ accordingly. $\bullet$ In the domain $\mathcal{U}$, one has $|\kappa(v) - \iota(v_*)| \leq 1$ for all $0 \leq \iota, \kappa\leq 1$. Using [\[Q1-result-with-factor-a-minus3-type-2\]](#Q1-result-with-factor-a-minus3-type-2){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-type-2"}, we will have $$\begin{aligned} \mathcal{I}_{\leq} &\leq& \left(\int \mathrm{1}_{\mathcal{U}} B^{\epsilon}_{1} |g(\iota(v_*)) h^2(\kappa_1(v))| |v' - v|^{2} \mathrm{d}V \right)^{1/2} \left( \int \mathrm{1}_{\mathcal{U}} B^{\epsilon}_{1}|g(\iota(v_*)) f^2(\kappa_2(v))| |v' - v|^{2} \mathrm{d}V \right)^{1/2} \\ &\lesssim& I_{3} \left(\int \mathrm{1}_{|v-v_*| \leq 1} |g_* h^2| |v - v_*|^{-1} \mathrm{d}v \mathrm{d}v_* \right)^{1/2} \left(\int \mathrm{1}_{|v-v_*| \leq 1} |g_* f^2| |v - v_*|^{-1} \mathrm{d}v \mathrm{d}v_* \right)^{1/2} \\ &\lesssim& I_{3} \|g\|_{L^2}\|h\|_{L^2}\|f\|_{L^2}. \end{aligned}$$ $\bullet$ In the domain $\mathcal{F}$, one has $|\kappa(v) - \iota(v_*)| \geq \sqrt{2}/2$ and $W_l(v-v_*)\sim |v - v_*|^l$. 
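Indeed, under the standard convention $W_{l}(z) = (1+|z|^{2})^{l/2}$ (which we assume here for illustration), the restriction $|v-v_*| \geq 1$ on $\mathcal{F}$ gives $$\begin{aligned} 2^{-|l|/2}\, |v-v_*|^{l} \leq W_{l}(v-v_*) \leq 2^{|l|/2}\, |v-v_*|^{l}. \end{aligned}$$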
Then we get that $$\begin{aligned} |v - v_*|^{-1} &\lesssim& |v - v_*|^{a_1} = |v - v_*|^{-a_2 - a_3} \sim |\kappa_1(v) - \iota(v_*)|^{-a_2} |\kappa_2(v) - \iota(v_*)|^{-a_3} \\ &\lesssim& W_{a_1}(\iota(v_*)) W_{a_2}(\kappa_1(v)) W_{a_3}(\kappa_2(v)), \end{aligned}$$ which gives $$\begin{aligned} &&\mathcal{I}_{\geq} \lesssim \int \mathrm{1}_{\mathcal{F}} B^{\epsilon}_{1} |(W_{a_1}g)(\iota(v_*)) (W_{a_2}h)(\kappa_1(v)) (W_{a_3}f)(\kappa_2(v))| |v - v_*|^{3}\sin^2\frac{\theta}{2} \mathrm{d}V \\ &&\lesssim\left(\int B^{\epsilon}_{1} |(W_{a_1}g)(\iota(v_*)) (W_{a_2}h)^2(\kappa_1(v))| |v - v_*|^{3}\sin^2\frac{\theta}{2} \mathrm{d}V \right)^{1/2} \bigg( \int B^{\epsilon}_{1} |(W_{a_1}g)(\iota(v_*)) (W_{a_3}f)^2(\kappa_2(v))|\\&&\times |v - v_*|^{3}\sin^2\frac{\theta}{2} \mathrm{d}V \bigg)^{1/2} \lesssim I_{3} \left(\int |(W_{a_1}g)_* (W_{a_2}h)^2| \mathrm{d}v \mathrm{d}v_* \right)^{1/2} \left(\int |(W_{a_1}g)_* (W_{a_3}f)^2| \mathrm{d}v \mathrm{d}v_* \right)^{1/2} \\ &&\lesssim I_{3} \|g\|_{L^1_{a_1}} \|h\|_{L^2_{a_2}}\|f\|_{L^2_{a_3}}. \end{aligned}$$ We complete the proof of the lemma by patching together these two estimates. ◻ ## Upper bounds from the cutoff perspective We will prove several upper bounds of the operators $Q$ and $R$. All the estimates will depend heavily on the parameter $\epsilon$. **Lemma 9**. *It holds that $$\begin{aligned} \label{rough-upper-Q} |\langle Q (g, h), f \rangle| \lesssim (\epsilon^{-3} I_{0} + I_{3})\|g\|_{L^{1} \cap L^2} \|h\|_{L^2} \|f\|_{L^2}, \\ \label{rough-upper-R-rho-L-infty} |\langle R (g, h, \rho), f \rangle| \lesssim (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^{1} \cap L^2} \|h\|_{L^2} \|\rho\|_{L^\infty} \|f\|_{L^2}, \\ \label{rough-upper-R-rho-L2} |\langle R (g, h, \rho), f \rangle| \lesssim (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^{1} \cap L^\infty} \|h\|_{L^2 \cap L^\infty} \|\rho\|_{L^2} \|f\|_{L^2}. \end{aligned}$$* *Proof.* Observing that $B^{\epsilon} \leq 2 B^{\epsilon}_1 + 2 B^{\epsilon}_3$, we get $$\begin{aligned} \label{Q-inner-product}\qquad |\langle Q (g, h), f \rangle| = |\int B^{\epsilon} g_* h (f' - f) \mathrm{d}V| \leq 2\int B^{\epsilon}_1 |g_* h (|f'| + |f|)| \mathrm{d}V + 2\int B^{\epsilon}_3 |g_* h (|f'| + |f|)| \mathrm{d}V. \end{aligned}$$ Applying [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"} with $\iota=\kappa=b=0$ and noting $0 \leq \sin \frac{\theta}{2} \leq 1$, we have $$\begin{aligned} \label{B1-g-star-hf} \int B^{\epsilon}_1 |g_* h f| \mathrm{d}V \lesssim \epsilon^{-3} I_{0} \|g\|_{L^1} \|h f\|_{L^1} \lesssim \epsilon^{-3} I_{0} \|g\|_{L^1} \|h\|_{L^2} \|f\|_{L^2}. \end{aligned}$$ Again by [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"} with $\iota=b=0$, $\kappa=0$ or $1$, we get the same bound for $\int B^{\epsilon}_1 |g_* h f'| \mathrm{d}V$. Next, by taking $a=0$ in [\[B-3-a-order-velocity\]](#B-3-a-order-velocity){reference-type="eqref" reference="B-3-a-order-velocity"}, we have $$\begin{aligned} \label{B3-g-star-hf} \int B^{\epsilon}_3 |g_* h f| \mathrm{d}V \lesssim \epsilon^{-3} I_{0} \|g\|_{L^1} \|h f\|_{L^1} \lesssim \epsilon^{-3} I_{0} \|g\|_{L^1} \|h\|_{L^2} \|f\|_{L^2}.
\end{aligned}$$ It is not difficult to check that $$\begin{aligned} \label{B3-g-star-h-f-prime}\int B^{\epsilon}_3 |g_* h f'| \mathrm{d}V\le (\epsilon^{-3} I_{0} I_{3})^{\frac 12} \|g\|_{L^2} \|h\|_{L^2} \|f\|_{L^2}.\end{aligned}$$ We conclude [\[rough-upper-Q\]](#rough-upper-Q){reference-type="eqref" reference="rough-upper-Q"} by patching together the above estimates. For the cubic term, we first observe that $$\begin{aligned} \label{R-inner-product} |\langle R (g, h, \rho), f \rangle| = \epsilon^{3}|\int B^{\epsilon} g_* h (\rho' + \rho'_*) (f' - f) \mathrm{d}V|. \end{aligned}$$ Then [\[rough-upper-R-rho-L-infty\]](#rough-upper-R-rho-L-infty){reference-type="eqref" reference="rough-upper-R-rho-L-infty"} follows from [\[rough-upper-Q\]](#rough-upper-Q){reference-type="eqref" reference="rough-upper-Q"} by taking the $L^\infty$ norm of $\rho$. To prove [\[rough-upper-R-rho-L2\]](#rough-upper-R-rho-L2){reference-type="eqref" reference="rough-upper-R-rho-L2"}, by using $B^{\epsilon} \leq 2 B^{\epsilon}_1 + 2 B^{\epsilon}_3$, we get that $$\begin{aligned} &&|\langle R (g, h, \rho), f \rangle| = \epsilon^{3} |\int B^{\epsilon} g_* h (\rho' + \rho'_*) (f' - f) \mathrm{d}V| \leq 2 \epsilon^{3} \|h\|_{L^\infty} \int B^{\epsilon}_1 |g_* \rho' (|f'| + |f|)| \mathrm{d}V \\ &&+ 2 \epsilon^{3} \int B^{\epsilon}_1 |g_* h \rho'_* (|f'| + |f|)| \mathrm{d}V + 2 \epsilon^{3} \|g\|_{L^\infty} \|h\|_{L^\infty} \int B^{\epsilon}_3 (|\rho'| + |\rho'_*|) (|f'| + |f|) \mathrm{d}V. \end{aligned}$$ By the Cauchy-Schwarz inequality, using [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"} and [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"}, we conclude the desired result. ◻ Next, we prove the commutator estimates from the cutoff perspective. **Lemma 10**. *Let $l \geq 2$. Then $$\begin{aligned} \label{rough-commutator-Q} |\langle Q(g,h)W_l-Q(g,hW_l),f \rangle| \leq C_{l} (\epsilon^{-3} I_{0} + I_{3}) \|g\|_{L^2_{l}} \|h\|_{L^2_l} \|f\|_{L^2}, \\ \label{rough-commutator-R-rho-L-infty} |\langle R(g, h, \rho)W_l - R(g, hW_l, \rho),f \rangle| \leq C_{l} (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^2_{l}} \|h\|_{L^2_l} \|\rho\|_{L^\infty} \|f\|_{L^2}, \\ \label{rough-commutator-R} |\langle R(g, h, \rho)W_l - R(g, hW_l, \rho),f \rangle| \leq C_{l} (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^2_{l} \cap L^\infty_l} \|h\|_{L^2_l \cap L^\infty_l} \|\rho\|_{L^2} \|f\|_{L^2}. \end{aligned}$$* *Proof.* It is easy to compute that $$\begin{aligned} \label{Commutator-formula-Q} \mathcal{I}\colonequals \langle Q(g,h)W_l-Q(g,hW_l),f \rangle=\int B^{\epsilon} g_* h f' (W_l'-W_l)\mathrm{d}V. \end{aligned}$$ From the fact that $|\nabla^{k} W_{l}| \lesssim_{l,k} W_{l-k}$ with $l \in \mathop{\mathbb R\kern 0pt}\nolimits, k \in \mathop{\mathbb N\kern 0pt}\nolimits$, [\[weight-formula-1\]](#weight-formula-1){reference-type="eqref" reference="weight-formula-1"} yields that $$\begin{aligned} \label{weight-diff-into-two-cases} |W_{l}' - W_{l}| \lesssim_{l} \mathrm{1}_{|v-v_*| \leq 1} W_{l} + \mathrm{1}_{|v-v_*| \geq 1} (W_{l-1} + (W_{l-1})_*) |v-v_*| \sin\frac{\theta}{2}. \end{aligned}$$ Then we have $$\begin{aligned} |\mathcal{I}| \lesssim_{l} \int B^{\epsilon} |g_* W_l h f' | \mathrm{d}V + \int B^{\epsilon} |g_* W_{l-1} h f' | |v-v_*| \mathrm{d}V + \int B^{\epsilon} \mathrm{1}_{|v-v_*| \geq 1} |v-v_*| \sin\frac{\theta}{2} |(W_{l-1}g)_* h f' | \mathrm{d}V.
\end{aligned}$$ Thanks to [\[B3-g-star-h-f-prime\]](#B3-g-star-h-f-prime){reference-type="eqref" reference="B3-g-star-h-f-prime"}, the first term can be bounded as $\int B^{\epsilon} |g_* W_l h f' | \mathrm{d}V \lesssim (\epsilon^{-3} I_{0} + I_{3}) \|g\|_{L^1 \cap L^2} \|h\|_{L^2_l} \|f\|_{L^2}$. For the second term, using $B^{\epsilon} \leq 2B^{\epsilon}_1 + 2B^{\epsilon}_3$, it suffices to estimate $\int B^{\epsilon}_1 |g_* W_{l-1} h f' | |v-v_*| \mathrm{d}V$ and $\int B^{\epsilon}_3 |g_* W_{l-1} h f' | |v-v_*| \mathrm{d}V$. By the Cauchy-Schwarz inequality, [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"}, [\[B-3-a-order-velocity\]](#B-3-a-order-velocity){reference-type="eqref" reference="B-3-a-order-velocity"} and [\[general-L1\]](#general-L1){reference-type="eqref" reference="general-L1"} imply that $$\begin{aligned} \int B^{\epsilon} |g_* W_{l-1} h f' | |v-v_*| \mathrm{d}V\lesssim (\epsilon^{-2}I_1+(\epsilon^{-1} I_2 I_{3})^{1/2}) \|g\|_{L^1 \cap L^2} \|h\|_{L^2_l} \|f\|_{L^2}\lesssim (\epsilon^{-3} I_{0} + I_{3}) \|g\|_{L^1 \cap L^2} \|h\|_{L^2_l} \|f\|_{L^2},\end{aligned}$$ where we use the fact that $\epsilon^{a-3}I_a\le \epsilon^{-3} I_{0} + I_{3}$ for $0\le a\le3$ by interpolation. A similar argument applies to the third term, and we conclude the desired result [\[rough-commutator-Q\]](#rough-commutator-Q){reference-type="eqref" reference="rough-commutator-Q"}. For the cubic term, we first have $$\begin{aligned} \label{Commutator-formula-R} \mathcal{K} \colonequals \langle R(g, h, \rho)W_l - R(g, hW_l, \rho),f \rangle= \pm \epsilon^{3} \int B^{\epsilon} g_* h (\rho' + \rho'_*) f' (W_l'-W_l)\mathrm{d}V. \end{aligned}$$ By comparing the structure of [\[Commutator-formula-Q\]](#Commutator-formula-Q){reference-type="eqref" reference="Commutator-formula-Q"} and that of [\[Commutator-formula-R\]](#Commutator-formula-R){reference-type="eqref" reference="Commutator-formula-R"}, [\[rough-commutator-R-rho-L-infty\]](#rough-commutator-R-rho-L-infty){reference-type="eqref" reference="rough-commutator-R-rho-L-infty"} follows easily from [\[rough-upper-R-rho-L-infty\]](#rough-upper-R-rho-L-infty){reference-type="eqref" reference="rough-upper-R-rho-L-infty"}. It remains to derive [\[rough-commutator-R\]](#rough-commutator-R){reference-type="eqref" reference="rough-commutator-R"}. Note that $$\begin{aligned} \epsilon^{-3}|\mathcal{K}| \lesssim \int B^{\epsilon}_1 |g_* h| (|\rho'| + |\rho'_*|) |f'| |W_l'-W_l|\mathrm{d}V + \int B^{\epsilon}_3 |g_* h| (|\rho'| + |\rho'_*|) |f'| |W_l'-W_l|\mathrm{d}V.
\end{aligned}$$ Since $|W_l'-W_l| \lesssim W_l + (W_l)_*$, [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"} implies that $\int B^{\epsilon}_3 |g_* h| (|\rho'| + |\rho'_*|) |f'| |W_l'-W_l|\mathrm{d}V \lesssim I_{3} \|g\|_{ L^\infty_l} \|h\|_{ L^\infty_l} \|\rho\|_{L^2} \|f\|_{L^2}.$ We use [\[weight-diff-into-two-cases\]](#weight-diff-into-two-cases){reference-type="eqref" reference="weight-diff-into-two-cases"} to bound the first term by $$\begin{aligned} && \int B^{\epsilon}_1 |g_* h| (|\rho'| + |\rho'_*|) |f'| |W_l'-W_l|\mathrm{d}V \lesssim \int B^{\epsilon}_1 |g_* W_l h| (|\rho'| + |\rho'_*|) |f'| \mathrm{d}V + \int B^{\epsilon}_1 |g_* W_{l-1}h| (|\rho'| + |\rho'_*|) \\ &&\times |f'| |v-v_*| \mathrm{d}V + \int B^{\epsilon}_1 |(W_{l-1}g)_* h \rho'_* f'| |v-v_*| \mathrm{d}V + \int B^{\epsilon}_1 \mathrm{1}_{|v-v_*| \geq 1} |v-v_*| \sin\frac{\theta}{2} |(W_{l-1}g)_* h \rho' f'| \mathrm{d}V . \end{aligned}$$ By repeatedly using [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"} and the Cauchy-Schwarz inequality, we are led to the desired result [\[rough-commutator-R\]](#rough-commutator-R){reference-type="eqref" reference="rough-commutator-R"} and thus complete the proof of the lemma. ◻ Lemmas [Lemma 9](#Q-inner-product-upper-bound-L2-L2){reference-type="ref" reference="Q-inner-product-upper-bound-L2-L2"} and [Lemma 10](#commutator-Q-cutoff){reference-type="ref" reference="commutator-Q-cutoff"} together yield the following upper bounds in weighted $L^p$ spaces. **Proposition 3**. *Let $l \geq 2$. Then $$\begin{aligned} \label{rough-upper-Q-weighted} |\langle W_l Q (g, h), f \rangle| \leq C_{l} (\epsilon^{-3} I_{0} + I_{3}) \|g\|_{L^{2}_l} \|h\|_{L^2_l} \|f\|_{L^2}, \\ \label{rough-upper-R-weighted-rho-L-infty} |\langle W_l R (g, h, \rho), f \rangle| \leq C_{l} (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^{2}_l} \|h\|_{L^2_l} \|\rho\|_{L^\infty} \|f\|_{L^2}, \\\label{rough-upper-R-weighted} |\langle R (g, h, \rho), W_l f \rangle| \leq C_{l} (I_{0} + \epsilon^{3} I_{3}) \|g\|_{L^{2}_l \cap L^\infty_l} \|h\|_{L^{2}_l \cap L^\infty_l} \|\rho\|_{L^2} \|f\|_{L^2} . \end{aligned}$$* In the next subsection, we will carry out the energy estimate in the weighted Sobolev space $H^{N}_{l}$. In particular, we have to estimate the typical term: $\langle R(\partial^{\alpha_1}f,\partial^{\alpha_2}f, \partial^{\alpha_3}f) W_{l}, W_{l}\partial^\alpha f \rangle$ for $\alpha_1+\alpha_2+\alpha_3=\alpha$. It can be divided into four cases: Case 1: $\alpha_1= \alpha$; Case 2: $\alpha_2= \alpha$; Case 3: $\alpha_3= \alpha$; Case 4: $|\alpha| \geq 1$ and $|\alpha_1|, |\alpha_2|, |\alpha_3| \le |\alpha|-1$. We will apply [\[rough-upper-R-weighted-rho-L-infty\]](#rough-upper-R-weighted-rho-L-infty){reference-type="eqref" reference="rough-upper-R-weighted-rho-L-infty"} to Cases 1 and 2 and apply [\[rough-upper-R-weighted\]](#rough-upper-R-weighted){reference-type="eqref" reference="rough-upper-R-weighted"} to Case 3. For Case 4, to balance the regularity of $g, h$ and $\rho$, we need an additional upper bound on $R$, which can be stated as follows. **Proposition 4**. *Let $l \geq 0$. Then $$\begin{aligned} \label{R-h1-h1-h1-l2} |\langle R (g, h, \rho), W_{l}f \rangle| \leq C_{l} (I_{0} + \epsilon^{3} I_{3}) \|g\|_{H^1_2}\|h\|_{H^{1}_{l}} \|\rho\|_{H^{1}_{l}} \|f\|_{L^2}.
\end{aligned}$$* *Proof.* We recall that $\langle \epsilon^{-3}R (g, h, \rho), W_{l}f \rangle = \int B g_* h \left( \rho' + \rho'_* \right) ((W_{l}f)' - W_{l}f) \mathrm{d}V .$ Obviously, using [\[weight-formula-1\]](#weight-formula-1){reference-type="eqref" reference="weight-formula-1"} and $B^{\epsilon} \leq 2B^{\epsilon}_1 + 2B^{\epsilon}_3$, we may get that $$\begin{aligned} \label{directly-into-B1} | \langle \epsilon^{-3}R (g, h, \rho), W_{l}f \rangle| &\lesssim_{l}& \int B^{\epsilon}_1 |g_* W_{l}h| \left( |(W_{l}\rho)'| + |(W_{l}\rho)'_*| \right) (|f'| + |f|) \mathrm{d}V \\ \nonumber % \label{directly-into-B3} &&+ \int B^{\epsilon}_3 |g_* W_{l}h| \left( |(W_{l}\rho)'| + |(W_{l}\rho)'_*| \right) (|f'| + |f|) \mathrm{d}V . \end{aligned}$$ By [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"}, we first have $$\begin{aligned} \label{Q3-line-bound-H1-H1-H1} \int B^{\epsilon}_3 |g_* W_{l}h| \left( |(W_{l}\rho)'| + |(W_{l}\rho)'_*| \right) (|f'| + |f|) \mathrm{d}V \lesssim I_{3}\|g\|_{H^{1}} \| h\|_{H^{1}_{l}} \| \rho\|_{H^{1}_{l}} \|f\|_{L^2}. \end{aligned}$$ By Hölder's inequality and [\[general-result-cutoff\]](#general-result-cutoff){reference-type="eqref" reference="general-result-cutoff"} with $b=0$, the integral containing $B^{\epsilon}_1$ in [\[directly-into-B1\]](#directly-into-B1){reference-type="eqref" reference="directly-into-B1"} is bounded by $$\begin{aligned} \nonumber &&\left( \int B^{\epsilon}_1 |g_*| |W_{l}h|^{4} \mathrm{d}V \right)^{1/4} \left( \int B^{\epsilon}_1 |g_*| |(W_{l}\rho)'|^{4} \mathrm{d}V \right)^{1/4} \left( \int B^{\epsilon}_1 |g_*| (|f'| + |f|)^2 \mathrm{d}V \right)^{1/2} \\ \nonumber &&+ \left( \int B^{\epsilon}_1 |W_{l}h|^{2} |(W_{l}\rho)'_*|^{2} \mathrm{d}V \right)^{1/2} \left( \int B^{\epsilon}_1 |g_*|^2 (|f'| + |f|)^2 \mathrm{d}V \right)^{1/2} \lesssim \epsilon^{-3}I_{0} \|g\|_{L^{2}_2} \|W_l h\|_{H^{3/4}} \|W_l \rho\|_{H^{3/4}} \|f\|_{L^{2}}, \end{aligned}$$ where we use the Sobolev embedding theorem. Patching together all the estimates, we get [\[R-h1-h1-h1-l2\]](#R-h1-h1-h1-l2){reference-type="eqref" reference="R-h1-h1-h1-l2"}. ◻ We now apply Propositions [Proposition 3](#Q-inner-product-weighted-L2-L2){reference-type="ref" reference="Q-inner-product-weighted-L2-L2"} and [Proposition 4](#R-rough){reference-type="ref" reference="R-rough"} to get the following energy estimate of $R$ in $H^{N}_{l}$ spaces. **Lemma 11**. *Let $N, l \geq 2$. Then $$\begin{aligned} \big|\sum_{|\alpha| \leq N} \langle W_{l} \partial^\alpha R(f, f, f) , W_{l}\partial^\alpha f \rangle \big| \lesssim_{N,l} (I_{0} + \epsilon^{3} I_{3}) \|f\|_{H^{N}_{l}}^4. \end{aligned}$$* *Proof.* It is reduced to the consideration of $\langle R(\partial^{\alpha_1}f,\partial^{\alpha_2}f, \partial^{\alpha_3}f) W_{l}, W_{l}\partial^\alpha f \rangle$ for $\alpha_1+\alpha_2+\alpha_3=\alpha$. There are four cases: Case 1: $\alpha_1= \alpha$; Case 2: $\alpha_2= \alpha$; Case 3: $\alpha_3= \alpha$; Case 4: $|\alpha| \geq 1$ and $|\alpha_1|, |\alpha_2|, |\alpha_3| \le |\alpha|-1$. The desired result follows by applying [\[rough-upper-R-weighted-rho-L-infty\]](#rough-upper-R-weighted-rho-L-infty){reference-type="eqref" reference="rough-upper-R-weighted-rho-L-infty"} to Case 1 and Case 2, [\[rough-upper-R-weighted\]](#rough-upper-R-weighted){reference-type="eqref" reference="rough-upper-R-weighted"} to Case 3 and [\[R-h1-h1-h1-l2\]](#R-h1-h1-h1-l2){reference-type="eqref" reference="R-h1-h1-h1-l2"} to Case 4. 
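For instance, in Case 4 one has $|\alpha_i| + 1 \leq |\alpha| \leq N$ for $i = 1,2,3$, so [\[R-h1-h1-h1-l2\]](#R-h1-h1-h1-l2){reference-type="eqref" reference="R-h1-h1-h1-l2"} applied with $g = \partial^{\alpha_1}f$, $h = \partial^{\alpha_2}f$, $\rho = \partial^{\alpha_3}f$ and with $W_{l}\partial^{\alpha}f$ in place of $f$ gives, thanks to $l \geq 2$, $$\begin{aligned} |\langle R(\partial^{\alpha_1}f,\partial^{\alpha_2}f, \partial^{\alpha_3}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{l} (I_{0} + \epsilon^{3} I_{3}) \|\partial^{\alpha_1}f\|_{H^{1}_{2}} \|\partial^{\alpha_2}f\|_{H^{1}_{l}} \|\partial^{\alpha_3}f\|_{H^{1}_{l}} \|\partial^{\alpha}f\|_{L^{2}_{l}} \lesssim (I_{0} + \epsilon^{3} I_{3}) \|f\|_{H^{N}_{l}}^{4}. \end{aligned}$$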
◻ ## Upper bounds from the non-cutoff perspective In this subsection, we will give the upper bounds of the operators $Q, R$ from the non-cutoff perspective. Roughly speaking, we carry out the Taylor expansion up to the second order to kill the singular factor $\epsilon^{-3}$. We start with the uniform-in-$\epsilon$ estimate of $Q_1$ in the following proposition. **Proposition 5**. *It holds that $$\begin{aligned} \label{Q1-g-h-f-L2-L2-H2} |\langle Q_1(g,h),f \rangle|\lesssim I_{3} (\|g\|_{L^{1}} + \|g\|_{L^{2}}) \|h\|_{L^2}\|f\|_{H^{2}}, \\ \label{Q1-g-h-f-L2-H2-L2} |\langle Q_1(g,h),f \rangle|\lesssim I_{3} (\|g\|_{L^{1}} + \|g\|_{L^{2}})\|h\|_{H^2}\|f\|_{L^{2}}. \end{aligned}$$* *Proof.* It is easy to see that $\langle Q_{1}(g,h), f \rangle = \int B^{\epsilon}_{1} g_* h (f' -f) \mathrm{d}V.$ By [\[Taylor1\]](#Taylor1){reference-type="eqref" reference="Taylor1"} for $f'-f$, [\[Q1-g-h-f-L2-L2-H2\]](#Q1-g-h-f-L2-L2-H2){reference-type="eqref" reference="Q1-g-h-f-L2-L2-H2"} follows by applying [\[cancell1\]](#cancell1){reference-type="eqref" reference="cancell1"}, [\[order-2-cancellation\]](#order-2-cancellation){reference-type="eqref" reference="order-2-cancellation"} and [\[minus-2\]](#minus-2){reference-type="eqref" reference="minus-2"} for the first order and applying [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"} for the second order. For [\[Q1-g-h-f-L2-H2-L2\]](#Q1-g-h-f-L2-H2-L2){reference-type="eqref" reference="Q1-g-h-f-L2-H2-L2"}, we use the Cancellation Lemma to transfer the regularity from $f$ to $h$ through $$\begin{aligned} \langle Q_{1}(g,h), f \rangle = \int B^{\epsilon}_{1}g_*\big(h - h'\big) f'\mathrm{d}V +\int B^{\epsilon}_{1}g_*\big((h f)'- hf\big)\mathrm{d}V. \end{aligned}$$ The former term is dealt with by [\[Taylor2\]](#Taylor2){reference-type="eqref" reference="Taylor2"}, [\[cancell2\]](#cancell2){reference-type="eqref" reference="cancell2"}, and [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"}. The latter term is estimated by [\[Q1-cancealltion-g-h-f\]](#Q1-cancealltion-g-h-f){reference-type="eqref" reference="Q1-cancealltion-g-h-f"}. Then we conclude [\[Q1-g-h-f-L2-H2-L2\]](#Q1-g-h-f-L2-H2-L2){reference-type="eqref" reference="Q1-g-h-f-L2-H2-L2"}. ◻ We next derive upper bounds of $Q_2$. **Proposition 6**. *Let $\delta>0, a \geq 0, b \geq 1, a+b=\frac 32+\delta$. Then $$\begin{aligned} \label{g-h-total-all-on-f} |\langle Q_2(g,h),f \rangle|\lesssim_{\delta} I_{3} \|g\|_{L^{2}}\|h\|_{L^2}\|f\|_{H^{\frac 32+\delta}}, \\ \label{g-h-total-3over2} |\langle Q_2(g,h),f \rangle|\lesssim_{\delta} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{a}}\|h\|_{H^b}\|f\|_{L^{2}}, \\ \label{g-h-total-vartheta} |\langle Q_2(g,h),W_{l}f \rangle|\lesssim \epsilon^{\vartheta} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta}) \|g\|_{H^{2}_{l}}\|h\|_{H^2_{l}}\|f\|_{L^{2}}. \end{aligned}$$* *Proof.* We begin with the estimate of [\[g-h-total-all-on-f\]](#g-h-total-all-on-f){reference-type="eqref" reference="g-h-total-all-on-f"}. We first observe that $$\begin{aligned} |\int B^{\epsilon}_{2}g_* h (f' - f) \mathrm{d}V| \leq 2 \left( \int B^{\epsilon}_{1} g^{2}_* (f' - f)^{2} \mathrm{d}V \right)^{1/2} \left( \int B^{\epsilon}_{3} h^{2}\mathrm{d}V \right)^{1/2}.
\end{aligned}$$ By the order-1 Taylor expansion [\[Taylor1-order-1\]](#Taylor1-order-1){reference-type="eqref" reference="Taylor1-order-1"}, [\[Q1-result-with-factor-a-minus3-type-2\]](#Q1-result-with-factor-a-minus3-type-2){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-type-2"} and [\[2-2-minus-1-s1-s2\]](#2-2-minus-1-s1-s2){reference-type="eqref" reference="2-2-minus-1-s1-s2"} imply that $$\begin{aligned} \label{2-2-difference} \int B^{\epsilon}_{1} g^{2}_* (f' - f)^{2} \mathrm{d}V &\lesssim& I_{3} \int \int_0^1 B^{\epsilon}_{1} |g_*|^{2} |(\nabla f)(\kappa(v))|^{2} |v'-v|^{2} \mathrm{d}\kappa \mathrm{d}V \lesssim_{\delta} I_{3} \|g\|_{H^a}^2 \|f\|_{H^{b}}^2. \end{aligned}$$ By [\[general-L1\]](#general-L1){reference-type="eqref" reference="general-L1"}, the integral containing $B^{\epsilon}_3$ is bounded by $I_{3} \|h\|_{L^{2}}^2$. Then [\[g-h-total-all-on-f\]](#g-h-total-all-on-f){reference-type="eqref" reference="g-h-total-all-on-f"} follows. As for [\[g-h-total-3over2\]](#g-h-total-3over2){reference-type="eqref" reference="g-h-total-3over2"}, we first have $\langle Q_{2}(g,h), f \rangle = \int B^{\epsilon}_{2}g_*\big(h - h'\big) f'\mathrm{d}V + \int B^{\epsilon}_{2} g_*\big((h f)'- hf\big)\mathrm{d}V.$ Following the above argument for [\[g-h-total-all-on-f\]](#g-h-total-all-on-f){reference-type="eqref" reference="g-h-total-all-on-f"}, the term involving $h - h'$ is bounded by $I_{3} \|g\|_{H^{a}}\|h\|_{H^b}\|f\|_{L^{2}}$. The term involving $(h f)'- hf$ is estimated by [\[Q2-cancealltion-g-h-f\]](#Q2-cancealltion-g-h-f){reference-type="eqref" reference="Q2-cancealltion-g-h-f"}. These lead to [\[g-h-total-3over2\]](#g-h-total-3over2){reference-type="eqref" reference="g-h-total-3over2"}. We turn to the estimate of [\[g-h-total-vartheta\]](#g-h-total-vartheta){reference-type="eqref" reference="g-h-total-vartheta"}. We observe that $\langle Q_{2}(g,h), W_{l}f \rangle = \int B^{\epsilon}_{2}g_*\big(h - h'\big) (W_{l}f)'\mathrm{d}V + \int B^{\epsilon}_{2} g_*\big((W_{l} h f)'- W_{l}hf\big)\mathrm{d}V.$ We only focus on the first term since [\[eps-vartheta-B2-cancellation\]](#eps-vartheta-B2-cancellation){reference-type="eqref" reference="eps-vartheta-B2-cancellation"} implies that $|\int B^{\epsilon}_{2} g_*\big((W_{l} h f)'- W_{l}hf\big)\mathrm{d}V| \lesssim \epsilon^{\vartheta} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta}) \|g\|_{H^{2}} \|h\| _{H^{2}_{l}} \|f\|_{L^{2}}.$ We split it into two cases. $\bullet$ If $\vartheta \geq 1/2$, by [\[Taylor2\]](#Taylor2){reference-type="eqref" reference="Taylor2"} and [\[cancell2\]](#cancell2){reference-type="eqref" reference="cancell2"}, [\[weight-formula-1\]](#weight-formula-1){reference-type="eqref" reference="weight-formula-1"} together with the Cauchy-Schwartz inequality will lead to that $$\begin{aligned} && |\int B^{\epsilon}_{2}g_*\big(h - h'\big) (W_{l}f)'\mathrm{d}V| \lesssim \int |B^{\epsilon}_{2} (W_{l}g)_* W_{l}(\kappa(v))(\nabla^2 h)(\kappa(v)) f'| |v'-v|^2 \mathrm{d}V \mathrm{d}\kappa \\ &\lesssim& \left( \int B^{\epsilon}_{1} |(W_{l}g)_* W_{l}(\kappa(v))(\nabla^2 h)(\kappa(v))|^2 |v'-v|^4 |v-v_*|^{-\vartheta} \mathrm{d}V \mathrm{d}\kappa \right)^{1/2} \left( \int B^{\epsilon}_{3}|f'|^2 |v-v_*|^{\vartheta} \mathrm{d}V \right)^{1/2} . 
\end{aligned}$$ Using the change of variable [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} with $\kappa =\iota =1$, [\[L1-vartheta\]](#L1-vartheta){reference-type="eqref" reference="L1-vartheta"} yields that $$\begin{aligned} \int B^{\epsilon}_{3}|f'|^2 |v-v_*|^{\vartheta} \mathrm{d}V = \int B^{\epsilon}_{3} |f|^2 |v-v_*|^{\vartheta} \mathrm{d}V = \| (J_{0,\epsilon} |\cdot|^{\vartheta}) * f^2 \|_{L^1} \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|f\|_{L^{2}}^2. \end{aligned}$$ Then we can conclude [\[g-h-total-vartheta\]](#g-h-total-vartheta){reference-type="eqref" reference="g-h-total-vartheta"} since [\[Q1-result-with-factor-a-minus3-with-small-facfor\]](#Q1-result-with-factor-a-minus3-with-small-facfor){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-with-small-facfor"} and Hardy's inequality yield that $$\begin{aligned} && \int B^{\epsilon}_{1} |(W_{l}g)_* W_{l}(\kappa(v))(\nabla^2 h)(\kappa(v))|^2 |v'-v|^4 |v-v_*|^{-\vartheta} \mathrm{d}V \mathrm{d}\kappa \\ &&\lesssim \epsilon^{\vartheta} I_{3+\vartheta} \int |v-v_*|^{1-2\vartheta} |(W_{l}g)_* W_{l} \nabla^2 h|^2 \mathrm{d}v_* \mathrm{d}v \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|g\|_{H^{\vartheta - 1/2}_{l}}^2 \|h\|_{H^{2}_{l}}^2 \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|g\|_{H^{1/2}_{l}}^2 \|h\|_{H^{2}_{l}}^2 .\end{aligned}$$ $\bullet$ If $\vartheta \leq 1/2$, then thanks to the Taylor expansion, [\[weight-formula-1\]](#weight-formula-1){reference-type="eqref" reference="weight-formula-1"} implies that $$\begin{aligned} && |\int B^{\epsilon}_{2}g_*\big(h - h'\big) (W_{l}f)'\mathrm{d}V| \lesssim \int |B^{\epsilon}_{2} (W_{l}g)_* W_{l}(\kappa(v))(\nabla h)(\kappa(v)) f'| |v'-v| \mathrm{d}V \mathrm{d}\kappa \\ &&\lesssim \left( \int B^{\epsilon}_{1} |(W_{l}g)_* W_{l}(\kappa(v))(\nabla h)(\kappa(v))|^2 |v'-v|^2 \sin\frac{\theta}{2} |v-v_*|^{-\vartheta} \mathrm{d}V \mathrm{d}\kappa \right)^{1/2} \left( \int B^{\epsilon}_{3} \sin^{-1}\frac{\theta}{2} |f'|^2 |v-v_*|^{\vartheta} \mathrm{d}V \right)^{1/2} . \end{aligned}$$ We first use [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} and [\[L1-vartheta\]](#L1-vartheta){reference-type="eqref" reference="L1-vartheta"} to derive that $\int B^{\epsilon}_{3} \sin^{-1}\frac{\theta}{2} |f'|^2 |v-v_*|^{\vartheta} \mathrm{d}V = \int B^{\epsilon}_{3} \sin^{-1}\frac{\theta}{2} |f|^2 |v-v_*|^{\vartheta} \mathrm{d}V = \| (K_{\epsilon} |\cdot|^{\vartheta}) * f^2 \|_{L^1},$ where $K_{\epsilon}(z) \colonequals \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \epsilon^{-4}|z| \hat{\phi}^2(\epsilon^{-1}|z| \cos(\theta/2)) \sin^{-1}\frac{\theta}{2} \mathrm{d}\sigma$. It is easy to see that $$\begin{aligned} \|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^{1}} \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \int_{\mathop{\mathbb S\kern 0pt}\nolimits^{2}_{+}} \cos^{-4-\vartheta}(\theta/2) \sin^{-1}\frac{\theta}{2} \mathrm{d}\sigma \lesssim \epsilon^{\vartheta} I_{3+\vartheta}, \end{aligned}$$ which yields $\| (K_{\epsilon} |\cdot|^{\vartheta}) * f^2 \|_{L^1} \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|f\|_{L^{2}}^2.$ Following the same argument as before, we have $$\begin{aligned} && \int B^{\epsilon}_{1} |(W_{l}g)_* W_{l}(\kappa(v))(\nabla h)(\kappa(v))|^2 |v'-v|^2 \sin\frac{\theta}{2} |v-v_*|^{-\vartheta} \mathrm{d}V \mathrm{d}\kappa \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|g\|_{H^{1}_{l}}^2 \|h\|_{H^{2}_{l}}^2 .
\end{aligned}$$ Patching together the above estimates, we arrive at [\[g-h-total-vartheta\]](#g-h-total-vartheta){reference-type="eqref" reference="g-h-total-vartheta"}. ◻ Thanks to Propositions [Proposition 5](#UpQ1Sob){reference-type="ref" reference="UpQ1Sob"}, [Proposition 6](#UpQ2Sob){reference-type="ref" reference="UpQ2Sob"} and [Proposition 1](#Q-3){reference-type="ref" reference="Q-3"}, we get the following upper bounds of $Q$ uniformly in $\epsilon$. **Theorem 4**. *It holds that $$\begin{aligned} \label{l2-h2-l2} |\langle Q(g,h), f \rangle| \lesssim I_{3}(\|g\|_{L^1} + \|g\|_{L^2}) \|h\|_{L^2}\|f\|_{H^2} , \\ \label{l2-l2-h2} |\langle Q(g,h), f \rangle| \lesssim (I_{3} + I^{\prime}_{3})(\|g\|_{L^1} + \|g\|_{L^2}) \|h\|_{H^2}\|f\|_{L^2}. \end{aligned}$$* As a direct application, we get the following upper bounds of $R$ with the small factor $\epsilon^{3}$. **Theorem 5**. *It holds that $$\begin{aligned} \label{R-general-rough-versionh-h2-h2-h2} |\langle R (g, h, \rho), f \rangle| \lesssim \epsilon^{3} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{2}_2} \|h\|_{H^{2}} \|\rho\|_{H^{2}} \|f\|_{L^2}. \end{aligned}$$* *Proof.* We observe that $|\langle R (g, h, \rho), f \rangle|= \epsilon^{3} |\langle Q(g,h),\rho f \rangle+\langle Q(g,hf),\rho \rangle +\langle Q(g\rho,h),f \rangle+\langle Q(hf, \rho),g \rangle+\langle Q(hf, g),\rho \rangle + \mathcal{I}_1 + \mathcal{I}_2|$, where $\mathcal{I}_1 = \int B g_*(h-h')(\rho_*'-\rho_*)f' \mathrm{d}V$ and $\mathcal{I}_2 = \int B (hf)_*(g\rho-(g\rho)')\mathrm{d}V.$ For the terms containing $Q$, we appropriately use [\[l2-h2-l2\]](#l2-h2-l2){reference-type="eqref" reference="l2-h2-l2"} and [\[l2-l2-h2\]](#l2-l2-h2){reference-type="eqref" reference="l2-l2-h2"} to get that $$\begin{aligned} && |\langle Q(g,h),\rho f \rangle|+|\langle Q(g,hf),\rho \rangle| +|\langle Q(g\rho,h),f \rangle|+|\langle Q(hf, \rho),g \rangle|+|\langle Q(hf, g),\rho \rangle| \\ &\lesssim& (I_{3} + I^{\prime}_{3}) \|g\|_{H^{2}_2} \|h\|_{H^{2}} \|\rho\|_{H^{2}} \|f\|_{L^2}. \end{aligned}$$ We first separate $\mathcal{I}_1$ into two parts: $$\begin{aligned} |\mathcal{I}_1| \leq 2 \int B^{\epsilon}_1 |g_*(h-h')(\rho_*'-\rho_*)f'| \mathrm{d}V + 2 \int B^{\epsilon}_3 |g_*(h-h')(\rho_*'-\rho_*)f'| \mathrm{d}V . \end{aligned}$$ By [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"}, the integral containing $B^{\epsilon}_3$ is bounded by $I_{3} \|g\|_{H^{1}} \|h\|_{H^{1}} \|\rho\|_{H^{1}} \|f\|_{L^2}$. Thanks to [\[2-2-difference\]](#2-2-difference){reference-type="eqref" reference="2-2-difference"}, the integral containing $B^{\epsilon}_1$ is bounded by $$\begin{aligned} \left( \int B^{\epsilon}_1 g_*^2 (h-h')^2 \mathrm{d}V \right)^{1/2} \left( \int B^{\epsilon}_1 (\rho'-\rho)^2 f^2_* \mathrm{d}V \right)^{1/2} \lesssim I_{3} \|g\|_{L^{2}} \|h\|_{H^{2}} \|\rho\|_{H^{2}} \|f\|_{L^2}. \end{aligned}$$ As for $\mathcal{I}_2$, we separate it into three parts according to the fact that $B = B^{\epsilon}_1 + B^{\epsilon}_2 +B^{\epsilon}_3$. 
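Explicitly, recalling that $\mathcal{I}_2 = \int B (hf)_*\big(g\rho-(g\rho)'\big)\mathrm{d}V$, this splitting reads $$\begin{aligned} \mathcal{I}_2 = \int B^{\epsilon}_1 (hf)_*\big(g\rho-(g\rho)'\big)\mathrm{d}V + \int B^{\epsilon}_2 (hf)_*\big(g\rho-(g\rho)'\big)\mathrm{d}V + \int B^{\epsilon}_3 (hf)_*\big(g\rho-(g\rho)'\big)\mathrm{d}V. \end{aligned}$$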
Then applying [\[Q1-cancealltion-g-h-f\]](#Q1-cancealltion-g-h-f){reference-type="eqref" reference="Q1-cancealltion-g-h-f"}, [\[Q2-cancealltion-g-h-f\]](#Q2-cancealltion-g-h-f){reference-type="eqref" reference="Q2-cancealltion-g-h-f"} and [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"} to the integrals containing $B^{\epsilon}_1, B^{\epsilon}_2$ and $B^{\epsilon}_3$ respectively, we arrive at $$\begin{aligned} |\mathcal{I}_2| \lesssim (I_{3} + I^{\prime}_{3}) \|g\|_{H^{2}} \|h\|_{H^{2}} \|\rho\|_{H^{2}} \|f\|_{L^2}. \end{aligned}$$ We complete the proof by patching together the above estimates. ◻ In the forthcoming lemma, we give another estimate of the commutator between the operator $R$ and the weight function $W_l$. Compared with [\[rough-commutator-R-rho-L-infty\]](#rough-commutator-R-rho-L-infty){reference-type="eqref" reference="rough-commutator-R-rho-L-infty"} and [\[rough-commutator-R\]](#rough-commutator-R){reference-type="eqref" reference="rough-commutator-R"}, here we can get rid of $I_{0}$ and keep the small factor $\epsilon^{3} I_{3}$ by imposing more regularity on the involved functions. **Lemma 12**. *Let $l \geq 2$. Then $$\begin{aligned} \label{R-commutator-rough-version} |\langle R(g, h, \rho) W_{l}-R(g, W_{l}h, \rho), f \rangle| \leq \epsilon^{3} C_{l} I_{3} \|g\|_{H^2_l} \|h\|_{H^2_l} \|\rho\|_{H^2} \|f\|_{L^2}. \end{aligned}$$* *Proof.* Recalling [\[Commutator-formula-R\]](#Commutator-formula-R){reference-type="eqref" reference="Commutator-formula-R"}, it is easy to check that $$\begin{aligned} |\mathcal{K}| &=& \epsilon^{3} |\int B^{\epsilon} g_*h (\rho' + \rho'_*) f'( W_{l}' - W_{l})\mathrm{d}V| = \epsilon^{3} |\int B g'_* h' (\rho + \rho_*) f( W_{l}' - W_{l})\mathrm{d}V| \\ &\leq& \epsilon^{3} |\int B^{\epsilon} g_* h (\rho + \rho_*) f( W_{l}' - W_{l})\mathrm{d}V| + \epsilon^{3} |\int B^{\epsilon} (g'_*-g_*) h' (\rho + \rho_*) f ( W_{l}' - W_{l})\mathrm{d}V| \\ &&+ \epsilon^{3} |\int B^{\epsilon} g_* (h'-h) (\rho + \rho_*) f( W_{l}' - W_{l})\mathrm{d}V| \colonequals \epsilon^{3} \mathcal{K}_1 + \epsilon^{3} \mathcal{K}_2 + \epsilon^{3} \mathcal{K}_3 .\end{aligned}$$ For $\mathcal{K}_1$, by [\[Taylor1\]](#Taylor1){reference-type="eqref" reference="Taylor1"}, [\[cancell1\]](#cancell1){reference-type="eqref" reference="cancell1"} and [\[order-2-cancellation\]](#order-2-cancellation){reference-type="eqref" reference="order-2-cancellation"}, [\[minus-2\]](#minus-2){reference-type="eqref" reference="minus-2"} and [\[minus-1\]](#minus-1){reference-type="eqref" reference="minus-1"} imply that $|\mathcal{K}_1| \lesssim I_{3}\|g\|_{H^1_l} \|h\|_{L^2_l} \|\rho\|_{L^\infty} \|f\|_{L^2}.$ For $\mathcal{K}_2$, we use $\mathcal{K}_2 \leq 2 \mathcal{K}_2^1 + 2 \mathcal{K}_2^3$ where $\mathcal{K}_2^1 \colonequals \int B^{\epsilon}_1 |(g'_*-g_*) h' (\rho + \rho_*) f ( W_{l}' - W_{l})| \mathrm{d}V$ and $\mathcal{K}_2^3 \colonequals \int B^{\epsilon}_3 |(g'_*-g_*) h' (\rho + \rho_*) f ( W_{l}' - W_{l})|\mathrm{d}V$. By the order-1 Taylor expansion [\[Taylor1-order-1\]](#Taylor1-order-1){reference-type="eqref" reference="Taylor1-order-1"}, [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"} yields that $$\begin{aligned} |\mathcal{K}_2^1| \lesssim \|\rho\|_{L^\infty} \int B^{\epsilon}_1 |(\nabla g)(\iota(v_*)) h' f| W_{l-1}(\iota(v_*)) W_{l-1}(v') |v'-v|^2 \mathrm{d}V \lesssim I_{3} \|g\|_{H^1_l} \|h\|_{L^2_l} \|\rho\|_{L^\infty} \|f\|_{L^2}.
\end{aligned}$$ Using [\[weight-formula-1\]](#weight-formula-1){reference-type="eqref" reference="weight-formula-1"} and [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"}, we have $|\mathcal{K}_2^3| \lesssim I_{3} \|g\|_{L^\infty_l} \|h\|_{L^2_l} \|\rho\|_{L^\infty} \|f\|_{L^2}.$ As a result, by the Sobolev embedding theorem, we get that $|\mathcal{K}_2| \lesssim I_{3} \|g\|_{H^2_l} \|h\|_{L^2_l} \|\rho\|_{H^2} \|f\|_{L^2}.$ Similarly, we can also get $|\mathcal{K}_3| \lesssim I_{3} \|g\|_{L^2_l} \|h\|_{H^2_l} \|\rho\|_{H^2} \|f\|_{L^2}.$ These are enough to conclude [\[R-commutator-rough-version\]](#R-commutator-rough-version){reference-type="eqref" reference="R-commutator-rough-version"}. ◻ By the upper bound estimate [\[R-general-rough-versionh-h2-h2-h2\]](#R-general-rough-versionh-h2-h2-h2){reference-type="eqref" reference="R-general-rough-versionh-h2-h2-h2"} and the commutator estimate [\[R-commutator-rough-version\]](#R-commutator-rough-version){reference-type="eqref" reference="R-commutator-rough-version"}, we have the following upper bound estimate for $R$ in weighted Sobolev spaces. **Proposition 7**. *Let $l \geq 2$. Then $$\begin{aligned} \label{R-general-rough-versionh-h2-h2-h2-weighted} |\langle R (g, h, \rho), W_l f \rangle| \leq \epsilon^{3} C_{l} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{2}_l} \|h\|_{H^{2}_l} \|\rho\|_{H^{2}} \|f\|_{L^2}. \end{aligned}$$* Note that the small factor $\epsilon^{3}$ is kept in [\[R-general-rough-versionh-h2-h2-h2-weighted\]](#R-general-rough-versionh-h2-h2-h2-weighted){reference-type="eqref" reference="R-general-rough-versionh-h2-h2-h2-weighted"}, which shows that $R$ is a smaller term if the involved functions are regular enough. Proposition [Proposition 7](#weighted-R-rough){reference-type="ref" reference="weighted-R-rough"} will be used in the last section to derive the asymptotic formula in Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. # Uniform upper bounds in weighted Sobolev space In this section, we will close the energy estimate for the Uehling-Uhlenbeck operator $Q_{UU}^{\epsilon}$ uniformly in $\epsilon$ in the weighted Sobolev space $H^{N}_{l}$. More precisely, we will derive **Theorem 6**. *Let $N, l \ge 2$. Let $f \geq 0$. Then $$\begin{aligned} \label{energy-in-H-N-l-uniform-in-eps} \sum_{|\alpha| \leq N} \langle W_{l} \partial^{\alpha} Q_{UU}^{\epsilon}(f), W_{l} \partial^{\alpha}f \rangle \lesssim_{N,l} (I_{0} + I_{3} + I^{\prime}_{3}) \|f\|_{H^{N}_{l}}^2 (\|f\|_{H^{N}_{l}} + \|f\|_{H^{N}_{l}}^2).
\end{aligned}$$* *Proof.* Recalling [\[OPUU\]](#OPUU){reference-type="eqref" reference="OPUU"} and [\[OPQ\]](#OPQ){reference-type="eqref" reference="OPQ"}, $Q_{UU}^{\epsilon}(f) = Q(f,f) + R(f,f,f)$ and $Q(f,f) = Q_1(f,f) + Q_2(f,f) + Q_3(f,f)$, we write $$\begin{aligned} \partial^{\alpha} Q_{UU}^{\epsilon}(f) = \partial^{\alpha} Q(f,f) + \partial^{\alpha} R(f,f,f), \end{aligned}$$ and $$\begin{aligned} \label{highest-order} \partial^{\alpha} Q(f,f) &=& Q(f, \partial^{\alpha}f) \\ \label{pul-order} &&+ \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1| = 1} C^{\alpha_1}_{\alpha} Q_1(\partial^{\alpha_1}f, \partial^{\alpha_2}f) \\ \label{less-than-minus2-Q1} &&+ \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1| \ge 2} C^{\alpha_1}_{\alpha} Q_1(\partial^{\alpha_1}f, \partial^{\alpha_2}f) \\ \label{less-than-minus1-Q2} &&+ \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1| \ge 1} C^{\alpha_1}_{\alpha} Q_2(\partial^{\alpha_1}f, \partial^{\alpha_2}f) \\ \label{less-than-minus1-Q3} &&+ \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1| \ge 1} C^{\alpha_1}_{\alpha} Q_3(\partial^{\alpha_1}f, \partial^{\alpha_2}f) . \end{aligned}$$ Let us give the estimates term by term. We first observe that the term $Q(f, \partial^{\alpha}f)$ in [\[highest-order\]](#highest-order){reference-type="eqref" reference="highest-order"} can be dealt with by the forthcoming coercivity estimate [\[H-2-result\]](#H-2-result){reference-type="eqref" reference="H-2-result"}. More precisely, using $f \geq 0$, $$\begin{aligned} \label{coercivity-application} \langle Q(f, \partial^{\alpha}f) W_l, W_l \partial^{\alpha}f \rangle \leq C_l (I_{3} + I^{\prime}_{3}) \|f\|_{H^2_l}\|\partial^{\alpha}f\|_{L^2_l}^2. \end{aligned}$$ The term in [\[pul-order\]](#pul-order){reference-type="eqref" reference="pul-order"} will be treated in Lemma [Lemma 17](#Q1EN-g-order-1){reference-type="ref" reference="Q1EN-g-order-1"}. This is referred to as the penultimate order term. The terms in [\[less-than-minus2-Q1\]](#less-than-minus2-Q1){reference-type="eqref" reference="less-than-minus2-Q1"} and in [\[less-than-minus1-Q2\]](#less-than-minus1-Q2){reference-type="eqref" reference="less-than-minus1-Q2"} will be handled in Lemma [Lemma 15](#Q1EN){reference-type="ref" reference="Q1EN"} and in Lemma [Lemma 14](#Q2EN){reference-type="ref" reference="Q2EN"} respectively. Then we conclude [\[energy-in-H-N-l-uniform-in-eps\]](#energy-in-H-N-l-uniform-in-eps){reference-type="eqref" reference="energy-in-H-N-l-uniform-in-eps"} since the terms $Q_3(\partial^{\alpha_1}f, \partial^{\alpha_2}f)$ in [\[less-than-minus1-Q3\]](#less-than-minus1-Q3){reference-type="eqref" reference="less-than-minus1-Q3"} and $\partial^{\alpha} R(f,f,f)$ have been estimated by Lemma [Lemma 5](#Q3EN){reference-type="ref" reference="Q3EN"} and Lemma [Lemma 11](#UpR1R2){reference-type="ref" reference="UpR1R2"}. ◻ In the following theorem, we derive two types of coercivity estimates of $Q$. **Theorem 7** (Coercivity estimate of $Q$). *Let $l\ge2$. Let $g \geq 0$.
Then $$\begin{aligned} \label{L-infty-result} -\langle Q(g,f)W_l,fW_l \rangle &\ge& \frac 18 \int (B^{\epsilon}+B^{\epsilon}_{1})g_*((fW_l)'-fW_l)^2\mathrm{d}V \\ \nonumber &&- C_l (I_{3} + I^{\prime}_{3}) \|g\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}}^2 -C_l (I_{3} + I^{\prime}_{3}) \|g\|_{L^2_{l}}\|f\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}}, \\ \label{H-2-result} -\langle Q(g,f)W_l,fW_l \rangle &\ge& \frac 18 \int (B^{\epsilon}+B^{\epsilon}_{1})g_*((fW_l)'-fW_l)^2\mathrm{d}V -C_l (I_{3} + I^{\prime}_{3}) \|g\|_{H^2_l}\|f\|_{L^2_l}^2. \end{aligned}$$* *Proof.* We observe that $-\langle Q(g,f)W_l,fW_l \rangle=-\langle Q(g,fW_l),fW_l \rangle-\langle Q(g,f)W_l-Q(g,fW_l),fW_l \rangle$ and $-\langle Q(g,fW_l),fW_l \rangle=\frac 12 \int B^{\epsilon}g_*((fW_l)'-fW_l)^2\mathrm{d}V +\frac 12 \int B^{\epsilon}g_*((f^2W_l^2)-(f^2W_l^2)')\mathrm{d}V .$ We first notice that by [\[commutator-Q-for-highest-order-l-infty\]](#commutator-Q-for-highest-order-l-infty){reference-type="eqref" reference="commutator-Q-for-highest-order-l-infty"} the commutator can be estimated as follows: for $0<\eta<1$, $$\begin{aligned} &&|\langle Q(g,f)W_l-Q(g,fW_l),fW_l \rangle| \\ &\lesssim_{l}& \eta \int B^{\epsilon}_1 g_* ((fW_l)'-fW_l)^2 \mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}}^2 + I_{3} \|g\|_{L^2_{l}}\|f\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}} . \end{aligned}$$ Then Lemma [Lemma 6](#cancellem){reference-type="ref" reference="cancellem"} and [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"} imply that $$\begin{aligned} \big|\int B^{\epsilon}g_*((f^2W_l^2)-(f^2W_l^2)')\mathrm{d}V\big| \lesssim (I_{3} + I^{\prime}_{3}) \|g\|_{L^\infty}\|f\|_{L^2_l}^2 . \end{aligned}$$ By taking $\eta$ small enough, we derive that $$\begin{aligned} -\langle Q(g,f)W_l,fW_l \rangle &\ge& \frac 38 \bigg(\int B^{\epsilon}g_*((fW_l)'-fW_l)^2\mathrm{d}V\bigg) \\&&- C_l (I_{3} + I^{\prime}_{3}) \|g\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}}^2 -C_l (I_{3} + I^{\prime}_{3}) \|g\|_{L^2_{l}}\|f\|_{L^1 \cap L^\infty}\|f\|_{L^2_{l}} .\end{aligned}$$ Using the fact that $B^{\epsilon} \ge \frac 12 B^{\epsilon}_1 - B^{\epsilon}_3$, we get $$\begin{aligned} \int B^{\epsilon}g_*((fW_l)'-fW_l)^2\mathrm{d}V\ge \frac 12\int B^{\epsilon}_1g_*((fW_l)'-fW_l)^2\mathrm{d}V-\int B^{\epsilon}_3g_*((fW_l)'-fW_l)^2\mathrm{d}V.\end{aligned}$$ By [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"}, we have $\int B^{\epsilon}_3g_*((fW_l)'-fW_l)^2\mathrm{d}V \lesssim I_{3} \|g\|_{L^\infty}\|f\|_{L^2_l}^2$. Then [\[L-infty-result\]](#L-infty-result){reference-type="eqref" reference="L-infty-result"} follows. If [\[commutator-Q-for-highest-order\]](#commutator-Q-for-highest-order){reference-type="eqref" reference="commutator-Q-for-highest-order"} is applied to $\langle Q(g,f)W_l-Q(g,fW_l),fW_l \rangle$, we get [\[H-2-result\]](#H-2-result){reference-type="eqref" reference="H-2-result"}. ◻ **Remark 10**.
*Noting that $B^{\epsilon}_1 \ge \frac 12 B^{\epsilon}-B^{\epsilon}_3$, the estimates in Theorem [Theorem 7](#Lowerbounds){reference-type="ref" reference="Lowerbounds"} still hold if $Q$ is replaced by $Q_1$.* ## Commutator estimates and weighted upper bounds of $Q$ from the angular non-cutoff perspective We have the following commutator estimates that are uniform in $\epsilon$. **Lemma 13**. *For $i=1,2,3$, we define $\mathcal{I}_i\colonequals \langle Q_i(g,h)W_l-Q_i(g,hW_l),f \rangle$ and $\mathcal{I} \colonequals \langle Q(g,h)W_l-Q(g,hW_l),f \rangle.$*

1. *If $g\ge0$, then for $\eta>0$, $$\begin{aligned} \label{commutator-Q1-for-highest-order} &&|\mathcal{I}_1| \lesssim_{l} \eta \int B^{\epsilon}_1g_*(f'-f)^2\mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^1 \cap L^2}\|h\|_{L^2_{l-1}}^2 + I_{3}(\|g\|_{L^2_l}\|h\|_{L^2}+ \|g\|_{L^{1} \cap H^{1}}\|h\|_{L^2_{l-1}}) \|f\|_{L^2}, \\ \label{commutator-Q1-for-highest-order-l-infty} &&|\mathcal{I}_1| \lesssim_{l} \eta \int B^{\epsilon}_1g_*(f'-f)^2\mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^1 \cap L^2}\|h\|_{L^2_{l-1}}^2 + I_{3} (\|g\|_{L^2_l}\|h\|_{L^2}+\|g\|_{L^{1} \cap L^{\infty}}\|h\|_{L^2_{l-1}}) \|f\|_{L^2}. \end{aligned}$$ In general, it holds that $$\begin{aligned} \label{commu-Q1-with-Wl} |\mathcal{I}_1| \lesssim_{l} I_{3} (\|g\|_{L^2_2}\|h\|_{H^1_{l-1}}+\|g\|_{L^2_l}\|h\|_{H^1_1} )\|f\|_{L^2}. \end{aligned}$$*

2. *For $(a,b)=(1,0)$ or $(0,1)$, it holds that $$\begin{aligned} \label{commu-Q2-with-Wl} |\mathcal{I}_2| \lesssim_{l} I_{3} (\|g\|_{H^a}\|h\|_{H^b_{l-1}}+ \|g\|_{H^a_{l-1}}\|h\|_{H^b} )\|f\|_{L^2}, \\ \label{commu-Q2-with-Wl-l-infty} |\mathcal{I}_2| \lesssim_{l} I_{3} (\|g\|_{L^1 \cap L^\infty}\|h\|_{L^2_{l-1}} + \|g\|_{L^2_{l-1}}\|h\|_{L^1 \cap L^\infty})\|f\|_{L^2}. \end{aligned}$$*

3. *For $\delta>0$ and $c,d \geq 0$ with $c+d = \frac 32 + \delta$, it holds that $$\begin{aligned} \label{commu-Q3-with-Wl} |\mathcal{I}_3| \lesssim_{l, \delta} I_{3} (\|g\|_{H^c}\|h\|_{H^d_{l}} + \|g\|_{H^c_l}\|h\|_{H^d})\|f\|_{L^2}, \\ \label{commu-Q3-with-Wl-L-infty} |\mathcal{I}_3| \lesssim_{l, \delta} I_{3} (\|g\|_{L^\infty}\|h\|_{L^2_l} + \|g\|_{L^2_l}\|h\|_{L^\infty})\|f\|_{L^2}. \end{aligned}$$*

4. *If $g\ge0$, then for $\eta>0$, $$\begin{aligned} \label{commutator-Q-for-highest-order} |\mathcal{I}| \lesssim_{l} \eta \int B^{\epsilon}g_*(f'-f)^2\mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^2_2}\|h\|_{L^2_{l-1}}^2 + I_{3} \|g\|_{H^2_l}\|h\|_{L^2_l} \|f\|_{L^2}. \end{aligned}$$ In general, $$\begin{aligned} \label{commutator-Q-for-highest-order-l-infty} |\mathcal{I}| &\lesssim_{l}& \eta \int B^{\epsilon}_1g_*(f'-f)^2\mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^1 \cap L^2}\|h\|_{L^2_{l-1}}^2 \\ \nonumber &&+ I_{3} (\|g\|_{L^1 \cap L^\infty}\|h\|_{L^2_{l}} + \|g\|_{L^2_{l}}\|h\|_{L^1 \cap L^\infty})\|f\|_{L^2}. \end{aligned}$$*

*Proof.* It is easy to compute that $$\begin{aligned} \label{CQ1}\mathcal{I}_i=\int B^{\epsilon}_ig_*hf'(W_l'-W_l)\mathrm{d}V.\end{aligned}$$ **[*Step 1: Estimate of $\mathcal{I}_1$.*]{.ul}** We first consider the case that $g\ge0$.
By the Taylor expansion [\[Taylor1\]](#Taylor1){reference-type="eqref" reference="Taylor1"} for $W_l'-W_l$, we have $\mathcal{I}_1= \mathcal{I}_1^1 + \mathcal{I}_1^2$ where $\mathcal{I}_1^1 \colonequals \int B^{\epsilon}_1g_*h f' (\nabla W_l)(v)\cdot (v'-v)\mathrm{d}V$ and $\mathcal{I}_1^2 \colonequals \int B^{\epsilon}_1g_*h f' (1-\kappa) (\nabla^2 W_{l})(\kappa(v)):(v'-v)\otimes(v'-v)\mathrm{d}\kappa \mathrm{d}V.$ For $\mathcal{I}_1^1$, we have $$\begin{aligned} \mathcal{I}_1^1 = \int B^{\epsilon}_1g_*h(f'-f) (\nabla W_l)(v)\cdot (v'-v)\mathrm{d}V+\int B^{\epsilon}_1g_*hf (\nabla W_l)(v)\cdot (v'-v)\mathrm{d}V \colonequals \mathcal{I}_1^{1,1}+\mathcal{I}_1^{1,2} .\end{aligned}$$ [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"} implies that $$\begin{aligned} |\mathcal{I}_1^{1,1}|&\lesssim& \bigg(\int B^{\epsilon}_1g_*(f'-f)^2\mathrm{d}V\bigg)^{\frac 12} \bigg( \int B^{\epsilon}_1g_*|hW_{l-1}|^2|v'-v|^2\mathrm{d}V\bigg)^{\frac 12}\\ &\lesssim& \eta \int B^{\epsilon}_1g_*(f'-f)^2\mathrm{d}V + \eta^{-1} I_{3} \|g\|_{L^{1} \cap L^2}\|h\|_{L^2_{l-1}}^2.\end{aligned}$$ Using [\[cancell1\]](#cancell1){reference-type="eqref" reference="cancell1"}, [\[order-2-cancellation\]](#order-2-cancellation){reference-type="eqref" reference="order-2-cancellation"} and [\[minus-2\]](#minus-2){reference-type="eqref" reference="minus-2"}, we have $$\begin{aligned} % \label{symtrisigma} |\mathcal{I}_1^{1,2}|&=&\big|\int B^{\epsilon}_1g_*hf (\nabla W_l)(v)\cdot (v-v_*) \sin^2(\theta/2) \mathrm{d}V\big| \\ &\lesssim& I_{3} \int |v-v_*|^{-2} g_*|hW_{l-1}|f\mathrm{d}v_* \mathrm{d}v \lesssim I_{3} \|g\|_{H^1}\|h\|_{L^2_{l-1}}\|f\|_{L^2} \text{ or } I_{3} \|g\|_{L^2 \cap L^{\infty}} \|h\|_{L^2_{l-1}}\|f\|_{L^2} .\end{aligned}$$ For $\mathcal{I}_1^2$, by applying [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"}, one has $$\begin{aligned} \qquad \label{I-1-2-upper-bound} |\mathcal{I}_1^2| \lesssim \int B^{\epsilon}_1 |g_*||h||f'||v'-v|^2 (W_{l-2} + (W_{l-2})_* ) \mathrm{d}V \lesssim I_{3} (\|g\|_{L^2_l}\|h\|_{L^2}+\|g\|_{L^1 \cap L^2}\|h\|_{L^2_{l-2}})\|f\|_{L^2}. \end{aligned}$$ Patching together the above estimates, we get [\[commutator-Q1-for-highest-order\]](#commutator-Q1-for-highest-order){reference-type="eqref" reference="commutator-Q1-for-highest-order"} and [\[commutator-Q1-for-highest-order-l-infty\]](#commutator-Q1-for-highest-order-l-infty){reference-type="eqref" reference="commutator-Q1-for-highest-order-l-infty"} by using $\|g\|_{L^2 } \lesssim \|g\|_{L^1} + \|g\|_{L^{\infty}}$. We next turn to the general case. 
Using [\[Taylor2\]](#Taylor2){reference-type="eqref" reference="Taylor2"} for $W_l'-W_l$, we have $\mathcal{I}_1 =\mathcal{I}_1^3+\mathcal{I}_1^4$ where $\mathcal{I}_1^3 \colonequals \int B^{\epsilon}_1g_*h f' (\nabla W_{l})(v')\cdot(v'-v) \mathrm{d}V$ and $\mathcal{I}_1^4 \colonequals -\int B^{\epsilon}_1g_*h f' \kappa(\nabla^2 W_{l})(\kappa(v)):(v'-v)\otimes(v'-v)\mathrm{d}\kappa \mathrm{d}V.$ For $\mathcal{I}_1^3$, thanks to [\[cancell2\]](#cancell2){reference-type="eqref" reference="cancell2"}, we observe that $$\begin{aligned} |\mathcal{I}_1^3| = \big|\int B^{\epsilon}_1g_*(h-h')f'(\nabla W_l)(v')(v'-v)\mathrm{d}V\big|.\end{aligned}$$ Then by the order-1 Taylor expansion for $h-h'$, suitably using [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"}, we get $$\begin{aligned} |\mathcal{I}_1^3| &\lesssim& \int B^{\epsilon}_1 |g_*||(\nabla h)(\kappa(v))||f'||v'-v|^2(W_{l-1}(v_*)+W_{l-1}(\kappa(v))) \mathrm{d}\kappa \mathrm{d}V \\ &\lesssim& I_{3} \|g\|_{L^2_{l}} \|h\|_{H^1_{1}} \|f\|_{L^2} + I_{3} \|g\|_{L^2_2}\|h\|_{H^1_{l-1}}\|f\|_{L^2}. \end{aligned}$$ By comparing $\mathcal{I}_1^2$ and $\mathcal{I}_1^4$, it is not difficult to see that $\mathcal{I}_1^4$ is also bounded by [\[I-1-2-upper-bound\]](#I-1-2-upper-bound){reference-type="eqref" reference="I-1-2-upper-bound"}. We get [\[commu-Q1-with-Wl\]](#commu-Q1-with-Wl){reference-type="eqref" reference="commu-Q1-with-Wl"}. **[*Step 2: Estimate of $\mathcal{I}_2$.*]{.ul}** Thanks to the Taylor expansion [\[Taylor1\]](#Taylor1){reference-type="eqref" reference="Taylor1"} for $W_l'-W_l$, we deduce that $$\begin{aligned} |\mathcal{I}_2|\lesssim \int |B^{\epsilon}_2| |g_*||h||f'| |v-v'| \big(W_{l-1}+(W_{l-1})_*\big) \mathrm{d}V. \end{aligned}$$ The estimates [\[Q1-result-with-factor-a-minus3-type-2\]](#Q1-result-with-factor-a-minus3-type-2){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-type-2"} and [\[general-L1\]](#general-L1){reference-type="eqref" reference="general-L1"} then yield that $$\begin{aligned} |\mathcal{I}_2| &\lesssim& \bigg(\int |B^{\epsilon}_1| (|g_*|^{2}|W_{l-1} h|^2 + |(W_{l-1}g)_*|^{2}|h |^2)|v-v'|^2\mathrm{d}V\bigg)^{\frac 12} \bigg(\int |B^{\epsilon}_3| |f'|^2\mathrm{d}V\bigg)^{\frac 12} \\ &\lesssim& I_{3} \bigg(\int (|g_*|^{2}|W_{l-1} h|^2 + |(W_{l-1}g)_*|^{2}|h |^2)|v-v_{*}|^{-1} \mathrm{d}v \mathrm{d}v_{*} \bigg)^{\frac 12} \|f\|_{L^2}. \end{aligned}$$ Now we can use [\[2-2-minus-1-s1-s2\]](#2-2-minus-1-s1-s2){reference-type="eqref" reference="2-2-minus-1-s1-s2"} to get [\[commu-Q2-with-Wl\]](#commu-Q2-with-Wl){reference-type="eqref" reference="commu-Q2-with-Wl"} and use [\[minus-1\]](#minus-1){reference-type="eqref" reference="minus-1"} to get [\[commu-Q2-with-Wl-l-infty\]](#commu-Q2-with-Wl-l-infty){reference-type="eqref" reference="commu-Q2-with-Wl-l-infty"}. **[*Step 3: Estimate of $\mathcal{I}_3$.*]{.ul}** Since $W_l' + W_l \lesssim W_l + (W_l)_*$, [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"} implies [\[commu-Q3-with-Wl-L-infty\]](#commu-Q3-with-Wl-L-infty){reference-type="eqref" reference="commu-Q3-with-Wl-L-infty"}.
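For the bound $W_l' + W_l \lesssim W_l + (W_l)_*$ used here, a short justification (assuming the standard weight convention $W_{l}(v) = (1+|v|^{2})^{l/2}$ and the energy conservation $|v'|^{2} + |v'_*|^{2} = |v|^{2} + |v_*|^{2}$) reads $$\begin{aligned} W_{l}' = (1+|v'|^{2})^{l/2} \leq (1+|v|^{2}+|v_*|^{2})^{l/2} \leq 2^{l/2} \big( W_{l} + (W_{l})_* \big). \end{aligned}$$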
Similarly, using [\[general-k-functions-infty-norm\]](#general-k-functions-infty-norm){reference-type="eqref" reference="general-k-functions-infty-norm"} and [\[general-k-functions-h-norm\]](#general-k-functions-h-norm){reference-type="eqref" reference="general-k-functions-h-norm"}, one can easily get [\[commu-Q3-with-Wl\]](#commu-Q3-with-Wl){reference-type="eqref" reference="commu-Q3-with-Wl"}. Obviously, [\[commutator-Q-for-highest-order\]](#commutator-Q-for-highest-order){reference-type="eqref" reference="commutator-Q-for-highest-order"} and [\[commutator-Q-for-highest-order-l-infty\]](#commutator-Q-for-highest-order-l-infty){reference-type="eqref" reference="commutator-Q-for-highest-order-l-infty"} follow from [\[commutator-Q1-for-highest-order\]](#commutator-Q1-for-highest-order){reference-type="eqref" reference="commutator-Q1-for-highest-order"}, [\[commu-Q2-with-Wl\]](#commu-Q2-with-Wl){reference-type="eqref" reference="commu-Q2-with-Wl"}, [\[commu-Q3-with-Wl\]](#commu-Q3-with-Wl){reference-type="eqref" reference="commu-Q3-with-Wl"} and from [\[commutator-Q1-for-highest-order-l-infty\]](#commutator-Q1-for-highest-order-l-infty){reference-type="eqref" reference="commutator-Q1-for-highest-order-l-infty"}, [\[commu-Q2-with-Wl-l-infty\]](#commu-Q2-with-Wl-l-infty){reference-type="eqref" reference="commu-Q2-with-Wl-l-infty"}, [\[commu-Q3-with-Wl-L-infty\]](#commu-Q3-with-Wl-L-infty){reference-type="eqref" reference="commu-Q3-with-Wl-L-infty"}, respectively. ◻ With the upper bounds in the previous section and the commutator estimates in Lemma [Lemma 13](#COMT2W){reference-type="ref" reference="COMT2W"}, we are ready to state the following weighted upper bounds for $Q_1$ and $Q_2$. **Proposition 8**. *Let $l \geq 2$ and $(a,b)=(1,0)$ or $(0,1)$. Then $$\begin{aligned} \label{Q1-h1-h2-l2-weighted} |\langle Q_1(g, h), W_l f \rangle| \lesssim_{l} I_{3} \|g\|_{L^{2}_l} \|h\|_{H^{2}_l} \|f\|_{L^2}, \\ \label{Q2-h1-h1-l2-weighted} |\langle Q_2(g, h), W_l f \rangle| \lesssim_{l} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{a}_l} \|h\|_{H^{b+1}_l} \|f\|_{L^2}. \end{aligned}$$* *Proof.* It is not difficult to conclude that [\[Q1-h1-h2-l2-weighted\]](#Q1-h1-h2-l2-weighted){reference-type="eqref" reference="Q1-h1-h2-l2-weighted"} and [\[Q2-h1-h1-l2-weighted\]](#Q2-h1-h1-l2-weighted){reference-type="eqref" reference="Q2-h1-h1-l2-weighted"} follow from [\[Q1-g-h-f-L2-H2-L2\]](#Q1-g-h-f-L2-H2-L2){reference-type="eqref" reference="Q1-g-h-f-L2-H2-L2"} and [\[commu-Q1-with-Wl\]](#commu-Q1-with-Wl){reference-type="eqref" reference="commu-Q1-with-Wl"}, and from [\[g-h-total-3over2\]](#g-h-total-3over2){reference-type="eqref" reference="g-h-total-3over2"} and [\[commu-Q2-with-Wl\]](#commu-Q2-with-Wl){reference-type="eqref" reference="commu-Q2-with-Wl"}, respectively. ◻ Now we are ready to state the weighted upper bound of the operator $Q_{UU}^{\epsilon}$. **Theorem 8**. *Let $N, l \geq 2$. Then $$\begin{aligned} \label{UU-general-rough-versionh-h2-h2-h2-weighted} \| Q_{UU}^{\epsilon}(f) \|_{L^{2}_{l}} \leq C_{l} (I_{3} + I^{\prime}_{3}) \| f \|_{H^{2}_{l}}^2 + \epsilon^{3} C_{l} (I_{3} + I^{\prime}_{3}) \| f \|_{H^{2}_{l}}^3, \\ \label{lose-order-2-derivative} \| Q_{UU}^{\epsilon}(f) \|_{H^{N-2}_{l}} \leq C_{l,N} (I_{3} + I^{\prime}_{3}) \| f \|_{H^{N}_{l}}^2 + \epsilon^{3} C_{l,N} (I_{3} + I^{\prime}_{3}) \| f \|_{H^{N}_{l}}^3.
\end{aligned}$$* *Proof.* Recalling $Q_{UU}^{\epsilon}(f) = Q_1(f, f) + Q_2(f, f) + Q_3(f, f) + R(f, f, f)$, by [\[Q1-h1-h2-l2-weighted\]](#Q1-h1-h2-l2-weighted){reference-type="eqref" reference="Q1-h1-h2-l2-weighted"}, [\[Q2-h1-h1-l2-weighted\]](#Q2-h1-h1-l2-weighted){reference-type="eqref" reference="Q2-h1-h1-l2-weighted"}, [\[Q-3-upper-bound\]](#Q-3-upper-bound){reference-type="eqref" reference="Q-3-upper-bound"} and [\[R-general-rough-versionh-h2-h2-h2-weighted\]](#R-general-rough-versionh-h2-h2-h2-weighted){reference-type="eqref" reference="R-general-rough-versionh-h2-h2-h2-weighted"}, we derive [\[UU-general-rough-versionh-h2-h2-h2-weighted\]](#UU-general-rough-versionh-h2-h2-h2-weighted){reference-type="eqref" reference="UU-general-rough-versionh-h2-h2-h2-weighted"}. Of course, [\[lose-order-2-derivative\]](#lose-order-2-derivative){reference-type="eqref" reference="lose-order-2-derivative"} is a direct result of [\[UU-general-rough-versionh-h2-h2-h2-weighted\]](#UU-general-rough-versionh-h2-h2-h2-weighted){reference-type="eqref" reference="UU-general-rough-versionh-h2-h2-h2-weighted"}. ◻ Note that [\[Q2-h1-h1-l2-weighted\]](#Q2-h1-h1-l2-weighted){reference-type="eqref" reference="Q2-h1-h1-l2-weighted"} allows us to consider $\langle Q_2(\partial^{\alpha_1}g, \partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle$ with $|\alpha_1|\ge 1$. **Lemma 14**. *Let $1 \le m = |\alpha| \le N$, then $$\begin{aligned} \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1|\ge1}|\langle Q_2(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{N,l} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{N}_{l}} \|f\|_{H^{N}_{l}}^2. \end{aligned}$$* *Proof.* If $|\alpha_1| \ge 2$, then $|\alpha_2| \leq m -2$. By taking $s_1 = 0, s_2 = 2, s_3 =0$ in [\[Q-3-upper-bound\]](#Q-3-upper-bound){reference-type="eqref" reference="Q-3-upper-bound"} and $a=0, b=1$ in [\[Q2-h1-h1-l2-weighted\]](#Q2-h1-h1-l2-weighted){reference-type="eqref" reference="Q2-h1-h1-l2-weighted"}, we get that $$\begin{aligned} |\langle Q_2(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{l} (I_{3} + I^{\prime}_{3}) \|\partial^{\alpha_1}g\|_{L^{2}_l} \|\partial^{\alpha_2}f\|_{H^{2}_l} \|\partial^\alpha f\|_{L^2_{l}} \lesssim_{l} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{m}_l} \|f\|_{H^{m}_l}^2. \end{aligned}$$ If $|\alpha_1| = 1$, then $|\alpha_2| \leq m - 1$. By taking $s_1 = 1, s_2 = 1, s_3 =0$ in [\[Q-3-upper-bound\]](#Q-3-upper-bound){reference-type="eqref" reference="Q-3-upper-bound"} and $a=1, b=0$ in [\[Q2-h1-h1-l2-weighted\]](#Q2-h1-h1-l2-weighted){reference-type="eqref" reference="Q2-h1-h1-l2-weighted"}, we get $$\begin{aligned} |\langle Q_2(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{l} (I_{3} + I^{\prime}_{3}) \|\partial^{\alpha_1}g\|_{H^{1}_l} \|\partial^{\alpha_2}f\|_{H^{1}_l} \|\partial^\alpha f\|_{L^2_{l}} \lesssim_{l} (I_{3} + I^{\prime}_{3}) \|g\|_{H^{2}_l} \|f\|_{H^{m}_l}^2. \end{aligned}$$ These are enough to conclude the desired result. ◻ Note that [\[Q1-h1-h2-l2-weighted\]](#Q1-h1-h2-l2-weighted){reference-type="eqref" reference="Q1-h1-h2-l2-weighted"} allows us to consider $\langle Q_1(\partial^{\alpha_1}g, \partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle$ with $|\alpha_1|\ge 2$. **Lemma 15**. 
*Let $2 \le m =|\alpha|\le N$, then $$\begin{aligned} \sum_{\alpha_1+\alpha_2 = \alpha, |\alpha_1| \ge 2} |\langle Q_1(\partial^{\alpha_1}g, \partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{N,l} I_{3} \|g\|_{H^{m}_{l}} \|f\|_{H^{m}_{l}}^2. \end{aligned}$$* *Proof.* Since $|\alpha_1|\ge 2$, we have $|\alpha_2| \leq m -2$. Using [\[Q1-h1-h2-l2-weighted\]](#Q1-h1-h2-l2-weighted){reference-type="eqref" reference="Q1-h1-h2-l2-weighted"}, we have $$\begin{aligned} |\langle Q_1(\partial^{\alpha_1}g, \partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{l} I_{3} \|\partial^{\alpha_1}g\|_{L^{2}_l} \|\partial^{\alpha_2}f\|_{H^{2}_l} \|\partial^\alpha f\|_{L^2_{l}} \lesssim_{l} I_{3} \|g\|_{H^{m}_l} \|f\|_{H^{m}_l}^2. \end{aligned}$$The desired result follows by summation. ◻ ## The penultimate order terms In this subsection, we deal with the penultimate order terms $\langle Q_1(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle$ where $|\alpha_1| = 1$. The forthcoming lemma is motivated by [@chaturvedi2021stability]. For the convenience of interested readers, we reproduce its proof in a clearer way. The proof is based on basic properties of Boltzmann-type integrals and integration by parts. **Lemma 16**. *For simplicity, let $\tilde{\partial}_{i} \colonequals \partial^{e_{i}} + \partial^{e_{i}}_*$ where $\partial^{e_{i}} = \partial_{v_{i}}, \partial^{e_{i}}_* = \partial_{(v_{i})_*}$ for a unit index $|e_{i}| =1$. For a general function depending on the variables $g = g(v, v_*, v', v'_*)$, let $g' = g(v', v'_*, v, v_*)$. For a general kernel $B=B(|v-v_{*}|,\cos\theta)$, and a general function $G = G(v)$, it holds that $$\begin{aligned} \int B g (G^{\prime} - G) \tilde{\partial}_{i} G \mathrm{d}V = \frac{1}{4} \int B \tilde{\partial}_{i} g (G^{\prime} - G)^{2} \mathrm{d}V+ \frac{1}{2} \int B (g - g') (G^{\prime} - G) \tilde{\partial}_{i} G \mathrm{d}V. \end{aligned}$$* *Proof.* We first observe that $$\begin{aligned} \tilde{\partial}_{i} B = 0, \quad \tilde{\partial}_{i} G = \partial^{e_{i}} G, \quad \tilde{\partial}_{i} G_* = (\partial^{e_{i}} G)_*, \quad \tilde{\partial}_{i} G' = (\partial^{e_{i}} G)', \quad \tilde{\partial}_{i} G'_* = (\partial^{e_{i}} G)'_*. \end{aligned}$$ As a result, when integration by parts is used, no derivative falls on $B$. Moreover, $\tilde{\partial}_{i}$ commutes with the evaluations at $v_*$, $v'$ and $v'_*$, acting simply as $\partial^{e_{i}}$ on the underlying function. 
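For instance, writing, as usual, $v'=\frac{v+v_*}{2}+\frac{|v-v_*|}{2}\sigma$ and $v'_*=\frac{v+v_*}{2}-\frac{|v-v_*|}{2}\sigma$, these observations can be verified directly (a short sketch): $$\begin{aligned} \tilde{\partial}_{i} |v-v_*| = \frac{v_{i}-(v_{i})_*}{|v-v_*|} - \frac{v_{i}-(v_{i})_*}{|v-v_*|} = 0, \qquad \tilde{\partial}_{i} v'_{j} = \frac{1}{2}\delta_{ij} + \frac{(v_{i}-(v_{i})_*)\sigma_{j}}{2|v-v_*|} + \frac{1}{2}\delta_{ij} - \frac{(v_{i}-(v_{i})_*)\sigma_{j}}{2|v-v_*|} = \delta_{ij}, \end{aligned}$$ so that $\tilde{\partial}_{i} B = 0$ (as $B$ depends on $(v,v_*)$ only through $|v-v_*|$ and $\cos\theta = \frac{v-v_*}{|v-v_*|}\cdot\sigma$), and $\tilde{\partial}_{i} G' = \sum_{j}(\partial_{j} G)(v')\, \tilde{\partial}_{i} v'_{j} = (\partial^{e_{i}} G)'$; the identities for $G_*$ and $G'_*$ are analogous. 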
Using these facts, we have $$\begin{aligned} \label{key-prood-line1} B g (G^{\prime} - G) \tilde{\partial}_{i} G = - B \tilde{\partial}_{i} g (G^{\prime} - G) G - B g ( \tilde{\partial}_{i} G^{\prime} - \tilde{\partial}_{i} G) G \\ \label{key-prood-line2} = B \tilde{\partial}_{i} g (G - G^{\prime})^2 + B \tilde{\partial}_{i} g (G - G^{\prime}) G^{\prime} - B g ( \tilde{\partial}_{i} G^{\prime} - \tilde{\partial}_{i} G) G \\ \label{key-prood-line3} = B \tilde{\partial}_{i} g (G - G^{\prime})^2 - B g ( \tilde{\partial}_{i} G - \tilde{\partial}_{i} G^{\prime}) G^{\prime} - B g (G - G^{\prime}) \tilde{\partial}_{i} G^{\prime} + B g ( \tilde{\partial}_{i} G - \tilde{\partial}_{i} G^{\prime} ) G \\ \label{key-prood-line4} = B \tilde{\partial}_{i} g (G - G^{\prime})^2 - 2 B g (G - G^{\prime}) \tilde{\partial}_{i} G^{\prime} + B g \tilde{\partial}_{i} G (G - G^{\prime}) \\ \label{key-prood-line5} = B \tilde{\partial}_{i} g (G - G^{\prime})^2 - 2 B g^{\prime} (G^{\prime} - G) \tilde{\partial}_{i} G - B g (G^{\prime} -G) \tilde{\partial}_{i} G \\ \label{key-prood-line6} = B \tilde{\partial}_{i} g (G - G^{\prime})^2 - 2 B (g^{\prime} -g ) (G^{\prime} - G) \tilde{\partial}_{i} G - 3 B g (G^{\prime} -G) \tilde{\partial}_{i} G. \end{aligned}$$ Here for the equality in [\[key-prood-line1\]](#key-prood-line1){reference-type="eqref" reference="key-prood-line1"}, we use the integration by parts formula to transfer $\tilde{\partial}_{i}$ to other functions. For the equality from [\[key-prood-line2\]](#key-prood-line2){reference-type="eqref" reference="key-prood-line2"} to [\[key-prood-line3\]](#key-prood-line3){reference-type="eqref" reference="key-prood-line3"}, we use the integration by parts formula to deal with the middle term in the line [\[key-prood-line2\]](#key-prood-line2){reference-type="eqref" reference="key-prood-line2"}. For the equality from [\[key-prood-line4\]](#key-prood-line4){reference-type="eqref" reference="key-prood-line4"} to [\[key-prood-line5\]](#key-prood-line5){reference-type="eqref" reference="key-prood-line5"}, we use the change of variable $(v, v_*, \sigma) \to (v', v'_*, \sigma')$ to deal with the last two terms in [\[key-prood-line4\]](#key-prood-line4){reference-type="eqref" reference="key-prood-line4"}. The other steps are easy to verify. We conclude the result by moving the last term of [\[key-prood-line6\]](#key-prood-line6){reference-type="eqref" reference="key-prood-line6"} to the left-hand side of [\[key-prood-line1\]](#key-prood-line1){reference-type="eqref" reference="key-prood-line1"}. ◻ Now we are ready to estimate the penultimate order terms. **Lemma 17**. *Let $1 \le m = |\alpha| \le N$, then $$\begin{aligned} \sum_{\alpha_1+\alpha_2= \alpha, |\alpha_1|=1 }|\langle Q_1(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle| \lesssim_{N,l} I_{3} \|g\|_{H^{2}_{l}} \|f\|_{H^{m}_{l}}^2. \end{aligned}$$* *Proof.* Let $\mathcal{A}=|\langle Q_1(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}, W_{l}\partial^\alpha f \rangle|$. 
It is easy to see that $\mathcal{A}\le |\mathcal{A}_1|+ |\mathcal{A}_2|$ where $\mathcal{A}_1= \langle Q_1(\partial^{\alpha_1}g, W_{l}\partial^{\alpha_2}f), W_{l}\partial^\alpha f \rangle$ and $\mathcal{A}_2=\langle Q_1(\partial^{\alpha_1}g,\partial^{\alpha_2}f) W_{l}-Q_1(\partial^{\alpha_1}g, W_{l}\partial^{\alpha_2}f), W_{l}\partial^\alpha f \rangle.$ By [\[commu-Q1-with-Wl\]](#commu-Q1-with-Wl){reference-type="eqref" reference="commu-Q1-with-Wl"} in Lemma [Lemma 13](#COMT2W){reference-type="ref" reference="COMT2W"}, as $|\alpha_2| = m-1$, we have $$\begin{aligned} |\mathcal{A}_2| \lesssim_{l} I_{3} (\| W_{l} \partial^{\alpha_1}g\|_{L^{2}}\|\partial^{\alpha_2}f\|_{H^1_1} + \|\partial^{\alpha_1}g\|_{L^{2}_{2}}\| W_{l} \partial^{\alpha_2}f\|_{H^1_{-1}})\| W_{l}\partial^\alpha f\|_{L^{2}} \lesssim_{l} I_{3} \|g\|_{H^{2}_l} \|f\|_{H^{m}_l}^2. \end{aligned}$$ Since $|\alpha_1|=1$, we write $\alpha_1 = e_{i}$ for some $1 \leq i \leq 3$. Let $\partial^{e_{i}} = \partial_{v_{i}}$ and $F = \partial^{\alpha_2}f = \partial^{\alpha-e_{i}}f$, then $\partial^\alpha f =\partial^{e_{i}} F$ and $$\begin{aligned} \label{A-1-into-2-terms} \mathcal{A}_1 = \langle Q_1(\partial^{e_{i}}g, W_{l} F), \partial^{e_{i}} ( W_{l} F) \rangle + \langle Q_1(\partial^{e_{i}}g, W_{l} F), W_{l} \partial^{e_{i}}F -\partial^{e_{i}} ( W_{l} F) \rangle . \end{aligned}$$ $\bullet$ Let $G = W_{l} F$, then the first term of the r.h.s of [\[A-1-into-2-terms\]](#A-1-into-2-terms){reference-type="eqref" reference="A-1-into-2-terms"} can be written as $$\begin{aligned} \label{first-trick} \langle Q_1(\partial^{e_{i}}g, G), \partial^{e_{i}}G \rangle = \int B^{\epsilon}_{1} (\partial^{e_{i}}g)^{\prime}_{*} (G^{\prime} - G) \partial^{e_{i}} G \mathrm{d}V + \int B^{\epsilon}_{1} ((\partial^{e_{i}}g)^{\prime}_{*} - (\partial^{e_{i}}g)_{*}) G \partial^{e_{i}} G \mathrm{d}V . \end{aligned}$$ For the second term in [\[first-trick\]](#first-trick){reference-type="eqref" reference="first-trick"}, we use [\[Q1-cancealltion-g-h-f\]](#Q1-cancealltion-g-h-f){reference-type="eqref" reference="Q1-cancealltion-g-h-f"} to get $$\begin{aligned} |\int B^{\epsilon}_{1} ((\partial^{e_{i}}g)^{\prime}_{*} - (\partial^{e_{i}}g)_{*}) G \partial^{e_{i}} G \mathrm{d}V| \lesssim I_{3} \|\partial^{e_{i}}g\|_{H^{1}} \|G\|_{H^{1}}\|\partial^{e_{i}}G\|_{L^{2}} \lesssim_{l} I_{3} \|g\|_{H^{2}} \|f\|_{H^{m}_l}^2. \end{aligned}$$ By Lemma [Lemma 16](#integration-by-parts-formula){reference-type="ref" reference="integration-by-parts-formula"}, the first term in [\[first-trick\]](#first-trick){reference-type="eqref" reference="first-trick"} is $$\begin{aligned} \label{transfer-regularity-trick} \frac{1}{4} \int B^{\epsilon}_{1} (\partial^{2e_{i}}g)_{*} ( G^{\prime} - G)^{2} \mathrm{d}V+ \frac{1}{2} \int B^{\epsilon}_{1} ((\partial^{e_{i}}g)_{*}^{\prime} - (\partial^{e_{i}}g)_{*}) (G^{\prime} - G) \partial^{e_{i}} G \mathrm{d}V. \end{aligned}$$ By Taylor expansion of $G$, [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"} gives that $$\begin{aligned} \int B^{\epsilon}_{1} (\partial^{2e_{i}}g)_{*} (G^{\prime} - G)^{2} \mathrm{d}V \lesssim I_{3} \|g\|_{H^2_{2}} \|G\|_{H^{1}}^2 \lesssim_{l} I_{3} \|g\|_{H^{2}_2} \|f\|_{H^{m}_l}^2. 
\end{aligned}$$ Applying the order-1 Taylor expansion to $\partial^{e_{i}}g$ and $G$ and using [\[general-B1-h-f\]](#general-B1-h-f){reference-type="eqref" reference="general-B1-h-f"}, the second term in [\[transfer-regularity-trick\]](#transfer-regularity-trick){reference-type="eqref" reference="transfer-regularity-trick"} is bounded by $$\begin{aligned} |\int B^{\epsilon}_1 |(\nabla\partial^{e_{i}}g)(\iota(v_*))| |(\nabla G)(\kappa(v))| | \partial^{e_{i}} G| \mathrm{d}V \mathrm{d}\kappa \mathrm{d}\iota| \lesssim_{l} I_{3} \|\nabla\partial^{e_{i}}g\|_{L^2_{2}} \|\nabla G\|_{L^{2}} \|G\|_{L^{2}} \lesssim I_{3} \|g\|_{H^{2}_2} \|f\|_{H^{m}_l}^2. \end{aligned}$$ $\bullet$ Since $W_{l} \partial^{e_{i}}F -\partial^{e_{i}} ( W_{l} F) = - F \partial^{e_{i}} W_{l}$, the second term in the r.h.s of [\[A-1-into-2-terms\]](#A-1-into-2-terms){reference-type="eqref" reference="A-1-into-2-terms"} is $$\begin{aligned} - \langle Q_1(\partial^{e_{i}}g, W_{l} F), F \partial^{e_{i}} W_{l} \rangle &=& \frac{1}{2} \int B^{\epsilon}_1 (\partial^{e_{i}}g)_{*} (( W_{l} F)^{\prime} - W_{l} F) ((F \partial^{e_{i}} W_{l})^{\prime} - F \partial^{e_{i}} W_{l})\mathrm{d}V \\&&+ \frac{1}{2} \int B^{\epsilon}_1 ((\partial^{e_{i}}g)_{*}^{\prime}- (\partial^{e_{i}}g)_{*}) W_{l} F ((F \partial^{e_{i}} W_{l})^{\prime} - F \partial^{e_{i}} W_{l})\mathrm{d}V. \end{aligned}$$ By a similar argument, both terms on the r.h.s can be bounded by $I_{3} \|g\|_{H^{2}_2} \|f\|_{H^{m}_l}^2$. We are led to the desired result. ◻ # Well-posedness and propagation of regularity In this section, we will prove part of Theorem [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"} for Fermi-Dirac particles as well as Theorem [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"} for Bose-Einstein particles. ## Fermi-Dirac particles In this subsection, we will prove the first two results in Theorem [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"}. We start with the mild solution since in this situation the non-negativity can be easily proved. We recall that $\partial_t f =Q_{UU}^{\epsilon}(f)$, where $Q_{UU}^{\epsilon}$ is defined by $$\begin{aligned} \label{OPQUU-eps-FD} Q_{UU}^{\epsilon}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Pi^{\epsilon}(f) \mathrm{d}\sigma \mathrm{d}v_*, \end{aligned}$$ where $$\begin{aligned} \Pi^{\epsilon}(f) \colonequals f_*' f' (1 - \epsilon^3 f_*) (1 - \epsilon^3 f) - f_* f (1 - \epsilon^3 f_*') (1 - \epsilon^3f'). \end{aligned}$$ As usual, we define the gain and loss terms by $$\begin{aligned} \label{OPQUU-eps-FD-gain-loss} Q^{\epsilon,+}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Pi^{\epsilon,+}(f) \mathrm{d}\sigma \mathrm{d}v_*, \quad Q^{\epsilon,-}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Pi^{\epsilon,-}(f) \mathrm{d}\sigma \mathrm{d}v_*, \end{aligned}$$ where $$\begin{aligned} \Pi^{\epsilon,+}(f) \colonequals f_*' f' (1 - \epsilon^3 f_*) (1 - \epsilon^3 f), \quad \Pi^{\epsilon,-}(f) \colonequals f_* f (1 - \epsilon^3 f_*') (1 - \epsilon^3 f'). \end{aligned}$$ We now prove that the initial value problem admits a mild solution if $f_0 \in L^1$ with $0 \leq f_{0} \leq \epsilon^{-3}$. To do that, we prove the following lemma. **Lemma 18**. 
*Let $f, g \in L^1 \cap L^{\infty}, 0 \leq f, g \leq \epsilon^{-3}$, then $$\begin{aligned} \label{Q-g-L1-F-D} \|Q_{UU}^{\epsilon}(f)\|_{L^1} \lesssim \epsilon^{-3} I_0 \|f\|_{L^1}^2, \\ \label{Q-g-Linfty-F-D} \|Q_{UU}^{\epsilon}(f)\|_{L^\infty} \lesssim \epsilon^{-6} I_{0} \| f \|_{L^1} + \epsilon^{-6} I_{3}, \\ \label{Q-g-minus-f-L1-F-D} \|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^1} \lesssim (\epsilon^{-3} I_{0} + I_{3}) (\| f \|_{L^1} + \| g \|_{L^1} + \epsilon^{-3}) \| f -g \|_{L^1}. \end{aligned}$$* *Proof.* By [\[OPQUU-eps-FD\]](#OPQUU-eps-FD){reference-type="eqref" reference="OPQUU-eps-FD"} and [\[OPQUU-eps-FD-gain-loss\]](#OPQUU-eps-FD-gain-loss){reference-type="eqref" reference="OPQUU-eps-FD-gain-loss"}, we have $$\begin{aligned} \label{Q-into-gain-and-loss} Q_{UU}^{\epsilon}(f) = Q^{\epsilon,+}(f) - Q^{\epsilon,-}(f). \end{aligned}$$ From [\[Q-into-gain-and-loss\]](#Q-into-gain-and-loss){reference-type="eqref" reference="Q-into-gain-and-loss"} and the change of variable [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} in the special case $\kappa = \iota = 1$, we get that $$\begin{aligned} \label{F-D-Q-L1-into-gain-and-loss} \|Q_{UU}^{\epsilon}(f)\|_{L^1} \leq \|Q^{\epsilon,+}(f)\|_{L^1} + \|Q^{\epsilon,-}(f)\|_{L^1} = 2 \|Q^{\epsilon,-}(f)\|_{L^1} . \end{aligned}$$ Since $0 \leq f \leq \epsilon^{-3}$, [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"} implies that $$\begin{aligned} \label{F-D-Q-L1-loss-up} \|Q^{\epsilon,-}(f)\|_{L^1} = \int B^{\epsilon} f_* f (1 - \epsilon^3 f_*') (1 - \epsilon^3 f') \mathrm{d}\sigma \mathrm{d}v_* \mathrm{d}v \leq \int B^{\epsilon} f_* f \mathrm{d}\sigma \mathrm{d}v_* \mathrm{d}v \lesssim \epsilon^{-3} I_0 \|f\|_{L^1}^2. \end{aligned}$$ This gives [\[Q-g-L1-F-D\]](#Q-g-L1-F-D){reference-type="eqref" reference="Q-g-L1-F-D"}. From [\[Q-into-gain-and-loss\]](#Q-into-gain-and-loss){reference-type="eqref" reference="Q-into-gain-and-loss"}, it holds that $\|Q_{UU}^{\epsilon}(f)\|_{L^\infty} \leq \|Q^{\epsilon,+}(f)\|_{L^\infty} + \|Q^{\epsilon,-}(f)\|_{L^\infty}.$ Since $0 \leq f \leq \epsilon^{-3}$, by using [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"}, we obtain that $$\begin{aligned} \qquad \label{Q-minus-L-infty} Q^{\epsilon,-}(f)(v) = \int B^{\epsilon} f_* f (1 - \epsilon^3 f_*') (1 - \epsilon^3 f') \mathrm{d}\sigma \mathrm{d}v_* \leq f \int B^{\epsilon} f_* \mathrm{d}\sigma \mathrm{d}v_* = A^{\epsilon} f \|f\|_{L^1} \lesssim \epsilon^{-6} I_0 \|f\|_{L^1}. \end{aligned}$$ For $Q^{\epsilon,+}(f)(v)$, we use $B^{\epsilon} \leq 2 B^{\epsilon}_1 + 2 B^{\epsilon}_3$ to get $$\begin{aligned} \label{Q-plus-L-infty} Q^{\epsilon,+}(f)(v) = \int B^{\epsilon} f_*' f' (1 - \epsilon^3 f_*) (1 - \epsilon^3 f) \mathrm{d}\sigma \mathrm{d}v_* \leq 2 \epsilon^{-3} \int B^{\epsilon}_1 f_*' \mathrm{d}\sigma \mathrm{d}v_* + 2 \epsilon^{-6} \int B^{\epsilon}_3 \mathrm{d}\sigma \mathrm{d}v_*. \end{aligned}$$ By [\[Q-plus-L-infty\]](#Q-plus-L-infty){reference-type="eqref" reference="Q-plus-L-infty"}, [\[bounded-l1-norm-simple\]](#bounded-l1-norm-simple){reference-type="eqref" reference="bounded-l1-norm-simple"} and [\[B1-sigma-vStar\]](#B1-sigma-vStar){reference-type="eqref" reference="B1-sigma-vStar"}, we arrive at $$\begin{aligned} \label{Q-plus-L-infty-result} Q^{\epsilon,+}(f)(v) \lesssim \epsilon^{-6} I_{0} \| f \|_{L^1} + \epsilon^{-6} I_{3} . 
\end{aligned}$$ Patching together [\[Q-minus-L-infty\]](#Q-minus-L-infty){reference-type="eqref" reference="Q-minus-L-infty"} and [\[Q-plus-L-infty-result\]](#Q-plus-L-infty-result){reference-type="eqref" reference="Q-plus-L-infty-result"} will give [\[Q-g-Linfty-F-D\]](#Q-g-Linfty-F-D){reference-type="eqref" reference="Q-g-Linfty-F-D"}. From [\[Q-into-gain-and-loss\]](#Q-into-gain-and-loss){reference-type="eqref" reference="Q-into-gain-and-loss"}, we notice that $Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g) = Q^{\epsilon,+}(f) - Q^{\epsilon,+}(g) - \left(Q^{\epsilon,-}(f) - Q^{\epsilon,-}(g)\right).$ Recalling [\[OPQUU-eps-FD-gain-loss\]](#OPQUU-eps-FD-gain-loss){reference-type="eqref" reference="OPQUU-eps-FD-gain-loss"}, by the change of variable [\[change-of-variable-2\]](#change-of-variable-2){reference-type="eqref" reference="change-of-variable-2"} in the special case $\kappa = \iota = 1$, we get that $$\begin{aligned} \nonumber &&\|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^1} \leq \|Q^{\epsilon,+}(f) - Q^{\epsilon,+}(g)\|_{L^1} + \|Q^{\epsilon,-}(f) - Q^{\epsilon,-}(g)\|_{L^1} \\ &&\leq \int B^{\epsilon} |\Pi^{\epsilon,+}(f) - \Pi^{\epsilon,+}(g)| \mathrm{d}V + \int B^{\epsilon} |\Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g)| \mathrm{d}V = 2 \int B^{\epsilon} |\Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g)| \mathrm{d}V. \label{L1-upper-by-2-times-loss} \end{aligned}$$ Observe that $$\begin{aligned} \Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g) &=& (f_* - g_{*}) f (1 - \epsilon^3 f_*') (1 - \epsilon^3 f') + g_* (f - g) (1 - \epsilon^3 f_*') (1 - \epsilon^3 f') \\ &&+ \epsilon^3 g_* g (g_*' - f_*') (1 - \epsilon^3 f') + \epsilon^3 g_* g (1 - \epsilon^3 g_*') (g' - f') . \end{aligned}$$ If $0 \leq f, g \leq \epsilon^{-3}$, then $|\Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g)| \leq |f_* - g_{*}| f + g_* |f - g| + g |g_*' - f_*'| + g_* |g' - f'| .$ Therefore, $$\begin{aligned} % \label{loss-part-diff-L1} \int B^{\epsilon} |\Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g)| \mathrm{d}V \leq \int B^{\epsilon} (|f_* - g_{*}| f + g_* |f - g|) \mathrm{d}V + 2 \int B^{\epsilon} g_* |g' - f'| \mathrm{d}V . \end{aligned}$$ By using [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"}, [\[general-L1\]](#general-L1){reference-type="eqref" reference="general-L1"} and [\[B1-sigma-vStar\]](#B1-sigma-vStar){reference-type="eqref" reference="B1-sigma-vStar"}, we get that $$\begin{aligned} \int B^{\epsilon} |\Pi^{\epsilon,-}(f) - \Pi^{\epsilon,-}(g)| \mathrm{d}V \lesssim (\epsilon^{-3} I_{0} + I_{3}) (\| f \|_{L^1} + \| g \|_{L^1} + \| g \|_{L^\infty})\| f -g \|_{L^1}, \end{aligned}$$ which yields [\[Q-g-minus-f-L1-F-D\]](#Q-g-minus-f-L1-F-D){reference-type="eqref" reference="Q-g-minus-f-L1-F-D"}. ◻ *Proof of Theorem [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"}: (Well-posedness part).* We recall that $\mathcal{A}_{T} \colonequals L^{\infty}([0,T]; L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3))$ associated with the norm $\|f\|_{T} \colonequals \sup\limits_{0 \leq t \leq T} \|f(t)\|_{L^{1}}$. We define the operator $J^{\epsilon}(\cdot)$ on $\mathcal{A}_{T}$ by $$\begin{aligned} J^{\epsilon}(f)(t,v) \colonequals f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(|f| \wedge \epsilon^{-3} )(\tau, v) \mathrm{d}\tau. \end{aligned}$$ $\bullet$ $J^{\epsilon}(\cdot)$ is a map from $\mathcal{A}_{T}$ onto $\mathcal{A}_{T}$. 
It is easy to check that $\|J^{\epsilon}(f)\|_{T} \leq \|f_{0}\|_{L^1} + T \| Q_{UU}^{\epsilon}(|f| \wedge \epsilon^{-3} )\|_{T}.$ By [\[Q-g-L1-F-D\]](#Q-g-L1-F-D){reference-type="eqref" reference="Q-g-L1-F-D"}, we get $$\begin{aligned} \label{map-on-itself} \|J^{\epsilon}(f)\|_{T} \leq \|f_{0}\|_{L^1} + T C_{\epsilon, \phi} \|f\|_{T}^2, \end{aligned}$$where $C_{\epsilon, \phi} \lesssim \epsilon^{-3} I_{0}$. This means that $J^{\epsilon}(\cdot)$ maps $\mathcal{A}_{T}$ into itself. $\bullet$ Let $\mathcal{B}_{T} \colonequals \{f \in \mathcal{A}_{T}, \|f\|_{T} \leq 2 \|f_{0}\|_{L^1} \}$. We want to show that $J^{\epsilon}(\cdot)$ is a contraction on the complete metric space $(\mathcal{B}_{T}, \|\cdot - \cdot\|_{T})$ for small enough $T>0$. By [\[map-on-itself\]](#map-on-itself){reference-type="eqref" reference="map-on-itself"}, if $T$ satisfies $$\begin{aligned} \label{small-condition-1-on-T} 4 T C_{\epsilon, \phi} \|f_{0}\|_{L^1} \leq 1, \end{aligned}$$ then $J^{\epsilon}(\cdot)$ maps $\mathcal{B}_{T}$ into itself. Given $f, g \in \mathcal{B}_{T}$, we have $$\begin{aligned} J^{\epsilon}(f)(t,v) - J^{\epsilon}(g)(t,v) = \int_{0}^{t} \left( Q_{UU}^{\epsilon}(|f| \wedge \epsilon^{-3}) - Q_{UU}^{\epsilon}(|g| \wedge \epsilon^{-3}) \right)(\tau, v) \mathrm{d}\tau. \end{aligned}$$ Similarly to the above, we have $\|J^{\epsilon}(f) - J^{\epsilon}(g)\|_{T} \leq T \| Q_{UU}^{\epsilon}(|f| \wedge \epsilon^{-3} ) - Q_{UU}^{\epsilon}(|g| \wedge \epsilon^{-3} )\|_{T}.$ By [\[Q-g-minus-f-L1-F-D\]](#Q-g-minus-f-L1-F-D){reference-type="eqref" reference="Q-g-minus-f-L1-F-D"}, since the function $x \in \mathop{\mathbb R\kern 0pt}\nolimits\to |x| \wedge \epsilon^{-3}$ is Lipschitz continuous with Lipschitz constant $1$, we get that $$\begin{aligned} \|J^{\epsilon}(f) - J^{\epsilon}(g)\|_{T} \leq T C_{\epsilon, \phi} (\| f \|_{T} + \| g \|_{T} + \epsilon^{-3}) \| f- g\|_{T} \leq T C_{\epsilon, \phi} (4 \| f_0 \|_{L^1} + \epsilon^{-3}) \| f- g\|_{T}, \end{aligned}$$ where $C_{\epsilon, \phi} \lesssim \epsilon^{-3} I_{0} + I_{3}$. If $T$ satisfies $$\begin{aligned} \label{small-condition-2-on-T} T C_{\epsilon, \phi} (4 \| f_0 \|_{L^1} + \epsilon^{-3}) \leq \frac 12, \end{aligned}$$ then $J^{\epsilon}(\cdot)$ is a contraction mapping. By the fixed point theorem, there exists a unique $f^{\epsilon} \in \mathcal{B}_{T}$, s.t. $f^{\epsilon} = J^{\epsilon}(f^{\epsilon})$. After a modification on a $v$-null set, there is a null set $Z \subset \mathop{\mathbb R\kern 0pt}\nolimits^3$ such that, for all $t \in [0, T]$ and $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$, $$\begin{aligned} % \label{solution-local} f^{\epsilon}(t,v) = J^{\epsilon}(f^{\epsilon}) (t,v) = f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(|f^{\epsilon}| \wedge \epsilon^{-3} )(\tau, v) \mathrm{d}\tau. \end{aligned}$$ $\bullet$ We now utilize the special definition of $J^{\epsilon}(\cdot)$ to prove that $0 \leq f^{\epsilon}(t) \leq \epsilon^{-3}$ for all $t \in [0, T]$ and $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$. By [\[Q-g-Linfty-F-D\]](#Q-g-Linfty-F-D){reference-type="eqref" reference="Q-g-Linfty-F-D"}, we can see that for any $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$ and $t_1, t_2 \in [0, T]$, $$\begin{aligned} |f^{\epsilon}(t_2,v) - f^{\epsilon}(t_1,v)| \leq C_{\epsilon, \phi} (\| f_0 \|_{L^1} + 1) |t_2 - t_1|, \end{aligned}$$ for some $C_{\epsilon, \phi} \lesssim \epsilon^{-6} \left( I_{0} + I_{3} \right)$. That is, $f^{\epsilon}(\cdot, v)$ is uniformly continuous w.r.t. 
$t$ on $[0, T]$ for any $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$. Then we have $$\begin{aligned} \nonumber (-f^{\epsilon}(t,v))^{+} &=& - \int_{0}^{t} Q_{UU}^{\epsilon}(|f^{\epsilon}| \wedge \epsilon^{-3})(\tau, v) \mathrm{1}_{f^{\epsilon}(\tau,v)<0} \mathrm{d}\tau \leq \int_{0}^{t} Q^{\epsilon,-}(|f^{\epsilon}| \wedge \epsilon^{-3})(\tau, v) \mathrm{1}_{f^{\epsilon}(\tau,v)<0} \mathrm{d}\tau \\ \label{non-negativity} &\leq& A^{\epsilon} \int_{0}^{t} \|f^{\epsilon}(\tau)\|_{L^1} |f^{\epsilon}(\tau, v)| \mathrm{1}_{f^{\epsilon}(\tau,v)<0} \mathrm{d}\tau \leq 2 A^{\epsilon} \|f_0\|_{L^1} \int_{0}^{t} (-f^{\epsilon}(\tau,v))^{+} \mathrm{d}\tau. \end{aligned}$$By Grönwall's inequality, we get $(-f^{\epsilon}(t,v))^{+} = 0$ on $[0, T]$ and so $f^{\epsilon}(t,v) \geq 0$. We also have $$\begin{aligned} (f^{\epsilon}(t,v) - \epsilon^{-3})^{+} &=& \int_{0}^{t} Q_{UU}^{\epsilon}(|f^{\epsilon}| \wedge \epsilon^{-3})(\tau, v) \mathrm{1}_{f^{\epsilon}(\tau,v)>\epsilon^{-3}} \mathrm{d}\tau \le \int_{0}^{t} Q^{\epsilon,+}(|f^{\epsilon}| \wedge \epsilon^{-3})(\tau, v) \mathrm{1}_{f^{\epsilon}(\tau,v)>\epsilon^{-3}} \mathrm{d}\tau =0, \end{aligned}$$ which yields $f^{\epsilon}(t,v) \leq \epsilon^{-3}$. Recalling Definition [Definition 1](#def-mild-solution){reference-type="ref" reference="def-mild-solution"} for F-D particles, we have thus obtained a mild solution on $[0, T]$. By [\[Q-g-L1-F-D\]](#Q-g-L1-F-D){reference-type="eqref" reference="Q-g-L1-F-D"} and Fubini's theorem, we have conservation of mass $\| f^{\epsilon}(t) \|_{L^1} = \| f_0 \|_{L^1}$ for $t \in [0, T]$. Note that the lifespan $T$ depends on $\epsilon, \phi$ and $\| f_0 \|_{L^1}$. Thanks to the conservation of mass, we can continue to construct the solution on $[T, 2T], [2T, 3T], \cdots$ and get a global mild solution. That is, there is a unique measurable function $f^{\epsilon} \in \mathcal{A}_{\infty}$ satisfying: there is a null set $Z \subset \mathop{\mathbb R\kern 0pt}\nolimits^3$ s.t., for all $t \geq 0$ and $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$, $$\begin{aligned} \label{mild-solution-existence} f^{\epsilon}(t,v) = f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(f^{\epsilon})(\tau, v) \mathrm{d}\tau, \quad 0 \leq f^{\epsilon}(t,v) \leq \epsilon^{-3}. \end{aligned}$$ Moreover, for all $t \geq 0$, $\|f^{\epsilon}(t)\|_{L^1} = \|f_0\|_{L^1}.$ ◻ *Proof of Theorem [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"}: (Propagation of regularity).* We will focus on the propagation of regularity in weighted Sobolev spaces uniformly in $\epsilon$ for the mild solution. Define $$\begin{aligned} \label{def-T-max} T_M\colonequals\sup_{t>0}\bigg\{t\big| \sup_{0\le s\le t} \|f^{\epsilon}(s)\|_{H^N_{l}} \le 2 \|f_0\|_{H^N_{l}} \bigg\}. \end{aligned}$$ The main goal is to prove that $T_M = T_M(N,l,\phi, \|f_0\|_{H^N_{l}})$ is strictly positive and independent of $\epsilon$. Let $f_0 \in H^{N}_{l}$ with $N, l \geq 2$. 
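Before turning to the estimate, let us record the elementary bootstrap fact behind the Grönwall step below; this is only a rough sketch with unoptimized constants, in which $y(t)$ plays the role of $\|f^{\epsilon}(t)\|_{H^{N}_{l}}^2$ and $C$ that of the constant $C_{N,l,\phi}$ from the energy inequality. If $y \geq 0$ is continuous with $y(0)>0$ and $y(t) \le y(0) + C\int_{0}^{t} \big(y(\tau)^{3/2}+y(\tau)^{2}\big) \mathrm{d}\tau$, then as long as $y \le 4 y(0)$ we have $$\begin{aligned} y(t) \le y(0) + C t \big(8 y(0)^{3/2} + 16 y(0)^{2}\big) \le y(0) + 16\, C t\, y(0) \big(y(0)^{1/2}+y(0)\big) \le y(0) + 32\, C t\, y(0) \big(y(0)+1\big)^{3/2}, \end{aligned}$$ so the threshold $4y(0)$ cannot be reached for $t \le c\,\big[C (y(0)+1)^{3/2}\big]^{-1}$ with a small enough absolute constant $c$; this is how a lower bound of the form $T_M^*$ below arises. 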
Using [\[mild-solution-existence\]](#mild-solution-existence){reference-type="eqref" reference="mild-solution-existence"} and Theorem [Theorem 6](#UU-in-H-N-l-uniform-in-eps){reference-type="ref" reference="UU-in-H-N-l-uniform-in-eps"}, the conservation of mass and the upper bound $f^{\epsilon}(t)\le \epsilon^{-3}$ lead to $$\begin{aligned} \label{hnl-norm-t1-2-t2}\nonumber &&\frac{1}{2}\|f^{\epsilon}(t)\|_{H^{N}_{l}}^2 - \frac{1}{2}\|f^{\epsilon}(0)\|_{H^{N}_{l}}^2 = \sum_{|\alpha| \leq N} \int_{0}^{t} \langle W_{l} \partial^{\alpha} Q_{UU}^{\epsilon}(f^{\epsilon}(\tau)), W_{l} \partial^{\alpha}f^{\epsilon} (\tau) \rangle \mathrm{d}\tau\\&&\le C_{N,l,\phi}\int_{0}^{t} \|f^{\epsilon}(\tau)\|_{H^{N}_{l}}^2(\|f^{\epsilon}(\tau)\|_{H^{N}_{l}} + \|f^{\epsilon}(\tau)\|_{H^{N}_{l}}^2)d\tau. \end{aligned}$$ Let $T_{M}^{*}\colonequals \frac{21}{ 4C_{N,l,\phi} ( \|f_0\|_{H^{N}_{l}}^2 +1)^{3/2}}.$ Then by Grönwall's inequality, we derive that for $t\in[0,T_M^*]$, $$\begin{aligned} \label{bounded-2-times-hnl} \sup_{0\le t \le T_{M}^{*}} \|f^{\epsilon}(t)\|_{H^N_{l}} \le 2 \|f_0\|_{H^N_{l}}, \mbox{which implies that}\,\, T_M\ge T_M^*. \end{aligned}$$ We now prove [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"} based on [\[mild-solution-existence\]](#mild-solution-existence){reference-type="eqref" reference="mild-solution-existence"} and [\[bounded-2-times-hnl\]](#bounded-2-times-hnl){reference-type="eqref" reference="bounded-2-times-hnl"}. By [\[mild-solution-existence\]](#mild-solution-existence){reference-type="eqref" reference="mild-solution-existence"} and Minkowski's inequality, we have $$\begin{aligned} \|f^{\epsilon}(t_2,v) - f^{\epsilon}(t_1,v)\|_{H^{N-2}_{l}} \lesssim \int_{t_1}^{t_2} \| Q_{UU}^{\epsilon}(f^{\epsilon})(\tau, v) \|_{H^{N-2}_{l}} \mathrm{d}\tau. \end{aligned}$$ From [\[lose-order-2-derivative\]](#lose-order-2-derivative){reference-type="eqref" reference="lose-order-2-derivative"} and [\[bounded-2-times-hnl\]](#bounded-2-times-hnl){reference-type="eqref" reference="bounded-2-times-hnl"}, we deduce that $$\begin{aligned} \|f^{\epsilon}(t_2,v) - f^{\epsilon}(t_1,v)\|_{H^{N-2}_{l}} \lesssim \int_{t_1}^{t_2}(\| f^{\epsilon}(\tau) \|_{H^{N}_{l}}^2 + \| f^{\epsilon}(\tau) \|_{H^{N}_{l}}^3) \mathrm{d}\tau \lesssim (\| f_0 \|_{H^{N}_{l}}^2 + \| f_0 \|_{H^{N}_{l}}^3) (t_2 - t_1), \end{aligned}$$ which gives [\[continuous-in-time\]](#continuous-in-time){reference-type="eqref" reference="continuous-in-time"}. ◻ ## Bose-Einstein particles In this subsection, we will prove the local well-posedness and propagation of regularity uniformly in $\epsilon$ for Bose-Einstein particles. The proof follows the same spirit as in the previous subsection. However, the density of Bose-Einstein particles could blow up, unlike that of Fermi-Dirac particles, which has the natural bound $f\leq \epsilon^{-3}$. This motivates us to construct the solution in the space $L^{1} \cap L^{\infty}$. 
Recalling [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} and [\[OPQUU\]](#OPQUU){reference-type="eqref" reference="OPQUU"}, we consider $\partial_t f =Q_{UU}^{\epsilon}(f)$, where $Q_{UU}^{\epsilon}$ denotes the Uehling-Uhlenbeck operator for Bose-Einstein particles, i.e., $Q_{UU}^{\epsilon}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Phi^{\epsilon}(f) \mathrm{d}\sigma \mathrm{d}v_*$, where $$\begin{aligned} \Phi^{\epsilon}(f) \colonequals f_*' f' (1 + \epsilon^3 f_* + \epsilon^3 f) - f_* f (1 + \epsilon^3 f_*' + \epsilon^3 f') .\end{aligned}$$ As usual, we define the gain and loss terms by $$\begin{aligned} % \label{OPQUU-eps-BE-gain-loss} Q^{\epsilon,+}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Phi^{\epsilon,+}(f) \mathrm{d}\sigma \mathrm{d}v_*, \quad Q^{\epsilon,-}(f) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3\times\mathop{\mathbb S\kern 0pt}\nolimits^2} B^{\epsilon} \Phi^{\epsilon,-}(f) \mathrm{d}\sigma \mathrm{d}v_*, \end{aligned}$$ where $$\begin{aligned} \Phi^{\epsilon,+}(f) \colonequals f_*' f' (1 + \epsilon^3 f_* + \epsilon^3 f), \quad \Phi^{\epsilon,-}(f) \colonequals f_* f (1 + \epsilon^3 f_*' + \epsilon^3 f'). \end{aligned}$$ We begin with a lemma on the upper bound of the collision operator in the space $L^{1} \cap L^{\infty}$. **Lemma 19**. *Let $f, g \in L^1 \cap L^{\infty}, 0 \leq f, g$, then $$\begin{aligned} \label{Q-g-L1-B-E} \|Q_{UU}^{\epsilon}(f)\|_{L^1} \lesssim \epsilon^{-3} I_{0} (1 + \|f\|_{L^\infty}) \|f\|_{L^1}^2, \\ \label{Q-g-Linfty-B-E} \|Q_{UU}^{\epsilon}(f)\|_{L^\infty} \lesssim (I_{3} + \epsilon^{-3} I_{0}) (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty} (\|f\|_{L^1} + \|f\|_{L^\infty}). \end{aligned}$$ Moreover, $$\begin{aligned} \label{Q-g-minus-f-L1-and-infty-B-E} && \|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^{1} \cap L^\infty} \\ \nonumber &\lesssim& (\epsilon^{-3} I_{0} + I_{3}) (\| f \|_{L^1 \cap L^\infty} + \| g \|_{L^1 \cap L^\infty} + 1) (\| f \|_{L^1 \cap L^\infty} + \| g \|_{L^1 \cap L^\infty}) \| f -g \|_{L^1 \cap L^\infty} .\end{aligned}$$* *Proof.* We give the estimates term by term. Similarly to [\[F-D-Q-L1-into-gain-and-loss\]](#F-D-Q-L1-into-gain-and-loss){reference-type="eqref" reference="F-D-Q-L1-into-gain-and-loss"} and [\[F-D-Q-L1-loss-up\]](#F-D-Q-L1-loss-up){reference-type="eqref" reference="F-D-Q-L1-loss-up"}, [\[Q-g-L1-B-E\]](#Q-g-L1-B-E){reference-type="eqref" reference="Q-g-L1-B-E"} follows from the facts $\|Q_{UU}^{\epsilon}(f)\|_{L^1}\le 2 \|Q^{\epsilon,-}(f)\|_{L^1}$ and $\|Q^{\epsilon,-}(f)\|_{L^1}\lesssim \epsilon^{-3} I_{0} (1 + \|f\|_{L^\infty}) \|f\|_{L^1}^2.$ $\bullet$ To derive [\[Q-g-Linfty-B-E\]](#Q-g-Linfty-B-E){reference-type="eqref" reference="Q-g-Linfty-B-E"}, we observe that $Q^{\epsilon,-}(f)(v)\lesssim \epsilon^{-3} I_0 (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty} \|f\|_{L^1}$ and $$\begin{aligned} \nonumber Q^{\epsilon,+}(f)(v) &\lesssim& (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty} \int B^{\epsilon}_1 f_*' \mathrm{d}\sigma \mathrm{d}v_* + (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty}^2 \int B^{\epsilon}_3 \mathrm{d}\sigma \mathrm{d}v_* \\ \label{Q-plus-L-infty-B-E} &\lesssim& \epsilon^{-3} I_{0} (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty} \|f\|_{L^1} + I_{3} (1 + \|f\|_{L^\infty}) \|f\|_{L^\infty}^2. \end{aligned}$$ Then [\[Q-g-Linfty-B-E\]](#Q-g-Linfty-B-E){reference-type="eqref" reference="Q-g-Linfty-B-E"} follows. 
$\bullet$ To show [\[Q-g-minus-f-L1-and-infty-B-E\]](#Q-g-minus-f-L1-and-infty-B-E){reference-type="eqref" reference="Q-g-minus-f-L1-and-infty-B-E"}, we notice that $\|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^1} \leq 2 \int B^{\epsilon} |\Phi^{\epsilon,-}(f) - \Phi^{\epsilon,-}(g)| \mathrm{d}V$ and $$\begin{aligned} \label{Phi-minus-f-minus-g} |\Phi^{\epsilon,-}(f) - \Phi^{\epsilon,-}(g)| \leq (1 + 2\|f\|_{L^{\infty}}) \left( |f_* - g_{*}| f + g_* |f - g| \right) + 2\|f - g\|_{L^{\infty}} g g_* ,\end{aligned}$$for $0< \epsilon\leq 1$ and $0 \leq f, g \in L^{\infty}$. From these together with [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"}, we get that $$\begin{aligned} \int B^{\epsilon} |\Phi^{\epsilon,-}(f) - \Phi^{\epsilon,-}(g)| \mathrm{d}V \lesssim \epsilon^{-3} I_{0} (1 + \|f\|_{L^{\infty}}) (\| f \|_{L^1} + \| g \|_{L^1})\| f -g \|_{L^1} + \epsilon^{-3} I_{0} \|f - g\|_{L^{\infty}} \| g \|_{L^1}^2 ,\end{aligned}$$ which gives $$\begin{aligned} \label{Q-g-minus-f-L1-B-E} \|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^1} \lesssim \epsilon^{-3} I_{0} (1 + \|f\|_{L^{\infty}} + \| g \|_{L^1}) (\| f \|_{L^1} + \| g \|_{L^1})\| f -g \|_{L^1 \cap L^{\infty}}. \end{aligned}$$ $\bullet$ To get the $L^\infty$ bounds, [\[Phi-minus-f-minus-g\]](#Phi-minus-f-minus-g){reference-type="eqref" reference="Phi-minus-f-minus-g"} and [\[upper-bound-of-A-eps\]](#upper-bound-of-A-eps){reference-type="eqref" reference="upper-bound-of-A-eps"} yield that $$\begin{aligned} &&|Q^{\epsilon,-}(f) - Q^{\epsilon,-}(g)| \leq \int B^{\epsilon} |\Phi^{\epsilon,-}(f) - \Phi^{\epsilon,-}(g)| \mathrm{d}v_* \mathrm{d}\sigma \\ &\lesssim& \epsilon^{-3} I_{0} (1 + \|f\|_{L^{\infty}}) (\| f \|_{L^\infty}\| f -g \|_{L^1} + \| f -g \|_{L^\infty}\| g \|_{L^1}) + \epsilon^{-3} I_{0} \|f - g\|_{L^{\infty}} \|g\|_{L^1} \|g\|_{L^{\infty}}. \end{aligned}$$ For the gain term, similar to [\[Phi-minus-f-minus-g\]](#Phi-minus-f-minus-g){reference-type="eqref" reference="Phi-minus-f-minus-g"}, by exchanging $(v, v_*)$ and $(v', v'_*)$, we have $$\begin{aligned} \label{Phi-plus-f-minus-g}\quad |\Phi^{\epsilon,+}(f) - \Phi^{\epsilon,+}(g)| &\leq& (1 + 2\|f\|_{L^{\infty}}) \left( |f_*' - g_{*}'| f' + g_*' |f' - g'| \right) + 2 \|f - g\|_{L^{\infty}} g' g_*' \\ \nonumber &\leq& (1 + 2\|f\|_{L^{\infty}}) \left( |f_*' - g_{*}'| \|f\|_{L^{\infty}} + g_*' \|f-g\|_{L^{\infty}} \right) + 2\|f - g\|_{L^{\infty}} \|g\|_{L^{\infty}} g_*' .\end{aligned}$$ Now combining it with [\[Phi-plus-f-minus-g\]](#Phi-plus-f-minus-g){reference-type="eqref" reference="Phi-plus-f-minus-g"}, [\[B1-sigma-vStar\]](#B1-sigma-vStar){reference-type="eqref" reference="B1-sigma-vStar"} and [\[bounded-l1-norm-simple\]](#bounded-l1-norm-simple){reference-type="eqref" reference="bounded-l1-norm-simple"}, we have $$\begin{aligned} \label{L-infty-estimate} && \|Q_{UU}^{\epsilon}(f) - Q_{UU}^{\epsilon}(g)\|_{L^\infty} \\ \nonumber &\lesssim& (\epsilon^{-3} I_{0} + I_{3}) (\| f \|_{L^1 \cap L^\infty} + \| g \|_{L^1 \cap L^\infty} + 1) (\| f \|_{L^1 \cap L^\infty} + \| g \|_{L^1 \cap L^\infty}) \| f -g \|_{L^1 \cap L^\infty} .\end{aligned}$$We complete the proof of the lemma. ◻ We are now ready to prove Theorem [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"}. 
*Proof of Theorem [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"}.* We introduce the function space $\mathcal{E}_{T} \colonequals L^{\infty}([0,T]; L^{1}(\mathop{\mathbb R\kern 0pt}\nolimits^3)\cap L^\infty(\mathop{\mathbb R\kern 0pt}\nolimits^3))$ associated with the norm $\|f\|_{ET} \colonequals \sup_{0 \leq t \leq T} \|f(t)\|_{L^{1}\cap L^\infty}$. Similarly, we define the operator $J^{\epsilon}(\cdot)$ on $\mathcal{E}_{T}$ by $$\begin{aligned} J^{\epsilon}(f)(t,v) \colonequals f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(|f| )(\tau, v) \mathrm{d}\tau.\end{aligned}$$ $\bullet$ Given $f \in \mathcal{E}_{T}$, we now check that $J^{\epsilon}(f) \in \mathcal{E}_{T}$. By [\[Q-g-L1-B-E\]](#Q-g-L1-B-E){reference-type="eqref" reference="Q-g-L1-B-E"} and [\[Q-g-Linfty-B-E\]](#Q-g-Linfty-B-E){reference-type="eqref" reference="Q-g-Linfty-B-E"}, we derive that $$\begin{aligned} \label{map-on-itself-ET} \|J^{\epsilon}(f)\|_{ET} \leq \|f_{0}\|_{L^1 \cap L^{\infty}} + T C_{\epsilon, \phi} (1+ \|f\|_{ET}) \|f\|_{ET}^2, \end{aligned}$$ where $C_{\epsilon, \phi} \lesssim I_{3} + \epsilon^{-3} I_{0}$. This means that $J^{\epsilon}(\cdot)$ maps $\mathcal{E}_{T}$ into itself. $\bullet$ Let $\mathcal{F}_{T} \colonequals \{f \in \mathcal{E}_{T}: \|f\|_{ET} \leq 2 \|f_{0}\|_{L^1 \cap L^{\infty}} \}$. We want to show that $J^{\epsilon}(\cdot)$ is a contraction mapping on the complete metric space $(\mathcal{F}_{T}, \|\cdot - \cdot\|_{ET})$ for a short time $T>0$. By [\[map-on-itself-ET\]](#map-on-itself-ET){reference-type="eqref" reference="map-on-itself-ET"}, if $T$ satisfies $$\begin{aligned} \label{small-condition-1-on-T-boson} 4 T C_{\epsilon, \phi} (1 + 2\|f_{0}\|_{L^1 \cap L^{\infty}}) \|f_{0}\|_{L^1 \cap L^{\infty}} \leq 1, \end{aligned}$$ then $J^{\epsilon}(\cdot)$ maps $\mathcal{F}_{T}$ into itself. Now we prove the contraction property. Since $||x| - |y|| \leq |x-y|$, for $f, g \in \mathcal{F}_{T}$, we get $$\begin{aligned} \|J^{\epsilon}(f) - J^{\epsilon}(g)\|_{ET} &\leq& T C_{\epsilon, \phi} (\| f \|_{ET} + \| g \|_{ET} + 1) (\| f \|_{ET} + \| g \|_{ET}) \| f -g \|_{ET} \\ &\leq& 4\| f_0 \|_{L^1 \cap L^\infty} (4\| f_0 \|_{L^1 \cap L^\infty} + 1) T C_{\epsilon, \phi} \| f -g \|_{ET}. \end{aligned}$$ If $T$ satisfies $$\begin{aligned} \label{small-condition-2-on-T-BE} 4\| f_0 \|_{L^1 \cap L^\infty} (4\| f_0 \|_{L^1 \cap L^\infty} + 1) T C_{\epsilon, \phi} \leq \frac 12, \end{aligned}$$ then $J^{\epsilon}(\cdot)$ is a contraction mapping on the complete metric space $(\mathcal{F}_{T}, \|\cdot - \cdot\|_{ET})$. By the fixed point theorem, there exists a unique $f^{\epsilon} \in \mathcal{F}_{T}$ s.t. $f^{\epsilon} = J^{\epsilon}(f^{\epsilon})$. After a modification on a $v$-null set, there is a null set $Z \subset \mathop{\mathbb R\kern 0pt}\nolimits^3$ such that, for all $t \in [0, T]$ and $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$, $$\begin{aligned} % \label{solution-local-B-E} f^{\epsilon}(t,v) = J^{\epsilon}(f^{\epsilon}) (t,v) = f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(|f^{\epsilon}| )(\tau, v) \mathrm{d}\tau. 
\end{aligned}$$ Recalling [\[Q-g-Linfty-B-E\]](#Q-g-Linfty-B-E){reference-type="eqref" reference="Q-g-Linfty-B-E"}, we can see that for any $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$ and $t_1, t_2 \in [0, T]$, $$\begin{aligned} |f^{\epsilon}(t_2,v) - f^{\epsilon}(t_1,v)| \leq C_{\epsilon, \phi} (\| f_0 \|_{L^1 \cap L^\infty} + 1)\| f_0 \|_{L^1 \cap L^\infty}^2 |t_2 - t_1|, \end{aligned}$$ for some $C_{\epsilon, \phi} \lesssim I_{3} + \epsilon^{-3} I_{0}$. That is, $f^{\epsilon}(\cdot, v)$ is uniformly continuous w.r.t. $t$ on $[0, T]$ for any $v \in \mathop{\mathbb R\kern 0pt}\nolimits^3 \setminus Z$. Following the argument in [\[non-negativity\]](#non-negativity){reference-type="eqref" reference="non-negativity"}, we get that $f^{\epsilon}(t,v) \geq 0$ on $[0, T]$, i.e., $$\begin{aligned} \label{mild-solution-BE} f^{\epsilon}(t,v) = f_{0}(v) + \int_{0}^{t} Q_{UU}^{\epsilon}(f^{\epsilon})(\tau, v) \mathrm{d}\tau, \quad 0 \leq f^\epsilon(t,v). \end{aligned}$$ Conservation of mass is a direct result of [\[Q-g-L1-B-E\]](#Q-g-L1-B-E){reference-type="eqref" reference="Q-g-L1-B-E"} and Fubini's theorem. Moreover, the lifespan $T$ depends on $\epsilon, \phi$ and $\| f_0 \|_{L^1 \cap L^{\infty}}$, i.e., $T=T(\epsilon, \phi, f_0)$. In order to extend the solution to a positive time independent of $\epsilon$, we will prove the propagation of regularity in the weighted Sobolev space $H^{N}_{l}$. Assume that $f_0 \in H^{N}_{l}$ with $N, l \geq 2$. Again by [\[mild-solution-BE\]](#mild-solution-BE){reference-type="eqref" reference="mild-solution-BE"} and Theorem [Theorem 6](#UU-in-H-N-l-uniform-in-eps){reference-type="ref" reference="UU-in-H-N-l-uniform-in-eps"}, [\[hnl-norm-t1-2-t2\]](#hnl-norm-t1-2-t2){reference-type="eqref" reference="hnl-norm-t1-2-t2"} still holds. This shows that the solution verifies the *a priori* estimate: for any $t\in[0,T_M^*]$ with $T_M^*=\frac{21}{ 4C_{N,l,\phi} ( \|f_0\|_{H^{N}_{l}}^2 +1)^{3/2}}$, $$\begin{aligned} \sup_{0\le t \le T_{M}^{*}} \|f^{\epsilon}(t)\|_{H^N_{l}} \le 2 \|f_0\|_{H^N_{l}}, \end{aligned}$$ which implies the *a priori* upper bound: $$\begin{aligned} \sup_{0\le t \le T_{M}^{*}} \|f^{\epsilon}(t)\|_{L^\infty}\le 2C_S\|f_0\|_{H^N_{l}}. \end{aligned}$$ This enables us to use a continuity argument to extend the lifespan from $T=T(\epsilon, \phi, f_0)$ to $T_{M}^{*}$, which is independent of $\epsilon$. This ends the proof. ◻ # Asymptotic formula This section is devoted to the proof of Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. Let $f^\epsilon$ and $f$ be the solutions to [\[BUU\]](#BUU){reference-type="eqref" reference="BUU"} and [\[landau\]](#landau){reference-type="eqref" reference="landau"} respectively with the initial datum $f_0$ on $[0, T^*]$ where $T^*=T^*(N, l, \phi, \|f_0\|_{H^{N+3}_{l+5}})$ is given in Theorem [Theorem 1](#gwp-F-D){reference-type="ref" reference="gwp-F-D"} and [Theorem 2](#lwp-B-E){reference-type="ref" reference="lwp-B-E"}. The solutions are uniformly bounded in $H^{N+3}_{l+5}$. More precisely, $$\begin{aligned} \label{upper-bound-3-order-higher} \sup_{t \leq T^*} \|f^\epsilon(t)\|_{H^{N+3}_{l+5}} \leq 2 \|f_0\|_{H^{N+3}_{l+5}}, \quad \sup_{t \leq T^*} \|f(t)\|_{H^{N+3}_{l+5}} \leq 2 \|f_0\|_{H^{N+3}_{l+5}}. 
\end{aligned}$$ Let $R^\epsilon\colonequals \epsilon^{-\vartheta}(f^\epsilon-f)$, then it is not difficult to see that $R^\epsilon$ verifies the following equation: $$\begin{aligned} \label{EqR} \partial_t R^\epsilon&=&Q_1(f^\epsilon, R^\epsilon)+Q_1(R^\epsilon, f)+\epsilon^{-\vartheta}(Q_1(f,f)-Q_L(f,f))+\epsilon^{-\vartheta}(Q_2+Q_3)(f^\epsilon,f^\epsilon) \\ \nonumber &&+ \epsilon^{-\vartheta}R(f^\epsilon, f^\epsilon, f^\epsilon).\end{aligned}$$ We will prove $R^\epsilon$ is bounded in $H^{N}_{l}$ over $[0, T^*]$. To this end, we first derive an estimate for the operator difference $Q_1-Q_L$. This is the key point for the asymptotic formula. **Lemma 20**. *It holds that $$\begin{aligned} \label{operator-diff-l2l} \|Q_1(g,h)-Q_L(g,h)\|_{L^2_l} \lesssim_{l} \epsilon^{\vartheta} I_{3+\vartheta} \left( \|g\|_{H^3_{l+5}}\|h\|_{H^3}+\|g\|_{H^3_2}\|h\|_{H^3_{l+3}} \right). \end{aligned}$$* *Proof.* The estimate is proved in the spirit of [@des1]. The proof is divided into several steps. [*Step 1: Reformulation of $Q_1$.*]{.ul} Firstly, given $v, v_* \in \mathop{\mathbb R\kern 0pt}\nolimits^3$, we introduce an orthonormal basis of $\mathop{\mathbb R\kern 0pt}\nolimits^3$: $$\begin{aligned} \big(\frac{v-v_*}{|v-v_*|},~~ h^1_{v,v_*}, ~~h^2_{v,v_*}\big). \end{aligned}$$ Then $\sigma= \frac{v-v_*}{|v-v_*|}\cos\theta+ (\cos\varphi h^1_{v,v_*}+\sin\varphi h^2_{v,v_*})\sin\theta$, which implies $v'=v+\frac 12A$ and $v_*'=v_*-\frac 12A$ with $$A=-(v-v_*)(1-\cos\theta)+|v-v_*|(\cos\varphi h^1_{v,v_*}+\sin\varphi h^2_{v,v_*})\sin\theta.$$ Now $Q_1$ can be rewritten as $$\begin{aligned} Q_1(g,h)= \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\int_0^\frac{\pi}{2}\int_0^{2\pi} B^{\epsilon}_1\big[g(v_*-\frac 12A)h(v+\frac 12A)-g(v_*)h(v)\big]\sin\theta \mathrm{d}\varphi \mathrm{d}\theta \mathrm{d}v_*.\end{aligned}$$ By the Taylor expansion formula up to order 3: $$\begin{aligned} g(v_*-\frac 12A)=g(v_*)-\frac 12A\cdot\nabla_{v_*} g(v_*) +\frac 18A\otimes A:\nabla^2g(v_*)+r_1(v, v_*, \sigma), \\ h(v+\frac 12A)=h(v)+\frac 12A\cdot\nabla_{v} h(v)+\frac 18A\otimes A:\nabla^2h(v)+r_2(v, v_*, \sigma), \end{aligned}$$ where $$\begin{aligned} |r_1(v, v_*, \sigma)|\lesssim |A|^3\int_0^1 |\nabla^3 g (\iota(v_*))|\mathrm{d}\iota, \quad |r_2(v, v_*, \sigma)|\lesssim |A|^3\int_0^1 |\nabla^3 h (\kappa(v))| \mathrm{d}\kappa. \end{aligned}$$ Then we arrive at $$\begin{aligned} \label{apQ1} Q_1(g,h)&=& \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\int_0^\frac{\pi}{2} \int_0^{2\pi} \big[ \frac 12A \cdot (\nabla_{v}-\nabla_{v_*})(g(v_*)h(v)) \notag\\&&\qquad +\frac 18A\otimes A:(\nabla_v-\nabla_{v_*})^2(g(v_*)h(v))+\mathcal{R}^1(v, v_*, \sigma)\big] B^{\epsilon}_1 \sin\theta \mathrm{d}\varphi \mathrm{d}\theta \mathrm{d}v_*,\end{aligned}$$ where $$\begin{aligned} \mathcal{R}^1(v, v_*, \sigma) &=& r_1(v, v_*, \sigma)\big(h(v)+\frac 12A \cdot \nabla h(v)+\frac 18A\otimes A:\nabla^2h(v)+r_2(v, v_*, \sigma) \big) \\ && +\frac 18A\otimes A:\nabla^2g(v_*)\big(\frac 12A \cdot \nabla h(v)+\frac 18A\otimes A:\nabla^2h(v)+r_2(v, v_*, \sigma) \big) \\ &&-\frac 12A \cdot \nabla g(v_*)\big(\frac 18A\otimes A:\nabla^2h(v)+r_2(v, v_*, \sigma) \big)+g(v_*)r_2(v, v_*, \sigma). 
\end{aligned}$$ [*Step 2: Reduction of $Q_1$.*]{.ul} According to [\[apQ1\]](#apQ1){reference-type="eqref" reference="apQ1"}, if we define $T^\epsilon(v-v_*) \colonequals \int_0^\frac{\pi}{2} \int_0^{2\pi} \frac 12A B^{\epsilon}_1 \sin\theta \mathrm{d}\varphi \mathrm{d}\theta$ and $U^\epsilon(v-v_*)\colonequals \int_0^\frac{\pi}{2} \int_0^{2\pi} \frac 18A\otimes A B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi$, then $$\begin{aligned} \label{colli1} Q_1(g,h) &=& \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\Big[T^\epsilon(v-v_*)\cdot (\nabla_{v}-\nabla_{v_*})(g(v_*)h(v)) +U^\epsilon(v-v_*):(\nabla_v-\nabla_{v_*})^2(g(v_*)h(v))\Big]\mathrm{d}v_*\nonumber\\&& + \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\int_0^\frac{\pi}{2}\int_0^{2\pi} \mathcal{R}^1(v, v_*, \sigma)B^{\epsilon}_1\sin\theta \mathrm{d}\varphi \mathrm{d}\theta \mathrm{d}v_*.\end{aligned}$$ [Computation of $T^\epsilon$.]{.ul} It is not difficult to compute that $$\begin{aligned} \label{formula-of-T} T^\epsilon(v-v_*)=-8\pi I_{3} |v-v_*|^{-3}(v-v_*)+\mathcal{R}^2(v-v_*) , \end{aligned}$$ with $$\begin{aligned} \label{DEF-R2} \mathcal{R}^2(v-v_*) = 8\pi|v-v_*|^{-3}(v-v_*)\int_{\frac{\sqrt{2}|v-v_*|}{2\epsilon}}^\infty \hat{\phi}^2(r)r^3 \mathrm{d}r . \end{aligned}$$ [Computation of $U^\epsilon$.]{.ul} We claim that $$\begin{aligned} \label{colli2} U^\epsilon(v-v_*)= a(v-v_*)+\mathcal{R}^3(v-v_*) ,\end{aligned}$$ where $a$ is the matrix defined in [\[def-a\]](#def-a){reference-type="eqref" reference="def-a"}, and $$\begin{aligned} \label{DEF-R3} \mathcal{R}^3(z) &=& 2\pi|z|^{-1} \Pi(z) \left(-\int_{\frac{\sqrt{2}|z|}{2\epsilon}}^\infty \hat{\phi}^2(r)r^3 \mathrm{d}r- \int_0^{\frac{\sqrt{2}|z|}{2\epsilon}}\hat{\phi}^2(r)r^3 (\epsilon r)^2 |z|^{-2}\mathrm{d}r \right) \\ \notag && + \frac{\pi}{4} z \otimes z \int_0^{\pi/2} (1-\cos\theta)^2 B^{\epsilon}_1\sin\theta \mathrm{d}\theta. \end{aligned}$$ where $\Pi$ is the matrix defined in [\[def-a\]](#def-a){reference-type="eqref" reference="def-a"}. To see this, by definition, we have $$\begin{aligned} U^\epsilon(v-v_*) &=& \frac 18\int_0^\frac{\pi}{2} \int_0^{2\pi} \big(-(v-v_*)(1-\cos\theta)+|v-v_*|(\cos\varphi h^1_{v,v_*}+\sin\varphi h^2_{v,v_*})\sin\theta\big)\\&&\otimes \big(-(v-v_*)(1-\cos\theta)+|v-v_*|(\cos\varphi h^1_{v,v_*}+\sin\varphi h^2_{v,v_*})\sin\theta\big) B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi \\ &=&\frac{\pi}4(v-v_*)\otimes(v-v_*) \int_0^{\pi/2} (1-\cos\theta)^2 B^{\epsilon}_1\sin\theta \mathrm{d}\theta \\&& +\frac 18|v-v_*|^2\int_0^\frac{\pi}{2} \int_0^{2\pi} \sin^2\theta(\cos^2\varphi h^1_{v,v_*}\otimes h^1_{v,v_*}+ \sin^2\varphi h^2_{v,v_*}\otimes h^2_{v,v_*})B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi. \end{aligned}$$ Use the fact that $\frac{v-v_*}{|v-v_*|}\otimes \frac{v-v_*}{|v-v_*|}+h^1_{v,v_*}\otimes h^1_{v,v_*}+h^2_{v,v_*}\otimes h^2_{v,v_*}=Id$, then $$\begin{aligned} U^\epsilon(v-v_*) &=& 2\pi|v-v_*|^{-1}(Id- \frac{v-v_*}{|v-v_*|}\otimes \frac{v-v_*}{|v-v_*|}) \int_0^{\frac{\sqrt{2}|v-v_*|}{2\epsilon}}\hat{\phi}^2(r)r^3(1-(\epsilon r)^2|v-v_*|^{-2})\mathrm{d}r \\&&+ \frac{\pi}4(v-v_*)\otimes(v-v_*) \int_0^{\pi/2} (1-\cos\theta)^2 B^{\epsilon}_1\sin\theta \mathrm{d}\theta, \end{aligned}$$ which is enough to get [\[colli2\]](#colli2){reference-type="eqref" reference="colli2"}. 
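For the reader's convenience, the $\varphi$-integrations used in the computation above reduce to the elementary identities $$\begin{aligned} \int_0^{2\pi}\cos\varphi \,\mathrm{d}\varphi=\int_0^{2\pi}\sin\varphi \,\mathrm{d}\varphi=\int_0^{2\pi}\cos\varphi\sin\varphi \,\mathrm{d}\varphi=0, \qquad \int_0^{2\pi}\cos^2\varphi \,\mathrm{d}\varphi=\int_0^{2\pi}\sin^2\varphi \,\mathrm{d}\varphi=\pi, \end{aligned}$$ which is why all cross terms in the expansion of the tensor product vanish and the $h^1_{v,v_*}\otimes h^1_{v,v_*}$ and $h^2_{v,v_*}\otimes h^2_{v,v_*}$ contributions each carry a factor $\pi$. 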
Since $\nabla\cdot \big((|x|^2 Id- x\otimes x)f(|x|^2)\big)=-2xf(|x|^2)$, recalling [\[def-a\]](#def-a){reference-type="eqref" reference="def-a"} and [\[formula-of-T\]](#formula-of-T){reference-type="eqref" reference="formula-of-T"}, one has $$\begin{aligned} && (\nabla_v-\nabla_{v_*})\cdot a(v,v_*)=-8\pi I_{3} |v-v_*|^{-3}(v-v_*) = T^\epsilon(v,v_*)-\mathcal{R}^2(v-v_*) . \end{aligned}$$ Plugging this and [\[colli2\]](#colli2){reference-type="eqref" reference="colli2"} into [\[colli1\]](#colli1){reference-type="eqref" reference="colli1"}, we get $$\begin{aligned} Q_1(g,h)&=& \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} (\nabla_v-\nabla_{v_*})\cdot \big[a(v,v_*)(\nabla_v-\nabla_{v_*})(g_*h)\big]\mathrm{d}v_*+\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} \mathcal{R}^3(v-v_*) :\big[(\nabla_v-\nabla_{v_*})^2 (g_*h)\big]\mathrm{d}v_*\\ &&+\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\mathcal{R}^2(v-v_*) \cdot (\nabla_v-\nabla_{v_*})(g_*h) \mathrm{d}v_* + \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\int_0^\frac{\pi}{2}\int_0^{2\pi} \mathcal{R}^1(v, v_*, \sigma)B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi \mathrm{d}v_*.\end{aligned}$$ Note that the first integral term is the Landau operator. [*Step 3: Estimate of $Q_1-Q_L$.*]{.ul} From the above equality, we arrive at $$\begin{aligned} % \label{apQ2} Q_1(g,h)-Q_L(g,h) &=& \int \mathcal{R}^1(v, v_*, \sigma)B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi \mathrm{d}v_* +\int \mathcal{R}^2(v-v_*) \cdot (\nabla_v-\nabla_{v_*})(g_*h) \mathrm{d}v_*\\ && + \int \mathcal{R}^3(v-v_*) :\big[(\nabla_v-\nabla_{v_*})^2 (g_*h)\big]\mathrm{d}v_* \colonequals\sum_{i=1}^3 \mathcal{Q}_i. \end{aligned}$$ [Estimate of $\mathcal{Q}_3$.]{.ul} Recalling [\[DEF-R3\]](#DEF-R3){reference-type="eqref" reference="DEF-R3"} for $\mathcal{R}^3$, it is easy to see $$\begin{aligned} |\mathcal{R}^3(z) | \lesssim \epsilon^{\vartheta} I_{3+\vartheta} |z|^{-1-\vartheta}, \end{aligned}$$ which gives $$\begin{aligned} \|\mathcal{Q}_3\|_{L^2_l} \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|g\|_{H^2_2}\|h\|_{H^2_l}.\end{aligned}$$ [Estimate of $\mathcal{Q}_2$.]{.ul} Recalling [\[DEF-R2\]](#DEF-R2){reference-type="eqref" reference="DEF-R2"}, $$\begin{aligned} \epsilon^{-\vartheta}\mathcal{R}^2(x)= 8\pi |x|^{-3-\vartheta}x\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3\big(\frac{|x|}{\epsilon}\big)^{\vartheta}\mathrm{d}r,\end{aligned}$$ it is obvious that $\epsilon^{-\vartheta}|\mathcal{R}^2(x)|\lesssim I_{3+\vartheta} |x|^{-2-\vartheta}$. If $\vartheta=1$, we claim that $\epsilon^{-1}\mathcal{R}^2$ is the kernel of a Calderon-Zygmund operator. To see this, we first compute directly that for any $0<R_1<R_2<\infty$, $$\begin{aligned} \int_{R_1<|x|<R_2} \epsilon^{-1}\mathcal{R}^2(x) dx=0, \quad \sup_{R>0} \int_{R <|x|< 2R} \epsilon^{-1} |\mathcal{R}^2(x)| dx \lesssim I_{4}. \end{aligned}$$ Next we need to check that $\epsilon^{-1} \mathcal{R}^2$ satisfies Hörmander's condition: $$\begin{aligned} \label{R2condition} \int_{|x|\ge 2|y|} |\epsilon^{-1}\mathcal{R}^2(x)-\epsilon^{-1}\mathcal{R}^2(x-y)|dx\lesssim I_{4}. 
\end{aligned}$$ Note that $$\begin{aligned} \epsilon^{-1}\mathcal{R}^2(x)-\epsilon^{-1}\mathcal{R}^2(x-y) &=& 8\pi |x|^{-3}x\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r - 8\pi |x-y|^{-3}(x-y)\int_{\frac{\sqrt{2}|x-y|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r \\ &=& 8\pi \left(|x|^{-3}x - |x-y|^{-3}(x-y) \right) \int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r \\ &&+ 8\pi |x-y|^{-3}(x-y) \left( \int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r - \int_{\frac{\sqrt{2}|x-y|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r \right) . \end{aligned}$$ Under the condition $|x|\ge 2|y|$, it is easy to see that $|x-y| \sim |x|$ and $\big|x|x|^{-3}-(x-y)|x-y|^{-3}\big|\lesssim |x|^{-3}|y|$, then $$\begin{aligned} |\epsilon^{-1}\mathcal{R}^2(x)-\epsilon^{-1}\mathcal{R}^2(x-y)| \lesssim |x|^{-3}|y| \int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty \hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r + |x|^{-2} \int_{ \frac{\sqrt{2}}{2\epsilon} \min\{|x|, |x-y|\}}^{ \frac{\sqrt{2}}{2\epsilon} \max\{ |x|, |x-y|\}} \hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r . \end{aligned}$$ For the first term, $$\begin{aligned} \int_{|x|\ge 2|y|} |x|^{-3}|y| \int_{\frac{\sqrt{2}|x|}{2\epsilon}}^\infty\hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r \mathrm{d}x \lesssim I_{4} |y| \int_{|x|\ge 2|y|} |x|^{-4} \mathrm{d}x \lesssim I_{4}. \end{aligned}$$ For the second term, $$\begin{aligned} &&\int_{|x|\ge 2|y|} |x|^{-2} \int_{ \frac{\sqrt{2}}{2\epsilon} \min\{|x|, |x-y|\}}^{ \frac{\sqrt{2}}{2\epsilon} \max\{ |x|, |x-y|\}} \hat{\phi}^2(r)r^3 \epsilon^{-1} \mathrm{d}r \mathrm{d}x \\ &\lesssim& \int_{|x|\ge 2|y|} |x|^{-3}\bigg(\int_{\frac{\sqrt{2}(|x|-|y|) }{2\epsilon}}^{\frac{\sqrt{2}|x| }{2\epsilon}}+\int_{\frac{\sqrt{2}|x| }{2\epsilon}}^{\frac{\sqrt{2}(|x|+|y|) }{2\epsilon}}\bigg) \hat{\phi}^2(r)r^4 \mathrm{d}r \mathrm{d}x \\ &\lesssim& \int_{|y|\le \sqrt{2}\epsilon r}\hat{\phi}^2(r) r^4 \mathrm{d}r\int_{\sqrt{2}\epsilon r\le |x|\le2\sqrt{2}\epsilon r}|x|^{-3} \mathrm{d}x+ \int_{2|y|\le \sqrt{2}\epsilon r}\hat{\phi}^2(r)r^4 \mathrm{d}r\int_{\sqrt{2}/2\epsilon r\le |x|\le\sqrt{2}\epsilon r}|x|^{-3} \mathrm{d}x\lesssim I_{4}.\end{aligned}$$ Combining these two estimates will yield [\[R2condition\]](#R2condition){reference-type="eqref" reference="R2condition"}. Thus we have $$\begin{aligned} \|\mathcal{Q}_2\|_{L^2_l} \lesssim \epsilon^{\vartheta} I_{3+\vartheta} \|g\|_{H^2_2}\|h\|_{H^2_l}.\end{aligned}$$ [Estimate of $\mathcal{Q}_1$.]{.ul} Recalling the definition of $\mathcal{R}^1(v, v_*, \sigma)$, it is bounded by $$\begin{aligned} && |A|^3\bigg(|g_*|\int_0^1 |\nabla^3h (\kappa(v))|\mathrm{d}\kappa+|(\nabla g)_*||\nabla^2 h|+|(\nabla^2 g)_*||\nabla h| +\int_0^1 |\nabla^3g (\iota(v_*))| \mathrm{d}\iota |h|\bigg) \\&+& |A|^4 \bigg(|(\nabla g)_*|\int_0^1|\nabla^3h (\kappa(v))|\mathrm{d}\kappa+|(\nabla^2 g)_*||\nabla^2 h|+ \int_0^1|\nabla^3g (\iota(v_*))| \mathrm{d}\iota |\nabla h|\bigg) \\ &+& |A|^5 \bigg(|(\nabla^2 g)_*|\int_0^1|\nabla^3h (\kappa(v))|\mathrm{d}\kappa + \int_0^1|\nabla^3g (\iota(v_*))| \mathrm{d}\iota |\nabla^2 h|\bigg) \\ &+& |A|^6 \int_0^1 |\nabla^3g (\iota(v_*))|\mathrm{d}\iota \int_0^1|\nabla^3h (\kappa(v))|\mathrm{d}\kappa \colonequals \sum_{i=1}^4 \mathcal{R}^1_i .\end{aligned}$$ Let $\mathcal{Q}_1^i\colonequals \int_{v_*,\theta,\varphi} \mathcal{R}^1_i(v,v_*) B^{\epsilon}_1\sin\theta \mathrm{d}\theta \mathrm{d}\varphi \mathrm{d}v_*$. 
Using the facts $|A|\lesssim \sin(\theta/2)|v-v_*|$ and $W_l \lesssim_{l} W_l(\kappa(v))+W_l(\iota(v_*))$, by the C-S inequality and [\[Q1-result-with-factor-a-minus3-with-small-facfor\]](#Q1-result-with-factor-a-minus3-with-small-facfor){reference-type="eqref" reference="Q1-result-with-factor-a-minus3-with-small-facfor"}, we have $$\begin{aligned} \sum_{i=1}^4|\langle \mathcal{Q}_3^iW_l, F \rangle| \lesssim_{l} \epsilon^{\vartheta}I_{3+\vartheta} \|F\|_{L^2} (\|g\|_{H^3_{l+5}}\|h\|_{H^3}+\|g\|_{H^3_2}\|h\|_{H^3_{l+3}}), \end{aligned}$$ which yields that $\|\mathcal{Q}_3\|_{L^2_l} \lesssim_{l} \epsilon^{\vartheta} I_{3+\vartheta} (\|g\|_{H^3_{l+5}}\|h\|_{H^3}+\|g\|_{H^3_2}\|h\|_{H^3_{l+3}}).$ The desired result [\[operator-diff-l2l\]](#operator-diff-l2l){reference-type="eqref" reference="operator-diff-l2l"} follows by patching together all the estimates. ◻ Now we are in a position to prove the asymptotic expansion in Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}. *Proof of Theorem [Theorem 3](#mainthm){reference-type="ref" reference="mainthm"}.* To derive the asymptotic formula in the theorem, the key point is to give the energy estimates for $R^\epsilon$ in the space $H^{N}_{l}$ for $N \geq 0, l \geq 2$. Recalling [\[EqR\]](#EqR){reference-type="eqref" reference="EqR"}, the proof is divided into several steps. [*Step 1: Estimate of $Q_1(f^\epsilon, R^\epsilon)$.*]{.ul} We claim that $$\begin{aligned} % \label{as1} \sum_{m=0}^{N} \sum_{|\alpha|=m} \langle \partial^{\alpha}Q_1(f^\epsilon, R^\epsilon) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle \lesssim_{N, l, \phi} \|f^\epsilon\|_{H^{N+2}_{l}} \|R^\epsilon\|_{H^{N}_{l}}^2. \end{aligned}$$ We need to consider $\langle Q_1( \partial^{\alpha_1}f^\epsilon, \partial^{\alpha_2} R^\epsilon) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle$ for $\alpha_1 + \alpha_2 =\alpha$. If $|\alpha_1| = 0$, we use [\[H-2-result\]](#H-2-result){reference-type="eqref" reference="H-2-result"} and Remark [Remark 10](#Q1-still-holds){reference-type="ref" reference="Q1-still-holds"}. If $|\alpha_1| = 1$, we use Lemma [Lemma 17](#Q1EN-g-order-1){reference-type="ref" reference="Q1EN-g-order-1"}. If $|\alpha_1| \geq 2$, we use Lemma [Lemma 15](#Q1EN){reference-type="ref" reference="Q1EN"}. [*Step 2: Estimate of $Q_1(R^\epsilon, f)$.*]{.ul} Using [\[Q1-h1-h2-l2-weighted\]](#Q1-h1-h2-l2-weighted){reference-type="eqref" reference="Q1-h1-h2-l2-weighted"}, we have $$\begin{aligned} % \label{R-f-R} \sum_{m=0}^{N} \sum_{|\alpha|=m} \langle \partial^{\alpha}Q_1 (R^\epsilon, f) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle \lesssim_{N, l, \phi} \|f \|_{H^{N+2}_{l}} \|R^\epsilon\|_{H^{N}_{l}}^2. \end{aligned}$$ [*Step 3: Estimate of $\epsilon^{-\vartheta}(Q_1(f,f)-Q_L(f,f))$.*]{.ul} Note that [\[operator-diff-l2l\]](#operator-diff-l2l){reference-type="eqref" reference="operator-diff-l2l"} yields $$\begin{aligned} \epsilon^{-\vartheta}\|Q_1(g,h)-Q_L(g,h)\|_{H^m_l} \lesssim_{m, l} I_{3+\vartheta} \left( \|g\|_{H^{m+3}_{l+5}}\|h\|_{H^{m+3}}+\|g\|_{H^{m+3}_2} \|h\|_{H^{m+3}_{l+3}} \right),\end{aligned}$$ which gives $$\begin{aligned} % \label{operator-difference-energy} \sum_{m=0}^{N}\sum_{|\alpha|=m, \alpha_1+\alpha_2=\alpha} \big| \langle \epsilon^{-\vartheta}(Q_1(\partial^{\alpha_1}f,\partial^{\alpha_2}f)-Q_L(\partial^{\alpha_1}f,\partial^{\alpha_2}f)) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle\big| \lesssim_{N, l, \phi} \|f \|_{H^{N+3}_{l+5}}^2 \|R^\epsilon\|_{H^{N}_{l}}. 
\end{aligned}$$ [*Step 4: Estimate of $\epsilon^{-\vartheta}(Q_2+Q_3)(f^\epsilon,f^\epsilon)$.* ]{.ul} We claim that $$\begin{aligned} %\label{as3} \sum_{m=0}^{N}\sum_{|\alpha|=m, \alpha_1+\alpha_2=\alpha} |\langle \epsilon^{-\vartheta}(Q_2+Q_3)(\partial^{\alpha_1} f^\epsilon,\partial^{\alpha_2} f^\epsilon)) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle|\lesssim_{N, l, \phi} \|f^\epsilon\|_{H^{N+2}_{l}}^2 \|R^\epsilon\|_{H^{N}_{l}}. \end{aligned}$$ For the $Q_2$ term, by using [\[g-h-total-vartheta\]](#g-h-total-vartheta){reference-type="eqref" reference="g-h-total-vartheta"}, we get $$\begin{aligned} |\langle \epsilon^{-\vartheta} Q_2(\partial^{\alpha_1} f^\epsilon,\partial^{\alpha_2} f^\epsilon)) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle| | \lesssim_{l} (I_{3+\vartheta} + I^{\prime}_{3+\vartheta}) \|f^\epsilon\|_{H^{N+2}_{l}}^2 \|R^\epsilon\|_{H^{N}_{l}}. \end{aligned}$$ For the $Q_3$ term, by using [\[Q-3-vartheta\]](#Q-3-vartheta){reference-type="eqref" reference="Q-3-vartheta"}, we get $$\begin{aligned} |\langle \epsilon^{-\vartheta} Q_3(\partial^{\alpha_1} f^\epsilon,\partial^{\alpha_2} f^\epsilon)) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle| | \lesssim_{l} I_{3} \|f^\epsilon\|_{H^{N+2}_{l}}^2 \|R^\epsilon\|_{H^{N}_{l}}. \end{aligned}$$ [*Step 5: Estimate of $\epsilon^{-\vartheta}R(f^\epsilon, f^\epsilon, f^\epsilon)$.*]{.ul} Applying [\[R-general-rough-versionh-h2-h2-h2-weighted\]](#R-general-rough-versionh-h2-h2-h2-weighted){reference-type="eqref" reference="R-general-rough-versionh-h2-h2-h2-weighted"}, we have $$\begin{aligned} %\label{as4} \sum_{m=0}^{N} \sum_{|\alpha|=m} |\langle \epsilon^{-3} \partial^{\alpha}R(f^\epsilon, f^\epsilon, f^\epsilon) W_{l}, W_{l}\partial^{\alpha}R^\epsilon \rangle | \lesssim_{N, l, \phi} \|f^\epsilon\|_{H^{N+2}_{l}}^3 \|R^\epsilon\|_{H^{N}_{l}}. \end{aligned}$$ [*Step 6: Closure of the energy estimates.*]{.ul} Patching together all the estimates in the previous steps, we arrive at $$\begin{aligned} \frac{d}{\mathrm{d}t} \|R^\epsilon\|_{H^{N}_{l}}^2 &\lesssim_{N, l, \phi} & \|f^\epsilon\|_{H^{N}_{l}} \|R^\epsilon\|_{H^{N}_{l}}^2 + \|f \|_{H^{N+2}_{l}} \|R^\epsilon\|_{H^{N}_{l}}^2 + \|f \|_{H^{N+3}_{l+5}}^2 \|R^\epsilon\|_{H^{N}_{l}} \\ &&+ \|f^\epsilon\|_{H^{N+2}_{l}}^2 \|R^\epsilon\|_{H^{N}_{l}} + \|f^\epsilon\|_{H^{N+2}_{l}}^3 \|R^\epsilon\|_{H^{N}_{l}}. \end{aligned}$$ Using the uniform upper bounds [\[upper-bound-3-order-higher\]](#upper-bound-3-order-higher){reference-type="eqref" reference="upper-bound-3-order-higher"} of $\|f^{\epsilon} \|_{H^{N+3}_{l+5}}$ and $\|f \|_{H^{N+3}_{l+5}}$, we get $$\begin{aligned} \frac{d}{\mathrm{d}t} \|R^\epsilon\|_{H^{N}_{l}} \lesssim_{N, l, \phi} (\|f_0\|_{H^{N+3}_{l+5}} + \|f_0\|_{H^{N+3}_{l+5}}^3) (\|R^\epsilon\|_{H^{N}_{l}} + 1). \end{aligned}$$ Then [\[error-estimate\]](#error-estimate){reference-type="eqref" reference="error-estimate"} follows the Grönwall's inequality. ◻ **Acknowledgments.** The work was initiated when M. Pulvirenti visited Tsinghua University in 2016. The work was partially supported by National Key Research and Development Program of China under the grant 2021YFA1002100. Ling-Bing He was also supported by NSF of China under the grant 12141102. 
Yu-Long Zhou was partially supported by NSF of China under the grant 12001552, Science and Technology Projects in Guangzhou under the grant 202201011144, and Youth Talent Support Program of Guangdong Provincial Association for Science and Technology under the grant SKXRC202311.
--- abstract: | The shallow water flow model is widely used to describe water flows in rivers, lakes, and coastal areas. Accounting for uncertainty in the corresponding transport-dominated nonlinear PDE models presents theoretical and numerical challenges that motivate the central advances of this paper. Starting with a spatially one-dimensional hyperbolicity-preserving, positivity-preserving stochastic Galerkin formulation of the parametric/uncertain shallow water equations, we derive an entropy-entropy flux pair for the system. We exploit this entropy-entropy flux pair to construct structure-preserving second-order energy conservative, and first- and second-order energy stable finite volume schemes for the stochastic Galerkin shallow water system. The performance of the methods is illustrated on several numerical experiments.\ \ Keywords: stochastic Galerkin method, finite volume method, structure-preserving discretization, shallow water equations, hyperbolic systems of conservation law and balance laws.\ \ AMS subject classifications: 35L65, 35Q35, 35R60, 65M60, 65M70, 65M08 address: - San Diego, CA, USA - Department of Mathematics, The University of Utah, Salt Lake City, UT 84112, USA - Department of Mathematics and Scientific Computing and Imaging (SCI) Institute, The University of Utah, Salt Lake City, UT 84112, USA author: - Dihan Dai, Yekaterina Epshteyn and Akil Narayan bibliography: - bibfile.bib date: October 2023 title: Energy Stable and Structure-Preserving Schemes for the Stochastic Galerkin Shallow Water Equations --- # Introduction The one-dimensional Saint-Venant system of shallow water equations (SWE) is a popular model of water flows where vertical length scales are much smaller than horizontal ones [@barredesaint-venant_theorie_1871]. This system in conservative form is given by, $$\begin{aligned} \label{eq:swesg1-1d} U_t + F(U)_x &= S(U), & U = (h, q)^\top \in \mathbbm{R}^2,\end{aligned}$$ where $U = U(x,t)$ is the vector of conservative variables; $h(x,t)$ is the water height (a mass-like variable) and $q(x,t)$ is the water discharge (a momentum-like variable). The flux $F$ and source term $S$ are given by, $$\begin{aligned} \label{eq:swe-flux-1d} F(U) &= \left(\begin{array}{c} q \\ \frac{{(q)}^2}{h} + \frac{g h^2}{2} \end{array}\right), & S(U) &= \left(\begin{array}{c} 0 \\ -g h B' \end{array}\right).\end{aligned}$$ where $B(x)$ is the (assumed known) bottom topography function and $g > 0$ is the gravitational constant. The system [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} is supplemented with initial and boundary data that we omit for the time being. The one-dimensional SWE model [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} is hyperbolic system of partial differential equations (PDE) if $h > 0$, and hence with the non-zero source $S$, then [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} is a nonlinear hyperbolic balance law. Because of this, it inherits the standard challenges in developing numerical methods for such models: solutions generically develop discontinuities in finite time even with smooth initial data, non-uniqueness of weak solutions should be rectified by an implicit or explicit numerical imposition of entropy conditions, and implicit time-integration solvers are challenging to implement due to the nonlinearity [@dafermos2016hyperbolic; @leveque_numerical_1992; @leveque_finite_2002]. 
In addition to all this, the SWE has challenges that are somewhat specific to its particular form: positivity of the water height $h$ should be maintained, and numerical schemes should accurately capture near-equilibrium dynamics, which is typically achieved by imposing the *well-balanced* property [@bermudez_upwind_1994], i.e., that the PDE equilibrium states are exactly captured at the discrete level. A more nebulous and hence more frustrating challenge is that of *uncertainty* in the model. For example, one may have incomplete, partial information about the initial data or the bottom topography function $B$. In such cases, one frequently models this data as a random variable or process, and hence the solution $U$ to [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} is random. We consider the somewhat more simple situation when the input uncertainty is encoded with a finite-dimensional random variable, in which case [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} becomes a parametric model (with the input random variables serving as the parameters). Even with this simplification, the parametric or stochastic nature of the solution exacerbates many of the previously described numerical challenges. A particularly successful approach for handling such problems that we will employ is the polynomial Chaos (PC) method, wherein $U$ is approximated as a polynomial function of the input parameters [@xiu2010numerical; @smith_uncertainty_2013; @sullivan2015introduction]. The class of *non-intrusive* PC strategies construct the polynomial by collecting an ensemble of solutions to [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} at a collection of fixed values of the parameters. This approach is attractive since it can exploit existing and trusted legacy solvers for [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"}, for which there are several effective choices [@zhou_surface_2001; @rogers_mathematical_2003; @crnjaric-zic_balanced_2004; @xing_high_2006; @CiCP-1-100; @kurganov_centralupwind_2002; @xing_high_2005; @xing_chapter_2017; @liu_wellbalanced_2018; @bryson_wellbalanced_2011; @kurganov_secondorder_2007-2; @xing_high_2016; @epshteyn_adaptive_2023; @kurganov_finite-volume_2018]. However, this approach suffers from the disadvantage that making concrete statements about the quality or properties of the resulting polynomial approximation can be challenging. For example, one cannot guarantee that entropy conditions are satisfied if the polynomial approximation is evaluated away from the parameter ensemble used to construct the approximation. This paper is concerned with an alternative *intrusive* approach, the stochastic Galerkin (SG) method for PC approximation, which addresses the parametric dependence in a Galerkin fashion, e.g., by enforcing that certain probabilistic moments of [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} vanish. This approach has the potential to provide pathways to mathematical rigor of numerical methods through weak enforcement of the parametric dependence. SG methods transform a parametric model [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} into a new non-parametric model of larger system size. Since the new SG formulation is non-parametric, one can apply typical deterministic numerical methods for systems of PDEs to solve the SG problem. 
Such approaches have shown particular success for modeling parametric dependence in elliptic problems; see, e.g., [@cohen_approximation_2015]. However, the notable drawback of SG methods when applied to (nonlinear) hyperbolic PDEs is that the new non-parametric SG system need not be a hyperbolic PDE itself, which changes the essential character of the SG system relative to the original system. Recent work has developed an SG formulation for the SWE in conservative form that involves a special SG treatment for the nonlinear, non-polynomial terms [@doi:10.1137/20M1360736]. Such an approach can be used to develop a well-balanced, hyperbolicity-preserving, and positivity-preserving finite volume method to solve the SG SWE system. This approach has also been extended to two-dimensional SWE systems [@dai2021hyperbolicity]. ## Contributions of this article We make the following contributions that build on [@doi:10.1137/20M1360736]: - We derive an entropy-entropy flux pair for the spatially one-dimensional hyperbolicity-preserving, positivity-preserving SG SWE system derived in [@doi:10.1137/20M1360736], see [Theorem 2](#thm:eef-sgswe-1d){reference-type="ref" reference="thm:eef-sgswe-1d"}. Entropy-entropy flux pairs are the theoretical starting point for proposing entropy admissibility criteria to resolve non-uniqueness of weak solutions. - Using the entropy-entropy flux pair, we devise second-order energy conservative, and first- and second-order energy stable finite volume schemes for the SG SWE, all of which are also well-balanced. See [\[thm:ec-scheme,thm:es1,thm:es2\]](#thm:ec-scheme,thm:es1,thm:es2){reference-type="ref" reference="thm:ec-scheme,thm:es1,thm:es2"}, with the procedure in [\[alg:swesg\]](#alg:swesg){reference-type="ref" reference="alg:swesg"}. The designed energy conservative and energy stable schemes are the stochastic extensions of the schemes developed in [@fjordholm2011well; @fjordholm2012arbitrarily]. - We provide numerical experiments that explore the simulation capabilities of the new schemes. To the best of our knowledge, these are the first schemes for any SG SWE system that boast energy stability and the well-balanced property while also being positivity- and hyperbolicity-preserving. An outline of this paper is as follows: we first introduce our notation, along with background on PC methods and the SG SWE system from [@doi:10.1137/20M1360736]. We then provide our entropy-entropy flux pair construction for the SG SWE system, followed by the statement of the energy conservative and energy stable schemes that we develop, along with proofs of their theoretical properties, as well as their algorithmic details. We then compile numerical examples that demonstrate the performance of our schemes, and close with a brief summary of the main results and some future research directions. We summarize our notation in this article in [\[tab:notation\]](#tab:notation){reference-type="ref" reference="tab:notation"}. # Preliminaries {#sec:prelim} ## Notation We use $\|\cdot\|$ to denote the standard Euclidean ($\ell^2$) norm operating on vectors. If $f: \mathbbm{R}^m \rightarrow \mathbbm{R}^n$ for $m, n \in \mathbbm{N}$, then we write $f(x)$ for $x = (x_1, \ldots, x_m)$, and $f(x) = (f_1, \ldots, f_n)$.
We use the following notation for the gradient: $$\begin{aligned} f_x \coloneqq \frac{\partial f}{\partial x} = \left(\begin{array}{cccc} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} &\cdots & \frac{\partial f_1}{\partial x_m} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} &\cdots & \frac{\partial f_2}{\partial x_m} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} &\cdots & \frac{\partial f_n}{\partial x_m} \end{array}\right) \in \mathbbm{R}^{n \times m}.\end{aligned}$$ When $n = 1$ (i.e., $f$ is scalar-valued) then $\frac{\partial^{2} f}{\partial x^{2}}$ is the $m \times m$ Hessian of $f$. If $\boldsymbol{A}$ is a square matrix, then we write $\boldsymbol{A} > 0$ and $\boldsymbol{A} \geq 0$ when $\boldsymbol{A}$ is positive definite and positive semi-definite, respectively. In work on the SWE system [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"} it is common to introduce the water velocity (equilibrium) variable $$\begin{aligned} \label{eq:u-SWE} u &\coloneqq \frac{q}{h},\end{aligned}$$ and we also make use of this variable in what follows. ## Polynomial Chaos Expansion In this section, we briefly review the results and notation for polynomial chaos expansion (PCE). More comprehensive results can be found in [@debusschere2004numerical; @xiu2010numerical; @sullivan2015introduction], etc. Let $\xi\in\mathbbm{R}^d$ be a random variable associated with Lebesgue density function $\rho$. Define the function space $$\begin{aligned} L^2_\rho(\mathbbm{R}^d) \coloneqq \left\{f: \mathbbm{R}^d\to\mathbbm{R}\middle\vert\left(\int_{\mathbbm{R}^d}f^2(s)\rho(s)ds\right)^{\frac{1}{2}}<+\infty\right\}. \end{aligned}$$ Assuming finite polynomial moments of all orders for $\rho$, there exists an orthonormal basis $\{\phi_k\}_{k = 1}^{\infty}$ of $L^2_\rho$, i.e., $$\begin{aligned} \label{eq:orthocond} \langle\phi_k,\phi_\ell\rangle_{\rho} &\coloneqq \int_{\mathbbm{R}^d} \phi_k(s) \phi_\ell(s) {\rho}(s)d s = \delta_{k, \ell}, & \phi_1(\xi) &\equiv 1,\end{aligned}$$ for all $k,\ell\in\mathbbm{N}$, where $\delta_{k,\ell}$ is the Kronecker delta. PCE seeks a representation of a random field $z(\cdot, \cdot, \xi)\in L^2_\rho$ in terms of a series of orthonormal polynomials for $\xi$, $$\label{eq:PCE} z(x, t, \xi) \stackrel{L^2_\rho}{=} \sum_{k = 1}^{\infty} \widehat{z}_k(x,t)\phi_k(\xi),$$ where $x,t$ are the deterministic spatial and temporal variables, and $\widehat{z}_k(x,t)$ are deterministic Fourier-like coefficients. The equation [\[eq:PCE\]](#eq:PCE){reference-type="eqref" reference="eq:PCE"} holds true for all $z(x,t;\cdot)\in L^2_\rho$ under mild conditions [@ernst_convergence_2012]. In practice, a finite truncation of [\[eq:PCE\]](#eq:PCE){reference-type="eqref" reference="eq:PCE"} is usually considered. Let $P$ be a $K$-dimensional polynomial subspace of $L^2_\rho$, $$\begin{aligned} \label{eq:P-def} P = \mathrm{span} \left\{ \phi_k, \;\; k=1, \ldots, K \right\},\end{aligned}$$ i.e., we let $\phi_k$ be an orthonormal basis for $P$. We make the common assumption that $1 \in P$, and for convenience we assume that, $$\begin{aligned} \phi_1(\xi) \equiv 1.\end{aligned}$$ A popular choice for $P$ is the total degree space, but several other options are possible.
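To make the truncation and the orthonormality requirement [\[eq:orthocond\]](#eq:orthocond){reference-type="eqref" reference="eq:orthocond"} concrete, the following short Python sketch (illustrative only, not part of our implementation) constructs the basis for the particular choice of a uniform random variable $\xi$ on $[-1,1]$ with $d = 1$ and $K = 5$, in which case the $\phi_k$ are normalized Legendre polynomials, and verifies orthonormality with a Gauss--Legendre quadrature rule.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

K = 5                                  # dimension of the subspace P (illustrative)
nodes, weights = leggauss(2 * K)       # Gauss-Legendre rule, exact for degree <= 4K - 1

def phi(k, x):
    """Orthonormal basis phi_k, k = 1, ..., K, for rho = 1/2 on [-1, 1]."""
    # Legendre.basis(k - 1) is the classical Legendre polynomial of degree k - 1;
    # the factor sqrt(2k - 1) normalizes it with respect to rho.
    return np.sqrt(2 * k - 1) * Legendre.basis(k - 1)(x)

# Gram matrix G_{k,l} = <phi_k, phi_l>_rho, computed by quadrature.
G = np.array([[0.5 * np.sum(weights * phi(k, nodes) * phi(l, nodes))
               for l in range(1, K + 1)] for k in range(1, K + 1)])

print(np.allclose(G, np.eye(K)))       # True: the basis satisfies (eq:orthocond)
```

Other densities $\rho$ with finite moments are handled analogously by replacing the Legendre family with the orthonormal polynomials associated with $\rho$ (e.g., normalized Hermite polynomials for a Gaussian $\xi$).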
The $K$-term PCE *approximation* of a random field $z$ onto $P$ is defined by the projection of [\[eq:PCE\]](#eq:PCE){reference-type="eqref" reference="eq:PCE"} onto $P$: $$\begin{aligned} \label{eq:PCEex} \Pi_P[z](x,t,\xi)\coloneqq\sum_{k=1}^K \widehat{z}_k(x,t)\phi_k(\xi).\end{aligned}$$ Using the orthogonality of the basis functions, the statistics of $\Pi_P[z]$ can be expressed in terms of the expansion coefficients. For example, the mean and the variance of $\Pi_P[z]$ are given by: $$\label{eq:expvar} \mathbb{E}[\Pi_P[z](x,t,\xi)] = \widehat{z}_1(x,t),\quad \text{Var}[\Pi_P[z](x,t,\xi)] = \sum_{k=2}^{K}\widehat{z}_k^2(x,t).$$ Let $\widehat{z} = \left(\widehat{z}_1,\cdots,\widehat{z}_K\right)\in \mathbbm{R}^K$ be the vector of the expansion coefficients in [\[eq:PCEex\]](#eq:PCEex){reference-type="eqref" reference="eq:PCEex"}. Define the linear operator $\mathcal{P}: \mathbbm{R}^{K} \rightarrow \mathbbm{R}^{K\times K}$ as $$\begin{aligned} \label{eq:pmatrix} \mathcal{P}(\widehat{z}) &\coloneqq \sum_{k=1}^K\widehat{z}_k\mathcal{M}_k, & \mathcal{M}_k &\in \mathbbm{R}^{K \times K}, & (\mathcal{M}_k)_{\ell, m} &= \langle\phi_k,\phi_\ell\phi_m\rangle_{\rho}.\end{aligned}$$ Fixing $\widehat{z} \in \mathbbm{R}^K$, then $\mathcal{P}(\widehat{z})$ is the (symmetric) quadratic form matrix representation of the bilinear operator $(\widehat{a}, \widehat{b}) \mapsto \left\langle a_P\, z_P, b_P \right\rangle_\rho$, where $z_P \coloneqq \sum_{k=1}^K \widehat{z}_k \phi_k(\xi)$ and similarly for $a_P, b_P$ with $\widehat{a}, \widehat{b} \in \mathbbm{R}^K$. Using the fact that $(\mathcal{M}_k)_{\ell,m}$ is symmetric under interchanging $k$ and $m$, a direct computation shows: $$\begin{aligned} \label{eq:pmatrixproperty} \mathcal{P}(\widehat{a}) &= \begin{pmatrix}\mathcal{M}_1\widehat{a}, \; \mathcal{M}_2\widehat{a}, \; \ldots, \; \mathcal{M}_K\widehat{a} \end{pmatrix}.\end{aligned}$$ A useful lemma is given as follows. **Lemma 1**. *For any two vectors $\widehat{a},\widehat{b}\in \mathbbm{R}^K$, $$\begin{aligned} \label{eq:commute} \mathcal{P}(\widehat{a})\widehat{b} &= \mathcal{P}(\widehat{b})\widehat{a}, & \widehat{b}^\top\mathcal{P}(\widehat{a}) &= \widehat{a}^\top\mathcal{P}(\widehat{b}).\end{aligned}$$* The proof is straightforward using [\[eq:pmatrix\]](#eq:pmatrix){reference-type="eqref" reference="eq:pmatrix"} and [\[eq:pmatrixproperty\]](#eq:pmatrixproperty){reference-type="eqref" reference="eq:pmatrixproperty"} along with the symmetry of $\mathcal{P}(\cdot)$. This result is a "commutative" property of the operator $\mathcal{P}(\cdot)$. For example, for any $a, b, c \in \mathbbm{R}^K$, $$\begin{aligned} \label{eq:acb} \frac{\partial}{\partial c}\left( a^\top \mathcal{P}(c) b \right) = a^\top \mathcal{P}(b).\end{aligned}$$ A *stochastic Galerkin* (SG) formulation of a $\xi$-parameterized PDE corresponds to making the ansatz that the state variable lies in the space $P$, and projecting the PDE residual onto the same space. Straightforward applications of this procedure to (nonlinear) hyperbolic PDEs typically do not result in hyperbolic SG formulations. ## Hyperbolicity-Preserving Stochastic Galerkin Formulation for the Shallow Water Equations In [@doi:10.1137/20M1360736], we have derived a hyperbolicity-preserving stochastic Galerkin formulation for the shallow water equations. We briefly recall the results in this section.
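Before recalling the SG system, we record a small numerical illustration of the operator $\mathcal{P}(\cdot)$ in [\[eq:pmatrix\]](#eq:pmatrix){reference-type="eqref" reference="eq:pmatrix"} and of the commutation identities [\[eq:commute\]](#eq:commute){reference-type="eqref" reference="eq:commute"} (a sketch with the same illustrative uniform-$\xi$ setup as above, not the paper's code); both identities are consequences of the full symmetry of the triple products $\langle \phi_k, \phi_\ell \phi_m\rangle_\rho$.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

K = 5
nodes, weights = leggauss(3 * K)       # exact for the triple products of degree <= 3(K-1)

# Rows of Phi are the normalized Legendre polynomials phi_1, ..., phi_K at the nodes.
Phi = np.vstack([np.sqrt(2 * k + 1) * Legendre.basis(k)(nodes) for k in range(K)])

# Triple-product matrices (M_k)_{l,m} = <phi_k, phi_l phi_m>_rho with rho = 1/2 on [-1,1].
M = np.einsum('q,kq,lq,mq->klm', 0.5 * weights, Phi, Phi, Phi)

def Pop(z):
    """The matrix P(z) = sum_k z_k M_k from (eq:pmatrix)."""
    return np.tensordot(z, M, axes=(0, 0))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(K), rng.standard_normal(K)
print(np.allclose(Pop(a) @ b, Pop(b) @ a))   # True: first identity in (eq:commute)
print(np.allclose(b @ Pop(a), a @ Pop(b)))   # True: second identity in (eq:commute)
```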
We make the ansatz that the solutions to $h, q$ lie in the polynomial space $P$, [\[eq:sg-ansatz1d\]]{#eq:sg-ansatz1d label="eq:sg-ansatz1d"} $$\begin{aligned} h \simeq h_P &\coloneqq \sum_{k=1}^K \widehat{h}_k(x,t) \phi_k(\xi), \\ q \simeq q_P &\coloneqq \sum_{k=1}^K (\widehat{q})_k(x,t) \phi_k(\xi), \end{aligned}$$ and use these to formulate a $\xi$-variable Galerkin projection of the SWE. We make a special choice of how the Galerkin projection of the nonlinear, non-polynomial term $(q)^2/h$ is truncated, which results in a *new* (stochastic Galerkin) system of balance laws whose state variables are the expansion coefficients in [\[eq:sg-ansatz1d\]](#eq:sg-ansatz1d){reference-type="eqref" reference="eq:sg-ansatz1d"} [@doi:10.1137/20M1360736]: $$\label{eq:swesg41d} \widehat{U}_t+(\widehat{F}(\widehat{U}))_x = \widehat{S}(\widehat{U}),$$ Here, $\widehat{U}\coloneqq (\widehat{h}^\top, \widehat{q}^\top)^\top \in \mathbbm{R}^{2 K}$, where $\widehat{h}, \widehat{q}$ are each length-$K$ vectors of the expansion coefficients in [\[eq:sg-ansatz1d\]](#eq:sg-ansatz1d){reference-type="eqref" reference="eq:sg-ansatz1d"}. The flux and the source terms are, $$\begin{aligned} \label{eq:sgfluxessource1d} &\widehat{F}(\widehat{U}) = \begin{pmatrix} \widehat{q}\\\mathcal{P}(\widehat{q})\mathcal{P}^{-1}(\widehat{h})\widehat{q}+\frac{1}{2}g\mathcal{P}(\widehat{h})\widehat{h} \end{pmatrix},&& &\widehat{S}(\widehat{U}) = \begin{pmatrix}0\\-g\mathcal{P}(\widehat{h})\widehat{B_x}\end{pmatrix},\end{aligned}$$ cf. [\[eq:swe-flux-1d\]](#eq:swe-flux-1d){reference-type="eqref" reference="eq:swe-flux-1d"}. The flux Jacobian, written in $K\times K$ blocks, is given by $$\begin{aligned} \label{eq:x-jacobian1d} \frac{\partial \widehat{F}}{\partial \widehat{U}} &= \begin{pmatrix}O&I\\g\mathcal{P}(\widehat{h})-\mathcal{P}(\widehat{q})\mathcal{P}^{-1}(\widehat{h})\mathcal{P}(\widehat{u})&\mathcal{P}(\widehat{q})\mathcal{P}^{-1}(\widehat{h})+\mathcal{P}(\widehat{u})\end{pmatrix}. \end{aligned}$$ We have introduced the term $$\begin{aligned} \label{eq:uPCE1d} \widehat{u}&= \mathcal{P}^{-1}(\widehat{h})\widehat{q},\end{aligned}$$ which we view as the vector of the PCE coefficients of the $x$-velocity $u$ introduced in [\[eq:u-SWE\]](#eq:u-SWE){reference-type="eqref" reference="eq:u-SWE"}, and is well-defined if $\mathcal{P}(\widehat{h})$ is invertible. The deterministic SWE are hyperbolic if the water height $h > 0$; there is a natural extension of this property to the SGSWE. **Theorem 1** (Theorem 3.1, [@doi:10.1137/20M1360736]). *If the matrix $\mathcal{P}(\widehat{h})$ is strictly positive definite for every point $(x,t)$ in the computational spatial-temporal domain, then the SG formulation [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} is hyperbolic.* This is proven by identifying a stochastic extension of the known eigenvector matrix for the deterministic SWE flux Jacobian $\frac{\partial F}{\partial U}$, and using this to show that $\frac{\partial \widehat{F}}{\partial \widehat{U}}$ is similar to a symmetric matrix and hence [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} is hyperbolic [@doi:10.1137/20M1360736]. # An Entropy-Entropy Flux Pair for SGSWE systems {#sec3} The formulation [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} will be considered in what follows. Our goal will be to derive entropy-entropy flux pairs for these formulations. 
The first step is for us to recall a known entropy-entropy flux pair for the *deterministic* SWE system. ## Entropy-Entropy Flux Pairs for Deterministic Shallow Water Equations It is well-known that solutions to systems of conservation/balance laws can develop shock discontinuities in finite time for generic initial data. Therefore, weak solutions, i.e., solutions in the sense of distributions, are usually considered. However, weak solutions are not necessarily unique, and to mitigate this issue, an additional *entropy admissibility criterion* is imposed [@dafermos2016hyperbolic; @benzoni2007multi] to identify the physically meaningful solution. For a general balance law in one space dimension $$\begin{aligned} \label{eq:1d-balance} U_t+F(U)_x = S(U),\end{aligned}$$ its entropy-entropy flux pair $(E(U), H(U))$ satisfies a *companion* balance law $$\begin{aligned} \label{eq:econd-11d} E(U)_t+H(U)_x = 0,\end{aligned}$$ where the *entropy* $E(U)$ is a scalar function that is convex in $U$, and $H$ is an *entropy flux* function. In order to be consistent with the original balance law for smooth $U$, the entropy-entropy flux pair $(E, H)$ should satisfy the following *compatibility condition*, $$\begin{aligned} \label{eq:compatibility1d} \frac{\partial E}{\partial U} (F_x - S) = H_x, \end{aligned}$$ which is simply the condition ensuring that multiplying [\[eq:1d-balance\]](#eq:1d-balance){reference-type="eqref" reference="eq:1d-balance"} by $\frac{\partial E}{\partial U}$ recovers [\[eq:econd-11d\]](#eq:econd-11d){reference-type="eqref" reference="eq:econd-11d"} when solutions are smooth. In the case of $S\equiv 0$ and $(E,H)=(E(U), H(U))$, equation [\[eq:compatibility1d\]](#eq:compatibility1d){reference-type="eqref" reference="eq:compatibility1d"} is the usual entropy condition for conservation laws. For a general system of balance laws in several spatial dimensions, an entropy-entropy flux pair need not exist. However, for a hyperbolic system of balance laws emerging from continuum physics, the companion balance law [\[eq:econd-11d\]](#eq:econd-11d){reference-type="eqref" reference="eq:econd-11d"} is usually related to the Second Law of thermodynamics, and the total energy of the system often serves as the entropy function. A variety of examples can be found in [@dafermos2016hyperbolic Section 3.3]. For the *deterministic* SWE system in [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"}, the total energy [@fjordholm2011well] is $$\begin{aligned} \label{eq:d-efun1d} E^d(U) = \underbrace{\frac{1}{2}qu}_{\text{kinetic energy}}+\underbrace{\frac{1}{2}gh^2+ghB}_{\text{potential energy}},\end{aligned}$$ where we recall that $u$ is the velocity defined in [\[eq:u-SWE\]](#eq:u-SWE){reference-type="eqref" reference="eq:u-SWE"}. For any smooth solution $U$, a direct calculation yields, $$\begin{aligned} \label{eq:sweecond-eq1d} E^d(U)_t + H^d(U)_x = 0,\end{aligned}$$ where $$\begin{aligned} \label{eq:d-eflux1d} H^d(U) &= \frac{1}{2}qu^2+gq h + g q B.\end{aligned}$$ This, along with the fact that $E^d$ is convex in $U$, establishes that $(E^d, H^d)$ is a valid entropy-entropy flux pair for [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"}.
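The compatibility condition [\[eq:compatibility1d\]](#eq:compatibility1d){reference-type="eqref" reference="eq:compatibility1d"} for the pair $(E^d, H^d)$ can also be verified symbolically. The following sketch (an illustration using `sympy`, not code from the paper) checks that the flux part $\frac{\partial E^d}{\partial U}\frac{\partial F}{\partial U} = \frac{\partial H^d}{\partial U}$ holds and that the bottom-topography contributions cancel, i.e. $\frac{\partial E^d}{\partial U} S = -\frac{\partial H^d}{\partial B}\, B_x$, which together give [\[eq:sweecond-eq1d\]](#eq:sweecond-eq1d){reference-type="eqref" reference="eq:sweecond-eq1d"} for smooth solutions.

```python
import sympy as sp

h, g = sp.symbols('h g', positive=True)
q, B, Bx = sp.symbols('q B B_x', real=True)
u = q / h

E = sp.Rational(1, 2) * q * u + sp.Rational(1, 2) * g * h**2 + g * h * B   # (eq:d-efun1d)
H = sp.Rational(1, 2) * q * u**2 + g * q * h + g * q * B                   # (eq:d-eflux1d)
F = sp.Matrix([q, q**2 / h + sp.Rational(1, 2) * g * h**2])                # (eq:swe-flux-1d)
S = sp.Matrix([0, -g * h * Bx])

gradE = sp.Matrix([[sp.diff(E, h), sp.diff(E, q)]])
gradH = sp.Matrix([[sp.diff(H, h), sp.diff(H, q)]])

# Flux part of the compatibility condition: dE/dU * dF/dU - dH/dU = 0.
print(sp.simplify(gradE * F.jacobian([h, q]) - gradH))   # Matrix([[0, 0]])
# Topography part: (dE/dU) S + (dH/dB) B_x = 0.
print(sp.simplify((gradE * S)[0] + sp.diff(H, B) * Bx))  # 0
```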
For (weak) solutions with shocks, the entropy admissibility criterion is that energy should dissipate in accordance with a vanishing viscosity principle, $$\begin{aligned} \label{eq:sweecond-dissipative1d} E^d(U)_t + H^d(U)_x \le 0. \end{aligned}$$ In what follows we will identify entropy-entropy flux pairs for the SGSWE model. This amounts to verifying that (i) such a pair satisfies the companion balance law (an equality for smooth solutions) and (ii) that the entropy function is convex in the state variable. ## An Entropy-Entropy Flux Pair for the one-dimensional SGSWE This section is dedicated to identifying an entropy-entropy flux pair for the SG system [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}. In this section, we will return to the notation $\widehat{U}$ (containing PC expansion coefficients) for the derivation of an entropy-entropy flux pair for the SG system. Our main result in this section is the following entropy-entropy flux pair for the one-dimensional SGSWE: **Theorem 2**. *Define the function,* *[\[eq:sg-epair1d\]]{#eq:sg-epair1d label="eq:sg-epair1d"} $$\begin{aligned} \label{eq:sg-efun1d} E(\widehat{U}) = \frac{1}{2}\left((\widehat{q})^\top\widehat{u}+g\Vert\widehat{h}\Vert^2\right)+g\widehat{h}^\top\widehat{B}, \end{aligned}$$ and also the flux function, $$\begin{aligned} \label{eq:sg-eflux1d} H(\widehat{U}) &= \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{q}) \widehat{u}+ g \widehat{q}^\top \widehat{h}+ g \widehat{q}^\top \widehat{B}. \end{aligned}$$* *If $\mathcal{P}(\widehat{h}) > 0$, then $(E, H)$ is an entropy-entropy flux pair for the one-dimensional SGSWE [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}.* Recall that $\widehat{u}$ above is defined in [\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"}, and contains PC expansion coefficients for the velocity $u$ defined in [\[eq:u-SWE\]](#eq:u-SWE){reference-type="eqref" reference="eq:u-SWE"}. In the absence of uncertainty, [\[eq:sg-efun1d\]](#eq:sg-efun1d){reference-type="eqref" reference="eq:sg-efun1d"} reduces to the deterministic total energy [\[eq:d-efun1d\]](#eq:d-efun1d){reference-type="eqref" reference="eq:d-efun1d"}. The rest of this section is devoted to proving [Theorem 2](#thm:eef-sgswe-1d){reference-type="ref" reference="thm:eef-sgswe-1d"}, which amounts to showing that, if $\mathcal{P}(\widehat{h}) > 0$, then $E$ is convex in $\widehat{U}$ and $(E, H)$ satisfy the companion balance law, $$\begin{aligned} \label{eq:efun-general} E(\widehat{U})_t + H(\widehat{U})_x = 0,\end{aligned}$$ for smooth solutions $\widehat{U}$. Note that for non-smooth solutions, [\[eq:efun-general\]](#eq:efun-general){reference-type="eqref" reference="eq:efun-general"} holds with $=$ replaced by $\leq$. We prove this with three intermediate results. Our first result is a technical condition that facilitates later computations. **Lemma 2** (Gradient of $\widehat{u}$). *Let $\widehat{q}\in \mathbbm{R}^K$ be arbitrary, and let $\widehat{h}\in \mathbbm{R}^K$ be such that $\mathcal{P}(\widehat{h})$ is invertible.
Defining $\widehat{u}$ as in [\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"}, then $$\begin{aligned} \label{eq:velderivative} \frac{\partial \widehat{u}}{\partial \widehat{U}} = \left[ \frac{\partial \widehat{u}}{\partial \widehat{h}}, \;\; \frac{\partial \widehat{u}}{\partial \widehat{q}} \right] = \left[ -\mathcal{P}^{-1}(\widehat{h})\mathcal{P}(\widehat{u}), \;\;\mathcal{P}^{-1}(\widehat{h}) \right].\end{aligned}$$* *Proof.* If $A(t)$ is a $t$-parameterized matrix, then for any $t$ at which $A$ is invertible, $$\begin{aligned} \frac{\partial}{\partial t} A^{-1}(t) = -A^{-1}(t) \frac{\partial A(t)}{\partial t} A^{-1}(t). \end{aligned}$$ Applying this to $\mathcal{P}$, we have, $$\begin{aligned} \label{eq:Pinvdiff} \frac{\partial \mathcal{P}^{-1}(\widehat{h})}{\partial \widehat{h}_\ell} = -\mathcal{P}^{-1}(\widehat{h}) \frac{\partial \mathcal{P}(\widehat{h})}{\partial \widehat{h}_\ell}\mathcal{P}^{-1}(\widehat{h}) \overset{\eqref{eq:pmatrixproperty}}{=} -\mathcal{P}^{-1}(\widehat{h})\mathcal{M}_\ell\mathcal{P}^{-1}(\widehat{h}), \end{aligned}$$ and hence, $$\begin{aligned} \frac{\partial \widehat{u}}{\partial \widehat{h}_\ell} = \frac{\partial \mathcal{P}^{-1}(\widehat{h})}{\partial \widehat{h}_\ell} \widehat{q}\overset{\eqref{eq:Pinvdiff}}{=} -\mathcal{P}^{-1}(\widehat{h})\mathcal{M}_\ell\mathcal{P}^{-1}(\widehat{h}) \widehat{q}\overset{\eqref{eq:uPCE1d}}{=} -\mathcal{P}^{-1}(\widehat{h})\mathcal{M}_\ell\widehat{u}. \end{aligned}$$ Therefore, $$\begin{aligned} &\frac{\partial \widehat{u}}{\partial \widehat{h}} = \left[-\mathcal{P}^{-1}(\widehat{h})\mathcal{M}_1\widehat{u},\;\; \cdots\;\; -\mathcal{P}^{-1}(\widehat{h})\mathcal{M}_K\widehat{u}\right] \overset{\eqref{eq:pmatrixproperty}}{=}-\mathcal{P}^{-1}(\widehat{h})\mathcal{P}(\widehat{u}), \end{aligned}$$ proving the desired relation for $\frac{\partial \widehat{u}}{\partial \widehat{h}}$. The relation for $\frac{\partial \widehat{u}}{\partial \widehat{q}}$ is immediate from the definition [\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"}. ◻ **Lemma 3** (Convexity of $E(\widehat{U})$). *If $\mathcal{P}(\widehat{h})$ is positive definite, then the function $E(\widehat{U})$ defined in [\[eq:sg-efun1d\]](#eq:sg-efun1d){reference-type="eqref" reference="eq:sg-efun1d"} is convex in $\widehat{U}$.* *Proof.* Using the definition [\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"} of $\widehat{u}$, note that, $$\begin{aligned} \label{eq:E-decomp} E(\widehat{U}) = \underbrace{\frac{1}{2} (\widehat{q})^\top\mathcal{P}^{-1}(\widehat{h})\widehat{q}}_{f_1(\widehat{U})} + \underbrace{\frac{g}{2} \widehat{h}^\top \widehat{h}+ g \widehat{h}^\top \widehat{B}}_{f_2(\widehat{U})}, \end{aligned}$$ and therefore in particular, $$\begin{aligned} \label{eq:Hessian-decomp} \frac{\partial^{2} E}{\partial \widehat{U}^{2}} = \frac{\partial^{2} f_1}{\partial \widehat{U}^{2}} + \frac{\partial^{2} f_2}{\partial \widehat{U}^{2}}. \end{aligned}$$ We will show that this Hessian is positive definite.
Clearly we have, $$\begin{aligned} \label{eq:f2-hessian} \frac{\partial f_2}{\partial \widehat{U}} &= \left( g \widehat{h}^\top+g \widehat{B}^\top, \;\; 0 \right) \in \mathbbm{R}^{1 \times 2 K}, & \frac{\partial^{2} f_2}{\partial \widehat{U}^{2}} &= \left(\begin{array}{cc} g I & 0 \\ 0 & 0 \end{array}\right) \in \mathbbm{R}^{2 K \times 2 K}. \end{aligned}$$ Using the previous lemma, we can directly compute, $$\begin{aligned} \label{eq:f1-p1} \frac{\partial f_1}{\partial \widehat{h}} &= \frac{1}{2} (\widehat{q})^\top \frac{\partial \widehat{u}}{\partial \widehat{h}} \overset{\eqref{eq:velderivative}, \eqref{eq:uPCE1d}}{=} -\frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{u}), & \frac{\partial f_1}{\partial \widehat{q}} &= (\widehat{q})^\top \mathcal{P}^{-1}(\widehat{h}) = \widehat{u}^\top, & \end{aligned}$$ which in turn implies, $$\begin{aligned} \frac{\partial^{2} f_1}{\partial \widehat{q}^{2}} &= \mathcal{P}^{-1}(\widehat{h}), & \frac{\partial^2 f_1}{\partial \widehat{h}\partial \widehat{q}} &\overset{\eqref{eq:velderivative}}{=} \left(-\mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u})\right)^\top = -\mathcal{P}(\widehat{u}) \mathcal{P}^{-1}(\widehat{h}), \end{aligned}$$ and finally, $$\begin{aligned} \frac{\partial^{2} f_1}{\partial \widehat{h}^{2}} = \frac{1}{2} \frac{\partial}{\partial \widehat{h}} \left( -\widehat{u}^\top \mathcal{P}(\widehat{u})\right) \overset{\eqref{eq:velderivative}}{=} \mathcal{P}(\widehat{u}) \mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u}). \end{aligned}$$ Hence, the Hessian of $f_1$ is, $$\begin{aligned} \frac{\partial^{2} f_1}{\partial \widehat{U}^{2}} = \left(\begin{array}{cc} \mathcal{P}(\widehat{u}) \mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u}) & -\mathcal{P}(\widehat{u}) \mathcal{P}^{-1}(\widehat{h}) \\ -\mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u}) & \mathcal{P}^{-1}(\widehat{h}) \end{array}\right). \end{aligned}$$ A direct computation of the quadratic form associated to this Hessian using an arbitrary vector $(w_1^\top, w_2^\top)^\top \in \mathbbm{R}^{2 K}$ yields, $$\begin{aligned} (w_1^\top, w_2^\top) \frac{\partial^{2} f_1}{\partial \widehat{U}^{2}} \left(\begin{array}{c} w_1 \\ w_2 \end{array}\right) = \left( \mathcal{P}(\widehat{u}) w_1 - w_2\right)^\top \mathcal{P}^{-1}(\widehat{h}) \left( \mathcal{P}(\widehat{u}) w_1 - w_2\right) \geq 0. \end{aligned}$$ Finally, combining the above with [\[eq:Hessian-decomp\]](#eq:Hessian-decomp){reference-type="eqref" reference="eq:Hessian-decomp"} and [\[eq:f2-hessian\]](#eq:f2-hessian){reference-type="eqref" reference="eq:f2-hessian"} yields, $$\begin{aligned} (w_1^\top, w_2^\top)\frac{\partial^{2} E}{\partial \widehat{U}^{2}} \left(\begin{array}{c} w_1 \\ w_2 \end{array}\right) = g \|w_1\|^2 + \left( \mathcal{P}(\widehat{u}) w_1 - w_2\right)^\top \mathcal{P}^{-1}(\widehat{h}) \left( \mathcal{P}(\widehat{u}) w_1 - w_2\right), \end{aligned}$$ which is non-negative since $\mathcal{P}(\widehat{h})$ is positive-definite. Therefore, $E$ is convex, as desired. In addition, since the above expression vanishes if and only if $w_1 = w_2 = 0$, then $E$ is also strictly convex. ◻ The final piece needed to prove [Theorem 2](#thm:eef-sgswe-1d){reference-type="ref" reference="thm:eef-sgswe-1d"} is to establish that the entropy function $E$ along with the flux function $H$ defined in [\[eq:sg-eflux1d\]](#eq:sg-eflux1d){reference-type="eqref" reference="eq:sg-eflux1d"} satisfy the companion balance law. **Lemma 4** ($(E,H)$ satisfy the companion balance law).
*When $U$ is a smooth function, the pair $(E,H)$ defined in [\[eq:sg-epair1d\]](#eq:sg-epair1d){reference-type="eqref" reference="eq:sg-epair1d"} satisfies $$\begin{aligned} \label{eq:sgecond-eq} E(\widehat{U})_t + H(\widehat{U})_x = 0. \end{aligned}$$* *Proof.* The compatibility condition we seek to show, equivalent to [\[eq:sgecond-eq\]](#eq:sgecond-eq){reference-type="eqref" reference="eq:sgecond-eq"}, is, $$\begin{aligned} \label{eq:compatibility-1d} \frac{\partial E}{\partial \widehat{U}} \left( \frac{\partial \widehat{F}}{\partial \widehat{U}} \frac{\partial \widehat{U}}{\partial x} - \widehat{S}\right) = \frac{\partial H}{\partial x}, \end{aligned}$$ cf. [\[eq:compatibility1d\]](#eq:compatibility1d){reference-type="eqref" reference="eq:compatibility1d"}. To proceed we split both entropy functions into two pieces: $$\begin{aligned} E(\widehat{U}) &= E_1(\widehat{U}) + E_2(\widehat{U}), & E_1(\widehat{U}) &\coloneqq \frac{1}{2} \left( (\widehat{q})^\top \widehat{u}+ g \|\widehat{h}\|^2 \right), & E_2(\widehat{U}) &\coloneqq g \widehat{h}^\top \widehat{B}, \\\label{eq:H-decomp} H(\widehat{U}) &= H_1(\widehat{U}) + H_2(\widehat{U}), & H_1(\widehat{U}) &\coloneqq \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{q}) \widehat{u}+ g (\widehat{q})^\top \widehat{h}, & H_2(\widehat{U}) &= g (\widehat{q})^\top \widehat{B} \end{aligned}$$ From [\[eq:E-decomp\]](#eq:E-decomp){reference-type="eqref" reference="eq:E-decomp"}, [\[eq:f2-hessian\]](#eq:f2-hessian){reference-type="eqref" reference="eq:f2-hessian"}, and [\[eq:f1-p1\]](#eq:f1-p1){reference-type="eqref" reference="eq:f1-p1"}, we have already computed the gradient of $E$: $$\begin{aligned} \label{eq:E-gradient} \frac{\partial E_1}{\partial \widehat{U}} &= \left( -\frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{u}) + g \widehat{h}^\top, \;\; \widehat{u}^\top \right), & \frac{\partial E_2}{\partial \widehat{U}} &= \left( g \widehat{B}^\top, \;\; 0 \right). \end{aligned}$$ Combining these expressions with the flux Jacobian in [\[eq:x-jacobian1d\]](#eq:x-jacobian1d){reference-type="eqref" reference="eq:x-jacobian1d"} and the source term in [\[eq:sgfluxessource1d\]](#eq:sgfluxessource1d){reference-type="eqref" reference="eq:sgfluxessource1d"} yields, [\[eq:compatibility-decomp\]]{#eq:compatibility-decomp label="eq:compatibility-decomp"} $$\begin{aligned} -\frac{\partial E_1}{\partial \widehat{U}} \widehat{S}+ \frac{\partial E_2}{\partial \widehat{U}} \left( \frac{\partial \widehat{F}}{\partial \widehat{U}}\frac{\partial \widehat{U}}{\partial x} - \widehat{S}\right) = g \widehat{B}^\top \frac{\partial \widehat{q}}{\partial x} + g \widehat{q}^\top \widehat{B}_x \stackrel{\eqref{eq:H-decomp}}{=} \frac{\partial H_2}{\partial x} \end{aligned}$$ Note then that if we are able to show, $$\begin{aligned} \label{eq:cl-compatibility} \frac{\partial E_1}{\partial \widehat{U}} \frac{\partial \widehat{F}}{\partial \widehat{U}} = \frac{\partial H_1}{\partial \widehat{U}}, \end{aligned}$$ then the expressions [\[eq:compatibility-decomp\]](#eq:compatibility-decomp){reference-type="eqref" reference="eq:compatibility-decomp"} are equivalent to [\[eq:compatibility-1d\]](#eq:compatibility-1d){reference-type="eqref" reference="eq:compatibility-1d"}. Therefore, we are left only to show [\[eq:cl-compatibility\]](#eq:cl-compatibility){reference-type="eqref" reference="eq:cl-compatibility"}. 
A direct computation with [\[eq:E-gradient\]](#eq:E-gradient){reference-type="eqref" reference="eq:E-gradient"} and [\[eq:x-jacobian1d\]](#eq:x-jacobian1d){reference-type="eqref" reference="eq:x-jacobian1d"} yields, $$\begin{aligned} \frac{\partial E_1}{\partial \widehat{U}} \frac{\partial \widehat{F}}{\partial \widehat{U}} = \left( g (\widehat{q})^\top - \widehat{u}^\top \mathcal{P}(\widehat{q}) \mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u}), \;\;\; g \widehat{h}^\top + \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{u}) + \widehat{u}^\top \mathcal{P}(\widehat{q}) \mathcal{P}^{-1}(\widehat{h}) \right). \end{aligned}$$ On the other hand, we have the expressions, $$\begin{aligned} \frac{\partial}{\partial \widehat{h}} \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{q}) \widehat{u}&= \widehat{u}^\top \mathcal{P}(\widehat{q}) \frac{\partial \widehat{u}}{\partial \widehat{h}} \stackrel{\eqref{eq:velderivative}}{=} -\widehat{u}^\top \mathcal{P}(\widehat{q}) \mathcal{P}^{-1}(\widehat{h}) \mathcal{P}(\widehat{u}), \\ \frac{\partial}{\partial \widehat{q}} \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{q}) \widehat{u}&= \widehat{u}^\top \mathcal{P}(\widehat{q}) \frac{\partial \widehat{u}}{\partial \widehat{q}} + \frac{1}{2} \left( \frac{\partial}{\partial \widehat{q}} z^\top \mathcal{P}(\widehat{q}) z\right)\Big|_{z\gets \widehat{u}} \stackrel{\eqref{eq:acb},\eqref{eq:velderivative}}{=} \widehat{u}^\top \mathcal{P}(\widehat{q}) \mathcal{P}^{-1}(\widehat{h}) + \frac{1}{2} \widehat{u}^\top \mathcal{P}(\widehat{u}), \end{aligned}$$ and using these to compute $\frac{\partial H_1}{\partial \widehat{U}}$ shows that [\[eq:cl-compatibility\]](#eq:cl-compatibility){reference-type="eqref" reference="eq:cl-compatibility"} is true, completing the proof. ◻ The proof of [Theorem 2](#thm:eef-sgswe-1d){reference-type="ref" reference="thm:eef-sgswe-1d"} is complete: Lemmas 3 and 4 imply that $(E,H)$ as defined in [\[eq:sg-epair1d\]](#eq:sg-epair1d){reference-type="eqref" reference="eq:sg-epair1d"} are an entropy-entropy flux pair for [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}. **Remark 1**. *The quantities, $$\begin{aligned} \label{entrvar} \widehat{V}\coloneqq \left(\frac{\partial E}{\partial \widehat{U}}\right)^\top &=\left(-\frac{1}{2}\widehat{u}^\top\mathcal{P}(\widehat{u}) + g(\widehat{h}+\widehat{B})^\top,\widehat{u}^\top\right)^\top, & \Psi &\coloneqq \widehat{V}^\top\widehat{F} - H \stackrel{\eqref{eq:uPCE1d},\eqref{eq:sg-eflux1d}}{=} \frac{1}{2}g\widehat{u}^\top\mathcal{P}(\widehat{h})\widehat{h},\end{aligned}$$ are called the *entropy variable* and *stochastic energy potential*, respectively. These variables serve important roles in the construction of the energy conservative and the energy stable schemes that we develop later.* # Well-Balanced Energy Conservative And Energy Stable Schemes for SG 1D SWE {#sec:schemes} In this section, we present several well-balanced energy conservative and energy stable numerical schemes for the SG SWE. The schemes designed below are stochastic extensions of the schemes developed in [@fjordholm2011well]. Our entropy-entropy flux pair developed in the previous section will be a crucial ingredient for energy conservative and energy stable schemes for the SG formulation [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}-[\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"}.
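As a quick numerical sanity check of the identity $\Psi = \widehat{V}^\top\widehat{F} - H = \frac{1}{2}g\,\widehat{u}^\top\mathcal{P}(\widehat{h})\widehat{h}$ in [\[entrvar\]](#entrvar){reference-type="eqref" reference="entrvar"}, the sketch below (illustrative values only: a uniform $\xi$, $K = 4$, $g = 9.81$, and random coefficient vectors with a mean-dominated $\widehat{h}$ so that $\mathcal{P}(\widehat{h}) > 0$; this is not the paper's implementation) evaluates both sides and confirms agreement to machine precision.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

K, g = 4, 9.81
x, w = leggauss(3 * K)                  # quadrature exact for the triple products
Phi = np.vstack([np.sqrt(2 * k + 1) * Legendre.basis(k)(x) for k in range(K)])
M = np.einsum('q,kq,lq,mq->klm', 0.5 * w, Phi, Phi, Phi)     # (M_k)_{l,m}
Pop = lambda z: np.tensordot(z, M, axes=(0, 0))              # P(z) = sum_k z_k M_k

rng = np.random.default_rng(1)
h_hat = np.concatenate(([2.0], 0.1 * rng.standard_normal(K - 1)))  # P(h_hat) > 0
q_hat, B_hat = rng.standard_normal(K), rng.standard_normal(K)
u_hat = np.linalg.solve(Pop(h_hat), q_hat)                   # velocity coefficients (eq:uPCE1d)

# Entropy flux, entropy variable, and SG flux from (eq:sg-eflux1d), (entrvar), (eq:sgfluxessource1d).
H = 0.5 * u_hat @ Pop(q_hat) @ u_hat + g * q_hat @ h_hat + g * q_hat @ B_hat
V = np.concatenate((-0.5 * Pop(u_hat) @ u_hat + g * (h_hat + B_hat), u_hat))
F = np.concatenate((q_hat, Pop(q_hat) @ u_hat + 0.5 * g * Pop(h_hat) @ h_hat))

Psi = 0.5 * g * u_hat @ Pop(h_hat) @ h_hat
print(np.isclose(V @ F - H, Psi))       # True
```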
We also need to specify the well-balanced property we are interested in: By "well-balanced", we mean that the scheme can preserve the stochastic "lake-at-rest" state exactly at the discrete level. **Definition 1** (Well-Balanced SGSWE Property, [@doi:10.1137/20M1360736]). *We say that a solution $(h_P, q_P)$ to [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} is well-balanced if it satisfies the *stochastic* "lake-at-rest" solution, $$\label{eq:lake-at-rest} q_P(x,t,\xi) \equiv 0,\quad h_P(x,t,\xi) + \Pi_P[B](x,t,\xi) \equiv C(\xi),$$ where $C(\xi)$ is a random scalar depending only on $\xi$, $\Pi_P$ corresponds to a polynomial truncation, cf. [\[eq:PCEex\]](#eq:PCEex){reference-type="eqref" reference="eq:PCEex"}, and subscripts $P$ refer to the (stochastic) discrete solution on the subspace $P$. In terms of our previous notation for $P$-expansion coefficients, equation [\[eq:lake-at-rest\]](#eq:lake-at-rest){reference-type="eqref" reference="eq:lake-at-rest"} is equivalent to the following vector equation $$\label{eq:lake-at-rest-vec} \widehat{q}\equiv \boldsymbol{0},\quad \widehat{h}+ \widehat{B}\equiv \widehat{C},\;\; \forall (x,t)\in \mathcal{D}\times[0,T],$$ where $\mathcal{D}$ is the spatial domain and $T$ is the terminal time.* We emphasize that even without introducing the lake-at-rest definition [\[eq:lake-at-rest\]](#eq:lake-at-rest){reference-type="eqref" reference="eq:lake-at-rest"}, the vector equation [\[eq:lake-at-rest-vec\]](#eq:lake-at-rest-vec){reference-type="eqref" reference="eq:lake-at-rest-vec"} itself is a steady state of the SG system [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}. ## Energy Conservative Schemes We consider the semi-discrete form for FV schemes for [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} over a uniform mesh in the $x$ variable: $$\begin{aligned} \label{eq:fvsemi-discrete} \dfrac{d}{dt}\boldsymbol{U}_{i} = -\dfrac{\mathcal{F}_{i+\frac{1}{2}}-\mathcal{F}_{i-\frac{1}{2}}}{\Delta x}+\boldsymbol{S}_{i}.\end{aligned}$$ Here, $\boldsymbol{U}_{i}\approx \frac{1}{\Delta x}\int_{\mathcal{I}_{i}} \widehat{U}(x,t)dx$ is the approximation of the cell averages of $\widehat{U}$ over cells $\mathcal{I}_{i} \coloneqq [ x_{i-1/2}, x_{i+1/2}]$ at time $t$, and $\Delta x = |\mathcal{I}_i| = x_{i+1/2} - x_{i-1/2}$. The terms $\mathcal{F}_{i\pm1/2}$ are numerical fluxes at the boundaries of the cells, which are functions of neighboring states, e.g., $\mathcal{F}_{i+1/2}$ is a function of $\boldsymbol{U}_i$ and $\boldsymbol{U}_{i+1}$. The term $\boldsymbol{S}_{i}\approx \frac{1}{\Delta x}\int_{\mathcal{I}_{i}} \widehat{S}(\widehat{U},\widehat{B})dx$ is a discretization of the source term, which we will design below to be well-balanced. To reiterate our notation: normal typeset capital letters (sometimes with "hat" notation) refers to degrees of freedom associated to discretizing *only* the stochastic variable $\xi$, i.e., $(\widehat{U}, \widehat{h}, \widehat{q}, \widehat{B})$. Boldface notation with subscripts $i$ refers to degrees of freedom associated to a subsequent discretization of the spatial variable $x$ over cell $\mathcal{I}_i$, i.e., $(\boldsymbol{U}_i, \boldsymbol{h}_i, \boldsymbol{q}_i, \boldsymbol{B}_i)$. 
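To fix ideas, the following Python sketch shows the generic structure of the semi-discrete update [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} with periodic boundary conditions. The function names `numerical_flux` and `source` are placeholders for an interface flux $\mathcal{F}_{i+1/2} = \mathcal{F}(\boldsymbol{U}_i, \boldsymbol{U}_{i+1})$ and a source discretization $\boldsymbol{S}_i$ (for example, the energy conservative choices introduced below); the forward Euler step is included only to illustrate how the right-hand side is consumed.

```python
import numpy as np

def semidiscrete_rhs(U, dx, numerical_flux, source):
    """Right-hand side of (eq:fvsemi-discrete) for cell averages U of shape (M, 2K)."""
    U_right = np.roll(U, -1, axis=0)          # neighbor state U_{i+1} (periodic)
    F_half = numerical_flux(U, U_right)       # interface fluxes F_{i+1/2}, shape (M, 2K)
    flux_diff = (F_half - np.roll(F_half, 1, axis=0)) / dx   # (F_{i+1/2} - F_{i-1/2}) / dx
    return -flux_diff + source(U, dx)         # cell source terms S_i, shape (M, 2K)

def forward_euler_step(U, dt, dx, numerical_flux, source):
    """One explicit time step, shown purely for illustration."""
    return U + dt * semidiscrete_rhs(U, dx, numerical_flux, source)
```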
We define the discrete velocity variable $\boldsymbol{u}_{i}$ in a manner analogous to [\[eq:uPCE1d\]](#eq:uPCE1d){reference-type="eqref" reference="eq:uPCE1d"}: $$\begin{aligned} \label{eq:ui-def} \boldsymbol{u}_i \coloneqq \mathcal{P}(\boldsymbol{h}_i)^{-1} \boldsymbol{q}_i.\end{aligned}$$ Discrete entropic quantities are derived from the discrete conservative variables $\boldsymbol{U}_i$ and the velocity variable $\boldsymbol{u}_i$. That is, the following are direct generalizations of the definition of $E(\widehat{U})$ in [\[eq:sg-efun1d\]](#eq:sg-efun1d){reference-type="eqref" reference="eq:sg-efun1d"}, and of $(\widehat{V},\Psi)$ in [\[entrvar\]](#entrvar){reference-type="eqref" reference="entrvar"}: $$\begin{aligned} \label{eq:Ei-def} \boldsymbol{E}_i &\coloneqq \frac{1}{2} \left( \boldsymbol{q}_i^\top \boldsymbol{u}_i + g \| \boldsymbol{h}_i\|^2 \right) + g \boldsymbol{h}_i^\top \boldsymbol{B}_i, \\ \label{eq:Vi-def} \boldsymbol{V}_i &\coloneqq \left(\frac{\partial \boldsymbol{E}_i}{\partial \boldsymbol{U}_i}\right)^\top = \left(-\frac{1}{2} \boldsymbol{u}_i^\top \mathcal{P}(\boldsymbol{u}_i) + g(\boldsymbol{h}_i + \boldsymbol{B}_i)^\top, \;\; \boldsymbol{u}_i^\top \right)^\top, \\ \label{eq:Psii-def} \boldsymbol{\Psi}_i &\coloneqq \boldsymbol{V}_i^\top \widehat{F}(\boldsymbol{U}_i) - H(\boldsymbol{U}_i) = \frac{1}{2} g \boldsymbol{u}_i^\top \mathcal{P}(\boldsymbol{h}_i) \boldsymbol{h}_i.\end{aligned}$$ We now introduce some notation that is used in [@fjordholm2011well] for averages and jumps at cell interfaces: $$\begin{aligned} \label{eq:jump-and-average} \overline{\boldsymbol{a}}_{i+1/2} &\coloneqq \frac{1}{2}(\boldsymbol{a}_{i+1} + \boldsymbol{a}_{i}), & \llbracket \boldsymbol{a}\rrbracket_{i+1/2} &\coloneqq \boldsymbol{a}_{i+1} - \boldsymbol{a}_{i},\end{aligned}$$ where $\boldsymbol{a}_i$ is the cell average over $\mathcal{I}_i$. The expressions above are equivalent to, $$\begin{aligned} \label{eq:jump-average-property} \boldsymbol{a}_i = \overline{\boldsymbol{a}}_{i+1/2} - \frac{\llbracket \boldsymbol{a}\rrbracket_{i+1/2}}{2} = \overline{\boldsymbol{a}}_{i-1/2} + \frac{\llbracket \boldsymbol{a}\rrbracket_{i-1/2}}{2},\end{aligned}$$ and all these expressions are valid regardless of the shape of $\boldsymbol{a}$ (e.g., both row and column vectors are allowed). We will require some additional technical results for interfacial averages and jumps. **Lemma 5**. *Let $\boldsymbol{a}_i, \boldsymbol{b}_i$ be any spatially discrete quantities. Then:* *[\[eq:relations\]]{#eq:relations label="eq:relations"} $$\begin{aligned} \label{eq:relation1} \mathcal{P}(\overline{\boldsymbol{a}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}}&= \frac{1}{2}\left\llbracket\mathcal{P}(\boldsymbol{a})\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}}, \\\label{eq:relation2} \left\llbracket\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}}^\top\overline{\boldsymbol{b}}_{i+\frac{1}{2}}+\left\llbracket\boldsymbol{b}\right\rrbracket_{i+\frac{1}{2}}^\top\overline{\boldsymbol{a}}_{i+\frac{1}{2}}&= \left\llbracket\boldsymbol{a}^\top\boldsymbol{b}\right\rrbracket_{i+\frac{1}{2}}. \end{aligned}$$* *Proof.* By the linearity of $\mathcal{P}(\cdot)$, $$\begin{aligned} \mathcal{P}(\overline{\boldsymbol{a}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}} &\overset{\eqref{eq:pmatrix}}{=} \mathcal{P}\left(\frac{1}{2}(\boldsymbol{a}_{i+1}+\boldsymbol{a}_{i})\right)(\boldsymbol{a}_{i+1}-\boldsymbol{a}_{i}) \overset{\eqref{eq:commute}}{=} \frac{1}{2}\left(\mathcal{P}(\boldsymbol{a}_{i+1})\boldsymbol{a}_{i+1}-\mathcal{P}(\boldsymbol{a}_{i})\boldsymbol{a}_{i}\right) = \frac{1}{2}\left\llbracket\mathcal{P}(\boldsymbol{a})\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}}, \end{aligned}$$ which proves [\[eq:relation1\]](#eq:relation1){reference-type="eqref" reference="eq:relation1"}. Similarly, [\[eq:relation2\]](#eq:relation2){reference-type="eqref" reference="eq:relation2"} can be proven directly: $$\begin{aligned} \left\llbracket\boldsymbol{a}\right\rrbracket_{i+\frac{1}{2}}^\top\overline{\boldsymbol{b}}_{i+\frac{1}{2}}+\left\llbracket\boldsymbol{b}\right\rrbracket_{i+\frac{1}{2}}^\top\overline{\boldsymbol{a}}_{i+\frac{1}{2}} &= \frac{1}{2}\left(\left(\boldsymbol{a}_{i+1}-\boldsymbol{a}_{i}\right)^\top\left(\boldsymbol{b}_{i+1}+\boldsymbol{b}_{i}\right)+\left(\boldsymbol{b}_{i+1}-\boldsymbol{b}_{i}\right)^\top\left(\boldsymbol{a}_{i+1}+\boldsymbol{a}_{i}\right)\right) \\ &= \boldsymbol{a}_{i+1}^\top\boldsymbol{b}_{i+1} - \boldsymbol{a}_{i}^\top\boldsymbol{b}_{i} = \left\llbracket\boldsymbol{a}^\top\boldsymbol{b}\right\rrbracket_{i+\frac{1}{2}}. \end{aligned}$$ ◻ We now make particular definitions for energy conservative and energy stable schemes for one-dimensional systems of balance laws. To provide context: when there are no source terms (i.e., $\boldsymbol{S}_i = 0$), spatial discretizations of the form [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} are called *conservative* schemes, since they imply, $$\begin{aligned} \label{eq:ec-global} \frac{d}{dt} \sum_{i \in [M]} \Delta x \boldsymbol{U}_i(t) = \left[ \mathcal{F}_{1/2} - \mathcal{F}_{M+1/2} \right], \hskip 20pt \textrm{(vanishing source, $\boldsymbol{S}_i = 0$)},\end{aligned}$$ and in particular, with periodic boundary conditions this implies that the cumulative amount of $\widehat{U}$ in the system is constant in time.[^1] To translate this concept to the notion of an energy conservative scheme, note that an entropy-entropy flux pair $(E,H)$ introduced in [3](#sec3){reference-type="ref" reference="sec3"} is explicitly a function of the state $\widehat{U}$ and of the inputs to the source term (here, $\widehat{B}$). Hence, the semi-discrete form [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} can be transformed into a semi-discrete form for the companion balance law [\[eq:efun-general\]](#eq:efun-general){reference-type="eqref" reference="eq:efun-general"}. Then we call [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} energy conservative if it implies a conservative scheme for the companion balance law that describes the evolution of the entropy (energy). **Definition 2** (Energy conservative and energy stable schemes).
*Suppose that the system of balance laws [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} has an entropy-entropy flux pair $(E,H)$ where $E(\widehat{U})$ can be interpreted as energy for the system. Then the semi-discrete FV scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} is an Energy Conservative (EC) scheme if it can be rewritten as the following semi-discrete form for the evolution of the numerical cell averages $\boldsymbol{E}_i$ of $E$: $$\begin{aligned} \label{eq:ec-scheme-def} \frac{d}{dt} \boldsymbol{E}_i(t) &= -\frac{1}{\Delta x} \left( \mathcal{H}_{i+1/2} - \mathcal{H}_{i-1/2}\right), & i &\in [M], \end{aligned}$$ where $\mathcal{H}_{i+1/2}$ is some numerical entropy flux at the interface location $x = x_{i+1/2}$. The scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} is called an Energy Stable (ES) scheme if $$\begin{aligned} \label{eq:es-condition} \frac{d}{dt} \boldsymbol{E}_i(t) &\leq -\frac{1}{\Delta x} \left( \mathcal{H}_{i+1/2} - \mathcal{H}_{i-1/2}\right), & i &\in [M]. \end{aligned}$$* Note that the definitions above are cell-wise conditions that are stronger than a global condition such as [\[eq:ec-global\]](#eq:ec-global){reference-type="eqref" reference="eq:ec-global"}. ## An EC scheme for the SGSWE In this section we present an EC scheme for the one-dimensional SGSWE system [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}. We use the conservative scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"}, with the following choices of flux and source terms: [\[eq:sgswe-ec\]]{#eq:sgswe-ec label="eq:sgswe-ec"} $$\begin{aligned} \label{eq:ec-flux} \mathcal{F}_{i+1/2} = \mathcal{F}^{\mathrm{EC}}_{i+1/2} &\coloneqq \begin{pmatrix} \mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}\vspace{0.5em}\\ \frac{g}{2}\left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}} + \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}} \end{pmatrix}, \\\label{eq:ec-source} \boldsymbol{S}_{i} &= \begin{pmatrix} \boldsymbol{0}\vspace{0.5em}\\ -\frac{g}{2\Delta x}\left(\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}+\mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}})\left\llbracket\boldsymbol{B}\right\rrbracket_{i-\frac{1}{2}}\right). \end{pmatrix}\end{aligned}$$ Above, the interfacial averages $\overline{\boldsymbol{u}}_{i+1/2}$ are computed as defined in [\[eq:jump-and-average\]](#eq:jump-and-average){reference-type="eqref" reference="eq:jump-and-average"}. Our main result for this scheme is as follows. **Theorem 3** (EC Scheme). *Suppose the bottom topography function $B$ is independent of time. Consider the semi-discrete scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} for the SGSWE system [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"}. Suppose that the flux and source terms are selected as in [\[eq:sgswe-ec\]](#eq:sgswe-ec){reference-type="eqref" reference="eq:sgswe-ec"}. 
Then, this is a well-balanced EC scheme with local truncation error $\mathcal{O}(\Delta x^2)$.* The remainder of this section is devoted to the proof, which requires some intermediate steps. First, we show that $\boldsymbol{S}_i$ is a well-balanced choice for the source term discretization. **Lemma 6**. *Suppose $\boldsymbol{S}_i$ is chosen as in [\[eq:ec-source\]](#eq:ec-source){reference-type="eqref" reference="eq:ec-source"}. If the bottom topography $B$ is independent of time, then [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} is a well-balanced scheme in the sense of [Definition 1](#def:stochastic-lake-at-rest-1d){reference-type="ref" reference="def:stochastic-lake-at-rest-1d"}.* *Proof.* Given initial data $$\begin{aligned} \label{eq:wbinit} &\boldsymbol{u}_i \equiv \boldsymbol{0},&&\boldsymbol{h}_i + \boldsymbol{B}_i = \text{const vector},\;\;\forall i,\end{aligned}$$ the well-balanced property with time-independent bottom topography (see [Definition 1](#def:stochastic-lake-at-rest-1d){reference-type="ref" reference="def:stochastic-lake-at-rest-1d"}) requires that, for every $i$, $$\begin{aligned} \label{eq:wb-goal} \frac{d}{dt}\boldsymbol{\boldsymbol{h}}_i &\equiv 0, &\frac{d}{dt}\boldsymbol{\boldsymbol{q}}_i &\equiv 0.\end{aligned}$$ We first notice that, $$\label{eq:relation3} \begin{aligned} \left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}} - \left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i-\frac{1}{2}} &=\frac{1}{2}\left(\left\llbracket\mathcal{P}(\boldsymbol{h})\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}} + \left\llbracket\mathcal{P}(\boldsymbol{h})\boldsymbol{h}\right\rrbracket_{i-\frac{1}{2}}\right) \\ &\overset{\eqref{eq:relation1}}{=}\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}} + \mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}})\left\llbracket\boldsymbol{h}\right\rrbracket_{i-\frac{1}{2}}. \end{aligned}$$ Note that with initialization [\[eq:wbinit\]](#eq:wbinit){reference-type="eqref" reference="eq:wbinit"}, then $\boldsymbol{u}_i = 0$, and hence $\overline{\boldsymbol{u}}_{i+1/2} = 0$. 
Therefore the semi-discrete scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} with the flux and source terms in [\[eq:sgswe-ec\]](#eq:sgswe-ec){reference-type="eqref" reference="eq:sgswe-ec"} yields, $$\begin{aligned} \frac{d}{dt}\boldsymbol{h}_i = &-\frac{1}{\Delta x}\left(\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}} - \mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}})\overline{\boldsymbol{u}}_{i-\frac{1}{2}}\right) = \boldsymbol{0},\label{eq:wbh}\\ \frac{d}{dt}\boldsymbol{q}_i = &-\frac{g}{2\Delta x}\left(\left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}} - \left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i-\frac{1}{2}}\right)\nonumber\\ &-\frac{g}{2\Delta x}\left(\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}+\mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}})\left\llbracket\boldsymbol{B}\right\rrbracket_{i-\frac{1}{2}}\right)\nonumber\\\nonumber \overset{\eqref{eq:relation3}}{=}&-\frac{g}{2\Delta x}\left(\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{h}+\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}+\mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}})\left\llbracket\boldsymbol{h}+\boldsymbol{B}\right\rrbracket_{i-\frac{1}{2}}\right) \stackrel{\eqref{eq:wbinit}}{=} \boldsymbol{0},%\label{eq:wbq}.\end{aligned}$$ which establishes [\[eq:wb-goal\]](#eq:wb-goal){reference-type="eqref" reference="eq:wb-goal"}. ◻ **Lemma 7**. *The flux and source terms in [\[eq:sgswe-ec\]](#eq:sgswe-ec){reference-type="eqref" reference="eq:sgswe-ec"} commit a local truncation error of $\mathcal{O}(\Delta x^2)$.* The proof is direct, by assuming $(\boldsymbol{U}_i, \boldsymbol{B}_i)$ are exact cell averages of spatially smooth functions $(\widehat{U}, \widehat{B})$ and then comparing $\mathcal{F}_{i+1/2}$ and $\boldsymbol{S}_i$ to $\widehat{F}(\widehat{U})\big|_{x=x_{i+1/2}}$ and $\widehat{S}(\widehat{U})\big|_{x=x_{i}}$, respectively, where $\widehat{F}$ and $\widehat{S}$ are the exact flux and source functions in [\[eq:sgfluxessource1d\]](#eq:sgfluxessource1d){reference-type="eqref" reference="eq:sgfluxessource1d"}. Therefore we omit most details, pointing out only the following quantitative approximations in space (ignoring the time variable $t$): $$\begin{aligned} \overline{\boldsymbol{U}}_{i+1/2} &= \widehat{U}(x_{i+1/2}) + \mathcal{O}(\Delta x^2), & \left\llbracket\boldsymbol{U}\right\rrbracket_{i+1/2} &= \Delta x\, \widehat{U}_x\left(x_{i+1/2}\right) + \mathcal{O}(\Delta x^2) \\ \mathcal{P}(\overline{\boldsymbol{h}}_{i+1/2}) &= \mathcal{P}(\widehat{h}(x_{i+1/2})) + \mathcal{O}(\Delta x^2), & \overline{\boldsymbol{u}}_{i+1/2} &= \widehat{u}(x_{i+1/2}) + \mathcal{O}(\Delta x^2).\end{aligned}$$ Note that the implicit constants hidden in the asymptotic notation above depend on the maximum singular value of $\mathcal{P}(\widehat{h}_{xx}(x))$ and the minimum singular value of $\mathcal{P}(\widehat{h}(x_{i+1/2}))$. The final result we need is a sufficient condition for a numerical flux to result in an EC scheme. **Lemma 8**. *Let $\boldsymbol{S}_i$ be chosen as in [\[eq:ec-source\]](#eq:ec-source){reference-type="eqref" reference="eq:ec-source"}. 
Suppose that $\mathcal{F}_{i+1/2}$ satisfies $$\begin{aligned} \label{eq:e-conserved-cond} \left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}^\top \mathcal{F}_{i+\frac{1}{2}} = \left\llbracket\boldsymbol{\Psi}\right\rrbracket_{i+\frac{1}{2}} + g\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}.\end{aligned}$$ Then the corresponding FV scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} is an EC scheme, i.e., satisfies [\[eq:ec-scheme-def\]](#eq:ec-scheme-def){reference-type="eqref" reference="eq:ec-scheme-def"}, where the numerical energy flux is given by, $$\begin{aligned} \label{eq:e-flux} \mathcal{H}_{i+\frac{1}{2}} &\coloneqq \overline{\boldsymbol{V}}_{i+\frac{1}{2}}^\top \mathcal{F}_{i+\frac{1}{2}} - \overline{\boldsymbol{\Psi}}_{i+\frac{1}{2}} - \frac{g}{4}\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}.\end{aligned}$$* *Proof.* Multiplying [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} by $\boldsymbol{V}_{i}^\top$ and using the definition of $\boldsymbol{V}_i$ in [\[eq:Vi-def\]](#eq:Vi-def){reference-type="eqref" reference="eq:Vi-def"}, we obtain, $$\begin{aligned} \label{eq:ec-temp-comp} \frac{d}{dt}\boldsymbol{E}_{i} = &-\frac{1}{\Delta x}\big(\underbrace{\boldsymbol{V}_{i}^\top \mathcal{F}_{i+\frac{1}{2}}}_{(A1)}-\underbrace{\boldsymbol{V}_{i}^\top \mathcal{F}_{i-\frac{1}{2}}}_{(A2)} - \underbrace{\Delta x\boldsymbol{V}_{i}^\top \overline{\boldsymbol{S}}_{i}}_{(B)}\big) % + \langle \bV_{i}, \overline{\boldsymbol{S}}_{i}\rangle \end{aligned}$$ The first term, labeled (A1), can be expanded to, $$\begin{aligned} \mathrm{(A1)} &\stackrel{\eqref{eq:jump-average-property}}{=} \overline{\boldsymbol{V}}_{i+1/2}^\top \mathcal{F}_{i+1/2} - \frac{1}{2} \left\llbracket\boldsymbol{V}\right\rrbracket_{i+1/2}^\top \mathcal{F}_{i+1/2} \\ &\stackrel{\eqref{eq:e-conserved-cond}\eqref{eq:e-flux}}{=} \mathcal{H}_{i+\frac{1}{2}} + \overline{\boldsymbol{\Psi}}_{i+\frac{1}{2}} + \frac{g}{4}\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}-\frac{1}{2}\left\llbracket\boldsymbol{\Psi}\right\rrbracket_{i+\frac{1}{2}} - \frac{g}{2}\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}} \\ &\stackrel{\eqref{eq:jump-average-property}}{=} \mathcal{H}_{i+\frac{1}{2}} + \boldsymbol{\Psi}_i - \frac{g}{2}\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}}) \boldsymbol{u}_i \end{aligned}$$ In an analogous computation, the term labeled (A2) is given by, $$\begin{aligned} \mathrm{(A2)} = \mathcal{H}_{i-\frac{1}{2}} + \boldsymbol{\Psi}_i + \frac{g}{2}\left\llbracket\boldsymbol{B}\right\rrbracket_{i-\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}}) \boldsymbol{u}_i \end{aligned}$$ Finally, a direct computation shows that term (B) is, $$\begin{aligned} \mathrm{(B)} \stackrel{\eqref{eq:ec-source},\eqref{eq:Vi-def}} = -\frac{g}{2} \boldsymbol{u}_i^\top \mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}}) 
\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}} - \frac{g}{2} \boldsymbol{u}_i^\top \mathcal{P}(\overline{\boldsymbol{h}}_{i-\frac{1}{2}}) \left\llbracket\boldsymbol{B}\right\rrbracket_{i-\frac{1}{2}} \end{aligned}$$ Using the expressions for terms (A1), (A2), and (B) derived above in [\[eq:ec-temp-comp\]](#eq:ec-temp-comp){reference-type="eqref" reference="eq:ec-temp-comp"} establishes that the scheme satisfies [\[eq:ec-scheme-def\]](#eq:ec-scheme-def){reference-type="eqref" reference="eq:ec-scheme-def"}, i.e., is an EC scheme. ◻ We now have all the ingredients necessary to prove [Theorem 3](#thm:ec-scheme){reference-type="ref" reference="thm:ec-scheme"}. *Proof of [Theorem 3](#thm:ec-scheme){reference-type="ref" reference="thm:ec-scheme"}.* Lemmas [Lemma 6](#lem:ec-well-balanced){reference-type="ref" reference="lem:ec-well-balanced"} and [Lemma 7](#lem:ec-lte-2){reference-type="ref" reference="lem:ec-lte-2"} verify that the scheme is well-balanced and second-order. We therefore need only show that it is EC. To do this, we must verify the condition in [Lemma 8](#lem:e-conserved){reference-type="ref" reference="lem:e-conserved"}. We accomplish this with direct computation: $$\begin{aligned} \left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}^\top \mathcal{F}^{\text{EC}}_{i+\frac{1}{2}} & \stackrel{\eqref{eq:Vi-def}, \eqref{eq:ec-flux}}{=} \left(g\left(\left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}+\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}\right) - \frac{1}{2}\left\llbracket\mathcal{P}(\boldsymbol{u})\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}\right)^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}\\ &\;\;\; +\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}^\top\left(\frac{g}{2}\left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}} + \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}\right)\\ &\stackrel{\eqref{eq:relation1}}{=} g\left( \left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}} + \left\llbracket\boldsymbol{B}\right\rrbracket_{i+1/2}\right)^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}+\frac{g}{2}\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}^\top\left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}}\\ % &+g\dbracket{\boldsymbol{B}}_{i+\frac{1}{2}}^\top\mcp(\bbh_{i+\frac{1}{2}})\bbu_{i+\frac{1}{2}}\\ &\stackrel{\eqref{eq:relation1}}{=} \frac{g}{2}\left\llbracket\mathcal{P}(\boldsymbol{h})\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}^\top\overline{\boldsymbol{u}}_{i+\frac{1}{2}} +g\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}} +\frac{g}{2}\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}^\top\left(\overline{\mathcal{P}(\boldsymbol{h})\boldsymbol{h}}\right)_{i+\frac{1}{2}}\\ % &+g\dbracket{\boldsymbol{B}}_{i+\frac{1}{2}}^\top\mcp(\bbh_{i+\frac{1}{2}})\bbu_{i+\frac{1}{2}}\\ &\overset{\eqref{eq:relation2}}{=}\frac{g}{2}\left\llbracket\boldsymbol{u}^\top\mathcal{P}(\boldsymbol{h})\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}+ g\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}\\ 
&=\left\llbracket\boldsymbol{\Psi}\right\rrbracket_{i+\frac{1}{2}}+g\left\llbracket\boldsymbol{B}\right\rrbracket_{i+\frac{1}{2}}^\top\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}}, \end{aligned}$$ which verifies [\[eq:e-conserved-cond\]](#eq:e-conserved-cond){reference-type="eqref" reference="eq:e-conserved-cond"}, and hence [Lemma 8](#lem:e-conserved){reference-type="ref" reference="lem:e-conserved"} is applicable, showing that this is an EC scheme. ◻ ## A first-order ES scheme The scheme determined by [\[eq:sgswe-ec\]](#eq:sgswe-ec){reference-type="eqref" reference="eq:sgswe-ec"} numerically preserves the energy of the PDE system [\[eq:swesg1-1d\]](#eq:swesg1-1d){reference-type="eqref" reference="eq:swesg1-1d"}. However, it may lead to spurious oscillations since the energy should dissipate in the presence of shocks. The issue can be resolved by introducing appropriate numerical viscosity [@tadmor1987numerical; @tadmor2003entropy; @fjordholm_mishra_tadmor_2009; @fjordholm2011well; @fjordholm2012arbitrarily]. Our numerical diffusion operators are a straightforward stochastic extension of the energy-stable diffusion operators proposed in [@fjordholm_mishra_tadmor_2009; @fjordholm2011well]. For context of the approach, the introduction of a traditional Roe-type diffusion for a conservation law involves augmenting an EC flux as follows: $$\begin{aligned} \mathcal{F}_{i+1/2}^{\mathrm{RD}} \coloneqq \mathcal{F}_{i+1/2}^{EC} - \frac{1}{2} \boldsymbol{Q}^{Roe}_{i+1/2} \left\llbracket\boldsymbol{U}\right\rrbracket_{i+1/2},\end{aligned}$$ where $\boldsymbol{Q}^{Roe}$ is a positive semi-definite matrix defined through a diagonalization of the interfacial flux Jacobian at a Roe-averaged state: $$\begin{aligned} \label{eq:roe-diffusion} \boldsymbol{Q}^{\text{Roe}}_{i+\frac{1}{2}} &\coloneqq \boldsymbol{T}^{\text{Roe}} \vert\boldsymbol{\Lambda}^{\text{Roe}}\vert\left(\boldsymbol{T}^{\text{Roe}}\right)^{-1}, & \frac{\partial \widehat{F}}{\partial \widehat{U}}(\overline{\boldsymbol{U}}_{i+1/2}) &=\boldsymbol{T}^{\text{Roe}}\boldsymbol{\Lambda}^{\text{Roe}}\left(\boldsymbol{T}^{\text{Roe}}\right)^{-1}.\end{aligned}$$ Then the semi-discrete scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} using the numerical flux $\mathcal{F}_{i+1/2} = \mathcal{F}^{\mathrm{RD}}_{i+1/2}$ would behave like, $$\begin{aligned} \frac{d}{dt} \boldsymbol{U}_i(t) &= -\frac{1}{\Delta x} \left( \mathcal{F}^{EC}_{i+1/2} - \mathcal{F}^{EC}_{i-1/2} \right) + \frac{1}{2\Delta x} \left( \boldsymbol{Q}^{Roe}_{i+1/2} \left\llbracket\boldsymbol{U}\right\rrbracket_{i+1/2} - \boldsymbol{Q}^{Roe}_{i-1/2} \left\llbracket\boldsymbol{U}\right\rrbracket_{i-1/2} \right) +\boldsymbol{S}_{i}\\ &\approx -\frac{1}{\Delta x} \left( \widehat{F}(\widehat{U})\big|_{x = x_{i+1/2}} - \widehat{F}(\widehat{U})\big|_{x = x_{i-1/2}} \right) + \Delta x\,\boldsymbol{Q} \widehat{U}_{xx}\big|_{x=x_i}+S(\widehat{U}) \big|_{x=x_i},\end{aligned}$$ where $\boldsymbol{Q}$ is a positive-definite matrix, and hence this introduces diffusion into an EC scheme. 
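To illustrate the construction, here is a minimal Python sketch (illustrative only, not the authors' implementation; `flux_jacobian` is a placeholder for $\partial\widehat{F}/\partial\widehat{U}$ in [\[eq:x-jacobian1d\]](#eq:x-jacobian1d){reference-type="eqref" reference="eq:x-jacobian1d"}, and the Jacobian is assumed diagonalizable with a real spectrum) of the Roe-type diffusion matrix $\boldsymbol{Q}^{\text{Roe}} = \boldsymbol{T}\vert\boldsymbol{\Lambda}\vert\boldsymbol{T}^{-1}$:

```python
import numpy as np

# Minimal sketch (illustrative only): Roe-type diffusion matrix built from an
# eigendecomposition of the flux Jacobian at an interface-averaged state.
# `flux_jacobian` stands in for dF/dU of the SG-SWE system.
def roe_diffusion(U_avg, flux_jacobian):
    A = flux_jacobian(U_avg)                      # 2K x 2K Jacobian at the averaged state
    lam, T = np.linalg.eig(A)                     # A = T Lambda T^{-1}
    lam, T = lam.real, T.real                     # hyperbolicity => real spectrum
    return (T * np.abs(lam)) @ np.linalg.inv(T)   # Q^Roe = T |Lambda| T^{-1}
```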
While the above approach works in terms of adding a diffusion-like term, a convenient way to ensure energy stability is to employ a numerical diffusion term that operates on the entropic variables $\boldsymbol{V}$ instead of the conservative variables $\boldsymbol{U}$: $$\begin{aligned} \label{eq:FES} \mathcal{F}^{ES}_{i+\frac{1}{2}}&\coloneqq \mathcal{F}^{EC}_{i+\frac{1}{2}} - \frac{1}{2}\boldsymbol{Q}_{i+\frac{1}{2}}^{ES}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}, \end{aligned}$$ where $\boldsymbol{Q}_{i+\frac{1}{2}}^{ES}$ is a positive definite matrix that will be identified in a Roe-type way from the two adjacent states $\boldsymbol{U}_i$ and $\boldsymbol{U}_{i+1}$ at the cell interface $x = x_{i+\frac{1}{2}}$. The term $\boldsymbol{V}_i$ is as given in [\[eq:Vi-def\]](#eq:Vi-def){reference-type="eqref" reference="eq:Vi-def"}, and is a second-order approximation to the cell-average of the entropy variable $\widehat{V}$. We are interested in the Roe-type energy-stable operator defined as, $$\begin{aligned} \label{eq:roe-diff-operator} \mathcal{Q}_{i+1/2}(\boldsymbol{U}_{i}, \boldsymbol{U}_{i+1}) \coloneqq %:(\bU_{l}, \bU_{r}) \coloneqq \boldsymbol{T}_{}\vert\boldsymbol{\Lambda}_{}\vert\boldsymbol{T}_{}^{\top} \geq 0, %\to \boldsymbol{T}_{c}\vert\bs{\Lambda}_{c}\vert\boldsymbol{T}_{c}^{\top},\end{aligned}$$ where the matrices $\boldsymbol{T}_{}$ and $\boldsymbol{\Lambda}_{}$ are matrices from the eigendecomposition of the flux Jacobian [\[eq:x-jacobian1d\]](#eq:x-jacobian1d){reference-type="eqref" reference="eq:x-jacobian1d"} evaluated at a Roe-type average state: $$\begin{aligned} \label{eq:roe-diffusion-1d} \frac{\partial \widehat{F}}{\partial \widehat{U}}(\widetilde{\boldsymbol{U}}_{i+1/2}) =\boldsymbol{T}_{}\boldsymbol{\Lambda}_{}\boldsymbol{T}_{}^{-1},&&\widetilde{\boldsymbol{U}}_{i+1/2} \coloneqq \begin{pmatrix} \overline{\boldsymbol{h}}_{i+1/2}\\ \mathcal{P}(\overline{\boldsymbol{h}}_{i+1/2}) \overline{\boldsymbol{u}}_{i+1/2}%\bu_{c} \end{pmatrix}.\end{aligned}$$ Note in particular that $\overline{\boldsymbol{q}}_{i+1/2} \neq \mathcal{P}(\overline{\boldsymbol{h}}_{i+1/2}) \overline{\boldsymbol{u}}_{i+1/2}$, so that $\widetilde{\boldsymbol{U}}_{i+1/2} \neq \overline{\boldsymbol{U}}_{i+1/2}$. The focal scheme of this section uses the numerical flux [\[eq:FES\]](#eq:FES){reference-type="eqref" reference="eq:FES"}, where $\boldsymbol{Q}$ is given by the Roe-type diffusion matrix introduced above, $$\begin{aligned} \label{eq:entropic-roe-diffusion-1d} \boldsymbol{Q}^{ES1}_{i+\frac{1}{2}}\coloneqq \mathcal{Q}_{i+1/2}(\boldsymbol{U}_{i}, \boldsymbol{U}_{i+1}) = \boldsymbol{T}_{}\vert\boldsymbol{\Lambda}_{}\vert\boldsymbol{T}_{}^{\top},\end{aligned}$$ where we refer to this scheme as "ES1" because we will show it is first-order accurate. Our main result for this scheme is as follows. **Theorem 4** (ES1 scheme). 
*Consider the finite volume scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} with source term [\[eq:ec-source\]](#eq:ec-source){reference-type="eqref" reference="eq:ec-source"} and diffusive numerical flux [\[eq:FES\]](#eq:FES){reference-type="eqref" reference="eq:FES"}, selecting the diffusion matrix as, $$\begin{aligned} \label{eq:ES-ES1} \boldsymbol{Q}_{i+1/2}^{ES} = \boldsymbol{Q}_{i+1/2}^{ES1}. \end{aligned}$$ The resulting scheme is a first-order, well-balanced ES scheme.* *Proof.* We omit some details that are similar to the proof of [Theorem 3](#thm:ec-scheme){reference-type="ref" reference="thm:ec-scheme"}. We have already established in [Lemma 7](#lem:ec-lte-2){reference-type="ref" reference="lem:ec-lte-2"} that $\mathcal{F}_{i+1/2}^{EC}$ is second-order accurate. That this ES1 scheme is first-order is direct from the definition of $\boldsymbol{V}_i$ in [\[eq:Vi-def\]](#eq:Vi-def){reference-type="eqref" reference="eq:Vi-def"}, resulting in the approximation $$\begin{aligned} \left\llbracket\boldsymbol{V}\right\rrbracket_{i+1/2} \approx \Delta x\, \widehat{V}_x(x_{i+1/2}), \end{aligned}$$ which implies that the diffusive augmentation in [\[eq:FES\]](#eq:FES){reference-type="eqref" reference="eq:FES"} commits a first-order local truncation error. To establish that this scheme is well-balanced, we assume the stochastic lake-at-rest initial data [\[eq:wbinit\]](#eq:wbinit){reference-type="eqref" reference="eq:wbinit"}; this, coupled with the definition of $\boldsymbol{V}_i$ in [\[eq:Vi-def\]](#eq:Vi-def){reference-type="eqref" reference="eq:Vi-def"}, implies $\left\llbracket\boldsymbol{V}\right\rrbracket_{i+1/2} = 0$. Since the EC flux and source are well-balanced ([Lemma 6](#lem:ec-well-balanced){reference-type="ref" reference="lem:ec-well-balanced"}), the ES1 scheme is therefore also well-balanced. Finally, we seek to show the ES property. We define the ES1 energy flux, $$\begin{aligned} \mathcal{H}^{ES1}_{i+\frac{1}{2}} = \mathcal{H}_{i+\frac{1}{2}} - \frac{1}{2} \overline{\boldsymbol{V}}_{i+\frac{1}{2}}^\top \boldsymbol{Q}_{i+\frac{1}{2}}^{ES1}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}, \end{aligned}$$ with $\mathcal{H}_{i+1/2}$ as defined in [\[eq:e-flux\]](#eq:e-flux){reference-type="eqref" reference="eq:e-flux"}. As in [Lemma 8](#lem:e-conserved){reference-type="ref" reference="lem:e-conserved"}, we multiply [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} by $\boldsymbol{V}_i^\top$; after manipulations that are similar to those in the proof of [Lemma 8](#lem:e-conserved){reference-type="ref" reference="lem:e-conserved"}, we have, $$\begin{aligned} \frac{d}{dt} \boldsymbol{E}_i(t) = &-\frac{1}{\Delta x}\left(\mathcal{H}^{ES1}_{i+\frac{1}{2}} - \mathcal{H}^{ES1}_{i-\frac{1}{2}}\right)\\ &-\frac{1}{4\Delta x}\left(\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}^\top \boldsymbol{Q}_{i+\frac{1}{2}}^{ES1} \left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}} +\left\llbracket\boldsymbol{V}\right\rrbracket_{i-\frac{1}{2}}^\top \boldsymbol{Q}_{i-\frac{1}{2}}^{ES1} \left\llbracket\boldsymbol{V}\right\rrbracket_{i-\frac{1}{2}} \right). \end{aligned}$$ Since $\boldsymbol{Q}_{i+1/2}^{ES1}$ is positive semi-definite, this scheme satisfies [\[eq:es-condition\]](#eq:es-condition){reference-type="eqref" reference="eq:es-condition"}, and hence is an ES scheme.
◻ ## ES1 diffusion vs Roe diffusion We provide in this section a result that motivates and justifies our particular form of the ES1 diffusion modification defined in [\[eq:entropic-roe-diffusion-1d\]](#eq:entropic-roe-diffusion-1d){reference-type="eqref" reference="eq:entropic-roe-diffusion-1d"} and [\[eq:ES-ES1\]](#eq:ES-ES1){reference-type="eqref" reference="eq:ES-ES1"}. This result states that if the bottom topography function vanishes (i.e., we are in the specialized case of a conservation law), then our chosen Roe-type ES1 diffusion in [\[eq:roe-diff-operator\]](#eq:roe-diff-operator){reference-type="eqref" reference="eq:roe-diff-operator"} and [\[eq:roe-diffusion-1d\]](#eq:roe-diffusion-1d){reference-type="eqref" reference="eq:roe-diffusion-1d"} coincides with a standard Roe-type diffusion term. Hence, in specialized scenarios our diffusive augmentations using entropic variables are equivalent to more standard Roe-type diffusion. **Proposition 1**. *Define the Roe diffusion matrix as in [\[eq:roe-diffusion\]](#eq:roe-diffusion){reference-type="eqref" reference="eq:roe-diffusion"}, but using the flux Jacobian evaluated at $\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}$, $$\begin{aligned} \label{eq:modified-roe-flux} \frac{\partial \widehat{F}}{\partial \widehat{U}}(\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}) &=\boldsymbol{T}^{\text{Roe}}\boldsymbol{\Lambda}^{\text{Roe}}\left(\boldsymbol{T}^{\text{Roe}}\right)^{-1}. \end{aligned}$$ where we have evaluated the flux jacobian at $\widetilde{\boldsymbol{U}}_{i+1/2}$ instead of at $\overline{\boldsymbol{U}}_{i+1/2}$. Assume $\boldsymbol{B}_i = 0$ for all $i \in [M]$. Then,\ $$\begin{aligned} \label{eq:flatRoe} \boldsymbol{Q}^{\text{Roe}}_{i+\frac{1}{2}}\left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}} = \boldsymbol{Q}^{ES1}_{i+\frac{1}{2}}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}. \end{aligned}$$* Proving this result requires some setup: Under the assumptions of [Proposition 1](#prop:flatRoe){reference-type="ref" reference="prop:flatRoe"} we consider the SGSWE [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} with flat bottom, i.e., $\widehat{B}= \boldsymbol{0}$, together with entropy $E^{\text{flat}}(\widehat{U}) = \frac{1}{2}(\widehat{q})^\top\widehat{u}+\frac{g}{2}\Vert\widehat{h}\Vert^2$ and the entropy variables, $$\begin{aligned} \label{eq:entropicvarflat} \widehat{V}^{\text{flat}} = \partial_{\widehat{U}}E = \begin{pmatrix} -\frac{1}{2}\mathcal{P}(\widehat{u})\widehat{u}+g\widehat{h}\vspace{0.5em}\\ \widehat{u} \end{pmatrix}. \end{aligned}$$ Our main tool will be some results of the proof of Theorem 3.1 in [@doi:10.1137/20M1360736]; in particular, while we have provided the flux Jacobian for this system in [\[eq:x-jacobian1d\]](#eq:x-jacobian1d){reference-type="eqref" reference="eq:x-jacobian1d"}, we will need the explicit similarity transform that accomplishes its symmetrization. **Lemma 9** ([@doi:10.1137/20M1360736], Theorem 3.1). *Assume $\mathcal{P}(\widehat{h}) > 0$. Define $G = \sqrt{g\mathcal{P}(\widehat{h})}$ as the positive definite square root matrix of $g\mathcal{P}(\widehat{h})$. 
Then, $$\begin{aligned} \frac{\partial \widehat{F}}{\partial \widehat{U}}(\widehat{U}) = \boldsymbol{R}\boldsymbol{D}\boldsymbol{R}^{-1},\end{aligned}$$ where $\boldsymbol{D}$ is the symmetric matrix, $$\begin{aligned} \label{eq:matrixD} \boldsymbol{D}(\widehat{U}) = \frac{1}{2} \left( \begin{array}{cc} 2 G + \mathcal{P}(\widehat{u}) + gG^{-1}\mathcal{P}(\widehat{q})G^{-1} & \mathcal{P}(\widehat{u}) - gG^{-1}\mathcal{P}(\widehat{q})G^{-1} \vspace{.5em}\\ \mathcal{P}(\widehat{u}) - gG^{-1}\mathcal{P}(\widehat{q})G^{-1} & \mathcal{P}(\widehat{u}) + g G^{-1}\mathcal{P}(\widehat{q})G^{-1} - 2 G\end{array}\right),\end{aligned}$$ and $$\begin{aligned} \label{eq:scaledeigenmatrix} \boldsymbol{R}(\widehat{U}) = \frac{1}{\sqrt{2g}}\begin{pmatrix} I&I\\ \mathcal{P}(\widehat{u})+\sqrt{g\mathcal{P}(\widehat{h})}& \mathcal{P}(\widehat{u})-\sqrt{g\mathcal{P}(\widehat{h})} \end{pmatrix}.\end{aligned}$$* The second lemma reveals the relation between the cell interface jump of $\boldsymbol{V}^{\text{flat}}$ (the spatial approximation corresponding to the cell-averaged entropy variable $\widehat{V}^{\text{flat}}$ in [\[eq:entropicvarflat\]](#eq:entropicvarflat){reference-type="eqref" reference="eq:entropicvarflat"}) and $\boldsymbol{U}$. **Lemma 10**. *Recall the definition of $\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}$ in [\[eq:roe-diffusion-1d\]](#eq:roe-diffusion-1d){reference-type="eqref" reference="eq:roe-diffusion-1d"}: $$\begin{aligned} %\label{eqapp:intermediate} \widetilde{\boldsymbol{U}}_{i+\frac{1}{2}} = \begin{pmatrix} \overline{\boldsymbol{h}}_{i+\frac{1}{2}}\\ \mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\overline{\boldsymbol{u}}_{i+\frac{1}{2}} \end{pmatrix}\end{aligned}$$ which is an intermediate state defined by the arithmetic average of $\boldsymbol{h}$ and $\boldsymbol{u}$ across the cell interface $x = x_{i+\frac{1}{2}}$. Denote, $\boldsymbol{V}^{\text{flat}}$ to be the corresponding spatial approximation of the cell-averaged entropy variable defined in [\[eq:entropicvarflat\]](#eq:entropicvarflat){reference-type="eqref" reference="eq:entropicvarflat"}. Then,* 1. *The jump $\left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}}$ is a rescaling of the jump $\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}}$, i.e., $$\begin{aligned} \label{eq:e-jump-to-c-jump} \left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}} = (\boldsymbol{V}^{\text{flat}}_{\boldsymbol{U}})_{i+\frac{1}{2}}\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}}, \end{aligned}$$ where $$\begin{aligned} (\boldsymbol{V}^{\text{flat}}_{\boldsymbol{U}})_{i+\frac{1}{2}} &\coloneqq \frac{1}{g}\begin{pmatrix} I&\mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\\ \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})& \mathcal{P}^2(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})+g\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}}) \end{pmatrix}, \\ \left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+1/2} &\stackrel{\eqref{eq:entropicvarflat}}{=} \left(\begin{array}{c} -\frac{1}{2} \left\llbracket\mathcal{P}(\boldsymbol{u}) \boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} + g \left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}\\ \left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}\end{array}\right) \end{aligned}$$* 2. 
*Let $\boldsymbol{R}_{i+\frac{1}{2}}$ denote the matrix that symmetrizes the flux Jacobian at the state $\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}$, $$\begin{aligned} %\label{eq:scaledeigenmatrix} \boldsymbol{R}_{i+\frac{1}{2}} \coloneqq \boldsymbol{R}\left(\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}\right) = \frac{1}{\sqrt{2g}}\begin{pmatrix} I&I\\ \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})+\sqrt{g\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})}& \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})-\sqrt{g\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})} \end{pmatrix}, \end{aligned}$$ cf. [\[eq:scaledeigenmatrix\]](#eq:scaledeigenmatrix){reference-type="eqref" reference="eq:scaledeigenmatrix"}. Then, $$\begin{aligned} \label{eq:u-v-decomp} \boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{R}_{i+\frac{1}{2}}^\top = (\boldsymbol{V}^{\text{flat}}_{\boldsymbol{U}})_{i+\frac{1}{2}}. \end{aligned}$$* *Proof.* Part (2), i.e., [\[eq:u-v-decomp\]](#eq:u-v-decomp){reference-type="eqref" reference="eq:u-v-decomp"}, is a straightforward matrix algebra calculation that we omit. For part (1), we first recall that [\[eq:relation1\]](#eq:relation1){reference-type="eqref" reference="eq:relation1"} implies, $$\begin{aligned} \label{eq:relation4} \frac{1}{2}\left\llbracket\mathcal{P}(\boldsymbol{u})\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} = \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}. \end{aligned}$$ Second, we use the linearity of $\mathcal{P}(\cdot)$, the property [\[eq:jump-and-average\]](#eq:jump-and-average){reference-type="eqref" reference="eq:jump-and-average"} for arithmetic averages, and the commutation property [\[eq:commute\]](#eq:commute){reference-type="eqref" reference="eq:commute"}, to conclude, $$\label{eq:relation5} \mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}+\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} = \left\llbracket\mathcal{P}(\boldsymbol{h})\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} = \left\llbracket\boldsymbol{q}\right\rrbracket_{i+\frac{1}{2}}.$$ Therefore, $$\begin{aligned}\label{eq:process1} (\boldsymbol{V}^{\text{flat}}_{\boldsymbol{U}})_{i+\frac{1}{2}} \left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}} %\\ &= \frac{1}{g}\begin{pmatrix} -\frac{1}{2}\left\llbracket\mathcal{P}(\boldsymbol{u})\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} + g\left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}+\mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}}\\ -\frac{1}{2}\mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\left\llbracket\mathcal{P}(\boldsymbol{u})\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} + g\mathcal{P}(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})\left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}+\left(\mathcal{P}^2(\overline{\boldsymbol{u}}_{i+\frac{1}{2}})+g\mathcal{P}(\overline{\boldsymbol{h}}_{i+\frac{1}{2}})\right)\left\llbracket\boldsymbol{u}\right\rrbracket_{i+\frac{1}{2}} \end{pmatrix}\\ \overset{\eqref{eq:relation4}\eqref{eq:relation5}}{=\joinrel=\joinrel=}&\begin{pmatrix} \left\llbracket\boldsymbol{h}\right\rrbracket_{i+\frac{1}{2}}\\ \left\llbracket\boldsymbol{q}\right\rrbracket_{i+\frac{1}{2}}\\ \end{pmatrix} = \left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}}. 
\end{aligned}$$  ◻ We are now in a position to show [\[eq:flatRoe\]](#eq:flatRoe){reference-type="eqref" reference="eq:flatRoe"} in [Proposition 1](#prop:flatRoe){reference-type="ref" reference="prop:flatRoe"}. *Proof of [Proposition 1](#prop:flatRoe){reference-type="ref" reference="prop:flatRoe"}.* Let $\boldsymbol{D}_{i+\frac{1}{2}}$ be the *symmetric* matrix defined in [\[eq:matrixD\]](#eq:matrixD){reference-type="eqref" reference="eq:matrixD"} evaluated at $\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}$, and let $\boldsymbol{D}_{i+\frac{1}{2}} = \boldsymbol{L}_{i+\frac{1}{2}}\boldsymbol{\Lambda}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}^{\top}$ be its eigenvalue decomposition. Then, [\[eq:T-def\]]{#eq:T-def label="eq:T-def"} $$\begin{aligned} \frac{\partial \widehat{F}}{\partial \widehat{U}}(\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}) &= \boldsymbol{R}_{i+\frac{1}{2}}\left(\boldsymbol{L}_{i+\frac{1}{2}}\boldsymbol{\Lambda}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}^{\top}\right)\boldsymbol{R}^{-1}_{i+\frac{1}{2}} \\ &\eqqcolon \boldsymbol{T}_{i+\frac{1}{2}}\boldsymbol{\Lambda}_{i+\frac{1}{2}} \boldsymbol{T}_{i+\frac{1}{2}}^{-1},\end{aligned}$$ is an eigendecomposition of the Jacobian matrix $\frac{\partial \widehat{F}}{\partial \widehat{U}}(\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}})$, where we have used the fact that $\boldsymbol{L}_{i+\frac{1}{2}}^{-1} = \boldsymbol{L}_{i+\frac{1}{2}}^{\top}$, since the eigenvector matrix of the symmetric matrix $\boldsymbol{D}_{i+\frac{1}{2}}$ can be chosen orthogonal. The Roe-diffusion operator evaluated at the location $\widetilde{\boldsymbol{U}}_{i+\frac{1}{2}}$ as indicated in [\[eq:modified-roe-flux\]](#eq:modified-roe-flux){reference-type="eqref" reference="eq:modified-roe-flux"} is then given by, $$\begin{aligned} \boldsymbol{Q}^{\text{Roe}}_{i+\frac{1}{2}} \stackrel{\eqref{eq:roe-diffusion}}{=} \boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{T}_{i+\frac{1}{2}}^{-1}. \end{aligned}$$ Therefore, $$\label{eq:roe-entropy-jump} \begin{aligned} \boldsymbol{Q}^{\text{Roe}}_{i+\frac{1}{2}}\left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}} &= \boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{T}_{i+\frac{1}{2}}^{-1}\left\llbracket\boldsymbol{U}\right\rrbracket_{i+\frac{1}{2}},\\ &\stackrel{\eqref{eq:T-def},\eqref{eq:e-jump-to-c-jump}}{=} \left(\boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}\right)\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\left(\boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}\right)^{-1}(\boldsymbol{V}^{\text{flat}}_{\boldsymbol{U}})_{i+\frac{1}{2}}\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}},\\ &\overset{\eqref{eq:u-v-decomp}}{=}\left(\boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}\right)\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\left(\boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{L}_{i+\frac{1}{2}}\right)^{-1}\boldsymbol{R}_{i+\frac{1}{2}}\boldsymbol{R}_{i+\frac{1}{2}}^\top\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}},\\
&=\boldsymbol{R}_{i+\frac{1}{2}}\left(\boldsymbol{L}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{L}^{\top}_{i+\frac{1}{2}}\right)\boldsymbol{R}_{i+\frac{1}{2}}^\top\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}},\\ &= \boldsymbol{T}_{i+\frac{1}{2}} \vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert \boldsymbol{T}_{i+\frac{1}{2}}^\top \left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}}, \\ &\stackrel{\eqref{eq:entropic-roe-diffusion-1d}}{=} \boldsymbol{Q}^{ES1}_{i+\frac{1}{2}}\left\llbracket\boldsymbol{V}^{\text{flat}}\right\rrbracket_{i+\frac{1}{2}}. \end{aligned}$$ ◻

## A second-order ES scheme

To develop a second-order accurate energy-stable scheme, we use jump operators with $\mathcal{O}(\Delta x^2)$ accuracy. A natural choice is to use the jumps obtained by non-oscillatory second-order reconstructions of the entropy variable. However, attaining a provable energy-stable scheme requires the more subtle reconstruction procedure in [@fjordholm2012arbitrarily] that we follow. The new idea for second-order diffusions is to use reconstructions in order to compute jumps. To that end, we let $\boldsymbol{V}^+_i$ and $\boldsymbol{V}^-_{i+1}$ be second-order reconstructions from the right and left, respectively, of the entropy variable $\boldsymbol{V}(x)$ at location $x = x_{i+1/2}$. We will describe later in this section how these reconstructions are computed. Assuming we have these reconstructions in hand, we can compute second-order accurate jumps of the entropy variables: $$\begin{aligned} \label{eq:Vtilde-pw} \langle\!\langle\boldsymbol{V}\rangle\!\rangle_{i+\frac{1}{2}} = \boldsymbol{V}_{i+1}^- - \boldsymbol{V}_{i}^+,\end{aligned}$$ where, for any quantity carrying interface values $\boldsymbol{a}_i^{\pm}$, the notation $\langle\!\langle\boldsymbol{a}\rangle\!\rangle_{i+1/2} \coloneqq \boldsymbol{a}_{i+1}^{-} - \boldsymbol{a}_{i}^{+}$ denotes the corresponding interface jump. The overall scheme is similar to that of the previous section, but uses a second-order diffusive augmentation of a conservative flux, $$\begin{aligned} \label{eq:es2-flux} \mathcal{F}^{ES2}_{i+\frac{1}{2}} \coloneqq \mathcal{F}^{EC}_{i+\frac{1}{2}} - \frac{1}{2}\boldsymbol{Q}^{ES2}_{i+\frac{1}{2}}\langle\!\langle\boldsymbol{V}\rangle\!\rangle_{i+\frac{1}{2}}.\end{aligned}$$ We choose the matrix $\boldsymbol{Q}^{ES2}$ as for the ES1 scheme, $$\begin{aligned} \label{eq:Q-es2} \boldsymbol{Q}^{ES2}_{i+1/2} = \boldsymbol{Q}^{ES1}_{i+1/2} = \mathcal{Q}_{i+1/2}(\boldsymbol{U}_i, \boldsymbol{U}_{i+1}) = \boldsymbol{T}_{i+1/2} \vert\boldsymbol{\Lambda}_{i+1/2} \vert\boldsymbol{T}^\top_{i+1/2},\end{aligned}$$ where we recall that the eigendecomposition matrices $\boldsymbol{T}$, $\boldsymbol{\Lambda}$ are computed from the Roe-type average of the flux Jacobian, cf. [\[eq:roe-diff-operator\]](#eq:roe-diff-operator){reference-type="eqref" reference="eq:roe-diff-operator"}, [\[eq:roe-diffusion-1d\]](#eq:roe-diffusion-1d){reference-type="eqref" reference="eq:roe-diffusion-1d"}. One could alternatively select $\boldsymbol{Q}^{ES2}$ by using second-order reconstructions of $\boldsymbol{U}$ as input to $\mathcal{Q}$, e.g., $$\begin{aligned} \boldsymbol{Q}^{ES2}_{i+1/2} = \mathcal{Q}_{i+1/2}(\boldsymbol{U}_i^+, \boldsymbol{U}_{i+1}^-),\end{aligned}$$ for some second-order reconstructions $\boldsymbol{U}_i^{\pm}$.
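As an illustration of how [\[eq:es2-flux\]](#eq:es2-flux){reference-type="eqref" reference="eq:es2-flux"} and [\[eq:Q-es2\]](#eq:Q-es2){reference-type="eqref" reference="eq:Q-es2"} combine at a single interface, the following minimal Python sketch (illustrative only; the eigendecomposition factors `T`, `lam` and the reconstructed jump `recon_jump_V` are assumed to be supplied, the latter by the procedure described next) assembles the ES2 flux:

```python
import numpy as np

# Minimal sketch (illustrative only): ES2 flux at one interface.
# T, lam: eigendecomposition factors of the flux Jacobian at the Roe-type
# average state; recon_jump_V: reconstructed entropy-variable jump <<V>>_{i+1/2}.
def es2_flux(F_ec, T, lam, recon_jump_V):
    Q = (T * np.abs(lam)) @ T.T           # Q^{ES2} = T |Lambda| T^T
    return F_ec - 0.5 * Q @ recon_jump_V  # EC flux minus diffusive correction
```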
What remains is to describe how $\boldsymbol{V}_i^{\pm}$ are computed in a way that ensures the energy stable property. The main idea is to design $\boldsymbol{V}_i^{\pm}$ through a second-order reconstruction of *scaled* (transformed) versions of the entropy variables: $$\begin{aligned} \label{eq:wi-def} \boldsymbol{w}_i^{\pm} \coloneqq \boldsymbol{T}^\top_{i\pm 1/2} \boldsymbol{V}_i,\end{aligned}$$ where the matrices $\boldsymbol{T}_{i\pm1/2}$ are as in [\[eq:Q-es2\]](#eq:Q-es2){reference-type="eqref" reference="eq:Q-es2"}. Once these have been computed, we perform a second-order total variation-diminishing (TVD) reconstruction on the $\boldsymbol{w}$ variable at the interfaces: $$\begin{aligned} \label{eq:wtilde-def} \widetilde{\boldsymbol{w}}_i^{\pm} \coloneqq \boldsymbol{w}_i^{\pm} \pm \frac{1}{2} \phi\left( \boldsymbol{\theta}_i^{\pm}\right) \circ \langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i\pm 1/2},\end{aligned}$$ where $\circ$ is the Hadamard (elementwise) product on vectors, and $\boldsymbol{\theta}_i^\pm$ are difference quotients, $$\begin{aligned} \boldsymbol{\theta}_i^\pm \coloneqq \langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i\mp 1/2} \oslash \langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i\pm 1/2},\end{aligned}$$ where $\oslash$ is the Hadamard (elementwise) division between vectors. We select the function $\phi$ to be the minmod limiter, $$\begin{aligned} \label{eq:phi-def} \phi(\theta) = \left\{ \begin{aligned} &0, &&\textit{if }\theta < 0,\\ &\theta, &&\textit{if }0\le \theta \le 1,\\ &1, &&\textit{otherwise}, \end{aligned}\right.\end{aligned}$$ which operates elementwise on vector inputs. Note that other slope limiter functions $\phi$ may be selected, but minmod is the only valid limiter in this context that also satisfies the TVD property [@fjordholm2012arbitrarily Section 3.4]. Finally, the desired reconstructions for $\boldsymbol{V}_i^{\pm}$ are defined by inverting the $\boldsymbol{w}$-to-$\boldsymbol{V}$ map, $$\begin{aligned} \label{eq:wtilde-to-Vi} \boldsymbol{T}^\top_{i\pm1/2} \boldsymbol{V}_i^{\pm} \coloneqq \widetilde{\boldsymbol{w}}_i^\pm.\end{aligned}$$ The full scheme has now been described, and satisfies the following properties. **Theorem 5** (ES2 scheme). *The FV scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} choosing the flux $\mathcal{F}_{i+1/2} = \mathcal{F}^{ES2}_{i+1/2}$ defined in [\[eq:es2-flux\]](#eq:es2-flux){reference-type="eqref" reference="eq:es2-flux"} is a second-order, well-balanced, ES scheme.* We focus the remaining discussion in this section on sketching the proof of the above result. The second-order property results from the fact that the jumps are computed using second-order accurate reconstructions; the well-balanced property can be proven in exactly the same way as is done for the ES1 scheme in the proof of [Theorem 4](#thm:es1){reference-type="ref" reference="thm:es1"}. To show the ES property, we invoke one of the main results of [@fjordholm2012arbitrarily], which we reproduce below.
**Lemma 11** ([@fjordholm2012arbitrarily], Lemma 3.2). *For each $i$, if there exists a nonnegative diagonal matrix $\boldsymbol{\Pi}_{i+1/2}\ge 0$ such that the second-order jump satisfies, $$\begin{aligned} \label{eq:cond-e-stable} \langle\!\langle\boldsymbol{V}\rangle\!\rangle_{i+\frac{1}{2}} = (\boldsymbol{T}_{i+\frac{1}{2}}^\top)^{-1}\boldsymbol{\Pi}_{i+\frac{1}{2}}\boldsymbol{T}_{i+\frac{1}{2}}^{\top}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}, \end{aligned}$$ then the scheme [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"} with flux term $\mathcal{F}_{i+1/2} = \mathcal{F}_{i+1/2}^{ES2}$ is an ES scheme.* Hence, showing the ES property for our scheme only requires us to establish [\[eq:cond-e-stable\]](#eq:cond-e-stable){reference-type="eqref" reference="eq:cond-e-stable"}. To accomplish this, note that the definition [\[eq:wtilde-def\]](#eq:wtilde-def){reference-type="eqref" reference="eq:wtilde-def"} implies, for each component $\ell$, $$\begin{aligned} \label{eq:w-wtilde-jump} \left(\langle\!\langle\widetilde{\boldsymbol{w}}\rangle\!\rangle_{i+\frac{1}{2}}\right)_\ell = \left(1-\frac{1}{2}\phi((\boldsymbol{\theta}_{i+1}^{-})_\ell) - \frac{1}{2}\phi((\boldsymbol{\theta}_i^{+})_\ell)\right)\left(\langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i+\frac{1}{2}}\right)_\ell.\end{aligned}$$ That is, we have, $$\begin{aligned} \label{eq:w-wtilde-jump-Pi} \langle\!\langle\widetilde{\boldsymbol{w}}\rangle\!\rangle_{i+\frac{1}{2}} &= \boldsymbol{\Pi}_{i+\frac{1}{2}}\langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i+\frac{1}{2}}, & \left(\boldsymbol{\Pi}_{i+1/2}\right)_{\ell,\ell} &\coloneqq \left(1-\frac{1}{2}\phi((\boldsymbol{\theta}_{i+1}^{-})_\ell) - \frac{1}{2}\phi((\boldsymbol{\theta}_i^{+})_\ell)\right),\end{aligned}$$ and in particular $\boldsymbol{\Pi}_{i+1/2}$ is a diagonal matrix and positive semi-definite since $0 \leq \phi(\theta) \leq 1$. Since the jump operators $\langle\!\langle\cdot\rangle\!\rangle$ and $\left\llbracket\cdot\right\rrbracket$ are linear in their arguments, combining [\[eq:w-wtilde-jump\]](#eq:w-wtilde-jump){reference-type="eqref" reference="eq:w-wtilde-jump"} with the relations [\[eq:wi-def\]](#eq:wi-def){reference-type="eqref" reference="eq:wi-def"} and [\[eq:wtilde-to-Vi\]](#eq:wtilde-to-Vi){reference-type="eqref" reference="eq:wtilde-to-Vi"} that connect $\boldsymbol{w}_i^{\pm}$ and $\widetilde{\boldsymbol{w}}_i^{\pm}$ to $\boldsymbol{V}_i$ and $\boldsymbol{V}^{\pm}_i$ yields the relation [\[eq:cond-e-stable\]](#eq:cond-e-stable){reference-type="eqref" reference="eq:cond-e-stable"} with a positive semi-definite diagonal matrix $\boldsymbol{\Pi}_{i+1/2}$.
Hence, this is an ES scheme, which completes the proof of [Theorem 5](#thm:es2){reference-type="ref" reference="thm:es2"}. Finally, we remark that the implementation of the diffusion term in the ES2 flux [\[eq:es2-flux\]](#eq:es2-flux){reference-type="eqref" reference="eq:es2-flux"} does not require explicit construction of $\boldsymbol{V}^{\pm}_i$. Indeed, we have, $$\begin{aligned} \frac{1}{2}\boldsymbol{Q}^{ES2}_{i+\frac{1}{2}}\langle\!\langle\boldsymbol{V}\rangle\!\rangle_{i+\frac{1}{2}} &\stackrel{\eqref{eq:roe-diff-operator},\eqref{eq:cond-e-stable}}{=} \frac{1}{2}\boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{T}_{i+\frac{1}{2}}^{\top}(\boldsymbol{T}_{i+\frac{1}{2}}^\top)^{-1}\boldsymbol{\Pi}_{i+\frac{1}{2}}\boldsymbol{T}_{i+\frac{1}{2}}^{\top}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}\\ &= \frac{1}{2}\boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{\Pi}_{i+\frac{1}{2}}\boldsymbol{T}_{i+\frac{1}{2}}^{\top}\left\llbracket\boldsymbol{V}\right\rrbracket_{i+\frac{1}{2}}\\ & \stackrel{\eqref{eq:wi-def}}{=}\frac{1}{2}\boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\boldsymbol{\Pi}_{i+\frac{1}{2}}\langle\!\langle\boldsymbol{w}\rangle\!\rangle_{i+\frac{1}{2}},\\ &\overset{\eqref{eq:w-wtilde-jump-Pi}}{=}\frac{1}{2}\boldsymbol{T}_{i+\frac{1}{2}}\vert\boldsymbol{\Lambda}_{i+\frac{1}{2}}\vert\langle\!\langle\widetilde{\boldsymbol{w}}\rangle\!\rangle_{i+\frac{1}{2}},\end{aligned}$$ and hence one need only compute $\widetilde{\boldsymbol{w}}^{\pm}_i$ in order to directly evaluate the diffusion part of the ES2 flux.

## Algorithmic details

Our overall scheme is the semi-discrete form [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"}, which we pair with a numerical time-stepping scheme. We provide pseudocode in this section that describes a fully discrete SGSWE time-stepping algorithm. This full pseudocode introduces some additional details for the scheme that were devised in [@doi:10.1137/20M1360736], many of which are based on standard procedures used in schemes for deterministic SWE models [@kurganov_finite-volume_2018]. We very briefly describe these additional details in the coming sections; more comprehensive discussion can be found in [@doi:10.1137/20M1360736]. The full algorithmic pseudocode is given in [\[alg:swesg\]](#alg:swesg){reference-type="ref" reference="alg:swesg"}.

### Velocity desingularization

Computing $\boldsymbol{u}_i$ requires inversion of the matrix $\mathcal{P}(\boldsymbol{h}_i)$, which is assumed (and enforced in the scheme) to be symmetric and positive-definite. However, this matrix may be ill-conditioned. To ameliorate numerical artifacts associated with this ill-conditioned operation, we employ a *desingularization* procedure, introduced for the deterministic SWE in [@kurganov_secondorder_2007-2]. We describe here the stochastic variant of the desingularization procedure, proposed in [@doi:10.1137/20M1360736].
If $\mathcal{P}(\boldsymbol{h}_i)$ has the eigenvalue decomposition, $$\begin{aligned} \mathcal{P}(\boldsymbol{h}_i) &= \boldsymbol{Q} \boldsymbol{\Pi} \boldsymbol{Q}^\top, & \boldsymbol{\Pi} &= \mathrm{diag}(\pi_1, \ldots, \pi_K),\end{aligned}$$ where $\pi_k > 0$ are the eigenvalues of $\mathcal{P}(\boldsymbol{h}_i)$, then the desingularization process approximates $\mathcal{P}(\boldsymbol{h}_i)^{-1} \boldsymbol{q}_i$ by regularizing the matrix inverse procedure: $$\begin{aligned} \label{eq:ui-desingularized} \boldsymbol{u}_i &= \boldsymbol{Q} \widetilde{\boldsymbol{\Pi}}^{-1} \boldsymbol{Q}^\top \boldsymbol{q}_i, & \widetilde{\boldsymbol{\Pi}} &= \mathrm{diag}(\widetilde{\pi}_1, \ldots, \widetilde{\pi}_K), & \widetilde{\pi}_k &= \frac{\sqrt{\pi_k^4 + \max\{ \pi_k^4, \epsilon^4\}}}{\sqrt{2} \pi_k},\end{aligned}$$ where $\epsilon > 0$ is a small constant; we choose it to be $\epsilon = \Delta x$. Note that if $\pi_k \geq \epsilon$, then $\widetilde{\pi}_k = \pi_k$, and hence regularization is performed only in the presence of small eigenvalues. Compared to [\[eq:ui-def\]](#eq:ui-def){reference-type="eqref" reference="eq:ui-def"}, this procedure is a stabilized way to compute the velocities $\boldsymbol{u}_i$. For scheme consistency, if the desingularization above is activated, then we recompute the discharge variable: $$\begin{aligned} \boldsymbol{q}_i \gets \mathcal{P}(\boldsymbol{h}_i) \boldsymbol{u}_i.\end{aligned}$$

### Hyperbolicity preservation

The SGSWE PDE [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} is hyperbolic and has an entropy pair if $\mathcal{P}(\widehat{h}) > 0$. To ensure this holds at the discrete level, we require the condition $\mathcal{P}(\boldsymbol{h}_i) > 0$ for every cell $i$. To enforce this, we employ [@doi:10.1137/20M1360736 Theorem 3.4, Corollary 3.5], which state that a sufficient condition for $\mathcal{P}(\boldsymbol{h}_i) > 0$ is that for every $m =1, \ldots, M$, $$\begin{aligned} \label{eq:h-pos} \widehat{h}_i(\xi_m) &> 0, & \widehat{h}_i(\xi) &\coloneqq \sum_{k=1}^K \boldsymbol{h}_{i,k} \phi_k(\xi), & \boldsymbol{h}_i &= (\boldsymbol{h}_{i,1}, \ldots, \boldsymbol{h}_{i,K})^\top,\end{aligned}$$ where $\{\xi_m\}_{m=1}^M$ is a nodal set in $\mathbbm{R}^d$ for a positive-weight quadrature rule having sufficient accuracy relative to the $\xi$-polynomial space $P$ defined in [\[eq:P-def\]](#eq:P-def){reference-type="eqref" reference="eq:P-def"}. The functions $\phi_k$ form the basis of $P$ in [\[eq:P-def\]](#eq:P-def){reference-type="eqref" reference="eq:P-def"} with respect to which $\boldsymbol{h}_i$ collects the coordinates. The function $\widehat{h}_i(\xi)$ is the SGSWE approximation to the $\mathcal{I}_i$-cell average of $\widehat{h}(x,t,\xi)$ at the current time. Hence, the computational vehicle we use to preserve hyperbolicity of the underlying PDE in our scheme is to enforce the above positivity-type condition on the $\boldsymbol{h}_i$ variable.

### Positivity-preservation

We enforce the positivity condition [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"} by restricting the timestep size. We assume that the current time value of $\boldsymbol{h}_i$ satisfies [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"}.
### Hyperbolicity preservation

The SGSWE PDE [\[eq:swesg41d\]](#eq:swesg41d){reference-type="eqref" reference="eq:swesg41d"} is hyperbolic and admits an entropy pair provided $\mathcal{P}(\widehat{h}) > 0$, i.e., provided this matrix is positive definite. To ensure this holds at the discrete level, we require the condition $\mathcal{P}(\boldsymbol{h}_i) > 0$ for every cell $i$. To enforce this, we employ [@doi:10.1137/20M1360736 Theorem 3.4, Corollary 3.5], which state that a sufficient condition for $\mathcal{P}(\boldsymbol{h}_i) > 0$ is that for every $m =1, \ldots, M$,
$$\begin{aligned}
\label{eq:h-pos}
\widehat{h}_i(\xi_m) &> 0, & \widehat{h}_i(\xi) &\coloneqq \sum_{k=1}^K \boldsymbol{h}_{i,k} \phi_k(\xi), & \boldsymbol{h}_i &= (\boldsymbol{h}_{i,1}, \ldots, \boldsymbol{h}_{i,K})^\top,\end{aligned}$$
where $\{\xi_m\}_{m=1}^M$ is a nodal set in $\mathbbm{R}^d$ for a positive-weight quadrature rule having sufficient accuracy relative to the $\xi$-polynomial space $P$ defined in [\[eq:P-def\]](#eq:P-def){reference-type="eqref" reference="eq:P-def"}. The functions $\phi_k$ are the basis of $P$ in [\[eq:P-def\]](#eq:P-def){reference-type="eqref" reference="eq:P-def"} for which $\boldsymbol{h}_i$ are coordinates. The function $\widehat{h}_i(\xi)$ is the SGSWE approximation to the $\mathcal{I}_i$-cell average of $\widehat{h}(x,t,\xi)$ at the current time. Hence, the computational vehicle we use to enforce hyperbolicity of the underlying PDE in our scheme is to enforce the above positivity-type condition on the $\boldsymbol{h}_i$ variable.

### Positivity-preservation

We enforce the positivity condition [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"} by restricting the timestep size. We assume that the current time value of $\boldsymbol{h}_i$ satisfies [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"}. If Forward Euler with a stepsize $\Delta t$ is used to discretize [\[eq:fvsemi-discrete\]](#eq:fvsemi-discrete){reference-type="eqref" reference="eq:fvsemi-discrete"}, then [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"} remains true at the next time step if
$$\begin{aligned}
\label{eq:lambda-def}
\Delta t < \lambda \coloneqq \min_{i} \min_{m=1,\ldots, M} \left| \frac{\Delta x\; \widehat{h}_i(\xi_m)}{\widehat{\mathcal{F}}^h_{i+1/2}(\xi_m) - \widehat{\mathcal{F}}^h_{i-1/2}(\xi_m)} \right|,\end{aligned}$$
where $\widehat{\mathcal{F}}^h_{i+1/2}(\cdot)$ is the SG approximation of the $\widehat{h}$-variable flux:
$$\begin{aligned}
\widehat{\mathcal{F}}^h_{i+1/2}(\xi) &\coloneqq \sum_{k=1}^K \mathcal{F}^h_{i+1/2,k} \phi_k(\xi), & \mathcal{F}_{i+1/2} &= \left( \left(\mathcal{F}^h_{i+1/2}\right)^\top, \; \left(\mathcal{F}^q_{i+1/2}\right)^\top \right)^\top \in \mathbbm{R}^{2 K}.\end{aligned}$$
Hence, we enforce positivity preservation by ensuring a small enough timestep so that the positivity condition [\[eq:h-pos\]](#eq:h-pos){reference-type="eqref" reference="eq:h-pos"} is respected globally over all spatial cells. We must also restrict $\Delta t$ to satisfy the wave speed CFL condition; see [@doi:10.1137/20M1360736 Equation (4.16)].

### Adaptive time-stepping {#sssec:adaptive-ts}

The time step restriction [\[eq:lambda-def\]](#eq:lambda-def){reference-type="eqref" reference="eq:lambda-def"} works for Forward Euler time-stepping. To extend this to a higher-order temporal scheme, we employ a third-order strong stability-preserving scheme, which is a convex combination of Forward Euler steps [@gottlieb_strong_2001]. However, the intermediate stages of any time-stepping scheme need not obey the positivity-preserving property, even if $\Delta t$ is chosen to obey the condition [\[eq:lambda-def\]](#eq:lambda-def){reference-type="eqref" reference="eq:lambda-def"} determined at the initial step. To address this issue, we employ the *adaptive* time-stepping strategy proposed in [@chertock_well-balanced_2015 Remark 3.6]. We refer the reader to that reference for details, and present here only a high-level description of the procedure: $\lambda$ is initialized as the initial-stage value shown in [\[eq:lambda-def\]](#eq:lambda-def){reference-type="eqref" reference="eq:lambda-def"}. At intermediate stages, new intermediate values of $\lambda$ are computed. If an intermediate-stage value of $\lambda$ is smaller than the current value of $\lambda$, then we restart the entire time-step using the new, smaller-$\lambda$ restriction on $\Delta t$.
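The positivity bound $\lambda$ in [\[eq:lambda-def\]](#eq:lambda-def){reference-type="eqref" reference="eq:lambda-def"} is straightforward to evaluate once the interface fluxes are available. The sketch below is again only illustrative (the array layouts and names are our own assumptions, not the reference implementation); in the adaptive strategy just described, this quantity would be re-evaluated at every SSP stage and the step restarted whenever it decreases.

```python
import numpy as np

def positivity_bound(h, Fh, phi_nodes, dx):
    """Evaluate the positivity-preserving bound lambda.

    h         : (n_cells, K)   PC coefficients h_i of the water height
    Fh        : (n_cells+1, K) PC coefficients of the h-component interface fluxes
    phi_nodes : (K, M)         basis values phi_k(xi_m) at the quadrature nodes
    dx        : uniform mesh size
    """
    h_at_nodes = h @ phi_nodes                    # \hat h_i(xi_m)
    dF_at_nodes = (Fh[1:] - Fh[:-1]) @ phi_nodes  # flux differences per cell and node
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.abs(dx * h_at_nodes / dF_at_nodes)
    ratio[~np.isfinite(ratio)] = np.inf           # no restriction where the difference vanishes
    return ratio.min()

# The time step is then taken smaller than both this bound and the wave-speed CFL bound.
```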
The full time-stepping procedure ([\[alg:swesg\]](#alg:swesg){reference-type="ref" reference="alg:swesg"}) can be summarized as follows:

1. Input scheme type: `scheme` = `EC`, `ES1`, or `ES2`.
2. Input: Bottom topography $B$, initial data $U(t=0)$, polynomial index set $\Lambda$, terminal time $T$.
3. Initialize: $\boldsymbol{U}_i$, $t=0$. Compute $\boldsymbol{B}_i$ from $B$ for all $i$.
4. Compute $\boldsymbol{u}_i$ in [\[eq:ui-desingularized\]](#eq:ui-desingularized){reference-type="eqref" reference="eq:ui-desingularized"} for all $i$.
5. Compute $\mathcal{F}^{EC}_{i+1/2}$ for all $i$, given by [\[eq:sgswe-ec\]](#eq:sgswe-ec){reference-type="eqref" reference="eq:sgswe-ec"}.
6. If `scheme` = `EC`: for all $i$, set $\mathcal{F}_{i+1/2} \gets \mathcal{F}^{EC}_{i+1/2}$.
7. Otherwise, for all $i$: compute the entropy variable $\boldsymbol{V}_i$ using [\[eq:Vi-def\]](#eq:Vi-def){reference-type="eqref" reference="eq:Vi-def"}, and compute $\boldsymbol{T}_{i+1/2}$, $\boldsymbol{\Lambda}_{i+1/2}$ through [\[eq:roe-diffusion-1d\]](#eq:roe-diffusion-1d){reference-type="eqref" reference="eq:roe-diffusion-1d"}.
8. If `scheme` = `ES1`: Compute $\boldsymbol{Q}_{i+1/2}^{ES1}$ using [\[eq:roe-diffusion-1d\]](#eq:roe-diffusion-1d){reference-type="eqref" reference="eq:roe-diffusion-1d"}, [\[eq:entropic-roe-diffusion-1d\]](#eq:entropic-roe-diffusion-1d){reference-type="eqref" reference="eq:entropic-roe-diffusion-1d"} with $\boldsymbol{T}_{i+1/2}$, $\boldsymbol{\Lambda}_{i+1/2}$. Compute $\mathcal{F}_{i+1/2} \gets \mathcal{F}^{ES}_{i+1/2}$ in [\[eq:FES\]](#eq:FES){reference-type="eqref" reference="eq:FES"} using $\mathcal{F}^{EC}_{i+1/2}$, $\boldsymbol{V}_i$, and $\boldsymbol{Q}_{i+1/2}^{ES} \gets \boldsymbol{Q}_{i+1/2}^{ES1}$.
9. If `scheme` = `ES2`: Construct $\boldsymbol{Q}^{ES2}_{i+1/2}$ as in [\[eq:Q-es2\]](#eq:Q-es2){reference-type="eqref" reference="eq:Q-es2"} with $\boldsymbol{T}_{i+1/2}$, $\boldsymbol{\Lambda}_{i+1/2}$. Construct $\boldsymbol{V}_i^{\pm}$ through [\[eq:wi-def\]](#eq:wi-def){reference-type="eqref" reference="eq:wi-def"}, [\[eq:wtilde-def\]](#eq:wtilde-def){reference-type="eqref" reference="eq:wtilde-def"}, and [\[eq:wtilde-to-Vi\]](#eq:wtilde-to-Vi){reference-type="eqref" reference="eq:wtilde-to-Vi"}. Compute $\mathcal{F}_{i+1/2} \gets \mathcal{F}^{ES2}_{i+1/2}$ in [\[eq:es2-flux\]](#eq:es2-flux){reference-type="eqref" reference="eq:es2-flux"} and [\[eq:Vtilde-pw\]](#eq:Vtilde-pw){reference-type="eqref" reference="eq:Vtilde-pw"} using $\boldsymbol{Q}^{ES2}_{i+1/2}$, $\boldsymbol{V}_i^{\pm}$, and $\mathcal{F}^{EC}_{i+1/2}$.
10. Initialize $\lambda$ and $\Delta t$ as shown in [\[eq:lambda-def\]](#eq:lambda-def){reference-type="eqref" reference="eq:lambda-def"}.
11. Adaptively determine $\Delta t$ using the procedure discussed in [4.6.4](#sssec:adaptive-ts){reference-type="ref" reference="sssec:adaptive-ts"}.
12. Use a third-order SSP method to take a time step of size $\Delta t$, updating $\boldsymbol{h}_i$ and $\boldsymbol{q}_i$.
13. Set $t \gets t + \Delta t$ (steps 4--13 are repeated until $t \geq T$).

# Numerical Experiments {#sec:results}

Below we present several numerical examples to illustrate properties of the developed schemes. We refer to the second order energy-conservative scheme, the first order energy-stable scheme, and the second order energy-stable scheme as the EC, ES1, and ES2 schemes, respectively. We introduce the relative change in energy quantity,
$$\begin{aligned}
\textrm{relative energy} = \frac{E(t) - E(0)}{E(t)},\end{aligned}$$
where $E(t)$ is computed as $\sum_i \Delta x \boldsymbol{E}_i(t)$. This provides a way to visualize the relative change in the discrete energy for the different numerical schemes, namely EC, ES1 and ES2.

In all tests below we consider a single (scalar) random variable $\xi$, uniformly distributed on $[-1,1]$; hence we choose the functions $\phi_k$ to be the orthonormal Legendre polynomials on $[-1,1]$. We use $K=9$ for the dimension of the PC space $P$. Instead of visualizing the conservative variable $h$ corresponding to water height, we will plot the *water surface* $w$, defined as $w = h + B$, with the bottom topography $B$ superimposed on the same graph; plots of $(w,B)$ are more physically interpretable than directly plotting the water height $h$.

## Flat-Bottom Dam Break {#ssec:results-flat-bot}

In the first experiment, we consider a stochastic water surface,
$$h(x,0,\xi) + B(x,0,\xi) = w(x,0,\xi) = \left\{\begin{aligned}&2.0+0.1\xi&&x<0\\ &1.5+0.1\xi&&x>0\end{aligned}\right.,\quad q(x,0,\xi) = 0,$$
with a flat bottom $B(x,t,\xi)\equiv 0$. This is a stochastic modification of the deterministic "dam break test" problem from [@fjordholm2011well].
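Since the initial surface above is affine in $\xi$, its PC coefficients can be written down in closed form, and only the first two Legendre coefficients are non-zero. The following illustrative sketch (our own code, with hypothetical names, assuming the orthonormal Legendre basis with respect to the uniform density on $[-1,1]$ described above) projects such data onto the PC space by Gauss--Legendre quadrature, which is also how more general initial data can be handled.

```python
import numpy as np
from numpy.polynomial import legendre

def pc_coefficients(f, K, n_quad=32):
    """Project f(xi) onto the first K Legendre polynomials, orthonormal with
    respect to the uniform probability density d(xi)/2 on [-1, 1]."""
    nodes, weights = legendre.leggauss(n_quad)
    coeffs = np.zeros(K)
    for k in range(K):
        phi_k = np.sqrt(2 * k + 1) * legendre.Legendre.basis(k)(nodes)  # orthonormal basis
        coeffs[k] = 0.5 * np.sum(weights * f(nodes) * phi_k)            # E[f * phi_k]
    return coeffs

# Left state of the dam-break surface, w = 2.0 + 0.1*xi for x < 0:
w_left = pc_coefficients(lambda xi: 2.0 + 0.1 * xi, K=9)
# w_left is approximately [2.0, 0.1/sqrt(3), 0, ..., 0].
```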
In [6](#fig:ex1a-flat-bot-wq){reference-type="ref" reference="fig:ex1a-flat-bot-wq"}, we use a uniform grid with $nx=400$ cells over the physical domain $x \in [-1,1]$, and compute up to the time $t = 0.4$. We test the example using the different numerical methods (EC, ES1, ES2) of Section [4](#sec:schemes){reference-type="ref" reference="sec:schemes"}. From [6](#fig:ex1a-flat-bot-wq){reference-type="ref" reference="fig:ex1a-flat-bot-wq"}, similar to the results presented in [@fjordholm2011well Figs. 1 and 4], we observe that the water surface with uncertainties develops a leftward-going rarefaction wave and a rightward-going shock. Similar to [@fjordholm2011well], EC computes such solutions accurately, but at the expense of large post-shock oscillations, as observed in [6](#fig:ex1a-flat-bot-wq){reference-type="ref" reference="fig:ex1a-flat-bot-wq"} (right plot). These oscillations are expected since the EC scheme preserves energy, and hence energy is not dissipated across the shock as it should be. We also demonstrate in [9](#fig:ex1a-flat-bot-mean){reference-type="ref" reference="fig:ex1a-flat-bot-mean"} (middle and right plots) the numerical energy conservation for the EC scheme. We note that the energy conservation errors due to time discretization are reduced significantly by decreasing the time step/CFL constant (right figure), similar to the results reported in Fig. 1 of [@fjordholm2011well]. The results presented in [6](#fig:ex1a-flat-bot-wq){reference-type="ref" reference="fig:ex1a-flat-bot-wq"} and [9](#fig:ex1a-flat-bot-mean){reference-type="ref" reference="fig:ex1a-flat-bot-mean"} (left figure) also illustrate that ES2 produces less smearing than ES1 at both the rarefaction and the shock waves. The ES1 and ES2 schemes are both designed to dissipate energy, which is also confirmed by the numerical experiments presented in [9](#fig:ex1a-flat-bot-mean){reference-type="ref" reference="fig:ex1a-flat-bot-mean"} (middle plot), with the energy dissipation in ES2 being lower than in the ES1 scheme. In addition, the numerical results seem to indicate that the ES2 scheme is better able to capture large variance spikes compared to the ES1 scheme. Finally, the numerical diffusion operators employed in the ES1 and ES2 schemes remove the oscillations present in the numerical solution of the EC scheme. The observed results are also in agreement with the results of the deterministic model reported in [@fjordholm2011well].

![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1. Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/w_test1a_es1_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1. Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/w_test1a_es2_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1. Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/w_test1a_ec_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}\
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1.
Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/q_test1a_es1_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1. Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/q_test1a_es2_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Top: water surface, Bottom: discharge: Left: ES1. Middle: ES2. Right: EC. Mesh nx = 400 and PC basis functions K=9.](new_figure/q_test1a_ec_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-wq width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Comparison: Left - water surface mean ES1 vs. ES2. Middle: relative energy change in EC, ES1, ES2. Right: relative energy change in EC under different time step/CFL constant. Mesh nx = 400 and PC basis functions K=9.](new_figure/w_mean_test1a_es1_es2_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-mean width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Comparison: Left - water surface mean ES1 vs. ES2. Middle: relative energy change in EC, ES1, ES2. Right: relative energy change in EC under different time step/CFL constant. Mesh nx = 400 and PC basis functions K=9.](new_figure/rel_energy_test1a_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-mean width=".32\\textwidth"}
![Results for [5.1](#ssec:results-flat-bot){reference-type="ref" reference="ssec:results-flat-bot"}. Comparison: Left - water surface mean ES1 vs. ES2. Middle: relative energy change in EC, ES1, ES2. Right: relative energy change in EC under different time step/CFL constant. Mesh nx = 400 and PC basis functions K=9.](new_figure/rel_energy_test1a_ec_cfl_m400_K9.pdf "fig:"){#fig:ex1a-flat-bot-mean width=".32\\textwidth"}

## Stochastic Bottom Topography {#ssec:results-sbt}

Next, we consider the shallow water system with deterministic initial conditions,
$$w(x,0) = \left\{\begin{aligned}&1&&x<0\\ &0.5&&x>0\end{aligned}\right.,\quad q(x,0) = 0,$$
and with a stochastic bottom topography,
$$\label{eq:bottom 1}
B(x,\xi) = \left\{\begin{aligned}0.125(\cos(5\pi x)+2)+0.125\xi,\quad&|x|<0.2\\0.125+0.125\xi,\quad&\text{otherwise.}\end{aligned}\right.$$
This test example was presented previously in [@doi:10.1137/20M1360736]. Initially, the highest possible bottom barely touches the initial water surface at $x=0$; see [15](#fig:ex1-et-m400){reference-type="ref" reference="fig:ex1-et-m400"}-[23](#fig:ex1-et-m1600){reference-type="ref" reference="fig:ex1-et-m1600"}. In [15](#fig:ex1-et-m400){reference-type="ref" reference="fig:ex1-et-m400"}-[23](#fig:ex1-et-m1600){reference-type="ref" reference="fig:ex1-et-m1600"}, we use uniform grids with $nx = 400, 800, 1600$ cells over the physical domain $x \in [-1,1]$, and compute up to time $t=0.0995$ (immediately after this time, the EC scheme fails for $nx=400$ due to spurious oscillations near sharp gradients of the solution). In [19](#fig:ex1-et-m800){reference-type="ref" reference="fig:ex1-et-m800"}-[23](#fig:ex1-et-m1600){reference-type="ref" reference="fig:ex1-et-m1600"} we compare only the performance of ES1 and ES2 at $t=0.0995$, since EC fails on those meshes even earlier.
Again, the numerical results indicate that the ES2 scheme can more easily resolve large, spatially-concentrated variance values compared to the ES1 scheme, but under mesh refinement both schemes converge to similar numerical solutions. In [27](#fig:ex1-m800){reference-type="ref" reference="fig:ex1-m800"}, we show numerical solution obtained using ES1 and ES2 at the final time $t=0.8$ and on mesh $nx=800$. For both schemes, the $99\%$ confidence region of the water surface stays above the $99\%$ confidence region of the bottom function in [27](#fig:ex1-m800){reference-type="ref" reference="fig:ex1-m800"}, and both methods produce similar numerical solutions. The presented results are comparable to the results in [@doi:10.1137/20M1360736 Section 5.1]. In [29](#fig:ex1-rel_E){reference-type="ref" reference="fig:ex1-rel_E"}, we again observe as expected that the EC scheme numerically conserves energy, while ES1 and ES2 dissipate energy with larger dissipation produced by ES1 method. ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es1_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es2_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_ec_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"}\ ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es1_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es2_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_ec_m400_K9_et.pdf "fig:"){#fig:ex1-et-m400 width=".32\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=800$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es1_m800_K9_et.pdf "fig:"){#fig:ex1-et-m800 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. 
Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=800$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es2_m800_K9_et.pdf "fig:"){#fig:ex1-et-m800 width=".49\\textwidth"}\ ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=800$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es1_m800_K9_et.pdf "fig:"){#fig:ex1-et-m800 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=800$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es2_m800_K9_et.pdf "fig:"){#fig:ex1-et-m800 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es1_m1600_K9_et.pdf "fig:"){#fig:ex1-et-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at earlier time $T=0.0995$.](new_figure/w_test1_es2_m1600_K9_et.pdf "fig:"){#fig:ex1-et-m1600 width=".49\\textwidth"}\ ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es1_m1600_K9_et.pdf "fig:"){#fig:ex1-et-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at earlier time $T=0.0995$.](new_figure/q_test1_es2_m1600_K9_et.pdf "fig:"){#fig:ex1-et-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, and Right: ES2. Mesh $nx=800$ with $K=9$ at the final time $T=0.8$.](new_figure/w_test1_es1_m800_K9.pdf "fig:"){#fig:ex1-m800 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, and Right: ES2. Mesh $nx=800$ with $K=9$ at the final time $T=0.8$.](new_figure/w_test1_es2_m800_K9.pdf "fig:"){#fig:ex1-m800 width=".49\\textwidth"}\ ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, and Right: ES2. Mesh $nx=800$ with $K=9$ at the final time $T=0.8$.](new_figure/q_test1_es1_m800_K9.pdf "fig:"){#fig:ex1-m800 width=".49\\textwidth"} ![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, and Right: ES2. 
Mesh $nx=800$ with $K=9$ at the final time $T=0.8$.](new_figure/q_test1_es2_m800_K9.pdf "fig:"){#fig:ex1-m800 width=".49\\textwidth"}
![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Relative energy change: Left: EC vs. ES1 vs. ES2 on mesh $nx=400$ on time interval $[0, 0.0995]$. Right: ES1 vs. ES2 on mesh $nx=800$ on time interval $[0, 0.8]$.](new_figure/rel_energy_test1_m400_K9_et.pdf "fig:"){#fig:ex1-rel_E width=".49\\textwidth"}
![Comparison of the results for [5.2](#ssec:results-sbt){reference-type="ref" reference="ssec:results-sbt"} using different schemes. Relative energy change: Left: EC vs. ES1 vs. ES2 on mesh $nx=400$ on time interval $[0, 0.0995]$. Right: ES1 vs. ES2 on mesh $nx=800$ on time interval $[0, 0.8]$.](new_figure/rel_energy_test1_m800_K9_es1_es2.pdf "fig:"){#fig:ex1-rel_E width=".49\\textwidth"}

## Perturbation to Lake at Rest {#ssec:wbperturb2}

As a final example, we consider the shallow water system with a stochastic water surface,
$$\label{eq:IV-wbperturb1}
w(x,0,\xi) = \left\{\begin{aligned}&1+0.001(\xi + 1)&&\vert x\vert \le 0.05\\ &1&&\text{otherwise}\end{aligned}\right.,\quad q(x,0,\xi) = 0,$$
and with a deterministic bottom topography
$$\label{eq:bt-wbperturb1}
B(x) = \left\{\begin{aligned}0.25(\cos(5\pi(x+0.35))+1),\quad &-0.55 < x < -0.15\\ 0.125(\cos(10\pi(x-0.35))+1),\quad &0.25 < x < 0.45\\ 0,\quad &\text{otherwise.}\end{aligned}\right.$$
The test is from [@CKJin1] and is similar to the deterministic tests of a perturbation of the lake at rest solution, for example the one presented in [@fjordholm2011well]. From the results presented in [35](#fig:ex2-mean){reference-type="ref" reference="fig:ex2-mean"}, [41](#fig:ex2-m400){reference-type="ref" reference="fig:ex2-m400"} and [45](#fig:ex2-m1600){reference-type="ref" reference="fig:ex2-m1600"}, we draw conclusions similar to those of the previous sections: Both ES1 and ES2 capture small stochastic perturbations of the lake at rest solution quite well (with both leftward- and rightward-going waves present in the numerical solutions). The first order ES1 scheme exhibits much more dissipation in the left- and right-going waves than the ES2 scheme, which produces a more accurate solution, as shown in [35](#fig:ex2-mean){reference-type="ref" reference="fig:ex2-mean"}, [41](#fig:ex2-m400){reference-type="ref" reference="fig:ex2-m400"} (left and middle figures), and [45](#fig:ex2-m1600){reference-type="ref" reference="fig:ex2-m1600"}. The results of the EC scheme are also shown in [41](#fig:ex2-m400){reference-type="ref" reference="fig:ex2-m400"} (right figure). The EC scheme resolves the left- and right-going water waves with heights higher than both the ES1 and ES2 methods, but again there are oscillations present near both waves in the EC numerical solution, as expected, since EC does not dissipate energy across shocks. The relative energy change for this example produced by the EC, ES1 and ES2 methods is illustrated in [47](#fig:ex2-r_E){reference-type="ref" reference="fig:ex2-r_E"}. The presented results are also comparable to the results reported in [@fjordholm2011well] and in [@CKJin1].

![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$.
PC basis functions K=9.](new_figure/w_mean_test2_es1_es2_m400_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"} ![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$. PC basis functions K=9.](new_figure/w_mean_test2_es1_es2_m800_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"} ![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$. PC basis functions K=9.](new_figure/w_mean_test2_es1_es2_m1600_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"}\ ![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$. PC basis functions K=9.](new_figure/q_mean_test2_es1_es2_m400_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"} ![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$. PC basis functions K=9.](new_figure/q_mean_test2_es1_es2_m800_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"} ![Results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"}. Top: water surface mean, Bottom: discharge mean: Left: ES1 vs. ES2 on mesh $nx=400$. Middle: ES1 vs. ES2 on mesh $nx=800$. Right: ES1 vs. ES2 on mesh $nx=1600$. PC basis functions K=9.](new_figure/q_mean_test2_es1_es2_m1600_K9.pdf "fig:"){#fig:ex2-mean width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/w_test2_es1_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/w_test2_es2_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/w_test2_ec_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"}\ ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/q_test2_es1_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. 
Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/q_test2_es2_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Middle: ES2, Right: EC. Mesh $nx=400$ with $K=9$ at time $T=0.8$.](new_figure/q_test2_ec_m400_K9.pdf "fig:"){#fig:ex2-m400 width=".32\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at time $T=0.8$.](new_figure/w_test2_es1_m1600_K9.pdf "fig:"){#fig:ex2-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at time $T=0.8$.](new_figure/w_test2_es2_m1600_K9.pdf "fig:"){#fig:ex2-m1600 width=".49\\textwidth"}\ ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at time $T=0.8$.](new_figure/q_test2_es1_m1600_K9.pdf "fig:"){#fig:ex2-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Top: water surface. Bottom: discharge. Left: ES1, Right: ES2. Mesh $nx=1600$ with $K=9$ at time $T=0.8$.](new_figure/q_test2_es2_m1600_K9.pdf "fig:"){#fig:ex2-m1600 width=".49\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Relative energy change: Left: EC vs. ES1 vs. ES2 on mesh $nx=400$. Right: ES1 vs. ES2 on mesh $nx=1600$.](new_figure/rel_energy_test2_m400_K9.pdf "fig:"){#fig:ex2-r_E width=".49\\textwidth"} ![Comparison of the results for [5.3](#ssec:wbperturb2){reference-type="ref" reference="ssec:wbperturb2"} using different schemes. Relative energy change: Left: EC vs. ES1 vs. ES2 on mesh $nx=400$. Right: ES1 vs. ES2 on mesh $nx=1600$.](new_figure/rel_energy_test2_es1_es2_m1600_K9.pdf "fig:"){#fig:ex2-r_E width=".49\\textwidth"} # Conclusion {#sec:conclusion} In this work we derived an entropy-entropy flux pair for the spatially one-dimensional hyperbolicity-preserving, positivity-preserving SG SWE system developed in [@doi:10.1137/20M1360736]. Such entropy-entropy flux pairs are the theoretical starting point for proposing entropy admissibility criteria to resolve non-uniqueness of weak solutions. Next, using the proposed entropy-entropy flux pair, we designed second-order energy conservative, and first- and second-order energy stable finite volume schemes for the SG SWE. The proposed schemes are also well-balanced. We provided several numerical experiments to illustrate performance of the methods. As a part of future research, we plan to extend such methods to models in two spatial dimensions, to explore alternative constructions of the diffusion operators, and to investigate other reconstruction approaches for the entropy variables. # Acknowledgement The work of Yekaterina Epshteyn and Akil Narayan was partially supported by NSF DMS-2207207. AN was partially supported by NSF DMS-1848508. 
[^1]: For non-periodic boundary conditions, the energy would increase/decrease depending on the boundary conditions and their corresponding impact on the boundary fluxes.

---
abstract: |
  We prove that, given a discrete group $G$, and $1 \leq p < \infty$, the algebra of $p$-convolution operators $CV_p(G)$ is weak\*-simple, in the sense of having no non-trivial weak\*-closed ideals, if and only if $G$ is an ICC group. This generalises the basic fact that $vN(G)$ is a factor if and only if $G$ is ICC. When $p=1$, $CV_p(G) = \ell^1(G)$. In this case we give a more detailed analysis of the weak\*-closed ideals, showing that they can be described in terms of the weak\*-closed ideals of $\ell^1(FC(G))$; when $FC(G)$ is finite, this leads to a classification of the weak\*-closed ideals of $\ell^1(G)$.
address: Jared T. White, School of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes MK7 6AA, United Kingdom
author:
- Jared T. White
date: September 2023
title: Weak\*-Simplicity of Convolution Algebras on Discrete Groups
---

*The Open University*

# Introduction

Given a locally compact group $G$ and $1\leq p <\infty$ the algebra of $p$-convolution operators $CV_p(G)$ is defined to be the set of all bounded linear operators on $L^p(G)$ that commute with right translations by elements of $G$. When $G$ is discrete, $CV_p(G)$ may be identified with the set of those $\alpha \in \ell^p(G)$ with the property that $\xi \mapsto \alpha*\xi$ defines a bounded linear operator on $\ell^p(G)$. When $G$ is discrete and $p=1$ we recover the group algebra $\ell^1(G)$.

The purpose of this article is to study the ideal structure of $CV_p(G)$ and $\ell^1(G)$, for a discrete group $G$. Since $\ell^1(G)$ and $CV_p(G)$ are dual Banach algebras (in the sense of having a Banach space predual that is compatible with the algebra structure), it is natural to study their weak\*-closed ideals. This continues a project of the author [@W3; @W5] in which the weak\*-closed left ideals of dual Banach algebras were studied, with particular focus on the measure algebra of a locally compact group, and the Banach algebra of operators on a reflexive Banach space.

Our main theorem in this article is the following. We say that a dual Banach algebra is *weak\*-simple* if it has no proper, non-zero, weak\*-closed ideals.

**Theorem 1**. *Let $G$ be a group, and let $1 \leq p < \infty$. Then $CV_p(G)$ is weak\*-simple if and only if $G$ is an ICC group.*

*Proof.* For $1<p<\infty$ apply Theorem [Theorem 11](#3.8){reference-type="ref" reference="3.8"}; for $p=1$ apply Theorem [Theorem 21](#iccthm){reference-type="ref" reference="iccthm"}. ◻

A von Neumann algebra is weak\*-simple if and only if it is a factor. As such, Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} can be seen as a non-trivial generalisation of one of the most basic facts about the group von Neumann algebra $vN(G)$ of a discrete group $G$: namely that it is a factor if and only if $G$ is an ICC group.

When $p=1$ and $G$ is discrete, we have $CV_p(G) = \ell^1(G) = M(G)$, so that Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} falls into the context of theorems about ideals of the measure algebra $M(G)$ of a locally compact group $G$. The structure of all norm-closed ideals of $M(G)$, for an infinite locally compact group $G$, is generally considered intractable, and even the set of maximal ideals can be very wild. For example, the maximal ideal space of $M(\mathbb T)$ contains uncountably many disjoint copies of the Stone--Čech compactification of $\mathbb Z$ [@OW14 Proposition 7]; see also [@OWG16].
However, the author has found that if one restricts attention to the weak\*-closed ideals, one can obtain some interesting descriptive results. In [@W3] the weak\*-closed left ideals of $M(G)$, for a compact group $G$, were classified, and this was later extended to coamenable compact quantum groups by Anderson-Sackaney [@AS]. In [@W5] a description was obtained of the weak\*-closed maximal left ideals of $M(G)$, for $G$ belonging to a certain class of groups including the connected nilpotent Lie groups.

In the present work we commence a study of the weak\*-closed two-sided ideals of $\ell^1(G)$. Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} with $p=1$ answers the most fundamental question in this line of enquiry. However, we are able to give a more detailed account of the weak\*-closed ideals of $\ell^1(G)$ than for $CV_p(G) \ (1<p<\infty)$. Our next result says that the weak\*-closed ideals of $\ell^1(G)$ are determined by what happens on the $FC$-centre $FC(G)$.

**Theorem 2**. *Let $\mathcal{T}$ be a transversal for $FC(G)$ in $G$. The weak\*-closed ideals of $\ell^1(G)$ are given by $$\bigoplus_{t \in \mathcal{T}} \delta_t*J,$$ where $J$ is a weak\*-closed ideal of $\ell^1(FC(G))$ that is invariant for the action of $G$ by conjugation.*

In the special case that $|FC(G)|<\infty$ this leads to the following classification theorem. Given $\Omega \subset \widehat{FC(G)}$ we write $$K(\Omega) : = \bigcap_{\pi \in \Omega} \ker \pi,$$ which is an ideal of $\ell^1(FC(G)).$

**Theorem 3**. *Suppose that $|FC(G)| < \infty$, and let $\mathcal{T}$ be a transversal for $FC(G)$ in $G$. The weak\*-closed ideals of $\ell^1(G)$ are given by $$\bigoplus_{t \in \mathcal{T}} \delta_t*K(\Omega),$$ where $\Omega \subset \widehat{FC(G)}$ is a union of orbits for the action of $G$ on $\widehat{FC(G)}$ induced by conjugation.*

Unfortunately, a classification theorem for the weak\*-closed ideals of $\ell^1(G)$ for a general group $G$ seems beyond reach, as one runs into very hard questions about spectral synthesis, as we shall explain below in Remark [Remark 17](#abrem){reference-type="ref" reference="abrem"}.

The paper is organised as follows. Section 2 establishes our notation and states some basic results that we shall use throughout the article; in Subsection 2.4 we discuss the proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} and explain why we cannot simply adapt the proof available for $vN(G)$. In Section 3 we prove that if $\mathcal{A}$ is any weak\*-closed algebra lying between $PM_p(G)$ and $CV_p(G)$, where $1<p<\infty$, then $\mathcal{A}$ is weak\*-simple if and only if $G$ is ICC; this establishes Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} for $1<p<\infty$. The rest of the paper is devoted to studying $\ell^1(G)$. In Section 4 we define a uniform version of the ICC condition on a group, and show that it is actually equivalent to the ordinary ICC condition. Section 5 opens with a discussion of the weak\*-closed ideals of the $\ell^1$-algebra of an abelian group. Next, we prove that $\ell^1(G)$ is weak\*-simple if and only if $G$ is ICC, which completes Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}. At the end of Section 5 we prove Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"} and Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}.
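For a concrete illustration of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}, consider $G = S_3 \times F_2$, where $F_2$ is the free group on two generators. Since $F_2$ is ICC, we have $FC(G) = S_3 \times \{e\} \cong S_3$, and the action of $G$ on $\widehat{S_3}$ induced by conjugation is given by inner automorphisms of $S_3$, which fix each of the three equivalence classes of irreducible representations. Every subset of $\widehat{S_3}$ is therefore a union of orbits, and Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"} yields exactly $2^3 = 8$ weak\*-closed ideals of $\ell^1(G)$, including $\{0\}$ and $\ell^1(G)$ itself.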
# Notation and Preliminary Remarks ## General Notation and Group Theory For the remainder of the article all groups are assumed to be discrete, unless otherwise stated. Given $g \in G$ we write $ccl_G(g)$ for the conjugacy class of $g$ in $G$, and write $C_G(g)$ for the centraliser of $g$ in $G$. Given $g, t \in G$ we write $g^t= tgt^{-1}$. Moreover, for $1\leq p < \infty$, $\alpha \in \ell^p(G)$, and $t \in G$, we define $\alpha^t =\delta_t*\alpha*\delta_{t^{-1}}$, so that $$\alpha^t(g) = \alpha(t^{-1}gt) \qquad (g \in G).$$ Similarly, we write $T^t = \lambda_p(t)T\lambda_p(t^{-1}) \ (t \in G, \ T \in CV_p(G))$ (see the notation in Subsection 2.3). We say that $G$ is an *ICC group* if every non-trivial conjugacy class is infinite. We define the *$FC$-centre* of $G$ to be $$FC(G) = \{ t \in G : |ccl_G(t)| < \infty \}.$$ It turns out that $FC(G)$ is always a normal subgroup of $G$. Let $$G_T = \{ g \in G : g^n =e \text{ for some } n \in \mathbb N\}$$ be the set of torsion elements of $G$. By [@T Theorem 1.6] $FC(G)_T$ is a subgroup of $G$ containing the derived subgroup $FC(G)'$. Let $H$ be a subgroup of $G$. By a *left (right) transversal* we mean a set of left (right) coset representatives for $H$ in $G$. When $H$ is normal we simply talk about a transversal for $H$ in $G$. We write $[G:H]$ for the index of $H$ in $G$. We say that $H$ is an *$FC$-central subgroup* of $G$ if $H \subset FC(G)$. Given a group $G$ we write $\widehat{G}$ for the unitary dual of $G$, that is, the set of equivalence classes of irreducible, continuous, unitary representations of $G$ on Hilbert spaces. Note that, for any $N \lhd G$, conjugation by elements of $G$ induces an action of $G$ on $\widehat{N}$ given by $$(t \cdot \pi)(n)= \pi(t^{-1}nt) \quad (t \in G, \ n \in N, \ \pi \in \widehat{N}).$$ Finally, we fix some set theory notation. Given sets $B \subset A$ we write $\operatorname{id}_A$ for the identity map on $A$, and $\mathbbm{1}_B \colon A \to \{ 0,1\}$ for the characteristic function of $B$. We write $B \subset \subset A$ to mean that $B$ is a finite subset of $A$. ## Dual Banach Algebras By a *dual Banach algebra* we mean a pair $(\mathcal{A},\mathcal{A}_*)$, where $\mathcal{A}$ is a Banach algebra, and $\mathcal{A}_*$ is a Banach space with $(\mathcal{A_*})^* = \mathcal{A}$, with the property that the multiplication on $\mathcal{A}$ is separately weak\*-continuous. Examples include the measure algebra $M(G)$ of a locally compact group $G$, with predual $C_0(G)$, and hence in particular $\ell^1(G)$ for a discrete group $G$. Von Neumann algebras can be characterised as those C\*-algebras that are dual Banach algebras. Let $E$ and $F$ be Banach spaces. Then $\mathcal B(E,F^*)$ may be identified with $(F \widehat{\otimes} E)^*$, where $\widehat{\otimes}$ denotes the projective tensor product of Banach spaces, with the duality given by $$\label{eq2.2} \langle \sum_{j=1}^\infty y_j \otimes x_j, T \rangle = \sum_{j=1}^\infty \langle y_j, Tx_j \rangle \qquad (T \in \mathcal B(E,F^*), \ \sum_{j=1}^\infty y_j \otimes x_j \in F \widehat{\otimes} E ).$$ When $E$ is a reflexive Banach space, identifying $E$ with $E^{**}$ means that $\mathcal B(E)$ can be identified with $(E^* \widehat{\otimes} E)^*$, and this makes $\mathcal B(E)$ into a dual Banach algebra. Let $X$ be a subspace of $E$ and $Y$ a subspace of $E^*$. 
We define $$X^\perp = \{ \varphi \in E^* : \langle x, \varphi \rangle = 0, \ x \in X \} \quad \text{and} \quad Y_\perp = \{ x \in E : \langle x, \varphi \rangle = 0, \ \varphi \in Y \}.$$ Note that $X^\perp$ is always weak\*-closed and that $(Y_\perp)^\perp = \overline{Y}^{w^*}$. It follows that a linear subspace of $E^*$ is weak\*-closed if and only if it has the form $X^\perp$, for some $X \leq E$.

A dual Banach algebra is *weak\*-simple* if it has no proper, non-zero, weak\*-closed ideals. For example, for any reflexive Banach space $E$ the dual Banach algebra $\mathcal B(E)$ is weak\*-simple since the ideal of finite rank operators is contained in every non-zero ideal and is weak\*-dense. A von Neumann algebra is weak\*-simple if and only if it is a factor. To the best of our knowledge, the term 'weak\*-simple' is new, although it is an obvious notion.

Let $E$ be a Banach left $\mathcal{A}$-module. Then $E^*$ becomes a Banach right $\mathcal{A}$-module via $$\langle x, \varphi \cdot a \rangle = \langle a \cdot x, \varphi \rangle \quad (x \in E, \ a \in \mathcal{A}, \ \varphi \in E^*),$$ and we say that $E^*$ is a *dual right $\mathcal{A}$-module*. We say that $E^*$ is *normal* if the map $\mathcal{A} \to E^*$ given by $a \mapsto \varphi \cdot a$ is weak\*-continuous for each $\varphi \in E^*$. Given a Banach right $\mathcal{A}$-module $F$ we define its annihilator to be $$\operatorname{ann}(F) : = \{ a \in \mathcal{A} : x \cdot a = 0 \ (x \in F)\}.$$ When $F$ is a normal, dual right $\mathcal{A}$-module, $\operatorname{ann}(F)$ is a weak\*-closed, two-sided ideal of $\mathcal{A}$. Given Banach left $\mathcal{A}$-modules $E_1$ and $E_2$ we define $\operatorname{Hom}^l_{\mathcal{A}}(E_1, E_2)$ to be the set of bounded left $\mathcal{A}$-module homomorphisms from $E_1$ to $E_2$. As we shall detail in Section 3, $\operatorname{Hom}^l_{\mathcal{A}}(E_1,E_2)$ is itself a Banach right $\mathcal{A}$-module.

## Convolution Operators on Groups

Banach algebras of $p$-convolution operators on groups have a long history of study going back at least to the work of Herz [@H71; @H73] and Figà-Talamanca [@FT65]. They have also been the subject of serious study by a number of mathematicians since; see e.g. [@Co; @De11; @DFM; @DG; @GT22]. In recent years, the study of Banach algebras of $p$-convolution operators on groups has seen renewed interest in light of the program, initiated primarily by Phillips [@P12; @P13], to study so-called $p$-operator algebras; see [@G21] for a survey of this field.

We now give the formal definitions of these algebras. Let $G$ be a group. Given $1\leq p < \infty$, we write $\lambda_p$ and $\rho_p$ for the left and right regular representations of $G$ on $\ell^p(G)$, respectively. We define the algebra of $p$-convolution operators on $G$ to be $$CV_p(G) = \rho_p(G)' = \{ T \in \mathcal B(\ell^p(G)) : T \rho_p(t) = \rho_p(t) T \ (t \in G) \}.$$ In fact $CV_p(G) = \lambda_p(G)''$, and when $p=1$ we have $CV_p(G) = \ell^1(G)$. For $1<p<\infty$ we define the algebra of $p$-pseudomeasures on $G$ to be $$PM_p(G) = \overline{{\rm span \,}}^{w^*} \lambda_p(G),$$ where $w^*$ refers to the weak\*-topology on $\mathcal B(\ell^p(G))$. We always have $\lambda_p(\ell^1(G)) \subset PM_p(G) \subset CV_p(G)$. When $G$ has the approximation property, we have $PM_p(G) = CV_p(G)$ for all $1 < p<\infty$; this observation seems to be folklore, but a proof is given in [@DS]. Whether one always has $PM_p(G) = CV_p(G)$ for all $p$ and all $G$ remains a difficult unsolved problem.
Our main theorem for $p \neq 1$, Theorem [Theorem 11](#3.8){reference-type="ref" reference="3.8"}, is stated for any weak\*-closed Banach algebra sitting between $PM_p(G)$ and $CV_p(G)$, so that our results do not depend on the answer to this open question.

For $1<p<\infty$ the algebra $CV_p(G)$ is a dual Banach algebra, whose predual is denoted by $\overline{A_p}(G)$, a concrete description of which was given by Cowling [@Co]. Similarly, $PM_p(G)$ is a dual Banach algebra with predual $A_p(G)$, the *Figà-Talamanca--Herz algebra*. See [@De11] for background on $A_p(G)$. The weak\*-topology on $CV_p(G)$ and $PM_p(G)$ corresponding to these preduals coincides with that given by considering them as subalgebras of $\mathcal B(\ell^p(G))$ with predual $\ell^{p'}(G) \widehat{\otimes} \ \ell^{p}(G)$. As such, by using [\[eq2.2\]](#eq2.2){reference-type="eqref" reference="eq2.2"}, we shall not need the precise definitions of $A_p(G)$ or $\overline{A_p}(G)$ in this article. We write $A(G) = A_2(G)$ for the Fourier algebra of $G$. We shall make frequent use of the basic properties of $A(G)$ for $G$ a compact abelian group, for which [@Ru] is a good reference.

In the next proposition we summarise some basic properties of $p$-convolution operators on discrete groups, for $p$ in the reflexive range, that we shall use throughout Section 3. We have been unable to find a reference, but the proof is a direct generalisation of the case $p=2$, which is given in [@SZ §4.25].

**Proposition 4**. *Let $G$ be a group, and let $1<p< \infty$.*

1. *Given $T \in CV_p(G)$ we have $$T\xi = \alpha*\xi \quad (\xi \in \ell^p(G)),$$ where $\alpha = T\delta_e$.*

2. *Moreover, we have $$\langle \delta_t, T \delta_s \rangle = \alpha(ts^{-1}) \quad (s,t \in G).$$*

3. *Given $S,T \in CV_p(G)$, we have $S=T$ if and only if $S\delta_e = T\delta_e$; that is, $\delta_e$ is a separating vector for the action of $CV_p(G)$ on $\ell^p(G)$.*

Using this proposition, we can characterise the centre of $CV_p(G)$.

**Lemma 5**. *Let $G$ be a group. Then $$Z(CV_p(G)) = \{ T \in CV_p(G) : T\delta_e \text{ is constant on conjugacy classes}\},$$ and in particular $CV_p(G)$ has trivial centre if and only if $G$ is ICC.*

*Proof.* Let $T \in Z(CV_p(G))$. Then given $t \in G$ we have $\lambda_p(t)T\lambda_p(t^{-1}) = T$, so that $$T\delta_e = \lambda_p(t)T\lambda_p(t^{-1}) \delta_e = \delta_t*(T\delta_e)*\delta_{t^{-1}}.$$ Hence $T\delta_e$ is constant on conjugacy classes.

Now suppose that $T \in CV_p(G)$ and that $\alpha :=T\delta_e$ is constant on conjugacy classes. Let $S \in CV_p(G)$, and let $\beta = S\delta_e$. Given $g \in G$ we have $$\begin{aligned} \langle \delta_g, TS\delta_e \rangle &= (\alpha*\beta)(g) = \sum_{s \in G} \alpha(gs^{-1})\beta(s) = \sum_{s \in G} \alpha(s^{-1}(gs^{-1})s)\beta(s) \\ &= \sum_{s \in G} \beta(s) \alpha(s^{-1}g) = (\beta*\alpha)(g) = \langle \delta_g, ST \delta_e \rangle. \end{aligned}$$ It follows that $TS=ST$, and so $T$ is central. ◻
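For instance, if $C$ is a finite conjugacy class of $G$ with $C \neq \{e\}$, then $T = \sum_{g \in C} \lambda_p(g)$ belongs to $CV_p(G)$ and satisfies $T\delta_e = \mathbbm{1}_C$, which is constant on conjugacy classes; by Lemma [Lemma 5](#0.3){reference-type="ref" reference="0.3"}, $T$ is a central element that is not a scalar multiple of the identity. Thus the centre of $CV_p(G)$ is non-trivial whenever $G$ fails to be ICC.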
Let $H$ be a subgroup of $G$. Then we can identify $CV_p(H)$ with a subalgebra of $CV_p(G)$ as follows. First, fix a right transversal $\mathcal{T}$ for $H$ in $G$, and, for $\xi \in \ell^p(G)$, define $\xi_t \in \ell^p(H)$ by $\xi_t(u) = \xi(ut) \ (u \in H, \ t \in \mathcal{T}).$ Then $S \in CV_p(H)$ can be extended to an operator on $\ell^p(G)$ via $$\label{eq2.1} S\xi = \sum_{t \in \mathcal{T}} (S\xi_t)*\delta_t \qquad (S \in CV_p(H), \ \xi \in \ell^p(G)),$$ and it can be checked that this extension belongs to $CV_p(G)$. The proof of this result is routine in the discrete case; see [@De11 Chapter 7, Theorem 13] for a generalisation that holds for locally compact $G$.

Furthermore, given an operator $T \in CV_p(G)$ we may define its *restriction to $H$*, written $T_H \in CV_p(H)$, by $$T_H \xi = P_H T \xi^G \qquad ( \xi \in \ell^p(H)),$$ where $\xi^G$ denotes the obvious extension of $\xi$ to $G$ by adding zeros, and $P_H \colon \ell^p(G) \to \ell^p(H)$ is the canonical projection. We can use this to give a right coset decomposition for $T$, by setting $$T_t = (T \lambda_p(t^{-1}) )_H \qquad (t \in \mathcal{T}).$$ Writing $\alpha = T\delta_e$, an easy calculation then shows that $$T_t\delta_e = \alpha_t \qquad (t \in \mathcal{T}).$$ Unfortunately, $\sum_{t \in \mathcal{T}} T_t\lambda_p(t)$ does not usually converge to $T$ in the weak\*-topology, but we do have $\alpha = \sum_{t \in \mathcal{T}} \alpha_t*\delta_t$ convergent in $\ell^p$-norm.

## Remarks on the Proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} {#remarks-on-the-proof-of-theorem-thm1}

We shall break the proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} into two parts: the case $1<p<\infty$ is given by Theorem [Theorem 11](#3.8){reference-type="ref" reference="3.8"}, whereas the case $p=1$ is given by Theorem [Theorem 21](#iccthm){reference-type="ref" reference="iccthm"}. Theorem [Theorem 11](#3.8){reference-type="ref" reference="3.8"} actually states the analogous result for any weak\*-closed subalgebra $\mathcal{A}$ of $CV_p(G)$ containing $PM_p(G)$.

When $p=2$, we have $CV_p(G) = vN(G)$. For von Neumann algebras, weak\*-simplicity is equivalent to being a factor. Indeed, by standard results a non-trivial weak\*-closed ideal of a von Neumann algebra is generated by a non-trivial central idempotent; moreover, whenever the centre of the von Neumann algebra is non-trivial it contains a non-trivial idempotent, and this generates a proper, non-zero weak\*-closed ideal. Since (as is well known) $vN(G)$ is a factor if and only if $G$ is ICC, Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} follows in this special case.

More generally, Lemma [Lemma 5](#0.3){reference-type="ref" reference="0.3"} shows that $CV_p(G)$, for $1 \leq p < \infty$, has trivial centre if and only if $G$ is ICC; however, for $p \neq 2$ the connection between weak\*-closed ideals and the centre of the algebra is less clear. Moreover, $CV_p(G)$ will not contain as many idempotents as $vN(G)$ when $p \neq 2$, so we should not expect every weak\*-closed ideal to be generated by an idempotent. As an example, $\ell^1(\mathbb Z)$ contains many weak\*-closed ideals (see Lemma [Lemma 16](#yc){reference-type="ref" reference="yc"} and Remark [Remark 17](#abrem){reference-type="ref" reference="abrem"}), but contains no non-trivial idempotents (by e.g. [@Ru §3.2.1]).

In the following proposition we observe that, in contrast to von Neumann algebras, a general dual Banach algebra can have a non-zero weak\*-closed ideal that intersects the centre only at $\{0\}$. By a *weight* on a group $G$ we mean a function $\omega \colon G \to [1, \infty)$ such that $\omega(e) = 1$ and $\omega(st) \leq \omega(s)\omega(t) \ (s,t \in G)$. Note that $\ell^1(G, \omega)$ is then a dual Banach algebra with predual $c_0(G, 1/\omega)$ (see e.g. [@D06 Proposition 5.1]).

**Proposition 6**. *Let $G$ be a finitely-generated ICC group, fix a finite generating set, and denote the corresponding word length of $t \in G$ by $|t|$.
Let $\omega \colon G \to [1, \infty)$ be given by $\omega(t) = c^{|t|}$, where $c > 1$ is a fixed constant. Then the augmentation ideal $$\ell_0^1(G, \omega) := \left\{ f \in \ell^1(G, \omega) : \sum_{t \in G} f(t) = 0 \right\}$$ is weak\*-closed, but intersects the centre trivially. In particular $Z(\ell^1(G, \omega)) = \mathbb C\delta_e$, but $\ell^1(G, \omega)$ is not weak\*-simple.*

*Proof.* Due to the choice of $\omega$ the constant functions $\mathbb C1$ are contained in $c_0(G, 1/\omega)$, and $\ell_0^1(G, \omega) = (\mathbb C1)^\perp$. As such $\ell_0^1(G, \omega)$ is weak\*-closed, and of course it is an ideal. Any element of the centre of $\ell^1(G,\omega)$ must be constant on conjugacy classes, and since it also belongs to $c_0(G)$, must be a multiple of $\delta_e$. Therefore $Z(\ell^1(G, \omega)) = \mathbb C\delta_e$, and clearly $\ell_0^1(G, \omega) \cap \mathbb C\delta_e = \{ 0 \}$. ◻

In Proposition [Proposition 7](#3.1){reference-type="ref" reference="3.1"}(i) we shall show that, for $p$ in the reflexive range, every non-zero weak\*-closed ideal of $CV_p(G)$ does contain a non-zero central element; we have not been able to see how to generalise this to $\ell^1(G)$, and our proof in that case takes a different path. However, for $\ell^1(G)$ there are other nice tools available. In particular ${\rm span \,}\{ \delta_t : t \in G \}$ is norm dense, whereas for $CV_p(G)$, where $p \neq 1,2$, we don't know in general whether ${\rm span \,}\{ \lambda_p(t) : t \in G \}$ is even weak\*-dense.

# Weak\*-Simplicity of $CV_p(G)$ and $PM_p(G)$ for $1<p<\infty$

Part (ii) of the following proposition is needed for our proof of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"}. Part (i) may be of independent interest. It can be seen as an analogue of the following property of von Neumann algebras: every non-zero weak\*-closed ideal contains a non-zero central projection.

**Proposition 7**. *Let $G$ be a group, let $1<p<\infty$, and let $\mathcal{A}$ be a weak\*-closed subalgebra of $\mathcal B(\ell^p(G))$ with $$PM_p(G) \subset \mathcal{A} \subset CV_p(G).$$ Then*

1. *every non-zero weak\*-closed ideal of $\mathcal{A}$ has non-zero intersection with $Z(\mathcal{A})$;*

2. *if $G$ is ICC then $\mathcal{A}$ is weak\*-simple.*

*Proof.* We shall prove (i) and (ii) together. Let $\{ 0 \} \neq I \lhd \mathcal{A}$ be weak\*-closed. Let $T \in I \setminus \{ 0 \}$. By translating and scaling $T$ we may assume that $\langle \delta_e, T \delta_e \rangle = 1$. Let $$X = \{ S \in I : \langle \delta_e, S\delta_e \rangle = 1, \ \|S \| \leq \|T \| \},$$ which is a non-empty, weak\*-compact, convex subset of $\mathcal B(\ell^p(G))$. Let $\psi \colon \mathcal B(\ell^p(G)) \to \ell^p(G)$ be given by $$\psi(S) = S \delta_e \quad (S \in \mathcal B(\ell^p(G))).$$ Observe that $\psi$ is weak\*-weakly continuous: indeed, if $\eta \in \ell^{p'}(G)$ then, for all $S \in \mathcal B(\ell^p(G))$, we have $\langle \psi(S), \eta \rangle = \langle S\delta_e, \eta \rangle = \langle \eta \otimes \delta_e, S \rangle.$ As such $Y : = \psi(X)$ is a weakly-compact convex subset of $\ell^p(G)$, as it is the continuous linear image of a weak\*-compact convex set.

Let $S \in \mathcal{A}$, and let $\alpha = S\delta_e = \psi(S).$ The action of $G$ on $Y$ by conjugation is well-defined: indeed, given $t, g \in G$ we have $$\langle \delta_t, S^g \delta_e \rangle = \langle \delta_{g^{-1}t}, S\delta_{g^{-1}} \rangle = \langle \delta_{g^{-1}tg}, S\delta_e \rangle = \alpha^g(t),$$ so that $\psi(S)^g = \psi(S^g)$; moreover, if $S \in X$ then $S^g \in X$ (since $I$ is a two-sided ideal, $\|S^g\| = \|S\|$, and $\alpha^g(e) = \alpha(e) = 1$), so that indeed $\psi(S)^g \in Y$ for $S \in X$.
Moreover, the conjugation action of $G$ on $\ell^p(G)$ is isometric. By the previous paragraph, we may apply the Ryll-Nardzewski Fixed Point Theorem to the conjugation action of $G$ on $Y$ to find $y_0 \in Y$ such that $y_0^g = y_0$ for all $g \in G$. Say $y_0 = \psi(T_0)$, for $T_0 \in X$. By Lemma [Lemma 5](#0.3){reference-type="ref" reference="0.3"}, since $y_0 = T_0 \delta_e$ is constant on conjugacy classes, we must have $T_0 \in Z(\mathcal{A})$, and $T_0$ belongs to $I \setminus \{0 \}$ since it is in $X$, and this proves (i).

If $G$ is ICC, then the fact that $y_0 \in \ell^p(G)$ is non-zero and constant on conjugacy classes forces $y_0 = \delta_e$, and hence $T_0 = \operatorname{id}_{\ell^p(G)}$ and $I = \mathcal{A}$. As $I$ was arbitrary, $\mathcal{A}$ must be weak\*-simple, and we have proved (ii). ◻

We shall spend the remainder of Section 3 constructing a non-trivial weak\*-closed ideal of $CV_p(G)$ when $FC(G) \neq \{ e\}$. Note that if $PM_p(G) = CV_p(G)$ this task becomes much easier, as we can adapt the approach that we shall use later for $\ell^1(G)$ (see Case 2 in the proof of Theorem [Theorem 21](#iccthm){reference-type="ref" reference="iccthm"}). The difficulty of the present situation comes from the fact that, if we don't know that $\lambda_p(G)$ has weak\*-dense linear span in $CV_p(G)$, then we cannot check that a weak\*-closed subset $I \subset CV_p(G)$ is an ideal just by checking that $\lambda_p(t)T, T \lambda_p(t) \in I$ for all $t \in G$ and $T \in I$. Instead, our strategy is to take the annihilator of a well-chosen normal, dual right $CV_p(G)$-module, as this will automatically be a weak\*-closed ideal. A careful choice is required to ensure that the ideal is non-zero and proper.

**Lemma 8**. *Let $(\mathcal{A}, \mathcal{A}_*)$ be a dual Banach algebra, let $\mathcal{S}$ be a closed subalgebra of $\mathcal{A}$, and let $E$ be a Banach left $\mathcal{S}$-module. Then $\operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$ is a normal, dual, right $\mathcal{A}$-module, with the action given by $$\label{eq4} (\Phi \cdot a)(x) = \Phi(x)a \quad (\Phi \in \operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A}), \ a \in \mathcal{A}, \ x \in E).$$*

*Proof.* Consider $\mathcal{A}_* \widehat{\otimes} E$, whose dual space may be identified with $\mathcal B(E, \mathcal{A})$ via [\[eq2.2\]](#eq2.2){reference-type="eqref" reference="eq2.2"}, and define $$Z = \overline{{\rm span \,}}\{ (u \cdot b) \otimes x - u\otimes (b \cdot x) : u \in \mathcal{A}_*, \ x \in E, \ b \in \mathcal{S} \}.$$ Then we have $$( \mathcal{A}_* \widehat{\otimes} E / Z )^* \cong Z^\perp,$$ and a routine calculation shows that, as a subset of $\mathcal B(E, \mathcal{A})$, $$Z^\perp = \operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A}).$$ As such $\operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$ is a dual space.

Now, $\mathcal{A}$ acts on $\mathcal{A}_* \widehat{\otimes} E$ on the left by $$a \cdot \left( \sum_{j=1}^\infty u_j \otimes x_j \right) = \sum_{j = 1}^\infty (a \cdot u_j)\otimes x_j \quad (u_j \in \mathcal{A}_*, \ x_j \in E),$$ and $Z$ is a closed submodule for this action, so that $\mathcal{A}_* \widehat{\otimes} E / Z$ becomes a Banach left $\mathcal{A}$-module. A routine calculation shows that the dual action is exactly the action we defined on $\operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$ by [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"}.

Finally, we show that $\operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$ is normal.
Let $\Phi \in \operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$, and define $L_\Phi \colon \mathcal{A} \to \operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})$ by $L_\Phi \colon a \mapsto \Phi \cdot a$. We want to show that $L_\Phi$ is weak\*-continuous, so take arbitrary $\tau \in \operatorname{Hom}^l_{\mathcal{S}}(E, \mathcal{A})_*$ and consider $\tau \circ L_\Phi$. We can write $\tau$ as $\sum_{j=1}^\infty u_j \otimes x_j +Z$, where $u_j \in \mathcal{A}_*$, $x_j \in E$, and $\sum_{j=1}^\infty \|u_j\| \|x_j \| < \infty.$ Given $a \in \mathcal{A}$, we have $$\begin{aligned} (\tau \circ L_\Phi)(a) &= \langle \sum_{j=1}^\infty u_j \otimes x_j, \Phi \cdot a \rangle = \sum_{j=1}^\infty \langle u_j, (\Phi \cdot a)(x_j) \rangle \\ &= \sum_{j=1}^\infty \langle u_j, \Phi(x_j)a \rangle = \sum_{j=1}^\infty \langle u_j \cdot \Phi(x_j), a \rangle. \end{aligned}$$ Note that $\sum_{j = 1}^\infty \| u_j \cdot \Phi(x_j) \| \leq \| \Phi \| \sum_{j=1}^\infty \|u_j\| \|x_j \| < \infty$, so that $\sum_{j = 1}^\infty u_j \cdot \Phi(x_j)$ converges to an element of $\mathcal{A}_*$ which we denote $u_0$. The above calculation shows that $(\tau \circ L_\Phi)(a) = \langle u_0, a \rangle$. Since $\tau$ was arbitrary, this proves that $L_\Phi$ is weak\*-continuous. ◻ **Lemma 9**. *Let $G$ be a group, and let $N$ be a normal, abelian subgroup of $G$. Let $K$ be a Banach left $CV_p(N)$-module, and let $J$ be an ideal of $CV_p(N)$ that annihilates $K$, and that is invariant for the conjugation action of $G$ on $CV_p(N)$. Write $M = \operatorname{Hom}^l_{CV_p(N)}(K, CV_p(G))$, and define $$I := \operatorname{ann}(M) = \{ T \in CV_p(G) : \Phi \cdot T = 0, \ \Phi \in \operatorname{Hom}^l_{CV_p(N)}(K, CV_p(G)) \}.$$ Then $I$ is a weak\*-closed ideal of $CV_p(G)$ containing $J$.* *Proof.* Since (by Lemma [Lemma 8](#3.4){reference-type="ref" reference="3.4"}) $I$ is the annihilator of a normal, dual, right $CV_p(G)$-module, it is easily seen to be a weak\*-closed two-sided ideal. It remains to show that it contains $J$. Let $\Phi \in M$. We shall show that $\Phi \cdot S = 0$ for every $S \in J$. Define $\alpha \colon K \to \ell^p(G)$ by $$\alpha(v)(g) = \langle \delta_g, \Phi(v) \delta_e \rangle \quad (v \in K, \ g \in G).$$ Fix a transversal $\mathcal{T}$ for $N$ in $G$, and for each $t \in \mathcal{T}$ define functions $\Phi_t \colon K \to CV_p(N)$ and $\alpha_t \colon K \to \ell^p(G)$ by $$\Phi_t(v) = [\Phi(v)]_t \qquad ( t \in \mathcal{T}, \ v \in K),$$ and $$\alpha_t(v)(g) = \alpha(v)(gt) \ (v \in K, \ g \in N), \quad \text{ and } \quad \alpha_t(v)(g) = 0 \ (v \in K, \ g \in G \setminus N),$$ so that $\Phi_t(v) \delta_e = \alpha_t(v)$. We shall show that each map $\Phi_t$ is a left $CV_p(N)$-module homomorphism. Indeed, given $S \in CV_p(N)$ we have $$\begin{aligned} S\Phi(v)\delta_e = S \sum_{t \in \mathcal{T}} \alpha(v)|_{Nt} = S \sum_{t \in \mathcal{T}} \alpha_t(v)*\delta_t = \sum_{t \in \mathcal{T}} (S\alpha_t(v))*\delta_t \quad \text{(by \eqref{eq2.1})}. \end{aligned}$$ Also $$\begin{aligned} S \Phi(v) \delta_e &= \Phi(S \cdot v) \delta_e = \sum_{t \in \mathcal{T}} \alpha(S \cdot v)|_{Nt} = \sum_{t \in \mathcal{T}} \alpha_t(S \cdot v)*\delta_t. \end{aligned}$$ By comparing these two expressions, we see that we must have $S\alpha_t(v) = \alpha_t(S \cdot v)$, and hence $S\Phi_t(v) = \Phi_t(S \cdot v)$, for each $v \in K$ and $t \in \mathcal{T}$, since $\delta_e$ is a separating vector.
Now, fix $S \in CV_p(N)$, and let $\beta(n) = \langle \delta_n, S\delta_e \rangle \ (n \in N)$, so that $$S \xi = \beta*\xi \quad (\xi \in \ell^p(N)).$$ We claim that $$\label{eq3.1} \sum_{t \in \mathcal{T}} \beta^t*\alpha_t(v) *\delta_t = \alpha(v)*\beta \quad (v \in K).$$ Note that the left hand side converges because the terms $\beta^t*\alpha_t(v) *\delta_t$ are disjointly supported, and $$\sum_{t \in \mathcal{T}} \| \beta^t*\alpha_t(v) *\delta_t \|_p^p = \sum_{t \in \mathcal{T}} \| S^t(\alpha_t(v) *\delta_t )\|_p^p \leq \| S \|^p \sum_{t \in \mathcal{T}} \| \alpha_t(v) \|_p^p = \|S\|^p \|\alpha(v) \|_p^p < \infty.$$ We suppress $v$ for the time being. Let $n \in N, \ s \in \mathcal{T}$, and consider $g = ns \in Ns$. We have $$\begin{aligned} \left( \sum_{t \in \mathcal{T}} \beta^t*\alpha_t *\delta_t \right)(g) &= \left( \beta^s*\alpha_s *\delta_s \right)(ns) = \sum_{u \in N} \beta^s(u)(\alpha_s*\delta_s)(u^{-1}ns) \\ &= \sum_{u \in N} \beta^s(u)\alpha(u^{-1}ns). \end{aligned}$$ We also have $$\begin{aligned} (\alpha*\beta)(g) &= \sum_{u \in N} \alpha(nsu^{-1})\beta(u) = \sum_{u \in N} \beta(s^{-1}us)\alpha(ns(s^{-1}us)^{-1}) \\ &= \sum_{u \in N} \beta(s^{-1}us)\alpha(nss^{-1}u^{-1}s) = \sum_{u \in N} \beta(s^{-1}us)\alpha(nu^{-1}s) \\ &= \sum_{u \in N} \beta^s(u)\alpha(u^{-1}ns), \end{aligned}$$ where the last equality uses that $u^{-1}$ and $n$ commute (both lie in the abelian subgroup $N$); this proves the claim. Now suppose that $S \in J$. We shall show that $\Phi \cdot S = 0$. Let $v \in K$. Then $$\begin{aligned} (\Phi \cdot S)(v)\delta_e &= \Phi(v)S\delta_e = \alpha(v)*\beta = \sum_{t \in \mathcal{T}}\beta^t*\alpha_t(v)*\delta_t \quad (\text{by \eqref{eq3.1}}) \\ &= \sum_{t \in \mathcal{T}} S^t\Phi_t(v) \delta_t \quad (\text{by \eqref{eq2.1}}) \\ &= \sum_{t \in \mathcal{T}} \Phi_t(S^t \cdot v)\lambda(t) \delta_e . \end{aligned}$$ Since $J$ is $G$-conjugation-invariant, $S^t \in J$ for each $t \in \mathcal{T}$, and so each $S^t \cdot v = 0$, and the above sum is $0$. Since $\delta_e$ is a separating vector, this forces $\Phi \cdot S = 0$, as required. ◻ **Lemma 10**. *Let $G$ be a group, and let $N$ be a normal $FC$-central subgroup of $G$, isomorphic to $\mathbb Z^n$, for some $n \in \mathbb N$. Then $CV_p(G)$ has a proper, weak\*-closed ideal $I$ such that $I \cap \lambda_p(\ell^1(G)) \neq \{ 0 \}$.* *Proof.* We identify $\widehat{N}$ with $\mathbb T^n$. Let $U_0 = \{ \boldsymbol{z} \in \mathbb T^n : |z_i - 1|<1, \ i = 1, \ldots, n \},$ which is an open subset of $\mathbb T^n$ containing the identity. Let $U = \bigcap_{t \in G} U_0^t$. Since $N$ is finitely generated and $FC$-central, there are only finitely many distinct automorphisms of $N$ induced by conjugation by elements of $G$, and as such the intersection defining $U$ is actually a finite intersection. It follows that $U$ is an open set, and it is non-empty as $e \in U$. Now, let $E = \overline{U}$. Then $E$ is a compact neighbourhood of the identity in $\mathbb T^n$ with $0<m(E)<1$, which is invariant for the action of $G$ induced by conjugation. Define $$J = \{ T \in CV_p(N) : \widehat{T}|_E = 0\}, \qquad K = \{ T \in CV_p(N) : \widehat{T}|_{\widehat{N} \setminus E} = 0\}.$$ Then $J$ and $K$ are ideals in $CV_p(N)$, with $J = \operatorname{ann}(K)$. Because $E$ is $G$-conjugation-invariant, so is $J$.
Since the Fourier algebra is regular, there exists $u \in A(\widehat{N}) \setminus \{ 0 \}$ such that $u|_E = 0$, and taking the inverse Fourier transform of $u$ gives a non-zero element $f$ of $\ell^1(G)$ such that $\lambda_p(f)$ belongs to $J$. Similarly $K \neq \{ 0\}.$ Let $$I = \operatorname{ann}\operatorname{Hom}^l_{CV_p(N)}(K, CV_p(G)).$$ By Lemma [Lemma 9](#3.6){reference-type="ref" reference="3.6"} $I$ is weak\*-closed, and contains $J$, so that $I \cap \lambda_p(\ell^1(G)) \neq \{ 0 \}$. Finally, we must show that $I$ is proper. Note that composing the inclusion maps $K \to CV_p(N) \to CV_p(G)$ gives a non-zero left $CV_p(N)$-module homomorphism $K \to CV_p(G)$, so that $\operatorname{Hom}^l_{CV_p(N)}(K, CV_p(G))$ is non-zero. As such $\operatorname{id}_{\ell^p(G)}$ is an element of $CV_p(G)$ not belonging to $I$. This completes the proof. ◻ **Theorem 11**. *Let $G$ be a group, let $1<p<\infty$, and let $\mathcal{A}$ be a weak\*-closed subalgebra of $\mathcal B(\ell^p(G))$ with $$PM_p(G) \subset \mathcal{A} \subset CV_p(G).$$ Then $\mathcal{A}$ is weak\*-simple if and only if $G$ is ICC.* *Proof.* If $G$ is ICC then $\mathcal{A}$ is weak\*-simple by Proposition [Proposition 7](#3.1){reference-type="ref" reference="3.1"}(ii). We shall prove the converse. Suppose that $G$ is not ICC. Then $FC(G) \neq \{e\}$ and we shall use the structure of $FC(G)$ to build a weak\*-closed ideal of $\mathcal{A}$. There are two possibilities concerning $FC(G)_T$. [Case 1: $FC(G)_T \neq \{e \}.$]{.ul} In this case, $G$ contains a non-trivial finite normal subgroup $N$ by Dicman's Lemma [@T Lemma 1.3]. We suppose that $G \neq N$, since otherwise the result is trivial. Let $P = \lambda_p(\frac{1}{|N|}\mathbbm{1}_N)$. Then $P$ is a central idempotent in $\mathcal{A}$ and $\mathcal{A}P$ is a weak\*-closed ideal. Note that $\mathcal{A}P \neq \{0\}$. Moreover, $\mathcal{A}P \neq \mathcal{A}$ because $\operatorname{id}_{\ell^p(G)} \notin \mathcal{A}P$: indeed, $\operatorname{id}_{\ell^p(G)} - P$ annihilates $\mathcal{A}P$ but not $\operatorname{id}_{\ell^p(G)}$. [Case 2: $FC(G)_T = \{e \}.$]{.ul} By [@T Theorem 1.6] $FC(G)_T \supset FC(G)'$, so that $FC(G)'$ is also trivial, and $FC(G)$ is abelian. Pick $x \in FC(G) \setminus \{ e \}$, and let $N = \langle ccl_G(x) \rangle$, which is a normal subgroup of $G$. Since $ccl_G(x)$ is finite, $N$ is a finitely-generated, torsion-free, abelian group, and hence $N \cong \mathbb Z^n$, for some $n \in \mathbb N$. As such we may apply Lemma [Lemma 10](#3.7){reference-type="ref" reference="3.7"} to get a weak\*-closed ideal $I$ of $CV_p(G)$ with $I \cap \lambda_p(\ell^1(G)) \neq \{ 0 \}$, and hence also $I \cap \mathcal{A} \neq \{ 0 \}$. Moreover, $\operatorname{id}_{\ell^p(G)} \notin I$ implies that $I \cap \mathcal{A}$ is a proper ideal of $\mathcal{A}$, and clearly it is weak\*-closed. Hence $\mathcal{A}$ is not weak\*-simple. ◻ # A Uniform ICC Condition In this short section we shall prove a result (Proposition [Proposition 15](#icc3){reference-type="ref" reference="icc3"}) about conjugation in infinite conjugacy classes that we shall require for the proofs of our main results in Section 5. We find it interesting to note that, as a special case, Proposition [Proposition 15](#icc3){reference-type="ref" reference="icc3"} implies that the ICC condition for groups is equivalent to the following uniform version of the condition. **Definition 12**. Let $G$ be an infinite group.
We say that $G$ is *uniformly ICC* if for every conjugacy class $Y \neq \{ e \}$, and every pair of finite sets $E, F \subset Y$ there exists $g \in G$ such that $gEg^{-1} \cap F = \emptyset$. **Lemma 13**. *Let $G$ be a group, and let $g,h,x \in G$. Then we have $$\{ t \in G: (x^h)^t = x^g\} = gh^{-1}C_G(x)^h.$$* *Proof.* We have $(x^h)^t = x^g$ if and only if $(g^{-1}th)x(h^{-1}t^{-1}g) =x$. As such $t$ conjugates $x^h$ to $x^g$ if and only if $g^{-1}th \in C_G(x)$, which is equivalent to $t \in gC_G(x)h^{-1} = gh^{-1}C_G(x)^h$, and the lemma follows. ◻ **Lemma 14**. *Let $G$ be a group, and let $H$ be a subgroup of $G$. Suppose that, for some $m,n \in \mathbb N$, there exist $t_1, \ldots, t_n \in G$, and $g_1, \ldots, g_m \in G$ such that $$\bigcup_{i = 1}^n \bigcup_{j=1}^m t_iH^{g_j} =G.$$ Then $[G:H] <\infty$.* *Proof.* We shall show that $G$ can be written as the union of finitely many left cosets of the subgroups $H^{g_1}, \ldots, H^{g_{m-1}}$. By repeating this argument $m-1$ times we can then infer that $G$ is the union of finitely many left cosets of $H^{g_1}$, and hence that $[G:H] = [G:H^{g_1}]$ is finite. If $t_1, \ldots, t_n$ is a left transversal for $H^{g_m}$ in $G$ then we are done, so suppose that there is some coset $sH^{g_m}$ which is not among $t_1H^{g_m}, \ldots, t_nH^{g_m}$. Then we must have $$sH^{g_m} \subset \bigcup_{i = 1}^n \bigcup_{j=1}^{m-1} t_iH^{g_j},$$ which implies that, for each $k = 1, \ldots, n$, we have $$t_kH^{g_m} = (t_ks^{-1})sH^{g_m} \subset \bigcup_{i = 1}^n \bigcup_{j=1}^{m-1} t_ks^{-1}t_iH^{g_j}$$ and it follows that $G$ is equal to the union of the original cosets of $H^{g_1}, \ldots, H^{g_{m-1}}$ together with these new cosets, that is $$G = \bigcup_{i = 1}^n \bigcup_{j=1}^{m-1} t_iH^{g_j} \cup \bigcup_{i = 1}^n \bigcup_{k = 1}^n\bigcup_{j=1}^{m-1} t_ks^{-1}t_iH^{g_j}.$$ This completes the proof. ◻ The following proposition proves, in particular, that being ICC is equivalent to being uniformly ICC. In fact, an even stronger statement is obtained. **Proposition 15**. *Let $G$ be a group with an infinite conjugacy class $Y$, and let $W \subset FC(G)$ be finite. Then for every pair of finite subsets $E, F \subset Y$ there exists $g \in C_G(W)$ such that $gEg^{-1} \cap F = \emptyset$.* *Proof.* Define $$D = \{ g \in G : gEg^{-1} \cap F \neq \emptyset \}.$$ We want to find $g \in C_G(W) \setminus D$. If we enumerate the elements of $E$ and $F$ as $E = \{x_1, x_2, \ldots, x_n \}$ and $F = \{ y_1, y_2, \ldots, y_m \}$ then we see that $$D = \bigcup_{j=1}^m \bigcup_{i = 1}^n \{ g \in G: gx_ig^{-1} = y_j\}.$$ By Lemma [Lemma 13](#icc1){reference-type="ref" reference="icc1"} $D$ is a finite union of left cosets of conjugates of $C_G(x_1)$. Assume towards a contradiction that $D \supset C_G(W)$. Since $W \subset FC(G)$ we must have $[G:C_G(w)] < \infty$ for each $w \in W$, and hence $C_G(W) = \bigcap_{w \in W} C_G(w)$ has finite index in $G$. Letting $t_1, \ldots, t_k$ be a left transversal for $C_G(W)$ in $G$, we must have $t_1D \cup \cdots \cup t_k D = G$. Since $D$ is a finite union of left cosets of conjugates of $C_G(x_1)$, we can now infer that $G$ itself must be a finite union of left cosets of conjugates of $C_G(x_1)$. By Lemma [Lemma 14](#icc2){reference-type="ref" reference="icc2"} this implies that $[G : C_G(x_1)]$ is finite, and hence that $$|Y| = |G/C_G(x_1)| < \infty,$$ a contradiction.
Therefore $D$ does not contain $C_G(W)$, and taking some $g \in C_G(W) \setminus D$ we have $gEg^{-1} \cap F = \emptyset$, as required. ◻ # Weak\*-Closed Ideals of $\ell^1(G)$ We begin this section by looking at the $\ell^1$-algebras of infinite abelian groups. The next lemma gives a method to construct many weak\*-closed ideals in such algebras. **Lemma 16**. *Let $G$ be an infinite abelian group. Take an open set $U \subset \widehat{G}$, with $0< m(\overline{U}) <1$, where $m$ denotes the Haar measure on $\widehat{G}$, and set $E = \overline{U}$. Then $$K_E := \{ f \in \ell^1(G) : \widehat{f}(t) = 0 \ (t \in E)\}$$ is a proper, non-zero, weak\*-closed ideal of $\ell^1(G)$.* *Proof.* Note that $I := \{ f \in L^\infty(\widehat{G}) : f(t) = 0, \ \text{a.e. } t \in E \}$ is a weak\*-closed ideal of $L^\infty(\widehat{G})$, and that the inclusion map $i \colon A(\widehat{G}) \hookrightarrow L^\infty(\widehat{G})$ is weak\*-continuous. As such $$i^{-1}(I) = \{ f \in A(\widehat{G}) : f(t) = 0 \ (t \in E) \}$$ is weak\*-closed in $A(\widehat{G})$, and it is proper and non-zero since $A(\widehat{G})$ is regular. Pulling this ideal back to $\ell^1(G)$ via the inverse Fourier transform gives the ideal $K_E$. ◻ **Remark 17**. We observe that it follows from Ülger's work [@U14] that, for an infinite abelian group $G$, the weak\*-closed ideal structure of $\ell^1(G)$ is very complicated, and in particular not all of the weak\*-closed ideals of $\ell^1(G)$ are of the form $K_E$. For this discussion we identify $\ell^1(G)$ with $A(\widehat{G})$. In Ülger's notation we have $$J(E) : = \overline{\{ u \in A(\widehat{G}) : {\rm supp\,}u \text{ is compact and disjoint from } E\}},$$ $$k(E) := \{u \in A(\widehat{G}) : u(t) = 0 \ (t \in E)\},$$ and $k(E)$ is the image of $K_E$ under the Fourier transform. Ülger shows in [@U14 Example 2.5] that $\widehat{G}$ contains a closed subset $E$ such that $\operatorname{hull}(\overline{J(E)}^{w^*})= E$, but such that $k(E)$ is not weak\*-closed. Hence $\overline{J(E)}^{w^*}$ is a weak\*-closed ideal not of the form $k(E)$. This, together with other results in Ülger's paper such as [@U14 Theorem 3.1], leads us to suspect that classifying the weak\*-closed ideals of $\ell^1(\mathbb Z)$ is approximately as hard as classifying the sets of spectral synthesis for $\mathbb T$, which is a very old and difficult unsolved problem. ◻ We now move on to looking at general groups. The next lemma allows us to lift certain weak\*-closed ideals from a normal subgroup up to the ambient group. The notation $\oplus$ indicates an internal direct sum, and we have identified the subset $\delta_t*J$ of $\ell^1(tN)$ with its canonical image inside $\ell^1(G)$. **Lemma 18**. *Let $G$ be a group, let $N$ be a normal subgroup of $G$, and let $\mathcal{T}$ be a transversal for $N$ in $G$. Let $J$ be a non-trivial weak\*-closed ideal of $\ell^1(N)$ which satisfies $J^s = J \ (s \in G).$ Then $$I : = \bigoplus_{t \in \mathcal{T}} \delta_t * J$$ is a non-trivial weak\*-closed ideal of $\ell^1(G)$.* *Proof.* First, we show that $I$ is weak\*-closed. Let $E = J_\perp \subset c_0(N)$, and define $$F = \bigoplus_{t \in \mathcal{T}} \delta_t *E \leq c_0(G).$$ A routine calculation shows that $F^\perp = I$, which implies that $I$ is weak\*-closed. Next we show that $I$ is an ideal. Let $f \in I$ and $s \in G$. As $I$ is clearly a linear subspace, it is enough to show that $\delta_s*f, f*\delta_s \in I$.
Write $f = \sum_{t \in \mathcal{T}} \delta_t*f_t,$ for some $f_t \in J \ (t \in \mathcal{T}).$ For each $t \in \mathcal{T}$, there exist unique $t', t'' \in \mathcal{T}$, and $n(t), m(t) \in N$, such that $$st = t'n(t), \qquad ts = t'' m(t).$$ We have $$\delta_s *f = \sum_{t \in \mathcal{T}} \delta_{st} * f_t = \sum_{t \in \mathcal{T}} \delta_{t'}*\delta_{n(t)}*f_t.$$ Since $J \lhd \ell^1(N)$ we must have $\delta_{n(t)} * f_t \in J$ for all $t \in \mathcal{T}$, so that $\delta_s *f$ has the required form and belongs to $I$. Similarly $$f*\delta_s = \sum_{t \in \mathcal{T}} \delta_{ts} * f_t^{s^{-1}} = \sum_{t \in \mathcal{T}} \delta_{t''}*\delta_{m(t)}*f_t^{s^{-1}}.$$ By the hypothesis on $J$ we have $\delta_{m(t)}*f_t^{s^{-1}} \in J$ for each $t \in \mathcal{T}$, so that $f*\delta_s$ also has the required form to belong to $I$. It is clear that $I$ is non-zero if $J$ is. Moreover, if $\delta_e \in I$ then we would have $\delta_e \in J$, so that $I$ is proper. This completes the proof. ◻ Note that, in the previous lemma, the ideal $I$ does not depend on the choice of transversal $\mathcal{T}$, since it can be characterised as the set of those $f \in \ell^1(G)$ with the property that the restriction of $f$ to each coset of $N$ belongs to $J$. **Lemma 19**. *Let $G$ be a group, and consider the collection of finite subsets of $G$ as a directed set under inclusion. Let $f \in \ell^1(G)$ and $(x_K) \ (K \subset \subset G)$ be a net of elements of $G$ satisfying $$({\rm supp\,}f)^{x_K} \cap K = \emptyset \quad (K \subset \subset G).$$ Then $f^{x_K} \to 0$ in the weak\*-topology.* *Proof.* Let $\phi \in c_0(G)$, and write $F = {\rm supp\,}f$. Then $$\begin{aligned} |\langle \phi, f^{x_K} \rangle| &= \left| \sum_{s \in F^{x_K}} f^{x_K}(s) \phi(s) \right| \leq \|f\| \max \{ |\phi(s)| : s \in F^{x_K}\} \leq \|f\| \max\{ |\phi(s)| : s \in G \setminus K \}, \end{aligned}$$ which can be made arbitrarily small for $K$ large enough. ◻ The next lemma is the key to proving that if $G$ is ICC then $\ell^1(G)$ is weak\*-simple, as well as our more precise structural result Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}. **Lemma 20**. *Let $G$ be a group and let $I \lhd \ell^1(G)$ be weak\*-closed. Then for all $f \in I$ we have $f|_{FC(G)} \in I$.* *Proof.* Let $\varepsilon>0$, and let $A \subset \subset G$ satisfy $\|f|_{G\setminus A} \|< \varepsilon/2$. By enlarging $A$ if necessary we may suppose that $A \cap FC(G) \neq \{ e \}.$ There are finitely many infinite conjugacy classes represented in $A$ and we shall call them $Y_1, Y_2, \ldots, Y_m$, where $m \in \mathbb N$. We also let $W = A \cap FC(G)$ and $w = f|_W$. Our plan is to define a sequence of functions $f_1, \ldots, f_m \in I$, with the property that each $f_i \ (i = 1, \ldots, m)$ can be decomposed as $$f_i = w+ \phi_i + \psi_i,$$ where $$\label{eq1} {\rm supp\,}\phi_i \subset Y_{i+1}\cup \cdots \cup Y_m$$ and $$\label{eq2} \|\psi_i\| \leq \varepsilon/2 + \cdots + \varepsilon/2^i.$$ It will turn out that the final iteration $f_m$ is a good approximation of $f|_{FC(G)}$, and then taking a limit will give the desired conclusion. In order to pass from stage $i$ to stage $i+1$ we shall also have to introduce functions $y_i, g_i,$ and $h_i \ (i = 1, \ldots, m)$, beginning as follows.
By Proposition [Proposition 15](#icc3){reference-type="ref" reference="icc3"}, for each $K\subset \subset G$ there exists $x_K \in C_G(W)$ such that $$x_K (A \cap Y_1)x_K^{-1} \cap K = \emptyset.$$ Define $$y_0 = \sum_{t \in A \cap Y_1} f(t) \delta_t, \qquad g_0 = \sum_{t \in A \setminus(W \cup Y_1)} f(t) \delta_t, \qquad h_0 = \sum_{t \in G \setminus A}f(t) \delta_t,$$ so that $f= w + y_0+ g_0 +h_0$ and $\|h_0 \| \leq \varepsilon/2$. By the Banach--Alaoglu Theorem, we may pass to a subnet $(x_{K_{\alpha}})$ of $(x_K)$ such that each of the limits $$f_1 := \lim_{w^*, \, \alpha} f^{x_{K_{\alpha}}}, \qquad \phi_1 : = \lim_{w^*, \, \alpha} g_0^{x_{K_{\alpha}}}, \qquad \text{and} \qquad \psi_1 : = \lim_{w^*, \, \alpha} h_0^{x_{K_{\alpha}}}$$ exists. By the choice of $(x_K)$ and Lemma [Lemma 19](#w*lim){reference-type="ref" reference="w*lim"} we have $\lim_{w^*, \, \alpha} y_0^{x_{K_\alpha}} = 0$. Note also that $w^{x_{K_\alpha}} = w$ for all $\alpha$, since each $x_K$ was chosen to centralise $W$. Hence we have $$f_1 = w + \phi_1 + \psi_1,$$ and since $I$ is weak\*-closed, $f_1 \in I$. Moreover, since $g_0^{x_{K_\alpha}}$ is supported on $Y_2 \cup \cdots \cup Y_m$ for all $\alpha$, we have that $${\rm supp\,}\phi_1 \subset Y_2 \cup \cdots \cup Y_m$$ as well. Also $\|\psi_1 \| \leq \varepsilon/2$ because each $\|h_0^{x_K} \| \leq \varepsilon/2 \ (K \subset \subset G)$. Hence $f_1$ has a decomposition satisfying [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} and [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}. Now suppose that $1\le i<m$ and that we have $$f_i = w+\phi_i+\psi_i,$$ for $\phi_i$ and $\psi_i$ satisfying [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} and [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} respectively. Let $B \subset \subset G$ satisfy $\|\phi_i|_{G \setminus B} \| < \varepsilon/2^{i+1}$ and define $$y_i = \sum_{t \in B \cap Y_{i+1}} \phi_i(t) \delta_t, \qquad g_i = \sum_{t \in B \setminus Y_{i+1}} \phi_i(t) \delta_t, \qquad h_i= \psi_i + \sum_{t \in G \setminus B} \phi_i(t) \delta_t.$$ Then $f_i = w + y_i + g_i + h_i$, and $$\|h_i \| \leq \|\psi_i \| + \|\phi_i|_{G \setminus B} \| \leq \varepsilon/2 + \varepsilon/4 + \cdots + \varepsilon/2^{i+1}.$$ For each $K \subset \subset G$ we may choose $x_K' \in C_G(W)$ such that $$x_K' (B \cap Y_{i+1})x_K'^{-1} \cap K = \emptyset.$$ Again, by the choice of $(x_K')$, we have $y_i^{x'_K} \to 0$ in the weak\*-topology. As before, we may pass to a subnet $(x_{K_\beta}')$ such that the limits $$\phi_{i+1} : = \lim_{w^*, \, \beta} g_i^{x'_{K_\beta}} \qquad \text{and} \qquad \psi_{i+1} := \lim_{w^*, \, \beta} h_i^{x'_{K_\beta}}$$ exist, and then $$f_{i+1} := \lim_{w^*, \, \beta} f_i^{x'_{K_\beta}} = w + \phi_{i+1} + \psi_{i+1}.$$ We have $\|\psi_{i+1} \| \leq \|h_i\| \leq \varepsilon/2 + \cdots + \varepsilon/2^{i+1}$, so that $\psi_{i+1}$ satisfies [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}. Also, $\phi_{i+1}$ satisfies [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} for similar reasons to $\phi_1$. At the $m^{\rm th}$ stage we find that ${\rm supp\,}\phi_m = \emptyset$, and so $\phi_m$ must be zero. As such $f_m = w+ \psi_m$, and we have $$\begin{aligned} \|f_m - f|_{FC(G)}\| &\leq \|f_m - w \|+ \|w- f|_{FC(G)} \| = \|\psi_m\| + \|w- (w+h_0|_{FC(G)}) \| \\ &= \|\psi_m \| + \|h_0|_{FC(G)} \| \leq ( \varepsilon/2 + \cdots +\varepsilon/2^m ) + \varepsilon/2 < 2\varepsilon. \end{aligned}$$ Letting $\varepsilon$ go to zero shows that $f|_{FC(G)} \in I$, as required.
◻ We now turn to the proofs of our main results. Firstly, using the previous lemma, we shall characterise weak\*-simplicity of $\ell^1(G)$. In the proof $\ell_0^1(N)$ denotes the augmentation ideal of the subgroup $N$, namely $\{ f \in \ell^1(N) : \sum_{s \in N} f(s) = 0 \}.$ **Theorem 21**. *Let $G$ be a group. Then $\ell^1(G)$ is weak\*-simple if and only if $G$ is ICC.* *Proof.* Suppose that $G$ is an ICC group, and let $\{ 0 \} \neq I \lhd \ell^1(G)$ be weak\*-closed. Then, after translating and scaling if necessary, we can find $f \in I$ with $f(e) = 1$. It follows from Lemma [Lemma 20](#icc4){reference-type="ref" reference="icc4"} that $\delta_e \in I$, so that $I = \ell^1(G)$. As $I$ was arbitrary, $\ell^1(G)$ must be weak\*-simple. Now suppose that $G$ is not ICC, so that $FC(G) \neq \{ e \}$. We shall show that $\ell^1(G)$ is not weak\*-simple. There are two cases. [Case 1: $FC(G)_T \neq \{e \}.$]{.ul} Pick $x \in FC(G)_T \setminus \{ e \}$. By Dicman's Lemma [@T Lemma 1.3] there exists a finite normal subgroup $N$ of $G$ with $x \in N$, so that $N \neq \{e \}$. Let $\mathcal{T}$ be a transversal for $N$ in $G$. Then by Lemma [Lemma 18](#0.1){reference-type="ref" reference="0.1"} $I: = \bigoplus_{t \in \mathcal{T}} \delta_t*\ell_0^1(N)$ is a proper, non-trivial, weak\*-closed ideal of $\ell^1(G)$. [Case 2: $FC(G)_T = \{e \}.$]{.ul} Just as in Case 2 of the proof of Theorem [Theorem 11](#3.8){reference-type="ref" reference="3.8"}, we get an $FC$-central, normal subgroup $N \lhd G$, which is isomorphic to $\mathbb Z^n$, for some $n \in \mathbb N$. Then, as in the proof of Lemma [Lemma 10](#3.7){reference-type="ref" reference="3.7"}, we get a compact neighbourhood of the identity $E$ in $\widehat{N}$, which has $0<m(E)<1$, and which is invariant for the action of $G$ on $\widehat{N}$ induced by conjugation. As such, by Lemma [Lemma 16](#yc){reference-type="ref" reference="yc"}, $$K_E = \{ f \in \ell^1(N) : \widehat{f}|_E = 0 \}$$ is a non-trivial weak\*-closed ideal of $\ell^1(N)$, and it is invariant for the conjugation action of $G$ because $E$ is. It follows from Lemma [Lemma 18](#0.1){reference-type="ref" reference="0.1"} that $\ell^1(G)$ has a non-trivial weak\*-closed ideal. ◻ The next proposition says that the weak\*-closed ideals of $\ell^1(G)$ are determined by what happens on $FC(G)$. **Proposition 22**. *Let $G$ be a group, and let $\mathcal{T}$ be a transversal for $FC(G)$ in $G$. Let $I \lhd \ell^1(G)$ be weak\*-closed, and let $$J = I \cap \ell^1(FC(G)).$$ Then $J$ is a weak\*-closed ideal of $\ell^1(FC(G))$ invariant for the conjugation action of $G$, and $$\label{eq3} I = \bigoplus_{t \in \mathcal{T}} \delta_t*J.$$* *Proof.* First observe that $J$ is $G$-conjugation-invariant. Indeed, if $f \in J$ and $t \in G$, then $f^t \in I$ because $I$ is an ideal, but also $f^t \in \ell^1(FC(G))$ as $FC(G)$ is a normal subgroup. Clearly the right hand side of [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} is contained in the left hand side, so we shall prove the reverse containment. Indeed, let $f \in I$. Then we can write $$f = \sum_{t \in \mathcal{T}} \delta_t*f_t,$$ where $f_t \in \ell^1(FC(G))$ is given by $(\delta_{t^{-1}}*f)|_{FC(G)}$. For each $t \in \mathcal{T}$ we have $\delta_{t^{-1}}*f \in I$ since $I$ is an ideal, and hence $f_t \in I$ by Lemma [Lemma 20](#icc4){reference-type="ref" reference="icc4"}. Since also $f_t \in \ell^1(FC(G))$ we have $f_t \in J$, and the conclusion follows.
◻ We can now prove Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}. *Proof of Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}.* This follows from Proposition [Proposition 22](#idealthm){reference-type="ref" reference="idealthm"} and Lemma [Lemma 18](#0.1){reference-type="ref" reference="0.1"}. ◻ We now prove our classification theorem for groups with finite $FC$-centres, Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}. Recall that given $\Omega \subset \widehat{FC(G)}$ we define $$K(\Omega) : = \bigcap_{\pi \in \Omega} \ker \pi \lhd \ell^1(FC(G)).$$ For a finite group $G$ the ideals of $\ell^1(G)$ are all given by intersections of kernels of irreducible representations [@HR2 Theorem 38.7]. This fact, together with Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"}, implies Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}. *Proof of Theorem [Theorem 3](#thm3){reference-type="ref" reference="thm3"}.* By Theorem [Theorem 2](#thm2){reference-type="ref" reference="thm2"} it is enough to classify those ideals of $\ell^1(FC(G))$ which are invariant for the conjugation action of $G$; these are automatically weak\*-closed since $\ell^1(FC(G))$ is finite-dimensional, by hypothesis. Let $J \lhd \ell^1(FC(G))$ be such an ideal. Then by [@HR2 Theorem 38.7] there exists a subset $\mathcal{O} \subset \widehat{FC(G)}$ such that $J = K(\mathcal{O})$. Let $\Omega = \{ t \cdot \sigma : \sigma \in \mathcal{O}, \ t \in G \}.$ Then given $\pi = t \cdot \sigma \in \Omega$ we have $J \subset \ker \sigma$ so $J = J^t \subset \ker (t \cdot \sigma) = \ker \pi$. As such $J \subset \bigcap_{\pi \in \Omega} \ker \pi$, but also clearly $J \supset \bigcap_{\pi \in \Omega} \ker \pi$. Hence $J = K( \Omega)$, and the result follows. ◻ ## Acknowledgements {#acknowledgements .unnumbered} I am grateful to Yemon Choi for useful email exchanges, and in particular for pointing out how to construct weak\*-closed ideals of $\ell^1(\mathbb Z)$. ## References {#references .unnumbered} B. Anderson-Sackaney, *On ideals of $L^1$-algebras of compact quantum groups*, Internat. J. Math. **33** no. 12 (2022), Paper No. 2250074, 38pp. M. Cowling, *The predual of the space of convolutors on a locally compact group*, Bull. Austral. Math. Soc. **57** (1998), 409--414. M. Daws, *Connes amenability of bidual and weighted semigroup algebras*, Math. Scand. **99** no. 2 (2006), 217--246. M. Daws and N. Spronk, *On convoluters on $L^p$ spaces*, Studia Math. **245** (2019), 15--31. A. Derighetti, *Convolution operators on groups*, Lecture Notes of the Unione Matematica Italiana, Springer-Verlag Berlin Heidelberg (2011). A. Derighetti, M. Filali, and M. S. Monfared, *On the ideal structure of some Banach algebras related to convolution operators on $L^p(G)$*, J. Funct. Anal. **215** (2004), 341--365. A. H. Dooley, S. K. Gupta, and F. Ricci, *Asymmetry of convolution norms on Lie groups*, J. Funct. Anal. **174** (2000), 399--416. A. Figà-Talamanca, *Translation invariant operators in $L^p$*, Duke Math. J. **32** (1965), 495--501. E. Gardella, *A modern look at algebras of operators on $L^p$ spaces*, Expo. Math. **39** no. 3 (2021), 420--453. E. Gardella and H. Thiel, *Isomorphisms of algebras of convolution operators*, Ann. Sci. Éc. Norm. Supér. (4) **55** (2022), 1433--1471. C. Herz, *The theory of $p$-spaces with an application to convolution operators*, Trans. Am. Math. Soc. **154** (1971), 69--82. C. Herz, *Harmonic synthesis for subgroups*, Ann. Inst. Fourier (Grenoble) **23** (1973), 91--123. E.
Hewitt and K. A. Ross, *Abstract harmonic analysis. Vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups*, Die Grundlehren der mathematischen Wissenschaften, Band 152. Springer-Verlag, New York-Berlin, 1970. P. Ohrysko and M. Wojciechowski, *On the Gelfand space of the measure algebra on the circle*, (2014), preprint arXiv:1406.0797. P. Ohrysko, M. Wojciechowski, and C. Graham, *Non-separability of the Gelfand space of measure algebras*, Ark. Mat. **54** no. 2 (2016), 525--535. N. C. Phillips, *Analogs of Cuntz algebras on $L^p$ spaces*, (2012), preprint arXiv:1201.4196. N. C. Phillips, *Crossed products of $L^p$ operator algebras and the K-theory of Cuntz algebras on $L^p$ spaces*, (2013), preprint arXiv:1309.6406. W. Rudin, *Fourier analysis on groups*, Wiley Classics Library, John Wiley & Sons Inc., New York, 1990. S. V. Strătilă and L. Zsidó, *Lectures on von Neumann algebras* (second edition), Cambridge IISc Series, Cambridge University Press, Cambridge, 2019. M. J. Tomkinson, *FC-groups*, Research Notes in Mathematics **96**, Pitman Advanced Pub. Program, Boston, MA., 1984. A. Ülger, *Relatively weak\*-closed ideals of $A(G)$, sets of synthesis and sets of uniqueness*, Colloquium Math. **136** (2014), 271--296. J. T. White, *Left ideals of Banach algebras and dual Banach algebras*, Proceedings of the 23rd International Conference on Banach Algebras and Applications, De Gruyter Proceedings in Mathematics (De Gruyter 2020), 227--253. J. T. White, *The ideal structure of measure algebras and asymptotic properties of group representations*, to appear in J. Op. Theory (21 pages).
--- abstract: | This paper is a sequel to [@Plfin; @Pnun]. A construction associating a semialgebra with an algebra, subalgebra, and a coalgebra dual to the subalgebra played a central role in the author's book [@Psemi]. In this paper, we extend this construction to certain nonunital algebras. The resulting semialgebra is still semiunital over a counital coalgebra. In particular, we associate semialgebras to locally finite subcategories in $k$-linear categories. Examples include the Temperley--Lieb and Brauer diagram categories and the Reedy category of simplices in a simplicial set. address: | Institute of Mathematics, Czech Academy of Sciences\ Žitná 25, 115 67 Prague 1\ Czech Republic author: - Leonid Positselski title: | Semialgebras associated with nonunital algebras\ and $k$-linear subcategories --- # Introduction {#introduction .unnumbered} ## *Semialgebras* can be informally defined as "algebras over coalgebras". Given a coassociative, counital coalgebra $\mathcal C$ over a field $k$, the category of $\mathcal C$-$\mathcal C$-bicomodules $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is a monoidal category with respect to the functor $\mathbin{\text{\smaller$\square$}}_\mathcal C$ of cotensor product over $\mathcal C$. A (semiassociative, semiunital) *semialgebra* over $\mathcal C$ is a monoid object in the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. Semialgebras appeared under the name of "internal categories" in Aguiar's dissertation [@Agu]. The terminology "semialgebras" was introduced by the present author in the monograph on semi-infinite homological algebra [@Psemi], where semialgebras were the main object of study. The key discovery in [@Psemi] was that the semi-infinite homology and cohomology of associative algebraic structures are properly assigned to module objects (semimodules and semicontramodules) over semialgebras. In subsequent literature, the concept and the terminology "semialgebra" was used in the paper of Holstein and Lazarev on categorical Koszul duality [@HL]. The generality level in [@Psemi] included semialgebras over corings over noncommutative, nonsemisimple rings. Natural examples of semialgebras over, say, coalgebras over commutative rings certainly arise in many contexts; so the complicated generality of three-story towers (ring, coalgebra/coring, semialgebra) is quite useful. But it made the exposition in [@Psemi] somewhat heavy and full of technical details. In the present paper, we restrict ourselves to the less complicated setting of coalgebras over fields and semialgebras over such coalgebras. ## {#introd-semialgebra-construction} All the examples of semialgebras discussed in [@Psemi] were produced by one and the same construction, spelled out in [@Psemi Chapter 10]. Restricted to our present context of coalgebras over a field $k$, this construction works as follows. Let $K$ and $R$ be associative algebras over $k$ and $f\colon K\longrightarrow R$ be a $k$-algebra homomorphism. Suppose that we are given a coalgebra $\mathcal C$ over $k$ which is "dual to $K$" in a weak sense. Specifically, this means that a multiplicative, unital bilinear pairing $\phi\colon\mathcal C\times_k K\longrightarrow k$ is given as a part of the data. Assume also that $f$ makes $R$ a flat left $K$-module. Then, under a certain further assumption (called "integrability" in this paper), the tensor product $\boldsymbol{\mathcal S}=\mathcal C\otimes_KR$ becomes a semialgebra over $\mathcal C$. 
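To fix the intuition, it may help to keep in mind the degenerate case of the trivial coalgebra; this toy example is included here purely as an illustration and is, of course, standard. Take $$K = k, \qquad \mathcal C = k, \qquad \phi(c,r) = cr \quad (c \in \mathcal C, \ r \in K).$$ A $\mathcal C$-$\mathcal C$-bicomodule over $\mathcal C=k$ is simply a $k$-vector space, the cotensor product $\mathbin{\text{\smaller$\square$}}_\mathcal C$ reduces to $\otimes_k$, and a semialgebra over $\mathcal C=k$ is the same thing as a unital associative $k$-algebra. Any unital $k$-algebra $R$ is flat over $K=k$, all the assumptions above are satisfied for trivial reasons, and the construction returns $\boldsymbol{\mathcal S}=\mathcal C\otimes_KR=k\otimes_kR=R$ with its usual multiplication; the semiunit map $\mathcal C\longrightarrow\boldsymbol{\mathcal S}$ is the unit map $k\longrightarrow R$. Thus semialgebras over the trivial coalgebra recover ordinary unital algebras.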
It was also explained in [@Psemi Chapter 10] that some abelian module categories over the semialgebra $\mathcal C\otimes_KR$ can be described in terms of $R$-modules and $\mathcal C$-comodules or $\mathcal C$-contramodules. Specifically, the module categories admitting such a description in this generality are the categories of *right $\boldsymbol{\mathcal S}$-semimodules* and *left $\boldsymbol{\mathcal S}$-semicontramodules*. Roughly speaking, the answer is that the datum of a right $\boldsymbol{\mathcal S}$-semimodule is equivalent to that of a right $R$-module whose underlying right $K$-module structure can be integrated to a right $\mathcal C$-comodule structure. The datum of a left $\boldsymbol{\mathcal S}$-semicontramodule is equivalent to that of a left $R$-module whose underlying left $K$-module structure can be (or has been) integrated to a left $\mathcal C$-contramodule structure. The latter assertion assumes that $R$ is a projective left $K$-module. ## {#section-1} The terminology "integrable" or "integrated" in the preceding paragraphs comes from Lie theory. In fact, finite- and infinite-dimensional Lie theory provided the thematic examples of semialgebras which motivated the exposition in [@Psemi]. For example, if $H$ is an algebraic group over a field $k$ of characteristic $0$, then the universal enveloping algebra $K=U(\mathfrak h)$ of the Lie algebra of $H$ can be described as the algebra of distributions on $H$ supported at the unit element $e\in H$, with respect to the convolution multiplication. On the other hand, the convolution comultiplication defines a coalgebra (in fact, a Hopf algebra) structure on the algebra $\mathcal C=\mathcal C(H)$ of regular functions on $H$. The algebra $K$ and coalgebra $\mathcal C$ come together with a natural multiplicative, unital bilinear pairing $\phi\colon\mathcal C\times_k K\longrightarrow k$ as above [@Prev Section 2.7]. Furthermore, suppose that $H$ is a closed subgroup in an algebraic group $G$. Assume for the sake of a technical simplification that the algebraic group $H$ is *unimodular*, i. e., a nonzero biinvariant differential top form exists on $H$ (and has been chosen). Then the vector space $\boldsymbol{\mathcal S}=\boldsymbol{\mathcal S}(G,H)$ of distributions on $G$, supported in $H$ and regular along $H$, becomes a semialgebra over the coalgebra $\mathcal C=\mathcal C(H)$  [@Psemi Section C.4]. The semialgebra $\boldsymbol{\mathcal S}$ can be also produced as the tensor product $\boldsymbol{\mathcal S}=\mathcal C\otimes_{U(\mathfrak h)}U(\mathfrak g)$, where $R=U(\mathfrak g)$ is the enveloping algebra of the Lie algebra of $G$. Notice that the construction of the semialgebra of distributions $\boldsymbol{\mathcal S}(G,H)$ is left-right symmetric, but the construction of the tensor product semialgebra $\mathcal C\otimes_KR$ is *not*. Actually, in our Lie theory context we could consider two semialgebras $\boldsymbol{\mathcal S}^l=\boldsymbol{\mathcal S}^r(\mathfrak g,H)=\mathcal C\otimes_{U(\mathfrak h)}U(\mathfrak g)$ and $\boldsymbol{\mathcal S}^r=\boldsymbol{\mathcal S}^l(\mathfrak g,H)=U(\mathfrak g)\otimes_{U(\mathfrak h)}\mathcal C$, which are naturally left-right opposite to each other. Under the unimodularity assumption above, we have isomorphisms of semialgebras $\boldsymbol{\mathcal S}^r(\mathfrak g,H)\simeq\boldsymbol{\mathcal S}(G,H)\simeq\boldsymbol{\mathcal S}^l(\mathfrak g,H)$  [@Psemi Remark C.4.4]. 
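For orientation, it may be worth recording the degenerate case of the trivial subgroup $H=\{e\}\subset G$ (which is trivially unimodular); this observation is included only as an illustration. Here $\mathfrak h=0$, so $K=U(\mathfrak h)=k$ and $\mathcal C=\mathcal C(H)=k$, and both tensor products collapse to the same semialgebra $$\mathcal C\otimes_{U(\mathfrak h)}U(\mathfrak g)\,\simeq\,U(\mathfrak g)\,\simeq\, U(\mathfrak g)\otimes_{U(\mathfrak h)}\mathcal C,$$ in agreement with the description of $\boldsymbol{\mathcal S}(G,\{e\})$ as the space of distributions on $G$ supported at the unit element $e$, which (in characteristic $0$) is $U(\mathfrak g)$. In particular, no left-right asymmetry can appear in this degenerate case.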
When $H$ is not unimodular, the two semialgebras $\boldsymbol{\mathcal S}^r$ and $\boldsymbol{\mathcal S}^l$ are *no longer* isomorphic, but only Morita equivalent [@Psemi Sections C.4.3--4]. ## {#section-2} What does one achieve by having two equivalent constructions (up to isomorphism or Morita equivalence) of the same semialgebra $\boldsymbol{\mathcal S}$, viz., $\boldsymbol{\mathcal S}^r=\mathcal C\otimes_KR$ and $\boldsymbol{\mathcal S}^l=R\otimes_K\mathcal C$ ? Actually, one achieves a lot. The point is that the semialgebra $\boldsymbol{\mathcal S}^r$ comes together with a natural description of *right* $\boldsymbol{\mathcal S}$-semimodules and left $\boldsymbol{\mathcal S}$-semicontramodules (as explained in Section [0.2](#introd-semialgebra-construction){reference-type="ref" reference="introd-semialgebra-construction"}). Similarly, the semialgebra $\boldsymbol{\mathcal S}^l$ comes together with a natural description of *left* $\boldsymbol{\mathcal S}$-semimodules (and right $\boldsymbol{\mathcal S}$-semicontramodules). If one wants to know simultaneously what the left and the right $\boldsymbol{\mathcal S}$-semimodules are (e. g., for the purposes of semi-infinite homology [@Psemi Chapter 2]), or if one wants to know simultaneously what the left $\boldsymbol{\mathcal S}$-semimodules and the left $\boldsymbol{\mathcal S}$-semicontramodules are (e. g., for the purposes of the semimodule-semicontramodule correspondence [@Psemi Chapter 6], [@PS1 Sections 8 and 10.3]), then one needs to have both a "left" and a "right" description of one's semialgebra $\boldsymbol{\mathcal S}$. The two descriptions can possibly arise from two different algebras $K$, of course; or in any event, from two different algebras $R$. One is lucky if one obtains isomorphisms (or Morita equivalences) of semialgebras like $\mathcal C\otimes_KR^r\simeq\boldsymbol{\mathcal S}\simeq R^l\otimes_K\mathcal C$, where $R^r$ and $R^l$ are two different algebras sharing a common subalgebra $K$. Then right $\boldsymbol{\mathcal S}$-semimodules are described as right $R^r$-modules with a right $\mathcal C$-comodule structure and left $\boldsymbol{\mathcal S}$-semicontramodules as left $R^r$-modules with a left $\mathcal C$-contramodule structure, while left $\boldsymbol{\mathcal S}$-semimodules are described as left $R^l$-modules with a left $\mathcal C$-comodule structure. ## {#section-3} A prime example of the latter situation occurs in infinite-dimensional Lie theory. Let $\mathfrak g$ be a Tate (locally linearly compact) Lie algebra over a field $k$, such as, e. g., the Lie algebra of vector fields on the punctured formal circle $\mathfrak g=k((t))d/dt$, or the Lie algebra of currents on the punctured formal circle $\mathfrak g=\mathfrak a((t))$, where $\mathfrak a$ is a finite-dimensional Lie algebra. Let $\mathfrak h\subset\mathfrak g$ be a pro-nilpotent compact open Lie subalgebra in $\mathfrak g$, such as $\mathfrak h=t^nk[[t]]d/dt$ with some $n\ge1$ in the former (Virasoro) case or $\mathfrak h=t^n\mathfrak a[[t]]$ with some $n\ge1$ in the latter (Kac--Moody) case. Assume that the action of $\mathfrak h$ in the quotient space $\mathfrak g/\mathfrak h$ is ind-nilpotent. One considers a certain kind of central extensions $\varkappa$ of the pairs $(\mathfrak g,\mathfrak h)$. In particular, there is the *canonical central extension* $\varkappa_0$ (in the case of the Virasoro, this means the central charge $\varkappa_0=-26$).
Let $R^r=U_{\varkappa+\varkappa_0}(\mathfrak g)$ and $R^l=U_\varkappa(\mathfrak g)$ be the corresponding two enveloping algebras describing representations of the respective central extensions of $\mathfrak g$ with the central charge fixed as a given (generally nonzero) scalar from $k$. Put $K=U(\mathfrak h)$, and let $\mathcal C$ be the coenveloping coalgebra of the conilpotent Lie coalgebra dual to $\mathfrak h$  [@Psemi Section D.6.1]. Then one has an isomorphism $\mathcal C\otimes_KR^r\simeq R^l\otimes_K\mathcal C$ of semialgebras over $\mathcal C$  [@Psemi Theorem D.3.1], [@Prev Sections 2.7--2.8]. The classical semi-infinite homology and cohomology of Tate Lie algebras arise in this context [@Psemi Section D.6.2]. ## {#section-4} The aim of the present paper is to extend the construction of the semialgebra $\mathcal C\otimes_KR$ given in [@Psemi Chapter 10] from *algebras* to *categories*. To any small $k$-linear category $\mathsf E$ one assigns a $k$-algebra $R=R_\mathsf E$, but when the set of objects of $\mathsf E$ is infinite, the algebra $R$ *has no unit*. The algebra $R_\mathsf E$ only has "local units". The rings/algebras $K$ and $R$ were, of course, assumed to be unital in [@Psemi Chapter 10]. In this paper we work out the case of *nonunital* algebras $K$ and $R$. The coalgebra $\mathcal C$ is still presumed to be counital, and the resulting semialgebra $\boldsymbol{\mathcal S}$ which we construct is semiunital. Intuitively, one can explain the situation with local and global units as follows. A unit in a ring is an element, and as such, it is a global datum: one cannot really glue it from local pieces. If a unital algebra $R$ is a union of its vector subspaces, then the unit has to be an element of one of these subspaces. But the counit of a coalgebra $\mathcal C$ is a linear function $\epsilon\colon\mathcal C \longrightarrow k$, and as such, it can be obtained by gluing a compatible family of linear functions defined on vector subspaces of $\mathcal C$. Likewise, the semiunit of a semialgebra $\boldsymbol{\mathcal S}$ is a $\mathcal C$-$\mathcal C$-bicomodule map $\mathbf e\colon\mathcal C\longrightarrow\boldsymbol{\mathcal S}$, and it can be also possibly obtained by gluing a compatible family of maps defined on vector subspaces or subbicomodules of $\mathcal C$. This vague and perhaps not necessarily very convincing wording expresses the intuition behind our setting of nonunital algebras, but counital coalgebras and semiunital semialgebras in this paper. Striving to approach the natural generality, we do not restrict ourselves in this paper to algebras with enough idempotents or "locally unital algebras" (which correspond precisely to small $k$-linear categories). Rather, our main results are stated for more general classes of what we call *left flat t-unital* or *left projective t-unital* algebras, and much of the exposition is written in an even wider generality. ## {#section-5} The modest aims of the exposition in this paper do not include any attempt to construct isomorphisms or Morita equivalences of "left" and "right" semialgebras. Presumably, such constructions always arise in more or less specific contexts, like the Lie theory context discussed above. The principal examples which motivated this paper are different: these are the Temperley--Lieb and Brauer diagram categories and the $k$-linearization of the Reedy category of simplices in a simplicial set, with the natural triangular decompositions which these categories possess.
Our understanding of these examples is not sufficient to attempt any constructions of isomorphisms of "left" and "right" semialgebras. In fact, the "right" semialgebra $\boldsymbol{\mathcal S}=\mathcal C\otimes_KR$ always has the technically important property of being an injective *left* $\mathcal C$-comodule (essentially, because $R$ is presumed to be a flat left $K$-module). This property guarantees that the categories of right $\boldsymbol{\mathcal S}$-semimodules and left $\boldsymbol{\mathcal S}$-semicontramodules are abelian. One may (and should) wonder whether $\boldsymbol{\mathcal S}$ turns out to be an injective *right* $\mathcal C$-comodule in the specific examples one is interested in. But for the examples discussed in this paper (viz., the Temperley--Lieb, Brauer, and simplicial set categories), we do *not* know that. ## {#section-6} This paper is a third installment in a series. It is largely based on two previous preprints [@Plfin] and [@Pnun]. Coalgebras associated to (what we call) locally finite $k$-linear categories, together with comodules and contramodules over such coalgebras, were studied in [@Plfin]. Nonunital rings were discussed in [@Pnun], with an emphasis on the tensor/Hom formalism for nonunital rings and other preparatory results for the present paper. Much of the content of [@Pnun] is covered by the much earlier (1996) unpublished manuscript of Quillen [@Quil], which has somewhat different aims and emphasis. We refer either to [@Quil] or to [@Pnun], or to both whenever appropriate. ## Acknowledgement {#acknowledgement .unnumbered} This paper owes its existence to Catharina Stroppel, who kindly invited me to give a talk at her seminar in Bonn in July 2023, suggested the Temperley--Lieb (as well as Brauer) diagram category example, and explained to me the triangular decomposition of the Temperley--Lieb category. I wish to thank Jan Šťovíček for numerous helpful discussions, and in particular for suggesting the Reedy category/simplicial set example. The author is supported by the GAČR project 23-05148S and the Czech Academy of Sciences (RVO 67985840). # Preliminaries on Nonunital Algebras [\[prelim-nonunital-secn\]]{#prelim-nonunital-secn label="prelim-nonunital-secn"} Throughout this paper, $k$ denotes a fixed ground field. All *rings*, *algebras* and *modules* are associative, usually noncommutative, but *nonunital* by default. Still all *coalgebras*, *comodules*, and *contramodules* are coassociative/contraassociative and counital/contraunital, while all *semialgebras*, *semimodules*, and *semicontramodules* are semiassociative/semicontraassociative and semiunital/semicontraunital. We denote by $\mathsf{Ab}$ the category of abelian groups and by $k{\operatorname{\mathsf{--Vect}}}$ the category of $k$-vector spaces. The notation $A{\operatorname{\mathsf{--Mod}}}$ and ${\operatorname{\mathsf{Mod--}}}A$ stands for the abelian categories of *unital* left $A$-modules and right $A$-modules, respectively; so it presumes $A$ to be a unital ring. In all *bimodules* (unital or nonunital), the left and right actions of the field $k$ are presumed to agree. So the notation $A{\operatorname{\mathsf{--Bimod--}}}B$ presumes $A$ and $B$ to be unital $k$-algebras and stands for the category $A{\operatorname{\mathsf{--Bimod--}}}B=(A\otimes_k B^{\mathrm{op}}){\operatorname{\mathsf{--Mod}}}$. This section presents a discussion of nonunital $k$-algebras, mostly following the manuscript [@Quil] and the preprint [@Pnun].
The concepts related to coalgebras and semialgebras will be defined in the next section. ## Nonunital modules over $k$-algebras The preprint [@Pnun] is written in the generality of abstract rings. There is no ground field or ground ring in [@Pnun]. The exposition in the (much earlier) manuscript [@Quil] is more general, and makes room for a ground field or ground ring. One of the aims of the whole Section [\[prelim-nonunital-secn\]](#prelim-nonunital-secn){reference-type="ref" reference="prelim-nonunital-secn"} is to discuss, largely following [@Quil], the comparison between abstract nonunital rings and nonunital $k$-algebras. The point is that, when one views a nonunital $k$-algebra $K$ as an abstract nonunital ring, a $K$-module *need not* be a $k$-vector space in general: for example, the group/ring of integers $\mathbb Z$ is a $K$-module with the zero action of $K$. But all the $K$-modules we are interested in *turn out* to be $k$-vector spaces, as explained in the next Section [1.2](#prelim-t-unital-c-unital-subsecn){reference-type="ref" reference="prelim-t-unital-c-unital-subsecn"}. Given a nonunital ring $K$, the *unitalization* of $K$ is the unital ring $\widetilde K=\mathbb Z\oplus K$. So the ring of integers $\mathbb Z$ is a subring in $\widetilde K$ and $K$ is a two-sided ideal in $\widetilde K$. Given a nonunital $k$-algebra $K$, the *$k$-unitalization* of $K$ is the unital $k$-algebra $\widecheck K=k\oplus K$. So the $k$-algebra $k$ is a subalgebra in $\widecheck K$ and $K$ is a two-sided ideal in $\widecheck K$. There is a natural unital ring homomorphism $\widetilde K\longrightarrow \widecheck K$ acting by the identity on $K$. Both the unital $\widetilde K$-modules and the unital $\widecheck K$-modules can (and should) be simply thought of as nonunital $K$-modules. The difference is that the unital $\widecheck K$-modules are $k$-vector spaces (with the actions of $k$ and $K$ compatible in the obvious sense), while the unital $\widetilde K$-modules are just abelian groups [@Quil §1]. ## t-Unital and c-unital modules {#prelim-t-unital-c-unital-subsecn} Let $K$ be a nonunital ring. The notation $KM\subset M$ and $NK\subset N$ for a left $K$-module $M$ and a right $K$-module $N$ stands for the subgroups/submodules spanned by the products, as usual. The notation ${}_KM$ means the $K$-submodule consisting of all elements $m\in M$ such that $Km=0$ in $M$. The following lemma is due to Quillen [@Quil proof of Proposition 9.2]. **Lemma 1** ([@Quil]). *Let $K$ be a nonunital $k$-algebra, $M$ and $P$ be unital left $\widecheck K$-modules, and $N$ be a unital right $\widecheck K$-module. Then* *[(a)]{.upright} if either $NK=N$ or $KM=M$, then the natural surjective map of tensor products $$N\otimes_{\widetilde K}M\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax N\otimes_{\widecheck K}M$$ is an isomorphism;* *[(b)]{.upright} if either $KM=M$ or ${}_KP=0$, then the natural injective map of $\mathop{\mathrm{Hom}}$ groups/vector spaces $$\mathop{\mathrm{Hom}}_{\widecheck K}(M,P)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_{\widetilde K}(M,P)$$ is an isomorphism.* *Proof.* Part (a): consider the case when $KM=M$; so for every $m\in M$ there exist elements $r_1$, ..., $r_j\in K$ and $m_1$, ..., $m_j\in M$ such that $m=\sum_{i=1}^jr_im_i$ in $M$. Then, for all $n\in N$ and $\check r\in\widecheck K$ (e.
g, $\check r\in k$), one has $n\check r\otimes_{\widetilde K}m= \sum_{i=1}^j n\check r\otimes_{\widetilde K}r_im_i= \sum_{i=1}^jn\check rr_i\otimes_{\widetilde K}m_i= \sum_{i=1}^jn\otimes_{\widetilde K}\check rr_im_i= n\otimes_{\widetilde K}\check rm$ (since $\check rr_i\in K\subset\widetilde K$), as desired. Part (b): let $f\colon M\longrightarrow P$ be a $\widetilde K$-linear map. Assuming $M=KM$, for any $\check r\in\widecheck K$ and $m\in M$ we have $m=\sum_{i=1}^jr_im_i$ as above and $f(\check rm)= \sum_{i=1}^jf(\check rr_im_i)=\sum_{i=1}^j\check rr_if(m_i)= \sum_{i=1}^j\check rf(r_im_i)=\check rf(m)$. Assuming ${}_KP=0$, for any $r\in K$ we compute $rf(\check rm)=f(r\check rm)=r\check rf(m)$, hence $K(f(\check rm)-\check rf(m))=0$ and it follows that $f(\check rm)-\check rf(m)=0$ in $P$. ◻ Let $K$ be a nonunital ring. Following the terminology in [@Pnun Section 1], we say that a unital left $\widetilde K$-module $M$ is *t-unital* if the natural $\widetilde K$-module map $K\otimes_{\widetilde K}M\longrightarrow M$ is an isomorphism. Similarly, a unital right $\widetilde K$-module $N$ is *t-unital* if the natural $\widetilde K$-module map $N\otimes_{\widetilde K}K\longrightarrow N$ is an isomorphism. Dual-analogously, a unital left $\widetilde K$-module $P$ is said to be *c-unital* [@Pnun Section 3] if the natural $\widetilde K$-module map $P\longrightarrow\mathop{\mathrm{Hom}}_{\widetilde K}(K,P)$ is an isomorphism. Clearly, for any t-unital left $\widetilde K$-module $M$ one has $KM=M$, and for any t-unital right $\widetilde K$-module $N$ one has $NK=N$. Similarly, for any c-unital left $\widetilde K$-module $P$ one has ${}_KP=0$. **Lemma 2** ([@Quil Proposition 9.2]). *Let $K$ be a nonunital $k$-algebra. Then* *[(a)]{.upright} for any t-unital $\widetilde K$-module $M$, the $\widetilde K$-module structure on $M$ arises from a unique $\widecheck K$-module structure;* *[(b)]{.upright} for any c-unital $\widetilde K$-module $P$, the $\widetilde K$-module structure on $P$ arises from a unique $\widecheck K$-module structure.* *Proof.* Part (a): the isomorphism $K\otimes_{\widetilde K}M\simeq M$ endows $M$ with the left $\widecheck K$-module structure induced by the left $\widecheck K$-module structure on $K$. This proves the existence. The uniqueness holds for any $\widetilde K$-module $M$ such that $KM=M$ (apply the first case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(b) to $M$ and $P$ denoting two different $\widecheck K$-module structures on one and the same $\widetilde K$-module). Part (b): the isomorphism $P\simeq\mathop{\mathrm{Hom}}_{\widetilde K}(K,P)$ endows $P$ with the left $\widecheck K$-module structure induced by the right $\widecheck K$-module structure on $K$. This proves the existence. The uniqueness holds for any $\widetilde K$-module $P$ such that ${}_KP=0$ (apply the second case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(b) to $M$ and $P$ denoting two different $\widecheck K$-module structures on one and the same $\widetilde K$-module). ◻ **Lemma 3** ([@Quil Proposition 9.2]). *Let $K$ be a nonunital $k$-algebra. 
Then* *[(a)]{.upright} a unital left $\widecheck K$-module $M$ is t-unital as a $\widetilde K$-module if and only if the natural $\widecheck K$-module map $K\otimes_{\widecheck K}M\longrightarrow M$ is an isomorphism;* *[(b)]{.upright} a unital left $\widecheck K$-module $P$ is c-unital as a $\widetilde K$-module if and only if the natural $\widecheck K$-module map $P\longrightarrow\mathop{\mathrm{Hom}}_{\widecheck K}(K,P)$ is an isomorphism.* *Proof.* Part (a): any one of the two maps $K\otimes_{\widetilde K}M\longrightarrow M$ or $K\otimes_{\widecheck K}M\longrightarrow M$ being an isomorphism implies $KM=M$. Then it remains to apply the second case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(a) (for $N=K$). Part (b): any one of the two maps $P\longrightarrow\mathop{\mathrm{Hom}}_{\widetilde K}(K,P)$ or $P\longrightarrow\mathop{\mathrm{Hom}}_{\widecheck K}(K,P)$ being an isomorphism implies ${}_KP=0$. Then it remains to apply the second case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(b) (for $M=K$). ◻ Let $K$ be a nonunital $k$-algebra. In view of Lemmas [Lemma 2](#module-structure-extended-lemma){reference-type="ref" reference="module-structure-extended-lemma"}(a) and [Lemma 3](#t-c-unitality-equivalence-lemma){reference-type="ref" reference="t-c-unitality-equivalence-lemma"}(a), one can equivalently speak of "t-unital $\widetilde K$-modules" or "t-unital $\widecheck K$-modules". Hence we will simply call them *t-unital $K$-modules*. The category of t-unital left $K$-modules will be denoted by $K{\operatorname{\mathsf{--{}^t Mod}}}$, and the category of t-unital right $K$-modules by ${\operatorname{\mathsf{Mod^t--}}}K$. Similarly, in view of Lemmas [Lemma 2](#module-structure-extended-lemma){reference-type="ref" reference="module-structure-extended-lemma"}(b) and [Lemma 3](#t-c-unitality-equivalence-lemma){reference-type="ref" reference="t-c-unitality-equivalence-lemma"}(b), one can equivalently speak of "c-unital $\widetilde K$-modules" or "c-unital $\widecheck K$-modules". Hence we will simply call them *c-unital $K$-modules*. The category of c-unital left $K$-modules will be denoted by $K{\operatorname{\mathsf{--{}^c Mod}}}$. **Lemma 4**. *Let $K$ be a nonunital $k$-algebra and $N$ be a unital right $\widecheck K$-module. For any $k$-vector space $V$, we endow the vector space $\mathop{\mathrm{Hom}}_k(N,V)$ with the induced structure of unital left $\widecheck K$-module. Then the following conditions are equivalent:* 1. *$N$ is a t-unital right $K$-module;* 2. *$\mathop{\mathrm{Hom}}_k(N,k)$ is a c-unital left $K$-module;* 3. *$\mathop{\mathrm{Hom}}_k(N,V)$ is a c-unital left $K$-module for every $k$-vector space $V$.* *Proof.* (1) $\Longleftrightarrow$ (2) and (1) $\Longrightarrow$ (3) Similar to [@Pnun Lemma 8.17]. (3) $\Longrightarrow$ (2) Obvious. (2) $\Longrightarrow$ (3) By [@Quil Proposition 5.5] or [@Pnun Lemma 3.5], the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under kernels and direct products in $K{\operatorname{\mathsf{--{}^n Mod}}}$. Hence it is closed also under direct summands. The left $K$-module $\mathop{\mathrm{Hom}}_k(N,V)$ is a direct summand of a product of copies of the left $K$-module $\mathop{\mathrm{Hom}}_k(N,k)$ (since any vector space $V$ is a direct summand of a product of one-dimensional vector spaces). 
◻ A ring or $k$-algebra $K$ is called *t-unital* if it is t-unital as a left $K$-module, or equivalently, as a right $K$-module [@Pnun Section 1]. Clearly, any t-unital ring is *idempotent*, i. e., $K^2=K$. ## Null-modules and the abelian category equivalence {#null-modules-prelim-subsecn} We will say that a unital left $\widetilde K$-module $N$ is a *null-module* if $KN=0$. The full subcategory of null-modules will be denoted by $\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}\subset\widetilde K{\operatorname{\mathsf{--Mod}}}$. The full subcategory of (similarly defined) null-modules over $\widecheck K$ is denoted by $\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}\subset \widecheck K{\operatorname{\mathsf{--Mod}}}$. For an idempotent ring/$k$-algebra $K$, the full subcategory $\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}$ is closed under submodules, quotients, direct sums, direct products, and extensions in $\widetilde K{\operatorname{\mathsf{--Mod}}}$  [@Pnun Lemma 5.1]; so $\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}\subset \widetilde K{\operatorname{\mathsf{--Mod}}}$ and $\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}\subset\widecheck K{\operatorname{\mathsf{--Mod}}}$ are Serre subcategories. Hence one can form the abelian quotient categories $\widetilde K{\operatorname{\mathsf{--Mod}}}/\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}$ and $\widecheck K{\operatorname{\mathsf{--Mod}}}/\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}$. There are natural equivalences of categories $$\label{four-category-equivalence} K{\operatorname{\mathsf{--{}^t Mod}}}\simeq\widetilde K{\operatorname{\mathsf{--Mod}}}/\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}\simeq \widecheck K{\operatorname{\mathsf{--Mod}}}/\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}\simeq K{\operatorname{\mathsf{--{}^c Mod}}}$$ [@Quil Theorems 4.5 and 5.6, and Proposition 9.1], [@Pnun Theorem 5.8]. Thus the categories $K{\operatorname{\mathsf{--{}^t Mod}}}$ and $K{\operatorname{\mathsf{--{}^c Mod}}}$ are abelian. The equivalences of categories [\[four-category-equivalence\]](#four-category-equivalence){reference-type="eqref" reference="four-category-equivalence"} make the fully faithful inclusion functors $K{\operatorname{\mathsf{--{}^t Mod}}}\longrightarrow \widetilde K{\operatorname{\mathsf{--Mod}}}$ and $K{\operatorname{\mathsf{--{}^c Mod}}}\longrightarrow\widetilde K{\operatorname{\mathsf{--Mod}}}$ adjoint on the left and on the right, respectively, to the Serre quotient functor $\widetilde K{\operatorname{\mathsf{--Mod}}}\longrightarrow\widetilde K{\operatorname{\mathsf{--Mod}}}/\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}$. The same applies to the inclusion functors $K{\operatorname{\mathsf{--{}^t Mod}}}\longrightarrow\widecheck K{\operatorname{\mathsf{--Mod}}}$ and $K{\operatorname{\mathsf{--{}^c Mod}}}\longrightarrow\widecheck K{\operatorname{\mathsf{--Mod}}}$ (and the Serre quotient functor $\widecheck K{\operatorname{\mathsf{--Mod}}}\longrightarrow\widecheck K{\operatorname{\mathsf{--Mod}}}/\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}$). The full subcategory $K{\operatorname{\mathsf{--{}^t Mod}}}$ is closed under colimits and extensions in $\widetilde K{\operatorname{\mathsf{--Mod}}}$ or $\widecheck K{\operatorname{\mathsf{--Mod}}}$ [@Pnun Lemma 1.5], but it need not be closed under kernels [@Pnun Example 4.7]. 
Dual-analogously, the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under limits and extensions in $\widetilde K{\operatorname{\mathsf{--Mod}}}$ or $\widecheck K{\operatorname{\mathsf{--Mod}}}$ [@Pnun Lemma 3.5], but it need not be closed under cokernels [@Pnun Examples 4.2 and 4.8]. ## t-Flat, c-projective, and t-injective modules {#t-c-flat-proj-inj-prelim-subsecn} In this section we assume $K$ to be a t-unital $k$-algebra. A unital left $\widetilde K$-module $Q$ is called *c-projective* [@Pnun Definition 8.1] if the covariant $\mathop{\mathrm{Hom}}$ functor $\mathop{\mathrm{Hom}}_{\widetilde K}(Q,{-})$ takes cokernels *computed within the full subcategory* $K{\operatorname{\mathsf{--{}^c Mod}}}\subset \widetilde K{\operatorname{\mathsf{--Mod}}}$ to cokernels in $\mathsf{Ab}$. It is clear from the second case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(b) that a unital left $\widecheck K$-module $Q$ is c-projective (as a $\widetilde K$-module) if and only if the functor $\mathop{\mathrm{Hom}}_{\widecheck K}(Q,{-})$ takes cokernels computed within the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}\subset\widecheck K{\operatorname{\mathsf{--Mod}}}$ to cokernels in $k{\operatorname{\mathsf{--Vect}}}$. A unital $\widetilde K$-module is c-projective if and only if it represents a projective object in the quotient category $\widetilde K{\operatorname{\mathsf{--Mod}}}/\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}$  [@Pnun Remark 8.5]. By [@Pnun Corollary 8.6], a t-unital $K$-module is c-projective if and only if it is a projective unital $\widetilde K$-module. Similarly one proves that a t-unital $K$-module is c-projective if and only if it is a projective unital $\widecheck K$-module (just do the same argument over a field instead of over $\mathbb Z$). We arrive at the following corollary. **Corollary 5**. *Let $K$ be a t-unital $k$-algebra and $Q$ be a t-unital $K$-module. Then $Q$ is a projective $\widetilde K$-module if and only if it is a projective $\widecheck K$-module. ◻* A unital left $\widetilde K$-module $J$ is called *t-injective* [@Pnun Definition 8.7] if the contravariant $\mathop{\mathrm{Hom}}$ functor $\mathop{\mathrm{Hom}}_{\widetilde K}({-},J)$ takes kernels *computed within the full subcategory* $K{\operatorname{\mathsf{--{}^t Mod}}}\subset \widetilde K{\operatorname{\mathsf{--Mod}}}$ to cokernels in $\mathsf{Ab}$. It is clear from the first case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(b) that a unital left $\widecheck K$-module $J$ is t-injective (as a $\widetilde K$-module) if and only if the functor $\mathop{\mathrm{Hom}}_{\widecheck K}({-},J)$ takes kernels computed within the full subcategory $K{\operatorname{\mathsf{--{}^t Mod}}}\subset\widecheck K{\operatorname{\mathsf{--Mod}}}$ to cokernels in $k{\operatorname{\mathsf{--Vect}}}$. A unital $\widetilde K$-module is t-injective if and only if it represents an injective object in the quotient category $\widetilde K{\operatorname{\mathsf{--Mod}}}/\widetilde K{\operatorname{\mathsf{--{}^0 Mod}}}$  [@Pnun Remark 8.11]. By [@Pnun Corollary 8.12], a c-unital $K$-module is t-injective if and only if it is an injective unital $\widetilde K$-module. Similarly one proves that a c-unital $K$-module is t-injective if and only if it is an injective unital $\widecheck K$-module. 
We arrive at the following corollary, which is a particular case of [@Quil last assertion of Proposition 9.2]. **Corollary 6** ([@Quil]). *Let $K$ be a t-unital $k$-algebra and $J$ be a c-unital $K$-module. Then $J$ is an injective $\widetilde K$-module if and only if it is an injective $\widecheck K$-module. ◻* A unital left $\widetilde K$-module $F$ is called *t-flat* [@Pnun Definition 8.13] if the tensor product functor ${-}\otimes_{\widetilde K}F$ takes kernels *computed within the full subcategory* ${\operatorname{\mathsf{Mod^t--}}}K\subset{\operatorname{\mathsf{Mod--}}}\widetilde K$ to kernels in $\mathsf{Ab}$. It is clear from the first case of Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}(a) that a unital left $\widecheck K$-module $F$ is t-flat (as a $\widetilde K$-module) if and only if the tensor product functor ${-}\otimes_{\widecheck K}F$ takes kernels computed within the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K\subset{\operatorname{\mathsf{Mod--}}}\widecheck K$ to kernels in $k{\operatorname{\mathsf{--Vect}}}$. By [@Pnun Corollary 8.20], a t-unital $K$-module is t-flat if and only if it is a flat unital $\widetilde K$-module. Similarly one proves that a t-unital $K$-module is t-flat if and only if it is a flat unital $\widecheck K$-module. We arrive at the next corollary, which is also a particular case of [@Quil last assertion of Proposition 9.2]. **Corollary 7** ([@Quil]). *Let $K$ be a t-unital $k$-algebra and $F$ be a t-unital $K$-module. Then $F$ is a flat $\widetilde K$-module if and only if it is a flat $\widecheck K$-module. ◻* **Proposition 8**. *Let $K$ be a t-unital $k$-algebra. Then the following conditions are equivalent:* 1. *$K$ is a flat right $\widetilde K$-module;* 2. *$K$ is a flat right $\widecheck K$-module;* 3. *the full subcategory $K{\operatorname{\mathsf{--{}^t Mod}}}$ is closed under kernels in $\widetilde K{\operatorname{\mathsf{--Mod}}}$;* 4. *the full subcategory $K{\operatorname{\mathsf{--{}^t Mod}}}$ is closed under kernels in $\widecheck K{\operatorname{\mathsf{--Mod}}}$.* *Proof.* (1) $\Longleftrightarrow$ (2) is a particular case of the Wodzicki--Quillen theorem [@Quil Corollary 9.5]. It is also a particular case of Corollary [Corollary 7](#flatness-over-tilde-check-equivalence){reference-type="ref" reference="flatness-over-tilde-check-equivalence"}. (1) $\Longleftrightarrow$ (3) is [@Pnun Corollary 7.4]. The implication (1) $\Longrightarrow$ (3) does not depend on the assumption of t-unitality of $K$  [@Pnun Remark 1.6]. (2) $\Longleftrightarrow$ (4) is similar to (1) $\Longleftrightarrow$ (3). The implication (2) $\Longrightarrow$ (4) does not depend on the assumption of t-unitality of $K$. (3) $\Longleftrightarrow$ (4) is straightforward and also does not depend on the assumption of t-unitality of $K$. ◻ **Proposition 9**. *Let $K$ be a t-unital $k$-algebra. Then the following conditions are equivalent:* 1. *$K$ is a projective left $\widetilde K$-module;* 2. *$K$ is a projective left $\widecheck K$-module;* 3. *the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under cokernels in $\widetilde K{\operatorname{\mathsf{--Mod}}}$;* 4. 
*the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under cokernels in $\widecheck K{\operatorname{\mathsf{--Mod}}}$.* *Proof.* (1) $\Longleftrightarrow$ (2) is a particular case of Corollary [Corollary 5](#projectivity-over-tilde-check-equivalence){reference-type="ref" reference="projectivity-over-tilde-check-equivalence"}. (1) $\Longleftrightarrow$ (3) is [@Pnun Corollary 7.6]. The implication (1) $\Longrightarrow$ (3) does not depend on the assumption of t-unitality of $K$  [@Pnun Remark 3.6]. (2) $\Longleftrightarrow$ (4) is similar to (1) $\Longleftrightarrow$ (3). The implication (2) $\Longrightarrow$ (4) does not depend on the assumption of t-unitality of $K$. (3) $\Longleftrightarrow$ (4) is straightforward and also does not depend on the assumption of t-unitality of $K$. ◻ ## nt-Flat and nc-projective modules {#nt-nc-flat-proj-prelim-subsecn} In this section $K$ is a nonunital $k$-algebra. The assertions involving t-flatness or c-projectivity will presume $K$ to be t-unital. Let us say that a $\widetilde K$-module $Q$ is *nc-projective* if the covariant $\mathop{\mathrm{Hom}}$ functor $\mathop{\mathrm{Hom}}_{\widetilde K}(Q,{-})$ preserves right exactness of all right exact sequences $L\longrightarrow M\longrightarrow E \longrightarrow 0$ in $\widetilde K{\operatorname{\mathsf{--Mod}}}$ with $L$, $M\in K{\operatorname{\mathsf{--{}^c Mod}}}$. It is clear that any such right exact sequence $L\longrightarrow M\longrightarrow E\longrightarrow 0$ is actually a right exact sequence in $\widecheck K{\operatorname{\mathsf{--Mod}}}$. By [@Pnun Lemma 8.3], a t-unital module $Q$ over a t-unital algebra $K$ is nc-projective if and only if it is c-projective. According to [@Pnun Corollary 8.6] and Corollary [Corollary 5](#projectivity-over-tilde-check-equivalence){reference-type="ref" reference="projectivity-over-tilde-check-equivalence"}, this holds if and only if $Q$ is a projective $\widetilde K$-module, or equivalently, a projective $\widecheck K$-module. In particular, the left $K$-module $K$ is nc-projective if and only if it is projective as a left $\widecheck K$-module. On the other hand, it is obvious from the definition that any projective $\widetilde K$-module is nc-projective, and any projective $\widecheck K$-module is nc-projective; while such modules *need not* be c-projective in general [@Pnun Remarks 8.2(1--2)]. When the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under cokernels in $\widetilde K{\operatorname{\mathsf{--Mod}}}$ or $\widecheck K{\operatorname{\mathsf{--Mod}}}$, the classes of c-projective and nc-projective left $K$-modules coincide. **Lemma 10**. *Let $B\in\widecheck K{\operatorname{\mathsf{--Bimod--}}}\widecheck K$ be a $K$-$K$-bimodule and $F$ be a left $\widecheck K$-module. Assume that both the left $K$-modules $B$ and $F$ are nc-projective, and the right $K$-module $B$ is t-unital. Then the tensor product $B\otimes_{\widecheck K}F$ is an nc-projective left $K$-module.* *Proof.* The assertion follows from the natural isomorphism $\mathop{\mathrm{Hom}}_K(B\otimes_KF,\>{-})\simeq\mathop{\mathrm{Hom}}_K(F,\mathop{\mathrm{Hom}}_K(B,{-}))$ together with the fact that the functor $\mathop{\mathrm{Hom}}_K(B,{-})$ takes all left $K$-modules to c-unital ones [@Pnun Lemma 3.2]. 
◻ Similarly, we will say that a left $\widetilde K$-module $F$ is *nt-flat* if the tensor product functor ${-}\otimes_{\widetilde K}F$ preserves left exactness of all left exact sequences $0\longrightarrow N \longrightarrow L\longrightarrow M$ in ${\operatorname{\mathsf{Mod--}}}\widetilde K$ with $L$, $M\in{\operatorname{\mathsf{Mod^t--}}}K$. It is clear that any such left exact sequence $0\longrightarrow N\longrightarrow L\longrightarrow M$ is actually a left exact sequence in ${\operatorname{\mathsf{Mod--}}}\widecheck K$. By [@Pnun Lemma 8.15], a t-unital module $F$ over a t-unital algebra $K$ is nt-flat if and only if it is t-flat. According to [@Pnun Corollary 8.20] and Corollary [Corollary 7](#flatness-over-tilde-check-equivalence){reference-type="ref" reference="flatness-over-tilde-check-equivalence"}, this holds if and only if $F$ is a flat $\widetilde K$-module, or equivalently, a flat $\widecheck K$-module. In particular, the left $K$-module $K$ is nt-flat if and only if it is flat as a left $\widecheck K$-module. On the other hand, it is obvious from the definition that any flat $\widetilde K$-module is nt-flat, and any flat $\widecheck K$-module is nt-flat; while such modules *need not* be t-flat in general [@Pnun Remarks 8.14(1--2)]. When the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K$ is closed under kernels in ${\operatorname{\mathsf{Mod--}}}\widetilde K$ or ${\operatorname{\mathsf{Mod--}}}\widecheck K$, the classes of t-flat and nt-flat left $K$-modules coincide. **Lemma 11**. *Let $B\in\widecheck K{\operatorname{\mathsf{--Bimod--}}}\widecheck K$ be a $K$-$K$-bimodule and $F$ be a left $\widecheck K$-module. Assume that both the left $K$-modules $B$ and $F$ are nt-flat, and the right $K$-module $B$ is t-unital. Then the tensor product $B\otimes_{\widecheck K}F$ is an nt-flat left $K$-module.* *Proof.* The assertion follows from the natural isomorphism ${-}\otimes_K(B\otimes_KF) \simeq({-}\otimes_K\nobreak B)\otimes_KF$ together with the fact that the functor ${-}\otimes_KB$ takes all right $K$-modules to t-unital ones [@Pnun Lemma 1.2(b)]. ◻ In the context of the following proposition, the assumption that $K$ is a $k$-algebra is irrelevant; so $K$ can be any nonunital ring. For any right $K$-module $N$, we denote by $N^+=\mathop{\mathrm{Hom}}_\mathbb Z(N, \mathbb Q/\mathbb Z)$ the character dual left $K$-module. **Proposition 12**. *For any nonunital ring $K$, any nc-projective $K$-module is nt-flat.* *Proof.* The argument is similar to (but simpler than) [@Pnun first proof of Proposition 8.21]. Let $Q$ be an nc-projective left $K$-module and $0\longrightarrow N \longrightarrow L\longrightarrow M$ be a left exact sequence in ${\operatorname{\mathsf{Mod--}}}\widetilde K$ with $L$, $M\in{\operatorname{\mathsf{Mod^t--}}}K$. Then $M^+\longrightarrow L^+\longrightarrow N^+\longrightarrow 0$ is a right exact sequence in $\widetilde K{\operatorname{\mathsf{--Mod}}}$ with $L^+$, $M^+\in K{\operatorname{\mathsf{--{}^c Mod}}}$ (by [@Pnun Lemma 8.17]). Now the sequence of abelian groups $\mathop{\mathrm{Hom}}_{\widetilde K}(Q,M^+)\longrightarrow \mathop{\mathrm{Hom}}_{\widetilde K}(Q,L^+)\longrightarrow\mathop{\mathrm{Hom}}_{\widetilde K}(Q,N^+)\longrightarrow 0$ can be obtained by applying the functor $\mathop{\mathrm{Hom}}_\mathbb Z({-},\mathbb Q/\mathbb Z)$ to the sequence of abelian groups $0\longrightarrow N\otimes_{\widetilde K}Q\longrightarrow L\otimes_{\widetilde K}Q\longrightarrow M\otimes_{\widetilde K}Q$. 
The former sequence is right exact by assumption, so it follows that the latter one is left exact. ◻ ## Bimodules Let $K$ be a (nonunital) $k$-algebra. As mentioned in the beginning of Section [\[prelim-nonunital-secn\]](#prelim-nonunital-secn){reference-type="ref" reference="prelim-nonunital-secn"}, by a *unital $\widecheck K$-$\widecheck K$-bimodule* we mean a $k$-vector space endowed with commuting structures of a left unital $\widecheck K$-module and a right unital $\widecheck K$-module. In other words, the left and right actions of $k$ in bimodules are presumed to agree. We will say that a $\widecheck K$-$\widecheck K$-bimodule is *t-unital* if it is t-unital both as a left and as a right module. We will denote the full subcategory of t-unital $\widecheck K$-$\widecheck K$-bimodules by $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K\subset\widecheck K{\operatorname{\mathsf{--Bimod--}}}\widecheck K$ and call its objects simply "t-unital $K$-$K$-bimodules". **Proposition 13**. *Let $K$ be a t-unital $k$-algebra. Then* *[(a)]{.upright} the additive category of t-unital $K$-$K$-bimodules $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ is an associative and unital monoidal category, with the unit object $K\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$, with respect to the tensor product operation $\otimes_{\widecheck K}$;* *[(b)]{.upright} the additive category of t-unital left $K$-modules $K{\operatorname{\mathsf{--{}^t Mod}}}$ is an associative and unital left module category over $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$, and the additive category of t-unital right $K$-modules ${\operatorname{\mathsf{Mod^t--}}}K$ is an associative and unital right module category over $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$, with respect to the tensor product operation $\otimes_{\widecheck K}$;* *[(c)]{.upright} the opposite category $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ to the additive category of c-unital left $K$-modules $K{\operatorname{\mathsf{--{}^c Mod}}}$ is an associative and unital right module category over the monoidal category $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$, with respect to the Hom operation $$P^{\mathsf{op}}*B=\mathop{\mathrm{Hom}}_{\widecheck K}(B,P)^{\mathsf{op}} \quad\text{for all $P\in K{\operatorname{\mathsf{--{}^c Mod}}}$ and $B\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$}.$$* *Proof.* This is essentially [@Pnun Corollaries 1.4 and 3.4]. The restriction that the left and right actions of $k$ in the bimodules agree (which is imposed here, but not in [@Pnun]) does not affect the validity of the assertions. ◻ ## s-Unital rings and modules The following definitions and a part of the results are due to Tominaga [@Tom]; we refer to the survey [@Nys] and the preprint [@Pnun Section 2] for additional discussion. A left module $M$ over a (nonunital) ring $K$ is said to be *s-unital* if for every $m\in M$ there exists $e\in K$ such that $em=m$ in $M$. Similarly, a right $K$-module $N$ is said to be *s-unital* if for every $n\in N$ there exists $e\in K$ such that $ne=n$ in $N$. We will denote the full subcategory of s-unital left $K$-modules by $K{\operatorname{\mathsf{--{}^s Mod}}}\subset\widetilde K{\operatorname{\mathsf{--Mod}}}$ and the full subcategory of s-unital right $K$-modules by ${\operatorname{\mathsf{Mod^s--}}}K\subset{\operatorname{\mathsf{Mod--}}}\widetilde K$. **Proposition 14**. 
*For any (nonunital) ring $K$, the full subcategory of s-unital $K$-modules $K{\operatorname{\mathsf{--{}^s Mod}}}$ is a hereditary torsion class in the abelian category $\widetilde K{\operatorname{\mathsf{--Mod}}}$ of nonunital $K$-modules. In other words, the full subcategory $K{\operatorname{\mathsf{--{}^s Mod}}}$ is closed under submodules, quotients, extensions, and infinite direct sums in $\widetilde K{\operatorname{\mathsf{--Mod}}}$.* *Proof.* This is [@Pnun Proposition 2.2]. ◻ **Proposition 15** ([@Tom]). *Let $K$ be a (nonunital) ring and $M$ be an s-unital left $K$-module. Then, for any finite collection of elements $m_1$, ..., $m_n\in M$ there exists an element $e\in K$ such that $em_i=m_i$ for all $1\le i\le n$.* *Proof.* This is [@Tom Theorem 1]; see also [@Nys Proposition 2.8 of the published version or Proposition 8 of the `arXiv` version] or [@Pnun Corollary 2.3]. ◻ A ring (or $k$-algebra) $K$ is said to be *left s-unital* if it is an s-unital left $K$-module, and *right s-unital* if it is an s-unital right $K$-module. The ring $K$ is called *s-unital* if it is both left and right s-unital. **Proposition 16**. *[(a)]{.upright} Any left s-unital ring is t-unital. Similarly, any right s-unital ring is t-unital.* *[(b)]{.upright} Over a left s-unital ring, a left module is t-unital if and only if it is s-unital. Over a right s-unital ring, a right module is t-unital if and only if it is s-unital.* *Proof.* This result is due to Tominaga [@Tom Remarks (1--2) in Section 1 on p. 121--122]. See [@Pnun Corollaries 2.7 and 2.9] for the details. ◻ **Corollary 17**. *Let $K$ be a left s-unital ring. In this context:* *[(a)]{.upright} The full subcategory of t-unital left $K$-modules $K{\operatorname{\mathsf{--{}^t Mod}}}$ is closed under kernels in $\widetilde K{\operatorname{\mathsf{--Mod}}}$. If $K$ is a $k$-algebra, then $K{\operatorname{\mathsf{--{}^t Mod}}}$ is also closed under kernels in $\widecheck K{\operatorname{\mathsf{--Mod}}}$.* *[(b)]{.upright} The right $\widetilde K$-module $K$ is flat. If $K$ is a $k$-algebra, then the right $\widecheck K$-module $K$ is also flat.* *Proof.* The first assertion of part (a) follows by comparing Propositions [Proposition 14](#s-unital-modules-hereditary-torsion){reference-type="ref" reference="s-unital-modules-hereditary-torsion"} and [Proposition 16](#s-unital-implies-t-unital){reference-type="ref" reference="s-unital-implies-t-unital"}(b). All the other assertions are then provided by Proposition [Proposition 8](#t-unital-kernels-flat-ring){reference-type="ref" reference="t-unital-kernels-flat-ring"}. ◻ # Preliminaries on Semialgebras over Coalgebras As mentioned in the beginning of Section [\[prelim-nonunital-secn\]](#prelim-nonunital-secn){reference-type="ref" reference="prelim-nonunital-secn"}, all *coalgebras* in this paper are (coassociative and) counital; all comodules are counital and contramodules are contraunital. Furthermore, all *semialgebras* are (semiassociative and) semiunital; all semimodules are semiunital and semicontramodules are semicontraunital. The definitions of semialgebras, semimodules, and semicontramodules go back to the book [@Psemi], which is written at the level of generality of "three-story towers": a semialgebra over a coring over a (noncommutative, nonsemisimple, associative, unital) ring. Hence in the exposition of [@Psemi] it was necessary to deal with such issues as the nonassociativity of the cotensor product of comodules over corings, etc. 
The related definitions and discussion are presented in [@Psemi Chapters 1 and 3]. In this paper, we avoid many complications by restricting ourselves to coalgebras over a field and semialgebras over such coalgebras. An introductory discussion of this context can be found in [@Psemi Section 0.3] and [@Prev Sections 2.5--2.6]. ## Coalgebras, comodules, and contramodules Before passing to semialgebras, let us say a few words about coalgebras. Very briefly, a (coassociative, counital) *coalgebra* $\mathcal C$ is a $k$-vector space endowed with $k$-linear maps of *comultiplication* $\mu\colon\mathcal C\longrightarrow\mathcal C\otimes_k\mathcal C$ and *counit* $\epsilon\colon\mathcal C\longrightarrow k$ satisfying the usual coassociativity and counitality axioms obtainable by inverting the arrows in the definition of an associative, unital algebra written down in the tensor notation. A (coassociative, counital) *left comodule* $\mathcal M$ over $\mathcal C$ is a $k$-vector space endowed with a $k$-linear *left coaction* map $\nu\colon\mathcal M\longrightarrow\mathcal C\otimes_k\mathcal M$ satisfying the usual coassociativity and counitality axioms. A *right comodule* $\mathcal N$ over $\mathcal C$ is a $k$-vector space endowed with a *right coaction* map $\nu\colon\mathcal N\longrightarrow\mathcal N\otimes_k\mathcal C$ satisfying the similar axioms. The definition of a *left contramodule* over $\mathcal C$ is less familiar, so let us spell it out in a little more detail. A left $\mathcal C$-contramodule $\mathfrak P$ is a $k$-vector space endowed with a $k$-linear *left contraaction* map $\pi\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P) \longrightarrow\mathfrak P$ satisfying the following *contraassociativity* and *contraunitality* axioms. Firstly (contraassociativity), the two compositions $$\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal C,\>\mathfrak P)\simeq\mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)) \rightrightarrows\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$$ of the two maps $$\mathop{\mathrm{Hom}}_k(\mu,\mathfrak P)\colon\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal C,\>\mathfrak P)\longrightarrow \mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)$$ and $$\mathop{\mathrm{Hom}}_k(\mathcal C,\pi)\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P))\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)$$ with the contraaction map $\pi\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$ must be equal to each other. Here the presumed identification $\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal C,\>\mathfrak P)\simeq \mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P))$ is obtained as a particular case of the natural isomorphism $\mathop{\mathrm{Hom}}_k(U\otimes_k V,\>W)\simeq \mathop{\mathrm{Hom}}_k(V,\mathop{\mathrm{Hom}}_k(U,W))$ for all $k$-vector spaces $U$, $V$, and $W$. Secondly (contraunitality), the composition $$\mathfrak P\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$$ of the map $\mathop{\mathrm{Hom}}_k(\epsilon,\mathfrak P)\colon\mathfrak P\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)$ with the contraaction map $\pi\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\allowbreak\mathfrak P) \longrightarrow\mathfrak P$ must be equal to the identity map $\mathrm{id}_\mathfrak P$. 
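It may be helpful to keep in mind the following standard special case, recorded here only for orientation and not needed in the sequel. Assume that the coalgebra $\mathcal C$ is finite-dimensional over $k$. Then for any $k$-vector space $\mathfrak P$ there is a natural isomorphism $$\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\simeq\mathcal C^*\otimes_k\mathfrak P, \qquad\text{where}\quad \mathcal C^*=\mathop{\mathrm{Hom}}_k(\mathcal C,k),$$ and under this identification the contraassociativity and contraunitality axioms for a contraaction map $\pi$ become the usual associativity and unitality axioms for an action map $\mathcal C^*\otimes_k\mathfrak P\longrightarrow\mathfrak P$. So, for a finite-dimensional coalgebra $\mathcal C$, left $\mathcal C$-contramodules are simply left modules over the dual algebra $\mathcal C^*$ (with the convention for the multiplication in $\mathcal C^*$ discussed in Section [\[flat-integrable-bimodules-secn\]](#flat-integrable-bimodules-secn){reference-type="ref" reference="flat-integrable-bimodules-secn"} below); the notion of a contramodule differs from that of a comodule or a module only when $\mathcal C$ is infinite-dimensional.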
The definition of a *right contramodule* over $\mathcal C$ is very similar to that of a left contramodule. The only difference is that, in the case of right contramodules $\mathfrak Q$, the identification $\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal C,\>\mathfrak Q)\simeq \mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak Q))$ obtained as a particular case of the natural isomorphism $\mathop{\mathrm{Hom}}_k(U\otimes_k V,\>W)\simeq \mathop{\mathrm{Hom}}_k(U,\mathop{\mathrm{Hom}}_k(V,W))$ for all $k$-vector spaces $U$, $V$, and $W$ is used in the contraassociativity axiom. For any right $\mathcal C$-comodule $\mathcal N$ and any $k$-vector space $V$, the vector space $\mathfrak P=\mathop{\mathrm{Hom}}_k(\mathcal N,V)$ has a natural left $\mathcal C$-contramodule structure. The contraaction map $\pi\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$ is constructed as the composition $$\mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal N,V))\simeq\mathop{\mathrm{Hom}}_k(\mathcal N\otimes_k\mathcal C,\>V)\longrightarrow \mathop{\mathrm{Hom}}_k(\mathcal N,V)$$ of the natural isomorphism of vector spaces $\mathop{\mathrm{Hom}}_k(\mathcal C,\mathop{\mathrm{Hom}}_k(\mathcal N,V)) \simeq\mathop{\mathrm{Hom}}_k(\mathcal N\otimes_k\mathcal C,\allowbreak\>V)$ with the map $\mathop{\mathrm{Hom}}_k(\nu,V)\colon \mathop{\mathrm{Hom}}_k(\mathcal N\otimes_k\mathcal C,\>V)\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal N,V)$ induced by the coaction map $\nu\colon\mathcal N\longrightarrow\mathcal N\otimes_k\mathcal C$. We will denote the $k$-linear category of left $\mathcal C$-comodules by $\mathcal C{\operatorname{\mathsf{--Comod}}}$, the $k$-linear category of right $\mathcal C$-comodules by ${\operatorname{\mathsf{Comod--}}}\mathcal C$, the $k$-linear category of left $\mathcal C$-contramodules by $\mathcal C{\operatorname{\mathsf{--Contra}}}$, and the $k$-linear category of right $\mathcal C$-contramodules by ${\operatorname{\mathsf{Contra--}}}\mathcal C$. The $k$-vector space of morphisms $\mathcal L\longrightarrow\mathcal M$ in the category $\mathcal C{\operatorname{\mathsf{--Comod}}}$ is denoted by $\mathop{\mathrm{Hom}}_\mathcal C(\mathcal L,\mathcal M)$, while the $k$-vector space of morphisms $\mathfrak P\longrightarrow\mathfrak Q$ in the category $\mathcal C{\operatorname{\mathsf{--Contra}}}$ is denoted by $\mathop{\mathrm{Hom}}^\mathcal C(\mathfrak P,\mathfrak Q)$. We refer to the book [@Swe], the introductory section and appendix [@Psemi Section 0.2 and Appendix A], the surveys [@Prev Section 1] and [@Pksurv Sections 3 and 8], and the preprints [@Phff Section 1], [@Plfin Section 1] for a further discussion of coalgebras over fields together with comodules and contramodules over them. ## Injective comodules and projective contramodules Let $\mathcal C$ be a coalgebra over $k$. Then the category of left $\mathcal C$-comodules $\mathcal C{\operatorname{\mathsf{--Comod}}}$ is a locally finite Grothendieck abelian category. The forgetful functor $\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact and preserves coproducts (but usually does *not* preserve products; and the products are *not* exact in $\mathcal C{\operatorname{\mathsf{--Comod}}}$ in general). In particular, there are enough injective objects in $\mathcal C{\operatorname{\mathsf{--Comod}}}$ (but usually *no* projectives). 
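The following standard example, included only for orientation and not used in the sequel, illustrates the parenthetical claims above. Let $\mathcal C$ be the coalgebra with basis $\{x^n: n\ge0\}$ and with the comultiplication and counit $$\mu(x^n)=\sum_{i+j=n}x^i\otimes x^j, \qquad \epsilon(x^n)=\delta_{n,0}.$$ Then the dual algebra $\mathcal C^*=\mathop{\mathrm{Hom}}_k(\mathcal C,k)$ is the algebra of formal power series $k[[t]]$, and a left $\mathcal C$-comodule is the same thing as a $k[t]$-module on which $t$ acts locally nilpotently. In this category the one-dimensional comodule $k=k[t]/(t)$ is not projective: any comodule morphism $k\longrightarrow k[t]/(t^2)$ has its image contained in $t\,k[t]/(t^2)$, so the identity map of $k$ does not lift along the surjection $k[t]/(t^2)\longrightarrow k[t]/(t)$.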
Left $\mathcal C$-comodules of the form $\mathcal C\otimes_kV$, where $V\in k{\operatorname{\mathsf{--Vect}}}$ ranges over the $k$-vector spaces, are called the *cofree* left $\mathcal C$-comodules. Similarly, the cofree right $\mathcal C$-comodules are the ones of the form $V\otimes_k\mathcal C$, where $V\in k{\operatorname{\mathsf{--Vect}}}$. For any left $\mathcal C$-comodule $\mathcal L$, there is a natural isomorphism of $k$-vector spaces [@Psemi Section 1.1.2] $$\mathop{\mathrm{Hom}}_\mathcal C(\mathcal L,\>\mathcal C\otimes_kV)\simeq\mathop{\mathrm{Hom}}_k(\mathcal L,V).$$ Hence the cofree $\mathcal C$-comodules are injective. A $\mathcal C$-comodule is injective if and only if it is a direct summand of a cofree one. The category of left $\mathcal C$-contramodules $\mathcal C{\operatorname{\mathsf{--Contra}}}$ is a locally presentable abelian category with enough projective objects. The forgetful functor $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact and preserves products (but usually does *not* preserve coproducts; and the coproducts are *not* exact in $\mathcal C{\operatorname{\mathsf{--Contra}}}$ in general). Left $\mathcal C$-contramodules of the form $\mathop{\mathrm{Hom}}_k(\mathcal C,V)$ (with the $\mathcal C$-contramodule structure arising from the right $\mathcal C$-comodule structure on $\mathcal C$) are called the *free* left $\mathcal C$-contramodules. For any left $\mathcal C$-contramodule $\mathfrak Q$, there is a natural isomorphism of $k$-vector spaces [@Psemi Section 3.1.2] $$\mathop{\mathrm{Hom}}^\mathcal C(\mathop{\mathrm{Hom}}_k(\mathcal C,V),\mathfrak Q)\simeq\mathop{\mathrm{Hom}}_k(V,\mathfrak Q).$$ Hence the free $\mathcal C$-contramodules are projective. A $\mathcal C$-contramodule is projective if and only if it is a direct summand of a free one. ## Bicomodules Let $\mathcal C$ and $\mathcal D$ be two (coassociative, counital) coalgebras over $k$. A *$\mathcal C$-$\mathcal D$-bicomodule* $\mathcal B$ is a $k$-vector space endowed with a left $\mathcal C$-comodule structure and a right $\mathcal D$-comodule structure commuting with each other. The latter condition means that the square diagram of coaction maps $\mathcal B\longrightarrow\mathcal C\otimes_k\mathcal B\longrightarrow\mathcal C\otimes_k\mathcal B\otimes_k\mathcal D$,  $\mathcal B\longrightarrow\mathcal B\otimes_k\mathcal D\longrightarrow\mathcal C\otimes_k\mathcal B\otimes_k\mathcal D$ is commutative. Equivalently, a $\mathcal C$-$\mathcal D$-bicomodule can be defined as a $k$-vector space $\mathcal B$ endowed with a *bicoaction* map $\mathcal B\longrightarrow\mathcal C\otimes_k\mathcal B \otimes_k\mathcal D$ satisfying the coassociativity and counitality axioms. We will denote the abelian category of $\mathcal C$-$\mathcal D$-bicomodules by $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal D$. ## Cotensor product and cohomomorphisms Let $\mathcal N$ be a right $\mathcal C$-comodule and $\mathcal M$ be a left $\mathcal C$-comodule. 
The *cotensor product* $\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal M$ is a $k$-vector space constructed as the kernel of (the difference of) the natural pair of maps $$\mathcal N\otimes_k\mathcal M\rightrightarrows\mathcal N\otimes_k\mathcal C\otimes_k\mathcal M$$ one of which is induced by the right coaction map $\nu_\mathcal N\colon\mathcal N\longrightarrow\mathcal N\otimes_k\mathcal C$ and the other one by the left coaction map $\nu_\mathcal M\colon\mathcal M\longrightarrow\mathcal C\otimes_k\mathcal M$. Given three coalgebras $\mathcal C$, $\mathcal D$, $\mathcal E$, a $\mathcal C$-$\mathcal D$-bicomodule $\mathcal N$, and a $\mathcal D$-$\mathcal E$-bicomodule $\mathcal M$, the cotensor product $\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M$ is naturally endowed with a $\mathcal C$-$\mathcal E$-bicomodule structure. For any left $\mathcal C$-comodule $\mathcal M$ and right $\mathcal C$-comodule $\mathcal N$, there are natural cotensor product unitality isomorphisms of vector spaces (or left/right $\mathcal C$-comodules) $$\label{cotensor-unitality} \mathcal C\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal M\simeq\mathcal M\quad\text{and}\quad \mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal C\simeq\mathcal N$$ [@Psemi Section 1.2.1]. For any right $\mathcal C$-comodule $\mathcal N$, any $\mathcal C$-$\mathcal D$-bicomodule $\mathcal B$, and any left $\mathcal D$-comodule $\mathcal M$, there is a natural associativity isomorphism of $k$-vector spaces $$\label{cotensor-associativity} (\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal B)\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M\simeq \mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M).$$ In fact, both $(\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal B)\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M$ and $\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M)$ are one and the same vector subspace in the vector space $\mathcal N\otimes_k\mathcal B\otimes_k\mathcal M$. Let $\mathcal M$ be a left $\mathcal C$-comodule and $\mathfrak P$ be a left $\mathcal C$-contramodule. The $k$-vector space of *cohomomorphisms* $\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal M,\mathfrak P)$ is constructed as the cokernel of (the difference of) the natural pair of maps $$\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal M,\>\mathfrak P)\simeq\mathop{\mathrm{Hom}}_k(\mathcal M,\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)) \rightrightarrows\mathop{\mathrm{Hom}}_k(\mathcal M,\mathfrak P)$$ one of which is induced by the left coaction map $\nu_\mathcal M\colon\mathcal M\longrightarrow\mathcal C\otimes_k\mathcal M$ and the other one by the left contraaction map $\pi_\mathfrak P\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$. Given two coalgebras $\mathcal C$ and $\mathcal D$, a $\mathcal C$-$\mathcal D$-bicomodule $\mathcal B$, and a left $\mathcal C$-contramodule $\mathfrak P$, the vector space of cohomomorphisms $\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B,\mathfrak P)$ is naturally endowed with a left $\mathcal D$-contramodule structure arising from the right $\mathcal D$-comodule structure on $\mathcal B$. 
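As a sanity check on these constructions, let us record a standard computation, which can be verified directly on the defining kernel and cokernel diagrams: for cofree comodules, the cotensor product and the cohomomorphisms reduce to tensor products and $\mathop{\mathrm{Hom}}$ spaces over $k$, just as $N\otimes_A(A\otimes_kV)\simeq N\otimes_kV$ for a unital $k$-algebra $A$. Namely, for any right $\mathcal C$-comodule $\mathcal N$, any $k$-vector space $V$, and any left $\mathcal C$-contramodule $\mathfrak P$, there are natural isomorphisms $$\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_kV)\simeq\mathcal N\otimes_kV \qquad\text{and}\qquad \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C\otimes_kV,\>\mathfrak P)\simeq\mathop{\mathrm{Hom}}_k(V,\mathfrak P).$$ Alternatively, both isomorphisms follow from the unitality and associativity isomorphisms for the cotensor product above and for the $\mathop{\mathrm{Cohom}}$ below, applied with $\mathcal D=k$. The bicomodule version of the first isomorphism will be used again below.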
For any left $\mathcal C$-contramodule $\mathfrak P$, there is a natural $\mathop{\mathrm{Cohom}}$ unitality isomorphism of vector spaces (or left $\mathcal C$-contramodules) $$\label{cohom-unitality} \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\mathfrak P)\simeq\mathfrak P$$ [@Psemi Section 3.2.1]. For any left $\mathcal D$-comodule $\mathcal M$, any $\mathcal C$-$\mathcal D$-bicomodule $\mathcal B$, and any left $\mathcal C$-contramodule $\mathfrak P$, there is a natural associativity isomorphism of $k$-vector spaces $$\label{cohom-associativity} \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M,\>\mathfrak P)\simeq\mathop{\mathrm{Cohom}}_\mathcal D(\mathcal M,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B,\mathfrak P)).$$ In fact, both $\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal M,\>\mathfrak P)$ and $\mathop{\mathrm{Cohom}}_\mathcal D(\mathcal M,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B,\mathfrak P))$ are naturally identified with the quotient space of the vector space $\mathop{\mathrm{Hom}}_k(\mathcal B\otimes_k\mathcal M,\>\mathfrak P)\simeq \mathop{\mathrm{Hom}}_k(\mathcal M,\mathop{\mathrm{Hom}}_k(\mathcal B,\mathfrak P))$ by one and the same vector subspace. ## Coflatness, coprojectivity, and coinjectivity {#adjusted-co-contra-mods-subsecn} Let $\mathcal C$ be a coalgebra over a field $k$. A left $\mathcal C$-comodule $\mathcal M$ is called *coflat* if the cotensor product functor ${-}\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal M\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact. A left $\mathcal C$-comodule $\mathcal M$ is called *coprojective* if the covariant $\mathop{\mathrm{Cohom}}$ functor $\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal M,{-})\colon\mathcal C{\operatorname{\mathsf{--Contra}}} \longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact. A left $\mathcal C$-contramodule $\mathfrak P$ is called *coinjective* if the contravariant $\mathop{\mathrm{Cohom}}$ functor $\mathop{\mathrm{Cohom}}_\mathcal C({-},\mathfrak P)\colon \mathcal C{\operatorname{\mathsf{--Comod}}}^{\mathsf{op}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact. It is not difficult to show that a $\mathcal C$-comodule is coflat if and only if it is coprojective, and if and only if it is injective [@Prev Lemma 3.1(a)]. The assertion that a $\mathcal C$-contramodule is coinjective if and only if it is projective is also true, but a bit harder to prove [@Prev Lemma 3.1(b)]. **Lemma 18**. *Let $\mathcal C$ and $\mathcal D$ be coalgebras, $\mathcal B$ be a $\mathcal C$-$\mathcal D$-bicomodule, and $\mathcal J$ be a left $\mathcal D$-comodule. Assume that the left $\mathcal C$-comodule $\mathcal B$ and the left $\mathcal D$-comodule $\mathcal J$ are injective. Then the left $\mathcal C$-comodule $\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal J$ is also injective.* *Proof.* One can replace comodule injectivity by coflatness and argue that the composition of two exact cotensor product functors is exact, using the associativity of cotensor products. 
Alternatively, let the left $\mathcal D$-comodule $\mathcal J$ be a direct summand of a cofree left $\mathcal D$-comodule $\mathcal D\otimes_kV$, where $V\in k{\operatorname{\mathsf{--Vect}}}$; then the left $\mathcal C$-comodule $\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D\mathcal J$ is a direct summand of the left $\mathcal C$-comodule $\mathcal B\mathbin{\text{\smaller$\square$}}_\mathcal D(\mathcal D\otimes_kV)\simeq\mathcal B\otimes_kV$. ◻ ## Semialgebras {#semialgebras-prelim-subsecn} Let $\mathcal C$ be a (coassociative, counital) coalgebra over $k$. The cotensor product unitality and associativity isomorphisms ([\[cotensor-unitality\]](#cotensor-unitality){reference-type="ref" reference="cotensor-unitality"}--[\[cotensor-associativity\]](#cotensor-associativity){reference-type="ref" reference="cotensor-associativity"}) tell that the category of $\mathcal C$-$\mathcal C$-bicomodules $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is an associative, unital monoidal category with the unit object $\mathcal C$ with respect to the cotensor product operation $\mathbin{\text{\smaller$\square$}}_\mathcal C$. A *semialgebra* over $\mathcal C$ is an associative, unital monoid object in this monoidal category. In other words, a semialgebra $\boldsymbol{\mathcal S}$ over $\mathcal C$ is a $\mathcal C$-$\mathcal C$-bicomodule endowed with a *semimultiplication* map $\mathbf m\colon\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal S}$ and a *semiunit* map $\mathbf e\colon\mathcal C\longrightarrow\boldsymbol{\mathcal S}$, which must be $\mathcal C$-$\mathcal C$-bicomodule morphisms satisfying the following *semiassociativity* and *semiunitality* axioms. Firstly (semiassociativity), the two compositions $$\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal S}$$ of the two maps $\mathbf m\mathbin{\text{\smaller$\square$}}_\mathcal C\mathrm{id}_{\boldsymbol{\mathcal S}}$, $\mathrm{id}_{\boldsymbol{\mathcal S}}\mathbin{\text{\smaller$\square$}}_\mathcal C\mathbf m\colon \boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}$ induced by the semimultiplication map $\mathbf m$ with the semimultiplication map $\mathbf m\colon\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal S}$ must be equal to each other. 
Secondly (semiunitality), the two compositions $$\boldsymbol{\mathcal S}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal S}$$ of the two maps $\mathbf e\mathbin{\text{\smaller$\square$}}_\mathcal C\mathrm{id}_{\boldsymbol{\mathcal S}}$, $\mathrm{id}_{\boldsymbol{\mathcal S}}\mathbin{\text{\smaller$\square$}}_\mathcal C\mathbf e\colon \boldsymbol{\mathcal S}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}$ induced by the semiunit map $\mathbf e\colon\mathcal C\longrightarrow\boldsymbol{\mathcal S}$ with the semimultiplication map $\mathbf m$ must be equal to the identity map $\mathrm{id}_{\boldsymbol{\mathcal S}}$  [@Psemi Sections 0.3.2 and 1.3.1], [@Prev Section 2.6]. ## Semimodules {#semimodules-prelim-subsecn} Let $\mathcal C$ be a coalgebra over $k$. The unitality and associativity isomorphisms ([\[cotensor-unitality\]](#cotensor-unitality){reference-type="ref" reference="cotensor-unitality"}--[\[cotensor-associativity\]](#cotensor-associativity){reference-type="ref" reference="cotensor-associativity"}) also tell that the category of left $\mathcal C$-comodules $\mathcal C{\operatorname{\mathsf{--Comod}}}$ is an associative, unital left module category, with respect to the cotensor product operation $\mathbin{\text{\smaller$\square$}}_\mathcal C$, over the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. Let $\boldsymbol{\mathcal S}$ be a semialgebra over $\mathcal C$. A *left semimodule* $\boldsymbol{\mathcal M}$ over $\boldsymbol{\mathcal S}$ is a module object in the module category $\mathcal C{\operatorname{\mathsf{--Comod}}}$ over the monoid object $\boldsymbol{\mathcal S}$ in the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. In other words, a left semimodule $\boldsymbol{\mathcal M}$ over $\boldsymbol{\mathcal S}$ is a left $\mathcal C$-comodule endowed with a *left semiaction* map $\mathbf n\colon\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal M}$, which must be a left $\mathcal C$-comodule morphism satisfying the following semiassociativity and semiunitality axioms. Firstly (semiassociativity), the two compositions $$\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal M}$$ of the two maps $\mathbf m\mathbin{\text{\smaller$\square$}}_\mathcal C\mathrm{id}_{\boldsymbol{\mathcal M}}$, $\mathrm{id}_{\boldsymbol{\mathcal S}}\mathbin{\text{\smaller$\square$}}_\mathcal C\mathbf n\colon \boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\rightrightarrows\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}$ induced by the semimultiplication and semiaction maps $\mathbf m$ and $\mathbf n$ with the semiaction map $\mathbf n\colon\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal M}$ must be equal to each other. 
Secondly (semiunitality), the composition $$\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal M}$$ of the map $\mathbf e\mathbin{\text{\smaller$\square$}}_\mathcal C\mathrm{id}_{\boldsymbol{\mathcal M}}\colon\boldsymbol{\mathcal M}\longrightarrow\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal M}$ induced by the semiunit map $\mathbf e$ with the semiaction map $\mathbf n$ must be equal to the identity map $\mathrm{id}_{\boldsymbol{\mathcal M}}$  [@Psemi Sections 0.3.2 and 1.3.1], [@Prev Section 2.6]. Similarly, the category of right $\mathcal C$-comodules ${\operatorname{\mathsf{Comod--}}}\mathcal C$ is an associative, unital right module category, with respect to the cotensor product operation $\mathbin{\text{\smaller$\square$}}_\mathcal C$, over the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. A *right semimodule* over $\boldsymbol{\mathcal S}$ is a module object in the module category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ over the monoid object $\boldsymbol{\mathcal S}\in \mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. In other words, a right semimodule $\boldsymbol{\mathcal N}$ over $\boldsymbol{\mathcal S}$ is a right $\mathcal C$-comodule endowed with a *right semiaction* map $\mathbf n\colon\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal N}$, which must be a right $\mathcal C$-comodule morphism satisfying the semiassociativity and semiunitality axioms similar to the above ones. We will denote the $k$-linear category of left $\boldsymbol{\mathcal S}$-semimodules by $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}$ and the $k$-linear category of right $\boldsymbol{\mathcal S}$-semimodules by ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}$. The $k$-vector space of morphisms $\boldsymbol{\mathcal L}\longrightarrow\boldsymbol{\mathcal M}$ in the category $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}$ is denoted by $\mathop{\mathrm{Hom}}_{\boldsymbol{\mathcal S}}(\boldsymbol{\mathcal L},\boldsymbol{\mathcal M})$. **Proposition 19**. *Let $\boldsymbol{\mathcal S}$ be a semialgebra over a coalgebra $\mathcal C$ over a field $k$. Then* *[(a)]{.upright} the following two conditions on $\boldsymbol{\mathcal S}$ are equivalent:* - *the category of left $\boldsymbol{\mathcal S}$-semimodules $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}$ is abelian *and* the forgetful functor $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$ is exact;* - *$\boldsymbol{\mathcal S}$ is an injective right $\mathcal C$-comodule;* *[(b)]{.upright} the following two conditions on $\boldsymbol{\mathcal S}$ are equivalent:* - *the category of right $\boldsymbol{\mathcal S}$-semimodules ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}$ is abelian *and* the forgetful functor ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal C$ is exact;* - *$\boldsymbol{\mathcal S}$ is an injective left $\mathcal C$-comodule.* *Proof.* This is a semialgebra version of [@Prev Proposition 2.12(a)]. 
The point is that a comodule is coflat if and only if it is injective (see Section [2.5](#adjusted-co-contra-mods-subsecn){reference-type="ref" reference="adjusted-co-contra-mods-subsecn"}). If $\boldsymbol{\mathcal S}$ is an injective right $\mathcal C$-comodule, then the functor $\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C{-}\,\colon\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$ is exact, and one can construct a left $\boldsymbol{\mathcal S}$-semimodule structure on the kernel and cokernel of the underlying $\mathcal C$-comodule map of any left $\boldsymbol{\mathcal S}$-semimodule morphism. Conversely, one needs to observe that the induction functor $\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C{-}\,\colon\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}$ is always left adjoint to the forgetful functor $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$  [@Psemi Section 0.3.2, Lemma 1.1.2, and Section 1.3.1]. Hence, if $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}$ is abelian, then the induction functor is right exact. If the forgetful functor is exact, it follows that the composition $\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$ is right exact. This composition is the cotensor product functor $\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C{-}\,\colon\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$. It remains to notice that the cotensor product over a coalgebra over a field is always left exact (since it is constructed as the kernel of a map of tensor products over $k$). ◻ ## Semicontramodules {#semicontramodules-prelim-subsecn} Let $\mathcal C$ be a coalgebra over $k$. The $\mathop{\mathrm{Cohom}}$ unitality and associativity isomorphisms ([\[cohom-unitality\]](#cohom-unitality){reference-type="ref" reference="cohom-unitality"}--[\[cohom-associativity\]](#cohom-associativity){reference-type="ref" reference="cohom-associativity"}) can be interpreted as saying that the opposite category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ to the category of left $\mathcal C$-contramodules $\mathcal C{\operatorname{\mathsf{--Contra}}}$ is an associative, unital right module category, with respect to the operation of cohomomorphisms $$\mathfrak P^{\mathsf{op}}*\mathcal B=\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B,\mathfrak P)^{\mathsf{op}} \quad\text{for $\mathfrak P\in\mathcal C{\operatorname{\mathsf{--Contra}}}$ and $\mathcal B\in\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$},$$ over the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. The (objects opposite to the) module objects in $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ over a monoid object $\boldsymbol{\mathcal S}\in\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ are called the *left semicontramodules* over $\boldsymbol{\mathcal S}$. 
Explicitly, a *left semicontramodule* $\boldsymbol{\mathfrak P}$ over $\boldsymbol{\mathcal S}$ is a left $\mathcal C$-contramodule endowed with a *left semicontraaction* map $\mathbf p\colon\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$, which must be a left $\mathcal C$-contramodule morphism satisfying the following semicontraassociativity and semicontraunitality equations. Firstly (semicontraassociativity), the two compositions $$\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\rightrightarrows \mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S},\>\boldsymbol{\mathfrak P})\simeq\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P}))$$ of the semicontraaction map $\mathbf p\colon\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$ with the two maps $$\mathop{\mathrm{Cohom}}_\mathcal C(\mathbf m,\boldsymbol{\mathfrak P})\colon\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\longrightarrow \mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S},\>\boldsymbol{\mathfrak P})$$ and $$\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathbf p)\colon\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\longrightarrow \mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P}))$$ must be equal to each other. Here the presumed identification $\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S},\>\boldsymbol{\mathfrak P})\simeq\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P}))$ is obtained as a particular case of the $\mathop{\mathrm{Cohom}}$ associativity isomorphism [\[cohom-associativity\]](#cohom-associativity){reference-type="eqref" reference="cohom-associativity"}. Secondly (semicontraunitality), the composition $$\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\longrightarrow\boldsymbol{\mathfrak P}$$ of the semicontraaction map $\mathbf p\colon\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$ with the map $\mathop{\mathrm{Cohom}}_\mathcal C(\mathbf e,\boldsymbol{\mathfrak P})\colon\allowbreak\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\longrightarrow \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\boldsymbol{\mathfrak P})\simeq\boldsymbol{\mathfrak P}$ must be equal to the identity map $\mathrm{id}_{\boldsymbol{\mathfrak P}}$. Here the presumed identification $\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\boldsymbol{\mathfrak P})\simeq\boldsymbol{\mathfrak P}$ is provided by the $\mathop{\mathrm{Cohom}}$ unitality isomorphism [\[cohom-unitality\]](#cohom-unitality){reference-type="eqref" reference="cohom-unitality"}  [@Psemi Sections 0.3.5 and 3.3.1], [@Prev Section 2.6]. 
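Before introducing the notation for these categories, let us mention two degenerate special cases, both standard, which may help the reader to orient themselves in the definitions (neither is needed below). First, for the one-dimensional coalgebra $\mathcal C=k$, the bicomodule category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is just $k{\operatorname{\mathsf{--Vect}}}$ with the cotensor product $\mathbin{\text{\smaller$\square$}}_k=\otimes_k$; so a semialgebra over $k$ is simply an associative, unital $k$-algebra, its semimodules are ordinary modules, and its left semicontramodules are again ordinary left modules (the semicontraaction map $\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Hom}}_k(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$ being adjoint to an ordinary action map $\boldsymbol{\mathcal S}\otimes_k\boldsymbol{\mathfrak P}\longrightarrow\boldsymbol{\mathfrak P}$). Second, one can take $\boldsymbol{\mathcal S}=\mathcal C$ with the structure maps $$\mathbf m\colon\mathcal C\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal C\overset{\simeq}{\longrightarrow}\mathcal C \qquad\text{and}\qquad \mathbf e=\mathrm{id}_\mathcal C\colon\mathcal C\longrightarrow\mathcal C;$$ then left/right semimodules over $\boldsymbol{\mathcal S}$ are the same things as left/right $\mathcal C$-comodules, and left semicontramodules over $\boldsymbol{\mathcal S}$ are the same things as left $\mathcal C$-contramodules.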
We will denote the $k$-linear category of left $\boldsymbol{\mathcal S}$-semicontramodules by $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$. The definition of the category of right $\boldsymbol{\mathcal S}$-semicontramodules ${\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}$ is similar (we suppress the details). The $k$-vector space of morphisms $\boldsymbol{\mathfrak P}\longrightarrow\boldsymbol{\mathfrak Q}$ in the category $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ is denoted by $\mathop{\mathrm{Hom}}^{\boldsymbol{\mathcal S}}(\boldsymbol{\mathfrak P},\boldsymbol{\mathfrak Q})$. **Proposition 20**. *Let $\boldsymbol{\mathcal S}$ be a semialgebra over a coalgebra $\mathcal C$ over a field $k$. Then the following two conditions on $\boldsymbol{\mathcal S}$ are equivalent:* - *the category of left $\boldsymbol{\mathcal S}$-semicontramodules $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ is abelian *and* the forgetful functor $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$ is exact;* - *$\boldsymbol{\mathcal S}$ is an injective left $\mathcal C$-comodule.* *Proof.* This is a semialgebra version of [@Prev Proposition 2.12(b)]. The point is that a comodule is coprojective if and only if it is injective (see Section [2.5](#adjusted-co-contra-mods-subsecn){reference-type="ref" reference="adjusted-co-contra-mods-subsecn"}). If $\boldsymbol{\mathcal S}$ is an injective left $\mathcal C$-comodule, then the functor $\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},{-})\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$ is exact, and one can construct a left $\boldsymbol{\mathcal S}$-semicontramodule structure on the kernel and cokernel of the underlying $\mathcal C$-contramodule map of any left $\boldsymbol{\mathcal S}$-semicontramodule morphism. Conversely, one needs to observe that the coinduction functor $\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},{-})\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ is always right adjoint to the forgetful functor $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$  [@Psemi Sections 0.3.5 and 3.3.1]. Hence, if $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ is abelian, then the coinduction functor is left exact. If the forgetful functor is exact, it follows that the composition $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$ is left exact. This composition is the $\mathop{\mathrm{Cohom}}$ functor $\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},{-})\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$. It remains to notice that the $\mathop{\mathrm{Cohom}}$ functor over a coalgebra over a field is always right exact (since it is constructed as the cokernel of a map of $\mathop{\mathrm{Hom}}$ spaces over $k$). ◻ # Left Flat, Right Integrable Bimodules [\[flat-integrable-bimodules-secn\]]{#flat-integrable-bimodules-secn label="flat-integrable-bimodules-secn"} Starting from this section, everything in this paper happens over a field $k$. 
All (possibly nonunital) rings are $k$-algebras and all (nonunital) modules are $k$-vector spaces; so by a "$K$-module" we mean a unital $\widecheck K$-module. All module morphisms are, at least, $k$-linear; so the notation $\mathop{\mathrm{Hom}}_K(M,P)$ means $\mathop{\mathrm{Hom}}_{\widecheck K}(M,P)$; and similarly for the tensor product, $N\otimes_KM$ means $N\otimes_{\widecheck K}M$. Keeping in mind Lemma [Lemma 1](#hom-tensor-over-tilde-check-agree){reference-type="ref" reference="hom-tensor-over-tilde-check-agree"}, this notation change mostly makes no difference. We will use the notation $K{\operatorname{\mathsf{--{}^n Mod}}}=\widecheck K{\operatorname{\mathsf{--Mod}}}$ and ${\operatorname{\mathsf{Mod^n--}}}K={\operatorname{\mathsf{Mod--}}}\widecheck K$ for the categories of nonunital left and right $K$-modules. The category of nonunital $K$-$K$-bimodules is denoted by $K{\operatorname{\mathsf{--{}^n Bimod^n--}}}K=\widecheck K{\operatorname{\mathsf{--Bimod--}}}\widecheck K= (\widecheck K\otimes_k\widecheck K^{\mathrm{op}}){\operatorname{\mathsf{--Mod}}}$. Then we have the full subcategories of t-unital modules $K{\operatorname{\mathsf{--{}^t Mod}}}\subset K{\operatorname{\mathsf{--{}^n Mod}}}$ and ${\operatorname{\mathsf{Mod^t--}}}K\subset{\operatorname{\mathsf{Mod^n--}}}K$, the full subcategory of t-unital bimodules $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K\subset K{\operatorname{\mathsf{--{}^n Bimod^n--}}}K$, and the full subcategory of c-unital modules $K{\operatorname{\mathsf{--{}^c Mod}}}\subset K{\operatorname{\mathsf{--{}^n Mod}}}$. ## Multiplicative pairings {#multiplicative-pairings-subsecn} The following definition is a modification of the one from [@Psemi Section 10.1.2]. Let $K$ be a (nonunital) $k$-algebra and $\mathcal C$ be a (counital) coalgebra over $k$. A *multiplicative pairing* $$\label{mult-pairing} \phi\colon\mathcal C\times K\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax k$$ is a bilinear map satisfying the equation $$\label{mult-pairing-defined} \phi(c,r'r'')=\phi(c_{(1)},r'')\phi(c_{(2)},r')$$ for all $c\in\mathcal C$ and $r'$, $r''\in K$. Here $\mu(c)=c_{(1)}\otimes c_{(2)}$ is a simplified version of Sweedler's symbolic notation [@Swe Section 1.2] for the comultiplication in $\mathcal C$. Equivalently, one can pass from $\phi$ to the induced/partial dual map $\phi^*\colon K\longrightarrow\mathcal C^*=\mathop{\mathrm{Hom}}_k(\mathcal C,k)$. Then $\phi$ is a multiplicative pairing if and only if $\phi^*$ is a morphism of (nonunital) $k$-algebras. **Remark 21**. Notice that the factors are switched in the formula [\[mult-pairing-defined\]](#mult-pairing-defined){reference-type="eqref" reference="mult-pairing-defined"}. This corresponds to our convention (unusual for the theory of coalgebras over a field) concerning the definition of multiplication in $\mathcal C^*$. Our preference, reflected in the formula and assertion above, is to define the multiplication in $\mathcal C^*$ so that left $\mathcal C$-comodules become left $\mathcal C^*$-modules and right $\mathcal C$-comodules become right $\mathcal C^*$-modules; then left $\mathcal C$-contramodules also become left $\mathcal C^*$-modules. This is mentioned in the surveys [@Prev beginning of Section 1.4] and [@Pksurv Sections 6.3 and 9.2]. In the more general context of corings over noncommutative rings, there is no choice, and our convention becomes the only consistent one (as in [@Psemi Section 10.1.2]). **Proposition 22**. 
*Let $K$ be a (nonunital) $k$-algebra and $\mathcal C$ be a (counital) coalgebra over $k$. Then the datum of a multiplicative pairing $\phi$ induces exact, faithful functors $$\begin{aligned} {2} \phi_*&\colon\mathcal C{\operatorname{\mathsf{--Comod}}}&&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{--{}^n Mod}}}, \label{left-comodules-to-modules} \\ \phi_*&\colon{\operatorname{\mathsf{Comod--}}}\mathcal C&&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax{\operatorname{\mathsf{Mod^n--}}}K, \label{right-comodules-to-modules} \\ \phi_*&\colon\mathcal C{\operatorname{\mathsf{--Contra}}}&&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{--{}^n Mod}}} \label{left-contramodules-to-modules}\end{aligned}$$ assigning to every $\mathcal C$-comodule or $\mathcal C$-contramodule the induced $K$-module structure on the same vector space.* *Proof.* The functors in question are the compositions $$\begin{aligned} {3} &\mathcal C{\operatorname{\mathsf{--Comod}}}&&\overset{\Upsilon_\mathcal C}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal C^*{\operatorname{\mathsf{--Mod}}} &&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{--{}^n Mod}}}, \\ &{\operatorname{\mathsf{Comod--}}}\mathcal C&&\overset{\Upsilon_{\mathcal C^{\mathrm{op}}}}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax{\operatorname{\mathsf{Mod--}}}\mathcal C^* &&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax{\operatorname{\mathsf{Mod^n--}}}K, \\ &\mathcal C{\operatorname{\mathsf{--Contra}}}&&\overset{\Theta_\mathcal C}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal C^*{\operatorname{\mathsf{--Mod}}} &&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{--{}^n Mod}}}\end{aligned}$$ of the comodule/contramodule inclusion/forgetful functors (depending only on a coalgebra $\mathcal C$) with the functors of restriction of scalars with respect to the ring homomorphism $\phi^*\colon K\longrightarrow\mathcal C^*$. We refer to the book [@Swe Section 2.1] and the preprint [@Phff] for further details on the comodule/contramodule inclusion/forgetful functors. The functors ([\[left-comodules-to-modules\]](#left-comodules-to-modules){reference-type="ref" reference="left-comodules-to-modules"}-- [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="ref" reference="left-contramodules-to-modules"}) can be also constructed directly as follows. Given a left $\mathcal C$-comodule $\mathcal M$, the composition $$K\otimes_k\mathcal M\overset\nu\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K\otimes_k\mathcal C\otimes_k\mathcal M\overset\phi \mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal M$$ of the map $K\otimes_k\mathcal M\longrightarrow K\otimes_k\mathcal C\otimes_k\mathcal M$ induced by the coaction map $\nu\colon\mathcal M\longrightarrow\mathcal C\otimes_k\mathcal M$ with the map $K\otimes_k\mathcal C\otimes_k\mathcal M\longrightarrow\mathcal M$ induced by the pairing $\phi\colon\mathcal C\times K\longrightarrow k$ defines a left $K$-module structure on $\mathcal M$. 
This is the functor [\[left-comodules-to-modules\]](#left-comodules-to-modules){reference-type="eqref" reference="left-comodules-to-modules"}. Given a right $\mathcal C$-comodule $\mathcal N$, the similar composition $$\mathcal N\otimes_k K\overset\nu\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\otimes_k\mathcal C\otimes_k K\overset\phi \mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N$$ of the map $\mathcal N\otimes_k K\longrightarrow\mathcal N\otimes_k\mathcal C\otimes_kK$ induced by the coaction map $\nu\colon\mathcal N\longrightarrow\mathcal N\otimes_k\mathcal C$ with the map $\mathcal N\otimes_k\mathcal C\otimes_k K\longrightarrow\mathcal N$ induced by the pairing $\phi\colon\mathcal C\times K\longrightarrow k$ defines a right $K$-module structure on $\mathcal N$. This is the functor [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"}. Given a left $\mathcal C$-contramodule $\mathfrak P$, the composition $$K\otimes_k\mathfrak P\overset\phi\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\overset\pi\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathfrak P$$ of the map $r\otimes_k p\longmapsto(c\mapsto\phi(c,r)p)\colon K\otimes_k\mathfrak P\longrightarrow \mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)$ induced by the pairing $\phi\colon\mathcal C\times K\longrightarrow k$ with the contraaction map $\pi\colon\mathop{\mathrm{Hom}}_k(\mathcal C,\mathfrak P)\longrightarrow\mathfrak P$ defines a left $K$-module structure on $\mathfrak P$. This is the functor [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="eqref" reference="left-contramodules-to-modules"}. ◻ ## t-Unital multiplicative pairings {#t-unital-pairings-subsecn} Let $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing for an algebra $K$ and a coalgebra $\mathcal C$. Then the construction of the functors $\phi_*$  ([\[left-comodules-to-modules\]](#left-comodules-to-modules){reference-type="ref" reference="left-comodules-to-modules"}-- [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="ref" reference="right-comodules-to-modules"}) from Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"} endows the $\mathcal C$-$\mathcal C$-bicomodule $\mathcal C\in\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ with a $K$-$K$-bimodule structure, $\mathcal C\in K{\operatorname{\mathsf{--{}^n Bimod^n--}}}K$. We will say the pairing $\phi$ is *right t-unital* if $\mathcal C$ is a t-unital right $K$-module. Similarly, the pairing $\phi$ is called *left t-unital* if $\mathcal C$ is a t-unital left $K$-module. **Proposition 23**. *Let $K$ be a nonunital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. Assume that the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K$ is closed under kernels in ${\operatorname{\mathsf{Mod^n--}}}K$ (e. g., this holds if $K$ is a flat left $\widecheck K$-module; see Proposition [Proposition 8](#t-unital-kernels-flat-ring){reference-type="ref" reference="t-unital-kernels-flat-ring"}). Then the following conditions are equivalent:* 1. *the pairing $\phi$ is right t-unital;* 2.
*for every right $\mathcal C$-comodule $\mathcal N$, the right $K$-module $\phi_*(\mathcal N)$ is t-unital;* 3. *for every finite-dimensional right $\mathcal C$-comodule $\mathcal N$, the right $K$-module $\phi_*(\mathcal N)$ is t-unital.* *Proof.* The implications (2) $\Longrightarrow$ (1) and (2) $\Longrightarrow$ (3) are obvious. (1) $\Longrightarrow$ (2) By [@Quil Proposition 4.2] or [@Pnun Lemma 1.5], the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K$ is closed under direct sums in ${\operatorname{\mathsf{Mod^n--}}}K$. Hence, if the right $\mathcal C$-comodule $\mathcal C$ is a t-unital right $K$-module, then all cofree right $\mathcal C$-comodules are t-unital right $K$-modules. It remains to recall that any $\mathcal C$-comodule is the kernel of a morphism of cofree ones. (3) $\Longrightarrow$ (2) By [@Quil Proposition 4.2] or [@Pnun Lemma 1.5], the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K$ is closed under direct limits in ${\operatorname{\mathsf{Mod^n--}}}K$. So it remains to use the fact that any $\mathcal C$-comodule is the union of its finite-dimensional subcomodules [@Swe Theorem 2.1.3(b)]. ◻ **Proposition 24**. *Let $K$ be a nonunital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. Assume that the full subcategory $K{\operatorname{\mathsf{--{}^c Mod}}}$ is closed under cokernels in $K{\operatorname{\mathsf{--{}^n Mod}}}$ (e. g., this holds if $K$ is a projective left $\widecheck K$-module; see Proposition [Proposition 9](#c-unital-cokernels-projective-ring){reference-type="ref" reference="c-unital-cokernels-projective-ring"}). Then the following conditions are equivalent:* 1. *the pairing $\phi$ is right t-unital;* 2. *for the free left $\mathcal C$-contramodule $\mathop{\mathrm{Hom}}_k(\mathcal C,k)$, the left $K$-module $\phi_*(\mathop{\mathrm{Hom}}_k(\mathcal C,k))$ is c-unital;* 3. *for every left $\mathcal C$-contramodule $\mathfrak P$, the left $K$-module $\phi_*(\mathfrak P)$ is c-unital.* *Proof.* (1) $\Longleftrightarrow$ (2) One needs to observe that the functors $\phi_*$  ([\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="ref" reference="right-comodules-to-modules"}-- [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="ref" reference="left-contramodules-to-modules"}) form a commutative square diagram with the functor $\mathop{\mathrm{Hom}}_k({-},V)\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$ and the functor $\mathop{\mathrm{Hom}}_k({-},V)\colon{\operatorname{\mathsf{Mod^n--}}}K\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$ for any $k$-vector space $V$ (in particular, for $V=k$). Then it remains to use Lemma [Lemma 4](#duality-preserves-reflects-t-c){reference-type="ref" reference="duality-preserves-reflects-t-c"}. (3) $\Longrightarrow$ (2) Obvious. (1) or (2) $\Longrightarrow$ (3) By Lemma [Lemma 4](#duality-preserves-reflects-t-c){reference-type="ref" reference="duality-preserves-reflects-t-c"}, all free left $\mathcal C$-contramodules $\mathop{\mathrm{Hom}}_k(\mathcal C,V)$ are c-unital left $K$-modules under the assumption of either (1) or (2). It remains to recall that any left $\mathcal C$-contramodule is the cokernel of a morphism of free ones. ◻ **Proposition 25**. *Let $K$ be a nonunital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. 
Then the following conditions are equivalent:* 1. *the right $K$-module $\mathcal C$ is s-unital;* 2. *the left $K$-module $\mathcal C$ is s-unital;* 3. *for every finite-dimensional vector subspace $W\subset\mathcal C$ there exists an element $e\in K$ such that $\epsilon_\mathcal C(w)=\phi(w,e)$ for all $w\in W$ (where $\epsilon_\mathcal C\colon\mathcal C\longrightarrow k$ denotes the counit map).* *Proof.* (3) $\Longrightarrow$ (1) Let $c\in\mathcal C$ be an element. Choose a finite-dimensional vector subspace $W\subset\mathcal C$ for which $\mu(c)\in\mathcal C\otimes_kW\subset\mathcal C\otimes_k\mathcal C$. Let $e\in K$ be an element such that $\epsilon_\mathcal C(w)=\phi(w,e)$ for all $w\in W$. Then $ce=c_{(1)}\phi(c_{(2)},e)=c_{(1)}\epsilon_\mathcal C(c_{(2)})=c$, where $\mu(c)=c_{(1)}\otimes c_{(2)}\in\mathcal C\otimes_k\mathcal C$ is Sweedler's notation for the comultiplication and $c_{(1)}\epsilon_\mathcal C(c_{(2)})=c$ is the counitality equation for the coalgebra $\mathcal C$. (1) $\Longrightarrow$ (3) By (the left-right opposite version of) Proposition [Proposition 15](#s-unital-simultaneous-for-several){reference-type="ref" reference="s-unital-simultaneous-for-several"}, there exists an element $e\in K$ such that $we=w$ in $\mathcal C$ for all $w\in W$. So we have $w_{(1)}\phi(w_{(2)},e)=w$, where $w_{(1)}\otimes w_{(2)} =\mu(w)\in\mathcal C\otimes_k\mathcal C$. Applying the linear map $\epsilon_\mathcal C$, we obtain $\epsilon_\mathcal C(w_{(1)})\phi(w_{(2)},e)=\epsilon_\mathcal C(w)$, and it remains to point out that the counitality equation $\epsilon_\mathcal C(w_{(1)})w_{(2)}=w$ implies $\epsilon_\mathcal C(w_{(1)})\phi(w_{(2)},e)=\phi(w,e)$. The equivalence (2) $\Longleftrightarrow$ (3) is obtained by the left-right opposite version of the same argument. ◻ We will say that a multiplicative pairing $\phi\colon\mathcal C\times K\longrightarrow k$ is *s-unital* if it satisfies the equivalent conditions of Proposition [Proposition 25](#s-unital-pairings-characterized){reference-type="ref" reference="s-unital-pairings-characterized"}. **Corollary 26**. *Let $K$ be a right s-unital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. Then the pairing $\phi$ is right t-unital if and only if it is s-unital.* *Proof.* Follows from Proposition [Proposition 16](#s-unital-implies-t-unital){reference-type="ref" reference="s-unital-implies-t-unital"}(b). ◻ ## Associativity isomorphisms The following result is our version of [@Psemi Proposition 1.2.3(a)]. **Proposition 27**. *Let $K$ be a nonunital algebra, $\mathcal C$ be a coalgebra, $\mathcal N$ be a right $\mathcal C$-comodule, $F$ be a left $K$-module, and $\mathcal E$ be a left $\mathcal C$-comodule endowed with a right action of the algebra $K$ by left $\mathcal C$-comodule endomorphisms.
Then there is a natural map of vector spaces $$\label{tensor-cotensor-assoc-eqn} (\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal E)\otimes_K F\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal E\otimes_K F).$$ The map [\[tensor-cotensor-assoc-eqn\]](#tensor-cotensor-assoc-eqn){reference-type="eqref" reference="tensor-cotensor-assoc-eqn"} is an isomorphism whenever $\mathcal E$ is a t-unital right $K$-module and $F$ is an nt-flat left $K$-module.* *Proof.* By the definition of the cotensor product, we have a left exact sequence of right $K$-modules $$\label{cotensor-product-left-exact-sequence} 0\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal E\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\otimes_k\mathcal E\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\otimes_k\mathcal C\otimes_k\mathcal E$$ and a similar left exact sequence of vector spaces $$\label{cotensor-product-of-tensor-products-sequence} 0\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal E\otimes_KF)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\otimes_k\mathcal E\otimes_KF\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathcal N\otimes_k\mathcal C\otimes_k\mathcal E\otimes_KF.$$ Taking the tensor product of [\[cotensor-product-left-exact-sequence\]](#cotensor-product-left-exact-sequence){reference-type="eqref" reference="cotensor-product-left-exact-sequence"} with the left $K$-module $F$, we produce a three-term complex of vector spaces $$\label{tensor-of-cotensor-complex} (\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal E)\otimes_KF\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal N\otimes_k\mathcal E\otimes_KF\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathcal N\otimes_k\mathcal C\otimes_k\mathcal E\otimes_KF.$$ Comparing [\[tensor-of-cotensor-complex\]](#tensor-of-cotensor-complex){reference-type="eqref" reference="tensor-of-cotensor-complex"} with [\[cotensor-product-of-tensor-products-sequence\]](#cotensor-product-of-tensor-products-sequence){reference-type="eqref" reference="cotensor-product-of-tensor-products-sequence"}, we immediately obtain the desired natural map [\[tensor-cotensor-assoc-eqn\]](#tensor-cotensor-assoc-eqn){reference-type="eqref" reference="tensor-cotensor-assoc-eqn"}. Furthermore, it is clear that the map [\[tensor-cotensor-assoc-eqn\]](#tensor-cotensor-assoc-eqn){reference-type="eqref" reference="tensor-cotensor-assoc-eqn"} is an isomorphism if and only if the complex [\[tensor-of-cotensor-complex\]](#tensor-of-cotensor-complex){reference-type="eqref" reference="tensor-of-cotensor-complex"} is a left exact sequence. Assume that $\mathcal E$ is a t-unital right $K$-module. Then, by [@Quil Proposition 4.2] or [@Pnun Lemma 1.5], so are the right $K$-modules $\mathcal N\otimes_k\mathcal E$ and $\mathcal N\otimes_k\mathcal C\otimes_k\mathcal E$. 
By the definition of nt-flatness (see Section [1.5](#nt-nc-flat-proj-prelim-subsecn){reference-type="ref" reference="nt-nc-flat-proj-prelim-subsecn"}),  $F$ being an nt-flat left $K$-module then implies left exactness of the complex [\[tensor-of-cotensor-complex\]](#tensor-of-cotensor-complex){reference-type="eqref" reference="tensor-of-cotensor-complex"}. ◻ We refer to [@Psemi end of Section 1.2.5] for a relevant discussion of *commutativity of the diagrams of associativity isomorphisms* of cotensor products. The next proposition is a version of [@Psemi Propositions 3.2.3.2(a)]. **Proposition 28**. *Let $K$ be a nonunital algebra, $\mathcal C$ be a coalgebra, $\mathfrak P$ be a left $\mathcal C$-contramodule, $F$ be a left $K$-module, and $\mathcal E$ be a left $\mathcal C$-comodule endowed with a right action of the algebra $K$ by left $\mathcal C$-comodule endomorphisms. Then there is a natural map of vector spaces $$\label{tensor-hom-cohom-assoc-eqn} \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal E\otimes_KF,\>\mathfrak P)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_K(F,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal E,\mathfrak P)).$$ The map [\[tensor-hom-cohom-assoc-eqn\]](#tensor-hom-cohom-assoc-eqn){reference-type="eqref" reference="tensor-hom-cohom-assoc-eqn"} is an isomorphism whenever $\mathcal E$ is a t-unital right $K$-module and $F$ is an nc-projective left $K$-module.* *Proof.* By the definition of cohomomorphisms, we have a right exact sequence of left $K$-modules $$\label{cohom-right-exact-sequence} \mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal E,\>\mathfrak P)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_k(\mathcal E,\mathfrak P)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal E,\mathfrak P)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax 0$$ and a similar right exact sequence of vector spaces $$\label{cohom-of-tensor-products-sequence} \mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal E\otimes_KF,\>\mathfrak P)\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal E\otimes_KF,\>\mathfrak P)\longrightarrow \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal E\otimes_KF,\>\mathfrak P)\longrightarrow 0.$$ Applying to [\[cohom-right-exact-sequence\]](#cohom-right-exact-sequence){reference-type="eqref" reference="cohom-right-exact-sequence"} the covariant functor $\mathop{\mathrm{Hom}}_K(F,{-})$, we produce a three-term complex of vector spaces $$\label{hom-into-cohom-complex} \mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal E\otimes_KF,\>\mathfrak P)\longrightarrow\mathop{\mathrm{Hom}}_k(\mathcal E\otimes_KF,\>\mathfrak P)\longrightarrow \mathop{\mathrm{Hom}}_K(F,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal E,\mathfrak P)).$$ Comparing [\[hom-into-cohom-complex\]](#hom-into-cohom-complex){reference-type="eqref" reference="hom-into-cohom-complex"} with [\[cohom-of-tensor-products-sequence\]](#cohom-of-tensor-products-sequence){reference-type="eqref" reference="cohom-of-tensor-products-sequence"}, we immediately obtain the desired natural map [\[tensor-hom-cohom-assoc-eqn\]](#tensor-hom-cohom-assoc-eqn){reference-type="eqref" reference="tensor-hom-cohom-assoc-eqn"}. 
Furthermore, it is clear that the map [\[tensor-hom-cohom-assoc-eqn\]](#tensor-hom-cohom-assoc-eqn){reference-type="eqref" reference="tensor-hom-cohom-assoc-eqn"} is an isomorphism if and only if the complex [\[hom-into-cohom-complex\]](#hom-into-cohom-complex){reference-type="eqref" reference="hom-into-cohom-complex"} is a right exact sequence. Assume that $\mathcal E$ is a t-unital right $K$-module. Then, by Lemma [Lemma 4](#duality-preserves-reflects-t-c){reference-type="ref" reference="duality-preserves-reflects-t-c"}, the left $K$-modules $\mathop{\mathrm{Hom}}_k(\mathcal E,\mathfrak P)$ and $\mathop{\mathrm{Hom}}_k(\mathcal C\otimes_k\mathcal E,\>\mathfrak P)$ are c-unital. By the definition of nc-projectivity (see Section [1.5](#nt-nc-flat-proj-prelim-subsecn){reference-type="ref" reference="nt-nc-flat-proj-prelim-subsecn"}),  $F$ being an nc-projective left $K$-module then implies right exactness of the complex [\[hom-into-cohom-complex\]](#hom-into-cohom-complex){reference-type="eqref" reference="hom-into-cohom-complex"}. ◻ We refer to [@Psemi end of Section 3.2.5] for a relevant discussion of *commutativity of the diagrams of associativity isomorphisms* of iterated cohomomorphisms. ## Left flat, right integrated bimodules {#integrated-bimodules-subsecn} Let $K$ be a nonunital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. Recall that the pairing $\phi$ endows the $\mathcal C$-$\mathcal C$-bicomodule $\mathcal C$ with the induced $K$-$K$-bimodule structure (see Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"}). Let $B\in K{\operatorname{\mathsf{--{}^n Bimod^n--}}}K$ be a $K$-$K$-bimodule. Then the tensor product $\mathcal C\otimes_KB$ has a natural pair of commuting structures of a left $\mathcal C$-comodule and a right $K$-module, or in other words, $\mathcal C\otimes_KB$ is a left $\mathcal C$-comodule endowed with a right action of $K$ by left $\mathcal C$-comodule endomorphisms. We will say that the $K$-$K$-bimodule $B$ is *right integrated* if the tensor product $\mathcal C\otimes_KB$ is endowed with a right $\mathcal C$-comodule structure such that 1. the right $\mathcal C$-comodule structure on $\mathcal C\otimes_KB$ commutes with the natural left $\mathcal C$-comodule structure (arising from the left $\mathcal C$-comodule structure on $\mathcal C$), so $\mathcal C\otimes_KB$ is a $\mathcal C$-$\mathcal C$-bicomodule; 2. applying the functor $\phi_*$  [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"} to the right $\mathcal C$-comodule structure on $\mathcal C\otimes_KB$ produces the natural right $K$-module structure (arising from the right $K$-module structure on $B$). In other words, a *right integrated $K$-$K$-bimodule* is a triple $B=(B,\mathcal B,\beta)$, where $B$ is a $K$-$K$-bimodule, $\mathcal B$ is a $\mathcal C$-$\mathcal C$-bicomodule, and $\beta\colon\mathcal C\otimes_KB\simeq\mathcal B$ is an isomorphism of left $\mathcal C$-comodules which is also an isomorphism of right $K$-modules. **Remark 29**. Our terminology "integrated bimodule" comes from Lie theory. Let $G$ be an algebraic group over a field $k$ and $\mathfrak g$ be the Lie algebra of $G$. Let $\mathcal C(G)$ be the coalgebra of regular functions on $G$ (with respect to the convolution comultiplication) and $U(\mathfrak g)$ be the enveloping algebra of $\mathfrak g$.
Then there is a natural multiplicative pairing $\phi\colon\mathcal C(G)\times U(\mathfrak g)\longrightarrow k$ respecting the unit of $U(\mathfrak g)$ and the counit of $\mathcal C(G)$  [@Prev Section 2.7]. This pairing is responsible, as per the construction of Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"}, for the Lie functor assigning the underlying $\mathfrak g$-module structure to every $G$-module (notice that by a *$G$-module*, or a *representation of $G$*, one simply means a $\mathcal C(G)$-comodule). The same proposition also assigns the underlying $\mathfrak g$-module structure to every $\mathcal C(G)$-contramodule. One speaks colloquially of the process of recovering a Lie/algebraic group from a Lie algebra, or an algebraic group action from a Lie algebra action, as of "integration". Hence the word "integrated" in our terminology. Assume that $K$ is a t-unital algebra and $\phi$ is a right t-unital multiplicative pairing. Then we have a natural (multiplication) isomorphism of left $\mathcal C$-comodules and right $K$-modules $\varkappa\colon\mathcal C\otimes_KK\longrightarrow\mathcal C$. Consequently, the $K$-$K$-bimodule $K$ can be endowed with the natural structure of right integrated bimodule given by the triple $(K,\mathcal C,\varkappa)$. Here the coalgebra $\mathcal C$ is endowed with its natural $\mathcal C$-$\mathcal C$-bicomodule structure. **Proposition 30**. *Let $K$ be a nonunital algebra, $\mathcal C$ be a coalgebra, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Let $B'=(B',\mathcal B',\beta')$ and $B''=(B'',\mathcal B'',\beta'')$ be two right integrated $K$-$K$-bimodules. Assume that $B''$ is an nt-flat left $K$-module. Then the tensor product $B=B'\otimes_KB''$ acquires a natural structure of right integrated $K$-$K$-bimodule, $B=(B,\mathcal B,\beta)$.* *Proof.* Applying Proposition [Proposition 27](#tensor-cotensor-assoc-prop){reference-type="ref" reference="tensor-cotensor-assoc-prop"} to the right $\mathcal C$-comodule $\mathcal N=\mathcal B'$, the $\mathcal C$-$\mathcal C$-bicomodule $\mathcal E=\mathcal C$, and the left $K$-module $F=B''$, we obtain a natural associativity isomorphism $$\mathcal B'\otimes_KB''=(\mathcal B'\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal C)\otimes_KB''\simeq\mathcal B'\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KB''),$$ that is, $$\label{integration-isomorphism} \mathcal C\otimes_KB'\otimes_KB''\simeq\mathcal B'\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal B''.$$ It remains to put $\mathcal B=\mathcal B'\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal B''$ and let $\beta$ be the isomorphism [\[integration-isomorphism\]](#integration-isomorphism){reference-type="eqref" reference="integration-isomorphism"}. ◻ We will say that a right integrated $K$-$K$-bimodule $(B,\mathcal B,\beta)$ is *left nt-flat* if the left $K$-module $B$ is nt-flat. Similarly, $(B,\mathcal B,\beta)$ is called *right t-unital* if the right $K$-module $B$ is t-unital. **Proposition 31**. *Let $K$ be a nonunital algebra, $\mathcal C$ be a coalgebra, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. 
Then the category of left nt-flat, right t-unital right integrated $K$-$K$-bimodules $(B,\mathcal B,\beta)$ is an associative but *nonunital* tensor/monoidal category with respect to the operation of tensor product of bimodules and cotensor product of bicomodules.* *Proof.* The point is that for any two left nt-flat, right t-unital $K$-$K$-bimodules $B'$ and $B''$, the tensor product $B'\otimes_KB''$ is also left nt-flat and right t-unital (by Lemma [Lemma 11](#nt-flat-tensor-product){reference-type="ref" reference="nt-flat-tensor-product"} and [@Pnun Lemma 1.2(b)]). So the assertion follows from Proposition [Proposition 30](#tensor-product-integrated){reference-type="ref" reference="tensor-product-integrated"}. ◻ The problem with nonunitality of the tensor/monoidal category in Proposition [Proposition 31](#nonunital-tensor-of-nt-flat-integrated){reference-type="ref" reference="nonunital-tensor-of-nt-flat-integrated"} arises from the fact that the left $K$-module $K$ is not nt-flat in general (while the right $K$-module $\widecheck K$ is, of course, not t-unital). Indeed, as explained in Section [1.5](#nt-nc-flat-proj-prelim-subsecn){reference-type="ref" reference="nt-nc-flat-proj-prelim-subsecn"}, the left $K$-module $K$ is nt-flat if and only if $K$ is a flat left $\widecheck K$-module. We will say that a right integrated $K$-$K$-bimodule $(B,\mathcal B,\beta)$ is *left flat* if the left $\widecheck K$-module $B$ is flat. Similarly, $(B,\mathcal B,\beta)$ is called *t-unital* if the $K$-$K$-bimodule $B$ is t-unital (i. e., both left and right t-unital). **Corollary 32**. *Let $K$ be a t-unital algebra such that $K$ is a flat left $\widecheck K$-module. Let $\mathcal C$ be a coalgebra and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Then the category of t-unital, left flat right integrated $K$-$K$-bimodules $(B,\mathcal B,\beta)$ is an associative, *unital* monoidal category with the unit object $(K,\mathcal C,\varkappa)$ with respect to the operation of tensor product of bimodules and cotensor product of bicomodules.* *Proof.* It was mentioned in Section [1.5](#nt-nc-flat-proj-prelim-subsecn){reference-type="ref" reference="nt-nc-flat-proj-prelim-subsecn"} that all flat $\widecheck K$-modules are nt-flat. By Proposition [Proposition 13](#t-c-unital-monoidal-model-categories){reference-type="ref" reference="t-c-unital-monoidal-model-categories"}(a), the category of t-unital $K$-$K$-bimodules $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ is an associative, unital monoidal category with the unit object $K$ with respect to the tensor product operation $\otimes_K$. As explained in Section [2.6](#semialgebras-prelim-subsecn){reference-type="ref" reference="semialgebras-prelim-subsecn"}, the category of $\mathcal C$-$\mathcal C$-bicomodules $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is an associative, unital monoidal category with the unit object $\mathcal C$ with respect to the cotensor product operation $\mathbin{\text{\smaller$\square$}}_\mathcal C$. It remains to refer to Proposition [Proposition 30](#tensor-product-integrated){reference-type="ref" reference="tensor-product-integrated"} or Proposition [Proposition 31](#nonunital-tensor-of-nt-flat-integrated){reference-type="ref" reference="nonunital-tensor-of-nt-flat-integrated"}.
◻ ## Two forgetful monoidal functors {#two-forgetful-monoidal-functors-subsecn} We will simply say that a bimodule over a nonunital algebra $K$ is *left flat* if it is flat as a left $\widecheck K$-module. In particular, $K$ itself is *left flat* if $K$ is flat as a left $\widecheck K$-module. Notice that a left flat $k$-algebra $K$ is t-unital if and only if $K^2=K$  [@Quil Definition 2.3], [@Pnun Remark 1.6]. **Remark 33**. Let $K$ be a left flat $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital pairing. Then Proposition [Proposition 23](#t-unital-pairing-comodules-prop){reference-type="ref" reference="t-unital-pairing-comodules-prop"}(2) tells that the essential image of the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow {\operatorname{\mathsf{Mod^n--}}}K$  [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"} is contained in the full subcategory of t-unital modules ${\operatorname{\mathsf{Mod^t--}}}K\subset{\operatorname{\mathsf{Mod^n--}}}K$. So we can (and will) write $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$. Obviously, both the functors ${\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ and ${\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$ are always faithful; and one of them is fully faithful if and only if the other one is. Given a left flat t-unital algebra $K$, a coalgebra $\mathcal C$, and a right t-unital pairing $\phi$, the construction of Corollary [Corollary 32](#unital-monoidal-of-flat-integrated){reference-type="ref" reference="unital-monoidal-of-flat-integrated"} produces an associative, unital monoidal category of left flat, right integrated, t-unital $K$-$K$-bimodules, which we will denote by $$K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K.$$ Let us also denote by $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K\subset K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ the full subcategory of left flat t-unital $K$-$K$-bimodules. Clearly, $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ is a monoidal subcategory in $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. When the algebra $K$ is left flat, the unit object $K\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ belongs to $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$. Similarly, let us denote by $\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C\subset\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ the full subcategory of left injective $\mathcal C$-$\mathcal C$-bicomodules (i. e., injective as left $\mathcal C$-comodules). By Lemma [Lemma 18](#cotensor-product-injective){reference-type="ref" reference="cotensor-product-injective"}, $\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C$ is a monoidal subcategory in $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ containing the unit object $\mathcal C\in\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. **Lemma 34**. 
*Let $K$ be a left flat t-unital algebra, $\mathcal C$ be a coalgebra, $F$ be an nt-flat (equivalently, t-flat) left $K$-module, and $\mathcal E$ be an injective left $\mathcal C$-comodule endowed with a t-unital right action of the algebra $K$ by left $\mathcal C$-comodule endomorphisms. Then the tensor product $\mathcal E\otimes_KF$ is an injective left $\mathcal C$-comodule.* *Proof.* Following the discussion in Section [2.5](#adjusted-co-contra-mods-subsecn){reference-type="ref" reference="adjusted-co-contra-mods-subsecn"}, it suffices to show that the functor $\mathcal N\longmapsto\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal E\otimes_KF)\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ is exact. By Proposition [Proposition 27](#tensor-cotensor-assoc-prop){reference-type="ref" reference="tensor-cotensor-assoc-prop"}, the latter functor is isomorphic to the functor $\mathcal N\longmapsto(\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal E)\otimes_KF$. Finally, if $K$ is a left flat t-unital algebra, then the full subcategory ${\operatorname{\mathsf{Mod^t--}}}K$ is closed under kernels in ${\operatorname{\mathsf{Mod^n--}}}K$ by Proposition [Proposition 8](#t-unital-kernels-flat-ring){reference-type="ref" reference="t-unital-kernels-flat-ring"}, and it follows that $\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal E$ is a t-unital right $K$-module for every right $\mathcal C$-comodule $\mathcal N$. Alternatively, one can notice the natural isomorphisms $\mathcal E\otimes_KF\simeq(\mathcal E\otimes_KK)\otimes_KF\simeq\mathcal E\otimes_K(K\otimes_KF)$. The left $K$-module $K\otimes_KF$ is flat by [@Pnun Proposition 8.16 and Corollary 8.20]. This reduces the question to the case when $F$ is a flat left $K$-module. When the left $K$-module $F$ is flat, the assumption that $\mathcal E$ is a t-unital right $K$-module is no longer needed for the validity of the lemma. ◻ Let $K$ be a left flat t-unital algebra, $\mathcal C$ be a coalgebra, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital pairing. By Corollary [Corollary 32](#unital-monoidal-of-flat-integrated){reference-type="ref" reference="unital-monoidal-of-flat-integrated"} and Lemma [Lemma 34](#tensor-product-with-flat-is-injective){reference-type="ref" reference="tensor-product-with-flat-is-injective"} (for $\mathcal E=\mathcal C$), we have two associative, unital monoidal forgetful functors of associative, unital monoidal categories, depicted in the upper part of the diagram below. We also have two associative, unital monoidal fully faithful inclusion functors of such monoidal categories, depicted in the lower part of the diagram $$\label{two-monoidal-forgetful-functors-diagram} \begin{gathered} \xymatrix{ & K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K \ar[ld] \ar[rd] \\ K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K \ar@{>->}[d] && \mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C\ar@{>->}[d] \\ K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K && \mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C } \end{gathered}$$ The leftmost diagonal functor assigns to an integrated bimodule $(B,\mathcal B,\beta)$ its underlying $K$-$K$-bimodule $B$. The rightmost diagonal functor assigns to $(B,\mathcal B,\beta)$ the $\mathcal C$-$\mathcal C$-bicomodule $\mathcal B$. 
Since the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K\subset{\operatorname{\mathsf{Mod^n--}}}K$ (cf. Remark [Remark 33](#lands-in-t-unital-remark){reference-type="ref" reference="lands-in-t-unital-remark"}) is faithful, the leftmost diagonal functor on the diagram is always faithful. ## The fully faithful comodule inclusion case {#fully-faithful-comodule-inclusion-secn} An important special case of the theory developed in Section [3.4](#integrated-bimodules-subsecn){reference-type="ref" reference="integrated-bimodules-subsecn"} occurs when the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$  [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"} from Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"} is fully faithful. Given an algebra $K$, a coalgebra $\mathcal C$, and a multiplicative pairing $\phi\colon\mathcal C\times K\longrightarrow k$, one can extend $\phi$ to a multiplicative pairing $\check\phi\colon\mathcal C\times\widecheck K\longrightarrow k$ by the obvious rule $\check\phi(c,\>a+r)=a\epsilon_\mathcal C(c)+\phi(c,r)$ for all $c\in\mathcal C$ and $a+r\in k\oplus K=\widecheck K$. Then the pairing $\check\phi$ is compatible with the counit of $\mathcal C$ and the unit of $\widecheck K$, in the sense that $\check\phi(c,1)= \epsilon_\mathcal C(c)$ for all $c\in\mathcal C$. Hence the induced unital $k$-algebra homomorphism $\check\phi^*\colon \widecheck K\longrightarrow\mathcal C^*$. Notice that, for every subcoalgebra $\mathcal E\subset\mathcal C$, every $\mathcal E$-comodule can be considered as a $\mathcal C$-comodule [@Pksurv Section 3.1]; so we have natural fully faithful functors $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$ and ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal C$. **Proposition 35**. *Let $K$ be a nonunital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a multiplicative pairing. Consider the following conditions:* 1. *the functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$  [\[left-comodules-to-modules\]](#left-comodules-to-modules){reference-type="eqref" reference="left-comodules-to-modules"} is fully faithful;* 2. *the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$  [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"} is fully faithful;* 3. *for every finite-dimensional subcoalgebra $\mathcal E\subset\mathcal C$, the composition of functors $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$ is fully faithful;* 4. *for every finite-dimensional subcoalgebra $\mathcal E\subset\mathcal C$, the composition of functors ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow {\operatorname{\mathsf{Mod^n--}}}K$ is fully faithful;* 5. *the pairing $\check\phi$ is nondegenerate in $\mathcal C$, i. 
e., for every nonzero $c\in\mathcal C$ there exists $\check r\in\widecheck K$ such that $\check\phi(c,\check r)\ne0$;* 6. *the image of the $k$-algebra homomorphism $\check\phi^*\colon\widecheck K\longrightarrow\mathcal C^*$ is dense in the natural pro-finite-dimensional (linearly compact) topology on the vector space/algebra $\mathcal C^*$;* 7. *the pairing $\phi$ is nondegenerate in $\mathcal C$, i. e., for every nonzero $c\in\mathcal C$ there exists $r\in K$ such that $\phi(c,r)\ne0$;* 8. *the image of the $k$-algebra homomorphism $\phi^*\colon K\longrightarrow\mathcal C^*$ is dense in the natural pro-finite-dimensional topology on $\mathcal C^*$.* *Then the following implications and equivalences hold: $$(1)\ \Longleftrightarrow\ (2)\ \Longleftrightarrow\ (3) \ \Longleftrightarrow\ (4)\ \Longleftarrow\ (5) \ \Longleftrightarrow\ (6)\ \Longleftarrow\ (7) \ \Longleftrightarrow\ (8).$$* *Proof.* The point is that a monomorphism in the category of coassociative coalgebras need not be an injective map [@NT Example in Section 3.2] (see also [@Ag]). For this reason, the conditions (5--6) are stronger than (1--4). (1) $\Longrightarrow$ (3) and (2) $\Longrightarrow$ (4) These implications hold because the functors $\mathcal E{\operatorname{\mathsf{--Comod}}}\allowbreak\longrightarrow\mathcal C{\operatorname{\mathsf{--Comod}}}$ and ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal C$ are fully faithful. (3) $\Longrightarrow$ (1) and (4) $\Longrightarrow$ (2) Any coassociative coalgebra $\mathcal C$ is the union of its finite-dimensional subcoalgebras $\mathcal E$, and any $\mathcal C$-comodule is the union of its finite-dimensional subcomodules. Furthermore, any finite-dimensional $\mathcal C$-comodule is a comodule over a finite-dimensional subcoalgebra of $\mathcal C$  [@Pksurv Lemma 3.1]. The desired implications follow from these observations. (3) $\Longleftrightarrow$ (4) The multiplicative, unital pairing $\check\phi\colon\mathcal C\times\widecheck K \longrightarrow k$ induces functors $\check\phi_*\colon\mathcal C{\operatorname{\mathsf{--Comod}}}\longrightarrow \widecheck K{\operatorname{\mathsf{--Mod}}}=K{\operatorname{\mathsf{--{}^n Mod}}}$ and $\check\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod--}}} \widecheck K={\operatorname{\mathsf{Mod^n--}}}K$, which can be identified with the functors $\phi_*$. This reduces the question to the case of a unital algebra and a unital pairing. Compose the unital $k$-algebra homomorphism $\check\phi^*\colon \widecheck K\longrightarrow\mathcal C^*$ with the surjective unital $k$-algebra homomorphism $\mathcal C^*\longrightarrow\mathcal E^*$ dual to the inclusion $\mathcal E\longrightarrow\mathcal C$. Denote by $L$ the image of the composition $\widecheck K\longrightarrow\mathcal C^* \longrightarrow\mathcal E^*$; so $L$ is a finite-dimensional unital $k$-algebra. Let $\mathcal L=L^*$ be the coalgebra dual to $L$. Then we have an injective homomorphism of finite-dimensional unital $k$-algebras $L\longrightarrow\mathcal E^*$ and the dual surjective homomorphism of finite-dimensional coalgebras $\mathcal E\longrightarrow\mathcal L$.
It is clear that the functor $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow\widecheck K{\operatorname{\mathsf{--Mod}}}$ is fully faithful if and only if the functor $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow \mathcal L{\operatorname{\mathsf{--Comod}}}$ induced by the coalgebra map $\mathcal E\longrightarrow\mathcal L$ (see [@Pksurv Section 1]) is fully faithful. Similarly, the functor ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Mod--}}}\widecheck K$ is fully faithful if and only if the functor ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal L$ is fully faithful. Finally, any one of the functors $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal L{\operatorname{\mathsf{--Comod}}}$ and/or ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal L$ is fully faithful if and only if $\mathcal E \longrightarrow\mathcal L$ is a monomorphism in the category of (finite-dimensional) coalgebras, which is a left-right symmetric property [@Sten Proposition XI.1.2], [@NT Proposition 3.2 or Theorem 3.5]. (6) $\Longrightarrow$ (3) or (6) $\Longrightarrow$ (4) The map $\check\phi^*\colon\widecheck K\longrightarrow\mathcal C^*$ has dense image if and only if, in the context of the discussion in the previous paragraphs, for every finite-dimensional subcoalgebra $\mathcal E\subset\mathcal C$, the composition $\widecheck K\longrightarrow\mathcal C^*\longrightarrow\mathcal E^*$ is surjective. In this case, the coalgebra map $\mathcal E\longrightarrow\mathcal L$ is an isomorphism. Then the induced functors $\mathcal E{\operatorname{\mathsf{--Comod}}}\longrightarrow\mathcal L{\operatorname{\mathsf{--Comod}}}$ and ${\operatorname{\mathsf{Comod--}}}\mathcal E\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal L$ are category equivalences. (5) $\Longleftrightarrow$ (6) and (7) $\Longleftrightarrow$ (8) Straightforward. (7) $\Longrightarrow$ (5) and (8) $\Longrightarrow$ (6) Obvious. ◻ Returning to the context of Section [3.4](#integrated-bimodules-subsecn){reference-type="ref" reference="integrated-bimodules-subsecn"}, we observe that if the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ is fully faithful, then so is the functor assigning to a right integrated bimodule $(B,\mathcal B,\beta)$ its underlying $K$-$K$-bimodule $B$. In particular, on the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"} from Section [3.5](#two-forgetful-monoidal-functors-subsecn){reference-type="ref" reference="two-forgetful-monoidal-functors-subsecn"}, the leftmost diagonal functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ is fully faithful in this case. In this special case, being right integrated becomes a *property* of a $K$-$K$-bimodule rather than an additional structure. So we will sometimes change our terminology and speak of *right integrable* instead of "right integrated" bimodules when the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ is known to be fully faithful. 
Moreover, if the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ is fully faithful, then a right $\mathcal C$-comodule structure on a $k$-vector space $\mathcal B$ commutes with the given left $\mathcal C$-comodule structure if and only if the underlying right $K$-module structure commutes with that given left $\mathcal C$-comodule structure. So condition (i) from Section [3.4](#integrated-bimodules-subsecn){reference-type="ref" reference="integrated-bimodules-subsecn"} *need not* be imposed in the definition of a right integrable bimodule $B$; it holds automatically (provided that condition (ii) holds). Simply put: assuming the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ to be fully faithful, a $K$-$K$-bimodule $B$ is called *right integrable* if the right $K$-module structure on the tensor product $\mathcal C\otimes_KB$ arises from some right $\mathcal C$-comodule structure. # Description of Right Semimodules and Left Semicontramodules [\[description-of-semimod-semicontra-secn\]]{#description-of-semimod-semicontra-secn label="description-of-semimod-semicontra-secn"} ## Construction of semialgebra {#construction-of-semialgebra-subsecn} A homomorphism of $k$-algebras $f\colon K\longrightarrow R$ is said to be *left t-unital* [@Pnun Section 9] if $f$ makes $R$ a t-unital left $K$-module. By [@Pnun Proposition 9.5], it then follows that the algebra $R$ is t-unital (but we will not use this fact). Similarly, $f$ is said to be *right t-unital* if $R$ is a t-unital right $K$-module; and $f$ is said to be *t-unital* if it is both left and right t-unital. In other words, $f$ is t-unital if and only if $R$ is a t-unital $K$-$K$-bimodule. Assume that the $k$-algebra $K$ is t-unital. Then the datum of a t-unital homomorphism of $k$-algebras $f\colon K\longrightarrow R$ is equivalent to the datum of a unital monoid object $R$ in the unital monoidal category $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. Let $f\colon K\longrightarrow R$ be a t-unital homomorphism of $k$-algebras. Assume further that $K$ is a left flat algebra and $R$ is a left flat $K$-$K$-bimodule (as defined in Section [3.5](#two-forgetful-monoidal-functors-subsecn){reference-type="ref" reference="two-forgetful-monoidal-functors-subsecn"}). So $R$ is a unital monoid object in the unital monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$. Finally, let $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing (in the sense of Sections [3.1](#multiplicative-pairings-subsecn){reference-type="ref" reference="multiplicative-pairings-subsecn"}-- [3.2](#t-unital-pairings-subsecn){reference-type="ref" reference="t-unital-pairings-subsecn"}), and let $(R,\mathcal R,\rho)$ be a right integrated bimodule structure on the $K$-$K$-bimodule $R$ (in the sense of Section [3.4](#integrated-bimodules-subsecn){reference-type="ref" reference="integrated-bimodules-subsecn"}). Then $(R,\mathcal R,\rho)$ is an object of the unital monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. 
We are interested in lifting the monoid structure on $R$ to a monoid structure on $(R,\mathcal R,\rho)$, along the monoidal functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ (the leftmost diagonal functor on the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"} from Section [3.5](#two-forgetful-monoidal-functors-subsecn){reference-type="ref" reference="two-forgetful-monoidal-functors-subsecn"}). The following proposition and corollary form our version of [@Psemi Section 10.2.1]. **Proposition 36**. *Let $K$ be a left flat t-unital $k$-algebra and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. In this context, a monoid structure on an object $(R,\mathcal R,\rho)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ is uniquely determined by the induced monoid structure on the object $R\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$. Given a t-unital homomorphism of $k$-algebras $f\colon K\longrightarrow R$ making $R$ a left flat $K$-$K$-bimodule and a right integrated bimodule structure $(R,\mathcal R,\rho)$ on $R$, the unital monoid structure on the object $R\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ can be lifted to a unital monoid structure on $(R,\mathcal R,\rho)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ if and only if the following two conditions hold:* - *the map $\mathcal C=\mathcal C\otimes_KK\longrightarrow\mathcal C\otimes_KR=\mathcal R$ induced by the map $f\colon K\longrightarrow R$ is a right $\mathcal C$-comodule morphism;* - *the map $\mathcal R\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal R=(\mathcal C\otimes_KR)\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KR)\simeq \mathcal C\otimes_KR\otimes_KR\longrightarrow\mathcal C\otimes_KR=\mathcal R$ induced by the multiplication map $R\otimes_KR\longrightarrow R$ is a right $\mathcal C$-comodule morphism. (Here the natural $\mathcal C$-$\mathcal C$-bicomodule isomorphism $(\mathcal C\otimes_KR)\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KR)\simeq\mathcal C\otimes_KR\otimes_KR$ is provided by Proposition [Proposition 27](#tensor-cotensor-assoc-prop){reference-type="ref" reference="tensor-cotensor-assoc-prop"}.)* *Proof.* The point is that the forgetful functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ taking $(R,\mathcal R,\rho)$ to $R$ is faithful. Any monoid structure on $(R,\mathcal R,\rho)$ is given by certain morphisms in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$, and these morphisms are uniquely determined by their images in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$. Hence the first assertion of the proposition.
Conversely, given a monoid structure on $R$, in order for it to be liftable to a monoid structure on $(R,\mathcal R,\rho)$, one needs to check that the structure morphisms of the given monoid structure in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ come from some morphisms in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. These are the conditions listed in the second assertion of the proposition. If the required morphisms exist in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$, they will automatically satisfy the axioms of a monoid structure in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$, because their images satisfy such axioms in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ and the forgetful functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K \longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ is faithful. Simply put, an equation on morphisms in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ is satisfied whenever its image in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ is satisfied. ◻ **Corollary 37**. *Let $K$ be a left flat t-unital $k$-algebra and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing such that the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$ is fully faithful. Let $f\colon K\longrightarrow R$ be a t-unital homomorphism of $k$-algebras making $R$ a left flat $K$-$K$-bimodule, and assume that $R$ is a right integrable $K$-$K$-bimodule (as per the discussion at the end of Section [3.6](#fully-faithful-comodule-inclusion-secn){reference-type="ref" reference="fully-faithful-comodule-inclusion-secn"}). Then there exists a unique (unital) monoid structure on the object $(R,\mathcal R,\rho)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ lifting the monoid structure on the object $R\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$.* *Proof.* Under the full-and-faithfulness assumption of the corollary, the functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K \longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ is fully faithful, so the conditions of Proposition [Proposition 36](#lift-of-monoid-structure-prop){reference-type="ref" reference="lift-of-monoid-structure-prop"} are satisfied automatically. In other words, the maps appearing in the conditions of the proposition are always right $K$-module maps. Under the full-and-faithfulness assumption, any right $K$-module map between right $\mathcal C$-comodules is a right $\mathcal C$-comodule map.
◻ Suppose that we have managed to lift a monoid structure on $R\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ to a monoid structure on $(R,\mathcal R,\rho)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$, as per Proposition [Proposition 36](#lift-of-monoid-structure-prop){reference-type="ref" reference="lift-of-monoid-structure-prop"} or Corollary [Corollary 37](#fully-faithful-lift-of-monoid-cor){reference-type="ref" reference="fully-faithful-lift-of-monoid-cor"}. Then, applying the monoidal functor $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow \mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C$ (the rightmost diagonal functor on the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"} from Section [3.5](#two-forgetful-monoidal-functors-subsecn){reference-type="ref" reference="two-forgetful-monoidal-functors-subsecn"}), we obtain a unital monoid object $\boldsymbol{\mathcal S}=\mathcal R$ in the unital monoidal category $\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C$. In other words, we have produced a (semiassociative, semiunital) *semialgebra* $\boldsymbol{\mathcal S}$ over the coalgebra $\mathcal C$, as defined in Section [2.6](#semialgebras-prelim-subsecn){reference-type="ref" reference="semialgebras-prelim-subsecn"}. Explicitly, we have $$\boldsymbol{\mathcal S}=\mathcal R=\mathcal C\otimes_KR.$$ The left $\mathcal C$-comodule structure on $\boldsymbol{\mathcal S}$ is induced by the left $\mathcal C$-comodule structure on $\mathcal C$. The right $\mathcal C$-comodule structure on $\boldsymbol{\mathcal S}$ is assumed given as a part of the data: this is what is meant by $(R,\mathcal R,\rho)$ being a right integrated $K$-$K$-bimodule. The semiunit map $\mathbf e\colon\mathcal C\longrightarrow\boldsymbol{\mathcal S}$ is the composition $$\mathcal C=\mathcal C\otimes_KK\longrightarrow\mathcal C\otimes_KR=\boldsymbol{\mathcal S},$$ as in the first condition of Proposition [Proposition 36](#lift-of-monoid-structure-prop){reference-type="ref" reference="lift-of-monoid-structure-prop"}. The semimultiplication map $\mathbf m\colon\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal S}$ is the composition $$\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}=(\mathcal C\otimes_KR)\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KR)\simeq \mathcal C\otimes_KR\otimes_KR\longrightarrow\mathcal C\otimes_KR=\boldsymbol{\mathcal S},$$ as in the second condition of the proposition. Moreover, the semialgebra $\boldsymbol{\mathcal S}$ is an injective left $\mathcal C$-comodule. 
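As a sanity check (a degenerate example; we only claim that the constructions specialize in the evident way), take $\mathcal C=k$ to be the one-dimensional coalgebra and $K=k$, with $\phi\colon k\times k\longrightarrow k$ the multiplication pairing, and let $R$ be a unital $k$-algebra with $f\colon k\longrightarrow R$ its unit map. Then right $\mathcal C$-comodules are simply $k$-vector spaces, the cotensor product $\mathbin{\text{\smaller$\square$}}_k$ reduces to $\otimes_k$, and $$\boldsymbol{\mathcal S}=\mathcal C\otimes_KR=R,$$ with the semiunit map $\mathbf e\colon k\longrightarrow R$ and the semimultiplication map $\mathbf m\colon R\otimes_kR\longrightarrow R$ being the unit and the multiplication of $R$. So in this degenerate case a semialgebra over $\mathcal C$ is just a unital $k$-algebra, and the descriptions below reduce to the usual notions of right and left $R$-modules.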
By Propositions [Proposition 19](#comodule-categories-abelian){reference-type="ref" reference="comodule-categories-abelian"}(b) and [Proposition 20](#contramodule-category-abelian){reference-type="ref" reference="contramodule-category-abelian"}, it follows that the category of right $\boldsymbol{\mathcal S}$-semimodules ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}$ and the category of left $\boldsymbol{\mathcal S}$-semicontramodules $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ are abelian (with exact forgetful functors ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\longrightarrow{\operatorname{\mathsf{Comod--}}}\mathcal C$ and $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\longrightarrow\mathcal C{\operatorname{\mathsf{--Contra}}}$). In the rest of Section [\[description-of-semimod-semicontra-secn\]](#description-of-semimod-semicontra-secn){reference-type="ref" reference="description-of-semimod-semicontra-secn"}, our aim is to describe these abelian categories more explicitly. ## Functor between module categories of right comodules and modules {#functor-module-categories-right-comods-subsecn} Let $K$ be a left flat t-unital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. As mentioned in Section [2.7](#semimodules-prelim-subsecn){reference-type="ref" reference="semimodules-prelim-subsecn"}, the category of right $\mathcal C$-comodules ${\operatorname{\mathsf{Comod--}}}\mathcal C$ is a right module category over the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. Restricting the action of $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ in ${\operatorname{\mathsf{Comod--}}}\mathcal C$ along the monoidal functors $$K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$$ (the rightmost functors on the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"}), we make ${\operatorname{\mathsf{Comod--}}}\mathcal C$ a right module category over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. On the other hand, according to Proposition [Proposition 13](#t-c-unital-monoidal-model-categories){reference-type="ref" reference="t-c-unital-monoidal-model-categories"}(b), the category of t-unital right $K$-modules ${\operatorname{\mathsf{Mod^t--}}}K$ is a right module category over the monoidal category $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. 
Restricting the action of $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ in ${\operatorname{\mathsf{Mod^t--}}}K$ along the monoidal functors $$K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$$ (the leftmost functors on the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"}), we make ${\operatorname{\mathsf{Mod^t--}}}K$ a right module category over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. **Proposition 38**. *The functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$ is an associative, unital module functor between the associative, unital right module categories ${\operatorname{\mathsf{Comod--}}}\mathcal C$ and ${\operatorname{\mathsf{Mod^t--}}}K$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$.* *Proof.* For an explanation concerning the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow {\operatorname{\mathsf{Mod^t--}}}K$, see Remark [Remark 33](#lands-in-t-unital-remark){reference-type="ref" reference="lands-in-t-unital-remark"}. The point is that, for any object $(B,\mathcal B,\beta)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ and any right $\mathcal C$-comodule $\mathcal N$, the natural isomorphism of right $K$-modules $$\mathcal N\otimes_K B=(\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal C)\otimes_KB\simeq \mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KB)=\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal B$$ holds by Proposition [Proposition 27](#tensor-cotensor-assoc-prop){reference-type="ref" reference="tensor-cotensor-assoc-prop"}. ◻ ## Description of right semimodules Let $K$ be a left flat t-unital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Let $f\colon K\longrightarrow R$ be a t-unital homomorphism of $k$-algebras making $R$ a left flat $K$-$K$-bimodule, and let $(R,\boldsymbol{\mathcal S},\rho)$ be a monoid object in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ lifting the monoid object $R\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$. So $\boldsymbol{\mathcal S}\in\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C$ is a semialgebra over $\mathcal C$. By the definition (see Section [2.7](#semimodules-prelim-subsecn){reference-type="ref" reference="semimodules-prelim-subsecn"}), a right semimodule $\boldsymbol{\mathcal N}$ over $\boldsymbol{\mathcal S}$ is a module object in the right module category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ over the monoid object $\boldsymbol{\mathcal S}$ in the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$.
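In down-to-earth terms (a sketch only, with the natural associativity identifications for the cotensor product suppressed; cf. Section [2.7](#semimodules-prelim-subsecn){reference-type="ref" reference="semimodules-prelim-subsecn"}), a right $\boldsymbol{\mathcal S}$-semimodule is a right $\mathcal C$-comodule $\boldsymbol{\mathcal N}$ endowed with a *semiaction* morphism of right $\mathcal C$-comodules $\mathbf n\colon\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal N}$ such that the two compositions $$\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\rightrightarrows \boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal N},$$ one induced by $\mathbf n$ and the other by the semimultiplication $\mathbf m$, agree (semiassociativity), and the composition $$\boldsymbol{\mathcal N}\simeq\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\,\mathcal C\longrightarrow \boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal N},$$ induced by the semiunit $\mathbf e$ followed by $\mathbf n$, is the identity map (semiunitality).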
In our context, when the monoid object $\boldsymbol{\mathcal S}$ in $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ comes from a monoid object $(R,\boldsymbol{\mathcal S},\rho)$ in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$, the datum of a module object over $\boldsymbol{\mathcal S}$ in the right module category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ over $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is equivalent to the datum of a module object over $(R,\boldsymbol{\mathcal S},\rho)$ in the right module category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. Here the definition of the structure of a right module category over $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ on the category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ was explained in the previous Section [4.2](#functor-module-categories-right-comods-subsecn){reference-type="ref" reference="functor-module-categories-right-comods-subsecn"}. According to Proposition [Proposition 38](#functor-module-cats-right-comods-prop){reference-type="ref" reference="functor-module-cats-right-comods-prop"}, we have a functor of right module categories $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow {\operatorname{\mathsf{Mod^t--}}}K$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. Applying this functor to a right semimodule $\boldsymbol{\mathcal N}$ over $\boldsymbol{\mathcal S}$, we obtain a module object over $(R,\boldsymbol{\mathcal S},\rho)$ in the right module category ${\operatorname{\mathsf{Mod^t--}}}K$ over $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$. The datum of a module object in ${\operatorname{\mathsf{Mod^t--}}}K$ over a monoid object $(R,\boldsymbol{\mathcal S},\rho)$ in $K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ is equivalent to the datum of a module object in ${\operatorname{\mathsf{Mod^t--}}}K$ over the monoid object $R\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. This means a right $R$-module $N$ that is t-unital as a right $K$-module. The latter condition is actually equivalent to $N$ being t-unital as a right $R$-module. **Remark 39**. Let $f\colon K\longrightarrow R$ be a right t-unital homomorphism of $k$-algebras. Then a right $R$-module is t-unital if and only if it is t-unital as a right $K$-module. This is the result of [@Pnun Corollary 9.7(a)]. Explicitly, let $\boldsymbol{\mathcal N}$ be a right $\boldsymbol{\mathcal S}$-semimodule. Then, first of all, $\boldsymbol{\mathcal N}$ is a right $\mathcal C$-comodule. The construction of the functor $\phi_*$  [\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="eqref" reference="right-comodules-to-modules"} endows $\boldsymbol{\mathcal N}$ with a (t-unital) right $K$-module structure. 
Now the composition $$\label{right-action-underlying-right-semiaction} \boldsymbol{\mathcal N}\otimes_KR=(\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\mathcal C)\otimes_KR\simeq\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KR) =\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\boldsymbol{\mathcal N}$$ of the associativity isomorphism from Proposition [Proposition 27](#tensor-cotensor-assoc-prop){reference-type="ref" reference="tensor-cotensor-assoc-prop"} and the semiaction map $\mathbf n\colon\boldsymbol{\mathcal N}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\boldsymbol{\mathcal N}$ provides an action map $\boldsymbol{\mathcal N}\otimes_KR\longrightarrow\boldsymbol{\mathcal N}$ making $\boldsymbol{\mathcal N}$ a right $R$-module. The following proposition and corollary form our version of [@Psemi first paragraph of Section 10.2.2]. **Proposition 40**. *The datum of a right $\boldsymbol{\mathcal S}$-semimodule $\boldsymbol{\mathcal N}$ is equivalent to the datum of a right $\mathcal C$-comodule $\mathcal N$, a (t-unital) right $R$-module $N$, and an isomorphism of right $K$-modules $\phi_*(\mathcal N)\simeq N$ satisfying the following condition. The map $\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}\longrightarrow\mathcal N$ constructed by reverting the formula [\[right-action-underlying-right-semiaction\]](#right-action-underlying-right-semiaction){reference-type="eqref" reference="right-action-underlying-right-semiaction"}, $$\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S}=\mathcal N\mathbin{\text{\smaller$\square$}}_\mathcal C(\mathcal C\otimes_KR)\simeq\mathcal N\otimes_KR \simeq N\otimes_KR\longrightarrow N\simeq\mathcal N$$ must be a right $\mathcal C$-comodule morphism.* *Proof.* The logic here is similar to that in Proposition [Proposition 36](#lift-of-monoid-structure-prop){reference-type="ref" reference="lift-of-monoid-structure-prop"}. The point is that the functor of module categories $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$ from Proposition [Proposition 38](#functor-module-cats-right-comods-prop){reference-type="ref" reference="functor-module-cats-right-comods-prop"} is faithful. Consequently, a module structure on an object $\mathcal N\in{\operatorname{\mathsf{Comod--}}}\mathcal C$ over a monoid object $(R,\boldsymbol{\mathcal S},\rho)\in K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K$ is uniquely determined by the induced module structure over $(R,\boldsymbol{\mathcal S},\rho)$ on the object $N=\phi_*(\mathcal N)\in{\operatorname{\mathsf{Mod^t--}}}K$. Conversely, given a t-unital right $K$-module $N$ with a module structure over $(R,\boldsymbol{\mathcal S},\rho)$ (which means an $R$-module structure), and given a right $\mathcal C$-comodule $\mathcal N$ such that $\phi_*(\mathcal N)=N$, in order for the $R$-module structure on $N$ to be liftable to an $\boldsymbol{\mathcal S}$-semimodule structure on $\mathcal N$, one needs to check that the structure morphism of the given module structure in ${\operatorname{\mathsf{Mod^t--}}}K$ comes from some morphism in ${\operatorname{\mathsf{Comod--}}}\mathcal C$. This is the condition stated in the proposition.
If the required morphism exists in ${\operatorname{\mathsf{Comod--}}}\mathcal C$, it will automatically satisfy the axioms of a module structure in ${\operatorname{\mathsf{Comod--}}}\mathcal C$, because its image satisfies such axioms in ${\operatorname{\mathsf{Mod^t--}}}K$. Simply put, an equation on morphisms in ${\operatorname{\mathsf{Comod--}}}\mathcal C$ is satisfied whenever its image in ${\operatorname{\mathsf{Mod^t--}}}K$ is satisfied. ◻ **Corollary 41**. *Assume that the functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K$ is fully faithful. Then the datum of a right $\boldsymbol{\mathcal S}$-semimodule $\boldsymbol{\mathcal N}$ is equivalent to the datum of a right $\mathcal C$-comodule $\mathcal N$, a (t-unital) right $R$-module $N$, and an isomorphism of right $K$-modules $\phi_*(\mathcal N)\simeq N$. In other words, this is the datum of a (t-unital) right $R$-module $N$ whose underlying (t-unital) right $K$-module structure has the property that it arises from some right $\mathcal C$-comodule structure.* *Proof.* This is similar to Corollary [Corollary 37](#fully-faithful-lift-of-monoid-cor){reference-type="ref" reference="fully-faithful-lift-of-monoid-cor"}. The map appearing in the condition of Proposition [Proposition 40](#right-semimodules-described-prop){reference-type="ref" reference="right-semimodules-described-prop"} is always a morphism of right $K$-modules. Under the full-and-faithfulness assumption of the corollary, any right $K$-module morphism between right $\mathcal C$-comodules is a right $\mathcal C$-comodule morphism. ◻ ## Projectivity instead of flatness We start with a version of Proposition [Proposition 31](#nonunital-tensor-of-nt-flat-integrated){reference-type="ref" reference="nonunital-tensor-of-nt-flat-integrated"} with the nt-flatness condition replaced by nc-projectivity. We say that a right integrated $K$-$K$-bimodule $(B,\mathcal B,\beta)$ is *left nc-projective* if the left $K$-module $B$ is nc-projective. **Proposition 42**. *Let $K$ be a nonunital algebra, $\mathcal C$ be a coalgebra, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Then the category of left nc-projective, right t-unital right integrated $K$-$K$-bimodules $(B,\mathcal B,\beta)$ is a *nonunital* tensor/monoidal full subcategory of the nonunital tensor/monoidal category from Proposition [Proposition 31](#nonunital-tensor-of-nt-flat-integrated){reference-type="ref" reference="nonunital-tensor-of-nt-flat-integrated"}.* *Proof.* Follows from Lemma [Lemma 10](#nc-projective-tensor-product){reference-type="ref" reference="nc-projective-tensor-product"} and Proposition [Proposition 12](#nc-projective-are-nt-flat){reference-type="ref" reference="nc-projective-are-nt-flat"}. ◻ We will say that a bimodule over a nonunital algebra $K$ is *left projective* if it is projective as a left $\widecheck K$-module. In particular, $K$ itself is *left projective* if $K$ is projective as a left $\widecheck K$-module. A right integrated bimodule $(B,\mathcal B,\beta)$ is called *left projective* if the $K$-$K$-bimodule $B$ is left projective. **Remark 43**. Let $K$ be a left projective $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital pairing.
Then Proposition [Proposition 24](#t-unital-pairing-contramodules-prop){reference-type="ref" reference="t-unital-pairing-contramodules-prop"}(3) tells that the essential image of the functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$  [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="eqref" reference="left-contramodules-to-modules"} is contained in the full subcategory of c-unital modules $K{\operatorname{\mathsf{--{}^c Mod}}}\subset K{\operatorname{\mathsf{--{}^n Mod}}}$. So we can (and will) write $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$. Obviously, both the functors $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$ and $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$ are always faithful; and one of them is fully faithful if and only if the other one is. We will denote by $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K\subset K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K$ the full subcategory of left projective $K$-$K$-bimodules. Clearly, $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K$ is a monoidal subcategory in $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. When the algebra $K$ is left projective, the unit object $K\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ belongs to $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K$. **Corollary 44**. *Let $K$ be a left projective t-unital algebra, $\mathcal C$ be a coalgebra, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Then the category of t-unital, left projective right integrated $K$-$K$-bimodules $(B,\mathcal B,\beta)$ is a monoidal full subcategory of the *unital* monoidal category from Corollary [Corollary 32](#unital-monoidal-of-flat-integrated){reference-type="ref" reference="unital-monoidal-of-flat-integrated"}, containing the unit object $(K,\mathcal C,\varkappa)$.* *Proof.* The assertion is straightforward, so we only recall that any projective module is nc-projective, as mentioned in Section [1.5](#nt-nc-flat-proj-prelim-subsecn){reference-type="ref" reference="nt-nc-flat-proj-prelim-subsecn"}. 
◻ We will denote the unital monoidal full subcategory defined in Corollary [Corollary 44](#unital-monoidal-of-projective-integrated){reference-type="ref" reference="unital-monoidal-of-projective-integrated"} by $$K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K\,\subset\, K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K.$$ For any left projective t-unital algebra $K$, coalgebra $\mathcal C$, and a right t-unital multiplicative pairing $\phi\colon\mathcal C\times K\longrightarrow k$, we have the following commutative diagram extending the diagram [\[two-monoidal-forgetful-functors-diagram\]](#two-monoidal-forgetful-functors-diagram){reference-type="eqref" reference="two-monoidal-forgetful-functors-diagram"} from Section [3.5](#two-forgetful-monoidal-functors-subsecn){reference-type="ref" reference="two-forgetful-monoidal-functors-subsecn"}: $$\label{two-monoidal-forgetful-functors-extended-diag} \begin{gathered} \xymatrix{ & K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K \ar@{>->}[d] \ar[ld] \\ K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K \ar@{>->}[d] & K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K \ar[ld] \ar[rd] \\ K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K \ar@{>->}[d] && \mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C\ar@{>->}[d] \\ K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K && \mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C } \end{gathered}$$ All the functors are associative, unital monoidal functors of associative, unital monoidal categories. The four vertical arrows denote fully faithful functors. ## Functor between module categories of left contramodules and modules {#functor-module-categories-left-contramods-subsecn} Let $K$ be a left projective t-unital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. As mentioned in Section [2.8](#semicontramodules-prelim-subsecn){reference-type="ref" reference="semicontramodules-prelim-subsecn"}, the opposite category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ to the category of left $\mathcal C$-contramodules $\mathcal C{\operatorname{\mathsf{--Contra}}}$ is a right module category over the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. Restricting the action of $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ in $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ along the monoidal functors $$K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t_{\mathcal C-int}--}}}K \longrightarrow\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C\longrightarrow\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$$ (the right-hand side of the diagram [\[two-monoidal-forgetful-functors-extended-diag\]](#two-monoidal-forgetful-functors-extended-diag){reference-type="eqref" reference="two-monoidal-forgetful-functors-extended-diag"}), we make $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ a right module category over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$. 
On the other hand, according to Proposition [Proposition 13](#t-c-unital-monoidal-model-categories){reference-type="ref" reference="t-c-unital-monoidal-model-categories"}(c), the opposite category $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ to the category of c-unital left $K$-modules $K{\operatorname{\mathsf{--{}^c Mod}}}$ is a right module category over the monoidal category $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. Restricting the action of $K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$ in $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ along the monoidal functors $$K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K\longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K \longrightarrow K{\operatorname{\mathsf{ --{}^{\mskip 6.1\thinmuskip t}_{flat} Bimod^t--}}}K\longrightarrow K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$$ (the left-hand side of the diagram [\[two-monoidal-forgetful-functors-extended-diag\]](#two-monoidal-forgetful-functors-extended-diag){reference-type="eqref" reference="two-monoidal-forgetful-functors-extended-diag"}), we make $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ a right module category over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$. **Proposition 45**. *The functor $\phi_*^{\mathsf{op}}\colon\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ is an associative, unital module functor between the associative, unital right module categories $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ and $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$.* *Proof.* The functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$ was discussed in Remark [Remark 43](#lands-in-c-unital-remark){reference-type="ref" reference="lands-in-c-unital-remark"}. The proposition holds because, for any object $(B,\mathcal B,\beta)\in K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$ and any left $\mathcal C$-contramodule $\mathfrak P$, there is a natural isomorphism of left $K$-modules $$\mathop{\mathrm{Hom}}_K(B,\mathfrak P)=\mathop{\mathrm{Hom}}_K(B,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\mathfrak P))\simeq \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C\otimes_KB,\>\mathfrak P)=\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal B,\mathfrak P)$$ provided by Proposition [Proposition 28](#tensor-hom-cohom-assoc-prop){reference-type="ref" reference="tensor-hom-cohom-assoc-prop"}. ◻ ## Description of left semicontramodules Let $K$ be a left projective t-unital $k$-algebra, $\mathcal C$ be a coalgebra over $k$, and $\phi\colon\mathcal C\times K\longrightarrow k$ be a right t-unital multiplicative pairing. Let $f\colon K\longrightarrow R$ be a t-unital homomorphism of $k$-algebras making $R$ a left projective $K$-$K$-bimodule, and let $(R,\boldsymbol{\mathcal S},\rho)$ be a monoid object in $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$ lifting the monoid object $R\in K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t--}}}K$. 
So $\boldsymbol{\mathcal S}\in\mathcal C{\operatorname{\mathsf{--{}_{inj}Bicomod--}}}\mathcal C$ is a semialgebra over $\mathcal C$. By the definition (see Section [2.8](#semicontramodules-prelim-subsecn){reference-type="ref" reference="semicontramodules-prelim-subsecn"}), a left semicontramodule $\boldsymbol{\mathfrak P}$ over $\boldsymbol{\mathcal S}$ is essentially (up to a passage to the opposite category) a module object in the right module category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ over the monoid object $\boldsymbol{\mathcal S}$ in the monoidal category $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$. In our context, when the monoid object $\boldsymbol{\mathcal S}$ in $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ comes from a monoid object $(R,\boldsymbol{\mathcal S},\rho)$ in $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$, the datum of a module object over $\boldsymbol{\mathcal S}$ in the right module category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ over $\mathcal C{\operatorname{\mathsf{--Bicomod--}}}\mathcal C$ is equivalent to the datum of a module object over $(R,\boldsymbol{\mathcal S},\rho)$ in the right module category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$. Here the definition of the structure of a right module category over $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$ on the category $\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}$ was explained in the previous Section [4.5](#functor-module-categories-left-contramods-subsecn){reference-type="ref" reference="functor-module-categories-left-contramods-subsecn"}. According to Proposition [Proposition 45](#functor-module-cats-left-contramods-prop){reference-type="ref" reference="functor-module-cats-left-contramods-prop"}, we have a functor of right module categories $\phi_*^{\mathsf{op}}\colon\mathcal C{\operatorname{\mathsf{--Contra}}}^{\mathsf{op}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ over the monoidal category $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$. Applying this functor to a left semicontramodule $\boldsymbol{\mathfrak P}$ over $\boldsymbol{\mathcal S}$, we obtain a module object over $(R,\boldsymbol{\mathcal S},\rho)$ in the right module category $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ over $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$. The datum of a module object in $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ over a monoid object $(R,\boldsymbol{\mathcal S},\rho)$ in $K{\operatorname{\mathsf{ --{}^{\mskip 7.4\thinmuskip t}_{proj} Bimod^t_{\mathcal C-int}--}}}K$ is equivalent to the datum of a module object in $K{\operatorname{\mathsf{--{}^c Mod}}}^{\mathsf{op}}$ over the monoid object $R\in K{\operatorname{\mathsf{--{}^t Bimod^t--}}}K$. This means a left $R$-module $P$ that is c-unital as a left $K$-module. The latter condition is actually equivalent to $P$ being c-unital as a left $R$-module. **Remark 46**. Let $f\colon K\longrightarrow R$ be a right t-unital homomorphism of $k$-algebras. Then a left $R$-module is c-unital if and only if it is c-unital as a left $K$-module. This is the result of [@Pnun Corollary 9.7(b)]. 
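In parallel with the semimodule case, and again only as a sketch (the natural identifications provided by the adjunction between $\mathbin{\text{\smaller$\square$}}_\mathcal C$ and $\mathop{\mathrm{Cohom}}_\mathcal C$ are suppressed; cf. Section [2.8](#semicontramodules-prelim-subsecn){reference-type="ref" reference="semicontramodules-prelim-subsecn"}), a left $\boldsymbol{\mathcal S}$-semicontramodule is a left $\mathcal C$-contramodule $\boldsymbol{\mathfrak P}$ endowed with a *semicontraaction* morphism of left $\mathcal C$-contramodules $\mathbf p\colon\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$ such that the two compositions $$\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\rightrightarrows \mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S}\mathbin{\text{\smaller$\square$}}_\mathcal C\boldsymbol{\mathcal S},\>\boldsymbol{\mathfrak P}),$$ one induced by $\mathbf p$ and the other by the semimultiplication $\mathbf m$, agree, and the composition $$\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})\longrightarrow \mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\boldsymbol{\mathfrak P})\simeq\boldsymbol{\mathfrak P},$$ whose second arrow is induced by the semiunit $\mathbf e$, is the identity map.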
Explicitly, let $\boldsymbol{\mathfrak P}$ be a left $\boldsymbol{\mathcal S}$-semicontramodule. Then, first of all, $\boldsymbol{\mathfrak P}$ is a left $\mathcal C$-contramodule. The construction of the functor $\phi_*$  [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="eqref" reference="left-contramodules-to-modules"} endows $\boldsymbol{\mathfrak P}$ with a (c-unital) left $K$-module structure. Now the composition $$\begin{gathered} \label{left-action-underlying-left-semicontraaction} \boldsymbol{\mathfrak P}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})=\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C\otimes_KR,\>\boldsymbol{\mathfrak P}) \\ \simeq\mathop{\mathrm{Hom}}_K(R,\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C,\boldsymbol{\mathfrak P}))=\mathop{\mathrm{Hom}}_K(R,\boldsymbol{\mathfrak P})\end{gathered}$$ of the semicontraaction map $\mathbf p\colon\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\boldsymbol{\mathfrak P})$ and the associativity isomorphism from Proposition [Proposition 28](#tensor-hom-cohom-assoc-prop){reference-type="ref" reference="tensor-hom-cohom-assoc-prop"} provides an action map $\boldsymbol{\mathfrak P}\longrightarrow\mathop{\mathrm{Hom}}_K(R,\boldsymbol{\mathfrak P})$ making $\boldsymbol{\mathfrak P}$ a left $R$-module. The following proposition is our version of [@Psemi second paragraph of Section 10.2.2]. **Proposition 47**. *The datum of a left $\boldsymbol{\mathcal S}$-semicontramodule $\boldsymbol{\mathfrak P}$ is equivalent to the datum of a left $\mathcal C$-contramodule $\mathfrak P$, a (c-unital) left $R$-module $P$, and an isomorphism of left $K$-modules $\phi_*(\mathfrak P) \simeq P$ satisfying the following condition. The map $\mathfrak P\longrightarrow\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathfrak P)$ constructed by reverting the formula [\[left-action-underlying-left-semicontraaction\]](#left-action-underlying-left-semicontraaction){reference-type="eqref" reference="left-action-underlying-left-semicontraaction"}, $$\begin{gathered} \mathfrak P\simeq P\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_K(R,P)\simeq\mathop{\mathrm{Hom}}_K(R,\mathfrak P) \\ \simeq\mathop{\mathrm{Cohom}}_\mathcal C(\mathcal C\otimes_KR,\>\mathfrak P)=\mathop{\mathrm{Cohom}}_\mathcal C(\boldsymbol{\mathcal S},\mathfrak P)\end{gathered}$$ must be a left $\mathcal C$-contramodule morphism.* *Proof.* The proof is similar to that of Proposition [Proposition 40](#right-semimodules-described-prop){reference-type="ref" reference="right-semimodules-described-prop"}. The point is that the functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$ is always faithful. ◻ **Corollary 48**. *Assume that the functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$ is fully faithful. Then the datum of a left $\boldsymbol{\mathcal S}$-semicontramodule $\boldsymbol{\mathfrak P}$ is equivalent to the datum of a left $\mathcal C$-contramodule $\mathfrak P$, a (c-unital) left $R$-module $P$, and an isomorphism of left $K$-modules $\phi_*(\mathfrak P)\simeq P$. 
In other words, this is the datum of a (c-unital) left $R$-module $P$ whose underlying (c-unital) left $K$-module structure has the property that it arises from some left $\mathcal C$-contramodule structure.* *Proof.* This is similar to Corollary [Corollary 41](#right-semimodules-described-fully-faithful-case){reference-type="ref" reference="right-semimodules-described-fully-faithful-case"}. The map appearing in the condition of Proposition [Proposition 47](#left-semicontramodules-described-prop){reference-type="ref" reference="left-semicontramodules-described-prop"} is always a morphism of left $K$-modules. Under the full-and-faithfulness assumption of the corollary, any left $K$-module morphism between left $\mathcal C$-contramodules is a left $\mathcal C$-contramodule morphism. ◻ **Remark 49**. The conditions under which the comodule forgetful functor $\phi_*\colon{\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^n--}}}K$ is fully faithful were discussed in Proposition [Proposition 35](#comodule-forgetful-full-and-faithfulness){reference-type="ref" reference="comodule-forgetful-full-and-faithfulness"}. The contramodule forgetful functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$ being fully faithful is a more rare occurrence. In particular, in the case when the dual $k$-algebra to $\mathcal C$ plays the role of $K$, the comodule forgetful/inclusion functor ${\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod--}}}\mathcal C^*$ is *always* fully faithful [@Swe Propositions 2.1.1--2.1.2 and Theorem 2.1.3(e)]. This is *not* true for the contramodule forgetful functor $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathcal C^*{\operatorname{\mathsf{--Mod}}}$. A complete characterization is known in the particular case when the coalgebra $\mathcal C$ is *conilpotent*: the functor $\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathcal C^*{\operatorname{\mathsf{--Mod}}}$ is fully faithful if and only if the coalgebra $\mathcal C$ is *finitely cogenerated*. See [@Psm Theorem 2.1] for the "if" implication and [@Phff Example 7.2] for the "only if". Using the full result of [@Psm Theorem 2.1], one can show that, for a conilpotent coalgebra $\mathcal C$, the functor $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^n Mod}}}$ is fully faithful if and only if $\mathcal C$ is finitely cogenerated and the map $\check\phi^*\colon\widecheck K\longrightarrow\mathcal C^*$ has dense image in $\mathcal C^*$. Not so much is known outside of the conilpotent case, but examples of fully faithful contramodule forgetful functors $\phi_*\colon\mathcal C{\operatorname{\mathsf{--Contra}}} \longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}$ are provided by [@Plfin Theorem 3.7] or Theorem [Theorem 55](#strictly-lower-contramodule-full-and-faithfulness){reference-type="ref" reference="strictly-lower-contramodule-full-and-faithfulness"} below (use [@Pnun Proposition 6.6] or Proposition [Proposition 52](#functors-as-c-unital-modules){reference-type="ref" reference="functors-as-c-unital-modules"} below to establish a comparison between our present setting and the one in [@Plfin]). 
These examples will be discussed in the next Sections [\[loc-fin-subcategories-secn\]](#loc-fin-subcategories-secn){reference-type="ref" reference="loc-fin-subcategories-secn"}--[\[examples-secn\]](#examples-secn){reference-type="ref" reference="examples-secn"}. # Locally Finite Subcategories in $k$-Linear Categories [\[loc-fin-subcategories-secn\]]{#loc-fin-subcategories-secn label="loc-fin-subcategories-secn"} As above, in this section $k$ denotes a fixed ground field. We build upon the expositions in the preprints [@Plfin] and [@Pnun Section 6]. ## Recollections on $k$-linear categories A *$k$-linear category* $\mathsf E$ is a category enriched in $k$-vector spaces. In other words, for every pair of objects $x$, $y\in\mathsf E$, a $k$-vector space $\mathsf E_{y,x}=\mathop{\mathrm{Hom}}_\mathsf E(x,y)$ is given, and for every triple of objects $x$, $y$, $z\in\mathsf E$, a $k$-linear multiplication/composition map $$\mathop{\mathrm{Hom}}_\mathsf E(y,z)\otimes_k\mathop{\mathrm{Hom}}_\mathsf E(x,y)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_\mathsf E(x,z)$$ is defined. Identity morphisms $\mathrm{id}_x\in\mathop{\mathrm{Hom}}_\mathsf E(x,x)$ must exist for all $x\in\mathsf E$, and the associativity and unitality axioms are imposed on the compositions of morphisms. To every small $k$-linear category $\mathsf E$, a nonunital $k$-algebra $$R_\mathsf E=\bigoplus\nolimits_{x,y\in\mathsf E}\mathop{\mathrm{Hom}}_\mathsf E(x,y),$$ with the multiplication map $R_\mathsf E\otimes_k R_\mathsf E\longrightarrow R_\mathsf E$ induced by the composition of morphisms in $\mathsf E$, is assigned. **Proposition 50**. *For every small $k$-linear category $\mathsf E$, the $k$-algebra $R=R_\mathsf E$ is t-unital. In fact, $R_\mathsf E$ is both left and right s-unital. The algebra $R$ is a projective left and a projective right $\widecheck R$-module.* *Proof.* This is [@Pnun Lemmas 6.1 and 6.2], which are applicable in view of [@Pnun Lemma 6.4]. ◻ Let $\mathsf E$ be a small $k$-linear category. Then a *left $\mathsf E$-module* $\mathsf M$ is defined as a covariant $k$-linear functor $\mathsf M\colon\mathsf E\longrightarrow k{\operatorname{\mathsf{--Vect}}}$. This is the same thing as a covariant additive functor $\mathsf M\colon\mathsf E\longrightarrow\mathsf{Ab}$, since the action of the scalar multiples of the identity morphism $\mathrm{id}_x$ always endows the abelian group $\mathsf M(x)$ with a $k$-vector space structure. Explicitly, a left $\mathsf E$-module $\mathsf M$ is the datum of a vector space $\mathsf M(x)$ for every $x\in\mathsf E$ and a linear map $$\mathop{\mathrm{Hom}}_\mathsf E(x,y)\otimes_k\mathsf M(x)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf M(y)$$ for every pair of objects $x$, $y\in\mathsf E$. The usual associativity and unitality axioms are imposed. We denote the abelian category of left $\mathsf E$-modules by $\mathsf E{\operatorname{\mathsf{--Mod}}}$. Similarly, a *right $\mathsf E$-module* $\mathsf N$ is a contravariant $k$-linear functor $\mathsf N\colon\mathsf E^{\mathsf{op}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$, or equivalently, a contravariant additive functor $\mathsf N\colon\mathsf E^{\mathsf{op}}\longrightarrow\mathsf{Ab}$. 
In other words, it is the datum of a vector space $\mathsf N(x)$ for every $x\in\mathsf E$ and a linear map $$\mathop{\mathrm{Hom}}_\mathsf E(x,y)\otimes_k\mathsf N(y)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf N(x)$$ for every pair of objects $x$, $y\in\mathsf E$, satisfying the usual associativity and unitality axioms. The abelian category of right $\mathsf E$-modules is denoted by ${\operatorname{\mathsf{Mod--}}}\mathsf E$. **Proposition 51**. *Let $\mathsf E$ be a small $k$-linear category and $R=R_\mathsf E$ be the related nonunital $k$-algebra.* *[(a)]{.upright} There is a natural equivalence of abelian categories $$\mathsf E{\operatorname{\mathsf{--Mod}}}\simeq R_\mathsf E{\operatorname{\mathsf{--{}^t Mod}}}.$$ To any left $\mathsf E$-module $\mathsf M$, the t-unital left $R_\mathsf E$-module $M=\bigoplus_{x\in\mathsf E}\mathsf M(x)$, with the left action of $R_\mathsf E$ in $M$ induced by the left action of $\mathsf E$ in $\mathsf M$, is assigned.* *[(b)]{.upright} There is a natural equivalence of abelian categories $${\operatorname{\mathsf{Mod--}}}\mathsf E\simeq {\operatorname{\mathsf{Mod^t--}}}R_\mathsf E.$$ To any right $\mathsf E$-module $\mathsf N$, the t-unital right $R_\mathsf E$-module $N=\bigoplus_{x\in\mathsf E}\mathsf N(x)$, with the right action of $R_\mathsf E$ in $N$ induced by the right action of $\mathsf E$ in $\mathsf N$, is assigned.* *Proof.* This is [@Pnun Proposition 6.5]. ◻ **Proposition 52**. *Let $\mathsf E$ be a small $k$-linear category and $R=R_\mathsf E$ be the related nonunital $k$-algebra. Then there is a natural equivalence of abelian categories $$\mathsf E{\operatorname{\mathsf{--Mod}}}\simeq R_\mathsf E{\operatorname{\mathsf{--{}^c Mod}}}.$$ To any left $\mathsf E$-module $\mathsf P$, the c-unital left $R_\mathsf E$-module $P=\prod_{x\in\mathsf E}\mathsf P(x)$, with the left action of $R_\mathsf E$ in $P$ induced by the left action of $\mathsf E$ in $\mathsf P$, is assigned.* *Proof.* This is [@Pnun Proposition 6.6]. ◻ ## Recollections on locally finite categories {#locally-finite-categories-subsecn} Let $\mathsf F$ be a small $k$-linear category. We define a preorder relation $\preceq$ on the set of all objects of $\mathsf F$ as the reflexive, transitive binary relation generated by the following elementary comparisons: $x\preceq y$ if there exists a nonzero morphism from $x$ to $y$ in $\mathsf F$, that is, if $\mathop{\mathrm{Hom}}_\mathsf F(x,y)\ne0$. In other words, we write $x\preceq y$ if there exists a finite chain of composable nonzero morphisms in $\mathsf F$ going from $x$ to $y$  [@Plfin Definition 2.1]. Furthermore, we will write $x\prec y$ whenever $x\preceq y$ but $y\not\preceq x$. We say that a small $k$-linear category $\mathsf F$ is *locally finite* [@Plfin Definition 2.2] if the following two conditions are satisfied: - Hom-finiteness: for any two objects $x$ and $y\in\mathsf F$, the $k$-vector space $\mathop{\mathrm{Hom}}_\mathsf F(x,y)$ is finite-dimensional; - interval finiteness: for any two objects $x$ and $y\in\mathsf F$, the set of all objects $z\in\mathsf F$ such that $x\preceq z\preceq y$ is finite. To every locally finite $k$-linear category $\mathsf F$, the construction spelled out in [@Plfin Construction 2.3] assigns a coassociative, counital coalgebra $$\mathcal C_\mathsf F=\bigoplus\nolimits_{x,y\in\mathsf F}\mathop{\mathrm{Hom}}_\mathsf F(x,y)^*$$ over the field $k$. 
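A minimal example may help to keep these constructions concrete (a standard illustration; we only claim that the definitions specialize in the expected way). Let $\mathsf F$ be the $k$-linear category with two nonzero objects $1$ and $2$ and morphism spaces $\mathop{\mathrm{Hom}}_\mathsf F(1,1)=k\,\mathrm{id}_1$, $\mathop{\mathrm{Hom}}_\mathsf F(2,2)=k\,\mathrm{id}_2$, $\mathop{\mathrm{Hom}}_\mathsf F(1,2)=k\alpha$, and $\mathop{\mathrm{Hom}}_\mathsf F(2,1)=0$ (the $k$-linearization of the path category of the quiver $1\to2$). Then $$R_\mathsf F=k\,\mathrm{id}_1\oplus k\,\mathrm{id}_2\oplus k\alpha$$ is a three-dimensional algebra; since $\mathsf F$ has finitely many objects, $R_\mathsf F$ is unital, with unit $\mathrm{id}_1+\mathrm{id}_2$, and is isomorphic to the algebra of lower triangular $2\times2$ matrices over $k$ via $\mathrm{id}_1\mapsto E_{11}$, $\mathrm{id}_2\mapsto E_{22}$, $\alpha\mapsto E_{21}$ (under one standard convention for the order of composition). A left $\mathsf F$-module is the datum of two vector spaces $\mathsf M(1)$, $\mathsf M(2)$ and a linear map $\mathsf M(1)\longrightarrow\mathsf M(2)$, i.e., a representation of the quiver, the corresponding t-unital left $R_\mathsf F$-module being $\mathsf M(1)\oplus\mathsf M(2)$. The only nontrivial comparison in the preorder is $1\preceq2$, the category $\mathsf F$ is locally finite (indeed finite), and $\mathcal C_\mathsf F$ is the three-dimensional dual vector space $R_\mathsf F^*$ with the comultiplication dual to the multiplication of $R_\mathsf F$.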
The collection of all the natural duality pairings $$\phi_{x,y}\colon\mathop{\mathrm{Hom}}_\mathsf F(x,y)^*\times\mathop{\mathrm{Hom}}_\mathsf F(x,y)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax k$$ induces, by passing to the direct sum over all the objects $x$, $y\in\mathsf F$, a multiplicative bilinear pairing $$\phi_\mathsf F\colon\mathcal C_\mathsf F\times R_\mathsf F\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax k.$$ **Lemma 53**. *For any locally finite $k$-linear category $\mathsf F$, the multiplicative pairing $\phi_\mathsf F\colon\mathcal C_\mathsf F\times R_\mathsf F\longrightarrow k$ is both left and right t-unital (and in fact, s-unital).* *Proof.* The condition (3) of Proposition [Proposition 25](#s-unital-pairings-characterized){reference-type="ref" reference="s-unital-pairings-characterized"} is obviously satisfied. So it remains to recall the second assertion of Proposition [Proposition 50](#category-algebras-t-unital-and-projective){reference-type="ref" reference="category-algebras-t-unital-and-projective"} and apply Corollary [Corollary 26](#s-unital-t-unital-pairings-cor){reference-type="ref" reference="s-unital-t-unital-pairings-cor"}. ◻ For any locally finite $k$-linear category $\mathsf F$, the *comodule inclusion functors* $$\begin{aligned} {2} \Upsilon_\mathsf F\colon &\mathcal C_\mathsf F{\operatorname{\mathsf{--Comod}}}&&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf F{\operatorname{\mathsf{--Mod}}}, \label{left-comodules-into-functors-inclusion} \\ \Upsilon_{\mathsf F^{\mathsf{op}}}\colon &{\operatorname{\mathsf{Comod--}}}\mathcal C_{\mathsf F} &&\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax{\operatorname{\mathsf{Mod--}}}\mathsf F \label{right-comodules-into-functors-inclusion}\end{aligned}$$ were constructed in [@Plfin Construction 2.4]. In the language of the present paper, the functors $\Upsilon_\mathsf F$ and $\Upsilon_{\mathsf F^{\mathsf{op}}}$ are interpreted as the functors $\phi_*$  ([\[left-comodules-to-modules\]](#left-comodules-to-modules){reference-type="ref" reference="left-comodules-to-modules"}--[\[right-comodules-to-modules\]](#right-comodules-to-modules){reference-type="ref" reference="right-comodules-to-modules"}) from Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"} and Remark [Remark 33](#lands-in-t-unital-remark){reference-type="ref" reference="lands-in-t-unital-remark"} for the t-unital multiplicative pairing $\phi=\phi_\mathsf F$ (cf. Proposition [Proposition 51](#functors-as-t-unital-modules){reference-type="ref" reference="functors-as-t-unital-modules"}). **Lemma 54**. *For any locally finite $k$-linear category $\mathsf F$, the functors $\Upsilon_\mathsf F$ and $\Upsilon_{\mathsf F^{\mathsf{op}}}$  ([\[left-comodules-into-functors-inclusion\]](#left-comodules-into-functors-inclusion){reference-type="ref" reference="left-comodules-into-functors-inclusion"}-- [\[right-comodules-into-functors-inclusion\]](#right-comodules-into-functors-inclusion){reference-type="ref" reference="right-comodules-into-functors-inclusion"}) are fully faithful.* *Proof.* This is [@Plfin Proposition 2.5]. Alternatively, one can observe that the multiplicative pairing $\phi_\mathsf F$ is obviously nondegenerate in $\mathcal C_\mathsf F$, i. 
e., condition (7) of Proposition [Proposition 35](#comodule-forgetful-full-and-faithfulness){reference-type="ref" reference="comodule-forgetful-full-and-faithfulness"} is satisfied for it. So Proposition [Proposition 35](#comodule-forgetful-full-and-faithfulness){reference-type="ref" reference="comodule-forgetful-full-and-faithfulness"} tells that conditions (1) and (2) are satisfied as well. ◻ For any locally finite $k$-linear category $\mathsf F$, the *contramodule forgetful functor* $$\label{contramodules-into-functors-forgetful} \Theta_\mathsf F\colon\mathcal C_\mathsf F{\operatorname{\mathsf{--Contra}}}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf F{\operatorname{\mathsf{--Mod}}}$$ was constructed in [@Plfin Construction 3.1]. In the language of the present paper, the functor $\Theta_\mathsf F$ is interpreted as the functor $\phi_*$  [\[left-contramodules-to-modules\]](#left-contramodules-to-modules){reference-type="eqref" reference="left-contramodules-to-modules"} from Proposition [Proposition 22](#underlying-K-module-structures){reference-type="ref" reference="underlying-K-module-structures"} and Remark [Remark 43](#lands-in-c-unital-remark){reference-type="ref" reference="lands-in-c-unital-remark"} for the t-unital multiplicative pairing $\phi=\phi_\mathsf F$ (cf. Proposition [Proposition 52](#functors-as-c-unital-modules){reference-type="ref" reference="functors-as-c-unital-modules"}). A sufficient condition for full-and-faithfulness of the functor $\Theta_\mathsf F$ was obtained in [@Plfin Theorem 3.7]. In order to formulate it, we need to recall the next definition. A locally finite $k$-linear category $\mathsf F$ is said to be *lower strictly locally finite* [@Plfin Definition 3.6] if for every object $y\in\mathsf F$ there exists a finite set of objects $X_y\subset\mathsf F$ such that $x\prec y$ for all $x\in X_y$ and the composition map $$\bigoplus\nolimits_{x\in X_y}\mathop{\mathrm{Hom}}_\mathsf F(x,y)\otimes_k\mathop{\mathrm{Hom}}_\mathsf F(z,x) \mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_\mathsf F(z,y)$$ is surjective for all $z\prec y$. Dually, a locally finite $k$-linear category $\mathsf G$ is said to be *upper strictly locally finite* if the opposite category $\mathsf G^{\mathsf{op}}$ is lower strictly locally finite. **Theorem 55**. *For any lower strictly locally finite $k$-linear category $\mathsf F$, the functor $\Theta_\mathsf F\colon\mathcal C_\mathsf F{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathsf F{\operatorname{\mathsf{--Mod}}}$ is fully faithful.* *Proof.* This is [@Plfin Theorem 3.7]. ◻ The following definition [@Plfin Definition 7.3] specifies an even more restrictive condition on a $k$-linear category. A locally finite $k$-linear category $\mathsf F$ is said to be *lower finite* if, for every object $y\in\mathsf F$, the set of all objects $x\in\mathsf F$ such that $x\preceq y$ is finite. Clearly, any lower finite $k$-linear category is lower strictly locally finite. Dually, $\mathsf F$ is *upper finite* if, for every object $x\in\mathsf F$, the set of all objects $y\in\mathsf F$ such that $x\preceq y$ is finite. **Proposition 56**. 
*[(a)]{.upright} For any upper finite $k$-linear category $\mathsf F$, the comodule inclusion functor $\Upsilon_\mathsf F\colon\mathcal C_\mathsf F{\operatorname{\mathsf{--Comod}}} \longrightarrow\mathsf F{\operatorname{\mathsf{--Mod}}}$ is a category equivalence.* *[(b)]{.upright} For any lower finite $k$-linear category $\mathsf F$, the comodule inclusion functor $\Upsilon_{\mathsf F^{\mathsf{op}}}\colon {\operatorname{\mathsf{Comod--}}}\mathcal C_\mathsf F\longrightarrow{\operatorname{\mathsf{Mod--}}}\mathsf F$ is a category equivalence.* *[(c)]{.upright} For any lower finite $k$-linear category $\mathsf F$, the contramodule forgetful functor $\Theta_\mathsf F\colon\mathcal C_\mathsf F{\operatorname{\mathsf{--Contra}}} \longrightarrow\mathsf F{\operatorname{\mathsf{--Mod}}}$ is a category equivalence.* *Proof.* This is [@Plfin Propositions 7.4 and 7.5]. ◻ ## Functors surjective on objects are t-unital Let $\mathsf E$ and $\mathsf F$ be two small $k$-linear categories. A *$k$-linear functor* $F\colon\mathsf F\longrightarrow\mathsf E$ is a map assigning to every object $u\in\mathsf F$ an object $F(u)\in\mathsf E$ and to every pair of objects $u$, $v\in\mathsf F$ a $k$-linear map $$F(u,v)\colon\mathop{\mathrm{Hom}}_\mathsf F(u,v)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_\mathsf E(F(u),F(v)).$$ The usual axioms of multiplicativity and unitality must be satisfied. Clearly, any $k$-linear functor $F\colon\mathsf F\longrightarrow\mathsf E$ induces a homomorphism of $k$-algebras $f\colon R_\mathsf F\longrightarrow R_\mathsf E$. Given a left $\mathsf E$-module $\mathsf M\colon\mathsf E\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ and a $k$-linear functor $F\colon\mathsf F\longrightarrow\mathsf E$, the composition $\mathsf F\longrightarrow\mathsf E\longrightarrow\mathsf M$ defines the *underlying left $\mathsf F$-module structure* on $\mathsf M$. Similarly, given a right $\mathsf E$-module $\mathsf N\colon\mathsf E^{\mathsf{op}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ and a functor $F$ as above, the composition $\mathsf F^{\mathsf{op}}\longrightarrow\mathsf E^{\mathsf{op}} \longrightarrow\mathsf N$ defines the *underlying right $\mathsf F$-module structure* on $\mathsf N$. The definitions of a *left* (*right*) *t-unital* homomorphism of nonunital algebras $f\colon K\longrightarrow R$ were given in the beginning of Section [4.1](#construction-of-semialgebra-subsecn){reference-type="ref" reference="construction-of-semialgebra-subsecn"}. Let us also recall [@Pnun Section 9] that $f$ is called *left s-unital* if $f$ makes $R$ an s-unital left $K$-module, and $f$ is *right s-unital* if $R$ is an s-unital right $K$-module. A ring homomorphism $f$ is called *s-unital* if it is both left and right s-unital. An object $z\in\mathsf E$ is said to be *zero* if $\mathrm{id}_z=0$ in $\mathop{\mathrm{Hom}}_\mathsf E(z,z)$. In this case one has $\mathop{\mathrm{Hom}}_\mathsf E(x,z)=0=\mathop{\mathrm{Hom}}_\mathsf E(z,y)$ for all $x$, $y\in\mathsf E$. **Proposition 57**. *Let $F\colon\mathsf F\longrightarrow\mathsf E$ be a $k$-linear functor of small $k$-linear categories, and $f\colon R_\mathsf F\longrightarrow R_\mathsf E$ be the induced $k$-algebra homomorphism. Then the following conditions are equivalent:* 1. *$f$ is left t-unital;* 2. *$f$ is right t-unital;* 3. *$f$ is left s-unital;* 4. *$f$ is right s-unital;* 5. 
*for each nonzero object $x\in\mathsf E$ there exists an object $u\in\mathsf F$ such that $F(u)=x$.* Let us emphasize that condition (5) requires an *equality* of objects $F(u)=x$ and *not* just an isomorphism. This is the condition of surjectivity of the functor $F$ on (nonzero) objects, and *not* only essential surjectivity. The following simple example is instructive. **Example 58**. Let $\mathsf E$ be a $k$-linear category with only two objects $x$ and $y$ that are isomorphic in $\mathsf E$ and nonzero. Let $\mathsf F\subset\mathsf E$ be the full subcategory on the object $x$, and let $F\colon\mathsf F\longrightarrow\mathsf E$ be the inclusion functor; so $F$ is a $k$-linear category equivalence. Then $R_\mathsf F$ and $R_\mathsf E$ are actually unital algebras (because the sets of objects of $\mathsf F$ and $\mathsf E$ are finite), but the ring homomorphism $f\colon R_\mathsf F\longrightarrow R_\mathsf E$ does *not* take the unit to the unit. Consequently, $f$ is neither left nor right t-unital. *Proof of Proposition [Proposition 57](#t-unital-functors){reference-type="ref" reference="t-unital-functors"}.* (1) $\Longleftrightarrow$ (3) The ring $R_\mathsf F$ is s-unital by Proposition [Proposition 50](#category-algebras-t-unital-and-projective){reference-type="ref" reference="category-algebras-t-unital-and-projective"}, and it remains to refer to [@Pnun Lemma 9.4(a)]. (2) $\Longleftrightarrow$ (4) This is [@Pnun Lemma 9.4(b)]. (3) or (4) $\Longrightarrow$ (5) For the sake of contradiction, assume that (5) does not hold. Let $z\in\mathsf E$ be a nonzero object not belonging to the image of $F$. Put $r=\mathrm{id}_z\in\bigoplus_{x,y\in\mathsf E}\mathop{\mathrm{Hom}}_\mathsf E(x,y)=R_\mathsf E$. Then for every $e\in R_\mathsf F$ one has $f(e)r=rf(e)=0$ in $R_\mathsf E$, even though $r\ne0$. This contradicts both (3) and (4). (5) $\Longrightarrow$ (3) and (4) Let $r\in R_\mathsf E=\bigoplus_{x,y\in\mathsf E}\mathop{\mathrm{Hom}}_\mathsf E(x,y)$ be an element. Then there is a finite set of nonzero objects $Z\subset\mathsf E$ such that $r\in\bigoplus_{z,w\in Z}\mathop{\mathrm{Hom}}_\mathsf E(z,w)\subset R_\mathsf E$. Let $U\subset\mathsf F$ be a finite set of objects such that for every $z\in Z$ there exists $u\in U$ with $F(u)=z$. Put $e=\sum_{u\in U}\mathrm{id}_u\in R_\mathsf F$. Then $f(e)r=r=rf(e)$ in $R_\mathsf E$. ◻ ## Tensor products and triangular decompositions {#tensor-and-triangular-subsecn} Let $\mathsf H$ be a small $k$-linear category, $\mathsf N$ be a right $\mathsf H$-module, and $\mathsf M$ be a left $\mathsf H$-module. Then the *tensor product* $\mathsf N\otimes_\mathsf H\mathsf M$ is a $k$-vector space constructed as the cokernel of (the difference of) the natural pair of maps $$\bigoplus\nolimits_{x,y\in\mathsf H}\mathsf N(y)\otimes_k\mathop{\mathrm{Hom}}_\mathsf H(x,y)\otimes_k\mathsf M(x) \,\rightrightarrows\,\bigoplus\nolimits_{z\in\mathsf H}\mathsf N(z)\otimes_k\mathsf M(z).$$ Here one of the maps is induced by the right action of $\mathsf H$ in $\mathsf N$ and the other one by the left action of $\mathsf H$ in $\mathsf M$. In fact, let $N$ be the t-unital right $R_\mathsf H$-module corresponding to $\mathsf N$ and $M$ be the t-unital left $R_\mathsf H$-module corresponding to $\mathsf M$ (as per Proposition [Proposition 51](#functors-as-t-unital-modules){reference-type="ref" reference="functors-as-t-unital-modules"}). 
Then there is a natural isomorphism of $k$-vector spaces $$\mathsf N\otimes_\mathsf H\mathsf M\simeq N\otimes_{R_\mathsf H}M.$$ An *$\mathsf H$-$\mathsf H$-bimodule* $\mathsf B$ is defined as a $k$-linear functor of two arguments $\mathsf B\colon\mathsf H\times\mathsf H^{\mathsf{op}}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$. In other words, it is the datum of a vector space $\mathsf B(x,y)$ for every pair of objects $x$, $y\in\mathsf H$, and a linear map $$\mathop{\mathrm{Hom}}_\mathsf H(x,y)\otimes_k\mathsf B(z,x)\otimes_k\mathop{\mathrm{Hom}}_\mathsf H(w,z)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf B(w,y)$$ for every quadruple of objects $x$, $y$, $z$, $w\in\mathsf H$. The usual associativity and unitality axioms are imposed. Strictly speaking, an $\mathsf H$-$\mathsf H$-bimodule $\mathsf B$ has *no* underlying left or right $\mathsf H$-module. Rather, there is an *underlying collection of left $\mathsf H$-modules* $(\mathsf M_x\in\mathsf H{\operatorname{\mathsf{--Mod}}})_{x\in\mathsf H}$ with $\mathsf M_x(y)=\mathsf B(x,y)$ for all $y\in\mathsf H$, and a similar *underlying collection of right $\mathsf H$-modules* $(\mathsf N_y\in{\operatorname{\mathsf{Mod--}}}\mathsf H)_{y\in\mathsf H}$ with $\mathsf N_y(x)=\mathsf B(x,y)$ for all $x\in\mathsf H$. On the other hand, to any $\mathsf H$-$\mathsf H$-bimodule $\mathsf B$ one can assign a t-unital $R_\mathsf H$-$R_\mathsf H$-bimodule $B$ defined by the obvious rule $$B=\bigoplus\nolimits_{x,y\in\mathsf H}\mathsf B(x,y).$$ Let $\mathsf B'$ and $\mathsf B''$ be two $\mathsf H$-$\mathsf H$-bimodules. Then the *tensor product* $\mathsf B=\mathsf B'\otimes_\mathsf H\mathsf B''$ is an $\mathsf H$-$\mathsf H$-bimodule defined by the rule $$\mathsf B(x,y)=\mathsf N'_y\otimes_\mathsf H\mathsf M''_x,$$ where, as above, $\mathsf N'_y$ is the right $\mathsf H$-module with $\mathsf N'_y(x)=\mathsf B'(x,y)$, and $\mathsf M''_x$ is the left $\mathsf H$-module with $\mathsf M''_x(y)=\mathsf B''(x,y)$. The t-unital $R_\mathsf H$-$R_\mathsf H$-bimodule $B$ corresponding to the $\mathsf H$-$\mathsf H$-bimodule $\mathsf B$ can be constructed as $$B=B'\otimes_{R_\mathsf H}B''.$$ Now let $\mathsf E$ be a small $k$-linear category and $\mathsf H\subset\mathsf E$ be a $k$-linear subcategory *on the same set of objects*. This means that, for every pair of objects $x$, $y\in\mathsf E$, a vector subspace $\mathop{\mathrm{Hom}}_\mathsf H(x,y)\subset\mathop{\mathrm{Hom}}_\mathsf E(x,y)$ is chosen in such a way that $\mathrm{id}_x\in\mathop{\mathrm{Hom}}_\mathsf H(x,x)$ for all $x\in\mathsf E$ and the composition of morphisms in $\mathsf E$ belonging to $\mathsf H$ again belongs to $\mathsf H$. Then there is a natural $k$-linear inclusion functor $\mathsf H\longrightarrow\mathsf E$. In this setting, the rule $\mathsf B_\mathsf E(x,y)=\mathop{\mathrm{Hom}}_\mathsf E(x,y)$ defines an $\mathsf H$-$\mathsf H$-bimodule $\mathsf B_\mathsf E$. The corresponding t-unital $R_\mathsf H$-$R_\mathsf H$-bimodule is $B_\mathsf E=R_\mathsf E$, with the $R_\mathsf H$-$R_\mathsf H$-bimodule structure on $R_\mathsf E$ induced by the $k$-algebra homomorphism $R_\mathsf H\longrightarrow R_\mathsf E$ corresponding to the inclusion functor $\mathsf H\longrightarrow\mathsf E$. Let $\mathsf F$ and $\mathsf G\subset\mathsf E$ be two other $k$-linear subcategories on the same set of objects. Assume that $\mathsf H\subset\mathsf F$ and $\mathsf H\subset\mathsf G$. 
Then the multiplication of morphisms in $\mathsf E$ induces an $\mathsf H$-$\mathsf H$-bimodule map $$\label{three-subcategories-bimodule-map} \mathsf B_\mathsf F\otimes_\mathsf H\mathsf B_\mathsf G\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf B_\mathsf E,$$ which we will simply denote, by an abuse of notation, by $$\label{simplified-notation-three-subcategories-map} \mathsf F\otimes_\mathsf H\mathsf G\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf E.$$ The corresponding morphism of t-unital $R_\mathsf H$-$R_\mathsf H$-bimodules is $$\label{three-subcategories-t-unital-bimodule-map} R_\mathsf F\otimes_{R_\mathsf H}R_\mathsf G\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax R_\mathsf E.$$ We will say that a triple of $k$-linear subcategories on the same set of objects $\mathsf F$, $\mathsf G$, $\mathsf H\subset\mathsf E$ forms a *triangular decomposition* of $\mathsf E$ if the map [\[three-subcategories-bimodule-map\]](#three-subcategories-bimodule-map){reference-type="eqref" reference="three-subcategories-bimodule-map"}, or which is the same, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} is an isomorphism of $\mathsf H$-$\mathsf H$-bimodules. Equivalently, three subcategories $\mathsf F$, $\mathsf G$, $\mathsf H\subset\mathsf E$ form a triangular decomposition of $\mathsf E$ if and only if the $R_\mathsf H$-$R_\mathsf H$-bimodule map [\[three-subcategories-t-unital-bimodule-map\]](#three-subcategories-t-unital-bimodule-map){reference-type="eqref" reference="three-subcategories-t-unital-bimodule-map"} is an isomorphism. Explicitly, this means that the following sequence of $k$-vector spaces is right exact for all pairs of objects $x$, $y\in\mathsf E$: $$\begin{gathered} \label{triangular-decomposition-right-exact-sequence} \bigoplus\nolimits_{u,v\in\mathsf H} \mathop{\mathrm{Hom}}_\mathsf F(v,y)\otimes_k\mathop{\mathrm{Hom}}_\mathsf H(u,v)\otimes_k\mathop{\mathrm{Hom}}_\mathsf G(x,u) \\ \mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \bigoplus\nolimits_{z\in\mathsf E}\mathop{\mathrm{Hom}}_\mathsf F(z,y)\otimes_k\mathop{\mathrm{Hom}}_\mathsf G(x,z) \mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_\mathsf E(x,y)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax 0.\end{gathered}$$ ## Triangular decompositions and projectivity {#triangular-and-projectivity-subsecn} We will say that a $k$-linear category $\mathsf H$ is *discrete* if $\dim_k\mathop{\mathrm{Hom}}_\mathsf H(x,y)=\delta_{x,y}$ for all $x$, $y\in\mathsf H$. So $\mathop{\mathrm{Hom}}_\mathsf H(x,y)=0$ if $x\ne y$, and the vector space $\mathop{\mathrm{Hom}}_\mathsf H(x,x)$ consists of the scalar multiples of the identity morphism $\mathrm{id}_x$, which is assumed to be nonzero. From now on it will be convenient for us to assume that there are no zero objects in the small $k$-linear category $\mathsf E$. Then the category $\mathsf E^\mathrm{id}$ defined by the rules $$\mathop{\mathrm{Hom}}_{\mathsf E^\mathrm{id}}(x,y)= \begin{cases} k\cdot\mathrm{id}_x & \text{if $x=y\in\mathsf E$}, \\ 0 & \text{if $x\ne y\in\mathsf E$} \end{cases}$$ is the unique discrete $k$-linear subcategory on the same set of objects in $\mathsf E$. 
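Let us spell out, for the reader's convenience, two immediate consequences of the definitions above. The category algebra of the discrete subcategory $\mathsf E^\mathrm{id}$ is simply $$R_{\mathsf E^\mathrm{id}}=\bigoplus\nolimits_{x\in\mathsf E}k\cdot\mathrm{id}_x,$$ a commutative $k$-algebra spanned by the pairwise orthogonal idempotents $\mathrm{id}_x$; it is unital if and only if the set of objects of $\mathsf E$ is finite. Furthermore, since $\mathop{\mathrm{Hom}}_{\mathsf E^\mathrm{id}}(x,y)=0$ for $x\ne y$ and the identity morphisms act by identity maps, the two maps in the pair defining the tensor product over $\mathsf E^\mathrm{id}$ in Section [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"} coincide; hence for any right $\mathsf E^\mathrm{id}$-module $\mathsf N$ and any left $\mathsf E^\mathrm{id}$-module $\mathsf M$ one has $$\mathsf N\otimes_{\mathsf E^\mathrm{id}}\mathsf M\simeq \bigoplus\nolimits_{x\in\mathsf E}\mathsf N(x)\otimes_k\mathsf M(x).$$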
In the sequel, we will be mostly interested in triangular decompositions $\mathsf F$, $\mathsf G$, $\mathsf H\subset\mathsf E$ with a discrete $k$-linear category $\mathsf H$; so $\mathsf H=\mathsf E^\mathrm{id}$. In this case, the condition of right exactness of the sequence [\[triangular-decomposition-right-exact-sequence\]](#triangular-decomposition-right-exact-sequence){reference-type="eqref" reference="triangular-decomposition-right-exact-sequence"} takes the form of an isomorphism $$\label{triangular-over-discrete-isomorphism} \bigoplus\nolimits_{z\in\mathsf E}\mathop{\mathrm{Hom}}_\mathsf F(z,y)\otimes_k\mathop{\mathrm{Hom}}_\mathsf G(x,z) \overset\simeq\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Hom}}_\mathsf E(x,y)$$ for all objects $x$, $y\in\mathsf E$. **Lemma 59**. *Let $\mathsf H$ be a small discrete $k$-linear category and $H=R_\mathsf H$ be the related nonunital $k$-algebra. Then all t-unital $H$-modules are projective as unital $\widecheck H$-modules, while all c-unital $H$-modules are injective as unital $\widecheck H$-modules.* *Proof.* The point is that the abelian category $$H{\operatorname{\mathsf{--{}^t Mod}}}\simeq \widecheck H{\operatorname{\mathsf{--Mod}}}/\widecheck H{\operatorname{\mathsf{--{}^0 Mod}}}\simeq H{\operatorname{\mathsf{--{}^c Mod}}}\simeq \mathsf H{\operatorname{\mathsf{--Mod}}}$$ (see formula [\[four-category-equivalence\]](#four-category-equivalence){reference-type="eqref" reference="four-category-equivalence"} in Section [1.3](#null-modules-prelim-subsecn){reference-type="ref" reference="null-modules-prelim-subsecn"} and [@Pnun Proposition 6.7]) is semisimple. In fact, it is clear that the category $\mathsf H{\operatorname{\mathsf{--Mod}}}$ is equivalent to the Cartesian product of copies of $k{\operatorname{\mathsf{--Vect}}}$ taken over all the objects $x\in\mathsf H$. Consequently, all objects in this abelian category are both projective and injective. Then it remains to refer to the discussion in Section [1.4](#t-c-flat-proj-inj-prelim-subsecn){reference-type="ref" reference="t-c-flat-proj-inj-prelim-subsecn"} for the assertions that a t-unital $K$-module is projective in $\widecheck K{\operatorname{\mathsf{--Mod}}}$ if and only if it is projective in $K{\operatorname{\mathsf{--{}^t Mod}}}\simeq\widecheck K{\operatorname{\mathsf{--Mod}}}/ \widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}$, and a c-unital $K$-module is injective in $\widecheck K{\operatorname{\mathsf{--Mod}}}$ if and only if it is injective in $K{\operatorname{\mathsf{--{}^c Mod}}}\simeq\widecheck K{\operatorname{\mathsf{--Mod}}}/\widecheck K{\operatorname{\mathsf{--{}^0 Mod}}}$. ◻ **Proposition 60**. *Let $\mathsf E$ be a small $k$-linear category and $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$ be a triangular decomposition of $\mathsf E$ with a discrete $k$-linear category $\mathsf H=\mathsf E^\mathrm{id}$. Put $K=R_\mathsf F$ and $R=R_\mathsf E$, and denote by $f\colon K\longrightarrow R$ the morphism of nonunital $k$-algebras induced by the inclusion functor $\mathsf F\longrightarrow\mathsf E$. Then $R$ is a left projective $K$-$K$-bimodule.* *Proof.* We need to prove that $R$ is a projective left $\widecheck K$-module. 
Indeed, we have $$R=R_\mathsf E\simeq R_\mathsf F\otimes_{R_\mathsf H}R_\mathsf G=K\otimes_{\widecheck H}R_\mathsf G,$$ where $H=R_\mathsf H$ (see formula [\[three-subcategories-t-unital-bimodule-map\]](#three-subcategories-t-unital-bimodule-map){reference-type="eqref" reference="three-subcategories-t-unital-bimodule-map"}). It remains to recall that $R_\mathsf G$ is a t-unital left $R_\mathsf H$-module by Proposition [Proposition 57](#t-unital-functors){reference-type="ref" reference="t-unital-functors"}, hence $R_\mathsf G$ is a projective left $\widecheck H$-module by Lemma [Lemma 59](#t-unital-projective-over-discrete){reference-type="ref" reference="t-unital-projective-over-discrete"}; and at the same time $K$ is a projective left $\widecheck K$-module by Proposition [Proposition 50](#category-algebras-t-unital-and-projective){reference-type="ref" reference="category-algebras-t-unital-and-projective"}. ◻ ## Conclusion Let us summarize the results we have proved so far, in the context of triangular decompositions of $k$-linear categories. Let $\mathsf E$ be a small $k$-linear category and $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$ be a triangular decomposition of $\mathsf E$ with a discrete $k$-linear category $\mathsf H=\mathsf E^\mathrm{id}$. Put $K=R_\mathsf F$ and $R=R_\mathsf E$, and denote by $f\colon K\longrightarrow R$ the morphism of $k$-algebras induced by the inclusion $\mathsf F\longrightarrow\mathsf E$. Then, by Proposition [Proposition 50](#category-algebras-t-unital-and-projective){reference-type="ref" reference="category-algebras-t-unital-and-projective"},  $K$ is a left projective t-unital $k$-algebra. By Propositions [Proposition 57](#t-unital-functors){reference-type="ref" reference="t-unital-functors"} and [Proposition 60](#triangular-decomposition-projectivity-prop){reference-type="ref" reference="triangular-decomposition-projectivity-prop"},  $f$ is a t-unital homomorphism of $k$-algebras making $R$ a left projective $K$-$K$-bimodule. Assume further that the $k$-linear category $\mathsf F$ is locally finite (in the sense of Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}), and consider the coalgebra $\mathcal C=\mathcal C_\mathsf F$. Then $\phi=\phi_\mathsf F\colon\mathcal C\times K\longrightarrow k$ is a right t-unital multiplicative pairing (by Lemma [Lemma 53](#category-pairings-t-unital){reference-type="ref" reference="category-pairings-t-unital"}). Furthermore, the induced functor $\phi_*=\Upsilon_{\mathsf F^{\mathsf{op}}}\colon {\operatorname{\mathsf{Comod--}}}\mathcal C\longrightarrow{\operatorname{\mathsf{Mod^t--}}}K={\operatorname{\mathsf{Mod--}}}\mathsf F$ is fully faithful (by Lemma [Lemma 54](#Upsilon-fully-faithful){reference-type="ref" reference="Upsilon-fully-faithful"}). It remains to assume that the (t-unital) right $K$-module structure on the tensor product $\mathcal C\otimes_KR$ arises from some right $\mathcal C$-comodule structure, as in Section [3.4](#integrated-bimodules-subsecn){reference-type="ref" reference="integrated-bimodules-subsecn"}. In other words, assume that $R$ is a right integrable $K$-$K$-bimodule (as defined in the end of Section [3.6](#fully-faithful-comodule-inclusion-secn){reference-type="ref" reference="fully-faithful-comodule-inclusion-secn"}). 
This condition holds automatically when the $k$-linear category $\mathsf F$ is lower finite (by Proposition [Proposition 56](#upper-lower-finite-categories-prop){reference-type="ref" reference="upper-lower-finite-categories-prop"}(b)). **Corollary 61**. *Under the list of assumptions above, the tensor product $\boldsymbol{\mathcal S}=\mathcal C\otimes_K\nobreak R$ acquires a natural structure of semiassociative, semiunital semialgebra over the coalgebra $\mathcal C$. Moreover, $\boldsymbol{\mathcal S}$ is an injective left $\mathcal C$-comodule. The abelian category of right $\boldsymbol{\mathcal S}$-semimodules ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}$ is equivalent to the full subcategory in the module category ${\operatorname{\mathsf{Mod--}}}\mathsf E$ consisting of all the right $\mathsf E$-modules whose underlying right $\mathsf F$-module structure arises from some right $\mathcal C$-comodule structure via the functor $\phi_*=\Upsilon_{\mathsf F^{\mathsf{op}}}\colon{\operatorname{\mathsf{Comod--}}}\mathcal C \longrightarrow{\operatorname{\mathsf{Mod--}}}\mathsf F$.* *Proof.* The first two assertions are Corollary [Corollary 37](#fully-faithful-lift-of-monoid-cor){reference-type="ref" reference="fully-faithful-lift-of-monoid-cor"} with the subsequent discussion. The last assertion is Corollary [Corollary 41](#right-semimodules-described-fully-faithful-case){reference-type="ref" reference="right-semimodules-described-fully-faithful-case"}. The equivalences of categories ${\operatorname{\mathsf{Mod^t--}}}R\simeq{\operatorname{\mathsf{Mod--}}}\mathsf E$ and ${\operatorname{\mathsf{Mod^t--}}}K\simeq{\operatorname{\mathsf{Mod--}}}\mathsf F$ are provided by Proposition [Proposition 51](#functors-as-t-unital-modules){reference-type="ref" reference="functors-as-t-unital-modules"}. ◻ On top of the assumptions above, let us now assume additionally that the functor $\phi_*=\Theta_\mathsf F\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow K{\operatorname{\mathsf{--{}^c Mod}}}= \mathsf F{\operatorname{\mathsf{--Mod}}}$ is fully faithful. This condition holds whenever the $k$-linear category $\mathsf F$ is lower strictly locally finite (by Theorem [Theorem 55](#strictly-lower-contramodule-full-and-faithfulness){reference-type="ref" reference="strictly-lower-contramodule-full-and-faithfulness"}), which includes the easy special case when $\mathsf F$ is lower finite. **Corollary 62**. *Under the full list of assumptions above, the abelian category of left $\boldsymbol{\mathcal S}$-semicontramodules $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$ is equivalent to the full subcategory in the module category $\mathsf E{\operatorname{\mathsf{--Mod}}}$ consisting of all the left $\mathsf E$-modules whose underlying left $\mathsf F$-module structure arises from some left $\mathcal C$-contramodule structure via the functor $\phi_*=\Theta_\mathsf F\colon\mathcal C{\operatorname{\mathsf{--Contra}}}\longrightarrow\mathsf F{\operatorname{\mathsf{--Mod}}}$.* *Proof.* This is Corollary [Corollary 48](#left-semicontramods-described-fully-faithful-case){reference-type="ref" reference="left-semicontramods-described-fully-faithful-case"}. 
The equivalences of categories $R{\operatorname{\mathsf{--{}^c Mod}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}$ and $K{\operatorname{\mathsf{--{}^c Mod}}}\simeq\mathsf F{\operatorname{\mathsf{--Mod}}}$ are provided by Proposition [Proposition 52](#functors-as-c-unital-modules){reference-type="ref" reference="functors-as-c-unital-modules"}. ◻ **Corollary 63**. *Under the list of assumptions above, assume additionally that the $k$-linear category $\mathsf F$ is lower finite. Then the natural forgetful functors are equivalences of abelian categories ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf E$ and $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}$.* *Proof.* Follows from Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 62](#category-semialgebra-left-semicontramodules){reference-type="ref" reference="category-semialgebra-left-semicontramodules"} together with Proposition [Proposition 56](#upper-lower-finite-categories-prop){reference-type="ref" reference="upper-lower-finite-categories-prop"}(b--c). ◻ # Examples [\[examples-secn\]]{#examples-secn label="examples-secn"} ## The trivial triangular decomposition Let $X$ be a fixed set. Denote by $\mathcal C_X=\bigoplus_{x\in X}k$ the direct sum of $X$ copies of the coalgebra $k$ over $k$ (with the natural, essentially unique structure of counital $k$-coalgebra on $k$). By a *$k$-linear $X$-category* we mean a $k$-linear category $\mathsf E$ with the set of objects identified with $X$. A *$k$-linear $X$-functor* is a $k$-linear functor of $k$-linear $X$-categories acting by the identity map on the objects. The following observation is attributed to Chase in Aguiar's dissertation [@Agu Definition 2.3.1 and Section 9.1], where the term "internal categories" was used for what we call "semialgebras". This observation and, slightly more generally, the language of semialgebras over cosemisimple coalgebras were used in the paper [@HL Section 2]. **Proposition 64** ([@Agu]). *For any set $X$, the category of $k$-linear $X$-categories $\mathsf E$, with $k$-linear $X$-functors as morphisms, is equivalent to the category of semialgebras $\boldsymbol{\mathcal S}$ over the coalgebra $\mathcal C_X$. For a semialgebra $\boldsymbol{\mathcal S}$ corresponding to a $k$-linear category $\mathsf E$, the category of right $\boldsymbol{\mathcal S}$-semimodules is equivalent to the category of right $\mathsf E$-modules, while both the categories of left $\boldsymbol{\mathcal S}$-semimodules and left $\boldsymbol{\mathcal S}$-semicontramodules are equivalent to the category of left $\mathsf E$-modules: $${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf E\quad\text{and}\quad \boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}\simeq\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}.$$* **Remark 65**. One can state the assertion of the proposition more generally, by letting the set $X$ vary and considering the whole category of small $k$-linear categories and arbitrary $k$-linear functors. 
Then the category of small $k$-linear categories is embedded as a full subcategory into the category of pairs (coalgebra $\mathcal C$, semialgebra $\boldsymbol{\mathcal S}$ over $\mathcal C$), with compatible pairs (morphism of coalgebras, morphism of semialgebras) as morphisms. We refer to [@Psemi Chapter 8] for a detailed discussion of such morphisms of pairs (coalgebra, semialgebra). The full subcategory appearing here consists of all the pairs $(\mathcal C,\boldsymbol{\mathcal S})$ such that the coalgebra $\mathcal C$ has the form $\mathcal C=\mathcal C_X$ for some set $X$. The set $X$ serves as the set of objects of the related $k$-linear category. *Proof of Proposition [Proposition 64](#categories-as-semialgebras){reference-type="ref" reference="categories-as-semialgebras"}.* The assertions are easy, and the argument below is intended to serve as an illustration/example of the theory developed above in this paper. Without striving for full generality, let us restrict ourselves to $k$-linear $X$-categories $\mathsf E$ without zero objects; and suppress the discussion of $X$-functors, limiting ourselves to a construction of the semialgebra associated to a $k$-linear category. For any $k$-linear $X$-category $\mathsf E$ (without zero objects), we consider the *trivial triangular decomposition*: $\mathsf F=\mathsf H=\mathsf E^\mathrm{id}$ and $\mathsf G=\mathsf E$. Then the discrete $k$-linear categories $\mathsf F$ and $\mathsf H$ are locally finite, and in fact, both upper and lower finite. The coalgebra $\mathcal C_\mathsf F=\mathcal C_\mathsf H=\mathcal C$ is naturally isomorphic to $\mathcal C_X$. Therefore, Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable, and they produce a semialgebra $\boldsymbol{\mathcal S}$ over $\mathcal C_X$ such that ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf E$ and $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}$. It remains to explain the equivalence $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}$. One can observe that the construction of the semialgebra in this proposition is actually left-right symmetric: it assigns the opposite semialgebra to the opposite category. Indeed, in the situation at hand, the $K$-$K$-bimodule $\mathcal C$ is naturally isomorphic to $K=R_\mathsf F=R_\mathsf H$; so one has $\boldsymbol{\mathcal S}=\mathcal C\otimes_KR_\mathsf E\simeq R_\mathsf E$. Thus the equivalence $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\simeq\mathsf E{\operatorname{\mathsf{--Mod}}}$ is analogous to ${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf E$, which we have already proved. Alternatively, a coalgebra $\mathcal C$ is called *cosemisimple* if the abelian category $\mathcal C{\operatorname{\mathsf{--Comod}}}$ is semisimple, or equivalently, the abelian category ${\operatorname{\mathsf{Comod--}}}\mathcal C$ is semisimple, or equivalently, the abelian category $\mathcal C{\operatorname{\mathsf{--Contra}}}$ is semisimple [@Swe Chapters VIII--IX], [@Pksurv Lemma 3.3], [@Psemi Section A.2], [@Pkoszul beginning of Section 4.5], [@PS3 Theorem 6.2]. 
For any semialgebra $\boldsymbol{\mathcal S}$ over a cosemisimple coalgebra $\mathcal C$, there is a natural equivalence of abelian categories $\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Simod}}}\simeq\boldsymbol{\mathcal S}{\operatorname{\mathsf{--Sicntr}}}$; this is a particular case of the *underived semico-semicontra correspondence* [@Psemi Section 0.3.7], [@Prev Section 3.5]. ◻ ## Brauer diagram category {#brauer-subsecn} The following example was suggested to me by Catharina Stroppel. Let $k$ be a field and $\delta\in k$ be an arbitrarily chosen element. The *Brauer diagram category* $\mathsf{Br}(\delta)$  [@Bra Section 5], [@Wen Section 2], [@BM Example 2.10] is the following small $k$-linear category. The objects of $\mathsf{Br}(\delta)$ are the nonnegative integers $n\ge0$ interpreted as finite sets $x_n=\{1,\dotsc,n\}$. We will denote the cardinality of a finite set $x$ by $|x|\in\mathbb Z_{\ge0}$. The assertions below will sometimes depend on the assumption that *no other finite sets but $x_n=\{1,\dotsc,n\}$ are allowed as objects of the category $\mathsf{Br}(\delta)$*, while on other occasions it will be harmless and convenient to pretend that the objects of $\mathsf{Br}(\delta)$ are arbitrary finite sets. Given two finite sets $x$ and $y$, the $k$-vector space of morphisms $\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ is zero when the parities of the integers $|x|$ and $|y|$ are different. When the cardinalities $|x|$ and $|y|$ have the same parity, i. e., the cardinality of the disjoint union $x\sqcup y$ is an even number, the $k$-vector space $\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ has a basis consisting of all the partitions $a$ of $x\sqcup y$ into a disjoint union of sets of cardinality $2$. The basis elements $a\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ are interpreted as unoriented graphs $G_a$, without loops, with the set of vertices $x\sqcup y$ and every vertex adjacent to exactly one edge. The edges connecting two vertices from $x$ are called *caps*, the edges connecting two vertices from $y$ are called *cups*, and the edges connecting a vertex from $x$ with a vertex from $y$ are called *propagating strands* [@DG Section 2]. $$\label{brauer-morphism-example} \begin{gathered} \xymatrix{ y\colon & *=0{\bullet}\ar@{-}@/^-2.5pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet} & *=0{\bullet}\ar@{-}@/^-1.8pc/[rrrr] & *=0{\bullet}\ar@{-}[ldd] & *=0{\bullet}& *=0{\bullet}\ar@{-}[llllldd] & *=0{\bullet}\\ \quad a\colon\mkern-18mu \\ x\colon & *=0{\bullet}\ar@{-}@/^1pc/[rr] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet} } \end{gathered}$$ Given three finite sets $x$, $z$, $y$ and two basis elements $b\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,z)$ and $c\in\mathop{\mathrm{Hom}}_\mathsf{Br}(z,y)$, the product $cb\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ is defined as follows. Consider the graph $G=G_b\cup G_c$ with the set of vertices $x\sqcup z\sqcup y$ and the set of edges equal to the disjoint union of the sets of edges of $G_b$ and $G_c$. Let $L$ be a connected component of $G$. Then there are two possibilities. Either $L$ has the shape of a line segment with two ends in $x\sqcup y$, passing through some vertices from $z$: so there are exactly two vertices in $x\sqcup y$ belonging to $L$, and these are the only two vertices in $L$ which are adjacent to exactly one edge; while all the other vertices in $L$ belong to $z$ and are adjacent to exactly two edges. 
Or $L$ has the shape of a circle passing only through vertices from $z$: so there are no vertices from $x\sqcup y$ belonging to $L$, and every vertex in $L$ belongs to $z$ and is adjacent to exactly two edges. By the definition, one puts $cb=\delta^na\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$, where $\delta\in k$ is our chosen scalar parameter and $n\ge0$ is the number of all connected components in $G$ that have the shape of a circle. The basis element $a\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ corresponds to the graph $G_a$ consisting of all the connected components $L\subset G$ which have the shape of a line segment. The intermediate vertices belonging to $z$ are removed from any such component $L$, and it is viewed as a single edge of $G_a$ connecting its two end vertices in $x\sqcup y$ (i. e., a cap, a cup, or a propagating strand, depending on whether both ends lie in $x$, both lie in $y$, or one end lies in $x$ and the other in $y$). For example, the composition $cb$ of the graph/partition $b$ depicted in the lower half of the next diagram [\[brauer-multiplication-diagram\]](#brauer-multiplication-diagram){reference-type="eqref" reference="brauer-multiplication-diagram"} with the graph/partition $c$ depicted in the upper half of the same diagram $$\label{brauer-multiplication-diagram} \begin{gathered} \xymatrix{ y\colon & *=0{\bullet}\ar@{-}@/^-1pc/[rr] & *=0{\bullet}\ar@{-}[ldd] & *=0{\bullet} & *=0{\bullet}\ar@{-}[ldd] &*=0{\bullet}\ar@{-}[rrrdd] \\ \quad c\colon \mkern-18mu \\ z\colon & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}\ar@{-}@/^3pc/[rrrrrrr] & *=0{\bullet}\ar@{-}@/^-2pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] \ar@{-}@/^1.5pc/[rrr] & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet} & *=0{\bullet}& *=0{\bullet}\ar@{-}[llllldd] \\ \quad b\colon \mkern-18mu \\ x\colon & *=0{\bullet}\ar@{-}@/^1pc/[rr] & *=0{\bullet}\ar@{-}@/^1.5pc/[rrr] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet} } \end{gathered}$$ is equal to $\delta$ times the graph/partition $a$ depicted in the diagram $$\begin{gathered} \xymatrix{ y\colon & *=0{\bullet}\ar@{-}@/^-1pc/[rr] & *=0{\bullet}\ar@{-}[rrdd] & *=0{\bullet}& *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}\\ \quad a\colon \mkern-18mu \\ x\colon & *=0{\bullet}\ar@{-}@/^1pc/[rr] & *=0{\bullet}\ar@{-}@/^1.5pc/[rrr] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet} } \end{gathered}$$ so $cb=\delta a$ in $\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ in this case. Now let us introduce notation for some $k$-linear subcategories in $\mathsf{Br}(\delta)$ which we are interested in. All such subcategories will have the same set of objects as $\mathsf{Br}(\delta)$; so the objects are the nonnegative integers. The subcategory $\mathsf{Br}^{+=}(\delta)\subset\mathsf{Br}(\delta)$ consists of all the morphisms in $\mathsf{Br}(\delta)$ which *contain no caps*. So one has $\mathop{\mathrm{Hom}}_{\mathsf{Br}^{+=}}(z,y)=0$ if $|z|>|y|$, while if $|y|-|z|$ is a nonnegative even number, a basis in the $k$-vector space $\mathop{\mathrm{Hom}}_{\mathsf{Br}^{+=}}(z,y)$ is formed by all the partitions $c$ of the set $z\sqcup y$ into two-element subsets in which no two elements of $z$ are grouped together. Similarly, the subcategory $\mathsf{Br}^{=-}(\delta)\subset\mathsf{Br}(\delta)$ consists of all the morphisms in $\mathsf{Br}(\delta)$ which *contain no cups*. 
So $\mathop{\mathrm{Hom}}_{\mathsf{Br}^{=-}}(x,z)=0$ if $|z|>|x|$, while if $|x|-|z|$ is a nonnegative even number, a basis in $\mathop{\mathrm{Hom}}_{\mathsf{Br}^{=-}}(x,z)$ is formed by all the partitions $b$ of the set $x\sqcup z$ into pairs of elements in which no two elements of $z$ are grouped in one pair. For example, the partitions $b$ and $c$ on the diagram [\[brauer-multiplication-diagram\]](#brauer-multiplication-diagram){reference-type="eqref" reference="brauer-multiplication-diagram"} are typical examples of morphisms in $\mathsf{Br}(\delta)$ *not* belonging to $\mathsf{Br}^{=-}(\delta)$ and $\mathsf{Br}^{+=}(\delta)$, respectively. The subcategory $\mathsf{Br}^=(\delta)\subset\mathsf{Br}(\delta)$ consists of all the morphisms containing *neither caps nor cups*. So the vector space $\mathop{\mathrm{Hom}}_{\mathsf{Br}^=}(z,w)$ is only nonzero when $|z|=|w|$, and a basis in this vector space is formed by all bijections between $z$ and $w$. The $k$-algebra $\mathop{\mathrm{Hom}}_{\mathsf{Br}^=}(z,z)$ is the group algebra of the symmetric group, $\mathop{\mathrm{Hom}}_{\mathsf{Br}^=}(z,z)\simeq k[\mathbb S_n]$, where $n=|z|$. **Proposition 66**. *The triple of $k$-linear subcategories $\mathsf F=\mathsf{Br}^{+=}(\delta)$,  $\mathsf G=\mathsf{Br}^{=-}(\delta)$, and $\mathsf H=\mathsf{Br}^=(\delta)$ forms a *triangular decomposition* of the $k$-linear category $\mathsf{Br}(\delta)$ in the sense of Section [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{Br}^{+=}(\delta)\otimes_{\mathsf{Br}^=(\delta)}\mathsf{Br}^{=-}(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}(\delta)$$ is an isomorphism; or equivalently, the sequence [\[triangular-decomposition-right-exact-sequence\]](#triangular-decomposition-right-exact-sequence){reference-type="eqref" reference="triangular-decomposition-right-exact-sequence"} is right exact.* *Proof.* The assertion is essentially straightforward or geometrically intuitive, and instead of a formal proof, we will draw two illustrative diagrams for a specific example. 
The morphism (basis element) $a\in\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ from diagram [\[brauer-morphism-example\]](#brauer-morphism-example){reference-type="eqref" reference="brauer-morphism-example"} can be factorized into a composition of two basis elements of the spaces of morphisms in the subcategories $\mathsf{Br}^{=-}(\delta)$ and $\mathsf{Br}^{+=}(\delta)$ in two ways: $$\mkern-16mu \xymatrix{ y\colon\mkern-24mu & *=0{\bullet}\ar@{-}@/^-2.5pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet} & *=0{\bullet}\ar@{-}@/^-1.8pc/[rrrr] & *=0{\bullet}\ar@{-}[llldd] & *=0{\bullet}& *=0{\bullet}\ar@{-}[lllllldd] & *=0{\bullet}\\ \quad c'\colon\mkern-42mu \\ z\colon\mkern-24mu & *=0{\bullet}\ar@{-}[rdd] & *=0{\bullet}\ar@{-}[rrdd] \\ \quad b'\colon\mkern-42mu \\ x\colon\mkern-24mu & *=0{\bullet}\ar@{-}@/^1pc/[rr] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet} } \mkern28mu \xymatrix{ y\colon \mkern-24mu & *=0{\bullet}\ar@{-}@/^-2.5pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet} & *=0{\bullet}\ar@{-}@/^-1.8pc/[rrrr] & *=0{\bullet}\ar@{-}[lllldd] & *=0{\bullet}& *=0{\bullet}\ar@{-}[llllldd] & *=0{\bullet}\\ \quad c''\colon\mkern-42mu \\ z\colon \mkern-24mu & *=0{\bullet}\ar@{-}[rrrdd] & *=0{\bullet}\ar@{-}[dd] \\ \quad b''\colon\mkern-42mu \\ x\colon \mkern-24mu & *=0{\bullet}\ar@{-}@/^1pc/[rr] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet} }$$ The expressions $c'\otimes b'$ and $c''\otimes b''$ represent two different elements in the tensor product $\mathsf{Br}^{+=}(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^{=-}(\delta)$ over the discrete $k$-linear category $\mathsf{Br}(\delta)^\mathrm{id}$, but they are one and the same element of the tensor product $\mathsf{Br}^{+=}(\delta)\otimes_{\mathsf{Br}^=(\delta)}\mathsf{Br}^{=-}(\delta)$ over the $k$-linear subcategory $\mathsf{Br}^=(\delta)\subset\mathsf{Br}(\delta)$. ◻ So far we viewed our category objects $x_n=\{1,\dotsc,n\}$ as finite sets; now let us view them as *linearly ordered* finite sets. This (less invariant) point of view allows us to define another two subcategories in $\mathsf{Br}(\delta)$. The set of objects of both the subcategories is still the same as in $\mathsf{Br}(\delta)$. The subcategory $\mathsf{Br}^+(\delta)\subset\mathsf{Br}^{+=}(\delta)$ consists of all the morphisms in $\mathsf{Br}(\delta)$ which contain *no caps* and *no intersecting propagating strands*. So a basis in the $k$-vector space $\mathop{\mathrm{Hom}}_{\mathsf{Br}^+}(z,y)$ is formed by all the partitions $b$ of the set $z\sqcup y$ into two-element subsets such that - no two elements of $z$ are grouped together; - if an element $s\in z$ is grouped together with an element $i\in y$ in the partition $b$, and an element $t\in z$ is grouped together with an element $j\in y$ in $b$, and if $s<t$ in $z$, then $i<j$ in $y$. Similarly, the subcategory $\mathsf{Br}^-(\delta)\subset\mathsf{Br}^{=-}(\delta)$ consists of all the morphisms in $\mathsf{Br}(\delta)$ which contain *no cups* and *no intersecting propagating strands*. **Proposition 67**. 
*[(a)]{.upright} The triple of $k$-linear subcategories $\mathsf F=\mathsf{Br}^+(\delta)$,  $\mathsf G=\mathsf{Br}^=(\delta)$, and $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $\mathsf{Br}^{+=}(\delta)$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{Br}^+(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^=(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}^{+=}(\delta)$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism.* *[(b)]{.upright} The triple of $k$-linear subcategories $\mathsf F=\mathsf{Br}^=(\delta)$,  $\mathsf G=\mathsf{Br}^-(\delta)$, and $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $\mathsf{Br}^{=-}(\delta)$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{Br}^=(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^-(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}^{=-}(\delta)$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism. 0◻* Comparing the results of Propositions [Proposition 66](#triangular-decomposition-over-symmetric-group){reference-type="ref" reference="triangular-decomposition-over-symmetric-group"} and [Proposition 67](#two-small-triangular-decompositions){reference-type="ref" reference="two-small-triangular-decompositions"}, we come to an isomorphism (in the same symbolic notation as in formula [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"}) $$\label{big-decomposition-into-small-pieces-eqn} \mathsf{Br}^+(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^=(\delta) \otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^-(\delta)\overset\simeq\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}(\delta).$$ Finally, from [\[big-decomposition-into-small-pieces-eqn\]](#big-decomposition-into-small-pieces-eqn){reference-type="eqref" reference="big-decomposition-into-small-pieces-eqn"} we arrive at the following corollary, spelling out two versions of the triangular decomposition that are most relevant for us. **Corollary 68**. 
*[(a)]{.upright} The triple of $k$-linear subcategories $\mathsf F=\mathsf{Br}^+(\delta)$,  $\mathsf G=\mathsf{Br}^{=-}(\delta)$, and $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $\mathsf{Br}(\delta)$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{Br}^+(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^{=-}(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}(\delta)$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism.* *[(b)]{.upright} The triple of $k$-linear subcategories $\mathsf F=\mathsf{Br}^{+=}(\delta)$,  $\mathsf G=\mathsf{Br}^-(\delta)$, and $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $\mathsf{Br}(\delta)$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{Br}^{+=}(\delta)\otimes_{\mathsf{Br}(\delta)^\mathrm{id}}\mathsf{Br}^-(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{Br}(\delta)$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism. 0◻* It is worth mentioning that all the five categories $\mathsf{Br}^{+=}(\delta)$,  $\mathsf{Br}^{=-}(\delta)$,  $\mathsf{Br}^=(\delta)$,  $\mathsf{Br}^+(\delta)$, and $\mathsf{Br}^-(\delta)$, *viewed as abstract $k$-linear categories* (rather than subcategories in $\mathsf{Br}(\delta)$), do not depend on the parameter $\delta$. It is only the whole category $\mathsf{Br}(\delta)$ that depends on $\delta$. We only keep the symbol $\delta$ in our notation for the five subcategories in order to make the system of notation more transparent. Let us now discuss the local finiteness conditions from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"} in application to the Brauer diagram category and its subcategories defined above. 
First of all, we observe that the whole category $\mathsf{Br}(\delta)$ has finite-dimensional $\mathop{\mathrm{Hom}}$ spaces; but it is *not* locally finite in the sense of the definition in Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}, because it does not satisfy the interval finiteness condition (as the diagram [\[brauer-multiplication-diagram\]](#brauer-multiplication-diagram){reference-type="eqref" reference="brauer-multiplication-diagram"} illustrates). **Lemma 69**. *[(a)]{.upright} The $k$-linear categories $\mathsf{Br}^+(\delta)$, $\mathsf{Br}^{+=}(\delta)$, and $\mathsf{Br}^=(\delta)$ are locally finite, and moreover, they are lower finite.* *[(b)]{.upright} The $k$-linear categories $\mathsf{Br}^-(\delta)$, $\mathsf{Br}^{=-}(\delta)$, and $\mathsf{Br}^=(\delta)$ are locally finite, and moreover, they are upper finite.* *Proof.* This is trivial: for every given $n\ge0$, the condition $|x_m|\le|x_n|$ only holds for a finite number of integers $m\ge0$. ◻ **Lemma 70**. *[(a)]{.upright} The $k$-linear categories $\mathsf{Br}^+(\delta)$ and $\mathsf{Br}^{+=}(\delta)$ are upper strictly locally finite.* *[(b)]{.upright} The $k$-linear categories $\mathsf{Br}^-(\delta)$ and $\mathsf{Br}^{=-}(\delta)$ are lower strictly locally finite.* *Proof.* Part (b): for any object $y=x_n$ in $\mathsf{Br}^-(\delta)$ or $\mathsf{Br}^{=-}(\delta)$, setting $X_y=\{x_{n+2}\}$ satisfies the condition in the definition of lower strict local finiteness from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}. Part (a) is dual. ◻ Finally, we come to the following theorem, which is our main result in application to the Brauer diagram category. **Theorem 71**. *[(a)]{.upright} Let $\mathcal C^+=\mathcal C_{\mathsf{Br}^+(\delta)}$ be the coalgebra corresponding to the locally finite $k$-linear category $\mathsf{Br}^+(\delta)$, as per the construction from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}. Let $K^+=R_{\mathsf{Br}^+(\delta)}$ and $R=R_{\mathsf{Br}(\delta)}$ be the nonunital algebras corresponding to the $k$-linear categories $\mathsf{Br}^+(\delta)$ and $\mathsf{Br}(\delta)$. Then there is a semialgebra $\boldsymbol{\mathcal S}^+=\mathcal C^+\otimes_{K^+}R$ over the coalgebra $\mathcal C^+$ such that the abelian category of right $\boldsymbol{\mathcal S}^+$-semimodules is equivalent to the category of right $\mathsf{Br}(\delta)$-modules, while the abelian category of left $\boldsymbol{\mathcal S}^+$-semicontramodules is equivalent to the category of left $\mathsf{Br}(\delta)$-modules, $${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}^+\simeq{\operatorname{\mathsf{Mod--}}}\mathsf{Br}(\delta) \quad\text{and}\quad \boldsymbol{\mathcal S}^+{\operatorname{\mathsf{--Sicntr}}}\simeq\mathsf{Br}(\delta){\operatorname{\mathsf{--Mod}}}.$$ The semialgebra $\boldsymbol{\mathcal S}^+$ is an injective left $\mathcal C^+$-comodule.* *[(b)]{.upright} Let $\mathcal C^{+=}=\mathcal C_{\mathsf{Br}^{+=}(\delta)}$ be the coalgebra corresponding to the locally finite $k$-linear category $\mathsf{Br}^{+=}(\delta)$, as per the construction from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}. 
Let $K^{+=}=R_{\mathsf{Br}^{+=}(\delta)}$ and $R=R_{\mathsf{Br}(\delta)}$ be the nonunital algebras corresponding to the $k$-linear categories $\mathsf{Br}^{+=}(\delta)$ and $\mathsf{Br}(\delta)$. Then there is a semialgebra $\boldsymbol{\mathcal S}^{+=}=\mathcal C^{+=}\otimes_{K^{+=}}R$ over the coalgebra $\mathcal C^{+=}$ such that the abelian category of right $\boldsymbol{\mathcal S}^{+=}$-semimodules is equivalent to the category of right $\mathsf{Br}(\delta)$-modules, while the abelian category of left $\boldsymbol{\mathcal S}^{+=}$-semicontramodules is equivalent to the category of left $\mathsf{Br}(\delta)$-modules, $${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}^{+=}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf{Br}(\delta) \quad\text{and}\quad \boldsymbol{\mathcal S}^{+=}{\operatorname{\mathsf{--Sicntr}}}\simeq\mathsf{Br}(\delta){\operatorname{\mathsf{--Mod}}}.$$ The semialgebra $\boldsymbol{\mathcal S}^{+=}$ is an injective left $\mathcal C^{+=}$-comodule.* Let us emphasize that *we do not know* whether the semialgebra $\boldsymbol{\mathcal S}^+$ is an injective right $\mathcal C^+$-comodule, or whether the semialgebra $\boldsymbol{\mathcal S}^{+=}$ is an injective right $\mathcal C^{+=}$-comodule. *Proof.* Part (a): put $\mathsf F=\mathsf{Br}^+(\delta)$,  $\mathsf G=\mathsf{Br}^{=-}(\delta)$, and $\mathsf E=\mathsf{Br}(\delta)$. Then, by Corollary [Corollary 68](#two-main-triangular-decompositions-for-brauer){reference-type="ref" reference="two-main-triangular-decompositions-for-brauer"}(a), we have a triangular decomposition $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$, where $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$. By Lemma [Lemma 69](#brauer-lower-upper-finiteness){reference-type="ref" reference="brauer-lower-upper-finiteness"}(a), the $k$-linear category $\mathsf F=\mathsf{Br}^+(\delta)$ is lower finite. Therefore, Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable and provide the desired assertions. Part (b): put $\mathsf F=\mathsf{Br}^{+=}(\delta)$,  $\mathsf G=\mathsf{Br}^-(\delta)$, and $\mathsf E=\mathsf{Br}(\delta)$. Then, by Corollary [Corollary 68](#two-main-triangular-decompositions-for-brauer){reference-type="ref" reference="two-main-triangular-decompositions-for-brauer"}(b), we have a triangular decomposition $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$, where $\mathsf H=\mathsf{Br}(\delta)^\mathrm{id}$. By Lemma [Lemma 69](#brauer-lower-upper-finiteness){reference-type="ref" reference="brauer-lower-upper-finiteness"}(a), the $k$-linear category $\mathsf F=\mathsf{Br}^{+=}(\delta)$ is lower finite. Thus Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable. ◻ Inverting the roles of the left and right sides in the discussion above, one can consider the coalgebras $\mathcal C^-=\mathcal C_{\mathsf{Br}^-(\delta)}$ and $\mathcal C^{=-}=\mathcal C_{\mathsf{Br}^{=-}(\delta)}$, as well as the algebras $K^-=R_{\mathsf{Br}^-(\delta)}$ and $K^{=-}=R_{\mathsf{Br}^{=-}(\delta)}$. 
Then, applying the left-right opposite version of our discussion in Sections [\[flat-integrable-bimodules-secn\]](#flat-integrable-bimodules-secn){reference-type="ref" reference="flat-integrable-bimodules-secn"}-- [\[description-of-semimod-semicontra-secn\]](#description-of-semimod-semicontra-secn){reference-type="ref" reference="description-of-semimod-semicontra-secn"}, one can construct two semialgebras $\boldsymbol{\mathcal S}^-=R\otimes_{K^-}\mathcal C^-$ and $\boldsymbol{\mathcal S}^{=-}=R\otimes_{K^{=-}}\mathcal C^{=-}$, which are injective as right comodules over their respective coalgebras $\mathcal C^-$ and $\mathcal C^{=-}$. One obtains equivalences of abelian categories $$\boldsymbol{\mathcal S}^-{\operatorname{\mathsf{--Simod}}}\simeq\boldsymbol{\mathcal S}^{=-}{\operatorname{\mathsf{--Simod}}}\simeq\mathsf{Br}(\delta){\operatorname{\mathsf{--Mod}}} \ \ \text{and}\ \ {\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}^-\simeq{\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}^{=-}\simeq{\operatorname{\mathsf{Mod--}}}\mathsf{Br}(\delta),$$ where ${\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}$ denotes the category of right semicontramodules over a semialgebra $\boldsymbol{\mathcal S}$. ## Temperley--Lieb diagram category {#temperley-lieb-subsecn} The following example was also suggested to me by Catharina Stroppel. We keep the notation from the beginning of Section [6.2](#brauer-subsecn){reference-type="ref" reference="brauer-subsecn"}; so $k$ is a field and $\delta\in k$ is an element. The *Temperley--Lieb diagram category* $\mathsf{TL}(\delta)$  [@TL], [@Kau Section 4], [@BM Section 2], [@RS Section 2], [@DG Section 2] is the following subcategory in $\mathsf{Br}(\delta)$. The objects of $\mathsf{TL}(\delta)$ are the nonnegative integers $n\ge0$ interpreted as linearly ordered finite sets $x_n=\{1,\dotsc,n\}$. Given two linearly ordered finite sets $x$ and $y$, the $k$-vector subspace $\mathop{\mathrm{Hom}}_\mathsf{TL}(x,y)\subset\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ consists of all the morphisms which contain *no intersecting arcs*. More precisely, the vector subspace $\mathop{\mathrm{Hom}}_\mathsf{TL}(x,y)\subset\mathop{\mathrm{Hom}}_\mathsf{Br}(x,y)$ is spanned by all the basis vectors, i. e., partitions $a$ of the set $x\sqcup y$ into a disjoint union of sets of cardinality $2$, satisfying the following condition. Let $n=|x|$ and $m=|y|$. Let us interpret the elements of $x$ as the points $(0,1)$, ..., $(0,n)$ in the real plane $\mathbb R^2$ with coordinates $(u,v)$, and the elements of $y$ as the points $(1,1)$, ..., $(1,m)$ in the same real plane. So we have $u=0$ for all the points of $x$ and $u=1$ for all the points of $y$. Then the condition is that it must be possible to represent the edges of the graph $G_a$ by nonintersecting arcs connecting our points in $\mathbb R^2$, in such a way that all the arcs lie in the region $0\le u\le 1$ in $\mathbb R^2$ (i. e., between the two lines on which the points of $x$ and $y$ are situated). 
$$\label{temperley-lieb-morphism-example} \begin{gathered} \xymatrix{ y\colon & *=0{\bullet}\ar@{-}[dd] & *=0{\bullet}\ar@{-}@/^-1.5pc/[rrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet}\ar@{-}[lllldd] & *=0{\bullet}\ar@{-}[rrrrdd] &*=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}\\ \quad a\colon\mkern-18mu \\ x\colon & *=0{\bullet}& *=0{\bullet}& *=0{\bullet}\ar@{-}@/^2.5pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet}& *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet}&*=0{\bullet} } \end{gathered}$$ Now let us define two $k$-linear subcategories in $\mathsf{TL}(\delta)$. Both the subcategories have the same set of objects as $\mathsf{TL}(\delta)$; so the objects are the nonnegative integers. The subcategory $\mathsf{TL}^+(\delta)\subset\mathsf{TL}(\delta)$ consists of all the morphisms in $\mathsf{TL}(\delta)$ which *contain no caps*. Formally, we put $$\mathsf{TL}^+(\delta)=\mathsf{TL}(\delta)\cap\mathsf{Br}^+(\delta)= \mathsf{TL}(\delta)\cap\mathsf{Br}^{+=}(\delta).$$ Similarly, the subcategory $\mathsf{TL}^-(\delta)\subset\mathsf{TL}(\delta)$ consists of all the morphisms in $\mathsf{TL}(\delta)$ which *contain no cups*. Formally, $$\mathsf{TL}^-(\delta)=\mathsf{TL}(\delta)\cap\mathsf{Br}^-(\delta)= \mathsf{TL}(\delta)\cap\mathsf{Br}^{=-}(\delta).$$ It is worth mentioning that the categories $\mathsf{TL}^+(\delta)$ and $\mathsf{TL}^-(\delta)$, *viewed as abstract $k$-linear categories* (rather than subcategories in $\mathsf{TL}(\delta)$), do not depend on the parameter $\delta$. It is only the whole category $\mathsf{TL}(\delta)$ that depends on $\delta$. We only keep the symbol $\delta$ in our notation for the two subcategories in order to make the system of notation more transparent. **Proposition 72**. *The triple of $k$-linear subcategories $\mathsf F=\mathsf{TL}^+(\delta)$,  $\mathsf G=\mathsf{TL}^-(\delta)$, and $\mathsf H=\mathsf{TL}(\delta)^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $\mathsf{TL}(\delta)$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. 
In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$\mathsf{TL}^+(\delta)\otimes_{\mathsf{TL}(\delta)^\mathrm{id}}\mathsf{TL}^-(\delta)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax \mathsf{TL}(\delta)$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism.* *Proof.* We restrict ourselves to drawing a diagram for the factorization of the morphism (basis element) $a\in\mathop{\mathrm{Hom}}_\mathsf{TL}(x,y)$ from diagram [\[temperley-lieb-morphism-example\]](#temperley-lieb-morphism-example){reference-type="eqref" reference="temperley-lieb-morphism-example"} into a composition of two basis elements in the spaces of morphisms $b\in\mathop{\mathrm{Hom}}_{\mathsf{TL}^-}(x,z)$ and $c\in\mathop{\mathrm{Hom}}_{\mathsf{TL}^+}(z,y)$: $$\xymatrix{ y\colon & *=0{\bullet}\ar@{-}[dd] & *=0{\bullet}\ar@{-}@/^-1.5pc/[rrr] & *=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}& *=0{\bullet}& *=0{\bullet}\ar@{-}[lllldd] & *=0{\bullet}\ar@{-}[lllldd] &*=0{\bullet}\ar@{-}@/^-0.5pc/[r] & *=0{\bullet}\\ \quad c\colon\mkern-18mu \\ z\colon & *=0{\bullet}\ar@{-}[dd] & *=0{\bullet}\ar@{-}[dd] & *=0{\bullet}\ar@{-}[rrrrrrrrdd] \\ \quad b\colon\mkern-18mu \\ x\colon & *=0{\bullet}& *=0{\bullet}& *=0{\bullet}\ar@{-}@/^2.5pc/[rrrrr] & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet}& *=0{\bullet} & *=0{\bullet}\ar@{-}@/^0.5pc/[r] & *=0{\bullet}&*=0{\bullet} }$$ Here we have $a=cb$. ◻ Let us discuss the local finiteness conditions from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"} in application to the Temperley--Lieb diagram category and its two subcategories defined above. Similarly to the discussion in Section [6.2](#brauer-subsecn){reference-type="ref" reference="brauer-subsecn"}, we start by observing that the whole category $\mathsf{TL}(\delta)$ has finite-dimensional $\mathop{\mathrm{Hom}}$ spaces; but it is *not* locally finite in the sense of the definition in Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}, because it does not satisfy the interval finiteness condition. The two subcategories, however, are locally finite. **Lemma 73**. *[(a)]{.upright} The $k$-linear category $\mathsf{TL}^+(\delta)$ is locally finite, and moreover, it is lower finite.* *[(b)]{.upright} The $k$-linear category $\mathsf{TL}^-(\delta)$ is locally finite, and moreover, it is upper finite.* *Proof.* Follows from Lemma [Lemma 69](#brauer-lower-upper-finiteness){reference-type="ref" reference="brauer-lower-upper-finiteness"}. ◻ **Lemma 74**. *[(a)]{.upright} The $k$-linear category $\mathsf{TL}^+(\delta)$ is upper strictly locally finite.* *[(b)]{.upright} The $k$-linear category $\mathsf{TL}^-(\delta)$ is lower strictly locally finite.* *Proof.* Similar to Lemma [Lemma 70](#brauer-strict-local-finiteness){reference-type="ref" reference="brauer-strict-local-finiteness"}. ◻ So we come to the following theorem, which is our main result in application to the Temperley--Lieb diagram category. **Theorem 75**.
*Let $\mathcal C^+=\mathcal C_{\mathsf{TL}^+(\delta)}$ be the coalgebra corresponding to the locally finite $k$-linear category $\mathsf{TL}^+(\delta)$, as per the construction from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}. Let $K^+=R_{\mathsf{TL}^+(\delta)}$ and $R=R_{\mathsf{TL}(\delta)}$ be the nonunital algebras corresponding to the $k$-linear categories $\mathsf{TL}^+(\delta)$ and $\mathsf{TL}(\delta)$. Then there is a semialgebra $\boldsymbol{\mathcal S}^+=\mathcal C^+\otimes_{K^+}R$ over the coalgebra $\mathcal C^+$ such that the abelian category of right $\boldsymbol{\mathcal S}^+$-semimodules is equivalent to the category of right $\mathsf{TL}(\delta)$-modules, while the abelian category of left $\boldsymbol{\mathcal S}^+$-semicontramodules is equivalent to the category of left $\mathsf{TL}(\delta)$-modules, $${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}^+\simeq{\operatorname{\mathsf{Mod--}}}\mathsf{TL}(\delta) \quad\text{and}\quad \boldsymbol{\mathcal S}^+{\operatorname{\mathsf{--Sicntr}}}\simeq\mathsf{TL}(\delta){\operatorname{\mathsf{--Mod}}}.$$ The semialgebra $\boldsymbol{\mathcal S}^+$ is an injective left $\mathcal C^+$-comodule.* Let us emphasize that *we do not know* whether the semialgebra $\boldsymbol{\mathcal S}^+$ is an injective right $\mathcal C^+$-comodule. *Proof.* Put $\mathsf F=\mathsf{TL}^+(\delta)$,  $\mathsf G=\mathsf{TL}^-(\delta)$, and $\mathsf E=\mathsf{TL}(\delta)$. Then, by Proposition [Proposition 72](#triangular-decomposition-for-temperley-lieb){reference-type="ref" reference="triangular-decomposition-for-temperley-lieb"}, we have a triangular decomposition $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$, where $\mathsf H=\mathsf{TL}(\delta)^\mathrm{id}$. By Lemma [Lemma 73](#temperley-lieb-lower-upper-finiteness){reference-type="ref" reference="temperley-lieb-lower-upper-finiteness"}(a), the $k$-linear category $\mathsf F=\mathsf{TL}^+(\delta)$ is lower finite. Therefore, Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable and provide the desired assertions. ◻ Inverting the roles of the left and right sides in the discussion above, one can consider the coalgebra $\mathcal C^-=\mathcal C_{\mathsf{TL}^-(\delta)}$ and the algebra $K^-=R_{\mathsf{TL}^-(\delta)}$. Then, applying the left-right opposite version of our discussion in Sections [\[flat-integrable-bimodules-secn\]](#flat-integrable-bimodules-secn){reference-type="ref" reference="flat-integrable-bimodules-secn"}-- [\[description-of-semimod-semicontra-secn\]](#description-of-semimod-semicontra-secn){reference-type="ref" reference="description-of-semimod-semicontra-secn"}, one can construct a semialgebra $\boldsymbol{\mathcal S}^-=R\otimes_{K^-}\mathcal C^-$, which is injective as a right comodule over $\mathcal C^-$. One obtains equivalences of abelian categories $$\boldsymbol{\mathcal S}^-{\operatorname{\mathsf{--Simod}}}\simeq\mathsf{TL}(\delta){\operatorname{\mathsf{--Mod}}} \quad\text{and}\quad {\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}^-\simeq{\operatorname{\mathsf{Mod--}}}\mathsf{TL}(\delta).$$ ## The Reedy category of a simplicial set The following example was suggested to me by Jan Št'ovı́ček. 
Without going into the details of the general definition of a *Reedy category* [@Hir Section 15.1], [@RV Section 2], let us spell out a specific example relevant in our context. The very standard definition of the *cosimplicial indexing category* $\Delta$ is as follows [@Hir Definition 15.1.7]. The objects of $\Delta$ are the nonempty finite linearly ordered sets $[n]=\{0,\dotsc,n\}$, where $n\ge0$ are nonnegative integers. The morphisms $\sigma\colon[n]\longrightarrow[m]$ in $\Delta$ are the nonstrictly monotone maps $[n]\longrightarrow[m]$, i. e., the functions $\sigma\colon\{0,\dotsc,n\}\longrightarrow\{0,\dotsc,m\}$ such that $i\le j$ implies $\sigma(i)\le\sigma(j)$. Two full subcategories $\Delta^+$ and $\Delta^-\subset\Delta$ are defined by the following obvious rules. Both $\Delta^+$ and $\Delta^-$ have the same objects as $\Delta$. The morphisms in $\Delta^+$ (known as *face maps* and their compositions) are those morphisms in $\Delta$ that correspond to *injective* maps of linearly ordered finite sets $\sigma\colon[n]\longrightarrow[m]$. The morphisms in $\Delta^-$ (known as *degeneracy maps* and their compositions) are those morphisms in $\Delta$ that correspond to *surjective* maps $\sigma\colon[n]\longrightarrow[m]$. Clearly, any morphism $\sigma$ in $\Delta$ can be uniquely factorized as $\sigma=\tau\pi$, where $\tau$ is a morphism in $\Delta^+$ and $\pi$ is a morphism in $\Delta^-$. In other words, the composition map $$\label{delta-triangular-decomposition} \coprod\nolimits_{[l]\in\Delta}\mathop{\mathrm{Mor}}_{\Delta^+}([l],[m])\times \mathop{\mathrm{Mor}}_{\Delta^-}([n],[l])\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Mor}}_\Delta([n],[m])$$ is bijective for all pairs of objects $[n]$, $[m]\in\Delta$. Here we denote by $\mathop{\mathrm{Mor}}_\mathsf C(x,y)$ the set of morphisms $x\longrightarrow y$ in a nonadditive category $\mathsf C$. A *simplicial set* $S$ is a contravariant functor $S\colon\Delta^{\mathsf{op}}\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathsf{Sets}$, where $\mathsf{Sets}$ denotes the category of sets. Let us denote by $S_n\in\mathsf{Sets}$ the set assigned to an object $[n]\in\Delta$ by the functor $S$. Given a simplicial set $S$, the *category* $\Delta S$ *of simplices of $S$* [@Hir Example 15.1.14 and Definition 15.1.16] is defined as follows. The objects of $\Delta S$ are the simplices of $S$; so the set of objects of $\Delta S$ is the disjoint union $\coprod_{n\ge0}S_n$. Given two simplices $x\in S_n$ and $y\in S_m$, the set of morphisms $\mathop{\mathrm{Mor}}_{\Delta S}(x,y)$ is, by the definition, bijective to the set of all morphisms $\sigma\in\mathop{\mathrm{Hom}}_\Delta([n],[m])$ such that $\sigma(y)=x$ in $S$. The composition of morphisms in $\Delta S$ is induced by the composition of morphisms in $\Delta$. Two full subcategories $\Delta^+S$ and $\Delta^-S\subset\Delta S$ arise from the two full subcategories $\Delta^+$ and $\Delta^- \subset\Delta$. By definition, both $\Delta^+S$ and $\Delta^-S$ have the same objects as $\Delta S$. The morphisms in $\Delta^+S$ are those morphisms in $\Delta S$ that correspond to morphisms from $\Delta^+\subset\Delta$. The morphisms in $\Delta^-S$ are those morphisms in $\Delta S$ that correspond to morphisms from $\Delta^-\subset\Delta$. 
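The unique factorization of morphisms in $\Delta$ described above (a morphism of $\Delta^-$ followed by a morphism of $\Delta^+$) is elementary and can be checked by exhaustive search in small cases. The following Python sketch is included only as an illustration of this combinatorial fact; monotone maps are encoded as tuples of values, and the bound $n,m\le3$ is an arbitrary choice.

```python
from itertools import combinations, combinations_with_replacement, product

def monotone_maps(n, m):
    """Nonstrictly monotone maps [n] -> [m], encoded as weakly increasing value tuples of length n+1."""
    return list(combinations_with_replacement(range(m + 1), n + 1))

def injective_monotone_maps(l, m):
    """Strictly monotone maps [l] -> [m] (morphisms of Delta^+)."""
    return list(combinations(range(m + 1), l + 1))

def surjective_monotone_maps(n, l):
    """Monotone surjections [n] -> [l] (morphisms of Delta^-)."""
    return [f for f in monotone_maps(n, l) if set(f) == set(range(l + 1))]

def compose(tau, pi):
    """Value tuple of the composite map tau . pi."""
    return tuple(tau[i] for i in pi)

# every monotone map factors uniquely as (injective) . (surjective)
for n, m in product(range(4), repeat=2):
    for sigma in monotone_maps(n, m):
        factorizations = [
            (tau, pi)
            for l in range(min(n, m) + 1)
            for tau in injective_monotone_maps(l, m)
            for pi in surjective_monotone_maps(n, l)
            if compose(tau, pi) == sigma
        ]
        assert len(factorizations) == 1
print("unique epi-mono factorization verified for n, m <= 3")
```

The sketch also exhibits the factorization explicitly: the surjective part collapses $[n]$ onto the image of the map, and the injective part includes that image into $[m]$.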
The unique factorization [\[delta-triangular-decomposition\]](#delta-triangular-decomposition){reference-type="eqref" reference="delta-triangular-decomposition"} of morphisms in $\Delta$ induces a similar unique factorization of morphisms in $\Delta S$. Any morphism $\sigma$ in $\Delta S$ can be uniquely factorized as $\sigma=\tau\pi$, where $\tau$ is a morphism in $\Delta^+S$ and $\pi$ is a morphism in $\Delta^-S$. In other words, the composition map $$\label{category-of-simplices-triangular-decomposition} \coprod\nolimits_{z\in\Delta S}\mathop{\mathrm{Mor}}_{\Delta^+S}(z,y)\times \mathop{\mathrm{Mor}}_{\Delta^-S}(x,z)\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax\mathop{\mathrm{Mor}}_{\Delta S}(x,y)$$ is bijective for all pairs of objects $x$, $y\in\Delta S$. Now we want to pass from the categories enriched in sets to the categories enriched in $k$-vector spaces. The functor assigning to a set $C$ the $k$-vector space $k[C]$ freely spanned by $C$ is a monoidal functor $\mathsf{Sets}\longrightarrow k{\operatorname{\mathsf{--Vect}}}$ from the monoidal category of sets (with the Cartesian product) to the monoidal category of $k$-vector spaces (with the tensor product). Applying this monoidal functor to a category $\mathsf C$, one obtains a $k$-linear category $k[\mathsf C]$. Explicitly, the objects of $k[\mathsf C]$ are the same as the objects of $\mathsf C$. For any two objects $x$ and $y\in\mathsf C$, the elements of the set $\mathop{\mathrm{Hom}}_\mathsf C(x,y)$ form a basis in the $k$-vector space $\mathop{\mathrm{Hom}}_{k[\mathsf C]}(x,y)$. The composition of morphisms in $k[\mathsf C]$ is induced by the composition of morphisms in $\mathsf C$. Applying this construction to the category $\Delta S$ and its subcategories $\Delta^+S$ and $\Delta^-S$, we obtain a $k$-linear category $k[\Delta S]$ with two $k$-linear subcategories $k[\Delta^+S]$ and $k[\Delta^-S]\subset k[\Delta S]$. These subcategories form the triangular decomposition which we are interested in. **Proposition 76**. *The triple of $k$-linear subcategories $\mathsf F=k[\Delta^+S]$,  $\mathsf G=k[\Delta^-S]$, and $\mathsf H=k[\Delta S]^\mathrm{id}$ forms a triangular decomposition of the $k$-linear category $k[\Delta S]$ in the sense of Sections [5.4](#tensor-and-triangular-subsecn){reference-type="ref" reference="tensor-and-triangular-subsecn"}-- [5.5](#triangular-and-projectivity-subsecn){reference-type="ref" reference="triangular-and-projectivity-subsecn"}. In other words, the map [\[simplified-notation-three-subcategories-map\]](#simplified-notation-three-subcategories-map){reference-type="eqref" reference="simplified-notation-three-subcategories-map"} $$k[\Delta^+S]\otimes_{k[\Delta S]^\mathrm{id}}k[\Delta^-S]\mskip.5\thinmuskip\relbar\joinrel\relbar \joinrel\rightarrow\mskip.5\thinmuskip\relax k[\Delta S]$$ is an isomorphism; or equivalently, the map [\[triangular-over-discrete-isomorphism\]](#triangular-over-discrete-isomorphism){reference-type="eqref" reference="triangular-over-discrete-isomorphism"} is an isomorphism.* *Proof.* Apply the freely generated $k$-vector space functor to the bijection [\[category-of-simplices-triangular-decomposition\]](#category-of-simplices-triangular-decomposition){reference-type="eqref" reference="category-of-simplices-triangular-decomposition"}. 
◻ We now turn to discussing the local finiteness conditions from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"} in application to the category $k[\Delta S]$ and its two subcategories defined above. Similarly to Sections [6.2](#brauer-subsecn){reference-type="ref" reference="brauer-subsecn"}-- [6.3](#temperley-lieb-subsecn){reference-type="ref" reference="temperley-lieb-subsecn"}, the whole category $k[\Delta S]$ has finite-dimensional $\mathop{\mathrm{Hom}}$ spaces; but it is *not* locally finite in the sense of the definition in Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}, because it does not satisfy the interval finiteness condition. **Lemma 77**. *Let $\mathsf C$ be a category and $\mathsf E=k[\mathsf C]$ be the related $k$-linear category. Then one has $x\preceq y$ in $\mathsf E$ if and only if there exists a morphism $x\longrightarrow y$ in $\mathsf C$.* *Proof.* The proof is straightforward. ◻ **Lemma 78**. *[(a)]{.upright} The $k$-linear category $k[\Delta^+S]$ is locally finite, and moreover, it is lower finite.* *[(b)]{.upright} The $k$-linear category $k[\Delta^-S]$ is locally finite, and moreover, it is upper finite.* *Proof.* Part (a): let $y\in S_m$ be an object of $\Delta S$. Notice that the datum of an element $x\in S_n$ together with a morphism $\sigma\colon x\longrightarrow y$ in $\Delta S$ is uniquely determined by the related morphism $\sigma\colon[n]\longrightarrow[m]$ in $\Delta$: one recovers the element $x\in S_n$ as $x=\sigma(y)$. Furthermore, for any fixed $m\ge0$, there is only a finite number of morphisms $[n]\longrightarrow[m]$ in $\Delta^+$ (as one necessarily has $n\le m$). Consequently, for any fixed $y\in S_m$, the set of all objects $x$ in $\Delta S$ for which there exists a morphism $x\longrightarrow y$ in $\Delta^+S$ is finite. By Lemma [Lemma 77](#morphism-order-in-freely-generated){reference-type="ref" reference="morphism-order-in-freely-generated"}, this set coincides with the set of all objects $x$ in $k[\Delta^+S]$ such that $x\preceq y$ in $k[\Delta^+S]$. Part (b): let $x\in S_n$ be an object of $\Delta S$. Then the datum of an element $y\in S_m$ together with a morphism $\sigma\colon x\longrightarrow y$ in $\Delta S$ is *not* uniquely determined by the related morphism $\sigma\colon[n]\longrightarrow[m]$ in $\Delta$, generally speaking (one simplex can be a face of arbitrarily many other simplices of a given dimension). However, let us restrict ourselves to morphisms in $\Delta^-S$. Then the datum of an element $y\in S_m$ together with a morphism $\pi\colon x\longrightarrow y$ in $\Delta^-S$ is still uniquely determined by the related morphism $\pi\colon[n]\longrightarrow[m]$ in $\Delta^-$. Indeed, $\pi$ is a surjective nonstrictly order-preserving map of finite sets $\{0,\dotsc,n\}\longrightarrow\{0,\dotsc,m\}$. Therefore, $\pi$ admits an order-preserving section $\tau\colon\{0,\dotsc,m\}\longrightarrow\{0,\dotsc,n\}$. So there exists a morphism $\tau\colon[m]\longrightarrow[n]$ in $\Delta$ such that $\pi\tau=\mathrm{id}_{[m]}$. Now we have $x=\pi(y)$ in $S_n$, and we can recover the element $y\in S_m$ as $y=\tau(x)$. The rest of the argument is similar to part (a). For any fixed $n\ge0$, there is only a finite number of morphisms $[n]\longrightarrow[m]$ in $\Delta^-$ (as one necessarily has $m\le n$).
Consequently, for any fixed $x\in S_n$, the set of all objects $y$ in $\Delta S$ for which there exists a morphism $x\longrightarrow y$ in $\Delta^-S$ is finite. By Lemma [Lemma 77](#morphism-order-in-freely-generated){reference-type="ref" reference="morphism-order-in-freely-generated"}, this set coincides with the set of all objects $y$ in $k[\Delta^-S]$ such that $x\preceq y$ in $k[\Delta^-S]$. ◻ **Lemma 79**. *[(a)]{.upright} Let $S$ be a simplicial set such that the set $S_n$ is finite for every $n\ge0$. Then the $k$-linear category $k[\Delta^+S]$ is upper strictly locally finite.* *[(b)]{.upright} For any simplicial set $S$, the $k$-linear category $k[\Delta^-S]$ is lower strictly locally finite.* *Proof.* Part (a): let $x\in S_n$ and $w\in S_m$ be two objects of $\Delta^+S$. Then one has $x\prec w$ in $k[\Delta^+S]$ if and only if $n<m$ and there exists a morphism $x\longrightarrow w$ in $\Delta^+S$. If this is the case, then such a morphism $x\longrightarrow w$ factorizes through some object $y\in S_{n+1}$ for which $x\prec y$ in $k[\Delta^+S]$. By assumption, there is only a finite set of objects $y$ that can occur here as $x$ is fixed and $w$ varies. Part (b): let $z\in S_n$ and $y\in S_m$ be two objects of $\Delta^-S$. Then one has $z\prec y$ in $k[\Delta^-S]$ if and only if $n>m$ and there exists a morphism $z\longrightarrow y$ in $\Delta^-S$. If this is the case, then such a morphism $z\longrightarrow y$ factorizes through some object $x\in S_{m+1}$ for which $x\prec y$ in $k[\Delta^-S]$. In other words, there exists a morphism $x\longrightarrow y$ in $\Delta^-S$. There are only finitely many such objects $x$ (at most $m+1$, indexed by the morphisms $[m+1]\longrightarrow[m]$ in $\Delta^-$) that can occur here as $y$ is fixed and $z$ varies. ◻ We come to the following theorem, which is our main result in application to the category of simplices of a simplicial set. **Theorem 80**. *Let $S$ be a simplicial set. In this context:* *[(a)]{.upright} Let $\mathcal C^+=\mathcal C_{k[\Delta^+S]}$ be the coalgebra corresponding to the locally finite $k$-linear category $k[\Delta^+S]$, as per the construction from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}. Let $K^+=R_{k[\Delta^+S]}$ and $R=R_{k[\Delta S]}$ be the nonunital algebras corresponding to the $k$-linear categories $k[\Delta^+S]$ and $k[\Delta S]$. Then there is a semialgebra $\boldsymbol{\mathcal S}^+=\mathcal C^+\otimes_{K^+}R$ over the coalgebra $\mathcal C^+$ such that the abelian category of right $\boldsymbol{\mathcal S}^+$-semimodules is equivalent to the category of right $k[\Delta S]$-modules, while the abelian category of left $\boldsymbol{\mathcal S}^+$-semicontramodules is equivalent to the category of left $k[\Delta S]$-modules, $${\operatorname{\mathsf{Simod--}}}\boldsymbol{\mathcal S}^+\simeq{\operatorname{\mathsf{Mod--}}}k[\Delta S] \quad\text{and}\quad \boldsymbol{\mathcal S}^+{\operatorname{\mathsf{--Sicntr}}}\simeq k[\Delta S]{\operatorname{\mathsf{--Mod}}}.$$ The semialgebra $\boldsymbol{\mathcal S}^+$ is an injective left $\mathcal C^+$-comodule.* *[(b)]{.upright} Let $\mathcal C^-=\mathcal C_{k[\Delta^-S]}$ be the coalgebra corresponding to the locally finite $k$-linear category $k[\Delta^-S]$, as per the construction from Section [5.2](#locally-finite-categories-subsecn){reference-type="ref" reference="locally-finite-categories-subsecn"}.
Let $K^-=R_{k[\Delta^-S]}$ and $R=R_{k[\Delta S]}$ be the nonunital algebras corresponding to the $k$-linear categories $k[\Delta^-S]$ and $k[\Delta S]$. Then there is a semialgebra $\boldsymbol{\mathcal S}^-=R\otimes_{K^-}\mathcal C^-$ over the coalgebra $\mathcal C^-$ such that the abelian category of left $\boldsymbol{\mathcal S}^-$-semimodules is equivalent to the category of left $k[\Delta S]$-modules, while the abelian category of right $\boldsymbol{\mathcal S}^-$-semicontramodules is equivalent to the category of right $k[\Delta S]$-modules, $$\boldsymbol{\mathcal S}^-{\operatorname{\mathsf{--Simod}}}\simeq k[\Delta S]{\operatorname{\mathsf{--Mod}}}\quad\text{and}\quad {\operatorname{\mathsf{Sicntr--}}}\boldsymbol{\mathcal S}^-\simeq{\operatorname{\mathsf{Mod--}}}k[\Delta S].$$ The semialgebra $\boldsymbol{\mathcal S}^-$ is an injective right $\mathcal C^-$-comodule.* Let us emphasize that *we do not know* whether the semialgebra $\boldsymbol{\mathcal S}^+$ is an injective right $\mathcal C^+$-comodule, or whether the semialgebra $\boldsymbol{\mathcal S}^-$ is an injective left $\mathcal C^-$-comodule. *Proof.* Part (a): put $\mathsf F=k[\Delta^+S]$,  $\mathsf G=k[\Delta^-S]$, and $\mathsf E=k[\Delta S]$. Then, by Proposition [Proposition 76](#triangular-decomposition-for-simplicial){reference-type="ref" reference="triangular-decomposition-for-simplicial"}, we have a triangular decomposition $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$, where $\mathsf H=k[\Delta S]^\mathrm{id}$. By Lemma [Lemma 78](#simplicial-lower-upper-finiteness){reference-type="ref" reference="simplicial-lower-upper-finiteness"}(a), the $k$-linear category $\mathsf F=k[\Delta^+S]$ is lower finite. Therefore, Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable and provide the desired assertions. Part (b): put $\mathsf F=k[\Delta^-S]^{\mathsf{op}}$,  $\mathsf G=k[\Delta^+S]^{\mathsf{op}}$, and $\mathsf E=k[\Delta S]^{\mathsf{op}}$. Then, by Proposition [Proposition 76](#triangular-decomposition-for-simplicial){reference-type="ref" reference="triangular-decomposition-for-simplicial"}, we have a triangular decomposition $\mathsf F\otimes_\mathsf H\mathsf G\simeq\mathsf E$, where $\mathsf H=k[\Delta S]^{{\mathsf{op}},\mathrm{id}}$. By Lemma [Lemma 78](#simplicial-lower-upper-finiteness){reference-type="ref" reference="simplicial-lower-upper-finiteness"}(b), the $k$-linear category $k[\Delta^-S]$ is upper finite; so the opposite $k$-linear category $\mathsf F=k[\Delta^-S]^{\mathsf{op}}$ is lower finite. Therefore, Corollaries [Corollary 61](#category-semialgebra-right-semimodules){reference-type="ref" reference="category-semialgebra-right-semimodules"}-- [Corollary 63](#category-semialgebra-lower-finite-case){reference-type="ref" reference="category-semialgebra-lower-finite-case"} are applicable and provide the desired assertions, up to a passage to the left-right opposite coalgebra and semialgebra. ◻ A. L. Agore. Monomorphisms of coalgebras. *Colloquium Mathematicum* **120**, \#1, p. 149--155, 2010. `arXiv:0908.2959 [math.RA]` M. Aguiar. Internal categories and quantum groups. Cornell Univ. Ph.D. Thesis, 1997. Available from `https://pi.math.cornell.edu/~maguiar/books.html` G. Benkart, D. Moon. Tensor product representations of Temperley-Lieb algebras and Chebyshev polynomials.
Representations of Algebras and Related Topics, R.-O. Buchweitz and H. Lenzing, Eds., Fields Institute Communicat., vol. 45, AMS, Providence, 2005, p. 57--80. R. Brauer. On algebras which are connected with semisimple continuous groups. *Annals of Math. (2)* **38**, \#4, p. 857--872, 1937. S. Doty, A. Giaquinto. Origins of the Temperley--Lieb algebra: early history. `arXiv:2307.11929 [math.CO]` P. S. Hirschhorn. Model categories and their localizations. *Mathematical Surveys and Monographs*, 99. AMS, Providence, 2003. J. Holstein, A. Lazarev. Categorical Koszul duality. *Advances in Math.* **409**, part B, article ID 108644, 52 pp., 2022. `arXiv:2006.01705 [math.CT]` L. H. Kauffman. State models and the Jones polynomial. *Topology* **26**, \#3, p. 395--407, 1987. C. Nǎstǎsescu, B. Torrecillas. Torsion theories for coalgebras. *Journ. of Pure and Appl. Algebra* **97**, \#2, p. 203--220, 1994. P. Nystedt. A survey of s-unital and locally unital rings. *Revista Integración, temas de matemáticas* (Escuela de Matemáticas, Universidad Industrial de Santander) **37**, \#2, p. 251--260, 2019. `arXiv:1809.02117 [math.RA]` L. Positselski. Homological algebra of semimodules and semicontramodules: Semi-infinite homological algebra of associative algebraic structures. Appendix C in collaboration with D. Rumynin; Appendix D in collaboration with S. Arkhipov. Monografie Matematyczne vol. 70, Birkhäuser/Springer Basel, 2010. xxiv+349 pp. `arXiv:0708.3398 [math.CT]` L. Positselski. Two kinds of derived categories, Koszul duality, and comodule-contramodule correspondence. *Memoirs of the American Math. Society* **212**, \#996, 2011. vi+133 pp. `arXiv:0905.2621 [math.CT]` L. Positselski. Contramodules. *Confluentes Math.* **13**, \#2, p. 93--182, 2021. `arXiv:1503.00991 [math.CT]` L. Positselski. Smooth duality and co-contra correspondence. *Journ. of Lie Theory* **30**, \#1, p. 85--144, 2020. `arXiv:1609.04597 [math.CT]` L. Positselski. Differential graded Koszul duality: An introductory survey. *Bulletin of the London Math. Society* **55**, \#4, p. 1551--1640, 2023. `arXiv:2207.07063 [math.CT]` L. Positselski. Homological full-and-faithfulness of comodule inclusion and contramodule forgetful functors. Electronic preprint `arXiv:2301.09561 [math.RA]`. L. Positselski. Comodules and contramodules over coalgebras associated with locally finite categories. Electronic preprint `arXiv:2307.13358 [math.CT]`. L. Positselski. Tensor-Hom formalism for modules over nonunital rings. Electronic preprint `arXiv:2308.16090 [math.RA]`. L. Positselski, J. Šťovı́ček. The tilting-cotilting correspondence. *Internat. Math. Research Notices* **2021**, \#1, p. 189--274, 2021. `arXiv:1710.02230 [math.CT]` L. Positselski, J. Št'ovı́ček. Topologically semisimple and topologically perfect topological rings. *Publicacions Matemàtiques* **66**, \#2, p. 457--540, 2022. `arXiv:1909.12203 [math.CT]` D. Quillen. Module theory over nonunital rings. Chapter I, August 1996. Manuscript, available from `https://ncatlab.org/nlab/files/QuillenModulesOverRngs.pdf` E. Riehl, D. Verity. The theory and practice of Reedy categories. *Theory and Appl. of Categories* **29**, \#9, p. 256--301, 2014. D. Ridout, Y. Saint-Aubin. Standard modules, induction and the structure of the Temperley-Lieb algebra. *Advances in Theor. and Math. Physics* **18**, \#5, p. 957--1041, 2014. `arXiv:1204.4505 [math-ph]` B. Stenström. Rings of quotients. An Introduction to Methods of Ring Theory. Die Grundlehren der Mathematischen Wissenschaften, Band 217.
Springer-Verlag, New York, 1975. M. E. Sweedler. Hopf algebras. Mathematics Lecture Note Series, W. A. Benjamin, Inc., New York, 1969. H. N. V. Temperley, E. H. Lieb. Relations between the 'percolation' and 'colouring' problem and other graph-theoretical problems associated with regular planar lattices: some exact results for the 'percolation' problem. *Proc. of the Royal Soc. of London. Ser. A* **322**, p. 251--280, 1971. H. Tominaga. On $s$-unital rings. *Math. Journal of Okayama Univ.* **18**, \#2, p. 117--134, 1976. H. Wenzl. On the structure of Brauer's centralizer algebras. *Annals of Math.* **128**, \#1, p. 173--193, 1988.
arxiv_math
{ "id": "2310.05550", "title": "Semialgebras associated with nonunital algebras and $k$-linear\n subcategories", "authors": "Leonid Positselski", "categories": "math.CT math.RA math.RT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we focus on a method based on optimal control to address the optimization problem. The objective is to find the optimal solution that minimizes the objective function. We transform the optimization problem into an optimal control problem by designing an appropriate cost function. Using Pontryagin's Maximum Principle and the associated forward-backward difference equations (FBDEs), we derive the iterative update gain for the optimization. The steady system state can be regarded as the solution to the optimization problem. Finally, we discuss the compelling characteristics of our method and further demonstrate its high precision, low oscillation, and applicability for finding different local minima of non-convex functions through several simulation examples. author: - "Yeming Xu, Ziyuan Guo, Hongxia Wang, and Huanshui Zhang$^*$, Senior Member, IEEE [^1] [^2]" bibliography: - An_bib.bib title: " **Optimization Method Based On Optimal Control** " --- # INTRODUCTION Optimization problems, which involve the search for the minimum of a specified objective function, play a crucial role in various fields, including engineering, economics, and machine learning [@luenberger1984linear]. Optimization methods are the basis for solving various problems such as system identification and optimal control. Therefore, optimization problems have attracted extensive attention in various fields over the past few centuries, leading to significant advancements as follows: Gradient descent stands as the oldest, most predominant, and most effective first-order method for tackling optimization problems. Its simplicity captured widespread attention upon its inception. Typical gradient descent techniques include exact and inexact line search [@boyd2004convex; @nocedal1999numerical], among others. With the development of artificial intelligence technology, gradient descent has gained new vitality. This revival includes the emergence of various techniques tailored to different optimization needs. These encompass batch gradient descent (BGD) [@ruder2016overview], which operates on the entire training set, and mini-batch gradient descent (MBGD) [@huo2017asynchronous], which processes subsets of training data. Notably, stochastic gradient descent (SGD) [@bottou2010large], which uses a single training sample in each update, has also made a significant impact. Furthermore, the development of optimization methods has introduced enhancements to the traditional gradient descent approach such as Momentum gradient descent [@qian1999momentum], Nesterov Momentum gradient descent [@o2015adaptive], AdaGrad gradient descent [@duchi2011adaptive], Adam gradient descent [@kingma2014adam], to name a few. Nevertheless, gradient descent still faces issues such as slow convergence near extremal points, susceptibility to oscillations, and difficulty in finding optimal points. Newton's method emerges as the most basic and effective second-order method for solving optimization problems. Owing to its exceptional precision and fast convergence, it is highly favored, which has led to various improved versions of Newton's method, including modified Newton's method [@fletcher1977modified], damped Newton's method [@sano2019damped], and quasi-Newton methods [@broyden1967quasi; @gill1972quasi]. Notable algorithms in this category are DFP [@broyden1970convergence], BFGS [@broyden1970convergence2], and L-BFGS [@liu1989limited].
In recent years, Newton's method and its improved versions have also been widely applied in training neural networks for machine learning and solving large-scale logistic regression problems [@setiono1995use; @goldfarb2020practical; @lin2007trust]. However, Newton's method and its variants may encounter the following challenges: (a) the need for an initial value close enough to the extremal point, or it may diverge; (b) strict requirements for the objective function, necessitating second-order partial derivatives; (c) for multivariable optimization, the calculation of the inverse of the Hessian matrix is computationally burdensome. There are several other algorithms, such as Conjugate Gradient [@luenberger1984linear] and Evolutionary Algorithms [@back1996evolutionary], used for addressing optimization problems; we will not list them exhaustively. Unfortunately, these algorithms, while valuable in many situations, may still present some challenges such as slow convergence, oscillations during convergence, susceptibility to divergence, applicability only to functions with specific structures or under certain algorithm parameter settings, and inefficiency in handling non-convex optimization. Unlike gradient descent and Newton's method, we propose a novel optimization idea for the optimization problem by addressing a new optimal control problem. It aims to design an optimal controller to regulate a first-order difference equation such that the cost function, closely related to the objective function, is minimized. The optimal trajectory of the system then rapidly approaches the local minimum point of the original optimization objective. This method offers relatively flexible initial value selection, fast convergence, and effective avoidance of the oscillations observed in gradient methods, and it does not require the computation of second-order partial derivatives of the original optimization objective function. Particularly, for some nonconvex functions, different local minimum points can be obtained by adjusting the input weight matrix of the optimal control problem, provided that the initial value is chosen properly. It is worth noting that, in contrast to conventional algorithms, the selection of the input weight matrix does not lead to divergence or oscillations. We use standard notation: $\mathbb{R}^n$ is the set of $n$-dimensional real vectors; $\mathbb{S}^n_{++}$ is the set of positive definite symmetric matrices; $I_n$ is the $n$-dimensional identity matrix; $A \succ B$ ($A\prec B$) means that the matrix $A-B$ is positive (negative) definite, in which case we say $A$ is larger (smaller) than $B$; $\nabla f({x})$ and $\nabla^2 f({x})$ denote the gradient and the Hessian matrix of $f({x})$. The remainder of this paper is organized as follows. In Section II, we formulate the optimization problem as an optimal control problem. In Section III, we approach it by Pontryagin's Maximum Principle and summarize the characteristics of our proposed method. In Section IV, we conduct simulations to validate our results in both convex and non-convex settings. Concluding remarks in Section V complete the paper. # PROBLEM FORMULATION Consider the optimization problem $$\mathop {\rm minimize}\limits_{x \in {\mathbb{R}^n}} f(x), \label{1}$$ where $f:{\mathbb{R}^n} \to \mathbb{R}$ is a nonlinear function. The objective function $f(x)$ is assumed to be twice continuously differentiable on ${\mathbb{R}^n}$.
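For reference in the comparisons made throughout the paper, the following minimal sketch implements the classical gradient descent iteration recalled in the next paragraph. The test function, starting point, and step size below are illustrative assumptions only; they are not the settings used in the experiments of Section IV.

```python
def gradient_descent(grad_f, x0, eta=0.1, tol=1e-10, max_iter=100000):
    """Classical gradient descent: x_{k+1} = x_k - eta * grad_f(x_k)."""
    x = x0
    for _ in range(max_iter):
        g = grad_f(x)
        if abs(g) < tol:
            break
        x = x - eta * g
    return x

# toy example (not one of the paper's test functions): f(x) = (x - 2)**2, minimizer x* = 2
print(gradient_descent(lambda x: 2.0 * (x - 2.0), x0=10.0))
```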
Numerous algorithms have been developed to address the minimization problem, many of which are grounded in the principle of gradient descent [@ruder2016overview], i.e., $${x_{k + 1}} = {x_k} - \eta\nabla f({x_k}),\label{2}$$where $\eta$ is the step size. On one hand, it has been demonstrated in convex optimization that the global optimal solution can be obtained using ([\[2\]](#2){reference-type="ref" reference="2"}) [@boyd2004convex]. However, gradient descent is highly sensitive to the choice of step size. A smaller step size ensures convergence during the iterative process, but at the expense of convergence speed. Conversely, a larger step size may easily lead to oscillations and divergence during the iterative process. One of the most fundamental methods for determining the step size is the line search criterion [@nocedal1999numerical]. On the other hand, in the case of non-convex optimization, the search for the global optimal solution remains challenging. Different initial points and algorithms may lead to different local optimal solutions or oscillate and diverge in the iterative process. Moreover, there are often fewer guarantees to prove the existence and properties of an optimal solution, making algorithm design and analysis more complex. Due to the inherent challenges of effectively solving non-convex optimization problems, the primary methods currently employed to address such problems include: (a) Identify problems with implicit convexity, or solve them by convex reconstruction. (b) The target changes from finding a global solution to finding a stationary point or a local extremum. (c) Consider a class of non-convex problems that can provide global performance guarantees, such as those satisfying the Polyak-Łojasiewicz condition [@danilova2022recent; @polyak1963gradient]. Different from traditional optimization methods such as ([\[2\]](#2){reference-type="ref" reference="2"}), this paper will present a novel idea by transforming the optimization problem into an optimal control problem. The detailed formulation is as follows: We consider the discrete-time linear time-invariant system $${x_{k + 1}} = {x_k} + {u_k}, \label{3}$$ where $x_k$ is the $n$-dimensional state and $u_k$ is the $n$-dimensional control, which can be perceived as an iterative update gain, to be further specified later. We transform the task of finding solutions to problem ([\[1\]](#1){reference-type="ref" reference="1"}) into the updating of the state sequence ${x_k}$ within the optimal control problem, i.e., $$\begin{array}{l} {\rm minimize}\ \ \sum\limits_{k = 1}^N \big(f({x_k}) + \frac{1}{2}u_k^{\mathrm T} Ru_k\big) + f({x_{N + 1}}),\\ {\rm subject\ to}\ \ {\rm (3)}, \end{array}\label{4}$$ where the initial condition ${x_0}$ is given and $N$ is the time horizon. The terminal cost is $f({x_{N+1}})$ and the control weighting matrix satisfies $R\in\mathbb{S}^n_{++}$. The goal of the optimal control problem is to find an admissible control sequence {${u_k}$} which minimizes the long-term cost. As mentioned earlier, we consider the solution {${u_k}$} of problem ([\[4\]](#4){reference-type="ref" reference="4"}) as producing the variation in the sequence {${x_k}$} from ${x_0}$ to ${x^*}$; our objective is to attain the steady state ${x^*}$. **Remark 1**. *It is readily apparent from ([\[4\]](#4){reference-type="ref" reference="4"}) that we minimize the accumulation of ${f(x_k)}$ and ${u_k^ \mathrm {T} R{u_k}}$.
This signifies that we will strike a balance between minimizing control energy consumption and reaching the minimum value of $f(x_k)$. Considering the update formula ([\[3\]](#3){reference-type="ref" reference="3"}) for ${x_k}$, the control sequence {${u_k}$} must guide ${x_k}$ toward the local minimum point of ${f(x_k)}$ with small control energy consumption. This effectively establishes a connection with the optimization problem. A more detailed discussion will be conducted in Section [3.3](#123){reference-type="ref" reference="123"}.* # OPTIMIZATION METHOD USING OPTIMAL CONTROL In this section, we will solve the optimal control problem ([\[4\]](#4){reference-type="ref" reference="4"}) by applying Pontryagin's Maximum Principle [@pontryagin2018mathematical]. The resulting optimal steady state of system [\[3\]](#3){reference-type="eqref" reference="3"} can recover one of the local minimum points of the optimization problem ([\[1\]](#1){reference-type="ref" reference="1"}). All minimum points can always be obtained by adjusting the input weight matrix $R$ of the optimal control problem [\[4\]](#4){reference-type="eqref" reference="4"}. ## Analytical Solution Because the optimal control problem ([\[4\]](#4){reference-type="ref" reference="4"}) essentially focuses on finding $u_k$ to minimize $f(x_k)$ while using as little energy $u_k^{\mathrm T} Ru_k$ as possible, the optimal state of problem ([\[4\]](#4){reference-type="ref" reference="4"}) can be used to describe a local minimum point of problem ([\[1\]](#1){reference-type="ref" reference="1"}). This establishes a connection between the optimization problem and the optimal control problem. We will then apply optimal control theory to solve problem ([\[4\]](#4){reference-type="ref" reference="4"}), leading to the following theorem. **Theorem 1**. *The local minimum point of problem ([\[1\]](#1){reference-type="ref" reference="1"}) can be characterized by the following update relation: $${x_{k + 1}^*} = {x_k^*} + {u_k^*}, x_0^* = x_0,\label{6}$$where $$u_k^* = - {{R^{ - 1}}}\sum\limits_{i = k + 1}^{N + 1} \nabla f(x_i^*).\label{5}$$* *Proof.* Based on the aforementioned relationship, to solve problem ([\[4\]](#4){reference-type="ref" reference="4"}), define the *Hamiltonian*: $$H({x_k},{u_k},{\lambda _{k + 1}}) = f({x_k}) + \frac{1}{2}u_k^ \mathrm {T} R{u_k} + \lambda _{k + 1}^ \mathrm {T} ({x_k} + {u_k}),\label{7}$$where ${\lambda _k}$ is the *n*-dimensional costate. Indeed, the costate $\lambda_{k}$ plays the role of a Lagrange multiplier [@li2017maximum]. By applying Pontryagin's Maximum Principle, we can derive the following FBDEs: $${x_{k + 1}^*} = {x_k^*} + {u_k^*},\label{8}$$ $$\lambda _k^* = \nabla f(x_k^*) + \lambda _{k + 1}^* ,\label{9}$$ $$x_0^* = x_0, \lambda_{N+1}^* = \nabla f(x_{N+1}^*), \label{10}$$along with the equilibrium condition $$Ru_k^* + \lambda _{k + 1}^* = 0.\label{12}$$ Letting $k\leftarrow k+1$ and utilizing the iterative equation ([\[9\]](#9){reference-type="ref" reference="9"}) and terminal condition ([\[10\]](#10){reference-type="ref" reference="10"}), we have $$\lambda _{k+1}^* = \sum\limits_{i = k + 1}^{N + 1} \nabla f(x_i^*).\label{13}$$ By substituting ([\[13\]](#13){reference-type="ref" reference="13"}) into ([\[12\]](#12){reference-type="ref" reference="12"}), the optimal controller is given by $$u_k^* = - {{R^{ - 1}}}\sum\limits_{i = k + 1}^{N + 1} \nabla f(x_i^*).\label{14}$$The proof is now completed. ◻ **Remark 2**. *Because of noncausality, ([\[5\]](#5){reference-type="ref" reference="5"}) cannot be used to obtain the optimal state directly.* **Remark 3**.
*Each local minimum point can be associated with the optimal control problem (4) for a different input weight matrix $R$. In contrast, the gradient descent method finds various minimum points by blindly adjusting the step size.* ## Numerical Solution {#555} It is hard to calculate ([\[6\]](#6){reference-type="ref" reference="6"})-([\[5\]](#5){reference-type="ref" reference="5"}) analytically. However, the numerical calculation can be achieved by solving the FBDEs. Enlightened by [@li2017maximum], we thus provide a numerical solution algorithm, which is summarized as follows (a short implementation sketch in Python is given later in this section). Initialization: a control sequence {$u_k^0$}, $k=0,1,...,N$, the initial state $x_0$, a step size $\alpha$, a tolerance $\varepsilon$, and $t \leftarrow 0$. Then repeat the following steps: (i) forward update of {$x_{k}^{t}$} based on Equation ([\[17\]](#17){reference-type="ref" reference="17"}); (ii) backward update of {$\lambda _{k}^{t}$} based on Equation ([\[18\]](#18){reference-type="ref" reference="18"}); (iii) calculation of $\frac{{\partial H(x_k^t,u_k^t,\lambda _{k + 1}^t)}}{{\partial u_k^t}}$ from {$x_{k}^{t}$} and {$\lambda _{k}^{t}$}; (iv) the update $$u_k^{t + 1} = u_k^t - \alpha \frac{{\partial H(x_k^t,u_k^t,\lambda _{k + 1}^t)}}{{\partial u_k^t}} \label{19}$$ followed by $t \leftarrow t + 1$. The iteration stops once $||\frac{{\partial H(x_{k}^{t},u_{k}^{t},\lambda _{k+1}^{t})}}{{\partial u_{k}^{t}}}|| < \varepsilon$, and the output is {$x_{k}^t$}, {$u_k^t$}. During the initialization phase, an initial control sequence {$u_k^0$}, a step size $\alpha$, and an error tolerance $\varepsilon$ are given, and the initial state $x_0$ is known. Using the forward equation $${x_{k + 1}^t} = {x_k^t} + {u_k^t}, \quad x_0^t=x_0,\label{17}$$ the sequence $\{x_k^t\}$ can be acquired. Subsequently, {$\lambda _{k}^{t}$} can be computed based on {$x_{k}^{t}$} and the backward equation $$\lambda _k^t = \nabla f(x_k^t) + \lambda _{k + 1}^t, \quad \lambda_{N+1}^t = \nabla f(x_{N+1}^t).\label{18}$$ According to {$x_{k}^{t}$}, {$\lambda _{k+1}^{t}$} and $\frac{{\partial H(x_k^t,u_k^t,\lambda _{k + 1}^t)}}{{\partial u_k^t}}$, the new sequence {$u_k^{t+1}$} can be obtained from ([\[19\]](#19){reference-type="ref" reference="19"}). This iterative process continues until the algorithm converges. Upon the completion of this algorithm, we can use the control sequence along with ([\[3\]](#3){reference-type="ref" reference="3"}) to compute {$x_{k}$}, and the steady state $x^*$ can then be regarded as the solution to the optimization problem ([\[1\]](#1){reference-type="ref" reference="1"}). ## Discussion of The Proposed Optimization Method {#123} In this subsection, we discuss the compelling characteristics of solving optimization problems using optimal control theory as follows: - The selection of our input weight matrix $R$ will not result in divergence of {$x_k$}. When $R$ is smaller, $x_k$ can converge to the global or local minimum points with few update iterations. From ([\[19\]](#19){reference-type="ref" reference="19"}), our method is designed in a way that prevents the convergence of $x_k$ towards the local maximum points or saddle points of the function $f(x)$. - Our method alleviates oscillations in the iteration process of {$x_k$}. Such oscillations, which occur near the local minimum point, would contradict the fundamental objective of minimizing the cost function. - For some non-convex functions, given a judicious choice of initial value $x_0$, we can make {$x_k$} converge towards different local minimum points by adjusting the matrix $R$. A larger value of $R$ can cause {$x_k$} to converge to a local minimum point closer to $x_0$, while a smaller value of $R$ can enable {$x_k$} to converge to a local minimum point farther away from $x_0$.
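As referenced in Section [3.2](#555){reference-type="ref" reference="555"} above, the numerical procedure can be sketched in a few lines. The following Python code illustrates it in the scalar case: a forward pass through the state equation, a backward pass through the costate equation, and a gradient step on the control sequence. The horizon $N$, step size $\alpha$, tolerance, and starting point are illustrative assumptions chosen so that this naive loop converges; they are not the settings of the experiments reported in Section IV.

```python
import numpy as np

def optimal_control_minimize(grad_f, x0, N=10, R=1.0, alpha=1e-3, eps=1e-6, max_iter=100_000):
    """Sketch of the FBDE iteration of Section 3.2 in the scalar case."""
    u = np.zeros(N + 1)                        # control sequence u_0, ..., u_N
    for _ in range(max_iter):
        # forward pass: x_{k+1} = x_k + u_k, with x_0 given
        x = np.zeros(N + 2)
        x[0] = x0
        for k in range(N + 1):
            x[k + 1] = x[k] + u[k]
        # backward pass: lambda_k = grad f(x_k) + lambda_{k+1}, lambda_{N+1} = grad f(x_{N+1})
        lam = np.zeros(N + 2)
        lam[N + 1] = grad_f(x[N + 1])
        for k in range(N, 0, -1):
            lam[k] = grad_f(x[k]) + lam[k + 1]
        # gradient of the Hamiltonian with respect to u_k: R u_k + lambda_{k+1}
        dH = R * u + lam[1:]
        if np.linalg.norm(dH) < eps:
            break
        u -= alpha * dH                        # gradient step on the control sequence
    return x[-1]                               # steady state, taken as the minimizer

# illustrative run on f_1(x) = x^4 + sin(x); the starting point and horizon here are assumptions
grad_f1 = lambda x: 4 * x**3 + np.cos(x)
print(optimal_control_minimize(grad_f1, x0=1.0))   # should approach the minimizer x* of f_1, roughly -0.59
```

Here the gradient of the objective is supplied analytically, and the terminal state of the converged trajectory plays the role of the steady state $x^*$.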
These characteristics are actually guaranteed by the cost function of the optimal control problem ([\[4\]](#4){reference-type="ref" reference="4"}). It will be further demonstrated in the experimental results of Section [4](#222){reference-type="ref" reference="222"}. # NUMERICAL EXPERIMENTS {#222} In this section, we present preliminary computational results for the numerical performance analysis of our proposed method and demonstrate (a) the better convergence of our proposed method compared with gradient descent and Newton's method for both convex and non-convex functions, (b) the high accuracy of our method, (c) its ability to escape saddle points or local maxima, and (d) its applicability to nonconvex functions and the multivariable case. ## Fast Convergence When $R$ is smaller, $x_k$ can converge to the global or local minimum points with fewer iterations by using our method. We choose the non-convex function $$f_1(x) = {x^4} + \sin {x}$$ with unique global minimum point $x^*=-0.592$ and take the initial value $x_0 = 10$. We set $R = 1$ and $R=200$ for the optimal control method. It can be seen from Fig. [1](#fig_3){reference-type="ref" reference="fig_3"} that when $R=1$ the algorithm converges to the minimum point in nearly 10 iterations, while for $R=200$, it takes approximately 70 iterations to reach the same minimum point. Given the arbitrariness in the choice of $R$ in our method, it is advisable in general to opt for smaller values of $R$ to minimize the number of iterations. ![Iteration trajectory of {$x_k$} for $f_1(x)$ with $R=1$, $200$](f4.jpg){#fig_3 width="40%" height="30%"} The basic formula for Newton's method is as follows: $${x_{k + 1}} = {x_k} - {({\nabla ^2}f(x_k))^{ - 1}}\nabla f(x_k).$$ Let us consider the case of a convex function. We choose the function $${f_2}(x) = {e^x} + \sin x + {x^2}$$ with initial value $x_0 = 3$ and global minimum point $x^*=-0.6558$. The initial step size for gradient descent is set to $\eta = 0.1$, and we set $R = 0.01$ for our method. As shown in Fig. [2](#fig_1){reference-type="ref" reference="fig_1"}, our method and Newton's method converge in nearly 5 iterations. It is important to note that if the gradient descent step size is chosen too large, it can lead to oscillations during the iterative process. ![Iteration trajectory of {$x_k$} for $f_2(x)$ with $R=0.01$ and $\eta=0.1$](1.jpg){#fig_1 width="40%" height="20%"} Next, take the non-convex function $$f_3(x) = \ln ({x^2} + 1) + \ln ({(x - 1)^2} + 0.01)$$ with unique global minimum point at $x^*= 0.995$ and initial value $x_0 = 2$. Fig. [3](#fig_2){reference-type="ref" reference="fig_2"} depicts the iterative trajectories of gradient descent and our proposed method. The initial step size for gradient descent is set to $\eta = 0.01$ and $R = 0.01$ for our method. It is evident from the figures that gradient descent experiences oscillations, whereas the optimal control algorithm achieves convergence in just about 5 iterations. Our method maintains a more favorable convergence behavior. Due to the non-convexity of $f_3(x)$, Newton's method diverges. We will not present the graphical results of Newton's method. ![Iteration trajectory of {$x_k$} for $f_3(x)$ with $R=0.01$ and $\eta=0.01$](2.jpg){#fig_2 width="40%" height="20%"} ## High Accuracy This subsection will discuss the higher convergence accuracy of our proposed method compared to gradient descent and Newton's method. Fig.
[\[fig_4\]](#fig_4){reference-type="ref" reference="fig_4"} (a) shows the relative error of gradient descent, Newton's method, and our method for $f_1(x)$ with $\eta=0.001$ and $R=0.01$. Fig. [\[fig_4\]](#fig_4){reference-type="ref" reference="fig_4"} (b) illustrates the relative error of gradient descent, Newton's method, and our method for $f_2(x)$ with $R=0.01$ and $\eta=0.1$. Fig. [\[fig_4\]](#fig_4){reference-type="ref" reference="fig_4"} (c) shows the relative error of gradient descent and our method for $f_3(x)$ with $R=0.01$ and $\eta=0.01$; Newton's method diverges in this case. It can be observed that our algorithm demonstrates higher precision. ## Escaping The Saddle Point We will show whether {$x_k$} from our proposed method can converge to the optimal solution when a saddle point is chosen as the initial value. Consider the function $$f_4(x) = 7{x^3} + {x^4} + {{\rm{e}}^{{x^2}}} + {e^{ - {x^2}}}$$ and set the initial value $x_0 = 0$, which is the saddle point of $f_4(x)$. This function has a unique global minimum point $x^*=-1.566$. The initial step size for gradient descent is set to $\eta = 0.026$ and we set $R = 0.026^{-1}$ correspondingly. It can be observed that the optimal control method does not remain at the saddle point but converges towards the minimum point, whereas gradient descent and Newton's method remain at the saddle point, as shown in Fig. [4](#fig_6){reference-type="ref" reference="fig_6"}. ![Iteration trajectory of {$x_k$} for $f_4(x)$ with $R=0.026^{-1}$ and $\eta=0.026$](f3newton.jpg){#fig_6 width="40%" height="24%"} ## Applicable to Nonconvex Functions {#111} A larger value of $R$ can cause $x_k$ to converge to a local minimum point closer to $x_0$, while a smaller value of $R$ can lead $x_k$ to converge to a local minimum point farther away from $x_0$. It becomes apparent that $R$ serves as the weight in problem ([\[4\]](#4){reference-type="ref" reference="4"}), as discussed in Section [3.3](#123){reference-type="ref" reference="123"}. We choose the function $$f_5(x) = x - 4{x^2} + 0.2{x^3} + 2{x^4}$$ to illustrate this phenomenon. This function has a local minimum point $x^*_1=0.89$ and a global minimum point $x^*_2=-1.094$. We set $x_0 = -10$ and consider $R = 100$ and $R = 0.1$. It can be observed that our method leads $x_k$ to converge to different local minimum points in Fig. [5](#fig_61){reference-type="ref" reference="fig_61"}. ![Iteration trajectory and relative error of {$x_k$} for $f_5(x)$ with $R=100,0.1$](fig7-eps-converted-to.pdf){#fig_61 width="50%" height="84%"} Consider the non-convex function $$f_6(x) = (x - 1)(x + 1)(x + 0.5)(x + 1.5)(x - 0.5)(x - 1.5)$$ that possesses three local minimum points $x^*_1=1.323$, $x^*_2=0$, $x^*_3=-1.323$. Take an initial value of $x_0=-3$. In the case of $R = 1,200,500$, {$x_k$} obtained from the proposed method converges to $x^*_1$, $x^*_2$, and $x^*_3$, respectively. The corresponding iteration trajectories and relative errors are shown in Fig. [6](#fig_7){reference-type="ref" reference="fig_7"}. ![Iteration trajectory and relative error of {$x_k$} for $f_6(x)$ with $R=1,200,500$](fig8-eps-converted-to.pdf){#fig_7 width="50%" height="94.3%"} **Remark 4**. *It can be observed that $x_k$ moves to a local minimum point far away from $x_0$ when we use a smaller $R$; it may stay at other extreme points for a while and then leave, as shown in Fig. [6](#fig_7){reference-type="ref" reference="fig_7"}. This is an interesting phenomenon worthy of subsequent research.
In particular, it should also be pointed out that at present, we only know how to adjust $R$ from small to large, but the specific threshold that makes $x_k$ converge to different local minimum points remains to be studied and proved.* ## Applicable to Multivariable Functions Finally, we will illustrate that our method is still valid for multivariable functions by using the non-convex function $${f_7}(x,y) = {x^4} + {y^4} + \sin x .$$ We initialize with $[x_0;y_0]=[2;-2]$ and set $R=0.12^{-1}{I_2}$, $\eta=0.12$. This function has the unique global minimum point $[-0.592;0]$. The results depicted in Fig. [7](#fig_13){reference-type="ref" reference="fig_13"} demonstrate that our method exhibits almost no oscillations compared to gradient descent. ![Iteration trajectory of {$x_k$} for $f_7(x)$ with $R={0.12^{ -1}}{I_2}$ and $\eta=0.12$](f7R=0.12.jpg){#fig_13 width="50%" height="30%"} In order to make {$x_k$,$y_k$} converge towards different local minimum points by adjusting the matrix $R$, we adopt an alternating optimization approach. Let us consider the function $${f_8}(x,y) = \ln ({x^2} + {y^2} + 1) + \ln ({(x - 10)^2} + {(y - 10)^2} + 1) + \ln ({(x - 2)^2} + {(y - 30)^2} + 1)$$ with $[x_0;y_0]=[-20;40]$. This function has three local minimum points $[x^*_1;y^*_1]=[2;29.9]$, $[x^*_2;y^*_2]=[0.05;0.08]$ and $[x^*_3;y^*_3]=[9.93;9.99]$. Different from single-variable optimization, the procedure is carried out by decomposing the iteration along the two directions of the $x$ axis and the $y$ axis. The iteration is decomposed into a series of steps, where each variable is optimized separately. Initially, the algorithm optimizes in the $x$ axis or the $y$ axis direction, solving the FBDEs separately in each direction; finally, the FBDEs are solved in both directions to achieve convergence towards different local minimum points. Refer to Fig. [\[fig_14\]](#fig_14){reference-type="ref" reference="fig_14"} for detailed visualizations. In Fig. [\[fig_14\]](#fig_14){reference-type="ref" reference="fig_14"} (a), we first fix the variable $x$ and move along the $y$ axis direction by setting $R = 1$; the iterate reaches the point $[-20;11]$. Then we let it move along both the $x$ and $y$ axis directions by setting $R = I_2$ to reach the local minimum point $[9.93;9.99]$. Following an approach similar to Fig. [\[fig_14\]](#fig_14){reference-type="ref" reference="fig_14"} (a), we initially move along the $y$ direction with $R = 1$, ultimately reaching the point $[-20;11]$. Then we let it move along both the $x$ and $y$ axis directions by setting $R = \operatorname{diag}(100,\, 0.00001)$ to reach the local minimum point $[0.05;0.08]$, as shown in Fig. [\[fig_14\]](#fig_14){reference-type="ref" reference="fig_14"} (b). In Fig. [\[fig_14\]](#fig_14){reference-type="ref" reference="fig_14"} (c), we first move along the $y$ direction with $R = 100$; the iterate reaches the point $[-20;35]$. Then we let it move along both the $x$ and $y$ axis directions by setting $R = I_2$ to reach the local minimum point $[2;29.9]$. # CONCLUSIONS In this paper, we have proposed a method based on optimal control as a novel approach to tackling the optimization problem by designing an appropriate cost function. Our method has demonstrated promising convergence performance and versatility, enabling us to apply this principle to solve various optimization problems. In the future, we plan to extend our method to systems with additive noise or time-varying input weight matrices $R$.
We will also expand our analysis to address challenges in distributed optimization and explore policy optimization (PO) methods for Linear Quadratic Regulators (LQR) and other related problems. The execution of the optimal control algorithm involves solving the FBDEs, which can be time-consuming. To address this, it is essential to choose appropriate methods to simplify the solving process. We will continue our research into algorithms for solving these equations, aiming to enhance the computational speed of our method. [^1]: This work was supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61821004), Major Basic Research of Natural Science Foundation of Shandong Province (ZR2021ZD14), High-level Talent Team Project of Qingdao West Coast New Area (RCTD-JC-2019-05), Key Research and Development Program of Shandong Province (2020CXGC01208), and Original Exploratory Program Project of National Natural Science Foundation of China (62250056) (Corresponding author: Huanshui Zhang). [^2]: Yeming Xu, Ziyuan Guo, Hongxia Wang, and Huanshui Zhang are with the College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, 266590, China (e-mail: ymxu2022\@163.com; skdgzy\@sdust.edu.cn; whx1123\@163.com; hszhang\@sdu.edu.cn).
--- abstract: |  We consider a generalization of the Springer resolution studied in earlier work of the authors, called the extended Springer resolution. In type $A$, this map plays a role in Lusztig's generalized Springer correspondence comparable to that of the Springer resolution in the Springer correspondence. The fibers of the Springer resolution play a key part in the latter story, and connect the combinatorics of tableaux to geometry. Our main results prove the same is true for fibers of the extended Springer resolution---their geometry is governed by the combinatorics of tableaux. In particular, we prove that these fibers are paved by affines, up to the action of a finite group, and give combinatorial formulas for their Betti numbers. This yields, among other things, a simple formula for dimensions of stalks of the Lusztig sheaves arising in the study of the generalized Springer correspondence, and shows that there is a close resemblance between each Lusztig sheaf and the Springer sheaf for a smaller group. address: - | Department of Mathematics\ University of Georgia\ Boyd Research and Education Center\ Athens, GA\ 30602\ USA - | Department of Mathematics\ Washington University in St. Louis\ One Brookings Drive\ St. Louis, Missouri\ 63130\ USA - | Department of Mathematics, Statistics, and Actuarial Science, Butler University, 4600 Sunset Avenue, Indianapolis, Indiana 46208\ USA author: - William Graham - Martha Precup - Amber Russell title: Geometric and Combinatorial Properties of Extended Springer Fibers ---

# Introduction {#sec.intro}

The Springer resolution $\mu: \widetilde{\mathcal{N}} \to \mathcal{N}$ is a desingularization of the nilpotent cone $\mathcal{N}$ in the Lie algebra ${\mathfrak g}$ of a semisimple algebraic group $G$. This resolution plays a prominent role in geometric representation theory, arising from Springer's striking discovery that there is a deep connection, called the Springer correspondence, between Weyl group representations and $G$-orbits on the nilpotent cone (see [@Springer1976]). In particular, there is a graded Weyl group representation on the cohomology of the fibers of this resolution, which are called Springer fibers. In the type $A$ case---that is, when ${\mathfrak g}=\mathfrak{sl}_n(\mathbb{C})$ is the Lie algebra of the special linear group $SL_n(\mathbb{C})$, with corresponding Weyl group $S_n$---the Springer fibers are parametrized by partitions of $n$. The number of irreducible components of the Springer fiber corresponding to a partition $\lambda$ is equal to the number of standard tableaux of shape $\lambda$, and its Poincaré polynomial is the generating function for a particular inversion statistic on the set of all row-strict tableaux of shape $\lambda$; more details appear in Section [4](#sec.paving){reference-type="ref" reference="sec.paving"} below. In this way, Springer fibers embody the interaction between the combinatorics used to study $S_n$-representations and the geometry of the Springer resolution.
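As a small illustration of these classical facts (a standard example, not taken from the results established below), consider the partition $\lambda = (2,1)$ of $n = 3$: the Springer fiber of a subregular nilpotent element of $\mathfrak{sl}_3(\mathbb{C})$ is the union of two projective lines meeting in a single point. Its two irreducible components correspond to the two standard tableaux of shape $(2,1)$, an affine paving by one point and two affine lines gives the Poincaré polynomial $1 + 2q$ (with $q$ recording the complex dimension of the cells), and the total Betti number $3$ equals the number of row-strict tableaux of shape $(2,1)$.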
This paper studies extended Springer fibers, which are fibers of the extended Springer resolution $$\psi: \widetilde{\mathcal{M}} \to \widetilde{\mathcal{N}} \to \mathcal{N}$$ introduced by the first author in [@Graham2019]. In [@GPR] the authors show that in type $A$, this map plays a role in Lusztig's generalized Springer correspondence comparable to that of the Springer resolution in the Springer correspondence. In other words, the paper [@GPR] opens the door to studying interactions between the combinatorics of tableaux and the geometry of the generalized Springer correspondence, via the extended Springer fibers. Our work below initiates this area of study.

To state our main results we briefly introduce some notation; references and more information can be found in Section [2](#sec.notation){reference-type="ref" reference="sec.notation"}. The nilpotent cone in $\mathfrak{sl}_n(\mathbb{C})$ has a stratification by nilpotent orbits, each equal to a conjugacy class in $\mathfrak{sl}_n(\mathbb{C})$ of nilpotent matrices of Jordan type $\lambda$, where $\lambda$ is a partition of $n$. Given $\mathsf{x}$ in the orbit $\mathcal{O}_\lambda$, let $\widetilde{\mathcal{N}}_{\mathsf{x}}:= \mu^{-1}(\mathsf{x})$ denote the corresponding Springer fiber. Spaltenstein proved that Springer fibers are paved by affines [@Spaltenstein1977], and the cells in the affine paving are in bijection with row-strict tableaux of shape $\lambda$. Using a particular paving constructed by the second author and Ji in [@Ji-Precup2022], we lift each cell in the affine paving of the Springer fiber to the extended Springer fiber $\widetilde{\mathcal{M}}_{\mathsf{x}}:=\psi^{-1}(\mathsf{x})$, and analyze the result.

The lifts of the cells are obtained in the following way. The variety $\widetilde{\mathcal{M}}$ is constructed from $\widetilde{\mathcal{N}}$ using the affine toric variety $\mathcal{V}$ corresponding to the character group of a fixed maximal torus $T \subseteq SL_n(\mathbb{C})$ and the cone of $\mathbb{ R}_{\geq 0}$-linear combinations of simple roots; see Section [2.2](#sec.extended){reference-type="ref" reference="sec.extended"} below. The construction implies that $\widetilde{\mathcal{M}}$ carries an action of the center $Z$ of $G$. Our investigation of pavings of extended Springer fibers begins with a study of particular subvarieties of $\mathcal{V}$ and their images under the $Z$-action. There is a natural quotient map $\widetilde{\mathcal{M}}_{\mathsf{x}} \to \widetilde{\mathcal{N}}_{\mathsf{x}} = \widetilde{\mathcal{M}}_{\mathsf{x}}/Z$. Using our results about the action of $Z$ on $\mathcal{V}$, we prove that the inverse image of each affine cell in the paving of the Springer fiber is a disjoint union of varieties, each of which is isomorphic to affine space modulo a finite group. These varieties form the cells in what we refer to as an orbifold paving. To keep track of each constituent of the union, we introduce a new numeric measure for row-strict tableaux, called divisors; see Definition [Definition 22](#def.divisor){reference-type="ref" reference="def.divisor"}. The key idea is to break each tableau into equal-sized blocks labeled by consecutive numbers---it is this combinatorial structure that dictates the $Z$-action on the inverse image in $\widetilde{\mathcal{M}}_\mathsf{x}$ of the corresponding affine cell. We can now state our main result.

**Theorem 1** (See Theorem [Theorem 29](#thm.extended-paving){reference-type="ref" reference="thm.extended-paving"} below). *Suppose $\mathsf{x}\in \mathcal{O}_\lambda$. The extended Springer fiber $\widetilde{\mathcal{M}}_{\mathsf{x}}$ has an orbifold paving with cells indexed by pairs $(\sigma, i)$ such that $\sigma$ is a row-strict tableau of shape $\lambda$ and $0\leq i \leq d_\sigma-1$ with $d_\sigma$ the maximal divisor of $\sigma$.
Furthermore, the action of $Z$ on $\widetilde{\mathcal{M}}_\mathsf{x}$ cyclically permutes these cells.* This paving allows us to compute topological invariants of the extended Springer fibers using tableaux, as in the case of ordinary Springer fibers. We obtain a number of consequences: formulas for the Poincaré polynomial of each $Z$-isotypic component of the extended Springer fiber (Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"}), for the Poincaré polynomial of each extended Springer fiber in terms of Poincaré polynomials of smaller rank Springer fibers (Corollary [Corollary 41](#cor.poincare){reference-type="ref" reference="cor.poincare"}), and for the dimensions of the stalks of the Lusztig sheaves, which play an important role in Lusztig's generalized Springer correspondence (Theorem [Theorem 43](#thm.lusztigsheaf){reference-type="ref" reference="thm.lusztigsheaf"}). As a consequence, we show that there is a remarkable resemblance between each Lusztig sheaf and the Springer sheaf for a smaller group (see Corollary [Corollary 44](#cor:smallergroup){reference-type="ref" reference="cor:smallergroup"}). Although this resemblance can be seen in examples computed using the Lusztig-Shoji algorithm ([@Lu86], [@Sho87]), we are not aware of any published reference. Several of our combinatorial formulas below rely on the fact that divisors of row-strict tableaux interact well with an inversion statistic defined by Tymoczko for computing the Betti numbers of Springer fibers [@Tymoczko2006; @Precup-Tymoczko]. Indeed, Corollary [Corollary 36](#cor.shift){reference-type="ref" reference="cor.shift"} below gives an inductive formula for the number of inversions of a row-strict tableau of size $n$ with divisor $d$ in terms of the number of inversions of a row-strict tableau of size $n/d$. Although this result is purely combinatorial, it is suggestive of a geometric phenomenon, and has the potential to lend new insight into the geometry of Springer fibers, which we plan to explore in future work. Also, the existence of a paving of the extended Springer fibers suggests potential connections with parity sheaves [@JMW14], which have been instrumental in recent work on the modular Springer correspondence [@AHJR1]. We now summarize the contents of the paper. Section [2](#sec.notation){reference-type="ref" reference="sec.notation"} covers background information and definitions, including the definitions of the extended Springer resolution and of orbifold pavings. In Section [3](#sec.affine-subvarieties){reference-type="ref" reference="sec.affine-subvarieties"}, we study subvarieties of the affine toric variety $\mathcal{V}$, each indexed by a partition of the set $[n-1]$, and their image under the $Z$-action. In Section [4](#sec.paving){reference-type="ref" reference="sec.paving"}, we apply the results of Section [3](#sec.affine-subvarieties){reference-type="ref" reference="sec.affine-subvarieties"} in the context of extended Springer fibers and prove our main paving theorem, Theorem [Theorem 29](#thm.extended-paving){reference-type="ref" reference="thm.extended-paving"}. Finally, Section [5](#sec.combinatorics){reference-type="ref" reference="sec.combinatorics"} explores the combinatorial consequences of the paving.  **Acknowledgments.** The second author is partially supported by NSF grant DMS 1954001. 
# Background and notation {#sec.notation}

Throughout this paper, $G$ denotes the simply connected, semisimple algebraic group $SL_n(\mathbb{C})$ with Lie algebra ${\mathfrak g}=\mathfrak{sl}_n(\mathbb{C})$ and center $Z\simeq \mathbb{ Z}_n$. We fix $B \subseteq G$ to be the Borel subgroup of upper triangular matrices. Then $B = TU$, where $U$ is the subgroup of upper triangular matrices with diagonal entries equal to $1$, and $T$ is the subgroup of diagonal matrices. Denoting the Lie algebras of $B$, $T$, and $U$ by ${\mathfrak b}$, ${\mathfrak t}$, and ${\mathfrak u}$, respectively, we have ${\mathfrak b}={\mathfrak t}\oplus {\mathfrak u}$. The Weyl group of $G$ is $W= N_G(T)/T\cong S_n$, the symmetric group on $n$ letters. Given $g \in G$ and $\mathsf{x}\in {\mathfrak g}$, we write $g \cdot \mathsf{x}= \operatorname{Ad }(g) \mathsf{x}$ for the adjoint action of $g$ on $\mathsf{x}$; in terms of matrix multiplication, $g \cdot \mathsf{x}= g \mathsf{x}g^{-1}$. Let $\mathcal{N}$ denote the nilpotent cone of ${\mathfrak g}$. Let $E_{i,j}$ denote the elementary matrix whose only nonzero entry is $1$ in position $(i,j)$. Then ${\mathfrak u}=\mathbb{C}\{ E_{i,j} \mid 1\leq i<j \leq n \}$. We define $\epsilon_i\in {\mathfrak t}^*$ by $\epsilon_i(\mathsf{t}) = \mathsf{t}_{ii}$, where $\mathsf{t}= \sum \mathsf{t}_{ii} E_{i,i} \in {\mathfrak t}$. Let $\Phi$ denote the set of roots of ${\mathfrak t}$ on ${\mathfrak g}$, and $\Phi^+ = \{ \epsilon_i - \epsilon_j \mid i < j \}$ denote the set of positive roots. The corresponding set of simple roots is $\Delta= \{\alpha_i \mid i\in \{1, \ldots, n-1\}\}$, where $\alpha_i:=\epsilon_i-\epsilon_{i+1}$. The center $Z$ of $G$ is isomorphic to $\mathbb{ Z}_n$; it consists of scalar multiples of the $n \times n$ identity matrix $I_n$. Let $\widehat{Z}$ denote the character group of $Z$, and let $\chi_i \in \widehat{Z}$ be the character of $Z$ defined by $\chi_i(\operatorname{diag}(a,a,\ldots, a)) = a^i$.

## The Springer resolution {#sec.Springer.def}

The *Springer resolution* is the morphism $\mu: \widetilde{\mathcal{N}} \to \mathcal{N}$, where $$\widetilde{\mathcal{N}} := \{(gB, \mathsf{x}) \in G/B \times \mathcal{N}\mid g^{-1}\cdot \mathsf{x}\in {\mathfrak u}\},$$ and the map $\mu$ is the projection onto the second factor. We will make use of the following alternate description of the Springer resolution. Write $G\times^B {\mathfrak u}$ for the mixed space $(G \times {\mathfrak u})/B$, where $B$ acts on $G\times {\mathfrak u}$ on the right by $(g,\mathsf{x})b = (gb, b^{-1} \cdot \mathsf{x})$. Let $[g,\mathsf{x}] := (g,\mathsf{x}) B \in G\times^B {\mathfrak u}$. There is an isomorphism of varieties $$G\times^B {\mathfrak u}\to \widetilde{\mathcal{N}},\; \;[g,\mathsf{x}] \mapsto (gB, g\cdot \mathsf{x}),$$ (cf. [@Jantzen pg. 66]), which we use to identify these varieties.
Under this identification, the morphism $\mu$ is $$\begin{aligned} \label{eqn.Sp.res} \mu: G\times^B {\mathfrak u}\to \mathcal{N}, \; \; [g,\mathsf{x}] \mapsto g\cdot \mathsf{x}.\end{aligned}$$ The Springer fiber corresponding to a nilpotent matrix $\mathsf{x}\in \mathcal{N}$ is the fiber $\mu^{-1}(\mathsf{x})$ of the Springer resolution $\mu: G \times^B {\mathfrak u}\to {\mathcal N}$, $$\widetilde{\mathcal{N}}_{\mathsf{x}}:=\mu^{-1}(\mathsf{x}) = \{ [g, g^{-1} \cdot \mathsf{x}] \mid g^{-1} \cdot \mathsf{x}\in {\mathfrak u}\}.$$ The natural map $G \times^B {\mathfrak u}\to G/B$ maps $\widetilde{\mathcal{N}}_{\mathsf{x}}$ isomorphically onto its image in the flag variety, which is the subvariety $\mathcal{B}_{\mathsf{x}}$ of $\mathcal{B}= G/B$ defined by $$\mathcal{B}_{\mathsf{x}} = \{ gB \mid g^{-1} \cdot \mathsf{x}\in {\mathfrak u}\}.$$ We identify $\widetilde{\mathcal{N}}_{\mathsf{x}}$ with $\mathcal{B}_{\mathsf{x}}$, generally writing $\widetilde{\mathcal{N}}_{\mathsf{x}}$ throughout. Let $\lambda$ be a partition of $n$. We denote by $\mathcal{O}_\lambda$ the nilpotent orbit of matrices of Jordan type $\lambda$. When $\mathsf{x}$ is a fixed element of $\mathcal{O}_\lambda$ we write $\widetilde{\mathcal{N}}_{\lambda}:= \widetilde{\mathcal{N}}_{\mathsf{x}}$. The Springer fiber $\widetilde{\mathcal{N}}_{\lambda}$ is equidimensional of dimension $\sum_i (\lambda_i - 1)$, where the $\lambda_i$ are the parts of $\lambda$. A partition is divisible by $d$ if all of its parts are divisible by $d$; in this case, we define $\lambda/d$ to be the partition whose parts are $\lambda_i/d$. Then $\widetilde{\mathcal{N}}_{\lambda/d}$ denotes the Springer fiber for the group $SL_{n/d}(\mathbb{C})$ over a fixed nilpotent element of Jordan type $\lambda/d$. ## The extended Springer resolution {#sec.extended} We now recall from [@Graham2019; @GPR] the definition of the variety which is the main geometric focus of this paper. More details may be found in these references. The nilpotent matrix $\mathsf{n}:=\sum_{i=1}^{n-1} E_{i,i+1}\in {\mathfrak u}$ is a regular nilpotent element of ${\mathfrak g}$. We can find a standard triple in ${\mathfrak g}$ with nilpositive element $\mathsf{n}$ and semisimple element $\mathsf{s}$ such that $[\mathsf{s}, E_{i,i+1}] = 2 E_{i,i+1}$. Writing ${\mathfrak g}_2$ for the space spanned by the $E_{i,i+1}$, we see that $[\mathsf{s}, \mathsf{x}] = 2\mathsf{x}$ for all $\mathsf{x}\in {\mathfrak g}_2$. Let $\omega := \exp(2\pi i/n)$, and let $\underline{\omega} = \omega I_n$, where $I_n$ is the $n \times n$ identity matrix. The center $Z$ of $G$ is the subgroup of $T$ given by $Z = \{\underline{\omega}^k \mid 0\leq k \leq n-1\}$. Let $T_{ad}$ denote the torus $T/Z$. Since the center $Z$ acts trivially on ${\mathfrak g}$, the action of $T$ on ${\mathfrak g}$ factors through the map $T \to T_{ad} = T/Z$. The map $T_{ad} \to T_{ad} \cdot \mathsf{n}$ given by $t \mapsto t \cdot \mathsf{n}$ is a $T$-equivariant open embedding of $T_{ad}$ in ${\mathfrak g}_2$ as an open subset. Hence, ${\mathfrak g}_2$ is an affine toric variety for $T_{ad}$; to emphasize this fact, we define $\mathcal{V}_{ad}:={\mathfrak g}_2$. The composition $\mathcal{V}_{ad} \to {\mathfrak u}\to {\mathfrak u}/[{\mathfrak u}, {\mathfrak u}]$ is an isomorphism, and using this, we identify $\mathcal{V}_{ad}$ with ${\mathfrak u}/[{\mathfrak u},{\mathfrak u}]$. 
Via this identification, $\mathcal{V}_{ad}$ acquires a $B$-action (the subgroup $U$ acts trivially), and the projection $p: {\mathfrak u}\to {\mathfrak u}/[{\mathfrak u},{\mathfrak u}]=\mathcal{V}_{ad}$ is $B$-equivariant. An affine toric variety is characterized by the character group of the torus, which can be viewed as a subset of the dual Lie algebra of the torus, together with a cone in the real span of the character group. Note that the simple roots can be viewed as characters of either $T_{ad}$ or $T$. Let $\mathcal{P}\subset {\mathfrak t}^*$ denote the weight lattice, that is, the character group of $T$, and let $\mathcal{Q}\subset {\mathfrak t}^*$ denote the root lattice, the character group of $T_{ad}$. Both have the same real span $\mathbb{ V}$ in ${\mathfrak t}^*$, and each is a lattice in $\mathbb{ V}$. For later use, we note that the lattice $\frac{1}{n} \mathcal{Q}$ in $\mathbb{ V}$ is the character group of a torus which we denote by $\widehat{T}$. The toric variety $\mathcal{V}_{ad}$ corresponds to the lattice $\mathcal{Q}$, and the cone equal to the set of $\mathbb{ R}_{\geq 0}$-linear combinations of simple roots. We define $\mathcal{V}$ to be the toric variety for $T$ obtained by using the lattice $\mathcal{P}$ in place of $\mathcal{Q}$, but keeping the same cone. It follows from [@Fulton Section 2.2] that $\mathcal{V}/Z = \mathcal{V}_{ad}$ (see [@Graham2019 Prop. 3.1]). We use the natural projection $B \to T \cong B/U$ to extend the $T$-action on $\mathcal{V}$ to a $B$-action, where the subgroup $U$ acts trivially. The quotient map $\pi:\mathcal{V} \to \mathcal{V}_{ad}$ is $B$-equivariant. For later use, let $\widehat{\mathcal{V}}$ denote the toric variety for $\widehat{T}$ defined using the same cone, but the lattice $\frac{1}{n} \mathcal{Q}$. The variety $\widetilde{\mathcal{M}}$ and the map $\psi: \widetilde{\mathcal{M}} \to \mathcal{N}$ discussed in the introduction are defined as follows. First, consider the maps $p: {\mathfrak u}\to {\mathfrak u}/[{\mathfrak u},{\mathfrak u}]=\mathcal{V}_{ad}$ and $\pi: \mathcal{V}\to \mathcal{V}/Z=\mathcal{V}_{ad}$ defined above. Set $$\widetilde{{\mathfrak u}} := \mathcal{V}\times_{\mathcal{V}_{ad}} {\mathfrak u}=\{ (v, \mathsf{x})\mid \pi(v)=p(\mathsf{x}) \};$$ that is, we form the following Cartesian diagram: $$\xymatrix{\widetilde{{\mathfrak u}} \ar[r] \ar[d] & {\mathfrak u}\ar[d]^p \\ \mathcal{V}\ar[r]^{\pi}& \mathcal{V}_{ad}. }$$ Because the maps $p$ and $\pi$ are both $B$-equivariant, $B$ acts on $\widetilde{{\mathfrak u}}$. We define $\widetilde{\mathcal{M}}:=G\times^B\widetilde{{\mathfrak u}}$. Let $\eta: \widetilde{{\mathfrak u}} \to {\mathfrak u}$ denote projection onto the second factor. Define $\widetilde{\eta}: \widetilde{\mathcal{M}} \to \widetilde{\mathcal{N}}$ to be the map induced from $\eta$, so $\widetilde{\eta}$ maps the element $[g,y]\in \widetilde{\mathcal{M}}$ to $[g, \eta(y)]\in \widetilde{\mathcal{N}}$. The *extended Springer resolution* is the variety $\widetilde{\mathcal{M}}$, together with the map $\psi: \widetilde{\mathcal{M}}\to \mathcal{N}$, where $\psi$ is the composition $$\xymatrix{ \psi: \widetilde{\mathcal{M}} \ar[r]^{\widetilde{\eta}} & \widetilde{\mathcal{N}} \ar[r]^{\mu} & \mathcal{N}}$$ of $\widetilde{\eta}$ with the usual Springer resolution $\mu$. The *extended Springer fiber* corresponding to $\mathsf{x}\in \mathcal{N}$ is the fiber $\widetilde{\mathcal{M}}_{\mathsf{x}} := \psi^{-1}(x)$. 
As for Springer fibers, if $\mathsf{x}$ is a fixed matrix of Jordan type $\lambda$, we write $\widetilde{\mathcal{M}}_{\lambda}:=\widetilde{\mathcal{M}}_{\mathsf{x}}$. ## Pavings {#sec.praving.def} The main result of this paper shows that each extended Springer fiber has an orbifold paving by affine cells. We now explain this terminology. Recall that a *paving* of an algebraic variety $\mathcal{Y}$ is a filtration by closed subvarieties $$\mathcal{Y}_0 \subset \mathcal{Y}_1 \subset \cdots \subset \mathcal{Y}_d=\mathcal{Y}.$$ A paving is *affine* if every $\mathcal{Y}_i\setminus \mathcal{Y}_{i-1}$ is isomorphic to a finite disjoint union of affine spaces. We call these spaces the *affine cells* of the paving. We say a paving is an *orbifold paving* if every $\mathcal{Y}_i\setminus \mathcal{Y}_{i-1}$ is isomorphic to a finite disjoint union of quotients of an affine space by a finite group action. It is well-known that if an algebraic variety $\mathcal{Y}$ has a paving by affines, then the fundamental classes of the closures of the cells in the paving form a basis for the Borel-Moore homology of $\mathcal{Y}$ over the integers (here denoted by $\overline{H}_*(\mathcal{Y};\mathbb{ Z})$). This result follows by induction, using the long exact sequence in Borel-Moore homology, together with the fact that $\overline{H}_{2n}(\mathbb{C}^n) = \mathbb{ Z}\cdot [\mathbb{C}^n]$, and $\overline{H}_i(\mathbb{C}^n) = 0$ if $i \neq 2n$. As shown by Abe and Matsumura [@Abe-Matsumura2015 Section 2.4], if rational coefficients are used, this result extends to orbifold pavings: if $\mathcal{Y}$ has an orbifold paving, the fundamental classes of orbifold cell closures form a basis for the rational Borel-Moore homology $\overline{H}_*(\mathcal{Y};\mathbb{ Q})$. The reason is that if $H$ is a finite group acting on $\mathbb{C}^n$, then $\overline{H}_{2n}(\mathbb{C}^n/H;\mathbb{ Q}) = \mathbb{ Q}\cdot [\mathbb{C}^n/H]$. With an additional hypothesis on the group action, these results can be extended to a more general coefficient field. (We do not know if this additional hypothesis is necessary.) It is well known that if $\pi: \mathcal{Y}\to \mathcal{Y}/H$ is a quotient map by a finite group, then in ordinary homology, the map $\pi_*: H_i(\mathcal{Y};\mathbb F) \to H_i(\mathcal{Y}/H;\mathbb F)$ is an isomorphism, if coefficients are taken in a field $\mathbb F$ whose characteristic is $0$ or is relatively prime to $|H|$. For lack of reference, we prove the following lemma, which is an analogue of this result in Borel-Moore homology. **Lemma 1**. *Let $H$ be a finite abelian group acting linearly on $\mathbb{C}^n$. If $\mathbb F$ is a field whose characteristic is $0$ or is relatively prime to $|H|$, then $\overline{H}_{2n}(\mathbb{C}^n/H;\mathbb F) = \mathbb F \cdot [\mathbb{C}^n/H]$, and $\overline{H}_i(\mathbb{C}^n/H;\mathbb F) = 0$ if $i \neq 2n$.* *Proof.* For simplicity, we omit the coefficient field $\mathbb F$ from the notation and write simply $\overline{H}_i(\mathcal{Y})$ for $\overline{H}_i(\mathcal{Y};\mathbb F)$. If $H$ acts on $\mathcal{Y}$, write $\widetilde{\mathcal{Y}} = \mathcal{Y}/H$. Let $\pi: \mathbb{C}^n \to \widetilde{\mathbb{C}^n}$ be the quotient map. It suffices to prove that for all $i$, the map $\pi_*: \overline{H}_{i}(\mathbb{C}^n) \to \overline{H}_{i}( \widetilde{\mathbb{C}}^n)$ is an isomorphism. We may assume $H$ acts diagonally on $\mathbb{C}^n$. 
Embed $\mathbb{C}^n \hookrightarrow \mathbb{C}^{n+1}$ via $x \mapsto (x,0)$, and extend the $H$-action to $\mathbb{C}^{n+1}$ by making $H$ act trivially on the last factor. Then we have $$\mathbb{C}^n \hookrightarrow \mathbb{ P}^n \hookleftarrow \mathbb{ P}^{n-1},$$ where the horizontal maps are an open and closed embedding, respectively. These maps induce embeddings $$\widetilde{\mathbb{C}}^n \hookrightarrow \widetilde{\mathbb{ P}}^n \hookleftarrow \widetilde{\mathbb{ P}}^{n-1}.$$ We have the following map of exact sequences: $$\begin{CD} \overline{H}_{i}(\mathbb{ P}^{n-1}) @>>> \overline{H}_{i}(\mathbb{ P}^{n}) @>>> \overline{H}_{i}(\mathbb{C}^n) @>>> \overline{H}_{i-1}(\mathbb{ P}^{n-1}) @>>> \overline{H}_{i-1}(\mathbb{ P}^{n}) \\ @VVV @VVV @VVV @VVV @VVV \\ \overline{H}_{i}( \widetilde{\mathbb{ P}}^{n-1}) @>>> \overline{H}_{i}( \widetilde{\mathbb{ P}}^{n} ) @>>> \overline{H}_{i}( \widetilde{\mathbb{C}}^n) @>>> \widetilde{H}_{i-1}( \widetilde{\mathbb{ P}}^{n-1}) @>>> \overline{H}_{i-1}( \widetilde{\mathbb{ P}}^{n}). \end{CD}$$ The vertical maps are induced by the quotient map. For compact varieties, Borel-Moore homology is isomorphic to ordinary homology, on which the quotient map induces isomorphism. Because $\mathbb{ P}^n$ and $\mathbb{ P}^{n-1}$ are compact, this observation implies that all maps except the middle map are an isomorphism. The five-lemma then implies that the middle map is an isomorphism, as desired. ◻ # Affine toric varieties and quotients {#sec.affine-subvarieties} The toric varieties $\mathcal{V}$ and $\mathcal{V}_{ad}$ defined in the previous section play a key role in the study of extended Springer fibers. In this section, we define and study particular subvarieties of $\mathcal{V}$ associated to a given set partition. We prove that each such subvariety is isomorphic to a quotient of affine space by a finite group action. The center $Z$ acts on the set of such subvarieties, and we identify the orbit of each under the action of $Z$. Our work in this section lays the groundwork needed for Section [4](#sec.paving){reference-type="ref" reference="sec.paving"}, where we lift the affine cells paving a Springer fiber to the extended Springer fibers. ## The toric varieties $\mathcal{V}$, $\mathcal{V}_{ad}$, and $\widehat{\mathcal{V}}$ The inclusions $\mathcal{Q}\subset \mathcal{P}\subset \frac{1}{n} \mathcal{Q}$ of lattices correspond to inclusions $$\label{e.inclusions} \mathbb{C}[T_{ad}] \subset \mathbb{C}[T] \subset \mathbb{C}[\widehat{T}],$$ of coordinate rings, which in turn yield surjections $$\label{e.torimaps} \begin{tikzcd} \widehat T \arrow[rr,bend left,"\widehat \rho"] \arrow[r] \arrow[r, "\rho"] & T \arrow[r,"\pi"] & T_{ad} . \end{tikzcd}$$ We now introduce notation for functions on these tori. Given a character $\lambda$ of either $T$ or $T_{ad}$, let $e^{\lambda}$ denote the corresponding function on the torus. Recall that $\Delta = \{ \alpha_1, \ldots, \alpha_{n-1} \}$ denotes the set of simple roots. Let $\{ \lambda_1, \ldots, \lambda_{n-1} \}$ denote the corresponding set of fundamental weights. Write $$x_i = e^{\alpha_i}, \ \ y_i = e^{\lambda_i}, \ \ z_i = e^{\alpha_i/n}.$$ Then $\mathbb{C}[T_{ad}] = \mathbb{C}[x_1^{\pm 1},\dots, x_{n-1}^{\pm 1}]$, $\mathbb{C}[T] = \mathbb{C}[y_1^{\pm 1},\dots, y_{n-1}^{\pm 1}]$, and $\mathbb{C}[\widehat{T}] = \mathbb{C}[z_1^{\pm 1},\dots, z_{n-1}^{\pm 1}]$. We have $x_i = z_i^n$. 
Moreover, the equation $$\begin{aligned} \label{e.glk} \lambda_k &= & \frac{1}{n} \Big( (n-k) \alpha_1 + 2(n-k) \alpha_2 + \cdots + k(n-k) \alpha_k \\ \nonumber & & \quad\quad + k(n-k-1) \alpha_{k+1} + k(n-k-2) \alpha_{k+2} + \cdots + k \alpha_{n-1} \Big)\end{aligned}$$ implies that $$\label{e.yk} y_k = z_1^{n-k}z_2^{2(n-k)}\dots z_k^{k(n-k)}z_{k+1}^{k(n-k-1)}z_{k+2}^{k(n-k-2)}\dots z_{n-1}^k.$$ This gives a precise relationship between the coordinate functions on $T$ and $T_{ad}$ and those on $\widehat{T}$. The kernel of $\pi$ is equal to the center $Z$ of $G$. Let $\widehat{H} := \operatorname{ker }\widehat{\rho} \supset H := \operatorname{ker }\rho$. The equation $x_i = z_i^n$ implies the map $\widehat{\rho}$ can be described in coordinates as $$\widehat{\rho}(z_1,\dots,z_{n-1}) = (z_1^n, \ldots, z_{n-1}^n).$$ Hence $\widehat{H} \cong \mathbb Z_n\times\cdots \times \mathbb Z_n$ (with $(n-1)$ copies of $\mathbb Z_n$). Recall that $\omega$ denotes a primitive $n$-th root of unity. Then any element $h \in \widehat{H}$ is of the form $h = (\omega^{a_1},\dots,\omega^{a_{n-1}})$, where $a_i \in \mathbb{ Z}$.

**Proposition 2**. *The group $H$ is equal to $$\label{e.H} \{ (\omega^{a_1},\dots,\omega^{a_{n-1}}) \in \widehat T \mid a_1 + 2a_2+\cdots +(n-1)a_{n-1} \equiv 0\;\mathrm{mod}\,n\}.$$*

*Proof.* From [\[e.yk\]](#e.yk){reference-type="eqref" reference="e.yk"}, we see that $(\omega^{a_1},\dots,\omega^{a_{n-1}})\in H$ if and only if $\omega^{N_k} = 1$ for every $k$ such that $1\leq k\leq (n-1)$, where $$\begin{aligned} N_k &=& a_1(n-k)+2a_2(n-k)+\cdots +ka_k(n-k)\\ &&\quad \quad \quad\quad +ka_{k+1}(n-k-1) +ka_{k+2}(n-k-2)+\cdots +ka_{n-1}.\end{aligned}$$ Equivalently, the tuple $(a_1, \ldots, a_{n-1})$ is such that the modular relations $N_k \equiv 0 \;\mathrm{mod}\,n$ are satisfied for all $1\leq k\leq n-1$. This in turn is equivalent to the tuple satisfying the relations $$k a_1 + 2 k a_2 + \cdots + (n-1) k a_{n-1} \equiv 0 \;\mathrm{mod}\,n$$ for all $1\leq k\leq (n-1)$. These relations are satisfied for all such $k$ if and only if they are satisfied for $k =1$, which implies the result. ◻

We can identify $Z$ with $\widehat{H}/H$, and do so explicitly as follows. First, as $Z = \{\underline{\omega}^k \mid 0\leq k \leq n-1\}$, we see $Z$ is isomorphic to the cyclic group $\langle \omega \rangle \simeq \mathbb{ Z}_n$. By Proposition [Proposition 2](#prop.H){reference-type="ref" reference="prop.H"}, the homomorphism $$\label{e.hmap} \widehat{H} \rightarrow Z,\;\; (\omega^{a_1}, \ldots, \omega^{a_{n-1}}) \mapsto \underline{\omega}^{\sum_{r=1}^{n-1} r a_r}$$ has kernel $H$, and thus yields the desired isomorphism of $\widehat{H}/H$ with $Z$. Recall from Section [2.2](#sec.extended){reference-type="ref" reference="sec.extended"} that the cone in $\mathbb{ V}$ which is equal to the $\mathbb{ R}_{\geq 0}$-linear combinations of the positive simple roots defines an affine toric variety for each of the tori $\widehat{T}$, $T$, and $T_{ad}$. These toric varieties are denoted by $\widehat{\mathcal{V}}$, $\mathcal{V}$, and $\mathcal{V}_{ad}$, respectively. As for the tori, the inclusions $\mathbb{C}[\mathcal{V}_{ad}] \subset \mathbb{C}[\mathcal{V}] \subset \mathbb{C}[\widehat{\mathcal{V}}]$ yield maps of toric varieties which extend the maps of the tori, and which we denote by the same letters: $$\label{e.toricmaps} \begin{tikzcd} \widehat \mathcal{V}\arrow[rr,bend left,"\widehat \rho"] \arrow[r] \arrow[r, "\rho"] & \mathcal{V}\arrow[r,"\pi"] & \mathcal{V}_{ad}.
\end{tikzcd}$$ The maps $\widehat{\rho}$, $\rho$, and $\pi$, are quotient maps for the actions of the finite groups $\widehat{H}$, $H$, and $Z$, respectively. To simplify notation, we write $A = \mathbb{C}[\widehat{\mathcal{V}}]$. Then $\mathbb{C}[\mathcal{V}] = A^H$, since $\mathcal{V}\cong \widehat\mathcal{V}/H$. For each $i$ with $1\leq i \leq n-1$, let $\mu_i = \sum a_i \alpha_i$ denote the unique weight in the same coset of $\lambda_i$ modulo the root lattice such that each $a_i$ satisfies $0 \leq a_i < 1$, and set $v_i:= e^{\mu_i} \in \mathbb{C}[\mathcal{V}]$. The rings $A=\mathbb{C}[\widehat{\mathcal{V}}]$ and $\mathbb{C}[\mathcal{V}_{ad}]$ are the polynomial rings $\mathbb{C}[z_1, \ldots, z_{n-1}]$ and $\mathbb{C}[x_1, \ldots, x_{n-1}]$, respectively. Thus, both $\widehat{\mathcal{V}}$ and $\mathcal{V}_{ad}$ are isomorphic to $\mathbb{C}^{n-1}$, with coordinate functions $z_i$ and $x_i$, respectively. However, the ring $\mathbb{C}[\mathcal{V}]$ is more complicated: it is a quotient of the polynomial ring $\mathbb{C}[v_1, \ldots, v_{n-1}, x_1, \ldots, x_{n-1}]$, but it is not a polynomial ring (except if $n =2$). ## Affine spaces mod finite groups {#ss.aff-mod-finite} In this section, we define certain subvarieties of $\mathcal{V}$, and show that each such subvariety is the quotient of affine space by a finite group. These subvarieties will play a role in our paving of the generalized Springer fibers. Fix a decomposition of the set $[n-1]:=\{1, \ldots, n-1 \}$ into three disjoint (possibly empty) subsets $I$, $J$, $K$. Define $\mathcal{W}_{ad}$ to be the subvariety of $\mathcal{V}_{ad}$ defined by the equations $x_j = 1$, $x_k = 0$ for $j \in J$ and $k \in K$. There is a natural isomorphism $$\begin{aligned} \label{eqn.affine.iso} i: \mathbb{C}^{|I|} \to \mathcal{W}_{ad}.\end{aligned}$$ If $J$ is nonempty, we fix a tuple of integers $(c_j)_{j\in J}$ such that $c_j\in \{0,\ldots, n-1\}$. Define $\widehat{\mathcal{W}}_{(c_j)}\simeq \mathbb{C}^{|I|}$ to be the affine subvariety in $\widehat\mathcal{V}$ defined by the equations $z_j = \omega^{c_j}$, $z_k = 0$, for $j \in J$, $k \in K$, and set $\mathcal{W}_{(c_j)} := \rho(\widehat{\mathcal{W}}_{(c_j)})$. If the integers $c_j$ are fixed and there is no possible confusion, we write $\widehat\mathcal{W}:= \widehat\mathcal{W}_{(c_j)}$ and $\mathcal{W}:= \rho(\widehat\mathcal{W}) \subset \mathcal{V}$. Observe that $\mathcal{W}_{ad} = \widehat\rho(\widehat\mathcal{W}) \subset \mathcal{V}_{ad}$. The varieties $\widehat{\mathcal{W}}_{(c_j)}$ are all distinct for different $(c_j)$, but this is not true for the $\mathcal{W}_{(c_j)}$; see Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"} for a precise statement. However, the subgroup of $H$ preserving $\mathcal{W}_{(c_j)}$ depends only on the set $J$ and not on the particular values $c_j$. Indeed, if we define the subgroup $H_J$ of $H$ by the equation $$H_J = \{ (\omega^{a_1},\dots,\omega^{a_{n-1}}) \in \widehat T \mid a_j \equiv 0 \;\mathrm{mod}\,n\, \mbox { for } j \in J,\, \sum_{r \not\in J} r a_r \equiv 0 \;\mathrm{mod}\,n \},$$ then we have the following proposition. **Proposition 3**. *An element $h \in H$ preserves the subvariety $\widehat{\mathcal{W}}$ if and only if $h \in H_J$.* *Proof.* An element $h = (\omega^{a_1}, \ldots, \omega^{a_{n-1}})$ of $H$ preserves $\widehat{\mathcal{W}}$ if and only if $\omega^{a_j} = 1$ for all $j \in J$. We may therefore assume $a_j\equiv 0 \;\mathrm{mod}\,n$ for all $j\in J$. 
Since $h\in H$, it must also satisfy the conditions of [\[e.H\]](#e.H){reference-type="eqref" reference="e.H"}. The proposition follows. ◻

*Example 4*. Suppose $n=6$ and consider the partition of $\{1,2,3,4,5\}$ defined by $I = \{1,3,5\}$, $J=\{4\}$, and $K=\{2\}$. Set $(c_j)=(c_4)=(0)$. In this example, the ideal defining $\widehat\mathcal{W}$ from [\[e.widehat\]](#e.widehat){reference-type="eqref" reference="e.widehat"} is $\mathcal{I}= \langle z_2, z_4 - 1 \rangle$. By Propositions [Proposition 2](#prop.H){reference-type="ref" reference="prop.H"} and [Proposition 3](#prop.HW){reference-type="ref" reference="prop.HW"}, $H$ consists of the set of $(\omega^{a_i})$ satisfying $a_1 + 2 a_2 + 3 a_3 + 4 a_4 + 5 a_5 \equiv 0 \;\mathrm{mod}\,6$, and $H_J$ is the subgroup of $H$ equal to the set of $(\omega^{a_i})$ satisfying $a_4 \equiv 0 \;\mathrm{mod}\,6$ and $a_1 + 2 a_2 + 3 a_3 + 5 a_5 \equiv 0 \;\mathrm{mod}\,6$.

The main result of this subsection, Proposition [Proposition 9](#prop.quotient){reference-type="ref" reference="prop.quotient"} below, is that there is an isomorphism $\widehat{\mathcal{W}}/H_J \to \mathcal{W}$. We begin with a lemma.

**Lemma 5**. *Let $C$ be a ring with an action of a finite group $M$, and let $I$ be an $M$-stable ideal in $C$. There is a ring isomorphism $(C/I)^{M} \simeq C^{M} / I^{M}$.*

*Proof.* The composition $C^M \to C \to C/I$ has kernel $I^M= I \cap C^M$, which yields an injective homomorphism $C^M /I^M \to (C/I)^M$. To prove that this homomorphism is surjective, observe that $c+I \in (C/I)^M$ if and only if $c+I= mc+I$ for all $m \in M$. Hence $c+I= c'+I$, where $$c' = \frac{1}{|M|}\sum_{m\in M} mc \in C^M,$$ and surjectivity follows. ◻

The map $\widehat{\mathcal{W}}/H_J \to \mathcal{W}$ is defined as follows. The ideal of $\widehat{\mathcal{W}}$ in the ring $A$ is $$\label{e.widehat} \mathcal{I} = \langle z_j - \omega^{c_j}, z_k \mid j\in J, k\in K \rangle,$$ so $\mathbb{C}[\widehat{\mathcal{W}}] = A/\mathcal{I}$. The group $H_J$ preserves $\mathcal{I}$, and using Lemma [Lemma 5](#lemma.ringiso){reference-type="ref" reference="lemma.ringiso"}, we see that $$\mathbb{C}[\widehat{\mathcal{W}} / H_J] = (A / \mathcal{I})^{H_J} \cong A^{H_J}/ \mathcal{I}^{H_J}.$$ The map $$\rho: \widehat{\mathcal{V}} = \operatorname{Spec}A \to \mathcal{V}= \operatorname{Spec}A^H$$ corresponds to the inclusion of rings $A^H \to A$, and $\mathcal{W}= \rho(\widehat{\mathcal{W}})$ is the subvariety of $\mathcal{V}$ defined by the ideal $\mathcal{I} \cap A^H$. Thus, $\mathbb{C}[\mathcal{W}] = A^H/(\mathcal{I} \cap A^H)$. The composition $$\label{e.ainject0} A^H \to A^{H_J} \to A^{H_J}/ \mathcal{I}^{H_J}$$ has kernel $\mathcal{I} \cap A^H$, and hence induces an injective ring homomorphism $$\label{e.ainject} A^H/(\mathcal{I} \cap A^H) \to A^{H_J}/ \mathcal{I}^{H_J}.$$ This yields a corresponding map of affine varieties: $$\label{e.ainject2} \widehat{\mathcal{W}}/H_J = \operatorname{Spec}A^{H_J}/ \mathcal{I}^{H_J} \to \mathcal{W}= \operatorname{Spec}A^H/(\mathcal{I} \cap A^H).$$ We will prove that [\[e.ainject2\]](#e.ainject2){reference-type="eqref" reference="e.ainject2"} is an isomorphism by showing that the ring map [\[e.ainject\]](#e.ainject){reference-type="eqref" reference="e.ainject"} is an isomorphism. We have already observed that [\[e.ainject\]](#e.ainject){reference-type="eqref" reference="e.ainject"} is injective, so it suffices to show surjectivity. The next proposition describes the rings $A^H$ and $A^{H_J}$ explicitly as the span of certain monomials in the $z_i$.

**Proposition 6**. 1.
*$A^H$ is the span of the monomials $z_1^{b_1} \cdots z_{n-1}^{b_{n-1}}$ where, modulo $n$, the tuple $(b_1, \ldots, b_{n-1})$ is a multiple of $(1,\ldots, n-1)$.* 2. *$A^{H_J}$ is the span of the monomials $z_1^{b_1} \cdots z_{n-1}^{b_{n-1}}$ where, modulo $n$, the tuple $(b_r)_{r \not\in J}$ is a multiple of $(r)_{r \not\in J}$. [(]{.upright}The elements $b_j$ for $j \in J$ are arbitrary.[)]{.upright}* *Proof.* We prove (1). Since the action of $H$ on $A = \mathbb{C}[\widehat\mathcal{V}]=\mathbb{C}[z_1,\ldots, z_{n-1}]$ takes any monomial to a multiple of itself, $A^H$ is spanned by the $H$-invariant monomials. By Proposition [Proposition 2](#prop.H){reference-type="ref" reference="prop.H"}, $H$ consists of the elements $(\omega^{a_1},\dots,\omega^{a_{n-1}})$, where the $a_i$ satisfy $\sum_{r = 1}^{n-1} r a_r \equiv 0 \;\mathrm{mod}\,n$. Rewriting this condition gives $$a_{n-1} \equiv a_1 + 2 a_2 + \cdots + (n-2) a_{n-2} \;\mathrm{mod}\,n.$$ So modulo $n$, $\mathbf{a}= (a_1, \ldots, a_{n-1})$ is in the row span of the $(n-2)\times(n-1)$ matrix, $$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 1 & 0 & \cdots & 0 & 2 \\ 0 & 0 & 1 & \cdots & 0 & 3 \\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & (n-2) \end{bmatrix}.$$ A monomial $z_1^{b_1} \cdots z_{n-1}^{b_{n-1}}$ is therefore $H$-invariant if and only if the dot product of $\mathbf{b}= (b_1, \ldots, b_{n-1})$ with any row of the matrix above is $0$ modulo $n$. This yields the conditions $$b_1 \equiv - b_{n-1} \;\mathrm{mod}\,n,\;\; b_2 \equiv -2 b_{n-1} \;\mathrm{mod}\,n,\; \ldots, \; b_{n-2} \equiv -(n-2) b_{n-1} \;\mathrm{mod}\,n$$ so $\mathbf{b}\equiv -b_{n-1}(1, 2, \ldots, n-1) \;\mathrm{mod}\,n$. This proves (1); the proof of (2) is similar. ◻ **Proposition 7**. *With notation as above, $A^{H_J} = A^H + \mathcal{I}^{H_J}$.* *Proof.* We will show that given $f = z_1^{b_1} \cdots z_{n-1}^{b_{n-1}} \in A^{H_J}$, we can write $f$ as a sum of an element of $A^H$ and an element of $\mathcal{I}^{H_J}$. By Proposition [Proposition 6](#prop.invariantrings){reference-type="ref" reference="prop.invariantrings"} (2), there exists $a\in \mathbb{ Z}$ such that $b_r \equiv ar \;\mathrm{mod}\,n$ for all $r\notin J$. For each $j \in J$, choose a non-negative integer $m_j$ so that $b_j + m_j \equiv aj \;\mathrm{mod}\,n$. Let $$g = \prod_{j \in J} z_j^{m_j} \cdot f = \prod_{r \not\in J} z_r^{b_r} \prod_{j \in J} z_j^{b_j + m_j}.$$ Then $g \in A^H$ by Proposition [Proposition 6](#prop.invariantrings){reference-type="ref" reference="prop.invariantrings"}(1). Set $c = \prod_{j \in J} \omega^{c_j m_j}$. We claim that $u :=c f -g$ is in $\mathcal{I}$. Indeed, $u = (c - \prod_{j \in J}z_j^{m_j} ) f$. Since $c - \prod_{j \in J}z_j^{m_j}$ equals $0$ whenever $z_j = \omega^{c_j}$ for all $j\in J$, we have $c - \prod_{j \in J}z_j^{m_j} \in \mathcal{I}$, proving the claim. Note that $u \in A^{H_J}$ since $f$ and $g$ are elements of $A^{H_J}$. Hence $u \in \mathcal{I}^{H_J}$. We have $$f = c^{-1} g + c^{-1} u \in A^H + \mathcal{I}^{H_J},$$ as desired. This completes the proof. ◻ *Example 8*. Suppose $n = 6$, with $\widehat{\mathcal{W}}$ and $H_J$ as in Example [Example 4](#ex.hw){reference-type="ref" reference="ex.hw"}. By Proposition [Proposition 6](#prop.invariantrings){reference-type="ref" reference="prop.invariantrings"}, $f = z_1 z_2^2 z_3^3 z_4 z_5^5$ is in $A^{H_J}$. We have $J = \{4 \}$ and $(c_j) = (c_4) = (0)$ from Example [Example 4](#ex.hw){reference-type="ref" reference="ex.hw"}. 
Choose $m_4 = 3$ so $g = z_4^3 f$; in the notation of the proof of Proposition [Proposition 7](#prop.invariantsum){reference-type="ref" reference="prop.invariantsum"}, we have $c = 1$, so $h = f - g$. Since $1 - z_4^3 \in \mathcal{I}$, we have $h = (1 - z_4^3) f \in \mathcal{I}$, and $f = g+h \in A^H + \mathcal{I}^{H_J}$. The following proposition is the main result of this subsection. **Proposition 9**. *The natural map $A^H/(\mathcal{I} \cap A^H) \to A^{H_J}/ \mathcal{I}^{H_J}$ of [\[e.ainject\]](#e.ainject){reference-type="eqref" reference="e.ainject"} is an isomorphism. Hence the corresponding map of varieties $\widehat{\mathcal{W}}/H_J \to \mathcal{W}$ is an isomorphism.* *Proof.* As noted above, we need only prove that [\[e.ainject\]](#e.ainject){reference-type="eqref" reference="e.ainject"} is surjective, which is equivalent to the statement that the map $A^H \to A^{H_J}/ \mathcal{I}^{H_J}$ from [\[e.ainject0\]](#e.ainject0){reference-type="eqref" reference="e.ainject0"} is surjective. If $f \in A^{H_J}$, then by Proposition [Proposition 7](#prop.invariantsum){reference-type="ref" reference="prop.invariantsum"}, we can write $f = f_1 + f_2$, where $f_1 \in A^H$ and $f_2 \in \mathcal{I}^{H_J}$. Thus $f + \mathcal{I}^{H_J}$ is the image of $f_1$ under [\[e.ainject0\]](#e.ainject0){reference-type="eqref" reference="e.ainject0"}. Hence the map $A^H \to A^{H_J}/ \mathcal{I}^{H_J}$ is indeed surjective. ◻ *Example 10*. The injectivity of the map $A^H/(\mathcal{I} \cap A^H) \to A^{H_J}/ \mathcal{I}^{H_J}$ is valid more generally for any ring $A$ with an action of a finite group $H$, along with an ideal $\mathcal{I}$ such that $H_J$ is the subgroup of $H$ preserving $\mathcal{I}$. However, in this generality, the map can fail to be surjective. For example, let $A = \mathbb{C}[x,y]$, and $H = \mathbb{ Z}_2$, with the nontrivial element acting by multiplication by $-1$. Let $\mathcal{I} = \langle (x-1)(y-1) \rangle$. Then $H_J = \{ 1 \}$, so $A^{H_J}/\mathcal{I}^{H_J} = A/\mathcal{I}$. The map $$A^H = \mathbb{C}[x^2, xy, y^2] \to A/I = \mathbb{C}[x,y]/ \langle (x-1)(y-1) \rangle$$ is not surjective. This follows since the corresponding map of affine schemes, which is the projection of the union of the lines $x=1$ and $y=1$ to $\mathbb{C}^2/H$, is not injective (e.g. the points $(1,-1)$ and $(-1,1)$ have the same image). ## Components and the action of $Z$ {#ss.componentsZ} Recall the diagram: $$\label{e.toricmaps2} \begin{tikzcd} \widehat \mathcal{V}\cong \mathbb{C}^{n-1} \arrow[rr,bend left,"\widehat \rho"] \arrow[r] \arrow[r, "\rho"] & \mathcal{V}\arrow[r,"\pi"] & \mathcal{V}_{ad} \cong \mathbb{C}^{n-1}. \end{tikzcd}$$ The main results of this subsection, Propositions [Proposition 12](#p.components){reference-type="ref" reference="p.components"} and [Proposition 14](#p.zaction){reference-type="ref" reference="p.zaction"}, give a parametrization of the set of components of $\pi^{-1}(\mathcal{W}_{ad})$ in $\mathcal{V}$, and describe the $Z$-action on this set of components in terms of the parametrization. Write $J^c := I \cup K$ for the complement of $J$ in $[n-1]$. The ideal of $\mathcal{W}_{ad}$ in $\mathbb{C}[\mathcal{V}_{ad}] = \mathbb{C}[x_1,\ldots, x_{n-1}]$ is $\mathcal{I}(\mathcal{W}_{ad}) = \langle x_j - 1, x_k \mid j\in J,\,k\in K \rangle$. 
The inverse image $\widehat{\rho}^{-1}(\mathcal{W}_{ad})$ is a disjoint union of subvarieties $\widehat\mathcal{W}_{(c_j)}$ as defined in the previous section, one for each tuple of integers $(c_j)\in \mathbb{ Z}^{|J|}$ with $c_j\in \{0,\ldots, n-1\}$. Each $\widehat\mathcal{W}_{(c_j)}$ is isomorphic to affine space $\mathbb{C}^{|I|}$, and there are $n^{|J|}$ such components, each corresponding to a tuple $(c_j)$. The group $\widehat{H} = \operatorname{ker }(\widehat\rho)\simeq \mathbb{ Z}_n \times \cdots \times \mathbb{ Z}_n$ acts transitively on the set of these components. Recall that $\mathcal{W}_{(c_j)}:= \rho(\widehat\mathcal{W}_{(c_j)}) \subseteq \mathcal{V}$.

**Lemma 11**. *Let $(c_j),(d_j)\in \mathbb{ Z}^{|J|}$ with $c_j,d_j\in \{0,\ldots, n-1\}$. Then either $\mathcal{W}_{(c_j)} = {\mathcal{W}}_{(d_j)}$, or ${\mathcal{W}}_{(c_j)}$ is disjoint from $\mathcal{W}_{(d_j)}$.*

*Proof.* We prove that if $\mathcal{W}_{(c_j)}$ and $\mathcal{W}_{(d_j)}$ intersect, then they coincide. Suppose then that these intersect. Since $\rho$ is the quotient by $H$, this means that there is some $p \in \widehat{\mathcal{W}}_{(c_j)}$ and some $h \in H$ such that $h\cdot p\in \widehat\mathcal{W}_{(d_j)}$. But this implies that $h \cdot \widehat{\mathcal{W}}_{(c_j)} = \widehat{\mathcal{W}}_{(d_j)}$, since as noted above, the group $\widehat{H}$ permutes the components of $\widehat{\rho}^{-1}(\mathcal{W}_{ad})$. Thus $\mathcal{W}_{(c_j)} = \mathcal{W}_{(d_j)}$. ◻

It follows that $\pi^{-1}(\mathcal{W}_{ad})$ is the disjoint union of the distinct ${\mathcal{W}}_{(c_j)}$, and that these are the irreducible components of $\pi^{-1}(\mathcal{W}_{ad})$. Let $d_* := \gcd( r\mid r\in [n]\setminus J = J^c\cup\{n\} )$. In other words, $d_*$ is a generator of the subgroup of $\mathbb{ Z}_n$ generated by the elements $r\in [n]\setminus J$. Define $\phi: \mathbb{ Z}^{|J|} \to \mathbb{ Z}_{d_*}$ by $\phi((c_j) ) = \sum_{j \in J} j c_j \;\mathrm{mod}\,d_*$. We abuse notation and write $\phi(\mathcal{W}_{(c_j)}) = \phi((c_j))$.

**Proposition 12**. *The components $\widehat{\mathcal{W}}_{(c_j)}$ and $\widehat{\mathcal{W}}_{(d_j)}$ of $\widehat\rho^{-1}(\mathcal{W}_{ad})$ are in the same $H$-orbit if and only if $\phi((c_j)) = \phi((d_j))$. The map $\phi : \{ \mathcal{W}_{(c_j)} \mid (c_j)\in \mathbb{ Z}^{|J|} \} \to \mathbb{ Z}_{d_*}$ induces a bijection between the components of $\pi^{-1}(\mathcal{W}_{ad})$ and $\mathbb{ Z}_{d_*}$.*

*Proof.* Let $h = (\omega^{a_j})_{j \in [n-1]} \in \widehat{H}$. Then $$\label{e.horbit} h \cdot \widehat{\mathcal{W}}_{(c_j)} = \widehat{\mathcal{W}}_{(d_j)}$$ if and only if $a_j \equiv d_j - c_j\;\mathrm{mod}\,n$ for all $j \in J$. By Proposition [Proposition 2](#prop.H){reference-type="ref" reference="prop.H"}, the element $h$ is in $H$ if and only if $\sum_{r=1}^{n-1} r a_r \equiv 0\;\mathrm{mod}\,n$. Thus, there exists $h\in H$ satisfying [\[e.horbit\]](#e.horbit){reference-type="eqref" reference="e.horbit"} if and only if we can find a tuple $(a_r)_{r \in J^c}$ satisfying the equation $$\begin{aligned} \label{eqn.components} \sum_{r \in J^c} r a_r \equiv - \sum_{j \in J} j(d_j - c_j )\;\mathrm{mod}\,n.\end{aligned}$$ The left side of [\[eqn.components\]](#eqn.components){reference-type="eqref" reference="eqn.components"} is divisible by $d_*$, as is $n$. Therefore, the existence of such a tuple implies that $\sum_{j \in J} j c_j \equiv \sum_{j \in J} j d_j \;\mathrm{mod}\,d_*$, that is, $\phi((c_j)) = \phi((d_j))$.
Hence, if $\widehat{\mathcal{W}}_{(c_j)}$ and $\widehat{\mathcal{W}}_{(d_j)}$ are in the same $H$-orbit, then $\phi((c_j)) = \phi((d_j))$. Conversely, suppose $\phi((c_j)) = \phi((d_j))$. We will show there is a tuple $(a_r)_{r \in J^c}$ satisfying [\[eqn.components\]](#eqn.components){reference-type="eqref" reference="eqn.components"}. As $d_*=\gcd(r\mid r\in J^c \cup \{n\})$, Bézout's identity implies that $\sum_{r\in J^c} rb_r \equiv d_* \;\mathrm{mod}\,n$ for some $b_r\in \mathbb{ Z}$. Since $\sum_{j \in J} j c_j \equiv \sum_{j \in J} j d_j \;\mathrm{mod}\,d_*$, we have $- \sum_{j \in J} j(d_j - c_j )=\tau d_*$ for some $\tau\in \mathbb{ Z}$. For each $r\in J^c$, set $a_r = \tau b_r$. The tuple $(a_r)_{r\in J^c}$ then satisfies [\[eqn.components\]](#eqn.components){reference-type="eqref" reference="eqn.components"}. Hence $\widehat{\mathcal{W}}_{(c_j)}$ and $\widehat{\mathcal{W}}_{(d_j)}$ are in the same $H$-orbit. This proves the first assertion of the proposition, and shows that $\phi$ gives an injective map from the set of components of $\pi^{-1}(\mathcal{W}_{ad})$ to $\mathbb{ Z}_{d_*}$. To complete the proof, we must show that $\phi$ is surjective. For this, we need only show that the $j \in J$ generate $\mathbb{ Z}_{d_*}$. This is equivalent to showing that the $j \in J$ together with $d_*$ generate $\mathbb{ Z}$. This holds since $d_* \mathbb{ Z}$ is the subgroup of $\mathbb{ Z}$ generated by $[n] \setminus J$. ◻ *Remark 13*. When we refer to the inverse image of a variety, we mean the inverse image with its reduced scheme structure. Indeed, $\pi^{-1}(\mathcal{W}_{ad})$ may be nonreduced if given the inverse image scheme structure. For example, suppose $n= 4$, $I$ is empty, $J = \{1,3 \}$ and $K = \{ 2 \}$. We have $\mathcal{V}_{ad} = \operatorname{Spec}\mathbb{C}[x_1, x_2, x_3] \cong \mathbb{C}^3$, and $\mathcal{W}_{ad}$ is the point $(1,0,1)$. We have $\mathcal{V}= \operatorname{Spec}B$, where $B$ is the quotient of the polynomial ring $\mathbb{C}[x_1,x_2,x_3,y_1, y_2, y_3]$ by an ideal $P$ containing the elements $y_1^4 - x_1^3 x_2^2 x_3$, $y_2^2 - x_1 x_3$, and $y_3^4 - x_1 x_2^2 x_3^3$. The inverse image of $\mathcal{W}_{ad}$ is the subscheme in $\mathcal{V}$ corresponding to the ideal $Q$ in $B$ generated by $x_1-1, x_2$, and $x_3-1$. As a reduced scheme, $\pi^{-1}(\mathcal{W}_{ad})$ is a union of two points, where $x_1 = x_3 = 1$, $x_2 = y_1 = y_3 = 0$, and $y_2 = \pm 1$. However, the scheme-theoretic inverse image $\pi^{-1}(\mathcal{W}_{ad})$ is not reduced. Indeed, $y_1^4$ and $y_3^4$ are in $Q$, but we claim $y_1$ and $y_3$ are not. We sketch the verification of this claim for $y_1$. Suppose by contradiction that $y_1 \in Q$; then $y_1 = (x_1-1)b_1 + x_2 b_2 + (x_3-1) b_3$ for $b_i \in B$. The center $Z$ acts trivially on the $x_i$ and on $y_1$ by the character $e^{-\mu_1}|_Z$, so by replacing each $b_j$ with the appropriate $Z$-isotypic component, we may assume that $Z$ acts on each $b_j$ by $e^{-\mu_1}|_Z$. But any element of $B$ on which $Z$ acts by $e^{-\mu_1}|_Z$ is divisible by $y_1$. Therefore, $b_j = y_1 c_j$ for $c_j \in B$, so $y_1(1 - (x_1-1)c_1 - x_2 c_2 - (x_3-1) c_3) = 0$. Since $B$ is an integral domain, $(x_1-1)c_1 + x_2 c_2 + (x_3-1) c_3 = 1$, but this is impossible, as the elements $x_1-1, x_2, x_3 - 1$ do not generate the unit ideal in $B$ (since they do not generate the unit ideal in the larger ring $A$). We conclude that $y_1 \not\in Q$, as claimed. 
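Note that the number of components found in this example agrees with Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"}: here $[n]\setminus J = \{2,4\}$, so $d_* = \gcd(2,4) = 2$, and the proposition predicts exactly two components of $\pi^{-1}(\mathcal{W}_{ad})$, matching the two points with $y_2 = \pm 1$ described above.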
Given $r \in \mathbb{ Z}_{d_*}$, we denote by $\mathcal{W}_r$ the component $\mathcal{W}_{(c_j)}$ for any tuple $(c_j)$ with $\phi((c_j)) = r$. This component depends on the decomposition $[n-1] = I \sqcup J \sqcup K$, but we omit this from the notation. The group $Z = \widehat{H}/H$ acts on $\mathcal{V}$. Recall that $\underline{\omega}$ denotes the generator of $Z$. **Proposition 14**. *With notation as above, we have $\underline{\omega} \cdot \mathcal{W}_r = \mathcal{W}_{r+1}$.* *Proof.* If $1 \notin J$, then $1 \in J^c$ so $d_* = 1$, the group $\mathbb{ Z}_{d_*}$ is trivial, and the proposition holds. Therefore we assume $1 \in J$. The element $\widehat{h} \in \widehat{H}$ defined by $\widehat{h} = (\omega, 1, 1, \ldots, 1)$ is a lift of $\underline{\omega} \in Z$ to $\widehat{H}$, in the sense that the map [\[e.hmap\]](#e.hmap){reference-type="eqref" reference="e.hmap"} takes $\widehat{h}$ to $\underline{\omega}$. Suppose $\phi((c_j)) = r$. The action of $Z$ on the component $\mathcal{W}_{(c_j)}$ is defined by $$\underline{\omega} \cdot \mathcal{W}_{(c_j)} = \underline{\omega} \cdot \rho(\widehat{\mathcal{W}}_{(c_j)}) = \rho (\widehat{h} \cdot \widehat{\mathcal{W}}_{(c_j)}).$$ We have $\widehat{h} \cdot \widehat{\mathcal{W}}_{(c_j)} = \widehat{\mathcal{W}}_{(d_j)}$ where the tuple $(d_j)_{j\in J}$ is defined by $d_1 = c_1 + 1$ and $d_j = c_j$ for $j \neq 1$. Thus $\underline{\omega}\cdot \mathcal{W}_{(c_j)} = \mathcal{W}_{(d_j)}$ where $$\phi ((d_j)) = \sum_{j\in J} j d_j = 1 + \sum_{j\in J} jc_j \equiv (r +1) \;\mathrm{mod}\,d_*.$$ The result follows. ◻ If $V$ is a vector space with basis $\{\mathcal{W}_r \mid r\in \mathbb{ Z}_{d_*} \}$, and we define a representation of $Z$ on $V$ by the rule $\underline{\omega} \cdot \mathcal{W}_r = \mathcal{W}_{r+1}$, then the matrix of $\underline{\omega}$ with respect to this basis is $$\begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & \cdots &0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots &0 & 0\\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix}.$$ This matrix has eigenvalues $1, \omega^q, \omega^{2q}, \ldots, \omega^{(d_* -1) q}$, where $q = n/d_*$. (Note that $\omega^q$ is an $d_*$-th root of $1$.) Thus we see that $V$ decomposes under $Z$ as a direct sum of the $1$-dimensional representations with characters $1, \chi_q, \chi_{2q}, \ldots, \chi_{(d_* -1) q}$, where $\chi_p$ is the character of $Z$ satisfying $\chi_p(\underline{\omega}) = \omega^p$. We have obtained the following corollary. **Corollary 15**. *The character of the $Z\cong \mathbb{ Z}_n$ representation on the vector space spanned by the components of $\pi^{-1}(\mathcal{W}_{ad})$ is $1+ \chi_q+ \chi_{2q}+ \cdots+ \chi_{(d_* -1) q}$, where $q=n/d_*$. $\Box$* We now introduce some notation which will be useful later in the paper, when we apply the results of this section. By definition, $\widehat{\mathcal{W}} = \widehat{\mathcal{W}}_{(c_j)} \cong \mathbb{C}^{|I|}$ is a subvariety of $\widehat{\mathcal{V}}$ depending on a tuple $(c_j)_{j\in J}$ of integers. There is an $H_J$-equivariant isomorphism $\widehat{\mathcal{W}}_{(c_j)} \simeq \widehat{\mathcal{W}}_{(d_j)}$, which is the identity on the coordinates $z_{\ell}$ ($\ell \not\in J$) and which changes the value of the coordinate $z_j$ from $\omega^{c_j}$ to $\omega^{d_j}$. This induces an isomorphism on the quotients by $H_J$, which we use to identify $\widehat{\mathcal{W}}_{(c_j)} / H_J$ with $\widehat{\mathcal{W}}_{(d_j)} / H_J$. 
We set $\widetilde{\mathbb{C}}^{|I|} := \widehat{\mathcal{W}}_{(c_j)} / H_J$; by the remarks above, $\widetilde{\mathbb{C}}^{|I|}$ is independent of the choice of tuple $(c_j)$. Define $i_r: \widetilde{\mathbb{C}}^{|I|} \to \mathcal{W}_r$ to be the isomorphism of Proposition [Proposition 9](#prop.quotient){reference-type="ref" reference="prop.quotient"}. By abuse of notation, we view $i_r$ as a map $\widetilde{\mathbb{C}}^{|I|} \to \mathcal{V}$ with image $\mathcal{W}_r$. Viewing the action of the generator $\underline{\omega}$ of $Z$ as a map from $\mathcal{V}$ to itself, we restate Proposition [Proposition 14](#p.zaction){reference-type="ref" reference="p.zaction"} in terms of the maps $i_r$. **Proposition 16**. *For all $r\in \mathbb{ Z}_{d_*}$, we have $\underline{\omega} \circ i_r = i_{r+1}$.* *Proof.* Suppose $\phi((c_j)) = r$. Let $\widehat{h}$ and $(d_j)$ be as in the proof of Proposition [Proposition 14](#p.zaction){reference-type="ref" reference="p.zaction"}. The action on $\widehat{h}$ on $\widehat{\mathcal{V}}$ induces an isomorphism $\widehat{\mathcal{W}}_{(c_j)} /H_J \xrightarrow{\; \simeq \;} \widehat{\mathcal{W}}_{(d_j)} /H_J$. We obtain the diagram $$\label{e.commdiag-zaction2} \begin{CD} \widetilde{\mathbb{C}}^{|I|} = \widehat{\mathcal{W}}_{(c_j)} /H_J @>{i_r}>> \mathcal{V}\\ @V\widehat{h}VV @V{\underline{\omega}}VV\\ \widetilde{\mathbb{C}}^{|I|} = \widehat{\mathcal{W}}_{(d_j)}/H_J @>{i_{r+1}}>> \mathcal{V}. \end{CD}$$ In this diagram, we have identified both $\widehat{\mathcal{W}}_{(c_j)} /H_J$ and $\widehat{\mathcal{W}}_{(c_j)} /H_J$ with $\widetilde{\mathbb{C}}^{|I|}$ as above. The proof of Proposition [Proposition 14](#p.zaction){reference-type="ref" reference="p.zaction"} shows that the action of $\underline{\omega}$ on $\mathcal{V}$ is induced by the action of $\widehat{h}$ on $\widehat{\mathcal{V}}$, and that this action takes $\mathcal{W}_r$ to $\mathcal{W}_{r+1}$. Since by definition $i_r$ and $i_{r+1}$ are the isomorphisms of $\widehat{\mathcal{W}}_{(c_j)} /H_J$ and $\widehat{\mathcal{W}}_{(d_j)}/H_J$ with $\mathcal{W}_r$ and $\mathcal{W}_{r+1}$, respectively, the diagram commutes. The proposition follows. ◻ Let $q: \widetilde{\mathbb{C}}^{|I|} \to \mathbb{C}^{|I|}$ be the map making the following diagram commute: $$\label{e.commdiag} \begin{CD} \widetilde{\mathbb{C}}^{|I|} @>{i_r}>> \mathcal{W}_r \\ @VqVV @V{\pi}VV\\ \mathbb{C}^{|I|} @>i>> \mathcal{W}_{\operatorname{ad}} . \end{CD}$$ Note that $q$ is uniquely defined since the horizontal maps are isomorphisms. ## Rescaling Argument Our study of the components of $\pi^{-1}(\mathcal{W}_{ad})$ above utilizes the action of groups $H$ and $\widehat{H}$ on $\rho^{-1}(\mathcal{W}_{ad})$. One can also study the subvarieties $\mathcal{W}_{(c_j)} = \rho(\widehat\mathcal{W}_{(c_j)})$ directly, using coordinates on $\mathcal{V}$ and $\widehat{\mathcal{V}}$. This provides an alternative approach to some of the results in Section [3.3](#ss.componentsZ){reference-type="ref" reference="ss.componentsZ"}. In this subsection we give an example illustrating this concrete, albeit technical, approach. The results of this subsection are not needed for the remainder of the paper. *Example 17*. Let $n=12$ and consider the set partition of $[11]$ defined by $I = \{10\}$, $K=\{4,8\}$, and $J = [11]\setminus \{4,8,10\}$. We have $d_* = \gcd(4,8,10,12)=2$. The results earlier in this section tell us that there are precisely two components of $\pi^{-1}(\mathcal{W}_{ad})$, each indexed by an element of $\mathbb{ Z}_2=\{0,1\}$. 
To see this directly using coordinates, recall that $\mathbb{C}[\mathcal{V}]$ is a quotient of the ring $\mathbb{C}[v_1,v_2, \ldots, v_{11}, x_1, x_2,\cdots x_{11}]$, where $v_i:=e^{\mu_i}$. Here, $\mu_i = \sum c_j \alpha_j$ is the unique weight in the same coset of $\lambda_i$ modulo the root lattice which satisfies $0 \leq c_i < 1$ for each $i$. For example, using [\[e.glk\]](#e.glk){reference-type="eqref" reference="e.glk"}, we see that $$12\lambda_4 = 8\alpha_1 + 16 \alpha_2 + 24 \alpha_3+32 \alpha_4+28\alpha_5+24\alpha_6+20\alpha_7+16\alpha_8+9\alpha_9+6\alpha_{10}+3\alpha_{11}$$ and thus $$\mu_4 = \frac{1}{3}\left(2\alpha_1 + \alpha_2+2\alpha_4+\alpha_5+2\alpha_7+\alpha_8+2\alpha_{10}+\alpha_{11} \right);$$ see [@GPR Ex. 3.8]. Note that we can view each $v_i$ as a function on $\widehat{\mathcal{V}}$ via pullback (cf. [\[e.inclusions\]](#e.inclusions){reference-type="eqref" reference="e.inclusions"}), and we adopt that convention for the rest of this section. Since $n=12$, we have $x_i = e^{\alpha_i} = z_i^{12}$, so the equation above for $\mu_4$ tells us that, as a function on $\widehat{\mathcal{V}}$, $v_4 = z_1^8z_2^4z_4^8z_5^4 z_7^8z_8^4 z_{10}^8 z_{11}^4$. Similar computations show the following. $$\begin{aligned} v_1 &= z_1^{11}z_2^{10}z_3^{9}z_4^8z_5^{7}z_6^{6}z_7^{5}z_8^{4}z_9^{3}z_{10}^{2}z_{11}^{\empty} & v_7 &= z_1^{5}z_2^{10}z_3^{3}z_4^8z_5^{1}z_6^{6}z_7^{11}z_8^{4}z_9^{9}z_{10}^{2}z_{11}^{7} \\ v_2 &= z_1^{10}z_2^{8}z_3^{6}z_4^{4}z_5^{2} z_7^{10}z_8^{8}z_9^{6}z_{10}^{4}z_{11}^{2} & v_8&= z_1^4z_2^8z_4^4z_5^8 z_7^4z_8^8 z_{10}^4 z_{11}^8 \\ v_3 &= z_1^{9}z_2^{6}z_3^{3} z_5^{9} z_6^{6}z_7^{3} z_9^{9}z_{10}^{6}z_{11}^{3} & v_9&= z_1^{3}z_2^{6}z_3^{9} z_5^{3} z_6^{6}z_7^{9} z_9^{3}z_{10}^{6}z_{11}^{9} \\ v_4 &=z_1^8z_2^4z_4^8z_5^4 z_7^8z_8^4 z_{10}^8 z_{11}^4 & v_{10}&= z_1^{2}z_2^{4}z_3^{6}z_4^{8}z_5^{10} z_7^{2}z_8^{4}z_9^{6}z_{10}^{8}z_{11}^{10} \\ v_5 &= z_1^{7}z_2^{2}z_3^{9}z_4^{4}z_5^{11}z_6^{6}z_7^{\empty}z_8^{8}z_9^{3}z_{10}^{10}z_{11}^{5} & v_{11}&= z_1^{\empty}z_2^{2}z_3^{3}z_4^4 z_5^{5}z_6^{6}z_7^{7}z_8^{8}z_9^{9}z_{10}^{10}z_{11}^{11} \\ v_6 &= z_1^{6}z_3^{6}z_5^{6}z_7^{6}z_9^{6}z_{11}^{6}\end{aligned}$$ For each tuple $(c_j)_{j\in J}$, the affine variety $\widehat{\mathcal{W}}_{(c_j)}$ in $\mathcal{V}$ has defining ideal $\mathcal{I}(\widehat{\mathcal{W}}_{(c_j)}) = \left< z_j-\omega^{c_j} \mid j\in J \right>+ \left< z_{4}, z_8 \right>$. Thus $\mathcal{I}(\widehat{\mathcal{W}}_{(c_j)})$ contains $v_1$, $v_2$, $v_4$, $v_5$, $v_7$, $v_8$, $v_{10}$, $v_{11}$, as each of these has $z_4$ and $z_8$ as factors. (In general, $z_k$ with $k\in K$ will appear as a factor of $v_i$ if and only if $i$ is not divisible by $\frac{n}{\gcd(k,n)}$; see [@GPR Lemma 3.10].) In $\mathbb{C}[\widehat{\mathcal{W}}_{(c_j)}]$, we have $z_j = \omega^{c_j}$ for all $j\in J$, so $$v_3 = \omega^{3\Theta}z_{10}^6, \quad v_6 = \omega^{6\Sigma}, \quad v_9 = \omega^{9\Theta}z_{10}^6$$ where $\Theta = 3c_1+2c_2+c_3+3c_5+2c_6+c_7+3c_9+ c_{11}$ and $\Sigma = c_1+c_3+c_5+c_7+c_9+c_{11}$. The equation for $v_6$ implies that the $v_6$-coordinate of any point $p \in \mathcal{W}_{(c_j)}$ is a constant depending only on the parity of $\Sigma$. In particular, $$v_6(p) = \left\{\begin{array}{lc} 1 & \textup{ if $\Sigma$ is even} \\ \omega^6=-1 & \textup{ if $\Sigma$ is odd} \end{array} \right.$$ for all $p\in \mathcal{W}_{(c_j)}$. This shows $\pi^{-1}(\mathcal{W}_{ad})$ has at least two components, each determined by the parity of the sum $\Sigma$ for a given tuple $(c_j)_{j\in J}$. 
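The identities above are also easy to test numerically. The following Python sketch is included only as an illustration and plays no role in the arguments of this paper: the exponent tables are transcribed from the monomial expressions for $v_3$, $v_6$, and $v_9$ displayed above, while the encoding of points of $\widehat{\mathcal{W}}_{(c_j)}$ and the random sampling are ad hoc choices made here. It confirms that, on $\widehat{\mathcal{W}}_{(c_j)}$, the functions $v_3$, $v_6$, $v_9$ reduce to the expressions above and that the value of $v_6$ is determined by the parity of $\Sigma$.

```python
# Illustrative numerical check of the formulas for v_3, v_6, v_9 on W-hat_{(c_j)}
# in Example 17 (n = 12, K = {4, 8}, I = {10}, J = [11] \ {4, 8, 10}).
# The exponent tables are read off from the monomial expressions displayed above.
import cmath
import random

n = 12
omega = cmath.exp(2j * cmath.pi / n)   # primitive 12th root of unity
J = [1, 2, 3, 5, 6, 7, 9, 11]

EXPONENTS = {   # exponent of z_j in v_i; variables that do not occur have exponent 0
    3: {1: 9, 2: 6, 3: 3, 5: 9, 6: 6, 7: 3, 9: 9, 10: 6, 11: 3},
    6: {1: 6, 3: 6, 5: 6, 7: 6, 9: 6, 11: 6},
    9: {1: 3, 2: 6, 3: 9, 5: 3, 6: 6, 7: 9, 9: 3, 10: 6, 11: 9},
}

def v(i, c, z10):
    """Evaluate v_i at the point of W-hat_{(c_j)} with z_j = omega^{c_j} for j in J,
    z_4 = z_8 = 0, and z_10 = z10."""
    z = {j: omega ** c[j] for j in J}
    z[10] = z10
    value = 1
    for j, e in EXPONENTS[i].items():
        value *= z[j] ** e
    return value

random.seed(0)
for _ in range(20):
    c = {j: random.randrange(n) for j in J}                   # a random tuple (c_j)
    z10 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    Theta = 3*c[1] + 2*c[2] + c[3] + 3*c[5] + 2*c[6] + c[7] + 3*c[9] + c[11]
    Sigma = c[1] + c[3] + c[5] + c[7] + c[9] + c[11]
    assert abs(v(3, c, z10) - omega**(3 * Theta) * z10**6) < 1e-9
    assert abs(v(6, c, z10) - omega**(6 * Sigma)) < 1e-9
    assert abs(v(9, c, z10) - omega**(9 * Theta) * z10**6) < 1e-9
    # v_6 depends only on the parity of Sigma:
    assert abs(v(6, c, z10) - (1 if Sigma % 2 == 0 else -1)) < 1e-9
print("all checks passed")
```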
To prove that there are precisely two components, one must show that the $v_6$-coordinate uniquely determines the image of $\widehat{\mathcal{W}}_{(c_j)}$ under $\rho$. Indeed, suppose $(c_j')_{j\in J}$ is another tuple such that $\Sigma'=c_1'+c_3'+c_5'+c_7'+c_9'+c_{11}'$ has the same parity as $\Sigma$. Set $\Theta'=3c_1'+2c_2'+c_3'+3c_5'+2c_6'+c_7'+3c_9'+ c_{11}'$. Observe that the maps $\widehat{\mathcal{W}}_{(c_j)} \to \mathbb{C}$, $p \mapsto z_{10}(p)$, and $\widehat{\mathcal{W}}_{(c_j')} \to \mathbb{C}$, $p \mapsto z_{10}(p')$, are isomorphisms. We obtain an isomorphism $\widehat{\mathcal{W}}_{(c_j)} \to \widehat{\mathcal{W}}_{(c_j')}$, $p \mapsto p'$, characterized by the relation $z_{10}(p) = z_{10}(p')$. Any $p\in \widehat{\mathcal{W}}_{(c_j)}$ with $z_{10}(p)=b_{10}$ satisfies $$v_3(p) = \omega^{3\Theta}b_{10}^6 \quad \textup{ and } \quad v_9(p)=\omega^{9\Theta} b_{10}^6$$ while $p'\in \widehat{\mathcal{W}}_{(c_j')}$ such that $z_{10}(p')=b_{10}$ satisfies $$v_3(p') = \omega^{3\Theta'}b_{10}^6 \quad \textup{ and } \quad v_9(p')=\omega^{9\Theta'} b_{10}^6.$$ Note that $\Sigma$ and $\Sigma'$ have the same parity if and only if the same is true of $\Theta$ and $\Theta'$ so we may assume $\Theta-\Theta'$ is even. To show that $\mathcal{W}_{(c_j)} = \mathcal{W}_{(c_j')}$ consider the "rescaling map" $$\begin{aligned} \label{eqn.rescaling} \tau: \mathbb{C}\to \mathbb{C};\; b_{10} \mapsto \omega^{\frac{1}{2}\left( \Theta-\Theta' \right)}b_{10}.\end{aligned}$$ Using the isomorphism $\mathbb{C}\cong \widehat{\mathcal{W}}_{(c_j')}$, we view the rescaling map as an automorphism of $\widehat{\mathcal{W}}_{(c_j')}$. Under this rescaling, for all $p'\in \widehat{\mathcal{W}}_{(c_j')}$, we obtain $$v_3\circ \tau(p') = \omega^{3\Theta'}\omega^{3\left( \Theta - \Theta' \right)}b_{10}^6 = \omega^{3\Theta}b_{10}^6 = v_3(p).$$ Similarly, $v_9 \circ \tau(p')= v_9(p)$. Hence the images of $\widehat{\mathcal{W}}_{(c_j)}$ and $\widehat{\mathcal{W}}_{(c_j')}$ are equal under $\rho$, as desired.   The program outlined in the example can be carried out more generally to prove the results of Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"} and Corollary [Corollary 15](#cor.Zaction){reference-type="ref" reference="cor.Zaction"} using the functions $z_i$ and $v_i$ on $\widehat{\mathcal{V}}$. The key point is that, when viewed as a function on $\widehat{\mathcal{V}}$ via pull-back, the $v_{{n}/{d_*}}$-coordinate of any point in $\widehat{\mathcal{W}}_{(c_j)}$ will be an element of $\mathbb{ Z}_{d_*} \cong \{ \omega^{n/d_*}, \omega^{2n/d_*}, \ldots, 1 \}$. It is then possible to define a "rescaling map" as in [\[eqn.rescaling\]](#eqn.rescaling){reference-type="eqref" reference="eqn.rescaling"} in order to prove that the image $\rho(\widehat{\mathcal{W}}_{(c_j)})=\mathcal{W}_{(c_j)}$ is uniquely determined by the value of its $v_{n/d_*}$-coordinate, thereby recovering Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"}. Finally, the $Z$-action on the components of $\pi^{-1}(\mathcal{W}_{ad})$ is determined by the $Z$-action on the $v_{n/d_*}$-coordinate and given explicitly by multiplication by $\omega^{n/d_*}$, so we recover the results of Corollary [Corollary 15](#cor.Zaction){reference-type="ref" reference="cor.Zaction"}. # Affine Pavings {#sec.paving} Our goal is to prove that each extended Springer fiber has an affine paving modulo a finite group action. We begin with some combinatorial data. 
## Row-strict tableaux {#ss.row-strict} In the previous section, our analysis relied on disjoint subsets $I$, $J$, and $K$ partitioning the set $[n-1] =\{1,2,\dots, n-1\}$. In this section, we obtain these sets from the row-strict tableaux which parametrize pieces of the affine paving for each Springer fiber. Fix a positive integer $n$ and a partition $\lambda$ of $n$. The Young diagram of shape $\lambda$ is a collection of boxes arranged into rows and columns corresponding to the parts in the partition $\lambda$. We orient our diagrams using the English convention, so the rows are left justified with sizes weakly decreasing from top to bottom. A tableau of shape $\lambda$ and content $[n]$ is obtained by filling each box with one of the numbers $1$ through $n$, with no repetition. The *base filling* of $\lambda$ is the tableau obtained by filling the boxes of $\lambda$ with the numbers $1$ through $n$ by starting at the bottom of the leftmost column and moving up the column by increments of $1$, then moving to the lowest box of the next column to the right, and so on. For example, the base filling of $\lambda = [4\ 3\ 1]$ is the following. $$\ytableausetup{centertableaux} \begin{ytableau}3 & 5 & 7 & 8\\ 2 & 4 & 6\\ 1\end{ytableau}$$ Tableaux such that the entries increase across each row are called *row-strict*. Denote the set of row-strict tableaux of shape $\lambda$ by ${\tt RST}(\lambda)$. For example, the following are three of the row-strict tableaux of shape $[4\ 3\ 1]$. $$\ytableausetup{centertableaux} \begin{ytableau}3 & 4 & 5 & 6\\ 1 & 2 & 7\\ 8\end{ytableau} \qquad \begin{ytableau}1 & 2 & 3 & 4\\ 5 & 6 & 7\\ 8\end{ytableau} \qquad \begin{ytableau}5 & 6 & 7 & 8\\ 1 & 2 & 4\\ 3\end{ytableau}$$ Let $\sigma \in {\tt RST}(\lambda)$. We define subsets $I_\sigma$, $J_\sigma$, and $K_\sigma$ of $[n-1]$ as follows: $$\begin{aligned} I_\sigma &:=& \left\{ i\in [n-1] \mid \begin{array}{c} \textup{$i+1$ is in the column directly to the right of $i$} \\ \textup{and in a higher row, or $i+1$ is in}\\ \textup{a column at least two to the right of $i$} \end{array} \right\} \\ J_\sigma &:=& \{i\in [n-1]\mid i+1 \text{ is in the same row as $i$}\}\\ K_\sigma &:=& [n-1] \setminus (I_\sigma \sqcup J_\sigma).\end{aligned}$$ If $i \in I_\sigma$, write $\ell(i+1)$ for the entry to the left of $i+1$. If $\sigma$ is fixed, we may omit the subscript $\sigma$ and simply denote these sets by $I, J$ and $K$. By definition, the position of $i+1$ in $\sigma$ determines which subset contains $i$. The diagram in Figure [\[fig.subsets\]](#fig.subsets){reference-type="ref" reference="fig.subsets"} illustrates this definition. *Example 18*. Let $n=12$ and $\lambda = [4^2\ 2^2]$. Let $\sigma\in {\tt RST}([4^2\ 2^2])$ be the tableau pictured below. $$\ytableausetup{centertableaux} \begin{ytableau}3 & 4 & 5 & 6\\ 1 & 2 & 9 & 10\\ 7 & 8\\ 11 & 12\end{ytableau}$$ Then $I_\sigma = \{8\}$, $J_\sigma = \{1,3,4,5,7,9,11\}$, and $K_\sigma = \{2,6,10\}$. **Definition 19**. Let $i,j\in [n]$ such that $i>j$. We say that the pair $(i,j)$ is a *Springer inversion* of $\sigma\in {\tt RST}(\lambda)$ if 1. $i$ occurs in a box below $j$ and in the same column or in any column strictly to the left of the column containing $j$ in $\sigma$, and 2. if the box directly to the right of $j$ in $\sigma$ is labeled by $r$, then $i< r$. Let $|\sigma|$ be the number of Springer inversions of $\sigma\in {\tt RST}(\lambda)$. We denote the set of Springer inversions by $\mathop{\mathrm{inv}}(\sigma)$, so $|\sigma| := |\mathop{\mathrm{inv}}(\sigma)|$. 
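The definitions above are straightforward to implement; the following Python sketch is an illustration only (the encoding of a tableau as its list of rows, top row first, is a convention chosen here). It computes $I_\sigma$, $J_\sigma$, $K_\sigma$ and the Springer inversions directly from the definitions and reproduces the sets of Example 18; the inversion count it reports for that tableau reappears in Example 20 below.

```python
# Illustrative computation of I_sigma, J_sigma, K_sigma and the Springer
# inversions of a row-strict tableau, straight from the definitions above.
# A tableau is encoded as its list of rows (English convention, top row first).

def positions(tableau):
    """Map each entry to its (row, column), both 0-indexed."""
    return {e: (r, c) for r, row in enumerate(tableau) for c, e in enumerate(row)}

def IJK(tableau):
    pos = positions(tableau)
    n = sum(len(row) for row in tableau)
    I, J, K = set(), set(), set()
    for i in range(1, n):
        (r0, c0), (r1, c1) = pos[i], pos[i + 1]
        if r1 == r0:                                        # i+1 in the same row as i
            J.add(i)
        elif (c1 == c0 + 1 and r1 < r0) or c1 >= c0 + 2:    # defining condition for I_sigma
            I.add(i)
        else:
            K.add(i)
    return I, J, K

def springer_inversions(tableau):
    pos = positions(tableau)
    n = sum(len(row) for row in tableau)
    inversions = set()
    for j in range(1, n + 1):
        rj, cj = pos[j]
        right = tableau[rj][cj + 1] if cj + 1 < len(tableau[rj]) else None
        for i in range(j + 1, n + 1):
            ri, ci = pos[i]
            condition1 = (ci == cj and ri > rj) or ci < cj   # below in the same column, or strictly left
            condition2 = right is None or i < right          # i < r whenever the box right of j exists
            if condition1 and condition2:
                inversions.add((i, j))
    return inversions

sigma = [[3, 4, 5, 6], [1, 2, 9, 10], [7, 8], [11, 12]]      # the tableau of Example 18
I, J, K = IJK(sigma)
print(sorted(I), sorted(J), sorted(K))    # [8] [1, 3, 4, 5, 7, 9, 11] [2, 6, 10]
print(len(springer_inversions(sigma)))    # 13
```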
Note that if $i\in I_\sigma$, then $(i,\ell)$ is a Springer inversion, where $\ell = \ell(i+1)$ denotes the entry in the box of $\sigma$ directly to the left of the box containing $i+1$. For example, in Example [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"}, $(8,2)$ is a Springer inversion. Let $\widetilde{I}_{\sigma}$ denote the set of inversions of the form $(i, \ell(i+1))$, for $i \in I_{\sigma}$. Since $\widetilde{I}_{\sigma} \subset \mathop{\mathrm{inv}}(\sigma)$, we have $|I_{\sigma}| = |\widetilde{I}_\sigma| \leq |\sigma|$. *Example 20*. If $\sigma$ is the row-strict tableau of shape $[4^2\ 2^2]$ appearing in Example [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"}, we have $I_{\sigma} = \{ 8 \}$ and, as noted above, $(8, 2)\in \mathop{\mathrm{inv}}(\sigma)$. The interested reader can check that $$\begin{aligned} \mathop{\mathrm{inv}}(\sigma) &=& \{ (12, 8), (12,10), (12, 6), (11,8), (11, 10), (11, 6),\\ &&\quad \quad\quad\quad (10, 6), (9,6), (8, 2), (8,6) (7,2), (7,6), (3,2) \}\end{aligned}$$ so $|\sigma|=13$ in this case. *Remark 21*. The Springer inversions form a subset of the inversion set of a particular permutation $w_{\sigma}$. Here, our convention is that the inversion set of a permutation $w$ is the set of pairs $(i,j)$ with $i>j$ such that $w(i)<w(j)$ (that is, in the $1$-line notation for $w^{-1}$, the number $i$ occurs before $j$). Define $w_{\sigma}$ to be the permutation of $[n]$ such that if $p$ occurs in the box of $\sigma$ in which $q$ occurs in the base filling of $\lambda$, then $w_{\sigma}(p) = q$. In other words, order the boxes of $\sigma$ as in the base filling, and list the entries in $\sigma$ in the order in which the boxes occur. This list is the $1$-line notation for $w_{\sigma}^{-1}$. For example, if $\sigma$ is as in Example [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"}, then $w_{\sigma}^{-1} = [11,7,1,3,12,8,2,4,9,5,6,10]$. The pairs $(i,j)$ satisfying condition (1) in Definition [Definition 19](#def.Springer.inv){reference-type="ref" reference="def.Springer.inv"} are the inversions of $w_{\sigma}^{-1}$, so the Springer inversions form a subset of these. In this example, $w_{\sigma}^{-1}$ has $28$ inversions, but there are only $13$ Springer inversions. The permutations $w_{\sigma}$ are related to the paving of a Springer fiber discussed below. Indeed, the pieces of the paving are obtained by intersecting the Springer fiber with the Schubert cells indexed by the $w_{\sigma}$; see [@Ji-Precup2022 Theorem 5.4]. **Definition 22**. Given a tableau $\sigma\in {\tt RST}(\lambda)$, a *block* in $\sigma$ is a set of consecutive boxes in a single row of $\sigma$ labeled by consecutive numbers. If $\sigma$ can be broken up into blocks of size $d\in \mathbb{ N}$ then we say that *$d$ is a divisor of $\sigma$*. For each $\sigma\in {\tt RST}(\lambda)$ we let $d_\sigma$ denote its maximal divisor. The tableau pictured in Example [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"} has maximal divisor equal to $2$. **Lemma 23**. *For each $\sigma\in {\tt RST}(\lambda)$, $$d_\sigma = \gcd(i \mid i \in K_\sigma \sqcup I_\sigma \sqcup \{ n \}).$$* *Proof.* Suppose that $$K_\sigma \sqcup I_\sigma = \{ n_1, n_2, \ldots, n_r \}.$$ By definition of $J_\sigma$, the tableau $\sigma$ is then composed of blocks as follows: the first block is $1, \ldots, n_1$, the second block is $n_1 + 1, \ldots, n_2$, etc. The lengths of the blocks are $n_1, n_2 -n_1, \ldots, n_r - n_{r-1}, n - n_r$. 
The maximal divisor $d_\sigma$ is the greatest common divisor of the lengths of the blocks, which equals $\gcd( n_1, n_2, \ldots, n_r, n )$. ◻ *Example 24*. Let $n=12$ and consider the row-strict tableau $\sigma\in {\tt RST}([6 \ 4 \ 2])$ below. $$\ytableausetup{centertableaux} \begin{ytableau}1 & 2 & 3 & 4 & 11 & 12\\ 5 & 6 & 7 & 8\\ 9 & 10\end{ytableau}$$ We have $I_\sigma = \{ 10 \}$ and $J_\sigma = \{ 1, 2, 3, 5, 6, 7, 9, 11 \}$. Now $I_\sigma \sqcup K_\sigma \sqcup \{12\} = \{ 4, 8, 10, 12 \}$ and thus $d_\sigma =2$ by Lemma [Lemma 23](#lem.maxdiv){reference-type="ref" reference="lem.maxdiv"}, which is also obvious from the picture of $\sigma$ above. ## A paving of the Springer fiber The fact that Springer fibers are paved by affines was originally proved by Spaltenstein in [@Spaltenstein]. Building on work of Tymoczko [@Tymoczko2006], Ji and the second author construct pavings of Springer fibers in [@Ji-Precup2022] suitable for use in this paper. We will use their results, along with the results of Section [3](#sec.affine-subvarieties){reference-type="ref" reference="sec.affine-subvarieties"}, to construct pavings of extended Springer fibers. Recall that ${\tt RST}(\lambda)$ is the set of all row-strict tableaux of shape $\lambda$. As in [@Ji-Precup2022 Definition 3.1], define the element $\mathsf{x}_\lambda$ of $\mathcal{O}_\lambda$ to be $\mathsf{x}_\lambda= \sum E_{\ell r}$, where the sum is over all pairs $(\ell,r)$ such that $r$ labels the box directly to the right of $\ell$ in the base filling of $\lambda$. Recall from Section [2.1](#sec.Springer.def){reference-type="ref" reference="sec.Springer.def"} that $\widetilde{\mathcal{N}}_{\lambda}:= \widetilde{\mathcal{N}}_{\mathsf{x}_\lambda}=\mu^{-1}(\mathsf{x}_\lambda)$ denotes the Springer fiber of $\mathsf{x}_{\lambda}$. The following theorem is a combination of results from [@Ji-Precup2022]. **Theorem 25**. *Let $\lambda$ be a partition of $n$. For each $\sigma\in {\tt RST}(\lambda)$ there exists a morphism $$\begin{aligned} \label{eqn.pavingmap} f_\sigma: \mathbb{C}^{|\sigma|} \to G\end{aligned}$$ such that the composition $\mathbb{C}^{|\sigma|} \to G \to G/B = \mathcal{B}$ is an isomorphism onto its image, denoted here by $\mathcal{C}_\sigma \subseteq \mathcal{B}_{\lambda}$. Moreover, the $\mathcal{C}_\sigma$ with $\sigma\in {\tt RST}(\lambda)$ are cells for an affine paving of $\mathcal{B}_{\lambda}$.* *Remark 26*. The irreducible components of $\widetilde{\mathcal{N}}_{\lambda}$ are the $\overline{\mathcal{C}_\sigma}$ where $\sigma$ is a standard tableau. Thus, if $\sigma$ is standard, then $|\mathop{\mathrm{inv}}(\sigma)| = \sum_i (i-1)\lambda_i = \dim \widetilde{\mathcal{N}}_{\lambda}$. We now adapt the map [\[eqn.pavingmap\]](#eqn.pavingmap){reference-type="eqref" reference="eqn.pavingmap"} to identify $\mathcal{C}_\sigma$ as a subset of $\widetilde{\mathcal{N}}_{\lambda}$ in $G \times^B {\mathfrak u}$. For each $\sigma\in {\tt RST}(\lambda)$, the results of [@Ji-Precup2022] imply that $f_\sigma(x)^{-1} \cdot \mathsf{x}_{\lambda} \in {\mathfrak u}$. This is implicit in the statement of Theorem [Theorem 25](#thm.paving){reference-type="ref" reference="thm.paving"}; it is made explicit in the proof of Proposition [Proposition 27](#prop.toric-paving){reference-type="ref" reference="prop.toric-paving"} below. We define $g_{\sigma}: \mathbb{C}^{|\sigma|} \to {\mathfrak u}$ by $g_{\sigma}(x) = f_\sigma(-x)^{-1} \cdot \mathsf{x}_{\lambda}$. 
By abuse of notation, we use the same notation $\mathcal{C}_\sigma$ as above to denote the image of the map $$F_\sigma: \mathbb{C}^{|\sigma|} \to G \times^B {\mathfrak u},\; x \mapsto [f_\sigma(x), g_{\sigma}(x) ].$$ Theorem [Theorem 25](#thm.paving){reference-type="ref" reference="thm.paving"} tells us that the $\mathcal{C}_\sigma$ are cells for an affine paving of $\widetilde{\mathcal{N}}_{\lambda}$. We will need a result, Proposition [Proposition 27](#prop.toric-paving){reference-type="ref" reference="prop.toric-paving"} below, giving more information about the map $g_{\sigma}$. Before stating that proposition, we introduce some notation. We view the set of all pairs $(i,j) \in \mathop{\mathrm{inv}}(\sigma)$ as the indexing set for a basis of $\mathbb{C}^{|\sigma|}$. Let $\{ x_{(i,j)} \mid (i,j) \in \mathop{\mathrm{inv}}(\sigma) \}$ be coordinates with respect to this basis, and write $x = (x_{(i,j)})_{(i,j) \in \mathop{\mathrm{inv}}(\sigma)}$ for an element of $\mathbb{C}^{|\sigma|}$. Let $I = I_{\sigma}$, $J = J_{\sigma}$, and $K = K_{\sigma}$ denote the subsets of $[n-1]$ defined using $\sigma$ in Section [4.1](#ss.row-strict){reference-type="ref" reference="ss.row-strict"}. We identify $\mathbb{C}^{|I|}$ with the subspace of $\mathbb{C}^{|\sigma|}$ such that such that all coordinates $x_{(i,j)}$ are zero for $(i,j) \notin \widetilde{I}_{\sigma}$. Similarly, $\mathbb{C}^{|\sigma|-|I|}$ is the subspace defined by the vanishing of the complementary set of coordinates. We obtain a decomposition $\mathbb{C}^{|\sigma|} = \mathbb{C}^{|I|} \oplus \mathbb{C}^{|\sigma|-|I|}$ of $\mathbb{C}^{|\sigma|}$ into a direct sum of complementary subspaces. Let $\pi_I$ denote projection onto the first summand. Recall that $p: {\mathfrak u}\to \mathcal{V}_{ad}$ is the projection obtained by identifying $\mathcal{V}_{ad}$ with ${\mathfrak u}/[{\mathfrak u}, {\mathfrak u}]$. Let $\mathcal{W}_{\sigma, ad}$ be the subvariety of $\mathcal{V}_{ad}$ defined as in Section [3.2](#ss.aff-mod-finite){reference-type="ref" reference="ss.aff-mod-finite"} by the equations $x_j = 1$, $x_k = 0$, for $j \in J_{\sigma}$ and $k \in K_{\sigma}$. The next result describes the composition $p \circ g_{\sigma}: \mathbb{C}^{|\sigma|} \to \mathcal{V}_{ad}$, and shows that the image of this map is $\mathcal{W}_{\sigma, ad}$. **Proposition 27**. *Let $\sigma\in {\tt RST}(\lambda)$. We have $p \circ g_{\sigma} = p \circ g_{\sigma} \circ \pi_I$. Moreover, the map $p \circ g_{\sigma} |_{\mathbb{C}^{| I |} } \to \mathcal{V}_{ad}$ is given by the formula $$\label{e.toric-paving} (x_{(i,\ell(i+1))})_{i \in I_{\sigma}} \mapsto \sum_{i \in I_{\sigma}} x_{(i,\ell(i+1))} E_{i,i+1} + \sum_{i \in J_{\sigma}} E_{i,i+1}.$$ Hence $p \circ g_{\sigma} |_{\mathbb{C}^{| I |} }$ is the linear isomorphism [\[eqn.affine.iso\]](#eqn.affine.iso){reference-type="eqref" reference="eqn.affine.iso"} from $\mathbb{C}^{| I |}$ to $\mathcal{W}_{\sigma, ad}$.* *Proof.* Our proof requires a close inspection of the definition from [@Ji-Precup2022] of the map $f_\sigma$ from [\[eqn.pavingmap\]](#eqn.pavingmap){reference-type="eqref" reference="eqn.pavingmap"}. Let $x = (x_{(i,j)})_{(i,j)\in \mathop{\mathrm{inv}}(\sigma)}\in \mathbb{C}^{|\sigma|}$, and let $\{e_1, e_2 \ldots, e_n\}$ denote the standard basis of $\mathbb{C}^n$. By [@Ji-Precup2022 Lemma 4.6, Proposition 5.6] the map $f_\sigma$ satisfies the following. 1. If $r$ does not label a box in the first column of $\sigma$, let $\ell=\ell(r)$ be the label of the box directly to the left of $r$ in $\sigma$. 
Then $$f_\sigma(x)e_\ell = \mathsf{x}_\lambda f_{\sigma}(x) e_r + \sum_{\substack{\ell<t\leq n\\ (t,\ell)\in \mathop{\mathrm{inv}}(\sigma)}} x_{(t,\ell)} f_\sigma(x)e_{t}.$$ 2. If $r$ labels a box in the first column of $\sigma$, then $\mathsf{x}_\lambda f_{\sigma}(x) e_{r} = 0$. These properties imply that $f_\sigma(x)^{-1} \cdot \mathsf{x}_{\lambda} \in {\mathfrak u}$, as noted above. This follows since either $f_\sigma(x)^{-1} \mathsf{x}_\lambda f_{\sigma}(x) e_r = 0$, or $$\label{e.in-u} f_\sigma(x)^{-1} \mathsf{x}_\lambda f_{\sigma}(x) e_r = e_{\ell} - \sum_{\substack{\ell<t\leq n\\ (t,\ell)\in \mathop{\mathrm{inv}}(\sigma)}} x_{(t,\ell)} e_{t},$$ which is a linear combination of the $e_i$ with $i<r$, as $(t,\ell)\in \mathop{\mathrm{inv}}(\sigma)$ implies $t<r$ by definition of the Springer inversions. For each $x = (x_{(i,j)})\in \mathbb{C}^{|\sigma|}$ write $$p(f_{\sigma}(-x)^{-1}\cdot \mathsf{x}_\lambda) = \sum_{i\in [n-1]} a_i(x) E_{i,i+1} \in {\mathfrak u}/[{\mathfrak u},{\mathfrak u}] = \mathcal{V}_{ad}.$$ Here $a_i(x)$ is, by definition, the coefficient of the standard basis vector $e_i$ when we write the vector $$f_{\sigma}(-x)^{-1}\mathsf{x}_\lambda f_\sigma (-x) e_{i+1}$$ as a linear combination with respect to the standard basis. To prove formula [\[e.toric-paving\]](#e.toric-paving){reference-type="eqref" reference="e.toric-paving"}, we must show that $a_i(x) = x_{(i,\ell(i+1))}$ if $i \in I_{\sigma}$, $a_i(x) = 1$ if $i \in J_{\sigma}$, and $a_i(x) = 0$ if $i \in K_{\sigma}$. If $i+1$ labels a box in the first column of $\sigma$, then $i\in K_\sigma$. Applying (2) with $r=i+1$ gives us $$f_{\sigma}(-x)^{-1}\mathsf{x}_\lambda f_\sigma (-x) e_{i+1} =0$$ so $a_i(x)=0$ in this case. If $i+1$ is not in the first column of $\sigma$, let $\ell=\ell(i+1)$ denote the entry directly to its left and apply [\[e.in-u\]](#e.in-u){reference-type="eqref" reference="e.in-u"} with $r=i+1$ to obtain $$f_{\sigma}(-x)^{-1}\mathsf{x}_\lambda f_\sigma (-x) e_{i+1} = e_{\ell} + \sum_{\substack{\ell<t\leq n\\ (t,\ell)\in \mathop{\mathrm{inv}}(\sigma)}} x_{(t,\ell)} e_{t}.$$ We see that $a_i(x) = 1$ if $\ell=i$, which occurs exactly when $i\in J_\sigma$. Otherwise, either $(i,\ell)$ is an inversion of $\sigma$ or it is not. In the former case $i\in I_\sigma$ and $a_i(x) = x_{(i,\ell(i+1))}$, and in the latter, $i\in K_\sigma$ and $a_i(x)=0$. This proves [\[e.toric-paving\]](#e.toric-paving){reference-type="eqref" reference="e.toric-paving"}. The fact that $p \circ g_{\sigma} |_{\mathbb{C}^{| I |} }$ is the linear isomorphism [\[eqn.affine.iso\]](#eqn.affine.iso){reference-type="eqref" reference="eqn.affine.iso"} from $\mathbb{C}^{| I |}$ to $\mathcal{W}_{\sigma, ad}$ follows directly from the definition of $\mathcal{W}_{\sigma, ad}$ and map [\[eqn.affine.iso\]](#eqn.affine.iso){reference-type="eqref" reference="eqn.affine.iso"}. ◻ *Example 28*. Let $n=12$, and $\sigma\in {\tt RST}([6 \ 4\ 2])$ be as in Example [Example 24](#ex.642){reference-type="ref" reference="ex.642"}. 
The computation in the proof of Proposition [Proposition 27](#prop.toric-paving){reference-type="ref" reference="prop.toric-paving"} shows that for each $x=(x_{(i,j)})_{(i,j)\in \mathop{\mathrm{inv}}(\sigma)}$, $$\begin{aligned} p(f_{\sigma}(-x)^{-1}\cdot \mathsf{x}_{[6 \, 4 \, 2]}) &=& E_{1,2}+E_{2,3}+E_{3,4}+ E_{5,6}+E_{6,7}+E_{7,8}+\\ && \quad \quad\quad\quad\quad\quad\quad\quad\quad E_{9,10}+x_{(10, 4)}E_{10,11}+E_{11,12}.\end{aligned}$$ We have $I_\sigma = \{10\}$ in this case, so $\pi_I(x) = \left(x_{(10,4)}\right)$, and $$p\circ g_{\sigma} (x) = (1, 1, 1, 0, 1, 1, 1, 0, 1, x_{(10,4)}, 1) \in \mathcal{W}_{\sigma, ad} \subset \mathcal{V}_{ad}.$$ ## A paving of the extended Springer fiber This section contains the main result of the paper, which gives an orbifold paving of the extended Springer fiber. Recall that to each row-strict tableau $\sigma$ of shape $\lambda$, we associated in Section [4.1](#ss.row-strict){reference-type="ref" reference="ss.row-strict"} a set partition $I_\sigma \sqcup J_\sigma\sqcup K_\sigma$ of $[n-1]$. We write $I=I_\sigma$, $J=J_\sigma$ and $K=K_\sigma$ throughout this section for simplicity. Notice that $d_*=d_\sigma$, where $d_*$ is defined in the paragraph preceding the statement of Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"}. The reader may also wish to reacquaint themselves with the notation and discussion preceding the statement of Proposition [Proposition 16](#p.zaction2){reference-type="ref" reference="p.zaction2"} in Section [3.3](#ss.componentsZ){reference-type="ref" reference="ss.componentsZ"}. Let $\widetilde{ \mathbb{C}}^{|I|}$ be as in Section [3.3](#ss.componentsZ){reference-type="ref" reference="ss.componentsZ"}. We define $\widetilde{\mathbb{C}}^{|\sigma|} = \widetilde{\mathbb{C}}^{|I|} \oplus \mathbb{C}^{|\sigma|-|I|}$. The map $q: \widetilde{ \mathbb{C}}^{|I|} \to \mathbb{C}^{|I|}$ induces a map $\widetilde{\mathbb{C}}^{|\sigma|} \to \mathbb{C}^{|\sigma|}$ which is the identity on the factor $\mathbb{C}^{|\sigma|-|I|}$. By abuse of notation, we denote this map by $q$ as well. Recall that $\mathcal{W}_{\sigma, ad}$ is the subvariety of $\mathcal{V}_{ad}$ defined as in Section [3.2](#ss.aff-mod-finite){reference-type="ref" reference="ss.aff-mod-finite"} by the equations $x_j = 1$, $x_k = 0$, for $j \in J$ and $k \in K$, and let $i_{\sigma}: \mathbb{C}^{|I|} \to \mathcal{W}_{\sigma, \operatorname{ad}}$ be the map defined in [\[eqn.affine.iso\]](#eqn.affine.iso){reference-type="eqref" reference="eqn.affine.iso"} of that section. For each $r\in \mathbb{ Z}_{d_\sigma}$ we have a commutative diagram, $$\label{e.commdiag2} \begin{CD} \widetilde{\mathbb{C}}^{|I|} @>{i_r}>> \mathcal{W}_r \\ @VqVV @V{\pi}VV\\ \mathbb{C}^{|I|} @>{i_\sigma}>> \mathcal{W}_{\sigma, \operatorname{ad}} \end{CD}$$ where $\mathcal{W}_r$ is the component of $\pi^{-1}(\mathcal{W}_{\sigma, ad})$ indexed by $r\in \mathbb{ Z}_{d_\sigma}$ and $i_r: \widetilde{\mathbb{C}}^{|I|} \to \mathcal{W}_r$ is the isomorphism of Proposition [Proposition 9](#prop.quotient){reference-type="ref" reference="prop.quotient"}. Define ${\mathfrak u}_\sigma:= \{\mathsf{y} \in {\mathfrak u}\mid p(\mathsf{y}) \in \mathcal{W}_{\sigma, ad}\}$ and $\widetilde{{\mathfrak u}}_r = \mathcal{W}_r \times_{\mathcal{V}_{\operatorname{ad}}} {\mathfrak u}_\sigma \subset \mathcal{V}\times_{\mathcal{V}_{\operatorname{ad}}} {\mathfrak u}= \widetilde{{\mathfrak u}}$. 
Define $g_{\sigma, r}: \widetilde{\mathbb{C}}^{|\sigma|} \to \widetilde{{\mathfrak u}}$ by the formula $$g_{\sigma, r}(z) = (i_r \circ \pi_I (z), g_{\sigma} \circ q (z) ) \in \mathcal{V}\times_{\mathcal{V}_{\operatorname{ad}}} {\mathfrak u}= \widetilde{{\mathfrak u}}.$$ The map $g_{\sigma, r}$ factors through the inclusion $\widetilde{{\mathfrak u}}_r \hookrightarrow \widetilde{{\mathfrak u}}$ and the map $g_{\sigma}$ factors through ${\mathfrak u}_{\sigma} \hookrightarrow {\mathfrak u}$. We obtain a commutative diagram $$\label{e.commdiag3} \begin{CD} \widetilde{\mathbb{C}}^{|\sigma |} @>{\overline{g}_{\sigma, r}}>> \widetilde{{\mathfrak u}}_r @>>> \widetilde{{\mathfrak u}} \\ @VqVV @V{\eta_r}VV @V{\eta}VV \\ \mathbb{C}^{| \sigma |} @>{\overline{g}_{\sigma}}>> {\mathfrak u}_{\sigma} @>>> {\mathfrak u}, \end{CD}$$ where the horizontal maps in the right hand square are the inclusions, and $\eta_r = \eta|_{\widetilde{{\mathfrak u}}_r}$. The composition of the top maps is $g_{\sigma, r}$, and the composition of the bottom maps is $g_{\sigma}$. Finally, define $F_{\sigma,r}: \widetilde{\mathbb{C}}^{|\sigma |} \to \widetilde{\mathcal{M}} = G \times^B \widetilde{{\mathfrak u}}$ by the formula $$F_{\sigma,r}(z) = [f_{\sigma} \circ q (z), g_{\sigma, r}(z) ]$$ and consider the commutative diagram $$\label{e.commdiag4} \begin{CD} \widetilde{\mathbb{C}}^{|\sigma |} @>{F_{\sigma,r}}>> G \times^B \widetilde{{\mathfrak u}} = \widetilde{\mathcal{M}} \\ @VqVV @V{\widetilde{\eta}}VV\\ \mathbb{C}^{| \sigma |} @>{F_\sigma}>> G \times^B {\mathfrak u}= \widetilde{\mathcal{N}}. \end{CD}$$ Let $\widetilde{\mathcal{C}}_{\sigma,r}$ denote the image of $\widetilde{\mathbb{C}}^{|\sigma |}$ under $F_{\sigma,r}$. We can now state our main result. **Theorem 29**. *Let $\lambda$ be a partition of $n$, $\sigma\in {\tt RST}(\lambda)$ and $r \in \mathbb{ Z}_{d_\sigma}$.* 1. *The map $F_{\sigma,r}$ takes $\widetilde{\mathbb{C}}^{|\sigma|}$ isomorphically onto its image $\widetilde{\mathcal{C}}_{\sigma,r}$, and the preimage of the cell $\mathcal{C}_\sigma$ under the map $\widetilde{\eta}: \widetilde{\mathcal{M}} \to \widetilde{\mathcal{N}}$ is the disjoint union over $r \in \mathbb{ Z}_{d_\sigma}$ of the $\widetilde{\mathcal{C}}_{\sigma,r}$.* 2. *The $\widetilde{\mathcal{C}}_{\sigma,r}$ for $\sigma\in {\tt RST}(\lambda)$ and $r \in \mathbb{ Z}_{d_\sigma}$ are cells of an orbifold paving of the extended Springer fiber $\widetilde{\mathcal{M}}_{\lambda} \subseteq \widetilde{\mathcal{M}}$.* 3. *The action of $Z$ on $\widetilde{\mathcal{M}}_{\lambda}$ satisfies $\underline{\omega} \cdot \widetilde{\mathcal{C}}_{\sigma, i} = \widetilde{\mathcal{C}}_{\sigma, i+1}$.* We prove this result by trivializing the bundles involved so that we can more easily analyze the morphisms. The quotient map $h: G \to G/B$ is a $B$-principal bundle. As noted in the previous section, $h \circ f_{\sigma}: \mathbb{C}^{|\sigma|} \to G/B$ takes $\mathbb{C}^{|\sigma|}$ isomorphically onto its image. We have a Cartesian square $$\label{e.commdiag5} \begin{CD} \mathbb{C}^{| \sigma |} \times B @>>> G \\ @VVV @VhVV\\ \mathbb{C}^{| \sigma |} @>{h \circ f_{\sigma}}>> G/B, \end{CD}$$ where the top map takes $(x,b)$ to $f_{\sigma}(x) b$. This trivializes the pullback of the bundle $G \to G/B$ along the map $h \circ f_{\sigma}$. 
This trivialization induces a trivialization of the bundle $\widetilde{\mathcal{N}} = G \times^B {\mathfrak u}\to G/B$: $$\label{e.commdiag6} \begin{CD} \mathbb{C}^{| \sigma |} \times {\mathfrak u}@>{\psi}>> G \times^B {\mathfrak u}= \widetilde{\mathcal{N}} \\ @VVV @VVV\\ \mathbb{C}^{| \sigma |} @>{h \circ f_{\sigma}}>> G/B, \end{CD}$$ where $\psi(x,\mathsf{y}) = [f_{\sigma}(x), \mathsf{y}]$. Similarly, we obtain a trivialization of the bundle $\widetilde{\mathcal{M}}=G\times^B \widetilde{{\mathfrak u}} \to G/B$: $$\label{e.commdiag7} \begin{CD} \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}} @>{\widetilde{\psi}}>> G \times^B \widetilde{{\mathfrak u}} = \widetilde{\mathcal{M}} \\ @VVV @VVV\\ \mathbb{C}^{| \sigma |} @>{h \circ f_{\sigma}}>> G/B, \end{CD}$$ where $\widetilde{\psi}(x,y) = [f_{\sigma}(x),y]$. We expand the commutative diagram [\[e.commdiag4\]](#e.commdiag4){reference-type="eqref" reference="e.commdiag4"} by factoring the maps $F_{\sigma,r}$ and $F_{\sigma}$: $$\label{e.commdiag8} \begin{CD} \widetilde{\mathbb{C}}^{|\sigma |} @>{q \times \overline{g}_{\sigma,r}}>> \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r @>>> \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}} @>{\widetilde{\psi}}>> G \times^B \widetilde{{\mathfrak u}} = \widetilde{\mathcal{M}} \\ @VqVV @V{\small{\mbox{id}} \times \eta_r}VV @V{\small{\mbox{id}} \times \eta}VV @V{\widetilde{\eta}}VV\\ \mathbb{C}^{| \sigma |} @>{\small{\mbox{id}} \times \overline{g}_{\sigma}}>> \mathbb{C}^{| \sigma |} \times {\mathfrak u}_\sigma @>>> \mathbb{C}^{| \sigma |} \times {\mathfrak u}@>{\psi}>> G \times^B {\mathfrak u}= \widetilde{\mathcal{N}}. \end{CD}$$ Note that the morphism $\mbox{id} \times g_{\sigma}: \mathbb{C}^{| \sigma |} \to \mathbb{C}^{| \sigma |} \times {\mathfrak u}$ is the graph morphism of $g_{\sigma}$. The morphism $q \times g_{\sigma,r}: \widetilde{\mathbb{C}}^{|\sigma |} \to \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}$ is the composition of the graph morphism of $g_{\sigma,r}$ with the projection $q \times \mbox{id}: \widetilde{\mathbb{C}}^{|\sigma |} \times \widetilde{{\mathfrak u}} \to \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}$. Define $\widetilde{\mathcal{C}}'_{\sigma,r}$ and $\mathcal{C}'_{\sigma}$ to be the images of $\widetilde{\mathbb{C}}^{|\sigma |}$ and $\mathbb{C}^{| \sigma |}$ under the finite morphisms $q \times \overline{g}_{\sigma,r}$ and $\small{\mbox{id}} \times \overline{g}_{\sigma}$, respectively. Because the middle horizontal maps are inclusions of closed schemes, we can view $\widetilde{\mathcal{C}}'_{\sigma,r}$ and $\mathcal{C}'_{\sigma}$ (respectively) as closed subschemes of $\mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}$ and $\mathbb{C}^{| \sigma |} \times {\mathfrak u}$. *Proof of Theorem [Theorem 29](#thm.extended-paving){reference-type="ref" reference="thm.extended-paving"}.* (1) Since $\widetilde{\psi}$ is an isomorphism onto its image, to show that $q \times g_{\sigma,r}$ takes $\widetilde{\mathbb{C}}^{|\sigma |}$ isomorphically onto $\widetilde{\mathcal{C}}_{\sigma,r}$, it suffices to prove that $q \times \overline{g}_{\sigma,r}$ takes $\widetilde{\mathbb{C}}^{|\sigma |}$ isomorphically onto $\widetilde{\mathcal{C}}'_{\sigma,r}$. Define a morphism $\nu: \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r \to \widetilde{\mathbb{C}}^{|\sigma |}$ as follows. 
Recalling that $\mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r = (\mathbb{C}^{|I|} \times \mathbb{C}^{|\sigma|-|I|}) \times (\mathcal{W}_r\times_{\mathcal{V}_{ad}} {\mathfrak u}_\sigma)$, let $\zeta=(x_1,x_2,v, \mathsf{y})$ be an element of $\mathbb{C}^{|\sigma|}\times \widetilde{{\mathfrak u}}_r$. Recall also that $\widetilde{\mathbb{C}}^{|\sigma |} = \widetilde{\mathbb{C}}^{|I|} \times \mathbb{C}^{|\sigma| - |I|}$ and define $\nu(\zeta)=(j_r(v), x_2)$, where $j_r: \mathcal{W}_r \to \widetilde{\mathbb{C}}^{|I|}$ is the inverse of the isomorphism $i_r$. We have $$\widetilde{\mathbb{C}}^{|\sigma |} \xrightarrow{\;q \times \overline{g}_{\sigma,r}\;} \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r \stackrel{\nu}{\longrightarrow} \widetilde{\mathbb{C}}^{|\sigma |}.$$ Using the definitions of the maps involved, given $z=(z_1, z_2)\in \widetilde{\mathbb{C}}^{|\sigma |}$ we have $$(\nu \circ (q \times \overline{g}_{\sigma,r}))(z) = \nu (q(z_1), z_2, i_r(z_1), g_\sigma \circ q(z)) = ((j_r\circ i_r)(z_1), z_2)=z.$$ Thus $\nu \circ (q \times \overline{g}_{\sigma,r})$ is the identity, so $q \times \overline{g}_{\sigma,r}$ takes $\widetilde{\mathbb{C}}^{|\sigma |}$ isomorphically onto its image $\widetilde{\mathcal{C}}'_{\sigma,r}$, as desired. To show that $\widetilde{\eta}^{-1}(\mathcal{C}_{\sigma})$ is the disjoint union of the $\widetilde{\mathcal{C}}_{\sigma,r}$, it suffices to show that $(\mbox{id} \times \eta)^{-1}(\mathcal{C}'_{\sigma})$ is the disjoint union of the $\widetilde{\mathcal{C}}'_{\sigma,r}$. The $\mathcal{W}_r$ are disjoint for different values of $r$ (by Lemma [Lemma 11](#lem.disjoint){reference-type="ref" reference="lem.disjoint"} and Proposition [Proposition 12](#p.components){reference-type="ref" reference="p.components"}), hence so are the $\mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r$, and hence the $\widetilde{\mathcal{C}}'_{\sigma,r}$ are as well, since $\widetilde{\mathcal{C}}'_{\sigma,r}$ is contained in $\mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r$. The commutativity of [\[e.commdiag8\]](#e.commdiag8){reference-type="eqref" reference="e.commdiag8"} implies $$\begin{aligned} (\mbox{id}\times \eta)(\widetilde{\mathcal{C}}'_{\sigma, r}) &=& \left((\mbox{id}\times \eta) \circ (q\times g_{\sigma, r})\right)(\widetilde{\mathbb{C}}^{|\sigma|})\\ &=& \left( (\mbox{id}\times g_\sigma)\circ q\right)(\widetilde{\mathbb{C}}^{|\sigma|}) = (\mbox{id}\times g_\sigma)(\mathbb{C}^{|\sigma|}) = \mathcal{C}_\sigma'.\end{aligned}$$ Therefore $\bigcup_r \widetilde{\mathcal{C}}'_{\sigma,r} \subseteq (\mbox{id} \times \eta)^{-1}(\mathcal{C}'_{\sigma})$. We now prove the reverse inclusion. Let $\zeta=(x_1, x_2, v, \mathsf{y}) \in \mathbb{C}^{|\sigma|} \times \widetilde{{\mathfrak u}}$ such that $(\mbox{id}\times \eta)(\zeta)\in \mathcal{C}_\sigma'$. By definition of the map $\eta$ and the scheme $\mathcal{C}_\sigma'$ and by Proposition [Proposition 27](#prop.toric-paving){reference-type="ref" reference="prop.toric-paving"}, our assumptions imply that $\mathsf{y} = g_\sigma (x_1, x_2)$ and $\pi(v) = p \circ g_\sigma(x_1) \in \mathcal{W}_{\sigma, ad}$. In particular, we see that $v\in \mathcal{W}_r$, and therefore $\zeta\in \mathbb{C}^{| \sigma |} \times \widetilde{{\mathfrak u}}_r$ for some $r\in \mathbb{ Z}_{d_\sigma}$. Let $\widetilde{\zeta} = \nu(\zeta) = (j_r(v), x_2) \in \widetilde{\mathbb{C}}^{| I |} \times \mathbb{C}^{|\sigma| - |I|} = \widetilde{\mathbb{C}}^{|\sigma |}$. We claim that $\zeta = (q \times \overline{g}_{\sigma,r})(\widetilde{\zeta})$. 
This suffices, since then $\zeta \in \widetilde{\mathcal{C}}'_{\sigma,r}$, and the reverse inclusion follows. To prove the claim, we begin with the observation that $\pi(v) = i_\sigma (x_1)$ by Proposition [Proposition 27](#prop.toric-paving){reference-type="ref" reference="prop.toric-paving"}. Since the diagram [\[e.commdiag2\]](#e.commdiag2){reference-type="ref" reference="e.commdiag2"} commutes and $i_\sigma$ is an isomorphism, it follows that $$(i_\sigma\circ q )(j_r(v)) = (\pi \circ i_r)(j_r(v)) = i_\sigma(x_1) \Rightarrow q(j_r(v)) = x_1.$$ Thus, $$(q\times \overline{g}_{\sigma, r}) (\widetilde{\zeta}) = (q(j_r(v)), x_2, i_r\circ j_r(v), g_\sigma(q(j_r(v)), x_2)) = (x_1, x_2, v, g_\sigma (x_1, x_2)) = \zeta,$$ as desired. \(2\) The $\mathcal{C}_{\sigma}$ for $\sigma \in {\tt RST}(\lambda)$ form the cells of an affine paving of $\widetilde{\mathcal{N}}_{\lambda}$. This means that $\widetilde{\mathcal{N}}_{\lambda}$ is the disjoint union of the $\mathcal{C}_{\sigma}$, and moreover, that there is a filtration of $\widetilde{\mathcal{N}}_{\lambda}$ by closed subspaces $$\widetilde{\mathcal{N}}_{\lambda,0} \subset \widetilde{\mathcal{N}}_{\lambda,1} \subset \cdots \subset \widetilde{\mathcal{N}}_{\lambda,k} = \widetilde{\mathcal{N}}_{\lambda}$$ such that each $\widetilde{\mathcal{N}}_{\lambda,i} - \widetilde{\mathcal{N}}_{\lambda,i-1}$ is a disjoint union of the cells $\mathcal{C}_{\sigma}$ it contains. The inverse images $\widetilde{\mathcal{M}}_{\lambda,i} = \widetilde{\eta}^{-1}( \widetilde{\mathcal{N}}_{\lambda,i} )$ form a filtration of $\widetilde{\mathcal{M}}_{\lambda}$ by closed subspaces such that each $\widetilde{\mathcal{M}}_{\lambda,i} - \widetilde{\mathcal{M}}_{\lambda,i-1}$ is a disjoint union of the $\widetilde{\eta}^{-1}(\mathcal{C}_{\sigma})$ it contains. By (1), $\widetilde{\eta}^{-1}(\mathcal{C}_{\sigma})$ is a disjoint union of $\widetilde{\mathcal{C}}_{\sigma,r}$. Hence the $\widetilde{\mathcal{C}}_{\sigma,r}$ are cells of an orbifold paving of $\widetilde{\mathcal{M}}_{\lambda} \subseteq \widetilde{\mathcal{M}}$. \(3\) Proposition [Proposition 16](#p.zaction2){reference-type="ref" reference="p.zaction2"} implies that $\underline{\omega} \circ g_{\sigma,r} = g_{\sigma,r+1}$, which in turn implies that $\underline{\omega} \circ F_{\sigma,r} = F_{\sigma,r+1}$. Since $\mathcal{C}_{\sigma,r}$ is the image of $F_{\sigma,r}$, the result follows. ◻ # Poincaré polynomials {#sec.combinatorics} ## Divisible tableaux Suppose that $\lambda = [\lambda_1 \ \cdots \ \lambda_\ell]$ is a partition of $n$ with $\ell$ parts. If $d$ divides $\lambda_i$ for each $1\leq i\leq \ell$, we define a partition of $\frac{n}{d}$ by $\lambda/d := [\lambda_1/d \ \cdots \ \lambda_\ell/d]$. Suppose that $\sigma\in {\tt RST}(\lambda)$ and that $d$ is a divisor of $\sigma$. We form a *quotient tableau* of shape $\lambda/d$, denoted $\sigma/d$, by merging consecutive blocks in the rows of $\sigma$ and labeling the merged blocks in the same relative order in which they appeared in $\sigma$. Thus, the block containing $md$ in $\sigma$ will be labeled with $m$ in $\sigma/d$ for each $1\leq m\leq n/d$. Since $\sigma$ is row-strict, $\sigma/d$ is as well. *Example 30*. The row strict tableau $\sigma$ of shape $\lambda=[4^2 \ 2^2]$ from Example [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"} is displayed below, together with the corresponding quotient tableau $\sigma/2$ of shape $\lambda/2 = [2^2 \ 1^2]$. 
$$\ytableausetup{centertableaux} \begin{ytableau}3 & 4 & 5 & 6\\ 1 & 2 & 9 & 10\\ 7 & 8\\ 11 & 12\end{ytableau} \quad\quad\quad\quad\quad \begin{ytableau} 2 & 3\\ 1 & 5\\ 4 \\ 6 \end{ytableau}$$ Recall that $|\sigma|$ denotes the number of Springer inversions of $\sigma$ (Definition [Definition 19](#def.Springer.inv){reference-type="ref" reference="def.Springer.inv"}). The main result of this section proves a precise relationship between $|\sigma |$ and $|\sigma/d|$. First, we need a relaxation of our Springer inversion definition. **Definition 31**. Let $i,j\in [n]$ such that $i>j$. We say that the pair $(i,j)$ is a *Springer pair* of $\sigma\in {\tt RST}(\lambda)$ if 1. $i$ occurs in the same column as $j$ or in any column strictly to the left of the column containing $j$ in $\sigma$, and 2. if the box directly to the right of $j$ in $\sigma$ is labeled by $r$, then $i< r$. We will denote the set of Springer pairs for $\sigma$ by $\operatorname{pairs}(\sigma)$ and the size of this set by $|\operatorname{pairs}(\sigma)|$. Comparing this with Definition [Definition 19](#def.Springer.inv){reference-type="ref" reference="def.Springer.inv"} (the definition of Springer inversions), we see that for a given $\sigma\in {\tt RST}(\lambda)$, all Springer inversions are Springer pairs, but there can be more Springer pairs than Springer inversions. Indeed, condition (2) of Definitions [Definition 31](#def.Springer.pair){reference-type="ref" reference="def.Springer.pair"} and [Definition 19](#def.Springer.inv){reference-type="ref" reference="def.Springer.inv"} are the same, but condition (1) of Definition [Definition 31](#def.Springer.pair){reference-type="ref" reference="def.Springer.pair"} includes all pairs $i>j$ satisfying (2) such that $j$ appears below $i$ and in the same column while condition (1) of Definition [Definition 19](#def.Springer.inv){reference-type="ref" reference="def.Springer.inv"} excludes these pairs. Note that if $\sigma$ is a standard tableau, then the Springer pairs and Springer inversions coincide. *Example 32*. The pair $(4,2)$ is a Springer pair for the row-strict tableau of shape $[4^2 \ 2^2]$ appearing in Examples [Example 18](#ex.IJK){reference-type="ref" reference="ex.IJK"} and [Example 30](#ex.divtab){reference-type="ref" reference="ex.divtab"}, but not a Springer inversion (see Example [Example 20](#ex.inversions){reference-type="ref" reference="ex.inversions"}). **Proposition 33**. *Let $\sigma \in {\tt RST}(\lambda)$. If $d$ is a divisor of $\sigma$, then $$|\operatorname{pairs}(\sigma)| - |\sigma | = |\operatorname{pairs}(\sigma/d)| - |\sigma/d|.$$* *Proof.* Without loss of generality, we may assume $d>1$. Suppose $\ell>k$. As the entries in each block are consecutive, $(\ell,k)$ can satisfy condition (2) of the Springer pair or Springer inversion definition only if $k$ labels the cell at the end of a block. Furthermore, if $(\ell,k)$ is a Springer pair but not an inversion, then both $\ell$ and $k$ must occur at the end of a block (since $\ell$ and $k$ are in the same column). This shows that every Springer pair of $\sigma$ that is not an inversion is of the form $(\ell, k)=(di, dj)$, where $(i,j)$ is a Springer pair of $\sigma/d$ that is not a Springer inversion. Conversely, suppose that $(i,j) \in \operatorname{pairs}(\sigma/d)$, but $(i,j)$ is not a Springer inversion of $\sigma/d$. Then $i$ is in the same column as $j$ but above $j$. 
This means that in $\sigma$, the boxes containing values $(i-1)d+1,(i-1)d+2,\dots,id$ appear in a row above the boxes containing $(j-1)d+1,(j-1)d+2,\dots,jd$. Let $r$ denote the entry labeling the box to the right of $j$ in $\sigma/d$, if it exists. We have the following configuration of blocks in $\sigma$. $$\ytableausetup{centertableaux, boxsize=0.56in} \begin{ytableau} {\scriptstyle(i-1)d+1} & {\scriptstyle(i-1)d+2} & \cdots & {\scriptstyle id} \\ \none[\vdots] & \none[\vdots] & \none[\vdots] & \none[\vdots] \\ {\scriptstyle(j-1)d+1} & {\scriptstyle(j-1)d+2} & \cdots & {\scriptstyle jd} & {\scriptstyle(r-1)d+1} & {\scriptstyle(r-1)d+2} & \cdots & {\scriptstyle rd} \end{ytableau}$$ As $(i,j)$ is a Springer pair for $\sigma/d$, we have $$i< r \Rightarrow i \leq (r-1) \Rightarrow id \leq (r-1)d \Rightarrow id<(r-1)d+1.$$ We see that $(id,jd)$ is a Springer pair for $\sigma$ that is not a Springer inversion. Note that this is also true in the case where $j$ labels a box at the end of a row, i.e., the case in which $r$ does not exist. The preceding discussion shows that there is a bijection $(i,j) \mapsto (di,dj)$ between Springer pairs of $\sigma/d$ which are not Springer inversions and Springer pairs of $\sigma$ which are not Springer inversions. The result follows. ◻ We rearrange the equality in Proposition [Proposition 33](#pairs.invers){reference-type="ref" reference="pairs.invers"} to obtain **Corollary 34**. *If $\sigma \in {\tt RST}(\lambda)$ and $d$ is a divisor of $\sigma$, then $$|\sigma | = |\operatorname{pairs}(\sigma) | -|\operatorname{pairs}(\sigma/d)| +|\sigma /d|.$$* If $\sigma$ is row strict, we can rearrange the entries in each column of $\sigma$ into increasing order (from top to bottom) to obtain a standard tableau. This is called the *standardization of $\sigma$*, denoted here by ${\tt std}(\sigma)$. We use this operation in the proof of the following result. **Proposition 35**. *For all $\sigma \in {\tt RST}(\lambda)$, $|\operatorname{pairs}(\sigma)| = \dim \widetilde{\mathcal{N}}_{\lambda}$.* *Proof.* We claim that $|\operatorname{pairs}(\sigma)|$ is invariant under the standardization operation, so $|\operatorname{pairs}(\sigma)| = |\operatorname{pairs}({\tt std}(\sigma))|$. This suffices: since ${\tt std}(\sigma)$ is standard, all of its Springer pairs are also Springer inversions. Thus, Remark [Remark 26](#rem:dimension){reference-type="ref" reference="rem:dimension"} implies that $|\operatorname{pairs}({\tt std}(\sigma))|=|{\tt std}(\sigma) | = \dim \widetilde{\mathcal{N}}_{\lambda}$. To prove the claim, we show that for each $i \in [n]$, the number of pairs of the form $(i, j)$ of $\sigma$ depends only on the entries in the column containing $i$, so it is unchanged by standardization. Define $a_m(i)$ to be the number of entries in the $m$-th column of $\sigma$ that are less than $i$. Suppose $i$ is in the $k$-th column. We will prove that the number of pairs of the form $(i,j)$ is $a_k(i)$. To count pairs $(i,j)$ of $\sigma$, by condition (1) of Definition [Definition 31](#def.Springer.pair){reference-type="ref" reference="def.Springer.pair"}, we need not consider any $j<i$ such that $j$ is in a column to the left of $i$. Suppose $\sigma$ has $t$ total columns, and consider the nonnegative integers $a_k(i),\dots, a_t(i)$. Since $\sigma$ is row strict, this sequence must be weakly decreasing, so that $a_k(i)\geq a_{k+1}(i)\geq\cdots\geq a_t(i)$. 
For each $m$ such that $k\leq m <t$, each of the $a_{m+1}(i)$ entries in the $m+1$-st column must be preceded by an entry in the $m$-th column that is also less than $i$. This leaves precisely $a_{m}(i) - a_{m+1}(i)$ entries in the $m$-th column to satisfy condition (2) of Definition [Definition 31](#def.Springer.pair){reference-type="ref" reference="def.Springer.pair"}. For the final column, there is no need to check this condition. Thus $i$ appears in exactly $$\left(\sum_{m=k}^{t-1} a_m(i) - a_{m+1}(i)\right) + a_t(i) = a_k(i)$$ Springer pairs of the form $(i,j)$, proving the claim. ◻ Combining Proposition [Proposition 35](#dim.pairs){reference-type="ref" reference="dim.pairs"} with Corollary [Corollary 34](#cor.pairs){reference-type="ref" reference="cor.pairs"} yields a simple relation between $|\sigma |$ and $|\sigma/d |$. Recall that $\widetilde{\mathcal{N}}_{\lambda/d}$ is a Springer fiber for the group $SL_{n/d}(\mathbb{C})$ over a nilpotent element of Jordan type $\lambda/d$. (see Section [2.1](#sec.Springer.def){reference-type="ref" reference="sec.Springer.def"}) **Corollary 36**. *If $\sigma \in {\tt RST}(\lambda)$ and $d$ is a divisor of $\sigma$, then $$|\sigma | = \dim \widetilde{\mathcal{N}}_{\lambda}-\dim \widetilde{\mathcal{N}}_{\lambda/d} +|\sigma /d|.$$* If $d$ is a common divisor of all the parts of $\lambda$, denoted below by $d|\lambda$, we define $$\label{e.dgld} D_{\lambda, d} = \dim \widetilde{\mathcal{N}}_{\lambda} -\dim \widetilde{\mathcal{N}}_{\lambda/d}.$$ Equivalently, if $\mu$ is a partition of $n/d$ such that $\mu = \lambda/d$ we write $\mu| \lambda$ and say that $\mu$ divides $\lambda$. In that case, we write $$D_{\lambda, \mu} = \dim \widetilde{\mathcal{N}}_{\lambda}-\dim \widetilde{\mathcal{N}}_{\mu}.$$ Since $\dim \widetilde{\mathcal{N}}_{\lambda} = \sum_i (i-1)\lambda_i$, we see that $\dim\widetilde{\mathcal{N}}_{\lambda/d} = \frac{1}{d} \dim \widetilde{\mathcal{N}}_{\lambda}$. Hence $$\label{e.dim.diff} D_{\lambda, d} = \frac{d-1}{d} \sum_i (i-1)\lambda_{i} .$$ *Example 37*. Let $\sigma$ and $\sigma/2$ be the row strict tableaux of shape $\lambda = [4^2 \ 2^2]$ and $\lambda/2 = [2^2 \ 1^2]$, respectively, appearing in Example [Example 30](#ex.divtab){reference-type="ref" reference="ex.divtab"} above. The Springer inversions of $\sigma$ were computed in Example [Example 20](#ex.inversions){reference-type="ref" reference="ex.inversions"} and we also have $$\mathop{\mathrm{inv}}(\sigma/2) = \{ (6,4), (6,5), (6,3), (5,3), (4,1), (4,3) \}.$$ The reader can confirm that the map $(i,j) \mapsto (2i, 2j)$ defines an injection from $\mathop{\mathrm{inv}}(\sigma/2)$ to $\mathop{\mathrm{inv}}(\sigma)$, as established in the proof of Proposition [Proposition 33](#pairs.invers){reference-type="ref" reference="pairs.invers"} above. Thus $|\sigma|-|\sigma/2| = 13-6= 7$. This confirms the results of Corollary [Corollary 36](#cor.shift){reference-type="ref" reference="cor.shift"}, as $\dim \widetilde{\mathcal{N}}_{[4^2 \ 2^2]} - \dim \widetilde{\mathcal{N}}_{[2^2 \ 1^2]} = 14-7 = 7$ in this case. ## Poincaré polynomials of extended Springer fibers Let $\mathbb{F}$ denote a field of characteristic zero or of characteristic prime to $n$. We take homology and cohomology with coefficients in $\mathbb{F}$, and write simply $H_i(\mathcal{Y})$ and $H^i(\mathcal{Y})$ for $H_i(\mathcal{Y};\mathbb{F})$ and $H^i(\mathcal{Y};\mathbb{F})$. 
If $\mathcal{Y}$ is a variety whose odd-dimensional cohomology vanishes, we define its *modified Poincaré polynomial* by $$P(\mathcal{Y};t) := \sum_j \dim H^{2j}(\mathcal{Y}) \ t^j,$$ so $P(\mathcal{Y}; t^2)$ is the usual Poincaré polynomial of $\mathcal{Y}$. By abuse of terminology, we will simply refer to $P(\mathcal{Y};t)$ as the Poincaré polynomial of $\mathcal{Y}$. If $Z \cong \mathbb{ Z}_n$ acts on $\mathcal{Y}$, then it acts on each $H^{2j}(\mathcal{Y})$. For each character $\chi_i$ of $Z$, let $H^{2j}(\mathcal{Y})_{\chi_i}$ denote the $\chi_i$-isotypic component of $H^{2j}(\mathcal{Y})$, and define the *$\chi_i$-isotopic Poincaré polynomial* by the formula $$P_{\chi_i}(\mathcal{Y}; t) := \sum_j \dim H^{2j}(\mathcal{Y})_{\chi_i} \ t^j.$$ It is convenient to combine all the isotypic Poincaré polynomials into the *$Z$-equivariant Poincaré polynomial* defined by the formula $$\widetilde{P}(\mathcal{Y}; t) = \sum P_{\chi_i}(\mathcal{Y};t) \chi_i \in \widehat{Z}\otimes\mathbb{ Z}[t].$$ $\widetilde{P}(\mathcal{Y};t)$ may be thought of as a map from $Z$ to $\mathbb{ Z}[t]$; evaluating at the identity element of $Z$ yields $P(\mathcal{Y};t) = \sum_i P_{\chi_i}(\mathcal{Y};t)$, recovering the Poincaré polynomial as a sum of isotypic Poincaré polynomials. The main result of this section, Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"}, shows that up to a shift, the isotypic Poincaré polynomials $P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda}; t)$ of the extended Springer fibers $\widetilde{\mathcal{M}}_{\lambda}$ equal the Poincaré polynomials of ordinary Springer fibers for smaller rank groups. This yields a description of the cohomology stalks of the corresponding Lusztig sheaves considered by the authors in [@GPR]; see Theorem [Theorem 43](#thm.lusztigsheaf){reference-type="ref" reference="thm.lusztigsheaf"}. Let $V_i$ denote the $1$-dimensional representation of $Z$ with character $\chi_i$. For each divisor $d$ of $n$, let $E_d$ be the $d$-dimensional representation of $Z$ where $Z$ acts by cyclically permuting a basis. Then as representations of $Z$, $$\label{e.decomp} E_d \cong V_0 \oplus V_{n/d} \oplus V_{2n/d} \oplus \cdots \oplus V_{(d-1)n/d}.$$ **Proposition 38**. *For $i \geq 0$, $H^{2i+1}(\widetilde{\mathcal{M}}_{\lambda}) = 0$. As a representation of $Z$, $$H^{2i}(\widetilde{\mathcal{M}}_{\lambda}) = \bigoplus_{\sigma\in {\tt RST}(\lambda), |\sigma|=i} E_{d_{\sigma}}$$ and $$P(\widetilde{\mathcal{M}}_{\lambda};t) = \sum_{\sigma\in {\tt RST}(\lambda)} d_{\sigma} t^{ |\sigma|}.$$* *Proof.* This is an immediate consequence of Theorem [Theorem 29](#thm.extended-paving){reference-type="ref" reference="thm.extended-paving"}. ◻ We say a tableau is *indivisible* if it is not divisible by any $d>1$. 
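Since Proposition 38 expresses $P(\widetilde{\mathcal{M}}_{\lambda};t)$ entirely in terms of the statistics $d_\sigma$ and $|\sigma|$, it can be evaluated for small shapes by brute force. The Python sketch below is illustrative only: the enumeration of ${\tt RST}(\lambda)$ by choosing the set of entries in each row is a device used here, not part of the preceding constructions. It computes $d_\sigma$ via Lemma 23, checks Proposition 35 along the way, records how many tableaux are indivisible, and assembles the polynomial of Proposition 38; for the shape $[2\ 2]$, for instance, it returns $1+4t+3t^2$.

```python
# Illustrative brute-force evaluation of P(M~_lambda; t) = sum_sigma d_sigma t^{|sigma|}
# (Proposition 38) for a small partition lambda, using the definitions above.
from itertools import combinations
from math import gcd

def row_strict_tableaux(shape):
    """All row-strict tableaux of the given shape, as tuples of increasing rows."""
    n = sum(shape)
    def fill(rows, remaining, parts):
        if not parts:
            yield tuple(rows)
            return
        for row in combinations(sorted(remaining), parts[0]):
            yield from fill(rows + [row], remaining - set(row), parts[1:])
    yield from fill([], set(range(1, n + 1)), list(shape))

def positions(tab):
    return {e: (r, c) for r, row in enumerate(tab) for c, e in enumerate(row)}

def pair_count(tab, inversions_only):
    """Number of Springer inversions (or Springer pairs) of tab."""
    pos, n = positions(tab), sum(len(r) for r in tab)
    count = 0
    for j in range(1, n + 1):
        rj, cj = pos[j]
        right = tab[rj][cj + 1] if cj + 1 < len(tab[rj]) else None
        for i in range(j + 1, n + 1):
            ri, ci = pos[i]
            cond1 = ((ci == cj and ri > rj) or ci < cj) if inversions_only else (ci <= cj)
            if cond1 and (right is None or i < right):
                count += 1
    return count

def d_sigma(tab):
    """Maximal divisor of tab, via Lemma 23: gcd over I_sigma, K_sigma and n."""
    pos, n = positions(tab), sum(len(r) for r in tab)
    d = n
    for i in range(1, n):
        if pos[i][0] != pos[i + 1][0]:      # i lies in I_sigma or K_sigma
            d = gcd(d, i)
    return d

shape = (2, 2)                                # any partition of n, written as a tuple
dim_springer = sum(i * part for i, part in enumerate(shape))   # = sum_i (i-1) lambda_i
coefficients, indivisible = {}, 0
for tab in row_strict_tableaux(shape):
    assert pair_count(tab, inversions_only=False) == dim_springer   # Proposition 35
    inv, d = pair_count(tab, inversions_only=True), d_sigma(tab)
    coefficients[inv] = coefficients.get(inv, 0) + d
    indivisible += (d == 1)
print(sorted(coefficients.items()), indivisible)   # [(0, 1), (1, 4), (2, 3)] and 4 for shape (2, 2)
```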
Let ${\tt IRST}(\lambda)$ be the set of indivisible row-strict tableaux of shape $\lambda$, and define $$Q_\lambda(t) := \sum_{\sigma \in {\tt IRST}(\lambda)} t^{|\sigma|}.$$ Using the affine paving of the Springer fiber $\widetilde{\mathcal{N}}_{\lambda}$, we see that $$\begin{aligned} P( \widetilde{\mathcal{N}}_{\lambda};t) &= \sum_{\sigma \in {\tt RST}(\lambda)} t^{|\sigma|} = \sum_{d|\lambda} \, \sum_{\substack{\sigma\in {\tt RST}(\lambda)\\d_\sigma=d}} t^{|\sigma|} \\ &= \sum_{d|\lambda} t^{D_{\lambda,d}} \sum_{\substack{\sigma\in {\tt RST}(\lambda)\\d_\sigma=d}} t^{|\sigma/d|} = \sum_{d | \lambda} t^{D_{\lambda,d}}Q_{\lambda/d}(t) , \end{aligned}$$ where the third equality follows from Corollary [Corollary 36](#cor.shift){reference-type="ref" reference="cor.shift"}, and the last from the fact that $d=d_\sigma$ and the definition of $Q_{\lambda/d}(t)$. We can rewrite this as a sum over partitions dividing $\lambda$: $$\begin{aligned} \label{eqn.poincare.Sp} P(\widetilde{\mathcal{N}}_{\lambda};t) = \sum_{\mu | \lambda} t^{D_{\lambda,\mu}} Q_\mu(t).\end{aligned}$$ We are now ready to prove the main result of this section, which describes the isotypic Poincaré polynomial of a generalized Springer fiber. **Theorem 39**. *Let $i \in \{0, \ldots, n-1 \}$, and set $d = n/\gcd(n,i)$. Then $P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t)=0$ unless $d|\lambda$, in which case $$P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t ) = t^{D_{\lambda,d}} P(\widetilde{\mathcal{N}}_{\lambda/d};t).$$* *Proof.* By Proposition [Proposition 38](#prop.cohomologyZaction){reference-type="ref" reference="prop.cohomologyZaction"} and equation [\[e.decomp\]](#e.decomp){reference-type="eqref" reference="e.decomp"}, $$\widetilde{P}(\widetilde{\mathcal{M}}_{\lambda};t ) = \sum_{a | \lambda} \sum_{\substack{\sigma \in {\tt RST}(\lambda)\\ d_{\sigma} = a}} (\chi_0 + \chi_{n/a} + \cdots + \chi_{(a-1)n/a}) t^{|\sigma|}.$$ In light of Corollary [Corollary 36](#cor.shift){reference-type="ref" reference="cor.shift"}, this can be rewritten as $$\widetilde{P}(\widetilde{\mathcal{M}}_{\lambda};t ) = \sum_{a | \lambda} t^{D_{\lambda,a}} \sum_{\sigma \in {\tt IRST}(\lambda/a)} (\chi_0 + \chi_{n/a} + \cdots + \chi_{(a-1)n/a}) t^{|\sigma|}.$$ The isotypic Poincare polynomial $P_{\chi_i}( \widetilde{\mathcal{M}}_{\lambda};t )$ is the coefficient of $\chi_i$ on the right hand side of the above. We see that $\chi_i$ occurs in the summands corresponding to $a$ such that $\frac{n}{a}$ divides $i$, or equivalently, by Lemma [Lemma 40](#lem.elementary){reference-type="ref" reference="lem.elementary"} below, such that $d=\frac{n}{\gcd(n,i)}$ divides $a$. Thus, $$\begin{aligned} \label{eqn.isotypic.pf} P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t ) = \sum_{\substack{a \textup{ such that}\\ a|\lambda\textup{ and }d|a}} t^{D_{\lambda,a}} \sum_{\sigma \in {\tt IRST}(\lambda/a)} t^{|\sigma|}.\end{aligned}$$ Let $\mu= \lambda/a$. By definition, $D_{\lambda,a} = D_{\lambda, \mu}$; the condition that $d$ divides $a$ is equivalent to $\mu \vert (\lambda/d)$. 
Reindexing the sum in [\[eqn.isotypic.pf\]](#eqn.isotypic.pf){reference-type="eqref" reference="eqn.isotypic.pf"} over all tableaux dividing $\lambda/d$, we obtain $$P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t ) = \sum_{\mu | (\lambda/d)} t^{D_{\lambda, \mu}} \sum_{\sigma \in {\tt IRST}(\mu)} t^{|\sigma|} = \sum_{\mu | (\lambda/d)} t^{D_{\lambda, \mu}} Q_\mu(t) .$$ Since $$D_{\lambda, \mu} = D_{\lambda, (\lambda/d)} + D_{(\lambda/d), \mu} = D_{\lambda,d} + D_{(\lambda/d), \mu},$$ we have $$P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t ) = t^{D_{\lambda, d}} \sum_{\mu | (\lambda/d)} t^{D_{(\lambda/d), \mu}} Q_\mu(t) = t^{D_{\lambda, d}} P( \widetilde{\mathcal{N}}_{\lambda/d};t)$$ where the last equality follows from [\[eqn.poincare.Sp\]](#eqn.poincare.Sp){reference-type="eqref" reference="eqn.poincare.Sp"}. This completes the proof. ◻ In the previous proof, we used the following elementary lemma. Our convention is that $\gcd(n,0) = n$. **Lemma 40**. *Suppose $a$ divides $n$, and let $i$ be a nonnegative integer. Then $\frac{n}{a}$ divides $i$ if and only if $\frac{n}{\gcd(n,i)}$ divides $a$.* *Proof.* If $i = 0$, then the assertions $\frac{n}{a}$ divides $i$ and $\frac{n}{\gcd(n,i)}$ divides $a$ are both true, so we may assume $i>0$. $(\Rightarrow)$ By hypothesis, $i = \frac{cn}{a}$ for some positive integer $c$, so $ai = cn$. Hence $\frac{ai}{\gcd(n,i)} = \frac{cn}{\gcd(n,i)}$. Since $\frac{i}{\gcd(n,i)}$ and $\frac{n}{\gcd(n,i)}$ are relatively prime, we see that $\frac{n}{\gcd(n,i)}$ divides $a$. $(\Leftarrow)$ By hypothesis, $a = \frac{cn}{\gcd(n,i)}$ for some positive integer $c$, so $\gcd(n,i) = \frac{cn}{a}$. Hence $\frac{n}{a}$ divides $\gcd(n,i)$; since $\gcd(n,i)$ divides $i$, the result follows. ◻ Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"} yields another formula for the Poincaré polynomial of the extended Springer fiber in terms of those of smaller rank Springer fibers. **Corollary 41**. *Let $\lambda$ be a partition of $n$. Then $$P(\widetilde{\mathcal{M}}_{\lambda}; t) = \sum_{d|\lambda} \varphi(d) t^{D_{\lambda, d}} P( \widetilde{\mathcal{N}}_{\lambda/d}; t)$$ where $\varphi(d)$ denotes Euler's totient function.* *Proof.* Since the Poincaré polynomial of $\widetilde{\mathcal{M}}_{\lambda}$ is a sum of the isotypic Poincaré polynomials, the formula follows immediately once we establish that $$\varphi(d)= \left|\left\{ i \mid 0\leq i \leq n-1,\, d = \frac{n}{\gcd(n,i) } \right\}\right|.$$ If $d = 1$, then $\varphi(d) = 1 = | \{ 0 \} |$, so above equation holds. So assume $d>1$. The condition $d = \frac{n}{\gcd(n,i)}$ is equivalent to $\gcd(n,i) = \frac{n}{d}$, and if this holds, then $i = \frac{kn}{d}$ for some $1\leq k\leq d$. The condition $\gcd(n,\frac{kn}{d}) = \frac{n}{d}$ holds if and only if $\gcd(d,k) = 1$. By definition of the totient function, there are precisely $\varphi(d)$ possibilities for $k$, and so there are precisely $\varphi(d)$ many $i$ with the desired property. ◻ *Example 42*. Let $n=12$ and set $\lambda=[6^2]$. By Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"}, $P_{\chi_i}(\widetilde{\mathcal{M}}_{\lambda};t)=0$ unless $i\in \{ 0,2,4,6,8,10 \}$, as these are the values of $i$ such that $d=\frac{12}{\gcd(12,i)}$ divides $\lambda$. Let $i=4$, so $d=3$ and $D_{\lambda,3} = 6-2=4$. 
By the theorem, $$P_{\chi_4}(\widetilde{\mathcal{M}}_{[6^2]} ;t) = t^4 P(\widetilde{\mathcal{N}}_{[2^2]}; t).$$ Corollary [Corollary 41](#cor.poincare){reference-type="ref" reference="cor.poincare"} gives us $$\begin{aligned} P(\widetilde{\mathcal{M}}_{[6^2]};t) = P(\widetilde{\mathcal{N}}_{[6^2]};t)+ t^3 P(\widetilde{\mathcal{N}}_{[3^2]} ;t) + 2t^4 P(\widetilde{\mathcal{N}}_{[2^2]};t) + 2t^5 P(\widetilde{\mathcal{N}}_{[1^2]};t).\end{aligned}$$ ## The Poincaré polynomials of Lusztig sheaves In this section, we apply our results to the Lusztig sheaves, which play an important role in Lusztig's generalized Springer correspondence. In particular, we prove that in type $A$, each Lusztig sheaf bears a close resemblance to the Springer sheaf for a smaller rank group (see Corollary [Corollary 44](#cor:smallergroup){reference-type="ref" reference="cor:smallergroup"} for a precise statement). We begin by recalling some results about the Lusztig sheaves. For each central character $\chi \in \widehat{Z}$, there is a *Lusztig sheaf* $\mathbb{ A}_{\chi}$, which is an element in the constructible derived category of $\mathcal{N}$ on which the center $Z$ acts by the character $\chi$. (See [@GPR] or [@Achar-book §8.4-8.5] for a precise definition, and [@Achar-book §6.2, Theorem 8.5.8] for the central character.) The Lusztig sheaf corresponding to the trivial character is the *Springer sheaf* $\mathbb{ A}_{\mathrm{id}} := \mu_*\underline{\mathbb{C}}_{\widetilde{\mathcal{N}}}[\dim \mathcal{N}]$. The main result of [@GPR] connects the Lusztig sheaves to the generalized Springer resolution. The pushforward $\psi_* \underline{\mathbb{C}}_{\widetilde{\mathcal{M}}}[\dim\mathcal{N}]$ is an element in the constructible derived category of $\mathcal{N}$, and by [@GPR Theorem 5.3], $$\begin{aligned} \label{eqn.pushforward} \psi_* \underline{\mathbb{C}}_{\widetilde{\mathcal{M}}}[\dim\mathcal{N}] = \bigoplus_{\chi \in \widehat{Z}} \mathbb{ A}_{\chi}.\end{aligned}$$ Taking stalks in [\[eqn.pushforward\]](#eqn.pushforward){reference-type="eqref" reference="eqn.pushforward"} and using proper base change, we see that the cohomology of the extended Springer fibers and the stalks of the cohomology sheaves of the complex $\psi_* \underline{\mathbb{C}}_{\widetilde{\mathcal{M}}}[\dim\mathcal{N}]$ are related by $$\begin{aligned} \label{eqn.stalks} H^{2j+N}(\widetilde{\mathcal{M}}_{\mathsf{x}_\lambda}) \cong \bigoplus_{\chi \in \widehat{Z}} \mathcal{H}_{\mathsf{x}_\lambda}^{2j} (\mathbb{ A}_{\chi})\end{aligned}$$ where $N=\dim \mathcal{N}$. For the Springer sheaf, we have $$\begin{aligned} \label{eqn.stalks2} H^{2j+N}(\widetilde{\mathcal{N}}_{\mathsf{x}_\lambda}) \cong \mathcal{H}_{\mathsf{x}_\lambda}^{2j} ( \mathbb{ A}_{\mathrm{id}}).\end{aligned}$$ Given a constructible sheaf $\mathcal{E}$ on $\mathcal{N}$ whose stalks have only even-dimensional cohomology, and $\mathsf{x}\in \mathcal{N}$, we define the *Poincaré polynomial of the stalk of $\mathcal{E}$ at $\mathsf{x}$* by $$P_{\mathsf{x}}(\mathcal{E};t):= \sum_{j} \dim \mathcal{H}_{\mathsf{x}}^{2j} (\mathcal{E}) t^j.$$ Considering the $Z$-action on both sides of [\[eqn.stalks\]](#eqn.stalks){reference-type="eqref" reference="eqn.stalks"}, and restricting to the $\chi$-isotypic component, we obtain the following consequence of Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"}. **Theorem 43**. *Let $\chi\in \widehat{Z}$ be the character of $Z$ defined by $\operatorname{diag}(a,a,\ldots, a) \mapsto a^i$, and let $d = n/\gcd(i,n)$. 
Then $P_{\mathsf{x}_\lambda}(\mathbb{ A}_\chi;t)=0$ unless $d$ divides $\lambda$, in which case $$P_{\mathsf{x}_\lambda}(\mathbb{ A}_\chi; t) = t^{N+D_{\lambda,d}} P( \widetilde{\mathcal{N}}_{\mathsf{x}_{\lambda/d}}; t).$$* *Proof.* We have $P_{\mathsf{x}_\lambda}(\mathbb{ A}_\chi; t) = t^N P_{\chi}(\widetilde{\mathcal{M}}_{\lambda};t)$ from the $Z$-equivariance of the decomposition [\[eqn.stalks\]](#eqn.stalks){reference-type="eqref" reference="eqn.stalks"}. The result now follows from Theorem [Theorem 39](#thm.poincare){reference-type="ref" reference="thm.poincare"}. ◻ As a consequence, we obtain the following corollary, relating each Lusztig sheaf in type A to the Springer sheaf corresponding to a smaller rank group. If $d|n$, let $N_d= N - N'$, where $N = \dim \mathcal{N}$, and $N'$ is the dimension of the nilpotent cone for $SL_{n/d}(\mathbb{C})$. **Corollary 44**. *Let $\chi\in \widehat{Z}$ be the character of $Z$ defined by $\operatorname{diag}(a,a,\ldots, a) \mapsto a^i$, and let $d = n/\gcd(i,n)$. Let $\mathbb{ A}_{\mathrm{id}, n/d}$ denote the Springer sheaf of $SL_{n/d}(\mathbb{C})$. For all partitions $\lambda$ such that $d$ divides $\lambda$, $$P_{\mathsf{x}_{\lambda}}(\mathbb{ A}_\chi;t) = t^{N_d + D_{\lambda,d}} P_{\mathsf{x}_{\lambda/d}}(\mathbb{ A}_{\mathrm{id}, n/d};t).$$* *Proof.* We have $$P_{\mathsf{x}_{\lambda}}(\mathbb{ A}_\chi;t) = t^{N+D_{\lambda,d}} P( \widetilde{\mathcal{N}}_{\mathsf{x}_{\lambda/d}}; t) = t^{N-N'+D_{\lambda,d}} P_{\mathsf{x}_{\lambda/d}}(\mathbb{ A}_{\mathrm{id}, n/d};t),$$ where the second equality is from Theorem [Theorem 43](#thm.lusztigsheaf){reference-type="ref" reference="thm.lusztigsheaf"}, and the third from [\[eqn.stalks2\]](#eqn.stalks2){reference-type="eqref" reference="eqn.stalks2"} for $SL_{n/d}(\mathbb{C})$. ◻ *Remark 45*. Given $\chi$, the Poincaré polynomial $P_{\mathsf{x}_{\lambda}}(\mathbb{ A}_\chi;t)$ can be computed using Lusztig's generalized Springer correspondence and the Lusztig--Shoji algorithm. Here is an outline of the computation. The generalized Springer correspondence gives a decomposition $$\mathbb{ A}_\chi = \oplus IC(\mathcal{O}_{\lambda_i}, \mathcal{L}_i) \otimes V_i,$$ where $V_i$ is an irreducible representation of the relative Weyl group, whose dimension $r_i = \dim V_i$ can be computed. We have $$P_{\mathsf{x}_{\lambda}}(\mathbb{ A}_\chi;t) = \sum_i r_i P_{\mathsf{x}_{\lambda}}(IC(\mathcal{O}_{\lambda_i}, \mathcal{L}_i);t).$$ The Lusztig--Shoji algorithm gives a method to compute $P_{\mathsf{x}_{\lambda}}(IC(\mathcal{O}_{\lambda_i}, \mathcal{L}_i);t)$, so the $P_{\mathsf{x}_{\lambda}}(\mathbb{ A}_\chi;t)$ can be computed. Using this method, the equality of Corollary [Corollary 44](#cor:smallergroup){reference-type="ref" reference="cor:smallergroup"} can be observed in examples. It is possible that a proof of this corollary can be obtained by carefully tracing through the Lusztig-Shoji computation. (See [@GM99] and [@Geck11] for information on how to compute examples.) 10 Pramod N. Achar. , volume 258 of *Mathematical Surveys and Monographs*. American Mathematical Society, 2021. Pramod Achar, Anthony Henderson, Daniel Juteau, and Simon Riche. Modular generalized springer correspondence i: the general linear group. , 18:1013--1070, 2017. Hiraku Abe and Tomoo Matsumura. Equivariant cohomology of weighted Grassmannians and weighted Schubert classes. , (9):2499--2524, 2015. W. Fulton. . Princeton University Press, Princeton, New Jersey, 1993. Meinolf Geck. 
Some applications of chevie to the theory of algebraic groups. , 27(1):64--94, 2011. Meinolf Geck and Gunter Malle. . , 8(3):281 -- 290, 1999. William Graham, Martha Precup, and Amber Russell. A new approach to the generalized Springer correspondence. , 376(6):3891--3918, 2023. William Graham. Toric varieties and a generalization of the Springer resolution. In *Facets of algebraic geometry. Vol. I*, volume 472 of *London Math. Soc. Lecture Note Ser.*, pages 333--370. Cambridge Univ. Press, Cambridge, 2022. Jens Carsten Jantzen. , pages 1--211. Birkhäuser Boston, Boston, MA, 2004. Daniel Juteau, Carl Mautner, and Georgie Williamson. Parity sheaves. , 27(4):1169--1212, 2014. Caleb Ji and Martha Precup. Hessenberg varieties associated to ad-nilpotent ideals. , 50(4):1728--1749, 2022. George Lusztig. Character sheaves, V. , 61(2):103--155, 1986. Martha Precup and Julianna Tymoczko. Springer fibers and Schubert points. , 76:10--26, 2019. T. Shoji. Green functions on reductive groups over a finite field. , Part 1(47):289--302, 1987. N. Spaltenstein. . , 16(2):203 -- 204, 1977. N. Spaltenstein. . Springer-Verlag, 1982. T.A. Springer. . , 36(1):173--207, 1976. Julianna S. Tymoczko. Linear conditions imposed on flag varieties. , 128(6):1587--1604, 2006.
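As a closing sanity check, the elementary inputs used above (Lemma 40 and the totient count in the proof of Corollary 41) can be confirmed by brute force, and the same computation reproduces the set $\{0,2,4,6,8,10\}$ of Example 42. A minimal sketch in plain Python; the function names are our own.

```python
# Brute-force check of Lemma 40 (n/a | i  iff  n/gcd(n,i) | a, for a | n),
# of the count phi(d) = #{0 <= i <= n-1 : d = n/gcd(n,i)} from Corollary 41,
# and of the set of i appearing in Example 42 for n = 12, lambda = [6,6].
from math import gcd

def lemma40_holds(n: int) -> bool:
    divisors = [a for a in range(1, n + 1) if n % a == 0]
    for a in divisors:
        for i in range(3 * n):                    # note math.gcd(n, 0) == n, as in the text
            lhs = (i % (n // a) == 0)             # n/a divides i
            rhs = (a % (n // gcd(n, i)) == 0)     # n/gcd(n,i) divides a
            if lhs != rhs:
                return False
    return True

def totient(d: int) -> int:
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

if __name__ == "__main__":
    n = 12
    print(lemma40_holds(n))                                                 # True
    print(all(totient(d) == sum(1 for i in range(n) if n // gcd(n, i) == d)
              for d in range(1, n + 1) if n % d == 0))                      # True
    # Example 42: the i in {0,...,11} for which d = 12/gcd(12,i) divides lambda = [6,6]
    print([i for i in range(n) if 6 % (n // gcd(n, i)) == 0])               # [0, 2, 4, 6, 8, 10]
```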
arxiv_math
{ "id": "2309.13764", "title": "Geometric and combinatorial properties of extended Springer fibers", "authors": "William Graham, Martha Precup, Amber Russell", "categories": "math.AG math.CO math.RT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A star coloring of a graph $G$ is a proper vertex coloring such that no path on four vertices is bicolored. The smallest integer $k$ for which $G$ admits a star coloring with $k$ colors is called the star chromatic number of $G$, denoted as $\chi_s(G)$. In this paper, we study the star coloring of tensor product of two graphs and obtain the following results. 1. We give an upper bound on the star chromatic number of the tensor product of two arbitrary graphs. 2. We determine the exact value of the star chromatic number of tensor product two paths. 3. We show that the star chromatic number of tensor product of two cycles is five, except for $C_3 \times C_3$ and $C_3 \times C_5$. 4. We give tight bounds for the star chromatic number of tensor product of a cycle and a path. author: - Harshit Kumar Choudhary, Swati Kumari and I. Vinod Reddy bibliography: - ref.bib title: Star Coloring of Tensor Product of Two Graphs --- # Introduction {#S:intro} A proper $k$-coloring of a graph $G$ is an assignment of colors to the vertices of $G$ from the set $\{1,2,\ldots,k\}$ such that no two adjacent vertices are assigned the same color. The smallest integer $k$ for which $G$ admits a proper $k$-coloring is called the *chromatic number* of $G$, denoted by $\chi(G)$. A $k$-star coloring of a graph $G$ is a proper $k$-coloring of $G$ such that every path on four vertices uses at least three distinct colors. The smallest integer $k$ such that $G$ has a $k$-star coloring is called *star chromatic number* of $G$, denoted by $\chi_s(G)$. Star coloring of graphs was introduced by Grünbaum in [@grunbaum1973acyclic]. The problem is NP-complete even when restricted to planar bipartite graphs [@albertson2004coloring] and line graphs of subcubic graphs [@lei2018star]. The problem is polynomial time solvable on cographs [@lyons2011acyclic], line graphs of tress [@omoomi2018polynomial], outer planar graphs and 2-dimensional grids [@fertin2004star]. Recently Shalu and Cyriac [@shalu2022complexity] showed that for $k \in \{4,5\}$, the $k$-star coloring is NP-complete for graphs of degree at most four. The *Cartesian product* and *tensor product* of two graphs $G$ and $H$ are denoted by $G\square H$ and $G \times H$ respectively. The vertex set of the above products is $V(G) \times V(H)$ and their edges are determined as follows. Let $(u,v), (u',v') \in V(G) \times V(H)$. Then $(u,v) (u',v')$ belongs to 1. $E(G \square H)$ if either $u=u'$ and $vv'\in E(H)$, or $v=v'$ and $uu'\in E(G)$. 2. $E(G \times H)$ if $uu'\in E(G)$ and $vv'\in E(H)$. Proper coloring has been well studied with respect to various graph products. The chromatic number of the Cartesian product of two graphs $G$ and $H$ is equal to the maximum of chromatic numbers of $G$ and $H$ [@sabidussi1957graphs]. The chromatic number of a lexicographic product of two graphs $G$ and $H$ is equal to the $b$-fold chromatic number of $G$, where $b=\chi(G)$ [@geller1975chromatic]. The chromatic number of tensor product of two graphs $G$ and $H$ is at most the chromatic numbers of graphs $G$ and $H$ [@shitov2019counterexamples]. Star coloring of the Cartesian product of graphs has been studied in several papers [@fertin2004star; @han2016star; @akbari2019star]. Fertin et al. [@fertin2004star] established an upper bound on the star chromatic number of the Cartesian product of two arbitrary graphs. They gave exact values of the star chromatic number for the Cartesian product of two paths. Han et al. 
[@han2016star] studied the star coloring of Cartesian products of paths and cycles and determined the star chromatic number for some of the cases. Extending this work, Akbari et al. [@akbari2019star] studied the star coloring of the Cartesian product of two cycles. They showed that the Cartesian product of any two cycles except $C_3 \square C_3$ and $C_3 \square C_5$ has a $5$-star coloring. Motivated by the results obtained in  [@fertin2004star; @han2016star; @akbari2019star], in this paper we focus on star coloring of the tensor product of graphs. In Section [3](#S:tensor){reference-type="ref" reference="S:tensor"}, we establish an upper bound on star chromatic number of tensor product of two arbitrary graphs. In Section [3.1](#sec-PP){reference-type="ref" reference="sec-PP"} we give exact values of star chromatic number of tensor product of two paths. In Section [3.2](#sec-CC){reference-type="ref" reference="sec-CC"}, we study the star coloring of tensor product of two cycles. We showed that tensor product of two cycles except $C_3 \times C_3$ and $C_3 \times C_5$ has a $5$-star coloring. In Section [3.3](#sec-CP){reference-type="ref" reference="sec-CP"}, we study the star coloring of tensor product of a cycle and path. In some cases, we give the exact value of the star chromatic number and in some cases we give upper bounds for the star chromatic number. # Preliminaries {#S:prelims} In this section, we introduce some basic notation and terminology related to graph theory that we need throughout the paper. All the graphs considered in this paper are undirected, finite and simple (no self-loops and no multiple edges). For a graph $G=(V,E)$, by $V(G)$ and $E(G)$ we denote the vertex set and edge set of $G$ respectively. The set $\{1,2,\ldots,k\}$ is denoted by $[k]$. We use $P_n$ and $C_n$ to denote a path and a cycle on $n$ vertices respectively. We denote the complete bipartite graph using $K_{m,n}$. For any positive integer $n$, $K_{1,n}$ is called a star graph. In the proofs of our results we use the following known results.  [@fertin2004star] For a positive integer $n$, where $n \geq 2$, we have $$\chi_s(P_n) = \begin{cases} 2 &\text{if $n \in \{2,3\}$}; \\ 3 &\text{otherwise.} \\ \end{cases}$$  [@fertin2004star][\[lem-c\]]{#lem-c label="lem-c"} For a positive integer $n$, where $n \geq 3$, we have $$\chi_s(C_n) = \begin{cases} 4 &\text{if $n =5$}; \\ 3 &\text{otherwise.} \\ \end{cases}$$ [\[lem-subgraph\]]{#lem-subgraph label="lem-subgraph"} For any subgraph $H$ of a graph $G$, we have $\chi_s(H) \leq \chi_s(G)$. The following result on star coloring of the Cartesian product of two paths is used in our results. [\[lem-1\]]{#lem-1 label="lem-1"}[@fertin2004star] For every pair of positive integers $m$ and $n$, where $2 \leq m \leq n$, we have $$\chi_s(P_m \square P_n) = \begin{cases} 3, &\text{if $m=n=2$};\\ 4, &\text{if $m\in \{2,3\}$, $n \geq 3$};\\ 5, &\text{if $m\geq 4$, $n\geq 4$.} \end{cases}$$ We denote the graphs shown in the Fig. [4](#fig:zy-graph){reference-type="ref" reference="fig:zy-graph"} as $Z$-graph and $Y$-graph respectively. We found that $\chi_s(Z)=\chi_s(Y)=5$ by performing a tedious case-by-case analysis. This helps to establish the lower bounds in some cases. ![ Star coloring of $Z$-graph (left) and $Y$-graph (right). ](Z-graph.pdf "fig:"){#fig:zy-graph} [\[fig:z-graph\]]{#fig:z-graph label="fig:z-graph"} ![ Star coloring of $Z$-graph (left) and $Y$-graph (right). 
](Y-graph.pdf "fig:"){#fig:zy-graph} [\[fig:y-graph\]]{#fig:y-graph label="fig:y-graph"} A $k$-star coloring of $G \times H$ can be represented by a pattern (matrix) with $n_1$ rows and $n_2$ columns, where $n_1=|V(G)|$ and $n_2=|V(H)|$. For example, a $3$-star coloring of $P_3 \times P_4$ can be represented by a pattern as shown in the Fig. [6](#fig:prelim){reference-type="ref" reference="fig:prelim"}. ![ A 3-star coloring of $P_3 \times P_4$ (left) and coloring pattern representing 3-star coloring of $P_3 \times P_4$ (right). ](prelim1.pdf){#fig:prelim} ![ A 3-star coloring of $P_3 \times P_4$ (left) and coloring pattern representing 3-star coloring of $P_3 \times P_4$ (right). ](prelim2.pdf){#fig:prelim} For every $m, n \geq 3$, $p,q \geq 1$, if $\chi_s(C_m \times C_n) \leq k$ then $\chi_s(C_{pm} \times C_{qn}) \leq k$. Given a $k$-star coloring of $C_m \times C_n$, we can obtain a $k$-star coloring of $C_{pm} \times C_{qn}$ by repeating the coloring pattern $p$ times vertically and $q$ times horizontally. For example, a $5$-star coloring of $C_6 \times C_8$ can be obtained from a $5$-star coloring of $C_3 \times C_4$ by repeating the pattern two times vertically and two times horizontally as shown in Fig. [\[fig-suitable\]](#fig-suitable){reference-type="ref" reference="fig-suitable"}. # Tensor Product of Two Graphs {#S:tensor} In this section, we give an upper bound on the star chromatic number of the tensor product of two arbitrary graphs. Next, we give exact values of the star chromatic number of tensor product of (a) two paths, (b) two cycles, and (c) a cycle and a path. Fertin et al. [@fertin2004star] showed that $\chi_s(G \square H) \leq \chi_s(G) \chi_s(H)$. It is interesting to know an upper bound for the star chromatic number of tensor product of graphs. We observe that $\chi_s(G \times H)$ can be arbitrarily large even if $\chi_s(G)$ and $\chi_s(H)$ are constant. For example, if $G=K_{1,n_1}$ and $H=K_{1,n_2}$, then $\chi_s(G)=\chi_s(H)=2$. Since $G \times H$ contains $K_{{(n_1-1),}{(n_2-1)}}$ as a subgraph, $\chi_s(G \times H) \geq \chi_s(K_{{(n_1-1),}{(n_2-1)}})=\min \{n_1-1,n_2-1\}+1$. In the following theorem we give an upper bound for the star chromatic number of tensor product of two arbitrary graphs. Let $G$ and $H$ be two connected graphs having $n_1$ and $n_2$ vertices respectively. Then we have $\chi_s(G \times H) \leq \min \{n_1 \chi_s(H), n_2 \chi_s(G)\}$. *Proof.* Let $V(G) = \{u_1,u_2,\ldots,u_{n_1}\}$, $V(H) = \{v_1,v_2,\ldots,v_{n_2}\}$ and $V(G\times H) = \{(u_i,v_j) | i\in[n_1],j\in [n_2]\}$. Suppose that $\chi_s(G) = k_1$ and $\chi_s(H) = k_2$ and let $f_G:V(G) \rightarrow [k_1]$ and $f_H:V(H) \rightarrow [k_2]$ are star colorings of $G$ and $H$ respectively. Without loss of generality, assume that $n_1 k_2 < n_2k_1$. Define $g:V(G\times H) \rightarrow [n_1k_2]$ such that $g((u_i,v_j)) = (i,f_H(v_j))$. Clearly $g$ uses $n_1k_2$ colors. Consider any two adjacent vertices $(u_i,-)$ and $(u_j,-)$. We have $g((u_i,-))=(i,-) \neq (j,-) =g((u_j,-))$. Therefore $g$ is a proper coloring of $G \times H$. Consider a path $P$ of length three having the vertices $(u_{i_1}, v_{j_1})$, $(u_{i_2}, v_{j_2})$, $(u_{i_3}, v_{j_3})$ and $(u_{i_4}, v_{j_4})$. If $u_{i_1}=u_{i_3}$ and $u_{i_2}=u_{i_4}$ then $v_{j_1}, v_{j_2}, v_{j_3},v_{j_4}$ forms a $P_4$ in the graph $H$, hence $P$ is colored with at least three distinct colors. 
If either $u_{i_1} \neq u_{i_3}$ or $u_{i_2} \neq u_{i_4}$ then the set $\{u_{i_1}, u_{i_2}, u_{i_3}, u_{i_4}\}$ contains at least three distinct vertices of $G$, hence $P$ is colored with at least three distinct colors. Therefore, $g$ is a star coloring of $G \times H$. 0◻ ◻ ## Tensor product of two paths {#sec-PP} In this subsection, we study the star coloring of the tensor product of two paths. For every pair of integers $m$ and $n$, where $2 \leq m \leq n$, we have $$\chi_s(P_m \times P_n) = \begin{cases} 2 &\text{if $m =2$ and $n \in \{2,3\}$}; \\ 3 &\text{if $m = 2$ and $n \geq 4$}; \\ 3 &\text{if $m = 3$ and $n \geq 3$}; \\ 4 &\text{if $m = 4$ and $n \geq 4$}; \\ 4 &\text{if $m = 5$ and $n \geq 5$}; \\ 4 &\text{if $m = 6$ and $n \in \{6,7\}$}; \\ 5 &\text{if $m = 6$ and $n \geq 8$}; \\ 5 &\text{if $m \geq 7$ and $ n \geq 7$.} \\ \end{cases}$$ *Proof.* **Case 1.** $m=2, n \in \{2,3\}$\ The graphs $P_2 \times P_2$ and $P_2 \times P_3$ are disjoint union of two $P_2$'s and two $P_3$'s respectively. Hence $\chi_s(P_2 \times P_2)= \chi_s(P_2 \times P_3)=2$. **Case 2.** $m=2, n \geq 4$\ The graph $P_2 \times P_n$ is a disjoint union of two $P_n$'s. Hence $\chi_s(P_2\times P_n)=3$, for $n \geq 4$. **Case 3(a).** $m=3, n =3$\ The graph $P_3 \times P_3$ is a disjoint union of two components $C_4$ and $K_{1,4}$. As $\chi_s(C_4)=3$ and $\chi_s(K_{1,4})=2$, we have $\chi_s(P_3\times P_3)=3$. **Case 3(b).** $m=3, n \geq 4$\ The graph $P_3 \times P_n$, for $n \geq 4$ contains two connected components. Both components contain $C_4$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-c\]](#lem-c){reference-type="ref" reference="lem-c"}, we have $\chi_s(P_3\times P_n)\geq \chi_s(C_4) = 3$ for $n \geq 4$. Also, we have shown a $3$-star coloring of $P_3 \times P_n$ in the Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}, thus $\chi_s(P_3\times P_n)\leq 3$. Altogether, we have $\chi_s(P_3\times P_n)=3$, for $n \geq 4$. **Case 4.** $m =4, n \geq 4$,\ For $n \geq 4$, the graph $P_4 \times P_n$ contains $P_2 \square P_3$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(P_4 \times P_n) \geq \chi_s(P_2 \square P_3)=4$. A $4$-star coloring of $P_4 \times P_n$, for $n \geq 4$ is shown in the Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}. Hence $\chi_s(P_4 \times P_n) = 4$. **Case 5.** $m =5, n \geq 5$,\ For $n \geq 5$, the graph $P_5 \times P_n$ contains $P_2 \square P_3$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(P_5 \times P_n) \geq \chi_s(P_2 \square P_3)=4$. A $4$-star coloring of $P_5 \times P_n$, for $n \geq 5$ is shown in the Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}. Hence $\chi_s(P_5 \times P_n) = 4$. **Case 6.** $m =6, n \in \{6,7\}$,\ For $n \in \{6,7\}$, the graph $P_6 \times P_n$ contains $P_2 \square P_3$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(P_6 \times P_n) \geq \chi_s(P_2 \square P_3)=4$. A $4$-star coloring of $P_6 \times P_n$, for $n \in \{6,7\}$ is shown in Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}. 
Hence $\chi_s(P_6 \times P_n) = 4$ for $n \in \{6,7\}$. **Case 7.** $m =6, n \geq 8$,\ For $n \geq 8$, the graph $P_6 \times P_n$ contains $Z$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"}, we have $\chi_s(P_6 \times P_n) \geq \chi_s(Z)=5$. A $5$-star coloring of $P_6 \times P_n$, for $n \geq 8$ is shown in the Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}. Hence $\chi_s(P_6 \times P_n) = 5$ for $n \geq 8$. **Case 8.** $m \geq 7, n\geq 7$\ The graph $P_m\times P_n$ contains $P_4 \square P_4$ as a subgraph, hence from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(P_m\times P_n)\geq \chi_s(P_4\square P_4)= 5$. A $5$-star coloring of $P_m \times P_n$, for $m \geq 7, n \geq 7$ is shown in Fig. [\[PP\]](#PP){reference-type="ref" reference="PP"}. Hence $\chi_s(P_m \times P_n) = 5$ for $m \geq 7, n \geq 7$. 0◻ ◻ ## Tensor product of two cycles {#sec-CC} In this subsection, we study the star coloring of the tensor product of two cycles. In particular, we prove the following theorem. [\[th-CC\]]{#th-CC label="th-CC"} For every pair of positive integers $m$ and $n$, where $3 \leq m \leq n$, we have $$\chi_s(C_m \times C_n)= \begin{cases} 6, & \text{if $m=3$, $n\in \{3,5\}$};\\ 5, & \text{otherwise}. \end{cases}$$ The proof of Theorem [\[th-CC\]](#th-CC){reference-type="ref" reference="th-CC"} follows from the following lemmas. [\[th-atleast5\]]{#th-atleast5 label="th-atleast5"} For every pair of positive integers $m ,n \geq 3$, we have $\chi_s(C_m \times C_n) \geq 5$. *Proof.* The graph $C_m \times C_n$ contains $P_4 \square P_4$ as a subgraph when $m, n \geq 7$. Therefore, from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(C_m \times C_n) \geq \chi_s(P_4 \square P_4)=5$ for $m ,n \geq 7$. Consider the case when the minimum of $m$ and $n$ is at most $6$. Suppose $\chi_s(C_m \times C_n) \leq 4$ and let $f$ be a $4$-star coloring of $C_m \times C_n$, then by selecting suitable copies of coloring of $C_m \times C_n$, we get a $4$-star coloring of $C_{3m} \times C_{3n}$ which is contradiction as $C_{3m} \times C_{3n}$ contains $P_4 \square P_4$ as a subgraph, thus $\chi_s(C_{3m} \times C_{3n}) \geq 5$. 0◻ ◻  [\[thm-sylvester\]]{#thm-sylvester label="thm-sylvester"}[@sylvester1882subvariants] Let $m$ and $n$ be two positive integers which are relatively prime. Then for every integer $k\geq(n-1)(m-1)$, there exist non-negative integers $\alpha$ and $\beta$ such that $k = \alpha n + \beta m$. $\chi_s(C_3 \times C_3)=6$ and $\chi_s(C_3 \times C_5)=6$. *Proof.* By performing a tedious case-by-case analysis we found that $\chi_s(C_3 \times C_3)=6$ and $\chi_s(C_3 \times C_5)=6$. Formal proof is omitted as it requires an extensive case analysis and its contribution to the theory would be minimal. 0◻ ◻ [\[th-c3cn\]]{#th-c3cn label="th-c3cn"} For every positive integer $n \geq 4$ and $n \neq 5$, we have $\chi_s(C_3 \times C_n)=5$. *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $18$ can be expressed as an integer linear combination of $4$ and $7$. As first three columns in colorings of $C_3 \times C_4$ and $C_3 \times C_7$ are identical (see Fig. 
[\[fig-C3xCn\]](#fig-C3xCn){reference-type="ref" reference="fig-C3xCn"}), by selecting suitable copies of colorings of $C_3 \times C_4$ and $C_3 \times C_7$, we can obtain a $5$-star coloring of $C_3 \times C_n$ for $n \geq 18$. Observe that every integer $n$, $n \in \{4,6,7 \ldots, 17\}\setminus \{6,9,10,13,17\}$ is an integer linear combination of $4$ and $7$. Therefore, $5$-star coloring of $C_3 \times C_n$ can be obtained by selecting suitable copies of colorings of $C_3 \times C_4$ and $C_3 \times C_7$. $5$-star colorings of $C_3 \times C_{6}$, $C_3 \times C_{9}$ and $C_3 \times C_{10}$ are shown in the Fig. [\[fig-C3xCn\]](#fig-C3xCn){reference-type="ref" reference="fig-C3xCn"}. Since the colors of the first three columns of $5$-star coloring of $C_3 \times C_4$ and $C_3 \times C_9$ are identical, we can obtain $5$-star colorings of $C_3 \times C_{13}$ and $C_3 \times C_{17}$ by selecting suitable copies of colorings of $C_3 \times C_4$ and $C_3 \times C_9$. Thus, by considering Fig. [\[fig-C3xCn\]](#fig-C3xCn){reference-type="ref" reference="fig-C3xCn"} and using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 0◻ ◻ [\[th-c4cn\]]{#th-c4cn label="th-c4cn"} For every positive integer $n \geq 4$, we have $\chi_s(C_4 \times C_n)=5$. *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. As first three columns of $C_4 \times C_4$ and $C_4 \times C_5$ are identical (see Fig. [\[C4\]](#C4){reference-type="ref" reference="C4"}), by selecting suitable copies of colorings of $C_4 \times C_4$ and $C_4 \times C_5$, we can obtain a $5$-star coloring of $C_4 \times C_n$ for $n \geq 12$. As every integer $n \in \{4,5,8,9,10\}$ can be expressed as an integer linear combination of $4$ and $5$, we get a $5$-star coloring of $C_4 \times C_n$ for $n \in \{4,5,8,9,10\}$. $5$-star colorings of $C_4 \times C_6$, $C_4 \times C_7$ and $C_4 \times C_{11}$ are given in the Fig. [\[C4\]](#C4){reference-type="ref" reference="C4"}. Thus, by considering Fig. [\[C4\]](#C4){reference-type="ref" reference="C4"} and using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 0◻ ◻ [\[th-c5cn\]]{#th-c5cn label="th-c5cn"} For every positive integer $n \geq 5$, we have $\chi_s(C_5 \times C_n)=5$. *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. As first three columns of $C_5 \times C_4$ and $C_5 \times C_5$ are identical (see Fig. [\[C5\]](#C5){reference-type="ref" reference="C5"}), by selecting suitable copies of colorings of $C_5 \times C_4$ and $C_5 \times C_5$, we can obtain a $5$-star coloring of $C_5 \times C_n$ for $n \geq 12$. As every integer $n \in \{5,8,9,10\}$ can be expressed as an integer linear combination of $4$ and $5$, we get a $5$-star coloring of $C_5 \times C_n$ for $n \in \{5,8,9,10\}$. $5$-star colorings of $C_5 \times C_6$, $C_5 \times C_7$ and $C_5 \times C_{11}$ are given in the Fig. [\[C5\]](#C5){reference-type="ref" reference="C5"}. Thus, by considering Fig. [\[C5\]](#C5){reference-type="ref" reference="C5"} and using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 
0◻ ◻ [\[C7Cn\]]{#C7Cn label="C7Cn"} For every positive integer $n \geq 7$, we have $\chi_s(C_7 \times C_n)=5$. *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. As first three columns of $C_7 \times C_4$ and $C_7 \times C_5$ are identical (see Fig. [\[C7\]](#C7){reference-type="ref" reference="C7"}), by selecting suitable copies of colorings of $C_7 \times C_4$ and $C_7 \times C_5$, we can obtain a $5$-star coloring of $C_7 \times C_n$ for $n \geq 12$. As every integer $n \in \{8,9,10\}$ can be expressed as an integer linear combination of $4$ and $5$, we get a $5$-star coloring of $C_7 \times C_n$ for $n \in \{8,9,10\}$. $5$-star colorings of $C_7 \times C_7$ and $C_7 \times C_{11}$ are given in the Fig. [\[C7\]](#C7){reference-type="ref" reference="C7"}. Thus, by considering Fig. [\[C7\]](#C7){reference-type="ref" reference="C7"} and using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 0◻ ◻ [\[th-C11\]]{#th-C11 label="th-C11"} For every positive integer $n \geq 11$, we have $\chi_s(C_{11} \times C_n)=5$. *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. As first three rows of $C_4 \times C_{11}$ (see Fig. [\[C4\]](#C4){reference-type="ref" reference="C4"}) and $C_5 \times C_{11}$ (see Fig. [\[C5\]](#C5){reference-type="ref" reference="C5"}) are identical, by selecting suitable copies of colorings of $C_4 \times C_{11}$ and $C_5 \times C_{11}$, we can obtain a $5$-star coloring of $C_{11} \times C_n$ for $n \geq 12$. For $n=11$ we have given a $5$-star coloring of $C_{11} \times C_{11}$ in the Fig. [\[C11\]](#C11){reference-type="ref" reference="C11"}. Now, by using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 0◻ ◻ [\[th-C6C8C9C10\]]{#th-C6C8C9C10 label="th-C6C8C9C10"} For every positive integer $n \geq m$, where $m \in \{6,8,9,10\}$ we have $\chi_s(C_{m} \times C_n)=5$. *Proof.* For every natural number $n \geq m$, where $m \in \{6,9\}$, we get $5$-star colorings of $C_6 \times C_n$ and $C_9 \times C_n$ from the $5$-star coloring of $C_3 \times C_n$. We get $5$-star colorings of $C_8 \times C_n$ and $C_{10} \times C_n$ from the 5-star colorings of $C_4 \times C_n$ and $C_5 \times C_n$ respectively. Now, by using Lemma [\[th-atleast5\]](#th-atleast5){reference-type="ref" reference="th-atleast5"}, the proof is complete. 0◻ ◻ For every pair of positive integers $m$ and $n$, where $12 \leq m \leq n$, we have $\chi_s(C_m \times C_n)=5$ *Proof.* By Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. We have given $5$-star colorings of $C_4 \times C_4$, $C_4 \times C_5$ in Fig. [\[C4\]](#C4){reference-type="ref" reference="C4"} and $C_5 \times C_4$ and $C_5 \times C_5$ in Fig. [\[C5\]](#C5){reference-type="ref" reference="C5"} such that - The colors of the first three columns of $C_4 \times C_4$ and $C_4 \times C_5$ are same. - The colors of the first three columns of $C_5 \times C_4$ and $C_5 \times C_5$ are same. 
- The colors of the first two rows and the last row of $C_4 \times C_5$ and $C_5 \times C_5$ are the same. - The colors of the first two rows and the last row of $C_5 \times C_4$ and $C_4 \times C_4$ are the same. By selecting suitable copies of the colorings of $C_4 \times C_4$, $C_4 \times C_5$, $C_5 \times C_4$ and $C_5 \times C_5$, we can obtain a $5$-star coloring of $C_m \times C_n$ for $12 \leq m \leq n$. For example, a $5$-star coloring of $C_{14} \times C_{19}$ can be obtained from the colorings of $C_4 \times C_4$, $C_4 \times C_5$, $C_5 \times C_4$ and $C_5 \times C_5$ as shown in Fig. [7](#fig:block){reference-type="ref" reference="fig:block"}. 0◻ ◻ ![ A $5$-star coloring of $C_{14} \times C_{19}$ can be obtained from the colorings of $C_4 \times C_4$, $C_4 \times C_5$, $C_5 \times C_4$ and $C_5 \times C_5$.](Star_block.pdf){#fig:block} ## Tensor product of a cycle and a path {#sec-CP} In this subsection, we study the star coloring of the tensor product of a cycle and a path. In particular, we prove the following theorem. [\[th-CP\]]{#th-CP label="th-CP"} For every pair of integers $m \geq 3$ and $n \geq 2$, we have $$\chi_s(C_m \times P_n) = \begin{cases} 3, & \text{if $m \geq 3$, $n\in \{2,3\}$};\\ 4, & \text{if $m=3k$, $k\in \mathbb{N}$, $n \in \{4,5\}$};\\ \leq 5, & \text{if $m\neq 3k$, $k\in \mathbb{N}$, $n \in \{4,5\}$};\\ 5, & \text{otherwise.} \end{cases}$$ The proof of Theorem [\[th-CP\]](#th-CP){reference-type="ref" reference="th-CP"} follows from the following lemmas. For every integer $m$, where $m \geq 3$, we have $\chi_s(C_m \times P_2)=3$. *Proof.* If $m$ is even, the graph $C_m\times P_2$ is a disjoint union of two $C_m$'s and if $m$ is odd, the graph $C_m\times P_2$ is isomorphic to $C_{2m}$. Thus, in both the cases, we have $\chi_s(C_m\times P_2)=3$. 0◻ ◻ [\[lem-21\]]{#lem-21 label="lem-21"} For every integer $m$, where $m \geq 3$, we have $\chi_s(C_m \times P_3)=3$. *Proof.* From Lemma [\[thm-sylvester\]](#thm-sylvester){reference-type="ref" reference="thm-sylvester"}, every positive integer greater than or equal to $12$ can be expressed as an integer linear combination of $4$ and $5$. As first three rows of $C_4 \times P_3$ and $C_5 \times P_3$ are identical (see Fig. [\[P3\]](#P3){reference-type="ref" reference="P3"}), by selecting suitable copies of colorings of $C_4 \times P_3$ and $C_5 \times P_3$, we can obtain a $3$-star coloring of $C_m \times P_3$ for $m \geq 12$. As every integer $m \in \{6,8,9,10\}$ can be expressed as an integer linear combination of $3$, $4$ and $5$, we get a $3$-star coloring of $C_m \times P_3$ for $m \in \{6,8,9,10\}$. $3$-star colorings of $C_3 \times P_3$, $C_7 \times P_3$ and $C_{11} \times P_3$ are given in the Fig. [\[P3\]](#P3){reference-type="ref" reference="P3"}. Therefore we have $\chi_s(C_m \times P_3)\leq 3$. As the graph $C_m\times P_3$ contains $C_4$ as a subgraph, therefore from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-c\]](#lem-c){reference-type="ref" reference="lem-c"}, we have $\chi_s(C_m\times P_3)\geq \chi_s(C_4) = 3$. Altogether we have $\chi_s(C_m \times P_3)=3$. 0◻ ◻ [\[C3kP45\]]{#C3kP45 label="C3kP45"} For every pair of positive integers $m$ and $n$, where $m\geq 3$, $m=3k$ for some $k \in \mathbb{N}$ and $n \in \{4,5\}$, we have $\chi_s(C_m \times P_n)=4$. *Proof.* The proof is divided into two cases. **Case 1.** When $m = 3$.\ Consider the graph $C_3 \times P_4$.
Let $V(C_3) = \{u_1,u_2,u_3\}$, $V(P_4) = \{v_1,v_2,v_3,v_4\}$ and $V(C_3\times P_4) = \{(u_i,v_j) | i\in[3],j\in [4]\}$. As $C_3 \times P_3$ is a subgraph of $C_3 \times P_4$, from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-21\]](#lem-21){reference-type="ref" reference="lem-21"}, we have $\chi_s(C_3\times P_4) \geq \chi_s(C_3 \times P_3) = 3$. We have observed that the graph $C_3 \times P_3$ has a unique (up to permutation of colors) $3$-star coloring, which is given in Fig. [\[C3P3\]](#C3P3){reference-type="ref" reference="C3P3"}. Suppose $\chi_s(C_3\times P_4) = 3$ and let $f$ be a 3-star coloring of $C_3 \times P_4$ with colors $a,b,c$. Then from the above observation, $f$ restricted to the vertices of subgraph $C_3 \times P_3$, gives a coloring as shown in the Fig. [\[C3P3\]](#C3P3){reference-type="ref" reference="C3P3"}. Now consider the vertex $(u_1,v_4)$ of $C_3\times P_4$. Clearly, $f((u_1,v_4)) \notin \{b,c\}$, else $f$ is not proper coloring. Also $f((u_1,v_4)) \neq a$, else we get a bicolored path of length three. Therefore, $f((u_1,v_4)) \notin \{a,b,c\}$, which is a contraction to our assumption that $f$ is a 3-star coloring of $C_3 \times P_4$. Thus $\chi_s(C_3\times P_4) \geq 4$. Since the graph $C_3\times P_4$ is a subgraph of $C_3 \times P_5$, we have $\chi_s(C_3\times P_5) \geq 4$. $4$-star colorings of $C_3\times P_4$ and $C_3\times P_5$ are given in Fig. [\[C3P45\]](#C3P45){reference-type="ref" reference="C3P45"}. Therefore, $\chi_s(C_3\times P_n) = 4$, $n\in \{4,5\}$. **Case 2.** When $m >3$.\ For $m >3$ and $n\in \{4,5\}$, the graph $C_m\times P_n$ contains $P_2 \square P_3$ as a subgraph, thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(C_m\times P_n) \geq \chi_s(P_2 \square P_3) = 4$. For $n \in \{4,5\}$, $4$-star coloring of $C_{3k} \times P_n$ can be obtained by using suitable copies of colorings of $C_3 \times P_n$, $n\in \{4,5\}$. Therefore, $\chi_s(C_m\times P_n)=4$, for $n \in \{4,5\}$. 0◻ ◻ [\[lem-33\]]{#lem-33 label="lem-33"} For every pair of positive integers $m$ and $n$, where $m\neq 3k$ for some $k \in \mathbb{N}$ and $n \in \{4,5\}$, we have $4 \leq \chi_s(C_m \times P_n) \leq 5$. *Proof.* For $m >3$ and $n \in \{4,5\}$, the graph $P_2 \square P_3$ is a subgraph of $C_m \times P_n$. Thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"} we have $\chi_s(C_m \times P_n) \geq 4$. As the graph $C_m \times P_n$ is a subgraph of $C_m \times C_n$, thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and Theorem [\[th-CC\]](#th-CC){reference-type="ref" reference="th-CC"}, we have $\chi_s(C_m \times P_n) \leq 5$. 0◻ ◻ For every positive integer $n \geq 6$, we have $\chi_s(C_{3} \times P_n)=5$. *Proof.* For $n\geq 6$, the graph $C_3 \times P_n$ has $C_3 \times P_5$ as a subgraph. Thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[C3kP45\]](#C3kP45){reference-type="ref" reference="C3kP45"}, we have $\chi_s(C_{3} \times P_n) \geq \chi_s(C_{3} \times P_5) = 4$. We have observed that the graph $C_3 \times P_5$ has a unique coloring (up to permutation of colors) pattern with four colors $a,b,c,d$ as shown in the Fig. [\[C3P5\]](#C3P5){reference-type="ref" reference="C3P5"}. 
By using arguments similar to Case 1 of Lemma [\[C3kP45\]](#C3kP45){reference-type="ref" reference="C3kP45"}, we can show that for $n\geq 6$, $\chi_s(C_{3} \times P_n) \geq 5$. Also $C_3 \times P_n$ is a subgraph of $C_3 \times C_n$, for $n\geq 6$, therefore from Lemma [\[th-c3cn\]](#th-c3cn){reference-type="ref" reference="th-c3cn"}, we have $\chi_s(C_3\times P_n) \leq \chi_s(C_3 \times C_n) = 5$. Altogether we have $\chi_s(C_3 \times P_n) = 5$. 0◻ ◻ For every positive integer $n \geq 4$, we have $\chi_s(C_4 \times P_n)=5$. *Proof.* For $n\geq 4$, the graph $C_4\times P_n$ contains $Y$ as a subgraph, thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"}, we have $\chi_s(C_4\times P_n) \geq \chi_s(Y) = 5$. Since $C_4 \times P_n$ is a subgraph of $C_4 \times C_n$, therefore from Lemma [\[th-c4cn\]](#th-c4cn){reference-type="ref" reference="th-c4cn"}, we have $\chi_s(C_4\times P_n) \leq \chi_s(C_4\times C_n) = 5$. Altogether we have $\chi_s(C_4\times P_n) = 5$. 0◻ ◻ For every positive integer $n \geq 4$, we have $\chi_s(C_5 \times P_n)=5$. *Proof.* By case by case analysis we found that $\chi_s(C_5\times P_4) = 5$. As the graph $C_5 \times P_4$ is a subgraph of $C_5\times P_n$ for $n\geq 5$, from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} we have $\chi_s(C_5 \times P_n) \geq \chi_s(C_5 \times P_4) = 5$. Since $C_5 \times P_n$ is a subgraph of $C_5 \times C_n$, thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[th-c5cn\]](#th-c5cn){reference-type="ref" reference="th-c5cn"}, we have $\chi_s(C_5\times P_n) \leq \chi_s(C_5\times C_n) = 5$. Altogether we have $\chi_s(C_5\times P_n) = 5$. 0◻ ◻ For every positive integer $n \geq 6$, we have $\chi_s(C_6 \times P_n)=5$. *Proof.* For $n \geq 8$, the graph $C_{6}\times P_n$ contains $Z$ as a subgraph. Thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"}, we have $\chi_s(C_6 \times P_n) \geq \chi_s(Z) = 5$. Since $C_6 \times P_n$ is a subgraph of $C_6 \times C_n$, therefore from Lemma [\[th-C6C8C9C10\]](#th-C6C8C9C10){reference-type="ref" reference="th-C6C8C9C10"}, we have $\chi_s(C_6\times P_n) \leq \chi_s(C_6\times C_n) = 5$. Altogether, for $n \geq 8$, we have $\chi_s(C_6\times P_n) = 5$. Also by tedious case by case analysis we found that $\chi_s(C_6 \times P_6) = 5$ and $\chi_s(C_6 \times P_7) = 5$. 0◻ ◻ For every positive integer $n \geq 4$, we have $\chi_s(C_7 \times P_n)=5$. *Proof.* For $n\geq 7$, the graph $C_{7}\times P_n$ contains $P_4 \square P_4$ as a subgraph. Thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[lem-1\]](#lem-1){reference-type="ref" reference="lem-1"}, we have $\chi_s(C_7 \times P_n) \geq \chi_s(P_4 \square P_4) = 5$. Since $C_7 \times P_n$ is a subgraph of $C_7 \times C_n$, thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"} and [\[C7Cn\]](#C7Cn){reference-type="ref" reference="C7Cn"}, we have $\chi_s(C_7\times P_n) \leq \chi_s(C_7\times C_n) = 5$. Altogether, for $n \geq 7$, we have $\chi_s(C_7\times P_n) = 5$. Also by tedious case by case analysis we found that $\chi_s(C_7 \times P_4) = \chi_s(C_7 \times P_5)=\chi_s(C_7 \times P_6)=5$. 0◻ ◻ For every pair of positive integers $m$ and $n$, where $m \geq 8$, $n \geq 6$, we have $\chi_s(C_m \times P_n)= 5$. 
*Proof.* For $m \geq 8$ and $n \geq 6$, the graph $C_{m}\times P_n$ contains $Z$ as a subgraph. Thus from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"}, we have $\chi_s(C_m \times P_n) \geq \chi_s(Z) = 5$. Since $C_m \times P_n$ is the subgraph of $C_m \times C_n$, therefore from Lemma [\[lem-subgraph\]](#lem-subgraph){reference-type="ref" reference="lem-subgraph"}, [\[th-C6C8C9C10\]](#th-C6C8C9C10){reference-type="ref" reference="th-C6C8C9C10"} and [\[th-C11\]](#th-C11){reference-type="ref" reference="th-C11"}, we have $\chi_s(C_m\times P_n) \leq \chi_s(C_m\times C_n) = 5$ for $m\geq 8$ and $n \geq 6$. Altogether we have $\chi_s(C_m\times P_n) = 5$. 0◻ ◻
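The arguments above ultimately rest on checking that explicit coloring patterns are star colorings, which is mechanical to verify by computer. The sketch below (plain Python, with names of our own choosing) builds the tensor product of a cycle and a path, tests the star-coloring condition directly, and, as a usage example, validates the coloring $g((u_i,v_j)) = (i, f_H(v_j))$ from the proof of the general upper bound above on $C_5 \times P_4$.

```python
# A small verification utility: build the tensor product of two cycles/paths,
# check that a coloring is proper, and check that no path on four vertices is
# bicolored (the star condition).  All names here are ours, not from the paper.
from itertools import product

def cycle_edges(m):
    return {(i, (i + 1) % m) for i in range(m)} | {((i + 1) % m, i) for i in range(m)}

def path_edges(n):
    return {(j, j + 1) for j in range(n - 1)} | {(j + 1, j) for j in range(n - 1)}

def tensor_product(edges_g, vg, edges_h, vh):
    """Adjacency sets of G x H: (u,v) ~ (u',v') iff uu' in E(G) and vv' in E(H)."""
    adj = {(u, v): set() for u in range(vg) for v in range(vh)}
    for (u, u2) in edges_g:
        for (v, v2) in edges_h:
            adj[(u, v)].add((u2, v2))
    return adj

def is_star_coloring(adj, col):
    # properness
    for x, nbrs in adj.items():
        if any(col[x] == col[y] for y in nbrs):
            return False
    # every path a-b-c-d on four distinct vertices must use at least 3 colors
    for b in adj:
        for c in adj[b]:
            for a in adj[b] - {c}:
                for d in adj[c] - {b, a}:
                    if len({col[a], col[b], col[c], col[d]}) < 3:
                        return False
    return True

if __name__ == "__main__":
    m, n = 5, 4
    adj = tensor_product(cycle_edges(m), m, path_edges(n), n)
    f_h = [0, 1, 0, 2]                      # a 3-star coloring of P_4
    col = {(u, v): (u, f_h[v]) for u in range(m) for v in range(n)}
    print(is_star_coloring(adj, col))       # True: m * chi_s(P_4) = 15 colors suffice
```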
arxiv_math
{ "id": "2310.04851", "title": "Star Coloring of Tensor Product of Two Graphs", "authors": "Harshit Kumar Choudhary, Swati Kumari and I. Vinod Reddy", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
# Prisms to Tambara functors {#sec:main} In this section, we present our main construction. In [1.1](#sub:main){reference-type="ref" reference="sub:main"}, we functorially associate a Tambara functor $\m A$ to every prism $(A, I)$. In [1.2](#sub:subcats){reference-type="ref" reference="sub:subcats"}, we identify the images of various special types of prisms under this functor. ## Main construction {#sub:main} We give a few descriptions of the construction. Recall that $$I_n \defeq I\phi(I)\dotsm\phi^n(I)A.$$ On objects, we set $$A^{C_{p^n}} = A/I_n.$$ The restriction maps are the natural quotient maps. To define the transfers and norms, recall that there exist elements $\pi_n\in\phi^n(I)A$ with $\pi_n \equiv p\bmod I_{n-1}$ [@SulSliceTHH Proposition 3.25], [@APC Lemma 2.2.8]. We define $$\begin{aligned} V, N\colon A/I_{n-1} &\to A/I_n\\ V(x) &= \pi_n x\\ N(x) &= \phi(x) - \pi_n\delta(x)\end{aligned}$$ Observe that $FVx = px$, $FNx = x^p$, and that $N$ is a ring homomorphism (namely $\phi$) mod $V$. When $I=(d)$ is principal, we may take $\pi_n = u_n \phi^n(d)$ for some unit $u_n\in A^\times$. $$\xymatrix{ A/I_2 \ar@{-}[d] \ar[r] & A/\phi(I_1) \ar@{-}[d] \ar[r] & A/\phi^2(I)\\ A/I_1 \ar@{-}[d] \ar[r] & A/\phi(I)\\ A/I }$$ To make this construction independent of choices, we will Kan extend from the case of transversal prisms. Recall that for a transversal prism, we have an injection $$c_\trans\colon A/I_n \mono \prod_{i=0}^n A/\phi^i(I)$$ by [@AClBTrace Lemma 3.7]. We will call these *transversal coordinates* and write them as $(t_0,\dotsc,t_{n})$. To work effectively with transversal coordinates, we need to characterize the image of $A/I_n$. [\[lem:trans-coords\]]{#lem:trans-coords label="lem:trans-coords"} A vector $(t_0,\dotsc,t_n)$ is in the image of $c_\trans$ if and only if $(t_0,\dotsc,t_{n-1})=c_\trans(t)$ is in the image of $c_\trans$, and $t=t_n$ in $A/(I_{n-1},\phi^n(I))=A/(I_{n-1},p)$. For any prism $(A, I)$, there is a comparison map $$W_m(A/I) \to A/I_m.$$ If we use ghost coordinates on the source and transversal coordinates on the target, this map is given by $$(w_0,\dotsc,w_m) \mapsto (w_m,\phi(w_{m-1}),\dotsc,\phi^m(w_0)).$$ We will construct this map and derive this formula in [\[sec:drw\]](#sec:drw){reference-type="ref" reference="sec:drw"}, but seeing it now is helpful for understanding the "backward" formulas in Construction [\[cons:main\]](#cons:main){reference-type="ref" reference="cons:main"}. We are ready to give the official definition of $\m A$. [\[cons:main\]]{#cons:main label="cons:main"} Let $A$ be a transversal prism. We define a Tambara functor $\m A$ as follows. On objects, $$\m A(G/C_{p^n}) = A/I_n.$$ We define the Tambara structure maps using transversal coordinates as $$\begin{aligned} F(t_0,\dotsc,t_n) &= (t_0,\dotsc,t_{n-1})\\ V(t_0,\dotsc,t_{n-1}) &= (pt_0,\dotsc,pt_{n-1},0)\\ N(t_0,\dotsc,t_{n-1}) &= (t_0^p,\dotsc,t_{n-1}^p,\phi(t_{n-1}))\end{aligned}$$ We need to verify that this definition satisfies the conditions of Lemma [\[lem:trans-coords\]](#lem:trans-coords){reference-type="ref" reference="lem:trans-coords"}. This is clear for $F$. If $(t_0,\dotsc,t_{n-1})=c_\trans(t)$, then $(pt_0,\dotsc,pt_{n-1})=c_\trans(pt)$ and $(t_0^p,\dotsc,t_{n-1}^p)=c_\trans(t^p)$. Finally, we have $pt_{n-1}\equiv0$ and $\phi(t_{n-1})\equiv t_{n-1}^p$ mod $I_{n-1}+\phi^n(I)A$ since $p\in I_{n-1}+\phi^n(I)A$ by the extended prism condition.
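The formulas of Construction [\[cons:main\]](#cons:main){reference-type="ref" reference="cons:main"} can be exercised on a toy example: treating a level-$n$ element as an $(n+1)$-tuple with entries in the $\delta$-ring $\mathbb{Z}[x]$ with Frobenius lift $f\mapsto f(x^p)$, one can confirm the expected relations $FV = p$, $FN = (-)^p$, multiplicativity of $N$, additivity of $V$, and the projection formula $V(x\cdot Fy) = V(x)\cdot y$. This is only a sanity check of the coordinate formulas, not a model of an actual prism; it assumes sympy, and the names are our own.

```python
import sympy as sp

x = sp.symbols('x')
p = 3

def phi(f):
    # a Frobenius lift on Z[x]: f(x) -> f(x^p)
    return sp.expand(f.subs(x, x**p))

def F(t):                      # restriction: drop the last transversal coordinate
    return t[:-1]

def V(t):                      # transfer: (p*t_0, ..., p*t_{n-1}, 0)
    return tuple(p * c for c in t) + (sp.Integer(0),)

def N(t):                      # norm: (t_0^p, ..., t_{n-1}^p, phi(t_{n-1}))
    return tuple(c**p for c in t) + (phi(t[-1]),)

def mul(s, t):
    return tuple(a * b for a, b in zip(s, t))

def add(s, t):
    return tuple(a + b for a, b in zip(s, t))

def eq(s, t):
    return len(s) == len(t) and all(sp.expand(a - b) == 0 for a, b in zip(s, t))

s = (x + 1, 2 * x, x**2 - 3)            # a sample element one level down
t = (x - 2, x + 5, 4 * x**2 + x)
u = (x, x + 1, sp.Integer(2), x**3)     # a sample element at the top level

assert eq(F(V(s)), tuple(p * c for c in s))        # F V = multiplication by p
assert eq(F(N(s)), tuple(c**p for c in s))         # F N = p-th power
assert eq(N(mul(s, t)), mul(N(s), N(t)))           # N is multiplicative
assert eq(V(add(s, t)), add(V(s), V(t)))           # V is additive
assert eq(V(mul(s, F(u))), mul(V(s), u))           # projection formula V(x * Fy) = V(x) * y
print("all coordinate identities hold for the sample elements")
```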
From the description in transversal coordinates, we see that every identity between $F,V,N$ that holds in the Witt Tambara functor $\m W$, for example $$NVx=p^{p-2}V^2x^p,$$ is also satisfied in $\m A$. We have thus indeed defined a Tambara functor (if this does not satisfy the reader, we give a "fully coherent" proof of an analogous fact in [\[sec:gns\]](#sec:gns){reference-type="ref" reference="sec:gns"}, which can be $p$-typified to cover the present situation). We now animate this construction [@CSPurity 5.1.4], associating an *animated* Tambara functor $\m A$ to every animated prism $A$. It remains to show that $\m A$ is a static Tambara functor when $A$ is a static (not necessarily transversal) prism. For this, we must show that for any transversal prism $(B,J)$ mapping to $(A,I)$, the map $$A\tnsr_B B/J_n \to A/I_n$$ is an isomorphism. Note that $\pi_1(A\tnsr_B B/J_n)=\Tor^B_1(A,B/J_n)$ is the $J_n$-torsion in $A$. Since $I_n$ is the pushforward of $J_n$ [@Prismatic Lemma 3.5], and $A$ is $I_n$-torsionfree by definition of prism, this Tor term is zero. The same lemma shows that the map is an isomorphism on $\pi_0$. In the literature, "derived Mackey functors" refers to something different [@KaledinMackey; @SpecDerivedMackey; @AMGRMackey]. We really want animated $\m W(\Z)$-modules. For perfect prisms, we additionally have an $R$ map given by $$R(t_0,\dotsc,t_n) = (\phi^{-1}(t_1),\dotsc,\phi^{-1}(t_n))$$ in transversal coordinates. This map does not exist for general prisms (although the quotient map $A/I_n \to A/\phi(I_{n-1})$ is nearly as good). Recall from above that the norm $N^{n+1}_n(f\bmod I_n)$ is lifted by the expression $\~N^{n+1}_n(f) = \phi(f)-\pi_{n+1}\delta(f)$. It is useful to have a similar explicit expression for iterates of the norm. In a $\delta$-ring, the operations $\theta_n$ are given by $$\phi(f^{p^{n-1}}) = f^{p^n} + p^n\theta_n(f).$$ In particular, $\theta_1=\delta$. In a $\delta$-ring, we have $$\phi^n(f) = f^{p^n} + \sum_{i=1}^n p^i \theta_i(f)^{\phi^{n-i}}.$$ [\[lem:n-lift-p\]]{#lem:n-lift-p label="lem:n-lift-p"} The expressions $$\begin{aligned} \~V^{m+n}_n(f) &= \pi_{n+1}\dotsm \pi_{m+n}\\ \~N^{m+n}_n(f) &= f^{\phi^m} - \sum_{i=1}^m \~V^{m+n}_{m+n-i}\theta_i(f)^{\phi^{m-i}} \end{aligned}$$ are lifts of $V^{m+n}_n(f\bmod I_n)$ and $N^{m+n}_n(f\bmod I_n)$. *Proof.* The claim for $\~V^{m+n}_n$ is clear. The norm $N^{m+n}_n(f\bmod I_n)$ is characterized in transversal coordinates by $$N^{m+n}_n(f\bmod I_n) = \begin{cases} f^{p^m} & {}\bmod {I_n}\\ \phi^k(f)^{p^{m-k}} & {}\bmod{\phi^{n+k}(I)},\quad 1\le k\le m \end{cases}$$ so we just need to check that $\~N^{m+n}_n$ satisfies this congruence. For any $0\le k\le m$, we have $$\begin{aligned} \~N^{m+n}_n(f) &= f^{\phi^m} - \sum_{i=1}^m\~V^{m+n}_{m+n-i} \theta_i(f)^{\phi^{m-i}}\\ &\equiv f^{\phi^m} - \sum_{i=1}^{m-k} p^i\theta_i(f)^{\phi^{m-i}} \quad\bmod {\phi^{n+k}(I),\text{ or }I_n\text{ if }k=0}\\ &= \phi^k\left(f^{\phi^{m-k}}-\sum_{i=1}^{m-k} p^i\theta_i(f)^{m-k-i}\right)\\ &= \phi^k(f^{p^{m-k}})\\ &= \phi^k(f)^{p^{m-k}}.\qedhere \end{aligned}$$ ◻ Similarly, one can check that for any ring $R$, we have $$N^n(f) = f - \sum_{i=1}^n V^i \theta_i(f)$$ in $W(R)$. ## Subcategories {#sub:subcats} We now identify the image of transversal, crystalline, and perfect prisms under the functor constructed above. The point is to give definitions that make sense for every $G$, so that these notions will make sense for $G$-prisms once those have been defined. 
The fact that we are able to do so is evidence for thinking that "$G$-prisms" might exist. A Mackey functor $\m M$ is *transversal* if the natural map $$M^H \to \prod_{K\subconj H} \H^0(W_H(K), M^K)/\P_K$$ is an injection for all $H\le G$. Let $\m M$ be a Mackey functor, and let $\cF$ be a family of subgroups of $G$. Viewing $\m M$ as a $G$-spectrum, there is a pullback square $$\xymatrix{ \m M \ar[r] \ar[d] \ar@{}[dr]|\square & \~{E\cF} \tnsr \m M \ar[d]\\ \m M^{E\cF_+} \ar[r] & \~{E\cF} \tnsr \m M^{E\cF_+} }$$ We expect that $\m M$ being transversal is equivalent to $\mpi_0$ of this square being a pullback for all families $\cF$. Indeed, this was our original definition of transversality. We have adopted the above definition for expediency, since we want to postpone developing the theory of transversality to future work. Notably, for transversal Mackey functors, there should be a version of the stratified perspective on equivariance [@Glasman; @AMGRStratified] that stays in the world of Mackey functors. A prism $A$ is transversal if and only if $\m A$ is a transversal Mackey functor. *Proof.* The "only if" direction is [@AClBTrace Lemma 3.7]. Conversely, if $\m A$ is a transversal Mackey functor, then in particular $$A/I_1 \hookrightarrow A/I \times A/\phi(I).$$ Suppose $f\in A$ with $pf\in I$. Since $p\in I+\phi(I)A$, this is equivalent to $\pi_1f\in I$, where $\pi_1$ is a generator of $\phi(I)$. The above injection implies that $I\cap \phi(I)=I_1$, so we have $\pi_1 f\in I_1$. Since $\pi_1$ is a non-zerodivisor, we can cancel to get $f\in I$. This shows that $A/I$ is $p$-torsionfree. ◻ Next we turn to crystalline prisms. One can easily characterize the image of crystalline prisms as those $\m A$ such that $p=0$ in $A^e$. However, this characterization is not very natural from the equivariant perspective. A Mackey functor $\m M$ is *cohomological* if $\tr^H_K\res^H_K x=|H/K|x$ for all $K\preceq H$ and all $x\in M^H$. A Tambara functor $\m A$ is cohomological if it is cohomological as a Mackey functor, and furthermore $N^H_K\res^H_K x = x^{|H/K|}$ for all $K\preceq H$ and all $x\in A^H$. For $G=C_2$, a Tambara functor $\m A$ is cohomological as soon as it satisfies $N^H_K\res^H_K x = x^{|H/K|}$. This follows from the formula $$N(x+y) = N(x) + N(y) + V(x\bar y)$$ applied to $y=1$ [@DMPPolynomial Example 3.8(i)]. We do not know if this is true for general $G$. Being a cohomological Mackey functor (resp. Tambara functor) is the same as being a $\m\Z$-module (resp. $\m\Z$-algebra). We wanted to prove the following: [\[conj:crystalline\]]{#conj:crystalline label="conj:crystalline"} A prism $(A, I)$ is crystalline if and only if $\m A$ is a cohomological Tambara functor. As we will see below, this is equivalent to the conjecture that $A$ is crystalline if and only if $\phi(I)A=(p)$. Unfortunately, we were unable to prove this. We can do so in the case $p=2$, however: Conjecture [\[conj:crystalline\]](#conj:crystalline){reference-type="ref" reference="conj:crystalline"} is true when $p=2$. *Proof.* Working locally, we can assume that $I=(\g)$ is principal. Recall that $V\colon A/I\to A/I_1$ can be given by multiplication by $\phi(\g)/\delta(\g)$. Thus, the statement that $\m A$ is cohomological, i.e. $V(1)=p$, means that $$\frac{\phi(\g)}{\delta(\g)} \equiv p \bmod I_1 .$$ (For a general prism, this is only true mod $I$.) 
This gives $$\begin{aligned} \frac{\g^p+p\delta(\g)}{\delta(\g)} &= p + \g\phi(\g)x\\ \frac{\g^p}{\delta(\g)} &= \g\phi(\g)x\\ \g^{p-1} &= \phi(\g)x\delta(\g) \end{aligned}$$ On the other hand, we can write $$p=\phi(\g)(\delta(\g)^{-1}-\g x)$$ which shows that $\phi(\g)A=(p)$ by [@Prismatic Lemma 2.24]. Combining these, we have $p\mid\g^{p-1}$. When $p=2$, this means $p\mid\g$, so $(p)=(\g)$ by another application of [@Prismatic Lemma 2.24]. ◻ In one failed attempt to prove Conjecture [\[conj:crystalline\]](#conj:crystalline){reference-type="ref" reference="conj:crystalline"}, we discovered the identity $$\delta(f^{n+1}) = \delta(f)\sum_{i=0}^n \phi(f)^{n-i} f^{pi}.$$ This can be proved by induction using the identity $$\delta(fg) = \phi(f)\delta(g) + \delta(f)g^p.$$ Sadly, this identity plays no role in the present paper; but it is a nice identity, so we wanted to share it. A prism $(A,I)$ is perfect if and only if the de Rham-Witt comparison map $$\m W(A/I) \to \m A$$ constructed in [\[sec:drw\]](#sec:drw){reference-type="ref" reference="sec:drw"} is an equivalence. *Proof.* The "only if" direction is well-known, so suppose that $\m W(A/I)\to \m A$ is an equivalence. We will show that $\phi\colon A\to A$ is an isomorphism. Working locally, we can assume that $I=(d)$ is principal. Taking the limit over $F$, we get an isomorphism $W((A/I)^\flat) \iso A$. Since $(A/I)^\flat$ is a perfect $\F_p$-algebra, we can apply [@Prismatic Lemma 2.34] to get that $\phi(d)$ is a non-zerodivisor. Let us write $\phi_0\colon A/I\to A/\phi(I)$ for the map induced by $\phi$, in order to distinguish it from $\phi\colon A\to A$. Note that $\phi_0$ is the composite $$\xymatrix@ur{ A/I \ar[d]_N \ar[dr]^-{\phi_0}\\ A/I_1 \ar[r]_-R & A/\phi(I) }$$ Since this composite is always an isomorphism for the Witt vectors Tambara functor, we have that $\phi_0$ is an isomorphism. Suppose that $\phi(f)=0$. Since $\phi_0$ is injective, we get $f=dg$ for some $g$. But now $0=\phi(f)=\phi(d)\phi(g)$; since $\phi(d)$ is a non-zerodivisor, we get $\phi(g)=0$. Iterating, we get $f\in(\phi(I)A)^\infty$. Since $A$ is $\phi(I)$-adically separated, this shows $f=0$. Next let $f\in A$ be arbitrary. Since $\phi_0$ is surjective, we can write $f=\phi(g_0) + \phi(d) h_0$. We can then write $h_0=\phi(g_1)+\phi(d) h_1$. Iterating this process, we have $$\begin{aligned} f &= \phi(g_0) + \phi(d)h_0\\ &= \phi(g_0 + dg_1) + \phi(d)^2 h_1\\ &= \phi(g_0 + dg_1 + d^2g_2) + \phi(d)^3 h_2\\ &= \dots \end{aligned}$$ Since $A$ is complete with respect to both $I$ and $\phi(I)A$, this converges to a preimage of $f$ for $\phi$. ◻
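As a quick sanity check of the two $\delta$-ring identities recorded in this section---the expansion of $\phi^n(f)$ in terms of the operations $\theta_i$, and the formula for $\delta(f^{n+1})$---the following short script (our own illustration, not part of the argument above) verifies them symbolically in the $\delta$-ring $\mathbb{Z}[x]$ with Frobenius lift $\phi(f)(x)=f(x^p)$; all helper names are ours.

```python
# Illustrative sympy verification (not from the paper) of two delta-ring identities,
# in the delta-ring Z[x] with Frobenius lift phi(f)(x) = f(x^p):
#   (1) phi^n(f) = f^{p^n} + sum_{i=1}^n p^i * phi^{n-i}(theta_i(f)),
#       where theta_n is defined by phi(f^{p^{n-1}}) = f^{p^n} + p^n * theta_n(f);
#   (2) delta(f^{n+1}) = delta(f) * sum_{i=0}^n phi(f)^{n-i} * f^{p*i}.
import sympy as sp

x = sp.symbols('x')
p = 2  # any prime works; p = 2 keeps the polynomials small

def phi(f, k=1):
    # k-fold Frobenius lift on Z[x]: substitute x -> x^(p^k)
    return sp.expand(f.subs(x, x**(p**k)))

def delta(f):
    # delta(f) = (phi(f) - f^p) / p; the division is exact in Z[x]
    return sp.expand((phi(f) - f**p) / p)

def theta(f, n):
    # theta_n(f) = (phi(f^{p^{n-1}}) - f^{p^n}) / p^n, again an exact division
    return sp.expand((phi(f**(p**(n - 1))) - f**(p**n)) / p**n)

f = 1 + 3*x + x**2  # an arbitrary test polynomial
for n in range(1, 5):
    # identity (1)
    lhs1 = phi(f, n)
    rhs1 = sp.expand(f**(p**n) + sum(p**i * phi(theta(f, i), n - i)
                                     for i in range(1, n + 1)))
    assert sp.expand(lhs1 - rhs1) == 0
    # identity (2)
    rhs2 = sp.expand(delta(f) * sum(phi(f)**(n - i) * f**(p*i) for i in range(n + 1)))
    assert sp.expand(delta(f**(n + 1)) - rhs2) == 0

print("delta-ring identities verified for n = 1, ..., 4")
```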
{ "id": "2309.03181", "title": "Prisms and Tambara functors I: Twisted powers, transversality, and the\n perfect sandwich", "authors": "Yuri J. F. Sulyma", "categories": "math.NT math.AT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- --- **On generalized fraction and power series properties of $\mathcal{S}$-Noetherian rings** Xiaolei Zhang$^{a}$ School of Mathematics and Statistics, Shandong University of Technology, Zibo 255049, China\ E-mail: zxlrghj\@163.com\ **Abstract** In this note, we study the generalized fraction properties and power series properties of $\mathcal{S}$-Noetherian rings. Actually, we answer two questions proposed in \[A. Dabbabi, A. Benhissi, Generalization of the $S$-Noetherian concept, *Arch. Math.* (Brno) **59**(4) (2023) 307-314.\]\ *Key Words:* $\mathcal{S}$-Noetherian ring, generalized fraction ring, formal power series ring.\ *2020 Mathematics Subject Classification:* 13E05, 13A15. # Introduction Throughout this note, all rings are commutative rings with identity. Let $A$ be a ring. We always denote by $A[[x]]$ the formal power series ring with coefficients in $A$, by $S$ a multiplicative subset of $A$, and by $\mathcal{S}$ a multiplicative system of ideals of $A$. For a subset $U$ of an $A$-module $M$, we denote by $\langle U\rangle$ or $(U)A$ the $A$-submodule of $M$ generated by $U$. In 2002, Anderson and Dumitrescu [@ad02] introduced the so-called $S$-Noetherian rings. An ideal $I$ of a ring $A$ is said to be *$S$-finite* if there is a finitely generated subideal $K$ of $I$ such that $sI\subseteq K$ for some $s\in S$, and a ring $A$ is called an *$S$-Noetherian ring* if every ideal of $A$ is $S$-finite. The $S$-Noetherian versions of Cohen's Theorem, the Eakin-Nagata Theorem and the Hilbert Basis Theorem, in both polynomial and power series forms, are given in [@ad02]. Some more works on $S$-Noetherian rings can be found in [@hh15; @l15; @lO14; @lO15; @l07]. Recently, Dabbabi and Benhissi [@DB23] generalized the notion of $S$-Noetherian rings in terms of multiplicative systems of ideals of a given ring. Let $A$ be a ring and $\mathcal{S}$ be a multiplicative system of ideals of $A$. An ideal $I$ of $A$ is said to be *$\mathcal{S}$-finite* if there is a finitely generated subideal $F$ of $I$ such that $HI\subseteq F$ for some $H\in \mathcal{S}$. A ring $A$ is called an *$\mathcal{S}$-Noetherian ring* if every ideal of $A$ is $\mathcal{S}$-finite. Certainly, if $\mathcal{S}$ is composed of the principal ideals generated by the elements of $S$, then $\mathcal{S}$-Noetherian rings are exactly $S$-Noetherian rings. Moreover, the $\mathcal{S}$-Noetherian versions of Cohen's Theorem, the Eakin-Nagata Theorem and the Hilbert Basis Theorem in polynomial form are also investigated in [@DB23]. It is known from [@ad02] that if $A$ is an $S$-Noetherian ring, then the fraction ring $A_S$ is a Noetherian ring, and the power series ring $A[[x]]$ is also an $S$-Noetherian ring under the anti-Archimedean condition. However, the corresponding statements for $\mathcal{S}$-Noetherian rings are left as two open questions by Dabbabi and Benhissi (see [@DB23 *Questions*]). The main motivation of this note is to investigate these two questions. Actually, we show that if $A$ is an $\mathcal{S}$-Noetherian domain, the generalized fraction ring $A_\mathcal{S}$ need not be Noetherian (see Example [Example 1](#ce){reference-type="ref" reference="ce"}). We also obtain that if $A$ is $\mathcal{S}$-Noetherian, then $A[[x]]$ is $\mathcal{S}$-Noetherian under some mild assumption (see Theorem [Theorem 3](#pow){reference-type="ref" reference="pow"}). # Main results Let $A$ be an integral domain with quotient field $K$ and $\mathcal{S}$ a multiplicative system of ideals of $A$.
Denote by $$A_\mathcal{S}=\{x\in K\mid xH\subseteq A\ \mbox{for some}\ H\in \mathcal{S}\}$$ the generalized fraction ring of $A$ with respect to $\mathcal{S}$. If $\mathcal{S}=\{sA\mid s\in S\}$ for some multiplicative subset $S$ of $A$, then $A_\mathcal{S}=A_S$, the localization of $A$ at $S$. It follows from [@ad02 Proposition 2(f)] that if $A$ is an $S$-Noetherian ring, then $A_S$ is a Noetherian ring. For the general case, the authors of [@DB23] proposed the following question: **Question 1**. *Let $A$ be an integral domain and $\mathcal{S}$ a multiplicative system of ideals of $A$ such that $A$ is $\mathcal{S}$-Noetherian. Does it follow that $A_\mathcal{S}$ is Noetherian?* To give a counter-example to this question, we recall some basic notions of $w$-operations on integral domains (see [@wm97; @fk16] for more details). Let $A$ be an integral domain with quotient field $K$. Let $J$ be a finitely generated ideal of $A$ and set $J^{-1}:=\{x\in K\mid Jx\subseteq A\}$. If $J^{-1}=A$, then $J$ is said to be a ${\rm GV}$-ideal of $A$, denoted by $J\in {\rm GV}(A)$. Certainly, ${\rm GV}(A)$ is a multiplicative system of ideals of $A$. Let $M$ be a torsion-free $A$-module. Denote by $$M_w=\{x\in M\otimes_AK\mid Jx\subseteq M\ \mbox{for some}\ J\in {\rm GV}(A)\}.$$ Trivially, $M\subseteq M_w$ and $(M_w)_w=M_w$. Moreover, if $M=M_w$, then $M$ is called a $w$-module. Trivially, the base ring $A$ itself is a $w$-module. So if we take $\mathcal{S}={\rm GV}(A)$, then $A_\mathcal{S}=A_w=A$. Now, we are ready to give a counter-example to Question [Question 1](#1){reference-type="ref" reference="1"}. **Example 1**. *Let $A$ be a non-Noetherian domain such that every $w$-ideal of $A$ is finitely generated $($e.g. a non-Noetherian unique factorization domain, see [@fk16 Theorem 7.9.5]$).$ We claim that $A$ is a ${\rm GV}(A)$-Noetherian ring in the sense of [@DB23]. Indeed, let $I$ be an ideal of $A$. Then $I_w$ is a finitely generated ideal of $A$ by assumption. Assume $I_w=\langle a_1,\dots,a_n\rangle$. Then there exists $J_i\in{\rm GV}(A)$ such that $J_ia_i\subseteq I$ for each $i=1,\dots,n$. Setting $J=J_1\dots J_n$, we have $J\in{\rm GV}(A)$ and $JI_w\subseteq I.$ Note that $JI\subseteq JI_w$ and $JI_w$ is finitely generated. So $I$ is ${\rm GV}(A)$-finite in the sense of [@DB23]. Consequently, $A$ is a ${\rm GV}(A)$-Noetherian ring. However, the generalized fraction ring $A_\mathcal{S}=A_w=A$ is not a Noetherian domain.* Let $A$ be a ring and $S$ a multiplicative subset of $A$ such that for each $s\in S$, $\bigcap\limits_{n=1}^\infty s^nA$ contains some element of $S$ (i.e., $S$ is anti-Archimedean). Then it was proved in [@ad02 Proposition 10] that if $A$ is an $S$-Noetherian ring, then $A[[x]]$ is also $S$-Noetherian. For the general case, the authors of [@DB23] proposed the following question: **Question 2**. *Let $A$ be a ring and $\mathcal{S}$ a multiplicative system of ideals of $A$ such that for each $I\in \mathcal{S}$, $\bigcap\limits_{n=1}^\infty I^n$ contains some ideal of $\mathcal{S}$. Suppose $A$ is $\mathcal{S}$-Noetherian. Does it follow that $A[[x]]$ is also $\mathcal{S}$-Noetherian?* We give a positive answer to this question under a mild assumption. **Lemma 2**. *Let $H=\langle a_1, a_2,\cdots,a_n\rangle$ be a finitely generated ideal of $A$. Suppose $m$ is a positive integer and $I=\langle a^m_1, a^m_2,\cdots,a^m_n\rangle$. Then $H^{mn}\subseteq I$.* *Proof.* Note that $H^{mn}$ is generated by $\{\prod\limits_{i=1}^na_i^{k_i}\mid \sum\limits_{i=1}^nk_i=mn\}$.
By the pigeonhole principle, there exists some $k_i$ such that $k_i\geq m$. So each $\prod\limits_{i=1}^na_i^{k_i}\in I$, and thus $H^{mn}\subseteq I$. ◻ **Theorem 3**. *Let $A$ be a ring and $\mathcal{S}$ be a multiplicative system of finitely generated ideals of $A$ such that for each $I\in \mathcal{S}$, $\bigcap\limits_{n=1}^\infty I^n$ contains some ideal of $\mathcal{S}$. Suppose $A$ is an $\mathcal{S}$-Noetherian ring. Then $A[[x]]$ is also an $\mathcal{S}$-Noetherian ring.* *Proof.* It follows from [@DB23 Corollary 2.12] that we only need to show that every prime ideal $P$ of $A[[x]]$ not containing any ideal of $\mathcal{S}$ is $\mathcal{S}$-finite. Let $\pi: A[[x]]\rightarrow A$ be the $A$-homomorphism sending $x$ to $0$. Set $P'=\pi(P)$. Since $A$ is $\mathcal{S}$-Noetherian, $HP'\subseteq (g_1(0),\dots,g_k(0))A$ for some $H=\langle h_l\mid l=1,\dots,n\rangle\in \mathcal{S}$ and $g_1,\dots,g_k\in P$. If $x\in P$, then $P=(P',x)A[[x]]$, and so $HP\subseteq(g_1,\dots,g_k,x)A[[x]]\subseteq P$, implying that $P$ is $\mathcal{S}$-finite. Now, we assume that $x\not\in P$. Let $f\in P$. Then $f(0)\in P'$, and so for each $l$, we have $h_lf(0)=\sum_i d_{0i,l}g_i(0)$ for some $d_{0i,l}\in A$. Hence $h_lf-\sum_i d_{0i,l}g_i$ can be written as $xf_{1,l}$ with $f_{1,l}\in A[[x]]$ for each $l$. Note that $xf_{1,l}\in P$, and so $f_{1,l}\in P$ for each $l$ since $x\not\in P$ and $P$ is a prime ideal. In the same way, for each $l$ we can write $h_lf_{1,l}=\sum_i d_{1i,l}g_i+xf_{2,l}$ with $d_{1i,l}\in A$ and $f_{2,l}\in P$. Continuing these steps, for each $l$ and $j$ we have $h_lf_{j,l}=\sum_i d_{ji,l}g_i+xf_{j+1,l}$ with $d_{ji,l}\in A$ and $f_{j+1,l}\in P$, where $f_{0,l}=f$. So for each $l$, we have $$f=\sum_i g_i(\sum_j(d_{ji,l}/h_l^{j+1})x^j).$$ Let $I\subseteq \bigcap\limits_{j=1}^\infty H^j$ be an ideal in $\mathcal{S}$. It follows from Lemma [Lemma 2](#fangmi){reference-type="ref" reference="fangmi"} that $H^{nj}\subseteq \langle h_1^j,\dots, h_n^j\rangle$ for each $j$, and so $I\subseteq\bigcap\limits_{j=1}^\infty \langle h_1^j,\dots, h_n^j\rangle$. Let $a=\sum\limits_{l=1}^na_{(j+1)l}h_l^{j+1}\in I$ with $a_{(j+1)l}\in A$ for each $j$ and $l$. Then $$\begin{aligned} af =& (\sum\limits_{l=1}^na_{(j+1)l}h_l^{j+1}) (\sum_i g_i(\sum_j(d_{ji,l}/h_l^{j+1})x^j)) \\ = &\sum\limits_{l=1}^n(\sum_i g_i(\sum_j(a_{(j+1)l}d_{ji,l})x^j))\\ \in& (g_1,\dots,g_k)A[[x]].\end{aligned}$$ So we have $If\subseteq (g_1,\dots,g_k)A[[x]]$. Hence $IP\subseteq (g_1,\dots,g_k)A[[x]]\subseteq P$. Therefore, $A[[x]]$ is an $\mathcal{S}$-Noetherian ring. ◻ *Remark 4*. It follows from [@DB23 Theorem 2.7] that if $\mathcal{S}$ is a multiplicative system of ideals of $A$ such that for each $I\in \mathcal{S}$, $\bigcap\limits_{n=1}^\infty I^n$ contains some ideal of $\mathcal{S}$, and $A$ is an $\mathcal{S}$-Noetherian ring, then the polynomial ring $A[x]$ is also an $\mathcal{S}$-Noetherian ring. However, we do not know whether the same holds for power series rings in this general situation. Note that in the proof of Theorem [Theorem 3](#pow){reference-type="ref" reference="pow"}, we use Cohen's Theorem for $\mathcal{S}$-Noetherian rings, which necessarily requires that "$\mathcal{S}$ is a multiplicative system of finitely generated ideals" (see [@DB23 Example 2.14]). H. Ahmed, H. Sana, $S$-Noetherian rings of the forms $A[X]$ and $A[[X]]$, *Comm. Algebra* **43**(9) (2015) 3848-3856. D. D. Anderson, T. Dumitrescu, $S$-Noetherian rings, *Comm. Algebra* **30**(9) (2002) 4407-4416. A. Dabbabi, A.
Benhissi, Generalization of the $S$-Noetherian concept, *Arch. Math.* (Brno) **59**(4) (2023) 307-314. J. W. Lim, A note on $S$-Noetherian domains, *Kyungpook Math. J.* **55**(3) (2015) 507-514. J. W. Lim, D. Y. Oh, $S$-Noetherian properties on amalgamated algebras along an ideal, *J. Pure Appl. Algebra* **218**(6) (2014) 2099-2123. J. W. Lim, D. Y. Oh, $S$-Noetherian properties on composite ring extensions, *Comm. Algebra* **43**(7) (2015) 2820-2829. Z. Liu, On $S$-Noetherian rings, *Arch. Math.* (Brno) **43**(1) (2007) 55-60. H. Kim, N. Mahdou and Y. Zahir, $S$-Noetherian in Bi-amalgamations, *Bull. Korean Math. Soc.* **58**(4) (2021) 1021-1029. F. G. Wang, R. McCasland, On $w$-modules over strong Mori domains, *Comm. Algebra* **25**(4) (1997) 1285-1306. F. G. Wang, H. Kim, *Foundations of Commutative Rings and Their Modules* (Singapore, Springer, 2016).
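As a concrete illustration of the pigeonhole argument in Lemma 2 (our own addition, not part of the paper), the following brute-force check confirms that every monomial generator of $H^{mn}$ has some exponent $k_i\geq m$, and hence lies in $\langle a_1^m,\dots,a_n^m\rangle$, for small values of $m$ and $n$.

```python
# Brute-force illustration (ours) of Lemma 2: every monomial a_1^{k_1} ... a_n^{k_n}
# with k_1 + ... + k_n = m*n has some k_i >= m, hence lies in (a_1^m, ..., a_n^m),
# so H^{mn} is contained in I.
from itertools import product

def lemma2_holds(n, m):
    total = m * n
    exponents = [ks for ks in product(range(total + 1), repeat=n) if sum(ks) == total]
    return all(max(ks) >= m for ks in exponents)

print(all(lemma2_holds(n, m) for n in range(1, 5) for m in range(1, 5)))  # True
```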
{ "id": "2309.04936", "title": "On generalized fraction and power series properties of\n $\\mathcal{S}$-Noetherian rings", "authors": "Xiaolei Zhang", "categories": "math.AC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we consider the orthogonal projection of a surface in $\mathbb{R}^3$ for a given view direction. We then introduce and investigate several invariants of the families of plane curves that locally configure the projection image of the surface. Using these invariants, we also show an extension of the d'Ocagne-Koenderink formula that associates a local behavior of the projection image of a surface with the Gaussian curvature of the surface. author: - Ken Anjyo and Yutaro Kabata title: A view-parametric extension of d'Ocagne-Koenderink formula for a surface in $\mathbb{R}^3$ --- [^1] [^2] # Introduction Jan Koenderink showed in his famous book [@Koenderink] that classical differential geometry is a strong tool for studying vision science. Among his various results in the area, what is called Koenderink's formula (see [@Koenderink1984; @Koenderink]) has attracted many mathematicians [@Araujo; @Cipolla-Giblin; @Damon-Giblin-Haslinger; @FHS; @HKS; @IRFT]. This formula gives the relationship between the Gaussian curvature of a surface and the curvature of the apparent contour. For instance, while it is a naive intuition that we can tell whether a surface is locally elliptic or hyperbolic from its apparent contour, the formula mathematically justifies this intuition. A precise statement of the Koenderink formula is given as follows: Let $M \subset \mathbb{R}^3$ be a smooth surface and $K$ the Gaussian curvature of $M$. For a point $p\in M$ and a tangent vector $v\in T_p M$, let $\kappa_n(p)$ denote the normal curvature of $M$ at $p$ along $v$. When $\kappa_n(p)\not=0$, the apparent contour (the singular value set of the orthogonal projection of $M$ along $v$) is non-singular at $p$. Denoting the curvature of the contour as a plane curve by $\kappa_c(p)$, we have the Koenderink formula: $$\label{KoenderinkFormula} \kappa_c(p) \kappa_n(p)=K(p).$$ Recently, it was found that the same formula had already been discovered by d'Ocagne in 1895 (see also [@Araujo page 22]). We therefore refer to the equation ([\[KoenderinkFormula\]](#KoenderinkFormula){reference-type="ref" reference="KoenderinkFormula"}) as the *d'Ocagne-Koenderink formula* throughout this paper. While Koenderink in [@Koenderink] is interested mainly in the boundary of the projection image (apparent contour) of a surface, we are interested in the interior of its projection image. If a given surface consists of a certain family of space curves, then its projection image gives a one-parameter family of curves as illustrated in Figure [1](#curvessurffig){reference-type="ref" reference="curvessurffig"}. This situation would make us expect that there is a geometric relationship between the shape of the surface and the pattern of the projection image. The present paper aims to justify this expectation by deriving formulae similar to the d'Ocagne-Koenderink formula. To achieve this goal, assuming that all maps and surfaces considered in this paper are smooth ($C^\infty$), we investigate the above one-parameter family of curves in the projection image of a surface and then recover the Gaussian curvature of the surface. Differential geometric approaches for families of curves as in [@Kabata-Takahashi; @Takahashi2] would then be quite useful. ![A surface given by $z=\sin xy$. 
The right side is just drawn by the curves parametrized by the parameter $y$, that is, each space curve is expressed as $(x_0,y,\sin x_0 y)$ for a fixed $x_0$.](gunegunezu.pdf){#curvessurffig width="8.0cm"} ![The orthogonal projection of a surface $M$ along the view direction ${\bf v}$ and the projection image giving a one-parameter family of curves. ](ellipproj.pdf){#fig:ellipproj width="12.0cm"} The rest of this paper is organized as follows. As a preliminary step, we define several differential geometric notions for one-parameter families of regular curves in plane in §2, which is important in stating our main results. In particular, we note that these differential geometric notions are invariant under translations and rotations on the plane (see Remark [\[rem:invariant\]](#rem:invariant){reference-type="ref" reference="rem:invariant"}). In §3, we consider orthogonal projections of a surface in $\mathbb{R}^3$ that consists of certain families of space curves. Here we suppose that the surface is expressed as the graph of a function $g(x,y)$: $s(x,y)=(x,y,g(x,y))^t$, where $(x, y)^t$ belongs to a certain domain $U$ in $\mathbb{R}^2$. For a fixed $x$ (resp. $y$), $s(x,y)$ gives a space curve parametrized by $y$ (resp. $x$), thus our surface is regarded as a one-parameter family of space curves $\{s(x_0,y) \}_{x_0}$ (resp. $\{s(x,y_0) \}_{y_0}$). The orthogonal projection of such a surface is then considered for a given unit vector $\mathbf{v}$, which is referred to as the *view direction* in this paper. As shown in Figure 2, it means that we investigate the projection $\pi_{\mathbf{v}}$ of the surface $s$ onto a plane $\mathbf{v}^{\perp}$ which is perpendicular to the view direction $\mathbf{v}$. More specifically we examine the one-parameter families of plane curves on $\mathbf{\Phi}_{\mathbf{v}}(U)$, where $\mathbf{\Phi}_{\mathbf{v}}:= \pi_{\mathbf{v}}\circ s$. Thus we derive several formulae which describe relationship between the invariants of such one-parameter families of plane curves and geometric information of the surface. In §4, we investigate the signs of the formulae obtained in §3. This leads us to consider an extension of the d'Ocagne-Koenderink formula. Actually, in a special setting in §5, the formulae containing the Gaussian curvature are given. We note all the formulae obtained in this paper include the two parameters that prescribe a view direction. In this sense, we say those formulae are expressed along with the view direction parameters. **Acknowledgement**. The second author is partially supported by JSPS KAKENHI Grant Number JP 20K14312. The authors want to thank Professor Farid Tari for his helpful comment. # One-parameter family of smooth curves and its invariant We consider local differential geometry of families of plane curves, which is regarded as a smooth mapping from plane to plane. We assume that $\mathbb{R}^2$ is the Euclidean plane equipped with the scalar product $<\cdot,\cdot>$, and denote the Euclidean distance by $||\cdot||$. Let $U\subset \mathbb{R}^2$ be a simply connected open subset. **Definition 1**. * We say that a smooth map $f\colon U \to \mathbb{R}^2, (u,v)\mapsto f(u,v)$ is *a one-parameter family of regular curves with respect to $u$ (resp. $v$)* when $f_u(u,v)\not=0$ (resp. $f_v(u,v)\not=0$) for all $(u,v)\in U$. * Let $f\colon U \to \mathbb{R}^2, (u,v)\mapsto f(u,v)$ be a one-parameter family of regular curves with respect to $u$ (resp. $v$). For a fixed parameter $v$ (resp. $u$), $f(\cdot,v)$ (resp. 
$f(u,\cdot)$) is a regular plane curve, which is called *a $u$ (resp. $v$)-curve*. Such $u$ or $v$-curves have the curvatures in the usual sense. **Definition 2**. * We define *the curvature of a one-parameter family of regular curves $f$ with respect to $u$ (resp. $v$)* as the function $$\kappa[f,u](u,v):=\frac{\det (f_u\; f_{uu})}{\|f_u\|^{3}}(u,v) \;\; \left(\mbox{\it resp.}\;\; \kappa[f,v](u,v):=\frac{\det (f_v\; f_{vv})}{\|f_v\|^{3}}(u,v)\right).$$ Here $\det (\mathbf{a}\; \mathbf{b})$ for $\mathbf{a}, \mathbf{b}\in\mathbb{R}^2$ means the determinant of the matrix whose columns are $\mathbf{a}$ and $\mathbf{b}$. * **Example 3**. *Figure [4](#exuv2){reference-type="ref" reference="exuv2"} shows the image of the mapping $f_\pm(u,v)=(u\pm v^2,v)$ which are regarded as one-parameter families of regular curves with respect to $v$ with $\kappa[f_\pm,v]\lessgtr 0$.* *![The left (resp. right) represents the image of $f_-$ (resp. $f_+$) for $f_\pm\colon (-1,1)\times (-1,1)\to \mathbb{R}^2$, $f_\pm(u,v)=(u\pm v^2,v)$.](u-v2.pdf "fig:"){#exuv2 width="6.0cm"} ![The left (resp. right) represents the image of $f_-$ (resp. $f_+$) for $f_\pm\colon (-1,1)\times (-1,1)\to \mathbb{R}^2$, $f_\pm(u,v)=(u\pm v^2,v)$.](u+v2.pdf "fig:"){#exuv2 width="6.0cm"}* Next, we define the squared velocity of a smooth mapping. **Definition 4**. * Let $f\colon U \to \mathbb{R}^2, (u,v)\mapsto f(u,v)$ be a smooth mapping. We define *the squared velocity $SV[f,u](u,v)$ (resp. $SV[f,v](u,v)$) of $f$ at $(u,v)\in U$ with respect to $u$ (resp. $v$)* as $$SV[f,u](u,v):=\|f_u(u,v)\|^2 \;\; (\mbox{\it resp.}\;\; SV[f,v](u,v):=\|f_v(u,v)\|^2).$$ * ![The image of the mapping $f\colon (-1,1)\times (-1,1)\to \mathbb{R}^2$, $f(u,v)=(u+ u^3,v)$. The red line is the image of the $v$-axis in the $uv$-plane by $f$.](expalinSV.pdf){#exu+u^3 width="10.0cm"} **Example 5**. *Figure [5](#exu+u^3){reference-type="ref" reference="exu+u^3"} shows the image of the mapping $f(u,v)=(u+u^3,v)$ which is regarded as a one-parameter family of curves (straight lines) parametrized by $v$. Here we have $SV[f,u]=(1+3u^2)^2$ and $\frac{d}{du} SV[f,u]=12u(1+3u^2)$, thus $\frac{d}{du} SV[f,u] <0$ for $u<0$; $\frac{d}{du} SV[f,u] >0$ for $u>0$; and $\frac{d}{du} SV[f,u] =0$ for $u=0$.* **Remark 6**. * Let $f, \tilde{f} \colon U \to \mathbb{R}^2$ be smooth mappings such that $\tilde{f}(u,v)=A(f(u,v))+a$ for a rotation $A\in SO(2)$ and a constant vector $a\in\mathbb{R}^2$. Then $SV[f,u]=SV[\tilde{f},u]$ and $SV[f,v]=SV[\tilde{f},v]$ hold. In addition, assuming that $f$ and $\tilde{f}$ are one-parameter families of regular curves with respect to $u$ (resp. $v$), then $\kappa[f,u]=\kappa[\tilde{f},u]$ (resp. $\kappa[f,v]=\kappa[\tilde{f},v]$) holds. In this sense, the differential geometric notions introduced in this section are invariant under the action of rotations and translations on the plane. In the following sections, the partial differentials of the squared velocities: $\frac{d}{du} SV[f,u]$ and $\frac{d}{dv} SV[f,v]$ also play important roles and are invariant in the above sense. Note that a general theory for invariants of families of plane curves is considered in [@Kabata-Takahashi]. In particular, the notion of one-parameter families of Legendre curves is introduced, and their complete invariants under the action of rotations and translations on the plane are given with the uniqueness and existence theorem. 
[\[rem:invariant\]]{#rem:invariant label="rem:invariant"}* # Formulae with respect to projections of a surface We consider local geometry of a smooth surface $M\subset \mathbb{R}^3$ through projection. Let $\pi_{\bf v}\colon\mathbb{R}^3 \to {\bf v}^\bot$ be the orthogonal projection with the view direction ${\bf v}\in S^2$. We consider the family of regular curves generated by the orthogonal projection $\pi_{\bf v}|S$ as in Figure [2](#fig:ellipproj){reference-type="ref" reference="fig:ellipproj"}--[8](#fig:paraboproj){reference-type="ref" reference="fig:paraboproj"}. For convenience, we rotate our surface $M$ so that ${\bf v}$ is moved into $(0,0,-1)^t$, and then our orthogonal projection is regarded as the canonical projection from $\mathbb{R}^3$ to the $xy$-plane. See Figure [6](#fig:projandrot){reference-type="ref" reference="fig:projandrot"}. ![The orthogonal projection of a surface $M$ along a view direction ${\bf v}$ and its rotation.](projandrot.pdf){#fig:projandrot width="14.0cm"} Without loss of generality, we may suppose that $M$ contains the origin $0\in\mathbb{R}^3$, and $M$ around $0$ is locally parameterized by $s\colon U\to \mathbb{R}^3, \;s(x,y)=(x,y,g(x,y))^t$ for $(x,y)^t\in U$ with a simply connected open subset $U\subset\mathbb{R}^2$ containing the origin $0\in\mathbb{R}^2$ and a smooth function $g\colon U\to \mathbb{R}$ so that $g(0)=0$. Take the view direction ${\bf v}=(\sin \theta \cos \phi,\sin \theta \sin \phi, \cos \theta)^t\in S^2$ with $\theta, \phi \in \mathbb{R}$. Set the rotation matrices as $$R_y(a)=\left( \begin{array}{ccc} \cos a & 0& \sin a \\ 0&1&0\\ -\sin a & 0& \cos a \\ \end{array} \right),\;\;\;\; R_z(a)=\left( \begin{array}{ccc} \cos a& -\sin a & 0 \\ \sin a & \cos a& 0 \\ 0&0&1\\ \end{array} \right)$$ with $a\in\mathbb{R}$. Putting $$G:=R_y(\pi-\theta)R_z(-\phi)=\left( \begin{array}{ccc} -\cos\theta\cos\phi & -\cos\theta\sin\phi& \sin\theta \\ -\sin\phi&\cos\phi&0\\ -\sin\theta\cos\phi & -\sin\theta\sin\phi& -\cos\theta \\ \end{array} \right),$$ we have $G({\bf v})=(0,0,-1)^t$, and $G(M)=G(s(U))$ is parametrized as $$G(s(x,y))= \left( \begin{array}{c} g(x,y) \sin \theta -x \cos \theta \cos \phi -y \cos \theta \sin \phi \\ y \cos \phi -x \sin \phi\\ -g(x,y) \cos \theta -x \sin \theta \cos \phi -y \sin \theta \sin \phi\\ \end{array} \right).$$ Thus the orthogonal projection of $G(M)=G(s(U))$ along $G({\bf v})$ is expressed as $$\Phi[{\bf v}](x,y)= \left( \begin{array}{c} g(x,y) \sin \theta -x \cos \theta \cos \phi -y \cos \theta \sin \phi \\ y \cos \phi -x \sin \phi\end{array} \right).$$ Write $$\begin{aligned} P_{\theta,\phi}(x,y):=-\cos\theta\cos\phi +g_x(x,y)\sin\theta,\\ Q_{\theta,\phi}(x,y):=-\cos\theta\sin\phi +g_y(x,y)\sin\theta.\end{aligned}$$ Immediately, we have the following lemma. **Lemma 7**. 1. *$\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $x$ if and only if either $\sin \phi\neq0$ or $P_{\theta,\phi}(x,y)\neq0$.* 2. *$\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $y$ if and only if either $\cos \phi\neq0$ or $Q_{\theta,\phi}(x,y)\neq0$.* *[\[regcondlem\]]{#regcondlem label="regcondlem"}* With the condition in Lemma [\[regcondlem\]](#regcondlem){reference-type="ref" reference="regcondlem"}, $\Phi[{\bf v}]$ is a one-parameter family of regular curves with respect to $x$ or $y$. Let $K$ denote the Gaussian curvature of the surface $M$. 
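As a quick symbolic sanity check of the setup above (our own addition, not part of the paper), the following script confirms that $G$ sends the view direction ${\bf v}$ to $(0,0,-1)^t$ and that $\Phi[{\bf v}]$ consists of the first two coordinates of $G(s(x,y))$; the code and its helper names are ours and purely illustrative.

```python
# Symbolic check (ours, not from the paper) of the displayed formulas:
# G sends the view direction v to (0, 0, -1)^t, and Phi[v](x, y) consists of the
# first two coordinates of G(s(x, y)) for s(x, y) = (x, y, g(x, y))^t.
import sympy as sp

x, y, th, ph = sp.symbols('x y theta phi', real=True)
g = sp.Function('g')(x, y)

Ry = lambda a: sp.Matrix([[sp.cos(a), 0, sp.sin(a)],
                          [0, 1, 0],
                          [-sp.sin(a), 0, sp.cos(a)]])
Rz = lambda a: sp.Matrix([[sp.cos(a), -sp.sin(a), 0],
                          [sp.sin(a), sp.cos(a), 0],
                          [0, 0, 1]])

v = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
G = Ry(sp.pi - th) * Rz(-ph)
assert (G*v - sp.Matrix([0, 0, -1])).applyfunc(sp.simplify) == sp.zeros(3, 1)

s = sp.Matrix([x, y, g])
Phi = sp.Matrix([g*sp.sin(th) - x*sp.cos(th)*sp.cos(ph) - y*sp.cos(th)*sp.sin(ph),
                 y*sp.cos(ph) - x*sp.sin(ph)])
assert ((G*s)[:2, :] - Phi).applyfunc(sp.simplify) == sp.zeros(2, 1)
print("rotation and projection formulas verified")
```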
We have Theorems [Theorem 8](#thm:y-zeta){reference-type="ref" reference="thm:y-zeta"}, [Theorem 9](#thm:x-zeta){reference-type="ref" reference="thm:x-zeta"}. **Theorem 8**. *If $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $y$, then we have $$\kappa[\Phi[{\bf v}],y] \cdot \frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right) = -2g_{xx}g_{yy}\frac{P_{\theta,\phi}\cos\phi \sin^2\theta}{(\cos^2 \phi+Q_{\theta,\phi}^2)^{\frac32}}.$$* **Theorem 9**. *If $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $x$, then we have $$\kappa[\Phi[{\bf v}],x] \cdot \frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right) = 2 g_{xx}g_{yy}\frac{Q_{\theta,\phi}\sin\phi \sin^2\theta}{(\sin^2 \phi+P_{\theta,\phi}^2)^{\frac32}}.$$* *Proofs for Theorems [Theorem 8](#thm:y-zeta){reference-type="ref" reference="thm:y-zeta"}, [Theorem 9](#thm:x-zeta){reference-type="ref" reference="thm:x-zeta"}.* Combining statements in the following Propositions [Proposition 10](#prop:curveturep){reference-type="ref" reference="prop:curveturep"}, [Proposition 11](#prop:sv){reference-type="ref" reference="prop:sv"}, we see the formulae. $\Box$ **Proposition 10**. 1. *If $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $y$, then the curvature of the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $y$ is $$\kappa[\Phi[{\bf v}],y]= -\frac{g_{yy}\cos\phi \sin\theta}{(\cos^2 \phi+(\cos\theta\sin\phi-g_y\sin \theta)^2)^{3/2}}.$$* 2. *If $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $x$, then the curvature of the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $x$ is $$\kappa[\Phi[{\bf v}],x]= \frac{g_{xx}\sin\phi \sin\theta}{(\sin^2 \phi+(\cos\theta\cos\phi-g_x\sin \theta)^2)^{3/2}}.$$*   Direct calculations based on Definition [Definition 2](#def:curvature){reference-type="ref" reference="def:curvature"} show the statements. $\Box$ **Proposition 11**. *We have the following equations with respect to the differentials of the squared velocity of $\Phi[{\bf v}]$.* 1. *$$\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)=2g_{xx} \sin\theta(-\cos\theta\cos\phi +g_x\sin\theta).$$* 2. *$$\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right)=2g_{yy} \sin\theta(-\cos\theta\sin\phi +g_y\sin\theta).$$*   Direct calculations based on Definition [Definition 4](#def:sqvel){reference-type="ref" reference="def:sqvel"} show the statements. $\Box$ Furthermore, if we have two families of curves with respect to both $x$ and $y$ on a surface $M$, we also obtain the following theorems as byproducts of Propositions [Proposition 10](#prop:curveturep){reference-type="ref" reference="prop:curveturep"}, [Proposition 11](#prop:sv){reference-type="ref" reference="prop:sv"}. **Theorem 12**. *If $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to both $x$ and $y$, then we have $$\begin{aligned} \kappa[\Phi[{\bf v}],x] \cdot \kappa[\Phi[{\bf v}],y] =-g_{xx}g_{yy}\frac{\sin\phi\cos\phi \sin^2\theta}{(\sin^2 \phi+P_{\theta,\phi}^2)^{\frac32}(\cos^2 \phi+Q_{\theta,\phi}^2)^{\frac32}}.\end{aligned}$$* **Theorem 13**. *We have $$\begin{aligned} \frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right) \cdot \frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right) =4g_{xx}g_{yy}P_{\theta,\phi}Q_{\theta,\phi} \sin^2\theta.\end{aligned}$$* **Remark 14**. 
* Since the squared velocity can be defined for any smooth mappings, we do not need any conditions to $\Phi[{\bf v}]$ in Theorem [Theorem 13](#thm:xySV){reference-type="ref" reference="thm:xySV"}. * # Relationship between the signs of invariants In the d'Ocagne-Koenderink formula ([\[KoenderinkFormula\]](#KoenderinkFormula){reference-type="ref" reference="KoenderinkFormula"}), the coincidence of the signs of the different quantities is a remarkable feature, which mathematically justifies the intuition that the approximate shape of an object can be determined by looking at its contour. Focusing on the sign of each quantity in the formulae in Theorems [Theorem 8](#thm:y-zeta){reference-type="ref" reference="thm:y-zeta"}, [Theorem 9](#thm:x-zeta){reference-type="ref" reference="thm:x-zeta"}, [Theorem 12](#thm:xy-curvature){reference-type="ref" reference="thm:xy-curvature"} and [Theorem 13](#thm:xySV){reference-type="ref" reference="thm:xySV"}, we get the statements below. **Corollary 15**. *Suppose that $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $y$ and $sin \theta\not=0$. Then we have $$\operatorname{sign}\kappa[\Phi[{\bf v}],y] \cdot \operatorname{sign}\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right) = -\operatorname{sign}(g_{xx}g_{yy}) \cdot \operatorname{sign}P_{\theta,\phi}\cdot \operatorname{sign}\cos\phi.$$* **Corollary 16**. *Suppose that $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $x$ and $sin \theta\not=0$. Then we have $$\operatorname{sign}\kappa[\Phi[{\bf v}],x] \cdot \operatorname{sign}\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right) = \operatorname{sign}(g_{xx}g_{yy}) \cdot \operatorname{sign}Q_{\theta,\phi}\cdot \operatorname{sign}\sin\phi.$$* **Corollary 17**. *Suppose that $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to both $x$ and $y$ and $sin \theta\not=0$. Then we have $$\begin{aligned} \operatorname{sign}\kappa[\Phi[{\bf v}],x] \cdot \operatorname{sign}\kappa[\Phi[{\bf v}],y] =-\operatorname{sign}(g_{xx}g_{yy}) \cdot \operatorname{sign}\sin2\phi. \end{aligned}$$* **Corollary 18**. *Suppose $sin \theta\not=0$. Then we have $$\begin{aligned} \operatorname{sign}\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right) \cdot \operatorname{sign}\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right) =\operatorname{sign}(g_{xx}g_{yy})\cdot \operatorname{sign}P_{\theta,\phi}\cdot \operatorname{sign}Q_{\theta,\phi}.\end{aligned}$$* Note that the following equation holds for a surface $M$ parametrized by $s(x,y)=(x,y,g(x,y))$: $$g_{xx}g_{yy}=K\left(1+g_x^2+g_y^2\right)^2+g_{xy}^2,$$ where $K$ is the Gaussian curvature of $M$. Thus, we have the following lemma. **Lemma 19**. 1. *If $K(p)\ge0$ for $p\in M$, then $\operatorname{sign}K(p)=\operatorname{sign}( g_{xx}g_{yy}(p))$.* 2. *If $g_{xx} g_{yy}(p)\le0$ for $p\in M$, then $K(p)\le0$ holds for $p\in M$.* *[\[lem:gaussandg\]]{#lem:gaussandg label="lem:gaussandg"}* **Remark 20**. * Combining Corollaries [Corollary 15](#cor:y-zeta){reference-type="ref" reference="cor:y-zeta"}--[Corollary 18](#cor:xySV){reference-type="ref" reference="cor:xySV"} and Lemma [\[lem:gaussandg\]](#lem:gaussandg){reference-type="ref" reference="lem:gaussandg"}, it is possible to obtain relationships between the Gaussian curvature $K$ of a surface $M$ and invariants of the projections $\Phi[{\bf v}]$ as families of curves under appropriate assumptions. 
* # Example Finally, as the simplest examples, we consider our formulae in a special case. Suppose that a surface $M$ is locally parameterized by $s(x,y)=(x,y,g(x,y))^t$ with $g(x,y)$ having a critical point at the origin $0$, that is, $g_x(0)=g_y(0)=0$. In this case, $$\begin{aligned} P_{\theta,\varphi}(0)=-\cos\theta\cos\phi,\;\; Q_{\theta,\varphi}(0)=-\cos\theta\sin\phi\end{aligned}$$ hold. Furthermore, we assume that the $x$-curve and $y$-curve of $M$ are along the principal directions at the origin, that is, $g_{xy}(0)=0$, thus we have $g_{xx}g_{yy}(0)=K(0)$. Then, from Corollaries [Corollary 15](#cor:y-zeta){reference-type="ref" reference="cor:y-zeta"}--[Corollary 18](#cor:xySV){reference-type="ref" reference="cor:xySV"}, we have simple relationships as in the following. **Proposition 21**. *Suppose $g_x(0)=g_y(0)=g_{xy}(0)=0$ and $\sin 2\theta\cos\phi\not=0$. Then $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $y$ around $0$, and we have $$\operatorname{sign}\kappa[\Phi[{\bf v}],y](0) \cdot \operatorname{sign}\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0) = \operatorname{sign}K(0)\cdot \operatorname{sign}\cos\theta.$$* **Proposition 22**. *Suppose $g_x(0)=g_y(0)=g_{xy}(0)=0$ and $\sin 2\theta\sin\phi\not=0$. Then $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to $x$ around $0$, and we have $$\operatorname{sign}\kappa[\Phi[{\bf v}],x](0) \cdot \operatorname{sign}\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right)(0) = -\operatorname{sign}K(0)\cdot \operatorname{sign}\cos\theta.$$* **Proposition 23**. *Suppose $g_x(0)=g_y(0)=g_{xy}(0)=0$ and $\sin \theta\not=0$. Then $\Phi[{\bf v}]\colon U\to \mathbb{R}^2$ is a one-parameter family of regular curves with respect to both $x$ and $y$ around $0$, and we have $$\begin{aligned} \operatorname{sign}\kappa[\Phi[{\bf v}],x](0) \cdot \operatorname{sign}\kappa[\Phi[{\bf v}],y](0) =-\operatorname{sign}K(0) \cdot \operatorname{sign}\sin2\phi. \end{aligned}$$* **Proposition 24**. *Suppose $g_x(0)=g_y(0)=g_{xy}(0)=0$ and $\sin 2\theta\not=0$. Then we have $$\begin{aligned} \operatorname{sign}\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0) \cdot \operatorname{sign}\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right)(0) =\operatorname{sign}K(0)\cdot \operatorname{sign}\sin2\phi.\end{aligned}$$* **Example 25**. *Let $M$ be a surface expressed as the graph of a function $g(x,y)=-x^2- y^2$ on $\mathbb{R}^2$, where $K(x,y)>0$ holds for any $(x,y)\in\mathbb{R}^2$. Take the view direction ${\bf v}=(\sin \theta \cos \phi,\sin \theta \sin \phi, \cos \theta)^t\in S^2$ so that $\sin 2\theta\cos\phi\not=0$. Then $\Phi[{\bf v}](x,y)$ is a one-parameter family of regular curves with respect to $y$ around $0$. In this case, $$\kappa[\Phi[{\bf v}],y](0)=\frac{2\cos\phi\sin\theta}{(\cos^2\phi+\cos^2\theta\sin^2\phi)^{\frac32}},\;\; \frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)=2\sin2\theta\cos\phi.$$ Figure [2](#fig:ellipproj){reference-type="ref" reference="fig:ellipproj"} represents the case with $\phi=0$ and $\frac\pi2<\theta<\pi$, where $\kappa[\Phi[{\bf v}],y](0)>0$ and $\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)<0$. Thus we see that the relationship in Proposition [Proposition 21](#prop:y-zeta:simplecase){reference-type="ref" reference="prop:y-zeta:simplecase"} holds in this case.* ![ $M$ is hyperbolic, that is, $K>0$. 
Here $\kappa[\Phi[{\bf v}],y]>0$ and $\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)<0$ hold for the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $y$. ](hypproj.pdf){#fig:hypproj width="12.0cm"} **Example 26**. *Let $M$ be a surface expressed as the graph of a function $g(x,y)=x^2- y^2$ on $\mathbb{R}^2$, where $K(x,y)<0$ holds for any $(x,y)\in\mathbb{R}^2$. Take the view direction ${\bf v}=(\sin \theta \cos \phi,\sin \theta \sin \phi, \cos \theta)^t\in S^2$ so that $\sin 2\theta\cos\phi\not=0$. Then $\Phi[{\bf v}](x,y)$ is a one-parameter family of regular curves with respect to $y$ around $0$. In this case, it is easy to see that $$\kappa[\Phi[{\bf v}],y](0)=\frac{2\cos\phi\sin\theta}{(\cos^2\phi+\cos^2\theta\sin^2\phi)^{\frac32}},\;\; \frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)=-2\sin2\theta\cos\phi.$$ Figure [7](#fig:hypproj){reference-type="ref" reference="fig:hypproj"} represents the case with $\phi=0$ and $\frac\pi2<\theta<\pi$, where $\kappa[\Phi[{\bf v}],y](0)>0$ and $\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)>0$. Thus we see that the relationship in Proposition [Proposition 22](#prop:x-zeta:simplecase){reference-type="ref" reference="prop:x-zeta:simplecase"} holds in this case.* ![ $M$ contains parabolic points $p$ where $K(p)=0$. $\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(p)=0$ holds for the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $y$. ](paraboproj.pdf){#fig:paraboproj width="12.0cm"} **Example 27**. *Let $M$ be a surface expressed as the graph of a function $g(x,y)=y^2-x^3$ on $\mathbb{R}^2$, where $K(0)=0$ holds. Take the view direction ${\bf v}=(\sin \theta \cos \phi,\sin \theta \sin \phi, \cos \theta)^t\in S^2$ so that $\sin 2\theta\cos\phi\not=0$. Then $\Phi[{\bf v}](x,y)$ is a one-parameter family of regular curves with respect to $y$ around $0$. In this case, it is easy to see that $$\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)=0.$$ Thus we see that the relationship in Proposition [Proposition 21](#prop:y-zeta:simplecase){reference-type="ref" reference="prop:y-zeta:simplecase"} holds in this case. Figure [8](#fig:paraboproj){reference-type="ref" reference="fig:paraboproj"} represents the case.* Observe that, in Examples [Example 25](#ex:ellip){reference-type="ref" reference="ex:ellip"} -- [Example 27](#ex:para){reference-type="ref" reference="ex:para"}, the difference in the patterns of the projection images corresponds to the difference in the signs of the Gaussian curvatures of the surfaces. Compare also Figures [2](#fig:ellipproj){reference-type="ref" reference="fig:ellipproj"} -- [8](#fig:paraboproj){reference-type="ref" reference="fig:paraboproj"}. **Example 28**. *Let $M$ be a surface expressed as the graph of a function $g(x,y)=-x^2- y^2$ on $\mathbb{R}^2$, where $K(x,y)>0$ holds for any $(x,y)\in\mathbb{R}^2$. Take the view direction ${\bf v}=(\sin \theta \cos \phi,\sin \theta \sin \phi, \cos \theta)^t\in S^2$ so that $\sin 2\theta\not=0$. Then $\Phi[{\bf v}]$ is a one-parameter family of regular curves with respect to both $x$ and $y$ around $0$. 
We can see $$\kappa[\Phi[{\bf v}],x](0)=-\frac{2\sin\phi\sin\theta}{\sin^2\phi+\cos^2\theta\cos^2\phi},\; \kappa[\Phi[{\bf v}],y](0)=-\frac{2\cos\phi\sin\theta}{\cos^2\phi+\cos^2\theta\sin^2\phi},\;\;$$ and $$\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0)=2\sin2\theta\cos\phi,\; \frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right)(0)=2\sin2\theta\sin\phi.$$ See also Figures [9](#fig:ellipprojcurvaturexy){reference-type="ref" reference="fig:ellipprojcurvaturexy"}, [10](#fig:ellipprojSVxy){reference-type="ref" reference="fig:ellipprojSVxy"} where $\frac\pi2<\theta<\pi$ and $\pi<\phi<\frac32 \pi$. Here the relationships in Propositions [Proposition 23](#prop:xy-curvature:simplecase){reference-type="ref" reference="prop:xy-curvature:simplecase"} and [Proposition 24](#prop:xySV:simplecase){reference-type="ref" reference="prop:xySV:simplecase"} hold.* ![ The surface $M$ is elliptic, that is, $K>0$, and $\kappa[\Phi[{\bf v}],y](0)<0$ and $\kappa[\Phi[{\bf v}],x](0)>0$ hold for the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $x$ and $y$. ](ellipxycurvature.pdf){#fig:ellipprojcurvaturexy width="13.0cm"} ![ The surface $M$ is elliptic, that is, $K>0$, and $\frac{d}{dx}\left( SV[\Phi[{\bf v}],x]\right)(0) >0$ $\frac{d}{dy}\left( SV[\Phi[{\bf v}],y]\right)(0) >0$ hold for the one-parameter family of regular curves $\Phi[{\bf v}]$ with respect to $x$ and $y$. ](ellipxySV.pdf){#fig:ellipprojSVxy width="13.0cm"} nn M. P. Araujo, Um estudo da geometria de superfícies via projeção ortogonal: Teorema de Koenderink e extensões, Ph.d Thesis, Universidade Estadual Paulista (UNESP), 2022. J. W. Bruce and P. J. Giblin, . , Cambridge, 1992. R. Cipolla and P. Giblin, , 2000. J. Damon, P. Giblin and G. Haslinger, , 2016. M. d'Ocagne, Sur la courbure du contour apparent d'une surface projetée orthogonalement. Nouvelles annales de mathématiques : journal des candidats aux écoles polytechnique et normale, Serie 3, Volume **14** (1895), 262--264. T. Fukui, M. Hasegawa and K. Saji, Extensions of Koenderink's formula, J. Gökova Geom. Topol. **10** (2017), 42--59. M. Hasegawa, Y. Kabata and K. Saji, Capturing information on curves and surfaces from their projected images, Int. J. Math. Industry **12** (2020), 2050004. S. Izumiya, M. C. Romero-Fuster, M. A. S. Ruas and F. Tari, . 2015. Y. Kabata and M. Takahashi, One-parameter families of Legendre curves and plane line congruences,  Math. Nachr. **295** (2022), 1533--1561. J. J. Koenderink, What does the occluding contour tell us about solid shape?, Perception, **13** (1984), 321--330. J. J. Koenderink, , 1990. M. Takahashi, **71** (2017), 1473--1489. Ken Anjyo,\ OLM Digital Inc., Japan and IMAGICA GROUP Inc., Japan\ \ \ Yutaro Kabata,\ School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8131, Japan.\ E-mail address: kabata\@nagasaki-u.ac.jp\ [^1]: 2020 Mathematics Subject classification: 53A04, 53A05, 53A55, 68T45 [^2]: Key Words and Phrases. One-parameter family of regular curves, curvature, Koenderink's formula.
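As an independent check of Proposition 10(1), and of the value used in Example 25, the following short script (ours, purely illustrative, not part of the paper) recomputes $\kappa[\Phi[{\bf v}],y]$ directly from Definition 2 for the surface $z=-x^2-y^2$ and compares it with the closed form at randomly sampled points and view directions.

```python
# Independent numerical check (ours, not from the paper) of Proposition 10(1):
# kappa[Phi[v], y] computed directly from Definition 2 agrees with the closed form
#   -g_yy cos(phi) sin(theta) / (cos(phi)^2 + (cos(theta) sin(phi) - g_y sin(theta))^2)^(3/2).
import sympy as sp
import random

x, y, th, ph = sp.symbols('x y theta phi', real=True)
g = -x**2 - y**2          # the surface of Examples 25 and 28; any g(x, y) works here

Phi = sp.Matrix([g*sp.sin(th) - x*sp.cos(th)*sp.cos(ph) - y*sp.cos(th)*sp.sin(ph),
                 y*sp.cos(ph) - x*sp.sin(ph)])
Phi_y, Phi_yy = Phi.diff(y), Phi.diff(y, 2)

# Definition 2: kappa = det(Phi_y  Phi_yy) / ||Phi_y||^3
kappa_def = (Phi_y[0]*Phi_yy[1] - Phi_y[1]*Phi_yy[0]) \
    / (Phi_y[0]**2 + Phi_y[1]**2)**sp.Rational(3, 2)

gy, gyy = g.diff(y), g.diff(y, 2)
kappa_closed = -gyy*sp.cos(ph)*sp.sin(th) \
    / (sp.cos(ph)**2 + (sp.cos(th)*sp.sin(ph) - gy*sp.sin(th))**2)**sp.Rational(3, 2)

random.seed(0)
for _ in range(10):
    pt = {x: random.uniform(-1, 1), y: random.uniform(-1, 1),
          th: random.uniform(0.3, 2.8), ph: random.uniform(0.0, 6.28)}
    assert abs(float(kappa_def.subs(pt)) - float(kappa_closed.subs(pt))) < 1e-9

# at the origin this reduces to the value stated in Example 25
print(sp.simplify(kappa_def.subs({x: 0, y: 0})))
```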
{ "id": "2310.05087", "title": "A view-parametric extension of d'Ocagne-Koenderink formula for a surface\n in $\\mathbb{R}^3$", "authors": "Ken Anjyo and Yutaro Kabata", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We introduce a general random model of a combinatorial optimization problem with geometric structure that encapsulates both linear programming and integer linear programming. Let $Q$ be a bounded set called the feasible set, $E$ be an arbitrary set called the constraint set, and $A$ be a random linear transform. We define and study the $\ell^q$-*margin*, $$\mathcal{M}_q := d_{\ell^q}\left(AQ, E\right)\,.$$ The margin quantifies the feasibility of finding $y \in AQ$ satisfying the constraint $y \in E$. Our contribution is to establish strong concentration of the $\ell^q$-margin for any $q \in (2,\infty]$, assuming only that $E$ has permutation symmetry. The case of $q = \infty$ is of particular interest in applications---specifically to combinatorial "balancing" problems---and is markedly out of the reach of the classical isoperimetric and concentration-of-measure tools that suffice for $q \le 2$. Generality is a key feature of this result: we assume permutation symmetry of the constraint set and nothing else. This allows us to encode many optimization problems in terms of the margin, including random versions of: the closest vector problem, integer linear feasibility, perceptron-type problems, $\ell^q$-combinatorial discrepancy for $2 \le q \le \infty$, and matrix balancing. Concentration of the margin implies a host of new sharp threshold results in these models, and also greatly simplifies and extends some key known results. address: | D.J. AltschulerDepartment of Mathematical Sciences\ Carnegie Mellon University author: - Dylan J. Altschuler bibliography: - feas.bib title: Zero-One laws for random feasibility problems --- # Introduction Balancing covariates in experimental design, sparsifying a graph, and training a single-layer neural net are seemingly disparate combinatorial optimization problems. Nonetheless, they as well as a wide range of other problem can be recast as generalizations of the Closest Vector Problem, a core problem in integer programming. > [\[txt:orig\]]{#txt:orig label="txt:orig"} *Find the $\ell^q$-closest point of a set $Q$ to another set $E$.* This is highly non-trivial for general $Q$ and $E$; crucially, properties like convexity are not assumed. A natural random model is to take a random linear transformation of either $Q$ or $E$. **Definition 1** (Generalized Random Feasibility). Let $Q \subset \mathbb{R}^{N}$ and $E \subset \mathbb{R}^{M}$ be sets called the feasible set and constraint set, respectively. Fix a matrix $A \in \mathbb{R}^{M \times N}$ with independent standard normal entries. The $\ell^q$-margin $\mathcal{M}_q(A) := \mathcal{M}_{q,Q,E}(A)$ is defined as: $$\mathcal{M}_q(A) := \min_{\sigma \in Q} d_{\ell^q}\left(A\sigma, E\right)\,,$$ and the $(A,Q,E)$-feasibility problem is the task of determining if $\mathcal{M}_q$ is zero. The word margin is chosen in analogy to terminology from the literature of perceptron models. The margin quantifies the "distance to satisfiability" for the following program: find $\sigma \in Q$ with $A\sigma \in E$. In this program, $A$ and $E$ encode a random set of constraints on $Q$, and the margin $\mathcal{M}_q(A)$ measures the least the constraints can be violated by optimizing over $\sigma \in Q$. In particular, the margin is zero if and only if this program is feasible. The exponent $q$ controls how heavily the largest violations are penalized. 
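To make the definition concrete, the following toy computation (ours, purely illustrative and not part of the paper) evaluates the $\ell^q$-margin by brute force for a small instance with $Q$ the normalized discrete cube and $E=\{0\}$, i.e. the $\ell^q$-discrepancy of a small Gaussian matrix considered later in the paper.

```python
# Toy brute-force illustration (ours) of Definition 1: the l^q-margin
#   M_q(A) = min_{sigma in Q} dist_q(A sigma, E)
# for Q = N^{-1/2} {-1, +1}^N and E = {0}, i.e. the l^q discrepancy of A.
import itertools
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 12                      # small enough to enumerate all 2^N sign vectors
A = rng.standard_normal((M, N))

def margin(A, q):
    N = A.shape[1]
    best = np.inf
    for signs in itertools.product([-1.0, 1.0], repeat=N):
        sigma = np.array(signs) / np.sqrt(N)       # Q is a subset of the unit ball
        y = A @ sigma                              # a point of A*Q
        dist = np.max(np.abs(y)) if q == np.inf else np.sum(np.abs(y)**q)**(1/q)
        best = min(best, dist)                     # distance to E = {0}
    return best

for q in (2, 4, np.inf):
    print(f"l^{q}-margin (discrepancy) of A: {margin(A, q):.4f}")
```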
In the special case of the feasible set $Q$ being a subset of the integer lattice and the constraint set $E$ being a rectangle $[-\infty, b_1] \times \dots \dots [-\infty, b_{M}]$ for some vector $b \in \mathbb{R}^{M}$, we recover the canonical form of random integer linear feasibility. Namely, the margin is zero if and only if there exists $\sigma \in Q$ satisfying $$(A\sigma)_i \le b_i\,, \quad\forall\, i \in [M]\,.$$ If $Q$ is $\mathbb{R}^N$ instead, we recover random linear programming. The technical contribution of this article is strong concentration bounds for $\mathcal{M}_q$ under the assumptions that $E$ has sufficient permutation symmetry and $Q$ is bounded. Concentration for the margin can be interpreted as a sharp threshold as follows. Define the $\ell^q$-expansion of the constraint set $E$ by $$E_\delta := \left\{x \in \mathbb{R}^{M} ~:~ d_q(x, E) \le \delta\right\}\,.$$ The $\ell^q$-margin is exactly the smallest $\delta$ so that $E_\delta \cap AQ$ is non-empty. Thus, if the margin has fluctuations on some vanishing scale, then as one expands $E$ with respect to $\ell^q$, the probability that it contains a point of $AQ$ will jump from zero to one in a vanishing window. It is worth highlighting that this notion of sharp threshold is *non-asymptotic* in an important sense. The dimension is held fixed while $\delta$ is varied. The existence of a sequence of problems indexed by $N$ or $M$ is not assumed. In the examples we will introduce shortly, this will be a significant departure from previous work and a source of increased generality. ## Main Results **Definition 2**. Say a set $E \subset \mathbb{R}^{n}$ has *permutation symmetry* if for any permutation $\pi \in S_{n}$ and $x \in \mathbb{R}^{n}$, $$(x_1, \dots, x_n) \in E \quad \text{if and only if} \quad \left(x_{\pi(1)},\dots, x_{\pi(n)}\right) \in E\,.$$ This is a combinatorial notion of regularity. Typical examples include Cartesian products $E := (E_0)^{M}$ or the $\ell^p$ unit ball for any $p$. **Theorem 1** (Main result: concentration of the margin). *There is a universal constant $C>0$ so the following holds. Let $Q \subset \mathbb{R}^{N}$ be a subset of the Euclidean unit ball and $E$ have permutation symmetry. For $q \in [2,\infty]$, $$\label{eq:main} \mathrm{Var}\left(\mathcal{M}_q(A)\right) \le \frac{C}{1 + \left(\frac{1}{2} - \frac{1}{q}\right) \log M} \,.$$* By homogeneity, the assumption that $Q$ is a subset of the unit ball could be replaced by multiplying the right-hand side of [\[eq:main\]](#eq:main){reference-type="ref" reference="eq:main"} by a factor of $\max_{\sigma \in Q} \|\sigma\|_2^2$. At least for $q = \infty$, our result cannot be improved without further assumptions. Letting $Q$ be a singleton on the $\ell^2(\mathbb{R}^N)$-unit sphere and $E$ be the origin in $\mathbb{R}^M$ makes this clear (see Chapter 5, section 6 of [@chat-sc]). However, [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} should be seen as an important but purely qualitative improvement over the trivial bound of $\mathrm{Var}\left(\mathcal{M}_q(A)\right) \le 1$, which follows from the Gaussian Poincaré inequality (see details in the proof of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"}). In many of the $(A,Q,E)$-feasibility problems that appear in actual applications, it should be the case that $\mathrm{Var}\left(\mathcal{M}_q(A)\right) \le 1/\mathrm{poly}(M)$.\ The assumption of permutation symmetry on $E$ is a serious restriction. 
Unfortunately, it cannot be completely dropped: letting $Q$ be a singleton and $E = \mathbb{R}^{M-1} \times [-1,1]$, clearly the fluctuations of the $\mathcal{M}_\infty$ are order one. Nonetheless, it is possible to somewhat relax the permutation symmetry condition on $E$. The following extension allows for imposing several different types of constraints on a feasibility problem. **Theorem 2** (Block symmetry suffices). *There is a universal constant $C>0$ so that the following holds. Let $Q \subset \mathbb{R}^N$ be a subset of the Euclidean unit ball and let $E = E_1 \times \dots, E_{k}$ where $E_i \subset \mathbb{R}^{M_i}$ has permutation symmetry for each $i$ and $\sum_{i=1}^k M_i = M$. Abbreviate $m := \min_{i \in k} M_i$. For each $q \in [2,\infty]$, $$\mathrm{Var}\left(\mathcal{M}_q(A)\right) \le C \left(1 + \frac{1}{2}\log \left(\frac{m^{1-\frac{2}{q}}}{k } \vee 1\right)\right)^{-1}\,.$$* **Remark 1**. Let us briefly highlight the generality of [\[thm:margin,cor:block\]](#thm:margin,cor:block){reference-type="ref" reference="thm:margin,cor:block"}. While there are permutation symmetry requirements on $E$, there is nearly complete freedom for $Q$. The feasible set can be discrete, continuous, or even a singleton. Canonical examples include the sphere, solid cube, discrete cube, bounded subset of the integer lattice, and arbitrary subsets (such as level-sets of an arbitrary function) of any of the previous examples. Our results hold without distinguishing between these situations. Our main technical tool is Talagrand's $L^1$-$L^2$ (Gaussian) hypercontractive inequality, given below as [Lemma 2](#lemma:L1L2){reference-type="ref" reference="lemma:L1L2"}. There is a long and rich history of this inequality being used to prove similar sharp thresholds in various settings. See the expositions of Kalai [@kalai-summary] and Chatterjee [@chat-sc] for a wealth of examples in the Boolean and Gaussian settings, respectively. In particular, our result is similar to the classical theorems of Friedgut and Bourgain [@fried] and Friedgut and Kalai [@fried-kalai] that establish a sharp threshold for any permutation-transitive monotone Boolean function. The assumption of permutation-transitivity is used in their theorems for the same purpose as in ours. ## Future Directions Some open problems that seem attractive but quite challenging: - Obtain polynomial improvements over the given rates in the case that $Q$ has large cardinality and is well-spread (i.e. the average inner product over all pairs of elements of $Q$ is bounded away from one). A natural place to start is the dynamical variance identity given in Lemma 2.1 of [@chat-sc]. - Explore the quantitative the trade-off between the permutation symmetry of $E$ and the concentration of the margin. - Extend these results to other disorders for the matrix $A$. Independent Boolean or Rademacher entries are natural and may be easy. Independent columns would be quite interesting but potentially harder. ## Notation The phrase "almost every" will always be with respect to the Gaussian or Lebesgue measure. Since they are mutually absolutely continuous, this is without ambiguity. We say a function $F: \mathbb{R}^{d} \to \mathbb{R}$ is $(L,q)$-Lipschitz if $|F(x)-F(y)| \le L \|x - y\|_{q}$ for all $x$ and $y$. The set $E$ will always be assumed closed. 
# Applications and Previous Works ## Random Integer Programming Average-case linear programming and integer linear programming have been the subject of intense study over the past few decades, in large part due to the enormous gap between worst-case guarantees and average-case empirical performance of (integer) linear programming algorithms [@roughgarden-summary; @spielman-teng; @dadush-smooth]. Much of the work on integer programming focuses on understanding integrality gaps. Both the random setting [@dyer-frieze; @chand-vemp; @gap1; @gap2] and deterministic setting have been extensively considered. (The latter is too rich to introduce here; we refer the reader to [@dadush-thesis].) The study of integrality gaps often utilizes the notions of "sensitivity" and "proximity," which quantify the distance between the vertices and lattice points contained in a polytope [@prox; @sensitivity]. Much more closely related are the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP) [@dadush-thesis; @cvp]. The CVP asks: given a lattice and a target vector $v$, find $w$ in the lattice minimizing $\|v-w\|_2$. The SVP asks the same with $v = 0$ and the origin removed from the lattice. The author is not aware of work prior to this article on random versions of these problems. ## Combinatorial Discrepancy Combinatorial discrepancy arises as a fundamental quantity in a plethora of fields including combinatorics, geometry, optimization, information theory, and experimental design [@Spe94; @chaz; @matou]. For an $M \times N$ matrix $A$, the combinatorial discrepancy $\mathrm{disc}(A)$ is given by $$\mathrm{disc}(A) := \min_{\sigma \in \frac{1}{\sqrt {N}}\left\{-1,+1\right\}^N} \|A\sigma\|_\infty\,.$$ There are a number of outstanding open conjectures that seem out of reach of current tools, motivating much recent interest in random [@chand-vemp; @APZ; @turner2020balancing; @hoberg-rothvoss; @franks2020discrepancy; @potukuchi2018discrepancy; @bansal-meka; @ezra-lovett; @dja-jnw; @ALS1; @ALS2; @PX; @ss; @gamarnik-sbp] and semi-random models [@bansal-smooth1; @bansal-smooth2; @unified; @rada-smooth]. It is a general feature in random discrepancy that the second moment method will yield a coarse threshold [@APZ], but not a sharp threshold (unless $M \ll N$ [@turner2020balancing]). The second moment method is generically difficult to improve upon. However, a recent series of breakthroughs overcame this technical hurdle and established extremely precise control in the square regime $M \asymp N$. Perkins and Xu [@PX] and Abbe, Li, and Sly [@ALS1] simultaneously obtained a sharp threshold for the discrepancy of Gaussian and Rademacher matrices, respectively, as well as strong control over the number of solutions. Subsequently, the discrepancy of a Gaussian matrix was shown by the author to concentrate within a $\mathcal{O}\left(\log(N)/N\right)$ window [@dja-crit]; the logarithm has been removed by Sah and Sawhney [@ss]. The mentioned breakthroughs all are quite technically involved. Our main theorem recovers a very simple, direct, and different proof (albeit with a highly non-optimal rate) of the recently established sharp threshold for discrepancy. Additionally, we also obtain some truly new results in related settings. A natural generalization of combinatorial discrepancy, the $\ell^q$ discrepancy of a matrix is given by $$\mathrm{disc}_q(A) := \min_{\sigma \in \left\{-1,+1\right\}^N} \|A\sigma\|_q\,.$$ While $\ell^q$ discrepancy has certainly been extensively studied for deterministic matrices (see e.g. 
[@colorful; @anynorm] for a list of references; the literature dates back to at least a question of Dvoretsky in 1960, and is too vast to introduce here), to the best of our knowledge nothing is known about the random setting. Similarly to the $q =\infty$ setting, it is natural to expect the second moment method to yield a coarse threshold. Any such future result can be automatically upgraded to a sharp threshold by applying our main result: **Theorem 3** (Sharp threshold for $\ell^q$ discrepancy). *Let $A \in \mathbb{R}^{N \times N}$ have independent standard normal entries and let $2 \le q \le \infty$. Then: $$\frac{\mathbb{E}\left[\mathrm{disc}_q(A)\right]}{\sqrt{\mathrm{Var}\left(\mathrm{disc}_q(A)\right)}} \ge \Omega\left(N^{\frac{1}{q}}\sqrt{ 1 + \left(\frac{1}{2} - \frac{1}{q}\right)\log N} \right)\,.$$* *Proof.* The variance of $\mathcal{M}_q$ can be upper-bounded by applying [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} with $E = \left\{0\right\}$ and $Q$ the normalized discrete cube $N^{-1/2}\left\{-1,+1\right\}^N$. The expectation of $\mathcal{M}_q$ is lower-bounded by a constant multiple of $N^{1/q}$ via a simple application of Markov's inequality. Indeed, it suffices to show that, for all $\sigma \in Q$, $\mathbb{P}\left[\|A\sigma\|_q < cN^{1/q}\right] < 2^{-(1+c')N}$. For a particular $\sigma$, the vector $Y := A\sigma$ is distributed as a standard normal in $\mathbb{R}^{N}$. Let $\mathcal{G}$ denote the event that at least $\epsilon N$ entries of $Y$ exceed $\epsilon$ in absolute value. On the event $\mathcal{G}$, we have $\|A\sigma\|_q \ge \epsilon^{1+1/q}N^{1/q}$. Standard tail bounds for the binomial distribution yield $$\mathbb{P}\left[\mathcal{G}^c\right] \le \exp\left\{-\Omega\left(N\log(1/\epsilon)\right)\right\} \le 2^{-2N}\,,$$ where the second inequality follows by taking $\epsilon$ sufficiently small. Taking $c = \epsilon^{1+1/q}$ and $c' = 1$, we are done. ◻ The result and proof remain essentially unchanged if we optimize discrepancy over any of the feasible sets given in [Remark 1](#rem:feas-sets){reference-type="ref" reference="rem:feas-sets"} rather than taking $Q$ as the discrete hypercube. For example, another natural problem for which, to the best of our knowledge, there are no published results is the relaxation of random discrepancy to the sphere rather than the discrete cube. This is analogous to the relaxation of "Ising" spin glasses to "spherical" spin glasses in statistical physics. **Theorem 4** (Sharp threshold for the symmetric spherical perceptron). *For any $q \in [2,\infty]$ and $\alpha \in (0,\infty)$, let $A \in \mathbb{R}^{\alpha N \times N}$ have iid standard normal entries. Then $$\mathrm{Var}\left(\min_{\sigma \in S^{N-1}} \|A\sigma\|_q\right) = \mathcal{O}\left(\frac{1}{1 + \left(\frac{1}{2}-\frac{1}{q}\right)\log N}\right)\,.$$* It seems reasonable to expect the second moment method to yield a coarse threshold in this setting as well, which can thus be automatically promoted to a sharp threshold.\ We also note a related work of Minzer, Sah, and Sawhney [@minzer2023perfectly] that appeared during the writing of this article. They use the Boolean version of Talagrand's $L^1$-$L^2$ inequality to establish a sharp threshold in the "perfectly friendly bisection" problem. They note that their methods can be used to show concentration of random $\ell^\infty$ discrepancy with Bernoulli disorder. ## Perceptron Models The Binary Perceptron problem is an enduring model of the classification power of a single-layer neural net. 
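As a purely illustrative rendering of the feasibility viewpoint used throughout, the following minimal Python sketch checks a toy $(A,Q,E)$-feasibility instance by brute force, with $Q$ the normalized cube and $E = [K,\infty)^M$; the toy sizes, the random seed and the helper name `feasible` are hypothetical choices made only for illustration and are not taken from the results above.

```python
# Illustration only: brute-force check of a tiny (A, Q, E)-feasibility instance with
# Q = N^{-1/2}{-1,+1}^N and E = [K, infinity)^M (binary perceptron constraints).
# All sizes below are hypothetical toy values.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 12, 6, 0.1                      # toy dimensions and margin level
A = rng.standard_normal((M, N))

def feasible(A, K):
    """True iff some sigma in N^{-1/2}{-1,+1}^N satisfies (A sigma)_i >= K for all i."""
    M, N = A.shape
    for signs in itertools.product((-1.0, 1.0), repeat=N):
        sigma = np.asarray(signs) / np.sqrt(N)
        if np.all(A @ sigma >= K):
            return True
    return False

print(feasible(A, K))
```

In this toy picture, the capacity question of the literature fixes $K$ and asks how many rows can be appended to $A$ before such a check fails, while the dual question pursued here fixes the number of rows and asks for the largest workable $K$.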
First introduced in the 1960s, it remains a fascinating yet stubborn source of open problems. The binary perceptron is exactly the $(A,Q,E)$-feasibility problem with parameters: $$q = \infty\,, \quad Q = N^{-1/2}\left\{-1,+1\right\}^N \,, \quad E = [K,\infty)^{\alpha N}\,.$$ In the literature surrounding the perceptron, the following approach is taken: fix $K$ and ask for the largest $\alpha$, called the *capacity*, such that the above problem is feasible. This exploits the fact that there is a natural sequence of problems indexed by $\alpha$. Namely, add independent rows to $A$ and extend $E$ by taking Cartesian products with more copies of $[K,\infty)$. Concretely, the capacity $\bm{\alpha_*}$ is the random variable $$\max\left\{\alpha : \exists \sigma \in Q,~ \left\langle A_i, \sigma \right\rangle \ge K, \quad \forall i \in [\alpha N]\right\}\,,$$ where $A_i$ is the $i$'th row of $A$. Concentration of $\bm{\alpha_*}$ was shown somewhat recently in two impressive and technically involved works [@xu; @nakajima-sun]. Xu used the Fourier-analytic pseudo-junta theorem of Hatami to show concentration of $\bm{\alpha_*}$, establishing the first sharp threshold result for the perceptron [@xu]. Subsequently, Nakajima and Sun established a sharp threshold for a wide variety of related models [@nakajima-sun]; their methods extend some prior work of Talagrand [@tal-selfave; @tal-mf2]. The generalization studied by Nakajima and Sun corresponds to letting $E:= (E_0)^{\alpha N}$, where $E_0$ can be any set that satisfies a mild structural assumption[^1]. In our notation, previous work can be summarized as: **Theorem 5** (Sharp threshold for the capacity of the generalized perceptron [@xu; @nakajima-sun]). *Let $q = \infty$, $Q = \left\{-1,+1\right\}^{N}$ and $E_{\alpha} := (E_0)^{\alpha N}$. Then there exists some sequence $a_c := a_c(N, E_0)$ such that: for any $\epsilon> 0$ and $N$ sufficiently large, the $(A,Q,E_\alpha)$-feasibility problem is satisfiable with high probability for $\alpha < a_c - \epsilon$ and unsatisfiable with high probability for $\alpha > a_c + \epsilon$.* We take a dual approach: fix an $\alpha$, and look for the largest $K$ so that there exists $\sigma \in Q$ with $A\sigma \in E$. Applying [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} readily recovers a similar sharp threshold in the parameter $K$. Additionally, we gain a significant increase in the generality of the constraint sets. As previously mentioned, our notion of sharp threshold for the margin is *non-asymptotic*; it is well-defined for a fixed dimension. Thus we need not assume that $E$ has product structure $(E_0)^{\alpha N}$. Instead, we only need $E$ to be permutation invariant. (For example: the $\ell^2$-unit sphere is permutation invariant but not a product set). Recall the notation $E_{\delta,q}$ for the $\ell^q$-metric expansion of the set $E$, $$E_{\delta,q} := \left\{x \in \mathbb{R}^{\alpha N} ~:~ d_q(x, E) \le \delta\right\}\,.$$ Our result is the following immediate consequence of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"}: **Theorem 6** (Sharp threshold for the margin of the generalized perceptron). *Let $2 < q \le \infty$, $Q$ be a feasible set and $E$ be permutation invariant. 
Then there exists some sequence $K_c := K_c(N)$ such that: for any $\epsilon> 0$ and $N$ sufficiently large, the $(A,Q,E_K)$-feasibility problem is unsatisfiable with high probability for $K < K_c - \epsilon$ and satisfiable with high probability for $K > K_c + \epsilon$.* Note that the setting considered here is a significant generalization of the setting of [Theorem 5](#thm:prev-perceptron){reference-type="ref" reference="thm:prev-perceptron"}. Not only do we allow for permutation-invariant constraint sets rather than just product sets, we also are able to take $Q$ as e.g. the sphere. While the positive spherical perceptron is already well-understood [@spherical-perceptron; @tal-mf1; @tal-mf2], to the best of our knowledge a sharp threshold result was not previously known for the negative spherical perceptron. (The "positive" and "negative" spherical perceptron refer to parameter regimes for $\alpha$ for which $K_c$ is positive or negative, respectively. The negative perceptron is conjectured to exhibit a property called "full replica symmetry breaking", which makes it difficult to analyze.)\ A natural question is whether these two different notions of sharp threshold considered in [Theorem 5](#thm:prev-perceptron){reference-type="ref" reference="thm:prev-perceptron"} and [Theorem 6](#thm:new-perceptron){reference-type="ref" reference="thm:new-perceptron"} coincide when they are both defined---i.e. when $E = (E_0)^{\alpha N}$ for some $E_0 \subset \mathbb{R}$. Very loosely speaking, if $K_c$ and $\bm{\alpha_*}$ are of the same order, then a sharp threshold in the margin and a sharp threshold in the capacity are equivalent. This is called the "proportional regime" in the constraint satisfaction literature. For the familiar reader, a slightly more precise statement: the log-number of satisfying $\sigma$ needs to decrease at asymptotically the same rate if we add $\delta N$ new constraints or decrease $K$ by $\delta$, for any positive constant $\delta$. A formal and rigorous statement of this is available in the special case of $Q = N^{-1/2}\left\{-1,+1\right\}^N$ and $E_0 = [-K,K]$, given by Lemma 3.2 of [@dja-crit].\ We conclude with a final generalization: [Theorem 2](#cor:block){reference-type="ref" reference="cor:block"}---the extension of our main theorem to block-symmetric constraint sets---immediately yields a sharp threshold for the margin of perceptron problems with "random labels," such as the model raised in [@randlabel]. This corresponds to a feasibility problem with constraint set $E = \left(E_0\right)^{M_1} \times (E_0^c)^{M - M_1}$ and $M_1 \asymp M$. ## Matrix Balancing The "Matrix Spencer" conjecture of R. Meka asks the following: given $N$ symmetric matrices $A_1,\dots,A_N$ of dimension $d \times d$, each of operator norm at most one, determine the feasibility of finding $\sigma \in \frac{1}{\sqrt{N}}\left\{-1,1\right\}^N$ such that $$\left\|\sum_i \sigma_i A_i\right\|_{\mathrm{op}} \le 1\,.$$ This problem has interesting applications to quantum communication complexity [@hopkins] and graph sparsification [@sparse1; @sparse2]. There has been some recent progress [@4dev; @polylog; @hopkins; @mirror] in the low-rank setting. Very recently, the random version of this problem has been considered; a lower bound by the first moment method is given in Theorem 1.13 of [@tim-mtx]. Due to the extremely strong lower-tail concentration of the operator norm of a GOE, the main regime of interest is $N \asymp d^2$. 
Only when $N$ is at least this large does optimizing over $\sigma$ allow $\|\sum_i\sigma_iA_i\|$ to be changed to first order; conversely, if $N$ is much larger, the discrepancy will be vanishing [@priv-com]. **Theorem 7** (Sharp threshold for matrix balancing). *Let $A_1,\dots,A_N$ be $d\times d$ GOE matrices, and define $$\mathrm{disc}(A_1,\dots,A_N) := \min_{\sigma \in N^{-1/2}\left\{-1,+1\right\}^N} \frac{1}{\sqrt{d}}\left\|\sum \sigma_i A_i \right\|_{\mathrm{op}}\,.$$ Then, for $N \asymp d^2$, $$\frac{\mathbb{E}\left[\mathrm{disc}(A_1,\dots,A_N)\right]}{\sqrt{\mathrm{Var}\left(\mathrm{disc}(A_1,\dots,A_N)\right)}} \ge \Omega\left(\sqrt{d}~\right) \,.$$* The proof is deferred to the end of the next section. Matrix balancing can be seen as an integer feasibility problem as follows: flatten each matrix $A_i$ into a $d^2 \times 1$ column vector and let $A = (A_1,\dots,A_N)$. Let the constraint set $E \subset \mathbb{R}^{d^2}$ be the flattening of the $d\times d$ operator norm ball, and let $Q$ be the discrete cube $N^{-1/2}\left\{-1,+1\right\}^N$. However, matrix balancing is not easily quantified in terms of the $\ell^q$-margin and does not quite fit nicely into the framework of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} for two reasons. First, $E$ does not have much permutation symmetry since the operator norm of a matrix is only invariant under permuting rows or columns, not arbitrary entries. Second, [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} would give results on the distance to $E$ in terms of entry-wise norms, which is not quite the right notion here. Nonetheless, the ideas behind the proof of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"} are extremely general and easy modifications suffice here. # Preliminaries Before proving our main theorems, we introduce our main tools as well as some technical observations that will help us apply them. We begin with the celebrated Gaussian Poincaré inequality. **Lemma 1** (Gaussian Poincaré). *Let $n$ be a positive integer, $f: \mathbb{R}^{n} \to \mathbb{R}$ be an absolutely continuous function, and $\gamma^n$ denote the standard Gaussian measure on $\mathbb{R}^n$. Then $$\mathrm{Var}\left(f\right) \le \mathbb{E}_{\gamma^n}\left[\|\nabla f\|_{2}^2 \right]\,.$$* The Gaussian Poincaré inequality is often quite useful, but sometimes fails to give optimal rates. In his influential monograph [@chat-sc], Chatterjee gives the name "superconcentration" to the variance of a random variable being far smaller than the Poincaré inequality implies. Chatterjee also shows the equivalence of superconcentration to the fascinating properties of "chaos" and "multiple valleys." It remains a major open problem in this area to establish general methods for providing polynomial improvements over the Gaussian Poincaré inequality. However, there is a general tool for obtaining logarithmic improvements: **Lemma 2** (Talagrand's $L^1$-$L^2$ inequality; Theorem 5.1 of [@chat-sc]). *There is some constant $C$ so that the following holds. Let $\gamma^n$ be the Gaussian measure on $\mathbb{R}^n$ for any $n$, and $f: \mathbb{R}^n \to \mathbb{R}$ any absolutely continuous function. 
$$\label{eq:l1l2} \mathrm{Var}\left(f\right) \le C \sum_{i=1}^n \|\partial_{i} f\|_{L^2(\gamma^n)}^2 \left( 1 + \log\left(\frac{\|\partial_{i} f\|_{L^2(\gamma^n)}}{\|\partial_{i} f\|_{L^1(\gamma^n)}}\right) \right)^{-1}\,.$$ In particular, $$\label{eq:l1l2-jensen} \mathrm{Var}\left(f\right) \le C \left(\sum_{i=1}^n \|\partial_{i} f\|_{L^2(\gamma^n)}^2 \right)\left( 1 + \frac{1}{2}\log\left(\frac{\sum_i\|\partial_{i} f\|_{L^2(\gamma^n)}^2}{\sum_i\|\partial_{i} f\|_{L^1(\gamma^n)}^2}\right) \right)^{-1}\,.$$* The usual formulation of Talagrand's inequality is [\[eq:l1l2\]](#eq:l1l2){reference-type="ref" reference="eq:l1l2"}. Here, [\[eq:l1l2-jensen\]](#eq:l1l2-jensen){reference-type="ref" reference="eq:l1l2-jensen"} will be more convenient for us. It readily follows from [\[eq:l1l2\]](#eq:l1l2){reference-type="ref" reference="eq:l1l2"} by applying Jensen's inequality to the function $g(x) = (1 + \log(x)/2)^{-1}$, which is concave on $(0,1)$. The details can be found in the proof of Theorem 5.4 of [@chat-sc].\ Talagrand's inequality is based on hypercontractivity---a way of quantifying the extreme smoothing properties of the heat flow. Roughly speaking, the Poincaré inequality can fail to yield optimal rates because it forces a heavy quadratic penalty on regions where $f$ has a large derivative, even if the Gaussian measure assigns little mass to these locations. Hypercontractivity allows for mitigation of this penalty.\ We plan to apply Talagrand's inequality to the variance of the margin. This requires differentiating the margin. Since the margin is defined in terms of a distance between two sets, this will consist of two tasks. First, interchanging a derivative and an infimum. Second, differentiating the $\ell^q$ distance. The former will be accomplished by the classical "envelope theorem" of Milgrom and Segal (Theorem 1 in [@milgrom-segal]). **Lemma 3** (Envelope Theorem). *Let $f(x,t)$ be a map $f: X \times [0,1] \to \mathbb{R}$ where $X$ is a subset of $\mathbb{R}^{n}$. Define the value function $$V(t) := \sup_{x \in X} f(x,t)\,.$$ For any $t \in (0,1)$ and any $x^* \in \arg\max_{x \in X} f(x,t)$, if $V'(t)$ and the partial derivative $f_t(x^*,t)$ both exist, $$V'(t) = f_t(x^*,t)\,.$$* For the latter task of differentiating the $\ell^q$ distance, we collect some easy regularity results on $\ell^q$ distances. In what follows, let $E \subset \mathbb{R}^{M}$ be an arbitrary closed set and define $$f_q(x) := d_q(x,E)\,.$$ Recall our notation that a function $F: \mathbb{R}^{M} \to \mathbb{R}$ is called $(L,q)$-Lipschitz if it is Lipschitz continuous with constant $L$ with respect to $\ell^q$ perturbations to the input. **Proposition 1**. *Fix $q \in [2,\infty)$ and a non-empty closed set $E \subset \mathbb{R}^{M}$. Let $f_q(x) := d_q(x,E)$ be the $\ell^q$ distance between $x$ and $E$.* - *$f_q$ is $(1,q)$-Lipschitz continuous everywhere.* - *$f_q$ is absolutely continuous everywhere and continuously differentiable for Lebesgue a.e. $x$.* - *Let $x$ be a point of differentiability for $f_q(x)$. Then there is a unique $\ell^q$-projection of $x$ onto $E$; denote it by $z$. Letting $v := x-z$ and $i \in [M]$, we additionally have: $$\left|\left(\nabla f_q(x)\right)_i\right| = \frac{|v_i|^{q-1}}{\|v\|_{q}^{q-1}}\,.$$* These are standard facts, but we give a proof for completeness. *Proof of [Proposition 1](#prop:lip){reference-type="ref" reference="prop:lip"}.* Let $z \in E$ be arbitrary and let $x$ and $y$ be outside $E$. Fix $q \ge 2$ and set $d := d_q$. 
By the triangle inequality, $$d(x,E) \le d(x,z) \le d(x,y) + d(y,z)\,.$$ Taking an infimum over $z \in E$ yields $d(x,E) - d(y,E) \le d(x,y)$. Reversing the roles of $x$ and $y$ yields a symmetric bound, completing the proof of the first claim.\ By equivalence of norms (i.e. since $\|\cdot\|_q \le \|\cdot\|_2 \le \sqrt{M}\,\|\cdot\|_q$ on $\mathbb{R}^M$ for any $q \in [2,\infty]$), we have that $f_q$ is Lipschitz continuous with respect to the Euclidean norm. This implies a.e. differentiability by Rademacher's theorem, as well as absolute continuity everywhere. Continuity of the derivative will follow by explicit computation.\ Let $x$ be a point of differentiability of $f_q$ and $y$ be an $\ell^q$-metric projection of $x$ onto $E$. By definition of the derivative, there exists a unique vector $v$ such that for $w$ sufficiently near $x$, $$\label{eq:deriv} f_q(w) - f_q(x) = v \cdot (w-x) + \mathrm{o}\left(\|w-x\|\right)\,.$$ Since we just showed that $f_q$ is $(1,q)$-Lipschitz, we have $\|v\|_{q^*} \le 1$, where $q^*$ is the conjugate exponent to $q$. Letting $w = x + \epsilon(y - x)$, we obtain by homogeneity of the $\ell^q$ norm $$\begin{aligned} |f_q(w) - f_q(x)| &= \epsilon\|y-x\|_q\,. \end{aligned}$$ Rearranging [\[eq:deriv\]](#eq:deriv){reference-type="ref" reference="eq:deriv"}, $$\label{eq:holder-equal} |v \cdot (w-x)| = \epsilon\|y-x\|_{q} + \mathrm{o}\left(\epsilon\right)\,.$$ Since $\|v\|_{q^*} \le 1$, we have by Hölder's inequality $$\label{eq:holder-ineq} |v \cdot (w-x)| \le \|w-x\|_{q}\|v\|_{q^*} = \epsilon\|y-x\|_{q} \|v\|_{q^*} \le \epsilon\|y-x\|_{q}\,.$$ But [\[eq:holder-equal\]](#eq:holder-equal){reference-type="ref" reference="eq:holder-equal"} shows that Hölder's inequality [\[eq:holder-ineq\]](#eq:holder-ineq){reference-type="ref" reference="eq:holder-ineq"} actually holds with equality, up to some vanishing error. For $q < \infty$, it is a standard fact [@holder] that this implies $$|v_i| = \frac{|(y-x)_i|^{q-1}}{\|y-x\|_{q}^{q-1}}\,.$$ In summary, given a projection $y$ of $x$ onto $E$, we can explicitly solve for the derivative. It is clear that if there is another projection $y' \neq y$, this would contradict the uniqueness of $v$. With the formula for the derivative of $f_q$ established, continuity of the derivative is now clear by inspection. ◻ Although similar statements hold for $q = \infty$, there is a lack of uniqueness of the projection onto $E$, which makes direct analysis more difficult---although definitely still possible. We sidestep this issue with the standard approach of simply taking $q$ large. Let us make this precise. **Proposition 2**. *Let $X$ and $Y$ be random variables with finite second moments. Then: $$\label{eq:var-add} \mathrm{Var}\left(X+Y\right) \le 2\left(\mathrm{Var}\left(X\right) + \mathrm{Var}\left(Y\right) \right)$$ and $$\label{eq:var-perturb} \mathrm{Var}\left(X\right) \le 2\left(\mathbb{E}\left[|X-Y|^2\right] + \mathrm{Var}\left(Y\right) \right)\,.$$* *Proof.* Both are trivial consequences of the inequality $(a+b)^2 \le 2(a^2+b^2)$, valid for all real $a$ and $b$. ◻ **Proposition 3**. 
*For any $q \ge \log(M)^2$, we have $$\mathcal{M}_\infty = \mathcal{M}_q\left(1 + \mathcal{O}\left(\frac{1}{\log M}\right)\right)\,.$$* In particular, combining [Proposition 2](#prop:var-perturb){reference-type="ref" reference="prop:var-perturb"} and [Proposition 3](#prop:infty-approx){reference-type="ref" reference="prop:infty-approx"} will allow us to study the variance of $\mathcal{M}_\infty$ via the variance of $\mathcal{M}_{\log(M)^2}$, which is easier to differentiate due to [Proposition 1](#prop:lip){reference-type="ref" reference="prop:lip"}. We omit the proof of [Proposition 3](#prop:infty-approx){reference-type="ref" reference="prop:infty-approx"} since it is an immediate application of the classical fact that, on $\mathbb{R}^M$ and for such $q$, $$\|\cdot\|_{\infty} = \|\cdot\|_q \left(1 + \mathcal{O}\left(\frac{1}{\log M}\right)\right)\,.$$ We conclude our preliminary discussion by illustrating the utility of the Poincaré inequality with a quick proof of [Theorem 7](#thm:mtx-spencer){reference-type="ref" reference="thm:mtx-spencer"}, the sharp threshold for the random matrix balancing problem. The following classical fact is needed. **Lemma 4**. *Let $A$ be a symmetric matrix of full rank with distinct eigenvalues. Let $\lambda$ be the largest eigenvalue and let $u$ be the corresponding unit-norm eigenvector (which is unique up to sign). Then $$\label{eq:op-grad} \frac{\partial}{\partial A_{ij}}\lambda = u_iu_j\,.$$* *Proof of [Lemma 4](#lemma:op-grad){reference-type="ref" reference="lemma:op-grad"}.* Implicitly differentiate the formula $$Au = \lambda u\,,$$ and then left multiply by $u^t$. Using that $u^tdu = (d\|u\|_2^2)/2 = 0$, and then using symmetry of $A$ so that $u^tA = \lambda u^t$, we obtain the result. ◻ *Proof of [Theorem 7](#thm:mtx-spencer){reference-type="ref" reference="thm:mtx-spencer"}.* We apply the classical Gaussian Poincaré inequality to $\mathrm{disc}(A) := \mathrm{disc}(A_1,\dots,A_N)$. Clearly the operator norm of $\sum A_i\sigma_i$ for a particular $\sigma$ is Lipschitz in $A$, and thus so is $\mathrm{disc}(A)$. By Rademacher's theorem, both are then differentiable for almost every $A$. So, applying [Lemma 3](#thm:envelope){reference-type="ref" reference="thm:envelope"}, for almost every $A$ there is a vector $\sigma^*$ with $\mathrm{disc}(A) = \frac{1}{\sqrt{d}}\|\sum_i \sigma^*_i A_i\|_{\mathrm{op}}$ and, by [Lemma 4](#lemma:op-grad){reference-type="ref" reference="lemma:op-grad"} together with the chain rule, $$\partial_{(A_k)_{ij}} \mathrm{disc}(A) = \frac{\sigma_k^*}{\sqrt{d}}\, u_i^*u_j^*\,, \qquad i,j\in [d]\,,\ k \in [N]\,.$$ Here, $u^*$ denotes the (unit) eigenvector associated with the top eigenvalue of $\sum \sigma^*_i A_i$. Then: $$\begin{aligned} \mathrm{Var}\left(\mathrm{disc}(A)\right) \le \mathbb{E}\left[\|\nabla \mathrm{disc}(A) \|_2^2\right] = \frac{1}{d}~\mathbb{E}\left[\|\sigma^*\|_2^2\,\|u^*\|_2^4\right] = \frac{1}{d}\,,\end{aligned}$$ since $\|\sigma^*\|_2 = \|u^*\|_2 = 1$. This is far from sharp and can easily be improved, even polynomially. But it suffices for a sharp threshold. This concludes the upper bound on the variance. The lower bound that $\mathrm{disc}(A) \ge \Omega\left(1\right)$ follows from a first moment method (see Theorem 1.13 of [@tim-mtx]). ◻ # Concentration of the margin *Proof of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"}.* By [Proposition 3](#prop:infty-approx){reference-type="ref" reference="prop:infty-approx"} in conjunction with [Proposition 2](#prop:var-perturb){reference-type="ref" reference="prop:var-perturb"}, we may assume without loss of generality that $q \le \log(M)^2$. In particular, $q$ is finite (but possibly $M$-dependent).\ Let us verify the differentiability of the margin. Define the function $g(A,\sigma) = d_q(A\sigma, E)$. 
For any matrix $B \in \mathbb{R}^{M \times N}$, we have by the triangle inequality: $$\begin{aligned} \left|g(A,\sigma) -g(B,\sigma)\right| &\le d_q(A\sigma, B\sigma) \\ &\le \|A-B\|_{2,q}\|\sigma\|_2 \\ &\le \|A-B\|_{2,q}\,.\end{aligned}$$ The final inequality follows from the theorem assumption that the feasible set is bounded in $\ell^2(\mathbb{R}^N)$. By equivalence of matrix norms, we have that $g$ is $\ell^2$-Lipschitz with some finite (possibly $N$-dependent) constant. By Rademacher's theorem, $\nabla_A g(A,\sigma)$ exists for almost every $A$. Similarly, the margin is also Lipschitz and thus differentiable for almost every $A$. Indeed, by the triangle inequality: $$\begin{aligned} |\mathcal{M}_q(A) - \mathcal{M}_q(B)| \le \sup_{\sigma \in Q} \|A\sigma - B\sigma\|_q \le \|A-B\|_{2,q}\,.\end{aligned}$$ By [Proposition 1](#prop:lip){reference-type="ref" reference="prop:lip"} and the chain rule, we have for each $\sigma \in Q$ and almost every $A$ the identity: $$\left|\partial_{A_{ij}} g(A,\sigma)\right| = |\sigma_j| \frac{|v_i|^{q-1}}{\|v\|_{q}^{q-1}}\,, \quad \forall~i \in [M],~j \in [N]$$ where $z$ is the unique $\ell^q$-projection of $A\sigma$ onto $E$ and $v = A\sigma - z$. So, for almost every $A$ and any $\sigma^*$ with $g(A,\sigma^*) = \mathcal{M}_q(A)$, [Lemma 3](#thm:envelope){reference-type="ref" reference="thm:envelope"} yields: $$\left|\partial_{A_{ij}} \mathcal{M}_q(A)\right| = |\sigma_j^*| \frac{|v_i|^{q-1}}{\|v\|_{q}^{q-1}}\,, \quad \forall~i \in [M],~j \in [N]\,.$$ Here $v = A\sigma^* - z$, where $z$ again denotes the unique $\ell^q$-projection of $A\sigma^*$ onto $E$.\ We turn to bounding the variance of $\mathcal{M}_q$ using Talagrand's $L^1$-$L^2$ inequality. The quantities involved in [\[eq:l1l2-jensen\]](#eq:l1l2-jensen){reference-type="ref" reference="eq:l1l2-jensen"} of [Lemma 2](#lemma:L1L2){reference-type="ref" reference="lemma:L1L2"} are: $$\begin{aligned} a_{ij}^2 &:= \mathbb{E}\left[\left|\partial_{A_{ij}} \mathcal{M}_{q,E}(A) \right|\right]^2 \\ b_{ij}^2 &:=\mathbb{E}\left[\left|\partial_{A_{ij}} \mathcal{M}_{q,E}(A) \right|^2\right]\,. \end{aligned}$$ By Hölder's inequality, the $\ell^q$ and $\ell^{q-1}$ norms of $v$ cannot be too far apart: $$\label{eq:norm-comp} \|v\|_q^{q-1} \le \|v\|_{q-1}^{q-1} \le M^{\frac{1}{q}} \|v\|_{q}^{q-1}\,.$$ Additionally, by permutation-invariance of the constraint set $E$, $$\label{eq:perm-invar-a} a_{ij}^2 = \mathbb{E}\left[\left|\sigma_j^*\right| \frac{|v_i|^{q-1}}{\|v\|_{q}^{q-1}}\right]^2 = \left(\frac{1}{M}~\mathbb{E}\left[\left|\sigma_j^*\right| \frac{\|v\|_{q-1}^{q-1}}{\|v\|_{q}^{q-1}}\right]\right)^2\,.$$ Combining [\[eq:norm-comp\]](#eq:norm-comp){reference-type="ref" reference="eq:norm-comp"} and [\[eq:perm-invar-a\]](#eq:perm-invar-a){reference-type="ref" reference="eq:perm-invar-a"}, $$\begin{aligned} \sum_{ij} a_{ij}^2 &= \frac{1}{M}\sum_{j}~\mathbb{E}\left[ \left|\sigma_j^*\right| \frac{\|v\|_{q-1}^{q-1}}{\|v\|_{q}^{q-1}}\right]^2 \le M^{\frac{2}{q} - 1}\sum_{j}~\mathbb{E}\left[ \left|\sigma_j^*\right| \right]^2 \le M^{\frac{2}{q} - 1}\,\mathbb{E}\left[ \|\sigma^*\|_2^2\right] \le M^{\frac{2}{q} - 1}\,. \label{eq:ub-a}\end{aligned}$$ The second-to-last step uses $\left(\mathbb{E}\left[|\sigma_j^*|\right]\right)^2 \le \mathbb{E}\left[|\sigma_j^*|^2\right]$, and the last inequality follows from the assumption that the feasible set is bounded. 
By monotonicity of $\ell^p$ norms, we also have $$\begin{aligned} \sum_{ij} b_{ij}^2 &= \sum_{j}~\mathbb{E}\left[ \left|\sigma_j^*\right|^2 \left(\frac{\|v\|_{2(q-1)}}{\|v\|_{q}} \right)^{2(q-1)} \right] \le \sum_{j}~\mathbb{E}\left[ \left|\sigma_j^*\right|^2 \right] \le 1 \,. \label{eq:ub-b} \end{aligned}$$ Finally, it is readily checked that for any $a > 0$, the following is monotone increasing in $x$ on $[a^{-1},\infty)$: $$f(x) = \frac{x}{1 + \frac{1}{2}\log(ax)}\,.$$ By Jensen's inequality and [\[eq:ub-b\]](#eq:ub-b){reference-type="ref" reference="eq:ub-b"}, we have $\sum_{ij}a_{ij}^2 \le \sum_{ij}b_{ij}^2 \le 1$. So, (in order of the inequalities below) applying [\[eq:l1l2-jensen\]](#eq:l1l2-jensen){reference-type="ref" reference="eq:l1l2-jensen"} of [Lemma 2](#lemma:L1L2){reference-type="ref" reference="lemma:L1L2"}, then monotonicity of $f$, and then [\[eq:ub-a\]](#eq:ub-a){reference-type="ref" reference="eq:ub-a"} yields: $$\begin{aligned} \mathrm{Var}\left(\mathcal{M}_q\right) \le \frac{C\sum_{ij}b_{ij}^2}{1 + \frac{1}{2}\log\left( \frac{\sum_{ij}b_{ij}^2} {\sum_{ij}a_{ij}^2} \right)} \le \frac{C}{1 + \frac{1}{2}\log\left( \frac{1} {\sum_{ij}a_{ij}^2} \right)} \le \frac{C}{1 + \frac{1}{2}\log\left( M^{1-\frac{2}{q}}\right)} \,.\end{aligned}$$ ◻ **Remark 2**. The reason that it works to apply Hölder's inequality so crudely is the following dichotomy: we are already done by Poincaré if $\sum_{ij} b_{ij}^2 \ll 1$. On the other hand, if $\sum_{ij} b_{ij}^2 \approx 1$ then the $\ell^q$ and $\ell^{2(q-1)}$ norms of $v$ are not very far apart, so $v$ must be a highly structured vector, in which case the $\ell^q$ and $\ell^{q-1}$ norms of $v$ must also be comparable. Then $b^2 \gg a^2$ and Talagrand's inequality easily yields a logarithmic improvement. *Proof of [Theorem 2](#cor:block){reference-type="ref" reference="cor:block"}.* We adopt the notation and proof of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"}. The first part on differentiability of the margin remains unchanged. Partition $[M]$ into $I_1,\dots,I_k$ so that $|I_j| = M_j$ by setting $I_1 = [M_1]$, $I_2 = \left\{M_1+1,\dots,M_1 + M_2\right\}$, and so on. Let $w^{(1)} := v_{I_1}$ be the $M_1$-dimensional vector induced by taking the subset of $v$ with indices in $I_1$. Define $w^{(j)} := v_{I_j}$ similarly. Then the concatenation $(w^{(1)}, \dots, w^{(k)})$ is simply $v$. Arguing as in the proof of [Theorem 1](#thm:margin){reference-type="ref" reference="thm:margin"}, it is easy to check by permutation symmetry: $$\begin{aligned} \sum_{i\in I_t}\sum_j a_{ij}^2 = \frac{1}{M_t}\sum_j \mathbb{E}\left[|\sigma^*_j|\left(\frac{\|w^{(t)}\|_{q-1}}{\|v\|_{q}} \right)^{q-1} \right]^2 \le M_t^{\frac{2}{q}-1}\sum_j \mathbb{E}\left[|\sigma^*_j|\left(\frac{\|w^{(t)}\|_q}{\|v\|_q} \right)^{q-1}\right]^2 \le M_t^{\frac{2}{q}-1}\,. \end{aligned}$$ Note that by Jensen's inequality $\sum_{ij} b_{ij}^2 \ge \sum_{ij} a_{ij}^2$ always. 
Thus, by [\[eq:l1l2-jensen\]](#eq:l1l2-jensen){reference-type="ref" reference="eq:l1l2-jensen"} of [Lemma 2](#lemma:L1L2){reference-type="ref" reference="lemma:L1L2"}, we have: $$\begin{aligned} \mathrm{Var}\left(\mathcal{M}_q\right) &\le C \left(\sum_{ij} b_{ij}^2\right) \left(1 + \frac{1}{2}\log \left(\frac{\sum_{ij} b_{ij}^2 }{ \sum_{ij} a_{ij}^2 } \vee 1 \right) \right)^{-1} \\ &\le C \left(1 + \frac{1}{2}\log \left(\frac{1}{ \sum_{t=1}^k M_t^{\frac{2}{q}-1} } \vee 1 \right) \right)^{-1} \\&\le C \left(1 + \frac{1}{2}\log \left(\frac{\left(\min_{t\in [k]} M_t\right)^{1 - \frac{2}{q}}}{k} \vee 1 \right)\right)^{-1} \,. \end{aligned}$$ From the first to the second line we have used monotonicity in $x$, for any $a > 0$ and all $x \in \mathbb{R}$, of the function $$f(x) = \frac{x}{1 + \frac{1}{2}\log (ax \vee 1)}\,.$$ ◻ # Acknowledgments {#acknowledgments .unnumbered} I am deeply grateful to Jonathan Niles-Weed for invaluable support, advice, and mentorship throughout all stages of this project. I also benefited from useful and encouraging conversations with Paul Bourgade, Guy Bresler, Brice Huang, Mark Sellke, Joel Spencer, Nike Sun, Konstantin Tikhomirov, and Will Perkins during the long exploratory phase of this project. Finally, the monograph [@chat-sc] of Chatterjee was an important source of inspiration for this work. This article is adapted from a chapter of the author's dissertation at NYU and was supported by a MacCracken fellowship, an NSF GRFP grant, and NSF grant DMS-2015291. [^1]: We will not make this precise. See Assumption 1.2 and Theorem 1.2 of [@nakajima-sun] for the full statement.
arxiv_math
{ "id": "2309.13133", "title": "Zero-One Laws for Random Feasibility Problems", "authors": "Dylan J. Altschuler", "categories": "math.PR cs.DM math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | *If on a smooth bounded domain $\Omega\subset\mathbb{R}^2$ there is a nonconstant Neumann eigenfunction $u$ that is locally constant on the boundary, must $\Omega$ be a disk or an annulus?* This question can be understood as a weaker analog of the well known Schiffer conjecture, in that the function $u$ is allowed to take a different constant value on each connected component of $\partial\Omega$ yet many of the known rigidity properties of the original problem are essentially preserved. Our main result provides a negative answer by constructing a family of nontrivial doubly connected domains $\Omega$ with the above property. As a consequence, a certain linear combination of the indicator functions of the domains $\Omega$ and of the bounded component of the complement $\mathbb{R}^2\backslash\overline\Omega$ fails to have the Pompeiu property. Furthermore, our construction implies the existence of continuous, compactly supported stationary weak solutions to the 2D incompressible Euler equations which are not locally radial. address: - **Alberto Enciso** Instituto de Ciencias Matemáticas, Consejo Superior de Investigaciones Cientf́icas, 28049 Madrid, Spain - **Antonio J. Fernández** Departamento de Matemáticas, Universidad Autónoma de Madrid, 28049 Madrid, Spain - **David Ruiz** IMAG, Departamento de Análisis Matemático, Universidad de Granada, 18071 Granada, Spain - **Pieralberto Sicbaldi** IMAG, Departamento de Análisis Matemático, Universidad de Granada, 18071 Granada, Spain Aix-Marseille Université - CNRS, Centrale Marseille - I2M, Marseille, France author: - Alberto Enciso - Antonio J. Fernández - David Ruiz - Pieralberto Sicbaldi title: | A Schiffer-type problem for annuli\ with applications to stationary planar Euler flows --- # Introduction One of the most intriguing problems in spectral geometry is the so-called Schiffer conjecture, that dates back to the 1950s. In his 1982 list of open problems, S.T. Yau stated it (in the case $n = 2$) as follows [@Yau Problem 80]: **Conjecture 1**. *If a nonconstant Neumann eigenfunction $u$ of the Laplacian on a smooth bounded domain $\Omega\subset\mathbb{R}^n$ is constant on the boundary $\partial\Omega$, then $u$ is radially symmetric and $\Omega$ is a ball.* This overdetermined problem is closely related to the *Pompeiu problem* [@Pompeiu], an open question in integral geometry with many applications in remote sensing, image recovery and tomography [@Berenstein; @WillmsGladwell; @SSW]. The Pompeiu problem can be stated as the following inverse problem: Given a bounded domain $\Omega\subset\mathbb{R}^n$, is it possible to recover any continuous function $f$ on $\mathbb{R}^n$ from knowledge of its integral over all the domains that are the image of $\Omega$ under a rigid motion? If this is the case, so that the only $f\in C(\mathbb{R}^n)$ satisfying $$\label{E.Pompeiu} \int_{{\mathcal R}(\Omega)}f(x)\, dx=0\,,$$ for any rigid motion ${\mathcal R}$ is $f\equiv 0$, the domain $\Omega$ is said to have the *Pompeiu property*. Squares, polygons, convex domains with a corner, and ellipses have the Pompeiu property, and Chakalov was apparently the first to point out that balls fail to have the Pompeiu property [@chakalov; @L.BrownSchreiberTaylor; @Zalcman]. In 1976, Williams proved [@Williams76] that a smooth bounded domain with boundary homeomorphic to a sphere fails to have the Pompeiu property if and only if it supports a nontrivial Neumann eigenfunction which is constant on $\partial \Omega$. 
Therefore, the Schiffer conjecture is equivalent to saying that balls are the unique smooth bounded domains with boundary homeomorphic to a sphere without the Pompeiu property[^1]. Although the Schiffer conjecture is famously open, some partial results are available. It is known that $\Omega$ must indeed be a ball under one of the following additional hypotheses: 1. There exists an infinite sequence of orthogonal Neumann eigenfunctions that are constant on $\partial \Omega$, which is connected [@Berenstein; @BerensteinYang]. 2. The third order interior normal derivative of $u$ is constant on $\partial\Omega$, which is connected [@Liu]. In dimension 2, some further results are available, and the conjecture has been shown to be true in some other special cases: 3. When $\Omega$ is simply connected and $u$ has no saddle points in the interior of $\Omega$ [@WillmsGladwell]. 4. If $\Omega$ is simply connected and the eigenvalue $\mu$ is among the seven lowest Neumann eigenvalues of the domain (twelve, if one assumes that $\Omega$ is convex) [@aviles; @Deng]. 5. If the fourth or fifth order interior normal derivative of $u$ is constant on $\partial\Omega$ [@KawohlLucia]. It is also known that the boundary of any reasonably smooth domain $\Omega\subset\mathbb{R}^n$ with the property stated in the Schiffer conjecture must be analytic as a consequence of a result of Kinderlehrer and Nirenberg [@KinderlehrerNirenberg] on the regularity of free boundaries. If one considers domains with disconnected boundary, it is natural to wonder what happens if one relaxes the hypotheses in the Schiffer conjecture by allowing the eigenfunction to be *locally constant* on the boundary, that is, constant on each connected component of $\partial\Omega$. Of course, the radial Neumann eigenfunctions of a ball or an annulus are locally constant on the boundary, so the natural question in this case is whether $\Omega$ must be necessarily a ball or an annulus. We emphasize that this question shares many features with the Schiffer conjecture. First, essentially the same arguments show that $\partial\Omega$ must be analytic. Moreover, in the spirit of [@Berenstein; @BerensteinYang], we show that if there exists an infinite sequence of orthogonal eigenfunctions that are locally constant on the boundary of $\Omega\subset \mathbb{R}^2$, then $\Omega$ must be a disk or an annulus. Furthermore, we prove that if the Neumann eigenvalue is sufficiently low, $\Omega$ must be a disk or an annulus, which is a result along the lines of [@aviles; @Deng]. Precise statements and proofs of these rigidity results can be found in Section [6](#A.Rigidity){reference-type="ref" reference="A.Rigidity"}. Our objective in this paper is to give a negative answer to this question. Indeed, we can prove the following: **Theorem 1**. *There exist parametric families of doubly connected bounded domains $\Omega\subset\mathbb{R}^2$ such that the Neumann eigenvalue problem $$\Delta u + \mu u = 0\quad in\ \Omega\,, \qquad \partial_\nu u = 0 \quad on \ \partial\Omega\,,$$ admits, for some $\mu \in \mathbb{R}$, a non-radial eigenfunction that is locally constant on the boundary (i.e., $\nabla u \equiv 0$ on $\partial\Omega$). 
More precisely, for any large enough integer $l$ and for all $s$ in a small neighborhood of $0$, the family of domains $\Omega\equiv \Omega_{l,s}$ are given in polar coordinates by $$\label{E.Omthm} \Omega:=\big\{(r,\theta)\in\mathbb{R}^+\times\mathbb{T}: a_l+s\, b_{l,s}(\theta)<r<1+s\, B_{l,s}(\theta)\big\}\,,$$ where $b_{l,s},B_{l,s}$ are analytic functions of the form $$b_{l,s}(\theta)= \alpha_l\cos l\theta+ o(1)\,,\qquad B_{l,s}(\theta)= \beta_l \cos l\theta+ o(1)\,,$$ $a_l\in(0,1)$, $\alpha_l$ and $\beta_l$ are nonzero constants, and where the $o(1)$ terms tend to $0$ as $s\to0$.* We refer to Theorem [Theorem 15](#T.mainDetails){reference-type="ref" reference="T.mainDetails"} in the main text for a more precise statement. In view of the connection between the Schiffer conjecture and the Pompeiu problem, it stands to reason that the domains of the main theorem should be connected with an integral identity somewhat reminiscent of the Pompeiu property. Specifically, one can prove the following corollary of our main result, which is an analog of the identity [\[E.Pompeiu\]](#E.Pompeiu){reference-type="eqref" reference="E.Pompeiu"} in which one replaces the indicator function $\mathbbm 1_\Omega$ implicit in the integral by a linear combination of indicator functions: **Corollary 2**. *Let  $\Omega$ be a domain as in Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} and let $\Omega':=\{r<a_l+ s\, b_{l,s}(\theta)\}$ denote the bounded component of $\mathbb{R}^2\backslash\overline\Omega$. Then there exists a positive constant $c$ and a nonzero function $f \in C^{\infty}(\mathbb{R}^2)$ such that $$\int_{{\mathcal R}(\Omega)}f\, dx - c\int_{{\mathcal R}(\Omega')}f\, dx=0\,,$$ for any rigid motion ${\mathcal R}$.* ## Non-radial compactly supported solutions to the 2D Euler equations {#non-radial-compactly-supported-solutions-to-the-2d-euler-equations .unnumbered} Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} implies the existence of a special type of stationary 2D Euler flows, which was the original motivation of our work. Let us now describe this connection in detail. It is well known that if a scalar function $w$ satisfies a semilinear equation of the form $$\label{E.Deg} \Delta w+ g(w)=0 \quad \textup{in } \mathbb{R}^2\,,$$ for some function $g$, then the velocity field and pressure function given by $$v:=\left(\frac{\partial w}{\partial x_2},-\frac{\partial w}{\partial x_1}\right)\,,\qquad p:= -\frac12|\nabla w|^2- \int_0^w g(s) \, ds\,,$$ define a stationary solution to the Euler equations: $$\label{euler} v\cdot \nabla v+\nabla p=0\quad\textup{ and } \quad \mathop{\mathrm{div}}v=0 \quad \textup{ in } \mathbb{R}^2\,.$$ Any function $w$ that is radial and compactly supported gives rise to a compactly supported stationary Euler flow in two dimensions, even if it does not solve a semilinear elliptic equation as above. The stationary solutions that are obtained by patching together radial solutions with disjoint supports are referred to as *locally radial*. For a detailed bibliography on the rigidity and flexibility properties of stationary Euler flows in two dimensions and their applications, we refer to the recent papers [@JGS; @gomez]. Compactly supported stationary Euler flows in two dimensions that are not locally radial were constructed just very recently [@JGS]. They are of vortex patch type and the velocity field is continuous but not differentiable[^2]. 
The proof of this result is hard: it involves several clever observations and a Nash--Moser iteration scheme. Furthermore, as emphasized in [@JGS], these solutions cannot be obtained using an equation of the form [\[E.Deg\]](#E.Deg){reference-type="eqref" reference="E.Deg"}. On the other hand, the existence of non-radial stationary planar Euler flows is constrained by several recent rigidity results [@Hamel; @gomez; @R]. We emphasize that the existence of $C^1$ compactly supported solutions of [\[euler\]](#euler){reference-type="eqref" reference="euler"} which are not locally radial is not yet known (if the support condition is relaxed to having finite energy, the existence of smooth non-radial solutions follows directly from [@Musso]). In this context, one can readily use Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} to construct families of continuous compactly supported stationary Euler flows on the plane that are not locally radial. These solutions, which are not of patch type, are analytic in $\Omega$ but fail to be differentiable on the boundary of their support. **Theorem 3**. *Let $\Omega$ be a domain as in Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. Then there exists a continuous, compactly supported stationary solution $(v,p)$ to the incompressible Euler equations on the plane such that $\operatorname{supp}v=\overline\Omega$. Furthermore, $(v,p)$ is analytic in $\Omega$.* We believe that the same strategy we use in this paper can be generalized for some nonlinear functions $g$ in Equation [\[E.Deg\]](#E.Deg){reference-type="eqref" reference="E.Deg"}, which can be engineered to produce interesting nontrivial solutions to the incompressible Euler equations. We will explore this direction in the future. ## Strategy of the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} {#strategy-of-the-proof-of-theorem-t.main .unnumbered} Since any radial Neumann eigenfunction of an annulus is locally constant on the boundary, the basic idea of the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} is to construct $\Omega$ by bifurcating from a 1-parameter family of annuli. Specifically, we have chosen to bifurcate from the second nontrivial radial eigenfunction of the annuli. The reason for not choosing the first is that, in such case, the maximum and the minimum of the eigenfunction are attained at the boundary, and this is the setting in which the symmetry result of [@Reichel] holds. Hence, we do not expect that nonradial solutions can bifurcate from the first nontrivial radial eigenfunction of annuli. To implement the bifurcation argument, the naive approach would be to carry out the analysis using one-variable functions $b(\theta), B(\theta)$ defined on the circle $\mathbb{T}$ to parametrize the boundary, as suggested by the expression [\[E.Omthm\]](#E.Omthm){reference-type="eqref" reference="E.Omthm"}. However, this leads to linearized operators that involve a serious loss of derivatives, which we do not know how to compensate using a Nash--Moser iteration scheme. Instead, the strategy we follow is influenced by a very clever recent paper of Fall, Minlend and Weth [@Fall-Minlend-Weth-main]. Our main unknown is a two-variable function $v(r,\theta)$ defined on the annulus $\{\frac12< r <1\}$. To control the regularity of $v$, we use anisotropic Hölder spaces, as the functions that arise naturally are one derivative smoother in $r$ than they are in $\theta$. 
Fall, Minlend and Weth introduced this strategy to construct the first examples of domains in the sphere that admit a Neumann eigenfunction which is constant on the boundary and whose boundary has nonconstant principal curvatures, thereby disproving a conjecture of Souam [@Souam]. We refer to [@Fall-Minlend-Weth-main] for results on analogs of the Schiffer conjecture on manifolds. In this functional setting we need to impose symmetry with respect to the group of isometries of an $l$-sided regular polygon. The key ingredient required to make the bifurcation argument work is a result about the crossing on certain Dirichlet and Neumann eigenvalues for annuli of certain radii, which takes place for $l \geqslant 4$. For technical reasons, we also have to exclude resonances with other Dirichlet eigenvalues. Once this is established, and using the functional framework described above, the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} follows from the Crandall--Rabinowitz theorem. A considerable part of our paper is devoted to showing that the corresponding transversality condition holds. The proof is based on a delicate asymptotic analysis of Dirichlet and Neumann eigenfunctions on annuli in presence of a large parameter (namely, the angular mode of certain eigenfunction) and a small one (namely, the thickness of the annulus). What makes the analysis rather subtle is that these parameters are not independent, but related by the requirement that a certain pair of eigenvalues must coincide. This is the reason why we need to take a large integer $l$ in Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. However, this transversality condition can be numerically verified for specific values of $l$ with the help of Mathematica, so in Appendix [7](#A.transversality){reference-type="ref" reference="A.transversality"} we show that this condition holds for $l=4$. This allows us to ensure that the conclusions of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} also hold in the case $l=4$. ## Organization of the paper {#organization-of-the-paper .unnumbered} In Section [2](#S.eigen){reference-type="ref" reference="S.eigen"} we prove the asymptotic estimates for the Dirichlet and Neumann eigenfunctions on annuli that are needed to establish eigenvalue crossing, non-resonance and transversality conditions. As discussed above, these will play a key role in the bifurcation argument later on. In Section [3](#S.deform){reference-type="ref" reference="S.deform"} we reformulate the eigenvalue problem for a deformed annulus in terms of functions defined on a fixed domain and introduce the functional setting that we use. The proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} is presented in Section [4](#S.bifurcation){reference-type="ref" reference="S.bifurcation"}. The applications to an analog of the Pompeiu problem and to the construction of non-radial stationary Euler flows on the plane are developed in Section [5](#S.corollaries){reference-type="ref" reference="S.corollaries"}. For completeness, in Section [6](#A.Rigidity){reference-type="ref" reference="A.Rigidity"} we establish two rigidity results for the problem considered in this paper. We first show how existing arguments on the Schiffer problem prove that if a domain has infinitely many Neumann eigenfunctions that are locally constant on the boundary, then such domain is either a disk or an annulus. 
Then, we show that if the Neumann eigenvalue $\mu$ is sufficiently low, then $\Omega$ must be as well either a disk or an annulus. The paper is concluded with an Appendix where, using some elementary numerical computations, we show that the transversality condition holds for $l = 4$. # The Dirichlet and Neumann spectrum of an annulus {#S.eigen} In this section we shall establish the auxiliary results about Dirichlet and Neumann eigenvalues of annular domains that we will need in the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. In view of the scaling properties of the Laplacian, it suffices to consider annuli with fixed outer radius. In polar coordinates $(r,\theta)\in\mathbb{R}^+\times\mathbb{T}$, let us therefore consider the annular domain $$\Omega_a :=\{(r,\theta)\in(a,1)\times\mathbb{T}\}\,,$$ where $a\in(0,1)$ and $\mathbb{T}:=\mathbb{R}/2\pi\mathbb{Z}$. It is well known that an orthogonal basis of Neumann eigenfunctions in $\Omega_a$ is given by $$\{\psi_{0,n}(r), \psi_{l,n}(r)\cos l\theta,\psi_{l,n}(r)\sin l\theta\}_{l=1, n=0}^\infty\,.$$ Here $\{\psi_{l,n}\}_{n=0}^\infty$ is an orthonormal basis of eigenfunctions of the associated radial operator. To put it differently, these functions (whose dependence on $a$ we omit notationally) satisfy the ODE $$\label{E.ODE.Neumann} \tag{${\rm N}_a^l$} \psi_{l,n}''+\frac{\psi_{l,n}'}r-\frac{l^2 \psi_{l,n}}{r^2}+\mu_{l,n}(a)\, \psi_{l,n}=0\quad \text{in } (a,1)\,,\qquad \psi_{l,n}'(a)=\psi_{l,n}'(1)=0\,,$$ for some nonnegative constants $$\mu_{l,0}(a)<\mu_{l,1}(a)<\mu_{l,2}(a)<\cdots < \mu_{l,n}(a) <\, \cdots$$ tending to infinity as $n\to\infty$. We will omit the dependence on $a$ when no confusion may arise. The Neumann spectrum of the annulus $\Omega_a$ is then $\{\mu_{l,n}\}_{l,n=0}^\infty$. Let us record here the following immediate properties of these eigenvalues: $$\mu_{0,0}=0 \quad \textup{ and } \quad \mu_{l,n} < \mu_{l',n} \ \mbox{ if } l < l'\,.$$ Moreover, note that the eigenvalues of $\Omega_a$ may have multiplicity larger than 1 if $\mu_{l,n}=\mu_{l',n'}$ for some $(l,n)\neq (l',n')$. Here, when we talk about mulitiplicity we do not consider rotations of the same eigenfunction (that is, we only take $\cos(l \theta)$, for instance). Likewise, an orthogonal basis of Dirichlet eigenfunctions of the annulus $\Omega_a$ is $$\{\varphi_{0,n}(r),\varphi_{l,n}(r)\cos l\theta,\varphi_{l,n}(r)\sin l\theta\}_{l=1,n=0}^\infty\,,$$ where now the radial eigenfunctions satisfy the ODE $$\label{E.ODE.Dirichlet} \tag{${\rm D}_a^l$} \varphi_{l,n}''+\frac{\varphi_{l,n}'}r-\frac{l^2 \varphi_{l,n}}{r^2}+\lambda_{l,n}(a)\, \varphi_{l,n}=0\quad \text{in } (a,1)\,,\qquad \varphi_{l,n}(a)=\varphi_{l,n}(1)=0\,,$$ for some positive constants $$\lambda_{l,0}(a)<\lambda_{l,1}(a)<\lambda_{l,2}(a)<\cdots < \lambda_{l,n}(a) <\, \cdots$$ tending to infinity as $n \to \infty$. The Dirichlet spectrum of $\Omega_a$ is therefore $\{\lambda_{l,n}\}_{l,n=0}^\infty$. Again, we have $$\lambda_{l,n} < \lambda_{l',n} \ \mbox{ if } l < l'\,.$$ Observe also that, thanks to the min-max characterization of the eigenvalues, the Neumann and Dirichlet eigenvalues are ordered, i.e. $$\mu_{l,n} < \lambda_{l,n}\,.$$ In the following elementary lemma we show a property relating radial Neumann eigenvalues with Dirichlet ones. This is a variation on the fact that the derivative of a (nonconstant) radial Neumann eigenfunction is a Dirichlet eigenfunction. **Lemma 4**. 
*For any $n \geqslant 1$, $\mu_{0,n} = \lambda_{1,n-1}.$* *Proof.* Recall that $\psi_{0,n}$ is a solution to $$\psi_{0,n}''+\frac{\psi_{0,n}'}r+\mu_{0,n}\, \psi_{0,n}=0\quad \text{in } (a,1)\,,\qquad \psi_{0,n}'(a)=\psi_{0,n}'(1)=0\,.$$ Moreover, $\psi_{0,n}$ has exactly $n$ changes of sign in the interval $(a,1)$. As a consequence, $\phi= \psi_{0,n}'$ changes sign $n-1$ times. Differentiating the equation above, we obtain that $\phi$ solves $$\phi''+\frac{\phi'}r-\frac{\phi}{r^2}+\mu_{0,n}\, \phi=0\quad \text{in } (a,1)\,, \qquad \phi(a)=\phi(1)=0\,.$$ Since the solution space is one-dimensional, $\phi = \varphi_{1,n-1}$ (possibly up to a nonzero multiplicative constant), so the lemma is proved. ◻ Next we show that there exist annuli where the third radial Neumann eigenvalue and the first $l$-mode Dirichlet eigenvalue coincide, and where certain non-resonance conditions are satisfied. This technical result is the first key ingredient in the bifurcation argument that the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} hinges on. **Proposition 5**. *For any integer $l \geqslant 4$, there exists some constant $a_l \in (0,1)$ such that $$\label{id.eigenvaluesBif} \mu_{0,2}(a_l)=\lambda_{l,0}(a_l)\,.$$ Moreover, the following non-resonance condition holds: $$\label{NR} \tag{NR} \lambda_{l,0}(a_l) \neq \lambda_{ml,n}(a_l) \quad \textup{ for all }\quad (m,n)\neq (1,0).$$* *Proof.* We set $h := \tfrac{1-a}{\pi}$ and $r =: 1-hx$. Suppose now that $h\ll1$, so $a$ is very close to 1. By the variational characterization of $\mu_{0,2}$, we get $$\label{E.asymp1} \begin{aligned} \mu_{0,2}(a) & = \inf_{\dim V = 3} \, \max_{\psi \in V \setminus \{0\}} \frac{\displaystyle \int_0^{\pi} \psi'(x)^2(1-hx)\, dx}{h^2 \displaystyle \int_0^\pi \psi(x)^2(1-hx)\, dx}\\ & = \big[h^{-2}+O(h^{-1}) \big] \inf_{\dim V = 3} \, \max_{\psi \in V \setminus \{0\}} \frac{\displaystyle \int_0^{\pi} \psi'(x)^2\, dx}{\displaystyle \int_0^\pi \psi(x)^2\, dx} = 4h^{-2}+O(h^{-1})\,. \end{aligned}$$ Here, $V$ ranges over the set of 3-dimensional subspaces of $H^1((0,\pi))$. Likewise, in the case of the first $l$-mode Dirichlet eigenvalue, we have $$\label{E.asymp2} \begin{aligned} \lambda_{l,0}(a) & = \inf_{\psi \in H_0^1((0,\pi)) \setminus\{0\}} \, \frac{\displaystyle h^{-2} \int_0^{\pi} \psi'(x)^2(1-hx)\, dx + l^2 \int_0^{\pi} \frac{\psi(x)^2}{(1-hx)}\, dx}{ \displaystyle \int_0^\pi \psi(x)^2(1-hx)\, dx} \\ & = h^{-2} \left [ \inf_{\psi \in H_0^1((0,\pi)) \setminus\{0\}} \, \frac{\int_0^{\pi} \psi'(x)^2\, dx} {\int_0^\pi \psi(x)^2\, dx} + O(h) \right ] + l^2 (1+O(h)) = \big(1+(hl)^2 \big) \big[h^{-2}+O(h^{-1})\big]\,. \end{aligned}$$ In particular, we conclude from these expressions for $\mu_{0,2}$ and $\lambda_{l,0}$ that $$\label{E.muBiggerLambda} \mu_{0,2}(a) > \lambda_{l,0}(a)$$ whenever $a$ is close enough to $1$. We now show the converse inequality for $a$ small. In order to do that, recall that, by Lemma [Lemma 4](#extra){reference-type="ref" reference="extra"}, $\mu_{0,2}(a)= \lambda_{1,1}(a)$. 
We now claim that for any $l \geqslant 1$, $k \geqslant 0$, $$\label{disco} \lambda_{l,k}(a) \to \lambda_{l,k}^0\,, \quad \textup{ as } a \to 0\,,$$ where we have set $$\lambda_{l,k}^0:= \inf_{\dim V = k+1} \, \max_{\varphi \in V \setminus \{0\}} \frac{\displaystyle \int_0^{1} \Big( \varphi'(r)^2 + l^2 \frac{\varphi(r)^2}{r^2} \Big)\, r\, dr}{\displaystyle \int_0^1 \varphi(r)^2\,r\, dr}\,.$$ Here, $V$ ranges over the set of $(k+1)$-dimensional subspaces of $\mathcal{H}_0 \subset \mathcal{H}$, where $$\mathcal{H} := \Big\{\varphi: (0,1) \to \mathbb{R}: \int_0^{1} \Big( \varphi'(r)^2 + \frac{\varphi(r)^2}{r^2}\Big)\,r \, dr < +\infty\ \Big\} \quad \textup{ and } \quad \mathcal{H}_0:= \big\{\varphi \in \mathcal{H}:\ \varphi(1)=0\big\}\,.$$ These spaces are equipped with the natural norm $$\| \varphi\|_{\mathcal{H}}^2 := \int_0^{1} \left(\varphi'(r)^2 + \frac{\varphi(r)^2}{r^2}\right)\, r\, dr \,.$$ Note that the functions in $\mathcal{H}$ are continuous in $[0,1]$ and vanish at $0$, see e.g. [@byeon Proposition 3.1]. In other words, [\[disco\]](#disco){reference-type="eqref" reference="disco"} simply asserts that the Dirichlet eigenvalues of the annulus $\Omega_a$ converge, as $a \to 0$, to the Dirichlet eigenvalues of the unit disk. This is well known, but we give a short self-contained proof for the convenience of the reader. First, observe that $\lambda_{l,k}(a)$ is nondecreasing in $a$ (enlarging the interval $(a,1)$ can only decrease these Dirichlet-type eigenvalues), and it is always bigger than $\lambda_{l,k}^0$. As a consequence, it has a limit as $a \to 0$ and $$\lim_{a\to 0} \lambda_{l,k}(a) \geqslant\lambda_{l,k}^0\,.$$ To show the converse inequality, it suffices to prove that any function in $\mathcal{H}_0$ can be approximated by functions vanishing in a neighborhood of $0$. For this, let us choose a smooth cut-off function $\chi:[0, +\infty) \to [0,1]$ such that $\chi|_{[0,1]}=1$ and $\chi|_{[2,+\infty)} =0$, and define $\chi_\varepsilon(r):= \chi(\frac{r}{\varepsilon})$ for $\varepsilon$ sufficiently small. We shall next show that, given any $\varphi \in \mathcal{H}$, $\chi_{\varepsilon} \varphi \to 0$ in $\mathcal{H}$ as $\varepsilon\to 0$. This obviously implies [\[disco\]](#disco){reference-type="eqref" reference="disco"} since in this case $(1-\chi_{\varepsilon}) \varphi \to \varphi$ in $\mathcal{H}$. To show that $\chi_{\varepsilon} \varphi \to 0$ in $\mathcal{H}$, we first note that, since $\chi_\varepsilon$ is supported in $[0,2\varepsilon]$, $$\int_0^1 \frac{\chi_\varepsilon(r)^2 \varphi(r)^2}{r}\, dr \leqslant\int_0^{2\varepsilon}\frac{ \varphi(r)^2}{r}\, dr \to 0\,, \quad \textup{ as } \varepsilon\to 0\,.$$ Moreover, as $\varepsilon\to0$, $$\int_0^1 \chi_\varepsilon(r)^2 \varphi'(r)^2 r \, dr \leqslant\int_0^{2\varepsilon}\varphi'(r)^2 r \, dr \to 0\,,$$ and $$\int_0^1 \chi_\varepsilon'(r)^2 \varphi(r)^2 r \, dr \leqslant\frac{\| \chi'\|_{L^{\infty}}^2}{\varepsilon^2} \int_0^{2\varepsilon}\varphi(r)^2 r \, dr \leqslant\frac{2\| \chi'\|_{L^{\infty}}^2}{\varepsilon} \int_0^{2\varepsilon}\varphi(r)^2 \, dr \leqslant 4\| \chi'\|_{L^{\infty}}^2\| \varphi\|_{L^{\infty}([0,2\varepsilon])}^2 \to 0\,,$$ since $\varphi$ is continuous in $[0,1]$ and $\varphi(0)=0$. As $(\chi_{\varepsilon} \varphi)'= \chi_{\varepsilon}' \varphi + \chi_{\varepsilon} \varphi'$, it follows that $\chi_{\varepsilon} \varphi \to 0$ in ${\mathcal H}$ and thus [\[disco\]](#disco){reference-type="eqref" reference="disco"} follows. We now compute the eigenvalues $\lambda_{1,1}^0$ and $\lambda_{l,0}^0$, which can be written explicitly in terms of Bessel zeros. 
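For the reader who wants a quick numerical cross-check, the Bessel zeros quoted below are readily reproduced with standard software; a minimal sketch using SciPy (assuming SciPy is available, and offered only as an aside rather than as part of the argument) is the following.

```python
# Numerical cross-check of the Bessel zeros quoted in the text (assumes scipy is installed).
from scipy.special import jn_zeros

for m in (1, 2, 3, 4):
    print(m, jn_zeros(m, 2))  # first two positive zeros of J_m
# For instance jn_zeros(1, 2) returns roughly [3.8317, 7.0156], so
# lambda_{1,1}^0 = j_{1,2}^2 is about 49.2, while j_{4,1} is about 7.5883,
# giving lambda_{4,0}^0 about 57.6 > lambda_{1,1}^0, consistent with the comparison below.
```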
More precisely, it is known that if $j_{m,k}$ is the $k$-th positive zero of the Bessel function of the first kind $J_m$, then $$\lambda_{l,k}^0=j_{l,k+1}^2\,.$$ The numerical values of the zeros are well known, e.g., $$j_{1,2} = 7.01559\dots$$ and $$j_{1,1} = 3.83171\dots\,, \quad j_{2,1} = 5.13562\dots\,, \quad j_{3,1} = 6.38016\dots \,,\quad j_{4,1} = 7.58834\dots \,.$$ Therefore, since $j_{l,1}$ is increasing in $l$, we have $\lambda_{1,1}^0 <\lambda_{l,0}^0$ for any $l \geqslant 4$. By [\[disco\]](#disco){reference-type="eqref" reference="disco"} we obtain that, for any integer $l \geqslant 4$ and for sufficiently small $a$, $\mu_{0,2}(a)< \lambda_{l,0}(a)$. This inequality, together with [\[E.muBiggerLambda\]](#E.muBiggerLambda){reference-type="eqref" reference="E.muBiggerLambda"} and the continuous dependence of the eigenvalues on $a$ (see e.g. [@Kato Section II.7]), allows us to conclude the existence of $a_l\in(0,1)$ such that [\[id.eigenvaluesBif\]](#id.eigenvaluesBif){reference-type="eqref" reference="id.eigenvaluesBif"} holds. We now turn our attention to [\[NR\]](#NR){reference-type="eqref" reference="NR"}. In the rest of the proof, we fix $a:= a_l$, omitting the dependence on this variable for the ease of notation. First of all, by the strict monotonicity with respect to $l$ and $n$ of the Neumann and Dirichlet eigenvalues, it follows that $$\begin{aligned} \lambda_{l, n} > \lambda_{l,0} \ \textup{ for all }n \geqslant 1\,, \qquad \lambda_{ml,n} \geqslant\lambda_{ml,0}> \lambda_{l,0}\ \textup{ for all } m \geqslant 2, \ n \geqslant 0\,. \end{aligned}$$ Moreover, $$\lambda_{0,0} < \lambda_{l,0}\,, \qquad \lambda_{0, n} \geqslant\lambda_{0,2} > \mu_{0,2} = \lambda_{l,0} \ \mbox { if } n \geqslant 2\,.$$ Finally, by Lemma [Lemma 4](#extra){reference-type="ref" reference="extra"}, $\mu_{0,2} = \lambda_{1,1}$, which is clearly bigger than $\lambda_{0,1}$. The proposition is then proved. ◻ The next proposition is devoted to proving an estimate which will imply the usual transversality condition in the Crandall--Rabinowitz theorem. We prove this estimate for any sufficiently large $l$, and refer to Appendix [7](#A.transversality){reference-type="ref" reference="A.transversality"} for the direct verification of this condition in the case $l=4$ (the same strategy could be used for other specific values of $l$). In the statement, we denote by a prime the derivative of an eigenvalue with respect to the parameter $a$. **Proposition 6**. *For any large enough integer $l$, the constant $a_l$ introduced in Proposition [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"} is of the form $$\label{E.al} a_l = 1-\frac{\sqrt{3}\pi}{l}+ O(l^{-2})\,.$$ Moreover, the following transversality condition holds: $$\label{T} \tag{T} \mu_{0,2}'(a_l) \neq \lambda_{l,0}'(a_l)\,.$$* *Proof.* The proof will be divided into four steps for the sake of clarity. ### Step 1: Zeroth order estimates {#step-1-zeroth-order-estimates .unnumbered} In this first step we prove [\[E.al\]](#E.al){reference-type="eqref" reference="E.al"}. 
First of all, observe that $$\label{E.easylal} \lambda_{l,0}(a) := \inf_{\varphi\in H_0^1((a,1))\setminus \{0\}} \displaystyle \frac{\displaystyle\int_a^1 \Big(\varphi'(r)^2 + \frac{l^2}{r^2} \varphi(r)^2 \Big)\, r\, dr}{\displaystyle\int_a^1 \varphi(r)^2 r \, dr} \geqslant l^2\,, \quad \textup{ for all } a \in (0,1)\,.$$ This implies in particular that, for any sufficiently large integer $l$, $$\label{E.lambdaBiggerMu} \lambda_{l,0}(\tfrac12) > \mu_{0,2}(\tfrac12)\,.$$ As a consequence, we can take $a_l \in (\tfrac12, 1)$ for sufficiently large $l$. As in the proof of Proposition [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"}, we define $h_l := \tfrac{1-a_l}{\pi}$ and $r =: 1-h_lx$. Then, we can estimate $$\begin{aligned} \mu_{0,2}(a_l) & = \inf_{\dim V = 3} \, \max_{\psi \in V \setminus \{0\}} \frac{\displaystyle \int_0^{\pi} \psi'(x)^2(1-h_lx)\, dx}{h_l^2 \displaystyle \int_0^\pi \psi(x)^2(1-h_lx)\, dx} & \leqslant\inf_{\dim V = 3} \, \max_{\psi \in V \setminus \{0\}} \frac{2\displaystyle \int_0^{\pi} \psi'(x)^2\, dx}{\displaystyle h_l^2\int_0^\pi \psi(x)^2\, dx} = 8h_l^{-2}\end{aligned}$$ which, together with [\[E.easylal\]](#E.easylal){reference-type="eqref" reference="E.easylal"}, implies that $h_l = O(l^{-1})$ for large $l$. Note that in the inequality we have used that $a_l\in(\frac12,1)$, so $h_l\leqslant 2/\pi$. Hence, we can use the asymptotic formulas [\[E.asymp1\]](#E.asymp1){reference-type="eqref" reference="E.asymp1"}-[\[E.asymp2\]](#E.asymp2){reference-type="eqref" reference="E.asymp2"} to infer that $$4 + O(h_l)= 1+ (h_ll)^2\,.$$ Thus, [\[E.al\]](#E.al){reference-type="eqref" reference="E.al"} holds. ### Step 2: First order estimates and asymptotics for $a_l$ {#step-2-first-order-estimates-and-asymptotics-for-a_l .unnumbered} Our goal now is to obtain a more accurate asymptotic estimate for $a_l$ in terms of $l\gg1$. To this end, let us introduce the differential operator $$\label{E.TetaEpsilon} T_{\eta,\varepsilon} := \partial_x^2 - \frac{\eta(1+\varepsilon)}{1-\eta(1+\varepsilon)x}\, \partial_x - \frac{3(1+\varepsilon)^2}{(1-\eta(1+\varepsilon)x)^2} \,.$$ Here $(\eta,\varepsilon)$ are real constants, which can be assumed to be suitably small (say, $|\eta| + |\epsilon| < \tfrac{1}{100}$). Note that for $\eta=\varepsilon=0$, the operator is simply $T_{0,0}=\partial_x^2-3$. Let us respectively denote by $\Lambda_n(\eta,\varepsilon)$ and ${\nu}_n(\eta,\varepsilon)$ the Dirichlet and Neumann eigenvalues of the operator $T_{\eta,\varepsilon}$ on the interval $(0,\pi)$, where $n$ ranges over the nonnegative integers. It is standard (and easy to see) that all these eigenvalues have multiplicity 1, so it is well known [@Kato Section II.7] that the eigenvalues and their corresponding eigenfunctions (which we will respectively denote by $\Phi_n(\eta,\varepsilon)$ and $\Psi_n(\eta,\varepsilon)$) depend analytically on $(\eta,\varepsilon)$. 
Therefore, for all $n \geqslant 0$, we can assume that $$\tag{${\rm N}_{\eta,\varepsilon}$} \label{E.NeumannTep} T_{\eta,\varepsilon} \Psi_n + \nu_n(\eta,\varepsilon) \Psi_n = 0 \quad \textup{in } (0,\pi)\,, \qquad \Psi_n'(0) = \Psi_n'(\pi) = 0\,,$$ and $$\tag{${\rm D}_{\eta,\varepsilon}$} \label{E.DirichletTep} T_{\eta,\varepsilon} \Phi_n + \Lambda_n(\eta,\varepsilon) \Phi_n = 0 \quad \textup{in } (0,\pi)\,, \qquad \Phi_n(0) = \Phi_n(\pi) = 0\,.$$ Likewise, we introduce the operator $$\label{E.TtildeEtaEpsilon} \widetilde{T}_{\eta,\varepsilon}:= \partial_x^2 - \frac{\eta(1+\varepsilon)}{1-\eta(1+\varepsilon)x}\, \partial_x\,,$$ and, for every nonnegative integer $n$, we denote by $\widetilde{\nu}_n(\eta,\varepsilon)$ and $\widetilde{\Lambda}_n(\eta,\varepsilon)$ the Neumann and Dirichlet eigenvalues in $(0,\pi)$ with corresponding eigenfunctions $\widetilde{\Psi}_n(\eta,\varepsilon)$ and $\widetilde{\Phi}_n(\eta,\varepsilon)$ respectively. In other words, $$\tag{${\rm \widetilde{N}}_{\eta,\varepsilon}$} \widetilde{T}_{\eta,\varepsilon} \widetilde{\Psi}_n + \widetilde{\nu}_n(\eta,\varepsilon) \widetilde{\Psi}_n = 0 \quad \textup{in } (0,\pi)\,, \qquad \widetilde{\Psi}_n'(0) = \widetilde{\Psi}_n'(\pi) = 0\,,$$ and $$\tag{${\rm \widetilde{D}}_{\eta,\varepsilon}$} \widetilde{T}_{\eta,\varepsilon} \widetilde{\Phi}_n + \widetilde{\Lambda}_n(\eta,\varepsilon) \widetilde{\Phi}_n = 0 \quad \textup{in } (0,\pi)\,, \qquad \widetilde{\Phi}_n(0) = \widetilde{\Phi}_n(\pi) = 0\,.$$ We start with the proof of first order asymptotics for $\Lambda_0$ and $\widetilde{\nu}_2$: **Lemma 7**. *For $|\eta|+|\varepsilon| \ll 1$, $$\begin{aligned} \Lambda_0(\eta,\varepsilon) & = 4 + 3\pi \eta + 6 \varepsilon+ O(\eta^2+\varepsilon^2)\,,\\ \widetilde{\nu}_2(\eta,\varepsilon) & = 4+ O(\eta^2+\varepsilon^2)\,.\end{aligned}$$* *Proof.* Let us start with $\Lambda_0$. Noting that $\Phi_{00}(x):=\sin(x)$ is the first Dirichlet eigenfunction of $T_{0,0}=\partial_x^2-3$ on $(0,\pi)$ with eigenvalue $\Lambda_{00}:=4$, by the analytic dependence on the parameters, we can take an analytic two-parameter family of Dirichlet eigenfunctions of the form $$\Phi_0(\eta,\varepsilon)(x) := \sum_{j,k \geqslant 0} \eta^j \varepsilon^k \, \Phi_{jk}(x)\,, \quad \textup{with} \quad \Phi_{jk}(0) = \Phi_{jk}(\pi) = 0\,, \quad \textup{for all} \quad j,k \geqslant 0\,,$$ with corresponding eigenvalues $$\Lambda_0(\eta,\varepsilon) := \sum_{j,k \geqslant 0} \eta^j \varepsilon^k \, \Lambda_{jk}\,.$$ Moreover, let us normalize $\Phi_0(\eta,\varepsilon)$ so that $$\int_{0}^{\pi} \Phi_0(\eta,\varepsilon)(x) \sin(x)\, dx = \frac{\pi}{2}\,.$$ Since $\int_{0}^{\pi} \Phi_{00}(x) \sin(x)\, dx = \frac{\pi}{2}$, this implies $$\int_{0}^{\pi} \Phi_{jk}(x) \sin(x)\, dx = 0 \quad \textup{for all} \quad (j,k) \neq (0,0)\,.$$ Let us now compute the first order terms in the expansion. 
Since $$\begin{gathered} T_{\eta,\varepsilon} = \partial_x^2 - \big(\eta+\varepsilon\eta+\eta^2 x + O(\varepsilon\eta^2 + \eta^3) \big) \,\partial_x \\ - \big( 3+6\varepsilon+ 3 \varepsilon^2 + 6 \eta x + 18 \eta \varepsilon x + 9 \eta^2 x^2 + O(\eta^2 \varepsilon+ \varepsilon^2 \eta + \eta^3 + \varepsilon^3) \big)\,,\end{gathered}$$ defining $L:= \partial_x^2 + \Lambda_{00} -3=\partial_x^2+1$, we get $$\begin{aligned} 0 & = T_{\eta,\varepsilon} \Phi_0 + \Lambda_0(\eta,\varepsilon) \Phi_0 \\ & = L \Phi_{00} + \varepsilon\Big( L \Phi_{01} + ( \Lambda_{01} - 6 )\Phi_{00} \Big) + \eta \Big( L \Phi_{10} - \partial_x \Phi_{00} + (\Lambda_{10}-6x) \Phi_{00} \Big) + O(\eta^2+\varepsilon^2)\quad \textup{in } (0,\pi)\,.\end{aligned}$$ Also, as $\Lambda_{00} = 4$ and $\Phi_{00}(x)= \sin(x)$, the zeroth order term vanishes, and we arrive at the equations $$\begin{aligned} \label{E.Phi01} L \Phi_{01} + ( \Lambda_{01} - 6 )\Phi_{00} &= 0 \,, \\ \label{E.Phi10} L \Phi_{10} - \partial_x \Phi_{00} + (\Lambda_{10}-6x) \Phi_{00}&= 0 \,.\end{aligned}$$ Multiplying Equation [\[E.Phi01\]](#E.Phi01){reference-type="eqref" reference="E.Phi01"} by $\Phi_{00}$ and integrating by parts, we easily find that $\Lambda_{01} = 6$ and $\Phi_{01} \equiv 0$. Doing the same with [\[E.Phi10\]](#E.Phi10){reference-type="eqref" reference="E.Phi10"}, we get $$\Lambda_{10} \, \frac{\pi}{2} = \Lambda_{10} \int_0^\pi \sin(x)^2\, dx = 6 \int_0^\pi x \sin(x)^2\, dx = \frac{3\pi^2}{2}\,,$$ so $\Lambda_{10} = 3\pi$. We then conclude that $$\Lambda_0(\eta,\varepsilon) = 4+3\pi\eta+6\varepsilon+ O(\eta^2+\varepsilon^2)\,.$$ We now compute the first order expansion of $\widetilde{\nu}_2$. We start with the observation that $\widetilde{\Psi}_{00}(x):=\cos(2x)$ is the third Neumann eigenfunction of $\widetilde T_{0,0}=\partial_x^2$ in $(0,\pi)$, and that the corresponding eigenvalue is $\widetilde{\nu}_{00}:=4$. Arguing as above, we consider a convergent series expansion of the eigenvalue $$\widetilde{\nu}_2(\eta,\varepsilon) := \sum_{j,k \geqslant 0} \eta^j \varepsilon^k\, \widetilde{\nu}_{jk}$$ and of the corresponding eigenfunction $$\widetilde{\Psi}_2(\eta,\varepsilon)(x):= \sum_{j,k\geqslant 0} \eta^j \varepsilon^k\, \widetilde{\Psi}_{jk}(x)\,, \quad \textup{with} \quad \widetilde{\Psi}_{jk}'(0) = \widetilde{\Psi}_{jk}'(\pi) = 0\,, \quad \textup{for all} \quad j,k \geqslant 0\,,$$ which we normalize so that $$\int_0^{\pi} \widetilde{\Psi}_2(\eta,\varepsilon)(x)\, \cos(2x) dx = \frac{\pi}{2}= \int_0^{\pi} \widetilde{\Psi}_{00}(x)\, \cos(2x) dx\,.$$ Since $$\widetilde{T}_{\eta,\varepsilon} = \partial_x^2 - \big(\eta+\varepsilon\eta+\eta^2 x + O(\varepsilon\eta^2 + \eta^3) \big) \,\partial_x \,,$$ just as in the case of $\Lambda_0$ we can write $$\begin{aligned} 0 & = \widetilde{T}_{\eta,\varepsilon} \widetilde{\Psi}_2 + \widetilde{\nu}_2(\eta,\varepsilon) \widetilde{\Psi}_2 \\ & = \widetilde{L} \widetilde{\Psi}_{00} + \varepsilon\Big( \widetilde{L} \widetilde{\Psi}_{01} + \widetilde{\nu}_{01} \widetilde{\Psi}_{00} \Big) + \eta \Big( \widetilde{L} \widetilde{\Psi}_{10} - \partial_x \widetilde{\Psi}_{00} + \widetilde{\nu}_{10} \widetilde{\Psi}_{00} \Big) + O(\eta^2+\varepsilon^2) \quad \textup{in } (0,\pi)\,,\end{aligned}$$ with $\widetilde{L}:= \partial_x^2 + \widetilde{\nu}_{00}=\partial_x^2+4$. 
Arguing as we did in the analysis of $\Lambda_0$, multiplying the equations now by $\widetilde{\Psi}_{00}$ and integrating by parts, we infer that $\widetilde{\nu}_{01} = \widetilde{\nu}_{10} = 0$, and thus we conclude that $$\widetilde{\nu}_2(\eta,\varepsilon) = 4 + O(\eta^2+\varepsilon^2)\,.$$ The lemma is then proved. ◻ Armed with this auxiliary estimate, we readily obtain the following second order formula for $a_l$: **Lemma 8**. *Set $\eta_l:= \tfrac{\sqrt{3}}{l}$ and define $\varepsilon_l$ through $a_l = 1-\pi\eta_l(1+\varepsilon_l)$. Then there exists $\delta_l = O(l^{-1})$ such that $$\label{E.epsilonl} \varepsilon_l = -\frac{\sqrt{3}\pi}{2l} \big(1 + \delta_l \big)\,.$$* *Proof.* Let us write $a_l= 1- \pi\eta_l (1+ \varepsilon_l ) =: 1-\pi h_l$, so that $h_l = \eta_l(1+\varepsilon_l)$. Using the change of variables $r=:1-h_l x$ as before, a direct calculation shows that the radial eigenvalue equation [\[E.ODE.Neumann\]](#E.ODE.Neumann){reference-type="eqref" reference="E.ODE.Neumann"} is $$\widetilde{T}_{\eta_l,\varepsilon_l} u + h_l^2\mu_{0,2}(a_l) u =0 \quad \textup{in } (0,\pi)\,, \qquad u'(0) = u'(\pi) = 0\,,$$ which shows that $$\mu_{0,2}(a_l) = h_l^{-2}\,\widetilde{\nu}_2(\eta_l, \varepsilon_l)\,.$$ Similarly, $$\lambda_{l,0}(a_l) =h_l^{-2}\, \Lambda_0(\eta_l,\varepsilon_l)\,.$$ The condition $\lambda_{l,0}(a_l)=\mu_{0,2}(a_l)$ and the asymptotic formulas of Lemma [Lemma 7](#L.1stOrderLambdaMu){reference-type="ref" reference="L.1stOrderLambdaMu"} readily yield the desired expression for $\varepsilon_l$. ◻ ### Step 3: Second order estimates {#step-3-second-order-estimates .unnumbered} In view of Lemma [Lemma 8](#L.epsilonl){reference-type="ref" reference="L.epsilonl"}, in the operators $T_{\eta,\varepsilon}$ and $\widetilde T_{\eta,\varepsilon}$ it is natural to take $\varepsilon:= -\frac\pi{2} \eta(1+\delta)$ with $|\delta| \ll 1$. With some abuse of notation, we shall denote by $$T_{\eta,\delta} := \partial_x^2 - \frac{\eta(1-\frac\pi{2} \eta(1+\delta))}{1-\eta(1-\frac\pi{2} \eta(1+\delta))x}\, \partial_x - \frac{3(1-\frac\pi{2} \eta(1+\delta))^2}{(1-\eta(1-\frac\pi{2} \eta(1+\delta))x)^2}$$ the resulting differential operator, and we shall use the notation $\nu_n(\eta,\delta)$, $\Lambda_n(\eta,\delta)$, $\Psi_n(\eta,\delta)$ and $\Phi_n(\eta,\delta)$ for its eigenvalues and eigenfunctions. Likewise, we define $$\widetilde{T}_{\eta,\delta} := \partial_x^2 - \frac{\eta(1-\frac\pi{2} \eta(1+\delta))}{1-\eta(1-\frac\pi{2} \eta(1+\delta))x}\, \partial_x\,,$$ and we use a similar notation for its eigenvalues and eigenfunctions: $\widetilde{\nu}_n(\eta,\delta)$, $\widetilde{\Lambda}_n(\eta,\delta)$, $\widetilde{\Psi}_n(\eta,\delta)$ and $\widetilde{\Phi}_n(\eta,\delta)$. Of course, the dependence on the new parameters $(\eta,\delta)$ is still analytic. The second order asymptotic expansions that we need are the following: **Lemma 9**. *For $|\eta|+|\delta| \ll 1$, $$\begin{aligned} \Lambda_0(\eta,\delta) & = 4-3\pi\eta\delta-16 \eta^2 + O(|\eta|^3+|\delta|^3)\,, \\ \nu_1(\eta,\delta) & = 4-3\pi\eta\delta+ 24 \eta^2 + O(|\eta|^3+|\delta|^3)\,, \\ \widetilde{\nu}_2(\eta,\delta) & = 4 + \tfrac34\, \eta^2 + O(|\eta|^3+|\delta|^3)\,,\\ \widetilde{\Lambda}_1(\eta,\delta) & = 4 + \tfrac72\,\eta^2 + O(|\eta|^3+|\delta|^3)\,.\end{aligned}$$* *Proof.* We start with $\Lambda_0$. In the case $\eta=\delta=0$, it is clear that $\Phi_{00}(x) := \sin(x)$ is an eigenfunction of $T_{0,0}$ with eigenvalue $\Lambda_{00} := 4$. 
Hence, there exists an eigenfunction of the form $$\Phi_0(\eta,\delta)(x) := \sum_{j,k \geqslant 0} \eta^j \delta^k\, \Phi_{jk}(x)\,, \quad \textup{with} \quad \Phi_{jk}(0) = \Phi_{jk}(\pi) = 0\,, \quad \textup{for all} \quad j,k \geqslant 0\,,$$ with eigenvalue $$\Lambda_0(\eta,\delta) := \sum_{j,k\geqslant 0} \eta^j \delta^k\, \Lambda_{jk}\,,$$ which we normalize as $$\int_{0}^{\pi} \Phi_0(\eta,\delta)(x) \sin(x)\, dx = \frac{\pi}{2}= \int_{0}^{\pi} \Phi_{00}(x) \sin(x)\, dx\,,$$ to ensure that $$\int_{0}^{\pi} \Phi_{jk}(x) \sin(x)\, dx = 0 \quad \textup{for all} \quad (j,k)\neq (0,0)\,.$$ Since $$\begin{gathered} T_{\eta,\delta} = \partial_x^2 - \big(\eta + (x-\tfrac{\pi}{2}) \eta^2 + O(\eta^2 |\delta| + |\eta|^3) \big)\,\partial_x \\ - \big(3-3(\pi-2x)\eta -3\pi\eta\delta+ \frac34(\pi^2-12\pi x + 12x^2) \eta^2 + O(\eta^2|\delta| + |\eta|^3) \big)\,,\end{gathered}$$ arguing as in Lemma [Lemma 7](#L.1stOrderLambdaMu){reference-type="ref" reference="L.1stOrderLambdaMu"}, the eigenvalue equation reads as $$\begin{aligned} 0 & = T_{\eta,\delta} \Phi_0 + \Lambda_0(\eta,\delta) \Phi_0 \\ & = L \Phi_{00} + \delta\Big( L \Phi_{01} + \Lambda_{01} \Phi_{00} \Big) + \delta^2 \Big( L \Phi_{02} + \Lambda_{02} \Phi_{00} + \Lambda_{01} \Phi_{01} \Big) \\ & \quad + \eta \Big( L \Phi_{10} - \partial_x \Phi_{00} + \big( \Lambda_{10} +3(\pi -2x) \big)\Phi_{00} \Big) \\ & \quad + \eta^2 \Big( L \Phi_{20} - \partial_x \Phi_{10} + (\tfrac{\pi}{2}-x) \partial_x \Phi_{00} + \big(\Lambda_{10}+3(\pi-2x)\big) \Phi_{10} + \big(\Lambda_{20} - \tfrac34(\pi^2-12\pi x + 12x^2) \big) \Phi_{00} \Big) \\ & \quad + \eta \delta\Big( L \Phi_{11} - \partial_x \Phi_{01} + \big(\Lambda_{10} + 3(\pi - 2x) \big) \Phi_{01} + \Lambda_{01} \Phi_{10} + (\Lambda_{11}+3\pi) \Phi_{00} \Big) + O\big(|\eta|^3+|\delta|^3\big) \quad \textup{in } (0,\pi)\,,\end{aligned}$$ with $L:= \partial_x^2 + 1$. By the choice of $\Lambda_{00}$ and $\Phi_{00}$, the zeroth order term of the above expression cancels. Moreover, arguing as in Lemma [Lemma 7](#L.1stOrderLambdaMu){reference-type="ref" reference="L.1stOrderLambdaMu"}, it is immediate to see that $\Lambda_{01} = \Lambda_{02} = 0$ and that $\Phi_{01} \equiv \Phi_{02} \equiv 0$. The first nontrivial equation we need to solve is $$L \Phi_{10} - \partial_x \Phi_{00} + \big( \Lambda_{10} +3(\pi -2x) \big)\Phi_{00} = 0 \quad \textup{in } (0,\pi)\,.$$ Multiplying by $\Phi_{00}$ and integrating by parts, we get $$\Lambda_{10}\, \frac{\pi}{2} = \Lambda_{10} \int_0^\pi \sin(x)^2 \,dx = 3 \int_0^\pi (2x-\pi)\sin(x)^2 \,dx = 0\,,$$ so $\Lambda_{10} = 0$. The equation can then be rewritten as $$\label{E.2nd1} L \Phi_{10} - \cos(x) + 3(\pi-2x) \sin(x) = 0 \quad \textup{ in } (0,\pi)\,.$$ Since $\Phi_{10}(0)=\Phi_{10}(\pi)=0$, the solution can be obtained using Fourier series, and a tedious but elementary computation shows $$\label{E.Phi10Serie} \Phi_{10}(x) = \frac4{\pi}\sum_{k =1}^\infty \frac{2k(4k^2-13)}{(1-4k^2)^3} \sin(2kx)\,.$$ The next equations we have are $$\begin{aligned} & L \Phi_{11} + (\Lambda_{11}+3\pi) \Phi_{00} = 0 \quad \textup{in } (0,\pi)\,, \label{E.2nd2} \\ & L \Phi_{20} - \partial_x \Phi_{10} + (\tfrac{\pi}{2}-x) \partial_x \Phi_{00} + 3(\pi-2x) \Phi_{10} + \big(\Lambda_{20} - \tfrac34(\pi^2-12\pi x + 12x^2)\big) \Phi_{00} = 0 \quad \textup{in }(0,\pi)\,. \label{E.2nd3} \end{aligned}$$ Multiplying [\[E.2nd2\]](#E.2nd2){reference-type="eqref" reference="E.2nd2"} by $\Phi_{00}$ and integrating by parts, we readily get $\Lambda_{11} = -3\pi$ and $\Phi_{11} \equiv 0$. 
Finally, doing the same with [\[E.2nd3\]](#E.2nd3){reference-type="eqref" reference="E.2nd3"}, we obtain $$\Lambda_{20} = \frac2{\pi} \int_0^\pi \Big( \partial_x \Phi_{10} - (\tfrac{\pi}{2}-x) \cos(x) - 3(\pi-2x) \Phi_{10} + \tfrac34(\pi^2-12\pi x + 12x^2) \sin(x) \Big) \sin(x) \, dx \,.$$ Plugging in the expression for $\Phi_{10}$ that we found in [\[E.Phi10Serie\]](#E.Phi10Serie){reference-type="eqref" reference="E.Phi10Serie"}, a straightforward computation yields $\Lambda_{20} = -16$, so we arrive at the asymptotic formula $$\Lambda_0(\eta,\delta) = 4-3\pi\eta\delta-16\eta^2 + O(|\eta|^3+|\delta|^3)\,.$$ Next, consider $\nu_1$. Starting with $\nu_{00} := 4$ and $\Psi_{00}(x):=\cos(x)$, we can then take $$\begin{aligned} \Psi_1(\eta,\delta)(x) &:= \sum_{j,k\geqslant 0} \eta^j \delta^k\, \Psi_{jk}(x)\,, \quad \textup{with} \quad \Psi_{jk}'(0) = \Psi_{jk}'(\pi) = 0\,, \quad \textup{for all} \quad j,k \geqslant 0\,,\\ \nu_1(\eta,\delta) &:= \sum_{j,k \geqslant 0} \eta^j \delta^k\, \nu_{jk}\,,\end{aligned}$$ with the normalization $$\int_{0}^{\pi} \Psi_1(\eta,\delta)(x) \cos(x)\, dx = \frac{\pi}{2}= \int_{0}^{\pi} \Psi_{00}(x) \cos(x)\, dx\,.$$ We then proceed as in the case of $\Lambda_0$, so we skip some details. Again with $L:=\partial_x^2+1$, one readily arrives at the equations $$\begin{aligned} 0 & = T_{\eta,\delta} \Psi_1 + \nu_1(\eta,\delta) \Psi_1 \\ & = L \Psi_{00} + \delta\Big( L \Psi_{01} + \nu_{01} \Psi_{00} \Big) + \delta^2 \Big( L \Psi_{02} + \nu_{02} \Psi_{00} + \nu_{01} \Psi_{01} \Big) \\ & \quad + \eta \Big( L \Psi_{10} - \partial_x \Psi_{00} + \big( \nu_{10} +3(\pi -2x) \big)\Psi_{00} \Big) \\ & \quad + \eta^2 \Big( L \Psi_{20} - \partial_x \Psi_{10} + (\tfrac{\pi}{2}-x) \partial_x \Psi_{00} + \big(\nu_{10}+3(\pi-2x)\big) \Psi_{10} + \big(\nu_{20} - \tfrac34(\pi^2-12\pi x + 12x^2) \big) \Psi_{00} \Big) \\ & \quad + \eta \delta\Big( L \Psi_{11} - \partial_x \Psi_{01} + \big(\nu_{10} + 3(\pi - 2x) \big) \Psi_{01} + \nu_{01} \Psi_{10} + (\nu_{11}+3\pi) \Psi_{00} \Big) + O\big(|\eta|^3+|\delta|^3\big) \quad \textup{in } (0,\pi)\,.\end{aligned}$$ As in the case of $\Lambda_0$, it is straightforward to see that $\nu_{01} = \nu_{02} = 0$ and that $\Psi_{01} \equiv \Psi_{02} \equiv 0$. Moreover, arguing as we did for $\Lambda_{10}$, we get that $\nu_{10} = 0$. Hence, the equation for $\Psi_{10}$ can be rewritten as $$L \Psi_{10} +\sin(x) +3(\pi -2x) \cos(x) = 0 \quad \textup{in } (0,\pi)\,.$$ We can solve this ODE with Neumann boundary conditions using Fourier series: $$\label{E.Psi10Serie} \Psi_{10}(x) = -\frac{14}{\pi} + \frac{4}{\pi} \sum_{k =1}^\infty \frac{7+20k^2}{(4k^2-1)^3} \cos(2kx)\,.$$ The next equations we need to analyze are $$\begin{aligned} \label{E.2nd6} & L \Psi_{11} + (\nu_{11}+3\pi)\Psi_{00} = 0 \quad \textup{in }(0,\pi)\,, \\ \label{E.2nd7} & L \Psi_{20} - \partial_x \Psi_{10} + (\tfrac{\pi}{2}-x) \partial_x \Psi_{00} +3(\pi-2x)\Psi_{10} + \big(\nu_{20} - \tfrac34(\pi^2-12\pi x + 12x^2) \big)\Psi_{00} = 0 \quad \textup{in }(0,\pi)\,,\end{aligned}$$ again with zero Neumann boundary conditions. Multiplying [\[E.2nd6\]](#E.2nd6){reference-type="eqref" reference="E.2nd6"} by $\Psi_{00}$ and integrating by parts, we get $\nu_{11} = -3\pi$ and $\Psi_{11} \equiv 0$. 
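The value $\Lambda_{20} = -16$ (and, in the same way, the other second order coefficients obtained in this proof) can be double-checked numerically from the series [\[E.Phi10Serie\]](#E.Phi10Serie){reference-type="eqref" reference="E.Phi10Serie"}. The following short sketch is only a consistency check of the computation above and is not part of the argument; the truncation order and the quadrature grid are arbitrary choices.

```python
# Consistency check (not part of the proof) of Lambda_{20} = -16, using the Fourier
# series (E.Phi10Serie) for Phi_{10} and simple trapezoidal quadrature on [0, pi].
import numpy as np

K = 400                                  # truncation of the Fourier series
M = 4001                                 # quadrature nodes
x = np.linspace(0.0, np.pi, M)
k = np.arange(1, K + 1).reshape(-1, 1)

coeff = (4.0 / np.pi) * 2.0 * k * (4.0 * k**2 - 13.0) / (1.0 - 4.0 * k**2) ** 3
Phi10 = (coeff * np.sin(2.0 * k * x)).sum(axis=0)
dPhi10 = (coeff * 2.0 * k * np.cos(2.0 * k * x)).sum(axis=0)

integrand = (dPhi10
             - (np.pi / 2.0 - x) * np.cos(x)
             - 3.0 * (np.pi - 2.0 * x) * Phi10
             + 0.75 * (np.pi**2 - 12.0 * np.pi * x + 12.0 * x**2) * np.sin(x)) * np.sin(x)
dx = x[1] - x[0]
Lambda20 = (2.0 / np.pi) * (0.5 * (integrand[:-1] + integrand[1:]) * dx).sum()
print("Lambda_20 =", Lambda20)           # compare with the value -16 obtained above
```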
Proceeding similarly with the second equation, we get $$\nu_{20} = \frac2{\pi} \int_0^{\pi} \Big( \partial_x \Psi_{10} + (\tfrac{\pi}{2}-x) \sin(x) - 3(\pi-2x) \Psi_{10} + \tfrac34(\pi^2-12\pi x + 12x^2)\cos(x) \Big) \cos(x) \,dx \,.$$ Plugging in the formula for $\Psi_{10}$, one gets $\nu_{20} = 24$, and thus we conclude that $$\nu_1(\eta,\delta) = 4-3\pi\eta\delta+ 24 \eta^2 + O(|\eta|^3+|\delta|^3)\,.$$ We now compute $\widetilde{\nu}_2$. We then start with $\widetilde{\nu}_{00} := 4$ and $\widetilde{\Psi}_{00}(x): = \cos(2x)$, which leads to the formulas $$\begin{aligned} \widetilde{\Psi}_2(\eta,\delta)(x) &= \sum_{j,k\geqslant 0} \eta^j \delta^k\, \widetilde{\Psi}_{jk}(x)\,, \quad \textup{with} \quad \widetilde{\Psi}_{jk}'(0) = \widetilde{\Psi}_{jk}'(\pi) = 0 \,, \quad \textup{for all} \quad j,k \geqslant 0\,,\\ \widetilde{\nu}_2(\eta,\delta) &= \sum_{j,k \geqslant 0} \eta^j \delta^k\, \widetilde{\nu}_{jk}\,,\end{aligned}$$ with the normalization $$\int_{0}^{\pi} \widetilde{\Psi}_2(\eta,\delta)(x) \cos(2x)\, dx = \frac{\pi}{2}= \int_{0}^{\pi} \widetilde{\Psi}_{00}(x) \cos(2x)\, dx \,.$$ Taking into account the expansion of $T_{\eta,\delta}$, it is clear that $$\widetilde{T}_{\eta,\delta} = \partial_x^2 - \big(\eta + (x-\tfrac{\pi}{2}) \eta^2 + O(\eta^2 |\delta| + |\eta|^3) \big)\,\partial_x\,.$$ Setting $\widetilde{L}:= \partial_x^2 + 4$, we then get $$\begin{aligned} 0 & = \widetilde{T}_{\eta,\delta} \widetilde{\Psi}_2 + \widetilde{\nu}_2(\eta,\delta) \widetilde{\Psi}_2 \\ & = \widetilde{L} \widetilde{\Psi}_{00} + \delta\Big( \widetilde{L} \widetilde{\Psi}_{01} + \widetilde{\nu}_{01} \widetilde{\Psi}_{00} \Big) + \eta \Big( \widetilde{L} \widetilde{\Psi}_{10} - \partial_x \widetilde{\Psi}_{00} + \widetilde{\nu}_{10}\widetilde{\Psi}_{00} \Big) \\ & \quad + \delta^2 \Big( \widetilde{L} \widetilde{\Psi}_{02} + \widetilde{\nu}_{02} \widetilde{\Psi}_{00} + \widetilde{\nu}_{01} \widetilde{\Psi}_{01} \Big)+ \eta^2 \Big( \widetilde{L} \widetilde{\Psi}_{20} - \partial_x \widetilde{\Psi}_{10} + (\tfrac{\pi}{2}-x) \partial_x \widetilde{\Psi}_{00} + \widetilde{\nu}_{10} \widetilde{\Psi}_{10} + \widetilde{\nu}_{20} \widetilde{\Psi}_{00} \Big) \\ & \quad + \eta \delta\Big( \widetilde{L} \widetilde{\Psi}_{11} - \partial_x \widetilde{\Psi}_{01} + \widetilde{\nu}_{10} \widetilde{\Psi}_{01} + \widetilde{\nu}_{01} \widetilde{\Psi}_{10} + \widetilde{\nu}_{11} \widetilde{\Psi}_{00} \Big) + O\big(|\eta|^3+|\delta|^3\big)\,, \quad \textup{in } (0,\pi)\,.\end{aligned}$$ Arguing as in the first two cases, it is not difficult to see that $\widetilde{\nu}_{01} = \widetilde{\nu}_{02} = \widetilde{\nu}_{10} = 0$ and that $\widetilde{\Psi}_{01} \equiv \widetilde{\Psi}_{02} \equiv 0$. 
The equation for $\widetilde{\Psi}_{10}$ therefore reads as $$\widetilde{L} \widetilde{\Psi}_{10} + 2\sin(2x) = 0 \quad \textup{in }(0,\pi)\,,$$ so once again we can solve this with zero Neumann boundary conditions by means of Fourier series: $$\widetilde{\Psi}_{10}(x) = -\frac{16}{\pi} \sum_{j = 0}^\infty \frac{1}{((2j+1)^2-4)^2} \cos((2j+1)x)\,.$$ On the other hand, we also need that $$\begin{aligned} \label{E.2nd4} & \widetilde{L} \widetilde{\Psi}_{11} + \widetilde{\nu}_{11} \widetilde{\Psi}_{00} = 0 \quad \textup{in } (0,\pi)\,,\\ \label{E.2nd5} & \widetilde{L} \widetilde{\Psi}_{20} - \partial_x \widetilde{\Psi}_{10} + (\tfrac{\pi}{2}-x) \partial_x \widetilde{\Psi}_{00} + \widetilde{\nu}_{20} \widetilde{\Psi}_{00} = 0 \quad \textup{in } (0,\pi)\,.\end{aligned}$$ Using $\widetilde{\Psi}_{00}$ as test function in [\[E.2nd4\]](#E.2nd4){reference-type="eqref" reference="E.2nd4"} and integrating by parts, we get $\widetilde{\nu}_{11} = 0$ and $\widetilde{\Psi}_{11} \equiv 0$. Doing the same in [\[E.2nd5\]](#E.2nd5){reference-type="eqref" reference="E.2nd5"}, it follows that $$\widetilde{\nu}_{20} = \frac2{\pi} \int_0^{\pi} \big( \partial_x \widetilde{\Psi}_{10} + (\pi-2x)\sin(2x) \big) \cos(2x)\, dx\,.$$ Direct computations using the explicit series expansion of $\widetilde{\Psi}_{10}$ give $\widetilde{\nu}_{20} = \tfrac34$. Thus, we conclude that$$\widetilde{\nu}_2(\eta,\delta) = 4 + \frac34 \eta^2 + O(|\eta|^3+|\delta|^3)\,.$$ Finally, we prove the expansion of $\widetilde{\Lambda}_1$ starting from $\widetilde{\Lambda}_{00} := 4$ and $\widetilde{\Phi}_{00}(x) := \sin(2x)$. The chosen normalization here is $$\int_{0}^{\pi} \widetilde{\Phi}_1(\eta,\delta)(x) \sin(2x)\, dx = \frac{\pi}{2}\,.$$ Since the proof is similar to the one of $\widetilde{\nu}_2$ we skip some details. 
First of all, note that we can expand $$\widetilde{\Lambda}_1(\eta,\delta) := \sum_{j,k\geqslant 0} \eta^j \delta^k\, \widetilde{\Lambda}_{jk}\,,$$ and $$\widetilde{\Phi}_1(\eta,\delta)(x) := \sum_{j,k \geqslant 0} \eta^j \delta^k\, \widetilde{\Phi}_{jk}(x)\,, \quad \textup{with} \quad \widetilde{\Phi}_{jk}(0) = \widetilde{\Phi}_{jk}(\pi) = 0\,, \quad \textup{for all} \quad j,k \geqslant 0\,.$$ Then, arguing as in the case of $\widetilde{\nu}_2$, we get $$\begin{aligned} 0 & = \widetilde{T}_{\eta,\delta} \widetilde{\Phi}_1 + \widetilde{\Lambda}_1(\eta,\delta) \widetilde{\Phi}_1 \\ & = \widetilde{L} \widetilde{\Phi}_{00} + \delta\Big( \widetilde{L} \widetilde{\Phi}_{01} + \widetilde{\Lambda}_{01} \widetilde{\Phi}_{00} \Big) + \eta \Big( \widetilde{L} \widetilde{\Phi}_{10} - \partial_x \widetilde{\Phi}_{00} + \widetilde{\Lambda}_{10}\widetilde{\Phi}_{00} \Big) \\ & \quad + \delta^2 \Big( \widetilde{L} \widetilde{\Phi}_{02} + \widetilde{\Lambda}_{02} \widetilde{\Phi}_{00} + \widetilde{\Lambda}_{01} \widetilde{\Phi}_{01} \Big)+ \eta^2 \Big( \widetilde{L} \widetilde{\Phi}_{20} - \partial_x \widetilde{\Phi}_{10} + (\tfrac{\pi}{2}-x) \partial_x \widetilde{\Phi}_{00} + \widetilde{\Lambda}_{10} \widetilde{\Phi}_{10} + \widetilde{\Lambda}_{20} \widetilde{\Phi}_{00} \Big) \\ & \quad + \eta \delta\Big( \widetilde{L} \widetilde{\Phi}_{11} - \partial_x \widetilde{\Phi}_{01} + \widetilde{\Lambda}_{10} \widetilde{\Phi}_{01} + \widetilde{\Lambda}_{01} \widetilde{\Phi}_{10} + \widetilde{\Lambda}_{11} \widetilde{\Phi}_{00} \Big) + O\big(|\eta|^3+|\delta|^3\big)\,, \quad \textup{in } (0,\pi)\,.\end{aligned}$$ Arguing as in the previous cases, one can readily check that $\widetilde{\Lambda}_{01} = \widetilde{\Lambda}_{02} = \widetilde{\Lambda}_{10} = 0$ and that $\widetilde{\Phi}_{01} \equiv \widetilde{\Phi}_{02} \equiv 0$. Hence, we actually need that $$\widetilde{L} \widetilde{\Phi}_{10} -2\cos(2x) = 0 \quad \textup{in } (0,\pi)\,.$$ This ODE can be solved using Fourier series and it follows that $$\widetilde{\Phi}_{10}(x) = - \frac{8}{\pi}\, \sum_{j \geqslant 0} \frac{2j+1}{((2j+1)^2-4)^2} \sin((2j+1)x)\,.$$ Moreover, we also need that $$\begin{aligned} & \widetilde{L} \widetilde{\Phi}_{11} + \widetilde{\Lambda}_{11} \widetilde{\Phi}_{00} = 0 \quad \textup{in } (0,\pi)\,, \label{E.2nd8} \\ & \widetilde{L} \widetilde{\Phi}_{20} - \partial_x \widetilde{\Phi}_{10} + (\tfrac{\pi}{2}-x) \partial_x \widetilde{\Phi}_{00} + \widetilde{\Lambda}_{20} \widetilde{\Phi}_{00} = 0 \quad \textup{in } (0,\pi)\,. \label{E.2nd9}\end{aligned}$$ Multiplying by $\widetilde{\Phi}_{00}$ and integrating by parts as above, we get $\widetilde{\Lambda}_{11} = 0$, $\widetilde{\Phi}_{11} \equiv 0$ and $$\widetilde{\Lambda}_{20} = \frac{2}{\pi} \int_0^{\pi} \big( \partial_x \widetilde{\Phi}_{10} - (\pi -2x) \cos(2x) \Big) \sin(2x)\, dx\,.$$ Using the series for $\widetilde{\Phi}_{10}$, this yields $\widetilde{\Lambda}_{20} = \frac72$, so we conclude $$\widetilde{\Lambda}_1(\eta,\delta) = 4 + \frac{7}{2}\eta^2 + O(|\eta|^3+|\delta|^3)\,.$$ The lemma then follows. ◻ ### Step 4: The transversality condition {#step-4-the-transversality-condition .unnumbered} Armed with the above estimates, we can now finish the proof of Proposition [Proposition 6](#P.cross2){reference-type="ref" reference="P.cross2"}. We argue as in Lemma [Lemma 8](#L.epsilonl){reference-type="ref" reference="L.epsilonl"}. 
That is, given the parameters $l\gg1$ and $h=(1-a)/\pi\ll1$, we consider the small constants $(\eta,\delta)$ for which $$\label{E.fixed} h= \eta\big[1-\tfrac\pi 2\eta(1+\delta)\big]\,,$$ and recall that in this case $$\widetilde{\nu}(h):= h^2 \mu_{0,2}(a) =\widetilde{\nu}_2(\eta,\delta)\,,\qquad \lambda(h):= h^2\lambda_{l,0}(a)=\Lambda_0(\eta,\delta)\,.$$ By [\[E.fixed\]](#E.fixed){reference-type="eqref" reference="E.fixed"}, the derivative of $\lambda$ with respect to the parameter $h$, leaving $l$ fixed, is $$\lambda'(h)=-\frac2{\pi \eta^2}\frac\partial{\partial\delta}\Lambda_0(\eta,\delta)= \frac2{\pi \eta^2}[3\pi\eta + O(\delta^2+\eta^2)] \,,$$ where we have used the asymptotics of Lemma [Lemma 9](#L.2ndOrder){reference-type="ref" reference="L.2ndOrder"}. Note that this asymptotic expansion can be differentiated term by term because it defines an analytic function of $(\eta,\delta)$. Similarly, $$\widetilde{\nu}'(h)=-\frac2{\pi \eta^2}\frac\partial{\partial\delta}\widetilde{\nu}_2(\eta,\delta)= \frac2{\pi \eta^2}O(\delta^2+\eta^2)\,.$$ Since $\eta_l=\sqrt3/l$ and $\delta_l=O(l^{-1})$, we then get that $$\widetilde{\nu}'(h_l)=\frac2{\pi \eta_l^2}O(l^{-2})\neq \frac2{\pi \eta_l^2}[3\sqrt3\pi l^{-1} + O(l^{-2})]=\lambda'(h_l)$$ for large $l$, so [\[T\]](#T){reference-type="eqref" reference="T"} follows. ◻ # Setting up the problem {#S.deform} Given a constant $a\in(0,1)$ and functions $b,B\in C^{2,\alpha}(\mathbb{T})$ satisfying $$\|b\|_{L^\infty(\mathbb{T})}+\|B\|_{L^\infty(\mathbb{T})}<\min\Big\{a,\tfrac{1-a}{2}\Big\}\,,$$ we consider the bounded domain defined in polar coordinates by $$\label{E.defOm} \Omega_a^{b,B}:=\{(r,\theta)\in\mathbb{R}^+\times\mathbb{T}: a+b(\theta)< r< 1+B(\theta)\}\,.$$ We aim to show that there exist some constant $a>0$ and some nonconstant functions $b$ and $B$ such that there is a Neumann eigenfunction $u$ on the domain $\Omega_a^{b,B}$, with eigenvalue $\mu_{0,2}(a)$, which is locally constant on the boundary. Equivalently, the function $u$ satisfies the equation $$\label{E.uOm} \Delta u+\mu_{0,2}(a) u=0\quad\text{in }\Omega_a^{b,B}\,,\qquad \nabla u=0\quad\text{on }\partial\Omega_a^{b,B}\,.$$ Since finding the functions $b$ and $B$ is the key part of the problem, we shall start by fixing some $a>0$ and by mapping the fixed annulus $\Omega_\frac12:=\big\{(R,\theta)\in \big(\tfrac12,1\big)\times\mathbb{T}\big\}$ into $\Omega_a^{b,B}$ through the $C^{2,\alpha}$ diffeomorphism $$\label{E.defPhi} \Phi_a^{b,B}:\Omega_\frac12\to \Omega_a^{b,B}$$ defined in polar coordinates as $\Omega_\frac12\ni(R,\theta)\mapsto (r,\theta)\in \Omega_a^{b,B}$, with $$r:=a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta) \,.$$ For later purposes, we also introduce the shorthand notation $$\Phi_a := \Phi_a^{0,0}\,,$$ and denote the nontrivial component of this diffeomorphism by $\Phi_a^1$, which only depends on the radial variable on $\Omega_\frac12$: $$\Phi_a(R,\theta) = (\Phi_a^1(R),\theta) \,.$$ Denoting by $\psi_{0,2}$, $\varphi_{l,0}$ the radial parts of the corresponding Neumann and Dirichlet eigenfunctions of the annulus $\Omega_a$, as in Section [2](#S.eigen){reference-type="ref" reference="S.eigen"}, we also find it convenient to set $$\label{E.eigenvaluel.fixed.annulus} \overline{\psi}_a:= \psi_{0,2} \circ \Phi_a^1 \in C^{2,\alpha}([\tfrac12,1]) \quad \textup{and} \quad \overline{\varphi}_a := \varphi_{l,0} \circ \Phi_a^{1} \in C^{2,\alpha}([\tfrac12,1])\,,$$ which are functions defined on the interval $[\frac12,1]$. 
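The bookkeeping behind the map $\Phi_a^{b,B}$ is elementary, and the following minimal sketch (illustrative only; the sample perturbations $b$, $B$ below are arbitrary choices satisfying the smallness constraint, not the functions produced by the bifurcation argument) simply checks that $R=\tfrac12$ and $R=1$ are sent to the inner and outer boundary curves $r=a+b(\theta)$ and $r=1+B(\theta)$.

```python
# Minimal sketch of the change of variables Phi_a^{b,B}; illustrative sample data only.
import numpy as np

def phi(R, theta, a, b, B):
    """Map (R, theta) in (1/2, 1) x T to (r, theta) in Omega_a^{b,B}."""
    r = a + (1.0 - a + B(theta)) * (2.0 * R - 1.0) + 2.0 * (1.0 - R) * b(theta)
    return r, theta

a, l = 0.7, 4
b = lambda t: 0.02 * np.cos(l * t)      # sample inner perturbation
B = lambda t: -0.015 * np.cos(l * t)    # sample outer perturbation (|b|+|B| < min{a,(1-a)/2})

theta = np.linspace(0.0, 2.0 * np.pi, 9)
r_inner, _ = phi(0.5, theta, a, b, B)   # R = 1/2 should give r = a + b(theta)
r_outer, _ = phi(1.0, theta, a, b, B)   # R = 1   should give r = 1 + B(theta)
print(np.allclose(r_inner, a + b(theta)), np.allclose(r_outer, 1.0 + B(theta)))
```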
Using this change of variables, one can rewrite Equation [\[E.uOm\]](#E.uOm){reference-type="eqref" reference="E.uOm"} in terms of the function $$v:=u\circ\Phi_a^{b,B}\in C^{2,\alpha}(\Omega_\frac12)\,,$$ as $$\label{E.problem.fixed.annulus} L_a^{b,B}v=0\quad \text{in }\Omega_\frac12\,,\qquad \nabla v=0\quad \text{on }\partial\Omega_\frac12\,,$$ where the differential operator $$L_a^{b,B}v:= [\Delta(v\circ (\Phi_a^{b,B})^{-1})]\circ\Phi_a^{b,B} +\mu_{0,2}(a)v$$ is simply $\Delta+\mu_{0,2}(a)$ written in the coordinates $(R,\theta)$. A tedious but straightforward computation yields $$% \label{P.L} \begin{aligned} L_a^{b,B}v & = \frac{1}{4(1-a+B(\theta)-b(\theta))^2}\, \partial^2_R v \\ & \quad + \frac{1}{2(1-a+B(\theta)-b(\theta))(a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta))} \partial_R v \\ & \quad + \frac{1}{(a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta))^2} \Bigg[ \partial_\theta^2 v + \left(\frac{B'(\theta)(2R-1) + 2(1-R)b'(\theta)}{2(1-a+B(\theta)-b(\theta))} \right)^2 \partial_R^2 v \\ & \quad + \bigg( \frac{(B'(\theta)-b'(\theta))^2(2R-1)+b'(\theta)(B'(\theta)-b'(\theta))}{(1-a+B(\theta)- b(\theta))^2} - \frac{B''(\theta)(2R-1) + 2(1-R)b''(\theta)}{2(1-a+B(\theta)-b(\theta))} \bigg) \partial_R v \\ & \quad - \frac{B'(\theta)(2R-1) + 2(1-R)b'(\theta)}{1-a+B(\theta)-b(\theta)}\, \partial_{R\theta} v \Bigg] + \mu_{0,2}(a) v\,. \end{aligned}$$ In particular, $$\label{E.La} L_a v:= L_a^{0,0} v = \frac{1}{4(1-a)^2} \bigg[ \partial_R^2 v + \frac{1}{R-\frac12+\frac{a}{2(1-a)}} \partial_R v + \frac{1}{(R-\frac12+\frac{a}{2(1-a)})^2} \partial_\theta^2 v \bigg] + \mu_{0,2}(a)v \,.$$ Let us now present the functional setting that we will use to analyze these operators. In what follows we will always consider function spaces with dihedral symmetry, that is, spaces of functions which are invariant under the action of the isometry group of a $l$-sided regular polygon, for $l \geqslant 3$. 
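Before making the functional setting precise, let us record an elementary consistency check of [\[E.La\]](#E.La){reference-type="eqref" reference="E.La"}, which will also be useful later on. If $v(R,\theta)=\psi(\Phi_a^1(R))$ for some $\psi\in C^2$, then, since $\Phi_a^1(R)=a+(1-a)(2R-1)$ and $2(1-a)\big(R-\tfrac12+\tfrac{a}{2(1-a)}\big)=\Phi_a^1(R)$, we have $\partial_R v=2(1-a)\,\psi'(r)$ and $\partial_R^2 v=4(1-a)^2\,\psi''(r)$ with $r=\Phi_a^1(R)$, and therefore $$L_a v = \psi''(r)+\frac{\psi'(r)}{r}+\mu_{0,2}(a)\,\psi(r)\,.$$ In particular $L_a\overline{\psi}_a=0$, since $\psi_{0,2}$ solves the radial Neumann equation. We now return to the function spaces with dihedral symmetry.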
More precisely, let us define $$C_{l}^{k,\alpha}(\overline{\Omega}_\frac12):= \big\{u \in C^{k,\alpha}(\overline{\Omega}_\frac12): u(R,\theta)=u(R,-\theta)\,, \ u(R,\theta) = u (R,\theta+\tfrac{2\pi}l)\big\}\,,$$ whose canonical norm we shall denote by $$\|u\|_{C^{k,\alpha}} := \|u\|_{C^{k,\alpha}(\overline{\Omega}_\frac12)}\,.$$ Following the recent paper [@Fall-Minlend-Weth-main], let us define the "anisotropic" space $$\mathcal{X}^{k,\alpha}:= \big\{ u \in C^{k,\alpha}_{l}(\overline{\Omega}_\frac12): \partial_R u \in C^{k,\alpha}_{l}(\overline{\Omega}_\frac12) \big\}\,,$$ endowed with the norm $\|u\|_{\mathcal{X}^{k,\alpha}} := \|u\|_{C^{k,\alpha}} + \|\partial_R u \|_{C^{k,\alpha}}$, and its closed subspaces $$\mathcal{X}^{k,\alpha}_{\mathrm{D}}:= \big\{u \in \mathcal{X}^{k,\alpha}: u = 0 \textup{ on } \partial \Omega_{\frac12} \big\} \quad \textup{ and } \quad \mathcal{X}^{k,\alpha}_{\mathrm{DN}}:= \big\{u \in \mathcal{X}^{k,\alpha}: u = \partial_R u =0 \textup{ on } \partial \Omega_{\frac12} \big\}\,.$$ For convenience, we also introduce the shorthand notation $$\mathcal{X}:= \mathcal{X}_{\mathrm{D}}^{2,\alpha} \quad \textup{ and } \quad \|u\|_{{\mathcal X}} := \|u\|_{{\mathcal X}^{2,\alpha}}\,.$$ We also need the space $$\mathcal{Y}:= C_{l}^{1,\alpha}(\overline{\Omega}_\frac12) + \mathcal{X}_{\mathrm{D}}^{0,\alpha}\,,$$ endowed with the norm $$\|u\|_{\mathcal{Y}} := \inf\big\{ \|u_1\|_{C^{1,\alpha}}+\|u_2\|_{{\mathcal X}^{0,\alpha}} : u_1 \in C_{l}^{1,\alpha}(\overline{\Omega}_\frac12),\ u_2 \in \mathcal{X}_{\mathrm{D}}^{0,\alpha} \textup{ and } u = u_1+u_2 \big\}\,.$$ As discussed in [@Fall-Minlend-Weth-main Remark 3.2], it is not difficult to see that $(\mathcal{Y},\|\cdot \|_{\mathcal{Y}})$ is a Banach space. Furthermore, we define the open subset $$\label{E.cU} \mathcal{U}:= \Big\{(b,B) \in C^{2,\alpha}_{l}(\mathbb{T}) \times C^{2,\alpha}_{l}(\mathbb{T}) : \|b\|_{L^{\infty}(\mathbb{T})} + \|B\|_{L^{\infty}(\mathbb{T})} < \min\big\{a,\tfrac{1-a}{2}\big\} \Big\}\,.$$ A first result we will need is the following: **Lemma 10**. *The following assertions hold true:* 1. *The function $(v,b,B) \mapsto L_a^{b,B}\,[\,\overline{\psi}_a + v\,]$, with $\overline\psi_a$ given by [\[E.eigenvaluel.fixed.annulus\]](#E.eigenvaluel.fixed.annulus){reference-type="eqref" reference="E.eigenvaluel.fixed.annulus"}, maps ${\mathcal X}_{\mathrm{DN}}^{2,\alpha} \times \mathcal{U}$ into ${\mathcal Y}$.* 2. *If $v \in {\mathcal X}$, then $L_a v \in {\mathcal Y}$.* *Proof.* Let $(w,b,B) \in {\mathcal X}^{2,\alpha} \times \mathcal{U}$ be fixed but arbitrary. 
We define $$\begin{aligned} & F_1(w,b,B) := \frac{1}{4(1-a+B(\theta)-b(\theta))^2}\, \partial^2_R w \\ & \quad + \frac{1}{2(1-a+B(\theta)-b(\theta))(a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta))} \partial_R w \\ & \quad + \frac{1}{(a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta))^2} \Bigg[ \left(\frac{B'(\theta)(2R-1) + 2(1-R)b'(\theta)}{2(1-a+B(\theta)-b(\theta))} \right)^2 \partial_R^2 w \\ & \quad - \frac{B'(\theta)(2R-1) + 2(1-R)b'(\theta)}{1-a+B(\theta)-b(\theta)}\, \partial_{R\theta} w \Bigg] + \mu_{0,2}(a) w \,, \end{aligned}$$ and $$\begin{aligned} & F_2(w,b,B) := \frac{1}{(a+(1-a +B(\theta))(2R-1)+2(1-R)b(\theta))^2} \Bigg[ \partial_\theta^2 w \\ & \quad + \bigg( \frac{(B'(\theta)-b'(\theta))^2(2R-1)+b'(\theta)(B'(\theta)-b'(\theta))}{(1-a+B(\theta)- b(\theta))^2} - \frac{B''(\theta)(2R-1) + 2(1-R)b''(\theta)}{2(1-a+B(\theta)-b(\theta))} \bigg) \partial_R w \Bigg]\,, \\ \end{aligned}$$ and observe that $$L_a^{b,B}w = F_1(w,b,B) + F_2(w,b,B)\,.$$ Then, it is easy to check that $F_1(w,b,B) \in C_{l}^{1,\alpha}(\overline{\Omega}_\frac12)$ and that $F_2(w,b,B) \in \mathcal{X}_{\mathrm{D}}^{0,\alpha}$ for $w = \overline{\psi}_a + v$ with $v \in {\mathcal X}_{\mathrm{DN}}^{2,\alpha}$. In this way we conclude (i). Analogously, we can prove (ii), using that in this case $b\equiv B\equiv 0$. ◻ Our goal now is to find some constant $a \in (0,1)$ and some functions $v \in C^{2,\alpha}(\overline{\Omega}_\frac12)$ and $b,B \in C^{2,\alpha}(\mathbb{T})$ such that [\[E.problem.fixed.annulus\]](#E.problem.fixed.annulus){reference-type="eqref" reference="E.problem.fixed.annulus"} admits a non-trivial solution. Using the functional setting that we just presented, we can reduce our problem so that the basic unknowns are just a constant $a \in (0,1)$ and a function $v \in {\mathcal X}$. To this end, we define the map $${\mathcal X}\to {\mathcal X}_{\mathrm{DN}}^{2,\alpha} \times C_{l}^{2,\alpha}(\mathbb{T}) \times C_{l}^{2,\alpha}(\mathbb{T}), \qquad v \mapsto ( w _v, b_v, B_v)\,,$$ where $$\label{E.bvBvuv} \begin{aligned} b_v(\theta)&:= -2(1-a)(\overline{\psi}_a''(\tfrac12))^{-1} \partial_R v\big(\tfrac12,\theta\big)\,, \\ B_v(\theta) &:= -2(1-a)(\overline{\psi}_a''(1) )^{-1} \partial_R v(1,\theta)\,, \\ w _v(R,\theta) &:= v(R,\theta) + \frac{\overline{\psi}_a'(R)}{2(1-a)}\Big[2(1-R)b_v(\theta) + (2R-1) B_v(\theta) \Big]\,, \end{aligned}$$ and introduce the open subset $$\mathcal{O}:= \Big\{v \in {\mathcal X}: \|b_v\|_{L^{\infty}(\mathbb{T})} + \|B_v\|_{L^{\infty}(\mathbb{T})} < \min\big\{a,\tfrac{1-a}{2}\big\} \Big\}\,.$$ Following [@Fall-Minlend-Weth-main], we now define the function $$\label{E.Ga} G_a: \mathcal{O} \to \mathcal{Y}\,, \qquad v \mapsto L_a^{b_v, B_v}\,[\,\overline{\psi}_{a} + w _v\,]\,.$$ If $v\in\mathcal X$ satisfies $G_a(v)=0$, where $a$ is any constant in $(0,1)$, then $\widetilde{ w }:= \overline{\psi}_a + w _v$ is the desired solution to [\[E.problem.fixed.annulus\]](#E.problem.fixed.annulus){reference-type="eqref" reference="E.problem.fixed.annulus"} with $b := b_v$, $B := B_v$. Hence, we can prove Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} by finding nontrivial zeros of the operator $G_a$ for some $a \in (0,1)$. This will be done in the following section. # The bifurcation argument {#S.bifurcation} In this section we present the proof of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. 
The basic idea is to apply the classical Crandall--Rabinowitz bifurcation theorem (see e.g. [@M.CR] or [@Kielhofer Theorem I.5.1]) to the operator $G_a$ introduced in the previous section. The key ingredients to make the argument work are the asymptotic estimates for the eigenvalues of an annulus proved in Section [2](#S.eigen){reference-type="ref" reference="S.eigen"}. The first auxiliary result we need is the following: **Lemma 11**. *For all $a \in (0,1)$, the map $G_a: \mathcal{O} \to \mathcal{Y}$ is smooth. Moreover, $$DG_a(0)v = L_a v\quad \textup{ for all } v \in \mathcal{X}\,,$$ where $L_a$ is the operator defined in [\[E.La\]](#E.La){reference-type="eqref" reference="E.La"}.* *Proof.* The smoothness of $G_a$ follows immediately from its definition, so we just have to show that $DG_a(0) = L_a$. As $b_v,B_v,w_v$ depend linearly on $v$ and $$w_v= v + w_{a}^{b_v, B_v}$$ with $$w_{a}^{b_v, B_v}(R,\theta) := \frac{\overline{\psi}_a'(R)}{2(1-a)}\Big[2(1-R)b_v(\theta) + (2R-1) B_v(\theta) \Big]\,,$$ it is easy to see that $$\label{E.DGa02} DG_a(0)v = \frac{{\rm d}}{{\rm d}s}L_a^{sb_v, sB_v}[\overline{\psi}_a+sw_v] \Big|_{s=0} = L_a[v+w_a^{b_v,B_v}] + \frac{{\rm d}}{{\rm d}s}L_a^{sb_v, B_v} \Big|_{s=0} [\, \overline{\psi}_a ]\,.$$ To compute the second term, note that the function $\psi_{0,2}(r)$ (see [\[E.ODE.Neumann\]](#E.ODE.Neumann){reference-type="eqref" reference="E.ODE.Neumann"} in Section [2](#S.eigen){reference-type="ref" reference="S.eigen"}) satisfies the ODE $$\psi_{0,2}''+ \frac{\psi_{0,2}'}{r} + \mu_{0,2}(a)\psi_{0,2} = 0$$ for all $r>0$. Therefore, writing $\psi_{0,2}(r,\theta)\equiv \psi_{0,2}(r)$ with some abuse of notation and defining $\overline{\psi}_{0,2}^{\,b,B} := \psi_{0,2} \circ \Phi_a^{b,B},$ we trivially have $$L_a^{b,B}\, \overline{\psi}_{0,2}^{\,b,B} = 0\,, \quad \textup{ in } \Omega_{\frac12}\,.$$ Substituting $(b,B):=(sb_v,sB_v)$ and differentiating the resulting identity, we then find $$\label{E.DGa01} \begin{aligned} 0 & = \frac{{\rm d}}{{\rm d}s}\left( L_a^{s b_v, sB_v}\, \overline{\psi}_{0,2}^{\, sb_v, sB_v} \right)\bigg|_{s = 0} \\ &= \frac{{\rm d}}{{\rm d}s}L_a^{sb_v, sB_v} \Big|_{s=0} [\, \overline{\psi}_a ] + L_a \left[ \frac{{\rm d}}{{\rm d}s}\overline{\psi}_{0,2}^{\,sb_v, sB_v} \Big|_{s=0} \right] = \frac{{\rm d}}{{\rm d}s}L_a^{sb_v, sB_v} \Big|_{s=0} [\, \overline{\psi}_a ] + L_a w_a^{b_v,B_v}\,. \end{aligned}$$ Since $L_a$ is a linear operator, the result immediately follows by combining [\[E.DGa02\]](#E.DGa02){reference-type="eqref" reference="E.DGa02"} and [\[E.DGa01\]](#E.DGa01){reference-type="eqref" reference="E.DGa01"}. ◻ Next we need a regularity result for the operator $$\widetilde{L}_a v:= \frac{1}{4(1-a)^2} \bigg[ \partial_R^2 v + \frac{1}{R-\frac12+\frac{a}{2(1-a)}} \partial_R v + \frac{1}{(R-\frac12+\frac{a}{2(1-a)})^2} \partial_\theta^2 v \bigg]\,.$$ **Lemma 12**. *For all $a \in (0,1)$, let $f \in C_{l}^{0,\alpha}(\overline{\Omega}_\frac12)$ and $v \in C_{l}^{2,\alpha}(\overline{\Omega}_\frac12)$ satisfy $$\label{E.regularity} \widetilde{L}_a v = f, \quad \textup{in }\Omega_\frac12\,,\qquad v=0\quad \textup{on }\partial\Omega_\frac12\,.$$ If $f \in {\mathcal Y}$, then $v \in {\mathcal X}$.* *Proof.* Taking into account the definition of ${\mathcal X}$, we only need to prove that $f \in {\mathcal Y}$ implies $\partial_R v \in C_{l}^{2,\alpha}(\overline{\Omega}_\frac12)$. 
Setting $w:= \partial_R v \in C_{l}^{1,\alpha}(\overline{\Omega}_\frac12)$ and differentiating [\[E.regularity\]](#E.regularity){reference-type="eqref" reference="E.regularity"}, we find that $w$ satisfies $$\label{E.regularity.w} \widetilde{L}_a w = \partial_R f + \frac{1}{4(1-a)^2} \bigg[ \frac{1}{(R-\frac12+\frac{a}{2(1-a)})^2}\, w + \frac{2}{(R-\frac12+\frac{a}{2(1-a)})^3}\, \partial_\theta^2 v \bigg], \quad \textup{ in } \Omega_\frac12\,,$$ in the distributional sense. Note that the right hand side is in $C^{0,\alpha}(\overline{\Omega}_\frac12)$ when $f\in{\mathcal Y}$. Isolating $\partial_R^2 v|_{\partial\Omega_{\frac12}}$ in Equation [\[E.regularity\]](#E.regularity){reference-type="eqref" reference="E.regularity"} and using that $v|_{\partial\Omega_{\frac12}}=0$, we easily see that $$\label{E.regularity.w.boundary} \partial_R w\, \big|_{\partial\Omega_\frac12} = \bigg( 4(1-a)^2 f - \frac{1}{R-\frac12+\frac{a}{2(1-a)}} \partial_R v \bigg)\bigg|_{\partial\Omega_\frac12}\,.$$ Decomposing ${\mathcal Y}\ni f = f_1 + f_2$ with $f_1 \in C_{l}^{1,\alpha}(\overline{\Omega}_\frac12)$ and $f_2 \in {\mathcal X}_{\mathrm{D}}^{0,\alpha}$, we then see that $f|_{\partial\Omega_\frac12} = f_1|_{\partial\Omega_\frac12} \in C^{1,\alpha}(\partial \Omega_\frac12)$, so $\partial_R w|_{\partial\Omega_{\frac12}} \in C^{1,\alpha}(\partial\Omega_\frac12)$. Standard elliptic regularity theory applied to the Neumann problem [\[E.regularity.w\]](#E.regularity.w){reference-type="eqref" reference="E.regularity.w"}-[\[E.regularity.w.boundary\]](#E.regularity.w.boundary){reference-type="eqref" reference="E.regularity.w.boundary"} then shows that $\partial_R v = w \in C_{l}^{2,\alpha}(\overline{\Omega}_\frac12)$, so the result follows. ◻ In the following lemmas, we shall show that $G_a:\mathcal O\to {\mathcal Y}$ satisfies the assumptions of the Crandall--Rabinowitz theorem. The first result we need is the following: **Lemma 13**. *For all $a \in (0,1)$ and all $l \in \mathbb{N}$, $L_a: {\mathcal X}\to {\mathcal Y}$ is a Fredholm operator of index zero.* *Proof.* Since the property of being a Fredholm operator and the Fredholm index of an operator are preserved by compact perturbations, and since the embedding ${\mathcal X}\hookrightarrow {\mathcal Y}$ is compact, it suffices to show that $\widetilde{L}_a= L_a-\mu_{0,2}(a)$ defines a Fredholm operator ${\mathcal X}\to {\mathcal Y}$ of index zero. To prove this, we shall prove the stronger statement that $\widetilde{L}_a$ is a topological isomorphism. Since $\widetilde{L}_a$ is a linear continuous map, by the open mapping theorem, it suffices to show that $\widetilde{L}_a: {\mathcal X}\to {\mathcal Y}$ is a bijective map. We first prove the injectivity. Let $v \in {\mathcal X}$ be such that $\widetilde{L}_a v = 0$ in $\Omega_\frac12$. Defining $w:= v \circ \Phi_a^{-1}$, we get that $$\Delta w = 0 \quad \textup{in } \Omega_a\,, \qquad w = 0 \quad \textup{on } \partial\Omega_a\,.$$ Hence, it follows that $w\equiv 0$. This implies that $v\equiv0$ and the injectivity of $\widetilde{L}_a$ follows. We now prove that $\widetilde{L}_a$ is onto. In other words, given $f \in {\mathcal Y}$, we show that there exists $v \in {\mathcal X}$ solving $$\label{E.surjectivity} \widetilde{L}_a v = f \quad \textup{in } \Omega_\frac12\,, \qquad v = 0 \quad \textup{on } \partial\Omega_\frac12\,.$$ Observe that $f \in C^{0,\alpha}_l(\overline{\Omega}_{\frac 1 2})$. 
Equation [\[E.surjectivity\]](#E.surjectivity){reference-type="eqref" reference="E.surjectivity"} is equivalent to $$\label{E.surjectivity2} \Delta w = g \quad \textup{in } \Omega_a\,, \qquad w = 0 \quad \textup{on } \partial\Omega_a\,,$$ where $w:= v \circ \Phi_a^{-1}$ and $g := f \circ \Phi_a^{-1}$. The existence of a unique $C^{2,\alpha}$ solution to [\[E.surjectivity2\]](#E.surjectivity2){reference-type="eqref" reference="E.surjectivity2"} is known. Hence, we obtain a solution $v \in C^{2,\alpha}_l(\overline{\Omega}_{\frac 1 2})$ to [\[E.surjectivity\]](#E.surjectivity){reference-type="eqref" reference="E.surjectivity"}. By Lemma [Lemma 12](#L.regularity){reference-type="ref" reference="L.regularity"}, it follows that $v \in {\mathcal X}$ and hence that $\widetilde{L}_a$ is onto. ◻ We are now ready to present the bifurcation argument: **Proposition 14**. *Let $l \geqslant 4$ such that [\[NR\]](#NR){reference-type="eqref" reference="NR"} and [\[T\]](#T){reference-type="eqref" reference="T"} are satisfied (see Propositions [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"}, [Proposition 6](#P.cross2){reference-type="ref" reference="P.cross2"}). Take $a_l \in (0,1)$ as in Proposition [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"} and let $\overline{\varphi}_{a_l} \in C^{2,\alpha}([\tfrac12,1])$ be as in [\[E.eigenvaluel.fixed.annulus\]](#E.eigenvaluel.fixed.annulus){reference-type="eqref" reference="E.eigenvaluel.fixed.annulus"}. Then:* 1. *The kernel of $L_{a_l}$ is one-dimensional. Furthermore, $$\label{E.Kernel} \operatorname{Ker}(L_{a_l}) = {\rm span}\{\overline{v}\}, \quad \textup{ with } \quad \,\overline{v}(R,\theta):= \overline{\varphi}_{a_l}(R) \cos(l\theta)\,.$$* 2. *The image of $L_{a_l}$ is given by $${\rm{Im}}(L_{a_l}) = \left\{ w \in \mathcal{Y}: \int_{\Omega_{\frac12}} \overline{v}(R,\theta) w(R,\theta) \Big(R - \frac12 +\frac{a_l}{2(1-a_l)}\Big) dR\, d\theta= 0 \right\}\,.$$* 3. *$L_{a_l}$ satisfies the transversality property, that is $$\frac{{\rm d}}{{\rm d}a} L_a \Big|_{a = a_l} [\,\overline{v}\,] \not\in {\rm Im}(L_{a_l})\,.$$* *Proof.* (i) Let $w \in \operatorname{Ker}(L_{a_l})$ and define $z:= w \circ \Phi_{a_l}^{-1}$. Then, $z$ is an $l$-symmetric solution to the problem $$\Delta z + \mu_{0,2}(a_l) z =0 \quad \textup{in } \Omega_{a_l}\,, \qquad z = 0 \quad \textup{on } \partial\Omega_{a_l}\,.$$ By the analysis made in Section 2 and the non-resonance condition [\[NR\]](#NR){reference-type="eqref" reference="NR"}, we know that $z(r,\theta) = t \varphi_{l,0}(r) \cos (l \theta)$, with $t \in \mathbb{R}$. Hence, (i) follows. \(ii\) By Lemma [Lemma 13](#L.Fredholm){reference-type="ref" reference="L.Fredholm"} we know that $L_{a_l}$ is a Fredholm operator of index zero. Thus, it is enough to prove that $${\rm{Im}}(L_{a_l}) \subset \left\{ w \in \mathcal{Y}: \int_{\Omega_{\frac12}} \overline{v}(R,\theta) w(R,\theta) \Big(R - \frac12 +\frac{a_l}{2(1-a_l)}\Big)\, dR \,d\theta= 0 \right\}\,.$$ Let $w$ be in the image of $L_{a_l}$, that is, $w = L_{a_l}u$ for some $u \in \mathcal{X}$. Since $\overline{v} \in \operatorname{Ker}(L_{a_l})$, integrating by parts, we obtain that $$\begin{aligned} \int_{\Omega_{\frac12}} \overline{v}(R,\theta) w(R,\theta) \Big(R - \frac12 +\frac{a_l}{2(1-a_l)}\Big) dR\, d\theta= \int_{\Omega_{\frac12}} L_{a_l}\overline{v}(R,\theta) u(R,\theta) \Big(R - \frac12 +\frac{a_l}{2(1-a_l)}\Big) dR\, d\theta= 0\,, \end{aligned}$$ and the desired inclusion follows. 
\(iii\) For all $a \in (0,1)$, we set $\overline{v}_a(R,\theta):= \overline{\varphi}_a(R) \cos(l\theta) \in \mathcal{X}$, with $\overline{\varphi}_a$ as in [\[E.eigenvaluel.fixed.annulus\]](#E.eigenvaluel.fixed.annulus){reference-type="eqref" reference="E.eigenvaluel.fixed.annulus"}, and $\overline{w} := \tfrac{{\rm d}}{{\rm d}a} \overline{v}_a|_{a = a_l} \in \mathcal{X}$. Then, taking into account [\[E.ODE.Dirichlet\]](#E.ODE.Dirichlet){reference-type="eqref" reference="E.ODE.Dirichlet"}, we see that $$L_a \overline{v}_a = (\mu_{0,2}(a) - \lambda_{l,0}(a)) \overline{v}_a\,, \quad \textup{ in } \Omega_\frac12\,.$$ Moreover, differentiating this identity with respect to $a$ and evaluating at $a = a_l$, we get that $$\frac{{\rm d}}{{\rm d}a} L_a \Big|_{a = a_l} [\,\overline{v}\,] + L_{a_l}\overline{w} = (\mu_{0,2}'(a_l) - \lambda_{l,0}'(a_l))\overline{v}\,, \quad \textup{ in } \Omega_\frac12\,.$$ By Proposition [Proposition 6](#P.cross2){reference-type="ref" reference="P.cross2"}, we know that $\mu_{0,2}'(a_l) - \lambda_{l,0}'(a_l) \neq 0$, and by (ii) we have that $\overline{v} \not\in {\rm Im}(L_{a_l})$, since $\int_{\Omega_{\frac12}} \overline{v}(R,\theta)^2 \big(R - \frac12 +\frac{a_l}{2(1-a_l)}\big)\, dR\, d\theta> 0$. As $L_{a_l}\overline{w} \in {\rm Im}(L_{a_l})$, it follows that $\tfrac{{\rm d}}{{\rm d}a} L_a \big|_{a = a_l}[\,\overline{v}\,] \not\in {\rm Im}(L_{a_l})$, so we conclude that $L_{a_l}$ satisfies the transversality property. ◻ We are now ready to prove Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. Using the notation we have introduced in previous sections, here we provide a more precise statement of this result. Let us recall that the domains of the form $\Omega_a^{b,B}$ were introduced in [\[E.defOm\]](#E.defOm){reference-type="eqref" reference="E.defOm"}. **Theorem 15**. *Let $l \geqslant 4$ be such that [\[NR\]](#NR){reference-type="eqref" reference="NR"} and [\[T\]](#T){reference-type="eqref" reference="T"} are satisfied. This condition holds, in particular, for $l=4$ and for all large enough $l$. Given $a_l \in (0,1)$ as in Proposition [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"}, there exist some $s_l > 0$ and a continuously differentiable curve $$\big\{(a(s), b_s, B_s): s \in (-s_l ,s_l ),\ (a(0), b_0, B_0) = (a_l,0,0) \big\} \subset (0,1) \times C_l^{2,\alpha}(\mathbb{T}) \times C_l^{2,\alpha}(\mathbb{T}) \,,$$ with $$\begin{aligned} b_s(\theta) &= -2s(1-a(s)) \, \frac{\overline{\varphi}'_{a_l}(\tfrac12)}{\overline{\psi}''_{a(s)}(\tfrac12)}\, \cos(l\theta) + o(s)\,, \\ B_s(\theta) &= -2s(1-a(s)) \, \frac{\overline{\varphi}'_{a_l}(1)}{\overline{\psi}''_{a(s)}(1)}\, \cos(l\theta) + o(s)\,,\end{aligned}$$ such that the overdetermined problem $$\Delta u_s+\mu_{0,2}(a(s)) u_s=0\quad\text{in }\Omega_{a(s)}^{b_s,B_s}\,,\qquad \nabla u_s=0\quad\text{on }\partial\Omega_{a(s)}^{b_s,B_s}\,,$$ admits a nonconstant solution for every $s \in (-s_l ,s_l )$. Moreover, both the solution $u_s$ and the boundary of the domain are analytic.* *Proof.* First, note that the fact that the conditions [\[NR\]](#NR){reference-type="eqref" reference="NR"} and [\[T\]](#T){reference-type="eqref" reference="T"} hold for all large enough $l$ was established in Propositions [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"} and [Proposition 6](#P.cross2){reference-type="ref" reference="P.cross2"}, while the case $l=4$ follows from Propositions [Proposition 5](#P.cross1){reference-type="ref" reference="P.cross1"} and [Proposition 19](#l=4){reference-type="ref" reference="l=4"}. 
By Proposition [Proposition 14](#P.CR){reference-type="ref" reference="P.CR"}, together with Lemmas 11 and 13, we know that the map $$\big(0,1\big) \times \mathcal{O} \to {\mathcal Y}\,, \qquad (a,v) \mapsto G_a(v)\,,$$ satisfies the hypotheses of the Crandall--Rabinowitz theorem. Therefore, there exists a nontrivial continuously differentiable curve through $(a_l,0)$, $$\big\{(a(s), v_s): s \in (-s_l ,s_l ),\ (a(0), v_0) = (a_l,0) \big\}\subset (0,1)\times \mathcal O\,,$$ such that $$G_{a(s)}(v_s) = 0\,, \quad \textup{for} \quad s \in (-s_l ,s_l )\,.$$ Moreover, for $\overline{v}$ as in [\[E.Kernel\]](#E.Kernel){reference-type="eqref" reference="E.Kernel"}, it follows that $$\label{E.expansionvs} v_s = s \, \overline{v} + o(s) \quad \textup{in } {\mathcal X}\quad \textup{as} \quad s \to 0\,.$$ As we saw in [\[E.bvBvuv\]](#E.bvBvuv){reference-type="eqref" reference="E.bvBvuv"}-[\[E.Ga\]](#E.Ga){reference-type="eqref" reference="E.Ga"}, for all $s \in (-s_l ,s_l )$, the function $$\widetilde{ w }_s = \overline{\psi}_{a(s)} + w _{v_s}$$ is then a nontrivial solution to [\[E.problem.fixed.annulus\]](#E.problem.fixed.annulus){reference-type="eqref" reference="E.problem.fixed.annulus"} with $b := b_{v_s}$, $B := B_{v_s}$ and $a := a(s)$. More explicitly, $$\widetilde{ w }_s(R,\theta) = \overline{\psi}_{a(s)}(R) + v_s(R,\theta) + \frac{\overline{\psi}_{a(s)}'(R)}{2(1-a(s))} \Big[2(1-R)b_{s}(\theta) + (2R-1)B_{s}(\theta) \Big]\,,$$ with $$\label{E.bvsBvs} \begin{aligned} b_{s}(\theta) & := b_{v_s}(\theta) = -2(1-a(s))(\overline{\psi}_{a(s)}''(\tfrac12))^{-1} \partial_R v_s\big(\tfrac12,\theta\big)\,,\\ B_{s}(\theta) & := B_{v_s}(\theta) = -2(1-a(s))(\overline{\psi}_{a(s)}''(1) )^{-1} \partial_R v_s(1,\theta)\,. \end{aligned}$$ Furthermore, combining [\[E.Kernel\]](#E.Kernel){reference-type="eqref" reference="E.Kernel"} with [\[E.expansionvs\]](#E.expansionvs){reference-type="eqref" reference="E.expansionvs"}, we get $$\begin{aligned} b_s(\theta) &= -2s(1-a(s)) \, \frac{\overline{\varphi}'_{a_l}(\tfrac12)}{\overline{\psi}''_{a(s)}(\tfrac12)}\, \cos(l\theta) + o(s)\,,\\[1mm] B_s(\theta) &= -2s(1-a(s)) \, \frac{\overline{\varphi}'_{a_l}(1)}{\overline{\psi}''_{a(s)}(1)}\, \cos(l\theta) + o(s)\,.\end{aligned}$$ This immediately gives the formula presented in the statement of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. To change back to the original variables $(r,\theta) \in \Omega_{a(s)}^{b_s,B_s}$, we simply define $$u_s := \widetilde{ w }_s \circ \big(\Phi_{a(s)}^{b_s,B_s}\big)^{-1}$$ and conclude that, for all $s \in (-s_l ,s_l )$, $u_s \in C^{2,\alpha}(\overline{\Omega}_{a(s)}^{\,b_{v_s}, B_{v_s}})$ solves $$\Delta u_s+\mu_{0,2}(a(s)) u_s=0\quad\text{in }\Omega_{a(s)}^{b_s,B_s}\,,\qquad \nabla u_s=0\quad\text{on }\partial\Omega_{a(s)}^{b_s,B_s}\,.$$ The analyticity of both the domains $\Omega_{a(s)}^{b_s,B_s}$ and the solutions $u_s$ for all $s \in (-s_l ,s_l )$ follows from Kinderlehrer--Nirenberg [@KinderlehrerNirenberg]. ◻ # Applications of the main theorem {#S.corollaries} In this section we present the proofs of the two applications of the main theorem that we discussed in the Introduction: *Proof of Corollary [Corollary 2](#C.Pompeiu){reference-type="ref" reference="C.Pompeiu"}.* Let $\Omega$ be a domain as in Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. 
Therefore, the boundary $\partial\Omega$ (which is analytic) consists of two connected components $\Gamma_1,\Gamma_2$, and there exist two nonzero constants $c_1,c_2$ and a nonconstant Neumann eigenfunction $u$, satisfying $$\Delta u+\mu u=0$$ in $\Omega$, such that $u|_{\Gamma_j}=c_j$ with $j=1,2$. The complement $\mathbb{R}^2\backslash\overline\Omega$ has a bounded connected component, which we will henceforth call $\Omega'$, and an unbounded component, which is then $\mathbb{R}^2\backslash\overline{(\Omega\cup \Omega')}$. Relabeling the curves $\Gamma_j$ if necessary, we can assume that $\partial\Omega'=\Gamma_2$. Now let $f$ be any function such that $$\Delta f + \mu f=0$$ on $\mathbb{R}^2$, for instance $f(x)=\sin(\mu^{1/2} x_1)$, and set $$\label{E.defc} c:= \frac{c_2}{c_1}-1\,.$$ As the local maxima of a Bessel function are decreasing, it is easy to see that $c>0$. We claim that, for any rigid motion ${\mathcal R}$, $$\label{E.Pompclaim} 0=\int_{{\mathcal R}(\Omega)}f\, dx - c\int_{{\mathcal R}(\Omega')}f\, dx=\int_{\mathbb{R}^2} f\circ{\mathcal R}^{-1}\, \rho\, dx\,,$$ where we have introduced the function $$\rho(x):= \mathbbm 1_\Omega(x) - c\,\mathbbm 1_{\Omega'}(x)\,.$$ Integrating by parts, one readily finds that the compactly supported $C^{1,1}$ function $$w(x):=\begin{cases} 0 & \text{if } x\in\mathbb{R}^2\backslash\overline{(\Omega\cup\Omega')}\,,\\[1mm] (1-\frac {u(x)}{c_1})/\mu& \text{if } x\in\overline{\Omega}\,,\\[1mm] -c/\mu & \text{if } x\in \Omega'\,, \end{cases}$$ satisfies $$\int_{\mathbb{R}^2} w\, (\Delta F+ \mu F)\, dx= \int_{\mathbb{R}^2} F\rho\, dx\,,$$ for any $F\in C^\infty(\mathbb{R}^2)$[^3]. As rigid motions commute with the Laplacian, $$\int_{\mathbb{R}^2} f\circ{\mathcal R}^{-1}\, \rho\, dx= \int_{\mathbb{R}^2} [\Delta(f\circ{\mathcal R}^{-1})+ \mu f\circ{\mathcal R}^{-1}]\,w\, dx= \int_{\mathbb{R}^2} (\Delta f+\mu f)\circ{\mathcal R}^{-1}\, w\, dx=0\,.$$ The identity [\[E.Pompclaim\]](#E.Pompclaim){reference-type="eqref" reference="E.Pompclaim"}, and therefore Corollary [Corollary 2](#C.Pompeiu){reference-type="ref" reference="C.Pompeiu"}, follow. ◻

Before passing to the proof of Theorem [Theorem 3](#T.Euler){reference-type="ref" reference="T.Euler"}, let us recall that $(v,p)\in L^2_{\mathrm{loc}}(\mathbb{R}^2,\mathbb{R}^2)\times L^1_{\mathrm{loc}}(\mathbb{R}^2)$ is a *stationary weak solution* of the incompressible Euler equations on $\mathbb{R}^2$ if $$\int_{\mathbb{R}^2} v\cdot \nabla\phi\, dx=\int_{\mathbb{R}^2} (v_i\,v_j\, \partial_i w_j+ p\mathop{\mathrm{div}}w)\, dx=0\,,$$ for all $\phi\in C^\infty_c(\mathbb{R}^2)$ and all $w\in C^\infty_c(\mathbb{R}^2,\mathbb{R}^2)$. The first condition says that $\mathop{\mathrm{div}}v=0$ in the sense of distributions, while the second condition (where summation over repeated indices is understood) is equivalent to the equation $$v\cdot \nabla v+\nabla p=0\,,$$ when the divergence-free field $v$ and the function $p$ are continuously differentiable.

*Proof of Theorem [Theorem 3](#T.Euler){reference-type="ref" reference="T.Euler"}.* Take $\Omega$, $u$ and $\mu$ as in Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"}. We will also use the domain $\Omega'$ defined in the proof of Corollary [Corollary 2](#C.Pompeiu){reference-type="ref" reference="C.Pompeiu"}.
Since $\nabla u|_{\partial\Omega}=0$, the vector field defined in Cartesian coordinates as $$v:=\begin{cases} (\frac{\partial u}{\partial x_2},-\frac{\partial u}{\partial x_1}) &\text{in } \Omega\,,\\[1mm] 0 & \text{in }\mathbb{R}^2\backslash\Omega\,, \end{cases}$$ is continuous on $\mathbb{R}^2$, supported on $\overline\Omega$ and analytic in $\Omega$. It is also divergence-free in the sense of distributions, since $$\int_{\mathbb{R}^2} v\cdot\nabla\phi\, dx=\int_{\Omega} v\cdot\nabla\phi\, dx=\int_{\Omega} \phi\, \mathop{\mathrm{div}}v\, dx+ \int_{\partial\Omega} \phi\, v\cdot\nu \,dS= 0$$ for all $\phi\in C^\infty_c(\mathbb{R}^2)$. Here we have used that, by definition, $\mathop{\mathrm{div}}v=0$ in $\Omega$ and $v|_{\partial\Omega}=0$. Now consider the continuous function $$p:=\begin{cases} 0 & \text{on } \mathbb{R}^2\backslash\overline{(\Omega\cup\Omega')},\\[1mm] -\frac12 (|\nabla u|^2+\mu u^2-\mu c_1^2) &\text{on } \overline\Omega\,,\\[1mm] -\frac{\mu}2 (c_2^2-c_1^2) & \text{on }\Omega'\,. \end{cases}$$ As an elementary computation using only the definition of $(v,p)$ and the eigenvalue equation $\Delta u+ \mu u=0$ shows that $$v\cdot \nabla v+\nabla p=0 \quad \textup{in } \Omega$$ (indeed, writing $v=\nabla^{\perp}u$ one checks that $v\cdot \nabla v=\tfrac12\nabla|\nabla u|^2+\mu u\,\nabla u=-\nabla p$ in $\Omega$), one can integrate by parts to see that $$\int_{\mathbb{R}^2} (v_i\,v_j\, \partial_i w_j+ p\mathop{\mathrm{div}}w)\, dx= \int_{\Omega} (v_i\,v_j\, \partial_i w_j+ p\mathop{\mathrm{div}}w)\, dx= - \int_{\Omega} (v\cdot \nabla v_j+\partial_j p)\,w_j\, dx =0\,,$$ for all $w\in C^\infty_c(\mathbb{R}^2,\mathbb{R}^2)$. Thus $(v,p)$ is a stationary weak solution of 2D Euler, and the theorem follows. ◻

# Rigidity results {#A.Rigidity}

In this section we prove two rigidity results highlighting that, as discussed in the Introduction, the question we analyze in this paper shares many features with the celebrated Schiffer conjecture. The first is a straightforward generalization of a classical theorem due to Berenstein [@Berenstein] (which corresponds to the case $J=1$ in the notation we use below):

**Theorem 16**. *Suppose that there is an infinite sequence of orthogonal Neumann eigenfunctions of a smooth bounded domain $\Omega\subset\mathbb{R}^2$ which are locally constant on the boundary. Then $\Omega$ is either a disk or an annulus.*

*Proof.* By hypothesis, there is an infinite sequence of distinct eigenvalues $\mu_n$ and corresponding Neumann eigenfunctions $u_n$ which are locally constant on the boundary. That is, $u_n|_{\Gamma_j}=c^j_n$, where $\Gamma_j$, $1\leqslant j\leqslant J$ denote the connected components of $\partial\Omega$. We can assume $J\geqslant 2$, as the case $J=1$ is proved in [@Berenstein]. Also, note that $\partial\Omega$ is analytic by the results of Kinderlehrer and Nirenberg [@KinderlehrerNirenberg]. Let us call $\Omega_1,\dots,\Omega_{J-1}$ the bounded connected components of $\mathbb{R}^2\backslash\overline\Omega$ and relabel the boundary components if necessary so that $\partial\Omega_j=\Gamma_j$. For convenience, we set $\Omega_J:=\Omega$.
A minor variation on the proof of Corollary [Corollary 2](#C.Pompeiu){reference-type="ref" reference="C.Pompeiu"} shows that there exists a function $w_n\in C^1_c(\mathbb{R}^2)$, equal to $a_n u_n+b_n$ in $\overline\Omega$ for some $a_n,b_n\in\mathbb{R}$ and constant in $\Omega_j$ for $1\leqslant j\leqslant J-1$, which satisfies the equation $$\Delta w_n+\mu_n w_n=\sum_{j=1}^J \gamma_{n,j} \mathbbm 1_{\Omega_j}=:\rho_n$$ in the sense of compactly supported distributions (that is, $\int_{\mathbb{R}^2} w_n (\Delta F+ \mu_n F)\, dx= \int_{\mathbb{R}^2} \rho_nF\, dx$ for all $F\in C_c^\infty(\mathbb{R}^2)$). Here $\gamma_{n,j}$ are real constants with $\gamma_{n,J}=1$. Again as in the proof of Corollary [Corollary 2](#C.Pompeiu){reference-type="ref" reference="C.Pompeiu"}, if we take $f_n(x):=\sin(\mu_n^{1/2}x_1)$, we infer that $$\int_{\mathbb{R}^2} f_n\circ{\mathcal R}^{-1}\, \rho_n\, dx=0$$ for any rigid motion ${\mathcal R}$. A theorem of Brown, Schreiber and Taylor [@L.BrownSchreiberTaylor Theorem 4.1] then ensures that the Fourier--Laplace transform of $\rho_n$, defined as $$\widehat\rho_n(\zeta):=\int_{\mathbb{R}^2}e^{-i\zeta\cdot x}\, \rho_n(x)\, dx$$ for $\zeta\in\mathbb{C}^2$, must vanish identically on the set $${\mathcal C}_n:=\{\zeta\in\mathbb{C}^2: \zeta_1^2+\zeta_2^2=\mu_n\}\,.$$ Note that, although the theorem in [@L.BrownSchreiberTaylor] is stated for indicator functions of bounded sets, the proof works for any compactly supported distribution. Following Berenstein [@Berenstein], let us now use complex variables $z:=x_1+ i x_2$, $\bar z:=x_1-i x_2$. The same argument as in the proof of [@Berenstein Proposition 2] shows that $\widehat\rho_n$ can only vanish identically on ${\mathcal C}_n$ if the Fourier--Laplace transform of its derivative $\sigma_n:=\partial_{\bar z}\rho_n$ does. The $\bar z$-derivative of an indicator function is explicitly computed in [@Berenstein] using the Cauchy integral formula. Specifically, as $\Omega_j$ is simply connected for $j\leqslant J-1$, we have $$\partial_{\bar z} \mathbbm 1_{\Omega_j}=\frac i2 \mathbbm 1_{\Gamma_j}\,,$$ while the orientation of the various boundary components implies that $$\partial_{\bar z} \mathbbm 1_{\Omega_J}=\frac i2 \left(\mathbbm 1_{\Gamma_J}-\sum_{j=1}^{J-1} \mathbbm 1_{\Gamma_j}\right)\,.$$ Therefore, there are some real constants $\sigma_{n,j}$ such that $$\label{E.hsi} \widehat\sigma_n=\frac i2\sum_{j=1}^J\sigma_{n,j}\widehat{\mathbbm 1_{\Gamma_j}}\,.$$ By the pigeonhole principle, multiplying $\rho_n$ by a nonzero constant if necessary, we can take some positive integer $j_0\leqslant J$ and a subsequence of the eigenvalues $\mu_n$ (which we will still denote by $\mu_n$ for convenience) such that $$\label{E.normalsi} 1=\sigma_{n,j_0}\geqslant\max_{ j\in\{1,\dots, J\}\backslash\{j_0\}}|\sigma_{n,j}|\,.$$ The key feature of the expression [\[E.hsi\]](#E.hsi){reference-type="eqref" reference="E.hsi"} is that the constants $\sigma_{n,j}$ depend on $n$ but the indicator functions do not. The proof of [@Berenstein Proposition 2] then applies here essentially verbatim, and shows (via [@Berenstein Equation (43)]) that there must be a point $p_1\in \partial\Omega_{j_0}\subset\partial\Omega$ which is connected to another boundary point $p_2\in \partial\Omega$ by a circular arc contained in $\partial\Omega$. As $\partial\Omega$ is analytic, we infer that at least one connected component (denoted by $\Gamma_k$) of $\partial\Omega$ must be a circle. 
Consider now a neighborhood $U$ of $\Gamma_k$ in $\Omega$ in the form of a thin annulus. Take an isometry $g$ of $U$ and define $\tilde{u}_1= u_1 \circ g$. By the unique continuation principle, $\tilde{u}_1 = u_1$. Since the isometry $g$ is arbitrary, we conclude that $u_1$ is radially symmetric (with respect to the center of $\Gamma_k$) in $U$. By analyticity, $u_1$ is radially symmetric on its whole domain. Now recall that $u_1$ is locally constant on $\partial \Omega$, which implies that the curves $\Gamma_j$ are all concentric circles. Since $\Omega$ is connected, we conclude that $\Omega$ is either a disk or an annulus. ◻

The second rigidity result we want to show is analogous to [@aviles; @Deng] and ensures that if the eigenfunction corresponds to a sufficiently low eigenvalue, no nontrivial domains exist. It is interesting to stress that in our main theorem the nontrivial domains with Neumann eigenfunctions that are locally constant on the boundary correspond to the 18th Neumann eigenvalue of the domain in the case $l=4$, as shown in Proposition [Proposition 19](#l=4){reference-type="ref" reference="l=4"} in the Appendix. Let us denote by $\mu_k(\Omega)$ (respectively, $\lambda_{k}(\Omega)$) the $k$-th Neumann (respectively, Dirichlet) eigenvalue of a planar domain $\Omega$, counted with multiplicity and with $k=0,1,2,\dots$. We have the following:

**Theorem 17**. *Let $\Omega \subset \mathbb{R}^2$ be a smooth bounded domain such that the problem $$\Delta u + \mu u = 0\quad \text{in } \Omega\,, \qquad \nabla u = 0 \quad \text{on } \partial\Omega\,,$$ admits a nonconstant solution. Then:*

1. *If $\mu \leqslant\mu_4(\Omega)$, $\Omega$ is either a disk or an annulus.*

2. *If $\partial \Omega$ has exactly two connected components and $\mu \leqslant\mu_5(\Omega)$, $\Omega$ is an annulus.*

*Proof.* Let us start with item (i). Consider the functions $$\phi_1:= \frac{\partial u}{ \partial x}\,, \qquad \phi_2:= \frac{\partial u}{ \partial y}\,, \qquad \phi_3:= x \frac{\partial u}{ \partial y} - y \frac{\partial u}{ \partial x}\,,$$ and observe that $\phi_1$ and $\phi_2$ correspond to the infinitesimal generators of translations, whereas $\phi_3$ is related to rotations. Let us also define $$E := \mbox{span } \big\{ \phi_1, \phi_2, \phi_3 \big\}\,.$$ Since the differential operators defining the $\phi_i$ commute with the Laplacian and $\nabla u=0$ on the boundary, the functions $\phi_i$ are Dirichlet eigenfunctions of the Laplacian with eigenvalue $\mu$. If $\phi_1$ and $\phi_2$ were proportional, then $u$ would be constant along a family of parallel lines. Taking into account that $\Omega$ is bounded, we would get that $u$ is constant, contradicting our hypotheses. As a consequence, the multiplicity of $\mu$ as a Dirichlet eigenvalue is at least $2$, which implies that $\mu = \lambda_k(\Omega)$ with $k \geqslant 1$. At this point, we consider separately two different cases:

1. *The dimension of $E$ is three.* In this case, $\mu$ has at least multiplicity $3$ as a Dirichlet eigenvalue. That is, for some $k \geqslant 1$, $$\mu = \lambda_k(\Omega)= \lambda_{k+1}(\Omega)= \lambda_{k+2}(\Omega)\,.$$ We now make use of [@Fi; @Fri] to get $\lambda_{k+2}(\Omega) > \mu_{k+3}(\Omega)$, which is in contradiction with $\mu \leqslant\mu_4(\Omega)$.

2. *The dimension of $E$ is two.* That is, the functions $\{\phi_i\}_{i=1}^3$ are linearly dependent.
In this case, there exist $x_0$, $y_0 \in \mathbb{R}$ such that $$(x-x_0) \frac{\partial u}{ \partial y} - (y-y_0) \frac{\partial u}{ \partial x}=0\,.$$ This implies that $u$ is radially symmetric with respect to the point $(x_0, y_0)$. Since $\Omega$ is connected and $u$ is constant on $\partial \Omega$, we conclude that $\Omega$ is either a disk or an annulus.

Let us now move on to item (ii). The proof considers two different cases:

### Case 1: The function $u$ has no critical points in $\Omega$. {#case-1-the-function-u-has-no-critical-points-in-omega. .unnumbered}

Indeed, if $u$ has no critical points, its global minimum and maximum are attained at the boundary. Assume for concreteness that $$u=M = \max u\ \mbox{ on } \Gamma_1\,, \quad u= m = \min u\ \mbox{ on } \Gamma_2\,,$$ where $\Gamma_1$ and $\Gamma_2$ are the inner and outer connected components of $\partial \Omega$, respectively. We define the function $v:= u-m \geqslant 0$, which satisfies $$\Delta v + \mu (v+m) = 0 \quad\textup{in } \Omega\,, \qquad \nabla v = 0 \quad \textup{on } \partial \Omega\,,$$ with $v = 0$ on $\Gamma_2$ and $v = M-m$ on $\Gamma_1$. A result of Reichel [@Reichel] then shows that $\Omega$ is an annulus.

### Case 2: The function $u$ has at least one critical point $p \in \Omega$. {#case-2-the-function-u-has-at-least-a-critical-point-p-in-omega. .unnumbered}

We argue as in the proof of (i). If the dimension of $E$ is two, we are done, so we assume that the dimension of $E$ is three. Reasoning as in the proof of (i), we obtain that $\mu > \mu_{k+3}(\Omega)$. Hence, if we show that $k \geqslant 2$, we are done. In other words, we need to prove that $\mu \neq \lambda_1(\Omega)$. Observe that $\phi_i(p)=0$ for any $i=1, 2, 3$. Let us then consider the equation $$\nabla \phi(p)=0, \qquad \phi \in E\,.$$ This is a linear system of two equations posed in a vector space of dimension three, so it admits a nonzero solution $\phi_0 \in E$. As a consequence, it follows that $$\label{nodal} \phi_0(p)=0\,, \quad \nabla \phi_0(p)=0\,.$$ We now claim that this is impossible for $\mu= \lambda_1(\Omega)$. This fact is surely known, but we have not been able to find an explicit reference in the literature. The closest result that we have found is [@cheng Theorem 3.2, Corollary 3.5], in the setting of closed surfaces. However, the argument there cannot be directly translated to the setting of domains with boundary. For the sake of completeness, we give a short proof of this claim. First of all, we define $$N:= \{x \in \Omega: \ \phi_0(x)=0\}\,, \qquad \Omega^+:= \{x \in \Omega:\ \phi_0(x)>0\}, \qquad \Omega^-:= \{x \in \Omega:\ \phi_0(x)<0\}\,.$$ Note that if $\mu= \lambda_1(\Omega)$, the Courant nodal theorem implies that the sets $\Omega^{\pm}$ are connected. On the other hand, it is known (see e.g. [@cheng]) that [\[nodal\]](#nodal){reference-type="eqref" reference="nodal"} implies the existence of $j$ analytic curves in $N$ ($j\geqslant 2$) that intersect transversally at $p$. Moreover, $\phi_0$ must change sign across each curve. Then, if $r$ is sufficiently small, $B_p(r) \setminus N$ has $2j$ connected components $\omega_i^{\pm} \subset \Omega^{\pm}$, $i=1, \dots, j$. We take points $q_i^{\pm}$ in $\omega_i^{\pm}$. Now, choose two points $q_i^+$, $q_{i'}^+$, $i \neq i'$. In the case where $\mu = \lambda_1(\Omega)$, it follows that $\Omega^+$ is connected, so there exists a curve in $\Omega^+$ joining these two points.
We can also take a curve connecting $q_i^+$ to $p$ in $B_p(r) \setminus N$, and similarly for $q_{i'}^+$. Joining those curves, we obtain a closed curve $\gamma^+$ such that $\mbox{supp}\ \gamma^+ \subset \Omega^+ \cup \{p\}$. In the same way, we can build $\gamma^-$ such that $\mbox{supp} \ \gamma^- \subset \Omega^- \cup \{p\}$. Observe that $\gamma^+$ and $\gamma^-$ intersect only at the point $p$. By choosing the departing points appropriately, this intersection is transversal, which is impossible. Hence, if $\mu = \lambda_1(\Omega)$, we have a contradiction, and the claim follows. ◻

*Remark 18*. One can probably refine the arguments to obtain a better estimate on the value of $k$ for which the condition $\mu\leqslant\mu_k(\Omega)$ rules out nontrivial domains, in the spirit of [@aviles; @Deng]. Finding the optimal value of $k$ seems hard.

# Acknowledgements {#acknowledgements .unnumbered}

This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme through the grant agreement 862342 (A. E.). A. E. is also supported by the grant PID2022-136795NB-I00 of the Spanish Science Agency and the ICMAT--Severo Ochoa grant CEX2019-000904-S. D. R. has been supported by: the Grant PID2021-122122NB-I00 of the MICIN/AEI, the *IMAG-Maria de Maeztu* Excellence Grant CEX2020-001105-M funded by MICIN/AEI, and the Research Group FQM-116 funded by J. Andalucia. P. S. has been supported by the Grant PID2020-117868GB-I00 of the MICIN/AEI and the *IMAG-Maria de Maeztu* Excellence Grant CEX2020-001105-M funded by MICIN/AEI.

# The transversality condition for $l=4$ {#A.transversality}

Our aim in this Appendix is the verification of condition [\[T\]](#T){reference-type="eqref" reference="T"} for $l=4$, which will ensure that the statement of Theorem [Theorem 1](#T.main){reference-type="ref" reference="T.main"} (or Theorem [Theorem 15](#T.mainDetails){reference-type="ref" reference="T.mainDetails"}) is valid for the particular case $l=4$. In this case, we can also compute the order of the eigenvalue $\mu$ for which one can find nonradial Neumann eigenfunctions that are locally constant on the boundary. Recall that we denote by $\mu_k(\Omega)$ the $k$-th Neumann eigenvalue (counted with multiplicity) of a planar domain $\Omega$, with $k=0,1,2,\dots$.

**Proposition 19**. *If $l=4$, condition [\[T\]](#T){reference-type="eqref" reference="T"} holds. Moreover, $\mu_{0,2}(a_4)= \mu_{18}(\Omega_{a_4})$.*

*Proof.* Recall that $\mu\equiv \mu_{0,2}(a)$ is the value such that the following equation has a nontrivial solution, corresponding to the second eigenfunction: $$\psi''+\frac{\psi'}r +\mu\, \psi=0\quad \text{in } (a,1)\,,\qquad \psi'(a)=\psi'(1)=0\,.$$ The function $\psi$ is then given by $$\psi(r)= \frac{J_0(\sqrt{\mu}r) Y_1(\sqrt{\mu}) - Y_0(\sqrt{\mu}r) J_1(\sqrt{\mu})}{Y_1(\sqrt{\mu})}\,.$$ Here and in what follows, $Y_k$ denotes the Bessel function of the second kind of order $k$. Observe that $\psi$ changes sign twice in the interval $[a,1]$. Furthermore, $\psi'$ can be easily computed: $$\psi'(r)= -\sqrt{\mu}\, \frac{J_1(\sqrt{\mu}r) Y_1(\sqrt{\mu}) - Y_1(\sqrt{\mu}r) J_1(\sqrt{\mu})}{Y_1(\sqrt{\mu})}\,.$$ Thus, we will define, for any real $\mu$ and $r$, $$\Psi(\mu, r):= \frac{J_1(\sqrt{\mu}r) Y_1(\sqrt{\mu}) - Y_1(\sqrt{\mu}r) J_1(\sqrt{\mu})}{Y_1(\sqrt{\mu})}\,.$$ Note that $\Psi(\mu_{0,2}(a), a) =0$.
The chain rule then yields the following formula for $\mu_{0,2}'(a)$: $$\begin{aligned} \mu'_{0,2}& (a) = - \frac{\partial_r \Psi(\mu,r)}{\partial_{\mu} \Psi(\mu,r)}\Big|_{\mu=\mu_{0,2}(a), \, r=a}\\ &= \frac{\mu^{3/2} \pi Y_1(\sqrt{\mu}) \Big [Y_1(\sqrt{\mu}) (J_2(\sqrt{\mu}a) - J_0(\sqrt{\mu}a)) - J_1(\sqrt{\mu}) (Y_2(\sqrt{\mu}a) - Y_0(\sqrt{\mu}a))\Big]}{2 Y_1(\sqrt{\mu}a) + \pi Y_1(\sqrt{\mu}) \Big [Y_1(\sqrt{\mu})(\sqrt{\mu} a J_0(\sqrt{\mu}a)- J_1(\sqrt{\mu}a)) - J_1(\sqrt{\mu}) (\sqrt{\mu} a Y_0(\sqrt{\mu}a) - Y_1(\sqrt{\mu}a)) \Big ]}\,, \end{aligned}$$ where $\mu\equiv \mu_{0,2}(a)$ in the second line. On the other hand, $\lambda\equiv \lambda_{l,0}(a)$ is the value such that the following equation has a nontrivial solution corresponding to its principal eigenfunction: $$\varphi''+\frac{\varphi'}r-\frac{l^2 \varphi}{r^2}+\lambda\, \varphi=0\quad \text{in } (a,1)\,,\qquad \varphi(a)=\varphi(1)=0\,.$$ The function $\varphi$ is then given by $$\varphi(r)= \frac{J_l(\sqrt{\lambda}r) Y_l(\sqrt{\lambda}) - Y_l(\sqrt{\lambda}r) J_l(\sqrt{\lambda})}{Y_l(\sqrt{\lambda})}\,.$$ As before, we then define, for arbitrary $r$ and $\lambda$, $$\Phi(\lambda,r):= \frac{J_l(\sqrt{\lambda}r) Y_l(\sqrt{\lambda}) - Y_l(\sqrt{\lambda}r) J_l(\sqrt{\lambda})}{Y_l(\sqrt{\lambda})}\,.$$ Again we have that $\Phi(\lambda_{l,0}(a), a)=0$, and hence we can compute $\lambda_{l,0}'(a)$ as: $$\begin{aligned} \lambda'_{l,0}&(a)= - \frac{\partial_r \Phi(\lambda,r)}{\partial_{\lambda} \Phi(\lambda,r)}\Big|_{\lambda=\lambda_{l,0}(a), \, r=a}\\ &= \frac{\lambda^{3/2} \pi Y_l(\sqrt{\lambda}) \Big [Y_l(\sqrt{\lambda}) (J_{l+1}(\sqrt{\lambda}a) - J_{l-1}(\sqrt{\lambda}a)) - J_l(\sqrt{\lambda}) (Y_{l+1}(\sqrt{\lambda}a) - Y_{l-1}(\sqrt{\lambda}a))\Big]}{2 Y_l(\sqrt{\lambda}a) + \pi Y_l(\sqrt{\lambda}) \Big [Y_l(\sqrt{\lambda})(\sqrt{\lambda} a J_{l-1}(\sqrt{\lambda}a)- l J_l(\sqrt{\lambda}a)) - J_l(\sqrt{\lambda}) (\sqrt{\lambda} a Y_{l-1}(\sqrt{\lambda}a) -l Y_l(\sqrt{\lambda}a)) \Big ] }\,.\end{aligned}$$ In the second line, $\lambda\equiv \lambda_{l,0}(a)$. The value $a=a_l$ is obtained by imposing $\mu_{0,2}(a)= \lambda_{l,0}(a)$, that is, by solving the system $$\Psi(\mu, a)=0\,, \qquad \Phi(\lambda, a)=0\,, \qquad \mu = \lambda\,.$$ In the case $l=4$, we have used Mathematica to numerically obtain $$\label{values} a= 0.140989\dots\,, \qquad \lambda= \mu = 57.5851\dots\,.$$ We are not giving a computer-assisted proof of this, but it is worth noting that the fact that these values do correspond to the correct roots of $\psi'$ and $\varphi$ is clear from the plot given in Figure 2; see also the numerical sketch below. Plugging [\[values\]](#values){reference-type="eqref" reference="values"} (together with $l=4$) into the expressions of $\mu_{0,2}'(a)$ and $\lambda_{l,0}'(a)$ we obtain: $$\mu_{0,2}'(a)= 105.971\dots, \qquad \lambda_{4,0}'(a)= 0.12067\dots$$ The transversality condition [\[T\]](#T){reference-type="eqref" reference="T"} is clearly satisfied. We now see (numerically) that in the case [\[values\]](#values){reference-type="eqref" reference="values"}, indeed $\mu=\lambda$ corresponds to the eigenvalue $\mu(18)$, with $\mu(k):=\mu_k(\Omega_{a_4})$. We need to compare the value $\mu$ given in [\[values\]](#values){reference-type="eqref" reference="values"} with the other eigenvalues $\mu_{l,k}(a)$. First, note that $\mu\equiv \mu_{0,2} > \mu_{0, i}$ for $i=0$ or $i=1$. Moreover, by Lemma [Lemma 4](#extra){reference-type="ref" reference="extra"}, $\mu_{0,2}= \lambda_{1,1} > \mu_{1,1}$.
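The values in [\[values\]](#values){reference-type="eqref" reference="values"} can also be checked with a short numerical computation. The following minimal Python sketch (our own illustration, assuming only numpy and scipy; it is not the Mathematica computation quoted above) solves the system $\Psi=0$, $\Phi=0$, $\mu=\lambda$ for $l=4$, starting from an initial guess near the quoted root.

```python
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import fsolve

l = 4

def Psi(mu, a):
    # numerator of Psi(mu, r) evaluated at r = a; its zeros are the zeros of psi'(a)
    s = np.sqrt(mu)
    return jv(1, s * a) * yv(1, s) - yv(1, s * a) * jv(1, s)

def Phi(lam, a):
    # numerator of Phi(lambda, r) evaluated at r = a; its zeros are the zeros of varphi(a)
    s = np.sqrt(lam)
    return jv(l, s * a) * yv(l, s) - yv(l, s * a) * jv(l, s)

def system(z):
    a, mu = z                     # the condition mu = lambda is imposed directly
    return [Psi(mu, a), Phi(mu, a)]

a4, mu4 = fsolve(system, [0.14, 57.6])
print(a4, mu4)                    # approximately 0.140989 and 57.5851
```

Since the system has infinitely many roots (one for each pair of eigenvalue branches), the initial guess selects the root corresponding to [\[values\]](#values){reference-type="eqref" reference="values"}.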
On the other hand, with our choice of $a$, $\mu_{0,2}= \lambda_{4,0} > \mu_{i,0}$ for $i \leqslant 4$. Next, we need to compute $\mu_{l,0}(a)$ with $l \geqslant 5$, and also $\mu_{1,l}(a)$ with $l \geqslant 2$, where $a$ is given in [\[values\]](#values){reference-type="eqref" reference="values"}. For that, we solve $$\psi''+\frac{\psi'}r - l^2 \frac{\psi}{r^2}+ \bar{\mu}\, \psi=0\quad \text{in } (a,1)\,,\qquad \psi'(1)=0\,,$$ where $\bar{\mu}$ is a real parameter. The function $\psi$ is given by $$\psi(r)= \frac{J_l(\sqrt{\bar{\mu}}r) Y_{l-1}(\sqrt{\bar{\mu}}) - Y_l(\sqrt{\bar{\mu}}r) J_{l-1}(\sqrt{\bar{\mu}}) -J_l(\sqrt{\bar{\mu}}r) Y_{l+1}(\sqrt{\bar{\mu}}) + Y_l(\sqrt{\bar{\mu}}r) J_{l+1}(\sqrt{\bar{\mu}}) }{Y_{l-1}(\sqrt{\bar{\mu}}) - Y_{l+1}(\sqrt{\bar{\mu}})}\,.$$ The eigenvalues $\mu_{l,i}$ are obtained as the values of $\bar{\mu}$ for which $\psi'(a)=0\,.$ The index $i$ is determined by the number of times that the corresponding eigenfunction changes sign. Using Mathematica, it is easy to find $$\mu_{5,0}=41.1601\dots, \quad \mu_{6,0}= 56.2689\dots, \quad \mu_{7,0}= 73.5792\dots,\quad \mu_{1,2}= 44.0466\dots, \ \ \mu_{1,3}= 64.1201\dots\,.$$ Therefore, $\mu_{l,0} < \mu_{0,2}$ if and only if $l \leqslant 6$, and $\mu_{1,l} \leqslant\mu_{0,2}$ if and only if $l \leqslant 2$. As the eigenvalues $\mu_{0,0}$, $\mu_{0,1}$ have multiplicity 1 (because they correspond to radial eigenfunctions) and the rest have multiplicity 2 (since the corresponding eigenspace is spanned by the radial function multiplied by $\sin(l \theta)$ and $\cos(l \theta)$), we conclude that $\mu_{0,2}= \mu_{18}(\Omega_{a_4})$. Here we have used that, of course, $0=\mu_{0,0}= \mu_0(\Omega)$. ◻

# References {#references .unnumbered}

C. Berenstein, An inverse spectral theorem and its relation to the Pompeiu problem, J. Anal. Math. 37 (1980), 128--144.

C. Berenstein and P. Yang, An inverse Neumann problem, J. Reine Angew. Math. 382 (1987), 1--21.

L. Brown, B. M. Schreiber and B. A. Taylor, Spectral synthesis and the Pompeiu problem, Ann. Inst. Fourier (Grenoble) 23 (1973), 125--154.

On standing waves with a vortex point of order N for the nonlinear Chern-Simons-Schrödinger equations, J. Differential Equations 261 (2016), no. 2, 1285--1316.

M. Crandall and P. Rabinowitz, Bifurcation from simple eigenvalues, J. Funct. Anal. 8 (1971), 321--340.

J. Deng, Some results on the Schiffer conjecture in $\mathbb{R}^2$, J. Differential Equations 253 (2012), no. 8, 2515--2526.

M. M. Fall, I. A. Minlend and T. Weth, The Schiffer problem on the cylinder and on the 2-sphere, arXiv:2303.17036.

J. Gómez-Serrano, J. Park and J. Shi, Existence of non-trivial non-concentrated compactly supported stationary solutions of the 2D Euler equation with finite energy, Mem. Amer. Math. Soc., in press.

J. Gómez-Serrano, J. Park, J. Shi and Y. Yao, Symmetry in stationary and uniformly-rotating solutions of active scalar equations, Duke Math. J. 170 (2021), no. 13, 2957--3038.

F. Hamel and N. Nadirashvili, Circular flows for the Euler equations in two-dimensional annular domains, and related free boundary problems, J. Eur. Math. Soc., in press.

T. Kato, *Perturbation theory for linear operators*, Springer, Berlin, 2012.

B. Kawohl and M. Lucia, Some results related to Schiffer's problem, J. Analyse Math. 142 (2020), 667--696.

H. Kielhöfer, *Bifurcation Theory: An Introduction with Applications to PDEs*, Springer, Berlin, 2004.

D. Kinderlehrer and L. Nirenberg, Regularity in free boundary problems, Ann. Sc. Norm. Sup. Pisa Cl. Sci. 4(2) (1977), 373--391.
M. Musso, F. Pacard and J. Wei, Finite-energy sign-changing solutions with dihedral symmetry for the stationary nonlinear Schrödinger equation, J. Eur. Math. Soc. 14 (2012), 1923--1953.

D. Pompeiu, Sur certains systèmes d'équations linéaires et sur une propriété intégrale des fonctions de plusieurs variables, Comptes Rendus de l'Académie des Sciences, Série I, 188 (1929), 1138--1139.

W. Reichel, Radial symmetry by moving planes for semilinear elliptic BVPs on annuli and other non-convex domains, *Progress in Partial Differential Equations: Elliptic and Parabolic Problems*, Pitman Res. Notes 325 (1995), 164--182.

D. Ruiz, Symmetry results for compactly supported steady solutions of the 2D Euler equations, Arch. Ration. Mech. Anal. 247 (2023), no. 3, Paper No. 40, 25 pp.

K. T. Smith, D. C. Solmon and S. L. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977), 1227--1270.

R. Souam, Schiffer problem and an isoperimetric inequality for the first buckling eigenvalue of domains on $\mathbb{S}^2$, Ann. Global Anal. Geom. 27 (2005), 341--354.

S. A. Williams, A partial solution to the Pompeiu problem, Math. Ann. 223 (1976), no. 2, 183--190.

N. B. Willms and G. M. L. Gladwell, Saddle points and overdetermined problems for the Helmholtz equation, Z. Angew. Math. Phys. 45 (1994), 1--26.

S.-T. Yau, Problem section, Seminars on Differential Geometry, Ann. of Math. Stud. 102 (1982), 669--706.

L. Zalcman, A bibliographic survey of the Pompeiu problem, *Approximation by Solutions of Partial Differential Equations*, Kluwer, Dordrecht (1992), 185--194.

[^1]: For domains with different topology the connection between the Schiffer and Pompeiu problems is less direct, and in fact one can construct many domains without the Pompeiu property using balls of different centers and radii.

[^2]: Therefore, the Euler equations are to be understood in the weak sense. Details are given in Section [5](#S.corollaries){reference-type="ref" reference="S.corollaries"}.

[^3]: Equivalently, $\Delta w +\mu w=\rho$ in the sense of compactly supported distributions.
--- abstract: | We study a relation between the Drinfeld modules and the even dimensional noncommutative tori. A non-abelian class field theory is developed based on this relation. Explicit generators of the Galois extensions are constructed. address: $^{1}$ Department of Mathematics and Computer Science, St. John's University, 8000 Utopia Parkway, New York, NY 11439, United States. author: - Igor V. Nikolaev$^1$ title: Non-abelian class field theory and higher dimensional noncommutative tori --- # Introduction The class field theory studies abelian extensions of the number fields. Such a theory devoid of the $L$-functions is due to Claude Chevalley. The non-abelian class field theory is looking for a generalization to the extensions with non-commutative Galois groups $G$. The Artin $L$-function associated to the group $G$ was a step forward to solve the problem \[Artin 1924\] [@Art1]. The influential Langlands Program predicts an analytic solution based on the $L$-functions associated to the irreducible representations of the algebraic groups over adeles \[Langlands 1978\] [@Lan1]. Little is known about the non-abelian class field theory in the spirit of Chevalley. The $m$-dimensional noncommutative torus $\mathscr{A}_{\Theta}^m$ is the universal $C^*$-algebra generated by unitary operators $u_1,\dots, u_m$ satisfying the commutation relations $$\label{eq1.1} u_ju_i=e^{2\pi i \theta_{ij}} u_iu_j, \quad 1\le i,j\le m$$ for a skew-symmetric matrix $\Theta=(\theta_{ij})\in M_m(\mathbf{R})$ \[Rieffel 1990\] [@Rie1]. We denote by $\mathbb{A}(S_{g,n})$ the cluster $C^*$-algebra [@Nik2] of an ideal triangulation of the surface $S_{g,n}$ of genus $g$ with $n$ boundary components \[Williams 2014\] [@Wil1 Section 3]; we refer the reader to Section 2.2 for the notation and details. It is known that $\mathbb{A}(S_{g,n})/ I_{\alpha}:= \mathbb{A}_{\Theta}^{6g-6+2n}$ is an AF-algebra whose $K_0$-group is isomorphic to such of the noncommutative torus $\mathscr{A}_{\Theta}^{6g-6+2n}$, where $I_{\alpha}$ is a primitive two-sided ideal and the maximal abelian subalgebra of $\mathbb{A}(S_{g,n})$ and $\alpha\in\mathbf{R}^{6g-7+2n}$ [@Nik2 Theorem 2]. In particular, the group isomorphism $K_0( \mathbb{A}_{\Theta}^{6g-6+2n})\cong K_0(\mathscr{A}_{\Theta}^{6g-6+2n})$ implies $$\label{eq1.2} \Theta= \left( \begin{matrix} 0 & \alpha_1 & &\cr -\alpha_1 & 0 & \alpha_2 & \cr & -\alpha_2 & 0 & &\cr \vdots & & & \vdots\cr & & & 0 & \alpha_{6g-7+2n} \cr & & & - \alpha_{6g-7+2n} &0 \end{matrix} \right).$$ The noncommutative torus is said to have real multiplication, if all $\alpha_k$ in ([\[eq1.2\]](#eq1.2){reference-type="ref" reference="eq1.2"}) are algebraic numbers [@N Section 1.1.5.2]; we write $\mathscr{A}_{RM}^{6g-6+2n}$ in this case. Likewise, one can think of $\alpha_k$ as components of a normalized eigenvector $(1,\alpha_1,\dots,\alpha_{6g-7+2n})$ corresponding to the Perron-Frobenius eigenvalue $\varepsilon>1$ of a positive matrix $B\in GL_{6g-6+2n}(\mathbf{Z})$. Let $A=\mathbf{F}_q[T]$ be the ring of polynomials in one variable over a finite field $\mathbf{F}_q$ and $k=\mathbf{F}_q(T)$ its field of rational functions \[Rosen 2002\] [@R Chapter 1]. Recall that a polynomial $f\in k[x]$ is called additive in the ring $k[x,y]$ if $f(x+y)=f(x)+f(y)$. When $char ~k=p$ the polynomial $\tau_p(x)=x^p$ is additive and each additive polynomial has the form $a_0x+a_1x^p+\dots+a_rx^{p^r}$. 
The set of all additive polynomials is closed under addition and composition operations, thus generating a ring of the non-commutative polynomials $k\langle\tau_p\rangle$ defined by the commutation relation $\tau_p a=a^p\tau_p$ for all $a\in k$ \[Ore 1933\] [@Ore1]. The Drinfeld module $Drin_A^r(k)$ of rank $r\ge 1$ is a homomorphism $\rho$: $$\label{eq1.3} A\buildrel r\over\longrightarrow k\langle\tau_p\rangle,$$ given by a polynomial $\rho_a=a+c_1\tau_p+c_2\tau_p^2+\dots+c_r\tau_p^r$ with $c_i\in k$ and $c_r\ne 0$, such that for all $a\in A$ the constant term of $\rho_a$ is $a$ and $\rho_a\not\in k$ for at least one $a\in A$ \[Rosen 2002\] [@R p. 200]. For each non-zero $a\in A$ the function field $k\left(\Lambda_{\rho}[a]\right)$ is a Galois extension of $k$, such that its Galois group is isomorphic to a subgroup $G$ of the matrix group $GL_r\left(A/aA\right)$, where $\Lambda_{\rho}[a]=\{\lambda\in\bar k ~|~\rho_a(\lambda)=0\}$ is a torsion submodule of the non-trivial Drinfeld module $Drin_A^r(k)$ \[Rosen 2002\] [@R Proposition 12.5]. Clearly, the abelian extensions correspond to the case $r=1$. The aim of our note is a non-abelian class field theory (Corollary [Corollary 2](#cor1.2){reference-type="ref" reference="cor1.2"}) based on a representation of the Ore ring $k\langle\tau_p\rangle$ (Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}) by the bounded linear operators on a Hilbert space $\ell^2G$ (Definition [Definition 4](#dfn2.1){reference-type="ref" reference="dfn2.1"}). Namely, let $G$ be a left cancellative semigroup generated by $\tau_p$ and all $a_i\in k$ subject to the commutation relations $\tau_p a_i=a_i^p\tau_p$. Let $C^*(G)$ be the semigroup $C^*$-algebra \[Li 2017\] [@Li1]. For a Drinfeld module $Drin_A^r(k)$ defined by ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) we consider a homomorphism of the semigroup $C^*$-algebras: $$\label{eq1.4} C^*(A)\buildrel r\over\longrightarrow C^*(k\langle\tau_p\rangle).$$ It is proved (Lemma [Lemma 7](#lm3.1){reference-type="ref" reference="lm3.1"}) that $C^*(A)\cong I_{\alpha}$ and $C^*(k\langle\tau_p\rangle)\cong\mathbb{A}_p(S_{g,n})$, where $r=3g-3+n$ and $\mathbb{A}_p(S_{g,n})$ is a congruence subalgebra of level $p$ of the cluster $C^*$-algebra $\mathbb{A}(S_{g,n})$, see Section 2.2.1. It follows from ([\[eq1.4\]](#eq1.4){reference-type="ref" reference="eq1.4"}) that $\mathbb{A}(S_{g,n})/I_{\alpha}\cong \mathbb{A}_{RM}^{2r}$ and thus the Drinfeld modules $Drin_A^{r}(k)$ are classified by the noncommutative tori $\mathscr{A}_{RM}^{2r}$; this correspondence is a functor denoted by $F$. To recast the torsion submodule $\Lambda_{\rho}[a]$ in terms of $\mathscr{A}_{RM}^{2r}$, recall that if $r=1$ then the Drinfeld module ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) plays the rôle of an elliptic curve with complex multiplication $\mathcal{E}_{CM}$ \[Drinfeld 1974\] [@Dri1 p. 594]. The set $\Lambda_{\rho}[a]$ consists of coefficients of the curve $\mathcal{E}_{CM}$ which are equal to the values of the Weierstrass $\wp$-function at the torsion points of the lattice $\Lambda_{CM}\subset\mathbf{C}$, where $\mathcal{E}_{CM}\cong \mathbf{C}/\Lambda_{CM}$ *ibid.*
We therefore get $F(\Lambda_{\rho}[a])=\{\log ~(\varepsilon) ~e^{2\pi i\alpha_k} ~|~1\le k\le 2r-1\}$, where $\log ~(\varepsilon)$ is a scaling factor ([\[eq2.6\]](#eq2.6){reference-type="ref" reference="eq2.6"}) coming from the geodesic flow on the Teichmüller space of surface $S_{g,n}$ (Section 2.2.1). Our main results can be formulated as follows. **Theorem 1**. *The following is true:* *(i) the map $F: Drin_A^{r}(k)\mapsto \mathscr{A}_{RM}^{2r}$ is a functor from the category of Drinfeld modules $\mathfrak{D}$ to a category of the noncommutative tori $\mathfrak{A}$, which maps any pair of isogenous (isomorphic, resp.) modules $Drin_A^{r}(k), ~\widetilde{Drin}_A^{r}(k)\in \mathfrak{D}$ to a pair of the homomorphic (isomorphic, resp.) tori $\mathscr{A}_{RM}^{2r}, \widetilde{\mathscr{A}}_{RM}^{2r} \in \mathfrak{A}$;* *(ii) $F(\Lambda_{\rho}[a])=\{e^{2\pi i\alpha_k+\log\log\varepsilon} ~|~1\le k\le 2r-1\}$, where $\mathscr{A}_{RM}^{2r}=F(Drin_A^r(k))$ and $\Lambda_{\rho}(a)$ is the torsion submodule of the Drinfeld module $Drin_A^{r}(k)$;* *(iii) the Galois group $Gal \left(\mathbf{k}(e^{2\pi i\alpha_k+\log\log\varepsilon}) ~| ~\mathbf{k}\right)\subseteq GL_{r}\left(A/aA\right)$, where $\mathbf{k}$ is a subfield of the number field $\mathbf{Q}(e^{2\pi i\alpha_k+\log\log\varepsilon})$.* Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} implies a non-abelian class field theory as follows. Fix a non-zero $a\in A$ and let $G:=Gal~(k(\Lambda_{\rho}[a]) ~|~ k)\subseteq GL_r(A/aA)$, where $\Lambda_{\rho}[a]$ is the torsion submodule of the Drinfeld module $Drin_A^{r}(k)$. Consider the number field $\mathbf{K}=\mathbf{Q}(F(\Lambda_{\rho}[a]))$. Denote by $\mathbf{k}$ the maximal subfield of $\mathbf{K}$ which is fixed by the action of all elements of the group $G$. (The action of $G$ on $\mathbf{K}$ is well defined in view of item (ii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}.) **Corollary 2**. ***(Non-abelian class field theory)** The number field $$\label{eq1.5} \mathbf{K}\cong \begin{cases} \mathbf{k}\left(e^{2\pi i\alpha_k +\log\log\varepsilon}\right), & if ~\mathbf{k}\subset\mathbf{C},\cr \mathbf{k}\left(\cos 2\pi\alpha_k \times\log\varepsilon\right), & if ~\mathbf{k}\subset\mathbf{R}, \end{cases}$$ is a Galois extension of $\mathbf{k}$, such that $Gal~(\mathbf{K} | \mathbf{k})\cong G$.* *Remark 3*. Formulas ([\[eq1.5\]](#eq1.5){reference-type="ref" reference="eq1.5"}) for the abelian extensions of the imaginary quadratic fields $\mathbf{k}$ were proved in [@Nik3]. An abelian class field theory based on the Bost-Connes crossed product $C^*$-algebras was studied in \[Yalkinoglu 2013\] [@Yal1]. The paper is organized as follows. A brief review of the preliminary facts is given in Section 2. Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} and Corollary [Corollary 2](#cor1.2){reference-type="ref" reference="cor1.2"} are proved in Section 3. The Langlands reciprocity for the noncommutative tori is discussed in Section 4. # Preliminaries We briefly review the noncommutative tori, semigroup and cluster $C^*$-algebras, and Drinfeld modules. We refer the reader to \[Rieffel 1990\] [@Rie1], \[Li 2017\] [@Li1], [@Nik2], [@N Section 1.1] and \[Rosen 2002\] [@R Chapters 12 & 13] for a detailed exposition. 
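Before reviewing the background material, we record a small numerical illustration (a sketch under our own assumptions, not part of the theory reviewed below) of the real multiplication data from the Introduction: a positive matrix $B\in GL_{2r}(\mathbf{Z})$ determines the Perron-Frobenius eigenvalue $\varepsilon>1$, the normalized eigenvector $(1,\alpha_1,\dots,\alpha_{2r-1})$, and hence the numbers $e^{2\pi i\alpha_k+\log\log\varepsilon}$ appearing in Theorem 1 and Corollary 2. The matrix below is a hypothetical example with $2r=4$.

```python
import numpy as np

# A positive integer matrix with determinant 1, standing in for B in GL_{6g-6+2n}(Z).
B = np.array([[2, 1, 1, 1],
              [1, 2, 1, 1],
              [1, 1, 2, 1],
              [1, 1, 1, 1]])

eigvals, eigvecs = np.linalg.eig(B)
i = np.argmax(eigvals.real)            # Perron-Frobenius eigenvalue epsilon > 1
eps = eigvals[i].real
v = np.abs(eigvecs[:, i].real)
alpha = v / v[0]                       # normalized eigenvector (1, alpha_1, alpha_2, alpha_3)

# the numbers e^{2 pi i alpha_k + log log eps} = log(eps) * e^{2 pi i alpha_k}
gens = [np.exp(2j * np.pi * a + np.log(np.log(eps))) for a in alpha[1:]]
print(eps)
print(alpha)
print(gens)
```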
## Noncommutative tori

### $C^*$-algebras

The $C^*$-algebra is an algebra $\mathscr{A}$ over $\mathbf{C}$ with a norm $a\mapsto ||a||$ and an involution $\{a\mapsto a^* ~|~ a\in \mathscr{A}\}$ such that $\mathscr{A}$ is complete with respect to the norm, and such that $||ab||\le ||a||~||b||$ and $||a^*a||=||a||^2$ for every $a,b\in \mathscr{A}$. Each commutative $C^*$-algebra is isomorphic to the algebra $C_0(X)$ of continuous complex-valued functions on some locally compact Hausdorff space $X$. Any other algebra $\mathscr{A}$ can be thought of as a noncommutative topological space.

### K-theory of $C^*$-algebras

By $M_{\infty}(\mathscr{A})$ one understands the algebraic direct limit of the $C^*$-algebras $M_n(\mathscr{A})$ under the embeddings $a\mapsto ~\mathbf{diag} (a,0)$. The direct limit $M_{\infty}(\mathscr{A})$ can be thought of as the $C^*$-algebra of infinite-dimensional matrices whose entries are all zero except for a finite number of the non-zero entries taken from the $C^*$-algebra $\mathscr{A}$. Two projections $p,q\in M_{\infty}(\mathscr{A})$ are equivalent, if there exists an element $v\in M_{\infty}(\mathscr{A})$, such that $p=v^*v$ and $q=vv^*$. The equivalence class of projection $p$ is denoted by $[p]$. We write $V(\mathscr{A})$ to denote all equivalence classes of projections in the $C^*$-algebra $M_{\infty}(\mathscr{A})$, i.e. $V(\mathscr{A}):=\{[p] ~:~ p=p^*=p^2\in M_{\infty}(\mathscr{A})\}$. The set $V(\mathscr{A})$ has the natural structure of an abelian semi-group with the addition operation defined by the formula $[p]+[q]:=\mathbf{diag}(p,q)=[p'\oplus q']$, where $p'\sim p, ~q'\sim q$ and $p'\perp q'$. The identity of the semi-group $V(\mathscr{A})$ is given by $[0]$, where $0$ is the zero projection. By the $K_0$-group $K_0(\mathscr{A})$ of the unital $C^*$-algebra $\mathscr{A}$ one understands the Grothendieck group of the abelian semi-group $V(\mathscr{A})$, i.e. a completion of $V(\mathscr{A})$ by the formal elements $[p]-[q]$. The image of $V(\mathscr{A})$ in $K_0(\mathscr{A})$ is a positive cone $K_0^+(\mathscr{A})$ defining the order structure $\le$ on the abelian group $K_0(\mathscr{A})$. The pair $\left(K_0(\mathscr{A}), K_0^+(\mathscr{A})\right)$ is known as a dimension group of the $C^*$-algebra $\mathscr{A}$.

### Noncommutative tori

A noncommutative $m$-torus is the universal $C^*$-algebra generated by $m$ unitary operators $u_1,\dots, u_m$; the operators do not commute with each other, but their commutators $u_iu_ju_i^{-1}u_j^{-1}$ are fixed scalar multiples $\{\exp~(2\pi i\theta_{ij}) ~|~\theta_{ij}\in \mathbf{R}\}$ of the identity operator. The $m$-dimensional noncommutative torus, $\mathscr{A}_{\Theta}^m$, is defined by a skew-symmetric real matrix $\Theta=(\theta_{ij}), ~1\le i,j\le m$. It is known that $K_0(\mathscr{A}_{\Theta}^m)\cong K_1(\mathscr{A}_{\Theta}^m)\cong \mathbf{Z}^{2^{m-1}}$. The canonical trace $\tau$ on the $C^*$-algebra $\mathscr{A}_{\Theta}^m$ defines a homomorphism from $K_0(\mathscr{A}_{\Theta}^m)$ to the real line $\mathbf{R}$; under the homomorphism, the image of $K_0(\mathscr{A}_{\Theta}^m)$ is a $\mathbf{Z}$-module, whose generators $\tau=(\tau_i)$ are polynomials in $\theta_{ij}$. A finite-dimensional illustration of the defining relations for rational $\theta_{ij}$ is sketched below.
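For a rational angle $\theta=p/q$ the relations ([\[eq1.1\]](#eq1.1){reference-type="eqref" reference="eq1.1"}) admit $q$-dimensional representations by the classical clock and shift matrices; the following numpy sketch (an illustration only, since for irrational $\theta$ no finite-dimensional representation exists) verifies the relation $u_2u_1=e^{2\pi i\theta}u_1u_2$ with $u_1$ the shift and $u_2$ the clock matrix.

```python
import numpy as np

p_, q_ = 1, 5                                  # rational angle theta = p/q
theta = p_ / q_
omega = np.exp(2j * np.pi * theta)

U = np.diag([omega ** k for k in range(q_)])   # clock matrix (plays the role of u_2)
V = np.roll(np.eye(q_), 1, axis=0)             # shift matrix, V e_k = e_{k+1 mod q} (plays the role of u_1)

# check u_2 u_1 = exp(2 pi i theta) u_1 u_2
print(np.allclose(U @ V, omega * (V @ U)))     # True
```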
The noncommutative tori $\mathscr{A}_{\Theta}^m$ and $\mathscr{A}_{\Theta'}^m$ are Morita equivalent, if the matrices $\Theta$ and $\Theta'$ belong to the same orbit of a subgroup $SO(m,m~|~\mathbf{Z})$ of the group $GL_{2m}(\mathbf{Z})$, which acts on $\Theta$ by the formula $\Theta'=(A\Theta+B)~/~(C\Theta+D)$, where $\left(\begin{smallmatrix} A & B\\ C & D\end{smallmatrix}\right)\in GL_{2m}(\mathbf{Z})$ and the $m\times m$ integer matrices $A,B,C,D$ satisfy the conditions: $$A^tD+C^tB=I,\quad A^tC+C^tA=0=B^tD+D^tB,$$ where $I$ is the unit matrix and $t$ at the upper right of a matrix means the transpose of the matrix. The group $SO(m, m ~| ~\mathbf{Z})$ can be equivalently defined as a subgroup of the group $SO(m, m ~| ~\mathbf{R})$ consisting of linear transformations of the space $\mathbf{R}^{2m}$, which preserve the quadratic form $x_1x_{m+1}+x_2x_{m+2}+\dots+x_mx_{2m}$.

### AF-algebras

An *AF-algebra* (Approximately Finite-dimensional $C^*$-algebra) is defined to be the norm closure of an ascending sequence of finite dimensional $C^*$-algebras $M_n$, where $M_n$ is the $C^*$-algebra of the $n\times n$ matrices with entries in $\mathbf{C}$. Here the index $n=(n_1,\dots,n_k)$ represents the semi-simple matrix algebra $M_n=M_{n_1}\oplus\dots\oplus M_{n_k}$. The ascending sequence mentioned above can be written as $$\label{eq2.2} M_1\buildrel\rm\varphi_1\over\longrightarrow M_2 \buildrel\rm\varphi_2\over\longrightarrow\dots,$$ where $M_i$ are the finite dimensional $C^*$-algebras and $\varphi_i$ the homomorphisms between such algebras. If $\varphi_i=Const$, then the AF-algebra $\mathscr{A}$ is called *stationary*. The homomorphisms $\varphi_i$ can be arranged into a graph as follows. Let $M_i=M_{i_1}\oplus\dots\oplus M_{i_k}$ and $M_{i'}=M_{i_1'}\oplus\dots\oplus M_{i_k'}$ be the semi-simple $C^*$-algebras and $\varphi_i: M_i\to M_{i'}$ the homomorphism. One has two sets of vertices $V_{i_1},\dots, V_{i_k}$ and $V_{i_1'},\dots, V_{i_k'}$ joined by $a_{rs}$ edges whenever the summand $M_{i_r}$ contains $a_{rs}$ copies of the summand $M_{i_s'}$ under the embedding $\varphi_i$. As $i$ varies, one obtains an infinite graph called the Bratteli diagram of the AF-algebra. The matrix $A=(a_{rs})$ is known as a partial multiplicity matrix; an infinite sequence of $A_i$ defines a unique AF-algebra. If $\mathbb{A}$ is a stationary AF-algebra, then $A_i=Const$ for all $i\ge 1$. The dimension group $\left(K_0(\mathbb{A}), K_0^+(\mathbb{A})\right)$ is a complete invariant of the Morita equivalence class of the AF-algebra $\mathbb{A}$, see e.g. [@N Theorem 3.5.2].

### Semigroup $C^*$-algebras

Let $G$ be a semigroup. We assume that $G$ is left cancellative, i.e. for all $g,x,y\in G$ the equality $gx=gy$ implies $x=y$. In other words, the map $G\to G$ given by left multiplication is injective for all $g\in G$. Consider the left regular representation of $G$. Namely, let $\ell^2G$ be the Hilbert space with the canonical orthonormal basis $\{\delta_x : x\in G\}$, where $\delta_x$ is the delta-function. For every $g\in G$ the map $G\to G$ given by the formula $x\mapsto gx$ is injective, so that the mapping $\delta_x\mapsto \delta_{gx}$ extends to an isometry $V_g: \ell^2G\to \ell^2G$. In other words, the assignment $g\mapsto V_g$ represents $G$ as isometries of the space $\ell^2G$.

**Definition 4**. The semigroup $C^*$-algebra $C^*(G)$ is the smallest subalgebra of bounded linear operators on the Hilbert space $\ell^2G$ containing all $\{V_g : g\in G\}$.
In other words, $$C^*(G):= C^*\left(\{V_g : g\in G\}\right).$$

## Cluster $C^*$-algebras

The cluster algebra of rank $n$ is a subring $\mathcal{A}(\mathbf{x}, B)$ of the field of rational functions in $n$ variables depending on variables $\mathbf{x}=(x_1,\dots, x_n)$ and a skew-symmetric matrix $B=(b_{ij})\in M_n(\mathbf{Z})$. The pair $(\mathbf{x}, B)$ is called a seed. A new cluster $\mathbf{x}'=(x_1,\dots,x_k',\dots, x_n)$ and a new skew-symmetric matrix $B'=(b_{ij}')$ is obtained from $(\mathbf{x}, B)$ by the exchange relations \[Williams 2014\] [@Wil1 Definition 2.22]: $$\begin{aligned} \label{eq2.3} x_kx_k' &=& \prod_{i=1}^n x_i^{\max(b_{ik}, 0)} + \prod_{i=1}^n x_i^{\max(-b_{ik}, 0)},\cr b_{ij}' &=& \begin{cases} -b_{ij} & \mbox{if} ~i=k ~\mbox{or} ~j=k\cr b_{ij}+{|b_{ik}|b_{kj}+b_{ik}|b_{kj}|\over 2} & \mbox{otherwise.} \end{cases}\end{aligned}$$ The seed $(\mathbf{x}', B')$ is said to be a mutation of $(\mathbf{x}, B)$ in direction $k$, where $1\le k\le n$. The algebra $\mathcal{A}(\mathbf{x}, B)$ is generated by the cluster variables $\{x_i\}_{i=1}^{\infty}$ obtained from the initial seed $(\mathbf{x}, B)$ by the iteration of mutations in all possible directions $k$. The Laurent phenomenon says that $\mathcal{A}(\mathbf{x}, B)\subset \mathbf{Z}[\mathbf{x}^{\pm 1}]$, where $\mathbf{Z}[\mathbf{x}^{\pm 1}]$ is the ring of the Laurent polynomials in variables $\mathbf{x}=(x_1,\dots,x_n)$ \[Williams 2014\] [@Wil1 Theorem 2.27]. In particular, each generator $x_i$ of the algebra $\mathcal{A}(\mathbf{x}, B)$ can be written as a Laurent polynomial in $n$ variables with integer coefficients. (A toy implementation of the exchange relations is sketched below, after the definition of the Tomita-Takesaki flow.) The cluster algebra $\mathcal{A}(\mathbf{x}, B)$ has the structure of an additive abelian semigroup consisting of the Laurent polynomials with positive coefficients. In other words, the $\mathcal{A}(\mathbf{x}, B)$ is a dimension group, see Section 2.1.6 or [@N Definition 3.5.2]. The cluster $C^*$-algebra $\mathbb{A}(\mathbf{x}, B)$ is an AF-algebra, such that $K_0(\mathbb{A}(\mathbf{x}, B))\cong \mathcal{A}(\mathbf{x}, B)$.

### Cluster $C^*$-algebra $\mathbb{A}(S_{g,n})$

Denote by $S_{g,n}$ the Riemann surface of genus $g\ge 0$ with $n\ge 0$ cusps. Let $\mathcal{A}(\mathbf{x}, S_{g,n})$ be the cluster algebra coming from a triangulation of the surface $S_{g,n}$ \[Williams 2014\] [@Wil1 Section 3.3]. We shall denote by $\mathbb{A}(S_{g,n})$ the corresponding cluster $C^*$-algebra. Let $p$ be a prime number, and denote by $\mathcal{A}_p(S_{g,n})$ a sub-algebra of $\mathcal{A}(S_{g,n})$ consisting of the Laurent polynomials whose coefficients are divisible by $p$. It is easy to verify that $\mathcal{A}_p(S_{g,n})$ is again a dimension group under the addition of the Laurent polynomials. We say that $\mathbb{A}_p(S_{g,n})$ is a congruence sub-algebra of level $p$ of the cluster $C^*$-algebra $\mathbb{A}(S_{g,n})$, i.e. $K_0(\mathbb{A}_p(S_{g,n}))\cong \mathcal{A}_p(S_{g,n})$. Let $T_{g,n}$ be the Teichmüller space of the surface $S_{g,n}$, i.e. the set of all complex structures on $S_{g,n}$ endowed with the natural topology. The geodesic flow $T^t: T_{g,n}\to T_{g,n}$ is a one-parameter group of matrices $\left(\small\begin{matrix} e^t &0\cr 0 &e^{-t}\end{matrix}\right)$ acting on the holomorphic quadratic differentials on the Riemann surface $S_{g,n}$. Such a flow gives rise to a one-parameter group of automorphisms $$\label{eq2.4} \sigma_t: \mathbb{A}(S_{g,n})\to \mathbb{A}(S_{g,n})$$ called the Tomita-Takesaki flow on the AF-algebra $\mathbb{A}(S_{g,n})$.
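As announced above, here is a toy implementation of the exchange relations ([\[eq2.3\]](#eq2.3){reference-type="eqref" reference="eq2.3"}); it is a sketch assuming sympy and is not tied to the cluster $C^*$-algebra construction. Running it in rank $2$ illustrates the Laurent phenomenon: every mutated cluster variable is a Laurent polynomial in the initial seed variables.

```python
import sympy as sp

def mutate(x, B, k):
    """One seed mutation (x, B) -> (x', B') in direction k (0-based), following (2.3)."""
    n = len(x)
    plus  = sp.Mul(*[x[i] ** max(B[i][k], 0)  for i in range(n)])
    minus = sp.Mul(*[x[i] ** max(-B[i][k], 0) for i in range(n)])
    x_new = list(x)
    x_new[k] = sp.cancel((plus + minus) / x[k])          # exchange relation for x_k'
    B_new = [[-B[i][j] if (i == k or j == k)
              else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
              for j in range(n)] for i in range(n)]       # matrix mutation
    return x_new, B_new

# rank-2 example
x = list(sp.symbols('x1 x2'))
B = [[0, 1], [-1, 0]]
for k in [0, 1, 0, 1, 0]:
    x, B = mutate(x, B, k)
    print(x[k])   # each cluster variable is a Laurent polynomial in x1, x2
```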
Denote by $Prim~\mathbb{A}(S_{g,n})$ the space of all primitive ideals of $\mathbb{A}(S_{g,n})$ endowed with the Jacobson topology. Recall ([@Nik2]) that each primitive ideal has a parametrization by a vector $\Theta\in \mathbf{R}^{6g-7+2n}$ and we write it $I_{\Theta}\in Prim~\mathbb{A}(S_{g,n})$.

**Theorem 5**. ***([@Nik2])** There exists a homeomorphism $h: Prim~\mathbb{A}(S_{g,n})\times \mathbf{R}\to \{U\subseteq T_{g,n} ~|~U~\hbox{{\sf is generic}}\}$ given by the formula $\sigma_t(I_{\Theta})\mapsto S_{g,n}$; the set $U=T_{g,n}$ if and only if $g=n=1$. The $\sigma_t(I_{\Theta})$ is an ideal of $\mathbb{A}(S_{g,n})$ for all $t\in \mathbf{R}$ and the quotient algebra $\mathbb{A}(S_{g,n})/\sigma_t(I_{\Theta})$ is a non-commutative coordinate ring of the Riemann surface $S_{g,n}$.*

Let $\phi\in Mod ~(S_{g,n})$ be a pseudo-Anosov automorphism of $S_{g,n}$ with the dilatation $\lambda_{\phi}>1$. If the Riemann surfaces $S_{g,n}$ and $\phi (S_{g,n})$ lie on the axis of $\phi$, then Theorem [Theorem 5](#thm2.2){reference-type="ref" reference="thm2.2"} gives rise to the Connes invariant [@Nik2 Section 4.2]: $$\label{eq2.6} T(\mathbb{A}(S_{g,n}))=\{\log\lambda_{\phi} ~|~ \phi\in Mod ~(S_{g,n})\}.$$

## Drinfeld modules

The explicit class field theory for the function fields is strikingly simpler than for the number fields. The generators of the maximal abelian unramified extensions (i.e. the Hilbert class fields) are constructed using the concept of the Drinfeld module. Roughly speaking, such a module is an analog of the exponential function and a generalization of the Carlitz module. Nothing similar exists on the number field side, where the explicit generators of abelian extensions are known only for the field of rationals (Kronecker-Weber theorem) and imaginary quadratic number fields (complex multiplication). Below we give some details on the Drinfeld modules. Let $k$ be a field. A polynomial $f\in k[x]$ is said to be additive in the ring $k[x,y]$ if $f(x+y)=f(x)+f(y)$. If $char ~k=p$, then it is verified directly that the polynomial $\tau_p(x)=x^p$ is additive. Moreover, each additive polynomial has the form $a_0x+a_1x^p+\dots+a_rx^{p^r}$. The set of all additive polynomials is closed under addition and composition operations. The corresponding ring is isomorphic to a ring $k\langle\tau_p\rangle$ of the non-commutative polynomials given by the commutation relation: $$\label{eq2.7} \tau_p a=a^p\tau_p, \qquad \forall a\in k.$$ Let $A=\mathbf{F}_q[T]$ and $k=\mathbf{F}_q(T)$. The Drinfeld module $Drin_A^r(k)$ of rank $r\ge 1$ is a homomorphism $\rho: A\buildrel r\over\longrightarrow k\langle\tau_p\rangle$ given by a polynomial $\rho_a=a+c_1\tau_p+c_2\tau_p^2+\dots+c_r\tau_p^r$ with $c_i\in k$ and $c_r\ne 0$, such that for all $a\in A$ the constant term of $\rho_a$ is $a$ and $\rho_a\not\in k$ for at least one $a\in A$. Consider the torsion submodule $\Lambda_{\rho}[a]:=\{\lambda\in\bar k ~|~\rho_a(\lambda)=0\}$ of a non-trivial Drinfeld module $Drin_A^r(k)$. The following result implies a non-abelian class field theory for the function fields.

**Theorem 6**.
*For each non-zero $a\in A$ the function field $k\left(\Lambda_{\rho}[a]\right)$ is a Galois extension of $k$, such that its Galois group is isomorphic to a subgroup of the matrix group $GL_r\left(A/aA\right)$, where $\Lambda_{\rho}[a]$ is a torsion submodule of the non-trivial Drinfeld module $Drin_A^r(k)$.* # Proofs ## Proof of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} {#proof-of-theorem-thm1.1} For the sake of clarity, let us outline the main ideas. For a Drinfeld module $Drin_A^r(k)$ given by the formula ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) we consider a homomorphism $C^*(A)\buildrel r\over\longrightarrow C^*(k\langle\tau_p\rangle)$ of the $C^*$-algebras defined by the multiplicative semigroups of the rings $A$ and $k\langle\tau_p\rangle$, respectively. We prove that if $r=3g-3+n$ then $C^*(k_{\rho_a}\langle\tau_p\rangle)\cong \mathbb{A}_p(S_{g,n})$, where $k_{\rho_a}\langle\tau_p\rangle$ is the ring $k\langle\tau_p\rangle$ modulo the relation $T=\rho_a(T)$ and $\mathbb{A}_p(S_{g,n})\subset \mathbb{A}(S_{g,n})$ is a congruence subalgebra of level $p$, see Section 2.2.1. Moreover, $C^*(A)\cong I_{\alpha}$, where $I_{\alpha}$ is a primitive ideal of the cluster $C^*$-algebra $\mathbb{A}(S_{g,n})$ (Theorem [Theorem 5](#thm2.2){reference-type="ref" reference="thm2.2"}). Thus one gets a classification of the Drinfeld modules $Drin_A^r(k)$ by the noncommutative tori $\mathscr{A}_{RM}^{2r}$ given by ([\[eq1.2\]](#eq1.2){reference-type="ref" reference="eq1.2"}); this fact follows from Theorem [Theorem 5](#thm2.2){reference-type="ref" reference="thm2.2"} and an isomorphism $\mathbb{A}(S_{g,n})/I_{\alpha}\cong \mathbb{A}_{RM}^{2r}$, where $K_0( \mathbb{A}_{RM}^{2r})\cong K_0(\mathscr{A}_{RM}^{2r} )$ . The rest of the proof follows from the above observations. Let us pass to a detailed argument. \(A\) Let us prove item (i) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}. We split the proof in a series of lemmas. **Lemma 7**. ***(Main lemma)** $C^*(\rho(A))\cong I_{\alpha}\subset\mathbb{A}_p(S_{g,n})$, where $r=3g-3+n$.* *Proof.* Roughly speaking, the proof consists in comparing the crossed product $C^*$-algebras $C^*(A)\rtimes_{\rho}\mathbf{Z}$ and $C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$, where $C^*(A)$ is the semigroup $C^*$-algebra of $A$ and $C(\mathscr{B})$ is the $C^*$-algebra of continuous complex-valued functions on the Bratteli compactum $\mathscr{B}$ of the AF-algebra $\mathbb{A}_p(S_{g,n})$. It is proved that $C^*(A)\cong C(\mathscr{B})$ and the $\rho$-action is conjugate to the $\alpha$-action. In particular, the maximal abelian subalgebras $C^*(\rho(A))$ and $I_{\alpha}$ in the corresponding crossed product $C^*$-algebras must be isomorphic. Let us pass to a step-by-step construction. \(i\) To define a crossed product $C^*$-algebra $C^*(A)\rtimes_{\rho}\mathbf{Z}$, consider a (semi-)dynamical system on $A$ given by the powers of $T$. Namely, the orbit of generator $T$ of the ring $A$ is given by the set $\{\rho_a(T), \rho_a(T^2),\dots, \rho_a(T^i),\dots\}$. One gets an orbit of any element of $A$ by the formula $\{F_q[\rho_a(T^i)]\}_{i=1}^{\infty}$. The dynamical system on the $C^*$-algebra $C^*(A)$ gives rise to a crossed product $C^*$-algebra which we shall denote by $C^*(A)\rtimes_{\rho}\mathbf{Z}$. It follows from the construction that $C^*(\rho(A))$ is the maximal abelian subalgebra of $C^*(A)\rtimes_{\rho}\mathbf{Z}$. 
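To make the ring $k\langle\tau_p\rangle$ and the polynomials $\rho_a$ used in step (i) concrete, here is a small sympy sketch (an illustration only, under our own conventions: we take the Carlitz module $\rho_T=T+\tau_p$, keep coefficients in $\mathbf{Z}[T]$, and reduce modulo $p$ only at the end). It checks that the Ore product $\rho_T\rho_T$ represents the composition of the additive polynomial $Tx+x^p$ with itself in characteristic $p$.

```python
import sympy as sp

p = 3
T, x = sp.symbols('T x')

def additive_poly(coeffs, arg):
    # the additive polynomial c_0*z + c_1*z^p + c_2*z^(p^2) + ...  evaluated at z = arg
    return sum(c * arg ** (p ** i) for i, c in enumerate(coeffs))

def ore_mul(f, g):
    # product in k<tau_p> of f = sum_i f_i tau_p^i and g = sum_j g_j tau_p^j,
    # using the commutation relation tau_p a = a^p tau_p
    h = [sp.Integer(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj ** (p ** i)
    return [sp.expand(c) for c in h]

# Carlitz module: rho_T = T + tau_p, i.e. the additive polynomial T*x + x^p
rho_T = [T, sp.Integer(1)]
rho_T2 = ore_mul(rho_T, rho_T)                                    # represents rho_{T^2}

lhs = sp.expand(additive_poly(rho_T2, x))                         # additive polynomial of rho_T * rho_T
rhs = sp.expand(additive_poly(rho_T, additive_poly(rho_T, x)))    # rho_T composed with itself
diff = sp.Poly(lhs - rhs, x, T)
print(all(c % p == 0 for c in diff.coeffs()))                     # True: they agree modulo p
```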
\(ii\) Let us define a crossed product $C^*$-algebra $C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$. Denote by $(V,E, \le)$ the Bratteli diagram endowed with partial order $\le$ defined for edges adjacent to the same vertex. Recall that a Bratteli compactum $\mathscr{B}$ is the set of infinite paths $\{e_n\}$ of $(V,E,\le)$ endowed with the distance $d(\{e_n\}, \{e_n'\})=2^{-k}$, where $k=\inf \{i ~|~e_i\ne e_i'\}$ \[Herman, Putnam & Skau 1992\] [@HerPutSka1]. The space $\mathscr{B}$ is a Cantor set. The Vershik map $h: \mathscr{B}\to\mathscr{B}$ is defined by the order $\le$ as follows. Let $e_n'$ be successor of $e_n$ relative the order $\le$. The map $h: \mathscr{B}\to\mathscr{B}$ is defined by the formula $\{e_n\}\mapsto \{e_1,\dots,e_{n}, e_{n+1}', e_{n+2}',\dots\}$. The map $h$ is a homeomorphism which generates a minimal dynamical system on the Cantor set $\mathscr{B}$. Consider now an inclusion of the Bratteli compacta $\mathscr{B}_{\alpha}\subset\mathscr{B}$ corresponding to the inclusion of the AF-algebras $I_{\alpha}\subset\mathbb{A}_p(S_{g,n})$. The Vershik map $h_{\alpha}: \mathscr{B}_{\alpha}\to \mathscr{B}_{\alpha}$ extends uniquely to a homeomorphism $h_{\alpha}: \mathscr{B}\to \mathscr{B}$. (Notice that $h_{\alpha}$ is no longer the Vershik map on $\mathscr{B}$.) The minimal dynamical system on the Cantor set $\mathscr{B}$ defined by $h_{\alpha}$ gives rise to a crossed product $C^*$-algebra $C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$, where $C(\mathscr{B})$ is the $C^*$-algebra of continuous complex-valued functions on $\mathscr{B}$. It follows from the construction that $I_{\alpha}$ is the maximal abelian subalgebra of $C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$. \(iii\) We proceed by establishing an isomorphism $C^*(A)\cong C(\mathscr{B})$. Denote by $\hat A$ the Pontryagin dual of the multiplicative group of $A$. Since $A$ is commutative, the Gelfand theorem implies an isomorphism $C^*(A)\cong C(\hat A)$, where $C(\hat A)$ is the $C^*$-algebra of continuous complex-valued functions on $\hat A$. Therefore it is sufficient to construct a homeomorphism between the topological spaces $\hat A$ and $\mathscr{B}$. \(iv\) Let us calculate $\hat A$. Recall that $A$ is an analog of the ring $\mathbf{Z}$ and $\hat{\mathbf{Z}}\cong\mathbf{R}/\mathbf{Z}$. Likewise $\hat A\cong \mathbf{Q}_p/\mathbf{Z}_p$, where $\mathbf{Q}_p$ and $\mathbf{Z}_p$ are the $p$-adic numbers and the $p$-adic integers, respectively. \(v\) Let $\mathscr{B}$ be the Bratteli compactum of the AF-algebra $\mathbb{A}_p(S_{g,n})$. To describe the Bratteli diagram of $\mathbb{A}_p(S_{g,n})$, let us recall the construction of $\mathbb{A}(S_{1,1})$ using the Farey tessellation of the hyperbolic plane \[Boca 2008\] [@Boc1 Figure 2]. Consider the set of rational numbers on the unit interval of the form $\mathbf{Z}[\frac{1}{p}]/\mathbf{Z}$. (Notice that such a set has structure of the Prüfer group $\mathbf{Z}(p^{\infty})$ which we will not use.) Consider a Bratteli diagram $(V_p,E_p)$ obtained by "suspension of cusps" of the Farey tessellation at the rational points $\mathbf{Z}(p^{\infty})$ of the unit interval $[0,1]$, see \[Boca 2008\] [@Boc1 Figures 1 & 2]. The $(V_p,E_p)$ consists of all vertices at the vertical lines of the suspension and all subgraphs rooted in the above vertices. (Recall that all graphs are oriented by the level structure of the diagram.) It is verified directly that $(V_p,E_p)$ is the Bratteli diagram of the AF-algebra $\mathbb{A}_p(S_{1,1})$. 
The general case $\mathbb{A}_p(S_{g,n})$ is treated similarly by replacing the triangles in the Farey tessellation with the fundamental polygons of the surface $S_{g,n}$. The reader can see that the Bratteli compactum of $(V_p,E_p)$ is $\mathscr{B}=\mathbf{Z}[\frac{1}{p}]/\mathbf{Z}$.

\(vi\) It follows from items (iv) and (v) that $\hat A\cong \mathbf{Q}_p/\mathbf{Z}_p\cong \mathbf{Z}[\frac{1}{p}]/\mathbf{Z}=\mathscr{B}$. (The middle isomorphism in the last formula is well known.) Thus we conclude that $$\label{eq3.1} C^*(A)\cong C(\hat A)\cong C(\mathscr{B}).$$

\(vii\) It remains to prove an isomorphism of the crossed product $C^*$-algebras $C^*(A)\rtimes_{\rho}\mathbf{Z}$ and $C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$ or, equivalently, that the $\rho$-action is conjugate to the $\alpha$-action. Let $\rho: C^*(A)\to C^*(A)$ be an (outer) automorphism of the $C^*$-algebra $C^*(A)$ constructed in item (i). In view of ([\[eq3.1\]](#eq3.1){reference-type="ref" reference="eq3.1"}), the automorphism $\rho$ generates an automorphism of the $C^*$-algebra $C(\mathscr{B})$. It follows from our construction in item (ii) that any automorphism of $C(\mathscr{B})$ comes from a primitive ideal $I_{\alpha}\subset\mathbb{A}_p(S_{g,n})$. Thus the $\rho$-action is conjugate to the $\alpha$-action for some $\alpha\in \mathbf{R}^{6g-7+2n}$. In particular, $C^*(A)\rtimes_{\rho}\mathbf{Z}\cong C(\mathscr{B})\rtimes_{\alpha}\mathbf{Z}$ and the corresponding maximal abelian subalgebras $C^*(\rho(A))$ and $I_{\alpha}$ must be isomorphic.

\(viii\) Finally, let us show that the rank of the Drinfeld module ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) is equal to $r=3g-3+n$. Indeed, recall that the Drinfeld module ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) is defined by the equation $T=\rho_a(T)$. Roughly speaking, in item (vii) we identified $\rho$ with the (real) algebraic numbers $(1,\alpha_1,\dots,\alpha_{6g-7+2n})$ given by the minimal polynomial $z=\rho_a(z)$ having the complex roots $z_k=\alpha_k+i\alpha_{k+1}$. Since $\rho_a$ has $r$ complex roots, one concludes that $2r=6g-6+2n$ and, therefore, the rank of the Drinfeld module is equal to $r=3g-3+n$. Lemma [Lemma 7](#lm3.1){reference-type="ref" reference="lm3.1"} is proved. ◻

(Figure 1: a commutative square whose top row is the arrow $A\to k\langle\tau_p\rangle$ labelled $r=3g-3+n$, whose bottom row is the inclusion $I_{\alpha}\to \mathbb{A}_p(S_{g,n})$, and whose two vertical arrows are given by the functor $F$.)

**Lemma 8**. *The diagram in Figure 1 is commutative. In particular, for each $g\ge 1$ and $n\in\{0,1,2\}$ the Drinfeld module $Drin_A^{3g-3+n}(k)$ gives rise to a noncommutative torus $\mathscr{A}_{RM}^{6g-6+2n}$, such that isogenous (isomorphic, resp.) Drinfeld modules correspond to the homomorphic (isomorphic, resp.) noncommutative tori.*

*Proof.* (i) Indeed, the functor $F$ in Figure 1 acts by the formula $A\mapsto C^*(\rho(A))$ and $k\langle\tau_p\rangle\mapsto C^*(k\langle\tau_p\rangle)$. Lemma [Lemma 7](#lm3.1){reference-type="ref" reference="lm3.1"} implies that the diagram in Figure 1 is commutative.

\(ii\) The functor $F$ extends to a functor between the Drinfeld modules and the noncommutative tori via the formula: $$\label{eq3.2} Drin_A^{3g-3+n}(k)\cong \rho(A)\mapsto \mathbb{A}_p(S_{g,n})/I_{\alpha} \cong \mathbb{A}_{RM}^{6g-6+2n},$$ where $K_0(\mathbb{A}_{RM}^{6g-6+2n}) \cong K_0(\mathscr{A}_{RM}^{6g-6+2n})$.
\(iii\) Let us show that $F(Drin_A^{3g-3+n}(k))$ and $F(\widetilde{Drin}_A^{3g-3+n}(k))$ are homomorphic (isomorphic, resp.) noncommutative tori, whenever $Drin_A^{3g-3+n}(k)$ and $\widetilde{Drin}_A^{3g-3+n}(k)$ are isogenous (isomorphic, resp.) Drinfeld modules. Recall that a morphism of Drinfeld modules $f: Drin_A^{3g-3+n}(k)\to \widetilde{Drin}_A^{3g-3+n}(k)$ is an element $f\in k\langle\tau_p\rangle$ such that $f\rho_a=\widetilde{\rho_a} f$ for all $a\in A$. An isogeny is a surjective morphism with finite kernel. Let $I_{\alpha}'$ be an ideal generated by $\iota(f)$ and $I_{\alpha}$, where $\iota(f)$ is the image of the isogeny $f$ in the $C^*$-algebra $\mathbb{A}_p(S_{g,n})$, see Figure 1. The map $\mathbb{A}_p(S_{g,n})/I_{\alpha}\to \mathbb{A}_p(S_{g,n})/I_{\alpha}'$ is a homomorphism. In other words, one gets a homomorphism between the noncommutative tori $\mathscr{A}_{RM}^{6g-6+2n}$ and $\widetilde{\mathscr{A}}_{RM}^{~6g-6+2n}$. If $f$ is invertible (i.e. the kernel is trivial), one gets an isomorphism of the noncommutative tori. Lemma [Lemma 8](#lm3.2){reference-type="ref" reference="lm3.2"} is proved. ◻ Item (i) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} follows from Lemma [Lemma 8](#lm3.2){reference-type="ref" reference="lm3.2"} and $r=3g-3+n$. \(B\) Let us pass to item (ii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}. We split the proof in a series of lemmas. **Lemma 9**. *The generators $u_1,\dots, u_{2r}$ of the algebra $\mathscr{A}_{RM}^{2r}$ satisfy the following quadratic relations: $$\label{eq3.3} \left\{ \begin{array}{lll} u_2u_1 &=& \log ~(\varepsilon) ~e^{2\pi i\alpha_1} ~u_1u_2, \\ u_3u_2 &=& \log ~(\varepsilon) ~e^{2\pi i\alpha_2} ~u_2u_3, \\ \vdots && \\ u_{2r}u_{2r-1} &=& \log ~(\varepsilon) ~e^{2\pi i\alpha_{2r-1}} ~u_{2r-1}u_{2r}. \end{array} \right.$$* *Proof.* (i) Relations ([\[eq3.3\]](#eq3.3){reference-type="ref" reference="eq3.3"}) with $\log ~(\varepsilon) =1$ follow from formulas ([\[eq1.1\]](#eq1.1){reference-type="ref" reference="eq1.1"}) and ([\[eq1.2\]](#eq1.2){reference-type="ref" reference="eq1.2"}). Let us prove ([\[eq3.3\]](#eq3.3){reference-type="ref" reference="eq3.3"}) in the general case. \(ii\) Recall that the Tomita-Takesaki flow $\sigma_t: \mathbb{A}(S_{g,n})\to \mathbb{A}(S_{g,n})$ gives rise to a one-paramter group of automorphisms $\varphi^t: \mathbb{A}_{RM}^{2r}\to \mathbb{A}_{RM}^{2r}$ defined by the formula $\varphi^t(\mathbb{A}_{RM}^{2r})=\mathbb{A}(S_{g,n})/\sigma_t(I_{\alpha})$; we refer the reader to Section 2.2.1 and Theorem [Theorem 5](#thm2.2){reference-type="ref" reference="thm2.2"} for the notation and details. It is not hard to see, that the action of $\varphi^t$ extends to $\mathscr{A}_{RM}^{2r}$, where $K_0(\mathscr{A}_{RM}^{2r}) \cong K_0(\mathbb{A}_{RM}^{2r})$. \(iii\) To establish the commutation relations ([\[eq1.1\]](#eq1.1){reference-type="ref" reference="eq1.1"}) for $\varphi^t(\mathscr{A}_{RM}^{2r})$, we must solve the equation: $$\label{eq3.4} Tr~(\varphi^t(\mathscr{A}_{RM}^{2r}))=Tr' ~(\mathscr{A}_{RM}^{2r} ),$$ where $Tr$ is the trace of $C^*$-algebra. It is easy to see, that ([\[eq3.4\]](#eq3.4){reference-type="ref" reference="eq3.4"}) with $Tr'=t~Tr$ applied to ([\[eq1.1\]](#eq1.1){reference-type="ref" reference="eq1.1"}) gives the commutation relation for the $C^*$-algebra $\varphi^t(\mathscr{A}_{RM}^{2r})$ of the form $u_{k+1}u_k=t e^{2\pi i \alpha_k}u_ku_{k+1}$. 
Indeed, equation ([\[eq3.4\]](#eq3.4){reference-type="ref" reference="eq3.4"}) is equivalent to $Tr~(u_{k+1}u_k)=t ~Tr~(e^{2\pi i\alpha_k} u_ku_{k+1})=Tr~(te^{2\pi i\alpha_k}u_ku_{k+1})$. The latter is satisfied if and only if $u_{k+1}u_k=t e^{2\pi i \alpha_k}u_ku_{k+1}$. \(iv\) The Riemann surfaces $S_{g,n}$ and $\phi(S_{g,n})$ lie at the axis of a pseudo-Anosov automorphism $\phi\in Mod~(S_{g,n}))$ (Section 2.2.1). In view of formula ([\[eq2.6\]](#eq2.6){reference-type="ref" reference="eq2.6"}), the Connes invariant is equal to $\log\lambda_{\phi}$, where $\lambda_{\phi}$ is the dilatation of $\phi$. In other words, $\varphi^1(\mathscr{A}_{RM}^{2r})\cong \varphi^t(\mathscr{A}_{RM}^{2r})$ if and only if $t=\log\lambda_{\phi}$. \(v\) To express $\log\lambda_{\phi}$ in terms of $\alpha_i$, let $p(x)=x^m-a_{m-1}x^{m-1}-\dots-a_1x-a_0$ be a minimal polynomial of the number field $\mathbf{Q}(\alpha_i)$. Consider the integer matrix $$\label{eq3.5} B=\left( \begin{matrix} 0 & 0 &\dots &0& a_0\cr 1 & 0 &\dots &0& a_1\cr \vdots & \vdots & && \vdots\cr 0 & 0 & \dots & 1 & a_{m-1} \end{matrix} \right),$$ which one can always assume to be non-negative. It is well known that $p(x)=\det (B-xI )$ and we let $\varepsilon>1$ be the Perron-Frobenius eigenvalue of $B$. \(vi\) Recall that if $m=6g'-6+2n'$ then $B$ corresponds to the action of $\phi$ on the relative cohomology $H^1(S_{g',n'}; \partial S_{g',n'})$ of a surface $S_{g',n'}$ \[Thurston 1988\] [@Thu1 p. 427]. (Notice that in general $g'\ne g$ and $n'\ne n$, but $S_{g',n'}$ is a finite cover of the surface $S_{g,n}$.) In particular, $\lambda_{\phi}=\varepsilon$. Thus one gets the commutation relations ([\[eq1.1\]](#eq1.1){reference-type="ref" reference="eq1.1"}) in the form: $$\label{eq3.6} u_{k+1}u_k=(\log\varepsilon) ~e^{2\pi i \alpha_k}u_ku_{k+1}=e^{2\pi i \alpha_k+\log\log\varepsilon}u_ku_{k+1}.$$ Lemma [Lemma 9](#lm3.3){reference-type="ref" reference="lm3.3"} follows from ([\[eq3.6\]](#eq3.6){reference-type="ref" reference="eq3.6"}). ◻ **Lemma 10**. *$F(\Lambda_{\rho}[a])=\{e^{2\pi i\alpha_k+\log\log\varepsilon} ~|~1\le k\le 2r-1\}$.* *Proof.* (i) Recall that the Drinfeld module of rank $r=1$ can be thought as an elliptic curve $\mathcal{E}_{CM}\cong\mathbf{C}/O_k$ with complex multiplication by the ring of integers $O_k$ of an imaginary quadratic field $k$ given by the homomorphism ([\[eq1.3\]](#eq1.3){reference-type="ref" reference="eq1.3"}) of the form $\rho: O_k\to End ~\mathcal{E}_{CM}$, where $End ~\mathcal{E}_{CM}$ is the endomorphism ring of $\mathcal{E}_{CM}$ \[Drinfeld 1974\] [@Dri1 p. 594]. Moreover, the torsion submodule $\Lambda_{\rho}[a]$ consists of the coefficients of curve $\mathcal{E}_{CM}$, i.e. the values of the Weierstrass $\wp$-function at the torsion points of the lattice $O_k$; hence the name. \(ii\) When the rank of the Drinfeld module is $r\ge 1$, a substitute for the $\mathcal{E}_{CM}$ is provided by the noncommutative torus $\mathscr{A}_{RM}^{2r}$. The latter is given by the equations ([\[eq3.3\]](#eq3.3){reference-type="ref" reference="eq3.3"}) with the constant terms $e^{2\pi i\alpha_k+\log\log\varepsilon}$. We conclude therefore that $F(\Lambda_{\rho}[a])=\{e^{2\pi i\alpha_k+\log\log\varepsilon} ~|~1\le k\le 2r-1\}$. Lemma [Lemma 10](#lm3.4){reference-type="ref" reference="lm3.4"} is proved. ◻ Item (ii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} follows from Lemma [Lemma 10](#lm3.4){reference-type="ref" reference="lm3.4"} with $r=3g-3+n$. 
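To illustrate the constants entering Lemma [Lemma 10](#lm3.4){reference-type="ref" reference="lm3.4"} numerically, the following small Python sketch (ours; the minimal polynomial and the values of $\alpha_k$ below are sample data chosen purely for illustration) builds the non-negative matrix $B$ of ([\[eq3.5\]](#eq3.5){reference-type="ref" reference="eq3.5"}), computes its Perron-Frobenius eigenvalue $\varepsilon$, and evaluates the constants $e^{2\pi i\alpha_k+\log\log\varepsilon}$ of ([\[eq3.6\]](#eq3.6){reference-type="ref" reference="eq3.6"}).

```python
import numpy as np

# Sample minimal polynomial p(x) = x^3 - x^2 - x - 1 (the coefficients a_0, a_1, a_2
# are chosen only for illustration); B is the companion-type matrix of (3.5).
a = np.array([1.0, 1.0, 1.0])
m = len(a)
B = np.zeros((m, m))
B[1:, :-1] = np.eye(m - 1)        # subdiagonal of ones
B[:, -1] = a                      # last column a_0, ..., a_{m-1}

# Perron-Frobenius eigenvalue (the spectral radius) of the non-negative matrix B.
eps = float(np.max(np.abs(np.linalg.eigvals(B))))

# Sample values of alpha_k; the constants of (3.6) are e^{2*pi*i*alpha_k + log log eps}.
alphas = np.array([0.25, 0.40, 0.75])
constants = np.exp(2j * np.pi * alphas + np.log(np.log(eps)))
print(eps, constants)
```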
\(C\) Let us prove item (iii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}. Recall that Theorem [Theorem 6](#thm2.3){reference-type="ref" reference="thm2.3"} says that there exists a subgroup $G\subseteq GL_r(A/aA)$ acting transitively on the elements of the torsion submodule $\Lambda_{\rho}[a]$. By item (ii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} the action of $G$ extends to the set $F(\Lambda_{\rho}[a])=\{e^{2\pi i\alpha_k+\log\log\varepsilon} ~|~1\le k\le 2r-1\}$. Let $\mathbf{K}=\mathbf{Q}(e^{2\pi i\alpha_k+\log\log\varepsilon})$ be a number field and let $\mathbf{k}$ be the maximal subfield of $\mathbf{K}$ fixed by all elements of the group $G$. It is easy to see that $\mathbf{K}$ is a Galois extension of $\mathbf{k}$ and the Galois group $Gal~(\mathbf{K}|\mathbf{k})\cong G\subseteq GL_r(A/aA)$. Item (iii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} follows from the above remark. Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} is proved. ## Proof of Corollary [Corollary 2](#cor1.2){reference-type="ref" reference="cor1.2"} {#proof-of-corollary-cor1.2} We split the proof in two lemmas. **Lemma 11**. *If $\mathbf{k}\subset\mathbf{C}$, then $Gal \left( \mathbf{k}\left(e^{2\pi i\alpha_k+\log\log\varepsilon}\right) | ~\mathbf{k} \right)\cong G\subseteq GL_r(A/aA)$.* *Proof.* If $\mathbf{k}\subset\mathbf{C}$, then $\mathbf{K}$ is a Galois extension of $\mathbf{k}$ only if its generators are complex numbers $\mathbf{C}-\mathbf{R}$. We can now apply item (iii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} which says that $Gal \left( \mathbf{k}\left(e^{2\pi i\alpha_k+\log\log\varepsilon}\right) | ~\mathbf{k} \right)\cong G\subseteq GL_r(A/aA)$. Lemma [Lemma 11](#lm3.5){reference-type="ref" reference="lm3.5"} is proved. ◻ **Lemma 12**. *If $\mathbf{k}\subset\mathbf{R}$, then $Gal \left( \mathbf{k}\left(\cos 2\pi\alpha_k \times\log\varepsilon\right) | ~\mathbf{k} \right)\cong G\subseteq GL_r(A/aA)$.* *Proof.* (i) If $\mathbf{k}\subset\mathbf{R}$, then $\mathbf{K}$ is a Galois extension of $\mathbf{k}$ only if its generators are real numbers $\mathbf{R}$. Thus one cannot apply directly item (iii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}, but the following construction is used. Recall that a real $C^*$-algebra is a Banach $^*$-algebra $\mathscr{A}$ over $\mathbf{R}$ isometrically $^*$-isomorphic to a norm-closed $^*$-algebra of bounded operators on a real Hilbert space, see \[Rosenberg 2016\] [@Ros1] for an excellent introduction. Given a real $C^*$-algebra $\mathscr{A}$, its complexification $\mathscr{A}_{\mathbf{C}} = \mathscr{A} + i\mathscr{A}$ is a complex $C^*$-algebra. Conversely, given a complex $C^*$-algebra $\mathscr{A}_{\mathbf{C}}$, whether there exists a real $C^*$-algebra $\mathscr{A}$ such that $\mathscr{A}_{\mathbf{C}} = \mathscr{A} + i\mathscr{A}$ is unknown in general. However, the noncommutative torus admits the unique canonical decomposition: $$\label{eq3.7} \mathscr{A}_{\Theta}^m=(\mathscr{A}_{\theta}^m)^{Re}+i(\mathscr{A}_{\Theta}^m)^{Re}.$$ \(ii\) Let $\mathscr{A}_{RM}^{2r}=(\mathscr{A}_{RM}^{2r})^{Re}+i(\mathscr{A}_{RM}^{2r})^{Re}$. 
It is easy to verify that the commutation relations ([\[eq3.3\]](#eq3.3){reference-type="ref" reference="eq3.3"}) for the real $C^*$-algebra $(\mathscr{A}_{RM}^{2r})^{Re}$ have the form: $$\label{eq3.8} u_{k+1}u_k=Re \left(e^{2\pi i \alpha_k+\log\log\varepsilon}\right)u_ku_{k+1}= \left(\cos 2\pi\alpha_k \times\log\varepsilon\right)u_ku_{k+1}.$$ One concludes from ([\[eq3.8\]](#eq3.8){reference-type="ref" reference="eq3.8"}) that $\mathbf{K}=\mathbf{k}(\cos 2\pi\alpha_k \times\log\varepsilon)$, where $\mathbf{k}\subset\mathbf{R}$. In view of item (iii) of Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"}, we have $Gal \left( \mathbf{k}\left(\cos 2\pi\alpha_k \times\log\varepsilon\right) | ~\mathbf{k} \right)\cong G\subseteq GL_r(A/aA)$. This argument finishes the proof of Lemma [Lemma 12](#lm3.6){reference-type="ref" reference="lm3.6"}. ◻

Corollary [Corollary 2](#cor1.2){reference-type="ref" reference="cor1.2"} follows from Lemmas [Lemma 11](#lm3.5){reference-type="ref" reference="lm3.5"} and [Lemma 12](#lm3.6){reference-type="ref" reference="lm3.6"}.

# Langlands program for noncommutative tori

The Langlands Program predicts an analytic solution to non-abelian class field theory based on the $L$-functions associated to the irreducible representations of the algebraic groups over adeles. Namely, the $n$-dimensional Artin $L$-function $L(s,\sigma_n)$ \[Artin 1924\] [@Art1] is conjectured to coincide with an $L$-function $L(s,\pi)$, where $\pi$ is a representation of the adelic algebraic group $GL(n)$ \[Langlands 1978\] [@Lan1].

The $L$-functions $L(s, \mathscr{A}_{RM}^{2n})$ of the even dimensional noncommutative tori $\mathscr{A}_{RM}^{2n}$ were introduced in [@Nik1], see also [@N Section 6.6]. It was proved that the Artin $L$-function $L(s,\sigma_n)\equiv L(s, \mathscr{A}_{RM}^{2n})$ if $n=0$ or $n=1$, and conjectured to be true for all $n\ge 2$ and a suitable choice of the $\mathscr{A}_{RM}^{2n}$ [@N Conjecture 6.6.1]. Recall that the Artin $L$-functions are well defined for the non-abelian groups $G$ \[Artin 1924\] [@Art1]. On the other hand, the Galois extensions with the group $G$ are classified by the noncommutative tori $\mathscr{A}_{RM}^{2n}$ (Theorem [Theorem 1](#thm1.1){reference-type="ref" reference="thm1.1"} and Corollary [Corollary 2](#cor1.2){reference-type="ref" reference="cor1.2"}).

**Conjecture 13**. Let $\mathscr{A}_{RM}^{2n}$ be a noncommutative torus underlying the extension ([\[eq1.5\]](#eq1.5){reference-type="ref" reference="eq1.5"}) with the Galois group $G$. Let $\sigma_n: G\to GL_n(\mathbf{C})$ be a representation of $G$ and $L(s,\sigma_n)$ be the corresponding Artin $L$-function. Then for all $n\ge 1$ the algebra $\mathscr{A}_{RM}^{2n}$ satisfies the identity: $$\label{eq4.1} L(s,\sigma_n)\equiv L(s, \mathscr{A}_{RM}^{2n}).$$

E. Artin, *Über eine neue Art von $L$-Reihen*, Abhandlungen aus dem Mathematischen Seminar der Hamburgischen Universität, **3** (1924), 89-108.

F. P. Boca, *An AF-algebra associated with the Farey tessellation*, Canad. J. Math. **60** (2008), 975-1000.

V. G. Drinfeld, *Ellipticheskie moduli*, Matem. Sbornik **94** (1974), 594-627.

R. H. Herman, I. F. Putnam and C. F. Skau, *Ordered Bratteli diagrams, dimension groups and topological dynamics*, Internat. J. Math. **3** (1992), 827-864.

R. P. Langlands, *$L$-functions and automorphic representations*, Proceedings of the ICM 1978, Helsinki, 1978, pp. 165-175.
X. Li, *Semigroup $C^*$-algebras*, Operator algebras and applications -- the Abel Symposium 2015, 191-202, Abel Symp., 12, Springer, 2017.

I. V. Nikolaev, *Langlands reciprocity for the even dimensional noncommutative tori*, Proc. Amer. Math. Soc. **139** (2011), 4153-4162; arXiv:1004.0904.

I. V. Nikolaev, *On cluster $C^*$-algebras*, J. Funct. Spaces **2016**, Article ID 9639875, 8 p. (2016).

I. V. Nikolaev, *On algebraic values of function $\exp~(2\pi ix +\log\log y)$*, Ramanujan J. **47** (2018), 417-425.

I. V. Nikolaev, *Noncommutative Geometry*, Second Edition, De Gruyter Studies in Math. **66**, Berlin, 2022.

O. Ore, *On a special class of polynomials*, Trans. Amer. Math. Soc. **35** (1933), 559-584.

M. A. Rieffel, *Non-commutative tori -- a case study of non-commutative differentiable manifolds*, Contemp. Math. **105** (1990), 191-211.

M. Rosen, *Number Theory in Function Fields*, GTM **210**, Springer, 2002.

J. Rosenberg, *Structure and applications of real $C^*$-algebras*, Contemp. Math. **671** (2016), 235-258.

W. P. Thurston, *On the geometry and dynamics of diffeomorphisms of surfaces*, Bull. Amer. Math. Soc. **19** (1988), 417-431.

L. K. Williams, *Cluster algebras: an introduction*, Bull. Amer. Math. Soc. **51** (2014), 1-26.

B. Yalkinoglu, *On arithmetic models and functoriality of Bost-Connes systems*, Invent. Math. **191** (2013), 383-425.
--- abstract: | We study the classic cross approximation of matrices based on the maximal volume submatrices. Our main results consist of an improvement of a classic estimate for matrix cross approximation and a greedy approach for finding the maximal volume submatrices. Indeed, we present a new proof of a classic estimate of the inequality with an improved constant. Also, we present a family of greedy maximal volume algorithms which improve the error bound of cross approximation of a matrix in the Chebyshev norm and also improve the computational efficiency of classic maximal volume algorithm. The proposed algorithms are shown to have theoretical guarantees of convergence. Finally, we present two applications: one is image compression and the other is least squares approximation of continuous functions. Our numerical results in the end of the paper demonstrate the effective performances of our approach. author: - "Kenneth Allen[^1]" - "Ming-Jun Lai [^2]" - "Zhaiming Shen[^3]" title: Maximal Volume Matrix Cross Approximation for Image Compression and Least Squares Solution --- **Key words.** Matrix Cross Approximation, Low Rank Integer Matrix, Greedy Maximal Volume Algorithm, Matrix Completion and Compression, Least Squares Approximation\ \ **AMS subject classifications.** 65F55, 15A83, 15A23, 05C30 # Introduction Low rank matrix approximation has been one of the vital topics of research for decades. It has a wide range of applications in the areas such as collaborative filtering [@Bennett07; @Goldberg92], signal processing [@Cai20; @Cai19; @Candes07], matrix completion [@Cai23; @Candes12; @Wang15], data compression [@Fazel08; @Mitrovic13], just to name a few. One obvious way to obtain such an approximation is to use singular value decomposition (SVD), which has been the most commonly used technique in low rank approximation. However, the drawbacks of SVD are that it can only express data in some abstract basis (e.g. eigenspaces) and hence may lose the interpretability of the physical meaning of the rows and columns of a matrix. Also the computation of eigen-decomposition of matrices can be burdensome for large-scale problems. It is also common to use the low rankness for compressing images. For example, if a black-and-white image is of rank 1, we only need to use one column and one row to represent this image instead of using the entire image. Although one can use SVD to reduce a matrix into small blocks, the integer property of the image will lose. That is, the decomposed image is no longer of integer entries which will take more bits for an encoder to compress it. It is desired to have a low rank integer matrix to approximate an integer image. One way to do it is to use the so-called matrix cross approximation as explained in this paper. Another application of the matrix cross approximation is to solve the least squares approximation problem which is commonly encountered in practice. More precisely, let us say we need to solve the following least squares problem: $$\label{lsq} \min_{\mathbf{x}}\|A\mathbf{x}-\mathbf{b}\|_2$$ for a given matrix $A$ of size $m\times n$ and the right-hand side $\mathbf{b}$. When $m\gg n$ is very large and/or the singular value $\sigma_n(A)\approx 0$, one way to simplify the computation is to approximate $A$ by its cross approximation and then solve a much smaller size least squares problem or a better conditioned least squares problem. 
This technique is particularly useful when we deal with multiple least squares tasks with the same data matrix and different response vectors. It helps to speed up the computation without sacrificing the interpretability of the least squares parameters.

Let us recall the cross approximation as follows. Given a matrix $A \in \mathbb{R}^{m\times n}$, letting $I = \{i_1,\cdots, i_k\}$ and $J = \{j_1, \cdots, j_k\}$ be row and column indices, we denote by $A_{I,:}$ the submatrix of all rows of $A$ with row indices in $I$ and by $A_{:,J}$ the submatrix of all columns of $A$ with column indices in $J$. Assume that the submatrix $A_{I,J}$ situated on rows $I$ and columns $J$ is nonsingular. Then $$\label{skeleton} A_{:,J} A_{I,J}^{-1} A_{I,:} \approx A$$ is called a skeleton or cross approximation of $A$. The left-hand side of ([\[skeleton\]](#skeleton){reference-type="ref" reference="skeleton"}) is also called a CUR decomposition in the literature. Note that if $\#(I)=\#(J)=1$, it is the classic skeleton approximation. There is a large body of work on the cross approximation and CUR decomposition of matrices. See [@GH17], [@Goreinov97], [@GOSTZ08], [@GT01], [@Hamm20], [@KS16], [@Mahoney09], [@MO18], and the literature therein. The study is also generalized to the setting of tensors. See, e.g. [@Cai21], [@Mahoney06] and [@OT10]. There are different strategies to obtain such an approximation or decomposition in the literature. These include volume optimization [@Goreinov97; @GOSTZ08; @GT01], random sampling [@Cai23; @Hamm20b], leverage score computation [@Boutsidis14; @Drineas08; @Mahoney09], and empirical interpolation [@Chaturantabut10; @Drmac16; @Sorensen16]. The approach we will be focusing on falls into the first category.

To state the main results in the paper, let us recall the definition of the Chebyshev norm. For any matrix $A=[a_{ij}]_{1\leq i\leq m, 1\leq j\leq n}$, $\|A\|_C=\max_{i,j}|a_{ij}|$ is called the Chebyshev norm of the matrix $A$. Also, the volume of a square matrix $A$ is just the absolute value of the determinant of $A$. We start with an error estimate of the matrix cross approximation from [@GT01].

**Theorem 1** (cf. [@GT01]). *Let $A$ be a matrix of size $m\times n$. Fix $r \in [1, \min\{m,n\})$. Suppose that $I \subset \{1,\cdots,m\}$ with $\#(I)=r$ and $J\subset \{1, \cdots, n\}$ with $\#(J)=r$ such that $A_{I,J}$ has the maximal volume among all $r \times r$ submatrices of $A$. Then the Chebyshev norm of the residual matrix satisfies the following estimate: $$\label{skapp} \|A - A_r\|_C \le (r + 1)\sigma_{r+1}(A),$$ where $A_r= A_{:,J} A_{I,J}^{-1} A_{I,:}$ is the $r$-cross approximation of $A$ and $\sigma_{r+1}(A)$ is the $(r+1)^{th}$ singular value of $A$.*

The above result was established in [@GT01] by using the properties of polynomial interpolation. We are very interested in giving another proof by using properties of matrices and a recent result about the determinant of a matrix (cf. [@GH17]). It turns out that we are able to establish a similar estimate to ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}) and, in fact, improve the error bound from ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}) to ([\[myskapp\]](#myskapp){reference-type="ref" reference="myskapp"}). It is clear that a submatrix with the maximal volume as described in Theorem [Theorem 1](#GT01){reference-type="ref" reference="GT01"} is hard to find. There are a few algorithms available in the literature.
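Before discussing algorithms, the following minimal numpy sketch (ours; the brute-force search over all $r\times r$ submatrices is feasible only for tiny matrices and is used purely for illustration) shows the cross approximation and the bound of Theorem [Theorem 1](#GT01){reference-type="ref" reference="GT01"} on a small random matrix.

```python
import numpy as np
from itertools import combinations

# Tiny illustration of Theorem 1: brute-force maximal volume submatrix.
rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
A = rng.standard_normal((m, n))

best, I_best, J_best = -np.inf, None, None
for I in combinations(range(m), r):
    for J in combinations(range(n), r):
        v = abs(np.linalg.det(A[np.ix_(I, J)]))
        if v > best:
            best, I_best, J_best = v, list(I), list(J)

A_r = A[:, J_best] @ np.linalg.solve(A[np.ix_(I_best, J_best)], A[I_best, :])
cheb_err = np.max(np.abs(A - A_r))
sigma = np.linalg.svd(A, compute_uv=False)[r]     # sigma_{r+1}(A)
print(cheb_err, (r + 1) * sigma)                  # observed error vs. the bound of Theorem 1
```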
In this paper, we propose a family of greedy algorithms to speed up the computation of submatrices with the maximal volume which has a theoretical guarantee of convergence. Finally in the paper, we present two new applications: one is for image compression and one is for least squares approximation. Indeed, when $A$ is an image with integer entries, one can encode $A_{I,:}, A_{:,J}$ instead of the entire image $A$ to send/store the image. After received integer matrices $A_{I,:}$ and $A_{:,J}$, one can decode them and compute according to the left-hand side of ([\[skeleton\]](#skeleton){reference-type="ref" reference="skeleton"}) which will be a good approximation of $A$ if $I$ and $J$ are appropriately chosen. We shall present a family of greedy algorithms for fast computing the maximal volume of submatrices of $A$ for finding these row and column indices $I$ and $J$. Their convergences and numerical performances will also be discussed in a later section. Next we consider a least squares problem of large size in ([\[lsq\]](#lsq){reference-type="ref" reference="lsq"}) with matrix $A$ which may not be full rank, i.e. $\sigma_n(A)=0$. If $\sigma_{r+1}(A)\approx 0$ for $r+1\leq n$ while $\sigma_r(A)$ is large enough, we can use the matrix cross approximation $A_{:,J}A_{I,J}^{-1}A_{I,:}$ to approximate $A$ with $\#(I)=\#(J)=r$. Then the least squares problem of large size can be solved by finding $$\min_{\hat{\mathbf{x}}}\|A_{:,J}A_{I,J}^{-1}A_{I,:}\mathbf{\hat{x}}-\mathbf{b}\|_2.$$ Note that letting $\hat{\mathbf{x}}$ be the new least squares solution, it is easy to see $$\|A_{:,J}A_{I,J}^{-1}A_{I,:}\mathbf{\hat{x}}-\mathbf{b}\|_2 \leq \|A\mathbf{x}_b-\mathbf{b}\|_2+\|(A_{:,J}A_{I,J}^{-1}A_{I,:}-A)\mathbf{x}_b\|_2,$$ where $\mathbf{x}_b$ is the original least squares solution. The first term on the right-hand side is the norm of the standard least squares residual vector while the second term is nicely bounded above if $A_{:,J}A_{I,J}^{-1}A_{I,:}$ is a good approximation of $A$, i.e., $\sigma_{r+1}(A)\approx 0$. We shall discuss more towards this aspect in section [5](#secNum){reference-type="ref" reference="secNum"}. # Improved Error Bounds We now see that the cross approximation of matrix is useful and the approximation in ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}) is very crucial. It is important to improve its estimate. Let us recall the following results to explain the cross decomposition, i.e., an estimate of the residual matrix $E= A - A_{:,J} A_{I,J}^{-1} A_{I,:}$. For $I=[i_1, \cdots, i_r]$ and $J=[j_1, \cdots, j_r]$, let $$\label{eij} \mathcal{E}_{ij} = \left[ \begin{matrix} A_{ij} & A_{i,j_1} & \cdots & A_{i, j_k}\cr A_{i_1,j} & A_{i_1,j_1} & \cdots & A_{i_1,j_k}\cr \vdots & \vdots & \ddots & \vdots \cr A_{i_k,j} & A_{i_k,j_1} & \cdots & A_{i_k,j_k}\cr \end{matrix} \right]$$ for all $i=1, \cdots, m$ and $j=1, \cdots, n$. The researchers in [@GH17] proved the following. **Theorem 2** (cf. [@GH17]). *Let $$\label{eq:SkeletonError} E= A - A_{:,J} A_{I,J}^{-1} A_{I,:}.$$ Then each entry $E_{ij}$ of $E$ is given by $$\label{key} E_{ij}= \frac{ {\rm det}\mathcal{E}_{ij}}{{\rm det}A_{I,J}}$$ for all $i=1, \cdots, m$ and $j=1, \cdots, n$.* Notice that $\mathcal{E}_{ij}=0$ for all $i\in I$ and for all $j\in J$. That is, the entries of $E$ are all zero except for $(i,j), i\not\in I$ and $j\not\in J$. Since the SVD approximation is optimal, it is easy to see that a cross approximation of a matrix is not as good as using SVD in Frobenius norm. 
However, some cross approximations can be as good as using SVD if $A_{I,J}$ has a maximal volume, which is defined by $$\label{vol} \hbox{vol}(A) =\sqrt{{\rm det}(A A^*)}$$ for any matrix $A$ of size $m\times n$ with $m\le n$. Note that the concept of the volume of a matrix defined in this paper is slightly different from the one in [@BI92], [@P00], and [@SG20]. In the literature mentioned previously, the volume is the product of all singular values of the matrix $A$. However, the values of the volume under the two definitions are the same. The researchers in [@P00] and [@SG20] used the concept of the maximal volume to reveal the rank of a matrix. We turn our interest to matrix approximation. The estimate in ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}) will be improved in this paper as follows.

**Theorem 3**. *Under the assumption in Theorem [Theorem 1](#GT01){reference-type="ref" reference="GT01"}, we have the following estimate: $$\label{myskapp} \|A - A_r\|_C \le \frac{(r + 1)}{\sqrt{1+ \frac{r \sigma^2_{r+1}(A)}{(r+1)\|A\|^2_2}} }\sigma_{r+1}(A).$$*

It is easy to see that the estimate in ([\[myskapp\]](#myskapp){reference-type="ref" reference="myskapp"}) is slightly better than ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}). For example, in principal component analysis (PCA), a matrix may have a big leading singular value while the rest of the singular values are very small, that is, $\sigma_1(A)\gg \sigma_k(A), k\ge 2$. For a matrix $A$, say $\sigma_2(A)$ is only half of $\sigma_1(A)$. Using the cross approximation with $r=1$, the estimate on the right-hand side of ([\[myskapp\]](#myskapp){reference-type="ref" reference="myskapp"}) is about $1.886\sigma_{2}(A)$, while the estimate on the right-hand side of ([\[skapp\]](#skapp){reference-type="ref" reference="skapp"}) is $2\sigma_2(A)$. Note that the computation of the cross approximation when $r=1$ is extremely simple, which is an advantage over the standard PCA; the latter requires the power method to find the leading singular value and two singular vectors.

From the result above, we can see that $A_r= A_{:,J} A_{I,J}^{-1}A_{I,:}$ is a good approximation of $A$, similar to the SVD approximation of $A$, when $A_{I,J}$ is a maximal volume submatrix and the rank $r$ is not too big. However, finding $A_{I,J}$ is not trivial when $r>1$, although it is easy to see that a maximal volume submatrix $A_{I,J}$ always exists since there are finitely many submatrices of size $r\times r$ in $A$ of size $n\times n$. As $n\gg 1$, the total number of choices of submatrices of size $r\times r$ grows combinatorially. Computing $A_{I,J}$ is an NP-hard problem (cf. [@CI09]). To help find such indices $I$ and $J$, the following concept of dominant matrices is introduced according to the literature (cf. [@GOSTZ08; @GT01; @MO18]).

**Definition 1**. *Let $A_{\square}$ be an $r \times r$ nonsingular submatrix of the given rectangular $n \times r$ matrix $A$, where $n>r$. $A_{\square}$ is dominant if all the entries of $A A^{-1}_{\square}$ are not greater than 1 in modulus.*

Two basic results on dominant matrices can be found in [@GOSTZ08].

**Lemma 1**. *For an $n \times r$ matrix $A$, a maximum volume $r \times r$ submatrix $A_\diamondsuit$ is dominant.*

On the other hand, a dominant matrix is not too far away from the maximal volume matrix $A_\diamondsuit$ when $r>0$ as shown in the following.

**Lemma 2**.
*For any $n \times r$ matrix $A$ with full rank, we have $$\label{lowbound} |{\rm det}(A_{\square})|\ge |{\rm det}(A_\diamondsuit)|/r^{r/2}$$ for all dominant $r \times r$ submatrices $A_\square$ of $A$, where $A_\diamondsuit$ is a submatrix of $r\times r$ with maximal volume.* We shall use a dominant matrix to replace the maximal volume matrix $A_\diamondsuit$. In fact, in [@GT01], the researchers showed that if $A_{I,J}$ is a good approximation of $A_\diamondsuit$ with ${\rm det}(A_{I,J})=\nu {\rm det}(A_\diamondsuit)$ with $\nu\in (0,1)$, then $$\label{mjlai12062020} | A - A_{:,J} A_{I,J}^{-1} A_{I,:}|_\infty \le \frac{1}{\nu} (r+1)\sigma_{r+1}(A).$$ Our proof of Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} can yield the following **Theorem 4**. *Suppose that $A_{I,J}$ is a dominant matrix satisfying $|{\rm det}(A_{I,J})|=\nu |{\rm det}(A_\diamondsuit)|$ with $\nu\in (0,1]$. Then using the dominant matrix $A_{I,J}$ to form the cross approximation $A_r$, we have $$\label{myskapp2} \|A - A_r\|_C \le \frac{(r + 1)}{\nu \sqrt{1+ \frac{r \sigma^2_{r+1}(A)\nu^2}{(r+1)\|A\|^2_2}} }\sigma_{r+1}(A).$$* The proofs of Theorems [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} and [Theorem 4](#mjlai06082021){reference-type="ref" reference="mjlai06082021"} will be given in the next section. In addition to the introduction of dominant submatrix, the researchers in [@GOSTZ08] also explained a computational approach called maximal volume (maxvol) algorithm and several improvements of the algorithm were studied. # Proof of Theorems [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} and [Theorem 4](#mjlai06082021){reference-type="ref" reference="mjlai06082021"} {#proof-of-theorems-mjlai06062021-and-mjlai06082021} Let us begin with some elementary properties of the Chebyshev norm of matrices. Recall that for any matrix $A$ of size $(r+1)\times(r+1)$, we have $\|A\|_F=\sqrt{\sum_{i=1}^{r+1}\sigma_i^2(A)}$. Hence, $$\|A\|_F\geq \sqrt{r+1}\sigma_{r+1}(A) \quad\text{and}\quad \|A\|_F\leq \sqrt{r+1}\sigma_{1}(A).$$ On the other hand, $\|A\|_2\leq \|A\|_F\leq\sqrt{(r+1)^2\|A\|^2_C}=(r+1)\|A\|_C$. It follows that **Lemma 3**. *For any matrix $A$ of size $m\times n$, $$\sigma_1(A)\leq (r+1)\|A\|_C \quad \text{and}\quad \sigma_{r+1}(A)\leq \sqrt{r+1}\|A\|_C$$* Let us first prove Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"}. *Proof.* \[Proof of Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"}\] From ([\[key\]](#key){reference-type="ref" reference="key"}), we see that $$\label{startpt1} \|E\|_C = \max_{i,j} \frac{|{\rm det}(\mathcal{E}_{ij})|}{|{\rm det}(A_{I,J})|}.$$ We first note that the determinant ${\rm det}(\mathcal{E}_{ij})$ can be easily found. Indeed, let us recall the following *Schur determinant identity*: **Lemma 4**. *Let $A, B, C$, and $D$ be block sub-matrices of $M$ with $A$ invertible. Then $${\rm det}( \begin{bmatrix} A & B \\ C & D \end{bmatrix}) = {\rm det}(A) {\rm det}(D-CA^{-1}B).$$ In particular, when $A$ is an $r \times r$, $B$ and $C$ are vectors, and $D$ is a scalar, $${\rm det}( \begin{bmatrix} A & B \\ C & D \end{bmatrix}) = (D-CA^{-1}B){\rm det}(A).$$* *Proof.* Note that $\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} A & 0 \\ C & I \end{bmatrix} \begin{bmatrix} I & A^{-1}B \\ 0 & D-CA^{-1}B \end{bmatrix}$. Taking the determinant we get the desired result. 
$\Box$ By using Lemma [Lemma 4](#det){reference-type="ref" reference="det"} and ([\[eij\]](#eij){reference-type="ref" reference="eij"}), we have $$\label{result1} \|E\|_C= \max_{i,j} |{\rm det}( A_{ij}- A_{I,j}^\top A_{I,J}^{-1} A_{i,J})| =\max_{i,j} |A_{ij}- A_{I,j}^\top A_{I,J}^{-1} A_{i,J}|.$$ Next we need to estimate the right-hand side of ([\[result1\]](#result1){reference-type="ref" reference="result1"}) when $|{\rm det}(A_{I,J})|$ is largest. To do so, we consider $\mathcal{E}_{ij}^{-1}$. Then it can be written as $$\mathcal{E}_{ij}^{-1} =\frac{1}{{\rm det}(\mathcal{E}_{ij})} [ \mathcal{E}_{ij}^*],$$ where $\mathcal{E}_{ij}^*$ stands for the cofactors matrix of $\mathcal{E}_{ij}$. It is easy to see that $\|\mathcal{E}_{ij}^{-1}\|_C = |{\rm det}(A_{I,J})|/|{\rm det}(\mathcal{E}_{ij})|$ by the definition of the cofactors matrix and the ${\rm det}(A_{I,J})$ is the largest. By using Lemma [Lemma 4](#det){reference-type="ref" reference="det"}, we have $$\label{result2} \|\mathcal{E}_{ij}^{-1}\|_C = \frac{1}{|A_{ij}- A_{I,j}^\top A_{I,J}^{-1} A_{i,J}|}.$$ Next recall that for any matrix $A$ of size $(r+1)\times (r+1)$, we have $\|A\|_F = \sqrt{\sum_{i=1}^{r+1}\sigma_i^2(A)}$ and hence, $$\label{resultn1} \|A\|_F \ge \sqrt{r+1}\sigma_{r+1}(A) \hbox{ and } \|A\|_F\le \sqrt{r+1}\sigma_1(A).$$ On the other hand, $\|A\|_2\le \|A\|_F \le \sqrt{ (r+1)^2 \|A\|_C^2} = (r+1)\|A\|_C.$ It follows $$\label{result3} \sigma_{1}(A)\le (r+1) \|A\|_C \hbox{ and } \sigma_{r+1}(A)\le \sqrt{r+1}\|A\|_C.$$ Furthermore, we have **Theorem 5**. *Let $B$ be a matrix of size $m\times n$ with singular values $s_1\ge s_2\ge \cdots \ge s_m$ and let $C=AB$ with singular values $t_1\ge \cdots \ge t_m$. Then $$t_i\le s_i \|A\|_2, \quad i=1, \cdots, n$$* *Proof.* Consider the eigenvalues of $C^\top C= B^\top A^\top A B$. Let $A^\top A= Q \Sigma^2 Q^\top$ be the spectral decomposition of $A^\top A$. Let $D=Q(\|A\|_2^2 I- \Sigma^2)Q^\top$. Then $D$ is positive semi-definite. Since $D= \|A\|_2 I - A^\top A$, we have $$\|A\|_2^2 B^\top B = B^\top (A^\top A+ D)B= C^\top C+ B^\top D B\ge C^\top C.$$ It follows that the eigenvalue $t_i^2$ of $C^\top C$ is less than or equal to $\|A\|_2^2 s_i^2$ for each $i$. $\Box$ As $I_{r+1}= \mathcal{E}_{ij}\mathcal{E}_{ij}^{-1}$, we use Theorem [Theorem 5](#C=AB){reference-type="ref" reference="C=AB"} to get $$\begin{aligned} 1 &\le& \sigma_{r+1} (\mathcal{E}_{ij}) \|\mathcal{E}_{ij}^{-1}\|_2 \le \sigma_{r+1} (\mathcal{E}_{ij}) \|\mathcal{E}_{ij}^{-1}\|_F \le \sigma_{r+1} (\mathcal{E}_{ij}) (r+1)\|\mathcal{E}_{ij}^{-1}\|_C. \end{aligned}$$ It follows $$\frac{1}{\|\mathcal{E}_{ij}^{-1}\|_C} \le (r+1) \sigma_{r+1}(\mathcal{E}_{ij})$$ Combining with the result in ([\[result2\]](#result2){reference-type="ref" reference="result2"}), we have $$|A_{ij}- A_{I,j}^\top A_{I,J}^{-1} A_{i,J}|\le (r+1) \sigma_{r+1}(\mathcal{E}_{ij}).$$ Using ([\[result1\]](#result1){reference-type="ref" reference="result1"}), we finish a proof of Theorem [Theorem 1](#GT01){reference-type="ref" reference="GT01"} since $\sigma_{r+1}(\mathcal{E}_{ij}) \le \sigma_{r+1}(A)$. 
Again, since $I_{r+1}= \mathcal{E}_{ij}\mathcal{E}_{ij}^{-1}$, we use Theorem [Theorem 5](#C=AB){reference-type="ref" reference="C=AB"} to get $$\begin{aligned} 1 &\le& \sigma_{r+1} (\mathcal{E}_{ij}) \|\mathcal{E}_{ij}^{-1}\|_2 = \sigma_{r+1} (\mathcal{E}_{ij}) \|\mathcal{E}_{ij}^{-1}\|_F \sqrt{1- \frac{\sum_{k=2}^{r+1} \sigma_k^2(\mathcal{E}_{ij}^{-1})}{\sum_{k=1}^{r+1}\sigma_k^2(\mathcal{E}_{ij}^{-1})}} \cr &\le & \sigma_{r+1}(A) (r+1)\|\mathcal{E}_{ij}^{-1}\|_C \sqrt{1- \frac{r \sigma_{r+1}^2(\mathcal{E}_{ij}^{-1})}{(r+1)\sigma_1^2(\mathcal{E}_{ij}^{-1})}}.\end{aligned}$$ We now estimate $\sigma_k(\mathcal{E}_{ij}^{-1})$ for $k=1, r+1$. For convenience, let us write $L=\mathcal{E}_{ij}^{-1}$ and $\sigma_1(L)= \max \sqrt{ \lambda_i(L^\top L)}$ and $\sigma_{r+1}(L)=\min\sqrt{ \lambda_i(L^\top L)}$, where $\lambda_i(L^\top L)$ is the ith eigenvalue of $L^\top L$. As $L$ is invertible in this case and $L^\top L$ is symmetric, we have $$\sigma_1(L)= \frac{1}{\min \sqrt{ \lambda_i((L^\top L)^{-1})}} \hbox{ and } \sigma_{r+1}(L)=\frac{1}{\max\sqrt{ \lambda_i((L^\top L)^{-1})}}.$$ Since $L^{-1}=\mathcal{E}_{ij}$, we have $$\sigma_1(L) = \frac{1}{\sigma_{r+1}(\mathcal{E}_{ij})} \hbox{ and } \sigma_{r+1}(L)=\frac{1}{\sigma_1(\mathcal{E}_{ij})}\ge \frac{1}{\sigma_1(A)}.$$ So we have $$1- \frac{r \sigma_{r+1}^2(\mathcal{E}_{ij}^{-1})}{(r+1)\sigma_1^2(\mathcal{E}_{ij}^{-1})} = 1 - \frac{r \sigma^2_{r+1}(\mathcal{E}_{ij})}{(r+1)\sigma^2_1(\mathcal{E}_{ij})} \le 1 - \frac{r \sigma^2_{r+1}(\mathcal{E}_{ij})}{(r+1)\|A\|^2} .$$ Now we recall the proof of Theorem [Theorem 1](#GT01){reference-type="ref" reference="GT01"} above and note that we in fact have $$\|E\|_C \le \max_{ij} \sigma_{r+1}(\mathcal{E}_{ij}) (r+1).$$ Summarizing the discussions above, we consider the pair of index $(i,j)$ such that $\|E\|_C \le \sigma_{r+1}(\mathcal{E}_{ij}) (r+1)$ and hence, $$\begin{aligned} \|E\|_C^2 &\le& (r+1)^2 \sigma_{r+1}^2(A) \left( 1 - \frac{r \sigma^2_{r+1}(\mathcal{E}_{ij})}{(r+1)\|A\|^2}\right)\cr &\le& (r+1)^2 \sigma_{r+1}^2(A) \left(1 - \frac{r \|E\|^2_C }{(r+1)^3\|A\|_2}\right) \end{aligned}$$ Solving $\|E\|_C^2$ from the above inequality, we have $$\begin{aligned} \|E\|_C \le \frac{r+1}{\sqrt{1+ \frac{r \sigma^2_{r+1}(A)}{(r+1)\|A\|^2_2}}} \sigma_{r+1}(A)\end{aligned}$$ which finishes the proof. $\Box$ *Proof.* \[Proof of Theorem [Theorem 4](#mjlai06082021){reference-type="ref" reference="mjlai06082021"}\] Carefully examining the steps of the proof above yields the desired result of Theorem [Theorem 4](#mjlai06082021){reference-type="ref" reference="mjlai06082021"}. $\Box$ # Greedy Maximum Volume Algorithms {#secAlg} The aim of this section is to find a dominant submatrix which is close to the submatrix with maximal volume efficiently. We shall introduce a greedy step to improve the performance of standard maximal volume algorithm introduced in [@GOSTZ08]. We start with the maxvol algorithm described in [@GOSTZ08] for finding an approximation to a dominant $r \times r$ submatrix of an $n \times r$ matrix $M$. A detailed discussion of greedy maximum volume algorithms can also be found in [@A19]. The maximal volume algorithm [\[maxvolalg\]](#maxvolalg){reference-type="ref" reference="maxvolalg"} generates a sequence of increasing volumes of submatrices. In other words, we have the following theorem. **Theorem 6** (cf. [@GOSTZ08]). *The sequence $\{v_l\} = \{{\rm vol}(A_l)\}$ is increasing and is bounded above. 
Hence, Algorithm [\[maxvolalg\]](#maxvolalg){reference-type="ref" reference="maxvolalg"} converges.*

*Proof.* It is easy to understand that ${\rm vol}(A_l)$ is increasing. Since the determinants of all submatrices of $A$ are bounded above by Hadamard's inequality, the sequence is convergent and hence the algorithm will terminate and produce a good approximation of a dominant submatrix. $\Box$

We now generalize Algorithm [\[maxvolalg\]](#maxvolalg){reference-type="ref" reference="maxvolalg"} to find an $r \times r$ dominant submatrix of an $n \times m$ matrix $M$ by searching for the largest in modulus entry of both $B_l = M(:,J_l)A_l^{-1}$ and $C_l = A_l^{-1}M(I_l,:)$ at each step. However, Algorithm [\[2dmvalg\]](#2dmvalg){reference-type="ref" reference="2dmvalg"} requires two backslash operations at each step. To simplify this to one backslash operation at each step, we may consider an alternating maxvol algorithm such as Algorithm [\[alg3\]](#alg3){reference-type="ref" reference="alg3"}, where we alternate between swapping rows and swapping columns. Note that this converges to a dominant submatrix because the sequence ${\rm vol}(A_l)$ is again increasing and bounded above.

We now introduce a greedy strategy to reduce the number of backslash operations needed to find a dominant submatrix by swapping more rows at each step, which will be called a greedy maxvol algorithm. The greedy maxvol algorithm is similar to the maxvol algorithm except that instead of swapping one row every iteration, we may swap two or more rows. First, we will describe the algorithm for swapping at most two rows of an $n \times r$ matrix, which we will call $2$-greedy maxvol. Given an $n \times r$ matrix $M$, an initial $r \times r$ submatrix $A_0$ such that ${\rm det}(A_0) \neq 0$, and a tolerance $\epsilon > 0$, we use Algorithm [\[gmvalg\]](#gmvalg){reference-type="ref" reference="gmvalg"} to find a dominant matrix. To prove that Algorithm [\[gmvalg\]](#gmvalg){reference-type="ref" reference="gmvalg"} converges, recall Hadamard's inequality. For an $n \times n$ matrix $M$, we have: $$\label{hadamard} |{\rm det}(M)| \leq \prod_{i=1}^n ||M(:,i)||.$$ With the estimate above, it follows

**Theorem 7**. *The sequence $\{|{\rm det}(A_\ell)|, \ell\ge 1\}$ from Algorithm [\[gmvalg\]](#gmvalg){reference-type="ref" reference="gmvalg"} is increasing and is bounded above by $\prod_{i=1}^n ||M(:,i)||$. Therefore, Algorithm [\[gmvalg\]](#gmvalg){reference-type="ref" reference="gmvalg"} converges.*

*Proof.* Note that multiplication by an invertible matrix does not change the ratio of determinants of pairs of corresponding sub-matrices. Suppose we only swap one row. Then $\frac{{\rm det}(A_{l+1})}{{\rm det}(A_l)} = \frac{b_{i_1j_1}}{{\rm det}(I)} = b_{i_1j_1}$. Therefore, as $|b_{i_1j_1}| > 1$, we have $|{\rm det}(A_{l+1})| > |{\rm det}(A_l)|$. Now suppose we swap two rows. Then, after a permutation of rows, the submatrix of $B_l$ corresponding to $A_{l+1}$ is $\begin{bmatrix} B'_l & R \\ 0 & I \end{bmatrix}$ in block form, which has determinant ${\rm det}(B'_l)$. Therefore, similarly to before, we have that ${\rm det}(A_{l+1}) = {\rm det}(B'_l) {\rm det}(A_l)$, and since $|{\rm det}(B'_l)| > |b_{i_1j_1}| > 1$, we have that $|{\rm det}(A_{l+1})| > |{\rm det}(A_l)|$, so $|{\rm det}(A_l)|$ is an increasing sequence and bounded above. $\Box$

Note that when we swap two rows, the ratio between $|{\rm det}(A_{l+1})|$ and $|{\rm det}(A_l)|$ is maximized when $|{\rm det}(B'_l)|$ is maximized.
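Before turning to the general $h$-greedy step below, the following minimal sketch (in Python/numpy; ours, with the initialization of the starting index set and other safeguards omitted) records the basic one-row-swap maxvol iteration for an $n\times r$ matrix described above.

```python
import numpy as np

def maxvol_rows(M, I, tol=1e-8, max_iter=100):
    """Basic one-row-swap maxvol iteration on an n x r matrix M.

    I is a list of r starting row indices with det(M[I, :]) != 0.
    Returns row indices of an (approximately) dominant r x r submatrix.
    """
    I = list(I)
    for _ in range(max_iter):
        B = np.linalg.solve(M[I, :].T, M.T).T          # B = M * inv(M[I, :])
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= 1 + tol:                    # all entries <= 1: dominant
            break
        I[j] = i                                       # replace the j-th selected row by row i of M
    return I
```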
For a more general algorithm, we search for a largest element in each of the $r$ columns of $B_l$ and order them as $|b_{i_1j_1}| \geq |b_{i_2j_2}| \geq \dots \geq |b_{i_rj_r}|$. We'll call $b_k := b_{i_kj_k}$. Let $B_l^{(k)} = \begin{bmatrix} b_{i_1j_1} & b_{i_1j_2} & \dots & b_{i_1j_k} \\ b_{i_2j_1} & b_{i_2j_2} & \dots & b_{i_2j_k} \\ \vdots & \vdots & & \vdots \\ b_{i_kj_1} & b_{i_kj_2} & \dots & b_{i_kj_k} \end{bmatrix}$. Then we replace the $j_{k+1}$th row of $A_l$ with the $i_{k+1}$th row of $M$ if $|{\rm det}(\begin{bmatrix} B_l^{(k)} & x_k \\ y_k & b_{k+1} \end{bmatrix})| \geq |{\rm det}( B_l^{(k)})|$, where $x_k = \begin{bmatrix} b_{i_1j_{k+1}} \\ b_{i_2j_{k+1}} \\ \vdots \\ b_{i_{k}j_{k+1}} \end{bmatrix}$ and $y_k = \begin{bmatrix} b_{i_{k+1}j_1} & b_{i_{k+1}j_2} & \dots & b_{i_{k+1}j_{k}} \end{bmatrix}$. Using the Schur determinant formula for the determinant of block matrices, this condition is equivalent to the condition that $|b_{k+1} - y_k[B_l^{(k)}]^{-1}x_k| \geq 1$. In particular, if $|b_{k+1}| > 1$ and $\hbox{sign}(b_{k+1}) \neq \hbox{sign}(y_k[B_l^{(k)}]^{-1}x_k)$, then the condition will be met. The general $h$-greedy maxvol algorithm, which swaps at most $h$ rows at each iteration, can be derived in the same manner; the details are left to the reader. We may generalize this algorithm to $n \times m$ matrices by employing an alternating $h$-greedy algorithm between the rows and columns, as before. We have the following alternating $h$-greedy maxvol algorithm summarized in Algorithm [\[hGreedy\]](#hGreedy){reference-type="ref" reference="hGreedy"}. For convenience, we shall refer to the greedy algorithms discussed in this section as GMVA. Let us also introduce the two matrices $$\label{B} B'_k = \begin{bmatrix} b_{i_1j_1} & b_{i_1j_2} & \dots & b_{i_1j_k} \\ b_{i_2j_1} & b_{i_2j_2} & \dots & b_{i_2j_k} \\ \vdots & \vdots & & \vdots \\ b_{i_kj_1} & b_{i_kj_2} & \dots & b_{i_kj_k} \end{bmatrix}$$ and $$\label{C} C'_k = \begin{bmatrix} c_{i_1j_1} & c_{i_1j_2} & \dots & c_{i_1j_k} \\ c_{i_2j_1} & c_{i_2j_2} & \dots & c_{i_2j_k} \\ \vdots & \vdots & & \vdots \\ c_{i_kj_1} & c_{i_kj_2} & \dots & c_{i_kj_k} \end{bmatrix}$$ to simplify the notation.

# Numerical Performance of GMVA and its Applications {#secNum}

In this section, we compare the numerical performance of GMVA for varying $h$ and present its applications to image compression and to least squares approximation of continuous functions via a standard polynomial basis.

## Performance of Greedy Maximal Volume Algorithms (GMVA)

First we report some simulation results to answer the following questions: how many backslash operations, and how much computational time, can the $h$-greedy maxvol algorithm save compared with the basic maxvol algorithm? We will generate $100$ random $5000\times r$ matrices and calculate the average number of backslash operations and the average computational time needed to find a dominant submatrix within a relative error of $10^{-8}$ for $r=30,60,90,120,150,180,210,240$. We present the average number of backslash operations in Table [1](#av_backslash1){reference-type="ref" reference="av_backslash1"} and the average computational time in Table [2](#av_time1){reference-type="ref" reference="av_time1"}.
| $r$ | $h = 1$ | $h = 2$ | $h = 3$ | $h = 4$ | $h = r$ |
|-----|---------|---------|---------|---------|---------|
| $30$ | 33.92 | 23.84 | 20.88 | 20.33 | 19.84 |
| $60$ | 50.56 | 34.05 | 31.78 | 31.31 | 29.56 |
| $90$ | 62.68 | 42.91 | 38.39 | 37.29 | 38.06 |
| $120$ | 71.64 | 51.46 | 44.43 | 42.57 | 41.12 |
| $150$ | 81.97 | 54.78 | 50.08 | 46.02 | 46.33 |
| $180$ | 89.75 | 58.93 | 54.06 | 52.54 | 53.10 |
| $210$ | 95.68 | 64.82 | 57.93 | 55.25 | 53.37 |
| $240$ | 99.65 | 69.85 | 60.77 | 57.39 | 55.55 |

: Average number of backslash operations needed to find a dominant submatrix of $100$ random $5000 \times r$ matrices using the $h$-greedy maxvol algorithm within a relative error of $10^{-8}$.

| $r$ | $h = 1$ | $h = 2$ | $h = 3$ | $h = 4$ | $h = r$ |
|-----|---------|---------|---------|---------|---------|
| $30$ | 0.1310 | 0.1195 | 0.1072 | 0.1048 | 0.1028 |
| $60$ | 0.2653 | 0.2121 | 0.1993 | 0.1973 | 0.1883 |
| $90$ | 0.5128 | 0.4217 | 0.3823 | 0.3716 | 0.3840 |
| $120$ | 0.8644 | 0.7168 | 0.6190 | 0.5968 | 0.5850 |
| $150$ | 1.2885 | 1.0194 | 0.9437 | 0.8723 | 0.8946 |
| $180$ | 1.8529 | 1.4043 | 1.2863 | 1.2581 | 1.2983 |
| $210$ | 2.4726 | 1.9294 | 1.7411 | 1.6637 | 1.6285 |
| $240$ | 3.0215 | 2.4335 | 2.1348 | 2.0247 | 2.0117 |

: Average time in seconds to find a dominant submatrix of $100$ random $5000 \times r$ matrices using the $h$-greedy maxvol algorithm within a relative error of $10^{-8}$.

Next, we present the number of backslash operations and the computational time for running an alternating $h$-greedy algorithm on $5000 \times 5000$ matrices. As we can see from Table [3](#av_backslash2){reference-type="ref" reference="av_backslash2"} and Table [4](#av_time2){reference-type="ref" reference="av_time2"}, the computational time and the number of backslash operations needed decrease as $h$ increases up to $r$. This shows that our GMVA is more efficient when $h$ is large.

| $r$ | $h = 1$ | $h = 2$ | $h = 3$ | $h = 4$ | $h = r$ |
|-----|---------|---------|---------|---------|---------|
| $30$ | 33.63 | 23.58 | 21.16 | 20.49 | 19.98 |
| $60$ | 51.22 | 34.65 | 31.81 | 30.39 | 30.13 |
| $90$ | 65.52 | 44.31 | 38.75 | 38.71 | 37.06 |
| $120$ | 74.13 | 48.49 | 45.42 | 43.79 | 43.09 |
| $150$ | 80.81 | 55.44 | 51.24 | 47.77 | 48.24 |
| $180$ | 89.85 | 60.85 | 52.64 | 50.52 | 49.04 |
| $210$ | 95.74 | 63.96 | 57.88 | 56.89 | 53.16 |
| $240$ | 100.45 | 67.64 | 59.38 | 59.04 | 55.41 |

: Average number of backslash operations needed to find a dominant submatrix of $100$ random $5000 \times 5000$ matrices using the $h$-greedy maxvol algorithm within a relative error of $10^{-8}$.

| $r$ | $h = 1$ | $h = 2$ | $h = 3$ | $h = 4$ | $h = r$ |
|-----|---------|---------|---------|---------|---------|
| $30$ | 0.1360 | 0.0930 | 0.0795 | 0.0754 | 0.0730 |
| $60$ | 0.4169 | 0.2644 | 0.2457 | 0.2351 | 0.2372 |
| $90$ | 0.8349 | 0.5577 | 0.4921 | 0.4940 | 0.4797 |
| $120$ | 1.3528 | 0.8887 | 0.8391 | 0.8114 | 0.8128 |
| $150$ | 2.0072 | 1.3865 | 1.2946 | 1.2105 | 1.2435 |
| $180$ | 2.7712 | 1.8928 | 1.6516 | 1.5875 | 1.5680 |
| $210$ | 3.6673 | 2.4751 | 2.2535 | 2.2290 | 2.1063 |
| $240$ | 4.5758 | 3.1068 | 2.7553 | 2.7464 | 2.6220 |

: Average time in seconds to find a dominant submatrix of $100$ random $5000 \times 5000$ matrices using the $h$-greedy maxvol algorithm within a relative error of $10^{-8}$.

## Application to Image Compression

Many images are of low rank in nature or can be approximated by a low rank image.
More precisely, the pixel values of an image in RGB colors are in integer matrix format. Using the standard SVD to approximate a given image will lead to a low rank approximation with non-integer entries, which require more bits to store and transmit and hence defeat the purpose of compressing an image. In this subsection, we introduce the cross approximation for image compression.

Let $M$ be a matrix with integer entries. We use GMVA to find an $r\times r$ submatrix $A_r$ and index sets $I$ and $J$ of row and column indices. Then we can compute a cross approximation $M(:,J) A_r^{-1} M(I,:)$ to approximate $M$. Let $U, S, V$ be the SVD of $M$ with $S$ being the matrix of singular values. If $S(r+1,r+1)$ is small enough, then by Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} the cross approximation is a good one. One only needs to store and transmit $2nr-r^2$ integer entries for this image, as both $M(:,J)$ and $M(I,:)$ contain the block $A_r$. Note that we do not need to save/store/transmit $A_r^{-1}$. Instead, it will be computed from $A_r$ after the storage/transmission. We shall present a few examples to demonstrate the effectiveness of this computational method.

Certainly, the condition in Theorem [Theorem 4](#mjlai06082021){reference-type="ref" reference="mjlai06082021"} is a sufficient condition. In practice, we only need to measure the peak signal-to-noise ratio (PSNR) to determine if $r$ is large enough or not. That is, one performs several decompositions and reconstructions to find the best rank $r$ as well as the index pair $(I,J)$ before storing or sending the components $M(:,J), M(I,J^c)$, which are compressed by using standard JPEG, where $J^c$ stands for the complement of the index set $J$. Since the size of the two components is smaller than the size of the original entire image, the application of JPEG to the two components should yield a better compression performance than for the original image; hence, the matrix cross approximation can help further increase the performance of the image compression.

In the following, we use 13 standard images to test our method and report our results in Table [5](#caex1){reference-type="ref" reference="caex1"}. That is, the PSNR for these images, together with the ranks used, is shown in Table [5](#caex1){reference-type="ref" reference="caex1"}. In addition, in Figure [4](#caImage1){reference-type="ref" reference="caImage1"}, we show the comparison of the original images with the images reconstructed from their matrix cross approximation. Table [5](#caex1){reference-type="ref" reference="caex1"} shows that many images can be recovered to have a PSNR of 32 or better by using fewer than half of the integer entries of the entire matrix. In other words, only about half of the pixel values need to be encoded for compression instead of the entire image.
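As a sketch of the decoding step (the function names are ours, and the index lists $I$ and $J$ are assumed to have been computed already by the greedy maxvol algorithms described earlier), the reconstruction and its PSNR can be computed as follows.

```python
import numpy as np

def cross_reconstruct(M, I, J):
    """Reconstruct an image from its stored rows M[I, :] and columns M[:, J]."""
    C, R = M[:, J].astype(float), M[I, :].astype(float)
    A_r = C[I, :]                        # the r x r pivot block M[I, J]
    return C @ np.linalg.solve(A_r, R)   # M[:, J] * inv(M[I, J]) * M[I, :]

def psnr(M, M_hat, peak=255.0):
    """Peak signal-to-noise ratio between the original and the reconstruction."""
    mse = np.mean((M.astype(float) - M_hat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```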
image names m n rank Ratio PSNR ------------- ----- ----- ------ ------- ------- bank.pgm 512 512 355 0.90 32.12 barbara.pgm 512 512 260 0.76 32.22 clock.pgm 256 256 90 0.58 32.48 lena512.pgm 512 512 240 0.72 32.20 house.pgm 256 256 90 0.58 32.38 myboat.pgm 256 256 135 0.78 32.80 saturn.pgm 512 512 55 0.20 32.21 F16.pgm 512 512 235 0.71 32.21 pepper.pgm 512 512 370 0.92 32.23 knee.pgm 512 512 105 0.37 32.24 brain.pgm 512 512 110 0.38 32.62 mri.pgm 512 512 130 0.44 34.46 : Various image compression results [\[caex1\]]{#caex1 label="caex1"} --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![Comparison of Original Images, Subsampled Images, and Reconstructed Images from Our Method[\[caImage1\]]{#caImage1 label="caImage1"}](lena512Comp.png){#caImage1 width="50%"} ![Comparison of Original Images, Subsampled Images, and Reconstructed Images from Our Method[\[caImage1\]]{#caImage1 label="caImage1"}](F16Comp.png){#caImage1 width="50%"} ![Comparison of Original Images, Subsampled Images, and Reconstructed Images from Our Method[\[caImage1\]]{#caImage1 label="caImage1"}](houseComp.png){#caImage1 width="50%"} ![Comparison of Original Images, Subsampled Images, and Reconstructed Images from Our Method[\[caImage1\]]{#caImage1 label="caImage1"}](KneeComp.png){#caImage1 width="50%"} --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ## Application to Least Squares Approximation We now turn our attention to the approximation of least squares problem $\min_{\bf x} \|A {\bf x}-{\bf b}\|$ for a given observation matrix $A$ and observation vector ${\bf b}$. We shall write ${\bf x}_{{\bf b}}$ to the least square solution and explain how to use the matrix cross approximation to find a good approximation of ${\bf x}_{{\bf b}}$. As explained before, we shall use $A_{:,J}A_{I,J}^{-1}A_{I,:}$ to approximate the observation matrix $A$. For convenience, let us write $$A = \begin{bmatrix} A_{I,J} & A_{I,J^c} \\ A_{I^c, J} & A_{I^c, J^c} \end{bmatrix}.$$ The matrix cross approximation is $$A_{:,J}A_{I,J}^{-1}A_{I,:}= \begin{bmatrix} A_{I,J} \\ A_{I^c,J} \end{bmatrix} A_{I,J}^{-1} \begin{bmatrix} A_{I,J} & A_{I,J^c} \end{bmatrix} = \begin{bmatrix} E_I \\ A_{I^c,J}A_{I,J}^{-1} \end{bmatrix} \begin{bmatrix} A_{I,J} & A_{I,J^c} \end{bmatrix},$$ where $E_I$ is the identity matrix. For simplicity, we use Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} to have $$\label{lsqInequ1} \|A_{I^c,J^c}-A_{I^c,J}A_{I,J}^{-1}A_{I,J^c}\|_C\leq \frac{r+1}{\sqrt{1+\frac{r\sigma^2_{r+1}(A)}{(r+1)\|A\|^2_2}}}\sigma_{r+1}(A).$$ Note that all the entries in $A_{I^c,J}A_{I,J}^{-1}$ are less than or equal to 1 in absolute value. 
One way to find an approximation of the least squares solution is to solve $$\label{upperhalfLSQ} \begin{bmatrix} A_{I,J} & A_{I,J^c} \end{bmatrix} \mathbf{\hat{x}} = \mathbf{b}_I,$$ where $\mathbf{b}=[\mathbf{b}_I;\mathbf{b}_{I^c}]$ consists of two parts according to the indices in $I$ and the complement indices in $I^c$. Such a solution can be obtained using the sparse solution techniques discussed in [@LW2021], or simply by solving $A_{I,J}\mathbf{\hat{x}}=\mathbf{b}_I$ with the entries of $\mathbf{\hat{x}}$ associated with the indices in $J^c$ set to zero. Hence, the residual in the first part is zero or near zero. Our main concern is the residual vector in the second part $$\begin{bmatrix} A_{I^c,J} & A_{I^c,J^c} \end{bmatrix} \mathbf{\hat{x}} - \mathbf{b}_{I^c}=A_{I^c,J}\mathbf{\hat{x}}-\mathbf{b}_{I^c}=A_{I^c,J}A_{I,J}^{-1}\mathbf{b}_I-\mathbf{b}_{I^c}.$$ Recall that ${\bf x}_{\bf b}$ is a least squares solution. Let us write the residual vector $A\mathbf{x}_{\bf b}-\mathbf{b}=:\mathbf{\epsilon}$ with $\mathbf{\epsilon}$ consisting of two parts, i.e., $\mathbf{\epsilon}=[\mathbf{\epsilon}_I;\mathbf{\epsilon}_{I^c}]$. Then $\mathbf{b}_{I^c}= \begin{bmatrix} A_{I^c,J} & A_{I^c,J^c} \end{bmatrix} \mathbf{x}_b-\epsilon_{I^c}$. We have $$\begin{aligned} A_{I^c,J}\mathbf{\hat{x}}-\mathbf{b}_{I^c}&=A_{I^c,J}A_{I,J}^{-1}\mathbf{b}_I-\mathbf{b}_{I^c} \\ & = A_{I^c,J}A_{I,J}^{-1}\mathbf{b}_I-\begin{bmatrix} A_{I^c,J} & A_{I^c,J^c} \end{bmatrix} \mathbf{x}_b + \epsilon_{I^c} \\ & = A_{I^c,J}A_{I,J}^{-1}\mathbf{b}_I - \begin{bmatrix} A_{I^c,J} & A_{I^c,J}A^{-1}_{I,J}A_{I,J^c} \end{bmatrix} \mathbf{x}_b+ \begin{bmatrix} 0_{I^c,J} & A_{I^c,J}A^{-1}_{I,J}A_{I,J^c}-A_{I^c,J^c} \end{bmatrix}\mathbf{x}_b+\epsilon_{I^c} \\ & = A_{I^c,J}A_{I,J}^{-1}(\mathbf{b}_I-\begin{bmatrix} A_{I,J} & A_{I,J^c} \end{bmatrix}\mathbf{x}_b)+ \begin{bmatrix} 0_{I^c,J} & A_{I^c,J}A^{-1}_{I,J}A_{I,J^c}-A_{I^c,J^c} \end{bmatrix}\mathbf{x}_b+\epsilon_{I^c} \\ & = A_{I^c,J}A_{I,J}^{-1}\epsilon_I+ \begin{bmatrix} 0_{I^c,J} & A_{I^c,J}A^{-1}_{I,J}A_{I,J^c}-A_{I^c,J^c} \end{bmatrix} \mathbf{x}_b+\epsilon_{I^c}.\end{aligned}$$ It follows that $$\begin{aligned} \|A\mathbf{\hat{x}}-\mathbf{b}\|_{\infty} & = \| A_{I^c,J}\mathbf{\hat{x}}-\mathbf{b}_{I^c}\|_\infty \\ & \leq \|A_{I^c,J}A^{-1}_{I,J}\epsilon_I\|_{\infty}+\|A_{I^c,J}A^{-1}_{I,J}A_{I,J^c}-A_{I^c,J^c}\|_C\|\mathbf{x}_b\|_1+\|\epsilon_{I^c}\|_{\infty} \\ & \leq \|\epsilon_I\|_1+\|A_{I^c,J}A^{-1}_{I,J}A_{I,J^c}-A_{I^c,J^c}\|_C\|\mathbf{x}_b\|_1+\|\epsilon_{I^c}\|_{\infty}, \end{aligned}$$ where the last inequality holds since all the entries of $A_{I^c,J}A_{I,J}^{-1}$ are less than or equal to 1 in absolute value. By applying ([\[lsqInequ1\]](#lsqInequ1){reference-type="ref" reference="lsqInequ1"}), we have established the following main result of this section:

**Theorem 8**. *Let $\mathbf{\epsilon}=A\mathbf{x}_{\bf b}-\mathbf{b}$ be the residual vector of the least squares solution $\mathbf{x}_{\bf b}$, written as $\mathbf{\epsilon}=[\mathbf{\epsilon}_I;\mathbf{\epsilon}_{I^c}]$. Then $$\label{lsqIneq} \|A\mathbf{\hat{x}}-\mathbf{b}\|_{\infty}\leq \|\mathbf{\epsilon}_I\|_1+\frac{r+1}{\sqrt{1+\frac{r\sigma^2_{r+1}(A)}{(r+1)\|A\|^2_2}}}\sigma_{r+1}(A)\|\mathbf{x}_b\|_1+\|\mathbf{\epsilon}_{I^c}\|_{\infty}.$$*

Computing the solution $\mathbf{\hat{x}}$ of ([\[upperhalfLSQ\]](#upperhalfLSQ){reference-type="ref" reference="upperhalfLSQ"}) is much faster than computing the full least squares solution, and if $\sigma_{r+1}(A) \ll 1$, its residual satisfies an estimate similar to that of the original least squares solution.
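The following short Python/NumPy sketch illustrates the second option above (solving $A_{I,J}\mathbf{\hat{x}}=\mathbf{b}_I$ and padding the remaining entries with zeros). It is only an illustration on synthetic data: the random index sets $I$ and $J$ stand in for the rows and columns that GMVA would select, and the matrix is built to be nearly of rank $r$.

```python
import numpy as np

def cross_lsq(A, b, I, J):
    """Approximate least squares solution: solve the r x r system A_{I,J} x_J = b_I
    and pad the entries with indices in J^c with zeros."""
    x_hat = np.zeros(A.shape[1])
    x_hat[J] = np.linalg.solve(A[np.ix_(I, J)], b[I])
    return x_hat

# Toy usage with a nearly rank-r observation matrix and a small residual.
rng = np.random.default_rng(1)
m, n, r = 500, 60, 20
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) + 1e-8 * rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 1e-3 * rng.standard_normal(m)
I = rng.choice(m, size=r, replace=False)   # stand-in for the GMVA row selection
J = rng.choice(n, size=r, replace=False)   # stand-in for the GMVA column selection
x_hat = cross_lsq(A, b, I, J)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_hat - b, np.inf), np.linalg.norm(A @ x_full - b, np.inf))
```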
The computational time is mainly spent on finding a dominant submatrix, which depends on the number of iterations of the Greedy Maximal Volume Algorithms discussed in the previous section, each iteration requiring the solution (e.g., by the MATLAB backslash) of linear systems of size $r\times r$. This new computational method will be useful when dealing with least squares tasks that have multiple right-hand side vectors. Traditionally, we often use the well-known singular value decomposition (SVD) to solve the least squares problem. The computational method explained above has a distinct feature that the SVD does not have. Let us use the following numerical example to illustrate this feature. Now let us present some numerical examples of finding the best least squares approximation of functions of two variables by using a polynomial basis. In general, this scheme can be applied to function approximation in any dimension. Suppose that we use polynomials of degree less than or equal to 10, i.e., the basis functions $1,x,y,x^2, xy, y^2, \cdots,y^{10}$, to find the best coefficients $c_0,c_1,\cdots,c_{65}$ such that $$\label{2DlsqEqn} \min_{c_0, \cdots, c_{65}} \| f(x,y) -( c_0+c_1 x+c_2 y +\cdots+c_{65}y^{10})\|_{L^2}$$ for $(x,y)\in [-1,1]^2$. We can approximate ([\[2DlsqEqn\]](#2DlsqEqn){reference-type="ref" reference="2DlsqEqn"}) by using the discrete least squares method, i.e., we find the coefficients $\mathbf{c}=(c_0,\cdots,c_n)$ such that $$\label{lsqmin} \min_{c_0,\cdots,c_n}\|f(\mathbf{x}_j)-\sum_{i=0}^n c_i\phi_i(\mathbf{x}_j)\|_{\ell^2},$$ where the $\phi_i$'s are the polynomial basis functions and the $\mathbf{x}_j$'s are the sampled points in $[-1, 1]^2$ with $n\gg 1$. In our experiments, we select continuous functions across different families, such as $e^{x^2+y^2}$, $\sin(x^2+y^2)$, $\cos(x^2+y^2)$, $\ln(1+x^2+y^2)$, $(1+x^4+y^4)/(1+x^2+y^2)$, Franke's function, Ackley's function, Rastrigin's function, and a wavy function, as the testing functions to demonstrate the performance. Some plots of the functions are shown in Figure [9](#samplefuns){reference-type="ref" reference="samplefuns"}. We uniformly sampled $51^2$ points over $[-1,1]^2$ to solve for an estimate $\hat{\mathbf{c}}$ of $\mathbf{c}$ in the discrete least squares problem ([\[lsqmin\]](#lsqmin){reference-type="ref" reference="lsqmin"}). We report the relative error between the estimate $\hat{f}$ with coefficient vector $\hat{\mathbf{c}}$ and the exact function $f$ in Table [6](#relTable2D){reference-type="ref" reference="relTable2D"}. The relative errors are computed based on $501^2$ points over $[-1,1]^2$.

![The entire data locations (red) and the pivotal data locations (blue). [\[pivotalpoints\]]{#pivotalpoints label="pivotalpoints"}](2D_poly.png){#pivotalpoints width="50%"}

Alternatively, we use our matrix cross approximation technique to select some rows and columns as discussed before and then solve a smaller discrete least squares problem.
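Before turning to the pivotal-row variant, we record a small Python/NumPy sketch of the full discrete least squares fit ([\[lsqmin\]](#lsqmin){reference-type="ref" reference="lsqmin"}): it builds the $51^2\times 66$ collocation matrix for the degree-$10$ bivariate monomial basis, solves for $\hat{\mathbf{c}}$, and evaluates the relative error on a $501^2$ grid. The basis ordering and the dense solver used here are illustrative choices, not necessarily those used for Table [6](#relTable2D){reference-type="ref" reference="relTable2D"}; the pivotal-row variant discussed next solves the same system restricted to the selected rows.

```python
import numpy as np

def design_matrix(x, y, degree=10):
    """Monomial basis x^i y^j with i + j <= degree (66 columns when degree = 10)."""
    cols = [x**i * y**(k - i) for k in range(degree + 1) for i in range(k, -1, -1)]
    return np.column_stack(cols)

def fit_and_relative_error(f, degree=10, n_train=51, n_test=501):
    """Least squares fit on an n_train^2 uniform grid; relative l2 error on an n_test^2 grid."""
    t = np.linspace(-1.0, 1.0, n_train)
    X, Y = np.meshgrid(t, t)
    A = design_matrix(X.ravel(), Y.ravel(), degree)
    c_hat, *_ = np.linalg.lstsq(A, f(X.ravel(), Y.ravel()), rcond=None)

    s = np.linspace(-1.0, 1.0, n_test)
    XT, YT = np.meshgrid(s, s)
    exact = f(XT.ravel(), YT.ravel())
    approx = design_matrix(XT.ravel(), YT.ravel(), degree) @ c_hat
    return np.linalg.norm(exact - approx) / np.linalg.norm(exact)

# Example run with one of the test functions.
print(fit_and_relative_error(lambda x, y: np.exp(x**2 + y**2)))
```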
It is worth noting that our data matrix $[\phi_i(\mathbf{x}_j)]_{i,j}$ in this setup is of full column rank; therefore the second term on the right side of ([\[lsqIneq\]](#lsqIneq){reference-type="ref" reference="lsqIneq"}) is zero, and our GMVA is going to select all the columns together with the same number of rows. Let us call these selected rows the pivotal rows, or pivotal sample locations. We can solve ([\[lsqmin\]](#lsqmin){reference-type="ref" reference="lsqmin"}) based only on those $\mathbf{x}_j$'s which belong to the pivotal sample locations to obtain an estimate $\hat{f}_p$ of $f$. Figure [5](#pivotalpoints){reference-type="ref" reference="pivotalpoints"} shows the pivotal locations of the uniform sampling. We also report the relative error between $\hat{f}_{p}$ and the exact testing function $f$ in Table [6](#relTable2D){reference-type="ref" reference="relTable2D"}. Again, the relative errors are computed based on $501^2$ points over $[-1,1]^2$. We can see that the two columns of relative errors in Table [6](#relTable2D){reference-type="ref" reference="relTable2D"} are of the same order of magnitude, which shows that we only need to use the data at the pivotal locations, instead of the entire set of $51^2$ locations, to obtain a good approximation. That is, our computational method for the least squares solution reveals the pivotal locations in $[-1, 1]^2$ at which to measure the unknown target functions, so that we can use fewer measurements and still recover them quickly.

  $f(x,y)$                    $\|f-\hat{f}\|_{rel}$   $\|f-\hat{f}_p\|_{rel}$
  --------------------------- ----------------------- -------------------------
  $e^{x^2+y^2}$               1.93E-05                4.59E-05
  $\sin(x^2+y^2)$             2.13E-05                5.07E-05
  $\cos(x^2+y^2)$             1.28E-05                2.83E-05
  $\ln(1+x^2+y^2)$            1.06E-04                2.10E-04
  $(1+x^4+y^4)/(1+x^2+y^2)$   3.40E-04                6.57E-04
  Franke's function           5.90E-02                8.10E-02
  Ackley's function           2.10E-02                4.05E-02
  Rastrigin's function        7.65E-04                1.10E-03
  Wavy function               7.03E-02                9.80E-02

  : Relative errors of least squares approximations for functions of two variables by using $51^2$ sample locations and by using pivotal data locations [\[relTable2D\]]{#relTable2D label="relTable2D"}

![Plot of some testing functions (top row: Franke and Ackley. bottom row: Rastrigin and wavy). [\[samplefuns\]]{#samplefuns label="samplefuns"}](Franke.png){#samplefuns width="30%"}

![Plot of some testing functions (top row: Franke and Ackley. bottom row: Rastrigin and wavy). [\[samplefuns\]]{#samplefuns label="samplefuns"}](Ackley.png){#samplefuns width="30%"}

![Plot of some testing functions (top row: Franke and Ackley. bottom row: Rastrigin and wavy). [\[samplefuns\]]{#samplefuns label="samplefuns"}](Rastrigin.png){#samplefuns width="30%"}

![Plot of some testing functions (top row: Franke and Ackley. bottom row: Rastrigin and wavy). [\[samplefuns\]]{#samplefuns label="samplefuns"}](wavy.png){#samplefuns width="30%"}
# Conclusion and Future Research

In this work, we proposed a family of greedy-based maximal volume algorithms which improve the classic maximal volume algorithm in terms of both the error bound of the matrix cross approximation and the computational efficiency. The proposed algorithms are shown to have theoretical guarantees on convergence and are demonstrated to perform well on applications such as image compression and the least squares solution. Future research can be conducted along the following directions. Firstly, although the improved error bound in Theorem [Theorem 3](#mjlai06062021){reference-type="ref" reference="mjlai06062021"} is slightly better than its previous counterpart, we would like to seek an even tighter error bound and an algorithm which can achieve this tighter bound. Secondly, as data matrices in real-world applications are usually incomplete or not fully observable, it is vital to generalize the algorithms to the setting where matrices have missing values. Thirdly, if a matrix is sparse or specially structured, we want to develop more tailored algorithms to obtain better approximation results. Finally, we plan to generalize the proposed approach to the setting of tensor product approximation of high-dimensional functions.

# Acknowledgements {#acknowledgements .unnumbered}

The second author would like to acknowledge the Simons Foundation for Collaboration Grant \#864439.

K. Allen, A Geometric Approach to Low-Rank Matrix and Tensor Completion, Dissertation, University of Georgia, 2021. J. Bennett, C. Elkan, B. Liu, P. Smyth, D. Tikk. Kdd cup and workshop 2007. ACM SIGKDD explorations newsletter 9, no. 2 (2007): 51-52. A. Ben-Israel, A volume associated with $m \times n$ matrices, Linear Algebra Appl., 167, 1992. C. Boutsidis and D. P. Woodruff. Optimal CUR matrix decompositions. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pp. 353-362. 2014. H. Cai, K. Hamm, L. Huang, J. Li, and T. Wang. Rapid robust principal component analysis: CUR accelerated inexact low rank estimation. IEEE Signal Processing Letters 28 (2020): 116-120. H. Cai, K. Hamm, L. Huang, D. Needell. Mode-wise tensor decompositions: Multi-dimensional generalizations of CUR decompositions. The Journal of Machine Learning Research 22, no. 1 (2021): 8321-8356. H. Cai, L. Huang, P. Li, D. Needell. Matrix completion with cross-concentrated sampling: Bridging uniform sampling and CUR sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023). J.-F. Cai, T. Wang, K. Wei. Fast and provable algorithms for spectrally sparse signal reconstruction via low-rank Hankel matrix completion. Applied and Computational Harmonic Analysis 46, no. 1 (2019): 94-121. E. Candes and J. Romberg. Sparsity and incoherence in compressive sampling. Inverse problems 23, no. 3 (2007): 969. E. Candes and B. Recht. Exact matrix completion via convex optimization. Communications of the ACM 55, no. 6 (2012): 111-119. S. Chaturantabut and D. C. Sorensen.
Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing 32, no. 5 (2010): 2737-2764. A. Civril and M. Magdon-Ismail, On selecting a maximum volume submatrix of a matrix and related problems, Theo. Comp. Sci., 410, 4801--4811, 2009. P. Drineas, M. W. Mahoney, S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications 30, no. 2 (2008): 844-881. Z. Drmac and S. Gugercin. A new selection operator for the discrete empirical interpolation method---improved a priori error bound and extensions. SIAM Journal on Scientific Computing 38, no. 2 (2016): A631-A648. M. Fazel, E. Candes, B. Recht, P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In 2008 42nd Asilomar Conference on Signals, Systems and Computers, pp. 1043-1047. IEEE, 2008. I. Georgieva and C. Hofreither, On the Best Uniform Approximation by Low-Rank Matrices, Linear Algebra and its Applications, vol. 518 (2017), Pages 159--176. D. Goldberg, D. Nichols, B. M. Oki, D. Terry. Using collaborative filtering to weave an information tapestry. Communications of the ACM 35, no. 12 (1992): 61-70. S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations. Linear algebra and its applications 261, no. 1-3 (1997): 1-21. S. A. Goreinov, I. V. Oseledets, D. V. Savostyanov, E. E. Tyrtyshnikov, N. L. Zamarashkin. How to find a good submatrix. In Matrix Methods: Theory, Algorithms And Applications: Dedicated to the Memory of Gene Golub, pp. 247-256. 2010. S. A. Goreinov and E. E. Tyrtyshnikov, The maximal-volume concept in approximation by low-rank matrices, Contemporary Mathematics, 208: 47--51, 2001. K. Hamm and L. Huang. Perspectives on CUR decompositions. Applied and Computational Harmonic Analysis 48, no. 3 (2020): 1088-1099. K. Hamm, L. Huang, Stability of Sampling for CUR Decomposition, Foundations of Data Science, Vol. 2, No. 2 (2020), 83-99. N. Kishore Kumar and J. Schneider, Literature survey on low rank approximation of matrices, Journal on Linear and Multilinear Algebra, vol. 65 (2017), 2212--2244. M.-J. Lai and Y. Wang. Sparse Solutions of Underdetermined Linear Systems and Their Applications, Society for Industrial and Applied Mathematics, 2021. N. Mitrovic, M. T. Asif, U. Rasheed, J. Dauwels, and P. Jaillet. CUR decomposition for compression and compressed sensing of large-scale traffic data. In 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), pp. 1475-1480. IEEE, 2013. M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences 106, no. 3 (2009): 697-702. M. W. Mahoney, M. Maggioni, and P. Drineas. Tensor-CUR decompositions for tensor-based data. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 327-336. 2006. A. Mikhalev and I. V. Oseledets, Rectangular maximum-volume submatrices and their applications, Linear Algebra and its Applications, Vol. 538, 2018, Pages 187--211. I. V. Oseledets and E. E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra and its Applications, 432 (2010) 70--88. C.-T. Pan. On the existence and computation of rank-revealing LU factorizations. Linear Algebra Appl., 316, 2000. L. Schork and J. Gondzio, Rank revealing Gaussian elimination by the maximum volume concept. Linear Algebra and its Applications, 2020. D. C. Sorensen and M. Embree.
A DEIM Induced CUR Factorization. SIAM Journal on Scientific Computing 38, no. 3 (2016): A1454-A1482. Z. Wang, M.-J. Lai, Z. Lu, W. Fan, H. Davulcu, and J. Ye. Orthogonal rank-one matrix pursuit for low rank matrix completion. SIAM Journal on Scientific Computing 37, no. 1 (2015): A488-A514.

[^1]: kenneth.allen\@uga.edu. Department of Mathematics, University of Georgia, Athens, GA 30602.

[^2]: mjlai\@uga.edu. Department of Mathematics, University of Georgia, Athens, GA 30602. This author is supported by the Simons Foundation Collaboration Grant \#864439.

[^3]: zhaiming.shen\@uga.edu. Department of Mathematics, University of Georgia, Athens, GA 30602.
--- abstract: | In this paper, we prove the converse of Rees' mixed multiplicity theorem for modules, which extends the converse of the classical Rees' mixed multiplicity theorem for ideals given by Swanson - Theorem [\[SwansonTheorem\]](#SwansonTheorem){reference-type="ref" reference="SwansonTheorem"}. Specifically, we demonstrate the following result: Let $(R,\mathfrak{m})$ be a $d$-dimensional formally equidimensional Noetherian local ring and $E_1,\dots,E_k$ be finitely generated $R$-submodules of a free $R$-module $F$ of positive rank $p$, with $x_i\in E_i$ for $i=1,\dots,k$. Consider $S$, the symmetric algebra of $F$, and $I_{E_i}$, the ideal generated by the homogeneous component of degree 1 in the Rees algebra $[\mathscr{R}(E_i)]_1$. Assuming that $(x_1,\ldots,x_k)S$ and $I_{E_i}$ have the same height $k$ and the same radical, if the Buchsbaum-Rim multiplicity of $(x_1,\dots,x_k)$ and the mixed Buchsbaum-Rim multiplicity of the family $E_1,\dots,E_k$ are equal, i.e., ${\rm e_{BR}}((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}) = {\rm e_{BR}}({E_1}_{\mathfrak{p}},\dots, {E_k}_{\mathfrak{p}},R_{\mathfrak{p}})$ for all prime ideals $\mathfrak{p}$ minimal over $((x_1,\ldots,x_k):_RF)$, then $(x_1,\ldots,x_k)$ is a joint reduction of $(E_1,\dots,E_k)$. In addition to proving this theorem, we establish several properties that relate joint reduction and mixed Buchsbaum-Rim multiplicities. address: - UEM-PR, Brazil - Universidade de São Paulo - ICMC, Caixa Postal 668, 13560-970, São Carlos-SP, Brazil - Universidade Estadual de Mato Grosso do Sul, Cidade Universitária de Dourados, Caixa postal 351, 79804-970, Dourados-MS, Brazil author: - M. D. Ferrari - V. H. Jorge-Perez - L. C. Merighe date: - - title: Mixed multiplicity and Converse of Rees' theorem for modules --- [^1] # Introduction One of the main motivations for studying mixed multiplicities of ideals and modules arises from the geometric interpretation discovered in 1973 by Teissier in his paper at Cargese [@Teissier]. Let $R = \mathbb{C}\{z_0, z_1, \dots, z_d\}$ be the local ring of convergent power series with maximal ideal $\mathfrak{m}$. He established a significant result: if $f \in R$ represents the equation of an isolated hypersurface singularity $(X,0)$, then the Jacobian of $f$, denoted by $J(f)$, is an $\mathfrak{m}$-primary ideal. Consequently, the function $\ell_R\left(R/J(f)^u\mathfrak{m}^v\right)$ is represented by a polynomial $P(u,v)$ of total degree $d + 1$ for large values of $u$ and $v$. Moreover, the terms of total degree $d + 1$ in $P(u,v)$ can be expressed as: $$\sum_{i=0}^{d+1}\frac{1}{(d+1-i)!\cdot i!}\mu^{(d+1-i)} u^{d +1-i}\cdot v^i,$$ where $\mu^{(i)}$ denotes the Milnor number of $X \cap E$ at $0\in \mathbb{C}^{d+1}$, and $E$ represents an $i$-dimensional affine subspace of $\mathbb{C}^{d+1}$ passing through $0\in \mathbb{C}^{d+1}$ for all $i=0,\dots, d+1$. Furthermore, within the same context, Teissier [@Teissier] obtained two interesting results: the first one states that the mixed multiplicities of $\mathfrak{m}$ and $J(f)$, denoted by ${\rm e}\left(\mathfrak{m}^{[d+1-i]},J(f)^{[i]}\right)$, are equal to $\mu^{(d+1-i)}$ for all $i =0,\dots,d+1$. 
The second result states that ${\rm e}\left(\mathfrak{m}^{[d+1-i]},J(f)^{[i]}\right)$ is equal to the Hilbert-Samuel multiplicity of $(a_1,\dots,a_{d+1-i},b_1,\dots,b_i)$, denoted by ${\rm e}\left(a_1,\dots,a_{d+1-i},b_1,\dots,b_i\right)$, where $a_1,\dots,a_{d+1-i}$ represent the defining equations of $E$ and $b_1,\dots,b_i$ are general elements in $J(f)$ (for the general theory, see the survey [@TrungVerma p. 535-537]). Since the beginning, both the Milnor number and the Hilbert-Samuel multiplicity have played crucial roles in singularity theory, commutative algebra, and algebraic geometry. Therefore, the results established by Teissier have served as a significant motivation for the development of the theory of mixed multiplicities of ideals and modules. To fix further notation and clarify some results, let us review some definitions and some fundamental properties of multiplicities. Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring and let $I_1,\ldots,I_k$ be $\mathfrak{m}$-primary ideals in $R$. There exists a polynomial $P(n_1,\ldots,n_k)$ in $k$ variables with rational coefficients and total degree $d$ such that for sufficiently large $n_1,\ldots ,n_k$: $$P\left(n_1,\ldots,n_k\right)=\ell_R\left(\frac{R}{I^{n_1}_1\cdots I^{n_k}_k}\right).$$ This polynomial $P(n_1,\ldots,n_k)$ is known as the *multi-graded Hilbert polynomial* of $I_1,\ldots,I_k$. By decomposing this polynomial $P(n_1,\ldots,n_k)$ into its homogeneous parts, its part of total degree $d$ can be expressed as: $$\sum_{d_1+\cdots+d_k=d}\frac{1}{d_1!\cdots d_k!} {\rm e}\left(I^{[d_1]}_1,\dots,I^{[d_k]}_k\right) n^{d_1}_1\cdots n^{d_k}_k,$$ where ${\rm e}\left(I^{[d_1]}_1,\ldots,I^{[d_k]}_k\right)\in \mathbb{Q}$ and $I^{[d_i]}_i$ indicates that each $I_i$ is listed $d_i$ times for all $i=1,\dots,k$. This number ${\rm e}\left(I^{[d_1]}_1,\dots,I^{[d_k]}_k\right)$ is called the *mixed multiplicity of type $(d_1,\dots,d_k)$ with respect to $I_1,\dots,I_k$*. For more details, see [@SH; @Rober]. A $k$-tuple $(x_1,\ldots,x_k)$ of elements of $R$ is a *joint reduction* of the $k$-tuple $(I_1,\ldots,I_k)$ if the ideal $\sum^k_{i=1}x_iI_1\cdots I_{i-1}I_{i+1} \cdots I_k$ is a reduction of $I_1\cdots I_k$, where $x_i\in I_i$ for each $i=1,\ldots,k$. It is worth noting that when $k=1$, the mixed multiplicity coincides with the Hilbert-Samuel multiplicity, denoted by ${\rm e}(I_1)$, and the definition of joint reduction aligns with the definition of reduction of an ideal. In this context, Rees in [@Rees1] established a fundamental result known as Rees' theorem:

**Theorem 1** (Rees' theorem). *Let $(R,\mathfrak{m})$ be a formally equidimensional Noetherian local ring and let $(x_1,\dots,x_d)\subset I$ be two $\mathfrak{m}$-primary ideals. Then $(x_1,\dots,x_d)$ is a reduction of $I$ if and only if ${\rm e}(I) = {\rm e}(x_1,\dots,x_d)$.*

Böger [@boger] extended this result to non-$\mathfrak{m}$-primary ideals. The results presented by Teissier, Rees and Böger also served as motivating factors for obtaining the fundamental results in the theory of mixed multiplicities. The first significant result was given by Rees [@Rees], commonly known as \"Rees' mixed multiplicity theorem for ideals\", which establishes a connection between joint reductions and mixed multiplicities (see also [@SH Theorem 17.4.9]).

**Theorem 2** (Rees' mixed multiplicity theorem for ideals). *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $I_1,\ldots,I_k$ be $\mathfrak{m}$-primary ideals in $R$, $x_i\in I_i$ for all $i=1,\ldots,k$.
Consider integers $d_1,\dots,d_k$ with $d_1+\cdots+d_k=d$. If $(x_1,\ldots,x_d)$ is a joint reduction of $I_1,\ldots,I_k$, with each $I_i$ listed $d_i$ times, then ${\rm e}\left(x_1,\ldots,x_d\right)={\rm e}\left(I_1^{[d_1]},\ldots,I_k^{[d_k]}\right).$*

This theorem states that the multiplicity of the ideal generated by a joint reduction equals the corresponding mixed multiplicity of the ideals. The second fundamental result was given by Swanson, who showed the converse of \"Rees' mixed multiplicity theorem for ideals\" [@Swanson; @SH Theorem 17.6.1], which further generalizes the result given by Böger to mixed multiplicities of ideals.

**Theorem 3** (Converse of Rees' mixed multiplicity theorem for ideals). *Let $(R,\mathfrak{m})$ be a formally equidimensional $d$-dimensional Noetherian local ring, $I_1 ,\ldots,I_k$ be ideals in $R$, $x_i\in I_i$ for $i=1,\ldots,k$. Consider integers $d_1,\dots,d_k$ with $d_1+\cdots +d_k=d$. If ${\rm e}\left((x_1,\ldots,x_k)R_{\mathfrak{p}};R_{\mathfrak{p}}\right) = {\rm e}\left(I_1R_{\mathfrak{p}} , \ldots , I_kR_{\mathfrak{p}} ;R_{\mathfrak{p}}\right)$ for all prime ideals ${\mathfrak{p}}$ minimal over $(x_1,\ldots,x_k)$, then $(x_1,\ldots,x_k)$ is a joint reduction of $(I_1,\ldots, I_k)$.*

The extension of the concept of multiplicity to modules over a Noetherian local ring, known as the Buchsbaum-Rim multiplicity, has also been well studied by many authors; for a formal definition, see Section [3.2](#section4.2){reference-type="ref" reference="section4.2"}. This multiplicity extends the notion of multiplicity for ideals, and it has been the subject of several generalizations and results. Notably, Rees' theorem and related results have been established by Buchsbaum, Rim, Kirby, Rees [@Kirby-Rees1], and Katz [@Katz]. Moreover, Kleiman and Thorup provided an algebro-geometric interpretation of these same results in [@Kleiman-Thorup] and [@Kleiman-Thorup2]. Katz made significant use of the Buchsbaum-Rim multiplicity to prove a reduction criterion for modules, which represents a generalization of Böger's theorem. Kirby-Rees, Kleiman and Thorup in [@Kirby-Rees1; @Kleiman-Thorup; @Kleiman-Thorup2] also introduced the notion of mixed multiplicities for a family $E_1, \dots, E_k$ of $R$-submodules of $R^p$ with finite colength. This multiplicity is denoted by ${\rm e_{BR}}\left(E_1^{[d_1]},\ldots,E_k^{[d_k]}\right)$, and its formal definition can be found in Section [3.3](#section4.3){reference-type="ref" reference="section4.3"}. In this context, Bedregal-Perez [@Bedregal-Perez2] provided a generalization of \"Rees' mixed multiplicity theorem for ideals\".

**Theorem 4** (Rees' mixed multiplicity theorem for modules). *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E_1,\dots, E_{k}$ be $R$-submodules of $F$ such that $F/E_i$ has finite length, for all $i=1, \dots, k$. Let $x_1,\dots ,x_{d+p-1}$ be any joint reduction of $E_1,\dots, E_{k}$, with each $E_i$ listed $d_i$ times.
Then, for all integers $d_1,\ldots,d_k\in\mathbb{N}$ with $d_1+d_2+\cdots+d_k=d+p-1$, ${\rm e_{BR}}\left(E_1^{[d_1]},\ldots,E_k^{[d_k]}\right) = {\rm e_{BR}}\left(x_1,\dots ,x_{d+p-1}\right),$ where ${\rm e_{BR}}(x_1,\dots ,x_{d+p-1})$ denotes the Buchsbaum-Rim multiplicity of the $R$-submodule of $F$ generated by $x_1 ,\dots ,x_{d+p-1}$.* One of the main results of this work is to provide a generalization of Böger's version of the result for modules, as given by Katz, and to demonstrate the converse of \"Rees' mixed multiplicity theorem for modules\", the Theorem [Theorem 4](#Perez-Bedregal){reference-type="ref" reference="Perez-Bedregal"} above. The organization of this paper is given as follows: In Section [2](#section2){reference-type="ref" reference="section2"}, we will first establish some notations, definitions and basic results that will be useful for the rest of the paper. In Section [3](#section3){reference-type="ref" reference="section3"}, we will review some definitions and results about Buchsbaum-Rim multiplicity, mixed Buchsbaum-Rim multiplicity, and g-multiplicity. It should be noted that these invariants will be fundamental in the proofs of almost all the results in the paper. In Section [4](#section4){reference-type="ref" reference="section4"}, we state and prove one of the main results of the paper, called the \"Risler-Teissier Theorem\", for finitely generated $R$-submodules $E_i$ of $F^{e_i}$, where $e_i \in \mathbb{N}$. This result generalizes the Risler-Teissier theorem for ideals given in [@Teissier Proposition 2.1], and also generalizes the Risler-Teissier theorem for modules given in [@Bedregal-Perez2 Theorem 5.2] when $e_i=1$, for all $i$. In Section [5](#section5){reference-type="ref" reference="section5"}, we present fundamental results that are crucial to the proof of the main theorem of this paper. In the last section, we establish the main result of this paper, which is the converse of Theorem [Theorem 4](#Perez-Bedregal){reference-type="ref" reference="Perez-Bedregal"} and also a generalization of Swanson's theorem [Theorem 3](#SwansonTheorem){reference-type="ref" reference="SwansonTheorem"}. # Setup and Background {#section2} In this paper, $(R, \mathfrak{m})$ denotes a Noetherian local ring with a maximal ideal $\mathfrak{m}$, and $F$ represents a free $R$-module of positive rank $p$. Furthermore, we assume that all $R$-modules discussed in this paper are finitely generated. Let $E$ be a finitely generated submodule of $F$. The embedding $E\subseteq F$ induces a graded $R$-algebra homomorphism between the symmetric algebras: $${\rm Sym}(i):{\rm Sym}_R(E)\longrightarrow {\rm Sym}_R(F),$$ which maps the symmetric algebra of $E$ to the symmetric algebra of $F$. Since $F$ is a free $R$-module, the symmetric algebra $S:={\rm Sym}_R(F)$ of $F$ coincides with the polynomial ring $R[t_1,t_2,\dots,t_p]$ over $R$ with $rank(F)=p$ variables. In [@Simis-Ulrich-Vasconcelos], the authors Simis-Ulrich-Vasconcelos defined the Rees algebra $\mathscr{R}(E)$ of the module $E$ as the image of this homomorphism. The Rees algebra $\mathscr{R}(E)$ is given by: $$\mathscr{R}(E)={\rm Im}\left({\rm Sym}(i)\right) = \bigoplus_{n\geq0} E^n\subseteq S = R[t_1,t_2,\ldots,t_p],$$ where $E^n:=[\mathscr{R}(E)]_n$ represents the homogeneous component of $\mathscr{R}(E)$ of degree $n$. In other words, $E^n$ is the $n$-th power of the image of $E$ in $\mathscr{R}(E)$. If we consider an element $h = (h_1,\dots ,h_p) \in F$, we define the element $w(h) = h_1t_1 + \cdots + h_pt_p \in S_1$. 
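To illustrate this construction with a small example (included here only for concreteness), take $p=2$, so that $F=R^2$ and $S=R[t_1,t_2]$, and let $E\subseteq F$ be generated by $h=(a,b)$ and $h'=(c,d)$. Then $w(h)=at_1+bt_2$ and $w(h')=ct_1+dt_2$ are homogeneous elements of degree one in $S$, and $$\mathscr{R}(E)=R[\,at_1+bt_2,\; ct_1+dt_2\,]\subseteq R[t_1,t_2],$$ so that $E^n=[\mathscr{R}(E)]_n$ is the $R$-submodule of $S_n=F^n$ generated by the products of $n$ of these two linear forms.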
We denote by $[\mathscr{R}(E)]_1:= \{w(h) : h \in E\}=E$. In particular, $E=[\mathscr{R}(E)]_1$ is an $R$-submodule of $\mathscr{R}(E)$. The ideal of $S$ generated by $[\mathscr{R}(E)]_1$ is denoted by $I_E$. To simplify the notation, the element $w(h)$ will be denoted simply by $h$, throughout all the paper. Furthermore, now we establish a consistent set of notations to enhance readability and ensure clarity in our presentation. The following list outlines the notations that will be utilized throughout this paper: **Notations:**[\[notation1\]]{#notation1 label="notation1"} Let $R$ be a $d$-dimensional Noetherian ring, $Y$ be a variable over $R$ and $\mathfrak{p}$ a prime ideal of $R$. Let $N$ be a finitely generated $R$-module of dimension $d$: - $F$: A free $R$-module of positive rank $p$. - $S = \operatorname{Sym}_R(F) = R[t_1, \dots, t_p]$: The symmetric algebra of $F$ over $R$. - $S_{\mathfrak{p}} = \operatorname{Sym}_{R_{\mathfrak{p}}}F_{\mathfrak{p}}$: The localization of $S$ in $\mathfrak{p}$. - $M=S\otimes_RN=\oplus_{q\geq0}M_q=\oplus_{q\geq0}F^q\otimes_RN$. Note that $M$ is a finitely generated graded $S$-module of dimension $d+p$. - $F[Y] = R[Y]^p$: The $R[Y]$-free module of rank $p$ generated in variable $Y$. - $S[Y] = \operatorname{Sym}_{R[Y]}F[Y]$: The symmetric algebra of $F[Y]$ over $R[Y]$. - $M[Y] = S[Y] \otimes_{R} N = \bigoplus_{q \geq 0} F[Y]^q \otimes_R N$. - $E(B)$: The image of $E \subseteq R^p$ in the $B$-free module $B^p$ by the ring homomorphism $\varphi: R \to B$. For instance, if $B = R/\mathfrak{p}$, then $E(R/\mathfrak{p})$ corresponds to the image of $E$ in the $R/\mathfrak{p}$-free module $F/\mathfrak{p}F$, i.e., $E(F/\mathfrak{p}F) = (E+\mathfrak{p}F)/\mathfrak{p}F$; and $E': =EF' =E(F/(x)) = (E+(x))/(x)$. - Let ${\bf n} = (n_1, \ldots, n_k)$ and ${\bf e} = (e_1, \ldots, e_k)$ denote multi-indices. The norm $|{\bf n}|$ is represented by $|{\bf n}| = n_1 + \ldots + n_k$, the inner product of ${\bf e}$ and ${\bf n}$ is given as ${\bf e} \cdot {\bf n} = e_1n_1 + \ldots + e_kn_k$, and ${\bf n}!$ denotes the product of factorials ${\bf n}! = n_1! \cdots n_k!$. If ${\bf d} = (d_1, \ldots, d_k)$ is another multi-index, we denote ${\bf n}^{\bf d} = n_1^{d_1} \cdots n_k^{d_k}$. Now, let ${\bf E} = (E_1, \ldots, E_k)$ represent the multi-index of submodules. We denote ${\bf E}^{\bf n} = E_1^{n_1} \cdots E_k^{n_k}$. Additionally, we use the notation $\delta(i) = (\delta(i,1), \ldots, \delta(i,k))$, where $\delta(i,j) = 1$ if $i = j$ and $\delta(i,j) = 0$ otherwise. Now, we are ready to give some definitions and results that will be used throughout the paper. **Definition 5** (Reduction of a module). An $R$-submodule $U$ of $E$ is called a *reduction of $E$* if there exists an integer $n_0 > 0$ such that $E^{n+1} = UE^n$ for all $n \geq n_0$. A reduction $U \subseteq E$ is said to be a *minimal reduction of $E$* if $U$ properly contains no further reductions of $E$. If $U$ is a reduction of $E$, then there exists at least one $R$-submodule $L$ in $U$ such that $L$ is a minimal reduction of $U$. The proof of this statement follows the same deduction as the proof of Huneke-Swanson in [@SH Theorem 8.3.5] or in Ooishi in [@ooishi Theorem 3.3]. **Definition 6** (Integral closure of a module). 
Given an integer $n \geq 0$, the *integral closure of the $n$-th power module $E^n$* is given as $$\overline{E^n} = \left(\overline{\mathscr{R}(E)}^S\right)_n \subseteq S_n = F^n,$$ which represents the $n$-th homogeneous component of the integral closure $\overline{\mathscr{R}(E)}^S$ of $\mathscr{R}(E)$ in $S$. In other words, $\overline{E^n}$ is the integral closure of the ideal $(ES)^n$ of degree $n$. In particular, $\overline{E} = (\overline{ES})_1 \subseteq F$. Therefore, $\overline{E}$ consists of the elements $x \in F$ that satisfy the integral equation $x^n + c_1x^{n-1} + \cdots + c_{n-1}x + c_n = 0$ in $S$, where $n > 0$ and $c_i \in E^i$ for every $1 \leq i \leq n$. Next, we have an elementary fact relating reduction and integral closure. **Remark 7**. [@PF Remark 2.2]. [\[remarkideal\]]{#remarkideal label="remarkideal"} Suppose that $L \subseteq E$ are two $R$-submodules, and let $I_L$ and $I_E$ be the ideals of $S$. The following conditions are equivalent: - $L$ is a reduction of $E$. - $I_L$ is a reduction of $I_E$. - $I_E \subseteq \overline{I_L}$. - $E \subseteq \overline{L}$. **Lemma 8**. *Let $R$ be a ring, not necessarily Noetherian. An element $r \in F$ is in the integral closure of $E$ if and only if for every minimal prime ideal $\mathfrak{p}$ in $R$, the image of $r$ in $F/\mathfrak{p}F$ is in the integral closure of $E\left(F/\mathfrak{p}F\right)$, denoted as $(E + \mathfrak{p}F)/\mathfrak{p}F$.* *Proof.* By [@SH Proposition 1.1.5], it is stated that an element $r \in S$ is in the integral closure of $I_E$ if and only if for every minimal prime ideal $\mathfrak{p}$ in $S$, the image of $r$ in $S/\mathfrak{p}$ is in the integral closure of $(I_E + \mathfrak{p})/\mathfrak{p}$. Using this result, along with Remarks [\[remarkideal\]](#remarkideal){reference-type="ref" reference="remarkideal"} and the persistence of the integral closure, we can conclude the desired result. ◻ The definitions of joint reduction and superficial sequence, in this context, can be given similarly to the definitions given in [@Bedregal-Perez2 Definition 3.1 and 3.4]. **Definition 9** (Joint reduction). Let $E_1,\ldots, E_{k}$ be $R$-submodules of $F$. A sequence of elements $x_1,\ldots, x_{k}$ with $x_i \in E_i$ is called a *joint reduction* for $E_1,\ldots, E_{k}$ with respect to the $R$-module $N$ if there exists an integer $n\geq0$ such that for all $q \geq 0$, the following equality holds: $$\left[ \sum_{i=1}^k x_iE_1 \cdots E_{i-1}E_{i+1}\cdots E_k \right](E_1 \cdots E_k)^{n-1}M_q = (E_1\cdots E_k)^nM_q,$$ where $M_q=F^q\otimes_RN$, for a positive integer $q$. In the special case when $N=R$, we say that the sequence $x_1,\ldots, x_{k}$ is a joint reduction for $E_1,\ldots, E_{k}$. **Remark 10**. Notice that a sequence of elements $x_1,\dots,x_k$ with $x_i \in E_i$ is a joint reduction for $E_1,\dots,E_k$ with respect to $N$ if and only if the sequence $w(x_1),\dots,w(x_k)$ is a joint reduction for $I_{E_1},\dots,I_{E_k}$ with respect to $M=S\otimes_RN$, where $w(x_i)$ denotes the image of $x_i$ in $S\otimes_RN$. This follows from Rees' definition [@Rees] and the definition given by Huneke-Swanson [@SH Definition 17.1.1]. For more details, see those references. **Proposition 11**. *Let $(R,\mathfrak{m})$ be a Noetherian local ring and $(x_1, \dots , x_k)$ be an $R$-submodule of $F$ such that $\ell_R(F/(x_1, \dots , x_k))<\infty$. 
Then, the sequence $(x_1, \dots , x_k)$ is a joint reduction for the submodule sequence $((x_1 ) + \mathfrak{m}^nF , \dots , (x_k) + \mathfrak{m}^nF)$ for all sufficiently large $n$.* *Proof.* Since $\ell_R(F/(x_1, \dots , x_k))<\infty$, there exists $n$ such that $\mathfrak{m}^nF \subseteq (x_1, \dots , x_k)$. Let $$E = \sum_{i=1}^kx_i\left((x_1 ) + \mathfrak{m}^nF\right)\cdots \left((x_{i-1} ) + \mathfrak{m}^nF\right)\left((x_{i+1}) + \mathfrak{m}^nF\right)\cdots ((x_k ) + \mathfrak{m}^nF).$$ Then $$\begin{aligned} ((x_1 ) + \mathfrak{m}^nF)\cdots ((x_k) + \mathfrak{m}^nF) &= E + \mathfrak{m}^{n}F^k\\ &\subseteq E + (x_1,\dots,x_k)F^{k-1}\\ &\subseteq E\\ &\subseteq \left((x_1 ) + \mathfrak{m}^nF\right)\cdots \left((x_k) + \mathfrak{m}^nF\right).\end{aligned}$$ Therefore the equality holds, and it follows that the submodule sequence $((x_1 ) + \mathfrak{m}^nF , \dots , (x_k) + \mathfrak{m}^nF)$ forms a joint reduction. ◻ The definition of a superficial element for $E_1, \ldots, E_k$ with respect to $N$ can be stated as follows: **Definition 12** (Superficial element). Let $E_1,\ldots, E_{k}$ be $R$-submodules of $F$ and $N$ an $R$-module. An element $x\in E_1$ is a *superficial element for $E_1,\ldots,E_k$ with respect to $N$* if there exists a non-negative integer $c_1$ such that for all $n_1\geq c_1$ and all $n_2,\ldots,n_k\geq 0$, $q\geq 0$, $$\left({\bf E}^{{\bf n}}M_q:_{M_{|{\bf n}|-1+q}} x\right)\cap E_1^{c_1}E_2^{n_2}\cdots E_k^{n_k}M_{n_1-1-c_1+q}= E_1^{n_1-1}E_2^{n_2}\cdots E_k^{n_k}M_q.$$ where $M_q=F^q\otimes_RN$. **Remark 13**. Notice that $x_1 \in E_1$ is a superficial element for $E_1,\ldots, E_{k}$ with respect to $N$ in the above sense if and only if $w(x_1)\in I_{E_1}$ is a superficial element for $I_{E_1},\ldots, I_{E_{k}}$ with respect to $N$ (in the sense of [@SH Definition 17.2.1]). A sequence of elements $x_1,\dots,x_k,$ with $x_i \in E_i$, is a *superficial sequence for $E_1,\dots,E_k$ with respect to $N$* if for all $i=1,\dots,k$, $x_i\in E_i$, is such that the sequence $w(x_1),\dots,w(x_k)$ is a superficial sequence for $I_{E_1},\dots,I_{E_k}$ with respect to $M=S\otimes_RN$. **Proposition 14**. *(Existence of superficial elements) [@Bedregal-Perez2 Proposition 3.3][\[propo3.3BP\]]{#propo3.3BP label="propo3.3BP"} Let $(R,\mathfrak{m})$ be a Noetherian local ring with an infinite residue field and $E_1,\dots, E_k$ be $R$-submodules of $F$. Then there exists a superficial sequence for $E_1,\dots, E_{k}$ with respect to the $R$-module $N$. Specifically, there exists a non-empty Zariski-open subset $U$ of $E_1/\mathfrak{m}E_1$ such that for any $x \in E_1$ with image in $U$, $x$ is a superficial element for $E_1,\dots, E_k$ with respect to $N$. Moreover, if $E_1$ is not contained in the prime ideals $\mathfrak{p}_1, \dots, \mathfrak{p}_s$ of $R$. Then $x$ can be chosen to avoid the same prime ideals.* # Buchsbaum-Rim multiplicity, mixed multiplicity and g-multiplicity {#section3} In this section, we review some definitions and results about Buchsbaum-Rim multiplicity, mixed Buchsbaum-Rim multiplicity and g-multiplicity, and also we give some basic properties of those multiplicities. They will be fundamental for the rest of the paper. ## Buchsbaum-Rim multiplicity {#secBR} Suppose that $(R,\mathfrak{m})$ is a $d$-dimensional Noetherian local ring, $E$ is an $R$-submodule of $F$ such that $0<\ell_R(F/E)<\infty$ and Let $N$ be an $R$-module of dimension $d$. 
Buchsbaum-Rim, in [@buchsbaum Section 3], showed that there exists an integer ${\rm e}_{\rm BR}(E;N)\in \mathbb{Z}$ such that the length $\ell_R\left(\frac{F^n\otimes_RN}{E^n\otimes_RN}\right)$ can be expressed as a polynomial of the form $$P_E^F(n;N)={\rm e}_{\rm BR}\left(E;N\right)\frac{n^{d+p-1}}{\left(d+p-1\right)!}+\mbox{lower terms}$$ for $n$ sufficiently large. The integer ${\rm e}_{\rm BR}(E;N)$ is called the *Buchsbaum-Rim multiplicity of $E$ with respect to $N$*. This multiplicity is also defined as $${\rm e}_{\rm BR}(E;N)=\lim\limits_{n\to \infty}\left(d+p-1\right)!\frac{\ell_R\left(\frac{F^n\otimes_RN}{E^n\otimes_RN}\right)}{n^{d+p-1}}.$$ When $N = R$ we will denote ${\rm e}_{\rm BR}(E):={\rm e}_{\rm BR}(E;R)$. Let $E_1 \subseteq E_2$ be $R$-submodules of $F$ such that $\ell_R(F/E_1) < \infty$, and suppose that $R$ is a formally equidimensional ring. It is well known that $E_1$ is a reduction of $E_2$ if and only if the Buchsbaum-Rim multiplicity of $E_1$ is equal to the Buchsbaum-Rim multiplicity of $E_2$, i.e., ${\rm e}_{\rm BR}(E_1) = {\rm e}_{\rm BR}(E_2)$. For more details, see, for example, [@buchsbaum], [@Katz], [@SH p. 317, Corollary 16.5.7], and [@Kleiman-Thorup Theorem 5.7]. This result generalizes Rees' theorem for ideals. More explicitly, we have the following:

**Lemma 15** (Rees' theorem). *Let $(R,\mathfrak{m})$ be a formally equidimensional $d$-dimensional Noetherian local ring and $E_1\subseteq E_2$ be $R$-submodules of $F$ with $\ell_R(F/E_i)<\infty$, for $i=1,2$. Then $E_1$ is a reduction of $E_2$ in $F$ if and only if ${\rm e}_{\rm BR}(E_1)={\rm e}_{\rm BR}(E_2)$.*

## Buchsbaum-Rim multiplicity in arbitrary degree {#section4.2}

In analogy with Subsection [3.1](#secBR){reference-type="ref" reference="secBR"}, in which we defined the classical Buchsbaum-Rim multiplicity, we consider the Buchsbaum-Rim multiplicity in higher degree. Fix some integer $e>0$. Using the notation described in Section [2](#section2){reference-type="ref" reference="section2"}, let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring and $E$ be an $R$-submodule of $F^e:=S_e$ such that $\ell_R(F^{e}/E) < \infty$. Let $N$ be an $R$-module of dimension $d$. For integers $n$ sufficiently large, Kleiman and Thorup in [@Kleiman-Thorup2 Section 8 and Proposition 8.2] showed the existence of an integer ${\rm e}^{d+p-1,0}(E;N)\in \mathbb{Z}$ such that $\ell_R\left(\frac{F^{en}\otimes_RN}{E^n\otimes_RN}\right)$ can be expressed as a polynomial of the form $$P_E^F\left(n;N\right)={\rm e}^{d+p-1,0}\left(E;N\right)\frac{n^{d+p-1}}{\left(d+p-1\right)!}+\mbox{lower terms}.$$ The integer ${\rm e}^{d+p-1,0}(E;N)$ is called the *Buchsbaum-Rim multiplicity in higher degree of $E$ in $F^e$*. In this paper, for simplicity, we will denote ${\rm e}^{d+p-1,0}(E;N)$ by ${\rm \tilde{e}}_{\rm BR}(E;N)$. Note that if $e=1$, the integer ${\rm e}^{d+p-1,0}(E;N)$ coincides with the usual Buchsbaum-Rim multiplicity ${\rm e}_{\rm BR}(E;N)$.

## Mixed multiplicity for modules {#section4.3}

Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring and $E_1, \ldots, E_k$ be $R$-submodules of $F$ with $\ell_R(F/E_i) < \infty$, for $i = 1, \ldots, k$. Let $N$ be an $R$-module of dimension $d$. The length function $h_R(n_1, \ldots, n_k; N) = \ell_R\left(\frac{F^{|{\bf n}|}\otimes_RN}{{\bf E}^{{\bf n}}\otimes_RN}\right)$ is a polynomial of total degree at most $d+p-1$, for sufficiently large values of $n_1, \ldots, n_k > 0$.
The leading term of this polynomial can be expressed as: $$\sum_{|{\bf d}|=d+p-1}\;\frac{1}{{\bf d}!}{\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right){\bf n}^{\bf d}.$$ Here, ${\bf d} = (d_1, \ldots, d_k)$ is a multi-index such that $|{\bf d}| = d+p-1$. The coefficients ${\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)$ are called the *mixed Buchsbaum-Rim multiplicities of $E_1, \ldots, E_k$ of type $(d_1, \ldots, d_k)$ with respect to $N$*. In this definition, $E_i^{[d_i]}$ denotes that the module $E_i$ is listed $d_i$ times. This definition generalizes the notion of mixed multiplicities of $\mathfrak{m}$-primary ideals. The definition of ${\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)$ is consistent with the definitions given by Bedregal-Perez [@Bedregal-Perez2 Section 4], Kirby-Rees [@Kirby-Rees1], and Kleiman-Thorup [@Kleiman-Thorup2 Section 8]. In the special case when $k = d+p-1$ and $d_1 = \cdots = d_k = 1$, we can denote ${\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)$ as ${\rm e}_{\rm BR}\left(E_1, \ldots, E_{d+p-1};N\right)$. If $N = R$, we use the notation ${\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]}\right)$ to denote ${\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};R\right)$.

## Mixed multiplicity for modules in arbitrary degree {#MixHdegree}

Fix some integers $e_i\in \mathbb{N}$. Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring and $E_1, \ldots, E_k$ be $R$-submodules of $F^{e_1}, \ldots, F^{e_k}$ such that $\ell_R(F^{e_i}/E_i) < \infty$ for all $i = 1, \ldots, k$. Let $N$ be an $R$-module of dimension $d$. Then, the length function $h_R\left(n_1, \ldots, n_k; N\right) = \ell_R\left(\frac{F^{{\bf e\cdot n}}\otimes_RN}{{\bf E}^{{\bf n}}\otimes_RN}\right)$, for sufficiently large values of $n_1, \ldots, n_k > 0$, can be expressed as a polynomial of total degree at most $d+p-1$, and its leading term is given by: $$\sum_{|{\bf d}|=d+p-1}\;\frac{1}{{\bf d}!}{\rm \tilde{e}}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right){\bf n}^{\bf d},$$ where ${\bf d} = (d_1, \ldots, d_k)$ is a multi-index with $|{\bf d}| = d+p-1$. The coefficients ${\rm \tilde{e}}_{\rm BR}(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N)$ are called the *mixed Buchsbaum-Rim multiplicities of $E_1, \ldots, E_k$ of type $(d_1, \ldots, d_k)$ with respect to $N$*. This definition agrees with the one given in [@Kleiman-Thorup2 p. 568]. Moreover, it is important to note that when $e_1 = e_2 = \cdots = e_k = 1$, the mixed Buchsbaum-Rim multiplicity reduces to the usual mixed multiplicity, that is, $${\rm \tilde{e}}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]}; N\right) = {\rm e}_{\rm BR}\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right).$$

## Multiplicity system

In this subsection, we will define the $g$-multiplicity system introduced by Kirby in [@Kirby Proposition 2]. In our context, this pertains to the ring $S = \operatorname{Sym}_R(F)$. To do this, we will recall the definition of the Koszul complex $K_{\bullet}(a_1, \ldots, a_m; N)$ of a set of homogeneous elements $a_1, \ldots, a_m$ in $S$ with $\deg(a_i) = k_i$ for $i = 1, \ldots, m$, with respect to an $R$-module $N$. Let $G = \bigoplus_{i=1}^m S(-k_i)$ be a free $S$-module with basis $e_1, \dots, e_m$.
The homomorphism $\phi: G \to S$ defined by $\phi(e_i) = a_i$ gives rise to the Koszul complex $K_{\bullet}(a_1,\dots, a_m;S)$, where the $n$-th graded piece is $K_n(a_1,\dots, a_m;S) = \bigwedge^n G$ and the differential $d_n: \bigwedge^n G \to \bigwedge^{n-1} G$ is defined as $$d_n(x_1 \wedge \cdots \wedge x_n) = \sum_{i=1}^n (-1)^{i+1} \phi(x_i)x_1 \wedge \cdots \wedge \widehat{x_i} \wedge \cdots \wedge x_n,$$ where $\widehat{x_i}$ denotes the omission of $x_i$. By identifying the basis elements $e_{i_1} \wedge \cdots \wedge e_{i_n}$ with $e_{{i_1,\dots,i_n}}$, the complex $K_{\bullet}(a_1,\dots, a_m;S)$ can be written as $K_n(a_1,\dots, a_m;S) = \bigoplus_{1\leq i_1<\cdots <i_n\leq m} Se_{{i_1,\dots,i_n}}$, where $\deg(e_{{i_1,\dots,i_n}}) = k_{i_1} + \dots + k_{i_n}$. This complex has a graded structure, and the differential $d_n$ is a homogeneous morphism of graded modules. Let $N$ be an $R$-module of dimension $d$. For the graded $S$-module $M = S \otimes_R N = \oplus_{q\geq 0} F^q \otimes_R N$, the homological Koszul complex $K_{\bullet}(a_1,\dots, a_m;N)$ is obtained by tensoring $K_{\bullet}(a_1,\dots, a_m;S)$ with $N$, where $d_n \otimes \operatorname{id}_N$ defines the differentials. The complex $K_{\bullet}(a_1,\dots, a_m;N)$ is a graded $S$-module, with each component $[K_{\bullet}(a_1,\dots, a_m;N)]_t$ corresponding to a complex of $R$-modules. The Koszul homology modules $H_n(a_1,\dots, a_m;N)$ are defined as the homology modules of the Koszul complex, that is, $$H_n\left(a_1,\dots, a_m;N\right) = \frac{\operatorname{Ker}\left(d_n \otimes \operatorname{id}_N\right)}{\operatorname{Im}\left(d_{n+1} \otimes \operatorname{id}_N\right)},$$ for $n \geq 0$. These modules are finitely generated graded $S$-modules. The $i$-th homology module of the component $[K_{\bullet}(a_1,\dots, a_m;N)]_t$ is denoted by $H_iK_t(a_1,\dots, a_m;N)$. The set of homogeneous elements $a_1,\dots,a_m$ of $S$ forms a *$g$-multiplicity system* with respect to $N$ if the components $(M/\sum_{i=1}^m a_i M)_t$ have finite $R$-length for sufficiently large $t\geq 0$. In this case, the Koszul homology modules $H_iK_t(a_1,\dots, a_m;N)$ also have finite length for sufficiently large $t\geq 0$. D. Kirby [@Kirby Proposition 2] introduced the *$g$-multiplicity* of degree $t$ with respect to $N$, denoted by ${\rm e_t}(a_1,\dots,a_m;N)$, defined as $${\rm e_t}((a_1,\dots,a_m);N) = \sum_{i=0}^m (-1)^i \ell_R(H_iK_t(a_1,\dots,a_m ;N)).$$

## Some properties about multiplicities

In this subsection, we state results regarding multiplicities and provide properties that establish connections between previously defined multiplicities. These results carry significant importance in the subsequent sections, where they will serve as fundamental tools in proving the main result. One important result in this context is the associativity formula for the Buchsbaum-Rim multiplicity, which has been proved by Kleiman [@Kleiman2 Proposition 7]. Similarly, a formula for the Buchsbaum-Rim multiplicity can be found in [@Bedregal-Perez2 Corollary 2.8] and [@Kirby-Rees1 Theorem 4.5(v)]. Furthermore, Validashti [@Validashti Theorem 6.5.1] establishes an associativity formula for the $j$-multiplicity, which generalizes the Buchsbaum-Rim multiplicity. These associativity formulas provide a way to compute multiplicities or mixed multiplicities on domains. Specifically, they are applicable to reduced Noetherian local rings that are domains.

**Lemma 16** (Associativity formula for Buchsbaum-Rim multiplicity).
*Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E$ be an $R$-submodule of $F$ such that $\ell_R(F/E) <\infty$ and $N$ be an $R$-module of dimension $d$. Define $\Lambda_d=\{\mathfrak{p}\in \operatorname{Supp}(N) : \dim( R/\mathfrak{p})=d\}$. Then $${\rm e}_{\rm BR}\left(E;N\right)=\sum_{\mathfrak{p}\in \Lambda_d }\ell_{R_{\mathfrak{p}}}\left(N_{\mathfrak{p}}\right) {\rm e}_{\rm BR}\left(E\left(\frac{R}{\mathfrak{p}}\right);\frac{R}{\mathfrak{p}}\right).$$* The associativity formula for the mixed Buchsbaum-Rim multiplicity is also well-known, see Bedregal-Perez [@Bedregal-Perez2 Proposition 5.4] or Kirby-Rees [@Kirby-Rees1 Theorem 6.3(iii)]. **Lemma 17** (Associativity formula for mixed multiplicities). *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E_1, \ldots, E_k$ be $R$-submodules of $F$ such that $\ell_R(F/E_i) < \infty$ for all $i = 1, \ldots, k$, and let $N$ be an $R$-module of dimension $d$. Let $\Lambda_d = \{\mathfrak{p}\in \operatorname{Supp}(N) : \dim(R/\mathfrak{p}) = d\}$. Then for any integers $d_1, \ldots, d_k$ with $|\mathbf{d}| = d + p - 1$, $${\rm e}_{\rm BR}\left(E^{[d_1]}_1, \ldots, E^{[d_k]}_k; N\right) = \sum_{\mathfrak{p} \in \Lambda_d} \ell_{R_{\mathfrak{p}}}\left(N_{\mathfrak{p}}\right) {\rm e}_{\rm BR} \left(E^{[d_1]}_1\left(\frac{R}{\mathfrak{p}}\right), \ldots, E^{[d_k]}_k\left(\frac{R}{\mathfrak{p}}\right); \frac{R}{\mathfrak{p}}\right).$$* **Lemma 18**. *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E_1, \dots, E_{d+p-1}$ be $R$-submodules of $F$ such that $\ell_R(F/E_i) < \infty$ for all $i=1, \dots, d+p-1$, and let $N$ be an $R$-module of dimension $d$.* - *If $L_1, \dots, L_{d+p-1}$ are $R$-submodules of $F$ with $\ell_R(F/L_i) < \infty$ for $i=1, \dots, d+p-1$, and $L_i \subseteq E_i$. Then $${\rm e}_{\rm BR}\left(L_1, \ldots, L_{d+p-1}; N\right) \geq {\rm e}_{\rm BR}\left(E_1, \ldots, E_{d+p-1}; N\right).$$* - *If $x_i \in E_i$, for $i = 1, \dots, d+p-1$, and $(x_1, \dots, x_{d+p-1})$ is an $R$-module with $\ell_R(F/(x_1, \dots, x_{d+p-1})) < \infty$. Then $${\rm e}_{\rm BR}\left(\left(x_1, \dots, x_{d+p-1}\right); N\right) \geq {\rm e}_{\rm BR}\left(E_1, \dots, E_{d+p-1}; N\right).$$* *Proof.* The proof of (i) follows from [@Bedregal-Perez2 Corollary 6.4]. To prove (ii), first apply Proposition [Proposition 11](#Proposition 17.3.3){reference-type="ref" reference="Proposition 17.3.3"} to construct $R$-modules of finite length, $L_1, \dots, L_{d+p-1}$, with $x_i \in L_i \subseteq E_i$ for all $i$, and $(x_1, \dots, x_{d+p-1})$ being a joint reduction of $L_1, \dots, L_{d+p-1}$ with respect to $N$. Then (ii) follows from (i) and [@Bedregal-Perez2 Theorem 5.5]. ◻ **Proposition 19**. *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E$ be an $R$-submodule of $F$ such that $\ell_R(F/E)<\infty$ and let $N$ be an $R$-module of dimension $d$.* 1. *If $r$ is a positive integer, then ${\rm \tilde{e}}_{\rm BR}(E^r) = {\rm e}_{\rm BR}(E)r^{d+p-1}$.* 2. 
*If $E = (x_1, \dots, x_{d+p-1})$, then for any $l_1, \dots, l_{d+p-1} \in \mathbb{N}$, $${\rm e}_{t}\left(x_1^{l_1}, \dots, x_{d+p-1}^{l_{d+p-1}}; N\right) = l_1 \cdots l_{d+p-1} \cdot {\rm e_{BR}}\left((x_1, \dots, x_{d+p-1}); N\right),$$ for sufficiently large $t>0$.* *Proof.* (i) For $n$ sufficiently large, $$\ell_R\left(\frac{F^{rn}}{\left(E^r\right)^n}\right) = {\rm \tilde{e}}_{\rm BR}(E^r)\frac{n^{d+p-1}}{\left(d+p-1\right)!} + \text{lower terms}.$$ On the other hand, for $n$ sufficiently large, $$\begin{array}{rcl} \displaystyle \ell_R\left(\frac{F^{rn}}{E^{rn}}\right) & = & \displaystyle {\rm e}_{\rm BR}(E)\frac{\left(rn\right)^{d+p-1}}{\left(d+p-1\right)!} + \text{lower terms} \\ \\ & = & \displaystyle {\rm e}_{\rm BR}(E)r^{d+p-1}\frac{n^{d+p-1}}{\left(d+p-1\right)!} + \text{lower terms}. \end{array}$$ Now, comparing the coefficients of $n^{d+p-1}$, we obtain the desired equality. \(ii\) By [@Bedregal-Perez3 Lemma 2.7(i)], $${\rm e}_{t}\left(x_1^{l_1}, \dots, x_{d+p-1}^{l_{d+p-1}}; N\right) = l_1 \cdots l_{d+p-1} \cdot {\rm e_{t}}\left(x_1, \dots, x_{d+p-1}; N\right).$$ Since $x_i$ are homogeneous elements of degree one, by [@Bedregal-Perez3 Theorem 4.1], follows ${\rm e_{t}}(x_1, \dots, x_{d+p-1}; N) = {\rm e_{\rm BR}}((x_1, \dots, x_{d+p-1}); N)$. ◻ # The Risler-Teissier Mixed Multiplicity Theorem {#section4} In this section, we state and prove the *Risler-Teissier Theorem* for finitely generated $R$-submodules $E_i$ of $F^{e_i}$, when $e_i>0$ are positive integers. This result generalizes the Risler-Teissier Theorem for ideals given in [@Teissier Proposition 2.1], and also generalizes the Risler-Teissier theorem for modules given in [@Bedregal-Perez2 Theorem 5.2] when $e_i=1$, for all $i$. For this, let us first recall the definition of associated mixed Buchsbaum-Rim multiplicity introduced by Kirby-Rees in [@Kirby-Rees1] and by Kleiman-Thorup in [@Kleiman-Thorup2 p. 532 and 568]. Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring, $E_1, \ldots, E_k$ be $R$-submodules of $F^{e_1}, \ldots, F^{e_k}$ such that $\ell_R(F^{e_i}/E_i) < \infty$ for all $i = 1, \ldots, k$, $e_i \in \mathbb{N}$ be fixed integers. Let $N$ be an $R$-module of dimension $d$, and consider the graded $S$-module $M=S\otimes_R N$. The length function $$h_R(n_1, \ldots, n_k,q; N)=\ell_R\left(\frac{M_{{\bf e \cdot n}+q}}{{\bf E}^{{\bf n}}{M_q}}\right),$$ for sufficiently large $n_1, \ldots , n_k, q$, is a polynomial of total degree at most $d+p-1$, whose leading term can be written as $$\sum_{j+|{\bf d}|=d+p-1}\;\frac{1}{{\bf d}!}{\rm \tilde{e}}_{\rm BR}^j\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right){\bf n}^{\bf d}q^j,$$ where ${\bf d} = (d_1, \ldots, d_k)$ is a multi-index with $j+|{\bf d}| = d+p-1$. The coefficients ${\rm \tilde{e}}_{\rm BR}^j\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)$ are called the *associated mixed Buchsbaum-Rim multiplicity* of $E_1, \ldots, E_k$ of type $(d_1, \ldots, d_k)$ with respect to $N$. Note that, when $j=0$, the terms ${\rm \tilde{e}}_{\rm BR}^0\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)$ are the mixed Buchsbaum-Rim multiplicity ${\rm \tilde{e}}_{\rm BR}\left(E_1^{[d_1]}, \dots, E_k^{[d_k]};N\right)$ as defined in Subsection [3.4](#MixHdegree){reference-type="ref" reference="MixHdegree"} or see also [@Kleiman-Thorup2 p. 568]. **Lemma 20**. *Let $R$ be a Noetherian ring and let $N$ be an $R$-module. Let $E_1,\dots,E_k$ be $R$-submodules of $F^{e_i}$ with positive integers $e_i$, for all $i=1,\dots,k$. 
Suppose that $x \in E_1$ is a superficial element for $E_1, E_2, \ldots, E_k$ with respect to $N$. Assume that $I_{E_1}\subseteq \sqrt{I_{E_2}\cdots I_{E_k}}$ and $\cap_n I_{E_1}^n M = 0$, with $M=S\otimes_R N$. Then, for positive integer $q>0$, $$\left({\bf E}^{{\bf n}}M_q :_{M_{{\bf e\cdot n}-e_1+q}} x\right) = (0 :_{M_{{\bf e\cdot n}-e_1+q}} x) + {\bf E}^{{\bf n}-e_1\delta(1)}M_q\,\,\ \mbox{ and }\,\,\,\left(0 :_{M_{{\bf e \cdot n}-e_1+q}} x\right)\cap {\bf E}^{{\bf n}-e_1\delta(1)}M_q=0.$$* *Proof.* To prove this lemma, it is sufficient to consider the ideals $I_{E_1},\dots,I_{E_k}$ of $S$, for all $i=1,\dots,k$. Thus, according to [@SH Lemma 17.2.4], for all sufficiently large $n_1,\dots,n_k$, we obtain $$({\bf I}_{\bf E}^{{\bf n}}M :_M x) = (0 :_M x) + {\bf I}_{\bf E}^{{\bf n}-e_1\delta(1)}M\,\,\ \mbox{ and }\,\,\,\left(0 :_{M} x\right)\cap {\bf I}_{\bf E}^{{\bf n}-e_1\delta(1)}M=0.$$ Hence, the result follows by Remark [Remark 13](#relasuperficial){reference-type="ref" reference="relasuperficial"} and by concentrating on degree ${\bf e\cdot n}-e_1+q$ in the above equalities. ◻ **Theorem 21** (The Risler-Teissier mixed multiplicity theorem for $e_i\geq 1$). *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring with infinite residue field and let $N$ be an $R$-module of dimension $d$. Let $E_1,\dots, E_{k}$ be $R$-submodules of $F^{e_1},\dots, F^{e_k}$, respectively, such that, $\ell_R(F^{e_i}/E_i)<\infty$, $e_i\in \mathbb{N}$ for all $i=1,\dots,k$. Fix $c_1,\ldots,c_k\in\mathbb{N}$ and assume $e_1=\min\{e_1,\dots,e_k\}$. Let $x \in E_1$ be a superficial element for $E_1,\ldots,E_k$ with respect to $N$ and denote the $S$-graded module $M=S\otimes_RN$. Assume that $x$ is not contained in any minimal prime of ${\rm Ann}_S(M)$. Set $S'= S/xS$ and $M'= M/xM$. Then, for any integers $j, d_1,\ldots,d_k\in\mathbb{N}$ with $j+\vert{\bf d}\vert=d+p-1$, $d_1>0$, and for sufficiently large integers $n_1,\dots,n_k,$ $q$, $${\rm \tilde{e}}_{\rm BR}^j\left(E_1^{[d_1]},\ldots,E_k^{[d_k]};N\right)= \left\{\begin{array}{ll} \frac{1}{e_1}{\rm \tilde{e}}_{\rm BR}^j\left({E_1'}^{[d_1-1]},\ldots,{E_k'}^{[d_k]};M'\right), & \textrm{if}\ d+p > 2; \\ \\ \frac{1}{e_1}\left[\ell_R\left(\frac{F^{{\bf e\cdot n}+q}\otimes_RN}{(x)F^{{\bf e\cdot n}-e_1+q}\otimes_RN}\right) - \ell_R\left(0:_{F^{{\bf e\cdot n}-e_1+q}\otimes_RN} x\right)\right], & \textrm{if}\ d+p = 2. 
\end{array} \right.$$ where $E'_i$ is image of $E_i$ in $F^{e_i}/(x)F^{e_i-e_1}$ for all $i=1,\dots, k$.* *Proof.* By the exact sequence $$0 \longrightarrow \frac{\left({\bf E}^{{\bf n}}M_q:_{M_{{\bf e \cdot n}-e_1+q}} x\right)}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_q}} \stackrel{i}{\longrightarrow} \frac{M_{{\bf e\cdot n}-e_1+q}}{{\bf E}^{{\bf n}-e_1\delta(1)}M_q} \stackrel{\cdot x}{\longrightarrow} \frac{M_{{\bf e \cdot n}+q}}{{\bf E}^{{\bf n}}{M_q}} \stackrel{\pi}{\longrightarrow} \frac{M_{{\bf e \cdot n}+q}}{xM_{{\bf e \cdot n}-e_1+q}+{\bf E}^{{\bf n}}{M_q}} \longrightarrow 0,$$ we obtain $$\displaystyle \ell_R\left(\frac{M_{{\bf e \cdot n}+q}}{{\bf E}^{{\bf n}}{M_q}}\right)-\ell_R\left(\frac{M_{{\bf e \cdot n}-e_1+q}}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_q}}\right)=\ell_R\left(\frac{M_{{\bf e \cdot n}+q}}{xM_{{\bf e \cdot n}-e_1+q}+{\bf E}^{{\bf n}}M_q}\right)-\ell_R\left(\frac{({\bf E}^{{\bf n}}M_q :_{M_{{\bf e \cdot n}-e_1+q}} x)}{{\bf E}^{{\bf n}-e_1\delta(1)}M_q}\right).$$ Thus, by Lemma [Lemma 20](#Lemma 17.2.4Degree){reference-type="ref" reference="Lemma 17.2.4Degree"}, for large $n_1,\dots ,n_k$, say $n_1 \geq c_1, \dots ,n_k \geq c_k$, and all $q\geq 0,$ we get $\left(0 :_{M_{{\bf e \cdot n}-e_1+q}} x\right)\cong \frac{({\bf E}^{{\bf n}}M_q :_{M_{{\bf e\cdot n}-e_1+q}} x)}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_q}}.$ Then $$\label{Equa21} \displaystyle \ell_R\left(\frac{M_{{\bf e \cdot n}+q}}{{\bf E}^{{\bf n}}{M_q}}\right)-\ell_R\left(\frac{M_{{\bf e \cdot n}-e_1+q}}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_q}}\right)=\ell_R\left(\frac{M_{{\bf e \cdot n}+q}}{xM_{{\bf e \cdot n}-e_1+q}+{\bf E}^{{\bf n}}M_q}\right)-\ell_R\left(0 :_{M_{{\bf e \cdot n}-e_1+q}} x\right).$$ Then if $d+p=2$, we have that $M'_{{\bf e\cdot n}+q}=\frac{M_{{\bf e \cdot n}+q}}{xM_{{\bf e \cdot n}-e_1+q}+{\bf E}^{{\bf n}}M_q}$ and the left side of Equation ([\[Equa21\]](#Equa21){reference-type="ref" reference="Equa21"}) has coefficient ${\rm \tilde{e}}_{\rm BR}^j\left(E_1^{[d_1]},\ldots,E_k^{[d_k]};N\right)$ in its maximal dimension. Suppose now that $d+p > 2$ and consider the following exact sequences $$\label{Equa23} 0\longrightarrow \frac{{\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}}{x{\bf E}^{{\bf n}}{M_{q-e_1}}} \stackrel{i}{\longrightarrow} \frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}}{M_{q-e_1}}} \stackrel{\pi}{\longrightarrow} \frac{{\bf E}^{{\bf n}}M_{q}}{{\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}}\longrightarrow 0$$ and $\displaystyle 0\longrightarrow \frac{x\left({\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}:_{{\bf E}^{{\bf n}-e_1\delta(1)}M_q}x\right)}{x{\bf E}^{{\bf n}}{M_{q-e_1}}} \longrightarrow \frac{{\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}}{x{\bf E}^{{\bf n}}{M_{q-e_1}}} \stackrel{i}{\longrightarrow}$ $\displaystyle \frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \stackrel{\pi}{\longrightarrow} \frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}+{\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}}\longrightarrow 0.$ Consider the homomorphism: $$\phi:\frac{\left({\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}:_{{\bf E}^{{\bf n}-e_1\delta(1)}M_q}x\right)}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}\longrightarrow \frac{x\left({\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}:_{{\bf E}^{{\bf n}-e_1\delta(1)}M_q}x\right)}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}$$ with $\ker(\phi)=\{0\}$ since $x \in E_1$ is a superficial element and Lemma [Lemma 20](#Lemma 17.2.4Degree){reference-type="ref" reference="Lemma 17.2.4Degree"}. 
Then $\frac{\left({\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}:_{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}x\right)}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \cong \frac{x\left({\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}:_{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}x\right)}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}$. Similarly, we also get that $\frac{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{{\bf E}^{{\bf n}}{M_{q-e_1}}}\cong\frac{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{x{\bf E}^{{\bf n}}{M_{q-e_1}}}$. We obtain the following exact sequence: $$\label{Equa27} \xymatrix{ & \frac{\left({\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}:_{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}x\right)}{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \ar[d]^{\phi}& & & & \\ 0 \ar[r] & \frac{x\left({\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}:_{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}x\right)}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}\ar[r] & \frac{{\bf E}^{{\bf n}+e_1\delta(1)}M_{q-e_1}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \ar[r] & \frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \ar[r] & \frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}+{\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}} \ar[r] & 0 }$$ and $$\label{Equa26} 0 \longrightarrow \frac{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{{\bf E}^{{\bf n}}{M_{q-e_1}}} \longrightarrow \frac{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{x{\bf E}^{{\bf n}}{M_{q-e_1}}} \longrightarrow \frac{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}} \longrightarrow 0.$$ Therefore, from the exact sequences ([\[Equa23\]](#Equa23){reference-type="ref" reference="Equa23"}), ([\[Equa27\]](#Equa27){reference-type="ref" reference="Equa27"}) and ([\[Equa26\]](#Equa26){reference-type="ref" reference="Equa26"}) we have that $$\label{Equ29} \ell_R\left(\frac{{\bf E}^{{\bf n}}M_{q}}{x{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}+{\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}}\right)=\ell_R\left(\frac{{\bf E}^{{\bf n}}M_{q}}{{\bf E}^{{\bf n}+e_1\delta(1)}{M_{q-e_1}}}\right)-\ell_R\left(\frac{{\bf E}^{{\bf n}-e_1\delta(1)}{M_{q}}}{{\bf E}^{{\bf n}}{M_{q-e_1}}}\right).$$ By the definition above, for sufficiently large $n_1, \dots, n_k, q$, each term on the right side of ([\[Equ29\]](#Equ29){reference-type="ref" reference="Equ29"}), for $j=0$, is a polynomial whose leading term has the form: $$\sum_{|{\bf d}|=d+p-1}\;\frac{1}{{\bf d}!}{\rm \tilde{e}}_{\rm BR}^0\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)\left({n_1}^{d_1}-{\left(n_1+e_1\right)}^{d_1}\right){n_2}^{d_2}\dots{n_k}^{d_k}$$ and $$\sum_{|{\bf d}|=d+p-1}\;\frac{1}{{\bf d}!}{\rm \tilde{e}}_{\rm BR}^0\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)\left({n_1}^{d_1}-{\left(n_1-e_1\right)}^{d_1}\right){n_2}^{d_2}\dots{n_k}^{d_k}.$$ Thus, their difference, which is the left side of ([\[Equ29\]](#Equ29){reference-type="ref" reference="Equ29"}), is: $$\sum_{|{\bf d}|=d+p-1}\;\frac{1}{(d_1-2)!d_2!\dots d_k!}{\rm \tilde{e}}_{\rm BR}^0\left(E_1^{[d_1]}, \ldots, E_k^{[d_k]};N\right)e_1n_1^{d_1-2}{n_2}^{d_2}\dots{n_k}^{d_k}.$$ Therefore, the theorem follows by comparing the leading coefficients. ◻ It should be noted that the proof of the previous theorem is similar to the proof given in [@Bedregal-Perez2 Theorem 5.2], except for the submodules being contained in free modules of degree $e_1\geq 1$. For this reason, we omit the demonstration of certain isomorphisms. **Remark 22**. Based on the definition of mixed multiplicities, we have the following relationship: if $k = 1$, then ${\rm \tilde{e}}_{\rm BR}(E_1^{[d_1]};N) = {\rm \tilde{e}}_{\rm BR}(E_1;N)$. 
Consequently, using the Risler-Teissier mixed multiplicity theorem [Theorem 21](#Theorem 17.4.6degree){reference-type="ref" reference="Theorem 17.4.6degree"}, we obtain: $${\rm \tilde{e}}_{\rm BR}\left(E_1;N\right) = \left\{ \begin{array}{ll} \frac{1}{e_1}{\rm \tilde{e}}_{\rm BR}\left({E_1'};M'\right), & \textrm{if } d+p > 2, \\ \\ \frac{\ell_R\left(\frac{F^{e_1\cdot n}\otimes_RN}{(x)F^{e_1\cdot n-e_1}\otimes_RN}\right) - \ell_R\left(0:_{F^{e_1\cdot n-e_1}\otimes_RN} x\right)}{e_1}, & \textrm{if } d+p = 2. \end{array} \right.$$ Here $E_1'=(E_1+(x))/(x)$ and $M'=S\otimes_RN/xS\otimes_RN$. **Corollary 23**. *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring with infinite residue field. Let $E_1,\dots, E_{k}$ be $R$-submodules of $F^e$, where $e\in\mathbb{N}$, such that $\ell_R(F^e/E_i)<\infty$ for all $i=1, \dots, k$. Let $N$ be a finitely generated $R$-module of dimension $d$. Let $x_1,\dots ,x_{d+p-1}$ be any superficial sequence for $E_1,\dots, E_{k}$ with respect to $N$, with each $E_i$ listed $d_i$ times and each $(x_i)S$ not contained in any minimal prime ideal over $(x_1 ,\dots , x_{i-1})S$. Then, for any integers $d_1,\ldots,d_k\in\mathbb{N}$ with $d_1+d_2+\cdots+d_k=d+p-1$, $d_i>0$, and for large $n$, $${\rm \tilde{e}}_{\rm BR}\left(E_1^{[d_1]},\ldots,E_k^{[d_k]};N\right)=\frac{ \ell_R\left(\frac{M_{en}}{\left(x_1,\dots,x_{d+p-1}\right)M_{en-e}}\right) - \ell_R\left((x_1,\dots,x_{d+p-1})M_{en}:_{M_{en}} x_{d+p-1}\right)}{e^{d+p-1}},$$ where $M_n=F^n\otimes_RN$, which is equal to ${\rm\tilde{e}_{BR}}(x_1 ,\dots , x_{d+p-1} ; N )$.* *Proof.* To prove the first statement, we can use Theorem [Theorem 21](#Theorem 17.4.6degree){reference-type="ref" reference="Theorem 17.4.6degree"} iteratively. Now we prove the second statement. Suppose $d+p-1 > 1$, and let $E = (x_1,\dots,x_{d+p-1})$. By repeatedly utilizing Remark [Remark 22](#CorteRemark){reference-type="ref" reference="CorteRemark"}, we get $${\rm \tilde{e}}_{\rm BR}\left(E;N\right)=\frac{ \ell_R\left(\frac{M_{en}}{\left(x_1,\dots,x_{d+p-1}\right)M_{en-e}}\right) - \ell_R\left((x_1,\dots,x_{d+p-1})M_{en}:_{M_{en}} x_{d+p-1}\right)}{e^{d+p-1}},$$ which proves the corollary. ◻ In particular, if $e=1$, from Corollary [Corollary 23](#Theorem 17.4.6degreecor){reference-type="ref" reference="Theorem 17.4.6degreecor"} we get the same result given in [@Bedregal-Perez2 Theorem 5.2, Corollary 5.3]. **Corollary 24**. *Let $(R,\mathfrak{m})$ be a $d$-dimensional Noetherian local ring with infinite residue field. Let $E_1,\dots, E_{k}$ be $R$-submodules of $F$ such that $F/E_i$ has finite length for all $i=1, \dots, k$. Let $N$ be a finitely generated $R$-module of dimension $d$. Let $x_1,\dots ,x_{d+p-1}$ be any superficial sequence for $E_1,\dots, E_{k}$ with respect to $N$, with each $E_i$ listed $d_i$ times and each $(x_i)S$ not contained in any minimal prime ideal over $(x_1 ,\dots , x_{i-1})S$. Then, for any integers $d_1,\ldots,d_k\in\mathbb{N}$ with $d_1+d_2+\cdots+d_k=d+p-1$, $d_i>0$, and for large $n$, $${\rm e}_{\rm BR}\left(E_1^{[d_1]},\ldots,E_k^{[d_k]};N\right)= \ell_R\left(\frac{M_{n}}{(x_1,\dots,x_{d+p-1})M_{n-1}}\right) - \ell_R\left(\left(x_1,\dots,x_{d+p-1}\right)M_{n}:_{M_n} x_{d+p-1}\right),$$ where $M_n=F^n\otimes_RN$, which is equal to ${\rm e}_{\rm BR}(x_1 ,\dots , x_{d+p-1} ; N )$.* # Fundamental lemmas {#section5} In this section, we present some fundamental results that will be crucial to prove the main theorem of this paper. **Lemma 25**. 
*Let $(R, \mathfrak{m})$ be a Noetherian local ring with an infinite residue field. Let $E_1, \dots, E_k$ be finitely generated $R$-submodules of a free module $F$ with a positive rank $p$. Suppose $I_{E_1} \subseteq \sqrt{I_{E_2} \cdots I_{E_k}}$, and let $N$ be a finitely generated $R$-module. Consider a variable $Y$ over $R$ and an element $x \in E_1$. Then there exist positive integers $c$ and $e$, and a non-empty Zariski-open subset $U$ of $E_1/\mathfrak{m}E_1$ with the following property: for any $y \in E_1$ that has a natural image in $U$, and for any $l \geq e$ and sufficiently large $n_1, \dots, n_k > 0$ (depending on $l$), the following equalities hold:* *$${\bf E}^{\bf n}M[Y]_s \cap (x^l + y^lY)M[Y]_{|{\bf n}|+s-l} = (x^l + y^lY)E_1^{n_1-l}E_2^{n_2}\cdots E_k^{n_k}M[Y]_s,$$* *$$\left({\bf E}^{\bf n}M[Y]_s :_{M[Y]_{|{\bf n}|+s-l}} (x^l + y^lY)\right) \cap E_1^cE_2^{n_2}\cdots E_k^{n_k}M[Y]_{n_1-c+s} = E_1^{n_1-l}E_2^{n_2}\cdots E_k^{n_k}M[Y]_{l+s},$$ where $M[Y] = \oplus_{i \geq 0} F[Y]^i \otimes_R N.$* *Proof.* Let $Y$ be a variable over $R$, and let $x \in E_1$. Thus, we have $x \in I_{E_1}$. According to [@SH Lemma 17.5.2], there exist positive integers $c$ and $e$, and a non-empty Zariski-open subset $U$ of $I_{E_1}/\mathfrak{m}I_{E_1}$ with the following property: for any $y \in I_{E_1}$ that has a natural image in $U$, and for any $l \geq e$ and sufficiently large $n_1, \dots, n_k > 0$ (depending on $l$), the following equalities hold: $${\bf I}_{{\bf E}}^{\bf n}M[Y] \cap (x^l + y^lY)M[Y] = (x^l + y^lY)I_{E_1}^{n_1-l}I_{E_2}^{n_2}\cdots I_{E_k}^{n_k}M[Y],$$ $$\left({\bf I}_{{\bf E}}^{\bf n}M[Y]:_{M[Y]} (x^l + y^lY)\right) \cap I_{E_1}^cI_{E_2}^{n_2}\cdots I_{E_k}^{n_k}M[Y] = I_{E_1}^{n_1-l}I_{E_2}^{n_2}\cdots I_{E_k}^{n_k}M[Y].$$ Moreover, by Proposition [\[propo3.3BP\]](#propo3.3BP){reference-type="ref" reference="propo3.3BP"}, we can choose such an open set $U$ that only consists of elements of degree one in $I_{E_1}$, meaning that it contains only elements from $E_1$. Therefore, the desired result follows by concentrating on degree $|{\bf n}|+s$ in the above equalities. ◻ **Proposition 26**. *Let $(R, \mathfrak{m})$ be a $d$-dimensional Noetherian local ring. Let $N$ be a finitely generated $R$-module of dimension $d$. Suppose $E_1, E_2, \ldots, E_{d+p-1}$ are $R$-submodules of a module $F$ such that $F/E_i$ has finite colength for all $i = 1, 2, \ldots, d+p-1$. Then, for any positive integer $l$, $${\rm \tilde{e}}_{\rm BR}\left(E_1, E_2, \ldots, E_{d+p-2}, E_{d+p-1}^l; N\right) = l \cdot {\rm e}_{\rm BR}\left(E_1, E_2, \ldots, E_{d+p-1}; N\right).$$* *Proof.* Without loss of generality, let's assume $d_k > 0$, which means that the submodules $E_i$ are repeated at least once. By using [@SH Lemma 8.4.2 and Section 8.4], we can assume that the residue field of $R$ is infinite. According to Proposition [\[propo3.3BP\]](#propo3.3BP){reference-type="ref" reference="propo3.3BP"}, there exist elements $x_1, x_2, \dots, x_{d+p-1}$, where the $j$-th element is taken from $E_j$, such that they form a superficial sequence for $E_1, E_2, \dots, E_{d+p-1}$ with respect to $N$. Moreover, we can assume that for all positive integers $l$, $x_{d+p-1}^l \in E_{d+p-1}^l$ is superficial for $E_{d+p-1}^l$ with respect to $M' = S \otimes_R N / (x_1, x_2, \dots, x_{d+p-2})S \otimes_R N$. 
Therefore, we have the following equalities: $$\begin{aligned} {\rm \tilde{e}}_{\rm BR}\left(E_1, E_2, \dots, E_{d+p-2}, E_{d+p-1}^l; N\right) &= {\rm \tilde{e}}_{\rm BR}\left(E_1^{[0]}, E_2^{[0]}, \dots, E_{d+p-2}^{[0]}, E_{d+p-1}^l; M'\right)\,\, \text{by Theorem \ref{Theorem 17.4.6degree}} \\ &= {\rm \tilde{e}}_{\rm BR}(E_{d+p-1}^l; M') \\ &= l \cdot {\rm e}_{\rm BR}(E_{d+p-1}; M') \quad \text{by Proposition \ref{Proposition 11.2.9}} \\ &= l \cdot {\rm e}_{\rm BR}(x_{d+p-1}; M') \quad \text{by Corollary \ref{Theorem 17.4.6degreecor1}} \\ &= l \cdot {\rm e}_{\rm BR}(x_1, x_2, \dots, x_{d+p-2}, x_{d+p-1}; N) \\ &= l \cdot {\rm e}_{\rm BR}(E_1, E_2, \dots, E_{d+p-1}; N) \quad \text{by Corollary \ref{Theorem 17.4.6degreecor1}}. \end{aligned}$$ Thus, we have shown the result. ◻ **Lemma 27**. *Let $(R, \mathfrak{m})$ be a $d$-dimensional Noetherian local ring with an infinite residue field. Let $E_1, \dots, E_k$ be finitely generated $R$-submodules of $F$, and $x_i \in E_i$ for $i = 1, \dots, k$. Consider a variable $Y$ over $R$. Assume that the ideals $(x_1, \dots, x_k)S$ and $I_{E_i}$ have the same height $k$ and the same radical for all $i = 1, \dots, k$. Let $\mathfrak{p}$ be a prime ideal minimal over $((x_1, \dots, x_k):_RF)$ such that ${\rm e_{BR}}((x_1, \dots, x_k)_{\mathfrak{p}};R_{\mathfrak{p}}) = {\rm e_{BR}}({E_1}_{\mathfrak{p}}, \dots, {E_k}_{\mathfrak{p}};R_{\mathfrak{p}})$. Set $B = R[Y]_{\mathfrak{p}R[Y]}$. Then, there exists a non-empty Zariski-open subset $U$ of $E_1/\mathfrak{m}E_1$ (actually of $(E_1 + (x_k))/(\mathfrak{m}E_1 + (x_k))$) that can be lifted to a non-empty Zariski-open subset $U$ of $E_1/\mathfrak{m}E_1$ such that for any preimage $y$ of an element of $U$ and for all sufficiently large integers $l$, $${\rm e}_{\rm t}\left(\left(x_1^l + y^lY, x_2, \dots, x_k\right)_{\mathfrak{p}R[Y]}; B\right) = l \cdot {\rm e}_{BR}\left(\left(x_1, x_2, \dots, x_k\right)_{\mathfrak{p}}; R_{\mathfrak{p}}\right).$$* *Proof.* We use induction on $k$. If $k = 1$, choose $y$ as in Lemma [Lemma 25](#Lemma 17.5.2){reference-type="ref" reference="Lemma 17.5.2"}. Then, for all sufficiently large integers $l$, $x_1^l + y^lY$ is superficial for $E_1^l(R[Y])$, hence also for $E_1^l(R[Y]_{\mathfrak{p}R[Y]})$. Thus, $$\begin{array}{llll} {\rm e_t}\left((x_1^l+y^lY)_{\mathfrak{p}R[Y]};B\right) & = & l\cdot {\rm e_t}\left((x_1+yY)_{\mathfrak{p}R[Y]};B\right) & \mbox{by \cite[Lemma 2.7(i)]{Bedregal-Perez3}} \\ & = & l\cdot {\rm e}_{\rm BR}\left((x_1+yY)_{\mathfrak{p}R[Y]};B\right) & \mbox{by \cite[Theorem 4.1]{Bedregal-Perez3}} \\ & = & l\cdot {\rm {e}}_{\rm BR}\left({E_1}_{\mathfrak{p}R[Y]};B\right) & \mbox{by Corollary \ref{Theorem 17.4.6degreecor1}} \\ & = & l\cdot {\rm e_{BR}}\left({E_1}_{\mathfrak{p}};R_{\mathfrak{p}}\right) & \\ & = & l \cdot {\rm e_{BR}}\left((x_1)_{\mathfrak{p}};R_{\mathfrak{p}}\right) & \mbox{by hypothesis}. \end{array}$$ So the case $k = 1$ is proved. Now assume $k \geq 2$. Let $\mathfrak{p}$ be a prime ideal minimal over $\left((x_1,\dots,x_k):_RF\right)$. 
For $\mathfrak{q}\in {\rm Min}(R_{\mathfrak{p}}),$ set $A = R_{\mathfrak{p}}/\mathfrak{q}.$ By Lemma [Lemma 18](#Lemma 17.5.3){reference-type="ref" reference="Lemma 17.5.3"} (ii), if $\dim(A) = \dim(R_{\mathfrak{p}}),$ then $${\rm e_{BR}}\left((x_1,\cdots,x_k)_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q});A\right)\geq {\rm e_{BR}}\left({E_1}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}),\dots,{E_k}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}); A\right).$$ Let $\Lambda=\left\{\mathfrak{q}\in \operatorname{Supp}(R_{\mathfrak{p}}) : \dim\left( A/\mathfrak{q}\right)=\dim(R_{\mathfrak{p}})\right\}$. By Lemmas [Lemma 16](#Associativity){reference-type="ref" reference="Associativity"} and [Lemma 17](#AssociativityMix){reference-type="ref" reference="AssociativityMix"} follows $$\begin{array}{lll} {\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) &=&\sum\limits_{\mathfrak{q}\in \Lambda }\ell_{R_{\mathfrak{p}}}\left([R_{\mathfrak{p}}]_{\mathfrak{q}}\right) {\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q});A\right)\\ &=&\sum\limits_{\mathfrak{q}\in \Lambda }\ell\left(R_{\mathfrak{q}}\right) {\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q});A\right)\\ &\geq & \sum\limits_{\mathfrak{q}\in \Lambda}\ell\left(R_{\mathfrak{q}}\right) {\rm e_{BR}}({E_1}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}),\dots,{E_k}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}); A)\\ &=&{\rm e_{BR}}({E_1}_{\mathfrak{p}},\dots,{E_k}_{\mathfrak{p}}; R_{\mathfrak{p}}). \end{array}$$ But all the terms above must be equal (by hypothesis), so for each $A = R_{\mathfrak{p}}/\mathfrak{q}$, we have $${\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q});A\right) = {\rm e_{BR}}({E_1}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}),\dots,{E_k}_{\mathfrak{p}}(R_{\mathfrak{p}}/\mathfrak{q}); A).$$ Thus, the hypotheses of the lemma hold for each $R/\mathfrak{p}$ in place of $R$, with $\mathfrak{q}$ varying over those minimal prime ideals of $\Lambda$. If the conclusion holds with $R/\mathfrak{q}$ in place of $R$, then there exists a Zariski-open non-empty subset $U_{\mathfrak{q}}$ of $E_1/\mathfrak{m}E_1$ such that the conclusion of the lemma holds for $R/\mathfrak{q}$ in place of $R$. Then, by Lemma [Lemma 16](#Associativity){reference-type="ref" reference="Associativity"}, the conclusion holds in $R$ for $y$ a preimage of any element of the non-empty Zariski-open subset $\cap_{\mathfrak{q}}U_{\mathfrak{q}}$ of $E_1/\mathfrak{m}E_1$. Thus, it is sufficient to prove the lemma in the case where $R_{\mathfrak{p}}$ is an integral domain. In this case, $x_k$ is a non-zerodivisor on $S_{\mathfrak{p}}:=R_{\mathfrak{p}}[t_1,\dots,t_p]$. Set $T = S_{\mathfrak{p}}/x_kS_{\mathfrak{p}}$. 
Then $$\begin{array}{lll} {\rm e_{BR}}\left({E_1}_{\mathfrak{p}},\dots,{E_k}_{\mathfrak{p}}; R_{\mathfrak{p}}\right) &=& {\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) \,\,\,\,\,\,\,\,\, \mbox{ by assumption }\\ &=&{\rm e_{BR}}\left((x_1,\dots,x_{k-1})'_{\mathfrak{p}};T\right)\, \mbox{by \cite[Proposition 2.6]{Bedregal-Perez2} or Remark \ref{CorteRemark}} \\ &\geq &{\rm e_{BR}}\left({E_1'}_{\mathfrak{p}},\dots,{E_{k-1}'}_{\mathfrak{p}};T\right)\,\,\,\,\,\,\,\,\, \mbox{by Lemma \ref{Lemma 17.5.3} }\\ &=&{\rm e_{BR}}\left((y_1,\dots,y_{k-1})'_{\mathfrak{p}};T\right) \,\,\,\,\,\,\, \mbox{by Corollary \ref{Theorem 17.4.6degreecor1} for some } y_i\in E_i\\ &\geq & {\rm e_{BR}}\left({E_1}_{\mathfrak{p}},\dots,{E_k}_{\mathfrak{p}}; R_{\mathfrak{p}}\right) \,\,\,\,\,\,\, \mbox{by Lemma \ref{Lemma 17.5.3} } \end{array}$$ so that equality has to hold throughout. In particular, ${\rm e_{BR}}\left((x_1,\dots,x_{k-1})'_{\mathfrak{p}};T\right)= {\rm e_{BR}}\left({E_1'}_{\mathfrak{p}},\dots,{E_{k-1}'}_{\mathfrak{p}};T\right)$. By induction on $k$, there exists a non-empty Zariski open subset $U$ of $E_1/\mathfrak{m}E_1$ such that for any preimage $y$ of an element of $U$, for large $l$ and for large $t$, we have $${\rm e_{\rm t}}\left((x^l_1 + y^lY,x_2,\dots, x_{k-1})'_{\mathfrak{p}R[Y]};B'\right) = l\cdot{\rm e_{BR}}((x_1,x_2,\dots ,x_{k-1})'_{\mathfrak{p}};T),$$ where $B'=S_{\mathfrak{p}R[Y]}/x_kS_{\mathfrak{p}R[Y]}.$ Now, by [@Kirby Proposition 3 (ii)], we finish the proof: $$\begin{array}{lll} {\rm e_{t}}\left((x^l_1 + y^lY,x_2,\dots, x_k)_{\mathfrak{p}R[Y]};B\right) &=& {\rm e_{t}}\left((x^l_1 + y^lY,x_2,\dots, x_{k-1})'_{\mathfrak{p}R[Y]};B'\right) \\ & =&l\cdot {\rm e_{BR}}((x_1,x_2,\dots ,x_{k-1})'_{\mathfrak{p}};T)\\ & =& l\cdot{\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right). \end{array}$$ ◻ **Lemma 28**. *Let $(R,\mathfrak{m})$ be a formally equidimensional Noetherian local ring with infinite residue field, $Y$ be a variable over $R$ and $A:=R[Y]_{\mathfrak{m}R[Y]+YR[Y]}$. Let $E_1,\dots,E_k$ be finitely generated $R$-submodules of $F$, with $x_i \in E_i$ for $i = 1,\dots, k$. Assume that the ideals $(x_1,\dots,x_k)S$ and all the $I_{E_i}$ have height k and have the same radical. Let $\Lambda$ be the set of prime ideals in $R$ minimal over $\left((x_1,\dots,x_k):_RF\right)$. Assume that for all $\mathfrak{p} \in \Lambda$, ${\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) = {\rm e_{BR}}\left({E_1}_{\mathfrak{p}},\dots, {E_k}_{\mathfrak{p}},R_{\mathfrak{p}}\right)$. Let $y \in E_1$ be a superficial element for $E_1,\dots,E_k$ that is not in any prime ideal minimal over $(x_2,\dots, x_k)S$. Then for all sufficiently large integers $l$, the set of prime ideals of $A$ minimal over $\left(\left[(x^l_1+y^lY,x_2,\dots,x_k)G\right]_n:_AG_n\right)$ is equal to $\{\mathfrak{p}A|\mathfrak{p}\in \Lambda\}$, where $G=S[Y]_{\mathfrak{m}R[Y]+R[Y]}.$* *Proof.* Let $\left[(x^l_1+y^lY,x_2,\dots,x_k)G\right]_n$ be the $A$-submodule of $G_n$, for $n\geq l.$ By the choice of $y$, the height of $I_{K_l}$ is $k$. Elements of $\Lambda$ are clearly extended to prime ideals in $A$ that are minimal over $(K_l:_AG_n)$. Suppose there exists a prime ideal $\mathfrak{q}$ in $A$, minimal over $(K_l:_AG_n)$, that is not extended from a prime ideal in $\Lambda$. By Krull's Height Theorem, [@SH Theorem B.2.1], ${\rm ht}\mathfrak{q} \leq {\rm ht}\mathfrak{p}$. As $I_{K_l}$ has height $k$, necessarily ${\rm ht}\mathfrak{q} = {\rm ht}\mathfrak{p}$. 
By [@SH Lemma B.4.7], $A$ is formally equidimensional, so that by [@SH Lemma B.4.2], $\dim(A/\mathfrak{q}) = \dim A -{\rm ht}\mathfrak{p}$. Similarly, for each $\mathfrak{p}\in \Lambda$, $\dim(A/\mathfrak{p}) = \dim A -{\rm ht}\mathfrak{p}$. By Additivity and Reduction Formulas, [@SH Theorem 11.2.4], for all $t \gg 0$, $$\begin{array}{lll} {\rm e}\left(\left[\frac{G}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G)}\right]_t\right)&\geq & {\rm e}(A/\mathfrak{q})\ell\left(\left[\frac{G_{\mathfrak{q}}}{(x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G_{\mathfrak{q}}}\right]_t\right)+\\ && \sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(A/\mathfrak{p}A)\ell\left(\left[\frac{G_{\mathfrak{p}A}}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G_{\mathfrak{p}A}}\right]_t\right) \end{array}$$ By Lechs Formula [@Bedregal-Perez3 Theorem 3.1] it follows that $$\begin{array}{lll} \lim\limits_{n\to\infty}\frac{{\rm e}\left(\left[\frac{G}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G)}\right]_t\right)}{n^{k}}&\geq & {\rm e}(A/\mathfrak{q}){\rm e_t}({K_l}_{\mathfrak{q}};A_{\mathfrak{q}})+\sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(A/\mathfrak{p}A){\rm e_t}({K_l}_{\mathfrak{p}A};A_{\mathfrak{p}A})\\ &>& \sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(A/\mathfrak{p}A){\rm e_t}\left({K_l}_{\mathfrak{p}A};A_{\mathfrak{p}A}\right)\\ &=& \sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(R/\mathfrak{p})\cdot l\cdot {\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right), \end{array}$$ the last equality holds by Lemma [Lemma 27](#Lemma 17.5.4){reference-type="ref" reference="Lemma 17.5.4"}. By the Additivity and Reduction Formulas, [@SH Theorem 11.2.4], $$\begin{array}{lll} {\rm e}\left(\left[\frac{G}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G)}\right]_t\right)&\leq & {\rm e}\left(\left[\frac{G}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n,Y)G)}\right]_t\right)\\ &=& {\rm e}\left(\left[\frac{S}{(x^{ln},x_2^n,\dots,x_k^n)S}\right]_t\right)\\ &=&\sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(R/\mathfrak{p})\ell\left(\left[\frac{S_{\mathfrak{p}}}{((x^{ln},x_2^n,\dots,x_k^n)S_{\mathfrak{p}}}\right]_t\right) \end{array}$$ so that by Lechs Formula [@Bedregal-Perez3 Theorem 3.1] and by [@Bedregal-Perez3 Lemma 2.7(i) and Theorem 4.1], we get $$\begin{array}{lll} \lim\limits_{n\to\infty}\frac{{\rm e}\left(\left[\frac{G}{((x^l_1+y^lY)^n,x_2^n,\dots,x_k^n)G)}\right]_t\right)}{n^{k}}&\leq & \sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(R/\mathfrak{p}) {\rm e_t}\left((x^{l}_1,x_2,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right)\\ &\leq & \sum\limits_{\mathfrak{p}\in\Lambda}{\rm e}(R/\mathfrak{p})\cdot l\cdot {\rm e_{BR}}\left((x_1,x_2,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) \end{array}$$ contradicting the earlier inequality. Thus there is no such $\mathfrak{q}$ and the result follows. ◻ # Main result In this section, we prove the main result of this paper, which is the converse of Theorem [Theorem 4](#Perez-Bedregal){reference-type="ref" reference="Perez-Bedregal"} or the generalization of Swanson's Theorem [Theorem 3](#SwansonTheorem){reference-type="ref" reference="SwansonTheorem"}. In essence, our result states that if the mixed multiplicity of a family of modules is equal to the Buchsbaum-Rim multiplicity of a module generated by a sequence of elements from the family of modules, then this sequence constitutes a joint reduction of the family of modules. **Theorem 29** (Converse of Rees' multiplicity theorem for modules). 
*Let $(R,\mathfrak{m})$ be a $d$-dimensional formally equidimensional Noetherian local ring, $E_1,\dots,E_k$ be finitely generated $R$-submodules of $F$ and $x_i\in E_i$, for $i=1,\dots,k$. Assume that the ideals $\left(x_1,\ldots,x_k\right)S$ and $I_{E_i}$ have the same height $k$ and the same radical. If ${\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) = {\rm e_{BR}}\left({E_1}_{\mathfrak{p}},\dots, {E_k}_{\mathfrak{p}};R_{\mathfrak{p}}\right)$ for every prime ideal $\mathfrak{p}$ minimal over $\left(\left(x_1,\ldots,x_k\right):_RF\right)$, then $\left(x_1,\ldots,x_k\right)$ is a joint reduction for $\left(E_1,\dots,E_k\right).$* *Proof.* Let $X$ be a variable over $R$. If $(R,\mathfrak{m})$ is a local ring with finite residue field, we set $T=R[X]_{\mathfrak{m}R[X]}$, the localization of the polynomial ring $R[X]$ at the prime ideal $\mathfrak{m}R[X]$. Then $T$ is a local ring with an infinite residue field and is still formally equidimensional, by [@SH Lemma B.4.7]. Furthermore, $R \subseteq T$ is a faithfully flat extension of Noetherian local rings of the same Krull dimension (see [@SH Section 8.4]), and radicals, heights, minimal prime ideals, multiplicities, and mixed multiplicities are preserved under passage to $T$; besides, a $k$-tuple of elements is a joint reduction for a $k$-tuple of submodules of $F$ if and only if it is so after passage to $T$. Therefore, by possibly switching to $T$, we may assume that $R$ has an infinite residue field. We will prove this theorem by induction on $k$. When $k=1$, by hypothesis, $x_1S$ and $I_{E_1}$ have the same height $1$ and $\sqrt{x_1S}=\sqrt{I_{E_1}}$. Moreover, ${\rm e_{BR}}\left((x_1)_{\mathfrak{p}};R_{\mathfrak{p}}\right) ={\rm e_{BR}}\left({E_1}_{\mathfrak{p}};R_{\mathfrak{p}}\right)$ for every prime ideal $\mathfrak{p}$ minimal over $\left(x_1:_RF\right)$. As $R_{\mathfrak{p}}$ is formally equidimensional, by Rees's Theorem (Lemma [Lemma 15](#Coro16.5.7SH){reference-type="ref" reference="Coro16.5.7SH"}), $(x_1)_{\mathfrak{p}}$ is a reduction of ${E_1}_{\mathfrak{p}}$, so $I_{(x_1)_{\mathfrak{p}}}$ is a reduction of $I_{{E_1}_{\mathfrak{p}}}$, by Remark [\[remarkideal\]](#remarkideal){reference-type="ref" reference="remarkideal"}. Thus, using the definition of integral closure of $I_{{E_1}_{\mathfrak{p}}}$, we have $I_{{E_1}_{\mathfrak{p}}}\subseteq \cap_{\mathfrak{p}}\overline{I_{(x_1)_{\mathfrak{p}}}}\cap S$, by [@SH Proposition 2.1.16], for every prime ideal $\mathfrak{p}$ minimal over $\left(x_1:_RF\right)$. By [@SH Ratliff's Theorem 5.4.1], $I_{E_1}\subseteq \overline{I_{(x_1)}}$; again by Remark [\[remarkideal\]](#remarkideal){reference-type="ref" reference="remarkideal"}, we get that $(x_1) \subseteq E_1$ is a reduction. Suppose $k>1$. Let us first reduce to the case where $R$ is a domain. Let $\Lambda$ be the set of all prime ideals in $R$ that are minimal over $\left((x_1,\dots,x_{k}):_RF\right)$. Let $\mathfrak{p}\in \Lambda$ and let $Q \in {\rm Spec}(R)$ be minimal over $\left(\mathfrak{p}F + (x_1,\dots,x_{k}):_RF\right)$. Since $R$ is formally equidimensional, hence equidimensional and catenary, ${\rm ht}(Q)={\rm ht}(Q/\mathfrak{p}) \leq {\rm ht}({\mathfrak{p}})$, so necessarily ${\rm ht}(Q)={\rm ht}({\mathfrak{p}})$ and $Q\in \Lambda$. 
By Lemma [Lemma 18](#Lemma 17.5.3){reference-type="ref" reference="Lemma 17.5.3"} - item (ii), $$\label{inequality30} {\rm e_{BR}}\left(\left(\left(x_1,\dots,x_{k}\right)\left(R/\mathfrak{p}\right)\right)_{Q};\left(R/\mathfrak{p}\right)_{Q}\right) \leq {\rm e_{BR}}\left(\left(E_1\left(R/\mathfrak{p}\right)\right)_{Q},\ldots,\left(E_{k}\left(R/\mathfrak{p}\right)\right)_{Q};\left(R/\mathfrak{p}\right)_{Q}\right).$$ Then, by Associativity Formula for Buchsbaum-Rim multiplicity (Lemma [Lemma 16](#Associativity){reference-type="ref" reference="Associativity"}) and Reduction Formula for mixed Buchsbaum-Rim multiplicity (Lemma [Lemma 17](#AssociativityMix){reference-type="ref" reference="AssociativityMix"}): $$\begin{array}{lll} {\rm e_{BR}}((x_1,\dots,x_{k})_{Q}; R_{Q}) & = & \sum\limits_{\mathfrak{q}\in \Lambda,\, \mathfrak{q}\subseteq Q}\ell_R((R_{Q})_{\mathfrak{q}}){\rm e_{BR}}\left(\left(x_1,\dots,x_{k}\right)(R/\mathfrak{q})_{Q}; (R/\mathfrak{q})_{Q}\right) \\ \\ & \leq & \sum\limits_{\mathfrak{q}\in \Lambda, \mathfrak{q}\subseteq Q}\ell((R_{Q})_{\mathfrak{q}}){\rm e_{BR}}\left((E_1(R/\mathfrak{q}))_Q,\ldots,(E_{k}(R/\mathfrak{q}))_Q; (R/\mathfrak{q})_{Q}\right) \\ \\ & = & {\rm e_{BR}}({E_1}_{Q},\ldots,{E_k}_{Q}; R_{Q}) \\ \\ & = & {\rm e_{BR}}((x_1,\dots,x_{k})_{Q}; R_{Q}), \end{array}$$ so, by inequality ([\[inequality30\]](#inequality30){reference-type="ref" reference="inequality30"}), follows $${\rm e_{\rm BR}}\left(\left(\left(x_1,\dots,x_k\right)\left(R/\mathfrak{p}\right)\right)_Q; (R/\mathfrak{p})_{Q}\right) = {\rm e}_{\rm BR}\left(\left(E_1(R/\mathfrak{p})\right)_Q,\ldots,\left(E_{k})(R/\mathfrak{p})\right)_Q; (R/\mathfrak{p})_{Q}\right),$$ for each $\mathfrak{p}\in \Lambda, \mathfrak{p}\subseteq Q$. If the result is true for integral domains, since ${\rm ht}(Q)={\rm ht}(Q/\mathfrak{p})={\rm ht}({\mathfrak{p}})$, then $(x_1,\dots,x_k)$ is a joint reduction for $(E_1, \dots , E_k)$ with respect to $R/\mathfrak{p}$ for each $\mathfrak{p}\in \Lambda$. Then, by Proposition [Lemma 8](#Proposition 1.1.5){reference-type="ref" reference="Proposition 1.1.5"}, since the definition of joint reduction reduces to a reduction question, $(x_1,\dots,x_k)$ is a joint reduction for $(E_1, \dots, E_k)$. Thus it is sufficient to prove the theorem for integral domains. Let $A = R[Y]_{\mathfrak{m}R[Y]+YR[Y]}$ and let $y$ be as in the statements of Lemmas [Lemma 25](#Lemma 17.5.2){reference-type="ref" reference="Lemma 17.5.2"} and [Lemma 27](#Lemma 17.5.4){reference-type="ref" reference="Lemma 17.5.4"}. Since both requirements are given by non-empty Zariski-open sets, such $y$ exists and we may choose a non-zero $y$. Thus $x^l_1 + y^lY$ is not zero for all $l$. Set $G=S[Y]_{\mathfrak{m}R[Y]+YR[Y]}$ and let $G'=G/(x^l_1 + y^lY)G$, for some large integer $l$. By Lemma [Lemma 27](#Lemma 17.5.4){reference-type="ref" reference="Lemma 17.5.4"}, if $l$ is sufficiently large, $${\rm e_{t}}\left((x^l_1 + y^lY,x_2,\dots, x_k)_{\mathfrak{p}A};A_{\mathfrak{p}A}\right) = l\cdot {\rm e_{BR}}\left((x_1,x_2,\dots ,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right),$$for every $\mathfrak{p}\in\Lambda$. By [@Kirby Proposition 3] and [@Bedregal-Perez3 Theorem 4.1], for $t\gg 0$, $$\begin{array}{lll} {\rm e_{\rm t}}\left((x^l_1 + y^lY,x_2,\dots, x_k)_{\mathfrak{p}A};A_{\mathfrak{p}A}\right)&=& {\rm e_{\rm t}}\left((x_2,\dots, x_k)'_{\mathfrak{p}A};G_{\mathfrak{p}A}'\right)\\ &=& {\rm e_{BR}}\left((x_2,\dots, x_k)'_{\mathfrak{p}A};G_{\mathfrak{p}A}'\right). 
\end{array}$$ By Lemma [Lemma 25](#Lemma 17.5.2){reference-type="ref" reference="Lemma 17.5.2"}, there exists an integer $c$ such that for all large $n_1,\dots,n_k,$ $$\begin{array}{r} \left({\bf E}^{\bf n}G_s:_{G_{|{\bf n}|+s-l}}(x^l+y^lY)\right)\cap E_1^{c}E_2^{n_2}\cdots E_k^{n_k}G_{n_1-c+s}=E_1^{n_1-l}E_2^{n_2}\cdots E_k^{n_k}G_{l+s}. \end{array}$$ This, in particular, holds for all $n_1$ that are large multiples of $l$, and $c$ replaced by a larger integer that is a multiple of $l$. Thus, $$\begin{array}{lll} {\rm {e}}_{\rm BR}\left({E_2'}_{\mathfrak{p}A},\ldots,{E_k'}_{\mathfrak{p}A};G_{\mathfrak{p}A}'\right)&=&{\rm \tilde{e}}_{\rm BR}\left({E_2'}_{\mathfrak{p}A},\ldots,{E_k'}_{\mathfrak{p}A};G_{\mathfrak{p}A}'\right)\\ &=&{\rm \tilde{e}}_{\rm BR}\left({E_1^l}_{\mathfrak{p}A},\ldots,{E_k}_{\mathfrak{p}A};A_{\mathfrak{p}A}\right),\,\, \mbox{by Theorem \ref{Theorem 17.4.6degree}}\\ &=&{\rm \tilde{e}}_{\rm BR}\left({E_1^l}_{\mathfrak{p}},\ldots,{E_k}_{\mathfrak{p}};R_{\mathfrak{p}}\right)\\ &=& l\cdot {\rm {e}}_{\rm BR}\left({E_1}_{\mathfrak{p}},\ldots,{E_k}_{\mathfrak{p}};R_{\mathfrak{p}}\right),\,\, \mbox{by Proposition \ref{Proposition 17.5.1}} \end{array}$$ By assumption and the derived equalities, we get that $${\rm {e}}_{\rm BR}\left({E_2'}_{\mathfrak{p}A},\ldots,{E_k'}_{\mathfrak{p}A};G_{\mathfrak{p}A}'\right)={\rm e_{BR}}((x_2,\dots, x_k)'_{\mathfrak{p}A};G_{\mathfrak{p}A}'), \qquad \mbox{ for all } \mathfrak{p}\in \Lambda.$$ By Lemma [Lemma 28](#Lemma 17.5.5){reference-type="ref" reference="Lemma 17.5.5"}, all the minimal prime ideals over $\left([(x^l_1+y^lY,x_2,\dots,x_k)G]_n:_AG_n\right)$ are of the form $\mathfrak{p}A$, with $\mathfrak{p}\in \Lambda$. Set $I_E = x_2I_{E_3}\cdots I_{E_k} + \cdots + x_kI_{E_2} \cdots I_{E_{k-1}}$. By [@SH Lemma B.4.7], $G$ is local formally equidimensional, and by [@SH Proposition B.4.4], $G'$ is formally equidimensional. By induction on $k$, $(x_2, \dots , x_k)$ is joint reduction for $(E_2,\dots , E_k)$ with respect to $G'$ wich is equivalent to say that $(x_2, \dots , x_k)$ is joint reduction for $(I_{E_2},\dots , I_{E_k})$ with respect to $G'$ (by Remark [Remark 10](#Remark 4.4){reference-type="ref" reference="Remark 4.4"}). So $I_EG'$ is a reduction of $I_{E_2}\cdots I_{E_k}G'$. Thus, for sufficiently large $n$, $(I_{E_2}\cdots I_{E_k})^{n+1}G' \subseteq I_E(I_{E_2}\cdots I_{E_k})^nG'$. Hence $(I_{E_2}\cdots I_{E_k})^{n+1}G \subseteq I_E(I_{E_2}\cdots I_{E_k})^nG + (x^l_1 + y^lY )G$. By the choice of $y$ as in Lemma [Lemma 25](#Lemma 17.5.2){reference-type="ref" reference="Lemma 17.5.2"}, for possibly larger $n$, if $I_E' = I_EI_{E_1},$ $$(I_{E_1}\cdots I_{E_k})^{n+1}G\subseteq I_E' (I_{E_1}I_{E_2} \cdots I_{E_k})^nG + (x^l_1 + y^lY )I_{E_1}^{n+1-l}(I_{E_2}\cdots I_{E_k})^{n+1}G.$$ Thus there exists $s\in S[Y]\setminus (\mathfrak{m}R[Y]+YR[Y])$ such that $$(I_{E_1}\cdots I_{E_k})^{n+1}S[Y] \subseteq I_E' (I_{E_1}I_{E_2} \cdots I_{E_k})^nS[Y] + (x^l_1 + y^lY )I_{E_1}^{n+1-l}(I_{E_2}\cdots I_{E_k})^{n+1}S[Y].$$ But the constant term $u$ of $s$ is a unit in $R$, so by reading off the degree zero monomials in $S[Y]$ we get $$\begin{array}{lll} (I_{E_1}\cdots I_{E_k})^{n+1} &\subseteq& I_E'(I_{E_1}I_{E_2}\cdots I_{E_k})^n + x_1^l I_{E_1}^{n+1-l}(I_{E_2}\cdots I_{E_k})^{n+1}\\ &\subseteq & I_E'(I_{E_1}I_{E_2}\cdots I_{E_k})^n + x_1 I_{E_1}^{n+1}(I_{E_2}\cdots I_{E_k})^{n+1}, \end{array}$$ which proves that $(x_1,\dots,x_k)$ is a joint reduction for $(I_{E_1},\dots, I_{E_k})$. 
Therefore, by Remark [Remark 10](#Remark 4.4){reference-type="ref" reference="Remark 4.4"}, we get that $(x_1,\dots,x_k)$ is a joint reduction for $(E_1,\dots, E_k)$. ◻ As an immediate consequence, we have the following corollary. **Corollary 30**. *Let $(R,\mathfrak{m})$ be a $d$-dimensional formally equidimensional Noetherian local ring, $E$ be a finitely generated $R$-submodule of $F$ and $x_1,\ldots,x_k \in E$. Assume that the ideals $\left(x_1,\ldots,x_k\right)S$ and $I_{E}$ have the same height $k$ and the same radical. If ${\rm e_{BR}}\left((x_1,\dots,x_k)_{\mathfrak{p}};R_{\mathfrak{p}}\right) = {\rm e_{BR}}\left({E}_{\mathfrak{p}}; R_{\mathfrak{p}}\right)$ for every prime ideal $\mathfrak{p}$ minimal over $\left(\left(x_1,\ldots,x_k\right):_RF\right)$, then $\left(x_1,\ldots,x_k\right)$ is a reduction of the $R$-module $E$.* Note that the corollary above yields the classical Rees' theorem for modules of finite length and the reduction criterion of Böger for arbitrary modules; see [@SH Corollary 16.5.7, Theorem 16.5.8], respectively. Branco Correia, A. L. and Zarzuela, S. *On the asymptotic properties of the Rees powers of a module*. J. Pure Appl. Algebra. **207**. (2006). pg. 373--385. Böger, E. *Eine Verallgemeinerung eines Multiplizitätensatzes von D. Rees*. J. Algebra. **12**. (1969). pg. 207-215. Buchsbaum, D. and Rim, D. *A generalized Koszul complex II. Depth and multiplicity.* Trans. Am. Math. Soc. **111**. 2. (1964). pg. 197-224. Callejas-Bedregal, R. and Jorge Perez, V. H. *Mixed multiplicities for arbitrary ideals and generalized Buchsbaum-Rim multiplicities.* J. London. Math. Soc. **76**. (2007). pg. 384-398. Callejas-Bedregal, R. and Jorge Perez, V. H. *Mixed multiplicities and the minimal number of generators of modules*. J. Pure Appl. Algebra. **214**. (2010). pg. 1642-1653. Callejas-Bedregal, R. and Jorge Perez, V. H. *On Lech's limit formula for modules*. Colloq. Math. **148**. (2017). pg. 27-37. Ferrari, M. D. and Jorge Perez, V. H. *Coefficient modules and Ratliff-Rush closures.* Commun. Algebra **51**. (2023). pg. 3497-3509. Huneke, C. and Swanson, I. *Integral closure of ideals, rings and modules.* London Math. Soc. Lecture Note Series, 336. Cambridge University Press, Cambridge. (2006). Swanson, I. *Mixed multiplicities, joint reductions and a theorem of Rees*. J. London Math. Soc. **48**. (1993), pg. 1-14. Katz, D. *Reduction criteria for modules*. Comm. Algebra. **23**. 12. (1995). pg. 4543-4548. Kirby, D. *Graded multiplicity theory and Hilbert functions.* J. London Math. Soc. **36**. no. 1. 2. (1987). pg. 16-22. Kirby, D. and Rees, D. *Multiplicities in graded rings. I: The general theory.* Commutative Algebra: Syzygies, multiplicities and birational algebra (W. J. Heinzer, C. L. Huneke and J. D. Sally, eds.), Contemp. Math. **159**. (1994) pg. 209-267. Kleiman, S. L. *Two formulas for the BR-multiplicity*. Annali dell'Universita di Ferrara. **63**. (2017). pg. 147-158. Kleiman, S. and Thorup, A. *A geometric theory of the Buchsbaum-Rim multiplicity.* J. Algebra. **167**. (1994). pg. 168-231. Kleiman, S. and Thorup, A. *Mixed Buchsbaum-Rim multiplicities.* Amer. J. Math. **118**, (1996). pg. 529-569. Ooishi, A. *Reductions of graded rings and pseudo-flat graded modules.* Hiroshima Math. J. **18**. (1988). pg. 463-477. Rees, D. *Generalizations of reductions and mixed multiplicities.* J. London Math. Soc. **29**. (1984). pg. 397-414. D. Rees. *$A$-transforms of local rings and a theorem on multiplicities of ideals.* Cambridge Philos. Soc. **57** (1961), pg. 8-17. Roberts, P. 
*Multiplicities and Chern classes in local algebra.* Cambridge Tracts Math. **133**. (1998). Simis, A.; Ulrich, B. and Vasconcelos, W. *Codimension, multiplicity and integral extensions.* Math. Proc. Cambridge Philos. Soc. **130**, (2001), no. 2, 237--257. Teissier, B. *Cycles évanescents, sections planes et conditions de Whitney.* Singularités à Cargèse, Astérisque **7-8**. (1973), pg. 285-362. Trung, N. V. and Verma, J. K. *Hilbert functions of multigraded algebras, mixed multiplicities of ideals and their applications.* J. Commut. Algebra 2. **4**. (2010). pg. 515-565. Validashti, J. *Multiplicities of graded algebras.* Ph.D. thesis, (2007). Purdue University. [^1]: The second author was supported by grant 2019/21181-0, São Paulo Research Foundation (FAPESP).
--- abstract: | Relative Rota-Baxter groups are generalisations of Rota-Baxter groups and recently shown to be intimately related to skew left braces, which are well-known to yield bijective non-degenerate solutions to the Yang-Baxter equation. In this paper, we develop an extension theory of relative Rota-Baxter groups and introduce their low dimensional cohomology groups, which are distinct from the ones known in the context of Rota-Baxter operators on Lie groups. We establish an explicit bijection between the set of equivalence classes of extensions of relative Rota-Baxter groups and their second cohomology. Further, we delve into the connections between this cohomology and the cohomology of associated skew left braces. We prove that for bijective relative Rota-Baxter groups, the two cohomologies are isomorphic in dimension two. address: - Department of Mathematical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, P O Manauli, Punjab 140306, India - Department of Mathematical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, P O Manauli, Punjab 140306, India - Department of Mathematical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, P O Manauli, Punjab 140306, India author: - Pragya Belwal - Nishant Rathee - Mahender Singh title: Cohomology and Extensions of Relative Rota-Baxter groups --- # Introduction Rota-Baxter operators of weight $1$ on Lie groups were introduced by Guo, Lang and Sheng in [@LHY2021], with a focus on smooth Rota-Baxter operators so that these operators can be differentiated to yield Rota-Baxter operators of weight $1$ on corresponding Lie algebras. Further study of Rota-Baxter operators on (abstract) groups has been carried out by Bardakov and Gubarev in [@VV2022; @VV2023]. They showed that every Rota-Baxter operator on a group gives rise to a skew left brace structure on that group, and hence gives a set-theoretical solution to the Yang-Baxter equation. Recall that, a set-theoretical solution to the Yang Baxter equation is a pair $(X, r)$, where $X$ is a set and $r: X\times X \longrightarrow X\times X$ is a map (called a braiding) written as $r(x,y) = (\sigma_x(y), \tau_y(x))$ such that the braid equation $$(r\times \operatorname{id})(\operatorname{id}\times r)(r\times \operatorname{id})=(\operatorname{id}\times r)(r\times \operatorname{id})( \operatorname{id}\times r)$$ is satisfied. A solution is non degenerate if the maps $x \mapsto \sigma_x$ and $x \mapsto \tau_x$ are bijective for all $x, y \in X$. It is known due to Guarnieri and Vendramin [@GV17] that every skew left brace gives rise to a bijective non-degenerate solution to the Yang-Baxter equation. This immediately relates Rota-Baxter operators on groups to the problem of classification of set-theoretical solutions to the Yang-Baxter equation as envisaged by Drinfel'd [@MR1183474]. It has been shown in [@VV2022] that any skew left brace can be embedded into a Rota-Baxter group. A cohomological characterization of skew left braces that can be induced from a Rota-Baxter group has been given in [@AS22], and compelling non-trivial examples of skew left braces that cannot be induced from any Rota-Baxter group have been provided. The idea of Rota-Baxter operators has been generalised by Jiang, Sheng and Zhu [@JYC22] to relative Rota-Baxter operators on Lie groups. 
Intimate connections of relative Rota-Baxter groups with skew left braces have been explored in [@NM1], where it has been shown that there is an isomorphism between the two categories under some mild conditions. The recent work [@BGST] delves into the study of relative Rota-Baxter groups, their connection to set-theoretical solutions to the Yang-Baxter equation, Butcher groups, post-groups, and pre-groups. The paper [@CMP] offers a definition of Rota-Baxter operators in the framework of Clifford semigroups. Notably, a connection between Rota-Baxter operators on Clifford semigroups and weak braces has been established, which represent a broader form of skew left braces. Weak braces are known to give set-theoretical solutions to the Yang-Baxter equation which need not be non-degenerate [@CMMS]. A cohomology theory for relative Rota-Baxter operators on Lie groups has been introduced in [@JYC22]. However, it is worth noting that this cohomology theory is specific to Lie groups since it is defined as the cohomology of the descendent group with coefficients in the Lie algebra of the acting group when viewed as a module over the descendent group. Further, a classification of abelian extensions of Rota-Baxter groups, through a suitably defined second cohomology group, has been given in [@AN2]. Taking inspiration from the extensions theory of (abstract) groups and maintaining a focus on connections to skew left braces, we formulate an extension theory tailored for relative Rota-Baxter groups. This involves the introduction of novel cohomology groups in lower dimensions. Moreover, we show that extensions of relative Rota-Baxter groups can be classified through two-dimensional cohomology. We establish connections between extensions of relative Rota-Baxter groups and that of associated skew left braces and underlying groups. The paper is organised as follows. After the preliminary Section [2](#section prelim){reference-type="ref" reference="section prelim"} on the necessary background, we develop the extension theory of relative Rota-Baxter groups in Section [3.1](#subsec extensions of RRB groups){reference-type="ref" reference="subsec extensions of RRB groups"} and introduce their low dimensional cohomology groups in Section [3.2](#subsec cohomology RRB groups){reference-type="ref" reference="subsec cohomology RRB groups"}. We present one of our main results (Theorem [Theorem 32](#ext and cohom bijection){reference-type="ref" reference="ext and cohom bijection"}) in Section [3.3](#subsec equivalent extensions and cohomology){reference-type="ref" reference="subsec equivalent extensions and cohomology"} which gives a bijection between the set of equivalence classes of extensions of relative Rota-Baxter groups and their second cohomology. In Section [4](#sec extensions skew braces){reference-type="ref" reference="sec extensions skew braces"}, we recall extension theory of skew left braces and show the existence of a homomorphism from the second cohomology of relative Rota-Baxter groups to that of corresponding skew left braces (Proposition [Proposition 37](#cohom RRB to cohom skew brace){reference-type="ref" reference="cohom RRB to cohom skew brace"}). We prove that for bijective relative Rota-Baxter groups, the two cohomologies are isomorphic in dimension two (Corollary [Corollary 39](#isomorphism RRB and SLB cohomology){reference-type="ref" reference="isomorphism RRB and SLB cohomology"}). 
In Section [5](#sec central and split RRB){reference-type="ref" reference="sec central and split RRB"}, we introduce central and split extensions of relative Rota-Baxter groups, and establish a bijection between semi-direct products and split extensions of such groups (Theorem [Theorem 43](#semi-direct product and split extensions){reference-type="ref" reference="semi-direct product and split extensions"}). # Preliminaries on relative Rota-Baxter groups {#section prelim} In this section, we recall some basic notions about relative Rota-Baxter groups that we shall need, and refer the readers to [@JYC22; @NM1] for more details. We follow the terminology of [@NM1], which is a bit different from other works in the literature. All through, our groups are written multiplicatively, unless explicitly stated otherwise. **Definition 1**. *A relative Rota-Baxter group is a quadruple $(H, G, \phi, R)$, where $H$ and $G$ are groups, $\phi: G \rightarrow \operatorname{Aut} (H)$ a group homomorphism (where $\phi(g)$ is denoted by $\phi_g$) and $R: H \rightarrow G$ is a map satisfying the condition $$R(h_1) R(h_2)=R(h_1 \phi_{R(h_1)}(h_2))$$ for all $h_1, h_2 \in H$.* *The map $R$ is referred as the relative Rota-Baxter operator on $H$.* We say that the relative Rota-Baxter group $(H, G, \phi, R)$ is trivial if $\phi:G \to \operatorname{Aut} (H)$ is the trivial homomorphism. **Remark 1**. Note that if $(H, G, \phi, R)$ is a trivial relative Rota-Baxter group, then $R:H \rightarrow G$ is a group homomorphism. Different homomorphisms give different trivial relative Rota-Baxter groups. Moreover, if $R$ is an isomorphism of groups, then $(H, G, \phi, R)$ is a trivial relative Rota-Baxter group. **Example 2**. *Let $G$ be a group with subgroups $H$ and $L$ such that $G=HL$ and $H\cap L=\{1\}$. Then $(G,G, \phi, R)$ is a relative Rota-Baxter group, where $R: G \rightarrow G$ denotes the map given by $R(hl) = l^{-1}$ and $\phi : G \rightarrow \operatorname{Aut} (G)$ is the adjoint action, that is, $\phi_g(x) = gxg^{-1}$ for $g, h \in G$.* **Example 3**. *Let $G$ be a group, and $G^{\rm op}$ its opposite group. Let $\phi: G^{\rm op} \rightarrow \operatorname{Aut} (G)$ be the homomorphism given by $\phi_x(y) = x^{-1} y x$ for all $x, y \in G$. Then the quadruple $(G, G^{\rm op}, \phi, \operatorname{id}_G)$ is a relative Rota-Baxter group.* **Example 4**. *Take $H= \mathbb{R}$ and $G=UP(2; \mathbb{R})$, the group of invertible upper triangular matrices. Let $\phi:UP(2; \mathbb{R})\to \operatorname{Aut} (\mathbb{R})$ be given by* *$$\phi_{\begin{pmatrix} a & b \\ 0 & c \end{pmatrix}}(r)= ar$$ for $\begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \in UP(2; \mathbb{R})$ and $r \in \mathbb{R}$. Further, let $R: \mathbb{R} \to UP(2; \mathbb{R})$ be given by $$R(r)=\begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}.$$ Then $(\mathbb{R},UP(2; \mathbb{R}), \phi, R)$ is a relative Rota-Baxter group.* Let $(H, G, \phi, R)$ be a relative Rota-Baxter group, and let $K \leq H$ and $L \leq G$ be subgroups. 1. If $K$ is $L$-invariant under the action $\phi$, then we denote the restriction of $\phi$ by $\phi|: L \to \operatorname{Aut} (K)$. 2. If $R(K) \subseteq L$, then we denote the restriction of $R$ by $R|: K \to L$. **Definition 5**. *Let $(H,G,\phi,R)$ be a relative Rota-Baxter group, and $K\leq H$ and $L\leq G$ be subgroups. Suppose that $\phi_\ell(K) \subseteq K$ for all $\ell \in L$ and $R(K) \subseteq L$. 
Then $(K,L,\phi |,R |)$ is a relative Rota-Baxter group, which we refer as a relative Rota-Baxter subgroup of $(H,G,\phi,R)$ and write $(K,L,\phi |,R |)\leq(H,G,\phi,R)$.* **Definition 6**. *Let $(H, G, \phi, R)$ be a relative Rota-Baxter group and $(K, L, \phi|, R|) \leq (H, G, \phi, R)$ its relative Rota-Baxter subgroup. We say that $(K, L, \phi|, R|)$ is an ideal of $(H, G, \phi, R)$ if $$\begin{aligned} & K \trianglelefteq H \quad \mbox{and} \quad L \trianglelefteq G, \label{I0}\\ & \phi_g(K) \subseteq K \mbox{ for all } g \in G, \label{I1} \\ & \phi_\ell(h) h^{-1} \in K \mbox{ for all } h \in H \mbox{ and } \ell \in L. \label{I2} \end{aligned}$$ We write $(K, L, \phi|, R|) \trianglelefteq (H, G, \phi, R)$ to denote an ideal of a relative Rota-Baxter group.* The preceding definitions lead to the following result [@NM1 Theorem 5.3]. **Theorem 7**. *Let $(H, G, \phi, R)$ be a relative Rota-Baxter group and $(K, L, \phi|, R|)$ an ideal of $(H, G, \phi, R)$. Then there are maps $\overline{\phi}: G/L \to \operatorname{Aut} (H/K)$ and $\overline{R}: H/K \to G/L$ defined by $$\overline{\phi}_{\overline{g}}(\overline{h})=\overline{\phi_{g}(h)} \quad \textrm{and} \quad \overline{R}(\overline{h})=\overline{R(h)}$$ for $\overline{g} \in G/L$ and $\overline{h} \in H/K$, such that $(H/K, G/L, \overline{\phi}, \overline{R})$ is a relative Rota-Baxter group.* **Notation 1**. *We write $(H, G, \phi, R)/(K, L, \phi|, R|)$ to denote the quotient relative Rota-Baxter group $(H/K, G/L, \overline{\phi}, \overline{R})$.* **Definition 8**. *Let $(H, G, \phi, R)$ and $(K, L, \varphi, S)$ be two relative Rota-Baxter groups.* 1. *A homomorphism $(\psi, \eta): (H, G, \phi, R) \to (K, L, \varphi, S)$ of relative Rota-Baxter groups is a pair $(\psi, \eta)$, where $\psi: H \rightarrow K$ and $\eta: G \rightarrow L$ are group homomorphisms such that $$\label{rbb datum morphism} \eta \; R = S \; \psi \quad \textrm{and} \quad \psi \; \phi_g = \varphi_{\eta(g)}\; \psi$$ for all $g \in G$.* 2. *The kernel of a homomorphism $(\psi, \eta): (H, G, \phi, R) \to (K, L, \varphi, S)$ of relative Rota-Baxter groups is the quadruple $$(\operatorname{ker} (\psi), \operatorname{ker} (\eta), \phi|, R|),$$ where $\operatorname{ker} (\psi)$ and $\operatorname{ker} (\eta)$ denote the kernels of the group homomorphisms $\psi$ and $\eta$, respectively. The conditions in [\[rbb datum morphism\]](#rbb datum morphism){reference-type="eqref" reference="rbb datum morphism"} imply that the kernel is itself a relative Rota-Baxter group. In fact, the kernel turns out to be an ideal of $(H, G, \phi, R)$.* 3. *The image of a homomorphism $(\psi, \eta): (H, G, \phi, R) \to (K, L, \varphi, S)$ of relative Rota-Baxter groups is the quadruple $$(\operatorname{im} (\psi), \operatorname{im} (\eta), \varphi|, S| ),$$ where $\operatorname{im} (\psi)$ and $\operatorname{im} (\eta)$ denote the images of the group homomorphisms $\psi$ and $\eta$, respectively. The image is itself a relative Rota-Baxter group.* 4. *A homomorphism $(\psi, \eta)$ of relative Rota-Baxter groups is called an isomorphism if both $\psi$ and $\eta$ are group isomorphisms. Similarly, we say that $(\psi, \eta)$ is an embedding of a relative Rota-Baxter group if both $\psi$ and $\eta$ are embeddings of groups.* **Definition 9**. *A Rota-Baxter group is a group $G$ together with a map $R: G \rightarrow G$ such that $$R(x) R(y)= R(x R(x) y R(x)^{-1})$$ for all $x, y \in G$. 
The map $R$ is referred as the Rota-Baxter operator on $G$.* Let $\phi : G \rightarrow \operatorname{Aut} (G)$ be the adjoint action, that is, $\phi_g(x)=gxg^{-1}$ for $g, h \in G$. Then the relative Rota-Baxter group $(G, G, \phi, R)$ is simply a Rota-Baxter group. **Proposition 10**. *[@JYC22 Proposition 3.5] Let $(H, G, \phi, R)$ be a relative Rota-Baxter group. Then the operation $$\begin{aligned} h_1 \circ_R h_2 = h_1 \phi_{R(h_1)}(h_2)\end{aligned}$$ defines a group operation on $H$. Moreover, the map $R: H^{(\circ_R)} \rightarrow G$ is a group homomorphism. The group $H^{(\circ_R)}$ is called the descendent group of $R$.* If $(H, G, \phi, R)$ is a relative Rota-Baxter group, then the image $R(H)$ of $H$ under $R$ is a subgroup of $G$. **Definition 11**. *The center of a relative Rota-Baxter group $(H, G, \phi, R)$ is defined as $$\operatorname{Z} (H, G, \phi, R)= \big( \operatorname{Z} ^{\phi}_R(H), \operatorname{ker} (\phi), \phi|, R| \big),$$ where $\operatorname{Z} ^{\phi}_{R}(H)=\operatorname{Z} (H) \cap \operatorname{ker} (\phi \,R) \cap \operatorname{Fix}(\phi)$, $\operatorname{Fix}(\phi)= \{ x \in H \mid \phi_g(x)=x \mbox{ for all } g \in G \}$ and $R: H^{(\circ_R)} \rightarrow G$ is viewed as a group homomorphism.* Let us recall the definition of a skew left brace. **Definition 12**. *A skew left brace is a triple $(H,\cdot ,\circ)$, where $(H,\cdot)$ and $(H, \circ)$ are groups such that $$a \circ (b \cdot c)=(a\circ b) \cdot a^{-1} \cdot (a \circ c)$$ holds for all $a,b,c \in H$, where $a^{-1}$ denotes the inverse of $a$ in $(H, \cdot)$. The groups $(H,\cdot)$ and $(H, \circ)$ are called the additive and the multiplicative groups of the skew left brace $(H, \cdot, \circ)$, and will sometimes be abbreviated as $H ^{(\cdot)}$ and $H ^{(\circ)}$, respectively.* A skew left brace $(H, \cdot, \circ)$ is said to be trivial if $a \cdot b= a \circ b$ for all $a, b \in H$. **Proposition 13**. *[@NM1 Proposition 3.5] Let $(H, G, \phi, R)$ be a relative Rota-Baxter group. If $\cdot$ denotes the group operation of $H$, then the triple $(H, \cdot, \circ_R)$ is a skew left brace.* If $(H, G, \phi, R)$ is a relative Rota-Baxter group, then $(H, \cdot, \circ_R)$ is referred as the skew left brace induced by $R$ and will be denoted by $H_R$ for brevity. The following indispensable result is immediate [@NM1 Proposition 4.3]. **Proposition 14**. *A homomorphism of relative Rota-Baxter groups induces a homomorphism of induced skew left braces.* # Extensions and cohomology of relative Rota-Baxter groups {#sec cohomology and extensions of RRB groups} Let $A$ be a group and $K$ be an abelian group. Then $K$ is said to be a right $A$-module if there exists an anti-homomorphism $\mu: A \rightarrow \operatorname{Aut} (K)$. Writing $k \cdot a= \mu_a(k)$, we see that $$k \cdot (ab)= \mu_{ab}(k)= \mu_{b} \mu_{a}(k)= (k \cdot a) \cdot b$$ for all $a, b \in A$ and $k \in K$. Let $K$ be a right $A$-module via an anti-homomorphism $\mu: A \to \operatorname{Aut} (K)$. For each $n \geq 1$, let $C^n(A, K)$ be the group of all maps $A^n \to K$ which vanish on degenerate tuples, where a tuple $(a_1,\ldots, a_n) \in A^n$ is called degenerate if $a_i = 1$ for at least one $i$. 
Further, let $\partial_\mu^n: C^n(A,K) \rightarrow C^{n+1}(A,K)$ be defined by $$\begin{aligned} \label{group coboundary mu} && (\partial_\mu^n f)(a_1, a_2,\ldots, a_{n+1}) \\ &= & f(a_2, \ldots,a_{n+1})~ \prod^{n}_{k=1} (f(a_1, \ldots, a_k \cdot a_{k+1}, \ldots, a_{n+1}))^{(-1)^k}~ \mu_{a_{n+1}}(f(a_1, \ldots,a_{n})), \notag\end{aligned}$$ where $\cdot$ is the group operation in $A$. Then, $\{C^n(A, K)$, $\partial_\mu^n\}$ forms a cochain complex. Now, let $(A,B, \phi, R)$ be a relative Rota-Baxter group and $L$ a right $B$-module via an anti-homomorphism $\sigma: B \to \operatorname{Aut} (L)$. Let $\{C^n(B, L)$, $\partial^{n}_{\sigma}\}$ be the corresponding cochain complex. We have a homomorphism $R: A^{(\circ_R)} \rightarrow B$, which allows us to turn $L$ into a right $A^{(\circ_R)}$-module through the composition of $\sigma$ and $R$. Consequently, we can define the corresponding cochain complex $\{C^n(A^{(\circ_R)}, L), \delta_{\sigma}^n\}$, where $\delta_{\sigma}^n$ is defined by $$\begin{aligned} \label{group coboundary twisted sigma} && (\delta_{\sigma}^n f)(a_1, a_2,\ldots, a_{n+1})\\ &= & f(a_2, \ldots,a_{n+1}) ~\prod^{n}_{k=1} (f(a_1, \ldots, a_k \circ_R a_{k+1}, \ldots, a_{n+1}))^{(-1)^k} ~ \sigma_{R(a_{n+1})}(f(a_1, \ldots,a_{n})). \notag\end{aligned}$$ ## Extensions of relative Rota-Baxter groups {#subsec extensions of RRB groups} Here and henceforth, we write $\bf{1}$ to denote the trivial relative Rota-Baxter group for which both the underlying groups and the maps are trivial. **Definition 15**. *Let $(K,L, \alpha,S )$ and $(A,B, \beta, T)$ be relative Rota-Baxter groups.* 1. *An extension of $(A,B, \beta, T)$ by $(K,L, \alpha,S )$ is a relative Rota-Baxter group $(H,G, \phi, R)$ that fits into the sequence $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1},$$ where $(i_1, i_2)$ and $(\pi_1, \pi_2)$ are morphisms of relative Rota-Baxter groups such that $(i_1, i_2)$ is an embedding, $(\pi_1, \pi_2)$ is an epimorphism of relative Rota-Baxter groups and $(\operatorname{im} (i_1), \operatorname{im} (i_2), \phi|, R|)= (\operatorname{ker} (\pi_1), \operatorname{ker} (\pi_2), \phi|, R|)$.* 2. *To avoid complexity of notation, we assume that $i_1$ and $i_2$ are inclusion maps. This allows us to write that $\phi$ restricted to $L$ is $\alpha$ and $R$ restricted to $K$ is $S$.* 3. *We say that $\mathcal{E}$ is an abelian extension if $K$ and $L$ are abelian groups and the relative Rota-Baxter group $(K,L, \alpha,S )$ is trivial.* 4. *We say that $\mathcal{E}$ is a central extension if $K \leq \operatorname{Z} ^{\phi}_{R}(H)$ and $L \leq \operatorname{Z} (G) \cap \operatorname{ker} (\phi)$. Clearly, every central extension is abelian.* By abuse of notation, let ${\bf 1}$ also denote the trivial skew left brace for which both the underlying groups are trivial. **Definition 16**. *Let $Q$ and $I$ be skew left braces. An extension of $Q$ by $I$ is a skew left brace $E$ that fits into the sequence $$\mathcal{E} : \quad {\bf 1}\longrightarrow I \stackrel{i}{\longrightarrow} E \stackrel{\pi}{\longrightarrow} Q \longrightarrow {\bf 1},$$ where $i$ and $\pi$ are morphisms of skew left braces such that $\operatorname{im} (i)=\ker(\pi)$, $i$ is injective and $\pi$ is surjective.* **Remark 1**. 
By Proposition [Proposition 14](#rrb to slb homo){reference-type="ref" reference="rrb to slb homo"}, a morphism of relative Rota-Baxter groups induces a morphism of induced skew left braces. Thus, it follows that an extension $$\mathcal{E}: \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups induces an extension $$\mathcal{E}_{SB}: \quad {\bf 1} \longrightarrow K_{S} \stackrel{i_1}{\longrightarrow} H_{R} \stackrel{\pi_1}{\longrightarrow} A_{T} \longrightarrow {\bf 1}$$ of induced skew left braces. We shall see later in Corollary [Corollary 35](#RRB2sbext){reference-type="ref" reference="RRB2sbext"} that equivalent extensions of relative Rota-Baxter groups map to equivalent extensions of skew left braces. **Remark 1**. An abelian extension of relative Rota-Baxter groups induces an extension of a skew left brace by a trivial brace. Such extensions of skew left braces have been explored in [@DB18; @NMY1]. Similarly, a central extension of relative Rota-Baxter groups induces a central extension of skew left braces, which have been investigated in [@CCS3; @LV16]. Let $(H,\cdot ,\circ)$ be a skew left brace. Then the map $\lambda^H: H^{(\circ)} \longrightarrow \operatorname{Aut}(H^{(\cdot)})$ defined by $$\lambda^H_a(b) = a^{-1} \cdot (a \circ b),$$ for $a,b \in H$, is a group homomorphism. It turns out that the identity map $\operatorname{id}_H: H^{(\cdot)} \longrightarrow H^{(\circ)}$ is a relative Rota-Baxter operator with respect to the action $\lambda^H$, and hence the quadruple $(H^{(\cdot)}, H^{(\circ)}, \lambda^H, \operatorname{id}_H)$ is a relative Rota-Baxter group corresponding to the skew left brace $(H,\cdot ,\circ)$. **Proposition 17**. *[@NM1 Proposition 3.8] The skew left brace $(H,\cdot ,\circ)$ is induced by the relative Rota-Baxter group $(H^{(\cdot)}, H^{(\circ)}, \lambda^H, \operatorname{id}_H)$.* **Proposition 18**. *[@NM1 Proposition 4.4] A homomorphism of skew left braces induces a homomorphism of corresponding relative Rota-Baxter groups.* In view of the preceding discussion, we may inquire whether every extension of skew left braces arise from an extension of relative Rota-Baxter groups. **Proposition 19**. *Every extension of skew left braces is induced by an extension of relative Rota-Baxter groups.* *Proof.* Consider the extension $$\mathcal{E}: \quad {\bf 1} \longrightarrow K \stackrel{i}{\longrightarrow} H \stackrel{\pi}{\longrightarrow} A \longrightarrow {\bf 1}$$ of skew left braces. Then we have relative Rota-Baxter groups $(K^{(\cdot)}, K^{(\circ)}, \lambda^K, \operatorname{id}_K)$, $(H^{(\cdot)}, H^{(\circ)}, \lambda^H, \operatorname{id}_H)$ and $(A^{(\cdot)}, A^{(\circ)}, \lambda^A, \operatorname{id}_A)$. Since $i$ and $\pi$ are morphisms of skew left braces, using Proposition [Proposition 18](#slb to rrb homo){reference-type="ref" reference="slb to rrb homo"}, we obtain the following extension of relative Rota-Baxter groups $$\mathcal{E}_{RRB} : \quad {\bf 1} \longrightarrow (K^{(\cdot)}, K^{(\circ)}, \lambda^K, \operatorname{id}_K) \stackrel{(i, i)}{\longrightarrow} (H^{(\cdot)}, H^{(\circ)}, \lambda^H, \operatorname{id}_H) \stackrel{(\pi, \pi)}{\longrightarrow} (A^{(\cdot)}, A^{(\circ)}, \lambda^A, \operatorname{id}_A) \longrightarrow {\bf 1}.$$ It is immediate that the extension of skew left braces induced by $\mathcal{E}_{RRB}$ is identical to $\mathcal{E}$, which completes the proof. 
◻ **Proposition 20**. *An extension $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups induces extensions of groups $$\mathcal{E}_1 : \quad 1 \longrightarrow K \stackrel{i_1}{\longrightarrow} H \stackrel{\pi_1 }{\longrightarrow} A \longrightarrow 1 \quad \textrm{and} \quad \mathcal{E}_2 : \quad 1 \longrightarrow L \stackrel{i_2}{\longrightarrow} G \stackrel{\pi_2 }{\longrightarrow} B \longrightarrow 1.$$ Furthermore, $(K,L, \alpha,S )$ is an ideal of $(H,G, \phi, R)$ and the quotient relative Rota-Baxter group $(H, G, \phi, R)/ (K,L, \alpha,S )$ is isomorphic to $(A, B, \beta, T).$* *Proof.* The first assertion is immediate. Since $(K,L, \alpha,S)$ is the kernel of the homomorphism $(\pi_1, \pi_2)$ of relative Rota-Baxter groups, it follows that it is an ideal of $(H,G, \phi, R)$. Let $\bar{\pi}_1: H/K \rightarrow A$ and $\bar{\pi}_2: G/L \rightarrow B$ be induced isomorphisms of groups given by $\bar{\pi}_1(\bar{h})=\pi_1(h)$ and $\bar{\pi}_2(\bar{g})=\pi_2(g)$ for $h \in H$ and $g \in G$. Further, since $\bar{\pi}_2 \bar{R}= T\bar{\pi}_1$ and $\bar{\pi}_1 \bar{\phi}_{\bar{g}}= \beta_{\bar{\pi}_2(\bar{g})} \bar{\pi}_1$ for all $\bar{g} \in G/L$, it follows that $(\bar{\pi}_1, \bar{\pi}_2): (H/K, G/L, \bar{\phi}, \bar{R}) \to (A, B, \beta, T)$ is an isomorphism of relative Rota-Baxter groups. ◻ Given an extension $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups, we obtained the following extensions of groups $$\mathcal{E}_1 : \quad 1 \longrightarrow K \stackrel{i_1}{\longrightarrow} H \stackrel{\pi_1 }{\longrightarrow} A \longrightarrow 1 \quad \textrm{and} \quad \mathcal{E}_2 : \quad 1 \longrightarrow L \stackrel{i_2}{\longrightarrow} G \stackrel{\pi_2 }{\longrightarrow} B \longrightarrow 1.$$ Since $(\pi_1, \pi_2)$ is a homomorphism of relative Rota-Baxter groups, we have $$\label{compat pi1 pi2} \pi_2 \,R= T \, \pi_1 \quad \textrm{and} \quad \pi_1\, \phi_g= \beta_{\pi_2(g)} \,\pi_1$$ for all $g \in G$. Let $s_{H}$ and $s_{G}$ be normalised set-theoretic sections to $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively. The ordered pair $(s_H, s_G)$ will be referred as a set-theoretic section to the extension $\mathcal{E}$. Then each element $h \in H$ and $g \in G$ can be uniquely written as $h= s_H(a)\; k$ and $g=s_G(b) \; l$ for some $k \in K$, $l \in L$ and $a \in A$, $b \in B$. We have $$\begin{aligned} \label{leqn} \phi_g(h) &= & \phi_{ s_G(b) \; l}( s_H(a) \; k) \notag \\ &= & \phi_{ s_G(b)}( \phi_l(s_H(a)) \; \phi_l (k)) \notag \\ &= & \phi_{ s_G(b)}( s_H(a)) \; \phi_{s_G(b)} (f(l,a) \alpha_l(k)),\end{aligned}$$ where the map $f: L \times A \longrightarrow K$ is defined by $$\label{fdefn} f(l,a)= s_H( a)^{-1} \phi_l(s_H(a)) .$$ Further, we can write $$\label{atilde1} \phi_{s_G(b)}(s_H(a)) = s_H(\tilde{a}) \;\rho(a, b)$$ for unique elements $\rho(a, b) \in K$ and $\tilde{a} \in A$. This gives a map $\rho: A \times B \longrightarrow K$. 
Now applying $\pi_1$ on both sides of [\[atilde1\]](#atilde1){reference-type="eqref" reference="atilde1"} and using [\[compat pi1 pi2\]](#compat pi1 pi2){reference-type="eqref" reference="compat pi1 pi2"}, we get $$\label{va} \beta{_b(a)}= \tilde{a}.$$ Using [\[va\]](#va){reference-type="eqref" reference="va"} in [\[atilde1\]](#atilde1){reference-type="eqref" reference="atilde1"}, we can write $$\label{atilde} \phi_{s_G(b)}(s_H(a)) = s_H(\beta{_b(a)}) \; \rho(a, b).$$ Using [\[atilde\]](#atilde){reference-type="eqref" reference="atilde"} in [\[leqn\]](#leqn){reference-type="eqref" reference="leqn"}, we have $$\label{feqn} \phi_g(h)= s_H(\beta{_b(a)})~ \rho(a,b) ~ \phi_{s_G(b)} (f(l,a)\alpha_l(k)) .$$ Next, we see how a relative Rota-Baxter operator can be described on an extension of $(A,B, \beta, T)$ by $(K,L, \alpha,S )$. If $h=s_H(a) k$ is an element of $H$, then we have $$\label{RRB1} R( s_H(a) k ) = R(s_H(a) \phi_{R(s_H(a))}(\phi^{-1}_{R(s_H(a))}(k)) )= R(s_H(a)) R(\phi^{-1}_{R(s_H(a))}(k)).$$ Let $R(s_H(a))=s_G(b) \chi(a)$ for unique elements $b \in B$ and $\chi(a) \in L$. This gives a map $\chi:A \longrightarrow L$. Applying $\pi_2$ on both sides of [\[RRB1\]](#RRB1){reference-type="eqref" reference="RRB1"} and using [\[compat pi1 pi2\]](#compat pi1 pi2){reference-type="eqref" reference="compat pi1 pi2"}, we get $T(a)=b$, and hence $$\label{RRB2} R(s_H(a)) = s_G(T(a)) ~ \chi(a).$$ Substituting the value of $R(s_H(a))$ from [\[RRB2\]](#RRB2){reference-type="eqref" reference="RRB2"} into [\[RRB1\]](#RRB1){reference-type="eqref" reference="RRB1"}, we get $$\label{relativecon} R( s_H(a) k ) = s_G(T(a)) \chi(a) R(\phi^{-1}_{ s_G(T(a)) \; \chi(a)}( k)).$$ **Proposition 21**. *Consider the abelian extension $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups. Then the following hold:* 1. *The map $\nu: B \rightarrow \operatorname{Aut} (K)$ defined by $$\label{nuact} \nu_b(k)= \phi_{s_G(b)}(k)$$ for $b \in B$ and $k \in K$, is a homomorphism of groups.* 2. *The map $\mu: A \rightarrow \operatorname{Aut} (K)$ defined by $$\label{muact} \mu_a(k)=s_H(a)^{-1}\, k\, s_H(a)$$ for $a \in A$ and $k \in K$, is an anti-homomorphism of groups.* 3. *The map $\sigma: B \rightarrow \operatorname{Aut} (L)$ defined by $$\label{sigmaact} \sigma_b(l)=s_G(b)^{-1}\,l\, s_G(b)$$ for $b \in B$ and $l \in L$, is an anti-homomorphism of groups.* *Further, all the maps are independent of the choice of a section to $\mathcal{E}$.* *Proof.* If $b_1, b_2 \in B$, then $s_G(b_1b_2) \tau_2(b_1, b_2)= s_G(b_1) s_G(b_2)$ for some $\tau_2(b_1, b_2) \in L$. This gives $$\nu_{b_1b_2}(k)= \phi_{s_G(b_1b_2)}(k)= \phi_{s_G(b_1) s_G(b_2) \tau_2(b_1, b_2)^{-1}}(k)=\phi_{s_G(b_1)} \phi_{s_G(b_2)} \phi_{ \tau_2(b_1, b_2)^{-1}}(k)$$ and $$\nu_{b_1}\nu_{b_2}(k)= \phi_{s_G(b_1)} \phi_{s_G(b_2)}(k).$$ Since $\phi_{ \tau_2(b_1, b_2)^{-1}}=\alpha_{\tau_2(b_1, b_2)^{-1}}=\operatorname{id}_K$, we have $\phi_{ \tau_2(b_1, b_2)^{-1}}(k)=k$, and hence $\nu_{b_1b_2}= \nu_{b_1}\nu_{b_2}$. If $t_G$ is another section, then we have $s_G(b)=t_G(b) \ell$ for some $\ell \in L$. Since $\phi_\ell(k)=k$, we have $\phi_{s_G(b)}(k)= \phi_{t_G(b) \ell}(k)=\phi_{t_G(b)} \phi_{\ell}(k)=\phi_{t_G(b)}(k)$, and hence $\nu$ is independent of the choice of the section. This proves assertion (1). 
If $a_1, a_2 \in A$, then $s_H(a_1a_2) \tau_1(a_1, a_2)= s_H(a_1) s_H(a_2)$ for some $\tau_1(a_1, a_2) \in K$. This gives $$\begin{aligned} \mu_{a_1a_2}(k) &=& s_H(a_1a_2)^{-1} ~k ~s_H(a_1a_2)\\ &=& \tau_1(a_1, a_2) s_H(a_2)^{-1}s_H(a_1)^{-1} ~k~s_H(a_1) s_H(a_2) \tau_1(a_1, a_2)^{-1}\\ &=& \tau_1(a_1, a_2) s_H(a_2)^{-1} \, \mu_{a_1}(k) \,s_H(a_2) \tau_1(a_1, a_2)^{-1}\\ &=& \tau_1(a_1, a_2) \, \mu_{a_2}\mu_{a_1}(k) \, \tau_1(a_1, a_2)^{-1}\\ &=& \mu_{a_2}\mu_{a_1}(k), \quad \textrm{since}~ K~ \textrm{is abelian}.\end{aligned}$$ Again, since $K$ is abelian, it follows that $\mu$ is independent of the choice of the section $s_H$, which proves (2). Proof of assertion (3) is analogous. ◻ Henceforth, we assume that all extensions of relative Rota-Baxter groups are abelian. Let $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ be an abelian extension of relative Rota-Baxter groups and $(s_H, s_G)$ a set-theoretic section to $\mathcal{E}$. The classical extension theory of groups gives the following (see [@MR0672956]): 1. The map $\tau_1: A \times A \rightarrow K$ given by $$\label{mucocycle} \tau_1(a_1, a_2)= s_H(a_1 a_2)^{-1}s_H(a_1)s_H(a_2)$$ for $a_1, a_2 \in A$ is a group 2-cocycle with respect to the action $\mu$. 2. The map $\tau_2: B \times B \rightarrow L$ given by $$\label{sigmacocycle} \tau_2(b_1, b_2)= s_G(b_1 b_2)^{-1}s_G(b_1)s_G(b_2)$$ $b_1, b_2 \in B$ is a group 2-cocycle with respect to the action $\sigma$. Given the abelian extension $\mathcal{E}$, using extension theory of groups, we can take $H= A \times_{\tau_1} K$ with the group operation given by $$(a_1,k_1) (a_2, k_2)=\big(a_1 a_2,~ \mu_{a_2}(k_1)\, k_2\, \tau_1(a_1, a_2) \big).$$ Similarly, we can take $G=B \times_{\tau_2} L$ with the group operation given by $$(b_1,l_1) (b_2, l_2)=\big(b_1 b_2, ~\sigma_{b_2}(l_1)\,l_2\, \tau_2(b_1, b_2)\big).$$ It follows from [\[feqn\]](#feqn){reference-type="eqref" reference="feqn"} and [\[relativecon\]](#relativecon){reference-type="eqref" reference="relativecon"} that the maps $\phi: B \times_{\tau_2} L \longrightarrow \operatorname{Aut} (A \times_{\tau_1} K )$ and $R: A \times_{\tau_1} K \longrightarrow B \times_{\tau_2} L$ are given by $$\begin{aligned} \phi_{(b,l)}(a,k) &= & \big(\beta{_{b}(a)}, ~\rho(a,b) \,\nu_b(f(l,a)k)\big),\\ R(a,k) &= & \big(T(a), ~\chi(a) \, S(\nu^{-1}_{T(a)}(k))\big)\end{aligned}$$ for $(a,k) \in A \times_{\tau_1} K$ and $(b,l) \in B \times_{\tau_2} L$. Now, we determine conditions on $\rho$ and $\nu$ for $\phi$ to be a group homomorphism. Let $(b_1,l_1), (b_2,l_2) \in B \times_{\tau_2} L$ and $(a,k) \in A \times_{\tau_1} K$. Then we have $$\begin{aligned} \label{iso1} && \phi_{(b_1,l_1) (b_2, l_2)}(a,k)\\ &=& \phi_{(b_1 b_2, \sigma_{b_2}(l_1)l_2\tau_2(b_1, b_2))}(a,k)\notag \\ &=& \big ( \beta{_{b_1 b_2}(a)},~ \rho(a, b_1 b_2) \,\nu_{b_1 b_2}( f(\sigma_{b_2}(l_1)l_2\tau_2(b_1, b_2), a)k)\big) \notag\end{aligned}$$ and $$\begin{aligned} \label{iso2} && \phi_{(b_1,l_1)}(\phi_{(b_2,l_2)}(a,k))\\ &= & \phi_{(b_1,l_1)}\big( \beta{_{b_2}(a)}, ~\rho(a,b_2) \,\nu_{b_2}(f ( l_2,a)k)\big) \notag \\ &=& \big(\beta{_{b_1 b_2}(a)},~\rho(\beta{_{b_2}(a)}, b_1)\, \nu_{b_1}\big(f(l_1, \beta{_{b_2}(a))}\, \rho(a,b_2)\, \nu_{b_2}(f ( l_2,a)k) \big)\big). 
\notag\end{aligned}$$ Comparing [\[iso1\]](#iso1){reference-type="eqref" reference="iso1"} and [\[iso2\]](#iso2){reference-type="eqref" reference="iso2"} and using the fact that $\beta$ and $\nu$ are homomorphisms, we get $$\begin{aligned} \label{iso5} && \rho(a, b_1 b_2) \,\nu_{b_1 b_2}\big( f(\sigma_{b_2}(l_1)l_2\tau_2(b_1, b_2), a)\big)\\ &= & \rho(\beta{_{b_2}(a)}, b_1) \, \nu_{b_1}\big (f(l_1, \beta{_{b_2}(a))}\rho(a,b_2) \nu_{b_2}(f ( l_2,a))\big) \notag.\end{aligned}$$ Taking $l_1, l_2 =1$ in [\[iso5\]](#iso5){reference-type="eqref" reference="iso5"} gives $$\begin{aligned} \label{iso3} \rho(a, b_1 b_2) \,\nu_{b_1 b_2}\big( f(\tau_2(b_1, b_2), a)\big) &=& \rho(\beta{_{b_2}(a)}, b_1) \,\nu_{b_1}(\rho(a,b_2)),\end{aligned}$$ whereas taking $b_1, b_2=1$ yields $$\begin{aligned} \label{iso4} f(l_1l_2,a) &=& f( l_1, a) \,f( l_2,a).\end{aligned}$$ Using and [\[iso4\]](#iso4){reference-type="eqref" reference="iso4"} in [\[iso5\]](#iso5){reference-type="eqref" reference="iso5"}, we get $$\begin{aligned} \nu_{b_2}(f(\sigma_{b_2}(l_1), a)) &=& f( l_1, \beta{_{b_2}(a))}.\end{aligned}$$ Next, we determine conditions under which $\phi_{(b,l)}$ is homomorphism for all $(b,l) \in B \times_{\tau_2} L$. Let $(a_1,k_1), (a_2, k_2) \in A \times_{\tau_1} K$. Then, we have $$\begin{aligned} \label{hom1} && \phi_{(b,l)} \big((a_1,k_1) (a_2, k_2)\big) \\ &=& \phi_{(b,l)}(a_1 a_2, ~\mu_{a_2}(k_1)\,k_2 \,\tau_1(a_1, a_2))\notag\\ &=& \big(\beta{_{b}(a_1 a_2)}, ~\rho(a_1 a_2,b) \,\nu_b \big( f(l, a_1 a_2)\mu_{a_2}(k_1)k_2\tau_1(a_1, a_2)\big)\big) \notag\end{aligned}$$ and $$\begin{aligned} \label{hom2} && \phi_{(b,l)}(a_1,k_1)\, \phi_{(b,l)}(a_2,k_2)\\ &= & \big(\beta{_{b}(a_1)}, ~\rho(a_1,b) \, \nu_{b} (f(l,a_1)k_1)\big) \,\big(\beta{_{b}(a_2)}, ~ \rho(a_2,b) \,\nu_b(f(l,a_2) k_2)\big)\notag \\ &= & \big(\beta{_{b}(a_1 a_2)}, ~\mu_{ \beta{_{b}(a_2)}} \big(\rho(a_1,b) \nu_{b} (f(l,a_1)k_1)\big)\, \rho(a_2,b) \nu_b(f(l,a_2) k_2) ~\tau_1(\beta{_{b}(a_1)}, \beta_{b}(a_2)) \big). \notag\end{aligned}$$ Comparing [\[hom1\]](#hom1){reference-type="eqref" reference="hom1"} and [\[hom2\]](#hom2){reference-type="eqref" reference="hom2"} give $$\begin{aligned} \label{mainhom} &&\rho(a_1 a_2,b)~ \nu_b \big( f(l, a_1 a_2)\mu_{a_2}(k_1)k_2\tau_1(a_1, a_2) \big) \\ &=& \mu_{ \beta{_{b}(a_2)}} \big(\rho(a_1,b) \nu_{b} (f(l,a_1)k_1)\big) \, \rho(a_2,b) \nu_b(f(l,a_2) k_2) \,\tau_1(\beta{_{b}(a_1)}, \beta{_{b}(a_2))}. \notag\end{aligned}$$ Taking $b=1$ in [\[mainhom\]](#mainhom){reference-type="eqref" reference="mainhom"} gives $$\begin{aligned} \label{fcon} f(l, a_1 a_2) &=& \mu_{ a_2}(f(l,a_1))f(l, a_2),\end{aligned}$$ whereas taking $a_1=1$ gives $$\begin{aligned} \label{mcon} \nu_b(\mu_{a_2}(k_1)) &=& \mu_{ \beta{_{b}(a_2)}}( \nu_{b} (k_1)).\end{aligned}$$ Using [\[fcon\]](#fcon){reference-type="eqref" reference="fcon"} and [\[mcon\]](#mcon){reference-type="eqref" reference="mcon"} in [\[mainhom\]](#mainhom){reference-type="eqref" reference="mainhom"}, we obtain $$\begin{aligned} \label{rho second equation} \rho(a_1 a_2, b) \,\nu_{b}(\tau_1(a_1, a_2)) &=& \mu_{ \beta{_{b}(a_2)}}(\rho(a_1,b)) \,\rho(a_2, b) \, \tau_1(\beta{_{b}(a_1)}, \beta{_{b}(a_2)}).\end{aligned}$$ Putting succinctly, we have the following result. **Proposition 22**. 
*The map $\phi: B \times_{\tau_2} L \longrightarrow \operatorname{Aut} (A \times_{\tau_1} K )$ defined by $$\begin{aligned} \label{phidefnprop} \phi_{(b,l)}(a,k) &= & \big(\beta{_{b}(a)}, ~\rho(a,b)\nu_b(f(l,a)k)\big), \end{aligned}$$ for $(a,k) \in A \times_{\tau_1} K$ and $(b,l) \in B \times_{\tau_2} L$, is a homomorphism if and only if the conditions $$\begin{aligned} \nu_b(\mu_{a_2}(k)) & = & \mu_{ \beta{_{b}(a_2)}}( \nu_{b} (k)),\\ \rho(a_1, b_1 b_2) \,\nu_{b_1 b_2}( f(\tau_2(b_1, b_2), a_1)) &=& \rho(\beta{_{b_2}(a_1)}, b_1) \,\nu_{b_1}(\rho(a_1,b_2)),\\ f(l_1l_2,a_1) &=& f( l_1, a_1)f( l_2,a_1),\\ f(l, a_1 a_2) &=& \mu_{ a_2}(f(l,a_1))f(l, a_2),\\ \nu_{b_2}(f(\sigma_{b_2}(l_1), a_1)) &=& f( l_1, \beta{_{b_2}(a_1))},\\ \rho(a_1 a_2, b_1) \,\nu_{b_1}(\tau_1(a_1, a_2)) &=& \mu_{ \beta{_{b_1}(a_2)}}(\rho(a_1,b_1)) \,\rho(a_2, b_1) \, \tau_1(\beta{_{b_1}(a_1)}, \beta{_{b_1}(a_2)}), \end{aligned}$$ hold for all $a_1, a_2 \in A$, $b, b_1, b_2 \in B$, $l, l_1, l_2 \in L$ and $k \in K$.* **Lemma 23**. *Let $\mathcal{E}$ be an abelian extension of relative Rota-Baxter groups. Then the following hold:* 1. *The map $f: L \times A \longrightarrow K$ defined by $$f(l,a)= s_H( a)^{-1} \phi_l(s_H(a))$$ for $l \in L$ and $a \in A$, is independent of the choice of the section $s_H$.* 2. *$f(l_1l_2,a) = f( l_1, a)f( l_2,a)$ for all $l_1, l_2 \in L$ and $a \in A$.* 3. *$f(l, a_1 a_2) = \mu_{ a_2}(f(l,a_1))f(l, a_2)$ for all $l \in L$ and $a_1, a_2 \in A$.* *Proof.* Let $s_{H}$ and $t_H$ be two set-theoretic sections of $\mathcal{E}_1$. For each $a \in A$, there exists a unique element $\lambda(a) \in K$ such that $s_{H}(a)= t_H(a) \lambda(a)$. Note that $(K, L , \alpha, S)$ is an ideal of $(H, G , \phi, R)$ and is also a trivial relative Rota-Baxter group. This gives $$\begin{aligned} f(l,a) &=& s_H( a)^{-1} \phi_l(s_H(a))\\ &=& \lambda(a)^{-1} t_H(a)^{-1} \phi_l(t_H(a)\lambda(a))\\ &=& \lambda(a)^{-1} t_H(a)^{-1} \phi_l(t_H(a)) \phi_l(\lambda(a))\\ &=& \lambda(a)^{-1} t_H(a)^{-1} \phi_l(t_H(a)) \lambda(a), \quad \textrm{since}~ \phi_l(\lambda(a))=\lambda(a)\\ &=& t_H(a)^{-1} \phi_l(t_H(a)), \quad \textrm{since}~ t_H(a)^{-1} \phi_l(t_H(a)) \in K.\end{aligned}$$ Thus, $f$ is independent of the choice of the section, which proves assertion (1). Assertions (2) and (3) are simply [\[iso4\]](#iso4){reference-type="eqref" reference="iso4"} and [\[fcon\]](#fcon){reference-type="eqref" reference="fcon"}, respectively. ◻ Our next aim is to determine conditions under which $R$ becomes a relative Rota-Baxter operator. Let $(a_1,k_1)$ and $(a_2, k_2) \in A \times_{\tau_1} K$. Recall that $a_1 \circ_{T} a_2=a_1 \beta{}_{T(a_1)}(a_2)$. 
Then we have $$\begin{aligned} \label{RRBcon} && R\big( (a_1,k_1) \phi_{R(a_1,k_1)} (a_2, k_2) \big)\\ &=& R \big( (a_1, k_1) \phi_{(T(a_1), ~\chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)))}(a_2, k_2) \big)\notag\\ &=& R \big( (a_1, k_1) \big(\beta{_{T(a_1)}}(a_2), ~\rho(a_2, T(a_1)) \,\nu_{T(a_1)} \big(f(\chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)), a_2) k_2 \big) \big)\big)\notag\\ &=& R \big( a_1 \circ_{T} a_2,~ \mu_{\beta{_{T(a_1)}(a_2)}}(k_1) \, \rho(a_2, T(a_1)) \,\nu_{T(a_1)} \big(f(\chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)), a_2)k_2 \big) \,\tau_1(a_1, \beta{_{T(a_1)}(a_2)}) \big)\notag\\ &=& \big(T(a_1 \circ_{T} a_2), ~\chi(a_1 \circ_{T} a_2) \,S(\nu^{-1}_{T(a_1 \circ_{T} a_2)} (z)) \big), \text{where} \notag\end{aligned}$$ $$\begin{aligned} z&=&\; \mu_{\beta{_{T(a_1)}(a_2)}}(k_1)\, \rho(a_2, T(a_1)) \,\nu_{T(a_1)} \big(f(\chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)), a_2) k_2 \big) \, \tau_1(a_1,\beta{_{T(a_1)}(a_2)}).\end{aligned}$$ On the other hand, we have $$\begin{aligned} \label{RRBcon2} && R (a_1,k_1)R (a_2,k_2)\\ &= & \big(T(a_1),~ \chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)) \big)\, \big(T(a_2), ~\chi(a_2) S(\nu^{-1}_{T(a_2)}(k_2))\big)\notag\\ &=& \big(T(a_1 \circ_{T} a_2),~ \sigma_{T(a_2)}\big( \chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1))\big) \, \chi(a_2) \,S(\nu^{-1}_{T(a_2)}(k_2)) \,\tau_2(T(a_1), T(a_2) ) \big) \notag.\end{aligned}$$ Comparing [\[RRBcon\]](#RRBcon){reference-type="eqref" reference="RRBcon"} and [\[RRBcon2\]](#RRBcon2){reference-type="eqref" reference="RRBcon2"} and using the fact that $T(a_1 \circ_{T} a_2)= T(a_1)T(a_2)$, we obtain $$\begin{aligned} \label{Rmain} && \chi(a_1 \circ_{T} a_2) \,S\big( \nu^{-1}_{T(a_1 \circ_{T} a_2)} \big(\mu_{\beta{_{T(a_1)}}(a_2)}(k_1) \, \rho(a_2, T(a_1))\, \nu_{T(a_1)}(f(\chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1)), a_2)k_2) \\ && \tau_1(a_1,\beta{_{T(a_1)}}(a_2))\big) \big)\notag\\ &=& \sigma_{T(a_2)}\big( \chi(a_1) S(\nu^{-1}_{T(a_1)}(k_1))\big)\, \chi(a_2)\, \tau_2 \,(T(a_1), T(a_2) )S(\nu^{-1}_{T(a_2)}(k_2)) . \notag\end{aligned}$$ Cancelling off $S(\nu^{-1}_{T(a_2)}(k_2))$ and putting $k_1 = 1$ in [\[Rmain\]](#Rmain){reference-type="eqref" reference="Rmain"}, we get $$\begin{aligned} \label{R1} && \chi(a_1 \circ_{T} a_2) \,S \big(\nu^{-1}_{T(a_1 \circ_{T} a_2)}\big(\rho(a_2, T(a_1)) \,\tau_1(a_1,\beta{_{T(a_1)}}(a_2))\big) \,\nu^{-1}_{T(a_2)}(f(\chi(a_1), a_2))\big) \\ &= & \sigma_{T(a_2)}( \chi(a_1)) \, \chi(a_2) \,\tau_2(T(a_1), T(a_2) ). \notag\end{aligned}$$ Using [\[R1\]](#R1){reference-type="eqref" reference="R1"} in [\[Rmain\]](#Rmain){reference-type="eqref" reference="Rmain"} and the facts that $f$ is linear in the first coordinate and $T(a_1 \circ_{T} a_2)= T(a_1)T(a_2)$, we get $$S\big(\nu^{-1}_{T(a_1 \circ_{T} a_2)}(\mu_{\beta{_{T(a_1)}}(a_2)}(k_1))\nu^{-1}_{T(a_2)}(f(S(\nu^{-1}_{T(a_1)}(k_1)), a_2))\big)= \sigma_{T(a_2)}(S(\nu^{-1}_{T(a_1)}(k_1))).$$ Replacing $k_1$ by $\nu_{T(a_1)}(k_1)$ and using [\[mcon\]](#mcon){reference-type="eqref" reference="mcon"}, we get $$\begin{aligned} \label{finalactrel} S\big(\nu^{-1}_{T(a_2)}(\mu_{a_2}(k_1)) \,\nu^{-1}_{T( a_2)}(f(S(k_1), a_2))\big) &=& \sigma_{T(a_2)}(S(k_1)).\end{aligned}$$ The quadruple $(\nu, \mu, \sigma, f)$ is called the *associated action* of the extension $\mathcal{E}$ of relative Rota-Baxter groups. The preceding observations lead to the following result. **Proposition 24**. *Let $\phi: B \times_{\tau_2} L \longrightarrow \operatorname{Aut} (A \times_{\tau_1} K )$ defined in [\[phidefnprop\]](#phidefnprop){reference-type="eqref" reference="phidefnprop"} be a homomorphism. 
Then the map $R: A \times_{\tau_1} K \rightarrow B \times_{\tau_2} L$ given by $$\begin{aligned} R(a,k) &= & \big(T(a), ~\chi(a) \,S(\nu^{-1}_{T(a)}(k))\big),\end{aligned}$$ for $(a,k) \in A \times_{\tau_1} K$ and $(b,l) \in B \times_{\tau_2} L$, is a relative Rota-Baxter operator if and only if the conditions $$\begin{aligned} & & \chi(a_1 \circ_{T} a_2) \,S \big(\nu^{-1}_{T(a_1 \circ_{T} a_2)}\big(\rho(a_2, T(a_1)) \,\tau_1(a_1,\beta{_{T(a_1)}}(a_2))\big) \,\nu^{-1}_{T(a_2)}(f(\chi(a_1), a_2))\big) \\ & =& \sigma_{T(a_2)}( \chi(a_1)) \, \chi(a_2) \,\tau_2(T(a_1), T(a_2) )\\ \textrm{and} &&\\ && S\big(\nu^{-1}_{T(a_2)}(\mu_{a_2}(k)) \,\nu^{-1}_{T( a_2)}(f(S(k), a_2))\big) = \sigma_{T(a_2)}(S(k)), \end{aligned}$$ hold for all $a_1, a_2 \in A$ and $k \in K$.* **Remark 1**. It follows from Proposition [Proposition 24](#Rallcondn){reference-type="ref" reference="Rallcondn"} that if $T$ and $S$ are bijections, then $R$ is also a bijection. Consequently, every extension of bijective relative Rota-Baxter groups is itself bijective. A special case of the preceding discussion gives the direct product of relative Rota-Baxter groups. **Example 25**. *Let $\mathcal{K}=(K,L,\alpha,S)$ and $\mathcal{A}=(A,B,\beta,T)$ be two relative Rota-Baxter groups. Then the direct product of $\mathcal{K}$ and $\mathcal{A}$ is defined as $$\mathcal{A}\times\mathcal{K}=(A\times K, \, B\times L, \, \beta\times \alpha, \, T\times S),$$ where $A\times K$ and $B\times L$ are direct product of groups, and the action and the Rota-Baxter operator are given by $$\begin{aligned} (\beta\times\alpha)_{(b,l)}(a,k)&=&(\beta_b(a),\,\alpha_l(k)),\\ (T\times S)(a,k)&=&(T(a),\,S(k)), \end{aligned}$$ for $a \in A$, $b \in B$, $k \in K$ and $l \in L$.* ## Cohomology of relative Rota-Baxter groups {#subsec cohomology RRB groups} Based on the relationship between $\mu, \sigma, \nu, f$ and their properties derived in the preceding subsection, we now define a module over a relative Rota-Baxter group. **Definition 26**. *A module over a relative Rota-Baxter group $(A, B,\beta, T)$ is a trivial relative Rota-Baxter group $(K, L, \alpha, S)$ such that there exists a quadruple $(\nu, \mu, \sigma, f)$ (called action) of maps satisfying the following conditions:* 1. *The group $K$ is a left $B$-module and a right $A$-module with respect to the actions $\nu:B \to \operatorname{Aut} (K)$ and $\mu: A \to \operatorname{Aut} (K)$, respectively.* 2. *The group $L$ is a right $B$-module with respect to the action $\sigma: B \to \operatorname{Aut} (L)$.* 3. *The map $f: L \times A \to K$ has the property that $f(-,a): L \rightarrow K$ is a homomorphism for all $a \in A$ and $f(l,-): A \rightarrow K$ is a derivation with respect to the action $\mu$ for all $l \in L$.* 4. *$S\big(\nu^{-1}_{T(a)}(\mu_{a}(k)) \,\nu^{-1}_{T( a)}(f(S(k), a))\big)=\;\sigma_{T(a)}(S(k))$ for all $a \in A$ and $k \in K$.* 5. *$\nu_b(\mu_{a}(k))= \; \mu_{ \beta{_{b}(a)}}( \nu_{b} (k))$ for all $a \in A$, $b \in B$ and $k \in K$.* We say that the module $(K, L, \alpha, S)$ is trivial if all the maps $\mu, \nu, \sigma, f$ are trivial. **Proposition 27**. *Suppose that $${\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ is an abelian extension of relative Rota-Baxter groups. Then the following holds:* 1. 
*The group $K$ is a left $B$-module via an action $\nu$ and a right $A$-module via an action $\mu$, where $\nu$ and $\mu$ are defined in [\[nuact\]](#nuact){reference-type="eqref" reference="nuact"} and [\[muact\]](#muact){reference-type="eqref" reference="muact"}, respectively.* 2. *The group $L$ is a right $B$-module via an action $\sigma$, where $\sigma$ is defined in [\[sigmaact\]](#sigmaact){reference-type="eqref" reference="sigmaact"}.* 3. *The trivial relative Rota-Baxter group $(K, L, \alpha, S)$ is an $(A,B, \beta, T)$-module via the action $(\nu, \mu, \sigma, f)$.* *Proof.* Assertions (1) and (2) follow from Proposition [Proposition 21](#construction of actions){reference-type="ref" reference="construction of actions"}. The existence of the map $f: L \times A \to K$ is given by [\[fdefn\]](#fdefn){reference-type="eqref" reference="fdefn"}. That $f$ is linear in the first variable and is a derivation in the second variable follow from [\[iso4\]](#iso4){reference-type="eqref" reference="iso4"} and [\[fcon\]](#fcon){reference-type="eqref" reference="fcon"}, respectively. The two desired relations among these maps follow from [\[mcon\]](#mcon){reference-type="eqref" reference="mcon"} and [\[finalactrel\]](#finalactrel){reference-type="eqref" reference="finalactrel"}, which proves assertion (3). ◻ Using the relationships between the maps $\tau_1, \tau_2, f,\rho,\chi$ obtained in the preceding subsection, we now define low dimensional cohomology groups of a relative Rota-Baxter group. Let $\mathcal{A}=(A, B, \beta, T)$ and $\mathcal{K}=(K, L, \alpha, S)$ be relative Rota-Baxter groups such that $\mathcal{K}$ is trivial and is an $\mathcal{A}$-module via the action $(\nu, \mu, \sigma, f)$. Let us set $$\begin{aligned} C^0_{RRB} & = & K \times_{(\nu, \mu, \sigma, f) } L,\\ C^{1}_{RRB} &=& C^1(A, K) \oplus C^1(B,L),\\ C^{2}_{RRB} &=& C^2(A, K) \oplus C^2(B,L) \oplus C(A \times B, K) \oplus C(A,L),\\ C^{3}_{RRB} &=& C^3(A, K) \oplus C^3(B,L) \oplus C(A \times B^2, K) \oplus C(A^2 \times B, K) \oplus C^2(A,L),\end{aligned}$$ where $$\begin{aligned} K \times_{(\nu, \mu, \sigma, f) } L &=& \big\{ (k,l) \in K \times L ~\mid ~ f(\sigma_b(l), a)=f(l,a),~ \nu_b(k) = k,\\ && \sigma_{T(a)}(l)=l ~\textrm{and}~ S(\mu_a(k))=S(k) ~\textrm{for all}~ a \in A~\textrm{and}~ b \in B \big\}\end{aligned}$$ and $C(A^m \times B^n, K)$ is the group of maps which vanish on degenerate tuples, i.e, on tuples in which either $a_i=1$ for some $1\leq i\leq m$ or $b_j=1$ for some $1\leq j\leq n$. Define $\partial_{RRB}^0: C^0_{RRB} \rightarrow C^{1}_{RRB}$ by $$\partial_{RRB}^0(k,l)=(\kappa_k, \,\omega_l),$$ where $\kappa_k: A \rightarrow K$ and $\omega_l: B \rightarrow L$ are defined by $$\begin{aligned} \kappa_k(a) &=&\mu_a(k)k^{-1},\\ \omega_l(b)&=& \sigma_b(l)l^{-1},\end{aligned}$$ for $k \in K$, $l \in L$, $a \in A$ and $b \in B$. 
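In other words, $\kappa_k$ and $\omega_l$ are the principal crossed homomorphisms determined by $k$ and $l$ with respect to the actions $\mu$ and $\sigma$. For instance, if the quadruple of actions $(\nu, \mu, \sigma, f)$ is trivial, then $C^0_{RRB}=K \times L$ and $$\kappa_k(a)=\mu_a(k)k^{-1}=1 \quad \textrm{and} \quad \omega_l(b)=\sigma_b(l)l^{-1}=1$$ for all $a \in A$ and $b \in B$, so that $\partial_{RRB}^0$ is the trivial map. 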
Define $\partial_{RRB}^1: C^1_{RRB} \rightarrow C^{2}_{RRB}$ by $$\partial_{RRB}^1(\theta_1, \theta_2)=(\partial_{\mu}^1(\theta_1), \partial_{\sigma}^1(\theta_2), \lambda_1, \lambda_2 ),$$ where $\partial_{\mu}^1, \partial_{\sigma}^1$ are as in [\[group coboundary mu\]](#group coboundary mu){reference-type="eqref" reference="group coboundary mu"} and $\lambda_1, \lambda_2$ are given by $$\begin{aligned} \lambda_1(a,b) &=& \nu_b \big(f(\theta_2(b), a)\theta_1(a) \big)\, \big(\theta_1(\beta_b(a))\big)^{-1},\\ \lambda_2(a) &=& S \big(\nu^{-1}_{T(a)}(\theta_1(a)) \big) \, \big(\theta_2(T(a)) \big)^{-1},\end{aligned}$$ for $\theta_1 \in C^1(A, K)$, $\theta_2 \in C^1(B,L)$, $a \in A$ and $b \in B$. Finally, define $\partial_{RRB}^2: C^2_{RRB} \rightarrow C^{3}_{RRB}$ by $$\partial^2_{RRB}(\tau_1, \tau_2, \rho, \chi)=(\partial_{\mu}^2(\tau_1), \partial_{\sigma}^2(\tau_2), \gamma_1, \gamma_2, \gamma_3),$$ where $\partial_{\mu}^2, \partial_{\sigma}^2$ are as in [\[group coboundary mu\]](#group coboundary mu){reference-type="eqref" reference="group coboundary mu"} and $$\begin{aligned} \gamma_1(a,b_1, b_2) &=& \rho(a, b_1 b_2) \, \nu_{b_1 b_2}( f(\tau_2(b_1, b_2), a)) \, (\rho(\beta_{b_2}(a), b_1))^{-1} \,(\nu_{b_1}(\rho(a,b_2)))^{-1},\\ \gamma_2(a_1, a_2,b) &=& \rho(a_1 a_2, b) \, \nu_{b}(\tau_1(a_1, a_2)) \,(\mu_{ \beta_{b}(a_2)}(\rho(a_1,b)))^{-1} \,(\rho(a_2, b))^{-1} \,(\tau_1(\beta_{b}(a_1), \beta_{b}(a_2)))^{-1},\\ \gamma_3(a_1, a_2) &= & S \big(\nu^{-1}_{T(a_1 \circ_T a_2)}\big(\rho(a_2, T(a_1)) \, \tau_1(a_1,\beta_{T(a_1)}(a_2)) \,\nu_{T(a_1)}(f(\chi(a_1), a_2)) \big)\big)\\ && (\tau_2(T(a_1), T(a_2) ))^{-1} \,(\delta^1_{\sigma}(\chi)(a_1, a_2))^{-1}\end{aligned}$$ for $\tau_1 \in C^2(A, K)$, $\tau_2 \in C^2(B,L)$, $\rho \in C(A \times B, K)$, $\chi \in C(A,L)$, $a, a_1, a_2 \in A$ and $b, b_1, b_2 \in B$. **Lemma 28**. *$\operatorname{im} (\partial^0_{RRB}) \subseteq \operatorname{ker} (\partial^1_{RRB})$ and $\operatorname{im} (\partial^1_{RRB}) \subseteq \operatorname{ker} (\partial^2_{RRB})$.* *Proof.* It is enough to prove that $\partial^1_{RRB}\; \partial^0_{RRB}$ and $\partial^2_{RRB}\; \partial^1_{RRB}$ are trivial homomorphisms. We have $$\begin{aligned} \partial^1_{RRB}\; \partial^0_{RRB}(k,l) &=& (\partial^1_{\mu}(\kappa_k), \partial^1_{\sigma}(\omega_l), \lambda_1, \lambda_2). \end{aligned}$$ Since $\kappa_k$ and $\omega_l$ are group 1-coboundaries, we have $\partial^1_{\mu}(\kappa_k)=1$ and $\partial^1_{\sigma}(\omega_l)=1$. Further, we have $$\begin{aligned} \lambda_1(a,b) &=& \nu_b \big(f(\sigma_b(l)l^{-1}, a)\mu_a(k)k^{-1} \big) \,(\mu_{\beta_b(a)}(k))^{-1} \,k. \end{aligned}$$ Using the definition of $K \times_{(\nu, \mu, \sigma, f)} L$ and the compatibility conditions in Definition [Definition 26](#moddefn){reference-type="ref" reference="moddefn"}, we have $\lambda_1(a, b) = 1$. Similarly, we obtain $\lambda_2(a, b) = 1$ for all $a \in A$ and $b \in B.$ Now we prove that $\partial^2_{RRB}\; \partial^1_{RRB}$ is trivial. If $(\theta_1, \theta_2) \in C^1_{RRB}$, then $$\begin{aligned} \label{com1} \partial^2_{RRB}\; \partial^1_{RRB}(\theta_1, \theta_2) &=& \partial^2_{RRB}(\partial_{\mu}^1(\theta_1), \partial_{\sigma}^1(\theta_2), \lambda_1, \lambda_2 ). \end{aligned}$$ We show that each term on the right hand side of [\[com1\]](#com1){reference-type="eqref" reference="com1"} is a trivial map. 
The first two terms are trivial maps since $\partial_{\mu}^2\partial_{\mu}^1$ and $\partial_{\sigma}^2\partial_{\sigma}^1$ are trivial being the coboundary maps defining the usual group cohomology. We expand the remaining three terms using the definitions of $\partial^1_{RRB}$ and $\partial^2_{RRB}$. It follows from [\[group coboundary mu\]](#group coboundary mu){reference-type="eqref" reference="group coboundary mu"} and [\[group coboundary twisted sigma\]](#group coboundary twisted sigma){reference-type="eqref" reference="group coboundary twisted sigma"} that $$\begin{aligned} \partial^1_{\mu}(\theta_1)(a_1, a_2) &=& \theta_1(a_2)\, (\theta_1(a_1 a_2))^{-1} \,\mu_{a_2}(\theta_1(a_1)),\\ \partial^1_{\sigma}(\theta_2)(b_1, b_2) &=& \theta_2(b_2) \,(\theta_2(b_1 b_2))^{-1} \,\sigma_{b_2}(\theta_2(b_1)),\\ \delta^1_{\sigma}(\chi)(a_1, a_2) &=& \chi(a_2) \, (\chi(a_1 \circ_T a_2))^{-1} \,\sigma_{T(a_2)}( \chi(a_1)). \end{aligned}$$ The third term of [\[com1\]](#com1){reference-type="eqref" reference="com1"} is given by $$\begin{aligned} \label{t1} && \gamma_1(a, b_1, b_2) \notag\\ &=& \nu_{b_1 b_2}(f(\theta_2(b_1 b_2), a)\theta_1(a))(\theta_1(\beta_{b_1 b_2}(a)))^{-1} \nu_{b_1 b_2}(f( \theta_2(b_2)(\theta_2(b_1 b_2))^{-1}\sigma_{b_2}(\theta_2(b_1)), a))\notag \\ &&(\nu_{b_1}(f(\theta_2(b_1), \beta_{b_2}(a))\theta_1(\beta_{b_2}(a)))(\theta_1(\beta_{b_1}(\beta_{b_2}(a))))^{-1})^{-1}(\nu_{b_1}\big( \nu_{b_2}(f(\theta_2(b_2), a)\theta_1(a))(\theta_1(\beta_{b_2}(a)))^{-1})\big)^{-1}. \notag \end{aligned}$$ Using that $f(\theta_2(b_1), \beta_{b_2}(a))= \nu_{b_2}(f(\sigma_{b_2}(\theta_2(b_2)),a))$ and $f$ is linear in the first coordinate, we have $\gamma_1(a, b_1, b_2)=1$. The fourth term of [\[com1\]](#com1){reference-type="eqref" reference="com1"} is given by $$\begin{aligned} \gamma_2(a_1, a_2,b) &= & \nu_b(f(\theta_2(b), a_1a_2)\theta_1(a_1 a_2))(\theta_1(\beta_b(a_1 a_2)))^{-1} \nu_{b}(\theta_1(a_2)(\theta_1(a_1 a_2))^{-1}\mu_{a_2}(\theta_1(a_1))) \notag\\ && (\mu_{ \beta_{b}(a_2)}(\nu_b(f(\theta_2(b), a_1)\theta_1(a_1))(\theta_1(\beta_b(a_1)))^{-1}))^{-1} (\nu_b(f(\theta_2(b), a_2)\theta_1(a_2)) (\theta_1(\beta_b(a_2)))^{-1})^{-1}\notag\\ &&(\theta_1(\beta_{b}(a_2)))^{-1}\theta_1(\beta_{b}(a_1)\beta_{b}(a_2))(\mu_{\beta_{b}(a_2)}(\theta_1(\beta_{b}(a_1))))^{-1}.\notag \end{aligned}$$ Using that $f(\theta_2(b), a_1 a_2) = \mu_{ a_2}(f(\theta_2(b),a_1))f(\theta_2(b), a_2)$ and $\nu_b(\mu_{ a_2}(k))=\mu_{ \beta_{b}(a_2)}(\nu_b(k))$ for all $k \in K$, we have $\gamma_2(a_1, a_2,b)=1$. The fifth term is given by $$\begin{aligned} \gamma_3(a_1, a_2) &= & \chi(a_1 \circ_T a_2) ~S(\nu^{-1}_{T(a_1 \circ a_2)}\big(\rho(a_2, T(a_1))\tau_1(a_1,\beta_{T(a_1)}(a_2)) \nu_{T(a_1)}(f(\chi(a_1), a_2)))\big)\notag\\ && (\sigma_{T(a_2)}( \chi(a_1)))^{-1} ~(\chi(a_2))^{-1} ~(\tau_2(T(a_1), T(a_2) ))^{-1} \notag\\ &= & S(\nu^{-1}_{T(a_1 \circ_T a_2)}(\theta_1(a_1 \circ_T a_2))) (\theta_2(T(a_1 \circ_T a_2)))^{-1} S(\nu^{-1}_{T(a_1 \circ_T a_2)}\big( \nu_{T(a_1)}(f(\theta_2(T(a_1)), a_2)\notag\\ && \theta_1(a_2))(\theta_1(\beta_{T(a_1)}(a_2)))^{-1} \theta_1(\beta_{T(a_1)}(a_2))(\theta_1(a_1 \beta_{T(a_1)}(a_2)))^{-1}\mu_{\beta_{T(a_1)}(a_2)}(\theta_1(a_1))\notag\\ && \nu_{T(a_1)}(f(S(\nu^{-1}_{T(a_1)}(\theta_1(a_1)))(\theta_2(T(a_1)))^{-1}, a_2 ))) \big) (\sigma_{T(a_2)}(S(\nu^{-1}_{T(a_1)}(\theta_1(a_1)))(\theta_2(T(a_1)))^{-1}))^{-1}\notag\\ && \big(S(\nu^{-1}_{T(a_2)}(\theta_1(a_2)))(\theta_2(T(a_2)))^{-1}\big)^{-1} \big( \theta_2(T(a_2))(\theta_2(T(a_1) T(a_2)))^{-1}\sigma_{T(a_2)}(\theta_2(T(a_1))) \big)^{-1}. 
\end{aligned}$$ Using that $\nu^{-1}_{T(a_1)}(\mu_{\beta_{T(a_1)}(a_2)}(\theta_1(a_1)))= \mu_{a_2}(\nu^{-1}_{T(a_1)}(\theta_1(a_1)))$, the only part that survive is $$\begin{aligned} S\big(\nu^{-1}_{T(a_2)} \big(f(S(\nu^{-1}_{T(a_1)}(\theta_1(a_1))), a_2) \big)\big)\, S\big(\nu^{-1}_{T(a_2)} \big(\mu_{a_2}(\nu^{-1}_{T(a_1)}(\theta_1(a_1))) \big)\big) \,\big(\sigma_{T(a_2)} \big(S(\nu^{-1}_{T(a_1)}(\theta_1(a_1))) \big)\big)^{-1}, \end{aligned}$$ which is 1 from [\[Rallcondn\]](#Rallcondn){reference-type="eqref" reference="Rallcondn"} by putting $\nu^{-1}_{T(a_1)}(\theta_1(a_1))=k$. This completes the proof of the lemma. ◻ In view of Lemma [Lemma 28](#rrb coboundary condition){reference-type="ref" reference="rrb coboundary condition"}, we define the first and the second cohomology group of $\mathcal{A}=(A, B, \beta, T)$ with coefficients in $\mathcal{K}=(K, L, \alpha, S)$ by $$\operatorname{H} ^1_{RRB}(\mathcal{A}, \mathcal{K})=\operatorname{ker} (\partial^1_{RRB})/ \operatorname{im} (\partial^0_{RRB})$$ and $$\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K})=\operatorname{ker} (\partial^2_{RRB})/ \operatorname{im} (\partial^1_{RRB}).$$ **Question 29**. *Can the preceding construction be extended to develop a full cohomology theory for relative Rota-Baxter groups?* ## Equivalent extensions and second cohomology {#subsec equivalent extensions and cohomology} Next, we define an equivalence of extensions of relative Rota-Baxter groups. **Definition 30**. *Two extensions of relative Rota-Baxter groups $$\mathcal{E}_1 : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ and $$\mathcal{E}_2 : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i^\prime_1, i^\prime_2)}{\longrightarrow} (H^\prime,G^\prime, \phi^\prime, R^\prime) \stackrel{(\pi^\prime_1, \pi^\prime_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ are said to be equivalent if there exists an isomorphism $$(\eta, \zeta):(H,G, \phi, R) \longrightarrow (H^\prime,G^\prime, \phi^\prime, R^\prime)$$ of relative Rota-Baxter groups such that the following diagram commutes $$\begin{aligned} \begin{CD}\label{eqact} {\bf 1} @>>> (K,L,\alpha,S ) @>(i_1, i_2)>> (H,G, \phi, R) @>{{(\pi_1, \pi_2)} }>> (A,B, \beta, T) @>>> {\bf 1}\\ && @V{(\operatorname{id}_K, ~\operatorname{id}_L)} VV@V{(\eta, \zeta)} VV @V{(\operatorname{id}_A, ~\operatorname{id}_B)}VV \\ {\bf 1} @>>> (K,L,\alpha,S ) @>(i^\prime_1, i^\prime_2)>>(H^\prime,G^\prime, \phi^\prime, R^\prime) @>{(\pi^\prime_1, \pi^\prime_2) }>> (A,B, \beta, T) @>>> {\bf 1}. \end{CD}\end{aligned}$$* **Proposition 31**. *Equivalent extensions of relative Rota-Baxter groups induce identical associated actions.* *Proof.* We already observed in Proposition [Proposition 21](#construction of actions){reference-type="ref" reference="construction of actions"} and Lemma [Lemma 23](#properties of f){reference-type="ref" reference="properties of f"} that the actions defined by distinct set-theoretic sections to a given extension are the same. 
Suppose that $$\mathcal{E}_1 : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ and $$\mathcal{E}_2 : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i^\prime_1, i^\prime_2)}{\longrightarrow} (H^\prime,G^\prime, \phi^\prime, R^\prime) \stackrel{(\pi^\prime_1, \pi^\prime_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ are equivalent extensions. Let $(\nu, \mu, \sigma, f)$ and $(\nu^\prime, \mu^\prime, \sigma^\prime, f^\prime)$ be associated actions of the extensions $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively. The commutativity of the diagram [\[eqact\]](#eqact){reference-type="eqref" reference="eqact"} implies that the group extensions $H$ and $H'$ of $A$ by $K$ are equivalent. It follows from the classical extension theory of groups that $\mu= \mu^\prime$ and $\sigma=\sigma^\prime$ (see [@MR0672956]). Let $s_G: B \rightarrow G$ be a set-theoretic section to the group extension $G$. It can be seen that the composition $\zeta s_G$ defines a set-theoretic section to the group extension $G^\prime$. Furthermore, by the commutativity of the diagram [\[eqact\]](#eqact){reference-type="eqref" reference="eqact"}, the action defined in equation [\[nuact\]](#nuact){reference-type="eqref" reference="nuact"} induced by $\zeta s_G$ coincides with the $\nu$. By a similar reasoning, we prove that $f=f^\prime$, which completes the proof. ◻ Let $\mathcal{A}= (A,B, \beta, T)$ be an arbitrary relative Rota-Baxter group and $\mathcal{K}= (K,L,\alpha,S )$ a trivial relative Rota-Baxter group, where $K$ and $L$ are abelian groups. Let $\operatorname{Ext} (\mathcal{A}, \mathcal{K})$ denote the set of all equivalence classes of extensions of $\mathcal{A}$ by $\mathcal{K}$. By Proposition [Proposition 31](#eqactprop){reference-type="ref" reference="eqactprop"}, each element of $\operatorname{Ext} (\mathcal{A}, \mathcal{K})$ gives rise to a unique quadruple $(\nu, \mu, \sigma,f)$ of actions. Thus, we can employ the term associated action of an element of $\operatorname{Ext} (\mathcal{A}, \mathcal{K})$. Further, let $\operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K})$ denote the set of equivalence classes of extensions of $\mathcal{A}$ by $\mathcal{K}$ for which the associated action is $(\nu, \mu, \sigma, f)$. Then we can write $$\operatorname{Ext} (\mathcal{A}, \mathcal{K})=\bigsqcup_{(\nu, \mu, \sigma, f)} \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K}).$$ Suppose that $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ is an abelian extension of relative Rota-Baxter groups and $(s_H, s_G)$ is a set-theoretic section to $\mathcal{E}$. Let $\tau_1, \tau_2, \rho$ and $\chi$ be the maps defined in [\[mucocycle\]](#mucocycle){reference-type="eqref" reference="mucocycle"}, [\[sigmacocycle\]](#sigmacocycle){reference-type="eqref" reference="sigmacocycle"}, [\[atilde\]](#atilde){reference-type="eqref" reference="atilde"} and [\[RRB2\]](#RRB2){reference-type="eqref" reference="RRB2"}, respectively, corresponding to set-theoretic sections $s_H$ and $s_G$. 
It follows from [\[iso3\]](#iso3){reference-type="eqref" reference="iso3"}, [\[rho second equation\]](#rho second equation){reference-type="eqref" reference="rho second equation"} and [\[R1\]](#R1){reference-type="eqref" reference="R1"} that $\partial_{RRB}^2(\tau_1, \tau_2, \rho, \chi)$ is trivial. Hence, the quadruple $(\tau_1, \tau_2, \rho, \chi)$ is a 2-cocycle induced by the set-theoretic section $(s_H, s_G)$. **Theorem 32**. *Let $\mathcal{A}= (A,B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}= (K,L,\alpha,S )$ a trivial relative Rota-Baxter group, where $K$ and $L$ are abelian groups. Let $(\nu, \mu, \sigma, f)$ be the quadruple of actions that makes $\mathcal{K}$ into an $\mathcal{A}$-module. Then there is a bijection between $\operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K})$ and $\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K})$.* *Proof.* Let $\mathcal{A}= (A,B, \beta, T)$ and $\mathcal{K}= (K,L,\alpha,S )$. Let $\mathcal{E}$ be an extension representing an element of $\operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K})$, where $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}.$$ We define $$\Phi: \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K}) \longrightarrow \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \quad \textrm{by} \quad \Phi \big([\mathcal{E}]\big)= [(\tau_1, \tau_2, \rho, \chi )] ,$$ where $(\tau_1, \tau_2, \rho, \chi)$ is the 2-cocycle induced by some fixed set-theoretic section $(s_H, s_G)$ to $\mathcal{E}$. We claim that $\Phi$ is well-defined. Thus, we need to show that $\Phi$ is independent of the choice of a section to $\mathcal{E}$, as well as the choice of a representative of the equivalence class $[\mathcal{E}]$. Let $(s_H, s_G)$ and $(s_H^\prime, s_G^\prime)$ be two distinct set-theoretic sections of $\mathcal{E}$. Two sections of a group extension of $A$ by $K$ differ by an element of $K$. Similarly, two sections of a group extension of $B$ by $L$ differ by an element of $L$. Thus, there exist maps $\theta_1: A \rightarrow K$ and $\theta_2: B \rightarrow L$ such that $s_H(a)s^\prime_H(a)^{-1}=\theta_1(a)$ and $s_G(b)s^\prime_G(b)^{-1}=\theta_2(b)$ for all $a \in A$ and $b \in B$. Let $(\tau_1, \tau_2, \rho, \chi )$ and $(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime )$ be 2-cocycles induced by $(s_H, s_G)$ and $(s_H^\prime, s_G^\prime)$, respectively. Direct calculations yield the following: 1. $\tau_1(a_1,a_2) \,(\tau^\prime_1(a_1, a_2))^{-1}=\partial^{1}_{\mu}(\theta_1)(a_1, a_2)$ for all $a_1, a_2 \in A$, 2. $\tau_2(b_1, b_2) \,(\tau^\prime_2(b_1, b_2))^{-1}=\partial^{1}_{\sigma}(\theta_2)(b_1, b_2)$ for all $b_1, b_2 \in B$, 3. $\rho(a, b) \,(\rho^\prime(a,b))^{-1}=\nu_b(f(\theta_2(b), a)\theta_1(a)) \,(\theta_1(\beta{_b(a)}))^{-1}$ for all $a \in A$ and $b \in B$, 4. $\chi(a) \,(\chi^\prime(a))^{-1} = S(\nu^{-1}_{T(a)}(\theta_1(a))) \,(\theta_2(T(a)))^{-1}$ for all $a \in A$. This gives $$(\tau_1, \tau_2, \rho, \chi ) \,(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime )^{-1}=\partial^1_{RRB}(\theta_1, \theta_2).$$ Hence, the cohomology class of a 2-cocycle is independent of the choice of a defining section. Likewise, one can show that equivalent extensions possess cohomologous 2-cocycles. This proves our claim that $\Phi$ is well-defined. 
To demonstrate the bijectivity of $\Phi$, we define its inverse explicitly. Consider a 2-cocycle $(\tau_1, \tau_2, \rho, \chi) \in \operatorname{ker} (\partial_{RRB}^2)$. Define $H=A\times_{\tau_1} K$ and $G= B \times_{\tau_2} L$ to be the group extensions of $A$ by $K$ and $B$ by $L$ associated to the group 2-cocycles $\tau_1$ and $\tau_2$, respectively. Further, define $\phi: G \rightarrow \operatorname{Aut} (H)$ by $$\begin{aligned} \phi_{(b,l)}(a,k)=\big(\beta{_{b}(a)}, ~\rho(a,b) \,\nu_b(f(l,a)k)\big)\end{aligned}$$ and $R: H \rightarrow G$ by $$\begin{aligned} R(a,k)=\big(T(a), ~\chi(a)\,S(\nu^{-1}_{T(a)}(k))\big).\end{aligned}$$ Using the fact that $(\tau_1, \tau_2, \rho, \chi) \in \operatorname{ker} (\partial_{RRB}^2)$, it follows that $(H, G, \phi, R)$ is a relative Rota-Baxter group and is an extension of $(A,B, \beta, T)$ by $(K,L, \alpha,S )$ denoted by $$\mathcal{E}(\tau_1, \tau_2, \rho, \chi) : \quad {\bf 1} \to (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H, G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \to {\bf 1}.$$ Define $$\Psi: \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \longrightarrow \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K}) \quad \textrm{by} \quad \Psi([(\tau_1, \tau_2, \rho, \chi)])=[\mathcal{E}(\tau_1, \tau_2, \rho, \chi)].$$ We claim that $\Psi$ is the inverse of $\Phi$. To substantiate this claim, it is necessary to establish the well-definedness of $\Psi$. Let $(\tau_1, \tau_2, \rho, \chi )$ and $(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime ) \in \operatorname{ker} (\partial_{RRB}^2)$ be two cohomologous 2-cocycles. By definition, there exist maps $\theta_1: A \rightarrow K$ and $\theta_2: B \rightarrow L$ such that $$(\tau_1, \tau_2, \rho, \chi ) \, (\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime )^{-1}=\partial^1_{RRB}(\theta_1, \theta_2).$$ Let $(A\times_{\tau_1} K,B \times_{\tau_2} L, \phi, R)$ and $(A\times_{\tau^\prime_1} K,B \times_{\tau^\prime_2} L, \phi^\prime, R^\prime)$ be the relative Rota-Baxter groups induced by the 2-cocycles $(\tau_1, \tau_2, \rho, \chi )$ and $(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime )$, respectively. Define $\gamma_1: A \times_{\tau_1} K \rightarrow A \times_{\tau^{\prime}_1} K$ by $\gamma_1(a,k) = (a, k\theta_1(a))$ and $\gamma_2: B \times_{\tau_2} L \rightarrow B \times_{\tau^{\prime}_2} L$ by $\gamma_2(b,l) = (b, l\theta_2(a))$. Straightforward calculations show that $(\gamma_1, \gamma_2)$ is an isomorphism between relative Rota-Baxter groups $(A\times_{\tau_1} K,B \times_{\tau_2} L, \phi, R)$ and $(A\times_{\tau^\prime_1} K,B \times_{\tau^\prime_2} L, \phi^\prime, R^\prime)$ such that the following diagram commutes $$\begin{aligned} \begin{CD} {\bf 1} @>>> (K,L,\alpha,S ) @>(i_1, i_2)>> (A\times_{\tau_1} K,B \times_{\tau_2} L, \phi, R) @>{{(\pi_1, \pi_2)} }>> (A,B, \beta, T) @>>> {\bf 1}\\ && @V{(\operatorname{id}_K, ~\operatorname{id}_L)} VV@V{(\gamma_1, \gamma_2)} VV @V{(\operatorname{id}_A, ~\operatorname{id}_B) }VV \\ {\bf 1} @>>> (K,L,\alpha,S ) @>(i^\prime_1, i^\prime_2)>>(A\times_{\tau^\prime_1} K,B \times_{\tau^\prime_2} L, \phi^\prime, R^\prime)@>{(\pi^\prime_1, \pi^\prime_2) }>> (A,B, \beta, T) @>>> {\bf 1}. \end{CD} \end{aligned}$$ This shows that $\mathcal{E}(\tau_1, \tau_2, \rho, \chi)$ and $\mathcal{E}(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime)$ are equivalent extensions, and hence the map $\Psi$ is well-defined. 
By direct inspection, one can readily verify that $\Phi$ and $\Psi$ are inverse of each other. This concludes the proof. ◻ The following is immediate from the definition. **Proposition 33**. *Let $\mathcal{A}=(A, B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}=(K, L, \alpha, S)$ a module over $\mathcal{A}$ via the action $(\nu, \mu, \sigma, f)$. Then $\operatorname{H} ^0_{RRB}(\mathcal{A}, \mathcal{K})$ is the fixed-point set of actions $\nu$, $\mu$, and $\sigma$, that is, $$\operatorname{H} ^0_{RRB}(\mathcal{A}, \mathcal{K})=\big\{(k,l) \in K \times L ~\mid~ \mu_a(k)=k,~ \nu_b(k)=k, ~\sigma_b(l)=l ~\mbox{ for all } a \in A \mbox{ and }b \in B \big\}.$$* **Remark 1**. If $\mathcal{A}= (1,B, 1, 1)$ and $\mathcal{K}= (1,L,1, 1)$ then $\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \cong \operatorname{H} ^2(B,L)$. Similarly, if $\mathcal{A}= (A,1, 1, 1)$ and $\mathcal{K}= (K,1,1, 1)$ then $\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \cong \operatorname{H} ^2(A,K)$, where $\operatorname{H} ^2$ denotes the second cohomology of groups. # Extensions and cohomology of skew left braces {#sec extensions skew braces} We recall some necessary results on extensions of skew left braces by abelian groups [@NMY1]. Let $(M, \cdot, \circ_M)$ be a skew left brace and $$\mathcal{E}: \quad {\bf 1} \longrightarrow I \stackrel{i_1}{\longrightarrow} E \stackrel{\pi_1}{\longrightarrow} M \longrightarrow {\bf 1}$$ an extension of skew left braces such that $I$ is an abelian group viewed as a trivial brace. The group $I$ is regarded as a subgroup of $E$, and the group operation in the additive group of $E$ will be denoted by juxtaposition. Let $s : M \rightarrow E$ be a set-theoretic section to $\mathcal{E}$. Then, for all $m \in M$ and $y \in I$, we define maps $\xi, \epsilon : M^{(\circ)} \rightarrow \operatorname{Aut} (I)$ and $\zeta :M^{(\cdot)} \rightarrow \operatorname{Aut} (I)$ by $$\begin{aligned} \xi_m(y) & = & \lambda^E_{s(m)}(y),\label{action1 sb}\\ \zeta_m(y) & = & s(m)^{-1} \,y \, s(m),\label{action2 sb}\\ \epsilon_m(y) & = & s(m)^{\dagger} \circ_E y \circ_E s(m),\label{action3 sb}\end{aligned}$$ for $m \in M$ and $y \in I$, where $x^{-1}$ and $x^\dagger$ denotes the inverse of $x$ in $E^{(\circ)}$ and $E^{(\cdot)}$, respectively. It is not difficult to see that the map $\xi$ is a homomorphism, whereas the maps $\zeta, \epsilon$ are anti-homomorphisms. Furthermore, these maps are independent of the choice of the set-theoretic section [@NMY1 Proposition 3.4]. The triplet $(\xi, \zeta, \epsilon)$ is called the *associated action* of the extension $\mathcal{E}$. Next, we recall the definition of the second cohomology group of a skew left brace $(M, \cdot, \circ)$ with coefficients in an abelian group $I$ viewed as a trivial brace. Let $\xi: M ^{(\circ)} \rightarrow \operatorname{Aut} (I)$ be a homomorphism and $\zeta: M^{(\cdot)} \rightarrow \operatorname{Aut} (I)$ and $\epsilon : M^{(\circ)} \rightarrow \operatorname{Aut} (I)$ be anti-homomorphisms satisfying the following conditions $$\begin{aligned} \xi_{m_1 \cdot m_2}(\epsilon_{m_1 \cdot m_2}(y)) \,\zeta_{m_2}(y) & = & \zeta_{m_1}(\xi_{ m_1 }(\epsilon_{m_1}(y))) \, \xi_{ m_2} (\epsilon_{m_2}(y)),\\ \zeta_{m^{-1}_1 \cdot (m_1 \circ_M m_2)}(\xi_{m_1}(y)) & =& \xi_{m_1}(\zeta_{m_2}(y)),\end{aligned}$$ for all $m_1, m_2 \in M$ and $y \in I$. In [@NMY1], such a triplet is referred as a *good triplet* of action of $H$ on $I$. 
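For instance, the trivial triplet, in which $\xi_m$, $\zeta_m$ and $\epsilon_m$ are the identity automorphism of $I$ for every $m \in M$, clearly satisfies both conditions; one checks easily that it is the triplet associated to the direct product extension $E=M \times I$ of skew left braces. 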
Let $g,f: M \times M \rightarrow I$ be maps satisfying $$\begin{aligned} g(m_2, m_3) \,g(m_1 \cdot m_2, m_3)^{-1} \,g(m_1, m_2 \cdot m_3) \,\zeta_{m_3}( g(m_1, m_2))^{-1} =&\; 1 \label{sbcocycle1},\\ \xi_{m_1}(f(m_2, m_3)) \, f(m_1 \circ_M m_2, m_3)^{-1} \, f(m_1, m_2 \circ_M m_3) \, \xi_{m_1 \circ_M m_2 \circ_M m_3} (\epsilon_{m_3}(\nu^{-1}_{m_1 \circ_M m_2}f(m_1, m_2)))^{-1} = & \; 1, \label{sbcocycle2}\\ & \notag\\ \xi_{m_1}(g(m_2, m_3))\, \zeta_{m_1 \circ_M m_3}(g(m_1, m^{-1}_1)) \,\zeta_{m_1 \circ_M m_3}(g(m_1 \circ_M m_2, m^{-1}_1))^{-1} \label{sbcocycle3} \\ g((m_1 \circ_M m_2) m^{-1}_1, m_1 \circ_M m_3)^{-1}\, \zeta_{- m_1\cdot (m_1 \circ_M m_3)}\, (f(m_1, m_2))^{-1} \, f(m_1, m_2 \cdot m_3) f(m_1, m_3)^{-1} = &\; 1, \notag \end{aligned}$$ for all $m_1, m_2, m_3 \in M$. Let $$\operatorname{Z} _N^2(M, I) = \Big\{ (g ,f) \quad \Big \vert \quad g,f:M \times M \rightarrow I ~ \textrm{satisfy}~\eqref{sbcocycle1}, \eqref{sbcocycle2}, \eqref{sbcocycle3} ~\textrm{and}~ \textrm{vanish on degenerate tuples} \Big\},$$ and $\operatorname{B} _N^2(M, I)$ be the collection of the pairs $(g, f) \in \operatorname{Z} _N^2(M, I)$ such that there exists a map $\theta:M \to I$ satisfying $$\begin{aligned} g(m_1, m_2) &=& \xi_{m_1\cdot m_2}(\theta(m_1\cdot m_2)^{-1}) ~\zeta_{m_2}((\xi_{m_1}(\theta(m_1))))~ \xi_{m_2}(\theta(m_2)),\\ f(m_1, m_2) &=& \theta(m_1 \circ m_2)^{-1} ~\epsilon_{m_2}(\theta(m_1)) ~ \theta(m_2),\end{aligned}$$ for all $m_1, m_2 \in M$. Then the second cohomology group of $(M, \cdot, \circ_M)$ with coefficients in $I$ corresponding to the given good triplet of actions $(\nu, \mu, \sigma)$ is defined as $\operatorname{H} ^2_N(M, I) = \operatorname{Z} _N^2(M, I)/\operatorname{B} _N^2(M, I)$. Let $\operatorname{Ext} _{(\xi, \zeta, \epsilon)}(M, I)$ denote the set of equivalence classes of those skew left brace extensions of $M$ by $I$ whose corresponding triplet of actions is $(\xi, \zeta, \epsilon)$. It is proven in [@NMY1 Proposition 3.4] that $(\xi, \zeta, \epsilon)$ forms a good triplet of actions for $M$ on $I$ and that the following holds [@NMY1 Theorem A]. **Theorem 34**. *Let $(M, \cdot, \circ)$ be a skew left brace and $(I, +)$ an abelian group viewed as a trivial brace. Then there is a bijection $\Lambda:\operatorname{Ext} _{(\xi, \zeta, \epsilon)}(M, I) \rightarrow \operatorname{H} ^2_N(M, I)$ given by $\Lambda([\mathcal{E}])=[(\tau, \tilde{\tau})]$, where $$\begin{aligned} \tau(m_1, m_2) &= & s(m_1 \cdot m_2)^{-1} s(m_1) s(m_2),\\ \tilde{\tau}(m_1, m_2) &=& s(m_1 \circ m_2)^{-1} \,(s(m_1) \circ s(m_2)), \end{aligned}$$ and $s$ is a set-theoretic section to $\mathcal{E}$.* Suppose that $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ is an abelian extension of relative Rota-Baxter groups and $$\mathcal{E}_{SB}: \quad {\bf 1} \longrightarrow K_{S} \stackrel{i_1}{\longrightarrow} H_{R} \stackrel{\pi_1}{\longrightarrow} A_{T} \longrightarrow {\bf 1}$$ the induced skew left brace extension. Let $(s_H, s_G)$ be a set-theoretic section to $\mathcal{E}$, and $(\xi, \zeta, \epsilon)$ the associated action of the extension $\mathcal{E}_{SB}$. For each $a \in A$ and $k \in K$, it is easy to see that $$\zeta_a= \mu_a$$ and $$\xi_a(k)= \phi_{R(s_H(a))}(k)=\phi_{s_G(T(a))\chi(a)}(k)=\nu_{T(a)}(k)$$ since $\phi_{\chi(a)}$ acts as the identity. 
To determine the map $\epsilon$, we have $$\begin{aligned} \epsilon_a(k) &=& s_H(a)^{\dagger} \circ_R k \circ_R s_H(a)\\ &=& (s_H(a)^{\dagger} \circ_R k)~ \phi_{R(s_H(a))^{-1} R(k)}(s_H(a)), \quad \textrm{by definition of $\circ_R$}\\ &=& \phi_{R(s_H(a))^{-1}} (s_H(a)^{-1})~ \phi_{R(s_H(a))^{-1}}(k) ~\phi_{R(s_H(a))^{-1}} \phi_{ R(k)}(s_H(a)), \end{aligned}$$ by definition of $\circ_R$ and the facts that $s_H(a)^{\dagger}=\phi_{R(s_H(a))^{-1}} (s_H(a)^{-1})$ and $\phi_{R(s_H(a)^{\dagger})}=\phi_{R(s_H(a))^{-1}}$. Now using the facts that $R(s_H(a))^{-1}= \chi(a)^{-1} s_G(T(a))^{-1}$, $\phi_{ R(k)}(s_H(a))=s_H(a)f(R(k), a)$ and [\[sigmaact\]](#sigmaact){reference-type="eqref" reference="sigmaact"}, we have $$\begin{aligned} \epsilon_a(k) &=& \phi_{\chi(a)^{-1} s_G(T(a))^{-1}} (s_H(a))^{-1} \; \nu_{T(a)^{-1}}(k) \; \phi_{\chi(a)^{-1} s_G(T(a))^{-1}}(s_H(a) f(R(k), a))\notag \\ &=& \phi_{s_G(T(a))^{-1} \sigma_{T(a)^{-1}}(\chi(a)^{-1})} (s_H(a))^{-1}\; \nu_{T(a)^{-1}}(k) \; \phi_{ s_G(T(a))^{-1} \sigma_{T(a)^{-1}}(\chi(a)^{-1})}(s_H(a)) \; \nu_{T(a)^{-1}}(f(R(k), a)). \end{aligned}$$ Seting $k_1:= \nu_{T(a)^{-1}}(k)$, $k_2:= \nu_{T(a)^{-1}}(f(R(k), a))$ and $l:= \sigma_{T(a)^{-1}}(\chi(a)^{-1})$, we have $$\begin{aligned} \epsilon_a(k) &=& \phi_{s_G(T(a))^{-1} l} (s_H(a))^{-1} \; k_1 \; \phi_{ s_G(T(a))^{-1} l}(s_H(a)) \; k_2 \\ &=& \big(\phi_{s_G(T(a))^{-1}} \phi_l (s_H(a)) \big)^{-1} k_1 \; \phi_{s_G(T(a))^{-1}} \phi_l (s_H(a)) \; k_2 \\ &=& \big(\phi_{s_G(T(a))^{-1}} (s_H(a)f(l, a))\big)^{-1} k_1 \; \phi_{s_G(T(a))^{-1}} (s_H(a)f(l, a)) \; k_2\\ &=& \big(\phi_{s_G(T(a))^{-1}} (s_H(a))\; \nu_{T(a)^{-1}}(f(l,a))\big)^{-1} k_1 \; \phi_{s_G(T(a))^{-1}} (s_H(a))\; \nu_{T(a)^{-1}}(f(l,a)) k_2. \end{aligned}$$ Using that $s_G(T(a))^{-1}=s_G(T(a)^{-1}) \tau_2(T(a), T(a)^{-1})^{-1}$, we have $$\begin{aligned} \epsilon_a(k) &=& \big(\phi_{s_G(T(a)^{-1}) \tau_2(T(a), T(a)^{-1})^{-1} } (s_H(a))\; \nu_{T(a)^{-1}}(f(l,a))\big)^{-1} k_1 \; \phi_{s_G(T(a)^{-1}) \tau_2(T(a), T(a)^{-1})^{-1} } (s_H(a))\\ && \nu_{T(a)^{-1}}(f(l,a)) k_2\\ &=& \big(s_H (\beta_{T(a)^{-1}}(a))\; \rho(a, T(a)^{-1}) \; f(\tau_2(T(a), T(a)^{-1})^{-1}, a)\; \nu_{T(a)^{-1}}(f(l,a))\big)^{-1}\; k_1 \; s_H (\beta_{T(a)^{-1}}(a))\\ && \rho(a, T(a)^{-1}) \; f(\tau_2(T(a), T(a)^{-1})^{-1}, a)\; \nu_{T(a)^{-1}}(f(l,a))\; k_2\\ &=& \nu_{T(a)^{-1}}(f(l,a))^{-1}\; f(\tau_2(T(a), T(a)^{-1})^{-1}, a)^{-1}\;\rho(a, T(a)^{-1})^{-1}\; s_H (\beta_{T(a)^{-1}}(a))^{-1}\; k_1 \; s_H (\beta_{T(a)^{-1}}(a))\\ && \rho(a, T(a)^{-1})\; f(\tau_2(T(a), T(a)^{-1})^{-1}, a)\; \nu_{T(a)^{-1}}(f(l,a))\; k_2\\ &=& \mu_{\beta_{T(a)^{-1}}(a)}(k_1) \; k_2. \end{aligned}$$ Using the values of $k_1$ and $k_2$, we have $$\begin{aligned} \label{epsilonf} \epsilon_a(k) &=& \mu_{\beta_{T(a)^{-1}}(a)}(\nu_{T(a)^{-1}}(k)) \; \nu_{T(a)^{-1}}(f(R(k), a)). \end{aligned}$$ The preceding computations show that $\epsilon$ is completely determined by $\nu, \mu$ and $f$. Hence, we conclude that the associated action of the induced skew left brace extension is completely determined by the action of the given relative Rota-Baxter group. This together with Proposition [Proposition 31](#eqactprop){reference-type="ref" reference="eqactprop"} gives the following corollary. **Corollary 35**. *Let $\mathcal{A}= (A,B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}= (K,L,\alpha,S )$ a trivial relative Rota-Baxter group such that $K$ and $L$ are abelian groups. 
Let $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ be an extension of relative Rota-Baxter groups with action $(\nu, \mu, \sigma, f)$ and $$\mathcal{E}_{SB}: \quad {\bf 1} \longrightarrow K_{S} \stackrel{i_1}{\longrightarrow} H_{R} \stackrel{\pi_1}{\longrightarrow} A_{T} \longrightarrow {\bf 1}$$ the induced skew left brace extension with associated action $(\xi, \zeta, \epsilon)$. Then there is a map $\Pi: \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K}) \to \operatorname{Ext} _{(\xi, \zeta, \epsilon)}(A_T, K_S)$ given by $\Pi\big([\mathcal{E}] \big)= [\mathcal{E}_{SB}]$.* For bijective relative Rota-Baxter groups, we can work in the reverse direction. **Proposition 36**. *Let $\mathcal{A}=(A, B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}=(K, L, \alpha, S)$ a trivial relative Rota-Baxter group such that $T$ and $S$ are bijections and $K$ and $L$ are abelian. Let $$\label{extension of SLB for action} \mathcal{E}_{SB}: \quad {\bf 1} \longrightarrow K_{S} \stackrel{i_1}{\longrightarrow} E \stackrel{\pi_1}{\longrightarrow} A_{T} \longrightarrow {\bf 1}$$ be an extension of skew left braces with associated action $(\xi, \zeta, \epsilon)$. If $\mathcal{E}_{SB}$ is induced by the extension $$\label{extension of RRB for action} \mathcal{E} : \quad {\bf 1} \longrightarrow (K,L,\alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (E,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups, then the action $(\nu, \mu, \sigma, f)$ associated to $\mathcal{E}$ is given by $$\begin{aligned} \nu_a &=& \xi_{T^{-1}(a)}, \label{RRB2sbactn2}\\ \mu_a &=& \zeta_a, \label{RRB2sbactn1}\\ \sigma_b &=& S^{-1} \; \epsilon_{T^{-1}(b)} \; S, \label{RRB2sbactn3}\\ f(l,a) &=& \zeta_a(S^{-1}(l^{-1})) \; \xi_a(\epsilon(S^{-1}(l))), \label{RRB2sbactn4}\end{aligned}$$ for $a \in A$, $b \in B$ and $l \in L$.* *Proof.* Let $(s_H, s_G)$ be a set-theoretic section to $\mathcal{E}$. Then $\epsilon_a(k) = s_H(a)^{\dagger} \circ_R k \circ_R s_H(a)$ for $a \in A$ and $k \in K$. Applying $R$ on both the sides, and using the facts that $R|_K=S$ and $R: H^{(\circ_R)} \to G$ is a homomorphism, we obtain $$\begin{aligned} S(\epsilon_a(k))&=& R((s_H(a))^{-1} \,S(k)\, R((s_H(a))\\ &=&s_G(T(a))^{-1} \,S(k)\,s_G(T(a)), \quad \textrm{by}~ \eqref{RRB2}~\textrm{and the fact that $L$ is abelian}~\\ &=& \sigma_{T(a)} (S(k)), \quad \textrm{by}~ \eqref{sigmaact}. \end{aligned}$$ Thus, $S\, \epsilon_a= \sigma_{T(a)} \, S$ for all $a \in A$. If $(\xi, \zeta, \epsilon)$ is the associated action of the skew left brace extension [\[extension of SLB for action\]](#extension of SLB for action){reference-type="eqref" reference="extension of SLB for action"} of $A_T$ by $K_S$, then the action $(\nu, \mu, \sigma, f)$ of the corresponding relative Rota-Baxter extension [\[extension of RRB for action\]](#extension of RRB for action){reference-type="eqref" reference="extension of RRB for action"} is given by $$\mu_a = \zeta_a, \quad \nu_a = \xi_{T^{-1}(a)} \quad \textrm{and} \quad \sigma_b = S^{-1} \; \epsilon_{T^{-1}(b)} \; S,$$ for all $a \in A$ and $b \in B$. 
Calculating $f$ separately, we see from [\[epsilonf\]](#epsilonf){reference-type="eqref" reference="epsilonf"} that $$\begin{aligned} f(l,a) &=& \nu_{T(a)}\big( \mu_{\beta_{T(a)^{-1}}(a)}(\nu_{T(a)^{-1}}(S^{-1}(l)))^{-1} \; \epsilon_a(S^{-1}(l))\big) \\ &=& \nu_{T(a)}\big( \mu_{\beta_{T(a)^{-1}}(a)}(\nu_{T(a)^{-1}}(S^{-1}(l)))^{-1}\big) ~\nu_{T(a)}(\epsilon_a(S^{-1}(l)))\notag\\ &=& \mu_{a}(S^{-1}(l^{-1})) \; \nu_{T(a)}(\epsilon_a(S^{-1}(l))), \quad \textrm{using}~ \eqref{mcon} \notag\\ &=& \zeta_a(S^{-1}(l^{-1})) \; \xi_a(\epsilon(S^{-1}(l))),\notag\end{aligned}$$ for $l \in L$ and $a \in A$. ◻ Next, we find a relationship between $\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K})$ and $\operatorname{H} ^2_N(A_R, K_S)$. Define $\tilde{\tau}: A \times A \to K$ by $\tilde{\tau}(a_1, a_2) = s_H(a_1 \circ _T a_2)^{-1}(s_H(a_1) \circ_ R s_H(a_2))$. Then we have $$\begin{aligned} \tilde{\tau}(a_1, a_2) &=& s_H(a_1 \circ _T a_2)^{-1}(s_H(a_1) \circ_ R s_H(a_2))\\ &=& s_H(a_1 \beta_{T(a_1)}(a_2))^{-1} (s_H(a_1) \phi_{R(s_H(a_1))}(s_H(a_2)))\\ &=& s_H(a_1 \beta_{T(a_1)}(a_2))^{-1} ( s_H(a_1) \phi_{s_G(T(a_1)) \chi(a_1)} (s_H(a_2)) )\\ &=& s_H(a_1 \beta_{T(a_1)}(a_2))^{-1} (s_H(a_1) \phi_{s_G(T(a_1))}(s_H(a_2) f(\chi(a_1), a_2 ) ) ), \quad \textrm{by}~ \eqref{fdefn} \\ &=& s_H(a_1 \beta_{T(a_1)}(a_2))^{-1} (s_H(a_1) s_H(\beta_{T(a_1)}(a_2)) \rho(T(a_1), a_2) \nu_{T(a_1)}( f(\chi(a_1), a_2 ))), \quad \textrm{by}~ \eqref{atilde}\\ &=& \tau_1(a_1, \beta_{T(a_1)}(a_2)) \rho(T(a_1), a_2) \nu_{T(a_1)}( f(\chi(a_1), a_2) )), \quad \textrm{by}~ \eqref{mucocycle}.\end{aligned}$$ Let $\Psi: \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \longrightarrow \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K})$ and $\Lambda:\operatorname{Ext} _{(\xi, \zeta, \epsilon)}(M, I) \longrightarrow \operatorname{H} ^2_N(M, I)$ be the bijections defined in theorems [Theorem 32](#ext and cohom bijection){reference-type="ref" reference="ext and cohom bijection"} and [Theorem 34](#gbij-thm sb){reference-type="ref" reference="gbij-thm sb"}, respectively. If $\Pi: \operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K}) \longrightarrow \operatorname{Ext} _{(\xi, \zeta, \epsilon)}(A_T, K_S)$ is the map defined in Corollary [Corollary 35](#RRB2sbext){reference-type="ref" reference="RRB2sbext"}, then we have the map $\Lambda \Pi \Psi: \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \to \operatorname{H} ^2_N(A_T, K_S)$. In fact, the map $\Lambda \Pi \Psi$ is explicitly given by $\Lambda \Pi \Psi\big( [(\tau_1, \tau_2, \rho, \chi)]\big)= [\tau_1, \tau^{(\beta, T)}_1\rho^{T}\chi^{(T, f)} ]$, where $$\begin{aligned} \tau^{(\beta, T)}_1(a_1, a_2) &=& \tau_1(a_1, \beta_{T(a_1)}(a_2)),\\ \rho^{T}(a_1, a_2) &=& \rho(T(a_1), a_2),\\ \chi^{(T, f)}(a_1, a_2) & =& \nu_{T(a_1)}( f(\chi(a_1), a_2)),\end{aligned}$$ for all $a_1, a_2 \in A$. In fact, we have the following result. **Proposition 37**. *Let $\mathcal{A}= (A,B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}= (K,L,\alpha,S )$ a module over $\mathcal{A}$ with respect to the action $(\nu, \mu,\sigma, f)$. Then the map $\Lambda \Pi \Psi: \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \to \operatorname{H} ^2_N(A_R, K_S)$ is a homomorphism of groups.* *Proof.* Let $[(\tau_1, \tau_2, \rho, \chi)]$ and $[(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime)]$ be elements in $\operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K})$. 
Then $$\begin{aligned} \Lambda \Pi \Psi( [(\tau_1, \tau_2, \rho, \chi)] \, [(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime)] ) &=& \Lambda \Pi \Psi( [(\tau_1\tau^\prime_1, \tau_2\tau^\prime_2, \rho \rho^\prime, \chi\chi^\prime)])\\ &=& [(\tau_1 \tau^\prime_1, (\tau_1 \tau^\prime_1)^{(\beta, T)} (\rho_1\rho^\prime_1)^T (\chi\chi^\prime)^{(T,f)})]. \end{aligned}$$ It is easy to see that $$\begin{aligned} (\tau_1\tau^\prime_1)^{(\beta, T)}&=&\tau^{(\beta, T)}_1 (\tau^\prime_1)^{(\beta, T)},\\ (\rho_1\rho^\prime_1)^T &=& \rho^T_1 (\rho^\prime_1)^T,\\ (\chi\chi^\prime)^{(T,f)} & = & \chi^{(T, f)} (\chi^\prime)^{(T, f)}, \quad \textrm{by linearity of $f$ in the first coordinate.} \end{aligned}$$ Hence, we have $$\begin{aligned} \Lambda \Pi \Psi( [(\tau_1, \tau_2, \rho, \chi)] [(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime)] ) &=&[(\tau_1,\tau^{(\beta, T)}_1, \rho^T_1, \chi^{(T, f)})] \,[(\tau^\prime_1, (\tau^\prime_1)^{(\beta, T)}, (\rho^\prime_1)^T, (\chi^\prime)^{(T, f)}) ]\\ & =& \Lambda \Pi \Psi( [(\tau_1, \tau_2, \rho, \chi)]) \, \Lambda \Pi \Psi([(\tau^\prime_1, \tau^\prime_2, \rho^\prime, \chi^\prime)] ), \end{aligned}$$ which shows that $\Lambda \Pi \Psi$ is a homomorphism. ◻ Let $\mathcal{A} = (A, B, \beta, T)$ be a bijective relative Rota-Baxter group. Then we have an isomorphism $$(\operatorname{id}_A, T): (A^{(\cdot)}, A^{(\circ_T)}, \beta \, T, \operatorname{id}_A) \stackrel{\cong}{\longrightarrow} (A,B, \beta, T).$$ Thus, without loss of generality, we consider a bijective relative Rota-Baxter group to be of the form $(A^{(\cdot)}, A^{(\circ_T)}, \beta \, T, \operatorname{id}_A)$. **Proposition 38**. *Let $\mathcal{A}=(A, B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}=(K, L, \alpha, S)$ a trivial relative Rota-Baxter group such that $T$ and $S$ are bijections and $K$ and $L$ are abelian. Then the map $\Pi$ defined in Corollary [Corollary 35](#RRB2sbext){reference-type="ref" reference="RRB2sbext"} is a bijection.* *Proof.* Since $\mathcal{A}$ and $\mathcal{K}$ are bijective relative Rota-Baxter groups, in view of the preceding remark, we can take $\mathcal{A} =(A^{(\cdot)}, A^{(\circ_T)}, \beta \, T, \operatorname{id}_A)$ and $\mathcal{K} = (K, K, \alpha, \operatorname{id}_K)$. Further, the relationship between the actions $(\nu, \mu, \sigma, f)$ and $(\xi, \zeta, \epsilon)$ is given by [\[RRB2sbactn1\]](#RRB2sbactn1){reference-type="eqref" reference="RRB2sbactn1"},[\[RRB2sbactn2\]](#RRB2sbactn2){reference-type="eqref" reference="RRB2sbactn2"}, [\[RRB2sbactn3\]](#RRB2sbactn3){reference-type="eqref" reference="RRB2sbactn3"} and [\[RRB2sbactn4\]](#RRB2sbactn4){reference-type="eqref" reference="RRB2sbactn4"}. We first show that the map $\Pi$ is injective. Let $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,K,\alpha,\operatorname{id}_K) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A^{(\cdot)},A^{(\circ_T)}, \beta\;T, \operatorname{id}_A) \longrightarrow {\bf 1},$$ $$\mathcal{E}^\prime : \quad {\bf 1} \longrightarrow (K,K,\alpha,\operatorname{id}_K) \stackrel{(i^\prime_1, i^\prime_2)}{\longrightarrow} (H^\prime,G^\prime, \phi^\prime, R^\prime ) \stackrel{(\pi^\prime_1, \pi^\prime_2)}{\longrightarrow}(A^{(\cdot)},A^{(\circ_T)}, \beta\;T, \operatorname{id}_A) \longrightarrow {\bf 1}$$ be two extensions of $\mathcal{K}$ by $\mathcal{A}$ such that the skew left brace extensions $\mathcal{E}_{SB}$ and $\mathcal{E}^\prime_{SB}$ are equivalent. 
Then we have an isomorphism $\eta: H_R \longrightarrow H^\prime_{R^\prime}$ of skew left braces such that the following diagram commutes $$\begin{CD} \mathcal{E}_{SB}:\quad {\bf 1} @>>> K_S @>{i_1}>>H_R @>{{\pi_1} }>> A_T @>>> {\bf 1} \\ && @V{\operatorname{id}_{K}}VV @V{\eta}VV @VV{{\operatorname{id}_{A}}}V \\ \mathcal{E}^\prime_{SB}: \quad {\bf 1} @>>> K_S @>{i^\prime_1}>> H^\prime_{R^\prime} @>{{\pi_1^\prime} }>> A_T @>>> {\bf 1} . \end{CD}$$ Since $\eta$ is an isomorphism of skew left braces, we have $\eta \; \phi_{R(h)}=\phi^\prime_{R^\prime (\eta(h))} \; \eta$ for all $h \in H$. In view of Remark [Remark 1](#bijext){reference-type="ref" reference="bijext"}, both $R$ and $R^\prime$ are bijections. Thus, we have an isomorphism $$(\eta, R^\prime \eta R^{-1} ) : (H,G, \phi, R) \longrightarrow (H^\prime,G^\prime, \phi^\prime, R^\prime )$$ of relative Rota-Baxter groups. Further, the following diagram commutes $$\begin{aligned} \begin{CD}\label{eqact1} {\bf 1} @>>> (K,K,\alpha,\operatorname{id}_K) @>(i_1, i_2)>> (H,G, \phi, R) @>{{(\pi_1, \pi_2)} }>> (A^{(\cdot)},A^{(\circ_T)}, \beta \; T, \operatorname{id}_A) @>>> {\bf 1}\\ && @V{(\operatorname{id}_K, ~\operatorname{id}_K)} VV@V{(\eta, R^\prime \eta R^{-1})} VV @V{(\operatorname{id}_A, ~\operatorname{id}_A)}VV \\ {\bf 1} @>>> (K,K,\alpha,\operatorname{id}_K) @>(i^\prime_1, i^\prime_2)>>(H^\prime,G^\prime, \phi^\prime, R^\prime) @>{(\pi^\prime_1, \pi^\prime_2) }>> (A^{(\cdot)},A^{(\circ_T)}, \beta \; T, \operatorname{id}_A) @>>> {\bf 1}, \end{CD} \end{aligned}$$ which implies that $\mathcal{E}$ and $\mathcal{E}'$ are equivalent. Hence, the map $\Pi$ is injective. To see that $\Pi$ is surjective, let $$\mathcal{E}: \quad {\bf 1} \longrightarrow K_S \stackrel{i}{\longrightarrow} H \stackrel{\pi}{\longrightarrow} A_T \longrightarrow {\bf 1}$$ be an extension of skew left braces representing an element in $\operatorname{Ext} _{(\xi, \zeta, \epsilon)}(A_T, K_S)$. Consider the extension $$\mathcal{E}_{RRB} : \quad {\bf 1} \longrightarrow (K, K, \alpha , \operatorname{id}_K) \stackrel{(i, i)}{\longrightarrow} (H^{(\cdot)}, H^{(\circ)}, \lambda^H, \operatorname{id}_H) \stackrel{(\pi, \pi)}{\longrightarrow} (A^{(\cdot)}, A^{(\circ_T)}, \beta \, T, \operatorname{id}_A) \longrightarrow {\bf 1}$$ of relative Rota-Baxter groups. In view of Proposition [Proposition 36](#actnprop){reference-type="ref" reference="actnprop"}, the action corresponding to the extension $\mathcal{E}_{RRB}$ is $(\nu, \mu, \sigma, f)$, and hence $\mathcal{E}_{RRB}$ represents an element in $\operatorname{Ext} _{(\nu, \mu, \sigma, f)}(\mathcal{A}, \mathcal{K})$. Since the skew left brace extension induced by $\mathcal{E}_{RRB}$ is identical to $\mathcal{E}$, it follows that the map $\Pi$ is surjective. ◻ **Corollary 39**. *Let $\mathcal{A}= (A,B, \beta, T)$ be a relative Rota-Baxter group and $\mathcal{K}= (K,L,\alpha,S )$ a module over $\mathcal{A}$ with respect to the action $(\nu, \mu,\sigma, f)$. If $T$ and $S$ are bijections and $K$ and $L$ are abelian, then $\Lambda \Pi \Psi: \operatorname{H} ^2_{RRB}(\mathcal{A}, \mathcal{K}) \to \operatorname{H} ^2_N(A_R, K_S)$ is an isomorphism of groups.* **Remark 1**. The map $\Pi$ defined in Corollary [Corollary 35](#RRB2sbext){reference-type="ref" reference="RRB2sbext"} need not be surjective in general. For example, let $\mathcal{K} = \mathcal{A} = (\mathbb{Z}_p, 1, \alpha, T)$ be two relative Rota-Baxter groups, where $p$ is a prime. 
Then every extension of $\mathcal{K}$ by $\mathcal{A}$ has the form $$\mathcal{E}: \quad {\bf 1} \longrightarrow (\mathbb{Z}_p, 1, \alpha, T) \longrightarrow (E, 1, \phi, R) \longrightarrow (\mathbb{Z}_p, 1, \alpha, T) \rightarrow {\bf 1}.$$ Hence, the skew left brace extension induced by $\mathcal{E}$ is $$\mathcal{E}_{SB}: \quad {\bf 1} \longrightarrow \mathbb{Z}_p \longrightarrow E \longrightarrow \mathbb{Z}_p \longrightarrow {\bf 1},$$ with $E$ being a trivial brace. It is easy to see that the associated action of any extension of $\mathcal{K}$ by $\mathcal{A}$ is trivial. Therefore, the associated action of the skew left brace extension induced by an extension of $\mathcal{K}$ by $\mathcal{A}$ is also trivial. Let $(H, \cdot, \circ)$ be the skew left brace whose additive group is the cyclic group $\mathbb{Z}_{p^2}$ and whose multiplicative group is given by $$x_1 \circ x_2 = x_1 + x_2 + p x_1 x_2$$ for all $x_1, x_2 \in H$. We know from [@DB18 p.4] that the annihilator of $H$ is of order $p$. Thus, we have the extension $$\mathcal{E}_{1}: \quad {\bf 1} \longrightarrow \mathbb{Z}_p \longrightarrow H \longrightarrow \mathbb{Z}_p \longrightarrow {\bf 1}$$ with the trivial associated action. Since $H$ is a non-trivial left brace, it follows that $[\mathcal{E}_1]$ cannot belong to the image of $\Pi$. Thus, $\Pi$ is not surjective, and consequently $\Lambda \Pi \Psi$ is not surjective in general. # Central and split extensions of relative Rota-Baxter groups {#sec central and split RRB} Let $\mathcal{A} = (A, B, \beta, T)$ be an arbitrary relative Rota-Baxter group and $\mathcal{K} = (K, L, \alpha, S)$ a trivial relative Rota-Baxter group, where $K$ and $L$ are abelian groups. Let $\operatorname{CExt} (\mathcal{A}, \mathcal{K})$ denote the set of equivalence classes of all central extensions of $\mathcal{A}$ by $\mathcal{K}$. Consider an extension in $\operatorname{CExt} (\mathcal{A}, \mathcal{K})$ and let $(\nu, \mu, \sigma, f)$ be its associated action. It is a direct check that $\mu$, $\nu$ and $\sigma$ are trivial homomorphisms. Furthermore, $f(l, a) = 1$ for all $l \in L$ and $a \in A$. In other words, the associated action is trivial for a central extension. For central extensions, the coboundary maps can be simplified.
More precisely, we define $\partial_{CRRB}^1: C^1_{RRB} \longrightarrow C^{2}_{RRB}$ by $\partial_{CRRB}^1(\theta_1, \theta_2)=(\partial^1(\theta_1), \partial^1(\theta_2), \lambda_1, \lambda_2 )$, where $\lambda_1$ and $\lambda_2$ are given by $$\begin{aligned} \lambda_1(a,b) &=& \theta_1(a) \,(\theta_1(\beta_b(a)))^{-1},\\ \lambda_2(a) &=& S(\theta_1(a)) \,(\theta_2(T(a)))^{-1}.\end{aligned}$$ Similarly, we define $\partial_{CRRB}^2: C^2_{RRB} \longrightarrow C^{3}_{RRB}$ by $\partial^2_{RRB}(\tau_1, \tau_2, \rho, \chi)=(\partial^2(\tau_1), \partial^2(\tau_2), \gamma_1, \gamma_2, \gamma_3)$, where $$\begin{aligned} \gamma_1(a,b_1, b_2) &=& \rho(a, b_1 b_2) \, (\rho(\beta_{b_2}(a), b_1))^{-1} \,(\rho(a,b_2))^{-1},\\ \gamma_2(a_1, a_2,b) &=& \rho(a_1 a_2, b) \,(\rho(a_1,b))^{-1}(\rho(a_2, b))^{-1} \tau_1(a_1, a_2) \, (\tau_1(\beta_{b}(a_1), \beta_{b}(a_2)))^{-1}, \\ \gamma_3(a_1, a_2) &=& S\big(\rho(a_2, T(a_1)) \,\tau_1(a_1,\beta_{T(a_1)}(a_2)) \big) \, (\tau_2(T(a_1), T(a_2)))^{-1} \,(\delta^1(\chi)(a_1, a_2))^{-1}.\end{aligned}$$ By Lemma [Lemma 28](#rrb coboundary condition){reference-type="ref" reference="rrb coboundary condition"}, we have $\operatorname{im} (\partial^1_{CRRB}) \subseteq \ker(\partial^2_{CRRB})$, and hence we can define $$\operatorname{H} ^2_{CRRB}(\mathcal{A}, \mathcal{K})=\operatorname{ker} (\partial^2_{CRRB})/ \operatorname{im} (\partial^1_{CRRB}).$$ The proof of Theorem [Theorem 32](#ext and cohom bijection){reference-type="ref" reference="ext and cohom bijection"} yields the following result. **Theorem 40**. *There is a bijection between $\operatorname{CExt} (\mathcal{A}, \mathcal{K})$ and $\operatorname{H} ^2_{CRRB}(\mathcal{A}, \mathcal{K})$.* Next, we classify a specific class of extensions of relative Rota-Baxter groups, which may not be abelian in general. **Definition 41**. *Suppose that $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ is an extension of relative Rota-Baxter groups. We say that $\mathcal{E}$ is a split-extension if there exists a set-theoretic section $(s_H, s_G)$ to $\mathcal{E}$ which is also a morphism of relative Rota-Baxter groups.* **Remark 1**. If $\mathcal{E}$ is a split extension of relative Rota-Baxter groups, then it follows that the extensions induced for both groups and skew left braces also split within their respective categories. Further, the classification of split extensions of skew left braces can be found in [@N1]. Suppose that $$\mathcal{E} : \quad {\bf 1} \longrightarrow (K,L, \alpha,S ) \stackrel{(i_1, i_2)}{\longrightarrow} (H,G, \phi, R) \stackrel{(\pi_1, \pi_2)}{\longrightarrow} (A,B, \beta, T) \longrightarrow {\bf 1}$$ is a split-extension of relative Rota-Baxter groups. Then existence of a morphism $(s_H, s_G): (A, B, \beta, T) \rightarrow (H, G, \phi, R)$ implies that $s_G \; T = R \; s_H$ and $s_H \; \beta_b = \phi_{s_G(b)} \; s_H$ for all $b \in B$. This means $\rho$ and $\chi$ defined in [\[atilde\]](#atilde){reference-type="eqref" reference="atilde"} and [\[RRB2\]](#RRB2){reference-type="eqref" reference="RRB2"}, respectively, are trivial maps. 
Moreover, by [\[feqn\]](#feqn){reference-type="eqref" reference="feqn"} and [\[relativecon\]](#relativecon){reference-type="eqref" reference="relativecon"}, the maps $\phi$ and $R$ are given by $$\begin{aligned} \phi_{(s_G(b)l)}(s_H(a)k) &=&s_H(\beta_b(a)) ~\nu_b(f(l,a)\alpha_l(k)),\\ R(s_H(a)k) &=& s_H(T(a)) ~S(\nu^{-1}_{T(a)}(k)),\end{aligned}$$ for all $a \in A$, $b \in B$, $k \in K$ and $l \in L$. Recall that the group operations in the semi-direct products $A \times_{\mu} K$ and $B \ltimes_{\sigma} L$ are given by $$\begin{aligned} (a_1,k_1) (a_2, k_2) &=&(a_1 a_2, ~\mu_{a_2}(k_1)k_2),\\ (b_1,l_1) (b_2, l_2) &= & (b_1 b_2, ~\sigma_{b_2}(l_1)l_2),\end{aligned}$$ for all $a_1, a_2 \in A$, $b_1, b_2 \in B$, $k_1, k_2 \in K$ and $l_1, l_2 \in L$. Routine calculations yield the following proposition. **Proposition 42**. *Let $(K,L, \alpha,S )$ and $(A,B, \beta, T)$ be two relative Rota-Baxter groups. Suppose that there exists a map $f: L \times A \rightarrow K$, a homomorphism $\nu: B \rightarrow \operatorname{Aut} (K)$ and anti-homomorphisms $\mu: A \rightarrow \operatorname{Aut} (K)$ and $\sigma :B \rightarrow \operatorname{Aut} (L)$. Then the map $\phi: B \ltimes_{\sigma} L \rightarrow \operatorname{Aut} (A \times_{\mu} K)$ given by $$\phi_{(b,l)}(a,k)=\big(\beta_b(a),~ \nu_b(f(l,a)\alpha_l(k))\big),$$ for $(b, l) \in B \ltimes_{\sigma} L$ and $(a,k) \in A \times_{\mu} K$, is a homomorphism if and only if the following conditions are satisfied: $$\begin{aligned} \nu_{b_2}\big(f(\sigma_{b_2}(l_1) l_2,a_1) ~\alpha_{\sigma_{b_2}(l_1) l_2}(k_1)\big)& =& f(l_1, \beta_{b_2}(a_1)) ~\alpha_{l_2}\big(\nu_{b_2}(f(l_2, a_1) \alpha_{l_2}(k_1))\big),\\ \nu_{b_1}\big(f(l_1, a_1 a_2) ~\alpha_{l_1}(\mu_{a_2}(k_1) k_2) \big) &=& \mu_{\beta_{b_1}(a_2)}\big(\nu_{b_1}(f(l_1,a_1) \alpha_{l_1}(k_1))\big) ~\nu_{b_1}(f(l_1,a_2) \alpha_{l_1}(k_2)), \end{aligned}$$ for all $a_1, a_2 \in A$, $b_1, b_2 \in B$, $k_1, k_2 \in K$ and $l_1, l_2 \in L$.* *Furthermore, if $\phi$ is a group homomorphism, then the map $R: A \times_{\mu} K \rightarrow B \ltimes_{\sigma} L$ given by $$R(a, k)=(T(a), ~S(\nu^{-1}_{T(a)}(k))),$$ for $(a,k) \in A \times_{\mu} K$, is a relative Rota-Baxter operator if and only if $$\begin{aligned} && \sigma_{T(a_2)}(S(\nu^{-1}_{T(a_1)}(k_1))) \, S(\nu^{-1}_{T(a_2)}(k_2))\\ &=& S\big(\nu^{-1}_{T(a_1) T(a_2)}\big(\mu_{\beta_{T(a_1)}(a_1)}(k_1) ~\nu_{T(a_1)}(f(S(\nu^{-1}_{T(a_1)}(k_1)), a_2) ~\alpha_{S(\nu^{-1}_{T(a_1)}(k_1))}(k_2)) \big) \big), \end{aligned}$$ for all $a_1, a_2 \in A$ and $k_1, k_2 \in K$.* The induced relative Rota-Baxter group $(A \times_{\mu} K, B \ltimes_{\sigma} L , \phi, R)$ obtained in the preceding proposition is called the *semi-direct product* of $(A,B, \beta, T)$ by $(K,L, \alpha,S )$ with respect to the action $(\nu, \mu, \sigma, f)$. The direct product of relative Rota-Baxter groups is a particular case of the semi-direct product when all the maps $\nu, \mu,\sigma, f$ are trivial. **Theorem 43**. *The semi-direct product of relative Rota-Baxter groups $(A, B, \beta, T)$ by $(K, L, \alpha, S)$ gives rise to a split-extension of $(A, B, \beta, T)$ by $(K, L, \alpha, S)$. Furthermore, any split-extension of relative Rota-Baxter groups is equivalent to an extension defined by the semi-direct product of relative Rota-Baxter groups.* **Remark 1**. Note that abelian split extensions of relative Rota-Baxter groups correspond to the zero element of the second cohomology.
Further, the skew left brace induced from the semi-direct product of $(A, B, \beta, T)$ by $(K, L, \alpha, S)$ is the semi-direct product of skew left braces $A_{T}$ by $K_{S}$ as defined in [@N1]. **Acknowledgement 1**. *Pragya Belwal thanks UGC for the PhD research fellowship and Nishant Rathee thanks IISER Mohali for the institute post doctoral fellowship. Mahender Singh is supported by the SwarnaJayanti Fellowship grants DST/SJF/MSA-02/2018-19 and SB/SJF/2019-20.* # Declaration The authors declare that they have no conflict of interest. 99 D.  Bachiller, *Extensions, matched products, and simple braces*, J. Pure Appl. Algebra 222 (2018), 1670--1691. V. Bardakov and V. Gubarev, *Rota-Baxter groups, skew left braces, and the Yang-Baxter equation*, J. Algebra 587 (2022), 328--351. V. Bardakov and V. Gubarev, *Rota--Baxter operators on groups*, Proc. Indian Acad. Sci. Math. Sci. 133 (2023), no. 1, Paper No. 4, 29 pp. C. Bai, L.  Guo, Y. Sheng and R. Tang, *Post-groups, (Lie-)Butcher groups and the Yang--Baxter equation*, Math. Ann. (2023), https://doi.org/10.1007/s00208-023-02592-z. K. S. Brown, *Cohomology of groups*, Graduate Texts in Mathematics, 87. Springer--Verlag, New York-Berlin, 1982. x+306 pp. F. Catino, I.  Colazzo and P.  Stefanelli, *Skew left braces with non-trivial annihilator*, J. Algebra Appl. 18 (2019), No.2, 1950033. F. Catino, M. Mazzotta, M. Miccoli and P.  Stefanelli, *Set-theoretic solutions of the Yang--Baxter equation associated to weak braces*, Semigroup Forum 104 (2022), 228--255. F.  Catino, M. Mazzotta and P. Stefanelli, *Rota-Baxter operators on Clifford semigroups and the Yang-Baxter equation*, J. Algebra 622 (2023), 587--613. A.  Caranti and L. Stefanello, *Skew braces from Rota--Baxter operators: a cohomological characterisation and some examples*, Ann. Mat. Pura Appl. 202 (2023), 1--13. A. Das and N.  Rathee, *Extensions and automorphisms of Rota-Baxter groups*, (2022), arXiv:2212.06429. V. G. Drinfel'd, *On some unsolved problems in quantum group theory*. Quantum groups (Leningrad, 1990), 1--8, Lecture Notes in Math., 1510, Springer, Berlin, 1992. L. Guarnieri and L.  Vendramin, *Skew braces and the Yang-Baxter equation*, Math. Comp. 86 (2017), 2519--2534. L. Guo, H. Lang and Y.  Sheng, *Integration and geometrization of Rota-Baxter Lie algebras*, Adv. Math. 387 (2021), 107834, 34 pp. J.  Jiang, Y.  Sheng and C.  Zhu, *Lie theory and cohomology of relative Rota-Baxter operators*, (2021), arXiv:2108.02627v3. V. Lebed and L. Vendramin, *Cohomology and extensions of braces*, Pacific J. Math. 284 (2016), 191--212. N.  Rathee, *Extensions and Well's type exact sequence of skew braces*, J. Algebra Appl., doi.org/10.1142/S0219498824500099. N. Rathee and M. Singh, *Relative Rota-Baxter groups and skew left braces*, (2023), arXiv:2305.00922. N.  Rathee and M. K.  Yadav, *Cohomology, extensions and automorphisms of skew braces*, J. Pure Appl. Algebra 228 (2024), no. 2, Paper No. 107462.
arxiv_math
{ "id": "2309.00692", "title": "Cohomology and Extensions of Relative Rota-Baxter groups", "authors": "Pragya Belwal, Nishant Rathee and Mahender Singh", "categories": "math.QA math.GR math.RA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We introduce a Laplacian separation principle for the eikonal equation on Markov chains. As an application, we prove an isoperimetric concentration inequality for Markov chains with non-negative Ollivier curvature. That is, every single point from the concentration profile yields an estimate for every point of the isoperimetric profile. Applying this to exponential and Gaussian concentration, we obtain affirmative answers to two open questions by Erbar and Fathi. Moreover, we prove that the modified log-Sobolev constant is at least the minimal Ollivier Ricci curvature, assuming non-negative Ollivier sectional curvature, i.e., the Ollivier Ricci curvature when replacing the $\ell_1$ by the $\ell_\infty$ Wasserstein distance. This settles a recent open problem by Pedrotti. We give a simple example showing that non-negative Ollivier sectional curvature is necessary to obtain a modified log-Sobolev inequality via a positive Ollivier Ricci bound. This provides a counterexample to a conjecture by Peres and Tetali. author: - Florentin Münch bibliography: - Bibliography.bib title: Ollivier curvature, Isoperimetry, concentration, and Log-Sobolev inequality --- # Introduction In this article, we answer a variety of open questions regarding isoperimetric, functional and concentration inequalities on Markov chains with non-negative Ricci curvature: - Peres, Tetali: Modified Log-Sobolev under positive Ollivier curvature - Ollivier, Problem M: Log Sobolev under some curvature bound - Ollivier, Problem P: Sectional curvature using $\ell_\infty$ Wasserstein distance - Pedrotti: Modified Log-Sobolev using $\ell_\infty$ Wasserstein curvature - Erbar, Fathi: Spectral gap via exponential concentration - Erbar, Fathi: Modified log-Sobolev via Gaussian concentration For the first question, we give a counterexample, and all other questions are answered affirmatively. Discrete Ricci curvature has sparked remarkable interest in numerous mathematical communities: - Markov chain mixing, functional inequalities and cutoff [@salez2023cutoff; @fathi2015curvature; @eldan2017transport; @caputo2009convex; @riekert2022convergence] - Multi-particle systems such as Glauber dynamics [@blanca2022mixing; @erbar2017ricci; @paulin2016mixing; @villemonais2020lower; @holmes2014curvature] - Geometric group theory [@berestycki2014cutoff; @siconolfi2020coxeter; @keisling2021medium; @bar2022conjugation; @taback2023conjugation] - Discrete topology [@kempton2021homology; @saucan2017network; @ni2015ricci; @knill2014coloring; @forman2003bochner] - Quantum relativity [@loll2019quantum; @tee2021canonical; @tee2021enhanced; @gorard2020some; @loll2022quantum] - Machine learning and neural networks [@anand2022topological; @wee2021ollivier; @topping2021understanding; @li2022curvature; @ye2019curvature] - Data analysis [@weber2016forman; @sia2019ollivier; @sandhu2015analytical; @samal2018comparative; @boguna2021network] Indeed, various non-equivalent notions of discrete Ricci curvature have been introduced such as Forman curvature for cell complexes [@forman2003bochner] based on a Bochner-Weizenböck formula, Coarse Ricci curvature known as Ollivier curvature based on optimal transport [@ollivier2009ricci; @lin2011ricci], Bakry Emery curvature based on a Bochner formula [@schmuckenschlager1998curvature; @lin2010ricci], and entropic curvature based on convexity of the entropy [@erbar2012ricci; @mielke2013geodesic].
None of these curvatures coincide except Forman and Ollivier curvature when choosing the 2-cells to maximize the Forman curvature, see e.g. [@jost2019Liouville] for examples of different Ollivier and Bakry Emery curvature. Among all curvature notions, Ollivier curvature seems the most popular due to its simplicity and its elegant geometric interpretation. Specifically, the Ollivier curvature is bounded from below by $K \in \mathbb{R}$ iff $$\operatorname{Lip}(P_t f) \leq e^{-Kt} \operatorname{Lip}(f),$$ see [@munch2017ollivier] where $P_t = e^{\Delta t}$ is the heat semigroup. One remarkable implicit feature of the Lipschitz decay is that the underlying distance for the Lipschitz constant can be chosen independently of the Markov chain. Indeed, this observation seems to give rise to a completely unexplored branch of Riemannian geometry: Given a (potentially weighted) manifold $M$ with two Riemannian metrics $g_1,g_2$. - Define Laplace Beltrami with respect to $g_1$ (potentially with weight) - Define gradients and Lipschitz constant with respect to $g_2$ The Ricci curvature bounds can now be defined either via Lipschitz decay, or via Bakry Emery calculus which gives rise to a whole universe of interesting open questions, e.g., - How does the curvature localize, i.e., how to get pointwise curvature quantities from the global bounds? - Under which conditions do the Lipschitz decay curvature and Bakry Emery curvature coincide? - Which classical results regarding Ricci curvature can be generalized to the case of having two different metrics? - What are natural definitions of scalar, sectional and Riemannian curvature in this context? - Are there interesting Ricci flows, deforming the metrics differently? We now discuss the known relations between Ricci curvature, isoperimetry and log-Sobolev inequalities. In smooth and discrete settings, there is hierarchy of properties with concentration inequalities and isoperimetric inequalities at its ends. Specifically, $$\begin{aligned} \mbox{Isoperimetric inequalities} &\Rightarrow \mbox{Functional inequalities} \\&\Rightarrow \mbox{Transport entropy inequalites}\\& \Rightarrow \mbox{Concentration inequalities},\end{aligned}$$ see e.g. [@milman2012properties] for the smooth case and [@fathi2015curvature] for the discrete case. In the seminal works of Milman [@milman2009role; @milman2010Isoperimetric], it is shown that in the smooth case, the hierarchy can be reversed in case of non-negative Ricci curvature. In particular, exponential concentration of measure implies a lower bound on the Cheeger constant, and Gaussian concentration implies Gaussian isoperimetry. In [@erbar2018poincare Conjecture 6.9 and 6.10], Erbar and Fathi ask whether the same implications hold true in the discrete setting. In the smooth setting, the log Sobolev constant can be lower bounded by the minimal Ricci curvature [@cavalletti2017sharp]. This bound is sharp and equality is attained iff the space splits out a 1-dimensional Gaussian space [@ohta2020equality]. In the study of discrete Markov chains, log-Sobolev inequalities are of fundamental importance as they provide a crucial tool to investigate mixing times and cutoff behavior [@diaconis1996logarithmic]. It was conjectured by Tetali and Peres that in the discrete setting, lower Ollivier curvature bounds imply modified log-Sobolev inequalities [@eldan2017transport Conjecture 3.1], [@fathi2019quelques Conjecture 4], [@blanca2022mixing Remark 1.1], [@pedrotti2023contractive Conjecture 5.25]. 
There is a wide range of literature attempting to solve the conjecture: In [@eldan2017transport; @fathi2015curvature], it was tried to tackle the conjecture via transport inequalities. In [@blanca2022mixing], log-Sobolev inequalities under positive curvature bounds have been proven for Glauber dynamics. In [@johnson2015discrete], a log Sobolev inequality was proven for birth death chains with constant birth rate, assuming a Bakry Emery curvature bound. Recently, Pedrotti conjectured that positive Ollivier curvature implies a modified log-Sobolev inequality if additionally assuming non-negative $\ell_\infty$ Wasserstein curvature (i.e., the Ollivier curvature when replacing the $\ell_1$ by the $\ell_\infty$ Wasserstein curvature), see the discussion after [@pedrotti2023contractive Conjecture 5.25]. In [@ollivier2010survey Problem P], Ollivier asks whether the $\ell_\infty$ Wasserstein curvature has any interesting applications. As (unmodified) log-Sobolev inequalities seemed completely out of reach, Ollivier asked if one could get at least exponential decay of the entropy under the heat flow [@ollivier2010survey Problem M]. While it was completely open in case of Ollivier curvature bounds, modified log-Sobolev inequalities have been proven under entropic curvature bounds [@erbar2018poincare; @erbar2012ricci], and under the exponential curvature dimension condition $CDE'$ [@yong2017log]. However, the entropic curvature and the $CDE'$ curvature condition are highly non-linear as the logarithm is appearing. In particular, they are practically not computable for larger networks (i.e., networks with more than two vertices), although useful bounds have been given for important classes of Markov chains. It turns out that under Ollivier or Bakry Emery curvature conditions, estimates on graphs involving the logarithm are notoriously hard to prove. This has been impressively demonstrated in the papers on Li-Yau inequalities on graphs, which all but one have to assume a non-linear modification of the Bakry Emery condition [@bauer2015li; @horn2014volume; @munch2014li; @dier2017discrete; @gong2019li; @qian2017remarks; @weber2023li; @lippner2016li; @krass2022li; @horn2019spacial]. The only exception is [@munch2019li] in which a Li-Yau inequality under the standard Bakry Emery condition was proven for a modified heat equation so that the appearance of the logarithm was sneakily prevented. While log-Sobolev inequalities are hard to obtain from Ollivier curvature bounds, the weaker Poincare inequality is well known to be true. Specifically, if $K$ is a lower Ollivier curvature bound, then $$\lambda_1 \geq K,$$ see [@ollivier2009ricci], and in case of non-negative Ollivier curvature, $$\lambda_1 \geq \frac{\log(2)}{\operatorname{diam}^2},$$ where $\lambda$ is the smallest positive eigenvalue of the graph Laplacian, see [@munch2019non]. A similar estimate has been shown in [@lin2010ricci] in case of non-negative Bakry Emery curvature. The leading question for this article was, whether $\lambda_1$ in the above estimates can be replaced by the (modified) log-Sobolev constant. It turns out that the methods of the proofs also yield surprisingly strong isoperimetric inequalities. 
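As a quick numerical illustration of the two spectral gap bounds just quoted, the following sketch (not part of the original text; it assumes numpy and uses the hypothetical helper name `lazy_hypercube_walk`) computes $\lambda_1$ for the $1/2$-lazy simple random walk on the hypercube $\{0,1\}^n$, whose Ollivier curvature is known to equal $1/n$, and compares it with the curvature bound and with $\log(2)/\operatorname{diam}^2$.

```python
import numpy as np
from itertools import product

def lazy_hypercube_walk(n):
    """Transition matrix of the 1/2-lazy simple random walk on {0,1}^n."""
    vertices = list(product((0, 1), repeat=n))
    N = len(vertices)
    P = np.zeros((N, N))
    for i, x in enumerate(vertices):
        P[i, i] = 0.5
        for k in range(n):
            y = list(x)
            y[k] ^= 1                                # flip one coordinate
            j = int("".join(map(str, y)), 2)         # index of the neighbour in product order
            P[i, j] = 1.0 / (2 * n)
    return P

n = 6
P = lazy_hypercube_walk(n)
neg_laplacian = np.eye(2 ** n) - P                   # -Delta = I - P (P is symmetric here)
lam1 = np.sort(np.linalg.eigvalsh(neg_laplacian))[1] # smallest positive eigenvalue
diam = n                                             # graph diameter of the hypercube

print(f"lambda_1          = {lam1:.6f}")
print(f"curvature bound K = {1 / n:.6f}")
print(f"log(2)/diam^2     = {np.log(2) / diam ** 2:.6f}")
```

On this example the curvature bound $\lambda_1 \geq K$ is attained with equality, while the diameter bound is far from sharp.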
## Main results for positive curvature We give a simple example that the Peres-Tetali conjecture is wrong, i.e., we construct a sequence of Markov chains with uniformly positive Ollivier and Bakry Emery curvature, but the modified log-Sobolev constant tends to zero, see Section [4.3](#sec:CounterExample){reference-type="ref" reference="sec:CounterExample"}. The Markov chain is a birth death chain on three vertices where the transition rate from an outer to the middle vertex goes to zero and the remaining transition rates stay fixed. It turns out that with appropriate choice of the remaining transition rates, the Markov chain has uniformly positive Ollivier and Bakry Emery curvature. Moreover, based on this birth death chain, we construct a combinatorial graph with uniformly positive Ollivier curvature, but arbitrarily small modified log-Sobolev constant. Moreover, we present a method to repair the Peres-Tetali conjecture by employing a notion of Ollivier sectional curvature based on the $\ell_\infty$ Wasserstein distance instead of its $\ell_1$ version (for the motivation of the sectional curvature, see [@ollivier2010survey Problem P]). Specifically, assuming non-negative Ollivier sectional curvature, we prove in Theorem [Theorem 9](#thm:modLogSobPosCurv){reference-type="ref" reference="thm:modLogSobPosCurv"} that $$\alpha_{\operatorname{mod}}\geq \inf_{x\sim y} \kappa(x,y)$$ where $\kappa$ is the Ollivier Ricci curvature and $\alpha_{\operatorname{mod}}$ the modified log-Sobolev constant. This settles an open question of Pedrotti (see the discussion after [@pedrotti2023contractive Conjecture 5.25]). ## Main results for non-negative curvature The deepest and most innovative results of this article are concerning non-negative Ollivier Ricci curvature and its consequences for the isoperimetric profile. Specifically, we present a Laplacian separation principle for the eikonal equation which, at first glance, looks just like a party trick, see Section [5.1](#sec:LaplaceSeparation){reference-type="ref" reference="sec:LaplaceSeparation"}. However, this method seems entirely new (even in the case of Riemannian manifolds) and it has surprisingly strong consequences for the isoperimetric profile. More precisely, in Theorem [Theorem 15](#thm:IsoperimetryConcentration){reference-type="ref" reference="thm:IsoperimetryConcentration"}, we prove that for every $\varepsilon\leq \frac 1 8$ and every vertex subset $W$ with $m(W) \leq \frac 1 2$, $$|\partial W| \geq P_0\frac{(m(W) \wedge \varepsilon) \log (1/\varepsilon)}{6\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}}.$$ Here, $m$ is the reversible probability measure, $P_0$ is the minimal transition rate, $|\partial W|$ is the size of the edge boundary, and $\operatorname{diam}_{\operatorname{obs}}$ is the observable diameter, i.e., $$\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)} = \sup_{\substack{m(A) \geq \varepsilon\\m(\operatorname{cl}(B)) \geq \varepsilon}} d(A,B)$$ where $\operatorname{cl}(B)$ is the closure containing both $B$ and its outer vertex boundary. The observable diameter as a function of $\varepsilon$ is, up to constant factors, the inverse of the concentration profile, see [@ozawa2015estimate]. Therefore, the above estimate gives an intimate relation between the isoperimetric profile (depending on $|\partial W|$ and $m(W)$), and the concentration profile (depending on $\varepsilon$ and $\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}$). 
Particularly, every single point of the concentration profile (with the only restriction that $\varepsilon\leq 1/8$) yields an estimate for every point of the isoperimetric profile. This is indeed a conceptual improvement compared to the seminal work of Milman [@milman2010Isoperimetric], where the full concentration profile, and particularly its tail, is needed to obtain isoperimetric estimates. We now discuss the consequences of the above isoperimetry-concentration estimate. Plugging in $\varepsilon=m(W)/4$, we obtain asymptotically sharp estimates for the log-Cheeger constant, $$h_{\log} := \inf_{m(W)\leq 1/2} \frac{|\partial W|}{ - m(W) \log(m(W))} \geq \frac{P_0}{24\operatorname{diam}(G)},$$ see Corollary [Corollary 16](#cor:IsoperimetryConcentrationLargeEps){reference-type="ref" reference="cor:IsoperimetryConcentrationLargeEps"}. This seems to be the first result of this kind for discrete Ricci curvature bounds. The result is asymptotically sharp for uniformly biased and unbiased birth death chains. When plugging in $\varepsilon=1/8$, we obtain the following estimate for the classical Cheeger constant, $$h := \inf_{m(W) \leq 1/2} \frac{|\partial W|}{m(W)} \geq \frac {P_0}{12 \operatorname{diam}_{\operatorname{obs}}^{(1/8)}},$$ see Corollary [Corollary 17](#cor:hlargerdiamess){reference-type="ref" reference="cor:hlargerdiamess"}. This means that the Cheeger constant can be lower bounded in terms of a single point of the concentration profile. This indeed exceeds Erbar and Fathi's conjecture [@erbar2018poincare Conjecture 6.9] asking whether the Cheeger constant and spectral gap can be bounded assuming exponential concentration, see Section [6.1](#sec:ErbarFathiConjecture){reference-type="ref" reference="sec:ErbarFathiConjecture"}. Indeed, the reverse inequality is also true, namely $$h \leq \frac {57\operatorname{Deg}_{\max}}{12 \operatorname{diam}_{\operatorname{obs}}^{(1/8)}}$$ where $\operatorname{Deg}_{\max}=\max_x\sum_{y\neq x} P(x,y)$ is the maximal vertex degree, see Theorem [Theorem 18](#thm:hsmallerdiamess){reference-type="ref" reference="thm:hsmallerdiamess"}. We also obtain a lower bound of $|\partial W|/m(W)$ in terms of the diameter of $W$. More precisely, in Theorem [Theorem 22](#thm:internalDiameter){reference-type="ref" reference="thm:internalDiameter"}, we prove that $$\frac{|\partial W|}{m(W)(1-m(W))} \geq \frac{P_0}{\operatorname{diam}(\operatorname{cl}(W))}.$$ Another application of our general isoperimetric concentration inequality is that we can prove Gaussian isoperimetry assuming Gaussian concentration. Specifically, assume that for all $A$ with $m(A) \geq \frac 1 2$, $$m(A_r) \leq \exp(-\rho r^2)$$ where $A_r = \{x:d(A,x) \geq r\}$. Then, $$h_{\sqrt {\log}} := \inf_{m(W)\leq \frac 1 2} \frac{|\partial W|}{m(W)\sqrt{\log(1/m(W))}} \geq \frac{P_0}{48}\sqrt{\rho}.$$ The $\sqrt{\log}$ Cheeger constant (or Gaussian isoperimetric constant) is tightly related to the log-Sobolev constant via log-Cheeger Buser inequalities, see [@klartag2015discrete Theorem 4.4] and [@houdr2001mixed Remark 5], i.e., $Ch_{\sqrt{\log}}^2/P_0 \geq \alpha \geq c h_{\sqrt{\log}}^2$, where the former estimate requires non-negative Ricci curvature. By this, we provide an answer to [@erbar2018poincare Conjecture 6.10] where Erbar and Fathi ask if Gaussian concentration implies a log Sobolev inequality. One drawback when applying log-Cheeger estimates for obtaining log-Sobolev inequalities is that one loses a factor $P_0^2$.
This is indeed avoidable when estimating the log-Sobolev constant purely in terms of the diameter. To address this issue, we give a new general characterization of the log-Sobolev constant $\alpha$ in terms of Dirichlet eigenvalues. Specifically, in Corollary [Corollary 4](#cor:asalpha){reference-type="ref" reference="cor:asalpha"}, we show $$\alpha \simeq \alpha_{\operatorname{spectral}}:= \inf_{W} \theta(\lambda_W,\lambda_{W^c})$$ where $\lambda_W$ is the smallest Dirichlet eigenvalue of the subset $W$, and $\theta$ is the logarithmic mean. Here, $\simeq$ means coincidence up to global multiplicative constants. Using the spectral characterization of $\alpha$, we prove in Theorem [Theorem 13](#thm:asDiam){reference-type="ref" reference="thm:asDiam"} that $$\alpha \simeq \alpha_{\operatorname{spectral}}\geq \frac {P_0} {16\operatorname{diam}(G)^2}.$$ Considering that under discrete curvature assumptions, only modified log-Sobolev inequalities have been established so far [@erbar2012ricci; @erbar2018poincare; @weber2021entropy; @yong2017log; @johnson2015discrete; @caputo2009convex; @Erbar2019EntropicCA], it is quite remarkable that our approach yields original log-Sobolev inequalities which are significantly stronger and have the advantage that they are closely related to the time to stationarity of random walks, and that they allow for geometric characterizations via isocapacitary and spectral profiles, as discussed in Section [3](#sec:GeneralLogSob){reference-type="ref" reference="sec:GeneralLogSob"}. Additionally, this addresses [@ollivier2010survey Problem M] asking for exponential entropy decay under the heat flow, which is a well-known consequence of log-Sobolev inequalities. Moreover, it is surprising that we obtain the log-Sobolev inequality and isoperimetric inequalities under Ollivier curvature bounds which are easy to determine by a linear program, and not under modified Bakry Emery curvature bounds such as $CDE'$ or entropic curvature bounds which come with logarithmic terms facilitating the proof of log-Sobolev inequalities. # Setup and notation We say $G=(V,P,d)$ is a metric (continuous time) Markov chain if - The vertex set $V$ is a finite set, - The Markov kernel $P:V^2 \to [0,\infty)$ has symmetric support, - The distance function $d:V^2 \to [0,\infty]$ satisfies $$d(x,y)=\inf \left\{\sum_{k=1}^n d(x_k,x_{k-1}):x=x_0 \sim \ldots \sim x_n = y \right\}$$ where we write $x\sim y$ iff $P(x,y)>0$. In general, we do not require that $P(x,\cdot)$ sums to one. However, when we refer to lazy Markov chains, we require $\sum_{y}P(x,y)=1$ and $P(x,x) \geq \frac 1 2$ for all $x \in V$. The last condition above means that $d$ is a discrete path distance, i.e., the distance between any two vertices is realized by the length of a path with respect to the Markov chain. We always assume that the graph is connected, i.e., $d(x,y)$ is finite for all $x,y$. Then, the Markov chain is irreducible, i.e., there exists a unique probability measure $m$ on $V$, called invariant measure, such that for all $x \in V$, $$\sum_y m(x)P(x,y) = \sum_y m(y)P(y,x).$$ If for all $x,y \in V$, we have $m(x)P(x,y) = m(y)P(y,x)$, then the Markov chain is called reversible. We sometimes write $w(x,y)=m(x)P(x,y)$, referring to the symmetric edge weight of a weighted graph. We will always assume that the Markov chain is reversible.
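The following minimal sketch (illustrative only; it assumes numpy/scipy, and the three-state rates are made up rather than taken from the paper) computes the invariant measure of a kernel in the above generality and checks detailed balance.

```python
import numpy as np
from scipy.linalg import null_space

def invariant_measure(P):
    """Invariant probability measure of an irreducible kernel P (not necessarily
    stochastic): the probability vector m with sum_y m(x)P(x,y) = sum_y m(y)P(y,x)."""
    Q = P - np.diag(P.sum(axis=1))      # generator of the continuous-time chain
    m = null_space(Q.T)[:, 0]           # one-dimensional kernel by irreducibility
    return np.abs(m) / np.abs(m).sum()

def is_reversible(P, tol=1e-10):
    """Detailed balance check: m(x)P(x,y) = m(y)P(y,x) for all x, y."""
    m = invariant_measure(P)
    W = m[:, None] * P                  # edge weights w(x,y) = m(x)P(x,y)
    return np.allclose(W, W.T, atol=tol)

# a biased birth death chain on {0,1,2}
P = np.array([[0.0, 1.0, 0.0],
              [0.2, 0.0, 0.8],
              [0.0, 1.0, 0.0]])
print(invariant_measure(P), is_reversible(P))   # [0.1 0.5 0.4] True
```

Birth death chains such as this example are automatically reversible, which the check confirms.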
The measure $m$ induces a scalar product on $\mathbb{R}^V$ via $$\langle f,g \rangle := \sum_x f(x)g(x)m(x).$$ We denote the 1-Lipschitz functions by $$\operatorname{Lip}(1) := \{f \in \mathbb{R}^V: |f(y)-f(x)| \leq d(x,y)\}.$$ The Laplacian is given by $\Delta: \mathbb{R}^V \to \mathbb{R}^V$, $$\Delta f(x) = \sum_y P(x,y)(f(y)-f(x)).$$ It is well known that $\Delta$ is self-adjoint if and only if the Markov chain is reversible. For reversible Markov chains, the corresponding Dirichlet form is given by $$\mathcal E(f,g) = - \langle \Delta f,g \rangle.$$ For simplicity, we write $\mathcal E(f):=\mathcal E(f,f)$. For introducing the Ollivier curvature, we follow [@munch2017ollivier]. For $x\sim y$ the Ollivier curvature $\kappa(x,y)$ is given by $$\kappa(x,y) := \inf_{\substack{f \in \operatorname{Lip}(1) \\ f(y)-f(x)=d(x,y)}} \frac{\Delta f(x) -\Delta f(y)}{d(x,y)}.$$ If $P$ is a lazy Markov kernel, i.e., if $P(x,x) \geq \frac 1 2$, and if $\sum_y P(x,y)=1$ for all $x \in V$, the Ollivier curvature coincides with the expression introduced by Ollivier, namely $$\kappa(x,y) = 1 - \frac {W(P(x,\cdot),P(y,\cdot))}{d(x,y)}$$ where $W$ is the 1-Wasserstein distance, see [@bourne2018ollivier; @loisel2014ricci]. We remark that the Ollivier curvature of an edge coincides with the maximal Forman curvature of this edge where the maximum is taken over all cell complexes having the graph as 1-Skeleton [@jost2021characterizations]. # General characterization of log-Sobolev constants {#sec:GeneralLogSob} We first give the definitions of the log-Sobolev constant and its modified version. The Log-Sobolev constant $\alpha$ is defined as $$\alpha := \inf_{\|f\|_2=1} \frac{ \mathcal E(f,f)}{\mathop{\mathrm{Ent}}(f^2)}$$ where for positive $f$ with $\|f\|_1=1$, $$\mathop{\mathrm{Ent}}(f) := \langle f,\log f \rangle.$$ It is easy to check that for the complete graph on $n$ vertices (with normalized graph Laplacian), $\alpha$ goes to zero as $n$ increases. This is for some applications not desirable which was one motivation to introduce a modified log Sobolev constant $$\alpha_{\operatorname{mod}}:= \inf_{\|f\|_1=1} \frac{\mathcal E(f,\log f)}{\mathop{\mathrm{Ent}}(f)}.$$ It is well known that $$4 \alpha \leq \alpha_{\operatorname{mod}}\leq 2\lambda$$ where $\lambda$ is the smallest positive eigenvalue of $-\Delta$, see e.g. [@bobkov2006modified]. By a variational argument, if $2\alpha_{\operatorname{mod}}<\lambda$, then the minimizer for $\alpha_{\operatorname{mod}}$ satisfies $$\frac {\Delta f} f + \Delta (\log f) = -\alpha_{\operatorname{mod}}\log f,$$ see [@bobkov2006modified Theorem 6.5]. If $\alpha_{\operatorname{mod}}= 2\lambda$, then the expression for $\alpha_{\operatorname{mod}}$ is minimized by $1+\varepsilon f$ for $\varepsilon\to 0$ where $f$ is an eigenfunction to $\lambda$. The log Sobolev constant is closely related to the time to stationarity $\tau$ of a random walk. By a famous result of Diaconis and Saloff-Coste [@diaconis1996logarithmic], we have $$\frac 1 {2\alpha} \leq \tau \leq \frac {4 + \log \log (1/\pi_*)}{4\alpha}$$ where $\tau := \inf \{t: \sup_x \|e^{\Delta t} \delta_x - 1 \|_2 \leq \frac 1 e\}.$ Moreover, the log Sobolev constant $\alpha$ allows for various geometric and spectral characterizations as discussed below. In contrast, the modified log-Sobolev constant $\alpha_{\operatorname{mod}}$ can be seen as more analytic in nature as it precisely measures the decay of entropy under the heat equation [@bobkov2006modified]. 
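The edge curvature above is given by a finite linear program and can therefore be evaluated exactly on small chains. The sketch below (an illustration, not the author's code; it assumes scipy, unit edge lengths and the hypothetical helper name `ollivier_curvature`) minimizes $(\Delta f(x)-\Delta f(y))/d(x,y)$ over $1$-Lipschitz $f$ with $f(y)-f(x)=d(x,y)$.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.csgraph import shortest_path

def ollivier_curvature(P, x, y):
    """kappa(x,y) from the Lipschitz-function formulation above, for a kernel P
    whose edges all have length 1 (so d is the combinatorial graph distance)."""
    n = P.shape[0]
    adj = (P > 0) & ~np.eye(n, dtype=bool)
    d = shortest_path(adj.astype(float), directed=False, unweighted=True)

    def laplacian_row(v):
        # row vector r with r.f = Delta f(v) = sum_z P(v,z)(f(z) - f(v))
        r = P[v].astype(float)
        r[v] -= P[v].sum()
        return r

    c = (laplacian_row(x) - laplacian_row(y)) / d[x, y]    # linear objective in f

    A_ub, b_ub = [], []
    for u in range(n):
        for v in range(u + 1, n):
            if adj[u, v]:
                # |f(u)-f(v)| <= d(u,v) on edges enforces f in Lip(1) for a path metric
                e = np.zeros(n); e[u], e[v] = 1.0, -1.0
                A_ub += [e, -e]; b_ub += [d[u, v], d[u, v]]

    A_eq = np.zeros((2, n)); A_eq[0, x] = 1.0; A_eq[1, y] = 1.0
    b_eq = [0.0, d[x, y]]                                  # f(x)=0 (gauge) and f(y)-f(x)=d(x,y)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(None, None))
    return res.fun

# lazy simple random walk on the triangle K_3
P = np.full((3, 3), 0.25); np.fill_diagonal(P, 0.5)
print(ollivier_curvature(P, 0, 1))   # 0.75
```

For the $1/2$-lazy walk on the triangle, the program returns $3/4$, in line with $1 - W(P(x,\cdot),P(y,\cdot))$ obtained by moving mass $1/4$ across the edge.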
In the next subsection, we discuss a capacitary characterization of $\alpha$ from [@schlichting2019poincare]. Afterwards, we give a characterization of $\alpha$ in terms of Dirichlet eigenvalues. This seems to be new. ## Capacity and Log-Sobolev constant The log-Sobolev constant is closely related to the iso-capacitary profile which can be seen as a refined version of the isoperimetric profile. Here, 'refined' means that isocapacitary constants give upper and lower bounds to the spectral gap and log-Sobolev constant up to a global constant, while isoperimetric constants yield similar estimates via the Cheeger-Buser inequality only up to a constant depending on the vertex degree. We now give the details. Let $A,B \subset V$ be disjoint subsets. We define $$\operatorname{cap}(A,B) := \min \{\mathcal E(f): f|_A=0, f|_B=1\}.$$ We assume $m(V)=1$. Let $$\alpha_{\operatorname{cap}} := \inf_{\substack{A,B \subset V \\m(B) \geq 1/2}}\frac{\operatorname{cap}(A,B)}{-m(A)\log \left(1 + \frac {e^2} {m(A)}\right)}.$$ The following theorem is given in [@schlichting2019poincare Corollary 2.11]. **Theorem 1**. *There exist universal constants $c,C$ such that $$c\alpha_{\operatorname{cap}} \leq \alpha \leq C \alpha_{\operatorname{cap}}.$$* For the proof, we refer to [@schlichting2019poincare]. We first bring $\alpha_{\operatorname{cap}}$ into a more convenient form by introducing $$\alpha_{\operatorname{cap}}^{\theta} := \inf_{A,B \subset V} \theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right)$$ where $\theta$ is the logarithmic mean, i.e., $\theta(s,t)=(s-t)/\log(s/t)$ for $s\neq t$ and $\theta(s,s)=s$. We now compare $\alpha_{\operatorname{cap}}$ with $\alpha_{\operatorname{cap}}^\theta$. We remark that most of the following proof boils down to single variable calculus, but for convenience of the reader, we give a complete proof. **Lemma 2**. *The isocapacitary constants $\alpha_{\operatorname{cap}}$ and $\alpha_{\operatorname{cap}}^\theta$ satisfy $$\frac 1 2 \alpha_{\operatorname{cap}} \leq \alpha_{\operatorname{cap}}^\theta \leq 3 \alpha_{\operatorname{cap}}.$$* *Proof.* We calculate $$\theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right)= \frac{\operatorname{cap}(A,B)}{m(A)} \cdot \frac{1-\frac {m(A)}{m(B)}}{\log(m(B)/m(A))}.$$ We first show the latter inequality. Assume $A,B$ minimize $\alpha_{\operatorname{cap}}$. Then, $m(A) \leq \frac 1 2 \leq m(B) \leq 1$ and thus, $$\frac{1-\frac {m(A)}{m(B)}}{\log(m(B)/m(A))} \leq \frac{1- 2 m(A) }{-\log(2m(A))} \leq 3 \cdot \frac{1}{-\log \left(1 + \frac {e^2} {m(A)}\right)}$$ and therefore, $$\alpha_{\operatorname{cap}}^{\theta} \leq \theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right) \leq 3 \cdot \frac{\operatorname{cap}(A,B)}{-m(A)\log \left(1 + \frac {e^2} {m(A)}\right)} = 3 \alpha_{\operatorname{cap}}.$$ We now show the former inequality. We assume $A,B$ minimize $\alpha_{\operatorname{cap}}^\theta$. Let $h$ be such that $h|_A=0$ and $h|_B=1$ and $\Delta h=0$ on $(A\cup B)^c$. Let $B' := \operatorname{supp}(h-\frac 1 2)_+$. Without obstruction, we can assume $m(B') \geq \frac 1 2$, as otherwise, we could exchange $A$ and $B$.
Then, $$\operatorname{cap}(A,B') \leq 4 \mathcal E((h-\frac 1 2)_+) \leq 2\operatorname{cap}(A,B).$$ Moreover, $$\frac{1}{-\log \left(1 + \frac {e^2} {m(A)}\right)} \leq \frac{1- m(A) }{-\log(m(A))} \leq \frac{1-\frac {m(A)} {m(B)}}{\log(m(B)/m(A))}$$ and thus, $$\begin{aligned} \alpha_{\operatorname{cap}} \leq \frac{\operatorname{cap}(A,B')}{m(A)}\frac{1}{-\log \left(1 + \frac {e^2} {m(A)}\right)} &\leq \frac{2\operatorname{cap}(A,B)}{m(A)}\frac{1-\frac {m(A)} {m(B)}}{\log(m(B)/m(A))} \\&= 2\theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right) =2 \alpha_{\operatorname{cap}}^\theta.\end{aligned}$$ This implies the first estimate and finishes the proof. ◻ ## Capacity and Dirichlet eigenvalues Estimates of the log-Sobolev constant via the spectral profile are given in [@goel2006mixing; @hermon2018characterization]. In their work, they compare the Dirichlet eigenvalue with the volume. We compare the Dirichlet eigenvalue with the Dirichlet eigenvalue of the complement. Let $X \subset V$. We define $$\lambda_X := \inf \{\mathcal E(f,f): \|f\|_2 = 1, f|_{X^c} = 0 \},$$ and we define the log-spectral constant as $$\alpha_{\operatorname{spectral}}:= \inf_{X \subset V} \theta(\lambda_X,\lambda_{X^c}),$$ where $\theta$ is the logarithmic mean. **Theorem 3**. *The log-spectral constant $\alpha_{\operatorname{spectral}}$ satisfies $$\frac 1 4\alpha_{\operatorname{cap}}^\theta \leq \alpha_{\operatorname{spectral}}\leq 2\alpha_{\operatorname{cap}}^\theta.$$* *Proof.* We first prove the latter inequality. Let $A,B \subset V$ minimize $\alpha_{\operatorname{cap}}^\theta$. Let $h$ be the minimizer of the capacity, i.e., $\Delta h= 0$ on $(A\cup B)^c$ and $h=0$ on $A$ and $h=1$ on $B$. Let $X:= \{h \leq \frac 1 2\}$ and $Y=X^c$. Let $h_X := (h - \frac 1 2)_-$ and $h_Y := (h- \frac 1 2)_+$. Then, $$\frac 1 2 \mathcal E(h) \geq \mathcal E(h_X) \geq \lambda_X \|h_X\|_2^2 \geq \frac {\lambda_X} 4 m(A)$$ and hence, $$\lambda_X \leq \frac{2 \operatorname{cap}(A,B)}{m(A)}$$ and similarly, $$\lambda_Y \leq \frac{2 \operatorname{cap}(A,B)}{m(B)}.$$ Thus, $$\begin{aligned} \alpha_{\operatorname{spectral}}\leq \theta(\lambda_X,\lambda_{Y}) \leq 2\theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right) =2 \alpha_{\operatorname{cap}}^\theta.\end{aligned}$$ We finally prove the former estimate. Let $X$ be a minimizer of $\alpha_{\operatorname{spectral}}$ and let $Y=X^c$. By [@schlichting2019poincare Corollary 2.10], there exists $A \subset X$ with $$\frac{\operatorname{cap}(A,Y)}{m(A)} \leq 4 \lambda_X.$$ Similarly, there exists $B \subset Y$ such that $$\frac{\operatorname{cap}(B,X)}{m(B)} \leq 4 \lambda_Y.$$ Using $\operatorname{cap}(A,B) \leq \operatorname{cap}(A,Y)$ and $\operatorname{cap}(A,B) \leq \operatorname{cap}(B,X)$, we obtain $$\alpha_{\operatorname{cap}}^\theta \leq \theta \left(\frac{\operatorname{cap}(A,B)}{m(A)},\frac{\operatorname{cap}(A,B)}{m(B)} \right) \leq 4\theta(\lambda_X,\lambda_Y) = 4\alpha_{\operatorname{spectral}}.$$ This proves the former inequality and finishes the proof. ◻ Combining Theorem [Theorem 1](#thm:acapalpha){reference-type="ref" reference="thm:acapalpha"}, Lemma [Lemma 2](#lem:acaptheta){reference-type="ref" reference="lem:acaptheta"} and Theorem [Theorem 3](#thm:asacap){reference-type="ref" reference="thm:asacap"}, we obtain the following corollary. **Corollary 4**.
*There exist global constants $c,C>0$ such that $$c\alpha \leq \alpha_{\operatorname{spectral}}\leq C \alpha.$$* ## Dirichlet eigenvalue and positive super-solutions We recall a general lower bound on the Dirichlet eigenvalue in terms of positive super solutions. This kind of result is known as Allegretto Piepenbrink type result, [@haeseler2011generalized]. We first set the stage. Let $W \subsetneq V$, let $$\Delta_W f := 1_W \Delta (1_W \cdot f),$$ and let $\lambda_W$ be the smallest positive eigenvalue of $-\Delta_W$. The following Lemma can be found in [@haeseler2011generalized Theorem 3.1] in a way more general framework. For convenience, we give a simple proof for our framework. **Lemma 5**. *Let $\lambda>0$. Suppose there is a non-negative function $f \in \mathbb{R}^V$ with $f|_W \neq 0$ and $$\Delta f \leq -\lambda f \mbox{ on } W.$$ Then, $\lambda_W \geq \lambda$.* *Proof.* Without obstruction, we assume $f=0$ on $V\setminus W$. We notice that $f>0$ on $W$ by the maximum principle. Let $u_t := e^{-\lambda t} f$. Then, $\partial_t u_t \geq \Delta_W u_t$. Let $g$ be an eigenfunction with respect to $\lambda_W$ such that $g \leq f$. Let $v_t := e^{-\lambda_W t} g$. Then, $\partial_t v_t = \Delta_W v_t$. Hence, $$\partial_t (u_t - v_t) \geq \Delta(u_t - v_t)$$ and $u_0 - v_0 \geq 0$. This implies $u_t - v_t \geq 0$ by the maximum principle and thus, $\lambda_W \geq \lambda$, finishing the proof. ◻ # Positive curvature {#sec:PosCurv} In this section, we investigate the relation between positive Ollivier curvature and modified log-Sobolev inequalities. Specifically, we prove that for Markov chains, $$\operatorname{Ric}\geq K \mbox{ \& } \sec \geq 0 \quad \Longrightarrow \quad \alpha_{\operatorname{mod}}\geq K$$ where $\alpha_{\operatorname{mod}}$ is the modified log-Sobolev constant, $\operatorname{Ric}$ is the Ollivier Ricci curvature, and $\sec$ is the Ollivier sectional curvature based on the $\ell_\infty$ Wasserstein distance discussed in the next subsection. This answers an open question by Pedrotti (see the discussion after [@pedrotti2023contractive Conjecture 5.25]). The proof is based on a new characterization of non-negative sectional curvature via various non-linear gradient estimates, see Corollary [Corollary 7](#cor:secCharGradientEstimates){reference-type="ref" reference="cor:secCharGradientEstimates"}. In Subsection [4.3](#sec:CounterExample){reference-type="ref" reference="sec:CounterExample"}, we prove that non-negative sectional curvature is necessary for obtaining the modified log-Sobolev inequality. More precisely, we give an example of a Markov chain with $$\begin{aligned} \operatorname{Ric}\geq 1 \quad \mbox{ and } \quad \alpha_{\operatorname{mod}}\leq \varepsilon\end{aligned}$$ for arbitrarily small $\varepsilon>0$. This example serves as counterexample for a conjecture by Peres and Tetali. ## Ollivier sectional curvature In Yann Ollivier's survey of Ricci curvature for metric spaces and Markov chains [@ollivier2010survey Problem P], he proposes a notion of discrete sectional curvature by replacing the $\ell_1$ by the $\ell_\infty$ Wasserstein metric in the formula for the Ollivier curvature. Specifically for having non-negative sectional curvature, there must exist a coupling between $P(x,\cdot)$ and $P(y,\cdot)$ moving all points by a distance of at most $d(x,y)$. We now give the precise definition. **Definition 6**. 
We say a lazy Markov chain $(V,P)$ has non-negative Ollivier sectional curvature at edge $x\sim y$ if there exists a transport plan $\pi:V\times V \to [0,\infty)$ transporting the measure $P(x,\cdot)$ to $P(y,\cdot)$ such that $$d(x',y') \leq 1$$ whenever $\pi(x',y')>0$. Equivalently, one can introduce a sectional curvature lower bound using the $\ell_\infty$ Wasserstein distance, i.e., $$\kappa_\infty(x,y) := 1 - \frac{W_\infty(P(x,\cdot),P(y,\cdot))}{d(x,y)}$$ where $$W_\infty(\mu,\nu) = \inf_{\pi} \sup_{(x',y')\in \operatorname{supp}(\pi)}d(x',y')$$ where the infimum is taken over all transport plans $\pi$. It is easy to check that this definition precisely coincides with non-negative curvature in [@pedrotti2023contractive Definition 5.3] where the curvatures with respect to different $p$-Wasserstein metrics have been compared. Moreover, the sectional curvature introduced above is closely related to various non-linear Ollivier type curvature notions introduced in [@kempton2019large]. Particularly, we show that non-negative sectional curvature is equivalent to all statements in [@kempton2019large Theorem 1.12(iii)]. **Corollary 7**. *Let $(V,P)$ be a lazy reversible Markov chain equipped with the combinatorial distance. The following statements are equivalent:* (i) *$G$ has non-negative sectional curvature* (ii) *$\|\nabla \log P_t f\|_\infty \leq \|\nabla \log f\|_\infty$ for all positive $f\in \mathbb{R}^V$.* (iii) *$|\nabla P_t f| \leq P_t |\nabla f|$ for all $f \in \mathbb{R}^V$* (iv) *$|\nabla \sqrt{P_t f}| \leq P_t |\nabla \sqrt f|$ for all positive $f\in \mathbb{R}^V$,* *where $|\nabla f|(x):= \max_{y\sim x} |f(y)-f(x)|$ is the pointwise gradient and $P_t = e^{\Delta t}$ is the heat semigroup.* The corollary is an easy consequence of the following theorem and [@kempton2019large Theorem 1.12(iii)] combined with [@kempton2019large Theorems 1.6,1.7,1.8]. **Theorem 8**. *Let $(V,P)$ be a lazy reversible Markov chain equipped with the combinatorial distance. Let $x\sim y \in V$. The following statements are equivalent.* (i) *The edge $(x,y)$ has non-negative Ollivier sectional curvature,* (ii) *For all functions $f \in \operatorname{Lip}(1)$ with $f(y)-f(x)=1$, and all $\lambda \geq 0$, $$\frac{\Delta e^{\lambda f}(x)}{e^{\lambda f}(x)} \geq \frac{\Delta e^{\lambda f}(y)}{e^{\lambda f}(y)} \qquad \mbox{ and } \qquad \frac{\Delta e^{-\lambda f}(x)}{e^{-\lambda f}(x)} \leq \frac{\Delta e^{-\lambda f}(y)}{e^{-\lambda f}(y)}.$$* *Proof.* We first prove $(i) \Rightarrow (ii)$. Let $\pi$ be a transport plan with transport distance at most one. We notice that whenever $\pi(x',y')>0$, we have $$f(x')-f(x) \geq f(y')-f(y)$$ as $f \in \operatorname{Lip}(1)$ and as $f(y)-f(x)=1$. We calculate $$\begin{aligned} &\frac{\Delta e^{\lambda f}(x)}{e^{\lambda f}(x)} - \frac{\Delta e^{\lambda f}(y)}{e^{\lambda f}(y)} \\=&\sum_{x',y'} \pi(x',y') \left(\exp(\lambda(f(x')-f(x))) - \exp(\lambda(f(y')-f(y))) \right) \geq 0\end{aligned}$$ as all terms are non-negative. The other inequality follows similarly. We now prove $(ii) \Rightarrow (i)$. We consider the optimal transport problem $$\min_\pi \sum_{x',y'} c(x',y')\pi(x',y')$$ over all transport plans $\pi$ from $P(x,\cdot)$ to $P(y,\cdot)$, where $c(x',y')=0$ if $d(x',y')\leq 1$ and $c(x',y')=\infty$ otherwise. The dual problem is $$\max_{f,g} Pg(y) - Pf(x)$$ where the maximum is taken over all $f,g \in \mathbb{R}^V$ such that $g(y') \leq f(x')$ whenever $d(x',y') \leq 1$. We aim to show that the maximum is zero. Suppose not. 
We assume without loss of generality that $f(x)=0$. As $P(z,z) \geq \frac 1 2$, we can also assume $g(y)=0$, as otherwise, we could replace $f$ by its positive part and enforce $g(y)$ to be zero. Also, without loss of generality, we assume $$Pg_+(y) > Pf_+(x)$$ as otherwise, we could interchange $x$ and $y$. By a level set argument, there exists $r>0$ such that $$P1_{\{g>r\}}(y) > P1_{\{f>r\}}(x).$$ As $g \leq f(y)$ on $B_1(y)$, we see $y \in \{f>r\}$. We now construct a 1-Lipschitz function $h$ as $$h:=1_{\{g>r\}} - 1_{\{f \leq r\}}.$$ Indeed $h \in \operatorname{Lip}(1)$ as $f(x')<g(y')$ implies $d(x',y')\geq 2$. We calculate $$\frac{\Delta e^{\lambda h}}{e^{\lambda h}}(x) = (e^\lambda - 1) P1_{\{f>r\}} (x)$$ and $$\frac{\Delta e^{\lambda h}}{e^{\lambda h}}(y) = (e^\lambda - 1) P1_{\{g>r\}}(y) + (e^{-\lambda} - 1) P1_{\{f \leq r\}}(y).$$ As $P1_{\{g>r\}}(y) > P1_{\{f>r\}}(x)$, we see that for $\lambda$ large, we have $$\frac{\Delta e^{\lambda h}}{e^{\lambda h}}(y) > \frac{\Delta e^{\lambda h}}{e^{\lambda h}}(x)$$ contradicting $(ii)$. This proves $(ii) \Rightarrow (i)$ by contradiction. ◻ ## Ollivier curvature and modified log-Sobolev inequality We show that the modified log-Sobolev constant can be lower bounded by the Ollivier Ricci curvature, assuming non-negative sectional curvature. This answers an open question by Pedrotti affirmatively, see [@pedrotti2023contractive Conjecture 5.25] and the discussion thereafter. Indeed, non-negative sectional curvature is crucial as will be shown in an example on a three vertex birth death chain. **Theorem 9**. *Let $(V,P)$ be a Markov chain equipped with the combinatorial distance. Assume that $(V,P)$ has non-negative Ollivier sectional curvature. Then, $$\alpha_{\operatorname{mod}}\geq \inf _{x\sim y} \kappa(x,y).$$* *Proof.* Let $K:=\inf _{x\sim y} \kappa(x,y)$. As $\lambda \geq K$, see e.g. [@ollivier2009ricci], we can assume $\alpha_{\operatorname{mod}}< 2\lambda$ without obstruction. Hence, there exists a non-constant function $f$ satisfying $\|f\|_1=1$ and $$\frac {\Delta f} f + \Delta (\log f) = -\alpha \log f,$$ see [@bobkov2006modified Theorem 6.5]. Let $g= \log f$. Let $x\sim y$ such that $$g(y) - g(x) = \|\nabla g \|_\infty.$$ By non-negative Ollivier sectional curvature and Theorem [Theorem 8](#thm:secCurvatureExp){reference-type="ref" reference="thm:secCurvatureExp"}, we have $$\frac{\Delta e^g}{e^g}(y) \leq \frac{\Delta e^g}{e^g}(x).$$ Moreover, $$\Delta g(y) - \Delta g(x) \leq - \kappa(x,y)\|\nabla g\|_\infty \leq -K\|\nabla g\|_\infty.$$ Hence, $$\begin{aligned} -\alpha \|\nabla g\|_\infty = \left(\frac {\Delta f} f + \Delta (\log f) \right)(y) - \left(\frac {\Delta f} f + \Delta (\log f) \right)(x) \leq -K \|\nabla g\|_\infty\end{aligned}$$ implying $\alpha \geq K$ and finishing the proof. ◻ We remark that in case of birth death chains, Theorem [Theorem 9](#thm:modLogSobPosCurv){reference-type="ref" reference="thm:modLogSobPosCurv"} precisely recovers [@caputo2009convex Theorem 3.1] as monotonicity of the jump rates is equivalent to non-negative sectional curvature. ## A counterexample {#sec:CounterExample} In this section, we disprove the Tetali-Peres conjecture, namely that the log-Sobolev constant can be lower bounded by constant times Ollivier curvature. In order to construct a counterexample, we must ensure that the graph has negative sectional curvature but positive Ricci curvature. 
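Both requirements can be tested numerically on small candidate chains. The following minimal Python sketch (our own illustration, not part of the original argument; it assumes SciPy, a lazy stochastic kernel $P$ and its combinatorial distance matrix $d$) checks non-negative Ollivier sectional curvature of an edge in the sense of Definition 6 by testing whether a coupling of $P(x,\cdot)$ and $P(y,\cdot)$ supported on pairs at distance at most $d(x,y)$ exists; a strictly positive optimum certifies negative sectional curvature.

```python
import numpy as np
from scipy.optimize import linprog

def has_nonneg_sectional_curvature(P, d, x, y, tol=1e-9):
    # Minimize the coupling mass placed on pairs (x', y') with d(x', y') > d(x, y);
    # non-negative sectional curvature at the edge (x, y) means this minimum is zero.
    n = P.shape[0]
    cost = (d > d[x, y]).astype(float).reshape(-1)
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals: pi(i, .) sums to P(x, i)
        row = np.zeros((n, n)); row[i, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(P[x, i])
    for j in range(n):                      # column marginals: pi(., j) sums to P(y, j)
        col = np.zeros((n, n)); col[:, j] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(P[y, j])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun < tol

# toy check: lazy simple random walk on the path 0 - 1 - 2
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
d = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]], dtype=float)
print(has_nonneg_sectional_curvature(P, d, 0, 1))   # expected: True
```

Note that the chains constructed below are not stochastic kernels, so this sketch only illustrates the coupling condition of Definition 6 in the lazy Markov chain setting.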
For $\varepsilon>0$, we define a metric Markov chain $G_\varepsilon=(\{1,2,3\},P,d)$ with combinatorial distance $d$ via $$\begin{aligned} w(1,2)&=10,& w(2,3)&=1,& w(1,3)&=0,&\\ m(1)&=1/\varepsilon,& m(2)&=1,& m(3)&=1/20.\end{aligned}$$ and $P(x,y) := w(x,y)/m(x)$. It is easy to check that the Ollivier curvature is at least $1$ on the two edges. Moreover, $G_\varepsilon$ satisfies the Bakry Emery curvature condition $CD(1,0)$ as can be computed via [@hua2023ricci Proposition 2.1]. We now estimate the modified log-Sobolev constant. We choose $f(1)=\varepsilon$ and $f(2)=1$ and $f(3)=-\log(\varepsilon)$. Then, by a straight forward calculation, there exist constants $c,C$ such that for all sufficiently small $\varepsilon>0$, $$\begin{aligned} \mathop{\mathrm{Ent}}(f) \geq c\left(\log \frac 1 \varepsilon\right)^2\end{aligned}$$ and $$\mathcal{E}(f,\log f) \leq C \log\frac 1 \varepsilon\log \log \frac 1 \varepsilon.$$ Hence, $\frac{\mathcal E(f,\log f)}{\mathop{\mathrm{Ent}}(f)}$ goes to zero as $\varepsilon$ goes to zero. This proves that there is no constant $C$ such that the modified log-Sobolev constant is at least $CK$ where $K$ is a lower bound on the Ollivier curvature. **Remark 1**. This example can also be extended to combinatorial graphs, i.e., graphs for which $P(x,y)=P_0$ for all $x\sim y$. This is done by taking a Cartesian product with the complete graph $K_{2n}$ with itself, adding another complete graph $K_n$, and putting edges between all vertices $(1,i) \in K_{2n}^2$ and $j \in K_n$. The subgraph $\{2\ldots 2n\} \times K_{2n}$ replaces the single vertex 1 from the three vertex chain above, the subgraph $\{1\} \times K_{2n}$ replaces vertex 2, and the subgraph $K_n$ replaces vertex 3. It is a bit tedious but straight forward to check that the graph has uniformly positive Ollivier curvature (positive lower bound independent of $n$). On the other hand, the log-Sobolev constant tends to zero as $n \to \infty$ by exactly the same argument as in the three vertex chain. # Non-negative Curvature and Log Sobolev inequality In this section, we introduce the Laplacian separation principle and prove an (unmodified) log-Sobolev inequality in terms of the diameter for Markov chains with non-negative Ollivier Ricci curvature. We now give an intuition about the key arguments: Assume $\Delta f \leq -C$ on a subset $W\subset V$. Then, - $\lambda_W \geq \frac{C} {2\|f\|_\infty}$, - $|\partial W| \geq C m(W)$. In order to construct suitable functions $f$, we use the Laplacian separation principle introduced below to find a solution to the eikonal equation $|\nabla g| = 1$ and $$\Delta g|_W \leq C \leq \Delta g|_{W^c}$$ for some unknown constant $C$. We then set $f = \phi \circ g$ for a suitable concave increasing function $\phi:\mathbb{R}\to \mathbb{R}$. By a version of a discrete chain rule (see Subsection [5.2](#sec:ChainRule){reference-type="ref" reference="sec:ChainRule"}), the bound on $\Delta g$ can be improved exploiting concavity of $\phi$ and $|\nabla g|=1$ , i.e., $\Delta f|_W \leq C' < C$. We now give the details. ## Eikonal equation and Laplacian separation principle {#sec:LaplaceSeparation} Here, we present a mean curvature inspired Laplacian separation principle based on [@hua2021every]. The key motivation comes from the fact that isoperimetric subsets of a Riemannian manifold, i.e., sets minimizing the surface area given the volume, have a boundary with constant mean curvature, up to singularities, see [@milman2009role Appendix A.1] and references therein. 
In order to understand mean curvature in a discrete setting, it seems hopeless to just consider subsets of the graph, as there are only finitely many such subsets. Therefore, one would not expect to find any constant mean curvature subsets in a weighted graph. Instead, the intuition about discrete mean curvature is based on the level set approach pioneered in [@evans1991motion] for Euclidean spaces. Indeed, in the smooth case, the level set mean curvature is related to the eikonal equation $|\nabla f|=1$. That is, if $|\nabla f|=1$, then the mean curvature $H(x)$ at point $x$ of the level set $\{y:f(y)=f(x)\}$ satisfies $$H(x)= \nabla \cdot \left(\frac{\nabla f}{|\nabla f|}\right)(x)= \Delta f(x).$$ Hence, in order to mimic a constant mean curvature hypersurface in a discrete space, we aim to find a solution to the discrete eikonal equation $|\nabla f|=1$ such that $\Delta f = const.$ on a given level set. While this seems not possible due to compatibility problems with the non-reversible case, we can still treat a relaxed version of this problem. That is that $\Delta f=const.$ on a given vertex cut set (without the restriction that the vertex cut set is a level set), and that $|\nabla f|=1$ is only required outside of the vertex cut set. More precisely, we assume, we have a partition of the vertex set $V=X \dot \cup K \dot \cup Y$ such that there are no edges between $X$ and $Y$. Following [@hua2021every], we want to find a function with constant gradient on $X\cup Y$, minimal on $X$ and maximal on $Y$, such that moreover, the Laplacian of $f$ is constant on $K$. By non-negative Ollivier curvature, it will follow that the cut set $K$ separates the Laplacian $\Delta f$, i.e., $\Delta f|_X \geq const. \geq \Delta f|_Y$, meaning that $f$ sovles the Laplacian separation problem discussed above. While in [@hua2021every Lemma 3.3], it was assumed that $K$ is connected, we will drop this condition by refining the proof idea. Moreover, we drop the condition from [@hua2021every] of having at least two ends. We now give the Laplacian separation principle. **Theorem 10**. *Let $G=(V,P,d)$ be a reversible metric Markov chain with non-negative Ollivier curvature. Assume $V=X \dot \cup K \dot \cup Y$ for a finite set $K$ with $E(X,Y) = \emptyset$. Then, there exists a function $f \in \operatorname{Lip}(1)$ satisfying* (i) *$\Delta f = C= const.$ on $K$,* (ii) *$f= \min\{g \in \operatorname{Lip}(1): g|_K = f_K\}$ on $X$, and* (iii) *$f= \max\{g \in \operatorname{Lip}(1): g|_K = f_K\}$ on $Y$.* *Moreover, $\Delta f \geq C$ on $X$ and $\Delta f \leq C$ on $Y$.* *Proof.* We consider three nested optimization problems for $f \in \operatorname{Lip}(1)$ under the constraint of $(ii)$ and $(iii)$: (a) Maximize $\min_K \Delta f$ (b) Minimize $m\left( \operatorname{argmin}\left((\Delta f)|_K\right) \right)$ (c) Maximize $$\max \{f(y)-f(x): x,y \in K\mbox{ and } \Delta f(x) = \min_K \Delta f \mbox{ and } \Delta f(y)> \min_K \Delta f\}.$$ Let $F_a \subseteq Lip(1)$ be the set of optimizers of $(a)$. Let $F_b \subseteq F_a$ be the set of optimizers of $(b)$. Let $F:=F_c \subseteq F_b$ be the set of optimizers of $(c)$. A compactness argument shows $F \neq \emptyset$. Let $f \in F$. Let $$Sf(x) := \begin{cases} f(x)&: x \in K, \\ \min\{g(x) : g \in \operatorname{Lip}(1), g|_K=f|_K \} &: x \in X, \\ \max\{g(x) : g \in \operatorname{Lip}(1), g|_K=f|_K \} &: x \in Y. \end{cases}$$ Let $\varepsilon>0$ and $g := S(f + \varepsilon\Delta f)$. 
Indeed, the nested optimization problems are motivated by the question which properties are improved when replacing $f$ by $g$. By non-negative Ollivier curvature, we have $f + \varepsilon\Delta f \in \operatorname{Lip}(1)$ for small $\varepsilon$ and thus, $g \in Lip(1)$. As $g$ is in the image of $S$, we see that $g$ satisfies $(ii)$ and $(iii)$. Let $C=\min_K \Delta f$. As $f$ satisfies $(ii)$ and $(iii)$, we have $f=Sf$, and thus, $g \geq f+\varepsilon C$ with equality on $\operatorname{argmin}\left((\Delta f)|_K\right)$. Hence, $\Delta g \geq C$ on $\operatorname{argmin}\left((\Delta f)|_K\right)$. For $\varepsilon$ small enough, we have $\Delta g(x) > C$ for all $x \in K \setminus \operatorname{argmin}\left((\Delta f)|_K\right)$. As $f$ is an optimizer of $(a)$ and $(b)$, we infer $\Delta g= C$ on $\operatorname{argmin}\left((\Delta f)|_K\right)$. This implies $\operatorname{argmin}\left((\Delta f)|_K\right) = \operatorname{argmin}\left((\Delta g)|_K\right)$, and the maximization in $(iii)$ runs over the same vertex set for $f$ and $g$. Suppose $\max \{f(y)-f(x): x,y \in K, \Delta f(x) = \min_K \Delta f, \Delta f(y)> \min_K \Delta f\}$ is attained at $x$ and $y$. Then, $$g(y) - g(x) = f(y)-f(x) + \varepsilon\Delta f(y) - \varepsilon\Delta f(x) > f(y)-f(x)$$ contradicting that $f$ maximizes $(c)$. Hence, the maximization in $(c)$ is ill posed, meaning that the maximum is taken over the empty vertex set. This shows that $\Delta f = \min_K \Delta f$ on $K$ as desired. We finally prove the 'Moreover' statement, i.e. $\Delta f \geq C$ on $X$ and $\Delta f \leq C$ on $Y$. Let $x \in X$. As $f$ is the minimal Lipschitz extension on $X$, there exists $y \in K$ such that $f(y)-f(x)=d(x,y)$. As $f \in \operatorname{Lip}(1)$, we get $\Delta f(x) \geq \Delta f(y)=C$. The corresponding estimate on $Y$ can be proven similarly. This finishes the proof of the theorem. ◻ We remark that the theorem can be generalized to infinite, locally finite Markov chains with a finite subset $K$. An interesting question is if there is a natural parabolic flow converging to the solution $f$ from the above theorem. ## Exploiting the gradient via chain rule {#sec:ChainRule} As discussed in the introduction, no exact chain rule is available for the graph Laplacian [@bauer2015li; @erbar2012ricci; @munch2014li]. An approximate chain rule via intermediate values for the graph Laplacian is provided in [@hua2022extremal]. Here, we provide an estimate for $\Delta \phi\circ f$ for suitable functions $\phi:\mathbb{R}\to \mathbb{R}$. We recall $$P_0 = \inf_{x,y} {P(x,y)}\cdot{d(x,y)^2}$$ and $$\nabla_- f (x) = \max_{y\sim x} \frac{(f(y)-f(x))_-}{d(x,y)}.$$ **Lemma 11**. *Let $\phi:I \to \mathbb{R}$ be concave. Let $f \in \mathbb{R}^V$. Let $x \in V$. Assume $\phi'''(s) \geq 0$ whenever $$\min_{y\sim x} f(y) < s < f(x).$$ Then at $x$, $$\Delta\phi\circ f \leq \phi'(f)\cdot \Delta f + \frac{P_0}2 \phi''(f) \cdot(\nabla_- f)^2.$$* *Proof.* Let $x \in V$. We calculate $$\Delta \phi \circ f(x) = \sum_y P(x,y)(\phi(f(y)) - \phi(f(x))).$$ By Taylor expansion of $\phi$ around $f(x)$, we have $$\phi(f(y)) - \phi(f(x)) = \phi'(f(x))(f(y)-f(x)) + \frac 1 2 \phi''(\xi_y) (f(y)-f(x))^2$$ where $\xi_y$ is between $f(x)$ and $f(y)$. First assume $\nabla_-f(x)>0$. 
Then, there is $z\sim x$ such that $$f(z)-f(x) = - d(x,z)\nabla_-f(x).$$ By concavity of $\phi$, $$\begin{aligned} \Delta \phi \circ f(x) &\leq \frac 1 2 \phi''(\xi_z)P(x,z)(f(z)-f(x))^2 +\sum_y P(x,y) \phi'(f(x))(f(y)-f(x)) \\ &\leq \frac 1 2 \phi''(f(x))P(x,z)(f(z)-f(x))^2 +\phi'(f(x))\Delta f(x) \\ &\leq \frac {P_0} 2 \phi''(f)\cdot (\nabla_- f(x))^2 + \phi'(f(x))\Delta f(x) \end{aligned}$$ where the second inequality follows as $f(z)<f(x)$ and $\phi'''\geq 0$, and the last inequality follows as $\phi'' \leq 0$. In the case $\nabla_-f(x)=0$, we ignore the $\phi''(\xi_y)$ terms and get the desired estimate easily. This finishes the proof. ◻ We now show how to improve a Laplacian estimate via the chain rule. **Lemma 12**. *Let $W \subset V$. Let $u \in \mathbb{R}^V$ with $\Delta u \leq C$ on $W$ and $\nabla_-u \geq 1$ on $W$. We write $\beta = 2C/P_0$. Let $$U_0 := \{x:u(y) \geq 0 \mbox{ for all } y \sim x\}.$$ Let $R>0$. Then, there exists an increasing, concave $\phi:\mathbb{R}\to \mathbb{R}$ with $\phi'\leq 1$ such that for all $x \in W$, $$\Delta \phi \circ u(x) \leq \begin{cases} 0 &: u(x)>R\\ C&:x \in W \setminus U_0\\ C \wedge \frac{-P_0}{2R}&: x \in U_0 \mbox{ and } u(x)\leq R \mbox{ and } C\leq 0 \\ -\frac{C}{\exp({\beta R})-1}&: x \in U_0 \mbox{ and } u(x)\leq R \mbox{ and } C> 0. \end{cases}$$ Moreover, the last two cases can be unified as $$\Delta \phi \circ u \leq - \frac{C/2}{\exp({\beta R})-1}.$$* *Proof.* We first assume $C> 0$. Then for concave, increasing $\phi$ with $\phi''' \geq 0$, we have on $W$ $$\Delta \phi \circ u \leq C \phi'(u) + \frac {P_0}2 \phi''(u).$$ Let $\beta = 2C/P_0$. On $[0,R]$, we define $$\phi(s) = \frac{1- \exp(-\beta s) - \beta s \exp(-\beta R)}{\beta(1-\exp(-\beta R))}$$ giving $$C\phi' + \frac {P_0}2 \phi'' = \frac{-C}{ \exp(\beta R)-1}.$$ We extend $\phi$ linearly outside the interval $[0,R]$, i.e., $\phi(s)=s$ for $s<0$ as $\phi(0)=0$ and $\phi'(0)=1$. Moreover, we have $\phi'(R)=0$, and thus, we set $\phi(s)=\phi(R)$ for $s>R$. Applying Lemma [Lemma 11](#lem:LocalChainRule){reference-type="ref" reference="lem:LocalChainRule"}, the claim follows easily in the case $C>0$. We now consider the case $\frac {- P_0}{2R} \leq C \leq 0$. For $s \in [0,R]$, we define $$\phi(s) = \frac{2Rs - s^2}{2R}$$ giving $$C\phi' + \frac {P_0}2 \phi'' = \frac{C(2R - 2s) -P_0}{2R} \leq \frac {-P_0}{2R}.$$ As above, we have $\phi'(0)=1$ and $\phi'(R)=0$, and the conclusion follows similarly. Finally, in the case $C<\frac {- P_0}{2R}$, we use $\phi(s)=s$ and obtain on $W$ $$\Delta \phi \circ u = \Delta u \leq C \leq \frac{-C/2}{\exp({\beta R})-1},$$ where we used $\beta R<-1$. This finishes the case distinction. The 'Moreover' statement is a straightforward calculation and thus, the proof is finished. ◻ ## Log Sobolev and Dirichlet eigenvalue estimate We now use the Laplacian separation principle and the chain rule provided above to prove a lower bound for $\alpha_{\operatorname{spectral}}$. We recall that $c\alpha \leq \alpha_{\operatorname{spectral}}\leq C\alpha$ for universal constants $c,C>0$, see [Corollary 4](#cor:asalpha){reference-type="ref" reference="cor:asalpha"}. Also recall that $$\alpha_{\operatorname{spectral}}= \inf_{W\subset V} \theta(\lambda_W,\lambda_{W^c})$$ where $\theta$ is the logarithmic mean. **Theorem 13**. *Let $G=(V,P,d)$ be a reversible metric Markov chain with non-negative Ollivier Ricci curvature.
Then, $$\alpha_{\operatorname{spectral}}\geq \frac {P_0} {16 \operatorname{diam}(G)^2}.$$* *Proof.* Let $V=A \dot\cup B$. We aim to give a lower bound to the logarithmic mean $\theta(\lambda_A,\lambda_B)$ where $\lambda_A$ and $\lambda_B$ are the corresponding Dirichlet eigenvalues. We apply Theorem [Theorem 10](#thm:ConstLaplacian){reference-type="ref" reference="thm:ConstLaplacian"} to $K=\operatorname{supp}(\Delta 1_A)$ and $X=A\setminus K$ and $Y=B\setminus K$ to obtain a function $f$ with $\Delta f = C$ on $K$. As $f \in \operatorname{Lip}(1)$, we can assume $\operatorname{diam}(G) \leq f \leq 2 \operatorname{diam}(G)$. Let $g=f \cdot 1_A$. Then, $\nabla_- g \geq 1$ on $B$ and $\Delta g \leq C$ on $A$. By Lemma [Lemma 12](#lem:GlobalChainRule){reference-type="ref" reference="lem:GlobalChainRule"} applied to $W=B$, $u=g$ and $R=2\operatorname{diam}(G)$, we get for all $y \in B$ $$\Delta \phi \circ u (y) \leq -\frac{C/2}{\exp(\beta R)-1}$$ where $\beta = 2C/P_0$. Moreover, $0 \leq \phi \circ u \leq R$ as $\phi' \leq 1$. Hence by Lemma [Lemma 5](#lem:DirichletSupersolution){reference-type="ref" reference="lem:DirichletSupersolution"}, $$\lambda_B \geq \frac 1 R \cdot \frac{C/2}{\exp(\beta R)-1} = \frac {P_0} {4R^2} \cdot \frac{\beta R}{\exp(\beta R)-1}.$$ By a similar argument, $$\lambda_A \geq \frac {P_0} {4R^2} \cdot \frac{-\beta R}{\exp(-\beta R)-1} = \frac {P_0} {4R^2} \cdot e^{\beta R}\cdot\frac{\beta R}{\exp(\beta R)-1}.$$ We recall $\theta(s,t)=(s-t)/(\log(s)-\log(t))$ and thus, $$\theta(\lambda_A,\lambda_B) \geq \frac {P_0} {4R^2} = \frac {P_0} {16\operatorname{diam}(G)^2}.$$ As $A,B$ were chosen as an arbitrary partition of $V$, the claim of the theorem follows immediately. ◻ As $\alpha_{\operatorname{spectral}}$ and $\alpha$ coincide up to a bounded factor (see Corollary [Corollary 4](#cor:asalpha){reference-type="ref" reference="cor:asalpha"}), we obtain the following corollary. **Corollary 14**. *Let $(V,P,d)$ be a reversible metric Markov chain with non-negative Ollivier Ricci curvature. Then, $$\alpha \geq \frac{CP_0}{\operatorname{diam}^2}$$ for a universal constant $C$.* # Non-negative curvature and isoperimetry {#sec:RelatedDiameter} By the pioneering work of Ledoux [@ledoux2011concentration] and Milman [@milman2009role; @milman2010Isoperimetric] on weighted manifolds with non-negative Ricci curvature, various concentration of measure inequalities imply various isoperimetric inequalities. This goes under the name 'Reversing the hierarchy' as in a general smooth setting one schematically has the following implications: $$\begin{aligned} \mbox{Isoperimetric inequalities} &\Rightarrow \mbox{Functional inequalities} \\&\Rightarrow \mbox{Transport entropy inequalities}\\& \Rightarrow \mbox{Concentration inequalities},\end{aligned}$$ see [@milman2012properties]. While in general the reverse directions are wrong, they still hold true in case of non-negative Ricci curvature. Here, we give discrete versions of the reversal of hierarchy. By doing so, we give an affirmative answer to [@erbar2018poincare Conjecture 6.9] and [@erbar2018poincare Conjecture 6.10] in case of non-negative Ollivier curvature. Moreover, our isoperimetric inequalities seem to be the first ones which are explicit and only require an arbitrary single point of the concentration profile. ## A conjecture by Erbar and Fathi {#sec:ErbarFathiConjecture} We recall [@erbar2018poincare Conjecture 6.9] which we answer affirmatively for graphs with non-negative Ollivier Ricci curvature in this section.
Assume we have concentration profile $Me^{-\beta r}$, i.e., $$m(A_r^c) \leq Me^{-\beta r}$$ whenever $m(A)\geq \frac 1 2$, where $A_r = \{x: d(x,A)\leq r\}$. It was conjectured in [@erbar2018poincare Conjecture 6.9] that this implies $\lambda \geq C(M) \beta^2$. To prove this conjecture, we use a notion of observable diameter $\operatorname{diam}_{\operatorname{obs}}$ capturing the concentration of measure. The observable diameter is defined as $$\begin{aligned} \operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)} := \sup_{\substack{m(A)\geq \varepsilon\\m(\operatorname{cl}(B)) \geq \varepsilon}} d(A,B)\label{eq:ObsDiam}\end{aligned}$$ where $\operatorname{cl}$ denotes the closure, i.e. it adds the outer vertex boundary. The reason behind taking the closure is that if a large fraction of the mass is located at a single vertex, then $A$ and $B$ would have to overlap so that their distance would be zero. The key step to prove the conjecture is to show for non-negatively curved graphs, $$h \simeq \frac{1}{\operatorname{diam}_{\operatorname{obs}}^{(1/8)}}$$ where $h$ is the Cheeger constant. The precise statement is given in Corollary [Corollary 17](#cor:hlargerdiamess){reference-type="ref" reference="cor:hlargerdiamess"} and Theorem [Theorem 18](#thm:hsmallerdiamess){reference-type="ref" reference="thm:hsmallerdiamess"}. We then show in Lemma [Lemma 19](#lem:concentrationDiamess){reference-type="ref" reference="lem:concentrationDiamess"} that $$\operatorname{diam}_{\operatorname{obs}}\lesssim \frac 1 \beta$$ and the conjecture follows easily by the Cheeger-Buser inequality $$\lambda \simeq h^2,$$ for graphs with non-negative Ollivier Ricci curvature, see [@munch2019non Theorem 3.2.2] for the Buser inequality $\lambda \gtrsim h^2$. ## Isoperimetry and observable diameter To motivate our new isoperimetric inequality, we recall a result from [@munch2019non] stating that for non-negatively curved graphs, $$h \geq \frac{P_0}{4 \operatorname{diam}(G)},$$ see [@munch2019non Theorem 3.5.1 and 3.2.2]. We notice that $h$ is not sensitive to narrow tails, but the diameter is. This discrepancy can be seen as a reason why the above estimate does not give precise bounds in case of high dimension, as high dimension generally comes with narrow tails. This gives motivation to use a notion of observable diameter (see [\[eq:ObsDiam\]](#eq:ObsDiam){reference-type="eqref" reference="eq:ObsDiam"}) which can be seen as a diameter-like quantity enforced to be insensitive to narrow tails. In the smooth setting it is well known that the observable diameter as a function of $\varepsilon$ is, up to constant factors, precisely the inverse function of the concentration profile, see e.g. [@ozawa2015estimate]. We give a discrete version in Section [6.3](#sec:ObsDiam){reference-type="ref" reference="sec:ObsDiam"}. For our isoperimetric inequality, we recall the boundary measure $$|\partial W| = -\mathcal E(1_W,d(W,\cdot)) = \sum_{\substack{x \in W\\y\notin W}} P(x,y)m(x)d(x,y),$$ the minimum transition rate $$P_0 = \inf_{x\sim y}P(x,y)d(x,y)^2$$ and the observable diameter $$\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)} = \max_{\substack{m(A)\geq \varepsilon\\ m(\operatorname{cl}(B)) \geq \varepsilon}} d(A,B).$$ We now give our main isoperimetric inequality in terms of the observable diameter. We remark that the well-known method of using gradient estimates for the heat semigroup seems not to be powerful enough to prove this estimate in a discrete setup due to the lack of a discrete chain rule.
Instead, we will use the Laplacian separation principle with which we already proved the log-Sobolev inequality. Another feature distinguishing our result from the smooth setting is that we allow to choose the Markov operator $P$ and the distance function independently, while in the smooth setting, the underlying Laplace operator uniquely determines the distance function. **Theorem 15**. *Let $G=(V,P,d)$ be a reversible metric Markov chain with non-negative Ollivier curvature. Let $W \subset V$ with $m(W)\leq \frac 1 2$. Let $\varepsilon\leq \frac 1 8$. Then, $$|\partial W| \geq P_0\cdot \frac{(m(W) \wedge \varepsilon) \log (1/\varepsilon) }{6\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}}.$$* *Proof.* Let $K$ be the inner vertex boundary of $W$. Let $X=W\setminus K$ and $Y=V\setminus W$. By the Laplacian separation principle (Theorem [Theorem 10](#thm:ConstLaplacian){reference-type="ref" reference="thm:ConstLaplacian"}), there exists a function $f \in \operatorname{Lip}(1)$ with $\Delta f = C$ on $K$. Let $R:=\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}$. We proceed with case distinction with respect to $C$. We first assume $$C \geq \frac{P_0 \log(1/\varepsilon)}{6R}.$$ Then, $$|\partial W| \geq \mathcal E(1_W,-f) \geq Cm(W),$$ and the claim follows easily in this case. Now assume $$C < C_0:= \frac{P_0 \log(1/\varepsilon)}{6R}.$$ We choose $r \in \mathbb{R}$ maximal so that $m(\{f \geq R+r\}) \geq \varepsilon$. As $f \in \operatorname{Lip}(1)$ and by maximality of $r$, $$m(\{f>R+r\})\leq \varepsilon\quad\mbox{ and } \quad m(\operatorname{cl}(\{f<r\})) \leq \varepsilon.$$ Without loss of generality, we assume $r=0$. By Lemma [Lemma 12](#lem:GlobalChainRule){reference-type="ref" reference="lem:GlobalChainRule"} applied to $V\setminus W$, there exists $g = \phi \circ f \in \operatorname{Lip}(1)$ such that on $V \setminus W$, $$\Delta g(x) \leq \begin{cases} 0 &: f(x)>R, \\ C_0 &: x \in \operatorname{cl}(\{f<0\}), \\ -\frac{C_0}{\exp(\beta R)-1}&: \mbox{ else}, \end{cases}$$ where $\beta =2C_0/P_0$. We write $U=V\setminus W$ and $U_R = U \cap \{f>R\}$ and $U_0 = U \cap \operatorname{cl}(\{f<0\})$ and $U_{\operatorname{int}}= U \setminus U_R \setminus U_0$. Then, $$\begin{aligned} |\partial W| \geq \mathcal E(1_{U},g) &\geq \frac{C_0}{\exp(\beta R) - 1} m(U_{\operatorname{int}}) - C_0m(U_0) \\ &\geq \frac{C_0}{\exp(\beta R) - 1} (m(U)-2\varepsilon) - C_0\varepsilon\\ &\geq \frac{C_0/4}{\exp(\beta R) - 1} - C_0\varepsilon\end{aligned}$$ where we used $\varepsilon\leq 1/8$ and $m(U) \geq \frac 1 2$ in the last estimate. As $C_0 = \frac{P_0 \log(1/\varepsilon)}{6R}$, we get $\exp(\beta R) \leq \varepsilon^{-1/6}$ and thus, $$\begin{aligned} |\partial W| \geq C_0 \left( \frac{1/4}{\varepsilon^{-1/3} - 1} - \varepsilon\right) \geq C_0 \varepsilon\end{aligned}$$ where we used $\varepsilon\leq 1/8$. Now the claim follows easily which finishes the case distinction and thus the proof. ◻ It turns out that the case $\varepsilon<m(W)$ does not give any improvement over the choice $\varepsilon=m(W)$, so that we easily obtain the following corollary. **Corollary 16**. *Let $G=(V,P,d)$ be a reversible metric Markov chain with non-negative Ollivier curvature. Let $W \subset V$ with $m(W)\leq \frac 1 2$. Let $\varepsilon\in \left[\frac{m(W)}{4}, \frac 1 8 \right]$. Then, $$\frac{|\partial W|}{m(W)} \geq P_0\cdot \frac{\log (1/\varepsilon) }{24\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}}.$$* Plugging in $\varepsilon= 1/8$ gives the following estimate for the Cheeger constant. 
**Corollary 17**. *For graphs with non-negative Ollivier curvature, $$h \geq \frac {P_0}{12 \operatorname{diam}_{\operatorname{obs}}^{(1/8)} }.$$* A key advantage of using the observable diameter is that we also get the reverse inequality for the Cheeger constant. For convenience, we restrict ourselves here to the case of the combinatorial distance. We recall $$\operatorname{Deg}_{\max}= \max_{x \in V} \Delta d(x,\cdot)(x) = \max_{x\in V} \sum_{y\neq x} P(x,y).$$ **Theorem 18**. *Let $G=(V,P,d)$ be a reversible metric Markov chain with combinatorial distance and $m(V)=1$. Then, $$h \leq \frac{57\operatorname{Deg}_{\max}}{\operatorname{diam}_{\operatorname{obs}}^{(1/8)}}.$$* *Proof.* We first assume $\operatorname{diam}_{\operatorname{obs}}^{(1/8)} >1$. Let $A,B$ maximize the expression for $\operatorname{diam}_{\operatorname{obs}}^{(1/8)}$. We consider $f:=d(A,\cdot)$. Then, $$\frac 1 8 (\operatorname{diam}_{\operatorname{obs}}^{(1/8)} -1) \leq (\operatorname{diam}_{\operatorname{obs}}^{(1/8)} -1)m(\operatorname{cl}(B)) \leq \|f\|_1 = \int_0^\infty m(f>r)dr.$$ Moreover, $$\begin{aligned} \operatorname{Deg}_{\max}\geq \frac 1 2\sum_{x,y}P(x,y)m(x)|f(y)-f(x)| =\|\nabla f\|_1 =\int_0^\infty |\partial (\{f>r\})|dr.\end{aligned}$$ Hence, there exists $r_0>0$ such that $$\frac{|\partial (\{f>r_0\})|}{m(f>r_0)} \leq \frac{8\operatorname{Deg}_{\max}}{\operatorname{diam}_{\operatorname{obs}}^{(1/8)}-1}.$$ As $m(f>r) \leq \frac 7 8$ for all $r>0$, we get $$m(f \leq r) \geq \frac 1 7 m(f > r)$$ giving $$h \leq \frac{|\partial (\{f>r_0\})|}{m(f>r_0) \wedge m(f\leq r_0)} \leq \frac{56\operatorname{Deg}_{\max}}{(\operatorname{diam}_{\operatorname{obs}}^{(1/8)} -1)}.$$ On the other hand, we generally have $h \leq \operatorname{Deg}_{\max}$ by choosing a single vertex set. Combining with the estimate above, the claim of the theorem follows easily. ◻ We remark that the upper and lower bound for $h$ from Corollary [Corollary 17](#cor:hlargerdiamess){reference-type="ref" reference="cor:hlargerdiamess"} and Theorem [Theorem 18](#thm:hsmallerdiamess){reference-type="ref" reference="thm:hsmallerdiamess"} differ by a factor of order $\operatorname{Deg}_{\max}/P_0$. The same phenomenon can be seen for the discrete Cheeger-Buser inequality and can be explained by the use of qualitatively different gradient notions for both sides of the estimate. ## Concentration of measure versus observable diameter {#sec:ObsDiam} The observable diameter is closely related to the concentration of measure phenomenon. For a set $A \subset V$, we define $$A_r = \{x: d(x,A)\leq r\}.$$ In a slightly different setup, the following lemma was shown in [@ozawa2015estimate Proposition 2.6] and references therein. **Lemma 19**. *Let $G=(V,P,d_{\operatorname{comb}})$ be a reversible metric Markov chain with combinatorial distance $d_{\operatorname{comb}}$. Let $r \in \mathbb{N}$. Assume that at distance $r$ we have concentration $\varepsilon$, i.e., $$m(A_r^c) < \varepsilon$$ whenever $m(A)\geq \frac 1 2$. Then, $$\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)} \leq 2r.$$* *Proof.* For simplicity, we write $d=d_{\operatorname{comb}}$. We will throughout use that $$A \subseteq (A_R^c)_R^c$$ and $$R+d(A_R,B) \geq d(A,B)$$ for all $R>0$ and $A,B \subseteq V$. Let $A,B$ attain the observable diameter. We notice $m(A_r)> \frac 1 2$ by the concentration assumption and as $m(A)\geq \varepsilon$. Let $R \in \mathbb{N}$ be such that $m(A_R)\leq \frac 1 2$ and $m(A_{R+1}) \geq \frac 1 2$. Then, $R < r$.
As $m(\operatorname{cl}(B)) \geq \varepsilon$, we similarly get $$d(A_{R+1},\operatorname{cl}(B))<r.$$ Thus, $$d(A,B) \leq 1 + d(A,\operatorname{cl}(B)) \leq 1+ R+ 1 + d(A_{R+1},\operatorname{cl}(B)) \leq R+r+1 \leq 2r,$$ finishing the proof. ◻ ## Gaussian concentration implies Gaussian isoperimetry A specific application of 'Reversing the hierarchy' is that on weighted manifolds with non-negative Ricci curvature, Gaussian concentration of measure implies Gaussian isoperimetry [@ledoux2011concentration; @milman2009role; @milman2010Isoperimetric] which is equivalent to a log-Sobolev inequality, see e.g. [@ledoux2006isoperimetry]. Here, we give a discrete version of this result using our general isoperimetric inequality from Corollary [Corollary 16](#cor:IsoperimetryConcentrationLargeEps){reference-type="ref" reference="cor:IsoperimetryConcentrationLargeEps"}. By doing so, we give an affirmative answer to [@erbar2018poincare Conjecture 6.10] in case of non-negative Ollivier curvature. We now give the details. The Gaussian isoperimetric constant is defined as $$h_{\sqrt {\log}} = \inf_{m(W)\leq \frac 1 2} \frac{|\partial W|}{m(W)\sqrt{\log(1/m(W))}}.$$ It is shown in [@houdr2001mixed Remark 5] that $$\alpha \gtrsim \frac{h_{\sqrt {\log}}^2}{\operatorname{Deg}_{\max}}$$ and in [@klartag2015discrete] that in case of non-negative Bakry-Emery curvature, $$\alpha \lesssim \frac{ h_{\sqrt {\log}}^2}{P_0}.$$ As the gradient estimate $$\|\nabla P_t f\|_\infty \lesssim \frac{\|f\|_\infty}{\sqrt{P_0 t}}$$ is the only consequence of non-negative Bakry-Emery curvature needed in the proof, and the same gradient estimate also holds true in case of non-negative Ollivier Ricci curvature, it is expected that the upper bound for $\alpha$ is also valid in case of non-negative Ollivier curvature, so that, up to a factor $P_0$, we have $$\alpha \simeq h_{\sqrt {\log}}^2.$$ **Theorem 20**. *Let $G=(V,P,d_{\operatorname{comb}})$ be a reversible metric Markov chain with non-negative Ollivier curvature and combinatorial distance $d_{\operatorname{comb}}$. Assume that for all $A$ with $m(A) \geq \frac 1 2$ and all $r \in \mathbb{N}$, $$m(A_r^c) \leq \exp(-\rho r^2).$$ Then, $$h_{\sqrt {\log}} \geq \frac{P_0}{48}\sqrt{\rho}.$$* *Proof.* Without loss of generality, we can assume that the concentration inequality is strict; otherwise, we could replace $\rho$ by $\rho - \varepsilon$, and at the end take the limit $\varepsilon\to 0$. By Lemma [Lemma 19](#lem:concentrationDiamess){reference-type="ref" reference="lem:concentrationDiamess"}, we have $$\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)} \leq 2 \sqrt{\frac{\log(1/\varepsilon)}{\rho}}.$$ Let $W \subset V$ with $m(W) \leq \frac 1 2$. By Corollary [Corollary 16](#cor:IsoperimetryConcentrationLargeEps){reference-type="ref" reference="cor:IsoperimetryConcentrationLargeEps"} with $\varepsilon= m(W)/4$, $$\frac{|\partial W|}{P_0 m(W)} \geq \frac{\log (1/\varepsilon)}{24\operatorname{diam}_{\operatorname{obs}}^{(\varepsilon)}} \geq \frac{\sqrt{\rho}}{48} \sqrt{\log(1/\varepsilon)} \geq \frac{\sqrt{\rho}}{48} \sqrt{\log(1/m(W))}$$ and the claim follows by rearranging. ◻ As a corollary, we obtain that Gaussian concentration implies a log-Sobolev inequality for non-negatively curved Markov chains. **Corollary 21**. *Let $G=(V,P,d_{\operatorname{comb}})$ be a reversible metric Markov chain with non-negative Ollivier curvature and combinatorial distance $d_{\operatorname{comb}}$.
Assume that for all $A$ with $m(A) \geq \frac 1 2$ and all $r \in \mathbb{N}$, $$m(A_r^c) \leq \exp(-\rho r^2).$$ Then, $$\alpha \geq CP_0^2 \rho$$ for some universal constant $C$.* We remark that the dependence on the minimal transition rate $P_0$ is not avoidable, as Markov chains with curvature at least $K>0$ admit Gaussian concentration with $\rho = K$, see [@jost2019Liouville Theorem 3.1]. However, the log-Sobolev constant cannot be lower bounded purely in terms of the curvature, as we showed in the example in Section [4.3](#sec:CounterExample){reference-type="ref" reference="sec:CounterExample"}. ## Isoperimetry and diameter of subsets So far, we gave isoperimetric estimates in terms of the observable diameter of the whole space. In this subsection, we give isoperimetric estimates in terms of the diameter of the subset. For a subset $W \subset V$, we write $$\operatorname{diam}(W) = \max_{x,y \in W} d(x,y)$$ where $d$ is the original graph distance (and not the distance within the induced subgraph). **Theorem 22**. *Let $G=(V,P,d)$ be a metric Markov chain with non-negative Ollivier curvature. Let $W \subset V$. Then, $$|\partial W| \geq \frac{P_0}{\operatorname{diam}(\operatorname{cl}(W))} m(W)(1-m(W)).$$* *Proof.* We apply the Laplacian separation principle (Theorem [Theorem 10](#thm:ConstLaplacian){reference-type="ref" reference="thm:ConstLaplacian"}) to $Y=W$ and $K=\operatorname{supp}(\Delta 1_W) \setminus W$ and $X=V\setminus W \setminus K$. We obtain a function $f$ with $\Delta f = C$ on $K$. Let $R=\operatorname{diam}(\operatorname{cl}(W))$ and note that $\operatorname{cl}(W) = Y \cup K$. We can assume that $0 \leq f \leq R$ on $Y \cup K$. We first assume $C>0$. By Lemma [Lemma 12](#lem:GlobalChainRule){reference-type="ref" reference="lem:GlobalChainRule"}, we get $$\Delta g \leq -\frac{C}{\exp(\beta R) - 1} \quad \mbox{ on } Y$$ with $\beta = 2C/P_0$ and some $g \in \operatorname{Lip}(1)$. Hence, $$|\partial W| \geq \mathcal E(1_W,g) \geq m(W)\frac{C}{\exp(\beta R) - 1}.$$ On the other hand, $$|\partial W| \geq \mathcal E(1_{V\setminus W},-f) \geq C(1-m(W)).$$ Thus, $$\begin{aligned} \frac 1 {|\partial W|} \cdot \frac{m(W)(1-m(W))}{m(W)+(1-m(W))} \leq \frac{\frac{\exp(\beta R) - 1}{C}\cdot \frac 1 C}{\frac{\exp(\beta R) - 1}{C}+ \frac 1 C} = \frac{1-\exp(-\beta R)}{C} \leq \frac {2R}{P_0}.\end{aligned}$$ Rearranging gives $$|\partial W| \geq \frac{P_0}{2R}m(W)(1-m(W)).$$ In the case $C\leq 0$, we again use Lemma [Lemma 12](#lem:GlobalChainRule){reference-type="ref" reference="lem:GlobalChainRule"} to get $g \in \operatorname{Lip}(1)$ with $$\Delta g \leq - \frac{P_0}{2R} \mbox{ on } W$$ and hence, $$|\partial W| \geq \frac{P_0}{2R} m(W),$$ and the desired estimate follows easily. This finishes the case distinction and thus the proof. ◻ # Acknowledgments {#acknowledgments .unnumbered} The author wants to thank Yong Lin for pointing out the question whether the Dirichlet eigenvalue on balls of radius $R$ of infinite graphs is at least $1/R^2$. This question turned out to be the missing puzzle piece for finding the proof of the isoperimetric inequalities. The author also wants to thank Emanuel Milman for fruitful discussions. Florentin Münch,\ MPI MiS Leipzig, 04103 Leipzig, Germany\ `florentin.muench@mis.mpg.de`\
arxiv_math
{ "id": "2309.06493", "title": "Ollivier curvature, Isoperimetry, concentration, and Log-Sobolev\n inequalitiy", "authors": "Florentin M\\\"unch", "categories": "math.DG math.AP math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we deal with a classical object, namely, a nonhyperbolic limit cycle in a system of smooth autonomous ordinary differential equations. While the existence of a center manifold near such a cycle was assumed in several studies on cycle bifurcations based on periodic normal forms, no proofs were available in the literature until recently. The main goal of this paper is to give an elementary proof of the existence of a periodic smooth locally invariant center manifold near a nonhyperbolic cycle in finite-dimensional ordinary differential equations by using the Lyapunov-Perron method. In addition, we provide several explicit examples of analytic vector fields admitting (non)-unique, (non)-$C^{\infty}$-smooth and (non)-analytic periodic center manifolds. author: - "Bram Lentjes[^1]" - "Mattias Windmolders[^2]" - "Yuri A. Kuznetsov[^3]" bibliography: - references.bib title: Periodic Center Manifolds for Nonhyperbolic Limit Cycles in ODEs --- Center manifold theorem, nonhyperbolic cycles, ordinary differential equations 34C25, 34C45, 37G15 # Introduction Center manifold theory is without doubt one of the most well-known and powerful techniques to study local bifurcations of dynamical systems [@Guckenheimer1983; @Kuznetsov2023a]. In its simplest form, center manifold theory allows us to analyze the behavior of a complicated high-dimensional nonlinear dynamical system near a bifurcation by reducing the system to a low-dimensional invariant manifold, called the center manifold. The center manifold theorem for finite-dimensional ordinary differential equations (ODEs) near a nonhyperbolic equilibrium has first been proved in [@Pliss1964; @Kelley1967] and developed further in [@Hirsch1977; @Carr1981]. Over the years, the existence of a center manifold near a nonhyperbolic equilibrium has been established for various other classes of dynamical systems by employing different techniques, such as, for example, the graph transform [@Shub1987; @Sandstede2015], the parametrization method [@Berg2020a; @Berg2020] and the Lyapunov-Perron method [@Vanderbauwhede1989; @Haragus2011]. Mainly this last method has been proven to be very successful in the setting of infinite-dimensional dynamical systems. For example, the center manifold theorem for equilibria has been obtained under various assumptions for ODEs in Banach spaces [@Vanderbauwhede1987; @Chow1988; @Vanderbauwhede1989; @Vanderbauwhede1992; @Haragus2011], partial differential equations [@Henry1981; @Kirchgaessner1982; @Mielke1986; @Mielke1988; @Bates1989], stochastic dynamical systems [@Boxler1989; @Du2006; @Chen2015; @Chen2018; @Li2022], classical delay (differential) equations [@Chafee1971; @Diekmann1991a; @Hale1993; @Diekmann1995; @Bosschaert2020], renewal equations [@Diekmann1991a; @Diekmann1995; @Diekmann2008], abstract delay (differential) equations [@Janssens2020], impulsive delay differential equations [@Church2021], mixed functional difference equations [@Hupkes2008a] and mixed functional differential equations [@Hupkes2006]. Various interesting and important qualitative properties of center manifolds for equilibria can be found in [@Sijbrand1985] and an extensive literature overview on such manifolds in various classes of dynamical systems can be found in [@Osipenko2009]. In all cases, the dimension of the center manifold is equal to the number of the critical eigenvalues of the equilibrium, i.e. those with zero real parts. 
The natural question arises if the whole center manifold construction can be repeated for nonhyperbolic periodic orbits (cycles) in various classes of dynamical systems. While the literature for center manifolds for equilibria is extensive, the same cannot be said for center manifolds near cycles. A first proof on the existence and smoothness of a center manifold for periodic mixed functional differential equations was given in [@Hupkes2008] and has been later adapted in [@Church2018; @Church2021] to the setting of periodic impulsive delay differential equations. Recently, in [@Lentjes2023a], the existence of a smooth periodic finite-dimensional center manifold near a cycle for classical delay differential equations has been established using the general sun-star calculus framework [@Clement1987; @Clement1988; @Clement1989; @Clement1989a; @Diekmann1991; @Diekmann1995], which expands its applicability to various other classes of delay equations. Here, the dimension of the center manifold is also determined by the number of the critical multipliers of the cycle, including the trivial (equal to one) multiplier. However, as the state space in all mentioned references on this topic is infinite-dimensional, many proofs are rather involved as one must rely on non-trivial functional analytic techniques. While the resulting center manifold theorems could be applied to finite-dimensional ODEs without delays, this is certainly a redundant overkill. The main goal of this paper is to directly state and prove a center manifold theorem for cycles in finite-dimensional ODEs, using only elementary tools. Essentially, the proofs below are rather straightforward adaptations of those from [@Lentjes2023a] in a much simpler finite-dimensional context. We already remark that our exposition is based on the classical Lyapunov-Perron method as a variation of constants formula is easily available in this setting. To study stability and bifurcations of limit cycles in ODEs, one can alternatively work with a Poincaré map on a cross-section to the cycle [@Guckenheimer1983; @Kuznetsov2023a]. In most cases, this is sufficient, but then we miss one dimension, i.e. the phase coordinate along the cycle. It should also be noted immediately that the existence of a smooth (non-unique) center manifold for the fixed point of the Poincaré map on a cross-section to the cycle does not imply directly the existence of a smooth center manifold in a tubular neighborhood of the cycle.[^4] A motivation in keeping this phase dimension is to directly obtain all information of the dynamics near the cycle. We fill two important gaps in the literature. First, from a theoretical point of view, the results from [@Iooss1988; @Iooss1999] on the existence of a special coordinate system on the center manifold that allows us to describe the local dynamics near a bifurcating cycle in terms of so-called periodic normal forms, rely heavily on the local invariance and smoothness properties of the center manifold. However, no proof nor a reference towards the literature has been provided which ensures the existence of a periodic sufficiently smooth center manifold near a nonhyperbolic cycle. Second, from a more practical point of view, many researchers use nowadays the well-known software package `MatCont` [@Dhooge2003] to study codimension one and two bifurcations of limit cycles in finite-dimensional ODEs. 
In particular, if one is interested in determining the nature (subcritical, supercritical or degenerate) of a bifurcation, one should compute the critical normal form coefficients of an associated periodic normal form. However, the computation of these coefficients in `MatCont` employs a combination of the periodic normalization method [@Kuznetsov2005; @Witte2013; @Witte2014], again based on the smoothness and local invariance of the center manifold, with the special coordinate system and the periodic normal forms mentioned above. ## Statement of the main theorem Let $f: \mathbb{R}^{n} \rightarrow \mathbb{R}^n$ be a $C^{k+1}$-smooth vector field for some $k \geq 1$ and consider the ordinary differential equation $$\label{eq:ODE} \tag{ODE} \dot{x}(t) = f(x(t)),$$ where $x(t) \in \mathbb{R}^n$. Assume that [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} admits a $T$-periodic solution $\gamma$ for some (minimal) $T > 0$ and let $\Gamma := \gamma(\mathbb{R})$ denote the associated (limit) *cycle*. Consider now the variational equation around $\Gamma$ $$\label{eq:variational} \dot{y}(t) = A(t)y(t),$$ where $A(t) := Df(\gamma(t))$ and $y(t) \in \mathbb{R}^n$. The unique (global) solution $y$ of [\[eq:variational\]](#eq:variational){reference-type="eqref" reference="eq:variational"} is generated by the *fundamental matrix* $U(t,s) \in \mathbb{R}^{n \times n }$ as $y(t) = U(t,s)y_0$ for all $(t,s) \in \mathbb{R}^2$, where $y_0 \in \mathbb{R}^n$ is an initial condition specified at time $s$. The eigenvalues of the matrix $U(s+T,s)$ are called *Floquet multipliers* (of $\Gamma$), and we say that $\Gamma$ is *nonhyperbolic* if there are at least $n_0 + 1 \geq 2$ Floquet multipliers on the unit circle that are counted with algebraic multiplicity. Let $E_0(s)$ denote the $(n_0+1)$-dimensional *center subspace* (at time $s$) defined by the direct sum of all generalized eigenspaces with a Floquet multiplier on the unit circle and let $E_0 := \{(s,y_0) \in \mathbb{R} \times \mathbb{R}^n : y_0 \in E_0(s) \}$ denote the *center bundle*. The main result on the existence of a periodic smooth local invariant center manifold near the cycle $\Gamma$ is summarized in and two illustrative examples of two-dimensional *local center manifolds around $\Gamma$*, denoted by $\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma)$, can be found in . Explicit minimal equations (model systems) of the form [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} admitting a $2\pi$-periodic two-dimensional center manifold around a nonhyperbolic cycle are given by (cylinder) and (Möbius band). [\[thm:main\]]{#thm:main label="thm:main"} Consider [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} with a $C^{k+1}$-smooth right-hand side $f : \mathbb{R}^n \to \mathbb{R}^n$ for some $k \geq 1$. Let $\gamma$ be a $T$-periodic solution of [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} such that the associated cycle $\Gamma := \gamma(\mathbb{R})$ is nonhyperbolic with $(n_0 + 1)$-dimensional center subspace $E_0(s)$ at time $s \in \mathbb{R}$. Then there exists a locally defined $T$-periodic $C^k$-smooth $(n_0 + 1)$-dimensional invariant manifold $\mathcal{W}^c_{\mathop{\mathrm{loc}}}(\Gamma)$ defined around $\Gamma$ and tangent to the center bundle $E_0$. ![Illustration of two-dimensional local center manifolds around $\Gamma$ [@Kuznetsov2023a]. 
The left figure represents the case when $-1$ is not a Floquet multiplier and then $\mathcal{W}_{\mathop{\mathrm{loc}}}^{c}(\Gamma)$ is locally diffeomorphic to a cylinder in a neighborhood of $\Gamma$. The right figure represents the case when $-1$ is a Floquet multiplier and then $\mathcal{W}_{\mathop{\mathrm{loc}}}^{c}(\Gamma)$ is locally diffeomorphic to a Möbius band in a neighborhood of $\Gamma$. The $(\tau,\xi)$-coordinate system on the center manifold is the special coordinate system described in [@Iooss1988; @Iooss1999].](CM.eps){#fig:CM width="10cm"} ## Overview The paper is organized as follows. In we review some basic principles of Floquet theory for ODEs and elaborate a bit more on spectral decompositions. In we use the theory from previous section to prove the existence of a Lipschitz continuous center manifold for (nonlinear) periodic ODEs. In we prove that this center manifold is periodic, sufficiently smooth, locally invariant and its tangent bundle is precisely the center bundle. The technical proofs regarding smoothness of the center manifold are relegated to Appendix A. Combining all these results proves . In we provide explicit examples of analytic vector fields admitting (non)-unique, (non)-$C^{\infty}$-smooth and (non)-analytic periodic center manifolds. # Floquet theory and spectral decompositions {#sec:Floquet} Consider $\eqref{eq:ODE}$ admitting a $T$-periodic solution $\gamma$ with associated cycle $\Gamma := \gamma(\mathbb{R})$. The aim of this section is to determine the stability of $\Gamma$ and characterize nonhyperbolicity using Floquet theory. This (linear) theory will allow us to state and prove results regarding spectral properties of our operators and spaces of interest. Standard references for this entire section are the books [@Hale2009; @Iooss1999; @Kuznetsov2023a] on ODEs and [@Axler2014] on Linear Algebra. All unreferenced claims relating to basic properties of Floquet theory and spectral decompositions can be found here. To study the stability of $\Gamma$, set $x = \gamma + y$ and notice that $y$ satisfies the (nonlinear) periodic ODE $$\label{eq:A(t) R} \dot{y}(t) = A(t)y(t) + R(t,y(t)),$$ where $A(t) := Df(\gamma(t))$ and $R(t,y(t)) := f(\gamma(t) + y(t)) - f(\gamma(t)) - A(t)y(t)$. Hence, $A : \mathbb{R} \to \mathbb{R}^{n \times n}$ is a $T$-periodic $C^k$-smooth function and $R : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$ is $T$-periodic in the first component, $C^k$-smooth, $R(\cdot,0) = 0$ and $D_2 R(\cdot,0) = 0$, i.e. $R$ consists solely of nonlinear terms. Note that the nonlinearity $R$ has one degree of smoothness less than the original vector field $f$. For a starting time $s \in \mathbb{R}$ and initial condition $y_0 \in \mathbb{R}^n$ for [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}, it follows from the $C^k$-smoothness of $R$ and the Picard-Lindelöf theorem that [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} admits a unique (maximal) solution for all $t \in \mathbb{R}$ sufficiently close to $s$. Hence, let for such $t$ and $s$ the map $S(t,s,\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ denote the *(time-dependent) flow*, also called *process* [@Kloeden2011], of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}. 
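For readers who wish to experiment numerically, the following minimal sketch (in Python, assuming NumPy is available; the concrete vector field is the restriction to the $(x_1,x_2)$-plane of the cylinder example studied in the last section) carries out this translation explicitly: it forms $A(t) = Df(\gamma(t))$ along the $2\pi$-periodic solution $\gamma(t) = (\cos t, \sin t)$ and evaluates the remainder $R(t,y) = f(\gamma(t)+y) - f(\gamma(t)) - A(t)y$, illustrating that $R(t,0) = 0$ and that $R(t,y)$ is of quadratic order in $y$, in accordance with $D_2R(\cdot,0) = 0$.

```python
import numpy as np

# Planar vector field with the unit circle as a 2*pi-periodic orbit; this is
# the (x1, x2)-part of the cylinder example treated in the last section.
def f(x):
    x1, x2 = x
    r2 = x1 ** 2 + x2 ** 2
    return np.array([x1 - x2 - x1 * r2, x1 + x2 - x2 * r2])

def gamma(t):
    # T-periodic solution gamma(t) = (cos t, sin t) with T = 2*pi
    return np.array([np.cos(t), np.sin(t)])

def A(t):
    # A(t) = Df(gamma(t)), the Jacobian of f evaluated along the cycle
    x1, x2 = gamma(t)
    return np.array([[1 - 3 * x1 ** 2 - x2 ** 2, -1 - 2 * x1 * x2],
                     [1 - 2 * x1 * x2, 1 - x1 ** 2 - 3 * x2 ** 2]])

def R(t, y):
    # Nonlinear remainder of the translation x = gamma + y
    return f(gamma(t) + y) - f(gamma(t)) - A(t) @ y

t = 0.7
print(R(t, np.zeros(2)))                              # R(t, 0) = 0
print(np.linalg.norm(R(t, np.array([1e-3, -2e-3]))))  # of order ||y||^2
```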
One can verify by uniqueness of solutions that $$\begin{aligned} \label{eq:Sts} S(t,s,S(r,s,\cdot)) = S(t+r,s,\cdot), \quad S(s,s,\cdot) = I, \quad S(t+T,s+T,\cdot) = S(t,s,\cdot), \end{aligned}$$ for all $t,r \in \mathbb{R}$ sufficiently close to $s$. It is clear from this construction that studying solutions of [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} near $\Gamma$ is equivalent to studying solutions of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} near the origin. Therefore, we start by investigating the linearization of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} around the origin: $$\label{eq:A(t)y(t)} \dot{y}(t) = A(t)y(t).$$ Observe that its (global) solutions are generated by the *fundamental matrix* $U(t,s) \in \mathbb{R}^{n \times n}$ as $y(t) = U(t,s)y_0$ for all $(t,s) \in \mathbb{R}^2$, whenever an initial condition $y_0 \in \mathbb{R}^n$ at starting time $s \in \mathbb{R}$ is specified. Moreover, we have that the map $(t,s) \mapsto U(t,s)$ is $C^k$-smooth. Using uniqueness of solutions for [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"}, $T$-periodicity of $A$, and the fact that $U(s,s) = I$ for all $s \in \mathbb{R}$, one can easily verify that $$\begin{aligned} \label{eq:propsU(t,s)} U(t,r)U(r,s) = U(t,s), \quad U(t,s)^{-1} = U(s,t), \quad U(t+T,s+T) = U(t,s), \end{aligned}$$ for all $t,r,s \in \mathbb{R}$. [\[lemma:partialUts\]]{#lemma:partialUts label="lemma:partialUts"} There holds for any $(t,s) \in \mathbb{R}^2$ that $$\begin{aligned} \frac{\partial}{\partial t} U(t,s) = A(t)U(t,s),\quad \frac{\partial}{\partial t} U(s,t) = -U(s,t)A(t). \end{aligned}$$ *Proof.* The first equality follows immediately from [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"}. To prove the second equality, observe from [\[eq:propsU(t,s)\]](#eq:propsU(t,s)){reference-type="eqref" reference="eq:propsU(t,s)"} that $U(s,t)U(t,s) = I$ and so differentiating both sides with respect to $t$ yields after rearranging $$\bigg(\frac{\partial}{\partial t}U(s, t)\bigg)U(t, s) = -U(s, t)A(t)U(t, s),$$ which proves the claim. ◻ Combining the first and third equality of [\[eq:propsU(t,s)\]](#eq:propsU(t,s)){reference-type="eqref" reference="eq:propsU(t,s)"} together with induction, one proves that $U(s+nT,s) = U(s+T,s)^n$ for all $s \in \mathbb{R}$ and $n \in \mathbb{Z}$ and so $y(s+nT) = U(s+T,s)^n y_0$. Hence, the long term behavior of the solution $y$ is determined by the *monodromy matrix* (at time $s$) $U(s+T,s)$ and especially its eigenvalues, called *Floquet multipliers*. To develop a spectral theory for our problem of interest, notice that one has to *complexify* the state space $\mathbb{R}^n$ and all discussed operators defined on $\mathbb{R}^n$, i.e. one has to extend the state space to $\mathbb{C}^n$ and extend all discussed operators to $\mathbb{C}^n$, see [@Axler2014 Chapter 9] for more information. However, for the sake of simplicity, we will not introduce any additional notation for the complexification of the operators. Let us now study the spectrum $\sigma(U(s+T,s))$ of $U(s+T,s)$ in depth. It follows from [@Kuznetsov2023a Theorem 1.6] that the Floquet multipliers are independent of the starting time $s$ and that $1$ is always a Floquet multiplier. 
To see this last claim, differentiating $\dot{\gamma}(t) = f(\gamma(t))$ yields $\ddot{\gamma}(t) = A(t)\dot{\gamma}(t)$ and so $\dot{\gamma}$ is a solution of [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"}, i.e. $\dot{\gamma}(t) = U(t,s)\dot{\gamma}(s)$. Exploiting $T$-periodicity of $\gamma$ yields $\dot{\gamma}(s) = \dot{\gamma}(s+T) = U(s+T,s)\dot{\gamma}(s)$, which proves that $1$ is an eigenvalue of $U(s+T,s)$ with associated eigenvector $\dot{\gamma}(s)$. Let $\lambda$ be a Floquet multiplier of algebraic multiplicity $m_\lambda$, i.e. the $m_\lambda$-dimensional $U(s+T,s)$-invariant subspace $E_\lambda(s) := \ker((U(s+T,s) - \lambda I)^{m_\lambda})$ of $\mathbb{C}^n$ is maximal, or equivalently, $m_\lambda$ is the order of a root of the characteristic polynomial $\det(U(s+T,s) - \lambda I)$. This allows us to choose a basis of $m_\lambda$ linearly independent (generalized) eigenvectors $\zeta_1(s),\dots,\zeta_{m_\lambda}(s)$ of $E_\lambda(s)$. Moreover, let $\pi_\lambda(s)$ be the projection from $\mathbb{C}^n$ to $E_\lambda(s)$ with kernel the orthogonal complement of $E_\lambda(s)$. Our next aim is to extend $\zeta_1(s),\dots,\zeta_{m_\lambda}(s)$ and $\pi_\lambda(s)$ forward and backward in time. The results can be found in the following two lemmas. [\[lemma:basis\]]{#lemma:basis label="lemma:basis"} Let $\lambda$ be a Floquet multiplier, then the restriction $U_\lambda(t,s) : E_\lambda(s) \to E_\lambda(t)$ is well-defined and invertible for all $(t,s) \in \mathbb{R}^2$. Moreover, there exist $C^k$-smooth maps $\zeta_i : \mathbb{R} \to \mathbb{C}^n$ such that $\zeta_1(t),\dots,\zeta_{m_\lambda}(t)$ is a basis of $E_\lambda(t)$ for all $t \in \mathbb{R}$. *Proof.* One can verify easily from the equalities in [\[eq:propsU(t,s)\]](#eq:propsU(t,s)){reference-type="eqref" reference="eq:propsU(t,s)"} that $$(U(t+T,t) - \lambda I)U(t,s) = U(t,s)(U(s+T,s) - \lambda I).$$ Therefore, for each $v \in E_\lambda(s)$, we get $$\begin{aligned} (U(t+T,t) - \lambda I)^n U(t,s)v = U(t,s)(U(s+T,s) - \lambda I)^n v = 0, \end{aligned}$$ which proves that $U_\lambda(t,s)v \in E_\lambda(t)$ since the Floquet multipliers are independent of the starting time. As $U(t,s)$ is invertible, its restriction $U_\lambda(t,s)$ is invertible as well and this proves the first claim. To prove the second claim, let $\zeta_1(s),\dots,\zeta_{m_\lambda}(s)$ be a basis of $E_\lambda(s)$ and define for all $i \in \{1,\dots,m_\lambda\}$ the $C^k$-smooth maps $\zeta_i : \mathbb{R} \to \mathbb{C}^n$ by $\zeta_i(t) := U_\lambda(t,s)\zeta_i(s)$. By the first claim, it is clear that $\zeta_1(t),\dots,\zeta_{m_\lambda}(t)$ is a basis of $E_\lambda(t)$ for all $t\in \mathbb{R}$, and this completes the proof. ◻ [\[lemma:Iooss\]]{#lemma:Iooss label="lemma:Iooss"} Let $\lambda$ be a Floquet multiplier, then there exists a $T$-periodic $C^k$-smooth map $\pi_\lambda : \mathbb{R} \to \mathbb{C}^{n \times n}$ such that $\pi_\lambda(t)$ is the projection from $\mathbb{C}^n$ onto $E_\lambda(t)$ for all $t \in \mathbb{R}$ and satisfies the periodic linear ODE $$\label{eq:ODEpi} \dot{\pi}_\lambda(t) = A(t)\pi_\lambda(t) - \pi_\lambda(t)A(t).$$ It will be convenient in the sequel to introduce the sets $\Lambda_{-} := \{ \lambda \in \sigma(U(s+T,s)) : |\lambda| < 1 \}, \Lambda_{0} := \{ \lambda \in \sigma(U(s+T,s)) : |\lambda| = 1 \}$ and $\Lambda_{+} := \{ \lambda \in \sigma(U(s+T,s)) : |\lambda| > 1 \}$, where the elements of these sets have to be counted with algebraic multiplicity. 
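These spectral objects are straightforward to compute numerically. The sketch below (Python, assuming NumPy and SciPy are available) integrates the matrix equation $\dot{U} = A(t)U$, $U(s,s) = I$, over one period for the planar Jacobian $A(t)$ used in the sketch above, reads off the Floquet multipliers as the eigenvalues of the monodromy matrix $U(s+T,s)$, and sorts them into $\Lambda_-$, $\Lambda_0$ and $\Lambda_+$ according to their modulus (up to a numerical tolerance). For this planar system one finds $\Lambda_0 = \{1\}$ and $\Lambda_- = \{e^{-4\pi}\}$, so its cycle is hyperbolic; the three-dimensional extension in the last section adds a second critical multiplier and thereby a nonhyperbolic cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi   # period of the cycle
s = 0.0         # starting time

def A(t):
    # Df(gamma(t)) for the planar system above, simplified via cos^2 + sin^2 = 1
    c, sn = np.cos(t), np.sin(t)
    return np.array([[-2 * c * c, -1 - 2 * sn * c],
                     [1 - 2 * sn * c, -2 * sn * sn]])

def variational(t, u):
    # Matrix ODE dU/dt = A(t) U, stored as a flattened 2 x 2 matrix
    return (A(t) @ u.reshape(2, 2)).ravel()

sol = solve_ivp(variational, (s, s + T), np.eye(2).ravel(),
                rtol=1e-10, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)        # U(s + T, s)
multipliers = np.linalg.eigvals(monodromy)    # Floquet multipliers

tol = 1e-6
Lambda_minus = [m for m in multipliers if abs(m) < 1 - tol]
Lambda_zero = [m for m in multipliers if abs(abs(m) - 1) <= tol]
Lambda_plus = [m for m in multipliers if abs(m) > 1 + tol]
print(multipliers)                            # approximately {1, e^{-4*pi}}
print(Lambda_minus, Lambda_zero, Lambda_plus)
```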
We say that the cycle $\Gamma$ is *nonhyperbolic* if there are at least $n_0 + 1 \geq 2$ Floquet multipliers on the unit circle that are counted with algebraic multiplicity, i.e. the cardinality of $\Lambda_0$ is at least $2$. [\[prop:criteria\]]{#prop:criteria label="prop:criteria"} The following properties hold. 1. For each $s \in \mathbb{R}$, Euclidean $n$-space admits a direct sum decomposition $$\label{eq:decomposition} \mathbb{R}^n = E_{-}(s) \oplus E_0(s) \oplus E_{+}(s)$$ in a *stable subspace*, *center subspace* and *unstable subspace* (at time $s$) respectively. 2. There exist three $T$-periodic $C^k$-smooth projectors $\pi_{i} : \mathbb{R} \to \mathbb{R}^{n \times n}$ with $\mathop{\mathrm{ran}}(\pi_i(s))= E_i(s)$ for all $s \in \mathbb{R}$ and $i \in \{-,0,+\}$. 3. There exists a constant $N \geq 0$ such that $\sup_{s \in \mathbb{R}}(\|\pi_{-}(s)\| + \|\pi_{0}(s)\| + \|\pi_{+}(s)\|) = N < \infty$. 4. The projections are mutually orthogonal: $\pi_{i}(s)\pi_j(s) = 0$ for all $s \in \mathbb{R}$ and $i \neq j$ with $i,j \in \{-,0,+\}$. 5. The projections commute with the fundamental matrix: $U(t,s)\pi_i(s) = \pi_i(t)U(t,s)$ for all $(t,s) \in \mathbb{R}^2$ and $i \in \{-,0,+\}$. 6. The restrictions $U_{i}(t,s) : E_{i}(s) \to E_{i}(t)$ are well-defined and invertible for all $(t,s) \in \mathbb{R}^2$ and $i \in \{-,0,+\}$. 7. The decomposition [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"} is an exponential trichotomy on $\mathbb{R}$ meaning that there exist $a < 0 < b$ such that for every $\varepsilon > 0$ there exists a $C_\varepsilon > 0$ such that $$\begin{aligned} \|U_-(t,s)\| &\leq C_\varepsilon e^{a(t-s)}, \quad t \geq s,\\ \|U_0(t,s)\| &\leq C_\varepsilon e^{\varepsilon|t-s|}, \quad t,s \in \mathbb{R},\\ \|U_+(t,s)\| &\leq C_\varepsilon e^{b(t-s)}, \quad t \leq s. \end{aligned}$$ *Proof.* We verify the seven properties step by step. 1\. From the generalized eigenspace decomposition theorem, we have that $$\mathbb{R}^n = \bigoplus_{\lambda \in \sigma(U(s + T,s))}E_\lambda(s),$$ and if we define $E_{i}(s) := \oplus_{\lambda \in \Lambda_i} E_\lambda(s)$ for $i \in \{-,0,+\}$, the result follows. Notice that $E_i(s)$ can be regarded as real vector space since, if $\lambda \in \Lambda_i$, then $\overline{\lambda} \in \Lambda_{i}$ because $U$ is a real operator. 2\. Define for $i \in \{-,0,+\}$ the map $\pi_{i}$ by $\pi_i(s) := \sum_{\lambda \in \Lambda_i} \pi_\lambda(s)$. It follows from linearity and that $\pi_i$ is $T$-periodic and $C^k$-smooth. By construction, the range of $\pi_i(s)$ is $E_{i}(s)$ for all $s \in \mathbb{R}$. The same argument as in the first assertion shows that $\pi_i$ is a real operator. 3\. Because $\pi_i$ and the norm $\| \cdot \|$ are continuous, we have that the map $\| \pi_i (\cdot) \| : \mathbb{R} \to \mathbb{R}$ is $T$-periodic and continuous. The claim follows now from applying three times the extreme value theorem. 4\. For $y \in \mathbb{R}^n$ the direct sum [\[eq:decomposition\]](#eq:decomposition){reference-type="eqref" reference="eq:decomposition"} admits a unique decomposition $y = y_- + y_0 + y_+$ with $y_i \in E_i(s)$. Hence, $\pi_i(s)\pi_j(s)y = \pi_i(s)y_j = 0$ if $i \neq j$ for all $s \in \mathbb{R}$. 5\. Differentiating $t \mapsto U(t,s)\pi_i(s)$ and $t \mapsto \pi_i(t)U(t,s)$ while using [\[eq:ODEpi\]](#eq:ODEpi){reference-type="eqref" reference="eq:ODEpi"}, one sees that they both satisfy [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"}. 
Since they coincide at time $t = s$, we have by uniqueness that they must be equal. 6\. Define for $i \in \{-,0,+\}$ the map $U_{i}(t,s)$ by $U_i(t,s) := \oplus_{\lambda \in \Lambda_i} U_\lambda(t,s)$ for all $(t,s) \in \mathbb{R}^2$. The claim now follows from linearity and . 7\. We only prove the $U_{-}(t,s)$ estimate, as the other ones can be proven similarly. Since the spectrum of $U_-(s + T,s)$ lies inside the unit disk, it follows from the spectral radius formula that $$\lim_{n \rightarrow \infty}\|U_-(s + T,s)^n\|^{\frac{1}{n}} = \max_{\lambda \in \sigma(U_-(s + T,s))}|\lambda| < 1.$$ Hence, there exist an $a < 0$ with $1 + aT \in (0,1)$ and an integer $n > 0$ such that $$\|U_-(s + T,s)^n\| < (1 + aT)^n,$$ and by continuity of the map $t \mapsto U_-(t,s)$, there is some $L > 0$ such that $\sup_{s \leq t \leq s + T}\|U_-(t,s)\| \leq L$. Denote $L_n := L \max_{j = 0, \dots, n - 1}\|U_-(s + T,s)^j\|$ and let $m_t$ be the largest positive integer such that $s + m_tnT \leq t$ and let $0 \leq m_t^\star \leq n - 1$ be the largest integer such that $s + m_tnT + m_t^\star T \leq t$. Using [\[eq:propsU(t,s)\]](#eq:propsU(t,s)){reference-type="eqref" reference="eq:propsU(t,s)"}, one obtains $$\begin{aligned} U_-(t,s) = U_-(t - m_tnT - m_t^\star T, s) U_-(s + T,s)^{m_t^\star}U_-(s + T,s)^{m_tn}. \end{aligned}$$ By the maximum property of $m_t^\star$: $s \leq t - m_tnT - m_t^\star T \leq s + m_tnT + (m_t^\star + 1)T - m_tnT - m_t^{\star} T = s + T$, and moreover $t - s < (m_t + 1)nT$ by the maximality of $m_t$ and $m_t^\star$, so that $m_tn > \frac{t-s}{T} - n$. Hence, $$\begin{aligned} \|U_-(t,s)\| \leq L_n\|U_-(s + T,s)^n\|^{m_t} \leq L_n (1 + aT)^{m_tn} \leq L_n (1 + aT)^{-n}[(1 + aT)^{\frac{1}{aT}}]^{a(t - s)} \leq L_n (1 + aT)^{-n} e^{a(t - s)}, \end{aligned}$$ where we used in the last inequality the fact that the map $x \mapsto (1+\frac{1}{x})^{x}$ is monotonically increasing on $(-\infty,-1)$ and converges to $e$ as $x \to -\infty$, so that $(1 + aT)^{\frac{1}{aT}} \geq e$. For the other estimates, for a given $\varepsilon > 0$ and sufficiently large $m \in \mathbb{N}$, one finds that there exist $M_\varepsilon$ and $N_m$ such that $\|U_0(t,s)\| \leq M_\varepsilon e^{\varepsilon|t-s|}$ for $(t,s) \in \mathbb{R}^2$ and $\|U_{+}(t,s)\| \leq N_m e^{b(t-s)}$ for $t \leq s$. Choosing $C_\varepsilon := \max \{L_n(1 + aT)^{-n}, M_\varepsilon, N_m\}$ proves the claim. ◻ # Existence of a Lipschitz center manifold {#sec:existence} The aim of this section is to prove the existence of a (local) center manifold for [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} around the origin. The proof consists of four steps. In the first step, we show that we can formulate [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} equivalently as an integral equation. In the second step, we determine a pseudo-inverse for solutions of this integral equation on a suitable Banach space. In the third step, we modify our nonlinearity $R$ outside a ball of radius $\delta > 0$ such that it becomes globally Lipschitz continuous, with a Lipschitz constant that can be made arbitrarily small by choosing $\delta$ small enough. In the last step, we construct a (family of) fixed point operators using the pseudo-inverse and modified nonlinearity for a sufficiently small $\delta$; these operators are then contractions. The fixed points of these contractions constitute the center manifold. 
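Before carrying out these steps, it may be instructive to check the first one numerically. The following sketch (Python, assuming NumPy and SciPy are available) integrates the nonlinear equation $\dot{y} = A(t)y + R(t,y)$ for the planar system used in the previous sketches, computes the fundamental matrix $U(t,s)$, and evaluates the variation-of-constants identity that is proven in the next lemma; the two printed vectors agree up to quadrature and integration tolerances.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar f, gamma, A and R as in the earlier sketches.
def f(x):
    x1, x2 = x
    r2 = x1 ** 2 + x2 ** 2
    return np.array([x1 - x2 - x1 * r2, x1 + x2 - x2 * r2])

gamma = lambda t: np.array([np.cos(t), np.sin(t)])

def A(t):
    c, sn = np.cos(t), np.sin(t)
    return np.array([[-2 * c * c, -1 - 2 * sn * c],
                     [1 - 2 * sn * c, -2 * sn * sn]])

def R(t, y):
    return f(gamma(t) + y) - f(gamma(t)) - A(t) @ y

s, t1, y0 = 0.0, 1.0, np.array([5e-2, -3e-2])

# u solves du/dt = A(t) u + R(t, u) with u(s) = y0.
u = solve_ivp(lambda t, y: A(t) @ y + R(t, y), (s, t1), y0,
              dense_output=True, rtol=1e-12, atol=1e-14)

# Phi(t) = U(t, s) solves dPhi/dt = A(t) Phi with Phi(s) = I.
Phi_sol = solve_ivp(lambda t, m: (A(t) @ m.reshape(2, 2)).ravel(),
                    (s, t1), np.eye(2).ravel(),
                    dense_output=True, rtol=1e-12, atol=1e-14)
Phi = lambda t: Phi_sol.sol(t).reshape(2, 2)

# Trapezoidal approximation of int_s^{t1} U(t1, tau) R(tau, u(tau)) dtau,
# where U(t1, tau) = Phi(t1) Phi(tau)^{-1}.
taus = np.linspace(s, t1, 2001)
vals = np.array([Phi(t1) @ np.linalg.solve(Phi(tau), R(tau, u.sol(tau)))
                 for tau in taus])
h = taus[1] - taus[0]
integral = h * (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0))

print(u.sol(t1))                  # left-hand side  u(t1)
print(Phi(t1) @ y0 + integral)    # right-hand side of the integral equation
```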
[\[lemma:integralequivalence\]]{#lemma:integralequivalence label="lemma:integralequivalence"} The ordinary differential equation [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} is equivalent to the integral equation $$\label{eq:integraleq} u(t) = U(t,s)u(s) + \int_s^t U(t,\tau)R(\tau,u(\tau)) d\tau.$$ *Proof.* Any $u$ satisfying [\[eq:integraleq\]](#eq:integraleq){reference-type="eqref" reference="eq:integraleq"} is clearly differentiable, and it follows from the Leibniz integral rule that $$\begin{aligned} \dot{u}(t) &= A(t)U(t, s)u(s) + U(t,t)R(t, u(t)) + \int_s^tA(t)U(t, \tau)R(\tau, u(\tau))d\tau \\ &= A(t)\left[U(t, s)u(s) + \int_s^tU(t, \tau)R(\tau, u(\tau))d\tau\right] + R(t, u(t)) = A(t)u(t) + R(t, u(t)), \end{aligned}$$ which proves that $u$ satisfies [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}. Conversely, let $u$ satisfy [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} and let $w(t) = U(s, t)u(t)$. Then $$\dot{w}(t) = \bigg(\frac{\partial}{\partial t}U(s, t)\bigg)u(t) + U(s, t)\dot{u}(t).$$ The second equality in together with the fact that $u$ satisfies [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} shows that $\dot{w}(t) = U(s,t)R(t,u(t))$. Integrating both sides with respect to $t$ yields $$\begin{aligned} u(t) &= U(t, s)u(s) + U(t, s)\int_s^tU(s, \tau)R(\tau, u(\tau))d\tau = U(t, s)u(s) + \int_s^tU(t, \tau)R(\tau, u(\tau))d\tau, \end{aligned}$$ where we used [\[eq:propsU(t,s)\]](#eq:propsU(t,s)){reference-type="eqref" reference="eq:propsU(t,s)"} in the last equality. ◻ Let $C_b(\mathbb{R}, \mathbb{R}^{n})$ denote the Banach space of $\mathbb{R}^n$-valued continuous bounded functions defined on $\mathbb{R}$ equipped with the supremum norm $\| \cdot \|_{\infty}$. If we want to study solutions of [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"} (or equivalently [\[eq:integraleq\]](#eq:integraleq){reference-type="eqref" reference="eq:integraleq"} with $R = 0$) in the center subspace, it turns that such solutions can be unbounded, and so we can not work in the space $C_b(\mathbb{R}, \mathbb{R}^{n})$. Instead, we must work in a function space that allows limited (sub)exponential growth both at plus and minus infinity. Therefore, define for any $\eta,s \in \mathbb{R}$ the normed space $$\mathop{\mathrm{BC}}_{s}^{\eta} := \bigg \{ f \in C(\mathbb{R},\mathbb{R}^n) : \sup_{t \in \mathbb{R}} e^{-\eta|t-s|}\|f(t)\| < \infty \bigg \},$$ with the weighted supremum norm $$\|f\|_{\eta,s} := \sup_{t \in \mathbb{R}} e^{-\eta|t-s|}\|f(t)\|.$$ Since the linear map $\iota : (C_b(\mathbb{R}, \mathbb{R}^{n}), \|\cdot\|_{\infty}) \rightarrow (\mathop{\mathrm{BC}}_{s}^{\eta}, \|\cdot\|_{\eta,s})$ defined by $\iota(f)(t) := e^{\eta|t - s|}f(t)$ is an isometry, it is clear that $(\mathop{\mathrm{BC}}_{s}^{\eta}, \|\cdot\|_{\eta,s})$ is a Banach space. The following result proves that all solutions of [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"} on the center subspace belong to $\mathop{\mathrm{BC}}_{s}^{\eta}$. [\[prop:X0s\]]{#prop:X0s label="prop:X0s"} Let $\eta \in (0,\min \{-a,b\})$ and $s \in \mathbb{R}$. Then $$\begin{aligned} E_0(s) = \{y_0 \in \mathbb{R}^n : \text{ there exists a solution of \eqref{eq:A(t)y(t)}} \text{ through $y_0$ belonging to $\mathop{\mathrm{BC}}_{s}^{\eta}$}\}. 
\end{aligned}$$ *Proof.* Choose $y_0 \in E_0(s)$ and define $y(t) = U_0(t,s)y_0$, which is indeed a solution of [\[eq:A(t)y(t)\]](#eq:A(t)y(t)){reference-type="eqref" reference="eq:A(t)y(t)"} through $y_0$. Let $\varepsilon \in (0,\eta]$ be given. The exponential trichotomy from shows that $$\begin{aligned} e^{-\eta|t - s|}\|y(t)\| &= e^{-\eta|t - s|}\|U_0(t,s)y_0\| \leq C_\varepsilon e^{(\varepsilon - \eta)|t - s|}\|y_0\| \leq C_\varepsilon\|y_0\|, \quad \forall t \in \mathbb{R}. \end{aligned}$$ Taking the supremum over $t \in \mathbb{R}$ yields $y \in \mathop{\mathrm{BC}}_s^\eta$. Conversely, let $y_0 \in \mathbb{R}^n$ be such that $y$, defined by $y(t) = U(t,s)y_0$, is in $\mathop{\mathrm{BC}}_s^\eta$. For $t \geq \max\{s,0\}$ and $\varepsilon \in (0,\eta]$, we get $$\begin{aligned} \|\pi_+(s)y_0\| &= \|U_+(s,t)\pi_+(t)y(t)\| \leq C_\varepsilon e^{b(s - t)}N\|y(t)\|, \end{aligned}$$ which shows that $$e^{-\eta |t-s|}\|y(t)\| \geq \frac{e^{(b-\eta)(t-s)}}{C_\varepsilon N} \|\pi_+(s)y_0\| \to \infty, \quad \text{as } t \to \infty,$$ unless $\pi_+(s)y_0 = 0$. Since $y \in \mathop{\mathrm{BC}}_s^\eta$, it follows that $\pi_+(s)y_0 = 0$. Similarly, one can prove that $\pi_-(s)y_0 = 0$ and so $y_0 = (\pi_-(s) + \pi_0(s) + \pi_+(s))y_0 = \pi_0(s)y_0$, i.e. $y_0 \in E_0(s)$. ◻ ## Bounded solutions of the linear inhomogeneous equation {#subsec: bounded solutions} Let $f : \mathbb{R} \to \mathbb{R}^n$ be a continuous function and consider the linear inhomogeneous integral equation $$\label{eq:inhomogeneous CMT} u(t) = U(t,s)u(s) + \int_s^t U(t,\tau)f(\tau) d\tau,$$ for all $(t,s) \in \mathbb{R}^2$. To prove existence of a center manifold, we need a pseudo-inverse yielding the (exponentially) bounded solutions of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"}. To do this, define (formally) for any $\eta \in (0, \min \{-a,b \})$ and $s \in \mathbb{R}$ the operator $\mathcal{K}_{s}^{\eta} : \mathop{\mathrm{BC}}_s^\eta \to \mathop{\mathrm{BC}}_s^\eta$ as $$\begin{aligned} (\mathcal{K}_{s}^{\eta} f)(t) := \int_s^tU(t,\tau)\pi_0(\tau)f(\tau)d\tau + \int_{\infty}^tU(t,\tau)\pi_+(\tau)f(\tau)d\tau + \int_{-\infty}^tU(t,\tau)\pi_-(\tau)f(\tau)d\tau, \end{aligned}$$ and we have to check that this operator is well-defined. The following proposition proves this, and also shows that $\mathcal{K}_{s}^{\eta}$ is precisely the pseudo-inverse we are looking for. [\[prop:ketas\]]{#prop:ketas label="prop:ketas"} Let $\eta \in (0, \min \{-a,b \})$ and $s \in \mathbb{R}$. The following properties hold. 1. $\mathcal{K}_{s}^{\eta}$ is a well-defined bounded linear operator. Moreover, the operator norm $\|\mathcal{K}_{s}^{\eta}\|$ is bounded above independently of $s$. 2. $\mathcal{K}_{s}^{\eta}f$ is the unique solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"} in $\mathop{\mathrm{BC}}^{\eta}_s$ with vanishing $E_0(s)$-component at time $s$. *Proof.* We start by proving the first assertion. Clearly $\mathcal{K}_{s}^{\eta}$ is linear. Let $\varepsilon \in (0,\eta]$ be given and notice that for a given $f \in \mathop{\mathrm{BC}}^{\eta}_s$, we can write $\mathcal{K}_{s}^{\eta}f$ as the sum of three integrals, i.e. $\mathcal{K}_{s}^{\eta}f = I_0(\cdot,s) + I_+ + I_-$. We now estimate the norm of each integral step by step. 
$I_0(\cdot,s)$: The straightforward estimate $$\|I_0(t,s)\| \leq C_\varepsilon N \|f\|_{\eta,s} \frac{e^{\eta |t-s|}}{\eta - \varepsilon} < \infty, \quad \forall t \in \mathbb{R},$$ implies that the norm of $I_0(\cdot,s)$ is bounded above. $I_{+}:$ Notice that $$\|I_+(t)\| \leq C_\varepsilon N \|f\|_{\eta,s}e^{bt} \int_t^\infty e^{-b \tau + \eta |\tau - s|} d\tau, \forall t \in \mathbb{R},$$ and to prove norm boundedness of $I_+$, we have to evaluate the integral above. A calculation shows that $$\begin{aligned} \label{eq:expo integral} \int_t^\infty e^{-b \tau + \eta |\tau - s |} d\tau = \begin{dcases} \frac{e^{-bt}}{b- \eta} e^{\eta(t-s)}, \quad & t \geq s \\ \frac{e^{-bt}}{b+\eta}e^{\eta (s-t)} - \frac{e^{-bs}}{b+\eta} + \frac{e^{-bs}}{b-\eta}, \quad &t \leq s. \end{dcases} \end{aligned}$$ We want to estimate the $t\leq s$ case. Notice that for real numbers $\alpha \geq \beta$ we have $$(\alpha - \beta) \bigg( \frac{1}{b+\eta} - \frac{1}{b-\eta} \bigg) = \frac{-2 \eta (\alpha - \beta)}{(b+ \eta)(b-\eta)} \leq 0,$$ since $\eta < b$ by assumption. Hence, $$\frac{\alpha}{b+\eta} + \frac{\beta}{b-\eta} \leq \frac{\alpha}{b-\eta} + \frac{\beta}{b+\eta}.$$ We want to replace $\alpha$ by $e^{-b t + \eta s - \eta t}$ and $\beta$ by $e^{-bs}$ and therefore we have to show that $-bt + \eta s - \eta t + bs \geq 0$ which is true because $-bt + \eta s - \eta t + bs = (s-t)(b + \eta) \geq 0$ since $s-t \geq 0$. Filling this into [\[eq:expo integral\]](#eq:expo integral){reference-type="eqref" reference="eq:expo integral"} yields $$\int_t^\infty e^{-b \tau + \eta |\tau - s |} d\tau \leq \frac{e^{-bt}}{b-\eta} e^{\eta|t-s|},$$ which shows that $$\|I_{+}(t)\| \leq C_\varepsilon N \|f\|_{\eta,s}\frac{e^{\eta |t-s|}}{b-\eta} < \infty, \quad \forall t \in \mathbb{R}.$$ and so we conclude that $I_{+}$ is well-defined. $I_{-}:$ A similar estimate as for the $I_{+}$-case shows that $$\|I_-(t)\| \leq C_\varepsilon N \|f\|_{\eta,s}\frac{e^{\eta|t - s|}}{-a - \eta} < \infty, \quad \forall t \in \mathbb{R},$$ and so it follows that the operator norm $$\|\mathcal{K}_{s}^{\eta}\| \leq C_\varepsilon N \left(\frac{1}{\eta - \varepsilon} + \frac{1}{b - \eta} + \frac{1}{-a - \eta}\right) < \infty,$$ is bounded above independent of $s$. We conclude that $\mathcal{K}_{s}^{\eta}$ is a bounded linear operator on $\mathop{\mathrm{BC}}_{s}^\eta$. Let us now prove the second assertion by first showing that $\mathcal{K}_s^\eta$ is indeed a solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"}. Let $f \in \mathop{\mathrm{BC}}_s^\eta$ and set $u = \mathcal{K}_s^\eta f$. Then, a straightforward computation shows that $$U(t,s)u(s) + \int_s^t U(t,\tau)f(\tau) d\tau = u(t),$$ and so $u$ is indeed a solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"}. Let us now prove that $u$ has vanishing $E_0(s)$-component at time $s$, i.e. $\pi_0(s)u(s) = 0$. The mutual orthogonality of the projectors () implies $$\begin{aligned} \pi_0(s)u(s) = \int_\infty^s U(s,\tau ) \pi_0(\tau)\pi_{+}(\tau)f(\tau) d\tau + \int_{-\infty}^s U(s,\tau) \pi_0(\tau) \pi_{-}(\tau)f(\tau) d\tau = 0. \end{aligned}$$ It only remains to show that $u$ is the unique solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"} in $\mathop{\mathrm{BC}}_s^\eta$. 
Let $v \in \mathop{\mathrm{BC}}_s^\eta$ be another solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"} with vanishing $E_0(s)$-component at time $s$. Then the function $w := u - v$ is an element of $\mathop{\mathrm{BC}}_s^\eta$ and satisfies $w(t) = U(t,s)w(s)$ for all $(t,s) \in \mathbb{R}^2$. shows us that $w(s) \in E_0(s)$, and notice that $\pi_0(s)w(s) = 0$ since $u$ and $v$ both have vanishing $E_0(s)$-component at time $s$. From we know that $w(t) = U_0(t,s)w(s)$ is in $E_0(t)$ and $$\begin{aligned} \pi_0(t)w(t) = \pi_0(t)U_0(t,s)w(s) = U_0(t,s)\pi_0(s)w(s) = 0, \end{aligned}$$ so $u = v$. ◻ ## Modification of the nonlinearity {#subsec: modification nonlinearity} To prove the existence of a center manifold, a key step will be to use the Banach fixed point theorem on some specific fixed point operator. This operator will of course be linked to the inhomogeneous equation [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"}. However, we cannot expect that an arbitrary nonlinear operator $R(t,\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ for fixed $t \in \mathbb{R}$ will impose a Lipschitz condition on the fixed point operator that will be constructed. As we are only interested in the local behavior of solutions near the origin of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}, we can modify the nonlinearity $R(t,\cdot)$ outside a ball of radius $\delta > 0$ such that eventually the fixed point operator will become a contraction. To modify this nonlinearity, introduce a $C^{\infty}$-smooth cut-off function $\xi : [0,\infty) \to \mathbb{R}$ as $$\xi(s) \in \begin{cases} \{ 1 \}, \quad &0 \leq s \leq 1, \\ [0,1], \quad &1 \leq s \leq 2,\\ \{ 0 \}, \quad & s \geq 2, \end{cases}$$ and define for any $\delta > 0$ the *$\delta$-modification* of $R$ as the operator $R_{\delta} : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$ with action $$\begin{aligned} &R_{\delta}(t,u):= R(t,u) \xi \bigg( \frac{\|\pi_0(t)u\|}{N \delta} \bigg) \xi \bigg( \frac{\|(\pi_{-}(t) + \pi_{+}(t))u\|}{N \delta} \bigg). \end{aligned}$$ Since $R$ is of the class $C^k$, the cut-off function $\xi$ is $C^{\infty}$-smooth, the Euclidean norm $\| \cdot \|$ is $C^{\infty}$-smooth on $\mathbb{R}^n \setminus \{ 0 \}$ and the projectors $\pi_-,\pi_0,\pi_{+}$ are $C^k$-smooth (), it is clear that $R_\delta$ is $C^k$-smooth. This $\delta$-modification of $R$ will ensure that the nonlinearity eventually becomes globally Lipschitz, as will be proven in the upcoming two statements. [\[lem:local lipschitz\]]{#lem:local lipschitz label="lem:local lipschitz"} There exist a $\delta_1 > 0$ and $l:[0,\delta_1] \rightarrow [0,\infty)$, continuous at $0$, such that $l(0) = 0$ and $l(\delta) =: l_\delta$ is a Lipschitz constant for $R(t,\cdot)$ on the open ball $B(0,\delta)$ for every $t \in \mathbb{R}$ and $\delta \in (0,\delta_1]$. *Proof.* Recall that $R$ is of the class $C^k$, that $R(t,0) = D_2R(t,0) = 0$ for all $t \in \mathbb{R}$, and that $R$ is $T$-periodic in its first argument. By continuity, choose $\delta_1 > 0$ such that $\sup\{\|D_2R(t,y)\| : t \in \mathbb{R}, \ y \in B(0,\delta_1)\} \leq 1$ and define the map $l$ as $$l(\delta) := \begin{dcases} 0, \hspace{-10pt}&\delta = 0, \\ \sup\{\|D_2R(t,y)\| : t \in \mathbb{R}, \ y \in B(0,\delta)\}, \hspace{-10pt}&\delta \in (0,\delta_1]. \end{dcases}$$ By the mean value theorem, $l(\delta)$ is a Lipschitz constant for $R(t,\cdot)$ on $B(0,\delta)$ for every $t \in \mathbb{R}$. 
Moreover, $l$ is monotonically increasing and observe that for each $\varepsilon > 0$, there exists a $0 < \delta_\varepsilon \leq \delta_1$ such that $\sup\{\|D_2R(t,y)\| : y \in B(0,\delta_\varepsilon)\} \leq \varepsilon$. Then for $0 < \delta \leq \delta_\varepsilon$ we have that $0 \leq l(\delta) \leq l(\delta_\varepsilon) \leq \varepsilon$ and so the map $l$ is continuous at zero. ◻ [\[prop:global lipschitz\]]{#prop:global lipschitz label="prop:global lipschitz"} For $\delta > 0$ sufficiently small, $R_{\delta}(t,\cdot)$ is globally Lipschitz continuous for all $t \in \mathbb{R}$ with Lipschitz constant $L_\delta \rightarrow 0$ as $\delta \downarrow 0$. *Proof.* Define for any $\delta > 0$ and $t \in \mathbb{R}$ the maps $\xi_\delta, \Xi_{\delta,t} : \mathbb{R}^n \to \mathbb{R}$ by $$\begin{aligned} \xi_\delta(y) := \xi\left(\frac{\|y\|}{N\delta}\right), \quad \Xi_{\delta,t}(y) := \xi_\delta(\pi_0(t)y)\xi_\delta(\pi_-(t)y + \pi_+(t)y), \end{aligned}$$ and so $R_\delta(t,y) = \Xi_{\delta,t}(y)R(t,y)$. Note that $\xi_\delta, \Xi_{\delta,t} \leq 1$ and let $C \geq 0$ be a global Lipschitz constant of $\xi$. Then by composition of Lipschitz functions, $\xi_\delta$ has a global Lipschitz constant $C/{N\delta}$. For $y,z \in \mathbb{R}^n$: $$\begin{aligned} |\Xi_{\delta,t}(y) - \Xi_{\delta,t}(z)| &= |[\xi_\delta(\pi_0(t)y)\xi_\delta(\pi_-(t)y + \pi_+(t)y) - \xi_\delta(\pi_0(t)y)\xi_\delta(\pi_-(t)z + \pi_+(t)z)] \\ & - [\xi_\delta(\pi_0(t)z)\xi_\delta(\pi_-(t)z + \pi_+(t)z) - \xi_\delta(\pi_0(t)y)\xi_\delta(\pi_-(t)z + \pi_+(t)z)]| \\ &\leq \xi_\delta(\pi_0(t)y)|\xi_\delta(\pi_-(t)y + \pi_+(t)y) - \xi_\delta(\pi_-(t)z + \pi_+(t)z)| \\ & + \xi_\delta(\pi_-(t)z + \pi_+(t)z)|\xi_\delta(\pi_0(t)y - \xi_\delta(\pi_0(t)z)| \\ &\leq \frac{2C}{\delta}\|y - z\|. \end{aligned}$$ Now, note that $\|y\| \leq \|\pi_0(t)y\| + \|(\pi_-(t) + \pi_+(t))y\|$ for all $y \in \mathbb{R}^n$. If $\|y\| \geq 4N\delta$, then $\max\{\|\pi_0(t)y\|, \|(\pi_-(t) + \pi_+(t))y\|\} \geq 2N\delta$, so that $\Xi_{\delta,t}(y) = 0$. Let $\delta_1 > 0$ be such as in and fix $\delta > 0$ such that $4N\delta \leq \delta_1$. For $y, z \in \mathbb{R}^n$: $$\begin{aligned} \|R_\delta(t,y) - R_\delta(t,z)\| &= \|[\Xi_{\delta,t}(y)R(t,y) - \Xi_{\delta,t}(y)R(t,z)] - [\Xi_{\delta,t}(z)R(t,z) - \Xi_{\delta,t}(y)R(t,z)]\| \\ &\leq \Xi_{\delta,t}(y)\|R(t,y) - R(t,z)\| + |\Xi_{\delta,t}(y) - \Xi_{\delta,t}(z)|\|R(t,z)\|\\ &\leq\begin{dcases} l_\delta(4N\delta)\|y - z\| + 8CNl_\delta(4N\delta)\|y - z\|, & \|y\|, \|z\| < 4N\delta, \\ 0, & \|y\|, \|z\| \geq 4N\delta,\\ 8CNl_\delta(4N\delta)\|y - z\|, & \|y\| \geq 4N\delta, \|z\| < 4N\delta,\\ \end{dcases}\\ &\leq l_\delta(4N\delta)(1 + 8CN)\|y - z\|, \end{aligned}$$ Hence, $L_\delta := l_\delta(4N\delta)(1 + 8CN)$ is a Lipschitz constant for $R_\delta(t,\cdot)$ for all $t \in \mathbb{R}$. ◻ [\[cor:Rdelta\]]{#cor:Rdelta label="cor:Rdelta"} For $\delta > 0$ sufficiently small, $\|R_\delta(t, y)\| \leq 4NL_\delta\delta$ for all $(t,y) \in \mathbb{R} \times \mathbb{R}^n$. *Proof.* Note that $R_\delta(t, 0) = 0$. This means that $\|R_\delta(t, y)\| \leq L_\delta\|y\|$. Obviously the claim holds if $\|y\| \leq 4N\delta$. On the other hand, if $\|y\| > 4N\delta$, then $R_\delta(t, y) = 0$ and so the proof is complete. 
◻ Let us introduce for any $\eta \in (0,\min\{-a,b\}), s \in \mathbb{R}$ and a given $\delta$-modification of $R$, the *substitution operator* $\tilde{R}_{\delta} : \mathop{\mathrm{BC}}_s^{\eta} \to \mathop{\mathrm{BC}}_s^{\eta}$ as $$\tilde{R}_{\delta}(u) := R_\delta(\cdot,u(\cdot)),$$ and we show that this operator inherits the same properties as $R_\delta$. [\[lemma:substitution\]]{#lemma:substitution label="lemma:substitution"} Let $\eta \in (0, \min \{-a,b \}), s \in \mathbb{R}$ and $\delta > 0$ be sufficiently small. Then the substitution operator $\tilde{R}_\delta$ is well-defined and inherits the Lipschitz properties of $R_\delta$. *Proof.* It follows from that $$\|\tilde{R}_{\delta}(u)(t)\| = \|R_\delta(t,u(t))\| \leq L_\delta \|u(t)\|,$$ for all $u \in \mathop{\mathrm{BC}}_s^\eta$. Hence, $\|\tilde{R}_{\delta}(u) \|_{\eta,s} \leq L_\delta \| u \|_{\eta,s} < \infty$, i.e. $\tilde{R}_{\delta}(u) \in \mathop{\mathrm{BC}}_{s}^\eta$. The Lipschitz property follows immediately from since $$\|\tilde{R}_{\delta}(u) - \tilde{R}_{\delta}(v)\|_{\eta,s} \leq L_\delta \|u - v\|_{\eta,s},$$ for all $u,v \in \mathop{\mathrm{BC}}_s^\eta$, and so $\|\tilde{R}_{\delta}(u)\|_{\eta,s} \leq 4NL_\delta\delta$. ◻ Define for any $\eta \in (0, \min \{-a,b \})$ and $s \in \mathbb{R}$ the linear operator $U_s^\eta: E_0(s) \rightarrow \mathop{\mathrm{BC}}_s^\eta$ by $$(U_s^\eta y_0)(t) := U(t, s)y_0.$$ [\[lemma:Uetas\]]{#lemma:Uetas label="lemma:Uetas"} Let $\eta \in (0, \min \{-a,b \})$ and $s \in \mathbb{R}$. Then the operator $U_s^\eta$ is well-defined and bounded. *Proof.* Let $\varepsilon \in (0,\eta]$ be given. It follows from that $$\|U_s^\eta y_0\|_{\eta,s} \leq C_\varepsilon\|y_0\|\sup_{t \in \mathbb{R}}e^{(\varepsilon - \eta)|t - s|} = C_\varepsilon\|y_0\|,$$ for all $y_0 \in E_0(s)$, and so $U_s^\eta$ is well-defined and bounded. ◻ ## Existence of a Lipschitz center manifold {#subsec: Lipschitz CM} Our next goal is to define a parameterized fixed point operator such that its fixed points correspond to (exponentially) bounded solutions on $\mathbb{R}$ of the modified equation $$\label{eq:variation constants Rdelta} u(t) = U(t,s)u(s) + \int_s^t U(t,\tau)R_\delta(\tau, u(\tau)) d\tau,$$ for all $(t,s) \in \mathbb{R}^2$ and some small $\delta > 0$. For a given $\eta \in (0,\min \{-a,b \}), s\in \mathbb{R}$ and sufficiently small $\delta > 0$, we define the fixed point operator $\mathcal{G}_s^\eta : \mathop{\mathrm{BC}}_s^\eta \times E_0(s) \to \mathop{\mathrm{BC}}_s^\eta$ as $$\mathcal{G}_s^\eta(u,y_0) := U_s^\eta y_0 + \mathcal{K}_s^\eta(\tilde{R}_{\delta}(u)).$$ It follows from , and that $\mathcal{G}_s^\eta$ is well-defined. We first show that $\mathcal{G}_s^\eta(\cdot,y_0)$ admits a unique fixed point and is globally Lipschitz for all $y_0 \in E_0(s)$. [\[thm:contraction\]]{#thm:contraction label="thm:contraction"} Let $\eta \in (0, \min\{-a,b\})$ and $s \in \mathbb{R}$. If $\delta > 0$ is sufficiently small, then the following statements hold. 1. For every $y_0 \in E_0(s)$, the map $\mathcal{G}_s^\eta(\cdot,y_0)$ has a unique fixed point $\hat{u}_s^\eta(y_0)$. 2. The map $\hat{u}_s^\eta: E_0(s) \rightarrow \mathop{\mathrm{BC}}_s^\eta$ is globally Lipschitz and $\hat{u}_s^\eta(0) = 0$. *Proof.* Fix $\varepsilon \in (0,\eta]$. 
For $u,v \in \mathop{\mathrm{BC}}_s^\eta$ and $y_0, z_0 \in E_0(s)$, we have $$\begin{aligned} \|\mathcal{G}_s^\eta(u,y_0) - \mathcal{G}_s^\eta(v,z_0)\|_{\eta,s} &\leq \sup_{t \in \mathbb{R}}e^{-\eta|t - s|}\|U_0(t,s)(y_0 - z_0)\| + L_\delta\|\mathcal{K}_s^\eta\|\|u - v\|_{\eta,s} \\ &\leq C_\varepsilon\|y_0 - z_0\| + L_\delta\|\mathcal{K}_s^\eta\|\|u - v\|_{\eta,s}. \end{aligned}$$ To prove the first assertion, set $y_0 = z_0$ and choose $\delta > 0$ small enough such that $L_\delta\|\mathcal{K}_s^\eta\| \leq \frac{1}{2}$ () since then $$\|\mathcal{G}_s^\eta(u,y_0) - \mathcal{G}_s^\eta(v,y_0)\|_{\eta,s} \leq \frac{1}{2}\|u - v\|_{\eta,s}.$$ Since $\mathop{\mathrm{BC}}_s^\eta$ is a Banach space, the contracting mapping principle applies and so the contraction $\mathcal{G}_s^\eta(\cdot,y_0)$ has a unique fixed point, say $\hat{u}_s^\eta(y_0)$. To prove the second assertion, let $\hat{u}_s^\eta(y_0)$ and $\hat{u}_s^\eta(z_0)$ be the unique fixed points of $\mathcal{G}_s^\eta(\cdot,y_0)$ and $\mathcal{G}_s^\eta(\cdot,z_0)$ respectively. Then, $$\begin{aligned} \|\hat{u}_s^\eta(y_0) - \hat{u}_s^\eta(z_0)\|_{\eta,s} &= \|\mathcal{G}_s^\eta(\hat{u}_s^\eta(y_0),y_0) - \mathcal{G}_s^\eta(\hat{u}_s^\eta(z_0),z_0)\|_{\eta,s} \\ &\leq C_\varepsilon \|y_0 - z_0\| + \frac{1}{2}\|\hat{u}_s^\eta(y_0) - \hat{u}_s^\eta(z_0)\|_{\eta,s}. \end{aligned}$$ This implies that $\|\hat{u}_s^\eta(y_0) - \hat{u}_s^\eta(z_0)\|_{\eta,s} \leq 2C_\varepsilon \|y_0 - z_0\|$, and so $\hat{u}_s^\eta$ is globally Lipschitz. Since $\hat{u}_s^\eta(0) = \mathcal{G}_s^\eta(\hat{u}_s^\eta(0),0) = 0,$ the second assertion follows. ◻ In order to construct a center manifold, define the *center bundle* $E_0 := \{(s,y_0) \in \mathbb{R} \times \mathbb{R}^n : y_0 \in E_0(s)\}$ and the map $\mathcal{C}: E_0 \rightarrow \mathbb{R}^n$ by $$\label{eq:mapC} \mathcal{C}(s,y_0) := \hat{u}_s^\eta(y_0)(s).$$ [\[def:Wc\]]{#def:Wc label="def:Wc"} A *global center manifold* of [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"} is defined as the image $\mathcal{W}^c := \mathcal{C}(E_0)$, whose $s$-*fibers* are defined as $\mathcal{W}^c(s) := \{\mathcal{C}(s, y_0) \in \mathbb{R}^n : y_0 \in E_0(s)\}.$ Recall from that for a fixed $s \in \mathbb{R}$, the map $\hat{u}_s^\eta$ is globally Lipschitz. Hence, the map $\mathcal{C}(s,\cdot) : E_0(s) \to \mathbb{R}^n$ is globally Lipschitz, where the Lipschitz constant depends on $s$, i.e. $\mathcal{C}$ is only *fiberwise Lipschitz*. The following result shows that the Lipschitz constant can be chosen independently of the fiber, and so we can say that $\mathcal{W}^c$ is a Lipschitz global center manifold of [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"}. [\[lemma:lipschitzCMT\]]{#lemma:lipschitzCMT label="lemma:lipschitzCMT"} There exists a constant $L > 0$ such that $\|\mathcal{C}(s,y_0) - \mathcal{C}(s,z_0)\| \leq L \|y_0 - z_0\|$ for all $(s,y_0),(s,z_0) \in E_0$. *Proof.* Let $(s, y_0), (s, z_0) \in E_0$ be given. 
It follows from and that $$\begin{aligned} \|\mathcal{C}(s,y_0) - \mathcal{C}(s,z_0)\| &= \|\mathcal{G}_s^\eta(\hat{u}_s^\eta(y_0),y_0)(s) - \mathcal{G}_s^\eta(\hat{u}_s^\eta(z_0),z_0)(s)\| \\ &\leq \|y_0 - z_0\| + \|\mathcal{K}_{s}^\eta\|\|\tilde{R}_{\delta}(\hat{u}_s^\eta(y_0)) - \tilde{R}_{\delta}(\hat{u}_s^\eta(z_0))\|_{\eta,s} \\ &\leq \|y_0 - z_0\| + L_\delta\|\mathcal{K}_{s}^\eta\|\|\hat{u}_s^\eta(y_0) - \hat{u}_s^\eta(z_0)\|_{\eta,s} \\ &\leq (1 + 2C_\varepsilon L_\delta \|\mathcal{K}_{s}^\eta\|)\|y_0 - z_0\|. \end{aligned}$$ Hence, $L := 1 + 2C_\varepsilon L_\delta \|\mathcal{K}_{s}^\eta\|$ is a Lipschitz constant that is independent of $s$ by . ◻ Recall from the definition of the $\delta$-modification of $R$ that $R_\delta = R$ on $\mathbb{R} \times B(0,\delta)$. Hence, the modified integral equation [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"} is equivalent to the original integral equation [\[eq:integraleq\]](#eq:integraleq){reference-type="eqref" reference="eq:integraleq"}, and by to the ordinary differential equation [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}, on $B(0,\delta)$. [\[def:Wcloc\]]{#def:Wcloc label="def:Wcloc"} A *local center manifold* of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} is defined as the image $$\mathcal{W}^c_{\mathop{\mathrm{loc}}} := \mathcal{C}(\{(s,y_0) \in E_0 : \mathcal{C}(s,y_0) \in B(0,\delta)\}).$$ In the definitions of the center manifolds and their associated fiber bundles ( and ), we used the map $\mathcal{C}$ to explicitly construct these objects. However, sometimes one likes to think of the center manifold as the graph of a function. To obtain such a representation, define the map $\mathcal{H} : E_0 \to \mathbb{R}^n$ as $\mathcal{H}(s,y_0) := (I-\pi_0(s))\mathcal{C}(s,y_0)$ and notice from that we have the decomposition $\mathcal{C}(s,y_0) = y_0 + \mathcal{H}(s,y_0)$ in the nonhyperbolic and hyperbolic part respectively. Hence, we can write for example $$\begin{aligned} \label{eq:graphH} \mathcal{W}_c(s) = \{y_0 + \mathcal{H}(s,y_0) : y_0 \in E_0(s) \} \cong \{(y_0,\mathcal{H}(s,y_0)) : y_0 \in E_0(s) \} =\mbox{graph}(\mathcal{H}(s,\cdot)), \end{aligned}$$ and since $E_0(s)$ and $E_+(s) \oplus E_{-}(s)$ have only zero in their intersection (), this identification makes sense. Notice that the map $\mathcal{H}$, identified as a graph in [\[eq:graphH\]](#eq:graphH){reference-type="eqref" reference="eq:graphH"}, is strictly speaking a map that takes values in $E_{+}(s) \oplus E_{-}(s)$. Similar graph-like representations can be obtained for $\mathcal{W}^c$ and $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$. # Properties of the center manifold {#sec:properties} In this section, we prove that $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ is locally invariant and consists of slow dynamics. Moreover, we prove that the center manifold inherits the same order of smoothness as the nonlinearity $R$ and its tangent bundle is precisely the center bundle $E_0$. Lastly, we prove that the center manifold is $T$-periodic in a neighborhood of the origin. At the end of the section, we combine all the results to prove . Our first aim is to prove the local invariance property of $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$. 
Therefore, let $S_\delta(t,s,\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ denote the (time-dependent) flow of $$\label{eq:yRdelta} \dot{y}(t) = A(t)y(t) + R_\delta(t,y(t)).$$ Moreover, still holds when $R$ is replaced by $R_\delta$ and so the ordinary differential equation [\[eq:yRdelta\]](#eq:yRdelta){reference-type="eqref" reference="eq:yRdelta"} is equivalent to the integral equation [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"}. By (local) uniqueness of solutions, we have that [\[eq:Sts\]](#eq:Sts){reference-type="eqref" reference="eq:Sts"} still holds with $S$ replaced by $S_\delta$. The following result is the nonlinear analogue of and is a preliminary result to prove in the local invariance property of the center manifold. [\[prop:W\^c(s)\]]{#prop:W^c(s) label="prop:W^c(s)"} Let $\eta \in (0, \min\{-a,b\})$ and $s \in \mathbb{R}$. Then $$\begin{aligned} \mathcal{W}^c(s) = \{y_0 \in \mathbb{R}^n : \text{ there exists a solution of \eqref{eq:yRdelta}} \text{ through $y_0$ belonging to $\mathop{\mathrm{BC}}_s^\eta$}\}. \end{aligned}$$ *Proof.* Choose $y_0 \in \mathcal{W}^c(s)$, then $y_0 = \mathcal{C}(s, z_0) = \hat{u}_s^\eta(z_0)(s)$ for some $z_0 \in E_0(s)$. shows that $\mathcal{K}_s^\eta \tilde{R}_{\delta}(u)$ is the unique solution of [\[eq:inhomogeneous CMT\]](#eq:inhomogeneous CMT){reference-type="eqref" reference="eq:inhomogeneous CMT"} with $f = \tilde{R}_{\delta}(u)$. Since $u = \hat{u}_s^\eta(z_0)$ is a fixed point of $\mathcal{G}_s^\eta(\cdot,z_0)$, we get $$\begin{aligned} u(t) &= U(t,s)z_0 + (\mathcal{K}_s^\eta \tilde{R}_{\delta}(u))(t) \\ &= U(t,s)z_0 + U(t,s)(\mathcal{K}_s^\eta \tilde{R}_{\delta}(u))(s) + \int_s^tU(t, \tau){R}_{\delta}(\tau, u(\tau))d\tau \\ &= U(t,s)u(s) + \int_s^tU(t, \tau){R}_{\delta}(\tau, u(\tau))d\tau. \end{aligned}$$ for all $(t,s) \in \mathbb{R}^2$. Hence, $u = \hat{u}_s^\eta(z_0)$ is a solution of [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"}, and so [\[eq:yRdelta\]](#eq:yRdelta){reference-type="eqref" reference="eq:yRdelta"}, through $u(s) = y_0$ which belongs to $\mathop{\mathrm{BC}}_s^\eta$. Conversely, let $y_0 \in \mathbb{R}^n$ such that there exists a solution $u$ of [\[eq:yRdelta\]](#eq:yRdelta){reference-type="eqref" reference="eq:yRdelta"}, and so [\[eq:variation constants Rdelta\]](#eq:variation constants Rdelta){reference-type="eqref" reference="eq:variation constants Rdelta"}, in $\mathop{\mathrm{BC}}_s^\eta$ satisfying $u(s) = y_0$. It follows from that $$u(t) = U(t, s)\pi_0(s)u(s) + (\mathcal{K}_s^\eta \tilde{R}_{\delta}(u))(t).$$ Hence, $u = \mathcal{G}_s^\eta (u, \pi_0(s)u(s))$ so $y_0 = u(s) = \mathcal{C}(s, \pi_0(s)y_0) \in \mathcal{W}^c(s)$ by uniqueness of the fixed point. ◻ [\[prop:invariance\]]{#prop:invariance label="prop:invariance"} The local center manifold $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ has the following properties. 1. $\mathcal{W}^c_{\mathop{\mathrm{loc}}}$ is *locally invariant*: if $(s, y_0) \in \mathbb{R} \times \mathcal{W}^c_{\mathop{\mathrm{loc}}}$ and $t_{-},t_{+} \in \mathbb{R}$ with $s \in (t_{-}, t_+)$ such that $S(t, s, y_0) \in B(0, \delta)$ for all $t \in (t_{-},t_{+})$, then $S(t, s, y_0) \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$. 2. 
$\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ contains every solution of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} that exists on $\mathbb{R}$ and remains sufficiently small for all positive and negative time: if $u: \mathbb{R} \rightarrow B(0,\delta)$ is a solution of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"}, then $u(t) \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$ for all $t \in \mathbb{R}$. 3. If $(s, y_0) \in \mathbb{R} \times \mathcal{W}^c_{\mathop{\mathrm{loc}}}$, then $S(t, s, y_0) = \hat{u}_t^\eta(\pi_0(t)S(t, s, y_0))(t) = \mathcal{C}(t, \pi_0(t)S(t, s, y_0))$ for all $t \in (t_{-},t_{+})$. 4. $0 \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$ and $\mathcal{C}(t, 0) = 0$ for all $t \in \mathbb{R}$. *Proof.* We prove the four assertions step by step. 1\. By , choose a solution $u \in \mathop{\mathrm{BC}}_s^\eta$ of [\[eq:yRdelta\]](#eq:yRdelta){reference-type="eqref" reference="eq:yRdelta"} such that $u(s) = y_0$. Note that $u(s) = S_\delta(s, s, y_0)$, so by uniqueness $u(t) = S_\delta(t, s, y_0)$ for all $t \in (t_{-},t_{+})$. Then $S_\delta(t, s, y_0) \in \mathcal{W}^c(t) \subset \mathcal{W}^c$. Since $S_\delta(t, s, y_0) \in B(0, \delta)$, it follows that $S(t,s,y_0) = S_\delta(t, s, y_0) \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$. 2\. Recall that [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} and [\[eq:yRdelta\]](#eq:yRdelta){reference-type="eqref" reference="eq:yRdelta"} are equal on $B(0, \delta)$. If $u$ is such a solution, then $u \in \mathop{\mathrm{BC}}_{s}^\eta$. The assumption that $u$ takes values in $B(0,\delta)$ and together imply with the first assertion the result. 3\. In the proof of it is shown that $y_0 = \mathcal{C}(s, \pi_0(s)y_0)$ for any $y_0 \in \mathcal{W}^c(s)$. So it is certainly true for $y_0 \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$ that $S_\delta(s, s, y_0) = y_0 = \hat{u}_s^\eta(\pi_0(s)S_\delta(s, s, y_0))(s) = \mathcal{C}(s, \pi_0(s)S_\delta(s, s, y_0))$. Because $\mathcal{W}^c_{\mathop{\mathrm{loc}}}$ is locally invariant, we have that $S_\delta(t, s, y_0) \in \mathcal{W}^c_{\mathop{\mathrm{loc}}}$ for all $t \in \mathbb{R}$ sufficiently close to $s$ and by uniqueness of solutions, $S_\delta(t, s, y_0) = \hat{u}_t^\eta(\pi_0(t)S_\delta(t, s, y_0)) = \mathcal{C}(t, \pi_0(t)S_\delta(t, s, y_0))$. Since we are on the local center manifold, we can replace $S_\delta$ with $S$. 4\. Notice that $\mathcal{C}(t,0) = \hat{u}_t^\eta(0)(t) = 0$ for all $t \in \mathbb{R}$, where the last equality follows from . Clearly, $0 = \mathcal{C}(t,0) \in \mathcal{W}_{\mathop{\mathrm{loc}}}^{c}$ and so the proof is complete. ◻ It is now possible to explain the fact that the dynamics on the center manifold is rather slow. Indeed, the local invariance of $\mathcal{W}_{\mathop{\mathrm{loc}}}^{c}$ () in combination with shows that solutions on the center manifold are in $\mathop{\mathrm{BC}}_s^\eta$ for some sufficiently small $\eta > 0$, i.e. their asymptotic behavior forward and backward in time can only be a limited exponential. The next step is to show that the map $\mathcal{C}$ inherits the same order of smoothness as the time-dependent nonlinear perturbation $R$. Proving additional smoothness of center manifolds requires work. A well-known technique to increase smoothness of center manifolds is via the theory of contraction on scales of Banach spaces [@Vanderbauwhede1987]. Since this part of the theory is rather technical, it is delegated to Appendix A. 
The main result is presented in and simply states that the map $\mathcal{C}$ is $C^k$-smooth and so $\mathcal{W}^c$ and $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ are both $C^k$-smooth manifolds in $\mathbb{R}^n$. The additional regularity of the center manifold allows us to study its tangent bundle. [\[prop:bundle\]]{#prop:bundle label="prop:bundle"} The tangent bundle of $\mathcal{W}^c$ and $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ is $E_0$: $D_2\mathcal{C}(s, 0)y_0 = y_0$ for all $(s,y_0) \in E_0$. *Proof.* Let $\eta \in [\eta_{-},\eta_{+}] \subset (0,\min\{-a,b\})$ be such that $k \eta_{-} < \eta_{+}$. Differentiating $$\hat{u}_s^{\eta_{-}}(y_0) = U_s^{\eta_{-}} y_0 + \mathcal{K}_s^{{\eta_{-}}} \circ \tilde{R}_{\delta}(\hat{u}_s^{\eta_{-}}(y_0))$$ with respect to $y_0$ yields $$D\hat{u}_s^{\eta_{-}}(y_0) = U_s^{\eta_{-}} + \mathcal{K}_s^{{\eta_{-}}} \circ \tilde{R}_{\delta}^{{(1)}}(\hat{u}_s^{\eta_{-}}(y_0)) \circ D\hat{u}_s^{\eta_{-}}(y_0).$$ Setting $y_0 = 0$ and recalling the fact that $\hat{u}_s^{\eta_{-}}(0) = \tilde{R}_{\delta}^{(1)}(0) = 0$ shows that $D\hat{u}_s^{\eta_{-}}(0) = U_s^{\eta_{-}}$. If $\mathop{\mathrm{ev}}_s : \mathop{\mathrm{BC}}_s^\eta \to \mathbb{R}^n : f \mapsto f(s)$ denotes the bounded linear evaluation operator (at time $s$), then $$D_2\mathcal{C}(s,0) = \mathop{\mathrm{ev}}_s(D(\mathcal{J}_s^{\eta, \eta_{-}} \circ \hat{u}_{s}^{\eta_{-}})(0)) = \mathop{\mathrm{ev}}_s(U_s^{\eta}) = I,$$ which proves the claim. ◻ Since our original system [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} is $T$-periodic, it is not surprising that the center manifold itself is $T$-periodic in a neighborhood of zero. To prove this, let us define for all $s \in \mathbb{R}$ and sufficiently small $\delta > 0$ the map $N_s : E_0(s) \to E_0(s)$ by $$N_s(y_0) := \pi_0(s)S_\delta(s + T,s,\mathcal{C}(s,y_0)).$$ [\[lemma:invertibility\]]{#lemma:invertibility label="lemma:invertibility"} The function $N_s$ is invertible in a neighborhood of the origin. Moreover, this neighborhood can be written as $U \cap E_0(s)$ for some open neighborhood $U \subset \mathbb{R}^n$ of zero, independent of $s$. *Proof.* Recall the standard result that $D[S_\delta(t, s, \cdot)](0) = U(t,s)$ for all $(t,s) \in \mathbb{R}^2$, see for instance [@Hirsch2013 Section 17.6]. The differential of $N_s$ at $0$ is given by $$\begin{aligned} DN_s(0) &= \pi_0(s) \circ D[S_\delta(s + T, s, \cdot)](\mathcal{C}(s, 0))\circ D[\mathcal{C}(s,\cdot)](0) \\ &= \pi_0(s) \circ D[S_\delta(s + T, s, \cdot)](0)\\ &= \pi_0(s)U_0(s + T, s) = U_0(s + T, s), \end{aligned}$$ where we used , , and the fact that $U_0(s + T, s)y_0 \in E_0(s + T) = E_0(s)$. It follows from that $DN_s(0) = U_0(s+T,s)$ is a bounded linear isomorphism and so $N_s$ is locally invertible by the inverse function theorem. To prove that the neighborhood may be written as claimed, let us first observe that for a given $\varepsilon > 0$ there holds $$\begin{aligned} \| DN_s(y_0) - DN_s(0) \| &\leq \|U_0(s+T,s)\pi_{0}(s) \| \| D\mathcal{C}(s,y_0) - D\mathcal{C}(s,0) \| \\ & \leq N C_\varepsilon e^{\varepsilon T} \| D\mathcal{C}(s,y_0) - D\mathcal{C}(s,0) \| \\ & \leq N C_\varepsilon e^{\varepsilon T} L(1) \|y_0\| \to 0, \quad \mbox{ as } y_0 \to 0, \end{aligned}$$ due to and . Hence, $DN_s(y_0) \to DN_s(0)$ uniformly in $s$ as $y_0 \to 0$, and so the local inverse may be defined on a neighborhood that does not depend on $s$. 
◻ There exists a $\delta > 0$ such that $\mathcal{C}(s + T, y_0) = \mathcal{C}(s, y_0)$ for all $(s,y_0) \in E_0$ satisfying $\|y_0\| \leq \delta$. *Proof.* Let $(s,y_0) \in E_0$ be given. By , choose $\delta > 0$ such that if $\|y_0\| \leq \delta$, it is possible to write $y_0 = N_s(z_0)$. It follows from , and [\[eq:Sts\]](#eq:Sts){reference-type="eqref" reference="eq:Sts"} that $$\begin{aligned} \mathcal{C}(s + T, y_0) &= \mathcal{C}(s + T, \pi_0(s)S_\delta(s + T, s, \mathcal{C}(s, z_0))) \\ &= S_\delta(s + T, s, \mathcal{C}(s, z_0)) \\ &= S_\delta(s, s - T, \mathcal{C}(s, z_0)) \\ &= \mathcal{C}(s, \pi_0(s)S_\delta(s, s - T, \mathcal{C}(s, z_0))) \\ &= \mathcal{C}(s, \pi_0(s)S_\delta(s + T, s, \mathcal{C}(s, z_0))) = \mathcal{C}(s, y_0). \end{aligned}$$ This proves the $T$-periodicity of the center manifold. ◻ Recall that [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} was just a time-dependent translation of [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} via the given periodic solution. Hence, if $x$ is a solution of [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} then $y = x - \gamma$ is a solution of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} and so $$\begin{aligned} \mathcal{W}_{\mathop{\mathrm{loc}}}^{c}(\Gamma) := \{ \gamma(s) + \mathcal{C}(s,y_0) \in \mathbb{R}^n: (s,y_0) \in E_0 \mbox{ and } \mathcal{C}(s,y_0) \in B(0,\delta) \} \end{aligned}$$ is a $T$-periodic $C^k$-smooth $(n_0+1)$-dimensional manifold in $\mathbb{R}^n$ defined in the vicinity of $\Gamma$ for a sufficiently small $\delta > 0$. To see this, recall that $\gamma$ is $T$-periodic and $C^k$-smooth together with the fact that $\mathcal{C}$ is $T$-periodic in the first component and $C^k$-smooth. Recall from that $\mathcal{C}(t,0) = 0$ and so $\Gamma \subset \mathcal{W}_{\mathop{\mathrm{loc}}}^{c}(\Gamma)$. We call $\mathcal{W}_{\mathop{\mathrm{loc}}}^{c}(\Gamma)$ a *local center manifold around $\Gamma$* and notice that this manifold inherits all the properties of $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$, which proves . # Examples and counterexamples {#sec:examples} It is widely known that center manifolds for equilibria have interesting qualitative properties [@Guckenheimer1983; @Kuznetsov2023a; @Sijbrand1985]. For example, such center manifolds are not necessarily unique and are not necessarily of the class $C^{\infty}$ even if the vector field is $C^{\infty}$-smooth. Of course, there always exists for $C^{\infty}$-smooth systems an open neighborhood $U_k$ around the equilibrium such that a center manifold is $C^k$-smooth on $U_k$. However, the neighborhood $U_k$ may shrink towards the equilibrium as $k \to \infty$, see [@Strien1979; @Takens1984; @Sijbrand1985] for explicit examples. When the vector field is analytic, there is the possibility of the existence of a non-analytic $C^{\infty}$-smooth center manifold, see [@Kelley1967; @Sijbrand1985] for explicit examples. It is studied in [@Sijbrand1985] under which conditions a unique, $C^{\infty}$-smooth or analytic center manifold exists. The aim of this section is to provide several explicit examples illustrating similar behavior for the periodic center manifolds. For example, we provide in an analytic $2\pi$-periodic two-dimensional center manifold near a nonhyperbolic cycle that is a cylinder. 
To complete the periodic two-dimensional center manifold theory, we provide in a system that admits a $2\pi$-periodic two-dimensional center manifold near a nonhyperbolic cycle that is a Möbius band. Both examples are minimal polynomial vector fields admitting a cylinder or Möbius band as a periodic center manifold. To illustrate the existence of a non-unique and non-analytic $C^{\infty}$-smooth periodic center manifold, we will study the analytic nonlinear periodically driven system $$\label{eq:Ex2} \begin{dcases} \dot{x} = -x^2, \\ \dot{y} = -y + \sin(t)x^2. \end{dcases}$$ This vector field is a modification of the vector field used in [@Kelley1967] to illustrate the non-unique behavior of center manifolds for equilibria. Note that the system [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} is not of the form [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} but already written in the style of [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} where $A$ is autonomous and $R$ is $2\pi$-periodic in the first argument. The assumption of an autonomous linear part is not a restriction. Indeed, the Floquet normal form $U(t,0) = Q(t)e^{Bt}$, where $Q$ is $T$-periodic, $Q(0) = I$, $Q(t)$ is invertible for all $t \in \mathbb{R}$, and the matrix $B \in \mathbb{C}^{n \times n}$ that satisfies $U(T,0) = e^{BT}$ shows that a general system of the form [\[eq:A(t) R\]](#eq:A(t) R){reference-type="eqref" reference="eq:A(t) R"} is equivalent to the nonlinear periodically driven system $$\label{eq:transformedA(t)R} \dot{z}(t) = Bz(t) + G(t,z(t)),$$ where $G(t,z(t)) := Q(t)^{-1}R(t,Q(t)z(t))$, and $z$ satisfies the Lyapunov-Floquet transformation $y(t) = Q(t)z(t)$. Clearly [\[eq:transformedA(t)R\]](#eq:transformedA(t)R){reference-type="eqref" reference="eq:transformedA(t)R"} has an autonomous linear part and $G$ is $T$-periodic in the first component since $R$ and $Q$ are both $T$-periodic. Notice that the whole periodic center manifold construction from previous subsections still applies for systems of the form [\[eq:transformedA(t)R\]](#eq:transformedA(t)R){reference-type="eqref" reference="eq:transformedA(t)R"}. The reason we study systems of the form [\[eq:transformedA(t)R\]](#eq:transformedA(t)R){reference-type="eqref" reference="eq:transformedA(t)R"} instead of a general system of the form [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"} is to keep the calculations rather simple. Indeed, if one would like to cook up an explicit non-trivial example of a periodic center manifold near a nonhyperbolic cycle, one needs to be able to compute explicitly the periodic solution, the fundamental matrix and its associated Floquet multipliers, which is rather difficult for general systems of the form [\[eq:ODE\]](#eq:ODE){reference-type="eqref" reference="eq:ODE"}. We remark that the computations for the periodic center manifolds of simple periodically driven systems considered in this section are rather tedious compared with their equilibrium analogues. In we show that [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} admits a non-analytic $2\pi$-periodic center manifold. Next, we show in that [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} admits a $2\pi$-periodic center manifold that is *locally (non)-unique*, i.e. there exist subneighborhoods of any neighborhood of $\mathbb{R} \times \{0\}$ where the center manifold is unique and others where it is not unique. 
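The reduction to an autonomous linear part via the Lyapunov-Floquet transformation can also be carried out numerically. The following sketch (Python with NumPy/SciPy) computes the monodromy matrix $U(T,0)$ of a $T$-periodic linear system, a matrix $B$ with $U(T,0) = e^{BT}$, and checks that the factor $Q(t) = U(t,0)e^{-Bt}$ is $T$-periodic; the coefficient matrix $A(t)$ used here is an arbitrary illustrative choice and not one of the systems studied in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T = 2 * np.pi  # period of the coefficient matrix

def A(t):
    # illustrative 2*pi-periodic coefficient matrix (not taken from the paper)
    return np.array([[0.0, 1.0], [-1.0 + 0.3 * np.cos(t), -0.1]])

def U(t):
    """Principal fundamental matrix solution U(t, 0) of Y' = A(t) Y with Y(0) = I."""
    if t == 0.0:
        return np.eye(2)
    rhs = lambda s, y: (A(s) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

M = U(T)                        # monodromy matrix U(T, 0)
print("Floquet multipliers:", np.linalg.eigvals(M))

B = logm(M) / T                 # one choice of B with U(T, 0) = exp(B T); may be complex
for t in (0.0, 0.3 * T, 0.7 * T):
    Q_t = U(t) @ expm(-B * t)
    Q_tT = U(t + T) @ expm(-B * (t + T))
    print(f"t = {t:6.3f}   ||Q(t) - Q(t+T)|| = {np.linalg.norm(Q_t - Q_tT):.2e}")
```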
This freedom of different center manifolds allows us to choose in a particular $2\pi$-periodic center manifold for [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} that is $C^{\infty}$-smooth. Hence, we have shown that analytic vector fields can admit non-analytic $C^{\infty}$-smooth periodic center manifolds. To complete this list of examples, we will show in that the $C^{\infty}$-smooth (analytic) nonlinear periodically driven system $$\label{eq:Exshrinking} \begin{dcases} \dot{x} = xz - x^3, \\ \dot{y} = y + (1+\sin(t))x^2, \\ \dot{z} = 0, \end{dcases}$$ admits a $2\pi$-periodic non-$C^\infty$-smooth center manifold. This vector field is a modification of the vector field used in [@Strien1979] to illustrate the non-$C^\infty$-smoothness of center manifolds near equilibria. Hence, we have proven that there exist analytic vector fields admitting locally (non)-unique, (non)-$C^{\infty}$-smooth and (non)-analytic periodic center manifolds. [\[ex:analytic\]]{#ex:analytic label="ex:analytic"} The analytic system $$\label{eq:Ex1} \begin{dcases} \dot{x}_1 = x_1 - x_2 - x_1(x_1^2 + x_2^2), \\ \dot{x}_2 = x_1 + x_2 - x_2(x_1^2 + x_2^2), \\ \dot{x}_3 = 0, \end{dcases}$$ admits an analytic $2\pi$-periodic two-dimensional center manifold that is a cylinder. *Proof.* Notice that [\[eq:Ex1\]](#eq:Ex1){reference-type="eqref" reference="eq:Ex1"} admits a $2\pi$-periodic solution $\gamma(t) = (\cos(t),\sin(t),0)$. The system around $\Gamma = \gamma(\mathbb{R})$ can be written in coordinates $x = \gamma + y$ as $$\label{eq:Ex1ODE} \begin{dcases} \dot{y}_1 = -2\cos^2(t)y_1 -(1+2\sin(t)\cos(t))y_2 - 3\cos(t)y_1^2 -2 \sin(t)y_1y_2 - \cos(t)y_2^2 - y_1^3 -y_1y_2^2,\\ \dot{y}_2 = (1-2\sin(t) \cos(t))y_1 -2 \sin^2(t)y_2 - \sin(t)y_1^2 -2\cos(t)y_1y_2 -3 \sin(t)y_2^2 -y_1^2y_2 - y_2^3, \\ \dot{y}_3 = 0, \end{dcases}$$ where $y:= (y_1,y_2,y_3)$ and $x := (x_1,x_2,x_3)$. The linearization around the origin of [\[eq:Ex1ODE\]](#eq:Ex1ODE){reference-type="eqref" reference="eq:Ex1ODE"} reads $$A(t) = \begin{pmatrix} -2 \cos^2(t) & -2\sin(t)\cos(t)-1 & 0 \\ -2 \sin(t)\cos(t) + 1 & -2\sin^2(t) & 0 \\ 0 & 0 & 0 \end{pmatrix}$$ and so the solution of the variational equation around $\Gamma$ is generated by the fundamental matrix $U(t,s) = V(t)V(s)^{-1}$ where $$V(t) = \begin{pmatrix} e^{-2t}\cos(t) & - \sin(t) & 0 \\ e^{-2t}\sin(t) & \cos(t) & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$ The Floquet multipliers are given by $\lambda_1 = 1, \lambda_2 = e^{-4\pi}$ and $\lambda_3 = 1$. Hence, the center subspace and stable subspace (at time $t$) can be obtained as $E_0(t) = \mathop{\mathrm{span}}\{\zeta_1(t),\zeta_3(t) \}$ and $E_{-}(t) = \mathop{\mathrm{span}}\{\zeta_2(t) \}$ respectively, where $\zeta_1(t) = (-\sin(t),\cos(t),0), \zeta_2(t) = (\cos(t),\sin(t),0)$ and $\zeta_3(t) = (0,0,1)$. The center bundle $E_0$ parametrizes a cylinder as a ruled surface since $$(x_1(t,v),x_2(t,v),x_3(t,v)) = \gamma(t) + v \zeta_3(t),$$ for all $t,v \in \mathbb{R}$. It follows from that for any $k \geq 1$ there exists a $2\pi$-periodic $C^k$-smooth two-dimensional locally invariant center manifold $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ for [\[eq:Ex1ODE\]](#eq:Ex1ODE){reference-type="eqref" reference="eq:Ex1ODE"} around the origin that is tangent to $E_0$. To obtain this center manifold, let us transform [\[eq:Ex1ODE\]](#eq:Ex1ODE){reference-type="eqref" reference="eq:Ex1ODE"} into the eigenbasis, i.e. 
we perform the change of variables from $y$ to $z := (z_1,z_2,z_3)$ as $$\begin{aligned} \label{eq:Ex1variables} z_1 &= -\sin(t) y_1 + \cos(t)y_2, \nonumber\\ z_2 &= \cos(t)y_1 + \sin(t)y_2, \\ z_3 &= y_3 \nonumber, \end{aligned}$$ to obtain the autonomous system $$\label{eq:Ex1autono} \begin{dcases} \dot{z}_1 = -z_1(z_1^2 +z_2^2 + 2z_2), \\ \dot{z}_2 = -(z_2+1)(z_1^2 +z_2^2 + 2z_2), \\ \dot{z}_3 = 0. \end{dcases}$$ The $z_1z_3$-plane corresponds to the center subspace while the $z_2$-axis corresponds to the stable subspace. Therefore, the center manifold is parametrized by $z_2(t) = \mathcal{H}(t,z_1,z_3)$, where $\mathcal{H}$ is $2\pi$-periodic in the first variable and consists solely of nonlinear terms in the last two variables. Because [\[eq:Ex1autono\]](#eq:Ex1autono){reference-type="eqref" reference="eq:Ex1autono"} is an autonomous system, we have that $\mathcal{H}$ is constant in the first variable, and so we can write $\mathcal{H}(t,z_1,z_3) = H(z_1,z_3)$ for all $t \in \mathbb{R}$. Because $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ is locally invariant, we must have that $$\begin{aligned} z_1(z_1^2 + H(z_1,z_3)^2 + 2H(z_1,z_3))\frac{\partial}{\partial z_1}H(z_1,z_3) = (H(z_1,z_3)+1)(z_1^2 +H(z_1,z_3)^2 + 2H(z_1,z_3)). \end{aligned}$$ If $z_1^2 + H(z_1,z_3)^2 + 2H(z_1,z_3) \neq 0$, then $H$ must satisfy $$\frac{\partial}{\partial z_1}H(z_1,z_3) = \frac{1}{z_1}(H(z_1,z_3)+1), \quad H(0,0) = 0,$$ which has obviously no solution. Hence $z_1^2 + (H(z_1,z_3)+1)^2 = 1$ and so $H(z_1,z_3) = \sqrt{1-z_1^2} -1$ since $H(0,0) = 0$. Clearly $H$ is analytic on $(-1,1) \times \mathbb{R}$ since $$H(z_1,z_3) = \sum_{k=1}^\infty (-1)^k \binom{\frac{1}{2}}{k} z_1^{2k},$$ for all $(z_1,z_3) \in (-1,1) \times \mathbb{R}$ due to the Binomial series. Hence, $\mathcal{H}$ is analytic on $\mathbb{R} \times (-1,1) \times \mathbb{R}$, which proves the claim. Transforming the map $\mathcal{H}$ back into original $y$-coordinates using [\[eq:Ex1variables\]](#eq:Ex1variables){reference-type="eqref" reference="eq:Ex1variables"} shows that the center manifold $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ is parametrized as $$\begin{aligned} \label{eq:Ex1inveq} (y_1(t) + \cos(t))^2 + (y_2(t)+\sin(t))^2 = 1, y_3(t) = c_3, \quad c_3 \in \mathbb{R}. \end{aligned}$$ Writing this back into $x$-coordinates yields $$\begin{aligned} (x_1(t),x_2(t),x_3(t)) = (y_1(t) + \cos(t), y_2(t)+ \sin(t), c_3), \quad c_3 \in \mathbb{R}, \end{aligned}$$ but due to [\[eq:Ex1inveq\]](#eq:Ex1inveq){reference-type="eqref" reference="eq:Ex1inveq"} we have that $x_1(t)^2 + x_2(t)^2 = 1$ and so $$\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma) = \{(x_1,x_2,x_3) \in \mathbb{R}^3 : x_1^2 + x_2^2 = 1 \}.$$ Notice that $\mathcal{W}_{\mathop{\mathrm{loc}}}^c$ and $\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma)$ do not depend on a choice of $\delta > 0$. The reason is clear as for example the cylinder $\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma)$ is an invariant manifold of [\[eq:Ex1\]](#eq:Ex1){reference-type="eqref" reference="eq:Ex1"} since the function $V : \mathbb{R}^3 \to \mathbb{R}$ defined by $$V(x_1,x_2,x_3) := x_1^2 + x_2^2 - 1$$ is constant along the trajectories whose points are contained in $\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma) = V^{-1} (\{0\})$. 
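The two closing observations of this proof can be confirmed symbolically. The snippet below (Python with SymPy; a verification sketch, not part of the original argument) checks that the orbital derivative of $V(x) = x_1^2 + x_2^2 - 1$ along the vector field of [\[eq:Ex1\]](#eq:Ex1){reference-type="eqref" reference="eq:Ex1"} is a multiple of $V$, so that the cylinder $V^{-1}(\{0\})$ is invariant, and that $H(z_1,z_3) = \sqrt{1-z_1^2}-1$ annihilates the common factor $z_1^2 + H^2 + 2H$.

```python
import sympy as sp

x1, x2, x3, z1, z3 = sp.symbols('x1 x2 x3 z1 z3', real=True)

# vector field of the cylinder example
f = sp.Matrix([x1 - x2 - x1 * (x1**2 + x2**2),
               x1 + x2 - x2 * (x1**2 + x2**2),
               0])
V = x1**2 + x2**2 - 1

# orbital derivative of V along f: it equals -2*(x1**2 + x2**2)*V,
# so V is constant along trajectories starting on the cylinder {V = 0}
dVdt = (sp.Matrix([V]).jacobian([x1, x2, x3]) * f)[0]
print(sp.simplify(dVdt + 2 * (x1**2 + x2**2) * V))   # 0

# the graph z2 = H(z1, z3) makes the common factor z1**2 + H**2 + 2*H vanish
H = sp.sqrt(1 - z1**2) - 1
print(sp.simplify(z1**2 + H**2 + 2 * H))             # 0
```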
◻ [\[ex:mobius\]]{#ex:mobius label="ex:mobius"} The analytic system $$\label{eq:Exmobius} \begin{dcases} \dot{x}_1 = - x_2 + x_1 \Phi(x_1,x_2,x_3), \\ \dot{x}_2 = x_1 + x_2 \Phi(x_1,x_2,x_3),\\ \dot{x}_3 = \frac{1}{4}(1-\sigma x_2)(x_1^2 + x_2^2 - 1) + \frac{\sigma x_3}{2}(1+x_1), \end{dcases}$$ where $$\Phi(x_1,x_2,x_3) := \frac{\sigma}{4}(1-x_1)(x_1^2 + x_2^2 - 1) - \frac{x_3}{2}(1+\sigma x_2),$$ admits for $\sigma \neq 0$ a $2 \pi$-periodic two-dimensional $C^k$-smooth center manifold that is locally diffeomorphic to a Möbius band for every $k \geq 1$. If $\sigma = 0$, then the $2\pi$-periodic center manifold is the whole state space $\mathbb{R}^3$ and [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} admits a family of invariant tori. *Proof.* Notice that [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} admits for all $\sigma \in \mathbb{R}$ a $2\pi$-periodic solution $\gamma_\sigma(t) = (\cos(t),\sin(t),0)$. Hence, the solution of the variational equation around $\Gamma_\sigma$ is generated by the fundamental matrix $U_\sigma(t,s) = V_\sigma(t)V_\sigma(s)^{-1}$ where $$V_\sigma(t) = \begin{pmatrix} \cos(t) \cos(\frac{t}{2}) & - \sin(t) & -e^{ \sigma t}\sin(\frac{t}{2})\cos(t) \\[1ex] \sin(t)\cos(\frac{t}{2}) & \cos(t) & -e^{ \sigma t}\sin(\frac{t}{2})\sin(t) \\[1ex] \sin(\frac{t}{2}) & 0 & e^{\sigma t} \cos(\frac{t}{2}) \end{pmatrix}.$$ The Floquet multipliers are given by $\lambda_{1} = 1, \lambda_{2} = -1$ and $\lambda_{3,\sigma} = -e^{2 \pi \sigma }$. Let $E_0^\sigma(t)$ and $E_{\pm}^\sigma(t)$ denote the center and (un)stable subspace (at time $t$) at parameter value $\sigma$ respectively. For the center subspace, we have that $E_{0}^\sigma(t) = \mathop{\mathrm{span}}\{ \zeta_1(t),\zeta_2(t) \}$ for $\sigma \neq 0$ and $E_{0}^0(t) = \mathop{\mathrm{span}}\{ \zeta_1(t),\zeta_2(t),\zeta_3(t) \}$. For the (un)stable subspace we obtain $E_{-}^{\sigma}(t) = \mathop{\mathrm{span}}\{ \zeta_3(t) \}$ for $\sigma < 0$ and $E_{+}^{\sigma}(t) = \mathop{\mathrm{span}}\{ \zeta_3(t) \}$ for $\sigma > 0$ where $$\begin{aligned} \zeta_1(t) &= (-\sin(t),\cos(t),0), \\ \zeta_2(t) &= \bigg( \cos(t)\cos(\frac{t}{2}), \sin(t)\cos(\frac{t}{2}), \sin(\frac{t}{2}) \bigg), \\ \zeta_3(t) &= \bigg(-\cos(t)\sin(\frac{t}{2}), -\sin(t)\sin(\frac{t}{2}), \cos(\frac{t}{2}) \bigg). \end{aligned}$$ Notice that the eigenvector $\zeta_3(t)$ is perpendicular to the plane spanned by $\zeta_1(t)$ and $\zeta_2(t)$. Let us first discuss the case $\sigma \neq 0$. Observe that the center bundle $E_{0}^\sigma$ at parameter value $\sigma$ parametrizes locally a Möbius band as a ruled surface since $$(x_1(t,v),x_2(t,v),x_3(t,v)) = \gamma_\sigma(t) + v \zeta_2(t),$$ for all $t \in \mathbb{R}$ and $v \in [-1,1]$. provides us, for any $k \geq 1$, with a $2\pi$-periodic $C^k$-smooth two-dimensional locally invariant center manifold $\mathcal{W}_{\mathop{\mathrm{loc}}}^c(\Gamma_\sigma)$ for [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} around $\Gamma_\sigma$ tangent to the center bundle $E_{0}^\sigma$ that is locally diffeomorphic to a Möbius band, see . When $\sigma = 0$, it is clear that the Floquet multipliers are all on the unit circle where $1$ is simple and $-1$ has algebraic multiplicity $2$. Hence, the $2\pi$-periodic center manifold is $3$-dimensional, i.e. the whole state space $\mathbb{R}^3$. 
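The forward orbits shown in the accompanying figure can be reproduced by direct numerical integration of [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"}. The sketch below (Python with NumPy/SciPy) does this for $\sigma = -1$; the initial conditions and the integration time are illustrative choices. Since $|\lambda_{3,\sigma}| = e^{2\pi\sigma} < 1$ for $\sigma < 0$, forward orbits are attracted towards the two-dimensional center manifold around $\Gamma_\sigma$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mobius_rhs(t, x, sigma):
    x1, x2, x3 = x
    phi = sigma / 4 * (1 - x1) * (x1**2 + x2**2 - 1) - x3 / 2 * (1 + sigma * x2)
    return [-x2 + x1 * phi,
            x1 + x2 * phi,
            (1 - sigma * x2) * (x1**2 + x2**2 - 1) / 4 + sigma * x3 / 2 * (1 + x1)]

sigma = -1.0
t_final = 40 * np.pi                                             # 20 periods (illustrative)
starts = [(1.1, 0.0, 0.1), (0.0, 0.85, -0.1), (0.9, 0.0, 0.2)]   # points near the cycle

for x0 in starts:
    sol = solve_ivp(mobius_rhs, (0.0, t_final), x0, args=(sigma,),
                    rtol=1e-9, atol=1e-12)
    print("x(0) =", x0, "  ->  x(t_final) =", np.round(sol.y[:, -1], 4))
```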
Moreover, [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} admits a family of invariant tori $\{\mathbb{T}_l : l \geq 0 \}$ at $\sigma = 0$ with major radius $1$ and minor radius $r_l$ since the function $V_{l} : \mathbb{R}^3 \to \mathbb{R}$ defined by $$\begin{aligned} V_{l}(x_1,x_2,x_3) &:= \bigg(\sqrt{x_1^2 + x_2^2} - 1\bigg)^2 + x_3^2 - r_l^2\bigg(\sqrt{x_1^2 + x_2^2}\bigg) \end{aligned}$$ where $$r_l^2(u) := l + \frac{1}{2}u(u-4)+\ln(u) + \frac{3}{2},$$ is constant along the trajectories whose points are contained in $\mathbb{T}_l := V_{l}^{-1}(\{0\})$. We claim that the family of tori is rooted at the cycle, i.e. $\mathbb{T}_0 = \Gamma_0$. It is clear that solving $V_0(x_1,x_2,x_3) = 0$ is equivalent to $$\frac{1}{2}u^2 - \ln(u) = \frac{1}{2} - z^2, \quad u = \sqrt{x_1^2 + x_2^2},$$ which has only one real solution at $z = 0$ since $\frac{1}{2}u^2 - \ln(u) \geq \frac{1}{2}$ for all $u \geq 0$. Clearly this solution corresponds to the cycle $\Gamma_0$. Examples of such invariant tori can be found in . ◻ ![The left figure represents several forward orbits on a local $2\pi$-periodic two-dimensional center manifold (Möbius band) around $\Gamma_{\sigma}$ for [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} at parameter value $\sigma = -1$. The right figure represents two forward orbits on two different invariant tori for [\[eq:Exmobius\]](#eq:Exmobius){reference-type="eqref" reference="eq:Exmobius"} at parameter value $\sigma = 0$. The forward orbits are obtained by numerical integration and each orbit is represented by different color.](MobiusTorus.jpg){#fig:cmmobius width="15cm"} [\[ex:nonanalytic\]]{#ex:nonanalytic label="ex:nonanalytic"} The analytic system [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} admits a non-analytic $2\pi$-periodic center manifold. *Proof.* The fundamental matrix for the linearization around the origin of [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} reads $$U(t,s) = \begin{pmatrix} 1 & 0 \\ 0 & e^{s-t}\\ \end{pmatrix}$$ for all $(t,s) \in \mathbb{R}^2$. The Floquet multipliers are given by $\lambda_1 = 1$ and $\lambda_2 = e^{-2\pi}$ and the center space (at time $t$) is given by $E_0(t) = \mathop{\mathrm{span}}\{(1,0)\}$ while the stable space (at time $t$) is given by $E_{-}(t) = \mathop{\mathrm{span}}\{(0,1)\}$. Hence, the $x$-axis corresponds to the center space and so the center manifold can be parametrized by $y(t) = \mathcal{H}(t,x)$, where $\mathcal{H}$ is $2\pi$-periodic in the first argument and consists solely of nonlinear terms. Because the center manifold is locally invariant, the map $\mathcal{H}$ must satisfy $$\label{eq:Ex2invariance} \frac{\partial \mathcal{H}}{\partial t}(t, x) - x^2 \frac{\partial \mathcal{H}}{\partial x}(t, x) = -\mathcal{H}(t, x) + \sin(t)x^2.$$ Assume that $\mathcal{H}$ is analytic on an open neighborhood of $\mathbb{R} \times \{ 0 \}$, then we can write locally $\mathcal{H}(t,x) = \sum_{n \geq 2} a_n(t)x^n$ for $2\pi$-periodic functions $a_n$. Filling this expansion into [\[eq:Ex2invariance\]](#eq:Ex2invariance){reference-type="eqref" reference="eq:Ex2invariance"} and comparing terms in $x^n$ shows that the $2\pi$-periodic functions $a_n$ must satisfy $$\begin{dcases} \dot{a}_2(t) + a_2(t) = \sin(t), \quad &n = 2, \\ \dot{a}_n(t) + a_n(t) = (n-1)a_{n-1}(t), \quad &n \geq 3. 
\end{dcases}$$ Hence, $a_2(t) = \alpha_2 \sin(t) + \beta_2 \cos(t)$, where $\alpha_2 = \frac{1}{2}$ and $\beta_2 = -\frac{1}{2},$ and $$\begin{aligned} a_n(t) &= \frac{(n-1)e^{-t}}{e^{2\pi}-1} \bigg( \int_t^{2\pi} e^{\tau} a_{n-1}(\tau) d\tau + e^{2\pi} \int_0^t e^{\tau} a_{n-1}(\tau) d\tau \bigg). \end{aligned}$$ Let us prove by induction for $n \geq 2$ that $a_n$ is a linear combination of sines and cosines. If $n = 2$, then the result is clear. Assume that the claim holds for a certain $n \geq 3$, it follows from the induction hypothesis and applying integration by parts twice on both integrals that $$\begin{aligned} a_{n}(t)&= \frac{(n-1) e^{-t}}{e^{2\pi}-1} \bigg( \int_t^{2\pi} e^{\tau} (\alpha_{n-1} \sin(\tau) + \beta_{n-1} \cos(\tau)) d\tau + e^{2\pi} \int_0^t e^{\tau} (\alpha_{n-1} \sin(\tau) + \beta_{n-1} \cos(\tau)) d\tau \bigg) \\ &= \frac{n-1}{2}((\alpha_{n-1} + \beta_{n-1}) \sin(t) + (-\alpha_{n-1} + \beta_{n-1})\cos(t)), \end{aligned}$$ which proves the claim. From the proof of the induction step, we obtain $$\begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} = \frac{n-1}{2} \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \alpha_{n-1} \\ \beta_{n-1} \end{pmatrix}, \quad n \geq 3.$$ This is a linear system of difference equation and can be solved explicitly by computing the diagonalization of the associated matrix. The final result reads $$\begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} = \frac{(n-1)!}{2^{\frac{n-1}{2}}} \begin{pmatrix} \cos(\frac{(n-1)\pi}{4}) \\ - \sin(\frac{(n-1)\pi}{4}) \end{pmatrix}.$$ Hence, the $2\pi$-periodic functions $a_n$ are given by $$\begin{aligned} a_n(t) &= \frac{(n-1)!}{2^{\frac{{n-1}}{2}}}\bigg(\cos(\frac{(n-1)\pi}{4})\sin(t) - \sin(\frac{(n-1)\pi}{4}) \cos(t) \bigg), \quad n \geq 2. \end{aligned}$$ Using the angle addition and subtractions formula for cosines yields the center manifold expansion $$\label{eq:Ex2H} \mathcal{H}(t,x) = \sum_{n \geq 2} \frac{(n-1)!}{2^{\frac{{n-1}}{2}}} \sin(t - \frac{(n-1)\pi}{4}) x^n.$$ We will prove that the radius of convergence $R(t)$ at time $t \in \mathbb{R}$ of [\[eq:Ex2H\]](#eq:Ex2H){reference-type="eqref" reference="eq:Ex2H"} is zero. Let us first observe that for any $n \geq 2$ one has $$\sup_{k \geq n} \bigg(k!\bigg|\sin(t-\frac{k \pi}{4}) \bigg| \bigg)^{\frac{1}{k}} \geq ((4n)! | \sin(t) |)^{\frac{1}{4n}},$$ for $t \neq l \pi$ and $\ l \in \mathbb{Z}$. To bound this supremum from below when $t = l \pi$ for some $l \in \mathbb{Z}$, choose $m \in \mathbb{Z}$ such that $r = 2(l-1-m) \geq n$ because then $$\sup_{k \geq n} \bigg(k!\bigg|\sin(l\pi-\frac{k \pi}{4}) \bigg| \bigg)^{\frac{1}{k}} \geq (r!)^{\frac{1}{r}} \geq (n!)^{\frac{1}{n}}.$$ The Cauchy-Hadamard theorem tells us that $$\begin{aligned} \frac{1}{R(t)} &= \limsup\limits_{n \to \infty}|a_n(t)|^{\frac{1}{n}} \geq \sqrt{2} \min\{ \lim_{n \to \infty} (n! | \sin(t) |)^{\frac{1}{n}}, \lim_{n \to \infty} (n!)^{\frac{1}{n}} \} = \infty, \end{aligned}$$ where in the first argument of the minimum it is assumed that $t \neq l \pi$ for all $l \in \mathbb{Z}$. This proves $R(t) = 0$ for all $t \in \mathbb{R}$, i.e. $\mathcal{H}$ is not analytic. ◻ [\[ex:nonunique\]]{#ex:nonunique label="ex:nonunique"} The analytic system [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} admits a locally (non)-unique family of $2\pi$-periodic center manifolds. 
*Proof.* Recall from that the parametrization $\mathcal{H}$ of the center manifold must satisfy [\[eq:Ex2invariance\]](#eq:Ex2invariance){reference-type="eqref" reference="eq:Ex2invariance"} in an open neighborhood of $\mathbb{R} \times \{0\}$. To construct the map $\mathcal{H}$ explicitly, let us first introduce (formally) for arbitrary constants $\alpha,\beta \in \mathbb{R}$ the family of functions $I_{\alpha,\beta} : \mathbb{R} \times \mathbb{R} \setminus \{0\} \to \mathbb{R}$ as $$\begin{aligned} \label{eq:Ialphabeta} I_{\alpha,\beta}(t,x) &:= - \sqrt{2} e^{-\frac{1}{x}}\bigg( \cos(\frac{1}{x}-t) I_{\alpha}^1(x) + \sin(\frac{1}{x}-t) I_\beta^2(x) \bigg), \end{aligned}$$ where the functions $I_\alpha^1$ and $I_\beta^2$ are defined by $$\begin{aligned} \label{eq:Ialphabeta1} I_\alpha^1(x) := \int_{\alpha}^x \frac{e^{\frac{1}{s}}}{s}\sin(\frac{1}{s} + \frac{\pi}{4})ds, \quad I_\beta^2(x) := \int_{\beta}^x \frac{e^{\frac{1}{s}}}{s}\sin(\frac{1}{s} - \frac{\pi}{4}) ds. \end{aligned}$$ It turns out that the higher order derivatives of $I_{\alpha,\beta}(t,\cdot)$ will be important for the construction of the map $\mathcal{H}$. Therefore, let us first determine all values of $\alpha$ and $\beta$ for which [\[eq:Ialphabeta\]](#eq:Ialphabeta){reference-type="eqref" reference="eq:Ialphabeta"} is well-defined on $\mathbb{R} \times (-\infty,0)$. Clearly $I_{\alpha,\beta}$ is ill-defined on $\mathbb{R} \times (-\infty,0)$ whenever $\alpha,\beta > 0$ due to the singularities at zero for the functions defined in [\[eq:Ialphabeta1\]](#eq:Ialphabeta1){reference-type="eqref" reference="eq:Ialphabeta1"}. Hence, we must have that $\alpha,\beta \leq 0$. We will show that $\alpha = \beta = 0$ are the only values for which $I_{\alpha,\beta}$ is well-defined on $\mathbb{R} \times (-\infty,0)$. Notice that $$\label{eq:limI0} \bigg|e^{-\frac{1}{x}} \cos(\frac{1}{x}-t)I_0^1(x) \bigg| \leq e^{-\frac{1}{x}} \int_x^0 \frac{e^{\frac{1}{s}}}{s} ds \to 0,$$ as $x \uparrow 0$ by an application of L'Hôpital's rule. A similar computation for the second term in [\[eq:Ialphabeta\]](#eq:Ialphabeta){reference-type="eqref" reference="eq:Ialphabeta"} shows that $I_{0,0}$ is well-defined. To show that the function $I_{\alpha,\beta}$ is ill-defined on $\mathbb{R} \times (-\infty,0)$ for all $\alpha,\beta < 0$, consider for a fixed $t \in \mathbb{R}$ the sequence $(x_m)_{m \geq m_0}$ defined by $x_m := \frac{1}{t - m \pi}$, where the integer $m_0 \geq 0$ is chosen large enough to guarantee that $t - m_0 \pi < 0$. Hence, $$\begin{aligned} \label{eq:xmintegral} e^{-\frac{1}{x_m}} \cos(\frac{1}{x_m}-t)I_\alpha^1(x_m) = (-1)^m e^{-\frac{1}{x_m}} \int_\alpha^{x_m} \frac{e^{\frac{1}{s}}}{s} \sin(\frac{1}{s} + \frac{\pi}{4}) ds. \end{aligned}$$ Because the integrand is continuous and bounded above on $[\alpha,x_m] \subset [\alpha,0)$ for large enough $m \geq m_0$, it can be extended from the left continuously at zero such that it attains the value $M_{\alpha} < \infty$. If we set the integral in [\[eq:xmintegral\]](#eq:xmintegral){reference-type="eqref" reference="eq:xmintegral"} to be $M_\alpha^m$, then $M_\alpha^m \to M_\alpha$ when $m \to \infty$. Hence, $$e^{-\frac{1}{x_m}} \cos(\frac{1}{x_m}-t)I_\alpha^1(x_m) = e^{-\frac{1}{x_m}} (-1)^m M_\alpha^m$$ which is undetermined when $m \to \infty$. 
A similar reasoning shows that the second term in [\[eq:Ialphabeta\]](#eq:Ialphabeta){reference-type="eqref" reference="eq:Ialphabeta"} is ill-defined on $\mathbb{R} \times (-\infty,0)$ when $\beta < 0$ and so $I_{\alpha,\beta}$ is only well-defined on $\mathbb{R} \times (-\infty,0)$ when $\alpha = \beta = 0$. It can be proven similarly as in [\[eq:limI0\]](#eq:limI0){reference-type="eqref" reference="eq:limI0"} that $I_{\alpha,\beta}$ is well-defined on $\mathbb{R} \times (0,\infty)$ and that $\lim_{x \downarrow 0} I_{\alpha,\beta}(\cdot,x) = 0$ for all $\alpha,\beta > 0$. Our next goal is to determine the higher order partial derivatives of the second component of $I_{\alpha,\beta}$ evaluated at zero. A straightforward computation already shows that $$\label{eq:partialIab} \frac{\partial}{\partial x} I_{\alpha,\beta}(t,x) = \frac{\sqrt{2}}{x^2} \bigg(I_{\alpha,\beta} \bigg(t+\frac{\pi}{4},x \bigg) - x \sin(t+\frac{\pi}{4}) \bigg).$$ Let us write $I_{\alpha,\beta}(t,x) = \sum_{n=0}^N b_n(t)x^n + R_N(t,x)$ as a Taylor polynomial where $R_N$ is the remainder for some $N \in \mathbb{N}$. Filling in this Taylor polynomial into [\[eq:partialIab\]](#eq:partialIab){reference-type="eqref" reference="eq:partialIab"}, we see that $b_0$ is the zero function, $b_1(t) = \sin(t)$ and $b_{n}(t) = \frac{n-1}{\sqrt{2}}b_{n-1}(t-\frac{\pi}{4})$ for all $n=2,\dots,N$. This recurrence relation shows that $$\begin{aligned} \label{eq:partialsIab} \frac{\partial^n}{\partial x^n} I_{\alpha,\beta}(t,0) = n! b_n(t) = n!\frac{(n-1)!}{2^{\frac{n-1}{2}}} \sin(t-\frac{(n-1)\pi}{4}), \end{aligned}$$ for all $n=1,\dots,N$, where $N$ can be taken arbitrarily large. The construction of the map $\mathcal{H}$ will consist of two parts, namely $x \in (-\infty,0]$ and $x \in [0,\infty)$. For the first part, let $\phi : \mathbb{R} \to \mathbb{R}$ be any $2\pi$-periodic differentiable function and observe that the map $\mathcal{H}_{-} : \mathbb{R} \times (-\infty,0] \to \mathbb{R}$ defined by $$\mathcal{H}_{-}(t,x) := \begin{dcases} e^{-\frac{1}{x}} \phi \bigg(\frac{1}{x}-t \bigg) + I_{0,0}(t,x) - \sin(t)x, \quad &t \in \mathbb{R}, \ x \in (-\infty, 0),\\ 0, \quad & t \in \mathbb{R}, \ x = 0, \end{dcases}$$ satisfies the local invariance equation [\[eq:Ex2invariance\]](#eq:Ex2invariance){reference-type="eqref" reference="eq:Ex2invariance"}. However, for $\mathcal{H}_{-}$ to be a parametrization of a center manifold on $\mathbb{R} \times (-\infty,0]$, we must have that $\lim_{x \uparrow 0} \mathcal{H}_{-}(\cdot,x) = 0$. Since $\lim_{x \uparrow 0} I_{0,0}(\cdot,x) = 0$, this forces $$e^{-\frac{1}{x}} \phi \bigg(\frac{1}{x}-t \bigg) \to 0, \quad x \uparrow 0.$$ We claim that $\phi$ must be the zero function. Consider for fixed $t \in \mathbb{R}$ and $r \in \mathbb{Q}$ the sequence $(y_{m})_{m \geq m_1}$ defined by $y_m := \frac{1}{t-r-2m\pi}$ where the integer $m_1 \geq 0$ is chosen large enough to guarantee that $t-r-2m_1\pi < 0$. The $2\pi$-periodicity of $\phi$ implies that $$e^{-\frac{1}{y_m}} \phi \bigg(\frac{1}{y_m}-t \bigg) = \phi(-r) e^{-\frac{1}{y_m}} \to \infty, \quad m \to \infty,$$ unless $\phi(-r) = 0$. As $r \in \mathbb{Q}$ is arbitrary we have that $\phi$ is the zero function on $\mathbb{Q}$ and because $\phi$ is (at least) continuous, we have that $\phi$ is the zero function on $\mathbb{R}$. 
Moreover, we obtain from [\[eq:partialsIab\]](#eq:partialsIab){reference-type="eqref" reference="eq:partialsIab"} directly that $\lim_{x \uparrow 0} \frac{\partial}{\partial x} \mathcal{H}_{-}(\cdot,x) = 0$ and so $\mathcal{H}_{-}$ is indeed a parametrization of a center manifold on $\mathbb{R} \times (-\infty,0]$. In addition, it follows directly from [\[eq:partialIab\]](#eq:partialIab){reference-type="eqref" reference="eq:partialIab"} that $\mathcal{H}_{-}$ is $C^{\infty}$-smooth on $\mathbb{R} \times (-\infty,0]$. For the second part, let $\phi : \mathbb{R} \to \mathbb{R}$ be any $2\pi$-periodic $C^k$-smooth function and observe that for any $\alpha,\beta > 0$ and sufficiently small $\delta_k > 0$ the map $\mathcal{H}_{+,\phi}^{\alpha,\beta} : \mathbb{R} \times [0,\delta_k) \to \mathbb{R}$ defined by $$\mathcal{H}_{+,\phi}^{\alpha,\beta}(t,x) := \begin{dcases} e^{-\frac{1}{x}} \phi \bigg(\frac{1}{x}-t \bigg) + I_{\alpha,\beta}(t,x) - \sin(t)x, \quad &t \in \mathbb{R}, \ x \in (0, \delta_k),\\ 0, \quad & t \in \mathbb{R}, \ x = 0, \end{dcases}$$ satisfies the local invariance equation [\[eq:Ex2invariance\]](#eq:Ex2invariance){reference-type="eqref" reference="eq:Ex2invariance"}. Since $\phi$ is $C^k$-smooth and $2\pi$-periodic, we have for any $l =0,\dots,k$ that its $l$th derivative is bounded above by some real number $0 < M_l < \infty$. Hence, $$\bigg| e^{-\frac{1}{x}} \phi \bigg(\frac{1}{x}-t \bigg) \bigg| \leq M_{0} e^{-\frac{1}{x}} \to 0, \quad x \downarrow 0,$$ which already proves that $\lim_{x \downarrow 0} \mathcal{H}_{+,\phi}^{\alpha,\beta}(\cdot,x) = 0$. To prove that $\mathcal{H}_{+,\phi}^{\alpha,\beta}$ is tangent to the center bundle and $C^k$-smooth (at the origin), note for any $l=0,\dots,k$ that $$\bigg| \frac{d^l}{dx^l} \bigg[e^{-\frac{1}{x}} \phi \bigg(\frac{1}{x}-t \bigg) \bigg] \bigg| \leq e^{-\frac{1}{x}}\sum_{q=0}^l p_q \bigg(\frac{1}{|x|} \bigg) \to 0,$$ as $x \downarrow 0$ due to the general Leibniz rule, Faà di Bruno's formula and the fact that $p_q$, dependent on $M_0,\dots,M_q$, is a polynomial for all $q=0,\dots,l$. Hence, $\mathcal{H}_{+,\phi}^{\alpha,\beta}$ is a $C^k$-smooth function on $\mathbb{R} \times [0,\delta_k)$ for some $\delta_k > 0$. As a consequence of the results derived above, the map $\mathcal{H}_{\phi}^{\alpha,\beta} : \mathbb{R} \times (-\infty,\delta_k)$ defined by $$\begin{aligned} \label{eq:Habphi} \mathcal{H}_{\phi}^{\alpha,\beta}(t,x) := \begin{dcases} \mathcal{H}_{-}(t,x), &t \in \mathbb{R}, \ x \in (-\infty, 0],\\ \mathcal{H}_{+,\phi}^{\alpha,\beta}(t,x), &t \in \mathbb{R}, \ x \in [0,\delta_k),\\ \end{dcases} \end{aligned}$$ parametrizes a locally (non)-unique family of $2\pi$-periodic $C^k$-smooth center manifolds around $\mathbb{R} \times \{0\}$ of [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"}. Two different $2\pi$-periodic center manifolds for [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} are visualized in . ◻ ![Two different $C^{\infty}$-smooth non-analytic $2\pi$-periodic center manifolds around $\mathbb{R} \times \{0\}$ for the analytic system [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} parametrized by the maps $\mathcal{H}_{0}^{1,1}$ and $\mathcal{H}_{0}^{2,3}$ respectively.](CMexample.jpg){#fig:cmexample width="15cm"} [\[ex:Cinftysmooth\]]{#ex:Cinftysmooth label="ex:Cinftysmooth"} The analytic system [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} admits a $2\pi$-periodic $C^{\infty}$-smooth center manifold. 
*Proof.* The map $\mathcal{H}_{0}^{1,1}$ from [\[eq:Habphi\]](#eq:Habphi){reference-type="eqref" reference="eq:Habphi"} provides us a $2\pi$-periodic $C^{\infty}$-smooth center manifold for [\[eq:Ex2\]](#eq:Ex2){reference-type="eqref" reference="eq:Ex2"} on $(-\infty,\delta_\infty)$ with $\delta_\infty > 0$ since $\mathcal{H}_{0}^{1,1}$ is $C^{\infty}$-smooth in an open neighborhood of $\mathbb{R} \times \{0\}$. ◻ [\[ex:nonCinfitysmooth\]]{#ex:nonCinfitysmooth label="ex:nonCinfitysmooth"} The analytic system [\[eq:Exshrinking\]](#eq:Exshrinking){reference-type="eqref" reference="eq:Exshrinking"} admits for any $k \geq 0$ a $2\pi$-periodic $C^k$-smooth center manifold, but not a $2\pi$-periodic $C^{\infty}$-smooth center manifold. *Proof.* The fundamental matrix for the linearization around the origin of [\[eq:Exshrinking\]](#eq:Exshrinking){reference-type="eqref" reference="eq:Exshrinking"} reads $$U(t,s) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{t-s} & 0\\ 0 & 0 & 1 \end{pmatrix}$$ for all $(t,s) \in \mathbb{R}^2$. The Floquet multipliers are given by $\lambda_1 = 1, \lambda_2 = e^{2\pi}$ and $\lambda_3 = 1$. The center space (at time $t$) is given by $E_0(t) = \mathop{\mathrm{span}}\{(1,0,0),(0,0,1)\}$ while the unstable space (at time $t$) is given by $E_{+}(t) = \mathop{\mathrm{span}}\{(0,1,0)\}$. Hence, the $xz$-plane corresponds to the center space and so the center manifold can be parametrized by $y(t) = \mathcal{H}(t,x,z)$, where $\mathcal{H}$ is $2\pi$-periodic in the first argument and only consists of nonlinear terms. It follows from that there exists for any $k \geq 1$ an open neighborhood $U_{2k}$ of $\mathbb{R} \times \{0\} \times \{0\}$ such that the map $\mathcal{H}$ is $C^{2k}$-smooth on $U_{2k}$. Hence, one can write $$\label{eq:Hexpansion} \mathcal{H}(t,x,z) = \sum_{n=2}^{2k} a_n(t,z)x^{n} + \mathcal{O}(x^{2k+1})$$ in the neighborhood $U_{2k}$. Because the center manifold is locally invariant, the map $\mathcal{H}$ must satisfy $$\begin{aligned} \label{eq:Ex3inveq} \frac{\partial \mathcal{H}}{\partial t}(t,x,z) + x(z-x^2) \frac{\partial \mathcal{H}}{\partial x}(t,x,z) = \mathcal{H}(t,x,z) + (1+\sin(t))x^2 \end{aligned}$$ Substituting [\[eq:Hexpansion\]](#eq:Hexpansion){reference-type="eqref" reference="eq:Hexpansion"} into [\[eq:Ex3inveq\]](#eq:Ex3inveq){reference-type="eqref" reference="eq:Ex3inveq"} and comparing terms in $x^n$ for $n = 2,\dots,2k$ shows that the functions $a_n$, which are $2\pi$-periodic in the first component, must satisfy $$\label{eq:a2an} \begin{dcases} \frac{\partial a_2}{\partial t}(t,z) + (2z - 1) a_2(t,z) = 1 + \sin(t), \quad &n = 2, \\ \frac{\partial a_n}{\partial t}(t,z) + (nz - 1) a_n(t,z) = (n-2)a_{n-2}(t,z), \quad &n = 3,\dots,2k. \end{dcases}$$ Because $\mathcal{H}$ consists only of nonlinear terms, we can assume that $a_1 = 0$. Solving the $2\pi$-periodic boundary value problem [\[eq:a2an\]](#eq:a2an){reference-type="eqref" reference="eq:a2an"} for $n=2$ yields $$\begin{aligned} \label{eq:a2} a_2(t,z) = \frac{1}{2z - 1} + \frac{2z-1}{(2z-1)^2 + 1} \sin(t) - \frac{1}{(2z-1)^2 + 1}\cos(t), \quad z \neq \frac{1}{2}. \end{aligned}$$ Furthermore, at $z = \frac{1}{2}$ one verifies easily from [\[eq:a2an\]](#eq:a2an){reference-type="eqref" reference="eq:a2an"} that $a_2(\cdot,\frac{1}{2})$ does not admit a $2\pi$-periodic solution. 
Moreover, all odd coefficients $a_1,a_3,\dots,a_{2k-1}$ are zero and the even coefficients $a_2,a_4,\dots,a_{2k}$ are recursively given by $$\begin{aligned} a_{2n}(t,z) &= \frac{2(n-1)e^{-(2nz-1)t}}{e^{2\pi(2nz-1)}-1} \bigg( \int_t^{2\pi} e^{(2nz-1)\tau} a_{2(n-1)}(\tau,z) d\tau \\ &+ e^{2\pi(2nz-1)} \int_0^{t} e^{(2nz-1)\tau} a_{2(n-1)}(\tau,z) d\tau \bigg), \quad z \neq \frac{1}{2l}, \ l = 1,\dots,n. \end{aligned}$$ To obtain a semi-explicit representation for $a_{2n}$, we will prove by induction on $n = 1,2,\dots,k$ that $$\begin{aligned} \label{eq:a2nindunction} a_{2n}(t,z) = 2^{n-1} (n-1)! \prod_{l=1}^{n} \frac{1}{2lz - 1} + \alpha_n(z) \sin(t) + \beta_n(z) \cos(t), \quad z \neq \frac{1}{2l}, \ l = 1,\dots,n, \end{aligned}$$ where $\alpha_n$ and $\beta_n$ are well-defined rational functions on $\mathbb{R}$. Clearly, the claim holds for $n=1$ due to [\[eq:a2\]](#eq:a2){reference-type="eqref" reference="eq:a2"}. Assume that the claim holds for $n-1$ for a certain $n \geq 2$. Along the same lines of the induction step in , one derives $$\begin{aligned} a_{2n}(t,z) = 2^{n-1} (n-1)! \prod_{l=1}^{n} \frac{1}{2lz - 1} &+ \frac{2(n-1)[(2nz-1)\alpha_{n-1}(z) + \beta_{n-1}(z)]}{(2nz-1)^2+1} \sin(t) \\ &+ \frac{2(n-1)[(2nz-1)\beta_{n-1}(z) - \alpha_{n-1}(z)]}{(2nz-1)^2+1} \cos(t). \end{aligned}$$ It remains to show that the coefficients in front of the sine and cosine are well-defined rational functions on $\mathbb{R}$. Clearly, $$\begin{aligned} \begin{pmatrix} \alpha_n(z) \\ \beta_n(z) \end{pmatrix} = \frac{2(n-1)}{(2nz-1)^2+1} \begin{pmatrix} 2nz-1 & 1 \\ -1 & 2nz-1 \end{pmatrix} \begin{pmatrix} \alpha_{n-1}(z) \\ \beta_{n-1}(z) \end{pmatrix}, \quad n \geq 2, \end{aligned}$$ with initial condition $$\alpha_1(z) = \frac{2z-1}{(2z-1)^2 + 1}, \quad \beta_1(z) = -\frac{1}{(2z-1)^2 + 1}.$$ Solving this linear system of difference equations semi-explicitly yields $$\begin{aligned} \begin{pmatrix} \alpha_n(z) \\ \beta_n(z) \end{pmatrix} = 2^{n-1} (n-1)! \bigg[\prod_{l=2}^n \frac{1}{(2lz-1)^2 + 1} \begin{pmatrix} 2lz-1 & 1 \\ -1 & 2lz-1 \end{pmatrix} \bigg] \begin{pmatrix} \alpha_1(z) \\ \beta_1(z) \end{pmatrix}. \end{aligned}$$ Hence, $\alpha_n$ and $\beta_n$ are both rational functions that are well-defined on $\mathbb{R}$ since $(2lz-1)^2 + 1 \geq 1 > 0$ for all $l=1,\dots,n$. This concludes the induction step. On the other hand, if $z = \frac{1}{2n}$, then one can verify rather easily from [\[eq:a2an\]](#eq:a2an){reference-type="eqref" reference="eq:a2an"} that $a_{2n}(\cdot,\frac{1}{2n})$ has no $2\pi$-periodic solution. Using [\[eq:a2nindunction\]](#eq:a2nindunction){reference-type="eqref" reference="eq:a2nindunction"} in combination with [\[eq:Hexpansion\]](#eq:Hexpansion){reference-type="eqref" reference="eq:Hexpansion"}, we see that $\mathcal{H}(t,x,\cdot)$ is not $C^{2k}$-smooth on $(-\frac{1}{2k},\frac{1}{2k}]$ since $a_{2k}(\cdot,\frac{1}{2k})$ is simply undefined. Suppose now that $\mathcal{H}$ is $C^{\infty}$-smooth on $\mathbb{R} \times \{0\} \times \{0\}$, then for fixed $t \in \mathbb{R}$ and non-zero $x \in \mathbb{R}$ there exists an $\varepsilon > 0$ such that $\mathcal{H}(t,x,\cdot)$ is $C^\infty$-smooth on $(-\varepsilon,\varepsilon)$. Now, if $k \geq 1$ is an integer that satisfies $k > \frac{1}{2 \varepsilon}$, then $\mathcal{H}(t,x,\cdot)$ is not $C^{2k}$-smooth on $(-\varepsilon,\varepsilon)$. This contradicts the assumption that [\[eq:Exshrinking\]](#eq:Exshrinking){reference-type="eqref" reference="eq:Exshrinking"} admits a $C^{\infty}$-smooth $2\pi$-periodic center manifold at the origin. 
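The cascade of singularities just described can be made explicit with a short symbolic computation. The sketch below (Python with SymPy; a verification aid, not part of the proof) checks the formula [\[eq:a2\]](#eq:a2){reference-type="eqref" reference="eq:a2"} for $a_2$ and then determines the $2\pi$-periodic coefficient $a_4$ from [\[eq:a2an\]](#eq:a2an){reference-type="eqref" reference="eq:a2an"} by a trigonometric ansatz; its constant term is singular along both $z = \frac{1}{2}$ and $z = \frac{1}{4}$, the two planes indicated in the figure below.

```python
import sympy as sp

t, z = sp.symbols('t z', real=True)

# the 2*pi-periodic solution of  a2_t + (2z - 1) a2 = 1 + sin(t)  given in the proof
a2 = 1/(2*z - 1) + (2*z - 1)/((2*z - 1)**2 + 1)*sp.sin(t) - sp.cos(t)/((2*z - 1)**2 + 1)
print(sp.simplify(sp.diff(a2, t) + (2*z - 1)*a2 - 1 - sp.sin(t)))   # 0

# 2*pi-periodic solution of  a4_t + (4z - 1) a4 = 2 a2  via the ansatz C + A sin + B cos
C, A, B = sp.symbols('C A B')
a4 = C + A*sp.sin(t) + B*sp.cos(t)
residual = sp.diff(a4, t) + (4*z - 1)*a4 - 2*a2
eqs = [residual.subs(t, v) for v in (0, sp.pi/2, sp.pi)]
coeffs = sp.solve(eqs, [C, A, B], dict=True)[0]

# the constant term 2/((2z - 1)(4z - 1)) is singular along z = 1/2 and z = 1/4
print(sp.factor(coeffs[C]))
```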
To illustrate the cascade of singularities of the periodic center manifold towards the origin, second- and fourth-order approximations at different time steps of $\mathcal{H}$ are presented in . ◻ ![In (a) a second-order approximation of $\mathcal{H}(0,\cdot,\cdot)$ and in (b) a second-order approximation of $\mathcal{H}(\pi,\cdot,\cdot)$ . In (c) a fourth-order approximation of $\mathcal{H}(0,\cdot,\cdot)$ and in (d) a fourth-order approximation of $\mathcal{H}(\pi,\cdot,\cdot)$. The red vertical planes indicate the singularities at $z=\frac{1}{4}$ and $z = \frac{1}{2}$.](CMsingularity.jpg){#fig:CMsingularity width="15cm"} # Conclusion and outlook We have proven the existence of a periodic smooth locally invariant center manifold near a nonhyperbolic cycle in the setting of finite-dimensional ordinary differential equations. Our results are based on rather simple consequences of Floquet theory in combination with a fixed point argument on the easily available variation of constants formula for periodic (nonlinear) ODEs. In addition, we have provided several examples of (non)-unique, (non)-$C^\infty$-smooth and (non)-analytic periodic center manifolds to illustrate that periodic center manifolds admit similar interesting qualitative properties as center manifolds for equilibria. Despite our illustrations from are very insightful on the nature of periodic center manifolds, it is not clear under which conditions a periodic center manifold is unique, non-unique or locally (non)-unique. To answer the first question, we believe that one must generalize techniques from [@Sijbrand1985] towards periodic center manifolds to state and prove a similar result as in [@Sijbrand1985 Theorem 3.2]. Moreover, if a periodic center manifold is not uniquely determined, how much can two periodic center manifolds differ from each other? Such results have already been established in [@Sijbrand1985 Section 4] for center manifolds for equilibria, but the question remains unanswered for periodic center manifolds. However, we have already seen in that periodic center manifolds may differ from each other by a factor of $e^{-\frac{1}{x}}\phi(\frac{1}{x} - t)$, where $\phi$ is any $T$-periodic (at least) differentiable function. Furthermore, it is not clear under which conditions a $C^\infty$-smooth or analytic periodic center manifold may exist while this question is addressed and answered in [@Sijbrand1985 Section 5 and 6] for center manifolds for equilibria. In particular, recall from that the $C^\infty$-smooth periodic center manifold is not analytic for all $t \in \mathbb{R}$. However, is it possible to construct an example where a $C^{\infty}$-smooth periodic center manifold may change periodically from non-analytic to analytic? Or is there a possibility that the (non)-analyticitiy of a periodic center manifold is time-independent? # Acknowledgements {#acknowledgements .unnumbered} The authors would like to thank Profs. Renato Huzak and Peter De Maesschalck (Hasselt University) for helpful discussions and suggestions. # Smoothness of the center manifold {#appendix:smoothness} In this appendix, we will prove that the map $\mathcal{C}$ inherits the same order of smoothness as the nonlinearity $R$. Our results are based on the theory of contraction on scales of Banach spaces, see [@Diekmann1995 Section IX.6 and Appendix IV] and [@Vanderbauwhede1987; @Hupkes2006; @Hupkes2008; @Church2018; @Lentjes2023a] for applications of this theory to ordinary differential equations and (mixed) functional differential equations. 
Our arguments here are based on the strategy developed in the mentioned references and closely follow [@Lentjes2023a]. To prove additional smoothness of the map $\mathcal{C}$, let us first observe that we are only interested in pairs $(s,y_0) \in E_0$ due to [\[eq:mapC\]](#eq:mapC){reference-type="eqref" reference="eq:mapC"}. Therefore, let us incorporate the starting time $s$ inside the domain of the fixed point operator $\mathcal{G}_s^\eta$ from . Hence, define for $\eta \in (0,\min\{-a,b\})$ and sufficiently small $\delta > 0$ the map $\mathcal{G}^\eta$ by $$\mathop{\mathrm{BC}}_s^\eta \times E_0 \ni (u,s,y_0) \mapsto U_s^\eta y_0 + \mathcal{K}_s^\eta (\tilde{R}_\delta(u)) \in \mathop{\mathrm{BC}}_s^\eta,$$ and following the same steps from , we have that $\mathcal{G}^\eta(\cdot,s,y_0)$ has a unique fixed point $\hat{u}^\eta : E_0 \to \mathop{\mathrm{BC}}_s^\eta$ such that $\hat{u}^\eta(s,\cdot)$ is globally Lipschitz and satisfies $\hat{u}^\eta(s,0) = 0$ for all $s \in \mathbb{R}$. It turns out that the space $\mathop{\mathrm{BC}}_s^\eta$ is not really suited to increase smoothness of the center manifold. The main idea is to work with another $\eta$-exponent that makes a trade-off between ensuring smoothness while not losing the contraction property. To make this construction, choose an interval $[\eta_{-},\eta_{+}] \subset (0,\min\{-a,b\})$ such that $k \eta_{-} < \eta_{+}$ and $\delta > 0$ small enough to guarantee that $$\label{eq:contractionregularity} L_{\delta}\|\mathcal{K}_s^\eta\|_{\eta,s} < \frac{1}{4}, \quad \forall \eta \in [\eta_{-},\eta_{+}], \ s \in \mathbb{R},$$ which is possible since $L_{\delta} \to 0$ as $\delta \downarrow 0$ proven in . From this construction, it is clear that we would like to switch back and forth between, for example, the fixed points $\hat{u}^\eta(s,y_0)$ and $\hat{u}^{\eta_{-}}(s,y_0)$. Therefore, introduce for any $0 < \eta_1 \leq \eta_2 < \min\{-a,b\}$ the linear embedding $\mathcal{J}_s^{\eta_2,\eta_1} : \mathop{\mathrm{BC}}_s^{\eta_1} \hookrightarrow \mathop{\mathrm{BC}}_s^{\eta_2}$ and notice that this map is bounded since $$\begin{aligned} \|u\|_{\eta_2, s} = \sup_{t \in \mathbb{R}}e^{-\eta_2|t - s|}\|u(t)\| \leq \sup_{t \in \mathbb{R}}e^{-\eta_1|t - s|}\|u(t)\| = \|u\|_{\eta_1, s} < \infty, \end{aligned}$$ for any $u \in \mathop{\mathrm{BC}}_s^{\eta_1}$. Hence, $\mathcal{J}_s^{\eta_2,\eta_1}$ is $C^{\infty}$-smooth and $\mathop{\mathrm{BC}}_s^{\eta_1}$ can be considered as a subspace of $\mathop{\mathrm{BC}}_s^{\eta_2}$. The following lemma shows we can switch back and forth between the fixed points of interest. Let $0 < \eta_1 \leq \eta_2 < \min\{-a, b\}$ and $s \in \mathbb{R}$. Assume that $\hat{u}^{\eta_1}(s,y_0)$ is the fixed point of $\mathcal{G}^{\eta_1}(\cdot, s,y_0)$ for some $(s,y_0) \in E_0$. Then $\hat{u}^{\eta_2}(s,y_0) = \mathcal{J}_s^{\eta_2, \eta_1}\hat{u}^{\eta_1}(s,y_0)$. *Proof.* Note that the definition of the fixed point operator does not depend explicitly on the choice of $\eta \in (0,\min\{-a, b\})$, since $\mathcal{K}_{s}^{\eta_1} u = \mathcal{K}_{s}^{\eta_2} u$ for all $u \in \mathop{\mathrm{BC}}^{\eta_1}_{s}$. Then by uniqueness of the fixed point and since $\mathop{\mathrm{BC}}_s^{\eta_1}$ is continuously embedded in $\mathop{\mathrm{BC}}_s^{\eta_2}$, it is clear that $\hat{u}_{s}^{\eta_2}(y_0) = \mathcal{J}_s^{\eta_2, \eta_1}\hat{u}_{s}^{\eta_1}(y_0)$. ◻ A first step in increasing smoothness of the center manifold is to show that $\tilde{R}_{\delta}$ is sufficiently smooth. 
Recall from that $R_\delta$ is $C^k$-smooth. Consider now for any pair of integers $p,q \geq 0$ with $p + q \leq k$ the map $\Tilde{R}_\delta^{(p,q)}(u) \in \mathcal{L}^q(C(\mathbb{R},\mathbb{R}^n)) := \mathcal{L}^q(C(\mathbb{R},\mathbb{R}^n),C(\mathbb{R},\mathbb{R}^n))$ defined by $$\begin{aligned} \Tilde{R}_{\delta}^{(p,q)}(u)(v_1,\dots,v_q)(t) := D_1^p D_2^q R_\delta(t,u(t))(v_1(t),\dots,v_q(t)). \end{aligned}$$ Here $\mathcal{L}^q(Y,Z)$ denotes the space of $q$-linear mappings from $Y^q := Y \times \dots \times Y$ into $Z$ for Banach spaces $Y$ and $Z$. The following three lemmas, adapted from the literature towards the finite-dimensional ODE-setting, will be crucial in the proof of . [\[lemma:smoothness3\]]{#lemma:smoothness3 label="lemma:smoothness3"} Let $p,q \geq 0$ be positive integers with $p+q \leq k$ and $\eta \geq q \mu > 0$. Then for any $u \in C(\mathbb{R},\mathbb{R}^n)$ we have $\Tilde{R}_{\delta}^{(p,q)}(u) \in \mathcal{L}^q(\mathop{\mathrm{BC}}_s^\mu,\mathop{\mathrm{BC}}_s^\eta)$, where the norm is bounded by $$\| \Tilde{R}_{\delta}^{(p,q)} \| \leq \sup_{t \in \mathbb{R}} e^{-(\eta-q\mu)|t-s|} \| D_1^p D_2^q R_\delta(t,u(t)) \| < \infty.$$ Furthermore, consider any $0 \leq l \leq k - (p+q)$ and $\sigma > 0$. If $\eta > q \mu + l \sigma$, then the map $u \mapsto \Tilde{R}_{\delta}^{(p,q)}$ from $\mathop{\mathrm{BC}}_s^\sigma$ into $\mathcal{L}^q(\mathop{\mathrm{BC}}_s^\mu,\mathop{\mathrm{BC}}_s^\eta)$ is $C^l$-smooth with $D^l \Tilde{R}_{\delta}^{(p,q)} = \Tilde{R}_\delta^{(p,q+l)}$. [\[lemma: smoothness4\]]{#lemma: smoothness4 label="lemma: smoothness4"} Let $p,q \geq 0$ be positive integers with $p+q < k$ and let $\eta > q \mu + \sigma$ for some $\mu,\sigma > 0$. Consider a map $\Phi \in C^1(E_0,\mathop{\mathrm{BC}}_s^\sigma)$. Then the map $\Tilde{R}_{\delta}^{(p,q)} \circ \Phi : E_0 \to \mathcal{L}^q(\mathop{\mathrm{BC}}_s^\mu, \mathop{\mathrm{BC}}_s^\eta)$ is $C^1$-smooth with $$\begin{aligned} D(\Tilde{R}_\delta^{(p,q)} \circ \Phi)(s_0,y_0)(v_1,\dots,v_q,(s_1,y_1)) = \Tilde{R}_\delta^{(p,q+1)}(\Phi(s_0,y_0))(v_1,\dots,v_q,D\Phi(s_0,y_0)(s_1,y_1)). \end{aligned}$$ [\[lemma:fixedpointsmooth\]]{#lemma:fixedpointsmooth label="lemma:fixedpointsmooth"} Let $Y_0,Y,Y_1$ and $\Lambda$ be Banach spaces with continuous embeddings $J_0 : Y_0 \hookrightarrow Y$ and $J : Y \hookrightarrow Y_1$. Consider the fixed point problem $y = f(y,\lambda)$ for $f : Y \times \Lambda \to Y$. Suppose that the following conditions hold. 1. The function $g : Y_0 \times \Lambda \to Y_1$ defined by $g(y_0,\lambda) := Jf(J_0y_0,\lambda)$ is of the class $C^1$ and there exist mappings $f^{2:} : J_0 Y_0 \times \Lambda \to \mathcal{L}(Y)$ and $f_1^{(1)} : J_0Y_0 \times \Lambda \to \mathcal{L}(Y_1)$ such that $D_1 g(y_0,\lambda) \xi = Jf^{(1)}(J_0y_0,\lambda)J_0$ for all $(y_0,\lambda,\xi) \in Y_0 \times \Lambda \times Y_0$ and $Jf^{(1)}(J_0y_0,\lambda)y = f_1^{(1)}(J_0y_0,\lambda)Jy$ for all $(y_0,\lambda,y) \in Y_0 \times \Lambda \times Y$. 2. There exists a $\kappa \in [0,1)$ such that for all $\lambda \in \Lambda$ the map $f(\cdot,\lambda) : Y \to Y$ is Lipschitz continuous with Lipschitz constant $\kappa$, independent of $\lambda$. Furthermore, for any $\lambda \in \Lambda$ the maps $f^{(1)}(\cdot,\lambda)$ and $f_1^{(1)}(\cdot,\lambda)$ are uniformly bounded by $\kappa$. 3. Under the previous condition, the unique fixed point $\Psi : \Lambda \to Y$ satisfies $\Psi(\lambda) = f(\Psi(\lambda),\lambda)$ and can be written as $\Psi = J_0 \circ \Phi$ for some continuous $\Phi : \Lambda \to Y_0$. 4. 
The function $f_0 : Y_0 \times \Lambda \to Y$ defined by $f_0(y_0,\lambda) = f(J_0y_0,\lambda)$ has continuous partial derivative $D_2 f : Y_0 \times \Lambda \to \mathcal{L}(\Lambda,Y).$ 5. The mapping $Y_0 \times \Lambda \ni (y,\lambda) \mapsto J \circ f^{(1)}(J_0y,\lambda) \in \mathcal{L}(Y,Y_1)$ is continuous. Then the map $J \circ \Psi$ is of the class $C^1$ and $D(J \circ \Psi)(\lambda) = J \circ \mathcal{A}(\lambda)$ for all $\lambda \in \Lambda$, where $A = \mathcal{A}(\lambda) \in \mathcal{L}(\Lambda,Y)$ is the unique solution of the fixed point equation $A = f^{(1)}(\Psi(\lambda),\lambda) A + D_2 f_0(\Psi(\lambda),\lambda)$. [\[thm:smoothnesscmt\]]{#thm:smoothnesscmt label="thm:smoothnesscmt"} For each $l \in \{1,\dots,k\}$ and $\eta \in (l\eta_{-},\eta_{+}] \subset (0,\min\{-a,b\})$, the map $\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}} : E_0 \to \mathop{\mathrm{BC}}_s^\eta$ is $C^l$-smooth provided that $\delta > 0$ is sufficiently small. *Proof.* To begin, we choose $\delta > 0$ small enough so that [\[eq:contractionregularity\]](#eq:contractionregularity){reference-type="eqref" reference="eq:contractionregularity"} holds. We prove the assertion by induction on $l$. Let $l=k=1$ and $\eta \in (\eta_{-},\eta_+]$ be given. We show that applies with the Banach spaces $Y_0 = Y = \mathop{\mathrm{BC}}_s^{\eta_{-}}, Y_1 = \mathop{\mathrm{BC}}_s^\eta$ and $\Lambda = E_0$, and operators $$\begin{aligned} f(u,s,y_0) = \mathcal{G}^{\eta_{-}}(u,s,y_0), \quad f^{(1)}(u,s,y_0) = \mathcal{K}_s^{\eta_{-}} \circ \Tilde{R}_{\delta}^{(0,1)}(u), \quad f_1^{(1)} = \mathcal{K}_s^{\eta} \circ \Tilde{R}_{\delta}^{(0,1)}(u), \end{aligned}$$ with embeddings $J = \mathcal{J}_s^{\eta,\eta_{-}}$ and $J_0$ denotes the identity map. In the context of , the map $g$ is given by $\mathcal{G}^\eta$ due to the linearity of the embedding $J$. Because $(s,y_0) \mapsto U(\cdot,s)y_0, s \mapsto \mathcal{K}_s^\eta$ and $u \mapsto \Tilde{R}_\delta(u)$ are $C^1$-smooth (, and ), the map $g$ is $C^1$-smooth and one can easily verify the additional equalities. The second condition follows from [\[eq:contractionregularity\]](#eq:contractionregularity){reference-type="eqref" reference="eq:contractionregularity"} and the fact that the Lipschitz constant is independent of $s \in \mathbb{R}$ due to . The third condition follows from the fact that $\Psi$ is given by $\hat{u}^{\eta_{-}}$ and therefore well-defined due to . The mentioned results show that the fourth condition is satisfied. It follows from and that the fifth condition is satisfied as well. Hence, we conclude that the map $\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}}$ is of the class $C^1$ and that $D(\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}}) = \mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-},(1)} \in \mathcal{L}(E_0,\mathop{\mathrm{BC}}_s^\eta)$, where $\hat{u}^{\eta_{-},(1)}(s,y_0)$ is the unique solution of $$\begin{aligned} w^{(1)} = \mathcal{K}_s^{\eta_{-}} \circ \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))w^{(1)} + U_s^{\eta_{-}} =: F_{\eta_{-}}^{(1)}(w^{(1)},s,y_0) \end{aligned}$$ in the space $\mathcal{L}(E_0,\mathop{\mathrm{BC}}_s^{\eta_{-}})$. Here $F_{\eta_{-}}^{(1)} : \mathcal{L}(E_0,\mathop{\mathrm{BC}}_s^{\eta_{-}}) \times E_0 \to \mathcal{L}(E_0,\mathop{\mathrm{BC}}_s^{\eta_{-}})$ and notice that $F_{\eta_{-}}^{(1)} (\cdot,s,y_0)$ is a uniform contraction (), which proves the uniqueness of the fixed point. 
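The mechanism used in this base case -- the derivative of a parameter-dependent fixed point is itself the unique solution of a linear fixed-point equation of the form $A = f^{(1)}(\Psi(\lambda),\lambda)A + D_2 f_0(\Psi(\lambda),\lambda)$ -- can be illustrated numerically in a scalar toy setting. The contraction used in the following sketch (Python with NumPy) is an arbitrary illustrative choice and not an object from this paper; the derivative obtained from the linear equation is compared against a finite-difference approximation.

```python
import numpy as np

# scalar toy contraction y = f(y, lam) with |df/dy| <= 0.3 < 1 (illustrative choice)
def f(y, lam):
    return 0.3 * np.sin(y) + lam

def fixed_point(lam, tol=1e-14, max_iter=200):
    y = 0.0
    for _ in range(max_iter):
        y_new = f(y, lam)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

lam = 0.7
y_star = fixed_point(lam)

# differentiating y(lam) = f(y(lam), lam) gives the linear fixed-point equation
#   y'(lam) = (df/dy)(y(lam), lam) * y'(lam) + (df/dlam)(y(lam), lam),
# the scalar analogue of  A = f^(1)(Psi(lam), lam) A + D_2 f_0(Psi(lam), lam)
dfdy = 0.3 * np.cos(y_star)
dfdlam = 1.0
dy_formula = dfdlam / (1.0 - dfdy)

h = 1e-6
dy_fd = (fixed_point(lam + h) - fixed_point(lam - h)) / (2 * h)
print(dy_formula, dy_fd)   # the two values agree to roughly eight digits
```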
To specify the induction hypothesis, consider any integer $1 \leq l < k$ and suppose that for all $1 \leq q \leq l$ and all $\eta \in (q\eta_{-},\eta_{+}]$ that the map $\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}}$ is $C^q$-smooth with $D^q(\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}}) = \mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-},(q)} \in \mathcal{L}^{q}(E_0,\mathop{\mathrm{BC}}_s^{q \eta})$, where $\hat{u}^{\eta_{-},(q)}$ is the unique solution of $$\begin{aligned} w^{(l)} = \mathcal{K}_s^{l \eta_{-}} \circ \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))w^{(l)} + H_{\eta_{-}}^{(l)}(s,y_0) =: F_{l\eta_{-}}^{(l)}(w^{(l)},s,y_0) \end{aligned}$$ in the space $\mathcal{L}^q(E_0,\mathop{\mathrm{BC}}_s^{q \eta_{-}})$. Here $H_{\eta_{-}}^{(1)}(s,y_0) = U_s^{\eta_{-}}y_0$ and for $\nu \in [\eta_{-},\eta_{+}]$ and $l \geq 2$ we have that $H_\nu^{(l)}(s,y_0)$ is a finite sum of terms of the form $$\begin{aligned} \mathcal{K}_s^{l \nu} \circ \Tilde{R}_\delta^{(0,q)}(\hat{u}^{\eta_{-}}(s,y_0)) (\hat{u}^{\eta_{-},(r_1)}(s,y_0),\dots,\hat{u}^{\eta_{-},(r_q)}(s,y_0)), \end{aligned}$$ with $2 \leq q \leq l$ and $1 \leq r_i < l$ for $i=1,\dots,q$ such that $r_1+\dots+r_q = l$. Here $F_{l\eta}^{(l)} : \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^{l \eta}) \times E_0 \to \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^{l \eta})$ is a uniform contraction () for any $\eta \in [\eta_{-},\eta_{+}]$, which guarantees the uniqueness of the fixed point. For the induction step, fix some $\eta \in ((l+1)\eta_{-},\eta_{+}]$ and choose $\sigma,\mu > 0$ such that $\eta_{-} < \sigma < (l+1) \sigma < \mu < \eta$. We show that applies with the Banach spaces $Y_0 = \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^{l\sigma}), Y = \mathcal{L}^{l}(E_0,\mathop{\mathrm{BC}}_s^\mu), Y_1 = \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\eta)$ and $\Lambda = E_0$, and operators $$\begin{aligned} f(u,s,y_0)&=\mathcal{K}_s^{\mu} \circ \Tilde{R}_\delta^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))u + H_{\mu/l}^{(l)}(s,y_0), \\ f^{(1)}(u,s,y_0) &= \mathcal{K}_s^{\mu} \circ \Tilde{R}_\delta^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))\in \mathcal{L}(\mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\mu)), \\ f_1^{(1)}(u,s,y_0) &= \mathcal{K}_s^{\eta} \circ \Tilde{R}_\delta^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0)) \in \mathcal{L}(\mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\eta)). \end{aligned}$$ To verify the first condition, we have to check that $g : \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^{l \sigma}) \times E_0 \to \mathcal{L}(E_0,\mathop{\mathrm{BC}}_s^\eta)$ given by $$\begin{aligned} g(u,s,y_0) &= \mathcal{K}_s^{\eta} \circ \Tilde{R}_\delta^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))u + \mathcal{J}_s^{\eta,\mu} \circ H_{\mu/l}^{(l)}(s,y_0) \end{aligned}$$ is $C^1$-smooth, where now $\mathcal{J}_s^{\eta,\mu} : \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\mu) \hookrightarrow \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\eta)$ is the continuous embedding. Clearly, $g$ is $C^1$-smooth in the first variable since it is linear. For the second variable, notice that the map $(s,y_0) \mapsto \mathcal{K}_s^{\eta} \circ \Tilde{R}_\delta^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))u$ is $C^1$-smooth due to with $\mu > (l+1)\sigma$ and the $C^1$-smoothness of $(s,y_0) \mapsto \mathcal{J}_s^{\sigma,\eta_{-}} \hat{u}^{\eta_{-}}(s,y_0)$ for any $\sigma \geq \eta_{-}$. 
For the $C^1$-smoothness of the map $H_{\mu/l}^{(l)}$, we get differentiability from and so we have that the derivative of this map is a finite sum of terms of the form $$\begin{aligned} \mathcal{K}_s^\mu &\circ \tilde{R}_{\delta}^{(0,q+1)}(\hat{u}^{\eta_{-}}(s,y_0))(\hat{u}^{\eta_{-},(r_1)}(s,y_0),\dots,\hat{u}^{\eta_{-},(r_q)}(s,y_0))\\ &+ \sum_{j=1}^q \mathcal{K}_s^\mu \circ \tilde{R}_{\delta}^{(0,q)}(\hat{u}^{\eta_{-}}(s,y_0)) (\hat{u}^{\eta_{-},(r_1)}(s,y_0),\dots,\hat{u}^{\eta_{-},(r_j + 1)}(s,y_0),\dots, \hat{u}^{\eta_{-},(r_q)}(s,y_0)) \end{aligned}$$ and each $\hat{u}^{\eta_{-},(r_j)}(s,y_0)$ is a map from $E_0$ into $\mathop{\mathrm{BC}}_s^{j \sigma}$ for $j=1,\dots,q$. An application of with $\mu > (l+1) \sigma$ ensures the continuity of $DH_{\mu/l}^{(l)}(s,y_0)$ and consequently that of $\mathcal{J}_s^{\eta,\mu} DH_{\mu/l}^{(l)}(s,y_0)$. The remaining calculations from the first condition are then easily checked, and condition four can be proven similarly. The Lipschitz condition and boundedness for the second condition follows by the choice of $\delta > 0$ chosen at the beginning of the proof and the contractivity of $H_{\mu/l}^{(l)}$ described above. To prove the third condition, observe that one can write $$\mathcal{K}_s^\eta \circ \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0)) = \mathcal{J}_s^{\eta,\mu} \mathcal{K}_s^\mu \circ \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))$$ and applying together with the $C^1$-smoothness of $\hat{u}^{\eta_{-}}$ to obtain the continuity of $(s,y_0) \mapsto \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_0))$. This also proves the fifth condition, and so we conclude that $\hat{u}^{\eta_{-}} : E_0 \mapsto \mathcal{L}^l(E_0,\mathop{\mathrm{BC}}_s^\eta)$ is of the class $C^1$ with derivative $\hat{u}^{\eta_{-},(l+1)} = D\hat{u}^{\eta_{-},(l)} \in \mathcal{L}^{l+1}(E_0,\mathop{\mathrm{BC}}_s^\eta)$ that is the unique solution of $$w^{(l+1)} = \mathcal{K}_s^\mu \circ \Tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-},(l+1)})w^{(l+1)} + H_{\mu / (l+1)}^{(l+1)}(s,y_0),$$ where $$\begin{aligned} H_{\mu / (l+1)}^{(l+1)}(s,y_0) = \mathcal{K}_s^\mu \circ \Tilde{R}_{\delta}^{(0,2)}(\hat{u}^{\eta_{-}}(s,y_0)) (\hat{u}^{\eta_{-},(l)}(s,y_0),\hat{u}^{\eta_{-},(1)}(s,y_0)) + DH_{\mu/l}^{(l)}(s,y_0). \end{aligned}$$ A similar argument as in the proof of the $l=k=1$ case shows that the unique fixed point $\hat{u}^{\eta_{-},(l+1)}$ is also contained in $\mathcal{L}^{l+1}(E_0,\mathop{\mathrm{BC}}_s^{(l+1)\eta_{-}})$. Hence, the map $\mathcal{J}_s^{\eta,\eta_{-}} \circ \hat{u}^{\eta_{-}}$ is of the class $C^{l+1}$ provided that $\eta \in ((l+1)\eta_{-},\eta_{+}]$ and $\delta > 0$ is sufficiently small. ◻ [\[thm:smoothnessC\]]{#thm:smoothnessC label="thm:smoothnessC"} The map $\mathcal{C} : E_0 \to \mathbb{R}^n$ from [\[eq:mapC\]](#eq:mapC){reference-type="eqref" reference="eq:mapC"} is $C^k$-smooth. *Proof.* Let $\eta \in [\eta_{-},\eta_{+}] \subset (0,\min\{-a,b\})$ such that $k \eta_{-} < \eta_{+}$. Let $\mathop{\mathrm{ev}}_s$ denote the bounded linear evolution operator (at time $s$) defined in the proof of . Recall that $\mathcal{C}(s,y_0) = \hat{u}^\eta(s,y_0)(s) = \mathop{\mathrm{ev}}_s(\hat{u}^\eta(s,y_0)),$ and so $\mathcal{C}(s,y_0) = \mathop{\mathrm{ev}}_s(\mathcal{J}_s^{\eta,\eta_{-}} \hat{u}^{\eta_{-}}(s,y_0))$. The result follows now from . ◻ To study in the tangent bundle of the center manifold, we have to use the partial derivative of the map $\mathcal{C}$ in the second component. 
The following result shows that such (higher order) partial derivatives are uniformly Lipschitz continuous. [\[cor:LipschitzC\]]{#cor:LipschitzC label="cor:LipschitzC"} For each $l \in \{0,\dots,k\}$, there exists a constant $L(l) > 0$ such that $$\|D_2^l \mathcal{C}(s,y_0) - D_2^l \mathcal{C}(s,z_0) \| \leq L(l) \|y_0 - z_0\|$$ for all $(s,y_0),(s,z_0) \in E_0$. *Proof.* For $l = 0$, the result is already proven in . Now let $l \in \{1,\dots,k\}$. Then, from the proof of we see that $\hat{u}^{\eta_{-},(l)}$ is the unique solution of a fixed point problem, where the right hand-side is a contraction with a Lipschitz constant $L(l)$ independent of $s$. Using the same strategy as the proof of , we obtain the desired result. ◻ [^1]: Department of Mathematics, Hasselt University, Diepenbeek Campus, 3590 Diepenbeek, Belgium . [^2]: Department of Mathematics, KU Leuven, 3000 Leuven, Belgium . [^3]: Department of Mathematics, Utrecht University, 3508 TA Utrecht, The Netherlands and Department of Applied Mathematics, University of Twente, 7500 AE Enschede, The Netherlands . [^4]: Notice that one can deduce the existence of the unique stable and unstable manifold near a hyperbolic cycle from the fixed point of the Poincaré map, see [@Hale1993 Theorem 10.3.2].
arxiv_math
{ "id": "2309.11919", "title": "Periodic Center Manifolds for Nonhyperbolic Limit Cycles in ODEs", "authors": "Bram Lentjes, Mattias Windmolders, Yuri A. Kuznetsov", "categories": "math.DS", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Generalizing the work of Kobayashi and the second author for elliptic curves with supersingular reduction at the prime $p$, Büyükboduk and Lei constructed multi-signed Selmer groups over the cyclotomic $\mathbb{Z}_p$-extension of a number field $F$ for more general non-ordinary motives. In particular, their construction applies to abelian varieties over $F$ with good supersingular reduction at all the primes of $F$ above $p$. In this article, we scrutinize the case in which $F$ is imaginary quadratic, and prove a control theorem (that generalizes Kim's control theorem for elliptic curves) for multi-signed Selmer groups of non-ordinary motives over the maximal abelian pro-$p$ extension of $F$ that is unramified outside $p$, which is the $\mathbb{Z}_p^2$-extension of $F$. We apply it to derive a sufficient condition for these multi-signed Selmer groups to be cotorsion over the corresponding two-variable Iwasawa algebra. Furthermore, we compare the Iwasawa $\mu$-invariants of multi-signed Selmer groups over the $\mathbb{Z}_p^2$-extension for two such representations which are congruent modulo $p$. address: - Harish Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj (Allahabad) 211 019 India - School of Mathematical and Statistical Sciences, Arizona State University Tempe, AZ 85287-1804, USA author: - Jishnu Ray - Florian Sprung bibliography: - main.bib title: "On the signed Selmer groups for motives at non-ordinary primes in $\\mathbb{Z}_{\\MakeLowercase{p}}^2$-extensions" --- # Introduction The goals of this article are to 1. establish cotorsion of certain Selmer groups associated to motives in the non-ordinary case via a control theorem, and to 2. prove a vanishing statement of $\mu$-invariants associated to motives that are congruent in an appropriate sense. ## The objects involved Suppose $p$ is an odd prime and let $F$ be an imaginary quadratic field in which $p$ splits into two primes ${\mathfrak p}$ and ${\mathfrak p}^c$. Let $F_\mathrm{cyc}$ and $F_\infty$ be the cyclotomic $\mathbb{Z}_p$-extension and the unique $\mathbb{Z}_p^2$-extension of $F$; thus $F_\infty$ contains $F_\mathrm{cyc}$. Let $\Omega=\operatorname{Gal}(F_\infty/F)$ and $\Gamma=\operatorname{Gal}(F_\mathrm{cyc}/F)$. For $E$ an elliptic curve over $\mathbb{Q}$ with supersingular reduction at $p$ and $a_p=0$, Kobayashi [@kobayashi03] constructed signed Selmer groups $\operatorname{Sel}^\pm(E/\mathbb{Q}_\mathrm{cyc})$ which are cotorsion over the Iwasawa algebra $\mathbb{Z}_p[[\operatorname{Gal}(\mathbb{Q}_\mathrm{cyc}/\mathbb{Q})]]$. For a construction that includes the general case, see [@sprung09], where the modified Selmer groups are called $\sharp,\flat$-signed Selmer groups over $\mathbb{Q}_\mathrm{cyc}$. Generalizing the work of Kobayashi, Kim constructed multi-signed Selmer groups (assuming $a_p=0$) $\operatorname{Sel}^{\pm,\pm}(E/F_\infty)$ over the $\mathbb{Z}_p^2$-extension $F_\infty$ which are also conjectured to be cotorsion as $\mathbb{Z}_p[[\Omega]]$-modules [@kimdoublysigned]. Out of these four signed Selmer groups, only the ones with the same signs, $\operatorname{Sel}^{+,+}(E/F_\infty)$ and $\operatorname{Sel}^{-,-}(E/F_\infty)$, are known to be cotorsion (see [@LeiPal19 Remark 8.5]). In the general case [@sprung16] (i.e. removing the $a_p=0$ condition), the second author has given a construction which shows that at least one of the chromatic Selmer groups is cotorsion. 
The local conditions defining these Selmer groups were then later reinterpreted in terms of Coleman maps using the theory of Wach-modules which enables one to define signed Selmer groups over $F_\mathrm{cyc}$ also for modular forms which are non-ordinary at $p$ (see [@leiloefflerzerbes10], [@leiloefflerzerbes11]). Generalizing the work of Kobayashi, Büyükboduk and Lei constructed signed Selmer groups over the cyclotomic extension of any number field for motives $\mathcal{M}$ over $\mathbb{Q}$ which are non-ordinary at $p$. Let $\mathcal{M}_p$ be the $p$-adic realization of the motive $\mathcal{M}$ with a $\mathbb{Z}_p$-lattice $T$ which is $G_F$-stable. Set $M^*=T^*\otimes \mathbb{Q}_p/\mathbb{Z}_p$ where $T^*=\operatorname{Hom}(T,\mathbb{Z}_p(1))$ is the Tate dual of $T$. Upon fixing a Hodge-compatible basis of the Dieudonné module attached to $T$ (see Section [2.2.1](#sub:HC){reference-type="ref" reference="sub:HC"}), for each $\underline{I}$ varying over a certain set (see [@BL17 Definition 3.1]), Büyükboduk and Lei constructed the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\mathrm{cyc})$ which they conjecture to be $\mathbb{Z}_p[[\Gamma]]$-cotorsion. When $T$ is the Tate module of an elliptic curve, their construction recovers Kobayashi's signed Selmer groups [@BL17 p. 397]. The assumption that the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\mathrm{cyc})$ is cotorsion as a $\mathbb{Z}_p[[\Gamma]]$-module can be found in many occasions throughout literature, see e.g. results in [@ponsinet]. ## The control theorem Suppose that the motive $\mathcal{M}$ is non-ordinary at both primes ${\mathfrak p}$ and ${\mathfrak p}^c$. Using the big dual exponential map of Loeffler--Zerbes for $F_\infty$ constructed in [@LZ14], one can construct the multi-signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ for the non-ordinary motive $\mathcal{M}$ over the $\mathbb{Z}_p^2$-extension $F_\infty$. We are mainly interested in finding a sufficient condition for this multi-signed Selmer group to be cotorsion as a $\mathbb{Z}_p[[\Omega]]$-module that works in this general setup. Note that in [@Dion2022], the author *assumed* that this Selmer group is cotorsion while proving a two variable algebraic functional equation for this Selmer group (see [@Dion2022 Theorem A]). Our main result in this article is to show the following. **Corollary 1** (see Corollary [Corollary 19](#prop:check){reference-type="ref" reference="prop:check"}). *If the Bloch-Kato Selmer group $\operatorname{Sel}_{\mathrm{BK}}(M^*/F)$ is finite then the Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ is $\mathbb{Z}_p[[\Omega]]$-cotorsion.* The key ingredient that goes into the proof of the above corollary is a control theorem (similar to Kim's for elliptic curves [@kimdoublysigned]). We show that **Theorem 2** (see Theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}). 
*For all but finitely many $s\in \mathbb{Z}$, the kernel and the cokernel of the restriction map $$\rho_{\vec{n}}:\operatorname{Sel}_{\underline{I}^c}(M_s/F_{\vec{n}}) \rightarrow \operatorname{Sel}_{\underline{I}^c}(M_s/F_\infty)^{\Omega_{\vec{n}}}$$ are bounded as $\vec{n}$ varies.* Here, we have fixed generators $(\gamma_1, \gamma_2)$ of $\Omega\cong \mathbf{Z}_p^2$ and for a choice of non-negative integers $\vec{n}:=(n_1, n_2), n_i\geqslant 0$, we have denoted by $\Omega_{\vec{n}}$ the subgroup of $\Omega$ generated by $(\gamma_1^{n_1}, \gamma_2^{n_2})$, and the extension $F_{\vec{n}}$ is the extension $F_\infty^{\Omega_{\vec{n}}}$. The Selmer group $\operatorname{Sel}_{\underline{I}^c}(M_s/F_*)$ is a twisted Selmer group (twisting $M:=T\otimes \mathbb{Q}_p/\mathbb{Z}_p$ by the $s$-th power of the cyclotomic character) defined in Section [3.1](#sec:twisted){reference-type="ref" reference="sec:twisted"} for a certain indexing set $\underline{I}^c$, cf. Remark [Remark 13](#rem:mithu){reference-type="ref" reference="rem:mithu"}. ## Congruence and previous works In the last part of this article, we study congruences of the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$. For elliptic curves $E$ and $E^\prime$ over $\mathbb{Q}$ with good ordinary reduction at $p$ such that $E[p] \cong E^\prime[p]$ as Galois modules, Greenberg and Vatsal [@GreenbergVatsal] initiated the study of such congruences in Iwasawa theory. They showed that the $\mu$-invariant of the Pontryagin dual of $\operatorname{Sel}_p(E/\mathbb{Q}_\mathrm{cyc})$ vanishes if and only if the $\mu$-invariant of the Pontryagin dual of $\operatorname{Sel}_p(E^\prime/\mathbb{Q}_\mathrm{cyc})$ vanishes. Kim generalized this result to supersingular elliptic curves and their signed Selmer groups over the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$ (see [@Kim09]). It was further generalized by Ponsinet to the case of supersingular abelian varieties and the multi-signed Selmer groups of Büyükboduk--Lei [@BL17] over the cyclotomic extension of a number field ([@ponsinet Theorem 3.13]). Under additional hypotheses, in [@FilippoSujatha4 Theorem 4.15] the result of [@Kim09] was generalized to a wider context for supersingular elliptic curves, while still working over the cyclotomic extension; this was further generalized to supersingular elliptic curves over the $\mathbb{Z}_p^2$-extension of an imaginary quadratic field $F$ where $p$ splits, under the assumption of Conjecture A (see [@Hamidi Corollary 4.6]). Conjecture A is a conjecture regarding the vanishing of the $\mu$-invariant of the dual *fine* $p^\infty$-Selmer group over the cyclotomic $\mathbb{Z}_p$-extension of a number field (see [@CoatesSujatha_fineSelmer]). ## Context and contrast of our result on congruence In this article we generalize both the results [@ponsinet Theorem 3.13] and [@Hamidi Corollary 4.6] mentioned above. In particular, we work in the setting of supersingular abelian varieties and multi-signed Selmer groups of Büyükboduk--Lei over the $\mathbb{Z}_p^2$-extension $F_\infty$. We note that in contrast to Hamidi's result, our theorem stated below is not dependent on Conjecture A. Hamidi's results *(loc.cit)* were stated for supersingular elliptic curves with $a_{{\mathfrak p}}=a_{{\mathfrak p}^c}=0$ where ${\mathfrak p}$ and ${\mathfrak p}^c$ are two primes of $F$ above $p$ (see [@Hamidi Hyp 1 (iv)])[^1]. We remove these assumptions to arrive at the following result. 
Let $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ and $\mathcal{X}_{\underline{I}}({M^\prime}^*/F_\infty)$ be the Pontryagin duals of the Selmer groups $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ and $\operatorname{Sel}_{\underline{I}}({M^\prime}^*/F_\infty)$, where $M'$ comes from a motive $\mathcal{M'}$ with the same residual representation as $\mathcal{M}$. **Theorem 3** (see Theorem [Theorem 22](#them:congruence_mu){reference-type="ref" reference="them:congruence_mu"}). *Let $\mathcal{M}$ and $\mathcal{M'}$ have $G_F$-stable lattices $T$ and $T'$ with congruent mod $p$ reduction that satisfy certain mostly $p$-adic Hodge theoretic hypotheses (H.-T.), (Cryst.), (Tors.), (Fil), and (Slopes) described in detail in the Preliminaries section. Suppose that $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ and $\mathcal{X}_{\underline{I}}({M^\prime}^*/F_\infty)$ have the same corank as $\mathbb{Z}_p[[\Omega]]$-modules. Then the $\mu$-invariant of $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ vanishes if and only if the $\mu$-invariant of $\mathcal{X}_{\underline{I}}({M^\prime}^*/F_\infty)$ vanishes.* **Corollary 4** (see Corollary [Corollary 23](#torsioncorollary){reference-type="ref" reference="torsioncorollary"}). *Suppose that the above Selmer groups are both $\mathbb{Z}_p[[\Omega]]$-cotorsion. Then their $\mu$-invariants either both vanish, or they both don't.* Note that the theorem above can be reinterpreted in terms of an Euler characteristic, since the $\mu$-invariant of $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ is the $p$-adic valuation of the Euler characteristic $\chi\big(\Omega, \mathcal{X}_{\underline{I}}(M^*/F_\infty)(p)\big)$ (see [@Howson_02 Corollary 1.7]). Our original goal was to relate the $\Omega$-Euler characteristic $\chi\big(\Omega, \mathcal{X}_{\underline{I}}(M^*/F_\infty)\big)$ with the $\Gamma$-Euler characteristic $\chi\big(\Gamma, \mathcal{X}_{\underline{I}}(M^*/F_\mathrm{cyc})\big)$ generalizing [@LeiSujatha Theorem 5.15]. It would be interesting if one can do this. ## Outline In Section [2](#sec:pre){reference-type="ref" reference="sec:pre"}, we recall the preliminaries including the notion of Yager module which will be an important tool in developing the theory of the two-variable signed Coleman map, the construction of which is given in Section [3](#sec:multiselmer){reference-type="ref" reference="sec:multiselmer"}. Using the signed Coleman map giving the local conditions at the primes of $F$ above $p$, the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ is constructed in Section [3](#sec:multiselmer){reference-type="ref" reference="sec:multiselmer"}. Ranks of certain Iwasawa modules, including the ranks of the images and the kernels of the signed two variable Coleman map, are analyzed in Section [4](#sec:ranks){reference-type="ref" reference="sec:ranks"}. In Section [5](#sec:resultmain){reference-type="ref" reference="sec:resultmain"}, we prove a control theorem (Theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}) and use it to provide a sufficient condition for the signed Selmer group over the $\mathbb{Z}_p^2$-extension $F_\infty$ to be cotorsion (see Corollary [Corollary 19](#prop:check){reference-type="ref" reference="prop:check"}). Finally, in Section [6](#sec: muinvariance){reference-type="ref" reference="sec: muinvariance"} we prove our theorem regarding the $\mu$-invariants under congruence (see Theorem [Theorem 22](#them:congruence_mu){reference-type="ref" reference="them:congruence_mu"}). 
# Acknowledgments. {#acknowledgments. .unnumbered} We thank Cédric Dion, Parham Hamidi, Antonio Lei, Filippo Nuccio and Gautier Ponsinet for carefully answering various questions. The first author is supported by the Inspire research grant, the second author by an NSF grant (2001280). # Preliminaries {#sec:pre} The goal of this section is to fix some notation. ## The setup {#sec: setup} Let $p$ be an odd prime and $F$ be a number field unramified at $p$. Let $\mathcal{M}$ be a motive over $F$ with coefficients in $\mathbb{Q}$ (see [@FPR94]). Let $\mathcal{M}_p$ be its $p$-adic realization and fix a $G_F$-stable $\mathbb{Z}_p$-lattice $T$ inside $\mathcal{M}_p$. Let $g=\dim_{\mathbb{Q}_p}(\mathrm{Ind}_F^{\mathbb{Q}}\mathcal{M}_p)$ and let $g_+=\dim_{\mathbb{Q}_p}(\mathrm{Ind}_F^{\mathbb{Q}}\mathcal{M}_p)^+$ be the dimension of the $+1$ eigenspace under the action of a fixed complex conjugation on $\mathrm{Ind}_F^{\mathbb{Q}}\mathcal{M}_p$. We write $g_{-}=g-g_+$ and $g_v=\dim_{\mathbb{Q}_p}(\mathrm{Ind}_{F_v}^{\mathbb{Q}_p}\mathcal{M}_p)$ where $v$ is a prime of $F$ above $p$. We then have $g=\sum_{v \mid p}g_v$. **Remark 5**. *Suppose $T$ is of rank $d$. Then $g_v=\operatorname{rank}(T)[F_v:\mathbb{Q}_p]=d\,[F_v:\mathbb{Q}_p]$, and hence $g=\sum_{v |p}g_v=\sum_{v |p}d\,[F_v:\mathbb{Q}_p]$. If needed, one can also assume $g_+=g_-=\frac{g}{2}$ (see hypothesis (H.P) of [@LeiPonsinet2017]).* Let $T^*=\operatorname{Hom}(T,\mathbb{Z}_p(1))$ be the Tate dual of $T$. We set $M=T \otimes \mathbb{Q}_p/\mathbb{Z}_p$ and $M^*=T^* \otimes \mathbb{Q}_p/\mathbb{Z}_p$. For every prime $v$ of $F$ above $p$, we work with the hypotheses on $\mathcal{M}$ as in [@ponsinet]:\ **(H.-T.)** The Hodge--Tate weights of $\mathcal{M}_p$, as a $G_{F_v}$-representation, are in $[0,1]$. **(Cryst.)** The $G_{F_v}$-representation $\mathcal{M}_p$ is crystalline. **(Tors.)** The cohomology groups $H^0(F_v, T/pT)$ and $H^2(F_v, T/pT)$ are trivial. We note that the hypotheses **(Cryst.)** and **(H.-T.)** are then also satisfied for $\mathcal{M}^*$, which is the dual of $\mathcal{M}$. The hypothesis **(Tors.)** is also satisfied by $T^*$, which is a $G_F$-stable $\mathbb{Z}_p$-lattice inside the $p$-adic realization $\mathcal{M}_p^*$ of $\mathcal{M}^*$. ## Dieudonné modules For a prime $v$ of $F$ dividing $p$, let $\mathbb{D}_{\mathrm{cris},v}(T)$ be the Dieudonné module associated to $T$ (see [@berger04 Defn. V.1.1]). It is a free $\mathcal{O}_{F_v}$-module of rank $\dim_{\mathbb{Q}_p}(\mathcal{M}_p)$ and is equipped with a filtration of $\mathcal{O}_{F_v}$-modules $(\mathrm{Fil}^i\mathbb{D}_{\mathrm{cris},v}(T))_{i \in \mathbb{Z}}$ and a Frobenius operator ${\varphi}$. The filtration is given by $$\mathrm{Fil}^i \mathbb{D}_{\mathrm{cris},v}(T) = \begin{cases} 0 & \text{if $i \geqslant 1$,} \\ \mathbb{D}_{\mathrm{cris},v}(T) & \text{if $i \leqslant-1$.} \end{cases}$$ We set $\mathbb{D}_{\mathrm{cris},v}(\mathcal{M}_p):=\mathbb{D}_{\mathrm{cris},v}(T) \otimes \mathbb{Q}_p$; this is the filtered ${\varphi}$-module, in the sense of Fontaine, associated to $\mathcal{M}_p$. We also make the following assumptions.\ **(Fil.)** $\sum_{v \mid p} \dim_{\mathbb{Q}_p} \mathrm{Fil}^0\mathbb{D}_{\mathrm{cris},v}(T) \otimes \mathbb{Q}_p =g_-.$ **(Slopes)** The slopes of the Frobenius operator ${\varphi}$ on the Dieudonné module $\mathbb{D}_{\mathrm{cris},v}(\mathcal{M}_p)$ lie inside $(-1,0)$. 
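To make the last two hypotheses concrete, here is a minimal illustration in the lowest-dimensional case; it is not used elsewhere in the paper, and the characteristic polynomial below is written with the normalization implicit in **(H.-T.)**, i.e. Hodge--Tate weights $0$ and $1$. Let $T=T_p(E)$ be the Tate module of an elliptic curve $E$ with good reduction at a prime $v$ of $F$ with $F_v=\mathbb{Q}_p$, and let $a_v$ denote the trace of Frobenius at $v$. Then ${\varphi}$ acts on the two-dimensional space $\mathbb{D}_{\mathrm{cris},v}(\mathcal{M}_p)$ with characteristic polynomial $$X^2-\frac{a_v}{p}X+\frac{1}{p},$$ so its eigenvalues $\alpha,\beta$ satisfy $v_p(\alpha)+v_p(\beta)=-1$. If the reduction is supersingular, then $p \mid a_v$, whence $v_p(\alpha+\beta)\geqslant 0$ and therefore $v_p(\alpha)=v_p(\beta)=-\tfrac{1}{2}\in(-1,0)$, so **(Slopes)** holds; in the ordinary case the slopes are $0$ and $-1$ and **(Slopes)** fails (compare [@ponsinet Example 1.1]). 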
### Hodge-compatible basis {#sub:HC} We fix once and for all a *Hodge-compatible* basis of $\mathbb{D}_{\mathrm{cris},v}(T)$ which is, by definition, a $\mathbb{Z}_p$-basis $\{u_1,...,u_{g_v}\}$ of $\mathbb{D}_{\mathrm{cris},v}(T)$ such that $\{u_1,...,u_{d_v}\}$ is a basis for $\mathrm{Fil}^0\mathbb{D}_{\mathrm{cris},v}(T)$ for some $d_v$. The matrix of the Frobenius ${\varphi}$ with respect to this basis is of the form $$C_{{\varphi},v} = C_v\left[ \begin{array}{c|c} I_{d_v} & 0 \\ \hline 0 & \frac{1}{p} I_{g_v-d_v} \end{array} \right]$$ for some $C_v \in \mathrm{GL}_{g_v}(\mathbb{Z}_p)$ and where $I_n$ is the identity $n \times n$ matrix (see [@BL17 Section 2.2]). We also note that the Dieudonné module $\mathbb{D}_{\mathrm{cris},v}(T^*)$ satisfies the hypotheses **(Fil.)** and **(Slopes)**. **Remark 6**. *Suppose that $A$ is an abelian variety defined over $F$ having good supersingular reduction at all the primes of $F$ dividing $p$. Let $T_p(A)$ be the Tate module of $A$ and $V_p(A):=T_p(A) \otimes\mathbb{Q}_p$. Then the $G_F$-representation $V_p(A)$ and its Galois stable lattice $T_p(A)$ satisfy all the hypotheses **(H.-T.)**, **(Cryst.)**, **(Tors.)**, **(Fil.)** and **(Slopes)** (cf. [@ponsinet Example 1.1]).* ## The Yager module {#sec Yager} Let $L/\mathbb{Q}_p$ be a finite unramified extension. Let $L_\infty$ be any unramified $p$-adic Lie extension of $L$ with Galois group $U$. For $x \in \mathcal{O}_L$, define $$y_{L/\mathbb{Q}_p}(x) := \sum_{\tau \in \operatorname{Gal}(L/\mathbf{Q}_p)} \tau (x) \cdot \tau^{-1} \in \mathcal{O}_L [\operatorname{Gal}(L/\mathbb{Q}_p)].$$ Let $S_{L/\mathbb{Q}_p}$ be the sub-$\mathcal{O}_L [\operatorname{Gal}(L/\mathbb{Q}_p)]$-module generated by the image of $y_{L/\mathbb{Q}_p}$ in $\mathcal{O}_L [\operatorname{Gal}(L/\mathbb{Q}_p)]$. Then there is an isomorphism of $\Lambda_{\mathcal{O}_L}(U)$-modules (cf. [@LZ14 Section 3.2]) $$y_{L_\infty/\mathbb{Q}_p}:\varprojlim_{\mathbb{Q}_p \subseteq L \subseteq L_\infty} \mathcal{O}_L \xrightarrow{\cong} S_{L_\infty / \mathbb{Q}_p}:= \varprojlim_{\mathbb{Q}_p \subseteq L \subseteq L_\infty} S_{L/\mathbb{Q}_p},$$ where the inverse limit is taken with respect to the trace maps on the left and the projection maps $\operatorname{Gal}(L^\prime/\mathbb{Q}_p) \to \operatorname{Gal}(L/\mathbb{Q}_p)$ for $L \subseteq L^\prime$ on the right. By [@LZ14 Proposition 3.2], $\varprojlim_{\mathbb{Q}_p \subseteq L \subseteq L_\infty} \mathcal{O}_L$ is a free $\Lambda_{\mathcal{O}_L}(U)$-module of rank one, so that the Yager module $S_{L_\infty/\mathbb{Q}_p}$ is also free of rank one over $\Lambda_{\mathcal{O}_L}(U)$. The Yager module $S_{L_\infty/\mathbb{Q}_p}$ comes equipped with a compact and Hausdorff topology that coincides with the subspace topology from $\Lambda_{\widehat{\mathcal{O}}_{L_\infty}}(U)$, where $\widehat{\mathcal{O}}_{L_\infty}$ is the completion of the ring of integers of $L_\infty$. # The multi-signed Selmer groups over a $\mathbb{Z}_p^2$-tower {#sec:multiselmer} In this section, we recall some known results, mainly the construction of certain Coleman maps and present a conjecture due to Büyükboduk--Lei concerning the cotorsion of a Selmer group constructed from the Coleman map in the case of a cyclotomic $\mathbb{Z}_p$-extension. We generalize their conjecture to the case of a $\mathbb{Z}_p^2$-extension. 
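Before setting up notation for the $\mathbb{Z}_p^2$-tower, we record a toy instance of the Yager elements of Section [2.3](#sec Yager){reference-type="ref" reference="sec Yager"}; it is purely illustrative and plays no role in the arguments. If $L/\mathbb{Q}_p$ is the unramified quadratic extension, with $\operatorname{Gal}(L/\mathbb{Q}_p)=\{1,\sigma\}$ generated by the Frobenius $\sigma$, then for $x \in \mathcal{O}_L$ we have $$y_{L/\mathbb{Q}_p}(x)=x\cdot 1+\sigma(x)\cdot\sigma \in \mathcal{O}_L[\operatorname{Gal}(L/\mathbb{Q}_p)],$$ and $S_{L/\mathbb{Q}_p}$ is the $\mathcal{O}_L[\operatorname{Gal}(L/\mathbb{Q}_p)]$-module generated by these elements; the Yager module $S_{L_\infty/\mathbb{Q}_p}$ used below is the inverse limit of such finite-level pieces along the unramified tower. 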
Let $F$ be imaginary quadratic, $F_\infty$ be the compositum of all $\mathbb{Z}_p$-extensions of $F$, so that in particular $F_{\mathrm{cyc}}\subset F_\infty$, where $F_{\mathrm{cyc}}$ is the cyclotomic $\mathbb{Z}_p$-extension of $F$. We let $\Omega:= \operatorname{Gal}(F_\infty/F)\cong \mathbb{Z}_p^2$. Suppose that the prime $p$ splits into two primes ${\mathfrak p}$ and ${\mathfrak p}^c$ of $F$, where $c$ denotes complex conjugation. We will write ${\mathfrak q}$ to denote either of these two primes. If $\mathfrak{a}$ is an ideal of $\mathcal{O}_F$, let $F(\mathfrak{a})$ be the ray class field of $F$ of conductor $\mathfrak{a}$. If $n \geqslant 0$ is an integer, we will write $\Omega_n := \Omega^{p^n}$ and $F_n := F_\infty^{\Omega_n} = F(p^{n+1})^{\operatorname{Gal}(F(1)/F)}$. We assume that $F(1)\bigcap F_\infty=F$, hence $\Omega \cong G_{{\mathfrak p}}\times G_{{\mathfrak p}^c}$ where $G_{\mathfrak q}$ is the Galois group of the extension $F({\mathfrak q}^\infty) \bigcap F_\infty / F$. We fix topological generators $\gamma_{\mathfrak p}$ and $\gamma_{{\mathfrak p}^c}$ respectively for the groups $G_{\mathfrak p}$ and $G_{{\mathfrak p}^c}$. Let $\Lambda(\Omega) := \mathbf{Z}_p[[\Omega ]] \cong \mathbf{Z}_p[[ \gamma_{\mathfrak p}-1, \gamma_{{\mathfrak p}^c}-1]]$ be the Iwasawa algebra of $\Omega$. More generally, for a choice of non-negative integers $\vec{n}:=(n_1, n_2); n_i\geqslant 0$, denote by $\Omega_{\vec{n}}$ the subgroup of $\Omega$ generated by $(\gamma_{{\mathfrak p}}^{n_1}, {\gamma_{{\mathfrak p}^c}}^{n_2})$, and let $F_{\vec{n}}:=F_\infty^{\Omega_{\vec{n}}}$. Write $\Omega_{\mathfrak q}$ for the decomposition group of ${\mathfrak q}$ in $\Omega$. Let $\Sigma$ be a finite set of primes of $F$ containing the primes dividing $p$, the archimedean primes and the primes of ramification of $M^*$. Let $F_\Sigma$ be the maximal extension of $F$ that is unramified outside $\Sigma$. If $w$ is a place of $F_\infty$ above ${\mathfrak q}$, then by local class field theory, the Galois group $\operatorname{Gal}(F_{\infty,w}/F_{\mathfrak q})$ is isomorphic to $\mathbb{Z}_p^2$. Hence the extension $F_{\infty,w}$ coincides with the compositum of the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$ and the unramified $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$. Let $L_\infty$ and $k_\text{cyc}$ be the unramified $\mathbb{Z}_p$-extension and the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$ respectively. Let $k_\infty$ be the compositum of $L_\infty$ and $k_\mathrm{cyc}$. For $n \geqslant 0$, let $k_n$ and $L_n$ be the subextensions of $k_\mathrm{cyc}$ and $L_\infty$ respectively such that $[k_n:\mathbb{Q}_p]=p^n$ and $[L_n:\mathbb{Q}_p]=p^n$. We set $\Omega_p := \operatorname{Gal}(k_\infty/\mathbb{Q}_p) \cong \mathbb{Z}_p^2$, $\Gamma_{\text{ur}} := \operatorname{Gal}(L_\infty/\mathbb{Q}_p) \cong \mathbb{Z}_p$ and $\Gamma_{\text{cyc}} := \operatorname{Gal}(k_\text{cyc}/\mathbb{Q}_p) \cong \mathbb{Z}_p$. Thus $\Omega_p \cong \Gamma_{\mathrm{ur}} \times \Gamma_{\mathrm{cyc}}$. For $\vec{n}=(n_1,n_2);n_i\geqslant 0$, on fixing two topological generators $(\gamma_1,\gamma_2)$ of $\Omega_p\cong \mathbb{Z}_p^2$, we will write $\Omega_{p,\vec{n}}$ to denote the subgroup of $\Omega_p$ generated by $(\gamma_1^{n_1},\gamma_2^{n_2})$. 
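A concrete instance of this setup, recorded only to fix ideas (none of the arguments below depend on this particular choice): take $F=\mathbb{Q}(i)$ and any prime $p \equiv 1 \pmod 4$. Then $$p\mathcal{O}_F={\mathfrak p}\,{\mathfrak p}^c, \qquad h_F=1, \qquad F(1)=F,$$ so the assumption $F(1)\cap F_\infty=F$ holds automatically, and there is a unique prime of $F_\infty$ above each of ${\mathfrak p}$ and ${\mathfrak p}^c$ (cf. Remark 7 below). 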
Choosing a topological generator $\gamma_2$ of $\Gamma_\mathrm{cyc}$, we let $\mathcal{H}(\Gamma_\text{cyc})$ be the set of power series $$\sum_{n\geqslant 0} c_{n} \cdot (\gamma_2 -1 )^n$$ with coefficients in $\mathbb{Q}_p$ such that $\sum_{n\geqslant 0}c_{n}X^n$ converges on the open unit disk. Fix a Hodge-compatible basis $\{u_1,...,u_{g_{\mathfrak q}}\}$ of $\mathbb{D}_{\mathrm{cris},{\mathfrak q}}(T)$. Let $\{Y_{L_\infty/\mathbb{Q}_p}\}$ be a basis of the Yager module $S_{L_\infty/\mathbb{Q}_p}$. Then Loeffler--Zerbes [@LZ14] constructed a two-variable big logarithm map $$\mathcal{L}_{T,{\mathfrak q}}^{\infty}: H^1_\mathrm{Iw}(k_\infty,T) \to Y_{L_\infty/\mathbb{Q}_p} \cdot \left( \mathcal{H}(\Gamma_\text{cyc})\widehat{\otimes}\Lambda(\Gamma_\text{ur}) \right) \otimes_{\mathbb{Z}_p} \mathbb{D}_{\mathrm{cris},{\mathfrak q}}(T).$$ which interpolates the Bloch--Kato dual exponential map [@LZ14 Theorem 4.15]. Let $K$ be a finite unramified extension of $\mathbb{Q}_p$ and let $K_\mathrm{cyc}$ denote the cyclotomic $\mathbb{Z}_p$-extension of $K$ with Galois group $\Gamma_{\mathrm{cyc}}$. In [@BL17], the authors show the existence of one-variable Coleman maps $$\operatorname{Col}_{T,K,i}:H^1_\mathrm{Iw}(K_\mathrm{cyc},T) \to \mathcal{O}_K\otimes \Lambda(\Gamma_{\mathrm{cyc}})$$ for $1 \leqslant i \leqslant g_v$. Those maps are compatible with the corestriction maps $$H^1_\mathrm{Iw}(L_{m,\mathrm{cyc}},T)\to H^1_\mathrm{Iw}(L_{m-1,\mathrm{cyc}},T)$$ and the trace maps $\mathcal{O}_{L_m}\otimes \Lambda(\Gamma_\mathrm{cyc}) \to \mathcal{O}_{L_{m-1}}\otimes \Lambda(\Gamma_\mathrm{cyc})$, where $L_{m,\mathrm{cyc}}$ is the cyclotomic $\mathbb{Z}_p$-extension of $L_m$. We define the two-variable Coleman maps by taking the inverse limit of $\operatorname{Col}_{T,L_m,i}$ as $L_m$ runs through the finite extensions between $L_\infty$ and $\mathbb{Q}_p$. In order to get a family of maps landing in $\Lambda(\Gamma_p)$, we further compose it with the Yager module $S_{L_\infty/\mathbb{Q}_p}$. More precisely, the two-variable Coleman maps are defined by $$\begin{aligned} \operatorname{Col}_{T,i}^\infty:H^1_\mathrm{Iw}(k_\infty,T) \cong \varprojlim_{L_m} H^1_{\mathrm{Iw}}(L_{m,cyc},T) &\to Y_{L_\infty/\mathbb{Q}_p}\cdot \Lambda(\Omega_p) \label{two-variable-coleman}\\ (z_m) & \mapsto (y_{L_\infty/\mathbb{Q}_p}\otimes 1) \circ (\varprojlim_{L_m} \operatorname{Col}_{T,L_m,i}(z_m)) \notag.\end{aligned}$$ We can identify $Y_{L_\infty/\mathbb{Q}_p} \cdot \Lambda(\Omega_p)$ with $\Lambda(\Omega_p)$, and hence omit the basis $Y_{L_\infty/\mathbb{Q}_p}$ from the notation and see the Coleman maps $\operatorname{Col}_{T,i}^\infty$ as taking value in $\Lambda(\Omega_p)$. By combining [@BL17 Theorem 2.13] and [@LZ14 Theorem 4.7 (1)] and including the prime ${\mathfrak q}$ in our notation, we have the Coleman maps $$\label{eq:coleman} \operatorname{Col}_{T,{\mathfrak q},i}^\infty: H^1(K_{\mathfrak q}, T \otimes \Lambda(\Omega_{\mathfrak q})^\iota) \rightarrow \mathbb{Z}_p[[\Omega_p]]$$ for $i\in \{1,...,g_{\mathfrak q}\}$ such that $$\mathcal{L}_{T,{\mathfrak q}}^{\infty} = (u_1,\ldots, u_{g_{\mathfrak q}}) \cdot M_{T,{\mathfrak q}} \cdot \begin{bmatrix} \operatorname{Col}_{T,{\mathfrak q}, 1}^\infty \\ \vdots \\ \operatorname{Col}_{T,{\mathfrak q},g_{\mathfrak q}}^\infty \end{bmatrix} .$$ Here the matrix $M_{T,{\mathfrak q}}$ is defined in the following way. 
For $n \geqslant 1$, as in [@BL17], we can define $$C_{{\mathfrak q}, n} = \left[ \begin{array}{c|c} I_{d_{\mathfrak q}} & 0 \\ \hline 0 & \Phi_{p^n}(1+X)I_{g_{\mathfrak q}-d_{\mathfrak q}} \end{array} \right] C_{\mathfrak q}^{-1}$$ and $$M_{{\mathfrak q},n}=(C_{{\varphi},{\mathfrak q}})^{n+1} C_{{\mathfrak q},n}\cdots C_1,$$ where $\Phi_{p^n}$ is the $p^n$-th cyclotomic polynomial and the matrices $C_{{\varphi},{\mathfrak q}}$ and $C_{\mathfrak q}$ are as in Section [2.2.1](#sub:HC){reference-type="ref" reference="sub:HC"}. By [@BL17 Proposition 2.5] the sequence $(M_{{\mathfrak q},n})_{n \geqslant 1}$ converges to some $g_{\mathfrak q}\times g_{\mathfrak q}$ logarithmic matrix $M_{T,{\mathfrak q}}$ with entries in $\mathcal{H}(\Gamma_{\mathrm{cyc}})$. **Remark 7**. *By assumption, $F_\infty \cap F(1)=F$, which happens if $p$ does not divide $h_F$, the class number of $F$. For a prime ${\mathfrak q}$ of $F$ over $p$, ${\mathfrak q}$ does not split in $F_\mathrm{cyc}$ and every prime above $p$ in the anticyclotomic extension of $F$ is totally ramified. Hence the assumption $F_\infty \cap F(1)=F$ implies that there is a *unique* prime above ${\mathfrak q}$ in $F_\infty$. However, for the definition of the multi-signed Selmer groups given below, we do not need the assumption $F_\infty \cap F(1)=F$.* Denote by $p^t$ the number of primes above ${\mathfrak p}$ and ${\mathfrak p}^c$ in $F_\infty$. Fix a choice of coset representatives $\gamma_1,\ldots, \gamma_{p^t}$ and $\delta_1,\ldots,\delta_{p^t}$ for $\Omega/\Omega_{\mathfrak p}$ and $\Omega/\Omega_{{\mathfrak p}^c}$ respectively. Since $p$ splits in $F$, we can identify $F_{\mathfrak q}$ with $\mathbb{Q}_p$ and $\Omega_{\mathfrak q}$ with $\Omega_p$. Consider the "semi-local\" decomposition coming from Shapiro's lemma $$H^1(F_{\mathfrak p}, T\otimes \Lambda(\Omega)^\iota) = \bigoplus_{j=1}^{p^t} H^1(F_{\mathfrak p}, T \otimes \Lambda(\Omega_{\mathfrak p})^\iota)\cdot \gamma_j \cong \bigoplus_{v | {\mathfrak p}}H^1_\text{Iw}(F_{\infty,v},T),$$ where $v$ runs through the primes above ${\mathfrak p}$ in $F_\infty,$ and $\iota: \Lambda(\Omega) \rightarrow \Lambda(\Omega)$ is the involution obtained by sending $g \in \Omega$ to $g^{-1}$. By choosing a Hodge-compatible basis of $\mathbb{D}_{\mathrm{cris},{\mathfrak p}}(T)$, define the Coleman map for $T$ at ${\mathfrak p}$ by $$\begin{aligned} \operatorname{Col}_{T,{\mathfrak p},i}^{k_\infty}: H^1(F_{\mathfrak p},T \otimes \Lambda(\Omega)^\iota) &\to \Lambda(\Omega) \\ x=\sum_{j=1}^{p^t}x_j \cdot \gamma_j &\mapsto \sum_{j=1}^{p^t} \operatorname{Col}_{T,{\mathfrak p},i}^\infty(x_j)\cdot \gamma_j\end{aligned}$$ for all $1 \leqslant i \leqslant g_{\mathfrak p}$. Let $\mathcal{L}_{T,{\mathfrak p}}^{k_\infty}=\oplus_{j=1}^{p^t}\mathcal{L}_{T,{\mathfrak p}}^{\infty}\cdot \gamma_j$. Define $\operatorname{Col}_{T,{\mathfrak p}^c,i}^{k_\infty}$ and $\mathcal{L}_{T,{\mathfrak p}^c}^{k_\infty}$ in an analogous way. 
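For orientation, we spell out the shape of these matrices in the simplest case. This is an illustrative choice of Hodge-compatible basis only (the unit matrix $C_{\mathfrak q}$ depends on that choice), and it is not needed in the sequel. Suppose $g_{\mathfrak q}=2$ and $d_{\mathfrak q}=1$, as for the Tate module of an elliptic curve with good supersingular reduction, and suppose the basis is chosen so that $$C_{\mathfrak q}=\begin{pmatrix} 0 & -1 \\ 1 & a_{\mathfrak q}\end{pmatrix}, \qquad C_{{\varphi},{\mathfrak q}}=\begin{pmatrix} 0 & -1/p \\ 1 & a_{\mathfrak q}/p\end{pmatrix}, \qquad C_{\mathfrak q}^{-1}=\begin{pmatrix} a_{\mathfrak q} & 1 \\ -1 & 0\end{pmatrix},$$ where $a_{\mathfrak q}$ denotes the trace of Frobenius at ${\mathfrak q}$. Then $$C_{{\mathfrak q},n}=\begin{pmatrix} 1 & 0 \\ 0 & \Phi_{p^n}(1+X)\end{pmatrix}\begin{pmatrix} a_{\mathfrak q} & 1 \\ -1 & 0\end{pmatrix}=\begin{pmatrix} a_{\mathfrak q} & 1 \\ -\Phi_{p^n}(1+X) & 0\end{pmatrix},$$ and $M_{T,{\mathfrak q}}$ arises as the limit of the $2\times 2$ products $(C_{{\varphi},{\mathfrak q}})^{n+1}C_{{\mathfrak q},n}\cdots C_{{\mathfrak q},1}$. These are two-variable analogues of the logarithm matrices familiar from the $\sharp/\flat$-theory; compare Remark 12 below. 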
Let $I_{\mathfrak q}$ denote a subset of $\{1,\ldots,g_{\mathfrak q}\}$ and let $$\begin{aligned} \operatorname{Col}_{T,I_{\mathfrak q}}^\infty : H^1(F_{\mathfrak q},T \otimes \Lambda(\Omega)^\iota) & \to \bigoplus_{i=1}^{|I_{\mathfrak q}|} \Lambda(\Omega) \\ z & \mapsto (\operatorname{Col}^{k_\infty}_{T,{\mathfrak q},i}(z))_{i \in I_{\mathfrak q}}.\end{aligned}$$ Tate's local pairing induces a pairing $$\label{TatePairing} \bigoplus_{w|{\mathfrak q}}H^1_\mathrm{Iw}(F_{\infty,w},T) \times \bigoplus_{w|{\mathfrak q}}H^1(F_{\infty,w},M^*) \to \mathbb{Q}_p/\mathbb{Z}_p$$ for all places $w$ of $F_\infty$ above ${\mathfrak q}$. We define $H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*) \subseteq \bigoplus_{v | {\mathfrak q}}H^1(F_{\infty,v},M^*)$ as the orthogonal complement of $\ker \operatorname{Col}_{T,I_{\mathfrak q}}^\infty$ under the previous pairing. Let $K^\prime$ be a finite extension of $\mathbb{Q}_p$. We define $$H^1_{\mathrm{ur}}(K^\prime, \mathcal{M}_p^*)=\ker\Big(H^1(K^\prime, \mathcal{M}_p^*) \rightarrow H^1(K^\prime_{\mathrm{ur}}, \mathcal{M}_p^*) \Big)$$ where $K^\prime_\mathrm{ur}$ is the maximal unramified extension of $K^\prime$. Let $H^1_{\mathrm{ur}}(K^\prime,M^*)$ be the image of $H^1_{\mathrm{ur}}(K^\prime,\mathcal{M}_p^*)$ under the natural map $$H^1_{\mathrm{ur}}(K^\prime,\mathcal{M}_p^*) \rightarrow H^1(K^\prime,M^*).$$ For an infinite extension $L$ of $\mathbb{Q}_p$, we define $$H^1_{{\mathrm{ur}}}(L,M^*) = \varinjlim_{K^\prime}H^1_{\mathrm{ur}}(K^\prime, M^*)$$ where $K^\prime$ runs through all the finite subextensions of $L$ and the limit is taken with respect to restriction maps. Let $\Sigma^\prime$ be the set of primes of $F_\infty$ above $\Sigma$. Let $\mathcal{I}_p$ be the set of tuples $\underline{I}=(I_{\mathfrak p}, I_{{\mathfrak p}^c})$ such that $I_{\mathfrak q}$ is a subset of $\{1,...,g_{\mathfrak q}\}$ for ${\mathfrak q}\in \{{\mathfrak p},{\mathfrak p}^c\}$ and $|I_{\mathfrak p}|+|I_{{\mathfrak p}^c}|=g_{-}$. **Definition 8**. *For any $\underline{I}$ as above, set* *$$\mathcal{P}_{\Sigma,\underline{I}}(M^*/F_\infty)=\prod_{w \in \Sigma^\prime, w \nmid p}\frac{H^1(F_{\infty,w}, M^*)}{H^1_{\mathrm{ur}}(F_{\infty,w}, M^*)} \times \prod_{{\mathfrak q}\mid p}\frac{\bigoplus_{w \mid {\mathfrak q}}H^1(F_{\infty,w}, M^*)}{H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)}.$$* *Then the multi-signed Selmer group over the $\mathbb{Z}_p^2$-extension $F_\infty$ is defined as $$\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)=\ker\Big(H^1(F_\Sigma/F_\infty, M^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M^*/F_\infty)\Big).$$* Our construction generalizes the construction of the multi-signed Selmer groups for the cyclotomic $\mathbb{Z}_p$-extension $F_\mathrm{cyc}$ given in [@ponsinet Section 1.5 and Section 1.6] (originally constructed by Büyükboduk--Lei [@BL17]). For a prime ${\mathfrak q}$ of $F$ above $p$, we have the signed Coleman map for the cyclotomic extension $$\operatorname{Col}_{T,I_{\mathfrak q}}^\mathrm{cyc}: H^1_{\mathrm{Iw}}(F_{\mathfrak q},T) \rightarrow \bigoplus_{i=1}^{|I_{\mathfrak q}|}\mathbb{Z}_p[[\operatorname{Gal}(F_\mathrm{cyc}/F)]]$$ which gives the local condition $H^1_{I_{\mathfrak q}}(F_{\mathrm{cyc},{\mathfrak q}},M^*)$. 
This enables the author to define the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\mathrm{cyc})$ (see [@ponsinet Definition 1.7]) (Ponsinet uses $F_\infty$ to denote the cyclotomic $\mathbb{Z}_p$-extension and uses $\operatorname{Col}_{T,I_{\mathfrak q}}$ in place of $\operatorname{Col}_{T,I_{\mathfrak q}}^\mathrm{cyc}$; for convenience in this article we have changed the notations of Ponsinet a bit.) We can also define the signed Selmer groups at finite layers. We now assume that $F_\infty \cap F(1)=F$ and hence, by abuse of notation, we will let ${\mathfrak q}$ denote the unique prime at $F_\infty$ over the prime ${\mathfrak q}\in \{{\mathfrak p},{\mathfrak p}^c\}$ of $F$. We define $H^1_{I_{\mathfrak q}}(F_{\vec{n},{\mathfrak q}},M^*):=H^1_{I_{\mathfrak q}}(F_{\infty, {\mathfrak q}},M^*)^{\Omega_{\vec{n}}}$, just as in [@ponsinet Definition 1.5]. Defining the local condition at a finite layer by using the local condition at an infinite tower and then descending to the finite level is also crucially exploited in the work of Burungale--Büyükboduk--Lei (see [@BBL2022 Remark 4.9]). This approach also appears in many other works of Lei, for example see [@Lei2023 Definition 5.2 and Remark 5.4]. We will take this approach since it will have several advantages as we will see in the control theorem later (see Theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}). Having defined the analogous local condition $\mathcal{P}_{\Sigma,\underline{I}}(M^*/F_{\vec{n}})$ as before using $H^1_{I_{\mathfrak q}}(F_{\vec{n},{\mathfrak q}},M^*)$, we can define the Selmer group $$\operatorname{Sel}_{\underline{I}}(M^*/F_{\vec{n}}):=\ker\Big(H^1(F_\Sigma/F_{\vec{n}}, M^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M^*/F_{\vec{n}})\Big).$$ Recall that $\vec{n}=(n_1,n_2)$ and if $n_1=n_2=n$, then we will simply write $\vec{n}$ as $n$. Note that, since $H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)$ is a discrete $\Omega_p$-module, by [@Neukirch Proposition 1.1.8], $$H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)=\varinjlim_n H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)^{\Omega_{p,n}}=\varinjlim_n H^1_{I_{\mathfrak q}}(F_{n,{\mathfrak q}},M^*).$$ Since taking direct limits is an exact functor, this gives $$\varinjlim_n \frac{H^1(F_{n,{\mathfrak q}},M^*)}{H^1_{I_{\mathfrak q}}(F_{n,{\mathfrak q}},M^*)}\cong \frac{\varinjlim_n H^1(F_{n,{\mathfrak q}},M^*)}{\varinjlim_n H^1_{I_{\mathfrak q}}(F_{n,{\mathfrak q}},M^*)}\cong \frac{H^1(F_{\infty,{\mathfrak q}},M^*)}{H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)}.$$ Furthermore, $\varinjlim_n H^1_{\mathrm{ur}}(F_{n,v},M^*)=H^1_{\mathrm{ur}}(F_{\infty,v},M^*)$ when $v \nmid p$. Hence we obtain the following compatibility between multi-signed Selmer groups at finite levels $F_n$ and the infinite $\mathbb{Z}_p^2$-extension $F_\infty$. We have $$\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)\cong \varinjlim_n \operatorname{Sel}_{\underline{I}}(M^*/F_n)$$ as $\Lambda[[\Omega]]$-modules. The following conjecture has been made by Büyükboduk and Lei [@BL17 Remark 3.57]. **Conjecture 9**. *For any $\underline{I} \in \mathcal{I}_p$, the signed Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\mathrm{cyc})$ is cotorsion over the Iwasawa algebra $\mathbb{Z}_p[[\operatorname{Gal}(F_\mathrm{cyc}/F)]]$.* **Remark 10**. 
*When the base field $F$ is $\mathbb{Q}$ and $\mathcal{M}$ is the Tate module of a supersingular elliptic curve with $a_p=0$, the signed Selmer groups $\operatorname{Sel}_{\underline{I}}(M^*/F_\mathrm{cyc})$ for $\underline{I} \in \mathcal{I}_p$ coincide with Kobayashi's plus and minus Selmer groups (see [@BL17 Appendix 4]). Kobayashi's plus and minus Selmer groups are known to be cotorsion [@kobayashi03] and hence conjecture [Conjecture 9](#conj1){reference-type="ref" reference="conj1"} holds. Moreover, conjecture [Conjecture 9](#conj1){reference-type="ref" reference="conj1"} also holds for at least one of the chromatic Selmer groups in the case of $p$-supersingular elliptic curves with $a_p \neq 0$ and eigenforms which are non-ordinary at $p$ (see [@sprung09 Proposition 6.14, Theorem 7.14] and [@leiloefflerzerbes10 Theorem 6.5]).* It seems reasonable to generalize Conjecture [Conjecture 9](#conj1){reference-type="ref" reference="conj1"} above to the case of the $\mathbb{Z}_p^2$-extension. **Conjecture 11**. *For any $\underline{I} \in \mathcal{I}_p$, the Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ is $\Lambda(\Omega)$-cotorsion.* **Remark 12**. *Suppose that $\mathcal{M}$ is the Tate module of a supersingular elliptic curve. Then the two-variable Coleman maps we defined over the $\mathbb{Z}_p^2$-extension $F_\infty$ are the same as the $\sharp/\flat$-Coleman maps in [@LeiSprung2020 Section 5].* *For $\bullet,\circ \in \{\sharp,\flat\}$, one can construct signed $p$-adic $L$-functions $\mathfrak{L}_p^{\bullet,\circ}(E/F)$. Under the hypothesis that $\mathfrak{L}_p^{\bullet,\circ}(E/F)$ is nonzero, [@sprung16 Proposition 2.23] or [@castella2018iwasawa Theorem 3.7] shows that the signed Selmer group $\operatorname{Sel}^{\bullet,\circ}(E/F_\infty)$ is $\Lambda(\Omega)$-cotorsion. The $\sharp/\flat$-Selmer groups generalise Kim's $\pm/\pm$-Selmer groups [@kimdoublysigned], which were defined under the assumption $a_p=0$. The Selmer groups for $\underline{I}=(1,1)$ and $\underline{I}=(2,2)$ correspond to the $+/+$ and $-/-$ Selmer groups, and these are known to be cotorsion (see [@LeiPal19 Remark 8.5]), giving partial results toward Conjecture [Conjecture 11](#conj:tors){reference-type="ref" reference="conj:tors"}.* **Remark 13**. *Recall that $\mathcal{I}_p$ was the set of tuples $\underline{I}=(I_{\mathfrak p}, I_{{\mathfrak p}^c})$ such that $I_{\mathfrak q}$ is a subset of $\{1,...,g_{\mathfrak q}\}$ for ${\mathfrak q}\in \{{\mathfrak p}, {\mathfrak p}^c\}$ satisfying $|I_{\mathfrak p}|+|I_{{\mathfrak p}^c}|=g_{-}$. Denote by $I_{\mathfrak q}^c$ the complement of the subset $I_{\mathfrak q}$. Then $\underline{I}^c=(I_{\mathfrak p}^c, I_{{\mathfrak p}^c}^c)$ satisfies $|I_{\mathfrak p}^c|+|I_{{\mathfrak p}^c}^c|=g-g_{-}=g_+$. One can also define Coleman maps $\operatorname{Col}_{T^*,I_{\mathfrak q}}^\infty$ and construct the signed Selmer groups $\operatorname{Sel}_{\underline{I}^c}(M/F_\infty)$ for $M$ (for details, see [@Dion2022 Lemma 3.14]). Also note that Conjecture [Conjecture 11](#conj:tors){reference-type="ref" reference="conj:tors"} is expected to hold for $\operatorname{Sel}_{\underline{I}^c}(M/F_\infty)$. 
Under certain conditions, for supersingular abelian varieties of $GL(2)$-type, there is also a functional equation relating the characteristic ideal of the dual of $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ with that of $\operatorname{Sel}_{\underline{I}^c}(M/F_\infty)$ (see [@Dion2022 Theorem A]).* ## Twisted Selmer groups {#sec:twisted} We follow the notation of [@ponsinet discussion after Remark 1.10]. For $s \in \mathbb{Z}$, set $M_s^*=M^* \otimes \chi^s$ where $\chi$ is the cyclotomic character. Note that the cyclotomic character factors through the cyclotomic extension of $F$. We have $M_s^*=M^*$ as a $\operatorname{Gal}(\overline{F}/F_\infty)$-module and hence $H^1(F_\infty,M_s^*)=H^1(F_\infty,M^*) \otimes \chi^s$. For every prime $v$ of $F$, we also have $H^1(F_{\infty,v},M_s^*)=H^1(F_{\infty,v},M^*) \otimes \chi^s$ and $H^0(F_{\infty,v},M_s^*)=0$. At any prime ${\mathfrak q}$ of $F$ dividing $p$, we set $$H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M_s^*)=H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*) \otimes \chi^s.$$ We can now define multi-signed Selmer groups with these twisted local conditions. As in [@ponsinet p. 1643], we also note that $\operatorname{Sel}_{\underline{I}}(M_s^*/F_\infty) \cong \operatorname{Sel}_{\underline{I}}(M^*/F_\infty)\otimes \chi^s$ as $\Lambda(\Omega)$-modules. # Ranks of Iwasawa modules {#sec:ranks} Recall that ${\mathfrak q}\in \{{\mathfrak p},{\mathfrak p}^c\}$, put $H=\operatorname{Gal}(F_\infty/F_\mathrm{cyc})$, and $\Gamma=\Omega/H=\operatorname{Gal}(F_\mathrm{cyc}/F)$. **Proposition 14**. *For ${\mathfrak q}\in \{{\mathfrak p},{\mathfrak p}^c\}$, we have* 1. *$\operatorname{rank}_{\mathbb{Z}_p[[\Gamma]]}H^1_{\mathrm{Iw}}(F_{\mathrm{cyc},{\mathfrak q}},T)=g_{\mathfrak q}$.* 2. *The torsion sub-$\mathbb{Z}_p[[\Gamma]]$-module of $H^1_{\mathrm{Iw}}(F_{\mathrm{cyc},{\mathfrak q}},T)$ is isomorphic to $T^{G_{F_\mathrm{cyc}}}$.* 3. *$\operatorname{rank}_{\mathbb{Z}_p[[\Omega_p]]}H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)=g_{\mathfrak q}$.* *Proof.* (1) and (2) are due to Perrin-Riou (see [@PR95 A.2]). Note that, by [@BL21 p. 23] and the proof of Lemma 2.16 of [@BL21], we have $$H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)\cong \big(N_{L_\infty}(T)^{\psi=1}\big)^\Delta \cong \big(N_{\mathbb{Q}_p}(T)^{\psi=1}\big)^\Delta \widehat{\otimes} S_{L_\infty/\mathbb{Q}_p} \cong H^1_{\mathrm{Iw}}(F_{\mathrm{cyc},{\mathfrak q}},T) \widehat{\otimes} S_{L_\infty/\mathbb{Q}_p},$$ where $\Delta=\operatorname{Gal}(L_\infty(\mu_{p^\infty})/k_\infty)$ is of order $p-1$, $N_{L_\infty}(T)$ and $N_{\mathbb{Q}_p}(T)$ are Wach modules as in *(loc.cit)* and $\psi$ (à la Fontaine) is the left inverse of the Frobenius ${\varphi}$-operator on the respective Wach modules. Finally, note that the Yager module is a free module of rank one over $\Lambda_{\mathbb{Q}_p}(U)$ (cf. section [2.3](#sec Yager){reference-type="ref" reference="sec Yager"}). This proves (3). ◻ **Lemma 15**. 
*The kernel and the image of $\operatorname{Col}_{T,I_{\mathfrak q}}^\infty$ are $\mathbb{Z}_p[[\Omega_p]]$-modules of rank $g_{\mathfrak q}- |I_{\mathfrak q}|$ and $|I_{\mathfrak q}|$ respectively.* *Proof.* By the proof of Lemma 2.16 of [@BL21], $$\operatorname{Im}(\operatorname{Col}_{T,I_{\mathfrak q}}^\infty) \cong \operatorname{Im}(\operatorname{Col}_{T,I_{\mathfrak q}}^\mathrm{cyc}) \widehat{\otimes} S_{L_\infty/\mathbb{Q}_p},$$ and $\operatorname{Im}(\operatorname{Col}_{T,I_{\mathfrak q}}^\mathrm{cyc})$ is of rank $|I_{\mathfrak q}|$ contained in a free $\mathbb{Z}_p[[\Gamma_\mathrm{cyc}]]$-module of finite index (see [@ponsinet Lemma 1.2]). Hence $\operatorname{Im}(\operatorname{Col}_{T,I_{\mathfrak q}}^\infty)$ is of rank $|I_{\mathfrak q}|$ contained in a free $\mathbb{Z}_p[[\Omega_p]]$-module of finite index. ◻ # A Kim-type control theorem and a mini-control theorem {#sec:resultmain} We prove two results: a Kim-type control theorem, generalizing B.D. Kim's theory for elliptic curves, and a corollary, which is a mini-control theorem showing that the finiteness of a Selmer group at the base level implies cotorsion of the same Selmer group in our towers of number fields. The first theorem below (Theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}) generalizes a control theorem of Ponsinet which was over the cyclotomic extension (see [@ponsinet Lemma 2.3]). **Theorem 16**. *For all but finitely many $s\in \mathbb{Z}$, the kernel and the cokernel of the restriction map $$\rho_{\vec{n}}:\operatorname{Sel}_{\underline{I}^c}(M_s/F_{\vec{n}}) \rightarrow \operatorname{Sel}_{\underline{I}^c}(M_s/F_\infty)^{\Omega_{\vec{n}}}$$ are bounded as $\vec{n}$ varies.* *Proof.* By definition, we have the following commutative diagram: $$\label{Selmer_definition} \begin{tikzcd} 0 \arrow[r] & \operatorname{Sel}_{\underline{I}^c}(M_s/F_{\vec{n}}) \arrow[d, "\rho_{\vec{n}}"] \arrow[r] & H^1(F_\Sigma/F_{\vec{n}}, M_s) \arrow[d, "h"] \arrow[r, "\lambda"] & \mathcal{P}_{\Sigma,\underline{I}^c}(M_s/F_{\vec{n}}) \arrow[d, "\Xi_{\vec{n}}"] \\ 0 \arrow[r] & \operatorname{Sel}_{\underline{I}^c}(M_s/F_{\infty})^{\Omega_{\vec{n}}} \arrow[r] & H^1(F_\Sigma/F_{\infty}, M_s)^{\Omega_{\vec{n}}} \arrow[r] & \mathcal{P}_{\Sigma,\underline{I}^c}(M_s/F_\infty)^{\Omega_{\vec{n}}} \end{tikzcd}$$ By the assumption **(Tors.)**, the fact that $\operatorname{Gal}(F_{\infty,{\mathfrak q}}/F_{\mathfrak q})$ is a pro-$p$ group, and the orbit-stabilizer theorem, we have $H^0(F_{\infty,{\mathfrak q}},M_s)=0$, and hence $\ker(h)=0$ and $\operatorname{coker}(h)=0$. Thus, we have that $\ker({\rho_{\vec{n}}})=0$ and $\operatorname{coker}({\rho_{\vec{n}}})=\ker(\Xi_{\vec{n}})\cap\operatorname{Im}(\lambda)$ by the snake lemma. Hence, it suffices to prove the theorem for $\ker(\Xi_{\vec{n}})$, which we now study one prime $v$ at a time. We show that the $v$-component is almost always $0$, and then analyze the $v$-component when it is not. 
To this end, we consider the commutative diagram $$\label{local} \begin{tikzcd} 0 \arrow[r] & H^1_{*}(F_{\vec{n},v},M_s) \arrow[d, "\ell"] \arrow[r] & H^1(F_{\vec{n},v},M_s) \arrow[d, "g"] \arrow[r, twoheadrightarrow] & \frac{H^1(F_{\vec{n},v},M_s)}{H^1_*(F_{\vec{n},v},M_s)} \arrow[d, "p_v"] \\ 0 \arrow[r] & H^1_{*}(F_{\infty,v},M_s)^{\Omega_{\vec{n}}} \arrow[r] & H^1(F_{\infty,v},M_s)^{\Omega_{\vec{n}} }\arrow[r] & \left(\frac{H^1(F_{\infty,v},M_s)}{H^1_*(F_{\infty,v},M_s)} \right)^{\Omega_{\vec{n}}}, \end{tikzcd}$$ where $H^1_{*}(F_{\vec{n},v},M_s)=\begin{cases} H^1_{I_v}(F_{\vec{n},v},M_s) \text{ if } v={\mathfrak q}\in \{{\mathfrak p},{\mathfrak p}^c\}, \text{ and} \\ H^1_{\mathrm{ur}}(F_{\vec{n},v},M_s) \text{ if not.} \end{cases}$ **The case $v={\mathfrak q}$.** When $v|p$, both $\ell$ and $g$ are isomorphisms; $\ell$ is an isomorphism by definition, whereas $g$ is an isomorphism by the inflation-restriction exact sequence and **(Tors.)**. Thus, the snake lemma implies that $\ker(p_v)=0$. **The case of archimedean $v$.** Here, we have $\operatorname{coker}(\ell)=0$ [@perrinriou00b Sect A2.4], and also $\ker(g)=0$, since $v$ splits completely. **The case $v\nmid p$.** Since $\operatorname{coker}(\ell)=0$ (this follows from the hypothesis **(Tors.)** and inflation-restriction), we know that $\ker(g)$ surjects onto $\ker(p_v)$ by the snake lemma. Now $\ker(g)=H^1\big(F_{\infty,v}/F_{\vec{n},v}, M_s^{G_{F_\infty,v}}\big)$. Since $v \nmid p$, $F_{\infty,v}$ is the unique unramified $\mathbb{Z}_p$-extension of $F_v$. Hence $\operatorname{Gal}(F_{\infty,v}/F_{\vec{n},v})$ is topologically generated by a single element $\gamma_n$. For all but finitely many $s\in \mathbb{Z}$, $M_s^{G_{F_{\vec{n},v}}}$ is finite for every $\vec{n}$; using Lemma [Lemma 17](#divisible){reference-type="ref" reference="divisible"} below, this implies that the order of $\ker(g)$ is bounded by that of $M_s^{G_{F_\infty,v}}/(M_s^{G_{F_\infty,v}})_{\mathrm{div}}$.  ◻ **Lemma 17**. *Suppose that $N$ is a $\mathbf{Z}_p$-module with an action of $\Gamma\cong \mathbf{Z}_p$ so that $N^\Gamma$ is finite. Then the order of $H^1(\Gamma, N)$ is bounded by that of $N/N_{\mathrm{div}}$.* *Proof.* Fix a topological generator $\gamma$ of $\Gamma$. Since $\Gamma$ is pro-$p$ cyclic, we know that $H^1(\Gamma,N)\cong N/(\gamma-1)N$. Since $N^\Gamma=\ker(N \xrightarrow{\gamma-1}N )$ is finite, $(\gamma-1)N$ must contain the maximal divisible subgroup $N_{\mathrm{div}}$, so that $H^1(\Gamma,N)$ is a quotient of $N/N_{\mathrm{div}}$. ◻ **Proposition 18**. *Suppose that the signed Selmer group $\operatorname{Sel}_{\underline{I}^{{c}}}(M/F_\infty)$ is $\Lambda(\Omega)$-cotorsion. Then the defining sequence $$0 \rightarrow \operatorname{Sel}_{\underline{I}}(M^*/F_\infty) \rightarrow H^1(F_\Sigma/F_\infty, M^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M^*/F_\infty) \rightarrow 0$$ is exact.* *Proof.* Suppose $\operatorname{Sel}_{\underline{I}^{{c}}}(M/F_\infty)$ is cotorsion as a $\Lambda(\Omega)$-module. Then for all but finitely many $s \in \mathbb{Z}$, $\big(\operatorname{Sel}_{\underline{I}^{{c}}}(M/F_\infty) \otimes \chi^s\big)^{\Omega_{{n}}} \cong \big(\operatorname{Sel}_{\underline{I}^{{c}}}(M_s/F_\infty)\big)^{\Omega_{{n}}}$ is finite for every ${n}$. Therefore, by Theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}, for all but finitely many $s$, $\operatorname{Sel}_{\underline{I}^c}(M_s/F_{{n}})$ is finite for every ${n}$. 
Hence for those $s$ and for all ${n}$, by [@Greenberg Proposition 4.13], we obtain that the cokernel of the map $$f_{{n},s}:H^1(F_\Sigma/F_{{n}}, M_{-s}^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M_{-s}^*/F_{{n}})$$ is the Pontryagin dual of $H^0(F_{{n}},M_s)$ (note that the local components $H^1_{I_{\mathfrak q}}(F_{n,{\mathfrak q}},M^*_{-s})$ and $H^1_{\mathrm{ur}}(F_{n,v},M^*_{-s})$ for $v \nmid p$ of $\mathcal{P}_{\Sigma,\underline{I}}(M_{-s}^*/F_{{n}})$ are divisible groups by [@Dion2022 Lemma 3.16] and [@ponsinet Lemma 1.6]). By hypothesis **(Tors.)**, $H^0(F,M)=0$ and this implies that $H^0(F_\infty,M)=0$. Also $M_s \cong M$ as a $\operatorname{Gal}(\overline{F}/F_\mathrm{cyc})$-module and hence as a $\operatorname{Gal}(\overline{F}/F_\infty)$-module. Therefore, $H^0(F_\infty,M_s)=0$ and hence $H^0(F_{{n}},M_s)=0$ for any ${n}$. This gives that the maps $f_{{n},s}$ are surjective for all ${n}$ and for all but finitely many $s\in \mathbb{Z}$. Passing to the direct limit with respect to the restriction maps, the surjectivity of $f_{{n},s}$ implies the surjectivity of the following map: $$f_{\infty,s}:H^1(F_\Sigma/F_{\infty}, M_{-s}^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M_{-s}^*/F_{\infty}).$$ Twisting the map $f_{\infty,s}$ by $\chi_{|_\Gamma}^{-s}$, we obtain the map $$f_{\infty}:H^1(F_\Sigma/F_{\infty}, M^*) \rightarrow \mathcal{P}_{\Sigma,\underline{I}}(M^*/F_{\infty}),$$ which is hence a surjection. ◻ **Corollary 19**. *If the Bloch-Kato Selmer group $\operatorname{Sel}_{\mathrm{BK}}(M^*/F)$ is finite, then the Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ is $\Lambda(\Omega)$-cotorsion.* *Proof.* By [@ponsinet Lemma 1.11], $\operatorname{Sel}_{\mathrm{BK}}(M^*/F) =\operatorname{Sel}_{\underline{I}}(M^*/F)$. By the control theorem [Theorem 16](#thm:mainKIM){reference-type="ref" reference="thm:mainKIM"}, $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)^\Omega$ is finite and hence by Nakayama's lemma (see [@BalisterHowson]) we obtain that $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ is $\Lambda(\Omega)$-cotorsion. ◻ # Preservation of vanishing of $\mu$-invariants under congruences {#sec: muinvariance} The goal of this section is to deduce the vanishing of the $\mu$-invariant associated to a motive $\mathcal{M}^\prime$ from that of the motive $\mathcal{M}$ in the case that their residual Galois representations are isomorphic. We first remind the reader of the definition of $\mu$-invariants, then state the theorem, before embarking on its proof. ## Some facts about $\mu$-invariants {#sec:mu} We recall Howson's treatment of $\mu$-invariants in [@Howson_02]. Let $G$ be a pro-$p$ $p$-adic Lie group without $p$-torsion. **Definition 20**. *[@Howson_02 equation 33] For a finitely generated $\mathbb{Z}_p[[G]]$-module $N$, its $\mu$-invariant is $$\mu(N)=\sum_{i \geqslant 0}\operatorname{rank}_{\mathbb{F}_p[[G]]}\Big(p^i\big(N(p)\big)/p^{i+1}\big(N(p)\big)\Big),$$ where $N(p)$ is the submodule of $N$ consisting of its elements annihilated by some power of $p$.* Denote by $N[p]$ the $p$-torsion of $N$. **Lemma 21**. *For $N$ as in the above definition, we have* *$$\operatorname{rank}_{\mathbb{Z}_p[[G]]}(N)=\operatorname{rank}_{\mathbb{F}_p[[G]]}(N/pN) - \operatorname{rank}_{\mathbb{F}_p[[G]]}(N[p]).$$ Further, $\mu(N)=0$ is equivalent to $$\operatorname{rank}_{\mathbb{Z}_p[[G]]}(N)=\operatorname{rank}_{\mathbb{F}_p[[G]]}(N/pN).$$* *Proof.* The first statement is [@Howson_02 Cor. 1.10]. For the second statement, first suppose $\mu(N)=0$. 
Then in particular, $$\operatorname{rank}_{\mathbb{F}_p[[G]]}(N(p)/pN(p))=0.$$ But $\operatorname{rank}_{\mathbb{F}_p[[G]]}(N(p)/pN(p))=0$ is equivalent to $\operatorname{rank}_{\mathbb{F}_p[[G]]}(N[p])=0$ by [@Howson_02 equations 42 and 43]. Conversely, suppose $\operatorname{rank}_{\mathbb{F}_p[[G]]}(N[p])=0$. We have $(p^iN)[p]\subseteq N[p],$ so that $\operatorname{rank}_{\mathbb{F}_p[[G]]}\big((p^iN)[p]\big)\leqslant\operatorname{rank}_{\mathbb{F}_p[[G]]}(N[p])=0.$ But we have again by [@Howson_02 equations 42 and 43] that $\operatorname{rank}_{\mathbb{F}_p[[G]]}((p^iN)[p])=0$ is equivalent to $$\operatorname{rank}_{\mathbb{F}_p[[G]]}(p^iN(p)/p^{i+1}N(p))=0.$$ Thus, $\operatorname{rank}_{\mathbb{F}_p[[G]]}(N[p])=0$ implies $\operatorname{rank}_{\mathbb{F}_p[[G]]}(p^iN(p)/p^{i+1}N(p))=0$ for all $i$, which is the same as $\mu(N)=0$. ◻ ## The Theorem {#sec:tt} Let $\mathcal{M}$ and $\mathcal{M}^\prime$ be motives with $G_F$-stable $\mathbb{Z}_p$-lattices $T$ and $T^\prime$ inside the $p$-adic realizations satisfying the hypotheses **(H.-T.), (Cryst.), (Tors.), (Fil.), (Slopes).** In this section, we make the following assumption. **(Cong.)** $T/pT \cong T^\prime/pT^\prime$ as $G_F$-representations. Let $\mathcal{X}:=\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ be the Pontryagin dual of the Selmer group $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ and $\mathcal{X}^\prime$ that of $\operatorname{Sel}_{\underline{I}}(M^{\prime*}/F_\infty)$. Recall that $\Omega\cong\mathbf{Z}_p^2$. **Theorem 22**. *Suppose that $\mathcal{X}$ and $\mathcal{X}^\prime$ have the same $\mathbf{Z}_p[[\Omega]]$-rank $r$. Then $\mu(\mathcal{X})=0$ if and only if $\mu(\mathcal{X^\prime})=0$.* Let us first sketch the proof. For the sake of readability, we let $\operatorname{Sel}(M^*)$ denote $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)$ and $\operatorname{Sel}(M^{\prime*})$ denote $\operatorname{Sel}_{\underline{I}}(M^{\prime*}/F_\infty)$. We introduce auxiliary Selmer groups $\operatorname{Sel}^{\Sigma_0}(M^*[p])=\operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*[p]/F_\infty)$ (and similarly $\operatorname{Sel}^{\Sigma_0}(M^{\prime*}[p])$), where the local condition is changed at certain ramified primes ($\Sigma$ stands for 'set', and the subscript $0$ indicates ramification). The main idea is to set up two exact sequences and compare them via an isomorphism $(I)$ of $\mathbf{Z}_p[[\Omega]]$-modules: $$\label{ideaofproof} \begin{tikzcd} 0 \arrow[r] & \operatorname{Sel}(M^*)[p] \arrow[r] & \operatorname{Sel}^{\Sigma_0}(M^*[p]) \arrow[d, "(I)"] \arrow[r] & \text{(cotorsion $\mathbb{F}_p[[\Omega]]$-module)} \\ 0 \arrow[r] & \operatorname{Sel}(M^{\prime*})[p] \arrow[r] & \operatorname{Sel}^{\Sigma_0}(M^{\prime*}[p]) \arrow[r] & \text{(cotorsion $\mathbb{F}_p[[\Omega]]$-module)} \end{tikzcd}$$ Recall that $\mathcal{X}$ denotes the Pontryagin dual of $\operatorname{Sel}(M^*)$. Similarly, let $\mathcal{Y}$ and $\mathcal{Y^\prime}$ be the Pontryagin duals of $\operatorname{Sel}^{\Sigma_0}(M^*[p])$ and $\operatorname{Sel}^{\Sigma_0}(M^{\prime*}[p])$. Dualizing the first exact sequence gives $$\label{eq:fact2} \operatorname{rank}_{\mathbb{F}_p[[\Omega]]}\big(\mathcal{Y}\big)= \operatorname{rank}_{\mathbb{F}_p[[\Omega]]}\big(\mathcal{X}/p\mathcal{X}\big).$$ Recall we denote by $r$ the $\mathbf{Z}_p[[\Omega]]$-rank of $\mathcal{X}$. 
If $\mu(\mathcal{X})=0$, then by Lemma [Lemma 21](#lemma:imp){reference-type="ref" reference="lemma:imp"}, $$\operatorname{rank}_{\mathbb{F}_p[[\Omega]]}\big(\mathcal{X}/p\mathcal{X}\big)=r$$ and hence by [\[eq:fact2\]](#eq:fact2){reference-type="eqref" reference="eq:fact2"}, $\mathcal{Y}$ has $\mathbb{F}_p[[\Omega]]$-rank $r$ as well. We now use (I) to reverse our steps: by (I), $\mathcal{Y^\prime}$ has $\mathbb{F}_p[[\Omega]]$-rank $r$ as well. Dualizing the second exact sequence results in an analogue of [\[eq:fact2\]](#eq:fact2){reference-type="eqref" reference="eq:fact2"}, which gives us $$\operatorname{rank}_{\mathbb{F}_p[[\Omega]]}\big(\mathcal{X^\prime}/p\mathcal{X^\prime}\big)=r.$$ Since $\mathcal{X^\prime}$ was assumed to have rank $r$, we get $$\operatorname{rank}_{\mathbb{Z}_p[[\Omega]]}\big(\mathcal{X^\prime}\big)=\operatorname{rank}_{\mathbb{F}_p[[\Omega]]}\big(\mathcal{X^\prime}/p\mathcal{X^\prime}\big)=r.$$ Again by Lemma [Lemma 21](#lemma:imp){reference-type="ref" reference="lemma:imp"}, $\mu(\mathcal{X^\prime})=0$. **Corollary 23**. *Suppose that $\mathcal{X}$ and $\mathcal{X}^\prime$ are $\mathbf{Z}_p[[\Omega]]$-torsion. Then $\mu(\mathcal{X})=0$ if and only if $\mu(\mathcal{X^\prime})=0$.* *Proof.* This is the theorem with $r=0$. ◻ ## The auxiliary Selmer group, the isomorphism (I), and the exact sequences ([\[ideaofproof\]](#ideaofproof){reference-type="ref" reference="ideaofproof"}) Recall that what we need to do to make the proof work is to define the auxiliary Selmer group, establish the isomorphism (I), and prove the exactness in the diagram [\[ideaofproof\]](#ideaofproof){reference-type="ref" reference="ideaofproof"}. To this end, we generalize Ponsinet's construction, cf. [@ponsinet Section 3.3]. ### The auxiliary Selmer group Let $\Sigma_0 \subset \Sigma$ contain all the primes of ramification of $M^*$ but neither the primes of $F$ dividing $p$ nor the archimedean places. Let $\Sigma_0^\prime$ be the primes of $F_\infty$ above $\Sigma_0$. Recall that ${\mathfrak q}$ is the unique prime above $p$ in $F_\infty.$ From [@MazurRubinKolyvaginsystems Lemma 3.5.3] and proofs analogous to [@ponsinet Lemma 3.5], we know that $H^1(F_{\infty,{\mathfrak q}}, M^*[p]) \cong H^1(F_{\infty,{\mathfrak q}}, M^*)[p],$ so that $$H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)[p]\subset H^1(F_{\infty,{\mathfrak q}},M^*[p]).$$ We set, just as Ponsinet did, $$H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*[p]):=H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*)[p].$$ Thus, the following local condition is well-defined: $$\mathcal{P}_{\Sigma \backslash \Sigma_0,\underline{I}}(M^*[p]/F_{\infty}):=\prod_{w \in \Sigma^\prime \backslash \Sigma_0^\prime, w \nmid p}H^1(F_{w,\mathrm{ur}}, M^*[p])\times \prod_{{\mathfrak q}\mid p}\frac{\bigoplus_{w \mid {\mathfrak q}}H^1(F_{\infty,w}, M^*[p])}{H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*[p])}.$$ **Definition 24**. *We define the *$\Sigma_0$-non-primitive $\underline{I}$-Selmer group of $M^*[p]$ over $F_\infty$* as $$\operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*[p]/F_\infty):=\ker\Big(H^1(F_\Sigma/F_{\infty}, M^*[p]) \rightarrow \mathcal{P}_{\Sigma \backslash \Sigma_0,\underline{I}}(M^*[p]/F_{\infty})\Big).$$ We also use the abbreviated notation $\operatorname{Sel}^{\Sigma_0}(M^*[p])$.* ### The isomorphism (I) {#(I)} We would like to prove that, under the hypothesis **(Cong.)**, we have an isomorphism of $\mathbb{F}_p[[\Omega]]$-modules $$\operatorname{Sel}^{\Sigma_0}(M^*[p]) \cong \operatorname{Sel}^{\Sigma_0}({M^\prime}^*[p]).$$ 
But **(Cong.)** implies that $M^*[p] \cong {M^\prime}^*[p]$ as $G_F$-representations. Hence $$\begin{aligned} H^1(F_\Sigma/F_\infty, M^*[p])&\cong H^1(F_\Sigma/F_\infty, {M^\prime}^*[p]) \text{ and} \\ H^1(F_{w,\mathrm{ur}}, M^*[p])&\cong H^1(F_{w,\mathrm{ur}}, {M^\prime}^*[p]).\end{aligned}$$ Thus, all that remains is proving that $$H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*[p])\cong H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},{M^\prime}^*[p]).$$ We prove that their duals are isomorphic: the Tate pairing [\[TatePairing\]](#TatePairing){reference-type="eqref" reference="TatePairing"} gives that the Pontryagin dual of $H^1_{I_{\mathfrak q}}(F_{\infty,{\mathfrak q}},M^*[p])$ is $\operatorname{Im}(\operatorname{Col}_{T,I_{\mathfrak q}}^\infty)/p.$ Thus, our assertion follows from the following two lemmas: **Lemma 25**. *Assume **(Cong.)** holds. Then as $\mathbb{Z}_p[[\Omega_p]]$-modules, $H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)/p\cong H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T^\prime)/p.$* **Lemma 26**. *Assume **(Cong.)** holds. If $z\in H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)$ and $z^\prime \in H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T^\prime)$ have the same image under the isomorphism given in the above lemma, then the Coleman maps $\operatorname{Col}_{T,i}^\infty(z)$ and $\operatorname{Col}_{T^\prime,i}^\infty(z^\prime)$ are congruent modulo $p$.* See subsection [6.3.4](#congruencelemmas){reference-type="ref" reference="congruencelemmas"} for their proofs. ### The exact sequences ([\[ideaofproof\]](#ideaofproof){reference-type="ref" reference="ideaofproof"}) We prove exactness of the top sequence, as the second sequence is exact for the same reasons. Recall that we let $\Sigma_0 \subset \Sigma$ contain all the primes of ramification of $M^*$ but neither the primes of $F$ dividing $p$ nor the archimedean places. Let $\Sigma_0^\prime$ be the primes of $F_\infty$ above $\Sigma_0$. We would like to compare the auxiliary Selmer group $\operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*[p]/F_\infty)$ to the ($p$-torsion of the) $\Sigma_0$-non-primitive $\underline{I}$-Selmer group, which we define by $$\operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty)=\ker\Big(H^1(F_\Sigma/F_{\infty}, M^*) \rightarrow \mathcal{P}_{\Sigma \backslash \Sigma_0,\underline{I}}(M^*/F_{\infty})\Big).$$ Comparing their local conditions gives a map $$\theta: \operatorname{Sel}_{\underline{I}}(M^*/F_\infty) \longrightarrow \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty).$$ The map appearing in the exact sequence ([\[ideaofproof\]](#ideaofproof){reference-type="ref" reference="ideaofproof"}) is the $p$-torsion part of the map $\theta$ (which we will denote by $\theta_p$), composed with the isomorphism of the following lemma: **Lemma 27**. *We have the isomorphism $$\label{eq:Selmcong} \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty)[p]\cong \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*[p]/F_\infty).$$* *Proof.* We match the global term and the local conditions of the Selmer groups in question, i.e. this isomorphism reduces to 1. $H^1(F_\Sigma/F_\infty, M^*[p]) \cong H^1(F_\Sigma/F_\infty, M^*)[p],$ 2. $H^1(F_{\infty,{\mathfrak q}}, M^*[p]) \cong H^1(F_{\infty,{\mathfrak q}}, M^*)[p]$, 3. $H^1(F_{w,\mathrm{ur}}, M^*[p]) \cong H^1(F_{w,\mathrm{ur}}, M^*)[p]$ for any non-archimedean prime $w \in \Sigma^\prime \backslash \Sigma_0^\prime$ not dividing $p$ or a prime of ramification of $M^*$, where $F_{w,\mathrm{ur}}$ is the maximal unramified extension of $F_w$. 
But these isomorphisms are induced by the exact sequence $$0 \rightarrow M^*[p] \rightarrow M^* \xrightarrow{p} M^* \rightarrow 0$$ of $G_{F_\infty}$-modules, analogously to [@ponsinet Lemma 3.5], see also [@MazurRubinKolyvaginsystems Lemma 3.5.3]. ◻ **Lemma 28**. *As a $\mathbb{Z}_p[[\Omega]]$-module, the Pontryagin dual of $\operatorname{coker}(\theta)$ is torsion.* *Proof.* By Proposition [Proposition 18](#prop:one){reference-type="ref" reference="prop:one"}, we have the following fundamental diagram. $$\label{fundamental1} \begin{tikzcd} 0 \arrow[r] & \operatorname{Sel}_{\underline{I}}(M^*/F_\infty) \arrow[d, "\theta"] \arrow[r] & H^1(F_\Sigma/F_\infty,M^*)\arrow[d, "="] \arrow[r, "\delta"] & \mathcal{P}_{\Sigma,\underline{I}}(M^*/F_\infty) \arrow[d, "h_{\Sigma_0}"] \\ 0 \arrow[r] & \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty)\arrow[r] & H^1(F_\Sigma/F_\infty,M^*)\arrow[r] & \mathcal{P}_{\Sigma \backslash \Sigma_0,\underline{I}}(M^*/F_{\infty}). \end{tikzcd}$$ By the snake lemma we have an injection $\operatorname{Sel}_{\underline{I}}(M^*/F_\infty) \hookrightarrow \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty)$ with cokernel isomorphic to $\ker(h_{\Sigma_0}) \cap \operatorname{Im}(\delta)$, where $h_{\Sigma_0}$ is the projection map on local conditions. (Note that $\mathcal{P}_{\Sigma \backslash \Sigma_0,\underline{I}}(M^*/F_{\infty})$ has the same local components as $\mathcal{P}_{\Sigma,\underline{I}}(M^*/F_{\infty})$ but with the places in $\Sigma_0$ removed.) Now, $$\ker(h_{\Sigma_0})=\prod_{w \in \Sigma_0^\prime}\frac{H^1(F_{\infty,w}, M^*)}{H^1_{\mathrm{ur}}(F_{\infty,w}, M^*)}.$$ Since $w \in \Sigma_0^\prime$, $w$ is not above $p$ and hence $F_{\infty,w}$ is the unique unramified $\mathbb{Z}_p$-extension of $F_v$ where $v$ is a prime of $F$ below $w$ (see [@LeiSujatha Section 5.2]). In this case $H^1(F_{\infty,w}, M^*)$ is a cotorsion $\mathbb{Z}_p[[\Gamma]]$-module (see [@greenberg89 Proposition 2]) and $H^1_{\mathrm{ur}}(F_{\infty,w}, M^*)$ is trivial (see [@PR95 Section A.2.4]). Hence $\ker(h_{\Sigma_0})$ is a cotorsion $\mathbf{Z}_p[[\Omega]]$-module, and therefore so is $\ker(h_{\Sigma_0}) \cap \operatorname{Im}(\delta)=\operatorname{coker}(\theta)$. Hence the conclusion. ◻ Multiplying the nonzero terms of the following exact sequence by $p$, $$0 \rightarrow \operatorname{Sel}_{\underline{I}}(M^*/F_\infty) \xrightarrow{\theta} \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty) \rightarrow \operatorname{coker}(\theta) \rightarrow 0,$$ and applying the snake lemma to this multiplication by $p$, we get the exact sequence $$0 \rightarrow \operatorname{Sel}_{\underline{I}}(M^*/F_\infty)[p] \xrightarrow{\theta_p} \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*/F_\infty)[p]\rightarrow \operatorname{coker}(\theta)[p] \rightarrow \operatorname{Sel}_{\underline{I}}(M^*/F_\infty)/p.$$ We can quotient out the first nonzero term and its image under $\theta_p$ to see that $\operatorname{coker}(\theta_p)$ injects into $\operatorname{coker}(\theta)[p]$. Thus, the Pontryagin dual of $\operatorname{coker}(\theta_p)$ is a quotient of the dual of $\operatorname{coker}(\theta)[p]$, which is $\mathbb{F}_p[[\Omega]]$-torsion by the previous lemma. Hence the Pontryagin dual of $\operatorname{coker}(\theta_p)$ is also $\mathbb{F}_p[[\Omega]]$-torsion. 
Thus, combining $\theta_p$ with the isomorphism [\[eq:Selmcong\]](#eq:Selmcong){reference-type="eqref" reference="eq:Selmcong"}, we obtain an injection $$\operatorname{Sel}_{\underline{I}}(M^*/F_\infty)[p] \hookrightarrow \operatorname{Sel}_{\underline{I}}^{\Sigma_0}(M^*[p]/F_\infty)$$ with cokernel a cotorsion $\mathbb{F}_p[[\Omega]]$-module; this is exactly what was required. ### Congruence lemmas in Iwasawa algebras {#congruencelemmas} Recall that ${\mathfrak q}$ is the unique prime above $p$ in $F_\infty.$ We complete the outstanding proofs of the last two lemmas of subsection [6.3.2](#(I)){reference-type="ref" reference="(I)"}, which may be of independent interest. **Lemma 29**. *Assume **(Cong.)** holds. Then as $\mathbb{Z}_p[[\Omega_p]]$-modules, $H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)/p\cong H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T^\prime)/p.$* *Proof.* Let $\mathbf{A}_{\mathbb{Q}_p}^+$ be the ring $\mathbb{Z}_p[[\pi]]$ equipped with a semilinear action of the Frobenius $\varphi$, acting as the absolute Frobenius on $\mathbb{Z}_p$ and on the uniformizer $\pi$ as $\varphi(\pi)=(\pi+1)^p-1$, and with a Galois action given by $g(\pi)=(\pi+1)^{\chi(g)}-1$, where $g \in \operatorname{Gal}(\mathbb{Q}_p(\mu_{p^\infty})/\mathbb{Q}_p)$. Then the (cyclotomic) Wach module $N_{\mathbb{Q}_p}(T)$ is equipped with a filtration $$\mathrm{Fil}^i(N_{\mathbb{Q}_p}(T))=\{x \in N_{\mathbb{Q}_p}(T), \varphi(x) \in (\varphi(\pi)/\pi)^i N_{\mathbb{Q}_p}(T)\}.$$ Since the Hodge-Tate weights of $T$ and $T^\prime$ are in $[0,1]$, under **(Cong.)**, from [@berger04 Théorème IV.1.1], we deduce that $$\label{eq:Wach} N_{\mathbb{Q}_p}(T)/pN_{\mathbb{Q}_p}(T) \cong N_{\mathbb{Q}_p}(T^\prime)/pN_{\mathbb{Q}_p}(T^\prime)$$ as $\mathbf{A}_{\mathbb{Q}_p}^+$-modules and the isomorphism is compatible with the filtration, the Galois action and the action of the Frobenius operator $\varphi$. Since $H^1_{\mathrm{Iw}}(F_{{\mathfrak q}}(\mu_{p^\infty}),T) \cong N_{\mathbb{Q}_p}(T)^{\psi=1}$ as Galois modules, we have the congruence of the first Iwasawa cohomology groups $$H^1_{\mathrm{Iw}}(F_{{\mathfrak q}}(\mu_{p^\infty}),T)/p \cong H^1_{\mathrm{Iw}}(F_{{\mathfrak q}}(\mu_{p^\infty}),T^\prime)/p.$$ The proof follows by tensoring with the Yager module (cf. Section [2.3](#sec Yager){reference-type="ref" reference="sec Yager"}) and taking the $\Delta$-invariants of $$H^1_{\mathrm{Iw}}(L_\infty(\mu_{p^\infty}),T)\cong N_{L_\infty}(T)^{\psi=1}\cong N_{\mathbb{Q}_p}(T)^{\psi=1} \widehat{\otimes} S_{L_\infty/\mathbb{Q}_p} \cong H^1_{\mathrm{Iw}}(F_{\mathfrak q}(\mu_{p^\infty}),T) \widehat{\otimes} S_{L_\infty/\mathbb{Q}_p}.$$ ◻ In the last lemma, we use the identification $\mathds{D}_{\mathrm{cris},{\mathfrak q}}(T)=N_{\mathbb{Q}_p}(T)/pN_{\mathbb{Q}_p}(T)$ and fix Hodge-compatible bases (see Section [2.2.1](#sub:HC){reference-type="ref" reference="sub:HC"}) of $\mathds{D}_{\mathrm{cris},{\mathfrak q}}(T)$ and $\mathds{D}_{\mathrm{cris},{\mathfrak q}}(T^\prime)$ such that their images are the same under the isomorphism [\[eq:Wach\]](#eq:Wach){reference-type="eqref" reference="eq:Wach"}. **Lemma 30**. *Assume **(Cong.)** holds. 
If $z\in H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)$ and $z^\prime \in H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T^\prime)$ have the same image under the isomorphism given in Lemma [Lemma 29](#lem: modp){reference-type="ref" reference="lem: modp"}, then the Coleman maps $\operatorname{Col}_{T,i}^\infty(z)$ and $\operatorname{Col}_{T^\prime,i}^\infty(z^\prime)$ are congruent modulo $p$.* *Proof.* Recall that $L_m$ is the unramified extension of $\mathbb{Q}_p$ of degree $p^m$ (the $m$-th layer of the unramified $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$) and $L_{m,cyc}$ is the cyclotomic $\mathbb{Z}_p$-extension of $L_m$. We identify $H^1_{\mathrm{Iw}}(F_{\infty,{\mathfrak q}},T)$ with $H^1_{\mathrm{Iw}}(k_{\infty},T)\cong\varprojlim_{L_m}H^1_{\mathrm{Iw}}(L_{m,cyc},T)$ and note that we have the following commutative diagram. $$\begin{tikzcd} H^1_{\mathrm{Iw}}(k_{\infty},T)/p \arrow[r, "~"] \arrow[d, " \cong"] & H^1_{\mathrm{Iw}}(L_{m,cyc},T)/p \arrow[d, "\cong"] \\ H^1_{\mathrm{Iw}}(k_{\infty},T^\prime)/p \arrow[r] & H^1_{\mathrm{Iw}}(L_{m,cyc},T^\prime)/p \end{tikzcd}$$ The horizontal arrows are the natural projection maps. The left vertical arrow is the isomorphism from Lemma [Lemma 29](#lem: modp){reference-type="ref" reference="lem: modp"}. The right vertical arrow is an isomorphism which follows from [@ponsinet Eq. (18) in Section 3.2]. Let $z = \varprojlim_{L_m} z_m$ where $z_m \in H^1_{\mathrm{Iw}}(L_{m,cyc},T)$ and correspondingly define $z_m^\prime$. Our hypotheses imply that $z_m$ and $z_m^\prime$ have the same image under the isomorphism given by the right vertical arrow of the commutative square above. From [\[two-variable-coleman\]](#two-variable-coleman){reference-type="eqref" reference="two-variable-coleman"}, recall that $$\label{eq:label1} \operatorname{Col}_{T,i}^\infty((z_m))=(y_{L_\infty/\mathbb{Q}_p}\otimes 1) \circ (\varprojlim_{L_m} \operatorname{Col}_{T,L_m,i}(z_m)).$$ By [@ponsinet Proposition 3.4], $\operatorname{Col}_{T,L_m,i}(z_m)$ and $\operatorname{Col}_{T^\prime,L_m,i}(z_m^\prime)$ are congruent modulo $p$. The result follows by taking inverse limits and tensoring with the Yager module using [\[eq:label1\]](#eq:label1){reference-type="eqref" reference="eq:label1"}. ◻ **Remark 31**. *Note that if we had assumed that $\mathcal{X}_{\underline{I}^c}(M/F_\infty)$ is a torsion $\Lambda(\Omega)$-module, then the map $\delta$ would be a surjection (see Proposition [Proposition 18](#prop:one){reference-type="ref" reference="prop:one"}). This raises the natural question of when one can relate the torsionness of the module $\mathcal{X}_{\underline{I}^c}(M/F_\infty)$ with that of $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$. The result is the following.* *If the abelian variety is polarized then $\mathcal{X}_{\underline{I}^c}(M/F_\infty)$ is a torsion $\Lambda(\Omega)$-module if and only if $\mathcal{X}_{\underline{I}}(M^*/F_\infty)$ is a torsion $\Lambda(\Omega)$-module [@Dion2022 Corollary 5.13].* [^1]: The analogue of the assumption $a_p=0$ in the context of supersingular abelian varieties is an assumption on the shape of the matrix of Frobenius (on a basis of the associated Dieudonné module). This shape is a specific block anti-diagonal form (see [@LP20 Section 3.2.1]) -- abelian varieties of $GL(2)$-type provide such examples (see [@LP20 Section 3.3]).
--- address: Athanase Papadopoulos, Université de Strasbourg and CNRS, 7 rue René Descartes, 67084 Strasbourg Cedex, France author: - Athanase Papadopoulos title: | Café conversations:\ A tribute to Norbert A'Campo --- *Abstract.* This is an intrusion in the life and the mathematics of Norbert A'Campo, intended to be a tribute to him and an acknowledgement of his impact on those who know him and his work.\ The final version of this paper appears in the book "Essays in Geometry, Dedicated to Norbert A'Campo\", IRMA Lectures in Mathematics and Theoretical Physics, EMS Press, Berlin, 2023. AMS codes: 01-02, 01-06, 01A70 Keywords: Biography, Norbert A'Campo, mathematics in France. -- "Excuse me, what are you talking about ?\" We have heard this question so many times, from onlookers who listened to us talk for hours in cafés in Basel (Café Euler), Strasbourg (Christian and Café Brant), Vienna (Café Sperl), Paris (Rostand), other small or grand cafés, sometimes railway station cafés, in Mulhouse, Djursholm, Trento, Istanbul, Cagliari, Karlovassi, Mandraki, Barcelona, Beijing, Bangalore, Delhi, Varanasi, and other cities scattered in our Eurasian continent where mathematics drove us. Sometimes, the waiters themselves, if they don't know us yet, ask this same question; they are curious to know what is this gibberish that we have been uttering for hours. Occasionally, we engage in a conversation with them. Usually, Norbert gives the answer: "We are drawing dessins d'enfants\", or: "We are discussing infinity\", or: "We are counting singular points\", or: "We are considering the field with one element\", or: "We are constructing branched coverings\", or: "We are counting pairs of pants\", and other phrases like these which put even more mystery in the eyes of the stunned interlocutor. To a waiter at the café of the Grand Hotel Euler in Basel, Norbert once asked: "Do you know who is the Grand Euler?\" The waiter had no idea. That is how we occasionally made friends at the café. "Mathematics is made for travelling\" says Norbert, a sentence we often repeat to each other. Physical journeys, and also journeys in the world of ideas. When I discuss with him, it is always a journey. It was during those countless conversations at the café that I learned much of what I know in mathematics: spherical geometry, the Riemann--Roch theorem, Serre duality, Dolbeault's lemma, transitional geometries, the importance of the one-element field, Belyi's theorem, slalom curves and polynomials, and many other things. I also learned some details about Norbert's life story and that is what I will talk about in the next few pages. Norbert was born just over 80 years ago in a farm that belonged to his family. His father together with his uncle took it over from their parents. When Norbert was born, the extended family lived there and everybody took part in the manifold farm duties. The farm was situated in Merkelbeek, a small village in the province of Limburg, five kilometers from the German border. The province has been part of the Netherlands since 1815. Today, the former village structure has disappeared due to a reorganization of the region by the central government. A'Campo is a Latin-sounding name. It is conceivable that the origin is Van der Velde. In French it would have been Deschamps, in English Fields, and in German vom Feld. The name was probably latinized around 1850, because of the ruling power: people did not want their name to sound too Dutch, nor too German. 
Norbert's mother's ancestors were Germans who had emigrated to the Netherlands to escape Bismarck. She was also raised in a farm, and after her marriage she identified herself completely with the new farm life. Back when Norbert was a child, in order to buy items that were a little different from the everyday things (shoes, farm equipment, etc.), the A'Campo family used to go to Heerlen, a name which means "brighter\": you can see the sun earlier there. Indeed, Heerlen is situated in the east of the Limburg. It is a small town with a hospital and a veterinarian, merchants of agricultural equipment and people who could repair tractor wheels. The nearest big city was Maastricht. Among the other cities that people from Merkelbeek could visit easily were Aachen, Cologne and Liège. Norbert couldn't stand the Kindergarten. After a few days of trial and refusal, he was allowed to stay at home. The farm was quite big. One has to imagine that there were many children there and wonderful opportunities for them to play together, much more than at the Kindergarten. At the same time, Norbert could observe his father doing his numerous agricultural tasks. It is in this environment where man has a deep relationship with the Earth that his mind was slowly shaped. "*O fortunatos nimium, sua si bona norint, agricolas!*\" wrote Virgil about the charm of the country life. For elementary school, Norbert went to Brunssum, the nearby village. The studies there lasted six years. At the beginning, a young girl came every day to pick him up on her bicycle and take him to school. Later on, he started to go alone, on foot. From age 8 or 9, he was permitted---although he did this rarely---to go to school on horseback. Among the several horses in the farm, the one he was allowed to ride was used for light duties such as small transport: riding or pulling an open carriage. At times, the family used such a carriage to visit the grandmother who lived in another village, about 20 km away. The other horses were used for hard work (ploughing, etc.). Starting in the fifties, working horses were gradually replaced by tractors. At school, when Norbert arrived on horseback, he used to leave the horse in a pasture that belonged to a friend of his father, located in front of the school. He liked to keep an eye on it while he was in class, and for that purpose he would sit by the window. Although it was rare that he went at school on horseback, his school records of those times read: "Does not always pay attention in class,\" or "Often looks out the window\". Norbert remembers that in the mid 1950s, the Russians, in their will to develop agriculture, came to the Netherlands to buy cows. His father sold a number of them. To take care of them during the weeks-long travel on the train was not easy, and Norbert asked his father to allow him to accompany the cows aboard the same train to help with this task. His mother did not agree. At the school in Brunssum, Norbert used to take additional lessons with a private teacher---his father wanted him to learn High Dutch---, but actually the lessons ended up being math lessons because the teacher realized that Norbert liked it. He introduced him very early to calculus with the variables $x$ and $ij$ (in the Dutch system, the second variable is usually not called $y$, but $ij$). After the six years of elementary school, Norbert was put as a boarder at the secondary school in Weert, 60 km from Merkelbeek. This distance was quite large in those days. 
The duration of the studies at the secondary school was also six years. It was an excellent classical high school with several teachers who had completed advanced studies. The Dutch teacher spoke several languages. He had a theory about language development and he was particularly interested in Norbert's native Limburg language. At the same time, there was a very solid language program in that school: Latin, Greek, German, French, English, and of course Dutch. There was very little mathematics and science. Norbert learned at school how to debate on something. Indeed, in the Dutch school, pupils learned the art of rhetoric, that is, how to defend a thesis, even if you do not agree with it. The pupils were taught how to maintain the upper position in a discussion. This was part of the youth education. The young Norbert thought he would become a farmer later on. He used to watch his father working, and he liked that work. Had he made that choice, a farm was ready for him. The entire surrounding, that is, the extended family together with the employed people, was centered on the farm work. In that society, it was known that eventually the young become stronger than the older ones, and then the farm starts to belong to them. Some farmers suffer from this, and sometimes fall into depression. Today, Norbert's brother has taken over the farm. All his life, Norbert felt certain that he could always return to the farm as an alternative to academic life. Many things have changed in the region since that time, including the language, which became a rarely spoken dialect. But very recently there has been a will to recover the old language there. In the past, it was said that this language sounded too much like German. Today, the local government is trying to introduce it again in primary education. Norbert told me several times that there are similarities between this dialect and the Basel dialect which is spoken in the village he lives in presently. He has a theory that says that there are common features to all the languages spoken by people living along the Rhine. From his years at the farm, Norbert has kept the sense of country life. He is the opposite of a city dweller. Later, in the Paris region, when he got a job at Jussieu (University of Paris VII), then Orsay (University of Paris XI), he lived in Chantecoq, a name that fits well with the place. In Basel, the A'Campo family lives in Witterswil, a small village in the countryside. It was somewhat by the luck of an encounter that Norbert decided to study mathematics. The story starts with the first known relative who actually went to university, an uncle of Norbert's father, called Emile. The latter, in his youth, wanted to study mathematics, but his parents did not accept. They agreed however that he would study biology, which seemed to them to be a relatively useful field. Thus, Uncle Emile entered university and studied biology. After that, he went into business and became a very successful entrepreneur. At age 45, he began taking advanced mathematics courses, just for pleasure. He attended Emile Borel's lessons in Paris. When he went to Paris, he used to stay at the George V. This was unusual for a mathematics student! Uncle Emile visited the A'Campo family from time to time, and he used to go there with his car. At that time, very few people they knew had a car; in the whole village there were only two cars. Norbert remembers that in elementary school, children used to enjoy counting the number of vehicles that passed by. 
At the end of the day, there were five or so. Norbert's family didn't have a car; they used the horse. A car, in those times, was linked to a function: people rented one, with driver, only when they had to go somewhere. If the trip was to last three days, the driver would charge a reduced price. Norbert's father bought his first car around 1956-57. At secondary school, Norbert liked physics and initially he thought that later he would study physics at university. Uncle Emile always spoke of the profession of mathematician as an extraordinary profession. He had told him once: "If you want to enter university, I have a friend who is a mathematician who can advise you.\" This friend was a well-known professor of mathematics, Jaap Seidel, teaching in Eindhoven. Norbert and his uncle went to see him. Seidel said to Norbert: "You want to do physics, that's fine, but you should know that in this field there is too much of a hierarchy. To do research in physics, you need a laboratory and a lot of money. If you become a physicist, it is only at the age of 50 that you can become autonomous. In mathematics it is not the same; you are free and independent since the beginning.\" He added: "If you decide to become a mathematician, the best place to study math is Utrecht.\" Uncle Emile also told Norbert about his mathematical dream, namely to find an explanation of how living cells, or more generally life, can exist despite the growth of entropy, by finding an extra term to insert in the Boltzmann equation which (for a limited time) can slow down or even reverse the growth of entropy. It was under the influence of this uncle that Norbert turned to mathematics. His father was always very supportive and he encouraged him in that direction. Thus, Norbert enrolled in mathematics at the University of Utrecht. Among the professors who made an impression on him there, I heard him mentioning several times the names Freudenthal, van der Blij and de Iongh. Norbert remembers Freudenthal's first lecture: "Consider a plane, such as the blackboard plane, divided into two parts by a vertical line, where the straight lines change their direction when crossing this separating line, such as rays of light entering water.\" Freudenthal's suggestion was to study the geometry of this kind of plane with a new concept of line, talking about axioms like the existence of a unique line through any given pair of points, etc. Norbert also remembers the proof that Freudenthal gave them of the fact that a set cannot be in one-to-one correspondence with the set of its subsets. I didn't ask him if he remembered other details, but I can imagine that Freudenthal took this opportunity to tell them about the continuum hypothesis and the important developments it led to. Hans Freudenthal had moved to the Netherlands after fleeing Nazi Germany. He had been an assistant to Brouwer in Amsterdam. As early as the first year, he explained to his students how to do math. He advised them to try to understand the lectures without taking notes. At the end of his lectures, he used to ask the students if they found it complicated. He would also tell them: "Next time, one of you will try to explain to the whole class what I taught you today.\" For him, that was the way to make progress in math: to explain without looking at notes. Usually no student could do it right the first time. They would start over. Norbert kept these principles alive. I have seen him attending conferences, and giving lectures, hundreds of times, over several decades. 
I never saw him taking notes or looking at notes. The amount of knowledge that Freudenthal gave to his students was minimal but profound. A specialist in topology, he was no less a specialist in history of mathematics and more generally in history of science. In his teaching, he always tried to relate mathematical problems to those of everyday life, a point of view that Norbert always adopted later on. Norbert also remembers a very good analysis course by Frederik van der Blij. The latter, like Freudenthal, gave him a taste for history of mathematics and the connections between mathematics and everyday life. With van der Blij, the atmosphere was comparable to that with Freudenthal. He used to explain to them profound notions by doing simple things. For instance: Consider the sine formula given by its power series expansion. Try to find where the zeros are. Try to prove that there is a smallest positive zero; that would be $\pi$. From de J. J. de Iongh, Norbert remembers his seminars on set theory and Gödel's constructions. At the University of Utrecht, people sometimes talked about French mathematics. Everybody knew that if you wanted to become a mathematician, it was advisable to go to France, a country where mathematics was highly valued. Some weekends, the mathematics assistants at the university of Utrecht used to come back very tired from Paris, but they all used to say: it was fantastic; they were coming back from the Bourbaki seminar. Norbert's father had a friend, an old classmate, who was living in Nı̂mes. This friend had joined the resistance in France during the Second World War. After the war he remained in France and became a construction equipment dealer in Nı̂mes. One of Norbert's father's dreams was to see this friend again. Eventually, he went to visit him. At his return from France, he gave a very picturesque description of the countryside in Nı̂mes. Norbert, who at that time was a student in Utrecht, was fascinated. He inquired about Nı̂mes. He found out that there was no university there, but that there was one in Montpellier, 60 km away from Nı̂mes. He wrote to the university of Montpellier, asking whether he could continue his mathematics education there. The answer came quickly, a very gentle letter saying that this was possible. In those days, it was not complicated to be accepted at a French university. This is how Norbert became a student at the University of Montpellier. Norbert arrived in Montpellier towards the end of 1961. The 1961-62 academic year had already begun, and he enrolled there in the mid of that academic year, in January 1962. Since he had already completed two years in Utrecht, he was admitted directky to the third year. In Montpellier, several circumstances played in his favor. The first one took place during the second year of his studies there. The teaching assistant of the course on holomorphic functions fell ill. In the meanwhile, Norbert had done all the exercises. The course coordinator, who knew that, suggested that Norbert replace the assistant. The job consisted in helping the students with the exercises. By doing this, Norbert went more profoundly in the subject of holomorphic functions. At the same time, this allowed him to earn some money. Another event marked Norbert during the same period. At the end of the autumn of 1962, about a year after his arrival in Montpellier, he went on a grape harvesting trip to Vendargues, a village situated about 10 km from Montpellier. 
For him, it was also a way of returning to the soil, of helping to collect from it the gifts it offers to man. The winegrower for whom he was working asked him what he was doing in life. Norbert responded that he was studying mathematics. The winegrower then asked him if he knew Grothendieck. Norbert said no, he did not know him; he had never heard that name. It was only about two years later that Norbert heard the name again, when Jean-Pierre Lafon, one of his teachers in Montpellier, asked him to read Grothendieck's Tohoku paper, *On some points of homological algebra*. Norbert then remembered the winegrower who had pronounced this Dutch-sounding name. He was puzzled and he returned to Vendargues, where he asked the peasant how he came to know Grothendieck. The winegrower answered: "Everyone here knows Grothendieck. He spent his childhood with his mother in a house in Meyrargues, at a short distance from Vendargues.\" When the winegrower saw that Norbert was very much interested, he took him by tractor to show him the house. Grothendieck had completed high school and university in Montpellier. At the time we are talking about, he was making his career in Paris. Later, he returned again to Montpellier. Norbert still remembers the name of the winegrower who first told him about Grothendieck, Jean-Henri Teissier. He also remembers his wife Annette, their son Jean-Louis and their daughter, Yvette. He returned later to Vendargues. He wanted to see Grothendieck's house in Meyrargues again, but he could not find his way. When he went there for the first time with Teissier, they were chatting together on the tractor, and he did not notice the way. In those times there were no paved roads and the tractor went through the fields. Today there is a national road there, and even the highway is close by. It is still difficult for Norbert to find his way around, because Vendargues has completely changed. Norbert returned several times to visit Teissier, with whom he became friends. Teissier is no longer alive. Later, with his wife Annette (the same name), Norbert visited the farm again. They talked with Teissier's wife and their son. The latter studied geology in Bordeaux. Looking through my old emails, I find the following, from Norbert, dated May 23, 2016: "Dear Athanase, I came back from Montpellier. I went to Vendargues and Meyrargues. Grothendieck's footsteps are now very hidden. Meyrargues did not change. But Vendargues became a little city. I bought several bottles of wine from Meyrargues.\" In the 1970s, after Grothendieck withdrew from the Parisian world of mathematics and returned to live in a commune near Montpellier, he started with some students at the University of Montpellier a program on Thurston's theory of surfaces, in which he involved Norbert. Grothendieck and Norbert shared the same worries about the preservation of nature and of the animal species---including humans. Grothendieck had decided to make the return to nature a political commitment, but that is another story. My friend and collaborator Stelios Negrepontis, who was teaching at McGill at that time, went to Nice for the 1970 ICM. He remembers seeing Grothendieck talking there about *Survivre*, the antimilitarist and ecologist movement he had founded, together with Claude Chevalley and Pierre Samuel, a couple of years before. 
Stelios, who was in awe of Grothendieck, told me that he was surprised by the somewhat insolent manner in which young French mathematicians were addressing him, not at all convinced by the value of his Survivre action. Let us return to Thurston's theory. Grothendieck's program included the classification, up to isotopy and mapping class group action, of objects which could be systems of curves, subsurfaces and mixtures of them. A group appeared, the cartographic group, which is related to dessins d'enfants and to other classes of cell decompositions of a surface. There is also a universal cartographic group. In working on this topic, Grothendieck developed a sort of combinatorial Teichmüller theory. One motivation for his interest in this subject was the Nielsen realization problem. This was before Steve Kerckhoff solved the problem. The solution came amid this Montpellier activity, and Grothendieck asked Norbert to explain to him and his students Kerckhoff's work. At the same time, he asked him to be a member of the jury for the doctoral dissertation of his student Yves Ladegaillerie. Later, Norbert became the formal advisor of another student of Grothendieck, Pierre Damphousse. The latter's thesis defense took place at Orsay, in June 1981. Norbert is not particularly talented at administrative work, but Grothendieck, at that time, was even less so. I met Damphousse at the 2010 Clay conference celebrating the proof of the Poincaré conjecture. It was at Thurston's talk, which was held in the magnificent lecture hall of the Oceanographic Institute. I was there with Norbert, who introduced me to Damphousse. The latter died unexpectedly shortly after, at a relatively young age. On several occasions, I asked Norbert which mathematicians influenced him. I heard the name Marcel Lefranc, an excellent mathematician, he said. Norbert followed his lectures on homology. Thanks to these lectures, he learned the Lefschetz fixed point formula. He also learned there Sperner's lemma, which gives an alternative proof of Brouwer's fixed point theorem, without using the notion of degree. In the same period, Norbert purchased Spanier's book to learn more topology. But it was through Lefranc's course that he understood the idea of homology. Norbert also told me he learned a lot from Roger Cuppens, who was the teaching assistant of the probability course. He remembers that the latter explained to him the content of his thesis on infinitely divisible laws. Later, Cuppens became a professor in Toulouse. At the time when Norbert was studying in Montpellier, there was a very dense mathematical activity there, whose most important part---according to Norbert---took place at a café, "Chez Jules\", in front of the faculty of mathematics, which was still located on rue de l'Université, in front of the main university building. Today, the Faculty of Sciences of the University of Montpellier, including the mathematics department, is situated outside the historical center; only the administration remains in the original building. Among the other professors in Montpellier to whom Norbert owes part of his mathematical education, he mentioned to me several times the name of André Martineau, who taught holomorphic functions in several variables. Martineau liked to explain to his students works of other mathematicians, and Norbert learned from him a lot of mathematics. Sometimes Martineau invited the students to his home and continued to talk there while he looked after his children. 
His wife was a literature teacher at the University of Montpellier. Norbert learned from Martineau the Levi problem, domains of holomorphy and the Cartan--Oka theorem. Martineau did a lot of mathematics with the students at the Café Jules. Norbert also mentioned Lafon, who had obtained an analogue of the Weierstrass preparation theorem for certain subrings of the ring of convergent power series. Lafon then asked whether such a theorem remains valid for a certain kind of larger ring containing convergent power series, namely, a ring of power series satisfying inequalities on the coefficients such as Gevrey classes. One day Norbert met Lafon at the beach and the latter told him about this question. Soon after, Norbert came back with a proof of the fact that the answer to this question is negative. Lafon told him then: "You should write this proof and send it to Malgrange". This is how Norbert got in touch with Malgrange. He wrote down his result and he asked Martineau whether this could give him a doctorate. Martineau told him: "Leave it now in a drawer, and wait for something better.\" A posteriori, Norbert thinks it was good advice not to do a PhD too early. At the beginning of the academic year 1967--68, Norbert was hired at the University of Poitiers, as an assistant, working in the group of Pierre Dolbeault. By then, he had a car and he used it to move from Montpellier to Poitiers. In the meantime, he learned about the existence of a conference on foliations at Mont Aigoual, a place on the way between Montpellier (in the South East of France) and Poitiers (in the South West). Mont Aigoual is an elevation, situated in the Massif Central, a region in the Central South of France, consisting of mountains and plateaus. It is also a resort place, with snow and skiing. Norbert likes snow. I remember a scene. I was coming back with him, in his car, from Italy to France and Switzerland. It was in March 2016; we were heading towards Strasbourg, where he proposed to drop me off, before going to his home in Basel. We had spent two weeks at the Mathematical Center in Trento, with Sumio Yamada, working on a project which we have not finished to this day. On our way back, we took the pass called the Julier Pass, which had just opened, that year, after the period of heavy snow. When we arrived up there, Norbert stopped the car, went out and walked on the snow, just a few minutes. He touched the snow with his hands. I stayed in the car, watching him. Back to 1967-68. During his trip from Montpellier to Poitiers, Norbert stopped at Mont Aigoual. He heard there a talk by Harold Rosenberg, in which the latter mentioned the problem of foliating the 5-sphere. For Norbert, foliations were a new subject. Rosenberg explained the construction of Reeb's foliation of the 3-sphere and he said that the existence of a foliation on the 5-sphere was an open question. Soon after, Norbert constructed a foliation of the 5-sphere. He went to Rosenberg, who was teaching at Orsay, and he showed him his foliation. Rosenberg said that this was good. Norbert asked him whether that could constitute a PhD thesis, and Rosenberg said yes. At about the same time, Norbert proved that any simply connected 5-dimensional manifold whose second Stiefel--Whitney class is zero carries a codimension-1 foliation. There was a lot of paperwork to do for the thesis, and this took some time. In the meantime, Norbert also proved his monodromy theorem, which became part of the thesis. 
Partly thanks to Milnor's book on isolated hypersurface singularities, monodromy had become an interesting topic for topologists. An important result of Grothendieck states that the homological eigenvalues of the monodromy are all roots of unity and the length of the Jordan blocks is bounded by the dimension. Brieskorn conjectured that the monodromy has no Jordan blocks and hence is of finite order. Norbert constructed the first example of an isolated singularity with infinite homological local monodromy. This gave a counterexample to Brieskorn's conjecture and it became almost immediately an ingredient for Deligne's proof of the Weil conjecture. Directly afterwards, Malgrange produced similar examples in higher dimensions with Jordan blocks of the maximal possible length. Today, Norbert is still working on monodromy. In Poitiers, like in Montpellier, Norbert learned a lot of mathematics. At that time André Revuz was the head of the mathematics department there. Remarkably, he encouraged all the young members of the department working on their theses to travel to any conference or seminar they liked. He used to tell them that the department would pay their expenses provided that, when they came back, they gave a seminar talk on what they had heard. Those talks were usually very interesting, especially because Revuz was always asking for examples. Pierre Dolbeault, who was also in Poitiers, was a specialist in complex analysis in several variables. He had obtained his doctorate with Henri Cartan. He used to organize a seminar. One day, he was looking for someone who could give a talk. Joseph Le Potier, who had just arrived from Rennes and who was still unknown, proposed to give a talk. He ended up talking for the whole semester. He explained the Atiyah--Singer theorem with many of its applications. This was in 1968. This was the atmosphere in Poitiers at the time when Norbert was trained there. A few years later, Le Potier, with Dolbeault as advisor, presented a Doctorat d'État in which he proved a conjecture of Griffiths. After Poitiers, he obtained, like Revuz, a position at Orsay and then at Jussieu. At Mont Aigoual, Norbert had another favorable experience. Bill Browder was there and did not go to any of the talks. He was spending his time walking around and visiting the area, and he was happy to find someone like Norbert who would do the same. They used to walk together, with Browder telling math to Norbert. This is how Norbert learned about the $h$-cobordism theorem. He came back to Poitiers and recounted this to Dolbeault, who said that this was interesting, and he asked Norbert to give a course on this subject. Norbert started a one-semester course on a theory that he had heard for the first time a week before. This is how he learned $h$-cobordism theory. Among the students at that course were Le Potier, Cathelineau, Poly, and many others. From Poitiers, Norbert attended seminars in other cities. He went regularly to Paris. On the train, he used to meet people going, like him, from Bordeaux to Paris to attend mathematical seminars. He liked to go to the Institut Henri Poincaré (IHP) to listen to the talks of the number theory seminar organized by Delange, Poitou and Pisot. In 1967/68 and 1968/69 he gave several talks there, on analogues of Malgrange's differentiable preparation theorem in the setting of ultrametric valued fields of arbitrary characteristic. Krasner also had a seminar at IHP. 
The café *La Contrescarpe*, a few hundred meters from IHP, was a meeting place, with Rosenberg and others going there regularly. At the beginning of the seventies, the discussions following the regular seminars and colloquia in Paris were very lively and were often continued afterwards at some mathematicians' homes, several of which were situated in upscale areas of Paris. Alain Chenciner lived in a studio apartment on Île Saint Louis, arguably the prettiest part of the center of Paris. Sandy Blank lived right next door, Georges Pompidou as well. Together with other mathematicians such as Laudenbach, Norbert would spend hours there discussing mathematics. After his first meeting with Browder, and after he became familiar with $h$-cobordism theory, Norbert wanted to learn more topology. Malgrange told him that for that he should go to Orsay. The University of Paris-Sud at Orsay was newly founded, with an "Équipe de topologie\" (topology team) working around Cerf, who was Henri Cartan's student. Some North Americans, including Siebenmann and Rosenberg, were part of the team. Melvin Rothenberg was an invited professor. Norbert told me once that the latter was a very good lecturer. The young mathematicians in the Orsay team included Vogel, Latour, Barge, Lannes, Laudenbach and some others. Young researchers from Dijon working on foliations (Joubert, Moussu, etc.) also participated in the Orsay topology seminar. Ibish used to come quite regularly from Nantes. Eventually, in Dijon and Nantes, they hired young people who were trained at Orsay, and gradually these two cities also became centers for topology. Lafon, who had moved to Toulouse, told Norbert that there was a conference there. The reader might remember that in those times there were not as many conferences as there are today. Norbert went to that conference. Soon after the lectures started, someone said that there were interesting things happening in the building next door. They all went there. There were students, explaining their claims and demands. This was May 1968. In 1968-69, Norbert followed Thom's seminar on dynamics in Bures-sur-Yvette. Smale, Shub, Peixoto and others participated in that seminar. Norbert had already met Thom in Montpellier. He had attended there a talk by him in 1967. He remembers that the latter had asked a question about the typical patterns caused by frost on windows. Norbert told me that during the same period, a change took place in the agricultural world. Before 1968, there were no big tractors on French farms, and a substantial part of the work was done by hand. Farmers needed a certain number of workers, and these workers were poorly paid. The year 1968 saw the appearance of big agricultural machines, and Norbert witnessed this transformation. Each year, he used to go to the *Salon international de l'Agriculture et de la machine agricole*, which was held in Paris, Porte de Versailles. He wanted to see the new agricultural machines. His father was particularly interested in how machines work, and Norbert learnt this curiosity from him. In 1969, he saw there for the first time big tractors. Today, on French farms, there are only big tractors, and very few people working. Back to Poitiers. Pierre Dolbeault, like Norbert, came from a family of farmers. In 1969, they went together to a conference on foliations in Oberwolfach. This was after Norbert tried to explain to Dolbeault his construction of a foliation of the five-dimensional sphere. 
Dolbeault was not familiar with the subject and he told Norbert: "There is a conference on foliations in Oberwolfach, we can go together.\" They used Norbert's car. During this trip, Norbert saw Dolbeault's parents' farm, which is located in central France, halfway between Poitiers and Oberwolfach. They spent a night there. The car needed some repair and Norbert repaired it there. On the farm, he could find everything that he needed for the repair. It was the first time that Norbert visited Oberwolfach. The lectures were still held in the old building. At that conference, he met for the first time Reinhold Baer, Blaine Lawson, Robert Lutz, Robert Roussarie, Claude Godbillon, Jean Martinet, Georges Reeb, Jacques Vey, André Haefliger and other mathematicians. Norbert explained his construction of a foliation of $S^5$, and Haefliger was very much interested. He asked him to explain to him all the details. Norbert tells me that Haefliger read very carefully what he had written, discussing every word, every comma. Between Norbert and Haefliger, it was the beginning of a lifelong friendship. It was also the starting point of Norbert's relationship with Switzerland. Norbert obtained his doctorate at the University of Paris-Sud, in Orsay, in 1972. The title of the dissertation was *Feuilletages de variétés et monodromie des singularités* (Foliations of manifolds and monodromy of singularities). At that time, the Orsay department was very young. It had been founded in 1965, first as part of the University of Paris, integrated in a project to decentralize this university. In 1971 it had become a department of the Faculté des Sciences d'Orsay, in the newly founded Université de Paris-Sud. The creation of this university was an important step in the ongoing vast reorganization of Parisian universities, whose goal was to meet the demand for a transformation of the French higher educational system. Several mathematicians, including Henri Cartan, Hubert Delange and Georges Poitou moved from Paris to Orsay. Claude Néron, Jean Cerf, Adrien Douady, Michel Raynaud, Michel Demazure, Larry Siebenmann, Yves Meyer and Valentin Poénaru were also hired at Orsay. 1972 is also the year when Thurston obtained his PhD, whose subject was also foliations. Thurston's dissertation was the starting point of a series of papers in which he solved all the important existing problems in the subject. The same year, Thurston visited Orsay. Norbert remembers the circumstances of that visit. An invitation was made to Morris Hirsch, by Rosenberg and Siebenmann, to visit Orsay. The paperwork was done, but eventually Hirsch declined, and the reason he gave was that he had a very talented student of whom he had to take care. This student was Thurston. Rosenberg and Siebenmann decided then to invite the student; this is how Thurston arrived at Orsay. He lectured there on his version of what became known later as the $h$-principle for foliations of codimension greater than 1. He also explained his result stating that an arbitrary field of 2-planes on a manifold of dimension at least four is homotopic to a field tangent to a foliation. Several young mathematicians working on foliations attended his lectures. During the same visit, Thurston went to Dijon and Plans-sur-Bex, and a large portion of these trips was done in Norbert's car. The latter remembers that at that time, Thurston was already thinking about hyperbolic geometry in dimension two. 
He asked him how he came to know about this subject, and Thurston answered that it was his father who first told him about it. At that time, Norbert had a position of "Maître de conférences provisoire\" at Orsay. Before that, he had taught at Jussieu, also as "Maître de conférences provisoire,\" for one year, 1970-71. Norbert's first visit to the United States took place in 1971, when Sandy Blank invited him to Boston. On that occasion, Blank arranged Norbert's first meeting with Dennis Sullivan. He had told Norbert in advance that on the day of his arrival he would introduce him to Sullivan. Norbert had read some of the latter's works while he was a student in Poitiers. In fact, part of his construction of a foliation of the 5-sphere was inspired by Sullivan's work; in particular he had read his preprint on synthetic homotopy, which contains a special decomposition of the 5-sphere. This decomposition is not as good as that of the 3-sphere which is used in the construction of the Reeb foliation, but with a little bit of thinking it became the starting point for the construction of Norbert's foliation of $S^5$. Only a slight modification was needed. Sandy Blank organized an outing with Norbert and Dennis to attend a soccer or American football game (Norbert does not remember which of the two). It was the last round of the American competition. The three of them went to the stadium. When they arrived, Norbert and Dennis had already started talking math. They immediately moved to the basement of the stadium, where they continued their discussion. Suddenly they realized that everybody in the stadium was leaving. The game was over, and they hadn't seen anything of it. They had forgotten about the game and why they came there. This is why Norbert does not know whether it was soccer or American football: he did not see anything of the game. During his 1971 visit to the US, Norbert attended a course that Sullivan gave in Boston. When he returned to Orsay, he told people about Sullivan and his course. Shortly after, there was an opportunity to invite a mathematician as a one-year visiting professor at Orsay, and they decided it would be Sullivan. They needed to fill in some administrative documents, and they had only one day for that. Cerf came to Norbert and asked for help in filling in the documents. Like others in Orsay, he considered that Norbert was close enough to Sullivan to know what to write on that form. They had to specify where Sullivan was born and similar things. One has to remember that in those days email did not exist, and that it was not easy to communicate rapidly with someone in the United States. Cerf thought it was important to fill in the form correctly whereas Norbert was not so concerned about such details. To simplify the matter, he told Cerf that Dennis was born in Dallas. In fact, Norbert just thought that he was born in Texas, and the only city in Texas he could remember was Dallas. Thus, on Sullivan's official documents at Orsay it is written that he was born in Dallas. In fact, Sullivan was not born in Texas, but in Michigan. I asked Sullivan what he remembers about this hiring process at Orsay. He wrote: "Norbert had to do quickly. He made up a fictitious address in Dallas, Texas, where JFK was assassinated ten years before. The first day at work at Orsay, Norbert introduced me to Michel Herman, François Laudenbach and Harold Rosenberg (at the Orsay pool in the latter case). 
His family and mine spent several 'promenade du dimanche après le repas' days together. His daughter and mine later became friends as adults.\" Norbert did not like the division of the Orsay mathematics department into "Équipes\". He felt that this compartmentalized people, and that the seminars had become more specialized. He, personally, did not consider himself more of a topologist than a geometer or algebraist. He remembers Douady saying: "I don't mind dividing the department into teams, but then I want to be part of every team\". Norbert gave his first course at Orsay in 1973. In the same year, he made his first trip to Japan, where he was invited to the conference "Manifolds" in Tokyo. Thom was also there, as a guest of honor. Norbert remembers that Thom was very unhappy because he was placed in a special hotel, a very noble one, but isolated and far from all the other participants. During that conference, on one afternoon, Ichiro Tamura presented his students to Norbert and allowed them to ask questions to him, one by one. Among these students was Mutsuo Oka who soon after (in the summer of the same year) went to France to work with Norbert as a PhD supervisor. Oka obtained his doctorate at Orsay in 1975. He was the first PhD student of Norbert and a good friend ever since. After having defended his thesis at Orsay, Norbert was hired Senior Researcher at CNRS. He had an office at Orsay and another one at IHÉS (Bures). But he was uncomfortable with such a position. At CNRS, one is supposed to be a full-time researcher, with no teaching duties. Norbert always liked the combination of research and teaching. He stayed only one year at CNRS. The year after (in 1974), he was appointed professor at Jussieu. In the same year, he was an invited lecturer at the Vancouver ICM. He gave there a talk on the monodromy group of the unfolding of isolated singularities of planar curves. The topic had been the subject of an intensive activity since the beginning of the 1960s, with works of Hirzebruch, Arnold, Thom, Brieskorn, Tjurina, Milnor and others. The participants of the ICM were staying on the campus of the university located next to the sea overlooking a very beautiful beach. One can imagine Norbert, after the lectures, spending the whole evening and part of the night talking mathematics, staring at the ocean and watching the rolling waves. He remembers that one evening, on that very beach, Douady organized a méchoui---a whole lamb grilled on a spit---for a large group of mathematicians. Norbert taught at Jussieu for 3 years. In 1977, he exchanged his position at Jussieu with that of Rosenberg, who was teaching at Orsay. Rosenberg used to live in Paris and he was happy with this arrangement. In this way he was not obliged anymore to go to Orsay several days a week. Symetrically, Norbert was happier in Orsay. It was the countryside. Furthermore, he preferred to be in Orsay because of the asbestos problem in Jussieu. Starting in 1974, people were talking about this problem there. Rosenberg said that this asbestos problem did not bother him. The famous Orsay seminar on Thurston's works on surfaces took place during the academic year 1976-77. Norbert was not present; he spent that year in Lausanne. But he used to return to Paris from time to time, and Laudenbach and others told him regularly what was going on at the seminar. Another occurrence at Orsay, involving Thurston, took place at that time. 
People learned about the latter's work on the topology of 3-manifolds that uses hyperbolic geometry, from his Princeton lecture notes which reached Orsay, and whose importance was immediately realized. The introduction of hyperbolic geometry in the study of the topology of 3-manifolds was completely new. Siebenmann asked Norbert to give a course on hyperbolic geometry. Norbert responded that he knew very little about this. Siebenmann told him then: "This should not be a problem, you will learn it by giving this course.\" It was a happy coincidence that Norbert was planning to visit the Mittag-Leffler Institute, just after Siebenmann's request. There, in one of the attics that were next to the library, he found a set of old papers, notes and documents on hyperbolic geometry. He went through them and he took some notes. He came back to France with enough material to build a course. At Orsay, Douady and Poénaru knew a little bit of hyperbolic geometry. Deligne also had already come across this subject. He was studying Hodge structures, which can be viewed as symmetric spaces, and he used to make computations using models of hyperbolic geometry. For most of the other mathematicians at Orsay, hyperbolic geometry was unknown at that time. It was a dormant subject and it needed to be resurrected. In fact, before Thurston arrived at Orsay, even the subject of Riemann surfaces in the original sense of Riemann, Klein and Poincaré---the sense which was restored by Thurston---, was very poorly known there. Algebraic geometry was at its peak. A Riemann surface was considered as a 1-dimensional scheme. Thurston brought hyperbolic geometry and Riemann surfaces at the center of the discussions at the topology seminar at Orsay. Norbert gave his course, and as Siebenmann had predicted, he learned hyperbolic geometry while he was lecturing on this subject. In his teaching, and later on, Norbert was doing hyperbolic geometry as it would have been done in school, or rather, as Euclid's geometry was taught in school: starting with the fact that by any two points passes a single line, and then the other axioms and postulates, then, explaining the geometry of triangles, showing that the three altitudes of any triangle intersect in one point, etc. Norbert also established the trigonometric formulae, and then he passed to dimension 3. His point of view was that of Lobachevsky, that is, model-free, About twenty-five years after Norbert gave this course, I went with him through his notes. We expanded them together, we included spherical geometry and transitional geometries into the discussion and we published them. This is how I learned much of what I know in non-Euclidean geometry. Norbert was professor at Orsay for the period 1977-1982. He taught differential calculus to second- and third-year students. The differential calculus course at French universities has very little to do with the calculus courses given at American universities. There were a few textbooks available on the subject, by Dieudonné, Dixmier and some others, but it was (and is still) unusual for a professor in France to assign a book or follow a book written by someone else. The students usually take notes. In 1977-78, Norbert's course was centered on the qualitative behavior of integral curves of differential equations. At least this is what the author of these lines, who was a third-year student there, remembers. The third-year students were divided into two groups. Douady taught the other group. 
Norbert recalls that some students and teaching assistants were complaining because he was doing an unusual program. In 1979, Norbert gave a talk at the Bourbaki seminar. The subject was the first part of Hilbert's 16th problem, a question which concerns the number of connected components of a smooth algebraic plane curve in terms of the degree. He explained results of Gutkov, Arnol'd, Rokhlin, Guillou--Marin and others. I was a fourth-year student (in the "maı̂trise\" year) and I attended that seminar, just by curiosity. I enjoyed the pictures, and the way Norbert was drawing them on the blackboard. Later on, I remembered these pictures when I saw for the first time the Chladni figures that appear in music theory, namely, in the theory vibration of plates. During the period he spent at Orsay and Jussieu, Norbert had the opportunity to visit again the United States. In those times, it was possible for a university professor in France to do the total amount of his yearly teaching hours in one semester, and then, during the other semester, he could travel. Nobody complained about this. In 1977, David Eisenbud invited him to Brandeis. He gave there a course on singularities. Needless to say, traveling to the United States was the occasion for him to meet mathematicians. Among those he met, he mentions Bott and Thurston. Back to Sullivan. Dennis arrived at Orsay in 1973-74 as a visiting professor. During that year, he conducted an explosive seminar at Bures, which gathered all the dynamicists from the Paris area. The sessions were always followed by long discussions, which took place first at l'Ormaille and continued at Dennis's apartment, Boulevard Jourdan. This was an apartment given to Dennis by the university. In fact, the whole building belonged to the City of Paris and it was managed by the university, and several mathematicians lived there for some time. Henri Cartan used to live in that building, and he was coordinating the distribution of the various apartments. Hadamard also had lived in that building. Pierre Lelong had occupied Dennis's apartment before him, and he had left it because he found it too big for him. Norbert remembers that officially, Lelong was still the tenant. Dennis liked to invite mathematicians at his home. During his visits to him, Norbert met several people. He remembers having seen there André Weil's daughter. After that visit to Orsay, Dennis was proposed a permanent position at IHÉS; it was the position that was left vacant by Grothendieck. Soon after, Norbert got acquainted with Swiss universities. He visited Geneva, where there was a strong mathematical department, with Haefliger, whom he already knew, Michel Kervaire, who was French but who had studied in Zürich, and who had decided to work in Switzerland after a few years of teaching at New York University, and there were also other very good topologists and geometers. It was Kervaire who told Norbert about this possibility of visiting position. In Geneva, there was a mathematics seminar where everyone could ask many questions, as many as he wanted. The participants' aim there was to understand. "Time was infinite there\", Norbert says. After the seminar, the discussions continued at the café. In contrast, asking questions during a seminar was rather unusual in France. An opportunity arose for Norbert to spend a year in Lausanne, which is less than one hour driving or by train from Geneva. He took a leave from Paris and spent the academic year 1976-77 there. 
In 1982, after having been professor at Orsay for five years (1977-78 to 1981-82), Norbert left France permanently for Switzerland. About this move to Switzerland, I remember the following facts I learned from him. During one of his visits to the Geneva mathematics section, Norbert saw a small handwritten announcement on the bulletin board, for a professorship in Basel. He asked people there for more information and for advice. The only response he heard was that the teaching in Basel was in German. In Geneva, very few people spoke German. An exception was Kervaire, who was fluent in several languages, including German and modern Greek. Haefliger told Norbert that for the Swiss living in French speaking regions, the German language reminds them of their military service. Norbert said he had no problem with teaching in German. He applied and obtained the position. He was appointed professor in Basel in 1982. I do not remember the exact year when Norbert started coming regularly from Basel to Strasbourg, but it was in the 1990s. He used to come by bike. Sometimes, when there was a conference, he would come by bike together with his students, but it would take longer, because the students were not used to such a long bike trip. Everyone who knows Norbert knows his passion for cycling. He can immediately recognize a good bike. I noticed that when he sees one on the street he stops to watch. He told me that he bought his first good bike with his first professor's salary, and it cost him the entire salary. People close to him were surprised by the price he paid for the bike. After a few years, this bike was damaged in an accident and he bought an even better one, which he still uses today, after more than 45 years. In May 2013, I was at a master-class on the island of Samos, and Norbert was among the teachers. In order to be able to explore the island by bike, he took with him his mountain bike, on the plane. He needed to pump air into the tires after the flight and therefore entered a bike shop in Karlovassi. He still remembers the owner's name, Georgios Katsouris. The latter gave him a pump and asked him to sign a paper written in Greek. He then revealed to him that with this signature he had registered for a bike race taking place a few days later in a village called Kyriaki, in the mountains of Samos. Norbert indeed took part in this race and ended up first in the over-40 category. After the race the entire village got together for a feast on the market place with a lot of food and drink. Norbert spent the evening there, talking to bikers and villagers, asking questions about life on the island, enquiring about meanings of names and of words, while the other mathematicians were having their usual dinner at the conference hotel. This is how Norbert made friends in Samos. At the end of his stay, he had to pass by the store of Katsouris to return the pump, and I went with him. Katsouris used to sell and rent bicycles which were made by his own hands. Norbert was in awe: he told me that these hand-made bikes were really great. He talked with Katsouris for hours. I already knew (from Norbert) that there is a worldwide community of cyclists, and that whenever they meet they have millions of things to talk about. Our common friend Charalampos Charitos tells me that some Greek mathematicians, when they talk about Norbert, talk about the "cyclist\". One day, we were at a conference in Kunming, and there is a big lake there. Charitos, who was also there, told Norbert that he would like to go around the lake by bicycle.
Norbert took this very much to heart and spent some time finding him a bike. Eventually, Charitos realized that the distance to go around the lake was too long and he gave up. Norbert rides his bike mostly in the countryside. He crossed France by bicycle. I can understand that this passion for cycling is part of his need for freedom and open spaces. There is also, for a man who, like him, has a peasant's soul, the simple joy of contact with nature, a pleasure for the eyes and for the heart. To see the farms and all these fields around him reminds him of his childhood. Once, I was at his place in Witterswil, and he told me: let's go to buy some bread. I realized he wanted us to go by bike and he gave me one of his bikes. I thought we were just going to the next village, but in fact his plan was to go quite far. We took small country roads, sometimes crossing meadows and going around ponds. He was riding his bike ahead of me, and the distance between us was getting bigger and bigger, so very soon I lost sight of him. At a crossroads I didn't know which way to go. As he doesn't have a cell phone, and since I didn't know the way back to his home, I waited for him at the crossroads. The time seemed long, I was anxious, but eventually he came back. When people go to his place, I tell them: beware, do not go for biking with him unless you are used to that. I asked Norbert why he left France for Switzerland. His first answer was "teaching\". He has always enjoyed teaching at all levels. At Orsay, he did not like the way teaching was distributed. The first and second year students were not sufficiently taken care of. Students who finished the "classes préparatoires aux grandes écoles\" entered directly the "licence\" year (third year) and the level then rose sharply. The first year students of the École Normale Supérieure arrived also in that licence year. These students received special instruction at their school, in addition to the courses they followed at Orsay, where the program seemed easy for them; they were only required to obtain their "licence\" and then their "maı̂trise\" there, since the École Normale does not deliver such diplomas. They were being prepared for a PhD program. The result is that the third and fourth year students at Orsay received much more care than those of the first and second years. In general, most of the attention was devoted to those who had finished a "grande école\" (generally, École Normale Supérieure or École polytechnique) and who came to Orsay as assistants or with support from CNRS to work on a PhD or to take part in seminars. All this was at the expense of the first and second year students who were being left behind. This system seemed absurd to Norbert. Another reason for which Norbert preferred Switzerland is that in France, teachers were bound to a "program\", whereas in Switzerland they had more freedom. At Orsay, at that time, the teaching for first and second year students took place in parallel classes, each of them addressed to more than 200 students (sometimes much more) packed in big lecture halls, whereas Norbert was dreaming of teaching courses in the spirit of those he had with Freudenthal and his other teachers in Utrecht. It was only later, in Basel, that he was able to have comparable conditions, and with no predefined program: he always made his teaching adapted to the students he had in front of him. I also asked Norbert how did it come about that he became president of the Swiss Mathematical Society. 
Being president of any organization does not fit with his personality. He responded: "It was a big mistake\". He did not really explain what he meant by that, but he recounted the circumstances under which he became president, and indeed it was a chain of misunderstandings. It started one day when he decided to go to the annual meeting of the Society, where Thom was announced as a speaker. The annual meeting was organized that year in the new canton of Jura, at two different locations, namely Porrentruy and Delémont. Norbert did not read the announcement correctly, and on that day he went to Delémont, whereas Thom was speaking in Porrentruy. Incidentally, Thom's hometown, Montbéliard (in France), is about 20 km from Porrentruy. There were very few people at Delémont when Norbert arrived; on that day, only the administrative meeting took place there. The mathematicians in the executive committee of the SMS chatted with him; they told him that the position of treasurer of the Society was vacant, and they convinced him to run for it. This is how he was elected treasurer. But soon after, he learned that normally the treasurer is elected secretary after two years, and then president for another two years. So eventually he was elected secretary and then president. This is how Norbert became president of the Swiss Mathematical Society, for the years 1988 and 1989. Norbert has no talent for administrative duties. He told me that one of the former presidents, his friend and colleague in Basel Heinz Huber, when he learned about his election, said: "Let us see if the Society will survive that president.\" But he managed, even though he did not know much about the way things are organized in Switzerland: the confusion between Porrentruy and Delémont is just one example of the things with which he was unfamiliar. Recently, Norbert wrote a book on geometry, very different from all the existing ones. The book is titled *Topological, differential and conformal geometry of surfaces*. It is based on courses he gave in Basel, Vienna, Samos, Kunming, Bangalore and Varanasi. The book contains classical material that every student must have seen at least once: complex structures from the point of view of $J$-fields, integrability on surfaces and the obstructions to integrability in higher dimensions by the Nijenhuis tensor, the Poincaré lemma, Brouwer's theorem, Morse functions, Stokes' theorem, de Rham, Čech and Dolbeault cohomologies, the uniformization theorem for compact Riemann surfaces, the construction of the Riemann surface associated with a meromorphic function, Riemann--Roch, the construction of Teichmüller's universal curve, the Weil--Petersson Kähler structure on Teichmüller space, the embedding of Riemann surfaces in projective spaces and Chow's theorem, etc. But there are also a lot of special topics dear to Norbert: models of the field $\mathbb{C}$ of complex numbers and its group of automorphisms, constructions with finite fields including several using the field with one element, a model for the algebraic closure of $\mathbb{C}$, the hyperbolic plane seen as the set of surjective homomorphisms of the ring $\mathbb{R}[X]$ of one-variable polynomials onto a field, modulo the automorphisms of the field, other models of the same plane: as the set of complex linear structures on the real plane, as the space of cross-ratio preserving involutions of the real projective line, etc. The book also contains several new models of hyperbolic 3-space.
Throughout the book, when he mentions for the first time the name of a mathematician, Norbert recalls that this mathematician is Dutch, French, Iranian, British, Swiss, Russian, Polish, etc., exemplifying the fact that mathematics is the result of a collaboration without borders. We have come to the end of this short biography. I would like to take this opportunity to include a few more personal thoughts. My relationship with Norbert transformed over the years from that of a student with his teacher into friendship and collaboration. We published a certain number of joint papers, but I think that for me, from the purely mathematical point of view, the most rewarding collaboration with him was the fact that on several occasions we went through writings of great authors from the past, including Euler, Riemann, Klein, Teichmüller, Poincaré, Grothendieck and a few others. A priori it was for no special purpose, just for the satisfaction of reading the masters, but this turned out to have an impact on both of us. We understood important ideas there, and we used them later in our works. Norbert has a social and humanistic view of life, which dates back to his deeply agricultural origins. He has that profound sense of justice, which also goes back to his peasant origins, that of people who know the honesty of the land that gives back exactly what it owes to the work of the farmer. As a mathematician, he carries a sense of intellectual honesty to a very high degree. He is a hard worker, but he also knows how to have moments of rest free of all worries. After a few hours of mathematical discussions, he tells me "Now we're going to have a drink\". I learned from him the sentence "À la belle vie !\" which he pronounces as an expression of good wishes. He is curious about everything. He often asks questions about etymology and the pronunciation and spelling of words in foreign languages (Greek, Latin, Arabic, Chinese, Japanese, etc.), looking for relations between them. He is always curious about customs and ways of life in foreign countries. It is always interesting to hear him talking about scenes or small details which nobody else would have noticed, after a stay he has made at a mathematical institute in Pakistan, Algeria, etc. Without making a show of it, he knows a lot about history and philosophy. He has a very unconventional and anti-establishment attitude about life and relations between people, but always with an extremely civic spirit. He has a refractory and insubordinate character. In his teaching at university, he did not want to hear about programs. He hates administration and power. Without being an anarchist, he is, in every sense of the word, the opposite of a bourgeois. Spending time with him gives you a real sense of freedom. The house he shares with Annette is permanently open. His former (graduate and undergraduate) students often visit them. Everyone who knows that house can go there any time. Common friends, when they come to Strasbourg, have made a habit of visiting them in Basel; the distance between the two cities is only 120 km. I cannot mention them all, but recently there were Sumio Yamada, Ken'ichi Ohshika, Bob Penner, Alexey Sossinsky, Krishnendu Gongopadhyay, and there are many others. To his visitors who come to Basel for the first time, Norbert usually shows the tombstone of Jacob Bernoulli in one of the cloisters of Basel's cathedral, decorated with the logarithmic spiral design and the motto *Eadem mutata resurgo* ("Changed and yet the same, I rise again\").
He also shows them the house where Euler lived as a child and the church where his father officiated as pastor, before taking them to his house, in the middle of a garden, part of which he cultivates himself, and the other part cultivated by Annette. Friends have become collaborators and vice versa. In his book on geometry I mentioned above, after a section in which he explains the classification of surfaces using Morse functions, he writes, in a small section entitled *Thoughts*: "In societies one tries to measure trends, for instance 'optimistic', 'pessimistic', 'undecided' by $o-p$, and forgets $u$. Situations which invite the response 'yes', 'no' or 'doubt' tend to be measured by $y-n$, forgetting the $d$, which is a counting that systematically forgets those who want to think or think more.\" He then adds: "Unfortunately, many countries on today's Earth consider individuals as critical points. This is true especially in the context of power and truth. In a society with only one source of truth, say by a religion or by a dominant, or only one, political party, the count is even more restricted. An individual with an opinion that does not question the official truth is counted, those that do question, but keep silent, are not counted, and others are stored in camps and prisons, if not eliminated.\" I would like to end with a few words that Norbert wrote to me in a recent mail (November 1st, 2022): "I see mathematics as an activity directly related to life, the life of animals, and therefore also of humans,\" a sentence which immediately reminded me of Manin's sentence, from his ICM talk *Mathematics as a metaphor* (Kyoto, 1990): "Mathematics is a novel about Nature and Human kind\", and also, of Galileo Galilei's often quoted related sentence: "\[Natural\] Philosophy is written in this very great book that continually stands open before our eyes (I say the universe), but it cannot be understood unless one first learns to understand the language, and the characters, and what is written there. It is written in a mathematical language, and the characters are triangles, circles, and other geometrical figures, without which it is impossible to humanly understand a word of it; without them, it is a vain wandering through a dark labyrinth.\" (The Assayer, 1623). Norbert continues: "The migration of birds, like the hunting methods of raptors, require an enormous geometrical knowledge. Look at a bird landing directly on a tiny shaking branch: no artificial intelligence can yet achieve this. Similarly, the cow that starts licking the newborn calf's skin, immediately after birth, in one direction and then in the other, so that the hairs form an air cushion which provides thermal protection, is very competent in differential geometry.\" He then adds: "But the human species can communicate with future generations by writing, while other species do it rather by a selection that is inscribed in the genes, and this process is much longer. They cannot give a knowledge to the generation that follows them immediately. This gives humans a slight advantage. Through writing, we can pass on direct knowledge to our offspring.\" In a technical civilization where we sometimes see people engaged in senseless activities, a friendship with someone like Norbert helps to regain a balanced view on life.
arxiv_math
{ "id": "2310.01150", "title": "Caf\\'e conversations: A tribute to Norbert A'Campo", "authors": "Athanase Papadopoulos (IRMA)", "categories": "math.HO math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We provide an alternative unified approach for proving the Pythagorean theorem (in dimension $2$ and higher), the law of sines and the law of cosines, based on the concept of shape derivative. The idea behind the proofs is very simple: we translate a triangle along a specific direction and compute the resulting change in area. Equating the change in area to zero yields the statements of the three aforementioned theorems. author: - "Lorenzo Cavallina [^1]" title: "**Pythagorean Theorem, Law of Sines and Law of Cosines: alternative proofs via shape derivatives** " --- Key words shape derivative, Pythagorean theorem, law of sines, law of cosines, alternative proof

# Introduction

Let $T$ denote a triangle in the Euclidean plane with vertices $A$, $B$, and $C$. Also, let $\alpha$, $\beta$, and $\gamma$ denote the respective angles. Moreover, by a slight abuse of notation, we will use the letters $a$, $b$, and $c$ to denote both the sides $BC$, $AC$, $AB$, and their respective lengths. For instance, we will write $\int_a 1 = a$. Finally, $n_a$, $n_b$, and $n_c$ will denote the outward unit normal vectors to the sides $a$, $b$, and $c$ respectively. In what follows, we will give an alternative proof of the following three famous results of Euclidean geometry. (i) $\gamma=\pi/2\implies a^2+b^2=c^2.$ (*The Pythagorean Theorem*). (ii) $\displaystyle \frac{a}{\sin\alpha}=\frac{b}{\sin\beta}=\frac{c}{\sin\gamma}.$ (*The Law of Sines*). (iii) $a^2+b^2-2ab\cos\gamma=c^2.$ (*The Law of Cosines*) The main ingredient of our proofs is the concept of shape differentiation. Consider a bounded (reference) domain $\Omega$ in ${\mathbb{R}}^N$ ($N\ge2$) with Lipschitz boundary $\partial\Omega$ and outward unit normal vector $n$ (defined almost everywhere on $\partial\Omega$). Also, consider a Lipschitz continuous vector field $\xi:{\mathbb{R}}^N\to{\mathbb{R}}^N$ with Lipschitz continuous first derivatives and define the perturbed domain as follows: $$\Omega_t:=\left\{ x+t\xi(x) \;\middle |\; x\in\Omega\right\},\quad \text{for small } t\ge0.$$ Let $f:{\mathbb{R}}^N\to\mathbb{R}$ be a real-valued integrable function with integrable first derivatives and consider the map $t\mapsto \int_{\Omega_t} f$. This map is differentiable at $t=0$ and its derivative is given by the following lemma. **Lemma 1** (Hadamard's formula). *Under the notation above, we have $$\left.\frac{d}{dt}\left(\int_{\Omega_t} f\right)\right|_{t=0}= \int_\Omega\mathop{\mathrm{div}}(f\xi) =\int_{\partial\Omega} f\xi\cdot n.$$* At its core, the above result is a consequence of the change of variables formula for integrals and the divergence theorem. We refer the interested reader to [@HP2005 Theorem 5.2.2] for a proof in a more general setting, where the integrand $f$ may also vary with time. In what follows, we will only use Lemma [Lemma 1](#Hadamard){reference-type="ref" reference="Hadamard"} in the special case when the reference domain $\Omega$ is a triangle $T$, $f\equiv 1$, and $\xi$ is constant. Notice that this corresponds to considering the change in area of our triangle $T$ after a translation in the direction of $\xi$. ![The construction used in our proof of the Pythagorean theorem](pythagoras.png){#fig pythagoras width="50%"}

# The Pythagorean theorem

Let $T$ denote a triangle as in the introduction.
Also, assume that $\gamma$ is a right angle. Let us consider the translation $T_t:=T+t c\, n_c$. By Lemma [Lemma 1](#Hadamard){reference-type="ref" reference="Hadamard"} we have $$0=\left.\frac{d}{dt}|T_t|\right|_{t=0}=\int_{\partial T} c\, n_c\cdot n=c\int_{c} n_c\cdot n_c + c\int_{a} n_c\cdot n_a + c\int_{b}n_c\cdot n_b=c^2-a^2-b^2.$$ Here we used the fact that $c\, n_c\cdot n_a=-a$ and $c\, n_c\cdot n_b=-b$ (see Figure [1](#fig pythagoras){reference-type="ref" reference="fig pythagoras"}).

# The law of sines

Let $T$ denote a (non-necessarily right) triangle, as in the introduction. Also, let $e_x$ denote the unit vector parallel to $BC$ pointing toward $B$. Let us consider the translation $T_t:= T+te_x$. By Lemma [Lemma 1](#Hadamard){reference-type="ref" reference="Hadamard"} we have: $$0=\left.\frac{d}{dt}|T_t|\right|_{t=0}=\int_{\partial T} e_x\cdot n = \int_{c} e_x \cdot n_c + \int_{b} e_x\cdot n_b = c\sin\beta-b\sin\gamma,$$ which, rearranging the terms, yields the desired $b/\sin\beta=c/\sin\gamma$. Here we used the fact that $e_x\cdot n_c=\cos(\pi/2-\beta)=\sin\beta$ and $e_x\cdot n_b=-\cos(\pi/2-\gamma)=-\sin\gamma$, while the integral over the side $a$ vanishes because $e_x\cdot n_a=0$ (see Figure [2](#fig law of sines){reference-type="ref" reference="fig law of sines"}). The remaining identities concerning the term $a/\sin\alpha$ are obtained analogously by employing a translation along the direction parallel to one of the remaining sides. ![The construction used in our proof of the law of sines](law_of_sines.png){#fig law of sines width="40%"}

# The law of cosines

Let $T$ denote a (non-necessarily right) triangle, as in the introduction. Let us consider the translation $T_t:=T+t(c\, n_c-a\, n_a-b\, n_b)$. By Lemma [Lemma 1](#Hadamard){reference-type="ref" reference="Hadamard"} we have: $$\begin{aligned} 0=\left.\frac{d}{dt}|T_t|\right|_{t=0}=\int_{\partial T} (c\, n_c-a\, n_a-b\, n_b)\cdot n = \\ c\int_{a} n_c\cdot n_a + c\int_{b} n_c\cdot n_b + c^2 -a^2 - a \int_b n_a\cdot n_b - a\int_c n_a\cdot n_c -b\int_a n_b\cdot n_a - b^2 - b\int_c n_b\cdot n_c \\ = c^2-a^2-b^2 -2ab\, n_a\cdot n_b = c^2-a^2-b^2+2ab\cos\gamma, \end{aligned}$$ which is the desired identity. Here we used the fact that $n_a\cdot n_b=-\cos\gamma$ (see Figure [3](#fig law of cosines){reference-type="ref" reference="fig law of cosines"}). ![The construction used in our proof of the law of cosines](law_of_cosines.png){#fig law of cosines width="60%"}

# Concluding remarks

Nowadays, hundreds of different proofs of the Pythagorean theorem are known (see for instance the work of Loomis [@Loomis], where more than two hundred proofs are presented and analyzed). Nevertheless, it is worth mentioning that the approach given here does not seem to belong to any of the four categories introduced by Loomis.
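Although it plays no role in the proofs above, the boundary-integral identity behind them is easy to check numerically. The following Python sketch is purely illustrative and not part of the original argument: the helper `outward_normals` and the chosen 3-4-5 right triangle are our own assumptions. It verifies that the boundary integral of the constant field $c\,n_c$ vanishes and that the three side contributions reproduce $c^2-a^2-b^2=0$.

```python
import numpy as np

def outward_normals(vertices):
    """Hypothetical helper (not from the paper): for each side of the triangle,
    return (length, outward unit normal). The normal is flipped so that it
    points away from the opposite vertex, so the input orientation is irrelevant."""
    sides = []
    for i in range(3):
        P, Q = vertices[i], vertices[(i + 1) % 3]
        R = vertices[(i + 2) % 3]                       # opposite vertex
        edge = Q - P
        n = np.array([edge[1], -edge[0]]) / np.linalg.norm(edge)
        if np.dot(n, R - P) > 0:                        # make n point outward
            n = -n
        sides.append((np.linalg.norm(edge), n))
    return sides

# Right triangle with the right angle at C: legs 3 and 4, hypotenuse 5 (our choice).
A, B, C = np.array([0.0, 4.0]), np.array([3.0, 0.0]), np.array([0.0, 0.0])
(c, n_c), (a, n_a), (b, n_b) = outward_normals([A, B, C])   # sides AB, BC, CA

xi = c * n_c   # constant translation field used in the proof of the Pythagorean theorem

# A translation preserves area, so the boundary integral of xi . n must vanish ...
print(sum(length * np.dot(xi, n) for length, n in [(c, n_c), (a, n_a), (b, n_b)]))  # ~0
# ... and splitting it over the three sides gives c^2 - a^2 - b^2 = 0.
print(c**2 + c * a * np.dot(n_c, n_a) + c * b * np.dot(n_c, n_b))                   # ~0
print(c**2, a**2 + b**2)                                                            # 25.0 25.0
```

The analogous check with $\xi=c\,n_c-a\,n_a-b\,n_b$ on a generic (non-right) triangle recovers the law of cosines decomposition in the same way.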
On a tangential note, we stress that our proofs are *not trigonometric* in nature because, even though the terms $\sin\theta$ and $\cos\theta$ appear, they simply stand for the ratio of two lengths in a figure, and no trigonometric identity is used in a significant way (the only exception being $\cos(\pi/2-\theta)=\sin\theta$, which just follows from the very geometrical definitions of sine and cosine and the fact that the interior angles of a triangle in the plane add up to $\pi$ radians). Also, notice that we used the parallel postulate (or one of its equivalent statements) in each proof (this cannot be avoided, as it is well-known that the parallel postulate and the Pythagorean theorem are equivalent). In particular, we explicitly made use of said postulate when chasing angles and implicitly when we used the fact that translating a pair of vectors does not change the angle between them. Finally, we remark that our proof of the Pythagorean theorem also generalizes to the $N$-dimensional case. The general proof in the case of right $N$-simplexes is given below (the interested reader should compare it to the one given in [@Eifler; @Rhee], where a similar construction relying on the arbitrariness of the vector field is used instead). Consider an $N$-simplex $\triangle$ in ${\mathbb{R}}^N$ ($N\ge2$) defined as the convex hull of $\{0,x_1,\dots, x_N\}$, where $\{x_1, \dots, x_N\}$ is an orthonormal family of vectors. For $i=1,\dots, N$, let $\Sigma_i$ denote the $(N-1)$-face containing $\{0,x_1, \dots, x_N\}\setminus\{x_i\}$, let $A_i$ denote its (($N-1$)-dimensional) area and let $n_i$ denote its outward unit normal vector. Similarly, let $\Sigma_C$ denote the remaining face, with (($N-1$)-dimensional) area $C$ and outward unit normal vector $n_C$. Along the same lines as section $2$, consider the translation $\triangle_t:= \triangle+tC\, n_C$. We have $$0=\left.\frac{d}{dt} |\triangle_t|\right|_{t=0}=\int_{\partial\triangle} C\, n_C\cdot n = \int_{\Sigma_C} C\, n_C\cdot n_C + \sum_{i=1}^N \int_{\Sigma_i} C\, n_C\cdot n_i = C^2-\sum_{i=1}^N A_i^2,$$ which is the desired $N$-dimensional version of the Pythagorean theorem. ![The construction used in the proof of the $N$-dimensional Pythagorean theorem.](tridimensional.png){#fig tridimensional width="60%"} Here we used the fact that for $i=1,\dots, N$, $A_i=-C\, n_C\cdot n_i$. To show this identity, we will use the volume formula for a $k$-simplex $\triangle_k$ of base $\triangle_{k-1}$ and height $h$: $$\left\{\text{$k$-dimensional volume of $\triangle_k$}\right\}=\frac{h}{k}\times \left\{\text{($k-1$)-dimensional volume of $\triangle_{k-1}$}\right\}.$$ In light of the above, since for $i=1,\dots, N$, the ($N-1$)-simplexes $\Sigma_C$ and $\Sigma_i$ share the same base $E_i$ (given by the convex hull of $\{x_1,\dots, x_N\}\setminus\{x_i\}$), it will be enough to compute the ratio of their heights. To this end, let $P_i$ be the projection of $0$ (or, equivalently, of $x_i$) onto $E_i$ and let $\theta_i$ be the angle between $P_i0$ and $P_ix_i$.
Since the heights of $\Sigma_C$ and $\Sigma_i$ with respect to the common base $E_i$ are given by $\overline{P_ix_i}$ and $\overline{P_i0}$ respectively, we have $$\frac{A_i}{C}=\frac{\overline{P_i0}}{\overline{P_ix_i}}=\cos\theta_i=-n_C\cdot n_i,$$ which is the desired identity (see also Figure [4](#fig tridimensional){reference-type="ref" reference="fig tridimensional"}).

[L. Eifler, N.H. Rhee]{.smallcaps}, *The n-Dimensional Pythagorean Theorem via the Divergence Theorem*, The American Mathematical Monthly, Volume 115 (2008), Issue 5, 456--457, DOI: 10.1080/00029890.2008.11920550.

[A. Henrot, M. Pierre]{.smallcaps}, *Shape variation and optimization (a geometrical analysis)*, EMS Tracts in Mathematics, Vol. 28, European Mathematical Society (EMS), Zürich, 2018.

[E.S. Loomis]{.smallcaps}, *The Pythagorean Proposition: Its Demonstrations Analyzed and Classified, and Bibliography of Sources for Data of the Four Kinds of "Proofs"*, Second edition, 1940, available at http://files.eric.ed.gov/fulltext/ED037335.pdf

[Mathematical Institute, Graduate School of Science, Tohoku University, Aoba-ku, Sendai 980-8578, Japan]{.smallcaps}\ *Electronic mail address:* cavallina.lorenzo.e6\@tohoku.ac.jp

[^1]: This research was partially supported by JSPS KAKENHI Grant Numbers JP23H04459, JP22K13935, and JP21KK0044.
arxiv_math
{ "id": "2309.16691", "title": "Pythagorean Theorem, Law of Sines and Law of Cosines: alternative proofs\n via shape derivatives", "authors": "Lorenzo Cavallina", "categories": "math.HO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper presents a novel approach to establish a blow-up mechanism for the forced 3D incompressible Euler equations, with a specific focus on non-axisymmetric solutions. We construct solutions on $\mathbb{R}^3$ within the function space $C^{3,\frac12}\cap L^2$ for the time interval $[0, T)$, where $T > 0$ is finite, subject to a uniform force in $C^{1,\frac12 -\epsilon}\cap L^2$. Remarkably, our methodology results in a blow-up: as the time $t$ approaches the blow-up moment $T$, the integral $\int_0^t |\nabla u| ds$ tends to infinity, all while preserving the solution's smoothness throughout, except at the origin. In the process of our blow-up construction, self-similar coordinates are not utilized and we are able to treat solutions beyond the $C^{1,\frac13+}$ threshold regularity of axi-symmetric solutions without swirl. author: - "Diego Córdoba[^1] and Luis Martínez-Zoroa[^2]" title: Blow-up for the incompressible 3D-Euler equations with uniform $C^{1,\frac{1}{2}-\epsilon}\cap L^2$ force ---

# Introduction

In this paper we consider the initial value problem for the forced incompressible Euler equations on $\mathbb{R}^3$ $$\begin{aligned} \label{Euler} \partial_t u + (u\cdot\nabla) u + \nabla p= f,\\\nonumber \nabla\cdot u = 0 \\ u_0(x)=u(x,0), \nonumber\end{aligned}$$ where $u(x,t)=(u_1(x,t), u_2(x,t), u_3(x,t))$ is the velocity field and $p=p(x,t)$ is the pressure function of an ideal fluid. $f=(f_1(x,t), f_2(x,t), f_3(x,t))$ is an external force. One of the significant unresolved questions in the realms of nonlinear partial differential equations and fluid dynamics is whether smooth initial data with finite energy can lead to a finite-time singularity/blow-up in the 3D incompressible smooth forced Euler equations ([\[Euler\]](#Euler){reference-type="ref" reference="Euler"}). The classical theory of well-posedness for the Euler equation ([\[Euler\]](#Euler){reference-type="ref" reference="Euler"}), dating back to the works of Lichtenstein [@Lich] and Gunther [@Gunther], asserts the existence of a unique solution in the function space $C^{k,\alpha}\cap L^2$ (with $k\geq1$ and $\alpha\in (0,1)$) for a finite time interval $[0,T)$. This solution is guaranteed when the initial divergence-free data $u_0(x)$ is in the space $C^{k,\alpha}\cap L^2$ and the external force $f(x,t)$ is uniformly in the space $C^{k,\alpha}\cap L^2$ for all times of existence. Then the natural question is: Do those solutions exist for all time? Or does there exist a finite time $T$ at which there is a singularity? In the specific context of two-dimensional scenarios, the local well-posedness mentioned above can be seamlessly extended to cover all time intervals (see [@Hol; @wol]), which can be deduced thanks to the Beale-Kato-Majda criterion [@BKM], as the flow inherently carries the vorticity $\omega = \nabla\times u$ (see also [@CFM] for another very interesting geometric criterion for blow-up). However, it is crucial to note that singular behavior remains a possibility. For instance, Kiselev and Šverák [@KS] achieved the optimal growth bound for the 2D incompressible Euler equations within a disk. Furthermore, it has been established that the equations exhibit a lack of local well-posedness in critical spaces, notably within $C^k$ spaces, for integers $k\geq 1$. This deficiency, due to the non-locality of the pressure, was proven by Bourgain and Li [@Bourgaincm], as well as independently by Elgindi and Masmoudi [@Elgindi].
Their studies conclusively demonstrated the presence of strong ill-posedness and the absence of uniformly bounded solutions for the initial velocity $u_0$ within the context of $C^k$ (see also [@Bourgainsobolev; @Elgindisobolev] for similar results in critical Sobolev spaces). Moreover, in a recent collaborative publication [@CMZ] in conjunction with Ozanski, we established the existence of a family of global unique classical solutions in the two-dimensional case that display an instantaneous loss of super-critical Sobolev regularity. For a more comprehensive historical overview of the finite-time singularity problem in the context of the Euler equation, we recommend consulting the following sources: [@BT; @Con; @Fef; @Gib; @Hou3; @Kis; @MB]. These references provide in-depth insights and reviews on the subject matter. The symmetries inherent in the Euler equations offer a valuable opportunity for investigating axi-symmetric solutions. These solutions are governed by a concise system of two evolution equations that operate within two spatial variables. In particular, when there is no swirl (i.e. absence of angular velocity), it has been shown that for initial data in $C_c^{1,\frac13 +}$ there is global existence (see [@Yud] and also the survey [@DE] for an interesting review on the well-posedness of axi-symmetric flows), but it turns out that in the case of lower regularity these solutions can develop a singularity. The first proof of finite-time singularities was pioneered by Elgindi in his recent remarkable work [@Elgindi2]. In his study, he focused on flows characterized by axi-symmetric symmetry and a lack of swirl, featuring velocity profiles in the $C^{1,\alpha}$ class, where $\frac13>\alpha > 0$ is chosen to be sufficiently small. Elgindi demonstrated the existence of exact self-similar blow-up solutions with infinite energy and no external forces. However, it should be noted that finite-energy blow-up solutions can also be obtained by introducing a uniform non-zero $C^{1,\alpha}$ external force, as discussed in Remark 1.4 of [@Elgindi2]. Furthermore, Elgindi, Ghoul, and Masmoudi extended these findings in their work [@Elgindi3], revealing that these blow-up solutions can be localized. Through a detailed analysis of the stability of these blow-up solutions, they established the occurrence of finite-time singularities in the unforced ($f=0$) Euler equation ([\[Euler\]](#Euler){reference-type="ref" reference="Euler"}), for solutions with finite energy in the $C^{1,\alpha}$ space. Recent advances have brought significant developments in the study of self-similar singularities within axi-symmetric flows when boundaries are involved. We recommend beginning with the work of Elgindi and Jeong [@EJ], where they establish the existence of finite-time blow-up solutions within scale-invariant Holder spaces in domains featuring corners. Furthermore, Chen and Hou, in their work [@Hou], rigorously prove the existence of nearly self-similar $C^{1,\alpha}$ blow-up solutions near smooth boundaries. In their subsequent remarkable research [@Hou2; @Hou4], they demonstrate the blow-up of smooth self-similar solutions through the utilization of computer-assisted proofs. Moreover, in [@WLGB] by Wang, Lai, Gómez-Serrano, and Buckmaster, they leverage physics-informed neural networks to construct approximate self-similar blow-up profiles. In collaboration with Zheng, we have recently introduced in [@CMZh] a novel mechanism for blow-up in $\mathbb{R}^3$ distinct from the conventional self-similar profile. 
Specifically, we have devised solutions to the 3D unforced incompressible Euler equations within the domain $\mathbb{R}^3\times [-T,0]$, with velocity profiles belonging to the function space $C^{\infty}(\mathbb{R}^3 \setminus {0})\cap C^{1,\alpha}\cap L^2$, where $0<\alpha\ll\frac13$, for time intervals $t\in (-T,0)$. These solutions are smooth except at a single point and exhibit finite-time singularities precisely at time 0. Notably, these solutions possess axi-symmetric symmetry without swirl, yet they are not founded upon asymptotically self-similar profiles. Instead, they are characterized by an infinite series of vorticity regions, each positioned progressively closer to the origin than its predecessor. The arrangement of vorticity is such that it generates a hyperbolic saddle at the origin, which, as it moves and undergoes deformation, influences the inner vortices. Consequently, the dynamics of the Euler equation within this framework can be approximated by an infinite system of ordinary differential equations that is explicitly solvable. When we trace the dynamics backward from the blow-up time, this approach empowers us with precise control over the system's behavior, facilitating a rigorous demonstration of the blow-up phenomenon. The velocity field in [@Elgindi2; @Elgindi3] exhibits smoothness away from the axis of symmetry. However, in close proximity to the axis, it possesses only a $C^{1,\alpha}$ regularity. Notably, Chen in [@Che2] has recently extended the construction introduced by Elgindi in [@Elgindi2], enabling the application of asymptotically self-similar ansatz to construct blow-up scenarios where the velocity shares the same regularity class as that presented in our prior work in [@CMZh]. The primary objective of this paper is to establish a blow-up mechanism for the forced 3D incompressible Euler equations ([\[Euler\]](#Euler){reference-type="ref" reference="Euler"}). Specifically, we aim to provide a non-axisymmetric blow-up mechanism that enables the treatment of smoother solutions and push beyond the $C^{1,\frac13}$ regularity threshold of previous scenarios. In particular, we construct solutions that belong to the function space $C^{3,\frac12}\cap L^2$ over the time interval $[0, T)$ for a finite time $T > 0$, under the influence of a uniform force in $C^{1,\frac12 -\epsilon}\cap L^2$. This construction leads to the property that $$\lim_{t\rightarrow T}\int_0^t |\nabla u (x,s)|_{L^{\infty}} ds = \infty,$$ while the solution remains smooth everywhere except at the origin. We do not employ self-similar coordinates in the construction of our blow-up. ## The main blow-up result The main result of the paper is the following: **Theorem 1**. *For any $\epsilon>0$, there exist solutions of the forced 3D incompressible Euler equations ([\[Euler\]](#Euler){reference-type="ref" reference="Euler"}) in $\mathbb{R}^3\times [0,T]$ with a finite $T>0$ and with an external forcing which is uniformly in $C^{1,\frac{1}{2}-\epsilon}\cap L^2$, such that on the time interval $0 \le t < T$, the velocity $u$ is in the space $C^{3,\frac12}\cap L^2$ and satisfies $$\lim_{t\rightarrow T}\int_0^t |\nabla u (x,s)|_{L^{\infty}} ds = \infty.$$* **Remark 1**. *Even though this is not proved in this paper, a careful study of the solution shows that, for any $\delta>0$, the solution for $t\in[0,T-\delta]$ fulfills $u\in C^{\infty}$, and similarly the forcing $f\in C^{\infty}$. Furthermore the vorticity is compactly supported for all $t\in[0,T]$.* **Remark 2**. 
*The solution at time $T$ is in $C^{1-\delta}$, where $\delta$ depends on $\epsilon$.* **Remark 3**. *A more careful construction would allow us to show the result with forcing $f$ uniformly bounded in $C^{1,\alpha}$ for all $\alpha\in[0,\frac12)$.* ## Ideas of the proof In order to study our scenario for blow-up, we start by considering $\omega(x,t)$ a solution to the 3D-Euler equation in vorticity formulation fulfilling $\omega(0,t)=0$, $D^{1}\omega(0,t)=0$ and $u(\omega(0,t))\approx (x_{1},-x_{2},0)$, where $D^i\omega$ refers to the i-th order partial derivatives of $\omega$. Then, by adding a small perturbation around the origin $\omega_{\lambda}(x)=g(\lambda x)$, formally we get, for very large $\lambda$, the evolution equation $$\partial_{t}\omega_{\lambda}(x,t)+(u(\omega_{\lambda})+(x_{1},-x_{2},0))\cdot\nabla \omega_{\lambda}=\omega_{\lambda} \cdot \nabla(u(\omega_{\lambda})+(x_{1},-x_{2},0)),$$ $$\omega_{\lambda}(x,0)=g(\lambda x).$$ By choosing $g(\lambda x)=(g_{1}(\lambda x),0,g_{3}(\lambda x))$, this simplifies to $$\partial_{t}\omega_{1,\lambda}+(u(\omega_{\lambda})+(x_{1},-x_{2},0))\cdot\nabla \omega_{1,\lambda}=\omega_{\lambda} \cdot \nabla u(\omega_{1,\lambda})+\omega_{1,\lambda},$$ $$\partial_{t}\omega_{3,\lambda}+(u(\omega_{\lambda})+(x_{1},-x_{2},0))\cdot\nabla \omega_{3,\lambda}=\omega_{\lambda} \cdot \nabla u(\omega_{3,\lambda}),$$ $$\omega_{1,\lambda}(x,0)=g_{1}(\lambda x),\omega_{3,\lambda}(x,0)=g_{3}(\lambda x).$$ Finally, if we then make the (rather optimistic) assumption that the quadratic terms with respect to $\omega_{\lambda}(x,t)$ remain small for some long time, we get the very simple evolution equation $$\label{evolucionheuristica} \partial_{t}\omega_{1,\lambda}+(x_{1},-x_{2},0)\cdot\nabla \omega_{1,\lambda}=\omega_{1,\lambda},$$ $$\partial_{t}\omega_{3,\lambda}+(x_{1},-x_{2},0)\cdot\nabla \omega_{3,\lambda}=0$$ $$\omega_{1,\lambda}(x,0)=g_{1}(\lambda x),\omega_{3,\lambda}(x,0)=g_{3}(\lambda x).$$ This system of equations can be studied explicitly, and it is relatively straightforward to find initial conditions where $D^{i}u(\omega_{1,\lambda})$ grows a lot. The approach we would then like to use to prove our blow-up would be as follows: - We choose $\omega(x,t)$ so that its velocity creates a hyperbolic flow around the origin, and with $\omega(0,t)=\partial_{x_{j}}\omega(0,t)=0$, for $j=1,2,3$. We call $\omega(x,t)$ the big scale layer. - We add a small perturbation $\omega_{\lambda}(x)$ around the origin, which we call the small scale layer. We consider $\omega_{\lambda}(x,t)$ the solution to [\[evolucionheuristica\]](#evolucionheuristica){reference-type="eqref" reference="evolucionheuristica"}. - We check that $\omega(x,t)+\omega_{\lambda}(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation. If all the approximations we consider to obtain [\[evolucionheuristica\]](#evolucionheuristica){reference-type="eqref" reference="evolucionheuristica"} are reasonable, the force necessary to do this should be small and relatively regular. - If we choose $\omega_{\lambda}(x,t)$ appropriately, $\omega(x,t)$ $\omega(x,t)+\omega_{\lambda}(x,t)$ can produce a (very strong) hyperbolic flow around the origin for $t\approx 1$, and we can have $$\omega(0,t)+\omega_{\lambda}(0,t)=\partial_{x_{j}}(\omega(0,t)+\omega_{\lambda}(0,t))=0.$$ - We now repeat this gluing process an infinite number of times, using now the small scale layer to make an even more localized layer grow very fast. 
Using this approach an infinite number of times, we get a blow-up. There are, however, several important complications one needs to take into account when trying to apply these ideas. First, we need to check that [\[evolucionheuristica\]](#evolucionheuristica){reference-type="eqref" reference="evolucionheuristica"} actually gives us a solution to the forced incompressible 3D-Euler equation, since not all forces are valid (we need, for example, the force to fulfil $\nabla \cdot f =0$). Even more importantly, we need the quadratic terms with respect to $\omega_{\lambda}(x,t)$ i.e. $$u(\omega_{\lambda}(x,t))\cdot\nabla \omega_{\lambda}(x,t),\omega_{\lambda}(x,t)\cdot\nabla u(\omega_{\lambda}(x,t)),$$ to be small. However, in principle, this clashes with our other assumptions that $\omega_{\lambda}$ is very localized and that $D^{1}u(\omega_{\lambda}(x,t))$ becomes very big. In order to deal with this, we need to find $\omega_{\lambda}(x,t)$ where there is some kind of cancellation in the quadratic term, so that it is much smaller than the rough a priori bounds suggest. Furthermore, we will actually use a more complicated evolution equation than [\[evolucionheuristica\]](#evolucionheuristica){reference-type="eqref" reference="evolucionheuristica"} in order to obtain a better approximation of the 3D-Euler equation. ## Outline of the paper Section 2 of this paper deals with some notation that we will use through the paper, as well as some important properties of both the velocity operator and the forced incompressible 3D-Euler equations. Section 3 studies the properties of a simplified evolution equation that we will use to model the evolution of the individual layers that will compose our solution to the forced incompressible 3D-Euler equations in vorticity formulation. In section 4, we consider the solutions to the simplified evolution equation and use the properties we have obtained in section 3 to obtain bounds for the different terms that will compose the force in our solutions to the forced incompressible 3D-Euler equations in vorticity formulation, showing in particular that this force has $C^{\frac12-\epsilon}$ regularity in vorticity formulation (and therefore $C^{1,\frac12-\epsilon}$ in velocity formulation). Finally, in section 5 we use all the previous results to construct the solution that blows up in finite time. # Preliminaries and notation ## Some basic notation All the norms we will be considering here will refer to spatial norms, that is to say, for example, if $f(x,t)$ is a function defined for $t\in[a,b]$, then $$||f(x,t)||_{C^{k,\beta}}=\text{sup}_{t_{0}\in[a,b]}||f(x,t_{0})||_{C^{k,\beta}}.$$ We will also use the notation $C^{k+\beta}$ to refer to the $C^{k,\beta}$ spaces. In general, the domain considered for the time will be clear by context. We also use the notation $|f(x,t)|_{C^{k}}$ to refer to the $C^{k}$ seminorm, i.e. if $f(x,t):\mathds{R}^3\times\mathds{R}\rightarrow\mathds{R}$ $$|f(x,t)|_{C^{k}}=\sum_{i=0}^{k}\sum_{j=0}^{k-i}||\frac{\partial^{k}f(x,t)}{\partial_{x_{1}}^{j}\partial_{x_{2}}^{i}\partial_{x_{3}}^{k-i-j}}||_{L^{\infty}}.$$ We will use $D^{i}$ to refer to a generic (spatial) derivative of order $i$, and $||D^{i}f||_{C^{j}}$ to refer to the supremum of the $C^{j}$ norm of all the possible $i-$order derivatives of $f$. ## Properties for the velocity We start by noting some properties of the operator $u$ that we will use through the paper. 
First, we have that, for $\omega(x)=(\omega_{1}(x),\omega_{2}(x),\omega_{3}(x))$ with $\nabla\cdot\omega(x)=0$ and compactly supported, we can find $u$ fulfilling $\nabla\times u=\omega$ by using the Biot-Savart law: $$\label{uwop} u(\omega)(x)=C\int \frac{(x-y)\times \omega(y)}{|x-y|^3}dy=C\int \frac{h\times \omega(x+h)}{|h|^3}dh.$$ Since the constant $C$ in the kernel is irrelevant, we will just use from now on $C=1$. Note that, even though we should only consider $\omega$ fulfilling $\nabla\cdot\omega=0$, and therefore a vorticity of the form $(\omega_{1}(x),0,0)$ would not give us a valid velocity, we can formally define $u(\omega_{1}(x),0,0)$ (sometimes written as $u(\omega_{1}(x))$ when it is clear by context which component of the vorticity we are considering) as $$u(\omega_{1})(x)=\int \frac{h\times (\omega_{1}(x+h),0,0)}{|h|^3}dh,$$ and similarly for other components of the vorticity. This will just be a convenient notation. With this notation, the first property we should keep in mind is that, for $i=1,2,3$, $$u_{i}(\omega_{i})=0,$$ which can be readily obtained from [\[uwop\]](#uwop){reference-type="eqref" reference="uwop"}, since the $i$-th component of $h\times(\omega_{i}(x+h)e_{i})$ vanishes. Furthermore, if we consider a derivative of $u_{i}$, we have that $\partial_{x_{j}}u_{i}$, $j=1,2,3$, $i=1,2,3$, is a singular integral operator, and in particular we have that, for compactly supported $\omega$, $$||\partial_{x_{j}}u_{i}(\omega)||_{C^{k,\alpha}}\leq C_{k,\alpha}||\omega||_{C^{k,\alpha}},$$ $$\label{lnu} ||\partial_{x_{j}}u_{i}(\omega)||_{L^{\infty}}\leq C ||\omega||_{L^{\infty}}\ln(10+||\omega||_{C^1}).$$ Another interesting property of the velocity operator is that it commutes with the rotation operator. If we define, for some function $f(x)=(f_{1}(x_{1},x_{2},x_{3}),f_{2}(x_{1},x_{2},x_{3}),f_{3}(x_{1},x_{2},x_{3}))$, $$R(f(x))=(f_{3}(x_{2},x_{3},x_{1}),f_{1}(x_{2},x_{3},x_{1}),f_{2}(x_{2},x_{3},x_{1})),$$ we have that $$R(u(\omega))=u(R(\omega)),$$ which in particular implies that, if $\omega(x,t)$ is a solution to the 3D-Euler equation in vorticity formulation, i.e. $$\partial_{t}\omega+u(\omega)\cdot\nabla\omega=\omega\cdot\nabla u(\omega),$$ then $R(\omega)$ is also a solution to the 3D-Euler equation in vorticity formulation. Note that $R$ only affects the spatial variables, and the time variable remains unchanged. We define similarly $R^{-1}$, the inverse of $R$, which has the same properties regarding the velocity operator and the 3D-Euler equation.

## Forced 3D-Euler equations {#forcedeuler}

Throughout this paper, we will be studying the forced (incompressible) 3D-Euler equations, i.e. $$\label{velocityequation} \partial_{t}u+u\cdot\nabla u=-\nabla p+f(x,t)$$ with $\nabla \cdot u(x,t)=0$. By taking the curl of this equation, we get $$\label{vorticityequation} \partial_{t}\omega+u\cdot\nabla \omega=\omega\cdot\nabla u+F(x,t)$$ with $F(x,t)=\nabla\times f(x,t).$ We will study [\[vorticityequation\]](#vorticityequation){reference-type="eqref" reference="vorticityequation"} in order to obtain information about [\[velocityequation\]](#velocityequation){reference-type="eqref" reference="velocityequation"}, which can be recovered (if $\omega(x,t)$ has enough decay and regularity) via [\[uwop\]](#uwop){reference-type="eqref" reference="uwop"}. In order to do so, we need to be a little careful about our choice of $\omega$ and $F(x,t)$.
First, in order to ensure that $u(x,t)$ is well defined, has finite energy, and that $\omega\in C^{\alpha}$ implies $u\in C^{1,\alpha}$ ($\alpha\in(0,1)$), we will only consider $\omega(x,t)$ compactly supported, divergence free and with zero average. Similarly, to ensure that $f(x,t)$ is well defined, is in $L^2$, and that $F\in C^{\alpha}$ implies $f\in C^{1,\alpha}$ ($\alpha\in(0,1)$), we will only consider $F$ compactly supported, with zero average and divergence free. Note that there is a relation between having these properties for $\omega$ and for $F$, and in fact we will only show that $\omega$ is compactly supported in $B_{1}(0)$ for the times considered, that $\omega$ has zero average and that $\nabla \cdot F=\nabla\cdot \omega=0$, since this already implies that $F$ has zero average and is supported in $B_{1}(0)$. With this in mind, we give our definition of solution to the forced incompressible 3D-Euler equation.

**Definition 1**. *We say that $\omega(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation with force $F(x,t)$ (or a vorticity solution for short) if for $t\in[a,b]$ $\omega(x,t)$ and $F(x,t)$ are supported in some fixed compact $K$, we have $\nabla \cdot F(x,t)=\nabla\cdot \omega(x,t)=0$, $\int \omega_{i}(x,t)dx=0$ for $i=1,2,3$, and $$\partial_{t}\omega+u\cdot\nabla \omega=\omega\cdot\nabla u+F(x,t).$$*

**Remark 4**. *The requirements for $\omega(x,t)$ to define a vorticity solution can be (significantly) relaxed, but since all the solutions we will consider in this paper fulfil the properties in definition [Definition 1](#defsol){reference-type="ref" reference="defsol"}, we will use it for simplicity.*

**Remark 5**. *If $\omega(x,t)$ is a vorticity solution with force $F(x,t)$, and $u(x,t)$ is the solution to the forced incompressible 3D-Euler equation in velocity formulation that we obtain from $\omega(x,t)$, with forcing $f(x,t)$, then $F(x,t)\in C^{\alpha}$ implies $f(x,t)\in C^{1,\alpha}$ for $\alpha\in(0,1).$*

To ensure the condition that $\nabla \cdot F=0$, we will consider only solutions of the form $$\omega(x,t)=\sum_{i=1}^{K}\omega^{i}(x,t)$$ with each $\omega^{i}(x,t)$ fulfilling $$\label{evolutionlayers} \partial_{t}\omega^{i}(x,t)+u^{i}(x,t)\cdot\nabla \omega^{i}(x,t)=\omega^{i}(x,t)\cdot \nabla u^{i}(x,t)$$ for some $u^{i}(x,t)$. If $\nabla\cdot\omega^{i}=\nabla\cdot u^{i}=0$ we have that $$\begin{aligned} &\nabla \cdot \partial_{t}\omega^{i}=-\nabla\cdot (u^{i}\cdot\nabla \omega^{i}-\omega^{i}\cdot \nabla u^{i})=-\sum_{k=1}^{3}\partial_{x_{k}}\sum_{j=1}^{3}(u_{j}^{i}\partial_{x_{j}} \omega_{k}^{i}-\omega_{j}^{i}\partial_{x_{j}}u_{k}^{i})\\ &=-\sum_{k=1}^{3}\sum_{j=1}^{3}(\partial_{x_{k}}u_{j}^{i}\partial_{x_{j}} \omega_{k}^{i}-\partial_{x_{k}}\omega_{j}^{i}\partial_{x_{j}}u^{i}_{k})-\sum_{k=1}^{3}\sum_{j=1}^{3}(u_{j}^{i}\partial_{x_{k}}\partial_{x_{j}} \omega_{k}^{i}-\omega_{j}^{i}\partial_{x_{k}}\partial_{x_{j}}u^{i}_{k})=0,\end{aligned}$$ where the first double sum vanishes by exchanging the roles of $j$ and $k$, and the second one vanishes because $\nabla\cdot u^{i}=\nabla\cdot\omega^{i}=0$. This means that we can write $$\partial_{t}\omega+u(\omega)\cdot\nabla \omega=\omega\cdot\nabla u(\omega)+F$$ with $F:=\partial_{t}\omega+u(\omega)\cdot\nabla \omega-\omega\cdot\nabla u(\omega)$. Since $\nabla \cdot (u(\omega)\cdot\nabla \omega-\omega\cdot\nabla u(\omega))=0$ by the same computation, and $\nabla \cdot \partial_{t}\omega=0$ by the above, we have that $\nabla \cdot F = 0$.
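To make the role of this decomposition more explicit, note that if each $\omega^{i}$ solves [\[evolutionlayers\]](#evolutionlayers){reference-type="eqref" reference="evolutionlayers"}, then the force can be written purely in terms of cross interactions, $$F=\partial_{t}\omega+u(\omega)\cdot\nabla \omega-\omega\cdot\nabla u(\omega)=\sum_{i=1}^{K}\Big[(u(\omega)-u^{i})\cdot\nabla \omega^{i}-\omega^{i}\cdot\nabla (u(\omega)-u^{i})\Big],$$ so $F$ is small precisely when each $u^{i}$ is a good approximation of the full velocity $u(\omega)$ on the support of $\omega^{i}$; this is, roughly speaking, what the bounds of the following sections quantify.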
# The simplified evolution equation As mentioned before, to construct our solutions we will divide our solution in different layers, each one with a different spatial scale, and each layer will fulfil an evolution equation like [\[evolutionlayers\]](#evolutionlayers){reference-type="eqref" reference="evolutionlayers"}. For this reason, this first section will be devoted to study this kind of equations, specifically when the velocity $u^{i}$ has some useful properties. **Definition 2**. *We say that a velocity field $u(x,t)=(u_{1}(x,t), u_{2}(x,t), u_{3}(x,t))$ is an odd velocity if $x_{i}u_{i}(x,t)$ is even with respect to $x_{1},x_{2}$ and $x_{3}$. Furthermore, we say that $\omega(x,t)=(\omega_{1}(x,t),\omega_{2}(x,t),\omega_{3}(x,t))$ is an odd vorticity if $x_{i}\omega_{i}(x,t)$ is odd with respect to $x_{1},x_{2}$ and $x_{3}$.* **Remark 6**. *With our definition for $u$ and $\omega$ odd, we have that, if $u(x,t)$ is an odd velocity field, then $\nabla\times u(x,t)$ defines an odd vorticity. Similarly, if $\omega(x,t)$ is an odd vorticity, [\[uwop\]](#uwop){reference-type="eqref" reference="uwop"} gives an odd velocity.* **Remark 7**. *If $u(x,t)$ is an odd velocity then if $$\frac{\partial^{n_{1}+n_{2}+n_{3}}}{\partial x_{1}^{n_{1}}\partial x_{2}^{n_{2}} \partial x_{3}^{n_{3}}}u_{1}(x=0,t)$$ with $n_{i}\in \mathds{N}$ is well defined, it is zero unless $n_{1}$ is odd and $n_{2}$ and $n_{3}$ are even. Similar properties hold for the other components of the velocity.* *Similarly, if $\omega(x,t)$ is an odd vorticity and if $$\frac{\partial^{n_{1}+n_{2}+n_{3}}}{\partial x_{1}^{n_{1}}\partial x_{2}^{n_{2}}\partial x_{3}^{n_{3}}}\omega_{1}(x=0,t)$$ is well defined, it is zero unless $n_{1}$ is even and $n_{2}$ and $n_{3}$ are odd. Similar properties hold for the other components of the vorticity.* **Definition 3**. *Given a velocity field $u(x,t)$ we define $\bar{u}(x,t)=(\bar{u}_{1}(x,t),\bar{u}_{2}(x,t),\bar{u}_{3}(x,t))$ as $$\bar{u}_{j}(x,t)=x_{j}\partial_{x_{j}}u_{j}(0,x_{2},0), \ \text{for } j=1,3$$ $$\bar{u}_{2}(x,t)=u_{2}(0,x_{2},0).$$* **Remark 8**. *If $\nabla \cdot u(x,t)=0$ then $\nabla \cdot \bar{u}(x,t)=0$.* **Lemma 2**. *Let $\frac{1}{100}>\epsilon>0$, $T_{2}\geq T_{1}$, $P>0$ and $u_{lin}(t)$ and $u_{cub}(x,t)$ (both $u_{lin}$ and $u_{cub}$ depending on a parameter $N$) fulfilling $u_{lin}(t)\in[\frac12 PN^{\epsilon},2PN^{\epsilon}]$, $||\partial_{x}\partial_{x}\partial_{x}u_{cub}(x,t)||_{L^{\infty}}\leq N^{4}$ and $$u_{cub}(0,t)=\partial_{x}u_{cub}(0,t)=\partial_{x}\partial_{x}u_{cub}(0,t)=0$$ for $t\in[T_{1},T_{2}]$. 
If we define $$\partial_{t} \phi(x,t_{0},t)=-u_{lin}(t)\phi(x,t_{0},t)+u_{cub}(\phi(x,t_{0},t),t)$$ $$\phi(x,t_{0},t_{0})=x$$ then for $N$ big enough (depending on $T_{2}-T_{1},P$ and $\epsilon$), we have that, if $x\in[-N^{-3},N^{-3}]$ and $T_{1}\leq t_{0}\leq t\leq T_{2}$, then $$|\phi(x,t_{0},t)|\leq x$$ and $$|x|e^{\int_{t_0}^{t}(u_{lin}(s)-x^2N^{4})ds}\leq |\phi(x,t_{0},t)|\leq |x|e^{\int_{t_0}^{t}(u_{lin}(s)+x^2N^{4})ds}$$ $$\label{pphi} e^{\int_{t_0}^{t}(u_{lin}(s)-x^2N^{4})ds}\leq |\partial_{x}\phi(x,t_{0},t)|\leq e^{\int_{t_0}^{t}(u_{lin}(s)+x^2N^{4})ds}$$ $$|\partial_{x}\partial_{x}\phi(x,t_{0},t)|\leq |t-t_{0}|N^{4}|x|.$$* *Proof.* First, we note that, for $x\in[-N^{-3},N^{-3}]$, $T_{1}\leq t_{0}\leq t\leq T_{2}$, $N$ big, we have that $$|xu_{lin}(t)|>|u_{cub}(x,t)|$$ which implies $$\partial_{t}|\phi(x,t_{0},t)|\leq 0$$ and thus $$|\phi(x,t_{0},t)|\leq x.$$ Furthermore, we then have $|u_{cub}(\phi(x,t_{0},t),t)|< N^{4}|\phi(x,t_{0},t)|^3\leq N^{4}|\phi(x,t_{0},t)| x^2$ so, using this bound and integrating in time we get $$|x|e^{\int_{t_0}^{t}(u_{lin}(s)-x^2N^{4})ds}\leq |\phi(x,t_{0},t)|\leq |x|e^{\int_{t_0}^{t}(u_{lin}(s)+x^2N^{4})ds}.$$ To obtain [\[pphi\]](#pphi){reference-type="eqref" reference="pphi"} we note that $$\partial_{t} \partial_{x}\phi(x,t_{0},t)=-u_{lin}(t)\partial_{x}\phi(x,t_{0},t)+u_{cub}'(\phi(x,t_{0},t),t)(\partial_{x}\phi(x,t_{0},t))$$ and using that, for $x\in[-N^{-3},N^{-3}]$, $t\in[0,T]$ for $N$ big, $$|u_{cub}'(\phi(x,t_{0},t),t)|< N^{4}\phi(x,t_{0},t)^2\leq N^{4}x^2$$ we can obtain, again after integrating in time $$e^{\int_{t_0}^{t}(u_{lin}(s)-x^2N^{4})ds}\leq |\partial_{x}\phi(x,t_{0},t)|\leq e^{\int_{t_0}^{t}(u_{lin}(s)+x^2N^{4})ds}.$$ Finally, differentiating the evolution equation again we obtain $$\partial_{t} \partial_{x}\partial_{x}\phi(x,t_{0},t)=-u_{lin}(t)\partial_{x}\partial_{x}\phi(x,t_{0},t)+u_{cub}'(\phi(x,t_{0},t),t)(\partial_{x}\partial_{x}\phi(x,t_{0},t))+u_{cub}''(\phi(x,t_{0},t),t)(\partial_{x}\phi(x,t_{0},t))^2$$ and using that in particular $|\partial_{x}\phi(x,t_{0},t)|\leq 1$ and $-u_{lin}(t)+u_{cub}'(\phi(x,t_{0},t),t)\leq 0$ and integrating in time we get $$|\partial_{x}\partial_{x}\phi(x,t_{0},t)|\leq |t-t_{0}|N^{4}|x|.$$ ◻ **Lemma 3**. 
*Given $\frac{1}{100}>\epsilon>0$ and $P>0$, if we have $N,M>0$ big enough (depending on $\epsilon$ and $P$) and an incompressible, odd velocity field $u^{N}(x,t)=(u^{N}_{1}(x,t),u^{N}_{2}(x,t),u^{N}_{3}(x,t))$ fulfilling $||u^{N}(x,t)||_{C^{3.5}}\leq N^4$, with $$\label{MN1} e^{\int_{1-N^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u_{1}(0,s)ds}=M^{\frac12},$$ $$\label{MN2} \partial_{x_{1}}u^{N}_{1}(0,t)\in[\frac{P}{2}N^{\epsilon},2PN^{\epsilon}]\text{ for }t\in [1-N^{-\frac{\epsilon}{2}},1]$$* *$$|\partial_{x_{3}}u^{N}_{3}(x,t)|\leq \ln(N)^3\ \text{for }t\in[1-N^{-\frac{\epsilon}{2}},1]$$ then, if for some $\omega_{end}=(\omega_{1,end},\omega_{2,end}=0,\omega_{3,end})$ with $\text{supp}(\omega_{i,end})\subset\{|x_{2}|\leq M^{-\frac{1}{2}-\epsilon}\}$ we define the evolution equation $$\label{ecuacionsimp} \partial_{t}\omega_{i}+\bar{u}^{N}\cdot\nabla \omega_{i}=\omega_{i}\partial_{x_{i}}\bar{u}^{N}_{i}$$ $$\omega_{i}(x,t=1)=\omega_{i,end}(x)$$ and $$\partial_{t}\phi(x,t_{0},t)=\bar{u}^{N}(\phi(x,t_{0},t),t)$$ $$\phi(x,t_{0},t_{0})=x$$ we have that $$\omega_{i}(x,t)=\omega_{i,end}(\phi(x,t,1))e^{\int_{1}^{t}(\partial_{x_{i}}\bar{u}^{N}_{i})(\phi(x,t,s),s)ds}.$$ Furthermore, if for $1-N^{-\frac{\epsilon}{2}}\leq t\leq 1$ we define $$K_{N}(t)=e^{\int_{t}^{1}\partial_{x_{1}}\bar{u}^{N}_{1}(0,s)ds}$$ then we have that $$\label{controlsoporte} \text{supp}(\omega(x,t))\subset\{|x_{2}|\leq 2 M^{-\frac{1}{2}-\epsilon}K(t)\},$$ and for $1-N^{-\frac{\epsilon}{2}}\leq t\leq 1$, $|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon}K(t)$, $$\label{phi1} \phi_{1}(x,t,1)=x_{1}K_{N}(t)g_{er,1}(x_{2},t)$$ $$\label{phi3} \phi_{3}(x,t,1)=x_{3}g_{er,3}(x_{2},t)$$ with $$\label{controlphii} |g_{er,1}(x_{2},t)|,|g_{er,3}(x_{2},t)|\in[\frac{1}{2},2],|\partial_{x_{2}}g_{er,1}(x_{2},t)|,|\partial_{x_{2}}g_{er,3}(x_{2},t)|\leq 8N^{4}M^{-\frac{1}{2}-\epsilon}K_{N}(t)$$ $$\label{controlphii2} |\partial_{x_{2}}\partial_{x_{2}}g_{er,3}(x_{2},t)|,|\partial_{x_{2}}\partial_{x_{2}}g_{er,1}(x_{2},t)|\leq 4N^{4}$$ and $$\label{controlphi21} \frac{1}{2K_{N}(t)}\leq|\partial_{x_{2}}\phi_{2}(x,t,1)|\leq \frac{2}{K_{N}(t)}$$ $$\label{controlphi22} |\partial_{x_{2}}\partial_{x_{2}}\phi_{2}(x,t,1)|\leq N^4 M^{-\frac{1}{2}-\epsilon}K_{N}(t).$$* *Finally, also for $1-N^{-\frac{\epsilon}{2}}\leq t\leq 1$, $x_{2}\in \{|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon}K(t)\}$, we have $$\label{amplitud1} e^{\int_{1}^{t}(\partial_{x_{1}}\bar{u}^{N}_{1})(\phi(x,t,s),s)ds}=\frac{a_{1}(x_{2},t)}{K_{N}(t)}$$ $$\label{amplitud3} e^{\int_{1}^{t}(\partial_{x_{3}}\bar{u}^{N}_{3})(\phi(x,t,s),s)ds}=a_{3}(x_{2},t)$$ with $$\label{controlamplitud} \frac{1}{2}\leq|a_{i}(x_{2},t)|\leq 2, |\partial_{x_{2}}a_{i}(x_{2},t)|\leq 1,|\partial_{x_{2}}\partial_{x_{2}}a_{i}(x_{2},t)|\leq 6N^4.$$* **Remark 9**. 
*For the initial consitions we consider, the family of evolution equations [\[ecuacionsimp\]](#ecuacionsimp){reference-type="eqref" reference="ecuacionsimp"} is actually equivalent to $$\partial_{t}\omega+\bar{u}^{N}\cdot\nabla \omega=\omega\cdot\nabla\bar{u}^{N}.$$* *Proof.* First, we note that, if we define $$\tilde{\omega}_{i}(x,t)=\omega(\phi(x,1,t),t)$$ we get $$\partial_{t}\tilde{\omega}_{i}(x,t)=\tilde{\omega}_{i}(x,t)(\partial_{x_{i}}\bar{u}^{N}_{i})(\phi(x,1,t),t)$$ $$\tilde{\omega}_{i}(x,t=1)=\omega_{i,end}(x)$$ so we can solve this problem to get $$\tilde{\omega}_{i}(x,t)=\omega_{i,end}(x)e^{\int_{1}^{t}(\partial_{x_{i}}\bar{u}^{N}_{i})(\phi(x,1,s),s)ds}.$$ Then, undoing the change of variables and using that $$\phi(\phi(x,t_{1},t_{2}),t_{2},t_{3})=\phi(x,t_{1},t_{3})$$ we get $$\omega_{i}(x,t)=\omega_{i,end}(\phi(x,t,1))e^{\int_{1}^{t}(\partial_{x_{i}}\bar{u}^{N}_{i})(\phi(x,t,s),s)ds}.$$ Note that this means that $$\label{suppphi} \text{supp}(\omega_{i}(x,t))\subset\{x:x=\phi(y,1,t), y\in\text{supp}(\omega_{i,end})\}.$$ Next, we have that, for $t\in[1-N^{-\frac{\epsilon}{2}},1]$, $$\begin{aligned} \partial_{t}\phi_{2}(x,1,t)&=\bar{u}^{N}_{2}(\phi_{2}(x_{2},1,t),t)=\phi_{2}(x_{2},1,t)\partial_{x_{2}}\bar{u}^{N}_{2}(0,t)+u^{N}_{2,cub}(\phi(x_{2},1,t))\\ &=-\phi_{2}(x_{2},1,t)(\partial_{x_{1}}\bar{u}^{N}_{1}(0,t)+\partial_{x_{3}}\bar{u}^{N}_{3}(0,t))+u^{N}_{2,cub}(\phi(x_{2},1,t))\end{aligned}$$ with $u^{N}_{2,cub}(x_{2})=\bar{u}^{N}_{2}(x_{2})-x_{2}(\partial_{x_{2}}\bar{u}^{N}_{2})(0,t)$, and using that $\bar{u}^{N}$ is odd (see remark [Remark 7](#odd){reference-type="ref" reference="odd"}) and the bounds for $u^{N}_{3}$ we get $$|\partial_{t}\phi_{2}(x,1,t)|\leq [|(-\partial_{x_{1}}\bar{u}^{N}_{1})(0,t)|+\ln(N)^3|]\phi_{2}(x,1,t)|+N^4|\phi_{2}(x,1,t)|^3$$ so, if $|\phi_{2}(x,1,t)|\leq N^{-3}$ for $t\in[t_{0},1]$, $t_{0}\geq 1-N^{-\frac{\epsilon}{2}}$ we get $$\label{controlsupport} |\phi_{2}(x,1,t_{0})|\leq |x_{2}|e^{\int_{1}^{t_{0}}((-\partial_{x_{1}}\bar{u}^{N}_{1})(0,t)+\ln(N)^3+1)dt}\leq |x_{2}|K_{N}(t)e^{N^{-\frac{\epsilon}{2}}(\ln(N)^3+1)}\leq 2K_{N}(t_{0})|x_{2}|.$$ In particular, we get that, for any $t_{0} \in[1-N^{-\frac{\epsilon}{2}},1]$ $$\phi(x_{2},1,t)\in [-N^{-3},N^{-3}]\text{ for } t\in[t_{0},1] \Rightarrow\ \phi_{2}(x,1,t_{0})\in [-2|x_{2}|K_{N}(t),2|x_{2}|K_{N}(t)]\text{ for } t\in[t_{0},1]$$ and since for $N$ big, $|x_{2}|\leq 4M^{-\epsilon-\frac{1}{2}}$ we have $$[-2|x_{2}|K_{N}(t),2|x_{2}|K_{N}(t)]\subset[-4M^{-\epsilon},4M^{-\epsilon}]\subset [-N^{-3},N^{-3}]$$ the continuity of $\phi(x_{2},1,t)$ with respect to $t$ gives that, in fact, for $t_{0}\in[1-N^{-\frac{\epsilon}{2}},1]$ and $|x_{2}|\leq 4M^{-\epsilon-\frac{1}{2}}$ [\[controlsupport\]](#controlsupport){reference-type="eqref" reference="controlsupport"} holds, which, combined with [\[suppphi\]](#suppphi){reference-type="eqref" reference="suppphi"}, gives us that $$\text{supp}(\omega(x,t))\subset\{|x_{2}|\leq 2 M^{-\frac{1}{2}-\epsilon}K_{N}(t)\}.$$ Now, for $1-N^{-\frac{\epsilon}{2}}\leq t_{0}\leq t\leq 1$, $|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon}K_{N}(t)$ we can apply lemma [Lemma 2](#phix2){reference-type="ref" reference="phix2"} to obtain $$e^{\int_{t_0}^{t}(\partial_{x_{2}}(\bar{u}^{N}_{2})(0,t)-x_{2}^2N^{4})ds}\leq |\partial_{x_{2}}\phi_{2}(x,t_{0},t)|\leq e^{\int_{t_0}^{t}(\partial_{x_{2}}(\bar{u}^{N}_{2})(0,t)+x_{2}^2N^{4})ds}\leq 1$$ $$|\partial_{x_{2}}\partial_{x_{2}}\phi_{2}(x_{2},t_{0},t)|\leq |t-t_{0}|N^{4}|x_{2}|,$$ which in particular imply 
[\[controlphi21\]](#controlphi21){reference-type="eqref" reference="controlphi21"} and [\[controlphi22\]](#controlphi22){reference-type="eqref" reference="controlphi22"}. Now, for the bounds of $\phi_{1}$ and $\phi_{3}$ we note that, for $i=1,3$ $$\partial_{t}\phi_{i}(x,t,1)=\bar{u}^{N}_{i}(\phi(x,t,1))=\phi_{i}(x,t,1)(\partial_{x_{i}}\bar{u}^{N}_{i})((0,\phi_{2}(x_{2},t,1),0),t)$$ $$\phi_{i}(x,1,1)=x_{i},$$ so, if we define $$p_{N,i}(x_{2},t)=(\partial_{x_{i}}\bar{u}^{N}_{i})((0,\phi_{2}(x_{2},t,1),0),t)-(\partial_{x_{i}}\bar{u}^{N}_{i})(0,t)$$ we have that for $i=1,3$ $$\phi_{i}(x,t,1)=x_{i}e^{\int_{t}^{1}((\partial_{x_{i}}\bar{u}^{N}_{i})(0,s)+p_{N,i}(\phi(x_{2},t,s),s))ds}$$ so $$\phi_{1}(x,t,1)=x_{1}K_{N}(t)e^{\int_{t}^{1}p_{N,1}(\phi(x_{2},t,s),s)ds}$$ $$\phi_{3}(x,t,1)=x_{3}e^{\int_{t}^{1}((\partial_{x_{3}}\bar{u}^{N}_{3})(0,s)+p_{N,3}(\phi(x_{2},t,s),s))ds}$$ which gives [\[phi1\]](#phi1){reference-type="eqref" reference="phi1"} and [\[phi3\]](#phi3){reference-type="eqref" reference="phi3"}. Next, using that, for $1-N^{-\frac{\epsilon}{2}}\leq t_{0}\leq t\leq 1$, $|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon}K_{N}(t)$ $$|\partial_{x_{2}}p_{N,i}(\phi(x_{2},t_{0},t),t)|\leq N^{4}|\phi(x_{2},t_{0},t)||\partial_{x_{2}}\phi(x_{2},t_{0},t)|\leq N^{4}|x_{2}|$$ we get $$|\partial_{x_{2}}e^{\int_{t}^{1}p_{N,i}(\phi(x_{2},t,s),s)ds}|\leq e^{\int_{t}^{1}p_{N,i}(\phi(x_{2},t,s),s)ds}N^{4}|x_{2}|\leq e^{\int_{t}^{1}p_{N,i}(\phi(x_{2},t,s),s)ds}N^{4}4M^{-\frac{1}{2}-\epsilon}K_{N}(t),$$ and since, $$|p_{N,i}(\phi(x_{2},t_{0},t),t)|\leq N^{4}|\phi(x_{2},t_{0},t)|^2\leq N^{4}|x_{2}|^2, |\partial_{x_{3}}u^{N}_{3}(0,t)|\leq \ln(N)^3$$ we get [\[controlphii\]](#controlphii){reference-type="eqref" reference="controlphii"}. Similarly, using $$|\partial_{x_{2}}\partial_{x_{2}}p_{N,i}(\phi(x_{2},t_{0},t),t)|\leq 2N^{4}$$ gives [\[controlphii2\]](#controlphii2){reference-type="eqref" reference="controlphii2"}. For [\[amplitud1\]](#amplitud1){reference-type="eqref" reference="amplitud1"}, [\[amplitud3\]](#amplitud3){reference-type="eqref" reference="amplitud3"} and [\[controlamplitud\]](#controlamplitud){reference-type="eqref" reference="controlamplitud"} we use that $$a_{1}(x_{2},t)=e^{\int_{1}^{t}((\partial_{x_{1}}\bar{u}^{N}_{1})(\phi_{2}(x_{2},t,s),s)-(\partial_{x_{1}}\bar{u}^{N}_{1})(0,s))ds}$$ $$a_{3}(x_{2},t)=e^{\int_{1}^{t}(\partial_{x_{3}}\bar{u}^{N}_{3})(\phi_{2}(x_{2},t,s),s)}ds$$ and using the bounds for $(\partial_{x_{3}}\bar{u}^{N}_{3})(x_{2},t)$ and for $\bar{u}^{N}_{1}(x_{2},t)$ to get, for $|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon}K_{N}(t)$, $t\in[1-N^{-\frac{\epsilon}{2}},1]$, that $$\frac12\leq e^{-N^4(1-t)x^2_{2}}\leq |a_{1}(x_{2},t)|\leq e^{N^4(1-t)x^2_{2}}\leq 2$$ $$\frac{1}{2}\leq e^{-(1-t)\ln(N)}\leq |a_{3}(x_{2},t)|\leq e^{(1-t)\ln(N)}\leq 2$$ and similarly, using the bounds for $\bar{u}^{N}_{i},\phi_{2}$ and their derivatives, we get $$|\partial_{x_{2}}a_{1}(x_{2},t)|,|\partial_{x_{2}}a_{3}(x_{2},t)|\leq 4N^4|x_2|\leq 1$$ and for the second derivatives $$|\partial_{x_{2}}\partial_{x_{2}}a_{1}(x_{2},t)|,|\partial_{x_{2}}\partial_{x_{2}}a_{3}(x_{2},t)|\leq 6N^4$$ which finishes the proof. ◻ # Bounds for the small scale layer In order to show that the gluing of the small scale layer and the big scale layer only requires a reasonable force (more specifically, almost $C^{1,\frac{1}{2}}$) to produce the desired behaviour, we need to obtain several useful bounds regarding the properties of the small scale layer. 
Since all the lemmas obtained in this section will require the same set of hypothesis, and to make the statements more compact, we will start this section specifying the assumptions that will hold for the rest of the section, as well as the notation that we will use. We start by fixing $\frac{1}{100}>\epsilon>0$, $P>0$ and $f(z)$ a $C^{\infty}$ even function with $1\geq f(z)\geq 0$, $f(z)=1$ if $|z|\leq\frac{1}{2}$, $f(z)=0$ if $|z|\geq 1$. Note that this function $f(z)$ has nothing to do with the force appearing in [\[Euler\]](#Euler){reference-type="ref" reference="Euler"}. The constants in the lemmas will depend on the specific choice of $\epsilon,P$ and $f(z)$, but the result will hold independently of the choice. Given $N>1$ and an incompressible, odd velocity field $u^{N}(x,t)=(u^{N}_{1}(x,t),u^{N}_{2}(x,t),u^{N}_{3}(x,t))$ fulfilling $||u^{N}(x,t)||_{C^{3.5}}\leq N^4$, with $$\label{MN21} e^{\int_{1-N^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u^{N}_{1}(0,s)ds}=M^{\frac12},$$ $$\label{MN22} \partial_{x_{1}}u^{N}_{1}(0,t)\in[\frac{P}{2}N^{\epsilon},2PN^{\epsilon}]\text{ for }t\in [1-N^{-\frac{\epsilon}{2}},1]$$ we define $\omega(x,t)=(\omega_{1}(x,t),\omega_{2}(x,t),\omega_{3}(x,t))$ by $$\label{evolucionsmall} \partial_{t}\omega+\bar{u}^{N}\cdot\nabla \omega=\omega\cdot\nabla\bar{u}^{N},$$ with $\bar{u}^{N}$ as in definition [Definition 3](#ubarra){reference-type="ref" reference="ubarra"}, and $$\omega_{1}(x,t=1)=-M^{\epsilon}f(M^{\frac{1}{2}-\epsilon}x_{1})\sin(Mx_{2})f(M^{\frac{1}{2}+\epsilon}x_{2})\sin(Mx_{3})f(M^{\frac{1}{2}}x_{3})$$ $$\omega_{2}(x,t=1)=0,$$ $$\omega_{3}(x,t=1)=-\int_{-\infty}^{x_{3}}\partial_{x_{1}}\omega_{1}(x_{1},x_{2},s,t=1)ds.$$ We note that, for $t\in[1-N^{-\frac{\epsilon}{2}},1]$, $\omega_{2}(x,t)=0$ and $\partial_{x_{1}}\omega_{1}(x,t)=-\partial_{x_{3}}\omega_{3}(x,t)$. Furthermore, if $N$ is big enough, we can apply lemma [Lemma 3](#evolucionsimp){reference-type="ref" reference="evolucionsimp"}, remembering that in lemma [Lemma 3](#evolucionsimp){reference-type="ref" reference="evolucionsimp"} we define $K_{N}(t)=e^{\int_{t}^{1}\partial_{x_{1}}\bar{u}^{N}_{1}(0,s)ds}$ and defining $b(t)=\frac{\ln(K_{N}(t))}{\ln(M)}$, we can write $\omega_{1}(x,t)=\omega_{1,b}(x)$ where $$\begin{aligned} \label{w1b} &\omega_{1,b}(x):=-M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\sin(Mg_{2,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2}))\\ \nonumber &\times \sin(Mg_{er,3,b}(x_{2})x_{3})f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3}) \end{aligned}$$ with $b\in[0,\frac{1}{2}]$, $\text{supp}(\omega_{1,b})(x)\subset \{x:|x_{2}|\leq 2 M^{-\frac{1}{2}-\epsilon+b}\}$ and such that, for $|x_{2}|\leq 4 M^{-\frac{1}{2}-\epsilon+b}$ $$\frac{1}{2}\leq|a_{b}(x_{2})|\leq 2, |\partial_{x_{2}}a_{b}(x_{2})|\leq 1,|\partial_{x_{2}}\partial_{x_{2}}a_{b}(x_{2})|\leq M^{\frac{1}{2}}$$ $$|g_{er,1,b}(x_{2})|,|g_{er,3,b}(x_{2})|\in[\frac{1}{2},2],|\partial_{x_{2}}g_{er,1,b}(x_{2})|,|\partial_{x_{2}}g_{er,3,b}(x_{2})|\leq M^{-\frac{1}{2}+b},$$ $$|\partial_{x_{2}}\partial_{x_{2}}g_{er,1,b}(x_{2},t)|,|\partial_{x_{2}}\partial_{x_{2}}g_{er,3,b}(x_{2},t)|\leq M^{\frac{1}{2}}$$ and $$\frac{1}{M^{b}}\leq|\partial_{x_{2}}g_{2,b}(x_{2})|\leq \frac{2}{M^{b}},|\partial_{x_{2}}\partial_{x_{2}}g_{2,b}(x_{2})|\leq M^{-\frac{1}{2}+b},$$ for $N$ big, where we used that, for any fixed $\delta>0$, if $N$ is big enough then $M^{\delta}>N$ (using [\[MN21\]](#MN21){reference-type="eqref" reference="MN21"} and [\[MN22\]](#MN22){reference-type="eqref" reference="MN22"}). 
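Explicitly, [\[MN21\]](#MN21){reference-type="eqref" reference="MN21"} and [\[MN22\]](#MN22){reference-type="eqref" reference="MN22"} give $$\frac{1}{2}\ln(M)=\int_{1-N^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u^{N}_{1}(0,s)ds\in[\tfrac{P}{2}N^{\frac{\epsilon}{2}},2PN^{\frac{\epsilon}{2}}],$$ so $M\geq e^{PN^{\frac{\epsilon}{2}}}$, which is indeed bigger than any fixed power of $N$ once $N$ is big enough.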
Note that we can use the incompressibility of $\omega$ to obtain a similar expression $\omega_{3}(x,t)=\omega_{3,b}(x)$, and we will also write $\omega_{b}=(\omega_{1,b},0,\omega_{3,b})$. We can now use these expressions to obtain information about the properties of $\omega(x,t)$. Since the equation [\[evolucionsmall\]](#evolucionsmall){reference-type="eqref" reference="evolucionsmall"} should be obtained by applying an almost $C^{1,\frac{1}{2}}$ force to the 3D-Euler equations, we need to show that the \"missing terms\" (i.e., the terms that appear in the 3D-Euler equation but not in [\[evolucionsmall\]](#evolucionsmall){reference-type="eqref" reference="evolucionsmall"}) are almost $C^{1,\frac{1}{2}}$ (in the velocity formulation, and therefore almost $C^{\frac{1}{2}}$ in vorticity formulation). In particular, we want the quadratic terms $\omega\cdot\nabla u(\omega),u(\omega)\cdot\nabla\omega$ to be almost $C^{\frac{1}{2}}$. For this, we start by obtaining bounds for $u(\omega_{1})$. Note also that, since $M$ and $N$ are related by [\[MN21\]](#MN21){reference-type="eqref" reference="MN21"} and [\[MN22\]](#MN22){reference-type="eqref" reference="MN22"}, it is equivalent to say \"for $N$ big enough\" or \"for $M$ big enough\". **Lemma 4**. *There exists $C_{0}>0$ such that for $M>0$ big enough if we define $$\begin{aligned} &\tilde{u}_{2}(\omega_{1,b})(x)=C_{0}\frac{Mg_{3,er,b}(x_{2})}{(Mg_{3,er,b}(x_{2}))^2+(M\partial_{x_{2}}g_{2,b}(x_{2}))^2}M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\\ &\times \sin(Mg_{2,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2})) \cos(Mg_{er,3,b}(x_{2})x_{3})f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})\\\end{aligned}$$ $$\begin{aligned} &\tilde{u}_{3}(\omega_{1,b})(x)=-C_{0}\frac{M\partial_{x_{2}}g_{2,b}(x_{2})}{(Mg_{3,er,b}(x_{2}))^2+(M\partial_{x_{2}}g_{2,b}(x_{2}))^2}M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\\ &\times \cos(Mg_{2,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2})) \sin(Mg_{er,3,b}(x_{2})x_{3})f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})\\\end{aligned}$$ we have that, for any $\delta>0$,$|x_{2}|< 4M^{-\frac{1}{2}-\epsilon+b}$ and $j=0,1$* *$$||\tilde{u}_{2}(\omega_{1,b})-u_{2}((\omega_{1,b},0,0))||_{C^{j}}\leq C_{\delta} M^{2\epsilon+j-\frac{3}{2}+\delta}$$ $$\label{linftu3} ||\tilde{u}_{3}(\omega_{1,b})-u_{3}((\omega_{1,b},0,0))||_{C^{j}}\leq C_{\delta} M^{2\epsilon+j-\frac{3}{2}+\delta}$$ and, for $i=1,3$, $j=0,1$, $|x_{2}|< 4M^{-\frac{1}{2}-\epsilon+b}$, $$||\partial_{x_{i}}\tilde{u}_{2}(\omega_{1,b})-\partial_{x_{i}}u_{2}((\omega_{1,b},0,0))||_{C^{j}}\leq C_{\delta} M^{2\epsilon+j-\frac{1}{2}+\delta}$$ $$||\partial_{x_{i}}\tilde{u}_{3}(\omega_{1,b})-\partial_{x_{i}}u_{3}((\omega_{1,b},0,0))||_{C^j}\leq C_{\delta} M^{2\epsilon+j-\frac{1}{2}+\delta}.$$* *Proof.* We will only show the inequalities for $\tilde{u}_{3}$, since the ones for $\tilde{u}_{2}$ are completely analogous. We start by studying the function $$u_{3}((\omega_{1,b},0,0))=-\int_{\mathds{R}^3} \frac{h_{2}}{|h|^3}\omega_{1,b}(x+h)dh_{1}dh_{2}dh_{3}.$$ In order to show [\[linftu3\]](#linftu3){reference-type="eqref" reference="linftu3"} for $j=0$, we will start by showing that we can make several useful approximations to $u_{3}(\omega_{1,b})$ that create an error smaller than $C_{\delta} M^{2\epsilon+j-\frac{3}{2}+\delta}$. 
First, we note that we can use integration by parts and the bounds for $g_{er,3,b}(x_{2})$ to show that $$\begin{aligned} \label{h3small} &|\int_{\mathds{R}}\frac{f(M^{\frac{1}{2}}g_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3}))\sin(Mg_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3})}{|h|^3}dh_{3}|\\\nonumber &=|\int_{\mathds{R}}\frac{\partial^k}{\partial h_{3}^k}\big(\frac{f(M^{\frac{1}{2}}g_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3}))}{|h|^3}\big)\frac{\sin(Mg_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3})+k\frac{\pi}{2})}{(Mg_{er,3,b}(x_{2}+h_{2}))^k}dh_{3}|\\ &\leq \int_{\mathds{R}}C_{k}\sum_{i=0}^{k}\big(\frac{M^{\frac{i}{2}}}{|h|^{3+k-i}}\big)\frac{2^k}{M^k}dh_{3}\leq C_{k}\sum_{i=0}^{k}\big(\frac{M^{\frac{i}{2}}}{|h_{1}^{2}+h_{2}^{2}|^{\frac{2+k-i}{2}}}\big)\frac{2^k}{M^k}\nonumber\end{aligned}$$ and using this plus that, for $M$ big $$\text{supp}(\omega_{1,b}(x))\subset\{x:|x|\leq 1\}$$ to show that, for any $\frac{1}{4}\geq \delta\geq 0$, $k\in\mathds{N}$, $M$ big $$\begin{aligned} &|\int_{|h_{1}^2+h_{2}^2|\geq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\omega_{1,b}(x+h)dh_{1}dh_{2}dh_{3}|\\ &\leq\int_{|h_{1}^2+h_{2}^2|\geq M^{-1+\delta}} 1_{(|x_{1}^2+x_{2}^2|\leq 2)}M^{\epsilon-b}C_{k}h_{2}\sum_{i=0}^{k}\big(\frac{M^{\frac{i}{2}}}{|h_{1}^{2}+h_{2}^{2}|^{\frac{2+k-i}{2}}}\big)\frac{2^k}{M^k}dh_{1}dh_{2}\\ &\leq C_{\delta,k} M^{-k\delta-1+\delta+\epsilon-b}\leq CM^{-\frac{3}{2}}.\end{aligned}$$ With that in mind, we can focus now on the integral only when $|h_{1}^2+h_{2}^2|\leq M^{-1+\delta}$. Next, using the properties of $g_{er,1,b}(x_{2})$ we check that $$\begin{aligned} &|\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}(\omega_{1,b}(x+h)-\omega_{1}(x_{1},x_{2}+h_{2},x_{3}+h_{3})dh_{1}dh_{2}dh_{3}|\\ &\leq C||f(z)||_{C^1}M^{\epsilon-b} M^{\frac{1}{2}+b-\epsilon}\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{1}h_{2}}{|h|^3}dh_{1}dh_{2}dh_{3}\\ &\leq CM^{\frac{1}{2}}\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta}}dh_{1}dh_{2}\leq CM^{-\frac{3}{2}+2\delta}.\\\end{aligned}$$ Similarly, using the properties of $a_{b}(x_{2}),g_{er,1,b}(x_{2}),g_{2,b}(x_{2}),g_{er,3,b}(x_{2})$ and $f(z)$ we get $$\begin{aligned} &|\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\Big(\omega_{1,b}(x_{1},x_{2}+h_{2},x_{3}+h_{3})\\ &-M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\sin(M(g_{2,b}(x_{2})+h_{2}\partial_{x_{2}}g_{2,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2}))\\ &\times \sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})(x_{3}+h_{3})\Big)dh_{1}dh_{2}dh_{3}|\\ &\leq \int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{CM^{\epsilon-b}h_{2}}{|h|^3}h_{2}\Big( 1+h_{2}M^{\frac{1}{2}+b}+M^{\frac{1}{2}+\epsilon-b}+M^{b}\Big)dh_{1}dh_{2}dh_{3}\\ &\leq C M^{\epsilon-b}\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta}}\frac{h_{2}^2}{h_{1}^2+h_{2}^2}\Big(h_{2}M^{\frac{1}{2}+b}+M^{\frac{1}{2}+\epsilon-b}+M^{b}\Big)dh_{1}dh_{2}\leq C M^{2\epsilon-\frac{3}{2}+2\delta}.\end{aligned}$$ Finally, using that $$\begin{aligned} &|\int_{\mathds{R}}\frac{1}{|h|^3} \sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))[f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})(x_{3}+h_{3}))-f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})]dh_{3}|\\ &\leq C\int_{-\infty}^{\infty}\frac{M^{\frac12}|h_{3}|}{|h|^3}dh_{3}\leq C\frac{M^{\frac12}}{(h_{1}^2+h_{2}^2)^\frac{1}{2}}\end{aligned}$$ and combining this with all the other properties, we get that $$\begin{aligned} &|u_{3}((\omega_{1,b},0,0))-\int_{|h_{1}^2+h_{2}^2|\leq 
M^{-1+\delta},h_{3}\in\mathds{R}}M^{\epsilon-b}\frac{h_{2}}{|h|^3}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2}))\\ &\times \sin(M(g_{2,b}(x_{2})+h_{2}\partial_{x_{2}}g_{2,b}(x_{2}))\sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})dh_{1}dh_{2}dh_{3}|\\ &\leq C_{\delta}M^{2\epsilon-\frac{3}{2}+2\delta}\\ &+\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}M^{\epsilon-b}|a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\sin(M(g_{2,b}(x_{2})+h_{2}\partial_{x_{2}}g_{2,b}(x_{2}))\\ &\times f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2}))\sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))[f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})(x_{3}+h_{3})-f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})]dh_{1}dh_{2}dh_{3}|\\ &\leq C_{\delta}M^{2\epsilon-\frac{3}{2}+2\delta}+C\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta}}M^{\epsilon-b}\frac{h_{2}M^{\frac12}}{(h_{1}^2+h_{2}^2)^\frac{1}{2}}dh_{1}dh_{2}\leq C_{\delta}M^{2\epsilon-\frac{3}{2}+2\delta}.\\\end{aligned}$$ This means that, to obtain the expression for $\tilde{u}_3$, it is enough to study $$\begin{aligned} &\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\sin(M(g_{2,b}(x_{2})+h_{2}\partial_{x_{2}}g_{2,b}(x_{2}))\\ &\times f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2}))\sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})dh_{1}dh_{2}dh_{3}\end{aligned}$$ and, in particular, since most integrands do not depend on $h_{1}$, $h_{2}$ or $h_{3}$ it is enough to study $$\begin{aligned} &\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\sin(M(g_{2,b}(x_{2})+h_{2}\partial_{x_{2}}g_{2,b}(x_{2})) \sin(Mg_{er,3,b}(x_{2})(x_{3}+h_{3}))dh_{1}dh_{2}dh_{3}\\ &=\sin(Mg_{er,3,b}(x_{2})x_{3})\cos(M(g_{2,b}(x_{2}))\int_{|h_{1}^2+h_{2}^2|\leq M^{-1+\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\sin(Mh_{2}\partial_{x_{2}}(g_{2,b})(x_{2})) \\ &\times\cos(Mg_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}\\ &=\frac{\sin(Mg_{er,3,b}(x_{2})x_{3})\cos(M(g_{2,b}(x_{2}))}{M}\int_{|h_{1}^2+h_{2}^2|\leq M^{\delta},h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\sin(h_{2}(\partial_{x_{2}}g_{2,b})(x_{2})) \\ &\times\cos(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3},\end{aligned}$$ so we will study $$H_{\lambda}:=\int_{|h_{1}^2+h_{2}^2|\leq \lambda,h_{3}\in\mathds{R}}\frac{h_{2}}{|h|^3}\sin(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2}))\cos(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}$$ Note that if we show that $$\text{lim}_{\lambda\rightarrow{\infty}} H_{\lambda}=C_{0}\frac{\partial_{x_{2}}(g_{2,b})(x_{2})}{(g_{3,er,b}(x_{2}))^2+(\partial_{x_{2}}g_{2,b}(x_{2}))^2}$$ and $$\label{HM1} |H_{\lambda_{1}}-H_{\lambda_{2}}|\leq C [\text{min}(\lambda_{1},\lambda_{2})]^{-\frac{1}{2\delta}},$$ then we have showed the desired bound for [\[linftu3\]](#linftu3){reference-type="eqref" reference="linftu3"} with $j=0$. To show [\[HM1\]](#HM1){reference-type="eqref" reference="HM1"}, we note that by using integration by parts with respect to $h_{3}$ $k$ times to get $$|H_{\lambda_{2}}-H_{\lambda_{1}}|\leq \int_{\lambda_{1}\leq |h_{1}^2+h_{2}^2|\leq \lambda_{2},h_{3}\in\mathds{R}}\frac{C_{k}}{|h|^{2+k}}dh_{1}dh_{2}dh_{3}\leq C_{k}[\text{min}(\lambda_{1},\lambda_{2})]^{-k+1}$$ so taking $k$ big gives us [\[HM1\]](#HM1){reference-type="eqref" reference="HM1"}. Note that in particular this shows that the limit of $H_{\lambda}$ when $\lambda$ tends to infinity exists. 
Then we can use integration by parts with respect to $h_{3}$ to show that $$H_{\lambda}=\frac{1}{g_{er,3,b}(x_{2})}\int_{|h_{1}^2+h_{2}^2|\leq \lambda,h_{3}\in\mathds{R}}\frac{3h_{2}h_{3}}{|h|^5}\sin(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2}))\sin(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}.$$ Now, if we define $$\tilde{H}_{\lambda}=\frac{1}{g_{er,3,b}(x_{2})}\int_{|h_{1}^2|\leq \lambda,\ h_{2},h_{3}\in\mathds{R}}\frac{3h_{2}h_{3}}{|h|^5}\sin(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2}))\sin(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3},$$ which is well defined since $\frac{h_{2}h_{3}}{|h|^5}$ has enough decay at infinity, using integration by parts with respect to $h_{3}$ once we have that $$\begin{aligned} &|H_{\lambda}-\tilde{H}_{\lambda}|\leq |\frac{1}{g_{er,3,b}(x_{2})}\int_{|h_{1}^2|\leq \lambda,|h_{1}^2+h_{2}^2|\geq \lambda\,h_{3}\in\mathds{R}}\frac{3h_{2}h_{3}}{|h|^5}\sin(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2}))\sin(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}|\\ &\leq \frac{C}{|g_{er,3,b}(x_{2})|}\int_{|h_{1}^2+h_{2}^2|\geq \lambda}\frac{1}{|h_{1}^2+h_{2}^2|^\frac{3}{2}}dh_{1}dh_{2}\leq C \lambda\end{aligned}$$ so that in particular $$\text{lim}_{\lambda\rightarrow \infty}H_{\lambda}=\text{lim}_{\lambda\rightarrow \infty}\tilde{H}_{\lambda}.$$ But, if we define $R:=(g_{er,3,b}(x_{2})^2+\partial_{x_{2}}(g_{2,b})(x_{2})^2)^{\frac{1}{2}}$, and using the change of variables $Rz_{1}=h_{2}\partial_{x_{2}}(g_{2,b})(x_{2})+g_{er,3,b}(x_{2})h_{3}),Rz_{2}=-h_{3}\partial_{x_{2}}(g_{2,b})(x_{2})+g_{er,3,b}(x_{2})h_{2}$, and using that the integrands with the wrong parity (in $h_{2},h_{3},z_{1}$ or $z_{2}$) cancel out when we integrate, we have $$\begin{aligned} &\int_{|h_{1}^2|\leq \lambda,\ h_{2},h_{3}\in\mathds{R}}\frac{3h_{2}h_{3}}{|h|^5}\sin(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2}))\sin(g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}\\ &=-\text{PV}\int_{|h_{1}^2|\leq \lambda,\ h_{2},h_{3}\in\mathds{R}}\frac{3h_{2}h_{3}}{|h|^5}\cos(h_{2}\partial_{x_{2}}(g_{2,b})(x_{2})+g_{er,3,b}(x_{2})h_{3}))dh_{1}dh_{2}dh_{3}\\ &=-\frac{g_{er,3,b}(x_{2})\partial_{x_{2}}(g_{2,b})(x_{2})}{R^2}\text{PV}\int_{|h_{1}^2|\leq \lambda,\ z_{1},z_{2}\in\mathds{R}}\frac{3(z_{1}^2-z_{2}^2)}{|h_{1}^2+z_{1}^2+z_{2}^2|^5}\cos(Rz_{1})dh_{1}dz_{1}dz_{2}\\ %&=-\frac{g_{er,3,b}(x_{2})\partial_{x_{2}}(g_{2,b})(x_{2})}{R^2}\text{PV}\int_{|h_{1}^2|\leq \lambda,\ z_{1},z_{2}\in\mathds{R}}\frac{3(z_{1}^2-z_{2}^2)}{|h_{1}^2+z_{1}^2+z_{2}^2|^5}\cos(Rz_{1})dh_{1}dh_{2}dh_{3}\\ &=-\frac{g_{er,3,b}(x_{2})\partial_{x_{2}}(g_{2,b})(x_{2})}{R^2}\text{PV}\int_{|\tilde{h}_{1}^2|\leq R^2\lambda,\tilde{h}_{2},\tilde{h}_{3}\in\mathds{R}}\frac{3(\tilde{h}_{2}^2-\tilde{h}_{3}^2)}{|\tilde{h}_{1}^2+\tilde{h}_{2}^2+\tilde{h}_{3}^2|^5}\cos(\tilde{h}_{2})d\tilde{h}_{1}d\tilde{h}_{2}d\tilde{h}_{3}\end{aligned}$$ and, after relabelling, if we denote $$C_{0}:=-\text{PV}\int_{|h_{1}^2|\leq R^2\lambda,\ z_{1},z_{2}\in\mathds{R}}\frac{3(h_{2}^2-h_{3}^2)}{|h_{1}^2+h_{2}^2+h_{3}^2|^5}\cos(h_{2})dh_{1}dh_{2}dh_{3}$$ we have that $$\text{lim}_{\lambda\rightarrow \infty}H_{\lambda}= C_{0}\frac{\partial_{x_{2}}(g_{2,b})(x_{2})}{(g_{3,er,b}(x_{2}))^2+(\partial_{x_{2}}g_{2,b}(x_{2}))^2}$$ as we wanted. The only remaining thing to show [\[linftu3\]](#linftu3){reference-type="eqref" reference="linftu3"} is to prove that $C_{0}>0$. For this, first we note that since $C_{0}$ does not depend on $g_{er,3,b}(x_{2})$ or $\partial_{x_{2}}(g_{2,b})(x_{2})$, we can take them to be equal to $1$. 
Next, we define $$\begin{aligned} \bar{H}_{\lambda}=3\int_{h_{1}\in\mathds{R},|h_{2}^2|,|h_{3}^2|\leq \lambda}\frac{h_{2}h_{3}}{|h|^5}\sin(h_{2})\sin(h_{3}))dh_{1}dh_{2}dh_{3} \end{aligned}$$ and we can check that, for $\lambda_{n}=((n+\frac{1}{2})\pi )^2$, $\text{lim}_{n\rightarrow\infty}\tilde{H}_{\lambda_{n}}=\text{lim}_{n\rightarrow\infty}\bar{H}_{\lambda_{n}}$ since for $\lambda_{n}$ as defined earlier we have, we can use integration by parts with respect to $h_{3}$ to get $$\begin{aligned} &\text{lim}_{\lambda_{n}\rightarrow\infty}\int_{h^2_{1}\geq \lambda_{n} , h_{2}^2,h_{3}^2\leq \lambda_{n}}\frac{h_{2}h_{3}}{|h|^5}\sin(h_{2})\sin(h_{3})dh_{1}dh_{2}dh_{3}|\\ &\leq \text{lim}_{\lambda_{n}\rightarrow\infty}\int_{h^2_{1}\geq \lambda_{n}, h_{2}^2,h_{3}^2\leq \lambda_{n}}\frac{1}{|h|^4}dh_{1}dh_{2}dh_{3}=0\end{aligned}$$ $$\begin{aligned} &\text{lim}_{\lambda_{n}\rightarrow\infty}\int_{h_{2}^2,h_{3}^2\geq \lambda_{n} ,h^2_{1} \leq \lambda_{n}}\frac{h_{2}h_{3}}{|h|^5}\sin(h_{2})\sin(h_{3})dh_{1}dh_{2}dh_{3}|\\ &\leq \text{lim}_{\lambda_{n}\rightarrow\infty}\int_{h_{2}^2,h_{3}^2\geq \lambda_{n} ,h^2_{1} \leq \lambda_{n}}\frac{1}{|h|^4}dh_{1}dh_{2}dh_{3}=0.\end{aligned}$$ With this in mind, we note that $$\begin{aligned} &\bar{H}_{\lambda_{n}}=3\int_{h_{1}\in\mathds{R},|h_{2}^2|,|h_{3}^2|\leq \lambda}\frac{h_{2}h_{3}}{|h|^5}\sin(h_{2})\sin(h_{3})dh_{1}dh_{2}dh_{3}\\ &=3\int_{|h_{2}^2|,|h_{3}^2|\leq \lambda_{n}}\sin(h_{2})\sin(h_{3})\frac{h_{2}h_{3}}{h_{2}^2+h_{3}^2}\Big(\int_{-\infty}^{\infty}\frac{1}{|1+\frac{h_{1}^2}{h_{2}^2+h_{3}^2}|^5}\frac{1}{|h_{2}^2+h_{3}^2|^{\frac{1}{2}}}dh_{1}\Big)dh_{2}dh_{3}\\ &=C\int_{|h_{2}^2|,|h_{3}^2|\leq \lambda_{n}}\sin(h_{2})\sin(h_{3})\frac{h_{2}h_{3}}{(h_{2}^2+h_{3}^2)^2}dh_{2}dh_{3}\\ &=C\int_{0}^{\frac{\pi}{2}}\int_{0}^{\lambda_{n}L(\alpha)}\sin(r\sin(\alpha))\sin(r\cos(\alpha))\frac{r\sin(\alpha)r\cos(\alpha)}{r^4}rdrd\alpha\\ &=C\int_{0}^{\frac{\pi}{2}}\sin(\alpha)\cos(\alpha)\int_{0}^{\lambda_{n}L(\alpha)}\frac{\cos(r(\sin(\alpha)-\cos(\alpha)))-\cos(r(\sin(\alpha)+\cos(\alpha)))}{r}drd\alpha\\\end{aligned}$$ where $C>0$ changes from line to line, we changed to polar coordinates in the fourth line and $L(\alpha)$ is the function that, given $\alpha\in[0,\frac{\pi}{2}]$ gives us the maximum value of $r$ such that $(r\sin(\alpha),r\cos(\alpha))\in [0,1]\times[0,1]$. But now, if we define $$\begin{aligned} &G(K,A,B):=\text{PV}\int_{0}^{K}\frac{\cos(rA)-\cos(rB)}{r}dr\\ &=C_{A,B}-(\text{PV}\int_{K}^{\infty}\frac{\cos(rA)}{r}dr-\int_{K}^{\infty}\frac{\cos(rB)}{r}dr)\\ &=C_{A,B}+\int_{KA}^{KB}\frac{\cos(r)}{r}dr\end{aligned}$$ which in particular gives, by taking the limit when $K$ is small, that $C_{A,B}=\ln(B)-\ln(A)$. Note also that $|G(K,A,B)|\leq 2(\ln(A)-\ln(B))$ and integration by parts gives us that, for $A,B>0$ $$\text{lim}_{K\rightarrow\infty}\int_{KA}^{KB}\frac{\cos(r)}{r}d=0$$ so we can apply the dominated convergence theorem to $$G_{\lambda_{n}}(\alpha):=\int_{0}^{\lambda_{n}L(\alpha)}\frac{\cos(r(\sin(\alpha)+\cos(\alpha)))-\cos(r(\sin(\alpha)-\cos(\alpha)))}{r}dr$$ to get $$\frac{C_{0}}{2}=\text{lim}_{\lambda_{n}\rightarrow\infty}H_{\lambda_{n}}=C\int_{0}^{\frac{\pi}{2}}\sin(\alpha)\cos(\alpha)\ln(\frac{\sin(\alpha)+\cos(\alpha)}{|\sin(\alpha)-\cos(\alpha)|})d\alpha> 0$$ as we wanted. 
For the bounds in $C^1$, the exact same steps we followed to obtain the $L^{\infty}$ bound [\[linftu3\]](#linftu3){reference-type="eqref" reference="linftu3"} allow us to get $$||u_{j}(\partial_{x_{i}}\omega_{1,b},0,0)-\tilde{u}_{j}(\partial_{x_{i}}\omega_{1,b})||_{L^{\infty}}\leq C_{\delta}M^{-\frac{1}{2}+2\delta}$$ for $i=1,2,3$ and $j=2,3$, since we can decompose $\partial_{x_{i}}\omega_{1,b}$ into functions with the same structure as $\omega_{1,b}$ (by applying Leibniz's rule and looking at each of the individual terms obtained). Furthermore, since $\partial_{x_{i}}u(\omega)=u(\partial_{x_{i}}\omega)$, we have $$||\partial_{x_{i}}u_{j}(\omega_{1,b},0,0)-\partial_{x_{i}}\tilde{u}_{j}(\omega_{1,b})||_{L^{\infty}}\leq ||u_{j}(\partial_{x_{i}}\omega_{1,b},0,0)-\tilde{u}_{j}(\partial_{x_{i}}\omega_{1,b})||_{L^{\infty}}+||\tilde{u}_{j}(\partial_{x_{i}}\omega_{1,b})-\partial_{x_{i}}\tilde{u}_{j}(\omega_{1,b})||_{L^{\infty}}$$ and thus to obtain the $C^1$ bounds it is enough to prove that $$||\tilde{u}_{j}(\partial_{x_{i}}\omega_{1,b})-\partial_{x_{i}}\tilde{u}_{j}(\omega_{1,b})||_{L^{\infty}}\leq C_{\delta}M^{-\frac{1}{2}+2\delta}.$$ But (focusing on $\tilde{u}_{3}$, $\tilde{u}_{2}$ being analogous) $$\begin{aligned} &|\tilde{u}_{3}(\partial_{x_{i}}\omega_{1,b})-\partial_{x_{i}}\tilde{u}_{3}(\omega_{1,b})|\\ &=|C_{0}(\partial_{x_{i}}[\frac{M\partial_{x_{2}}g_{2,b}(x_{2})}{(Mg_{3,er,b}(x_{2}))^2+(M\partial_{x_{2}}g_{2,b}(x_{2}))^2}])M^{\epsilon-b}a_{b}(x_{2})f(M^{\frac{1}{2}+b-\epsilon}x_{1}g_{er,1,b}(x_{2}))\\ &\times \cos(Mg_{2,b}(x_{2}))f(M^{\frac{1}{2}+\epsilon}g_{2,b}(x_{2})) \sin(Mg_{er,3,b}(x_{2})x_{3})f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})x_{3})|\\ &\leq C|(\partial_{x_{i}}[\frac{\partial_{x_{2}}g_{2,b}(x_{2})}{g_{3,er,b}(x_{2})^2+\partial_{x_{2}}g_{2,b}(x_{2})^2}])|M^{\epsilon-b-1}\leq CM^{\epsilon-1}.\\\end{aligned}$$ For the bounds with the $\partial_{x_{i}}$ derivative, $i=1,3$, the proof is exactly the same, since taking a derivative with respect to $x_{1}$ or $x_{3}$ gives us a function with the same properties. Relabeling $\delta$ in the error bounds so that $\delta_{new}=2\delta_{old}$ finishes the proof. ◻

**Lemma 5**. *If we have $N,M>0$ big enough, then for $t\in[1-N^{-\frac{\epsilon}{2}},1]$, $\alpha\in[0,1]$ $$||\omega\cdot\nabla u(\omega)||_{C^{\alpha}},||u(\omega)\cdot\nabla \omega||_{C^{\alpha}}\leq CM^{3\epsilon-\frac{1}{2}+\alpha}.$$*

*Proof.* First we note that, using the properties of $\omega_{1,b},\omega_{3,b}$ we have that, for $i=0,1,2$ $$\label{w1bw3b} ||\omega_{1,b}||_{C^{i}}\leq \frac{CM^{\epsilon+i}}{K_{N}(t)},||\omega_{3,b}||_{C^{i}}\leq CM^{i-\frac{1}{2}}.$$ Furthermore, since we can apply lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"}, using the properties of $\tilde{u}(\omega_{1,b})$ and of $\omega_{1,b}$ plus some direct computations gives us for $i=0,1$ $$||\tilde{u}_{2}(\omega_{1,b})\partial_{x_{2}}\omega_{1,b}+\tilde{u}_{3}(\omega_{1,b})\partial_{x_{3}}(\omega_{1,b})||_{C^{i}}\leq CM^{3\epsilon-2b-\frac{1}{2}+i}$$ since the biggest terms, of order $M^{2\epsilon-2b+i}$, cancel out.
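To see where this cancellation comes from (we only sketch the computation): the largest contributions to $\tilde{u}_{2}(\omega_{1,b})\partial_{x_{2}}\omega_{1,b}$ and to $\tilde{u}_{3}(\omega_{1,b})\partial_{x_{3}}\omega_{1,b}$ are the ones in which the derivative falls on the rapidly oscillating factors $\sin(Mg_{2,b}(x_{2}))$ and $\sin(Mg_{er,3,b}(x_{2})x_{3})$ respectively. Using the expressions for $\tilde{u}_{2}(\omega_{1,b})$ and $\tilde{u}_{3}(\omega_{1,b})$ from lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"} and abbreviating the cutoff factors by $f^{2}(\cdot)$, both of these contributions are, up to their sign, equal to $$C_{0}\frac{Mg_{er,3,b}(x_{2})\,M\partial_{x_{2}}g_{2,b}(x_{2})}{(Mg_{er,3,b}(x_{2}))^2+(M\partial_{x_{2}}g_{2,b}(x_{2}))^2}M^{2(\epsilon-b)}a_{b}^2(x_{2})f^{2}(\cdot)\sin(Mg_{2,b}(x_{2}))\cos(Mg_{2,b}(x_{2}))\sin(Mg_{er,3,b}(x_{2})x_{3})\cos(Mg_{er,3,b}(x_{2})x_{3}),$$ and they appear with opposite signs, so they cancel exactly and only the lower order terms survive.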
On the other hand, just the bounds for $\tilde{u}$ and $\omega_{3,b}$ gives us $$||\tilde{u}_{2}(\omega_{1,b})\partial_{x_{2}}\omega_{3,b}+\tilde{u}_{3}(\omega_{1,b})\partial_{x_{3}}(\omega_{3,b})||_{C^{i}}\leq CM^{\epsilon-b-\frac{1}{2}+i}.$$ But, since, for $j=2,3$, $i=0,1$, for any $\delta>0$ $$||\tilde{u}_{j}(\omega_{1,b})-u_{j}(\omega_{1,b})||_{C^{i}}\leq C_{\delta}M^{2\epsilon-\frac{3}{2}+\delta}$$ we have, for $i=0,1$ $$||(\tilde{u}_{j}(\omega_{1,b})-u_{j}(\omega_{1,b}))\partial_{x_{j}}\omega_{1,b}||_{C^{i}}\leq C_{\delta} M^{3\epsilon-\frac{1}{2}+i+\delta-b},||(\tilde{u}_{j}(\omega_{1,b})-u_{j}(\omega_{1,b}))\partial_{x_{j}}\tilde{w}_{3,b}||_{C^{i}}\leq C_{\delta} M^{2\epsilon-1+i+\delta},$$ and combining the bounds we get, for $i=0,1$ $$\label{uw1} ||u(\omega_{1}(x,t))\cdot\nabla \omega(x,t)||_{C^{i}}\leq C_{\delta}M^{3\epsilon-\frac{1}{2}+i+\delta}$$ Regarding $\omega_{3,b}$, since we have $$\omega_{3,b}=M^{\frac{1}{2}}p(x_{1},x_{2})\int_{-\infty}^{x_{3}}\sin(Mg_{er,3,b}(x_{2})z)f(M^{\frac{1}{2}}g_{er,3,b}(x_{2})z)dz$$ which, using integration by parts with respect to $x_{3}$ can be written as $$M^{-\frac{1}{2}} p(x_{1},x_{2})[\sum_{i=0}^{k}\frac{p_{i}(x_{2},x_{3})}{M^{\frac{i}{2}}}\cos(Mg_{er,3,b}(x_{2})x_{3}+\frac{i\pi}{2})+p^{error}_{k}(x_{2},x_{3})]$$ with $||p(x_{1},x_{2})||_{L^{\infty}}\leq C,||p_{i}(x)||_{L^{\infty}}\leq C_{i},p_{i}(x)\in C^{\infty}, p^{error}_{i}(x_{2},x_{3})\leq C_{i}M^{-\frac{i}{2}}$ and $p,p^{error}_{k}$ and $p_{i}$ supported in $B_{1}(0).$ We can then act as in [\[h3small\]](#h3small){reference-type="eqref" reference="h3small"}, to obtain that, for $j=1,2$, for any $a>1$, $\delta>0$ $$\begin{aligned} &\int_{|h_{1}|,|h_{2}|\geq M^{-1+\delta}}\int_{\mathds{R}}\frac{h_{j}}{|h|^3}p(x_{1}+h_{1},x_{2}+h_{2})\frac{p_{i}(M^{\frac{1}{2}}g_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3}))}{M^{\frac{i}{2}}}\\ &\times\cos(Mg_{er,3,b}(x_{2}+h_{2})(x_{3}+h_{3})+\frac{i\pi}{2})dh_{1}dh_{2}dh_{3}\leq \frac{C_{a,\delta}}{M^{a}},\end{aligned}$$ and furthermore $$\begin{aligned} &\int_{|h_{1}|,|h_{2}|\geq M^{-1+\delta}}\int_{\mathds{R}}\frac{h_{j}}{|h|^3}p(x_{1}+h_{1},x_{2}+h_{2})p^{error}_{k}(x_{2}+h_{2},x_{3}+h_{3}) dh_{1}dh_{2}dh_{3}\leq \frac{C_{i}}{M^{\frac{i}{2}}},\end{aligned}$$ so, focusing for example on $u_{1}(0,0,\omega_{3,b})$, $u_{2}$ being analogous, we have $$\begin{aligned} &|u_{1}(0,0,\omega_{3,b})|\leq M^{-\frac{1}{2}}(|\int_{|h_{1}|,|h_{2}|\leq M^{-1+\delta}}\int_{\mathds{R}}\frac{h_{2}}{|h|^3}\omega_{3}(x+h,t)dh_{3}dh_{1}dh_{2}|+\frac{C}{M})\\ &\leq CM^{-\frac{1}{2}}(|\int_{|h_{1}|,|h_{2}|\leq M^{-1+\delta}}\frac{1}{|h|}dh_{1}dh_{2}|+\frac{C}{M})\leq C_{\delta}M^{-\frac{3}{2}+\delta}.\end{aligned}$$ Furthermore, we can use that $\partial_{x_{i}}u_{j}$ is a singular integral operator to obtain, for $\alpha \in(0,1)\cup(1,2)$ $$||\partial_{x_{i}}u_{j}(0,0,\omega_{3,b})||_{C^\alpha}\leq C||\omega_{3,b}||_{C^{\alpha}}\leq CM^{-\frac{1}{2}+\alpha}$$ and combining this with the $L^{\infty}$ bound gives us, for $i=0,1,2$, $\delta>0$ $$\label{w3i} ||u_{j}(0,0,\omega_{3,b})||_{C^{i}}\leq C_{\delta}M^{-\frac{3}{2}+i+\delta} .$$ Then, the bounds for $\omega_{j}(x,t)$ and its two first derivatives gives us, for $j=1,2,3$, $i=0,1$ $$\label{cuaduw3} ||u_{j}(\omega_{3,b})\cdot\nabla \omega(x,t)||_{C^{i}}\leq CM^{-\frac{3}{2}+\delta+\epsilon+i}.$$ Combining [\[cuaduw3\]](#cuaduw3){reference-type="eqref" reference="cuaduw3"} with [\[uw1\]](#uw1){reference-type="eqref" reference="uw1"}, taking $\delta=\epsilon$ and using interpolation gives us for $[0,1]$, 
$$||u(\omega(x,t))\cdot\nabla \omega(x,t)||_{C^{\alpha}}\leq C M^{4\epsilon-\frac{1}{2}+\alpha}.$$ On the other hand, using lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"} we have, for $i=0,1$, $k=2,3$ that $$||\partial_{x_{1}}u_{k}(\omega_{1,b})||_{C^{i}}\leq ||\partial_{x_{1}}\tilde{u}_{k}(\omega_{1,b})||_{C^{i}}+||\partial_{x_{1}}(u_{k}-\tilde{u}_{k})(\omega_{1,b})||_{C^{i}}\leq C M^{2\epsilon-\frac{1}{2}+\delta+i}$$ and using that $||\partial_{x_{3}}u_{k}||$ is a singular integral operator, the bounds for $\omega_{1,b}$ and interpolation we get $$||\partial_{x_{3}}u_{k}(\omega_{1,b})||_{C^{i}}\leq C M^{\epsilon-b+i}$$ which give us, using the bounds for $\omega_{1,b}$ and $\omega_{3,b}$, for $i=0,1$ $$||\omega_{1,b}\partial_{x_{1}}u_{k}(\omega_{1,b})||_{C^{i}}\leq CM^{3\epsilon-\frac{1}{2}+\delta+i}$$ $$||\omega_{3,b}\partial_{x_{3}}u_{k}(\omega_{1,b})||_{C^{i}}\leq CM^{\epsilon-\frac{1}{2}+i}$$ and taking $\delta=\epsilon$ and using interpolation gives us that $$||\omega\cdot\nabla u(\omega)||_{C^{\alpha}}\leq CM^{4\epsilon-\frac{1}{2}+\alpha}$$ as we wanted. ◻

In order to control interactions that come from our small layer moving the vorticity of bigger scale layers, we need to obtain bounds for the decay of the velocity generated by the small scale vorticity.

**Lemma 6**. *If $M$ is big enough, we have that, if $|x_{1}|\geq 4M^{-\frac{1}{2}-b+\epsilon}$, then for $i=0,1,2$, $j=1,2,3$ and any $k>1$* *$$\label{decay1} |D^{i}u_{j}(\omega_{1,b})(x)|\leq C_{k}M^{-k}.$$ Similarly, if $|x_{2}|\geq 4M^{-\frac{1}{2}-\epsilon+b}$, $i=0,1,2$ then $$\label{decay2} |D^{i}u_{j}(\omega_{1,b})(x)|\leq C_{k}M^{-k}.$$*

*Proof.* We begin by showing [\[decay1\]](#decay1){reference-type="eqref" reference="decay1"} with $i=0$ and in the case $|x_{1}|\geq 4M^{-\frac{1}{2}-b+\epsilon}$, the case $|x_{2}|\geq 4M^{-\frac{1}{2}-\epsilon+b}$ being analogous. Similarly, we consider only $j=2$. We first note that $$\text{supp}(\omega_{1,b})\subset[-2M^{-\frac{1}{2}-b+\epsilon},2M^{-\frac{1}{2}-b+\epsilon}]\times[-2M^{-\frac{1}{2}-\epsilon+b},2M^{-\frac{1}{2}-\epsilon+b}]\times[-2M^{-\frac12},2M^{-\frac12}].$$ But, acting exactly as in [\[h3small\]](#h3small){reference-type="eqref" reference="h3small"}, using integration by parts with respect to $h_{3}$ we can obtain, for $|h_{1}|\geq M^{-\frac{1}{2}-b+\epsilon}$, for any $m>1$ $$\int_{\mathds{R}}\frac{h_{2}}{|h|^3}\omega_{1,b}(x+h)dh_{3}\leq \int_{\mathds{R}} \sum_{l=0}^{m}\frac{C_{m}M^{\frac{l}{2}}}{|h|^{2+m-l}} \frac{1}{M^{m}}dh_{3}\leq \frac{C_{m}}{M^{\epsilon m}}.$$ Then, using the expression for $u_{2}(\omega_{1,b})$ and the bounds for the support of $\omega_{1,b}(x)$ we get, if $|x_{1}|\geq 4M^{-\frac12-b+\epsilon}$ $$|u(\omega_{1,b})|\leq |\int_{\mathds{R}^3}\frac{h_{2}}{|h|^3}\omega_{1,b}(x+h)dh_{3}dh_{2}dh_{1}|\leq |\int_{-2M^{-\frac{1}{2}+\epsilon}-x_{1}}^{2M^{-\frac{1}{2}+\epsilon}-x_{1}}\int_{-2M^{-\frac{1}{2}-\epsilon+b}-x_{2}}^{2M^{-\frac{1}{2}-\epsilon+b}-x_{2}}\frac{C_{m}}{M^{\frac{m}{2}}}dh_{2}dh_{1}|\leq \frac{C_{m}}{M^{\epsilon m}},$$ and since $m$ is arbitrary this finishes the proof of [\[decay1\]](#decay1){reference-type="eqref" reference="decay1"}. For [\[decay2\]](#decay2){reference-type="eqref" reference="decay2"} the proof is completely analogous.
To obtain the same result for $i=1,2$, we just use the fact that derivatives commute with the velocity operator and argue in the exact same way as before but with $u(D^{i}\omega_{1,b})$. ◻

This lemma now allows us to obtain bounds for the interactions that come from the velocity of the small scale layer interacting with the big scale layer.

**Lemma 7**. *Let $\omega^{N}(x,t)=(\omega^{N}_{1}(x,t),\omega^{N}_{2}(x,t),\omega^{N}_{3}(x,t))$ be an odd vorticity such that $||\omega^{N}(x,t)||_{C^{2.5}}\leq N^{4}$ for $t\in[1-N^{-\frac{\epsilon}{2}},1]$ and with $\text{supp}(\omega^{N}(x,t))\subset \{|x|\leq 1\}$. Then, if $N$ is big enough, we have that, for $\alpha\in[0,1]$ $$\label{smallbig1} ||u(\omega)\cdot\nabla\omega^{N}||_{C^{\alpha}}\leq CM^{2\epsilon+\alpha-\frac{1}{2}}$$ $$\label{smallbig2} ||\omega^{N}\cdot\nabla u(\omega)||_{C^{\alpha}}\leq CM^{2\epsilon+\alpha-\frac{1}{2}}$$*

*Proof.* We start by noting that, using [\[w3i\]](#w3i){reference-type="eqref" reference="w3i"} and the bounds for $\omega^{N}$, we directly have, for $N$ big enough so that $N^{4}\leq M^{\epsilon}$, and after taking $\delta=\epsilon$, for $i=0,1$ $$\label{3buw} ||u(0,0,\omega_{3,b})\cdot\nabla\omega^{N}||_{C^{i}}\leq C_{\delta} N^{4}M^{-\frac{3}{2}+i+\delta}\leq CM^{-\frac{3}{2}+i+2\epsilon},$$ $$\label{3bwu} ||\omega^{N}\cdot\nabla u(0,0,\omega_{3,b})||_{C^{i}}\leq C_{\delta} N^{4}M^{-\frac{1}{2}+i+\delta}\leq CM^{-\frac{1}{2}+i+2\epsilon}.$$ Furthermore, since, for $i=0,1,$ $$\label{uw1b} ||u_{j}(\omega_{1,b},0,0)||_{C^{i}}\leq ||(\tilde{u}_{j}-u_{j})(\omega_{1,b},0,0)||_{C^{i}}+||\tilde{u}_{j}(\omega_{1,b},0,0)||_{C^{i}}\leq CM^{\epsilon-b-1+i}$$ we can get $$||u(\omega_{1,b},0,0)\cdot\nabla\omega^{N}||_{C^{i}}\leq C_{\delta} N^{4}M^{\epsilon-1+i}\leq C_{\delta}M^{2\epsilon-1+i},$$ and combining this with [\[3buw\]](#3buw){reference-type="eqref" reference="3buw"} plus interpolation already yields [\[smallbig1\]](#smallbig1){reference-type="eqref" reference="smallbig1"}.
On the other hand, using [\[uw1b\]](#uw1b){reference-type="eqref" reference="uw1b"}, we have $$\label{D1wu} |D^{1}[\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)]|\leq CM^{2\epsilon}+|\omega^{N}(x)\cdot\nabla D^{1}[u(\omega_{1,b},0,0)(x)]$$ and, since for $x$ such that $|x_{1}|\geq 4M^{-\frac{1}{2}+\epsilon}$ or $|x_{2}|\geq 4M^{-\frac{1}{2}-\epsilon+b}$, we can apply [\[decay1\]](#decay1){reference-type="eqref" reference="decay1"} or [\[decay2\]](#decay2){reference-type="eqref" reference="decay2"} from lemma [Lemma 6](#decay){reference-type="ref" reference="decay"} to obtain, for $i=0,1$ that $$|\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)|)\leq C_{k,\delta}M^{-k},|\omega^{N}(x)\cdot\nabla D^{1}[u(\omega_{1,b},0,0)(x)]|)\leq C_{k,\delta}M^{-k},$$ we can combine this with [\[D1wu\]](#D1wu){reference-type="eqref" reference="D1wu"}, for $|x_{1}|\geq 4M^{-\frac{1}{2}+\epsilon}$ or $|x_{2}|\geq 4M^{-\frac{1}{2}-\epsilon+b}$ we get $$|\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)|)\leq C_{k,\delta}M^{-k},|D^{1}[\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)]|)\leq C_{k,\delta}M^{-k}+CM^{2\epsilon}.$$ But, for $|x_{1}|\leq 4M^{-\frac{1}{2}+\epsilon}\cap|x_{2}|\leq 4M^{-\frac{1}{2}-\epsilon+b}\cap |x_{3}|\leq 1$ we can use the fact that $\omega^{N}$ is odd and $\omega^{N}\in C^{2.5}$ to get that $$|\omega^{N}_{1}(x)|\leq N^{4}x_{2}x_{3}\leq N^{4}M^{-\frac{1}{2}+\epsilon},|\omega^{N}_{2}(x)|\leq N^{4}x_{1}x_{3}\leq N^{4}M^{-\frac{1}{2}-\epsilon+b},|\omega^{N}_{3}(x)|\leq N^{4}x_{1}x_{2}\leq N^{4}M^{-\frac{1}{2}},$$ so, for $|x_{1}|\leq 4M^{-\frac{1}{2}+\epsilon}\cap|x_{2}|\leq 4M^{-\frac{1}{2}-\epsilon+b}$ we have $$|\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)|)\leq CN^{4}[M^{-\frac{1}{2}+\epsilon}+M^{-\frac{1}{2}-\epsilon+b}]M^{\epsilon-b}\leq CM^{2\epsilon-\frac{1}{2}},$$ $$|\omega^{N}(x)\cdot\nabla D^{1}[u(\omega_{1,b},0,0)(x)]|)\leq CN^{4}[M^{-\frac{1}{2}+\epsilon}+M^{-\frac{1}{2}-\epsilon+b}]M^{\epsilon-b+1}\leq CM^{2\epsilon+\frac{1}{2}}$$ which, combined with [\[D1wu\]](#D1wu){reference-type="eqref" reference="D1wu"} gives us, for for $|x_{1}|\leq 4M^{-\frac{1}{2}+\epsilon}\cap|x_{2}|\leq 4M^{-\frac{1}{2}-\epsilon+b}$, for $i=0,1$ $$|D^{i}[\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)]|\leq CM^{2\epsilon-\frac{1}{2}+i}$$ and since we had already proved it for for $|x_{1}|\geq 4M^{-\frac{1}{2}+\epsilon}$ or $|x_{2}|\geq 4M^{-\frac{1}{2}-\epsilon+b}$, the bound holds for any $x$, and thus, for $i=0,1$ $$||\omega^{N}(x)\cdot\nabla u(\omega_{1,b},0,0)(x)||_{C^{i}}\leq CM^{2\epsilon-\frac{1}{2}+i}$$ which combined with [\[3bwu\]](#3bwu){reference-type="eqref" reference="3bwu"} and using interpolation gives [\[smallbig2\]](#smallbig2){reference-type="eqref" reference="smallbig2"}. ◻ Finally, since we are using a simplified version of the velocity $\bar{u}$ instead of $u$, a force is also required to compensate for this difference, and we need to show that this force is almost $C^{\frac{1}{2}}$ in vorticity formulation. **Lemma 8**. 
*For $N$ big enough and $\alpha\in[0,1]$, we have that $$||(u^{N}-\bar{u}^{N})\cdot\nabla\omega||_{C^{\alpha}}\leq CM^{4\epsilon-\frac{1}{2}+\alpha},||\omega\cdot\nabla(u^{N}-\bar{u}^{N})||_{C^{\alpha}}\leq CM^{3\epsilon-\frac{1}{2}+\alpha}.$$*

*Proof.* We start by noting that, using the fact that $u^{N}(x,t)$ is odd and $||u^{N}(x,t)||_{C^{3.5}}\leq N^{4}$, we have that, for $i=1,2,3$, defining $z=|x_{1}^2+x_{3}^2|^{\frac{1}{2}}$ $$|u^{N}_{i}(x,t)-\bar{u}^{N}_{i}(x,t)|\leq C N^4|x_{i}||x_{1}^2+x_{3}^2|=CN^{4}|x_{i}|z^2.$$ But, using that, for $N$ big, $$\text{supp}(\omega_{b})\subset[-2M^{-\frac{1}{2}-b+\epsilon},2M^{-\frac{1}{2}-b+\epsilon}]\times[-2M^{-\frac{1}{2}-\epsilon+b},2M^{-\frac{1}{2}-\epsilon+b}]\times[-2M^{-\frac12},2M^{-\frac12}],$$ we have that $$|(u^{N}_{i}(x,t)-\bar{u}^{N}_{i}(x,t))\partial_{x_{i}}\omega_{j,b}(x)|\leq C N^{4}|z^2x_{i}\partial_{x_{i}}\omega_{j,b}(x)|\leq CN^{4}M^{-1+2\epsilon}|x_{i}\partial_{x_{i}}\omega_{j,b}|.$$ But, using the properties of $\omega_{j,b}$, it is easy to check that $|x_{i}\partial_{x_{i}}\omega_{j,b}(x)|\leq C M^{\epsilon-b} M^{\frac{1}{2}}$ so, taking $N$ big enough so that $N^{4}\leq M^{\epsilon}$, $$|(u^{N}_{i}(x,t)-\bar{u}^{N}_{i}(x,t))\partial_{x_{i}}\omega_{j,b}(x)|\leq C N^{4}M^{-\frac{1}{2}+3\epsilon-b}\leq CM^{-\frac{1}{2}+4\epsilon}.$$ A similar computation gives us $$|(u^{N}_{i}(x,t)-\bar{u}^{N}_{i}(x,t))D^{1}[\partial_{x_{i}}\omega_{j,b}(x)]|\leq CM^{\frac12+4\epsilon},$$ which combined with $$|D^{1}[u^{N}_{i}(x,t)-\bar{u}^{N}_{i}(x,t)]\partial_{x_{i}}\omega_{j,b}(x)|\leq CN^{4}[z|x_{i}\partial_{x_{i}}\omega_{j,b}(x)|+z^2|\partial_{x_{i}}\omega_{j,b}(x)|]\leq CM^{4\epsilon}$$ and using interpolation gives us $$||(u^{N}-\bar{u}^{N})\cdot\nabla\omega||_{C^{\alpha}}\leq CM^{4\epsilon-\frac{1}{2}+\alpha}.$$ On the other hand, we have $$|\omega_{i,b}\partial_{x_{i}}(u^{N}_{j}-\bar{u}^{N}_{j})|\leq CN^4|\omega_{i,b}(x)|[z|x_{i}|+z^2]\leq CM^{3\epsilon-\frac{1}{2}},$$ $$|D^{1}[\omega_{i,b}]\partial_{x_{i}}(u^{N}_{j}-\bar{u}^{N}_{j})|\leq CN^4|D^1[\omega_{i,b}(x)]|[z|x_{i}|+z^2]\leq CM^{3\epsilon+\frac{1}{2}},$$ $$|\omega_{i,b}D^{1}[\partial_{x_{i}}(u^{N}_{j}-\bar{u}^{N}_{j})]|\leq CN^4|\omega_{i,b}(x)|\leq CM^{2\epsilon}$$ and again interpolation gives us $$||\omega_{i,b}\partial_{x_{i}}(u^{N}_{j}-\bar{u}^{N}_{j})||_{C^{\alpha}}\leq CM^{3\epsilon-\frac12+\alpha}.$$ ◻

Finally, in order to be able to use induction when gluing different layers together, we want to show that the small scale layer has the desired properties so that it can act as the big scale layer in the next iteration. This includes regularity of $\omega$ as well as properties of the velocity field.

**Lemma 9**. *For $N$ big enough, $t\in[1-M^{-\frac{\epsilon}{2}},1]$, we have that $$\label{w25u35} ||\omega(x,t)||_{C^{2.5}},||u(\omega)(x,t)||_{C^{3.5}}\leq \frac{M^{4}}{2}.$$ Furthermore, we have $$\label{2c0} C_{0}M^{\epsilon}\geq \partial_{x_{2}}u_{2}(\omega(\cdot,t))(x=0)\geq \frac{C_{0}}{4}M^{\epsilon}$$ and $$\label{u1log} |\partial_{x_{1}}u_{1}(\omega(\cdot,t))(x=0)|\leq 1.$$*

*Proof.* First, we note that, since for $t\in[1-M^{-\frac{\epsilon}{2}},1]$ we have $\text{supp}(\omega(x,t))\subset B_{1}(0)$, the evolution of $\omega(x,t)$ is equivalent to $$\partial_{t}\omega+v\cdot\nabla \omega=\omega\cdot\nabla v$$ with $v(x,t)=\bar{u}^{N}(x,t)G(|x|)$, with $G(|x|)=1$ if $|x|\leq 1$, $G(|x|)=0$ if $|x|\geq 2$ and $G\in C^{\infty}$. Note that in particular $||v(x,t)||_{C^{3.5}}\leq CN^{4}$.
Now, if we define $$\tilde{\omega}_{i}(x,t)=\omega(\phi(x,1,t),t)$$ with $$\partial_{t}\phi(x,1,t)=v(\phi(x,1,t),t)$$ $$\phi(x,1,1)=x,$$ we get $$\partial_{t}\tilde{\omega}_{i}(x,t)=\tilde{\omega}_{i}(x,t)(\partial_{x_{i}}v_{i})(\phi(x,1,t),t)$$ $$\tilde{\omega}_{i}(x,t=1)=\omega_{i,end}(x).$$ But $$\begin{aligned} &\partial_{t}||\tilde{\omega}_{i}(x,t)||_{C^{2.5}}\leq ||\tilde{\omega}_{i}(x,t)(\partial_{x_{i}}v_{i})(\phi(x,1,t),t)||_{C^{2.5}}\leq C ||\tilde{\omega}_{i}(x,t)||_{C^{2.5}}||(\partial_{x_{i}}v_{i})(\phi(x,1,t),t)||_{C^{2.5}}\\ &\leq C ||\tilde{\omega}_{i}(x,t)||_{C^{2.5}} ||v(x,t)||_{C^{3.5}}[1+||\partial_{x_{1}}\phi(x,1,t)||_{C^{1.5}}+||\partial_{x_{2}}\phi(x,1,t)||_{C^{1.5}}]^{3}\\ &\leq C N^{4}||\tilde{\omega}_{i}(x,t)||_{C^{2.5}} [1+||\partial_{x_{1}}\phi(x,1,t)||_{C^{1.5}}+||\partial_{x_{2}}\phi(x,1,t)||_{C^{1.5}}]^{3} \end{aligned}$$ On the other hand, we have $$\partial_{t}D^1\phi(x,1,t)=D^1(v)(\phi(x,1,t),t)D^{1}(\phi)(x,1,t),$$ so $$\partial_{t}||D^{1}\phi(x,1,t)||_{L^{\infty}}\leq ||D^{1}v||_{L^{\infty}}||D^{1}\phi||_{L^{\infty}}\leq CN^{4}||\phi(x,1,t)||_{L^{\infty}}$$ and similarly $$\begin{aligned} &\partial_{t}||D^3\phi(x,1,t)||_{L^{\infty}}=(||D_{1}\phi(x,1,t)||_{L^{\infty}}+||D^{3}\phi(x,1,t)||_{L^{\infty}})(1+||v||_{C^{3}})^3\\ &\leq CN^{12}(||D_{1}\phi(x,1,t)||_{L^{\infty}}+||D^{3}\phi(x,1,t)||_{L^{\infty}}) \end{aligned}$$ and integrating in time and taking $N$ big enough we get, for $t\in[1-M^{-\frac{\epsilon}{2}},1]$ $$||D^{1}\phi(x,1,t)||_{C^2}\leq 1+CN^{12}M^{-\frac{\epsilon}{2}}\leq 2,$$ Now integrating in time $\partial_{t}||\tilde{w}(x,t)||_{C^{2.5}}$, for $N$ big and $t\in[1-M^{\frac{\epsilon}{2}}]$ we get $$||\tilde{\omega}(x,t)||_{C^{2.5}}\leq 2||\tilde{\omega}(x,1)||_{C^{2.5}}\leq CM^{2.5+\epsilon}\leq M^{3}.$$ But then, since $$||\omega(x,t)||_{C^{2.5}}=||\tilde{\omega}(\phi(x,t,1),t)||_{C^{2.5}}\leq C ||\tilde{\omega}(x,t)||_{C^{2.5}}(1+||D^1\phi(x,t,1)||_{C^{2}})^3$$ and using that we can obtain the same kind of bounds for $||D^{1}\phi(x,t,1)||_{C^2}$ as the ones we got for $||D^{1}\phi(x,1,t)||_{C^{2}}$, we have that $$||\omega(x,t)||_{C^{2.5}}\leq CM^{3}\leq \frac{M^{4}}{2}.$$ Furthermore, since $\partial_{x_{i}}u_{j}$ is a singular integral operator, this implies $$||D^1 u(\omega(x,t))||_{C^{2.5}}\leq C||\omega(x,t)||_{C^{2.5}}\leq CM^{3}\leq \frac{M^{4}}{4},$$ and since $\text{supp}(\omega(x,t))\subset B_{1}(0)$ we also have $$||u(\omega(x,t))||_{L^{\infty}}\leq C||\omega(x,t)||_{C^{2.5}}\leq CM^{3}\leq \frac{M^{4}}{4},$$ so we have [\[w25u35\]](#w25u35){reference-type="eqref" reference="w25u35"}. 
To prove [\[2c0\]](#2c0){reference-type="eqref" reference="2c0"}, we define $\bar{\omega}(x,t)$ as $$\partial_{t}\bar{\omega}+u_{lin}\cdot\nabla \bar{\omega}=\bar{\omega}\cdot\nabla u_{lin},$$ with $$u_{lin}(x,t)=(x_{1}\partial_{x_{1}}u^{N}_{1}(x=0,t),x_{2}\partial_{x_{2}}u^{N}_{2}(x=0,t)x_{3}\partial_{x_{3}}u^{N}_{3}(x=0,t)),$$ which fulfils, if we define $A_{i}(t)=\int_{t}^{1}\partial_{x_{i}}u^{N}_{i}(x=0,s)ds$ $$w_{1}(x,t)=\frac{M^{\epsilon}}{A_{1}(t)}f(M^{\frac{1}{2}-\epsilon}A_{1}(t)x_{1})\sin(MA_{2}(t)x_{2})f(M^{\frac{1}{2}+\epsilon}A_{2}(t)x_{2}) \sin(MA_{3}(t)x_{3})f(M^{\frac{1}{2}}A_{3}(t)x_{3}).$$ We can then apply lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"} to get that $$||(u_{2}-\tilde{u}_{2})(\bar{\omega}_{1}(x,t))||_{C^1}\leq C_{\delta}M^{2\epsilon-\frac{1}{2}+\delta}$$ and using the expression of $\tilde{u}(\bar{\omega}_{1}(x,t))$ plus the bounds for $u^{N}(x,t)$ we get, for any $\delta>0$, if $N$ big enough $$\frac{1-\delta}{2}C_{0}M^{\epsilon}\leq \partial_{x_{2}}\tilde{u}_{2}(\bar{\omega}_{1}(x,t))\leq \frac{1+\delta}{2}C_{0}M^{\epsilon}.$$ Then, taking $N$ big enough we get, again for any $\delta>0$ $$\frac{1-2\delta}{2}C_{0}M^{\epsilon}\leq \partial_{x_{2}}u_{2}(\bar{\omega}_{1}(x,t))\leq C_{0}M^{\epsilon}\frac{1+2\delta}{2}.$$ Furthermore, using that $\partial_{x_{i}}u_{j}$ as an operator is a singular integral operator, we can get $$||\partial_{x_{i}}u_{j}(0,0,\bar{\omega}_{3}(x,t))||_{L^{\infty}}\leq CM^{\epsilon-\frac{1}{2}}$$ so that $$\label{3delta} \frac{1-3\delta}{2}C_{0}M^{\epsilon}\leq \partial_{x_{2}}u_{2}(\bar{\omega}(x,t))\leq C_{0}M^{\epsilon}\frac{1+3\delta}{2}.$$ To finish the proof of [\[2c0\]](#2c0){reference-type="eqref" reference="2c0"}, we just need to obtain bounds for $$||\partial_{x_{2}}u_{2}(\omega(x,t)-\bar{\omega}(x,t))||_{L^{\infty}}.$$ For this, if we define $W=\omega(x,t)-\bar{\omega}(x,t)$, we have that $$\partial_{t}W+(v-u_{lin})\cdot\nabla \omega+u_{lin}\cdot \nabla W=W\cdot\nabla v+\bar{\omega}\cdot\nabla (v-u_{lin})$$ and using the bounds for the support of $\omega,\bar{\omega}$, the definitions of $v$ and $u_{lin}$, the bounds for $u^{N}$ and that $u^{N}$ is an odd velocity, we get $$||(v-u_{lin})\cdot\nabla \omega||_{L^{\infty}},||\bar{\omega}\cdot\nabla (v-u_{lin})||_{L^{\infty}}\leq CN^{4} M^{-\frac{1}{2}+4\epsilon}\leq CM^{-\frac{1}{2}+5\epsilon}.$$ Then, the evolution equation for $W$ gives us $$||W||_{L^{\infty}}\leq CM^{-\frac{1}{2}+5\epsilon}.$$ Furthermore, we have that $||\omega||_{C^{1}},||\bar{\omega}||_{C^{1}}\leq CM^{1+\epsilon}$ so $||W||_{C^{1}}\leq CM^{1+\epsilon}$ and finally, using the properties of the operator $\partial_{x_{2}}u_{2}$ we get $$||\partial_{x_{2}}u_{2}(W)||_{L^{\infty}}\leq C||W||_{L^{\infty}}(\ln(1+||W||_{C^{1}}))\leq CM^{-\frac{1}{2}+6\epsilon},$$ which combined with [\[3delta\]](#3delta){reference-type="eqref" reference="3delta"} gives us the bound for $\partial_{x_{2}}u_{2}(\omega)$. To obtain [\[u1log\]](#u1log){reference-type="eqref" reference="u1log"}, we use the properties of $\partial_{x_{1}}u_{1}$ to get $$|\partial_{x_{1}}u_{1}(\omega)|=|\partial_{x_{1}}u_{1}(0,0,\omega_{3}(x,t))|\leq C||\omega_{3}||_{L^{\infty}}(1+\ln(||\omega||_{C^{1}}))\leq 1.$$ ◻ # Constructing the solution Now, combining all the lemmas we have proved so far, we can get the theorem that will allow us to construct our solution iteratively. **Theorem 10**. 
*Given $\frac{1}{100}\geq \epsilon>0$, for $N>1$ big enough, given an odd vorticity $\bar{\omega}^{1}(x,t)$ satisfying the forced 3D-Euler equations with force $\bar{F}^{1}(x,t)$ such that $||\bar{\omega}^1||_{C^{2.5}},||u(\bar{\omega}^1)||_{C^{3.5}}\leq N^4$, $\bar{F}^1\in C^{\frac{1}{2}-6\epsilon},$ and fulfilling, for $t\in[1-N^{-\frac{\epsilon}{2}},1]$ $$C_{0}N^{\epsilon}\geq \partial_{x_{1}}u_{1}(\bar{\omega}^1)\geq \frac{C_{0}}{4}N^{\epsilon}, ||\partial_{x_{3}}u_{3}(\bar{\omega}^1)||_{L^{\infty}}\leq \ln(N)^{3},$$ with $C_{0}$ the constant from lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"}, then, if we define $$M^{\frac{1}{2}}:=e^{\int_{1-N^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u_{1}(\bar{\omega}^1)(s,0)ds}$$ there is an odd vorticity $\bar{\omega}^2(x,t)$ such that $\bar{\omega}^1(x,t)+\bar{\omega}^2(x,t)$ is a solution to the forced 3D-Euler with force $\bar{F}^1(x,t)+\bar{F}^2(x,t)$ fulfilling $$\label{cotas1} ||\bar{\omega}^1(x,t)+\bar{\omega}^2(x,t)||_{C^{2.5}},||u(\bar{\omega}^1(x,t)+\bar{\omega}^2(x,t))||_{C^{3.5}}\leq M^{4},||\bar{F}^2(x,t)||_{C^{\frac{1}{2}-6\epsilon}}\leq M^{-2\epsilon}$$ and, for $t\in[1-M^{-\frac{\epsilon}{2}},1]$ $$\label{cotas2} C_{0}M^{\epsilon}\geq \partial_{x_{2}}u_{2}(\bar{\omega}^1+\bar{\omega}^2)\geq \frac{C_{0}}{4}M^{\epsilon}, ||\partial_{x_{1}}u_{1}(\bar{\omega}^1+\bar{\omega}^{2})||_{L^{\infty}}\leq \ln(M)^{3}.$$ Furthermore, for $t\in[0,1-N^{-\frac{\epsilon}{2}}]$, $\bar{F}^2(x,t)=\bar{\omega}^{2}(x,t)=0$.* *Proof.* To obtain $\bar{\omega}^2(x,t)$, we consider $$\partial_{t}\omega+\bar{u}(\omega^{1})\cdot\nabla \omega=\omega\cdot\nabla\bar{u}(\omega^{1}),$$ with $\bar{u}(\omega^{1})$ as in definition [Definition 3](#ubarra){reference-type="ref" reference="ubarra"}, and $$\omega_{1}(x,t=1)=-M^{\epsilon}f(M^{\frac{1}{2}-\epsilon}x_{1})\sin(Mx_{2})f(M^{\frac{1}{2}+\epsilon}x_{2})\sin(Mx_{3})f(M^{\frac{1}{2}}x_{3})$$ $$\omega_{2}(x,t=1)=0,$$ $$\omega_{3}(x,t=1)=-\int_{-\infty}^{x_{3}}\partial_{x_{1}}\omega_{1}(x_{1},x_{2},s,t=1)ds,$$ with some fixed $f(z)\in C^{\infty}$, $1\geq f(z)\geq 0$, $f(z)=1$ if $|z|\leq \frac{1}{2}$, $f(z)=0$ if $|z|\geq 1$. We then take $$\bar{\omega}^{2}(x,t):= g(t)\omega(x,t)$$ with $$g(t)= \begin{cases} 0&\text{if } t\leq 1-N^{-\frac{\epsilon}{2}}\\ (t-1+N^{-\frac{\epsilon}{2}})M^{\epsilon}&\text{if }t\in[1-N^{-\frac{\epsilon}{2}},1-N^{-\frac{\epsilon}{2}}+M^{-\epsilon}]\\ 1&\text{if } t\geq 1-N^{-\frac{\epsilon}{2}}+M^{-\epsilon}, \end{cases}$$ so $\bar{\omega}^{2}(x,t)=0$ if $t\leq 1-N^{-\frac{\epsilon}{2}}$. 
For $\bar{w}^{2}(x,t)$ defined this way, we have that $\bar{\omega}^{1}(x,t)+\bar{\omega}^{2}(x,t)$ is a solution to the forced 3D-Euler equation with force $$F(x,t)=\bar{F}^{1}(x,t)+\bar{F}^2(x,t)$$ with $$\begin{aligned} \bar{F}^{2}(x,t)=&g'(t)\omega(x,t)+g(t)\big((u-\bar{u})(\bar{\omega}^{1}(x,t))\cdot\nabla \omega(x,t)-\omega(x,t)\cdot\nabla[(u-\bar{u})(\bar{w}^{1})(x,t)]\big)\\ &+g(t)\big(u(\omega)\cdot\nabla \bar{\omega}^1(x,t)-\bar{\omega}^1(x,t)\cdot\nabla u(\omega)(x,t)\big)\\ &+g(t)^2\big(u(\omega)\cdot\nabla \omega(x,t)-\omega(x,t)\cdot\nabla u(\omega)(x,t)\big).\end{aligned}$$ Then, using lemmas [Lemma 8](#ubarraerror){reference-type="ref" reference="ubarraerror"}, [Lemma 7](#upequeñoagrande){reference-type="ref" reference="upequeñoagrande"} and [Lemma 5](#cuaderror){reference-type="ref" reference="cuaderror"} gives us $$||g(t)\big((u-\bar{u})(\bar{\omega}^{1}(x,t))\cdot\nabla \omega(x,t)-\omega(x,t)\cdot\nabla[(u-\bar{u})(\bar{w}^{1})(x,t)]\big)||_{C^{\frac{1}{2}-6\epsilon}}\leq CM^{-2\epsilon},$$ $$||g(t)\big(u(\omega)\cdot\nabla \bar{\omega}^1(x,t)-\bar{\omega}^1(x,t)\cdot\nabla u(\omega)(x,t)\big)||_{C^{\frac{1}{2}-6\epsilon}}\leq CM^{-4\epsilon}$$ $$||g(t)^2\big(u(\omega)\cdot\nabla \omega(x,t)-\omega(x,t)\cdot\nabla u(\omega)(x,t)\big)||_{C^{\frac{1}{2}-6\epsilon}}\leq CM^{-3\epsilon}$$ ,which in particular gives $\bar{F}^2(x,t)=0$ for $t\in[0,1-N^{-\frac{\epsilon}{2}}]$. Furthermore, using the bounds for $\omega(x,t)$ (see [\[w1bw3b\]](#w1bw3b){reference-type="eqref" reference="w1bw3b"}) we have $$||g'(t)\omega(x,t)||_{C^{\frac{1}{2}-6\epsilon}}\leq M^{\epsilon}\text{supp}_{t\in[1-N^{-\frac{\epsilon}{2}},1-N^{-\frac{\epsilon}{2}}+M^{-\epsilon}]}||\omega(x,t)||_{C^{\frac{1}{2}-6\epsilon}}\leq CM^{-4\epsilon}.$$ This combined with [\[w25u35\]](#w25u35){reference-type="eqref" reference="w25u35"} gives us [\[cotas1\]](#cotas1){reference-type="eqref" reference="cotas1"}. Next, using lemma [Lemma 9](#ultimolemma){reference-type="ref" reference="ultimolemma"} plus $$||\partial_{x_{1}}u_{1}(\omega^{1})||\leq C_{0}N^{\epsilon}\leq \frac{4}{C_{0}}\ln(M)^2$$ gives us [\[cotas2\]](#cotas2){reference-type="eqref" reference="cotas2"} and finishes the proof. ◻ **Corollary 1**. 
*For any $\delta>0$, there exists a solution to the forced 3D-Euler equation in vorticity formulation $\omega(x,t)$ with force $F(x,t)$ fulfilling $\omega(x,t)\in C^{\frac{1}{2}-\delta}$ for $t\in[0,1)$, $$\text{sup}_{t\in[0,1)}||F(x,t)||_{C^{\frac{1}{2}-\delta}}<\infty$$ and $$\text{lim}_{t\rightarrow 1}||\omega(x,t)||_{L^{\infty}}=\infty.$$* *Proof.* First, given $\epsilon>0$, for some big $N$ we define $\omega^{0}(x,t)=(\omega^{0}_{1}(x,t),\omega^{0}_{2}(x,t),\omega^{0}_{3}(x,t))$ with $$\omega^{0}_{1}(x,t)=0$$ $$\omega^{0}_{2}(x,t)=-\int_{-\infty}^{x_{2}}\partial_{x_{3}}\omega^{0}_{3}(x_{1},s,x_{3})ds,$$ $$\omega^{0}_{3}(x,t)=-M^{\epsilon}f(M^{\frac{1}{2}-\epsilon}x_{3})\sin(Mx_{1})f(M^{\frac{1}{2}+\epsilon}x_{1})\sin(Mx_{2})f(M^{\frac{1}{2}}x_{2}),$$ ($f(z)\in C^{\infty}$, $0\leq f(z)\leq 1$, $f(z)=1$ if $|z|\leq \frac12$, $f(z)=0$ if $|z|\geq 1$), which is a solution to the forced incompressible 3D-Euler equation in vorticity formulation with some force $F^{0}(x,t)\in C^{\infty}$ and $||F^{0}(x,t)||_{C^{1}}\leq C.$ Furthermore, we can apply lemma [Lemma 4](#vaprox){reference-type="ref" reference="vaprox"} to $R(0,0,\omega^{0}_{3})$ show that, for any $a>0$, if $N$ big, $$\frac{(1+a)C_{0}N^{\epsilon}}{2}\geq \partial_{x_{1}}u_{1}(\omega^{0}_{3})\geq \frac{(1-a)C_{0}N^{\epsilon}}{2}$$ which, combined with the bounds for $\omega^{0}_{2}$ and [\[lnu\]](#lnu){reference-type="eqref" reference="lnu"} gives us, for $N$ big $$C_{0}N^{\epsilon}\geq \partial_{x_{1}}u_{1}(\omega^{0})\geq \frac{C_{0}N^{\epsilon}}{4}.$$ With this $\omega^{0}(x,t)$, we can apply theorem [Theorem 10](#glue){reference-type="ref" reference="glue"} with $\bar{\omega}^{1}(x,t)=\omega^{0}(x,t)$, $\bar{F}^{1}(x,t)=F^{0}(x,t)$ to obtain $\bar{\omega}^{2}(x,t)$ such that $\omega^{0}(x,t)+\bar{\omega}^{2}(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation with force $F^{0}(x,t)+\bar{F}^{2}(x,t)$, fulfilling, if $M_{0}^{\frac{1}{2}}:=e^{\int_{1-N^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u_{1}(\omega^0)(s,0)ds}$, $$||\omega^{0}(x,t)+\bar{\omega}^2(x,t)||_{C^{2.5}},||u(\omega^{0}(x,t)+\bar{\omega}^2(x,t))||_{C^{3.5}}\leq M_{0}^{4},||\bar{F}^2(x,t)||_{C^{\frac{1}{2}-6\epsilon}}\leq M_{0}^{-\epsilon}$$ and for $t\in[1-M_{0}^{-\frac{\epsilon}{2}},1]$ $$C_{0}M_{0}^{\epsilon}\geq \partial_{x_{2}}u_{2}(\omega^{0}+\bar{\omega}^2)\geq \frac{C_{0}}{4}M_{0}^{\epsilon}, ||\partial_{x_{1}}u_{1}(\omega^0+\bar{\omega}^{2})||_{L^{\infty}}\leq \ln(M_{0})^{3},$$ and, for $t\in[0,1-N^{\frac{\epsilon}{2}}]$, $\bar{F}^2(x,t)=\bar{\omega}^{2}(x,t)=0$. We can define then $\omega^{1}(x,t)=\omega^{0}(x,t)+\bar{\omega}^{2}(x,t)$ and iterate the process. 
More precisely, we apply theorem [Theorem 10](#glue){reference-type="ref" reference="glue"} with $\bar{\omega}^1(x,t)=R^{-i}(\omega^{i}(x,t))$, which satisfies the hypothesis of the theorem, to obtain a new $\bar{\omega}^{2}(x,t)$ such that $R^{-i}(\omega^{i} (x,t))+\bar{\omega}^{2}(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation with force $R^{-i}(F^{i}(x,t))+\bar{F}^{2}(x,t)$, and, if we define $M_{i}^{\frac{1}{2}}:=e^{\int_{1-M_{i-1}^{-\frac{\epsilon}{2}}}^{1}\partial_{x_{1}}u_{1}(\omega^i)(s,0)ds}$, we have $$||R^{-i}(\omega^{i}(x,t))+\bar{\omega}^2(x,t)||_{C^{2.5}},||u(R^{-i}(\omega^{i}(x,t))+\bar{\omega}^2(x,t))||_{C^{3.5}}\leq M_{i}^{4},||\bar{F}^2(x,t)||_{C^{\frac{1}{2}-6\epsilon}}\leq M_{i}^{-\epsilon}$$ and for $t\in[1-M_{i}^{-\frac{\epsilon}{2}},1]$ $$C_{0}M_{i}^{\epsilon}\geq \partial_{x_{3}}u_{3}(R^{-i}(\omega^{i}(x,t))+\bar{\omega}^2)\geq \frac{C_{0}}{4}M_{i}^{\epsilon}, ||\partial_{x_{1}}u_{1}(R^{-i}(\omega^{i}(x,t))+\bar{\omega}^{2})||_{L^{\infty}}\leq \ln(M_{i})^{3},$$ and $\bar{\omega}^2(x,t))=0$ if $t\in[0,1-M_{i-1}^{-\frac{\epsilon}{2}}]$. Then, we define $\omega^{i+1}(x,t)=\omega^{i}(x,t)+R^{i}(\bar{\omega}^{2}(x,t))$, which is a solution to the forced incompressible 3D-Euler equation in vorticity formulation with force $F^{i+1}(x,t)=F^{i}(x,t)+R^{i}(\bar{F}^{2}(x,t))$. Finally, the solution that blows up will be $$\omega^{\infty}(x,t)=\text{lim}_{i\rightarrow\infty}\omega^{i}(x,t).$$ We note that the limit trivially exists for all $t\in[0,1)$, since, for $t\in[0,1-M_{j}^{-\frac{\epsilon}{2}}]$, we have that, for $i_{1},i_{2}\geq j+1$ $$\omega^{i_{1}}(x,t)=\omega^{i_{2}}(x,t),\ F^{i_{1}}(x,t)=F^{i_{2}}(x,t)$$ and $\omega^{i}(x,t)\in C^{2.5}$ for $t\in[0,1]$. For the same reason, we have that for $t\in[0,1)$, $\omega^{\infty}(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation, since $\omega^{i}(x,t)$ is a solution to the forced incompressible 3D-Euler equation in vorticity formulation for $t\in[0,1]$. To obtain bounds for the force, for any fixed $t_{0}\in[0,1)$ we have $$||F^{\infty}(x,t_{0})||_{C^{\frac{1}{2}-6\epsilon}}\leq ||F_{0}(x,t)||_{C^{\frac{1}{2}-6\epsilon}}+\sum_{j=0}^{\infty}(M_{j})^{-\epsilon}\leq C$$ where we used that for $j$ big $M_{j}^{-\epsilon}<< j^{-2}$. Finally, we only need to check that the solution blows up, but we know that, for $t_{i}=1-M_{i-1}^{-\frac{\epsilon}{2}}$ $$|u(\omega^{\infty}(x,t_{i}))|_{C^1}=|u(\omega^{i}(x,t_{i}))|_{C^1}\geq \frac{C_{0}}{4}M_{i-1}^{\epsilon},$$ $$||\omega^{\infty}(x,t_{i})||_{C^{2.5}}=||\omega^{i}(x,t_{i})||_{C^{2.5}}\leq M_{i-1}^{4}$$ and using that $$|u(\omega^{\infty}(x,t_{i}))|_{C^1}\leq C||\omega^{\infty}(x,t_{i})||_{L^{\infty}}\ln(10+||\omega^{\infty}(x,t_{i})||_{C^{2.5}})$$ gives us $$||\omega^{\infty}(x,t_{i})||_{L^{\infty}}\geq \frac{CM_{i-1}^{\epsilon}}{\ln(10+M_{i-1}^{4})}$$ and since $M_{i-1}$ tends to infinity as $i$ tends to infinity this finishes the proof. ◻ # Acknowledgements {#acknowledgements .unnumbered} This work is supported in part by the Spanish Ministry of Science and Innovation, through the "Severo Ochoa Programme for Centres of Excellence in R$\&$D (CEX2019-000904-S)" and 114703GB-100. We were also partially supported by the ERC Advanced Grant 788250. 93 Bardos, C., Titi, E.S.: Euler equations for incompressible ideal fluids. Uspekhi Mat. Nauk, 62(3 (375)):5--46, 2007. Beale, J.T., Kato, T., Majda, A.: "Remarks on the breakdown of smooth solutions for the 3-D Euler equations". Commun. Math. Phys. 
94, 61--66, 1984 Bourgain, J., Li, D.: "Strong illposedness of the incompressible Euler equation in integer $C^m$ spaces". Geom. Funct. Anal. 25 (2015), no. 1, 1--86. Bourgain, J., Li, D.: "Strong ill-posedness of the incompressible Euler equation in borderline Sobolev spaces". Inventiones Mathematicae, 201(1), 2014, 97-157. Chen, J. Remarks on the smoothness of the $C^{1,\alpha}$ asymptotically self-similar singularity in the 3D Euler and 2D Boussinesq equations. https://arxiv.org/pdf/2309.00150.pdf Chen, J., Hou., T., "Finite time blowup of 2D Boussinesq and 3D Euler equations with $C^{1,\alpha}$ velocity and boundary". Communications in Mathematical Physics, 383(3):1559--1667, 2021. Chen, J., Hou., T.: "Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data I". arXiv:2210.07191 Chen, J., Hou., T.: Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data II: Rigorous numerics. arXiv preprint: arXiv:2305.05660 Constantin, P.: On the Euler equations of incompressible fluids. Bull. Amer. Math. Soc. (N.S.), 44(4):603--621, 2007. Constantin, P., Fefferman, C., Majda, A.: "Geometric constraints on potentially singular solutions for the 3-D Euler equations". Comm. Partial Differential Equations 21 (1996), no. 3-4, 559--571. Cordoba, D., Martinez-Zoroa, L., Ozanski, W.: Instantaneous gap loss of Sobolev regularity for the 2D incompressible Euler equations. https://arxiv.org/abs/2210.17458. To appear in Duke Math.J. Cordoba, D., Martinez-Zoroa, L., Zheng, F.: Finite time singularities to the 3D incompressible Euler equations for solutions in $C^{\infty}(\mathbb{R}^3 \setminus {0})\cap C^{1,\alpha}\cap L^2$ . arXiv preprint arXiv:2308.12197, 2023. Drivas, T. and Elgindi, T. Singularity formation in the incompressible Euler equation in finite and infinite time. arXiv:2203.17221v1, 2022. Elgindi, T.M., Jeong, I.-J.: "Ill-posedness for the Incompressible Euler Equations in Critical Sobolev Spaces". Annals of PDE 3(1), 19pp, (2017). Elgindi, T.M.: "Finite-time singularity formation for $C^{1,\alpha}$ solutions to the incompressible Euler equations on $R^3$". Ann. Math. (2) 194.3 (2021), pp. 647--727. Elgindi, T.M., Ghoul, T.-E., Masmoudi, N.: "On the Stability of Self-similar Blow-up for $C^{1,\alpha}$ Solutions to the Incompressible Euler Equations on $R^3$". Camb. J. Math. 9 (2021), no. 4, 1035--1075. Elgindi, T.M. and Jeong, I-J. Finite-time singularity formation for strong solutions to the axisymmetric 3D Euler equations. Annals of PDE 5.2 (2019): 1-51. Elgindi, T.M., Masmoudi, N. "$L^{\infty}$ ill-posedness for a class of equations arising in hydrodynamics". Arch. Ration. Mech. 235(3) 1979-2025, (2020). Fefferman, C.L.: Existence and smoothness of the Navier--Stokes equation, The millennium prize problems (2006), 57--67. Gibbon, J.D.: The three-dimensional Euler equations: where do we stand? Phys. D, 237(14- 17):1894--1904, 2008. Gunther, N.: "On the motion of fluid in a moving container". Izvestia Akad. Nauk USSR, Ser. Fiz.--Mat., 20(1323-1348), 1927. Hölder, E.: "Über die unbeschränkte Fortsetzbarkeit einer stetigen ebenen Bewegung in einer unbegrenzten inkompressiblen Flüssigkeit". Math. Z. 37(1), 727--738, 1933 Hou, T.Y. Blow-up or no blow-up? a unified computational and analytic approach to 3D incompressible Euler and Navier-Stokes equations. Acta Numerica, 18(1):277--346, 2009. Kiselev, A.: Small scales and singularity formaiton in fluid dynamics. 
Progress in mathematical fluid dynamics, 125--161, Lecture Notes in Math., 2272, Fond. CIME/CIME Found. Subser., Springer, Cham, \[2020\], ©2020. Kiselev, A., Šverák, V.: "Small scale creation for solutions of the incompressible twodimensional Euler equation". Ann. Math. (2) 180(3), 1205--1220 (2014). Lichtenstein, L.: "Über einige Existenzprobleme der Hydrodynamik homogenerunzusammendrück barer, reibungsloser Flüßbigkeiten und die Helmgoltzschen Wirbelsatze". Mathematische Zeitschrift 32 (1930), 608--725. Majda, A.J., Bertozzi, A.L. "Vorticity and Incompressible Flow". Cambridge Texts in Applied Mathematics. Cambridge University Press, 2001. M.R.Ukhovskii and V.I.Yudovich. Axially symmetric flows of ideal and viscous fluids filling the whole space. Prikl. Math. Meh., 32(1):59--69, 1968. Wang, Y., Lai, C., Gómez-Serrano, J. and Buckmaster, T. Asymptotic self-similar blow up profile for 3-D Euler via physicsinformed neural networks, Phys. Rev. Lett. 130 (2023), no. 24, Paper No. 244002, 6 pp. Wolibner, W.: "Un theorème sur l'existence du mouvement plan d'un fluide parfait, homogène, incompressible, pendant un temps infiniment long". Math. Z. 37(1), 698-- 726, 1933 [^1]: dcg\@icmat.es [^2]: luis.martinezzoroa\@unibas.ch
--- abstract: | We consider an offline learning problem for an agent who first estimates an unknown price impact kernel from a static dataset, and then designs strategies to liquidate a risky asset while creating transient price impact. We propose a novel approach for a nonparametric estimation of the propagator from a dataset containing correlated price trajectories, trading signals and metaorders. We quantify the accuracy of the estimated propagator using a metric which depends explicitly on the dataset. We show that a trader who tries to minimise her execution costs by using a greedy strategy purely based on the estimated propagator will encounter suboptimality due to so-called spurious correlation between the trading strategy and the estimator and due to intrinsic uncertainty resulting from a biased cost functional. By adopting an offline reinforcement learning approach, we introduce a pessimistic loss functional taking the uncertainty of the estimated propagator into account, with an optimiser which eliminates the spurious correlation, and derive an asymptotically optimal bound on the execution costs even without precise information on the true propagator. Numerical experiments are included to demonstrate the effectiveness of the proposed propagator estimator and the pessimistic trading strategy. author: - Eyal Neuman - Wolfgang Stockinger - "Yufei Zhang[^1]" title: An Offline Learning Approach to Propagator Models --- Mathematics Subject Classification (2010): : 62L05, 60H30, 91G80, 68Q32, 93C73, 93E35, 62G08 JEL Classification: : C02, C61, G11 Keywords: : optimal portfolio liquidation, price impact, propagator models, predictive signals, Volterra stochastic control, offline reinforcement learning, nonparametric estimation, pessimistic principle, regret analysis # Introduction {#sec-mot-res} Price impact refers to the empirical fact that execution of a large order affects the risky asset's price in an adverse and persistent manner and is leading to less favourable prices for the trader. Accurate estimation of transactions' price impact is instrumental for designing profitable trading strategies. Propagator models serve as a central tool in describing these phenomena mathematically (see @bouchaud-gefen [@gatheral2010no]). These models express price moves in terms of the influence of past trades, and give a reliable reduced form view on the limit order book reaction for trades execution. The model's tractability provides a convenient formulation for stochastic control problems arising from optimal portfolio liquidation [@AS12; @GSS; @AJ-N-2022; @ANV-23]. For an agent who wishes to liquidate a large order, a common practice for avoiding undesirable execution costs due to price impact, is to split the so called *metaorder* into smaller child orders described by $\{u_{t_j} :\ 0=t_1< \ldots <t_M=T \}$ over a time interval of length $T$. Propagator models assume that there exists a matrix $G = (G_{i,j})$, known as the *propagator* or the *price impact kernel*, such that the asset's execution price $S$ is given by $$\label{p-proc} S_{t_{i+1}} = S_{t_1} + \sum_{j =1}^{i} G_{i,j} u_{t_j} +A_{t_i}+ \varepsilon_{t_i} , \quad i=1,\ldots, M,$$ where the process $A+\varepsilon$ represents the fundamental (or unaffected) asset price. Here $A$ is often referred to as a trading signal and $\varepsilon$ is a mean-zero noise process (see e.g., @bouchaud_bonart_donier_gould_2018 [Chapter 13]). 
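To make the discrete dynamics [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"} concrete, the following minimal sketch simulates a single execution episode. It is not part of the paper's numerical experiments; the power-law decay exponent, the TWAP-style child-order schedule, the AR(1) signal and the noise level are all hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 50                              # number of child orders (illustrative)
gamma, x0, S0 = 0.6, 10.0, 100.0    # hypothetical decay exponent, metaorder size, initial price

# Lower-triangular convolution propagator G_{i,j} = (i - j + 1)^{-gamma} for j <= i.
idx = np.arange(M)
G = np.tril((np.abs(idx[:, None] - idx[None, :]) + 1.0) ** (-gamma))

# TWAP-style child orders and an AR(1) trading signal (both toy choices).
u = np.full(M, x0 / M)
A = np.zeros(M)
for i in range(1, M):
    A[i] = 0.9 * A[i - 1] + 0.1 * rng.standard_normal()

eps = 0.05 * rng.standard_normal(M)    # mean-zero noise epsilon_{t_i}
S = S0 + G @ u + A + eps               # execution prices S_{t_2}, ..., S_{t_{M+1}}

print("average execution price:", S.mean())
```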
The propagator $G_{i,j}$ typically decays for $i \gg j$ and hence the sum on the right-hand side of [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"} is often referred to as *transient price impact*. A well-known example à la Almgren and Chriss is the case where all entries of $G$ are identical; then this sum represents permanent price impact. If $G = \lambda\mathbb{I}_M$, where $\lambda>0$ and $\mathbb{I}_M$ is the $M \times M$ identity matrix, then the sum represents temporary price impact (see [@AlmgrenChriss1; @OPTEXECAC00]). In the aforementioned setting, the trader can only observe the visible price process $S$, her own trades $u$ and the trading signal $A$. In order to quantify the price impact and to design a profitable trading strategy, the trader needs a precise estimate of the matrix $G$. Some estimators for the convolution case (i.e., where $G_{i,j} = G_{i-j}$) were proposed in [@bouchaud-gefen; @Forde:2022aa; @Toth_17; @Benzaquen_22] and in Chapter 13.2 of [@bouchaud_bonart_donier_gould_2018]. These regression-based estimation methods are already common practice in the industry; however, they ignore numerous mathematical issues, such as the ill-posedness of the least-squares estimation problem, dependencies between price trajectories and spurious correlations between the estimator and greedy strategies. Hence the convergence of these price impact estimators remains unproved, and rigorous results on the convergence rate are considered a long-standing open problem. In [@neuman2023statistical] a novel approach for nonparametric estimation of the price impact kernel was proposed for the continuous-time version of [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"}. Sharp bounds on the convergence rate, which are characterised by the singularity of the propagator, were derived therein. The estimation phase in [@neuman2023statistical] was designed for an online reinforcement learning framework where the agent interacts with the environment while trading. However, there are a few crucial drawbacks to the estimation methods developed in [@neuman2023statistical]: the trader had to use the same deterministic strategy in all interactions with the market, and moreover the price trajectories in each episode were assumed to be independent. These assumptions do not apply, for example, in crowded markets, where all agents follow similar trading signals (see [@N-V-2021]). Lastly, and most importantly, online learning for portfolio liquidation is very expensive. In order to get one sample path of $(S,u,A)$ in [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"} the trader needs to execute an order according to a highly suboptimal strategy, which leads to undesirable liquidation costs. For this reason most propagator calibrations are done offline, while fine-tuning of the estimator should take place using online algorithms. In order to resolve the above issues, the goals of this work are as follows: - to provide an offline estimator for the propagator $G$ in [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"} based on a static dataset of historical trades and prices and to derive an optimal convergence rate for this estimator. - to propose a stochastic control framework that will eliminate the correlation between the offline estimator $\hat G$ and a greedy trading strategy based on $\hat G$, and to derive a tight bound on its performance gap. We will first provide details on objective (i). 
We choose to work in a discrete-time setting, which is compatible with financial datasets and is in line with common practice in the financial industry. Our approach to the propagator estimation is offline, since most of the existing data available to researchers is historical. Further, as mentioned before, online learning in order execution is expensive due to suboptimal interaction with the market using greedy trading strategies. Hence, deriving estimates for the price impact based on existing datasets could save substantial trading costs. In the following, we assume that there exists a dataset $\mathcal{D}$ with $N$ price and signal trajectories, including the corresponding agents' metaorders following the dynamics in [\[p-proc\]](#p-proc){reference-type="eqref" reference="p-proc"}, $$\label{data-b} \mathcal{D} = \left\{(S^{(n)}, u^{(n)}, A^{(n)}) \mid n= 1, \ldots, N\right\},$$ see Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"} for details. In Theorems [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}, we improve the convergence rate results of Theorem 2.10 of [@neuman2023statistical]. Our contributions to propagator estimation are as follows: - We relax the assumption of independence between the price trajectories, by assuming that the noise process $\varepsilon^{(n)}$ is conditionally sub-Gaussian (see Assumption [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"}). - We allow for any price-adaptive strategies $u^{(n)}$ in the dataset $\mathcal{D}$, in contrast to choosing only one deterministic strategy that repeats itself (i.e., $u^{(n)} = u^{(m)}$ for all $1\leq m,n \leq N$) as in Theorem 2.10 of [@neuman2023statistical]. - We allow for estimation of non-convolution propagators $G$ and we show that in the convolution case the convergence rate is substantially faster (see Remark [Remark 16](#rem-con-rate){reference-type="ref" reference="rem-con-rate"}). - We present a conceptually new norm, $$\label{norm-v-n} \|\hat G - G\|_{V_{N,\lambda}} := \|(\hat G - G)V_{N,\lambda}^{\frac{1}{2}} \|,$$ which not only quantifies the estimation error in terms of the quality of the dataset, but will also have a crucial role in the pessimistic offline learning approach which we present in the following. Here $\hat{G}$ denotes the estimated propagator, and $V_{N,\lambda}$ is a matrix that measures the covariance between the $u^{(n)}$'s in $\mathcal D$ (see [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"}). Next we discuss objective (ii) of this work: to propose a stochastic control framework that will eliminate the correlation between a greedy strategy and the estimated price impact kernel. Precise estimation of the propagator is crucial in portfolio liquidation and portfolio selection problems in order to minimise the trading costs. The following cost functional measures the expected costs of a trade of size $x_0$ executed within the time interval $[0,T]$ (see [@cartea15book; @GSS2; @GSS; @Lehalle-Neum18] among others): $$\label{cost} \begin{aligned} J(u;G) &= \mathbb{E}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} G_{i,j} u_{t_j} u_{t_i} + \sum_{i=1}^M A_{t_i} u_{t_i} \right], \end{aligned}$$ where $u=(u_{t_i})_{i=1}^M$ satisfies a fuel constraint, i.e., $\sum_{i=1}^M u_{t_i} = x_0$. 
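The cost [\[cost\]](#cost){reference-type="eqref" reference="cost"} is straightforward to evaluate numerically for a deterministic strategy, since the price impact term is then deterministic and only the signal term requires averaging. The sketch below is a toy illustration of this computation; the kernel, the simulated signal paths and all parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def execution_cost(u, G, A_paths):
    """Approximate J(u;G) = E[ sum_i sum_{j<=i} G_{ij} u_{t_j} u_{t_i} + sum_i A_{t_i} u_{t_i} ]
    for a deterministic strategy u by averaging the signal term over sampled paths."""
    impact_cost = u @ (np.tril(G) @ u)   # double sum over j <= i
    signal_cost = np.mean(A_paths @ u)   # Monte Carlo estimate of E[ <A, u> ]
    return impact_cost + signal_cost

rng = np.random.default_rng(1)
M, x0 = 50, 10.0
idx = np.arange(M)
G = np.tril((np.abs(idx[:, None] - idx[None, :]) + 1.0) ** -0.6)   # toy power-law kernel

u = np.full(M, x0 / M)                                             # TWAP, satisfies sum_i u_i = x0
A_paths = np.cumsum(0.1 * rng.standard_normal((1000, M)), axis=1)  # toy random-walk signals

print("fuel constraint:", u.sum(), " estimated cost:", execution_cost(u, G, A_paths))
```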
The first term in the expectation on the right-hand side of [\[cost\]](#cost){reference-type="eqref" reference="cost"} represents the price impact costs and the second term represents the fundamental price of the trade. The two-step procedure, which starts with an offline estimation of the propagator $G$ and then uses the estimator $\hat G$ in order to obtain a greedy strategy $\hat u$ (i.e., to optimize $J(u;\hat{G})$ over a set of admissible controls), introduces some statistical issues and challenges which fit into the framework of *offline reinforcement learning* (offline RL). As mentioned before, typical training schemes of online RL algorithms rely on interaction with the environment; in the case of trading this interaction is expensive, and in other areas such as autonomous driving or healthcare it is dangerous. Recently, the area of offline RL has emerged as a promising candidate to overcome this barrier. Offline RL algorithms learn a policy from a static dataset which is collected before the agent executes any greedy policy. However, these algorithms may suffer from several pathological issues, which are classified as follows: - Intrinsic uncertainty: the dataset possibly fails to cover the trajectory induced by the optimal policy, which however carries the essential information. - Spurious correlation: the dataset possibly happens to cover greedy trajectories unrelated to the optimal policy, which by chance induce low execution costs. This leads to an estimation error of the true parameters, and to an underperforming greedy policy correlated with the estimated parameters. These issues were outlined and partially quantified for Markov decision processes with unknown distributions of transition probabilities and rewards in [@chang2021mitigating; @jin2021pessimism; @NEURIPS2020_f7efa4f8; @Li-Tang-21; @Uehara-21]. We demonstrate how these issues emerge in offline propagator estimation. In the spirit of Section 3.1 of [@jin2021pessimism], we decompose the suboptimality of any strategy into the following ingredients: the spurious correlation, the intrinsic uncertainty and the optimization error. We denote by $( G^{\star}, u^{\star},J)$ the true propagator, the optimal strategy and the associated cost functional $J$ in [\[cost\]](#cost){reference-type="eqref" reference="cost"}, respectively. We use the notation $(\hat G, \hat u, \hat J)$ for the estimator of the true kernel $G^\star$, a convex cost functional $\hat J(\cdot;\hat G)$ using $\hat G$, and a greedy strategy $\hat u$ minimising this functional. Note that we do not insist that $\hat J$ and $J$ necessarily agree, that is, we allow $\hat J(\cdot, G) \not= J(\cdot, G)$ for some or all $G$'s. We further define the suboptimality of an arbitrary algorithm that aims to estimate $G^{\star}$ and also produces a strategy $\hat u$ as follows: $$\textrm{SubOpt} = J(\hat u; G^{\star}) - J( u^{\star}; G^{\star}).$$ We decompose the suboptimality of such an algorithm into the following components, along the same lines as Lemma 3.1 of [@jin2021pessimism], $$\label{sub-decom} \begin{aligned} \textrm{SubOpt} &=\underbrace{J(\hat u; G^{\star}) - \hat J( \hat u ; \hat G)}_\text{(i): Spurious Correlation} + \underbrace{\hat J( u^{\star}; \hat G) - J( u^{\star}; G^{\star})}_\text{(ii): Intrinsic Uncertainty} +\underbrace{\hat J( \hat u ; \hat G) -\hat J( u^\star ; \hat G)}_\text{(iii): Optimization Error}. 
\end{aligned}$$ Note that term (i) in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} is the most challenging to control, as $\hat u$, $\hat G$ and hence $\hat J$ simultaneously depend on the dataset $\mathcal{D}$. This leads to a spurious correlation between them. In Example [Example 1](#exmp-corr){reference-type="ref" reference="exmp-corr"} of Appendix [8](#sec-examples){reference-type="ref" reference="sec-examples"}, we show that such spurious correlation can lead to a greedy strategy $\hat u$ which is substantially suboptimal in a specific tractable case. More involved examples are given in Section [3](#sec-numerics){reference-type="ref" reference="sec-numerics"}. We refer to Section 3.2 of [@jin2021pessimism] for an additional negative example for multi-armed bandit problems. In particular, due to the correlation between $\hat u$ and $\hat G$, term (i) can have a large expectation, with respect to the probability measure induced by the dataset, even under the assumption that $\hat G$ is an unbiased estimator of $G^{\star}$. In contrast, term (ii) is less challenging to control, as $u^\star$ is the minimiser of $J(\cdot; G^{\star})$, hence it does not depend on $\mathcal D$ and in particular on $\hat J(\cdot; \hat G)$. In Section 4.3 of [@jin2021pessimism], it was shown that the intrinsic uncertainty is impossible to eliminate for linear Markov decision processes. Finally we notice that the optimization error in term (iii) is nonpositive as long as $\hat u$ is greedy (i.e., optimal) with respect to $\hat J(\cdot; \hat G)$. Note that even with the tools developed in this paper, the estimation of price impact can be quite sensitive to various conditions in the market, which are sometimes difficult to quantify precisely. As pointed out before, such deviations in the estimation of the true propagator will lead to undesirable costs due to spurious correlation (see Example [Example 1](#exmp-corr){reference-type="ref" reference="exmp-corr"}). For example, in [@Madhavan2001; @Mich-Neum20] the permanent and temporary price impact of the Russell 3000 annual additions and deletions events was computed. It was found that after 2008, the temporary market impact often presented a negative sign, presumably attributable to the 2010s bull market, which marked a positive trend in the equity stock market [@wsj_2019; @ft_2019; @reuters_2019]. While a regression model can be applied in order to fully test the effect of market trends on price impact, both the reference market index S$\&$P 500 and individual stock returns present significant autocorrelations across different lags [@EGADEESH1990], hence this type of analysis is quite involved. In order to minimise the ingredients of the suboptimality in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} we adopt a *pessimistic offline RL* framework, which penalises trading strategies that visit states or use actions which were not explored in the static dataset. We now define the uncertainty quantifier, which is a key concept in this theory. In the following we denote by $\mathbb P'$ the probability measure governing the dataset (see Section [\[sec-pessim\]](#sec-pessim){reference-type="eqref" reference="sec-pessim"} for the precise definition). We refer to $\mathcal A$ as the class of admissible controls with respect to the cost functional $J$ in [\[cost\]](#cost){reference-type="eqref" reference="cost"}, where a precise definition is given in [\[admiss\]](#admiss){reference-type="eqref" reference="admiss"}. 
**Definition 1** ($\delta$-uncertainty quantifier). *We say that $\Gamma:\mathcal A \rightarrow\mathbb{R}_+$ is a $\delta$-uncertainty quantifier with respect to $\mathbb{P}'$ if the event $$A = \left \{| J( u; G^{\star}) - J( u ; \hat G)| \leq \Gamma(u), \textrm{ for all } u\in \mathcal A \right \}$$ satisfies $\mathbb{P}' (A) \geq 1-\delta$ for $\delta \in (0,1)$.* Assuming that we have a $\delta$-uncertainty quantifier $\Gamma$, we now propose a candidate for $\hat J$ so that the spurious correlation in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} is nonpositive, $$\label{j-h-def} \hat J (u ; \hat G) = J (u ; \hat G) + \Gamma(u), \quad u \in \mathcal A.$$ We also require that $\Gamma(u)$ is convex so that the cost functional [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"} is convex. Indeed, by Definition [Definition 1](#def-quant){reference-type="ref" reference="def-quant"} and [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"}, the spurious correlation is now bounded by $$\label{spurious-neg} J(\hat u; G^{\star}) - \hat J( \hat u ; \hat G) = J(\hat u; G^{\star}) - J (\hat u ; \hat G) - \Gamma(\hat{u})\leq 0,$$ with probability $1-\delta$ for $\delta \in (0,1)$. Using $\hat J$ as in [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"} therefore ensures that the suboptimality in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} is governed only by term (ii), which characterises the intrinsic uncertainty. Our definitions and main results in Sections [2.3](#sec-pessim){reference-type="ref" reference="sec-pessim"} and [2.4](#sec-res-pess){reference-type="ref" reference="sec-res-pess"} propose such a $\delta$-uncertainty quantifier $\Gamma$ which is sufficiently efficient and helps us to establish a tight upper bound for the suboptimality in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"}. It is important to notice that deriving $\Gamma$ as in [\[def-quant\]](#def-quant){reference-type="eqref" reference="def-quant"} is highly nontrivial since the true propagator $G^{\star}$ is not observable. In order to derive an uncertainty quantifier we use the bounds, which were developed in the estimation part of the paper, on the distance between the estimator $\hat G$ and $G^{\star}$ in [\[norm-v-n\]](#norm-v-n){reference-type="eqref" reference="norm-v-n"}, under the norm $\|\cdot\|_{V_{N,\lambda}}$. Specifically, in [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"} we choose $\hat J(\cdot ;\hat G) = J_{P,1}(\cdot ; G_{N,\lambda})$ and in Corollary [Corollary 20](#cor-quant){reference-type="ref" reference="cor-quant"} we show that the $\delta$-uncertainty quantifier is given by $$\label{ell-1-def} \Gamma(u) = {\mathbb{E}}[\ell_1(V_{N,\lambda},u)], \quad \textrm{where } \ \ell_1(V_{N,\lambda},u) \coloneqq C(N) \big\| V_{N,\lambda}^{-\frac{1}{2}} u \big\|,$$ for some constant $C(N) >0$. The optimiser of $\hat J(\cdot ;\hat G)$ is referred to as a pessimistic optimal strategy and it is denoted by $u^{(1)}$. In Example [Example 2](#exmp-l1-pen){reference-type="ref" reference="exmp-l1-pen"} we demonstrate the effect of $\ell_1$ on the pessimistic optimal strategy in a simple and tractable setup. More realistic examples are given in Section [3](#sec-numerics){reference-type="ref" reference="sec-numerics"}. This is the first application of pessimistic offline RL in the area of quantitative finance and also in continuous-state stochastic control. 
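To illustrate how the penalty [\[ell-1-def\]](#ell-1-def){reference-type="eqref" reference="ell-1-def"} enters the pessimistic objective [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"}, the sketch below evaluates $J(u;\hat G)+C(N)\|V_{N,\lambda}^{-1/2}u\|$ for a deterministic candidate strategy. The constant $C(N)$, the toy estimated kernel and the form $V_{N,\lambda}=\lambda\mathbb{I}_M+\sum_{n}u^{(n)}(u^{(n)})^{\top}$ used for the regularised Gram matrix are all assumptions made for this illustration; the paper's precise choices appear in Sections 2.3 and 2.4.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(2)
M, N, lam = 50, 200, 1.0

# Hypothetical historical strategies u^{(n)} and an assumed ridge-type Gram matrix V_{N,lambda}.
U = rng.standard_normal((N, M))
V = lam * np.eye(M) + U.T @ U

def penalty(u, V, C_N):
    """ell_1(V, u) = C(N) * ||V^{-1/2} u|| = C(N) * sqrt(u^T V^{-1} u)."""
    return C_N * np.sqrt(u @ solve(V, u))

def pessimistic_cost(u, G_hat, A_mean, V, C_N):
    """J(u; G_hat) + penalty for a deterministic strategy u, with E[A_{t_i}] collected in A_mean."""
    return u @ (np.tril(G_hat) @ u) + A_mean @ u + penalty(u, V, C_N)

idx = np.arange(M)
G_hat = np.tril((np.abs(idx[:, None] - idx[None, :]) + 1.0) ** -0.6)   # toy estimated kernel
u = np.full(M, 10.0 / M)                                               # candidate TWAP strategy
print("pessimistic cost:", pessimistic_cost(u, G_hat, np.zeros(M), V, C_N=1.0))
```

Directions of $u$ that are poorly covered by the historical strategies correspond to large values of $u^\top V_{N,\lambda}^{-1}u$ and are therefore penalised, which is exactly the pessimism principle at work.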
In the reinforcement learning literature, mathematical results that quantify these issues of offline RL are scarce, and they focus on Markov decision processes (MDPs). In the following we summarise our contributions in this direction: - In Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}(i) we prove that the spurious correlation of $u^{(1)}$ is nonpositive and that the suboptimality of $u^{(1)}$, which is essentially due to the intrinsic uncertainty, is bounded by $$\textrm{SubOpt} = J( u^{(1)}; G^{\star}) - J( u; G^{\star}) \leq 2 \mathbb{E} \left[ \ell_1(V_{N,\lambda},u) \right],$$ for any admissible strategy $u$, including $u^{\star}$, which is unknown. - In Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}(i), we prove that ${\mathbb{E}}[\ell_1(V_{N,\lambda},u)] = O(N^{-1/2}(\log N)^{1/2})$, where $N$ corresponds to the number of samples in the static dataset [\[data-b\]](#data-b){reference-type="eqref" reference="data-b"}, under the assumption of a well-explored dataset (see Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"} and Remark [Remark 23](#rem-asump-mat){reference-type="ref" reference="rem-asump-mat"}). Together with [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} and [\[spurious-neg\]](#spurious-neg){reference-type="eqref" reference="spurious-neg"} this leads to the asymptotic behaviour of the performance gap, $$\label{sub-res} \textrm{SubOpt} =O(N^{-1/2}(\log N)^{1/2}),$$ where the constants of this rate are given explicitly. Special treatment and slightly refined results are given for convolution kernels in Theorems [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}(ii) and [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}(ii). In Remark [Remark 23](#rem-asump-mat){reference-type="ref" reference="rem-asump-mat"}, we show that for a convolution type propagator the assumption of a well-explored dataset should hold with asymptotically high probability for large $N$, in contrast to a Volterra type propagator. - Note that the expected convergence rate of a generic least-squares estimator $\hat G$ to the true kernel $G^\star$ is of order $O(N^{-1/2})$, hence the statement of Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} is sharp up to a logarithmic factor. In fact the logarithmic correction in [\[sub-res\]](#sub-res){reference-type="eqref" reference="sub-res"} is a result of the estimation scheme of $\hat G$, as described in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}, which is subject to the quality of coverage of the dataset (see [\[eq:G_unconstraint_minimiser\]](#eq:G_unconstraint_minimiser){reference-type="eqref" reference="eq:G_unconstraint_minimiser"}). For a sufficiently regular dataset the regularisation is not needed and the logarithmic term in Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} will vanish. This will give us asymptotically sharp bounds in Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}(i), that is $\textrm{SubOpt} =O(N^{-1/2} )$. We compare the contribution of this work to [@jin2021pessimism], where partial results on pessimistic offline RL with respect to spurious correlation and intrinsic uncertainty were derived for MDPs. 
The bound on suboptimality in Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} coincides with the corresponding bound established in Theorem 4.2 of [@jin2021pessimism] that deals with a much simpler case of a class of MDPs where the rewards and distribution of the transition probabilities are Markovian. Note that stochastic control problems that involve propagators not only take place in a continuous state space but are also non-Markovian (see e.g., [@AJ-N-2022]). The convergence rate of $O(N^{-1/2}\log N)$ in Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} coincides with the convergence rate established in Corollary 4.5 of [@jin2021pessimism] for *linear* MDP under similar assumptions. #### Structure of the paper: In Section [2](#sec-l-p){reference-type="ref" reference="sec-l-p"} we setup the trading model and describe our main results on propagator estimation in Theorems [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}, and on the performance gap for pessimistic learning in Theorems [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} and [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}. Section [3](#sec-numerics){reference-type="ref" reference="sec-numerics"} is dedicated to numerical illustrations of the propagator estimation and of pessimistic learning strategies. In Section [4](#sec-mart){reference-type="ref" reference="sec-mart"}, we develop the mathematical foundations for our results on convergence of the propagator estimators. Sections [5](#sec-pf-est){reference-type="ref" reference="sec-pf-est"}--[7](#sec-pf-opt){reference-type="ref" reference="sec-pf-opt"} are dedicated to the proofs of the main results. In Appendix [8](#sec-examples){reference-type="ref" reference="sec-examples"} we provide some simple and tractable examples for the framework of pessimistic RL in portfolio liquidation. In Appendix [9](#sec-examples_noisy){reference-type="ref" reference="sec-examples_noisy"} we provide further numerical experiments for estimation of Volterra kernels. # Problem formulation and main results {#sec-l-p} In this section we present our results on the optimal liquidation model which was briefly described in [\[cost\]](#cost){reference-type="eqref" reference="cost"}, the price impact estimation with offline data and finally on the pessimistic control problem which corresponds to [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"}. ## The optimal liquidation model {#sec-liq-model} We fix $T>0$ along with an equidistant partition of the interval $[0,T]$, $\mathbb{T} \coloneqq \lbrace 0=t_1,\ldots,t_M=T \rbrace$. Let $(\Omega, \mathcal F, \mathbb F=\{\mathcal F_{t_i}\}_{i=1}^M, \mathbb{P})$ be a filtered probability space and let $\mathcal F_0$ be the null $\sigma$-field. Let $(A_{t_i})_{i=1}^M$ and $(\varepsilon_{t_i})_{i=1}^M$ be $\mathbb F$-adapted stochastic processes satisfying $$\mathbb{E}[\varepsilon_{t_i}^2] < \infty, \quad \mathbb{E}[A_{t_i}^2] < \infty, \quad \textrm{for all } i=1,\ldots,M.$$ where we further assume that $$\mathbb{E}[\varepsilon_{t_i}\mid \mathcal{F}_{t_{i-1}}]=0, \quad \textrm{for all } i=1,\ldots,M.$$ We consider a trader with a target position of $x_0 \in \mathbb{R}$ shares in a risky asset. 
The number of shares the trader holds at time $t_n$ is prescribed as $$X_{t_n} = \sum_{i=1}^{n} u_{t_i}, \quad n=1,\ldots,M,$$ where $u = (u_{t_1}, \ldots, u_{t_M})^\top$ denotes her trading speed, chosen from the set of admissible strategies $$\label{admiss} \mathcal A = \left\{ (u_{t_i})_{i=1}^M \textrm{ are } \mathbb F\textrm{-adapted and } \mathbb{E}[u_{t_i}^2] < \infty, \ \textrm{for all } i=1,\ldots,M \right\}.$$ We will further impose the fuel constraint $\sum_{i=1}^{M} u_{t_i} = x_0$, hence $x_0$ can be regarded as the target inventory to be achieved by the terminal time. We fix an $M \times M$ matrix $G=(G_{i,j})_{i,j=1}^{M}$, which is called the propagator, such that $$\label{g-def} \sum_{i=1}^{M} \sum_{j=1 }^{M} (G_{i,j} +G_{j,i}) x_{j} x_{i} > 0, \quad \textrm{for all } x \in \mathbb{R}^M \setminus \{0\}, \quad G_{i,j}= 0, \quad \textrm{for } 1 \leq i < j \leq M.$$ We assume that the trader's trading activity causes transient price impact on the risky asset's execution price in the sense that her orders are filled at prices $$\label{eq:price_dynamics_G_star} S_{t_{i+1}} = S_{t_1} + \sum_{j =1}^{i} G_{i,j} u_{t_j} +A_{t_i}+ \varepsilon_{t_i} , \quad i=1,\ldots, M,$$ where $S_{t_1}$ is an $\mathcal F_{t_1}$-measurable, square integrable random variable, which describes the initial price. Note that in this setup the process $A+\varepsilon$ represents the fundamental (or unaffected) asset price, where $A$ is often referred to as a trading signal, see e.g., Section 2 of [@NeumanVoss:20]. Following a similar argument as in Section 2 of [@Lehalle-Neum18], we consider the following performance functional, which measures the costs of the trader's strategy in the presence of transient price impact and a signal, $$\label{per-func} \begin{aligned} J(u;G) &= \mathbb{E}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} G_{i,j} u_{t_j} u_{t_i} + \sum_{i=1}^M A_{t_i} u_{t_i} \right], \end{aligned}$$ where $u=(u_{t_i})_{i=1}^M \in \mathcal A$ satisfies a fuel constraint. We therefore wish to solve the following cost minimization problem, $$\label{opt-prog} \begin{cases} &\min_{u \in \mathcal A} J(u;G), \\ &\textrm{s.t. } \sum_{i=1}^M u_{t_i} = x_0 . \end{cases}$$ In order to derive the optimiser of [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"}, we introduce some additional definitions and notation. #### Notation For $G$ as in [\[g-def\]](#g-def){reference-type="eqref" reference="g-def"}, we denote by $2\eta >0$ the smallest eigenvalue of the positive definite matrix $G +G^{\top}$ and define $$\label{g-dec} G = \eta \mathbb{I}_M + \widetilde G,$$ so that $\widetilde G$ satisfies $$\label{g-non-neg-def} \sum_{i=1}^{M} \sum_{j=1 }^{M} (\widetilde G_{i,j} + \widetilde G_{j,i}) x_{j} x_{i} \geq 0, \quad \textrm{for all } x \in \mathbb{R}^M, \quad \widetilde G_{i,j}= 0, \quad \textrm{for } j>i.$$ Here $\mathbb{I}_M$ is the $M \times M$ identity matrix, where we often omit the subscript $M$ when there is no ambiguity. We define the following $M$-vectors: $\boldsymbol{1} = (1,\ldots,1)^{\top}$ and $A=(A_{t_1},\ldots,A_{t_M})^\top$. Lastly we denote $\mathbb{E}_{s} [\cdot] = \mathbb{E}[ \cdot | \mathcal F_s]$ for any $s \in \mathbb{T}$. 
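The decomposition [\[g-dec\]](#g-dec){reference-type="eqref" reference="g-dec"} and, anticipating Remark 3 below, the closed-form optimal strategy for the signal-free case can both be evaluated in a few lines. The following sketch does so for a hypothetical exponential-decay propagator; the decay rate $\rho$ and the target inventory are illustrative choices only.

```python
import numpy as np

# Hypothetical exponential-decay propagator G_{i,j} = exp(-rho (i - j)) for j <= i.
M, rho, x0 = 50, 0.2, 10.0
idx = np.arange(M)
G = np.tril(np.exp(-rho * np.abs(idx[:, None] - idx[None, :])))

# Decomposition G = eta * I_M + G_tilde, where 2*eta is the smallest eigenvalue of G + G^T.
eta = 0.5 * np.linalg.eigvalsh(G + G.T).min()
G_tilde = G - eta * np.eye(M)

# Signal-free optimal strategy (cf. Remark 3 below):
# u* = x0 * (G + G^T)^{-1} 1 / (1^T (G + G^T)^{-1} 1).
ones = np.ones(M)
w = np.linalg.solve(G + G.T, ones)
u_star = x0 * w / (ones @ w)

print("eta:", eta)
print("lambda_min(G_tilde + G_tilde^T):", np.linalg.eigvalsh(G_tilde + G_tilde.T).min())  # ~0 by construction
print("fuel constraint sum(u*):", u_star.sum())   # equals x0 up to rounding error
```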
For $\widetilde G$ as in [\[g-dec\]](#g-dec){reference-type="eqref" reference="g-dec"}, let $$\label{tile-g-i} \widetilde G^{(\ell)} := (\widetilde G^{(\ell)}_{i,j}) = ( \widetilde G_{i,j} 1_{\{\ell \leq i\}}), \quad \ell = 1, \ldots, M.$$ We further define the matrices $$D^{(\ell)} = 2\eta \mathbb I_M+\widetilde G^{(\ell)} + (\widetilde G^{\top})^{(\ell)},$$ and $K = (K_{i,j})_{i,j=1}^{M}$ with $$\label{k-def-og} K_{i,j}= \widetilde G_{i,j} - \mathrm{1}_{\lbrace t_j < t_i \rbrace} (\widetilde G^{\top} (D^{(i)} )^{-1}\widetilde G^{(i)})_{i,j }.$$ The following theorem derives an explicit solution to the constrained optimal liquidation problem in [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"}. **Theorem 2**. *There exists a unique admissible strategy solving [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"} which is given by $$u_{t_i} = \sum_{j\leq i}(2\eta \mathbb{I}_M + K)^{-1}_{i,j}\left(g_{t_j} +a_{t_j} \mathbb{E}_{t_j }[\lambda]\right) , \quad i=1,\ldots,M,$$ where $$\begin{aligned} a_{t_j} &= -1+\sum_{k=1}^M ( \widetilde G^{\top} (D^{(j)} )^{-1})_{j,k}1_{\{t_j \leq t_k\}}, \\ g_{t_j} &= -A_{t_j}+ \sum_{k=1}^M ( \widetilde G^{\top} (D^{(j)})^{-1})_{j,k} 1_{\{t_j \leq t_k\}} \mathbb{E}_{t_j}[ A_{t_k}], \end{aligned}$$ and, for $r = 1, \ldots, M-1$, $$\mathbb{E}_{t_{r+1}}[\lambda]= \frac{(b_r-h_r){\mathbb{E}}_{t_{r}}[\lambda]- (c_{r+1}-c_r)}{b_{r+1}} , \quad \mathbb{E}_{t_1}[\lambda] = \frac{x_0-c_1}{b_1},$$ with $$\begin{aligned} & \widetilde A_{t_i} = ((2\eta \mathbb{I}_M + K)^{-1} g)_i, \quad c_r = \sum_{i=1}^M {\mathbb{E}}_{t_r} [\widetilde A_{t_i}] , \\ & b_{r} =\sum_{i=1}^M \sum_{r \leq j \leq M} (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j}, \quad h_r = \sum_{i=1}^M(2\eta \mathbb{I}_M + K)^{-1}_{i,r} a_{t_r}. \end{aligned}$$* The proof of Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} is given in Section [7](#sec-pf-opt){reference-type="ref" reference="sec-pf-opt"}. **Remark 3**. *In the case where the signal $A\equiv0$ the execution problem reduces to the deterministic optimization problem which was presented in Proposition 1 of Alfonsi et al. [@alf-sch12]. The solution to the problem in this special case is a straightforward generalization of equation (5) therein and it is given by $$\begin{aligned} u^{\star} = \frac{x_0}{\boldsymbol{1}^{\top}(G + G^{\top} )^{-1} \boldsymbol{1}}(G + G ^{\top})^{-1}\boldsymbol{1},\end{aligned}$$ which is well-defined since $G + G^{\top}$ is symmetric positive definite hence is invertible.* **Remark 4**. *The proof of Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} generalises the methods that were recently developed in [@AJ-N-2022; @ANV-23] in order to tackle such non-Markovian problems as [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"} in two directions. First, we provide a framework for solving problems with constraints by introducing a stochastic Lagrange multiplier. Moreover we manage to solve the non-regularized problem, that is, without the linear temporary price impact term $\lambda u_{t_i}$ in the right-hand side of [\[eq:price_dynamics_G\_star\]](#eq:price_dynamics_G_star){reference-type="eqref" reference="eq:price_dynamics_G_star"} (compare with equation (2.4) in [@AJ-N-2022]). 
This allows us to study the propagator model in its original form as introduced in Section 3.1 of [@bouchaud-gefen].* ## Main results on price impact estimation with offline data {#sec-data} In this section, we provide an offline estimator for the propagator $G$ in [\[eq:price_dynamics_G\_star\]](#eq:price_dynamics_G_star){reference-type="eqref" reference="eq:price_dynamics_G_star"} based on a static dataset of historical trades and prices and derive optimal convergence rates for this estimator. In order to tackle the estimation problem, we assume that the agent has access to an offline dataset, which consists of price trajectories and metaorders across several episodes. The precise properties of the dataset are given below. **Definition 5** (Offline dataset). *Let $N\in \mathbb{N}$ be the number of episodes in the dataset and consider $$\mathcal{D} = \left\{(S^{(n)}_{t_i})_{i=1}^{M+1}, (u^{(n)}_{t_i})_{i=1}^{M}, (A^{(n)}_{t_i})_{i=1}^{M} \mid n= 1, \ldots, N\right\},$$ where $S^{(n)} \coloneqq (S^{(n)}_{t_i})_{i=1}^{M+1}$, $u^{(n)} \coloneqq (u^{(n)}_{t_i})_{i=1}^{M}$ and $A^{(n)} \coloneqq (A^{(n)}_{t_i})_{i=1}^{M}$ are price trajectories, trading strategies and trading signals available in the $n$-th episode, respectively.* *We call $\mathcal{D}$ an offline dataset for [\[eq:price_dynamics_G\_star\]](#eq:price_dynamics_G_star){reference-type="eqref" reference="eq:price_dynamics_G_star"} if $(S^{(n)}, u^{(n)}, A^{(n)})_{n=1}^N$ are realisations of random variables defined on a probability space $(\Omega',\mathcal{F}',\mathbb{P}')$ satisfying the following properties: for each $n\in \mathbb{N}$, $u^{(n)}$ is measurable with respect to the $\sigma$-algebra $\mathcal{G}_{n-1}$ where $$\mathcal{G}_{n-1}=\sigma\left\{(S^{(k)})_{k=1}^{n-1},(u^{(k)})_{k=1}^{n-1}, (A^{(k)})_{k=1}^n\right\},$$ and there exist $\mathcal G_n$-measurable random variables $\varepsilon^{(n)}=(\varepsilon^{(n)}_{t_i})_{i=1}^{M}$ such that, $$\label{eq:data_price_n} S^{(n)}_{t_{i+1}} -S^{(n)}_{t_1} = \sum_{j =1}^{i} G^\star_{i,j} u^{(n)}_{t_j} + A^{(n)}_{t_i}+ \varepsilon^{(n)}_{t_i} , \quad i=1,\ldots, M,$$ where $G^\star$ is the true (unknown) propagator and $\mathbb{E}^{\mathbb{P}'}[\varepsilon_{t_i}^{(n)}\mid \mathcal{G}_{n-1}]=0$ for all $i=1,...,M$.* 
**Remark 6**. *Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"} accommodates dependent trading strategies $(u^{(n)})_{n=1}^N$ that may not optimize the control objective [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"}. This setting is relevant for various practical scenarios such as crowded markets where agents with different objective functionals follow similar signals (see [@C-N-M-2023; @Mich-Neum-MK; @N-V-2021]). Moreover, these strategies may be generated adaptively in the data collecting process, meaning that the trader has incorporated historical observations and currently observed signals in order to design the trading strategy $u^{(n)}$ for the $n$-th episode.*

#### Convention.

For any $v, u \in \mathbb R^M$ we denote by $\langle v,u \rangle$ the inner product in $\mathbb R^M$ and by $\|v\|$ the Euclidean norm. For any matrix $B = (B_{i,j})^{M}_{i,j =1}$ we denote by $\|B\|_{\mathbb{R}^{M\times M}}$ its Frobenius norm, i.e., $$\label{frob} \|B\|_{\mathbb{R}^{M\times M}} = \sqrt{\sum_{i=1}^M\sum_{j=1}^M |B_{i,j}|^2} = \sqrt{\textnormal{tr}(B^\top B)},$$ where $\textnormal{tr}(B)$ represents the trace of $B$. We further define the Frobenius inner product, $$\label{frob-in} \langle A,B\rangle_{\mathbb{R}^{M\times M} } =\textnormal{tr}(A^\top B), \quad A,B \in \mathbb{R}^{M\times M}.$$

We define the following class of admissible propagators. The true propagator $G^\star \in \mathbb{R}^{M\times M}$ is assumed to be in the following set with some known constant $\kappa>0$: $$\label{assum:volterra_kernel} \mathscr{G}_{\textrm{ad}} \coloneqq \left\{ G=(G_{i,j})_{i,j=1}^M \,\middle\vert\, \begin{aligned} & \textnormal{$x^\top (G+G^\top)x \ge \kappa \|x\|^2 $ for all $x\in {\mathbb R}^M$,} \\ & \textnormal{$G_{i,j}\ge 0$ for all $ i, j = 1, \ldots, M$,} \\ & \textnormal{$G_{i,j}=0$ for all $1\le i< j\le M$.} \end{aligned} \right\}.$$

**Remark 7**. *Note that the conditions in $\mathscr{G}_{\textrm{ad}}$ agree with the assumptions on the propagator which were made in [\[g-def\]](#g-def){reference-type="eqref" reference="g-def"}. Indeed the constant $\kappa$ coincides with $2\eta$ in [\[g-dec\]](#g-dec){reference-type="eqref" reference="g-dec"} and it can be regarded as a lower bound on the temporary price impact. The lower diagonal structure of $G$ ensures non-anticipative structure (see [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"}).
The fact that the entries of $G$ are nonnegative ensures that a sell (buy) transaction, which causes a negative (positive) change in the inventory, will push the price downwards (upwards) in [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"}.*

**Remark 8**. *We present a couple of typical examples of price impact Volterra kernels, whose projection on a finite grid belongs to $\mathscr{G}_{\textrm{ad}}$. More examples of convolution kernels are discussed in Remark [Remark 14](#rem-examples){reference-type="ref" reference="rem-examples"}. The following non-convolution kernel was introduced in order to model price impact in bond trading (see Section 3.1 of [@brigo2020]): $$G(t,s) = f(T-t) K(t-s) \mathrm{1}_{\{s<t\}},$$ where $K:\mathbb{R}_{+} \rightarrow\mathbb{R}_+$ is a standard nonnegative definite decay kernel and $f$ is a bounded function satisfying $f(0)=0$, due to the terminal condition on the bond price. In another example, proposed in [@FruthSchoenebornUrusov] for modelling order books with time-varying liquidity, the propagator is given by $G(t,s) = L(t) \exp \left(-\int_s^t \rho(u) du \right)$, where $L,\rho$ are bounded strictly positive functions.*

In the following assumption we classify the noise in [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"} as conditionally sub-Gaussian.

**Assumption 9**. *The agent has an offline dataset $\mathcal{D}$ of size $N\in \mathbb{N}$ as in Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"} and there exists a known constant $R>0$ such that for all $n=1,\ldots, N$, $$\mathbb{E}^{\mathbb{P}'}\left[\exp\left({\langle v, \varepsilon^{(n)} \rangle}\right)\mid \mathcal{G}_{n-1} \right] \le \exp\left( \frac{R^2\|v\|^2}{2}\right), \quad \textrm{for all } v\in \mathbb{R}^M.$$ That is, $\varepsilon^{(n)}$ in [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"} is $R$-conditionally sub-Gaussian with respect to $\mathcal{G}_{n-1}$.*

Based on the dataset $\mathcal{D}$, the unknown price impact coefficients $(G^\star_{i,j})_{1\le j\le i\le M}$ can be estimated via a least-squares method. Note that for any $n\in \mathbb{N}$, the observed data $y^{(n)} = (S^{(n)}_{t_{i+1}}-S^{(n)}_{t_1}-A^{(n)}_{t_i})_{i=1}^{M}\in \mathbb{R}^M$ and $u^{(n)}= (u^{(n)}_{t_i})_{i=1}^{M}\in \mathbb{R}^M$ satisfy $$\label{eq:price_impact_linear_regression} y^{(n)} =G^\star u^{(n)} +\varepsilon^{(n)} , \quad \textnormal{with $\varepsilon^{(n)}= (\varepsilon^{(n)}_{t_i})_{i=1}^{M}$.}$$ This motivates us to estimate $G^\star$ by minimising the following quadratic loss over all admissible price impact coefficients: $$\label{eq:G_n_lambda_volterra} G_{N,\lambda}\coloneqq\mathop{\mathrm{arg\,min}}_{G \in \mathscr{G}_{\textrm{ad}}} \left( \sum_{n=1}^N \|y^{(n)}-G u^{(n)}\|^2+\lambda\|G\|_{\mathbb{R}^{M \times M}}^2 \right),$$ where $\lambda>0$ is a given regularisation parameter and $\mathscr{G}_{\textrm{ad}}$ is given in [\[assum:volterra_kernel\]](#assum:volterra_kernel){reference-type="eqref" reference="assum:volterra_kernel"}.
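
As an illustration, the constrained estimator [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} can be set up directly with an off-the-shelf convex solver. The sketch below is one possible formulation and assumes the episodes are stacked into arrays `y` and `u` of shape $(N, M)$; the function name and default parameter values are illustrative, not prescriptive.

```python
import numpy as np
import cvxpy as cp

def estimate_volterra_propagator(y, u, lam=1e-3, kappa=1e-4):
    """Constrained ridge estimate of the propagator G over the admissible set.

    y, u : arrays of shape (N, M) holding the episodes y^(n) and u^(n).
    lam  : regularisation parameter lambda > 0.
    kappa: coercivity constant (lower bound on the eigenvalues of G + G^T).
    """
    N, M = y.shape
    G = cp.Variable((M, M))
    fit = sum(cp.sum_squares(y[n] - G @ u[n]) for n in range(N))
    objective = cp.Minimize(fit + lam * cp.sum_squares(G))

    S = cp.Variable((M, M), PSD=True)                  # slack certifying coercivity
    upper = np.triu(np.ones((M, M)), k=1)              # strict upper-triangular mask
    constraints = [
        G + G.T - kappa * np.eye(M) == S,              # x^T (G + G^T) x >= kappa ||x||^2
        G >= 0,                                        # nonnegative price impact
        cp.multiply(upper, G) == 0,                    # G_{i,j} = 0 for i < j
    ]
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return G.value
```

An equivalent route, described next, first solves the unconstrained ridge problem in closed form and then projects onto $\mathscr{G}_{\textrm{ad}}$.
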
The necessity of such regularisation can be clearly seen later in [\[eq:G_unconstraint_minimiser\]](#eq:G_unconstraint_minimiser){reference-type="eqref" reference="eq:G_unconstraint_minimiser"}, as it ensures the invertibility of the matrix on the right-hand side of the equality. The need for this regularisation is often ignored in the propagator estimation literature (see e.g., Section 13 of [@bouchaud_bonart_donier_gould_2018]).*

The estimator $G_{N,\lambda}$ can be computed by projecting the unconstrained least-squares estimator onto the set $\mathscr{G}_{\textrm{ad}}$. Indeed, if one sets $$\begin{aligned} \label{eq:G_unconstraints} \begin{split} \tilde{G}_{N,\lambda} &\coloneqq\mathop{\mathrm{arg\,min}}_{G \in \mathbb{R}^{M \times M}} \left( \sum_{n=1}^N \|y^{(n)}-G u^{(n)}\|^2+\lambda\|G\|_{\mathbb{R}^{M \times M}}^2 \right), \end{split}\end{aligned}$$ then $$\label{eq:project_admissible} G_{N,\lambda} =\mathop{\mathrm{arg\,min}}_{G \in \mathscr{G}_{\textrm{ad}}}\left\|\left(G-\tilde{G}_{N,\lambda}\right) V_{N,\lambda}^ {\frac{1}{2}}\right\|^2_{\mathbb{R}^{M \times M}}, \quad \textnormal{with $V_{N,\lambda}=\sum_{n=1}^N u^{(n)} (u^{(n)})^{\top } +\lambda \mathbb{I}_{M }$},$$ where we recall that $\mathbb{I}_{M}$ denotes the $M \times M$ identity matrix. The unconstrained problem [\[eq:G_unconstraints\]](#eq:G_unconstraints){reference-type="eqref" reference="eq:G_unconstraints"} can be solved explicitly as $$\label{eq:G_unconstraint_minimiser} \tilde{G}_{N,\lambda}= \sum_{n=1}^N y^{(n)} (u^{(n)})^{\top} V_{N,\lambda}^{-1},$$ while the optimisation problem [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} can be efficiently solved by standard convex optimisation packages (see e.g., [@diamond2016cvxpy]). The following theorem provides a confidence region for the estimated coefficient ${G}_{N,\lambda}$ in terms of the observed data.

**Theorem 11**. *Assume that $G^\star \in \mathscr G_{ad}$ and that Assumption [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"} holds. For any $\lambda>0$, let ${G}_{N,\lambda}$ and $V_{N,\lambda}$ be defined in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}. Then for all $\lambda>0$ and $\delta \in (0,1)$, with $\mathbb{P}'$-probability at least $1-\delta$, $$\label{bnd-err-1} \begin{aligned} & \left\|({G}_{N,\lambda} - G^\star)V_{N,\lambda}^{\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}} \le R \left( \log\left( \frac{\lambda^{-M^2} (\det(V_{N,\lambda} ))^{M}}{\delta^2} \right)\right)^{\frac{1}{2}} +\lambda \left \| G^\star V_{N,\lambda}^{-\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}}. \end{aligned}$$*

The proof of Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} is given in Section [5](#sec-pf-est){reference-type="ref" reference="sec-pf-est"}.

**Remark 12**. *Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} quantifies the accuracy of the estimator ${G}_{N,\lambda}$ with an explicit dependence on the given dataset $\mathcal{D}$ and the magnitude of $G^\star$. The result is applicable to a general discrete-time Volterra propagator $G^\star$ and to non-Markovian and correlated observations (as stated in Remark [Remark 6](#rmk:data_set){reference-type="ref" reference="rmk:data_set"}).
It is worth noting that this error estimate substantially improves the result presented in [@neuman2023statistical Theorem 2.10], as the latter assumes the propagator to be a (continuous-time) convolution kernel and requires the observed trading speeds $(u^{(n)})_{n=1}^N$ to be fixed (i.e., $u^{(n)}=u^{(m)}$ for all $n,m =1, \ldots, N$) and deterministic vectors.*

Examples where the Volterra price impact kernel $G^\star$ is in fact a convolution kernel arise from empirical studies and are quite popular in the literature (see e.g., [@bouchaud-gefen; @gatheral2010no; @Ob-Wan2005]). In the sequel, we incorporate this structural property of the price impact coefficient to enhance our estimation procedure. We therefore assume that for the true price impact kernel $G^\star$ there exists $K^\star = (K^\star_i)_{i=0}^{M-1}$ such that $$\label{eq:price_impact_kernel} G^\star = \begin{pmatrix} K^\star_0 & 0 & \cdots & 0\\ K^\star_1 & K^\star_0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ K^\star_{M-1 } & K^\star_{M-2 } & \cdots & K^\star_{0 } \end{pmatrix},$$ and $K^\star$ is in the following class $\mathscr{K}_{\textrm{ad}}$ of admissible convolution kernels: $$\label{assume:convolution_kernel} \mathscr{K}_{\textrm{ad}} \coloneqq \left\{ K=(K_{i})_{i=0}^{M-1} \,\middle\vert\, \begin{aligned} &\textnormal{$K_i-K_{i-1} \le K_{i+1}-K_{i} \le 0$ for all $1\le i \le M-2$,} \\ &\textnormal{and the associated $G$ defined by \eqref{eq:price_impact_kernel} is in $\mathscr{G}_{\textrm{ad}}$.} \end{aligned} \right\}.$$

**Remark 13**. *Note that [\[assume:convolution_kernel\]](#assume:convolution_kernel){reference-type="eqref" reference="assume:convolution_kernel"} assumes that the price impact $G^\star_{i,j}$ is determined by a convex and decreasing kernel evaluated on an equally spaced time grid. Specifically, this means that there exists a function $K^\star:\mathbb{R}_{+} \rightarrow\mathbb{R}_{+}$, called a resilience function, such that $G^\star_{i,j}=K^\star(t_{i}-t_j)= K^\star(t_{i-j})$ (with $t_0=0$) for all $1\le j\le i\le M$. It is then easy to see that the first constraint in [\[assume:convolution_kernel\]](#assume:convolution_kernel){reference-type="eqref" reference="assume:convolution_kernel"} holds if $K^\star$ is convex and decreasing.*

**Remark 14**. *We present some typical examples of price impact convolution kernels, whose projection on a finite grid belongs to $\mathscr{K}_{\textrm{ad}}$. In [@bouchaud-gefen; @gatheral2010no], among others, the following kernel was introduced: $K(t)=\frac{\ell_{_{0}}}{(\ell_{0}+t)^{\beta}}$, for some constants $\beta, \ell_0>0$. The example $K(t)=\frac{1}{t^{\beta}}1_{\{t>0\}}$ for $0< \beta < 1/2$ was proposed by Gatheral in [@gatheral2010no]. The case where $K (t) = e^{-\rho t}$, for some constant $\rho >0$, was proposed by Obizhaeva and Wang [@Ob-Wan2005].*

The convolution structure in [\[eq:price_impact_kernel\]](#eq:price_impact_kernel){reference-type="eqref" reference="eq:price_impact_kernel"} simplifies the estimation of the matrix $G^\star\in \mathbb{R}^{M \times M}$ to the estimation of $K^\star \in \mathbb{R}^{M}$. Let $\mathcal{D} = \{(S^{(n)}_{t_i})_{i=1}^{M+1}, (u^{(n)}_{t_i})_{i=1}^{M}, (A^{(n)}_{t_i})_{i=1}^{M} \mid n= 1, \ldots, N\}$ be a given dataset.
By [\[eq:price_impact_kernel\]](#eq:price_impact_kernel){reference-type="eqref" reference="eq:price_impact_kernel"} and [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"}, we have for any $n=1,\ldots, N$, $$\begin{aligned} \label{eq:price_convolution_model} S^{(n)}_{t_{i+1}}-S^{(n)}_{t_1}-A^{(n)}_{t_i} &= \sum_{j=1}^i K^\star_{i-j } u^{(n)}_{t_j} +\varepsilon^{(n)}_{t_i} = \sum_{j=0}^{i-1} K^\star_{j } u^{(n)}_{t_{i-j}} +\varepsilon^{(n)}_{t_i}, \quad i=1,\ldots, M,\end{aligned}$$ which can be equivalently written as $y^{(n)} =U_n K^\star +\varepsilon^{(n)}$, with $$\label{eq:y_n_U_n} y^{(n)} = \begin{pmatrix} S^{(n)}_{t_{2}}-S^{(n)}_{t_1}-A^{(n)}_{t_1} \\ \vdots \\ S^{(n)}_{t_{M+1}}-S^{(n)}_{t_1}-A^{(n)}_{t_M} \end{pmatrix}, \; U_n = \begin{pmatrix} u^{(n)}_{t_1} & 0 & \cdots & 0\\ u^{(n)}_{t_2} & u^{(n)}_{t_1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ u^{(n)}_{t_M} & u^{(n)}_{t_{M-1}} & \cdots & u^{(n)}_{t_1} \end{pmatrix}, \; \varepsilon^{(n)} = \begin{pmatrix} \varepsilon^{(n)}_{t_1} \\ \vdots \\ \varepsilon^{(n)}_{t_M} \end{pmatrix}.$$ The data series of $S^{(n)}$ in [\[eq:price_convolution_model\]](#eq:price_convolution_model){reference-type="eqref" reference="eq:price_convolution_model"} motivates us to consider the following constrained least-squares estimator: $$\label{eq:K_n_lambda} K_{N,\lambda}\coloneqq\mathop{\mathrm{arg\,min}}_{K \in \mathscr{K}_{\textrm{ad}}} \left( \sum_{n=1}^N \|y^{(n)}-U_n K \|^2+\lambda\|K\|^2 \right),$$ where $\lambda>0$ is a given regularisation parameter, and $\mathscr{K}_{\textrm{ad}}$ is given in [\[assume:convolution_kernel\]](#assume:convolution_kernel){reference-type="eqref" reference="assume:convolution_kernel"}. Similar to $G_{N,\lambda}$ in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}, $K_{N,\lambda}$ can be computed by projecting the unconstrained least-squares estimator onto the parameter set $\mathscr{K}_{\textrm{ad}}$: $$\begin{aligned} \label{eq:project_admissible_conv} K_{N,\lambda} =\mathop{\mathrm{arg\,min}}_{K \in \mathscr{K}_{\textrm{ad}}}\left\| W_{N,\lambda}^ {\frac{1}{2}}\left(K-\tilde{K}_{N,\lambda}\right)\right\|^2,\end{aligned}$$ where $$\label{w-def} W_{N,\lambda}\coloneqq \sum_{n=1}^N U^\top_n U_n +\lambda \mathbb{I}_{M }, \quad \tilde{K}_{N,\lambda} \coloneqq W_{N,\lambda}^{-1} \sum_{n=1}^N U^\top_n y^{(n)},$$ with $y^{(n)}\in \mathbb{R}^M$ and $U_n\in \mathbb{R}^{M \times M}$ defined in [\[eq:y_n\_U_n\]](#eq:y_n_U_n){reference-type="eqref" reference="eq:y_n_U_n"}. Given $K_{N,\lambda}=((K_{N,\lambda})_i)_{i=0}^{M-1}$, the associated propagator $G_{N,\lambda}$ is then defined according to [\[eq:price_impact_kernel\]](#eq:price_impact_kernel){reference-type="eqref" reference="eq:price_impact_kernel"}: $$\label{eq:G_n_lambda_convolution} G_{N,\lambda} = \begin{pmatrix} (K_{N,\lambda})_0 & 0 & \cdots & 0\\ (K_{N,\lambda})_1 & (K_{N,\lambda})_0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ (K_{N,\lambda})_{M-1 } & (K_{N,\lambda})_{M-2 } & \cdots & (K_{N,\lambda})_{0 } \end{pmatrix}.$$ The following theorem is analogous to Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and provides a confidence region for the estimator ${K}_{N,\lambda}$.

**Theorem 15**. *Suppose Assumption [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"} holds and that $K^{\star}\in \mathscr{K}_{\textrm{ad}}$.
For each $\lambda>0$, let ${K}_{N,\lambda}$ be defined in [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"}. Then for all $\lambda>0$ and $\delta \in (0,1)$, with $\mathbb{P}'$-probability at least $1-\delta$, $$\label{diff-k} \begin{aligned} & \left\|W_{N,\lambda}^{\frac{1}{2}} ({K}_{N,\lambda} - K^\star) \right\| \le R \left( \log\left( \frac{\lambda^{-M} \det(W_{N,\lambda} ) }{\delta^2} \right)\right)^{\frac{1}{2}} +\lambda \left \| W_{N,\lambda}^{-\frac{1}{2}} K^\star \right\|, \end{aligned}$$ with $W_{N,\lambda} =\sum_{n=1}^N U_n^\top U_n+\lambda\mathbb{I}_M$.*

The proof of Theorem [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"} is given in Section [5](#sec-pf-est){reference-type="ref" reference="sec-pf-est"}.

**Remark 16**. *By imposing the structural property [\[eq:price_impact_kernel\]](#eq:price_impact_kernel){reference-type="eqref" reference="eq:price_impact_kernel"}, we achieve in Theorem [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"} a better dependence on the number of trading periods $M$ compared to Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"}. Specifically, compare the power of $\lambda>0$ in the first term on the right-hand side of [\[bnd-err-1\]](#bnd-err-1){reference-type="eqref" reference="bnd-err-1"} to the corresponding term in [\[diff-k\]](#diff-k){reference-type="eqref" reference="diff-k"}. We present in Remark [Remark 23](#rem-asump-mat){reference-type="ref" reference="rem-asump-mat"} some additional important advantages of using a convolution kernel in the context of the regret analysis of the pessimistic control problem. See Figure [3](#FIGURE LABEL_d){reference-type="ref" reference="FIGURE LABEL_d"} for a numerical comparison of the estimators [\[eq:G_unconstraints\]](#eq:G_unconstraints){reference-type="eqref" reference="eq:G_unconstraints"} and [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"}.*

## Pessimistic control design with offline data {#sec-pessim}

In the following two subsections, we propose a stochastic control framework that will eliminate the spurious correlation, which was defined in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"}, between the offline estimator ${G}_{N,\lambda}$ and a greedy trading strategy based on ${G}_{N,\lambda}$, and derive a tight bound on the associated performance gap.

An important consideration in designing trading strategies arises from the fact that pre-collected data may not provide a uniform exploration of the parameter space, and hence certain entries of the unknown propagator $G^\star$ may have been estimated with limited accuracy. Consequently, if a trading strategy is solely based on the estimated propagator ${G}_{N,\lambda}$ in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}, its performance in real deployment may be suboptimal. To overcome this challenge, in this section we introduce an additional loss term that takes into account the statistical uncertainty of the estimated propagator. This is often referred to as the principle of pessimism in the face of uncertainty in the offline reinforcement learning literature (see e.g., [@jin2021pessimism; @chang2021mitigating]).
We recall that the offline dataset $\mathcal{D}$ from Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"} is built out of $N$ trajectories of prices, trading speeds and signals $(S^{(n)}, u^{(n)}, A^{(n)})_{n=1}^N$ which are defined on the probability space $(\Omega',\mathcal{F}', \mathbb{P}')$. In Section [2.1](#sec-liq-model){reference-type="ref" reference="sec-liq-model"} we have specified the filtered probability space $(\Omega, \mathcal F, \mathbb F=\{\mathcal F_{t_i}\}_{i=1}^M, \mathbb{P})$ in which the optimal liquidation problem [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"} takes place. We define the following product space $$(\overline{\Omega}, \overline{\mathcal{F}}, \overline{\mathbb{P}}) = (\Omega' \times\Omega,\mathcal{F}'\times\mathcal{F} ,\mathbb P' \times \mathbb P).$$ We further define $\mathbb P_{\mathcal{D}}$ to be the probability measure on the product space conditioned on a realisation of $\mathcal{D}$ and ${\mathbb{E}}_{\mathcal D}$ as the corresponding conditional expectation.

Next we introduce the pessimistic stochastic control problem in the spirit of [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"} that corresponds to the optimal liquidation problem [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"}. For the remainder of this section we assume that $N$ and the dataset $\mathcal D$ from Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"} are fixed and that for any $\lambda>0$, $V_{N,\lambda}$ is defined as in [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"}. Let $L^\mathcal{A}_{\eqref{l-bnd}}, L^\mathscr{G}_{\eqref{l-bnd}} \geq 1$. We define a subclass of the admissible strategies $\mathcal A$ in [\[admiss\]](#admiss){reference-type="eqref" reference="admiss"} and admissible kernels $\mathscr{G}_{ad}$ in [\[assum:volterra_kernel\]](#assum:volterra_kernel){reference-type="eqref" reference="assum:volterra_kernel"} as follows: $$\label{l-bnd} \mathcal{A}_{b} = \left\{ u \in \mathcal{A}: \|u \| \leq L^\mathcal{A}_{\eqref{l-bnd}} \right\}, \quad \mathscr{G}_{b} = \left\{ G \in \mathscr{G}_{ad}: \|G \|_{\mathbb{R}^{M \times M}} \leq L^\mathscr{G}_{\eqref{l-bnd}} \right\}.$$

**Remark 17**. *We choose $L^\mathcal{A}_{\eqref{l-bnd}}$ to be large enough so that $\mathcal{A}_{b}$ contains strategies satisfying the fuel constraint of [\[opt-pes\]](#opt-pes){reference-type="eqref" reference="opt-pes"}. The constant $L^\mathscr{G}_{\eqref{l-bnd}}$ is chosen so that $G^\star \in \mathscr{G}_{b}$.
Since $\mathscr{G}_{b}$ provides a radius for the parameter space, using Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} and our assumption that the signal $A$ is known to the trader, we can also choose $L^\mathcal{A}_{\eqref{l-bnd}}$ such that the optimal strategy from Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} with $G^\star$ is in the set $\mathcal{A}_{b}$.*

Now, for an arbitrary admissible control strategy $u \in\mathcal{A}_{b}$, we define the following penalization functional $$\begin{aligned} \label{fun_regularization} \ell_1(V_{N,\lambda},u) \coloneqq L^\mathcal{A}_{\eqref{l-bnd}}C_{\eqref{c-const}}(N) \left\| V_{N,\lambda}^{-\frac{1}{2}} u \right\|, \end{aligned}$$ where $C_{\eqref{c-const}} (N) = C_{\eqref{c-const}}(N,M,\delta,\lambda, R, L^\mathscr{G}_{\eqref{l-bnd}}, V_{N,\lambda})$ bounds the estimation error from Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and is given by $$\label{c-const} \begin{aligned} C_{\eqref{c-const}} (N) \coloneqq R \left( \log\left( \frac{\lambda^{-M^2} (\det(V_{N,\lambda}))^{M}}{\delta^2} \right)\right)^{\frac{1}{2}} +\lambda L^\mathscr{G}_{\eqref{l-bnd}} \left \| V_{N,\lambda}^{-1/2} \right\|_{\mathbb{R}^{M \times M}}. \end{aligned}$$ Here we have used the fact that for any two matrices $A,B \in \mathbb{R}^{M \times M}$ we have, in the Frobenius norm, $$\label{frob-id} \|A B \|_{\mathbb{R}^{M \times M}} \leq \|A \|_{\mathbb{R}^{M \times M} } \| B \|_{\mathbb{R}^{M \times M}}.$$

The regularized cost functional for the pessimistic Volterra liquidation problem is given by $$\label{j-p1} {J}_{P,1}(u;G_{N,\lambda}) \coloneqq \mathbb{E}_{\mathcal D} \left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (G_{N,\lambda})_{i,j} u_{t_j} u_{t_i} + \sum_{i=1}^M A_{t_i} u_{t_i} + \ell_1(V_{N,\lambda},u) \right].$$ The cost functional [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"} replaces the unknown propagator in [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"} with the estimated propagator $G_{N,\lambda}$ and includes an additional cost term $\ell_1$ to discourage the agent from taking actions that are not supported by the offline data. We identify the expectation of $\ell_1$ as a $\delta$-uncertainty quantifier (see [\[ell-1-def\]](#ell-1-def){reference-type="eqref" reference="ell-1-def"}). This statement will be made precise in Section [2.4](#sec-res-pess){reference-type="ref" reference="sec-res-pess"}. The penalty strength is determined by the inverse covariance matrix $V^{-1/2}_{N,\lambda}$, where a larger $V^{-1/2}_{N,\lambda}$ implies higher uncertainty in the estimated model $G_{N,\lambda}$ and a stronger penalty $\ell_1$. As we will show in Section [2.4](#sec-res-pess){reference-type="ref" reference="sec-res-pess"}, this pessimistic loss yields a trading strategy that can compete with any other admissible strategy, with regret depending explicitly on the offline dataset.
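
For concreteness, the penalty $\ell_1$ and the constant $C_{\eqref{c-const}}(N)$ can be evaluated numerically as in the following sketch, which assumes that the Gram matrix $V_{N,\lambda}$ has already been formed from the dataset; the function name and argument conventions are illustrative only.

```python
import numpy as np

def pessimism_penalty(u, V, R, lam, delta, L_A, L_G):
    """Sketch of the data-driven penalty ell_1(V_{N,lambda}, u).

    u     : candidate trading-speed vector of length M.
    V     : Gram matrix V_{N,lambda} = sum_n u^(n) (u^(n))^T + lam * I.
    R     : sub-Gaussian noise constant, lam: regularisation parameter,
    delta : confidence level, L_A / L_G: bounds on the strategy / kernel norms.
    """
    M = V.shape[0]
    V_inv = np.linalg.inv(V)
    _, logdet = np.linalg.slogdet(V)                 # log det(V_{N,lambda})
    # C(N) bounds the estimation error of Theorem 11 (with L_G in place of the unknown ||G*||):
    log_term = -M**2 * np.log(lam) + M * logdet - 2.0 * np.log(delta)
    C_N = R * np.sqrt(log_term) + lam * L_G * np.sqrt(np.trace(V_inv))
    # ell_1(V, u) = L_A * C(N) * ||V^{-1/2} u|| = L_A * C(N) * sqrt(u^T V^{-1} u)
    return L_A * C_N * np.sqrt(u @ V_inv @ u)
```
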
In the case where the price impact kernel $G^{\star}$ is given by a convolution kernel as in [\[eq:price_impact_kernel\]](#eq:price_impact_kernel){reference-type="eqref" reference="eq:price_impact_kernel"}, the impact of a trading speed $u = (u_1,\ldots,u_M)^\top \in \mathbb R^{M}$ on the price $S$ involves the following matrix $U\in {\mathbb R}^{M\times M}$ (see [\[eq:price_convolution_model\]](#eq:price_convolution_model){reference-type="eqref" reference="eq:price_convolution_model"}): $$\label{t-op} U\coloneqq \begin{pmatrix} u_{1} & 0 & \cdots & 0\\ u_{2} & u_{1} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ u_{M} & u_{M-1} & \cdots & u_{1} \end{pmatrix}.$$ Similarly to $\mathscr{G}_{b}$ in [\[l-bnd\]](#l-bnd){reference-type="eqref" reference="l-bnd"}, we define the following subclass of convolution kernels, recalling that $K$ is a vector, $$\label{k-bnd} \mathscr{K}_{b} = \left\{ K \in \mathscr{K}_{ad}: \|K \| \leq L^\mathscr{K}_{\eqref{k-bnd}} \right\}.$$ We then introduce the following regularization $$\begin{aligned} \label{eq:cost_conv} \ell_2(W_{N,\lambda},u) \coloneqq L^{\mathcal A}_{\eqref{l-bnd}}C_{\eqref{c-2-const} }(N) \left \| U W_{N,\lambda}^{-1/2} \right \|_{\mathbb{R}^{M\times M}},\end{aligned}$$ where $W_{N,\lambda}$ was defined in [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"} and $C_{\eqref{c-2-const}} (N) = C_{\eqref{c-2-const}}(N,M,\delta,\lambda,R , L^\mathscr{K}_{\eqref{k-bnd}}, W_{N,\lambda})$ bounds the estimation error from Theorem [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}, $$\label{c-2-const} \begin{aligned} C _{\eqref{c-2-const}} (N) \coloneqq R \left( \log\left( \frac{\lambda^{-M} \det(W_{N,\lambda} ) }{\delta^2} \right)\right)^{\frac{1}{2}} +\lambda L^{\mathscr K}_{\eqref{k-bnd}} \left \| W_{N,\lambda}^{-1/2} \right\|_{\mathbb{R}^{M \times M}} . \end{aligned}$$ Recall that the estimated convolution kernel $K_{N,\lambda}$ was defined in [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"}. The regularized objective functional in the convolution case is given by $$\label{j-p2} J_{P,2}(u;K_{N,\lambda}) \coloneqq \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (K_{N,\lambda})_{i - j} u_{t_j} u_{t_i} + \sum_{i=1}^M A_{t_i} u_{t_i} + {\ell}_2(W_{N,\lambda},u) \right],$$ where we used the relation [\[eq:G_n\_lambda_convolution\]](#eq:G_n_lambda_convolution){reference-type="eqref" reference="eq:G_n_lambda_convolution"} between the estimated convolution kernel $K_{N,\lambda}$ and the associated propagator $G_{N,\lambda}$. Note that since the estimated propagator is of convolution form, the cost $\ell_2$ in [\[eq:cost_conv\]](#eq:cost_conv){reference-type="eqref" reference="eq:cost_conv"} measures the impact of the statistical uncertainty of $K_{N,\lambda}$ on a trading speed $u$ through the corresponding matrix $U$ (compare to $\ell_1$ in [\[fun_regularization\]](#fun_regularization){reference-type="eqref" reference="fun_regularization"} and [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"}).

We define the following pessimistic optimization problems for $i=1,2$: $$\label{opt-pes} \begin{cases} &\min_{u \in \mathcal A_b} J_{P,i}(u;G_{N,\lambda}), \\ &\textrm{s.t. } \sum_{i=1}^M u_{t_i} = x_0. \end{cases}$$

**Remark 18**.
*The existence of a unique minimizer $\hat{u} \in \mathcal{A}_b$ for ${J}_{P,i}$ follows directly from [@bonnans2013perturbation Lemma 2.33] and the facts that $\mathcal{A}\ni u\mapsto {J}_{P,i}(u; G_{N,\lambda})\in \mathbb{R}$ is strongly convex and that $\left\{u\in \mathcal{A}_b\mid \sum_{i=1}^M u_{t_i}=x_0\right\}$ is nonempty if $L^\mathcal{A}_{\eqref{l-bnd}}$ is sufficiently large. Indeed, the fact that $G_{N,\lambda}$ in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} is chosen from the class of admissible kernels [\[assum:volterra_kernel\]](#assum:volterra_kernel){reference-type="eqref" reference="assum:volterra_kernel"} ensures the strong convexity of $J_{P,i}$.*

## Main results on performance of pessimistic strategies {#sec-res-pess}

The following theorem provides an upper bound on the performance gap between the pessimistic solution to [\[opt-pes\]](#opt-pes){reference-type="eqref" reference="opt-pes"} and any arbitrary admissible strategy, including the optimal strategy from Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} using the true propagator $G^\star$, in terms of the mean of $\ell_i$. We also derive the asymptotic behaviour of this bound with respect to large values of the sample size $N$ in Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}. Throughout this section we assume that the dataset $\mathcal{D}$ is fixed according to Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"}. We recall that the performance functional $J(u;G^{\star})$ is related to the original control problem [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"}, and the performance functional $J_{P,i}(u;G_{N,\lambda})$ is related to the pessimistic control problem [\[opt-pes\]](#opt-pes){reference-type="eqref" reference="opt-pes"} conditioned on $\mathcal{D}$. Further, $G_{N,\lambda}$, which is determined by $\mathcal D$, was defined in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} and [\[eq:G_n\_lambda_convolution\]](#eq:G_n_lambda_convolution){reference-type="eqref" reference="eq:G_n_lambda_convolution"}. The class of admissible Volterra kernels $\mathscr{G}_{ad}$ was defined in [\[assum:volterra_kernel\]](#assum:volterra_kernel){reference-type="eqref" reference="assum:volterra_kernel"}, and the subclass of convolution propagators with kernels in $\mathscr{K}_{ad}$ was defined in [\[assume:convolution_kernel\]](#assume:convolution_kernel){reference-type="eqref" reference="assume:convolution_kernel"}.

**Theorem 19**. *Let $\hat{u}^{(i)}$ be the minimizer of $J_{P,i}(u;G_{N,\lambda})$, $i=1,2$.
Then under Assumption [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"}, for all $\lambda>0$ and $\delta \in (0,1)$ we have with $\mathbb P'$-probability at least $1-\delta$,*

- *if $G^{\star} \in \mathscr{G}_{ad}$ is a Volterra kernel, $$J(\hat u^{(1)};G^{\star})- J(u;G^{\star}) \leq 2 \mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right], \quad \textrm{for all } u\in \mathcal{A}_{b},$$*

- *if $G^{\star}$ is a convolution kernel with $K^\star \in \mathscr{K}_{ad}$, $$J(\hat u^{(2)};G^{\star})- J(u;G^{\star}) \leq 2 \mathbb{E}_{\mathcal D} \left[ \ell_2(W_{N,\lambda},u) \right], \quad \textrm{for all } u\in \mathcal{A}_{b}.$$*

One of the by-products of the proof of Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} is the fact that $\Gamma(u)= \mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right]$ (resp. $\Gamma(u)= \mathbb{E}_{\mathcal D} \left[ \ell_2(W_{N,\lambda},u) \right]$) is a $\delta$-uncertainty quantifier in the sense of Definition [Definition 1](#def-quant){reference-type="ref" reference="def-quant"}.

**Corollary 20**. *Under Assumption [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"}, for all $\lambda>0$ and $\delta \in (0,1)$ we have with $\mathbb P'$-probability at least $1-\delta$,*

- *if $G^{\star} \in \mathscr{G}_{ad}$ is a Volterra kernel, $$| J( u; G^{\star}) - J( u ; \hat G)| \leq \mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right], \quad \textrm{for all } u\in \mathcal{A}_{b},$$*

- *if $G^{\star}$ is a convolution kernel with $K^\star \in \mathscr{K}_{ad}$, $$| J( u; G^{\star}) - J( u ; \hat G)| \leq \mathbb{E}_{\mathcal D} \left[ \ell_2(W_{N,\lambda},u) \right], \quad \textrm{for all } u\in \mathcal{A}_{b}.$$*

*Hence, $\Gamma(u)= \mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right]$ (resp. $\mathbb{E}_{\mathcal D} \left[ \ell_2(W_{N,\lambda},u) \right]$) is a $\delta$-uncertainty quantifier.*

The proofs of Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} and Corollary [Corollary 20](#cor-quant){reference-type="ref" reference="cor-quant"} are given in Section [6](#sec-pf-pessim){reference-type="ref" reference="sec-pf-pessim"}.

In order to derive a convergence rate for the error bound on the regret, which was established in Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}, we make the following assumptions. Recall that $(u^{(n)})_{n=1}^N$ are the strategies recorded in the dataset $\mathcal{D}$ (see Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"}) and that for any $u^{(n)}$, $U_n$ is the matrix defined in [\[eq:y_n\_U_n\]](#eq:y_n_U_n){reference-type="eqref" reference="eq:y_n_U_n"}. We give the definition of the Loewner order, i.e., the partial order defined by the convex cone of positive semi-definite matrices.

**Definition 21**. *For any two symmetric matrices $A,B\in \mathbb{R}^{M \times M}$ we say that $A \leq B$ if for any $x \in \mathbb{R}^M$, we have $x^{\top}(B-A) x \ge 0$.*

**Assumption 22**.
*For any $\delta \in (0,1)$ there exists a (known) constant $C_{\ref{assum:concentration_inequ}}(\delta)>0$ such that the following bound holds with probability $1-\delta$:*

- *if $G^{\star} \in \mathscr{G}_{ad}$ is a Volterra kernel, $$\begin{aligned} - \frac{C_{\ref{assum:concentration_inequ}}(\delta)}{\sqrt{N}} \mathbb{I}_{M} \leq \frac{1}{N}\sum_{n=1}^N u^{(n)} (u^{(n)})^{\top} -\Sigma \leq \frac{C_{\ref{assum:concentration_inequ}}(\delta)}{\sqrt{N}} \mathbb{I}_{M}, \end{aligned}$$*

- *if $G^{\star}$ is a convolution kernel with $K \in \mathscr{K}_{ad}$, $$\begin{aligned} - \frac{ C_{\ref{assum:concentration_inequ}}(\delta)}{\sqrt{N}} \mathbb{I}_{M} \leq \frac{1}{N}\sum_{n=1}^N U_n^{\top} U_n - \hat \Sigma \leq \frac{ C_{\ref{assum:concentration_inequ}}(\delta)}{\sqrt{N}} \mathbb{I}_{M}, \end{aligned}$$*

*where $\Sigma$ and $\hat{\Sigma}$ are symmetric positive-definite matrices, not depending on $(N,\delta)$.*

**Remark 23**. *There is an important advantage in verifying Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(ii) for convolution kernels compared to Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(i) for Volterra kernels. Note that in case (ii), $U_n^{\top} U_n$ is a product of two triangular matrices (see [\[eq:y_n\_U_n\]](#eq:y_n_U_n){reference-type="eqref" reference="eq:y_n_U_n"}) which is positive definite if the first entry $u_{t_1}^{(n)} \not = 0$. This means that $N^{-1}\sum_{n=1}^N U_n^{\top} U_n$ is positive definite if $\sum_{n=1}^N u_{t_1}^{(n)} \not = 0$, which is an event with probability $1$ if the transaction size is assumed to be continuous. In reality the transaction size is quantized, and this event will have probability asymptotically close to $1$ for large $N$. This means that normalising this sum of matrices with respect to a symmetric positive-definite matrix $\hat{\Sigma}$ is a very natural assumption, which is expected to hold for almost any dataset of metaorders. On the other hand, in case (i) the product $u^{(n)} (u^{(n)})^{\top}$ yields a matrix of rank $1$, hence in order for the assumption to hold the number of samples $N$ has to be much larger than the number of grid points $M$.*

#### Notation:

We denote by $\underline\xi_{\Sigma }$ (resp. $\underline\xi_{\hat \Sigma}$) the minimal eigenvalue of $\Sigma$ (resp. $\hat{\Sigma}$), and by $\overline\xi_{\Sigma }$ (resp. $\overline\xi_{\hat \Sigma}$) the maximal eigenvalue of $\Sigma$ (resp. $\hat{\Sigma}$).

In the following we show that our pessimistic strategy $\hat u^{(i)}$ is $O(N^{-1/2}(\log N)^{1/2})$-close to any competing strategy, where $N$ corresponds to the sample size of the dataset specified in Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"}.

**Theorem 24**. *Let $\delta \in (0,1)$ and let $\hat{u}^{(i)}$ be the minimizer of $J_{P,i}(u;G_{N,\lambda})$, $i=1,2$, with $\lambda= C_{\ref{assum:concentration_inequ}}(\delta)N^{1/2}$.
Then under Assumptions [Assumption 9](#assum:offline_data){reference-type="ref" reference="assum:offline_data"} and [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"} we have with $\mathbb{P}'$-probability at least $1-2\delta$,*

- *if $G^{\star} \in \mathscr{G}_{ad}$ is a Volterra kernel, $$J(\hat u^{(1)};G^{\star})- J(u;G^{\star}) \leq \frac{1}{\sqrt{N}} \left( C_1(M,\delta,\Sigma, N) + C_2(M,\delta,\Sigma) \right), \quad \textrm{for all } u\in \mathcal{A}_{b},$$*

- *if $G^{\star}$ is a convolution kernel with $K \in \mathscr{K}_{ad}$, $$J(\hat u^{(2)};G^{\star})- J(u;G^{\star}) \leq \frac{1}{\sqrt{N}} \left( C_1(M,\delta,\hat \Sigma,N) + \hat C_2(M,\delta,\hat \Sigma) \right), \quad \textrm{for all } u\in \mathcal{A}_{b},$$*

*where $C_1(N) = O(\sqrt{\log N})$ and $C_2 = O(1)$ are given by $$\begin{aligned} C_1(M,\delta,\Sigma, N) &= 2R\underline\xi_{\Sigma}^{-1/2} (L^{\mathcal A}_{\eqref{l-bnd}})^2 M \sqrt{\log \left(N \frac{1}{\delta^2}\left(1 + \frac{ \overline\xi_{\Sigma} } {2C_{\ref{assum:concentration_inequ}}(\delta) } \right) \right)} , \\ C_2(M,\delta,\Sigma) & = 2\underline\xi_{\Sigma}^{-1} (L^{\mathcal A}_{\eqref{l-bnd}})^2 L^\mathscr{G}_{\eqref{l-bnd}} C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{M}, \\ \hat C_2(M,\delta,\Sigma) & = 2\underline\xi_{\Sigma}^{-1} (L^{\mathcal A}_{\eqref{l-bnd}})^2 L^\mathscr{K}_{\eqref{k-bnd}} C_{\ref{assum:concentration_inequ}}(\delta) M. \end{aligned}$$*

The proof of Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} is given in Section [6](#sec-pf-pessim){reference-type="ref" reference="sec-pf-pessim"}.

**Remark 25**. *Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} derives upper bounds of order $N^{-1/2}(\log N)^{1/2}$ for the performance difference between the optimal pessimistic strategies $\hat u^{(i)}$ and any arbitrary strategy of a competitor having access to a similar dataset. Note that $\mathcal A_b$ also includes the optimal strategy using the unknown $G^{\star}$ from Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"}. This framework introduces a novel approach for nonparametric estimation of financial models, which is particularly effective when the quality of the common dataset is poor or it contains a relatively low number of samples; in such cases the statistical estimators can be biased and the resulting greedy strategies may create unfavorable costs. See Figure [9](#FIGURE LABEL_c){reference-type="ref" reference="FIGURE LABEL_c"} for numerical illustrations.*

**Remark 26**. *Our results using the pessimistic learning approach can be compared to the well-known robust finance approach, in which strong assumptions are made on the parametric model, and the unknown parameters are assumed to be within a certain radius from the true parameters. Our nonparametric framework suggests that the radius of feasible models is in fact measured by matrix norms induced by $V_{N,\lambda}^{-1/2}$ and $W_{N,\lambda}^{-1/2}$ on the left-hand side of [\[bnd-err-1\]](#bnd-err-1){reference-type="eqref" reference="bnd-err-1"} and [\[diff-k\]](#diff-k){reference-type="eqref" reference="diff-k"}, respectively.
In contrast to the robust finance approach, these norms are determined directly from the dataset and are not chosen as a hyperparameter (see [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} and [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"}).*

**Remark 27**. *We briefly compare the results of Theorems [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} and [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} to existing results in the offline reinforcement learning literature. These results typically focus on the much simpler setup of Markov decision processes (MDPs) with unknown transition probabilities and rewards. Note that stochastic control problems related to propagators as in [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"} and [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"} not only take place in a continuous state space but are also non-Markovian (see e.g., [@AJ-N-2022]). In [@jin2021pessimism] some results on pessimistic offline RL with respect to minimization of the spurious correlation and intrinsic uncertainty were derived for MDPs. The bound on suboptimality in Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} coincides with the corresponding bound for MDPs established in Theorem 4.2 of [@jin2021pessimism]. The convergence rate of order $N^{-1/2}(\log N)^{1/2}$ in Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} is compatible with the convergence rate of order $N^{-1/2}$ established in Corollary 4.5 of [@jin2021pessimism] for *linear* MDPs, under similar assumptions as in Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}. The logarithmic correction in Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} is a result of the estimation scheme for $G_{N,\lambda}$ (see [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}), which is subject to the regularisation of the dataset in [\[eq:G_unconstraint_minimiser\]](#eq:G_unconstraint_minimiser){reference-type="eqref" reference="eq:G_unconstraint_minimiser"}. This estimation procedure is completely independent of the results of [@jin2021pessimism], and for a sufficiently regular dataset the regularisation is not needed and the logarithmic term in Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} will vanish.*

# Numerical Illustration {#sec-numerics}

In this section, we examine the performance of the propagator estimators in Section [2.2](#sec-data){reference-type="ref" reference="sec-data"} and the pessimistic strategies presented in Section [2.4](#sec-res-pess){reference-type="ref" reference="sec-res-pess"}. Using a synthetic dataset, we illustrate the following characteristics of our methods:

- Directly estimating a Volterra propagator using [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} may result in large estimation errors unless the dataset contains sufficiently noisy trading strategies.
By imposing a convolution structure and shape constraints on the estimated model, [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"} significantly improves the estimation accuracy, even with a smaller sample size.

- Minimising the execution costs in [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"} with the estimated propagator substituted for the true one yields a greedy strategy that is very sensitive to the accuracy of the estimated model and also creates unfavorable transaction costs. The pessimistic strategy takes the model uncertainty into account, achieves a more stable performance, and drastically reduces the execution costs.

We start by describing the construction of the synthetic dataset $\mathcal{D}$ for our experiments. For a fixed number of $N$ trading days, we split each trading day into $5$-minute bins. Hence, for a trading day of $6.5$ hours we have $M= 78$. We assume that the unaffected price process $P$ has the following dynamics: $$\mathrm{d}P_{t} = I_t \mathrm{d}t + \sigma_P \mathrm{d}W^P_{t},\quad P_{0} = p_0 >0,$$ where $p_0$ is the initial price, $I$ is the expected return following an Ornstein-Uhlenbeck dynamics (cf. [@Lehalle-Neum18 Section 2.3]), $$\label{eq-ornstein-uhlenbeck} \mathrm{d}I_{t} = -\mu I_{t}\mathrm{d}t + \sigma \mathrm{d}W_{t},\quad I_{0} = 0,$$ and the values of $\sigma_P$, $\mu$ and $\sigma$ are given in Table [1](#table:1){reference-type="ref" reference="table:1"}. The signal $A_{t_i}$ in [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"} at time $t_i \in \lbrace 0=t_1<t_2<\ldots<t_M =1 \rbrace$ is then given by $$\begin{aligned} A_{t_i} = \int_{0}^{t_i} I_u \mathrm{d}u.\end{aligned}$$

We consider a market with 3 types of traders, trading simultaneously over one year (i.e., $N= 252$ trading days). For simplicity, we construct a dataset of buy strategies by sampling the target inventory $x_0$ of each type of trader uniformly from $[500,2000]$. Including sell strategies in our dataset would not change our estimation. We assume that the traders do not have precise information on the true price impact parameters and adopt the following commonly used strategies (a simulation sketch is given after this list):

- TWAP trades, aiming at $x_0$ stocks and buying at a constant rate throughout each day: $$u^1_{t_i} = \frac{x_0}{M}, \quad i=1,\ldots,M.$$

- Execution strategies according to @Ob-Wan2005 as described in Remark [Remark 3](#rem-sol-no-sig){reference-type="ref" reference="rem-sol-no-sig"}, $$u^2= \frac{x_0}{\boldsymbol{1}^{\top}(G + G^{\top} )^{-1} \boldsymbol{1}}(G + G ^{\top})^{-1}\boldsymbol{1},$$ where $G_{i,j}=\hat{\kappa} e^{-\hat \rho (t_i-t_j)}\mathrm{1}_{\{ i \geq j \}}$, with $\hat \rho$ and $\hat \kappa$ sampled uniformly from $[\rho/2, 3\rho/2]$ and $[\kappa/2, 3\kappa/2]$, respectively. The values of $\rho$ and $\kappa$ are given in Table [1](#table:1){reference-type="ref" reference="table:1"}.

- Purely trend-following strategies taking into account only temporary price impact as in Remark 3.4 of [@Lehalle-Neum18], $$u^3_{t_i} = \frac{I_{t_i}}{2 \hat \mu \hat{\kappa}}\big(1-e^{-\hat{\mu}(T-t_i)}\big), \quad i=1,\ldots,M,$$ with $\hat \kappa$ and $\hat \mu$ sampled uniformly from $[\kappa/2, 3\kappa/2]$ and $[\mu/2, 3\mu/2]$, respectively. The values of $\mu$ and $\kappa$ are given in Table [1](#table:1){reference-type="ref" reference="table:1"}.
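
The construction above can be sketched in a few lines of Python. The snippet below is a simplified rendition under the parameters of Table [1](#table:1){reference-type="ref" reference="table:1"}, assuming an exponential true kernel of Obizhaeva-Wang type; the initial price, the Euler discretisation of the signal and the exact sampling conventions are illustrative and may differ from the implementation used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 78, 252                       # 5-minute bins over 6.5 hours, one year of episodes
dt = 1.0 / M
t = np.arange(1, M + 1) * dt         # trading times t_1 < ... < t_M = 1
sigma_P, sigma, mu, kappa, rho = 0.0088, 0.06, 0.1, 0.01, 0.04
p0 = 100.0                           # initial price (illustrative)

# Toy "true" propagator: exponential convolution kernel on the grid
G_star = np.tril(kappa * np.exp(-rho * np.subtract.outer(t, t)))

def episode():
    x0 = rng.uniform(500, 2000)                          # target inventory
    # Ornstein-Uhlenbeck signal I and its running integral A (Euler scheme)
    I = np.zeros(M)
    for i in range(1, M):
        I[i] = I[i - 1] - mu * I[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    A = np.cumsum(I) * dt
    # (i) TWAP strategy
    u1 = np.full(M, x0 / M)
    # (ii) Obizhaeva-Wang-type strategy with perturbed parameters
    k_hat, r_hat = kappa * rng.uniform(0.5, 1.5), rho * rng.uniform(0.5, 1.5)
    G_hat = np.tril(k_hat * np.exp(-r_hat * np.subtract.outer(t, t)))
    v = np.linalg.solve(G_hat + G_hat.T, np.ones(M))
    u2 = x0 * v / v.sum()
    # (iii) trend-following strategy driven by the signal
    m_hat = mu * rng.uniform(0.5, 1.5)
    u3 = I / (2 * m_hat * k_hat) * (1 - np.exp(-m_hat * (1.0 - t)))
    u = u1 + u2 + u3
    # Observed prices, cf. the price dynamics used in the experiments
    W = np.cumsum(np.sqrt(dt) * rng.standard_normal(M))  # Brownian path on the grid
    S = np.concatenate(([p0], p0 + G_star @ u + A + sigma_P * W))
    return S, u, A

dataset = [episode() for _ in range(N)]
```
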
  Parameter                       Value
  ------------------------------- --------
  Price volatility $\sigma_{P}$   0.0088
  Signal volatility $\sigma$      0.06
  Signal mean reversion $\mu$     0.1
  Trading cost $\kappa$           0.01
  Resilience $\rho$               0.04

  : Model Parameters [\[table:1\]]{#table:1 label="table:1"}

Given the above three trading strategies, for each $n=1,\ldots, N$, we generate the observed price trajectories according to the following dynamics (see [\[eq:data_price_n\]](#eq:data_price_n){reference-type="eqref" reference="eq:data_price_n"}): $$\begin{aligned} \label{eq:price_experiments} S^{(n)}_{t_{i+1}} &= \sum_{j =1}^{i} G^\star_{i,j} u^{(n)}_{t_j} + P^{(n)}_{t_i} = \sum_{j =1}^{i} G^\star_{i,j} u^{(n)}_{t_j} + p_0 + A^{(n)}_{t_i} + \sigma_{P} W_{t_i}, \quad i=1,\ldots, M, \end{aligned}$$ where $(u^{(n)}_{t_j})_{j=1}^M$ is a realisation of $u_{t_i} \coloneqq u^1_{t_i}+ u^{2}_{t_i}+ u^{3}_{t_i}$, $i=1, \ldots,M$. Here, we construct the market by giving equal weight to each type of trader, but similar results can be obtained by assigning different weights to the strategies. We consider the true parameter $G^\star$ in [\[eq:price_experiments\]](#eq:price_experiments){reference-type="eqref" reference="eq:price_experiments"} to be a convolution-type propagator from one of the following two classes: $$\label{g-power} G^{\star,(1)}_{i,j} = K^{\star,(1)}(t_i-t_j) = \frac{ \kappa }{(t_i-t_j+1)^{\beta^{\star}}}, \quad i \geq j, \quad \textrm{for } \beta^{\star} \in \lbrace 0.1, 0.2, 0.3, 0.4 \rbrace,$$ and $$\label{g-exp} G^{\star,(2)}_{i,j} = K^{\star,(2)}(t_i-t_j) = \kappa e^{-\rho^{\star}(t_i -t_j)}, \quad i \geq j, \quad \textrm{for } \rho^{\star} \in \lbrace 0.1, 0.2, 0.3, 0.4\rbrace,$$ where $\kappa$ is given in Table [1](#table:1){reference-type="ref" reference="table:1"}.

Given the above synthetic dataset $\mathcal D$, we examine the accuracy of the estimators [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} and [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"}, where the former performs a fully nonparametric estimation of the Volterra propagator, and the latter imposes a convolution structure on the estimated model. These estimators are implemented as in [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} and [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} with $\lambda=10^{-3}$, where the projection step is carried out using the Python optimisation package CVXPY [@diamond2016cvxpy]. Our numerical results show that in the present setting, the Volterra estimator [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} yields large estimation errors, as the observed trading speeds in the dataset $\mathcal D$ are not random enough to fully explore the parameter space (see Remark [Remark 23](#rem-asump-mat){reference-type="ref" reference="rem-asump-mat"}). This is demonstrated in Figure [3](#FIGURE LABEL_d){reference-type="ref" reference="FIGURE LABEL_d"}, where we consider a propagator $G^\star$ associated with the power-law kernel [\[g-power\]](#g-power){reference-type="eqref" reference="g-power"} with $\kappa=0.01$ and $\beta^\star= 0.4$, and present the accuracy of the two estimators.
Figure [3](#FIGURE LABEL_d){reference-type="ref" reference="FIGURE LABEL_d"} (right) shows that the Volterra estimator [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} yields large component-wise relative errors, and hence it is difficult to recover the power-law kernel from the estimated model. We refer the reader to Figure [12](#FIGURE LABEL_e){reference-type="ref" reference="FIGURE LABEL_e"} in Appendix [9](#sec-examples_noisy){reference-type="ref" reference="sec-examples_noisy"}, which shows that if the historical trading speeds in the dataset are sufficiently noisy, then the estimator [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} recovers the true propagator accurately.

![ The true power-law propagator $G^{\star}(t)= 0.01 (t+1)^{-0.4}$ (left) and relative errors of the Volterra estimator (right) and the convolution estimator (bottom). ](heatmap_shadow_1_final "fig:"){#FIGURE LABEL_d width="0.48\\linewidth"}
![ The true power-law propagator $G^{\star}(t)= 0.01 (t+1)^{-0.4}$ (left) and relative errors of the Volterra estimator (right) and the convolution estimator (bottom). ](heatmap_shadow_3_final "fig:"){#FIGURE LABEL_d width="0.48\\linewidth"}
![ The true power-law propagator $G^{\star}(t)= 0.01 (t+1)^{-0.4}$ (left) and relative errors of the Volterra estimator (right) and the convolution estimator (bottom). ](heatmap_shadow_4_final "fig:"){#FIGURE LABEL_d width="0.48\\linewidth"}

In contrast, Figure [3](#FIGURE LABEL_d){reference-type="ref" reference="FIGURE LABEL_d"} (bottom) clearly shows that by imposing a convolution structure on the estimated model, the estimator [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} achieves a much higher accuracy using the same dataset. The estimator also recovers the true convolution kernel accurately, even with a smaller sample size. This can be seen from Figure [\[FIGURE LABEL\]](#FIGURE LABEL){reference-type="ref" reference="FIGURE LABEL"}, where we plot $K^{\star,(1)}$ in [\[g-power\]](#g-power){reference-type="eqref" reference="g-power"} and $K^{\star,(2)}$ in [\[g-exp\]](#g-exp){reference-type="eqref" reference="g-exp"} for the above listed values of $\rho^{\star}, \beta^{\star}$ (see Figures [4](#SUBFIGURE LABEL 1){reference-type="ref" reference="SUBFIGURE LABEL 1"} and [6](#SUBFIGURE LABEL 2){reference-type="ref" reference="SUBFIGURE LABEL 2"}) and compare them to the estimated kernels obtained by [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} (see Figures [5](#SUBFIGURE LABEL 3){reference-type="ref" reference="SUBFIGURE LABEL 3"} and [7](#SUBFIGURE LABEL 4){reference-type="ref" reference="SUBFIGURE LABEL 4"}). One can clearly observe that, for both considered kernels, the proposed estimator captures the behavior of the true propagators accurately.
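
For reference, one possible implementation of the convolution estimator [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} is sketched below. It assumes stacked arrays `y` and `u` of shape $(N, M)$, imposes only the shape constraints of $\mathscr{K}_{\textrm{ad}}$ (the coercivity constraint on the induced propagator is omitted for brevity), and uses CVXPY for the projection step; function and variable names are illustrative.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

def estimate_convolution_kernel(y, u, lam=1e-3):
    """Sketch of the convolution estimator: ridge least squares followed by a
    projection onto a shape-constrained set (convex, non-increasing, nonnegative).
    y, u have shape (N, M) with rows y^(n) and u^(n)."""
    N, M = y.shape
    # Lower-triangular Toeplitz matrices U_n built from each trading-speed path
    U = [np.tril(toeplitz(u[n])) for n in range(N)]
    W = sum(Un.T @ Un for Un in U) + lam * np.eye(M)
    b = sum(Un.T @ y[n] for n, Un in enumerate(U))
    K_tilde = np.linalg.solve(W, b)                      # unconstrained ridge estimate

    # Projection step in the metric induced by W^{1/2}
    K = cp.Variable(M)
    W_sym = 0.5 * (W + W.T)                              # symmetrise for quad_form
    objective = cp.Minimize(cp.quad_form(K - K_tilde, W_sym))
    constraints = [
        cp.diff(K) <= 0,        # non-increasing: K_{i+1} <= K_i
        cp.diff(K, 2) >= 0,     # convex: K_i - K_{i-1} <= K_{i+1} - K_i
        K >= 0,                 # nonnegative impact
    ]
    cp.Problem(objective, constraints).solve()
    return K.value, K_tilde
```
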
To quantify the accuracy of our estimators and to demonstrate the importance of the projection step, we define the following relative errors for different sample sizes $N$: $$\text{err}^{(i)} \coloneqq \max_{j \in \lbrace 1, \ldots, M \rbrace} \frac{|(K^{\star,(i)})_j - (\tilde{K}_{N,\lambda})_j|}{(K^{\star,(i)})_j}, \quad \text{err}^{(i)}_{\textrm{proj}} \coloneqq \max_{j \in \lbrace 1, \ldots, M \rbrace} \frac{|(K^{\star,(i)})_j - (K_{N,\lambda})_j|}{(K^{\star,(i)})_j},$$ where $K^{\star,(1)}$ refers to the power-law kernel [\[g-power\]](#g-power){reference-type="eqref" reference="g-power"} with $\beta^{\star} = 0.1$, $K^{\star,(2)}$ refers to the exponential kernel [\[g-exp\]](#g-exp){reference-type="eqref" reference="g-exp"} with $\rho^{\star} = 0.1$, $\tilde{K}_{N,\lambda}$ is the estimated kernel using the plain least-squares estimator [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"}, and ${K}_{N,\lambda}$ is obtained by [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} with the additional projection step. Table [2](#table:2){reference-type="ref" reference="table:2"} summarises the estimation accuracy for sample sizes $N \in \lbrace 63, 126, 252 \rbrace$, which shows that both estimators achieve relative errors of order $10^{-3}$. Moreover, one can see that by imposing monotonicity and convexity on the estimated model, the estimator [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} improves the accuracy of the plain least-squares estimator by at least a factor of 2.

![](truncated_power_law_true_final){#SUBFIGURE LABEL 1 width="1\\linewidth"}
![](truncated_power_law_after_proj_final){#SUBFIGURE LABEL 3 width="1\\linewidth"}
![](exponential_kernel_true_final){#SUBFIGURE LABEL 2 width="1\\linewidth"}
![](exponential_after_proj_final){#SUBFIGURE LABEL 4 width="1\\linewidth"}

                      Power-law kernel                                               Exponential kernel
  ----------------- ---------------------- -------------------------------------- ---------------------- --------------------------------------
  Sample size $N$   $\textrm{err}^{(1)}$   $\textrm{err}_{\textrm{proj}}^{(1)}$   $\textrm{err}^{(2)}$   $\textrm{err}_{\textrm{proj}}^{(2)}$
  63                $8.4 \times 10^{-3}$   $1.9 \times 10^{-3}$                   $9.8 \times 10^{-3}$   $4.4 \times 10^{-3}$
  126               $6.9 \times 10^{-3}$   $1.7 \times 10^{-3}$                   $5.3 \times 10^{-3}$   $2.9 \times 10^{-3}$
  252               $4.2 \times 10^{-3}$   $1.2 \times 10^{-3}$                   $4.2 \times 10^{-3}$   $2.2 \times 10^{-3}$

  : Accuracy of the convolution estimators [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} and [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"} for different sample sizes [\[table:2\]]{#table:2 label="table:2"}

Given the above estimated models, we proceed to investigate the performance of the pessimistic trading strategy. Our numerical results show that the performance of a naive greedy strategy is very sensitive to the quality of the estimated model. In contrast, the pessimistic trading strategy exhibits a stable performance regardless of the accuracy of the estimated models, and achieves an execution cost close to the optimal one.
In particular, recall that given the present dataset, the Volterra estimator [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} yields a poor estimated propagator as illustrated in Figure [3](#FIGURE LABEL_d){reference-type="ref" reference="FIGURE LABEL_d"}. As a result, a naive deployment of a greedy strategy as in Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"}, using the estimator $G_{N,\lambda}$, can lead to substantially suboptimal costs as discussed after [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"}. To illustrate this phenomenon clearly, we continue with the aforementioned example (for the power-law kernel [\[g-power\]](#g-power){reference-type="eqref" reference="g-power"} with $\kappa=0.01$ and $\beta^\star= 0.4$) and present in Figure [9](#FIGURE LABEL_c){reference-type="ref" reference="FIGURE LABEL_c"} the trading speeds and inventories for the relevant strategies, where we neutralize the effects of exogenous trading signals on these strategies. Specifically, we plot the optimal strategy with precise information on the propagator [\[g-power\]](#g-power){reference-type="eqref" reference="g-power"}, the greedy strategy from Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} using the estimator $G_{N,\lambda}$ in [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"}, and the pessimistic strategy minimizing the cost functional [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"}, using the same $G_{N,\lambda}$ (with $N=252$ and $\lambda= 10^{-3}$). We observe in Figure [9](#FIGURE LABEL_c){reference-type="ref" reference="FIGURE LABEL_c"} that the greedy strategy exhibits an uninterpretable behaviour as a result of oscillations in the estimator, and that these oscillations are regularized by the pessimistic strategy. We further report that the optimal cost using the true propagator $G^\star$ with zero signal, as defined in Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"}, attains the value $4500.24$. Executing a greedy strategy as in Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} but with the estimated propagator $G_{N,\lambda}$ yields an excessive cost of $5216.68$, while for the pessimistic strategy with $G_{N,\lambda}$ the execution cost is significantly closer to optimality, namely $4537.19$ (see Table [3](#table:5){reference-type="ref" reference="table:5"}). On the other hand, when the convolution estimator [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} is employed to estimate the propagator, both the greedy strategy and the pessimistic strategy yield a close-to-optimal expected cost. This is due to the fact that the estimator [\[eq:project_admissible_conv\]](#eq:project_admissible_conv){reference-type="eqref" reference="eq:project_admissible_conv"} recovers the true kernel accurately, and hence stability properties of the associated optimal liquidation problem imply that introducing a regularization in the cost functional as in [\[eq:K_n\_lambda\]](#eq:K_n_lambda){reference-type="eqref" reference="eq:K_n_lambda"} will not provide a significant improvement.
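As a rough illustration of this regularisation mechanism, the sketch below adds to the estimated quadratic cost a penalty proportional to $\|V_{N,\lambda}^{-1/2}u\|$, mirroring the form of the term $\ell_1(V_{N,\lambda},u)$ entering the pessimistic cost functional [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"}; the data, the estimated propagator and the penalty scale below are placeholders rather than calibrated quantities.

```python
import numpy as np

def pessimistic_cost(u, G_est, V, penalty_scale):
    """Estimated quadratic cost plus a penalty proportional to ||V^{-1/2} u||,
    mimicking the structure of the pessimistic objective (deterministic sketch)."""
    quad = float(u @ (np.tril(G_est) @ u))
    penalty = penalty_scale * float(np.sqrt(u @ np.linalg.solve(V, u)))
    return quad + penalty

rng = np.random.default_rng(1)
M, N = 6, 50
U_hist = rng.normal(size=(N, M))         # placeholder historical trading speeds u^(n)
lam = 1e-3
V = U_hist.T @ U_hist + lam * np.eye(M)  # V_{N,lambda} = sum_n u^(n) (u^(n))^T + lambda * I
G_est = np.tril(0.01 * np.ones((M, M)))  # placeholder estimated propagator

u = np.full(M, 1.0 / M)
print(pessimistic_cost(u, G_est, V, penalty_scale=0.05))
```

Directions of $u$ that are poorly spanned by the historical trading speeds make $\|V_{N,\lambda}^{-1/2}u\|$ large, which is one way to see why the oscillations of the greedy solution are damped by the pessimistic strategy.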
However, it is important to note that in practice, the true propagator and the accuracy of an estimated model are unknown to the agent, and hence compared with the pessimistic strategy, it is more challenging to assess the performance of a greedy strategy before its real deployment (see the discussion after [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"}). ![Trading speeds (left panel) and inventories (right panel) of the optimal trading strategy (green), greedy strategy (blue) and pessimistic strategy (orange).](policies_final "fig:"){#FIGURE LABEL_c width="0.48\\linewidth"} ![Trading speeds (left panel) and inventories (right panel) of the optimal trading strategy (green), greedy strategy (blue) and pessimistic strategy (orange).](inventories_final "fig:"){#FIGURE LABEL_c width="0.48\\linewidth"}

  Type of strategy                          Liquidation costs
  ---------------------------------------- -------------------
  Optimal strategy ($G^\star$)              $4500.24$
  Greedy strategy ($G_{N,\lambda}$)         $5216.68$
  Pessimistic strategy ($G_{N,\lambda}$)    $4537.19$

  : Liquidation Costs [\[table:5\]]{#table:5 label="table:5"}

# Martingale tail inequality for least-squares estimation {#sec-mart}

This section first establishes a martingale tail inequality for noise in a general finite dimensional Hilbert space. Based on this tail inequality, we derive high probability bounds for least-squares estimators resulting from correlated observations. The results of this section are of independent interest and extend the results from [@abbasi2011improved] from observations taking values in $\mathbb{R}$ to observations taking values in a general finite dimensional Hilbert space, which will be needed in order to prove Theorems [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}. We introduce some definitions and notation which are relevant to our setting.

#### Notation:

For all real finite dimensional Hilbert spaces $(X,\langle \cdot,\cdot\rangle_X)$ and $(Y,\langle \cdot,\cdot\rangle_Y)$, we denote by ${\rm id}_X$ the identity map on $X$, and by $\mathcal{L}_2(X,Y)$ the space of linear maps $A: X\rightarrow Y$ equipped with the Hilbert-Schmidt norm $\|A\|_{\mathcal{L}_2} =\sqrt{\sum_{k=1}^{n_x} \|Ae_k\|^2_{Y}}<\infty$, where $(e_k)_{k=1}^{n_x}$ is an orthonormal basis of $X$ of dimension $n_x$. The norm $\|\cdot\|_{\mathcal{L}_2}$ is induced by an inner product $\langle \cdot,\cdot\rangle_{\mathcal{L}_2}$ such that $\langle A,A'\rangle_{\mathcal{L}_2} =\sum_{k=1}^{n_x} \langle Ae_k,A'e_k\rangle_{Y}$ for all $A,A'\in \mathcal{L}_2(X,Y)$. Both $\|\cdot\|_{\mathcal{L}_2}$ and $\langle \cdot,\cdot\rangle_{\mathcal{L}_2}$ do not depend on the choice of the basis $(e_k)_{k=1}^{n_x}$ of $X$. We write $\mathcal{L}_2 (X)=\mathcal{L}_2 (X,X)$ for simplicity. For each $A\in \mathcal{L}_2(X,Y)$, we denote by $A^*:Y\rightarrow X$ the adjoint of $A$, and say $A$ is symmetric if $A=A^*$. We denote by $\mathcal{S}_0(X)$ the space of symmetric linear maps $A :X\rightarrow X$ satisfying $\langle Ax, x\rangle_X\ge 0$ for all $x\in X$, and by $\mathcal{S}_+(X)$ the space of symmetric linear maps $A :X\rightarrow X$ satisfying $\langle Ax, x\rangle_X> 0$ for all $x\in X$ and $x\not =0$. For any $A\in \mathcal{S}_0(X)$, we define the seminorm $\|\cdot\|_{X,A}:X\rightarrow[0,\infty)$ by $\|u\|_{X,A}=\sqrt{\langle Au, u\rangle_X}$ for all $u\in X$.
We write $\|\cdot\|_{X}=\|\cdot\|_{X,{\rm id}_X}$ for simplicity. The following definition introduces conditional sub-Gaussian random variables. **Definition 28**. *Let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space, let $(H,\langle\cdot, \cdot\rangle_H)$ be a finite dimensional real Hilbert space, and let $Z:\Omega\rightarrow H$ be a mean zero random variable. Let $\mathcal{G}\subset \mathcal{F}$ be a sub-sigma-field and $\alpha \ge 0$ a constant. We say $Z$ is $\alpha$-conditionally sub-Gaussian with respect to $\mathcal{G}$ if $$\label{cond-prop} \mathbb{E}\left[\exp\left({\langle u, Z \rangle_H}\right)\mid \mathcal{G}\right] \le \exp\left( \frac{\alpha^2\|u\|^2_H }{2}\right), \quad u\in H.$$* We first extend the martingale tail inequality with scalar noise, which was introduced in [@abbasi2011improved Theorem 1], to noise in a general finite dimensional Hilbert space.

#### Our setting:

For the remainder of this section we fix $X$ and $Y$ to be real finite dimensional Hilbert spaces and let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space with a filtration $\mathbb{G}=(\mathcal{G}_i)_{i=0}^\infty\subset \mathcal{F}$. We define $(A_i)_{i\in \mathbb{N}}$ as an $\mathcal{L}_2(X,Y)$-valued process predictable with respect to $\mathbb{G}$, and the noise process $(\eta_i)_{i\in \mathbb{N}}$ as a $Y$-valued process adapted to $\mathbb{G}$ such that $$\mathbb{E}[\eta_i\mid \mathcal{G}_{i-1}]=0, \quad \textrm{for all } i \geq 1,$$ and $\eta_i$ is $\alpha$-conditionally sub-Gaussian with respect to $\mathcal{G}_{i-1}$ (cf. Definition [Definition 28](#def:conditional_subgaussian){reference-type="ref" reference="def:conditional_subgaussian"}). We further define the following stochastic processes. Let $M_{n}: \Omega \rightarrow\mathcal{S}_0(X)$ and $U_n: \Omega \rightarrow X$ be such that for all $\omega\in \Omega$, $$\label{m-proc} M_{n}(\omega) =\sum_{i=1}^n A_i^*(\omega)A_i(\omega), \quad n \geq 1,$$ and $$\label{u-proc} U_n (\omega) =\sum_{i=1}^n A^*_i(\omega)\eta_i(\omega), \quad n \geq 1.$$ Now we are ready to introduce our martingale tail inequality for noise in a general finite dimensional Hilbert space. **Theorem 29**. *For all $V\in \mathcal{S}_+(X)$ and $\delta \in (0,1)$ we have $$\mathbb{P}\left( \|U_{n}\|^2_{X, (M_{n}+V)^{-1}} \le 2\alpha^2\log\left( \frac{\sqrt{\det \left( V^{-1} M_{n }+{\rm id}_X \right)}}{\delta} \right), \quad \forall n\in \mathbb{N} \right)\ge 1-\delta,$$ where $\det()$ denotes the Fredholm determinant (cf. [@gohberg1990classes Chapter VII]).* In order to prove Theorem [Theorem 29](#thm:martingale_tail){reference-type="ref" reference="thm:martingale_tail"} we introduce the following auxiliary results. **Lemma 30**. *Let $(M_n)_{n \in \mathbb{N}}$ and $(U_n)_{n \in \mathbb{N}}$ be as in [\[m-proc\]](#m-proc){reference-type="eqref" reference="m-proc"} and [\[u-proc\]](#u-proc){reference-type="eqref" reference="u-proc"}. Then for all $u\in X$ and every $\mathbb{G}$-stopping time $\tau$, $$G_{\tau} (u)=\exp\left( \frac{\langle u, U_{\tau}\rangle_{X}}{\alpha}-\frac{1}{2}\langle M_{\tau} u, u\rangle_{X}\right)$$ is well-defined and satisfies $\mathbb{E}[G_{\tau}(u)]\le 1$.* *Proof.* For any $u\in X$ we define, $$\label{g-n-def} G_{n} (u)=\exp\left( \frac{\langle u, U_{n}\rangle_{X}}{\alpha}-\frac{1}{2}\langle M_{n} u, u\rangle_{X} \right), \quad n \geq 1,$$ with $G_{0}(u)=1$.
From [\[m-proc\]](#m-proc){reference-type="eqref" reference="m-proc"} and [\[u-proc\]](#u-proc){reference-type="eqref" reference="u-proc"} we get, $$G_{n}(u)= \exp\left( \sum_{i=1}^{n} \left[ \frac{1}{\alpha}\langle u, A_{i}^* \eta_i \rangle_{X}-\frac{1}{2} \langle A_{i}^*A_{i} u, u \rangle_{X} \right] \right) =\prod_{i=1}^n D_i, \quad \textrm{for all }n \geq 1,$$ with $$D_i \coloneqq \exp\left( \frac{1}{\alpha}\langle A_{i} u, \eta_i \rangle_{Y}-\frac{1}{2} \langle A_{i} u, A_{i} u \rangle_{Y} \right).$$ Since $A_{i}$ is $\mathcal{G}_{i-1}$-measurable and $\eta_i$ is $\alpha$-conditionally sub-Gaussian with respect to $\mathcal{G}_{i-1}$, we get from [\[cond-prop\]](#cond-prop){reference-type="eqref" reference="cond-prop"}, $$\begin{aligned} \mathbb{E}[D_i|\mathcal{G}_{i-1}] &=\mathbb{E}\left[\exp \left( \frac{1}{\alpha}\langle A_{i} u, \eta_i \rangle_{Y} \right)\bigg|\mathcal{G}_{i-1}\right] \exp\left( -\frac{1}{2} \langle A_{i} u, A_{i} u \rangle_{Y} \right) \\ &\le \exp\left( \frac{1}{2}\left \| A_{i} u \right \|^2_{Y} \right)\exp\left( -\frac{1}{2} \|A_{i} u\|_Y^2 \right)=1.\end{aligned}$$ Thus by the measurability of $D_i$ with respect to $\mathcal{G}_i$, we get for all $n \geq 1$, $$\begin{aligned} \mathbb{E}[G_{n}(u )|\mathcal{G}_{n-1}] =\left(\prod_{i=1}^{n-1}D_i \right) \mathbb{E}[D_n|\mathcal{G}_{n-1}] = G_{n-1}(u) \mathbb{E}[D_n|\mathcal{G}_{n-1}] \le G_{n-1}(u),\end{aligned}$$ hence $(G_{n}(u))_{n \geq 0 }$ is a super-martingale with respect to $\mathbb{G}$. Using the fact that $G_{0}(u)=1$, we deduce $\mathbb{E}[G_{n}(u)] \leq \mathbb{E}[G_{n-1}(u)]\le 1$ for all $n\in \mathbb{N}$. Now let $\tau$ be a stopping time with respect to the filtration $\mathbb{G}$. Then $(G_{n \land \tau }(u))_{n \in \mathbb{N}}$ is a nonnegative super-martingale. By the optional stopping theorem, $\mathbb{E}[G_{n \land \tau}(u)] \leq 1$ for any $n \in \mathbb{N}$. By Doob's martingale convergence theorem, $G_{\tau}(u)=\lim_{n \rightarrow\infty} (G_{n \land \tau}(u))$ exists and is finite a.s., hence along with Fatou's lemma it follows that $\mathbb{E}[G_{\tau}(u)] \leq 1$. ◻ **Proposition 31**. *Let $(M_n)_{n \in \mathbb{N}}$ and $(U_n)_{n \in \mathbb{N}}$ be as in [\[m-proc\]](#m-proc){reference-type="eqref" reference="m-proc"} and [\[u-proc\]](#u-proc){reference-type="eqref" reference="u-proc"}. Then, for any $\mathbb{G}$-stopping time $\tau$, $V\in \mathcal{S}_+(X)$ and $\delta\in (0,1)$, $$\begin{aligned} \mathbb{P}\left( \exp\left( \frac{1}{2\alpha^2} \|U_{\tau}\|^2_{X, (M_{\tau}+V)^{-1}}\right) \ge \delta^{-1}\sqrt{\det \left( V^{-1} M_{\tau }+{\rm id}_X \right)} \right) & \le \delta.\end{aligned}$$* *Proof.* As $X$ is a finite dimensional space and $V$ is symmetric and positive definite, it follows that $V$ is invertible, and $V^{-1}\in \mathcal{S}_+(X)$. A spectral decomposition of $V^{-1}$ implies that there exist $(\lambda_i)_{i=1}^{n_x}\subset (0,\infty)$ and an orthonormal basis $(e_i)_{i=1}^{n_x}$ of $X$ such that $$\label{v-inv-dec} V^{-1}x=\sum_{i=1}^{n_x} \lambda_i \langle x,e_i\rangle_X e_i, \quad \textrm{for all } x\in X.$$ Let $\mathcal{G}_\infty=\sigma\big(\bigcup_{n=0}^\infty\mathcal{G}_n\big)$, and let $(\xi_i)_{i=1}^{n_x}$ be standard normal random variables that are mutually independent and are independent of $\mathcal{G}_\infty$.
Define $$\label{def-w} W =\sum_{i=1}^{n_x} \sqrt{\lambda_i}\xi_ie_i \in L^2(\Omega; X).$$ Let $n\in \mathbb{N}$ be fixed, and further define $$\label{h-n-def} H_{n}=\mathbb{E}[G_{n}(W)\mid \mathcal{G}_\infty].$$ Then, since $Vx=\sum_{i=1}^{n_x} \lambda^{-1}_i \langle x,e_i\rangle_X e_i$ for all $x\in X$, we get from [\[g-n-def\]](#g-n-def){reference-type="eqref" reference="g-n-def"} and [\[def-w\]](#def-w){reference-type="eqref" reference="def-w"}, $$\label{h-bnd1} \begin{aligned} H_{n} &=\mathbb{E}\left[\exp\left( \frac{\langle W, U_{n}\rangle_{X}}{\alpha}-\frac{1}{2}\langle M_{n} W, W\rangle_{X} \right) \,\bigg\vert\, \mathcal{G}_\infty \right] \\ & =\mathbb{E}\left[\exp\left( \frac{\langle W, U_{n}\rangle_{X}}{\alpha}-\frac{1}{2}\langle (M_{n}+ V) W, W\rangle_{X} + \frac{1}{2}\langle V W, W\rangle_{X} \right) \,\bigg\vert\, \mathcal{G}_\infty \right] \\ & = \mathbb{E}\left[\exp\left( \frac{\langle W, U_{n}\rangle_{X}}{\alpha}-\frac{1}{2}\langle (M_{n}+V) W, W\rangle_{X} + \frac{1}{2}\sum_{i=1}^{n_x} \xi^2_i \right) \,\bigg\vert\, \mathcal{G}_\infty \right] \\ & = \mathbb{E}\left[\exp\left( \frac{1}{2} \left\|\frac{U_{n}}{\alpha}\right\|^2_{X, (M_{n}+V)^{-1}} -\frac{1}{2} \left\| W-(M_{n}+V)^{-1}\frac{U_{n}}{\alpha}\right\|_{X, M_{n}+V}^2 +\frac{1}{2} \sum_{i=1}^{n_x} \xi_{i}^2\right) \,\bigg\vert\, \mathcal{G}_\infty \right], \end{aligned}$$ where the last identity used $M_{n}+V\in \mathcal{S}_+(X)$ and the fact that for all $u,v\in X$ and $A\in \mathcal{S}_+(X)$, $$\begin{aligned} \langle u, v\rangle_{X} -\frac{1}{2}\langle A u, u\rangle_{X} = \frac{1}{2} \|v\|^2_{X, A^{-1}} -\frac{1}{2} \| u-A^{-1}v\|_{X, A}^2. \end{aligned}$$ Since $U_{n}$ and $M_{n}$ are measurable with respect to $\mathcal{G}_\infty$ and $( \xi_i)_{i=1}^{n_x}$ are standard Gaussian random variables independent of $\mathcal{G}_\infty$, by writing $\bar{M}_{n}=M_{n}+ V$ we get from [\[h-bnd1\]](#h-bnd1){reference-type="eqref" reference="h-bnd1"} and [\[v-inv-dec\]](#v-inv-dec){reference-type="eqref" reference="v-inv-dec"} that $$\begin{aligned} \label{eq:H_n_m} \begin{split} H_{n} & = \exp\left( \frac{1}{2\alpha^2} \|U_{n}\|^2_{X, \bar{M}_{n}^{-1}}\right) \mathbb{E}\left[\exp\left( -\frac{1}{2} \| \bar{M}^{1/2}_{n} (W-\alpha^{-1}\bar{M}_{n}^{-1}U_{n})\|_{X}^2 +\frac{1}{2} \sum_{i=1}^{n_x} \xi_{i}^2\right) \,\bigg\vert\, \mathcal{G}_\infty \right] \\ &= \frac{\exp\left( \frac{1}{2\alpha^2} \|U_{n}\|^2_{X, \bar{M}_{n}^{-1}}\right)} {\sqrt{(2\pi)^{n_x}}} \int_{\mathbb{R}^{n_x}} \exp\left( -\frac{1}{2} \left\| \bar{M}^{1/2}_{n}\left( \sum_{i=1}^{n_x} \sqrt{\lambda_i}x_ie_i - \alpha^{-1} \bar{M}_{n}^{-1}U_{n}\right)\right\|_{X}^2 \right) \mathrm{d}x \\ & =\exp\left( \frac{1}{2\alpha^2} \|U_{n}\|^2_{X, \bar{M}_{n}^{-1}}\right) \sqrt{\det \left(V \bar{M}_{n}^{-1}\right)} \\ & =\left( \frac{1}{\sqrt{\det(V^{-1}M_n+{\rm id}_X)}} \right)\exp\left( \frac{1}{2\alpha^2} \|U_{n}\|^2_{X, (M_{n}+V)^{-1}}\right), \end{split}\end{aligned}$$ where the last identity used $\det(AB)=\det(A)\det(B)$. Since [\[eq:H_n\_m\]](#eq:H_n_m){reference-type="eqref" reference="eq:H_n_m"} holds for all $n\in \mathbb{N}$, it also holds for all $\mathbb{G}$-stopping times $\tau$.
By Lemma [Lemma 30](#lemma:super-martingale){reference-type="ref" reference="lemma:super-martingale"} and [\[h-n-def\]](#h-n-def){reference-type="eqref" reference="h-n-def"}, for all $\mathbb{G}$-stopping times $\tau$, $\mathbb{E}[H_{\tau}] =\mathbb{E}[\mathbb{E}[G_{\tau}(W)\mid {\mathcal G_{\infty}}]]\le 1$. Then for all $\delta\in (0,1)$, by Markov's inequality and $\mathbb{E}[H_{\tau}]\le 1$ we get, $$\begin{aligned} \mathbb{P}\left( \frac{\exp\left( \frac{1}{2\alpha^2} \|U_{\tau}\|^2_{X, (M_{\tau}+V)^{-1}}\right)} {\delta^{-1}\sqrt{\det \left(V^{-1} M_{\tau}+ {\rm id}_{X} \right)}} \ge 1 \right) & \le \mathbb{E}\left[ \frac{\exp\left( \frac{1}{2\alpha^2} \|U_{\tau}\|^2_{X, (M_{\tau}+V)^{-1}}\right)} {\delta^{-1}\sqrt{\det \left( V^{-1} M_{\tau}+ {\rm id}_{X} \right)}} \right] \\ &=\delta\,\mathbb{E}[H_{\tau}]\le \delta,\end{aligned}$$ which concludes the proof of Proposition [Proposition 31](#prop:tail_stopping){reference-type="ref" reference="prop:tail_stopping"}. ◻ Now we are ready to prove Theorem [Theorem 29](#thm:martingale_tail){reference-type="ref" reference="thm:martingale_tail"}. *Proof of Theorem [Theorem 29](#thm:martingale_tail){reference-type="ref" reference="thm:martingale_tail"}.* Let $\delta \in (0,1)$ be fixed. For each $n\in \mathbb{N}$, define the event $$B_n(\delta) = \left\{ \omega\in \Omega \,\bigg\vert\, \exp\left( \frac{1}{2\alpha^2} \|U_{n}\|^2_{X, (M_{n}+V)^{-1}}\right) > \delta^{-1}\sqrt{\det \left( V^{-1} M_{n }+{\rm id}_X \right)} \right\}.$$ Define the stopping time $\tau:\Omega\rightarrow\mathbb{N}\cup\{\infty\}$ such that $\tau(\omega)=\min\{n\in \mathbb{N}\mid \omega\in B_n(\delta)\}$ for all $\omega\in \Omega$, with the convention $\min \emptyset = \infty$. Then by the identity $\bigcup_{n\in \mathbb{N}}B_n(\delta) =\{\tau <\infty\}$ and by Proposition [Proposition 31](#prop:tail_stopping){reference-type="ref" reference="prop:tail_stopping"}, $$\begin{aligned} \mathbb{P}\left(\bigcup_{n\in \mathbb{N}}B_n(\delta)\right) &=\mathbb{P}(\tau <\infty) \\ &= \mathbb{P}\left( \exp\left( \frac{1}{2\alpha^2} \|U_{\tau}\|^2_{X, (M_{\tau}+V)^{-1}}\right) > \delta^{-1}\sqrt{\det \left( V^{-1} M_{\tau }+{\rm id}_X \right)}, \, \tau<\infty \right) \\ &\le \mathbb{P}\left( \exp\left( \frac{1}{2\alpha^2} \|U_{\tau}\|^2_{X, (M_{\tau}+V)^{-1}}\right) > \delta^{-1}\sqrt{\det \left( V^{-1} M_{\tau }+{\rm id}_X \right)} \right) \le \delta.\end{aligned}$$ This proves the desired estimate. ◻ Based on Theorem [Theorem 29](#thm:martingale_tail){reference-type="ref" reference="thm:martingale_tail"}, we establish the following high probability bounds for a projected least-squares estimator based on correlated observations. Recall that the Hilbert spaces $X$, $Y$ and $(\Omega, \mathcal{F},\mathbb{P})$, the filtration $\mathbb{G}$ and the predictable process $(A_n)_{n \in \mathbb{N}}$ were defined before [\[m-proc\]](#m-proc){reference-type="eqref" reference="m-proc"}, that $(M_n)_{n \in \mathbb{N}}$ was introduced in [\[m-proc\]](#m-proc){reference-type="eqref" reference="m-proc"}, and that $\alpha$-conditionally sub-Gaussian random variables were defined in Definition [Definition 28](#def:conditional_subgaussian){reference-type="ref" reference="def:conditional_subgaussian"}. **Theorem 32**. *Let $(y_i)_{i\in \mathbb{N}}$ be a $Y$-valued process adapted with respect to $\mathbb{G}$.
Assume that there exists a nonempty closed convex subset $\mathcal{C}\subset X$, $x^\star \in \mathcal{C}$ and $R\ge 0$ such that $$\label{regression_general_set_up} y_i = A_i x^\star+\varepsilon_i ,\quad \textrm{for all } i \geq 1,$$ where $\mathbb{E}[\varepsilon_i\mid \mathcal{G}_{i-1}]=0$, and $\varepsilon_i$ is $R$-conditionally sub-Gaussian with respect to $\mathcal{G}_{i-1}$. Define $$\label{eq:x_n_lambda} x_{n,\lambda}\coloneqq\mathop{\mathrm{arg\,min}}_{x\in \mathcal{C}} \left( \sum_{i=1}^n\|y_i-A_i x\|^2_Y+\lambda \|x\|^2_{X}\right) , \quad n\in \mathbb{N} ,\ \lambda>0.$$ Then for any $\lambda>0$ and $\delta \in (0,1)$, with probability at least $1-\delta$ we have for all $n\in \mathbb{N}$, $$\left \| x_{n, \lambda} - x^\star \right\|_{X,M_n+\lambda {\rm id}_X} \le R \left( 2\log\left( \frac{\sqrt{\det \left( \lambda^{-1} M_{n }+{\rm id}_X \right)}}{\delta} \right)\right)^{1/2} + \lambda \left\|x^\star \right\|_{X,(M_n+\lambda {\rm id}_X)^{-1}}.$$* *Proof.* Note that for any $n\in \mathbb{N}$ and $\lambda>0$ the map $J_{n,\lambda}:X \rightarrow \mathbb{R}\cup \{\infty\}$, given by $$J_{n,\lambda}(x) = \sum_{i=1}^{n} \|y_i-A_i x\|^2_Y+\lambda \|x\|^2_{X}+ \boldsymbol{\delta}_{\mathcal{C}}(x) , \quad \textnormal{with $\boldsymbol{\delta}_{\mathcal{C}}(x)=\begin{cases} 0, & x\in \mathcal{C}\\ \infty, & x\not \in \mathcal{C} \end{cases},$}$$ is strongly convex. Hence $x_{n,\lambda}\in \mathcal{C}$ is well-defined and it satisfies the following first-order condition: $$\left\langle 2\sum_{i=1}^{n} A_i^*(A_i x_{n,\lambda}-y_i) +2\lambda x_{n,\lambda}, h-x_{n,\lambda}\right\rangle_X\ge 0, \quad \textrm{for all }h\in \mathcal{C}.$$ Substituting $h=x^\star\in \mathcal{C}$ in the above inequality and using $y_i =A_i x^\star+\varepsilon_i$ gives, $$\begin{aligned} \left\langle \sum_{i=1}^{n} A_i^* ( A_i x_{n,\lambda}-A_i x^\star -\varepsilon_i ) + \lambda (x_{n,\lambda}-x^\star) +\lambda x^\star, x^\star-x_{n,\lambda}\right\rangle_X \ge 0.\end{aligned}$$ Now let $M_n=\sum_{i=1}^{n} A_i^{*} A_i$ and $\bar{M}_n=M_n+\lambda {\rm id}_X$. Then $$\begin{aligned} \left\langle \bar{M}_n ( x_{n,\lambda}-x^\star) -\sum_{i=1}^{n} A_i^* \varepsilon_i +\lambda x^\star, x_{n,\lambda}-x^\star\right\rangle_X \le 0,\end{aligned}$$ which along with the invertibility of $\bar{M}_n\in \mathcal{S}_+(X)$ and the Cauchy-Schwarz inequality implies that $$\begin{aligned} \left \| x_{n, \lambda} - x^\star \right\|_{X,\bar{M}_n}^2 &\le \left\langle \sum_{i=1}^{n} A_i^* \varepsilon_i -\lambda x^\star, x_{n,\lambda}-x^\star\right\rangle_X \\ &= \left\langle \bar{M}_n \bar{M}_n^{-1}\left(\sum_{i=1}^{n} A_i^* \varepsilon_i -\lambda x^\star\right), x_{n,\lambda}-x^\star\right\rangle_X \\ &\le \left\| \bar{M}_n^{-1}\left(\sum_{i=1}^{n} A_i^* \varepsilon_i -\lambda x^\star\right) \right\|_{X,\bar{M}_n} \left\| x_{n,\lambda}-x^\star\right\|_{X,\bar{M}_n }.\end{aligned}$$ This together with the identity $\left \| \bar{M}_n^{-1} x \right\|_{X,\bar{M}_n} =\left \| x\right\|_{X,\bar{M}_n^{-1}}$ for all $x\in X$ yields $$\begin{aligned} \left \| x_{n, \lambda} - x^\star \right\|_{X,\bar{M}_n} &\le \left\| \sum_{i=1}^{n} A_i^* \varepsilon_i -\lambda x^\star \right\|_{X,\bar{M}^{-1}_n} \le \left(\left\| \sum_{i=1}^{n} A_i^* \varepsilon_i \right\|_{X,\bar{M}^{-1}_n} +\lambda\left\| x^\star \right\|_{X,\bar{M}^{-1}_n}\right).\end{aligned}$$ The desired estimate then follows from Theorem [Theorem 29](#thm:martingale_tail){reference-type="ref" reference="thm:martingale_tail"} with $V=\lambda {\rm id}_X$ and $\alpha =R$.
◻ # Proof of Theorems [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"} and [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"} {#sec-pf-est} *Proof of Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"}.* Let $( \mathcal{G}_i)_{i=0}^{N-1}$ be the filtration given in Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"}, and for each $n=1,\ldots, N$, let $T_n:\mathbb{R}^{M \times M}\rightarrow\mathbb{R}^M$ be such that $T_n G\coloneqq Gu^{(n)}$ for all $G\in \mathbb{R}^{M \times M}$, and let $T^*_n: \mathbb{R}^M \rightarrow\mathbb{R}^{M \times M}$ be the adjoint of $T_n$. Then by [\[eq:price_impact_linear_regression\]](#eq:price_impact_linear_regression){reference-type="eqref" reference="eq:price_impact_linear_regression"}, $y^{(n)}=T_nG^\star+\varepsilon^{(n)}$ for all $n=1,\ldots, N$, and the least-squares estimator [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} is equivalent to $$G_{N,\lambda}\coloneqq\mathop{\mathrm{arg\,min}}_{G \in \mathbb{R}^{M \times M}} \sum_{n=1}^N \|y^{(n)}-T_n G \|^2+\lambda\|G\|_{\mathbb{R}^{M \times M}}^2.$$ Recall that $(\mathbb{R}^{M \times M}, \|\cdot\|_{\mathbb{R}^{M \times M}})$ is a finite-dimensional Hilbert space with the inner product $\langle A,B\rangle_{\mathbb{R}^{M \times M}}\coloneqq \textnormal{tr}(A^\top B)$ for all $A,B\in \mathbb{R}^{M \times M}$. Then by Theorem [Theorem 32](#thm:confidence_interval){reference-type="ref" reference="thm:confidence_interval"} (with $X=\mathbb{R}^{M \times M}$ and $Y=\mathbb{R}^M$), for all $\lambda>0$ and $\delta \in (0,1)$, with probability at least $1-\delta$, $$\begin{aligned} \label{eq:G_error_HS} \begin{split} &\left \| G_{N,\lambda} - G^\star \right\|_{\mathbb{R}^{M \times M},M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}}} \\ &\quad \le R \left( 2\log\left( \frac{\sqrt{\det \left( \lambda^{-1} M_{N }+{\rm id}_{\mathbb{R}^{M \times M}} \right)}}{\delta} \right)\right)^{1/2} + \lambda \left\|G^\star \right\|_{\mathbb{R}^{M \times M},(M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}})^{-1}}, \end{split}\end{aligned}$$ where $\det()$ denotes the Fredholm determinant, $M_N =\sum_{n=1}^N T_n^*T_n: \mathbb{R}^{M \times M}\rightarrow\mathbb{R}^{M \times M}$, ${\rm id}_{\mathbb{R}^{M \times M}}$ is the identity map on $\mathbb{R}^{M \times M}$, and the norm $\|\cdot\|_{\mathbb{R}^{M \times M},M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}}}$ is defined by $$\label{eq:RHH_weightnorm} \|G\|_{\mathbb{R}^{M \times M},M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}}} =\left( \left\langle (M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}} )G,G \right\rangle_{\mathbb{R}^{M \times M}} \right)^{1/2}, \quad G\in \mathbb{R}^{M \times M}.$$ In the sequel, we fix $\lambda>0$ and $\delta \in (0,1)$, and simplify the estimate [\[eq:G_error_HS\]](#eq:G_error_HS){reference-type="eqref" reference="eq:G_error_HS"}. 
Observe that for all $G\in \mathbb{R}^{M \times M}$, ${\rm id}_{\mathbb{R}^{M \times M}} G=G \mathbb{I}_{M}$, and for all $G\in \mathbb{R}^{M \times M}$ and $y\in \mathbb{R}^M$, $$\langle T_n G,y\rangle =\langle Gu^{(n)},y\rangle = ( Gu^{(n)})^\top y=\textnormal{tr}( (u^{(n)})^\top G^\top y) =\textnormal{tr}(G^\top y(u^{(n)})^\top )=\langle G,y(u^{(n)})^\top\rangle_{\mathbb{R}^{M \times M}},$$ which implies that $T_n^*: \mathbb{R}^M\rightarrow\mathbb{R}^{M \times M}$ is given by $T_n^* y \coloneqq y (u^{(n)})^\top$ for all $y\in \mathbb{R}^M$. Thus $T_n^*T_n: \mathbb{R}^{M \times M}\rightarrow\mathbb{R}^{M \times M}$ satisfies $T_n^*T_nG=Gu^{(n)}(u^{(n)})^\top$ for all $G\in \mathbb{R}^{M \times M}$. Then by [\[eq:RHH_weightnorm\]](#eq:RHH_weightnorm){reference-type="eqref" reference="eq:RHH_weightnorm"}, $$\begin{aligned} \|G\|^2_{\mathbb{R}^{M \times M},M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}}} &= \left\langle (M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}} )G,G \right\rangle_{\mathbb{R}^{M \times M}} \\ &=\left\langle G\left(\sum_{n=1}^N u^{(n)}(u^{(n)})^\top +\lambda \mathbb{I}_M \right),G \right\rangle_{\mathbb{R}^{M \times M}} . \end{aligned}$$ Let $V_N= \sum_{n=1}^N u^{(n)}(u^{(n)})^\top$. Then for all $A\in \mathbb{R}^{M \times M}$, $(M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}} )A =A(V_N+ \lambda \mathbb{I}_M)$, and $(M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}} )^{-1}A =A(V_N+\lambda \mathbb{I}_M)^{-1}$. As $V_N+\lambda \mathbb{I}_M$ is symmetric and positive definite, for all $G\in \mathbb{R}^{M \times M}$, $$\begin{aligned} \label{eq:RHH_weightednorm_V} \begin{split} \|G\|^2_{\mathbb{R}^{M \times M},M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}}} & =\textnormal{tr}\left((V_N+\lambda \mathbb{I}_M)G^\top G\right) =\textnormal{tr}\left((V_N+\lambda \mathbb{I}_M)^{\frac{1}{2}} G^\top G(V_N+\lambda \mathbb{I}_M)^{\frac{1}{2}} \right) \\ & = \left\|G(V_N+\lambda \mathbb{I}_M)^{\frac{1}{2}} \right\|^2_{\mathbb{R}^{M \times M}}, \end{split}\end{aligned}$$ and similarly, $$\begin{aligned} \label{eq:RHH_weightednorm_V_inverse} \begin{split} \|G\|_{\mathbb{R}^{M \times M},(M_N+\lambda {\rm id}_{\mathbb{R}^{M \times M}})^{-1}} & = \left\|G(V_N+\lambda \mathbb{I}_M)^{-\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}}. \end{split}\end{aligned}$$ It remains to compute the Fredholm determinant $\det \left( \lambda^{-1} M_{N }+{\rm id}_{\mathbb{R}^{M \times M}} \right)$. For each $i,j=1,\ldots, M$, let $E_{ij}\in \mathbb{R}^{M \times M}$ be the matrix such that the $(i,j)$-th entry is $1$ and all remaining entries are zero. Then $(E_{ij})_{i,j=1}^M$ is an orthonormal basis of $(\mathbb{R}^{M \times M}, \|\cdot\|_{\mathbb{R}^{M \times M}})$, and the Fredholm determinant of $\lambda^{-1} M_{N }+{\rm id}_{\mathbb{R}^{M \times M}}$ can be computed using its matrix representation with respect to the basis $(E_{ij})_{i,j=1}^M$. Indeed, by [@gohberg1990classes Theorem 3.2 p. 117], $$\begin{aligned} \det \left( \lambda^{-1} M_{N }+{\rm id}_{\mathbb{R}^{M \times M}} \right) &=\det\left( \left(\delta_{ij,i'j'}+\lambda^{-1}\langle M_N E_{ij}, E_{i'j'} \rangle_{\mathbb{R}^{M \times M}}\right)_{i,j,i',j'=1}^M \right) \\ &= \det\left( \left(\delta_{ij,i'j'}+\lambda^{-1}\langle E_{ij}V_N, E_{i'j'} \rangle_{\mathbb{R}^{M \times M}}\right)_{i,j,i',j'=1}^M \right),\end{aligned}$$ where $\delta_{ij,i'j'}$ is the Kronecker's delta (i.e., $\delta_{ij,i'j'} =1$ if $i=i'$ and $j =j'$ and $0$ otherwise). 
A direct computation shows that for all $\ell,k=1,\ldots, M$, $(E_{ij}V_N)_{\ell, k}=\delta_{i,\ell}(V_N)_{j,k}$, $((E_{ij}V_N)^\top)_{\ell, k}(E_{i'j'})_{k,\ell} =\delta_{i,k}(V_N)_{j,\ell }\delta_{i',k}\delta_{j',\ell}$, and, hence, $\langle E_{ij}V_N, E_{i'j'} \rangle_{\mathbb{R}^{M \times M}}=\delta_{i,i'}(V_N)_{j,j' }$. Thus $$\left(\delta_{ij,i'j'}+\lambda^{-1}\langle E_{ij}V_N, E_{i'j'} \rangle_{\mathbb{R}^{M \times M}}\right)_{i,j,i',j'=1}^M =\operatorname{diag}( \mathbb{I}_M+\lambda^{-1}V_N,\cdots , \mathbb{I}_M+\lambda^{-1}V_N )\in \mathbb{R}^{M^2\times M^2},$$ which implies that $$\det \left( \lambda^{-1} M_{N }+{\rm id}_{\mathbb{R}^{M \times M}} \right) =(\det(\mathbb{I}_M+\lambda^{-1}V_N ))^M =(\lambda^{-M}\det(\lambda\mathbb{I}_M+V_N ))^M.$$ This along with [\[eq:G_error_HS\]](#eq:G_error_HS){reference-type="eqref" reference="eq:G_error_HS"} and [\[eq:RHH_weightednorm_V\]](#eq:RHH_weightednorm_V){reference-type="eqref" reference="eq:RHH_weightednorm_V"} shows that $$\begin{aligned} \label{eq:unconstrain_G_error} \begin{split} &\left\|(G_{N,\lambda} - G^\star)(\lambda \mathbb{I}_M+V_N)^{\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}} \\ &\quad \le R \left( 2\log\left( \frac{ \sqrt{ \lambda^{-M^2} (\det(\lambda\mathbb{I}_M+V_N ))^{M}} }{\delta} \right)\right)^{\frac{1}{2}} +\lambda \left \| G^\star (V_N+\lambda \mathbb{I}_M)^{-\frac{1}{2}}\right\|_{\mathbb{R}^{M \times M}} \\ &\quad = R \left( \log\left( \frac{ \lambda^{-M^2} (\det(\lambda\mathbb{I}_M+V_N ))^{M} }{\delta^2} \right)\right)^{\frac{1}{2}} +\lambda \left \| G^\star (V_N+\lambda \mathbb{I}_M)^{-\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}}. \end{split}\end{aligned}$$ This proves the desired estimate. ◻ *Proof of Theorem [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}.* Let $(\mathcal{G}_i)_{i=0}^{N-1}$ be the filtration given in Definition [Definition 5](#def:offline_data){reference-type="ref" reference="def:offline_data"}. As $u^{(n)}$ is $\mathcal{G}_{n-1}$-measurable, $U_n$ is $\mathcal{G}_{n-1}$-measurable and $y^{(n)}$ is $\mathcal{G}_{n}$-measurable. Hence by Theorem [Theorem 32](#thm:confidence_interval){reference-type="ref" reference="thm:confidence_interval"} (with $X=Y =\mathbb{R}^M$), for all $\lambda>0$ and $\delta \in (0,1)$, with probability at least $1-\delta$, $$\begin{aligned} &\left \| W_{N,\lambda}^{\frac{1}{2}} ({K}_{N,\lambda} - K^\star) \right\| \\ &\quad \le R \left( \log\left( \frac{ \det \left( \lambda^{-1} \sum_{n=1}^N U^\top_n U_n+\mathbb{I}_M \right) }{\delta^2} \right)\right)^{\frac{1}{2}} + \lambda \left\|W_{N,\lambda}^{-\frac{1}{2}} K^\star \right\| \\ &\quad \le R \left( \log\left( \frac{ \lambda ^{-M}\det \left( W_{N,\lambda} \right) }{\delta^2} \right)\right)^{\frac{1}{2}} + \lambda \left\|W_{N,\lambda}^{-\frac{1}{2}} K^\star \right\|.\end{aligned}$$ This proves the desired estimate. ◻

# Proof of Theorems [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"} and [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"} and Corollary [Corollary 20](#cor-quant){reference-type="ref" reference="cor-quant"} {#sec-pf-pessim}

*Proof of Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}.* (i) Let $\lambda>0$ and $\delta\in (0,1)$. By Theorem [Theorem 11](#thm:confidence_volterra){reference-type="ref" reference="thm:confidence_volterra"}, [\[bnd-err-1\]](#bnd-err-1){reference-type="eqref" reference="bnd-err-1"} holds with probability $1-\delta$.
Using the invertibility of $V_{N,\lambda}$ (recall [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"}) and the Cauchy-Schwarz inequality, we get for all $u\in \mathcal{A}_{b}$, $$\label{g-diff-est1} \begin{aligned} \left \| (G^{\star} - G_{N,\lambda}) u \right \| & = \left \| (G^{\star} - G_{N,\lambda})V_{N,\lambda}^{1/2} V_{N,\lambda}^{-1/2} u \right\| \\ & \leq \left \| (G^{\star} - G_{N,\lambda})V_{N,\lambda}^{1/2} \right \|_{\mathbb{R}^{M\times M}} \left \| V_{N,\lambda}^{-1/2} u \right \| . \end{aligned}$$ From [\[bnd-err-1\]](#bnd-err-1){reference-type="eqref" reference="bnd-err-1"}, [\[l-bnd\]](#l-bnd){reference-type="eqref" reference="l-bnd"}, [\[c-const\]](#c-const){reference-type="eqref" reference="c-const"} and [\[g-diff-est1\]](#g-diff-est1){reference-type="eqref" reference="g-diff-est1"}, we obtain with probability at least $1 - \delta$, $$\label{g-diff-est2} \begin{aligned} & \| (G^{\star} - G_{N,\lambda}) u \| \| u \| \\ &\leq L^\mathcal{A}_{\eqref{l-bnd}} \left[ R\left(\log\left(\frac{\lambda^{-M^2} (\det(V_{N,\lambda} ))^{M}}{\delta^2}\right)\right)^{\frac{1}{2}}+\lambda \left \| G^\star (V_{N,\lambda})^{-\frac{1}{2}} \right\|_{\mathbb{R}^{M \times M}} \right] \left \| V_{N,\lambda}^{-1/2} u \right \| \\ & \leq L^\mathcal{A}_{\eqref{l-bnd}}C_{\eqref{c-const}}(N) \left \| V_{N,\lambda}^{-1/2} u \right \| \\ &= \ell_1(V_{N,\lambda},u), \end{aligned}$$ where we used [\[fun_regularization\]](#fun_regularization){reference-type="eqref" reference="fun_regularization"} in the last equality. From [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"}, [\[j-p1\]](#j-p1){reference-type="eqref" reference="j-p1"}, [\[g-diff-est2\]](#g-diff-est2){reference-type="eqref" reference="g-diff-est2"} and the Cauchy-Schwarz inequality, we get (with probability at least $1-\delta$) $$\label{j-diff1} \begin{aligned} & J(\hat{u}^{(1)};G^{\star})- J_{P,1} (\hat{u}^{(1)};G_{N,\lambda}) \\ &= \mathbb{E}_{\mathcal D} \left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} G^{\star}_{i,j} \hat{u}^{(1)}_{t_j} \hat{u}^{(1)}_{t_i} \right] - \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (G_{N,\lambda})_{i,j} \hat{u}^{(1)}_{t_j} \hat{u}^{(1)}_{t_i} +\ell_1(V_{N,\lambda},\hat{u}^{(1)}) \right] \\ & = \mathbb{E}_{\mathcal D}\left[ \left \langle (G^{\star} - G_{N,\lambda}) \hat{u}^{(1)},\hat{u}^{(1)} \right \rangle \right] - \mathbb{E}_{\mathcal D}\left[ \ell_1(V_{N,\lambda},\hat{u}^{(1)}) \right]\\ & \leq \mathbb{E}_{\mathcal D}\left[ \| (G^{\star} - G_{N,\lambda}) \hat{u}^{(1)} \| \| \hat{u}^{(1)} \| \right] -\mathbb{E}_{\mathcal D}\left[ \ell_1(V_{N,\lambda},\hat{u}^{(1)}) \right] \\ & \leq \mathbb{E}_{\mathcal D}\left[\ell_1(V_{N,\lambda},\hat{u}^{(1)}) \right] -\mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},\hat{u}^{(1)}) \right] \\ &= 0.
\end{aligned}$$ Further, we deduce from [\[j-diff1\]](#j-diff1){reference-type="eqref" reference="j-diff1"} and the optimality of $\hat u^{(1)}$ with respect to $J_{P,1}( \cdot \,;G_{N,\lambda})$ that with probability $1 - \delta$, $$\label{i01} \begin{aligned} J(\hat u^{(1)};G^{\star})- J(u;G^{\star}) & \leq J_{P,1}(\hat u^{(1)};G_{N,\lambda}) -J(u;G^{\star}) \\ & \leq J_{P,1}(u;G_{N,\lambda}) -J(u;G^{\star}) \\ & = \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (G_{N,\lambda})_{i,j} u_{t_j} u_{t_i} +\ell_1(V_{N,\lambda},u) \right] -\mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} G^{\star}_{i,j} u_{t_j} u_{t_i} \right] \\ & = \mathbb{E}_{\mathcal D}\left[ \ell_1(V_{N,\lambda},u) \right] - \mathbb{E}_{\mathcal D}\left[ \left \langle (G^{\star} - G_{N,\lambda}) u, u \right \rangle \right] \\ & \leq \mathbb{E}_{\mathcal D}\left[ \ell_1(V_{N,\lambda},u) \right] +\left| \mathbb{E}_{\mathcal D}\left[ \left \langle (G^{\star} - G_{N,\lambda}) u, u \right \rangle \right] \right|\\ &\leq 2 \mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right], \quad \textrm{for all } u \in \mathcal A_b, \end{aligned}$$ where we used the Cauchy-Schwarz inequality and [\[g-diff-est2\]](#g-diff-est2){reference-type="eqref" reference="g-diff-est2"} to derive the last estimate. This completes the proof of (i). \(ii\) Let $\lambda>0$ and $\delta\in (0,1)$. By Theorem [Theorem 15](#thm:confidence_convolution){reference-type="ref" reference="thm:confidence_convolution"}, [\[diff-k\]](#diff-k){reference-type="eqref" reference="diff-k"} holds with probability $1-\delta$. Using the invertibility of $W_{N,\lambda}$ (recall [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"}), [\[diff-k\]](#diff-k){reference-type="eqref" reference="diff-k"}, [\[l-bnd\]](#l-bnd){reference-type="eqref" reference="l-bnd"}, [\[eq:cost_conv\]](#eq:cost_conv){reference-type="eqref" reference="eq:cost_conv"}, [\[c-2-const\]](#c-2-const){reference-type="eqref" reference="c-2-const"} and the Cauchy-Schwarz inequality, we get for all $u\in \mathcal{A}_{b}$, $$\label{ey1} \begin{aligned} \| (\mathcal T u) (K^{\star} - K_{N,\lambda}) \| \|u\| &\leq L^\mathcal{A}_{\eqref{l-bnd}} \left \| ( \mathcal{T}u) W_{N,\lambda}^{-1/2} W_{N,\lambda}^{1/2} (K^{\star} - K_{N,\lambda}) \right \| \\ & \leq L^\mathcal{A}_{\eqref{l-bnd}} \left \| W_{N,\lambda}^{1/2} (K^{\star} - K_{N,\lambda}) \right \| \left \| (\mathcal{T}u) W_{N,\lambda}^{-1/2} \right \|_{\mathbb{R}^{M\times M}} \\ & \leq \ell_2(W_{N,\lambda},u), \end{aligned}$$ where we define a linear map $\mathcal T : \mathbb R^{M} \rightarrow \mathbb R^{M\times M}$ such that $U=\mathcal T u$, with $U$ being defined as in [\[t-op\]](#t-op){reference-type="eqref" reference="t-op"}.
From [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"}, [\[j-p2\]](#j-p2){reference-type="eqref" reference="j-p2"}, [\[t-op\]](#t-op){reference-type="eqref" reference="t-op"}, [\[ey1\]](#ey1){reference-type="eqref" reference="ey1"} and the Cauchy-Schwarz inequality, it follows that with probability at least $1 - \delta$, $$\begin{aligned} & J(\hat{u}^{(2)};K^{\star})- J_{P,2} (\hat{u}^{(2)};K_{N,\lambda}) \\ &= \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} K^{\star}_{i-j} \hat{u}^{(2)}_{t_j} \hat{u}^{(2)}_{t_i} \right] - \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (K_{N,\lambda})_{i-j} \hat{u}^{(2)}_{t_j}\hat{u}^{(2)}_{t_i} +\ell_2(W_{N,\lambda},\hat{u}^{(2)}) \right] \\ & = \mathbb{E}_{\mathcal D}\left[ \left \langle \mathcal T \hat{u}^{(2)} (K^{\star} - K_{N,\lambda}) , \hat{u}^{(2)}\right \rangle \right] - \mathbb{E}_{\mathcal D}\left[ {\ell}_2(W_{N,\lambda},\hat{u}^{(2)}) \right]\\ & \leq \mathbb{E}_{\mathcal D}\left[ \| \mathcal T \hat{u}^{(2)} (K^{\star} - K_{N,\lambda}) \| \| \hat{u}^{(2)} \| \right] -\mathbb{E}_{\mathcal D}\left[ {\ell}_2(W_{N,\lambda},\hat{u}^{(2)}) \right] \\ & \leq \mathbb{E}_{\mathcal D}\left[{\ell}_2(W_{N,\lambda},\hat{u}^{(2)}) \right] -\mathbb{E}_{\mathcal D}\left[ {\ell}_2(W_{N,\lambda},\hat{u}^{(2)}) \right] = 0.\end{aligned}$$ Then (ii) follows by applying similar steps as in [\[i01\]](#i01){reference-type="eqref" reference="i01"}. ◻ *Proof of Corollary [Corollary 20](#cor-quant){reference-type="ref" reference="cor-quant"}.* (i) Repeating similar steps as in [\[j-diff1\]](#j-diff1){reference-type="eqref" reference="j-diff1"} we have with $\mathbb P'$-probability at least $1-\delta$, for any $u\in \mathcal A_b$, $$\begin{aligned} & J(u;G^{\star})- J (u;G_{N,\lambda}) \\ &= \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} G^{\star}_{i,j} u_{t_j}{u}_{t_i} \right] - \mathbb{E}_{\mathcal D}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} (G_{N,\lambda})_{i,j} u_{t_j} u_{t_i} \right] \\ & = \mathbb{E}_{\mathcal D}\left[ \left \langle (G^{\star} - G_{N,\lambda}) u,u \right \rangle \right] \\ & \leq \mathbb{E}_{\mathcal D}\left[ \| (G^{\star} - G_{N,\lambda})u \| \| u \| \right]\\ & \leq \mathbb{E}_{\mathcal D}\left[\ell_1(V_{N,\lambda},u) \right], \end{aligned}$$ which completes the proof. \(ii\) The proof follows along similar lines as that of (i) and is hence omitted. ◻ *Proof of Theorem [Theorem 24](#thm-l-bound){reference-type="ref" reference="thm-l-bound"}.* (i) Let $\lambda>0$ and $\delta \in (0,1)$. Then there exists an event $A_\delta$ with $\mathbb{P}'(A_\delta)\ge 1-2\delta$, on which both Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}(i) and Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(i) hold. We work on the event $A_\delta$ in the subsequent analysis. From Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}(i) it follows that we need to bound $\mathbb{E}_{\mathcal D} \left[ \ell_1(V_{N,\lambda},u) \right]$.
We note that by using [\[fun_regularization\]](#fun_regularization){reference-type="eqref" reference="fun_regularization"} and Jensen's inequality, we can estimate ${\mathbb{E}}_{\mathcal D}[ \ell_1(V_{N,\lambda},u)]$ as follows, $$\label{g1} \begin{aligned} {\mathbb{E}}_{\mathcal D}\big[ \ell_1(V_{N,\lambda},u) \big]& = L^\mathcal{A}_{\eqref{l-bnd}}C_{\eqref{c-const}}(N) \mathbb{E}_{\mathcal D} \left[ \sqrt{u^{\top} V_{N,\lambda}^{-1} u} \right] \\ & \leq L^\mathcal{A}_{\eqref{l-bnd}}C_{\eqref{c-const}}(N) \sqrt{\mathbb{E}_{\mathcal D} \left[ u^{\top} V_{N,\lambda}^{-1} u \right]}. \end{aligned}$$ By [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"} and Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(i), on the event $A_\delta$, $$\label{g2} V_{N,\lambda}^{-1} = \left( \sum_{n = 1}^{N} u^{(n)} (u^{(n)})^{\top} + \lambda \mathbb{I}_{M} \right)^{-1} \leq \left( N \Sigma - C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N} \mathbb{I}_{M} + \lambda \mathbb{I}_{M} \right)^{-1} \leq N^{-1} \Sigma^{-1},$$ where we used $\lambda=C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$ by the hypothesis of the theorem. Inequality [\[g2\]](#g2){reference-type="eqref" reference="g2"} is to be interpreted as follows, $$\begin{aligned} x^{\top} V_{N,\lambda}^{-1} x \leq x^{\top} \left( N \Sigma \right)^{-1} x, \quad \textrm{for all } x \in \mathbb{R}^M. \end{aligned}$$ Note that $\Sigma^{-1}$ exists since $\Sigma$ is a symmetric positive-definite matrix. By using the Rayleigh quotient bounds on the symmetric positive-definite matrix $\Sigma^{-1}$ we have, $$\label{g3} \|\Sigma^{-1} x \| \leq \underline\xi^{-1}_{\Sigma} \|x\| , \quad \textrm{for all } x \in \mathbb{R}^M,$$ where $\underline\xi_{\Sigma}$ is the minimal eigenvalue of $\Sigma$, hence $\underline\xi_{\Sigma}^{-1}$ is the maximal eigenvalue of $\Sigma^{-1}$. From [\[g2\]](#g2){reference-type="eqref" reference="g2"}, the Cauchy-Schwarz inequality and [\[g3\]](#g3){reference-type="eqref" reference="g3"}, we deduce that with probability $1-\delta$, $$\label{tf1} \begin{aligned} \mathbb{E}_{\mathcal D}\left[ u^{\top} V_{N,\lambda}^{-1} u\right] & \leq \mathbb{E}_{\mathcal D}\left[ u^{\top} \left( N \Sigma \right)^{-1} u \right] \\ &= \frac{1}{N} \mathbb{E}_{\mathcal D} \left[\left \langle \Sigma ^{-1} u, u \right \rangle \right] \\ &\leq \frac{1}{N} \mathbb{E}_{\mathcal D} \left[ \| \Sigma ^{-1} u \| \| u \| \right] \\ &\leq \frac{1}{N} \underline\xi^{-1}_{\Sigma} \mathbb{E}_{\mathcal D}[\| u \|^2], \quad \textrm{for all } u \in \mathcal A_b. \end{aligned}$$ From [\[g1\]](#g1){reference-type="eqref" reference="g1"}, [\[tf1\]](#tf1){reference-type="eqref" reference="tf1"} and [\[l-bnd\]](#l-bnd){reference-type="eqref" reference="l-bnd"}, we get with probability $1-\delta$, $$\label{g4} {\mathbb{E}}_{\mathcal D}\big[ \ell_1(V_{N,\lambda},u) \big] \leq (L^{\mathcal A}_{\eqref{l-bnd}})^2 \frac{C_{\eqref{c-const}}(N)}{\sqrt{N}} \underline\xi_{\Sigma}^{-1/2}.$$ We conclude from [\[g4\]](#g4){reference-type="eqref" reference="g4"} that in order to derive an explicit bound on ${\mathbb{E}}_{\mathcal D}\big[ \ell_1(V_{N,\lambda},u) \big]$ in terms of $N$ we need to further analyse the constant $C_{\eqref{c-const}} (N)$, which was defined in [\[c-const\]](#c-const){reference-type="eqref" reference="c-const"}.
To do so, we first bound the so-called *information gain*, $$I(M,N,\Sigma) \coloneqq \log\left( \lambda^{-M^2} (\det V_{N,\lambda} )^{M} \right).$$ Denote the eigenvalues of $\Sigma$ by $\xi_1 \geq \ldots \geq \xi_i \geq \ldots \geq \xi_M > 0$. Using [\[eq:project_admissible\]](#eq:project_admissible){reference-type="eqref" reference="eq:project_admissible"}, Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(i) and the monotonicity of the determinant of positive definite matrices we obtain $$\label{i-m} \begin{aligned} I(M,N,\Sigma) & = M\log \left( (\det(\lambda \mathbb{I}_{M}))^{-1}\det( V_{N, \lambda} ) \right) \\ & \leq M\log \left( (\det(\lambda \mathbb{I}_{M}))^{-1}\det( N\Sigma + (C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N} + \lambda) \mathbb{I}_M) \right) \\ &= M \log \left( \frac{ \prod_{i=1}^M (\lambda+ C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N} +N\xi_i) }{\lambda^M} \right) \\ &= M \sum_{i=1}^M \log \left( \frac{ \lambda+ C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N} +N\xi_i }{\lambda} \right). \end{aligned}$$ Plugging in our choice of $\lambda = C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$, we get $$\label{g7} \begin{aligned} I(M,N,\Sigma) &\leq M^2 \log \left( 2 + \frac{ \sqrt{N} \xi_1}{C_{\ref{assum:concentration_inequ}}(\delta)} \right) \\ &\leq M^2 \left( \log \left( 2 + \frac{ \xi_1}{C_{\ref{assum:concentration_inequ}}(\delta)} \right) +\frac{1}{2} \log N \right) \\ & = M^2 \left( \log \left(2 + \frac{ \overline\xi_\Sigma} {C_{\ref{assum:concentration_inequ}}(\delta) } \right)+\frac{1}{2} \log N \right), \end{aligned}$$ where we recall that $\overline\xi_\Sigma =\xi_1$ is the maximal eigenvalue of $\Sigma$. From [\[g2\]](#g2){reference-type="eqref" reference="g2"} and [\[frob\]](#frob){reference-type="eqref" reference="frob"} we get that, $$\label{g8} \left \| (V_{N,\lambda} )^{-1/2} \right\|^2_{\mathbb{R}^{M \times M} } = \textrm{tr}({V_{N,\lambda}}^{-1}) \leq N^{-1}\textrm{tr} (\Sigma^{-1}) = N^{-1}\sum_{k=1}^M \xi^{-1}_k \leq M N^{-1}\underline\xi^{-1}_{\Sigma},$$ where we recall that $\underline\xi_{\Sigma} = \xi_M$ is the minimal eigenvalue of $\Sigma$. From [\[c-const\]](#c-const){reference-type="eqref" reference="c-const"}, [\[i-m\]](#i-m){reference-type="eqref" reference="i-m"}, [\[g7\]](#g7){reference-type="eqref" reference="g7"} and [\[g8\]](#g8){reference-type="eqref" reference="g8"} and $\lambda = C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$, it follows that, $$\label{g10} \begin{aligned} C_{\eqref{c-const}} (N) &= R\big( I(M,N,\Sigma) +\log(\delta^{-2})\big)^{1/2} + \lambda L^\mathscr{G}_{\eqref{l-bnd}}\left \| (V_{N,\lambda} )^{-1/2} \right\|_{\mathbb{R}^{M \times M} } \\ & \leq M R \left( \log \left(2 + \frac{ \overline\xi_\Sigma } {C_{\ref{assum:concentration_inequ}}(\delta) } \right)+\frac{1}{2} \log(N) + \log(\delta^{-2}) \right)^{1/2} \\ & \quad \ +L^\mathscr{G}_{\eqref{l-bnd}} C_{\ref{assum:concentration_inequ}}(\delta) M^{1/2} \underline\xi^{-1/2}_{\Sigma}. \end{aligned}$$ Item (i) is then a consequence of [\[g4\]](#g4){reference-type="eqref" reference="g4"} and [\[g10\]](#g10){reference-type="eqref" reference="g10"}.
\(ii\) Using [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"} and Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(ii) with $\lambda = C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$, it follows that $N\hat \Sigma \leq W_{N,\lambda}$. Together with the Cauchy-Schwarz inequality and [\[frob-id\]](#frob-id){reference-type="eqref" reference="frob-id"}, we deduce that with probability $1-\delta$, $$\label{tf31} \begin{aligned} \mathbb{E}_{\mathcal D}\left[ U^{\top} W_{N,\lambda}^{-1} U\right] & \leq \mathbb{E}_{\mathcal D}\left[ U^{\top} \left( N \hat \Sigma \right)^{-1} U \right] \\ &= \frac{1}{N} \mathbb{E} _{\mathcal D}\left[\left \langle \hat \Sigma ^{-1} U, U \right \rangle_{\mathbb{R}^{M\times M}} \right] \\ &\leq \frac{1}{N} \mathbb{E}_{\mathcal D} \left[ \| \hat \Sigma ^{-1} U \|_{\mathbb{R}^{M\times M}} \| U \|_{\mathbb{R}^{M\times M}}\right] \\ &\leq \frac{M}{N} \underline\xi^{-1}_{\hat \Sigma} \mathbb{E}_{\mathcal D}[\| u \|^2], \quad \textrm{for all } u \in \mathcal A_b, \end{aligned}$$ where we used $\|U\|_{\mathbb{R}^{M\times M}} \leq M^{1/2} \|u\|$ (cf. [\[t-op\]](#t-op){reference-type="eqref" reference="t-op"}) and [\[g3\]](#g3){reference-type="eqref" reference="g3"}, which gives $\|\hat \Sigma ^{-1} U \|_{\mathbb{R}^{M\times M}} \leq M^{1/2} \underline\xi^{-1}_{\hat \Sigma} \|u\|$ in the last inequality. Following similar lines as in [\[g1\]](#g1){reference-type="eqref" reference="g1"}--[\[g4\]](#g4){reference-type="eqref" reference="g4"}, using Theorem [Theorem 19](#THM:performance_bound){reference-type="ref" reference="THM:performance_bound"}(ii), Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(ii) with $\lambda = C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$ and [\[tf31\]](#tf31){reference-type="eqref" reference="tf31"}, we get with probability $1-\delta$, $$\label{g11} {\mathbb{E}}_{\mathcal D}\big[ \ell_2(W_{N,\lambda},u) \big] \leq M^{1/2} (L^{\mathcal A}_{\eqref{l-bnd}})^2 \frac{C_{\eqref{c-2-const}}(N)}{\sqrt{N}} \underline\xi_{\hat \Sigma}^{-1/2}.$$ From [\[w-def\]](#w-def){reference-type="eqref" reference="w-def"} and Assumption [Assumption 22](#assum:concentration_inequ){reference-type="ref" reference="assum:concentration_inequ"}(ii) with $\lambda = C_{\ref{assum:concentration_inequ}}(\delta) \sqrt{N}$, it also follows that $N\hat \Sigma \leq W_{N,\lambda}$, which together with [\[frob\]](#frob){reference-type="eqref" reference="frob"} gives, $$\label{g11.5} \left \| (W_{N,\lambda} )^{-1/2} \right\|^2_{\mathbb{R}^{M \times M} } = \textrm{tr}({W_{N,\lambda}}^{-1}) \leq N^{-1}\textrm{tr} (\hat \Sigma^{-1}) \leq M N^{-1}\underline\xi^{-1}_{\hat \Sigma},$$ where we recall that $\underline\xi_{\hat \Sigma}$ is the minimal eigenvalue of $\hat \Sigma$.
By using [\[g11.5\]](#g11.5){reference-type="eqref" reference="g11.5"} and repeating similar steps that were leading to [\[g10\]](#g10){reference-type="eqref" reference="g10"}, we get the following bound on the left-hand side of [\[c-2-const\]](#c-2-const){reference-type="eqref" reference="c-2-const"}, $$\label{g12} \begin{aligned} C_{\eqref{c-2-const}} (N) &\leq M^{1/2} R \left( \log \left(2 + \frac{ % \| \hat \Sigma \|_{\mathbb{R}^{M \times M} } \overline\xi_{\hat \Sigma} } {C_{\ref{assum:concentration_inequ}}(\delta) } \right)+\frac{1}{2} \log(N) + \log(\delta^{-2}) \right)^{1/2} \\ & \quad \ +L^\mathscr{K}_{\eqref{k-bnd}}C_{\ref{assum:concentration_inequ}}(\delta) M^{1/2} \underline\xi^{-1/2}_{\hat \Sigma}. \end{aligned}$$ We note that there is a difference in a factor of $\sqrt{M}$ between [\[g10\]](#g10){reference-type="eqref" reference="g10"} and [\[g12\]](#g12){reference-type="eqref" reference="g12"}, due to a difference between the powers of $\lambda$ in the logarithm terms of [\[c-const\]](#c-const){reference-type="eqref" reference="c-const"} and [\[c-2-const\]](#c-2-const){reference-type="eqref" reference="c-2-const"}, respectively. From [\[g11\]](#g11){reference-type="eqref" reference="g11"} and [\[g12\]](#g12){reference-type="eqref" reference="g12"}, (ii) follows. ◻ # Proof of Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"} {#sec-pf-opt} *Proof of Theorem [Theorem 2](#prop-opt-strat){reference-type="ref" reference="prop-opt-strat"}.* Note that the uniqueness of the optimizer of [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"} follows immediately from the convexity of the cost functional, see e.g., Theorem 2.3 in [@Lehalle-Neum18]. Next we derive the solution to [\[opt-prog\]](#opt-prog){reference-type="eqref" reference="opt-prog"}. Let $\lambda$ be $\mathcal F_{T}$-measurable, square-integrable random variable. We write the Lagrangian of the performance functional in [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"} as follows $$\label{opt-lagran} \begin{aligned} \mathcal L(u;\lambda) = \mathbb{E}\left[ \frac{1}{2} u^{\top} (G+G^{\top})u + u^{\top} A + \lambda (u^\top \boldsymbol{1} - x_0) \right]. \end{aligned}$$ The first order condition with respect to [\[opt-lagran\]](#opt-lagran){reference-type="eqref" reference="opt-lagran"} is given by $$\begin{aligned} \nabla_{u} \mathcal L(u;\lambda) =\mathbb{E}\left[ (G+G^{\top})u + A + \lambda \boldsymbol{1} \right] &=0, \label{foc} \\ u^\top \boldsymbol{1} &= x_0. \label{constr}\end{aligned}$$ Let $t_i \in \mathbb{T}$, then from [\[foc\]](#foc){reference-type="eqref" reference="foc"} and the tower property we have $$\mathbb{E}\left[ \mathbb{E}_{t_i} \left[ (G+G^{\top})u + A + \lambda \boldsymbol{1} \right] \right] =0.$$ So if $$\label{proj} \mathbb{E}_{t_i} \left[ (G+G^{\top})u + A + \lambda \boldsymbol{1} \right] =0 ,$$ holds then [\[foc\]](#foc){reference-type="eqref" reference="foc"} is satisfied. Recall that $G_{i,j}=0$ for $j>i$. 
We rewrite explicitly for each entry in [\[proj\]](#proj){reference-type="eqref" reference="proj"} (conditioned on $\mathcal F_{t_i}$) to get, $$\label{gf1} \sum_{ j \leq i } G_{i,j}u_{t_j} + \sum_{ i \leq j \leq M } G_{j,i} \mathbb{E}_{t_i} [u_{t_j}] + A_{t_i} + \mathbb{E}_{t_i}[\lambda] =0, \quad \textrm{for all } i =1,\ldots,M.$$ Using [\[g-dec\]](#g-dec){reference-type="eqref" reference="g-dec"} we obtain the following condition for the optimal strategy, $$\label{gf1.1} 2\eta u_{t_i} + \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} + \sum_{ i \leq j \leq M } \widetilde G_{j,i} \mathbb{E}_{t_i} [u_{t_j}] + A_{t_i} + \mathbb{E}_{t_i}[\lambda] =0, \quad \textrm{for all } i =1,\ldots,M.$$ In the following we fix $t_\ell\in \mathbb T$ such that $t_\ell \leq t_i$. Taking conditional expectation on both sides of [\[gf1.1\]](#gf1.1){reference-type="eqref" reference="gf1.1"}, using the tower property we get for all $\ell \leq i$, $$\label{gf2} 2\eta \mathbb{E}_{t_\ell}[u_{t_i}] + \sum_{ j < \ell } \widetilde G_{i,j} u_{t_j} + \sum_{ \ell \leq j \leq i } \widetilde G_{i,j} \mathbb{E}_{t_\ell}[u_{t_j}] + \sum_{ i \leq j \leq M } \widetilde G_{j,i} \mathbb{E}_{t_\ell} [u_{t_j}] + \mathbb{E}_{t_\ell}[ A_{t_i}] + \mathbb{E}_{t_\ell }[\lambda] =0.$$ Define $m_{t_\ell} = (m_{t_\ell}(t_1),\ldots,m_{t_\ell}(t_M))^{\top}$ such that $$m_{t_\ell}(t_i)= 1_{\{ t_\ell \leq t_i \}} \mathbb{E}_{t_\ell} [ u_{t_i}].$$ By multiplying both sides of [\[gf2\]](#gf2){reference-type="eqref" reference="gf2"} by $1_{\{t_\ell \leq t_i\}}$ we get for all $i=1,\ldots,M$: $$\label{gf3} \begin{aligned} 0&= 2\eta m_{t_\ell}(t_i) +1_{\{t_\ell \leq t_i\}} \sum_{ j < \ell } \widetilde G_{i,j} u_{t_j} + 1_{\{t_\ell \leq t_i\}} \sum_{ \ell \leq j \leq i } \widetilde G_{i,j} m_{t_\ell}(t_j) \\ &\quad + 1_{\{t_\ell \leq t_i\}} \sum_{ i \leq j \leq M } \widetilde G_{j,i} m_{t_\ell}(t_j) + 1_{\{t_\ell \leq t_i\}} \left( \mathbb{E}_{t_\ell}[ A_{t_i}] + \mathbb{E}_{t_\ell }[\lambda] \right) \\ &=f_{t_\ell}(t_i) + 2\eta m_{t_\ell}(t_i) + 1_{\{t_\ell \leq t_i\}} \sum_{ \ell \leq j \leq i } \widetilde G_{i,j} m_{t_\ell}(t_j) +1_{\{t_\ell \leq t_i\}} \sum_{ i \leq j \leq M } \widetilde G_{j,i} m_{t_\ell}(t_j) \\ &=f_{t_\ell}(t_i) + 2\eta m_{t_\ell}(t_i) + \sum_{ \ell \leq j \leq i } \widetilde G^{(\ell)}_{i,j} m_{t_\ell}(t_j) + \sum_{ i \leq j \leq M } ((\widetilde G^{\top})^{(\ell)})_{i,j} m_{t_\ell}(t_j) \\ &=f_{t_\ell}(t_i) + 2\eta m_{t_\ell}(t_i) + \widetilde G^{(\ell)}_{i, \cdot} m_{t_\ell} + ((\widetilde G^{\top})^{(\ell)})_{i,\cdot} m_{t_\ell}, \end{aligned}$$ where $\widetilde G^{(\ell)}_{i, \cdot}$ is the $i$-th row of the matrix $\widetilde G^{(\ell)}$ which was introduced in [\[tile-g-i\]](#tile-g-i){reference-type="eqref" reference="tile-g-i"} and $$\label{f-proc} f_{t_\ell}(t_i) := 1_{\{t_\ell \leq t_i\}} \sum_{ j < \ell } \widetilde G_{i,j} u_{t_j} + 1_{\{t_\ell \leq t_i\}} \left( \mathbb{E}_{t_\ell}[ A_{t_i}] + \mathbb{E}_{t_\ell }[\lambda] \right).$$ Let $f_{t_\ell} = (f_{t_\ell}(t_1),\ldots,f_{t_\ell}(t_M))^{\top}$, then from [\[gf3\]](#gf3){reference-type="eqref" reference="gf3"} we get, $$\label{gf4} \left( 2\eta \mathbb I_M+ \widetilde G^{(\ell)} + (\widetilde G^{\top})^{(\ell)} \right) m_{t_\ell} = -f_{t_\ell}.$$ We present the following technical lemma, which will be proved at the end of this section. **Lemma 33**. 
*The matrix $$D^{(\ell)} = 2\eta \mathbb I_M+ \big(\widetilde G + \widetilde G^{\top} \big)^{(\ell)}$$ is invertible for any $\ell=1,\ldots,M$.* From [\[gf4\]](#gf4){reference-type="eqref" reference="gf4"} and Lemma [Lemma 33](#lemma-d-inv){reference-type="ref" reference="lemma-d-inv"} we get, $$\label{gf5} m_{t_\ell} = - \left( D^{(\ell)} \right)^{-1} f_{t_\ell}, \quad \textrm{for all } \ell =1,\ldots,M.$$ We now plug [\[gf5\]](#gf5){reference-type="eqref" reference="gf5"} into [\[gf1.1\]](#gf1.1){reference-type="eqref" reference="gf1.1"} to get an equation for $u_{t_i}$, $$\label{gf6} 2\eta u_{t_i} + \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} - \sum_{ i \leq j \leq M } \widetilde G_{j,i} (D^{(i)})^{-1}_{j, \cdot} f_{t_i} + A_{t_i} + \mathbb{E}_{t_i}[\lambda] =0.$$ We first consider the second and third terms on the left-hand side of [\[gf6\]](#gf6){reference-type="eqref" reference="gf6"}. Using [\[f-proc\]](#f-proc){reference-type="eqref" reference="f-proc"} we get, $$\label{gf7} \begin{aligned} & \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} - \sum_{ i \leq j \leq M } \widetilde G_{j,i} (D^{(i)})^{-1}_{j, \cdot} f_{t_i} \\ &= \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} - \sum_{ i \leq j \leq M } \widetilde G_{j,i} \sum_{k=1}^M (D^{(i)})^{-1}_{j, k} f_{t_i}(t_k) \\ & = \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} - \sum_{ i \leq j \leq M } \widetilde G_{j,i} \sum_{k=1}^M (D^{(i)})^{-1}_{j, k} \left( 1_{\{t_i \leq t_k\}} \sum_{ \ell < i } \widetilde G_{k,\ell} u_{t_\ell} + 1_{\{t_i \leq t_k\}} \big( \mathbb{E}_{t_i}[ A_{t_k}] + \mathbb{E}_{t_i }[\lambda] \big)\right) \\ &= \sum_{ j \leq i } \widetilde G_{i,j}u_{t_j} - \sum_{ i \leq j \leq M } \widetilde G_{j,i} \sum_{k=1}^M (D^{(i)})^{-1}_{j, k} 1_{\{t_i \leq t_k\}} \sum_{ \ell < i } \widetilde G_{k,\ell} u_{t_\ell} \\ &\quad - \sum_{ i \leq j \leq M } \widetilde G_{j,i} \sum_{k=1}^M (D^{(i)})^{-1}_{j, k} 1_{\{t_i \leq t_k\}} \left( \mathbb{E}_{t_i}[ A_{t_k}] + \mathbb{E}_{t_i }[\lambda] \right) \\ &=\sum_{j\leq i} K_{i,j}u_{t_j} -\sum_{ i \leq j \leq M } \widetilde G_{j,i} \sum_{k=1}^M (D^{(i)})^{-1}_{j, k} 1_{\{t_i \leq t_k\}} \left( \mathbb{E}_{t_i}[ A_{t_k}] + \mathbb{E}_{t_i }[\lambda] \right), \end{aligned}$$ where $K$ is the following lower triangular matrix, $$\label{k-def} \begin{aligned} K_{i,\ell} & \coloneqq \widetilde G_{i,\ell} -1_{\{t_\ell < t_i\}} \sum_{i \leq q \leq M} \widetilde G_{q,i} \sum_{k=1}^{M} (D^{(i)})^{-1}_{q,k} \mathrm{1}_{\lbrace t_i \leq t_k \rbrace} \widetilde G_{k,\ell} \\ & = \widetilde G_{i,\ell} - \mathrm{1}_{\lbrace t_\ell < t_i \rbrace} \sum_{k=1}^{M} \sum_{1 \leq q \leq M} \widetilde G_{q,i}(D^{(i)})^{-1}_{q,k} \widetilde G^{(i)}_{k,\ell} \\ & = \widetilde G_{i,\ell} - \mathrm{1}_{\lbrace t_\ell < t_i \rbrace} \sum_{k=1}^{M}(\widetilde G^{\top} (D^{(i)})^{-1})_{i,k}\widetilde G^{(i)}_{k,\ell} \\ & = \widetilde G_{i,\ell} - \mathrm{1}_{\lbrace t_\ell < t_i \rbrace} (\widetilde G^{\top} (D^{(i)})^{-1} \widetilde G^{(i)})_{i,\ell }, \end{aligned}$$ which agrees with [\[k-def-og\]](#k-def-og){reference-type="eqref" reference="k-def-og"}.
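As a numerical sanity check on [\[k-def\]](#k-def){reference-type="eqref" reference="k-def"} and Lemma [Lemma 33](#lemma-d-inv){reference-type="ref" reference="lemma-d-inv"}, the following Python sketch builds $D^{(\ell)}$ and $K$ for a small toy instance and verifies that each $D^{(\ell)}$ is invertible, that $K$ is lower triangular, and that the diagonal of $2\eta \mathbb{I}_M + K$ is positive. The truncation $(\cdot)^{(\ell)}$ is defined in [\[tile-g-i\]](#tile-g-i){reference-type="eqref" reference="tile-g-i"}, which is not reproduced here; the code instead follows the description given in the proof of Lemma [Lemma 33](#lemma-d-inv){reference-type="ref" reference="lemma-d-inv"} and the indicator $\mathrm{1}_{\lbrace t_i \leq t_k \rbrace}$ appearing in [\[k-def\]](#k-def){reference-type="eqref" reference="k-def"}, and all numerical values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
M, eta = 6, 1.0
Gtil = np.tril(rng.uniform(0.0, 0.3, size=(M, M)))       # \widetilde G, lower triangular

def D_ell(ell):
    """D^{(ell)} (1-based ell): 2*eta*I + Gtil + Gtil^T with the off-diagonal
    entries of the first ell-1 rows set to zero, following the description
    in the proof of Lemma 33 (an assumption on the truncation (.)^{(ell)})."""
    D = 2 * eta * np.eye(M) + Gtil + Gtil.T
    for k in range(ell - 1):
        D[k, :k] = 0.0
        D[k, k + 1:] = 0.0
    return D

def Gtil_trunc(i):
    """\widetilde G^{(i)} (1-based i): rows k < i set to zero, read off from the
    indicator 1_{t_i <= t_k} in the second line of the definition of K."""
    out = Gtil.copy()
    out[: i - 1, :] = 0.0
    return out

# K_{i,l} = Gtil_{i,l} - 1_{l < i} (Gtil^T (D^{(i)})^{-1} Gtil^{(i)})_{i,l}
K = np.zeros((M, M))
for i in range(1, M + 1):
    corr = Gtil.T @ np.linalg.inv(D_ell(i)) @ Gtil_trunc(i)
    for l in range(1, M + 1):
        K[i - 1, l - 1] = Gtil[i - 1, l - 1] - (l < i) * corr[i - 1, l - 1]

# Sanity checks mirroring the statements in the text.
assert all(abs(np.linalg.det(D_ell(ell))) > 1e-12 for ell in range(1, M + 1))  # Lemma 33
assert np.allclose(np.triu(K, 1), 0.0)                    # K is lower triangular
assert np.all(np.diag(2 * eta * np.eye(M) + K) > 0)       # diagonal is 2*eta + Gtil_jj > 0
print("K =\n", np.round(K, 3))
```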
From [\[gf6\]](#gf6){reference-type="eqref" reference="gf6"} and [\[gf7\]](#gf7){reference-type="eqref" reference="gf7"} we get, $$\label{gf8} \begin{aligned} 2\eta u_{t_i}+ \sum_{j\leq i} K_{i,j} u_{t_j}& = \sum_{k=1}^M \sum_{ 1 \leq j \leq M } \widetilde G_{j,i} (D^{(i)})^{-1}_{j, k} 1_{\{t_i \leq t_k\}} \left( \mathbb{E}_{t_i}[ A_{t_k}] + \mathbb{E}_{t_i }[\lambda] \right) - A_{t_i} - \mathbb{E}_{t_i}[\lambda] \\ & = \sum_{k=1}^M ( \widetilde G^{\top}(D^{(i)})^{-1})_{i,k} 1_{\{t_i \leq t_k\}} \left( \mathbb{E}_{t_i}[ A_{t_k}] + \mathbb{E}_{t_i }[\lambda] \right) - A_{t_i} - \mathbb{E}_{t_i}[\lambda] \\ & = -A_{t_i} + \sum_{k=1}^M ( \widetilde G^{\top} (D^{(i)})^{-1})_{i,k} 1_{\{t_i \leq t_k\}} \mathbb{E}_{t_i}[ A_{t_k}] \\ &\quad - \left(1-\sum_{k=1}^M ( \widetilde G ^{\top} (D^{(i)})^{-1})_{i,k}1_{\{t_i \leq t_k\}} \right) \mathbb{E}_{t_i}[\lambda] . \end{aligned}$$ We define $g=(g_{t_1},\ldots,g_{t_M})^\top$ and $a=(a_{t_1},\ldots,a_{t_M})^\top$ as follows, $$\begin{aligned} g_{t_i} &= -A_{t_i} +\sum_{k=1}^M ( \widetilde G^{\top} (D^{(i)})^{-1})_{i,k} 1_{\{t_i \leq t_k\}} \mathbb{E}_{t_i}[ A_{t_k}], \\ a_{t_i} &= -1+\sum_{k=1}^M ( \widetilde G^{\top} (D^{(i)})^{-1})_{i,k}1_{\{t_i \leq t_k\}}. \end{aligned}$$ Writing [\[gf8\]](#gf8){reference-type="eqref" reference="gf8"} in matrix form gives $$\label{gf9} (2\eta \mathbb{I}_M + K) u = g + a^\top \mathbb{I}_M \mathbb{E}_{\cdot }[\lambda],$$ where we introduce the notation $$\mathbb{E}_{\cdot }[\lambda] = (\mathbb{E}_{t_1}[\lambda], \ldots,\mathbb{E}_{t_M}[\lambda])^\top,$$ so that $$a^\top \mathbb{I}_M \mathbb{E}_{\cdot }[\lambda] = (a_{t_1} \mathbb{E}_{t_1 }[\lambda], \ldots, a_{t_M} \mathbb{E}_{t_M }[\lambda])^{\top}.$$ From [\[k-def\]](#k-def){reference-type="eqref" reference="k-def"} it follows that $2\eta \mathbb{I}_M + K$ is a lower triangular matrix and $$(2\eta \mathbb{I}_M + K)_{j,j} = 2\eta + \widetilde G_{j,j} > 0, \quad \textrm{for all } j=1,\ldots,M,$$ hence it is invertible. We therefore get from [\[gf9\]](#gf9){reference-type="eqref" reference="gf9"}, $$\label{gf10} u = (2\eta \mathbb{I}_M + K)^{-1} (g + a^\top \mathbb{I}_M \mathbb{E}_{\cdot }[\lambda]).$$ Let $$\widetilde A = (2\eta \mathbb{I}_M + K)^{-1} g,$$ and note that $\widetilde A_{t_i}$ is $\mathcal F_{t_i}$-measurable since $(2\eta \mathbb{I}_M + K)$ and hence $(2\eta \mathbb{I}_M + K)^{-1}$ are lower triangular matrices. We can rewrite [\[gf10\]](#gf10){reference-type="eqref" reference="gf10"} as follows: $$u_{t_i} = \widetilde A_{t_i} + \sum_{j \leq i} (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j} \mathbb{E}_{t_j }[\lambda].$$ By using the tower property, we get for any $r< i$, $$\begin{aligned} {\mathbb{E}}_{t_r} [u_{t_i}] &= {\mathbb{E}}_{t_r} [\widetilde A_{t_i}] + \sum_{r \leq j \leq i} (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j} \mathbb{E}_{t_r }[\lambda]+\sum_{j < r } (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j} \mathbb{E}_{t_j }[\lambda].\end{aligned}$$ Summing over $i$ and using $\sum_{i=1}^M{\mathbb{E}}_{t_r} [u_{t_i}] =x_0$, which follows from [\[constr\]](#constr){reference-type="eqref" reference="constr"}, we get $$\begin{aligned} x_0 &= \sum_{i=1}^M {\mathbb{E}}_{t_r} [\widetilde A_{t_i}] + \mathbb{E}_{t_r }[\lambda]\sum_{i=1}^M \sum_{r \leq j \leq M} (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j} +\sum_{i=1}^M \sum_{ j <r }(2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j} \mathbb{E}_{t_j }[\lambda].
\end{aligned}$$ It follows that $\mathbb{E}_{t_r } [\lambda]$ satisfies the following forward equation: $$\begin{aligned} c_r +b_r \mathbb{E}_{t_r }[\lambda]+ \sum_{ j <r } h_j \mathbb{E}_{t_j }[\lambda] =x_0, \quad r= 1, \ldots, M,\end{aligned}$$ where $$\begin{aligned} c_r &= \sum_{i=1}^M {\mathbb{E}}_{t_r} [\widetilde A_{t_i}] , \quad b_{r} =\sum_{i=1}^M \sum_{r \leq j \leq M} (2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j}, \quad h_j = \sum_{i=1}^M(2\eta \mathbb{I}_M + K)^{-1}_{i,j} a_{t_j},\end{aligned}$$ which admits the following solution: $$\mathbb{E}_{t_{r+1}}[\lambda]= \frac{(b_r-h_r) \mathbb{E}_{t_{r}}[\lambda]- (c_{r+1}-c_r)}{b_{r+1}} , \quad \mathbb{E}_{t_1}[\lambda] = \frac{x_0-c_1}{b_1}.$$ ◻ *Proof of Lemma [Lemma 33](#lemma-d-inv){reference-type="ref" reference="lemma-d-inv"}.* From [\[g-def\]](#g-def){reference-type="eqref" reference="g-def"} and [\[g-dec\]](#g-dec){reference-type="eqref" reference="g-dec"} it follows that $\widetilde G + \widetilde G^{\top}$ is a symmetric nonnegative definite matrix. Therefore, $D^{(1)}$ is symmetric positive definite, hence it is invertible, or equivalently its determinant is nonzero. Now, $D^{(2)}$ is the same as $D^{(1)}$ except that the off-diagonal entries of the first row, $D^{(2)}_{1,j}$, for $j =2, \ldots, M$ are zero. In general, for $\ell \geq 2$, the entries $D^{(\ell)}_{k,j}$ with $k= 1, \ldots, \ell-1$ and $j \neq k$ are zero, while the remaining entries of $D^{(\ell)}$ coincide with those of $D^{(1)}$. Therefore, the determinant of $D^{(2)}$ can be expressed as $$\mathrm{det}(D^{(2)}) = D^{(2)}_{1,1}\mathrm{det}(D^{(2)}_{2:M,2:M}),$$ where $D^{(2)}_{2:M,2:M}$ is the symmetric $\mathbb{R}^{(M-1) \times (M-1)}$-matrix obtained by deleting the first row and first column of $D^{(2)}$ (or equivalently the first row and first column of $D^{(1)}$). Now, observe that $$D^{(2)}_{1,1}\mathrm{det}(D^{(2)}_{2:M,2:M}) \neq 0,$$ since $D^{(2)}_{1,1} \neq 0$ and $\mathrm{det}(D^{(2)}_{2:M,2:M}) \neq 0$, which is a consequence of $x^{\top}D^{(1)} x > 0$, for all $x \in \mathbb{R}^{M}\setminus \{0\}$. To see this, set the first component of $x \in \mathbb{R}^M$ equal to zero, which implies $\tilde{x}^{\top} D^{(2)}_{2:M,2:M} \tilde{x} > 0$ for any nonzero $\tilde{x} \in \mathbb{R}^{M-1}$, and thus $D^{(2)}_{2:M,2:M}$ is invertible. A similar argument holds for the matrices $D^{(\ell)}$, $\ell >2$. ◻ # Illustrative examples {#sec-examples} In the following example, we demonstrate how spurious correlation (i) in [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} could lead to unfavourable costs. **Example 1** (Spurious correlation for propagator estimation). *As described in Section [1](#sec-mot-res){reference-type="ref" reference="sec-mot-res"}, we consider the scenario of underestimating the price impact, and we capture the estimation error between the true propagator $G^{\star}$ and the estimator $\hat G$ as follows, $$\label{g-err} G^{\star}- \hat G > \Delta \mathbb{I}_M,$$ for some constant $\Delta>0$. The above inequality is understood as a partial order defined by the convex cone of positive semi-definite matrices (see Definition [Definition 21](#ineq-matrix){reference-type="ref" reference="ineq-matrix"}). Note that in [\[g-err\]](#g-err){reference-type="eqref" reference="g-err"} we neglect the error of off-diagonal elements for convenience.
Including these error terms would clearly make the spurious correlation larger.* *From [\[sub-decom\]](#sub-decom){reference-type="eqref" reference="sub-decom"} it follows that the cost created by the spurious correlation of a greedy strategy $\hat u$ is given by, $$\label{diff-j} \begin{aligned} \textrm{SpurCor} &= J(\hat{u};G^{\star}) - J(\hat{u};\hat{G}) \\ &= \mathbb{E}\left[ \sum_{i=1}^{M} \sum_{j=1 }^{i} ( G^{\star}_{i,j} - \hat{G}_{i,j}) \hat{u}_{t_j} \hat{u}_{t_i} \right] \\ &= {\mathbb{E}}\left[\langle ( G^{\star} - \hat{G} ) \hat u, \hat u \rangle \right] \\ & \geq \Delta \mathbb{E}[\|\hat{u}\|^2]. \end{aligned}$$ Note, however, that underestimating the propagator $G^{\star}$ leads to an underestimation of price impact costs, which results in a faster execution strategy and hence creates additional trading costs. This monotonicity relation is demonstrated in Figure 4 of [@AJ-N-2022], where it is shown that smaller propagators allow for a faster execution rate. Next, we derive a lower bound for the spurious correlation contribution in [\[diff-j\]](#diff-j){reference-type="eqref" reference="diff-j"} by considering a special case that will help to simplify the computations and keep this example tractable. We assume that the price impact is temporary, i.e., $\hat G = \hat \kappa \mathbb{I}_M$, and $G^\star = \kappa \mathbb{I}_M$, for some constants $0<\hat \kappa < \kappa$. Note that in [\[diff-j\]](#diff-j){reference-type="eqref" reference="diff-j"} we have $\Delta = \kappa - \hat \kappa$. We further assume that the unobserved asset price has the following dynamics, $$P_t = \mu t + \sigma W_t,$$ where $W_t$ is a Brownian motion and $\mu, \sigma> 0$ are constants. Specifically, $\mu$ is a trend that can be regarded as a deterministic signal. In this case one can solve the continuous time analog of the execution problem [\[per-func\]](#per-func){reference-type="eqref" reference="per-func"} (see [@cartea15book Exercise E.6.3, Chapter 6.9]) to get, $$u_t^{\star} = \frac{x_0}{T} + \frac{\mu}{4\kappa}(T-2t), \quad 0\leq t \leq T.$$ In [@collin.al.20 Table 2, Section 4.3], the price impact coefficient was estimated from a proprietary dataset of real transactions executed by a large investment bank, where it was found that $\kappa \approx 10^{-10}$. Since in a bullish market a daily stock return of $2\%$ is quite common, we have $\mu T \approx 0.02$. Therefore, if we wish to execute $x_0=100$ stocks within one day of $8$ hours of trading and the basic time unit is seconds, we have $T=28800$ and it is clear that $\frac{x_0}{T} \ll \frac{\mu}{2\hat \kappa}(T-2t)$, except for a small interval $T/2 \pm O(\kappa/\mu)$.
Outside this interval we have, $$\label{ratio-strat} \frac{\hat u_t}{u_t^{\star}} = \frac{ \frac{x_0}{T} + \frac{\mu}{2\hat \kappa}(T-2t)}{ \frac{x_0}{T} + \frac{\mu}{2\kappa}(T-2t)} \approx \frac{\kappa}{\hat \kappa}.$$ Hence from [\[ratio-strat\]](#ratio-strat){reference-type="eqref" reference="ratio-strat"} and [\[diff-j\]](#diff-j){reference-type="eqref" reference="diff-j"} it follows that $$\label{exmp-spur} \begin{aligned} \textrm{SpurCor} &= {\mathbb{E}}\left[\langle ( G^{\star} - \hat{G} ) \hat u, \hat u \rangle \right] \geq (\kappa -\hat \kappa) \int_{0}^T (\hat u_t)^2 \mathrm{d}t \approx (\kappa -\hat \kappa) \left( \frac{\kappa}{\hat{\kappa}}\right)^2 \int_{0}^T (u^{\star}_t)^2 \mathrm{d}t, \end{aligned}$$ where we have neglected the aforementioned small interval, in which $\hat u_t/u_t^{\star} \approx 1$ and the contribution is of smaller order.* *We learn from [\[exmp-spur\]](#exmp-spur){reference-type="eqref" reference="exmp-spur"} that underestimating the price impact has an amplifying effect of order $(\kappa/\hat\kappa)^2 >1$ on the transaction costs when a greedy strategy is implemented.* *In fact, in this simple case we can compare the pessimistic strategy $u^{(1)}$ to the greedy strategy $\hat u$. Recall that, $$J(u;\kappa \mathbb{I}_M ) = \kappa \int_0^Tu^2_t \mathrm{d}t +\mu \int_0^T u_t \mathrm{d}t ,$$ so the (optimal) uncertainty quantifier in Definition [Definition 1](#def-quant){reference-type="ref" reference="def-quant"} takes the form $\Gamma(u)= \Delta \int_0^T u_t^2 \mathrm{d}t$ and the pessimistic cost functional in [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"} is $$J(u;\hat\kappa \mathbb{I}_M ) = (\hat\kappa +\Delta) \int_0^Tu^2_t \mathrm{d}t+ \mu \int_0^T u_t \mathrm{d}t.$$ The minimiser $u^{(1)}$ of $J(u;\hat\kappa \mathbb{I}_M )$ is given by $$u_t^{(1)} = \frac{x_0}{T} + \frac{\mu}{4(\hat \kappa+\Delta)}(T-2t), \quad 0\leq t \leq T.$$ Since $\hat \kappa+\Delta = \kappa$, we get $u^{(1)}= u^{\star}$, so the pessimistic strategy outperforms the greedy strategy $\hat u$, due to the optimality of $u^{\star}$ with respect to $J( \cdot ;\kappa \mathbb{I}_M)$. Note that in practice we do not know $\kappa$ (or $\Delta$). Therefore, we need to choose a $\Gamma$ which may not be as sharp as in this example. In Section [3](#sec-numerics){reference-type="ref" reference="sec-numerics"} we present further examples of spurious correlation costs in the presence of trading signals and more general propagators.* *The dependence of the trading rate on the price impact coefficient was made precise in [@Mich-Neum-MK] for the optimal portfolio choice problem, which serves as a good proxy for the model defined in Section [2](#sec-l-p){reference-type="ref" reference="sec-l-p"} when the horizon is long. In order to avoid technicalities, we present the idea using the optimal strategy from [@Mich-Neum-MK], since the steps in [\[diff-j\]](#diff-j){reference-type="eqref" reference="diff-j"} also apply to the problem studied therein. For simplicity we further assume that there is no signal, i.e., $A\equiv 0$. Examples of spurious correlation in execution problems with signals are presented in Section [3](#sec-numerics){reference-type="ref" reference="sec-numerics"}.* *We assume that $\hat G = \hat \lambda\mathbb{I}$ and $G = \lambda\mathbb{I}$, so that $\Delta = \lambda- \hat \lambda$. By taking $N=1$ in [@Mich-Neum-MK eq.
(4.3)] we get for small $\hat \lambda$, $$\label{u-port} \hat u_t \approx X_0 M^{\hat \lambda}_{rate} e^{-M^{\hat \lambda}_{rate}t}, \quad \textrm{where } M^{\hat \lambda}_{rate} = \sqrt{ \frac{\gamma}{2\hat \lambda}} + O(1).$$ Here $\gamma>0$ is a risk aversion coefficient and $X_0$ is the initial inventory.* *From [\[diff-j\]](#diff-j){reference-type="eqref" reference="diff-j"} and [\[u-port\]](#u-port){reference-type="eqref" reference="u-port"} it follows that $$\textrm{SpurCor} \gtrsim (\lambda-\hat \lambda) \int_{0}^{\infty} e^{-\rho t} \hat u_t^2 \mathrm{d}t \approx X_0^2 (\lambda- \hat \lambda) \frac{(M^{\hat \lambda}_{rate})^2}{ \rho + 2M^{\hat \lambda}_{rate}}.$$ Using the asymptotic relation in [\[u-port\]](#u-port){reference-type="eqref" reference="u-port"} we get, $$\textrm{SpurCor} \gtrsim (\lambda-\hat \lambda) \left( \frac{\lambda}{\hat\lambda} \right) X_0^2 \frac{(M^{ \lambda}_{rate})^2}{ \rho + 2M^{ \lambda}_{rate}\sqrt{\frac{\lambda}{\hat\lambda}}}.$$ Plugging in typical values from Table 1 and Figure 2 of [@Mich-Neum-MK]: $$\lambda=1.88\times 10^{-10}, \quad \rho = 4\times 10^{-5}, \quad M^{ \lambda}_{rate} =0.081$$ and assuming no more than $20\%$ deviation in the estimation, i.e., $\lambda/\hat\lambda\leq 1.2$, although this could be much larger for such small values of $\lambda$, we have $$\label{exmp-spur} \textrm{SpurCor} \gtrsim (\lambda-\hat \lambda) \left( \frac{\lambda}{\hat\lambda} \right)^{1/2} \|u\|_{L^2}^2 .$$* We recall that $u^{(1)}$ denotes the optimal strategy with respect to the pessimistic cost functional [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"}. In the following simple example, we show how our choice of $\ell_1(V_{N,\lambda}, \cdot)$ in [\[ell-1-def\]](#ell-1-def){reference-type="eqref" reference="ell-1-def"} as a $\delta$-uncertainty quantifier (see [\[j-h-def\]](#j-h-def){reference-type="eqref" reference="j-h-def"}) helps to keep the pessimistic optimal strategy $u^{(1)}$ close to trajectories which are frequently observed in the dataset. **Example 2** (Effect of $\ell_1$-penalty function on order execution). *The role of the penalty $\ell_1$ in [\[ell-1-def\]](#ell-1-def){reference-type="eqref" reference="ell-1-def"} is to discourage strategies from visiting out-of-distribution actions. We will illustrate this effect in the following simple example. Consider a dataset of $N$ buy metaorders, such that each order is executed over two even time bins, that is, $M=2$. The dataset contains the following strategies $u^{(i)} = (u^{(i)}_1, u_2^{(i)})^\top$, for $i=1,\ldots,N$. It is well known that in buy execution problems without signals we should have $$\label{asmp} u^{(i)}_1 \gg u^{(i)}_2 \geq 0, \quad \textrm{for all } i=1,\ldots,N,$$ due to risk aversion terms and inventory constraints (see e.g., [@cartea15book Chapter 6.5, Figure 6.2]). For our example we choose for simplicity $\lambda=1$ in $V_{N,\lambda}$ and hence $$V_{N, 1 } = \sum_{i=1}^N u^{(i)}(u^{(i)})^\top + \mathbb{I}_M = \begin{pmatrix} \sum_{i=1}^N (u_1^{(i)})^2 +1 & \sum_{i=1}^N u_1^{(i)} u_2^{(i)} \\ \sum_{i=1}^N u_1^{(i)} u_2^{(i)} & \sum_{i=1}^N (u_2^{(i)})^2 +1 \end{pmatrix}.$$ It follows that we can write, $$V_{N,1} = \begin{pmatrix} a_{1} & a_2 \\ a_2 &a_3 \end{pmatrix}, \quad \textrm{for some } 0< a_3 <a_2 < a_1,$$ where the fact that $a_3 <a_2$ follows from $u^{(i)}_1 \gg u^{(i)}_2$. Our assumptions also imply $a_1a_3 >a_2^2$.
Hence, $V_{N,1}$ is invertible and we have, $$V^{-1}_{N,1} = \frac{1}{a_1a_3 - a_2^2} \begin{pmatrix} a_{3} & - a_2 \\ - a_2 & a_1 \end{pmatrix} =: \begin{pmatrix} b_{1} & b_2 \\ b_2 &b_3 \end{pmatrix} \quad \textrm{ where } b_2<0<b_1<b_3.$$ Now let $v=(v_1,v_2)^\top$ and consider $$\ell^2_1(v, V^{-1}_{N,1}) = v^\top V^{-1}_{N,1} v = b_1 v_1^2 + 2b_2v_1v_2 + b_3v_2^2.$$ Since according to our assumption we expect $b_1\ll b_3$, it follows that $\ell_1$ will penalize strategies $v$ where $v_1 \ll v_2$, which are not covered by the dataset (see [\[asmp\]](#asmp){reference-type="eqref" reference="asmp"}).* # Volterra kernel estimation for a noisy dataset {#sec-examples_noisy} In this example, we construct the trading dataset as follows: the strategies $u_{t_i}^{(n)}$ for each $n \in \lbrace 1, \ldots, N\rbrace$ and $i \in \lbrace 1, \ldots, M \rbrace$ are i.i.d. samples from a normal distribution with mean 50 and standard deviation $9$. We consider the propagator $G^{\star}_{i,j} = \frac{\kappa}{(t_i - t_j +1)^{\beta} }$ for $i \geq j$, with $\kappa = 0.01$ and $\beta = 0.4$, and apply [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} to estimate it using observed price trajectories. For the estimation procedure, we set $\lambda = 10^{-3}$, $N= 252$, $M =78$. Numerical results are presented in Figure [12](#FIGURE LABEL_e){reference-type="ref" reference="FIGURE LABEL_e"}, which indicate that [\[eq:G_n\_lambda_volterra\]](#eq:G_n_lambda_volterra){reference-type="eqref" reference="eq:G_n_lambda_volterra"} recovers the true propagator reasonably well. ![ The exact propagator $G^{\star}$ (left), the estimated propagator (right) and the relative error (bottom) of the Volterra estimator by observing noisy trading strategies. ](heatmap_shadow_1_final "fig:"){#FIGURE LABEL_e width="0.48\\linewidth"} ![](heatmap_est_rand_dataset "fig:"){width="0.48\\linewidth"} ![](heatmap_res_rand_dataset "fig:"){width="0.48\\linewidth"} Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. *Advances in neural information processing systems*, 24, 2011. E. Abi Jaber and E. Neuman. Optimal liquidation with signals: the general propagator case. *arXiv:2211.00447*, 2022. E. Abi Jaber, E. Neuman, and M. Voss. Equilibrium in functional stochastic games with mean-field interaction. *arXiv:2306.05433*, 2023. A. Alfonsi and A. Schied. Capacitary measures for completely monotone kernels via singular control. *SIAM Journal on Control and Optimization*, 51 (2): 1758--1780, 2013. doi: 10.1137/120862223. URL `https://doi.org/10.1137/120862223`. A. Alfonsi, A. Schied, and A. Slynko. Order book resilience, price manipulation, and the positive portfolio problem. *SIAM Journal on Financial Mathematics*, 3 (1): 511--533, 2012. ISSN 1945-497X. doi: 10.1137/110822098. URL `http://dx.doi.org/10.1137/110822098`. R. Almgren and N. Chriss. Value under liquidation. *Risk*, 12: 61--63, 1999. R. Almgren and N. Chriss. Optimal execution of portfolio transactions. *Journal of Risk*, 3 (2): 5--39, 2000. J. F. Bonnans and A. Shapiro. *Perturbation analysis of optimization problems*.
Springer Series in Operations Research and Financial Engineering. Springer Science & Business Media, 1 edition, 2013. J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart. Fluctuations and response in financial markets: the subtle nature of 'random' price changes. *Quantitative finance*, 4 (2): 176--190, 2004. J.-P. Bouchaud, J. Bonart, J. Donier, and M. Gould. *Trades, Quotes and Prices: Financial Markets Under the Microscope*. Cambridge University Press, 2018. doi: 10.1017/9781316659335. D. Brigo, F. Graceffa, and E. Neuman. Price impact on term structure. *Quantitative Finance*, 22 (1): 171--195, 01 2022. doi: 10.1080/14697688.2021.1983201. URL `https://doi.org/10.1080/14697688.2021.1983201`. Á. Cartea, S. Jaimungal, and J. Penalva. *Algorithmic and High-Frequency Trading (Mathematics, Finance and Risk)*. Cambridge University Press, 1 edition, Oct. 2015. ISBN 1107091144. URL `http://www.amazon.com/exec/obidos/redirect?tag=citeulike07-20&path=ASIN/1107091144`. J. Chang, M. Uehara, D. Sreenivas, R. Kidambi, and W. Sun. Mitigating covariate shift in imitation learning via offline data with partial coverage. *Advances in Neural Information Processing Systems*, 34: 965--979, 2021. P. Collin-Dufresne, K. Daniel, and M. Saglam. Liquidity regimes and optimal dynamic asset allocation. *Journal of Financial Economics*, 136 (2): 379--406, 2020. R. Cont, E. Neuman, and A. Micheli. Fast and slow optimal trading with exogenous information. *arXiv:2210.01901*, 2023. G. Davies. The great bull market reaches its 10th birthday. Available at `https://www.ft.com/content/4f941406-5157-11e9-b401-8d9ef1626294`., 2019. S. Diamond and S. Boyd. : A Python-embedded modeling language for convex optimization. *Journal of Machine Learning Research*, 17 (83): 1--5, 2016. M. Forde, L. Sánchez-Betancourt, and B. Smith. Optimal trade execution for Gaussian signals with power-law resilience. *Quantitative Finance*, 22 (3): 585--596, 03 2022. doi: 10.1080/14697688.2021.1950919. URL `https://doi.org/10.1080/14697688.2021.1950919`. A. Fruth, T. Schöneborn, and M. Urusov. Optimal trade execution and price manipulation in order books with time-varying liquidity. *Mathematical Finance*, 24: 651--695, 2014. URL `http://ssrn.com/paper=1925808`. J. Gatheral. No-dynamic-arbitrage and market impact. *Quantitative Finance*, 10 (7): 749--759, 08 2010. doi: 10.1080/14697680903373692. URL `https://doi.org/10.1080/14697680903373692`. J. Gatheral, A. Schied, and A. Slynko. Exponential resilience and decay of market impact. In F. Abergel, B. Chakrabarti, A. Chakraborti, and M. Mitra, editors, *Econophysics of Order-driven Markets*, pages 225--236. Springer-Verlag, 2011. J. Gatheral, A. Schied, and A. Slynko. Transient linear price impact and Fredholm integral equations. *Mathematical Finance*, 22 (3): 445--474, 2012. doi: https://doi.org/10.1111/j.1467-9965.2011.00478.x. URL `https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9965.2011.00478.x`. I. Gohberg, S. Goldberg, and M. A. Kaashoek. *Classes of linear operators*, volume 49. Birkhauser, 1990. N. Jegadeesh. Evidence of predictable behavior of security returns. *The Journal of Finance*, 45 (3): 881--898, 1990. ISSN 00221082, 15406261. URL `http://www.jstor.org/stable/2328797`. Y. Jin, Z. Yang, and Z. Wang. Is pessimism provably efficient for offline RL? In *International Conference on Machine Learning*, pages 5084--5096. PMLR, 2021. R. Kidambi, A. Rajeswaran, P. Netrapalli, and T. Joachims. Morel: Model-based offline reinforcement learning. In H. Larochelle, M. Ranzato, R. 
Hadsell, M. Balcan, and H. Lin, editors, *Advances in Neural Information Processing Systems*, volume 33, pages 21810--21823. Curran Associates, Inc., 2020. URL `https://proceedings.neurips.cc/paper_files/paper/2020/file/f7efa4f864ae9b88d43527f4b14f750f-Paper.pdf`. C. A. Lehalle and E. Neuman. Incorporating signals into optimal trading. *Finance and Stochastics*, 23 (2): 275--311, 2019. doi: 10.1007/s00780-019-00382-7. URL `https://doi.org/10.1007/s00780-019-00382-7`. J. Li, C. Tang, M. Tomizuka, and W. Zhan. Dealing with the unknown: Pessimistic offline reinforcement learning. *arXiv:2111.05440*, 2021. A. Madhavan. The Russell reconstitution effect. *Financial Analysts Journal*, 59 (4): 51--64, 2003. doi: 10.2469/faj.v59.n4.2545. URL `https://doi.org/10.2469/faj.v59.n4.2545`. A. Micheli and E. Neuman. Evidence of crowding on Russell 3000 reconstitution events. *Market Microstructure and Liquidity*, 2022. doi: 10.1142/S2382626620500094. URL `https://doi.org/10.1142/S2382626620500094`. A. Micheli, J. Muhle-Karbe, and E. Neuman. Closed-loop Nash competition for liquidity. *To appear in Mathematical Finance*, 2023. doi: https://doi.org/10.1111/mafi.12409. URL `https://onlinelibrary.wiley.com/doi/abs/10.1111/mafi.12409`. E. Neuman and M. Voss. Optimal signal-adaptive trading with temporary and transient price impact. *SIAM Journal on Financial Mathematics*, 13 (2): 551--575, 2022. doi: 10.1137/20M1375486. URL `https://doi.org/10.1137/20M1375486`. E. Neuman and M. Voss. Trading with the crowd. *Mathematical Finance*, 33 (3): 548--617, 2023. doi: https://doi.org/10.1111/mafi.12390. URL `https://onlinelibrary.wiley.com/doi/abs/10.1111/mafi.12390`. E. Neuman and Y. Zhang. Statistical learning with sublinear regret of propagator models. *arXiv preprint arXiv:2301.05157*, 2023. A. A. Obizhaeva and J. Wang. Optimal trading strategy and supply/demand dynamics. *Journal of Financial Markets*, 16 (1): 1 -- 32, 2013. ISSN 1386-4181. doi: http://dx.doi.org/10.1016/j.finmar.2012.09.001. URL `http://www.sciencedirect.com/science/article/pii/S1386418112000328`. N. Randewich. Wall Street's oldest-ever bull market turns 10 years old. Available at `https://uk.reuters.com/article/usa-stocks-bull/rpt-wall-streets-oldest-ever-bull-market-turns-10-years-old-idUKL1N20V1RJ`, 2019. W. S. J. Staff. Inside a Decadelong Bull Run. Available at `https://www.wsj.com/articles/inside-a-decade-long-bull-run-11552041001`, 2019. B. Tóth, Z. Eisler, and J. P. Bouchaud. The short-term price impact of trades is universal. *Market Microstructure and Liquidity*, 03 (02): 1850002, 2017. doi: 10.1142/S2382626618500028. URL `https://doi.org/10.1142/S2382626618500028`. M. Uehara and W. Sun. Pessimistic model-based offline reinforcement learning under partial coverage. *arXiv:2107.06226*, 2021. M. Vodret, I. Mastromatteo, B. Tóth, and M. Benzaquen. Do fundamentals shape the price response? A critical assessment of linear impact models. *Quantitative Finance*, 22 (12): 2139--2150, 2022. doi: 10.1080/14697688.2022.2114376. URL `https://doi.org/10.1080/14697688.2022.2114376`. [^1]: Corresponding author. <yufei.zhang@imperial.ac.uk>
arxiv_math
{ "id": "2309.02994", "title": "An Offline Learning Approach to Propagator Models", "authors": "Eyal Neuman, Wolfgang Stockinger, Yufei Zhang", "categories": "math.OC math.PR q-fin.ST q-fin.TR stat.ML", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: - | In the synthetic geometric setting introduced by Kunzinger and Sämann, we present an analogue of Toponogov's Globalisation Theorem which applies to Lorentzian length spaces with lower (timelike) curvature bounds. Our approach utilises a "cat's cradle" construction akin to that which appears in several proofs in the metric setting. On the road to our main result, we also provide a lemma regarding the subdivision of triangles in spaces with a local lower curvature bound and a synthetic Lorentzian version of the Lebesgue Number Lemma. Several properties of time functions and the null distance on globally hyperbolic Lorentzian length spaces are also highlighted. We conclude by presenting several applications of our results, including versions of the Bonnet--Myers Theorem and Splitting Theorem for Lorentzian length spaces with local lower curvature bounds, as well as discussion of stability of curvature bounds under Gromov--Hausdorff convergence. - | We want to thank James Grant, Stacey Harris and Didier Solis for valuable input in the early stages of this work. We also want to acknowledge the kind hospitality of the Erwin Schrödinger International Institute for Mathematics and Physics (ESI), where this project was initiated. This work was supported by research grant P33594 of the Austrian Science Fund FWF and by a UKRI Future Leaders Fellowship \[grant number MR/W01176X/1\]. *Keywords:* Lorentzian length spaces, synthetic curvature bounds, globalisation, Lorentzian geometry, null distance, time functions *MSC2020:* 53C50, 53C23, 53B30, 51K10, 53C80 author: - | Tobias Beran,[^1] John Harvey,[^2] Lewis Napper,[^3] Felix Rott[^4]\ bibliography: - references.bib title: A Toponogov globalisation result for Lorentzian length spaces --- # Introduction {#sec:intro} Recall that a metric space $(X,d)$ is called a length (or intrinsic) space if its distance function $d(p,q)$ can be recovered as the infimum of the length of curves joining $p$ to $q$. This is the realm of so-called synthetic geometry, which can be seen as a generalisation of Riemannian geometry to spaces of lower regularity. Such spaces have proven an essential tool in the study of geometric flows [@AGS08; @GN21], optimal transport [@Stu06; @Vil09], and bounds on the number of finite subgroups of fundamental and crystallographic groups, see [@BBI01 Corollary 9.3.2] and [@Leb15] respectively. One notion that frequently arises in this setting is the concept of curvature bounds; a metric length space is said to have a lower (or upper) curvature bound if a given comparison condition[^5] holds on a neighbourhood of each point $x\in X$, [@AKP19; @BBI01]. These conditions are used to tame some of the more erratic behaviours of metric length spaces, so that they act more like their Riemannian counterparts, while not requiring smoothness. A vast amount of theory has been developed concerning spaces which exhibit global curvature bounds (where the comparison condition holds on the whole space) and their properties [@Gro78; @Gro99]. The preservation of curvature bounds along sequences of spaces which converge in the Gromov--Hausdorff topology is a prime example [@BGP92; @Kap02]. As such, it is pertinent to ask when a space with a known local curvature bound also possesses a global one, that is, when does a curvature bound globalise? 
In the case of lower curvature bounds, this was first proven in two dimensions by Pizzetti [@Piz07] (see the history [@PZ11] for more details) and later independently re-proven by Alexandrov [@Ale51; @Ale57]. These "Toponogov Globalisation Theorems" were popularised by Toponogov's proof for Riemannian manifolds in the late 1950s [@Top57; @Top58; @Top59]. Since then, Burago, Gromov and Perelman [@BGP92] and Plaut [@Pla91; @Pla96] have extended the result to arbitrary complete metric length spaces, with refinements to their proofs being made by Alexander, Kapovitch, and Petrunin [@AKP19], as well as Lang and Schroeder [@LS13]. A further generalisation regarding (not necessarily complete) geodesic spaces was also provided by Petrunin in [@Pet16]. Analogously to metric length spaces, in [@KS18], Kunzinger and Sämann introduced the notion of a Lorentzian (pre-)length space, which facilitates the study of non-smooth Lorentzian geometry, with key applications in the investigation of spacetimes with low regularity metrics [@CG12; @GKS19; @GKSS20], cones [@AGKS19], and robust concepts of Gromov--Hausdorff convergence in the Lorentzian setting [@KS21; @Mul23; @MS23]. This synthetic Lorentzian picture also admits bounds on the so-called timelike curvature of a Lorentzian length space, via comparison conditions [@KS18; @BMS22; @BS22]. These timelike curvature bounds have been shown to behave like their metric counterparts in many circumstances and have hence been crucial for deriving Lorentzian equivalents to many metric results, including the Reshetnyak Gluing Theorem [@BR22; @Rot22], Splitting Theorem [@BORS23], and a Bonnet--Myers style theorem for spaces with global lower timelike curvature bounds [@BNR23]. As for metric spaces, it is again pertinent to ask when a Lorentzian space with a local timelike curvature bound has a global one. In the smooth Lorentzian setting, the first result in this direction was achieved by Harris in [@Har82], where a global comparison condition was inferred from lower timelike (sectional) curvature bounds. An Alexandrov's Patchwork approach was used by three of the present authors to answer this question for Lorentzian length spaces in the case of upper timelike curvature bounds in [@BNR23], where globalisation results in the metric and Lorentzian settings were compared in detail. This paper is a continuation of that work and presents a solution for spaces with lower timelike curvature bounds, as well as several consequences of interest. The paper is organised as follows. We begin in Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"} with a brief review of some basic yet crucial properties of Lorentzian (pre-)length spaces. We provide an overview of hyperbolic angles and how they may be used to describe curvature bounds, before discussing existence conditions for time functions and null distances with advantageous properties. The principal part of this paper is contained in Section [3](#sec:toponogov){reference-type="ref" reference="sec:toponogov"}, which begins with a series of supplementary results, including a Lorentzian analogue of the Lebesgue Number Lemma and a result concerning the splitting of triangles in Lorentzian length spaces with lower curvature bounds, in the spirit of the Gluing Lemma [@BR22]. 
We then proceed with a construction derived from the "cat's cradle" of Lang and Schroeder [@LS13] in the metric setting, with our main result being stated as follows: Let $X$ be a connected, globally hyperbolic, regular Lorentzian length space with a time function $T$ and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Then each of the properties in Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"} holds globally; in particular, the entire space $X$ is a $(\geq K)$-comparison neighbourhood and hence has curvature globally bounded below by $K$. This result globalises the notion of lower curvature bounds defined via "angle comparison," as in Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}. Analogously to the metric setting, curvature bounds may also be characterised with respect to other comparison conditions, which can be shown to be equivalent (see [@BKR23 Theorem 5.1] for a complete list of equivalent characterisations). Therefore, these conditions also exhibit the globalisation property, sometimes under additional assumptions which shall be discussed in Section [3](#sec:toponogov){reference-type="ref" reference="sec:toponogov"}. Note that the existence of a time function in this setting is guaranteed if $X$ is second countable. We close this paper in Section [4](#sec: applicationsandoutlook){reference-type="ref" reference="sec: applicationsandoutlook"} with an overview of some applications of our results. In particular, we show that lower curvature bounds are preserved under appropriate Lorentzian versions of Gromov--Hausdorff convergence; we extend the Lorentzian Bonnet--Myers Theorem [@BNR23] and Splitting Theorem [@BORS23] to Lorentzian length spaces with (local) lower curvature bounds and discuss the stability of curvature bounds under Gromov--Hausdorff type convergences (for example Minguzzi--Suhr convergence [@MS23]). Potential future results are also discussed. # Preliminaries {#sec:preliminaries} Over the course of the last half-decade, the theory of Lorentzian length spaces has gained immense traction, so much so that it is now a rather standard tool in the study of Lorentzian geometry. Consequently, in this section we only present material which is both critical for deriving our results and which also appears infrequently or disparately in the literature. In particular, we focus on the properties of hyperbolic angles [@BS22; @BMS22], time functions [@KS21], and null distances [@SV16]. For more fundamental definitions, we refer the reader to [@KS18; @BNR23]. ## Notation and conventions {#subsec: notationconventions} Let us begin by reintroducing our main characters and fixing our conventions. Recall that a *Lorentzian pre-length space* $(X,d,\leq,\ll,\tau)$ consists of a metric space $(X,d)$ equipped with a causal relation $\leq$, timelike relation $\ll$, and time separation function $\tau$, cf. [@KS18 Definition 2.8]. For brevity, we shall simply denote such spaces by their associated set $X$, where the additional structures can be identified from the context. A Lorentzian pre-length space which is additionally locally causally closed, causally path-connected, localisable, and whose time separation function takes the form $$\tau(x,y) = \sup \Set*{ L_\tau(\gamma) \mid \gamma \textrm{ future-directed causal curve from } x \textrm{ to } y}\, ,$$ for $x,y\in X$ with a future-directed causal curve between them and $\tau(x,y)=0$ otherwise, is called a *Lorentzian length space*, cf.
[@KS18 Definition 3.22]. Unless explicitly stated otherwise, causal curves are assumed to be future-directed. Furthermore, we use the term *distance realiser* to refer to any causal curve in a Lorentzian pre-length space, cf. [@KS18 Definition 2.24], whose $\tau$-length , attains the $\tau$-distance between its endpoints, i.e. a causal curve $\gamma$ from $x$ to $y$, such that $L_{\tau}(\gamma)=\tau(x,y)$. We inherit from earlier works the notion of the causal past/future of a point $x\in X$, which we denote by $J^\pm(x)$. The analogous timelike past/future is denoted $I^\pm(x)$. Causal and timelike diamonds with governing points $x,y\in X$ are respectively denoted by $J(x,y):=J^+(x) \cap J^-(y)$ and $I(x,y):=I^+(x) \cap I^-(y)$. Recall that a Lorentzian pre-length space is *globally hyperbolic* if all causal diamonds $J(x,y)\subseteq X$ are compact and $X$ is non-totally imprisoning, cf [@KS18 Definition 2.35 (iii)]. We now wish to address the concept of regularity, one of the defining properties of a regularly localisable Lorentzian pre-length space, cf. [@KS18 Definition 3.16] and a natural condition to impose on a Lorentzian pre-length space in its own right. This property is also crucial for defining timelike curvature bounds via angle comparison. **Definition 1** (Regularity). Let $X$ be a Lorentzian (pre-)length space. $X$ is called *regular* if any distance realiser between timelike related points is timelike, i.e. it cannot contain a null piece. It is worth observing that under strong causality, the notion of being regularly localisable is equivalent to being regular (in the sense of Definition [Definition 1](#def: regular){reference-type="ref" reference="def: regular"}) and localisable, see [@BKR23 Lemma 3.6]. ## Hyperbolic angles and curvature bounds {#subsec:LLS:angles} Hyperbolic angles in Lorentzian pre-length spaces were introduced in [@BS22] and [@BMS22], where the latter puts a greater focus on comparison results. Throughout this section, we follow the conventions of the former reference. First recall that the *finite diameter* of a Lorentzian pre-length space is given by the supremum of (finite) $\tau$-values on the space. Denote by $\mathbb{L}^2(K)$ the *Lorentzian model space* of constant curvature $K$ and its finite diameter by $D_K$, cf. [@BS22 Definition 1.11]. Similarly to the metric case, we have $$D_K=\ensuremath{\mathop{\mathrm{diam}}_{\mathrm{fin}}}(\mathbb{L}^2(K))= \begin{cases} \infty, & \text{ if } K \geq 0 \, , \\ \frac{\pi}{\sqrt{-K}}, & \text{ if } K < 0 \, . \end{cases}$$ Furthermore, in a Lorentzian pre-length space, triples of points $(p,q,r)$ with $\tau(p,r)<\infty$, either $p\ll q\leq r$ or $p \leq q \ll r$, and (non-trivial) time-separations realised by distance realisers, will be called *admissible causal triangles*. They shall be denoted by $\Delta(p,q,r)$, where the points are written according to their causal order unless otherwise stated, with each side being labelled either by the name of an associated distance realiser or, if the specific choice of distance realiser or parametrisation thereof is unimportant, by the closed interval between the endpoints, i.e. $[p,q]$ is a distance realiser from $p$ to $q$. If we additionally have $p \ll q \ll r$, the triple is called a *timelike triangle*, cf. [@KS18 Lemma 4.4]. Throughout the remainder of this paper, we tacitly assume that any such triangles satisfy appropriate size bounds, cf. [@KS18 Lemma 4.6], that is, $\tau(p,r)<D_K$. **Definition 2** (Comparison angles). 
Let $K\in \mathbb{R}$ and let $X$ be a Lorentzian pre-length space. Let $x_1 \leq x_2 \leq x_3$ be a triple of causally related points in $X$, satisfying size bounds for $K$, cf. [@KS18 Lemma 4.6] and let $\Delta(\bar{x}_1, \bar{x}_2, \bar{x}_3)$ be a comparison triangle[^6] in $\mathbb{L}^2(K)$ for $(x_1,x_2,x_3)$. Fix distinct indices $i,j,k \in \{ 1,2,3 \}$ and assume that $x_i$ is timelike related to both $x_j$ and $x_k$ in some way. We define the *comparison angle* at $x_i$ by $$\tilde\ensuremath{\measuredangle}_{x_i}^K(x_j,x_k) \coloneqq \ensuremath{\measuredangle}_{\bar{x}_i}^{\mathbb{L}^2(K)}(\bar{x}_j,\bar{x}_k)\,.$$ Here $\ensuremath{\measuredangle}_{\bar{x}_i}^{\mathbb{L}^2(K)}(\bar{x}_j,\bar{x}_k)$ is the hyperbolic angle at $\bar{x}_i$ in $\Delta(\bar{x}_1,\bar{x}_2,\bar{x}_3)\subseteq \mathbb{L}^2(K)$, which can be calculated via the law of cosines, cf. [@BS22 Lemma 2.3], by setting $\sigma = 1$ if $i=2$ ($x_i$ is not a time endpoint), $\sigma = -1$ if $i = 1 \textrm{ or } 3$ ($x_i$ is a time endpoint). To reduce the quantity of case distinctions, we also define the *signed comparison angle* $\tilde{\ensuremath{\measuredangle}}_{x_i}^{\mathrm{S},K}(x_j,x_k)=\sigma\tilde{\ensuremath{\measuredangle}}_{x_i}^{K}(x_j,x_k)$, where $\sigma$ is called the *sign* and $\tilde{\ensuremath{\measuredangle}}_{x_i}^K(x_j,x_k)>0$. In this way, $\tilde{\ensuremath{\measuredangle}}_{x_i}^{\mathrm{S},K}(x_j,x_k)$ is positive at $i=2$ and negative at $i=1$ or $3$. Another important consequence of the law of cosines is the following property, which will be used extensively throughout this work. **Corollary 3** (Law of cosine monotonicity). Let $K\in\mathbb{R}$ and consider any timelike triangle in the Lorentzian model space $\mathbb{L}^2(K)$. Then fixing the two short side lengths and varying the longest, any angle is monotonically increasing. Fixing one short side and the longest side length and varying the other short side, any angle is monotonically decreasing. Both upper angles and angles between timelike curves in a Lorentzian pre-length space may now be defined via the comparison angle introduced above. **Definition 4** (Angles). Let $X$ be a Lorentzian pre-length space and $\alpha,\beta:[0,\varepsilon)\to X$ be two timelike curves (where we permit one or both of the curves to be past-directed) with $x:=\alpha(0)=\beta(0)$. Then we define the *upper angle* $$\ensuremath{\measuredangle}_x(\alpha,\beta)=\limsup_{\substack{(s,t)\in D \\ s,t\to 0}}\tilde{\ensuremath{\measuredangle}}_x^{K}(\alpha(s),\beta(t))\,,$$ where $$\begin{aligned} D &= \Set*{ (s,t) s,t>0, \, \alpha(s),\beta(t) \text{ timelike related} } \\ & \qquad \cap \Set*{ (s,t) \alpha(s),\beta(t), x \text{ satisfies size bounds for $K$}}\,.\end{aligned}$$ If the limit superior is in fact a limit and is finite, we say the angle exists and call $\ensuremath{\measuredangle}_x(\alpha,\beta)$ an *angle*. Observe that the sign $\sigma$ of the comparison angle is independent of $(s,t)\in D$. Therefore, the *sign* of the (upper) angle is also defined to be precisely $\sigma$. The *signed (upper) angle* is then defined as $\ensuremath{\measuredangle}_x^{\mathrm{S}}(\alpha,\beta)=\sigma \ensuremath{\measuredangle}_x(\alpha,\beta)$. The following proposition provides sufficient conditions for adjacent angles taken at a point along a distance realiser to be equal. This property is similar to the metric notion of a segment being balanced, cf. [@LS13 Lemma 1.3], and, as such, it will be crucial in constructing a proof of our main result. 
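In the flat model $\mathbb{L}^2(0)$ (two-dimensional Minkowski space) the above notions can be made completely explicit: a timelike triangle is determined up to isometry by its three time separations, and the hyperbolic angle $\omega$ between two timelike side directions $u,v$ at a vertex satisfies $\cosh \omega = |\langle u,v\rangle|/(|u|\,|v|)$, where $\langle\cdot,\cdot\rangle$ denotes the Minkowski inner product. The following Python sketch (our illustration, for $K=0$ and with made-up side lengths only) realises a comparison triangle from its side lengths, computes the three comparison angles of Definition 2, and numerically checks the monotonicity in the long side described in Corollary 3.

```python
import numpy as np

def embed(a, b, c):
    """Realise a timelike triangle p << q << r in 2D Minkowski space (signature (-, +))
    with tau(p,q) = a, tau(q,r) = b, tau(p,r) = c.  Requires the reverse triangle
    inequality c >= a + b."""
    assert c >= a + b > 0
    p, r = np.array([0.0, 0.0]), np.array([c, 0.0])
    q0 = (c**2 + a**2 - b**2) / (2 * c)   # from q0^2 - q1^2 = a^2 and (c - q0)^2 - q1^2 = b^2
    q = np.array([q0, np.sqrt(max(q0**2 - a**2, 0.0))])
    return p, q, r

def mink(u, v):
    return -u[0] * v[0] + u[1] * v[1]

def angle(vertex, end1, end2):
    """(Unsigned) hyperbolic angle at `vertex` between the timelike sides to end1, end2."""
    u, v = end1 - vertex, end2 - vertex
    return np.arccosh(abs(mink(u, v)) / np.sqrt(mink(u, u) * mink(v, v)))

# Comparison angles of the triangle with tau(p,q) = 1, tau(q,r) = 2, tau(p,r) = 4.
p, q, r = embed(1.0, 2.0, 4.0)
print("angle at p:", round(angle(p, q, r), 4))
print("angle at q:", round(angle(q, p, r), 4))
print("angle at r:", round(angle(r, p, q), 4))

# Corollary 3 for K = 0: fixing the two short sides and increasing the long side,
# every angle is monotonically increasing.
angles = []
for c in np.linspace(3.0, 6.0, 13):
    p, q, r = embed(1.0, 2.0, c)
    angles.append([angle(p, q, r), angle(q, p, r), angle(r, p, q)])
assert np.all(np.diff(np.array(angles), axis=0) >= -1e-12)
print("all three angles increase with the long side, as in Corollary 3")
```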
**Proposition 5** (Balanced segments in Lorentzian pre-length space). Let $X$ be a strongly causal and locally causally closed Lorentzian pre-length space with timelike curvature bounded below by $K \in \mathbb{R}$, and let $\alpha:[0,1] \to X$ be a timelike distance realiser. Let $x=\alpha(t)$ for $t \in (0,1)$ and consider the restrictions $\alpha_-=\alpha|_{[0,t]}$ and $\alpha_+=\alpha|_{[t,1]}$ as past-directed and future-directed distance realisers emanating from $x$, respectively. Let $\beta$ be a timelike distance realiser emanating from $x$. Then $\ensuremath{\measuredangle}_x(\alpha_-,\beta)=\ensuremath{\measuredangle}_x(\alpha_+,\beta)$. *Proof.* See [@BS22 Corollary 4.6] (and [@BS22 Lemma 4.10] for the existence of the angle). ◻ Throughout this paper, we make use of several different formulations of curvature bounds via comparison methods. Each of these has, at least partly, been introduced in the context of Lorentzian length spaces in earlier works, with full details on all current formulations being found in [@BKR23], which also provides conditions under which they are equivalent. Since we predominantly use the formulation of curvature bounds in terms of angle comparison, we now provide this explicitly. This angle comparison condition is analogous to the one globalised by [@Har82] in the smooth Lorentzian setting and is the definition to which our globalisation result will directly apply. **Definition 6** (Curvature bounds by angle comparison). An open subset $U$ in a regular Lorentzian pre-length space $X$ is called a *$(\geq K)$-comparison neighbourhood* if it satisfies the following: 1. [\[def-cb-ang.item1\]]{#def-cb-ang.item1 label="def-cb-ang.item1"} $\tau$ is continuous on $(U\times U) \cap \tau^{-1}([0,D_K))$ and this set is open. 2. [\[def-cb-ang.item2\]]{#def-cb-ang.item2 label="def-cb-ang.item2"} For all $x,y \in U$ with $x \ll y$ and $\tau(x,y) < D_K$ there exists a distance realiser contained entirely in $U$ connecting $x$ and $y$. 3. [\[def-cb-ang.main\]]{#def-cb-ang.main label="def-cb-ang.main"} Let $\alpha:[0,a]\to U,\beta:[0,b]\to U$ be timelike distance realisers with arbitrary time-orientation and such that $x:=\alpha(0)=\beta(0)$ and $\Delta(x,\alpha(a),\beta(b))$, with some permutation of vertices, is an admissible causal triangle satisfying size bounds. Then $$\ensuremath{\measuredangle}_x^{\mathrm{S}}(\alpha,\beta)\leq\tilde{\ensuremath{\measuredangle}}_x^{K,\mathrm{S}}(\alpha(a),\beta(b))\,.$$ 4. [\[def-cb-ang.item4\]]{#def-cb-ang.item4 label="def-cb-ang.item4"} Additionally, the following property must hold. If $\alpha,\beta,\gamma:[0,\varepsilon)\to U$ are three timelike curves with $x:=\alpha(0)=\beta(0)=\gamma(0)$, $\alpha,\gamma$ pointing in the same time direction, and $\beta$ in the other, then we have the following special case of the triangle inequality of angles: $$\label{eq: triangle inequality for angles for lower curvature bounds} \ensuremath{\measuredangle}_x(\alpha,\gamma)\leq\ensuremath{\measuredangle}_x(\alpha,\beta)+\ensuremath{\measuredangle}_x(\beta,\gamma)\, .$$ We say that $X$ has *curvature bounded below by $K$ in the sense of angle comparison* if every point in $X$ has a $(\geq K)$-comparison neighbourhood. If $X$ itself is a $(\geq K)$-comparison neighbourhood, then we say that $X$ has *curvature globally bounded below by $K$*, and similarly for curvature bounds above. 
Observe that, in point (iv) of the above definition, we can also take the curves to be maps into $X$, as the angles only depend on the initial segments of the curves. Furthermore, when considering curvature bounds from above, the inequality in (iii) is reversed and (iv) is dropped, though this notion will not be used in the remainder of the paper. For completeness sake, below we state the equivalence result for the characterisations we use. Note that the assumptions in [@BKR23 Theorem 5.1] are much weaker than ours; our presentation, however, is entirely sufficient for our purpose. The a priori assumption of [\[eq: triangle inequality for angles for lower curvature bounds\]](#eq: triangle inequality for angles for lower curvature bounds){reference-type="eqref" reference="eq: triangle inequality for angles for lower curvature bounds"} in the following proposition is a consequence of a cumbersome technicality when trying to obtain angle comparison from other formulations. This is another reason why we prefer to work with angle comparison directly: this triangle inequality of angles is already assumed in the definition. **Proposition 7** (Equivalence of curvature bounds). Let $X$ be a globally hyperbolic and regular Lorentzian length space which satisfies [\[eq: triangle inequality for angles for lower curvature bounds\]](#eq: triangle inequality for angles for lower curvature bounds){reference-type="eqref" reference="eq: triangle inequality for angles for lower curvature bounds"}. Then the following are equivalent for an open subset $U \subseteq X$: 1. $U$ is a $(\geq K)$-comparison neighbourhood in the sense of timelike triangle comparison. 2. $U$ is a $(\geq K)$-comparison neighbourhood in the sense of monotonicity comparison. 3. $U$ is a $(\geq K)$-comparison neighbourhood in the sense of angle comparison. 4. $U$ is a $(\geq K)$-comparison neighbourhood in the sense of hinge comparison. Our eventual proof of the globalisation of timelike curvature bounds will consider admissible causal triangles which are not contained in comparison neighbourhoods and for which Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.main\]](#def-cb-ang.main){reference-type="ref" reference="def-cb-ang.main"} fails to hold at some vertex and show that, under certain assumptions, these cannot exist. We formulate the aforementioned failure characteristic more precisely as follows. **Definition 8** (Angle condition holds/ fails). Let $X$ be a regular Lorentzian pre-length space with timelike curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison and let $\alpha:[0,a]\rightarrow X$, $\beta:[0,b]\rightarrow X$ be timelike distance realisers of arbitrary time-orientation (not necessarily contained in a comparison neighbourhood), with $L(\alpha)$, $L(\beta)$, $\tau(\alpha(a), \beta(b))$, $\tau(\beta(b), \alpha(a)) < D_K$, and such that $x\coloneqq \alpha(0)=\beta(0)$ and $\alpha(a),\beta(b)$ are causally related. We say that the *angle condition holds* at $x$ if Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.main\]](#def-cb-ang.main){reference-type="ref" reference="def-cb-ang.main"} is satisfied at $x$, with respect to the curvature bound $K$ on $X$. 
Similarly, we say that the *angle condition fails to hold* at $x$ if Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.main\]](#def-cb-ang.main){reference-type="ref" reference="def-cb-ang.main"} is not satisfied at $x$, i.e. if the inequality $$\ensuremath{\measuredangle}_x^{\mathrm{S}}(\alpha,\beta)>\tilde{\ensuremath{\measuredangle}}_x^{K,\mathrm{S}}(\alpha(a),\beta(b))\,,$$ holds, with respect to the curvature bound $K$ on $X$. In particular, the angle condition may be said to hold/fail at vertices between timelike sides of an admissible causal triangle. Moreover, note that by [@BKR23 Remark 3.12], it is sufficient to only consider timelike triangles when dealing with curvature bounds in the sense of angle comparison. In order to verify whether or not triangles may have a failing angle, we need to be able to divide timelike triangles into smaller timelike triangles for which the answer to this question is known. To do so, we will utilise the twin Lorentzian versions of Alexandrov's Lemma. Each result in the pair corresponds to a different subcase depending on which side we divide along; more precisely, the "across version" discusses divisions along the longest side, while the "future version" discusses divisions along one of the shorter sides. Since the statements of these lemmata are rather extensive, we only provide the statement of the latter. The former is illustrated in Figure [\[fig: alexlem across concave\]](#fig: alexlem across concave){reference-type="ref" reference="fig: alexlem across concave"} and the reader is referred to [@BORS23 Proposition 2.42, 2.43] and [@BR22 Lemma 4.2.1, 4.2.2] for more detail, including proofs of the respective statements. While the presentation in [@BORS23] concerns the case $K=0$, generalising to non-zero $K$ is straightforward, provided we assume the associated size bounds. **Proposition 9** (Alexandrov Lemma: future version). Let $X$ be a Lorentzian pre-length space. Let $\Delta:=\Delta(p,q,r)$ be a timelike triangle satisfying size bounds for $K$. Let $x$ be a point on the side $[p,q]$, such that the distance realiser between $x$ and $r$ exists. Then we can consider the smaller triangles $\Delta_1:=\Delta(p,x,r)$ and $\Delta_2:=\Delta(x,q,r)$. We construct a comparison situation consisting of a comparison triangle $\bar{\Delta}_1$ for $\Delta_1$ and $\bar{\Delta}_2$ for $\Delta_2$, with $\bar{p}$ and $\bar{q}$ on different sides of the line through $[\bar{x},\bar{r}]$ and a comparison triangle $\tilde{\Delta}$ for $\Delta$ with a comparison point $\tilde{x}$ for $x$ on the side $[\tilde p,\tilde q]$. This contains the subtriangles $\tilde{\Delta}_1:=\Delta(\tilde{p},\tilde{x},\tilde{r})$ and $\tilde{\Delta}_2:=\Delta(\tilde{x},\tilde{q},\tilde{r})$, see Figure [\[fig: alexlem future convex\]](#fig: alexlem future convex){reference-type="ref" reference="fig: alexlem future convex"}. Then the situation $\bar{\Delta}_1$,$\bar{\Delta}_2$ is convex (concave) at $x$ (i.e. $\ensuremath{\measuredangle}_{\bar{x}}(\bar{q},\bar{r})\leq\ensuremath{\measuredangle}_{\bar{x}}(\bar{p},\bar{r})$ (or $\geq$)) if and only if $\tau(x,r) = \tau(\bar{x},\bar{r}) \leq \tau(\tilde{x},\tilde{r})$ (or $\geq$). The same is true if $x$ is a point on the side $[q,r]$. Note that the convexity (resp. concavity) condition ($\tau(x,q)\leq\bar{\tau}(\tilde{x},\tilde{q})$ (or $\geq$)) is automatically satisfied if $X$ has timelike curvature bounded below (above) by $K$ and $\Delta$ is within a comparison neighbourhood. 
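To see the comparison configuration of Proposition 9 in coordinates, the following rough Python sketch takes made-up $\tau$-data for a subdivided timelike triangle (with $x$ on the side $[p,q]$, so that $\tau(p,q)=\tau(p,x)+\tau(x,q)$), computes the two angles at $\bar x$ in the glued comparison triangles $\bar\Delta_1,\bar\Delta_2$ via the $K=0$ law of cosines, and compares $\tau(x,r)$ with $\tau(\tilde x,\tilde r)$ in the comparison triangle $\tilde\Delta$. It merely exhibits one instance of the stated equivalence; the numbers are ours and nothing here replaces the proofs in [@BORS23; @BR22].

```python
import numpy as np

# Made-up tau-data for a subdivided timelike triangle p << x << q << r with x on [p, q],
# so tau(p, q) = tau(p, x) + tau(x, q); all reverse triangle inequalities hold.
t_px, t_xq, t_qr = 1.0, 1.0, 1.0
t_xr, t_pr = 2.5, 4.0
t_pq = t_px + t_xq

# K = 0 law of cosines: angle at the middle vertex (short sides a, b; long side c)
# and at the past endpoint (emanating sides a and c; opposite short side b).
cosh_middle = lambda a, b, c: (c**2 - a**2 - b**2) / (2 * a * b)
cosh_bottom = lambda a, b, c: (a**2 + c**2 - b**2) / (2 * a * c)

# Angles at x-bar in the comparison triangles for Delta_1 = (p, x, r) and Delta_2 = (x, q, r).
ang_x_to_p_r = np.arccosh(cosh_middle(t_px, t_xr, t_pr))   # between [x,p] and [x,r]
ang_x_to_q_r = np.arccosh(cosh_bottom(t_xq, t_qr, t_xr))   # between [x,q] and [x,r]

# Comparison triangle for the full triangle (p, q, r), with x-tilde on the side [p~, q~].
C, A, B = t_pr, t_pq, t_qr
p_t, r_t = np.array([0.0, 0.0]), np.array([C, 0.0])
q0 = (C**2 + A**2 - B**2) / (2 * C)
q_t = np.array([q0, np.sqrt(q0**2 - A**2)])
x_t = p_t + (t_px / A) * (q_t - p_t)        # x-tilde at tau-distance t_px from p-tilde
d = r_t - x_t
tau_xt_rt = np.sqrt(d[0]**2 - d[1]**2)      # tau(x~, r~); d is future-directed timelike here

print(f"angle at x towards (q, r): {ang_x_to_q_r:.4f}")
print(f"angle at x towards (p, r): {ang_x_to_p_r:.4f}")
print(f"tau(x, r) = {t_xr:.4f},  tau(x~, r~) = {tau_xt_rt:.4f}")
print("convex at x:", ang_x_to_q_r <= ang_x_to_p_r,
      "  tau(x, r) <= tau(x~, r~):", t_xr <= tau_xt_rt)
```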
## Null distance The null distance $d_T$ induced by a time function $T$ was originally introduced by Sormani and Vega [@SV16] in the smooth setting, as a convenient way of equipping a spacetime with a (distance) metric which is compatible with the causal structure. This concept has also been introduced in the setting of synthetic Lorentzian geometry, cf. [@KS21]. The null distance between two points is defined to be the infimum, over all piecewise causal curves between those points, of the total variation of the associated time function along the curve. In the case of a spacetime, if this is achieved it must be along a piecewise null curve, inspiring the name. However, the null distance is not necessarily a true distance, and [@SV16 Theorem 4.6] demonstrates that a sufficient condition for $d_T$ to be a distance function is $T$ being locally anti-Lipschitz. With regard to our ultimate goal of globalisation, the null distance is also an ideal way of describing the "size" of a timelike triangle. Contrary to the metric setting, there are always two notions of size at play in a Lorentzian pre-length space: on the one hand, we have the $\tau$-length of the sides of a triangle, which may be used to describe timelike curvature bounds, and on the other, we have the $d$-length of the sides, which is responsible for whether or not a triangle is inside a comparison neighbourhood. It will turn out that particularly well-behaved null distances, when combined with timelike diamonds which are also comparison neighbourhoods, à la [@BNR23 Proposition 4.3], form the key to controlling both of these points of view simultaneously. Although in the next section we directly assume that our space possesses a time function, we first draw the reader's attention to the following result, which provides sufficient conditions for this to be the case. **Proposition 10** (Existence of time functions). Let $X$ be a second countable, globally hyperbolic Lorentzian length space. Then $X$ possesses a time function $T$. *Proof.* The result is clear upon combining [@BGH21 Theorem 3.2] with [@ACS20 Theorem 3.20], [@Rot22 Lemma 3.8], and [@KS18 Theorem 3.7]. ◻ We now wish to make the notion of a well-behaved null distance more precise; in particular, we shall require our null distance to be a finite, continuous pseudo-metric.[^7] Before providing conditions under which this must be the case, we present one further observation. **Lemma 11** (Path-connected Lorentzian pre-length spaces). Let $X$ be a causally path connected Lorentzian pre-length space such that for each $x \in X$ either $I^+(x)$ or $I^-(x)$ is non-empty. Then the following are equivalent: 1. $X$ is connected. 2. $X$ is path connected. 3. $X$ is piecewise causal path connected, i.e. any $x,y\in X$ can be connected by a continuous curve consisting of future directed and past directed causal pieces, cf. [@KS21 Definition 3.2]. *Proof.* Two of the implications are clear, so assume that $X$ is connected; we claim that it is then piecewise causal path connected. Let $p\in X$ and $R_p$ be the set of all points which are connected to $p$ by piecewise causal paths. We claim that $R_p$ is open and in turn that $R_p = X$: By assumption, for each $q\in R_p$, there exists an $r\ll q$ (or $q\ll r$) and, as $X$ is causally path connected, a causal curve between them. Hence there is a piecewise causal curve from $p$ to $r$ and so $r\in R_p$. Similarly, each point in $I^+(r)$ (resp. $I^-(r)$) is connected to $r$ (and hence $p$) by a piecewise causal curve. So $I^+(r)\subseteq R_p$ (resp.
$I^-(r)\subseteq R_p$) is an open neighbourhood of $q$ contained in $R_p$. As $q$ was arbitrary, it follows that $R_p$ is open. Then $\{ R_p : p \in X \}$ gives an open partition of $X$. However, $X$ is connected, hence the partition must consist of precisely one element, namely $R_p = X$ for all $p\in X$, and $X$ is piecewise causal path-connected. ◻ It should be clear that the above lemma holds for Lorentzian length spaces and this is the context in which we will utilise the result. We also note that a Lorentzian pre-length space $X$ which is connected and causally path connected, such that for each $x\in X$ one of $I^+(x)$ or $I^-(x)$ is non-empty, is automatically *sufficiently causally connected*, see [@KS21 Definition 3.4]. The equivalence between path-connected and piecewise causal path-connected was also noted by [@KS21 Lemma 3.5] and [@SV16 Lemma 3.5] in their respective settings. In the following proposition we demonstrate that the null distance on a connected Lorentzian length space satisfies all of the requirements of a distance function other than separation of points, even if we do not assume that the associated time function is locally anti-Lipschitz (cf. [@SV16 Lemma 3.8] for a corresponding result on spacetimes). **Proposition 12** (Null distance is a finite, continuous pseudo-metric). Let $X$ be a connected Lorentzian length space with a (not necessarily locally anti-Lipschitz) time function $T$ and metric $d$. The null distance $d_T$, induced by $T$, is a finite pseudo-metric which is continuous (with respect to $d$). Moreover, $$\label{eq: null-distance-time-function} p \leq q \Rightarrow d_T(p,q)=T(q)-T(p)\, .$$ *Proof.* By our previous discussion, every connected Lorentzian length space is sufficiently causally connected. The fact that $d_T$ is a finite pseudo-metric then follows directly from [@KS21 Lemma 3.7]. Similarly, continuity of $d_T$ and [\[eq: null-distance-time-function\]](#eq: null-distance-time-function){reference-type="eqref" reference="eq: null-distance-time-function"} follow from [@KS21 Proposition 3.9] and [@KS21 Proposition 3.8.(ii)], respectively. ◻ The diameter of a subset in a metric space is a well-known concept, which also makes sense when considering such a pseudo-metric. In particular, due to the nature of $T$ and $d_T$, the $d_T$-diameter, denoted by $\mathop{\mathrm{diam}}_T$, of a causal or timelike diamond is simply the difference in $T$-values of its endpoints, i.e. $\mathop{\mathrm{diam}}_T(I(p,q)) = \mathop{\mathrm{diam}}_T(J(p,q)) = T(q)-T(p)$. Indeed, if $x, y \in J(p,q)$ then the two piecewise causal curves from $x$ to $y$ with one breakpoint at either $p$ or $q$ together have length $2(T(q)-T(p))$ and so one of them must have length bounded above by $T(q)-T(p)$. Viewing an admissible causal triangle as the union of the images of the curves corresponding to its sides, we therefore have $\mathop{\mathrm{diam}}_T(\Delta(p,q,r)) = T(r) - T(p)$. Of course, from a metric point of view, any admissible causal triangle is degenerate with respect to $d_T$, i.e. $$\label{eq: null distance degenerate in timelike triangles} d_T(p,r)=d_T(p,q)+d_T(q,r)\,.$$ In the next section we shall put the key we have just constructed into action and finally prove the Toponogov Globalisation Theorem for Lorentzian length spaces. # Lorentzian Toponogov Globalisation {#sec:toponogov} The main goal of this section is to prove a synthetic Lorentzian analogue of Toponogov's Globalisation theorem for lower timelike curvature bounds.
This will be proven in the setting of connected, globally hyperbolic, regular Lorentzian length spaces having a time function. As previously noted, second countability is sufficient for the existence of a time function. However, before we dive into the proof proper, we first require a small collection of essential lemmata. To begin, recall that globally hyperbolic Lorentzian length spaces $X$ are geodesic with finite and continuous time separation $\tau$ [@KS18 Theorems 3.28 and 3.30]. Thus, in this case, [\[def-cb-ang.item1\]](#def-cb-ang.item1){reference-type="ref" reference="def-cb-ang.item1"} and [\[def-cb-ang.item2\]](#def-cb-ang.item2){reference-type="ref" reference="def-cb-ang.item2"} from Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"} (curvature bounds in the sense of angle comparison) hold for $U=X$, i.e. globalisation of these properties is automatic for such spaces. We will also use the geodesic nature of globally hyperbolic Lorentzian length spaces implicitly throughout the remainder of this section, to avoid concerns regarding the existence of distance realisers. Our next result is a slight adaptation of the Lebesgue Number Lemma, which allows us to properly configure coverings of causal diamonds by small and well-behaved timelike diamonds. **Lemma 13** (Lebesgue Number Lemma, Lorentzian version). Let $X$ be a connected, globally hyperbolic, Lorentzian length space with $T: X \to \mathbb{R}$ a time function on $X$ and let $d_T$ be the associated null distance. Consider any causal diamond $J(x,y)$ in $X$ and let $\{D_i\}_{i=1}^n$ be an open cover of $J(x,y)$ consisting of timelike diamonds.[^8] Then there exists an $\varepsilon >0$ such that any causal (and hence any timelike) diamond with $d_T$-diameter less than $\varepsilon$ contained in $J(x,y)$ is also contained in one element of the covering. *Proof.* The main difference when comparing to the original version of the Lebesgue number lemma is that $d_T$ is only a finite, continuous, pseudo-metric in general, as a result of Proposition [Proposition 12](#prop:pseudometric-null-distance){reference-type="ref" reference="prop:pseudometric-null-distance"}. The causal structure of diamonds and its interplay with the null distance will be crucial in the proof. Firstly, if $J(x,y) \subseteq D_i$ for some $i$ then we can choose $\varepsilon$ arbitrarily and we are done. Otherwise, denote by $C_i \coloneqq J(x,y) \setminus D_i$ the complement of $D_i$ in $J(x,y)$. Define a function $f: J(x,y) \to \mathbb{R}$ via $$f(p)=\max_{i\in\{ 1,2,\ldots, n\}} d_T(p, C_i \cap (J^+(p) \cup J^-(p))) \, .$$ Note that the infimum in the definition of $d_T(p, C_i \cap (J^+(p) \cup J^-(p)))$ is attained as $C_i \cap (J^+(p) \cup J^-(p))$ is a closed subset of $J(x,y)$ and hence compact. We now show that $f(p) \in (0, \infty)$ for all $p$. If $f(p)$ were $0$ for some $p \in J(x,y)$, by the closedness of $C_i \cap (J^+(p) \cup J^-(p))$ and because $p \in (J^+(p) \cup J^-(p))$, we infer $p \in C_i$ for all $i$, i.e. $p \notin D_i$ for all $i$. As the $D_i$ cover $J(x,y)$, we arrive at the contradiction $p \notin J(x,y)$. If $f(p)=\infty$ for some $p$, then there exists some $i$ such that $C_i \cap (J^+(p) \cup J^-(p)) = \emptyset$. Indeed, as all of these sets are compact and the null distance is finite valued, the maximum of finitely many infima can only be infinite if (at least) one of the sets is empty. Thus, $J(x,y) \cap (J^+(p) \cup J^-(p)) \subseteq D_i$, and hence $x,y \in D_i$.
As $D_i$ is a timelike diamond and therefore causally convex, this implies $J(x,y) \subseteq D_i$, which we treated separately. As the sets $C_i \cap (J^+(p) \cup J^-(p))$ are all compact and the null distance is continuous, it follows that $f$ is continuous and hence attains its minimum value. Consequently, set $\varepsilon \coloneqq \min_{p \in J(x,y)} f(p) >0$. Now let $p,q \in J(x,y)$ with $p \leq q$ and $\mathop{\mathrm{diam}}_T(J(p,q))=d_T(p,q) < \varepsilon$. As $f(p) \geq \varepsilon$, there exists $i$ such that $d_T(p, C_i \cap (J^+(p) \cup J^-(p))) \geq \varepsilon$. Then clearly, $p \notin C_i$. Furthermore, $p \leq q$ and $d_T(p,q) < \varepsilon$, hence also $q \notin C_i$. Thus, $p,q \in D_i$ and by the causal convexity of diamonds, also $J(p,q) \subseteq D_i$. ◻ We now turn to proving the most essential synthetic Lorentzian tool required for the proof of the Globalisation Theorem. Recall that the so-called Gluing Lemma for triangles with upper curvature bounds, [@BR22 Lemma 4.3.1, Corollary 4.3.2], roughly states that if two subtriangles satisfy the same curvature inequalities, then a large triangle formed by combining the two must also satisfy that curvature bound. The Gluing Lemma (and hence the Lorentzian analogue of the Reshetnyak Gluing Theorem [@BR22 Theorem 5.2.1]) is not valid in full generality for lower curvature bounds, as not all of the inequalities in the Alexandrov Lemma [Proposition 9](#lem: alexlem future){reference-type="ref" reference="lem: alexlem future"} point in the same direction in this case. However, we propose the following result, in the spirit of the Gluing Lemma, under lower curvature bounds. In essence, if the angle condition fails to hold at a vertex in a timelike triangle, then, upon splitting the triangle into two timelike subtriangles along one of the adjacent sides, at least one angle condition must fail in one of the two subtriangles. In particular, the failing angle condition(s) will either be at the original vertex (viewed as part of a subtriangle), or at the point at which we split the adjacent side. **Lemma 14** (Gluing Lemma for timelike triangles, lower curvature bounds). Let $X$ be a globally hyperbolic, regular Lorentzian length space with curvature bounded below by $K\in\mathbb{R}$ in the sense of angle comparison. Let $\Delta(p,q,r)$ be a timelike triangle in $X$ (which is not necessarily contained in a comparison neighbourhood), where the sides are given by distance realisers $\alpha$ from $p$ to $r$, $\beta$ from $p$ to $q$ and $\gamma$ from $q$ to $r$, respectively. Let $\Delta(\tilde{p},\tilde{q},\tilde{r})$ be a comparison triangle for $\Delta(p,q,r)$ and assume that the angle condition fails to hold at $p$ in $\Delta(p,q,r)$, i.e. $\ensuremath{\measuredangle}_p(\alpha, \beta) < \ensuremath{\measuredangle}_{\tilde{p}}(\tilde{q},\tilde{r})$. Let $x$ be a point on $\beta$. Then at least one of the following three angle conditions fails to hold: the angle conditions at $x$ and $p$ in $\Delta(p,x,r)$ and the one at $x$ in $\Delta(x,q,r)$ (see Figure [\[fig: Gluing Lemma constellation\]](#fig: Gluing Lemma constellation){reference-type="ref" reference="fig: Gluing Lemma constellation"}). An analogous statement holds if $x$ is on $\alpha$ and timelike related to $q$, or if the angle condition initially failed at $r$ (and the subdividing point $x$ is on $\gamma$ or on $\alpha$ and timelike related to $q$) or at $q$ (and $x$ is on either $\beta$ or $\gamma$), instead of $p$.
*Proof.* We prove the result for the case where the angle condition fails to hold at $p$ in $\Delta(p,q,r)$ and $x$ is on $\beta$. Denote a distance realiser (which exists since $X$ is globally hyperbolic) from $x$ to $r$ by $\eta$. Denote by $\beta_-$ and $\beta_+$ the parts of $\beta$ which go from $x$ to $p$ and from $x$ to $q$, respectively. Assume that the angle condition at $p$ in $\Delta(p,x,r)$ holds, i.e. $\ensuremath{\measuredangle}_p(\alpha, \beta_-) \geq \ensuremath{\measuredangle}_{\bar{p}}(\bar{x},\bar{r})$, otherwise we are done. We now show that the angle condition at $x$ in $\Delta(p,x,r)$ or at $x$ in $\Delta(x,q,r)$ must fail. To this end, consider comparison triangles $\Delta(\bar{p},\bar{x},\bar{r})$ and $\Delta(\bar{x},\bar{q},\bar{r})$ for $\Delta(p,x,r)$ and $\Delta(x,q,r)$, respectively, as well as a comparison triangle $\Delta(\tilde{p},\tilde{q},\tilde{r})$ for $\Delta(p,q,r)$. Let $\tilde{x}$ be the comparison point for $x$ in $\Delta(\tilde{p},\tilde{q},\tilde{r})$ and consider the subtriangle $\Delta(\tilde{p},\tilde{x},\tilde{r})$. $\Delta(\bar{p},\bar{x},\bar{r})$ and $\Delta(\tilde{p},\tilde{x},\tilde{r})$ have two sides of equal length, and for the angles at $\bar{p}$ and $\tilde{p}$ we know $$\ensuremath{\measuredangle}_{\bar{p}}(\bar{x},\bar{r}) \leq \ensuremath{\measuredangle}_p(\alpha, \beta) < \ensuremath{\measuredangle}_{\tilde{p}}(\tilde{q},\tilde{r}) = \ensuremath{\measuredangle}_{\tilde{p}}(\tilde{x},\tilde{r}) \, .$$ Thus, law of cosines monotonicity gives $\tau(x,r)=\tau(\bar{x},\bar{r}) > \tau(\tilde{x},\tilde{r})$ and so, by the Alexandrov Lemma [Proposition 9](#lem: alexlem future){reference-type="ref" reference="lem: alexlem future"}, the comparison triangles $\Delta(\bar{p},\bar{x},\bar{r})$ and $\Delta(\bar{x},\bar{q},\bar{r})$ form a concave situation, i.e. $$\label{eq: gluing lemma ineq} \ensuremath{\measuredangle}_{\bar{x}}(\bar{p},\bar{r}) < \ensuremath{\measuredangle}_{\bar{x}}(\bar{q},\bar{r}) \, .$$ Moreover, by Proposition [Proposition 5](#pop: equal angles along geodesic){reference-type="ref" reference="pop: equal angles along geodesic"}, we have $\ensuremath{\measuredangle}_x(\beta_-,\eta)=\ensuremath{\measuredangle}_x(\eta,\beta_+)$. If the angle condition were to hold both at $x$ in $\Delta(p,x,r)$ and at $x$ in $\Delta(x,q,r)$, then we would have $$\ensuremath{\measuredangle}_{\bar{x}}(\bar{p},\bar{r}) \geq \ensuremath{\measuredangle}_x(\beta_-,\eta) = \ensuremath{\measuredangle}_x(\eta,\beta_+) \geq \ensuremath{\measuredangle}_{\bar{x}}(\bar{q},\bar{r}) \, ,$$ a contradiction to [\[eq: gluing lemma ineq\]](#eq: gluing lemma ineq){reference-type="eqref" reference="eq: gluing lemma ineq"}. Hence, the angle condition must fail at $x$ either in $\Delta(p,x,r)$ or $\Delta(x,q,r)$, if it does not fail at $p$ in $\Delta(p,x,r)$. For the remaining cases, the proof is similar, upon using the appropriate version of the Alexandrov Lemma (cf. [@BR22 Lemma 4.2.1] or [@BORS23 Proposition 2.42]). ◻ As should be clear from the proof, this gluing property also holds for strongly causal, locally causally closed, regular Lorentzian pre-length spaces with curvature bounded below in the sense of angle comparison. Using the previous lemmata, we can now prove two results which, when taken together, allow us to prove our main theorem. One key difficulty in generalising globalisation to the Lorentzian setting is that splitting a timelike triangle along the longest side does not, in general, produce two timelike triangles. 
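To illustrate this difficulty with a simple flat example (not needed in what follows): in $\mathbb{L}^2(0)$ with coordinates $(t,x)$, the triangle with vertices $p=(0,0)$, $q=(1,0.9)$ and $r=(2,0)$ is timelike, with $\tau(p,q)=\tau(q,r)=\sqrt{0.19}$ and $\tau(p,r)=2$, yet the midpoint $(1,0)$ of the longest side $[p,r]$ is spacelike related to $q$, so joining it to $q$ does not split $\Delta(p,q,r)$ into two timelike triangles.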
This issue is handled by the first result, which demonstrates that if any angle fails, it is always possible to assume that an angle of type $\sigma = +1$ fails. **Proposition 15** (Failing angles can be assumed to be of type $\sigma=+1$). Let $X$ be a connected, globally hyperbolic, regular Lorentzian length space with time function $T$ and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Let $0 < \varepsilon < 1$. Let $\Delta = \Delta(p,q,r)$ be a timelike triangle in $X$ which satisfies the size bounds for $K$ and for which the angle condition fails at some vertex. If the angle condition holds at each angle in every timelike triangle $\Delta(p', q', r')$ with 1. $p \leq p' \ll q' \ll r' \leq r$ and 2. $d_T(p', r') \leq (1-\varepsilon ) d_T(p,r)$ then there is at least one timelike triangle $\Delta(p'', q'', r'')$ with $p \leq p'' \ll q'' \ll r'' \leq r$ such that the angle condition fails at $q''$. *Proof.* Without loss of generality, assume that the angle condition in $\Delta$ fails at $p$ (the case where it fails at $r$ is analogous under reversal of the time orientation, while if it fails at $q$ the result is trivially satisfied). Splitting the side $[p,q]$ into two pieces at some $x \in [p,q]$, say the $d_T$-midpoint, by Lemma [Lemma 14](#lem:triangleGluingCBB){reference-type="ref" reference="lem:triangleGluingCBB"} we get that either an angle condition fails at $x$ in $\Delta(p,x,r)$, in which case the result follows, or at either $p$ in $\Delta(p,x,r)$ or $x$ in $\Delta(x,q,r)$. In either of the two latter cases, we rename the triangle where the angle condition fails by $\Delta(p_1, q_1, r)$, with the angle condition now failing at $p_1$. (Both triangles may have a failing angle condition, in which case we may simply pick one at random.) This procedure can be repeated arbitrarily many times (see Figure [\[fig: final steps labeling toponogov\]](#fig: final steps labeling toponogov){reference-type="ref" reference="fig: final steps labeling toponogov"}) and, if no positive angle fails at any stage, this will result in a sequence of pairs $p_n\ll q_n$ on the side $[p,q]$ such that the angle conditions in $\Delta(p_n,q_n,r)$ fail at $p_n$. If the new subdivision point (which is either relabelled to $p_n$ or $q_n$) is always chosen to be the midpoint of the side $[p_{n-1},q_{n-1}]$ in the $d_T$ metric, then $d_T(p_n, q_n) \to 0$ and, since these points lie on the distance realiser $[p,q]$, it must be the case that $p_n$ and $q_n$ have a common limit point $p^* \in [p,q]$ with $p_n\nearrow p^*$ and $q_n\searrow p^*$. If $d_T(p^*, r) < (1-\varepsilon ) d_T(p,r)$, then $d_T(p_n, r) \leq (1-\varepsilon ) d_T(p,r)$ for large $n$ so that $\Delta(p_n, q_n, r)$ is already sufficiently small that it cannot have a failing angle, yielding a contradiction. However, this need not hold and it may be necessary to split the long side $[p_n, r]$ in the following manner. Let $r_n'$ be the point on the intersection of some distance realiser $p_nr$ with $\partial J^+(q_n)$ (by regularity, this point of intersection is unique). By compactness of $J(p,r)$, we may, after passing to a subsequence if necessary, assume that $r_n'$ is convergent with $r_n' \to r^*$. By construction, $\tau(q_n,r_n')=0, q_n \leq r_n'$, and hence by continuity of $\tau$ and the closedness of the causal relation, we get $\tau(p^*,r^*)=0, p^* \leq r^*$. 
Moreover, we have $\tau(p_n,r)=\tau(p_n,r_n')+\tau(r_n',r)$ and hence again by continuity, $0<\tau(p^*,r)=\tau(p^*,r^*)+\tau(r^*,r)$, so the three points lie on a distance realiser. By regularity, the segment $[p^*, r]$ is timelike, so $\tau(p^*,r^*)=0 \implies p^*=r^*$. For sufficiently large $n$, then, we may take a point $r_n$ slightly to the future of $r_n'$ on the segment $p_n r$. Then $p_n$, $q_n$ and $r_n$ are all so close to $p^*$ that the timelike triangle $\Delta(p_n,q_n,r_n)$ has $d_T(p_n, r_n) \leq (1-\varepsilon ) d_T(p,r)$. Splitting the triangle $\Delta(p_n,q_n,r)$, which has an angle condition failing at $p_n$, through $q_nr_n$ using Lemma [Lemma 14](#lem:triangleGluingCBB){reference-type="ref" reference="lem:triangleGluingCBB"}, results either in an angle condition failing at $r_n$ in $\Delta(q_n,r_n,r)$, so that the result follows, or at $p_n$ or $r_n$ in $\Delta(p_n,q_n,r_n)$, which is not possible since $\Delta(p_n,q_n,r_n)$ is sufficiently small in the $d_T$ metric. ◻ Following the work of Plaut across two papers [@Pla91; @Pla96], Lang and Schroeder [@LS13] provided a "cat's cradle" construction for use in proving Toponogov's theorem for metric length spaces. Independently of and in parallel to this, Petrunin [@AKP19] also derived a similar, elegant scheme. In our second result, we demonstrate that this construction can also be used in the Lorentzian setting, despite the challenge posed by the fact that triangles with short side lengths (in $\tau$) need not be small topologically. This rules out the failure of angles of type $\sigma = +1$, provided that a collection of smaller triangles obey the angle condition at each of their vertices, essentially completing the proof. **Proposition 16** (Cat's cradle). Let $X$ be a connected, globally hyperbolic, regular Lorentzian length space with time function $T$ and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Let $0 < \varepsilon < \frac12$ and let $\Delta = \Delta(p,q,r)$ be a timelike triangle in $X$ which satisfies the size bounds for $K$. If, for every timelike triangle $\Delta(p', q', r')$ with 1. $p \leq p' \ll q' \ll r' \leq r$ and 2. $d_T(p', r') \leq (1-\varepsilon ) d_T(p,r)$ the angle condition holds at all vertices of $\Delta(p', q', r')$, then the angle condition also holds at $q$ in $\Delta$. Since the following proof is rather extensive, we first offer a brief overview. The cat's cradle construction (see Figure [\[fig:catsCradle\]](#fig:catsCradle){reference-type="ref" reference="fig:catsCradle"}) is a recursive decomposition of $\Delta$ into smaller triangles designed to ensure that the angle condition holds for the $\sigma=+1$ angle opposite the longest side, namely for the angle at $q$. From this construction, we infer a sequence of inequalities [\[eq:LS5\]](#eq:LS5){reference-type="eqref" reference="eq:LS5"}. We then continue with a similarly recursive construction in the model space, assembling a sequence of comparison triangles to infer a sequence of inequalities [\[eq:LS6\]](#eq:LS6){reference-type="eqref" reference="eq:LS6"}. Finally, we show that the two sequences of inequalities converge to the same limit, which implies that hinge comparison at $q$ cannot fail. *Proof.* To begin, set $L\coloneqq d_T(p,r)$ and $q_0 \coloneqq q$. Assume without loss of generality that $d_T(p,q_0)\geq d_T(q_0,r)$, otherwise the roles of $p$ and $r$ should be interchanged.
Let $q_1$ be the point on the distance realiser $[p,q_0]$ such that[^9] $d_T(p,q_1)=\varepsilon L$, from which it follows by [\[eq: null distance degenerate in timelike triangles\]](#eq: null distance degenerate in timelike triangles){reference-type="eqref" reference="eq: null distance degenerate in timelike triangles"} that $d_T(q_1,r)=(1 - \varepsilon )L$. Now $\Delta_1 = \Delta(q_1, q_0, r)$ is a timelike triangle satisfying the conditions of the statement, hence the angle condition holds at all vertices of $\Delta_1$ by assumption. We continue this construction recursively, picking points $q_n$, depending on whether $n$ is odd or even, to form new triangles. For even $n$, pick $q_{n}$ on the distance realiser $[q_{n-1},r]$ so that[^10] $d_T(q_{n}, r) = \varepsilon L$ and $d_T(p, q_{n}) = (1 - \varepsilon )L$. This defines a triangle $\Delta_{n} = \Delta(p, q_{n-1}, q_{n})$ for $n\geq 1$. Similarly, for odd $n$, pick $q_{n}$ on the distance realiser $[p,q_{n-1}]$ to define $\Delta_{n} = \Delta(q_{n}, q_{n-1}, r)$. In both cases, $\Delta_{n}$ satisfies the conditions of the statement and so the angle condition holds at all vertices of $\Delta_n$ by assumption. Consider now the angles in $\Delta_n$. Let $\theta_{n} \coloneqq \ensuremath{\measuredangle}_{q_{n-1}}(p, r)$ be the angle at $q_{n-1}$, which is given by $\ensuremath{\measuredangle}_{q_{n-1}}(p,q_{n})$ or $\ensuremath{\measuredangle}_{q_{n-1}}(q_{n},r)$ in $\Delta_{n}$, when $n$ is respectively even or odd. Denote by $\varphi_{n}$ the angle at $q_n$ in $\Delta_{n}$, which will be adjacent to $\theta_{n+1}$ in the subsequent triangle. When $n$ is even, $\varphi_{n}$ is $\ensuremath{\measuredangle}_{q_{n}}(q_{n-1}, p)$, while for odd $n$, the angle is $\ensuremath{\measuredangle}_{q_{n}}(q_{n-1}, r)$. In either case, $\varphi_{n} = \theta_{n+1}$, but with opposite signs $\sigma$, by Proposition [Proposition 5](#pop: equal angles along geodesic){reference-type="ref" reference="pop: equal angles along geodesic"}. Set $l_n \coloneqq \tau(p, q_n) + \tau(q_n, r)$, for $n\geq 0$. By applying the reverse triangle inequality to each $\Delta_n$ (recalling that these are defined for $n\geq 1$), we have $$\label{eq:LS5} 0 < l_0 \leq l_1 \leq \ldots \leq \tau(p, r)\,.$$ Indeed, for odd $n$, we have $l_{n-1} = \tau(p, q_{n-1}) + \tau(q_{n-1}, r) = \tau(p, q_{n})+\tau(q_{n}, q_{n-1}) + \tau(q_{n-1}, r)\leq \tau(p, q_{n}) + \tau(q_{n}, r) = l_n$ and for even $n$ a similar argument can be used. The initial, strict inequality is due to $\Delta(p,q_0,r)$ being non-degenerate, while the final inequality in the chain follows from applying the reverse triangle inequality to $\Delta(p,q_n,r)$. The sequence $(l_n)_{n\geq 0}$ in [\[eq:LS5\]](#eq:LS5){reference-type="eqref" reference="eq:LS5"} is a Cauchy sequence, as it is monotone increasing and bounded above by $\tau(p,r)$ (which is finite by size-bounds). Therefore, we have that $l_{n+1} - l_{n} \to 0$. This value is the excess in the triangle $\Delta_{n+1}$, that is, the value by which the longest side exceeds the sum of the two shortest sides (see Figure [\[fig:catsCradle\]](#fig:catsCradle){reference-type="ref" reference="fig:catsCradle"}). For $n$ even, this is $\tau(q_{n+1}, r) - \tau(q_{n}, r) - \tau(q_{n+1}, q_{n})$. For $n$ odd, on the other hand, this is $\tau(p, q_{n+1}) - \tau(p, q_{n}) - \tau(q_{n}, q_{n+1})$. **Claim:** For some subsequence $n_i$, the time separation between the vertices $q_{n_i-1}$ and $q_{n_i}$ of the triangle $\Delta_{n_i}$ is uniformly bounded away from zero.
**Proof of claim:** For a contradiction, assume that the claim is false. Then we have $\lim_{n\to\infty} \tau(q_{2n-1}, q_{2n}) = \lim_{n\to\infty} \tau(q_{2n+1}, q_{2n}) = 0$. Consider the sequence of triples $\{ (q_{2n-1}, q_{2n}, q_{2n+1}) \}_{n\geq 1}$, which lies in the compact set $J(p,r)\times J(p,r)\times J(p,r)$. After passing to some subsequence $n_i$, we have that these converge to a limit triple $(q_a, q_b, q_c)$. Inspecting the time function, we see $T(q_{2n-1})=T(q_{2n+1})\neq T(q_{2n})$, hence $q_a\neq q_b\neq q_c$. Furthermore, by continuity of $\tau$, we have $\tau(q_a, q_b) = \tau(q_c, q_b) = 0$. Again by continuity of $\tau$, we have $\tau(p,q_c)+\tau(q_c,q_b)=\tau(p,q_b)$ and by causal closedness, we have $p\leq q_c\leq q_b$. In particular, $p,q_c, \textrm{ and } q_b$ lie on a distance realiser with a non-constant null piece $[q_c,q_b]$. Thus, by regularity, the whole distance realiser must be null and therefore $\tau(p,q_b)=0$. Similarly, from $\tau(q_a,q_b)+\tau(q_b,r)=\tau(q_a,r)$ and $q_a\leq q_b\leq r$, we obtain that $q_a,q_b, \textrm{ and } r$ lie on a distance realiser which is null, so $\tau(q_b,r)=0$ (see Figure [\[fig:catsCradleLimit\]](#fig:catsCradleLimit){reference-type="ref" reference="fig:catsCradleLimit"}). Therefore, $\lim_{i \to \infty} l_{2n_i} = \lim_{i \to \infty} \left( \tau(p, q_{2n_i}) + \tau(q_{2n_i}, r) \right) = \tau(p,q_b) + \tau(q_b,r) = 0$. However, [\[eq:LS5\]](#eq:LS5){reference-type="eqref" reference="eq:LS5"} states that $l_n$ is a non-decreasing sequence, beginning with $l_0 > 0$, which yields a contradiction. **Claim proven.** Let $p_n = p$ and $r_n = r$ for all $n\geq 0$. We now carry out a similar construction in the model space $\mathbb{L}^2(K)$ by arranging comparison triangles $\bar{\Delta}_{n}$ (see Figure [\[fig:catsCradleComparison\]](#fig:catsCradleComparison){reference-type="ref" reference="fig:catsCradleComparison"}) for $\Delta_n$. Since, in general, the angles in $\bar\Delta_n$ do not match those in $\Delta_{n}$, the construction in $\mathbb{L}^2(K)$ does not fit together as neatly. In fact, we begin by considering a comparison hinge $([\bar{q}_0,\bar{p}_0],[\bar{q}_0,\bar{r}_0],\bar\omega_1)$ in $\mathbb{L}^2(K)$ for $([q_0,p_0],[q_0,r_0],\theta_1)$; here, $(\bar{p}_0,\bar{q}_0,\bar{r}_0)$ is a triple of points such that $\tau(\bar{p}_0,\bar{q}_0) = \tau(p_0,q_0)$, $\tau(\bar{q}_0,\bar{r}_0) = \tau(q_0,r_0)$, and the angle $\bar\omega_1$ between the distance realisers $[\bar{q}_0,\bar{p}_0],[\bar{q}_0,\bar{r}_0]$ satisfies $\bar\omega_1 = \theta_1$. In particular, there is no a priori restriction on $\tau(\bar{p}_0,\bar{r}_0)$ and instead we set out to obtain one (we are not considering a comparison triangle for $\Delta$, for example). Using our hinge, we now recursively construct the comparison triangles $\bar\Delta_n$, for $n\geq 1$. For odd $n$, fix $\bar{q}_{n}$ on the distance realiser $[\bar{p}_{n-1},\bar{q}_{n-1}]$, such that $\tau(\bar{p}_{n-1},\bar{q}_{n}) = \tau(p_{n-1},q_{n})$. Then choose $\bar{r}_{n}$ such that the timelike triangle $\bar{\Delta}_{n} = \Delta(\bar{q}_{n},\bar{q}_{n-1}, \bar{r}_{n})$ has the same side lengths as $\Delta_{n}$. Finally, set $\bar{p}_{n} = \bar{p}_{n-1}$. For even $n$, similarly fix $\bar{q}_{n}$ on the distance realiser $[\bar{q}_{n-1},\bar{r}_{n-1}]$, such that $\tau(\bar{q}_{n}, \bar{r}_{n-1}) = \tau(q_{n}, r_{n-1})$, construct a comparison triangle $\bar{\Delta}_{n} = \Delta(\bar{p}_{n}, \bar{q}_{n-1}, \bar{q}_{n})$, and set $\bar{r}_{n} = \bar{r}_{n-1}$. 
The choice of the two new points at each stage again defines new angles. Denote by $\bar{\theta}_{n}$ the angle in $\bar{\Delta}_{n}$ at $\bar{q}_{n-1}$ (note that $\bar{\theta}_{n}=\tilde{\ensuremath{\measuredangle}}_{q_{n-1}}(q_{n},r_{n})$ for $n$ odd and $\bar{\theta}_{n}=\tilde{\ensuremath{\measuredangle}}_{q_{n-1}}(q_{n},p_{n})$ for $n$ even), by $\bar{\varphi}_{n}$ the angle in $\bar{\Delta}_{n}$ at $\bar{q}_{n}$ and by $\bar{\omega}_{n+1}$ the angle of the remaining open hinge $([\bar{q}_{n},\bar{p}_{n}],[\bar{q}_{n},\bar{r}_{n}])$ adjacent to $\bar{\varphi}_{n}$, see Figure [\[fig:catsCradleComparison\]](#fig:catsCradleComparison){reference-type="ref" reference="fig:catsCradleComparison"}. Note that $\bar{\varphi}_{n} = \bar{\omega}_{n+1}$, but with opposite sign, again by Proposition [Proposition 5](#pop: equal angles along geodesic){reference-type="ref" reference="pop: equal angles along geodesic"}. As the angle condition holds at $q_{n-1}$ and $q_{n}$ in $\Delta_{n}$ by our assumptions, we have $\theta_{n} \leq \bar{\theta}_{n}$ at $q_{n-1}$, and at $q_{n}$, the type $\sigma = -1$ angle satisfies $\varphi_{n} \geq \bar{\varphi}_{n}$. Furthermore, by construction $\bar{\omega}_1 = \theta_1$ and by the above $\theta_1 \leq \bar\theta_1$, so $\bar{\omega}_1 \leq \bar\theta_1$. More generally, using the inequalities for $\varphi_n$ and $\theta_n$ borne from the angle conditions holding in each $\Delta_n$, as well as equality of adjacent angles (see Proposition [Proposition 5](#pop: equal angles along geodesic){reference-type="ref" reference="pop: equal angles along geodesic"}), we obtain $\bar{\omega}_{n} = \bar{\varphi}_{n-1} \leq \varphi_{n-1} = \theta_{n} \leq \bar{\theta}_{n}$ for all $n\geq 2$. Therefore, we have $\bar\omega_n \leq \bar\theta_n$ for all $n \geq 1$, such that the relative sizes of the angles are indeed as depicted in Figure [\[fig:catsCradleComparison\]](#fig:catsCradleComparison){reference-type="ref" reference="fig:catsCradleComparison"}. Hence, by law of cosines monotonicity (Corollary [Corollary 3](#cor:LOC Mono){reference-type="ref" reference="cor:LOC Mono"}), we have $\tau(\bar{p}_{n-1}, \bar{r}_{n-1}) \leq \tau(\bar{p}_{n}, \bar{r}_{n})$. Thus, the sequence of inequalities $$\label{eq:LS6} \tau(\bar{p}_0, \bar{r}_0) \leq \tau(\bar{p}_1, \bar{r}_1) \leq \ldots$$ holds. Consider again the subsequence $n_i$ from the claim above. Since in $\Delta_{n_i}$ the length of the (short) side $[q_{n_i-1},q_{n_i}]$ is uniformly bounded away from zero on this subsequence, the same is true of the length of the side $[\bar{q}_{n_i-1},\bar{q}_{n_i}]$ in $\bar\Delta_{n_i}$. Note that this implies the length of the longest side in $\bar\Delta_{n_i}$ is also uniformly bounded away from zero. Hence, the angle $\bar\varphi_{n_i}$ lies between two timelike sides of the triangle $\bar{\Delta}_{n_i}$ whose lengths are uniformly bounded away from zero, where the excess of $\bar{\Delta}_{n_i}$ (being equal to that of $\Delta_{n_i}$) is approaching $0$. This means that this sequence of configurations approaches a line, and not a point, so that $\bar\varphi_{n_i}\to 0$. It follows from $\bar\omega_{n+1} = \bar\varphi_n$ that $\bar\omega_{n_i+1}\to 0$. As $\bar\omega_{n_i+1}$ is given by $\ensuremath{\measuredangle}_{\bar{q}_{n_i}}(\bar{p}_{n_i}, \bar{r}_{n_i})$, we conclude that, along our subsequence, $\tau(\bar{p}_{n_i}, \bar{r}_{n_i}) - \tau(\bar{p}_{n_i}, \bar{q}_{n_i}) - \tau(\bar{q}_{n_i}, \bar{r}_{n_i}) = \tau(\bar{p}_{n_i}, \bar{r}_{n_i}) - l_{n_i} \to 0$. 
In other words, the difference of the terms of the sequences in [\[eq:LS5\]](#eq:LS5){reference-type="eqref" reference="eq:LS5"} and [\[eq:LS6\]](#eq:LS6){reference-type="eqref" reference="eq:LS6"} is converging to 0. Finally, assume that $\tau(p,r) < \tau(\bar{p}_0, \bar{r}_0)$, that is, the hinge condition [@BKR23 Definition 3.14] fails at $q$ in $\Delta$. Set $C := \tau(\bar{p}_0, \bar{r}_0) - \tau(p,r)>0$. Since $\tau(\bar{p}_n, \bar{r}_n) - l_n \geq \tau(\bar{p}_0, \bar{r}_0) - \tau(p,r)$ for all $n \geq 0$ by [\[eq:LS5\]](#eq:LS5){reference-type="eqref" reference="eq:LS5"} and [\[eq:LS6\]](#eq:LS6){reference-type="eqref" reference="eq:LS6"}, we have $\tau(\bar{p}_n, \bar{r}_n) - l_n \geq C>0$ for all $n$, contradicting the fact that, on $n_i$, $\tau(\bar{p}_n, \bar{r}_n) - l_n \to 0$. It follows, therefore, that the hinge condition must hold at $q$ in $\Delta$. Since hinge comparison and angle comparison are equivalent, see Proposition [Proposition 7](#pop: equivalence of curvature bounds){reference-type="ref" reference="pop: equivalence of curvature bounds"}, the claim follows. ◻ Collecting the previous two propositions, we can deduce that the angle conditions hold in the large as long as they hold in the small. We formalise this statement here and will apply it in the proof of the main theorem. **Corollary 17** (Core argument of Lorentzian Toponogov Globalisation). Let $X$ be a connected, globally hyperbolic, regular Lorentzian length space with time function $T$ and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Let $0 < \varepsilon < \frac12$. Let $\Delta = \Delta(p,q,r)$ be a timelike triangle in $X$ which satisfies the size bounds for $K$. If, for every timelike triangle $\Delta(p', q', r')$ with 1. $p \leq p' \ll q' \ll r' \leq r$ and 2. $d_T(p', r') \leq (1-\varepsilon ) d_T(p,r)$ the angle condition holds at all vertices of $\Delta(p',q',r')$, then the angle condition also holds at each angle in $\Delta$. *Proof.* First, observe that our assumptions include the criteria for Proposition [Proposition 16](#prop:catsCradle){reference-type="ref" reference="prop:catsCradle"} to hold. In particular, the angle condition must not fail at $q$ in $\Delta$. Now assume for a contradiction that the angle condition fails at either $p$ or $r$ in $\Delta$. Then by Proposition [Proposition 15](#prop:negativeAngles){reference-type="ref" reference="prop:negativeAngles"}, there exists a timelike triangle $\Delta'' \coloneqq \Delta(p'',q'',r'')$ with $p\leq p''\ll q''\ll r'' \leq r$, such that the angle condition fails at $q''$. Furthermore, we have $d_T(p'', r'') \leq d_T(p,r)$, via our discussion around [\[eq: null distance degenerate in timelike triangles\]](#eq: null distance degenerate in timelike triangles){reference-type="eqref" reference="eq: null distance degenerate in timelike triangles"}. Suppose that $\Delta(p', q', r')$ is a timelike triangle with $p'' \leq p' \ll q' \ll r' \leq r''$ and $d_T(p', r') \leq (1-\varepsilon ) d_T(p'',r'')$. Then it is also the case that $p \leq p' \ll q' \ll r' \leq r$ and $d_T(p', r') \leq (1-\varepsilon ) d_T(p,r)$. By the initial hypotheses,[^11] then, the angle condition holds at all vertices of all such $\Delta(p',q',r')$ and so Proposition [Proposition 16](#prop:catsCradle){reference-type="ref" reference="prop:catsCradle"} may be applied to $\Delta''$, to show that the angle condition cannot fail at $q''$, yielding a contradiction. 
Hence the angle condition must also not fail at $p$ or $r$ in $\Delta$ and our result follows. ◻ The previous result shows that the angle condition holds at all vertices of an arbitrarily large triangle, under the assumption that the angle condition holds for all vertices in a certain proportion of the smaller triangles in the space. It remains to show that (local) lower curvature bounds provide sufficiently many triangles with no failing angle condition for the above assumption to hold for each and every triangle. That is, no triangle possesses a vertex at which the angle condition fails. **Theorem 18** (Lorentzian Toponogov globalisation). Let $X$ be a connected, globally hyperbolic, regular Lorentzian length space with a time function $T$ and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Then each of the properties in Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"} holds globally; in particular, the entire space $X$ is a $(\geq K)$-comparison neighbourhood and hence has curvature globally bounded below by $K$. *Proof.* First note that Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.item4\]](#def-cb-ang.item4){reference-type="ref" reference="def-cb-ang.item4"} is a local condition, only requiring the germs of curves, hence it globalises trivially. Recall from the opening of this section that Definitions [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.item1\]](#def-cb-ang.item1){reference-type="ref" reference="def-cb-ang.item1"} and [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.item2\]](#def-cb-ang.item2){reference-type="ref" reference="def-cb-ang.item2"} also hold globally under our assumptions. It remains to check Definition [Definition 6](#def-cb-ang){reference-type="ref" reference="def-cb-ang"}[\[def-cb-ang.main\]](#def-cb-ang.main){reference-type="ref" reference="def-cb-ang.main"} for arbitrarily large triangles in $X$. Let $\Delta = \Delta(p,q,r)$ be a triangle in $X$, which we may assume to be timelike by [@BKR23 Remark 3.12], such that the angle condition fails at some vertex in $\Delta$ (this also permits triangles where the angle condition fails at multiple vertices). Clearly, $\Delta$ is contained in the causal diamond $J(p,r)$, which is compact by the global hyperbolicity of $X$. Suppose $\delta > 0$ is a greatest lower bound on the size of timelike triangles in $J(p,r)$ which exhibit a failing angle condition. In particular, any timelike triangle with $d_T$-diameter less than $\delta$ satisfies the angle condition, and there are triangles with $d_T$-diameter greater than yet arbitrarily close to $\delta$ that exhibit a failing angle condition[^12]. Applying Corollary [Corollary 17](#cor:coreArgument){reference-type="ref" reference="cor:coreArgument"} to such triangles yields a contradiction which proves the result. All that remains is to establish the existence of the greatest lower bound $\delta$. Let $A$ be the set of $d_T$-diameters of triangles in $J(p,r)$ with a failing angle condition. By assumption, an angle condition fails in $\Delta(p,q,r)$, so $d_T(p,r) \in A$ and $A \neq \emptyset$. It follows that $A$ has a greatest lower bound, which we now verify is positive by demonstrating the existence of some positive lower bound. By [@BNR23 Proposition 4.3][^13], we can cover $J(p,r)$ by finitely many timelike diamonds which are all comparison neighbourhoods.
Then by Lemma [Lemma 13](#lem: Lebesgue number lemma){reference-type="ref" reference="lem: Lebesgue number lemma"} there exists some $\delta' >0$, such that any timelike diamond of $d_T$-diameter less than $\delta'$ contained in $J(p,r)$ is contained in an element of this covering. In particular, any timelike triangle of $d_T$-diameter less than $\delta'$ is contained in a comparison neighbourhood and so has no failing angle conditions. It follows that $\delta'$ is a positive lower bound for $A$. ◻ An application of Proposition [Proposition 7](#pop: equivalence of curvature bounds){reference-type="ref" reference="pop: equivalence of curvature bounds"} also yields that, provided [\[eq: triangle inequality for angles for lower curvature bounds\]](#eq: triangle inequality for angles for lower curvature bounds){reference-type="eqref" reference="eq: triangle inequality for angles for lower curvature bounds"} holds, lower curvature bounds in the sense of hinge, monotonicity, and triangle comparison also globalise. # Applications and Outlook {#sec: applicationsandoutlook} Finally, in this section we demonstrate the application of our results to the wider field of synthetic Lorentzian geometry and discuss potential refinements of the globalisation theorem along with some open problems. ## Gromov--Hausdorff convergence We begin by taking inspiration from the metric setting and consider the stability of curvature bounds under Gromov--Hausdorff convergence, a result which has been crucial for the proofs of finiteness results in Riemannian geometry. Prior to the development of Alexandrov geometry as an independent subject, it was already understood that limits of Riemannian manifolds with sectional curvature bounded below are length spaces with curvature bounded below, in the sense that the conclusion of the Toponogov comparison theorem and certain nice topological properties hold [@GP91]. This insight was used to prove a variety of finiteness, pinching and rigidity results [@GP88; @Yam88; @GPW90; @GP90; @OSY89]. The proof of the globalisation theorem for general Alexandrov spaces [@BGP92] placed this on a much clearer footing. It ensures that lower curvature bounds in the triangle comparison sense always survive Gromov--Hausdorff convergence, since there is no possibility that the size of comparison neighborhoods shrinks to zero along the sequence. Perelman used Alexandrov geometry to prove a much more powerful homeomorphism finiteness result for Alexandrov spaces and hence Riemannian manifolds [@Per91], which has been generalised further to the setting of Riemannian orbifolds [@Harv16]. Gromov--Hausdorff convergence is most natural in the compact setting and can then be generalised to the non-compact case. As most interesting Lorentzian examples are non-compact, however, it is difficult to establish a general notion of convergence in this setting. Minguzzi and Suhr have provided an excellent notion of convergence for "bounded Lorentzian metric spaces" [@MS23] and in the globally hyperbolic case this can be applied to causal diamonds, as we will soon show. For any reasonable notion of Gromov--Hausdorff convergence of Lorentzian length spaces, we should expect that the condition of a timelike lower curvature bound is stable. 
This general principle is illustrated by Theorem [Theorem 20](#thm: GHStability){reference-type="ref" reference="thm: GHStability"}, which brings together the globalisation result for spaces in the Kunzinger--Sämann sense with the convergence result for spaces in the Minguzzi--Suhr sense. A *bounded Lorentzian metric space* is a topological space with a continuous time separation function satisfying a boundedness property ($\lbrace (p,q) : \tau(p,q) \geq \varepsilon \rbrace$ is compact for all $\varepsilon > 0$) and distinguishing points (if $p \neq q$ then for some $r$ either $\tau(p,r) \neq \tau(q,r)$ or $\tau(r,p) \neq \tau(r,q)$). It is a *bounded Lorentzian length space* in the sense of Minguzzi--Suhr if timelike related points are connected by maximal causal curves. We begin with a lemma to show that causal diamonds are bounded Lorentzian length spaces in the Minguzzi--Suhr sense (after removing the spacelike boundary). Note, however, that causal diamonds are *not* Lorentzian length spaces in the Kunzinger--Sämann sense, since they are not localisable. **Lemma 19** (Bounded Lorentzian length spaces and causal diamonds). Let $X$ be a globally hyperbolic, regular Lorentzian length space (in the sense of Kunzinger--Sämann, as used throughout this paper) and let $J(p,q)$ be a causal diamond in $X$. Let $S$ be the set of points in $J(p,q)$ which are not timelike related to any other point in $J(p,q)$ -- the "spacelike boundary" of the diamond. Then $J(p,q) \setminus S$ is a bounded Lorentzian length space in the sense of Minguzzi--Suhr. *Proof.* Let $J(p,q)$ be a causal diamond in a globally hyperbolic Lorentzian length space. By global hyperbolicity, $\tau$ is continuous with respect to the metric topology and, since $J(p,q)$ is compact and $\tau$ vanishes on $S$, the boundedness property holds on $J(p,q) \setminus S$. The final requirement for $J(p,q) \setminus S$ to be a bounded Lorentzian metric space is that $\tau$ distinguishes points. We adapt the argument from [@ACS20] which shows that globally hyperbolic Lorentzian length spaces have the stronger property of being past- and future-distinguishing. Assume for a contradiction that $x, y \in J(p,q)\setminus S$ are different points, which are not distinguished by $\tau$. In particular, $I^-(x) = I^-(y)$ and $I^+(x)=I^+(y)$. If the points are timelike related to each other, this contradicts chronology, which is implied by global hyperbolicity. Consider now the case when $x$ and $y$ are not timelike related. Since $x \notin S$, at least one point in $J(p,q) \setminus S$ is timelike related to $x$. Then, $x$ is joined to that point by a timelike curve in $J(p,q)$ and so is the limit of some sequence $x_n$, with the entire sequence lying either in $I^-(x)$ or $I^+(x)$. Without loss of generality, suppose $x_n \in I^-(x)$. Since $I^-(x) = I^-(y)$, we also have $x_n \in I^-(y)$. Hence, $x \in J^-(y)$, with $\tau(x,y)=0$ and $x\neq y$. As $\tau$ does not distinguish $x$ and $y$, we have $\tau(x_n, x) = \tau(x_n, y) >0$, from which it follows that the broken distance realiser from $x_n$ to $x$ to $y$ is a distance realising curve of mixed causal character, contradicting regularity. Therefore $J(p,q) \setminus S$ is a bounded Lorentzian metric space. By global hyperbolicity again, any two timelike related points in $J(p,q)$ are connected by a distance realiser lying in $J(p,q)$. By regularity, such a realiser is timelike and must in fact lie inside $J(p,q) \setminus S$, which is therefore a bounded Lorentzian length space in the sense of Minguzzi--Suhr.
◻ **Theorem 20** (Stability of lower curvature bounds). Let $X_i$ be a sequence of connected, globally hyperbolic, regular, Lorentzian length spaces with time functions and curvature bounded below by $K\in \mathbb{R}$ in the sense of angle comparison. Let $J_i = J(p_i, q_i)$ be a sequence of causal diamonds in $X_i$ and let $S_i$ be the spacelike boundary of $J_i$. If the sequence $J_i \setminus S_i$ converges in the sense of Minguzzi--Suhr to some $J$, then $J$ is a bounded Lorentzian length space with sectional curvature bounded below by $K$ in the sense of Minguzzi--Suhr. *Proof.* Each $J_i \setminus S_i$ is a bounded Lorentzian length space in the sense of Minguzzi--Suhr, by the previous lemma. By Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"}, the spaces $X_i$ have a global lower curvature bound in any of the senses mentioned in Proposition [Proposition 7](#pop: equivalence of curvature bounds){reference-type="ref" reference="pop: equivalence of curvature bounds"}. In particular, $J_i \setminus S_i$ has curvature globally bounded below by $K$ in the sense of timelike triangle comparison, which is precisely the definition of sectional curvature bounded below by $K$ in the sense of Minguzzi--Suhr. By [@MS23 Theorem 5.18], the limit $J$ is a bounded Lorentzian length space in the sense of Minguzzi--Suhr and, by [@MS23 Theorem 6.7], it has sectional curvature bounded below in the sense of Minguzzi--Suhr. ◻ In particular, an application of Proposition [Proposition 7](#pop: equivalence of curvature bounds){reference-type="ref" reference="pop: equivalence of curvature bounds"} also yields that, provided [\[eq: triangle inequality for angles for lower curvature bounds\]](#eq: triangle inequality for angles for lower curvature bounds){reference-type="eqref" reference="eq: triangle inequality for angles for lower curvature bounds"} holds on each $X_i$, lower curvature bounds in the sense of hinge, monotonicity, and triangle comparison are also stable under convergence, in the same sense, i.e. the limit space has sectional curvature bounded below in the sense of Minguzzi--Suhr. ## Geometric consequences There are also several direct corollaries to Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"}, which extend known results for spaces with global timelike curvature bounds to those with local timelike curvature bounds, under the assumptions of our Toponogov-style Globalisation Theorem. In what follows, we present two such results, namely the Bonnet--Myers Theorem and the Splitting Theorem. First proven by Bonnet in two dimensions, the Bonnet--Myers theorem states that a complete Riemannian manifold with sectional curvature bounded below by some *positive* $k\in \mathbb{R}$ has diameter $\mathop{\mathrm{diam}}(M) \leq \frac{\pi}{\sqrt{k}}$. For dimensions greater than two, the result was formalised by Myers [@Mye35], who later demonstrated that the weaker assumption of a positive lower Ricci curvature bound was sufficient to obtain an associated upper bound on the diameter [@Mye41]. A corresponding synthetic result appears in [@BBI01 Theorem 10.4.1], where complete metric length spaces with sectional curvature bounded below by some $k>0$ are shown to also satisfy $\mathop{\mathrm{diam}}(X)\leq \frac{\pi}{\sqrt{k}}$. Bonnet--Myers-style theorems also appear in the literature of Lorentzian geometry.
In the smooth setting, Beem and Ehrlich [@BE79 Theorem 9.5] have shown that globally hyperbolic spacetimes with timelike (sectional) curvature bounded below by some *negative* $K \in \mathbb{R}$ have $\mathop{\mathrm{diam}}(M) \leq \frac{\pi}{\sqrt{-K}}$, where the diameter is now defined in terms of the Lorentzian distance function induced by the spacetime metric.[^14] In the synthetic Lorentzian setting, where the diameter is defined in terms of the time-separation function $\tau$, Cavalletti and Mondino [@CM20 Proposition 5.10] have shown that measured Lorentzian pre-length spaces with suitable timelike measure contraction property (including a lower Ricci curvature bound), also have an upper bound on their diameter. Observe how, while the metric theorems consider $k>0$, the Lorentzian results concern $K<0$. This is not quite as superficial a change as it might first seem; it is a consequence of the hierarchy of curvature bound implications being reversed, following the conventions set by [@KS18; @BE79]. In particular, in the metric setting, curvature bounded below by $k$ implies curvature bounded below by all $k'\leq k$, whereas in the Lorentzian setting, curvature bounded below by $K$ implies curvature bounded below by all $K' \geq K$. A similar statement holds for upper curvature bounds, but with the inequalities reversed. Although we adhere to these conventions throughout this paper, they are by no means ubiquitous. For example, [@CM20; @AB08] present Lorentzian results using the metric hierarchy. While, in the metric setting, we could be content with a result utilising bounds on the Ricci curvature, since they are known to be weaker than sectional curvature bounds, see [@Pet19], in the setting of Lorentzian pre-length spaces, the hierarchy of Ricci curvature bounds and timelike (sectional) curvature bounds via triangle comparison is an open question. As such, in [@BNR23 Theorem 4.11], a preliminary Bonnet--Myers result for timelike curvature bounds via triangle comparison is proven; namely, it is shown that strongly causal, locally causally closed, regular, and geodesic Lorentzian pre-length spaces with timelike curvature *globally* bounded below by $K<0$ have finite diameter $\mathop{\mathrm{diam}}_{\mathrm{fin}}(X)\leq \frac{\pi}{\sqrt{-K}}$. Applying Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"} re-frames this result in terms of local timelike curvature bounds as follows. **Theorem 21** (Synthetic Lorentzian Bonnet--Myers). Let $X$ be a connected, globally hyperbolic, and regular Lorentzian length space which has a time function $T$ and local curvature bounded below by $K\in \mathbb{R}$. Assume $K<0$. Assume that $X$ possesses the following non-degeneracy condition: for each pair of points $x\ll z$ in $X$ we find $y \in X$ such that $\Delta(x,y,z)$ is a non-degenerate timelike triangle. Then the diameter[^15] satisfies $\mathop{\mathrm{diam}}(X)\leq \frac{\pi}{\sqrt{-K}}$. Following [@BNR23 Remark 4.12], this result may be viewed as a direct synthetic extension of [@BE79 Theorem 9.5], with an additional non-degeneracy condition. Similarly to the exclusion of spaces isomorphic to $\mathbb{R}$, $(0,\infty)$, $[0,B]$ for all $B>\frac{\pi}{\sqrt{k}}$, or circles of radius greater than $\frac{1}{\sqrt{k}}$ in the metric setting, this condition excludes locally one-dimensional spaces from the remit of our theorem. 
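As a purely illustrative remark (and not part of the theorem itself): for $K=-1$ the conclusion of Theorem [Theorem 21](#thm: lor meyers){reference-type="ref" reference="thm: lor meyers"} reads $\mathop{\mathrm{diam}}(X)\leq \pi$, while as $K\to 0^-$ the bound $$\mathop{\mathrm{diam}}(X)\leq \frac{\pi}{\sqrt{-K}}\longrightarrow \infty$$ degenerates, consistently with the fact that two-dimensional Minkowski space has infinite timelike diameter.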
Recall that, throughout this paper, we have assumed triangles satisfy appropriate size-bounds, such that their comparison triangle is realisable, cf. [@AB08 Lemma 2.1]. In particular, given a Lorentzian pre-length space $X$ with curvature bounded below (or above) by $K$, we assume that triangles $\Delta(p,q,r)$ have $\tau(p,r)<D_K$. The following lemma, which was previously presented in the context of spacetimes by [@BE79 Proposition 9.4], gives us conditions under which the diameter of a Lorentzian pre-length space is not attained. Note that the following lemma is formulated via the ordinary diameter instead of the finite diameter, i.e. the supremum of all $\tau$-values in the space. **Lemma 22**. Let $X$ be a strongly causal Lorentzian pre-length space. If $\mathop{\mathrm{diam}}(X)$ is finite, then it is not attained on $X$. Furthermore, if $X$ is a globally hyperbolic Lorentzian length space, then $\mathop{\mathrm{diam}}(X)$ is never attained on $X$, independently of whether it is finite. *Proof.* Let $X$ be a strongly causal Lorentzian pre-length space. Assume for contradiction that $\mathop{\mathrm{diam}}(X)$ is finite and attained by some $p,q\in X$, that is, $\tau(p,q) = \mathop{\mathrm{diam}}(X)$. Then, by strong causality, there exists a point $q'$ with $q \ll q'$, such that $\tau(p,q') \geq \tau(p,q) + \tau(q,q') > \tau(p,q) = \mathop{\mathrm{diam}}(X)$, contradicting the definition of the diameter. Now assume that $X$ is a globally hyperbolic Lorentzian length space. Recall that, on such a space, the time separation function is finite. Hence, if $\mathop{\mathrm{diam}}(X)$ were attained by some pair of points, it would equal a finite value of $\tau$ and, since global hyperbolicity implies strong causality, the argument of the previous part would again yield a contradiction. Thus $\mathop{\mathrm{diam}}(X)$ can never be attained. ◻ Therefore, all triangles in a globally hyperbolic Lorentzian length space with curvature bounded below by $K < 0$ are realisable. Furthermore, all triangles satisfy size bounds in Lorentzian pre-length spaces which satisfy the assumptions of either Theorem [Theorem 21](#thm: lor meyers){reference-type="ref" reference="thm: lor meyers"} or [@BNR23 Theorem 4.11]. Let us now move on to discussing the Splitting Theorem. Under the assumption of non-negative curvature, splitting theorems have also been proven in a variety of settings. In Riemannian geometry, Toponogov showed that if a complete non-negatively curved manifold contains a line, it splits as a product with one factor being $\mathbb{R}$ [@Top59-2; @Top64]. Cheeger and Gromoll generalised this to the case where the manifold has only non-negative Ricci curvature [@CG71]. Beem, Ehrlich, Markvorsen and Galloway proved an analogous result for Lorentzian geometry, where the hypothesis of completeness is replaced with global hyperbolicity, non-negative curvature need only hold on timelike planes, and the line must be timelike [@BEMG85-2; @BEMG85]. In the synthetic setting, Toponogov's result can be generalised to Alexandrov geometry. This was first achieved by Milka, under the stronger assumption that an affine function exists [@Mil67]; the assumption was later weakened by Burago--Burago--Ivanov to the presence of a line [@BBI01]. In the context of Lorentzian length spaces, Beran, Ohanyan, Rott and Solis proved a Splitting Theorem under the presence of global curvature bounds [@BORS23], which we can now restate with the weaker assumption of local curvature bounds. **Theorem 23** (Synthetic Lorentzian Splitting).
Let $(X,d,\ll,\leq,\tau)$ be a connected, globally hyperbolic, regular Lorentzian length space with a proper metric $d$, a time function $T$, and (local) timelike curvature bounded below by zero, which satisfies timelike geodesic prolongation and contains a complete timelike line $\gamma:\mathbb{R}\to X$. Then there is a $\tau$- and $\leq$-preserving homeomorphism $f:\mathbb{R}\times S \to X$, where $S$ is a proper, strictly intrinsic metric space of Alexandrov curvature $\geq 0$. Observe that the only additional assumption, cf. [@BORS23 Theorem 1.4], made in order to replace global curvature bounds with local ones in the above is the presence of a time function, which is necessary in order to apply Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"}. Since time functions exist on any second countable, globally hyperbolic, Lorentzian length space [Proposition 10](#pop:exist-time-function){reference-type="ref" reference="pop:exist-time-function"}, this condition is relatively mild. ## Future work The assumption in Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"} that the space be a globally hyperbolic Lorentzian length space is quite a strong one. In the metric setting, the assumptions are comparatively mild, e.g. [@BGP92] and [@LS13] manage to show the theorem for complete length spaces. The result can even be shown for non-complete geodesic spaces of curvature bounded below [@Pet16]. It is therefore only natural to ask whether or not the Toponogov Globalisation Theorem holds in the Lorentzian context under milder assumptions as well. Given that [@BGP92] globalises curvature bounds using a four-point condition, which was recently adapted to the Lorentzian setting [@BKR23 Definition 4.6], we are optimistic that the answer is positive and a more general result might be obtained. Such a generalisation would also extend the applicability of the Bonnet--Myers theorem, for which the assumptions of the Globalisation Theorem are sufficient but may not all be necessary. In particular, the additional assumptions under which the Bonnet--Myers theorem holds for global curvature bounds are weaker than the local version, aside from the bounds themselves. In the metric case, a powerful consequence of the Toponogov Globalisation Theorem is that the Hausdorff dimension of an Alexandrov space is the same at all neighborhoods in the space [@BGP92]. A similar notion of dimension has been proposed for Lorentzian length spaces by McCann and Sämann [@MS22] and it is reasonable to expect that Theorem [Theorem 18](#thm: LorentzianToponogov){reference-type="ref" reference="thm: LorentzianToponogov"} can be used to make an analogous statement. [^1]: <tobias.beran@univie.ac.at>, Department of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria. [^2]: [harveyj13\@cardiff.ac.uk](mailto:HarveyJ13@cardiff.ac.uk), School of Mathematics, Cardiff University, Senghenydd Road, Cardiff, CF24 4AG, UK. [^3]: <lewis.napper@surrey.ac.uk>, Department of Mathematics, University of Surrey, Stag Hill Campus, Guildford, GU2 7XH, UK. [^4]: <felix.rott@univie.ac.at>, Department of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria. [^5]: Several of these comparison conditions are on display in [@AKP19 Theorem 8.30] and are shown to be equivalent for complete metric length spaces. 
[^6]: Recall that a triple of causally related points has a comparison triangle in the model space $\mathbb{L}^2(K)$ if the side-lengths satisfy size bounds with respect to $K$, cf. [@KS18 Definition 4.14]. This does not require the points to be timelike related, nor that curves between the points exist. [^7]: By pseudo-metric we mean a metric which does not always distinguish points. Compare with the 'semi-metric' applied to the quotient spaces in [@BR22; @Rot22]. [^8]: Such an open cover must exist by [@Rot22 Corollary 3.6] and [@KS18 Theorem 3.26.(v)]. [^9]: By [\[eq: null distance degenerate in timelike triangles\]](#eq: null distance degenerate in timelike triangles){reference-type="eqref" reference="eq: null distance degenerate in timelike triangles"}, $d_T(p,q_0)\geq \frac12 L$ and as $\varepsilon < \frac12$, it follows that $d_T(p,\cdot)$ attains $\varepsilon L$ within the distance realiser $[p,q_0]$. [^10]: Again, such a $q_n$ exists as $d_T(q_{n-1}, r) = (1-\varepsilon ) L>\varepsilon L$. [^11]: Here we show that Proposition [Proposition 16](#prop:catsCradle){reference-type="ref" reference="prop:catsCradle"} holds for $\Delta''$ with the same $\varepsilon$ as $\Delta$. In fact, if $\Delta$ satisfies the assumptions of the proposition for some $\varepsilon$, as $d_T(p'',r'') = \delta d_T(p,r)$ for $\delta \in (1-\varepsilon , 1]$, then $\Delta''$ does so for any value in $[1-\frac{1-\varepsilon }{\delta},\frac12 )$. [^12]: It is not strictly necessary that $\delta$ be a *greatest* lower bound. This allows us to apply our propositions with an arbitrarily small constant $\varepsilon > 0$, but they are stated for any $0< \varepsilon < \frac12$. [^13]: Recall that any globally hyperbolic Lorentzian length space is both non-timelike locally isolating and strongly causal. Furthermore, although [@BNR23 Proposition 4.3] is formulated in terms of distance comparison, it is clear that the proof also holds for curvature bounds in terms of angle comparison. [^14]: Lorentzian distance functions are, in essence, time-separation functions which are induced by a Lorentzian metric, in much the same way that a Riemannian manifold induces a distance. [^15]: Here we can replace the finite diameter with the diameter, since these notions coincide on globally hyperbolic Lorentzian length spaces.
--- abstract: | Using techniques developed in [@batanindeleger], we extend the Turchin/Dwyer-Hess double delooping result to further iterations of the Baez-Dolan plus construction. For $0 \leq m \leq n$, we introduce a notion of $(m,n)$-bimodules which extends the notions of bimodules and infinitesimal bimodules over the terminal non-symmetric operad. We show that a double delooping always exists for these bimodules. For the triple iteration of the Baez-Dolan construction starting from the initial $1$-coloured operad, we provide a further reducedness condition to obtain a third delooping. address: - | Mathematical Institute of the Academy\ Žitná 25, 115 67 Prague 1, Czech Republic - | Faculty of Mathematics and Physics, Charles University\ Sokolovská 49/83, 186 75 Prague 8, Czech Republic author: - Florian De Leger - Maroš Grego bibliography: - references.bib title: Triple delooping for multiplicative hyperoperads --- # Introduction The goal of this paper is to extend the Turchin/Dwyer-Hess double delooping result to further iterations of the Baez-Dolan plus construction. The Turchin/Dwyer-Hess theorem [@turchin; @dwyerhess] concerns multiplicative (non-symmetric) operads. The notion of non-symmetric operad will be recalled in Example [Example 2](#noppolynomialmonad){reference-type="ref" reference="noppolynomialmonad"}. A non-symmetric operad $\mathcal{O}$ is called *multiplicative* when it is equipped with an operad map $Ass \to \mathcal{O}$, where $Ass$ is the terminal non-symmetric operad. Such a map endows the collection $(\mathcal{O}_n)_{n \geq 0}$ with the structure of a cosimplicial object [@turchin], which we will write $\mathcal{O}^\bullet$. The theorem states that if a multiplicative operad $\mathcal{O}$ is *reduced*, that is $\mathcal{O}_0$ and $\mathcal{O}_1$ are contractible, then there is a double delooping $$\label{theoremtdh} \Omega^2 \mathrm{Map}_{\mathrm{NOp}} (Ass, u^*(\mathcal{O})) \sim \mathrm{Tot}(\mathcal{O}^\bullet),$$ where $\mathrm{Map}$ is the homotopy mapping space, taken in the category $\mathrm{NOp}$ of non-symmetric operads, $u^*$ is the forgetful functor from multiplicative to non-symmetric operads and $\mathrm{Tot}$ is the homotopy totalization. This result is especially remarkable because of an earlier result of Sinha [@sinha1] which states that the space of *long knots modulo immersions* [@dwyerhess] is equivalent to the totalization of the Kontsevich operad. The double delooping [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} has been extended to general deloopings in higher dimensions in [@ducoulombier; @ducoulombierturchin]. Our goal is also to extend [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} but in another direction. Non-symmetric operads appear when we iterate the Baez-Dolan plus construction, as we will now explain. This construction was first introduced in [@baezdolan] in order to define weak $n$-categories. It is a construction which associates to a (symmetric coloured) operad $P$ a new operad $P^+$ for *operads over $P$*. More explicitly, if $P$ is an $S$-coloured operad, $P^+$ is the operad whose algebras are $S$-coloured operads equipped with an operad map to $P$. For example, if $I$ is the initial $1$-coloured operad, a $1$-coloured operad is equipped with an operad map to $I$ if and only if it is a monoid, so $I^+$ is the operad for monoids $Ass$ (as a symmetric operad this time). Iterating this construction, one gets the operad $I^{++}$ for non-symmetric operads.
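To unpack the first step of the iteration (this merely spells out the claim above in the $\mathrm{Set}$-based setting and is not new material): the initial $1$-coloured operad $I$ has $I(1) = \{\mathrm{id}\}$ and $I(n) = \varnothing$ for $n \neq 1$, so an operad map $\mathcal{P} \to I$ exists precisely when $\mathcal{P}(n) = \varnothing$ for all $n \neq 1$, and the remaining datum is the set $\mathcal{P}(1)$ with its unital and associative composition, i.e. a monoid. This is why the algebras of $I^+$ are monoids.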
This construction can of course be iterated infinitely many times. The question that naturally arises and that we will try to answer in this paper is whether there are delooping results analogous to [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} for the next iterations of the plus construction. We will work with polynomial monads, which are equivalent to symmetric coloured operads with freely acting symmetric groups [@kock]. We will recall the notion in Section [2](#sectionpreliminaries){reference-type="ref" reference="sectionpreliminaries"}, as well as the description of the plus construction for polynomial monads from [@KJBM]. We iterate this construction from the identity monad on the category of sets, which corresponds to the initial $1$-coloured operad. This gives us a sequence of polynomial monads which we call the *opetopic sequence*. Our main tool in order to get delooping results such as [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} is the extension of some homotopy theory results from small categories to polynomial monads. Therefore we will recall an important notion of [@batanindeleger], namely the notion of *homotopically cofinal* map of polynomial monads. In Section [3](#sectionbimodules){reference-type="ref" reference="sectionbimodules"}, we introduce the notion of *$(m,n)$-bimodule*, for $0 \leq m \leq n$. Let us motivate this notion now. The proofs of the double delooping [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} presented in [@turchin; @dwyerhess], as well as in [@batanindeleger], all proceed in two steps. Indeed, we have the deloopings $$\label{firsttdhdelooping} \Omega \mathrm{Map}_{\mathrm{NOp}}(Ass, u^*(\mathcal{O})) \sim \mathrm{Map}_\mathrm{Bimod}(Ass,v^*(\mathcal{O}))$$ if $\mathcal{O}_1$ is contractible, and $$\label{secondtdhdelooping} \Omega \mathrm{Map}_{\mathrm{Bimod}}(Ass, v^*(\mathcal{O})) \sim \mathrm{Map}_\mathrm{IBimod}(Ass,w^*(\mathcal{O}))$$ if $\mathcal{O}_0$ is contractible, where $\mathrm{Bimod}$ and $\mathrm{IBimod}$ are the categories of bimodules over $Ass$ and of infinitesimal bimodules over $Ass$, respectively, $Ass$ is seen as both a bimodule and an infinitesimal bimodule over itself and $v^*$ and $w^*$ are the appropriate forgetful functors. Since infinitesimal bimodules over $Ass$ are known to be equivalent to cosimplicial objects [@turchincosimplicial], we do indeed get the double delooping [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"}. Our notion of $(m,n)$-bimodule is an extension of the notions of bimodule and infinitesimal bimodule over $Ass$ to further iterations of the plus construction. In Section [4](#sectioncofinality){reference-type="ref" reference="sectioncofinality"}, we construct a map of polynomial monads whose algebras involve these $(m,n)$-bimodules. We prove in Theorem [Theorem 28](#thmcofinalcospan){reference-type="ref" reference="thmcofinalcospan"} that this map is homotopically cofinal, which extends a result of [@batanindeleger]. In Section [5](#sectiondeloopingtheorems){reference-type="ref" reference="sectiondeloopingtheorems"}, we investigate a general delooping for $(m,n)$-bimodules. More precisely, we try to exhibit reducedness conditions for a delooping $$\Omega \mathrm{Map}_{\mathrm{Bimod}_{m,n}} (\zeta,u^* (\mathcal{O})) \sim \mathrm{Map}_{\mathrm{Bimod}_{m-1,n}} (\zeta, v^* (\mathcal{O})),$$ where $\zeta$ is our notation for the terminal object in the category $\mathrm{Bimod}_{m,n}$ of $(m,n)$-bimodules.
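Purely as a reading aid, the two deloopings [\[firsttdhdelooping\]](#firsttdhdelooping){reference-type="ref" reference="firsttdhdelooping"} and [\[secondtdhdelooping\]](#secondtdhdelooping){reference-type="ref" reference="secondtdhdelooping"} assemble with the cosimplicial identification into [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} as follows: if $\mathcal{O}_0$ and $\mathcal{O}_1$ are both contractible, then $$\Omega^2 \mathrm{Map}_{\mathrm{NOp}} (Ass, u^*(\mathcal{O})) \sim \Omega \mathrm{Map}_{\mathrm{Bimod}}(Ass, v^*(\mathcal{O})) \sim \mathrm{Map}_\mathrm{IBimod}(Ass,w^*(\mathcal{O})) \sim \mathrm{Tot}(\mathcal{O}^\bullet).$$ The general deloopings studied below are meant to be combined in the same manner.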
We prove in Theorem [Theorem 39](#theoremtdhgeneral){reference-type="ref" reference="theoremtdhgeneral"} that in the cases $m=n$ and $m=n-1$, the reducedness conditions are analogous to the ones for the Turchin/Dwyer-Hess theorem. Theorem [Theorem 39](#theoremtdhgeneral){reference-type="ref" reference="theoremtdhgeneral"} gives us in particular the deloopings [\[firsttdhdelooping\]](#firsttdhdelooping){reference-type="ref" reference="firsttdhdelooping"} and [\[secondtdhdelooping\]](#secondtdhdelooping){reference-type="ref" reference="secondtdhdelooping"}. We further investigate the third iteration of the plus construction, starting from the identity monad. We start by recalling the definition of the category $\Omega_p$, the version of the dendroidal category for planar trees [@moerdijktoen]. We prove that $\Omega_p$ plays the same role as the simplex category $\Delta$ does in the Turchin/Dwyer-Hess theorem. We define a notion of *functor equipped with retractions* for a covariant presheaf over $\Omega_p$, which we use to exhibit a reducedness condition for a third delooping. The triple delooping we get in Corollary [Corollary 55](#corollarytripledelooping){reference-type="ref" reference="corollarytripledelooping"} is the analogue of the double delooping [\[theoremtdh\]](#theoremtdh){reference-type="ref" reference="theoremtdh"} for the next iteration of the plus construction. Finally, we apply our triple delooping to a non-trivial example, namely the desymmetrisation of the Kontsevich operad. In future work, we would like to investigate the geometric meaning of our triple delooping and try to find out if there is an analogue to Sinha's result in this case. We would also like to explore the connections between our direction of delooping results and the one from [@ducoulombierturchin]. ## Acknowledgement {#acknowledgement .unnumbered} Both authors are deeply grateful to Michael Batanin, who suggested this project, for his guidance and the many illuminating discussions during our weekly meetings. # Preliminaries {#sectionpreliminaries} ## Polynomial monads Recall [@gambinokock] that a polynomial monad $T$ is a cartesian monad whose underlying functor is given by the composite $t_! p_* s^*$ for some diagram in $\mathrm{Set}$, the category of sets, of the form $$\label{diagrampolynomial} \xymatrix{ I & E \ar[l]_s \ar[r]^p & B \ar[r]^t & I }$$ where $p^{-1}(b)$ is finite for all $b \in B$. The elements of the sets $I$, $B$ and $E$ will be called *colours*, *operations* and *marked operations* respectively. The maps $s$, $p$ and $t$ will be called *source map*, *middle map* and *target map* respectively. The diagram [\[diagrampolynomial\]](#diagrampolynomial){reference-type="ref" reference="diagrampolynomial"} will be called the *polynomial* for the monad. An algebra of such a polynomial monad in a symmetric monoidal category $(\mathcal{E},\otimes,e)$ is given by a collection $(A_i)_{i \in I}$ together with, for all $b \in B$, structure maps $$m_b: \bigotimes_{e \in p^{-1}(b)} A_{s(e)} \to A_{t(b)}$$ satisfying associativity and unitality axioms. **Example 1**. The free monoid monad $\mathbf{Mon}$ is a polynomial monad [@batanindeleger Example 2.6] given by $$\xymatrix{ 1 & Ltr^* \ar[l] \ar[r] & Ltr \ar[r] & 1, }$$ where $Ltr$ is the set of (isomorphism classes of) linear trees, $Ltr^*$ is the set of linear trees with one vertex marked, and the middle map forgets the marking. Multiplication is given by insertion of a linear tree inside a vertex. **Example 2**.
Recall that a non-symmetric operad $A$ in a symmetric monoidal category $(\mathcal{E},\otimes,e)$ is given by a collection of objects $A_n \in \mathcal{E}$ for $n \geq 0$ together with maps $$A_k \otimes A_{n_1} \otimes \ldots \otimes A_{n_k} \to A_{n_1+\ldots+n_k}$$ and a map $e \to A_1$ satisfying associativity and unitality axioms. The polynomial $\mathbf{NOp}$ for non-symmetric operads [@batanindeleger Example 2.7] is given by $$\xymatrix{ \mathbb{N} & Ptr^* \ar[l] \ar[r] & Ptr \ar[r] & \mathbb{N}, }$$ where $Ptr$ is the set of (isomorphism classes of) planar trees, $Ptr^*$ is the set of planar trees with one vertex marked, the source map returns the number of edges directly above the marked vertex, the middle map forgets the marking, the target map returns the number of leaves. Multiplication is given by insertion of a planar tree inside a vertex. ## Baez-Dolan plus construction {#subsectionbdplusconstruction} The Baez-Dolan plus construction was first introduced in [@baezdolan]. Let us recall this construction for polynomial monads from [@KJBM]. Assume $T$ is a polynomial monad whose underlying polynomial is given by diagram [\[diagrampolynomial\]](#diagrampolynomial){reference-type="ref" reference="diagrampolynomial"}. We construct a new polynomial monad $T^+$: $$\xymatrix{ B & tr(T)^* \ar[l] \ar[r] & tr(T) \ar[r] & B, }$$ where $tr(T)$ is the set of $T$-trees, that is trees whose vertices are decorated with elements of $B$ and edges are decorated with elements of $I$, satisfying the coherence condition that if a vertex is decorated with $b \in B$, its outgoing edge is decorated with $t(b)$ and its incoming edges are decorated with $s(e)$, for $e \in p^{-1}(b)$: $$\begin{tikzpicture}[scale=1.2] \draw (0,-.9) -- (0,0) -- (-.7,.7); \draw (0,0) -- (1.5,1.5); \draw (.8,.8) -- (.1,1.5); \draw[fill=white] (0,0) circle (.26) node{$\tiny{b_1}$}; \draw[fill=white] (.8,.8) circle (.26) node{$\tiny{b_2}$}; \draw (-.05,-.55) node[right]{$\tiny{i_1}$}; \draw (-.25,.25) node[left]{$\tiny{i_2}$}; \draw (.25,.25) node[right]{$\tiny{i_3}$}; \draw (.55,1.05) node[left]{$\tiny{i_4}$}; \draw (1.05,1.05) node[right]{$\tiny{i_5}$}; \end{tikzpicture}$$ $tr(T)^*$ is the set of $T$-trees with one vertex marked. The source map returns the element which decorates the marked vertex, the middle map forgets the marking, the target map is given by composition of all the elements which decorate the vertices. Multiplication is given by insertion of a tree inside a vertex. ## Opetopic sequence For $n \geq 0$, let us define the polynomial monad $\mathbf{Id}^{+n}$ by induction. $\mathbf{Id}^{+0} = \mathbf{Id}$, that is the identity monad on $\mathrm{Set}$. For $n > 0$, $\mathbf{Id}^{+n} = \left(\mathbf{Id}^{+(n-1)}\right)^+$. One can check that $\mathbf{Id}^{+1} = \mathbf{Mon}$ and $\mathbf{Id}^{+2} = \mathbf{NOp}$, the polynomial monads defined in Example [Example 1](#monpolynomialmonad){reference-type="ref" reference="monpolynomialmonad"} and Example [Example 2](#noppolynomialmonad){reference-type="ref" reference="noppolynomialmonad"} respectively. We will write the underlying polynomial of $\mathbf{Id}^{+n}$ as follows: $$\label{polynomialopetopicsequence} \xymatrix{ I_n & E_n \ar[l]_{s_n} \ar[r]^{p_n} & B_n \ar[r]^{t_n} & I_n. }$$ Note that for $n > 0$, $I_n = B_{n-1}$. To avoid overly heavy notation, we will simply write $s$, $t$ and $p$ instead of $s_n$, $t_n$ and $p_n$ respectively when $n \geq 0$ is clear from the context.
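For orientation, let us record the standard formula for the endofunctor underlying a polynomial [\[diagrampolynomial\]](#diagrampolynomial){reference-type="ref" reference="diagrampolynomial"}, cf. [@gambinokock]: on an $I$-indexed collection $X$ it acts by $$(t_! p_* s^* X)_i \;=\; \coprod_{b \in t^{-1}(i)} \ \prod_{e \in p^{-1}(b)} X_{s(e)}.$$ For $\mathbf{Mon}$ this is $X \mapsto \coprod_{k \geq 0} X^k$, the underlying set of the free monoid on $X$, since a linear tree with $k$ vertices contributes the factor $X^k$; for $\mathbf{NOp}$, a planar tree with $m$ leaves whose vertices have arities $n_1,\ldots,n_k$ contributes a summand $X_{n_1} \times \ldots \times X_{n_k}$ in degree $m$. In the opetopic sequence this gives, at the first levels, $I_1 = B_0 = 1$, $B_1 = Ltr$, $I_2 = B_1 \cong \mathbb{N}$, $B_2 = Ptr$, $I_3 = B_2 = Ptr$ and $B_3 = tr(\mathbf{NOp})$, the set of $\mathbf{NOp}$-trees, that is trees whose vertices are decorated by planar trees and whose edges are decorated by natural numbers, subject to the coherence condition above.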
## Homotopically cofinal maps of polynomial monads Since a polynomial monad $T$ is in particular cartesian, it makes sense to talk about lax morphisms of categorical $T$-algebras. **Definition 3**. Let $f: S \to T$ be a map of polynomial monads and $A$ a categorical $T$-algebra. An *internal $S$-algebra* in $A$ is a lax morphism of $T$-algebras $1 \to f^*(A)$, where $1$ is the terminal $T$-algebra and $f^*$ is the restriction functor. We have the following theorem [@batanin]: **Theorem 4**. *Let $f: S \to T$ be a map of polynomial monads. The $2$-functor $$Int_S: \mathrm{Alg}_T(\mathrm{Cat}) \to \mathrm{Cat},$$ which sends a categorical $T$-algebra $A$ to the category of internal $S$-algebras in $A$, is representable. We will write $T^S$ the representing object and call it *classifier induced by $f$*.* **Remark 5**. Let $f: S \to T$ be a map of polynomial monads given by $\phi: I \to J$ on colours. It was proved in [@batanin] that the classifier induced by $f$ can be computed as a truncated simplicial $T$-algebra $$\label{formulaclassifier} \xymatrix{ F_T(\phi_!(1)) \ar[r] & F_T(\phi_!(S 1)) \ar@<-1ex>[l] \ar@<1ex>[l] & F_T(\phi_!(S^2 1)), \ar[l] \ar@<-1ex>[l] \ar@<1ex>[l] }$$ where $1$ is the terminal $I$-collection, $F_T$ is the free $T$-algebra functor and $\phi_!$ is the left adjoint of the restriction $\phi^*$, given by coproduct over fibres of $\phi$. **Remark 6**. The classifier $T^S$ being a categorical $T$-algebra, it has an underlying collection of categories. We will consider classifiers as categories, implying implicitly that we are talking about an arbitrary category of the underlying collection. Any statement made about a classifier, seen as a category, will imply that it is true for any category of the underlying collection. **Example 7**. For a polynomial monad $T$ given by diagram [\[diagrampolynomial\]](#diagrampolynomial){reference-type="ref" reference="diagrampolynomial"}, we describe the classifier induced by the identity on the polynomial monad $T^+$. The set of objects is the set of $T$-trees $tr(T)$. Morphisms are contractions of edges and multiplication of the elements of the set $B$ of operations of the polynomial monad $T$ accordingly, or insertion of unary vertices decorated with the unit. In particular, this gives us a category structure on $B_n$. We now recall an important notion of [@batanindeleger]: **Definition 8**. A map of polynomial monads $f: S \to T$ is *homotopically cofinal* if $N(T^S)$ is contractible. # Bimodules in the opetopic sequence {#sectionbimodules} ## $\downarrow$-construction **Definition 9**. For $n \geq 0$, we call *tree with white vertices* a pair $(b,W)$, where $b \in B_n$ and $W$ is a subset of the set of vertices of $b$. We call *white vertices* the elements of $W$. The other vertices of $b$ will be called *black vertices*. **Definition 10**. For $n > 0$, we associate to a tree with white vertices $(b \in B_n,W)$, a tree with white vertices $(b^\downarrow \in B_{n-1},W^\downarrow)$ as follows. Let $b_0$ be the maximal subtree of $b$ containing the root edge and only black vertices. We take $b^\downarrow = t(b_0) \in I_n = B_{n-1}$. Note that the vertices of $b^\downarrow$ correspond to the leaves of $b_0$ which are also edges of $b$. We define $W^\downarrow$ as the set of vertices of $b^\downarrow$ which correspond to an internal edge in $b$. **Example 11**. 
Let $(b,W)$ be the planar tree in the following picture: $$\begin{tikzpicture}[scale=.8] \draw (0,-.5) -- (0,0) -- (-2.1,2.1); \draw (-1,1) -- (-1.7,1.7); \draw (-1.7,1.7) -- (-1.3,2.1); \draw (-.3,1.7) -- (-.7,2.1); \draw (-1,1) -- (-.3,1.7); \draw (-.3,1.7) -- (.1,2.1); \draw (0,0) -- (1,1); \draw[fill] (0,0) circle (2pt); \draw[fill] (-1,1) circle (2pt); \draw[fill=white] (-1.7,1.7) circle (2pt); \draw[fill] (-.3,1.7) circle (2pt); \draw[fill=white] (1,1) circle (2pt); \end{tikzpicture}$$ Then $b_0$ will be the planar tree on the left of the following picture and $(b^\downarrow,W^\downarrow)$ will be the linear tree on the right: $$\begin{tikzpicture}[scale=.8] \draw (0,-.5) -- (0,0) -- (-1.7,1.7); \draw (-1,1) -- (-1.7,1.7); \draw (-.3,1.7) -- (-.7,2.1); \draw (-1,1) -- (-.3,1.7); \draw (-.3,1.7) -- (.1,2.1); \draw (0,0) -- (1,1); \draw[fill] (0,0) circle (2pt); \draw[fill] (-1,1) circle (2pt); \draw[fill] (-.3,1.7) circle (2pt); \begin{scope}[shift={(6,1)}] \draw (-1.5,0) -- (1.5,0); \draw[fill=white] (-1,0) circle (2pt); \draw[fill] (-.333,0) circle (2pt); \draw[fill] (.333,0) circle (2pt); \draw[fill=white] (1,0) circle (2pt); \end{scope} \end{tikzpicture}$$ Indeed, $b^\downarrow$ has four vertices which correspond to leaves of $b_0$. The first and last vertices of $b^\downarrow$ are white because they correspond to leaves in $b_0$ which are internal edges in $b$. The following lemma will be useful later: **Lemma 12**. *Let $(b,W)$ be a tree with white vertices and $(b^\downarrow,W^\downarrow)$ be the tree obtained by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"}. Let $\tilde{b}^\downarrow$ be a tree obtained from $b^\downarrow$ by adding unary vertices and $\tilde{W}^\downarrow$ be the set $W^\downarrow$ plus the unary vertices which have been added. Then there is a tree with white vertices $(\tilde{b},\tilde{W})$ such that $(\tilde{b}^\downarrow,\tilde{W}^\downarrow)$ is the tree obtained from it by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"}.* *Proof.* Recall from Example [Example 7](#examplecategorystructureonbn){reference-type="ref" reference="examplecategorystructureonbn"} that $B_n$ has a category structure where morphisms are contractions of edges and insertion of unary vertices. Let us assume that the maximal subtree $b_0$ of $b$ containing the root and only black vertices has been contracted to a corolla. This can be done without loss of generality because the tree obtained by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"} remains the same. Assume $\tilde{b}^\downarrow$ is obtained from $b^\downarrow$ by adding a unary vertex decorated with $\eta(c) \in B_{n-2}$, where $c \in I_{n-2}$ is the decoration of the edge where the unary vertex is added. Then we take $\tilde{b}$ as the tree obtained from $b_0$ by adding a trunk above the root vertex. The vertex of the trunk is decorated with $\nu(c) \in B_{n-1}$, where $\nu(c)$ is the free living edge decorated with $c$. The edge of the trunk is decorated with $t \nu(c) = \eta(c)$. The tree $b^\downarrow$ which decorates the root vertex of $b$ is replaced by $\tilde{b}^\downarrow$. For example, in the following picture, the tree on the left is $\tilde{b}$, and the inserted trunk is dotted. The tree on the right is $\tilde{b}^\downarrow$, and the inserted unary vertex is dotted. 
It has four vertices, including the added unary vertex, since the root vertex of $\tilde{b}$ has four edges above it, including the added trunk: $$\begin{tikzpicture}[scale=.8] \draw (0,-.4) -- (0,0) -- (-.9,1); \draw (-.3,1) -- (0,0) -- (.3,1); \draw[densely dotted] (0,0) -- (.9,1); \draw (-1.35,1.8) -- (-.9,1) -- (-.9,1.8); \draw (-.9,1) -- (-.45,1.8); \draw (-.05,1.8) -- (.3,1) -- (.65,1.8); \draw[fill] (0,0) circle (2pt); \draw[fill=white] (-.9,1) circle (2pt); \draw[fill=white] (.3,1) circle (2pt); \draw[fill=white,densely dotted] (.9,1) circle (2pt); \draw[fill] (-.05,1.8) circle (2pt); \begin{scope}[shift={(6,0)}] \draw (0,-.4) -- (0,0) -- (-.8,.8); \draw (0,0) -- (1.6,1.6); \draw (.8,.8) -- (-.4,2); \draw (0,2) -- (0,1.6) -- (.4,2); \draw[fill=white] (0,0) circle (2pt); \draw[fill] (.8,.8) circle (2pt); \draw[fill=white,densely dotted] (.4,1.2) circle (2pt); \draw[fill=white] (0,1.6) circle (2pt); \end{scope} \end{tikzpicture}$$ This can of course be done for multiple unary vertices added. ◻ ## $m$-dimensional sets of vertices For $0 \leq i \leq n$ and a tree with white vertices $(b \in B_n,W)$, we will write $(b^{\downarrow i},W^{\downarrow i})$ for the tree with white vertices obtained by iterating $i$ times the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"}. **Definition 13**. Let $0 \leq m \leq n$ and $(b,W)$ a tree with white vertices. We say that $W$ is *$m$-dimensional* if - for $0 \leq i < n-m$, $W^{\downarrow i}$ does not contain any pairs of vertices one above the other, - $W^{\downarrow (n-m)}$ is the set of vertices of $b^{\downarrow (n-m)}$. **Remark 14**. It is immediate from Definition [Definition 13](#definitionmdimensionalsetofvertices){reference-type="ref" reference="definitionmdimensionalsetofvertices"} that if $(b,W)$ is $m$-dimensional, then $(b^{\downarrow i},W^{\downarrow i})$ is $m$-dimensional for all $0 \leq i \leq n-m$. **Remark 15**. Let $(b,W)$ be a tree with white vertices such that $W$ is $m$-dimensional. Then all the sets $W^{\downarrow i}$ for $0 \leq i \leq n-m$ are in bijection with each other. Indeed, the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"} induces a surjection from $W^{\downarrow i}$ to $W^{\downarrow (i+1)}$ for $0 \leq i < n-m$. The first condition of Definition [Definition 13](#definitionmdimensionalsetofvertices){reference-type="ref" reference="definitionmdimensionalsetofvertices"} implies that these surjections are also injective. **Remark 16**. Let $n > 0$ and $(b \in B_n,W)$ be a tree with white vertices. $W$ is $n$-dimensional if it is the set of vertices of $b$. It is $0$-dimensional if it is a singleton. It is $n-1$-dimensional if each path from the root to a leaf in $b$ contains exactly one vertex of $W$. Indeed, the first condition of Definition [Definition 13](#definitionmdimensionalsetofvertices){reference-type="ref" reference="definitionmdimensionalsetofvertices"} ensures that any path contains at most one vertex, while the second one ensures that any path contains at least one vertex. ## $(m,n)$-bimodules **Definition 17**. For $0 \leq m \leq n$, let $B_{m,n}$ be the set of trees with white vertices $(b,W)$ where $b \in B_n$, $W$ is $m$-dimensional and the tree does not contain any pairs of adjacent black vertices or any unary black vertices. **Definition 18**. 
Let $\mathbf{Bimod}_{m,n}$ be the polynomial monad represented by $$\xymatrix{ I_n & E_{m,n} \ar[l]_s \ar[r]^p & B_{m,n} \ar[r]^t & I_n, }$$ where $E_{m,n}$ is the set of elements of $B_{m,n}$ with a marked white vertex. The source, target and middle map are defined as in Subsection [2.2](#subsectionbdplusconstruction){reference-type="ref" reference="subsectionbdplusconstruction"}. Multiplication is given by inserting a tree inside a white vertex, then contracting all edges between two black vertices and removing all unary black vertices. **Definition 19**. For $0 \leq m \leq n$, an *$(m,n)$-bimodule* in a symmetric monoidal category $(\mathcal{E},\otimes,e)$ is an algebra of $\mathbf{Bimod}_{m,n}$ in $\mathcal{E}$. For an $I_n$-collection $A$ in $\mathcal{E}$, $b \in B_n$ and $V \subset p^{-1}(b)$, we will write $$A_{s(V)} = \bigotimes_{v \in V} A_{s(v)}.$$ **Remark 20**. Explicitly, an *$(m,n)$-bimodule* in a symmetric monoidal category $(\mathcal{E},\otimes,e)$ is given by - a collection of objects $(A_i)_{i \in I_n}$ in $\mathcal{E}$, - for all $(b,V) \in B_{m,n}$, a map $$\mu_{b,V}: A_{s(V)} \to A_{t(b)},$$ satisfying associativity and unitality axioms. **Remark 21**. According to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"}, $B_{n,n} = B_n$, so the polynomial monad $\mathbf{Bimod}_{n,n}$ is just $\mathbf{Id}^{+n}$. Also according to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"}, the elements of $B_{0,n}$ are trees with exactly one white vertex. We deduce that $E_{0,n} = B_{0,n}$ and the middle map for the polynomial monad $\mathbf{Bimod}_{0,n}$ is the identity, so this polynomial monad is actually a small category. **Remark 22**. According to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"}, the sets $B_{2,2}$, $B_{1,2}$ and $B_{0,2}$ are the sets of planar trees, planar trees with white vertices, called *beads* in [@turchin], lying on the same horizontal line and planar trees with one distinguished bead, respectively. One recognises the polynomial monad $\mathbf{Bimod}_{m,2}$, when $m=2$, $m=1$ and $m=0$, as the monad for non-symmetric operads, bimodules over the terminal non-symmetric operad $Ass$ and infinitesimal bimodules over $Ass$ described in [@batanindeleger], respectively. ## Universal classifier for $(m,n)$-bimodules {#subsectionuniversalclassifierbimodmn} Let us now describe the classifier induced by the identity on the polynomial monad $\mathbf{Bimod}_{m,n}$. The set of objects is $B_{m,n}$. The morphisms can be given by nested trees, that is trees $(b,W) \in B_{m,n}$, where each vertex $v \in W$ has itself a tree inside it, called nest. The following picture is an example of such nested tree: $$\begin{tikzpicture}[scale=.9] \draw (.4,-.1) -- (.4,.2) -- (-1.2,1); \draw (.4,.2) -- (2,1); \draw (-1.2,1) -- (-1.2,1.5) -- (-1.9,2.2); \draw (-1.2,1.5) -- (-.5,2.2); \draw (2,1) -- (2,1.5) -- (1.7,2.1) -- (1.2,3.1); \draw (2,1.5) -- (2.8,3.1); \draw (2.3,2.1) -- (2,3.1); \draw (2,1.9) circle (.9); \draw (-1.2,1.55) circle (.55); \draw (-1.75,1.55); \draw[fill=white] (-1.2,1.5) circle (1.6pt); \draw[fill] (.4,.2) circle (1.6pt); \draw[fill] (2,1.5) circle (1.6pt); \draw[fill=white] (2.3,2.1) circle (1.6pt); \draw[fill=white] (1.7,2.1) circle (1.6pt); \draw (2.9,1.9); \end{tikzpicture}$$ The source of the nested tree is obtained by inserting the nests into each corresponding vertex and then contracting the edges connecting black vertices if necessary. 
The target is obtained by forgetting the nests. For example, the nested tree of the previous picture represents the following morphism: $$\begin{tikzpicture}[scale=.7] \draw (0,-.1) -- (0,.2) -- (-1.3,1) -- (-1.9,1.6); \draw (-1.3,1) -- (-.7,1.6); \draw (0,1.6) -- (0,.2) -- (1.3,1) -- (1.9,1.6); \draw (1.3,1) -- (.7,1.6); \draw[fill] (0,.2) circle (2pt); \draw[fill=white] (-1.3,1) circle (2pt); \draw[fill=white] (0,1) circle (2pt); \draw[fill=white] (1.3,1) circle (2pt); \draw (3.2,.8) node{$\longrightarrow$}; \begin{scope}[shift={(6,0)}] \draw (0,-.1) -- (0,.2) -- (-.9,1) -- (-1.5,1.6); \draw (-.9,1) -- (-.3,1.6); \draw (0,.2) -- (.9,1) -- (1.5,1.6); \draw (.9,1.6) -- (.9,1) -- (.3,1.6); \draw[fill] (0,.2) circle (2pt); \draw[fill=white] (-.9,1) circle (2pt); \draw[fill=white] (.9,1) circle (2pt); \end{scope} \end{tikzpicture}$$ ## Bimodules over $(m,n)$-bimodules **Lemma 23**. *Let $(b,W) \in B_{m,n}$ with an $m-1$-dimensional $V \subset W$. Then the complement of $V$ in $W$ is the canonical union of two sets $V_-$ and $V_+$.* *Proof.* If $m=n$, then according to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"} $W$ is the set of vertices of $b$ and each path from a leaf to the root in $b$ contains exactly one vertex of $V$. Then we can take $V_-$ and $V_+$ as the set of vertices in $W$ that are below and above a vertex of $V$, respectively. If $m < n$, let $(b^\downarrow_V,V^\downarrow)$ and $(b^\downarrow_W,W^\downarrow)$ be the pairs obtained by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"} to $(b,V)$ and $(b,W)$, respectively. According to Remark [Remark 14](#remarkallmdimensional){reference-type="ref" reference="remarkallmdimensional"}, $V^\downarrow$ and $W^\downarrow$ are $m-1$-dimensional and $m$-dimensional, respectively. It is easy to see from the construction that, since $V \subset W$, $b^\downarrow_W$ can be obtained from $b^\downarrow_V$ by contracting some of its edges, which connect black vertices. So $V^\downarrow$ can be seen as a set of vertices of $b^\downarrow_W$ and it is also $m-1$-dimensional as a set of vertices of this tree. By induction, the complement of $V^\downarrow$ in $W^\downarrow$ is the canonical union of two sets. According to Remark [Remark 15](#remarkbijections){reference-type="ref" reference="remarkbijections"}, $W$ and $W^\downarrow$, as well as $V$ and $V^\downarrow$, are in bijection. This concludes the proof. ◻ **Definition 24**. For $0 < m \leq n$ and $A$ and $B$ two $(m,n)$-bimodules in $\mathcal{E}$, an *$A-B$-bimodule* $C$ is given by - a collection of objects $(C_i)_{i \in I_n}$, - for all $(b,W) \in B_{m,n}$ with an $m-1$-dimensional $V \subset W$, a map $$\label{mappointedbimodule} A_{s(V_-)} \otimes C_{s(V)} \otimes B_{s(V_+)} \to C_{t(b)},$$ where $V_-$ and $V_+$ are given by Lemma [Lemma 23](#lemmamminusonedimensional){reference-type="ref" reference="lemmamminusonedimensional"}, satisfying associativity and unitality axioms. **Lemma 25**. *For $0 < m \leq n$, let $\zeta$ be the terminal $(m,n)$-bimodule. If $n-2 \leq m$, the category of $(m-1,n)$-bimodules is isomorphic to the category of $\zeta-\zeta$-bimodules.* *Proof.* Let $(b,V) \in B_{m-1,n}$. We will construct $(\tilde{b},\tilde{W}) \in B_{m,n}$ such that $\tilde{W}$ contains an $m-1$-dimensional subset which is in bijection with $V$. - If $m=n$, we take $\tilde{b}=b$ and $\tilde{W}$ as the set of vertices of $b$. 
- If $m=n-1$, $\tilde{b}$ is the tree obtained from $b$ by adding a unary vertex on each leaf which is not above a vertex of $V$. $\tilde{W}$ is the union of $V$ and all the unary vertices which have been added. - If $m=n-2$, let $(b^\downarrow,V^\downarrow)$ be the pair obtained from $(b,V)$ by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"}. Let $(\tilde{b}^\downarrow,\tilde{W}^\downarrow)$ be the tree obtained from $(b^\downarrow,V^\downarrow)$ by adding unary vertices as in the case $m=n-1$. We take $(\tilde{b},\tilde{W})$ given by Lemma [Lemma 12](#lemmainsertingtrunk){reference-type="ref" reference="lemmainsertingtrunk"}. It is easy to see that the $\zeta-\zeta$-bimodule structure map induced by $(\tilde{b},\tilde{W})$ corresponds to the $(m-1,n)$-bimodule structure induced by $(b,V)$. ◻ ## Pointed bimodules over $(m,n)$-bimodules **Definition 26**. For $0 < m \leq n$ and $A$ and $B$ two $(m,n)$-bimodules in $\mathcal{E}$, a *pointed $A-B$-bimodule* $C$ is given by - a collection of objects $(C_i)_{i \in I_n}$, - for all $(b,W) \in B_{m,n}$ and partitions of $W$ into $V_-$, $V_+$ and $V$ such that there is an $m-1$-dimensional subset $U \subset W$ satisfying $U_- \subset V_-$ and $U_+ \subset V_+$, a map $$A_{s(V_-)} \otimes C_{s(V)} \otimes B_{s(V_+)} \to C_{t(b)},$$ satisfying associativity and unitality axioms. **Lemma 27**. *For $0 < m \leq n$ and $A$ and $B$ two $(m,n)$-bimodules in $\mathcal{E}$, there is an $A-B$-bimodule $\alpha$ such that the category of pointed $A-B$-bimodules is isomorphic to the comma category of $A-B$-bimodules under $\alpha$.* *Proof.* It is obvious that there is a forgetful functor from pointed $A-B$-bimodules to $A-B$-bimodules. Indeed, if one takes $V=U$ in the definition of pointed $A-B$-bimodule, one gets the structure maps for $A-B$-bimodules. Let $\alpha$ be the image of the initial pointed $A-B$-bimodule through this forgetful functor. If $C$ is a pointed $A-B$-bimodule, then it is an $A-B$-bimodule equipped with a map from $\alpha$. Now assume that $C$ is an $A-B$-bimodule equipped with a map from $\alpha$. Note that $\alpha$, being a pointed $A-B$-bimodule, is equipped with maps $A \to \alpha \leftarrow B$ of collections, therefore $C$ is also equipped with such maps. So $C$ has a pointed bimodule structure given by the composite $$A_{s(V_-)} \otimes C_{s(V)} \otimes B_{s(V_+)} \to A_{s(U_-)} \otimes C_{s(U)} \otimes B_{s(U_+)} \to C_{t(b)},$$ where the first map is given from the maps $A \to C \leftarrow B$ and the second is from the $A-B$-bimodule structure. ◻ # Cofinality {#sectioncofinality} ## Statement of the result In this section, we assume $0 < m \leq n$ are fixed. We will closely follow [@deleger Section 3]. Let $\mathbf{Bimod}_{m,n}^{\bullet+\bullet}$ be the polynomial monad for triples $(A,B,C)$ where $A$ and $B$ are $(m,n)$-bimodules and $C$ is a pointed $A-B$-bimodule. Let $\mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet}$ be the polynomial monad for cospans $A \to C \leftarrow B$ of $(m,n)$-bimodules. **Theorem 28**. 
*There is a homotopically cofinal map of polynomial monads $$\label{cofinalmappolymon} f: \mathbf{Bimod}_{m,n}^{\bullet+\bullet} \to \mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet}.$$* ## Description of the map of polynomial monads The polynomial monad $\mathbf{Bimod}_{m,n}^{\bullet+\bullet}$ is given by $$\label{polynomialpointed} \xymatrix{ \{A,B,C\} \times I_n & E_{m,n}^{\bullet+\bullet} \ar[l] \ar[r] & B_{m,n}^{\bullet+\bullet} \ar[r] & \{A,B,C\} \times I_n }$$ where elements of $B_{m,n}^{\bullet+\bullet}$ are pairs $(b,W) \in B_{m,n}$, equipped with a label in $\{A,B,C\}$ called *target label* and for each white vertex a label in $\{A,B,C\}$ called *source label*, subject to the following restrictions. If the target label is $A$ (resp. $B$), then all the source labels are also $A$ (resp. $B$). If the target label is $C$, there must be an $m-1$-dimensional subset $V \subset W$ such that all the vertices in $V_-$ have label $A$ and all the vertices in $V_+$ have label $B$. $E_{m,n}^{\bullet+\bullet}$ is the set of pairs $(b,W)$ of $B_{m,n}^{\bullet+\bullet}$ with one white vertex marked. Note that there is a projection from [\[polynomialpointed\]](#polynomialpointed){reference-type="ref" reference="polynomialpointed"} to [\[polynomialopetopicsequence\]](#polynomialopetopicsequence){reference-type="ref" reference="polynomialopetopicsequence"}. The source map is given by the source label of the marked vertex and an element in $I_n$ thanks to this projection. Similarly, the target map is given by the target label and an element in $I_n$ thanks to this projection. The middle map forgets the marking. Multiplication is given by insertion of a tree inside a vertex, if the source and target labels correspond. The description of the polynomial monad $\mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet}$ is completely similar. The difference is that we drop the condition that there must be an $m-1$-dimensional subset $V \subset W$ such that all the vertices in $V_-$ have label $A$ and all the vertices in $V_+$ have label $B$. The map $f$ in [\[cofinalmappolymon\]](#cofinalmappolymon){reference-type="ref" reference="cofinalmappolymon"} is given by inclusion of sets. ## Construction of a smooth functor For a functor $F: \mathcal{X} \to \mathcal{Y}$ between categories and $y \in \mathcal{Y}$, we will write $F_y$ for the fibre of $F$ over $y$. **Definition 29**. A functor $F : \mathcal{X} \to \mathcal{Y}$ is *smooth* if, for all $y \in \mathcal{Y}$, the canonical functor $$F_y \to y/F$$ induces a weak equivalence between nerves. Let us state the Cisinski lemma [@cisinski Proposition 5.3.4]: **Lemma 30**. *A functor $F : \mathcal{X} \to \mathcal{Y}$ is smooth if and only if for all maps $f_1 : y_0 \to y_1$ in $\mathcal{Y}$ and objects $x_1$ in $\mathcal{X}$ such that $F(x_1) = y_1$, the nerve of the *lifting category* of $f_1$ over $x_1$, whose objects are arrows $f : x \to x_1$ such that $F(f) = f_1$ and morphisms are commutative triangles $$\xymatrix{ x \ar[rr]^g \ar[rd]_f && x' \ar[ld]^{f'} \\ & x_1 }$$ with $g$ a morphism in $F_{y_0}$, is contractible.* We have a commutative square of polynomial monads $$\xymatrix{ \mathbf{Bimod}_{m,n}^{\bullet+\bullet} \ar[r]^-{uf} \ar[d]_-f & \mathbf{Bimod}_{m,n} \ar[d]^-{id} \\ \mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet} \ar[r]_-u & \mathbf{Bimod}_{m,n} }$$ where $u$ is given by projection. 
This square induces a morphism of algebras [@batanindeleger Proposition 4.7] $$\label{inducedfunctor} F: (\mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet})^{\mathbf{Bimod}_{m,n}^{\bullet+\bullet}} \to u^* \left( (\mathbf{Bimod}_{m,n})^{\mathbf{Bimod}_{m,n}} \right).$$ To simplify notation, we will write $\mathcal{X}$ and $\mathcal{Y}$ for the domain and codomain of $F$ respectively. ## Proof of cofinality **Lemma 31**. *For a tree $b \in B_m$, let us consider the category $\mathcal{C}(b)$ whose objects are decorations of the vertices of $b$ by labels in $\{A,B,C\}$, such that there is an $m-1$-dimensional subset for which the vertices below have label $A$ and above have label $B$. The morphisms turn vertices with label $A$ or $B$ to vertices with label $C$. This category has contractible nerve.* *Proof.* We proceed by induction on the number of vertices of $b$. If $b$ has no vertices, that is the free living edge, then $\mathcal{C}(b)$ is the terminal category. Now assume $b$ has at least one vertex. Let $\mathcal{A}$ be the full subcategory of $\mathcal{C}(b)$ containing the trees for which the root vertex has label $A$. Let $\mathcal{B}$ be the full subcategory of $\mathcal{C}(b)$ containing the trees for which all the vertices which are not the root have label $B$. The union of $\mathcal{A}$ and $\mathcal{B}$ is all of $\mathcal{C}(b)$, their intersection is the terminal category, and $\mathcal{B}$ consists of a cospan. The category $\mathcal{A}$ is isomorphic to $\prod_{e \in E} \mathcal{C}(b(e))$, where $E$ is the set of edges directly above the root and $b(e)$ is the maximal subtree of $b$ having $e$ as root. So the nerve of $\mathcal{A}$ is contractible by induction. ◻ Note that an object of $\mathcal{X}$ is an element of the set of operations of the polynomial monad $\mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet}$, which in particular gives us a pair $(b,W) \in B_{m,n}$. **Lemma 32**. *Let $f_1: y_0 \to y_1$ be a map in $\mathcal{Y}$ and $x_1 = (b_1,W_1) \in \mathcal{X}$ such that $F(x_1) = y_1$. If $W_1$ is a singleton, then the nerve of the lifting category $\mathcal{X}(x_1,f_1)$ of $f_1$ over $x_1$ is contractible.* *Proof.* Let us prove that the lifting category is trivial or isomorphic to a category as in Lemma [Lemma 31](#claimcontractibleliftingcategory){reference-type="ref" reference="claimcontractibleliftingcategory"}. First, we will describe the functor $F: \mathcal{X} \to \mathcal{Y}$ more explicitly. As mentioned above, the objects of $\mathcal{X}$ are operations of the polynomial monad $\mathbf{Bimod}_{m,n}^{\bullet \to \bullet \leftarrow \bullet}$. So, they are pairs $(b,W) \in B_{m,n}$, equipped with a target label and, for each white vertex, a source label in $\{A,B,C\}$ such that if the target label is $A$ (resp. $B$), then all the source labels are also $A$ (resp. $B$). The morphisms are given by nested trees, as in Subsection [3.4](#subsectionuniversalclassifierbimodmn){reference-type="ref" reference="subsectionuniversalclassifierbimodmn"}. It is important to note that there are morphisms which turn vertices with label $A$ or $B$ to vertices with label $C$. The set of objects of $\mathcal{Y}$ is just $B_{m,n}$. Its set of morphisms can again be described in terms of nested trees. The functor $F$ forgets all the labels. Now let us describe the lifting category. If $W_1$ is a singleton, $x_1$ only depends on the label of the unique element of $W_1$. Let us write $y_0=(b_0,W_0) \in B_{m,n}$.
The lifting category has as objects the pairs $(b,W) \in \mathcal{X}$ together with a morphism to $x_1$. We must have $(b_0,W_0) = (b,W)$ as elements of $B_{m,n}$, so the only degree of freedom is in the labels of the white vertices. Since there is a morphism to $x_1$, by definition of the category $\mathcal{X}$ there is an $m-1$-dimensional subset of $W$ for which the vertices below have label $A$ and above have label $B$. The morphisms in the lifting category can only be the morphisms which turn vertices with label $A$ or $B$ to vertices with label $C$. If the label of $W_1$ is $A$ or $B$, then the lifting category is the terminal category. So the only non-trivial case is when the label of $W_1$ is $C$. If $W_0$ is a singleton, then the lifting category consists of a cospan. Otherwise, by definition of $m$-dimensional subset of vertices, there is $b' \in B_m$ obtained by iterating the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"} from $(b_0,W_0)$ such that the set of vertices of $b'$ is in bijection with $W_0$. Then the lifting category is isomorphic to $\mathcal{C}(b')$. This concludes the proof. ◻ **Lemma 33**. *The functor $F$ of [\[inducedfunctor\]](#inducedfunctor){reference-type="ref" reference="inducedfunctor"} is smooth.* *Proof.* The argument is the same as in [@deleger Lemma 3.15]. Let $f_1: y_0 \to y_1$ be a map in $\mathcal{Y}$ and $x_1 = (b_1,W_1)$ be an object in $\mathcal{X}$ such that $F(x_1) = y_1$. We want to prove that the lifting category of $f_1$ over $x_1$ has contractible nerve. For a white vertex $v \in W_1$, let $x_1^v$ be given by the corolla in $B_n$ corresponding to $v$. Let $y_1^v$ be the same corolla but without the labels and $f_1^v: y_0^v \to y_1^v$ be the restriction of $f_1$ to this corolla. The lifting category of $f_1$ over $x_1$ is isomorphic to the product over the white vertices $v$ of the lifting categories of $f_1^v$ over $x_1^v$. It therefore has a contractible nerve thanks to Lemma [Lemma 32](#lemmacontractibleliftingcategoryinparticularcase){reference-type="ref" reference="lemmacontractibleliftingcategoryinparticularcase"}. We conclude the proof using Lemma [Lemma 30](#cisinskilemma){reference-type="ref" reference="cisinskilemma"}. ◻ *Proof of Theorem [Theorem 28](#thmcofinalcospan){reference-type="ref" reference="thmcofinalcospan"}.* The functor $F$ of [\[inducedfunctor\]](#inducedfunctor){reference-type="ref" reference="inducedfunctor"} is smooth according to Lemma [Lemma 33](#lemmasmooth){reference-type="ref" reference="lemmasmooth"}. Its fibres have contractible nerve, since they have a terminal object, which is the object where all the source labels are the same as the target label. Using Quillen's Theorem A, we deduce that $F$ induces a weak equivalence between nerves. Again, the nerve of $\mathcal{Y}$ is contractible since this category has a terminal object. So the nerve of $\mathcal{X}$ is contractible, which concludes the proof. ◻ # Delooping theorems {#sectiondeloopingtheorems} ## General delooping for $(m,n)$-bimodules For $0 \leq m \leq n$, we will write $\mathrm{Bimod}_{m,n}$ for the category of $(m,n)$-bimodules. We will write $\zeta$ for the terminal object in this category. **Definition 34**. For $0 \leq m \leq n$, we will say $X \in \mathrm{Bimod}_{m,n}$ is *multiplicative* if it is equipped with a map $\zeta \to X$.
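For example (this simply unwinds Remarks 21 and 22 and makes no new claim), for $(m,n)=(2,2)$ we have $\mathrm{Bimod}_{2,2} = \mathrm{NOp}$ and $\zeta = Ass$, so a multiplicative $(2,2)$-bimodule is precisely a multiplicative non-symmetric operad in the sense of the introduction; likewise, a multiplicative $(1,2)$-bimodule (resp. $(0,2)$-bimodule) is a bimodule (resp. infinitesimal bimodule) over $Ass$ equipped with a map from $Ass$.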
For $0 < m \leq n$ fixed, let $\kappa$ be the composite $I_{m-1} \to B_m \to B_n \to I_n$ where the first map picks the free living edge, the second map is obtained by applying the unit $n-m$ times, the last map is the target map $t_n$. The objective for the rest of this paper is to determine whether, for a multiplicative $(m,n)$-bimodule $X$, there is a fibration sequence $$\label{fibrationsequence} \Omega \mathrm{Map}_{\mathrm{Bimod}_{m,n}} (\zeta,u^* X) \to \mathrm{Map}_{\mathrm{Bimod}_{m-1,n}} (\zeta, v^* X) \to \prod_{i \in I_{m-1}} X_{\kappa(i)},$$ where $u^*$ and $v^*$ are the appropriate forgetful functors. **Lemma 35**. *The following categories are left proper:* - *the category of $(m,n)$-bimodules,* - *the category of triples $(A,B,C)$, where $A$ and $B$ are $(m,n)$-bimodules and $C$ is an $A-B$-bimodule.* *Proof.* We want to prove that the polynomial monad for $(m,n)$-bimodules is tame [@bataninberger Definition 6.19]. By definition, we have to prove that the classifier for semi-free coproducts is a coproduct of categories with terminal object. The set of objects for the classifier is the set $B_{m,n}$ of Definition [Definition 17](#operationsbimodmn){reference-type="ref" reference="operationsbimodmn"}, where the white vertices are also coloured with $X$ or $K$. The morphisms can be given by nested trees, as was done in Subsection [3.4](#subsectionuniversalclassifierbimodmn){reference-type="ref" reference="subsectionuniversalclassifierbimodmn"}. For each vertex of the nested tree, if the vertex is $X$-coloured, the tree inside it can be any tree with all vertices $X$-coloured. If the vertex is $K$-coloured, the tree inside it must be the corolla with the only vertex $K$-coloured. When $m=n$, the local terminal objects are trees in $B_{m,n}$ with white vertices coloured by $X$ and $K$ such that adjacent vertices have different colours, and such that vertices incident to the root or to the leaves are $X$-coloured, as in [@bataninberger Subsection 9.2]. If $m<n$, let us pick an object of the classifier, that is a tree $(b,W) \in B_{m,n}$ with white vertices coloured by $X$ and $K$. Let $(b^\downarrow,W^\downarrow)$ be the tree obtained by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"}, with white vertices coloured by $X$ and $K$ as the corresponding vertices of $b$. A morphism from $(b,W)$ corresponds to the contraction of edges and insertion of unary vertices in $b^\downarrow$. If $m = n-1$, the tree $(b,W) \in B_{m,n}$ is again terminal when, in $(b^\downarrow,W^\downarrow)$, the $X$-vertices and $K$-vertices alternate. If $m < n-1$, it is terminal when, in $(b^\downarrow,W^\downarrow)$, there are no vertices above an $X$-vertex and all vertices which are below an $X$-vertex are also below a $K$-vertex. Now let us prove that the polynomial monad for triples $(A,B,C)$, where $A$ and $B$ are $(m,n)$-bimodules and $C$ is an $A-B$-bimodule, is quasi-tame [@bwdquasitame Definition 4.11]. For $m=n$ and $m=n-1$, this can be done using the fact that the polynomial monad for $(m,n)$-bimodules is tame and applying [@bwdquasitame Theorem 4.22]. The strategy is completely similar to the one in the proof of [@bwdquasitame Proposition 4.26]. If $m < n-1$, the strategy is slightly different.
Unfortunately, the subcategory of trees which do not have vertices above an $X$-vertex and such that all vertices which are below an $X$-vertex are also below a $K$-vertex, is not always discrete, because there might be non-trivial morphisms which turn vertices with label $A$ or $B$ to vertices with label $C$. However, it is still a final subcategory. Indeed, the inclusion functor of this subcategory has a left adjoint given by the functor which automatically contracts the necessary edges. It also has a contractible nerve, because the only degree of freedom is in the labels of the vertices, so we get a category as in Lemma [Lemma 31](#claimcontractibleliftingcategory){reference-type="ref" reference="claimcontractibleliftingcategory"}. Note that any tame polynomial monad is also quasi-tame [@bwdquasitame Proposition 4.20]. We get the conclusion from [@bwdquasitame Theorem 4.17], which states that the category of algebras over a quasi-tame polynomial monad is left proper. ◻ **Lemma 36**. *Let $D^0$ be a cofibrant replacement of $\zeta$. There is a Quillen equivalence between the category of pointed $D^0-D^0$-bimodules and the category of pointed $\zeta-\zeta$-bimodules.* *Proof.* Let $\mathcal{C}$ be the category of pairs of $(m,n)$-bimodules and $\mathrm{PBimod}_{-,-}: \mathcal{C}^{op} \to \mathrm{CAT}$ the functor which sends a pair $(A,B)$ to the category of pointed $A-B$-bimodules. The Grothendieck construction over this functor is left proper according to Lemma [Lemma 35](#lemmaleftproper){reference-type="ref" reference="lemmaleftproper"}. The desired Quillen adjunction is induced by the unique map $\tau: (D^0,D^0) \to (\zeta,\zeta)$ in $\mathcal{C}$. To prove that it is indeed a Quillen equivalence, we need to prove that the unique map $0 \to \tau^*(0)$ is a weak equivalence. The initial $D^0-D^0$-bimodule is the left adjoint of the projection from the Grothendieck construction to the base $\mathcal{C}$ applied to the pair $(D^0,D^0)$. The projection is the restriction functor induced by a map of polynomial monads, and the initial $D^0-D^0$-bimodule can be computed as the nerve of the classifier induced by this map. The objects of this classifier are pairs $(b,W) \in B_{m,n}$ with white vertices labelled with $A$ or $B$. There must be an $m-1$-dimensional subset $V \subset W$ such that all the vertices in $V_-$ have label $A$ and all the vertices in $V_+$ have label $B$. The morphisms can be given by nested trees, as was done in Subsection [3.4](#subsectionuniversalclassifierbimodmn){reference-type="ref" reference="subsectionuniversalclassifierbimodmn"}. The trees inside a nested tree must have all vertices with the same label. If $m=n$ or $m=n-1$, this category is a coproduct of categories with terminal objects. A typical terminal object for $m=n$ has the root vertex labelled with $A$, and each edge above the root vertex is the root edge of a corolla whose vertex is labelled with $B$. For $m=n-1$, an object given by $(b,W) \in B_{m,n}$ is terminal when the tree $(b^\downarrow,W^\downarrow)$ has the same description as for the case $m=n$. For $m<n-1$, the classifier is not a coproduct of categories with terminal object. Let us consider the subcategory of objects given by $(b,W) \in B_{m,n}$ such that in $b^\downarrow$, there are no black vertices above a white vertex and a black vertex is below an $A$-vertex if and only if it is also below a $B$-vertex. It is a final subcategory because the inclusion functor has a left adjoint.
It is a coproduct of categories with initial object because the only degree of freedom for the morphisms is in the colours of the vertices. The local initial objects are when the number of black vertices is maximal. The initial $\zeta-\zeta$-bimodule is discrete, it is given by the set of connected components of the initial $D^0-D^0$-bimodule. This proves that the unique map $0 \to \tau^*(0)$ is a weak equivalence. Therefore the conditions of [@bwdgrothendieck Theorem 3.22] are satisfied, which means that $\tau$ indeed induces a Quillen equivalence. ◻ Let $\alpha$ be the $\zeta-\zeta$-bimodule such that the category of pointed $\zeta-\zeta$-bimodules is equivalent to the comma category of $\zeta-\zeta$-bimodules under $\alpha$, given by Lemma [Lemma 27](#lemmaalpha){reference-type="ref" reference="lemmaalpha"}. **Lemma 37**. *For any multiplicative $(m,n)$-bimodule $X$, there is a fibration sequence $$\Omega \mathrm{Map}_{\mathrm{Bimod}_{m,n}} (\zeta,u^* X) \to \mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}} (\zeta, v^* X) \to \mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}} (\alpha, v^* X),$$ where $\mathrm{Bimod}_{\zeta,\zeta}$ is the category of $\zeta-\zeta$-bimodules.* *Proof.* According to Lemma [Lemma 35](#lemmaleftproper){reference-type="ref" reference="lemmaleftproper"}, $\mathrm{Bimod}_{m,n}$ is left proper. We can therefore apply [@deleger Theorem 4.5] to get the delooping $$\label{equationdelooping} \Omega \mathrm{Map}_{\mathrm{Bimod}_{m,n}} \left(\zeta,u^* X\right) \to \mathrm{Map}_{S^0/\mathrm{Bimod}_{m,n}} \left(\zeta,h^* X \right),$$ where $S^0 := D^0 \amalg D^0$. The map $f$ of polynomial monads [\[cofinalmappolymon\]](#cofinalmappolymon){reference-type="ref" reference="cofinalmappolymon"} induces a Quillen adjunction $f_! \dashv f^*$ between categories of algebras. Note that there is a Quillen adjunction $g_! \dashv g^*$ between $\mathrm{PBimod}_{D^0,D^0}$ and $S^0/\mathrm{Bimod}_{m,n}$, such that $f_!(D^0,D^0,C) = (D^0,D^0,g_!(C))$ and $f^*(D^0,D^0,C) = (D^0,D^0,g^*(C))$. Thanks to Theorem [Theorem 28](#thmcofinalcospan){reference-type="ref" reference="thmcofinalcospan"}, $f$ is homotopically cofinal. According to [@deleger Remark 4.8], this means that $f_!$ is *a left cofinal Quillen functor*, that is, it preserves cofibrant replacements of the terminal objects [@deleger Definition 4.7]. Therefore, $g_!$ is also left cofinal. We deduce by adjunction that there is a weak equivalence $$\label{equationweakequivalence} \mathrm{Map}_{S^0/\mathrm{Bimod}_{m,n}} \left(\zeta,h^* X \right) \to \mathrm{Map}_{\mathrm{PBimod}_{D^0,D^0}} \left(\zeta,g^* h^* X \right).$$ According to Lemma [Lemma 36](#lemmaquillenequivalences){reference-type="ref" reference="lemmaquillenequivalences"}, there is a weak equivalence $$\label{quillenequivalence} \mathrm{Map}_{\mathrm{PBimod}_{\zeta,\zeta}} (\zeta, w^*X) \to \mathrm{Map}_{\mathrm{PBimod}_{D^0,D^0}} (\zeta, g^* h^*X).$$ Using Lemma [Lemma 35](#lemmaleftproper){reference-type="ref" reference="lemmaleftproper"}, $\mathrm{Bimod}_{\zeta,\zeta}$ is left proper. According to Lemma [Lemma 27](#lemmaalpha){reference-type="ref" reference="lemmaalpha"}, $\mathrm{PBimod}_{\zeta,\zeta}$ is isomorphic to $\alpha / \mathrm{Bimod}_{\zeta,\zeta}$. 
This means that we can apply [@deleger Theorem 4.13] and [@rezk Proposition 2.7] to get the fibration sequence $$\label{equationfibrationsequence} \mathrm{Map}_{\mathrm{PBimod}_{\zeta,\zeta}} \left(\zeta, w^* X \right) \to \mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}} \left(\zeta,v^* X \right) \to \mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}} (\alpha, v^* X).$$ We get the desired result by combining [\[equationdelooping\]](#equationdelooping){reference-type="ref" reference="equationdelooping"}, [\[equationweakequivalence\]](#equationweakequivalence){reference-type="ref" reference="equationweakequivalence"}, [\[quillenequivalence\]](#quillenequivalence){reference-type="ref" reference="quillenequivalence"} and [\[equationfibrationsequence\]](#equationfibrationsequence){reference-type="ref" reference="equationfibrationsequence"}. ◻ ## The cases $m=n$ and $m=n-1$ **Lemma 38**. *Let $0 < m \leq n$ and let $(A,B,C)$ be a triple where $A$ and $B$ are $(m,n)$-bimodules in $(\mathcal{E},\otimes,e)$ and $C$ is an $A-B$-bimodule. Let us assume that $m=n$ or $m=n-1$. Then $C$ is pointed if and only if it is equipped with a map $e \to C_{\kappa(i)}$ for all $i \in I_{m-1}$.* *Proof.* First let us assume that $C$ is pointed. Let $\lambda$ be the composite $I_{m-1} \to B_m \to B_n$ where the first map picks the free living edge and the second map is obtained by applying the unit $n-m$ times. For $i \in I_{m-1}$, $(\lambda(i),\varnothing)$, where $\varnothing$ is the empty set, is $m$-dimensional. The pointed $A-B$-bimodule map induced by $(\lambda(i),\varnothing)$ is a map $e \to C_{\kappa(i)}$. Now let us assume that $C$ is equipped with a map $e \to C_{\kappa(i)}$ for all $i \in I_{m-1}$. Let $(b,W) \in B_{m,n}$ and let $W$ be partitioned into $V_-$, $V_+$ and $V$ such that there is an $m-1$-dimensional subset $U \subset W$ satisfying $U_- \subset V_-$ and $U_+ \subset V_+$. We want to construct a map [\[mappointedbimodule\]](#mappointedbimodule){reference-type="ref" reference="mappointedbimodule"}. If $m=n$, according to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"}, each path in $b$ from the root to a leaf meets vertices in $V_-$, then at most one vertex in $V$, then vertices in $V_+$. Let $\tilde{b}$ be the tree obtained from $b$ by adding a unary vertex on each edge between a vertex in $V_-$ and a vertex in $V_+$. Let $\tilde{V}$ and $\tilde{W}$ be the sets $V$ and $W$, respectively, plus the set of unary vertices which have been added. Then $(\tilde{b},\tilde{W}) \in B_{m,n}$ and $\tilde{V} \subset \tilde{W}$ is $m-1$-dimensional. The desired map is given by the composite $$\label{compositepointedbimodule} A_{s(V_-)} \otimes C_{s(V)} \otimes B_{s(V_+)} \to A_{s(\tilde{V}_-)} \otimes C_{s(\tilde{V})} \otimes B_{s(\tilde{V}_+)} \to C_{t(b)},$$ where the first map uses the maps $e \to C_{\kappa(i)}$ for $i \in I_{m-1}$ and the second map comes from the $A-B$-bimodule structure. If $m=n-1$, let $(b^\downarrow,W^\downarrow)$ be the tree obtained by applying the construction of Definition [Definition 10](#constructiontree){reference-type="ref" reference="constructiontree"} to $(b,W)$. We can construct $(\tilde{b}^\downarrow,\tilde{W}^\downarrow)$ as in the case $m=n$. Let $(\tilde{b},\tilde{W})$ be given by Lemma [Lemma 12](#lemmainsertingtrunk){reference-type="ref" reference="lemmainsertingtrunk"}. The desired map is given by the composite [\[compositepointedbimodule\]](#compositepointedbimodule){reference-type="ref" reference="compositepointedbimodule"} again. ◻ **Theorem 39**.
*We do have the fibration sequence [\[fibrationsequence\]](#fibrationsequence){reference-type="ref" reference="fibrationsequence"} if $m=n$ or $m=n-1$.* *Proof.* We want to prove that the desired fibration sequence is equivalent to the fibration sequence of Lemma [Lemma 37](#lemmafibrationsequence){reference-type="ref" reference="lemmafibrationsequence"}. The first terms of both fibration sequences are the same. According to Lemma [Lemma 25](#lemmabimodulesbimodules){reference-type="ref" reference="lemmabimodulesbimodules"}, the category $\mathrm{Bimod}_{\zeta,\zeta}$ of $\zeta-\zeta$-bimodules is isomorphic to the category $\mathrm{Bimod}_{m-1,n}$, so the second terms are also equivalent. We deduce from Lemma [Lemma 38](#lemmamonoids){reference-type="ref" reference="lemmamonoids"} that $\alpha$ is the image of the terminal $I_{m-1}$-collection under the left adjoint of the forgetful functor from $\zeta-\zeta$-bimodules to $I_{m-1}$-collections. So, by an adjunction argument, the third terms are also equivalent. ◻ **Remark 40**. Recall from Example [Remark 22](#examplelowercases){reference-type="ref" reference="examplelowercases"} what $(m,n)$-bimodules are in the case $n=2$. We deduce that Theorem [Theorem 39](#theoremtdhgeneral){reference-type="ref" reference="theoremtdhgeneral"} in this case gives us the Turchin/Dwyer-Hess theorem. Interestingly, the fibration sequence [\[fibrationsequence\]](#fibrationsequence){reference-type="ref" reference="fibrationsequence"} does not seem to hold in general for $m<n-1$ without extra assumptions. The rest of this paper will be devoted to investigating the case $(m,n)=(1,3)$. ## Dendroidal category for planar trees $\Omega_p$ Let $\Omega_p$ be the planar version of the dendroidal category [@moerdijktoen Definition 2.2.1].
The objects are isomorphism classes of planar trees and the morphisms are generated by: - *inner face maps* of the form $\partial_e: T/e \to T$, where $e$ is an internal edge of $T$ and $T/e$ is the tree obtained from $T$ by contracting $e$: $$\begin{tikzpicture} \draw (0,-.2) -- (0,0) -- (-.4,.4); \draw (0,.4) -- (0,0) -- (.5,.5); \draw[fill] (0,0) circle (1pt); \draw[fill] (.5,.5) circle (1pt); \draw (0,-.2) node[below]{$T/e$}; \draw (1.5,.2) node{$\longrightarrow$}; \draw (1.5,.2) node[above]{$\tiny{\partial_e}$}; \begin{scope}[shift={(3,0)}] \draw (0,-.2) -- (0,0) -- (-.8,.8); \draw[very thick] (0,0) -- (-.5,.5); \draw (-.5,.5) -- (-.2,.8); \draw (0,0) -- (.5,.5); \draw[fill] (0,0) circle (1pt); \draw[fill] (-.5,.5) circle (1pt); \draw[fill] (.5,.5) circle (1pt); \draw (-.15,.35) node[below left]{$e$}; \draw (0,-.2) node[below]{$T$}; \end{scope} \end{tikzpicture}$$ - *outer face maps* of the form $\partial_v: T/v \to T$, where $v$ is a vertex of $T$, possibly the root, with exactly one inner edge attached to it and $T/v$ is the tree obtained from $T$ by removing the vertex $v$ and all the outer edges incident to it: $$\begin{tikzpicture} \draw (0,-.15) -- (0,0) -- (-.7,.7); \draw (-.35,.7) -- (-.35,.35) -- (0,.7); \draw (0,0) -- (.35,.35); \draw[fill] (0,0) circle (1pt); \draw[fill] (-.35,.35) circle (1pt); \draw (1.5,.2) node{$\longrightarrow$}; \draw (1.5,.2) node[above]{$\tiny{\partial_v}$}; \begin{scope}[shift={(3.2,0)}] \draw (0,-.15) -- (0,0) -- (-.35,.35) -- (.25,.95); \draw (0,.7) -- (-.25,.95); \draw (-.7,.7) -- (-.35,.35) -- (-.35,.7); \draw (0,0) -- (.35,.35); \draw[fill] (0,0) circle (1pt); \draw[fill] (-.35,.35) circle (1pt); \draw[fill] (0,.7) circle (1pt) node[right]{$v$}; \end{scope} \end{tikzpicture}$$ - *degeneracy maps* of the form $\sigma_v: T \to T \backslash v$, where $v$ is a unary vertex of $T$ and $T \backslash v$ is the tree obtained from $T$ by removing the vertex $v$ and merging the two edges incident to it into one: $$\begin{tikzpicture} \draw (0,-.2) -- (0,0) -- (-.4,.4); \draw (0,0) -- (.5,.5) -- (.2,.8); \draw (.5,.5) -- (.8,.8); \draw[fill] (0,0) circle (1pt); \draw[fill] (.25,.25) circle (1pt) node[right]{$v$}; \draw[fill] (.5,.5) circle (1pt); \draw (1.5,.2) node{$\longrightarrow$}; \draw (1.5,.2) node[above]{$\tiny{\sigma_v}$}; \begin{scope}[shift={(3,0)}] \draw (0,-.2) -- (0,0) -- (-.4,.4); \draw (0,0) -- (.5,.5) -- (.2,.8); \draw (.5,.5) -- (.8,.8); \draw[fill] (0,0) circle (1pt); \draw[fill] (.5,.5) circle (1pt); \end{scope} \end{tikzpicture}$$ These generating morphisms are subject to obvious relations [@moerdijktoen]. **Remark 41**. Note that $\Omega_p$ has a factorisation system [@berger]. Inert maps are generated by the outer face maps. They correspond to full inclusions of trees. Active maps consist of blowing up some vertices, that is inserting new trees inside these vertices. They are generated by the inner face and degeneracy maps. ## $(0,3)$-bimodules as covariant presheaves over $\Omega_p$ We have the following analogue of the well-known equivalence between infinitesimal bimodules over the terminal non-symmetric operad and cosimplicial objects [@turchincosimplicial Lemma 4.2]. **Lemma 42**. *The category of $(0,3)$-bimodules is isomorphic to the category of covariant presheaves over $\Omega_p$.* *Proof.* It comes from the fact that $B_{0,3}$ is in bijection with the set of morphisms of $\Omega_p$. 
Indeed, according to Remark [Remark 16](#casezeroandn){reference-type="ref" reference="casezeroandn"}, the set $B_{0,3}$ is the set of trees in $B_3$ with exactly one white vertex. The trees whose root vertex is the unique white vertex correspond to active morphisms. The trees without any black vertices above the unique white vertex correspond to inert morphisms. ◻ **Definition 43**. We will call a *stratum* of a planar tree a connected component of the complement of the tree in the plane (we consider that the leaves and the root of the tree go to infinity). For example, in the following picture, the planar tree has four strata: $$\begin{tikzpicture} \draw (0,-.3) -- (0,0) -- (-1,1); \draw (-.5,.5) -- (0,1); \draw (0,0) -- (.5,.5); \draw (-.5,0) node{$1$}; \draw (-.5,1) node{$2$}; \draw (0,.5) node{$3$}; \draw (.5,0) node{$4$}; \draw[fill] (0,0) circle (1pt); \draw[fill] (-.5,.5) circle (1pt); \end{tikzpicture}$$ In general, a planar tree with $n$ leaves has $n+1$ strata. Let $\alpha: \Omega_p \to \mathcal{E}$ be the functor which sends a planar tree to the coproduct of the unit over all strata of this tree. **Lemma 44**. *Let $\zeta$ be the terminal $(1,3)$-bimodule. The category of pointed $\zeta-\zeta$-bimodules is isomorphic to the category of covariant presheaves over $\Omega_p$ equipped with a map from $\alpha$ to this presheaf.* *Proof.* According to Lemma [Lemma 25](#lemmabimodulesbimodules){reference-type="ref" reference="lemmabimodulesbimodules"} and Lemma [Lemma 42](#lemmabimodzerothree){reference-type="ref" reference="lemmabimodzerothree"}, $\zeta-\zeta$-bimodules are equivalent to covariant presheaves over $\Omega_p$. What we need to prove is that $\alpha$ is indeed the initial pointed $\zeta-\zeta$-bimodule of Lemma [Lemma 27](#lemmaalpha){reference-type="ref" reference="lemmaalpha"}. As was pointed out in the proof of Lemma [Lemma 36](#lemmaquillenequivalences){reference-type="ref" reference="lemmaquillenequivalences"}, the initial pointed $\zeta-\zeta$-bimodule is given by the set of connected components of a classifier. The objects are pairs $(b,W) \in B_{1,3}$. There must be a $0$-dimensional subset $V \subset W$ such that all the vertices in $V_-$ have label $A$ and all the vertices in $V_+$ have label $B$. This means that $b^{\downarrow 2}$ is a linear tree whose vertices have label $A$ or $B$ and all the vertices with label $A$ are on one side and all the vertices with label $B$ are on the other side. This gives labels $A$ and $B$ to the leaves of $t(b)$, where the labels $A$ are to the left and the labels $B$ are to the right. Such labellings correspond bijectively to the connected components of the classifier, since the labelling is invariant within each connected component. They also correspond bijectively to the strata of $t(b)$. This proves that $\alpha$ is indeed the initial pointed $\zeta-\zeta$-bimodule. ◻ ## Functors equipped with retractions **Definition 45**.
We will say that a morphism in $\Omega_p$ *consists of blowing up a vertex to add a trunk* when it is an inner face map $\partial_e: T/e \to T$, where the vertex directly above $e$ has no inputs: $$\begin{tikzpicture}[scale=.6] \draw (0,-.5) -- (0,0) -- (-1.7,1.7); \draw (-1,1) -- (-.3,1.7); \draw (0,0) -- (1.7,1.7); \draw (1,1) -- (.3,1.7); \draw[fill] (0,0) circle (1.5pt); \draw[fill] (-1,1) circle (1.5pt); \draw[fill] (1,1) circle (1.5pt); \draw (0,-.5) node[below]{$T/e$}; \draw (3,0) node{$\longrightarrow$}; \draw (3,0) node[above]{$\tiny{\partial_e}$}; \begin{scope}[shift={(6,0)}] \draw (0,-.5) -- (0,0) -- (-1.7,1.7); \draw (-1,1) -- (-.3,1.7); \draw[very thick] (0,1) -- (0,0); \draw (0,0) -- (1.7,1.7); \draw (1,1) -- (.3,1.7); \draw[fill] (0,0) circle (1.5pt); \draw[fill] (-1,1) circle (1.5pt); \draw[fill] (1,1) circle (1.5pt); \draw[fill] (0,1) circle (1.5pt); \draw (0,-.5) node[below]{$T$}; \end{scope} \end{tikzpicture}$$ **Definition 46**. We will say that a functor $\mathcal{K}: \Omega_p \to \mathrm{Top}$ is *equipped with retractions* if for all morphisms $\partial_e: T/e \to T$ in $\Omega_p$ consisting of blowing up a vertex to add a trunk, there is a retraction $r_T: \mathcal{K}(T) \to \mathcal{K}(T/e)$ of $\mathcal{K}(\partial_e)$. Moreover, the retractions are natural, that is, for all $h: S \to T$ in $\Omega_p$, the following square commutes: $$\xymatrix{ \mathcal{K}(S) \ar[r]^-{r_S} \ar[d]_-{\mathcal{K}(h)} & \mathcal{K}(S/e) \ar[d]^-{\mathcal{K}(h/e)} \\ \mathcal{K}(T) \ar[r]_-{r_T} & \mathcal{K}(T/e) }$$ ## Computation of homotopy limit For $n \geq 0$, we will write $[n]$ for the set $\{0,\ldots,n\}$ and $\mathbf{P}[n]$ for the category of subsets of $[n]$ and inclusions. **Definition 47**. For $n \geq 0$, let $\mathbf{C}[n]$ be the full subcategory of $$\{A \xleftarrow{\sigma} B \xrightarrow{\tau} C \xleftarrow{\upsilon} D\} \times \mathbf{P}[n]$$ of pairs $(l,S)$ where - $S = [i]$ for some $i \in [n]$ if $l=A$, - $S$ is non-empty if $l \in \{B,C\}$, - $S$ is empty if $l = D$. For example, here is a picture of $\mathbf{C}[1]$: $$\xymatrix{ A,\{0\} \ar[d] & B,\{0\} \ar[l] \ar[ld] \ar[d] \ar[rd] \ar[r] & C,\{0\} \ar[d] \\ A,\{0,1\} & B,\{0,1\} \ar[l] \ar[r] & C,\{0,1\} & D,\varnothing \ar[l] \ar[lu] \ar[ld] \\ & B,\{1\} \ar[lu] \ar[u] \ar[ru] \ar[r] & C,\{1\} \ar[u] }$$ Recall that for $n \geq 0$, the *topological $n$-simplex* is the topological space: $$\Delta^n = \left\{(x_0,\dots,x_n) \in \mathbb{R}^{n+1} \middle| \text{$\sum_{i=0}^n x_i = 1$ and $x_i \geq 0$ for $0 \leq i \leq n$} \right\}$$ **Lemma 48**. *For $n \geq 0$, there is a canonical isomorphism between the realisation of the nerve of $\mathbf{C}[n]$ and $\Delta^{n+1}$.* *Proof.* We describe a map $f: ob(\mathbf{C}[n]) \to \Delta^{n+1}$, where $ob(\mathbf{C}[n])$ is the set of objects of $\mathbf{C}[n]$. Let $(e_0,\ldots,e_{n+1})$ be the standard basis of $\mathbb{R}^{n+2}$. For $S \subset [n]$, let $\max(S)$ be the maximum and $|S|$ be the number of elements of $S$. We define $$f(l,S) = \begin{cases} e_{\max(S)} &\text{if $l=A$,} \\ \frac{2}{3 |S|} \sum_{i \in S} e_i + \frac{1}{3} e_{n+1} &\text{if $l=B$,} \\ \frac{1}{3 |S|} \sum_{i \in S} e_i + \frac{2}{3} e_{n+1} &\text{if $l=C$,} \\ e_{n+1} &\text{if $l=D$.} \end{cases}$$ It is easy to check that this map induces the desired isomorphism. ◻ **Definition 49**. Let $\Omega_p^*$ be the category of elements of the presheaf $\alpha: \Omega_p \to \mathrm{Set}$ which sends a tree to its set of strata. 
So $\Omega_p^*$ is the category of trees with a chosen stratum. In the following lemma, we will consider $[n]$ as a category, where there is a morphism $i \to j$ if $i < j$. It is a subcategory of $\mathbf{C}[n]$ through the inclusion $i \mapsto (A,[i])$. Also, we will write $\gamma_0 \in \Omega_p^*$ for the trunk, with the unique choice of stratum. **Lemma 50**. *For $n \geq 0$, any functor $T: [n] \to \Omega_p^*$ can be naturally extended to a functor $\bar{T}: \mathbf{C}[n] \to \Omega_p^*$ such that, for $S \subset [n]$ non-empty, the map $\bar{T}(\tau,id_S): \bar{T}(B,S) \to \bar{T}(C,S)$ consists of blowing up a vertex to add a trunk and $\bar{T}(D,\varnothing)=\gamma_0$.* *Proof.* Let $T: [n] \to \Omega_p^*$ be a functor. We will extend it to a functor $\bar{T}: \mathbf{C}[n] \to \Omega_p^*$. By assumption, we should have $\bar{T}(A,[i]) = T(i)$ for $i \in [n]$, and $\bar{T}(D,\varnothing)$ should be the trunk. It remains to define $\bar{T}(B,S)$ and $\bar{T}(C,S)$ for $S \subset [n]$ non-empty. For a non-trivial map $i \to j$ in $[n]$, using Remark [Remark 41](#remarkinertactiveomegap){reference-type="ref" reference="remarkinertactiveomegap"}, there is $T_{ij} \in \Omega_p^*$ such that $T(i) \to T(j)$ factorises as $T(i) \twoheadrightarrow T_{ij} \rightarrowtail T(j)$, where the first map is active and the second map is inert. Observe that we can add a circle $c_{ij}$ on the tree $T(j)$ such that the tree inside this circle is $T_{ij}$: $$\begin{tikzpicture}[scale=1.5] \draw (0,-.3) -- (0,0) -- (-.5,.5); \draw (-.166,.5) -- (0,0) -- (.166,.5); \draw (.5,.5) -- (0,0); \draw (0,-.3) node[below]{$T(0)$}; \draw[fill] (0,0) circle (.6pt); \draw (1.5,0) node{$\longrightarrow$}; \begin{scope}[shift={(3,0)}] \draw (0,-.3) -- (0,0) -- (-.75,.75); \draw (-.5,.75) -- (-.5,.5) -- (-.125,.875); \draw (-.25,.75) -- (-.375,.875); \draw (0,0) -- (.75,.75); \draw (.25,.75) -- (.5,.5) -- (.5,.75); \draw (-.03,.47) node{$\tiny{c_{01}}$}; \draw (0,-.3) node[below]{$T(1)$}; \draw[fill] (0,0) circle (.6pt); \draw[fill] (-.5,.5) circle (.6pt); \draw[fill] (-.25,.75) circle (.6pt); \draw[fill] (.5,.5) circle (.6pt); \draw[ultra thin] ({-.15*cos(45)},{-.15*sin(45)}) arc (-135:45:.15) -- ({.15*cos(45)-.5},{.15*sin(45)+.5}) arc (45:225:.15) -- ({-.15*cos(45)},{-.15*sin(45)}); \end{scope} \draw (4.5,0) node{$\longrightarrow$}; \begin{scope}[shift={(6,0)}] \draw (0,-.3) -- (0,0) -- (-.875,.875); \draw (-.5,.5) -- (-.125,.875); \draw (-.75,.75) -- (-.625,.875); \draw (-.25,.75) -- (-.375,.875); \draw (0,0) -- (.875,.875); \draw (.5,.5) -- (.125,.875); \draw (.25,.75) -- (.375,.875); \draw (.75,.75) -- (.625,.875); \draw (.48,.06) node{$\tiny{c_{12}}$}; \draw (-.08,.42) node{$\tiny{c_{02}}$}; \draw (0,-.3) node[below]{$T(2)$}; \draw[fill] (0,0) circle (.6pt); \draw[fill] (-.5,.5) circle (.6pt); \draw[fill] (-.75,.75) circle (.6pt); \draw[fill] (-.25,.75) circle (.6pt); \draw[fill] (.5,.5) circle (.6pt); \draw[fill] (.25,.75) circle (.6pt); \draw[fill] (.75,.75) circle (.6pt); \draw[ultra thin] ({-.08*cos(45)},{-.08*sin(45)}) arc (-135:45:.08) -- ({.08*cos(45)-.75},{.08*sin(45)+.75}); \draw[ultra thin] ({-.08*cos(45)},{-.08*sin(45)}) -- ({-.08*cos(45)-.75},{-.08*sin(45)+.75}); \draw[ultra thin] ({.08*cos(45)-.75},{.08*sin(45)+.75}) arc (45:225:.08); \draw[ultra thin] ({-.12*cos(45)},{-.12*sin(45)}) arc (-135:-45:.12); \draw[ultra thin] ({.12*cos(45)},{-.12*sin(45)}) -- ({.12*cos(45)+.75},{-.12*sin(45)+.75}); \draw[ultra thin] ({.12*cos(45)+.75},{-.12*sin(45)+.75}) arc (-45:135:.12); \draw[ultra thin] 
({-.12*cos(45)+.75},{.12*sin(45)+.75}) -- ({.12*cos(45)+.25},{.36*sin(45)+.25}); \draw[ultra thin] ({-.12*cos(45)+.25},{.36*sin(45)+.25}) arc (-135:-45:.12); \draw[ultra thin] ({-.12*cos(45)+.25},{.36*sin(45)+.25}) -- ({.12*cos(45)-.25},{.12*sin(45)+.75}); \draw[ultra thin] ({.12*cos(45)-.25},{.12*sin(45)+.75}) arc (45:135:.12); \draw[ultra thin] ({-.12*cos(45)-.25},{.12*sin(45)+.75}) -- ({.12*cos(45)-.5},{.36*sin(45)+.5}); \draw[ultra thin] ({-.12*cos(45)-.5},{.36*sin(45)+.5}) arc (-135:-45:.12); \draw[ultra thin] ({-.12*cos(45)-.5},{.36*sin(45)+.5}) -- ({.12*cos(45)-.75},{.12*sin(45)+.75}); \draw[ultra thin] ({.12*cos(45)-.75},{.12*sin(45)+.75}) arc (45:225:.12); \draw[ultra thin] ({-.12*cos(45)},{-.12*sin(45)}) -- ({-.12*cos(45)-.75},{-.12*sin(45)+.75}); \end{scope} \end{tikzpicture}$$ We define $\bar{T}(B,S)$ as the tree obtained from $T(\max(S))$ by contracting all the internal edges except the ones that are crossed by a circle $c_{ij}$ for $i,j \in S$ and $j=\max(S)$. Note that in particular, $\bar{T}(B,\{i\})$ is the tree obtained from $T(i)$ by contracting all the internal edges. In the case of the free living edge, we add a unary vertex. For example, if we start with $T: [2] \to \Omega_p^*$ as in the previous picture, we will get the following trees $\bar{T}(B,S)$ for $S \subset [2]$ non-empty (forgetting about the chosen strata): $$\begin{tikzpicture}[scale=1.1] \begin{scope} \draw (0,0) -- (0,-.2); \draw (-.4,.5) -- (0,0) -- (.4,.5); \draw (-.2,.5) -- (0,0) -- (0,.5); \draw (.25,.8) -- (.4,.5) -- (.55,.8); \draw (.4,.5) -- (.4,.8); \draw (-.1,.8) -- (0,.5) -- (.1,.8); \draw (.18,.98) -- (.25,.8) -- (.32,.98); \draw[fill] (0,0) circle (.8pt); \draw[fill] (0,.5) circle (.8pt); \draw[fill] (.4,.5) circle (.8pt); \draw[fill] (.25,.8) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{0,1,2\}}$}; \end{scope} \begin{scope}[shift={({3*cos(30)},{3*sin(30)})}] \draw (0,-.2) -- (0,0) -- (0,.5); \draw (-.6,.5) -- (0,0) -- (.6,.5); \draw (-.4,.5) -- (0,0) -- (.4,.5); \draw (-.2,.5) -- (0,0) -- (.2,.5); \draw (.1,.8) -- (.2,.5) -- (.3,.8); \draw[fill] (.2,.5) circle (.8pt); \draw[fill] (0,0) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{1,2\}}$}; \end{scope} \begin{scope}[shift={({3*cos(90)},{3*sin(90)})}] \draw (0,-.2) -- (0,0) -- (0,.5); \draw (-.6,.5) -- (0,0) -- (.6,.5); \draw (-.4,.5) -- (0,0) -- (.4,.5); \draw (-.2,.5) -- (0,0) -- (.2,.5); \draw[fill] (0,0) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{1\}}$}; \end{scope} \begin{scope}[shift={({3*cos(150)},{3*sin(150)})}] \draw (0,0) -- (0,-.2); \draw (-.4,.5) -- (0,0) -- (.4,.5); \draw (-.2,.5) -- (0,0) -- (0,.5); \draw (.25,.8) -- (.4,.5) -- (.55,.8); \draw (.4,.5) -- (.4,.8); \draw (-.1,.8) -- (0,.5) -- (.1,.8); \draw[fill] (0,0) circle (.8pt); \draw[fill] (0,.5) circle (.8pt); \draw[fill] (.4,.5) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{0,1\}}$}; \end{scope} \begin{scope}[shift={({3*cos(210)},{3*sin(210)})}] \draw (0,0) -- (0,-.2); \draw (-.42,.5) -- (0,0) -- (.42,.5); \draw (-.14,.5) -- (0,0) -- (.14,.5); \draw[fill] (0,0) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{0\}}$}; \end{scope} \begin{scope}[shift={({3*cos(270)},{3*sin(270)})}] \draw (0,0) -- (0,-.2); \draw (-.4,.5) -- (0,0) -- (.4,.5); \draw (-.2,.5) -- (0,0) -- (0,.5); \draw (.22,.8) -- (.4,.5) -- (.58,.8); \draw (.34,.8) -- (.4,.5) -- (.46,.8); \draw (-.1,.8) -- (0,.5) -- (.1,.8); \draw[fill] (0,0) circle (.8pt); \draw[fill] (0,.5) circle (.8pt); \draw[fill] (.4,.5) circle (.8pt); \draw[fill] (0,0) circle (.8pt); \draw (0,-.2) 
node[below]{$\tiny{\{0,2\}}$}; \end{scope} \begin{scope}[shift={({3*cos(330)},{3*sin(330)})}] \draw (0,0) -- (0,-.2); \draw (-.63,.5) -- (0,0) -- (.63,.5); \draw (-.45,.5) -- (0,0) -- (.45,.5); \draw (-.27,.5) -- (0,0) -- (.27,.5); \draw (-.09,.5) -- (0,0) -- (.09,.5); \draw[fill] (0,0) circle (.8pt); \draw (0,-.2) node[below]{$\tiny{\{2\}}$}; \end{scope} \draw ({1.5*cos(30)},{1.5*sin(30)}) node[rotate=30]{$\longleftarrow$}; \draw ({1.5*cos(90)},{1.5*sin(90)}) node[rotate=90]{$\longleftarrow$}; \draw ({1.5*cos(150)},{1.5*sin(150)}) node[rotate=150]{$\longleftarrow$}; \draw ({1.5*cos(210)},{1.5*sin(210)}) node[rotate=210]{$\longleftarrow$}; \draw ({1.5*cos(270)},{1.5*sin(270)}) node[rotate=270]{$\longleftarrow$}; \draw ({1.5*cos(330)},{1.5*sin(330)}) node[rotate=330]{$\longleftarrow$}; \draw ({3*cos(30)},0) node[rotate=90]{$\longrightarrow$}; \draw ({1.5*cos(30)},2.25) node[rotate=-30]{$\longrightarrow$}; \draw ({-1.5*cos(30)},2.25) node[rotate=30]{$\longleftarrow$}; \draw ({-3*cos(30)},0) node[rotate=90]{$\longrightarrow$}; \draw ({-1.5*cos(30)},-2.25) node[rotate=-30]{$\longrightarrow$}; \draw ({1.5*cos(30)},-2.25) node[rotate=30]{$\longleftarrow$}; \end{tikzpicture}$$ $\bar{T}(C,S)$ is the same tree as $\bar{T}(B,S)$ but where the vertex in the most inside circle is blown up to add a trunk in the chosen strata. Such $\bar{T}$ defined on objects extends to a functor. For $i \in [n]$, $\bar{T}(\sigma,id_{[i]})$ is an active. For $S \subset [n]$ non-empty, $\bar{T}(\tau,id_S)$ consists of blowing up a vertex to add a trunk, as required. For $i \notin S$, the maps $\bar{T}(id_B, S \to S \amalg i)$ and $\bar{T}(id_C, S \to S \amalg i)$ are active if $i \leq \max(S)$ and inert if $i > \max(S)$. Finally, the map $\bar{T}(\upsilon, \varnothing \to S)$ is the inert map given by inclusion of $\gamma_0$ to the trunk of $\bar{T}(C,S)$ which was added from $\bar{T}(B,S)$. ◻ Let $\Delta^\bullet: \Delta \to \mathrm{Top}$ be the cosimplicial space which sends $n$ to $\Delta^n$. **Lemma 51**. *Let $\mathcal{C}$ be a category and $\mathcal{F}: \mathcal{C} \to \mathrm{Top}$ be a functor. Then giving $\beta \in \operatornamewithlimits{holim}\mathcal{F}$ is the same as naturally describing, for all $n \geq 0$ and $T: [n] \to \mathcal{C}$, a map $\beta(T): \Delta^n \to \mathcal{F}T(n)$. *Naturally* means that for all $f: [m] \to [n]$, $$\label{equationnaturalholim} \beta(T) \Delta^f = \mathcal{F}T(f(m) \to n) \beta(Tf).$$* *Proof.* Let $c \mathcal{F}: \Delta \to \mathrm{Top}$ be the cosimplicial space given by $$(c \mathcal{F})_n = \coprod_{T: [n] \to C} \mathcal{F}T(n).$$ The homotopy limit of $\mathcal{F}$ is the totalization of $c \mathcal{F}$. This gives us the desired result. ◻ Let $\pi: \Omega_p^* \to \Omega_p$ be the projection functor to the base. Concretely, $\pi$ forgets the strata. **Lemma 52**. *Let $\mathcal{K}: \Omega_p \to \mathrm{Top}$ equipped with retractions. Then the homotopy limit of $\pi^*\mathcal{K}$ is given by its value at the trunk.* *Proof.* Let us write $\mathcal{L} = \pi^*(\mathcal{K})$ and $\gamma_0 \in \Omega_p^*$ be the trunk. We will construct two maps $$\xymatrix{ \operatornamewithlimits{holim}\mathcal{L} \ar@<.5ex>[r]^-q & \mathcal{L}(\gamma_0) \ar@<.5ex>[l]^-j }$$ and prove that they are homotopy inverse of each other. 
The map $q$ is the projection which sends $\beta \in \operatornamewithlimits{holim}\mathcal{L}$ to the point given by the map $\beta(\gamma_0): \Delta^0 \to \mathcal{L}(\gamma_0)$ of Lemma [Lemma 51](#lemmadescriptionholim){reference-type="ref" reference="lemmadescriptionholim"}. We can construct the map $j$ using Lemma [Lemma 50](#lemmatechnical){reference-type="ref" reference="lemmatechnical"} and the assumption that $\mathcal{K}$ is equipped with retractions. Let $\xi \in \mathcal{L}(\gamma_0)$. A functor $T: [n] \to \Omega_p^*$ can be extended to $\bar{T}: \mathbf{C}[n] \to \Omega_p^*$ according to Lemma [Lemma 50](#lemmatechnical){reference-type="ref" reference="lemmatechnical"}. We have the zigzag $$\label{zigzag} \xymatrix{ \gamma_0 \ar[rr]^-{\bar{T}(\nu,id_{[n]})} && \bar{T}(C,[n]) && \bar{T}(B,[n]) \ar[ll]_-{\bar{T}(\tau,id_{[n]})} \ar[rr]^-{\bar{T}(\sigma,id_{[n]})} && T(n) }$$ where the map in the middle consists of blowing up a vertex to add the trunk. Applying $\mathcal{L}$ to this zigzag gives us another zigzag, where the map in the middle has a retraction, since $\mathcal{K}$ is equipped with retractions. Then $j(\xi)(T)$ is given by the composite $\Delta^n \to \mathcal{L}(\gamma_0) \to \mathcal{L}T(n)$, where the first map is the constant map at $\xi$ and the second map is the composite obtained by applying $\mathcal{L}$ to [\[zigzag\]](#zigzag){reference-type="ref" reference="zigzag"} and replacing the map in the middle by its retraction. The composite $qj$ is the identity. We will now prove that the composite $jq$ is also homotopic to the identity. Let us fix $\beta \in \operatornamewithlimits{holim}\mathcal{L}$ and a functor $T: [n] \to \Omega_p^*$, which can again be extended to $\bar{T}: \mathbf{C}[n] \to \Omega_p^*$. We will describe a map $H(\beta)(T): \Delta^{n+1} \to \mathcal{L}T(n)$. Let $i: [n+1] \to \mathbf{C}[n]$ be non-degenerate, that is, $i(f) \neq id$ if $f \neq id$. We define $H(\beta)(T)|_i: \Delta^{n+1} \to \mathcal{L}T(n)$ as follows. Using Lemma [Lemma 51](#lemmadescriptionholim){reference-type="ref" reference="lemmadescriptionholim"}, we can associate to the functor $\bar{T}i$ a map $\beta(\bar{T}i) : \Delta^{n+1} \to \mathcal{L}\bar{T}i(n+1)$. Note that since $i$ is non-degenerate, $i(n+1)=(A,[n])$ or $i(n+1)=(C,[n])$. So $\bar{T}i(n+1) = T(n)$ or $\bar{T}i(n+1)=\bar{T}(C,[n])$. We define $$H(\beta)(T)|_i = \begin{cases} \beta(\bar{T}i) &\text{if $i(n+1) = (A,[n])$,}\\ \mathcal{L}\bar{T}(\sigma,id_{[n]}) \mathcal{L}\bar{T}(\tau,id_{[n]})^{-1} \beta(\bar{T}i) &\text{if $i(n+1)=(C,[n])$,} \end{cases}$$ where $\mathcal{L}\bar{T}(\tau,id_{[n]})^{-1}$ is the retraction of $\mathcal{L}\bar{T}(\tau,id_{[n]})$. According to Lemma [Lemma 48](#lemmacanonicaliso){reference-type="ref" reference="lemmacanonicaliso"}, $i$ induces a canonical map $|N|(i): \Delta^{n+1} \to \Delta^{n+1}$. We can define the restriction of $H(\beta)(T)$ to the image of $|N|(i)$ to be given by $H(\beta)(T)|_i$. It remains to check that $H(\beta)(T)$ is well-defined. Let $i_A,i_C: [n+1] \to \mathbf{C}[n]$ be two non-degenerate functors such that $i_A(n+1) = (A,[n])$, $i_C(n+1) = (C,[n])$ and $i := i_A d_{n+1} = i_C d_{n+1}$, where $d_{n+1}: [n] \to [n+1]$ is the inclusion. Note that $i_A(n \to n+1) = (\sigma,id_{[n]})$ and $i_C(n \to n+1) = (\tau,id_{[n]})$.
Using the naturality [\[equationnaturalholim\]](#equationnaturalholim){reference-type="ref" reference="equationnaturalholim"}, we have $$\begin{split} H(\beta)(T)|_{i_A} \delta_{n+1} &= \beta(\bar{T} i_A) \delta_{n+1}, \\ &= \mathcal{L}\bar{T}(\sigma,id_{[n]}) \beta(\bar{T} i), \\ &= \mathcal{L}\bar{T}(\sigma,id_{[n]}) \mathcal{L}\bar{T}(\tau,id_{[n]})^{-1} \mathcal{L}\bar{T}(\tau,id_{[n]}) \beta(\bar{T} i), \\ &= \mathcal{L}\bar{T}(\sigma,id_{[n]}) \mathcal{L}\bar{T}(\tau,id_{[n]})^{-1} \beta(\bar{T} i_C) \delta_{n+1}, \\ &= H(\beta)(T)|_{i_C} \delta_{n+1}, \end{split}$$ where $\delta_{n+1} = \Delta^{d_{n+1}}: \Delta^n \to \Delta^{n+1}$. This proves that $H(\beta)(T)$ is well-defined. Finally, the desired homotopy $H: \left[0,1\right] \times \operatornamewithlimits{holim}\mathcal{L} \to \operatornamewithlimits{holim}\mathcal{L}$ is given by $H(t,\beta)(T)(x) = H(\beta)(T)((1-t)x,t)$. ◻ ## The case $(m,n)=(1,3)$ **Theorem 53**. *We do have the fibration sequence [\[fibrationsequence\]](#fibrationsequence){reference-type="ref" reference="fibrationsequence"} for $(m,n) = (1,3)$ with the extra condition that $X$ is such that $v^*X$ is equipped with retractions.* *Proof.* Since $\pi: \Omega_p^* \to \Omega_p$ is a discrete fibration, the left Kan extension can be computed as a coproduct over fibres. In case of the terminal presheaf, we get the coproduct of $1$ over the fibres of $\pi$, that is the fibres of $\pi$ themselves. So, $\pi_!(1) = \alpha$ and by adjunction, $$\mathrm{Map}_{[\Omega_p,\mathrm{SSet}]} (\alpha,Y) \sim \mathrm{Map}_{[\Omega_p^*,\mathrm{SSet}]} (1,\pi^* Y) = \operatornamewithlimits{holim}_{\Omega_p^*} \pi^* Y,$$ where $Y := v^*(X)$. We get the desired result by combining Lemma [Lemma 25](#lemmabimodulesbimodules){reference-type="ref" reference="lemmabimodulesbimodules"}, Lemma [Lemma 37](#lemmafibrationsequence){reference-type="ref" reference="lemmafibrationsequence"} and Lemma [Lemma 52](#lemmaholimcontractible){reference-type="ref" reference="lemmaholimcontractible"}. ◻ **Definition 54**. We will call *hyperoperads* algebras of the polynomial monad $\mathbf{Id}^{+3}$. We will write $\mathrm{HOp}$ for the category of hyperoperads. **Corollary 55**. *Let $\mathcal{O}$ be a multiplicative hyperoperad. Assume that for all planar trees $T$ with zero or one vertices, that is, the free living edge or the corollas, $\mathcal{O}_T$ is contractible. Assume further that $\mathcal{O}^\bullet$ is equipped with retractions. Then we have a weak equivalence $$\Omega^3 \mathrm{Map}_{\mathrm{HOp}} (\zeta,u^*(\mathcal{O})) \sim \operatornamewithlimits{holim}_{\Omega_p} \mathcal{O}^\bullet.$$* ## Example: desymmetrisation of the Kontsevich operad Recall [@bwdquasitame Section 3.5] that for any polynomial monad $T$, there is a canonical map of polynomial monads from $T^+$ to the polynomial monad for symmetric operads. In particular, for $T=\mathrm{NOp}$, this map of polynomial monad induces a *desymmetrisation* functor $des: \mathrm{SOp} \to \mathrm{HOp}$, where $\mathrm{SOp}$ is the category of symmetric operads. Explicitly, it is given, for a planar tree $T$, by $des(\mathcal{P})(T) = \mathcal{P}(|T|)$, where $|T|$ is the set of vertices of $T$. Let $\mathcal{O}$ be a multiplicative hyperoperad. If $u^*(\mathcal{O})$ is the desymmetrisation of a reduced symmetric operad, then $\mathcal{O}^\bullet$ is equipped with retractions. Indeed, let $\mathcal{P}$ be the symmetric operad such that $des(\mathcal{P}) = u^*(\mathcal{O})$. 
The retraction, for a morphism $\partial_e: T /e \to T$ in $\Omega_p$, is given by the map $$\mathcal{O}(T) = \mathcal{P}(|T|) \otimes \mathcal{P}(\varnothing) \xrightarrow{\circ_v} \mathcal{P}(|T| \setminus \{v\}) = \mathcal{O}(T/e),$$ where $v \in |T|$ is the vertex directly above $e$, and $\circ_v$ is the multiplication of the symmetric operad given in terms of partial operations [@markl2008operads]. We will now give a non-trivial example of a multiplicative hyperoperad. For $m \geq 2$, let $\tilde{C}_m(n)$ be the quotient of the configuration space $$\mathrm{Conf}_n(\mathbb{R}^m) = \{ (x_1,\ldots,x_n) \in (\mathbb{R}^m)^n, \text{ $x_i \neq x_j$ if $i \neq j$} \}$$ with respect to the action of the group $G_m = \{ x \mapsto \lambda x + v | \lambda > 0, v \in \mathbb{R}^m \}$. **Definition 56**. [@kontsevich Definition 12] The *Kontsevich operad* $\mathcal{K}_m(n)$ is the closure of the image of $\tilde{C}_m(n)$ in $(S^{m-1})^{n \choose 2}$ under the map $$G_m \cdot (x_1,\ldots,x_n) \mapsto \left( \frac{x_j - x_i}{|x_j - x_i|} \right)_{1 \leq i < j \leq n}.$$ Set-theoretically, the operad $\mathcal{K}_m$ is the same as the free operad generated by the symmetric collection of sets $(\tilde{C}_m(n))_{n \geq 0}$. **Lemma 57**. *The hyperoperad obtained by desymmetrisation of the Kontsevich operad $\mathcal{K}_m$ has a multiplicative structure.* *Proof.* We write $x = (x_{ij})_{i \neq j \in |T|}$ for an element of $\mathcal{K}_m(T)$. Let $e_1,e_2 \in S^{m-1}$ be given by the standard inclusion of $\mathbb{R}^2$ into $\mathbb{R}^m$. Let $x(T) \in \mathcal{K}_m(T)$ be given by $$x(T)_{ij} = \begin{cases} e_1 &\text{if $i$ is below $j$,} \\ e_2 &\text{if $i$ is to the left of $j$.} \end{cases}$$ It is easy to check that this does give a multiplicative structure. ◻ In particular we can apply Corollary [Corollary 55](#corollarytripledelooping){reference-type="ref" reference="corollarytripledelooping"} to the desymmetrisation of the Kontsevich operad.
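To make Definition 56 concrete, here is a minimal numerical sketch in Python/NumPy (not taken from the paper; the helper name `kontsevich_point` and the sample configuration are our own choices). It evaluates the map of Definition 56 on a configuration of points and checks that acting by an element of $G_m$ (a positive rescaling followed by a translation) does not change the image, which is why the map is well defined on the quotient $\tilde{C}_m(n)$.

```python
# Numerical sketch of the map in Definition 56: a configuration of n distinct
# points of R^m is sent to the collection of unit vectors (x_j - x_i)/|x_j - x_i|,
# one for each pair i < j; the collection depends only on the G_m-orbit.
import numpy as np
from itertools import combinations

def kontsevich_point(points):
    """Image of a configuration in (S^{m-1})^{n choose 2}, as a dict {(i, j): unit vector}."""
    pts = np.asarray(points, dtype=float)
    return {(i, j): (pts[j] - pts[i]) / np.linalg.norm(pts[j] - pts[i])
            for i, j in combinations(range(len(pts)), 2)}

conf = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])              # three points in R^2
img = kontsevich_point(conf)
img_moved = kontsevich_point(3.0 * conf + np.array([5.0, -1.0]))    # act by an element of G_2
assert all(np.allclose(img[k], img_moved[k]) for k in img)
```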
arxiv_math
{ "id": "2309.15055", "title": "Triple delooping for multiplicative hyperoperads", "authors": "Florian De Leger and Maro\\v{s} Grego", "categories": "math.AT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We give an algebro-geometric proof of the fact that for a smooth fibration $\pi\,:\,X\,\longrightarrow\, Y$ of projective varieties, the direct image $\pi_*(L\otimes K_{X/Y})$ of the adjoint line bundle of an ample (respectively, nef and $\pi$-strongly big) line bundle $L$ is ample (respectively, nef and big). address: - School of Mathematics, Tata Institute of Fundamental Research, 1 Homi Bhabha Road, Colaba, Mumbai 400005 - Mathématiques - bât. M2, Université Lille 1, F-59655 Villeneuve d'Ascq Cedex, France - Indian Institute of Science Education and Research, Tirupati - Dublin Institute for Advanced Studies, 10 Burlington Road, Dublin 4, Ireland author: - Indranil Biswas - Fatima Laytimi - D. S. Nagaraj - Werner Nahm title: On the direct image of the adjoint line bundle --- # Introduction Throughout we work over the field $\mathbb{C}$ of complex numbers. For the standard notation used here the reader is referred to [@La1], [@La2]. We first give an algebraic proof of the following: **Theorem 1**. *Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of smooth projective varieties and $K_{X/Y}$ the relative canonical line bundle for it. For any ample line bundle $L$ on $X$ the direct image $\pi_* (L\otimes K_{X/Y})$ is either zero or an ample vector bundle.* Mourougane in [@Mour] has proved by analytic methods that the vector bundle $\pi_* (L\otimes K_{X/Y})$ is Griffiths positive (hence ample). We define below the notion of a $\pi$-strongly big line bundle (see Definition [Definition 1](#sbig){reference-type="ref" reference="sbig"}) and prove the following: **Theorem 1**. *Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of projective varieties. If $L$ is a $\pi$-strongly big line bundle on $X,$ then the vector bundle $\pi_* (L\otimes K_{X/Y})$ is big. Moreover, if $L$ is nef and $\pi$-strongly big, then $\pi_* (L\otimes K_{X/Y})$ is also nef and big.* # Proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} {#proof-of-theorem-main1} We need to recall the following known results. **Theorem 1** ([@La1 Theorem 4.1.10]). *Let $\xi$ be a line bundle on a projective variety $Y$. Fix a positive integer $d$. Then there exists a projective variety $\widetilde Y$, a finite surjective morphism $f\,:\,\widetilde{Y} \,\longrightarrow\, Y$ and a line bundle $M$ on $\widetilde{Y}$, such that $f^*\xi \,\simeq\, M^{\otimes d}.$* **Theorem 1** ([@Vie Theorem 2.43]). *Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of projective varieties. If $L$ is semi-ample, then $\pi_*(L\otimes K_{X/Y})$ is nef.* Take $\pi$ as above. If $L$ is an ample bundle on $X$, then the restriction $L\big\vert_{X_y}$ of $L$ to the fiber $X_y\,:=\, \pi^{-1}(y)$ over $y\,\in\, Y$ is ample, and hence from the Kodaira vanishing theorem it follows that $\pi_* (L\otimes K_{X/Y})$ is either $0$ or a vector bundle on $Y.$ Now we start the proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. Let $L$ be an ample line bundle on $X$ and $\xi$ a line bundle on $Y.$ Then there is an integer $d\,>\,0$ such that $L^d \otimes \pi^*\xi^*$ is ample, where $\xi^*$ is the dual of the line bundle $\xi$. We assume that $\xi$ is ample.
By Theorem [Theorem 1](#BG){reference-type="ref" reference="BG"} there exists a projective variety $\widetilde{Y},$ a finite surjective morphism $f\,:\,\widetilde{Y}\, \longrightarrow\, Y$ and a line bundle $M$ on $\widetilde{Y},$ such that $$\label{t1} f^*\xi\,= \,M^{\otimes d}.$$ Note that $f^*\xi$ is ample because $f$ is a finite map and $\xi$ is ample. Hence from [\[t1\]](#t1){reference-type="eqref" reference="t1"} it follows that $M$ is ample (recall that $d\, >\, 0$). We have a commutative diagram: $$\label{e1} \begin{CD} \widetilde{X} @>\widetilde{f}>> X \\ @VV\widetilde{\pi} V @VV \pi V\\ \widetilde{Y} @>f>> Y \end{CD}$$ The line bundle $L^d \otimes \pi^*\xi^*$ is ample if and only if $\widetilde f^*( L^d \otimes \pi^*\xi^*)$ is ample, because $\widetilde f$ is finite. We have $$\begin{array}{rcl} {\widetilde f}^*(L^d \otimes \pi^*\xi^*) &= &({\widetilde f}^*L^d)\otimes ({\widetilde f}^*{\pi^*}{\xi}^*)\\ {} &=& (\widetilde{f}^*L^d) \otimes (\widetilde{\pi}^*{f^*}\xi^*)\\ {}&=& (\widetilde{f}^*L^d)\otimes (\widetilde{\pi}^* ({M^*})^d) \\ {}& = & ((\widetilde{f}^*L)\otimes (\widetilde{\pi}^* M^*))^d. \end{array}$$ Set $\mathcal{L}\,=\,\widetilde{f}^*L\otimes \widetilde{\pi}^* M^*;$ then $\mathcal L$ is ample. From Theorem [Theorem 1](#Vie){reference-type="ref" reference="Vie"} we know that $\widetilde{\pi}_*(\mathcal {L}\otimes K_{\widetilde{X}/\widetilde{Y}})$ is nef. Using projection formula, $$\widetilde{\pi}_*(\mathcal{L}\otimes K_{\widetilde{X}/\widetilde{Y}})\,=\, \widetilde{\pi}_*(\widetilde{f}^{*}{L}\otimes K_{\widetilde{X}/\widetilde{Y}})\otimes M^*.$$ The vector bundle $\widetilde{\pi}_*((\widetilde{f}^*{L})\otimes K_{\widetilde{X}/\widetilde{Y}})\,=\, \widetilde{\pi}_*((\widetilde{f}^*{L})\otimes K_{\widetilde{X}/\widetilde{Y}})\otimes M^* \otimes M$ is ample, since the tensor product of a nef vector bundle and an ample vector bundle is ample (see [@FL Lemma 1.3]). But $\widetilde{\pi}_*((\widetilde{f}^*{L})\otimes K_{\widetilde{X}/\widetilde{Y}})\,=\, f^*(\pi_*(L\otimes K_{X/Y}))$ by the commutative diagram in [\[e1\]](#e1){reference-type="eqref" reference="e1"}. Hence $\pi_*(L\otimes K_{X/Y})$ is ample. This completes the proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. **Remark 1**. For any ample line bundle $L$ on a smooth projective variety $Z$, the vanishing theorem of Kodaira says that $H^i(Z,\, L\otimes K_Z)\,=\, 0$ for all $i\, \geq\,1$, where $K_Z$ is the canonical line bundle of $Z$ [@La1]. Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of smooth projective varieties. Let $L$ be an ample line bundle on $X$. From Kodaira vanishing theorem it now follows that $R^i\pi_* (L\otimes K_{X/Y})\,=\, 0$ for all $i\,\geq\, 1$. **Remark 1**. In Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} set $X\,=\, {\mathbb P}(E)$, where $E$ is a vector bundle on $Y$. Let $L$ be an ample line bundle on ${\mathbb P}(E)$. Assume that $\pi_*(L\otimes K_{X/Y})\,\not=\, 0$. Then there is a unique positive integer $d$ and a unique line bundle $L_0$ on $Y$ such that $$L\otimes K_{X/Y}\,=\, {\mathcal O}_{{\mathbb P}(E)}(d)\otimes \pi^*L_0.$$ Hence using the projection formula, we have $$\pi_*(L\otimes K_{X/Y})\,=\, \pi_*({\mathcal O}_{{\mathbb P}(E)}(d)\otimes \pi^*L_0)\,=\, \text{Sym}^d(E)\otimes L_0,$$ where $\text{Sym}^d(E)$ is the $d$-th symmetric product of $E$. Now Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} says that $\text{Sym}^d(E)\otimes L_0$ is ample. 
When $Y$ is a smooth projective curve, $\text{Sym}^d(E)\otimes L_0$ is ample if and only if $\mu_{\rm min}(\text{Sym}^d(E)\otimes L_0)\, >\, 0$ [@Ha p. 84, Theorem 2.4]. # Proof of Theorem [Theorem 1](#main2){reference-type="ref" reference="main2"} {#proof-of-theorem-main2} **Definition 1**. Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of smooth projective varieties. A line bundle $L$ on $X$ is said to be *$\pi$-strongly big* if there is an effective divisor $D$ on $Y$ with simple normal crossing support (see [@La1 Definition 4.1.1]) such that $L^{d}\otimes \pi^{*}({\mathcal O}_Y(D))^*$ is ample for some integer $d\,>\,0,$ where $\pi^{*}({\mathcal O}_Y(D))^*\,=\, \pi^{*}({\mathcal O}_Y(-D)).$ The following proposition states some properties of $\pi$-strongly big line bundles. **Proposition 1**. *Take $\pi$ and $L$ as in Definition [Definition 1](#sbig){reference-type="ref" reference="sbig"}. If $L$ is $\pi$-strongly big, then it is big and $\pi$-ample. In addition, if $L$ is nef, then $\pi_*(L\otimes K_{X/Y})$ is also nef.* *Proof.* Since $\pi^*({\mathcal O}_Y(D)) \,= \,{\mathcal O}_X(\pi^{-1}(D))$, and $\pi^{-1}(D)$ is effective, it follows from a characterization of big divisors (see [@La1 p. 141, Corollary 2.2.7]) that $L$ is big. On the other hand, by definition, and the fact that $L^d(-\pi^{*}{\mathcal O}_Y(D))\big\vert_{X_y} \,\simeq\, L^d\big\vert_{X_y}$ for all $y \,\in\, Y,$ we obtain that $L$ is $\pi$-ample; here $X_y$ denotes the fiber of $\pi$ over $y.$ With these observations, the second statement of the proposition follows from [@Mour Theorem 1]. ◻ Before proving Theorem [Theorem 1](#main2){reference-type="ref" reference="main2"} we need the following lemma, which is a consequence of the Kawamata covering result (see [@La1 Proposition 4.1.12]). **Lemma 1**. *In Theorem [Theorem 1](#BG){reference-type="ref" reference="BG"} assume $Y$ is non-singular and $\xi \,=\, {\mathcal O}_Y(D),$ where $D$ is an effective divisor on $Y$ with simple normal crossing support (i.e., the reduced divisor $D_{\rm red}$ is a normal crossing divisor). Then given any integer $d \,>\, 0$, there is a finite surjective morphism $f\,:\,\widetilde{Y} \,\longrightarrow\, Y,$ and a line bundle $M\,=\,{\mathcal O}_{\widetilde Y}(\widetilde{D})$ on $\widetilde{Y}$, such that $f^*\xi\,\simeq\, M^d,$ and $\widetilde{D}$ is effective.* For the proof of Theorem [Theorem 1](#main2){reference-type="ref" reference="main2"} we proceed as in the proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}. Let $L$ be a $\pi$-strongly big line bundle on $X$. Then there exists an effective divisor $D$ on $Y$ with simple normal crossing support and an integer $d\,>\,0,$ such that $L^d \otimes \pi^*{{\mathcal O}_Y (D)}^*$ is an ample line bundle. By Lemma [Lemma 1](#Kaw){reference-type="ref" reference="Kaw"} there exists a projective variety $\widetilde Y,$ a finite surjective morphism $f\,:\,\widetilde{Y} \,\longrightarrow\, Y$ and a line bundle $M\,=\,{\mathcal O}_{\widetilde Y}(\widetilde{D})$ with $\widetilde D$ effective, such that $f^*({\mathcal O}_Y(D))\,=\, M^d$. We have the commutative diagram: $$\label{e2} \begin{CD} \widetilde{X} @>\widetilde{f}>> X \\ @VV\widetilde{\pi} V @VV \pi V\\ \widetilde{Y} @>f>> Y \end{CD}$$ The line bundle $L^d \otimes \pi^*({\mathcal O}_Y{(D)})^*$ is ample if and only if $\widetilde{f}^*( L^d \otimes \pi^*({\mathcal O}_Y{(D)})^*)$ is ample.
We have $$\begin{array}{rcl} {\widetilde f}^*(L^d \otimes \pi^*({\mathcal O}_Y{(D)})^*)&\,=\, &({\widetilde f}^*L^d)\otimes ({\widetilde f}^*\pi^*({\mathcal O}_Y{(D)})^*)\\ {} &=\,& (\widetilde{f}^*L^d)\otimes (\widetilde{\pi} ^*(f^*{\mathcal O}_Y{(D)})^*) \\ {} &=\,& (\widetilde{f}^*L^d)\otimes (\widetilde{\pi}^* {M^*})^d \\ {} &=\,& ((\widetilde{f}^*L)\otimes (\widetilde{\pi}^* M^*))^d. \end{array}$$ Set $\mathcal {L}\,=\, (\widetilde f^*L)\otimes \widetilde{\pi}^* M^*$. Then $\mathcal L$ is ample, and hence from Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} it follows that $\widetilde{\pi}_*(\mathcal {L}\otimes K_{\widetilde{X}/{\widetilde Y}})$ is ample. Now $$\widetilde{\pi}_*(\mathcal{L}\otimes K_{\widetilde{X}/\widetilde{Y}})\,=\, \widetilde{\pi}_*((\widetilde{f}^{*}L)\otimes K_{\widetilde{X}/\widetilde{Y}})\otimes M^*,$$ where $M\,=\, {\mathcal O}_{\widetilde Y}(\widetilde D)$ is effective. Then $\widetilde{\pi}_*((\widetilde{f}^{*}L)\otimes K_{\widetilde{X}/\widetilde{Y}})$ is big, because an ample vector bundle tensored with an effective line bundle is big (see [@La2 Example 6.1.23]). But $\widetilde{\pi}_*((\widetilde{f}^{*}L)\otimes K_{\widetilde{X}/\widetilde Y})\,=\, f^*(\pi_*(L\otimes K_{X/Y}))$ by [\[e2\]](#e2){reference-type="eqref" reference="e2"}. Since $f$ is finite we conclude that $\pi_*(L\otimes K_{X/Y})$ is big. Moreover, if $L$ is $\pi$-strongly big and nef, by Proposition [Proposition 1](#prop1){reference-type="ref" reference="prop1"} the direct image $\pi_*(L\otimes K_{X/Y})$ is nef. This completes the proof. # Some observations **Proposition 1**. *Let $L$ be a line bundle on a non-singular projective variety $X$ and $E$ a vector bundle on $X.$ Assume that the vector bundle $E^{\otimes s}\otimes L$ is generated by its global sections for every integer $s\,>\!\!> \,0$. Then $E$ is nef.* *Proof.* Assume that $E$ is not nef. Then a criterion for nef bundles (see [@La2 Proposition 6.1.18]) gives that there is a non-constant morphism $f \,:\,C \,\longrightarrow\, X$ from a non-singular projective curve $C$, and a line bundle $M$ on $C$ of degree $n \,<\, 0$, such that there is a surjective map $$f^*E \,\longrightarrow\, M.$$ So for all $s \,>\, 0$ there is a surjection $$(f^*E)^{\otimes s} \otimes f^{*}L\,=\, f^*(E^{\otimes s} \otimes L)\,\longrightarrow\, M^{\otimes s} \otimes (f^{*}L).$$ Note that the degree of the line bundle $M^{\otimes s} \otimes (f^{*}L)$ is $ns + d,$ where $d$ is the degree of the line bundle $f^{*}L.$ Since $n\,<\,0$, we have $ns + d \,<\, 0$ for all large $s.$ The bundle $(f^*E)^{\otimes s} \otimes (f^{*}L)$ is generated by its global sections, by hypothesis, for all large $s.$ Hence it cannot admit a surjective map to a line bundle of negative degree. Thus we get a contradiction to the assumption that $E$ is not nef. This proves the proposition. ◻ We give an example to show that the assumptions do hold. **Example 1**. Let $Z$ and $Y$ be two smooth projective varieties, and let $X \,=\, Y \times Z.$ Let $\pi_1\,:\, X \,\longrightarrow\, Y$ and $\pi_2\,: \,X\,\longrightarrow\, Z$ be the natural projections. If $L_1$ is a big line bundle on $Y$, and $L_2$ is an ample line bundle on $Z$, then the line bundle $M\,=\,(\pi_1^*L_1) \otimes (\pi_2^*L_2)$ is big on $X$ and $(\pi_1)_*(M\otimes K_{X/Y} )\,=\, {\text H}^0(Z,\,L_2\otimes K_Z)\otimes L_1$ is big provided ${\text H}^0(Z,\,L_2\otimes K_Z) \,\neq\, 0$. **Remark 1**.
Note that in Example [Example 1](#ex1){reference-type="ref" reference="ex1"} the line bundle $M$ is $\pi_1$-strongly big provided the big bundle $L_1$ satisfies the following property: There is an integer $d\, >\, 0$, and a divisor $D$ on $Y$ with simple normal crossing support, such that $L_1^d(-D)$ is ample on $Y.$ Let $\pi\,:\, X\,\longrightarrow\, Y$ be a smooth fibration of smooth projective varieties. The next example shows that when $L$ is nef and $\pi$-big, the direct image $\pi_* (L\otimes K_{X/Y})$ is not necessarily big. **Example 1**. Let $Z$ and $Y$ be two smooth projective varieties, and let $X \,=\, Y \times Z.$ Let $\pi_1\,:\, X \,\longrightarrow\, Y$ and $\pi_2\,:\, X \, \longrightarrow\, Z$ be the natural projections. Let $M$ be an ample line bundle on $Z$. Then the line bundle $\pi_2^* M$ is nef and $\pi_1$-ample (and hence $\pi_1$-big) on $X$, and $(\pi_1)_*((\pi_2^*M)\otimes K_{X/Y} )\,=\, {\text H}^0(Z,\,M\otimes K_Z)\otimes {\mathcal O}_Y$ is a trivial bundle (non-zero if ${\text H}^0(Z,\,M\otimes K_Z) \,\neq\, 0$), which is nef but not big. **Definition 1**. Let $\pi\,:\,X\,\longrightarrow\, Y$ be a smooth fibration of smooth projective varieties. A line bundle $L$ on $X$ is said to be *$\pi$-weakly big* if there is an effective divisor $D$ on $Y$ such that $L^{d}\otimes \pi^{*}({\mathcal O}_Y(D))^*$ is ample for some integer $d\,>\,0,$ where $\pi^{*}({\mathcal O}_Y(D))^*\,=\, \pi^{*}({\mathcal O}_Y(-D)).$ **Remark 1**. For a smooth fibration $\pi\,:\,X\,\longrightarrow\, Y$ of smooth projective varieties, a $\pi$-strongly big line bundle is $\pi$-weakly big, but a $\pi$-weakly big line bundle need not be $\pi$-strongly big. **Proposition 1**. *Let $\pi\,:\,X\,\longrightarrow\, Y$ be as in Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} and let $L$ be a $\pi$-weakly big line bundle on $X.$ Then there is an integer $r > 0$ such that the bundle $\pi_*(L^{rm}\otimes K_{X/Y})$ is big for all $m > 0.$* **Proof:** By the definition of a $\pi$-weakly big line bundle there is an effective divisor $D$ on $Y$ and a positive integer $r$ such that $L^r\otimes \pi^{*}({\mathcal O}_Y(D))^*$ is ample, and hence $L^{rm}\otimes \pi^{*}({\mathcal O}_Y(mD))^*$ is ample for all $m>0.$ By Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}, $\pi_*(L^{rm}\otimes \pi^{*}({\mathcal O}_Y(mD))^*\otimes K_{X/Y})$ is ample for all $m \,>\, 0$. But by the projection formula $$\pi_*(L^{rm}\otimes K_{X/Y}) \,=\, \pi_*(L^{rm}\otimes \pi^{*}({\mathcal O}_Y(mD))^*\otimes K_{X/Y})\otimes {\mathcal O}_Y(mD)$$ for all $m > 0.$ Thus $\pi_*(L^{rm}\otimes K_{X/Y})$ is big for all $m>0,$ being an ample vector bundle tensored with an effective line bundle (see [@La2 Example 6.1.23]). ◻ **Conjecture:** Let $\pi\,:\,X\,\longrightarrow\, Y$ be as in Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} and let $L$ be a big and nef line bundle on $X.$ Then $\pi_*(L\otimes K_{X/Y})$ is nef and big. # Acknowledgements {#acknowledgements .unnumbered} DSN thanks CEMPI of Lille University for financial support. R. Hartshorne, Ample vector bundles on curves, *Nagoya Math. J.* **43** (1971), 73--89. R. Lazarsfeld, *Positivity in algebraic geometry. I*, Ergeb. Math. Grenzgeb. 48, Springer-Verlag, Berlin, 2004. R. Lazarsfeld, *Positivity in algebraic geometry. II*, Ergeb. Math. Grenzgeb. 49, Springer-Verlag, Berlin, 2004. C. Mourougane, Image direct de fibré en droites adjoints, *Proc. Res. Ins. Math. Sci.* **33** (1997), 893--916. E. Viehweg, *Quasi-projective Moduli for Polarised Manifolds*, Ergebnisse der Mathematik und ihrer Grenzgebiete, 3.
Folge, A Series of Modern Surveys in Mathematics, Springer, 1995. F. Laytimi, On Degeneracy Loci, *Int. Jour. of Math.* **7** (1996), 745--754.
arxiv_math
{ "id": "2310.02764", "title": "On the direct image of the adjoint line bundle", "authors": "Indranil Biswas, Fatima Laytimi, D. S. Nagaraj, Werner Nahm", "categories": "math.AG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Say that $(x, y, z)$ is a positive primitive integral Pythagorean triple if $x, y, z$ are positive integers without common factors satisfying $x^2 + y^2 = z^2$. An old theorem of Berggren gives three integral invertible linear transformations whose semi-group actions on $(3, 4, 5)$ and $(4, 3, 5)$ generate all positive primitive Pythagorean triples in a unique manner. We establish an analogue of Berggren's theorem in the context of a one-variable polynomial ring over a field of characteristic $\neq 2$. As its corollaries, we obtain some structure theorems regarding the orthogonal group with respect to the Pythagorean quadratic form over the polynomial ring. address: - Department of Mathematics, 2400 W Chew st., Allentown, PA 18104 - Department of Mathematics, 300 N Washington St, Gettysburg, PA 17325 author: - Byungchul Cha - Ricardo Conceição title: A polynomial analogue of Berggren's theorem on Pythagorean triples --- # Introduction A triple $(x,y,z)\in \mathbb{Z}^3$ is an *integral Pythagorean triple* if it satisfies $$\label{eq:pyt} x^2+y^2=z^2.$$ It is said to be *primitive* if $\gcd(x,y,z)=1$ and *positive* if $x,y,z>0$. An old theorem of Berggren [@Ber34], rediscovered independently first by [@Bar63] and later by several other authors[^1], says that every positive primitive integral Pythagorean triple can be generated from the well-known integral Pythagorean triple $(3,4,5)$ using four linear transformations, one of which is the permutation of $x$ and $y$. More precisely, if we define $$N_1 = \begin{pmatrix} 1 & -2 & 2 \\ 2 & -1 & 2 \\ 2 & -2 & 3 \end{pmatrix}, \quad N_2 = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & 2\\ 2 & 2 & 3 \end{pmatrix}, \quad N_3 = \begin{pmatrix} -1 & 2 & 2 \\ -2 & 1 & 2\\ -2 & 2 & 3 \end{pmatrix} \label{eq:Berggren_matrices}$$ then **Theorem 1** (Berggren's theorem). *Let $(x, y, z)$ be a positive primitive integral Pythagorean triple. Then there exists a unique sequence $\{d_1, \dots, d_k\}\in \{ 1, 2, 3 \}^k$ such that $$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = N_{d_1} \cdots N_{d_k} \begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix} \text{ or } \begin{pmatrix} x \\ y \\ z \end{pmatrix} = N_{d_1} \cdots N_{d_k} \begin{pmatrix} 4 \\ 3 \\ 5 \end{pmatrix}.$$ (Here, $\{d_1, \dots, d_k\}$ is understood to be an empty sequence if $(x, y, z) = (3, 4, 5)$ or $(4, 3, 5)$.)* Berggren's theorem has been generalized to Pythagorean forms in more than two variables [@CA90] and certain indefinite binary quadratic forms [@CNT]. In both cases, it is shown that all primitive integral tuples are generated from a finite set of primitive tuples and a finite number of linear transformations. The main result of this paper, Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} below, is an analogue of Berggren's theorem for polynomial rings over a field. Similarly to the results in [@CA90] and [@CNT], it describes how all primitive polynomial Pythagorean triples can be generated from a single polynomial Pythagorean triple using linear transformations and composition of polynomials. The remaining of this introduction is used to make this statement more precise by finding analogues over $K[t]$ of the notions of positive primitive integral Pythagorean triples, the triple $(3,4,5)$ and the matrices in [\[eq:Berggren_matrices\]](#eq:Berggren_matrices){reference-type="eqref" reference="eq:Berggren_matrices"}. 
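Before turning to the polynomial setting, here is a small computational illustration (not part of the paper) of Berggren's theorem: repeatedly applying $N_1$, $N_2$, $N_3$ to the two root triples $(3,4,5)$ and $(4,3,5)$ enumerates positive primitive Pythagorean triples without repetition. The helper names and the bound on the hypotenuse are arbitrary choices made for this sketch; the pruning step relies on the fact that each $N_d$ strictly increases the hypotenuse of a positive triple.

```python
# Sanity check of Berggren's theorem: every triple reached from (3, 4, 5) and
# (4, 3, 5) by N_1, N_2, N_3 is a positive primitive Pythagorean triple, and no
# triple is produced twice (uniqueness of the representation).
from math import gcd

N = [
    [[1, -2, 2], [2, -1, 2], [2, -2, 3]],   # N_1
    [[1,  2, 2], [2,  1, 2], [2,  2, 3]],   # N_2
    [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]],   # N_3
]

def act(A, v):
    """Apply a 3x3 integer matrix to a triple."""
    return tuple(sum(A[i][j] * v[j] for j in range(3)) for i in range(3))

def berggren_triples(z_bound):
    """All triples in the Berggren tree whose hypotenuse is at most z_bound."""
    stack, found = [(3, 4, 5), (4, 3, 5)], []
    while stack:
        triple = stack.pop()
        if triple[2] > z_bound:
            continue                        # every descendant has an even larger hypotenuse
        found.append(triple)
        stack.extend(act(A, triple) for A in N)
    return found

triples = berggren_triples(100)
assert all(x > 0 and y > 0 and x * x + y * y == z * z for (x, y, z) in triples)
assert all(gcd(gcd(x, y), z) == 1 for (x, y, z) in triples)
assert len(triples) == len(set(triples))    # no triple is generated twice
print(sorted(triples, key=lambda t: t[2])[:6])
```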
We also present applications of the polynomial version of Berggren's Theorem to the group of linear automorphisms of [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"} over a polynomial ring. Let $K$ be a field of characteristic $\neq 2$ and $K[t]$ be the ring of polynomials over $K$ in the indeterminate $t$. A non-zero triple $(x,y,z)\in K[t]^3$ satisfying [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"} is called a *(polynomial) Pythagorean triple*. As before, $(x,y,z)$ is said to be primitive if $\gcd(x,y,z)=1$[^2]. The analogue of a positive primitive integral Pythagorean triple is a primitive Pythagorean triple $(x, y, z) \in K[t]^3$ such that $\deg x<\deg y = \deg z$ and the leading coefficients of $y$ and $z$ are the same. We call them *standard Pythagorean triples* or SPT for short. As in the classical case, any non-standard primitive Pythagorean triple can be brought to a standard one by means of a $K$-linear coordinate change (cf. Lemma [Lemma 19](#lem:non_SPT_Rf){reference-type="ref" reference="lem:non_SPT_Rf"}). Moreover, it follows from the primitivity condition that all SPT's $(x,y,z)$ with $x=0$ are of the form $(0,c,c)$, for some $c\in K^*$. Therefore, we restrict ourselves to the study of SPT's with $x\neq0$. To find an analogue to $(3,4,5)$, we observe that $$S_t=(2t,t^2-1,t^2+1),$$ which comes from the classical rational parametrization of the unit circle, yields an SPT of smallest height (see definition of height in §[2](#sec:Prelim){reference-type="ref" reference="sec:Prelim"}). But, because [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"} is defined over $K$, new SPT's can be created from other SPT's by replacing $t$ with a polynomial $f\in K[t]\backslash K$. In particular, $$\label{eq:St} S_f=(2f,f^2-1,f^2+1)$$ is also an SPT, which we take to be the natural analogue over $K[t]$ of the triple $(3,4,5)$. As for the linear transformations [\[eq:Berggren_matrices\]](#eq:Berggren_matrices){reference-type="eqref" reference="eq:Berggren_matrices"} appearing in Berggren's theorem, in §[2](#sec:Prelim){reference-type="ref" reference="sec:Prelim"} we explain how they can be constructed from reflections on a quadratic space defined by the quadratic form $\mathcal Q(x, y, z) = x^2 + y^2 - z^2$ associated to [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"}. When this construction is applied to reflections defined by a polynomial $f\in K[t]$, we arrive at the matrix $$M_f=\left(\begin{array}{rrr} -1 & 2 f & 2 f \\ -2 f & 2 f^{2} - 1 & 2 f^{2} \\ -2 f & 2 f^{2} & 2 f^{2} + 1 \end{array}\right).$$ This is the final piece needed to state the following analogue of Berggren's theorem, which is proved in §[3](#sec:Tree_structure_SPT){reference-type="ref" reference="sec:Tree_structure_SPT"}. **Theorem 2**. *Let $Q=(x,y,z)$ be an SPT with $x\neq 0$. Then there exist $c\in K^*$, $f\in K[t]\backslash K$, and a (possibly empty) sequence $\{f_1, \dots, f_k \}$ in $K[t] \backslash K$ such that $$Q^T = cM_{f_1} \cdots M_{f_k} S_f^T.$$ Moreover, this representation of $Q$ is unique.* As consequences of this polynomial Berggren's theorem, we obtain the following two theorems on the orthogonal group $O_{\mathcal Q}(K[t])$ of the quadratic form $\mathcal Q$. **Theorem 3**.
*The group $O_{\mathcal Q}(K[t])$ acts transitively on the set of all primitive Pythagorean triples.* For each $c\in K^*$ and $f\in K[t]$, define $$\label{eq:definition_T_c} T_c = \begin{pmatrix} 1 & 0 & 0 \\ 0 & (c + c^{-1})/2 & (c - c^{-1})/2 \\ 0 & (c - c^{-1})/2 & (c + c^{-1})/2 \\ \end{pmatrix}$$ and $$%\label{EqDefRt} R_f= \left(\begin{array}{rrr} -1 & -2 f & 2 f \\ -2 f & -2 f^{2} + 1 & 2 f^{2} \\ -2 f & -2 f^{2} & 2 f^{2} + 1 \end{array}\right).$$ It is straightforward to see that $T_c$ and $R_f$ preserve the form $\mathcal Q(x, y, z)$. **Theorem 4**. *The group $O_{\mathcal Q}(K[t])$ is generated by the following set: $$\{ R_f \mid f \in K[t] \} \cup \left\{ P_{xy} \right\} \cup \left\{ T_c \mid c\in K^* \right\}$$ where $P_{xy}$ is the permutation $(x, y, z) \mapsto (y, x, z)$.* The paper is organized as follows. In §[2](#sec:Prelim){reference-type="ref" reference="sec:Prelim"} we present some preliminary definitions and results that will be used in the proofs of the three theorems above. The proof of Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} is given in §[3](#sec:Tree_structure_SPT){reference-type="ref" reference="sec:Tree_structure_SPT"}, while the proofs of Theorems [Theorem 3](#thm:transitivity_action){reference-type="ref" reference="thm:transitivity_action"} and [Theorem 4](#thm:orthogonal_generator){reference-type="ref" reference="thm:orthogonal_generator"} are given in §[4](#sec:Corolaries){reference-type="ref" reference="sec:Corolaries"}. # Definitions and preliminary results {#sec:Prelim} Recall that $K$ is a field of characteristic $\neq 2$. Given a non-zero polynomial $f\in K[t]$, we denote by $\ell(f)$ the (nonzero) leading coefficient of $f$ and $\deg(f)$ the degree of $f$. We adopt the convention that $\deg(0) = -\infty$. For a triple $Q = (x, y, z)\in K[t]^3$, the *height of $Q$* is the integer $$h(Q)=\max\{\deg(x), \deg(y), \deg(z)\}.$$ When $\gcd(x, y, z) = 1$, we say that $Q$ is *primitive*. Given a ring $R$, recall that $A \in \operatorname{GL}_3(R)$ if both $A$ and $A^{-1}$ are defined over $R$. Below we record two properties of height and primitivity of triples that will be useful later. **Lemma 5**. *For a triple $Q = (x, y, z)\in K[t]^3$ and $A \in \operatorname{GL}_3(K[t])$, write $\tilde{Q}^T = (\tilde{x}, \tilde{y}, \tilde{z})^T = AQ^T$. Then $\gcd(x, y, z) = \gcd(\tilde{x}, \tilde{y}, \tilde{z})$. In particular, $Q$ is primitive if and only if $\tilde{Q}$ is primitive.* *Proof.* Write $f = \gcd(x, y, z)$ and $\tilde{f}= \gcd(\tilde{x}, \tilde{y}, \tilde{z})$. Then $f$ divides any $K[t]$-linear combination of $x, y, z$. Therefore, $f$ divides each of $\tilde{x}$, $\tilde{y}$, $\tilde{z}$, thus $\tilde{f}$ as well. Apply the same argument with $A^{-1}$ to show that $\tilde{f}$ divides $f$. So we conclude that $f = \tilde{f}$, which clearly implies that $A$ preserves primitivity. ◻ **Lemma 6**. *For a triple $Q = (x, y, z)\in K[t]^3$ and $A \in \operatorname{GL}_3(K)$, $$h(Q) = h(AQ^T).$$* *Proof.* As before, write $Q = (x, y, z)$ and $\tilde{Q}^T = (\tilde{x}, \tilde{y}, \tilde{z})^T = AQ^T$. Then the degree of any $K$-linear combination of $x, y, z$ cannot exceed $h(Q)$, therefore $h(\tilde{Q}) \le h(Q)$. Apply the same argument with $A^{-1}$, which would give $h(Q) \le h(\tilde{Q})$. This completes the proof. 
◻ To find analogues over $K[t]$ of the matrices $N_1, N_2, N_3$ in [\[eq:Berggren_matrices\]](#eq:Berggren_matrices){reference-type="eqref" reference="eq:Berggren_matrices"}, we first contextualize their construction using a geometric interpretation that first appeared in Conrad's note [@Conrad] and that was later generalized by [@CNT]. Since this construction works simultaneously for both $\mathbb{Z}$ and $K[t]$, we briefly consider the more general framework of an integral domain $D$ of characteristic $\neq 2$ and its fraction field $F$. We view the quadratic form $$\label{eq:pyth_quadform} \mathcal Q(x, y, z) = x^2 + y^2 - z^2$$ associated to [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"} as being defined over $D$. The orthogonal group $O_{\mathcal Q}(D)$ of $\mathcal Q$ over $D$ is, by definition, the group of matrices $A\in GL_3(D)$ satisfying $\mathcal Q(A\mathbf{x}) = \mathcal Q(\mathbf{x})$, for all $\mathbf{x}\in D^3$. Note that $\mathcal Q$ defines the bilinear pairing $\langle , \rangle: F^3 \times F^3 \longrightarrow F$ $$\label{eq:Berggren_bilinear_pairing} \langle \mathbf{x}, \mathbf{y} \rangle = \frac{1}{2} \left( \mathcal Q(\mathbf{x} + \mathbf{y}) - \mathcal Q(\mathbf{x}) - \mathcal Q(\mathbf{y}) \right),$$ for $\mathbf{x}, \mathbf{y} \in F^3$. For $\mathbf{w}\in F^3$ with $\mathcal Q(\mathbf{w}) \neq 0$, we define a *reflection* $R_{\mathbf{w}}$ with respect to $\mathbf{w}$ to be the linear map from $F^3$ onto itself given by $$R_{\mathbf{w}}(\mathbf{x}) = \mathbf{x} - 2\frac{ \langle \mathbf{x}, \mathbf{w}\rangle}{\mathcal Q(\mathbf{w})}\mathbf{w}. \label{eq:Berggren_reflection_definition}$$ The map $R_{\mathbf{w}}$ is easily seen to have order 2 and to be an element of $O_{\mathcal Q}(F)$. If $\mathbf{w} \in D^3$ is such that $\mathcal Q(\mathbf{w})=\pm1,\pm2$, then $R_{\mathbf{w}}$ is actually an element of $O_{\mathcal Q}(D)$. When we specialize to the case $D=\mathbb{Z}$, $F=\mathbb Q$, $\mathbf{w}=(1,1,1)$, we may regard $R_{\mathbf{w}}$ as a $3\times 3$ matrix with respect to the standard basis of $\mathbb Q^3$. Under this point of view, for $d = 1, 2, 3$, the matrices $N_d$ in [\[eq:Berggren_matrices\]](#eq:Berggren_matrices){reference-type="eqref" reference="eq:Berggren_matrices"} are given by $$N_d = R_{\mathbf{w}} U_d \label{eq:Berggren_construction_Md}$$ where $U_1, U_2, U_3$ are defined by $$U_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad U_2 = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0\\ 0 & 0 & 1 \end{pmatrix}, \quad U_3 = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \label{eq:def_Berggren_Ud}$$ Considering the case where $D=K[t]$, $F=K(t)$ and $\mathbf{w}=(1,f,f)$, for some $f\in K[t]$, we denote by $R_f$ the reflection with respect to $\mathbf{w}$ using the formula [\[eq:Berggren_reflection_definition\]](#eq:Berggren_reflection_definition){reference-type="eqref" reference="eq:Berggren_reflection_definition"}.
The matrix representation of $R_f$ with respect to the standard basis of $K(t)^3$ is $$\label{EqDefRt} R_f= \left(\begin{array}{rrr} -1 & -2 f & 2 f \\ -2 f & -2 f^{2} + 1 & 2 f^{2} \\ -2 f & -2 f^{2} & 2 f^{2} + 1 \end{array}\right).$$ Observe that $R_f$ is the matrix appearing in Theorem [Theorem 4](#thm:orthogonal_generator){reference-type="ref" reference="thm:orthogonal_generator"}, while the matrix $M_f$ appearing in the statement of Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} is given by the product $$\label{eq:definition_M_f} R_fU_1=M_f=\left(\begin{array}{rrr} -1 & 2 f & 2 f \\ -2 f & 2 f^{2} - 1 & 2 f^{2} \\ -2 f & 2 f^{2} & 2 f^{2} + 1 \end{array}\right).$$ Since $\mathcal Q(1,f, f) = 1$, we see that both $R_f$ and $M_f$ are elements of $O_{\mathcal Q}(K[t])$, with $R_f$ having order 2. We record here $M_f^{-1}$ for future use: $$\label{eq:rmininv} M_f^{-1}= U_1 R_f = \left(\begin{array}{rrr} -1 & -2 f & 2 f \\ 2 f & 2 f^{2} - 1 & -2 f^{2} \\ -2 f & -2 f^{2} & 2 f^{2} + 1 \end{array}\right).$$ The matrix $R_f$ also satisfies the following identities. **Lemma 7**. *For $a, b\in K[t]$, we have $$\begin{cases} R_{a}R_{b} = R_{a - b}R_0,\\ R_aR_0R_b = R_{a + b}. \end{cases}$$* *Proof.* Both of these equations are easily verified by direct computation. ◻ As defined in the introduction, a *standard Pythagorean triple (SPT)* is a primitive Pythagorean triple $(x,y,z)$ satisfying $\deg x<\deg y=\deg z$ and $\ell(y) =\ell(z)$. SPT's play the role over $K[t]$ of a positive primitive integral Pythagorean triple. In the classical setting, every primitive integral Pythagorean triple can be obtained from a positive one by an element of $O_{\mathcal Q}(\mathbb{Z})$; namely, a change of sign. The next two results are used to show that, similar to the integral case, any non-standard primitive Pythagorean triple can be obtained from SPT's via multiplication by an element of $O_{\mathcal Q}(K[t])$. **Lemma 8**. *Suppose that $Q = (x, y, z) \in K[t]^3$ is a Pythagorean triple but not an SPT. Then $Q$ is one of the following types:* - *$\deg x<\deg y=\deg z$ and $\ell(y)=-\ell(z)$.* - *$\deg y<\deg x=\deg z$.* - *$\deg x=\deg y=\deg z$.* - *$\deg z < \deg x = \deg y$. In this case, $K$ must contain a square root of $-1$, say, $i = \sqrt{-1}$, and $\ell(x)=\pm i\ell(y)$.* *Proof.* The proof follows easily by comparing the degrees and leading coefficients of the polynomials appearing in both sides of the equation [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"}. We leave the details to the reader. ◻ **Lemma 9**. *If $Q$ is a primitive Pythagorean triple that is not an SPT and $f\in K[t]\backslash K$ then $R_fQ$ is an SPT.* *Proof.* Write $Q=(x,y,z)$. Then $Q$ is one of the type (I)--(IV) in Lemma [Lemma 8](#lem:non_SPT){reference-type="ref" reference="lem:non_SPT"}. Let $d = z - y$. We claim that $d$ is a non-zero polynomial with $\deg d = \max\{\deg y,\deg z\} = h(Q)$. This claim is obvious if $\deg z \neq \deg y$ (type (II) or (IV)). Also, if $Q$ is type (I), then clearly $\deg d = \deg y = \deg z$. In case (III), we must have $\ell(x)^2+\ell(y)^2=\ell(z)^2$, which implies $\ell(y)\neq\pm \ell(z)$ (because $\ell(x)$ is nonzero). Therefore, the claim is proved. 
If $\begin{pmatrix} \tilde{x},\tilde{y},\tilde{z} \end{pmatrix}^T=R_f \begin{pmatrix} x,y,z \end{pmatrix}^T$ then [\[EqDefRt\]](#EqDefRt){reference-type="eqref" reference="EqDefRt"} gives $$\label{eq:rplus_of_f} \begin{aligned} \tilde{x}&= -x +2 f d,\\ \tilde{y}&=-2 f x +2 f^{2}d + y=(\tilde{x}-x)f+y, \\ \tilde{z}&=-2 f x +2 f^{2}d + z=(\tilde{x}-x)f+z. \end{aligned}$$ As a consequence, $$\deg \tilde{x}=\deg d+\deg f<\deg d+2\deg f=\deg \tilde{y}=\deg \tilde{z}$$ and $\ell(\tilde{y})=\ell(\tilde{z})=\ell(2f^2d)$. ◻ We end this section of preliminary results with a lemma relating the height of an SPT $Q$ with that of $M_fQ^T$, for some $f\in K[t]\backslash K$. **Lemma 10**. *If $Q$ is an SPT then, for all $f\in K[t]\backslash K$, $M_fQ^T=(\tilde{x},\tilde{y},\tilde{z})^T$ is an SPT with $\tilde{x}\neq 0$. Furthermore, $$h(M_fQ^T) = 2\deg f + h(Q).$$* *Proof.* Write $Q=(x,y,z)$ and let $\begin{pmatrix} \tilde{x},\tilde{y},\tilde{z} \end{pmatrix}^T=M_f \begin{pmatrix} x,y,z \end{pmatrix}^T$. Then [\[eq:definition_M\_f\]](#eq:definition_M_f){reference-type="eqref" reference="eq:definition_M_f"} implies $$\label{eq:rminus_of_f} \begin{array}{ccl} \tilde{x}&=& -x +2 f (y+z)\\ \tilde{y}&=&-2 f x +2 f^{2}(y+z) - y \\ \tilde{z}&=&-2 f x +2 f^{2}(y+z) + z \end{array}$$ Because $Q$ is an SPT and $f$ is non-constant, by analyzing degrees we can see that the leading terms from both $\tilde{y}$ and $\tilde{z}$ come from the polynomial $2 f^{2}(y+z)$, while the leading term from $\tilde{x}$ comes from $2 f (y+z)$. From that, it follows easily that $M_fQ^T$ is an SPT with $\tilde{x}\neq 0$ and $h(M_fQ^T) = 2\deg f + h(Q)$. ◻ # Proof of Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} {#sec:Tree_structure_SPT} In this section, we use infinite descent to prove the existence of a product representation of an SPT as given in Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"}. Its proof in Corollary [Corollary 13](#cor:infinite_desc){reference-type="ref" reference="cor:infinite_desc"} is a consequence of the next two propositions. The proof of the uniqueness of the representation in Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} is found in Corollary [Corollary 17](#cor:uniqueness){reference-type="ref" reference="cor:uniqueness"}. **Proposition 11**. *Let $Q=(x,y,z)$ be an SPT with $x \neq 0$ and $\deg z\neq 2\deg x$. Then there exists $f\in K[t]\backslash K$ such that $\tilde{Q}^T=\begin{pmatrix} \tilde{x},\tilde{y},\tilde{z} \end{pmatrix}^T =M_f^{-1}Q^T$ is an SPT with $\tilde{x}\neq0$ and $h(\tilde{Q})<h(Q)$.* *Proof.* Let $f\in K[t]$ satisfy $z=fx+k$ with $\deg k<\deg x$. Notice that $f\not\in K$ since $\deg z>\deg x$. 
From [\[eq:rmininv\]](#eq:rmininv){reference-type="eqref" reference="eq:rmininv"}, it follows that $$\begin{aligned} \tilde{x}&=& -x -2 f y + 2 f z\\ \tilde{y}&=&2 f x + (2 f^{2} - 1) y -2 f^{2} z\\ \tilde{z}&=&-2 f x -2 f^{2} y + (2 f^{2} + 1) z.\end{aligned}$$ If we let $d=z-y$ then the above expressions can be simplified into $$\begin{aligned} \label{eq:bx}\tilde{x}&=& -x+2fd\\ \label{eq:by}\tilde{y}&=&-\tilde{z}+d\end{aligned}$$ Moreover, since $(\tilde{x},\tilde{y},\tilde{z})$ is a Pythagorean triple, we have that $$\label{eq:barxD} \tilde{x}^2+d^2=2\tilde{z}d.$$ We also observe that [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"} yields $$\label{eq:pytD} x^2=d(z+y).$$ From this last equality, the definition of $f$ and $k$, and [\[eq:bx\]](#eq:bx){reference-type="eqref" reference="eq:bx"} we obtain $$\label{eq:bxDk} \tilde{x}x=-d(z+y)+2(z-k)d=d(d-2k).$$ Because $(x,y,z)$ is an SPT with $x\neq 0$, we conclude that $d\neq 0$ and, by [\[eq:pytD\]](#eq:pytD){reference-type="eqref" reference="eq:pytD"}, $$\label{eq:degD} \deg d= 2\deg x-\deg z.$$ Since $(x,y,z)$ is an SPT with $\deg z\neq 2\deg x$, [\[eq:degD\]](#eq:degD){reference-type="eqref" reference="eq:degD"} and $\deg x<\deg z$ show that $$\label{eq:degdx} 0<\deg d<\deg x.$$ Thus, $\deg(d-2k)<\deg x$ and, by equating degrees in [\[eq:bxDk\]](#eq:bxDk){reference-type="eqref" reference="eq:bxDk"}, we arrive at $$\deg \tilde{x}+\deg x=\deg d+\deg(d-2k)<\deg d+\deg x.$$ This shows that $\deg \tilde{x}<\deg d$. Consequently, [\[eq:barxD\]](#eq:barxD){reference-type="eqref" reference="eq:barxD"} proves that $\deg \tilde{z}=\deg d$ and $2\ell(\tilde{z})=\ell(d)$. Together with [\[eq:by\]](#eq:by){reference-type="eqref" reference="eq:by"}, we conclude that $\ell(\tilde{y})=\ell(\tilde{z})$ and that $\deg \tilde{y}=\deg \tilde{z}=\deg d>\deg \tilde{x}$, proving that $(\tilde{x},\tilde{y},\tilde{z})$ is an SPT. If we assume that $\tilde{x}=0$, then $\tilde{z}=\tilde{y}$ and $$\tilde{z}=\gcd(\tilde{x},\tilde{y},\tilde{z})=\gcd(x,y,z)=1.$$ Moreover, [\[eq:barxD\]](#eq:barxD){reference-type="eqref" reference="eq:barxD"} shows that $1=\tilde{z}=d/2$. Since this equality contradicts [\[eq:degdx\]](#eq:degdx){reference-type="eqref" reference="eq:degdx"}, we conclude that $\tilde{x}\neq 0$. To finish our proof, notice that $Q^T=M_f\tilde{Q}^T$ and Lemma [Lemma 10](#lem:RfPPT){reference-type="ref" reference="lem:RfPPT"} imply that $h(\tilde{Q})<h(Q)$. ◻ **Proposition 12**. *Let $Q=(x,y,z)$ be an SPT with $x \neq 0$ and $\deg z= 2\deg x$. Then there exist $c\in K^*$ and $f\in K[t]\backslash K$ such that $Q=cS_f$, for $S_f$ as in [\[eq:St\]](#eq:St){reference-type="eqref" reference="eq:St"}.* *Proof.* We let $e=\deg x$ and $2e=\deg y=\deg z$. Using the Euclidean algorithm, we can find $a\in K^*$ and $b\in K[t]$ satisfying $y=ax^2+b$ and $\deg b<2e$. And since $y$ and $z$ have the same leading coefficient, we have that $z=ax^2+\beta$, for some $\beta \in K[t]$ with $\deg \beta<2e$. From [\[eq:pyt\]](#eq:pyt){reference-type="eqref" reference="eq:pyt"}, we arrive at $$\label{eq:bbeta} x^2(1+2a(b-\beta))=\beta^2-b^2.$$ If $b=0$ or $\beta=0$ then [\[eq:bbeta\]](#eq:bbeta){reference-type="eqref" reference="eq:bbeta"} implies that $\gcd(x,y,z)\neq 1$. Therefore, we assume that $\beta$ and $b$ are non-zero. We show that $\beta^2-b^2= 0$.
If we assume otherwise, then $(\beta-b)\neq 0$, $(\beta+b)\neq 0$ and, by equating degrees in [\[eq:bbeta\]](#eq:bbeta){reference-type="eqref" reference="eq:bbeta"}, $$2e=\deg(\beta+b)\leq \max\{\deg b,\deg \beta\}<2e.$$ Also from [\[eq:bbeta\]](#eq:bbeta){reference-type="eqref" reference="eq:bbeta"} we see that $\beta\neq b$, since $x\neq 0$. Therefore $b=-\beta$ and, consequently, [\[eq:bbeta\]](#eq:bbeta){reference-type="eqref" reference="eq:bbeta"} implies that $$1+4ab=0.$$ This proves that $$Q=\left(x,ax^2-\frac{1}{4a},ax^2+\frac{1}{4a}\right).$$ The result follows by taking $f=2ax$ and $c=1/4a$. ◻ **Corollary 13**. *Let $Q=(x,y,z)$ be an SPT with $x\neq 0$. Then there exist a sequence $\{f,f_1, \dots, f_k \}$ in $K[t] \backslash K$ and $c\in K^*$ such that $$Q^T = cM_{f_1} \cdots M_{f_k} S_f^T.$$* *Proof.* If $Q=(x,y,z)$ satisfies $\deg z=2\deg x$ then $Q=cS_f$, where $c\in K^*$ and $f\in K[t]$ are given by Proposition [Proposition 12](#prop:degz2degx){reference-type="ref" reference="prop:degz2degx"}. Thus we may assume that $\deg z\neq 2\deg x$. According to Proposition [Proposition 11](#prop:descent){reference-type="ref" reference="prop:descent"}, there exists $f_1\in K[t]\backslash K$ such that $Q_1^T=(x_1,y_1,z_1)^T=M_{f_1}^{-1}Q^T$ is an SPT with $x_1\neq 0$ and $h(Q_1)<h(Q)$. If $\deg z_1=2\deg x_1$ then, from Proposition [Proposition 12](#prop:degz2degx){reference-type="ref" reference="prop:degz2degx"} we find $c\in K^*$ and $f\in K[t]$ such that $$Q^T=cM_{f_1}S_f^T,$$ and we are done. If $\deg z_1\neq 2\deg x_1$, then we can use Proposition [Proposition 11](#prop:descent){reference-type="ref" reference="prop:descent"} to construct $Q_2^T=(x_2,y_2,z_2)^T=M_{f_2}^{-1}Q_1^T$, for some $f_2\in K[t]\backslash K$, such that $x_2\neq 0$ and $$h(Q_2)<h(Q_1)<h(Q).$$ Again, either $\deg z_2=2\deg x_2$ or $\deg z_2\neq2\deg x_2$. In the first case, we are finished because Proposition [Proposition 12](#prop:degz2degx){reference-type="ref" reference="prop:degz2degx"} implies $$Q^T=cM_{f_2}M_{f_1}S_f^T.$$ Otherwise, we can use Proposition [Proposition 11](#prop:descent){reference-type="ref" reference="prop:descent"} to construct an SPT $Q_3^T=(x_3,y_3,z_3)^T=M_{f_3}^{-1}Q_2^T$ with $x_3\neq 0$ and $$h(Q_3)<h(Q_2)<h(Q_1)<h(Q).$$ In this fashion, for $Q_0=Q$, $i\geq 1$ and some sequence $\{f_1, \dots, f_i \}$ in $K[t] \backslash K$ we can construct a recursive sequence of SPT's $$(x_i,y_i,z_i)^T=Q_i^T=M_{f_i}^{-1}Q_{i-1}^T$$ with $x_i\neq 0$ and $$h(Q_i)<\cdots <h(Q_2)<h(Q_1)<h(Q).$$ Since $h(Q_i)$ is a positive integer for all $i$, the above inequality shows that we cannot continue this construction indefinitely. Therefore, there exists an integer $k\geq 1$ such that $Q_k^T=(x_k,y_k,z_k)^T=M_{f_k}^{-1}Q_{k-1}^T$ is an SPT with $x_k\neq 0$ and $\deg z_k=2\deg x_k$. Therefore, Proposition [Proposition 12](#prop:degz2degx){reference-type="ref" reference="prop:degz2degx"} guarantees the existence of $c\in K^*$ and $f\in K[t]$ such that $$Q_k=cS_f.$$ The result follows by noticing that $$cS_f^T=Q_k^T=M_{f_k}^{-1}Q_{k-1}^T=M_{f_{k}}^{-1}M_{f_{k-1}}^{-1}Q_{k-2}^T=\cdots =M_{f_{k}}^{-1}M_{f_{k-1}}^{-1}\cdots M_{f_1}^{-1}Q^T.$$ ◻ The next series of results are used to prove the uniqueness of the product representation $$Q^T = cM_{f_1} \cdots M_{f_k} S_f^T$$ of any SPT $Q$ with $x\neq 0$. **Lemma 14**. *Let $P$ and $Q$ be SPT's and $g$ and $h$ be polynomials in $K[t]\backslash K$. If $$M_gP^T=M_hQ^T$$ then $P=Q$ and $g=h$.* *Proof.* Write $f=g-h$. Clearly, it is enough to prove that $f=0$.
Use Lemma [Lemma 7](#prop:r+identity){reference-type="ref" reference="prop:r+identity"}, together with the fact that $R_f$ is of order 2, to obtain $$U_1P^T=R_fR_0U_1Q^T.$$ We rewrite this equation as $$\label{eq:RgeqRh} \begin{pmatrix} \tilde{x},\tilde{y},\tilde{z} \end{pmatrix}^T=R_f \begin{pmatrix} x,y,z \end{pmatrix}^T$$ where $(\tilde{x},\tilde{y},\tilde{z})^T=U_1P^T,$ and $(x,y,z)^T=R_0U_1Q^T.$ In what follows, we use notation from the proof of Lemma [Lemma 9](#lem:NonPPTtoPPT){reference-type="ref" reference="lem:NonPPTtoPPT"}. Since $P$ is an SPT, $(\tilde{x},\tilde{y},\tilde{z})$ is a Pythagorean triple of type (I). If we assume that $f\notin K$, Lemma [Lemma 9](#lem:NonPPTtoPPT){reference-type="ref" reference="lem:NonPPTtoPPT"} would imply that the right-hand side of [\[eq:RgeqRh\]](#eq:RgeqRh){reference-type="eqref" reference="eq:RgeqRh"} is an SPT. Therefore, $f\in K$. Notice that $(x,y,z)$ is also a Pythagorean triple of type (I). Thus, if $f\in K^*$ then [\[eq:rplus_of_f\]](#eq:rplus_of_f){reference-type="eqref" reference="eq:rplus_of_f"} implies that $\deg x<\deg \tilde{x}=\deg (z-y)=\deg z=\deg y$. Additionally, [\[eq:rplus_of_f\]](#eq:rplus_of_f){reference-type="eqref" reference="eq:rplus_of_f"} implies that $\ell(\tilde{y})=\ell(\tilde{x})f-\ell(z)$ and $\ell(\tilde{z})=\ell(\tilde{x})f+\ell(z)$. This shows that $\ell(\tilde{y})$ and $\ell(\tilde{z})$ cannot be both zero; consequently, either $\deg \tilde{y}=\deg z$ or $\deg \tilde{z}=\deg z$. Therefore, $\deg \tilde{x}=\deg \tilde{y}$ or $\deg \tilde{x}=\deg \tilde{z}$. Since this contradicts the fact that $(\tilde{x},\tilde{y},\tilde{z})$ is a Pythagorean triple of type (I), we have that $f=g-h=0$, as desired. ◻ **Proposition 15**. *Let $P$ and $Q$ be SPT's and $m\geq n$ be positive integers. Suppose that there are sequences $\{g_0,g_1,\ldots,g_m\}$ and $\{h_0,h_1,\ldots,h_n\}$ in $K[t]\backslash K$ such that $$M_{g_m}M_{g_{m-1}}\cdots M_{g_1}P^T=M_{h_n}M_{h_{n-1}}\cdots M_{h_1} Q^T.$$ Then either $P=Q$, $m=n$ and $g_i=h_i$; or $m>n$, $Q^T=M_{g_{m-n}}\cdots M_{g_1}P^T$ and $g_{m-n+i}=h_{i}$ for $1\leq i\leq n$.* *Proof.* Assume first that $m=n$. We prove, by induction on $n$, that $$M_{g_n}M_{g_{n-1}}\cdots M_{g_1}P^T=M_{h_n}M_{h_{n-1}}\cdots M_{h_1} Q^T$$ implies $P=Q$ and $g_i=h_i$, for all $1\leq i\leq n$. The base case of induction is Lemma [Lemma 14](#lem:RpeqRq){reference-type="ref" reference="lem:RpeqRq"}. By Lemma [Lemma 10](#lem:RfPPT){reference-type="ref" reference="lem:RfPPT"}, $\bar{P}^T=M_{g_{n-1}}\cdots M_{g_1}P^T$ and $\bar{Q}^T=M_{h_{n-1}}\cdots M_{h_1} Q^T$ are SPT's. By assumption, they satisfy $$M_{g_n}\bar{P}^T=M_{h_n}\bar{Q}^T.$$ Another application of Lemma [Lemma 14](#lem:RpeqRq){reference-type="ref" reference="lem:RpeqRq"} implies that $g_n=h_n$ and $\bar{P}=\bar{Q}$. Therefore, the induction hypothesis implies that $P=Q$ and $g_i=h_i$ for all $1\leq i\leq n$. Suppose $m>n$. If $P'^T=M_{g_{m-n}}M_{g_{m-n-1}}\cdots M_{g_1}P^T$ then, by hypothesis, $$M_{g_m}M_{g_{m-1}}\cdots M_{g_{m-n+1}}{P'}^T=M_{h_n}M_{h_{n-1}}\cdots M_{h_1} Q^T.$$ Notice that there are $n$ matrices on both sides of the previous equation. Therefore, we can apply the $m=n$ case which had been proved above to arrive at our desired result. ◻ **Lemma 16**. *Let $P$ be an SPT, $c\in K^*$ and $f$ and $g$ be polynomials in $K[t]\backslash K$. Then $$cS_g^T=M_fP^T$$ if and only if $g=2f$ and $P=(0,c,c)$.* *Proof.* We write $P=(x,y,z)$.
From $cS_g^T=M_fP^T$ and [\[eq:rminus_of_f\]](#eq:rminus_of_f){reference-type="eqref" reference="eq:rminus_of_f"}, we get $$\begin{aligned} 2cg&=& -x +2 f (y+z)\\ cg^2-c&=&-2 f x +2 f^{2}(y+z) - y \\ cg^2+c&=&-2 f x +2 f^{2}(y+z) + z \end{aligned}$$ Therefore, $$2c=cg^2+c-(cg^2-c)=z+y.$$ Since $P$ is an SPT, we conclude that $z=y=c$ and $x=0$. Consequently, the first equality above implies that $g=2f$. The converse is proved via direct computation using [\[eq:rminus_of_f\]](#eq:rminus_of_f){reference-type="eqref" reference="eq:rminus_of_f"}. ◻ **Corollary 17**. *Let $m$ and $n$ be positive integers. Suppose that there are $c,d\in K^*$ and sequences $\{g_0,g_1,\ldots,g_m\}$ and $\{h_0,h_1,\ldots,h_n\}$ in $K[t]\backslash K$ such that $$c M_{g_m}M_{g_{m-1}}\cdots M_{g_1}S_{g_0}^T=dM_{h_n}M_{h_{n-1}}\cdots M_{h_1} S_{h_0}^T.$$ Then $m=n$, $c=d$ and $g_i=h_i$, for all $0\leq i\leq n$.* *Proof.* First, we show that $m\neq n$ is impossible. Otherwise, we may assume without loss of generality that $m>n$ and conclude from Proposition [Proposition 15](#prop:equalproduct){reference-type="ref" reference="prop:equalproduct"} that $$\label{eq:hoP} cS_{h_0}^T=M_h\bar{P}^T$$ where $h=g_{m-n}$, and $\bar{P}=dS_{g_0}$ or $\bar{P}^T=dM_{g_{m-n-1}}\cdots M_{g_1}S_{g_0}^T$. In any case, $\bar{P}=(\bar{x},\bar{y},\bar{z})$ is an SPT  with $\bar{x}\neq 0$, according to Lemma [Lemma 10](#lem:RfPPT){reference-type="ref" reference="lem:RfPPT"}. But, given [\[eq:hoP\]](#eq:hoP){reference-type="eqref" reference="eq:hoP"}, $\bar{x}\neq 0$ contradicts the conclusion of Lemma [Lemma 16](#lem:SgP){reference-type="ref" reference="lem:SgP"}. Therefore, $m=n$ and Proposition [Proposition 15](#prop:equalproduct){reference-type="ref" reference="prop:equalproduct"} implies that $g_i=h_i$, for all $1\leq i\leq n$, and $cS_{g_0}=dS_{h_0}$. From the last equality, it easily follows that $c=d$ and $g_0=h_0$, finishing our proof. ◻ # Generators of $O_{\mathcal Q}(K[t])$ {#sec:Corolaries} When $(\tilde{x}, \tilde{y}, \tilde{z})^T = R_f (x, y, z)^T$, recall from [\[eq:rplus_of_f\]](#eq:rplus_of_f){reference-type="eqref" reference="eq:rplus_of_f"} that $$\label{eq:rplus_of_f2} \begin{aligned} \tilde{x}&= -x +2 f (z - y),\\ \tilde{y}&=(\tilde{x}-x)f+y, \\ \tilde{z}&=(\tilde{x}-x)f+z. \end{aligned}$$ We will use these identities in the following two lemmas. **Lemma 18**. *Suppose that $Q$ is an SPT. Then $R_fQ^T$ is also an SPT for any $f\in K$.* *Proof.* Write $Q = (x, y, z)$ and $\tilde{Q}^T = (\tilde{x}, \tilde{y}, \tilde{z})^T = R_fQ^T$ for $f\in K$. Note from Lemma [Lemma 6](#lem:map_over_K_preserve_height){reference-type="ref" reference="lem:map_over_K_preserve_height"} that $h(Q) = h(\tilde{Q})$. From the first equation in [\[eq:rplus_of_f2\]](#eq:rplus_of_f2){reference-type="eqref" reference="eq:rplus_of_f2"}, we see that that $\deg\tilde{x}< \deg(y) = h(Q) = h(\tilde{Q})$. This implies that $\deg\tilde{y}= \deg\tilde{z}$. Also it is obvious from the second and third equations of [\[eq:rplus_of_f2\]](#eq:rplus_of_f2){reference-type="eqref" reference="eq:rplus_of_f2"} that $\ell(\tilde{y}) = \ell(\tilde{z})$. This completes the proof that $\tilde{Q}$ is an SPT. ◻ **Lemma 19**. *Suppose that $Q$ is a primitive Pythagorean triple, which is not an SPT. Then there exists $f\in K$ such that $M_f^{-1}Q^T$ is an SPT.* *Proof.* Recall from Lemma [Lemma 8](#lem:non_SPT){reference-type="ref" reference="lem:non_SPT"} that $Q$ is one of the type (I)--(IV). 
We claim that there exists $f\in K$ such that $\tilde{Q}^T = (\tilde{x}, \tilde{y}, \tilde{z})^T := R_f Q^T$ is of type (I). This would imply the conclusion in the lemma because $U_1 \tilde{Q}^T = M_f^{-1} Q^T$ would then be an SPT. Choose $f\in K$ so that $$\label{eq:condition_for_constant_f} \begin{cases} f= 0 & \text{ if $Q$ is of type (I)}, \\ -\ell(x) +2f\ell(z) = 0 & \text{ if $Q$ is of type (II)}, \\ -\ell(x) +2f(\ell(z) - \ell(y)) = 0 & \text{ if $Q$ is of type (III)}, \\ -\ell(x) -2f\ell(y) = 0 & \text{ if $Q$ is of type (IV)}. \end{cases}$$ We see from the first equation of [\[eq:rplus_of_f2\]](#eq:rplus_of_f2){reference-type="eqref" reference="eq:rplus_of_f2"} that the above equations are solvable in $f$ in all cases and that $h(Q) = h(\tilde{Q})$ because of Lemma [Lemma 6](#lem:map_over_K_preserve_height){reference-type="ref" reference="lem:map_over_K_preserve_height"}. Once $f$ is chosen to satisfy [\[eq:condition_for_constant_f\]](#eq:condition_for_constant_f){reference-type="eqref" reference="eq:condition_for_constant_f"}, we have $\deg\tilde{x}< h(\tilde{Q})$, which results in $\deg\tilde{y}= \deg\tilde{z}$. This implies that either $\ell(\tilde{y}) = -\ell(\tilde{z})$ (in which case the claim is proven) or $\ell(\tilde{y}) = \ell(\tilde{z})$. However, if $\ell(\tilde{y})= \ell(\tilde{z})$, this means that $\tilde{Q}$ is an SPT. This is a contradiction because Lemma [Lemma 18](#lem:SPT_Rf){reference-type="ref" reference="lem:SPT_Rf"} would then imply that $$R_f \tilde{Q}^T = R_f (R_f Q)^T = Q^T$$ is also an SPT, which we had assumed not. ◻ *Proof of Theorem [Theorem 3](#thm:transitivity_action){reference-type="ref" reference="thm:transitivity_action"}.* We will prove that every primitive Pythagorean triple $Q$ is in the $O_{\mathcal Q}(K[t])$-orbit of $(0, 1, 1)$. We handle the case $h(Q) = 0$. If $Q$ is an SPT, then $Q = (0, c, c)$ for some $c\in K^*$. Recall that $T_c$ is a matrix defined by [\[eq:definition_T\_c\]](#eq:definition_T_c){reference-type="eqref" reference="eq:definition_T_c"}. It is easily verified that $$\label{eq:Tc} T_c(0, 1, 1)^T = (0, c, c)^T,$$ so that $Q = (0, c, c)$ is in the $O_{\mathcal Q}(K[t])$-orbit of $(0, 1, 1)$. If $Q$ is not an SPT, we apply Lemma [Lemma 19](#lem:non_SPT_Rf){reference-type="ref" reference="lem:non_SPT_Rf"} to obtain $f\in K$ such that $M_f^{-1}Q^T$ is an SPT. Since $h(M_f^{-1}Q^T) = 0$ (cf. Lemma [Lemma 6](#lem:map_over_K_preserve_height){reference-type="ref" reference="lem:map_over_K_preserve_height"}) we see from the above argument that $M_f^{-1}Q^T$ is in the orbit of $(0, 1, 1)$, which of course implies that $Q$ is as well. It remains to consider the case $h(Q) > 0$. Then Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} shows that $Q$ is in the orbit of $cS_g$, for some $c\in K^*$ and $g\in K[t]\backslash K$. But Lemma [Lemma 16](#lem:SgP){reference-type="ref" reference="lem:SgP"} implies that $cS_g$ and, thus, $Q$ are in the orbit of $(0,c,c)$, which has already been shown to be in the orbit of $(0, 1, 1)$. ◻ **Proposition 20**. *The stabilizer of $(0,1, 1)$ in $O_{\mathcal Q}(K[t])$ is generated by the set $$\{ R_f \mid f\in K[t] \}.$$* *Proof.* We follow Conrad's argument given in Appendix of [@Conrad]. Let $\mathcal{S}_1$ be the group generated by $\{ R_f \mid f \in K[t]\}$. A simple calculation shows that $R_f$ fixes $(0, 1, 1)$ for every $f\in K[t]$ therefore $\mathcal{S}_1$ is a subgroup of the stabilizer of $(0, 1, 1)$. 
Conversely, let $R$ be in the stabilizer of $(0, 1, 1)$. Then $R$ is of the form $$R = \begin{pmatrix} a_1 & a_2 & -a_2 \\ a_3 & a_4 & 1-a_4 \\ a_5 & a_6 & 1-a_6 \\ \end{pmatrix}$$ for some $a_1, \dots, a_6 \in K[t]$. Also, letting $J$ be the $3\times 3$ diagonal matrix whose diagonal entries are $(1, 1, -1)$, the condition that $R$ is orthogonal with respect to the quadratic form $\mathcal Q$ is equivalent to $$R^T J R = J,$$ which yields $$\begin{cases} a_1^2 + a_3^2 - a_5^2 = 1,\\ a_2^2 + a_4^2 - a_6^2 = 1,\\ a_2^2 + (a_4 - 1)^2 - (a_6- 1)^2 = -1, \end{cases} \begin{cases} a_1a_2 + a_3a_4 - a_5a_6 = 0,\\ -a_1a_2 - a_3(a_4 - 1) + a_5(a_6 - 1) = 0,\\ -a_2^2 - a_4(a_4-1) + a_6(a_6-1)= 0. \end{cases}$$ Solving them simultaneously, we obtain $$\begin{cases} a_1^2 = 1,\\ a_3 = a_5 = -a_1a_2,\\ a_4 = 1 + a_6 = 1 - \frac{a_2^2}2.\\ \end{cases}$$ As a result, we have $$R = \begin{pmatrix} a_1 & a_2 & -a_2 \\ -a_1a_2 & 1 - \frac{a_2^2}2 & \frac{a_2^2}2 \\ -a_1a_2 & - \frac{a_2^2}2& 1 + \frac{a_2^2}2 \end{pmatrix}$$ with $a_1 = 1$ or $a_1 = -1$. Therefore, we have $$R = R_{-a_2/2} \text{ or } R_{-a_2/2}U_3$$ depending on $a_1 = -1$ or $a_1 = 1$, respectively. Since $U_3 = R_0 \in \mathcal{S}_1$, this completes the proof. ◻ *Proof of Theorem [Theorem 4](#thm:orthogonal_generator){reference-type="ref" reference="thm:orthogonal_generator"}.* Let $\mathcal{S}_2$ be the subgroup of $O_{\mathcal Q}(K[t])$ generated by the set given in the statement of the theorem. Since $$U_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = P_{xy} \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} P_{xy} = P_{xy}R_0P_{xy}$$ it follows that $M_h\in \mathcal{S}_2$ for any $h\in K[t]$ (see [\[eq:definition_M\_f\]](#eq:definition_M_f){reference-type="eqref" reference="eq:definition_M_f"}). Suppose $A\in O_{\mathcal Q}(K[t])$. Let $Q^T = A (0, 1, 1)^T$, which is obviously a primitive Pythagorean triple. Next, we find $N\in \mathcal{S}_2$ such that $NQ^T$ is an SPT; if $Q$ is already an SPT, then $N = I_3$ (the identity matrix), otherwise, we choose $N$ to be $M_f^{-1}$ as constructed in Lemma [Lemma 19](#lem:non_SPT_Rf){reference-type="ref" reference="lem:non_SPT_Rf"}. Now, we apply Theorem [Theorem 2](#thm:main_theorem_tree){reference-type="ref" reference="thm:main_theorem_tree"} and Lemma [Lemma 16](#lem:SgP){reference-type="ref" reference="lem:SgP"} to obtain $$\begin{pmatrix} 0 \\ c \\ c \end{pmatrix} = M^{-1}_{g/2}M^{-1}_{f_k} \cdots M^{-1}_{f_1} N Q^T = M^{-1}_{g/2}M^{-1}_{f_k} \cdots M^{-1}_{f_1} N A \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$$ for some $g,f_1, \cdots, f_k \in K[t]$ and $c\in K^*$. Therefore we conclude from [\[eq:Tc\]](#eq:Tc){reference-type="eqref" reference="eq:Tc"} that $$T_c^{-1} M^{-1}_{g/2}M^{-1}_{f_k} \cdots M^{-1}_{f_1} N A$$ fixes $(0, 1, 1)$. From Proposition [Proposition 20](#thm:stabilizer_011){reference-type="ref" reference="thm:stabilizer_011"}, it follows that the above product belongs to $\mathcal{S}_1$ (thus to $\mathcal{S}_2$ as well), which then implies that $A \in \mathcal{S}_2$. ◻ [^1]: See the introduction of [@Rom08] for a more comprehensive list. [^2]: Recall that $\gcd(x, y, z)$ is by definition the unique monic polynomial in $K[t]$ that generates the smallest ideal of $K[t]$ containing $x, y, z$.
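The matrix identities invoked repeatedly in the proofs above are easy to confirm symbolically. The following sketch (an added illustration; it assumes sympy is available) checks that $R_f$ and $M_f$ preserve $\mathcal Q$, that $R_f$ has order $2$, the formula $M_f^{-1}=U_1R_f$, the two identities of Lemma 7, and the computation $M_f(0,1,1)^T=S_{2f}^T$ underlying Lemma 16.

```python
# Symbolic sanity check of the matrix identities used above.
import sympy as sp

a, b, f = sp.symbols('a b f')
J = sp.diag(1, 1, -1)          # Gram matrix of Q(x, y, z) = x^2 + y^2 - z^2
U1 = sp.diag(1, -1, 1)

def R(g):
    # The reflection R_g with respect to (1, g, g).
    return sp.Matrix([[-1, -2*g, 2*g],
                      [-2*g, -2*g**2 + 1, 2*g**2],
                      [-2*g, -2*g**2, 2*g**2 + 1]])

def M(g):
    # M_g = R_g U_1.
    return R(g) * U1

def S(g):
    # The triple S_g = (2g, g^2 - 1, g^2 + 1) as a column vector.
    return sp.Matrix([2*g, g**2 - 1, g**2 + 1])

assert (R(f).T * J * R(f)).expand() == J                           # R_f preserves Q
assert (M(f).T * J * M(f)).expand() == J                           # M_f preserves Q
assert (R(f) * R(f)).expand() == sp.eye(3)                         # R_f has order 2
assert (M(f) * (U1 * R(f))).expand() == sp.eye(3)                  # M_f^{-1} = U_1 R_f
assert (R(a) * R(b)).expand() == (R(a - b) * R(0)).expand()        # Lemma 7
assert (R(a) * R(0) * R(b)).expand() == R(a + b).expand()          # Lemma 7
assert (M(f) * sp.Matrix([0, 1, 1])).expand() == S(2*f).expand()   # behind Lemma 16
print("all identities check out")
```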
arxiv_math
{ "id": "2310.02144", "title": "A polynomial analogue of Berggren's theorem on Pythagorean triples", "authors": "Byungchul Cha, Ricardo Concei\\c{c}\\~ao", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We show that local times of super-Brownian motion, or of Brownian motion indexed by the Brownian tree, satisfy an explicit stochastic differential equation. Our proofs rely on both excursion theory for the Brownian snake and tools from the theory of superprocesses. author: - "Jean-François Le Gall[^1]  and Edwin Perkins[^2]" date: Université Paris-Saclay and University of British Columbia title: | A stochastic differential equation for local times\ of super-Brownian motion --- # Introduction The main purpose of the present work is to derive a stochastic differential equation for the local times of super-Brownian motion, or equivalently for the local times of Brownian motion indexed by the Brownian tree. Consider a super-Brownian motion whose initial value is a constant multiple of the Dirac measure at $0$. Informally, the local time $L^a$ at level $a\in\mathbb{R}$ counts how many "particles" visit the point $a$. It was shown recently [@Mar] that, although the process $(L^a)_{a\geq 0}$ is not Markov, the pair consisting of $L^a$ and its derivative, $\dot L^a$, is a Markov process (when $a=0$ we need to consider the right derivative at $0$). However, the transition kernel of this Markov process is identified in [@Mar] in a complicated manner. Our goal here is to characterize this transition kernel in terms of a stochastic differential equation. There is an obvious analogy between our main result and the classical Ray-Knight theorems showing that the local times of a linear Brownian motion taken at certain particular stopping times, and viewed as processes in the space variable, are squared Bessel processes which satisfy simple stochastic differential equations. In the setting of the present paper, it is remarkable that the relevant stochastic differential equation involves the derivative of the local time. Let us give a more precise description of our main result. On a given probability space, we consider a super-Brownian motion $\mathbf{X}=(\mathbf{X}_t)_{t\geq 0}$ with initial value $\mathbf{X}_0=\alpha\,\delta_0$, where $\alpha>0$ is a constant. The associated total occupation measure is defined by $$\mathbf{Y}:=\int_0^\infty \mathbf{X}_t\,\mathrm{d}t.$$ Since $\mathbf{X}$ becomes extinct a.s., the measure $\mathbf{Y}$ is finite. Sugitani [@Sug] proved that the measure $\mathbf{Y}$ has a.s. a continuous density $(L^a)_{a\in\mathbb{R}}$, which is even continuously differentiable on $(-\infty,0)\cup(0,\infty)$. We write $\dot L^a$ for the derivative of this function at $a\in\mathbb{R}\backslash\{0\}$. Moreover, the function $a\mapsto L^a$ has a right derivative $\dot L^{0+}$ and a left derivative $\dot L^{0-}$ at $0$, and, by convention, we set $\dot L^0=\dot L^{0+}$. In order to state our result, let $U=(U_t)_{t\geq 0}$ be a stable Lévy process with index $3/2$ and no negative jumps. The distribution of $U$ is characterized by specifying its Laplace exponent $\psi(\lambda)=\sqrt{2/3}\,\lambda^{3/2}$ (see Section [2.5](#subsec:bridge){reference-type="ref" reference="subsec:bridge"}). For every $t>0$, let $(p_t(x))_{x\in\mathbb{R}}$ be the continuous density of $U_t$, which is determined by its Fourier transform $$\int_\mathbb{R}e^{\mathrm{i}u x}\,p_t(x)\,\mathrm{d}x = \exp(-c_0t\, |u|^{3/2}\,(1+\mathrm{i}\,\mathrm{sgn}(u))),$$ where $c_0=1/\sqrt{3}$ and $\mathrm{sgn}(u)=\mathbf{1}_{\{u>0\}}-\mathbf{1}_{\{u<0\}}$. Then $x\mapsto p_t(x)=t^{-2/3}p_1(xt^{-2/3})$ is strictly positive, infinitely differentiable and has bounded derivatives for each $t$ (see Ch. 
2 of [@Zol] for these and other properties of stable densities). Write $p'_t(x)$ for the derivative of this function. **Theorem 1**. *For every $y\in\mathbb{R}$, set $g(0,y)=0$ and, for every $t>0$, $$g(t,y)=8 t\,\frac{p'_t(y)}{p_t(y)}.$$ Then $$\int_0^\infty |g\Bigl(L^y,\frac{1}{2}\dot L^y\Bigr)|\,\mathrm{d}y<\infty, \quad\hbox{a.s.}$$ and the pair $(L^x,\dot L^x)_{x\geq 0}$ satisfies the two-dimensional stochastic differential equation $$\label{main-SDE} \begin{aligned} \dot L^x&=\dot L^0 + 4\int_0^x \sqrt{{L^y}}\,\mathrm{d}B_y + \int_0^x g\Bigl(L^y,\frac{1}{2}\dot L^y\Bigr)\,\mathrm{d}y\\ L^x&=L^0+\int_0^x\dot L^y\,\mathrm{d}y, \end{aligned}$$ where $B$ is a linear Brownian motion. Moreover if $R=\inf\{x\ge 0:L^x=0\}$, then $(L^x,\dot L^x)$ is the pathwise unique solution to [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} which satisfies $(L^x,\dot L^x)=(L^{x\wedge R},\dot L^{x\wedge R})$ for all $x\ge 0$ a.s.* The fact that the local time satisfies the last property stated in the Theorem follows from Theorem 1.7 in [@MP] where it is shown that if $R$ is as above and $G=\sup\{x\le 0:L^x=0\}$, then $$\label{Lsupp} -\infty<G<0<R<\infty\ \text{ and }\{x\in \mathbb{R}:L^x>0\}=(G,R)\ a.s.$$ Strictly speaking, in order to write equation [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"}, it may be necessary to enlarge the underlying probability space. The point is that the Brownian motion $B$ will be determined from the pair $(L^x,\dot L^x)_{x\geq 0}$ only up to the "time" $R$ (for $x>R$ we have $L^x=\dot L^x=0$). So a more precise statement would be the existence of an enlarged probability space $(\Omega, \mathcal{F},\mathbb{P})$ equipped with a filtration $(\mathcal{F}_t)_{t\geq 0}$ and an $(\mathcal{F}_t)$-Brownian motion $B$ such that $(L^t,\dot L^t)_{t\geq 0}$ is adapted to the filtration $(\mathcal{F}_t)_{t\geq 0}$ and [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} holds (see the proof in Section [6](#sec:SDE){reference-type="ref" reference="sec:SDE"}). Interestingly, the functions $p_t$ and $p'_t$ have explicit expressions in terms of the classical Airy function $\mathrm{Ai}$ and its derivative $\mathrm{Ai}'$. In fact, $x\to p_t(-x)$ is called the Airy map distribution in [@FS]. For every $t>0$ and $x\in\mathbb{R}$, we have $$p_t(x)= 6^{-1/3}\,t^{-2/3} \,\mathcal{A}(6^{-1/3}t^{-2/3}x),$$ where $$\mathcal{A}(x)=-2\,e^{2x^3/3}\Big( x\mathrm{Ai}(x^2) + \mathrm{Ai}'(x^2)\Big).$$ See [@FS Section IX.11], or [@CC] and the references therein, and note that our choice of $p_t$ differs from that in [@CC] by a scaling constant. It follows that $$g(t,x)=8\times 6^{-1/3}\,t^{1/3} \,\frac{\mathcal{A}'}{\mathcal{A}}( 6^{-1/3}\,t^{-2/3} x),$$ with (the Airy equation $\mathrm{Ai}''(x)=x\mathrm{Ai}(x)$ helps here) $$\label{Airatio}\frac{\mathcal{A}'}{\mathcal{A}}(x)= 4x^2 + \frac{\mathrm{Ai}(x^2)}{x\mathrm{Ai}(x^2) + \mathrm{Ai}'(x^2)}.$$ One useful application of this representation and known asymptotics for $\mathrm{Ai}$ and $\mathrm{Ai}'$ (see p. 448 of [@AS]) is that $$\label{rtpt}\frac{p_1'}{p_1}(y)=6^{-1/3}\,\frac{\mathcal{A}'}{\mathcal{A}}(6^{-1/3}y)= -\frac{5}{2y}+O\Bigl(\frac{1}{y^4}\Bigr)\ \text{ as }y\to+\infty,$$ and so $$\label{gbnd1} \text{for all }y_0\in\mathbb{R},\ \sup_{y\ge y_0}\Bigl|\frac{p_1'}{p_1}(y)\Bigr|=C(y_0)<\infty.$$ We can reformulate our theorem in terms of the model called Brownian motion indexed by the Brownian tree. 
Here the Brownian tree $\mathcal{T}$ is a "free" version of Aldous' Continuum Random Tree [@Ald] and may be defined as the tree coded by a Brownian excursion under the ($\sigma$-finite) Itô measure. Points of $\mathcal{T}$ are assigned "Brownian labels" $(V_u)_{u\in\mathcal{T}}$, in such a way that the label of the root is $0$ and labels evolve like linear Brownian motion along the line segments of the tree. It is convenient to assume that both the tree $\mathcal{T}$ and the labels $(V_u)_{u\in\mathcal{T}}$ are defined on the canonical space of snake trajectories under the "excursion measure" $\mathbb{N}_0$ (see Section [2](#sec:preli){reference-type="ref" reference="sec:preli"} below for a more precise presentation). If $\mathrm{Vol}$ denotes the volume measure on the tree $\mathcal{T}$, we are interested in the total occupation measure, which is the finite measure $\mathcal{Y}$ on $\mathbb{R}$ defined by $$\label{Ydefn}\mathcal{Y}(f)=\int_\mathcal{T}f(V_u)\,\mathrm{Vol}(\mathrm{d}u),$$ for every nonnegative Borel function $f$ on $\mathbb{R}$. The measure $\mathcal{Y}$ has a continuously differentiable density $(\ell^x)_{x\in\mathbb{R}}$ with respect to Lebesgue measure on $\mathbb{R}$, and we write $(\dot\ell^x)_{x\in\mathbb{R}}$ for its derivative. We can then state an analog of Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"}. There is a technical difficulty due to the fact that $\mathbb{N}_0$ is an infinite measure, and for this reason we need to make an appropriate conditioning. **Theorem 2**. *Let $\delta>0$, and consider the probability measure $\mathbb{N}^{(\delta)}_0:=\mathbb{N}_0(\cdot\mid \ell^0>\delta)$. Then, $$\int_0^\infty |g\Bigl(\ell^y,\frac{1}{2}\dot \ell^y\Bigr)|\,\mathrm{d}y<\infty, \quad\mathbb{N}^{(\delta)}_0\hbox{\ a.s.}$$ and, under $\mathbb{N}^{(\delta)}_0$, the pair $(\ell^x,\dot \ell^x)_{x\geq 0}$ satisfies the two-dimensional stochastic differential equation $$\begin{aligned} \dot \ell^x&=\dot \ell^0 + 4\int_0^x \sqrt{{\ell^y}}\,\mathrm{d}\beta_y + \int_0^x g\Bigl(\ell^y,\frac{1}{2}\dot \ell^y\Bigr)\,\mathrm{d}y\\ \ell^x&=\ell^0+\int_0^x\dot \ell^y\, \mathrm{d}y,\end{aligned}$$ where $\beta$ is a linear Brownian motion. Moreover if $\rho=\inf\{x\ge 0:\ell^x=0\}$, then $(\ell^x,\dot \ell^x)$ is the pathwise unique solution to the above equation which satisfies $(\ell^x,\dot \ell^x)=(\ell^{x\wedge \rho},\dot \ell^{x\wedge \rho})$ for all $x\ge 0$ a.s.* In the language of superprocesses, Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} corresponds to a version of Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"} under the so-called canonical measure. In what follows, we will only deal with Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"}. Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} then follows since it is shown in [@Mar] that the process $(\ell^t,\dot \ell^t)_{t\geq 0}$ is Markov with the same transition kernels as the process $(L^t,\dot L^t)_{t\geq 0}$ considered in Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"} (the pathwise uniqueness in either equation will follow easily from a classical result for locally Lipschitz coefficients). 
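For numerical work with the drift $g$, the Airy representation of $p_t$ given after the statement of Theorem 1 is convenient. The following sketch (an added illustration, not part of the paper; it assumes scipy is available, and the function names are ours) evaluates $p_t$ and $g(t,y)$ from the formulas for $\mathcal A$ and $\mathcal A'/\mathcal A$, and checks that $p_1$ integrates to $1$ and that $p'_1/p_1(y)\approx -5/(2y)$ for large $y$, in agreement with the asymptotics recalled above.

```python
# Numerical sketch of the stable(3/2) density p_t and of the drift
# g(t, y) = 8 t p_t'(y) / p_t(y) from Theorem 1, via the Airy formulas above.
import numpy as np
from scipy.special import airye          # exponentially scaled Ai, Ai', Bi, Bi'
from scipy.integrate import quad

S = 6.0 ** (-1.0 / 3.0)

def A(x):
    # A(x) = -2 exp(2x^3/3) (x Ai(x^2) + Ai'(x^2)); airye returns the Airy
    # functions at x^2 scaled by exp(2|x|^3/3), so the net exponent here is <= 0.
    eai, eaip, _, _ = airye(x * x)
    return -2.0 * np.exp(2.0 * (x ** 3 - abs(x) ** 3) / 3.0) * (x * eai + eaip)

def A_ratio(x):
    # A'(x)/A(x) = 4x^2 + Ai(x^2) / (x Ai(x^2) + Ai'(x^2)); the common scaling
    # factor of the scaled Airy functions cancels in the quotient.
    eai, eaip, _, _ = airye(x * x)
    return 4.0 * x * x + eai / (x * eai + eaip)

def p(t, x):
    # p_t(x) = 6^{-1/3} t^{-2/3} A(6^{-1/3} t^{-2/3} x).
    return S * t ** (-2.0 / 3.0) * A(S * t ** (-2.0 / 3.0) * x)

def g(t, y):
    # g(t, y) = 8 t p_t'(y)/p_t(y) = 8 * 6^{-1/3} t^{1/3} (A'/A)(6^{-1/3} t^{-2/3} y).
    return 8.0 * S * t ** (1.0 / 3.0) * A_ratio(S * t ** (-2.0 / 3.0) * y)

if __name__ == "__main__":
    mass, _ = quad(lambda x: p(1.0, x), -60.0, 60.0)
    print("total mass of p_1 ~", mass)       # should be close to 1
    for y in (10.0, 100.0):                  # p_1'/p_1(y) ~ -5/(2y) for large y
        print(y, g(1.0, y) / 8.0, -5.0 / (2.0 * y))
```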
Still the formulation of Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} is useful to understand our approach, as we will rely on the Brownian snake representation of super-Brownian motion, which involves considering a Poisson collection of Brownian trees equipped with Brownian labels. The same remark as for Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"} applies also to Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} (see Theorem 1.4 of [@Hon] for the analogue of [\[Lsupp\]](#Lsupp){reference-type="eqref" reference="Lsupp"}). One motivation for deriving a stochastic differential equation for $(L^x,\dot L^x)$ is to allow one access to the tools of stochastic analysis for a more detailed analysis of these processes. To this end, we use a transformation of the state space and a random time change to effectively transform the solution to [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} into an explicit one-dimensional diffusion which can be studied in detail, and from which one can reconstruct $(L^x,\dot L^x)$ (see Propositions [Proposition 18](#simple-SDE){reference-type="ref" reference="simple-SDE"} and [Proposition 19](#thm:1ddiffusion){reference-type="ref" reference="thm:1ddiffusion"}). The diffusion will be a time change of $\dot L^x/(L^x)^{2/3}$ and is the unique solution of [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} below. Our proofs depend on both the excursion theory for the Brownian snake [@ALG] and tools coming from the theory of superprocesses [@Per; @Hon]. Excursion theory for the Brownian snake was the key ingredient for getting the Markov property of the process $(L^x,\dot L^x)_{x\geq 0}$ in [@Mar]. The transition kernel of this process was described in terms of the "positive excursion measures" $\mathbb{N}^{*,z}_0$, which roughly speaking give the distribution of the labeled tree $(\mathcal{T},(V_u)_{u\in\mathcal{T}})$ conditioned to have only nonnegative labels, with a parameter $z>0$ that in some sense prescribes how many points $u$ of $\mathcal{T}$ have the label zero. Local times still make sense under the measures $\mathbb{N}^{*,z}_0$ and, for every $h>0$, one can compute the expected value of the derivative of local time at level $h$ in the form $$\mathbb{N}^{*,z}_0(\dot \ell^h)= z\,\gamma\Big(\frac{3h}{2z^2}\Big),$$ where the function $\gamma$ has an explicit expression in terms of the (complementary) error function $\mathrm{erfc}$ (Proposition [Proposition 13](#expected-derivative){reference-type="ref" reference="expected-derivative"}). For every $a\geq 0$ and $t>0$, $y\in\mathbb{R}$, excursion theory then leads to the formula $$\mathbb{E}\Big[\dot L^{a+h}\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big]=\mathbb{E}\Big[\sum_{j=1}^\infty Z_j\,\gamma\Big(\frac{3Z_j}{2h^2}\Big)\Big],$$ where $(Z_j)_{j\geq 1}$ are the jumps of the bridge from $0$ to $y$ in time $t$ associated with the Lévy process $U$ (Proposition [Proposition 14](#tech-incre){reference-type="ref" reference="tech-incre"}) and listed in decreasing order. The precise justification of the formulas of the last two displays requires certain bounds on moments of the derivatives of local time (Lemmas [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"} and [Lemma 12](#lem-tech){reference-type="ref" reference="lem-tech"}). 
We obtain these bounds via a stochastic integral representation of the derivative $\dot L^x$ in terms of the martingale measure associated with $\mathbf{X}$, which is due to Hong [@Hon]. Here the use of these techniques from the theory of superprocesses is crucial since the excursion measures $\mathbb{N}^{*,z}_0$ do not seem to provide a tractable setting for a direct derivation of the required bounds. It turns out that one can explicitly compute the right-hand side of the last display in terms of an integral involving the density $p_t$ (Proposition [Proposition 15](#increment-derivative){reference-type="ref" reference="increment-derivative"}) and it is then an easy matter to obtain $$\lim_{h\to 0} \frac{1}{h}\, \mathbb{E}\Big[\dot L^{a+h}-\dot L^a\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big] = 8\,t\, \frac{p'_t(y)}{p_t(y)}=g(t,y).$$ From this, one can infer that, for every ${\varepsilon}>0$, the process $$M^{\varepsilon}_x:=\dot L^{x\wedge S_{\varepsilon}}-\dot L^0-\int_0^{x\wedge S_{\varepsilon}} g(L^y,\frac{1}{2}\dot L^y)\,\mathrm{d}y$$ is a local martingale, where we have written $S_{\varepsilon}:=\inf\{x\geq 0:L^x\leq {\varepsilon}\}$. At that point, we again use the stochastic integral representation of Hong [@Hon], from which we can deduce that the quadratic variation of $M^{\varepsilon}_x$ is $16\int_0^{x\wedge S_{\varepsilon}} L^y\,\mathrm{d}y$. Although there are some additional technicalities to handle, often due to the unboundedness of $g(L^y,\dot L^y/2)$, we can then use standard tools of stochastic calculus to derive the stochastic differential equation [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"}. We note that the recent paper of Chapuy and Marckert [@CM] addresses similar questions for the model called ISE (integrated super-Brownian excursion). This model, which was introduced by Aldous [@Al2], corresponds to conditioning the Brownian tree $\mathcal{T}$ to have total volume equal to $1$. Under this conditioning, local times are still well defined and continuously differentiable. On the basis of discrete approximations, [@CM] conjectures a stochastic differential equation for local times of ISE, which is similar to [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} but with a more complicated drift term involving also the integrals $\int_{-\infty}^x L^y\,\mathrm{d}y$ --- the reason why these integrals appear is of course the special conditioning which forces $\int_{-\infty}^\infty L^y\,\mathrm{d}y=1$. It is likely that Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} can be used to also derive a stochastic differential equation for local times of ISE, but we do not pursue this matter here. The paper is organized as follows. Section [2](#sec:preli){reference-type="ref" reference="sec:preli"} gathers a number of preliminaries. In particular, we introduce the positive excursion measures $\mathbb{N}^{*,z}_0$, and we recall the main result of the excursion theory of [@ALG]. In Section [3](#sec:super){reference-type="ref" reference="sec:super"}, we briefly recall the Brownian snake construction of the super-Brownian motion $\mathbf{X}$, and we state a key result from [@Mar] giving the conditional distribution of the collection of "excursions" of $\mathbf{X}$ above a level $a\geq 0$ knowing $(L^x,\dot L^x)_{x\leq a}$ (Proposition [Proposition 9](#key-ingre){reference-type="ref" reference="key-ingre"}).
This conditional distribution knowing $L^a=t$ and $\dot L^a=y$ is given in terms of the measures $\mathbb{N}^{*,z}_0$ and the collection of jumps of the Lévy bridge from $0$ to $y$ in time $t$. In Section [4](#est-moment){reference-type="ref" reference="est-moment"}, we rely on Hong's representation to derive our estimates on moments of the increments of $\dot L^x$, and then to evaluate the quadratic variation of this process. Section [5](#sec:exp-incre-deriv){reference-type="ref" reference="sec:exp-incre-deriv"} is devoted to the calculation of the conditional expected value of $\dot L^{a+h}-\dot L^a$ knowing $L^a=t$ and $\dot L^a=y$. Finally, Section [6](#sec:SDE){reference-type="ref" reference="sec:SDE"} gives the proof of Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"} and also establishes the connection between $(L^x,\dot L^x)$ and the simple diffusion in [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"}. # Preliminaries {#sec:preli} ## Snake trajectories {#sna-tra} The proof of our main result relies in part on the Brownian snake representation of super-Brownian motion. We start by recalling the formalism of snake trajectories, referring to [@ALG] for more details. A (one-dimensional) finite path $\mathrm{w}$ is a continuous mapping $\mathrm{w}:[0,\zeta]\longrightarrow\mathbb{R}$, where $\zeta=\zeta_{(\mathrm{w})}\in[0,\infty)$ is called the lifetime of $\mathrm{w}$. The space $\mathcal{W}$ of all finite paths is a Polish space when equipped with the distance $$d_\mathcal{W}(\mathrm{w},\mathrm{w}')=|\zeta_{(\mathrm{w})}-\zeta_{(\mathrm{w}')}|+\sup_{t\geq 0}|\mathrm{w}(t\wedge \zeta_{(\mathrm{w})})-\mathrm{w}'(t\wedge\zeta_{(\mathrm{w}')})|.$$ The endpoint or tip of the path $\mathrm{w}$ is denoted by $\widehat\mathrm{w}=\mathrm{w}(\zeta_{(\mathrm{w})})$. For every $x\in\mathbb{R}$, we set $\mathcal{W}_x=\{\mathrm{w}\in\mathcal{W}:\mathrm{w}(0)=x\}$. The trivial element of $\mathcal{W}_x$ with zero lifetime is identified with the point $x$ of $\mathbb{R}$. **Definition 3**. *Let $x\in \mathbb{R}$. A snake trajectory with initial point $x$ is a continuous mapping $s\mapsto \omega_s$ from $\mathbb{R}_+$ into $\mathcal{W}_x$ which satisfies the following two properties:*
It is easy to verify [@ALG Proposition 8] that a snake trajectory $\omega$ is determined by the two functions $s\mapsto \zeta_s(\omega)$ and $s\mapsto \widehat W_s(\omega)$ (the latter is sometimes called the tip function). Let $\omega\in \mathcal{S}$ be a snake trajectory and $\sigma=\sigma(\omega)$. We define a pseudo-distance on $[0,\sigma]^2$ by setting $$d_\zeta(s,s')= \zeta_s+\zeta_{s'}-2 \min_{s\wedge s'\leq r\leq s\vee s'} \zeta_r.$$ We then consider the associated equivalence relation $s\sim s'$ if and only if $d_\zeta(s,s')=0$ (or equivalently $\zeta_s=\zeta_{s'}= \min_{s\wedge s'\leq r\leq s\vee s'} \zeta_r$), and the quotient space $\mathcal{T}(\omega):=[0,\sigma]/\!\sim$ , which is equipped with the distance induced by $d_\zeta$. The metric space $(\mathcal{T}(\omega),d_\zeta)$ is a compact $\mathbb{R}$-tree called the *genealogical tree* of the snake trajectory $\omega$ (we refer to [@probasur] for more information about the coding of $\mathbb{R}$-trees by continuous functions). Let $p_{(\omega)}:[0,\sigma]\longrightarrow\mathcal{T}(\omega)$ stand for the canonical projection. By convention, the tree $\mathcal{T}=\mathcal{T}(\omega)$ is rooted at the point $\rho_{(\omega)}:=p_{(\omega)}(0)=p_{(\omega)}(\sigma)$, and the volume measure $\mathrm{Vol}(\cdot)$ on $\mathcal{T}$ is defined as the pushforward of Lebesgue measure on $[0,\sigma]$ under $p_{(\omega)}$. As usual, for $u,v\in\mathcal{T}$, we say that $u$ is an ancestor of $v$, or $v$ is a descendant of $u$, if $u$ belongs to the line segment from $\rho_{(\omega)}$ to $v$ in $\mathcal{T}$. The snake property shows that the condition $p_{(\omega)}(s)=p_{(\omega)}(s')$ implies that $W_s(\omega)=W_{s'}(\omega)$. So the mapping $s\mapsto W_s(\omega)$ can be viewed as defined on the quotient space $\mathcal{T}$. For $u\in \mathcal{T}$, we set $V_u:=\widehat W_s(\omega)$, for any $s\in[0,\sigma]$ such that $u=p_{(\omega)}(s)$. We interpret $V_u$ as a "label" assigned to the "vertex" $u$ of $\mathcal{T}$, and each path $W_s$ records the labels along the line segment from $\rho_{(\omega)}$ to $p_{(\omega)}(s)$ in $\mathcal{T}$. We will use the notation $$\begin{aligned} W^*&:=\max\{W_s(t): s\geq 0,t\in[0,\zeta_s]\}=\max\{\widehat W_s:0\leq s\leq \sigma\}= \max\{V_u:u\in\mathcal{T}\},\\ W_*&:=\min\{W_s(t): s\geq 0,t\in[0,\zeta_s]\}= \min\{\widehat W_s:0\leq s\leq \sigma\}=\min\{V_u:u\in\mathcal{T}\},\end{aligned}$$ and we also let $\mathcal{Y}=\mathcal{Y}_{(\omega)}$ be the occupation measure of $\omega$, which is the finite measure on $\mathbb{R}$ defined by setting $$\label{occu-measure} \mathcal{Y}(f)= \int_0^{\sigma} f(\widehat W_s)\,\mathrm{d}s=\int_{\mathcal{T}} f(V_u)\,\mathrm{Vol}(\mathrm{d}u),$$ for any Borel function $f:\mathbb{R}\longrightarrow\mathbb{R}_+$. Trivially, $\mathcal{Y}$ is supported on $[W_*,W^*]$. We next introduce the truncation of snake trajectories. For any $\mathrm{w}\in\mathcal{W}_x$ and $y\in\mathbb{R}$, we set $$\tau_y(\mathrm{w}):=\inf\{t\in(0,\zeta_{(\mathrm{w})}]: \mathrm{w}(t)=y\}\,,$$ with the usual convention $\inf\varnothing =\infty$. Then if $\omega\in \mathcal{S}_x$ and $y\in \mathbb{R}$, we set, for every $s\geq 0$, $$\nu_s(\omega):=\inf\Big\{t\geq 0:\int_0^t \mathrm{d}u\,\mathbf{1}_{\{\zeta_{(\omega_u)}\leq\tau_y(\omega_u)\}}>s\Big\}$$ (note that the condition $\zeta_{(\omega_u)}\leq\tau_y(\omega_u)$ holds if and only if $\tau_y(\omega_u)=\infty$ or $\tau_y(\omega_u)=\zeta_{(\omega_u)}$). 
Then, setting $\omega'_s=\omega_{\nu_s(\omega)}$ for every $s\geq 0$ defines an element $\omega'$ of $\mathcal{S}_x$, which will be denoted by ${\rm tr}_y(\omega)$ and called the truncation of $\omega$ at $y$ (see [@ALG Proposition 10]). The effect of the time change $\nu_s(\omega)$ is to "eliminate" those paths $\omega_s$ that hit $y$ and then survive for a positive amount of time. The genealogical tree of ${\rm tr}_y(\omega)$ is canonically and isometrically identified with the closed subset of $\mathcal{T}(\omega)$ consisting of all $u$ such that $V_v(\omega)\not =y$ for every strict ancestor $v$ of $u$ (different from $\rho_{(\omega)}$ when $y=x$). Finally, for $\omega\in\mathcal{S}_x$ and $y\in\mathbb{R}\backslash\{x\}$, we define the excursions of $\omega$ away from $y$. In contrast with the truncation ${\rm tr}_y(\omega)$, these excursions now account for the paths $\omega_s$ that hit $y$ and survive for a positive amount of time. More precisely, let $(\alpha_j,\beta_j)$, $j\in J$, be the connected components of the open set $\{s\in[0,\sigma]:\tau_y(\omega_s)<\zeta_{(\omega_s)}\}$ (note that the indexing set $J$ may be empty). We notice that $\omega_{\alpha_j}=\omega_{\beta_j}$ for every $j\in J$, by the snake property, and $\widehat\omega_{\alpha_j}=y$. For every $j\in J$, we define a snake trajectory $\omega^j\in\mathcal{S}_y$ by setting $$\omega^j_{s}(t):=\omega_{(\alpha_j+s)\wedge\beta_j}(\zeta_{(\omega_{\alpha_j})}+t)\;,\hbox{ for }0\leq t\leq \zeta_{(\omega^j_s)} :=\zeta_{(\omega_{(\alpha_j+s)\wedge\beta_j})}-\zeta_{(\omega_{\alpha_j})}\hbox{ and } s\geq 0.$$ We say that $\omega^j$, $j\in J$, are the excursions of $\omega$ away from $y$. ## The Brownian snake excursion measure {#sna-mea} Let $x\in\mathbb{R}$. The Brownian snake excursion measure $\mathbb{N}_x$ is the $\sigma$-finite measure on $\mathcal{S}_x$ that is characterized by the following two properties: Under $\mathbb{N}_x$, 1. the distribution of the lifetime function $(\zeta_s)_{s\geq 0}$ is the Itô measure of positive excursions of linear Brownian motion, normalized so that, for every ${\varepsilon}>0$, $$\mathbb{N}_x\Big(\sup_{s\geq 0} \zeta_s >{\varepsilon}\Big)=\frac{1}{2{\varepsilon}};$$ 2. conditionally on $(\zeta_s)_{s\geq 0}$, the tip function $(\widehat W_s)_{s\geq 0}$ is a Gaussian process with mean $x$ and covariance function $$K(s,s')= \min_{s\wedge s'\leq r\leq s\vee s'} \zeta_r.$$ Conditionally on the lifetime process $(\zeta_s)_{s\geq 0}$, each path $W_r$ is a linear Brownian path started from $x$ with lifetime $\zeta_r$. When $r$ varies, the evolution of the path $W_r$ is described informally as follows. When $\zeta_r$ decreases, the path $W_r$ is "erased" from its tip, and when $\zeta_r$ increases, the path $W_r$ is "extended" by adding little pieces of Brownian motion at its tip. The measure $\mathbb{N}_x$ can be interpreted as the excursion measure away from $x$ for the Markov process in $\mathcal{W}_x$ called the (one-dimensional) Brownian snake. We refer to [@Zurich] for a detailed study of the Brownian snake with a more general underlying spatial motion. For every $r>0$, we have $$\mathbb{N}_x(W^*>x+r)=\mathbb{N}_x(W_*<x-r)={\displaystyle \frac{3}{2r^2}}$$ (see e.g. [@Zurich Section VI.1]). In particular, $\mathbb{N}_x(y\in[W_*,W^*])<\infty$ if $y\not =x$. The following scaling property is often useful. 
For $\lambda>0$, for every $\omega\in \mathcal{S}_x$, we define $\theta_\lambda(\omega)\in \mathcal{S}_{x\sqrt{\lambda}}$ by $\theta_\lambda(\omega)=\omega'$, with $$\label{scalereln}\omega'_s(t):= \sqrt{\lambda}\,\omega_{s/\lambda^2}(t/\lambda)\,,\ \hbox{for } s\geq 0\hbox{ and }0\leq t\leq \zeta'_s:=\lambda\zeta_{s/\lambda^2}.$$ Then $\theta_\lambda(\mathbb{N}_x)= \lambda\, \mathbb{N}_{x\sqrt{\lambda}}$. Let us introduce the local times, $(\ell^y)_{y\in\mathbb{R}}$, under $\mathbb{N}_x$. The next proposition follows from [@CM] (a slightly weaker statement had been obtained in [@BMJ]), and is also closely related to the results of [@Sug] concerning super-Brownian motion. **Proposition 4**. *Let $x\in\mathbb{R}$. Then, $\mathbb{N}_x(\mathrm{d}\omega)$ a.e. the occupation measure $\mathcal{Y}_{(\omega)}$ has a continuously differentiable density with respect to Lebesgue measure. This density is denoted by $(\ell^y(\omega))_{y\in\mathbb{R}}$ and its derivative is denoted by $(\dot\ell^y(\omega))_{y\in\mathbb{R}}$* We now introduce exit measures. We argue under $\mathbb{N}_x$, and fix $y\in\mathbb{R}\backslash\{x\}$. One shows that the limit $$\label{formu-exit} \mathcal{Z}_y:=\lim_{{\varepsilon}\to 0} \frac{1}{{\varepsilon}} \int_0^\sigma \mathrm{d}s\,\mathbf{1}_{\{\tau_y(W_s)\leq \zeta_s\leq\tau_y(W_s)+{\varepsilon}\}}$$ exists $\mathbb{N}_x$ a.e., and $\mathcal{Z}_y$ is called the exit measure from $(y,\infty)$ (if $x>y$) or from $(-\infty,y)$ (if $y>x$). Roughly speaking, $\mathcal{Z}_y$ counts how many paths $W_s$ hit $y$ and are stopped at that moment. The definition of $\mathcal{Z}_y$ is a particular case of the theory of exit measures, see [@Zurich Chapter V]. We have $\mathcal{Z}_y>0$ if and only if $y\in[W_*,W^*]$, $\mathbb{N}_x$ a.e. (recall $y$ is fixed). Let us recall the special Markov property of the Brownian snake under $\mathbb{N}_0$ (see, for example, the appendix of [@subor]). **Proposition 5** (Special Markov property). *Let $x\in \mathbb{R}$ and $y\in\mathbb{R}\backslash\{x\}$. Under the measure $\mathbb{N}_x(\mathrm{d}\omega)$, let $\omega^j$, $j\in J$, be the excursions of $\omega$ away from $y$ and consider the point measure $$\mathcal{N}_y=\sum_{j\in J} \delta_{\omega^j}.$$ Then, under the probability measure $\mathbb{N}_x(\mathrm{d}\omega\,|\, y\in[W_*,W^*])$ and conditionally on $\mathcal{Z}_y$, the point measure $\mathcal{N}_y$ is Poisson with intensity $\mathcal{Z}_y\,\mathbb{N}_y(\cdot)$ and is independent of $\mathrm{tr}_y(\omega)$.* We now introduce a process called the exit measure process at a point, which will play an important role in the excursion theory discussed below. Let $x\in \mathbb{R}$ and argue under the excursion measure $\mathbb{N}_x$. Also fix another point $y\in\mathbb{R}$ (which may be equal to $x$). Since, conditionally on $\zeta_s$, $W_s$ is just a Brownian path with lifetime $\zeta_s$, we can make sense of its local time at level $y$, which we denote by $\mathcal{L}^y(W_s)=(\mathcal{L}^y_t(W_s))_{0\leq t\leq \zeta_s}$, and the mapping $s\mapsto \mathcal{L}^y(W_s)$, with values in $(\mathcal{W},d_\mathcal{W})$, is continuous (note that $(W_s,\mathcal{L}^y(W_s))$ can be viewed as the Brownian snake whose spatial motion is the pair formed by Brownian motion and its local time at $y$). Then, for every $r\geq 0$ and $s\in[0,\sigma]$, set $$\eta^y_r(W_s)=\inf\{t\in[0,\zeta_s]: \mathcal{L}^y_t(W_s)\geq r\},$$ with the usual convention $\inf\varnothing=\infty$. 
From the general theory of exit measures [@Zurich Chapter V], we get, for every $r>0$, the existence of the almost sure limit $$\mathcal{X}^y_r=\lim_{{\varepsilon}\to 0} \frac{1}{{\varepsilon}}\int_0^\sigma \mathrm{d}s\, \mathbf{1}_{\{\eta^y_r(W_s)\leq \zeta_s\leq \eta^y_r(W_s)+{\varepsilon}\}}.$$ Roughly speaking, $\mathcal{X}^y_r$ measures the "quantity" of paths $W_s$ that end at $y$ after having accumulated a local time at $y$ exactly equal to $r$. See the discussion in the introduction of [@ALG] for more details. Suppose that $y\not = x$. In that case, we also take $\mathcal{X}^y_0=\mathcal{Z}_y$ (compare the last display with [\[formu-exit\]](#formu-exit){reference-type="eqref" reference="formu-exit"}). Then under the probability measure $\mathbb{N}_x(\cdot\mid y\in[W_*,W^*])=\mathbb{N}_x(\cdot\mid \mathcal{Z}_y>0)$, conditionally on $\mathcal{Z}_y$, the process $(\mathcal{X}^y_r)_{r\geq 0}$ is a continuous-state branching process with branching mechanism $\varphi(u)=\sqrt{8/3}\,u^{3/2}$ (in short, a $\varphi$-CSBP) started at $\mathcal{Z}_y$. In particular, $(\mathcal{X}^y_r)_{r\geq 0}$ has a càdlàg modification, which we consider from now on. We refer to [@Zurich Chapter II] for basic facts about continuous-state branching processes, and to [@ALG] for the preceding facts. In the case $y=x$, we take $\mathcal{X}^x_0=0$ by convention, and the process $(\mathcal{X}^x_r)_{r\geq 0}$ is distributed under $\mathbb{N}_x$ according to the excursion measure of the $\varphi$-CSBP. This means that, if $\mathcal{N}=\sum_{k\in K}\delta_{\omega_k}$ is a Poisson point measure with intensity $\alpha\,\mathbb{N}_x$, the process $X$ defined by $X_0=\alpha$ and, for every $r>0$, $$X_r:=\sum_{k\in K} \mathcal{X}^x_r(\omega_k),$$ is a $\varphi$-CSBP started at $\alpha$ (see [@LGR Section 2.4]). In all cases, we call $(\mathcal{X}^y_r)_{r\geq 0}$ the exit measure process at $y$. Local times are related to this process by the formula $$\label{local3} \ell^y=\int_0^\infty \mathrm{d}r \,\mathcal{X}^y_r,$$ which holds $\mathbb{N}_x$ a.e., for every $y\in\mathbb{R}$. See [@LGR Proposition 26] when $y\not = x$, and [@LGR2 Proposition 3.1] when $y=x$. ## The positive excursion measure We now introduce another important measure on $\mathcal{S}_0$. There exists a $\sigma$-finite measure $\mathbb{N}^*_0$ on $\mathcal{S}_0$, which is supported on the set $\mathcal{S}_0^+$ of nonnegative snake trajectories, such that, for every bounded continuous function $G$ on $\mathcal{S}_0^+$ that vanishes on $\{\omega\in\mathcal{S}_0^+:W^*(\omega)\leq \delta\}$ for some $\delta >0$, we have $$\mathbb{N}^*_0(G)=\lim_{{\varepsilon}\to 0}\frac{1}{{\varepsilon}}\,\mathbb{N}_{\varepsilon}(G(\mathrm{tr}_{0}(\omega))).$$ See [@ALG Theorem 23]. Under $\mathbb{N}^*_0(\mathrm{d}\omega)$, each path $\omega_s$, for $0<s<\sigma$, starts from $0$, then stays positive during some time interval $(0,u)$, and is stopped immediately when it returns to $0$, if it does return to $0$. Similarly to the definition of exit measures, one can make sense of the "quantity" of paths $\omega_s$ that return to $0$ under $\mathbb{N}^*_0$. To this end, one proves that the limit $$\label{approxz*0} \mathcal{Z}^*_0:=\lim_{{\varepsilon}\to 0} \frac{1}{{\varepsilon}^2} \int_0^\sigma \mathrm{d}s\,\mathbf{1}_{\{\widehat W_s<{\varepsilon}\}}$$ exists $\mathbb{N}^*_0$ a.e. See [@Disks Section 10]. 
Notice that replacing the limit by a liminf in [\[approxz\*0\]](#approxz*0){reference-type="eqref" reference="approxz*0"} allows us to make sense of $\mathcal{Z}^*_0(\omega)$ for every $\omega\in\mathcal{S}_0^+$. We can then define conditional versions of the measure $\mathbb{N}^*_0$, which play a fundamental role in the present work. Recall the definition of the scaling operators $\theta_\lambda$ in [\[scalereln\]](#scalereln){reference-type="eqref" reference="scalereln"}. According to [@ALG Proposition 33], there exists a unique collection $(\mathbb{N}^{*,z}_0)_{z>0}$ of probability measures on $\mathcal{S}_0^+$ such that: $$\begin{aligned} &\mathrm{(i)}\ \mathbb{N}^*_0=\sqrt{\frac{3}{2\pi}}\int_0^\infty \mathrm{d}z\,z^{-5/2}\,\mathbb{N}^{*,z}_0;\\ &\mathrm{(ii)}\ \mathbb{N}^{*,z}_0(\mathcal{Z}^*_0=z)=1,\ \hbox{for every }z>0;\\ &\mathrm{(iii)}\ \theta_\lambda(\mathbb{N}^{*,z}_0)=\mathbb{N}^{*,\lambda z}_0,\ \hbox{for every }z>0\hbox{ and }\lambda>0.\end{aligned}$$ [\[N\*props\]]{#N*props label="N*props"} Informally, $\mathbb{N}^{*,z}_0=\mathbb{N}^*_0(\cdot\mid \mathcal{Z}^*_0=z)$. Note that the "law" of $\mathcal{Z}^*_0$ under $\mathbb{N}^*_0$ is the $\sigma$-finite measure $$\label{Levy-mea} \mathbf{n}(\mathrm{d}z)=\mathbf{1}_{\{z>0\}}\,\sqrt{\frac{3}{2 \pi}} \,z^{-5/2}\,\mathrm{d}z.$$ It will be convenient to write $\check\mathbb{N}^{*,z}_0$ for the pushforward of $\mathbb{N}^{*,z}_0$ under the mapping $\omega\to -\omega$. Furthermore, for every $a\in\mathbb{R}$, we write $\mathbb{N}^{*,z}_a$, resp. $\check\mathbb{N}^{*,z}_a$ for the pushforward of $\mathbb{N}^{*,z}_0$, resp. of $\check\mathbb{N}^{*,z}_0$, under the shift $\omega\mapsto \omega + a$. We state a useful technical lemma. **Lemma 6**. *For every $z>0$ and ${\varepsilon}>0$, $\mathbb{N}^{*,z}_0(W^*<{\varepsilon})>0$. Moreover, there exists a constant $C$ such that, for every $z>0$ and $x>0$, $$\mathbb{N}^{*,z}_0(W^*>x)\leq C \,\frac{z^3}{x^6}.$$* We omit the proof of the first assertion. For the second one, see [@LGR Corollary 5]. Recall the notation $\mathcal{Y}_{(\omega)}$ for the occupation measure of $\omega\in\mathcal{S}$ from [\[occu-measure\]](#occu-measure){reference-type="eqref" reference="occu-measure"}. **Lemma 7**. *Let $z>0$. Then, $\mathbb{N}^{*,z}_0(\mathrm{d}\omega)$ a.s. the measure $\mathcal{Y}_{(\omega)}$ has a continuous density with respect to Lebesgue measure on $\mathbb{R}$. This density vanishes on $(-\infty,0]$ and is continuously differentiable on $(0,\infty)$.* Via scaling arguments, it is enough to prove this with $\mathbb{N}^{*,z}_0$ replaced by $\mathbb{N}^*_0$. Then, we can use the re-rooting property of $\mathbb{N}^{*}_0$ (see [@ALG Theorem 28] or [@Mar Theorem 5]) to obtain that it suffices to prove the following claim: For every $b>0$, $\mathbb{N}_b(\mathrm{d}\omega)$ a.e., the occupation measure $\mathcal{Y}_{(\mathrm{tr}_0(\omega))}$ has a continuous density, which vanishes on $(-\infty,0]$ and is continuously differentiable on $(0,\infty)$. Note that, $\mathbb{N}_b(\mathrm{d}\omega)$ a.e., $\mathcal{Y}_{(\mathrm{tr}_0(\omega))}$ is supported on $[0,\infty)$ and thus, once we know that $\mathcal{Y}_{(\mathrm{tr}_0(\omega))}$ has a continuous density, it is obvious that this density vanishes on $(-\infty,0]$. Let us fix $b>0$ and argue under $\mathbb{N}_b$. 
Writing $(\omega^j)_{j\in J}$ for the excursions of $\omega$ away from $0$, one easily verifies that, $\mathbb{N}_b(\mathrm{d}\omega)$ a.e., $$\label{decompo} \mathcal{Y}_{(\omega)}=\mathcal{Y}_{(\mathrm{tr}_0(\omega))}+ \sum_{j\in J} \mathcal{Y}_{(\omega^j)}.$$ We know that $\mathbb{N}_b(\mathrm{d}\omega)$ a.e., the measure $\mathcal{Y}_{(\omega)}$ has a continuously differentiable density $(\ell^x(\omega))_{x\in\mathbb{R}}$ and the same holds for the measures $\mathcal{Y}_{(\omega^j)}$ since we know that (conditionally on $\mathcal{Z}_0(\omega)$) the snake trajectories $\omega^j$, $j\in J$, are the atoms of a Poisson point measure with intensity $\mathcal{Z}_0\mathbb{N}_0$. Note that, for every fixed $x\not =0$, there are only finitely many indices $j$ such that $\ell^x(\omega^j)>0$. It then follows that the measure $\sum_{j\in J} \mathcal{Y}_{(\omega^j)}$ has a density, and this density is given for $x\not =0$ by the function $\sum_{j\in J} \ell^x(\omega^j)$, which is continuously differentiable on $\mathbb{R}\backslash\{0\}$. However, $\mathbb{N}_b(\mathrm{d}\omega)$ a.e., the function $$x\mapsto \sum_{j\in J} \ell^x(\omega^j)$$ is continuous on $\mathbb{R}$: we already know that it is continuous on $\mathbb{R}\backslash\{0\}$, and for the continuity at $0$ we refer to formula (3.9) and the subsequent discussion in [@LGR2]. From [\[decompo\]](#decompo){reference-type="eqref" reference="decompo"}, we now deduce that $\mathcal{Y}_{(\mathrm{tr}_0(\omega))}$ has a continuous density on $\mathbb{R}$, which is given by $$x\mapsto \ell^x(\omega)-\sum_{j\in J} \ell^x(\omega^j).$$ Since $x\mapsto \ell^x(\omega)$ is continuously differentiable on $\mathbb{R}$ and $x\mapsto \sum_{j\in J} \ell^x(\omega^j)$ is continuously differentiable on $\mathbb{R}\backslash\{0\}$, this density is continuously differentiable on $(0,\infty)$. This completes the proof. In what follows, we will use the same notation $(\ell^x(\omega))_{x\in\mathbb{R}}$ to denote the density of $\mathcal{Y}_{(\omega)}$ under $\mathbb{N}^{*,z}_0(\mathrm{d}\omega)$ or under $\mathbb{N}^{*,z}_a(\mathrm{d}\omega)$ for any $a\in\mathbb{R}$. ## Excursion theory {#sec:excu} Let us now recall the main theorem of the excursion theory developed in [@ALG]. We fix $x\in\mathbb{R}$ and $y\in[x,\infty)$, and we argue under $\mathbb{N}_x(\mathrm{d}\omega)$. As in the classical setting of excursion theory for linear Brownian motion, our goal is to describe the evolution of the labels $V_u$ on the connected components of $\{u\in\mathcal{T}(\omega): V_u(\omega)\not =y\}$. So, let $\mathcal{C}$ be such a connected component and write $\overline{\mathcal{C}}$ for the closure of $\mathcal{C}$. We leave aside the case where $\mathcal{C}$ contains the root $\rho_{(\omega)}$ of $\mathcal{T}(\omega)$ (this case does not occur if $y=x$). Then, there is a unique point $u$ of $\overline{\mathcal{C}}$ at minimal distance from $\rho_{(\omega)}$, such that all points of $\overline{\mathcal{C}}$ are descendants of $u$, and we have $V_u=y$. Following [@ALG], we say that $u$ is an excursion debut (from $y$). We can then define a snake trajectory $\omega^{(u)}$ that accounts for the connected component $\mathcal{C}$ and the labels on $\mathcal{C}$. To this end, we first observe that the set of all descendants of $u$ in $\mathcal{T}(\omega)$ can be written as $p_{(\omega)}([s_0,s'_0])$, where $s_0$ and $s'_0$ are such that $p_{(\omega)}(s_0)=p_{(\omega)}(s'_0)=u$. 
Then, we first define a snake trajectory $\tilde\omega^{(u)}\in \mathcal{S}_y$ coding the subtree $p_{(\omega)}([s_0,s'_0])$ (and its labels) by setting $$\tilde\omega^{(u)}_s(t):=\omega_{(s_0+s)\wedge s'_0}(\zeta_{s_0}+t)\,\hbox{ for }0\leq t\leq \zeta_{(s_0+s)\wedge s'_0}-\zeta_{s_0}.$$ The set $\overline{\mathcal{C}}$ is the subset of $p_{(\omega)}([s_0,s'_0])$ consisting of all $v$ such that labels stay greater than $y$ along the line segment from $u$ to $v$, except at $u$ and possibly at $v$. This leads us to define $$\omega^{(u)}:=\mathrm{tr}_y(\tilde\omega^{(u)}).$$ Then one can check (see [@ALG] for more details) that the compact $\mathbb{R}$-tree $\overline{\mathcal{C}}$ is identified isometrically to the tree $\mathcal{T}(\omega^{(u)})$, and moreover this identification preserves labels. Also, the restriction of the volume measure of $\mathcal{T}(\omega)$ to $\mathcal{C}$ corresponds to the volume measure of $\mathcal{T}_{(\omega^{(u)})}$ via the latter identification. We say that $\omega^{(u)}$ is an excursion above $y$ if the values of $V_v$ for $v\in\mathcal{C}$ are greater than $y$ and that $\omega^{(u)}$ is an excursion below $y$ if the values of $V_v$ for $v\in\mathcal{C}$ are smaller than $y$. Note that an excursion away from $y$, as considered in Proposition [Proposition 5](#SMP){reference-type="ref" reference="SMP"}, will contain infinitely many excursions above or below $y$. Let $\mathcal{Y}_{(\omega)}^{(y,\infty)}$ denote the restriction of $\mathcal{Y}_{(\omega)}$ to $(y,\infty)$. Then, the preceding identification of volume measures entails that $$\label{decomp-occup} \mathcal{Y}_{(\omega)}^{(y,\infty)}=\sum_{u\in\mathcal{D}_y^+} \mathcal{Y}_{(\omega^{(u)})},$$ where $\mathcal{D}_y^+$ is the set of all debuts of excursions above $y$. Recall that the exit measure process $(\mathcal{X}^y_r)_{r\geq 0}$ was defined in Section [2.2](#sna-mea){reference-type="ref" reference="sna-mea"}. By Proposition 3 of [@ALG] (and an application of the special Markov property when $y\not=x$), excursion debuts from $y$ are in one-to-one correspondence with the jump times of the process $(\mathcal{X}^y_r)_{r\geq 0}$, or equivalently with the jumps of this process, in such a way that, if $u$ is an excursion debut and $s\in[0,\sigma]$ is such that $p_{(\omega)}(s)=u$, the associated jump time of the exit measure process at $y$ is the total local time at $y$ accumulated by the path $W_s$. We can rank the jumps of $(\mathcal{X}^y_r)_{r\geq 0}$ in a sequence $(\delta_i)_{i\in\mathbb{N}}$ in decreasing order. For every $i\in\mathbb{N}$, we write $u_i$ for the excursion debut associated with the jump $\delta_i$. The following theorem is essentially Theorem 4 in [@ALG]. We write $\mathbb{N}_x^{(y)}=\mathbb{N}_x(\cdot \mid \mathcal{Z}_y>0)$ when $y\not=x$, and $\mathbb{N}_x^{(x)}=\mathbb{N}_x$. **Theorem 8**. *Under $\mathbb{N}_x^{(y)}$, conditionally on $(\mathcal{X}^y_r)_{r\geq 0}$, the excursions $\omega^{(u_i)}$, $i\in \mathbb{N}$, are independent, and independent of $\mathrm{tr}_y(\omega)$, and, for every $i\in\mathbb{N}$, the conditional distribution of $\omega^{(u_i)}$ is $$\frac{1}{2}\Big(\mathbb{N}^{*,\delta_i}_y+\check\mathbb{N}^{*,\delta_i}_y\Big).$$* We say that $\delta_i$ is the boundary size of the excursion $\omega^{(u_i)}$. 
The case $y=x$ of Theorem [Theorem 8](#theo-excursion){reference-type="ref" reference="theo-excursion"} is Theorem 4 of [@ALG] and the case $y\not =x$ can then be derived by an application of the special Markov property (Proposition [Proposition 5](#SMP){reference-type="ref" reference="SMP"}). ## The Lévy bridge {#subsec:bridge} Recall from the Introduction and Section [2.2](#sna-mea){reference-type="ref" reference="sna-mea"} that for $\lambda\geq 0$, $\psi(\lambda)=\frac{1}{2}\varphi(\lambda)=\sqrt{2/3}\,\lambda^{3/2}$, and that $(U_t)_{t\geq 0}$ denotes a stable Lévy process with index $3/2$, without negative jumps, and scaled so that its Laplace exponent is $\psi(\lambda)$. This means that for every $t\geq 0$ and $\lambda >0$, we have $$\mathbb{E}[\exp(-\lambda U_t)]=\exp(t\psi(\lambda)).$$ The Lévy measure of $U$ is $\frac{1}{2}\mathbf{n}(\mathrm{d}z)$, where $\mathbf{n}(\mathrm{d}z)$ was defined in [\[Levy-mea\]](#Levy-mea){reference-type="eqref" reference="Levy-mea"}, and $U_s$ has characteristic function $$\mathbb{E}[e^{\mathrm{i}u U_s}]= e^{-s\Psi(u)},$$ where $$\label{carac-U} \Psi(u)= c_0 |u|^{3/2}\,(1+\mathrm{i}\,\mathrm{sgn}(u)),$$ and $c_0=1/\sqrt{3}$. Recall also that $U_s$ has a density, $p_s(x)$, which by Fourier inversion is given by $$p_s(x)=\frac{1}{2\pi}\int e^{-\mathrm{i}ux-s\Psi(u)}\mathrm{d}u.$$ Several properties of $p_s(x)$ were recalled in the Introduction. Another property we use is that the distribution of $U_s$ is known to be unimodal, in the sense that there exists $a\in\mathbb{R}$ such that both functions $x\mapsto p_s(a-x)$ and $x\mapsto p_s(a+x)$ are nonincreasing on $\mathbb{R}_+$ (cf. [@Zol Theorem 2.7.5]). For every $t>0$ and $y\in\mathbb{R}$, we can make sense of the process $(U_s)_{0\leq s\leq t}$ conditioned on $\{U_t=y\}$, which is called the $\psi$-Lévy bridge from $0$ to $y$ in time $t$ (see [@FPY] for a construction in a much more general setting). Write $(U^{\mathrm{br},t,y})_{0\leq s\leq t}$ for a $\psi$-Lévy bridge from $0$ to $y$ in time $t$. Then, for every $r\in(0,t)$ and every nonnegative measurable function $F$ on the Skorokhod space $\mathbb{D}([0,r],\mathbb{R})$, we have $$\label{RNF}\mathbb{E}\Big[F\Big((U^{\mathrm{br},t,y})_{0\leq s\leq r}\Big)\Big] =\mathbb{E}\Bigg[ \frac{p_{t-r}(y-U_r)}{p_t(y)}\,F\Big((U_s)_{0\leq s\leq r}\Big)\Bigg].$$ See [@FPY Proposition 1]. In particular, the law of $(U^{\mathrm{br},t,y}_s)_{0\leq s\leq r}$ has a bounded density with respect to the law of $(U_s)_{0\leq s\leq r}$. Via a simple time-reversal argument, the same holds for the law of $(y-U^{\mathrm{br},t,y}_{(t-s)-})_{0\leq s\leq r}$. In what follows, when we write $$\mathbb{E}\Big[F\Big((U_s)_{0\leq s\leq t}\Big)\,\Big|\,U_t=y\Big],$$ this should always be understood as $\mathbb{E}[F( (U^{\mathrm{br},t,y})_{0\leq s\leq t})]$ (which makes sense for every choice of $y\in\mathbb{R}$). # The connection with super-Brownian motion {#sec:super} Let us briefly recall the connection between the Brownian snake excursion measures $\mathbb{N}_x$ and super-Brownian motion, referring to [@Zurich] for more details. We fix $\alpha>0$, and consider a Poisson point measure on $\mathcal{S}$, $$\mathcal{N}=\sum_{k\in K} \delta_{\omega_k}$$ with intensity $\alpha\,\mathbb{N}_0$. 
Then one can construct a one-dimensional super-Brownian motion $(\mathbf{X}_t)_{t\geq 0}$ with branching mechanism $\Phi(u)=2u^2$ and initial value $\mathbf{X}_0=\alpha\,\delta_0$, such that, for any nonnegative measurable function $f$ on $\mathbb{R}$, $$\label{occu-super} \int_0^\infty \mathbf{X}_t(f)\, \mathrm{d}t= \sum_{k\in K} \mathcal{Y}_{(\omega_k)}(f)$$ where $\mathcal{Y}_{(\omega_k)}$ is defined in formula [\[occu-measure\]](#occu-measure){reference-type="eqref" reference="occu-measure"}. In a more precise way, the process $(\mathbf{X}_t)_{t\geq0}$ is defined by setting, for every $t>0$ and every nonnegative Borel function $f$ on $\mathbb{R}$, $$\mathbf{X}_t(f) := \sum_{k\in K} \int_0^{\sigma(\omega_k)} f(\widehat W_r(\omega_k))\,\mathrm{d}_rl^t_r(\omega_k),$$ where $l^t_r(\omega_k)$ denotes the local time of the process $s\mapsto \zeta_s(\omega_k)$ at level $t$ and at time $r$, and the notation $\mathrm{d}_rl^t_r(\omega_k)$ refers to integration with respect to the nondecreasing function $r\mapsto l^t_r(\omega_k)$ (see Chapter 4 of [@Zurich]). The preceding representation of $\mathbf{X}$ allows us to consider excursions above and below $a$, for any $a\in\mathbb{R}$. Consider for simplicity the case $a=0$. We define the exit measure process $(X^0_t)_{t\geq 0}$ at $0$ by setting $X^0_0=\alpha$ and, for $t>0$, $$\label{exitmdef}X^0_t=\sum_{k\in K} \mathcal{X}^0_t(\omega_k).$$ As was already mentioned in Section [2.2](#sna-mea){reference-type="ref" reference="sna-mea"}, the process $(X^0_t)_{t\geq 0}$ is a $\varphi$-CSBP started at $\alpha$. Write $(\delta_i)_{i\in\mathbb{N}}$ for the sequence of its jumps ordered in decreasing size. Then the collection of all excursions of $\omega_k$ above and below $0$, combined for all $k\in K$, is in one-to-one correspondence with the collection $(\delta_i)_{i\in\mathbb{N}}$. Moreover, if $\omega_i$ denotes the excursion associated with the jump $\delta_i$, then: $$\begin{aligned} \label{indtexc} &\text{The excursions $\omega_i$, $i\in \mathbb{N}$, are independent conditionally on $(X^0_t)_{t\geq 0}$,} \\ \nonumber&\text{and the conditional distribution of $\omega_i$ is $\frac{1}{2}\Big(\mathbb{N}^{*,\delta_i}_y+\check\mathbb{N}^{*,\delta_i}_y\Big)$.}\end{aligned}$$ All these facts are immediate consequences of Theorem [Theorem 8](#theo-excursion){reference-type="ref" reference="theo-excursion"} and the discussion preceding it. We are primarily interested in the total occupation measure $$\mathbf{Y}:=\int_0^\infty \mathbf{X}_t \,\mathrm{d}t.$$ Recall from the Introduction, the notation $L^x$, $\dot L^x$ for its continuous density, and its continuous derivative on $\{x\neq 0\}$, and $\dot L^{0+}$, $\dot L^{0-}$ for the right and left derivatives at $0$, respectively, and $\dot L^0:=\dot L^{0+}$. It also follows from Sugitani [@Sug Theorem 4] and its proof that $$\dot L^{0+}=\lim_{x\to 0, x>0} \dot L^x\,,\quad \dot L^{0-}=\lim_{x\to 0, x<0} \dot L^x\,,$$ and $$\label{L0jump}\dot L^{0+}-\dot L^{0-}= -2\alpha.$$ Fix $a\geq 0$, and write $\mathbf{Y}^{(a,\infty)}$ for the restriction of $\mathbf{Y}$ to $(a,\infty)$, and similarly $\mathcal{Y}_{(\omega_k)}^{(a,\infty)}$ for the restriction of $\mathcal{Y}_{(\omega_k)}$ to $(a,\infty)$. In what follows, we assume that $\{k\in K:W^*(\omega_k)>a\}$ is not empty. In view of our applications, we are interested in excursions of $\omega_k$ above level $a$, combined for all $k\in K$, such that $W^*(\omega_k)>a$. 
We can order these excursions in a sequence $(\omega^{a,+}_j)_{j\in\mathbb{N}}$ in decreasing order of their boundary sizes (Theorem [Theorem 8](#theo-excursion){reference-type="ref" reference="theo-excursion"} implies that these boundary sizes are distinct a.s.). From [\[decomp-occup\]](#decomp-occup){reference-type="eqref" reference="decomp-occup"} and [\[occu-super\]](#occu-super){reference-type="eqref" reference="occu-super"}, we have $$\mathbf{Y}^{(a,\infty)}=\sum_{k\in K} \mathcal{Y}_{(\omega_k)}^{(a,\infty)}=\sum_{j\in \mathbb{N}} \mathcal{Y}_{(\omega^{a,+}_j)}.$$ Consequently, for every $h>0$, we have $$\label{decomp-local} L^{a+h}=\sum_{j=1}^\infty \ell^{a+h}(\omega^{a,+}_j)\,,\quad \dot L^{a+h}=\sum_{j=1}^\infty \dot\ell^{a+h}(\omega^{a,+}_j)\,.$$ Note that there are only finitely many nonzero terms in the sums of the last display. The next proposition will be a key ingredient of our approach. We can write the supremum of the support of $\mathbf{Y}$ as $R=\sup\{W^*(\omega_k):k\in K\}$. By [\[Lsupp\]](#Lsupp){reference-type="eqref" reference="Lsupp"}, we have for any $a\ge 0$, $\{L^a>0\}=\{R>a\}$, a.s. **Proposition 9**. *Let $a\geq 0$, let $F$ be a nonnegative measurable function on the space $C((-\infty,a],\mathbb{R}_+\times \mathbb{R})$, and let $G$ be a nonnegative measurable function on $(\mathcal{S}_a)^\mathbb{N}$. Then, $$\mathbb{E}\Big[\mathbf{1}_{\{R>a\}}\,F\Big((L^x,\dot L^x)_{x\in(-\infty,a]}\Big) G\Big((\omega^{a,+}_j)_{j\in\mathbb{N}}\Big)\Big] =\mathbb{E}\Big[ F\Big((L^x,\dot L^x)_{x\in(-\infty,a]}\Big) \Phi_G(L^a,\frac{1}{2}\dot L^a)\Big]$$ where $\Phi_G(0,y)=0$ for every $y\in\mathbb{R}$, and, for every $t>0$ and $y\in\mathbb{R}$, $\Phi_G(t,y)$ is defined as follows. Let $U^{\mathrm{br},t,y}$ be a $\psi$-Lévy bridge from $0$ to $y$ in time $t$, and let $(Z_j)_{j\in\mathbb{N}}$ be the collection of jumps of $U^{\mathrm{br},t,y}$ ordered in nonincreasing size. Then, $$\Phi_G(t,y)=\mathbb{E}\Big[G\Big((\varpi_j)_{j\in\mathbb{N}}\Big)\Big],$$ where, conditionally on $(Z_j)_{j\in\mathbb{N}}$, the random snake trajectories $(\varpi_j)_{j\in\mathbb{N}}$ are independent, and, for every $j$, $\varpi_j$ is distributed according to $\mathbb{N}^{*,Z_j}_a$.* See [@Mar Section 6] for a proof of this proposition (cf. formula (38) in [@Mar]). Proposition [Proposition 9](#key-ingre){reference-type="ref" reference="key-ingre"} is basically a consequence of Theorem [Theorem 8](#theo-excursion){reference-type="ref" reference="theo-excursion"}, but one needs to understand the conditional distribution of the boundary sizes of excursions above level $a$ given the collection of boundary sizes of excursions below $a$, see in particular formula (24) in [@Mar]. Thanks to formula [\[decomp-local\]](#decomp-local){reference-type="eqref" reference="decomp-local"}, Proposition [Proposition 9](#key-ingre){reference-type="ref" reference="key-ingre"} immediately gives the (time-homogeneous) Markov property of the process $(L^x,\dot L^x)_{x\geq 0}$. Moreover, this proposition shows that, for every $t>0$ and $y\in\mathbb{R}$, the conditional distribution of $(\omega^{a,+}_j)_{j\in\mathbb{N}}$ knowing $L^a=t$ and $\frac{1}{2}\dot L^a=y$ is the law of the sequence $(\varpi_j)_{j\in\mathbb{N}}$, as described in the statement. We emphasize that this conditional distribution makes sense for *every* choice of $t>0$ and $y\in\mathbb{R}$. 
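As an aside, the Poissonian structure of the jumps entering the definition of $\Phi_G$ is easy to simulate for the unconditioned Lévy process: on $[0,t]$, the jumps of $U$ of size greater than ${\varepsilon}$ are Poisson in number with mean $t\cdot\frac{1}{2}\mathbf{n}(({\varepsilon},\infty))$, with i.i.d. sizes drawn from the normalized restriction of $\frac{1}{2}\mathbf{n}$ to $({\varepsilon},\infty)$. The Python sketch below is a hypothetical illustration of this (it is not code from the references); producing the jumps of the bridge $U^{\mathrm{br},t,y}$ itself would in addition require the reweighting [\[RNF\]](#RNF){reference-type="eqref" reference="RNF"}.

```python
import numpy as np

def large_jumps(t, eps, rng):
    """Sample the jumps of size > eps of the Levy process U on the time interval [0, t].

    The Levy measure of U is (1/2) n(dz) = sqrt(3/(8*pi)) * z**(-2.5) dz on (0, inf), so
    the number of such jumps is Poisson with mean t * sqrt(3/(8*pi)) * (2/3) * eps**(-1.5),
    and each jump size Z satisfies P(Z > z) = (z / eps)**(-1.5) for z > eps.
    """
    mean = t * np.sqrt(3.0 / (8.0 * np.pi)) * (2.0 / 3.0) * eps ** (-1.5)
    count = rng.poisson(mean)
    sizes = eps * rng.random(count) ** (-2.0 / 3.0)  # inverse of the Pareto(3/2) tail
    return np.sort(sizes)[::-1]                      # decreasing order, as for (Z_j)

rng = np.random.default_rng(0)
print(large_jumps(t=1.0, eps=0.05, rng=rng))
```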
Later, when we consider expressions of the form $$\label{condi-expec} \mathbb{E}\Big[G\Big((\omega^{a,+}_j)_{j\in\mathbb{N}}\Big)\,\Big|\,L^a=t, \frac{1}{2}\dot L^a=y\Big],$$ this will always mean that we integrate $G$ with respect to the conditional distribution described above. # Moment Bounds and Quadratic Variation {#est-moment} In this section, we use a representation due to Hong [@Hon] to derive certain estimates for moments of the derivatives $\dot L^x$ introduced in the previous section. We consider the super-Brownian motion $\mathbf{X}$ with $\mathbf{X}_0=\alpha\delta_0$, constructed as above, and write $M$ for the associated martingale measure (see [@Per Section II.5]). For every function $\phi:\mathbb{R}\longrightarrow\mathbb{R}$ of class $C^2$, $$M_t(\phi):=\mathbf{X}_t(\phi) - \mathbf{X}_0(\phi) -\int_0^t \mathbf{X}_s(\phi''/2)\,\mathrm{d}s$$ is a (continuous) local martingale (with respect to the canonical filtration of $\mathbf{X}$) with quadratic variation $$\label{quad-var-M} \langle M(\phi),M(\phi)\rangle_t= 4\int_0^t \mathbf{X}_s(\phi^2)\,\mathrm{d}s.$$ There is a linear extension of the definition of the local martingale $M_t(\phi)$ to locally bounded Borel functions $\phi$, and [\[quad-var-M\]](#quad-var-M){reference-type="eqref" reference="quad-var-M"} remains valid (e.g., see Proposition II.5.4 and Corollary III.1.7 of [@Per]). Let $\xi:=\inf\{t\geq 0:\mathbf{X}_t=0\}$ stand for the (a.s. finite) extinction time of $\mathbf{X}$ and let $x>0$. According to [@Hon Proposition 2.2], we have a.s. for every $t\geq \xi$, $$\label{repre-deri} \dot L^x=-\alpha-M_t(\mathrm{sgn}(x-\cdot)),$$ where $\mathrm{sgn}(x-\cdot)$ stands for the function $y\mapsto \mathbf{1}_{\{x>y\}}-\mathbf{1}_{\{x<y\}}$. With our convention for $\dot L^0$, this formula remains valid for $x=0$. We use this representation to derive the following lemma. **Lemma 10**. *(i) For every $q\in [1,4/3)$, for every $x,y\in \mathbb{R}$, $$\mathbb{E}[|\dot L^x-\dot L^y|^{q}]<\infty.$$ (ii) Let $q\in [1,4/3)$. There exists a constant $\beta>0$ such that, for every $0<u<v$, $$\label{bound-incre} \mathbb{E}\Big[\sup_{x, y\in[u,v],x\neq y}\Big( \frac{ |\dot L^x -\dot L^y|}{|x-y|^\beta}\Big)^q\Big]<\infty.$$* \(i\) We first verify that, for every $x>0$ and every $q\in(0,2/3)$, $$\label{deri-tec1} \mathbb{E}\Big[\Big(\int_0^\infty \mathbf{X}_s([0,x])\,\mathrm{d}s\Big)^q\Big]<\infty.$$ To see this, recall the well-known formula $\mathbb{P}(\xi>t)=1-\exp(-\frac{\alpha}{2t})$ (which is easily derived from the representation of the preceding section), and write for every $\lambda>0$ and $r>0$, $$\begin{aligned} \mathbb{P}\Bigg(\Big(\int_0^\infty \mathbf{X}_s([0,x])\,\mathrm{d}s\Big)^q>\lambda\Bigg) &\leq \mathbb{P}(\xi >\lambda^r)+\mathbb{P}\Big(\int_0^{\lambda^r} \mathbf{X}_s([0,x])\,\mathrm{d}s>\lambda^{1/q}\Big)\\ &\leq \frac{\alpha}{2\lambda^r} + \frac{1}{\lambda^{1/q}} \int_0^{\lambda^r} \mathbb{E}[\mathbf{X}_s([0,x])]\,\mathrm{d}s\\ &= \frac{\alpha}{2\lambda^r}+ \frac{\alpha}{\lambda^{1/q}} \int_0^{\lambda^r} \mathbb{P}(B_s\in[0,x])\,\mathrm{d}s\\ &\leq \alpha\Big(\frac{1}{2}\,\lambda^{-r}+ x\,\lambda^{r/2-1/q}\Big),\end{aligned}$$ where we wrote $(B_t)_{t\geq 0}$ for a linear Brownian motion started at $0$, and we used the trivial bound $\mathbb{P}(B_s\in[0,x])\leq x/\sqrt{2\pi s}$. If we take $r=2/(3q)$, the right-hand side of the previous display becomes a constant, depending on $x$, times $\lambda^{-2/(3q)}$, which is integrable in $\lambda$ with respect to Lebesgue measure on $[1,\infty)$ if $0<q<2/3$. 
Our claim [\[deri-tec1\]](#deri-tec1){reference-type="eqref" reference="deri-tec1"} follows. Next let $K>0$ and $0\leq x<y\leq K$. We observe that $M_t(\mathrm{sgn}(x-\cdot))-M_t(\mathrm{sgn}(y-\cdot))$ is a continuous local martingale with quadratic variation $$4\int_0^t \mathbf{X}_s((\mathrm{sgn}(x-\cdot)-\mathrm{sgn}(y-\cdot))^2)\,\mathrm{d}s= 16\int_0^t\mathbf{X}_s([x,y])\,\mathrm{d}s.$$ From [\[deri-tec1\]](#deri-tec1){reference-type="eqref" reference="deri-tec1"} and the Burkholder-Davis-Gundy inequalities, we obtain that, for every $q\in[1,4/3)$, $$\mathbb{E}\Big[ \Big|M_t(\mathrm{sgn}(x-\cdot))-M_t(\mathrm{sgn}(y-\cdot))\Big|^q\Big]\leq C_{(q,K)},$$ where the constant $C_{(q,K)}$ only depends on $K$ and $q$. Letting $t$ tend to infinity and using [\[repre-deri\]](#repre-deri){reference-type="eqref" reference="repre-deri"} together with Fatou's lemma, we get that $\mathbb{E}[|\dot L^x-\dot L^y|^{q}]\leq C_{(q,K)}$. By symmetry, we have for every $x>0$, $\mathbb{E}[|\dot L^{-x}-\dot L^{0-}|^{q}]=\mathbb{E}[|\dot L^x-\dot L^0|^{q}]<\infty$, and, by [\[L0jump\]](#L0jump){reference-type="eqref" reference="L0jump"}, $|\dot L^0-\dot L^{0-}|=2\alpha$. Assertion (i) follows. \(ii\) We first observe that, for every $\delta>0$, there is a constant $C_\delta$ (depending on $\alpha$) such that, for every $\delta\leq x\leq y$ and every $s>0$, $$\label{L2int}\mathbb{E}[\mathbf{X}_s([x,y])^2]\leq C_\delta\,(y-x)^2.$$ To see this, first use the explicit formula $$\mathbb{E}[\mathbf{X}_s([x,y])^2]= \alpha^2\Bigg(\int_x^y q_s(u)\mathrm{d}u \Bigg)^2 + 4\alpha\int_0^s \mathrm{d}r\int_\mathbb{R}\mathrm{d}u \,q_r(u)\Bigg(\int_x^y \mathrm{d}v\, q_{s-r}(v-u)\Bigg)^2,$$ where $q_s(u)$ is the Brownian transition density (see e.g. Proposition II.11 in [@Zurich]). To handle the second term of the right-hand side, bound $q_{s-r}(v-u)$ by $C/\sqrt{s}$ when $r<s/2$, and when $r>s/2$ use $\int \mathrm{d}u \,q_{s-r}(v-u)q_{s-r}(v'-u)=q_{2(s-r)}(v-v')$. The bound [\[L2int\]](#L2int){reference-type="eqref" reference="L2int"} now follows from a short calculation. To simplify notation, set $\widehat L^x_t=-\alpha-M_t(\mathrm{sgn}(x-\cdot))$. From the Burkholder-Davis-Gundy inequalities and the bound in [\[L2int\]](#L2int){reference-type="eqref" reference="L2int"}, we get the existence of a constant $C$ such that, for every $\delta\leq x\leq y$, $$\mathbb{E}[(\widehat L^y_t-\widehat L^x_t)^4]\leq C\,\mathbb{E}\Big[\Big(\int_0^t \mathbf{X}_s([x,y])\,\mathrm{d}s\Big)^2\Big] \leq C\,t\,\mathbb{E}\Big[\int_0^t (\mathbf{X}_s([x,y]))^2\,\mathrm{d}s\Big] \leq C\,C_\delta\,t^2(y-x)^2.$$ Let $a>0$ and $\lambda>0$. For every $n\in\mathbb{N}$, we can bound $$\mathbb{P}\Bigg(\sup_{1\leq k\leq 2^n} |\widehat L^{1+k2^{-n}}_t- \widehat L^{1+(k-1)2^{-n}}_t| >\lambda a^n\Bigg) \leq 2^n\times (\lambda a^n)^{-4} \times C\,C_1\,t^22^{-2n}=C\,C_1\,t^2\lambda^{-4}a^{-4n}2^{-n}.$$ We fix $a\in(0,1)$ such that $a^{-4}<2$. Consider the event $$A:=\bigcup_{n\in\mathbb{N}} \Bigg\{\sup_{1\leq k\leq 2^n} |\widehat L^{1+k2^{-n}}_t- \widehat L^{1+(k-1)2^{-n}}_t| >\lambda a^n\Bigg\}.$$ We get $\mathbb{P}(A)\leq \widetilde C\,t^2\lambda^{-4}$, where $\widetilde C$ is a constant. Let $D$ be the set of all real numbers of the form $1+k2^{-n}$ with $n\in\mathbb{N}$ and $k\in\{0,1,\ldots, 2^n\}$. On the complement of the set $A$, simple chaining arguments show that we have $|\widehat L^x_t-\widehat L^y_t|\leq K\,\lambda\,|x-y|^\beta$ for every $x,y\in D$, where $\beta=-\log a/\log 2>0$ and $K$ is a constant (which does not depend on $\lambda$). 
Finally, since $\dot L^y -\dot L^x=\widehat L^y_t-\widehat L^x_t$ on $\{\xi\leq t\}$, we have $$\mathbb{P}\Bigg(\sup_{x,y\in[1,2],x\neq y} \frac{ |\dot L^x -\dot L^y|}{|x-y|^\beta}>K\,\lambda\Bigg) =\mathbb{P}\Bigg(\sup_{x,y\in D,x\neq y} \frac{ |\dot L^x -\dot L^y|}{|x-y|^\beta} >K\,\lambda\Bigg)\leq \mathbb{P}(\xi>t) + \widetilde C\,t^2\lambda^{-4}\leq\frac{\alpha}{2t} + \widetilde C\,t^2\lambda^{-4}.$$ We apply this bound with $t=\lambda^{4/3}$, and it follows that $$\mathbb{E}\Big[\sup_{x,y\in[1,2],x\neq y}\Big( \frac{ |\dot L^x -\dot L^y|}{|x-y|^\beta}\Big)^q\Big]<\infty$$ for every $q\in[1,4/3)$. By a minor modification of the argument, the last display still holds if we replace $[1,2]$ by any interval $[u,v]$ with $0<u<v$. The following proposition determines the quadratic variation of $(\dot L^x)_{x\geq 0}$. We will see later that this process is a semimartingale (for an appropriate filtration). **Proposition 11**. *Let $\overline x>0$, and, for every integer $n\in\mathbb{N}$, let $\pi_n=\{0=x^n_0<x^n_1<\cdots< x^n_{m_n}=\overline x\}$ be a subdivision of $[0,\overline x]$. Set $\|\pi_n\|:=\max\{x^n_i-x^n_{i-1}:1\leq i\leq m_n\}$, and $$Q(\pi_n)= \sum_{i=1}^{m_n} (\dot L^{x^n_i}-\dot L^{x^n_{i-1}})^2.$$ Assume that $\|\pi_n\| \longrightarrow 0$ as $n\to\infty$. Then, $$Q(\pi_n)\mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{n\to\infty}^{}} 16 \int_0^{\overline x} L^x\,\mathrm{d}x\quad\hbox{in probability}.$$* We use the same notation $\widehat L^x_t=-\alpha-M_t(\mathrm{sgn}(x-\cdot))$, for $x\geq 0$ and $t\geq 0$, as in the previous proof, and we recall that $\dot L^x=\widehat L^x_t$ when $t\geq \xi$, by [\[repre-deri\]](#repre-deri){reference-type="eqref" reference="repre-deri"}. If $0\leq x\leq y$, we have $$\widehat L^y_t -\widehat L^x_t= -2\,M_t(\mathbf{1}_{[x,y]}).$$ Fix a subdivision $\pi=\{0=x_0<x_1<\cdots< x_{m}=\overline x\}$ of $[0,\overline x]$. We will use the last display to evaluate $$Q_t(\pi):=\sum_{i=1}^m (\widehat L^{x_i}_t-\widehat L^{x_{i-1}}_t)^2.$$ For every $i\in\{1,\ldots,m\}$, set $$M^i_t:=-2\,M_t(\mathbf{1}_{[x_{i-1},x_i]})$$ so that $M^i$ is a local martingale with quadratic variation $$\langle M^i,M^i\rangle_t= 16 \int_0^t\,\mathbf{X}_s([x_{i-1},x_i])\,\mathrm{d}s.$$ Also set $$N^i_t:=(M^i_t)^2-\langle M^i,M^i\rangle_t = 2\int_0^t M^i_s\,\mathrm{d}M^i_s.$$ Then, $$\begin{aligned} \label{QV-tech1} \mathbb{E}\Big[\Big(Q_t(\pi)-16 \int_0^t\,\mathbf{X}_s([0,\overline x])\,\mathrm{d}s\Big)^2\Big] & = \mathbb{E}\Big[\Big(\sum_{i=1}^m \big((M^i_t)^2 - \langle M^i,M^i\rangle_t\big)\Big)^2\Big]\nonumber\\ &= \mathbb{E}\Big[\sum_{i=1}^m (N^i_t)^2\Big] + 2\sum_{1\leq i<j\leq m} \mathbb{E}[N^i_tN^j_t]. \end{aligned}$$ On the one hand, we have $\mathbb{E}[N^i_tN^j_t]=0$ if $i\not =j$, because $$\langle M^i,M^j\rangle_t=16\int_0^t \mathbf{X}_s([x_{i-1},x_i]\cap[x_{j-1},x_j])\,\mathrm{d}s = 0$$ and $N^i_t$ is a stochastic integral with respect to $M^i$. Note that integrability issues are trivial here because the random variables $\mathbf{X}_s(\mathbb{R})$, $0\leq s\leq t$, are uniformly bounded in $L^p$, for any $p<\infty$ (e.g., see Lemma III.3.6 of [@Per]). On the other hand, we can estimate $\mathbb{E}[(N^i_t)^2]$ as follows. 
Using the Burkholder-Davis-Gundy inequalities and writing $C_1$ and $C_2$ for the appropriate constants, we have $$\begin{aligned} \mathbb{E}[(N^i_t)^2]&\leq 2\Big( \mathbb{E}[(M^i_t)^4]+\mathbb{E}[(\langle M^i,M^i\rangle_t )^2]\Big)\\ &\leq C_1\,\mathbb{E}[(\langle M^i,M^i\rangle_t )^2]\\ &=C_2 \,\mathbb{E}\Big[\int_0^t \mathrm{d}s\int_s^t \mathrm{d}r\,\mathbf{X}_s([x_{i-1},x_i])\mathbf{X}_r([x_{i-1},x_i])\Big]\\ &=C_2\,\int_0^t \mathrm{d}s\int_s^t \mathrm{d}r\,\mathbb{E}\Big[\mathbf{X}_s([x_{i-1},x_i])\,\mathbb{E}_{\mathbf{X}_s}[\mathbf{X}_{r-s}([x_{i-1},x_i])]\Big]\\ &\leq C_2\,\int_0^t \mathrm{d}s\int_s^t \mathrm{d}r\,\frac{x_i-x_{i-1}}{2\sqrt{r-s}}\, \mathbb{E}\Big[\mathbf{X}_s([x_{i-1},x_i])\mathbf{X}_s(\mathbb{R})\Big]\\ &\leq C_2\,(x_i-x_{i-1})\,\sqrt{t} \int_0^t \mathrm{d}s \,\mathbb{E}\Big[\mathbf{X}_s([x_{i-1},x_i])\mathbf{X}_s(\mathbb{R})\Big]. \end{aligned}$$ In the fourth line of this calculation, we applied the Markov property of $\mathbf{X}$, writing $\mathbb{P}_\mu$ for a probability measure under which $\mathbf{X}$ starts from $\mu$, and, in the next line, we used the first-moment formula for $\mathbf{X}$. By summing the estimate of the last display over $i\in\{1,\ldots,m\}$, we get $$\mathbb{E}\Big[\sum_{i=1}^m (N^i_t)^2\Big]\leq C_2\,\|\pi\| \sqrt{t}\,\int_0^t \mathbb{E}[\mathbf{X}_s(\mathbb{R})^2]\,\mathrm{d}s\leq C_2\,\|\pi\| \sqrt{t}\,(\alpha^2t+2\alpha t^2),$$ using the simple estimate $\mathbb{E}[\mathbf{X}_s(\mathbb{R})^2]\leq \alpha^2 +4\alpha s$. Finally, we deduce from [\[QV-tech1\]](#QV-tech1){reference-type="eqref" reference="QV-tech1"} that, for $t\geq 1$, $$\mathbb{E}\Big[\Big(Q_t(\pi)-16 \int_0^t\,\mathbf{X}_s([0,\overline x])\,\mathrm{d}s\Big)^2\Big]\leq C_3\,t^{5/2}\,\|\pi\|.$$ We apply the latter estimate to $\pi=\pi_n$ for every $n\geq 1$, and it follows that, for every $t\geq 1$, $$\lim_{n\to\infty}\mathbb{E}\Big[\Big(Q_t(\pi_n)-16 \int_0^t\,\mathbf{X}_s([0,\overline x])\,\mathrm{d}s\Big)^2\Big]=0.$$ Since $$\mathbb{P}\Big(Q_t(\pi_n)=Q(\pi_n),\int_0^t\,\mathbf{X}_s([0,\overline x])\,\mathrm{d}s=\int_0^\infty\mathbf{X}_s([0,\overline x])\,\mathrm{d}s\Big) \geq \mathbb{P}(\xi\leq t) \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{t\to\infty}^{}} 1,$$ this immediately gives the convergence in probability $$Q(\pi_n)\mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{n\to\infty}^{}} 16 \int_0^\infty\,\mathbf{X}_s([0,\overline x])\,\mathrm{d}s = 16\int_0^{\overline x} L^x\,\mathrm{d}x.\eqno{\square}$$ # The expected value of increments of the derivative of local time {#sec:exp-incre-deriv} ## The case of the positive excursion measure {#sec:casepositive} Our goal in this section is to compute the quantities $\mathbb{N}^{*,z}(\dot\ell^a)$ for $z>0$ and $a>0$. We start with a technical estimate. **Lemma 12**. *Let $q\in[1,4/3)$. Then, for every $0<u<v$, and $n\in\mathbb{N}$, $$\sup_{1/n\le z\le n}\mathbb{N}^{*,z}_0\Big(\Big(\sup_{u\leq x\leq v}|\dot \ell^x|\Big)^q\Big) <\infty.$$* We will derive this result from Lemma [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"}, using the construction of the super-Brownian motion $(\mathbf{X}_t)_{t\geq 0}$ in Section [3](#sec:super){reference-type="ref" reference="sec:super"}. Recall the definition of the exit measure process $(X^0_t)_{t\geq 0}$ in [\[exitmdef\]](#exitmdef){reference-type="eqref" reference="exitmdef"} and that it is a $\varphi$-CSBP, where $\varphi=2\psi$. 
By the Lamperti transformation [@Lam], we can write $X^0$ as a (continuous) time change of a Lévy process with no negative jumps and Laplace exponent $\varphi$, started at $\alpha$, up to its first hitting time of $0$. Up to enlarging the probability space, we may assume that this Lévy process $(\mathcal{U}_t)_{t\geq 0}$ is defined for all $t\geq 0$ and we write $T_0=\inf\{t\geq 0:\mathcal{U}_t=0\}$. Notice that the jumps of $X^0$ are exactly the jumps of $\mathcal{U}$ on the time interval $[0,T_0]$. Let us fix $0<u<v$. Let $b>0$, and let $\mathcal{U}^{(1)}$ be the Lévy process that only records the jumps of $\mathcal{U}$ of size greater than $b$, $$\mathcal{U}^{(1)}_t := \sum_{s\leq t} \Delta \mathcal{U}_s\,\mathbf{1}_{\{\Delta \mathcal{U}_s >b\}}.$$ Also set $\mathcal{U}^{(0)}_t:=\mathcal{U}_t-\mathcal{U}^{(1)}_t$, so that $\mathcal{U}^{(0)}$ and $\mathcal{U}^{(1)}$ are two independent Lévy processes, with $\mathcal{U}^{(1)}_0=0$ and $\mathcal{U}^{(0)}_0=\alpha$. We can find a constant $t_1>0$ such that the probability of the event $A$ where $\mathcal{U}^{(1)}$ has exactly one jump during $[0,t_1]$ and $\mathcal{U}^{(0)}$ does not hit $0$ before $t_1$ is positive. On the event $A$, let $\Delta_0$ be the unique jump of $\mathcal{U}^{(1)}$ on the time interval $[0,t_1]$. Then, conditionally on the event $A$, $\Delta_0$ is distributed according to the probability measure $(3b^{3/2}/2)\,\mathbf{1}_{(b,\infty)}(z)z^{-5/2}\,\mathrm{d}z$. On the event $A$, let $\omega_0$ be the excursion of $\mathbf{X}$ (above or below $0$) associated with the jump $\Delta_0$. Here, recall the definition of these excursions in Section [3](#sec:super){reference-type="ref" reference="sec:super"}, and the fact that they are in one-to-one correspondence with the jumps of $X^0$, or equivalently the jumps of $\mathcal{U}$ on $[0,T_0]$ (see especially [\[indtexc\]](#indtexc){reference-type="eqref" reference="indtexc"} and the discussion prior to it). Also let $A'$ be the event where all excursions of $\mathbf{X}$ above or below $0$, except possibly the excursion $\omega_0$ (if it is defined), stay in the interval $(-1,u)$. On the event $B=A \cap A'$, we have $L^a=\ell^a(\omega_0)$ for every $a\notin (-1,u)$. Then, on one hand, it follows from Lemma [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"} that $$\label{estim1} \mathbb{E}\Big[\mathbf{1}_{B}\,\Big(\sup_{x\in[u,v]} |\dot L^x-\dot L^{-1}|\Big)^q\Big]<\infty.$$ On the other hand, the preceding remarks give $$\begin{aligned} \label{estim2} &\mathbb{E}\Big[\mathbf{1}_{B}\,\Big(\sup_{x\in[u,v]} |\dot L^x-\dot L^{-1}|\Big)^q\Big]\nonumber\\ &=\mathbb{E}\Big[\mathbf{1}_{B}\,\Big(\sup_{x\in[u,v]} |\dot\ell^x(\omega_0)-\dot\ell^{-1}(\omega_0)|\Big)^q\Big]\nonumber\\ &=\mathbb{E}\Big[\mathbf{1}_{A}\,\mathbb{P}(A'\mid (\mathcal{U}_t)_{0\leq t\leq T_0})\times \mathbb{E}\Big[\mathbf{1}_{A}\Big(\sup_{x\in[u,v]} |\dot \ell^x(\omega_0)-\dot \ell^{-1}(\omega_0)|\Big)^q \,\Big|\, (\mathcal{U}_t)_{0\leq t\leq T_0}\Big]\Big]\end{aligned}$$ where we use the conditional independence of the excursions of $\mathbf{X}$ given $(X^0_t)_{t\geq 0}$ (equivalently, given $(\mathcal{U}_t)_{0\leq t\leq T_0}$) from [\[indtexc\]](#indtexc){reference-type="eqref" reference="indtexc"}. 
From Lemma [Lemma 6](#maxN*){reference-type="ref" reference="maxN*"}, one easily verifies that $$\mathbb{P}(A'\mid (\mathcal{U}_t)_{0\leq t\leq T_0})>0\quad\hbox{a.s.}$$ Furthermore, by [\[indtexc\]](#indtexc){reference-type="eqref" reference="indtexc"}, $$\mathbb{E}\Big[\mathbf{1}_{A}\Big(\sup_{x\in[u,v]} |\dot\ell^x(\omega_0)-\dot \ell^{-1}(\omega_0)|\Big)^q \,\Big|\,(\mathcal{U}_t)_{0\leq t\leq T_0}\Big] =\mathbf{1}_{A}\Bigg(\frac{1}{2} \mathbb{N}^{*,\Delta_0}_0\Big(\Big(\sup_{x\in[u,v]} |\dot\ell^x|\Big)^q \Big) +\frac{1}{2} \check\mathbb{N}^{*,\Delta_0}_0(|\dot\ell^{-1}|^q)\Bigg),$$ and, from [\[estim1\]](#estim1){reference-type="eqref" reference="estim1"} and [\[estim2\]](#estim2){reference-type="eqref" reference="estim2"}, it follows that $$\mathbf{1}_{A} \mathbb{N}^{*,\Delta_0}_0\Big(\Big(\sup_{x\in[u,v]} |\dot\ell^x|\Big)^q \Big)<\infty\quad \hbox{a.s.}$$ Using the conditional distribution of $\Delta_0$ given $A$, we conclude that $$\mathbb{N}^{*,z}_0\Big(\Big(\sup_{x\in[u,v]} |\dot\ell^x|\Big)^q\Big)<\infty,\hbox{ for a.e. }z>0.$$ We have thus proved that, for a.e. $z>0$, $$\mathbb{N}^{*,z}_0\Big(\Big(\sup_{x\in[u,v]} |\dot\ell^x|\Big)^q\Big)<\infty, \hbox{ for every } 0<u<v$$ (here it suffices to consider rational values of $u$ and $v$, using the monotonicity of the supremum in the interval $[u,v]$). However, if the last display holds for one value of $z>0$, the scaling in [\[N\*props\]](#N*props){reference-type="eqref" reference="N*props"}(iii) shows that it must hold for every $z>0$, and in fact that the quantity in the last display is bounded uniformly for $z\in[1/n,n]$, which gives the lemma. Thanks to the above, the quantity $\mathbb{N}^{*,z}_0(\dot\ell^a)$ is well defined for every $a>0$ and $z>0$. It can in fact be computed explicitly. **Proposition 13**. *For every $z>0$ and $a>0$, we have $$\label{moment1} \mathbb{N}^{*,z}_0(\ell^a)= \sqrt{6\pi}\,a^{-2}\,z^{5/2}\,\chi(\frac{3z}{2a^2})$$ where, for every $x>0$, $$\chi(x)=\frac{2}{\sqrt{\pi}} (x^{3/2}+x^{1/2}) -2 x(x+\frac{3}{2})\,e^x\,\mathrm{erfc}(\sqrt{x}),$$ with the notation $\mathrm{erfc}(y)=\frac{2}{\sqrt{\pi}}\int_y^\infty e^{-x^2}\mathrm{d}x$. Moreover, for every $z>0$ and $a>0$, $$\label{moment2} \mathbb{N}^{*,z}_0(\dot \ell^a)= z\,\gamma\Big(\frac{3z}{2 a^2}\Big)$$ where, for every $u>0$, $$\gamma(u)=-\frac{8}{3}\sqrt{\pi}\,u^{3/2}\,\Big(\chi(u)+u\chi'(u)\Big).$$* The function $\chi$ is positive on $(0,\infty)$ and its Laplace transform is $\int_0^\infty \chi(z)\,e^{-\lambda z}\,\mathrm{d}z=(1+\sqrt{\lambda})^{-3}$, cf. the appendix of [@Spine]. By [@Spine Proposition 3], we have, for every nonnegative Borel function $f$ on $[0,\infty)$, $$\mathbb{N}^{*,z}_0\Bigg(\int_0^\infty f(a)\,\ell^a\,\mathrm{d}a\Bigg)=\mathbb{N}^{*,z}_0\Bigg(\int_0^\sigma f(\widehat W_s)\,\mathrm{d}s\Bigg) = \int_0^\infty f(a)\,\pi_z(a)\,\mathrm{d}a$$ where $$\pi_z(a)= \sqrt{6\pi}\, a^{-2}\,z^{5/2}\,\chi(\frac{3z}{2a^2}),$$ and $\chi(\cdot)$ is as in the statement. So, we have $$\label{desint} \int_0^\infty f(a)\,\mathbb{N}^{*,z}_0(\ell^a)\,\mathrm{d}a= \int_0^\infty f(a)\,\pi_z(a)\,\mathrm{d}a.$$ It follows that $\mathbb{N}^{*,z}_0(\ell^a)=\pi_z(a)$ for a.e. $a>0$, and Fatou's lemma then gives $\mathbb{N}^{*,z}_0(\ell^a)\leq\pi_z(a)<\infty$ for every $a>0$. If $a>0$ is fixed, we have $\mathbb{N}^{*,z}_0$ a.s. 
$$\label{conv-deriv} \frac{1}{b-a} \int_a^b \dot \ell^c\,\mathrm{d}c=\frac{1}{b-a} (\ell^b-\ell^a) \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{b\to a,b\not =a}^{}} \dot \ell^a.$$ From Lemma [Lemma 12](#lem-tech){reference-type="ref" reference="lem-tech"} and dominated convergence, we get that the convergence [\[conv-deriv\]](#conv-deriv){reference-type="eqref" reference="conv-deriv"} holds in $L^1(\mathbb{N}^{*,z}_0)$. Consequently, $$\frac{1}{b-a} (\mathbb{N}^{*,z}_0(\ell^b)-\mathbb{N}^{*,z}_0(\ell^a)) \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{b\to a,b\not =a}^{}} \mathbb{N}^{*,z}_0(\dot \ell^a).$$ It follows that the function $a\mapsto \mathbb{N}^{*,z}_0(\ell^a)$ is differentiable on $(0,\infty)$, and $$\frac{\mathrm{d}}{\mathrm{d}a} \mathbb{N}^{*,z}_0(\ell^a)=\mathbb{N}^{*,z}_0(\dot \ell^a).$$ In particular, since $a\mapsto \mathbb{N}^{*,z}_0(\ell^a)$ is continuous on $(0,\infty)$, we deduce from [\[desint\]](#desint){reference-type="eqref" reference="desint"} that $\mathbb{N}^{*,z}_0(\ell^a)= \pi_z(a)$ for every $a\in(0,\infty)$, which give [\[moment1\]](#moment1){reference-type="eqref" reference="moment1"}. Then $$\mathbb{N}^{*,z}_0(\dot \ell^a)= \frac{\mathrm{d}}{\mathrm{d}a} \mathbb{N}^{*,z}_0(\ell^a)=\sqrt{6\pi}\Big(-2a^{-3}z^{5/2}\,\chi(\frac{3z}{2a^2})-3\,a^{-5}z^{7/2}\,\chi'(\frac{3z}{2a^2})\Big),$$ and formula [\[moment2\]](#moment2){reference-type="eqref" reference="moment2"} follows. We now record some asymptotics of the function $\gamma(u)$ introduced in the proposition, which will be useful in the next sections. We first note that $$\label{chi'}\chi'(x)= \frac{2}{\sqrt{\pi}}\Big(x^{3/2}+3x^{1/2}+\frac{1}{2}x^{-1/2}\Big)+ \Big(-2x^2-7x-3\Big)\,e^x\mathrm{erfc}(\sqrt{x}),$$ and, for every integer $N\geq 0$, $$e^x\mathrm{erfc}(\sqrt{x})=\frac{1}{\sqrt{\pi}} \sum_{n=0}^N (-1)^{n}\,\frac{1\times 3\times\cdots\times (2n-1)}{2^n}\,x^{-n-1/2}+ O(x^{-N-3/2}),$$ as $x\to\infty$. By simple calculations it follows that, as $x\to\infty$, $$\label{chiinfty}\chi(x)=\frac{1}{\sqrt{\pi}}\Big(\frac{3}{2} x^{-3/2} -\frac{15}{2} x^{-5/2}+O(x^{-7/2})\Big)$$ and $$\label{chi'infty}\chi'(x)=\frac{1}{\sqrt{\pi}}\Big(-\frac{9}{4}x^{-5/2} +\frac{75}{4}x^{-7/2}+O(x^{-9/2})\Big).$$ Consequently, $$\label{gaminfty}\gamma(x)=2-\frac{30}{x}+O(x^{-2})\ \text{ as $x\to\infty$},$$ and so by [\[moment2\]](#moment2){reference-type="eqref" reference="moment2"}, $\mathbb{N}^{*,z}_0(\dot \ell^a)=2z+O(a^{2})$ as $a\to 0$. Moreover, from the formulas for $\chi$ and $\chi'$, one has $$\label{gam0} \gamma(x)=-8x^2+o(x^2)\text{ as }x\to 0,$$ and therefore $\mathbb{N}^{*,z}_0(\dot \ell^a)=-18a^{-4}\,z^3+o(z^3)$ as $z\to 0$. We can also estimate $$\label{gam'inf}\gamma'(x)=\frac{3}{2}\frac{\gamma(x)}{x} + (-\frac{8}{3}\sqrt{\pi})\,x^{3/2}(2\chi'(x)+x\chi''(x)) =\frac{15}{x} + (-\frac{8}{3}\sqrt{\pi})x^{5/2}\chi''(x)+O(x^{-2}),$$ as $x\to\infty$. Noting that $$\chi''(x)=\frac{2}{\sqrt{\pi}}\Big(x^{3/2}+5x^{1/2}+3x^{-1/2}-\frac{1}{4} x^{-3/2}\Big)+\Big(-2x^2-11x-10\Big)\,e^x\mathrm{erfc}(\sqrt{x}),$$ we can verify that $$x^{5/2}\chi''(x)=\frac{1}{\sqrt{\pi}}\,\frac{45}{8x} + O(x^{-2})$$ and consequently $\gamma'(x)=O(x^{-2})$ as $x\to\infty$. From the first equality in [\[gam\'inf\]](#gam'inf){reference-type="eqref" reference="gam'inf"}, [\[chi\'\]](#chi'){reference-type="eqref" reference="chi'"}, the above expression for $\chi''$, and [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"}, one gets that $\gamma'(x)=-16\,x+o(x)$ when $x\to 0$. 
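These expansions are easy to check numerically. The short Python sketch below is included only for the reader's convenience and plays no role in the proofs: it evaluates $\chi$, $\chi'$ and $\gamma$ from the explicit formulas above, writing $e^x\mathrm{erfc}(\sqrt{x})$ as the scaled complementary error function `erfcx`$(\sqrt{x})$ from `scipy.special` to avoid overflow, and compares $\gamma(x)$ with $2-30/x$ for large $x$ and with $-8x^2$ for small $x$.

```python
import numpy as np
from scipy.special import erfcx  # erfcx(y) = exp(y**2) * erfc(y), so e^x erfc(sqrt(x)) = erfcx(sqrt(x))

SQRT_PI = np.sqrt(np.pi)

def chi(x):
    # chi(x) = (2/sqrt(pi)) (x^{3/2} + x^{1/2}) - 2 x (x + 3/2) e^x erfc(sqrt(x))
    return (2.0 / SQRT_PI) * (x**1.5 + x**0.5) - 2.0 * x * (x + 1.5) * erfcx(np.sqrt(x))

def chi_prime(x):
    # chi'(x) = (2/sqrt(pi)) (x^{3/2} + 3 x^{1/2} + x^{-1/2}/2) + (-2x^2 - 7x - 3) e^x erfc(sqrt(x))
    return (2.0 / SQRT_PI) * (x**1.5 + 3.0 * x**0.5 + 0.5 * x**-0.5) \
        + (-2.0 * x**2 - 7.0 * x - 3.0) * erfcx(np.sqrt(x))

def gamma_fn(x):
    # gamma(u) = -(8/3) sqrt(pi) u^{3/2} (chi(u) + u chi'(u))
    return -(8.0 / 3.0) * SQRT_PI * x**1.5 * (chi(x) + x * chi_prime(x))

for x in [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"x={x:8.2f}  gamma={gamma_fn(x):+.6f}  2-30/x={2.0 - 30.0 / x:+.6f}  -8x^2={-8.0 * x**2:+.6f}")
```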
It follows from the preceding estimates for $\gamma'$ that if $\gamma'(0):=0$, then $$\label{gamma'cont} \gamma'\text{ is continuous on }[0,\infty),$$ and $$\label{intgam'} \int_0^\infty |\gamma'(x)|(1\vee x^{-1})\,\mathrm{d}x<\infty.$$ ## The derivative of local times of super-Brownian motion We now consider the super-Brownian motion $\mathbf{X}$ started at $\mathbf{X}_0=\alpha\,\delta_0$ constructed as in Section [3](#sec:super){reference-type="ref" reference="sec:super"}, and its local times $(L^a)_{a\in\mathbb{R}}$. We fix $a\geq 0$ and $h>0$. Let $\Theta_a$ denote the law of the pair $(L^a,\frac{1}{2}\dot L^a)$ under $\mathbb{P}(\cdot\cap\{L^a>0\})$. Our goal is to compute the conditional expectation $$\mathbb{E}\Big[\dot L^{a+h}\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big]$$ for $t>0$ and $y\in\mathbb{R}$. Recall that we will interpret this conditional expectation as in [\[condi-expec\]](#condi-expec){reference-type="eqref" reference="condi-expec"}, using [\[decomp-local\]](#decomp-local){reference-type="eqref" reference="decomp-local"}. Therefore, we can unambiguously make assertions for all $h>0$ simultaneously. **Proposition 14**. *Let $a\geq 0$. Then, for $\Theta_a$-almost every $(t,y)$, for every $h>0$, we have $$\mathbb{E}[|\dot L^{a+h}|\,|\, L^a=t,\frac{1}{2}\dot L^a=y]<\infty$$ and $$\label{key-equa} \mathbb{E}\Big[\dot L^{a+h}\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big]=\mathbb{E}\Big[\sum_{j=1}^\infty Z_j\,\gamma\Big(\frac{3Z_j}{2h^2}\Big)\Big],$$ where $(Z_j)_{j\geq 1}$ is the sequence of jumps of the $\psi$-Lévy bridge $U^{\mathrm{br},t,y}$, listed in decreasing order, and $$\label{bound-series} \mathbb{E}\Big[\sum_{j=1}^\infty Z_j\,\Big|\gamma\Big(\frac{3Z_j}{2h^2}\Big)\Big|\Big]<\infty, \text{ for every }h>0.$$* From the asymptotics derived at the end of Section [5.1](#sec:casepositive){reference-type="ref" reference="sec:casepositive"}, we have $|\gamma(z)|\leq C(1\wedge z^2)$ for some constant $C$. Hence, using the absolute continuity relation [\[RNF\]](#RNF){reference-type="eqref" reference="RNF"}, it is straightforward to verify [\[bound-series\]](#bound-series){reference-type="eqref" reference="bound-series"}, so that the right-hand side of [\[key-equa\]](#key-equa){reference-type="eqref" reference="key-equa"} makes sense. To see this, write the sum inside the expectation in [\[bound-series\]](#bound-series){reference-type="eqref" reference="bound-series"} as $S_1+S_2$, where $S_1$ corresponds to the contribution from jumps occurring in $[0,t/2]$ and $S_2$ to those occurring in $[t/2,t]$. Apply [\[RNF\]](#RNF){reference-type="eqref" reference="RNF"} to show that $\mathbb{E}[S_1]<\infty$, and its counterpart for the time-reversed process $(y-U^{\mathrm{br},t,y}_{(t-s)-})_{0\le s\le t/2}$ to show $\mathbb{E}[S_2]<\infty$. Let $h>0$. By Lemma [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"} (i), $\mathbb{E}[|\dot L^{a+h}-\dot L^{a}|]<\infty$, and therefore we have $$\label{bound-1mom} \mathbb{E}[|\dot L^{a+h}|\mid L^a=t,\frac{1}{2}\dot L^a=y]<\infty,\ \text{for $\Theta_a$-a.e.~$(t,y)$}.$$ By the convention noted before the Proposition, the quantity $\mathbb{E}[|\dot L^{a+h}|\mid L^a=t,\frac{1}{2}\dot L^a=y]$ is well defined simultaneously for every choice of $t>0$, $y\in\mathbb{R}$, and $h>0$. Lemma [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"} (ii) shows that [\[bound-1mom\]](#bound-1mom){reference-type="eqref" reference="bound-1mom"} holds simultaneously for every $h>0$, for $\Theta_a$-a.e. 
$(t,y)$, giving the first required result. **In what follows, we fix $t>0$ and $y\in\mathbb{R}$ such that [\[bound-1mom\]](#bound-1mom){reference-type="eqref" reference="bound-1mom"} holds for every $h>0$.** Recall the notation introduced before Proposition [Proposition 9](#key-ingre){reference-type="ref" reference="key-ingre"}. By [\[decomp-local\]](#decomp-local){reference-type="eqref" reference="decomp-local"}, we have $$\label{ident-incre} \dot L^{a+h}=\sum_{j\in \mathbb{N}} \dot\ell^{a+h}(\omega^{a,+}_j).$$ where the sum involves only a finite number of nonzero terms. We know from Proposition [Proposition 9](#key-ingre){reference-type="ref" reference="key-ingre"} that the conditional distribution of $(\omega^{a,+}_j)_{j\in\mathbb{N}}$ knowing $L^a=t$ and $\frac{1}{2}\dot L^a=y$, is the law of $(\varpi_j)_{j\in\mathbb{N}}$, where, conditionally on the (ordered) sequence $(Z_j)_{j\in\mathbb{N}}$ of jumps of a $\psi$-Lévy bridge $U^{\mathrm{br},t,y}$, the snake trajectories $\varpi_j$ are independent and $\varpi_j$ is distributed according to $\mathbb{N}^{*,Z_j}_a$. Therefore, [\[ident-incre\]](#ident-incre){reference-type="eqref" reference="ident-incre"} gives $$\mathbb{E}\Big[\dot L^{a+h}\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big]=\mathbb{E}\Big[\sum_{j=1}^\infty \dot\ell^{a+h}(\varpi_j)\Big],$$ where $(\varpi_j)_{j\in\mathbb{N}}$ and $(Z_j)_{j\in\mathbb{N}}$ are as described above. Note that the (a.s. finite) sum $\sum_{j=1}^\infty \dot\ell^{a+h}(\varpi_j)$ is an integrable random variable, as a consequence of [\[bound-1mom\]](#bound-1mom){reference-type="eqref" reference="bound-1mom"} and [\[ident-incre\]](#ident-incre){reference-type="eqref" reference="ident-incre"}. To get [\[key-equa\]](#key-equa){reference-type="eqref" reference="key-equa"} it then suffices to show that $$\label{tech-incre3} \mathbb{E}\Big[\sum_{j=1}^\infty \dot\ell^{a+h}(\varpi_j)\Big]=\mathbb{E}\Big[\sum_{j=1}^\infty Z_j\,\gamma\Big(\frac{3Z_j}{2h^2}\Big)\Big].$$ For every integer $n\geq 1$, set $N_n:=\max\{j\in\mathbb{N}:Z_j\geq 1/n\}$, with the convention $\max\varnothing=0$. Let $H_n$ stand for the event where $Z_j\leq n$ for every $j\in \mathbb{N}$, and $W^*(\varpi_j)<a+h$ for every $j>N_n$. Then, $$\mathbb{E}\Big[\mathbf{1}_{H_n} \sum_{j=1}^{N_n} | \dot \ell^{a+h}(\varpi_j)|\Big] =\mathbb{E}\Big[\mathbb{E}\Big[\mathbf{1}_{H_n} \sum_{j=1}^{N_n} |\dot \ell^{a+h}(\varpi_j)|\,\Big|\,(Z_j)_{j\in\mathbb{N}}\Big]\Big] \leq \mathbb{E}\Big[ \mathbf{1}_{\{Z_j\leq n,\,\forall j\in\mathbb{N}\}} \sum_{j=1}^{N_n} \mathbb{N}^{*,Z_j}_a(|\dot \ell^{a+h}|)\Big] <\infty$$ because we know that $\mathbb{N}^{*,z}_a(|\dot \ell^{a+h}|)$ is bounded by a constant if $1/n\leq z\leq n$ (Lemma [Lemma 12](#lem-tech){reference-type="ref" reference="lem-tech"}), and it is easy to verify that $\mathbb{E}[N_n\,|\, L^a=t,\dot L^a=y]<\infty$. For the latter we again may use the absolute continuity of the law of the Lévy bridge $U^{\mathrm{br},t,y}$ with respect to the law of the Lévy process $U$ in [\[RNF\]](#RNF){reference-type="eqref" reference="RNF"}, and the analogue for the time-reversed processes, to count the jumps occurring in $[0,t/2]$ and $[t/2,t]$ separately. 
The preceding display allows us to interchange sum and expected value in the following calculation, $$\begin{aligned} \mathbb{E}\Big[\mathbf{1}_{H_n} \sum_{j=1}^{N_n} \dot \ell^{a+h}(\varpi_j)\Big] =\sum_{j=1}^\infty \mathbb{E}\Big[\mathbf{1}_{H_n}\mathbf{1}_{\{j\leq N_n\}} \dot \ell^{a+h}(\varpi_j)\Big] &=\sum_{j=1}^\infty \mathbb{E}\Big[\mathbf{1}_{H_n}\mathbf{1}_{\{j\leq N_n\}}\,\mathbb{N}^{*,Z_j}_a(\dot \ell^{a+h})\Big]\\ &=\sum_{j=1}^\infty \mathbb{E}\Big[\mathbf{1}_{H_n}\mathbf{1}_{\{j\leq N_n\}}\,Z_j\gamma\Bigl(\frac{3 Z_j}{2 h^2}\Bigr)\Big],\end{aligned}$$ where [\[moment2\]](#moment2){reference-type="eqref" reference="moment2"} is used in the last equality. In the second equality, we also use the conditional independence of the excursions $\varpi_j$ given their boundary sizes $Z_j$. The left-hand side of the last display is equal to $$\mathbb{E}\Big[\mathbf{1}_{H_n} \sum_{j=1}^{\infty} \dot \ell^{a+h}(\varpi_j)\Big]\mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{n\to\infty}^{}} \mathbb{E}\Big[\sum_{j=1}^{\infty} \dot \ell^{a+h}(\varpi_j)\Big]$$ by dominated convergence (recall that the variable $\sum_{j=1}^{\infty} \dot \ell^{a+h}(\varpi_j)$ is integrable). On the other hand, the right-hand side is $$\mathbb{E}\Big[\mathbf{1}_{H_n}\sum_{j=1}^{N_n}\,Z_j\gamma\Bigl(\frac{3 Z_j}{2 h^2}\Bigr)\Big]\mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{n\to\infty}^{}} \mathbb{E}\Big[\sum_{j=1}^\infty Z_j\gamma\Bigl(\frac{3 Z_j}{2 h^2}\Bigr)\Big]$$ by dominated convergence again, using [\[bound-series\]](#bound-series){reference-type="eqref" reference="bound-series"}. This completes the proof of [\[tech-incre3\]](#tech-incre3){reference-type="eqref" reference="tech-incre3"}, and hence of the proposition.

The preceding proof would be much shorter if one could verify that $\mathbb{E}\big[\sum_{j=1}^\infty |\dot\ell^{a+h}(\varpi_j)|\big]<\infty$. However, this does not seem to follow from our estimates.

Recall the notation $p_t(y)$ for the density at time $t$ of the Lévy process $U$ in Section [2.5](#subsec:bridge){reference-type="ref" reference="subsec:bridge"}. To simplify notation, we also set $c_1:=\sqrt{3/(8\pi)}$, so that the Lévy measure of $U$ is $\frac{1}{2}\mathbf{n}(\mathrm{d}z)=c_1z^{-5/2}\,\mathbf{1}_{(0,\infty)}(z)\,\mathrm{d}z$. For $h,t>0$ and $y\in\mathbb{R}$, we introduce $$\label{def-gh} g_{h}(t,y)=\frac{1}{h}\,\frac{c_1t}{p_t(y)}\,\int_0^{\infty} \Big(p_t(y)-p_t(y-h^2z)\Big)\Big(2-\gamma\Bigl(\frac{3z}{2}\Bigr)\Big)\,\frac{\mathrm{d}z}{z^{3/2}},$$ and set $g_{h}(0,y)=0$. The boundedness of $p_t$, $|p'_t|$ and $|\gamma|$ (the latter from [\[gaminfty\]](#gaminfty){reference-type="eqref" reference="gaminfty"} and [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"}), and the Mean Value Theorem, show that the above integrand is integrable on $[0,\infty)$.

**Proposition 15**. *Let $a\geq 0$. For $\Theta_a$-almost every $(t,y)\in (0,\infty)\times\mathbb{R}$, we have, for every $h>0$, $$\mathbb{E}\Big[\dot L^{a+h}-\dot L^a\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big]=g_{h}(t,y),$$ and $$\lim_{h\to 0} \frac{1}{h}\, \mathbb{E}\Big[\dot L^{a+h}-\dot L^a\,\Big|\, L^a=t,\frac{1}{2}\dot L^a=y\Big] = 8\,t\, \frac{p'_t(y)}{p_t(y)}.$$*

In what follows, we fix $t>0$ and $y\in\mathbb{R}$ such that [\[key-equa\]](#key-equa){reference-type="eqref" reference="key-equa"} holds for every $h>0$, and we let the sequence $(Z_j)_{j\in\mathbb{N}}$ be as in Proposition [Proposition 14](#tech-incre){reference-type="ref" reference="tech-incre"}.
The first statement of Proposition [Proposition 15](#increment-derivative){reference-type="ref" reference="increment-derivative"} will follow from the computation of $$\mathbb{E}\Big[\sum_{j\in\mathbb{N}} Z_j\,\gamma\Big(\frac{3Z_j}{2 h^2}\Big)\Big].$$ We consider the Lévy process $U$ with Laplace exponent $\psi$ described in the Introduction and Section [2.5](#subsec:bridge){reference-type="ref" reference="subsec:bridge"}. Write $(Y_j)_{j\in \mathbb{N}}$ for the collection of jumps of $U$ over $[0,t]$ (ranked in decreasing size), so that we have $$\label{F2}\mathbb{E}\Big[\sum_{j\in\mathbb{N}} Z_j\,\gamma\Big(\frac{3Z_j}{2 h^2}\Big)\Big]=\mathbb{E}\Big[\sum_{j\in\mathbb{N}} Y_j\,\gamma\Big(\frac{3Y_j}{2 h^2}\Big)\,\Big|\, U_t=y\Big].$$ (Recall that, when we write $\mathbb{E}[\cdot\mid U_t=y]$, this means that we integrate with respect to the law of the $\psi$-Lévy bridge from $0$ to $y$ in time $t$.) We will first compute, for every ${\varepsilon}>0$, $$\mathbb{E}\Big[\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}}\,\Big|\,U_t=y\Big].$$ To this end, we evaluate, for every $u\in\mathbb{R}$, $$\mathbb{E}\Big[\Big(\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}}\Big) e^{\mathrm{i}u U_t}\Big].$$ Set $$R_{\varepsilon}= \sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}} - 2\,c_1\,t\,{\varepsilon}^{-1/2}-U_t.$$ The facts that $\mathbb{E}[|U_t|]<\infty$ and $\mathbb{E}[\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}} ]=\frac{t}{2}\int_{\varepsilon}^\infty x\,\mathbf{n}(\mathrm{d}x) <\infty$ imply $$\label{meanRepf} \mathbb{E}[|R_{\varepsilon}|]<\infty.$$ Recall that $\mathbb{E}[e^{\mathrm{i}u U_t}]= e^{-t\Psi(u)}$, where $\Psi(u)=c_0 |u|^{3/2}\,(1+\mathrm{i}\,\mathrm{sgn}(u))$, with $c_0=1/\sqrt{3}$. Then $$\label{tech} \mathbb{E}[R_{\varepsilon}\,e^{\mathrm{i}u U_t}]= \mathbb{E}\Big[\Big(\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}}\Big)e^{\mathrm{i}u U_t}\Big] -2\,c_1\,t\,{\varepsilon}^{-1/2}\,e^{-t\Psi(u)} - \mathrm{i}\,t\Psi'(u)\,e^{-t\Psi(u)},$$ because $$\mathbb{E}[U_t\,e^{\mathrm{i}u U_t}]= -\mathrm{i}\,\frac{\mathrm{d}}{\mathrm{d}u} \mathbb{E}[e^{\mathrm{i}u U_t}]= \mathrm{i}\,t\Psi'(u)\,e^{-t\Psi(u)}.$$ By a classical formula for Poisson measures (Mecke's formula, cf. 
Theorem 4.1 in [@LP]), we have $$\mathbb{E}\Big[\Big(\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}}\Big)e^{\mathrm{i}u U_t}\Big]=c_1 t\int_{\varepsilon}^\infty e^{\mathrm{i}uz}\,\frac{\mathrm{d}z}{z^{3/2}} \times e^{-t\Psi(u)}.$$ Now note that $$\int_{\varepsilon}^\infty e^{\mathrm{i}uz}\,\frac{\mathrm{d}z}{z^{3/2}} = 2\,{\varepsilon}^{-1/2} - \int_{\varepsilon}^\infty (1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}},$$ and $$\begin{aligned} -\int_{\varepsilon}^\infty (1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}}&=-\int_0^\infty (1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}} + \int_0^{\varepsilon}(1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}}\\ &=-\sqrt{2\pi}(1-\mathrm{i}\,\mathrm{sgn}(u))\,|u|^{1/2} + \int_0^{\varepsilon}(1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}}.\end{aligned}$$ On the other hand, since $\Psi'(u)=\frac{3}{2} c_0|u|^{1/2}(1+\mathrm{i}\,\mathrm{sgn}(u))\times\mathrm{sgn}(u)=\frac{3}{2} c_0|u|^{1/2}(\mathrm{i}+\mathrm{sgn}(u))$, we have $$\mathrm{i}\,t\Psi'(u)=\frac{3}{2} c_0\,t |u|^{1/2}(-1+\mathrm{i}\,\mathrm{sgn}(u))=-c_1\,t\,\sqrt{2\pi} |u|^{1/2}\,(1-\mathrm{i}\,\mathrm{sgn}(u)).$$ By substituting the preceding calculations in [\[tech\]](#tech){reference-type="eqref" reference="tech"}, we get after simplifications $$\label{E1} \mathbb{E}[R_{\varepsilon}\,e^{\mathrm{i}u U_t}]= c_1\,t \Big(\int_0^{\varepsilon}(1-e^{\mathrm{i}uz})\,\frac{\mathrm{d}z}{z^{3/2}}\Big) \,e^{-t\Psi(u)}.$$ Let $\varphi_{\varepsilon}(x)=\mathbb{E}[R_{\varepsilon}\mid U_t=x]$ for $x\in\mathbb{R}$. Use [\[meanRepf\]](#meanRepf){reference-type="eqref" reference="meanRepf"} to see that $$\label{integra1} \int_\mathbb{R}|\varphi_{\varepsilon}(x)|\,p_t(x)\,\mathrm{d}x\leq \int_\mathbb{R}\mathbb{E}[|R_{\varepsilon}|\,|\,U_t=x]\,p_t(x)\,\mathrm{d}x= \mathbb{E}[|R_{\varepsilon}|] <\infty.$$ We have $$\label{E2} \mathbb{E}[R_{\varepsilon}e^{\mathrm{i}uU_t}]=\mathbb{E}[\mathbb{E}[R_{\varepsilon}\mid U_t]\,e^{\mathrm{i}uU_t}]=\int_\mathbb{R}\varphi_{\varepsilon}(x)\,p_t(x)\,e^{\mathrm{i}ux}\,\mathrm{d}x.$$ On the other hand, for $0<\delta<{\varepsilon}$, we can write $$e^{-t\Psi(u)}\int_\delta^{\varepsilon}e^{\mathrm{i}uz}\,\frac{\mathrm{d}z}{z^{3/2}} =\int_\delta^{\varepsilon}\Big(\int_\mathbb{R}p_t(x)\,e^{\mathrm{i}ux}\,\mathrm{d}x\Big)\,e^{\mathrm{i}uz}\,\frac{\mathrm{d}z}{z^{3/2}} =\int_\delta^{\varepsilon}\Big(\int_\mathbb{R}p_t(x-z)\,e^{\mathrm{i}ux}\,\mathrm{d}x\Big)\,\frac{\mathrm{d}z}{z^{3/2}},$$ and $$e^{-t\Psi(u)}\!\!\int_\delta^{\varepsilon}(1-e^{\mathrm{i}uz})\frac{\mathrm{d}z}{z^{3/2}} =\int_\delta^{\varepsilon}\Big(\int_\mathbb{R}(p_t(x)-p_t(x-z))e^{\mathrm{i}ux}\,\mathrm{d}x\Big)\frac{\mathrm{d}z}{z^{3/2}} =\int_\mathbb{R}\Big(\int_\delta^{\varepsilon}(p_t(x)-p_t(x-z))\frac{\mathrm{d}z}{z^{3/2}}\Big)e^{\mathrm{i}ux} \mathrm{d}x.$$ The last display remains valid for $\delta=0$ as we now show. 
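As a brief aside before passing to the limit $\delta\to 0$: the "simplifications" leading to [\[E1\]](#E1){reference-type="eqref" reference="E1"} amount to the observation that the two $2\,\varepsilon^{-1/2}$ terms cancel directly, and that the $|u|^{1/2}$ terms cancel because of the following elementary identity (a sketch, with $c_0=1/\sqrt{3}$ and $c_1=\sqrt{3/(8\pi)}$ as above).

```latex
% Constant check behind (E1): why the |u|^{1/2} terms cancel.
\[
  c_1\sqrt{2\pi}
  =\sqrt{\frac{3}{8\pi}}\,\sqrt{2\pi}
  =\sqrt{\frac{3}{4}}
  =\frac{\sqrt{3}}{2}
  =\frac{3}{2}\,c_0 ,
\]
% hence, as stated in the text,
%   i t Psi'(u) = - c_1 t sqrt(2 pi) |u|^{1/2} (1 - i sgn(u)),
% and this term exactly offsets the contribution
%   c_1 t ( - sqrt(2 pi) (1 - i sgn(u)) |u|^{1/2} ) e^{-t Psi(u)}
% coming from the integral over (0, infinity).
```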
By dominated convergence to justify the passage to the limit $\delta\to 0$, it suffices to show $$\label{integra2} \int_\mathbb{R}\Big(\int_0^{\varepsilon}|p_t(x)-p_t(x-z)|\,\frac{\mathrm{d}z}{z^{3/2}}\Big) \mathrm{d}x <\infty.$$ For this, use the fact that $x\mapsto p_t(x)$ is unimodal (see Section [2.5](#subsec:bridge){reference-type="ref" reference="subsec:bridge"}) to observe that for $K$ large, for $x\geq K$, and $0\leq z\leq {\varepsilon}$, one has $|p_t(x)-p_t(x-z)|= p_t(x-z)-p_t(x)$ and thus $$\int_{[K,\infty)} \Big(\int_0^{\varepsilon}|p_t(x)-p_t(x-z)|\,\frac{\mathrm{d}z}{z^{3/2}}\Big) \mathrm{d}x = \int_0^{\varepsilon}\Big(\int_{[K-z,K]} p_t(x)\,\mathrm{d}x\Big)\frac{\mathrm{d}z}{z^{3/2}} <\infty$$ because $p_t$ is bounded, argue similarly for $x\leq -K$, and use the bound $|p_t(x)-p_t(x-z)|\leq Cz$ when $-K\leq x\leq K$, where $C$ is a bound for $|p'_t|$. So we have shown [\[integra2\]](#integra2){reference-type="eqref" reference="integra2"}, and therefore, $$\label{E3} e^{-t\Psi(u)}\int_0^{\varepsilon}(1-e^{\mathrm{i}uz})\frac{\mathrm{d}z}{z^{3/2}}=\int_\mathbb{R}\Big(\int_0^{\varepsilon}(p_t(x)-p_t(x-z))\,\frac{\mathrm{d}z}{z^{3/2}}\Big)e^{\mathrm{i}ux} \, \mathrm{d}x.$$ From [\[E2\]](#E2){reference-type="eqref" reference="E2"},[\[E1\]](#E1){reference-type="eqref" reference="E1"}, and then [\[E3\]](#E3){reference-type="eqref" reference="E3"}, we get $$\label{Fourier-equal} \int_\mathbb{R}\varphi_{\varepsilon}(x)\,p_t(x)\,e^{\mathrm{i}ux}\,\mathrm{d}x = c_1\,t\,\int_\mathbb{R}\Big(\int_0^{\varepsilon}(p_t(x)-p_t(x-z))\,\frac{\mathrm{d}z}{z^{3/2}}\Big)e^{\mathrm{i}ux} \, \mathrm{d}x.$$ Observe that both functions $x\mapsto p_t(x)\,\varphi_{\varepsilon}(x)$ and $$x\mapsto \int_0^{\varepsilon}(p_t(x)-p_t(x-z))\,\frac{\mathrm{d}z}{z^{3/2}}$$ are integrable with respect to Lebesgue measure and continuous. For the second function, we use [\[integra2\]](#integra2){reference-type="eqref" reference="integra2"} for integrability, and for continuity we apply the dominated convergence theorem (with the bound $|p_t(x)-p_t(x-z)|\leq C\,z$). For the first one, we use [\[integra1\]](#integra1){reference-type="eqref" reference="integra1"} for integrability, but we have to check that $\varphi_{\varepsilon}$ is continuous. This is however easy thanks to the the absolute continuity relation between the Lévy bridge and the Lévy process. In fact, write $t_j$ for the time at which the jump $Y_j$ occurs. Then, we have from [\[RNF\]](#RNF){reference-type="eqref" reference="RNF"} that $$\mathbb{E}\Bigg[\sum_{j\in\mathbb{N}, t_j\in[0,t/2]} Y_j\mathbf{1}_{\{Y_j>{\varepsilon}\}}\,\Bigg|\,U_t=x\Bigg] = \mathbb{E}\Bigg[\Bigg(\sum_{j\in\mathbb{N}, t_j\in[0,t/2]} Y_j\mathbf{1}_{\{Y_j>{\varepsilon}\}}\Bigg)\frac{p_{t/2}(x-U_{t/2})}{p_t(x)}\Bigg],$$ where the right-hand side is clearly a continuous function of $x$. A time-reversal argument shows the same conclusion if we instead take $t_j\in[t/2,t]$, and the desired continuity property of $\varphi_{\varepsilon}$ follows. 
From [\[Fourier-equal\]](#Fourier-equal){reference-type="eqref" reference="Fourier-equal"} and the above regularity, we conclude that, for every $x\in\mathbb{R}$, $$\varphi_{\varepsilon}(x)=c_1\,t\,\frac{1}{p_t(x)}\int_0^{\varepsilon}(p_t(x)-p_t(x-z))\,\frac{\mathrm{d}z}{z^{3/2}}.$$ Therefore, from the definition of $R_{\varepsilon}$ we have $$\label{E4}\mathbb{E}\Big[\sum_{j\in \mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>{\varepsilon}\}}\,\Big|\,U_t=y\Big] = y+2\,c_1\,t\,{\varepsilon}^{-1/2}+ c_1\,t\,\frac{1}{p_t(y)}\int_0^{\varepsilon}(p_t(y)-p_t(y-z))\,\frac{\mathrm{d}z}{z^{3/2}}.$$ The facts that $\lim_{x\to0}\gamma(x)=0$ and $\gamma'$ is continuous on $[0,\infty)$ (i.e., [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"} and [\[gamma\'cont\]](#gamma'cont){reference-type="eqref" reference="gamma'cont"}) imply $$\sum_{j\in\mathbb{N}} Y_j\,\gamma\Bigl(\frac{3Y_j}{2h^2}\Bigr)=\sum_{j\in\mathbb{N}} Y_j\int_0^{\frac{3Y_j}{2h^2}} \gamma'(u)\,\mathrm{d}u =\int_0^\infty \Big(\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>2h^2u/3\}}\Big)\,\gamma'(u)\,\mathrm{d}u,$$ where the interchange between summation and integration holds by the bound $|\gamma'(u)|\leq C\,u$ and the fact that $\mathbb{P}(\sum_{j\in\mathbb{N}} Y_j^2<\infty|U_t=y)=1$. (The latter again holds for our fixed value of $y$ by the usual Radon-Nikodym argument.) From the last display, we get $$\label{F3} \mathbb{E}\Big[\sum_{j\in \mathbb{N}} Y_j\,\gamma\Bigl(\frac{3Y_j}{2h^2}\Bigr)\,\Big|\,U_t=y\Big] = \int_0^\infty \mathbb{E}\Big[\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>2h^2u/3\}}\,\Big|\, U_t=y\Big]\,\gamma'(u)\,\mathrm{d}u,$$ where now the interchange between expectation and Lebesgue integration is justified by the fact that $$\int_0^\infty \mathbb{E}\Big[\sum_{j\in\mathbb{N}} Y_j\,\mathbf{1}_{\{Y_j>2h^2u/3\}}\,\Big|\, U_t=y\Big]\,|\gamma'(u)|\,\mathrm{d}u<\infty.$$ This holds thanks to [\[E4\]](#E4){reference-type="eqref" reference="E4"}, the fact that $\int_0^\infty |\gamma'(u)|\,(1\vee u ^{-1/2})\,\mathrm{d}u<\infty$ (by [\[intgam\'\]](#intgam'){reference-type="eqref" reference="intgam'"}), and $$\label{ptincint} \int_0^\infty |p_t(y)-p_t(y-z)|\,z^{-3/2}\,\mathrm{d}z<\infty,$$ where the last follows from the boundedness of $|p'_t|$. It follows from [\[F3\]](#F3){reference-type="eqref" reference="F3"} and [\[E4\]](#E4){reference-type="eqref" reference="E4"} that $$\label{F4}\mathbb{E}\Big[\sum_{j\in \mathbb{N}} Y_j\,\gamma\Bigl(\frac{3Y_j}{2h^2}\Bigr)\,\Big|\,U_t=y\Big]=2y + \frac{c_1t}{p_t(y)}\int_0^\infty\Bigg(\int_0^{2h^2u/3} (p_t(y)-p_t(y-z))\,\frac{\mathrm{d}z}{z^{3/2}}\Bigg)\gamma'(u)\mathrm{d}u,$$ where we used the equalities $$\int_0^\infty \gamma'(u)\,\mathrm{d}u= \lim_{K\to\infty}(\gamma(K)-\gamma(1/K))=2$$ (by [\[gaminfty\]](#gaminfty){reference-type="eqref" reference="gaminfty"} and [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"}), and $$\int_0^\infty \frac{\gamma'(u)}{\sqrt{u}}\,\mathrm{d}u=0.$$ To get the last equality, first note that $$\frac{\gamma'(u)}{\sqrt{u}}= -\frac{8}{3}\sqrt{\pi}\,\Big(\frac{3}{2}(\chi(u)+u\chi'(u))+ u(2\chi'(u)+u\chi''(u))\Big)= -\frac{8}{3}\sqrt{\pi}\,\frac{\mathrm{d}}{\mathrm{d}u} \Bigl(\frac{3}{2} u\chi(u)+u^2\chi'(u)\Bigr),$$ and then apply the asymptotics for $\chi$ and $\chi'$ from Section [5.1](#sec:casepositive){reference-type="ref" reference="sec:casepositive"}, namely [\[chiinfty\]](#chiinfty){reference-type="eqref" reference="chiinfty"} and [\[chi\'infty\]](#chi'infty){reference-type="eqref" reference="chi'infty"}. 
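To make the last step fully explicit, here is a short sketch of the boundary-term evaluation; it only uses the displayed identity for $\gamma'(u)/\sqrt{u}$, the expansions [\[chiinfty\]](#chiinfty){reference-type="eqref" reference="chiinfty"} and [\[chi\'infty\]](#chi'infty){reference-type="eqref" reference="chi'infty"}, and the behaviour of $\chi,\chi'$ near the origin (taken as given here).

```latex
% Why the integral of gamma'(u)/sqrt(u) over (0,infinity) vanishes.
\[
  \int_0^\infty \frac{\gamma'(u)}{\sqrt{u}}\,\mathrm{d}u
  = -\frac{8}{3}\sqrt{\pi}\,
    \Bigl[\tfrac{3}{2}\,u\chi(u)+u^{2}\chi'(u)\Bigr]_{u=0}^{u=\infty}=0 ,
\]
% since the bracket vanishes at u = 0 (u^2 chi'(u) -> 0 directly from (chi'),
% and u chi(u) -> 0 because chi stays bounded near the origin), while at infinity
\[
  \tfrac{3}{2}\,u\chi(u)+u^{2}\chi'(u)
  =\frac{1}{\sqrt{\pi}}\Bigl(\tfrac{9}{4}-\tfrac{9}{4}\Bigr)u^{-1/2}
   +O\bigl(u^{-3/2}\bigr)
  \;\longrightarrow\;0 .
\]
```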
Finally, by $\gamma(K)\to 2$ as $K\to\infty$ (by [\[gaminfty\]](#gaminfty){reference-type="eqref" reference="gaminfty"} again), we have $$\int_0^\infty\Bigg(\int_0^{2h^2u/3} (p_t(y)-p_t(y-z))\,\frac{\mathrm{d}z}{z^{3/2}}\Bigg)\gamma'(u)\mathrm{d}u =\int_0^{\infty} (p_t(y)-p_t(y-z))\Bigl(2-\gamma\Bigl(\frac{3z}{2h^2}\Bigr)\Bigr)\,\frac{\mathrm{d}z}{z^{3/2}}.$$ (The interchange of integrals is justified by [\[ptincint\]](#ptincint){reference-type="eqref" reference="ptincint"} and [\[intgam\'\]](#intgam'){reference-type="eqref" reference="intgam'"}.) Insert this into the right-hand side of [\[F4\]](#F4){reference-type="eqref" reference="F4"}, and then recall [\[key-equa\]](#key-equa){reference-type="eqref" reference="key-equa"} and [\[F2\]](#F2){reference-type="eqref" reference="F2"}, to obtain the explicit formula $$\begin{aligned} \mathbb{E}[\dot L^{a+h}-\dot L^a\mid L^a=t,\frac{1}{2}\dot L^a=y] &= \frac{c_1t}{p_t(y)}\,\int_0^{\infty} (p_t(y)-p_t(y-z))\Bigl(2-\gamma\Bigl(\frac{3z}{2h^2}\Bigr)\Bigr)\,\frac{\mathrm{d}z}{z^{3/2}}\\ &= \frac{1}{h}\,\frac{c_1t}{p_t(y)}\,\int_0^{\infty} (p_t(y)-p_t(y-h^2z))\Bigl(2-\gamma\Bigl(\frac{3z}{2}\Bigr)\Bigr)\,\frac{\mathrm{d}z}{z^{3/2}}.\end{aligned}$$ This gives the first part of the proposition. The second part is then immediate from the following elementary lemma. 0◻ Recall the definition of the function $g$ in Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"}. **Lemma 16**. *There is a function $\delta(h)\to 0$ as $h\to 0$, so that for any $K\in\mathbb{N}$ and some constant $C(K)$, $$\sup_{K^{-1}\leq t\leq K,|y|\le K}\Bigl|\frac{1}{h}g_{h}(t,y)-g(t,y)\Bigr|\le C(K)\delta(h).$$* *Proof.* Note that $$\label{mvt} \frac{1}{h}g_{h}(t,y)=\frac{c_1t}{p_t(y)}\,\int_0^{\infty} \frac{p_t(y)-p_t(y-h^2z)}{h^2 z}\Big(2-\gamma\Bigl(\frac{3z}{2}\Bigr)\Big)\,\frac{\mathrm{d}z}{\sqrt z},$$ while [\[gaminfty\]](#gaminfty){reference-type="eqref" reference="gaminfty"} and [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"} imply that $$\label{domint} \int_0^\infty |2-\gamma\Bigl(\frac{3z}{2}\Bigr)|\,\frac{\mathrm{d}z}{\sqrt z}<\infty.$$ A tedious but straightforward calculation, left for the reader, gives $$c_1\int_0^\infty \Bigl(2-\gamma\Bigl(\frac{3z}{2}\Bigr)\Bigr)\,\frac{\mathrm{d}z}{\sqrt{z}}=8.$$ Using the above in [\[mvt\]](#mvt){reference-type="eqref" reference="mvt"}, we conclude that $$\label{incbnd1} \Bigl|\frac{1}{h}g_{h}(t,y)-g(t,y)\Bigr|\le \frac{c_1t}{p_t(y)}\int_0^\infty\Bigl|\frac{p_t(y)-p_t(y-h^2z)}{h^2 z}-p'_t(y)\Bigr|\,|2-\gamma\Bigl(\frac{3z}{2}\Bigr)|\,\frac{\mathrm{d}z}{\sqrt z}.$$ The mean value theorem implies that $$\Bigl|\frac{p_t(y)-p_t(y-h^2z)}{h^2 z}-p'_t(y)\Bigr|\le (\Vert p''_t\Vert_\infty h^2 z)\wedge (2\Vert p'_t\Vert_\infty).$$ The boundedness of $|p'_1|$ and $|p''_1|$ (Section 2 of [@Zol]) and scaling imply that $\Vert p'_t\Vert_\infty\le c t^{-4/3}$ and $\Vert p''_t\Vert_\infty\le c t^{-2}$. Moreover, $p_t(y)$ is bounded below by a positive constant (depending on $K$) when $1/K\leq t\leq K$ and $|y|\leq K$. 
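The "tedious but straightforward calculation" above can at least be corroborated numerically, as in the sketch below. It relies on the same reconstructed closed forms for $\chi$ and $\gamma$ as the numerical check given after the asymptotics in Section [5.1](#sec:casepositive){reference-type="ref" reference="sec:casepositive"}; these reconstructions are assumptions of the illustration, consistent with [\[chi\'\]](#chi'){reference-type="eqref" reference="chi'"}, [\[gam0\]](#gam0){reference-type="eqref" reference="gam0"} and [\[gaminfty\]](#gaminfty){reference-type="eqref" reference="gaminfty"}, not formulas quoted from the text.

```python
# Numerical corroboration of  c1 * int_0^infty (2 - gamma(3z/2)) dz / sqrt(z) = 8.
# chi and gamma below are reconstructed closed forms (assumptions, see the remark above).
from mpmath import mp, mpf, sqrt, pi, exp, erfc, quad

mp.dps = 50


def E(x):
    return exp(x) * erfc(sqrt(x))


def chi(x):  # reconstructed (assumption)
    return 2 / sqrt(pi) * (x**1.5 + sqrt(x)) - (2 * x**2 + 3 * x) * E(x)


def chi_prime(x):  # formula (chi') from the text
    return (2 / sqrt(pi) * (x**1.5 + 3 * sqrt(x) + 0.5 / sqrt(x))
            + (-2 * x**2 - 7 * x - 3) * E(x))


def gamma(x):  # consistent with (moment2)/(gam'inf) (assumption)
    return -mpf(8) / 3 * sqrt(pi) * x**1.5 * (chi(x) + x * chi_prime(x))


c1 = sqrt(3 / (8 * pi))
A = mpf("1e6")  # truncation point; the tail of the integral is added analytically
main = quad(lambda z: (2 - gamma(3 * z / 2)) / sqrt(z), [0, 1, 100, A])
tail = 40 / sqrt(A)  # from (gaminfty): 2 - gamma(3z/2) is close to 20/z for large z
print(c1 * (main + tail))  # expected: close to 8
```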
Now use the above bounds in [\[incbnd1\]](#incbnd1){reference-type="eqref" reference="incbnd1"} to bound the right-hand side of [\[incbnd1\]](#incbnd1){reference-type="eqref" reference="incbnd1"}, and hence also the left-hand side, for $|y|\le K$ and $1/K\leq t\leq K$ by $$C(K)\int_0^\infty((h^2z)\wedge 1)|2-\gamma\Bigl(\frac{3z}{2}\Bigr)|\,\frac{\mathrm{d}z}{\sqrt z}.$$ To complete the proof, define $\delta(h)$ to be the above integral and use [\[domint\]](#domint){reference-type="eqref" reference="domint"} to see that $\delta(h)\to 0$ as $h\to 0$ by dominated convergence. ◻ # A stochastic differential equation {#sec:SDE} In this section, we derive the stochastic differential equation satisfied by the process $(L^x,\dot L^x)_{x\geq 0}$. Recall that for every $t>0$ and $y\in \mathbb{R}$, $$g(t,y):=8\,t\, \frac{p'_t(y)}{p_t(y)},$$ and $g(0,y)=0$ for every $y\in\mathbb{R}$. Recall also the notation $R$ for the supremum of the support of $\mathbf{Y}$. By [\[Lsupp\]](#Lsupp){reference-type="eqref" reference="Lsupp"}, we have $R=\inf\{x\geq 0:L^x=0\}$. **Lemma 17**. *We have $\displaystyle \int_0^R |g(L^x,\frac{1}{2}\dot L^x)|\,\mathbf{1}_{\{g(L^x,\frac{1}{2}\dot L^x)<0\}}\,\mathrm{d}x <\infty$ a.s.* By scaling, we have, for every $t>0$ and $y\in \mathbb{R}$, $$\label{gformu} g(t,y)=8\,t\, \frac{p'_t(y)}{p_t(y)}= 8\,t^{1/3}\,\frac{p'_1(yt^{-2/3})}{p_1(yt^{-2/3})}.$$ The unimodality of the function $p_1$ (Theorem 2.7.5 of [@Zol]) shows there is a constant $y_0\in\mathbb{R}$ such that $p'_1(y)\geq 0$ for every $y\leq y_0$. Recall from [\[gbnd1\]](#gbnd1){reference-type="eqref" reference="gbnd1"} that $|p'_1(y)/p_1(y)|$ is bounded above by a constant $C$ when $y\geq y_0$. Hence, if $g(t,y)<0$ (forcing $p'_1(yt^{-2/3})<0$ and thus $yt^{-2/3}> y_0$), we obtain from the above that $|g(t,y)|\leq 8C\,t^{1/3}$. Finally, we get $$\int_0^R |g(L^x,\frac{1}{2}\dot L^x)|\,\mathbf{1}_{\{g(L^x,\frac{1}{2}\dot L^x)<0\}}\,\mathrm{d}x \leq \int_0^R 8C\, (L^x)^{1/3}\,\mathrm{d}x<\infty\ \ a.s.,$$ which completes the proof. We now turn to the proof of our main result. Let $n\in\mathbb{N}$. By Proposition [Proposition 15](#increment-derivative){reference-type="ref" reference="increment-derivative"} (and the known Markov property of $(L^x,\dot L^x)_{x\geq 0}$), we have for every $u\geq 0$, $$\label{Markov1} \mathbb{E}[\,\dot L^{u+\frac{1}{n}}-\dot L^u\mid (L^r,\dot L^r)_{r\leq u}] = \mathbb{E}[\,\dot L^{u+\frac{1}{n}}-\dot L^u\mid L^u,\dot L^u]= g_{1/n}(L^u,\frac{1}{2}\dot L^u)\quad\hbox{a.s.}$$ Note that the equality of the last display is trivial on the event $\{L^u=0\}=\{u\geq R\}$. For every real $K>1$, set $$\label{TKdef}T_K:=\inf\{x\geq 0: L^x\vee |\dot L^x| \geq K \hbox{ or }L^x\leq 1/K\},$$ and for every real $a\geq 0$, let $[a]_n$ be the largest number of the form $j/n$, $j\in\mathbb{Z}$, smaller than or equal to $a$. Fix $0<s<t$, and let $f$ be a bounded continuous function on $[0,\infty)\times \mathbb{R}$. We evaluate $$\mathcal{R}^K_n(s,t):= \mathbb{E}\Bigg[\Bigg( \dot L^{[t]_n\wedge T_K}- \dot L^{[s]_n\wedge T_K}- \sum_{j=0}^{n[t]_n-n[s]_n-1} \mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} g_{1/n}\Bigl(L^{[s]_n+j/n},\frac{1}{2}\dot L^{[s]_n+j/n})\Bigr)\Bigg) f(L^{[s]_n},\dot L^{[s]_n})\Bigg],$$ where $g_{1/n}$ is defined in [\[def-gh\]](#def-gh){reference-type="eqref" reference="def-gh"}. 
To this end, we observe that $$\label{tech1} \dot L^{[t]_n\wedge T_K}- \dot L^{[s]_n\wedge T_K}= \sum_{j=0}^{n[t]_n-n[s]_n-1} \mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} \Big(\dot L^{[s]_n+\frac{j+1}{n}}-\dot L^{[s]_n+\frac{j}{n}}\Big) -\mathbf{1}_{\{[s]_n\leq T_K<[t]_n\}} \Big(\dot L^{[s]_n+\frac{j_n}{n}}-\dot L^{T_K}\Big)$$ where $j_n=\inf\{j\in\mathbb{Z}_+:[s]_n+\frac{j}{n}\geq T_K\}$. Note that, on the event $\{[s]_n\leq T_K<[t]_n\}$, we have $0\leq [s]_n+\frac{j_n}{n}-T_K\leq \frac{1}{n}$. Thanks to [\[tech1\]](#tech1){reference-type="eqref" reference="tech1"}, we can rewrite the definition of $\mathcal{R}^K_n(s,t)$ in the form $$\begin{aligned} \label{tech00} \mathcal{R}^K_n(s,t)=&\sum_{j=0}^{n[t]_n-n[s]_n-1}\mathbb{E}\Big[\mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} \Big(\dot L^{[s]_n+\frac{j+1}{n}}-\dot L^{[s]_n+\frac{j}{n}}-g_{1/n}\Bigl(L^{[s]_n+j/n},\frac{1}{2}\dot L^{[s]_n+j/n}\Bigr)\Big)f(L^{[s]_n},\dot L^{[s]_n})\Big]\nonumber\\ &-\mathbb{E}\Big[ \mathbf{1}_{\{[s]_n\leq T_K<[t]_n\}} \Big(\dot L^{[s]_n+\frac{j_n}{n}}-\dot L^{T_K}\Big) f(L^{[s]_n},\dot L^{[s]_n})\Big] .\end{aligned}$$ For every $0\leq j\leq n[t]_n-n[s]_n-1$, [\[Markov1\]](#Markov1){reference-type="eqref" reference="Markov1"} gives $$\mathbb{E}\Big[ \dot L^{[s]_n+\frac{j+1}{n}}-\dot L^{[s]_n+\frac{j}{n}}\,\Big|\,(L^r,\dot L^r)_{r\leq [s]_n+\frac{j}{n}}\Big] = g_{1/n}\Bigl(L^{[s]_n+\frac{j}{n}},\frac{1}{2}\dot L^{[s]_n+\frac{j}{n}}\Bigr),$$ so that $$\mathbb{E}\Big[\Big(\dot L^{[s]_n+\frac{j+1}{n}}-\dot L^{[s]_n+\frac{j}{n}}- g_{1/n}\Bigl(L^{[s]_n+\frac{j}{n}},\frac{1}{2}\dot L^{[s]_n+\frac{j}{n}}\Bigr)\Big) \times \mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} f(L^{[s]_n},\dot L^{[s]_n})\Big]=0,$$ and thus, by [\[tech00\]](#tech00){reference-type="eqref" reference="tech00"}, $$\mathcal{R}^K_n(s,t) = - \mathbb{E}\Big[ \mathbf{1}_{\{[s]_n\leq T_K<[t]_n\}} \Big(\dot L^{[s]_n+\frac{j_n}{n}}-\dot L^{T_K}\Big) f(L^{[s]_n},\dot L^{[s]_n})\Big].$$ By Lemma [Lemma 10](#moment-deri){reference-type="ref" reference="moment-deri"}(ii), we have $$\mathbb{E}\Bigg[\sup_{s/2\leq x<y\leq t+1}\Big(\frac{|\dot L^y-\dot L^x|}{|y-x|^\beta}\Big)\Bigg] <\infty$$ where $\beta>0$. Provided that $n$ is sufficiently large so that $[s]_n>s/2$, we thus get $$\label{tech2}|\mathcal{R}^K_n(s,t)|\leq C\,n^{-\beta},$$ where $C$ is a constant. When $n\to\infty$, we have $$\label{tech3}(\dot L^{[t]_n\wedge T_K},\dot L^{[s]_n\wedge T_K}) \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{}^{\rm a.s.}} (\dot L^{t\wedge T_K},\dot L^{s\wedge T_K}),$$ and we claim that $$\label{tech4}\sum_{j=0}^{n[t]_n-n[s]_n-1} \mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} g_{1/n}\Bigl(L^{[s]_n+j/n},\frac{1}{2}\dot L^{[s]_n+j/n}\Bigr) \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{}^{\rm a.s.}} \int_{s\wedge T_K}^{t\wedge T_K} g\Bigl(L^u,\frac{1}{2}\dot L^u\Bigr)\,\mathrm{d}u.$$ To justify [\[tech4\]](#tech4){reference-type="eqref" reference="tech4"}, note that $$\sum_{j=0}^{n[t]_n-n[s]_n-1} \mathbf{1}_{\{[s]_n+\frac{j}{n}< T_K\}} g_{1/n}(L^{[s]_n+j/n},\frac{1}{2}\dot L^{[s]_n+j/n})=\int_{[s]_n}^{[t]_n} n\,g_{1/n}(L^{[r]_n},\frac{1}{2}\dot L^{[r]_n})\,\mathbf{1}_{\{[r]_n< T_K\}}\,\mathrm{d}r.$$ Lemma [Lemma 16](#lem:gconv){reference-type="ref" reference="lem:gconv"} implies that $$\lim_{n\to\infty}\sup_{r<T_K}|ng_{1/n}(L^{[r]_n},\frac{1}{2}\dot L^{[r]_n})-g(L^r,\frac{1}{2}\dot L^r)|=0,$$ and [\[tech4\]](#tech4){reference-type="eqref" reference="tech4"} now follows. 
It follows from [\[tech2\]](#tech2){reference-type="eqref" reference="tech2"}, [\[tech3\]](#tech3){reference-type="eqref" reference="tech3"}, [\[tech4\]](#tech4){reference-type="eqref" reference="tech4"} and the definition of $\mathcal{R}^K_n(s,t)$ (justification is simple because stopping at time $T_K$ makes the dominated convergence theorem easy to apply) that $$\mathbb{E}\Bigg[\Big(\dot L^{t\wedge T_K}-\dot L^{s\wedge T_K} - \int_{s\wedge T_K}^{t\wedge T_K} g(L^u,\frac{1}{2}\dot L^u)\,\mathrm{d}u\Bigg)f(L^s,\dot L^s)\Bigg]=0.$$ We have assumed that $s>0$, but clearly we can pass to the limit $s\downarrow 0$ to derive the last display for $s=0$. Hence, $$\dot L^{t\wedge T_K}-\dot L^{0} - \int_{0}^{t\wedge T_K} g(L^u,\frac{1}{2}\dot L^u)\,\mathrm{d}u$$ is a martingale with respect to the filtration $\mathcal{F}^\circ_t:=\sigma\Big((L^r,\dot L^t)_{r\leq t}\Big)$. For ${\varepsilon}\in(0,1)$, set $S_{\varepsilon}=\inf\{r\geq 0: L^r\leq {\varepsilon}\}$. We get that $$M^{\varepsilon}_t:=\dot L^{t\wedge S_{\varepsilon}}-\dot L^{0} - \int_{0}^{t\wedge S_{\varepsilon}} g(L^u,\frac{1}{2}\dot L^u)\,\mathrm{d}u$$ is a local martingale (note that, if $R_K:=\inf\{x\geq 0:L^x\vee |\dot L^x|\geq K\}$, $M^{\varepsilon}_{t\wedge R_K}$ is a martingale, and $R_K\uparrow \infty$ as $K\uparrow \infty$). We next claim the quadratic variation of $M^{\varepsilon}$ is $$\label{mepsqv}\langle M^{\varepsilon},M^{\varepsilon}\rangle_t= 16\int_0^{t\wedge S_{\varepsilon}} L^r\,\mathrm{d}r.$$ To derive this from Proposition [Proposition 11](#quad-var){reference-type="ref" reference="quad-var"}, first fix $t>0$ and let $\pi_n=\{0=t^n_0<t^n_1<\dots<t^n_{m_n}=t\}$ be a sequence of subdivisions of $[0,t]$ such that $\Vert\pi_n\Vert=\max_{1\le i\le m_n}(t^n_i-t^n_{i-1})\to 0$ as $n\to\infty$. If $X$ is a stochastic process let $Q(\pi_n,X)=\sum_{i=1}^{m_n}(X(t^n_i)-X(t^n_{i-1}))^2$. Then, taking limits in probability with respect to $\mathbb{P}(\cdot |S_{\varepsilon}\ge t)$ we have, $$\langle M^{\varepsilon},M^{\varepsilon}\rangle_t=\lim_{n\to\infty}Q(\pi_n,M^{\varepsilon})=\lim_{n\to\infty}Q(\pi_n,\dot L)=16\int_0^tL^r\, \mathrm{d}r,$$ where we use Proposition [Proposition 11](#quad-var){reference-type="ref" reference="quad-var"} in the last equality, and the fact that $t\le S_{\varepsilon}$ in the second equality. This shows that $\langle M^{\varepsilon},M^{\varepsilon}\rangle_t=16\int_0^{t\wedge S_{\varepsilon}}L^r\,\mathrm{d}r$ a.s. on $\{t\le S_{\varepsilon}\}$ (this conclusion is trivial if this latter set is null, so the implicit assumption above that it is not null is justified). By taking left limits through rational values, it follows that $$\langle M^{\varepsilon},M^{\varepsilon}\rangle_t=16\int_0^{t\wedge S_{\varepsilon}}L^r\, \mathrm{d}r\ \text{ for every }t\le S_{\varepsilon}\ \text{a.s.}$$ Since $\langle M^{\varepsilon},M^{\varepsilon}\rangle_t$ is constant for $t\ge S_{\varepsilon}$, [\[mepsqv\]](#mepsqv){reference-type="eqref" reference="mepsqv"} follows. If we set $$\widetilde B^{\varepsilon}_t=\int_0^t \frac{1}{4\sqrt{L^r}}\,\mathrm{d}M^{\varepsilon}_r$$ then $\widetilde B^{\varepsilon}$ is a local martingale with quadratic variation $$\label{Bveqv}\langle\widetilde B^{\varepsilon},\widetilde B^{\varepsilon}\rangle_t= t\wedge S_{\varepsilon}.$$ In particular, $\widetilde B^{\varepsilon}$ is a (true) martingale. Up to enlarging the probability space, we can find a linear Brownian motion $B'$ with $B'_0=0$, which is independent of $\mathbf{X}$, and thus also of $(L^x,\dot L^x)_{x\in\mathbb{R}}$. 
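For completeness, [\[Bveqv\]](#Bveqv){reference-type="eqref" reference="Bveqv"} is a one-line consequence of [\[mepsqv\]](#mepsqv){reference-type="eqref" reference="mepsqv"}; as a sketch:

```latex
% Quadratic variation of the time-changed local martingale.
\[
  \langle\widetilde B^{\varepsilon},\widetilde B^{\varepsilon}\rangle_t
  =\int_0^t \frac{1}{16\,L^r}\,
     \mathrm{d}\langle M^{\varepsilon},M^{\varepsilon}\rangle_r
  =\int_0^{t\wedge S_{\varepsilon}} \frac{16\,L^r}{16\,L^r}\,\mathrm{d}r
  = t\wedge S_{\varepsilon},
\]
% where the integrand is well defined because L^r >= epsilon > 0 for r <= S_epsilon.
```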
We introduce the (completion of the) filtration $\mathcal{F}_t:=\mathcal{F}^\circ_t\vee \sigma(B'_r:0\leq r\leq t)$, so that $\widetilde B^{\varepsilon}$ remains a martingale in this filtration. If we set $$B^{\varepsilon}_t=\widetilde B^{\varepsilon}_t + \int_{t\wedge S_{\varepsilon}}^t \mathrm{d}B'_s$$ then one immediately verifies that $B^{\varepsilon}$ is a martingale of $(\mathcal{F}_t)_{t\geq 0}$ and $$\langle B^{\varepsilon},B^{\varepsilon}\rangle_t= t.$$ Therefore $B^{\varepsilon}$ is a linear Brownian motion. Next, suppose that $0<{\varepsilon}'<{\varepsilon}<1$. By construction, we have $\widetilde B^{\varepsilon}_t=\widetilde B^{{\varepsilon}'}_{t\wedge S_{\varepsilon}}$. We can deduce from this that $\widetilde B^{\varepsilon}_{S_{\varepsilon}}$ converges in probability when ${\varepsilon}\to 0$. Indeed, for every $t>0$, $$\mathbb{E}[(\widetilde B^{\varepsilon}_{S_{\varepsilon}\wedge t}- \widetilde B^{{\varepsilon}'}_{S_{{\varepsilon}'}\wedge t})^2]= \mathbb{E}[(\widetilde B^{{\varepsilon}'}_{S_{\varepsilon}\wedge t}- \widetilde B^{{\varepsilon}'}_{S_{{\varepsilon}'}\wedge t})^2] = \mathbb{E}[S_{\varepsilon}\wedge t-S_{{\varepsilon}'}\wedge t] \mathrel{ \mathop{\kern 0pt\longrightarrow}\limits_{{\varepsilon},{\varepsilon}'\to 0, {\varepsilon}'<{\varepsilon}}^{}} 0,$$ since we know that $S_{\varepsilon}\uparrow R$ as ${\varepsilon}\downarrow 0$. Let $\Gamma$ stand for the limit in probability of $\widetilde B^{\varepsilon}_{S_{\varepsilon}}$ when ${\varepsilon}\to 0$. Define a process $\widetilde B^0$ by setting $\widetilde B^0_t=\widetilde B^{\varepsilon}_t$ on the event $\{t<S_{\varepsilon}\}$ (note this does not depend on the choice of ${\varepsilon}$) and $\widetilde B^0_t=\Gamma$ on the event $\{t\geq R\}$. Finally set $$B_t:= \widetilde B^0_{t\wedge R} + \int_{t\wedge R}^t \mathrm{d}B'_s.$$ Then, it is straightforward to verify that $B^{\varepsilon}_t$ converges in probability to $B_t$ when ${\varepsilon}\to 0$, for every $t\geq 0$ (on the event $\{t\geq R\}$ use the convergence in probability of $\widetilde B^{\varepsilon}_{S_{\varepsilon}}$ to $\Gamma=\widetilde B^0_R$). The process $(B_t)_{t\geq 0}$ has right-continuous sample paths and the same finite-dimensional marginals as a linear Brownian motion, hence $(B_t)_{t\geq 0}$ is a linear Brownian motion. More precisely, it is not hard to verify that $(B_t)_{t\geq 0}$ is an $(\mathcal{F}_t)$-Brownian motion. Next note that $$M^{\varepsilon}_t=4\int_0^{t\wedge S_{\varepsilon}} \sqrt{L^s}\,\mathrm{d}\widetilde B^{\varepsilon}_s=4\int_0^{t\wedge S_{\varepsilon}} \sqrt{L^s}\,\mathrm{d}B_s,$$ since $\widetilde B^{\varepsilon}_{\cdot\wedge S_{\varepsilon}}= B^{\varepsilon}_{\cdot\wedge S_{\varepsilon}}= B_{\cdot\wedge S_{\varepsilon}}$. Therefore, we get $$\label{SDE-approx} \dot L^{t\wedge S_{\varepsilon}}= \dot L^0 + 4\int_0^{t\wedge S_{\varepsilon}} \sqrt{L^s}\,\mathrm{d}B_s + \int_0^{t\wedge S_{\varepsilon}} g(L^s,\frac{1}{2}\dot L^s)\,\mathrm{d}s.$$ When ${\varepsilon}\to 0$, $\dot L^{t\wedge S_{\varepsilon}}$ converges to $\dot L^{t\wedge R}$ and $\int_0^{t\wedge S_{\varepsilon}} \sqrt{L^s}\,\mathrm{d}B_s$ converges to $\int_0^{t\wedge R} \sqrt{L^s}\,\mathrm{d}B_s$ in probability. It follows that $\int_0^{t\wedge S_{\varepsilon}} g(L^s,\frac{1}{2}\dot L^s)\,\mathrm{d}s$ also converges in probability to a finite random variable. 
By Lemma [Lemma 17](#tech-SDE){reference-type="ref" reference="tech-SDE"}, this is only possible if $$\int_0^{t\wedge R} g(L^s,\frac{1}{2}\dot L^s)\,\mathbf{1}_{\{g(L^s,\frac{1}{2}\dot L^s)>0\}}\,\mathrm{d}s <\infty\,\quad\hbox{a.s.},$$ and therefore by the same lemma, $$\int_0^{t\wedge R} |g(L^s,\frac{1}{2}\dot L^s)|\,\mathrm{d}s <\infty\,\quad\hbox{a.s.},$$ which by [\[Lsupp\]](#Lsupp){reference-type="eqref" reference="Lsupp"} gives the first assertion in Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"}. We may now let ${\varepsilon}\to 0$ in [\[SDE-approx\]](#SDE-approx){reference-type="eqref" reference="SDE-approx"}, to conclude that $$\dot L^{t\wedge R}= \dot L^0 + 4\int_0^{t\wedge R} \sqrt{L^s}\,\mathrm{d}B_s + \int_0^{t\wedge R} g(L^s,\frac{1}{2}\dot L^s)\,\mathrm{d}s.$$ Since $L^s=\dot L^s=0$ when $s>R$ by [\[Lsupp\]](#Lsupp){reference-type="eqref" reference="Lsupp"}, this implies the stochastic differential equation [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"}. It remains to establish the pathwise uniqueness claim. Let $(X^x,Y^x)$ be any solution to [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} such that $(X^0,Y^0)=(L^0,\dot L^0)$ and $(X^x,Y^x)=(X^{R'},Y^{R'})$ for all $x>R'=\inf\{x\ge 0:X^x=0\}$. The smoothness of $p_t(y)$ in $(t,y)\in(0,\infty)\times \mathbb{R}$ and strict positivity of $p_t(y)$ for $t>0$ show that $g(t,y)$ is Lipschitz on $[1/K,K]\times [-K,K]$, as is $(t,y)\to \sqrt t$. The classical proof of pathwise uniqueness in Itô equations with locally Lipschitz coefficients (e.g. Theorem 3.1 in Chapter IV of [@IW]) now shows that if $T_K$ is as in [\[TKdef\]](#TKdef){reference-type="eqref" reference="TKdef"} and $T'_K$ is the analogous stopping time for $(X,Y)$, then $T_K=T'_K$ and $(X^{x\wedge T'_K},Y^{x\wedge T'_K})=(L^{x\wedge T_K},\dot L^{x\wedge T_K})$ for all $x\ge 0$ a.s. Then $T'_K=T_K\uparrow R<\infty$ a.s., and taking limits along $\{T_K\}$, we see that $R=R'$, $(X^R,Y^R)=(L^R,\dot L^R)=(0,0)$ and $(X^{x\wedge R},Y^{x\wedge R})=(L^{x\wedge R},\dot L^{x\wedge R})$ for all $x\ge 0$ a.s. It therefore follows that $(X,Y)=(L,\dot L)$ a.s. (both are $(0,0)$ for $x>R$) and the pathwise uniqueness claim is proved. We now show how a transformation of the state space and random time change can reduce the SDE [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} to a simple one-dimensional diffusion. We will only use the equation [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} and standard stochastic analysis in this discussion. In particular, we could replace $(L^x,\dot L^x)$ by any solution to [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} in $[0,\infty)\times\mathbb{R}$ starting from an arbitrary initial condition in $(0,\infty)\times\mathbb{R}$. Recall that $R=\inf\{x\geq 0: L^x=0\}$. **Proposition 18**. *(a) We have $$\label{integral-infinite} \int_0^R (L^x)^{-1/3} \mathrm{d}x=\infty\quad\hbox{a.s.},$$ and therefore can introduce the time change $$\tau(t)=\inf\{x\geq 0: \int_0^x (L^y)^{-1/3} \mathrm{d}y\geq t\}<R, \quad t\geq 0.$$ (b) Set $Z^x:=\dot L^x(L^x)^{-2/3}$ for every $x\in[0,R)$, and $\tilde Z_t :=Z^{\tau(t)}$ and $\tilde L_t:= L^{\tau(t)}$ for every $t\geq 0$. 
The process $(\widetilde Z_t,\widetilde L_t)_{t\geq 0}$ is the pathwise unique solution of the equation $$\begin{aligned} \label{SDE4} \tilde Z_t&=\tilde Z_0+4W_t+\int_0^t b(\tilde Z_s)\, \mathrm{d}s\\ \label{SDE4b} \tilde L_t&=\tilde L_0+\int_0^t\tilde L_s\tilde Z_s\,\mathrm{d}s, \end{aligned}$$ where $W$ is a linear Brownian motion, and, for $z\in \mathbb{R}$, $$\label{driftasymp} b(z):= 8\,\frac{p_1'}{p_1}\Bigl(\frac{z}{2}\Bigl)-\frac{2}{3}\,z^2=-\frac{2}{3}\mathrm{sgn}(z)z^2+O\Bigl(\frac{1}{|z|}\Bigr)\text{ as }z\to\pm\infty.$$ (c) The process $(\widetilde Z_t)_{t\geq 0}$ is the pathwise unique solution of [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} and is a recurrent one-dimensional diffusion process. As $t\to\infty$, $\widetilde Z_t$ converges weakly (in fact, in total variation) to its unique invariant probability measure $\nu(\mathrm{d}z)=Cp_1(\frac{z}{2})^2 \exp(-\frac{z^3}{36})\,\mathrm{d}z$, where $C>0$. Moreover, $$\label{expsde} \tilde L_t=\tilde L_0\exp\Bigl(\int_0^t\tilde Z_s\,\mathrm{d}s\Bigr)\text{ for all }t\geq 0.$$* It is interesting to compare [\[integral-infinite\]](#integral-infinite){reference-type="eqref" reference="integral-infinite"} with Hong's results [@Hon] showing that $$\lim_{y\uparrow R}\frac{\log (L^y)}{\log(R-y)}=3\quad\hbox{a.s.}$$ It will be useful to analyze the left tail of $p'_1/p_1$ and so give a counterpart of the $O(1/y)$ right tail behavior in [\[rtpt\]](#rtpt){reference-type="eqref" reference="rtpt"}. One argues just as before, using the representation in terms of Airy functions (see [\[Airatio\]](#Airatio){reference-type="eqref" reference="Airatio"} and [\[rtpt\]](#rtpt){reference-type="eqref" reference="rtpt"}). In fact the calculation using the asymptotics of $\text{Ai}$ and $\text{Ai}'$, is now easier, but the behavior is quite different: $$\label{ltpt} \frac{p_1'}{p_1}(y)=\frac{2}{3}y^2+\frac{1}{2y}+O\Bigl(\frac{1}{y^4}\Bigr)\ \ \text{ as }y\to-\infty.$$ From [\[rtpt\]](#rtpt){reference-type="eqref" reference="rtpt"} and [\[ltpt\]](#ltpt){reference-type="eqref" reference="ltpt"}, we obtain the asymptotics in [\[driftasymp\]](#driftasymp){reference-type="eqref" reference="driftasymp"}. Then, by [\[gformu\]](#gformu){reference-type="eqref" reference="gformu"} we may write [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"} as $$\label{SDE2} \begin{aligned} \dot L^x&=\dot L^0 + 4\int_0^x \sqrt{{L^y}}\,\mathrm{d}B_y + \int_0^x 8(L^y)^{1/3}\frac{p'_1}{p_1}\Bigl(\frac{Z^y}{2}\Bigr)\,\mathrm{d}y\\ L^x&=L^0+\int_0^x\dot L^y\,\mathrm{d}y, \end{aligned}$$ where $Z^x=0$ for $x\geq R$ by convention. We analyze the above using the coordinates $(Z^x,L^x)$, which by Itô calculus satisfy for $x<R$, $$\label{SDE3} \begin{aligned} Z^x&=Z^0+4\int_0^x(L^y)^{-1/6}\,\mathrm{d}B_y+\int_0^x(L^y)^{-1/3}\,b(Z^y)\,\mathrm{d}y\\ L^x&=L^0+\int_0^x(L^y)^{-1/3}L^y Z^y\,dy. \end{aligned}$$ The precise meaning of the above is that it holds for the equation stopped at $R_{\varepsilon}=\inf\{y\ge0:L^y\le{\varepsilon}\}$ for all ${\varepsilon}>0$. We set $\rho:=\int_0^R (L^x)^{-1/3} \mathrm{d}x$ and now use the random time change $\tau(t)$ introduced in part (a) of the proposition, observing that this random change makes sense only for $t<\rho$ (at present, we do not yet know that $\rho=\infty$ a.s.). 
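For the record, here is a sketch of the Itô computation behind [\[SDE3\]](#SDE3){reference-type="eqref" reference="SDE3"}: since $\mathrm{d}L^x=\dot L^x\,\mathrm{d}x$ has finite variation, no Itô correction term appears, and [\[SDE2\]](#SDE2){reference-type="eqref" reference="SDE2"} gives, for $x<R$,

```latex
% Ito computation leading to (SDE3), for x < R.
\[
  \mathrm{d}Z^x
  =(L^x)^{-2/3}\,\mathrm{d}\dot L^x
   -\tfrac{2}{3}\,\dot L^x\,(L^x)^{-5/3}\,\mathrm{d}L^x
  =4\,(L^x)^{-1/6}\,\mathrm{d}B_x
   +(L^x)^{-1/3}\Bigl(8\,\frac{p_1'}{p_1}\Bigl(\frac{Z^x}{2}\Bigr)
     -\tfrac{2}{3}\,(Z^x)^2\Bigr)\,\mathrm{d}x ,
\]
% using (L^x)^{-2/3} * 4 (L^x)^{1/2} = 4 (L^x)^{-1/6},
%       (L^x)^{-2/3} * 8 (L^x)^{1/3} = 8 (L^x)^{-1/3},  and
%       (2/3) (dot L^x)^2 (L^x)^{-5/3} = (2/3) (L^x)^{-1/3} (Z^x)^2 ,
% which is exactly (SDE3) with b(z) = 8 (p_1'/p_1)(z/2) - (2/3) z^2.
```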
If $\tilde Z_t :=Z^{\tau(t)}$ and $\tilde L_t:= L^{\tau(t)}$ for $t<\rho$, it follows that $$\begin{aligned} \label{SDE5} \tilde Z_t&=\tilde Z_0+4W_t+\int_0^t b(\tilde Z_s)\, \mathrm{d}s\\ \label{SDE5b} \tilde L_t&=\tilde L_0+\int_0^t\tilde L_s\tilde Z_s\,\mathrm{d}s, \end{aligned}$$ where $W_t=\int_0^{\tau(t)}(L^y)^{-1/6}\, \mathrm{d}B_y$. Again the above equation means that for all ${\varepsilon}>0$, it holds for the equation stopped at $\rho_{\varepsilon}:=\tau^{-1}(R_{\varepsilon})=\int_0^{R_{\varepsilon}}(L^x)^{-1/3}\mathrm{d}x=\inf\{t\ge 0:\tilde L_t\le {\varepsilon}\}$. Then $W_{t\wedge {\rho_{\varepsilon}}}=\int_0^{\tau(t\wedge \rho_{\varepsilon})}(L^y)^{-1/6}\, \mathrm{d}B_y$ is a continuous local martingale, with quadratic variation $$\label{Wqv} \langle W_{\cdot\wedge \rho_{\varepsilon}},W_{\cdot\wedge \rho_{\varepsilon}}\rangle_t=\int_0^{\tau(t\wedge \rho_{\varepsilon})}(L^y)^{-1/3}\, \mathrm{d}y=t\wedge \rho_{\varepsilon}.$$ Note that [\[SDE5b\]](#SDE5b){reference-type="eqref" reference="SDE5b"} implies that [\[expsde\]](#expsde){reference-type="eqref" reference="expsde"} holds for $t<\rho$. By the same method as in the proof of Theorem [Theorem 1](#main-th){reference-type="ref" reference="main-th"} (compare [\[Wqv\]](#Wqv){reference-type="eqref" reference="Wqv"} with [\[Bveqv\]](#Bveqv){reference-type="eqref" reference="Bveqv"}), we may assume that $W_t$ is defined for every $t\geq 0$ and is a linear Brownian motion. It follows from the definition of $\rho$ that $\liminf_{t\uparrow\rho}\tilde L_t=0$, and therefore by [\[expsde\]](#expsde){reference-type="eqref" reference="expsde"} (for $t<\rho$) we have $\liminf_{t\uparrow\rho}\tilde Z_t=-\infty$ a.s. on $\{\rho<\infty\}$. Therefore $\tilde Z$ is the unique solution of [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} up to its explosion time $\rho\le \infty$ (again use Theorem 3.1 in Chapter IV of [@IW]). By [\[driftasymp\]](#driftasymp){reference-type="eqref" reference="driftasymp"} the explosion time of $\tilde Z$ must be infinite a.s. (see Theorem 3.1(1) of Chapter VI of [@IW]). We conclude that $\rho=\infty$ a.s., giving part (a) of the proposition, as well as [\[expsde\]](#expsde){reference-type="eqref" reference="expsde"} and the fact that $\tilde Z$ is the pathwise unique solution of [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} in (c). The other assertions are now easily derived. Equations [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} and [\[SDE4b\]](#SDE4b){reference-type="eqref" reference="SDE4b"} are just [\[SDE5\]](#SDE5){reference-type="eqref" reference="SDE5"} and [\[SDE5b\]](#SDE5b){reference-type="eqref" reference="SDE5b"} written for every $t\geq 0$. Pathwise uniqueness for the system [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"}, [\[SDE4b\]](#SDE4b){reference-type="eqref" reference="SDE4b"} again follows from Theorem 3.1 in Chapter IV of [@IW] by the local Lipschitz nature of the drift coefficient. This completes the proof of (b). By [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"} above and (2) of Chapter 33 of [@Ka], $\tilde Z$ is a one-dimensional diffusion with scale function $$s(x)=\int_0^x\exp\Bigl(-\int_0^y\frac{b(z)}{8}\, \mathrm{d}z\Bigr)\, \mathrm{d}y=c\int_0^xp_1\Big(\frac{y}{2}\Big)^{-2}\exp\Bigl(\frac{y^3}{36}\Bigl)\, \mathrm{d}y,$$ where $c>0$ is a constant. 
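The computation of $s$ is elementary; as a sketch, using the formula for $b$ in [\[driftasymp\]](#driftasymp){reference-type="eqref" reference="driftasymp"} and the strict positivity of $p_1$,

```latex
% Scale-function computation.
\[
  \int_0^y \frac{b(z)}{8}\,\mathrm{d}z
  =\int_0^y\Bigl(\frac{p_1'}{p_1}\Bigl(\frac{z}{2}\Bigr)-\frac{z^2}{12}\Bigr)\mathrm{d}z
  =2\log p_1\Bigl(\frac{y}{2}\Bigr)-2\log p_1(0)-\frac{y^3}{36},
\]
% so that
\[
  s'(y)=\exp\Bigl(-\int_0^y\frac{b(z)}{8}\,\mathrm{d}z\Bigr)
       =p_1(0)^{2}\,p_1\Bigl(\frac{y}{2}\Bigr)^{-2}\exp\Bigl(\frac{y^3}{36}\Bigr),
\]
% which is the displayed formula with c = p_1(0)^2. In particular 1/s'(y) is
% proportional to p_1(y/2)^2 exp(-y^3/36), the density of nu in part (c) below.
```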
The scale function maps $\mathbb{R}$ onto $\mathbb{R}$ (as is clear from the above asymptotics for $b$ in [\[driftasymp\]](#driftasymp){reference-type="eqref" reference="driftasymp"}), and in particular, $\tilde Z$ is a recurrent diffusion (all points are visited w.p. $1$ from every starting point). From Chapter 33 of [@Ka] (see the discussions prior to Theorem 33.1 and after Theorem 33.9 in [@Ka]), the speed measure of the diffusion $s(\tilde Z_t)$ has density $(4s'\circ s^{-1}(y))^{-2}$, and is thus a finite measure since $$\int_\mathbb{R}(s'\circ s^{-1}(y))^{-2}\,\mathrm{d}y= \int_\mathbb{R}(s'(x))^{-1}\,\mathrm{d}x <\infty,$$ using [\[driftasymp\]](#driftasymp){reference-type="eqref" reference="driftasymp"} for the last. By Lemmas 33.17 and 33.19 in [@Ka], the diffusion $s(\tilde Z_t)$ has a unique invariant measure which is proportional to its speed measure, and starting at any initial point, will converge weakly to it (in fact in total variation) as $t\to\infty$. Therefore $\tilde Z_t$ has a unique invariant probability with density proportional to $1/s'(x)$, and will converge to it in the same sense. The proof of (c) is complete. The asymptotics for $p_1$ are $p_1(x)\sim c_-\sqrt{|x|} \exp\Bigl(-\frac{2}{9}|x|^3\Bigr)$ as $x\to-\infty$ and $p_1(x)\sim c_+ |x|^{-5/2}$ as $x\to\infty$, where $c_\pm>0$, and $\sim$ means the ratio approaches $1$ (e.g. [@CC] but recall our $p_1$ differs by a scaling constant). This shows that the invariant density of $\tilde Z$ satisfies $$f(x)\sim \begin{cases}C_-|x|\exp\Bigl(-\frac{|x|^3}{36}\Bigr)&\text{ as $x\to-\infty$}\\ C_+|x|^{-5}\exp\Bigl(-\frac{|x|^3}{36}\Bigr)&\text{ as $x\to+\infty$}, \end{cases}$$ where $C_\pm>0$. In terms of our original local time the weak convergence in (c) means that $$\frac{\dot L^{\tau(t)}}{(L^{\tau(t)})^{2/3}}\ \text{converges weakly to }Cp_1\Bigl(\frac{x}{2}\Bigr)^2\exp\Bigl(-\frac{x^3}{36}\Bigr)\mathrm{d}x\ \text{ as }t\to\infty,$$ where $\tau(t)\uparrow R$ as $t\to\infty$. Again this can be compared with the cubic behavior of $L^x$ near its extinction time from [@Hon]. Note in the above that $\tau'(t)=\tilde L_t^{1/3}$ is recoverable from $(L^0,\tilde Z)$ by [\[expsde\]](#expsde){reference-type="eqref" reference="expsde"}, and so one can reverse the above construction and build $(L^x,\dot L^x)$ from the diffusion $\tilde Z$ and a given initial condition $L^0>0$. The following proposition is immediate from the discussion above and uniqueness in law in [\[SDE4\]](#SDE4){reference-type="eqref" reference="SDE4"}. **Proposition 19**. *On a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t),P)$, let $W$ be an $({\mathcal F}_t)$-Brownian motion and let $(\Lambda^0,\tilde{\mathfrak{Z}}_0)$ be a pair of ${\mathcal F}_0$-measurable random variables with values in $(0,\infty)\times\mathbb{R}$. There is a pathwise unique solution, $(\tilde{\mathfrak{Z}}_t)_{t\geq 0}$, to $\mathrm{d}\tilde{\mathfrak{Z}}_t=4\mathrm{d}W_t + b(\tilde{\mathfrak{Z}}_t)\mathrm{d}t$ with initial value $\tilde{\mathfrak{Z}}_0$. 
For every $t>0$, set $$\tilde\Lambda_t=\Lambda^0\exp\Bigl(\int_0^t\tilde{\mathfrak{Z}}_s\, \mathrm{d}s\Bigr).$$ Then the following holds.* *$\tilde \Lambda_\infty:=\lim_{t\to\infty}\tilde \Lambda_t=0$, $\lim_{t\to\infty}(\tilde \Lambda_t)^{2/3}\tilde{\mathfrak{Z}}_t=0$, and $R=\int_0^\infty (\tilde \Lambda_s)^{1/3}\,\mathrm{d}s<\infty$ a.s.* *Introduce the random time change $$\int_0^{\sigma(x)} (\tilde \Lambda_s)^{1/3}\,\mathrm{d}s=x\text{ for }x<R,\text{ and set }\sigma(x)=\infty\text{ for }x\ge R.$$ Define $\Lambda^x=\tilde \Lambda_{\sigma(x)}$ for $x> 0$ and $$\mathfrak{Z}^x=\begin{cases}\tilde{\mathfrak{Z}}_{\sigma(x)}&\text{ if }x<R\\ 0&\text{ if }x\ge R. \end{cases}$$ Then $R=\inf\{x\ge 0:\Lambda^x=0\}$ and $x\mapsto \Lambda^x$ is continuously differentiable on $[0,\infty)$ with derivative $\dot \Lambda^x=\mathfrak{Z}^x(\Lambda^x)^{2/3}$ for $x\ge 0$, where we take the right-hand derivative at $x=0$.* *By enlarging our probability space, if necessary, we may assume there is a filtration $({\mathcal G}_x)_{x\ge 0}$ and a $({\mathcal G}_x)$-Brownian motion $(B_x)_{x\geq 0}$ such that $( \Lambda^x,\dot\Lambda^x)_{x\ge 0}$ is the $({\mathcal G}_x)$-adapted solution of [\[main-SDE\]](#main-SDE){reference-type="eqref" reference="main-SDE"}, stopped at $R$.*

# References

- C. Abraham, J.-F. Le Gall, Excursion theory for Brownian motion indexed by the Brownian tree. *J. Eur. Math. Soc. (JEMS)* 20, 2951--3016 (2018)
- M. Abramowitz, I. Stegun, *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*. Applied Mathematics Series, Vol. 55. US Dept. of Commerce, National Bureau of Standards, Washington D.C., 10th printing, 1972
- D. Aldous, The continuum random tree I. *Ann. Probab.* 19, 1--28 (1991)
- D. Aldous, Tree-based models for random distribution of mass. *J. Statist. Phys.* 73, 625--641 (1993)
- M. Bousquet-Mélou, S. Janson, The density of the ISE and local limit laws for embedded trees. *Ann. Appl. Probab.* 16, 1597--1632 (2006)
- G. Chapuy, J.-F. Marckert, Note on the density of ISE and a related diffusion. Preprint, arXiv:2210.10159
- A. Contat, N. Curien, Parking on Cayley trees and frozen Erdös-Rényi. *Ann. Probab.*, to appear, arXiv:2107.02116
- P. Fitzsimmons, J. Pitman, M. Yor, Markovian bridges: construction, Palm interpretation and splicing. Seminar on Stochastic Processes 1992, 101--134, Progr. Probab., 33, Birkhäuser Boston, 1993
- P. Flajolet, R. Sedgewick, *Analytic Combinatorics.* Cambridge University Press, Cambridge, 2009
- J. Hong, Improved Hölder continuity near the boundary of one-dimensional super-Brownian motion. *Electron. Comm. Probab.* 24, no. 28, 1--12 (2019)
- N. Ikeda, S. Watanabe, *Stochastic Differential Equations and Diffusion Processes.* North-Holland, Amsterdam, 1981
- O. Kallenberg, *Foundations of Modern Probability*, Third ed. Springer, Berlin, 2021
- J. Lamperti, Continuous state branching processes. *Bull. Amer. Math. Soc.* 73, 382--386 (1967)
- G. Last, M. Penrose, *Lectures on the Poisson Process.* Cambridge University Press, Cambridge, 2018
- J.-F. Le Gall, *Spatial Branching Processes, Random Snakes and Partial Differential Equations*. Lectures in Mathematics ETH Zürich. Birkhäuser, Boston, 1999
- J.-F. Le Gall, Random trees and applications. *Probab. Surveys* 2, 245--311 (2005)
- J.-F. Le Gall, Subordination of trees and the Brownian map. *Probab. Theory Related Fields* 171, 819--864 (2018)
- J.-F. Le Gall, Brownian disks and the Brownian snake. *Ann. Inst. H. Poincaré Probab. Stat.* 55, 237--313 (2019)
- J.-F. Le Gall, The Markov property of local times of Brownian motion indexed by the Brownian tree. *Ann. Probab.*, to appear, arXiv:2211.08041
- J.-F. Le Gall, A. Riera, Growth-fragmentation processes in Brownian motion indexed by the Brownian tree. *Ann. Probab.* 48, 1742--1784 (2020)
- J.-F. Le Gall, A. Riera, Some explicit distributions for Brownian motion indexed by the Brownian tree. *Markov Processes Relat. Fields* 26, 659--686 (2020)
- J.-F. Le Gall, A. Riera, Spine representations for non-compact models of random geometry. *Probab. Theory Related Fields* 181, 571--645 (2021)
- L. Mytnik, E. Perkins, The dimension of the boundary of super-Brownian motion. *Probab. Theory Related Fields* 174, 821--885 (2019)
- E. Perkins, Dawson-Watanabe superprocesses and measure-valued diffusions. École d'été de probabilités de Saint-Flour 1999. *Lecture Notes Math.* 1781. Springer, 2002
- S. Sugitani, Some properties for the measure-valued branching diffusion processes. *J. Math. Soc. Japan* 41, 437--462 (1989)
- V. M. Zolotarev, *One-Dimensional Stable Distributions.* Translations of Mathematical Monographs 65. American Mathematical Society, Providence, 1986

[^1]: Supported by the ERC Advanced Grant 740943 GeoBrown

[^2]: Supported by an NSERC Canada Discovery Grant
{ "id": "2309.06899", "title": "A stochastic differential equation for local times of super-Brownian\n motion", "authors": "Jean-Fran\\c{c}ois Le Gall and Edwin Perkins", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In the context of energy market clearing, non-merchant assets are assets that do not submit bids but whose operational constraints are included. Integrating energy storage systems as non-merchant assets can maximize social welfare. However, the disconnection between market intervals poses challenges for market properties, that are not well-considered yet. We contribute to the literature on market-clearing with non-merchant storage by proposing a market-clearing procedure that preserves desirable market properties, even under uncertainty. This approach is based on a novel representation of the storage system in which the energy available is discretized to reflect the different prices at which the storage system was charged. These prices are included as virtual bids in the market clearing, establishing a link between different market intervals. We show that market clearing with virtual linking bids outperforms traditional methods in terms of cost recovery for the market participants and discuss the impacts on social welfare. author: - bibliography: - setup/ref.bib title: Virtual Linking Bids for Market Clearing with Non-Merchant Storage --- energy market design, non-merchant storage, passive storage # Introduction In order to enable the safe operation of future energy systems with a high share of intermittent and stochastic renewable sources of energy production, the share of large-scale energy storage is expected to increase significantly in the coming years [@iea2022world]. A major challenge that comes with this evolution is how to best integrate storage in energy markets. This is emphasized by a recent order by the Federal Energy Regulatory Commission in the United States, urging system operators to implement changes in order to facilitate market participation of electric storage systems [@FERC2018]. As a step towards addressing this challenge, [@Singhal2020Pricing] describes three different ways in which storage systems can be included in the market clearing. In the first and second options, storage systems participate similarly to conventional generators and loads, submitting price and quantity bids. We refer to this setup as *merchant storage*. In the third option, the storage systems' operational constraints are included but they do not need to submit bids, which we call *non-merchant storage*. Market clearing with non-merchant storage can achieve the most economically efficient outcomes and the highest social welfare [@Singhal2020Pricing], as opposed to market clearing with merchant storage [@Sioshansi2014When; @hartwig2016impact]. However, this setup requires more modifications to the current energy markets, which might explain why it has not been considered in detail yet. The main issue to address when clearing a market with non-merchant storage is how to represent the time-linking effect of the storage system. Indeed, while storage systems potentially connect an infinite number of time periods, the market-clearing window is finite. In the literature on non-merchant storage, a very common assumption is that the storage is initially empty and the state of energy at the end of the market-clearing horizon is free[^1] [@Taylor2014Financial; @jiang2023duality; @Munoz2017Financial]. This leads to myopic decision-making regarding the state of energy of the storage at the end of the market interval. These assumptions are not always stated [@weibelzahl2018effects], showing that this problem is disregarded. 
This time-linking property also has to be considered in the pricing problem. By default, the market does not remember the price at which the storage was charged and thus considers any energy available at the beginning of a market horizon as free. Therefore, the resulting market price might be lower than the price the storage system paid to charge. In [@frölke2022efficiency] we proposed a method to reestablish the connection between the different market intervals and retrieve prices that send the proper signal to the storage system. However, it is only valid under the assumption of perfect information, which does not correspond to a realistic setup. The problem of connection between current and future time periods with uncertain realization has also been identified in the case of remuneration of the storage system with financial storage rights [@Bose2019Some]. This situation has also been observed in the case of demand response in [@werner2021pricing], which could be seen as a virtual type of storage in the case of load shifting [@morales2022classifying], and is therefore also covered by this paper. In real-time markets, which include ramping products that can span over several time periods, a common approach is to use a multi-interval market clearing, where only the decisions over the first time periods, termed the *decision horizon*, are implemented, while the decisions over the rest of the horizon are advisory [@chen2022pricing; @zhao2020multi; @hua2019pricing; @cho2022pricing]. It can help to make future-aware decisions regarding the state of energy at the end of the decision horizon. However, even in this framework, the final decision might have an impact if the window is not long enough. Moreover, it also suffers from issues with pricing, even in a deterministic setup, as shown in [@hua2019pricing]. The work in [@hua2019pricing] proposes two methods to reintroduce a link to previous intervals but does not prevent the need for non-transparent uplift payments. In [@chen2022pricing], an argument for non-uniform pricing is made, but the prices they obtain are highly dependent on the forecasts used. Because of these limitations, we focus on single-interval market clearing with uniform pricing. Moreover, this is the usual approach for day-ahead auctions, while multi-interval market-clearing models are used for real-time markets [@cho2022pricing]. Since single-interval markets are still widely in use, it is important to study the integration of non-merchant storage into those. We move the consideration of future intervals to the choice of end-of-horizon storage parameters, which are to be decided in a separate problem[^2]. In this setup, a modeling approach to reflect the transfer of energy between time periods has been introduced in [@zhang2022pricing]. However, the authors do not discuss the valuation of the charged energy in subsequent market intervals. The problem of valuation has been studied in [@werner2021pricing] for demand response, in terms of deviation from a scenario with no flexibility, which cannot always be applied in the case of a storage system. We aim to help address the unanswered question: How to best design a market clearing that includes non-merchant storage and fully exploit the potential of storage systems to increase social welfare? Towards this, we propose a novel market-clearing procedure with non-merchant storage that ensures cost recovery for all the market participants, in particular for the storage system, and even when considering uncertainty. 
The main idea is to remember the prices at which the storage system charged and automatically create virtual linking bids to reflect these values in the next market intervals. First, in Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we present in more detail the challenges of market clearing with non-merchant storage, with the help of an illustrative example. We then introduce market clearing with virtual linking bids in Section [3](#sec:3){reference-type="ref" reference="sec:3"} and show promising results in terms of market properties in Section [4](#sec:4){reference-type="ref" reference="sec:4"}. We conclude in Section [5](#sec:5){reference-type="ref" reference="sec:5"}. # Challenges of Market Clearing with Non-Merchant Storage {#sec:2} ## Initial Model and Assumptions In order to get a better understanding of the problem, we model a stylized version of the storage system using a couple of assumptions. First, we consider that there is only one, non-merchant, storage system in the market. We also assume that there are no losses when charging and discharging, meaning that the storage system is perfectly efficient, and has no leakage over time. We consider a storage system without charging and discharging limits and with no minimum on the state of energy. We thereby focus on the time-linking aspect of the storage system and limit the subtleties that would be introduced by considering each of these aspects. Indeed, the essence of the method presented in this paper would be the same, but small modifications would have to be introduced to deal with these different aspects, which would complicate the understanding of the basics if included here. We furthermore assume that minimum levels for loads and generators are 0, in order to avoid non-linearities. We refer to *split market clearing* as the process of clearing the market interval-by-interval, as opposed to an *ideal market clearing*, where all market intervals would be cleared at once. Here, we consider one market interval. The set $\mathcal{T}$ gathers all the time periods $t$ of the given market interval, where each time period has a duration of $\Delta t$ (in hours). For a day-ahead market for instance, $\mathcal{T}$ would typically correspond to one day and $\Delta t$ would be one hour. 
Under these assumptions, the market clearing with non-merchant storage for a given market interval can be modeled as follows: [\[prob:mc_init\]]{#prob:mc_init label="prob:mc_init"} $$\begin{aligned} \label{eq:mc_init_obj}\max_{\mathbf{x}} \quad & \Delta t \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt} \right )\\ \label{eq:mc_init_energy_bal} \text{s.t.} \quad & \sum_{l \in \mathcal{L}} d_{lt} + p_t^{\mathrm{C}} - \sum_{g \in \mathcal{G}} p_{gt} = 0 , \hspace{0.26\linewidth} \forall t \in \mathcal{T} \\ \label{eq:gen_bound} & 0 \leq p_{gt} \leq \overline{P}_{gt}, \hspace{0.35\linewidth} \forall g \in \mathcal{G}, t \in \mathcal{T} \\ \label{eq:load_bound} & 0 \leq d_{lt} \leq \overline{D}_{lt}, \hspace{0.36\linewidth} \forall l \in \mathcal{L}, t \in \mathcal{T} \\ \label{eq:stor_bound} & 0 \leq e_t \leq \overline{E}, \hspace{0.48\linewidth} \forall t \in \mathcal{T} \\ \label{eq:stor_bal_t} & e_t = e_{t-1} + p_t^{\mathrm{C}} \Delta t , \hspace{0.31\linewidth} \forall t \in \mathcal{T}\setminus \{1\} \\ \label{eq:stor_bal_1} & e_1 = E^{\mathrm{init}} + p_1^{\mathrm{C}} \Delta t .\end{aligned}$$ Here, and in the following, $\mathbf{x}$ is a vector gathering all the decision variables of the model at hand. The variables are the quantity accepted for load $l \in \mathcal{L}$, $d_{lt}$, the quantity accepted for generator $g \in \mathcal{G}$, $p_{gt}$, and for the storage system, the state of energy $e_t$ and the amount charged $p_t^{\mathrm{C}}$. The latter can be negative, indicating a discharge. The objective function [\[eq:mc_init_obj\]](#eq:mc_init_obj){reference-type="eqref" reference="eq:mc_init_obj"} is to maximize the difference between load utilities $U_{lt}$ and generation costs $C_{gt}$ for accepted offers. The maximum generation $\overline{P}_{gt}$ and load $\overline{D}_{lt}$ are enforced by constraints [\[eq:gen_bound\]](#eq:gen_bound){reference-type="eqref" reference="eq:gen_bound"} and [\[eq:load_bound\]](#eq:load_bound){reference-type="eqref" reference="eq:load_bound"} respectively. Constraint [\[eq:stor_bound\]](#eq:stor_bound){reference-type="eqref" reference="eq:stor_bound"} sets the maximum state of energy $\overline{E}$. Constraints [\[eq:stor_bal_t\]](#eq:stor_bal_t){reference-type="eqref" reference="eq:stor_bal_t"} and [\[eq:stor_bal_1\]](#eq:stor_bal_1){reference-type="eqref" reference="eq:stor_bal_1"} update the storage level, starting from the initial level $E^{\mathrm{init}}$. The market price at $t$ is given by $\lambda_t$, the dual variable of [\[eq:mc_init_energy_bal\]](#eq:mc_init_energy_bal){reference-type="eqref" reference="eq:mc_init_energy_bal"}. We make further assumptions regarding the market operation, namely that there is perfect competition and that market participants bid their true costs. To evaluate the efficiency of the market, one of the results to consider is the social welfare, $SW$, which is calculated as the sum of surpluses of loads and generators and the payments to the storage system. With [\[eq:mc_init_energy_bal\]](#eq:mc_init_energy_bal){reference-type="eqref" reference="eq:mc_init_energy_bal"}, it reduces to: $$\small SW = \Delta t \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt}^* - \sum_{g \in \mathcal{G}} C_{gt} p_{gt}^* \right ),$$ where the superscript $^*$ indicates that the optimal value of the considered variables is used. 
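To make the formulation concrete, the following is a minimal sketch of how [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} can be set up with an off-the-shelf LP modeling tool and how the market prices $\lambda_t$ can be read from the duals of [\[eq:mc_init_energy_bal\]](#eq:mc_init_energy_bal){reference-type="eqref" reference="eq:mc_init_energy_bal"}. The use of Python with cvxpy and all numerical data below are our own illustrative assumptions, not the implementation accompanying this paper.

```python
# Illustrative sketch of the market clearing with non-merchant storage:
# three hourly periods, one load, two generators, one storage system.
import cvxpy as cp
import numpy as np

T, L, G = 3, 1, 2
dt = 1.0                                    # duration of a time period, hours
U = np.array([[12.0, 12.0, 12.0]])          # load utilities, EUR/MWh (L x T)
D_max = np.array([[0.0, 3.0, 2.0]])         # load bounds, MW (L x T)
C = np.array([[5.0, 2.0, 6.0],              # generation costs, EUR/MWh (G x T)
              [10.0, 9.0, 9.0]])
P_max = np.full((G, T), 2.0)                # generation bounds, MW (G x T)
E_max, E_init = 2.5, 0.0                    # storage capacity and initial level, MWh

d = cp.Variable((L, T), nonneg=True)        # accepted load
p = cp.Variable((G, T), nonneg=True)        # accepted generation
pc = cp.Variable(T)                         # storage charge (negative = discharge)
e = cp.Variable(T)                          # state of energy

balance = [cp.sum(d[:, t]) + pc[t] - cp.sum(p[:, t]) == 0 for t in range(T)]
cons = balance + [d <= D_max, p <= P_max, e >= 0, e <= E_max,
                  e[0] == E_init + pc[0] * dt] + \
       [e[t] == e[t - 1] + pc[t] * dt for t in range(1, T)]

welfare = dt * cp.sum(cp.multiply(U, d) - cp.multiply(C, p))
prob = cp.Problem(cp.Maximize(welfare), cons)
prob.solve()

# Market prices are the duals of the balance constraints (sign convention
# depends on the solver); with no end-of-horizon condition the storage
# typically returns to an empty state, as discussed next.
prices = [bal.dual_value for bal in balance]
print(prob.value, prices, e.value)
```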
## Approaches to Avoid Myopic Decisions As mentioned in [@Singhal2020Pricing] and [@frölke2022efficiency], the model in [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} is not sufficient to avoid myopic decisions regarding the state of energy at the end of the market interval. Indeed, with this model the storage level will most likely return to zero, and there will not be stored energy available in the first time period of the next market interval. In [@Singhal2020Pricing], several options to avoid myopic decisions are listed. The first one is to impose a final state of energy $E^{\mathrm{end}}$ at the end of the market interval, which is determined by considering information about future market intervals. This is done by adding to [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} the constraint $$\small \label{eq:stor_end} e_{\text{T}} = E^{\mathrm{end}},$$ where $t = \text{T}$ corresponds to the last time period of the market interval. Another option is to steer the level to the desired value by adding a penalty term in the objective function, with a cost $S^{\mathrm{end}}$. In [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"}, the objective function [\[eq:mc_init_obj\]](#eq:mc_init_obj){reference-type="eqref" reference="eq:mc_init_obj"} becomes $$\small \label{eq:mc_penal_obj} \max_{\mathbf{x}} \quad \Delta t \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt} \right ) - S^{\mathrm{end}} e_{\text{T}}.$$ ## Limits on an Illustrative Example {#sec:ex} We consider a storage system that has a capacity $\overline{E}= 2.5$ MWh and which is initially empty. We clear the market for two market intervals (MI) of one time period of one hour each. One load and two generators participate in the market. The related parameters are listed in Table [1](#tab:data){reference-type="ref" reference="tab:data"}. The code for this example, as well as all the other examples presented in this paper, is available online at <https://github.com/eleaprat/MC_non_merchant_stg>. MI $t$ $U_{1t}$ $\overline{D}_{1t}$ $C_{1t}$ $\overline{P}_{1t}$ $C_{2t}$ $\overline{P}_{2t}$ ---- ----- ---------- --------------------- ---------- --------------------- ---------- --------------------- 1 1 12 0 5 2 10 2 2 1 12 3 2 2 9 2 : Load and generators parameters for the illustrative example. Prices are in €/MWh and quantities in MW. [\[tab:data\]]{#tab:data label="tab:data"} We first solve [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}, where $E^{\mathrm{end}}= 1$ MWh for the first MI and $E^{\mathrm{end}}= 0$ MWh for the second MI [^3]. The resulting dispatch and market prices are shown in Table [2](#tab:results){reference-type="ref" reference="tab:results"}. MI $t$ $e_{t}$ $d_{1t}$ $p_{1t}$ $p_{2t}$ Market price -------------------------------------------------------------------------------------------------- ---- ----- --------- ---------- ---------- ---------- -------------- With [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"} 1 1 1 0 1 0 5 2 1 0 3 2 0 \[2,9\] With [\[eq:mc_penal_obj\]](#eq:mc_penal_obj){reference-type="eqref" reference="eq:mc_penal_obj"} 1 1 0 0 0 0 5 2 1 0 3 2 1 9 : Results for the illustrative example. The state of energy is in MWh, the market price in €/MWh and the rest of the variables in MW. 
[\[tab:results\]]{#tab:results label="tab:results"} We can see that on MI 2, there is a price multiplicity, where any price between 2 and 9€/MWh is valid. It can be a problem if the final price chosen is below 5€/MWh because the storage system would then not recover its charging cost. On MI 2, the information that the storage system charged at 5€/MWh is not accessible. As a consequence, the price is chosen without accounting for it. The total social welfare is 27€. On the other hand, we can use the penalty term by solving [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:mc_penal_obj\]](#eq:mc_penal_obj){reference-type="eqref" reference="eq:mc_penal_obj"}. Knowing about the potential price multiplicity on MI 2, we set the penalty $S^{\mathrm{end}}=2$€/MWh on the first MI, to make sure the storage system will then recover its costs if this lower price gets selected on MI 2, and $S^{\mathrm{end}}=0$ on MI 2. The resulting dispatch and market prices are shown in Table [2](#tab:results){reference-type="ref" reference="tab:results"}. In this case, the storage system does not charge at all, and as a consequence, the social welfare is reduced to 23€. Due to this limitation, in the rest of the paper \"split market clearing\" refers to solving [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}. # Market Clearing with Virtual Linking Bids {#sec:3} In this section, we modify the market-clearing model with final storage level, to ensure that the cost at which the storage system charged in one market interval is accounted for in the following market intervals. To do so, we introduce a new representation of the non-merchant storage system. ## Inter- and Intra-Storage In order to save the value at which the storage charges in view of subsequent intervals, we introduce the concepts of net charge and net discharge over a market interval. Those are indicated by the difference between the final and the initial state of energy. If it is positive, it corresponds to a net charge and if negative, it corresponds to a net discharge. We can then consider separately the exchanges of energy within the market interval, which we associate with an *intra-storage*, and the exchanges of energy with past or future intervals, which correspond to *inter-storage*. We can conceptually split the storage, where the quantity charged previously is equivalent to a generator, bidding with the saved price, and the capacity for intra-storage corresponds to the available capacity at the beginning of the market interval. This is illustrated in Figure [1](#fig:stor_val){reference-type="ref" reference="fig:stor_val"}. ![Storage system at the beginning of a market interval, with the previously charged quantities in red and the associated price in parentheses.](figs/tikz/stor_val.tikz){#fig:stor_val} In the case of net charge, this quantity is saved along with the corresponding charging price, which is then used as a virtual bid in future market intervals. In this way, we ensure that the storage will get paid at least what it paid for charging. Examples of net charge and net discharge are shown in Figure [\[fig:net\]](#fig:net){reference-type="ref" reference="fig:net"}. In the example of net charge from Figure [\[fig:ch\]](#fig:ch){reference-type="ref" reference="fig:ch"}, we see that over the market interval, the quantity in the inter-storage is not used.
At the end of the market interval, the quantity net charged is added to the inter-storage. In the example of net discharge from Figure [\[fig:disch\]](#fig:disch){reference-type="ref" reference="fig:disch"}, we can see how intra- and inter-storages can be used independently. Similarly, the level at the end of the market interval determines the inter-storage for the next interval. \ ## Model for Market Clearing with Virtual Linking Bids We now introduce a model for market clearing with virtual linking bids (VLB), based on the model in [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} and including the representation of the storage system with intra- and inter-storage components. [\[prob:mc_vos\]]{#prob:mc_vos label="prob:mc_vos"} $$\begin{aligned} {3} \label{eq:mc_obj} \max_{\mathbf{x}} \quad & \Delta t \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt} - \sum_{v \in \mathcal{V}} S_v p_{vt}^{\mathrm{D,e}} \right )\\ \label{eq:energy_bal} \text{s.t.} \quad & \sum_{l \in \mathcal{L}} d_{lt} + p_t^{\mathrm{C,a}} - \sum_{g \in \mathcal{G}} p_{gt} - \sum_{v \in \mathcal{V}} p_{vt}^{\mathrm{D,e}} = 0 , \hspace{0.06\linewidth} \forall t \in \mathcal{T} \\ \nonumber & \eqref{eq:gen_bound} - \eqref{eq:load_bound}\\ \label{eq:stor_bal_t_intra} & e^{\mathrm{a}}_t = e^{\mathrm{a}}_{t-1} + p_t^{\mathrm{C,a}} \Delta t, \hspace{0.28\linewidth} \forall t \in \mathcal{T}\setminus \{1\}\\ \label{eq:stor_bal_1_intra} & e^{\mathrm{a}}_1 = p_1^{\mathrm{C,a}} \Delta t, \\ %\label{eq:stor_intra_lower} & e\up{a}_t \geq - \sum_{v \in \mathcal{V}} E\up{init}_v - e\up{a}_{t-1}, \quad \forall t \in \mathcal{T}\setminus \{1\}\\ %\label{eq:stor_intra_lower_init} & e\up{a}_1 \geq - \sum_{v \in \mathcal{V}} E\up{init}_v, \quad \forall t \in \mathcal{T}\\ \label{eq:stor_intra_lower_end} & e^{\mathrm{a}}_\text{T} \geq 0,\\ \label{eq:stor_bal_t_inter} & e^{\mathrm{e}}_{vt} = e^{\mathrm{e}}_{v,t-1} - p_{vt}^{\mathrm{D,e}} \Delta t, \hspace{0.13\linewidth} \forall v \in \mathcal{V}, \, t \in \mathcal{T}\setminus \{1\}\\ \label{eq:stor_bal_1_inter} & e^{\mathrm{e}}_{v,1} = E^{\mathrm{init}}_v - p_{v,1}^{\mathrm{D,e}} \Delta t, \hspace{0.33\linewidth} \forall v \in \mathcal{V}\\ \label{eq:stor_bound_all} & 0 \leq e^{\mathrm{a}}_t + \sum_{v \in \mathcal{V}} e^{\mathrm{e}}_{vt} \leq \overline{E}, \hspace{0.34\linewidth} \forall t \in \mathcal{T}\\ \label{eq:stor_pos} & e^{\mathrm{e}}_{vt}, \, p_{vt}^{\mathrm{D,e}} \geq 0, \hspace{0.35\linewidth} \forall v \in \mathcal{V}, \, \forall t \in \mathcal{T}\\ \label{eq:stor_end_all} & e^{\mathrm{a}}_\text{T} + \sum_{v \in \mathcal{V}} e^{\mathrm{e}}_{v\text{T}} \geq E^{\mathrm{end}} . %\label{eq:stor_end_all} & e\up{a}_\text{T} + \sum_{v \in \mathcal{V}} e\up{e}_{v\text{T}} = E\up{end} , \quad \text{if } E\up{end} \geq \sum_{v \in \mathcal{V}} E\up{init}_v.\end{aligned}$$ Here again, $\mathbf{x}$ is a vector gathering all the decision variables of the model. There are no changes regarding loads and generators and their decision variables. We have new variables for tracking the storage system. For the intra-storage system, $e^{\mathrm{a}}_t$ gives the state of energy and $p_t^{\mathrm{C,a}}$ the quantity charged (negative for a discharge). For the inter-storage system, we introduce $v \in \mathcal{V}$, which corresponds to the different values saved in the storage system, similarly to what was shown in Figure [1](#fig:stor_val){reference-type="ref" reference="fig:stor_val"}.
The corresponding variables are $e^{\mathrm{e}}_{vt}$, the state of energy for value $v$ and $p_{vt}^{\mathrm{D,e}}$, the quantity discharged from inter-storage with value $v$. The objective function [\[eq:mc_obj\]](#eq:mc_obj){reference-type="eqref" reference="eq:mc_obj"} is modified to include the artificial bids from the inter-storage, with prices $S_v$. Here and in the balance constraint [\[eq:energy_bal\]](#eq:energy_bal){reference-type="eqref" reference="eq:energy_bal"}, the inter-storage appears similarly to conventional generators. Constraints [\[eq:stor_bal_t\_intra\]](#eq:stor_bal_t_intra){reference-type="eqref" reference="eq:stor_bal_t_intra"} and [\[eq:stor_bal_1\_intra\]](#eq:stor_bal_1_intra){reference-type="eqref" reference="eq:stor_bal_1_intra"} give the update of the state of energy for the intra-storage. Note that the intra-storage is by definition empty at the beginning of the market interval. However, it can occasionally take negative values, which corresponds to temporarily using some of the capacity that is reserved by the inter-storage. This is necessary to ensure that the arbitrage opportunities within a given market interval are not limited due to the intra-storage being initially empty, while there is, in practice, some energy to discharge. To limit these operations to arbitrage within this market interval only, [\[eq:stor_intra_lower_end\]](#eq:stor_intra_lower_end){reference-type="eqref" reference="eq:stor_intra_lower_end"} specifies that the final state of energy of the intra-storage cannot be negative. Constraints [\[eq:stor_bal_t\_inter\]](#eq:stor_bal_t_inter){reference-type="eqref" reference="eq:stor_bal_t_inter"} and [\[eq:stor_bal_1\_inter\]](#eq:stor_bal_1_inter){reference-type="eqref" reference="eq:stor_bal_1_inter"} give the update of the state of energy for each inter-storage. The initial energy available at each value is given by $E^{\mathrm{init}}_v$. Constraint [\[eq:stor_bound_all\]](#eq:stor_bound_all){reference-type="eqref" reference="eq:stor_bound_all"} enforces the storage capacity. The states of energy and the quantity discharged from each inter-storage are positive with [\[eq:stor_pos\]](#eq:stor_pos){reference-type="eqref" reference="eq:stor_pos"}. Constraint [\[eq:stor_end_all\]](#eq:stor_end_all){reference-type="eqref" reference="eq:stor_end_all"} is the updated version of constraint [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}. With the greater or equal sign, the artificial bid in the objective function guides the final level of the storage. This formulation thus combines some of the aspects of [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"} and [\[eq:mc_penal_obj\]](#eq:mc_penal_obj){reference-type="eqref" reference="eq:mc_penal_obj"}. There is no charge of the inter-storage during the market clearing. In the case of net charge, the inter-storage is modified outside of the market clearing, as shown in Section [3.4](#sec:update){reference-type="ref" reference="sec:update"}. ## Simultaneous Charge and Discharge Note that in the model in [\[prob:mc_vos\]](#prob:mc_vos){reference-type="eqref" reference="prob:mc_vos"}, $p_t^{\mathrm{C,a}}$ and $p_{vt}^{\mathrm{D,e}}$ can be positive at the same time. However, this separation of the storage system is only a modeling artifice and has no physical meaning. It is the sum of the two that corresponds to the instruction of charge or discharge for the storage system. 
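Before turning to this issue in more detail, a short sketch may help to see how the intra- and inter-storage components enter the clearing problem. As above, the use of Python with cvxpy and the data are our own illustrative assumptions (here mimicking the second market interval of the example in Section [2.3](#sec:ex){reference-type="ref" reference="sec:ex"}, with a single saved value $v$ at 5€/MWh), not the implementation accompanying this paper.

```python
# Illustrative sketch of the clearing with virtual linking bids: one market
# interval of a single hour, one load, two generators, and one inter-storage
# entry v with energy E_init_v previously charged at the saved price S_v.
import cvxpy as cp
import numpy as np

T, dt = 1, 1.0
E_max, E_end = 2.5, 0.0
E_init_v = np.array([1.0])                       # MWh already in the inter-storage
S_v = np.array([5.0])                            # saved charging price, EUR/MWh
U = np.array([[12.0]]); D_max = np.array([[3.0]])               # load data (1 x T)
C = np.array([[2.0], [9.0]]); P_max = np.array([[2.0], [2.0]])  # generator data (2 x T)

d = cp.Variable((1, T), nonneg=True)             # accepted load
p = cp.Variable((2, T), nonneg=True)             # accepted generation
pca = cp.Variable(T)                             # intra-storage charge (may be < 0)
ea = cp.Variable(T)                              # intra-storage state of energy
pde = cp.Variable((1, T), nonneg=True)           # inter-storage discharge per value v
ee = cp.Variable((1, T), nonneg=True)            # inter-storage state of energy

balance = [cp.sum(d[:, t]) + pca[t] - cp.sum(p[:, t]) - cp.sum(pde[:, t]) == 0
           for t in range(T)]
cons = balance + [d <= D_max, p <= P_max,
                  ea[0] == pca[0] * dt,          # intra-storage starts empty
                  ee[:, 0] == E_init_v - pde[:, 0] * dt,
                  ea[T - 1] >= 0,                # no net borrowing at the end
                  ea + cp.sum(ee, axis=0) >= 0,
                  ea + cp.sum(ee, axis=0) <= E_max,
                  ea[T - 1] + cp.sum(ee[:, T - 1]) >= E_end] + \
       [ea[t] == ea[t - 1] + pca[t] * dt for t in range(1, T)] + \
       [ee[:, t] == ee[:, t - 1] - pde[:, t] * dt for t in range(1, T)]

obj = dt * cp.sum(cp.multiply(U, d) - cp.multiply(C, p)) - dt * cp.sum(S_v @ pde)
prob = cp.Problem(cp.Maximize(obj), cons)
prob.solve()
# The virtual bid keeps the price above the saved value: the dual of the balance
# constraint (up to the solver's sign convention) lies in [5, 9] here.
print(prob.value, [b.dual_value for b in balance])
```

In this small instance the inter-storage behaves exactly like a generator offering 1 MWh at 5€/MWh, which is the intended effect of the virtual linking bid.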
Allowing the intra-storage system to take negative values generates a multiplicity of optimal solutions, for which the sum of intra- and inter-storage is the same. It is possible to select among all those solutions one for which there is no simultaneous charge and discharge, by minimizing the total quantity exchanged or by introducing binary variables to choose between charge and discharge. We formalize this result with a proposition. **Proposition 1**. *For the market clearing in [\[prob:mc_vos\]](#prob:mc_vos){reference-type="eqref" reference="prob:mc_vos"}, and under the assumptions that the value of stored energy is strictly positive, i.e., $S_v > 0$, $\forall v \in \mathcal{V}$, and that a feasible solution exists, there will always be an optimal solution where the intra-storage does not charge when the inter-storage discharges.* *Proof.* First, we give an equivalent formulation of the model where the state of charge is replaced, using that $e^{\mathrm{a}}_t = \sum_{i=1}^{t} p_i^{\mathrm{C,a}}$ and $e^{\mathrm{e}}_{vt} = E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e}}$. We also notice that in [\[eq:stor_pos\]](#eq:stor_pos){reference-type="eqref" reference="eq:stor_pos"}, $e^{\mathrm{e}}_{vt}\geq 0$ can be equivalently replaced by $e^{\mathrm{e}}_{vT}\geq 0$, since the update quantities $p_{vt}^{\mathrm{D,e}}$ are non-negative. Lastly, we assume that the time periods of the market have a duration of one hour in order to lighten the notations with $\Delta t = 1$. The same proof can be made for any value of $\Delta t$. We obtain the following: [\[prob:mc_vos_proof\]]{#prob:mc_vos_proof label="prob:mc_vos_proof"} $$\begin{aligned} {3} \label{eq:mc_obj_pf} \max_{\mathbf{x}} \quad & \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt} - \sum_{v \in \mathcal{V}} S_v p_{vt}^{\mathrm{D,e}} \right )\\ \label{eq:energy_bal_pf} \text{s.t.} \quad & \sum_{l \in \mathcal{L}} d_{lt} + p_t^{\mathrm{C,a}} - \sum_{g \in \mathcal{G}} p_{gt} - \sum_{v \in \mathcal{V}} p_{vt}^{\mathrm{D,e}} = 0 , \hspace{0.06\linewidth} \forall t \in \mathcal{T} \\ \label{eq:gen_bound_pf} & 0 \leq p_{gt} \leq \overline{P}_{gt}, \hspace{0.35\linewidth} \forall g \in \mathcal{G}, t \in \mathcal{T} \\ \label{eq:load_bound_pf} & 0 \leq d_{lt} \leq \overline{D}_{lt}, \hspace{0.36\linewidth} \forall l \in \mathcal{L}, t \in \mathcal{T} \\ \label{eq:stor_intra_lower_end_pf} & \sum_{i=1}^{T} p_i^{\mathrm{C,a}} \geq 0,\\ \label{eq:stor_bound_all_pf} & 0 \leq \sum_{i=1}^{t} p_i^{\mathrm{C,a}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e}} \right ) \leq \overline{E}, \hspace{0.001\linewidth} \forall t \in \mathcal{T}\\ \label{eq:stor_p_pf} & p_{vt}^{\mathrm{D,e}} \geq 0, \hspace{0.41\linewidth} \forall v \in \mathcal{V}, \, \forall t \in \mathcal{T}\\ \label{eq:stor_e_pf} & E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e}} \geq 0, \hspace{0.36\linewidth} \forall v \in \mathcal{V}\\ \label{eq:stor_end_all_pf} & \sum_{i=1}^{T} p_i^{\mathrm{C,a}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e}} \right ) \geq E^{\mathrm{end}} . \end{aligned}$$ Let's consider an optimal solution to [\[prob:mc_vos_proof\]](#prob:mc_vos_proof){reference-type="eqref" reference="prob:mc_vos_proof"}. We denote it with the superscript $^*$, for example, $p_t^{\mathrm{C,a*}}$. Under the assumption that the feasible set is not empty, such a solution exists. 
If it is such that the intra-storage never charges when the inter-storage discharges, we are done. We consider the case where there is at least one time period $\tau \in \mathcal{T}$ such that the intra-storage charges when the inter-storage discharges, meaning that $p_\tau^{\mathrm{C,a*}} > 0$ and $\sum_{v \in \mathcal{V}} p_{v\tau}^{\mathrm{D,e*}} > 0$. Let's call $q_v^\tau$ the quantity discharged from the inter storage with value $v$ at $\tau$, i.e. $q_v^\tau = p_{v\tau}^{\mathrm{D,e*}}$. We also call $q^\tau$ the total quantity discharged from inter storage at $\tau$, $q^\tau = \sum_{v \in \mathcal{V}} q_v^\tau$. We now identify another time period $\kappa \in \mathcal{T}$ such that $p_\kappa^{\mathrm{C,a*}} < 0$. #### Existence of $\kappa$ We first prove that $\kappa$ exists. Let's suppose that it does not, i.e. we suppose that $p_t^{\mathrm{C,a*}} \geq 0$, $\forall t \in \mathcal{T}$. We have $\sum_{i=1}^{T} p_i^{\mathrm{C,a*}} > 0$ since $p_\tau^{\mathrm{C,a*}} > 0$. We build a new solution that we identify with the superscript $'$. It is identical to the previous solution, except for $p_\tau^{\mathrm{C,a'}} = p_\tau^{\mathrm{C,a*}} - q^{\tau'}$ and $p_{v\tau}^{\mathrm{D,e'}} = p_{v\tau}^{\mathrm{D,e*}} - q_v^{\tau'}$, $\forall v \in \mathcal{V}$, with $q^{\tau'} = \min \{q^{\tau}, \sum_{i=1}^{T} p_i^{\mathrm{C,a*}}\}$ and $q_v^{\tau'}$ are such that $q^{\tau'}= \sum_{v \in \mathcal{V}} q_v^{\tau'}$ and $0 \leq q_v^{\tau'} \leq q_v^{\tau}$, $\forall v \in \mathcal{V}$. It is possible to find such $q_v^{\tau'}$, since $\sum_{i=1}^{T} p_i^{\mathrm{C,a*}} > 0$ and $q^{\tau} > 0$, so $0 < q^{\tau'} \leq q^{\tau}$, meaning that $0 < \sum_{v \in \mathcal{V}} q_v^{\tau'} \leq \sum_{v \in \mathcal{V}} q_v^{\tau}$. We check that this new solution is feasible. For [\[eq:energy_bal_pf\]](#eq:energy_bal_pf){reference-type="eqref" reference="eq:energy_bal_pf"}, at $t=\tau$, we now have [\[pb:energy_bal_new\]]{#pb:energy_bal_new label="pb:energy_bal_new"} $$\begin{aligned} &\sum_{l \in \mathcal{L}} d_{l\tau}' + p_\tau^{\mathrm{C,a'}} - \sum_{g \in \mathcal{G}} p_{g\tau}' - \sum_{v \in \mathcal{V}} p_{v\tau}^{\mathrm{D,e'}} = \\ = &\sum_{l \in \mathcal{L}} d_{l\tau}^* + p_\tau^{\mathrm{C,a*}} - q^{\tau'} - \sum_{g \in \mathcal{G}} p_{g\tau}^* - \sum_{v \in \mathcal{V}}(p_{v{\tau}}^{\mathrm{D,e*}} - q_v^{\tau'}) \\ = &\sum_{l \in \mathcal{L}} d_{l\tau}^* + p_\tau^{\mathrm{C,a*}} - \sum_{g \in \mathcal{G}} p_{g\tau}^* - \sum_{v \in \mathcal{V}}p_{v\tau}^{\mathrm{D,e*}} - q^{\tau'} + \sum_{v \in \mathcal{V}} q_v^{\tau'}\\ = & \sum_{l \in \mathcal{L}} d_{l\tau}^* + p_\tau^{\mathrm{C,a*}} - \sum_{g \in \mathcal{G}} p_{g\tau}^* - \sum_{v \in \mathcal{V}}p_{v\tau}^{\mathrm{D,e*}}, \end{aligned}$$ so the constraint [\[eq:energy_bal_pf\]](#eq:energy_bal_pf){reference-type="eqref" reference="eq:energy_bal_pf"} is satisfied. Constraints [\[eq:gen_bound_pf\]](#eq:gen_bound_pf){reference-type="eqref" reference="eq:gen_bound_pf"} and [\[eq:load_bound_pf\]](#eq:load_bound_pf){reference-type="eqref" reference="eq:load_bound_pf"} still hold, since the solution is not modified for these variables. 
For constraint [\[eq:stor_intra_lower_end_pf\]](#eq:stor_intra_lower_end_pf){reference-type="eqref" reference="eq:stor_intra_lower_end_pf"}, we have $$\begin{aligned} \sum_{i=1}^{T} p_i^{\mathrm{C,a'}} & = \sum_{i=1}^{\tau -1} p_i^{\mathrm{C,a'}} + p_\tau^{\mathrm{C,a'}} + \sum_{i=\tau + 1}^{T} p_i^{\mathrm{C,a'}} \\ & = \sum_{i=1}^{\tau -1} p_i^{\mathrm{C,a*}} + p_\tau^{\mathrm{C,a*}} - q^{\tau'} + \sum_{i=\tau + 1}^{T} p_i^{\mathrm{C,a*}} \\ & = \sum_{i=1}^{T} p_i^{\mathrm{C,a*}} - q^{\tau'}, \end{aligned}$$ which we know is positive, since $q^{\tau'} = \min \{q^{\tau}, \sum_{i=1}^{T} p_i^{\mathrm{C,a*}}\}$. For constraint [\[eq:stor_bound_all_pf\]](#eq:stor_bound_all_pf){reference-type="eqref" reference="eq:stor_bound_all_pf"}, there are no changes for $t < \tau$. For $t \geq \tau$, we have [\[pb:stor_bound_all_new\]]{#pb:stor_bound_all_new label="pb:stor_bound_all_new"} $$\begin{aligned} & \sum_{i=1}^{t} p_i^{\mathrm{C,a'}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e'}} \right ) = \\ \nonumber = & \sum_{i=1}^{\tau - 1} p_i^{\mathrm{C,a'}} + p_\tau^{\mathrm{C,a'}} + \sum_{i=\tau + 1}^{t} p_i^{\mathrm{C,a'}} \\ & + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{\tau - 1} p_{vi}^{\mathrm{D,e'}} - p_{v\tau}^{\mathrm{D,e'}} - \sum_{i=\tau + 1}^{t} p_{vi}^{\mathrm{D,e'}} \right ) \\ \nonumber = & \sum_{i=1}^{\tau - 1} p_i^{\mathrm{C,a*}} +p_\tau^{\mathrm{C,a*}} - q^{\tau'} + \sum_{i=\tau + 1}^{t} p_i^{\mathrm{C,a*}} \\ & + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{\tau - 1} p_{vi}^{\mathrm{D,e*}} - p_{v\tau}^{\mathrm{D,e*}} + q_v^{\tau'} - \sum_{i=\tau + 1}^{t} p_{vi}^{\mathrm{D,e*}} \right )\\ = & \sum_{i=1}^{t} p_i^{\mathrm{C,a*}} - q^{\tau'} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e*}} \right ) + \sum_{v \in \mathcal{V}} q_v^{\tau'}\\ = & \sum_{i=1}^{t} p_i^{\mathrm{C,a*}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e*}} \right ), \end{aligned}$$ so the constraint still stands. For [\[eq:stor_p\_pf\]](#eq:stor_p_pf){reference-type="eqref" reference="eq:stor_p_pf"}, we only have to check at $t = \tau$. We defined $q_v^{\tau'} \leq q_v^{\tau}$, to make sure that this constraint is satisfied. Constraint [\[eq:stor_e\_pf\]](#eq:stor_e_pf){reference-type="eqref" reference="eq:stor_e_pf"} still holds, since we discharge less. We have $$\begin{aligned} E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e'}} & = E^{\mathrm{init}}_v - \sum_{i=1}^{\tau - 1} p_{vi}^{\mathrm{D,e'}} - p_{v\tau}^{\mathrm{D,e'}} - \sum_{i=\tau + 1}^{T} p_{vi}^{\mathrm{D,e'}} \\ & = E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e*}} + q_v^{\tau'} \\ & \geq E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e*}}. \end{aligned}$$ For [\[eq:stor_end_all_pf\]](#eq:stor_end_all_pf){reference-type="eqref" reference="eq:stor_end_all_pf"}, using the previous calculations, we have that $$\small \sum_{i=1}^{T} p_i^{\mathrm{C,a'}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e'}} \right ) = \sum_{i=1}^{T} p_i^{\mathrm{C,a*}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e*}} \right ),$$ so this constraint is also satisfied. We can conclude that our new solution is feasible.
The value of the objective function for this solution is $$\begin{aligned} & \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt}^{'} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt}^{'} - \sum_{v \in \mathcal{V}} S_v p_{vt}^{\mathrm{D,e'}} \right ) \\ %& = \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt}^{*} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt}^{*} \right ) - \sum_{v \in \mathcal{V}} S_v (\sum_{t \in \mathcal{T}} p_{vt}\up{D,e'}) \\ = & \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt}^{*} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt}^{*} \right ) - \sum_{v \in \mathcal{V}} S_v (\sum_{i=1}^{T} p_{vi}^{\mathrm{D,e*}} - q_v^{\tau'}) \\ = & \sum_{t \in \mathcal{T}} \left ( \sum_{l \in \mathcal{L}} U_{lt} d_{lt}^{*} - \sum_{g \in \mathcal{G}} C_{gt} p_{gt}^{*} - \sum_{v \in \mathcal{V}} S_v p_{vt}^{\mathrm{D,e*}} \right ) + \sum_{v \in \mathcal{V}} S_v q_v^{\tau'}. \end{aligned}$$ We have $\sum_{v \in \mathcal{V}} S_v q_v^{\tau'} > 0$, under the assumption that $S_v > 0$, $\forall v \in \mathcal{V}$ and since $q_v^{\tau'} \geq 0$, $\forall v \in \mathcal{V}$ and $\sum_{v \in \mathcal{V}} q_v^{\tau'} > 0$, meaning that there is at least one $v$ for which $q_v^{\tau'} > 0$. This is greater than the value of the objective function for the $^*$ solution, which is optimal, indicating a contradiction. *Note 1:* This also excludes the case where $|\mathcal{T}|=1$, meaning that it cannot be optimal to have $p_\tau^{\mathrm{C,a*}} > 0$ and $p_\tau^{\mathrm{D,e*}} > 0$ in this case. *Note 2:* $\sum_{i=1}^{T} p_i^{\mathrm{C,a*}} > 0$ also in case of net charge, so there will not be simultaneous charge of intra-storage and discharge of inter-storage then. #### Building a new solution We now know that there exists another time period $\kappa \in \mathcal{T}$, with $\kappa \neq \tau$, such that $p_\kappa^{\mathrm{C,a*}} < 0$. We build a new solution to our problem, which we identify with the superscript $'$. It is identical to the previous solution, except for $t=\tau$ and $t=\kappa$. We want to discharge the inter-storage less at $t=\tau$ and more at $t=\kappa$, and in turn charge the intra-storage less at $t=\tau$ and discharge it less at $t=\kappa$, in order to keep the same storage level when both are summed. For $t=\tau$, $p_\tau^{\mathrm{C,a'}} = p_\tau^{\mathrm{C,a*}} - q^{\tau'}$ and $p_{v\tau}^{\mathrm{D,e'}} = p_{v\tau}^{\mathrm{D,e*}} - q_v^{\tau'}$, $\forall v \in \mathcal{V}$, with $q^{\tau'} = \min \{q^{\tau}, -p_\kappa^{\mathrm{C,a*}}\}$ and $q_v^{\tau'}$ are such that $q^{\tau'}= \sum_{v \in \mathcal{V}} q_v^{\tau'}$ and $0 \leq q_v^{\tau'} \leq q_v^{\tau}$, $\forall v \in \mathcal{V}$. It is possible to find such $q_v^{\tau'}$, since $-p_\kappa^{\mathrm{C,a*}} > 0$ and $q^{\tau} > 0$, so $0 < q^{\tau'} \leq q^{\tau}$, meaning that $0 < \sum_{v \in \mathcal{V}} q_v^{\tau'} \leq \sum_{v \in \mathcal{V}} q_v^{\tau}$. For $t=\kappa$, $p_\kappa^{\mathrm{C,a'}} = p_\kappa^{\mathrm{C,a*}} + q^{\tau'}$ and $p_{v\kappa}^{\mathrm{D,e'}} = p_{v\kappa}^{\mathrm{D,e*}} + q_v^{\tau'}$, $\forall v \in \mathcal{V}$. Let's check that this solution is feasible. For [\[eq:energy_bal_pf\]](#eq:energy_bal_pf){reference-type="eqref" reference="eq:energy_bal_pf"}, at $t=\tau$, we have the same as in [\[pb:energy_bal_new\]](#pb:energy_bal_new){reference-type="eqref" reference="pb:energy_bal_new"}, which is feasible.
At $t=\kappa$, $$\begin{aligned} & \sum_{l \in \mathcal{L}} d_{l\kappa}' + p_\kappa^{\mathrm{C,a'}} - \sum_{g \in \mathcal{G}} p_{g\kappa}' - \sum_{v \in \mathcal{V}} p_{v\kappa}^{\mathrm{D,e'}} \\ = & \sum_{l \in \mathcal{L}} d_{l\kappa}^* + p_\kappa^{\mathrm{C,a*}} + q^{\tau'} - \sum_{g \in \mathcal{G}} p_{g\kappa}^* - \sum_{v \in \mathcal{V}}(p_{v{\kappa}}^{\mathrm{D,e*}} + q_v^{\tau'}) \\ = & \sum_{l \in \mathcal{L}} d_{l\kappa}^* + p_\kappa^{\mathrm{C,a*}} - \sum_{g \in \mathcal{G}} p_{g\kappa}^* - \sum_{v \in \mathcal{V}}p_{v\kappa}^{\mathrm{D,e*}} + q^{\tau'} - \sum_{v \in \mathcal{V}} q_v^{\tau'}\\ = & \sum_{l \in \mathcal{L}} d_{l\kappa}^* + p_\kappa^{\mathrm{C,a*}} - \sum_{g \in \mathcal{G}} p_{g\kappa}^* - \sum_{v \in \mathcal{V}}p_{v\kappa}^{\mathrm{D,e*}}, \end{aligned}$$ so the constraint [\[eq:energy_bal_pf\]](#eq:energy_bal_pf){reference-type="eqref" reference="eq:energy_bal_pf"} is satisfied. Constraints [\[eq:gen_bound_pf\]](#eq:gen_bound_pf){reference-type="eqref" reference="eq:gen_bound_pf"} and [\[eq:load_bound_pf\]](#eq:load_bound_pf){reference-type="eqref" reference="eq:load_bound_pf"} still hold, since the solution is not modified for these variables. Let's call $t_1 = \min\{\tau,\kappa\}$ and $t_2 = \max\{\tau,\kappa\}$. The total charged quantity in the intra-storage is unchanged: [\[pb:pc_tot\]]{#pb:pc_tot label="pb:pc_tot"} $$\begin{aligned} \nonumber \sum_{i=1}^{T} p_i^{\mathrm{C,a'}} & = \sum_{i=1}^{t_1-1} p_i^{\mathrm{C,a'}} + \sum_{i=t_1+1}^{t_2-1} p_i^{\mathrm{C,a'}} +\sum_{i=t_2+1}^{T} p_i^{\mathrm{C,a'}} \\ & + p_\tau^{\mathrm{C,a'}} + p_\kappa^{\mathrm{C,a'}} \\ \nonumber & = \sum_{i=1}^{t_1-1} p_i^{\mathrm{C,a*}} + \sum_{i=t_1+1}^{t_2-1} p_i^{\mathrm{C,a*}} +\sum_{i=t_2+1}^{T} p_i^{\mathrm{C,a*}} \\ & + p_\tau^{\mathrm{C,a*}} - q^{\tau'} + p_\kappa^{\mathrm{C,a*}} + q^{\tau'}\\ & = \sum_{i=1}^{T} p_i^{\mathrm{C,a*}}. \end{aligned}$$ It is also the case for the inter-storage: $$\begin{aligned} \nonumber \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e'}} & = \sum_{i=1}^{t_1-1} p_{vi}^{\mathrm{D,e'}} + \sum_{i=t_1+1}^{t_2-1} p_{vi}^{\mathrm{D,e'}} +\sum_{i=t_2+1}^{T} p_{vi}^{\mathrm{D,e'}} \\ & + p_{v\tau}^{\mathrm{D,e'}} + p_{v\kappa}^{\mathrm{D,e'}} \\ \nonumber = & \sum_{i=1}^{t_1-1} p_{vi}^{\mathrm{D,e*}} + \sum_{i=t_1+1}^{t_2-1} p_{vi}^{\mathrm{D,e*}} +\sum_{i=t_2+1}^{T} p_{vi}^{\mathrm{D,e*}} \\ & + p_{v\tau}^{\mathrm{D,e*}} - q_v^{\tau'} + p_{v\kappa}^{\mathrm{D,e*}} + q_v^{\tau'}\\ = & \sum_{i=1}^{T} p_{vi}^{\mathrm{D,e*}}. \end{aligned}$$ As a consequence, constraints [\[eq:stor_intra_lower_end_pf\]](#eq:stor_intra_lower_end_pf){reference-type="eqref" reference="eq:stor_intra_lower_end_pf"}, [\[eq:stor_e\_pf\]](#eq:stor_e_pf){reference-type="eqref" reference="eq:stor_e_pf"} and [\[eq:stor_end_all_pf\]](#eq:stor_end_all_pf){reference-type="eqref" reference="eq:stor_end_all_pf"} are satisfied by the new solution. For constraint [\[eq:stor_bound_all_pf\]](#eq:stor_bound_all_pf){reference-type="eqref" reference="eq:stor_bound_all_pf"}, there are no changes for $t < t_1$. For $t_1 \leq t < t_2$. If $t_1=\tau$, we have the same as in [\[pb:stor_bound_all_new\]](#pb:stor_bound_all_new){reference-type="eqref" reference="pb:stor_bound_all_new"}. 
If $t_1=\kappa$, $$\begin{aligned} & \sum_{i=1}^{t} p_i^{\mathrm{C,a'}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e'}} \right ) \\ \nonumber = & \sum_{i=1}^{\kappa - 1} p_i^{\mathrm{C,a'}} + p_\kappa^{\mathrm{C,a'}} + \sum_{i=\kappa + 1}^{t} p_i^{\mathrm{C,a'}} \\ & + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{\kappa - 1} p_{vi}^{\mathrm{D,e'}} - p_{v\kappa}^{\mathrm{D,e'}} - \sum_{i=\kappa + 1}^{t} p_{vi}^{\mathrm{D,e'}} \right ) \\ \nonumber = & \sum_{i=1}^{\kappa - 1} p_i^{\mathrm{C,a*}} +p_\kappa^{\mathrm{C,a*}} + q^{\tau'} + \sum_{i=\kappa + 1}^{t} p_i^{\mathrm{C,a*}} \\ & + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{\kappa - 1} p_{vi}^{\mathrm{D,e*}} - p_{v\kappa}^{\mathrm{D,e*}} - q_v^{\tau'} - \sum_{i=\kappa + 1}^{t} p_{vi}^{\mathrm{D,e*}} \right )\\ & = \sum_{i=1}^{t} p_i^{\mathrm{C,a*}} + q^{\tau'} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e*}} \right ) - \sum_{v \in \mathcal{V}} q_v^{\tau'}\\ & = \sum_{i=1}^{t} p_i^{\mathrm{C,a*}} + \sum_{v \in \mathcal{V}} \left ( E^{\mathrm{init}}_v - \sum_{i=1}^{t} p_{vi}^{\mathrm{D,e*}} \right ). \end{aligned}$$ Then, it is easy to see that the constraint also stands for $t \geq t_2$. For [\[eq:stor_p\_pf\]](#eq:stor_p_pf){reference-type="eqref" reference="eq:stor_p_pf"}, at $t = \tau$, the constraint is satisfied since $q_v^{\tau'} \leq q_v^{\tau}$. At $t=\kappa$ it is also satisfied since $q_v^{\tau'} \geq 0$. All constraints are satisfied, so this new solution is feasible. The value of the objective function for this solution is the same since $d_{lt}$ and $p_{gt}$ are the same and the total energy discharged for the inter-storage is the same. We have thus found another solution in which the quantity discharged in $\tau$ is smaller. We can repeat the same procedure until there is no quantity discharged at $\tau$. #### Recursion We can repeat the following until there is no simultaneous charge of the intra-storage and discharge of the inter-storage: 1. We consider $\tau$, for which $p_\tau^{\mathrm{C,a*}} > 0$ and $\sum_{v \in \mathcal{V}} p_{v\tau}^{\mathrm{D,e*}} > 0$. We have $q_v^\tau = p_{v\tau}^{\mathrm{D,e*}}$, $\forall v \in \mathcal{V}$, and $q^\tau = \sum_{v \in \mathcal{V}} q_v^\tau$. 2. We identify $\kappa$ such that $p_\kappa^{\mathrm{C,a*}} < 0$. 3. We modify the solution. We identify $q^{\tau'} = \min \{q^{\tau}, -p_\kappa^{\mathrm{C,a*}}\}$ and $q_v^{\tau'}$, such that $q^{\tau'}= \sum_{v \in \mathcal{V}} q_v^{\tau'}$ and $0 \leq q_v^{\tau'} \leq q_v^{\tau}$, $\forall v \in \mathcal{V}$. For $t=\tau$, $p_\tau^{\mathrm{C,a'}} = p_\tau^{\mathrm{C,a*}} - q^{\tau'}$ and $p_{v\tau}^{\mathrm{D,e'}} = p_{v\tau}^{\mathrm{D,e*}} - q_v^{\tau'}$. For $t=\kappa$, $p_\kappa^{\mathrm{C,a'}} = p_\kappa^{\mathrm{C,a*}} + q^{\tau'}$ and $p_{v\kappa}^{\mathrm{D,e'}} = p_{v\kappa}^{\mathrm{D,e*}} + q_v^{\tau'}$, $\forall v \in \mathcal{V}$. 4. We update $q^{\mathrm{\tau, new}} = q^{\mathrm{\tau, old}} - q^{\tau'}$. 5. If $q^{\mathrm{\tau, new}} > 0$, we go back to step 2 and identify a new $\kappa$. If $q^{\mathrm{\tau, new}} = 0$, we go to step 6. 6. Check if there is another $t$ for which $p_t^{\mathrm{C,a*}} > 0$ and $\sum_{v \in \mathcal{V}} p_{vt}^{\mathrm{D,e*}} > 0$. If so, it is the new $\tau$ and we go back to 1. We have thus shown that there is always an optimal solution in which there is no simultaneous charge and discharge, under the assumption that $S_v > 0$, $\forall v \in \mathcal{V}$. 
◻ The assumption that $S_v > 0$, $\forall v \in \mathcal{V}$ is reasonable since there is no interest in charging the storage system when prices are non-positive. Moreover, charging at a value of 0 is equivalent to not valuing the stored energy, in which case we can equivalently use the original formulation, in which there is no separation between intra- and inter-storage and thus no simultaneous charge and discharge of intra- and inter-storage. ## Update of the Inter-Storage {#sec:update} In the case of net discharge, the inter-storage can be easily updated by subtracting the sum of $p_{vt}^{\mathrm{D,e}}$ from $E^{\mathrm{init}}_v$. If, in doing so, the inter-storage for value $v$ becomes empty, the corresponding index is dropped from $\mathcal{V}$. In the case of net charge, the quantity net charged is added to the inter-storage, with the corresponding charging price. However, since the storage might have been charging at different prices during the market interval, the question of which price to save arises. To address this, we introduce the following optimization model [\[prob:val_up\]]{#prob:val_up label="prob:val_up"} $$\begin{aligned} \label{eq:val_obj}\min_{\mathbf{p^{\mathrm{C,loc}}},\mathbf{p^{\mathrm{C,mov}}}} \quad & - \Delta t \sum_{t \in \mathcal{T}} \lambda^*_{t} p_t^{\mathrm{C,loc}} \\ \label{eq:rev_pos} \text{s.t.} \quad & - \sum_{t \in \mathcal{T}} \lambda^*_{t} p_t^{\mathrm{C,loc}} \geq 0 \\ \label{eq:bal_loc} & \sum_{t \in \mathcal{T}} p_t^{\mathrm{C,loc}} = 0\\ \label{eq:total} & p_t^{\mathrm{C,loc}} + p_t^{\mathrm{C,mov}} = p_t^{\mathrm{C,a*}} , \, \hspace{0.16\linewidth} \forall t \in \mathcal{T}\\ \label{eq:disch} & p_t^{\mathrm{C,loc}} = p_t^{\mathrm{C,a*}} , \, \hspace{0.14\linewidth} \forall t \in \mathcal{T},\, p_t^{\mathrm{C,a*}} \leq 0\\ \label{eq:loc_pos} & p_t^{\mathrm{C,loc}} \geq 0 , \, \hspace{0.21\linewidth} \forall t \in \mathcal{T},\, p_t^{\mathrm{C,a*}} > 0\\ \label{eq:mov_pos} & p_t^{\mathrm{C,mov}} \geq 0 , \, \hspace{0.34\linewidth} \forall t \in \mathcal{T}.\end{aligned}$$ The parameters in this model are obtained from the solution of the market clearing, and indicated with $^*$. The idea is to split the quantity charged in the intra-storage into a local and a moved quantity, $p_t^{\mathrm{C,loc}}$ and $p_t^{\mathrm{C,mov}}$. Then, $p_t^{\mathrm{C,mov}}$ is added to the inter-storage, with the market price at $t$. The total moved quantity must correspond to the net charge, which is equivalent to setting the total local quantity to zero (sum of total local charge and discharge), in [\[eq:bal_loc\]](#eq:bal_loc){reference-type="eqref" reference="eq:bal_loc"}. The inter-storage is only updated by quantities charged, which is ensured by [\[eq:mov_pos\]](#eq:mov_pos){reference-type="eqref" reference="eq:mov_pos"}: the quantity moved can only be non-negative. The sum of local and moved quantities has to correspond to the quantity charged in the intra-storage, with [\[eq:total\]](#eq:total){reference-type="eqref" reference="eq:total"}. In case of discharge, constraint [\[eq:disch\]](#eq:disch){reference-type="eqref" reference="eq:disch"} applies, ensuring that the moved quantity is equal to zero. In case of charge, constraint [\[eq:loc_pos\]](#eq:loc_pos){reference-type="eqref" reference="eq:loc_pos"} prevents discharge for the local quantity.
The objective [\[eq:val_obj\]](#eq:val_obj){reference-type="eqref" reference="eq:val_obj"} is to minimize the storage profit over this interval, while still ensuring that it is positive with [\[eq:rev_pos\]](#eq:rev_pos){reference-type="eqref" reference="eq:rev_pos"}. We are minimizing in order to avoid situations where the storage system would be setting prices higher than necessary. # Study of Market Properties {#sec:4} We study cost recovery and social welfare for the market clearing with VLB, comparing to the split market clearing, [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}. We discuss the impact of imperfect foresight on these properties. Finally, we discuss a limitation of both models, opening directions for future research. ## Cost Recovery There is cost recovery for a market participant, if their surplus is always non-negative. We can easily show that this is the case for the generators and loads[^4]. For a storage system, it is not relevant to ensure that the surplus is always non-negative. For example, the storage system could be only paying to charge in one market interval, to later discharge at a higher price and make a profit in a subsequent market interval. In this example, the surplus of the storage in the first market interval would be negative. Hence, we redefine cost recovery for a non-merchant storage system. **Definition 1** (Cycle and cost recovery for a non-merchant storage). *We define a cycle as a group of consecutive market intervals for which the storage system is initially empty and finally returns to this same state. We say that there is cost recovery for a non-merchant storage system if its surplus over a cycle is non-negative.* For a non-merchant storage system, and for a fair comparison, it only makes sense to look at the surplus over a cycle. Otherwise, we need to know the value of the stored energy to include it, which is actually the complex problem that we are trying to solve here. We argue that cost recovery stands by design of the market clearing and of the update of inter-storage from [\[prob:val_up\]](#prob:val_up){reference-type="eqref" reference="prob:val_up"}, under the assumption that looking far enough into the future, the inter-storage will eventually be empty. In this model, constraint [\[eq:rev_pos\]](#eq:rev_pos){reference-type="eqref" reference="eq:rev_pos"} ensures that the profit of the storage system is non-negative for the quantity exchanged over the time interval. Regarding the quantity moved to the inter-storage, the charging price is saved and later used as a bid. Since discharge is never imposed, this ensures that for this quantity the price received will be at least equal to the charging price. Note that this result is not based on the assumption of perfect foresight and is thus valid in uncertain settings. The assumption that the storage will eventually be empty is mild. It would be challenged in the case that a decision was made to charge the storage in a day with very high prices that never occur again. A complete day with very high prices is very unlikely to happen without being foreseen. And if foreseen, this situation would illustrate very poor decision-making on the final level of the storage system. We saw in the example of Section [2.3](#sec:ex){reference-type="ref" reference="sec:ex"} that cost recovery is not ensured for the storage system in the split market clearing. 
In that example, the storage system starts empty and finishes empty, so the two market intervals considered are a cycle. However, the surplus of the storage in this cycle can be as low as -3€. We run that same example with the market clearing with VLB, to show that this situation does not arise anymore. The results are shown in Table [3](#tab:res_prices){reference-type="ref" reference="tab:res_prices"}. At the end of the first MI, the inter-storage is charged with a price of 5€/MWh and it is discharged in the next hour. The storage is now marginal and gets paid at least 5€/MWh, so that the minimum surplus of the storage system over this cycle is 0€, and cost recovery stands. MI $t$ $d_{1t}$ $p_{1t}$ $p_{2t}$ $p_t^{\mathrm{C}}$ $p_t^{\mathrm{D,e}}$ $e^{\mathrm{a}}_t$ $e^{\mathrm{e}}_t$ $\lambda_t$ ---- ----- ---------- ---------- ---------- -------------------- ---------------------- -------------------- -------------------- ------------- 1 1 0 1 0 1 0 1 0 5 1 2 3 2 0 0 1 0 0 \[5,9\] : Results for the example of Section [2.3](#sec:ex){reference-type="ref" reference="sec:ex"}, for the market clearing with VLB [\[tab:res_prices\]]{#tab:res_prices label="tab:res_prices"} ## Social Welfare {#sec:sw} If the level of the storage system at the end of each interval of [\[prob:mc_vos\]](#prob:mc_vos){reference-type="eqref" reference="prob:mc_vos"} is set to the value that is optimal for the ideal market clearing, the storage system will follow the same trajectory, which will result in the same social welfare. Indeed, since this level is obtained in a way that the storage system recovers its costs in the ideal clearing, the inter-storage will discharge the same quantity. We now discuss different scenarios that can occur in case of imperfect information and error in setting the level of the storage system at the end of each market interval, and illustrate them with simple examples. We look at the social welfare for the market clearing with VLB, in comparison to the one obtained for the split market clearing using the same values for the final levels $E^{\mathrm{end}}$. Since social welfare is the sum of the surpluses of all the market participants, its calculation has to be carried out over a cycle, as defined in Definition [Definition 1](#def){reference-type="ref" reference="def"}. We compare both methods on a cycle for the market-clearing with VLB [^5]. We also compare to the outcome of the ideal market clearing. In both illustrative examples, the storage has the same capacity $\overline{E}= 2.5$ MWh and it is initially empty, each MI consists of one time period of one hour, and there is one load and one generator participating in the market. The rest of the data and results are given in Tables [\[tab:res_sw\]](#tab:res_sw){reference-type="ref" reference="tab:res_sw"} and [\[tab:res_sw_low\]](#tab:res_sw_low){reference-type="ref" reference="tab:res_sw_low"}, including the number of MIs. The results in column "$e_t$\" for the split market correspond to the values set for $E^{\mathrm{end}}$, which are also used for VLB. The first scenario corresponds to the case where discharge is imposed in a period of low prices, also limiting the availability of storage for future periods with low generation. In this case, the social welfare can be higher for [\[prob:mc_vos\]](#prob:mc_vos){reference-type="eqref" reference="prob:mc_vos"} than for [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}. 
We show this in the example of Table [\[tab:res_sw\]](#tab:res_sw){reference-type="ref" reference="tab:res_sw"}. The total social welfare for these three MIs is -1€ for the split market clearing and 16€ for the market clearing with VLB, compared to 21€ for the ideal clearing. Due to the error in predicting the ideal final levels for the storage, the social welfare is lower than in the ideal clearing. The difference in social welfare between split and VLB is due to the fact that in the split market clearing, the storage discharges on the second MI, while it does not in the market clearing with VLB. Indeed, the prices are too low for the storage to recover the costs from charging on the first MI. Rather, it discharges on the third MI, thereby allowing for the load to be completely supplied, which increases social welfare. We also observe that the formulation with VLB reestablishes cost recovery for the storage system. [\[tab:res_sw\]]{#tab:res_sw label="tab:res_sw"} The second scenario corresponds to the case where due to an error when choosing the final level, charge is imposed in a period of high prices, which do not occur again soon enough, preventing the complete use of the storage capacity in the following market intervals. Then, the social welfare can be lower for [\[prob:mc_vos\]](#prob:mc_vos){reference-type="eqref" reference="prob:mc_vos"} than for [\[prob:mc_init\]](#prob:mc_init){reference-type="eqref" reference="prob:mc_init"} with [\[eq:stor_end\]](#eq:stor_end){reference-type="eqref" reference="eq:stor_end"}. We show this in the example of Table [\[tab:res_sw_low\]](#tab:res_sw_low){reference-type="ref" reference="tab:res_sw_low"}. The total social welfare for these three MIs is 842.5€ for the split market clearing, and 772.5€ for the market clearing with VLB, compared to 855€ for the ideal clearing. The difference in social welfare between split and VLB is due to the fact that the inter-storage does not discharge on the second and fourth MIs. Indeed, the prices are too low for the storage to recover the costs from charging on the first MI. More expensive generators are used to supply the load on these MIs, thereby decreasing social welfare. [\[tab:res_sw_low\]]{#tab:res_sw_low label="tab:res_sw_low"} However, this can be limited by introducing a discount on stored value, which is shown in the last column of Table [\[tab:res_sw_low\]](#tab:res_sw_low){reference-type="ref" reference="tab:res_sw_low"}. In this case, we decreased by 25% the value of the energy stored in the inter-storage after each MI, except after the MI in which it is stored. The social welfare in this case is 807.5€, as the inter-storage can now be used on the fourth MI. ## Limitation of Split Models {#sec:limit} We have seen that our new method performs better than the traditional split market clearing in terms of cost recovery for the market participants. However, not all the limitations of the split market clearing are overcome, which we see here in a last illustrative example. We use again a storage system with capacity $\overline{E}= 2.5$ MWh and initially empty. The rest of the data and the results are shown in Table [\[tab:res_eff\]](#tab:res_eff){reference-type="ref" reference="tab:res_eff"}. There is one load and one generator. The market is cleared for two MIs of three hours each and the final storage level is set to its optimal value, obtained when clearing for the two MIs together. 
Note that the final level at the end of the second MI is equal to 2.5 MWh because of subsequent MIs, which are not represented here. We are in the perfect foresight set-up and we can see that all quantities agree. As a consequence, social welfare is the same for all these types of market clearing. However, we see that the prices in the last hour of the first MI can potentially be higher than in the ideal clearing, for both other approaches. This difference is due to future market intervals not being considered for pricing in these cases. In the ideal clearing, the price for that hour is at most 4€/MWh, which corresponds to the utility of the load in the first hour of the next MI, which the other approaches do not take into consideration. We argue for a limited impact of future uncertain information on the formation of current prices, to ensure transparency. However, we see here that it comes with a cost that somebody will ultimately have to pay. The study of this trade-off is a topic for future research. Data Results ideal Results split Results VLB ---- ----- --------- --------- --------- --------- --------------- --------- ---------------------- ------- ------------- -------------------- ------- ------------- ---------------------- ---------------------- -------------------- -------------------- ------------- MI $t$ $D_{t}$ $U_{t}$ $P_{t}$ $C_{t}$ $d_{t}$ $p_{t}$ $p^{\mathrm{C}}_{t}$ $e_t$ $\lambda_t$ $p^{\mathrm{C}}_t$ $e_t$ $\lambda_t$ $p^{\mathrm{C,a}}_t$ $p^{\mathrm{D,e}}_t$ $e^{\mathrm{a}}_t$ $e^{\mathrm{e}}_t$ $\lambda_t$ 1 1 0 0 1 0 0 1 1 1 2 1 1 2 1 0 1 0 2 1 2 1 5 2.5 2 1 2.5 1.5 2.5 2 1.5 2.5 2 1.5 0 2.5 0 2 1 3 2 6 0 0 2 0 -2 0.5 \[3,4\] -2 0.5 \[3,**6**\] -2 0 0.5 0 \[3,**6**\] 2 1 2.5 4 2 3 2.5 2 -0.5 0 \[3,4\] -0.5 0 \[3,4\] -0.5 0 -0.5 0.5 \[3,4\] 2 2 0 0 1 2 0 1 1 1 \[3,7\] 1 1 \[3,7\] 1 0 0.5 0.5 \[3,7\] 2 3 4 7 5.5 3 4 5.5 1.5 2.5 \[3,7\] 1.5 2.5 \[3,7\] 1.5 0 2 0.5 \[3,7\] [\[tab:res_eff\]]{#tab:res_eff label="tab:res_eff"} # Discussion and Conclusion {#sec:5} We introduced a novel procedure for clearing an energy market with non-merchant storage, using virtual linking bids. This is based on an artificial representation of the storage system, dividing it into a component for local arbitrage, within this market interval, and a component for arbitrage between market intervals. We showed that it outperforms traditional approaches when it comes to cost recovery. Indeed, it ensures cost recovery for the storage system, even over multiple market intervals, which common split market clearing does not. More importantly, we showed that this property also stands when forecast errors are made when calculating the final state of energy of the storage system, which corresponds to a realistic setup. It still remains to study how this final state of energy should be determined. This should also come with a study of the impact of the storage level on pricing. In particular, a critical next step would be to investigate the potential impact of a strategic choice of this level on prices and social welfare, and how it compares to having merchant storage. We also discuss the impacts of uncertainty on social welfare compared to traditional approaches. In the case of forcing the discharge of the storage system at a disadvantageous price, we show that our approach comes closer to closing the gap with an ideal oracle market clearing. 
We also showed that this method does not solve the problem of accounting for prices in future market intervals, thereby leading to higher prices compared to an ideal market clearing. This problem needs to be studied further. Another limitation of the method is that a stored quantity might be kept in storage for too long because it is assigned a very high value. We showed that a discount factor could be applied to the value of stored energy over time to avoid this. We used illustrative small-scale examples to give better intuition on how the introduced method behaves compared to a traditional split-horizon market clearing. Since this method does not introduce non-linearities, the computational complexity is similar to that of traditional approaches. It only involves solving one more linear program for the update of storage values, which should also scale well. This procedure was introduced on an idealized representation of a storage system, and the promising results are a good motivation for extending it to more general storage system models and to multiple storage systems. [^1]: This is equivalent to setting it to zero in the absence of negative prices. [^2]: This other problem is out of scope for this paper. [^3]: These are found to maximize the total social welfare when clearing the two MIs together. We refer the interested reader to [@frölke2022efficiency] for more information. [^4]: We prove cost recovery for loads and generators by formulating their individual surplus maximization problem and its dual problem and using the strong duality theorem. [^5]: A cycle for the split market clearing is not necessarily a cycle for the market clearing with VLB, but the converse is true. If $E^{\mathrm{end}} = 0$, the final level with VLB might be higher because of [\[eq:stor_end_all\]](#eq:stor_end_all){reference-type="eqref" reference="eq:stor_end_all"}, while if the final level with VLB is equal to zero, it means that $E^{\mathrm{end}} = 0$ and the final level for the split market is also equal to zero.
arxiv_math
{ "id": "2309.14787", "title": "Virtual Linking Bids for Market Clearing with Non-Merchant Storage", "authors": "El\\'ea Prat, Jonas Bodulv Broge, Richard Lusby", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We consider an incompressible fluid with axial symmetry without swirl, assuming initial data such that the initial vorticity is very concentrated inside $N$ small disjoint rings of thickness $\varepsilon$, each one of vorticity mass and main radius of order $|\log\varepsilon|$. When $\varepsilon\to 0$, we show that, at least for small but positive times, the motion of the rings converges to a dynamical system first introduced in [@MN]. In the special case of two vortex rings with large enough main radii, the result is improved, reaching longer times, in such a way as to cover the case of several overtakings between the rings, thus providing a mathematically rigorous derivation of the leapfrogging phenomenon. address: - | Dipartimento di Matematica\ Sapienza Università di Roma\ P.le Aldo Moro 5, 00185 Roma\ Italy - | Dipartimento di Matematica\ Sapienza Università di Roma\ P.le Aldo Moro 5, 00185 Roma\ Italy - | Dipartimento di Matematica\ Sapienza Università di Roma\ P.le Aldo Moro 5, 00185 Roma\ Italy\ and\ International Research Center M&MOCS\ Università di L'Aquila\ Palazzo Caetani\ 04012 Cisterna di Latina (LT)\ Italy author: - Paolo Buttà - Guido Cavallaro - Carlo Marchioro title: Leapfrogging vortex rings as scaling limit of Euler Equations --- # Introduction {#sec:1} We study the time evolution of an incompressible non-viscous fluid in the whole space ${\mathbb R}^3$, in the case of axial symmetry without swirl, when the initial vorticity is supported and sharply concentrated in $N$ annuli of large radius (i.e., distance from the symmetry axis) with leading term $\alpha|\log\varepsilon|$ ($\alpha>0$ fixed), thickness of order $\varepsilon$, vorticity mass of order $|\log\varepsilon|$, and finite distance from each other. We are interested in considering the time evolution of such a configuration in the limit $\varepsilon\to 0$. In a previous paper of some years ago, [@MN], the same problem was investigated, showing that for $N=1$ the vorticity remains concentrated for $t>0$ in an annulus with the same distance from the symmetry axis and thickness $\rho(\varepsilon)$ (with $\rho(\varepsilon)\to 0$ as $\varepsilon\to 0$), moving with a constant speed along the symmetry axis. The case in which many coaxial vortex rings interact with each other remained an open problem, and it was conjectured in [@MN] that in the limit $\varepsilon\to 0$ the motion of the rings (parameterized through suitable cylindrical coordinates) converges to the following dynamical system, which is the composition of the well-known point vortex system with a drift term along the symmetry axis, $$\label{ode-intro} \dot \zeta^i = -\frac{1}{2\pi} \sum_{j\ne i} a_j \frac{(\zeta^i-\zeta^j)^\perp}{|\zeta^i-\zeta^j|^2} + \frac{a_i}{4\pi\alpha} \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad i=1,\ldots,N,$$ where $\zeta^i= (\zeta^i_1, \zeta^i_2)\in {\mathbb R}^2$, $v^\perp := (v_2,-v_1)$, and the real quantity $a_i$ is related to the vorticity mass of the $i$-th ring. This dynamical system accounts for an old observation regarding the so-called *leapfrogging* phenomenon, which goes back to the work of Helmholtz [@H; @H1], who describes such a configuration, in the case of two rings only, with the following words [@H1 p. 510]: *We can now see generally how two ring-formed vortex-filaments having the same axis would mutually affect each other, since each, in addition to its proper motion, has that of its elements of fluid as produced by the other.
If they have the same direction of rotation, they travel in the same direction; the foremost widens and travels more slowly, the pursuer shrinks and travels faster till finally, if their velocities are not too different, it overtakes the first and penetrates it. Then the same game goes on in the opposite order, so that the rings pass through each other alternately.* Indeed, as discussed in Section [7](#sec:7){reference-type="ref" reference="sec:7"}, Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"} admits solutions such that the relative position $\zeta^1-\zeta^2$ performs a periodic motion, which corresponds to the leapfrogging motion of the rings. Even though this phenomenon has been known since Helmholtz, addressed in many papers, such as [@aiki; @bori; @buffoni; @da; @dyson; @dyson1; @hicks; @klein; @lamb], and also studied from a numerical point of view [@chen; @lim; @naka; @riley], its mathematical justification, as a rigorous derivation from the Euler equations, has only recently received a partial understanding; see [@DDMW] for a special solution exhibiting this feature (with a different scaling with respect to the present one), and [@JS; @js2021] in the context of the Gross-Pitaevskii equation. The dynamics of several coaxial vortex rings at a distance of order $|\log\varepsilon|$ from the symmetry axis appears to represent a critical regime, in which the two terms on the right-hand side of Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"} are of the same order (the first one is the interaction with the other vortex rings, the second one is the self-induced field). The motion exhibits features different from the cases in which the distance is of order $|\log\varepsilon|^k$, $0\le k<1$, or the distance is of order $|\log\varepsilon|^k$, $k>1$ (or larger). Note that when $0\le k<1$, in order for the self-induced field not to diverge, the vorticity mass of each ring has to be chosen of order $|\log\varepsilon|^{2k-1}$. In this case, for $k=0$, the self-induced field acting on each ring is dominant with respect to its interaction with the others, and the rings perform rectilinear motions with constant speed, see [@BCM; @BuM2], while for $0<k<1$ the dynamics has not been studied explicitly but we believe that the behavior is analogous to the case $k=0$. When the distance is of order $|\log\varepsilon|^k$, $k>1$ (or larger), the interaction of each vortex ring with the others is dominant with respect to the self-induced field and the motion is described by the point vortex system [@CS; @CavMarJMP; @Mar99] (explicitly studied for $k>2$ in [@CavMarJMP], while for $1<k\le2$ we expect the same behavior). We remark that for $k>1$ the vorticity mass of each ring has to be chosen of order $|\log\varepsilon|^k$ to obtain a non-trivial behavior. In the present paper we analyze the critical regime and prove that the aforementioned conjecture of [@MN] holds true. More precisely, we show that in the limit $\varepsilon\to 0$ (when the vorticity becomes very large) and for quite general initial data the motion of the rings is governed by Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"}, at least for short but positive times. The proof relies on some new ideas, merged with methods of previous papers [@MN; @BCM00; @BuM2], which allow us to overcome delicate technical points that had seemed hard to solve (since the problem was stated, in the present framework, in [@MN]). Going into specifics, we detail below the main points.
($i$) The fact that the energy of each vortex ring almost conserves its initial value, at leading order, allows us to conclude that the vorticity mass of each vortex ring is concentrated inside a torus (whose cross section has a diameter vanishing with $\varepsilon$) during the time evolution. This result could at first sight seem non-obvious, considering that a plain estimate of the time derivative of the energy of each vortex ring implies *a priori* a variation of the same order as the initial energy. This is the content of Section [3](#sec:3){reference-type="ref" reference="sec:3"}. ($ii$) The iterative method, used to prove that each vortex ring has compact support at positive times (which is an essential tool to control the interaction among the vortex rings), needs to be split into two separate procedures, due to the fact that the a priori estimate of a fundamental quantity, the moment of inertia, is not good enough for the iterative method to work in its standard form (as used, for example, in [@BuM2]). This is done in Section [4](#sec:4){reference-type="ref" reference="sec:4"}, while the subsequent support property is proved in Section [5](#sec:5){reference-type="ref" reference="sec:5"}. Once concentration and localization properties of the vortex rings are guaranteed, the proof of the main theorem on the convergence to the system Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"} can be easily concluded. This is the content of Section [6](#sec:6){reference-type="ref" reference="sec:6"}. The last section of the paper, Section [7](#sec:7){reference-type="ref" reference="sec:7"}, concerns the leapfrogging phenomenon, treated in the special case of two rings discussed by Helmholtz. The convergence result, as stated in general and applied in this context for an appropriate choice of the initial data, guarantees at most one overtaking between the rings within the time interval of convergence. This is not completely satisfactory since, in accordance with experimental and numerical observations, several overtakings can take place before the rings dissolve and lose their shape [@YM; @AN]. Fortunately, in the special case of two vortex rings with large enough main radii, we can extend the time of convergence in order to cover several crossings between the rings. More precisely, it is possible to repeatedly apply the construction of items (i)-(ii) by suitably increasing the parameter $\alpha$ (i.e., the distance from the symmetry axis), thus reaching any arbitrarily fixed time. This is not really surprising, since as $\alpha$ increases the system approaches (formally) a planar fluid, Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"} gets closer to the standard point vortex model, and, in the planar case, convergence to the point vortex model occurs globally in time (even up to times diverging with $\varepsilon$, see [@BuM1]). # Notation and main result {#sec:2} The Euler equations governing the time evolution in three dimensions of an incompressible inviscid fluid of unit density with velocity ${\boldsymbol u} = {\boldsymbol u}({\boldsymbol \xi},t)$ decaying at infinity take the form, $$\begin{gathered} \label{vorteq} \partial_t {\boldsymbol \omega }+ ({\boldsymbol u}\cdot \nabla) {\boldsymbol \omega }= ({\boldsymbol \omega}\cdot \nabla) {\boldsymbol u}\,, \\ \label{u-vort} {\boldsymbol u}({\boldsymbol \xi},t) = - \frac{1}{4\pi} \int_{{\mathbb R}^3}\!
\mathrm{d}{\boldsymbol \eta }\, \frac{({\boldsymbol \xi}-{\boldsymbol \eta}) \wedge {\boldsymbol \omega}({\boldsymbol \eta},t)}{|{\boldsymbol \xi}-{\boldsymbol \eta}|^3} \,,\end{gathered}$$ where ${\boldsymbol \omega }= {\boldsymbol \omega}({\boldsymbol \xi},t) = \nabla \wedge {\boldsymbol u}({\boldsymbol \xi},t)$ is the vorticity, ${\boldsymbol \xi }= (\xi_1,\xi_2,\xi_3)$ denotes a point in ${\mathbb R}^3$, and $t\in {\mathbb R}_+$ is the time. Note that Eq. [\[u-vort\]](#u-vort){reference-type="eqref" reference="u-vort"} clearly implies the incompressibility condition $\nabla\cdot {\boldsymbol u}=0$. Our analysis is restricted to the special class of axisymmetric (without swirl) solutions to Eqs. [\[vorteq\]](#vorteq){reference-type="eqref" reference="vorteq"}, [\[u-vort\]](#u-vort){reference-type="eqref" reference="u-vort"}. We recall that a vector field ${\boldsymbol F}$ is called axisymmetric without swirl if, denoting by the $(z,r,\theta)$ the cylindrical coordinates in a suitable frame, the cylindrical components $(F_z, F_r, F_\theta)$ of ${\boldsymbol F}$ are such that $F_\theta=0$ and both $F_z$ and $F_r$ are independent of $\theta$. The axisymmetry is preserved by the time evolution. Furthermore, when restricted to axisymmetric velocity fields ${\boldsymbol u}({\boldsymbol \xi},t) = (u_z(z,r,t), u_r(z,r,t), 0)$, the vorticity is $$\label{omega} {\boldsymbol \omega }= (0,0,\omega_\theta) = (0,0,\partial_z u_r - \partial_r u_z)$$ and, denoting henceforth $\omega_\theta$ by $\Omega$, Eq. [\[vorteq\]](#vorteq){reference-type="eqref" reference="vorteq"} reduces to $$\label{omeq} \partial_t \Omega + (u_z\partial_z + u_r\partial_r) \Omega - \frac{u_r\Omega}r = 0 \,,$$ Finally, from Eq. [\[u-vort\]](#u-vort){reference-type="eqref" reference="u-vort"}, $u_z = u_z(z,r,t)$ and $u_r=u_r(z,r,t)$ are given by $$\begin{aligned} \label{uz} u_z & = - \frac{1}{2\pi} \int\! \mathrm{d}z' \!\int_0^\infty\! r' \mathrm{d}r' \! \int_0^\pi\!\mathrm{d}\theta \, \frac{\Omega(z',r',t) (r\cos\theta - r')}{[(z-z')^2 + (r-r')^2 + 2rr'(1-\cos\theta)]^{3/2}} \,, \\ \label{ur} u_r & = \frac{1}{2\pi} \int\! \mathrm{d}z' \!\int_0^\infty\! r' \mathrm{d}r' \! \int_0^\pi\!\mathrm{d}\theta \, \frac{\Omega(z',r',t) (z - z')}{[(z-z')^2 + (r-r')^2 + 2rr'(1-\cos\theta)]^{3/2}} \,.\end{aligned}$$ Otherwise stated, the axisymmetric solutions to the Euler equations are given by the solutions to Eqs. [\[omeq\]](#omeq){reference-type="eqref" reference="omeq"}, [\[uz\]](#uz){reference-type="eqref" reference="uz"}, and [\[ur\]](#ur){reference-type="eqref" reference="ur"}. We also notice that the incompressibility condition reduces to $$\label{divu0} \partial_z(ru_z) + \partial_r(ru_r) = 0\,.$$ Since we are interested also to non-smooth initial data, we shall consider weak formulations of the equations of motion. To this end, we notice that Eq. [\[omeq\]](#omeq){reference-type="eqref" reference="omeq"} expresses that the quantity $\Omega/r$ is conserved along the flow generated by the velocity field, i.e., $$\label{cons-omr} \frac{\Omega(z(t),r(t),t)}{r(t)} = \frac{\Omega(z(0),r(0),0)}{r(0)} \,,$$ with $(z(t),r(t))$ solution to $$\label{eqchar} \dot z(t) = u_z(z(t),r(t),t) \,, \qquad \dot r(t) = u_r(z(t),r(t),t) \,.$$ Eqs. 
[\[uz\]](#uz){reference-type="eqref" reference="uz"}, [\[ur\]](#ur){reference-type="eqref" reference="ur"}, [\[cons-omr\]](#cons-omr){reference-type="eqref" reference="cons-omr"}, and [\[eqchar\]](#eqchar){reference-type="eqref" reference="eqchar"} can be taken as a weak formulation of the Euler equations in the framework of axisymmetric solutions. An equivalent weak formulation is also obtained from Eq. [\[omeq\]](#omeq){reference-type="eqref" reference="omeq"} by a formal integration by parts, $$\label{weqO} \frac{\mathrm{d}}{\mathrm{d}t} \Omega_t[f] = \Omega_t[u_z\partial_z f + u_r\partial_r f + \partial_t f] \,,$$ where $f = f(z,r,t)$ is any bounded smooth test function and $$\Omega_t[f] := \int\! \mathrm{d}z \!\int_0^\infty\! \mathrm{d}r \, \Omega(z,r,t) f(z,r,t) \,.$$ The existence of a global solution for both the Euler and Navier-Stokes equations was established many years ago [@Lad; @UY]; see also [@Fe-Sv; @Gal13; @Gal12] for more recent results. Global-in-time existence and uniqueness of a weak solution to the related Cauchy problem holds when the initial vorticity is a bounded function with compact support contained in the open half-plane $\Pi:=\lbrace(z,r):r>0\rbrace$, see, e.g., [@MaP94 Page 91] or [@CS Appendix]. In particular, the support of the vorticity remains in the open half-plane $\Pi$ at any time. We choose initial data representing a system of $N$ concentrated vortex rings, each one with a cross-section of radius not larger than $\varepsilon$ and main radius (i.e., distance from the symmetry axis) of order $|\log\varepsilon|$, where $\varepsilon\in (0,1)$ is a small parameter. More precisely, denoting by $\Sigma(\zeta|\rho)$ the disk of center $\zeta$ and radius $\rho$, we fix $\alpha>0$, $N$ distinct points $\zeta^i \in {\mathbb R}^2$, $i=1,\ldots,N$, and $\varepsilon_0$ small enough to have $$\begin{gathered} \overline{\Sigma((0,r_\varepsilon) + \zeta^i|\varepsilon)} \subset\Pi\quad \forall\, i \quad \forall\, \varepsilon\in (0,\varepsilon_0)\,, \\ \Sigma((0,r_\varepsilon) + \zeta^i|\varepsilon)\cap \Sigma((0,r_\varepsilon) + \zeta^j|\varepsilon)=\emptyset\quad \forall\, i\ne j \quad \forall\, \varepsilon\in (0,\varepsilon_0)\,,\end{gathered}$$ where $$\label{reps} r_\varepsilon= \alpha|\log\varepsilon|\,.$$ We then choose $$\label{inO} \Omega_\varepsilon(z,r,0) = \sum_i \Omega_{i,\varepsilon}(z,r,0) \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,$$ where each $\Omega_{i,\varepsilon}(z,r,0)$ is a non-negative or non-positive function such that $$\mathop{\rm supp}\nolimits\, \Omega_{i,\varepsilon}(\cdot,0) \subset \Sigma((0,r_\varepsilon) + \zeta^i|\varepsilon)\quad \forall\,\varepsilon\in (0,\varepsilon_0)\,.$$ We also assume that there are $N$ real parameters $a_1,\ldots, a_N$, called the *vortex intensities*, such that $$\int\!\mathrm{d}z \!\int_0^\infty\!\mathrm{d}r\, \Omega_{i,\varepsilon}(z,r,0) = a_i \quad \forall\, \varepsilon\in (0,\varepsilon_0)\,,$$ which means that the vorticity mass of each ring is proportional to its mean radius, i.e., of order $r_\varepsilon$. Finally, to avoid too large vorticity concentrations, our last assumption is the existence of a constant $M>0$ such that $$|\Omega_{i,\varepsilon}(z,r,0)| \le \frac{M}{\varepsilon^2} \quad \forall\, \varepsilon\in (0,\varepsilon_0)\,.$$ In view of Eq. [\[cons-omr\]](#cons-omr){reference-type="eqref" reference="cons-omr"}, the decomposition Eq.
[\[inO\]](#inO){reference-type="eqref" reference="inO"} extends to positive time setting $$\Omega_\varepsilon(z,r,t) = \sum_i \Omega_{i,\varepsilon}(z,r,t) \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,$$ with $\Omega_{i,\varepsilon}(x,t)$ the time evolution of the $i$-th vortex ring, $$\Omega_{i,\varepsilon}(z(t),r(t),t) := \frac{r(t)}{r(0)} \Omega_{i,\varepsilon}(z(0),r(0),0)\,.$$ Since the parameter $\varepsilon$ will eventually go to zero, it is convenient to introduce the new variables $x = (x_1,x_2)$ defined by $$z = x_1\,, \quad r = r_\varepsilon+ x_2\,.$$ It is also useful to extend the vorticity expressed in these new variables to a function on the whole plane. More precisely, we define $$\label{Oo} \omega_\varepsilon(x,t) = \begin{cases} \Omega_\varepsilon(x_1,r_\varepsilon+x_2,t) & \text{if } x_2 > -r_\varepsilon\,, \\ 0 &\text{if } x_2 \le -r_\varepsilon\,, \end{cases}$$ and the same position defines $\omega_{i,\varepsilon}(x,t)$ provided $\Omega_\varepsilon$ is replaced by $\Omega_{i,\varepsilon}$ in the right-hand side. In particular, with a slight abuse of notation, we shall write $$\begin{gathered} \int\! \mathrm{d}z \!\int_0^\infty\! \mathrm{d}r \, \Omega_\varepsilon(z,r,t) G(z,r) = \int\!\mathrm{d}x\, \omega_\varepsilon(x,t) G(x_1,r_\varepsilon+ x_2)\,, \\ \int\! \mathrm{d}z \!\int_0^\infty\! \mathrm{d}r \, \Omega_{i,\varepsilon}(z,r,t) G(z,r) = \int\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) G(x_1,r_\varepsilon+ x_2)\,,\end{gathered}$$ despite a function $x\mapsto G(x_1,r_\varepsilon+x_2)$ is defined only if $x_2> - r_\varepsilon$. In this way, the equations of motion Eqs. [\[uz\]](#uz){reference-type="eqref" reference="uz"}, [\[ur\]](#ur){reference-type="eqref" reference="ur"}, [\[cons-omr\]](#cons-omr){reference-type="eqref" reference="cons-omr"}, and [\[eqchar\]](#eqchar){reference-type="eqref" reference="eqchar"} take the following form, $$\label{u=} u(x,t) = \int\!\mathrm{d}y\, H(x,y)\, \omega_\varepsilon(y,t)\,,$$ $$\label{cons-omr_n} \omega_\varepsilon(x(t),t) = \frac{r_\varepsilon+ x_2(t)}{r_\varepsilon+ x_2(0)} \omega_\varepsilon(x(0),0) \,,$$ $$\label{eqchar_n} \dot x(t) = u(x(t),t) \,,$$ where $u(x,t) = (u_1(x,t), u_2(x,t))$ and the kernel $H(x,y) = (H_1(x,y),H_2(x,y))$ is given by $$\begin{aligned} \label{H1} H_1(x,y) & = \frac{1}{2\pi} \int_0^\pi\!\mathrm{d}\theta \, \frac{(r_\varepsilon+ y_2)(r_\varepsilon+ y_2 - (r_\varepsilon+ x_2)\cos\theta)}{\big[|x-y|^2 + 2(r_\varepsilon+ x_2) (r_\varepsilon+ y_2) (1-\cos\theta)\big]^{3/2}} \,, \\ \label{H2} H_2(x,y) & = \frac{1}{2\pi} \int_0^\pi\!\mathrm{d}\theta \, \frac{(r_\varepsilon+y_2) (x_1-y_1) \cos\theta}{\big[|x-y|^2 + 2(r_\varepsilon+ x_2)(r_\varepsilon+ y_2)(1-\cos\theta)\big]^{3/2}} \,.\end{aligned}$$ (we omit the explicit dependence of $u$ and $H$ on $\varepsilon$). Moreover, the initial data Eq. 
[\[inO\]](#inO){reference-type="eqref" reference="inO"} now reads, $$\label{in} \omega_\varepsilon(x,0) = \sum_i \omega_{i,\varepsilon}(x,0) \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,$$ with $\omega_{i,\varepsilon}(x(0),0)$ satisfying $$\begin{gathered} \label{initial} \Lambda_{i,\varepsilon}(0) := \mathop{\rm supp}\nolimits\, \omega_{i,\varepsilon}(\cdot,0) \subset \Sigma(\zeta^i|\varepsilon) \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,, \\ \label{ai} \int\!\mathrm{d}x \, \omega_{i,\varepsilon}(x,0) = a_i \quad\forall\, \varepsilon\in (0,\varepsilon_0)\,, \\ \label{Mgamma} |\omega_{i,\varepsilon}(x,0)| \le \frac{M}{\varepsilon^2} \quad\forall\, \varepsilon\in (0,\varepsilon_0)\,.\end{gathered}$$ Finally, $$\label{in-t} \omega_\varepsilon(x,t) = \sum_i \omega_{i,\varepsilon}(x,t) \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,$$ with $$\label{cons-omr_ni} \omega_{i,\varepsilon}(x(t),t) = \frac{r_\varepsilon+ x_2(t)}{r_\varepsilon+ x_2(0)} \omega_{i,\varepsilon}(x(0),0)\,,$$ where $x(t)$ solves Eq. [\[eqchar_n\]](#eqchar_n){reference-type="eqref" reference="eqchar_n"}. It follows that each $\omega_{i,\varepsilon}(x,t)$ remains non-negative or non-positive also for $t>0$. Moreover, the weak formulation Eq. [\[weqO\]](#weqO){reference-type="eqref" reference="weqO"} holds also separately for each $\omega_{i,\varepsilon}(x,t)$, and reads $$\label{weq} \frac{\mathrm{d}}{\mathrm{d}t} \int\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) f(x,t) = \int\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) \big[ u \cdot \nabla f + \partial_t f \big](x,t) \,.$$ In particular, the vortex intensities are conserved during the time evolution, $$\label{ait} M_0^i(t) := \int\!\mathrm{d}x \, \omega_{i,\varepsilon}(x,t) = a_i \quad \forall\, t \ge 0 \quad\forall\, \varepsilon\in (0,\varepsilon_0)\,.$$ Sometimes, in the sequel, we will improperly call vorticity mass of a vortex ring its intensity. More generally, the quantity $\int_D\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t)$ will be indicated as the amount of vorticity mass of the $i$-th vortex ring contained in the region $D\subseteq {\mathbb R}^2$. We now denote by $(\zeta^1(t), \ldots,\zeta^N(t))$, $t\in [0,T^*)$, the maximal solution to the Cauchy problem, $$\label{ode} \begin{cases} \dot \zeta^i(t) = {\displaystyle \sum_{j\ne i} a_j K(\zeta^i(t)-\zeta^j(t)) +\frac{a_i}{4\pi\alpha} \begin{pmatrix} 1 \\ 0 \end{pmatrix}} \\ \zeta^i(0) = \zeta^i \end{cases} \quad \forall\, i=1, \ldots,N\,,$$ with $\{\zeta^i\}_{i=1}^N$ as in Eq. [\[initial\]](#initial){reference-type="eqref" reference="initial"} and $$\label{Kern} K(x) := - \frac{1}{2\pi} \nabla^\perp \log|x|$$ (here, if $v = (v_1,v_2)$ then $v^\perp = (v_2,-v_1)$). We can now state the main result of the paper. **Theorem 1**. *Assume the initial condition $\omega_\varepsilon(x,0)$ verifies Eqs. [\[in\]](#in){reference-type="eqref" reference="in"}, [\[initial\]](#initial){reference-type="eqref" reference="initial"}, [\[ai\]](#ai){reference-type="eqref" reference="ai"}, and [\[Mgamma\]](#Mgamma){reference-type="eqref" reference="Mgamma"}. 
Then, for any fixed (independent of $\varepsilon$) $\varrho>0$ such that the closed disks $\overline{\Sigma(\zeta^i|2\varrho)}$ are mutually disjoint, there exists $T_\varrho \in (0,T^*)$ such that for any $\varepsilon$ small enough and $t\in [0,T_\varrho]$ the following holds true.* - *$\Lambda_{i,\varepsilon}(t) := \mathop{\rm supp}\nolimits\, \omega_{i,\varepsilon}(\cdot,t) \subseteq \Sigma(\zeta^i(t)|\varrho)$ and the disks $\Sigma(\zeta^i(t)|2\varrho)$ are mutually disjoint.* - *There exist $(\zeta^{1,\varepsilon}(t), \ldots,\zeta^{N,\varepsilon}(t))$ and $\varrho_\varepsilon>0$ such that $$\lim_{\varepsilon\to 0} \int_{\Sigma(\zeta^{i,\varepsilon}(t)|\varrho_\varepsilon)}\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) = a_i\quad \forall\, i=1,\ldots,N,$$ with $\displaystyle \lim_{\varepsilon\to 0} \varrho_\varepsilon= 0$, and $$\lim_{\varepsilon\to 0} \zeta^{i,\varepsilon}(t) = \zeta^i(t) \quad \forall\, i=1,\ldots,N.$$* The time interval of convergence can be enlarged in the case of two vortex rings with initial data such that the relative position $\zeta^1-\zeta^2$ performs a periodic motion (with respect to the evolution Eq. [\[ode-intro\]](#ode-intro){reference-type="eqref" reference="ode-intro"} with $N=2$) and $\alpha$ is chosen sufficiently large. For brevity of exposition, we do not detail the result here and refer the reader to Section [7](#sec:7){reference-type="ref" reference="sec:7"}. # Concentration estimates {#sec:3} Given $\varrho$ as in the statement of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"}, since $|\zeta^i-\zeta^j| > 4\varrho$ for any $i\ne j$, we can find $T\in (0,T^*)$ such that $$\label{T} \min_{i\ne j} \min_{t\in [0,T]} |\zeta^i(t)-\zeta^j(t)| \ge 4\varrho\,,$$ and we also let $$\label{dbarra} \bar d:= \max_i\max_{t\in [0,T]} |\zeta^i(t)|\,.$$ We then define $$\label{Teps} T_\varepsilon:= \max\left\{t\in [0,T] \colon \Lambda_{i,\varepsilon}(s) \subset \Sigma(\zeta^i(s)|\varrho) \; \forall\, s\in [0,t]\; \forall\, i \right\}.$$ Without loss of generality, hereafter we assume $\varepsilon_0 < \varrho$ so that $T_\varepsilon>0$ for any $\varepsilon\in (0,\varepsilon_0)$ by continuity. Clearly, $$\label{sep-disks} \begin{split} & |x| \le \bar d + \varrho \quad \forall\, x\in \Sigma(\zeta^i(t)|\varrho) \quad \forall\, t \in [0,T] \quad \forall\, i \,, \\ & |x-y| \ge 2\varrho \quad \forall\, x\in\Sigma(\zeta^i(t)|\varrho) \quad\forall\, y \in\Sigma(\zeta^j(t)|\varrho) \quad \forall\, t \in [0,T] \quad \forall\, i\ne j\,, \end{split}$$ and therefore, up to time $T_\varepsilon$, also the supports of the vortex rings are uniformly bounded and separated, $$\label{sep-supp} \begin{split} & |x| \le \bar d + \varrho \quad \forall\, x\in \Lambda_{i,\varepsilon}(t) \quad \forall\, t \in [0,T_\varepsilon] \quad \forall\, i \,, \\ & |x-y| \ge 2\varrho \quad \forall\, x\in \Lambda_{i,\varepsilon}(t)\quad\forall\, y \in \Lambda_{j,\varepsilon}(t) \quad \forall\, t \in [0,T_\varepsilon] \quad \forall\, i\ne j\,. \end{split}$$ Clearly, $T_\varepsilon$ could vanish as $\varepsilon\to 0$; the key point in proving Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} will be a bootstrap argument based on the analysis of the motion in the time interval $[0,T_\varepsilon]$, which shows that in fact this is not the case. The first ingredient for such an analysis is a set of suitable concentration inequalities on the vorticities, which are the content of the present section.
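As an aside, the reference trajectories $\zeta^i(t)$ entering Eqs. [\[T\]](#T){reference-type="eqref" reference="T"} and [\[dbarra\]](#dbarra){reference-type="eqref" reference="dbarra"} solve the planar system Eq. [\[ode\]](#ode){reference-type="eqref" reference="ode"}, which is easy to integrate numerically. The following minimal sketch (not part of the paper, with purely illustrative values of $a_i$ and $\alpha$, and assuming SciPy is available) integrates the case $N=2$ and counts the sign changes of the axial separation $\zeta^1_1-\zeta^2_1$, i.e., the overtakings in this reduced description.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper): two rings of equal intensity.
a = np.array([1.0, 1.0])   # vortex intensities a_1, a_2
alpha = 1.0                # scaling parameter entering the drift term

def K(x):
    # Point-vortex kernel K(x) = -(1/(2*pi)) * x^perp / |x|^2, with x^perp = (x_2, -x_1).
    return -np.array([x[1], -x[0]]) / (2.0 * np.pi * np.dot(x, x))

def rhs(t, z):
    zeta = z.reshape(2, 2)              # zeta[i] = (zeta^i_1, zeta^i_2)
    dzeta = np.zeros_like(zeta)
    for i in range(2):
        for j in range(2):
            if j != i:
                dzeta[i] += a[j] * K(zeta[i] - zeta[j])
        dzeta[i, 0] += a[i] / (4.0 * np.pi * alpha)   # drift along the symmetry axis
    return dzeta.ravel()

# Initial positions (zeta^1, zeta^2) in the (axial, radial-offset) plane.
z0 = np.array([0.0, 0.5, 0.3, -0.5])
sol = solve_ivp(rhs, (0.0, 60.0), z0, rtol=1e-9, atol=1e-12, dense_output=True)

ts = np.linspace(0.0, 60.0, 4001)
axial_sep = sol.sol(ts)[0] - sol.sol(ts)[2]         # zeta^1_1 - zeta^2_1
overtakings = int(np.sum(np.diff(np.sign(axial_sep)) != 0))
print("sign changes of the axial separation (overtakings):", overtakings)
```

For equal intensities the drift cancels in the difference $\zeta^1-\zeta^2$, which then rotates at constant distance, consistent with the periodic relative motion behind the leapfrogging picture of Section [7](#sec:7){reference-type="ref" reference="sec:7"}.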
In [@MN] and previous work [@BCM00], concentration estimates on the vorticity mass in the case of a single vortex ring are deduced by using the conservation laws of kinetic energy, axial moment of inertia, and vortex intensity. Here, we need similar estimates for the vorticity of each vortex ring. The corresponding kinetic energies and axial moments of inertia are not conserved, but as long as the interaction among the rings is not too large, i.e., up to time $T_\varepsilon$, it is still possible to obtain such inequalities. ## Energy estimates {#sec:3.1} The kinetic energy $E(t) = \frac 12 \int\!\mathrm{d}{\boldsymbol \xi}\, |{\boldsymbol u} ({\boldsymbol \xi},t)|^2$ associated to axisymmetric solutions described via Eqs. [\[u=\]](#u=){reference-type="eqref" reference="u="}, [\[cons-omr_n\]](#cons-omr_n){reference-type="eqref" reference="cons-omr_n"}, and [\[eqchar_n\]](#eqchar_n){reference-type="eqref" reference="eqchar_n"} takes the form $$E(t) = \frac 12 \int\!\mathrm{d}z \!\int_0^\infty\!\mathrm{d}r\, 2\pi r\, (u_z^2 + u_r^2)(z,r,t) = \frac 12 \int\! \mathrm{d}x \, 2\pi (r_\varepsilon+ x_2) |u(x,t)|^2\,.$$ It is convenient to express $E(t)$ as a quadratic form of the vorticity $\omega_\varepsilon(x,t)$. To this end, we introduce the stream function $$\Psi(x,t) = \int\!\mathrm{d}y\, S(x,y)\, \omega_\varepsilon(y,t)\,,$$ where the Green kernel $S(x,y)$ reads $$S(x,y) := \frac{(r_\varepsilon+ x_2) (r_\varepsilon+ y_2)}{2\pi} \int_0^\pi\!\mathrm{d}\theta\, \frac{\cos\theta}{\sqrt{|x-y|^2 + 2(r_\varepsilon+ x_2) (r_\varepsilon+ y_2)(1-\cos\theta)}}\,,$$ so that $u(x,t) = (r_\varepsilon+ x_2)^{-1} \nabla^\perp\Psi(x,t)$ and the energy takes the form (see, e.g., [@BCM00; @F]) $$E(t) = \pi \int\! \mathrm{d}x\, \Psi(x,t) \, \omega_\varepsilon(x,t) = \pi \int\!\mathrm{d}x \int\!\mathrm{d}y\, S(x,y)\, \omega_\varepsilon(x,t) \, \omega_\varepsilon(y,t)\,.$$ In view of Eq. [\[in-t\]](#in-t){reference-type="eqref" reference="in-t"}, the energy can be decomposed as the sum of the energies due to the self-interaction of each vortex ring plus those due to the interaction among the rings, $$\begin{gathered} \label{E=sum Ei} E(t) = \sum_i E_i(t) + 2 \sum_{i > j} E_{i,j}(t)\,, \\ \label{Ei=} E_i(t) = \pi \int\!\mathrm{d}x \int\!\mathrm{d}y\, S(x,y)\, \omega_{i,\varepsilon}(x,t) \, \omega_{i,\varepsilon}(y,t)\,, \\ \label{Eij=} E_{i,j}(t) = \pi \int\!\mathrm{d}x \int\!\mathrm{d}y\, S(x,y)\, \omega_{i,\varepsilon}(x,t) \, \omega_{j,\varepsilon}(y,t)\,.\end{gathered}$$ Hereafter, we let $$\label{|a|} |a| = \sum_i |a_i|\,.$$ **Lemma 2**. *There exists $C_1 = C_1(\alpha, |a|, \bar d, \varrho)>0$ such that, for any $\varepsilon\in (0,\varepsilon_0)$, $$\label{E>} \sum_i E_i(t) \ge \frac\alpha2 \sum_i a_i^2 |\log\varepsilon|^2 - C_1 (\log|\log\varepsilon|)|\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon]\,.$$* *Proof.* We write $$\label{S=I} S(x,y) = \frac{\sqrt{(r_\varepsilon+ x_2) (r_\varepsilon+ y_2)}}{2\pi} I_0\left(\frac{|x-y|}{\sqrt{(r_\varepsilon+ x_2) (r_\varepsilon+ y_2)}}\right),$$ where $$I_0(s) := \int_0^\pi\!\mathrm{d}\theta\, \frac{\cos\theta}{[s^2 + 2(1-\cos\theta)]^{1/2}}\,, \quad s>0\,,$$ can be easily evaluated, see, e.g., [@MN Appendix A], getting $$\label{I1} C_0 := \sup_{s>0} \left|I_0(s) - \log \frac{2+\sqrt{s^2 + 4}}s \right| < \infty\,.$$ From Eqs. [\[Eij=\]](#Eij=){reference-type="eqref" reference="Eij="}, [\[S=I\]](#S=I){reference-type="eqref" reference="S=I"}, [\[I1\]](#I1){reference-type="eqref" reference="I1"} and recalling Eqs. 
[\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"}, [\[reps\]](#reps){reference-type="eqref" reference="reps"}, and [\[ait\]](#ait){reference-type="eqref" reference="ait"}, it follows that there is $C_1'=C_1'(\alpha,\bar d,\varrho)>0$ such that $$|E_{i,j}(t)| \le C_1'|a_i|\, |a_j| (\log|\log\varepsilon|)|\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\, \varepsilon\in (0,\varepsilon_0)\quad \forall\, i\ne j\,.$$ Analogously, in view of Eq. [\[initial\]](#initial){reference-type="eqref" reference="initial"}, there is $C_1'' = C_1''(\alpha,\bar d)>0$ such that $$E_i(0) \ge a_i^2\left[ \frac{\alpha}2 |\log\varepsilon|^2 - C_1'' (\log|\log\varepsilon|)|\log\varepsilon| \right] \quad \forall\, \varepsilon\in (0,\varepsilon_0) \quad \forall\, i\,.$$ Therefore, since the total kinetic energy is conserved along the motion, i.e., $E(t) = E(0)$, we conclude that $$\begin{aligned} \sum_i E_i(t) & = \sum_i E_i(0) + 2 \sum_{i>j} [E_{i,j}(0) - E_{i,j}(t) ] \\ & \ge \sum_i E_i(0) - 2 \sum_{i>j} (|E_{i,j}(0)| + |E_{i,j}(t)|) \ \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,\end{aligned}$$ from which Eq. [\[E\>\]](#E>){reference-type="eqref" reference="E>"} follows with, e.g., $C_1 = 2(C_1'+ C_1'') |a|^2$. ◻ Without loss of generality, in what follows we further assume $\varepsilon_0<1/\mathrm{e}^\mathrm{e}$, so that $\log|\log\varepsilon| >1$ for any $\varepsilon\in (0,\varepsilon_0)$. **Proposition 3**. *There exists $C_2 = C_2(\alpha,|a|,\bar d, \varrho)>0$ such that, for any $\varepsilon\in (0,\varepsilon_0)$, $$\begin{aligned} \label{conc1} {\mathcal G}_i(t) & := \int\!\mathrm{d}x \!\int\! \mathrm{d}y\, \omega_{i,\varepsilon}(x,t) \omega_{i,\varepsilon}(y,t) \log\Big(\frac{|x-y|}{\varepsilon}\Big) {1 \mskip -5mu {\rm I}}(|x-y| \ge \varepsilon) \nonumber \\ & \qquad \le C_2 \log|\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\, i = 1,\ldots,N \,,\end{aligned}$$ where ${1 \mskip -5mu {\rm I}}(\cdot)$ denotes the indicator function of a subset.* *Proof.* Letting $$\label{Ab} A := (r_\varepsilon+x_2)(r_\varepsilon+y_2)\,,$$ from Eqs. [\[S=I\]](#S=I){reference-type="eqref" reference="S=I"} and [\[I1\]](#I1){reference-type="eqref" reference="I1"} we have $$S(x,y) \le \frac{\sqrt A}{2\pi} \Big( C_0 + \log(\sqrt{4A} + \sqrt{|x-y|^2+4A}) - \log |x-y| \Big)$$ and, in view of Eqs. [\[Teps\]](#Teps){reference-type="eqref" reference="Teps"} and [\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"}, $$\label{A<} r_\varepsilon- \bar d - \varrho \le \sqrt A \le r_\varepsilon+ \bar d + \varrho\,, \quad |x-y| \le 2\varrho \quad \forall\, x,y\in \Lambda_{i,\varepsilon}(t) \quad \forall\, t \in [0,T_\varepsilon]\,.$$ Therefore, recalling Eqs. [\[reps\]](#reps){reference-type="eqref" reference="reps"} and [\[Ei=\]](#Ei=){reference-type="eqref" reference="Ei="}, there exists $C_2' = C_2'(\alpha,|a|,\bar d, \varrho)>0$ such that $$\begin{split} E_i(t) & \le \frac 12 \int\!\mathrm{d}x \!\int\!
\mathrm{d}y\, \omega_{i,\varepsilon}(x,t) \omega_{i,\varepsilon}(y,t) \sqrt A \log( |x-y|^{-1}) \nonumber \\ & \quad + C_2' (\log|\log\varepsilon|)|\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,, \end{split}$$ which can be recast as $$E_i(t) \le G_i^{(1)}(t) - G_i^{(2)}(t)+ G_i^{(3)}(t) + C_2' (\log|\log\varepsilon|)|\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\,\varepsilon\in (0,\varepsilon_0)\,,$$ where $$\begin{split} G_i^{(1)}(t) & = \frac 12 \int\!\mathrm{d}x \!\int\! \mathrm{d}y\, \omega_{i,\varepsilon}(x,t) \omega_{i,\varepsilon}(y,t) \sqrt A \log\Big( \frac1\varepsilon\Big)\,, \\ G_i^{(2)}(t) & = \frac 12 \int\!\mathrm{d}x \!\int\! \mathrm{d}y\, \omega_{i,\varepsilon}(x,t) \omega_{i,\varepsilon}(y,t) \sqrt A \log \Big(\frac{|x-y|}{\varepsilon}\Big){1 \mskip -5mu {\rm I}}(|x-y| \ge \varepsilon) \,, \\ G _i^{(3)}(t) & = \frac 12 \int\!\mathrm{d}x \!\int\! \mathrm{d}y\, \omega_{i,\varepsilon}(x,t) \omega_{i,\varepsilon}(y,t)\sqrt A \log\Big(\frac\varepsilon{|x-y|} \Big) {1 \mskip -5mu {\rm I}}(|x-y| < \varepsilon)\,. \end{split}$$ By Eq. [\[A\<\]](#A<){reference-type="eqref" reference="A<"} it follows that there is $C_2'' = C_2''(\alpha,|a|,\bar d, \varrho)>0$ such that $$G_i^{(1)}(t) \le \frac\alpha2 a_i^2 |\log\varepsilon|^2 + C_2'' |\log\varepsilon| \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\, \varepsilon\in (0,\varepsilon_0)\,.$$ Concerning $G_i^{(3)}(t)$, again by Eq. [\[A\<\]](#A<){reference-type="eqref" reference="A<"} we have $$G_i^{(3)}(t) \le \frac 12(r_\varepsilon+\bar d+\varrho) \int\!\mathrm{d}x \, |\omega_{i,\varepsilon}(x,t)| \int\! \mathrm{d}y\,|\omega_{i,\varepsilon}(y,t)| \log\Big(\frac\varepsilon{|x-y|} \Big) {1 \mskip -5mu {\rm I}}(|x-y| < \varepsilon)\,,$$ and the integral with respect to the variable $y$ can be estimated performing a symmetrical rearrangement of the vorticity around the point $x$. More precisely, by Eqs. [\[Mgamma\]](#Mgamma){reference-type="eqref" reference="Mgamma"}, [\[initial\]](#initial){reference-type="eqref" reference="initial"}, [\[cons-omr_ni\]](#cons-omr_ni){reference-type="eqref" reference="cons-omr_ni"}, and [\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"}, $$\label{omt<} |\omega_{i,\varepsilon}(y,t)| \le \frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2} \quad \forall\, t\in [0,T_\varepsilon]\,,$$ so that, by Eq. [\[ait\]](#ait){reference-type="eqref" reference="ait"} and since $\omega_{\varepsilon,i}(\cdot,t)$ does not change sign, if $\bar r$ is such that $$\label{br=} \frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2} \,\pi \bar r^2 = |a_i|$$ then $$\begin{gathered} \int\! \mathrm{d}y\, |\omega_{i,\varepsilon}(y,t)| \log\Big(\frac\varepsilon{|x-y|} \Big) {1 \mskip -5mu {\rm I}}(|x-y| \le \varepsilon) \le \frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2} \int_0^{\bar r\wedge \varepsilon}\!\mathrm{d}r\, 2\pi r \log\Big(\frac\varepsilon r\Big) \\ = \frac{2 |a_i|}{\bar r^2}\left(\frac{(\bar r\wedge \varepsilon)^2}4 - \frac{(\bar r\wedge \varepsilon)^2}2\log\frac{(\bar r\wedge \varepsilon)}\varepsilon\right).\end{gathered}$$ Hence, the above integral is bounded uniformly with respect to $\varepsilon$. Therefore, again recalling Eq. 
[\[reps\]](#reps){reference-type="eqref" reference="reps"}, there exists $C_2''' = C_2'''(\alpha,|a|,\bar d, \varrho)>0$ such that $$G_i^{(3)}(t) \le C_2''' |\log\varepsilon|\,.$$ Gathering together the above estimates, we conclude that $$\begin{split} \sum_i E_i(t) & \le \frac \alpha 2 \sum_i a_i^2 |\log\varepsilon|^2 - \sum_i G_i^{(2)}(t) \\ & \quad + NC_2' (\log|\log\varepsilon|)|\log\varepsilon| + N (C_2''+C_2''')|\log\varepsilon|\,. \end{split}$$ Comparing with Eq. [\[E\>\]](#E>){reference-type="eqref" reference="E>"} we deduce that $$\label{G<} \sum_i G_i^{(2)}(t) \le (C_1+NC_2') (\log|\log\varepsilon|)|\log\varepsilon| + N(C_2''+C_2''') |\log\varepsilon|\,.$$ But, by Eqs. [\[A\<\]](#A<){reference-type="eqref" reference="A<"} and [\[reps\]](#reps){reference-type="eqref" reference="reps"}, $$G_i^{(2)}(t) \ge \frac12 (\alpha|\log\varepsilon|-\bar d-\varrho)\, {\mathcal G}_i(t)\,,$$ and Eq. [\[conc1\]](#conc1){reference-type="eqref" reference="conc1"} follows from Eq. [\[G\<\]](#G<){reference-type="eqref" reference="G<"} for a suitable choice of $C_2$. ◻ ## Mass concentration and bound on the moment of inertia {#sec:3.2} As a consequence of Proposition [Proposition 3](#prop:conc1){reference-type="ref" reference="prop:conc1"}, we next prove that the mass of each vortex ring is concentrated in a disk of vanishing size as $\varepsilon\to 0$. **Theorem 4**. *There exist constants $C_j=C_j(\alpha,|a|,\bar d, \varrho)>0$, $j=3,4$, and points $q^{i,\varepsilon}(t)\in {\mathbb R}^2$, $t\in [0,T_\varepsilon]$, $\varepsilon\in (0,\varepsilon_0)$, such that if $R>\exp(C_3 \log|\log\varepsilon|)$ then, for any $\varepsilon\in (0,\varepsilon_0)$, $$\begin{aligned} \label{eq:conc} \frac{a_i}{|a_i|} \int_{\Sigma(q^{i,\varepsilon}(t)|\varepsilon R)}\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) \ge |a_i| - \frac{C_4 \log|\log\varepsilon|}{\log R} \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\, i = 1,\ldots,N \,.\end{aligned}$$* *Proof.* In what follows we omit the explicit dependence on $i$ and $t$ by introducing the shorthand notation $$\omega(x) := \frac{a_i}{|a_i|} \omega_{i,\varepsilon}(x,t) =|\omega_{i,\varepsilon}(x,t)| \,, \quad a= |a_i|\,.$$ Since $\int\!\mathrm{d}x\, \omega(x) = |a_i|$ we can find $x_1^*$ and $L_1>1$ such that $$M_1 := \int_{x_1< x_1^*-\varepsilon L_1}\!\mathrm{d}x\, \omega(x) \le \frac a2\,, \qquad M_3 :=\int_{x_1> x_1^*+\varepsilon L_1}\!\mathrm{d}x\, \omega(x)\le \frac a2\,.$$ Setting $$M_2 := \int_{|x_1-x_1^*| \le \varepsilon L_1}\!\mathrm{d}x\, \omega(x)\,,$$ from Eq. [\[conc1\]](#conc1){reference-type="eqref" reference="conc1"}, by neglecting the vorticity in the central region, we deduce that $2M_1M_3 \log(2L_1) \le C_2 \log|\log\varepsilon|$. Therefore, $$\begin{split} a^2 & = (M_1+M_2+M_3)^2 \le \frac{a^2}2 + 2 M_2^2 + 2M_2(M_1+M_3) + 2 M_1 M_3 \\ & = \frac{a^2}2 + 2 a M_2 + 2 M_1M_3 \le \frac{a^2}2 + 2 a M_2 + C_2 \frac{\log|\log\varepsilon|}{\log(2L_1)}\,, \end{split}$$ whence $$\label{stco} M_2 \ge \frac a4 - \frac{C_2 \log|\log\varepsilon|}{2a \log(2L_1)}\,.$$ In particular, $$\label{a8} M_2 \ge \frac a8 \quad \forall \, L_1 \ge L_1^* := \frac 12 \exp\Big(\frac{4C_2}{a^2}\log|\log\varepsilon|\Big)\,.$$ Letting now $$M_1' := \int_{x_1< x_1^*- 2\varepsilon L_1}\!\mathrm{d}x\, \omega(x)\,, \quad M_3' :=\int_{x_1> x_1^*+ 2\varepsilon L_1}\!\mathrm{d}x\, \omega(x)\,,$$ from Eq.
[\[conc1\]](#conc1){reference-type="eqref" reference="conc1"}, neglecting some positive terms and using [\[a8\]](#a8){reference-type="eqref" reference="a8"}, we obtain $$\frac a8 (M_1'+M_3') \log L_1 \le C_2 \log|\log\varepsilon| \quad \forall \, L_1 \ge L_1^*\,,$$ whence $$\label{w>} M_2' := \int_{|x_1-x_1^*| \le 2\varepsilon L_1}\!\mathrm{d}x\, \omega(x) \ge a - \frac{8C_2\log|\log\varepsilon|}{a\log L_1} \quad \forall \, L_1 \ge L_1^*\,.$$ We can now repeat the same argument in the $x_2$-direction for the function $$\tilde \omega(x) = \omega(x) {1 \mskip -5mu {\rm I}}(|x_1-x_1^*| \le 2\varepsilon L_1)\,,$$ when $L_1>L_1^*$. It follows that there is $x_2^*$ such that $$\label{tildw>} \int_{|x_2-x_2^*| \le 2\varepsilon L_2}\!\mathrm{d}x\, \tilde \omega(x) \ge M_2' - \frac{8C_2\log|\log\varepsilon|}{M_2'\log L_2} \quad \forall \, L_2 \ge L_2^*\,,$$ with now $$L_2^* := \frac 12 \exp\Big(\frac{4C_2}{(M_2')^2}\log|\log\varepsilon|\Big) \le \frac 12 \exp\Big(\frac{256 C_2}{a^2}\log|\log\varepsilon|\Big) \,,$$ where in the last inequality we used that $M_2'\ge M_2 \ge a/8$ by Eq. [\[a8\]](#a8){reference-type="eqref" reference="a8"}. Therefore, letting $x^* = (x_1^*,x_2^*)$ and choosing $$L_1 = L_2 = L > \frac 12 \exp\Big(\frac{256 C_2}{a^2}\log|\log\varepsilon|\Big),$$ from Eqs. [\[w\>\]](#w>){reference-type="eqref" reference="w>"} and [\[tildw\>\]](#tildw>){reference-type="eqref" reference="tildw>"} we get $$\begin{split} \int_{\Sigma(x^*|2\varepsilon\sqrt 2 L)} \! \mathrm{d}x\, \omega(x) & \ge \int_{|x_2-x_2^*| \le 2\varepsilon L}\!\mathrm{d}x\, \tilde \omega(x) \ge a - \frac{8C_2\log|\log\varepsilon|}{a\log L} - \frac{8C_2\log|\log\varepsilon|}{M_2'\log L} \\ & \ge a - \frac{72 C_2\log|\log\varepsilon|}{a\log L}\,, \end{split}$$ where we used again $M_2'\ge a/8$ in the last inequality. Coming back to the original notation, Eq. [\[eq:conc\]](#eq:conc){reference-type="eqref" reference="eq:conc"} follows with $q^{i,\varepsilon}(t) = x^*$ and suitable choices of $C_3,C_4>0$. ◻ We denote by $B^{i,\varepsilon}(t)$ the center of vorticity of the $i$-th vortex ring, defined by $$\label{c.m.} B^{i,\varepsilon}(t) := \frac{1}{a_i} \int\! \mathrm{d}x\, x\,\omega_{i,\varepsilon}(x,t)\,,$$ and by $J_{i,\varepsilon}(t)$ the corresponding moment of inertia, i.e., $$\label{J} J_{i,\varepsilon}(t) : = \int \!\mathrm{d}x\, |x-B^{i,\varepsilon}(t)|^2 |\omega_{i,\varepsilon}(x,t)| = \frac{a_i}{|a_i|} \int \!\mathrm{d}x\, |x-B^{i,\varepsilon}(t)|^2 \omega_{i,\varepsilon}(x,t) \,.$$ From Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"}, we deduce that the moment of inertia vanishes as $\varepsilon\to 0$. This is the content of the following theorem. **Theorem 5**. 
*Given $\gamma\in (0,1)$ there exists $\varepsilon_\gamma \in (0,\varepsilon_0)$ such that $$\label{eq:J<} J_{i,\varepsilon}(t) \le \frac{1}{|\log\varepsilon|^\gamma} \quad \forall\, t\in [0,T_\varepsilon] \quad \forall\,\varepsilon\in (0,\varepsilon_\gamma)\,.$$* *Proof.* Without loss of generality we assume hereafter $a_i>0$, i.e., $\omega_{i,\varepsilon}(t) \ge 0$ so that $$J_{i,\varepsilon}(t) : = \int \!\mathrm{d}x\, |x-B^{i,\varepsilon}(t)|^2 \omega_{i,\varepsilon}(x,t)\,.$$ Given $\gamma\in (0,1)$, we choose $\gamma' \in (\gamma,1)$ and let $$\label{eq:Sgt} \Sigma_{i,\varepsilon}(t) := \Sigma(q^{i,\varepsilon}(t)|\varepsilon R_\varepsilon)\,, \quad R_\varepsilon:= \exp(|\log\varepsilon|^{\gamma'})\,.$$ By Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"}, provided $\varepsilon\in (0,\varepsilon_0)$ is chosen sufficiently small to have $R_\varepsilon>\exp(C_3\log|\log\varepsilon|)$, we can apply Eq. [\[eq:conc\]](#eq:conc){reference-type="eqref" reference="eq:conc"} with $R=R_\varepsilon$ getting $$\label{stmass} \int_{ \Sigma_{i,\varepsilon}(t)}\!\mathrm{d}y\, \omega_\varepsilon(y,t) \ge a_i - \frac{C_4 \log|\log\varepsilon| }{|\log\varepsilon|^{\gamma'}} \quad \forall\, t\in [0,T_\varepsilon]\,.$$ Now, by the definition of center of vorticity, $$\begin{split} J_{i,\varepsilon}(t) & = \min_{q\in{\mathbb R}^2} \int\! \mathrm{d}x\, |x-q|^2\omega_{i,\varepsilon}(x,t) \le \int\! \mathrm{d}x\, |x-q^{i,\varepsilon}(t)|^2\omega_{i,\varepsilon}(x,t) \\ & = \int_{\Sigma_{i,\varepsilon}(t)}\! \mathrm{d}x\, |x-q^{i,\varepsilon}(t)|^2\omega_{i,\varepsilon}(x,t) + \int_{\Sigma_{i,\varepsilon}(t)^\complement}\! \mathrm{d}x\, |x-q^{i,\varepsilon}(t)|^2\omega_{i,\varepsilon}(x,t) \\ & \le a_i (\varepsilon R_\varepsilon)^2 + \frac{C_4 \log|\log\varepsilon| }{|\log\varepsilon|^{\gamma'}}\max_{x\in\Lambda_{i,\varepsilon}(t)} |x-q_\varepsilon(t)|^2. \end{split}$$ Now, for $t\in [0,T_\varepsilon]$ and $\varepsilon$ small enough, $$\max_{x\in\Lambda_{i,\varepsilon}(t)} |x-q^{i,\varepsilon}(t)|^2 \le 2 \max_{x\in\Lambda_{i,\varepsilon}(t)} |x|^2 + 2|q^{i,\varepsilon}(t)|^2 \le 2(\bar d + \varrho)^2 + 2(\bar d +\varrho+\varepsilon R_\varepsilon)^2,$$ where we used Eq. [\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"} and that, in view of Eq. [\[stmass\]](#stmass){reference-type="eqref" reference="stmass"}, $\Sigma_{i,\varepsilon}(t) \cap\Lambda_{i,\varepsilon}(t) \ne \emptyset$. Then, the lemma follows from the above estimates. ◻ # Iterative procedure {#sec:4} As already observed, our goal is to show that the time $T_\varepsilon$ does not vanish as $\varepsilon\to 0$. To this purpose, we will prove that there is $T_\varrho'\in (0,T]$ such that the condition on the supports $\Lambda_{i,\varepsilon}(t)$ in the definition of $T_\varepsilon$ can be enforced up to time $T_\varrho' \wedge T_\varepsilon$ for any $\varepsilon$ small enough. By continuity, this implies that $T_\varepsilon\ge T_\varrho'$ for any $\varepsilon$ small enough (and Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} will follow with $T_\varrho = T_\varrho'$). A key step in this strategy, which is the content of the present section, is to prove that the amount of vorticity $\omega_{i,\varepsilon}(x,t)$ outside any disk centered in $B^{i,\varepsilon}(t)$ and whose radius is fixed independent of $\varepsilon$ is indeed extremely small, say $\varepsilon^\ell$ up to a time $\bar T_\ell \wedge T_\varepsilon$. 
To this end, a naive application of the iterative procedure adopted in the quoted papers fails in this case, because of the worse estimate Eq. [\[eq:J\<\]](#eq:J<){reference-type="eqref" reference="eq:J<"} on the moment of inertia. To overcome this difficulty, we notice that this estimate is sufficient to apply an iterative argument based on a larger space step, which gives a weaker estimate. But this estimate is good enough to implement a new iterative argument, now based on the correct smaller space step, which leads to the result. The following lemma, whose proof is given in Appendix [8](#app:a){reference-type="ref" reference="app:a"}, will be repeatedly used in the sequel. **Lemma 6**. *Recall that $H(x,y) = (H_1(x,y),H_2(x,y))$ is defined in Eqs. [\[H1\]](#H1){reference-type="eqref" reference="H1"}, [\[H2\]](#H2){reference-type="eqref" reference="H2"}. There exists $C_H = C_H(\alpha,\bar d, \varrho) >0$ such that, for any $i\ne j$, $t \in [0,T]$, and $\varepsilon\in (0,\varepsilon_0)$, $$\label{stimH} |H(x,y)| + \|D_xH(x,y)\| \le C_H\quad \forall\, x\in\Sigma(\zeta^i(t)|\varrho) \quad\forall\, y \in\Sigma(\zeta^j(t)|\varrho)$$ ($D_xH(x,y)$ denotes the Jacobian matrix of $H(x,y)$ with respect to the variable $x$). Moreover, $H(x,y)$ admits the decomposition, $$\label{sH} H(x,y) = K(x-y) + L(x,y) + {\mathcal R}(x,y),$$ where $K(\cdot)$ is defined in [\[Kern\]](#Kern){reference-type="eqref" reference="Kern"}, $$\label{sL} L(x,y) = \frac{1}{4\pi (r_\varepsilon+x_2)} \log\frac {1+|x-y|}{|x-y|}\begin{pmatrix} 1 \\ 0 \end{pmatrix},$$ and, for a suitable constant $\bar C>0$, $$\label{sR} |{\mathcal R}(x,y)| \le \bar C \frac{1+ r_\varepsilon+ x_2+\sqrt{A} \big(1+ |\log A|\big)}{(r_\varepsilon+ x_2)^2}\,,$$ with $A$ as in Eq. [\[Ab\]](#Ab){reference-type="eqref" reference="Ab"}.* We decompose the velocity field Eq. [\[u=\]](#u=){reference-type="eqref" reference="u="} according to Eq. [\[sH\]](#sH){reference-type="eqref" reference="sH"}, writing $$\label{decom_u} u(x,t) = (K*\omega_{i,\varepsilon})(x,t) + u^i_L(x,t) + u^i_{{\mathcal R}}(x,t) + F^i(x,t)\,,$$ where $(K*\omega_{i,\varepsilon})(\cdot, t)$ denotes the convolution of $K$ and $\omega_{i,\varepsilon}(\cdot,t)$, $$\label{uL=w} u^i_L(x,t) := \int\!\mathrm{d}y\, L(x,y)\, \omega_{i,\varepsilon}(y,t) = w^i_L(x,t) \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$ with $$\label{wL=} w^i_L(x,t) := \frac{1}{4\pi (r_\varepsilon+ x_2)} \int\!\mathrm{d}y\, \omega_{i,\varepsilon}(y,t)\log\frac {1+|x-y|}{|x-y|}\,,$$ and $$\quad u^i_{{\mathcal R}}(x,t) := \int\!\mathrm{d}y\, {\mathcal R}(x,y)\, \omega_{i,\varepsilon}(y,t)\,,\quad F^i(x,t) := \sum_{j\ne i} \int\!\mathrm{d}y\, H(x,y)\, \omega_{j,\varepsilon}(y,t)\,.$$ By Eq. [\[sep-disks\]](#sep-disks){reference-type="eqref" reference="sep-disks"} and in view of Eqs.
[\[stimH\]](#stimH){reference-type="eqref" reference="stimH"} and [\[sR\]](#sR){reference-type="eqref" reference="sR"}, for any $\varepsilon\in (0,\varepsilon_0)$, $$\label{stimF} |F^i(x,t)| + \|D_xF^i(x,t)\| \le C_F \quad \forall\, x\in \Sigma(\zeta^i(t)|\varrho) \quad \forall\, t \in [0,T_\varepsilon]\,,$$ with $C_F := |a| C_H$, and there is $C_{{\mathcal R}} = C_{{\mathcal R}}(\alpha,|a|,\bar d, \varrho) >0$ such that $$\label{stimR} |u^i_{{\mathcal R}}(x,t)| \le \frac{C_{{\mathcal R}}\log|\log\varepsilon|}{|\log\varepsilon|} \quad \forall\, x\in \bigcup_j \Sigma(\zeta^j(t)|\varrho) \quad \forall\, t \in [0,T_\varepsilon]\,.$$ Moreover, since $s \mapsto\log[(1+s)/s]$ is decreasing, the integral appearing in the definition of $w^i_L(x,t)$ can be estimated performing a symmetrical rearrangement of the vorticity around the point $x$. Therefore, recalling Eqs. [\[sep-disks\]](#sep-disks){reference-type="eqref" reference="sep-disks"} and [\[omt\<\]](#omt<){reference-type="eqref" reference="omt<"}, if $(x,t) \in \Sigma(\zeta^i(t)|\varrho) \times [0,T_\varepsilon]$ then $$\begin{aligned} |w^i_L(x,t)| & \le \frac{1}{4\pi(r_\varepsilon-\bar d-\varrho)} \int\!\mathrm{d}y\, \log\frac {1+|x-y|}{|x-y|} |\omega_{i,\varepsilon}(y,t)| \\ & \le \frac{1}{4\pi(r_\varepsilon-\bar d-\varrho)} \frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2} \int_0^{\bar r}\!\mathrm{d}r\, 2\pi r \, \log\frac{1+r}{r} \\ & = \frac{M(r_\varepsilon+\bar d+\varrho)}{2(r_\varepsilon-\bar d-\varrho)(r_\varepsilon-\varepsilon)\varepsilon^2} \bigg\{\frac{\bar r^2}{2} \log\frac{1+\bar r}{\bar r} + \frac 12 \int_0^{\bar r}\!\mathrm{d}r\, \frac{r}{1+r} \bigg\},\end{aligned}$$ with $\bar r$ as in Eq. [\[br=\]](#br=){reference-type="eqref" reference="br="}. Therefore there is $C_L = C_L(\alpha,|a|,\bar d, \varrho)>0$ such that, for any $\varepsilon\in (0,\varepsilon_0)$, $$\label{uL<} |u^i_L(x,t)| = |w^i_L(x,t)| \le \frac{|a_i|}{4\pi\alpha} + \frac{C_L}{|\log\varepsilon|} \quad \forall\, x\in \Sigma(\zeta^i(t)|\varrho) \quad \forall\, t\in [0,T_\varepsilon]\,.$$ **Proposition 7**. *Let $$m^i_t(R) := \int_{\Sigma(B^{i,\varepsilon}(t)|R)^\complement}\!\mathrm{d}x\, |\omega_{i,\varepsilon}(x,t)| = \frac{a_i}{|a_i|} \int_{\Sigma(B^{i,\varepsilon}(t)|R)^\complement}\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t)$$ denote the amount of vorticity of the $i$-th ring outside the disk $\Sigma(B^{i,\varepsilon}(t)|R)$ at time $t$. Then, given $R>0$, for each $\ell>0$ there is $\widetilde T_\ell \in (0, T]$ such that $$\label{smt1} m^i_t(R) \le \frac{1}{|\log\varepsilon|^{\ell}} \quad \forall\, t\in[0, \widetilde T_\ell \wedge T_\varepsilon] \quad \forall\, i = 1,\ldots,N \,.$$ for any $\varepsilon\in (0,\varepsilon_0)$ sufficiently small.* *Proof.* Without loss of generality we assume hereafter $a_i=|a_i|$, i.e., $\omega_{i,\varepsilon}(t) \ge 0$, so that $$m^i_t(R) := \int_{\Sigma(B^{i,\varepsilon}(t)|R)^\complement}\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t)\,.$$ In what follows, $h$ and $R$ are two positive parameters to be fixed later and such that $R\ge 2h$. Let $x\mapsto W_{R,h}(x)$, $x\in {\mathbb R}^2$, be a non-negative smooth function, depending only on $|x|$, such that $$\label{W1} W_{R,h}(x) = \begin{cases} 1 & \text{if $|x|\le R$}, \\ 0 & \text{if $|x|\ge R+h$}, \end{cases}$$ and, for some $C_W>0$, $$\label{W2} |\nabla W_{R,h}(x)| < \frac{C_W}{h}\,,$$ $$\label{W3} |\nabla W_{R,h}(x)-\nabla W_{R,h}(x')| < \frac{C_W}{h^2}\,|x-x'|\,.$$ The quantity $$\label{mass 1} \mu_t(R,h) = \int\! 
\mathrm{d}x \, \big[1-W_{R,h}(x-B^{i,\varepsilon}(t))\big]\, \omega_{i,\varepsilon}(x,t)\,,$$ is a mollified version of $m^i_t$, satisfying $$\label{2mass 3} \mu_t(R,h) \le m^i_t(R) \le \mu_t(R-h,h)\,,$$ so that it is enough to prove [\[smt1\]](#smt1){reference-type="eqref" reference="smt1"} with $\mu_t$ in place of $m^i_t$. To this end, we study the variation in time of $\mu_t(R,h)$ by applying [\[weq\]](#weq){reference-type="eqref" reference="weq"} with test function $f(x,t) = 1-W_{R,h}(x-B^{i,\varepsilon}(t))$, getting $$\frac{\mathrm{d}}{\mathrm{d}t} \mu_t(R,h) = - \int\! \mathrm{d}x\, \nabla W_{R,h}(x-B^{i,\varepsilon}(t)) \cdot [u(x,t) - \dot B^{i,\varepsilon}(t)]\,\omega_{i,\varepsilon}(x,t)\,.$$ The time derivative of the center of vorticity can be computed by applying again Eq. [\[weq\]](#weq){reference-type="eqref" reference="weq"} (with now test function $f(x,t) =x$), so that $$\begin{aligned} \label{bpunto} \dot B^{i,\varepsilon}(t) = \frac{1}{a_i} \int\!\mathrm{d}x\, \big[ u^i_L(x,t) + u^i_{{\mathcal R}}(x,t) + F^i(x,t)\big] \omega_{i,\varepsilon}(x,t)\,,\end{aligned}$$ having used the decomposition Eq. [\[decom_u\]](#decom_u){reference-type="eqref" reference="decom_u"} and that $\int\!\mathrm{d}x\, \omega_{i,\varepsilon}(x,t) (K*\omega_{i,\varepsilon})(x,t) = 0$ by the antisymmetry of $K$. We thus conclude that $$\label{mass 4} \frac{\mathrm{d}}{\mathrm{d}t} \mu_t(R,h) = - A_1 - A_2 - A_3\,,$$ with $$\begin{split} A_1 & = \int\! \mathrm{d}x\, \nabla W_{R,h}(x-B^{i,\varepsilon}(t)) \cdot (K*\omega_{i,\varepsilon})(x,t)\, \omega_{i,\varepsilon}(x,t) \\ & = \frac 12 \int\! \mathrm{d}x \! \int\! \mathrm{d}y\, [\nabla W_{R,h}(x-B^{i,\varepsilon}(t)) - \nabla W_{R,h}(y-B^{i,\varepsilon}(t))] \\ & \qquad \cdot K(x-y) \, \omega_{i,\varepsilon}(x,t)\, \omega_{i,\varepsilon}(y,t) \\ A_2 & = \int\! \mathrm{d}x\, \nabla W_{R,h}(x-B^{i,\varepsilon}(t)) \cdot \big[u^i_L(x,t) + u^i_{{\mathcal R}} (x,t) \big] \omega_{i,\varepsilon}(x,t) \\ & \;\; - \frac{1}{a_i}\int\! \mathrm{d}x\, \nabla W_{R,h}(x-B^{i,\varepsilon}(t)) \cdot \int\! \mathrm{d}y \,\big[u^i_L(y,t) + u^i_{{\mathcal R}} (y,t) \big] \omega_{i,\varepsilon}(y,t)\, \omega_{i,\varepsilon}(x,t)\,, \\ A_3 & = \frac{1}{a_i} \int\! \mathrm{d}x\, \nabla W_{R,h}(x-B^{i,\varepsilon}(t)) \cdot \int\! \mathrm{d}y \,\big[F^i(x,t) - F^i (y,t) \big] \omega_{i,\varepsilon}(y,t)\, \omega_{i,\varepsilon}(x,t)\,, \end{split}$$ where the second expression of $A_1$ is due to the antisymmetry of $K$. Concerning $A_1$, we introduce the new variables $x'=x-B^{i,\varepsilon}(t)$, $y'=y-B^{i,\varepsilon}(t)$, define $\tilde\omega_{i,\varepsilon}(z,t) := \omega_{i,\varepsilon}(z+B^{i,\varepsilon}(t),t)$, and let $$f(x',y') = \frac 12 \tilde\omega_{i,\varepsilon}(x',t)\, \tilde\omega_{i,\varepsilon}(y',t) \, [\nabla W_{R,h}(x')-\nabla W_{R,h}(y')] \cdot K(x'-y') \,,$$ so that $A_1 = \int\!\mathrm{d}x' \! \int\!\mathrm{d}y'\,f(x',y')$. We observe that $f(x',y')$ is a symmetric function of $x'$ and $y'$ and that, by [\[W1\]](#W1){reference-type="eqref" reference="W1"}, a necessary condition to be different from zero is if either $|x'|\ge R$ or $|y'|\ge R$. Therefore, $$\begin{split} A_1 &= \bigg[ \int_{|x'| > R}\!\mathrm{d}x' \! \int\!\mathrm{d}y' + \int\!\mathrm{d}x' \! \int_{|y'| > R}\!\mathrm{d}y' - \int_{|x'| > R}\!\mathrm{d}x' \! \int_{|y'| > R}\!\mathrm{d}y'\bigg]f(x',y') \\ & = 2 \int_{|x'| > R}\!\mathrm{d}x' \! \int\!\mathrm{d}y'\,f(x',y') - \int_{|x'| > R}\!\mathrm{d}x' \! 
\int_{|y'| > R}\!\mathrm{d}y'\,f(x',y') \\ & = A_1' + A_1'' + A_1'''\,, \end{split}$$ with $$\begin{split} A_1' & = 2 \int_{|x'| > R}\!\mathrm{d}x' \! \int_{|y'| \le \frac{R}{2}}\!\mathrm{d}y'\,f(x',y') \,, \quad A_1'' = 2 \int_{|x'| > R}\!\mathrm{d}x' \! \int_{|y'| > \frac{R}{2}}\!\mathrm{d}y'\,f(x',y')\,, \\ A_1''' & = - \int_{|x'| > R}\!\mathrm{d}x' \! \int_{|y'| > R}\!\mathrm{d}y'\,f(x',y')\,. \end{split}$$ By the assumptions on $W_{R,h}$, we have $\nabla W_{R,h}(z) = \eta_h(|z|) z/|z|$ with $\eta_h(|z|) =0$ for $|z| \le R$. In particular, $\nabla W_{R,h}(y') = 0$ for $|y'| \le R/2$, hence $$A_1' = \int_{|x'| > R}\!\mathrm{d}x' \, \tilde\omega_{i,\varepsilon}(x',t) \eta_h(|x'|) \,\frac{x'}{|x'|} \cdot \int_{|y'| \le \frac{R}{2}}\!\mathrm{d}y'\, K(x'-y') \, \tilde\omega_{i,\varepsilon}(y',t)\,.$$ In view of [\[W2\]](#W2){reference-type="eqref" reference="W2"}, $|\eta_h(|z|)| \le C_W/h$, so that $$\label{a1'} |A_1'| \le \frac{C_W}{h} m^i_t(R) \sup_{|x'| > R} |H_1(x')|\,,$$ with $$H_1(x') = \frac{x'}{|x'|}\cdot \int_{|y'| \le \frac{R}{2}}\!\mathrm{d}y'\, K(x'-y') \, \tilde\omega_{i,\varepsilon}(y',t) \,.$$ Now, recalling [\[Kern\]](#Kern){reference-type="eqref" reference="Kern"} and using that $x'\cdot (x'-y')^\perp=-x'\cdot y'^\perp$, we get, $$\label{in H_11} H_1(x') = \frac{1}{2\pi} \int_{|y'|\leq \frac{R}{2}}\! \mathrm{d}y'\, \frac{x'\cdot y'^\perp}{|x'||x'-y'|^2}\, \tilde\omega_{i,\varepsilon}(y',t) \,.$$ By [\[c.m.\]](#c.m.){reference-type="eqref" reference="c.m."}, $\int\! \mathrm{d}y'\, y'^\perp\, \tilde\omega_{i,\varepsilon}(y',t) = 0$, so that $$\label{in H_13} H_1(x') = H_1'(x')-H_1''(x')\,,$$ where $$\begin{aligned} && H_1'(x') = \frac{1}{2\pi} \int_{|y'|\le \frac{R}{2}}\! \mathrm{d}y'\, \frac {x'\cdot y'^\perp}{|x'|}\, \frac {y'\cdot (2x'-y')}{|x'-y'|^2 \ |x'|^2} \, \tilde\omega_{i,\varepsilon}(y',t) \,, \\ && H_1''(x')= \frac{1}{2\pi} \int_{|y'|> \frac{R}{2}}\! \mathrm{d}y'\, \frac{x'\cdot y'^\perp}{|x'|^3}\, \tilde\omega_{i,\varepsilon}(y',t) \,.\end{aligned}$$ We notice that if $|x'| > R$ then $|y'| \le \frac{R}{2}$ implies $|x'-y'|\ge \frac{R}{2}$ and $|2x'-y'|\le |x'-y'|+|x'|$. Therefore, for any $|x'| > R$, $$|H_1'(x')| \le \frac 1\pi \bigg[\frac{1}{|x'|^2 R} + \frac{2}{|x'|R^2} \bigg] \int_{|y'|\leq \frac{R}{2}} \! \mathrm{d}y'\, |y'|^2 \, \tilde\omega_{i,\varepsilon}(y',t) \le \frac{3 J_{i,\varepsilon}(t)}{\pi R^3}\,.$$ To bound $H_1''(x')$, by Chebyshev's inequality, for any $|x'| > R$ we have, $$|H_1''(x')| \le \frac{1}{2\pi |x'|^2} \int_{|y'|> \frac{R}{2}}\! \mathrm{d}y'\, |y'| \tilde\omega_{i,\varepsilon}(y',t) \le \frac{J_{i,\varepsilon}(t)}{ \pi R^3} \,.$$ From Eqs. [\[a1\'\]](#a1'){reference-type="eqref" reference="a1'"}, [\[in H_13\]](#in H_13){reference-type="eqref" reference="in H_13"}, and the previous estimates, we conclude that $$\label{H_14b} |A_1'| \le \frac{4C_W J_{i,\varepsilon}(t)}{\pi R^3 h} m^i_t(R)\,.$$ Now, by [\[W3\]](#W3){reference-type="eqref" reference="W3"} and then applying the Chebyshev's inequality, $$\begin{aligned} \label{badterm} |A_1''| + |A_1'''| & \le \frac{C_W}{\pi h^2} \int_{|x'| \ge R}\!\mathrm{d}x' \! 
\int_{|y'| \ge \frac{R}{2}}\!\mathrm{d}y'\,\tilde\omega_{i,\varepsilon}(y',t) \, \tilde\omega_{i,\varepsilon}(x',t) \nonumber \\ & = \frac{C_W}{\pi h^2}m^i_t(R) \int_{|y'| \ge \frac{R}{2}}\!\mathrm{d}y'\, \tilde\omega_{i,\varepsilon}(y',t) \le \frac{4C_W J_{i,\varepsilon}(t)}{\pi R^2h^2} m^i_t(R)\,.\end{aligned}$$ In conclusion, $$\label{a1s} |A_1| \le \frac{4C_W}{\pi} \left( \frac{1}{R^3 h} + \frac{1}{R^2 h^2}\right) J_{i,\varepsilon}(t) m^i_t(R)\,.$$ Concerning $A_2$ and $A_3$, we observe that by [\[W1\]](#W1){reference-type="eqref" reference="W1"} the integrand is different from zero only if $R\le |x-B^{i,\varepsilon}(t)|\le R+h$ and $x,y\in \Lambda_{i,\varepsilon}(t) \subset \Sigma(\zeta^i(t)|\varrho)$. Therefore, by Eqs. [\[stimR\]](#stimR){reference-type="eqref" reference="stimR"}, [\[uL\<\]](#uL<){reference-type="eqref" reference="uL<"}, using again the variables $x'=x-B^{i,\varepsilon}(t)$, $y'=y-B^{i,\varepsilon}(t)$, and that $\int\!\mathrm{d}y\, \omega_{i,\varepsilon}(y,t)=a_i$, see Eq. [\[ait\]](#ait){reference-type="eqref" reference="ait"}, $$\label{a2s} |A_2| \le \frac{2C_W}{h}\left[\frac{|a_i|}{4\pi\alpha} + \frac{C_{{\mathcal R}}\log|\log\varepsilon|+C_L}{|\log\varepsilon|} \right] m^i_t(R)\,,$$ while, from the bounds on $F^i$ and its Lipschitz constant in Eq. [\[stimF\]](#stimF){reference-type="eqref" reference="stimF"}, $$\begin{aligned} \label{a3s} |A_3| & \le \frac{2C_W C_F}{a_ih} \int_{|x'|\ge R}\!\mathrm{d}x' \tilde\omega_{i,\varepsilon}(x',t) \int_{|y'|> R}\!\mathrm{d}y'\, \tilde\omega_{i,\varepsilon}(y',t) \nonumber \\ & \quad + \frac{C_WC_F}{a_ih} \int_{R \le |x'|\le R+h}\!\mathrm{d}x' \tilde\omega_{i,\varepsilon}(x',t) \int_{|y'| \le R}\!\mathrm{d}y'\,|x'- y'| \, \tilde\omega_{i,\varepsilon}(y',t) \nonumber \\ & \le \frac{2C_W C_F J_{i,\varepsilon}(t)}{a_iR^2h} m^i_t(R) + C_WC_F \bigg(1+\frac{2R}h\bigg) m^i_t(R)\,,\end{aligned}$$ where we used that $|x'-y'| \le 2R+h$ in the domain of integration of the last integral and Chebyshev's inequality in the first one. From Eqs. [\[a1s\]](#a1s){reference-type="eqref" reference="a1s"}, [\[a2s\]](#a2s){reference-type="eqref" reference="a2s"}, [\[a3s\]](#a3s){reference-type="eqref" reference="a3s"}, and Theorem [Theorem 5](#thm:J<){reference-type="ref" reference="thm:J<"} we deduce that $$\label{2mass 4''} \frac{\mathrm{d}}{\mathrm{d}t} \mu_t(R,h) \le A_\varepsilon(R,h) m^i_t(R)\quad \forall\, t\in [0,T_\varepsilon] \,,$$ where, for each $\gamma \in (0,1)$, $$\begin{aligned} \label{mass 4bis} A_\varepsilon(R,h) &= \frac{2C_W}{h} \bigg[ C_FR + C_F \frac h2 + \frac{|a_i|}{4\pi\alpha} + \frac{C_{{\mathcal R}}\log|\log\varepsilon|+C_L}{|\log\varepsilon|} + \frac{C_F}{a_i|\log\varepsilon|^{\gamma}R^2} \nonumber \\ & \qquad\qquad + \frac{1}{|\log\varepsilon|^{\gamma}R^3} + \frac{1}{|\log\varepsilon|^{ \gamma}R^2 h} \bigg],\end{aligned}$$ for any $\varepsilon\in (0,\varepsilon_\gamma)$ with $\varepsilon_\gamma$ as in Theorem [Theorem 5](#thm:J<){reference-type="ref" reference="thm:J<"}. Therefore, by Eqs. [\[2mass 3\]](#2mass 3){reference-type="eqref" reference="2mass 3"} and [\[2mass 4\'\'\]](#2mass 4''){reference-type="eqref" reference="2mass 4''"}, $$\label{mass 14'} \mu_t(R,h) \le \mu_0(R,h) + A_\varepsilon(R,h) \int_{0}^t \mathrm{d}s\, \mu_s(R-h,h) \quad \forall\, t\in [0,T_\varepsilon] \,.$$ We iterate the last inequality $n = \lfloor\log|\log\varepsilon|\rfloor$ times,[^1] from $R_0 = R - h$ to $R_n = R -(n+1)h = R/2$. Since $h = R/(2n+2)$ and $R_j\in [R/2, R]$, from the explicit expression Eq.
[\[mass 4bis\]](#mass 4bis){reference-type="eqref" reference="mass 4bis"} and using that $|a_i| < |a|$, it is readily seen that if $\varepsilon$ is sufficiently small then $$A_\varepsilon(R_j,h) \le A_* \frac nR \quad \forall\, j=0,\ldots,n\,,$$ with $$\label{a*} A_*= 4 C_W \left(C_FR +\frac{|a|}{4\pi\alpha}\right).$$ Therefore, for any $\varepsilon$ small enough and $t\in [0,T_\varepsilon]$, $$\begin{split} \mu_t(R-h,h) & \le \mu_0(R-h,h) + \sum_{j=1}^n \mu_0(R_j,h) \frac{(A_*nt/R)^j}{j!} \\ & \quad + \frac{(A_*n/R)^{n+1}}{n!} \int_0^t\!\mathrm{d}s\, (t-s)^n\mu_s(R_{n+1},h) \,. \end{split}$$ Since $\Lambda_{i,\varepsilon}(0) \subset \Sigma(\zeta^i_0|\varepsilon)$, if $\varepsilon$ is sufficiently small then $\mu_0(R_j,h)=0$ for any $j=0,\ldots,n$. Therefore, recalling also Eq. [\[2mass 3\]](#2mass 3){reference-type="eqref" reference="2mass 3"}, for any $\varepsilon$ small enough, $$\begin{aligned} \label{mass 15'} m^i_t(R) & \le \mu_t(R-h,h) \le \frac{(A_*n/R)^{n+1}}{n!} \int_0^t\!\mathrm{d}s\, (t-s)^n\mu_s(R_{n+1},h) \nonumber \\ & \le \frac{9}{R^2|\log\varepsilon|^\gamma} \frac{(A_*nt/R)^{n+1}}{(n+1)!} \le \frac{9}{R^2|\log\varepsilon|^\gamma} \left(\frac{\mathrm{e}A_* t}{R}\right)^{n+1} \quad \forall\, t\in [0,T_\varepsilon]\,,\end{aligned}$$ where we used the Chebyshev's inequality and Theorem [Theorem 5](#thm:J<){reference-type="ref" reference="thm:J<"} in the third inequality, to estimate $$\mu_s(R_{n+1},h) \le m^i_s(R_{n+1}) = m^i_s(R/2) \le \frac{J_{i,\varepsilon}(s)}{(R/2-h)^2} \le \frac{9}{R^2|\log\varepsilon|^\gamma}\,,$$ and the Stirling approximation in the last one. Since $n = \lfloor\log|\log\varepsilon|\rfloor$ Eq. [\[mass 15\'\]](#mass 15'){reference-type="eqref" reference="mass 15'"} implies the bound [\[smt1\]](#smt1){reference-type="eqref" reference="smt1"} for any $\varepsilon$ sufficiently small choosing, e.g., $\widetilde T_\ell = (R/A_*)\mathrm{e}^{-\ell-1} \wedge T$. ◻ **Proposition 8**. *Let $m^i_t(R)$ be as in Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"}. Then, given $R>0$, for each $\ell>0$ there is $\bar T_\ell \in (0, T]$ such that $$\label{smt} m^i_t(R) \le \varepsilon^{\ell} \quad \forall\, t\in[0, \bar T_\ell \wedge T_\varepsilon]$$ for any $\varepsilon\in (0,\varepsilon_0)$ sufficiently small.* *Proof.* The strategy used in the proof of Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"} would give the stronger estimate Eq. [\[smt\]](#smt){reference-type="eqref" reference="smt"} if we could choose $n = \lfloor |\log\varepsilon|\rfloor$. But this means $h \sim |\log\varepsilon|^{-1}$, which seems not acceptable since it implies that the last term in the right-hand side of Eq. [\[mass 4bis\]](#mass 4bis){reference-type="eqref" reference="mass 4bis"} diverges as $\varepsilon$ vanishes. This dangerous term comes from Eq. [\[badterm\]](#badterm){reference-type="eqref" reference="badterm"}, where the term $\int_{|y'| \ge \frac{R}{2}}\!\mathrm{d}y'\, \tilde\omega_{i,\varepsilon}(y',t) = m^i_t(R/2)$ is bounded from above by the moment of inertia. But now, Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"} applied with $R/4$ in place of $R$ and, e.g., $\ell=2$, gives $$m^i_t(R/4) \le \frac{1}{|\log\varepsilon|^2} \quad \forall\, t\in[0, \widetilde T_2 \wedge T_\varepsilon]$$ for any $\varepsilon$ small enough. Therefore, besides the bound Eq. 
[\[badterm\]](#badterm){reference-type="eqref" reference="badterm"} (which holds for any $t\in [0,T_\varepsilon]$), we also have $$|A_1''| + |A_1'''| \le \frac{C_W}{\pi |\log\varepsilon|^2 h^2} m^i_t(R) \quad \forall\, t\in[0, \widetilde T_2 \wedge T_\varepsilon]\,.$$ We thus arrive, in place of Eqs. [\[mass 14\'\]](#mass 14'){reference-type="eqref" reference="mass 14'"}, to the integral inequality $$\mu_t(R',h) \le \mu_0(R',h) + A_\varepsilon(R',h) \int_{0}^t \mathrm{d}s\, \mu_s(R'-h,h) \quad \forall\, t\in [0,\widetilde T_2 \wedge T_\varepsilon] \,,$$ with now $$\begin{split} A_\varepsilon(R',h) &= \frac{2C_W}{h} \bigg[ C_FR' + C_F\frac h2 + \frac{|a_i|}{4\pi\alpha} + \frac{C_{{\mathcal R}}\log|\log\varepsilon|+C_L}{|\log\varepsilon|} + \frac{C_F}{a_i|\log\varepsilon|^{\gamma}(R')^2} \nonumber \\ & \qquad\qquad + \frac{1}{|\log\varepsilon|^{\gamma}(R')^3} + \frac{1}{|\log\varepsilon|^2 h} \bigg] \end{split}$$ for any $R'\ge R/2$ and $\varepsilon$ small enough. This inequality can be iterated $n = \lfloor|\log\varepsilon|\rfloor$ times, from $R'_0 = R-h$ to $R'_n = R -(n+1)h = R/2$, and arguing as done in Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"} we obtain, for any $\varepsilon$ small enough, $$m^i_t(R) \le\frac{9}{R^2|\log\varepsilon|^\gamma} \left(\frac{\mathrm{e}A_* t}{R}\right)^{n+1} \quad \forall\, t\in [0,\widetilde T_2 \wedge T_\varepsilon] \,,$$ which implies the bound Eq. [\[smt\]](#smt){reference-type="eqref" reference="smt"} for any $\varepsilon$ sufficiently small choosing $\bar T_\ell = (R/A_*) \mathrm{e}^{-\ell-1} \wedge \widetilde T_2$. ◻ *Remark 1*. For later discussion, we give an explicit lower bound of the threshold $\bar T_\ell$ when $\ell>2$. From the proof of Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"}, $\widetilde T_\ell = (R/A_*) \mathrm{e}^{-\ell-1} \wedge T$, $A_*$ as in Eq. [\[a\*\]](#a*){reference-type="eqref" reference="a*"}, that with $R/4$ in place of $R$ and $\ell=2$ gives $$\widetilde T_2 = \frac{R\mathrm{e}^{-3}}{4C_W} \left(C_F R +\frac{|a|}{\pi\alpha}\right)^{-1} \wedge T.$$ Therefore Eq. [\[smt\]](#smt){reference-type="eqref" reference="smt"} holds for any $i=1,\ldots, N$, choosing, e.g., $$\bar T_\ell = \frac{R\mathrm{e}^{- \ell -1}}{4C_W} \left(C_F R +\frac{|a|}{\pi\alpha}\right)^{-1} \wedge T.$$ # Localization of vortices support {#sec:5} To enforce the condition on the support of the vortex rings in Eq. [\[Teps\]](#Teps){reference-type="eqref" reference="Teps"}, we first show that these supports remain confined inside small disks centered in the corresponding centers of vorticity. To this end, we need to evaluate the force acting on the fluid particles furthest from the center of vorticity. **Lemma 9**. *Recall the definition Eq. 
[\[Teps\]](#Teps){reference-type="eqref" reference="Teps"} of $T_\varepsilon$ and define $$\label{Rt} R_t:= \max\{|x-B^{i,\varepsilon}(t)|\colon x\in \Lambda_{i,\varepsilon}(t)\}.$$ Let $x(t)$ be the solution to [\[eqchar_n\]](#eqchar_n){reference-type="eqref" reference="eqchar_n"} with initial condition $x(0) = x_0 \in \Lambda_{i,\varepsilon}(0)$ and suppose at time $t\in (0,T_\varepsilon)$ it happens that $$\label{hstimv} |x(t)-B^{i,\varepsilon}(t)| = R_t.$$ Then, at this time $t$, for each fixed $\gamma\in(0,1)$ and any $\varepsilon$ small enough, $$\label{stimv} \frac{\mathrm{d}}{\mathrm{d}t} |x(t)- B^{i,\varepsilon}(t)| \le 2 C_F R_t + \frac{|a_i|}{\pi\alpha} + \frac{4}{\pi|\log\varepsilon|^\gamma R_t^3} + \sqrt{\frac{M m^i_t(R_t/2)}{\varepsilon^2}}\,,$$ with $M$ as in Eq. [\[Mgamma\]](#Mgamma){reference-type="eqref" reference="Mgamma"} and $C_F$ as in Eq [\[stimF\]](#stimF){reference-type="eqref" reference="stimF"}.* *Proof.* Letting $x=x(t)$, from [\[decom_u\]](#decom_u){reference-type="eqref" reference="decom_u"} and [\[bpunto\]](#bpunto){reference-type="eqref" reference="bpunto"} we have, $$\begin{split} \frac{\mathrm{d}}{\mathrm{d}t} |x- B^{i,\varepsilon}(t)| & = \big(u(x,t) - \dot B^{i,\varepsilon}(t)\big) \cdot \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|} \\ & = (K*\omega_{i,\varepsilon})(x,t) \cdot \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|} + U(x,t)\,, \end{split}$$ with $$\begin{split} U(x,t) & = \bigg[ u^i_L(x,t) + u^i_{{\mathcal R}}(x,t) - \frac{1}{a_i} \int\!\mathrm{d}y\, \, \big( u^i_L(y,t) + u^i_{{\mathcal R}}(y,t)\big) \omega_{i,\varepsilon}(y,t) \\ & \quad + \frac{1}{a_i} \int\!\mathrm{d}y\, \, \big[F^i(x,t) - F^i(y,t)\big] \omega_{i,\varepsilon}(y,t) \bigg] \cdot \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|}\,. \end{split}$$ To evaluate the first term in the right-hand side, we split the domain of integration into the disk ${\mathcal D}=\Sigma(B^{i,\varepsilon}(t)|R_t/2)$ and the annulus ${\mathcal A}= \Sigma(B^{i,\varepsilon}(t)|R_t)\setminus\Sigma(B^{i,\varepsilon}(t)|R_t/2)$. Then, $$\label{in A_1,A_2} (K*\omega_{i,\varepsilon})(x,t) \cdot \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|} = H_{\mathcal D} + H_{\mathcal A},$$ where $$H_{\mathcal D}(x) = \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|} \cdot \int_{{\mathcal D}}\! \mathrm{d}y\, K(x-y)\, \omega_{i,\varepsilon}(y,t)$$ and $$H_{\mathcal A}(x)= \frac{x-B^{i,\varepsilon}(t)}{|x-B^{i,\varepsilon}(t)|} \cdot \int_{{\mathcal A}}\! \mathrm{d}y\, K(x-y)\, \omega_{i,\varepsilon}(y,t).$$ We observe that $H_{\mathcal D}(x)$ is exactly equal to the integral $H_1(x')$ appearing in Eq. [\[a1\'\]](#a1'){reference-type="eqref" reference="a1'"}, provided $x'=x-B^{i,\varepsilon}(t)$ and $R=R_t$. Moreover, to obtain Eq. [\[H_14b\]](#H_14b){reference-type="eqref" reference="H_14b"} we had to bound $H_1(x')$ for $|x'|\ge R$, which is exactly what we need now, as $|x-B^{i,\varepsilon}(t)|=R_t$. This estimate, adapted to the present context becomes $$\label{H_14} |H_{\mathcal D}| \le \frac{4 J_{i,\varepsilon}(t)}{\pi R_t^3} \le \frac{4}{\pi|\log\varepsilon|^\gamma R_t^3}\,,$$ where the last inequality holds for given $\gamma\in (0,1)$ and any $\varepsilon$ small enough according to Eq. [\[eq:J\<\]](#eq:J<){reference-type="eqref" reference="eq:J<"}. Regarding $H_{\mathcal A}$, by the definition Eq. [\[Kern\]](#Kern){reference-type="eqref" reference="Kern"} we have, $$|H_{\mathcal A}| \le \frac{1}{2\pi} \int_{{\mathcal A}}\! 
\mathrm{d}y\, \frac 1{|x-y|} \, |\omega_{i,\varepsilon}(y,t)|\,.$$ Since the integrand is monotonically unbounded as $y\to x$, the maximum possible value of the integral can be obtained performing a symmetrical rearrangement of the vorticity around the point $x$. In view of Eq. [\[omt\<\]](#omt<){reference-type="eqref" reference="omt<"} and since $m^i_t(R_t/2)$ is equal to the total amount of vorticity in ${\mathcal A}$, this rearrangement reads, $$|H_{\mathcal A}| \le \frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{2\pi\varepsilon^2} \int_{\Sigma (0|\rho_*)}\!\mathrm{d}y'\, \frac{1}{|y'|} =\frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2} \rho_*,$$ where the radius $\rho_*$ is such that $$\frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{M}{\varepsilon^2}\pi\rho_*^2 = m^i_t(R_t/2)\,.$$ Therefore, $$\label{h2} |H_{\mathcal A}| \le \sqrt{\frac{r_\varepsilon+\bar d+\varrho}{r_\varepsilon-\varepsilon} \frac{Mm^i_t(R_t/2)}{\pi\varepsilon^2}} \le \sqrt{\frac{M m^i_t(R_t/2)}{\varepsilon^2}},$$ where the second inequality is valid for $\varepsilon$ small enough. Finally, by Eqs. [\[stimR\]](#stimR){reference-type="eqref" reference="stimR"}, [\[uL\<\]](#uL<){reference-type="eqref" reference="uL<"} (recall $x=x(t) \in \Lambda_{i,\varepsilon}(t) \subset \Sigma(\zeta^i(t)|\varrho)$) and the bound on the Lipschitz constant of $F^i$ in Eq. [\[stimF\]](#stimF){reference-type="eqref" reference="stimF"}, $$|U(x,t)| \le 2 \bigg[C_FR_t + \frac{|a_i|}{4\pi\alpha} + \frac{C_{{\mathcal R}}\log|\log\varepsilon|+C_L}{|\log\varepsilon|}\bigg].$$ From this, Eqs. [\[H_14\]](#H_14){reference-type="eqref" reference="H_14"} and [\[h2\]](#h2){reference-type="eqref" reference="h2"}, the estimate Eq. [\[stimv\]](#stimv){reference-type="eqref" reference="stimv"} follows provided $\varepsilon$ is chosen sufficiently small. ◻ **Proposition 10**. *There exists $T_\varrho' \in (0,T]$ such that, for any $\varepsilon$ small enough, $$\label{supp_pro} \Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|\varrho/2) \quad \forall\, t\in [0, T_\varrho'\wedge T_\varepsilon]\,.$$* *Proof.* Let $\bar T_3$ be as in Proposition [Proposition 8](#prop:2){reference-type="ref" reference="prop:2"} with the choice $R=\varrho/10$ and $\ell=3$. Recalling the definition Eq. [\[Rt\]](#Rt){reference-type="eqref" reference="Rt"}, we set $$t_1 := \sup\{t\in [0,\bar T_3 \wedge T_\varepsilon] \colon R_s \le \varrho/2 \;\, \forall s \in [0,t]\}\,,$$ If $t_1 = \bar T_3 \wedge T_\varepsilon$, Eq. [\[supp_pro\]](#supp_pro){reference-type="eqref" reference="supp_pro"} is proved with $T_\varrho' = \bar T_3$. Otherwise, if $t_1<\bar T_3 \wedge T_\varepsilon$ we define $$t_0 = \inf\{t\in [0,t_1]\colon R_s > \varrho/5 \;\;\forall\, s\in [t,t_1] \}\,.$$ We observe that $t_0>0$ for any $\varepsilon$ small enough since $R_0\le \varepsilon$. Moreover, $R_{t_1} = \varrho/2$, $R_{t_0} = \varrho/5$, and $R_t \in [\varrho/5,\varrho/2]$ for any $t\in [t_0,t_1]$. In particular, $m_t(R_t/2)\le m_t(\varrho/10) \le \varepsilon^3$ for any $t\in [t_0,t_1]$ and $\varepsilon$ small enough. Clearly, to prove Eq. [\[supp_pro\]](#supp_pro){reference-type="eqref" reference="supp_pro"} it is enough to show that there exists $T_\varrho'\in (0,\bar T_3]$ such that $t_1-t_0 \ge T_\varrho'\wedge T_\varepsilon$ for any $\varepsilon$ small enough. 
To this end, we notice that by Lemma [Lemma 9](#lem:5.1){reference-type="ref" reference="lem:5.1"}, if $\varepsilon$ is small enough then $\Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|R(t))$ for any $t\in (t_0,t_1)$, with $R(t)$ solution to $$\label{stimrbis} \dot R(t) = 2 C_FR(t) + \frac{|a|}{\pi\alpha} + \frac{4}{\pi|\log\varepsilon|^\gamma R(t)^3} + g_\varepsilon(t) \,, \quad R(t_0) = \varrho/4\,,$$ where $g_\varepsilon(t)$ is any smooth function which is an upper bound for the last term in Eq. [\[stimv\]](#stimv){reference-type="eqref" reference="stimv"}. Indeed, this is true for $t=t_0$; suppose, by contradiction, that there were a first time $t_*\in (t_0,t_1)$ such that $|x(t_*)-B^{i,\varepsilon}(t_*)| = R(t_*)$ for some fluid particle initially located at $x(0) = x_0\in \Lambda_{i,\varepsilon}(0)$. Then $R(t_*) = R_{t_*}$ in view of Eq. [\[Rt\]](#Rt){reference-type="eqref" reference="Rt"}, and hence, by Eq. [\[stimv\]](#stimv){reference-type="eqref" reference="stimv"} and using that $|a_i| < |a|$, the radial velocity of $x(t)- B^{i,\varepsilon}(t)$ at $t=t_*$ would be strictly smaller than $\dot R(t_*)$, in contradiction with the definition of $t_*$ as the first time at which the graph of $t \mapsto |x(t)-B^{i,\varepsilon}(t)|$ crosses that of $t\mapsto R(t)$. Since $m_t(R_t/2) \le \varepsilon^3$, we can choose $g_\varepsilon(t) \le 2\sqrt {M\varepsilon}$ for any $t\in [t_0,t_1]$ and $\varepsilon$ small enough. Therefore, by [\[stimrbis\]](#stimrbis){reference-type="eqref" reference="stimrbis"}, $$\dot R(t) \le 2 C_F R(t) + \frac{|a|}{\pi\alpha} + \frac{4}{\pi|\log\varepsilon|^\gamma (\varrho/4)^3} + 2\sqrt{M\varepsilon} \quad \forall\, t\in [t_0,t_1]\,,$$ where we also used that $R(t) \ge R(t_0)=\varrho/4$ to estimate from above the nonlinear term in the right-hand side of Eq. [\[stimrbis\]](#stimrbis){reference-type="eqref" reference="stimrbis"}. This means that, e.g., $\dot R(t) \le 2C_F R(t) + |a|/\alpha$ for any $t\in [t_0,t_1]$ and $\varepsilon$ small enough, whence $$R(t_1) \le \mathrm{e}^{2C_F(t_1-t_0)} R(t_0) + \frac{|a|}{2C_F \alpha}\big(\mathrm{e}^{2C_F(t_1-t_0)} -1 \big)\,,$$ i.e., $$\label{rt12} t_1-t_0 \ge \frac{1}{2C_F} \log\left(\frac{2C_F\alpha R(t_1) + |a|}{2C_F\alpha R(t_0) + |a|} \right).$$ Therefore, as $R(t_1) \ge \varrho/2$ and $R(t_0) = \varrho/4$, the claim follows with $$T_\varrho' = \frac{1}{2C_F} \log\left( \frac{4C_F\alpha\varrho + 4|a|}{2C_F\alpha\varrho + 4 |a|} \right) \wedge \bar T_3\,,$$ where, according to what was discussed in Remark [Remark 1](#rem:4.1){reference-type="ref" reference="rem:4.1"}, we can choose $$\bar T_3 = \frac{\varrho\,\mathrm{e}^{-4}}{4C_W} \left(C_F \varrho +\frac{10|a|}{\pi\alpha}\right)^{-1} \wedge T\,.$$ The proposition is proved. ◻ # Conclusion of the proof of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} {#sec:6} In this section we prove the following limits, $$\begin{gathered} \label{secl} \lim_{\varepsilon\to 0} \max_{i=1,\ldots,N} \max_{t\in [0, T_\varepsilon]}\big| B^{i,\varepsilon}(t) - \zeta^i(t) \big| = 0\,, \\ \label{pril} \lim_{\varepsilon\to 0} \max_{i=1,\ldots,N} \max_{t\in [0,T_\varepsilon]} |B^{i,\varepsilon}(t)-q^{i,\varepsilon}(t)| = 0\,,\end{gathered}$$ with $q^{i,\varepsilon}(t)$ as in Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"}. This concludes the proof of the main theorem. Indeed, from Eq.
[\[secl\]](#secl){reference-type="eqref" reference="secl"} and Proposition [Proposition 10](#prop:5.1){reference-type="ref" reference="prop:5.1"} it follows by continuity that $T_\varepsilon\ge T_\varrho'$ for any $\varepsilon$ small enough. Therefore, in view of Eq. [\[pril\]](#pril){reference-type="eqref" reference="pril"} and applying Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"} with, e.g., $R=R_\varepsilon=\exp\sqrt{|\log\varepsilon|}$, the statement of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} is proved with $T_\varrho=T_\varrho'$, $\zeta^{i,\varepsilon}(t) = q^{i,\varepsilon}(t)$, and $\varrho_\varepsilon= \varepsilon R_\varepsilon$. *Proof of Eq. ([\[secl\]](#secl){reference-type="ref" reference="secl"}).* In what follows, we shall denote by $C$ a generic positive constant, whose numerical value may change from line to line. Let $$\Delta(t) := \sum_i |B^{i,\varepsilon}(t) - \zeta^i(t)|^2\,, \quad t\in [0,T_\varepsilon]\,.$$ From Eqs. [\[ode\]](#ode){reference-type="eqref" reference="ode"}, [\[bpunto\]](#bpunto){reference-type="eqref" reference="bpunto"}, and noticing that $$F^i(x,t) = \sum_{j\ne i} \left[(K*\omega_{j,\varepsilon})(x,t) + u^j_L(x,t) + u^j_{{\mathcal R}}(x,t)\right],$$ we have $$\dot \Delta(t) = 2\sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot (\dot B^{i,\varepsilon}(t) - \dot \zeta^i(t)) = 2\sum_{p=1}^4 \sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot D^i_p(t)\,,$$ where $$\begin{gathered} D^i_1(t) = \frac{1}{a_i} \sum_{j\ne i} \int\!\mathrm{d}x \! \int\!\mathrm{d}y\, \left[ K(x-y) - K(\zeta^i(t)-\zeta^j(t)) \right] \omega_{i,\varepsilon}(x,t)\, \omega_{j,\varepsilon}(y,t) \,, \\ D^i_2(t) = \frac{1}{a_i} \int\!\mathrm{d}x\, u^i_L(x,t)\, \omega_{i,\varepsilon}(x,t) - \frac{a_i}{4\pi\alpha} \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \\ D^i_3(t) = \frac{1}{a_i} \sum_j \int\!\mathrm{d}x\, u^j_{{\mathcal R}}(x,t) \,\omega_{i,\varepsilon}(x,t) \,, \quad D^i_4(t) = \frac{1}{a_i} \sum_{j\ne i} \int\!\mathrm{d}x \, u^j_L(x,t) \, \omega_{i,\varepsilon}(x,t)\,.\end{gathered}$$ By Eqs. [\[Kern\]](#Kern){reference-type="eqref" reference="Kern"} and [\[sep-disks\]](#sep-disks){reference-type="eqref" reference="sep-disks"} $$\begin{aligned} |D^i_1(t)| & \le \frac{C}{\varrho^2|a_i|} \sum_{j\ne i} \int\!\mathrm{d}x \! \int\!\mathrm{d}y\, \left(|x-\zeta^i(t)| + |y-\zeta^j(t)| \right) |\omega_{i,\varepsilon}(x,t)\, \omega_{j,\varepsilon}(y,t)| \\ & \le \frac{C}{\varrho^2} \sum_{j\ne i} |a_j| \left(|B^{i,\varepsilon}(t) - \zeta^i(t)| + |B^{j,\varepsilon}(t) -\zeta^j(t)| + \sqrt{\frac{J_{i,\varepsilon}(t)}{|a_i|}}+ \sqrt{\frac{J_{j,\varepsilon}(t)}{|a_j|}} \right),\end{aligned}$$ where in the last inequality we used the Cauchy-Schwarz inequality and Eq. [\[J\]](#J){reference-type="eqref" reference="J"}. Therefore, $$\label{dp1} \sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot D^i_1(t) \le \frac{C\sqrt N|a|}{\varrho^2} \left(\Delta(t) + \sqrt{\sum_i \frac{J_{i,\varepsilon}(t)}{|a_i|}}\sqrt{\Delta(t)} \right).$$ Regarding $D^i_2(t)$, in view of Eqs. [\[uL=w\]](#uL=w){reference-type="eqref" reference="uL=w"}, [\[wL=\]](#wL=){reference-type="eqref" reference="wL="} and using that each $\omega_{i,\varepsilon}(x,t)$ is a non-negative or non-positive function, $$|D^i_2(t)| = \left| \frac{1}{a_i} \int\!\mathrm{d}x\, |w^i_L(x,t)| \, \omega_{i,\varepsilon}(x,t) - \frac{|a_i|}{4\pi\alpha} \right|.$$ By Eq. 
[\[uL\<\]](#uL<){reference-type="eqref" reference="uL<"}, $$\label{dp2_1} \frac{1}{a_i} \int\!\mathrm{d}x\, |w^i_L(x,t)| \, \omega_{i,\varepsilon}(x,t) \le \frac{|a_i|}{4\pi\alpha} + \frac{C_L}{|\log\varepsilon|}\,.$$ For a lower bound to the integral in the left-hand side above, we consider the disk $$\label{sre} \Sigma_{i,\varepsilon}(t) := \Sigma(q^{i,\varepsilon}(t)|\varepsilon R_\varepsilon)\,, \quad R_\varepsilon:= \exp(\sqrt{|\log\varepsilon|\log|\log\varepsilon|})\,,$$ with center $q^{i,\varepsilon}(t)$ as in Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"}. Using Eq. [\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"} and that $s \mapsto\log[(1+s)/s]$, $s>0$, is decreasing, if $x\in \Sigma_{i,\varepsilon}(t)$ and $t\in [0,T_\varepsilon]$ then, by definition Eq. [\[wL=\]](#wL=){reference-type="eqref" reference="wL="}, $$|w^i_L(x,t)| \ge \frac{\log[(1+2\varepsilon R_\varepsilon)/(2\varepsilon R_\varepsilon)]}{4\pi(r_\varepsilon+ \bar d+\varrho)}\int_{ \Sigma_{i,\varepsilon}(t)}\!\mathrm{d}y\, |\omega_\varepsilon(y,t)|\,,$$ whence $$\frac{1}{a_i} \int\!\mathrm{d}x\, |w^i_L(x,t)| \, \omega_{i,\varepsilon}(x,t) \ge \frac{\log[(1+2\varepsilon R_\varepsilon)/(2\varepsilon R_\varepsilon)]}{4\pi(r_\varepsilon+ \bar d+\varrho)} \frac{1}{|a_i|} \left[\int_{ \Sigma_{i,\varepsilon}(t)}\!\mathrm{d}y\, |\omega_{i,\varepsilon}(y,t)|\right]^2.$$ By Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"}, provided $\varepsilon$ is chosen sufficiently small in order to have $R_\varepsilon>\exp(C_3\log|\log\varepsilon|)$, we can apply Eq. [\[eq:conc\]](#eq:conc){reference-type="eqref" reference="eq:conc"} with $R=R_\varepsilon$ getting $$\label{stmass1} \int_{ \Sigma_{i,\varepsilon}(t)}\!\mathrm{d}y\, |\omega_\varepsilon(y,t)| \ge |a_i| - C_4 \sqrt{\frac{\log|\log\varepsilon|}{|\log\varepsilon|}} \quad \forall\, t\in [0,T_\varepsilon]\,.$$ Since $$\left|\frac1{|\log\varepsilon|} \log\frac {1+2\varepsilon R_\varepsilon}{2\varepsilon R_\varepsilon} -1\right| \le \frac{C\log R_\varepsilon}{|\log\varepsilon|} = C \sqrt{\frac{\log|\log\varepsilon|}{|\log\varepsilon|}} \,,$$ and recalling $r_\varepsilon= \alpha |\log\varepsilon|$, we conclude that there is $C_5 = C_5(\alpha,|a|,\bar d, \varrho)>0$ such that, for any $\varepsilon$ sufficiently small, $$\label{uL>} \frac{1}{a_i} \int\!\mathrm{d}x\, |w^i_L(x,t)| \, \omega_{i,\varepsilon}(x,t) \ge \frac{|a_i|}{4\pi\alpha} - C_5\sqrt{\frac{\log|\log\varepsilon|}{|\log\varepsilon|}} \quad \forall\, t\in [0,T_\varepsilon]\,.$$ By Eqs. [\[dp2_1\]](#dp2_1){reference-type="eqref" reference="dp2_1"} and [\[uL\>\]](#uL>){reference-type="eqref" reference="uL>"}, for any $\varepsilon$ small enough, $$\label{dp2} \sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot D^i_2(t) \le C_5\sqrt N \sqrt{\frac{\log|\log\varepsilon|}{|\log\varepsilon|}} \sqrt{\Delta(t)}\,.$$ Concerning $D^i_3(t)$, by [\[stimR\]](#stimR){reference-type="eqref" reference="stimR"} we deduce that $$\label{dp3} \sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot D^i_3(t) \le \frac{C_{{\mathcal R}} N^{3/2} \log|\log\varepsilon|}{|\log\varepsilon|} \sqrt{\Delta(t)}\,.$$ Finally, by Eqs. [\[uL=w\]](#uL=w){reference-type="eqref" reference="uL=w"}, [\[wL=\]](#wL=){reference-type="eqref" reference="wL="}, and using again Eq. 
[\[sep-supp\]](#sep-supp){reference-type="eqref" reference="sep-supp"} and that $s \mapsto\log[(1+s)/s]$ is decreasing, if $j \ne i$ then $$|u^j_L(x,t)| = |w^j_L(x,t)| \le \frac{|a_j|}{4\pi(r_\varepsilon-\bar d-\varrho)} \log \frac {1+2\varrho}{2\varrho} \quad \forall\, x\in \Lambda_{i,\varepsilon}(t) \quad \forall\, t\in [0,T_\varepsilon]\,,$$ whence, by Eq. [\[reps\]](#reps){reference-type="eqref" reference="reps"}, for any $\varepsilon$ small enough, $$\label{dp4} \sum_i (B^{i,\varepsilon}(t) - \zeta^i(t)) \cdot D^i_4(t) \le \frac{CN^{3/2}|a|}{ (\alpha|\log\varepsilon| -\bar d-\varrho)} \log \frac {1+2\varrho}{2\varrho} \sqrt{\Delta(t)}\,.$$ Given $\theta\in (0,1)$, by the bounds Eqs. [\[dp1\]](#dp1){reference-type="eqref" reference="dp1"}, [\[dp2\]](#dp2){reference-type="eqref" reference="dp2"}, [\[dp3\]](#dp3){reference-type="eqref" reference="dp3"}, [\[dp4\]](#dp4){reference-type="eqref" reference="dp4"}, and applying Theorem [Theorem 5](#thm:J<){reference-type="ref" reference="thm:J<"} (with $\gamma\in(\theta,1)$), we conclude that, for any $\varepsilon$ small enough, $$\dot\Delta(t) \le \frac{C\sqrt N|a|}{\varrho^2} \Delta(t) + \frac{1}{|\log\varepsilon|^{\theta/2}}\sqrt{\Delta(t)} \quad \forall\, t\in [0,T_\varepsilon]\,.$$ Since $T_\varepsilon\le T$ and $\Delta(0) \le 4N\varepsilon^2$, the differential inequality above implies Eq. [\[secl\]](#secl){reference-type="eqref" reference="secl"}. ◻ *Proof of Eq. ([\[pril\]](#pril){reference-type="ref" reference="pril"}).* By Theorem [Theorem 4](#thm:conc){reference-type="ref" reference="thm:conc"} with $R=R_\varepsilon=\exp\sqrt{|\log\varepsilon|}$, $$\begin{split} |B^{i,\varepsilon}(t) -q^{i,\varepsilon}(t)| & \le \frac{1}{a_i}\int\!\mathrm{d}x\, |x-q^{i,\varepsilon}(t)|\, \omega_{i,\varepsilon}(x,t) \\ & \le \varepsilon R_\varepsilon+ \frac{1}{a_i}\int_{\Sigma(q^{i,\varepsilon}(t)|\varepsilon R_\varepsilon)^{\complement}}\!\mathrm{d}x\, |x-q^{i,\varepsilon}(t)|\, \omega_{i,\varepsilon}(x,t) \\ & \le \varepsilon R_\varepsilon+ \frac{C_4\log|\log\varepsilon|}{|a_i|\sqrt{|\log\varepsilon|}} |B^{i,\varepsilon}(t)-q^{i,\varepsilon}(t)| \\ & \qquad + \frac{1}{a_i}\int_{\Sigma(q^{i,\varepsilon}(t)|\varepsilon R_\varepsilon)^{\complement}}\!\mathrm{d}x\, |x-B^{i,\varepsilon}(t)|\, \omega_{i,\varepsilon}(x,t)\,. \end{split}$$ Assuming $\varepsilon$ small enough that $|a_i|\sqrt{|\log\varepsilon|} \ge 2 C_4\log|\log\varepsilon|$, we get (by applying the Cauchy-Schwarz inequality in the last step) $$\begin{split} |B^{i,\varepsilon}(t) -q^{i,\varepsilon}(t)| & \le 2 \varepsilon R_\varepsilon+ \frac{2}{a_i}\int_{\Sigma(q^{i,\varepsilon}(t)|\varepsilon R_\varepsilon)^{\complement}}\!\mathrm{d}x\, |x-B^{i,\varepsilon}(t)|\, \omega_{i,\varepsilon}(x,t) \\ & \le 2 \varepsilon R_\varepsilon+ 2 \sqrt{|a_i|J_{i,\varepsilon}(t)}\,, \end{split}$$ and Eq. [\[pril\]](#pril){reference-type="eqref" reference="pril"} follows by Theorem [Theorem 5](#thm:J<){reference-type="ref" reference="thm:J<"}. ◻ # An example of leapfrogging vortex rings {#sec:7} When we have two vortex rings only, the dynamics of their centers of vorticity (in the limit $\varepsilon\to 0$) can be completely studied, giving rise, for suitable values of the initial data, to the so-called *leapfrogging* dynamics, which was first described by Helmholtz [@H; @H1], as already discussed in the Introduction.
Although Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} guarantees convergence for short times only, in the special case of two vortex rings with large enough main radii we are able to extend the time of convergence in order to cover several crossings between the rings. As already noticed in the Introduction, this is completely consistent with the physical phenomenon, since the leapfrogging motion of two vortex rings is observed experimentally and numerically up to a few crossings, after which the rings dissolve and lose their shape. Let us then describe the dynamical system Eq. [\[ode\]](#ode){reference-type="eqref" reference="ode"} for $N=2$ and suppose that the vortex intensities satisfy $a_1+a_2\ne 0$. Adopting the new variables, $$x = \zeta^1-\zeta^2\,, \qquad \qquad y = \frac{a_1\zeta^1+ a_2 \zeta^2}{a_1+a_2}\,,$$ the equations take the form $$\label{xyeq} \left\{\begin{aligned} & \dot x=-\frac{a_1+a_2}{2\pi}\nabla^\perp\log |x| +\frac{a_1-a_2}{4\pi \alpha}\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \\ &\dot y =\frac{a_1^2+a_2^2}{4\pi\alpha(a_1+a_2)} \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{aligned}\right.$$ The barycenter $y$ performs a rectilinear uniform motion, while the evolution of the relative position $x$ is governed by the canonical equations $\dot x = \nabla^\perp {\mathcal H}(x)$ of Hamiltonian $${\mathcal H}(x) = -\frac{a_1+a_2}{4\pi} \log |x|^2 +\frac{a_1-a_2}{4\pi\alpha}x_2\,, \qquad x=(x_1, x_2)\,.$$ Hereafter, for the sake of concreteness, we furthermore assume $a_1>|a_2|$; the other cases can be treated analogously. ![Phase portrait of the dynamical system describing the motion of the relative position $x$ between the two rings.](pp1.pdf){#fig:1} The phase portrait of this Hamiltonian system can be obtained by drawing the energy level sets $\{x\colon {\mathcal H}(x) = E\}$ (invariant sets, each one composed of a finite union of phase curves). To this end, we recast the equation ${\mathcal H}(x) = E$ in the form $$x_1= \pm f(x_2)\,, \quad\textnormal{with} \quad f(x_2) = \sqrt{C_E\exp\left( \frac{ x_2}{\alpha a} \right)-x_2^2}\,,$$ where $$a=\frac{a_1+a_2}{a_1-a_2}\,, \qquad C_E=\exp\left(-\frac{4\pi E}{a_1+a_2} \right).$$ There is a unique equilibrium, corresponding to the critical point $x^*=(0, 2\alpha a)$ of ${\mathcal H}$, and we set $$C^*=\exp\left(-\frac{4\pi {\mathcal H}(x^*)}{a_1+a_2}\right) =\left(\frac{2\alpha a}{ \mathrm{e}} \right)^2.$$ It is easily seen that $$\mathrm{Dom}(f) = \{ x_2 \colon |x_2| \le \sqrt{C_E} \exp(x_2/2\alpha a)\} = \begin{cases} [\eta_1,\eta_2] \cup [\eta_3, +\infty) & \text{if } C_E < C^* \\ [\bar \eta, +\infty) & \text{if } C_E \ge C^* \end{cases}$$ with $\eta_1 < 0 < \eta_2 < x^*_2 < \eta_3$ and $\bar\eta < 0$. It follows that the phase portrait looks qualitatively as depicted in Figure [1](#fig:1){reference-type="ref" reference="fig:1"}. We notice that one ring overtakes the other when $x_1 = 0$ and $\dot x_1 \ne 0$. In particular, the periodic motions occurring for $0<C_E<C^*$ (whose orbits are the closed curves in Figure [1](#fig:1){reference-type="ref" reference="fig:1"}) correspond to the leapfrogging behavior, in which the rings pass through each other alternately. Since along the orbit we have $$\dot x_2 = \frac{a_1+a_2}{2\pi}\frac{x_1}{|x|^2} = \pm\frac{(a_1+a_2)\, \mathrm{e}^{-x_2/\alpha a} }{2\pi C_E} \sqrt{C_E \, \mathrm{e}^{x_2/\alpha a} - x_2^2}\,,$$ the period of a closed orbit on the level $C_E<C^*$ is given by $$T_E = 2\int_{\eta_1}^{\eta_2}\!
\frac{\mathrm{d}x_2}{|\dot x_2|} = \frac{4\pi C_E}{a_1+a_2} \int_{\eta_1}^{\eta_2} \! \mathrm{d}x_2\, \frac{\mathrm{e}^{x_2/\alpha a}}{\sqrt{C_E\,\mathrm{e}^{x_2/\alpha a} - x_2^2}}\,,$$ with $\eta_1<0<\eta_2$ as before, i.e., the two smallest roots of the equation $C_E \mathrm{e}^{x_2/\alpha a}-x_2^2=0$. We also note that, for small values of the positive constant $C_E$, we have $\eta_{1,2}\approx \mp \sqrt{C_E}$, and $$T_E \approx \frac{4\pi C_E}{a_1+a_2}\int_{-\sqrt{C_E}}^{\sqrt{C_E}}\! \frac{ \mathrm{d}x_2}{\sqrt{C_E - x_2^2}} = \frac{4\pi^2 C_E}{a_1+a_2}\,,$$ which goes to $0$ as $C_E\to 0$ (i.e., $E\to +\infty$; note that ${\mathcal H}(x)$ diverges as $x\to 0$). The time threshold $T_\varrho$ in Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} can be seen to be bounded by a constant multiple of $C_F^{-1}$ (recall that $T_\varrho = T_\varrho'$ with $T_\varrho'$ as in Proposition [Proposition 10](#prop:5.1){reference-type="ref" reference="prop:5.1"}). On the other hand, $C_F$ is an upper bound for the velocity field (and its Lipschitz constant) produced by one ring and acting on the second one, so it depends on the distance between the centers of vorticity of the two rings as a constant multiple of $(|a_1|+|a_2|)/\varrho^2$ (at short distances), and $\varrho$ is of order $|\eta_{1,2}|$ in the periodic motion considered above. Therefore, $T_\varrho$ and $T_E$ are of the same order also when $T_E$ is small, and a direct inspection easily shows that $T_\varrho <T_E$. Thus, a mere application of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} guarantees at most one overtaking between the rings during the time interval $[0,T_\varrho]$ (to this end, it is enough to choose the initial data on the orbit close enough to the point $(0,\eta_1)$ or $(0,\eta_2)$). We next show that the result can be improved in the case of rings with large main radii, i.e., when the parameter $\alpha$ is chosen large enough (with respect to the distance between the centers of vorticity). The key observation is that when $\alpha\to +\infty$ the system Eq. [\[xyeq\]](#xyeq){reference-type="eqref" reference="xyeq"} reduces to the standard planar vortex model, i.e., $\dot y=0$ and ${\mathcal H}(x) = -\frac{a_1+a_2}{4\pi} \log |x|^2$, so that each level set $\{x\colon {\mathcal H}(x) = E\}$ consists of a circular orbit traveled at constant speed, with period $$\label{period2} {\mathcal T}_E = \frac{4\pi^2 R_E^2}{a_1+a_2}\,,$$ where $R_E= \sqrt{C_E} = \exp\big[-2\pi E/(a_1+a_2)\big]$ is the radius of the orbit (note that $C^*\to +\infty$ as $\alpha\to + \infty$). We omit the proof of Lemma [Lemma 11](#lem:alphagrande){reference-type="ref" reference="lem:alphagrande"} below, which easily follows from the previous observation and standard arguments in the theory of ordinary differential equations. **Lemma 11**. *Given $\varrho>0$, fix $E>0$ such that $R_E>4\varrho$, an integer $k\in {\mathbb N}$, and let $T = (k+1) {\mathcal T}_E$ with ${\mathcal T}_E$ as in Eq. [\[period2\]](#period2){reference-type="eqref" reference="period2"}. Then there exists $\alpha_0>0$ such that for any $\alpha \ge \alpha_0$ we have $C_E < C^*$ and the corresponding periodic motion $t\mapsto x_E(t)$ solution to Eq.
[\[xyeq\]](#xyeq){reference-type="eqref" reference="xyeq"}$_a$ satisfies $$\label{T1} \min_{t\in [0,T]} |x_E(t)| \ge 4\varrho\,, \qquad kT_E < T\,.$$* In the sequel, we fix a solution $t\mapsto (\zeta^1(t),\zeta^2(t))$ in such a way that $\zeta^1(t) - \zeta^2(t) = x_E(t)$, with $x_E(t)$ as in Lemma [Lemma 11](#lem:alphagrande){reference-type="ref" reference="lem:alphagrande"}. Therefore, in view of Eq. [\[T1\]](#T1){reference-type="eqref" reference="T1"}, if we choose $\varrho$ and $T$ as in the aforementioned lemma then Eq. [\[T\]](#T){reference-type="eqref" reference="T"} holds in this case for any $\alpha\ge \alpha_0$. Moreover, from the expression of $\dot y$ in Eq. [\[xyeq\]](#xyeq){reference-type="eqref" reference="xyeq"}, the parameter $\bar d$ in Eq. [\[dbarra\]](#dbarra){reference-type="eqref" reference="dbarra"} is uniformly bounded for $\alpha\ge \alpha_0$. Taking advantage of this uniformity, we now show that if $\alpha$ is large enough the proof of Theorem [Theorem 1](#thm:1){reference-type="ref" reference="thm:1"} can be improved, pushing the time threshold $T_\varrho$ up to $T$, so that at least $2k$ overtakings between the rings take place during the time interval of convergence. We fix an integer $n \gg 1$ to be specified later and let $\alpha = \alpha_n := \alpha_0 n$. The strategy develops according to the following steps. *Step 0.* Letting $\bar T_3$ be as in Proposition [Proposition 8](#prop:2){reference-type="ref" reference="prop:2"} with the choice $R = \varrho_n := \varrho/n$, we can argue as done in Proposition [Proposition 10](#prop:5.1){reference-type="ref" reference="prop:5.1"} to deduce that there is $T_0' \in (0,T]$ such that $$\label{st1} \Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|3\varrho_n) \quad \forall\, t\in [0, T_0' \wedge T_\varepsilon]\,.$$ To this end, we adapt the proof of that proposition by defining, in this case, $$t_1 := \sup\{t\in [0,\bar T_3 \wedge T_\varepsilon] \colon R_s \le 3\varrho_n \;\, \forall s \in [0,t]\}\,,$$ and (whenever $t_1<\bar T_3 \wedge T_\varepsilon$) $$t_0 = \inf\{t\in [0,t_1]\colon R_s > \varrho_n \;\;\forall\, s\in [t,t_1] \}\,.$$ Therefore, choosing now $R(t_0)= 2\varrho_n$ in Eq. [\[stimrbis\]](#stimrbis){reference-type="eqref" reference="stimrbis"}, from Eq. [\[rt12\]](#rt12){reference-type="eqref" reference="rt12"} we deduce that Eq. [\[st1\]](#st1){reference-type="eqref" reference="st1"} holds with $$T_0' = \frac{1}{2C_F} \log\left(\frac{6C_F\alpha_n\varrho_n + |a|}{4C_F\alpha_n\varrho_n + |a|} \right) \wedge \bar T_3 = \frac{1}{2C_F} \log\left(\frac{6C_F\alpha_0\varrho + |a|}{4C_F\alpha_0\varrho + |a|} \right) \wedge \bar T_3\,,$$ where, in view of Remark [Remark 1](#rem:4.1){reference-type="ref" reference="rem:4.1"}, $$\bar T_3 = \frac{\varrho_n\,\mathrm{e}^{-4}}{4C_W} \left(C_F \varrho_n +\frac{|a|}{\pi\alpha_n}\right)^{-1} \wedge T = \frac{\varrho\,\mathrm{e}^{-4}}{4C_W} \left(C_F \varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge T\,.$$ *Step 1.* If $T_0'=T$ we are done, otherwise, from Step 0 and Eqs. 
[\[secl\]](#secl){reference-type="eqref" reference="secl"}, [\[pril\]](#pril){reference-type="eqref" reference="pril"} we have $T_\varepsilon>T_0'$ for any $\varepsilon$ small enough, and whence $$\Lambda_{i,\varepsilon}(T_0') \subset \Sigma(B^{i,\varepsilon}(T_0')|3\varrho_n)\,.$$ Then, we can adapt to the present context the arguments of Propositions [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"} and [Proposition 8](#prop:2){reference-type="ref" reference="prop:2"} to deduce that, for any $\varepsilon$ small enough, $$\label{4rhon} m^i_t(4\varrho_n) \le \varepsilon^{\ell} \quad \forall\, t\in[T_0', (T_0' + \bar T_\ell) \wedge T_\varepsilon]\,.$$ More precisely: \(i\) We follow the proof of Proposition [Proposition 7](#prop:1){reference-type="ref" reference="prop:1"}, with $R-\varrho_n/4$ in place of $R/2$ in the computations leading to Eq. [\[2mass 4\'\'\]](#2mass 4''){reference-type="eqref" reference="2mass 4''"}, and iterate Eq. [\[mass 14\'\]](#mass 14'){reference-type="eqref" reference="mass 14'"} from $3\varrho_n + \varrho_n/2 -h$ to $3\varrho_n + \varrho_n/4$, getting in this way $m^i_t(3\varrho_n+\varrho_n/2) \le |\log\varepsilon|^{-2}$ for $t\in[0, (T_0' + \tilde T_2) \wedge T_\varepsilon]$. Moreover, since in this case $R = 3\varrho_n + \varrho_n/2$ and $h=\varrho_n/(4n+4)$, $$\tilde T_2 = \frac{\varrho_n\mathrm{e}^{-3}}{8C_W} \left(C_F \left(3\varrho_n+\frac{\rho_n}2\right) +\frac{|a|}{\pi\alpha_n}\right)^{-1} \wedge (T-T_0')\,.$$ \(ii\) Using (i), we can adapt the proof of Proposition [Proposition 8](#prop:2){reference-type="ref" reference="prop:2"}, iterating now from $4\varrho_n -h$ to $4\varrho_n - \varrho_n/4$. Recalling Remark [Remark 1](#rem:4.1){reference-type="ref" reference="rem:4.1"} (adapted to the present context, in particular with $\tilde T_2$ as above), Eq. [\[4rhon\]](#4rhon){reference-type="eqref" reference="4rhon"} for $\ell>2$ thus holds with $$\begin{split} \bar T_\ell & = \frac{\varrho_n\mathrm{e}^{- \ell -1}}{8C_W} \left(C_F 4\varrho_n +\frac{|a|}{\pi\alpha_n}\right)^{-1} \wedge (T-T_0') \\ & = \frac{\varrho\mathrm{e}^{- \ell -1}}{8C_W} \left(C_F 4\varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge (T-T_0')\,. \end{split}$$ Using Eq. [\[4rhon\]](#4rhon){reference-type="eqref" reference="4rhon"}, we can now adjust the reasoning of Section [5](#sec:5){reference-type="ref" reference="sec:5"} to prove that, for any $\varepsilon$ small enough, $$\label{st2} \Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|6\varrho_n) \quad \forall\, t\in [T_0', (T_0'+T_1') \wedge T_\varepsilon]\,,$$ with $T_1' \in (0,T-T_0']$ as detailed below. More precisely: \(i\) We modify the claim of Lemma [Lemma 9](#lem:5.1){reference-type="ref" reference="lem:5.1"} by replacing Eq. [\[stimv\]](#stimv){reference-type="eqref" reference="stimv"} with $$\frac{\mathrm{d}}{\mathrm{d}t} |x(t)- B^{i,\varepsilon}(t)| \le 2 C_F R_t + \frac{|a_i|}{\pi\alpha} + \frac{6}{\pi|\log\varepsilon|^\gamma (R_t\wedge \varrho_n)^3} + \sqrt{\frac{M m^i_t(R^n_t)}{\varepsilon^2}}\,,$$ where $R^n_t = (R_t-(\varrho_n/2)) \vee (R_t/2)$. To this end, it is enough to change the proof of Lemma [Lemma 9](#lem:5.1){reference-type="ref" reference="lem:5.1"} by splitting $(K*\omega_{i,\varepsilon})(x,t)$ as in Eq. [\[in A_1,A_2\]](#in A_1,A_2){reference-type="eqref" reference="in A_1,A_2"} but choosing now ${\mathcal D}=\Sigma(B^{i,\varepsilon}(t)|R^n_t)$ and ${\mathcal A}= \Sigma(B^{i,\varepsilon}(t)|R_t)\setminus\Sigma(B^{i,\varepsilon}(t)|R^n_t)$. We omit the details. 
\(ii\) Letting $\bar T_3$ be as in Eq. [\[4rhon\]](#4rhon){reference-type="eqref" reference="4rhon"} for $\ell=3$, we prove Eq. [\[st2\]](#st2){reference-type="eqref" reference="st2"} following the proof of Proposition [Proposition 10](#prop:5.1){reference-type="ref" reference="prop:5.1"} by defining, in this case, $$t_1 := \sup\{t\in [T_0',(T_0'+\bar T_3) \wedge T_\varepsilon] \colon R_s \le 6\varrho_n \;\, \forall s \in [0,t]\}\,,$$ and (whenever $t_1<(T_0'+\bar T_3) \wedge T_\varepsilon$) $$t_0 = \inf\{t\in [T_0',t_1]\colon R_s > 4\varrho_n+\varrho_n/2 \;\;\forall\, s\in [t,t_1] \}\,.$$ We remark that if $t\in [t_0,t_1]$ then $m^i_t(R^n_t) \le m^i_t(4\varrho_n) \le \varepsilon^3$ by Eq. [\[4rhon\]](#4rhon){reference-type="eqref" reference="4rhon"}. Therefore, choosing now $R(t_0)= 5\varrho_n$ in Eq. [\[stimrbis\]](#stimrbis){reference-type="eqref" reference="stimrbis"}, from Eq. [\[rt12\]](#rt12){reference-type="eqref" reference="rt12"} we deduce that Eq. [\[st2\]](#st2){reference-type="eqref" reference="st2"} holds with $$T_1' = \frac{1}{2C_F} \log\left(\frac{12C_F\alpha_n\varrho_n + |a|}{10C_F\alpha_n\varrho_n + |a|} \right) \wedge \bar T_3 = \frac{1}{2C_F} \log\left(\frac{12C_F\alpha_0\varrho + |a|}{10C_F\alpha_0\varrho + |a|} \right) \wedge \bar T_3\,,$$ where $$\bar T_3 = \frac{\varrho\,\mathrm{e}^{-4}}{8C_W} \left(C_F 4\varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge (T-T_0')\,.$$ *Step 2.* If $T_0'+T_1' =T$ we are done, otherwise from Eqs. [\[secl\]](#secl){reference-type="eqref" reference="secl"}, [\[pril\]](#pril){reference-type="eqref" reference="pril"}, and [\[st2\]](#st2){reference-type="eqref" reference="st2"} we have $T_\varepsilon>T_0'+T_1'$ for any $\varepsilon$ small enough, whence $$\Lambda_{i,\varepsilon}(T_0'+T_1') \subset \Sigma(B^{i,\varepsilon}(T_0'+T_1')|6\varrho_n)\,.$$ Therefore, analogously to what was done in Step 1, this implies that, for any $\varepsilon$ small enough, $$m^i_t(7\varrho_n) \le \varepsilon^{\ell} \quad \forall\, t\in[T_0'+T_1', (T_0' +T_1'+ \bar T_\ell) \wedge T_\varepsilon]\,,$$ with $$\bar T_\ell = \frac{\varrho\mathrm{e}^{- \ell -1}}{8C_W} \left(C_F 7\varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge (T-T_0'-T_1')\,,$$ whence $$\Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|9\varrho_n) \quad \forall\, t\in [T_0'+T_1', (T_0'+T_1'+T_2') \wedge T_\varepsilon]\,,$$ with $$T_2' = \frac{1}{2C_F} \log\left(\frac{18C_F\alpha_n\varrho_n + |a|}{16C_F\alpha_n\varrho_n + |a|} \right) \wedge \bar T_3 = \frac{1}{2C_F} \log\left(\frac{18C_F\alpha_0\varrho + |a|}{16C_F\alpha_0\varrho + |a|} \right) \wedge \bar T_3\,,$$ and $$\bar T_3 = \frac{\varrho\,\mathrm{e}^{-4}}{8C_W} \left(C_F 7\varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge (T-T_0'-T_1')\,.$$ *Step $j$.* The above procedure can be iterated inductively in the following manner.
If at the end of the $(j-1)$th step we still have $T_0'+ \cdots + T_{j-1}' < T$ (otherwise we are done) and $3j\varrho_n < \varrho$ then $T_\varepsilon>T_0'+ \cdots + T_{j-1}'$ for any $\varepsilon$ small enough, so that $$\Lambda_{i,\varepsilon}(T_0'+ \cdots + T_{j-1}') \subset \Sigma(B^{i,\varepsilon}(T_0'+ \cdots + T_{j-1}')|3j \varrho_n)\,,$$ which allows for a further iteration, giving first $$m^i_t((3j+1)\varrho_n) \le \varepsilon^{\ell} \quad \forall\, t\in[T_0' + \cdots + T_{j-1}' , (T_0' + \cdots + T_{j-1}' + \bar T_\ell) \wedge T_\varepsilon]\,,$$ with $$\bar T_\ell = \frac{\varrho\mathrm{e}^{- \ell -1}}{8C_W} \left(C_F (3j+1) \varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge [T-(T_0'+\cdots +T_{j-1}')]\,,$$ and then $$\Lambda_{i,\varepsilon}(t) \subset \Sigma(B^{i,\varepsilon}(t)|(3j+3)\varrho_n) \quad \forall\, t\in [T_0'+\cdots +T_{j-1}', (T_0'+ \cdots + T_j') \wedge T_\varepsilon]\,,$$ with $$T_j' = \frac{1}{2C_F} \log\left(\frac{2(3j+3)C_F\alpha_0\varrho + |a|}{2(3j+2) C_F\alpha_0\varrho + |a|} \right) \wedge \bar T_3$$ and $$\bar T_3 = \frac{\varrho\,\mathrm{e}^{-4}}{8C_W} \left(C_F (3j+1) \varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1} \wedge [T-(T_0'+ \cdots + T_{j-1}')]\,.$$ *Conclusion.* The maximum number of possible iterations is given by $j_* = j_n \wedge j_T$, where $$\begin{gathered} j_n = \max\{j\colon 3(j+1)\varrho_n \le \varrho/2\} = \left\lfloor \frac n 6 -1\right\rfloor, \\ j_T = \max\{ 0<j \le j_n \colon T_0'+T_1'+ \ldots + T_{j-1}' < T\}\,.\end{gathered}$$ From the explicit expression of $T_j'$, if $j<j_T$ then $$T_j' = A_j := \frac{1}{2C_F} \log\left(\frac{2(3j+3)C_F\alpha_0\varrho + |a|}{2(3j+2) C_F\alpha_0\varrho + |a|} \right) \wedge \frac{\varrho\,\mathrm{e}^{-4}}{8C_W} \left(C_F (3j+1) \varrho +\frac{|a|}{\pi\alpha_0}\right)^{-1}.$$ Since $A_j = O(j^{-1})$ for $j$ large, we have $\sum_{j=0}^{j_n} A_j = O(\log j_n) = O(\log n)$; therefore, by choosing $n$ (i.e., $\alpha=\alpha_n$) large enough we get $j_*< j_n$, which means $\sum_{j=0}^{j_*} T_j' = T$, i.e., the convergence holds up to the chosen time $T>kT_E$. ◻ # Proof of Lemma [Lemma 6](#lem:Hdec){reference-type="ref" reference="lem:Hdec"} {#app:a} Eq. [\[stimH\]](#stimH){reference-type="eqref" reference="stimH"} easily follows from Eqs. [\[H1\]](#H1){reference-type="eqref" reference="H1"}, [\[H2\]](#H2){reference-type="eqref" reference="H2"}, and [\[sep-disks\]](#sep-disks){reference-type="eqref" reference="sep-disks"}; we omit the details. Concerning the decomposition Eq. [\[sH\]](#sH){reference-type="eqref" reference="sH"}, we observe that $$\begin{split} H(x,y) & = - \frac{1}{2\pi(r_\varepsilon+x_2)} I_1\left(\frac{|x-y|}{\sqrt{A}}\right) \frac{(x-y)^\perp }{\sqrt{A}} \\ & \quad + \frac{1}{2\pi(r_\varepsilon+x_2)} I_2\left(\frac{|x-y|}{\sqrt{A}}\right)\sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \end{split}$$ where, for any $s>0$, $$I_1(s) = \int_0^\pi\!\mathrm{d}\theta\, \frac{\cos\theta}{[s^2 + 2(1-\cos\theta)]^{3/2}}\,, \quad I_2(s) = \int_0^\pi\!\mathrm{d}\theta\, \frac{1-\cos\theta}{[s^2+2(1-\cos\theta)]^{3/2}}\,.$$ By an explicit computation, see, e.g., the Appendix in [@Mar99], for any $s>0$, $$I_1(s) = \frac{1}{s^2} + \frac 14 \log\frac{s}{1+s} + \frac{c_1(s)}{1+s}, \quad I_2(s) = -\frac 12 \log\frac{s}{1+s} + \frac{c_2(s)}{1+s},$$ with $c_1(s)$, $c_2(s)$ uniformly bounded for $s\in (0,+\infty)$.
Therefore, the kernel ${\mathcal R}(x,y)$ defined by [\[sH\]](#sH){reference-type="eqref" reference="sH"} is given by $${\mathcal R} (x,y) = \sum_{j=1}^6 R^j(x,y),$$ with, letting $a = |x-y|/\sqrt A$, $$\begin{aligned} R^1(x,y) & = \frac{1}{2\pi} \bigg(1 - \sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg) \frac{(x-y)^\perp }{|x-y|^2}\,, \\ R^2(x,y) & = \frac{1}{8\pi} \bigg(\log\frac{1+a}{a}\bigg) \frac{(x-y)^\perp }{(r_\varepsilon+x_2)\sqrt{A}}\,, \\ R^3(x,y) & = \frac{1}{4\pi (r_\varepsilon+x_2)} \sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg(\log\frac{|x-y|}{1+|x-y|} - \log\frac{a}{1+a}\bigg) \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \\ R^4(x,y) & = \frac{1}{4\pi (r_\varepsilon+x_2)} \bigg(1-\sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg) \log\frac{|x-y|}{1+|x-y|} \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \\ R^5(x,y) & = -\frac{c_1(a)}{2\pi(1+a)} \frac{(x-y)^\perp }{(r_\varepsilon+x_2)\sqrt{A}}\,, \\ R^6(x,y) & = \frac{c_2(a)}{2\pi(1+a)(r_\varepsilon+x_2)}\sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.\end{aligned}$$ Using that $$\bigg|1-\sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg| = \frac{|y_2-x_2|}{r_\varepsilon+x_2+\sqrt{A}} \le \frac{|x-y|}{r_\varepsilon+x_2}$$ and $$\bigg|\log\frac{|x-y|}{1+|x-y|} - \log\frac{a}{1+a}\bigg| = \bigg| \log\frac{1+a}{A^{-1/2}+a}\bigg| \le \frac 12 |\log A|,$$ we have, $$\begin{aligned} & |R^1(x,y)| = \frac{1}{2\pi} \bigg|1-\sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}}\bigg| \frac{1}{|x-y|}\le \frac{1}{2\pi (r_\varepsilon+x_2)}, \\ & |R^2(x,y)| = \frac{1}{8\pi (r_\varepsilon+x_2)} \bigg(\log\frac{1+a}{a}\bigg) \frac{|x-y|}{\sqrt{A}} \le \frac{1}{8\pi (r_\varepsilon+x_2)} \, \sup_{s>0}\bigg( s\log\frac{1+s}{s}\bigg), \\ & |R^3(x,y)| + |R^6(x,y)| \le \frac{1}{4\pi (r_\varepsilon+x_2)} \sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg(|\log A| + \sup_{s>0} \frac{2c_2(s)}{1+s}\bigg), \\ & |R^4(x,y)| = \frac{1}{4\pi (r_\varepsilon+x_2)} \bigg|1 - \sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg| \log\frac{1+|x-y|}{|x-y|} \\ & \hskip1.5cm \le \frac{1}{4\pi (r_\varepsilon+x_2)^2} \,\sup_{s>0}\bigg( s\log\frac{1+s}{s}\bigg), \\ & |R^5(x,y)| = \frac{|c_1(a)|}{2\pi (1+a)} \frac{|x-y|}{(r_\varepsilon+x_2)\sqrt A} \le \frac{1}{2\pi (r_\varepsilon+x_2)} \, \sup_{s>0}\frac{s c_1(s)}{1+s}.\end{aligned}$$ In conclusion, $$\begin{split} & |R^1(x,y)| + |R^2(x,y)| + |R^5(x,y)| \le \frac{C}{r_\varepsilon+x_2}, \quad |R^4(x,y)| \le \frac{C}{(r_\varepsilon+x_2)^2}, \\ & |R^3(x,y)| + |R^6(x,y)| \le \frac{C}{r_\varepsilon+x_2} \sqrt{\frac{r_\varepsilon+y_2}{r_\varepsilon+x_2}} \bigg(1+|\log A|\bigg). \end{split}$$ The lemma is thus proven. ◻ # References Aiki, M.: On the existence of leapfrogging pair of circular vortex filaments. Stud. Appl. Math. **143**, 213--243 (2019) Alvarez, J., Ning, A.: Reviving the vortex particle method: A stable formulation for meshless large eddy simulation. Preprint at <https://arxiv.org/abs/2206.03658> (2022) Benedetto, D., Caglioti, E., Marchioro, C.: On the motion of a vortex ring with a sharply concentrated vorticity. Math. Meth. Appl. Sci. **23**, 147--168 (2000) Borisov, A.V., Kilin, A.A., Mamaev, I.S.: The dynamics of vortex rings: Leapfrogging, choreographies and the stability problem. Regul. Chaot. Dyn. **18**, 33--62 (2013) Buffoni, B.: Nested axi-symmetric vortex rings.
Annales de l'Institut Henri Poincaré C, Analyse non linéaire **14**, 787--797 (1997) Buttà, P., Cavallaro, G., Marchioro, C.: Global time evolution of concentrated vortex rings. Z. Angew. Math. Phys. **73**, 70 (2022) Buttà, P., Marchioro, C.: Long time evolution of concentrated Euler flows with planar symmetry. SIAM J. Math. Anal. **50**, 735--760 (2018) Buttà, P., Marchioro, C.: Time evolution of concentrated vortex rings. J. Math. Fluid Mech. **22**, Article number 19 (2020) Cavallaro, G., Marchioro, C.: Time evolution of vortex rings with large radius and very concentrated vorticity. J. Math. Phys. **62**, 053102 (2021) Cetrone, D., Serafini, G.: Long time evolution of fluids with concentrated vorticity and convergence to the point-vortex model. Rendiconti di Matematica e delle sue applicazioni **39**, 29--78 (2018) Cheng, M., Lou, J., Lim, T.T.: Leapfrogging of multiple coaxial viscous vortex rings. Phys. Fluids **27**, 031702 (2015) Danchin, R.: Axisymmetric incompressible flows with bounded vorticity. Uspekhi Mat. Nauk **62**, 73--94 (2007) Davila, J., del Pino, M., Musso, M., Wei, J.: Leapfrogging vortex rings for the 3-dimensional incompressible Euler equations. Preprint at <https://arxiv.org/abs/2207.03263> (2022) Dyson, F.W.: The potential of an anchor ring. Philos. Trans. Roy. Soc. London Ser. A **184**, 43--95 (1893) Dyson, F.W.: The potential of an anchor ring. Part II. Philos. Trans. Roy. Soc. London Ser. A **184**, 1107--1169 (1893) Feng, H., Šverák, V.: On the Cauchy problem for axi-symmetric vortex rings. Arch. Ration. Mech. Anal. **215**, 89--123 (2015) Friedman, A.: Variational Principles and Free-Boundary Problems. Wiley, New York (1982) Gallay, T., Šverák, V.: Remarks on the Cauchy problem for the axisymmetric Navier-Stokes equations. Confluentes Math. **7**, 67--92 (2015) Gallay, T., Šverák, V.: Uniqueness of axisymmetric viscous flows originating from circular vortex filaments. Ann. Sci. Éc. Norm. Supér. **52**, 1025--1071 (2019) Helmholtz, H.: Über Integrale der hydrodynamischen Gleichungen welche den Wirbelbewegungen entsprechen. J. Reine Angew. Math. **55**, 25--55 (1858) Helmholtz, H.: On the integrals of the hydrodynamical equations which express vortex-motion. (translated by P. G. Tait) Phil. Mag. **33**, 485--512 (1867) Hicks, W.M.: On the mutual threading of vortex rings. Proc. R. Soc. Lond. A **102**, 111--131 (1922) Jerrard, R.L., Smets, D.: Leapfrogging Vortex Rings for the Three Dimensional Gross-Pitaevskii Equation. Annals of PDE **4**, paper 4 (2018). Jerrard, R.L., Smets, D.: Dynamics of nearly parallel vortex filaments for the Gross-Pitaevskii equation. Calc. Var. Partial Differential Equations **60**, Paper No. 127, 34 pp. (2021) Klein, R., Majda, A.J., Damodaran, K.: Simplified equations for the interaction of nearly parallel vortex filaments. J. Fluid Mech. **288**, 201--248 (1995) Ladyzhenskaya, O.A.: Unique solvability in large of a three-dimensional Cauchy problem for the Navier-Stokes equations in the presence of axial symmetry. Zapisky Nauchnych Sem. LOMI **7**, 155--177 (1968) Lamb, H.: Hydrodynamics (6th Edition). Cambridge University Press (1932) Lim, T.T.: A note on the leapfrogging between two coaxial vortex rings at low Reynolds numbers. Phys. Fluids **9**, 239--241 (1997) Marchioro, C.: Large smoke rings with concentrated vorticity. J. Math. Phys. **40**, 869--883 (1999) Marchioro, C., Negrini, P.: On a dynamical system related to fluid mechanics. NoDEA Nonlinear Diff. Eq.
Appl., **6**, 473--499 (1999) Marchioro, C., Pulvirenti, M.: Mathematical theory of incompressible non-viscous fluids. Applied mathematical sciences vol. 96, Springer-Verlag, New York (1994) Nakanishi, Y., Kaemoto, K., Nishio, M.: Modification of Vortex Model for Consideration of Viscous Effect: Examination of Vortex Model on Interaction of Vortex Rings. JSME International Journal Series B **37**, 815--820 (1994) Riley, N., Stevens, D.P.: A note on leapfrogging vortex rings. Fluid Dynam. Res. **11**, 235--244 (1993) Ukhovskii, M., Yudovitch, V.: Axially symmetric flows of ideal and viscous fluids filling the whole space. J. Appl. Math. Mech. **32**, 52--69 (1968) Yamada H., Matsui T.: Preliminary study of mutual slip-through of a pair of vortices. Phys. Fluids **21**, 292--294 (1978) [^1]: $\lfloor z\rfloor$ denotes the integer part of the positive number $z$.
arxiv_math
{ "id": "2310.00732", "title": "Leapfrogging vortex rings as scaling limit of Euler Equations", "authors": "Paolo Butt\\`a, Guido Cavallaro, Carlo Marchioro", "categories": "math.AP math-ph math.MP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Fiol has characterized quotient-polynomial graphs as precisely the connected graphs whose adjacency matrix generates the adjacency algebra of a symmetric association scheme. We show that a subset of parameters of size $d + \frac{d(d-1)}{2}$ is adequate for describing all quotient-polynomial graphs generating a symmetric association scheme of class $d$. We use this to generate a database of quotient-polynomial graphs with small valency for up to $6$ classes, and to determine feasible parameter sets for quotient-polynomial graphs of small valency and up to $6$ classes that would have noncyclotomic eigenvalues if shown to exist. --- **Parameters of Quotient-Polynomial Graphs**\ Allen Herman$^{a,}$[^1][^2] and Roghayeh Maleki$^{b,c,}$[^3]\ $^a$Department of Mathematics and Statistics, University of Regina, Regina, Canada, S4S 0A2\ $^b$University of Primorska, UP FAMNIT, Glagoljaška 8, 6000 Koper, Slovenia\ $^c$University of Primorska, UP IAM, Muzejski trg 2, 6000 Koper, Slovenia\ > Quotient-polynomial graphs, association schemes, table algebras, orthogonal polynomials. > *Math. Subj. Class.:* Primary 05E30; Secondary 05E16, 05C75, 13P99, 42C05. # Introduction Let $(X,S)$ be a finite association scheme of order $n$ with adjacency matrices $\{A_0=I, A_1, \dots, A_d\}$. We will say that the association scheme is *monogenic* when $\mathbb{Q}S$ is equal to the algebra $\mathbb{Q}[A_1]$ of matrix polynomials generated by $A_1$. If $(X,S)$ is symmetric and $\mathbb{Q}S = \mathbb{Q}[A_1]$ then, the graph whose adjacency matrix is $A_1$ has been called a *quotient-polynomial* graph (QPG) by Fiol [@Fiol2016]. Distance-regular and cyclotomic graphs are both special cases of quotient-polynomial graphs, but there are many others (see [@FiolPenjic2021]). In this note we consider the problem of characterizing the parameters of quotient-polynomial graphs. We will introduce a meaningful system of parameters for quotient-polynomial graphs, which is along the lines of the intersection array of a distance-regular graph. We will review the available feasibility conditions for these parameters and introduce some new conditions that can be applied to the minimal polynomial of $A_1$. We have applied these to produce a database of the feasible parameters of QPGs that includes parameters for all possible QPGs of valency less than $10$ and diameter up to $6$. Our work is partially motivated by the search for symmetric association schemes with noncyclotomic eigenvalues (see [@HM2023]), so we will conclude by giving new feasible parameter sets for QPGs with noncyclotomic eigenvalues of ranks $5$, $6$, and $7$ that were revealed while producing the database of QPGs. # A parameter set system {#Sec2} It is well-known that any commutative association scheme has four equivalent parameter sets: (1) the intersection matrices (whose entries are the intersection numbers), (2) the first eigenmatrix $P$, (3) the second eigenmatrix $Q$, and (4) the dual intersection matrices (whose entries are the Krein parameters). For symmetric association schemes, we will see that there is an easy way to describe a small set of parameters from which all four of equivalent parameter sets can be determined, but it is not usually minimal. Finding the minimal number of parameters required to describe all coherent configurations of a given rank is an interesting open problem [@Problems Problem 8.1], and the symmetric association scheme case is a special case of it where there is a decent upper bound. 
For association schemes that are generated by a distance-regular graph, the intersection array gives a minimal subset of parameters that determines the intersection matrices. In this section, inspired by what happens for distance-regular graphs, we will describe a small set of parameters that determines the set of intersection matrices for an association scheme arising from a quotient-polynomial graph. Conversely, every quotient-polynomial graph will determine an equivalence class of these small parameter sets. For a distance-regular graph, the first intersection matrix $B_1$ corresponding to the adjacency matrix $A_1$ of the graph is tridiagonal, with row sums equal to its valency $k_1$. The entries of $B_1$ along the main diagonal are denoted by $a_i$, the entries just above the main diagonal are the $b_i$'s and the entries just below the diagonal are the $c_i$'s. Since the sum of any row of $B_1$ is $k_1$, we have the identity $c_i + a_i + b_i =k_1$ for $i=0,\dots,d$. This is why, for distance-regular graphs, to recover the parameters for the association scheme it generates, one only requires the intersection array: $[b_0,b_1,\dots,b_{d-1}; c_1, c_2, \dots, c_d]$. For quotient-polynomial graphs, we seek something along the same lines; a small subset of integer parameters from which the intersection matrices can be recovered. For symmetric association schemes, there is a straightforward way to obtain a set of parameters that will suffice. This makes use of two properties of the intersection matrices. First, it is well-known that every row sum of the intersection matrix $B_j$ for $j=0,1,\dots,d$ is equal to the corresponding valency $k_j$. The second is the general identity that the intersection numbers of a symmetric association scheme will satisfy $$\begin{aligned} \lambda_{ij\ell} k_{\ell} = \lambda_{i\ell j} k_j, \mbox{ for all } 0 \le i,j,\ell \le d.\end{aligned}$$ Let $T_s = \frac{s(s+1)}{2}$ be the $s$-th triangular number. **Proposition 1**. *All the intersection numbers of a symmetric association scheme of class $d$ can be recovered from* (i) *the list of valencies $k_1, \dots, k_d$;* (ii) *the $T_{d-1} + T_{d-2} + \dots + T_1$ intersection numbers $\lambda_{i,j,\ell}$ with $1 \le i \le j < \ell \le d$ that lie below the main diagonals of the non-identity intersection matrices.* Proof. The intersection numbers below the main diagonal of the first intersection matrix $B_1$ are given, so using the valencies we can find the intersection numbers $\lambda_{1\ell j} = \lambda_{1 j \ell} k_{\ell}/k_j$ that lie above the main diagonal. Using the row sum identity we can find the missing entries along the main diagonal. So $k_1$ and the $T_{d-1}$ intersection numbers $\lambda_{1 j \ell}$ from Proposition [Proposition 1](#paramSAS){reference-type="ref" reference="paramSAS"} determine $B_1$; for example, when $d=4$ we have $T_{d-1}=T_3=6$, so only the $7$ parameters $k_1, \lambda_{112}, \lambda_{113}, \lambda_{123}, \lambda_{114}, \lambda_{124}, \lambda_{134}$ are needed to determine $B_1$. Moving from $B_{i-1}$ to $B_i$, we know the entries of the $j$-th columns of $B_i$ for $1 \le j \le i-1$, since $\lambda_{i i' \ell} = \lambda_{i' i \ell}$ for $i' < i$. Using the same identities as above, we can use the next $k_i$ and $T_{d-i}$ intersection numbers, i.e., $1+T_{d-i}$ further parameters, to determine $B_i$. $\rule{2.5mm}{3mm}$\ To illustrate Proposition [Proposition 1](#paramSAS){reference-type="ref" reference="paramSAS"}, consider the case of a symmetric association scheme with $4$ classes.
Then, Proposition [Proposition 1](#paramSAS){reference-type="ref" reference="paramSAS"} tells us the only parameters we need are the ones indicated here: $$B_1 = \begin{bmatrix} 0 & k_1 & 0 & 0 & 0 \\ 1 & \ast & \ast & \ast & \ast \\ 0 & \lambda_{112} & \ast & \ast & \ast \\ 0 & \lambda_{113} & \lambda_{123} & \ast & \ast \\ 0 & \lambda_{114} & \lambda_{124} & \lambda_{134} & \ast \end{bmatrix},\,\,\,\,\,\,\, B_2 = \begin{bmatrix} 0 & 0 & k_2 & 0 & 0 \\ 0 & \ast & \ast & \ast & \ast \\ 1 & \ast & \ast & \ast & \ast \\ 0 & \ast & \lambda_{223} & \ast & \ast \\ 0 & \ast & \lambda_{224} & \lambda_{234} & \ast \end{bmatrix},$$\ $$B_3 = \begin{bmatrix} 0 & 0 & 0 & k_3 & 0 \\ 0 & \ast & \ast & \ast & \ast \\ 0 & \ast & \ast & \ast & \ast \\ 1 & \ast & \ast & \ast & \ast \\ 0 & \ast & \ast & \lambda_{334} & \ast \end{bmatrix},\,\,\,\,\,\,\, B_4 = \begin{bmatrix} 0 & 0 & 0 & 0 & k_4 \\ 0 & \ast & \ast & \ast & \ast \\ 0 & \ast & \ast & \ast & \ast \\ 0 & \ast & \ast & \ast & \ast \\ 1 & \ast & \ast & \ast & \ast \end{bmatrix}.$$ As is well-known, for distance-regular graphs the intersection matrices are determined entirely by the first intersection matrix $B_1$, and there is a unique natural ordering of $\textbf{B}=\{B_0,B_1,\dots,B_d\}$ for which $B_1$ is tridiagonal. The non-zero off-diagonal entries of $B_1$ give us the intersection array $[b_0,b_1,\dots,b_{d-1};c_1,c_2,\dots,c_d]$, from which $B_1$ and the orthogonal polynomials $f_i(x)$ with $B_i = f_i(B_1)$ can be determined. For quotient-polynomial graphs, since the association scheme is monogenic, we are interested to know whether the valencies and the entries below the diagonal of $B_1$ are enough to determine the association scheme. With this in mind, we propose the following parameter array pattern for quotient-polynomial graphs of class $d$ and valency $k_1$: $$\begin{aligned} \label{array} \quad [[k_1,\dots,k_d],[\lambda_{112},\dots,\lambda_{11d}; \lambda_{123}, \dots, \lambda_{12d}; \lambda_{134}, \dots, \lambda_{1,d-1,d}]].\end{aligned}$$ We will restrict ourselves to only consider parameter arrays of class $d$ and valency $k_1$ that determine a non-negative $(d+1) \times (d+1)$ integral intersection matrix $B_1$ with row sums equal to $k_1$ using the algorithm of Proposition [Proposition 1](#paramSAS){reference-type="ref" reference="paramSAS"}; we will refer to these as QPG parameter arrays. Recall that the set of intersection matrices $\textbf{B}$ of an association scheme is required to be the basis of a standard integral table algebra (SITA) (see [@AFM]). We will say that a QPG parameter array with class $d$ is *valid* if the intersection matrix $B_1$ it determines generates a SITA. We will consider a pair of valid QPG parameter arrays to be equivalent if the intersection matrices they determine are conjugate by a $(d+1) \times (d+1)$ permutation matrix whose corresponding permutation lies in $Sym(\{2,\dots,d\})$ (i.e. it fixes $0$ and $1$). **Theorem 2**. *There is a one-to-one correspondence between the set of equivalence classes of valid QPG parameter arrays of class $d$ and valency $k_1$ and the exact isomorphism classes of standard integral table algebras of rank $d+1$ that are generated by an element of valency $k_1$. In particular, every quotient-polynomial graph of valency $k_1$ that generates a symmetric association scheme of class $d$ can be associated with a unique equivalence class of valid QPG parameter arrays with class $d$ and valency $k_1$.* Proof.
Each valid QPG parameter array in ([\[array\]](#array){reference-type="ref" reference="array"}) determines an intersection matrix $B_1$ whose adjacency algebra $\mathbb{Q}[B_1]$ has a standard integral table algebra basis $\textbf{B}$. This implies that there are polynomials $f_i(x)$ for $i\in \{2,\ldots,d\}$ of degree $\le d$ for which each basis element $B_i$ is equal to $f_i(B_1)$. To find these polynomials, consider the first rows $v_i$ of the powers $B_1^i$ for $i=0,1,\dots, d$. The first rows of the $B_i$ are equal to $k_i e_i$, where $e_i$ is the elementary standard basis vector with a $1$ in position $i$ for $i\in\{0,1,\dots, d\}$. If $f_i(x) = \sum_{\ell=0}^d a_{i,\ell} x^{\ell}$, then $B_i = f_i(B_1)$ implies $k_i e_i = \sum_{\ell=0}^d a_{i,\ell} v_{\ell}$ for $i\in \{2,\ldots,d\}$. If the QPG parameter set is valid, then the coefficients of $f_i(x)$ are found by expressing the $k_i e_i$ in the basis $\{v_0,v_1,\dots,v_d\}$ of $\mathbb{Q}^{d+1}$ coming from the first rows of the powers $B_1^i$ for $i\in\{0,1,\dots,d\}$. Conversely, if we start with the set of regular matrices of a standard integral table algebra generated by an element $B_1$ of valency $k_1$, then $B_1$ determines a valid QPG parameter set of class $d$, which is unique up to permuting $\{B_2, \dots, B_d\}$. $\rule{2.5mm}{3mm}$\ Since the standard integral table algebra above is generated by $B_1$, every $B_i \in \textbf{B}$ must be a rational polynomial in $B_1$. To find the polynomials $f_i(x)$ for which $B_i = f_i(B_1)$, we can proceed as in the first part of the above proof, and write the vectors $k_ie_i$ as linear combinations of the first row vectors $v_0, v_1, \dots, v_d$ of the powers $B_1^j$ for $j\in\{0,1,\dots,d\}$. If $k_ie_i = \sum_j a_{i,j} v_j$, then it must be the case that $B_i = \sum_j a_{i,j} B_1^j$. **Corollary 3**. *A valid QPG parameter array of class $d$ determines a unique set of $d+1$ orthogonal polynomials in a single variable.* For distance-regular graphs, the degree of $f_i(x)$ is $i$, so there is a natural ordering of the $A_i$'s. Under this ordering, the pattern of valencies is unimodal: $k_1 \le k_2 \le \dots \le k_t \ge k_{t+1} \ge \dots \ge k_d$ for some $t$, the $c_i$'s are increasing: $c_1 \le c_2 \le \dots \le c_d$, and the $b_i$'s are decreasing: $k_1 = b_0 \ge b_1 \ge \dots \ge b_{d-1}$. For quotient-polynomial graphs, it is not clear that there will be a preferred way to order the basis elements. When generating our database, we only made use of two (safe) assumptions: $\lambda_{112}>0$ and $k_3 \le k_4 \le \dots \le k_d$. As noted by Fiol in [@Fiol2016], the fact that $A_1$ generates $\mathbb{Q}\mathcal{A}$ implies that the distance partition induces a partial order on the nontrivial $A_i$'s: $A_1$ is the only adjacency matrix at distance $0$, the adjacency matrices $A_i$ are at distance $2$ if $i \ne 1$ and $\lambda_{11i}>0$, $A_i$ is at distance $3$ if it is not at a distance less than $3$ and $\lambda_{1ji} > 0$ for some $A_j$ at distance $2$, etc. Distance-regular graphs are the extreme case of this; for them, $A_i$ is the unique basis element at distance $i$ for $i\in\{0,1,\dots,d\}$.
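To make the recovery procedure of Proposition [Proposition 1](#paramSAS){reference-type="ref" reference="paramSAS"} and Theorem [Theorem 2](#QPGarrays){reference-type="ref" reference="QPGarrays"} concrete, the following short sketch rebuilds $B_1$ from a class-$4$ QPG parameter array and then solves for the coefficients of the polynomials $f_i(x)$ from the first rows of the powers of $B_1$. This is only an illustration written for this note (it is not the database code [@HMprogram]); the function names and the use of `sympy` are our own choices.

```python
# Illustrative sketch only, not the database implementation from [HMprogram].
import sympy as sp

def build_B1(valencies, lambdas):
    # valencies = [k_1,...,k_d]; lambdas = the below-diagonal entries of B_1,
    # listed column by column as in the parameter array pattern (array).
    d = len(valencies)
    k = [1] + list(valencies)
    B1 = sp.zeros(d + 1, d + 1)
    B1[0, 1] = k[1]                            # first row of B_1 is k_1 * e_1
    B1[1, 0] = 1
    vals = iter(lambdas)
    for j in range(1, d):                      # given entries lambda_{1,j,l}, j < l
        for l in range(j + 1, d + 1):
            B1[l, j] = next(vals)
    for j in range(1, d + 1):                  # above the diagonal: lambda_{1,l,j} = lambda_{1,j,l} k_l / k_j
        for l in range(j + 1, d + 1):
            B1[j, l] = B1[l, j] * sp.Rational(k[l], k[j])
    for i in range(1, d + 1):                  # diagonal entries from the row-sum identity
        B1[i, i] = k[1] - sum(B1[i, j] for j in range(d + 1) if j != i)
    return B1

def recover_polynomials(B1, valencies):
    # Express k_i * e_i in the basis {v_0,...,v_d} of first rows of powers of B_1
    # (this assumes the parameter array is valid, so the v_l are independent).
    d = len(valencies)
    V = sp.Matrix([list((B1 ** l)[0, :]) for l in range(d + 1)]).T   # columns are v_l
    k = [1] + list(valencies)
    x = sp.symbols('x')
    polys = {}
    for i in range(2, d + 1):
        rhs = sp.zeros(d + 1, 1)
        rhs[i] = k[i]
        a = V.LUsolve(rhs)                     # coefficients a_{i,0},...,a_{i,d}
        polys[i] = sp.expand(sum(a[l] * x ** l for l in range(d + 1)))
    return polys

# Example input: the class-4, order-76 array [[18,18,36,3],[2,5,0;5,6;12]]
# (one of the parameter arrays discussed later in this note).
B1 = build_B1([18, 18, 36, 3], [2, 5, 0, 5, 6, 12])
print(B1)
print(recover_polynomials(B1, [18, 18, 36, 3]))
```

For this input the resulting $B_1$ is integral and non-negative with all row sums equal to $18$, which is exactly the precondition we place on QPG parameter arrays above.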
# Feasibility conditions for quotient-polynomial graphs

Now that we have a convenient system ([\[array\]](#array){reference-type="ref" reference="array"}) for expressing parameters of quotient-polynomial graphs, our next goal for this project is to produce a database that gives representatives for the equivalence classes of valid QPG parameter arrays of class $d$ and valency $k_1$ that satisfy the known feasibility conditions for their quotient-polynomial graphs to exist. For example, for class $4$ and some small choice of $k_1$, we would first generate parameter arrays $$\begin{aligned} [[k_1,k_2,k_3,k_4],[\lambda_{112},\lambda_{113},\lambda_{114};\lambda_{123},\lambda_{124};\lambda_{134}]] \end{aligned}$$ and check that the first intersection matrix $B_1$ which arises from this parameter set has non-negative integer entries and has all row sums equal to $k_1$. Before we can consider the typical feasibility conditions for symmetric association schemes (for an up-to-date account of these see [@HM2023] and [@GVW2021]), we first need to see if $\{B_0=I, B_1\}$ can be completed to a basis with the right properties, and we would like to do this in a way that is computationally inexpensive! With this in mind, we calculate the minimal polynomial $\mu_1(x)$ of $B_1$, and check that it has degree $(d+1)$. If we wish, at this point we can use an implementation of the *Hermite-Sylvester theorem* to check that the roots of $\mu_1(x)$ are all real. (For a recent overview of this classical theorem, see [@NathansonMG2021].) Next, we check that $\mathbb{Q}[B_1]$ has a SITA basis. We calculate the powers $B_1^i$ for $i\in \{2,\ldots,d\}$ and let $v_i \in \mathbb{Q}^{d+1}$ be the first row of $B_1^i$ for $i\in\{0,1,\dots,d\}$. We check that the $\mathbb{Q}$-span of $\{v_0,v_1,\dots,v_d\}$ has dimension $d+1$. Assuming it does, proceeding as in the proof of Theorem [Theorem 2](#QPGarrays){reference-type="ref" reference="QPGarrays"}, we let $e_i$ be the elementary standard basis of $\mathbb{Q}^{d+1}$ for $i\in\{0,1,\dots,d\}$, and express the $k_i e_i$ as linear combinations in the basis $\{v_0, v_1, \dots, v_d\}$: $$k_i e_i = \sum_{\ell=0}^d a_{i,\ell} v_{\ell}, \mbox{ for } i\in\{0,1,\dots,d\}.$$ Since $k_i e_i$ is naturally the first row of $B_i$ for $i\in\{0,1,\dots,d\}$, and the solution to the $a_{i,\ell}$'s is unique for each $i$, if we set $f_i(x) = \sum_{\ell=0}^d a_{i,\ell} x^{\ell}$, then we can conclude that $B_i = f_i(B_1)$ is the only possible SITA basis for $\mathbb{Q}[B_1]$ that can be associated with this QPG parameter array. Then, we check directly that $\mathbf{B} = \{B_0,B_1,\dots,B_d\}$ is a SITA basis.\ If we do obtain a SITA basis for $\mathbb{Q}[B_1]$, we can quickly check the following feasibility conditions for a symmetric association scheme: (i) [\[fc\]]{#fc label="fc"} **Quick check if $B_1$ allows integral multiplicities:** Find the irreducible factorization $\mu_1(x) = (x-k_1) g_1(x) \cdots g_s(x)$. Let $g_t(x) = x^{n_t} + g_{t,1} x^{n_t-1} + \dots$ for $t\in\{1,\dots,s\}$, and find non-negative integers $m_1,\dots,m_s$ for which $$1+n_1m_1+\dots+n_sm_s = n := 1+k_1+\dots+k_d$$ and $$\label{eq} k_1 - m_1 g_{1,1} -\dots -m_s g_{s,1}=0.$$ The equality ([\[eq\]](#eq){reference-type="ref" reference="eq"}) must be satisfied because it is precisely the column orthogonality relation applied to the $B_1$-column of the first eigenmatrix $P$. At least one such list of prospective integral multiplicities needs to exist.
Since we are only checking that one column of $P$ admits integral multiplicities, we also check that these satisfy further requirements necessary for them to be the actual multiplicities of the SITA: the *integrality of the Frame number $F$ and its compatibility with the discriminant of $\mu_1(x)$*. It is well-known that the *Frame number* of a commutative association scheme must be a positive integer. In [@Hanaki2022], Hanaki has recently observed that $F$ also needs to be a square times the product of the discriminants of the fields occurring in the Wedderburn decomposition of $\mathbb{Q}S$. From feasibility condition ([\[fc\]](#fc){reference-type="ref" reference="fc"}), we can calculate our Frame number $$F(S) = n^{d+1} \frac{k_1 \cdots k_d}{m_1^{n_1} \cdots m_s^{n_s}}.$$ Since the degree of the minimal polynomial of $B_1$ is $d+1$, and the powers of $B_1$ will lie in $\mathbb{Z}S$, we have that $\mathbb{Z}[B_1]$ is an integral subalgebra of $\mathbb{Z}S$. Therefore, the discriminant $D$ of the polynomial $\mu_1(x)$ must be equal to $F(S)$ times a perfect square integer. So we also check that $D/F(S)$ is a perfect square. We repeat the next step for each such list until we find a list of integral multiplicities that passes both steps. (ii) [\[C1\]]{#C1 label="C1"} **Check for integrality of standard trace on powers of $B_1$:** Given a list of potential integral multiplicities found in feasibility condition ([\[fc\]](#fc){reference-type="ref" reference="fc"}), we check for the existence of a *standard integral trace*. Calculate the degree $n$ polynomial $$f(x) = (x-k_1) g_1(x)^{m_1} g_2(x)^{m_2} \dots g_s(x)^{m_s},$$ and let $C$ be its companion matrix. Then, calculate $\frac{1}{n} tr(C^i)$ for $i\in\{0,1,\dots,d-1\}$, and check that these values are non-negative integers. Since $C$ is a conjugate of the (unknown) matrix $A_1$, and the standard feasible trace of the adjacency algebra $\mathbb{Q}S$ is equal to $\frac{1}{n}$ times the usual trace on $n \times n$ matrices, these values are the values of the standard feasible trace on the powers $A_1^i$, so they must be non-negative integers. **Remark 4**. *The quick check that $B_1$ admits integral multiplicities ([\[fc\]](#fc){reference-type="ref" reference="fc"}) does not guarantee the other $B_i$ will admit the same integral multiplicities. To sieve out these "false positives", it is worthwhile to find the irreducible factorization of the characteristic polynomials of $B_2, \dots, B_d$ and see if their roots satisfy the column orthogonality relations corresponding to a choice of multiplicities compatible with $B_1$. The following condition, that the roots of $\mu_j(x)$ have non-negative Galois trace for all $1 \le j \le d$, can often be used to show the multiplicities of $\mathbf{B}$ are not integral. This condition easily eliminates many parameter sets in our searches that survive the above sieve.* **Lemma 5**. *Suppose there is an intersection matrix $B_j \in \mathbf{B}$ whose minimal polynomial $\mu_j(x)$ has nonnegative rational roots, and has the property that the coefficient of $x^{k-1}$ for every irreducible factor of $\mu_j(x)$ of degree $k>1$ is less than or equal to zero. Then, $\mathbf{B}$ cannot have integral multiplicities.* Proof. Let $P = (P_{i,j})$ be the character table of the SITA determined by $\mathbf{B}$.
Let $j>0$, by the column orthogonality relation corresponding to the column $j$ we have $$0=\sum_{i=0}^d m_i P_{i,j}.$$ The rational roots of $\mu_j(x)$ are the rational entries $P_{i,j}$ of the column of $P$ corresponding to $B_j$, and one of these roots, the valency $k_j$, is guaranteed with multiplicity $m_0 = 1$. Therefore, the contribution corresponding to the rational roots of $\mu_j(x)$ is strictly positive. If all the multiplicities $m_i$ are integers, then multiplicities corresponding to each Galois conjugacy class of irrational roots of $\mu_j(x)$ will be constant. The coefficient of $x^{k-1}$ is the negative of sum of the $P_{i,j}$'s lying in one Galois conjugacy class. So the assumption of the lemma implies that these $P_{i,j}$ sum to a nonnegative rational value $c$, which is then multiplied by common integral multiplicity $m_i$, and thus these will also make a nonnegative contribution. So if $\mathbf{B}$ were to have integral multiplicities, then it would contradict the column orthogonality relation for this column. $\rule{2.5mm}{3mm}$\ **Example 6**. *As an example surviving our sieve for which the above lemma applies, consider the SITA obtained from the QPG parameter array $[[30,2,30,30],[30,8,7;0,0;14]]$ with order $93$. The minimal polynomials of its non-identity basis elements have factorizations $\mu_1(x) = (x-30)(x)(x^3+x^2-93x-372)$, $\mu_2(x)=(x-2)(x+1)$, $\mu_3(x) = (x-30)(x)(x^3+15x^2-30x-540)$, and $\mu_4(x)=(x-30)(x)(x^3-7x^2-147x-294)$. The factorization of $\mu_4(x)$ satisfies the conditions of Lemma [Lemma 5](#nonnegelim){reference-type="ref" reference="nonnegelim"}, so this SITA cannot have integral multiplicities. * **Example 7**. *As an example where the other basis elements do not admit the same integral multiplicities, consider the QPG parameter array $[[ 12, 9, 18, 36 ],[ 4, 2, 0, 2, 1, 4 ]]$ with order $76$. This parameter array produces a singly-generated SITA $\mathbf{B}$ of rank $5$ that survives our sieve. It satisfies the handshaking lemma condition (see [@HM2023 Section 5]) because the entries $(B_3)_{3,j}$ for $j=1,\dots,4$ are all even. The minimal polynomials of its basis elements are: $\mu_1(x) = (x-12)(x+4)(x^3-4x^2-23x-2)$, $\mu_2(x)=(x-9)(x+3)(x^3-3x^2-7x+14)$, $\mu_3(x) = (x-18)(x+1)$, and $\mu_4(x) = (x-36)(x+12)(x^3-7x^2-17x-16)$. The three cubic irreducibles appearing in these factorizations have the same nonabelian splitting field, so the character values of this SITA are not cyclotomic. Since the row orthogonality relation implies the non-principal row sums of the character table are $0$, we can obtain a rationalized form of the character table of $\mathbf{B}$ from the above factorizations: $$\begin{array}{r|ccccc} P& B_0 & B_1 & B_2 & B_3 & B_4 \\ \hline \delta & 1 & 12 & 9 & 18 & 36 \\ \phi & 1 & -4 & -3 & 18 & -12 \\ \psi_1 & 1 & \alpha_1 & \beta_1 & -1 & \gamma_1 \\ \psi_2 & 1 & \alpha_2 & \beta_2 & -1 & \gamma_2 \\ \psi_3 & 1 & \alpha_3 & \beta_3 & -1 & \gamma_3 \\ \end{array}$$ where the unknown irrational entries of the character table satisfy $\alpha_1 + \alpha_2 + \alpha_3 = 4$, $\beta_1+\beta_2+\beta_3 = 3$, and $\gamma_1+\gamma_2+\gamma_3 = -7$. Applying the row orthogonality relation on the $\phi$-row with itself gives $$1 + \frac{16}{12} + \frac{9}{9} + \frac{18^2}{18} + \frac{12^2}{36} = \frac{76}{m_{\phi}},$$ so $m_{\phi}=3$. Assuming the multiplicities are integral, the last three rows have the same multiplicity $m_{\psi}$, so $1 + 3 + 3 m_{\psi} = 76$ implies $m_{\psi}$ would have to be $24$. 
But then the column orthogonality relation applied to the $B_0$- and $B_2$-column would require that $0=9(1) + (-3)(3) + (3)(24)$. Since this does not hold, we can conclude the multiplicities of this SITA are also not integral. * **Remark 8**. *The check of integrality of the standard trace on powers of $A_1$ is quite powerful. It even eliminates some of the examples reported in [@HM2023] that were shown to pass all known feasibility conditions with the exception of the quadruple intersection number condition (which was skipped due to the authors not being able to supply an efficient implementation for this rank). For example, the parameter array $[[8,8,24,4],[1,2,0;1,2;6]]$ corresponds to a feasible order $45$ example in [@HM2023], so it definitely corresponds to a SITA with integral multiplicities. However, it does not survive the standard trace condition of our sieve.* Our database implementation [@HMprogram] produces lists of valid QPG parameter arrays that survive the above sieving process, and indicates whether they pass the basic feasibility checks for symmetric association schemes. These include the handshaking lemma, integral multiplicities, nonnegativity of Krein parameters, and the absolute bound condition (see [@HM2023]). Of course, there are more feasibility conditions for symmetric association schemes that can be checked, but some of these are challenging to implement when the splitting field of the SITA has a nonabelian Galois group, and implementing them for ranks larger than $4$ is challenging. As of this writing the database [@HMprogram] includes all QPG parameter arrays of order up to $250$ with any valency for rank $4$, valency up to $30$ for rank $5$, valency up to $15$ for rank $6$, and valency up to $12$ for rank $7$. We have also found all QPG parameter arrays with rank $4$ and $5$ with arbitrary order and valency up to $8$. There is some overlap of this database with existing databases of distance-regular graphs.

# New parameter sets for symmetric association schemes with noncyclotomic eigenvalues

In this last section, we present the new parameter sets for prospective quotient-polynomial graphs with noncyclotomic eigenvalues that emerged from our searches. We also indicate whether or not they pass the feasibility conditions for symmetric association schemes which can be easily checked from the intersection matrices and rationalized character table: the handshaking lemma, whether the rationalized character table admits integral multiplicities, nonnegativity of Krein parameters, and the absolute bound condition. We have used estimates of the roots of the minimal polynomials to determine that the Krein parameters are nonnegative. In addition, the nonzero Krein parameters are used to check the absolute bound condition. All the examples with status "F" listed in the following tables pass these feasibility conditions. The examples reported here extend the lists of parameter sets of small rank $5$ examples found in [@HM2023] and include new examples of rank $6$ and $7$. The only other such examples we know of are the feasible pseudocyclic parameter sets of large prime order for ranks $5$ and $6$ that were found earlier by Hanaki and Teranishi [@Hanaki]. To date, every feasible parameter set for a commutative association scheme with noncyclotomic eigenvalues that has been found can be associated with the existence of a quotient-polynomial graph.
(i) Rank 5 parameter sets, $n \le 250, k_1 \le 30$:

The status column of the tables uses the following codes. nr-xC: not realizable because not in classification; nr-xF: not realizable because it fails one of the easy-to-implement feasibility conditions; F: passes easy-to-implement feasibility conditions. The status includes a citation to [@HM2023] if the corresponding SITA was reported earlier there.

  Order   Parameter Array                      Status
  ------- ------------------------------------ -----------------
  $35$    $[[4,12,12,6],[1,0,0;1,2;2]]$        nr-xC [@HM2023]
  $76$    $[[12,18,36,9],[2,0,4;4,4;4]]$       nr-xF
  $76$    $[[18,18,36,3],[2,5,0;5,6;12]]$      F [@HM2023]
  $88$    $[[14,35,35,3],[4,0,14;6,0;0]]$      nr-xF [@HM2023]
  $88$    $[[21,3,21,42],[21,4,3;0,0;8]]$      nr-xF [@HM2023]
  $93$    $[[12,30,30,20],[2,2,0;6,3;6]]$      F [@HM2023]
  $93$    $[[30,2,30,30],[30,8,7;0,0;14]]$     nr-xF
  $110$   $[[28,4,21,56],[28,12,4;0,0;6]]$     F
  $112$   $[[15,30,36,30],[4,0,2;5,1;6]]$      F
  $116$   $[[19,1,19,76],[19,6,2;0,0;3]]$      nr-xF [@HM2023]
  $119$   $[[16,48,48,6],[1,3,0;7,8;8]]$       F
  $120$   $[[17,17,34,51],[2,0,4;3,1;6]]$      F
  $129$   $[[28,2,28,70],[28,10,4;0,0;6]]$     nr-xF [@HM2023]
  $133$   $[[12,24,48,48],[2,1,0;3,2;4]]$      F
  $133$   $[[18,36,72,6],[4,2,0;4,6;12]]$      F
  $135$   $[[8,56,56,14],[1,0,0;3,4;4]]$       F
  $176$   $[[28,7,70,70],[28,8,0;0,0;12]]$     F
  $176$   $[[30,10,45,90],[30,10,0;0,0;10]]$   F
  $180$   $[[28,4,63,84],[28,4,0;0,0;12]]$     F
  $189$   $[[20,80,80,8],[1,3,0;9,10;10]]$     F [@HM2023]
  $190$   $[[18,36,99,36],[1,0,5;4,0;11]]$     F [@HM2023]
  $190$   $[[27,27,27,108],[7,3,3;0,4;4]]$     nr-xF
  $209$   $[[10,90,90,18],[1,0,0;4,5;5]]$      F
  $210$   $[[11,99,66,33],[1,0,0;6,3;6]]$      nr-xF

  : Rank $5$ parameter sets with noncyclotomic eigenvalues

(ii) Rank 6 parameter sets, $n \le 250$, $k_1 \le 16$:

  Order   Parameter Array                              Status
  ------- -------------------------------------------- --------
  $105$   $[[16,16,16,24,32],[4,2,2,1;2,0,4;4,2;3]]$   F

  : Rank $6$ parameter sets with noncyclotomic eigenvalues

(iii) Rank 7 parameter sets, $n \le 250$, $k_1 \le 12$:

  Order   Parameter Array                                            Status
  ------- ---------------------------------------------------------- --------
  $36$    $[[4,8,1,6,8,8],[1,4,0,0,0;0,0,3,0;0,0,0;0,3;1]]$          nr-xF
  $44$    $[[12,6,4,6,6,9],[10,0,0,6,4;0,2,0,0;0,2,4;4,4;0]]$        nr-xF
  $100$   $[[6,24,1,20,24,24],[1,6,0,0,0;0,0,5,0;0,0,0;0,5;1]]$      nr-xF
  $100$   $[[12,24,1,14,24,24],[5,12,0,0,0;0,0,7,0;0,0,0;0,7;5]]$    nr-xF
  $164$   $[[12,40,1,30,40,40],[3,12,0,0,0;0,4,6,0;0,0,0;0,6;6]]$    F
  $196$   $[[8,48,1,42,48,48],[1,8,0,0,0;0,0,7,0;0,0,0;0,7;1]]$      nr-xF
  $220$   $[[12,24,1,14,84,84],[5,12,0,0,0;0,0,2,0;0,0,0;0,2;10]]$   nr-xF

  : Rank $7$ parameter sets with noncyclotomic eigenvalues

As of this writing, we have made attempts to construct the symmetric association schemes for the feasible parameter sets listed above with orders up to $76$, but so far these attempts have been inconclusive. Based on our exhaustive computer searches, we reach the following conclusions for small rank QPGs that have noncyclotomic eigenvalues.

**Theorem 9**.
1. *All rank $5$ QPGs with valency $7$ or less have cyclotomic eigenvalues. There is a feasible rank $5$ parameter set with valency $8$ that has noncyclotomic eigenvalues.*

2. *All rank $6$ QPGs with valency up to $15$ and order up to $250$ have cyclotomic eigenvalues. There is a feasible rank $6$ parameter set with valency $16$ that has noncyclotomic eigenvalues.*

3. *All rank $7$ QPGs with valency up to $11$ and order up to $250$ have cyclotomic eigenvalues. There is a feasible rank $7$ parameter set with valency $12$ that has noncyclotomic eigenvalues.*

**Data availability statement:** Data supporting this article is available at <https://github.com/RoghayehMaleki/QPGdatabase->.

Z. Arad, E. Fisman, and M. Muzychuk, Generalized table algebras, *Israel J. Math.*, **114** (1999), 29-60.

M. Fiol, Quotient-polynomial graphs, *Linear Algebra Appl.*, **488** (2016), 363-376.

M. Fiol and S. Penjić, On symmetric association schemes and associated quotient-polynomial graphs, *Algebr. Comb.*, **4** (2021), no. 6, 947-969.

A. Hanaki, Frame numbers, splitting fields, and integral adjacency algebras of commutative association schemes, *Arch. Math. (Basel)*, **114** (2020), no. 3, 259-263.

A. Hanaki, Intersection numbers of integral standard generalized table algebras with non-cyclotomic minimal splitting fields, 2016.\
`http://math.shinshu-u.ac.jp/~hanaki/pdf/teranishi.pdf`

A. Herman and R. Maleki, The search for small association schemes with noncyclotomic eigenvalues, *Ars Math. Contemp.*, **23** (2023), \#P3.02.

A. Herman and R. Maleki, *QPGdatabase*, August 31, 2023.\
<https://github.com/RoghayehMaleki/QPGdatabase->

A. L. Gavrilyuk, J. Vidali, and J. S. Williford, On few-class $Q$-polynomial association schemes: feasible parameters and nonexistence results, *Ars Math. Contemp.*, **20** (2021), 103-127.

M. B. Nathanson, The Hermite-Sylvester criterion for real-rooted polynomials, *Math. Gaz.*, **105** (2021), no. 562, 122-125.

I. Ponomarenko, Some open problems for coherent configurations, 2017.\
`http://www.pdmi.ras.ru/~inp/cp01.pdf`

[^1]: Corresponding author e-mail: allen.herman\@uregina.ca

[^2]: The first author's work has been supported by NSERC.

[^3]: The second author's research is supported in part by the Ministry of Education, Science and Sport of Republic of Slovenia (University of Primorska Developmental funding pillar).
arxiv_math
{ "id": "2309.03657", "title": "Parameters of Quotient-Polynomial Graphs", "authors": "Allen Herman and Roghayeh Maleki", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We introduce a type of safe extremum seeking (ES) controller, which minimizes an unknown objective function while also maintaining practical positivity of an unknown barrier function. We show semi-global practical asymptotic stability of our algorithm and present an analogous notion of practical safety. The dynamics of the controller are inspired by the quadratic program (QP) based safety filter designs which, in the literature, are more commonly used in cases where the barrier function is known. Conditions on the barrier and objective function are explored showing that non convex problems can be solved. A Lyapunov argument is proposed to achieve the main results of the paper. Finally, an example is given of the algorithm which solves the constrained optimization problem. author: - "Alan Williams$^{1,2}$, Miroslav Krstic$^{1}$ and Alexander Scheinker$^{2}$ [^1] [^2] [^3]" bibliography: - refs.bib title: " **Semi-Global Practical Extremum Seeking with Practical Safety\\*** " --- # Introduction This paper presents an ES algorithm which can be used to minimize an unknown objective function $J(\theta)$ over parameters $\theta$ while also keeping the system safe. Safety is considered to be a measured unknown function $h(\theta)$ and is maintained by keeping $h$ positive. The analysis is based on the framework in [@nesic2010unifying], preceded by notable papers [@tan2005non], [@tan2006non] proving semi-global practical asymptotic (SPA) stability properties of extremum seeking. We use the basic ideas of [@nesic2010unifying] but instead consider the constrained optimization problem, and present a Lyapunov function for showing SPA stability of the reduced constrained dynamics. Additionally we present a notion analogous to that of practical stability, called 'practical safety'. The dynamics we present are based on the QP safety filter [@ames2016control] and therefore it gives the designer a choice to weight the importance of the objective versus the safety, through a parameter $c$, while the trajectory of $\theta$ tracks toward the constrained optimum. Our algorithm approximately solves the following problem: $$\min_{\theta(t)} J(\theta(t)) \text{ s.t. } h(\theta(t)) \geq 0 \text{ for all } t \in [0, \infty) .$$ Solving constrained optimization problems using ES has been explored in other contexts in the literature, both using the framework described in [@nesic2010unifying] as well as other methods such as [@guay2015constrained]. The results in [@poveda2015shahshahani] prove SPA stability of an ES based controller which not only converges to set, but also an optimum constrained to a set. Other dynamics have also been considered in similar settings such as [@labar2019constrained] and [@liao2019constrained]. Switching ES algorithms have also been used to solve constrained optimization problems [@chen2023continuoustime]. Safe optimization of systems is relevant in many areas. Particle accelerators are complex, time varying systems which must be constantly tuned for optimal operation. Because of its model-independence, robustness to noise, and ability to handle large numbers of parameters simultaneously, ES is especially well suited for particle accelerator applications. For example, in [@scheinker2020online], the authors used a bounded form of ES [@scheinker2014extremum] for real-time multi-objective optimization of a particle accelerator beam in which analytic bounds are guaranteed on each parameter update despite acting on an analytically unknown and noisy measurement function. 
In [@scheinker2020online] the algorithm was pushed towards achieving safety by weighting a measure of how far the beam was off from a prescribed optimal trajectory. Such a cost-weighting ES approach requires hand tuning of weights associated with safety and offers no guarantees that safety will be maintained. Authors in [@kirschner2022tuning] tune several beamlines using Bayesian optimization and do so without violating safety constraints like beam loss. Here "safety" means that the particle beam does not damage key accelerator components during tuning, based on hand-picked safety margins which result in a trade-off between safety and performance. Authors in [@sui2015safe] apply constrained optimization schemes using Gaussian processes to recommender systems and therapeutic spinal cord stimulation. In these examples, it is desired that recommendations to users (for movies) must not be heavily disliked, and stimulation patterns for patients must not exceed a certain pain threshold. This work is heavily inspired by recent work [@williams2022practically]. Here, the QP based modification of standard ES was introduced, which approximately solves the constrained optimization problem. It was shown that from the QP safety formulation, an additive safety term can be introduced in the parameter dynamics. The algorithm in [@williams2022practically] was shown to be a practically safe scheme, but only locally and under restrictive assumptions on $J$ and $h$. In this work, we explore more general assumptions on $J$ and $h$, in $n$ dimensions, and present a semi-global result for both safety and stability. The dynamics presented in this work are slightly different from those of [@williams2022practically], due to the time scaling by $k \omega_f$, and the parameter $M^+$. The parameter $M^+$ is chosen large to bound the dynamics if the estimate of $\nabla h$ goes to zero. This is because the safety term in its exact form contains $\nabla h^T \nabla h$ in the denominator. The gain $k \omega_f$ scales the dynamics of the optimization parameter $\hat \theta$. This provides a tuning knob which can adjust the timescale of $\hat \theta$ proportionally, a requirement for SPA stability, which allows the estimator states to converge arbitrarily quickly. Additionally, our Lyapunov analysis provides a stability result which includes non convex $h$ and $J$, although $J$ having a unique minimum on the safe set is required for SPA stability results. We provide a 2D example with a non convex $h$ to conclude. **Notation:** For a differentiable function $Q: \mathbb{R}^n \to \mathbb{R}$ we denote the gradient $\nabla Q: \mathbb{R}^n \to \mathbb{R}^n$ as the vector $\nabla Q(x) = [\partial Q(x) / \partial x_1, \partial Q(x) / \partial x_2, ..., \partial Q(x) / \partial x_n]^T$, where the $i$th component is $\nabla_i Q(x) = \partial Q(x) / \partial x_i$. For $v \in \mathbb{R}^n$, the notation $|| v ||$ denotes the Euclidean norm. The continuous function $\beta: \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is of class $\mathcal{K}$ if $\beta(0)=0$ and it is strictly increasing. The continuous function $\beta: \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is of class $\mathcal{KL}$ if it is strictly increasing in its first argument and strictly decreasing to zero in its second argument. The image of a function $h$ is denoted by $\mathop{\mathrm{Im}}(h)$. The closed ball of radius $r$ around a point $p$ is $B_r(p) = \{ \theta \in \mathbb{R}^n : \| \theta - p \| \leq r \}$.
We use the term "SPA stability" to refer to the notion of semi-global practical asymptotic stability [@tan2005non]. A function $f(x,\epsilon)$ is $O(\epsilon)$ if for any compact set $\Omega$ there exists a positive pair $(\epsilon^*, k)$ such that $||f(x,\epsilon)||\leq k \epsilon$ for all $\epsilon \in (0,\epsilon^*]$ for all $x \in \Omega$.

# Algorithm Design

For a better understanding of how the dynamics were derived, see [@williams2022practically]. We introduce the algorithm: $$\begin{aligned} \begin{split}\label{eqn:th_dyn} \dot{\hat \theta} ={}& k \omega_f (-G_J +\\ & \min \{\|G_h\|^{-2},M^+\} \max\{G_J^T G_h - c \eta_h, 0\} G_h ) \end{split}\\ \dot G_J =& - \omega_f ( G_J - (J( \hat \theta(t) + S(t) ) - \eta_J)M(t) )\label{eqn:gj_dyn}\\ \dot \eta_J =& - \omega_f ( \eta_J -J( \hat \theta(t) + S(t) )) \label{eqn:etaj_dyn}\\ \dot G_h =& - \omega_f (G_h - (h( \hat \theta(t) + S(t) ) - \eta_h)M(t) )\label{eqn:gh_dyn}\\ \dot \eta_h =& - \omega_f ( \eta_h - h( \hat \theta(t) + S(t) )) \label{eqn:etah_dyn} \end{aligned}$$ where the state variables $\hat \theta, G_J, G_h \in \mathbb{R}^n$, $\eta_J, \eta_h \in \mathbb{R}$. Overall, the dimension of the system is $3n+2$. The map is evaluated at $\theta$, defined by $$\theta(t) := \hat \theta(t) + S(t) \; .$$ The integer $n$ denotes the number of parameters one wishes to optimize over. The design coefficients are $k, c, \omega_f, M^+\in \mathbb{R}_{>0}$. The perturbation signal $S$ and demodulation signal $M$ are given by $$\begin{aligned} S(t) &= a \left[ \sin(\omega_1 t), \;...\;, \sin(\omega_n t) \right]^T, \\ M(t) &= \frac{2}{a} \left[ \sin(\omega_1 t), \;...\;, \sin(\omega_n t) \right]^T,\end{aligned}$$ and contain additional design parameters $\omega_i, a \in \mathbb{R}_{>0}$.

# Assumptions

We define $$\begin{aligned} \mathcal{C} &= \{\theta \in \mathbb{R}^n : h(\theta) \geq 0 \}, \\ \partial \mathcal{C} &= \{\theta \in \mathbb{R}^n : h(\theta) = 0 \}, \\ \mathcal{U}&= \{\theta \in \mathbb{R}^n : h(\theta) \leq 0 \},\end{aligned}$$ where $\mathcal{C}$ is called the 'safe set' and $\partial\mathcal{C}$ is its boundary. We also define the superlevel sets of $h$, parameterized by $\rho \leq 0$, as $$\mathcal{C}_\rho = \{\theta \in \mathbb{R}^n : h(\theta) \geq \rho, \rho \in \mathop{\mathrm{Im}}(h) \cap \mathbb{R}_{\leq 0} \}.$$ The sets of the form $\mathcal{C}_\rho$ always contain $\mathcal{C}$ along with some unsafe region given by $\rho$, a non-positive value in the image of $h$. We also use the following assumptions throughout. **Assumption 1** (Objective Function Conditions). *The objective function $J: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable with locally Lipschitz Jacobian and satisfies:* 1. *$\theta^*_c \in \mathcal{C}$ is the unique constrained minimizer of $J$ on $\mathcal{C}$,* 2. *if $\nabla J(\theta) = 0$ for some $\theta \in \mathcal{C}$, then $\theta=\theta^*_c$.* **Assumption 2** (Barrier Function Conditions). *The barrier (or safety) function $h: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable with locally Lipschitz Jacobian and satisfies:* 1. *the safe set $\mathcal{C}$ is non-empty,* 2. *for any $\mathcal{C}_\rho$, there exists an $L \in (0, \infty)$ such that $\| \nabla h(\theta)\|>L$ for $\theta \in \mathcal{U} \cap \mathcal{C}_\rho$.[\[assum:h_grad\]]{#assum:h_grad label="assum:h_grad"}* **Assumption 3** (Optimizer Condition).
*If $\nabla h ( \theta )^T \nabla J ( \theta ) = || \nabla h ( \theta )|| ||\nabla J ( \theta ) ||$ ($\nabla h ( \theta )$ and $\nabla J (\theta)$ are collinear) for $\theta \in \partial \mathcal{C}$, then $\theta = \theta^*_c$.* **Assumption 4** (Angle Condition). *There exist $r^*>0$ and $f^* \in [0,1)$ such that $$\frac{\nabla J(\theta)^T \nabla h(\theta)}{|| \nabla J(\theta)|| || \nabla h(\theta)||} \leq f^*,$$ for $\theta \in \{\rho \leq h(\theta) \leq 0 \} \cap \{r^* \leq || \theta - \theta^*_c || \}$ for any $\rho \in \mathop{\mathrm{Im}}(h) \cap \mathbb{R}_{< 0}$.* **Assumption 5** (Radial Unboundedness). *The function $V = \max \{-h(\theta),0 \} + \max \{J(\theta) - J(\theta^*_c),0\}$ is positive definite and $|| \theta - \theta^*_c|| \to \infty \implies V \to \infty$.* **Assumption 6** (Bounded Levels of $h$). *Each of the sets $\mathcal{C}_\rho$ is compact.* **Assumption 7** (ES Constants). *The design constants are chosen as $\omega_f, \omega_i, \delta, a, k, c > 0$, where $\omega_i \slash \omega_j$ are rational with frequencies $\omega_i$ chosen such that $\omega_i\neq \omega_j$ and $\omega_i + \omega_j \neq \omega_k$ for distinct $i, j,$ and $k$.* ![Safe Extremum Seeking block diagram. Note that $A(G_J, G_h, \eta_h)=\min \{\|G_h\|^{-2},M^+\} \max\{G_J^T G_h - c \eta_h, 0\}$.](safe_es_block.pdf){#fig:block_diagram width="1.00\\linewidth"} Assumptions [Assumption 1](#assum:J){reference-type="ref" reference="assum:J"}, [Assumption 2](#assum:h){reference-type="ref" reference="assum:h"}, and [Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"} are used to show that the exact dynamics yield only a single equilibrium, and Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}, [Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} and [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} are used in the Lyapunov analysis. Assumption [Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"} is a general condition on the gradients of $J$ and $h$ on the boundary, which is used to force a unique equilibrium of the dynamics. This assumption is congruent with necessary conditions known about solutions in optimization (such as the so-called "method of Lagrange multipliers"). More restrictive assumptions can also be made in place of this. For example, in place of Assumption [Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"} we can assume $\mathcal{C}$ is convex and $J$ is strictly convex on $\mathcal{C}$. Then the dynamics can be shown to yield a unique equilibrium and the convergence analysis proceeds the same otherwise. Assumption [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"} is concerned with the cosine of the angle between $\nabla J$ and $\nabla h$ far away from the constrained equilibrium in the unsafe set. We utilize this assumption in finding a Lyapunov function with negative time derivative everywhere. It can be shown that this condition is true for $J$ quadratic and convex with $h$ linear - an example of a problem with a semi-infinite safe set. Assumption [Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} is used because we use the implication that the sublevel sets of a particular Lyapunov function (introduced later) can be chosen compact and arbitrarily large.
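Before stating the main results, the following is a minimal numerical sketch of the update law [\[eqn:th_dyn\]](#eqn:th_dyn){reference-type="eqref" reference="eqn:th_dyn"}-[\[eqn:etah_dyn\]](#eqn:etah_dyn){reference-type="eqref" reference="eqn:etah_dyn"}, written for illustration only. The objective, barrier, gains, probing frequencies, and step size below are placeholder assumptions (it is not code from this paper), and a simple forward-Euler integration is used.

```python
# Minimal forward-Euler sketch of the safe ES loop; all problem data and
# tuning values below are illustrative assumptions, not taken from the paper.
import numpy as np

def J(th): return (th[0] - 2.0) ** 2 + th[1] ** 2   # assumed objective (unknown to the controller)
def h(th): return 1.0 - th[0]                       # assumed barrier; safe set is {th_1 <= 1}

n = 2
a, k, c, w_f, M_plus = 0.1, 1.0, 1.0, 20.0, 1e3     # design constants (placeholder choices)
omega = np.array([70.0, 90.0])                      # distinct probing frequencies (placeholder)

def S(t): return a * np.sin(omega * t)              # perturbation signal
def M(t): return (2.0 / a) * np.sin(omega * t)      # demodulation signal

th_hat = np.array([0.0, 1.0])                       # initial parameter estimate
G_J, G_h = np.zeros(n), np.zeros(n)
eta_J, eta_h = J(th_hat), h(th_hat)

dt, T = 1e-4, 20.0
for step in range(int(T / dt)):
    t = step * dt
    th = th_hat + S(t)
    # safety term: min{1/||G_h||^2, M^+} max{G_J^T G_h - c eta_h, 0} G_h
    gh2 = float(G_h @ G_h)
    gain = M_plus if gh2 == 0.0 else min(1.0 / gh2, M_plus)
    safety = gain * max(float(G_J @ G_h) - c * eta_h, 0.0) * G_h
    th_hat = th_hat + dt * k * w_f * (-G_J + safety)
    G_J = G_J + dt * (-w_f) * (G_J - (J(th) - eta_J) * M(t))
    eta_J = eta_J + dt * (-w_f) * (eta_J - J(th))
    G_h = G_h + dt * (-w_f) * (G_h - (h(th) - eta_h) * M(t))
    eta_h = eta_h + dt * (-w_f) * (eta_h - h(th))

# For this toy problem the constrained minimizer is (1, 0); th_hat is expected
# to settle near it while h(th_hat) remains practically nonnegative.
print(th_hat, h(th_hat))
```

The clipping by $M^+$ and the $\max$ term mirror the QP-based safety filter discussed in the introduction, with the estimates $G_J$, $G_h$, $\eta_h$ standing in for $\nabla J$, $\nabla h$, and $h$.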
We give our main results for either Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}-[Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} holding or Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} holding. This is because Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} is strong enough to show SPA stability of the reduced system as it readily yields compact and arbitrarily large invariant sets. If we have a semi-infinite safe set, then we require Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}-[Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"}.

# Global Convergence of the Exact Algorithm

Before conducting analysis of the ES scheme, we study the optimization algorithm in its exact form, in order to find the appropriate Lyapunov function which will be used in the next section. Consider the following dynamics: $$\begin{gathered} \dot \theta = F(\theta) = - \nabla J(\theta) + \\ \frac{\nabla h(\theta) }{||\nabla h(\theta) ||^2} \max\{ \nabla J(\theta)^T \nabla h(\theta) - c h(\theta), 0 \}. \label{eqn:exact_dynamics}\end{gathered}$$ We do not risk dividing by zero in the expression of [\[eqn:exact_dynamics\]](#eqn:exact_dynamics){reference-type="eqref" reference="eqn:exact_dynamics"} as $\nabla h(\theta) = 0 \implies h(\theta)>0$ by Assumption [Assumption 2](#assum:h){reference-type="ref" reference="assum:h"}.[\[assum:h_grad\]](#assum:h_grad){reference-type="ref" reference="assum:h_grad"}. Therefore, $\lim_{||\nabla h(\theta)|| \to 0} F(\theta) = -\nabla J(\theta)$ on compact sets. The differential inequality $\dot h + ch \geq 0$ is commonly used to show the forward invariance of the safe set [@ames2016control], where it is assumed that the initial condition of any given trajectory is safe. But it also shows attractivity to the safe set. Indeed, when $h(\theta(t)) < 0$ we have $\dot h (\theta(t)) > 0$, so $h$ is increasing in time. So unsafe trajectories become 'safer' in an exponential fashion. Therefore, the family of sets $\mathcal{C}_\rho$ is also positively invariant, and not just in the case of $\rho = 0$. We state this formally below. **Proposition 1**. *Under Assumption [Assumption 2](#assum:h){reference-type="ref" reference="assum:h"}, the dynamics [\[eqn:exact_dynamics\]](#eqn:exact_dynamics){reference-type="eqref" reference="eqn:exact_dynamics"} satisfy $\frac{dh(\theta(t))}{dt} + c h(\theta(t)) \geq 0$ for all $\theta \in \mathbb{R}^n$ and $\mathcal{C}$ is forward invariant. Moreover, all sets $\mathcal{C}_\rho$ are forward invariant and $h(\theta(t)) \geq h(\theta(t_0)) e^{-c (t-t_0)}$ for all $\theta(t_0) \in \mathbb{R}^n$.* To prove this, simply compute $\dot h + ch$ and use the fact that $-x + \max \{x,0\} \geq 0$. This proposition will help us establish global convergence of the algorithm for trajectories starting outside of $\mathcal{C}$. The next lemma verifies that all trajectories starting from the safe set converge to the constrained minimum of $J$. **Lemma 2**. *Let Assumptions [Assumption 1](#assum:J){reference-type="ref" reference="assum:J"}-[Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"} hold.
The function $V_1(\theta) = J(\theta) - J(\theta^e)$ is a Lyapunov function for the equilibrium $\theta^e$ on $\mathcal{C}$, yielding $\dot V_1 < 0$ strictly for all $\theta \in \mathcal{C} \setminus \{ \theta^e \}$. The dynamics [\[eqn:exact_dynamics\]](#eqn:exact_dynamics){reference-type="eqref" reference="eqn:exact_dynamics"} are asymptotically stable for $\theta(t_0) \in \mathcal{C}$.* In light of Proposition [Proposition 1](#prop:forward_inv){reference-type="ref" reference="prop:forward_inv"} (attractivity to $\mathcal{C}$) and Lemma [Lemma 2](#lem:V1){reference-type="ref" reference="lem:V1"} (convergence within $\mathcal{C}$), it should be intuitive that all trajectories eventually converge to the equilibrium point $\theta^e$. We can demonstrate this fact with the following Lyapunov argument on any compact, invariant set $\mathcal{C}_\rho$ around the equilibrium. A key idea used is that any initial condition is an element of some set of the form $\mathcal{C}_\rho$. **Lemma 3**. *Let Assumptions [Assumption 1](#assum:J){reference-type="ref" reference="assum:J"}-[Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"} hold, and let either Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}-[Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} or Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} hold. Consider the Lyapunov function $$V(\theta) = \max\{-\alpha h(\theta), 0\} + \max \{J(\theta) - J(\theta^e),0\}.$$ For any $\mathcal{C}_\rho$, there exists $\alpha \in (0, \infty)$ such that $\dot V(\theta) < 0$ for $\theta \in \mathcal{C}_\rho \setminus \{\theta^e\}$. The dynamics [\[eqn:exact_dynamics\]](#eqn:exact_dynamics){reference-type="eqref" reference="eqn:exact_dynamics"} are globally asymptotically stable.* To prove the result in Lemma [Lemma 3](#lem:global_conv_exact){reference-type="ref" reference="lem:global_conv_exact"}, time derivatives of the Lyapunov function $V$ must be computed in all regions of the state space within any arbitrary invariant set $\mathcal{C}_\rho$. In this paper we will sketch part of the proof and compute the time derivative of $V$ in the case where $\theta \in \mathcal{U} \cap \{J(\theta) - J(\theta^e) \geq 0\} \cap \{\nabla J(\theta)^T \nabla h(\theta) - c h(\theta) \geq 0\}$ (the $\max$ term is active in the dynamics). First compute expressions for $\dot V$: $$\begin{aligned} \dot V =& -\alpha c |h(\theta)| + \dot{V}_1, \nonumber \\ \begin{split} \label{eqn:case4_Vdot} ={}& c |h(\theta)| \left( -\alpha + \frac{\nabla J(\theta)^T \nabla h(\theta)}{\|\nabla h(\theta) \|^2} \right) - \\ & \nabla J(\theta)^T \left(I - \frac{\nabla h(\theta) \nabla h(\theta)^T}{\|\nabla h(\theta) \|^2} \right) \nabla J(\theta). \end{split}\end{aligned}$$ Denoting $$f(\theta) := \nabla J(\theta)^T \nabla h(\theta) /(||\nabla J(\theta) || || \nabla h(\theta)||) \leq 1 ,$$ we can rewrite and bound $\dot V$ as $$\begin{aligned} \begin{split} \dot V ={}& -\alpha c |h(\theta)| - (1-f^2(\theta)) || \nabla J(\theta) ||^2 + \\ &f(\theta) \frac{c|h(\theta)|}{||\nabla h(\theta) ||} ||\nabla J(\theta) ||, \end{split} \nonumber\\ \leq & c |h(\theta)| \left( -\alpha + \frac{||\nabla J(\theta) ||}{L} \right) - (1-f^2(\theta)) || \nabla J(\theta) ||^2.
\label{eqn:form_1}\end{aligned}$$ The existence of $L$ is assumed by Assumption [Assumption 2](#assum:h){reference-type="ref" reference="assum:h"} and recall we assume that $\theta \in \mathcal{C}_\rho$ for some $\mathcal{C}_\rho$. **Case A) Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"}:** Consider the case of Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"}; then $\mathcal{C}_\rho$ is compact and we can choose $$\alpha > L^{-1} \sup_{\theta \in \mathcal{C}_\rho} \| \nabla J(\theta) \| \label{eqn:alpha_C_compact},$$ which yields $\dot V \leq 0$ in [\[eqn:form_1\]](#eqn:form_1){reference-type="eqref" reference="eqn:form_1"}. **Case B) Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"} - [Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"}:** Consider the case of Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"} - [Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"}. If $||\nabla J(\theta)||$ is bounded on $\mathcal{C}_\rho$ then we can choose $\alpha$ as in [\[eqn:alpha_C\_compact\]](#eqn:alpha_C_compact){reference-type="eqref" reference="eqn:alpha_C_compact"}. So consider the nontrivial case where $|| \nabla J(\theta)||$ is unbounded on $\mathcal{C}_\rho$. Then from Assumption [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}, there exists a scalar $f^*$ and a compact set $\Omega$ containing $\theta^e$ such that $0<f(\theta) \leq f^* < 1$ for all $\theta \notin \Omega$. Therefore letting $\tilde{f} = 1-{f^*}^{2}$ with $\tilde{f} \in (0,1)$ we have $$\begin{aligned} \dot V & \leq c |h(\theta)|\left(-\alpha + \frac{||\nabla J(\theta) ||}{L}\right) - \tilde f || \nabla J(\theta) ||^2 \end{aligned}$$ for $\theta \notin \Omega$, which also yields the bound $$\begin{aligned} \dot V & \leq -\alpha c |h(\theta)| + \frac{c |\rho| }{L} ||\nabla J(\theta) || - \tilde f || \nabla J(\theta) ||^2.\end{aligned}$$ This implies that when $||\nabla J(\theta) ||>\frac{|\rho| c}{L \tilde f}$, we have $\dot V < 0$ for $\theta \notin \Omega$. Therefore choosing $$\alpha > \max \left\{ L^{-1} \sup_{\theta \in \Omega} \| \nabla J(\theta) \|, \frac{c |\rho|}{L^2 \tilde f} \right\}$$ yields $\dot V \leq 0$. For a complete proof, which lies outside the scope of this paper, one must also consider computing the time derivative of $V$ in the other regions of the state space. This Lyapunov function has an interesting connection to the literature. We note that a closely related Lyapunov function for 'gradient flow' systems in a more general setting was also discovered and can also be used to show stability [@allibhoy2022control].

# Safe Extremum Seeking of Static Maps

The basic outline of this section starts by following the framework in [@nesic2010unifying]. As a preliminary, we first present Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} to aid the reader in understanding later results. Then, we perform a series of transformations starting from the original system, resulting in the construction of a 'reduced model'.
# Safe Extremum Seeking of Static Maps

The basic outline of this section follows the framework in [@nesic2010unifying]. As a preliminary, we first present Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} to aid the reader in understanding later results. Then, we perform a series of transformations starting from the original system, resulting in the construction of a 'reduced model'. We then use a Lyapunov argument and the help of Lemma [Lemma 3](#lem:global_conv_exact){reference-type="ref" reference="lem:global_conv_exact"} to show that the reduced model is SPA stable, which in turn yields SPA stability of the original system. Finally, we present our notion of practical safety, which follows from the SPA stability of the ES scheme.

**Remark 1**. *The authors in [@nesic2010unifying] require that the reduced model be "robust" to "disturbances" (SPA stable) which arise as a result of the averaging procedure. We use the terminology "robust" and "disturbance" in the following lemma and the current section to refer to this fictitious disturbance of the reduced system. This reduced system is fully constructed later in [\[eqn:reduced\]](#eqn:reduced){reference-type="eqref" reference="eqn:reduced"} and is required for the analysis.*

Consider the estimated quantities: $$Q(\theta)^T := [\nabla J (\theta)^T, \nabla h (\theta)^T, h(\theta)].$$ The dynamics in [\[eqn:exact_dynamics\]](#eqn:exact_dynamics){reference-type="eqref" reference="eqn:exact_dynamics"} can be thought of as a function of these estimated quantities, $F = F(Q(\theta))$. With a small disturbance $w(t)^T = [w_1(t)^T, w_2(t)^T, w_3(t)]$, we can consider the disturbed system $$\dot \theta = F(Q(\theta) + w(t)) . \label{eqn:disturbed_dyn}$$ The next result is a useful preliminary: it says that the algorithm, without perfect estimates of the gradients, can be written as the exact version of the algorithm with a small additive disturbance. It also shows that this holds for a large enough $M^+$.

**Lemma 4** (Additive Disturbance). *Under Assumptions [Assumption 1](#assum:J){reference-type="ref" reference="assum:J"} - [Assumption 2](#assum:h){reference-type="ref" reference="assum:h"}, for any compact set $\Omega$ there exist $M^+, \epsilon^*>0$ such that for all $\epsilon \in (0, \epsilon^*)$ and $||w || < \epsilon$, the disturbed dynamics in [\[eqn:disturbed_dyn\]](#eqn:disturbed_dyn){reference-type="eqref" reference="eqn:disturbed_dyn"} can be written as $$\begin{gathered} F(Q(\theta) + w(t)) = - \nabla J(\theta) + \\ \frac{\nabla h(\theta) }{||\nabla h(\theta) ||^2} \max\{ \nabla J(\theta)^T \nabla h(\theta) - c h(\theta), 0 \} + O(\epsilon) \end{gathered}$$ for all $\theta \in \Omega$.*

The proof of Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} is outside the scope of the paper. We now turn our attention to the original dynamics in [\[eqn:th_dyn\]](#eqn:th_dyn){reference-type="eqref" reference="eqn:th_dyn"} - [\[eqn:etah_dyn\]](#eqn:etah_dyn){reference-type="eqref" reference="eqn:etah_dyn"}, and make a series of transformations, following the general ideas in [@nesic2010unifying]. Defining $$\begin{aligned} F_0\left(\xi\right) &:= -\xi_1 + \min \{\|\xi_3\|^{-2},M^+\} \max\{\xi_1^T \xi_3 - c \xi_4, 0\} \xi_3,\end{aligned}$$ with $$\begin{aligned} \xi^T &:= [G_J^T, \eta_J, G_h^T, \eta_h],\\ \zeta^T &:= [(J(\theta) - \xi_2)M(t)^T, J(\theta), (h(\theta) - \xi_4)M(t)^T, h(\theta)],\end{aligned}$$ where $\xi_1 = G_J$, $\xi_2 = \eta_J$, $\xi_3 = G_h$ and $\xi_4 = \eta_h$ denote the blocks of $\xi$, we can rewrite [\[eqn:th_dyn\]](#eqn:th_dyn){reference-type="eqref" reference="eqn:th_dyn"} - [\[eqn:etah_dyn\]](#eqn:etah_dyn){reference-type="eqref" reference="eqn:etah_dyn"} as $$\begin{aligned} \dot{\hat{\theta}} &= k \omega_f F_0(\xi), \label{eqn:theta_dyn_rewrite}\\ \dot \xi &= -\omega_f (\xi - \zeta(t,\theta, \xi, a)), \label{eqn:estimate_dyn_rewrite}\end{aligned}$$ recalling $\theta = \hat \theta + S(t)$.
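In particular, substituting the true quantities $\xi = [\nabla J(\theta)^T, J(\theta), \nabla h(\theta)^T, h(\theta)]^T$ gives $$F_0(\xi) = -\nabla J(\theta) + \min \{\|\nabla h(\theta)\|^{-2}, M^+\} \max\{\nabla J(\theta)^T \nabla h(\theta) - c h(\theta), 0\}\, \nabla h(\theta),$$ which coincides with the disturbance-free right-hand side of Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} whenever $M^+ \geq \|\nabla h(\theta)\|^{-2}$.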
Letting $\tilde \theta = \hat \theta - \theta^*_c$ and $\tau = \omega_f t$, the system in the new time scale is $$\begin{aligned} \frac{d \tilde \theta}{d \tau} &= k F_0(\xi), \label{eqn:theta_dyn_tau}\\ \frac{d \xi}{d \tau} &= -\left(\xi - \zeta \left( \frac{\tau}{\omega_f}, \tilde \theta + \theta^*_c + S\left(\frac{\tau}{\omega_f}\right), \xi, a\right) \right).\label{eqn:xi_dyn_tau}\end{aligned}$$ We can take the average of the system (see [@Khalil], [@nesic2010unifying]) to compute $$\begin{aligned} \frac{d \tilde \theta_{av}}{d \tau} =& k F_0(\xi_{av}), \label{eqn:theta_dyn_tau_av}\\ \frac{d \xi_{av}}{d \tau} =& -\left(\xi_{av} - \mu(\tilde \theta_{av},a) \right), \label{eqn:xi_dyn_tau_av}\\ \begin{split} \label{eqn:D_def} D(\tilde \theta_{av}+ \theta^*_c) :={}& [\nabla J(\tilde \theta_{av}+ \theta^*_c)^T, J(\tilde \theta_{av}+ \theta^*_c), \\ & \nabla h(\tilde \theta_{av}+ \theta^*_c)^T, h(\tilde \theta_{av}+ \theta^*_c)]^T , \end{split} \\ \mu(\tilde \theta_{av},a) :=& D(\tilde \theta_{av}+ \theta^*_c) + O(a). \label{eqn:mu_def}\end{aligned}$$ Making another time transformation $s = k \tau$ we have $$\begin{aligned} \frac{d \tilde \theta_{av}}{d s} &= F_0(\xi_{av}), \label{eqn:theta_dyn_s}\\ k \frac{d \xi_{av}}{d s} &= -\left(\xi_{av} - \mu(\tilde \theta_{av},a) \right) .\label{eqn:xi_dyn_s}\end{aligned}$$ Taking $k=0$, we can derive the singularly perturbed (or reduced) system with a quasi steady state $$z_s := \xi_{av} = \mu(\tilde \theta_{av},a).$$ Defining $$y = \xi_{av} - \mu(\tilde \theta_{av},a),$$ the boundary layer system (with $\tau = s/k$) is $$\frac{dy}{d \tau} = -\left(\xi_{av} - \mu(\tilde \theta_{av},a) \right) = -y.$$ The boundary layer system is UGAS uniformly in $\xi_{av}$ and $t_0$. The reduced system is $$\frac{d \theta_{r}}{d s} = F_0(\mu(\theta_{r},a)) = F_0(D(\theta_{r}+ \theta^*_c) + O(a)). \label{eqn:reduced}$$ The reduced system [\[eqn:reduced\]](#eqn:reduced){reference-type="eqref" reference="eqn:reduced"} is SPA stable in $a$. This is due to the following argument: take any positive pair $(\Delta, \nu)$, and $\theta_r(0) \leq \Delta$. Using Lemma [Lemma 3](#lem:global_conv_exact){reference-type="ref" reference="lem:global_conv_exact"} and [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} we can find a Lyapunov function $V$ such that $$\begin{aligned} & \dot V(\theta_r) = -W(\theta_r) + O(a), \\ & \dot h(\theta_r) + ch(\theta_r) \geq O(a) ,\end{aligned}$$ for strictly positive definite $V$ and $W$, for a sufficiently large $M^+$ on any compact set. In Lemma [Lemma 3](#lem:global_conv_exact){reference-type="ref" reference="lem:global_conv_exact"} we considered two cases: 1) Assumption [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} (all $\mathcal{C}_\rho$ are compact) 2) Assumptions [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"}-[Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} ($V$ is radially unbounded and the angle condition is obeyed). In either case we can construct $\mathcal{S}$, a compact invariant set containing $B(0)_\Delta$. Either $\mathcal{S} = \mathcal{C}_\rho$, for some $\mathcal{C}_\rho$ in case 1 or $\mathcal{S} = \mathcal{C}_\rho \cap \{V(\theta_r) \leq \bar V \}$, for some $\bar V>0$ in case 2 for sufficiently small $a$ and sufficiently large $M^+$. One may then choose an $a$ even smaller to achieve $\lim_{t \to \infty} ||\theta_r(t)|| < \nu$ for any $\nu>0$. 
Therefore there exists an $a^*$ such that for any $a \in (0, a^*)$ the solutions of [\[eqn:reduced\]](#eqn:reduced){reference-type="eqref" reference="eqn:reduced"} satisfy $$||\theta_r(s)|| \leq \beta_{\theta}(|| \theta_r(0) ||, s) + \nu$$ for some $\beta_{\theta} \in \mathcal{KL}$, some $M^+>0$, and all $||\theta_r(0)|| \leq \Delta$. SPA stability of [\[eqn:reduced\]](#eqn:reduced){reference-type="eqref" reference="eqn:reduced"} satisfies Assumption 2 in [@nesic2010unifying]. Using [@nesic2010unifying Theorem 1] we make the conclusion below in Theorem [Theorem 5](#thm:spa_stable){reference-type="ref" reference="thm:spa_stable"}. Note: the authors in [@nesic2010unifying] prove this result based upon the earlier work [@tan2005non Lemma 1,2]. [@tan2005non Lemma 2] concludes the SPA stability in $[a, k]$ of [\[eqn:theta_dyn_s\]](#eqn:theta_dyn_s){reference-type="eqref" reference="eqn:theta_dyn_s"}-[\[eqn:xi_dyn_s\]](#eqn:xi_dyn_s){reference-type="eqref" reference="eqn:xi_dyn_s"} from the SPA stability in $a$ of [\[eqn:reduced\]](#eqn:reduced){reference-type="eqref" reference="eqn:reduced"}. [@tan2005non Lemma 1] concludes the SPA stability of [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"}-[\[eqn:estimate_dyn_rewrite\]](#eqn:estimate_dyn_rewrite){reference-type="eqref" reference="eqn:estimate_dyn_rewrite"} from SPA stability in $[a, k]$ of [\[eqn:theta_dyn_s\]](#eqn:theta_dyn_s){reference-type="eqref" reference="eqn:theta_dyn_s"}-[\[eqn:xi_dyn_s\]](#eqn:xi_dyn_s){reference-type="eqref" reference="eqn:xi_dyn_s"}. Let $$z = \xi - \mu(\tilde \theta, a).$$

**Theorem 5** (Semi-Global Practical Stability). *Let Assumptions [Assumption 1](#assum:J){reference-type="ref" reference="assum:J"}-[Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"}, [Assumption 7](#assum:es_constants){reference-type="ref" reference="assum:es_constants"} hold. Also, let either [Assumption 4](#assum:angle){reference-type="ref" reference="assum:angle"} and [Assumption 5](#assum:radially_unbounded){reference-type="ref" reference="assum:radially_unbounded"} hold or [Assumption 6](#assum:bounded_level){reference-type="ref" reference="assum:bounded_level"} hold. Then there exist $\beta_\theta, \beta_\xi \in \mathcal{KL}$ such that: for any positive pair $(\Delta, \nu)$ there exist $M^+, \omega_f^*, a^*>0$, such that for any $\omega_f \in (0, \omega_f^*)$, $a \in (0,a^*)$, there exists $k^*(a)>0$ such that for any $k \in (0, k^*(a))$ the solutions to [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"}-[\[eqn:estimate_dyn_rewrite\]](#eqn:estimate_dyn_rewrite){reference-type="eqref" reference="eqn:estimate_dyn_rewrite"} satisfy $$\begin{aligned} || \tilde \theta(t) || &\leq \beta_\theta \left( ||\tilde \theta (t_0)||, k \cdot \omega_f \cdot (t-t_0) \right) + \nu, \label{eqn:theta_KL_bound} \\ || z(t) || &\leq \beta_\xi \left( ||z (t_0)||, \omega_f \cdot (t-t_0) \right) + \nu, \label{eqn:z_KL_bound}\end{aligned}$$ for all $||[\tilde \theta(t_0)^T, z(t_0)^T ]^T|| \leq \Delta$, and all $t \geq t_0\geq 0$.*

Equation [\[eqn:z_KL_bound\]](#eqn:z_KL_bound){reference-type="eqref" reference="eqn:z_KL_bound"} tells us about the convergence of the estimated quantities $\xi(t)$ to their exact values $D(\tilde \theta(t) + \theta^*_c)$.
Using [\[eqn:mu_def\]](#eqn:mu_def){reference-type="eqref" reference="eqn:mu_def"} we can write $$\begin{aligned} e &:= \xi - D(\tilde \theta + \theta^*_c), \label{eqn:e_def}\\ || e(t) || &\leq \beta_\xi (||z(t_0)||, \omega_f(t-t_0)) + \nu + |O(a)|. \label{eqn:e_bound}\end{aligned}$$ The variable $e$ can be thought of as the estimator error of the various measurements and gradients of the static maps. The bound above says that the estimated quantities can be made to converge arbitrarily close to their true values, at a time scale faster than the movement of the parameter $\tilde \theta$ (which is governed by the gain $k$). Note that the functions $\beta_\theta, \beta_\xi$ are independent of $a, k, \omega_f$ [@tan2005non]. Because $\dot{\tilde \theta}$ is proportional to $k$ in [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"}, we can choose $k$ such that the change in $\tilde \theta (t)$ over some time interval is small relative to the change in $e(t)$ over the same interval. From Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"} we know that, once the disturbance in the estimated quantities is small enough, the dynamics can be written as the exact dynamics plus a small additive term, but only after the transient of $\beta_\xi$ has been sufficiently diminished. Therefore, we make the following claim.

**Theorem 6** (Semi-Global Practical Safety). *Suppose Theorem [Theorem 5](#thm:spa_stable){reference-type="ref" reference="thm:spa_stable"} holds. For any $\Delta>0$ there exists $\delta^*>0$ such that for any $\delta \in (0, \delta^*)$ there exist $M^+, a^{**}, \omega_f^{**}>0$ such that for any $a \in (0, a^{**})$, $\omega_f \in (0, \omega_f^{**})$, there exists $k^{**}(a)>0$ such that for any $k \in (0, k^{**}(a))$, $$h(\theta(t)) \geq h(\theta(t_0)) e^{-c k \omega_f (t-t_0)} + O(\delta) \label{eqn:practical_safety_inequality}$$ for all $||[\tilde \theta(t_0)^T, z(t_0)^T ]^T|| \leq \Delta$ and all $t \in [t_0, \infty)$.*

Idea of proof: Consider the system in the form of [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"}-[\[eqn:estimate_dyn_rewrite\]](#eqn:estimate_dyn_rewrite){reference-type="eqref" reference="eqn:estimate_dyn_rewrite"}, and recall that $z=\xi - \mu(\tilde \theta, a)$. From Theorem [Theorem 5](#thm:spa_stable){reference-type="ref" reference="thm:spa_stable"} it was shown that the solutions satisfy the bounds in [\[eqn:theta_KL_bound\]](#eqn:theta_KL_bound){reference-type="eqref" reference="eqn:theta_KL_bound"} - [\[eqn:z_KL_bound\]](#eqn:z_KL_bound){reference-type="eqref" reference="eqn:z_KL_bound"}, with rates of decay $k \omega_f$ and $\omega_f$ respectively. We choose $M^+, \omega_f^*, a^*>0$ such that $\nu$ is sufficiently small, and the estimator state $\xi$ converges to a region which is close to $\mu(\tilde \theta, a)$. Next, one can further restrict $a$ such that $\mu(\tilde \theta, a)$ is close to the true gradients. Therefore, one can argue that after some finite time $T$, $\xi(t)$ will be within some small error of the true gradients. Therefore, after this transient, the system [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"} can be written as $\dot{\hat{\theta}} = k \omega_f F_0 (D(\hat{\theta}) + e(t))$.
From Lemma [Lemma 4](#lem:additive_disturbance){reference-type="ref" reference="lem:additive_disturbance"}, we can write $F_0 (D(\hat{\theta}) + e(t)) = F_0 (D(\hat{\theta})) + O(\delta)$ for $||e(t)|| \leq \delta$. Therefore, for $t \in [T, \infty)$ we have $\dot h + c h \geq O(\delta)$, implying [\[eqn:practical_safety_inequality\]](#eqn:practical_safety_inequality){reference-type="eqref" reference="eqn:practical_safety_inequality"}. During the finite time interval $[0, T]$ we can bound the change in $\hat{\theta}$ by choosing $k$ small, as the time $T$ is independent of $k$, again implying [\[eqn:practical_safety_inequality\]](#eqn:practical_safety_inequality){reference-type="eqref" reference="eqn:practical_safety_inequality"}. Note that this argument makes use of the crucial fact that the functions $\beta_\theta, \beta_\xi$ are independent of $a, k, \omega_f$ [@tan2005non]. This argument implies that the intervals of $a, \omega_f, k$ given in Theorem [Theorem 6](#thm:prac_safety){reference-type="ref" reference="thm:prac_safety"} may be stricter than the ones given in Theorem [Theorem 5](#thm:spa_stable){reference-type="ref" reference="thm:spa_stable"}, if the user desires the type of safety given in [\[eqn:practical_safety_inequality\]](#eqn:practical_safety_inequality){reference-type="eqref" reference="eqn:practical_safety_inequality"}. Nonetheless, the safety result is elegantly analogous to the statement on stability. Theorem [Theorem 5](#thm:spa_stable){reference-type="ref" reference="thm:spa_stable"} says that for any set of initial conditions, one should be able to adjust $a, \omega_f, k$ such that trajectories are $\nu$-practically stable. Theorem [Theorem 6](#thm:prac_safety){reference-type="ref" reference="thm:prac_safety"} says that for any set of initial conditions, one should be able to adjust $a, \omega_f, k$ such that trajectories are $\delta$-practically safe.

![Simulation of the algorithm with various $c$ and $k$.](ex-1-safe-traj-all-combined.png){#fig:ex1 width="0.95\\linewidth"}

# Example

Consider the following maps and parameters: $J(\theta) = (\theta_1 + 3)^2 + \theta_2^2$, $h(\theta) = e^{-(\theta_1-1)^2 - \theta_2^2} + e^{-(\theta_1+1)^2 - \theta_2^2} -0.5$, $a= 0.1$, $\omega_f= 10$, $M^+= 10,000$, $\omega_1= 10$, $\omega_2= 13$. The safety function is chosen such that all the sets $\mathcal{C}_\rho$ (the level sets for $h\leq0$) are compact, and there is a unique minimum of $J$ on $\mathcal{C}$. The level sets of $J$ and $h$ can also be shown to obey the optimizer criterion in Assumption [Assumption 3](#assum:gradient){reference-type="ref" reference="assum:gradient"}. Note that the safe set is non-convex.
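Before discussing the results, we give a minimal simulation sketch of this setup. It is an illustration only and makes assumptions not fixed by the text above: the dither and demodulation signals are taken in the standard form $S(t) = a\,[\sin(\omega_1 t), \sin(\omega_2 t)]^T$ and $M(t) = \tfrac{2}{a}\,[\sin(\omega_1 t), \sin(\omega_2 t)]^T$ (the paper's exact definitions may differ), and the loop [\[eqn:theta_dyn_rewrite\]](#eqn:theta_dyn_rewrite){reference-type="eqref" reference="eqn:theta_dyn_rewrite"}-[\[eqn:estimate_dyn_rewrite\]](#eqn:estimate_dyn_rewrite){reference-type="eqref" reference="eqn:estimate_dyn_rewrite"} is integrated with a simple forward-Euler scheme.

```python
import numpy as np

# Example maps and parameters from the text; S(t) and M(t) below are ASSUMED standard forms.
J = lambda th: (th[0] + 3.0)**2 + th[1]**2
h = lambda th: np.exp(-(th[0]-1.0)**2 - th[1]**2) + np.exp(-(th[0]+1.0)**2 - th[1]**2) - 0.5

a, w_f, M_plus = 0.1, 10.0, 1.0e4
w1, w2 = 10.0, 13.0
c, k = 1.0, 0.0005                                            # scenario a); other (c, k) pairs can be swapped in

S = lambda t: a*np.array([np.sin(w1*t), np.sin(w2*t)])        # additive dither (assumed form)
M = lambda t: (2.0/a)*np.array([np.sin(w1*t), np.sin(w2*t)])  # demodulation signal (assumed form)

def F0(xi):
    """F_0(xi) with xi = (G_J, eta_J, G_h, eta_h)."""
    GJ, Gh, etah = xi[0:2], xi[3:5], xi[5]
    gain = min(1.0/max(Gh @ Gh, 1e-12), M_plus)
    return -GJ + gain*max(GJ @ Gh - c*etah, 0.0)*Gh

def zeta(t, th, xi):
    """zeta(t, theta, xi, a) as in the estimator dynamics."""
    return np.concatenate([(J(th) - xi[2])*M(t), [J(th)], (h(th) - xi[5])*M(t), [h(th)]])

theta_hat = np.array([-2.5, -0.5])    # one of the initial conditions discussed below
xi = np.zeros(6)                      # estimator states initialized to zero, as in the example
dt, T = 1e-3, 200.0                   # short horizon; longer runs follow the slow time scale k*w_f
for i in range(int(T/dt)):
    t = i*dt
    th = theta_hat + S(t)
    theta_hat = theta_hat + dt*k*w_f*F0(xi)
    xi = xi + dt*(-w_f)*(xi - zeta(t, th, xi))
print(theta_hat, h(theta_hat))
```

A grid of initial conditions and the remaining $(c,k)$ pairs can be swapped in to explore the behavior discussed next.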
In Fig. [2](#fig:ex1){reference-type="ref" reference="fig:ex1"} we simulate a grid of initial conditions (all estimator states $G_J, \eta_J, G_h, \eta_h$ are initialized to zero) with three scenarios: a) $c=1$, $k=0.0005$; b) $c=3$, $k=0.0005$; c) $c=3$, $k=0.0001$. In Fig. [2](#fig:ex1){reference-type="ref" reference="fig:ex1"}, notice that trajectories converge to a region around the minimizer of $J$ on $\mathcal{C}$. Also, trajectories that enter the safe set always remain in the safe set. Consider the effect of $c$. When in the safe set, a smaller $c$ restricts the approach towards the barrier. This can be seen with the black trajectories favoring the center-line (the safest part of the set) over the blue/red ones around the region $\hat \theta \approx [0.5, 0.0]$. Outside the safe set, compare the black and blue/red trajectories at $\hat \theta(0) = [-2.5, -0.5]$. Here, a higher constant $c$ prioritizes the escape towards the boundary of the safe set, which is shown by the blue/red trajectories going first towards the barrier before converging. Therefore, a higher $c$ has the effect that the optimization of $J$ is favored in the safe region, while in the unsafe region, safety is favored. Note the increased transient wiggles introduced at the start of the blue trajectories (compared to the black ones) when increasing $c$ with no adjustment in $k$. We can remove this sign of instability by lowering $k$, which yields the smoother trajectories in red. This may be necessary because changing $c$ fundamentally changes the dynamics, and the same gain $k$ may no longer be appropriate for SPA stability, although in this example stability is retained over the region we have chosen.

# Conclusion

We have introduced an ES algorithm that not only minimizes an objective but does so with a guarantee of practical safety. A Lyapunov analysis shows that for a semi-global region of attraction there exist intervals of design coefficients such that both practical safety and practical stability hold. Our example demonstrates convergence on a non-convex problem.

[^1]: \*This work is supported by Los Alamos National Lab LDRD DR Project 20220074DR

[^2]: $^{1}$Alan Williams and Miroslav Krstić are with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, CA 92093-0411, USA, `{awilliam,krstic}@ucsd.edu`.

[^3]: $^{2}$Alan Williams and Alexander Scheinker are with Los Alamos National Lab, Los Alamos, NM 87545, USA `ascheink@lanl.gov`
arxiv_math
{ "id": "2309.15401", "title": "Semi-Global Practical Extremum Seeking with Practical Safety", "authors": "Alan Williams, Miroslav Krstic, Alexander Scheinker", "categories": "math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Consider a complete balanced bipartite graph $K_{n,n}$ and let $K^c_{n,n}$ be an edge-colored version of $K_{n,n}$ that is obtained from $K_{n,n}$ by having each edge assigned a certain color. A subgraph $H$ of $K^c_{n,n}$ is called properly colored (PC) if every two adjacent edges of $H$ have distinct colors. $K_{n,n}^c$ is called properly vertex-even-pancyclic if for every vertex $u\in V(K_{n,n}^c)$ and for every even integer $k$ with $4 \leq k \leq 2n$, there exists a PC $k$-cycle containing $u$. The minimum color degree $\delta^c(K^c_{n,n})$ of $K^c_{n,n}$ is the largest integer $k$ such that for every vertex $v$, there are at least $k$ distinct colors on the edges incident to $v$. In this paper we study the existence of PC even cycles in $K_{n,n}^c$. We first show that, for every integer $t\geq 3$, every $K^c_{n,n}$ with $\delta^c(K^c_{n,n})\geq \frac{2n}{3}+t$ contains a PC 2-factor $H$ such that every cycle of $H$ has a length of at least $t$. By using the probabilistic method and absorbing technique, we use the above result to further show that, for every $\varepsilon>0$, there exists an integer $n_0(\varepsilon)$ such that every $K^c_{n,n}$ with $n\geq n_0(\varepsilon)$ is properly vertex-even-pancyclic, provided that $\delta^c(K^c_{n,n})\geq (\frac{2}{3}+\varepsilon)n$.\ edge-coloring; properly colored cycle; properly colored $2$-factor; vertex-even-pancyclic; color degree\ 05C15, 05C38, 05C40 author: - Shanshan Guo - "Fei Huang[^1]" - Jinjiang Yuan - C.T. Ng - T.C.E. Cheng title: Properly colored even cycles in edge-colored complete balanced bipartite graphs --- # Introduction Consider a simple and finite graph $G$. We use the following notation and terminology in this paper. $\bullet$ $V(G)$ and $E(G)$ are the sets of vertices and edges of $G$, respectively. $\bullet$ $|G|=|V(G)|$ is the number of vertices of $G$. $\bullet$ $H\subseteq G$ means that $H$ is a subgraph of $G$, i.e., $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. $\bullet$ $G\setminus U$ with $U\subseteq V(G)$ is the subgraph obtained from $G$ by deleting the vertices of $U$ and the edges incident to $U$. $\bullet$ $G\setminus H=G\setminus V(H)$ for a proper subgraph $H\subset G$. $\bullet$ For two vertices $x,y\in V(G)$, the length of a shortest $xy$-path is called the distance between $x$ and $y$ in $G$ and denoted by $d_G(x, y)$. $\bullet$ An edge-colored version of $G$, denoted by $G^c$, is obtained from $G$ by assigning to each edge of $G$ a color. We also call $G^c$ an *edge-colored graph* and $c$ an *edge coloring* of $G$. $\bullet$ For each edge $e\in E(G)$, we use $c(e)$ to denote the color of edge $e$ in $G^c$. $\bullet$ $G^c$ is called *properly colored* (PC) if every two adjacent edges of $G$ receive distinct colors in $G^c$. $\bullet$ $d^c_G(v)$ with $v\in V(G^c)$, called the *color degree* of $v$ in $G^c$, is the number of colors of the edges incident to $v$ in $G^c$. When no confusion can occur, we shortly write $d^c(v)$ for $d^c_G(v)$. $\bullet$ $\delta^c(G)=\min\{d^c(v):v\in V(G^c)\}$ is called the *minimum color degree* of $G^c$. $\bullet$ $\Delta_{mon}(v)$ with $v\in V(G^c)$ denotes the maximum number of edges of the same color incident to $v$ in $G^c$. $\bullet$ $\Delta_{mon}(G^c)$ is the maximum value of $\Delta_{mon}(v)$ over all the vertices $v\in V(G^c)$. $\bullet$ For two vertices $x,y\in V(G)$, we use $d_G(x,y)$ to denote the length of the shortest $xy$-path in $G$. 
$\bullet$ A *1-path-cycle* of $G^c$ is a vertex-disjoint union of exactly one PC path of $G^c$ and a number of PC cycles of $G^c$. Note that every PC path of $G^c$ is a 1-path-cycle. $\bullet$ A 1-path-cycle $H$ of $G^c$ is called a *$1^{(k)}$-path-cycle* if the length of every cycle of $H$ is at least $k$. $\bullet$ A *2-factor* $H$ of $G^c$ is a spanning 2-regular subgraph of $G^c$. $\bullet$ A 2-factor $H$ of $G^c$ is called *$2^{(k)}$-factor* if the length of every cycle of $H$ is at least $k$. $\bullet$ An edge-colored complete bipartite graph $K_{n,n}^c$ is called *vertex-even-pancyclic* if, for every vertex $u\in V(K_{n,n}^c)$ and for every even integer $k$ with $4 \leq k \leq 2n$, there exists a PC $k$-cycle containing vertex $u$ in $K_{n,n}^c$. For more graph-theoretical terminology and notation not defined here, we refer the reader to [@BM]. In the past few decades, PC subgraphs in edge-colored graphs have been extensively studied in the literature. In 1976, Daykin [@D] asked whether there exists a constant $\mu$ such that every edge-colored complete graph $K_n^c$ with $\Delta^{mon}(K_n^c)\le \mu n$ and $n\ge 3$ contains a PC Hamilton cycle. This question was answered independently by Bollobás and Erdös [@BE] with $\mu=1/69$, and Chen and Daykin [@CD] with $\mu=1/17$. Bollobás and Erdős [@BE] further conjectured that if $\Delta_{mon}(K^c_ n)< \lfloor \frac{n}{2}\rceil$, then $K_n^c$ contains a PC Hamiltonian cycle. Cheng et al. [@CK] showed that every edge-colored graph $G^c$ with $\delta^c(G^c)\geq |G|/2$ has a PC spanning tree. By Lo [@A], for any $\varepsilon>0$, there exists an integer $N_0 = N_0(\varepsilon)$ such that every $K^c_{n}$ with $n\geq N_0$ and $\Delta_{mon}(K^c_{n})\leq (1/2 -\varepsilon)n$ contains a PC Hamiltonian cycle. Barr [@B1998] proved that $K_n^c$ has a PC Hamilton path, provided that $K_n^c$ contains no monochromatic triangle. Feng et al. [@FGGG] showed that $K_n^c$ contains a PC Hamilton path if and only if $K_n^c$ contains a spanning PC 1-path-cycle. For the existence of PC subgraphs in edge-colored bipartite graphs, Bang-Jensen et al. [@BG] showed that a 2-edge-colored complete bipartite multigraph contains a PC Hamilton cycle if and only if it is color-connected and has a PC 2-factor. Chen and Daykin [@CD] showed that $K^c_{n,n}$ contains a PC Hamiltonian cycle, provided $\Delta_{mon}(K_{n,n}^c)\le \frac{1}{25}n$. By Alon and Gutin [@AG], for any $\varepsilon>0$, there exists an integer $N_0 = N_0(\varepsilon)$ such that every $K^c_{n,n}$ with $n\geq N_0$ and $\Delta_{mon}(K_{n,n}^c)\le (1-\frac{1}{\sqrt{2}}-\varepsilon)n$ contains a PC cycle of length $l$ for all even $l$ with $4\le l \le 2n$. Kano and Tsugaki [@KT] showed that every edge-colored connected bipartite graph $G^c$ with $\delta^c(G^c)\geq |G|/3+1$ has a PC spanning tree. In 2017, Fujita et al. [@FLZ] presented the following conjecture. **Conjecture 1** ([@FLZ]). *If $\delta^c(K_{m,n})\geq \frac{m+n}{4}+1$, then each vertex of $K_{m,n}$ is contained in a PC $l$-cycle, where $l$ is any even integer with $4\leq l\leq \min\{2m,2n\}$.* The authors in [@FLZ] also showed the following result to support Conjecture [Conjecture 1](#conj){reference-type="ref" reference="conj"}. **Theorem 1** ([@FLZ]). *If $\delta^c(K_{n,n})\geq n/2+1$, then every vertex of $K^c_{n,n}$ is contained in a PC 4-cycle.* Recently, about the existence of PC $2$-factor in $K^c_{n,n}$, Guo et al. [@GHY] presented the following result. **Theorem 2** ([@GHY]). 
*$K^c_{n,n}$ has a PC 2-factor, provided that $\delta^c(K^c_{n,n})> \frac{3n}{4}$.* For more results on PC cycles and paths in edge-colored graphs, we recommend [@BG1; @BGY; @BE; @COY; @CHY; @CL; @CH; @G; @H; @LNZ; @LW; @A2; @A1; @X], and Chapter 16 of [@BG]. In this paper we study the existence of PC $2$-factors in $K^c_{n,n}$ and the vertex-even-pancyclicity of $K^c_{n,n}$. Our main results are the following Theorems [Theorem 3](#t1){reference-type="ref" reference="t1"} and [Theorem 4](#t2){reference-type="ref" reference="t2"}. **Theorem 3**. *Let $t\geq 3$ be an integer. Every $K^c_{n,n}$ with $\delta^c(K^c_{n,n})\geq \frac{2n}{3}+t$ contains a PC 2-factor $H$ such that every cycle of $H$ has a length of at least $t$.* **Theorem 4**. *For every $\varepsilon>0$, there exists an integer $n_0(\varepsilon)$ such that every $K^c_{n,n}$ with $n\geq n_0(\varepsilon)$ and $\delta^c(K^c_{n,n})\geq (\frac{2}{3}+\varepsilon)n$ is vertex-even-pancyclic.* From Theorem [Theorem 3](#t1){reference-type="ref" reference="t1"}, we have the following corollary that improves the result in Theorem [Theorem 2](#t3){reference-type="ref" reference="t3"} since the condition "$\delta^c(K^c_{n,n})> \frac{3n}{4}$\" is weakened to "$\delta^c(K^c_{n,n})\geq \frac{2n}{3}+3$\". **Corollary 5**. *Every $K^c_{n,n}$ with $\delta^c(K^c_{n,n})\geq \frac{2n}{3}+3$ contains a PC 2-factor.* Regarding the methodology to find properly a colored $2$-factor, we deploy the proof technique that we developed in [@GHY] with significant modifications. A common technique is to gradually expand a desired subgraph to a larger one. In [@GHY], the authors always add two new vertices to the current desired subgraph that is the union of some vertex-disjoint PC cycles, and then form a new larger desired subgraph. In contrast, in this paper, we mainly add a new vertex to the current desired subgraph that is a 1-path-cycle to obtain a larger one, and finally we convert a spanning 1-path-cycle into a PC 2-factor. Thus, the desired subgraphs in the two papers are different. In addition, the authors in [@GHY] mainly use the value $\Delta_{mon}(K_{n,n}^c)$ in their analysis, while we focus more on the value $\delta^c(K^c_{n,n})$ in our analysis in this paper. After establishing Theorem [Theorem 3](#t1){reference-type="ref" reference="t1"}, we use it together with the probabilistic method to derive Theorem [Theorem 4](#t2){reference-type="ref" reference="t2"}. # Proof of Theorem [Theorem 3](#t1){reference-type="ref" reference="t1"} {#proof-of-theorem-t1} Let $t\geq 3$ be an integer. Let $G=K^c_{n,n}$ be an edge-colored complete balanced bipartite graph with bipartition $(X,Y)$ and $\delta^c(G)\geq \frac{2n}{3}+t$. This means that $n\geq 3t$. Suppose to the contrary that $G$ contains no PC $2^{(t)}$-factor. Let $H$ be a $1^{(t)}$-path-cycle of $G$ and let $P$ be the unique path (as a component) of $H$. For our purpose, we may assume that the pair $(H,P)$ is chosen such that: (i) $|H|$ is as large as possible, and (ii) subject to (i), $|P|$ is as large as possible. We first show that $H$ is a spanning $1^{(t)}$-path-cycle of $G$. Note that $P$ is a PC path of $G$ and $H\setminus P$ (if $P\neq H$) consists of some PC even cycles of $G$, each of which has a length of at least $t$. For the case where $|P|=1$, from the fact that $G=K_{n,n}^c$, we have $|H|< |G|$ and there must be an edge $xy\in E(G)$ such that $x,y\in V(G)\setminus V(H\setminus P)$. Then $H'=xy \cup (H\setminus P)$ is a new $1^{(t)}$-path-cycle of $G$ such that $|H'|>|H|$. 
This contradicts the choice of $H$. Hence, we must have $|P|\geq 2$. Suppose that $P:=u_1u_2 \ldots u_k$. Let $S'_1:=\{u\in N_G(u_1):c(u_1u)\neq c(u_1u_2)\}$ and $S'_k:=\{u\in N_G(u_k):c(u_ku)\neq c(u_{k-1}u_k)\}$. We claim that $$\label{S2Eq1} S'_1,S'_k\subseteq V(P), \mbox{i.e., } S'_1\cup S'_k\subseteq V(P).$$ Suppose to the contrary that ([\[S2Eq1\]](#S2Eq1){reference-type="ref" reference="S2Eq1"}) is invalid. Thus, at least one of $S'_1\setminus V(P)$ and $S'_k\setminus V(P)$ is nonempty. By symmetry, we suppose $S'_k\setminus V(P)\neq \emptyset$ and let $u\in S'_k\setminus V(P)$. If $u\in V(G\setminus H)$, then $H+u_ku$ is a new $1^{(t)}$-path-cycle of $G$ such that $|H+u_ku|>|H|$, contradicting the choice of $H$. Thus, there must be a cycle $C$ of $H\setminus P$ such that $u\in V(C)$. Since $C$ is a PC cycle, the two edges incident to $u$ in $C$ have distinct colors in $G$. So we may assume that $C=ux_1x_2\ldots x_au$ such that $c(ux_1)\neq c(uu_k)$. As a result, $P'=P\cup (C-x_au)= u_1u_2\ldots u_kux_1x_2\ldots x_a$ is a PC path of $G$. Now $H'= P'\cup (H\setminus (P\cup C))$ is a new $1^{(t)}$-path-cycle of $G$ in which $P'$ is the unique path. But then $|H'|=|H|$ and $|P'|> |P|$, contradicting the maximality of $|P|$. This gives ([\[S2Eq1\]](#S2Eq1){reference-type="ref" reference="S2Eq1"}). From ([\[S2Eq1\]](#S2Eq1){reference-type="ref" reference="S2Eq1"}), it follows that $k=|P|\geq 2\delta^c(G)\geq \frac{4}{3}n+2t$. Define $V'=\{u_1,u_2,\ldots,u_{t-1}\}\cup \{u_{k-1},u_{k-2},\ldots,u_{k-t+1}\}$. Then set $S_1=S'_1\setminus V'$ and $S_2=S'_2\setminus V'$. Now we introduce some notation used in the subsequent discussion. Let $u$ be a vertex of $P$. If $u=u_i$ for some $i\in\{2,3,\ldots,k\}$, then we define $u^-=u_{i-1}$. If $u=u_i$ for some $i\in\{1,2,\ldots,k-1\}$, then we define $u^+=u_{i+1}$. Note that $u_1^-$ and $u_k^+$ have no definitions. For any two distinct vertices $u_i, u_j\in V(P)$ such that $i<j$, we write $u_i\prec u_j$ or $u_j\succ u_i$. Moreover, we define $xP^+y=x x^+ x^{++}\ldots y$ and $yP^-x=yy^- y^{--}\ldots x$ for $x\prec y$. Let $G[H]=G[V(H)]$ be the subgraph of $G$ induced by $V(H)$. In the following discussion, we consider $k$ by its parity separately. We first consider the case where $k$ is even. **Lemma 6**. *If $k$ is even, then $G[H]$ has a PC $2^{(t)}$-factor.* *Proof.* Suppose to the contrary that $G[H]$ has no PC $2^{(t)}$-factor. Since $P$ is the unique path in the $1^{(t)}$-path-cycle $H$, $G[P]$ certainly has no PC $2^{(t)}$-factor. For each $i\in\{1,k\}$, set $$\label{S2D1} R_i^{(1)}=\{u\in V(P):u^-\in S_i \mathrm{~and~} c(u_iu^-)\neq c(u^-u^{--})\}$$ and $$\label{S2D2} R_i^{(2)}=\{u\in V(P):u^+\in S_i \mathrm{~and~} c(u_iu^+)\neq c(u^+u^{++})\}.$$ Recall from ([\[S2Eq1\]](#S2Eq1){reference-type="ref" reference="S2Eq1"}) that $S_1, S_k\subseteq V(P)$. By the definitions of $S_1$ and $S_k$, and the fact that $u_1,u_2\notin S_1'$, we have $|S_1|\geq |S_1'|-\frac{2(t-1)-2}{2}\geq \frac{2}{3}n+1$. In the same way, we derive $|S_k|\geq \frac{2}{3}n+1$. Then we have $$\label{S2Eq4} \left\{\begin{array}{l} |R_1^{(1)}|+|R_1^{(2)}|\geq|S_1|\geq \frac{2}{3}n+1,\\[2mm] |R_k^{(1)}|+|R_k^{(2)}|\geq|S_k|\geq \frac{2}{3}n+1. \end{array}\right.$$ Since $|P|=k$ is even and $G=K^c_{n,n}$ has bipartition $(X,Y)$, without loss of generality, we may suppose that $u_1\in X$ and $u_k\in Y$. 
From ([\[S2D1\]](#S2D1){reference-type="ref" reference="S2D1"}) and ([\[S2D2\]](#S2D2){reference-type="ref" reference="S2D2"}), we have $$\label{S2Eq5} \mbox{$R_1^{(1)}, R_1^{(2)}\subseteq X$ and $R_k^{(1)},R_k^{(2)}\subseteq Y$.}$$ Set $$\left\{\begin{array}{ll} X':=R_1^{(1)}\setminus R_1^{(2)},& X'':=R_1^{(2)}\setminus R_1^{(1)},\\[2mm] Y':=R_k^{(1)}\setminus R_k^{(2)}, & Y'':=R_k^{(2)}\setminus R_k^{(1)},\\[2mm] J_1:=R_1^{(1)}\cap R_1^{(2)}, & J_k:=R_k^{(1)}\cap R_k^{(2)},\\[2mm] X^\star:= X'\cup X'', & Y^\star:= Y'\cup Y''. \end{array}\right.$$ We then define a vertex coloring of the complete bipartite graph $F:=(X^\star\cup J_1, Y^\star\cup J_k)$ in the following way. For each vertex $u\in V(F)$, we define $c(u)$ by setting $$\label{c} c(u)=\left\{ \begin{array}{ll} c(uu^+), &\mbox{if } {u\in X'\cup Y'},\\ c(uu^-), &\mbox{if } {u\in X''\cup Y''},\\ c_0, &\mbox{if } {u\in J_1\cup J_k} \ (\mbox{where $c_0$ is a new color}). \end{array} \right.$$ In the following we establish two useful claims. **Claim 1**. *For any two vertices $x\in X^\star\cup J_1$ and $y\in Y^\star\cup J_k$, if $d_P(x,y)\geq t-1$ or $d_P(x,y)=1$, then we have $c(xy)\in \{c(x),c(y)\}$. As a consequence, either $|J_1|\leq t-1$ or $|J_k|\leq t-1$.* Suppose to the contrary that Claim [Claim 1](#c1){reference-type="ref" reference="c1"} is invalid. Then there exist two vertices $x\in X^\star\cup J_1$ and $y\in Y^\star\cup J_k$ such that $d_P(x,y)\geq t-1$ or $d_P(x,y)=1$, and $c(xy)\notin \{c(x),c(y)\}$. Since $x\in X'\cup X''\cup J_1$ and $y\in Y'\cup Y''\cup J_k$, we have the following nine cases to consider. $x\in X'$ and $y\in Y'$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{-})\neq c(y^{-}y^{--})$, and $c(x)=c(xx^+)\neq c(xy)\neq c(yy^+)=c(y)$. Furthermore, we have $xy\notin E(P)$ (for otherwise, we have $c(xy)\in \{c(xx^+),c(yy^+)\}=\{c(x),c(y)\}$, a contradiction). If $x\prec y$, then the two cycles $C_1:=u_1P^+x^-u_1$ and $C_2:=xP^+y^-u_kP^-yx$ are properly colored. Since $|u_1P^+x^-|\geq t-1$ and $|y^-u_kP^-y|\geq t-1$, we have $|C_1|\geq t$ and $|C_2|\geq t$. In this case, $C_1+C_2$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $x\succ y$, then the cycle $C_{3}:=u_1P^+y^-u_k P^-xyP^+x^-u_1$ with $|C_3|=|P|+1\geq t$. In this case, $C_3$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in X'$ and $y\in Y''$. Then $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{+})\neq c(y^{+}y^{++})$, and $c(x)=c(xx^+)\neq c(xy)\neq c(yy^-)=c(y)$. If $xy\in E(P)$, then we have $x=y^+$ (for otherwise, $c(xy)=c(yy^-)=c(y)$, a contradiction). Then the two cycles $C_4:=u_1 P^+ y u_1$ and $C_5:=u_k P^- x u_k$ are properly colored. Since $|u_1P^+y|\geq t-1$ and $|u_kP^-y|\geq t-1$, we have $|C_4|\geq t$ and $|C_5|\geq t$. In this case, $C_4+C_5$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. If $x\prec y$, then the three cycles $C_{1}$, $C_{6}:=xP^+yx$, and $C_{7}:=y^+P^+u_ky^+$ are properly colored. Clearly, for every $i\in\{1,6,7\}$, the length of $C_i$ is at least $t$. In this case, $C_{1}+C_{6}+C_{7}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. If $x\succ y$, then the cycle $C_{8}:=u_1P^+yxP^+u_ky^+P^+x^-u_1$ is properly colored with $|C_8|=|P|+1>t$. In this case, $C_{8}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction still. $x\in X''$ and $y\in Y'$. Then we have $c(u_1x^+)\neq c(x^+x^{++})$, $c(u_ky^{-})\neq c(y^{-}y^{--})$, and $c(x)=c(xx^-)\neq c(xy)\neq c(yy^+)=c(y)$. 
If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, $c(xy)=c(yy^+)=c(y)$, a contradiction). Then the cycle $C_{9}:=u_1 P^+ x u_k P^- y u_1$ with $|C_9|=|P|+1>t$ is properly colored. In this case, $C_9$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. If $x\prec y$, then the cycle $C_{10}:=u_1P^+xyP^+u_ky^-P^-x^+u_1$ is properly colored with $|C_{10}|=|P|+1>t$. In this case, $C_{10}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. If $x\succ y$, then the two cycles $C_{11}:=u_1P^+y^-u_kP^-x^+u_1$ and $C_{12}:=yP^+xy$ are properly colored. Clearly, for every $i\in\{11,12\}$, the length of $C_i$ is at least $t$. In this case, $C_{11}+C_{12}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction still. $x\in X''$ and $y\in Y''$. Then we have $c(u_1x^+)\neq c(x^+x^{++})$, $c(u_ky^{+})\neq c(y^{+}y^{++})$, and $c(x)=c(xx^-)\neq c(xy)\neq c(yy^-)=c(y)$. Furthermore, we have $xy\notin E(P)$ (for otherwise, we have $c(xy)\in \{c(xx^-),c(yy^-)\}=\{c(x),c(y)\}$, a contradiction). If $x\prec y$, then the two cycles $C_{7}$ and $C_{13}:=u_1P^+xyP^-x^+u_1$ are properly colored. Clearly, for every $i\in\{7,13\}$, the length of $C_i$ is at least $t$. In this case, $C_{7}+C_{13}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $x\succ y$, then the cycle $C_{14}:=u_1P^+yxP^+u_ky^+P^+x^-u_1$ is properly colored with $|C_{14}|=|P|+1>t$. In this case, $C_{14}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in X'$ and $y\in J_k$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{+})\neq c(y^{+}y^{++})$, $c(u_ky^{-})\neq c(y^{-}y^{--})$, and $c(x)=c(xx^+)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^+$ (for otherwise, $c(xy)=c(xx^+)=c(x)$, a contradiction). In this case, $C_4+C_5$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. If $c(xy)\neq c(yy^-)$, then $C_1+C_6+C_7$ or $C_8$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xy)\neq c(yy^+)$, then $C_1+C_2$ or $C_3$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in X''$ and $y\in J_k$. Then we have $c(u_1x^+)\neq c(x^+x^{++})$, $c(u_ky^{+})\neq c(y^{+}y^{++})$, $c(u_ky^{-})\neq c(y^{-}y^{--})$, and $c(x)=c(xx^-)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, $c(xy)=c(xx^-)=c(x)$, a contradiction). In this case, $C_9$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. If $c(xy)\neq c(yy^-)$, then $C_7+C_{13}$ or $C_{14}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xy)\neq c(yy^+)$, then $C_{10}$ or $C_{11}+C_{12}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in J_1$ and $y\in Y'$. Then we have $c(u_1x^{-})\neq c(x^{-}x^{--})$, $c(u_1x^{+})\neq c(x^{+}x^{++})$, $c(u_ky^{-})\neq c(y^{-}y^{--})$, and $c(y)=c(yy^+)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, $c(xy)=c(yy^+)=c(x)$, a contradiction). In this case, $C_9$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. If $c(xy)\neq c(xx^-)$, then $C_{10}$ or $C_{11}+C_{12}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xy)\neq c(xx^+)$, then $C_1+C_2$ or $C_3$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in J_1$ and $y\in Y''$. Then we have $c(u_1x^{-})\neq c(x^{-}x^{--})$, $c(u_1x^{+})\neq c(x^{+}x^{++})$, $c(u_ky^{+})\neq c(y^{+}y^{++})$, and $c(y)=c(yy^-)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^+$, so $C_4+C_5$ is a PC 2-factor of $G[P]$, a contradiction. Hence, we have $xy\notin E(P)$. 
If $c(xy)\neq c(xx^-)$, then $C_7+C_{13}$ or $C_{14}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xy)\neq c(xx^+)$, then $C_1+C_6+C_7$ or $C_8$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction again. $x\in J_1$ and $y\in J_k$. Suppose that $xy\in E(P)$. If $x=y^+$, then $C_4+C_5$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $x=y^-$, then $C_9$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. Thus, we have $xy\notin E(P)$. If $c(xx^-) \neq c(xy)\neq c(yy^-)$, then $C_7+C_{13}$ or $C_{14}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xx^-) \neq c(xy)\neq c(yy^+)$, then $C_{10}$ or $C_{11}+C_{12}$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xx^+) \neq c(xy)\neq c(yy^-)$, then $C_1+C_6+C_7$ or $C_8$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. If $c(xx^+) \neq c(xy)\neq c(yy^+)$, then $C_1+C_2$ or $C_3$ is a PC $2^{(t)}$-factor of $G[P]$, a contradiction. The claim holds. Note that $|X^\star|+2|J_1|=|R_1^{(1)}|+|R_1^{(2)}|\geq \frac{2}{3}n+1$ and $|Y^\star|+2|J_k|=|R_k^{(1)}|+|R_k^{(2)}|\geq \frac{2}{3}n+1$. So, by $t\leq \frac{1}{3}n$, we have $$\label{s2eq1} |X^\star|\geq 1 \mathrm{~or~} |Y^\star|\geq 1.$$ We now define a directed bipartite graph $D$ with bipartition $(X^\star\cup J_1, Y^\star)$ by setting the set of arcs $A(D):= \{(x,y): x\in X^\star\cup J_1, d_P(x,y)\geq t-1 \mathrm{~or~} d_P(x,y)=1, y\in Y^\star\cup J_k, c(xy)\neq c(x)\}\cup \{(y,x): x\in X^\star\cup J_1, d_P(x,y)\geq t-1\mathrm{~or~}d_P(x,y)=1, y\in Y^\star\cup J_k, c(xy)\neq c(y)\}$. By Claim [Claim 1](#c1){reference-type="ref" reference="c1"}, we know that $(y,x)\notin A(D)$ for every $y\in Y^\star$ and $x\in J_1$, and $(x,y)\notin A(D)$ for every $x\in X^\star$ and $y\in J_k$. We further have the following claim for the digraph $D$. **Claim 2**. *$D$ has a directed 2-cycle or, equivalently, there are two vertices $x$ and $y$ in $D$ such that $d_P(x,y)\geq t-1$ or $d_P(x,y)=1$, and both $(x,y)$ and $(y,x)$ are arcs of $D$.* From ([\[S2Eq4\]](#S2Eq4){reference-type="ref" reference="S2Eq4"}), we have $2|X^\star\cup J_1|=2|X_1|+2|X_2|+2|J_1|\geq |X^\star|+|R^{(1)}_1|+|R^{(2)}_1| \geq |X^\star|+\frac{2}{3}n+1$ and $2|Y^\star\cup J_k|=2|Y_1|+2|Y_2|+2|J_k|\geq |Y^\star|+|Q^{(1)}_k|+|Q^{(2)}_k| \geq |Y^\star|+\frac{2}{3}n+1$. Note that $|\{y\in V(P): 3\leq d_P(x,y)< t-1 \}|\leq t-2$ for each $x\in V(P)$. Then we have $$\begin{aligned} d_D^+(x)&=&|\{y\in N_G(x):c(xy)\neq c(x)\}\cap \{Y^\star\cup J_k\}|-(t-2)\\ &\geq& |\{y\in N_G(x):c(xy)\neq c(x)\}|+|Y^\star|-n-(t-2)\\ &\geq& \delta^c(G)-1+\frac{|Y^\star|+\frac{2}{3}n+1}{2}-n-(t-2)\\ &\geq& \frac{|Y^\star|+3}{2}.\end{aligned}$$In the same way, we have $$\begin{aligned} d_D^+(y)&=&|\{w:c(yw)\neq c(y)\}\cap (X^\star\cup J_1)|-(t-2)\\ &\geq& d^c(x)-1+| X^\star\cup J_1|-n-(t-2)\\ &\geq& \delta^c(G)-1+\frac{|X^\star|+\frac{2}{3}n+1}{2}-n-(t-2)\\ &\geq& \frac{|Y^\star|+3}{2}.\end{aligned}$$ Since the arc from $X^\star$ to $J_k$ does not exist, at least $|X^\star|\frac{|Y^\star|+3}{2}$ arcs are from $X^\star$ to $Y^\star$, i.e., $|A(X^\star,Y^\star)|\geq |X^\star|\frac{|Y^\star|+3}{2}$. Since the arc from $Y^\star$ to $J_1$ does not exist, at least $|Y^\star|\frac{|X^\star|+3}{2}$ arcs are from $Y^\star$ to $X^\star$, i.e., $|A(Y^\star,X^\star)|\geq |Y^\star|\frac{|X^\star|+3}{2}$. Note that at most $|X^\star||Y^\star|$ arcs are not contained in any directed 2-cycle in $D[X^\star\cup Y^\star]$. 
By simple calculation, we conclude that $$|A(X^\star,Y^\star)|+|A(Y^\star,X^\star)|-|X^\star||Y^\star|\geq |X^\star|\frac{|Y^\star|+3}{2}+|Y^\star|\frac{|X^\star|+3}{2}-|X^\star||Y^\star|= \frac{3}{2}(|X^\star|+|Y^\star|).$$ This means that at least $\frac{3}{2}(|X^\star|+|Y^\star|)$ arcs are contained in directed 2-cycles in $D[X^\star\cup Y^\star]$. By ([\[s2eq1\]](#s2eq1){reference-type="ref" reference="s2eq1"}), we have $|X^\star|+|Y^\star|>0$. The claim holds.

By Claim [Claim 2](#c2){reference-type="ref" reference="c2"}, there exist two vertices $x\in X^\star\cup J_1$ and $y\in Y^\star\cup J_k$ such that $d_P(x,y)\geq t-1$ or $d_P(x,y)=1$ and $c(xy)\notin \{c(x),c(y)\}$, which contradicts Claim [Claim 1](#c1){reference-type="ref" reference="c1"}. This gives Lemma [Lemma 6](#l1){reference-type="ref" reference="l1"}. ◻

Lemma [Lemma 6](#l1){reference-type="ref" reference="l1"} leads to a contradiction when $k=|P|$ is even: in that case, from Lemma [Lemma 6](#l1){reference-type="ref" reference="l1"}, $G[H]$ has a PC $2^{(t)}$-factor $H'$. Since $G=K_{n,n}^c$ has no PC $2^{(t)}$-factor and $H'$ consists of even cycles, $G\setminus H'$ has at least one edge, denoted by $xy$. But then $xy \cup H'$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, contradicting the choice of $H$. Therefore, $k=|P|$ must be odd. This further implies that $u_1u_k\notin E(G)$.

Let us keep the previous notation $$\label{s2eq7} \left\{\begin{array}{l} R_1^{(1)}=\{u\in V(P):u^-\in S_1 \mathrm{~and~} c(u_1u^-)\neq c(u^-u^{--})\},\\[2mm] R_1^{(2)}=\{u\in V(P):u^+\in S_1 \mathrm{~and~} c(u_1u^+)\neq c(u^+u^{++})\}. \end{array}\right.$$ Moreover, we set $$\label{s2eq8} \left\{\begin{array}{l} Q_k^{(1)}=\{u\in V(P):u^{--}\in S_k \mathrm{~and~} c(u_ku^{--})\neq c(u^{--}u^{---})\},\\[2mm] Q_k^{(2)}=\{u\in V(P):u^{++}\in S_k \mathrm{~and~} c(u_ku^{++})\neq c(u^{++}u^{+++})\}. \end{array}\right.$$ Recall from ([\[S2Eq1\]](#S2Eq1){reference-type="ref" reference="S2Eq1"}) that $S_1, S_k\subseteq V(P)$. By the definitions of $S_1$ and $S_k$, we have $u_1,u_2,u_3,u_k\notin S_1'$. This further implies that $$|S_1|\geq |S_1'|-\lceil\frac{t-4}{2}\rceil-\lceil\frac{t-2}{2}\rceil\geq \frac{2}{3}n+1.$$ In the same way, we have $|S_k|\geq \frac{2}{3}n+1$. Since $u_1,u_k\notin S_1$ and $c(u_1u)\neq c(uu^-)$ or $c(u_1u)\neq c(uu^+)$ for any $u\in S_1$, we have $u^+\in R_1^{(1)}$ or $u^-\in R_1^{(2)}$ from ([\[s2eq7\]](#s2eq7){reference-type="ref" reference="s2eq7"}). Since $u_1,u_2,u_{k-1},u_k\notin S_k$ and $c(u_ku)\neq c(uu^-)$ or $c(u_ku)\neq c(uu^+)$ for any $u\in S_k$, we have $u^{++}\in Q_k^{(1)}$ or $u^{--}\in Q_k^{(2)}$ from ([\[s2eq8\]](#s2eq8){reference-type="ref" reference="s2eq8"}). Thus, we have $$\label{S2Eq9} \left\{\begin{array}{l} |R_1^{(1)}|+|R_1^{(2)}|\geq|S_1|\geq \frac{2}{3}n+1,\\[2mm] |Q_k^{(1)}|+|Q_k^{(2)}|\geq|S_k|\geq \frac{2}{3}n+1. \end{array}\right.$$ Note that either $u_1,u_k\in X$ or $u_1,u_k\in Y$. By symmetry, we may suppose that $u_1,u_k\in X$. Since $G=K^c_{n,n}$, $k=|P|$ is odd and all the cycles in $H$ are even, there must be some vertex $y^*\in Y\setminus V(H)$.
Since $G=K^c_{n,n}$ has bipartition $(X,Y)$, from ([\[s2eq7\]](#s2eq7){reference-type="ref" reference="s2eq7"}) and ([\[s2eq8\]](#s2eq8){reference-type="ref" reference="s2eq8"}), we have $$\mbox{$R_1^{(1)}\cup R_1^{(2)}\subseteq X$ and $Q_k^{(1)}\cup Q_k^{(2)}\subseteq Y$.}$$ Set $$\left\{\begin{array}{ll} X_1:=R_1^{(1)}\setminus R_1^{(2)},& X_2:=R_1^{(2)}\setminus R_1^{(1)},\\[2mm] Y_1:=Q_k^{(1)}\setminus Q_k^{(2)}, & Y_2:=Q_k^{(2)}\setminus Q_k^{(1)},\\[2mm] J':=R_1^{(1)}\cap R_1^{(2)}, & J'':=Q_k^{(1)}\cap Q_k^{(2)},\\[2mm] X^\star:= X_1\cup X_2, & Y^\star:= Y_1\cup Y_2. \end{array}\right.$$ We then define a vertex coloring of the complete bipartite graph $F':=(X^\star\cup J', Y^\star\cup J'')$ in the following way. For each vertex $u\in V(F')$, we define $c(u)$ by setting $$\label{c} c(u)=\left\{ \begin{array}{ll} c(uu^+), &\mbox{if } {u\in X_1\cup Y_1},\\ c(uu^-), &\mbox{if } {u\in X_2\cup Y_2},\\ c_0, &\mbox{if } {u\in J'\cup J''} \ (\mbox{where $c_0$ is a new color}). \end{array} \right.$$ In the following we establish two useful claims. **Claim 3**. *For any two vertices $x\in X^\star\cup J'$ and $y\in Y^\star\cup J''$, we have $c(xy)\in \{c(x),c(y)\}$ if at least one of the following holds:* *(a) $d_P(x,y)\neq 3$,* *(b) $t\geq 5$ and $d_P(x,y)\geq t-1$,* *(c) $d_P(x,y)=1$.* *As a consequence, either $|J'|\leq t-1$ or $|J''|\leq t-1$.* To prove Claim [Claim 3](#c3){reference-type="ref" reference="c3"}, we suppose that there exist two vertices $x\in X^\star\cup J'$ and $y\in Y^\star\cup J''$ such that $c(xy)\notin \{c(x),c(y)\}$. Then it suffices to show that $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. We divide the discussion into the following nine cases. $x\in X_1$ and $y\in Y_1$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{--})\neq c(y^{--}y^{---})$, and $c(x)=c(xx^+)\neq c(xy)\neq c(yy^+)=c(y)$. Furthermore, we have $xy\notin E(P)$ (for otherwise, $c(xy)\in \{c(xx^+),c(yy^+)\}=\{c(x),c(y)\}$, a contradiction). If $x\prec y$, then the two cycles $C_1:=u_1 P^+ x^- u_1$ and $C_2:=xyP^+u_ky^{--}P^-x$ are properly colored. Since $|u_1P^+x^-|\geq t-1$ and $|xyP^+u_ky^{--}|\geq t-1$, we have $|C_1|\geq t$ and $|C_2|\geq t$. Thus, $H_1:=H-P+C_1+C_2+y^-y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $x\succ y$, then the cycle $C_3:=u_1P^+y^{--}u_kP^-xyP^+x^-u_1$ is properly colored with $|C_3|=k-1> t$. Thus, $H_2:=H-P+C_3+y^-y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. $x\in X_1$ and $y\in Y_2$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{++})\neq c(y^{++}y^{+++})$, and $c(x)=c(xx^+)\neq c(xy)\neq c(yy^-)=c(y)$. If $xy\in E(P)$, then we have $x=y^+$ (for otherwise, $c(xy)=c(yy^-)=c(y)$, a contradiction). Then the two cycles $C_4:=u_1 P^+ y u_1$ and $C_5:=x^+ P^+ u_k x^+$ are properly colored. Since $|u_1P^+y|\geq t-1$ and $|y^{++} P^+ u_k|\geq t-1$, we have $|C_4|\geq t$ and $|C_5|\geq t$. In this case, $H_3:=H-P+C_4+C_5+xy^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. Hence, we must have $xy\notin E(P)$. If $x\prec y$, then the three cycles $C_1$, $C_6:=xP^+yx$, and $C_7:=y^{++}P^+u_k y^{++}$ are properly colored. Since $|u_1P^+y|\geq t-1$ and $|y^{++} P^+ u_k|\geq t-1$, we have $|C_1|\geq t$ and $|C_7|\geq t$. Thus, $H_4:=H-P+C_1+C_6+C_7+y^+y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $|C_6|< t$. This further implies $d_P(x,y)< t-1$ and $t\geq 5$. Since $xy\notin E(P)$, we have $3\leq d_P(x,y)< t-1$ and $t\geq 5$. 
If $x\succ y$ and $d_P(x,y)\neq 3$ (this implies $y^{++}\neq x^-$), then the cycle $C_8:=u_1 P^+ y x P^+u_k y^{++} P^+ x^- u_1$ is properly colored with $|C_8|=k-1> t$. Thus, $H_5:=H-P+C_8+y^+y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. Hence, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in X_2$ and $y\in Y_1$. Then we have $c(u_1x^+)\neq c(x^+x^{++})$, $c(u_ky^{--})\neq c(y^{--}y^{---})$, and $c(x)=c(xx^-)\neq c(xy)\neq c(yy^+)=c(y)$. If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, we have $c(xy)=c(yy^+)=c(y)$, a contradiction). Then the cycle $C_9:=u_1 P^+ y^{--}u_kP^- x^+ u_1$ is properly colored with $|C_9|=k-1> t$. In this case, $H_6:=H-P+C_9+xy^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. Hence, we must have $xy\notin E(P)$. If $x\prec y$ and $d_P(x,y)\neq 3$ (this implies $y^{--}\neq x^+$), then the cycle $C_{10}:=u_1 P^+ xy P^+u_k y^{--} P^- x^+ u_1$ is properly colored with $|C_{10}|=k-1> t$. Thus, $H_7:=H-P+C_{10}+y^-y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $x\succ y$, then the two cycles $C_{11}:=u_1 P^+ y^{--} u_kP^-x^+u_1$ and $C_{12}:=yP^+xy$ are properly colored. Clearly, the length of $C_{11}$ is at least $t$. Thus, $H_8:=H-P+C_{11}+C_{12}+y^-y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $|C_{12}|< t$. This further implies $d_P(x,y)< t-1$ and $t\geq 5$. Since $xy\notin E(P)$, we have $3\leq d_P(x,y)< t-1$ and $t\geq 5$. Hence, we have $d_P(y,x)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in X_2$ and $y\in Y_2$. Then we have $c(u_1x^+)\neq c(x^+x^{++})$, $c(u_ky^{++})\neq c(y^{++}y^{+++})$, and $c(x)=c(xx^-)\neq c(xy)\neq c(yy^-)=c(y)$. Furthermore, we have $xy\notin E(P)$ (for otherwise, we have $c(xy)\in \{c(xx^-),c(yy^-)\}=\{c(x),c(y)\}$, a contradiction). If $x\prec y$, then the two cycles $C_{13}:=u_1P^+xyP^-x^+u_1$ and $C_{14}:=y^{++}P^+u_k y^{++}$ are properly colored. Clearly, we have $|C_{13}|\geq t$ and $|C_{14}|\geq t$. Thus, $H_9:=H-P+C_{13}+C_{14}+y^+y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $x\succ y$, then the cycle $C_{15}:=u_1P^=yxP^-y^{++}u_kP^-x^+u_1$ is properly colored with $|C_{15}|=k-1> t$. Thus, $H_{10}:=H-P+C_{15}+y^+y^*$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. $x\in X_1$ and $y\in J''$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{++})\neq c(y^{++}y^{+++})$, $c(u_ky^{--})\neq c(y^{--}y^{---})$, and $c(x)=c(xx^+)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^+$ (for otherwise, we have $c(xy)=c(xx^+)=c(x)$, a contradiction). Then $H_3$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $c(xy)\neq c(yy^-)$, then $H_4$ or $H_5$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. If $c(xy)\neq c(yy^+)$, then $H_1$ or $H_2$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, a contradiction. Hence, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in X_2$ and $y\in J''$. Then we have $c(u_1x^-)\neq c(x^-x^{--})$, $c(u_ky^{++})\neq c(y^{++}y^{+++})$, $c(u_ky^{--})\neq c(y^{--}y^{---})$, and $c(x)=c(xx^+)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, we have $c(xy)=c(xx^-)=c(x)$, a contradiction). Then $H_6$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $c(xy)\neq c(yy^-)$, then $H_9$ or $H_{10}$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. 
If $c(xy)\neq c(yy^+)$, then $H_7$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, and $H_8$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. Hence, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in J'$ and $y\in Y_1$. Then we have $c(u_1x^{-})\neq c(x^{-}x^{--})$, $c(u_1x^{+})\neq c(x^{+}x^{++})$, $c(u_ky^{--})\neq c(y^{--}y^{---})$, and $c(y)=c(yy^+)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^-$ (for otherwise, we have $c(xy)=c(yy^+)=c(y)$, a contradiction). Then $H_6$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $c(xy)\neq c(xx^-)$, then $H_7$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, and $H_8$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. If $c(xy)\neq c(xx^+)$, then $H_1$ or $H_2$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. Hence, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in J'$ and $y\in Y_2$. Then we have $c(u_1x^{-})\neq c(x^{-}x^{--})$, $c(u_1x^{+})\neq c(x^{+}x^{++})$, $c(u_ky^{++})\neq c(y^{++}y^{+++})$, and $c(y)=c(yy^-)\neq c(xy)$. If $xy\in E(P)$, then we have $x=y^+$ (for otherwise, we have $c(xy)=c(yy^-)=c(y)$, a contradiction). Then $H_3$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $c(xy)\neq c(xx^-)$, then $H_9$ or $H_{10}$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, a contradiction. If $c(xy)\neq c(xx^+)$, then $H_5$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, and $H_4$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$ unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. Hence, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. $x\in J'$ and $y\in J''$. Suppose first that $xy\in E(P)$. If $x=y^+$, then $H_3$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $x=y^-$, then $H_1$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. Hence, we must have $xy\notin E(P)$. If $c(xx^-) \neq c(xy)\neq c(yy^-)$, then $H_9$ or $H_{10}$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, a contradiction. If $c(xx^-) \neq c(xy)\neq c(yy^+)$, then $H_7$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $d_P(x,y)= 3$, and $H_8$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$, unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. If $c(xx^+) \neq c(xy)\neq c(yy^-)$, then $H_5$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$ unless $d_P(x,y)= 3$, and $H_4$ is a $1^{(t)}$-path-cycle of $G$ larger than $H$ unless $3\leq d_P(x,y)< t-1$ and $t\geq 5$. If $c(xx^+) \neq c(xy)\neq c(yy^-)$, then $H_1$ or $H_{2}$ is a 1-path-cycle of $G$ larger than $H$, a contradiction. As a result, we have $d_P(x,y)=3$, or $3\leq d_P(x,y)< t-1$ and $t\geq 5$. The claim holds. Note that $|X^\star|+2|J'|=|R_1^{(1)}|+|R_1^{(2)}|\geq \frac{2}{3}n+1$ and $|Y^\star|+2|J''|=|R_k^{(1)}|+|R_k^{(2)}|\geq \frac{2}{3}n+1$. So, by $t\leq \frac{1}{3}n$, we have $$\label{s2equ2} |X^\star|\geq 1 \mathrm{~or~} |Y^\star|\geq 1.$$ Now we define two directed bipartite graphs. 
If $t\leq 4$, then we define a directed bipartite graph $D_1$ with bipartition $( X^\star\cup J', Y^\star\cup J'')$ by setting the set of arcs $A(D_1):= \{ (x,y): x\in X^\star\cup J', y\in Y^\star\cup J'', d_P(x,y)\neq 3, c(xy)\neq c(x)\}\cup \{(y,x): x\in X^\star\cup J', y\in Y^\star\cup J'', d_P(x,y)\neq 3, c(xy)\neq c(y)\}.$ If $t\geq 5$, then we define a directed bipartite graph $D_2$ with bipartition $( X^\star\cup J', Y^\star\cup J'')$ by setting the set of arcs $A(D_2):= \{ (x,y): x\in X^\star\cup J', y\in Y^\star\cup J'', d_P(x,y)\geq t-1 \mathrm{~or~} d_P(x,y)=1, c(xy)\neq c(x)\}\cup \{(y,x): x\in X^\star\cup J', y\in Y^\star\cup J'', d_P(x,y)\geq t-1 \mathrm{~or~} d_P(x,y)=1, c(xy)\neq c(y)\}$. Note that for any two vertices $x\in X^\star\cup J'$ and $y\in J''$, we have $(x,y)\notin A(D)$, and for any two vertices $x\in J'$ and $y\in Y^\star\cup J''$, we have $(y,x)\notin A(D)$. **Claim 4**. *Both $D_1$ and $D_2$ contain a directed 2-cycles or, equivalently, there are two vertices $x$ and $y$ in $D_1$ such that $d_P(x,y)\neq 3$ and both $(x,y)$ and $(y,x)$ are arcs of $D_1$, and there are two vertices $x'$ and $y'$ in $D_2$ such that $d_P(x',y')\geq t-1 \mathrm{~or~} d_P(x',y')=1$ and both $(x',y')$ and $(y',x')$ are arcs of $D_2$.* From ([\[S2Eq9\]](#S2Eq9){reference-type="ref" reference="S2Eq9"}), we have $2|X^\star\cup J'|=2|X_1|+2|X_2|+2|J'|= |X^\star|+|R^{(1)}_1|+|R^{(2)}_1| \geq |X^\star|+\frac{2}{3}n+1$ and $2|Y^\star\cup J''|=2|Y_1|+2|Y_2|+2|J''|= |Y^\star|+|Q^{(1)}_k|+|Q^{(2)}_k| \geq |Y^\star|+\frac{2}{3}n+1$. Since $|\{y\in V(P) : d_P(x,y)= 3 \}|=2$ for every $x\in X^\star$, by Claim [Claim 3](#c3){reference-type="ref" reference="c3"}, we have $$\begin{aligned} d_{D_1}^+(x)&\geq&|\{y:c(xy)\neq c(x)\}\cap (Y^\star\cup J'')|-2\\ &\geq& d^c(x)-1+\frac{|Y^\star|+\frac{2}{3}n+1}{2}-n-2\\ &\geq& \delta^c(G)-1+\frac{|Y^\star|+\frac{2}{3}n+1}{2}-n-2\\ &\geq& \frac{|Y^\star|+1}{2}.\end{aligned}$$ Since $|\{y\in V(P) : 3\leq d_P(x,y)< t-1 \}|\leq t-2$ for every $x\in X^\star$, by Claim [Claim 3](#c3){reference-type="ref" reference="c3"}, we have $$\begin{aligned} d_{D_2}^+(x)&\geq&|\{y:c(xy)\neq c(x)\}\cap (Y^\star\cup J'')|-(t-2)\\ &\geq& d^c(x)-1+\frac{|Y^\star|+\frac{2}{3}n+1}{2}-n-(t-2)\\ &\geq& \delta^c(G)-1+\frac{|Y^\star|+\frac{2}{3}n+1}{2}-n-(t-2)\\ &\geq& \frac{|Y^\star|+3}{2}.\end{aligned}$$ In the same way, for every $y\in Y^\star$, we have $$d_{D_1}^+(y)\geq \frac{|Y^\star|+1}{2} \mathrm{~and~} d_{D_2}^+(y)\geq \frac{|Y^\star|+3}{2}.$$ Since $(x,y)\notin A(D_1)$ and $(x,y)\notin A(D_2)$ for any two vertices $x\in X^\star\cup J'$ and $y\in J''$, there are at least $|X^\star|\frac{|Y^\star|+1}{2}$ arcs from $X^\star$ to $Y^\star$ in $D_i$ for each $i\in\{1,2\}$. In the same way, for each $i\in\{1,2\}$, there are at least $|Y^\star|\frac{|X^\star|+1}{2}$ arcs from $Y^\star$ to $X^\star$ in $D_i$. Since there are at most $|X^\star||Y^\star|$ arcs not contained in any directed 2-cycle in $X^\star\cup Y^\star$ and $$A(X^\star,Y^\star)+A(Y^\star,X^\star)-|X^\star||Y^\star|\geq \frac{1}{2}(|X^\star|+|Y^\star|),$$ there exist at least $\frac{1}{2}(|X^\star|+|Y^\star|)$ arcs contained in directed 2-cycles. By ([\[s2equ2\]](#s2equ2){reference-type="ref" reference="s2equ2"}), we have $\frac{1}{2}(|X^\star|+|Y^\star|)>0$. The claim holds. 
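For intuition (this is an illustration only and plays no role in the proof), the directed 2-cycles exploited in Claims [Claim 2](#c2){reference-type="ref" reference="c2"} and [Claim 4](#c5){reference-type="ref" reference="c5"} can be enumerated mechanically once the path $P$, the edge coloring and the vertex coloring $c(\cdot)$ are given. The sketch below assumes the data is stored in plain dictionaries; all names are hypothetical.

```python
from itertools import product

def directed_two_cycles(P, colour_edge, colour_vertex, X_star, Y_star, t):
    """Pairs (x, y) spanning a directed 2-cycle of D_2: d_P(x, y) >= t-1 or d_P(x, y) == 1,
    and the edge colour c(xy) differs from both vertex colours c(x) and c(y)."""
    pos = {v: i for i, v in enumerate(P)}      # P is the list u_1, ..., u_k in path order
    cycles = []
    for x, y in product(X_star, Y_star):
        d = abs(pos[x] - pos[y])               # distance along the path P
        if d == 1 or d >= t - 1:
            col = colour_edge[frozenset((x, y))]
            if col != colour_vertex[x] and col != colour_vertex[y]:
                cycles.append((x, y))
    return cycles
```

In the setting of the proof, Claim [Claim 4](#c5){reference-type="ref" reference="c5"} guarantees that such a pair exists whenever $|X^\star|+|Y^\star|>0$.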
By Claim [Claim 4](#c5){reference-type="ref" reference="c5"}, if $t\leq 4$, then there exist two vertices $x\in X^\star$ and $y\in Y^\star$ such that $d_P(x,y)\neq 3$ and $c(xy)\notin \{c(x),c(y)\}$, and if $t\geq 5$, then there exist two vertices $x\in X^\star$ and $y\in Y^\star$ such that $d_P(x,y)\geq t-1 \mathrm{~or~} d_P(x,y)=1$ and $c(xy)\notin \{c(x),c(y)\}$, which contradicts Claim [Claim 3](#c3){reference-type="ref" reference="c3"} in both situations. This completes the proof of Theorem [Theorem 3](#t1){reference-type="ref" reference="t1"}. Note that, from Theorem [Theorem 3](#t1){reference-type="ref" reference="t1"}, we have the following useful corollary. **Corollary 7**. *Let $K^c_{n,n}$ be an edge-colored complete balanced bipartite graph such that $\delta^c(K^c_{n,n})\geq \frac{2n}{3}+t$. Then $K^c_{n,n}$ can be covered by at most $\lceil\frac{2n}{t}\rceil$ PC odd paths.* # Proof of Theorem [Theorem 4](#t2){reference-type="ref" reference="t2"} {#proof-of-theorem-t2} Let $\varepsilon>0$ be a fixed number. Let $G=K^c_{n,n}$ be an edge-colored complete balanced bipartite graph with bipartition $(X,Y)$ and $\delta^c(G)\geq (\frac{2}{3}+\varepsilon)n$. Since $\delta^c(G)\leq n$, this means that $\varepsilon \leq \frac{1}{3}$. We first introduce two useful definitions, where Definition [Definition 2](#S3D2){reference-type="ref" reference="S3D2"} was first introduced in Lo [@A1]. **Definition 1**. *For every $x\in X$ and $y\in Y$, a path $P$ of $G$ is called an **absorbing path** for $(x,y)$ if the following statements hold.* *(i) $P:=x'y'x''y''$ is a PC $3$-path;* *(ii) $V(P)\cap \{x,y\}=\emptyset$;* *(iii) the path $x'y'xyx''y''$ is properly colored.* **Definition 2**. *Let $x_1,x_2,y_1,y_2$ be four distinct vertices of $G$ with $x_1,x_2\in X$ and $y_1,y_2\in Y$. A path $P$ of $G$ is called an **absorbing path** for $(x_1,y_1;x_2,y_2)$ if the following statements hold.* *(i) $P:=x'y'x''y''$ is a PC $3$-path;* *(ii) $V(P)\cap \{x_1,y_1, x_2,y_2\}=\emptyset$;* *(iii) the two paths $x'y'x_1y_1$ and $y''x''y_2x_2$ are properly colored.* For convenience, each $(x,y)$ in Definition [Definition 1](#S3D1){reference-type="ref" reference="S3D1"} is called a *D1-element* and each $(x_1,y_1; x_2,y_2)$ in Definition [Definition 2](#S3D2){reference-type="ref" reference="S3D2"} is called a *D2-element*. For each D1-element $(x,y)$, we use $\mathcal{P}(x,y)$ to denote the set of absorbing paths for $(x,y)$, and for each D2-element $(x_1,y_1; x_2,y_2)$, we use $\mathcal{P}(x_1,y_1;x_2,y_2)$ to denote the set of absorbing paths for $(x_1,y_1;x_2,y_2)$. Then all the paths in $\mathcal{P}(x,y)$ and $\mathcal{P}(x_1,y_1;x_2,y_2)$ are PC $3$-paths. The following proposition can be found in Lo [@A1]. **Proposition 1** ([@A1]). *Let $P':=x_1y_1x_2y_2\ldots x_{l}y_l$ be a PC path with $l\geq 2$ and $P:=x'y'x''y''$ be an absorbing path for $(x_1,y_1;x_{l},y_l)$. Suppose that $V(P)\cap V(P')=\emptyset$. Then $x'y'x_1y_1x_2y_2\ldots x_{l}y_lx''y''$ is a PC path.* The next lemma establishes a lower bound on $|\mathcal{P}(x,y)|$ and a lower bound on $|\mathcal{P}(x_1,y_1;x_2,y_2)|$. **Lemma 8**. *Suppose that $n\geq \frac{4}{\varepsilon}$. Then we have the following two inequalities:* *(i) $|\mathcal{P}(x,y)|\geq \frac{16}{9}\varepsilon^2 n^4$ for every D1-element $(x,y)$, and* *(ii) $|\mathcal{P}(x_1,y_1;x_2,y_2)|\geq \frac{16}{9}\varepsilon^2 n^4$ for every D2-element $(x_1,y_1;x_2,y_2)$.* *Proof.* Since $\delta^c(K^c_{n,n})+\Delta_{mon}(K^c_{n,n})\leq n+1$, we have $\Delta_{mon}(K^c_{n,n})\leq (\frac{1}{3}-\varepsilon)n+1$. 
To prove (i), we consider an arbitrary absorbing path $P=x'y'x''y''$ for $(x,y)$. Then $V(P)\cap \{x,y\}=\emptyset$. Since $c(xy')\neq c(xy)$ and $c(x'y')\neq c(xy')$, each of $x'$ and $y'$ has at least $(\frac{2}{3}+\varepsilon)n-1$ choices. Since $n\geq \frac{4}{\varepsilon}$, we have $(\frac{2}{3}+\varepsilon)n-1> \frac{2}{3} n$. Since $c(x''y')\neq c(y'x')$ and $c(x''y)\neq c(x''y')$, $x''$ has at least $\delta^c(K^c_{n,n})-1-\Delta_{mon}(K^c_{n,n})\geq (\frac{1}{3}+2\varepsilon)n-2\geq (\frac{1}{3}+\varepsilon)n$ choices, given $(x',y')$. Since $c(y''x'')\neq c(x''y)$ and $c(y''x'')\neq c(x''y')$, $y''$ has at least $\delta^c(K^c_{n,n})-1-\Delta_{mon}(K^c_{n,n})\geq (\frac{1}{3}+\varepsilon)n$ choices, given $(x',y',x'')$. Hence, we have $$|\mathcal{P}(x,y)|\geq (\frac{2}{3}n)^2[(\frac{1}{3}+\varepsilon)n]^2 =\frac{4}{9}(\frac{1}{3}+\varepsilon)^2 n^4\geq \frac{4}{9}(2\varepsilon)^2 n^4\geq\frac{16}{9}\varepsilon^2 n^4.$$ This gives (i). To prove (ii), we consider an arbitrary absorbing path $P=x'y'x''y''$ for $(x_1,y_1;x_2,y_2)$. Then $V(P)\cap \{x_1,y_1,x_2,y_2\}=\emptyset$. Since $x',y'\notin\{x_1,y_1,x_2,y_2\}$, $c(x_2y_2)\neq c(y_2x'')$, and $c(x_1y')\neq c(x_1y_1)$, each of $y'$ and $x''$ has at least $(\frac{2}{3}+\varepsilon)n-2$ choices. Since $n\geq \frac{4}{\varepsilon}$, we have $(\frac{2}{3}+\varepsilon)n-2\geq \frac{2}{3} n$. Since $x'\notin\{x_1, x_2 ,x''\}$, $c(x'y')\neq c(y'x'')$, and $c(x'y')\neq c(x_1y')$, $x'$ has at least $(\frac{2}{3}+\varepsilon)n-3-[(\frac{1}{3}-\varepsilon)n+1] =(\frac{1}{3}+2\varepsilon)n-4$ choices, given $(y',x'')$. Since $n\geq \frac{4}{\varepsilon}$, we have $(\frac{1}{3}+2\varepsilon)n-4\geq (\frac{1}{3}+\varepsilon)n$. In the same way, $y''$ has at least $(\frac{1}{3}+\varepsilon)n$ choices, given $(x',y',x'')$. Hence, we have $$|\mathcal{P}(x_1,y_1;x_2,y_2)|\geq (\frac{2}{3} n)^2[(\frac{1}{3}+\varepsilon)n]^2=\frac{4}{9}(\frac{1}{3}+\varepsilon)^2 n^4\geq \frac{4}{9}(2\varepsilon)^2 n^4\geq\frac{16}{9}\varepsilon^2 n^4.$$ This gives (ii) and the lemma follows. ◻ Next, we use probabilistic arguments to find a family $\mathcal{F}$ of vertex-disjoint PC $3$-paths such that, for every D1-element $(x,y)$ and every D2-element $(x_1,y_1;x_2,y_2)$, the number of absorbing paths for $(x,y)$ in ${\cal F}$, i.e., $|{\cal F}\cap \mathcal{P}(x,y)|$, and the number of absorbing paths for $(x_1,y_1;x_2,y_2)$ in ${\cal F}$, i.e., $|{\cal F}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|$, are both linear in $n$. Finally, we find a small cycle $C$ from $\mathcal{F}$ and use $C$ to absorb some PC paths outside $C$. Recall the famous Chernoff bound for the binomial distribution and Markov's inequality, which we will use in the proof of Lemma [Lemma 9](#s3l2){reference-type="ref" reference="s3l2"}. **Proposition 2**. ***(Chernoff bound)** Suppose that $X$ has the binomial distribution and $0<a<3/2$. Then $\mathbf{Pr}(|X-\mathbf{E}(X)|\geq a\mathbf{E}(X)) \leq 2e^{-a^2\frac{\mathbf{E}(X)}{3}}$.* **Proposition 3**. ***(Markov's inequality)** Suppose that $Y$ is an arbitrary nonnegative random variable and $a>0$. Then $\mathbf{Pr}[Y> a\mathbf{E}(Y)]<\frac{1}{a}$.* **Lemma 9**. *Consider a D1-element $(x,y)$ and a D2-element $(x_1,y_1;x_2,y_2)$. Let $0<\gamma<1/2$, and suppose that $|\mathcal{P}(x,y)|\geq \gamma n^4$ and $|\mathcal{P}(x_1,y_1;x_2,y_2)|\geq \gamma n^4$. 
Then there exists an integer $n_0(\gamma)$ such that, whenever $n\geq n_0(\gamma)$, there exists a family $\mathcal{F}$ of vertex-disjoint PC $3$-paths such that $$|\mathcal{F}|\leq 2^{-4}\gamma n,$$ $$|\mathcal{F}\cap \mathcal{P}(x,y)|\geq 2^{-7}\gamma^2 n,$$ and $$|\mathcal{F}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|\geq 2^{-7}\gamma^2 n.$$* *Proof.* Let us choose an integer $n_0=n_0(\gamma)$ sufficiently large so that $$\label{e1} \max\{\exp\{-\gamma n_0/(3\times 2^5)\}, \exp\{-\gamma^2 n_0/(3\times 2^7)\} \}\leq 1/6.$$ Then we assume in the following that $n\geq n_0$. We consider a randomly generated family $\mathcal{F'}$ of 3-paths $x'y'x''y''$. There are $n^2(n-1)^2$ possible such 3-paths (candidates) in total. The family $\mathcal{F'}$ is generated by selecting each candidate independently at random with probability $p=2^{-5} \gamma \frac{1}{n(n-1)^2}\geq 2^{-5} \gamma n^{-3}$. Then we have $$\mathbf{E}(|\mathcal{F'}|)=pn^2(n-1)^2= 2^{-5} \gamma n,$$ $$\mathbf{E}(|\mathcal{F}'\cap \mathcal{P}(x,y)|)=p\gamma n^4 \geq 2^{-5} \gamma^2 n,$$ and $$\mathbf{E}(|\mathcal{F'}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|)=p\gamma n^4 \geq 2^{-5} \gamma^2 n.$$ By Proposition [Proposition 2](#p1){reference-type="ref" reference="p1"} and inequality ([\[e1\]](#e1){reference-type="ref" reference="e1"}), each of the following three inequalities $$|\mathcal{F'}|\leq 2\mathbf{E}(|\mathcal{F'}|)=2^{-4} \gamma n,$$ $$|\mathcal{F'}\cap \mathcal{P}(x,y)|\geq \frac{1}{2} \mathbf{E}(|\mathcal{F'}\cap \mathcal{P}(x,y)|)\geq 2^{-6} \gamma^2 n,$$ and $$|\mathcal{F'}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|\geq \frac{1}{2} \mathbf{E}(|\mathcal{F'}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|)\geq 2^{-6} \gamma^2 n$$ holds with a probability of at least $\frac{2}{3}$. We say that two 3-paths $x'y'x''y''$ and $\tilde{x}'\tilde{y}'\tilde{x}''\tilde{y}''$ are *intersecting* if they have at least one common vertex, i.e., $\{x',y',x'',y''\}\cap \{\tilde{x}',\tilde{y}',\tilde{x}'',\tilde{y}''\}\neq \emptyset$. Then the expected number of intersecting 3-path pairs in $\mathcal{F'}$ is about $$n^2(n-1)^2\times 4^2 \times n(n-1)^2 \times p^2=2^{-6}\gamma^2 n.$$ By Proposition [Proposition 3](#p2){reference-type="ref" reference="p2"}, with a probability of at least 1/2, $\mathcal{F'}$ contains at most $2^{-7}\gamma^2 n$ pairs of intersecting 3-paths. Now we obtain a family $\mathcal{F}$ of vertex-disjoint PC $3$-paths from $\mathcal{F}'$ in the following way: first remove a 3-path from each pair of intersecting $3$-paths of $\mathcal{F'}$, and then delete all the $3$-paths that are not properly colored. It follows that $$|\mathcal{F}|\leq|\mathcal{F'}|\leq 2^{-4}\gamma n,$$ $$|\mathcal{F}\cap \mathcal{P}(x,y)|\geq 2^{-6} \gamma^2 n-2^{-7}\gamma^2 n= 2^{-7}\gamma^2 n,$$ and $$|\mathcal{F}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|\geq 2^{-6} \gamma^2 n-2^{-7}\gamma^2 n= 2^{-7}\gamma^2 n.$$ The result follows. ◻ **Lemma 10**. *Suppose that $\delta^c(K^c_{n,n})\geq \frac{2}{3}n+2$. Then, for every D2-element $(x_1,y_1;x_2,y_2)$, there exist at least $\frac{4}{3}n$ edges $xy$ of $G$ with $x\in X\setminus \{x_1,x_2\}$ and $y\in Y\setminus \{y_1,y_2\}$ such that $x_1y_1xyx_2y_2$ is a PC path.* *Proof.* Set $$X^\star:=\{x\in X:x\neq x_2~\mathrm{and}~c(x y_1)\neq c(x_1y_1)\}$$ and $$Y^\star:=\{y\in Y:y\neq y_1~\mathrm{and}~c(y x_2)\neq c(x_2y_2)\}.$$ Clearly, we have $|X^\star|,|Y^\star|\geq \delta^c(K^c_{n,n})-2\geq \frac{2}{3}n$. 
We now define a directed bipartite graph $D$ with bipartition $( X^\star, Y^\star)$ by setting the set of arcs $A(D):= \{ (x,y): x\in X^\star, y\in Y^\star, c(xy)\neq c(y_1x)\}\cup \{(y,x): x\in X^\star, y\in Y^\star, c(xy)\neq c(yx_2)\}.$ Thus, for every $x\in X^\star$, we have $$d_D^+(x)\geq |\{y\in Y^\star: c(xy)\neq c(y_1x)\}|\geq|Y^\star|+\delta^c(K^c_{n,n})-1-n\geq |Y^\star|/2+1.$$ In the same way, for every $y\in Y^\star$, we also have $d_D^+(y)\geq|X^\star|/2+1$. Since at most $|X^\star||Y^\star|$ arcs of $D$ are not contained in any directed 2-cycle and $$A(X^\star,Y^\star)+A(Y^\star,X^\star)-|X^\star||Y^\star|\geq |X^\star|+|Y^\star|,$$ at least $|X^\star|+|Y^\star|\geq \frac{4}{3}n$ arcs of $D$ are contained in directed 2-cycles. From the construction of $D$, there exist at least $\frac{4}{3}n$ edges $xy\in K^c_{n,n}[X^\star, Y^\star]$ such that $c(xy)\neq c(y_1x)~ \mathrm{and}~ c(xy)\neq c(yx_2)$. By the definitions of $X^\star$ and $Y^\star$, $c(x y_1)\neq c(x_1y_1)$ and $c(y x_2)\neq c(x_2y_2)$. Hence, the path $x_1y_1xyx_2y_2$ is a PC path. The result follows. ◻ Now we are ready to show the following lemma on the absorbing cycle. **Lemma 11**. *Let $\varepsilon>0$ and suppose that $\delta^c(G)\geq (\frac{2}{3}+\varepsilon)n$. Then there exists an integer $n_0=n_0(\varepsilon)$ such that, whenever $n\geq n_0(\varepsilon)$, there exists a PC cycle $C$ of length at most $\frac{2\varepsilon^2 n}{3}$ in $G$ such that, for each positive integer $k\leq \frac{2\varepsilon^4n}{81}$ and for any $k$ vertex-disjoint PC odd paths $Q_1, Q_2,\cdots,Q_k$ in $G\setminus C$, there exists a PC cycle $C'$ such that $V(C')=V(C)\cup \bigcup_{1\leq i\leq k}V(Q_i)$.* *Proof.* Without loss of generality, we may assume that $\varepsilon< 2^{-3}$. Set $\gamma=2^4\varepsilon^2/9$. Choose $n_0(\varepsilon)$ large enough such that Lemma [Lemma 9](#s3l2){reference-type="ref" reference="s3l2"} holds and such that $n_0(\varepsilon)\geq \frac{243}{4\varepsilon^3}$. Now let $n_0=n_0(\varepsilon)$ and suppose that $n\geq n_0$. Let $(x,y)$ be a D1-element and let $(x_1,y_1;x_2,y_2)$ be a D2-element. From Lemma [Lemma 8](#s3l1){reference-type="ref" reference="s3l1"}, we have $|\mathcal{P}(x,y)|\geq \gamma n^4$ and $|\mathcal{P}(x_1,y_1;x_2,y_2)|\geq \gamma n^4$. From Lemma [Lemma 9](#s3l2){reference-type="ref" reference="s3l2"}, there exists a family $\mathcal{F}$ of vertex-disjoint PC $3$-paths such that $$\label{s3equ1} |\mathcal{F}|\leq 2^{-4}\gamma n=\frac{\varepsilon^2 n}{9},$$ $$\label{s3equ2} |\mathcal{F}\cap \mathcal{P}(x,y)|\geq 2^{-7}\gamma^2 n=\frac{2\varepsilon^4n}{81},$$ and $$\label{s3equ3} |\mathcal{F}\cap \mathcal{P}(x_1,y_1;x_2,y_2)|\geq 2^{-7}\gamma^2 n=\frac{2\varepsilon^4n}{81}.$$ Next, we find the desired PC cycle $C$ in $G=K^c_{n,n}$. Suppose that $|\mathcal{F}|=f$. Let $F_1,F_2,\ldots,F_{f}$ be the PC 3-paths of $\mathcal{F}$. For convenience, we suppose that $$F_i=x_{2i-1}y_{2i-1}x_{2i}y_{2i},~ i\in\{1,2,\ldots,f\},$$ where $x_j\in X$ and $y_j\in Y$ for $j=1,2,\ldots, 2f$. Moreover, we assume that $x_{2f+1}=x_1$ and $y_{2f+1}=y_1$. We next construct a matching $\{x^{(i)}y^{(i)}:i=1,2,\ldots,f\}$ of $G\setminus \mathcal{F}$ with the desired property by the following procedure of $f$ iterations, which generates a sequence of matchings $M_1,M_2,\ldots, M_f$ with $M_i=\{x^{(1)}y^{(1)}, x^{(2)}y^{(2)}, \ldots, x^{(i)}y^{(i)}\}$, $i=1,2,\ldots,f$. **Step 1:** Initially set $M_0:=\emptyset$ and $i:=1. 
**Step 2:** Set $S_i:=V(\mathcal{F})\cup V(M_{i-1})\setminus \{x_{2i}, y_{2i}, x_{2i+1}, y_{2i+1}\}$ and set $G_i:=G[V(G)\setminus S_i]$. Then pick an edge $x^{(i)}y^{(i)}\in E(G_i)$ such that $x_{2i}y_{2i}x^{(i)}y^{(i)}x_{2i+1}y_{2i+1}$ is a PC path in $G_i$. Set $M_i:=M_{i-1}\cup \{x^{(i)}y^{(i)}\}$. **Step 3:** If $i<f$, then set $i:=i+1$, return to Step 2. If $i=f$, then stop the procedure. In Step 2 of the above procedure, we have $|S_i|\leq 4f+2i\leq 6f\leq \frac{2\varepsilon^2 n}{3}$, where the last inequality follows from ([\[s3equ1\]](#s3equ1){reference-type="ref" reference="s3equ1"}). This means that $G_i$ is an edge-colored complete balanced bipartite graph with $$\delta^c(G_i)\geq (\frac{2}{3}+\varepsilon)n- \frac{\varepsilon^2 n}{3}> \frac{2}{3}n+2.$$ By Lemma [Lemma 10](#lem3){reference-type="ref" reference="lem3"}, there exists an edge $x^{(i)}y^{(i)}\in E(G_i)$ such that $x_{2i}y_{2i}x^{(i)}y^{(i)}x_{2i+1}y_{2i+1}$ is a PC path in $G_i$. Consequently, the above procedure correctly generates a matching $M_f=\{x^{(i)}y^{(i)}:i=1,2,\ldots,f\}$ of $G\setminus \mathcal{F}$. The implementation of the above procedure also implies that $x_{2i}y_{2i}x^{(i)}y^{(i)}x_{2i+1}y_{2i+1}$ is a PC path in $G_i\subset G$ for $i=1,2,\ldots,f$. Thus, $$C:=x_{1}y_{1}x_{2}y_{2}x^{(1)}y^{(1)}x_{3}y_{3}x_{4}y_{4}x^{(2)}y^{(2)}\cdots x_{2f}y_{2f}x^{(f)}y^{(f)}x_1$$ is a PC cycle in $G$ containing all the PC 3-paths of ${\cal F}$. Note that $|C|=6f\leq \frac{2\varepsilon^2 n}{3}$, which is just what we desire. Suppose that $\mathcal{R}$ is a family of vertex-disjoint PC odd paths in $G\setminus C$ such that $|\mathcal{R}|\leq \frac{2\varepsilon^4n}{81}$. Recall that $\mathcal{F}=\{F_1,F_2,\ldots, F_f\}$ and $C=F_1x^{(1)}y^{(1)}F_2x^{(2)}y^{(2)}\ldots F_fx^{(f)}y^{(f)}x_1$. We next consider a bipartite graph $G^*$ with bipartition $(\mathcal{R},\mathcal{F})$. For $Q\in \mathcal{R}$ and $F\in \mathcal{F}$, we define $QF$ as an edge of $G^*$ if and only if one of the following two events occurs: (i) $Q=xy$ with $(x,y)$ being a D1-element and $F\in {\cal P}(x,y)$ is an absorbing path for $(x,y)$, and (ii) $Q=x'y'\cdots x''y''$ with $(x',y';x'',y'')$ being a D2-element and $F\in {\cal P}(x',y';x'',y'')$ is an absorbing path for $(x',y';x'',y'')$. From ([\[s3equ2\]](#s3equ2){reference-type="ref" reference="s3equ2"}) and ([\[s3equ3\]](#s3equ3){reference-type="ref" reference="s3equ3"}), each vertex $Q\in \mathcal{R}$ has a degree of at least $\frac{2\varepsilon^4n}{81}\geq |\mathcal{R}|$. By using Hall's theorem (see [@BM], page 419), there is a matching $M^*$ of $G^*$ that covers all the vertices of $\mathcal{R}$. We now generate a new cycle $C'$ of $G$ from $C=F_1x^{(1)}y^{(1)}F_2x^{(2)}y^{(2)}\ldots F_fx^{(f)}y^{(f)}x_1$ in the following way. Let $\mathcal{R}=\{Q_1,Q_2,\ldots,Q_k\}$ and suppose that $M^*=\{Q_iF_{i'}:i=1,2,\ldots,k\}$. For each $i=1,2,\ldots, k$, if $Q_i=xy$ for some D1-element $(x,y)$, then replace the path $F_{i'}=x_{2i'-1}y_{2i'-1}x_{2i'}y_{2i'}$ in $C$ by the new path $Q'_i=x_{2i'-1}y_{2i'-1}xyx_{2i'}y_{2i'}$ containing $V(Q_i)\cup V(F_{i'})$; and if $Q_i=x'y'\cdots x''y''$ for some D2-element $(x',y';x'',y'')$, then replace the path $F_{i'}=x_{2i'-1}y_{2i'-1}x_{2i'}y_{2i'}$ in $C$ by the new path $Q'_i=x_{2i'-1}y_{2i'-1}x'y'\cdots x''y''x_{2i'}y_{2i'}$ containing $V(Q_i)\cup V(F_{i'})$. From the construction of $C'$, we may find that $C'$ is a cycle of $G$ containing $V(C)$ and all the paths of ${\cal R}$. 
Since $M^*=\{Q_iF_{i'}:i=1,2,\ldots,k\}$ is a matching of $G^*$, for each $i\in \{1,2,\ldots,k\}$, the new path $Q'_i$ is a PC path. Consequently, $C'$ is a PC cycle with $V(C')=V(C)\cup \bigcup_{Q\in {\mathcal{R}}}(V(Q))$. The lemma follows. ◻ We are ready for the last step to prove Theorem [Theorem 4](#t2){reference-type="ref" reference="t2"}. Recall that $G=K^c_{n,n}$, $\delta^c(G)\geq (\frac{2}{3}+\varepsilon)n$, and $(X,Y)$ is the bipartition of $G$. Without loss of generality, we may suppose that $\varepsilon< 2^{-6}$, $u=x_1\in X$, and $n_0=n_0(\varepsilon)\geq \frac{243}{\varepsilon^5}$. We need to show that there exists a PC $(2l)$-cycle containing $u$ for each integer $l$ with $2\leq l\leq n$. From Theorem [Theorem 1](#h1){reference-type="ref" reference="h1"}, there exists a PC $4$-cycle containing $u$. For each integer $l$ with $3\leq l\leq \frac{2}{3}\varepsilon n$, since $\delta^c(G)\geq (\frac{2}{3}+\varepsilon)n$, we can greedily find a PC path $P:=x_1y_1x_2y_2\cdots x_{l-1}y_{l-1}$. Set $S:=\{x_2,y_2,x_3,y_3,\cdots, x_{l-2},y_{l-2}\}$ and $G':=G[V(G)\setminus S]$. Note that $$\delta^c(G')\geq (\frac{2}{3}+\varepsilon)n-\frac{2}{3}\varepsilon n \geq (\frac{2}{3}+\frac{1}{3}\varepsilon)n\geq \frac{2}{3}|G'|+2 .$$ From Lemma [Lemma 10](#lem3){reference-type="ref" reference="lem3"}, there exists an edge $xy$ of $G'$ with $x,y\in V(G')\setminus \{x_1,y_1,x_{l-1},y_{l-1}\}$ such that $y_1x_1yx y_{l-1}x_{l-1}$ is a PC path. Hence, the cycle $x_1y_1x_2y_2\cdots x_{l-1}y_{l-1}xyx_1$ is a PC $(2l)$-cycle containing $u$. For each integer $l$ with $l\geq \frac{2}{3}\varepsilon n$, we may suppose that $C$ is a PC $k$-cycle given by Lemma [Lemma 11](#s3l4){reference-type="ref" reference="s3l4"}. Set $H:=G\setminus C$. Then $H$ is an edge-colored complete balanced bipartite graph such that $$\delta^c(H)\geq (\frac{2}{3}+\varepsilon)n- \frac{\varepsilon^2 n}{3}\geq (\frac{2}{3}+\frac{2\varepsilon}{3})n\geq \frac{2}{3}n+\frac{\varepsilon}{3}|H|.$$ Since $n_0\geq \frac{243}{\varepsilon^5}$ and $\varepsilon< 2^{-6}$, we have $\frac{\varepsilon}{3}|H|\geq \frac{\varepsilon}{3}(2n-\frac{2}{3}\varepsilon^2 n)\geq 3$. By Corollary [Corollary 7](#coro2){reference-type="ref" reference="coro2"}, there exists a family $\mathcal{F}$ of vertex-disjoint PC odd paths with $|\mathcal{F}|\leq\lceil\frac{|H|}{\frac{\varepsilon}{3}|H|} \rceil =\lceil\frac{3}{\varepsilon}\rceil$ such that $V(H)=\bigcup_{P\in \mathcal{F}}V(P)$. Set $\mathcal{F}=\{P_1,P_2,\ldots,P_x\}$. If $u\in V(C)$, then let $\mathcal{F'}=\{P'_1,P'_2,\ldots,P'_x\}$ be a family of vertex-disjoint PC odd paths obtained from $\mathcal{F}$ by removing some vertices in the paths of $\mathcal{F}$ such that $|\bigcup_{i\in[x]} V(P'_i)|=2l-k$. If $u\notin V(C)$, then we have $u\in V(\mathcal{F})$. Since $2l\geq\frac{4}{3}\varepsilon n >\frac{2}{3}\varepsilon^2 n+2$, we may suppose that $\mathcal{F'}=\{P'_1,P'_2,\ldots,P'_x\}$ is a family of vertex-disjoint PC odd paths obtained from $\mathcal{F}$ by removing some vertices in the paths of $\mathcal{F}$ such that $|\bigcup_{i\in[x]} V(P'_i)|=2l-k$ and $u\in \bigcup_{i\in[x]} V(P'_i)$. Since $\lceil\frac{3}{\varepsilon}\rceil\leq\frac{2\varepsilon^4n}{81}$, by Lemma [Lemma 11](#s3l4){reference-type="ref" reference="s3l4"}, there exists a PC cycle $C'$ with $V(C')=V(C)\cup\bigcup_{i\in[x]} V(P'_i)$ containing vertex $u$. As a result, $C'$ is a PC $(2l)$-cycle containing $u$. This completes the proof of Theorem [Theorem 4](#t2){reference-type="ref" reference="t2"}. 
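The procedure in the proof of Lemma [Lemma 11](#s3l4){reference-type="ref" reference="s3l4"} is essentially a greedy connector search: consecutive $3$-paths of $\mathcal{F}$ are joined by an edge $x^{(i)}y^{(i)}$ that keeps the concatenation properly colored. The following minimal Python sketch illustrates this connector search on a toy, randomly colored $K_{n,n}$; the vertex encoding, the random coloring and all parameter values are illustrative assumptions only (in the proof itself, the existence of a connector is guaranteed by Lemma 10, not by randomness).

```python
import itertools
import random

def random_coloring(n, num_colors, seed=0):
    """Toy edge-coloring of K_{n,n}: edge x_i y_j gets color colors[(i, j)]."""
    rng = random.Random(seed)
    return {(i, j): rng.randrange(num_colors) for i in range(n) for j in range(n)}

def is_properly_colored(colors, path):
    """`path` alternates x- and y-vertices, e.g. [('x', 0), ('y', 0), ('x', 1), ...];
    check that consecutive edges receive distinct colors."""
    cols = []
    for a, b in zip(path, path[1:]):
        i, j = (a[1], b[1]) if a[0] == 'x' else (b[1], a[1])
        cols.append(colors[(i, j)])
    return all(c1 != c2 for c1, c2 in zip(cols, cols[1:]))

def find_connector(colors, tail, head, used, n):
    """Search for an unused edge x'y' such that
    x_{2i} y_{2i} x' y' x_{2i+1} y_{2i+1} is properly colored
    (this mimics Step 2 of the procedure in the proof of Lemma 11)."""
    (tx, ty), (hx, hy) = tail, head
    for xp, yp in itertools.product(range(n), range(n)):
        if ('x', xp) in used or ('y', yp) in used:
            continue
        path = [('x', tx), ('y', ty), ('x', xp), ('y', yp), ('x', hx), ('y', hy)]
        if is_properly_colored(colors, path):
            return xp, yp
    return None

# Toy usage: connect the ends of two disjoint 3-paths x0 y0 x1 y1 and x2 y2 x3 y3.
colors = random_coloring(n=12, num_colors=12)
used = {('x', v) for v in range(4)} | {('y', v) for v in range(4)}
print(find_connector(colors, tail=(1, 1), head=(2, 2), used=used, n=12))
```

With many colors available, a random coloring typically admits a connector immediately; the point of Lemma 10 is that the color-degree condition guarantees one even in the worst case.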
# Acknowledgment {#acknowledgment .unnumbered} This research was supported in part by the National Natural Science Foundation of China under grant numbers 11971445 and 12171440. 1 N. Alon and G. Gutin, Properly colored Hamilton cycles in edge colored complete graphs, *Random Struct. Algor.* **11** (1997) 179-186. J. Bang-Jensen and G. Gutin, Alternating cycles and paths in edge-coloured multigraphs: a survey, *Discrete Math.* **165/166** (1997) 39-60. J. Bang-Jensen and G. Gutin, *Digraphs: Theory, Algorithms and Applications*, second ed., Springer Monographs in Mathematics, Springer-Verlag London Ltd., London, 2009. J. Bang-Jensen, G. Gutin and A. Yeo, Properly coloured Hamiltonian paths in edge-coloured complete graphs, *Discrete Appl. Math.* **82** (1998) 247-250. O. Barr, Properly coloured Hamiltonian paths in edge-coloured complete graphs without monochromatic triangles, *Ars Combin.* **50** (1998) 316-318. B. Bollobás and P. Erdős, Alternating Hamiltonian cycles, *Israel J. Math.* **23** (1976) 126-131. J.A. Bondy and U.S.R. Murty, *Graph Theory*, GTM 244, Springer, 2008. R. Čada, K. Ozeki and K. Yoshimoto, A complete bipartite graph without properly colored cycles of length four, *J. Graph Theory* **93** (2020) 168-180. C.C. Chen and D.E. Daykin, Graphs with Hamiltonian cycles having adjacent lines of different colors, *J. Combin. Theory Ser. B* **21** (1976) 135-139. X. Chen, F. Huang and J. Yuan, Proper vertex-pancyclicity of edge-colored complete graphs without monochromatic triangles, *Discrete Appl. Math.* **265** (2019) 199-203. X. Chen and X. Li, Proper vertex-pancyclicity of edge-colored complete graphs without joint monochromatic triangle, *Discrete Appl. Math.* **294** (2021) 167-180. Y. Cheng, M. Kano and G.H. Wang, Properly colored spanning trees in edge-colored graphs, *Discrete Math.* **343** (2020). A. Contreras-Balbuena, H. Galeana-Sánchez and I. A. Goldfeder, A new sufficient condition for the existence of alternating Hamiltonian cycles in 2-edge-colored multigraphs, *Discrete Appl. Math.* **229** (2017) 55-63. D.E. Daykin, Graphs with cycles having adjacent lines different colors, *J. Combin. Theory Ser. B* **20** (1976) 149-152. J. Feng, H. Giesen, Y. Guo, G. Gutin, T. Jensen and A. Rafiey, Characterization of edge-colored complete graphs with properly colored Hamilton paths, *J. Graph Theory* **53** (2006) 333-346. S. Fujita, R. Li and S. Zhang, Color degree and monochromatic degree conditions for short properly colored cycles in edge-colored graphs, *J. Graph Theory* **87** (2018) 362-373. S. Guo, F. Huang and J. Yuan, Properly colored 2-factors of edge-colored complete bipartite graphs, submitted. G. Gutin, Note on edge-colored graphs and digraphs without properly colored cycles, *Austral J Combin* **42** (2008) 137-140. R. Häggkvist, A talk at the International Colloquium on Combinatorics and Graph Theory at Balatonlelle, Hungary, July 15-19, 1996. M. Kano and M. Tsugaki, Rainbow and properly colored spanning trees in edge-colored bipartite graphs, *Graphs Comb.* **37** (2021) 1913-1921. B. Li, B. Ning, C. Xu and S. Zhang, Rainbow triangles in edge-colored graphs, *European J. Combin.* **36** (2014) 453-459. H. Li and G. Wang, Color degree and alternating cycles in edge-colored graphs, *Discrete Math.* **309** (2009) 4349-4354. A. Lo, A Dirac type condition for properly coloured paths and cycles, *J. Graph Theory* **76** (2014) 60-87. A. Lo, An edge-colored version of Dirac's Theorem, *SIAM J. Discrete Math.* **28** (2014) 18-36. A. 
Lo, Properly coloured Hamiltonian cycles in edge-colored complete graphs, *Combinatorica* **36** (2016) 471-492. C. Xu, X. Hu, W. Wang and S. Zhang, Rainbow cliques in edge-colored graphs, *European J. Combin.* **54** (2016) 193-200. [^1]: Corresponding author: Fei Huang. Email: hf\@zzu.edu.cn
arxiv_math
{ "id": "2310.04962", "title": "Properly colored even cycles in edge-colored complete balanced bipartite\n graphs", "authors": "Shanshan Guo, Fei Huang, Jinjiang Yuan, C.T. Ng, T.C.E. Cheng", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - - - title: On the emergent dynamics of the infinite set of Kuramoto oscillators --- # Introduction {#sec:1} Synchronization is one of the collective behaviors in which weakly coupled oscillators adjust their rhythms via mutual interactions, and it is often observed in oscillatory systems such as collections of fireflies, neurons and pacemaker cells. However, despite its ubiquitous presence, its rigorous mathematical study began only half a century ago with two pioneers, Arthur Winfree in 1967 [@Wi1; @Wi2] and Yoshiki Kuramoto [@Ku] in 1975. Since then, synchronization has been extensively investigated in diverse scientific disciplines such as applied mathematics, neuroscience and statistical physics. We refer to the survey articles and books [@A-B; @A-B-F-H-K; @D-B2; @H-K-P-Z; @P-R-K; @Pe; @St] for a brief introduction to the subject. To fix the idea, we restrict our discussion to Kuramoto oscillators, whose dynamics is governed by a sum of sinusoidal couplings of phase differences. Consider a lattice $\Lambda \subset \mathbb{R}^d$ with $N$ lattice points (or nodes). We assume that a Kuramoto oscillator is stationed at each lattice point and that interactions are all-to-all with a uniform strength $\frac{\kappa}{N}$. To set up the stage, let $\theta_i = \theta_i(t)$ be the phase of the Kuramoto oscillator at the $i$-th lattice point. In this setting, the phase dynamics is governed by the Cauchy problem to the (finite) Kuramoto model: $$\label{Ku} \begin{cases} \displaystyle {\dot \theta}_{i} = \nu_i + \sum_{j\in[N]} \frac{\kappa}{N} \sin(\theta_j - \theta_i), \quad t > 0, \\ \displaystyle \theta_i(0) = \theta_i^{\text{in}}, \quad i \in [N]:= \{1, \ldots, N \}, \end{cases}$$ where $\nu_i$ is the natural frequency of the $i$-th oscillator. Since the right-hand side of [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} is uniformly bounded and Lipschitz continuous, the standard Cauchy-Lipschitz theory guarantees the global well-posedness of smooth solutions. Thus, what matters for [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} lies in its emergent dynamics. In fact, the emergent dynamics of [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} has been extensively studied in the literature over the last decades from diverse scientific disciplines; see, to name a few, [@B-C-M; @B-D-P; @C-S; @D-X; @D-X2; @D-B1; @D-B2; @H-N-P; @H-R; @L-X; @V-M0; @V-M1; @V-M2]. In this paper, we are interested in the Kuramoto dynamics on an infinitely extended lattice, i.e., the number of Kuramoto oscillators equals the cardinality of the natural numbers. More specifically, we address the following set of questions: - What is a suitable system describing the dynamics of an infinite number of Kuramoto oscillators? - If such a dynamical system exists, under what conditions on system parameters and initial data can we rigorously show emergent collective dynamics? The main purpose of this paper is to answer the questions proposed above. In the collective dynamics community, infinite systems with all-to-all couplings are often approximated by the corresponding Vlasov-type equations (see [@La]), which arise in the large-$N$-oscillator limit. In this way, the mean-field approach can give approximate results for the infinite system under consideration. Therefore, to get exact results on the dynamics of an infinite set of Kuramoto oscillators, we are forced to study the infinite set of ordinary differential equations as it is. 
In this regard, we propose the following natural extension of the finite model [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"}: $$\label{IKM} \dot{\theta}_{i}=\nu_i+\sum_{j\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_j-\theta_i \right), \quad t>0, \quad \forall~ i \in \mathbb{N},$$ where $\kappa_{ij}$ is the coupling strength between the $i$-th and $j$-th oscillators satisfying nonnegativity and row-summability: $$\label{A-1} K = (\kappa_{ij}), \quad \kappa_{ij} \geq 0,\quad \| K \|_{\infty, 1} := \sup_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \kappa_{ij} <\infty.$$ Note that without the coupling strengths $\kappa_{ij}$, the infinite sum on the R.H.S. of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} would not be well-defined, so the introduction of such weights is needed. Moreover, unlike in the Kuramoto model, the uniform coupling strength $\kappa_{ij} = \kappa$ does not satisfy the condition $\eqref{A-1}_3$. Of course, it is not completely new to study such an infinite set of ordinary differential equations. For example, coagulation and fragmentation processes for polymers can be described by an infinite number of ODEs (see [@B-C-P; @Du; @Sl]); see also the infinite lattice Kuramoto model [@Br]. Recently, Wang and Xue [@W-X] studied the flocking behavior of an infinite number of Cucker-Smale particles, and they found that almost the same results as in [@Cu-S; @H-Liu; @H-T] for the original Cucker-Smale model can be obtained. As first observed by the first author and his collaborators in [@H-L-R-S], the first-order Kuramoto model can be lifted to the Cucker-Smale model by introducing auxiliary frequency variables. Thus, it is quite reasonable to carry out an analogous study for the Kuramoto model without resorting to the corresponding mean-field equation. The global well-posedness of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} on the Banach space $(\ell^{\infty},~\| \cdot \|_{\infty})$ follows from the abstract Cauchy-Lipschitz theory together with the Lipschitz continuity of the R.H.S. of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} (see Proposition [Proposition 2](#P2.2){reference-type="ref" reference="P2.2"} and Lemma [Lemma 12](#LA-1){reference-type="ref" reference="LA-1"}). For the special situation: $$\kappa_{ij} \equiv 0, \quad \theta_i \equiv 0, \quad \max\{i,j\} \geq N+1,$$ it is easy to see that the Kuramoto model [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} corresponds to a special case of the proposed infinite model [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"}. Hence, whether or not the infinite system [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} can exhibit emergent dynamics as in the Kuramoto model [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} is a tempting question. Moreover, it would be very interesting to analyze distinct features which cannot be seen in the Kuramoto model with a finite system size. In what follows, we briefly discuss our main results, documented in the following sections from Section [3](#sec:3){reference-type="ref" reference="sec:3"} to Section [5](#sec:5){reference-type="ref" reference="sec:5"}. Let $\mathbb{N} = \{ 1, 2, \ldots \}$ be the set of all natural numbers. 
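To make the summability requirement $\eqref{A-1}_3$ concrete, here is a minimal worked example (the geometric weights below are an illustrative choice and are not used later in the paper): uniform weights violate the condition, whereas geometrically decaying weights satisfy it, $$\kappa_{ij} \equiv \kappa>0 \;\Longrightarrow\; \sum_{j\in\mathbb{N}} \kappa_{ij}=\infty, \qquad \kappa_{ij}:=\frac{\kappa}{2^{j}} \;\Longrightarrow\; \| K \|_{\infty, 1}=\sup_{i\in\mathbb{N}}\sum_{j\in\mathbb{N}}\frac{\kappa}{2^{j}}=\kappa<\infty.$$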
For the emergent dynamics of the infinite Kuramoto model [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} -- [\[A-1\]](#A-1){reference-type="eqref" reference="A-1"}, we consider two types of coupling gain matrix $K = (\kappa_{ij})$: $$\begin{aligned} \begin{aligned} \label{A-2} & \mbox{Row-summable network}:~\kappa_{ij} > 0,\quad i,j \in {\mathbb N},\quad \| K \|_{\infty, 1} < \infty, \\ & \mbox{Sender row-summable network}:~\kappa_{ij} = \kappa_{j}\geq 0,\quad i,j \in {\mathbb N},\quad \| K \|_{\infty, 1} < \infty. \end{aligned}\end{aligned}$$ First, we consider the positive and row-summable network topology $\eqref{A-2}_1$. For a homogeneous ensemble with the same natural frequency $\nu_i = \nu$, thanks to the translational invariance property of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"}, we may assume that the common natural frequency $\nu$ is zero, and [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} reduces to $$\label{A-3} \dot{\theta}_{i}= \sum_{j\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_j-\theta_i \right), \quad t>0, \quad \forall~ i \in \mathbb{N}.$$ In this case, depending on suitable conditions for the network topology $K = (\kappa_{ij})$, the phase diameter can be constant (see Corollary [Corollary 1](#C3.1){reference-type="ref" reference="C3.1"} and Lemma [Lemma 5](#L3.2){reference-type="ref" reference="L3.2"}, respectively). In particular, we can find an explicit example of a non-decreasing phase diameter for some class of coupling gain matrices $K$. This is certainly a novel feature of the infinite model which cannot be seen in a finite system (see also Remark [Remark 4](#R3.2){reference-type="ref" reference="R3.2"} and Corollary [Corollary 1](#C3.1){reference-type="ref" reference="C3.1"}). As can be seen in Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}, a gradient flow formulation for [\[Ku\]](#Ku){reference-type="eqref" reference="Ku"} plays a key role in the rigorous verification of phase-locking for generic initial data in a large coupling regime [@D-X2; @H-K-R; @H-R]. Likewise, the infinite system [\[A-3\]](#A-3){reference-type="eqref" reference="A-3"} can also be written as a gradient flow on the Banach space $\ell^2$ with the potential $P$ (see Proposition [Proposition 3](#P3.4){reference-type="ref" reference="P3.4"}): $$P(\Theta) = \frac{1}{2}\sum_{i,j\in\mathbb{N}} \kappa_{ij} (1-\cos(\theta_i - \theta_j)).$$ Although we cannot use the Lojasiewicz gradient inequality in [@H-J] as it is, we can still use $P$ as a Lyapunov functional to derive complete synchronization (see Theorem [Theorem 1](#T3.1){reference-type="ref" reference="T3.1"}): $$\label{A-4} \lim_{t \to \infty} \sup_{i,j\in\mathbb{N}} |{\dot \theta}_i(t) - {\dot \theta}_j(t) | = 0.$$ On the other hand, for a heterogeneous ensemble with distinct natural frequencies, we can obtain practical synchronization under suitable conditions on the coupling gain matrix $K = (\kappa_{ij})$ (Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"}): $$\limsup_{t\to\infty} \sup_{i,j\in\mathbb{N}} |\theta_i(t) - \theta_j(t) | \le\sin^{-1} \Big( {\mathcal O}(1) \frac{ {\mathcal D}\left(\mathcal{V}\right)}{\| K \|_{-\infty, 1}} \Big).$$ Unfortunately, the complete synchronization estimate [\[A-4\]](#A-4){reference-type="eqref" reference="A-4"} for a heterogeneous ensemble is not available yet. Second, we consider a row-summable sender network topology $\eqref{A-2}_2$. 
In this case, the infinite Kuramoto model reads as $$\label{A-5} \dot{\theta}_{i}= \nu_i + \sum_{j\in\mathbb{N}} \kappa_j\sin\left(\theta_j-\theta_{i}\right), \quad t>0, \quad i \in {\mathbb N}.$$ Compared to the aforementioned symmetric and summable network topology, we have better control of the emergent dynamics. For a homogeneous ensemble, there might be two possible asymptotic states (one-point phase synchrony or a bi-cluster configuration). More precisely, let $\Theta$ be a solution to [\[A-5\]](#A-5){reference-type="eqref" reference="A-5"} with asymptotic configuration $\Theta^\infty=(\theta_1^\infty,\theta_2^\infty,\ldots)$. Then, we have $$\theta_{i}^{\infty}\in\left\{ \theta_{0}\right\} \cup\left\{ \theta_{0}\pm \kappa_{i}\pi\ |\ i\in\mathbb{N}\right\} \cup\left\{ \theta_{0}\pm\left(1-\kappa_{i}\right)\pi\ |\ i\in\mathbb{N}\right\},$$ where $$\theta_0 := \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\text{in}}.$$ We refer to Theorem [Theorem 3](#T5.1){reference-type="ref" reference="T5.1"} and Corollary [Corollary 5](#C5.2){reference-type="ref" reference="C5.2"} for details. On the other hand, for a heterogeneous ensemble, we can rewrite system [\[A-5\]](#A-5){reference-type="eqref" reference="A-5"} as the following second-order model with the initial data prescribed below: $$\label{A-6} \begin{cases} \displaystyle \dot{\theta}_{i}=\omega_{i},\quad t>0,\quad\forall~ i\in\mathbb{N},\\ \displaystyle \dot{\omega}_{i}=\sum_{j\in\mathbb{N}} \kappa_j\cos\left(\theta_i-\theta_j\right)\left(\omega_j-\omega_{i}\right), \\ \displaystyle \theta_{i}(0)=\theta_{i}^{\text{in}}\in\mathbb{R},\quad\omega_{i}(0)=\nu_{i}+\sum_{j\in\mathbb{N}} \kappa_j\sin\left(\theta_j^{\text{in}}-\theta_{i}^{\text{in}}\right), \end{cases}$$ where $$\Theta^{\text{in}} = (\theta_1^{\text{in}}, \theta_2^{\text{in}}, \ldots ) \in \ell^{\infty}, \quad {\mathcal V} = (\nu_1, \nu_2, \ldots) \in \ell^{\infty}.$$ We set $${\mathcal W} := (\omega_1, \omega_2, \ldots) \quad \mbox{and} \quad {\mathcal D}({\mathcal W}) : = \sup_{m, n} |\omega_m - \omega_n|.$$ In this case, for a restricted class of initial phase configurations confined in a quarter arc, we can show that the frequency diameter ${\mathcal D}({\mathcal W})$ decays to zero exponentially fast (see Theorem [Theorem 4](#T5.2){reference-type="ref" reference="T5.2"}). The rest of this paper is organized as follows. In Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we briefly review the emergent dynamics of the finite Kuramoto model and study basic properties of the infinite Kuramoto model such as conservation law, translational invariance and several a priori estimates. In Section [3](#sec:3){reference-type="ref" reference="sec:3"} and Section [4](#sec:4){reference-type="ref" reference="sec:4"}, we study emergent dynamics of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} with a symmetric and row-summable network topology. In Section [5](#sec:5){reference-type="ref" reference="sec:5"}, we investigate the complete synchronization of the infinite Kuramoto model with a sender network topology. Finally, Section [6](#sec:6){reference-type="ref" reference="sec:6"} is devoted to a brief summary of our main results and a discussion of some remaining issues for future work. 
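Although all results below are obtained rigorously, a truncated numerical experiment can help to build intuition for the sender-network model [\[A-5\]](#A-5){reference-type="eqref" reference="A-5"} and its second-order reformulation [\[A-6\]](#A-6){reference-type="eqref" reference="A-6"}. The following Python sketch is purely illustrative and is not used in the analysis; the truncation level $M$, the geometric weights $\kappa_j = 2^{-j}$ and the random natural frequencies are assumptions made only for this demonstration.

```python
import numpy as np

def sender_kuramoto_demo(M=200, T=50.0, dt=0.01, seed=0):
    """Forward-Euler simulation of a truncated version of the sender-network
    model (A-5): theta_i' = nu_i + sum_j kappa_j * sin(theta_j - theta_i).
    All parameter choices are illustrative assumptions, not taken from the paper."""
    rng = np.random.default_rng(seed)
    kappa = 0.5 ** np.arange(1, M + 1)          # summable sender weights kappa_j = 2^{-j}
    nu = 0.02 * rng.standard_normal(M)          # small heterogeneous natural frequencies
    theta = rng.uniform(0.0, 0.5 * np.pi, M)    # initial phases confined in a quarter arc
    for _ in range(int(T / dt)):
        coupling = (kappa * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (nu + coupling)
    omega = nu + (kappa * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    # report the phase and frequency diameters D(Theta), D(W) at time T
    return theta.max() - theta.min(), omega.max() - omega.min()

if __name__ == "__main__":
    print(sender_kuramoto_demo())
```

In such runs one typically observes the frequency diameter ${\mathcal D}({\mathcal W})$ shrinking while the phase diameter stabilizes, in line with the exponential decay established below for initial data confined in a quarter arc.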
**Notation**: Throughout the paper, we write the phase configuration vector and natural frequency vector as $$\begin{aligned} & \Theta_N := (\theta_1,\ldots,\theta_N), \quad \Theta := (\theta_1,\theta_2,\ldots),\\ & {\mathcal V}_N := (\nu_1,\ldots,\nu_N), \quad {\mathcal V} := (\nu_1,\nu_2,\ldots), \end{aligned}$$ and we denote the set $\{1,\ldots,N\}$ by $[N]$ for simplicity. For $A=(a_1,a_2,\ldots)\in \mathbb{R}^\mathbb{N}$ and $p\in[1,\infty]$, we set $$\| { A} \|_p := \begin{cases} \displaystyle \Big( \sum_{i\in\mathbb{N}} |a_i|^p \Big)^{\frac{1}{p}}, \quad & 1 \leq p < \infty, \\ \displaystyle \sup_{i\in\mathbb{N}} |a_i|, \quad & p = \infty, \end{cases}$$ and denote by $\ell^p=\ell^p({\mathbb N})$ the collection of all sequences with a finite $\ell^p$-norm: $$\ell^p({\mathbb N}) : = \Big \{ A\in\mathbb{R}^\mathbb{N} :~\| A \|_p < \infty \Big \}, \quad p\in[1,\infty].$$ Similarly, for every infinite matrix $K=(\kappa_{ij})\in\mathbb{R}^{\mathbb{N}\times \mathbb{N}}$ and $1\leq p,q\leq \infty$, we set $$\|K\|_{p,q}:=\begin{dcases} \displaystyle\left(\sum_{{i\in \mathbb{N}}}\|(\kappa_{ij})_j\|_q^p\right)^{\frac{1}{p}} & (1\leq p<\infty),\\ \displaystyle\sup_{i\in\mathbb{N}}\|(\kappa_{ij})_j\|_q& (p=\infty), \end{dcases}$$ and denote $$\ell^{p,q}:=\left\{K=(\kappa_{ij}):\| K \|_{p,q}<\infty \right\},$$ which also becomes a normed vector space of infinite matrices. Finally, for real vectors $X_N$ and $X$ given by $$\begin{aligned} X_N=(x_1,\ldots,x_N)\in\mathbb{R}^N,\quad X=(x_1,x_2,\ldots)\in\mathbb{R}^\mathbb{N},\end{aligned}$$ we denote the supremum of the difference between their elements by $$\begin{aligned} {\mathcal D}(X_N) := \max_{i,j\in[N]} | x_i - x_j |, \quad {\mathcal D}(X) := \sup_{i,j\in\mathbb{N}} | x_i - x_j |,\end{aligned}$$ and call them the diameters of $X_N$ and $X$, respectively. # Preliminaries {#sec:2} In this section, we study basic properties of the Kuramoto model on static networks with finite and infinite nodes. ## Kuramoto model for a finite ensemble {#sec:2.1} Consider the Cauchy problem to the Kuramoto model with a finite system size [@C-H-J-K; @H-H-K; @H-L-X; @D-X2]: $$\label{B-1} \begin{cases} \displaystyle \dot{\theta}_{i}=\nu_{i}+ \sum_{j\in[N]} \kappa_{ij}\sin\left(\theta_j-\theta_{i}\right),\quad t>0,\\ \displaystyle \theta_{i}(0)=\theta_{i}^{\text{in}}, \quad i\in [N], \end{cases}$$ where $\kappa_{ij}$ is a nonnegative symmetric constant which denotes the coupling strength between the $i$-th and $j$-th oscillators: $$\label{B-1-1} \kappa_{ij} = \kappa_{ji} \geq 0, \quad i, j \in [N].$$ First, we recall some terminologies on emergent dynamics in the following definition. **Definition 1**. *Let $\Theta_N$ be a solution to [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"} -- [\[B-1-1\]](#B-1-1){reference-type="eqref" reference="B-1-1"}.* 1. *The state $\Theta_N$ is phase-locked if the phase differences are constant in time: $$\theta_i(t)-\theta_j(t)\equiv\theta_{ij},\quad t \geq 0, \quad i,j\in[N].$$* 2. *The state $\Theta_N$ achieves asymptotic phase-locking if and only if $$\exists~\theta_{ij}^\infty=\lim_{t\to\infty} (\theta_i(t)-\theta_j(t)),\quad i,j\in [N].$$* 3. *The state $\Theta_N$ achieves complete synchronization if and only if $$\lim_{t\to\infty} {\mathcal D}({\dot \Theta}_N(t)) = 0.$$* Next, we study basic preliminaries for [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"} on the conservation law and emergent dynamics. 
For this, we set $$\label{B-1-2} {\mathcal C}(t) := \sum_{i\in[N]} \theta_i - t \sum_{i\in[N]} \nu_i, \quad t \geq 0.$$ **Proposition 1** ([@H-K-R; @H-R]). *Let $\Theta_N=(\theta_1,\ldots,\theta_N)$ be a solution to [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"} -- [\[B-1-1\]](#B-1-1){reference-type="eqref" reference="B-1-1"}. Then, the following assertions hold.* 1. *(Balanced law): The functional ${\mathcal C}$ in [\[B-1-2\]](#B-1-2){reference-type="eqref" reference="B-1-2"} is conserved along the flow [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"}. $${\mathcal C}(t) = {\mathcal C}(0), \quad t \geq 0.$$* 2. *(A gradient flow formulation): If we define a potential $P_N = P_N(\Theta_N)$: $$\label{B-1-2-1} P_N(\Theta_N) := -\sum_{l\in[N]} \nu_l \theta_l + \frac{1}{2} \sum_{k,l\in[N]} \kappa_{kl} (1 - \cos (\theta_{k}-\theta_{l} )),$$ system [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"} -- [\[B-1-1\]](#B-1-1){reference-type="eqref" reference="B-1-1"} can be rewritten as a gradient flow: $$\partial_t \Theta_N = -\nabla_{\Theta_N} P_N(\Theta_N), \quad t > 0.$$* 3. *Suppose that the network topology, natural frequencies and initial data satisfy $$\label{B-1-3} \kappa_{ij} = \frac{\kappa}{N}, \quad \sum_{i\in[N]} \nu_i = 0, \quad { R_0 := \Big|\frac{1}{N} \sum_{k \in[N]} e^{{\mathrm i} \theta_{k}^{\text{in}}} \Big| > 0,} \quad \kappa > \frac{1.6}{R_0^2} {\mathcal D}({\mathcal V}_N).$$ Then, there exists an equilibrium state $\Theta_N^{\infty} =(\theta_1^\infty,\ldots,\theta_N^\infty)$ such that $$\lim_{t \to \infty} \| \Theta_N(t) - \Theta_N^{\infty} \|_{\infty} = 0.$$* *Proof.* (i) We sum [\[B-1\]](#B-1){reference-type="eqref" reference="B-1"} over all $i$ and use [\[B-1-1\]](#B-1-1){reference-type="eqref" reference="B-1-1"} to get $$\frac{d}{dt} \sum_{i\in[N]} \theta_i = \sum_{i\in[N]} \nu_{i} + \sum_{i,j\in[N]}\kappa_{ij}\sin (\theta_j-\theta_{i}) = \sum_{i\in[N]} \nu_{i}.$$ This yields the desired conservation law. (ii) For a fixed $i \in [N]$, we rewrite the potential $P_N$ as $$\begin{aligned} \begin{aligned} P_N(\Theta_N) &= -\nu_i \theta_i + \frac{1}{2} \sum_{l\in[N]} \kappa_{il} (1- \cos(\theta_i - \theta_l)) + \frac{1}{2} \sum_{l\in[N]} \kappa_{li} (1- \cos(\theta_l - \theta_i)) \\ &- \sum_{l\in[N]-\{i\}} \nu_l \theta_l + \frac{1}{2} \sum_{k,l\in[N]-\{i\}} \kappa_{kl} (1- \cos(\theta_k - \theta_l)). \end{aligned} \end{aligned}$$ Now, we differentiate the above relation with respect to $\theta_i$ to find $$\begin{aligned} \begin{aligned} \partial_{\theta_i} P_N(\Theta_N) &= -\nu_i + \frac{1}{2} \sum_{l\in[N]} \kappa_{il} \sin(\theta_i - \theta_l) -\frac{1}{2} \sum_{l\in[N]} \kappa_{li} \sin(\theta_l - \theta_i) \\ &= -\nu_i - \sum_{l\in[N]} \Big( \frac{\kappa_{il} + \kappa_{li}}{2} \Big) \sin(\theta_l - \theta_i) \\ &= -\nu_i - \sum_{l\in[N]} \kappa_{il} \sin (\theta_l - \theta_i) \quad \mbox{using the symmetry of $(\kappa_{ij})$} \\ &=-{\dot \theta}_i. \end{aligned} \end{aligned}$$ This yields $${\dot \Theta}_N = -\nabla_{\Theta_N} P_N(\Theta_N).$$ (iii) A detailed argument can be found in [@H-R]; thus, we only sketch the main line of ideas as follows. 
First, we show that the phase configuration is uniformly bounded in the sense that there exists a positive constant $\theta^\infty$ such that $$\sup_{0 \leq t < \infty} \| \Theta_N(t) \|_{\infty} \leq \theta^{\infty}.$$ Then, motivated by the gradient flow approach in [@D-X2], the authors in [@H-L-X; @H-K-R; @H-R] also used the gradient flow formulation (ii) and the analyticity of the potential to show that there exists an equilibrium $\Theta_N^{\infty}$ such that $$\lim_{t \to \infty} \| \Theta_N(t) - \Theta_N^{\infty} \|_{\infty} = 0.$$ ◻ ## Kuramoto model for an infinite ensemble {#sec:2.2} In this subsection, we present several basic properties of the Kuramoto model which concerns the dynamics of a countably infinite number of oscillators, in short the 'infinite Kuramoto model'. Note that for the following simple modification: $$\sum_{j\in[N]}\kappa_{ij}\sin\left(\theta_j-\theta_{i}\right) \quad \Longrightarrow \quad \sum_{j\in\mathbb{N}}\kappa_{ij}\sin\left(\theta_j-\theta_{i}\right),$$ the infinite sum in the R.H.S. of [\[IKM\]](#IKM){reference-type="eqref" reference="IKM"} might not be well-defined, unless we impose some restrictive asymptotic vanishing conditions on the network topology $K=(\kappa_{ij})_{i,j\in\mathbb{N}}$. Once the infinite sum becomes well-defined, we can consider the Cauchy problem to the infinite Kuramoto model: $$\label{B-2} \begin{cases} \displaystyle \dot{\theta}_{i}=\nu_{i}+\sum_{j\in\mathbb{N}}\kappa_{ij}\sin\left(\theta_j-\theta_{i}\right),\quad t>0, \\ \displaystyle \theta_{i}(0)=\theta_{i}^{\text{in}}, \quad \quad i \in {\mathbb N}, \end{cases}$$ where $\Theta^{\text{in}}, \mathcal{V}$ and $K=(\kappa_{ij})$ satisfy $$\label{B-3} \Theta^{\text{in}}\in\ell^p,\quad \mathcal{V}\in \ell^p,\quad K\in \ell^{p,1},\quad \kappa_{ij}\geq 0,\quad \forall~i,j\in\mathbb{N}$$ for some $p\in [1,\infty]$. Unlike in Section [2.1](#sec:2.1){reference-type="ref" reference="sec:2.1"}, we allow an asymmetric network topology $K$ in order to consider the most general case. Then, the following proposition guarantees the well-posedness of [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"} by using the standard Cauchy-Lipschitz theory. **Proposition 2**. *Suppose that the initial configuration, natural frequencies and network topology satisfy [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"}. 
Then, there exists a unique smooth solution $\Theta = \Theta(t) \in {\mathcal C}^1(\mathbb{R}_+; \ell^p)$ to the infinite system [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"}.* *Proof.* First of all, we set $$f_i(\Theta) := \nu_i+\sum_{j\in\mathbb{N}} \kappa_{ij} \sin(\theta_j- \theta_i), \quad i \in {\mathbb N}, \quad {\mathcal F}(\Theta) := (f_1(\Theta),f_2(\Theta), \ldots ).$$ In order to use the standard Cauchy-Lipschitz theory on the Banach space $\ell^{p}$, it suffices to show that for every two solutions $\Theta$ and ${\tilde\Theta}$ to [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"}, we have $$\label{B-5} \| {\mathcal F} \|_{p} \leq \| {\mathcal V} \|_{p} + \| K \|_{p, 1}, \quad \left\| {\mathcal F}(\Theta)- {\mathcal F}(\widetilde{\Theta})\right\|_p \leq 2 \| K \|_{p, 1} \|\Theta - {\tilde \Theta} \|_{p}.$$ $\bullet$ (Derivation of $\eqref{B-5}_1$):  For each $i\in\mathbb{N}$, we have $$\label{B-6} |f_i(\Theta)|=\left|\nu_i+\sum_{j\in\mathbb{N}} \kappa_{ij}\sin(\theta_j-\theta_i) \right|\leq |\nu_i|+ \sum_{j\in\mathbb{N}} \kappa_{ij}.$$ Then by the Minkowski inequality and [\[B-6\]](#B-6){reference-type="eqref" reference="B-6"}, we have $$\| {\mathcal F} \|_{p} \leq \| {\mathcal V} \|_{p} + \| K \|_{p, 1}\quad\mbox{for}~~ 1\leq p\leq\infty.$$ $\bullet$ (Derivation of $\eqref{B-5}_2$): For $1\leq p<\infty$, every $\Theta, {\widetilde \Theta} \in \ell^{p}$ satisfy $$\label{B-7} \begin{aligned} \|\mathcal{F}(\Theta)-\mathcal{F}(\widetilde{\Theta})\|_p^p&=\sum_{i\in\mathbb{N}}|f_i(\Theta)-f_i(\widetilde{\Theta})|^p\\ &=\sum_{i\in\mathbb{N}}\left|\sum_{j\in\mathbb{N}} \kappa_{ij}\left(\sin\left(\theta_{j}-\theta_{i} \right) - \sin ({\tilde \theta}_{j} - {\tilde \theta}_{i})\right)\right|^p \\ &\le\sum_{i\in\mathbb{N}}\left|\sum_{j\in\mathbb{N}} \kappa_{ij}\left|\left(\theta_{j} -\theta_{i} \right)- ({\tilde \theta}_{j} - {\tilde \theta}_{i})\right|\right|^p\\ &\le\sum_{i\in\mathbb{N}}\left|\sum_{j\in\mathbb{N}} \kappa_{ij}|\theta_j-{\tilde \theta}_j|+|\theta_i-{\tilde\theta}_i|\sum_{j =1}^\infty \kappa_{ij}\right|^p\\ &\leq 2^{p-1}\sum_{i\in\mathbb{N}} \left[\Big(\sum_{j\in\mathbb{N}} \kappa_{ij}|\theta_j-{\tilde \theta}_j|\Big)^p+\Big(|\theta_i-{\tilde\theta}_i|\sum_{j\in\mathbb{N}} \kappa_{ij}\Big)^p\right]\\ &\leq 2^{p-1}\sum_{i\in\mathbb{N}}\left[\|(\kappa_{ij})_j\|_q^p\left(\sum_{k\in \mathbb{N}} |\theta_k-\tilde{\theta}_k|^p\right) + |\theta_i-\tilde{\theta}_i|^p\Big(\sum_{j\in\mathbb{N}} \kappa_{ij}\Big)^p \right], \end{aligned}$$ where we used the Hölder inequality for $q=\frac{p}{p-1}$ in the last inequality. If we apply the following relations $$\|X\|_q\leq \|X\|_1,\quad \sum_{i\in\mathbb{N}} |y_iz_i|\leq \Big(\sum_{i\in\mathbb{N}} |y_i|\Big)\Big(\sum_{i\in\mathbb{N}} |z_i|\Big),\quad \forall~X, Y, Z\in \mathbb{R}^{\mathbb{N}}$$ to [\[B-7\]](#B-7){reference-type="eqref" reference="B-7"}, then we have desired estimate: $$\begin{aligned} \|\mathcal{F}(\Theta)-\mathcal{F}(\widetilde{\Theta})\|_p^p&\leq 2^{p-1}\sum_{i\in\mathbb{N}}\left[\|(\kappa_{ij})_j\|_q^p\left(\sum_{k\in\mathbb{N}} |\theta_k-\tilde{\theta}_k|^p\right)+|\theta_i-\tilde{\theta}_i|^p\Big(\sum_{j\in\mathbb{N} } \kappa_{ij}\Big)^p \right]\\ &\leq 2^{p-1} \left[\|K\|_{p,1}^p\|\Theta-\widetilde{\Theta}\|_p^p+\|\Theta-\widetilde{\Theta}\|_p^p\|K\|_{p,1}^p \right]\\ &=2^p\|K\|_{p,1}^p\|\Theta-\widetilde{\Theta}\|_p^p. 
\end{aligned}$$ In addition, we can also obtain the desired estimate for $p=\infty$: $$\begin{aligned} \left\| {\mathcal F}(\Theta)- {\mathcal F}(\widetilde{\Theta})\right\|_\infty &=\sup_{i\in\mathbb{N}}|f_i(\Theta)-f_i(\widetilde{\Theta})|\\ &=\sup_{i\in\mathbb{N}}\left|\sum_{j\in\mathbb{N}} \kappa_{ij}\left(\sin\left(\theta_{j}-\theta_{i} \right) - \sin ({\tilde \theta}_{j} - {\tilde \theta}_{i})\right)\right|\\ &\le\sup_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \kappa_{ij}\left|\left(\theta_{j} -\theta_{i} \right)- ({\tilde \theta}_{j} - {\tilde \theta}_{i})\right|\\ & \le\sup_{i\in\mathbb{N}}\sum_{j\in\mathbb{N}} \kappa_{ij}\left( |\theta_{j} - {\tilde \theta}_{j} |+ |\theta_{i} - {\tilde \theta}_{i} |\right) \\ & \le 2 \Big( \sup_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \kappa_{ij} \Big) \| \Theta - {\widetilde \Theta} \|_\infty \\ &= 2 \| K \|_{\infty, 1} \|\Theta - {\tilde \Theta} \|_{\infty}. \end{aligned}$$ Now, once we have [\[B-5\]](#B-5){reference-type="eqref" reference="B-5"}, the solution to [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"} exists uniquely on some nonempty finite time interval $[0,T]$, and it never blows up in finite time due to the boundedness of the image of ${\mathcal F}$, so that the local solution can be extended to a global solution $\Theta:[0,\infty)\to \ell^p.$ ◻ In the following lemma, we present properties of our infinite model analogous to those of the finite Kuramoto model: Lemma [Lemma 1](#L2.1){reference-type="ref" reference="L2.1"} (1) gives an invariant of our model, and (2) gives the translation-invariance property of the Kuramoto model. Then, in Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"}, we provide two basic sets of estimates to be used in Section [3.1](#sec:3.1){reference-type="ref" reference="sec:3.1"}, [4](#sec:4){reference-type="ref" reference="sec:4"} and [5.2](#sec:5.2){reference-type="ref" reference="sec:5.2"}, and we establish the Lipschitz continuity of several functionals. **Lemma 1**. *Let $p,q\in[1,\infty]$ with $\frac{1}{p}+\frac{1}{q}=1$, and let $\Theta$ be a global $\ell^p$-solution to [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"}. Then, the following assertions hold.* 1. *If the network topology $K=(\kappa_{ij})$ is given by $$\label{B-8} \kappa_{ij}=a_{ij}\kappa_j,\quad \forall~i,j\in\mathbb{N},$$ for some symmetric $A=(a_{ij})\in \ell^{p,p}$ and $(\kappa_1,\kappa_2,\ldots)\in\ell^{q}$, we have $$\frac{d}{dt}\left(\sum_{i\in\mathbb{N}} \kappa_i \theta_i \right)=\sum_{i\in\mathbb{N}} \kappa_i\nu_i.$$* 2. *If we set $${\hat \theta}_i(t) = \theta_i(t)-\nu t, \quad i \in {\mathbb N},\quad t \geq 0,$$ then ${\hat \Theta} := ({\hat \theta}_1,{\hat \theta}_2, \ldots)$ satisfies $$\begin{cases} \displaystyle\dot{\hat{\theta}}_{i}=\nu_i-\nu+\sum_{j\in\mathbb{N}}\kappa_{ij}\sin (\hat{\theta}_{j}-\hat{\theta}_{i}), \quad t>0,\\ \displaystyle\hat{\theta}_i(0)=\theta_i^{\text{in}}\in \mathbb{R},\quad \forall~ i\in \mathbb{N}. 
\end{cases}$$* *Proof.* (1) First, we multiply $\kappa_i$ to $\eqref{B-2}_1$ to obtain $$\label{B-9} \frac{d}{dt} \Big( \kappa_i \theta_{i} \Big) = \kappa_i \nu_{i}+\sum_{j\in\mathbb{N}} \kappa_i \kappa_{ij}\sin\left(\theta_{j}-\theta_{i}\right).$$ Then, we take a summation of [\[B-9\]](#B-9){reference-type="eqref" reference="B-9"} over all $i$ and use the exchange symmetry $i \longleftrightarrow j$ to get $$\frac{d}{dt} \sum_{i\in\mathbb{N}} \kappa_i \theta_{i} = \sum_{i\in\mathbb{N}} \left(\kappa_i \nu_{i}+\sum_{j\in\mathbb{N}} \kappa_i \kappa_{ij}\sin\left(\theta_{j}-\theta_{i}\right) \right) = \sum_{i\in\mathbb{N}} \left( \kappa_i \nu_{i} - \sum_{j\in\mathbb{N}} \kappa_j \kappa_{ji}\sin\left(\theta_{j}-\theta_{i}\right)\right).$$ Therefore, we employ [\[B-8\]](#B-8){reference-type="eqref" reference="B-8"} to get the desired balanced law. (2) Since the second assertion is obvious, we omit its proof. ◻ **Remark 1**. *(1) If we set $p=1$ and $\kappa_{j}\equiv 1$, then the network topology $K$ satisfying [\[B-8\]](#B-8){reference-type="eqref" reference="B-8"} is a symmetric summable infinite matrix (see Section [3](#sec:3){reference-type="ref" reference="sec:3"}): $$K=(\kappa_{ij})\in\ell^{1,1},\quad \kappa_{ij}=\kappa_{ji},\quad i,j\in\mathbb{N}.$$ (2) If we set $p=\infty$ and $a_{ij}\equiv 1$, then the network topology $K$ satisfying [\[B-8\]](#B-8){reference-type="eqref" reference="B-8"} is a sender network (see Section [5](#sec:5){reference-type="ref" reference="sec:5"}): $$(\kappa_1,\kappa_2,\ldots)\in\ell^1,\quad \kappa_{ij}=\kappa_j,\quad i,j\in\mathbb{N}.$$* **Lemma 2**. *Let $\Theta = \Theta(t)$ be a global $\ell^\infty$-solution to [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"}. Then, the following assertions hold.* 1. *$\dot{\Theta}$ and $\ddot{\Theta}$ are uniformly bounded: for every $i\in\mathbb{N}$, $$\label{B-10} \begin{aligned} &\sup_{0 \leq t < \infty} |\dot{\theta}_i(t)| \leq \|\mathcal{V}\|_\infty+ \| K \|_{\infty,1} \leq \|\mathcal{V}\|_p+ \| K \|_{p,1},\\ &\sup_{0 \leq t < \infty} |\ddot{\theta}_i(t)| \leq 2 \| K \|_{\infty, 1} (\|\mathcal{V}\|_\infty + \| K \|_{\infty, 1})\leq 2 \| K \|_{p, 1} (\|\mathcal{V}\|_p + \| K \|_{p, 1}). \end{aligned}$$* 2. *Extremals and phase-diameter functionals $${\mathcal D}(\Theta) := \sup_{i,j\in \mathbb{N}} |\theta_i - \theta_j|, \quad \sup_{i\in \mathbb{N}}\theta_i \quad \mbox{and} \quad \inf_{i \in \mathbb{N} }\theta_i$$ are Lipschitz continuous in time $t$.* *Proof.* (1) The first estimate follows from [\[B-6\]](#B-6){reference-type="eqref" reference="B-6"}. Now, we differentiate $\eqref{B-2}_1$ with respect to $t$ and use $\eqref{B-10}_1$ to obtain $$\begin{aligned} \begin{aligned} \label{B-11} \left|\ddot{\theta}_{i}\right| &=\left|\sum_{j\in\mathbb{N}} \kappa_{ij} (\dot{\theta}_{i}-\dot{\theta}_{j} )\cos\left(\theta_{i}-\theta_{j}\right)\right|\\ & \le2\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+ \| K \|_{\infty, 1} \right)\sum_{j\in\mathbb{N}} \kappa_{ij} \\ &\le 2 \| K \|_{\infty, 1} \left(\left\Vert \mathcal{V}\right\Vert _{\infty}+ \| K \|_{\infty, 1} \right)\\ &\leq 2 \| K \|_{p, 1} (\|\mathcal{V}\|_p + \| K \|_{p, 1}). 
\end{aligned} \end{aligned}$$ (2) We first consider the Lipschitz continuity of $$t\mapsto \sup_{i\in \mathbb{N}} \theta_i(t).$$ For every $s < t$, we use Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} (1) to get $$\theta_i(t) \leq \theta_i(s) + (\| {\mathcal V} \|_\infty + \| K \|_{\infty, 1}) (t-s) \leq \sup_{i\in \mathbb{N}} \theta_i(s) + (\| {\mathcal V} \|_\infty + \| K \|_{\infty, 1}) (t-s).$$ Then, we take the supremum of the L.H.S. of the above relation to obtain $$\label{B-11-1} \sup_{i\in \mathbb{N}} \theta_i(t) \leq \sup_{i\in \mathbb{N}} \theta_i(s) + (\| {\mathcal V} \|_\infty + \| K \|_{\infty, 1}) (t-s),$$ and a similar argument also yields $$\label{B-11-2} \sup_{i\in \mathbb{N}} \theta_i(t) \geq \sup_{i\in \mathbb{N}} \theta_i(s) - (\| {\mathcal V} \|_\infty + \| K \|_{\infty, 1}) (t-s).$$ Therefore, we combine [\[B-11-1\]](#B-11-1){reference-type="eqref" reference="B-11-1"} and [\[B-11-2\]](#B-11-2){reference-type="eqref" reference="B-11-2"} to obtain $$\Big | \sup_{i\in \mathbb{N}} \theta_i(t) - \sup_{i\in \mathbb{N}} \theta_i(s) \Big| \leq (\| {\mathcal V} \|_\infty + \| K \|_{\infty, 1}) |t-s|,\quad \forall~t,s\geq 0.$$ In addtion, the Lipschitz continuity of $$t\mapsto \inf_{i\in \mathbb{N}} \theta_i(t)$$ can be also shown in a similar manner. Finally, the phase-diameter $\mathcal{D}(\Theta)$, which can be given by the difference between these two extremals, is also Lipschitz. ◻ **Remark 2**. *Note that the relation [\[B-11\]](#B-11){reference-type="eqref" reference="B-11"} yields $$\label{B-12} \sup_{0 \leq t < \infty} \left|\ddot{\theta}_{i}(t) \right| \leq 2\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+ \| K \|_{\infty, 1} \right)\sum_{j\in\mathbb{N}}\kappa_{ij},\quad\forall~ i\in \mathbb{N}.$$* **Lemma 3**. *Let $\Theta = \Theta(t)$ be a global solution to [\[B-2\]](#B-2){reference-type="eqref" reference="B-2"} -- [\[B-3\]](#B-3){reference-type="eqref" reference="B-3"}. Then for every $i, j \in {\mathbb N}$, we have $$\left|\frac{d}{dt}\left({\theta}_{i}-{\theta}_{j}\right)\right|\leq \mathcal{D}(\mathcal{V})+2 \| K \|_{\infty, 1},\quad\left|\frac{d^{2}}{dt^{2}}\left({\theta}_{i}-{\theta}_{j}\right)\right|\leq2 \| K \|_{\infty, 1} \left({\mathcal D}\left({\mathcal V}\right)+2 \| K \|_{\infty, 1} \right).$$* *Proof.* For every $i,j\in\mathbb{N}$, the first and the second derivatives of $\theta_i-\theta_j$ are given by $$\label{B-13} \begin{aligned} \frac{d}{dt}\left({\theta}_{i}-{\theta}_{j}\right) &=\nu_{i}-\nu_{j}-\sum_{k\in\mathbb{N}} \left[\kappa_{ik}\sin(\theta_{i}-\theta_{k})+\kappa_{jk}\sin(\theta_{k}-\theta_{j})\right],\\ \frac{d^{2}}{dt^{2}}\left({\theta}_{i}-{\theta}_{j}\right) &=-\sum_{k\in\mathbb{N}} \left[\kappa_{ik}\cos(\theta_{i}-\theta_{k})\frac{d}{dt}\left({\theta}_{i}-{\theta}_{k}\right)+\kappa_{jk}\cos(\theta_{k}-\theta_{j})\frac{d}{dt}\left({\theta}_{k}-{\theta}_{j}\right)\right]. 
\end{aligned}$$ Then, we have the boundedness of $\frac{d}{dt}\left({\theta}_{i}-{\theta}_{j}\right)$ from the following inequalities: $$\label{B-14} \Big| \frac{d}{dt}\left({\theta}_{i}-{\theta}_{j}\right) \Big| \leq {\mathcal D}({\mathcal V}) + \sum_{k\in\mathbb{N}} (\kappa_{ik} + \kappa_{jk}) \leq {\mathcal D}({\mathcal V}) + 2 \| K \|_{\infty,1}.$$ Finally, we combine [\[B-13\]](#B-13){reference-type="eqref" reference="B-13"} and [\[B-14\]](#B-14){reference-type="eqref" reference="B-14"} to obtain the boundedness of $\frac{d^2}{dt^2}\left({\theta}_{i}-{\theta}_{j}\right)$: $$\left|\frac{d^2}{dt^2}\left({\theta}_{i}-{\theta}_{j}\right)\right| \leq \sum_{k\in\mathbb{N}} \Big[ \kappa_{ik} \Big| \frac{d}{dt}\left({\theta}_{i}-{\theta}_{k}\right) \Big| + \kappa_{jk} \Big| \frac{d}{dt}\left({\theta}_{k}-{\theta}_{j}\right) \Big| \Big] \leq 2 \| K \|_{\infty, 1} \left(\mathcal{D}\left({\mathcal V}\right)+2 \|K\|_{\infty,1}\right).$$ ◻ # Emergent dynamics of a homogeneous ensemble {#sec:3} In this section, we present the emergent dynamics of the infinite Kuramoto model with a homogeneous ensemble consisting of oscillators with identical natural frequencies: $$\quad\nu_{i}\equiv \nu,\quad \forall~i\in \mathbb{N}.$$ From Lemma [Lemma 1](#L2.1){reference-type="ref" reference="L2.1"} (2), we may assume that $\nu_i \equiv 0$ without loss of generality. In other words, we consider the phase configuration $\Theta$ satisfying $$\begin{cases} \label{C-1} \displaystyle\dot{\theta}_{i}=\sum_{j\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_{j}-\theta_{i}\right), \quad t>0,\\ \displaystyle\theta_i(0)=\theta_i^{\text{in}}\in \mathbb{R},\quad \forall~ i\in \mathbb{N},\\ \Theta^{\text{in}}=(\theta_1^{\text{in}},\theta_2^{\text{in}},\ldots)\in\ell^p,\quad K=(\kappa_{ij})\in\ell^{p,1},\quad p\in[1,\infty]. \end{cases}$$ Since $\ell^1\subset \ell^2\subset\cdots\subset \ell^\infty$ and $\ell^{1,1}\subset \ell^{2,1}\subset\cdots\subset \ell^{\infty,1}$, all results for an $\ell^p$-solution $\Theta$ also apply to $\ell^q$-solutions with $q<p$. In the sequel, we study the dynamics of $\ell^\infty$-solutions and $\ell^p$-solutions ($p<\infty$) to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} and provide results corresponding to each part of Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}. ## $\ell^\infty$-solution: complete synchronization {#sec:3.1} In this subsection, we study the complete synchronization of the homogeneous ensemble with $\ell^\infty$ initial data. As mentioned above, all results in this subsection also apply to $\ell^p$-solutions with $p<\infty$. ### Dynamics of phase diameter At a heuristic level, it is natural to expect that ${\mathcal D}(\Theta(t))$ is '*non-increasing*' in $t$ whenever ${\mathcal D}(\Theta(t))<\pi$, since the oscillators near the extremal phases $${\overline \theta}(t) :=\sup_{i\in \mathbb{N}}\theta_i(t),\quad \underline{\theta}(t) :=\inf_{i\in\mathbb{N}}\theta_i(t),$$ are pulled toward the region in which the majority of the group is located. In fact, for the finite Kuramoto ensemble, it is easy to check that ${\overline \theta}$ and ${\underline \theta}$ are nonincreasing and nondecreasing, respectively, and their difference converges to zero exponentially, so that Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"} (3) holds. For the infinite Kuramoto ensemble, however, such an argument has to be refined.
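Before turning to the rigorous statements, this heuristic can be explored numerically. The following minimal Python sketch integrates a finite truncation of [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} and records the phase diameter; the truncation level, the coupling $\kappa_{ij}=2^{-(i+j)}$ and the random initial data are illustrative assumptions made only for this experiment, not choices taken from the analysis below.

```python
import numpy as np

# Finite truncation of the homogeneous model (C-1):
#   d/dt theta_i = sum_j kappa_ij * sin(theta_j - theta_i),  1 <= i, j <= N.
# The coupling kappa_ij = 2^{-(i+j)} (positive, row-summable) and the initial
# phases of diameter < pi are illustrative choices for this sketch only.
N = 50
idx = np.arange(1, N + 1)
K = 2.0 ** (-(idx[:, None] + idx[None, :]))           # kappa_ij = 2^{-(i+j)}
rng = np.random.default_rng(0)
theta = rng.uniform(-0.45 * np.pi, 0.45 * np.pi, N)   # D(Theta_in) < pi

def rhs(th):
    # vector field of (C-1): entry i equals sum_j kappa_ij * sin(theta_j - theta_i)
    return np.sum(K * np.sin(th[None, :] - th[:, None]), axis=1)

dt, steps = 1e-2, 20000
diam = np.empty(steps)
for n in range(steps):
    diam[n] = theta.max() - theta.min()               # phase diameter D(Theta(t))
    theta = theta + dt * rhs(theta + 0.5 * dt * rhs(theta))   # explicit midpoint step

print("initial diameter:", diam[0])
print("final diameter:  ", diam[-1])
print("non-increasing along the run:", bool(np.all(np.diff(diam) <= 1e-9)))
```

In such runs the diameter is observed to be non-increasing, in agreement with the heuristic; of course, this experiment does not replace the proofs given below.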
The following lemma shows that such a heuristic argument holds when the interaction network $K = (\kappa_{ij})$ satisfies some structural condition uniformly in $i$.\ Throughout the paper, we refer to the following frameworks to guarantee the synchronization behavior of the infinite Kuramoto model: - (${\mathcal F}1$): The initial phase-diameter is smaller than $\pi$: the initial phase configuration $\Theta^{\text {in}}$ satisfies $$\mathcal{D}(\Theta^{\text{in}})<\pi.$$ - (${\mathcal F}2$): There exists a sequence $\boldsymbol{\tilde\kappa} := \{\tilde\kappa_j\}_{j\in \mathbb{N}}\in \ell^1$ such that $$\frac{\kappa_{ij}}{\sum_{k\in\mathbb{N}}\kappa_{ik}}>\tilde\kappa_j>0,$$ for all $i,j\in \mathbb{N}.$ **Lemma 4**. *Suppose that the network topology $K=(\kappa_{ij})$ and the initial data $\Theta^{\text{in}}$ satisfy $({\mathcal F}1)-({\mathcal F}2)$, and let $\Theta$ be a solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with $p=\infty$. If the initial phase diameter $\mathcal{D}(\Theta^{\text{in}})$ is nonzero, there exist two positive constants $\delta$ and $\varepsilon$ such that* - *For every index $i \in {\mathbb N}$ satisfying $\theta_i^{\text{in}}\leq {\overline \theta}(0) - \varepsilon$, one has $$\theta_i(t)< {\overline \theta}(0),\quad \forall~t\in(0, \delta).$$* - *For every index $i \in {\mathbb N}$ satisfying $\theta_i^{\text{in}}> {\overline \theta}(0)-\varepsilon$, one has $$\dot{\theta}_i(t) < 0,\quad \forall~t\in(0,\delta).$$* *Proof.* Since the proof is lengthy and technical, we postpone it to Appendix [8](#App-B){reference-type="ref" reference="App-B"}. ◻ Note that the natural frequency $\mathcal{V}$ and the initial phase diameter $\mathcal{D}(\Theta_N^{\text{in}})$ satisfy the same conditions as in Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"} (3); only the positivity condition $\kappa_{ij}>0$ has been modified to $(\mathcal{F}2)$. **Remark 3**. *Below, we provide several remarks on the framework $(\mathcal{F}1)-(\mathcal{F}2)$.* 1. *An interaction network $(\kappa_{ij})$ satisfying $(\mathcal{F}2)$ can be easily constructed from a sequence in $\ell^1$ whose components are all positive real numbers. More precisely, for a positive sequence $\{a_{i}\}_{i\in\mathbb{N}} \in \ell^1$, we set $$\kappa_{ij}:= a_i a_j, \quad \forall~i, j \in {\mathbb N}.$$ Then, the framework $(\mathcal{F}2)$ holds true by the following relation: $$\displaystyle \frac{\kappa_{ij}}{\sum_{k\in\mathbb{N}} \kappa_{ik}} = \frac{a_j}{\sum_{k\in\mathbb{N}} a_k} > \frac{a_j}{\sum_{k\in\mathbb{N}} a_k + 1 } =: \tilde\kappa_j.$$* 2. *The sequence $\boldsymbol{\tilde \kappa}=\{\tilde\kappa_j\}_{j\in\mathbb{N}}$ in $(\mathcal{F}2)$ is always contained in $\ell^1$. In fact, its $\ell^1$-norm is always smaller than $1$.* 3. *For the trivial initial data with ${\mathcal D}(\Theta^{\text{in}}) = 0$, we have $$\dot{\theta}_i(t) = 0, \quad \forall~i\in\mathbb{N},\quad t > 0.$$ Thus, the solution $\Theta$ is a steady state in which all phases are concentrated at a single point.* As a consequence of Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"}, one can see that the phase diameter $\mathcal{D}(\Theta)$ is also nonincreasing in $t$ as in Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}, though we do not have any estimate on the decay rate yet. **Corollary 1**.
*Suppose that the network topology $K=(\kappa_{ij})$ and the initial data $\Theta^{\text{in}}$ satisfy $({\mathcal F}1)-({\mathcal F}2)$, and let $\Theta=\Theta(t)$ be a solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with $p=\infty$. Then, the following assertions hold:* 1. *The phase-diameter ${\mathcal D}(\Theta(t))$ is non-increasing in $t$: $${\mathcal D}(\Theta(t))\leq {\mathcal D}(\Theta^{\text{in}})<\pi,\quad \forall~t > 0.$$* 2. *$\Theta$ is a phase-locked state if and only if $$\mathcal{D}(\Theta)\equiv 0.$$* *Proof.* (1) We split the proof into two cases: $${\mathcal D}(\Theta^{\text {in}}) = 0, \quad 0 < {\mathcal D}(\Theta^{\text{in}}) < \pi.$$ $\diamond$ Case A $( {\mathcal D}(\Theta^{\text{in}}) = 0)$: In this case, as discussed in Remark [Remark 3](#R3.1){reference-type="ref" reference="R3.1"}(3), we have $${\mathcal D}(\Theta(t)) = {\mathcal D}(\Theta^{\text{in}}) = 0, \quad t > 0,$$ which yields the desired result. $\diamond$ Case B $(0 < {\mathcal D}(\Theta^{\text{in}}) < \pi)$: From Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"}(2), the set $$\left\{t\geq 0: \mathcal{D}(\Theta(t))\leq \mathcal{D}(\Theta^{\text{in}}) \right\}$$ is closed in $[0,\infty)$. On the other hand, Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} implies that it is also a nonempty open subset of $[0,\infty)$. Therefore, we have $$\left\{t\geq 0: \mathcal{D}(\Theta(t))\leq \mathcal{D}(\Theta^{\text{in}}) \right\}=[0,\infty),$$ which is our desired result.\ (2) It is sufficient to prove the 'only if' part. If $\mathcal{D}(\Theta(t_0))>0$, then Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} can be applied, so that there exists a neighborhood of $\overline{\theta}(t_0)$ such that every $\theta_i$ in the neighborhood decreases strictly. Similarly, there exists a neighborhood of $\underline{\theta}(t_0)$ such that every $\theta_j$ in the neighborhood increases strictly. This contradicts the phase-locked assumption, since a phase-locked state $\Theta$ must satisfy $$\dot{\theta}_i-\dot{\theta}_j=\frac{d}{dt}(\theta_i-\theta_j)=0, \quad i,j\in\mathbb{N}.$$ ◻ Note that Corollary [Corollary 1](#C3.1){reference-type="ref" reference="C3.1"} does not guarantee that the phase diameter is strictly decreasing. If $\overline{\theta}(t_0)$ is not a limit point of $\{\theta_i(t_0)\}_{i\in\mathbb{N}}$, then there exists a neighborhood $U$ of $\overline{\theta}(t_0)$ which contains only finitely many $\theta_i$'s, and the supremum $\overline{\theta}(t)$ is determined by those finitely many $\theta_i$'s for all $t$ sufficiently close to $t_0$. Therefore, Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} implies that the supremum $\overline{\theta}$ decreases strictly at time $t=t_0$ if $\overline{\theta}(t_0)$ is not a limit point of $\{\theta_i(t_0)\}_{i\in\mathbb{N}}$. However, if both $\overline{\theta}(t_0)$ and $\underline{\theta}(t_0)$ are limit points of $\{\theta_i(t_0)\}_{i\in\mathbb{N}}$, Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} does not imply that the phase diameter is strictly decreasing at $t=t_0$. In fact, we can construct a solution $\Theta$ whose phase-diameter is nondecreasing in time, even if the framework $(\mathcal{F}1)-(\mathcal{F}2)$ is satisfied. **Lemma 5**.
*Suppose there are two increasing sequences $\{i_n\}_{n\in \mathbb{N}}$ and $\{j_n\}_{n\in \mathbb{N}}$ of $\mathbb{N}$ such that $$\label{3-2} \lim_{n\to\infty}\sum_{k\in\mathbb{N}} \kappa_{i_nk}=0,\quad \lim_{n\to\infty}\sum_{k\in\mathbb{N}} \kappa_{j_nk}=0,\quad \lim_{n\to\infty}\theta_{i_{n}}^{\text{in}}=\sup_{k \in {\mathbb N}}\theta_{k}^{\text{in}},\quad\lim_{n \to\infty}\theta_{j_{n}}^{\text{in}}=\inf_{k \in {\mathbb N}}\theta_{k}^{\text{in}},$$ and let $\Theta=(\theta_1,\theta_2,\ldots)$ be a solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with $p=\infty$. Then, the phase-diameter ${\mathcal D}(\Theta)$ is nondecreasing along [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"}.* *Proof.* From Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"}, one has $$\sup_{0 \leq t < \infty} |\dot{\theta}_{i}(t)|\leq\sum_{k\in\mathbb{N}} \kappa_{ik},\quad \left| \theta_{i}(t) - \theta_{i}^{\text{in}} \right|\leq t\sum_{k\in\mathbb{N}} \kappa_{ik},\quad \forall~i\in \mathbb{N}.$$ Then, we use the triangle inequality and the above relations to obtain $$\begin{aligned} \begin{aligned} \label{C-1-3-2} \left|\theta_{i}\left(t\right)-\theta_{j}\left(t\right)\right| &\geq\left|\theta_{i}^{\text{in}}-\theta_{j}^{\text{in}}\right|-\left|\theta_{i}^{\text{in}}-\theta_{i}\left(t\right)\right|-\left|\theta_{j}^{\text{in}}-\theta_{j}\left(t\right)\right| \\ &\geq\left|\theta_{i}^{\text{in}}-\theta_{j}^{\text{in}}\right|-\left(\sum_{k\in\mathbb{N}} \kappa_{ik}+\sum_{k\in\mathbb{N}} \kappa_{jk}\right)t. \end{aligned} \end{aligned}$$ On the other hand, we use the first two conditions in [\[3-2\]](#3-2){reference-type="eqref" reference="3-2"} to see that for every $\varepsilon_{1}>0$, there exists a natural number $N=N\left(\varepsilon_{1}\right)\in\mathbb{N}$ such that $$\label{C-1-3-3} n>N \quad \Longrightarrow \quad \sum_{k\in\mathbb{N}} \kappa_{i_nk}<\varepsilon_{1} \quad \mbox{and} \quad \sum_{k\in\mathbb{N}} \kappa_{j_nk}<\varepsilon_{1}.$$ For every $\varepsilon_{2}>0$, one can also find $M=M\left(\varepsilon_{2}\right)\in\mathbb{N}$ such that for $n>M$, $$\label{C-1-3-4} \theta_{i_{n}}>\overline{\theta}-\varepsilon_{2} \quad \mbox{and} \quad \theta_{j_{n}}<\underline{\theta}+\varepsilon_{2}.$$ Then, by using $\eqref{C-1-3-3}-\eqref{C-1-3-4}$ to the relation [\[C-1-3-2\]](#C-1-3-2){reference-type="eqref" reference="C-1-3-2"} with the index pair $(i_n, j_n)$ with $n \geq N,M$, we have $$\begin{aligned} {\mathcal D}\left(\Theta\left(t\right)\right) & =\sup_{m,n}\left|\theta_{m}\left(t\right)-\theta_{n}\left(t\right)\right| \ge\left|\theta_{i_{n}}^{\text{in}}-\theta_{j_{n}}^{\text{in}}\right|-\left(\sum_{k\in\mathbb{N}} \kappa_{i_nk}+\sum_{k\in\mathbb{N}} \kappa_{j_nk}\right)t\\ & \ge {\mathcal D}(\Theta^{\text{in}} )-2\varepsilon_{1}t-2\varepsilon_{2}. \end{aligned}$$ Since $\varepsilon_{1}$ and $\varepsilon_{2}$ can be arbitrary positive numbers, we can take $\varepsilon_{1},\varepsilon_{2} \to 0$ for each fixed $t$ to obtain the desired result. Therefore, we have $${\mathcal D}\left(\Theta\left(t\right)\right) \geq {\mathcal D}(\Theta^{\text{in}} ), \quad t \geq 0.$$ ◻ **Remark 4**. *Below, we provide network topology and initial data satisfying a set of relations in $(\mathcal{F}1)-(\mathcal{F}2)$ and [\[3-2\]](#3-2){reference-type="eqref" reference="3-2"}. 
More precisely, we set $$\kappa_{ij}=3^{-\left(i+j\right)} \quad \mbox{and} \quad \theta_{i}^{\text{in}}=\left(-1\right)^{i}\pi/3, \quad i, j \in {\mathbb N}.$$ Then, one has $$\sup_{0 \leq t < \infty} \left|\dot{\theta}_{i}(t) \right|\le\sum_{j\in\mathbb{N}} \kappa_{ij}=\frac{1}{2\cdot3^{i-1}},$$ which yields $$\left|\theta_{i}^{\text{in}}-\theta_{i}(t) \right|\le\frac{t}{2\cdot3^{i-1}}, \quad t \geq 0.$$ Therefore, we have $$\left|\theta_{i}\left(t\right)-\theta_{j}\left(t\right)\right| \ge\left|\theta_{i}^{\text{in}}-\theta_{j}^{\text{in}}\right|-\left|\theta_{i}^{\text{in}}-\theta_{i}\left(t\right)\right|-\left|\theta_{j}^{\text{in}}-\theta_{j}\left(t\right)\right| \ge\left|\theta_{i}^{\text{in}}-\theta_{j}^{\text{in}}\right|-\frac{3t}{2}\left(\frac{1}{3^{i}}+\frac{1}{3^{j}}\right).$$ This gives $$\mathcal{D}\left(\Theta(t)\right)\ge\left|\theta_{i}^{\text{in}}-\theta_{j}^{\text{in}}\right|-\frac{3t}{2}\left(\frac{1}{3^{i}}+\frac{1}{3^{j}}\right),\quad i,j\in\mathbb{N}.$$ By letting $i=2k+1$ and $j=2k$, we obtain $$\begin{aligned} \mathcal{D}\left(\Theta(t)\right) & \ge\left|\theta_{2k+1}^{\text{in}}-\theta_{2k}^{\text{in}}\right|-\frac{3t}{2}\left(\frac{1}{3^{2k+1}}+\frac{1}{3^{2k}}\right)=\frac{2\pi}{3}-\frac{2t}{3^{2k}},\quad k\in\mathbb{N}. \end{aligned}$$ Since $\mathcal{D}\left(\Theta^{\text{in}}\right)=\frac{2\pi}{3}$, we have $\mathcal{D}\left(\Theta(t)\right)\ge\frac{2\pi}{3}=\mathcal{D}\left(\Theta^{\text{in}}\right)$.* Combining the results obtained so far, we can characterize a sufficient framework under which the phase-diameter ${\mathcal D}(\Theta(t))$ remains constant in $t$. **Corollary 2**. *Suppose that the network topology and the initial data satisfy $(\mathcal{F}1)-(\mathcal{F}2)$ and [\[3-2\]](#3-2){reference-type="eqref" reference="3-2"}, and let $\Theta$ be a solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with $p=\infty$. Then, the phase-diameter of the configuration $\Theta$ is constant in time: $${\mathcal D}(\Theta(t)) = {\mathcal D}(\Theta^{\text{in}}), \quad t \geq 0.$$* In this counterintuitive example, every particle moves away from the boundary, but particles ever closer to the boundary keep appearing, with slower and slower speeds. Macroscopically, it looks as if there is a fixed boundary that continuously emits new particles whose velocities are slower the later they are emitted.\ This is a unique feature of the countable Kuramoto model compared to the original Kuramoto model with finitely many particles. A sufficient framework leading to the exponential convergence of phases for the homogeneous ensemble, as in Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}(3), will be presented at the end of Section [4](#sec:4){reference-type="ref" reference="sec:4"}.\ ### Lyapunov functional Now, we will analyze the dynamics of [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with the following symmetric summable network topology, which is the first case of Remark [Remark 1](#R2.1){reference-type="ref" reference="R2.1"}: $$\label{C-2-1} K=(\kappa_{ij})\in\ell^{1,1},\quad \kappa_{ij}= \kappa_{ji} \geq 0,\quad i,j\in\mathbb{N}.$$ Since $\ell^{1,1}\subset \ell^{p,1}$ for all $p\in [1,\infty]$, one can construct an $\ell^p$-solution by simply taking $\ell^p$ initial data $\Theta^{\text{in}}$ under [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"}.
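As a concrete instance of [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"}, one may take the product-type coupling of Remark [Remark 3](#R3.1){reference-type="ref" reference="R3.1"} (1). The short Python sketch below is only an illustration: the sequence $a_i=2^{-i}$ and the finite truncation are assumptions of the sketch, not of the analysis. It checks that such a $K$ is symmetric, lies in $\ell^{1,1}$, and also satisfies $(\mathcal{F}2)$ on the truncation.

```python
import numpy as np

# Product-type coupling from Remark 3 (1): kappa_ij = a_i * a_j with (a_i) in l^1.
# Such a K is symmetric and summable, hence an instance of (C-2-1).
# The choice a_i = 2^{-i} and the truncation at N terms are illustrative only.
N = 200
a = 2.0 ** (-np.arange(1, N + 1))           # a_i = 2^{-i}, positive and summable
K = np.outer(a, a)                          # kappa_ij = a_i * a_j

print("symmetric:", np.allclose(K, K.T))
print("||K||_{1,1} (truncated):", K.sum(), "  vs  (sum_i a_i)^2 =", a.sum() ** 2)

# Framework (F2): kappa_ij / sum_k kappa_ik = a_j / sum_k a_k, independent of i,
# and strictly larger than tilde_kappa_j := a_j / (sum_k a_k + 1), which is in l^1.
row_normalized = K / K.sum(axis=1, keepdims=True)
tilde_kappa = a / (a.sum() + 1.0)
print("(F2) holds on the truncation:", bool(np.all(row_normalized > tilde_kappa)))
```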
In addition, under the condition [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"}, every $\ell^\infty$-solution $\Theta$ to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} satisfy $$\dot{\Theta}(t)\in\ell^1, \quad t\geq 0,$$ even when $\Theta$ itself is not contained in $\ell^1$. Note that the condition $\kappa_{ij}=\kappa_{ji}$ also makes the finite Kuramoto model a gradient flow (see Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}). **Theorem 1**. *Suppose that the network topology $(\kappa_{ij})$ satisfies [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"}, and let $\Theta$ be a solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} with $p=\infty$. Then, we have $$\lim_{t \to \infty} \|\dot{\Theta}(t)\|_2 = 0.$$* *Proof.* We will apply a Lyapunov functional approach. $\bullet$ Step A: First, we suggest the following function as the Lyapunov functional to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"}: $$\label{C-3} P(\Theta) = \frac{1}{2}\sum_{i,j\in\mathbb{N}} \kappa_{ij} (1-\cos(\theta_i - \theta_j)) \geq 0.$$ Then, we claim: $$\begin{aligned} \begin{aligned} \label{C-4} & \frac{d}{dt}P(\Theta) = -\sum_{i\in\mathbb{N}} |\dot{\theta}_{i}|^{2}=-\|\dot{\Theta}\|_2^2, \\ & \left|\frac{d}{dt}P(\Theta(t))\right| \leq \sum_{i\in\mathbb{N}} \left(\sum_{j\in\mathbb{N}} \kappa_{ij}\right)^{2}= \|K\|_{2,1}^2\leq \|K\|_{1,1}^2, \\ & \left| \frac{d^2}{dt^2}P(\Theta) \right| \leq 2 \|K \|^2_{\infty, 1} \|K\|_{1,1} \leq 2\|K\|_{1,1}^3. \end{aligned} \end{aligned}$$ Below, we derive the above estimates in [\[C-4\]](#C-4){reference-type="eqref" reference="C-4"} one by one. (i) We differentiate [\[C-3\]](#C-3){reference-type="eqref" reference="C-3"} with respect to $t$ and use [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} to find $$\begin{aligned} \begin{aligned} \label{C-5} \frac{d}{dt}P(\Theta) &=\frac{1}{2}\sum_{i,j\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\left(\dot{\theta}_{i}-\dot{\theta}_{j}\right)\\ & =\frac{1}{2}\sum_{i,j,k\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\kappa_{ik}\sin\left(\theta_{k}-\theta_{i}\right) \\ &\quad -\frac{1}{2}\sum_{i,j,k\in\mathbb{N}} \kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\kappa_{jk}\sin\left(\theta_{k}-\theta_j\right)\\ & \overset{i\leftrightarrow j}{=}-\sum_{i,j,k\in\mathbb{N}}\kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\kappa_{ik}\sin\left(\theta_{i}-\theta_{k}\right) \\ &=-\sum_{i\in\mathbb{N}} \left(\sum_{j\in\mathbb{N}}\kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\right)^{2} =-\sum_{i\in\mathbb{N}} |\dot{\theta}_{i} |^{2} \leq 0. 
\end{aligned} \end{aligned}$$ This also yields $$\Big| \frac{d}{dt}P(\Theta) \Big| = \Bigg| \sum_{i\in\mathbb{N}} \left(\sum_{j\in\mathbb{N}}\kappa_{ij}\sin\left(\theta_{i}-\theta_j\right)\right)^{2} \Bigg| \leq \sum_{i\in\mathbb{N}} \left(\sum_{j\in\mathbb{N}}\kappa_{ij} \right)^{2}=\|K\|_{2,1}^2.$$ In addition, we differentiate [\[C-5\]](#C-5){reference-type="eqref" reference="C-5"} and apply Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"}, Remark [Remark 2](#R2.2){reference-type="ref" reference="R2.2"} and [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"} to get $$\Big| \frac{d^2}{dt^2}P(\Theta) \Big| \leq 2 \sum_{i\in\mathbb{N}} |{\dot \theta}_{i} | |{\ddot \theta}_i| \leq { 2} \|K \|_{\infty, 1} \sum_{i\in\mathbb{N}} |{\ddot \theta}_i | \leq 2 \|K \|^2_{\infty, 1} \sum_{i,j\in\mathbb{N}} \kappa_{ij} =2\|K\|_{\infty,1}^2\|K\|_{1,1}.$$ $\bullet$ Step B: Next, we will show that $P$ satisfies all the conditions for Barbalat's lemma, i.e., $$\label{C-6} \exists~\lim_{t \to \infty} P(\Theta(t)), \quad \frac{dP(\Theta)}{dt}~\mbox{is uniformly continuous}.$$ The convergence of $P(\Theta(t))$ follows from [\[C-3\]](#C-3){reference-type="eqref" reference="C-3"} and $\eqref{C-4}_1$: $P(\Theta(t))$ is nonincreasing and bounded from below, and hence converges as $t \to \infty$. In addition, the uniform continuity of $\frac{dP}{dt}$ is a direct consequence of the boundedness of $\frac{d^2P}{dt^2}$, which we have already verified in $\eqref{C-4}_3$.\ Finally, we apply the differential version of Barbalat's lemma (see Lemma [Lemma 13](#LA-2){reference-type="ref" reference="LA-2"}) to conclude $$\lim_{t \to \infty} \frac{dP(\Theta(t))}{dt} = 0, \quad \mbox{i.e.,} \quad \lim_{t \to \infty} \|\dot{\Theta}(t)\|_2 = 0.$$ ◻ ## $\ell^p$-solution: additional properties for $p<\infty$ {#sec:3.2} In this subsection, we study some special properties of $\ell^p$-solutions which are not available for generic $\ell^\infty$-solutions. ### Strictly decreasing diameter If $\Theta(t)\in \ell^p$ for some $1\leq p<\infty$, the only possible limit point for the set $\{\theta_i(t)\}_{i\in\mathbb{N}}$ is $\theta=0$, so that either $\overline{\theta}(t)$ or $\underline{\theta}(t)$ is not a limit point for all time $t$. Then, Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} implies that under the framework $(\mathcal{F}1)-(\mathcal{F}2)$, the diameter $\mathcal{D}(\Theta(t))$ is strictly decreasing for all $t\geq 0$. On the other hand, if there exist a symmetric matrix $A=(a_{ij})\in \ell^{p,p}$ and $(\kappa_1,\kappa_2,\ldots)\in\ell^{q}$ satisfying [\[B-8\]](#B-8){reference-type="eqref" reference="B-8"}: $$\kappa_{ij}=a_{ij}\kappa_j,\quad \forall~i,j\in\mathbb{N},$$ where $q$ is the Hölder conjugate of $p$, then Lemma [Lemma 1](#L2.1){reference-type="ref" reference="L2.1"} implies that the weighted average $$\sum_{i\in \mathbb{N}}\kappa_i\theta_i$$ is a constant of motion of the flow $\Theta\in \mathcal{C}^1(\mathbb{R}_+,\ell^p)$. Therefore, if $\sum_{i\in \mathbb{N}}\kappa_i\theta_i^{\text{in}}$ is nonzero, the $\ell^p$-solution $\Theta$ can never converge to the point $(0,0,\ldots)$ in the $\ell^p$-norm. However, $\Theta$ may still converge in the $\ell^\infty$-norm (see Section [4](#sec:4){reference-type="ref" reference="sec:4"}).
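The conservation law invoked above can also be checked numerically on a finite truncation. In the minimal Python sketch below, the symmetric weights $a_{ij}=e^{-|i-j|}$, the sequence $\kappa_j=2^{-j}$, the truncation level and the initial data are illustrative assumptions; since $\sum_i\kappa_i\theta_i$ is a linear first integral of the truncated system as well, it should be preserved up to floating-point round-off.

```python
import numpy as np

# Truncated homogeneous model (C-1) with kappa_ij = a_ij * kappa_j as in (B-8),
# A symmetric.  The weighted sum  sum_i kappa_i * theta_i(t)  is conserved.
# The choices a_ij = exp(-|i-j|), kappa_j = 2^{-j} and the data are illustrative.
N = 60
idx = np.arange(1, N + 1)
A = np.exp(-np.abs(idx[:, None] - idx[None, :]))   # symmetric a_ij
kap = 2.0 ** (-idx)                                # weights kappa_j
K = A * kap[None, :]                               # kappa_ij = a_ij * kappa_j

rng = np.random.default_rng(1)
theta = rng.uniform(-1.0, 1.0, N)

def rhs(th):
    return np.sum(K * np.sin(th[None, :] - th[:, None]), axis=1)

dt, steps = 1e-2, 20000
drift = 0.0
initial_avg = kap @ theta
for _ in range(steps):
    theta = theta + dt * rhs(theta + 0.5 * dt * rhs(theta))   # explicit midpoint step
    drift = max(drift, abs(kap @ theta - initial_avg))

print("max drift of sum_i kappa_i * theta_i over the run:", drift)  # ~ round-off
```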
### Gradient flow formulation {#sec:3.3} In the finite Kuramoto model, the gradient flow approach plays an essential role in the proof of Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}. It is thus natural to ask whether this approach also works for the infinite-dimensional model. To study the gradient flow structure of the model [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"}, the underlying space should be equipped with an inner product structure, as can be seen from differential geometry: for a Riemannian manifold $({\mathcal M}, g_{ij})$ and a smooth function $f: {\mathcal M} \rightarrow\mathbb{R}$, the gradient vector $\nabla f$ is obtained by identifying the differential $df:T_{p}{\mathcal M} \rightarrow T_{p}\mathbb{R}\cong\mathbb{R}$ with a covector $\left\langle \nabla f,-\right\rangle _{\mathbb{R}^{n}}$. Hence, in this subsection, we consider the Cauchy problem for [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} in the $\ell^2$-space. We begin with an elementary lemma from calculus. **Lemma 6**. *For $\theta,h\in\mathbb{R}$ with $\left|h\right|<1$, one has $${\left|2\sin\left(\theta+\frac{h}{2}\right)\sin\frac{h}{2}-h\sin\theta\right|\le h^{2}.}$$* *Proof.* We use the elementary inequality $$x-\frac{x^{3}}{6}\leq\sin x\leq x,\qquad\forall~x\ge0,$$ and the mean value theorem to get $$\begin{aligned} & \left|2\sin\left(\theta+\frac{h}{2}\right)\sin\frac{h}{2}-h\sin\theta\right|\\ & \hspace{0.5cm}\le\left|2\sin\left(\theta+\frac{h}{2}\right)\sin\frac{h}{2}-h\sin\left(\theta+\frac{h}{2}\right)\right|+\left|h\sin\left(\theta+\frac{h}{2}\right)-h\sin\theta\right|\\ & \hspace{0.5cm}\leq2\left|\sin\left(\theta+\frac{h}{2}\right)\right|\Big|\sin\frac{h}{2}-\frac{h}{2}\Big|+|h|\Big|\sin\left(\theta+\frac{h}{2}\right)-\sin\theta\Big|\\ & \hspace{0.5cm}\le\frac{|h|^{3}}{24}+\frac{h^{2}}{2}<h^{2}. \end{aligned}$$  ◻ **Proposition 3**.
*Suppose that the network topology $K$ satisfies $$K=(\kappa_{ij})\in\ell^{1,1},\quad\kappa_{ij}=\kappa_{ji}>0,\quad i,j\in\mathbb{N}.$$ Then, every $\ell^{2}$-solution to [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"} is a gradient flow with the potential $$P\left(\Theta\right)=\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(1-\cos\left(\theta_{i}-\theta_{j}\right)\right).$$* *Proof.* Let $\boldsymbol{h}=\left\{ h_{k}\right\} _{k\in\mathbb{N}}\in\ell^{2}$ with $\left\Vert \boldsymbol{h}\right\Vert <\frac{1}{\sqrt{2}}$ and $$\Phi=\left\{ -\sum_{j\in\mathbb{N}}\kappa_{kj}\sin\left(\theta_{j}-\theta_{k}\right)\right\} _{k\in\mathbb{N}}\in\ell^{2}.$$ From direct calculation, we have $$\begin{aligned} \begin{aligned} & P\left(\Theta+\boldsymbol{h}\right)-P\left(\Theta\right)-\left\langle \Phi,\boldsymbol{h}\right\rangle _{2}\\ & \hspace{1cm}=-\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(\cos\left(\theta_{i}-\theta_{j}+h_{i}-h_{j}\right)\right)\\ & \hspace{1cm}+\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(\cos\left(\theta_{i}-\theta_{j}\right)\right)+\sum_{i,j\in\mathbb{N}}\kappa_{ij}h_{i}\sin\left(\theta_{j}-\theta_{i}\right)\\ & \hspace{1cm}=-\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(\cos\left(\theta_{i}-\theta_{j}+h_{i}-h_{j}\right)-\cos\left(\theta_{i}-\theta_{j}\right)\right)+\sum_{i,j\in\mathbb{N}}\kappa_{ij}h_{i}\sin\left(\theta_{j}-\theta_{i}\right)\\ & \hspace{1cm}\overset{i\leftrightarrow j}{=}-\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(\cos\left(\theta_{i}-\theta_{j}+h_{i}-h_{j}\right)-\cos\left(\theta_{i}-\theta_{j}\right)+\left(h_{i}-h_{j}\right)\sin\left(\theta_{i}-\theta_{j}\right)\right)\\ & \hspace{1cm}=-\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left({-2\sin\left(\theta_{i}-\theta_{j}+\frac{h_{i}-h_{j}}{2}\right)\sin\left(\frac{h_{i}-h_{j}}{2}\right)}+\left(h_{i}-h_{j}\right)\sin\left(\theta_{i}-\theta_{j}\right)\right). \end{aligned} \end{aligned}$$ Here, Lemma [Lemma 6](#L3.4){reference-type="ref" reference="L3.4"} implies $$\begin{aligned} {\begin{aligned} & \left|P\left(\Theta+\boldsymbol{h}\right)-P\left(\Theta\right)-\left\langle \Phi,\boldsymbol{h}\right\rangle _{2}\right|\leq\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left|h_{i}-h_{j}\right|^{2},\\ & \frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left|h_{i}-h_{j}\right|^{2}\leq\sum_{i\neq j}\kappa_{ij}\left(h_{i}^{2}+h_{j}^{2}\right)\leq\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left\Vert \boldsymbol{h}\right\Vert _{2}^{2}. \end{aligned}} \end{aligned}$$ where we used $$\left\Vert \boldsymbol{h}\right\Vert <\frac{1}{\sqrt{2}}\quad\Longrightarrow\quad|h_{i}-h_{j}|\leq1,$$ to meet the assumption in Lemma [Lemma 6](#L3.4){reference-type="ref" reference="L3.4"}. Therefore, we conclude $dP(\Theta)=\left\langle \Phi,\cdot\right\rangle _{2}$, which is equivalent to say that $\Phi$ is the gradient of $P$. ◻ Even if we have a gradient structure of [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"}, we cannot show the convergence of phases as in Proposition [Proposition 1](#P2.1){reference-type="ref" reference="P2.1"}, as the Lojaciewicz inequality in Lemma [Lemma 15](#LA-4){reference-type="ref" reference="LA-4"} is not applicable under the condition [\[C-2-1\]](#C-2-1){reference-type="eqref" reference="C-2-1"}. To see this, we first assume $$\kappa_{ii}=0, \quad i\in\mathbb{N}$$ without loss of generality. 
Then for each $i \in {\mathbb N}$, we have $$\nabla_{\theta_i}P(\Theta) = -\sum_{k\in\mathbb{N}} \kappa_{ik}\sin\left(\theta_{k}-\theta_{i}\right),$$ and the Hessian matrix $\nabla^2P(\Theta)=:\left\{ h_{ij}\right\} _{i,j\in\mathbb{N}}$ is given as $$h_{ij}=\frac{\partial}{\partial\theta_j} \Big( \sum_{k\in\mathbb{N}} \kappa_{ik}\sin(\theta_{k}-\theta_{i} ) \Big) = \begin{cases} \displaystyle \kappa_{ij}\cos\left(\theta_j-\theta_{i}\right), & i\neq j\\ \displaystyle -\sum_{k\neq i} \kappa_{ik}\cos\left(\theta_{k}-\theta_{i}\right), & i=j. \end{cases}$$ In particular, the Hessian matrix at $\Theta=0$ is $$H:=\nabla^2P(\Theta)(0) = \begin{cases} \kappa_{ij} & i\neq j,\\ -\sum_{k\neq i}\kappa_{ik} & i=j. \end{cases}$$ Next, we determine the kernel of this Hessian matrix. Suppose we have two vectors $$\boldsymbol{v}=\left\{ v_{i}\right\} _{i\in\mathbb{N}}, \quad \boldsymbol{w}=\left\{ w_{i}\right\} _{i\in\mathbb{N}}\in \ell^2.$$ By direct computation, one has $$\left\langle \boldsymbol{w},H\boldsymbol{v}\right\rangle _{l^{2}} =\sum_{i\in\mathbb{N}} w_{i}\left(\sum_{j\in\mathbb{N}} \kappa_{ij}v_{j}-\sum_{j\in\mathbb{N}} \kappa_{ij}v_{i}\right) \overset{i\leftrightarrow j}{=}-\frac{1}{2}\sum_{i,j\in\mathbb{N}} \kappa_{ij}\left(v_{i}-v_{j}\right)\left(w_{i}-w_{j}\right).$$ Therefore, we have $$\begin{aligned} \begin{aligned} \boldsymbol{v}=\left\{ v_{i}\right\} _{i\in\mathbb{N}}\in\ker H &\quad \Longleftrightarrow \quad \left \langle \boldsymbol{v},H\boldsymbol{v}\right\rangle _{l^{2}} =-\frac{1}{2}\sum_{i,j\in\mathbb{N}}\kappa_{ij}\left(v_{i}-v_{j}\right)^{2}=0 \\ & \quad \Longleftrightarrow \quad v_{i}=v_{j} \quad \forall~i,j\in\mathbb{N}, \quad \mbox{i.e.,} \quad \boldsymbol{v} = (v, v, \ldots) \in \ell^2\\ & \quad \Longleftrightarrow \quad \boldsymbol{v}=0. \end{aligned}\end{aligned}$$ Now, let $\boldsymbol{e}_{i}$ be the infinite sequence whose $i$th element is $1$ and whose other elements are all zero. Then, we have $$(H\boldsymbol{e}_{i})_j=\begin{cases} \kappa_{ij} & j\neq i\\ -\sum_{k\neq i} \kappa_{ik} & j=i \end{cases}.$$ Therefore, the $\ell^2$-norm of $H\boldsymbol{e}_{i}$ can be written as $$\left\Vert H\boldsymbol{e}_{i}\right\Vert _{l^{2}}^2=\sum_{j\in\mathbb{N}} \left(\kappa_{ij}\right)^{2}+\left(\sum_{j\in\mathbb{N}}\kappa_{ij}\right)^{2}.$$ However, since $K\in\ell^{1,1}$, we have $$\lim_{i\to\infty}\sum_{j\in\mathbb{N}} \kappa_{ij}=0 \quad \Longrightarrow \quad \lim_{n\to\infty}\left\Vert H\boldsymbol{e}_{n}\right\Vert _{l^{2}}=0,$$ which violates the second condition of [\[New-A\]](#New-A){reference-type="eqref" reference="New-A"}. # Emergent dynamics of a heterogeneous ensemble {#sec:4} In this section, we study the emergent dynamics of the infinite Kuramoto ensemble whose oscillators may have nonidentical natural frequencies, $$\label{D-1} \begin{cases} \displaystyle\dot{\theta}_{i}=\nu_i + \sum_{j\in \mathbb{N}}\kappa_{ij}\sin\left(\theta_{j}-\theta_{i}\right), \quad t>0,\\ \displaystyle\theta_i(0)=\theta_i^{\text{in}}\in \mathbb{R},\quad \forall~ i\in \mathbb{N},\\ \displaystyle \Theta^{\text{in}}\in\ell^\infty,\quad \mathcal{V}\in\ell^\infty,\quad K=(\kappa_{ij})\in\ell^{\infty,1}.
\end{cases}$$ In addition to the framework $(\mathcal{F}1)$ -- $(\mathcal{F}2)$, we also consider the following framework in this section: - $(\mathcal{F}3)$: The network topology $K=(\kappa_{ij})$ satisfies $$\inf_{i\in\mathbb{N}}\sum_{j\in\mathbb{N}} \kappa_{ij} =: \| K \|_{-\infty, 1}>0.$$ In the sequel, we first prove the existence of a trapping set for infinite heterogeneous ensembles, and then we apply the same argument to prove the convergence of diameter to zero in a homogeneous ensemble as corollary. **Lemma 7**. *Let $a, b$ and $c$ be positive constants satisfying the relations $$0\leq c-a\leq \pi-\varepsilon_1, \quad a-\varepsilon_2\leq b\leq c+\varepsilon_2,\quad 0\leq \varepsilon_{2}\leq \varepsilon_1.$$ Then, one has $$\sin\left(c-a\right)+\sin\left(a-b\right)+\sin\left(b-c\right)\le 4\sin \frac{\varepsilon_{2}}{2}.$$* *Proof.* We use the additive law for trigonometric function to obtain $$\begin{aligned} & \sin\left(c-a\right)+\sin\left(a-b\right)+\sin\left(b-c\right)\\ & \hspace{1cm} =\sin\left(c-a\right)+2\sin\left(\frac{a-c}{2}\right)\cos\left(\frac{a-2b+c}{2}\right)\\ & \hspace{1cm} =2\sin\left(\frac{c-a}{2}\right)\left(\cos\left(\frac{a-c}{2}\right)-\cos\left(\frac{a-2b+c}{2}\right)\right)\\ & \hspace{1cm} =-4\sin\left(\frac{c-a}{2}\right)\sin\left(\frac{c-b}{2}\right)\sin\left(\frac{b-a}{2}\right)\\ &\hspace{1cm} =:f(a,b,c). \end{aligned}$$ If $a\leq b\leq c$, then $$\frac{c-a}{2}, \frac{b-a}{2},\frac{c-b}{2}\in \Big [0,\frac{\pi-\varepsilon_1}{2} \Big ]$$ and therefore $$f(a,b,c)\leq 0.$$ On the other hand, if $a-\varepsilon_2\leq b\leq a$ or $c\leq b\leq c+\varepsilon_2$, then we have $$\frac{c-a}{2}\in [0,\frac{\pi-\varepsilon_1}{2}],$$ and one of $\frac{b-a}{2}$ and $\frac{c-b}{2}$ is contained in $[0,\frac{\pi-\varepsilon_{1}+\varepsilon_{2}}{2}]$, and the other is contained in $[-\frac{\varepsilon_2}{2},0]$. Therefore, we have $$f(a,b,c)\leq 4\sin \frac{\varepsilon_{2}}{2} \quad \mbox{in both cases}.$$ ◻ **Lemma 8**. *Let $\Theta$ be a solution to [\[D-1\]](#D-1){reference-type="eqref" reference="D-1"}. For every $t\geq 0$ and $\varepsilon_{2}>0$, consider the following partition of the index set $\mathbb{N}$: $$\begin{aligned} \begin{aligned} & {\mathcal J}_{1}(\varepsilon_{2},t): =\left\{ k:\theta_{k}\left(t\right)>\overline{\theta}(t)-\varepsilon_{2}\right\}, \\ & {\mathcal J}_{2}(\varepsilon_{2},t):=\left\{ k:\theta_{k}\left(t\right)<\underline{\theta}(t)+\varepsilon_{2}\right\}, \\ & {\mathcal J}_{3}(\varepsilon_{2},t): =\left\{ k:\underline{\theta}(t)+\varepsilon_{2}\le\theta_{k}\left(t\right)\le\overline{\theta}(t)-\varepsilon_{2}\right\}. 
\end{aligned} \end{aligned}$$ Then, if $\mathcal{D}(\Theta(t_0))=\overline{\theta}(t_0)-\underline{\theta}(t_0)= \pi-\varepsilon_1$ for some $\varepsilon_1>0$, we have $$\label{D-2} \begin{aligned} &\dot{\theta}_i(t_0)-\dot{\theta}_j(t_0) \\ &\hspace{0.2cm} \leq \nu_{i}-\nu_{j} -\sum_{k\in\mathbb{N}}\bigg[\emph{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right] -|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\bigg], \end{aligned}$$ for every $(i,j)\in\mathcal{J}_1(\varepsilon_{2},t_0)\times \mathcal{J}_2(\varepsilon_{2},t_0)$ and sufficiently small $\varepsilon_{2}$ satisfying $$\label{D-3p} \varepsilon_{2}\leq \varepsilon_{1},\quad \varepsilon_{1}+2\varepsilon_{2}\leq \pi,\quad \sin \varepsilon_{1}\geq 4\sin\frac{\varepsilon_{2}}{2},\quad \sin (\varepsilon_{1}+2\varepsilon_{2})\geq 4\sin\frac{\varepsilon_{2}}{2}.$$* *Proof.* One can apply Lemma [Lemma 7](#L4.1){reference-type="ref" reference="L4.1"} to $$a=\theta_j(t_0),\quad b=\theta_k(t_0),\quad c=\theta_i(t_0),$$ for all $i\in\mathcal{J}_1(\varepsilon_{2},t_0), j\in \mathcal{J}_2(\varepsilon_{2},t_0)$ and $k\in\mathbb{N}$ whenever $\varepsilon_{2}\leq \varepsilon_{1}$. Since $\dot{\theta}_i(t_0)-\dot{\theta}_j(t_0)$ can be written as $$\dot{\theta}_i(t_0)-\dot{\theta}_j(t_0)=\nu_{i}-\nu_{j}-\sum_{k\in\mathbb{N}} \left(\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)-\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\right),$$ it is sufficient to verify that $$\begin{aligned} &\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2} \end{aligned}$$ for all $k\in\mathbb{N}$. Note that for sufficiently small $\varepsilon_{2}$ satisfying [\[D-3p\]](#D-3p){reference-type="eqref" reference="D-3p"}, we have $$\begin{aligned} &0\leq \mathcal{D}(\Theta(t_0))-2\varepsilon_{2}\leq \theta_i(t_0)-\theta_j(t_0)\leq \mathcal{D}(\Theta(t_0))\leq \pi,\\ &\sin \mathcal{D}(\Theta(t_0))\geq 4\sin\frac{\varepsilon_{2}}{2},\quad \sin (\mathcal{D}(\Theta(t_0))-2\varepsilon_{2})\geq 4\sin\frac{\varepsilon_{2}}{2}. \end{aligned}$$ These imply $$\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2}\geq 0$$ from the concavity of the sine function on the domain $[0,\pi]$. 
Below, we show the above inequality for $k\in \mathcal{J}_1(\varepsilon_{2},t_0)$, $k\in \mathcal{J}_2(\varepsilon_{2},t_0)$ and $k\in \mathcal{J}_3(\varepsilon_{2},t_0)$ one by one.\ $\bullet$ Case A ($k\in \mathcal{J}_1(\varepsilon_{2},t_0)$): In this case, we use Lemma [Lemma 7](#L4.1){reference-type="ref" reference="L4.1"} to get $$\begin{aligned} &\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\\ &\hspace{0.5cm}= \kappa_{jk}\left[\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\right]+(\kappa_{ik}-\kappa_{jk})\sin\left(\theta_{i}\left(t_{0}\right)-\theta_k\left(t_{0}\right)\right)\\ &\hspace{0.5cm}\geq \kappa_{jk}\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}. \end{aligned}$$ $\bullet$ Case B ($k\in \mathcal{J}_2(\varepsilon_{2},t_0)$): Similar to the first case, we use Lemma [Lemma 7](#L4.1){reference-type="ref" reference="L4.1"} to get $$\begin{aligned} &\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\\ &\hspace{0.5cm}= \kappa_{ik}\left[\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\right]+(\kappa_{jk}-\kappa_{ik})\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\\ &\hspace{0.5cm}\geq \kappa_{ik}\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}. \end{aligned}$$ $\bullet$ Case C ($k\in \mathcal{J}_3(\varepsilon_{2},t_0)$): Again, in this case, we use Lemma [Lemma 7](#L4.1){reference-type="ref" reference="L4.1"} to get $$\begin{aligned} &\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)+\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\right]\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\sin(\theta_i(t_0)-\theta_j(t_0))\\ &\hspace{0.5cm}\geq \mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}. 
\end{aligned}$$ Finally, we combine estimates in Case A $\sim$ Case C to obtain $$\begin{aligned} \dot{\theta}_i(t_0)-\dot{\theta}_j(t_0)&=\nu_{i}-\nu_{j}-\sum_{k\in\mathbb{N}} \left(\kappa_{ik}\sin\left(\theta_{i}\left(t_{0}\right)-\theta_{k}\left(t_{0}\right)\right)-\kappa_{jk}\sin\left(\theta_{k}\left(t_{0}\right)-\theta_j\left(t_{0}\right)\right)\right)\\ &\leq \nu_{i}-\nu_{j}-\sum_{k\in\mathbb{N}}\bigg[\mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]\\ &\hspace{1.65cm}-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\bigg] \end{aligned}$$ for every $\varepsilon_{2}\leq \varepsilon_{1}$ and $(i,j)\in\mathcal{J}_1(\varepsilon_{2},t_0)\times \mathcal{J}_2(\varepsilon_{2},t_0)$, which is our desired result. ◻ If $\Theta$ is a solution to [\[D-1\]](#D-1){reference-type="eqref" reference="D-1"} under the framework $(\mathcal{F}1)-(\mathcal{F}3)$, one can further estimate the right-hand side of [\[D-2\]](#D-2){reference-type="eqref" reference="D-2"} to obtain the following result, so-called 'practical synchronization'. **Theorem 2**. *Assume that the initial data $\Theta^{\text{in}}$ and network topology $(\kappa_{ij})$ satisfy $(\mathcal{F}1)$ -- $(\mathcal{F}3)$. Assume further that $$0<\mathcal{D}(\mathcal{V})<\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1},\quad \mathcal{D}(\Theta^{\text{in}})\in (\gamma,\pi-\gamma),\quad \gamma=\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)<\frac{\pi}{2}.$$ Then, if $\Theta$ is an $\ell^\infty$-solution to [\[D-1\]](#D-1){reference-type="eqref" reference="D-1"}, the phase diameter $\mathcal{D}(\Theta(t))$ has the following asymptotic upper bound: $$\limsup_{t\to\infty} \mathcal{D}(\Theta(t))\leq \gamma.$$* *Proof.* Fix a sufficiently small $\varepsilon_0>0$ satisfying $$\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)+\frac{\varepsilon_0}{3} \leq\mathcal{D}(\Theta^{\text{in}})\leq \pi-\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right),$$ and define a positive number $\varepsilon_{2}$ as $$\varepsilon_{2}=\mbox{min}\left(\frac{\sin\gamma}{2},\frac{\varepsilon_0}{6}\right).$$ Then, we will show the following statement in the proof.\ *We claim: whenever the phase diameter $\mathcal{D}(\Theta)$ at time $t=t_0$ satisfy $$\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)+\frac{\varepsilon_0}{3}\leq\mathcal{D}(\Theta(t_0))\leq \pi-\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right),$$ we have $$\mathcal{D}(\Theta(t))\leq\mathcal{D}(\Theta(t_0))-\|K\|_{\infty,1}\varepsilon_{2}(t-t_0),\quad \forall~ t_0\leq t\leq t_0+\frac{\varepsilon_{2}}{2(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})}.$$* To see this, we first verify that $\varepsilon_{2}$ satisfies the condition [\[D-3p\]](#D-3p){reference-type="eqref" reference="D-3p"} for given $\varepsilon_{1}=\pi-\mathcal{D}(\Theta(t_0))$. If $\varepsilon_{2}$ is a positive number satisfying $$\label{D-5} { 4\varepsilon_{2}\leq \sin\varepsilon_{1} \leq \pi-\varepsilon_{1},}$$ one can easily see that $\varepsilon_{2}$ satisfies [\[D-3p\]](#D-3p){reference-type="eqref" reference="D-3p"}. 
However, [\[D-5\]](#D-5){reference-type="eqref" reference="D-5"} follows from the fact that $$\begin{aligned} &2\varepsilon_{2}\leq \sin\gamma\leq \sin\mathcal{D}(\Theta(t_0))=\sin\varepsilon_{1},\\ &2\varepsilon_{2}\leq \frac{\varepsilon_0}{3}\leq \sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)+\frac{\varepsilon_0}{3}\leq\mathcal{D}(\Theta(t_0))=\pi-\varepsilon_{1}. \end{aligned}$$ Now, consider any index pair $(i,j)\in\mathcal{J}_1(\varepsilon_{2},t_0)\times \mathcal{J}_2(\varepsilon_{2},t_0)$. Then, Lemma [Lemma 8](#L4.2p){reference-type="ref" reference="L4.2p"} yields $$\begin{aligned} \dot{\theta}_i(t_0)-\dot{\theta}_j(t_0)&\leq \nu_{i}-\nu_{j}-\sum_{k\in\mathbb{N}}\bigg[\mbox{min}(\kappa_{ik},\kappa_{jk})\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]\\ &\hspace{1.65cm}-|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\bigg]\\ &\leq \mathcal{D}(\mathcal{V})-\left(\sum_{k\in\mathbb{N}}\tilde{\kappa}_k\right)\mbox{min}\left(\sum_{l\in\mathbb{N}}\kappa_{il},\sum_{l\in\mathbb{N}}\kappa_{jl} \right)\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]\\ &\hspace{1.65cm}+\sum_{k\in\mathbb{N}}|\kappa_{ik}-\kappa_{jk}|\sin\varepsilon_{2}\\ &\leq \mathcal{D}(\mathcal{V})-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\left[\sin(\theta_i(t_0)-\theta_j(t_0))-4\sin\frac{\varepsilon_{2}}{2} \right]+2\|K\|_{\infty,1}\sin\varepsilon_{2}\\ &\leq (\mathcal{D}(\mathcal{V})+8\|K\|_{\infty,1}\sin\frac{\varepsilon_{2}}{2})-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\theta_i(t_0)-\theta_j(t_0)), \end{aligned}$$ where we used $$\| {\boldsymbol{\tilde{\kappa}}} \|_1\leq 1,\quad \|K\|_{-\infty,1}\leq \|K\|_{\infty,1}$$ in the last inequality. Then, by using Lemma [Lemma 3](#L2.3){reference-type="ref" reference="L2.3"}, we have $$\begin{aligned} \dot{\theta}_i(t)-\dot{\theta}_j(t)&\leq (\mathcal{D}(\mathcal{V})+8\|K\|_{\infty,1}\sin\frac{\varepsilon_{2}}{2})-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\theta_i(t_0)-\theta_j(t_0))\\ &\hspace{0.5cm}+2(t-t_0)\|K\|_{\infty,1}(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1} ) \end{aligned}$$ for all $t\in\mathbb{R}_+$. In particular, we have $$\begin{aligned} \dot{\theta}_i(t)-\dot{\theta}_j(t)&\leq (\mathcal{D}(\mathcal{V})+5\|K\|_{\infty,1}\varepsilon_{2})-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\theta_i(t_0)-\theta_j(t_0))\\ &\leq 5\|K\|_{\infty,1}\varepsilon_{2}-\|K\|_{\infty,1}\varepsilon_0\\ &\leq -\|K\|_{\infty,1}\varepsilon_{2}, \end{aligned}$$ for all $|t-t_0|\leq \frac{\varepsilon_{2}}{2(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})}$. On the other hand, if $(i',j')$ is not contained in $\mathcal{J}_1(\varepsilon_{2},t_0)\times \mathcal{J}_2(\varepsilon_{2},t_0)$ or $\mathcal{J}_2(\varepsilon_{2},t_0)\times \mathcal{J}_1(\varepsilon_{2},t_0)$, we have $$|\theta_{i'}(t_0)-\theta_{j'}(t_0)|\leq \mathcal{D}(\Theta(t_0))-\varepsilon_{2}.$$ Therefore, by using Lemma [Lemma 3](#L2.3){reference-type="ref" reference="L2.3"}, we have $$\begin{aligned} |\theta_{i'}(t)-\theta_{j'}(t)|&\leq |\theta_{i'}(t_0)-\theta_{j'}(t_0)|+(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})|t-t_0|\\ &\leq \mathcal{D}(\Theta(t_0))-\frac{\varepsilon_{2}}{2}, \end{aligned}$$ for all $|t-t_0|\leq \frac{\varepsilon_{2}}{2(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})}$. 
To sum up, the phase diameter $\mathcal{D}(\Theta)$ satisfies $$\label{D-6} \begin{aligned} \mathcal{D}(\Theta(t))&\leq \mbox{max}\left(\mathcal{D}(\Theta(t_0))-\|K\|_{\infty,1}\varepsilon_{2}(t-t_0), \mathcal{D}(\Theta(t_0))-\frac{\varepsilon_{2}}{2}\right)\\ &=\mathcal{D}(\Theta(t_0))-\|K\|_{\infty,1}\varepsilon_{2}(t-t_0), \end{aligned}$$ for all $t\in [t_0,t_0+\frac{\varepsilon_{2}}{2(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})}]$, where we used $$\|K\|_{\infty,1}\varepsilon_{2}\cdot\frac{\varepsilon_{2}}{2(\mathcal{D}(\mathcal{V})+2\|K\|_{\infty,1})}\leq \frac{\varepsilon_{2}^2}{4}<\frac{\varepsilon_{2}}{2}$$ to identify the larger of the two terms in [\[D-6\]](#D-6){reference-type="eqref" reference="D-6"}.\ With the above claim established, we conclude that $\mathcal{D}(\Theta(t))$ reaches the lower bound $$\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)+\frac{\varepsilon_0}{3}$$ in finite time. Since $\varepsilon_0$ can be chosen arbitrarily small, we have $$\limsup_{t\to\infty} \mathcal{D}(\Theta(t))\leq \inf_{\varepsilon_0>0}\left[\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})+\|K\|_{\infty,1}\varepsilon_0}{\|\boldsymbol{\tilde \kappa}\|_1\|K\|_{-\infty,1}} \right)+\frac{\varepsilon_0}{3}\right]=\gamma,$$ which is our desired result. ◻ **Remark 5**. *Note that the quantity $\|K \|_{-\infty, 1}$ measures the minimal total coupling strength received by an oscillator. Therefore, Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"} shows that the asymptotic phase-diameter can be made as small as we want by increasing the quantity $\| K \|_{-\infty, 1}$. This is in fact called "practical synchronization\" as discussed in [@H-N-P2; @K-Y-K-S; @M-Z-C1; @M-Z-C2] for synchronization models with finite system size.* **Corollary 3**.
*Suppose that the initial data $\Theta^{\text{in}}$ and network topology $(\kappa_{ij})$ satisfy $(\mathcal{F}1)-(\mathcal{F}3)$, and we assume that all natural frequencies are identical: $$\mathcal{D}(\mathcal{V})=0.$$ If $\Theta$ is an $\ell^\infty$-solution to [\[D-1\]](#D-1){reference-type="eqref" reference="D-1"}, the phase diameter $\mathcal{D}(\Theta(t))$ converges to zero exponentially.* *Proof.* In this case, we do not fix a constant $\varepsilon_0$, but instead define $$\varepsilon_{2}:=\frac{\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}}{(10\|K\|_{\infty,1}+4\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1})}\sin\mathcal{D}(\Theta(t_0))$$ and verify [\[D-5\]](#D-5){reference-type="eqref" reference="D-5"} immediately: $$2\varepsilon_2\leq \sin\mathcal{D}(\Theta(t_0))=\sin(\pi-\varepsilon_{1})=\sin\varepsilon_{1},\quad \sin(\pi-\varepsilon_{1})\leq \pi-\varepsilon_{1}.$$ Then, for every $(i,j)\in\mathcal{J}_1(\varepsilon_{2},t_0)\times\mathcal{J}_2(\varepsilon_{2},t_0)$, one can apply Lemma [Lemma 8](#L4.2p){reference-type="ref" reference="L4.2p"} as in Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"} to obtain $$\begin{aligned} \dot{\theta}_i(t)-\dot{\theta}_j(t)&\leq 5\|K\|_{\infty,1}\varepsilon_{2}-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\theta_i(t_0)-\theta_j(t_0))\\ &\leq 5\|K\|_{\infty,1}\varepsilon_{2}-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\mbox{min}\left(\sin(\mathcal{D}(\Theta(t_0))-2\varepsilon_2), \sin\mathcal{D}(\Theta(t_0))\right)\\ &\leq 5\|K\|_{\infty,1}\varepsilon_{2}-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}(\sin(\mathcal{D}(\Theta(t_0)))-2\varepsilon_{2})\\ &=(5\|K\|_{\infty,1}+2\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1})\varepsilon_{2}-\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\mathcal{D}(\Theta(t_0)))\\ &=-\frac{1}{2}\| {\boldsymbol{\tilde{\kappa}}} \|_1\|K\|_{-\infty,1}\sin(\mathcal{D}(\Theta(t_0))), \end{aligned}$$ for all $|t-t_0|\leq\frac{\varepsilon_{2}}{4\|K\|_{\infty,1}}$.\ Now, define $t_0:=0$ and set $t_1,t_2,\ldots$ iteratively by using the relation $$t_{n+1}=t_n+\frac{1}{56\|K\|_{\infty,1}}\sin\mathcal{D}(\Theta(t_n)).$$ Then, we have the following series of inequalities: $$\begin{aligned} \mathcal{D}(\Theta(t_{n+1}))\leq \mathcal{D}(\Theta(t_{n}))-\frac{1}{112}\sin^2\mathcal{D}(\Theta(t_{n}))\leq \mathcal{D}(\Theta(t_{n}))-\frac{\sin^2\mathcal{D}(\Theta^{\text{in}})}{112\mathcal{D}(\Theta^{\text{in}})^2}\mathcal{D}(\Theta(t_{n}))^2. \end{aligned}$$ Therefore, we have $$\begin{aligned} \mathcal{D}(\Theta(t_n))\leq \frac{1}{\frac{1}{\mathcal{D}(\Theta^{\text{in}})}+n\cdot \frac{112\mathcal{D}(\Theta^{\text{in}})^2}{\sin^2\mathcal{D}(\Theta^{\text{in}})}},\quad t_n\lesssim \frac{\sin^2\mathcal{D}(\Theta^{\text{in}})}{112\mathcal{D}(\Theta^{\text{in}})^2}\cdot \frac{1}{56\|K\|_{\infty,1}}\cdot \log n, \end{aligned}$$ which shows the exponential convergence of $\mathcal{D}(\Theta(t))$ with respect to $t$. ◻ # Kuramoto ensemble on a sender network {#sec:5} In this section, we consider a network topology in which capacity at the $i$-th node depends only on neighboring nodes: $$\label{E-0} \kappa_{ij} = \kappa_j > 0, \quad i, j \in {\mathbb N} \quad \mbox{and} \quad \sum_{i\in\mathbb{N}} \kappa_i = 1,$$ which represents the second case of Remark [Remark 1](#R2.1){reference-type="ref" reference="R2.1"}. 
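Before turning to the analysis, the sender-network dynamics can be previewed numerically. In the minimal Python sketch below, the truncation level, the weights $\kappa_j\propto 2^{-j}$ (normalized so that they sum to one as in [\[E-0\]](#E-0){reference-type="eqref" reference="E-0"}), the identical natural frequencies and the widely spread initial phases are illustrative assumptions; the run compares the modulus $r=\big|\sum_{k}\kappa_k e^{{\rm i}\theta_k}\big|$ of the weighted exponential sum at the initial and final times and reports the largest frequency at the final time. In such experiments $r$ increases toward its limit and the frequencies tend to zero, consistently with the results established below.

```python
import numpy as np

# Finite truncation of the homogeneous sender-network model (E-1-1):
#   d/dt theta_i = sum_j kappa_j * sin(theta_j - theta_i).
# kappa_j proportional to 2^{-j}, normalized to sum to one, and initial phases
# spread over almost the whole circle; all choices are illustrative only.
N = 80
kappa = 2.0 ** (-np.arange(1, N + 1))
kappa /= kappa.sum()                        # sum_j kappa_j = 1 as in (E-0)
rng = np.random.default_rng(3)
theta = rng.uniform(-0.95 * np.pi, 0.95 * np.pi, N)   # no smallness of D(Theta_in)

def rhs(th):
    return np.sum(kappa[None, :] * np.sin(th[None, :] - th[:, None]), axis=1)

dt, steps = 1e-2, 20000
r0 = abs(np.sum(kappa * np.exp(1j * theta)))           # order parameter r(0)
for _ in range(steps):
    theta = theta + dt * rhs(theta + 0.5 * dt * rhs(theta))   # explicit midpoint step

rT = abs(np.sum(kappa * np.exp(1j * theta)))
print("r(0) = %.4f,  r(T) = %.4f" % (r0, rT))           # r grows toward its limit
print("largest |d/dt theta_i| at final time:", np.abs(rhs(theta)).max())
```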
For this network topology, we can derive synchronization estimates for an infinite homogeneous Kuramoto ensemble without any restriction on the size of the initial configuration. Furthermore, we can also derive an exponential synchronization for a heterogeneous ensemble.\ Consider the Cauchy problem for the infinite Kuramoto model over the sender network [\[E-0\]](#E-0){reference-type="eqref" reference="E-0"}: $$\label{E-1} \begin{cases} \displaystyle \dot{\theta}_{i}= \nu_i + \sum_{j\in\mathbb{N}} \kappa_j\sin\left(\theta_j-\theta_{i}\right), \quad t>0, \\ \displaystyle \theta_i(0)=\theta_i^{\text{in}},\quad \forall~ i\in \mathbb{N}. \end{cases}$$ In the following two subsections, we study the emergent dynamics of [\[E-1\]](#E-1){reference-type="eqref" reference="E-1"} for homogeneous and heterogeneous ensembles, respectively. ## Homogeneous ensemble {#sec:5.1} In this subsection, we consider the homogeneous ensemble with identical natural frequencies: $$\nu_i = \nu, \quad i \in {\mathbb N}.$$ Then, as discussed in Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we may assume that $\nu = 0$ without loss of generality, so that the system reads $$\label{E-1-1} \begin{cases} \displaystyle \dot{\theta}_{i}= \sum_{j\in\mathbb{N}} \kappa_j\sin\left(\theta_j-\theta_{i}\right), \quad t>0, \\ \displaystyle \theta_i(0)=\theta_i^{\text{in}},\quad \forall~ i\in \mathbb{N}. \end{cases}$$ ### Order parameters {#sec:5.1.1} This part introduces the order parameters associated with [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. A polar representation of the weighted sum $\sum_{k\in\mathbb{N}} \kappa_{k}e^{{\rm{i}}\theta_{k}}$ gives the order parameters $(r, \phi)$ for the phase configuration $\Theta$: $$\label{E-2} re^{{\rm{i}}\phi} :=\sum_{k\in\mathbb{N}} \kappa_{k}e^{{\rm{i}}\theta_{k}}.$$ This is equivalent to $$\label{E-3} re^{{\rm{i}}\left(\phi-\theta_{i}\right)}=\sum_{k\in\mathbb{N}} \kappa_{k}e^{{\rm{i}}\left(\theta_{k}-\theta_{i}\right)}.$$ We now compare the imaginary parts of both sides of [\[E-3\]](#E-3){reference-type="eqref" reference="E-3"} to obtain $$r\sin\left(\phi-\theta_{i}\right)=\sum_{k\in\mathbb{N}} \kappa_{k}\sin\left(\theta_{k}-\theta_{i}\right).$$ Then, we use the above relation and $\eqref{E-1-1}_1$ to rewrite the infinite Kuramoto model in terms of the order parameters: $$\label{E-4} \dot{\theta}_{i} = r\sin(\phi - \theta_i).$$ In the following lemma, we study the governing system for $(r, \phi)$. **Lemma 9**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. Then, the order parameters $(r, \phi)$ satisfy $$\begin{cases} \displaystyle \dot{r} =r\sum_{k\in\mathbb{N}} \kappa_{k}\sin^2\left(\theta_{k}-\phi\right),\quad t>0, \\ \displaystyle \dot{\phi} =\sum_{k\in\mathbb{N}} \kappa_{k}\cos\left(\theta_{k}-\phi\right)\sin\left(\theta_{k}-\phi\right). \end{cases}$$* *Proof.* We differentiate [\[E-2\]](#E-2){reference-type="eqref" reference="E-2"} to find $$\dot{r}e^{{\rm{i}}\phi}+{\rm{i}}re^{{\rm{i}}\phi}\dot{\phi}={\rm{i}}\sum_{k\in\mathbb{N}} \kappa_{k}e^{{\rm{i}}\theta_{k}}\cdot\dot{\theta}_{k}.$$ Now, we divide the above relation by $e^{{\mathrm i}\phi}$ to see $$\begin{aligned} \begin{aligned} & \dot{r} +{\rm{i}} r \dot{\phi}={\rm{i}}\sum_{k\in\mathbb{N}} \kappa_{k}e^{{\rm{i}} (\theta_{k} - \phi)}\cdot\dot{\theta}_{k} = -\sum_{k\in\mathbb{N}} \kappa_k {\dot \theta}_k \sin(\theta_k - \phi) + {\mathrm i} \sum_{k\in\mathbb{N}} \kappa_k {\dot \theta}_k \cos(\theta_k - \phi).
\end{aligned} \end{aligned}$$ We compare the real and imaginary parts of the above relation and use [\[E-4\]](#E-4){reference-type="eqref" reference="E-4"}. More precisely, $\bullet$ (Real part): From direct calculation, we have $$\dot{r} = \sum_{k\in\mathbb{N}} \kappa_k {\dot \theta}_k \sin(\phi- \theta_k) = r \sum_{k\in\mathbb{N}} \kappa_k \sin^2(\phi - \theta_k).$$ $\bullet$ (Imaginary part): Similarly, one has $$r \dot{\phi} = \sum_{k\in\mathbb{N}} \kappa_k {\dot \theta}_k \cos(\theta_k - \phi) = r \sum_{k\in\mathbb{N}} \kappa_k \sin(\phi - \theta_k)\cos(\phi - \theta_k).$$ Therefore, we have the desired dynamics for $\phi$ as long as $r > 0$. ◻ Next, we study the asymptotic behaviors of [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. **Proposition 4**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. Then, the following dichotomy holds: $$\emph{Either}~r(t)\equiv 0 \quad \emph{or} \quad \lim_{t \to \infty} \sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}(t) -\phi(t) \right) = 0.$$* *Proof.* Below, we consider two cases: $$r(0) = 0, \quad r(0) > 0.$$ $\bullet$ Case A $(r(0) = 0)$: In this case, we employ the uniqueness of ODEs to [\[E-4\]](#E-4){reference-type="eqref" reference="E-4"} and obtain $$\theta_i(t) = \theta_i^{\text{in}}, \quad i \in {\mathbb N}, \quad \mbox{i.e.,} \quad r(t) = r(0) = 0.$$ $\bullet$ Case B $(r(0) > 0)$: In this case, $r$ is uniformly bounded by one and monotonically increasing in time $t$ (Lemma [Lemma 9](#L5.1){reference-type="ref" reference="L5.1"}) : $$r(t) \leq \sum_{k\in\mathbb{N}} \kappa_k = 1, \quad {\dot r}(t) \geq 0, \quad t > 0.$$ Therefore, there exists a positive real number $r^{\infty} \in [r(0), 1]$ such that $$\lim_{t \to \infty} r(t) = r^{\infty}.$$ Now, we claim that $$\label{E-6} \int_0^{\infty} \sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}(t)-\phi(t) \right)dt < \infty \quad \mbox{and} \quad \left|\frac{d}{dt}\left(\sum_{k} \kappa_{k}\sin^{2}\left(\theta_{k}-\phi\right)\right)\right| \leq 4.$$ $\diamond$ (Derivation of the first relation in [\[E-6\]](#E-6){reference-type="eqref" reference="E-6"}): By using Lemma [Lemma 9](#L5.1){reference-type="ref" reference="L5.1"}, we have $$\frac{\dot{r}}{r}=\sum_{k} \kappa_{k}\sin^{2}\left(\theta_{k}-\phi\right) \quad \Longrightarrow \quad \ln\left(r\left(t\right)\right)-\ln\left(r\left(0\right)\right)=\int_{0}^{t}\sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}\left(s\right)-\phi\left(s\right)\right)ds.$$ Therefore, we take a limit $t \to \infty$ to obtain $\eqref{E-6}_1$. $\diamond$ (Derivation of the second relation in [\[E-6\]](#E-6){reference-type="eqref" reference="E-6"}): By direct calculation, we have $$\begin{aligned} & \left|\frac{d}{dt}\left(\sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}-\phi\right)\right)\right|\\ &\hspace{0.5cm} =\left|2\sum_{k\in\mathbb{N}} \kappa_{k}\left(\dot{\theta}_{k}-\dot{\phi}\right)\sin\left(\theta_{k}-\phi\right)\cos\left(\theta_{k}-\phi\right)\right| \le2\sum_{k\in\mathbb{N}} \kappa_{k}\left(\left|\dot{\theta}_{k}\right|+\left|\dot{\phi}\right|\right)\\ &\hspace{0.5cm} = 2\sum_{k\in\mathbb{N}} \kappa_{k} \left(\left|\sum_{m=1}^{\infty} \kappa_m \sin (\theta_m - \theta_k)\right|+\left| \sum_{m=1}^{\infty} \kappa_{m}\cos\left(\theta_{m}-\phi\right)\sin\left(\theta_{m}-\phi\right) \right|\right)\\ &\hspace{0.5cm} \le 4 \Big( \sum_{k\in\mathbb{N}} \kappa_{k} \Big)^2 =4. 
\end{aligned}$$ Finally, we apply the integral version of Barbalat's lemma for $\sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}(t)-\phi(t) \right)$ to get the zero convergence. ◻ As a direct application of Proposition [Proposition 4](#P5.1){reference-type="ref" reference="P5.1"}, we have the complete synchronization of [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. **Theorem 3**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. Then, the following assertions hold.* 1. *Complete synchronization emerges asymptotically: $$\lim_{t \to \infty} |{\dot \theta}_i(t) - {\dot \theta}_j(t)| = 0, \quad i, j \in {\mathbb N}.$$* 2. *If $r(0) > 0$, then for each pair $(i, j)$, there exists an integer $n_{ij}$ such that $$\lim_{t \to \infty} (\theta_i(t) - \theta_j(t)) = n_{ij} \pi.$$* *Proof.* (i) For the case in which $r(0) = 0$, we have $${\dot \theta}_i(t) = 0, \quad t > 0,$$ which is indeed a steady-state solution. Now, we consider a generic case in which $r(0) > 0$. In this case, the dichotomy in Proposition [Proposition 4](#P5.1){reference-type="ref" reference="P5.1"} yields $$\label{E-7} \lim_{t \to \infty} \sum_{k\in\mathbb{N}} \kappa_{k}\sin^{2}\left(\theta_{k}(t) -\phi(t) \right) = 0.$$ On the other hand, it follows from Lemma [Lemma 9](#L5.1){reference-type="ref" reference="L5.1"} and [\[E-4\]](#E-4){reference-type="eqref" reference="E-4"} that $$\label{E-8} \sin(\phi - \theta_i) = \frac{{\dot \theta}_i}{r} .$$ Finally, we combine [\[E-7\]](#E-7){reference-type="eqref" reference="E-7"} and [\[E-8\]](#E-8){reference-type="eqref" reference="E-8"} to obtain $${\lim_{t \to \infty} \frac{\sum_{k\in\mathbb{N}} \kappa_{k} |{\dot \theta}_k(t)|^2}{r^2(t)} = 0.}$$ This implies $$\lim_{t \to \infty} |{\dot \theta}_i(t)| = 0, \quad \forall~i \in {\mathbb N},$$ so that complete synchronization emerges: $$\lim_{t \to \infty} |{\dot \theta}_i(t) - {\dot \theta}_j(t)| = 0, \quad i, j \in {\mathbb N}.$$ (ii) From [\[E-7\]](#E-7){reference-type="eqref" reference="E-7"}, we have $$r(t) \geq r(0) > 0 \quad \mbox{and} \quad \lim_{t \to \infty} \sin \left(\theta_{k}(t) -\phi(t) \right) = \sin \Big(\lim_{t \to \infty} (\theta_{k}(t) -\phi(t) ) \Big)= 0, \quad \forall~k \in {\mathbb N}.$$ Hence, we have $$\lim_{t \to \infty} \left({\theta_{i}(t) -\theta_j(t)} \right) = n_{ij} \pi, \quad \mbox{for some $n_{ij} \in {\mathbb Z}$}.$$ ◻ ### Constant of motion In this part, we provide two time-invariants for the dynamical system [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} that allow us to identify synchronized states. $\bullet$ (Constant of motion I): Let $\Theta$ be a phase configuration whose dynamics is governed by [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. Then, the weighted sum ${\mathcal S}(\Theta, A)$ is given as follows: $$\label{E-9} {\mathcal S}(\Theta, A) := \sum_{k\in\mathbb{N}} \kappa_k \theta_k, \quad {\boldsymbol{\kappa}} = (\kappa_k).$$ Then, it is easy to see that ${\mathcal S}(\Theta, A)$ is time-invariant: $$\label{E-9-1} \frac{d}{dt} {\mathcal S}(\Theta, A) = \sum_{k\in\mathbb{N}} \kappa_k {\dot \theta}_k = \sum_{j, k=1}^{\infty} \kappa_j \kappa_k \sin\left(\theta_j-\theta_{k}\right) = 0.$$ In the following proposition, we are ready to verify the convergence of $\theta_i$ for each $i\in\mathbb{N}$. First we present the collision avoidance between oscillators. **Lemma 10**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"}. 
Then for each $i,j\in\mathbb{N}$, $$\theta_j^{\text{in}}<\theta_{i}^{\text{in}} \quad \Longrightarrow \quad \theta_j(t)\le\theta_{i}(t)\le\theta_j(t)+2\pi\text{ for }~~t > 0.$$* *Proof.* Suppose that there exists a first collision time $t_0 >0$ between $\theta_i$ and $\theta_j$, i.e., $$\label{E-10} { \theta_j(t) < \theta_i(t)}, \quad t < t_0, \quad \theta_j(t_0) = \theta_i(t_0).$$ Then, it follows from [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} that $$\label{E-11} \dot{\theta}_{i}(t_0)-\dot{\theta}_{j}(t_0) =r \Big(\sin\left(\phi(t_0)-\theta_{i}(t_0)\right)-\sin\left(\phi(t_0)-\theta_j(t_0)\right) \Big) = 0.$$ Inductively, one can see that $$\frac{d^n \theta_i}{dt^n} \Big|_{t = t_0} = \frac{d^n \theta_j}{dt^n} \Big|_{t = t_0}, \quad n \geq 2.$$ Since $\theta_i - \theta_j$ is analytic at $t = t_0$ by Proposition [Proposition 2](#P2.2){reference-type="ref" reference="P2.2"}, there exists $\delta > 0$ such that $$\theta_i(t) = \theta_j(t), \quad t \in (t_0 - \delta, t_0 + \delta),$$ which is contradictory to $\eqref{E-10}_1$. ◻ The result of Lemma [Lemma 10](#L5.2){reference-type="ref" reference="L5.2"} implies that if oscillators' phases are different initially, they can not cross each other in any finite time. On the other hand, Theorem [Theorem 3](#T5.1){reference-type="ref" reference="T5.1"} does not imply the convergence of each phase itself. By combining the conservation of the weighted sum, one can show that each oscillator is converging. **Proposition 5**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} with initial data satisfying the following conditions: $$0<\theta_{i}^{\text{in}}<2\pi, \quad i \in {\mathbb N}.$$ Then, there exists a constant state $\Theta^{\infty} = \{ \theta_i^{\infty} \}$ such that $$\lim_{t \to \infty} \theta_i(t) = \theta_i^{\infty}, \quad i \in {\mathbb N}.$$* *Proof.* By Theorem [Theorem 3](#T5.1){reference-type="ref" reference="T5.1"} and Lemma [Lemma 10](#L5.2){reference-type="ref" reference="L5.2"}, one has $$\begin{aligned} \begin{aligned} & \left|\theta_{i}(t)-\theta_j(t)\right|\le2\pi, \quad \forall~i,j\in\mathbb{N} \quad \mbox{and} \\ & \exists~\theta_{ij}^{\infty} \in (-2\pi, 2\pi)\quad \mbox{such that} \quad \lim_{t \to \infty} (\theta_i(t) - \theta_j(t)) = \theta_{ij}^{\infty}. \end{aligned} \end{aligned}$$ On the other hand, note that $$\label{E-12} { \left(\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\text{in}}\right)-\theta_j\left(t\right)= \left(\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}\left(t\right)\right)-\theta_j\left(t\right)=\sum_{i\in\mathbb{N}} \kappa_{i}\left(\theta_{i}\left(t\right)-\theta_j\left(t\right)\right).}$$ Next, we show that the R.H.S. of [\[E-12\]](#E-12){reference-type="eqref" reference="E-12"} converges as $t \to \infty$. 
More precisely, we claim $$\label{E-13} \lim_{t\to\infty}\sum_{i\in\mathbb{N}} \kappa_{i}\left(\theta_{i}\left(t\right)-\theta_j\left(t\right)\right)=\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{ij}^{\infty}.$$ *Proof of [\[E-13\]](#E-13){reference-type="eqref" reference="E-13"}*: Since $\displaystyle \sum_{i=1}^{\infty} \kappa_i = 1$, for any $\varepsilon > 0$, there exists a $n_{\varepsilon}\in\mathbb{N}$ such that $$\label{E-14} \sum_{i\ge n_{\varepsilon}} \kappa_{i}< \frac{\varepsilon}{4\pi}.$$ For $i<n_{\varepsilon}$, we can choose $t_{\varepsilon}$ such that $$\label{E-15} t > t_\varepsilon \quad \Longrightarrow \quad \left|\theta_{i}\left(t\right)-\theta_j\left(t\right)-\theta_{ij}^{\infty}\right|< \frac{\varepsilon}{2}.$$ Now, we use [\[E-14\]](#E-14){reference-type="eqref" reference="E-14"} and [\[E-15\]](#E-15){reference-type="eqref" reference="E-15"} to obtain $$\begin{aligned} & \left|\sum_{i\in\mathbb{N}} \kappa_{i}\left(\theta_{i}\left(t\right)-\theta_j\left(t\right)\right)-{\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{ij}^{\infty}}\right|\\ & \hspace{1cm} \le\sum_{i<n_{\varepsilon}}\left|\kappa_{i}\left(\theta_{i}\left(t\right)-\theta_j\left(t\right)\right)- \kappa_{i}\theta_{ij}^{\infty}\right|+\sum_{i\ge n_{\varepsilon}}\left|\kappa_{i}\left(\theta_{i}\left(t\right)-\theta_j\left(t\right)\right)- \kappa_{i}\theta_{ij}^{\infty}\right|\\ &\hspace{1cm} \le\sum_{i<n_{\varepsilon}} \kappa_{i} \frac{\varepsilon}{2} +\sum_{i\ge n_{\varepsilon}} \kappa_{i}\cdot 2\pi \le\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon \end{aligned}$$ for $t>t_{\varepsilon}$. Hence we verified [\[E-13\]](#E-13){reference-type="eqref" reference="E-13"}. Finally, it follows from [\[E-12\]](#E-12){reference-type="eqref" reference="E-12"} and [\[E-13\]](#E-13){reference-type="eqref" reference="E-13"} that $$\lim_{t \to \infty} \theta_j(t) = \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\text{in}} -\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{ij}^{\infty} =: \theta_j^{\infty}, \quad j \in {\mathbb N}.$$ ◻ $\bullet$ (Constant of motion II): As a second choice for the constant of motion, we consider a cross-ratio-like quantity for four distinct points on the unit circle. Before we discuss the second constant of motion, we recall the complex lifting of the Kuramoto model in [\[E-1-2\]](#E-1-2){reference-type="eqref" reference="E-1-2"}. For this, we set a point on the unit circle associated with the phase $\theta_i$: $$z_i = e^{{\mathrm i} \theta_i}, \quad i \in {\mathbb N}.$$ Then, it is easy to check that the Kuramoto model $\eqref{E-1}_1$ can cast as follows. $$\label{E-1-2} \dot{z}_{i}=i\nu_{i}z_{i}+{\frac{1}{2}\sum_{j\in\mathbb{N}} \kappa_j\left(z_{j}-\left\langle z_{j},z_{i}\right\rangle z_{i}\right), \quad \mbox{where $\left\langle z_{j},z_{i}\right\rangle =\bar{z}_{j}z_{i}$. }}$$ We set $$\omega(t) = \sum_{n=1}^{\infty} \kappa_n z_n(t).$$ **Lemma 11**. 
*Let $\{z_i \}$ be a solution to [\[E-1-2\]](#E-1-2){reference-type="eqref" reference="E-1-2"} such that $$z_i \neq z_j, \quad \forall~i \neq j, \quad \nu_i \equiv 0, \quad i \in {\mathbb N}.$$ Then, we have the following relations: for any $i \neq j\in\mathbb{N}$, $$\label{E-1-3} \frac{d}{dt}\left(z_{i}-z_{j}\right) =-\frac{1}{2}\overline{\omega} \left(z_{i}^{2}-z_{j}^{2}\right), \quad \frac{d}{dt}\left(\frac{1}{z_{i}-z_{j}}\right) = \frac{{\bar \omega}}{2}\frac{\left(z_{i}^{2}-z_{j}^{2}\right)}{\left(z_{i}-z_{j}\right)^{2}}.$$* *Proof.* Note that [\[E-1-2\]](#E-1-2){reference-type="eqref" reference="E-1-2"} can be rewritten as $${\dot z}_i = \frac{1}{2} \Big (\omega - {\bar \omega} z_i^2 \Big ).$$ This yields the desired estimates: $${\dot z}_i - {\dot z}_j = -\frac{{\bar \omega}}{2} (z_i^2 - z_j^2), \quad \frac{d}{dt}\left(\frac{1}{z_{i}-z_{j}}\right) = -\frac{1}{(z_i - z_j)^2} \frac{d}{dt} (z_i - z_j) = \frac{{\bar \omega}}{2} \frac{z_i^2 - z_j^2}{(z_i - z_j)^2}.$$ ◻ For $\{z_i := e^{\rm{i}\theta_i}\}_{i\in\mathbb{N}}$, we define a cross ratio-like functional ${\mathcal C}_{ijkl}$ as $$\label{E-16} {\mathcal C}_{ijkl}:=\frac{z_{i}-z_{k}}{z_{i}-z_{j}}\cdot\frac{z_{j}-z_{l}}{z_{k}-z_{l}}.$$ **Proposition 6**. *Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} with non-collisional initial data: $$\theta_i^{\text{in}} \neq \theta_j^{\text{in}} \quad \mbox{for $i \neq j$}.$$ Then, for any four-tuple $(i,j, k, l) \in {\mathbb N}^4$, ${\mathcal C}_{ijkl}$ is well-defined for all $t> 0$ and constant: $${\mathcal C}_{ijkl}(t) = {\mathcal C}_{ijkl}(0), \quad t > 0.$$* *Proof.* Since all points $\{ e^{{\mathrm i} \theta_i} \}$ are distinct, ${\mathcal C}_{ijkl}$ is well-defined at $t = 0$. Moreover, by the continuity of the solution, there exists $\eta > 0$ such that for $i \neq j$, $$\theta_i(t) \neq \theta_j(t), \quad t \in (0, \eta).$$ Thus, the cross-ratio-like functional ${\mathcal C}_{ijkl}$ is well-defined in the time interval $(0, \eta)$. Now we introduce a temporal set ${\mathcal T}$ and its supremum $\tau^*$ : $${\mathcal T}:= \{ \tau \in (0, \infty)~:~{\mathcal C}_{ijkl}~\mbox{is well-defined in the time interval $(0, \tau)$} \}, \quad \tau^* := \sup {\mathcal T}.$$ Then, the set ${\mathcal T}$ is not empty and $\tau^{*} \in (0, \infty]$. In what follows, we show that $$\label{E-17} \tau^* = \infty \quad \mbox{and} \quad {\mathcal C}_{ijkl}(t) = {\mathcal C}_{ijkl}(0), \quad t > 0.$$ Suppose to the contrary that $$\tau^* < \infty.$$ First, we show that the functional ${\mathcal C}_{ijkl}$ is constant in the interval $(0, \tau^*)$. For this, we use Lemma [Lemma 11](#L5.3){reference-type="ref" reference="L5.3"} to get $$\begin{aligned} \begin{aligned} \frac{d}{dt} {\mathcal C}_{ijkl}(t) &=-\left(\frac{1}{2}\overline{\omega}(t)\left(z_{i}(t)+z_{k}(t)\right)+\frac{1}{2}\overline{\omega}(t)\left(z_{j}(t)+z_{l}(t)\right)\right){\mathcal C}_{ijkl}(t)\\ &\phantom{=}+\left(\frac{1}{2}\overline{\omega}(t)\left(z_{i}(t)+z_{j}(t)\right)+\frac{1}{2}\overline{\omega}(t)\left(z_{k}(t)+z_{l}(t)\right)\right){\mathcal C}_{ijkl}(t)\\ & =0, \qquad t \in (0, \tau^*). \end{aligned} \end{aligned}$$ Thus, as long as ${\mathcal C}_{ijkl}$ is well-defined, it is constant. Certainly, it is continuous with respect to its arguments.
Therefore, $$\exists~{\mathcal C}_{ijkl}(\tau^*) = \lim_{t \to \tau^*-} {\mathcal C}_{ijkl}(t).$$ By continuity, there exists a $\delta > 0$ such that $${\mathcal C}_{ijkl}(\cdot)~\mbox{is well-defined in the time-interval $[0, \tau^* + \delta)$}$$ which is contradictory to the choice of $\tau^*$. Therefore we have $$\tau^* = \infty,$$ i.e., ${\mathcal C}_{ijkl}(\cdot)$ is well-defined on the whole time interval $[0, \infty)$ and $${\mathcal C}_{ijkl}(t) = {\mathcal C}_{ijkl}(0), \quad t \in (0, \infty).$$ ◻ As a direct corollary of Proposition [Proposition 6](#P5.3){reference-type="ref" reference="P5.3"}, we have the following results on the asymptotic configurations of the set $\{ e^{{\mathrm i} \theta_i} \}$ and $\{ \theta_i \}$. First, we will see that the asymptotic configuration of $\{ e^{{\mathrm i} \theta_i} \}$ is either a singleton or a bi-polar configuration. **Corollary 4**. *Let $\{z_i \}$ be a solution to [\[E-1-2\]](#E-1-2){reference-type="eqref" reference="E-1-2"} with asymptotic configuration $\{z_i^{\infty} \}$. Then, the following dichotomy holds.* 1. *There exists a $k\in\mathbb{N}$ such that $z_{i}^{\infty}=-z_{k}^{\infty}$ for $i\in\mathbb{N}\setminus\{ k\}$.* 2. *$z_{i}^{\infty}=z_{j}^{\infty}$ for all $i, j \in {\mathbb N}$.* *Proof.* If $z_i^{\infty} = z_1^{\infty}$ for all $i \in {\mathbb N}$, the second assertion holds. Thus, suppose that there exists $k\in\mathbb{N}$ with $k \neq 1$ such that $$z_1^\infty\neq z_k^\infty.$$ By Theorem [Theorem 3](#T5.1){reference-type="ref" reference="T5.1"} and Proposition [Proposition 5](#P5.2){reference-type="ref" reference="P5.2"}, $\theta_1^\infty - \theta_k^\infty$ is an integer multiple of $\pi$, which implies $z_1^\infty = -z_k^\infty$. Then we set a partition $I_{1}\cup I_{2}$ of $\mathbb{N}$ by $$I_{1} :=\left\{ i \in {\mathbb N} \ |\ z_{i}\rightarrow z_{1}^{\infty}\text{ as }t\rightarrow\infty\right\}, \quad I_{2} :=\left\{ i \in {\mathbb N}\ |\ z_{i}\rightarrow-z_{1}^{\infty}\text{ as }t\rightarrow\infty\right\}.$$ Suppose that $$|I_1| \geq 2 \quad \mbox{and} \quad |I_2| \geq 2.$$ Then, we can choose $$i \neq j\in I_{1} \quad \mbox{and} \quad k \neq l \in I_2.$$ For such pairs $(i,j)$ and $(k,l)$, $$\lim_{t \to \infty} |{\mathcal C}_{ijkl}(t) |=\lim_{t \to \infty} \Big| \frac{z_{i}(t)-z_{k}(t)}{z_{i}(t)-z_{j}(t)}\cdot\frac{z_{j}(t)-z_{l}(t)}{z_{k}(t)-z_{l}(t)} \Big| = \infty,$$ which is contradictory to the constancy of ${\mathcal C}_{ijkl}$: $${\mathcal C}_{ijkl}(t) = {\mathcal C}_{ijkl}(0), \quad t > 0.$$ Therefore, we have $$|I_1| \leq 1 \quad \mbox{or} \quad |I_2| \leq 1.$$ First, assume $|I_2| \leq 1$. If $|I_2| = 0$, then the asymptotic state is in complete phase synchrony: $$\lim_{t \to \infty} z_i(t) = z_1^{\infty}, \quad i \geq 2.$$ If $|I_2| = 1$, say $I_2 = \{ k \}$, then $z_{i}^{\infty} = z_{1}^{\infty} = -z_{k}^{\infty}$ for all $i \neq k$, which is the bi-polar state of the first assertion. Next, assume $|I_1| \leq 1$. Since $1 \in I_1$, this forces $I_1 = \{ 1 \}$, and we have the bi-polar asymptotic state: $$\lim_{t \to \infty} z_i(t) = -z_1^{\infty}, \quad i \geq 2.$$ ◻ Now we are ready to study the asymptotic configuration of [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} that emerges from a given initial configuration $\{ \theta_i^{\text{in}} \}$. For such an initial configuration, we set $$\label{E-18} \theta_0 := \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\text{in}}.$$ Then, it follows from [\[E-9-1\]](#E-9-1){reference-type="eqref" reference="E-9-1"} that $$\label{E-19} \theta_0 = \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}(t), \quad t > 0.$$ **Corollary 5**.
*Let $\Theta$ be a solution to [\[E-1-1\]](#E-1-1){reference-type="eqref" reference="E-1-1"} with asymptotic configuration $\{\theta_i^{\infty} \}$: $$\lim_{t \to \infty} \theta_i(t) = \theta_i^{\infty}, \quad i \in {\mathbb N}.$$ Then, for each $i\in\mathbb{N}$, $$\theta_{i}^{\infty}\in\left\{ \theta_{0}\right\} \cup\left\{ \theta_{0}\pm \kappa_{i}\pi\ |\ i\in\mathbb{N}\right\} \cup\left\{ \theta_{0}\pm\left(1-\kappa_{i}\right)\pi\ |\ i\in\mathbb{N}\right\} .$$* *Proof.* It follows from Corollary [Corollary 4](#C5.1){reference-type="ref" reference="C5.1"} that we have two possible asymptotic configurations: Complete phase synchrony and bi-polar configuration. $\bullet$ Case A: Suppose that $$\lim_{t \to \infty} \theta_i(t) = \theta_\infty, \quad \forall~i \in {\mathbb N}.$$ In this case, we use the above relation, [\[E-18\]](#E-18){reference-type="eqref" reference="E-18"} and [\[E-19\]](#E-19){reference-type="eqref" reference="E-19"} to get $$\theta_{0}=\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}(t)= \lim_{t \to \infty} \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}(t) = \sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\infty}=\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{\infty}=\theta_{\infty}.$$ $\bullet$ Case B: Suppose that $$\label{E-20} \lim_{t \to \infty} \theta_j(t) = \theta_{\infty}\pm\pi \quad \mbox{for some $j$} \quad \mbox{and} \quad \lim_{t \to \infty} \theta_{i}(t) = \theta_{\infty} \quad \mbox{for all $i\neq j$}.$$ Then, we use the above relations and the same idea as Case A to find $$\theta_{0}=\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}(t)=\sum_{i\in\mathbb{N}} \kappa_{i}\theta_{i}^{\infty}=\sum_{k\in\mathbb{N}} \kappa_{k}\theta_{\infty}\pm \kappa_j\pi=\theta_{\infty}\pm \kappa_j\pi.$$ This and [\[E-20\]](#E-20){reference-type="eqref" reference="E-20"} imply $$\theta_{i}\to\theta_{\infty}=\theta_{0}\mp \kappa_j\pi,\quad \theta_j\rightarrow\theta_{0}\pm\left(1-\kappa_j\right)\pi.$$ Finally, we combine all the results in Case A and Case B to obtain the desired estimate. ◻ ## Heterogeneous ensemble {#sec:5.2} In this subsection, we study the frequency synchronization of the heterogeneous ensemble for a restricted class of initial configurations confined in a half circle. Note that the Cauchy problem [\[E-1\]](#E-1){reference-type="eqref" reference="E-1"} is equivalent to the following Cauchy problem: $$\label{E-21} \begin{cases} \displaystyle \dot{\theta}_{i}=\omega_{i},\quad t>0,\quad\forall~ i\in\mathbb{N},\\ \displaystyle \dot{\omega}_{i}=\sum_{j\in\mathbb{N}} \kappa_j\cos\left(\theta_i-\theta_j\right)\left(\omega_j-\omega_{i}\right), \\ \displaystyle \theta_{i}(0)=\theta_{i}^{\text{in}}\in\mathbb{R},\quad\omega_{i}(0)=\nu_{i}+\sum_{j\in\mathbb{N}} \kappa_j\sin\left(\theta_j^{\text{in}}-\theta_{i}^{\text{in}}\right), \end{cases}$$ where $$\Theta^{\text{in}} = (\theta_1^{\text{in}}, \theta_2^{\text{in}}, \ldots ) \in \ell^{\infty}, \quad {\mathcal V} = (\nu_1, \nu_2, \ldots) \in \ell^{\infty}.$$ We set $${\mathcal W} := (\omega_1, \omega_2, \ldots) \quad \mbox{and} \quad {\mathcal D}({\mathcal W}) : = \sup_{m, n} |\omega_m - \omega_n|.$$ Note that for our sender network, $${\displaystyle \left\Vert K\right\Vert _{\infty,1}=\left\Vert K\right\Vert _{-\infty,1}=\sum_{j\in\mathbb{N}}\kappa_{j}}.$$ Here the Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"} can be applied to trap $\Theta(t)$ into a quarter arc. **Proposition 7**. 
*Suppose that the initial data $\Theta^{\text{in}}$ and network topology $\left(\kappa_{ij}\right)$ satisfy $$0<\mathcal{D}(\mathcal{V})<\|\boldsymbol{\tilde{\kappa}}_{\varepsilon}\|_{1}\|K\|_{\infty,1},\quad\mathcal{D}(\Theta^{\text{in}})\in(\gamma,\pi-\gamma),\quad\gamma=\sin^{-1}\left(\frac{\mathcal{D}(\mathcal{V})}{\|\boldsymbol{\tilde{\kappa}}_{\varepsilon}\|_{1}\|K\|_{\infty,1}}\right)<\frac{\pi}{2}$$ for $$\varepsilon\ll 1,\quad \tilde{\kappa}_{\varepsilon,i}=\frac{\kappa_{i}}{\left\Vert K\right\Vert _{\infty,1}+\varepsilon},\quad \boldsymbol{\tilde{\kappa}}_{\varepsilon}=\left\{\tilde{\kappa}_{\varepsilon,i}\right\}_{i\in\mathbb{N}}.$$ Then there exist $t_{0}>0$ and $\delta\in(0,1)$ such that $$\mathcal{D}\left(\Theta(t)\right)<\frac{\pi}{2}-\sin^{-1}\delta,\quad t\ge t_{0}.$$* *Proof.* The conclusion is straightforward from Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"}. ◻ Next, we state our last main result on the complete synchronization of [\[E-21\]](#E-21){reference-type="eqref" reference="E-21"}. **Theorem 4**. *Let $(\Theta, {\mathcal W})$ be a solution to [\[E-21\]](#E-21){reference-type="eqref" reference="E-21"} under the conditions in Proposition [Proposition 7](#P5.4){reference-type="ref" reference="P5.4"}. Then, there exists $t_0 \geq 0$ such that $${{\mathcal D}(\Theta(t))<\frac{\pi}{2}}, \quad {\mathcal D}({\mathcal W}(t)) \le {\mathcal D}({\mathcal W}(t_{0})) \cdot\exp\left[ -\frac{3\| K \|_{\infty,1} \log2}{32}\left(t-t_{0}\right)+1\right ],\quad t\ge t_{0}.$$* *Proof.* Since the proof is lengthy, we postpone it to Appendix [9](#App-C){reference-type="ref" reference="App-C"}. ◻ # Conclusion {#sec:6} In this paper, we have proposed a generalized synchronization model for an infinite set of Kuramoto oscillators and studied its emergent asymptotic dynamics. The original Kuramoto model describes the synchronous dynamics of a finite set of Kuramoto oscillators, and it has been extensively studied in the last decade, whereas, as far as the authors know, the dynamics of an infinite number of Kuramoto oscillators has not been directly addressed in the literature. In fact, for the dynamics of an infinite ensemble, the Kuramoto-Sakaguchi equation, which corresponds to a mean-field approximation, is often used to describe the temporal-phase space dynamics of a one-particle distribution function. However, this is only an approximation for the dynamics of the infinite set of Kuramoto oscillators. To make sense of the infinite coupling term, we need to introduce a suitable coupling weight. These coupling weights can be realized as a network topology represented by an infinite matrix whose row sums are uniformly finite. For this infinite system of oscillator equations, some finite-dimensional results and analytical tools can still be used, but there exist some fundamental differences. For example, the Dini derivative of the phase diameter cannot be used, because we cannot estimate how many crossings occur at the end-point of phase space. Similarly, the gradient flow structure of the Kuramoto model cannot be applied even for a symmetric network. We show that there exists a network topology that leads to the constancy of the phase diameter (see Corollary [Corollary 1](#C3.1){reference-type="ref" reference="C3.1"}).
For a symmetric network topology and homogeneous ensemble with the same natural frequency, we show that complete synchronization occurs asymptotically (see Theorem [Theorem 1](#T3.1){reference-type="ref" reference="T3.1"}), whereas for a heterogeneous ensemble, we cannot show complete synchronization, but we instead obtain a practical synchronization, i.e., we can make the phase diameter as small as we want by increasing the coupling strength (see Theorem [Theorem 2](#T4.1){reference-type="ref" reference="T4.1"}). On the other hand, for a sender network topology in which the coupling strength depends only on the sending oscillators, a homogeneous ensemble either evolves toward complete phase synchrony or a special type of bi-cluster configuration (see Theorem [Theorem 3](#T5.1){reference-type="ref" reference="T5.1"}). In contrast, for a heterogeneous ensemble, we show that complete synchronization emerges asymptotically (see Theorem [Theorem 4](#T5.2){reference-type="ref" reference="T5.2"}). There are several interesting remaining questions. For example, the relation between finite collisions and phase-locking is not clear at all. For the Kuramoto model for a finite ensemble, the aforementioned relations are equivalent. Moreover, we did not show the emergence of complete synchronization for a heterogeneous ensemble in a large coupling regime. We leave these interesting questions as future work. The work of S.-Y. Ha was supported by the National Research Foundation of Korea (NRF-2020R1A2C3A01003881). # Some useful lemmas {#App-A} In this appendix, we collect some useful results which are used explicitly and implicitly in the main body of the paper without detailed explanation and proofs. Detailed proofs can be found in quoted references and any reasonable book on mathematical analysis, e.g. [@Ru]. First, we begin with the abstract Cauchy problem on a Banach space $(E, \| \cdot \|)$: $$\label{Ap-1} \begin{cases} \frac{du}{dt}=F\left(u\left(t\right)\right), \quad t > 0, \\ u(0) =u^{\text{in}}. \end{cases}$$ **Lemma 12**. **(Global well-posedness [@Bre; @C])* The following assertions hold.* 1. **(Existence)*: Let $F:E\rightarrow E$ be a Lipschitz map, i.e., there is a nonnegative constant $L$ such that $$\left|\left|Fu-Fv\right|\right|\le L\left|\left|u-v\right|\right|\quad \forall~ u,v\in E.$$ Then, for any given $u^{\text{in}} \in E$, there exists a global solution $u \in {\mathcal C}^{1} ([0, \infty);E)$ to [\[Ap-1\]](#Ap-1){reference-type="eqref" reference="Ap-1"}.* 2. **(Uniqueness)*: For $U\subset E$, let $F:~U\to E$ be a locally Lipschitz map, and let $I$ be an interval contained in $\mathbb{R}$, not necessarily compact. If $\varphi_{1}, \varphi_{2}:I\to E$ are two exact local solutions to [\[Ap-1\]](#Ap-1){reference-type="eqref" reference="Ap-1"}, then they are identical in the entire interval $I$.* **Remark 6**. *This lemma has been used to guarantee the global well-posedness of the infinite Kuramoto model on the Banach space $(\ell^\infty,~\| \cdot \|_{\infty})$ in Proposition [Proposition 2](#P2.2){reference-type="ref" reference="P2.2"}.* Next, we present a differential version of Barbalat's lemma which has been used in the proof of Proposition [Proposition 4](#P5.1){reference-type="ref" reference="P5.1"}. **Lemma 13**.
**(Barbalat [@B])* Let $F: [0, \infty) \rightarrow \mathbb{R}$ be a continuously differentiable function satisfying the following two properties: $$\exists~\lim_{t \to \infty} F(t) \quad \mbox{and} \quad F^{\prime}~\mbox{is uniformly continuous}.$$ Then, $F^{\prime}$ tends to zero, as $t \to \infty$.* **Lemma 14**. **[@Ru]* Let $F_{n}$ be a sequence of real-valued differentiable functions on $\left[a,b\right]$ with the following two properties:* 1. *(Pointwise convergence at one-point): For some $x_0 \in [a, b]$, $$\exists~ \lim_{n \to \infty} F_n(x_0).$$* 2. *(Uniform convergence of derivatives): the sequence $\left\{ F_{n}^{\prime} \right\}$ converges uniformly on $\left[a,b\right]$.* *Then, $\left\{ F_{n}\right\}$converges uniformly on $\left[a,b\right]$ to a function $F$ and $$F^{\prime}(x) =\lim_{n\rightarrow\infty}F^{\prime}_{n}(x)\quad\left(a\le x\le b\right).$$* **Remark 7**. *The proof can be found in Theorem 7.17 of [@Ru].* Finally, we state the Lojasiewicz gradient inequality on a Hilbert space $(H, \langle \cdot, \cdot \rangle)$: We set $$\| u \| := \sqrt{\langle u, u \rangle}, \quad u \in H.$$ **Lemma 15**. **(Lojasiewicz gradient inequality [@H-J])* For an open neighborhood $U \subset H$ of $0 \in H$, let $F:U \to {\mathbb R}$ be an analytic function such that $$F(0)=0, \quad DF(0)=0.$$ Suppose $F$ satisfies the following two conditions:* 1. *${\mathcal N}:= {\ker}(D^{2}F(0))$ is finite-dimensional.* 2. *There is $\rho>0$ such that $$\label{New-A} \| D^{2}F(0)u \| \ge \rho \| u \|, \quad \forall~u\in V\cap N^{\perp},$$ where ${\mathcal N}^{\perp}$stands for the orthogonal complement of ${\mathcal N}$.* *Then there exist $\theta\in\left(0,1/2\right)$, a neighborhood $W$ of $0$ and $c>0$ such that $$\| DF(u) \| \ge c \Big| F(u) \Big|^{1-\theta}, \quad \forall~u \in W.$$* **Remark 8**. *See the discussions right after the proof of Theorem [Theorem 1](#T3.1){reference-type="ref" reference="T3.1"}.* # Proof of Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"} {#App-B} In this appendix, we provide a lengthy proof of Lemma [Lemma 4](#L3.1){reference-type="ref" reference="L3.1"}. Since $\displaystyle\overline{\theta}(0) = \sup_{i \in {\mathbb N}} \theta_i^{\text {in}}$, the following dichotomy holds: $$\begin{aligned} && (1)~\mbox{$\overline{\theta}(0)$ is an isolated point of the set $\{ \theta_i^{\text {in}}\}_{i \in {\mathbb N}}$,} \\ &&(2) ~\mbox{$\overline{\theta}(0)$ is a limit point of the set $\{ \theta_i^{\text {in}} \}_{i \in {\mathbb N}}$.}\end{aligned}$$ In the sequel, we show that the desired assertions hold for each case. (1) First of all, suppose that $\overline{\theta}(0)$ is not a limit point of the set $\{ \theta_i^{\text {in}}\}_{i \in {\mathbb N}}$. 
In this case, the index set $${\mathcal I}_{\overline{\theta}(0)}:=\left\{i\in \mathbb{N}:\theta_i^{\text {in}}= \overline{\theta}(0) \right\}$$ is nonempty (possibly infinite) subset of $\mathbb{N}$, and there exists $\varepsilon>0$ such that $${\mathcal D}(\Theta^{\text {in}}) + \varepsilon < \pi \quad \mbox{and} \quad {\mathcal I}_{( \overline{\theta}(0)-\varepsilon, \overline{\theta}(0))} :=\left\{i \in {\mathbb N}:~\theta_i^{\text {in}}\in ( \overline{\theta}(0)-\varepsilon, \overline{\theta}(0)) \right\}=\emptyset.$$  \ $\diamond$ Case A.1: Let $i \in {\mathbb N}$ be an index such that $$\theta_i^{\text {in}}\leq {\overline \theta}(0) - \varepsilon.$$ We set $$\delta_1 := \frac{\varepsilon}{\|K \|_{\infty,1}} > 0.$$ For $t \in (0, \delta_1)$, we use the above relation and Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} to obtain $$\label{C-1-1-0} \theta_i(t)\leq \theta_i^{\text {in}}+ \| K \|_{\infty, 1} t < {\overline \theta}(0) -\varepsilon+ \| K \|_{\infty,1} \cdot \frac{\varepsilon}{\|K \|_{\infty,1}} = {\overline \theta}(0).$$ $\diamond$ Case A.2: Let $i \in {\mathbb N}$ be an index such that $$\theta_i^{\text {in}}> \overline{\theta}(0) -\varepsilon.$$ In this case, it is easy to see that $\theta_i^{\text {in}}= \overline{\theta}(0)$ and $$\begin{aligned} \begin{aligned} \label{C-1-1-1} \frac{d^+}{dt} \Big|_{t = 0+} \theta_i &= \sum_{j\in\mathbb{N}} \kappa_{ij}\sin(\theta_j^{\text {in}}-\theta_i^{\text {in}}) =\sum_{j\in\mathbb{N}} \kappa_{ij}\sin(\theta_j^{\text {in}}- \overline{\theta}(0)) \\ &= \sum_{j\in {\mathcal I}_{\overline{\theta}(0)}} \kappa_{ij} \sin(\underbrace{\theta_j^{\text {in}}- \overline{\theta}(0)}_{=0}) + \sum_{j \notin {\mathcal I}_{\overline{\theta}(0)}}\kappa_{ij}\sin(\theta_j^{\text {in}}- \overline{\theta}(0)) \\ &= \sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\kappa_{ij}\sin(\theta_j^{\text {in}}- \overline{\theta}(0) ) \leq - \left(\sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\kappa_{ij}\right)\sin\varepsilon , \end{aligned}\end{aligned}$$ where $\frac{d^+}{dt}$ is the Dini derivative, and we used [\[C-1\]](#C-1){reference-type="eqref" reference="C-1"}, [\[C-1-1-1\]](#C-1-1-1){reference-type="eqref" reference="C-1-1-1"} and the relation: $$j \notin {\mathcal I}_{\overline{\theta}(0)} \quad \Longrightarrow \quad -\pi + \varepsilon < \theta_j^{\text {in}} - \overline{\theta}(0) \leq -\varepsilon.$$ On the other hand, for $i \in {\mathcal I}_{\overline{\theta}(0)}$ and sufficiently small $t$ satisfying $$\label{C-1-1-2} t < \delta_2:= \left(\sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\tilde{\kappa}_j\right)\cdot \frac{\sin\varepsilon}{2 \| K \|_{\infty,1}},$$ we apply [\[B-12\]](#B-12){reference-type="eqref" reference="B-12"} and [\[C-1-1-1\]](#C-1-1-1){reference-type="eqref" reference="C-1-1-1"} to obtain $$\begin{aligned} \begin{aligned} \label{C-1-1-3} \dot{\theta}_i(t) &\leq \dot{\theta}_i(0)+ \Big( 2\|K \|_{\infty, 1} \sum_{j\in\mathbb{N}} \kappa_{ij} \Big) t \\ &< \underbrace{\Bigg [ -\sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\kappa_{ij} + \Big( \sum_{j\in\mathbb{N}} \kappa_{ij} \Big) \Big(\sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\tilde{\kappa}_j \Big) \Bigg]}_{<~ 0 \quad \mbox{from}~(\mathcal{F}2)} \sin\varepsilon <0. 
\end{aligned}\end{aligned}$$ Note that $\displaystyle\sum_{j\notin {\mathcal I}_{\overline{\theta}(0)}}\tilde{\kappa}_j$ is always strictly positive except the trivial case ${\mathcal I}_{\overline{\theta}(0)}=\mathbb{N}$, which we have already excluded by using the condition $\mathcal{D}(\Theta^{\text {in}})>0$. This guarantees the positivity of the constant $\delta_2$ in [\[C-1-1-2\]](#C-1-1-2){reference-type="eqref" reference="C-1-1-2"}. Finally, we set $$\delta := \min \{ \delta_1, \delta_2 \} > 0,$$ and combine [\[C-1-1-0\]](#C-1-1-0){reference-type="eqref" reference="C-1-1-0"} and [\[C-1-1-3\]](#C-1-1-3){reference-type="eqref" reference="C-1-1-3"} to conclude the desired result. (2) Suppose that $\overline{\theta}(0)$ is a limit point of the set $\{ \theta_i^{\text {in}}\}_{i \in {\mathbb N}}$. In this case, we can exclude the singleton case with $\{ \theta_i^{\text {in}}\}_{i \in {\mathbb N}} =\left\{\overline{\theta}(0)\right\}$, since every neighborhood of the limit point $\overline{\theta}(0)$ must contain a point of $\{ \theta_i^{\text {in}}\}_{i \in {\mathbb N}}$ other than $\overline{\theta}(0)$ itself. Therefore, there exists a natural number $i_0$ such that $$\overline{\theta}(0)-\theta_{i_0}^{\text {in}}=:\varepsilon_0\in (0,\pi),$$ so that the index set $${\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0)-\varepsilon_0]} :=\left\{i:\underline{\theta}(0) \leq \theta_i^{\text {in}}\leq \overline{\theta}(0)-\varepsilon_0 \right\}$$ is nonempty. Now, we define an auxiliary function $f:(0,\varepsilon_0)\to \mathbb{R}$ which will appear in [\[C-1-1-6\]](#C-1-1-6){reference-type="eqref" reference="C-1-1-6"}: $$f(x) \equiv \left(\sum_{ j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0)-\varepsilon_0]} }\tilde{\kappa}_j\right)\frac{\sin(\varepsilon_0-x)}{\sin x},\quad \forall~x\in (0,\varepsilon_0).$$ Then, it is easy to see that $f$ is a monotone decreasing continuous function such that $$f^{\prime}(x) < 0, \quad x \in (0, \varepsilon_0), \quad \lim_{x\to 0+}f(x)=+\infty,\quad \lim_{x\to \varepsilon_0-}f(x)=0.$$ Therefore, there exists a unique $\varepsilon\in (0,\varepsilon_0)$ such that $$\label{C-1-1-5} f(\varepsilon)=2, \quad \mbox{i.e.,} \quad \left(\sum_{ j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0)-\varepsilon_0]} }\tilde{\kappa}_j\right) \sin(\varepsilon_0-\varepsilon) = 2 \sin \varepsilon.$$ $\diamond$ Case B.1: Let $i\in\mathbb{N}$ be an index such that $$\theta_i^{\text {in}}\leq {\overline \theta}(0) -\varepsilon.$$ Then, for every positive $t<\delta_1 = \frac{\varepsilon}{\|K \|_{\infty, 1}}$, we use Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} to see $$\theta_i(t)\leq \theta_i^{\text {in}}+ \| K \|_{\infty, 1} t < \overline{\theta}(0) -\varepsilon+ \| K \|_{\infty, 1} \delta_1 = \overline{\theta}(0).$$ $\diamond$ Case B.2:  Let $i \in {\mathbb N}$ be an index such that $$\theta_i^{\text {in}}> \overline{\theta}(0) -\varepsilon.$$ Note that the whole index set ${\mathbb N}$ can be rewritten as $$\begin{aligned} \begin{aligned} {\mathbb N} &= \{i~:~\underline{\theta}(0) \leq \theta_i^{\text {in}} \leq \overline{\theta}(0) - \varepsilon_0 \} \bigcup \{i:~ \overline{\theta}(0) - \varepsilon_0 < \theta_i^{\text {in}} \leq \overline{\theta}(0) - \varepsilon \} \\ &\hspace{0.5cm} \bigcup \{ i:~\overline{\theta}(0) - \varepsilon < \theta_i^{\text {in}} \leq \overline{\theta}(0) \} \\ &=: {{\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} \bigcup {\mathcal I}_{(\overline{\theta}(0) 
-\varepsilon_0, \overline{\theta}(0) - \varepsilon]} \bigcup {\mathcal I}_{(\overline{\theta}(0) - \varepsilon, \overline{\theta}(0)]}}. \end{aligned}\end{aligned}$$ Then, we have $$\begin{aligned} \begin{aligned} \label{C-1-1-6} &\frac{d^+\theta_i}{dt} \Big|_{t = 0+} \\ &\hspace{0.5cm} { = \sum_{j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} } \kappa_{ij} \sin(\theta_j^{\text {in}} - \theta_i^{\text {in}}) + \sum_{j \in {\mathcal I}_{(\overline{\theta}(0) -\varepsilon_0, \overline{\theta}(0) - \varepsilon]} } \kappa_{ij} \underbrace{\sin(\theta_j^{\text {in}} - \theta_i^{\text {in}})}_{\leq 0}} \\ & \hspace{0.5cm} + \sum_{j \in {\mathcal I}_{(\overline{\theta}(0) - \varepsilon, \overline{\theta}(0)]}} \kappa_{ij} \sin(\theta_j^{\text {in}} - \theta_i^{\text {in}}) \\ &\hspace{0.5cm} \leq \sum_{j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} } \kappa_{ij} \sin(\theta_j^{\text {in}} - \theta_i^{\text {in}}) + \sum_{j \in {\mathcal I}_{(\overline{\theta}(0) - \varepsilon, \overline{\theta}(0)]}} \kappa_{ij}\sin(\theta_j^{\text {in}} - \theta_i^{\text {in}}) \\ &\hspace{0.5cm} \leq -\sum_{j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} } \kappa_{ij} \sin(\varepsilon_0-\varepsilon) + \sum_{j \in {\mathcal I}_{(\overline{\theta}(0) - \varepsilon, \overline{\theta}(0)]}} \kappa_{ij}\sin \varepsilon \\ &\hspace{0.5cm} \leq -\sum_{j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} } \kappa_{ij} \sin(\varepsilon_0-\varepsilon) + \sum_{j \in {\mathbb N}} \kappa_{ij}\sin \varepsilon \\ &\hspace{0.5cm} \leq \sum_{j\in \mathbb{N}} \kappa_{ij}\left(\sin\varepsilon- \sum_{j \in {\mathcal I}_{[\underline{\theta}(0), \overline{\theta}(0) - \varepsilon_0]} } \tilde{\kappa}_j\sin(\varepsilon_0-\varepsilon) \right) \quad \mbox{from}~ (\mathcal{F}1)~\mbox{and}~\eqref{C-1-1-5} \\ &\hspace{0.5cm} =-\left(\sum_{j\in \mathbb{N}} \kappa_{ij}\right)\sin\varepsilon, \end{aligned} \end{aligned}$$ where we used [\[C-1-1-5\]](#C-1-1-5){reference-type="eqref" reference="C-1-1-5"} in the last equality. Now, we set $${\tilde \delta}_2 := \frac{\sin \varepsilon}{2 \| K \|_{\infty, 1}}.$$ Then, for $t <{\tilde \delta}_2$, we apply Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} as in Case A.2 to [\[C-1-1-6\]](#C-1-1-6){reference-type="eqref" reference="C-1-1-6"} to obtain $$\begin{aligned} \begin{aligned} \dot{\theta}_i(t) & \leq \dot{\theta}_i(0)+ \Big( 2 \|K \|_{\infty, 1} \sum_{j\in \mathbb{N}}\kappa_{ij} \Big) t < \dot{\theta}_i(0)+\left(\sum_{j\in \mathbb{N}} \kappa_{ij}\right)\sin\varepsilon\leq 0. \end{aligned}\end{aligned}$$ Finally, we define $\delta = \min\{ \delta_1, {\tilde \delta}_2 \}$ to get the desired result when $\overline{\theta}(0)$ is a limit point. $\qed$ # Proof of Theorem [Theorem 4](#T5.2){reference-type="ref" reference="T5.2"} {#App-C} By Proposition [Proposition 7](#P5.4){reference-type="ref" reference="P5.4"}, we may assume that there exists an entrance time $t_{0}$ such that $$\mathcal{D}(\Theta(t))\le\frac{\pi}{2}-\sin^{-1}\delta,\quad t\ge t_{0},$$ for some $0<\delta<1$. 
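Before carrying out the detailed argument, the following minimal numerical sketch is intended only as a sanity check of the exponential frequency synchronization asserted in Theorem [Theorem 4](#T5.2){reference-type="ref" reference="T5.2"}; the finite truncation, the geometric weights, and the small natural frequencies below are illustrative assumptions and not part of the proof.

```python
# Minimal numerical sketch (illustration only, not part of the proof):
# a heterogeneous Kuramoto ensemble on a truncated sender network.
# Assumptions: N oscillators, weights kappa_j ~ 2^{-j} normalized to 1,
# small natural frequencies, and initial phases confined to a half circle.
import numpy as np

rng = np.random.default_rng(1)
N = 100
kappa = 0.5 ** np.arange(1, N + 1)
kappa /= kappa.sum()
nu = 0.05 * rng.uniform(-1.0, 1.0, N)        # small heterogeneity, D(V) small
theta = rng.uniform(0.3, 1.2, N)             # initial phase diameter well below pi

def omega(th):
    # frequencies: dtheta_i/dt = nu_i + sum_j kappa_j sin(theta_j - theta_i)
    return nu + (kappa * np.sin(th[None, :] - th[:, None])).sum(axis=1)

dt = 0.01
for step in range(1, 3001):                  # forward Euler, enough for a sketch
    theta += dt * omega(theta)
    if step % 1000 == 0:
        w = omega(theta)
        print(step * dt, w.max() - w.min())  # frequency diameter D(W(t))
```

One expects the printed frequency diameters to decay roughly geometrically in time, consistent with the exponential bound stated above.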
For the derivation of desired exponential decay, we split its proof into four steps.\ $\bullet$ Step A (A differential inequality for some $\omega_{i}$): We set $$\overline{\omega}(t):=\sup_{n\in\mathbb{N}}\omega_{n}(t).$$ Let $i\in\mathbb{N}$ be an index such that $$\omega_{i}(t_{0})>\frac{3}{4}\overline{\omega}(t_{0}).\label{E-21-0-0}$$ Since $$\sum_{j\in\mathbb{N}}\kappa_{j}\omega_{j}(t_{0})=0\quad\mbox{with}~\left\{ \omega_{i}\right\} _{i\in\mathbb{N}}\not\equiv\boldsymbol{0},\label{E-21-0}$$ such $i$ exists. We set $$J(t_{0}):=\left\{ j\in\mathbb{N}:~\omega_{j}(t_{0})\ge\omega_{i}(t_{0})\right\} .\label{E-21-1}$$ Note that $$\begin{aligned} \begin{aligned}\dot{\omega}_{i}(t_{0}) & =\sum_{j\in\mathbb{N}}\kappa_{j}\cos\left(\theta_{i}(t_{0})-\theta_{j}(t_{0})\right)\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)\\ & =\sum_{j\in J(t_{0})}\kappa_{j}\cos\left(\theta_{i}(t_{0})-\theta_{j}(t_{0})\right)\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)\\ & \hspace{0.5cm}+\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\cos\left(\theta_{i}(t_{0})-\theta_{j}(t_{0})\right)\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)\\ & :=\mathcal{I}_{21}+\mathcal{I}_{22}. \end{aligned} \label{E-22}\end{aligned}$$ Below, we estimate the term $\mathcal{I}_{2i}$ with $i=1,2$.\ $\diamond$ (Estimate of $\mathcal{I}_{21}$): We use [\[E-21-0\]](#E-21-0){reference-type="eqref" reference="E-21-0"} and [\[E-21-1\]](#E-21-1){reference-type="eqref" reference="E-21-1"} to get $$\begin{aligned} \begin{aligned}\mathcal{I}_{21} & =\sum_{j\in J(t_{0})}\kappa_{j}\cos\left(\theta_{i}(t_{0})-\theta_{j}(t_{0})\right)\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)\\ & \le\sum_{j\in J(t_{0})}\kappa_{j}\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)=-\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{j}(t_{0})-\sum_{j\in J(t_{0})}\kappa_{j}\omega_{i}(t_{0}). \end{aligned} \label{E-23}\end{aligned}$$ $\diamond$ (Estimate of $\mathcal{I}_{22}$): Again we use [\[E-21-0\]](#E-21-0){reference-type="eqref" reference="E-21-0"} to obtain $$\begin{aligned} \begin{aligned}\mathcal{I}_{22} & =\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\cos\left(\theta_{i}(t_{0})-\theta_{j}(t_{0})\right)\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)\\ & \le\delta\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)=-\delta\sum_{j\in J(t_{0})}\kappa_{j}\omega_{j}(t_{0})-\delta\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{i}(t_{0}), \end{aligned} \label{E-24}\end{aligned}$$ the equality in [\[E-24\]](#E-24){reference-type="eqref" reference="E-24"} holds by [\[E-21-0\]](#E-21-0){reference-type="eqref" reference="E-21-0"}. 
In [\[E-22\]](#E-22){reference-type="eqref" reference="E-22"}, we combine all the estimates [\[E-23\]](#E-23){reference-type="eqref" reference="E-23"} and [\[E-24\]](#E-24){reference-type="eqref" reference="E-24"} to get $$\begin{aligned} \dot{\omega}_{i}(t_{0}) & \le-\sum_{j\in J(t_{0})}\kappa_{j}\omega_{i}(t_{0})-\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{j}(t_{0})-\delta\sum_{j\in J(t_{0})}\kappa_{j}\omega_{j}(t_{0})-\delta\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{i}(t_{0})\\ & =-\sum_{j\in J(t_{0})}\kappa_{j}\omega_{i}(t_{0})-\delta\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{i}(t_{0})-(1-\delta)\sum_{j\in\mathbb{N}\setminus J(t_{0})}\kappa_{j}\omega_{j}(t_{0})\\ & =-\delta\|K\|_{\infty,1}\omega_{i}(t_{0})+(1-\delta)\sum_{j\in J(t_{0})}\kappa_{j}\left(\omega_{j}(t_{0})-\omega_{i}(t_{0})\right)<-\delta\|K\|_{\infty,1}\omega_{i}(t_{0}).\end{aligned}$$ $\bullet$ Step B  (Estimate $\omega_{i}(t)$ for $t\ll1$): For $i\in\mathbb{N}$ satisfying the relation [\[E-21-0-0\]](#E-21-0-0){reference-type="eqref" reference="E-21-0-0"}, we set $$C_{1}:=\frac{\delta}{16\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+2\|K\|_{\infty,1}\right)}\min\left\{ 1,\frac{1}{\left\Vert K\right\Vert _{\infty,1}}\right\} .\label{E-25}$$ In the sequel, we estimate $\omega_{i}$ in the time interval $[t_{0},t_{0}+C_{1}\overline{\omega}(t_{0})]$.\ First, we use Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} and [\[E-21-0-0\]](#E-21-0-0){reference-type="eqref" reference="E-21-0-0"} to find $$\begin{aligned} \begin{aligned}\dot{\omega}_{i}(t) & \le\dot{\omega}_{i}(t_{0})+2\|K\|_{\infty,1}\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+\|K\|_{\infty,1}\right)(t-t_{0})\\ & \le-\frac{3\delta}{4}\|K\|_{\infty,1}\overline{\omega}(t_{0})+2\|K\|_{\infty,1}\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+\|K\|_{\infty,1}\right)(t-t_{0})\\ & \le-\frac{3\delta}{8}\|K\|_{\infty,1}\overline{\omega}(t_{0}), \end{aligned}\end{aligned}$$ where we used [\[E-25\]](#E-25){reference-type="eqref" reference="E-25"}. This yields $$\omega_{i}(t)\le\overline{\omega}(t_{0})-\frac{3\delta}{8}\|K\|_{\infty,1}\overline{\omega}(t_{0})\left(t-t_{0}\right).\label{E-26}$$ Next, we consider an index $i\in\mathbb{N}$ such that $$\omega_{i}(t_{0})\le\frac{3}{4}\overline{\omega}(t_{0}).$$ In this case, we use [\[E-25\]](#E-25){reference-type="eqref" reference="E-25"} to get $$\begin{aligned} \omega_{i}(t) & \le\omega_{i}(t_{0})+2\|K\|_{\infty,1}\left(\left\Vert \mathcal{V}\right\Vert _{\infty}+2\|K\|_{\infty,1}\right)\left(t-t_{0}\right)\\ & \le\frac{3}{4}\overline{\omega}(t_{0})+\frac{1}{8}\overline{\omega}(t_{0}).\end{aligned}$$ Hence we obtain $$\overline{\omega}(t)\le\max\left\{ \frac{7}{8},\left(1-\frac{3\delta}{8}\|K\|_{\infty,1}(t-t_{0})\right)\right\} \overline{\omega}(t_{0})\label{E-27}$$ for $t_{0}\le t<t_{0}+C_{1}\overline{\omega}(t_{0})$.\ $\bullet$ Step C ($\overline{\omega}(t)$ is nonincreasing for $t\ge t_{0}$): Let $t_{1}>t_{0}$ and $$I=\left\{ t:\overline{\omega}(t)\le\overline{\omega}(t_{1})\right\}$$ in $[t_{1},\infty)$. By Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"}, $\ddot{\Theta}=\dot{\mathcal{W}}$ is uniformly bounded and $\omega_{i}(t)$ is globally Lipschitz. Applying similar process with Lemma [Lemma 2](#L2.2){reference-type="ref" reference="L2.2"} gives $\overline{\omega}$ is also Lipschitz. Hence $I$ is nonempty closed subset of $[t_{1},\infty)$, and Step B proves that $I$ is open in $[t_{1},\infty)$. 
Hence $\overline{\omega}(t)$ is globally decreasing.\ $\bullet$ Final step ($\overline{\omega}$ is exponentially decreasing for $t\ge t_{0}$.): Let $$t_{n}=t_{n-1}+C_{1}\overline{\omega}(t_{n-1}),\quad n\in\mathbb{N}.$$ Then, $$\begin{aligned} & \left(1-\frac{3\delta}{8}\|K\|_{\infty,1}C_{1}\overline{\omega}(t_{n-1})\right)<\frac{7}{8}\iff\frac{1}{3\delta\|K\|_{\infty,1}C_{1}}<\overline{\omega}(t_{n-1})\end{aligned}$$ gives $$\overline{\omega}(t_{n})<\frac{7}{8}\overline{\omega}(t_{n-1})\iff\frac{1}{3\delta\|K\|_{\infty,1}C_{1}}<\overline{\omega}(t_{n-1})$$ in [\[E-27\]](#E-27){reference-type="eqref" reference="E-27"}. Combined with $$t_{n+1}-t_{n}<t_{n}-t_{n-1}\iff\overline{\omega}(t_{n})<\overline{\omega}(t_{n-1}),$$ we can conclude that $$\overline{\omega}(t+n(t_{1}-t_{0}))<\overline{\omega}(t_{n})<\left(\frac{7}{8}\right)^{n}\overline{\omega}(t_{0})$$ for $$n\in\left\{ m\in\mathbb{N}:\frac{1}{3\delta\|K\|_{\infty,1}C_{1}}<\overline{\omega}(t_{m})\right\} .$$ Now it is enough to show the conclusion with assuming $\frac{1}{3\delta\|K\|_{\infty,1}C_{1}}\ge\overline{\omega}(t_{0})$. We choose $k\in\mathbb{N}$ such that $$\frac{1}{2^{k-1}}\ge\overline{\omega}\left(t_{0}\right)>\frac{1}{2^{k}}.$$ Suppose that $$\overline{\omega}\left(t\right)>\frac{1}{2^{k}}\quad\mbox{for \ensuremath{t>t_{0}}}.$$ By induction on $n$, we can show that $$\overline{\omega}(t)\le\overline{\omega}(\tilde{t}_{0,n})-\frac{3}{32}\|K\|_{\infty,1}\cdot\frac{1}{2^{k}}\cdot\left(t-\tilde{t}_{0,n}\right),\quad\tilde{t}_{0,n}\le t\le\tilde{t}_{0,n+1},$$ where $\tilde{t}_{0,n}:=t_{0}+n\cdot\frac{C_{1}}{2^{k}}$.\ This implies $$\overline{\omega}(t)\le\overline{\omega}(t_{0})-\frac{3}{32}\|K\|_{\infty,1}\cdot\frac{1}{2^{k}}\cdot n,\quad\forall~n\in\mathbb{N},$$ which is contradictory to $\overline{\omega}\left(t\right)>\frac{1}{2^{k}}$.\ Furthermore, we have $$\overline{\omega}(t)\le\overline{\omega}(t_{0})-\frac{3}{32}\|K\|_{\infty,1}\cdot\frac{1}{2^{k}}\cdot n\le\frac{1}{2^{k-1}}-\frac{3}{32}\|K\|_{\infty,1}\cdot\frac{1}{2^{k}}\cdot n\le\frac{1}{2^{k}},$$ for $n\ge\lfloor\frac{32}{3\|K\|_{\infty,1}}\rfloor+1.$\ This implies $$\inf\left\{ t:\overline{\omega}(t)\le\frac{1}{2^{k}}\right\} \le t_{0}+\Big\lfloor\frac{32}{3\|K\|_{\infty,1}}\Big\rfloor+1.$$ Inductively, we can derive $$0<\overline{\omega}(t)\le\frac{1}{2^{n}}\overline{\omega}(t_{0}),\qquad t\ge t_{0}+n\left(\Big\lfloor\frac{32}{3\|K\|_{\infty,1}}\Big\rfloor+1\right).\label{E-29}$$ Similarly, we set $$\underline{\omega}(t_{0}):=\inf_{n\in\mathbb{N}}\omega_{n}(t_{0}).$$ This yields $$0>\underline{\omega}(t)\ge\frac{1}{2^{n}}\underline{\omega}(t_{0}),\quad\mbox{for}~t\ge t_{0}+n\left(\Big\lfloor\frac{32}{3\|K\|_{\infty,1}}\Big\rfloor+1\right).\label{E-30}$$ Finally, we combine [\[E-29\]](#E-29){reference-type="eqref" reference="E-29"} and [\[E-30\]](#E-30){reference-type="eqref" reference="E-30"} to find $$\mathcal{D}\left(\mathcal{W}(t)\right)=\overline{\omega}(t)-\underline{\omega}(t)\le\frac{1}{2^{n}}\left(\overline{\omega}(t_{0})-\underline{\omega}(t_{0})\right)=\frac{1}{2^{n}}\mathcal{D}\left(\mathcal{W}(t_{0})\right),$$ for $t\ge t_{0}+n\left(\lfloor\frac{32}{3\|K\|_{\infty,1}}\rfloor+1\right).$ In this case, we have exponential synchronization: $$\mathcal{D}\left(\mathcal{W}(t)\right)\le\mathcal{D}\left(\mathcal{W}(t_{0})\right)\cdot\exp\left[-\frac{3\|K\|_{\infty,1}\log2}{32}\left(t-t_{0}\right)+1\right],\quad t\ge t_{0}.$$ 99 Acebron, J. A., Bonilla, L. L., Pérez Vicente, C. J. P., Ritort, F. 
and Spigler, R.: *The Kuramoto model: A simple paradigm for synchronization phenomena*. Rev. Mod. Phys. **77**, 137-185 (2005) Albi, G., Bellomo, N., Fermo, L., Ha, S.-Y., Kim, J., Pareschi, L., Poyato, D. and Soler, J.: *Vehicular traffic, crowds and swarms: From kinetic theory and multiscale methods to applications and research perspectives*. Math. Models Methods Appl. Sci. **29**, 1901-2005 (2019) Ball, J. M., Carr, J. and Penrose, O.: *The Becker-Doring cluster equations: basic properties and asymptotic behaviour of solutions*. Comm. Math. Phys. **104**, 657-692 (1986) Barbalat, I.: *Systèmes déquations différentielles doscillations non Linéaires*. Rev. Math. Pures Appl. **4**, 267--270 (1959) Benedetto, D., Caglioti, E. and Montemagno, U.: *On the complete phase synchronization for the Kuramoto model in the mean-field limit*. Commun. Math. Sci. **13**, 1775--1786 (2015) Bramburger, J.: *Stability of infinite systems of coupled oscillators via random walks on weighted graphs.* Trans. Amer. Math. Soc. **372**, 1159-1192 (2019) Bronski, J., Deville, L. and Park, M. J.: *Fully synchronous solutions and the synchronization phase transition for the finite-$N$ Kuramoto model*. Chaos **22**, 033133 (2012) Brezis, H.: *Functional Analysis, Sobolev Spaces and PDE*. pp. 184--185. Springer, New York (2011) Cartan, H.: *Differential calculus*. International studies in mathematics, Hermann (1983) Choi, Y.-P., Ha, S.-Y., Jung, S. and Kim, Y.: *Asymptotic formation and orbital stability of phase-locked states for the Kuramoto model*. Phys. D. **241**, 735--754 (2012) Chopra, N. and Spong, M. W.: *On exponential synchronization of Kuramoto oscillators*. IEEE Trans. Automatic Control **54**, 353--357 (2009) Cucker, F. and Smale, S.: *Emergent behavior in flocks*. IEEE Trans. Automat. Control **52**, 852-862 (2007) Dong, J.-G. and Xue, X.: *Finite-time synchronization of Kuramoto-type oscillators*. Nonlinear Anal. Real World Appl. **26**, 133--149 (2015) Dong, J.-G. and Xue, X.: *Synchronization analysis of Kuramoto oscillators*. Commun. Math. Sci. **11**, 465--480 (2013) Dörfler, F. and Bullo, F.: *On the critical coupling for Kuramoto oscillators.* SIAM. J. Appl. Dyn. Syst. **10**, 1070--1099 (2011) Dörfler, F. and Bullo, F.: *Synchronization in complex networks of phase oscillators: A survey.* Automatica **50**, 1539--1564 (2014) Dubovski, P. B.: *Mathematical Theory of Coagulation.* Seoul National University, Research Institute of Mathematics. (1994) Ha, S.-Y., Ha, T. and Kim, J.-H.: *On the complete synchronization of the Kuramoto phase model*. Phys. D. **239**, 1692--1700 (2010) Haraux, A., and Jendoubi, M. A.: *The Lojasiewicz gradient inequality in the infinite-dimensional Hilbert space framework*. J. Funct. Anal. **260**, 2826--2842 (2011) Ha, S.-Y., Ko, D., Park, J. and Zhang, X.: *Collective synchronization of classical and quantum oscillators*. EMS Surv. Math. Sci. **3**, 209--267 (2016) Ha, S.-Y., Kim, H.-K. and Ryoo, S.-W.: *On the finiteness of collisions and phase-locked states for the Kuramoto model*. J. Stat. Phys. **163**, 1394--1424 (2016) Ha, S.-Y., Lattanzio, C., Rubino, B. and Slemrod, M.: *Flocking and synchronization of particle models.* Quart. Appl. Math. **69**, 91-103 (2011) Ha, S.-Y., Li, Z., Xue, X.: *Formation of phase-locked states in a population of locally interacting Kuramoto oscillators*. Journal of Differential Equations **255**, 3053--3070 (2013) Ha, S.-Y. and Liu, J.-G.: *A simple proof of Cucker-Smale flocking dynamics and mean field limit*. Commun. Math. Sci. 
**7**, 297--325 (2009) Ha, S.-Y., Noh, S. E. and Park, J.: *Synchronization of Kuramoto oscillators with adaptive couplings*. SIAM J. Appl. Dyn. Syst. **15**, 162--194 (2016) Ha, S.-Y., Noh, S. E. and Park, J.: *Practical synchronization of generalized Kuramoto systems with an intrinsic dynamics.* Netw. Heterog. Media. **10**, 787-807 (2015) Ha, S.-Y., Ryoo, S.-W.: *Asymptotic phase-locking dynamics and critical coupling strength for the Kuramoto model*. Comm. Math. Phys. **377**, 811--857 (2020) Ha, S.-Y. and Tadmor, E.: *From particle to kinetic and hydrodynamic description of flocking*. Kinetic Relat. Models **1**, 415--435 (2008) Hirsch, M. W., Smale, S. and Devaney, R. L.: *Differential equations, dynamical systems, and an introduction to chaos*. Academic press (2012) Kim, J., Yang, J., Kim, J. and Shim, H.: *Practical Consensus for Heterogeneous Linear Time-Varying Multi-Agent Systems.* In Proceedings of 12th International Conference on Control, Automation and Systems, Jeju Island, Korea (2012) Kuramoto, Y.: *Self-entrainment of a population of coupled non-linear oscillators*. International Symposium on Mathematical Problems in Theoretical Physics, 420--422 (1975) Lancellotti, C.: *On the Vlasov limit for systems of nonlinearly coupled oscillators without noise*. Transport theory and statistical physics. **34** , 523--535 (2005) Li, Z. and Xue, X.: *Convergence of analytic gradient-type systems with periodicity and its applications in Kuramoto models*. Appl. Math. Lett. **90**, 194--201 (2019) Ma, M., Zhou, J. and Cai, J.: *Practical synchronization of second-order nonautonomous systems with parameter mismatch and its applications.* Nonlinear Dynam. **69**, 1285--1292 (2012) Ma, M., Zhou, J. and Cai, J.: *Practical synchronization of non autonomous systems with uncertain parameter mismatch via a single feedback control.* Int. J. Mod Phys C **23**, 1250073 14pp (2012) Peskin, C. S.: *Mathematical Aspects of Heart Physiology*. Courant Institute of Mathematical Sciences, New York University, 268-278, New York (1975) Pikovsky, A., Rosenblum, M. and Kurths, J.: *Synchronization: A universal concept in nonlinear sciences.* Cambridge: Cambridge University Press (2001) Rudin, W.: *Principles of Mathematical Analysis.* McGraw-Hill (1976) Slemrod, M.: *Trend to equilibrium in the Becker-Doring cluster equations*. Nonlinearity **2**, 429--443 (1989) Strogatz, S. H.: *From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators*. Phys. D. **143**, 1--20 (2000) Verwoerd, M. and Mason, O.: *A convergence result for the Kurmoto model with all-to-all couplings.* SIAM J. Appl. Dyn. Syst. **10**, 906--920 (2011) Verwoerd, M. and Mason, O.: *On computing the critical coupling coefficient for the Kuramoto model on a complete bipartite graph.* SIAM J. Appl. Dyn. Syst. **8**, 417--453 (2009) Verwoerd, M. and Mason, O.: *Global phase-locking in finite populations of phase-coupled oscillators.* SIAM J. Appl. Dyn. Syst. **7**, 134--160 (2008) Wang, X., Xue, X.: *The flocking behavior of the infinite-particle Cucker-Smale model*. Proc. Amer. Math. Soc., **150**, 2165--2179 (2022) Winfree, A. T.: *The geometry of biological time*. Springer, New York (1980) Winfree, A. T.: *Biological rhythms and the behavior of populations of coupled oscillators*. J. Theor. Biol. **16**, 15--42 (1967)
--- abstract: | We show that the one-parameter deformation $\mathcal A_{q,t}$ of the skein algebra $Sk_q(\Sigma_2)$ of a genus two surface suggested in [@ArthamonovShakirov'2019] is flat. We solve the word problem in the algebra and describe a monomial basis. In addition, we calculate the classical limit $\mathcal A_{q=1,t}$ of the algebra and prove that it is a one-parameter flat Poisson deformation of the coordinate ring $\mathcal A_{q=t=1}$ of an $SL(2,\mathbb C)$-character variety of a genus two surface. As a byproduct, we obtain a remarkably simple presentation in terms of generators and relations for the coordinate ring $\mathcal A_{q=t=1}$ of a genus two character variety. address: Department of Mathematics, University of Toronto author: - S. Arthamonov bibliography: - references.bib title: Classical Limit of genus two DAHA --- # Introduction With every surface $\Sigma$ and linear algebraic group $G$ we can associate an affine variety $\mathrm{Hom}(\pi_1(\Sigma),G)$ of representations of the fundamental group. The coordinate ring $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma),G))$ of the representation variety comes equipped with a $G$-action corresponding to simultaneous conjugation by an element $g\in G$. The spectrum of the $G$-invariant subring $$\begin{aligned} \mathrm{Hom}(\pi_1(\Sigma),G)\mathbin{ \mathchoice{/\mkern-6mu/}% \displaystyle {/\mkern-6mu/}% \textstyle {/\mkern-5mu/}% \scriptstyle {/\mkern-5mu/}}G:=\mathrm{Spec}\,(\mathcal O(\mathrm{Hom}(\pi_1(\Sigma),G))^G)\end{aligned}$$ is commonly referred to as the *$G$-character variety of $\Sigma$*. When the surface $\Sigma$ is oriented, the coordinate ring of the $G$-character variety comes equipped with a natural Poisson bracket [@AtiyahBott'1983; @Goldman'1986; @GuruprasadHuebschmannJeffreyWeinstein'1997]. For $G=SL(2,\mathbb C)$ the quantization of this Poisson algebra is known as the skein algebra $Sk_q(\Sigma)$ of the surface [@Turaev'1991]. In other words, the skein algebra $Sk_q(\Sigma)$ and the commutative coordinate ring of the $SL(2,\mathbb C)$-character variety share the same basis as vector spaces; however, the structure constants of the skein algebra depend on the additional parameter $q$. At $q=1$ the structure constants of the skein algebra coincide with those of the commutative coordinate ring, while the linear part of the expansion of the structure constants as a series in $(q-1)$ is controlled by the Poisson bracket. For the case of a torus $\Sigma_1$, the skein algebra $Sk_q(\Sigma_1)$ can be further deformed into the $A_1$ spherical Double Affine Hecke Algebra (DAHA) [@Cherednik'1995]. Now, spherical DAHA depends on two parameters $q,t$, where the extra parameter $t$ corresponds to the Macdonald deformation in the theory of orthogonal polynomials [@Macdonald'1979]. This raises the natural question of studying and classifying flat deformations of surface skein algebras up to isomorphism. It is known that formal deformations are controlled by the Hochschild cohomology [@Gerstenhaber'1964]. This allows one, in principle, to classify flat deformations of quantum character varieties up to isomorphism. The question is well-studied in the genus one case [@Oblomkov'2004; @Sahi'1999; @NoumiStockman'2000; @Stockman'2003], yet no examples of deformations with analytic dependence on the parameter had previously been constructed beyond genus one. In [@ArthamonovShakirov'2019] together with Sh.
Shakirov we have proposed an algebra $A_{q,t}$ of $q$-difference operators with two generic parameters $q,t$ and proved that the Mapping Class Group of a genus two surface acts by automorphisms of this algebra. It was subsequently proved by J. Cooke and P. Samuelson [@CookeSamuelson'2021] that the $q=t$ specialization of this algebra is isomorphic to the genus two skein algebra $A_{q=t}\simeq Sk_q(\Sigma_2).$ Results of the current work will, in particular, imply that $A_{q,t}$ provides an example of a nontrivial flat deformation of the genus two skein algebra. ## Outline In this paper we introduce an alternative definition of the algebra $\mathcal A_{q,t}$ in terms of generators and relations and show that $\mathcal A_{q,t}\simeq A_{q,t}$ is isomorphic to the original algebra of $q$-difference operators from [@ArthamonovShakirov'2019]. Our presentation of $\mathcal A_{q,t}$ allows us to solve the word problem and prove that $\mathcal A_{q,t}$ is a flat two-parameter deformation of the commutative coordinate ring of the character variety $$\begin{aligned} \mathcal A_{q=t=1}\simeq\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}.\end{aligned}$$ We calculate the classical limit $q\rightarrow1$ of $\mathcal A_{q,t}$ and prove that the resulting commutative algebra $\mathcal A_{q=1,t}$ is a flat Poisson deformation of $\mathcal A_{q=t=1}$. We show that this deformed commutative algebra $\mathcal A_{q=1,t}$ is also equipped with a $\mathrm{Mod}(\Sigma_2)$-action by Poisson automorphisms. All four algebras involved: $$\begin{aligned} \mathcal A_{q,t},\quad\mathcal A_{q=t}\simeq Sk_q(\Sigma_2),\quad\mathcal A_{q=1,t},\quad\mathcal A_{q=t=1}\end{aligned}$$ share the same monomial basis which we describe in the paper. In all four cases we construct an algorithm which brings an expression in the generators to the canonical form[^1]. For the two commutative algebras, this simply amounts to calculating a Groebner basis of the defining ideal for an appropriate choice of monomial ordering. For the two noncommutative algebras we prove a PBW-like theorem which allows us to relate the monomial bases of $\mathcal A_{q,t}$ and $\mathcal A_{q=t}$ to their commutative counterparts. As a byproduct we find a remarkably simple presentation of the classical $SL(2,\mathbb C)$-character variety of a genus two surface which has not appeared in the literature. In addition, we prove that the Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of a closed genus two surface acts by automorphisms of $\mathcal A_{q,t}$ which induce Poisson automorphisms of $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$. We show that the action of $\mathrm{Mod}(\Sigma_2)$ coincides with the natural action of the Mapping Class Group on the character variety $\mathcal A_{q=t=1}=\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}$. ## Finite-dimensional representations and relation with TQFT The skein algebra $Sk_q(\Sigma_g)$ of a closed surface admits finite dimensional representations when the parameter $q=\mathrm e^{\frac{2\pi\mathrm i}{k+2}},\;k\in\mathbb Z_{\geqslant0}$ is a root of unity. A family of representations for each $k\in\mathbb Z_{\geqslant0}$ can be constructed from $SU(2)$ Chern-Simons topological quantum field theory [@KirillovReshetikhin'1989; @ReshetikhinTuraev'1990; @Turaev'1994]. The mapping class group $Mod(\Sigma_g)$ of the surface acts by automorphisms of the skein algebra $Sk_q(\Sigma_g)$.
On the level of finite dimensional representations of $Sk_q(\Sigma_g)$ coming from TQFT, this action corresponds to simultaneous conjugation by matrices associated to mapping classes [@MassbaumRoberts'1993; @MasbaumVogel'1994; @Funar'1999] (See also [@Massbaum'2003] for a review). Thus, $SU(2)$ Chern-Simons theory gives rise to a family of projective representations of the Mapping Class Group $\mathrm{Mod}(\Sigma_g)$ for all $g,k\in\mathbb Z_{\geqslant0}$. On the other hand, the $A_1$-spherical Double Affine Hecke Algebra, which realizes a one-parameter deformation of the genus one skein algebra, admits finite dimensional representations at special values of the parameters satisfying $q^kt^2=1$, where $k\in\mathbb Z_{\geqslant0}$ is a nonnegative integer [@Cherednik'2005]. In [@AganagicShakirov'2015] M. Aganagic and Sh. Shakirov have proposed a remarkable interpretation of these finite-dimensional representations from the point of view of Topological Quantum Field Theory. Namely, they have constructed a refinement of the genus one algebra of knot operators of Chern-Simons topological quantum field theory and have computed the corresponding family of projective representations of the mapping class group of a torus $\mathrm{Mod}(\Sigma_1)\simeq SL(2,\mathbb Z)$. This manuscript is a continuation of our joint project with Shamil Shakirov [@ArthamonovShakirov'2020; @ArthamonovShakirov'2019] where we have studied the Macdonald deformation of a TQFT representation of the genus two Mapping Class Group. In [@ArthamonovShakirov'2020] we have proposed a conjectural one-parameter deformation of the family of projective TQFT representations of $\mathrm{Mod}(\Sigma_2)$ labelled by Chern-Simons level $k\in\mathbb Z_{\geqslant0}$ and computed the corresponding algebras of knot operators. In [@ArthamonovShakirov'2019] we have proved that our formulas proposed in [@ArthamonovShakirov'2020] do define a family of projective representations of $\mathrm{Mod}(\Sigma_2)$ by introducing the algebra of $q$-difference operators $A_{q,t}$ equipped with an action of $\mathrm{Mod}(\Sigma_2)$ by automorphisms. The algebra of knot operators at level $k\in\mathbb Z_{\geqslant 0}$ provides a finite dimensional representation of $A_{q,t}$ at the special values of parameters satisfying $q^kt^2=1$. Based on the complete analogy with the genus one case we referred to $A_{q,t}$ as a *genus two generalization of $A_1$-spherical Double Affine Hecke Algebra.* ## Representation in $q$-difference operators The algebra $A_{q,t}$ introduced in [@ArthamonovShakirov'2019] can be defined as an associative algebra generated by six $q$-difference operators: $$\begin{aligned} A_{q,t}=\mathbf k\langle\hat O_{B_{12}}, \hat O_{B_{23}}, \hat O_{B_{13}}, \hat O_{A_1}, \hat O_{A_2}, \hat O_{A_3}\rangle \label{eq:AqtDifferenceOperatorsAlgebra}\end{aligned}$$ corresponding to the six cycles on the genus two surface shown in Figure [1](#fig:qDiffGeneratingCycles){reference-type="ref" reference="fig:qDiffGeneratingCycles"}. ![Cycles corresponding to six $q$-difference operators](qdiff-generating-cycles.pdf){#fig:qDiffGeneratingCycles width="250pt"} Here $\mathbf k=\mathbb C(q^{\frac14},t^{\frac14})$ is the ground field.[^2] The operators act on the subspace of Laurent polynomials in three variables $$\begin{aligned} \mathcal H=\mathbf k[X_{12}+X_{12}^{-1},X_{23}+X_{23}^{-1},X_{13}+X_{13}^{-1}]=\mathbf k[X_{12}^{\pm1},X_{23}^{\pm1},X_{13}^{\pm1}]^{S_2\times S_2\times S_2}.
%\label{eq:HilbertSpace}\end{aligned}$$ The first three operators act by multiplication $$\begin{aligned} \label{eq:Mult1}{\hat O}_{B_{12}} \ = \ X_{12} + X_{12}^{-1} \\[10pt] \label{eq:Mult2}{\hat O}_{B_{13}} \ = \ X_{13} + X_{13}^{-1} \\[10pt] \label{eq:Mult3}{\hat O}_{B_{23}} \ = \ X_{23} + X_{23}^{-1}\end{aligned}$$ [\[eq:Mult\]]{#eq:Mult label="eq:Mult"} while the remaining three operators have the following form $$\begin{aligned} {\hat O}_{A_1} \ = \ \sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - t^{\frac{1}{2}} X_{23} X_{12}^a X_{13}^b)(1 - t^{\frac{1}{2}} X_{23}^{-1} X_{12}^a X_{13}^b)}{t^{\frac{1}{2}} X_{12}^{a} X_{13}^b (X_{12} - X_{12}^{-1})(X_{13} - X_{13}^{-1})} \ {\hat \delta}_{12}^{a} {\hat \delta}_{13}^{b} \label{eq:Hamiltonian1}\\[5pt] {\hat O}_{A_2} \ = \ \sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - t^{\frac{1}{2}} X_{13} X_{12}^a X_{23}^b)(1 - t^{\frac{1}{2}} X_{13}^{-1} X_{12}^a X_{23}^b)}{t^{\frac{1}{2}} X_{12}^{a} X_{23}^b (X_{12} - X_{12}^{-1})(X_{23} - X_{23}^{-1})} \ {\hat \delta}_{12}^{a} {\hat \delta}_{23}^{b} \label{eq:Hamiltonian2}\\[5pt] {\hat O}_{A_3} \ = \ \sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - t^{\frac{1}{2}} X_{12} X_{13}^a X_{23}^b)(1 - t^{\frac{1}{2}} X_{12}^{-1} X_{13}^a X_{23}^b)}{t^{\frac{1}{2}} X_{13}^{a} X_{23}^b (X_{13} - X_{13}^{-1})(X_{23} - X_{23}^{-1})} \ {\hat \delta}_{13}^{a} {\hat \delta}_{23}^{b} \label{eq:Hamiltonian3}\end{aligned}$$ [\[eq:Hamiltonians\]]{#eq:Hamiltonians label="eq:Hamiltonians"} where $\hat\delta_{12},\hat\delta_{23},$ and $\hat\delta_{13}$ are $q$-shift operators acting on polynomials in three variables as $$\begin{aligned} \hat\delta_{12}f(X_{12},X_{23},X_{13}) &=f\left(q^{\frac12}X_{12},X_{23},X_{13}\right),\qquad \textrm{for all}\quad f\in\mathcal H,\end{aligned}$$ with the other two operators obtained by permutation of indices. An important question which was left beyond the scope of [@ArthamonovShakirov'2019] was to determine a complete set of relations on six operators listed in ([\[eq:Mult\]](#eq:Mult){reference-type="ref" reference="eq:Mult"}) and ([\[eq:Hamiltonians\]](#eq:Hamiltonians){reference-type="ref" reference="eq:Hamiltonians"}). In this paper we answer this question from the opposite direction: Based on extensive calculation of relations between generators of $A_{q,t}$, we introduce a new algebra $\mathcal A_{q,t}$ in terms of generators and relations, where we can solve the word problem and prove that $\mathcal A_{q,t}$ is a flat deformation of the commutative coordinate ring of $SL(2,\mathbb C)$-character variety. We then prove that all relations in the defining ideal of $\mathcal A_{q,t}$ are satisfied in $A_{q,t}$. This allows us to introduce a representation of $\mathcal A_{q,t}$ in terms $q$-difference operators $$\begin{aligned} \psi:\mathcal A_{q,t}\rightarrow A_{q,t}. \label{eq:QDifferencePresentation}\end{aligned}$$ Finally, we prove that representation ([\[eq:QDifferencePresentation\]](#eq:QDifferencePresentation){reference-type="ref" reference="eq:QDifferencePresentation"}) is faithful and $\mathcal A_{q,t}\simeq A_{q,t}$. Our presentation of $\mathcal A_{q,t}$ is carefully chosen to make the computation of the classical limit $q\rightarrow 1$ maximally straightforward. One of the key features of our presentation is the extended set of generators. 
Namely, we are using the following collection of $15$ elements of $A_{q,t}$ as a prototype for the generating set of $\mathcal A_{q,t}$: $$\begin{aligned} \hat O_{1}=&\hat O_{A_1},& \hat O_2=&\hat O_{B_{12}},& \hat O_3=&\hat O_{A_2},& \hat O_4=&\hat O_{B_{23}},& \hat O_5=&\hat O_{A_3},& \hat O_6=&\hat O_{B_{13}}, \label{eq:qdifflevelonegenerators}\\ \hat O_{12}=&\big[\hat O_1,\hat O_2\big]_q,& \hat O_{23}=&\big[\hat O_2,\hat O_3\big]_q,& \hat O_{34}=&\big[\hat O_3,\hat O_4\big]_q,& \hat O_{45}=&\big[\hat O_4,\hat O_5\big]_q,& \hat O_{56}=&\big[\hat O_5,\hat O_6\big]_q,& \hat O_{61}=&\big[\hat O_6,\hat O_1\big]_q, \label{eq:qdiffleveltwogenerators}\end{aligned}$$ $$\begin{aligned} \hat O_{123}=&\big[\big[\hat O_1,\hat O_2\big]_q,\hat O_3\big]_q=\big[\big[\hat O_4,\hat O_5\big]_q,\hat O_6\big]_q,\\ \hat O_{234}=&\big[\big[\hat O_2,\hat O_3\big]_q,\hat O_4\big]_q=\big[\big[\hat O_5,\hat O_6\big]_q,\hat O_1\big]_q,\\ \hat O_{345}=&\big[\big[\hat O_3,\hat O_4\big]_q,\hat O_5\big]_q=\big[\big[\hat O_6,\hat O_1\big]_q,\hat O_2\big]_q, \end{aligned} \label{eq:qdifflevelthreegenerators}$$ [\[eq:qdiff15generators\]]{#eq:qdiff15generators label="eq:qdiff15generators"} where $$\begin{aligned} \big[A,B\big]_{q^j}=\frac{q^{\frac j4}AB-q^{-\frac j4}BA}{q^{\frac12}-q^{-\frac12}}. \label{eq:qCommutatorDef}\end{aligned}$$ # Generators and Relations For this section we take the ground field $\mathbf k=\mathbb C(q^{\frac14},t^{\frac14})$ to be a field of rational functions in $q^{\frac14}$ and $t^{\frac14}$. **Definition 1**. *Let $\mathcal A_{q,t}$ be an associative algebra over $\mathbf k$ with 15 generators $$\begin{aligned} \begin{array}{cccccc} O_1,&O_2,&O_3,&O_4,&O_5,&O_6,\\ O_{12},&O_{23},&O_{34},&O_{45},&O_{56},&O_{61},\\ O_{123},&O_{234},&O_{345} \end{array} \label{eq:15Generators}\end{aligned}$$ subject to:* - *"Normal ordering" relations listed in Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}.
Here the $(\pm i|X)$ entry in the $O_J$'th row and $O_K$'th column of the table stands for the corresponding normal ordering relation, with $c_{J,K}=\pm i$ and $X$ the listed element; see the relator ([\[eq:EtaIJRelator\]](#eq:EtaIJRelator){reference-type="ref" reference="eq:EtaIJRelator"}) below.* *[\[def:Aqt\]]{#def:Aqt label="def:Aqt"}* $$\begin{array}{c|cccccccc} & O_1& O_2& O_3& O_4& O_5& O_6& O_{12}& O_{23}\\\hline O_1& * & * & * & * & * & * & * & * \\ O_2& \text{-1$|$} O_{12} & * & * & * & * & * & * & * \\ O_3& 0 & \text{-1$|$} O_{23} & * & * & * & * & * & * \\ O_4& 0 & 0 & \text{-1$|$} O_{34} & * & * & * & * & * \\ O_5& 0 & 0 & 0 & \text{-1$|$} O_{45} & * & * & * & * \\ O_6& \text{1$|$} O_{61} & 0 & 0 & 0 & \text{-1$|$} O_{56} & * & * & * \\ O_{12}&\text{1$|$} O_2 & \text{-1$|$} O_1 & \text{1$|$} O_{123} & 0 & 0 & \text{-1$|$} O_{345} & * & * \\ O_{23}& \text{-1$|$} O_{123} & \text{1$|$} O_3 & \text{-1$|$} O_2 & \text{1$|$} O_{234} & 0 & 0 & \text{2$|$} \frac{(q+1) O_1 O_3}{\sqrt{q}}-\frac{ O_5 (q+t)}{\sqrt{q} \sqrt{t}} & * \\ O_{34}& 0 & \text{-1$|$} O_{234} & \text{1$|$} O_4 & \text{-1$|$} O_3 & \text{1$|$} O_{345} & 0 & \text{-1$|$} O_{56} & \text{2$|$} \frac{(q+1) O_2 O_4}{\sqrt{q}}-\frac{ O_6 (q+t)}{\sqrt{q} \sqrt{t}} \\ O_{45}& 0 & 0 & \text{-1$|$} O_{345} & \text{1$|$} O_5 & \text{-1$|$} O_4 & \text{1$|$} O_{123} & 0 & \text{-1$|$} O_{61} \\ O_{56}& \text{1$|$} O_{234} & 0 & 0 & \text{-1$|$} O_{123} & \text{1$|$} O_6 & \text{-1$|$} O_5 & \text{1$|$} O_{34} & 0\\ O_{61}& \text{-1$|$} O_6 & \text{1$|$} O_{345} & 0 & 0 & \text{-1$|$} O_{234} & \text{1$|$} O_1 & \text{-2$|$} \frac{(q+1) O_2 O_6}{\sqrt{q}}-\frac{ O_4 (q+t)}{\sqrt{q} \sqrt{t}} & \text{1$|$} O_{45} \\ O_{123}& \text{1$|$} O_{23} & 0 & \text{-1$|$} O_{12} & \text{1$|$} O_{56} & 0 & \text{-1$|$} O_{45} & \text{1$|$} O_3 & \text{-1$|$} O_1 \\ O_{234}& \text{-1$|$} O_{56} & \text{1$|$} O_{34} & 0 & \text{-1$|$} O_{23} & \text{1$|$} O_{61} & 0 & \text{+0$|$} O_1 O_{34}- O_2 O_{56} & \text{1$|$} O_4 \\ O_{345}& 0 & \text{-1$|$} O_{61} & \text{1$|$} O_{45} & 0 & \text{-1$|$} O_{34} & \text{1$|$} O_{12} & \text{-1$|$} O_6 & \text{+0$|$} O_2 O_{45} - O_3 O_{61} \end{array}$$ $$\begin{array}{c|cccc} & O_{34}& O_{45}& O_{56}& O_{61}\\\hline O_{34}& * & * & * & * \\ O_{45}& \text{2$|$} \frac{(q+1) O_3 O_5}{\sqrt{q}}-\frac{ O_1 (q+t)}{\sqrt{q} \sqrt{t}} & * & * & * \\ O_{56}& \text{-1$|$} O_{12} & \text{2$|$} \frac{(q+1) O_4 O_6}{\sqrt{q}}-\frac{ O_2 (q+t)}{\sqrt{q} \sqrt{t}} & * & * \\ O_{61}& 0 & \text{-1$|$} O_{23} & \text{2$|$} \frac{(q+1) O_1 O_5}{\sqrt{q}}-\frac{ O_3 (q+t)}{\sqrt{q} \sqrt{t}} & * \\ O_{123}& \text{+0$|$} O_3 O_{56} - O_4 O_{12} & \text{1$|$} O_6 & \text{-1$|$} O_4 & \text{+0$|$} O_6 O_{23} - O_1 O_{45} \\ O_{234}& \text{-1$|$} O_2 & \text{+0$|$} O_4 O_{61} - O_5 O_{23} & \text{1$|$} O_1 & \text{-1$|$} O_5 \\ O_{345}& \text{1$|$} O_5 & \text{-1$|$} O_3 & \text{+0$|$} O_5 O_{12} - O_6 O_{34} & \text{1$|$} O_2 \\ \end{array}$$ $$\begin{aligned} [O_{123},O_{345}]_{q^2}=-\frac{(q+t)O_1O_5O_6}{\sqrt{q}\sqrt{t}} +\frac{(q+t)O_5O_{61}}{q^{\frac34}\sqrt{t}} +\frac{(q+t)O_1O_{56}}{\sqrt[4]{q}\sqrt{t}} -\frac{(q+t)O_{234}}{\sqrt{q}\sqrt{t}}+\frac{(q+1)O_3O_6}{\sqrt{q}}\\ [O_{234},O_{123}]_{q^2}=-\frac{(q+t)O_2O_6O_1}{\sqrt{q}\sqrt{t}} +\frac{(q+t)O_6O_{12}}{q^{\frac34}\sqrt{t}} +\frac{(q+t)O_2O_{61}}{\sqrt[4]{q}\sqrt{t}} -\frac{(q+t)O_{345}}{\sqrt{q}\sqrt{t}}+\frac{(q+1)O_4O_1}{\sqrt{q}}\\ [O_{345},O_{234}]_{q^2}=-\frac{(q+t)O_3O_1O_2}{\sqrt{q}\sqrt{t}} +\frac{(q+t)O_1O_{23}}{q^{\frac34}\sqrt{t}} +\frac{(q+t)O_3O_{12}}{\sqrt[4]{q}\sqrt{t}} -\frac{(q+t)O_{123}}{\sqrt{q}\sqrt{t}}+\frac{(q+1)O_5O_2}{\sqrt{q}}\end{aligned}$$ **Lemma 2**.
*The six elements $O_1,O_2,O_3,O_4,O_5,O_6\in\mathcal A_{q,t}$ generate $\mathcal A_{q,t}$ as an associative algebra. [\[lemm:SixGenerators\]]{#lemm:SixGenerators label="lemm:SixGenerators"}* *Proof.* The normal ordering relations, in particular, imply that $$\begin{aligned} \big[O_i,O_{i+1}\big]_q=O_{i,i+1},\qquad\textrm{for all}\quad 1\leqslant i\leqslant6 \label{eq:LevelTwoGeneratorsRelation}\end{aligned}$$ Hereinafter, by index $(i+a)$ we mean $((i+a-1)\bmod 6)+1$. Similarly, $$\begin{aligned} \big[\big[O_i,O_{i+1}\big]_q,O_{i+2}]_q=O_{i,i+1,i+2}\qquad\textrm{for all}\quad 1\leqslant i\leqslant3. \label{eq:LevelThreeGeneratorRelation}\end{aligned}$$ Hence, all of the 15 generators can be obtained as $q$-commutators of the six elements $O_1,\dots,O_6$. ◻ **Lemma 3**. *Normally ordered monomials of the form $$\begin{aligned} O_1^{n_1}O_2^{n_2}O_3^{n_3}O_4^{n_4}O_5^{n_5}O_6^{n_6} O_{12}^{n_7}O_{23}^{n_8}O_{34}^{n_9}O_{45}^{n_{10}}O_{56}^{n_{11}}O_{61}^{n_{12}} O_{123}^{n_{13}}O_{234}^{n_{14}}O_{345}^{n_{15}},\qquad n_1,\dots,n_{15}\in\mathbb Z_{\geqslant 0}. \label{eq:NormallyOrderedMonomials}\end{aligned}$$ provide a spanning set for the algebra $\mathcal A_{q,t}$. [\[lemm:NormalOrderingLemma\]]{#lemm:NormalOrderingLemma label="lemm:NormalOrderingLemma"}* *Proof.* Consider the free associative algebra $F$ with the same collection of generators ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}). Introduce a grading on $F$ by assigning the following degrees to the generators $$\begin{aligned} \deg O_i=2,\qquad\deg O_{i,i+1}=3,\qquad\deg O_{i,i+1,i+2}=4,\qquad 1\leqslant i\leqslant6. \label{eq:GeneratorWeights}\end{aligned}$$ We have an increasing filtration on $F$ defined by the grading $$\begin{aligned} F^{(0)}\subseteq F^{(1)}\subseteq F^{(2)}\subseteq F^{(3)}\subseteq\dots\subseteq F=\bigcup_{m\in\mathbb Z_{\geqslant0}} F^{(m)}.\end{aligned}$$ This induces an increasing filtration on $\mathcal A_{q,t}$, where the filtered components $$\begin{aligned} \mathcal A_{q,t}^{(m)}=\pi\big(F^{(m)}\big)\end{aligned}$$ are given by the images of $F^{(m)}$ under the natural projection map $\pi:F\rightarrow \mathcal A_{q,t}$. We will prove the statement of the lemma by induction on the degree $m$. The base case $m=0$ is immediate. Now, by inductive assumption suppose that every $x\in \mathcal A_{q,t}^{(m-1)}$ can be presented as a linear combination of normally ordered monomials. Consider $y\in\mathcal A_{q,t}^{(m)}$ and note that each entry in Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"} has degree strictly smaller than the sum of the degrees of the corresponding generators which label its row and column, respectively. Hence, applying a sequence of relations from Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"} we can bring the top degree monomials in $y$ to the normally ordered form at the expense of getting some subleading monomials. Together with the inductive assumption this means that $y\in\mathcal A_{q,t}^{(m)}$ can be presented as a linear combination of normally ordered monomials. ◻ **Lemma 4**.
*The following permutation of generators defines an order 6 automorphism of $\mathcal A_{q,t}$: $$\begin{aligned} I=\big(O_1,O_2,O_3,O_4,O_5,O_6\big) \big(O_{12},O_{23},O_{34},O_{45},O_{56},O_{61}\big)\big(O_{123},O_{234},O_{345}\big) \label{eq:IActionAqt}\end{aligned}$$ [\[lemm:IAutomorphism\]]{#lemm:IAutomorphism label="lemm:IAutomorphism"}* *Proof.* The set of defining relations ([\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"})--([\[eq:qCasimirRelation\]](#eq:qCasimirRelation){reference-type="ref" reference="eq:qCasimirRelation"}) falls short of being explicitly invariant under the action of $I$ by just three instances where the action of $I$ spoils the ordering. In these cases we use explicit calculations below $$\begin{aligned} I(\eta_{(56)(45)})=&\eta _{(61)(56)} -q^{-1}(q-1) (q+1)\eta _{(5)(1)},\\ I(\eta_{(61)(12)})=& -\eta _{(23)(12)} +q^{-1}(q-1) (q+1)\eta _{(3)(1)},\\ I(\eta_{(345)(234)})=&q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1\eta _{(6)(5)} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_2\eta _{(4)(3)} +q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(4)(2)}O_3\\ &-q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_3\eta _{(4)(2)} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(3)(2)}O_4 +q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(6)(1)}O_5\\ &-\eta _{(345)(123)} +q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1)^2 (q+t)\eta _{(61)(5)} +(q-1)\eta _{(45)(12)} +q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1)^2 (q+t)\eta _{(23)(4)}\\ &-q^{-1}(q-1) (q+2)\eta _{(6)(3)} +q^{-\frac{1}{2}}(q-1)\rho _{15} -q^{-\frac{1}{2}}(q-1)\rho _0.\end{aligned}$$ ◻ ## $J$-relations Spanning set ([\[eq:NormallyOrderedMonomials\]](#eq:NormallyOrderedMonomials){reference-type="ref" reference="eq:NormallyOrderedMonomials"}) is not a basis, because there are relations between normally ordered monomials. To obtain examples of such relations, consider a product $O_KO_JO_I$ of three of generators in reverse lexicographic order. There exist at least two distinct ways of bringing the leading term of $O_KO_JO_I$ to lexicographic order via a sequence of relations ([\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"}), we have $$\begin{aligned} (O_KO_J)O_I=&\left(q^{-\frac{c_{K,J}}2}O_JO_K +q^{-\frac{c_{K,J}}4}[O_K,O_J]_{q^{c_{K,J}}}\right)O_I\\ =&q^{-\frac{c_{K,J}+c_{K,I}+c_{J,I}}2}O_IO_JO_K +q^{-\frac{c_{K,J}+c_{K,I}}2-\frac{c_{J,I}}4} [O_J,O_I]_{q^{c_{J,I}}}O_K\\ &+q^{-\frac{c_{K,J}}2-\frac{c_{K,I}}4}O_J[O_K,O_I]_{q^{c_{K,I}}} +q^{-\frac{c_{K,J}}4}[O_K,O_J]_{q^{c_{K,J}}}O_I, \end{aligned} \label{eq:TripleOrderingI}$$ $$\begin{aligned} O_K(O_JO_I)=&q^{-\frac{c_{J,I}+c_{K,I}+c_{K,J}}2}O_IO_JO_K +q^{-\frac{c_{J,I}+c_{K,I}}2-\frac{c_{K,J}}4}O_I[O_K,O_J]_{q^{c_{K,J}}}\\ &+q^{-\frac{c_{J,I}}2-\frac{c_{K,I}}4}[O_K,O_I]_{q^{c_{K,I}}}O_J +q^{-\frac{c_{J,I}}4}O_K[O_J,O_I]_{q^{c_{J,I}}}. \end{aligned} \label{eq:TripleOrderingII}$$ Here $c_{I,J}$ stands for the integer coefficient in the $O_I$'th row and $O_J$'th column of Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}. 
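Before carrying out the subtraction, let us note that the resulting identity ([\[eq:QJacobiRelation\]](#eq:QJacobiRelation){reference-type="ref" reference="eq:QJacobiRelation"}) below already holds in the free algebra generated by $O_I,O_J,O_K$: it relies only on the definition ([\[eq:qCommutatorDef\]](#eq:qCommutatorDef){reference-type="ref" reference="eq:qCommutatorDef"}) of the $q$-commutator and on the antisymmetry $c_{I,J}=-c_{J,I}$. The following small SymPy sketch (an illustration only, not part of the proofs of this paper; the integer triples merely stand in for sample values of $c_{I,J},c_{J,K},c_{I,K}$) confirms this for a few choices of exponents.

```python
import sympy as sp

q = sp.symbols('q', positive=True)
A, B, C = sp.symbols('A B C', commutative=False)

def qbr(X, Y, j):
    # q-commutator [X, Y]_{q^j} as defined in (eq:qCommutatorDef)
    return (q**sp.Rational(j, 4)*X*Y - q**(-sp.Rational(j, 4))*Y*X) / (sp.sqrt(q) - 1/sp.sqrt(q))

# (a, b, c) play the role of (c_{I,J}, c_{J,K}, c_{I,K})
for a, b, c in [(-1, 1, 2), (1, -1, 0), (2, 2, -1)]:
    lhs = qbr(qbr(A, B, a), C, b + c)
    rhs = qbr(A, qbr(B, C, b), a + c) - qbr(B, qbr(A, C, c), b - a)
    assert sp.expand(lhs - rhs) == 0

print("q-Jacobi identity verified for the sampled exponents")
```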
Subtracting ([\[eq:TripleOrderingII\]](#eq:TripleOrderingII){reference-type="ref" reference="eq:TripleOrderingII"}) from ([\[eq:TripleOrderingI\]](#eq:TripleOrderingI){reference-type="ref" reference="eq:TripleOrderingI"}) we get the identity $$\begin{aligned} [[O_I,O_J]_{q^{c_{I,J}}},O_K]_{q^{c_{J,K}+c_{I,K}}} =[O_I,[O_J,O_K]_{q^{c_{J,K}}}]_{q^{c_{I,J}+c_{I,K}}} -[O_J,[O_I,O_K]_{q^{c_{I,K}}}]_{q^{c_{J,I}+c_{J,K}}}, \label{eq:QJacobiRelation}\end{aligned}$$ where we have used the fact that $c_{I,J}=-c_{J,I}$ for all generators $O_I,O_J$. By examining ([\[eq:QJacobiRelation\]](#eq:QJacobiRelation){reference-type="ref" reference="eq:QJacobiRelation"}) for all triples of generators we find 18 distinct relations between normally ordered monomials. For symmetry reasons, however, it will be more convenient for us to consider a closely related collection of 18 relations, which we call *$J$-relations*. This collection has the advantage of being manifestly invariant under the action of the finite-order automorphism $I$, even though individual monomials involved in these relations are not necessarily in the normal form ([\[eq:NormallyOrderedMonomials\]](#eq:NormallyOrderedMonomials){reference-type="ref" reference="eq:NormallyOrderedMonomials"}). As we will show later in the text, the $J$-relations play the same role in the solution of the word problem in $\mathcal A_{q,t}$ as the Jacobi relations of a Lie algebra do in the Poincaré--Birkhoff--Witt theorem. **Lemma 5** (*$J$-relations*). *The following relations hold in $\mathcal A_{q,t}$ for all $1\leqslant i\leqslant 6$* *$$\begin{aligned} q^{-\frac12}O_{i+2}O_{i+4}+q^{\frac12}O_{i+3}O_{i+2,i+3,i+4}-O_{i+2,i+3} O_{i+3,i+4}-(q^{\frac12}t^{-\frac12}+q^{-\frac12}t^{\frac12})O_i=0, \label{eq:JRelationsA}\end{aligned}$$ $$\begin{aligned} -q^{-1}O_{i+3}O_{i+5,i} -O_{i+4}O_{i+1,i+2}+q^{-\frac12}O_{i+3,i+4} O_{i+1,i+2,i+3}-&(q^{\frac12}t^{-\frac12}+t^{\frac12}q^{-\frac12})\left( O_{i,i+1}-q^{-\frac14}O_iO_{i+1}\right)=0, \label{eq:JRelationsB}\end{aligned}$$ $$\begin{aligned} -q^{\frac12}O_{i+1,i+2}O_{i+4,i+5} +O_{i,i+1,i+2} O_{i+1,i+2,i+3}-q^{-\frac12}O_iO_{i+3} -&\left(q^{\frac12}t^{-\frac12}+q^{-\frac12}t^{\frac12}\right)\times \\ \big(-\left(q-1+q^{-1}\right) O_{i+2,i+3,i+4}+q^{-3/4} O_{i+1}O_{i+5,i}+&q^{3/4} O_{i+5}O_{i,i+1}-O_iO_{i+1}O_{i+5}\big)=0. \end{aligned} \label{eq:JRelationsC}$$ [\[eq:JRelations\]]{#eq:JRelations label="eq:JRelations"}* *Here by index $(i+a)$ we mean $((i+a-1)\bmod 6)+1$. [\[lemm:JRelations\]]{#lemm:JRelations label="lemm:JRelations"}* *Proof.* Note that by Lemma ([\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"}) it is enough for us to prove only one relation of each type. Let $F$ be the free associative algebra with the same collection of generators. Denote the l.h.s. of ([\[eq:JRelationsA\]](#eq:JRelationsA){reference-type="ref" reference="eq:JRelationsA"}), ([\[eq:JRelationsB\]](#eq:JRelationsB){reference-type="ref" reference="eq:JRelationsB"}), and ([\[eq:JRelationsC\]](#eq:JRelationsC){reference-type="ref" reference="eq:JRelationsC"}) by $\rho_i, \rho_{6+i},\rho_{12+i}\in F$ respectively. In addition, for every pair of generators $O_I,O_J$, let $\eta_{I,J}\in F$ stand for the normal ordering relator from Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}: $$\begin{aligned} \eta_{I,J}=q^{\frac{c_{I,J}}4}O_IO_J-q^{-\frac{c_{I,J}}4}O_JO_I \mp(q^{\frac12}-q^{-\frac12})X.
\label{eq:EtaIJRelator}\end{aligned}$$ We have the following identities in $F$: $$\begin{aligned} \rho_1=&q^{-\frac12}O_{3}O_{5}+q^{\frac12}O_{4}O_{345}-O_{34} O_{45}-(q^{\frac12}t^{-\frac12}+q^{-\frac12}t^{\frac12})O_1\\[5pt] =&\frac{q^{\frac1{2}}}{(q-1)}\Big(\big[\eta_{(5)(4)},O_{34}\big]_{q^0} -[\eta_{(34),(4)},O_5]_{q^1} -\big[O_4,\eta_{(34)(5)}\big]_{q^1}+\eta_{(345)(4)} -\eta_{(45)(34)} -\eta_{(5)(3)}\big), \end{aligned}$$ $$\begin{aligned} \rho_7=&-q^{-1}O_{4}O_{61} -O_{5}O_{23}+q^{-\frac12}O_{45} O_{234}-(q^{\frac12}t^{-\frac12}+t^{\frac12}q^{-\frac12})\left( O_{12}-q^{-\frac14}O_1O_{2}\right)\\[5pt] =&\frac1{q-1}\left(\big[O_{45},\eta_{(34)(2)}\big]_{q^2} +\big[\eta_{(45)(2)},O_{34}\big]_{q^3} +\big[O_2,\eta_{(45)(34)}\big]_{q^1}-t^{-\frac{1}{2}}(q+t)\eta_{(2)(1)} -q^{-\frac12}(q+1)\eta_{(3)(2)}O_5\right.\\ &\left.\qquad\qquad-q^{-\frac{3}{4}}(q+1)O_3\eta_{(5)(2)} +(q-1)\eta_{(23)(5)} +q^{-1}(q-1)\eta_{(61)(4)} +q^{-\frac12}\eta_{(234)(45)}\right), \end{aligned}$$ $$\begin{aligned} \rho_{13}=&-q^{\frac12}O_{23}O_{56} +O_{123} O_{234}-q^{-\frac12}O_1O_4 -\left(q^{\frac12}t^{-\frac12}+q^{-\frac12}t^{\frac12}\right)\\ &\qquad\qquad\times\big(-\left(q-1+q^{-1}\right) O_{345}+q^{-3/4} O_{2}O_{6,1}+q^{3/4} O_{6}O_{12}-O_iO_2O_6\big)\\[5pt] =&\frac{q}{q-1}\left(\big[\eta_{(23)(4)},O_{123}\big]_{q^0} +\big[O_{23},\eta_{(123)(4)}\big]_{q^1}+[\eta_{(123)(23)},O_4]_{q^1} -q^{-\frac{7}{4}}t^{-\frac{1}{2}}(q-1)(q+t)\eta_{(2)(1)}O_6\right.\\ &\qquad\qquad+q^{-\frac{3}{2}}(q^2+q-1)\eta_{(4)(1)} -q^{-\frac{7}{4}}t^{-\frac{1}{2}}(q-1)(q+t)O_2\eta_{(6)(1)}\\ &\qquad\qquad\left.+q^{-2}t^{-\frac{1}{2}}(q-1)^2(q+t)\eta_{(12)(6)}-q^{-\frac{1}{2}}\eta_{(56)(23)} +q^{-\frac{1}{2}}\eta_{(234)(123)}\right). \end{aligned}$$ [\[eq:JrelationsViaNormal\]]{#eq:JrelationsViaNormal label="eq:JrelationsViaNormal"} Hence $\rho_1,\rho_7, \rho_{13}$ belong to defining ideal. Applying Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"} we conclude the proof. ◻ **Remark 6**. *It is worth noting that although $\rho_1,\dots,\rho_{18}$ belong to ideal in $F$ generated by normal ordering relations, the r.h.s. of expressions ([\[eq:JrelationsViaNormal\]](#eq:JrelationsViaNormal){reference-type="ref" reference="eq:JrelationsViaNormal"}) contain powers of $(q-1)$ in the denominator. In other words, $J$-relations become independent in the limit $q\rightarrow 1$. More details can be found in Section [\[sec:FlatFamily\]](#sec:FlatFamily){reference-type="ref" reference="sec:FlatFamily"}.* ## Mapping Class Group Action In this section we show that Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of a genus two surface acts by automorphisms of the algebra $\mathcal A_{q,t}$. Finite presentation of genus two Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ was originally constructed by J. Birman and H. Hilden [@BirmanHilden'1971], it has five generators $d_1,d_2,d_3,d_4,d_5$ which are subject to - Coxeter relations: $$\begin{aligned} d_id_{i+1}d_i=&d_{i+1}d_id_{i+1},\qquad\textrm{for all}\quad 1\leqslant i\leqslant 4,\\ d_id_j=&d_jd_i,\qquad\textrm{when}\quad |i-j|>1.\end{aligned}$$ - Finite order relations: $$\begin{aligned} I^6=H^2=1,\quad\textrm{where}\quad I=d_1d_2d_3d_4d_5\quad\textrm{and}\quad H=d_1d_2d_3d_4d_5d_5d_4d_3d_2d_1.\end{aligned}$$ Five generators $d_1,d_2,d_3,d_4,d_5$ correspond respectively to Dehn twists along the cycles $A_1,B_{1,2},A_2,B_{2,3},A_3$ shown on Figure [1](#fig:qDiffGeneratingCycles){reference-type="ref" reference="fig:qDiffGeneratingCycles"}. 
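As a quick consistency check, the permutation ([\[eq:IActionAqt\]](#eq:IActionAqt){reference-type="ref" reference="eq:IActionAqt"}) through which $I$ acts on the generators of $\mathcal A_{q,t}$ (Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"}) is compatible with the finite order relation $I^6=1$ above: its cycle type on the 15 generators is $(6,6,3)$, so it has order exactly six. A minimal script making this explicit (an illustrative sketch only; the strings are ad hoc labels for the generators):

```python
# the permutation I of (eq:IActionAqt), written through its three cycles
cycles = [
    ["O1", "O2", "O3", "O4", "O5", "O6"],
    ["O12", "O23", "O34", "O45", "O56", "O61"],
    ["O123", "O234", "O345"],
]
I = {x: cyc[(k + 1) % len(cyc)] for cyc in cycles for k, x in enumerate(cyc)}

def order(perm):
    """Smallest n >= 1 such that perm composed with itself n times is the identity."""
    n, power = 1, dict(perm)
    while any(power[x] != x for x in perm):
        power = {x: perm[power[x]] for x in perm}
        n += 1
    return n

assert len(I) == 15 and order(I) == 6   # matches the relation I^6 = 1
print("I permutes the 15 generators with cycle type (6, 6, 3) and order", order(I))
```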
At the first step, however, it will be easier for us to use a smaller generating set. It is known that Mapping Class Group of any closed surface can be generated by two elements [@Wajnryb'1996; @Korkmaz'2005]. In the case of $\mathrm{Mod}(\Sigma_2)$ it is enough to take $d_1$ and $I$. Note that Coxeter relations imply $$\begin{aligned} d_i=I^{i-1}d_1I^{1-i}\qquad\textrm{for all}\quad 1\leqslant i\leqslant 5. \label{eq:DehnTwistsViaD1}\end{aligned}$$ We have already introduced the action of an order six automorphism $I$ in Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"}, so now we will introduce the action of $d_1$ on $\mathcal A_{q,t}$. Let $F$ be the free algebra over $\mathbf k$ with 15 generators ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}). Define a homomorphism $d_1:F\rightarrow F$ by its action on generators $$\begin{aligned} d_1:\quad\left\{\begin{array}{cll} O_2 &\mapsto & q^{\frac14} O_1O_2-q^{\frac12} O_{12} \\[2pt] O_6 &\mapsto& O_{61} \\[2pt] O_{12} &\mapsto& O_2 \\[2pt] O_{23} &\mapsto& q^{\frac14} O_1O_{23}-q^{\frac12} O_{123} \\[2pt] O_{56} &\mapsto& O_{234} \\[2pt] O_{61} &\mapsto& q^{\frac14} O_1O_{61}-O_6 q^{\frac12} \\[2pt] O_{123} &\mapsto& O_{23} \\[2pt] O_{234} &\mapsto& q^{\frac14} O_1O_{234}-q^{\frac12} O_{56} \end{array}\right. \label{eq:d1Automorphism}\end{aligned}$$ where we assume that the action of $d_1$ on the remaining generators is identical. **Proposition 7**. *Homomorphism $d_1:F\rightarrow F$ is invertible. Moreover, defining ideal ([\[eq:DefiningRelationsAqt\]](#eq:DefiningRelationsAqt){reference-type="ref" reference="eq:DefiningRelationsAqt"}) of $\mathcal A_{q,t}$ is invariant under the action of $d_1$. [\[prop:d1Automorphism\]]{#prop:d1Automorphism label="prop:d1Automorphism"} In other words, $d_1$ descends to an automorphism of $\mathcal A_{q,t}$.* *Proof.* The inverse homomorphism $d_1^{-1}:F\rightarrow F$ can be described by its action on generators $$\begin{aligned} d_1^{-1}:\quad\left\{\begin{array}{cll} O_2 &\mapsto& O_{12} \\[2pt] O_6 &\mapsto& q^{-\frac14}O_1O_6-q^{-\frac12}O_{61} \\[2pt] O_{12} &\mapsto& q^{-\frac14}O_1O_{12}-q^{-\frac12}O_2\\[2pt] O_{23} &\mapsto& O_{123} \\[2pt] O_{56} &\mapsto& q^{-\frac14}O_1O_{56}-q^{-\frac12}O_{234} \\[2pt] O_{61} &\mapsto& O_6 \\[2pt] O_{123} &\mapsto& q^{-\frac14}O_1O_{123}-q^{-\frac12}O_{23} \\[2pt] O_{234} &\mapsto& O_{56} \end{array}\right. \label{eq:d1InverseAutomorphism}\end{aligned}$$ Again, here we assume that the action is identical on the remaining generators. As a result, we get $$\begin{aligned} (d_1^{-1}\circ d_1) (O_2)=&d_1^{-1}\left(q^{\frac14}O_1O_2-q^{\frac12}O_{12}\right)=q^{\frac14}O_1O_{12} -q^{\frac12}\left(q^{-\frac14}O_1O_2-q^{-\frac12}O_2\right)=O_2,\\ (d_1\circ d_1^{-1})(O_2)=&d_1\left(O_{12}\right)=O_2. \end{aligned} \label{eq:d1d1IActionO2GenericQ}$$ Similar computation for $O_6,O_{12},O_{23},O_{56},O_{61},O_{123},O_{234}$ shows that ([\[eq:d1InverseAutomorphism\]](#eq:d1InverseAutomorphism){reference-type="ref" reference="eq:d1InverseAutomorphism"}) is in fact the inverse for ([\[eq:d1Automorphism\]](#eq:d1Automorphism){reference-type="ref" reference="eq:d1Automorphism"}). The proof of the second part amounts to tedious but straightforward examination of all 106 generators of defining ideal ([\[eq:DefiningRelationsAqt\]](#eq:DefiningRelationsAqt){reference-type="ref" reference="eq:DefiningRelationsAqt"}). 
We present this calculation in Appendix [8](#sec:d1Automorphism){reference-type="ref" reference="sec:d1Automorphism"}. ◻ Now we are ready to prove the main theorem of the current subsection that the Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of a closed genus two surface acts by automorphisms of $\mathcal A_{q,t}$. It first appeared as Theorem 25 of [@ArthamonovShakirov'2019], even before we had a complete presentation of the algebra $\mathcal A_{q,t}$, and originally the theorem was formulated for the algebra of difference operators ([\[eq:AqtDifferenceOperatorsAlgebra\]](#eq:AqtDifferenceOperatorsAlgebra){reference-type="ref" reference="eq:AqtDifferenceOperatorsAlgebra"}). Our proof essentially repeats the logic of [@ArthamonovShakirov'2019] in the new context of Definition [\[def:Aqt\]](#def:Aqt){reference-type="ref" reference="def:Aqt"}. **Theorem 8**. *Two automorphisms $d_1,I:\mathcal A_{q,t}\rightarrow\mathcal A_{q,t}$ define the action of Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of genus two surface on $\mathcal A_{q,t}$. [\[th:MCGActionGeneric\]]{#th:MCGActionGeneric label="th:MCGActionGeneric"}* *Proof.* By ([\[eq:DehnTwistsViaD1\]](#eq:DehnTwistsViaD1){reference-type="ref" reference="eq:DehnTwistsViaD1"}), the action of the remaining four Dehn twists on generators of $\mathcal A_{q,t}$ can be obtained by a simple shift of indexes in formula ([\[eq:d1Automorphism\]](#eq:d1Automorphism){reference-type="ref" reference="eq:d1Automorphism"}). Combining this fact with Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"} we note that it is enough for us to prove only four relations $$\begin{aligned} d_1d_2d_1=d_2d_1d_2,\qquad d_1d_3=d_3d_1,\qquad d_1d_4=d_4d_1,\qquad H^2=1. \label{eq:SufficientMCGRelations}\end{aligned}$$ Moreover, by Lemma [\[lemm:SixGenerators\]](#lemm:SixGenerators){reference-type="ref" reference="lemm:SixGenerators"} it is enough to check that ([\[eq:SufficientMCGRelations\]](#eq:SufficientMCGRelations){reference-type="ref" reference="eq:SufficientMCGRelations"}) is satisfied only on six generators $O_1,\dots, O_6$. We get $$\begin{aligned} d_1d_2d_1(O_1)=\;&O_2=d_2d_1d_2(O_1),\\[5pt] d_1d_2d_1(O_2)=\;&q^{\frac{1}{2}}O_2O_1O_2 -qO_1O_2^2 +q^{\frac{5}{4}}O_{12}O_2 -q^{\frac{3}{4}}O_2O_{12} +qO_1\\ \stackrel{(\ref{eq:NormalOrderingRelations})}{=}& q^{\frac{1}{4}}O_{12}O_2-q^{\frac{3}{4}}O_2O_{12}+qO_1=d_2d_1d_2(O_2),\\[5pt] d_1d_2d_1(O_3)=\;&q^{\frac{1}{2}}O_1O_2O_3 -q^{\frac{3}{4}}O_{12}O_3 -q^{\frac{3}{4}}O_1O_{23} +qO_{123}\\ \stackrel{(\ref{eq:NormalOrderingRelations})}{=}&q^{\frac{3}{4}}O_{12}O_2^2O_3 -q^{\frac{5}{4}}O_2O_{12}O_2O_3 -qO_{12}O_2O_{23} +q^{\frac{3}{2}}O_2O_{12}O_{23}\\ &+q^{\frac{3}{2}}O_1O_2O_3 -q^{\frac{3}{4}}O_{12}O_3 -q^{\frac{7}{4}}O_1O_{23} +qO_{123} =d_2d_1d_2(O_3),\\[5pt] d_1d_2d_1(O_4)=\;&O_4=d_2d_1d_2(O_4),\\[5pt] d_1d_2d_1(O_5)=\;&O_5=d_2d_1d_2(O_5),\\[5pt] d_1d_2d_1(O_6)=\;&O_{345}=d_2d_1d_2(O_6). \end{aligned}$$ For the second relation in ([\[eq:SufficientMCGRelations\]](#eq:SufficientMCGRelations){reference-type="ref" reference="eq:SufficientMCGRelations"}) we have $$\begin{aligned} d_1d_3(O_1)=&O_1=d_3d_1(O_1),\\ d_1d_3(O_2)=&q^{\frac{1}{4}}O_1O_{23}-q^{\frac{1}{2}}O_{123}=d_3d_1(O_2),\\ d_1d_3(O_3)=&O_3=d_3d_1(O_3),\\ d_1d_3(O_4)=&q^{\frac{1}{4}}O_3O_4-q^{\frac{1}{2}}O_{34}=d_3d_1(O_4),\\ d_1d_3(O_5)=&O_5=d_3d_1(O_5),\\ d_1d_3(O_6)=&O_{61}=d_3d_1(O_6). 
\end{aligned}$$ Next, we have $$\begin{aligned} d_1d_4(O_1)=&O_1=d_4d_1(O_1),\\ d_1d_4(O_2)=&q^{\frac{1}{4}}O_1O_2-q^{\frac{1}{2}}O_{12}=d_4d_1(O_2),\\ d_1d_4(O_3)=&O_{34}=d_4d_1(O_3),\\ d_1d_4(O_4)=&O_4=d_4d_1(O_4),\\ d_1d_4(O_5)=&q^{\frac{1}{4}}O_4O_5-q^{\frac{1}{2}}O_{45}=d_4d_1(O_5),\\ d_1d_4(O_6)=&O_{61}=d_4d_1(O_6). \end{aligned}$$ [\[eq:MCGRelatorsOnGeneratorsGeneric\]]{#eq:MCGRelatorsOnGeneratorsGeneric label="eq:MCGRelatorsOnGeneratorsGeneric"} Finally, for the last relation in ([\[eq:SufficientMCGRelations\]](#eq:SufficientMCGRelations){reference-type="ref" reference="eq:SufficientMCGRelations"}) we will prove a slightly stronger statement that $H=1$, or equivalently, that $$\begin{aligned} \widetilde I=d_5d_4d_3d_2d_1=I^{-1}\end{aligned}$$ realizes the inverse of the order 6 automorphism $I$.[^3] Computing the action of $\widetilde I$ on the six generators we get $$\begin{aligned} \widetilde I(O_1)=&O_6,\\ \widetilde I(O_2)=&-q^{\frac{3}{4}}O_{61}O_6+q^{\frac{1}{4}}O_6O_{61}+qO_1 \stackrel{(\ref{eq:NormalOrderingRelations})}{=}O_1,\\ \widetilde I(O_3)=&-q^{\frac{3}{4}}O_{345}O_{61}+q^{\frac{1}{4}}O_{61}O_{345}+qO_2 \stackrel{(\ref{eq:NormalOrderingRelations})}{=}O_2,\\ \widetilde I(O_4)=&q^{\frac{1}{4}}O_{345}O_{45}-q^{\frac{3}{4}}O_{45}O_{345}+qO_3 \stackrel{(\ref{eq:NormalOrderingRelations})}{=}O_3,\\ \widetilde I(O_5)=&q^{\frac{1}{4}}O_{45}O_5-q^{\frac{3}{4}}O_5O_{45}+qO_4 \stackrel{(\ref{eq:NormalOrderingRelations})}{=}O_4,\\ \widetilde I(O_6)=&O_5.\end{aligned}$$ ◻ ## Presentation of $\mathcal A_{q,t}$ in $q$-difference operators. In this section we show that the algebra $\mathcal A_{q,t}$ admits a presentation in terms of $q$-difference operators. This result is not at all surprising, as we have come up with Definition [\[def:Aqt\]](#def:Aqt){reference-type="ref" reference="def:Aqt"} from examining relations between difference operators. **Proposition 9**. *We have an algebra homomorphism $$\begin{aligned} \widehat{\Delta}:\mathcal A_{q,t}\rightarrow A_{q,t},\qquad O_i\mapsto \hat O_i,\quad 1\leqslant i\leqslant 6.\end{aligned}$$ [\[prop:qDifferenceHomomorphism\]]{#prop:qDifferenceHomomorphism label="prop:qDifferenceHomomorphism"}* *Proof.* Recall that by Lemma [\[lemm:SixGenerators\]](#lemm:SixGenerators){reference-type="ref" reference="lemm:SixGenerators"} we know that the remaining 9 generators in the presentation of $\mathcal A_{q,t}$ are nothing but $q$-commutators of the six elements $O_i,\;1\leqslant i\leqslant 6$. To prove the statement of the Proposition we must show that all relations in the defining ideal ([\[eq:DefiningRelationsAqt\]](#eq:DefiningRelationsAqt){reference-type="ref" reference="eq:DefiningRelationsAqt"}) are satisfied by the difference operators. First, note that both $\mathcal A_{q,t}$ and $A_{q,t}$ are equipped with Mapping Class Group actions that are compatible on the generators. To this end, compare Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"} and Proposition [\[prop:d1Automorphism\]](#prop:d1Automorphism){reference-type="ref" reference="prop:d1Automorphism"} with Lemma 23 and Proposition 24 of [@ArthamonovShakirov'2019]. As a result, when verifying the normal ordering relations ([\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"}) we can assume without loss of generality that the first generator is $O_1$. Hence, it is enough for us to examine only the first column of Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}.
All these relations are either identities by construction, or were proved in Lemma 11 of [@ArthamonovShakirov'2019]. As for the $q$-Casimir relation ([\[eq:qCasimirRelation\]](#eq:qCasimirRelation){reference-type="ref" reference="eq:qCasimirRelation"}), it can be proved by applying the corresponding $q$-difference operator to a general function of three variables $f(x_{1,2},x_{2,3},x_{1,3})$ and combining the terms with the same $q$-shifts of the arguments. We omit the cumbersome details of this calculation from the main text and invite the careful reader to examine our Mathematica code [@Arthamonov-GitHub-Flat] which performs it. ◻ Proposition [\[prop:qDifferenceHomomorphism\]](#prop:qDifferenceHomomorphism){reference-type="ref" reference="prop:qDifferenceHomomorphism"} implies that for all generators $O_I\in\mathcal A_{q,t}$, the image $\widehat{\Delta}(O_I)$ is a $q$-difference operator with coefficients in the field $\mathbf k(x_{12},x_{23},x_{13})=\mathbb C(q^\frac14,t^{\frac14})(x_{12},x_{23},x_{13})$. In particular, $\widehat{\Delta}(O_{12})$ a priori might contain a $(q-1)$ factor in the denominator, coming from the $q$-commutator formula ([\[eq:qCommutatorDef\]](#eq:qCommutatorDef){reference-type="ref" reference="eq:qCommutatorDef"}). However, this doesn't actually happen for our choice of generators. In fact, we can prove the following property, which plays a crucial role in the computation of the quasiclassical limit in Section [3.2](#sec:QDifferenceQuasiclassicalLimit){reference-type="ref" reference="sec:QDifferenceQuasiclassicalLimit"}. **Lemma 10**. *For all generators $O_I\in\mathcal A_{q,t}$, the corresponding $q$-difference operator $$\begin{aligned} \widehat{\Delta}(O_I)\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}](x_{12},x_{23},x_{13}) \big\langle\hat\delta_{12}^{\pm1}, \hat\delta_{23}^{\pm1}, \hat\delta_{13}^{\pm1}\big\rangle\end{aligned}$$ has coefficients which are Laurent polynomials in the variables $q^{\frac14}$ and $t^{\frac14}$. [\[lemm:qDifferenceLaurentCoefficients\]]{#lemm:qDifferenceLaurentCoefficients label="lemm:qDifferenceLaurentCoefficients"}* *Proof.* We argue directly from the explicit formulas ([\[eq:Mult\]](#eq:Mult){reference-type="ref" reference="eq:Mult"})--([\[eq:Hamiltonians\]](#eq:Hamiltonians){reference-type="ref" reference="eq:Hamiltonians"}). First, we note that on the level of $q$-difference operators the automorphism $I^2$ corresponds to the cyclic permutation $(1,2,3)$ of the indices of $x_{ij}$ and $\hat\delta_{ij}$. Hence, it will be enough for us to consider the action of $\widehat{\Delta}$ on the generators $O_1,O_2,O_{12},O_{23},O_{123}$. By construction, the statement of the lemma is immediate for $\widehat{\Delta}(O_1)$ and $\widehat{\Delta}(O_2)$.
As for the remaining three generators we get $$\begin{aligned} \widehat{\Delta}(O_{12})&\;\stackrel{(\ref{eq:LevelTwoGeneratorsRelation})}{=}\; \frac{q^{\frac14}\widehat{\Delta}(O_1)\widehat{\Delta}(O_2) -q^{-\frac14}\widehat{\Delta}(O_2)\widehat{\Delta}(O_1)}{q^{\frac12}-q^{-\frac12}}\\[5pt] &\stackrel{(\ref{eq:Mult1}),(\ref{eq:Hamiltonian1})}{=} \sum_{a,b\in\{\pm1\}}ab\, q^{\frac14}t^{-\frac12}\,\frac{(1-t^{\frac12}X_{23}X_{12}^aX_{13}^b) (1-t^{\frac12}X_{23}^{-1}X_{12}^aX_{13}^b)} {X_{13}^b(X_{12}-X_{12}^{-1})(X_{13}-X_{13}^{-1})}\, \hat\delta_{12}^a\hat\delta_{13}^b, \end{aligned} \label{eq:DeltaHatO12}$$ $$\begin{aligned} \widehat{\Delta}(O_{23})&\;\stackrel{(\ref{eq:LevelTwoGeneratorsRelation})}{=}\; \frac{q^{\frac14}\widehat{\Delta}(O_2)\widehat{\Delta}(O_3) -q^{-\frac14}\widehat{\Delta}(O_3)\widehat{\Delta}(O_2)}{q^{\frac12}-q^{-\frac12}}\\[5pt] &\stackrel{(\ref{eq:Mult1}),(\ref{eq:Hamiltonian2})}{=} \sum_{a,b\in\{\pm1\}}ab\,q^{-\frac14}t^{-\frac12}\, \frac{(1-t^{\frac12}X_{13}X_{12}^aX_{23}^b) (1-t^{\frac12}X_{13}^{-1}X_{12}^aX_{23}^b)} {X_{12}^{2a}X_{23}^b(X_{23}-X_{23}^{-1})(X_{12}-X_{12}^{-1})}\, \hat\delta_{12}^a\hat\delta_{23}^b,\\[10pt] \widehat{\Delta}(O_{123})&\;\stackrel{(\ref{eq:LevelThreeGeneratorRelation})}{=}\; \frac{q^{\frac14}\widehat{\Delta}(O_{12})\widehat{\Delta}(O_3) -q^{-\frac14}\widehat{\Delta}(O_3) \widehat{\Delta}(O_{12})}{q^{\frac12}-q^{-\frac12}}\\[5pt] &\stackrel{(\ref{eq:DeltaHatO12}),(\ref{eq:Hamiltonian2})}{=}\sum_{a,b\in\{\pm1\}} ab\,\frac{(1-t^{\frac12}X_{12}X_{23}^aX_{13}^b)(1-t^{\frac12}X_{12}^{-1}X_{23}^aX_{13}^b)} {t^{\frac12}X_{23}^{2a}(X_{13}-X_{13}^{-1})(X_{23}-X_{23}^{-1})}\, \hat\delta_{23}^a\hat\delta_{13}^b\end{aligned}$$ ◻ # Classical Limit, $\mathcal A_{q=1,t}$ {#sec:ClassicalLimit} One of the main goals of the current paper is to show that $\mathcal A_{q,t}$ is a quantization of a certain commutative Poisson algebra. To this end we have to study the limit $q\rightarrow 1$. Not every generating set of the defining ideal of $\mathcal A_{q,t}$ as an algebra over $\mathbf k=\mathbb C(q^{\frac14},t^{\frac14})$ behaves well when $q$ is specialized to a particular complex number. Remarkably, when on top normal ordering relators $\eta_{I,J}$ ([\[eq:EtaIJRelator\]](#eq:EtaIJRelator){reference-type="ref" reference="eq:EtaIJRelator"}) we also add 18 extra $J$-relators ([\[eq:JRelations\]](#eq:JRelations){reference-type="ref" reference="eq:JRelations"}) to the defining ideal, this extended set of generators does behave nicely in the limit $q\rightarrow 1$. Note that for generic $q$, the $J$-relations would follow from normal ordering relations by Lemma [\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"} as before. However, as $q\rightarrow 1$, they become independent. On the contrary, normal ordering relations $\eta_{I,J}=0$ in this limit turn into simple commutation relations between the generators. This motivates **Definition 11**. *Let $\mathcal A_{q=1,t}$ be a commutative algebra over $\mathbf k_t=\mathbb C(t^{\frac14})$ with 15 generators ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}), subject to* - *18 $J$-relations of the form* *[\[def:A1t\]]{#def:A1t label="def:A1t"}* ## Mapping Class Group action In this subsection we show that $\mathcal A_{q=1,t}$ is equipped with an action of the Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ similarly to it generic counterpart $\mathcal A_{q,t}$. 
Moreover, the action of $I$ and $d_1$ on generators is described by the same formulae ([\[eq:IActionAqt\]](#eq:IActionAqt){reference-type="ref" reference="eq:IActionAqt"}) and ([\[eq:d1Automorphism\]](#eq:d1Automorphism){reference-type="ref" reference="eq:d1Automorphism"}) evaluated at $q=1$. From Definition [\[def:A1t\]](#def:A1t){reference-type="ref" reference="def:A1t"}, we immediately obtain the analog of Lemma [\[lemm:IAutomorphism\]](#lemm:IAutomorphism){reference-type="ref" reference="lemm:IAutomorphism"}. **Lemma 12**. *The following permutation of generators $$\begin{aligned} I=(O_1,O_2,O_3,O_4,O_5,O_6) (O_{12},O_{23},O_{34},O_{45},O_{56},O_{61}) (O_{123},O_{234},O_{345}) \label{eq:IActionCommutative}\end{aligned}$$ extends to an order 6 automorphism of $\mathcal A_{q=1,t}$. [\[lemm:IActionCommutative\]]{#lemm:IActionCommutative label="lemm:IActionCommutative"}* **Lemma 13**. *The algebra $\mathcal A_{q=1,t}$ is equipped with an automorphism $d_1:\mathcal A_{q=1,t}\rightarrow\mathcal A_{q=1,t}$ which acts on generators as $$\begin{aligned} d_1:\quad\left\{\begin{array}{cll} O_2 &\mapsto & O_1O_2- O_{12} \\[2pt] O_6 &\mapsto& O_{61} \\[2pt] O_{12} &\mapsto& O_2 \\[2pt] O_{23} &\mapsto& O_1O_{23}- O_{123} \\[2pt] O_{56} &\mapsto& O_{234} \\[2pt] O_{61} &\mapsto& O_1O_{61}-O_6 \\[2pt] O_{123} &\mapsto& O_{23} \\[2pt] O_{234} &\mapsto& O_1O_{234}- O_{56} \end{array}\right. \qquad\qquad\qquad d_1^{-1}:\quad\left\{\begin{array}{cll} O_2 &\mapsto& O_{12} \\[2pt] O_6 &\mapsto& O_1O_6-O_{61} \\[2pt] O_{12} &\mapsto& O_1O_{12}-O_2\\[2pt] O_{23} &\mapsto& O_{123} \\[2pt] O_{56} &\mapsto& O_1O_{56}-O_{234} \\[2pt] O_{61} &\mapsto& O_6 \\[2pt] O_{123} &\mapsto& O_1O_{123}-O_{23} \\[2pt] O_{234} &\mapsto& O_{56} \end{array}\right. \label{eq:d1ActionCommutative}\end{aligned}$$ Here we have omitted all generators which are preserved by $d_1$. [\[lemm:d1HomomorphismCommutative\]]{#lemm:d1HomomorphismCommutative label="lemm:d1HomomorphismCommutative"}* *Proof.* Let $$\begin{aligned} P=\mathbf k_t[O_1, O_2, O_3, O_4, O_5, O_6, O_{12}, O_{23}, O_{34}, O_{45}, O_{56}, O_{61}, O_{123}, O_{234}, O_{345}]\end{aligned}$$ be the polynomial ring in 15 variables. First, one verifies that the two homomorphisms $d_1,d_1^{-1}:P\rightarrow P$ given by ([\[eq:d1ActionCommutative\]](#eq:d1ActionCommutative){reference-type="ref" reference="eq:d1ActionCommutative"}) are inverse to each other. To this end, we consider the action of $d_1\circ d_1^{-1}$ and $d_1^{-1}\circ d_1$ on all 15 generators of the polynomial ring $P$, similarly to ([\[eq:d1d1IActionO2GenericQ\]](#eq:d1d1IActionO2GenericQ){reference-type="ref" reference="eq:d1d1IActionO2GenericQ"}). After that, it remains to prove that the defining ideal ([\[eq:Aq1tDefiningIdeal\]](#eq:Aq1tDefiningIdeal){reference-type="ref" reference="eq:Aq1tDefiningIdeal"}) is preserved by $d_1$. For all $1\leqslant i\leqslant 6$ denote the l.h.s. of ([\[eq:CommutativeJRelationsA\]](#eq:CommutativeJRelationsA){reference-type="ref" reference="eq:CommutativeJRelationsA"}), ([\[eq:CommutativeJRelationsB\]](#eq:CommutativeJRelationsB){reference-type="ref" reference="eq:CommutativeJRelationsB"}), ([\[eq:CommutativeJRelationsC\]](#eq:CommutativeJRelationsC){reference-type="ref" reference="eq:CommutativeJRelationsC"}) by $r_i,r_{6+i},r_{12+i}$ respectively and let $r_0$ stand for the l.h.s. of the Casimir relation [\[eq:CommutativeCasimirRelation\]](#eq:CommutativeCasimirRelation){reference-type="ref" reference="eq:CommutativeCasimirRelation"}.
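Since the generators commute at $q=1$, each of the identities collected in the proof below is a computation with ordinary polynomials and can be checked mechanically. For instance, taking $r_2$ and $r_7$ to be the $q=1$ specializations of ([\[eq:JRelationsA\]](#eq:JRelationsA){reference-type="ref" reference="eq:JRelationsA"}) at $i=2$ and ([\[eq:JRelationsB\]](#eq:JRelationsB){reference-type="ref" reference="eq:JRelationsB"}) at $i=1$ (with the cyclic index convention, so that $O_{4,5,6}$ stands for $O_{123}$), the identity $d_1(r_2)=-r_7$ appearing below can be verified by the following SymPy sketch (an illustration only, not part of the original proof).

```python
import sympy as sp

t = sp.symbols('t', positive=True)
O1, O2, O3, O4, O5, O6, O12, O23, O34, O45, O56, O61, O123, O234, O345 = sp.symbols(
    'O1 O2 O3 O4 O5 O6 O12 O23 O34 O45 O56 O61 O123 O234 O345')

s = sp.sqrt(t) + 1/sp.sqrt(t)          # t^{1/2} + t^{-1/2}

# q = 1 specializations of (eq:JRelationsA) at i = 2 and (eq:JRelationsB) at i = 1
r2 = O4*O6 + O5*O123 - O45*O56 - s*O2
r7 = -O4*O61 - O5*O23 + O45*O234 - s*(O12 - O1*O2)

# the automorphism d_1 of (eq:d1ActionCommutative); generators not listed are fixed
d1 = {O2: O1*O2 - O12, O6: O61, O12: O2, O23: O1*O23 - O123,
      O56: O234, O61: O1*O61 - O6, O123: O23, O234: O1*O234 - O56}

assert sp.expand(r2.subs(d1, simultaneous=True) + r7) == 0
print("d_1(r_2) = -r_7 holds")
```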
First, we note that $$\begin{aligned} r_{16}=&O_{3,4}r_5-O_{1,2}r_6-O_2r_{12}+O_3r_{10}+r_{13},\\ r_{17}=&O_{4,5}r_6-O_{2,3}r_1-O_3r_7+O_4r_{11}+r_{14},\\ r_{18}=&O_{5,6}r_1-O_{3,4}r_2-O_4r_8+O_5r_{12}+r_{15},\end{aligned}$$ so it will be enough to examine the action of $d_1$ on 16 relators $r_0,\dots,r_{15}$. The following relators are preserved by $d_1$: $$\begin{aligned} r_1,r_3,r_4,r_5,r_9,r_{10},r_{13}.\end{aligned}$$ As for the remaining relators, we use the following identities in $P$: $$\begin{aligned} d_1(r_2)=&O_4 O_{61}+O_5 O_{23}-O_{45} O_{234}+(t^{\frac12}+t^{-\frac12})\big(O_{12}-O_1 O_2\big)\nonumber\\ =&-r_7,\label{eq:d1r2RelatorActionCommutative}\\[7pt] d_1(r_6)=&-O_3 O_{56}-O_4 O_{12}+O_1 O_2 O_4+O_{34} O_{123}+O_1 O_3 O_{234}-O_1 O_{23} O_{34}-(t^{\frac12}+t^{-\frac12})O_{61}\nonumber\\ =&O_1r_6+r_{12},\\[7pt] d_1(r_7)=&O_4 O_6+O_5 O_{123}-O_{45} O_{56}-O_1 O_4 O_{61}-O_1 O_5 O_{23}+O_1 O_{45} O_{234}\nonumber\\ &\quad+(t^{\frac12}+t^{-\frac12})\big(O_1^2 O_2-O_1 O_{12}-O_2\big),\\ =&r_2+O_1r_7,\nonumber\\[7pt] d_1(r_8)=&-O_2 O_5-O_{34} O_{61}+O_{234} O_{345}+(t^{\frac12}+t^{-\frac12})\big(O_{123}-O_1 O_{23}-O_3 O_{12}+O_1 O_2 O_3\big)\nonumber\\ =&r_{14},\label{eq:d1r8RelatorActionCommutative}\\[7pt] d_1(r_{11})=&O_3 O_6+O_{12} O_{45}-O_1 O_2 O_{45}-O_1 O_3 O_{61}-O_{123} O_{345}+O_1 O_{23} O_{345}\nonumber\\ &\quad+(t^{\frac12}+t^{-\frac12})\big(O_5 O_{61}-O_{234}\big)\\ =&O_{2,3}r_4-O_{6,1}r_5+O_2r_9-r_{15},\nonumber\\[7pt] d_1(r_{12})=&-O_2 O_4-O_3 O_{234}+O_{23} O_{34}+(t^{\frac12}+t^{-\frac12})O_6\nonumber\\ =&-r_6,\\[7pt] d_1(r_{14})=&O_5 O_{12}+O_6 O_{34}-O_1 O_2 O_5 -O_{56} O_{345}-O_1 O_{34} O_{61}+O_1 O_{234} O_{345}\nonumber\\ &\quad+(t^{\frac12}+t^{-\frac12})\big(O_{23} -O_2O_3 +O_1O_{123} -O_1^2O_{23} -O_1O_3O_{12} +O_1^2O_2O_3\big)\\ =&-r_8+O_1r_{14},\nonumber\\[7pt] d_1(r_{15})=& -O_2 O_{45}-O_3 O_{61}+O_{23} O_{345}+(t^{\frac12}+t^{-\frac12})\big(-O_{56} +O_1O_{234} +O_4O_{123}\nonumber\\ &\qquad\quad+O_{12}O_{34} -O_1O_2O_{34} -O_1O_4O_{23} -O_3O_4O_{12} +O_1O_2O_3O_4\big)\\ =&-O_{234}r_1-O_{34}r_7+O_5r_6+r_{11}+O_4r_{14}.\nonumber\end{aligned}$$ [\[eq:d1RelatorActionCommutative\]]{#eq:d1RelatorActionCommutative label="eq:d1RelatorActionCommutative"} Finally, for $r_0$ we get $$\begin{aligned} d_1(r_0)=& -O_1O_3O_5 -O_1O_4O_{345} +O_3O_{56}O_{61} +O_4O_{12}O_{61} +O_5O_{12}O_{23} -O_1O_2O_4O_{61} -O_1O_2O_5O_{23}\\ &-O_{23}O_{56}O_{345} -O_1O_3O_{61}O_{234} +O_1O_{23}O_{234}O_{345} +\big(t^{\frac12}+t^{-\frac12}\big) \big(-O_2^2 -O_6^2 -O_{34}^2 -O_{45}^2\\ &-O_1O_2O_{12} +O_1O_6O_{61} +O_3O_4O_{34} +O_4O_5O_{45} -O_1O_3O_{12}O_{23} +O_1O_{23}O_{123} +O_3O_{12}O_{123}\\ &+O_1^2O_2O_3O_{23} +O_5O_{61}O_{234} +O_1^2O_2^2 -O_1^2O_{23}^2 -O_{123}^2 -O_{234}^2 -O_1O_2O_3O_{123}\big) +\big(t^{\frac12}+t^{-\frac12}\big)^3\\ =&r_0+\frac12O_2r_2 +\frac12O_{23}O_{234}r_4 -\frac12O_{61}O_{234}r_5 +\frac12(O_6-O_1O_{61})r_6 +\frac12(O_1O_2-O_{12})r_7\\ &-\frac12O_{23}r_8+\frac12O_2O_{234} -\frac12O_{56}r_{11} -\frac12O_{61}r_{12} +\frac12(O_1O_{23}-O_{123})r_{14} -\frac12 O_{234}r_{15}.\end{aligned}$$ ◻ Now we are ready to prove the commutative counterpart of Theorem [\[th:MCGActionGeneric\]](#th:MCGActionGeneric){reference-type="ref" reference="th:MCGActionGeneric"}. **Proposition 14**. *Two automorphisms $d_1,I:\mathcal A_{q=1,t}\rightarrow\mathcal A_{q=1,t}$ define the action of Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of genus two surface on $\mathcal A_{q=1,t}$. 
[\[prop:MCGActionAq1t\]]{#prop:MCGActionAq1t label="prop:MCGActionAq1t"}* *Proof.* Repeat calculations of ([\[eq:MCGRelatorsOnGeneratorsGeneric\]](#eq:MCGRelatorsOnGeneratorsGeneric){reference-type="ref" reference="eq:MCGRelatorsOnGeneratorsGeneric"}) for $q=1$, assuming generators commute. All equalities in ([\[eq:MCGRelatorsOnGeneratorsGeneric\]](#eq:MCGRelatorsOnGeneratorsGeneric){reference-type="ref" reference="eq:MCGRelatorsOnGeneratorsGeneric"}), where we have used normal ordering relations ([\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"}) turn into verbatim identities in $\mathcal A_{q=1,t}$. ◻ ## Quasiclassical limit of representation in difference operators {#sec:QDifferenceQuasiclassicalLimit} In this subsection we show that homomorphism $\widehat{\Delta}:\mathcal A_{q,t}\rightarrow A_{q,t}$ to the algebra of difference operators induces a homomorphism from $\mathcal A_{q=1,t}$ to rational functions in six variables, where the image $\Delta(O_I)$ of generators is obtained by the so-called quasiclassical limit of the corresponding $q$-difference operators $\widehat\Delta(O_I)$. To this end, we set $$\begin{aligned} q=\mathrm e^{4\mathrm i\hbar},\qquad X_{12}=\mathrm e^{x_{12}},\qquad X_{23}=\mathrm e^{x_{23}},\qquad X_{13}=\mathrm e^{x_{13}}\end{aligned}$$ and consider the action of $q$-difference operator $\widehat{\Delta}(O_I)$ on Fourier harmonic $$\begin{aligned} \widehat{\Delta}(O_I)\,\mathrm e^{\frac{\mathrm i}\hbar(p_{12}x_{12}+p_{23}x_{23}+p_{13}x_{13})}, %\label{eq:HarmonicAction}\end{aligned}$$ where we assume that $q$-shift operators $\hat\delta_{12}^{\pm1},\hat\delta_{23}^{\pm1},\hat\delta_{13}^{\pm1}$ act on $x_{12},x_{23},x_{13}$ by shifting the respective variable by $\mathrm i\hbar$, $$\begin{aligned} \hat\delta_{12}f(x_{12},x_{23},x_{13})=&f(x_{12}+\mathrm i\hbar,x_{23},x_{13}),\\ \hat\delta_{23}f(x_{12},x_{23},x_{13})=&f(x_{12},x_{23}+\mathrm i\hbar,x_{13}),\\ \hat\delta_{13}f(x_{12},x_{23},x_{13})=&f(x_{12},x_{23},x_{13}+\mathrm i\hbar). \end{aligned} \label{eq:ShiftOperatorsAction}$$ **Proposition 15**. - *For all generators $O_I\in\mathcal A_{q,t}$, the following limit exists* *[\[prop:qDifferenceQuasiClassicalLimit\]]{#prop:qDifferenceQuasiClassicalLimit label="prop:qDifferenceQuasiClassicalLimit"}* *Proof.* From ([\[eq:ShiftOperatorsAction\]](#eq:ShiftOperatorsAction){reference-type="ref" reference="eq:ShiftOperatorsAction"}) we immediately get $$\begin{aligned} \hat\delta_{ij}\mathrm e^{\frac{\mathrm i}\hbar(p_{12}x_{12}+p_{23}x_{23}+p_{13}x_{13})}=P_{ij}\mathrm e^{\frac{\mathrm i}\hbar(p_{12}x_{12}+p_{23}x_{23}+p_{13}x_{13})},\qquad\textrm{for all}\quad (ij)\in\{(12),(23),(13)\}.\end{aligned}$$ Combining it with Lemma [\[lemm:qDifferenceLaurentCoefficients\]](#lemm:qDifferenceLaurentCoefficients){reference-type="ref" reference="lemm:qDifferenceLaurentCoefficients"} we immediately get the first two statements of the Proposition. Now, in order to prove the last statement we need to show that all defining relations ([\[eq:Aq1tDefiningIdeal\]](#eq:Aq1tDefiningIdeal){reference-type="ref" reference="eq:Aq1tDefiningIdeal"}) of $\mathcal A_{q=1,t}$ are satisfied for $\Delta(O_I)$, images of generators. To this end, recall that each of the 19 defining relations of $\mathcal A_{q=1,t}$ has a noncommutative counterpart in $\mathcal A_{q,t}$. 
Namely, the 18 $J$-relations ([\[eq:JRelations\]](#eq:JRelations){reference-type="ref" reference="eq:JRelations"}) and the $q$-Casimir relation ([\[eq:qCasimirRelation\]](#eq:qCasimirRelation){reference-type="ref" reference="eq:qCasimirRelation"}). By Proposition [\[prop:qDifferenceHomomorphism\]](#prop:qDifferenceHomomorphism){reference-type="ref" reference="prop:qDifferenceHomomorphism"}, both ([\[eq:JRelations\]](#eq:JRelations){reference-type="ref" reference="eq:JRelations"}) and ([\[eq:qCasimirRelation\]](#eq:qCasimirRelation){reference-type="ref" reference="eq:qCasimirRelation"}) are satisfied by the $q$-difference operators $\widehat{\Delta}(O_I)$. At the same time, the first two statements of the current Proposition imply that for every noncommutative monomial $O_{I_1}\dots O_{I_k}$ we have $$\begin{aligned} \Delta(O_{I_1})\dots\Delta(O_{I_k}) =\lim_{\hbar\rightarrow0} \dfrac{\widehat{\Delta}(O_{I_1}\dots O_{I_k})\left(\mathrm e^{\frac{\mathrm i}\hbar(p_{12}x_{12}+p_{23}x_{23}+p_{13}x_{13})}\right)}{\mathrm e^{\frac{\mathrm i}\hbar(p_{12}x_{12}+p_{23}x_{23}+p_{13}x_{13})}}.\end{aligned}$$ As a corollary, all of the 19 defining relations of $\mathcal A_{q=1,t}$ must be satisfied by $\Delta(O_I)$. ◻ # Isomorphism between $\mathcal A_{q=t=1}$ and genus two character variety {#sec:IsomorphismWithCharacterVariety} Note that all Lemmas and Propositions of Section [3](#sec:ClassicalLimit){reference-type="ref" reference="sec:ClassicalLimit"} work verbatim if we replace the formal variable $t^{\frac14}$ with a nonzero parameter. In this section we examine the specialization of Definition [\[def:A1t\]](#def:A1t){reference-type="ref" reference="def:A1t"} corresponding to the parameter value $t=1$. Indeed, consider a polynomial ring in 15 variables $$\begin{aligned} P=\mathbb C[O_1, O_2, O_3, O_4, O_5, O_6, O_{12}, O_{23}, O_{34}, O_{45}, O_{56}, O_{61}, O_{123}, O_{234}, O_{345}],\end{aligned}$$ and let $\mathcal I_{q=t=1}\subset P$ be the ideal generated by the $t=1$ specialization of the relators in ([\[eq:Aq1tDefiningIdeal\]](#eq:Aq1tDefiningIdeal){reference-type="ref" reference="eq:Aq1tDefiningIdeal"}). We define a quotient algebra $$\mathcal A_{q=t=1}:={\raisebox{.2em}{$P$}\left/\raisebox{-.2em}{$\mathcal I_{q=t=1}$}\right.}.$$ The main goal of the current section is to prove that there exists a $\mathrm{Mod}(\Sigma_2)$-equivariant isomorphism $$\begin{aligned} \mathcal A_{q=t=1}\simeq\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}, \label{eq:A1tIsomorphism}\end{aligned}$$ between $\mathcal A_{q=t=1}$ and the coordinate ring of the $SL(2,\mathbb C)$-character variety of a closed genus two surface. It turns out that we can compute a Groebner basis of the defining ideal $\mathcal I_{q=t=1}\subset P$ for an appropriate choice of monomial order. Without going into further details, which won't be necessary until Section [5](#sec:WordProblem){reference-type="ref" reference="sec:WordProblem"}, we consider 61 polynomials in $P$ obtained by the $q=t=1$ specialization of the formulas listed in Appendix [10](#sec:qGroebnerBasis){reference-type="ref" reference="sec:qGroebnerBasis"}, $$g_j^{(q=t=1)}:=g_j\big|_{q=t=1}\in\mathcal I_{q=t=1},\qquad 1\leq j\leq 61.$$ **Proposition 16**. *The collection of elements $$\begin{aligned} \big\{g_i^{(q=t=1)}\;\big|\;1\leqslant i\leqslant61\big\}\end{aligned}$$ provides a normalized Groebner basis for $\mathcal I_{q=t=1}\subset P$ w.r.t. the weighted degree reverse lexicographic monomial order. 
[\[prop:GroebnerbasisAqt1\]]{#prop:GroebnerbasisAqt1 label="prop:GroebnerbasisAqt1"}* ## Character variety {#sec:CharacterVariety} Genus two surface can be obtained as an identification space from an octagon with sides glued as shown on Figure [\[fig:GenusTwoPiGenerators\]](#fig:GenusTwoPiGenerators){reference-type="ref" reference="fig:GenusTwoPiGenerators"}. Let $p\in\Sigma_2$ be the center of an octagon and consider the fundamental group $\pi_1(\Sigma_2,p)$ based at $p$. We can choose generators $X_1,X_2,Y_1,Y_2$ of $\pi_1(\Sigma)$ corresponding to loops crossing the sides of an octagon exactly once. As a result, we obtain the following presentation for the fundamental group $$\begin{aligned} \pi_1(\Sigma_2,p)=\langle X_1,Y_1,X_2,Y_2\;|\; X_1Y_1X_1^{-1}Y_1^{-1}X_2Y_2X_2^{-1}Y_2^{-1}=1\rangle. \label{eq:GenusTwoFundamentalGroup}\end{aligned}$$ Hereinafter we use notation in which the composition of paths is read from right to left. Note that this is opposite to the standard convention in topology. As follows from Corollary 40 in [@Sikora'2012] (see also [@RapinchukBenyash-KrivetzChernousov'1996]), the coordinate ring of the representation variety $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))$ is an integral domain. Hence, its invariant subring $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2,p),SL(2,\mathbb C)))^{SL(2,\mathbb C)}$ must also be an integral domain. The Krull dimension of the latter can be computed by examining the general point of its spectrum, which corresponds to the orbit of an irreducible $SL(2,\mathbb C)$-representation of $\pi_1(\Sigma_2,p)$. This gives $$\begin{aligned} \dim\; \mathrm{Hom}(\pi_1(\Sigma_2,p),SL(2,\mathbb C))\mathbin{ \mathchoice{/\mkern-6mu/}% \displaystyle {/\mkern-6mu/}% \textstyle {/\mkern-5mu/}% \scriptstyle {/\mkern-5mu/}}SL(2,\mathbb C)=9-3=6.\end{aligned}$$ ## $\mathrm{Mod}(\Sigma_2)$-equivariant homomorphism Mapping Class Group $\mathrm{Mod}(\Sigma_2)$ of a genus two surface acts on conjugacy classes of $\pi_1(\Sigma_2)$. This group is generated by five left Dehn twists $d_1,d_2,d_3,d_4,d_5$ about the cycles $c_1,c_2,c_3,c_4,c_5$ shown on Figure [\[fig:GenusTwoPiGenerators\]](#fig:GenusTwoPiGenerators){reference-type="ref" reference="fig:GenusTwoPiGenerators"}.[^4] The action of the five Dehn twists on generators of the fundamental group is given by $$\begin{aligned} d_1:\;\left\{\begin{array}{ccl} X_1&\mapsto& X_1Y_1\\ Y_1&\mapsto& Y_1\\ X_2&\mapsto& X_2\\ Y_2&\mapsto& Y_2 \end{array}\right.&, \qquad d_2:\;\left\{\begin{array}{ccl} X_1&\mapsto& X_1\\ Y_1&\mapsto& Y_1X_1^{-1}\\ X_2&\mapsto& X_2\\ Y_2&\mapsto& Y_2 \end{array}\right., \qquad d_3:\;\left\{\begin{array}{ccl} X_1&\mapsto& Y_2Y_1X_1\\ Y_1&\mapsto& Y_1\\ X_2&\mapsto& Y_1Y_2X_2\\ Y_2&\mapsto& Y_2 \end{array}\right.,\\[10pt] d_4:\;&\left\{\begin{array}{ccl} X_1&\mapsto& X_1\\ Y_1&\mapsto& Y_1\\ X_2&\mapsto& X_2\\ Y_2&\mapsto& Y_2X_2^{-1} \end{array}\right., \qquad d_5:\;\left\{\begin{array}{ccl} X_1&\mapsto& X_1\\ Y_1&\mapsto& Y_1\\ X_2&\mapsto& X_2Y_2\\ Y_2&\mapsto& Y_2 \end{array}\right.. \end{aligned} \label{eq:DehnTwistsActionOnFundamentalGroup}$$ Recall that ([\[eq:IActionCommutative\]](#eq:IActionCommutative){reference-type="ref" reference="eq:IActionCommutative"}) and ([\[eq:d1ActionCommutative\]](#eq:d1ActionCommutative){reference-type="ref" reference="eq:d1ActionCommutative"}) define a pair of automorphisms of the polynomial ring $P$ which we will denote by the same letters $d_1,I:P\rightarrow P$. **Definition 17**. 
*Let $\Psi$ be a homomorphism of the polynomial ring $$\begin{aligned} \Psi:P\rightarrow \mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}\end{aligned}$$ defined on generators as $$\begin{aligned} \Psi(O_1)=&\tau_{Y_1}, &\Psi(O_{12})=&\tau_{Y_1X_1^{-1}}, &\Psi(O_{123})=&\tau_{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}},\\ \Psi(O_2)=&\tau_{X_1}, &\Psi(O_{23})=&\tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}}, &\Psi(O_{234})=&\tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}}\\ \Psi(O_3)=&\tau_{Y_1Y_2}, &\Psi(O_{34})=&\tau_{Y_2^{-1}Y_1^{-1}X_2}, &\Psi(O_{345})=&\tau_{X_2Y_1^{-1}},\\ \Psi(O_4)=&\tau_{X_2}, &\Psi(O_{45})=&\tau_{Y_2X_2},\\ \Psi(O_5)=&\tau_{Y_2}, &\Psi(O_{56})=&\tau_{X_1Y_2^2X_2^{-1}Y_2^{-1}},\\ \Psi(O_6)=&\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}, &\Psi(O_{61})=&\tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}, \end{aligned} \label{eq:PsiActionOnGenerators}$$ where for all $M\in\pi_1(\Sigma_2,p)$ we denote by $$\begin{aligned} \tau_M\in \mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}\end{aligned}$$ the trace of the corresponding product of matrices.* **Lemma 18**. *Homomorphism $\Psi$ is equivariant with respect to the action of $d_1$ and $I$ on both sides. [\[lemm:PsiMCGEquivariance\]]{#lemm:PsiMCGEquivariance label="lemm:PsiMCGEquivariance"}* *Proof.* From ([\[eq:DehnTwistsActionOnFundamentalGroup\]](#eq:DehnTwistsActionOnFundamentalGroup){reference-type="ref" reference="eq:DehnTwistsActionOnFundamentalGroup"}) we compute the action of $I=d_1d_2d_3d_4d_5$ on generators of the fundamental group: $$\begin{aligned} I:\quad\left\{\begin{array}{ccl} X_1&\mapsto& Y_2Y_1,\\[5pt] Y_1&\mapsto& X_1^{-1},\\[5pt] X_2&\mapsto& (X_1^{-1}Y_2X_2)Y_2(X_1^{-1}Y_2X_2)^{-1},\\[5pt] Y_2&\mapsto& Y_2(X_1^{-1}Y_2X_2)^{-1}. \end{array}\right. \label{eq:IActionOnFundamentalGroup}\end{aligned}$$ On the character variety this translates to $$\begin{aligned} I:\qquad\left\{\begin{array}{l} \tau_{Y_1}\quad\mapsto\quad \tau_{X_1^{-1}}=\tau_{X_1}\quad\mapsto\quad \tau_{Y_2Y_1}\quad\mapsto\quad \tau_{Y_2X_2^{-1}Y_2^{-1}}=\tau_{X_2}\quad\mapsto\\[5pt] \tau_{(X_1^{-1}Y_2X_2)Y_2(X_1^{-1}Y_2X_2)^{-1}}=\tau_{Y_2}\quad\mapsto\quad \tau_{Y_2X_2^{-1}Y_2^{-1}X_1}=\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}\quad\mapsto\\[5pt] \tau_{Y_2Y_1Y_2(X_1^{-1}Y_2X_2)^{-1}(X_1^{-1}Y_2X_2)Y_2^{-1}(X_1^{-1}Y_2X_2)^{-1} (X_1^{-1}Y_2X_2)Y_2^{-1}}=\tau_{Y_1} \end{array} \right.\end{aligned}$$ The latter compares well to the cyclic permutation of elements in the first column of ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}). Similar calculations verify the other two orbits of $I$ corresponding to second and third columns of ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}). As a result, we conclude that $$\begin{aligned} I(\Psi(O_J))=\Psi(I(O_J))\qquad\textrm{for all generators}\quad O_J\in P.\end{aligned}$$ Now we will prove that $\Psi$ is $d_1$-equivariant. From ([\[eq:DehnTwistsActionOnFundamentalGroup\]](#eq:DehnTwistsActionOnFundamentalGroup){reference-type="ref" reference="eq:DehnTwistsActionOnFundamentalGroup"}) we immediately note that $d_1$ acts identically on all $\tau$ not involving $X_1$, namely on $$\begin{aligned} \tau_{Y_1},\quad \tau_{Y_1Y_2},\quad \tau_{X_2},\quad \tau_{Y_2},\quad \tau_{Y_2^{-1}Y_1^{-1}X_2},\quad \tau_{Y_2X_2},\qquad \tau_{X_2Y_1^{-1}}.\end{aligned}$$ For the remaining 8 elements which appear on the r.h.s. 
of ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}) we obtain $$d_1:\qquad\left\{\begin{array}{ccl} \tau_{X_1}&\mapsto& \tau_{X_1Y_1}=\tau_{X_1}\tau_{Y_1}-\tau_{Y_1X_1^{-1}},\\[5pt] \tau_{X_1Y_2X_2^{-1}Y_2^{-1}}&\mapsto& \tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1},},\\[5pt] \tau_{Y_1X_1^{-1}}&\mapsto& \tau_{X_1^{-1}}=\tau_{X_1},\\[5pt] \tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}}&\mapsto& \tau_{Y_1^{-1}X_1^{-1}Y_1^{-1}Y_2^{-1}} =\tau_{Y_1}\tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}}-\tau_{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}\\[5pt] \tau_{X_1Y_2^2X_2^{-1}Y_2^{-1}}&\mapsto& \tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}},\\[5pt] \tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}&\mapsto& \tau_{X_1Y_1^2Y_2X_2^{-1}Y_2^{-1}} =\tau_{Y_1}\tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}-\tau_{X_1Y_2X_2^{-1}Y_2^{-1}},\\[5pt] \tau_{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}&\mapsto& \tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}},\\[5pt] \tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}}&\mapsto& \tau_{X_1Y_1^2Y_2^2X_2^{-1}Y_2^{-1}} =\tau_{Y_1}\tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}}-\tau_{X_1Y_2^2X_2^{-1}Y_2^{-1}}. \end{array}\right. \label{eq:d1ActionTau}$$ Comparing ([\[eq:d1ActionTau\]](#eq:d1ActionTau){reference-type="ref" reference="eq:d1ActionTau"}) with ([\[eq:d1ActionCommutative\]](#eq:d1ActionCommutative){reference-type="ref" reference="eq:d1ActionCommutative"}) we conclude that $$\begin{aligned} d_1(\Psi(O_J))=\Psi(d_1(O_J))\qquad\textrm{for all generators}\quad O_J\in P.\end{aligned}$$ ◻ Now we are ready to prove that ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}) defines a homomorphism from $\mathcal A_{q=t=1}$ to the coordinate ring of the genus two character variety. **Proposition 19**. *Defining ideal $\mathcal I_{q=t=1}\subset P$ of $\mathcal A_{q=t=1}$ belongs to the kernel of $\Psi$. In other words $\Psi$ descends to a homomorphism of the quotient algebra, which we denote by the same letter: $$\Psi:\mathcal A_{q=t=1}\rightarrow\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}. \label{eq:PsiHomomorphismFromAq1t1}$$ [\[prop:PsiHomomorphism\]]{#prop:PsiHomomorphism label="prop:PsiHomomorphism"}* *Proof.* Because $\Psi$ is equivariant w.r.t. $d_1$ and $I$, so is $\ker\Psi$. Recall that 19 defining relations ([\[eq:Aq1tDefiningIdeal\]](#eq:Aq1tDefiningIdeal){reference-type="ref" reference="eq:Aq1tDefiningIdeal"}) split into 4 orbits of $I$. Moreover, from ([\[eq:d1r2RelatorActionCommutative\]](#eq:d1r2RelatorActionCommutative){reference-type="ref" reference="eq:d1r2RelatorActionCommutative"}) and ([\[eq:d1r8RelatorActionCommutative\]](#eq:d1r8RelatorActionCommutative){reference-type="ref" reference="eq:d1r8RelatorActionCommutative"}) we know that $$\begin{aligned} d_1(r_2)=-r_7,\qquad d_1(r_8)=r_{14}.\end{aligned}$$ Hence, it will be enough for us to prove that $r_0,r_1\in\ker\Psi$ and the rest will follow by equivariance. 
We get $$\begin{aligned} \Psi(r_1)=&\tau_{Y_1Y_2}\tau_{Y_2}+\tau_{X_2}\tau_{X_2Y_1^{-1}} -\tau_{Y_2^{-1}Y_1^{-1}X_2}\tau_{Y_2X_2}-2\tau_{Y_1},\\[5pt] \Psi(r_0)=&8-\tau_{Y_1X_1^{-1}}^2-\tau_{Y_2X_2}^2-\tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}}^2 -\tau_{Y_2^{-1}Y_1^{-1}X_2}^2-\tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}^2\\ &-\tau_{Y_1Y_2}\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}\tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}} +\tau_{X_2Y_1^{-1}}\tau_{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}\tau_{X_1Y_1Y_2^2X_2^{-1}Y_2^{-1}}\\ &+\tau_{Y_1Y_2}\tau_{X_1^{-1}Y_1^{-1}Y_2^{-1}}\tau_{X_1} +\tau_{Y_1Y_2}\tau_{Y_2^{-1}Y_1^{-1}X_2}\tau_{X_2} -\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}\tau_{X_1}\tau_{X_2}\\ &+\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}\tau_{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}\tau_{Y_1} +\tau_{Y_1X_1^{-1}}\tau_{X_1}\tau_{Y_1} -\tau_{X_2Y_1^{-1}}\tau_{X_2}\tau_{Y_1}\\ &+\tau_{X_1Y_2X_2^{-1}Y_2^{-1}}\tau_{X_1Y_2^2X_2^{-1}Y_2^{-1}}\tau_{Y_2} -\tau_{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}\tau_{X_1}\tau_{Y_2} +\tau_{Y_2X_2}\tau_{X_2}\tau_{Y_2} -\tau_{Y_1Y_2}\tau_{Y_1}\tau_{Y_2}. \end{aligned} \label{eq:PsiImageRelations01}$$ Both expressions on the right hand side of ([\[eq:PsiImageRelations01\]](#eq:PsiImageRelations01){reference-type="ref" reference="eq:PsiImageRelations01"}) give rise to invariant polynomials in the coordinates of the full representation variety $\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))$. We have used Mathematica to compute a Groebner basis for the defining ideal of the coordinate ring $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))$ of the full representation variety in the degree reverse lexicographic order. This basis has 82 elements. Using this basis we have reduced both expressions to confirm that both of them have zero remainder modulo the defining ideal. We omit routine details from the main text and invite the careful reader to examine our calculations at [@Arthamonov-GitHub-Flat]. ◻ ## The Isomorphism For the $SL(2,\mathbb C)$-character variety of the one-relator group ([\[eq:GenusTwoFundamentalGroup\]](#eq:GenusTwoFundamentalGroup){reference-type="ref" reference="eq:GenusTwoFundamentalGroup"}) we can utilize the algorithm from [@AshleyBurelleLawton'2018]. This algorithm provides a complete set of generators of the coordinate ring of the character variety together with a (possibly incomplete) set of relations, which is nevertheless enough to provide a set-theoretic cut-out of the variety of irreducible representations. The algorithm provides us with 14 generators: $$\begin{aligned} \tau _{X_1},\;\tau _{Y_1},\;\tau _{X_2},\;\tau _{Y_2},\;\tau _{X_1Y_1},\;\tau _{X_1X_2},\;\tau _{X_1Y_2},\;\tau _{Y_1X_2},\;\tau _{Y_1Y_2},\;\tau _{X_2Y_2},\;\tau _{X_1Y_1X_2},\;\tau _{X_1Y_1Y_2},\;\tau _{X_1X_2Y_2},\;\tau _{Y_1X_2Y_2}, \label{eq:ABL18Generators}\end{aligned}$$ subject to 19 relations which we omit from the main text for the sake of brevity. Now let $$P^{\mathrm{ABL}}:=\mathbb C[\tau_{X_1}, \tau_{Y_1}, \tau_{X_2}, \tau_{Y_2}, \tau_{X_1Y_1}, \tau_{X_1X_2}, \tau_{X_1Y_2}, \tau_{Y_1X_2}, \tau_{Y_1Y_2}, \tau_{X_2Y_2}, \tau_{X_1Y_1X_2}, \tau_{X_1Y_1Y_2}, \tau_{X_1X_2Y_2}, \tau_{Y_1X_2Y_2}]$$ be the polynomial ring in the 14 variables ([\[eq:ABL18Generators\]](#eq:ABL18Generators){reference-type="ref" reference="eq:ABL18Generators"}) and let $\mathcal I^{\mathrm{ABL}}\subset P^{\mathrm{ABL}}$ be the ideal generated by the 19 relations mentioned above. 
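Both the membership check in the proof of Proposition 19 above and the reductions in Lemma 21 below follow the same pattern: compute a Groebner basis of the relevant defining ideal and verify that the polynomial in question reduces to zero remainder. The following SymPy sketch illustrates this pattern on a small toy ideal; the variables and generators are placeholders, and it does not reproduce the actual 82-element or 61-element bases from our Mathematica and SINGULAR computations.

```python
# Minimal sketch of the ideal-membership test used throughout this section:
# f lies in an ideal I iff the remainder of f modulo a Groebner basis of I
# (for any fixed monomial order) is zero.  The ideal below is a toy
# placeholder, not the actual defining ideal considered in the text.
from sympy import symbols, groebner, reduced

x, y, z = symbols('x y z')

# toy "defining ideal"
I_gens = [x**2 + y**2 + z**2 - 4, x*y - z]

# Groebner basis in degree reverse lexicographic order
G = groebner(I_gens, x, y, z, order='grevlex')

# a candidate element, built so that it manifestly lies in the ideal
f = (x + z) * I_gens[0] + y**2 * I_gens[1]

_, remainder = reduced(f, list(G.exprs), x, y, z, order='grevlex')
print(remainder == 0)  # True: f reduces to zero, hence f belongs to the ideal
```

The same test, applied to the actual generators and relators, underlies the computer checks referenced above.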
By Theorem 3.4 from [@AshleyBurelleLawton'2018] (see also Theorem 3.2 in [@GonzalezMontesinos'1993]), we have $${\raisebox{.2em}{$P^{\mathrm{ABL}}$}\left/\raisebox{-.2em}{$\sqrt{\mathcal I^{\mathrm{ABL}}}$}\right.}\;\simeq\; {\raisebox{.2em}{$\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}$}\left/\raisebox{-.2em}{$\sqrt{(0)}$}\right.} \;\stackrel{\textrm{Sec.} \ref{sec:CharacterVariety}}=\; \mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}. \label{eq:ABL18SetTheoreticCutout}$$ **Definition 20**. *Let $\Phi$ be a homomorphism of the polynomial ring $$\begin{aligned} \Phi:P^{\mathrm{ABL}}\rightarrow \mathcal A_{q=t=1}\end{aligned}$$ defined on generators as $$\begin{aligned} \Phi(\tau _{X_1})=& O_2,& \Phi(\tau _{Y_2})=& O_5,& \Phi(\tau _{X_1Y_1})=& O_1 O_2-O_{1,2},\nonumber\\ \Phi(\tau _{Y_1})=& O_1,& \Phi(\tau _{Y_1Y_2})=& O_3,& \Phi(\tau _{Y_1X_2})=& O_1 O_4-O_{3,4,5},\nonumber\\ \Phi(\tau _{X_2})=& O_4,&\Phi(\tau _{X_2Y_2})=& O_{4,5},& \Phi(\tau _{Y_1X_2Y_2})=& O_{3,4}+O_1 O_{4,5}-O_5 O_{3,4,5}, \label{eq:PhiActionOnGenerators}\end{aligned}$$ $$\begin{aligned} \Phi(\tau _{X_1X_2})=& -O_1 O_{6,1}+O_1 O_2 O_{3,4,5}-O_{1,2} O_{3,4,5}+O_6,\nonumber\\ \Phi(\tau _{X_1Y_2})=& O_3 O_{1,2}+O_1 O_{2,3}-O_{1,2,3}-O_1 O_2 O_3+O_2 O_5,\nonumber\\ \Phi(\tau _{X_1Y_1X_2})=&- O_1^2 O_{6,1}+O_2 O_1^2 O_{3,4,5}-O_1 O_{1,2} O_{3,4,5}+O_{6,1}-O_2 O_{3,4,5}+O_6 O_1,\nonumber\\ \Phi(\tau _{X_1Y_1Y_2})=& -O_5 O_{1,2}-O_4 O_5 O_{6,1}-O_{2,3}+O_{4,5} O_{6,1}+ O_4 O_{2,3,4}+O_1 O_2 O_5,\nonumber\\ \Phi(\tau _{X_1X_2Y_2})=& -O_1 O_2 O_{3,4}+O_{1,2} O_{3,4}-O_{5,6}-O_1 O_5 O_{6,1}+O_1 O_{2,3,4}+O_1 O_2 O_5 O_{3,4,5}-O_5 O_{1,2} O_{3,4,5}+O_5 O_6.\nonumber\end{aligned}$$* **Lemma 21**. *Ideal $\mathcal I^{\mathrm{ABL}}\subset P^{\mathrm{ABL}}$ is annihilated by the above homomorphism: $$\mathcal I^{\mathrm{ABL}}\subset\ker\Phi.$$ [\[lemm:ABL18RelationsAnnihilatedByPhi\]]{#lemm:ABL18RelationsAnnihilatedByPhi label="lemm:ABL18RelationsAnnihilatedByPhi"}* *Proof.* To this end we calculate the Groebner basis of defining ideal of $\mathcal A_{q=t=1}$ and reduce the $\Phi$-image of generators of $\mathcal I^{\mathrm{ABL}}$ modulo this ideal. In particular, for the first generator of $\mathcal I^{\mathrm{ABL}}$ we get $$\begin{aligned} &\Phi\Big(\tau _{X_1X_2}^2+\tau _{X_1}^2+\tau _{X_2}^2+\tau _{X_1X_2} \left(-\tau _{X_1} \tau _{X_2}+\tau _{X_1Y_1} \tau _{Y_1X_2}-\tau _{Y_1} \tau _{X_1Y_1X_2}\right)+\tau _{X_1Y_1}^2+\tau _{Y_1X_2}^2+\tau _{X_1Y_1X_2}^2\\ &\quad\qquad-\tau _{X_1} \tau _{Y_1X_2} \tau _{X_1Y_1X_2}-\tau _{X_2} \tau _{Y_1} \tau _{Y_1X_2}+\tau _{X_1} \tau _{X_2} \tau _{Y_1} \tau _{X_1Y_1X_2}-\tau _{X_1Y_1} \left(\tau _{X_2} \tau _{X_1Y_1X_2}+\tau _{X_1} \tau _{Y_1}\right)+\tau _{Y_1}^2-4\Big)\\ &\quad=36\Big(O_1 O_2 O_6 O_{345} -O_6 O_{12} O_{345} -O_2 O_{61} O_{345} +O_4 O_{12} O_{61} -O_1 O_4 O_{345} +O_{345}^2\\ &\quad\qquad-O_1 O_2 O_{12} -O_1 O_6 O_{61} -O_2 O_4 O_6 +O_{12}^2 +O_{61}^2 +O_1^2 +O_2^2 +O_4^2 +O_6^2 -4\Big)\\ &\quad=36 \left(O_4 g_2^{(q=t=1)}-g_{18}^{(q=t=1)}+g_{47}^{(q=t=1)}\right)\equiv 0\bmod{\mathcal I}_{q=t=1}. \end{aligned}$$ For the sake of brevity we omit similar calculations for the remaining generators of $\mathcal I^{\mathrm{ABL}}$. A complete set of formulas along with a Mathematica algorithm used to obtain them can be found in supplementary materials to this paper [@Arthamonov-GitHub-Flat]. ◻ **Lemma 22**. *The nilradical $\sqrt{(0)}\subset\mathcal A_{q=t=1}$ of commutative algebra $\mathcal A_{q=t=1}$ is trivial. 
[\[lemm:NilradicalAq1t1IsTrivial\]]{#lemm:NilradicalAq1t1IsTrivial label="lemm:NilradicalAq1t1IsTrivial"}* *Proof.* To prove this statement we utilize the computer algebra software SINGULAR, with the source code of our computations available at [@Arthamonov-GitHub-Flat]. Due to the high computational complexity we cannot directly use the built-in function, which involves making arbitrary choices that disregard the symmetry of the problem. Instead, we follow essentially the same algorithm but utilize the symmetry of the defining ideal in our choices. This makes the computations possible on a standard computer. We reduce the calculation of the nilradical to the zero-dimensional case as described in Section 4.2 of [@GreuelPfisterBachmannLossenSchonemann'2008]. In the first step we choose a maximal algebraically independent subset $$\begin{aligned} O_1,O_2,O_3,O_4,O_5,O_6\quad\in\quad P \label{eq:MaximalAlgebraicallyIndependentSet}\end{aligned}$$ and consider the algebra $$\mathcal R:=\mathbb C(O_1,O_2,O_3,O_4,O_5,O_6)[O_{12},O_{23},O_{34},O_{45},O_{56},O_{61},O_{123},O_{234},O_{345}]$$ over the field of rational functions in the variables ([\[eq:MaximalAlgebraicallyIndependentSet\]](#eq:MaximalAlgebraicallyIndependentSet){reference-type="ref" reference="eq:MaximalAlgebraicallyIndependentSet"}). Now, using the built-in function we verify that the zero-dimensional ideal $\mathcal I_{q=t=1}\mathcal R\subset\mathcal R$ equals its own radical: $$\sqrt{\mathcal I_{q=t=1}\mathcal R}=\mathcal I_{q=t=1}\mathcal R\qquad\subset\quad\mathcal R.$$ Next, we compute the Groebner basis of $\mathcal I_{q=t=1}\mathcal R$ in the weighted degree reverse lexicographic order and clear denominators to obtain a finite set of polynomials $S\subset P$. The least common multiple of the leading coefficients in $S$ reads $$h=O_1O_2O_3O_4O_5O_6(O_1^2-O_4^2)(O_2^2-O_5^2)(O_3^2-O_6^2)(O_1O_2O_6-O_3O_4O_5)\qquad\in\quad P.$$ We verify using SINGULAR that $$\mathcal I_{q=t=1}=\mathcal I_{q=t=1}:\langle h\rangle=\mathcal I_{q=t=1}:\langle h^2\rangle,$$ so the original six-dimensional ideal equals the saturation $$\mathcal I_{q=t=1}=\mathcal I_{q=t=1}:\langle h^\infty\rangle.$$ As a corollary, the dimensional recursion terminates at the first step and we obtain $$\sqrt{\mathcal I_{q=t=1}}=\sqrt{\mathcal I_{q=t=1}\mathcal R}\cap P=\mathcal I_{q=t=1}\mathcal R\cap P=\mathcal I_{q=t=1}.$$ ◻ **Proposition 23**. *The radical ideal $\sqrt{\mathcal I^{\mathrm{ABL}}}\subset P^{\mathrm{ABL}}$ is annihilated by $\Phi$. In other words, $\Phi$ descends to a homomorphism of the quotient ring, which we denote by the same letter $$\Phi: {\raisebox{.2em}{$P^{\mathrm{ABL}}$}\left/\raisebox{-.2em}{$\sqrt{\mathcal I^{\mathrm{ABL}}}$}\right.}\rightarrow \mathcal A_{q=t=1}. \label{eq:PhiFromRadicalQuotient}$$* *Proof.* Combining Lemma [\[lemm:ABL18RelationsAnnihilatedByPhi\]](#lemm:ABL18RelationsAnnihilatedByPhi){reference-type="ref" reference="lemm:ABL18RelationsAnnihilatedByPhi"} with Lemma [\[lemm:NilradicalAq1t1IsTrivial\]](#lemm:NilradicalAq1t1IsTrivial){reference-type="ref" reference="lemm:NilradicalAq1t1IsTrivial"} we get $$\Phi\Big(\sqrt{\mathcal I^{\mathrm{ABL}}}\Big)\subset \sqrt{\Phi(\mathcal I^{\mathrm{ABL}})}= \sqrt{(0)}=(0).$$ ◻ **Theorem 24**. *We have a $\mathrm{Mod}(\Sigma_2)$-equivariant isomorphism of commutative algebras $$\begin{tikzcd} \Psi:\mathcal A_{q=t=1}\ar[r,rightarrow,"\sim"]&\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}. 
\end{tikzcd}$$ [\[th:IsomorphismAq1t1CoordinateRingCharacterVariety\]]{#th:IsomorphismAq1t1CoordinateRingCharacterVariety label="th:IsomorphismAq1t1CoordinateRingCharacterVariety"}* *Proof.* Combining ([\[eq:PsiHomomorphismFromAq1t1\]](#eq:PsiHomomorphismFromAq1t1){reference-type="ref" reference="eq:PsiHomomorphismFromAq1t1"}), ([\[eq:ABL18SetTheoreticCutout\]](#eq:ABL18SetTheoreticCutout){reference-type="ref" reference="eq:ABL18SetTheoreticCutout"}), and ([\[eq:PhiFromRadicalQuotient\]](#eq:PhiFromRadicalQuotient){reference-type="ref" reference="eq:PhiFromRadicalQuotient"}) we get a system of homomorphisms which can be arranged in the following diagram $$\begin{tikzcd} {\raisebox{.2em}{$P^{\mathrm{ABL}}$}\left/\raisebox{-.2em}{$\sqrt{\mathcal I^{\mathrm{ABL}}}$}\right.}\ar[r,rightarrow,"\Phi"]&\mathcal A_{q=t=1}\ar[dd,rightarrow,"\Psi"]\\\\ &\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}\ar[luu,rightarrow,"\sim","\iota"'] \end{tikzcd}$$ Here $\iota$ stands for the isomorphism which identifies the $SL(2,\mathbb C)$-invariant elements of the coordinate ring of the representation variety with the elements of the quotient ring realizing its particular presentation. From Lemma [\[lemm:PsiMCGEquivariance\]](#lemm:PsiMCGEquivariance){reference-type="ref" reference="lemm:PsiMCGEquivariance"} we already know that $\Psi$ is $\mathrm{Mod}(\Sigma_2)$-equivariant, so the only thing we have to prove is that it is bijective. To this end we verify that $$\Phi\circ\iota\circ\Psi=\mathrm{Id}_{\mathcal A_{q=t=1}},\qquad \Psi\circ\Phi\circ\iota=\mathrm{Id}_{\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}}. \label{eq:IdentityCompositionPhiPsi}$$ Note that we can compute Groebner bases for the defining ideals of both $\mathcal A_{q=t=1}$ and $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))$, the coordinate ring of the full representation variety. This, combined with the fact that ([\[eq:ABL18Generators\]](#eq:ABL18Generators){reference-type="ref" reference="eq:ABL18Generators"}) provides a complete set of generators of the invariant subring, allows us to perform the calculations. We omit the details of the verification from the main text and instead summarize them in Appendix [9](#sec:IdentiyCompositionPsiPhi){reference-type="ref" reference="sec:IdentiyCompositionPsiPhi"}. ◻ **Corollary 25**. *The algebra $\mathcal A_{q=t=1}$ is an integral domain of Krull dimension 6. [\[cor:A1tIntegralDomain\]]{#cor:A1tIntegralDomain label="cor:A1tIntegralDomain"}* # Word problem and monomial basis {#sec:WordProblem} In this section we prove that the three algebras $\mathcal A_{q,t}$, $\mathcal A_{q=1,t}$, and $\mathcal A_{q=t=1}$ all share the same monomial basis. Our approach is based on the computation of a Groebner basis for the defining ideal of $\mathcal A_{q=1,t}$. It turns out that the result specializes to a Groebner basis for the defining ideal of $\mathcal A_{q=t=1}$. But, even more importantly, this Groebner basis admits a noncommutative deformation and gives rise to a family of relations in $\mathcal A_{q,t}$: $$\begin{aligned} g_i=0,\qquad 1\leqslant i\leqslant 61.\end{aligned}$$ We refer to this deformation as the *$q$-Groebner basis*. 
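To fix ideas, the way a Groebner basis (or its noncommutative $q$-deformation) singles out a monomial basis is purely combinatorial: a monomial is reducible precisely when its exponent vector dominates, componentwise, the exponent vector of the leading monomial of some relator, and the candidate basis consists of the remaining, irreducible monomials; see Definition 36 and Proposition 37 below for the precise statements. The following Python sketch illustrates this divisibility test on toy data: the exponent vectors are placeholders and do not correspond to the actual leading monomials of $g_1,\dots,g_{61}$.

```python
# Toy illustration of the reducibility test behind the monomial basis:
# a monomial is reducible by the (q-)Groebner basis iff its exponent vector
# dominates, componentwise, the exponent vector of some leading monomial.
# The leading exponents below are placeholders, not data extracted from
# the relators g_1, ..., g_61.
from itertools import product

LEADING = [(2, 0, 0), (1, 1, 0), (0, 0, 3)]  # toy leading monomials x^2, x*y, z^3

def reducible(exponents):
    """True iff some leading monomial divides the monomial with these exponents."""
    return any(all(e >= l for e, l in zip(exponents, lead)) for lead in LEADING)

# the candidate monomial basis consists of the irreducible monomials;
# here we list those of degree at most 3 in each of the three toy variables
basis = [m for m in product(range(4), repeat=3) if not reducible(m)]
print(len(basis), basis[:5])
```

In the noncommutative setting the same test is applied to the commutative counterparts of normally ordered monomials, and the reduction step of Proposition 37 rewrites any reducible monomial as a Laurent-polynomial combination of smaller ones.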
The relators $g_i$ are listed in Appendix [10](#sec:qGroebnerBasis){reference-type="ref" reference="sec:qGroebnerBasis"}; each of the relators $$\begin{aligned} g_i\;\in\;\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]\langle O_1,O_2,O_3,O_4,O_5,O_6,O_{12},O_{23},O_{34},O_{45},O_{56},O_{61},O_{123},O_{234},O_{345}\rangle \label{eq:giDomain}\end{aligned}$$ is a noncommutative polynomial in the generators ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}) whose leading monomial, with respect to the weighted degree reverse lexicographic order, has coefficient 1. The coefficients of the subleading monomials are Laurent polynomials in $q^{\frac14},t^{\frac14}$ and hence specialize well to $q=1$ and to $q=t=1$. The relations $g_i|_{q=1}$ and $g_i|_{q=t=1}$ provide Groebner bases for the defining ideals of $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$, respectively. ## Monomial basis for $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$ Denote the natural images of the relators $g_i$ in the commutative polynomial rings by $$\begin{aligned} g_i\big|_{q=1}\;\in\;\mathbb C[t^{\pm\frac14}][O_1,\dots, O_{345}],\qquad\qquad g_i\big|_{q=t=1}\;\in\;\mathbb C[O_1,\dots, O_{345}]. \label{eq:giSpecializedDomain}\end{aligned}$$ In our presentations for the commutative algebras $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$ we fix the weighted degree reverse lexicographic monomial order with weights assigned to the generators by ([\[eq:GeneratorWeights\]](#eq:GeneratorWeights){reference-type="ref" reference="eq:GeneratorWeights"}). **Proposition 26**. *The collection of relators $$\begin{aligned} \big\{g_i|_{q=1}\;\big|\;1\leqslant i\leqslant61\big\} \label{eq:GBAq1t}\end{aligned}$$ provides a normalized Groebner basis for the defining ideal of $\mathcal A_{q=1,t}$ w.r.t. the fixed choice of monomial order. [\[prop:GroebnerBasisAq1t\]]{#prop:GroebnerBasisAq1t label="prop:GroebnerBasisAq1t"}* *Proof.* We have verified the statement of the proposition using two independent programs in SINGULAR and Mathematica which are available at [@Arthamonov-GitHub-Flat]. With our choice of monomial ordering the computation does not require any significant resources and can be performed independently using virtually any computer algebra software. ◻ It is worth noting that ([\[eq:GBAq1t\]](#eq:GBAq1t){reference-type="ref" reference="eq:GBAq1t"}) is a Groebner basis over $\mathbf k_t=\mathbb C(t^{\frac14})$, the ground field of $\mathcal A_{q=1,t}$. A priori, it does not necessarily provide a Groebner basis when $t$ is specialized to some complex parameter. However, the specialization $t=1$ turns out to be well behaved due to the fact that all subleading coefficients in the normalized elements $g_i\big|_{q=1}$ happen to be Laurent polynomials in $t^{\frac14}$. Namely, comparing Proposition [\[prop:GroebnerBasisAq1t\]](#prop:GroebnerBasisAq1t){reference-type="ref" reference="prop:GroebnerBasisAq1t"} with Proposition [\[prop:GroebnerbasisAqt1\]](#prop:GroebnerbasisAqt1){reference-type="ref" reference="prop:GroebnerbasisAqt1"} we get **Theorem 27**. *The algebra $\mathcal A_{q=1,t}$ is a flat deformation of $\mathcal A_{q=t=1}$. [\[th:FlatDeformationAq1t\]]{#th:FlatDeformationAq1t label="th:FlatDeformationAq1t"}* *Proof.* Indeed, both algebras have a monomial basis consisting of the monomials which are not divisible by the leading powers of the normalized Groebner bases, and these leading powers coincide: $$\begin{aligned} \mathrm{l.p.}\; g_i|_{q=1}=\mathrm{l.p.}\; g_i|_{q=t=1},\qquad \textrm{for all}\quad 1\leqslant i\leqslant 61.\end{aligned}$$ We denote this common monomial basis by $B$. 
The structure constants of $\mathcal A_{q=1,t}$ in the basis $B$ can be computed by reducing the product of two monomials by the Groebner relators ([\[eq:GBAq1t\]](#eq:GBAq1t){reference-type="ref" reference="eq:GBAq1t"}). Recall that by ([\[eq:giSpecializedDomain\]](#eq:giSpecializedDomain){reference-type="ref" reference="eq:giSpecializedDomain"}) all subleading coefficients in the Groebner relators are Laurent polynomials in $t^\frac14$. As a corollary, so are the structure constants $$\begin{aligned} C_{b_1,b_2}^{b_3}\;\in\; \mathbb C[t^{\pm\frac14}] \qquad\textrm{for all}\quad b_1,b_2,b_3\in B. \label{eq:StructureConstantsAq1t}\end{aligned}$$ Because the normalized Groebner relators $g_i|_{q=1}$ of $\mathcal A_{q=1,t}$ specialize to the Groebner relators of $\mathcal A_{q=t=1}$ at $t=1$, we conclude that the structure constants of $\mathcal A_{q=t=1}$ in the monomial basis $B$ can be obtained using the $t=1$ specialization of ([\[eq:StructureConstantsAq1t\]](#eq:StructureConstantsAq1t){reference-type="ref" reference="eq:StructureConstantsAq1t"}). ◻ ## Embedding into rational functions The homomorphism of commutative algebras $\Delta$, which we introduced in Proposition [\[prop:qDifferenceQuasiClassicalLimit\]](#prop:qDifferenceQuasiClassicalLimit){reference-type="ref" reference="prop:qDifferenceQuasiClassicalLimit"}, specializes nicely to $t=1$. This is shown by **Lemma 28**. *The following assignment on generators $$\widetilde{\Delta}(O_I):=\Delta(O_I)\Big|_{t=1}$$ extends to a homomorphism of commutative algebras $$\begin{aligned} \widetilde{\Delta}:\mathcal A_{q=t=1}\rightarrow\mathbb C(X_{12},X_{23},X_{13})[P_{12}^{\pm1},P_{23}^{\pm1},P_{13}^{\pm1}].\end{aligned}$$* *Proof.* Note that the images of the generators in ([\[eq:DeltaImageOfGeneratorsLaurent\]](#eq:DeltaImageOfGeneratorsLaurent){reference-type="ref" reference="eq:DeltaImageOfGeneratorsLaurent"}) have coefficients which are Laurent polynomials in $t^{\frac14}$. As a corollary, any relation that holds between the elements $\Delta(O_I)$, when specialized to $t=1$, must also hold between the $\widetilde{\Delta}(O_I)$. ◻ **Proposition 29**. *We have an isomorphism $$\begin{aligned} \mathcal A_{q=t=1}\simeq \widetilde\Delta(\mathcal A_{q=t=1})\quad\subset\quad\mathbb C(X_{12},X_{23},X_{13})[P_{12}^{\pm1},P_{23}^{\pm1},P_{13}^{\pm1}] \label{eq:Aqt1IsomorphismRationalFunctions}\end{aligned}$$ between $\mathcal A_{q=t=1}$ and a subalgebra of rational functions in six variables. [\[prop:Aqt1IsomorphismRationalFunctions\]]{#prop:Aqt1IsomorphismRationalFunctions label="prop:Aqt1IsomorphismRationalFunctions"}* *Proof.* Recall that $\widetilde\Delta$ is a homomorphism of commutative algebras by Proposition [\[prop:qDifferenceQuasiClassicalLimit\]](#prop:qDifferenceQuasiClassicalLimit){reference-type="ref" reference="prop:qDifferenceQuasiClassicalLimit"}. Since $\mathcal A_{q=t=1}$ is finitely generated, so is $\widetilde\Delta(\mathcal A_{q=t=1})$. On the other hand, $\widetilde\Delta(\mathcal A_{q=t=1})$ is a subring of an integral domain, hence it is an integral domain in its own right. Now we will show that the Krull dimension of $\widetilde\Delta(\mathcal A_{q=t=1})$ is bounded from below: $$\begin{aligned} \dim(\widetilde\Delta(\mathcal A_{q=t=1}))\geqslant 6. 
\label{eq:DimLowerBoundDeltaAqt1}\end{aligned}$$ To this end, consider $$\begin{aligned} \widetilde\Delta(O_1)=&\sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - X_{23} X_{12}^a X_{13}^b)(1 - X_{23}^{-1} X_{12}^a X_{13}^b)}{X_{12}^{a} X_{13}^b (X_{12} - X_{12}^{-1})(X_{13} - X_{13}^{-1})} \ P_{12}^{a} P_{13}^{b},\\[5pt] \widetilde\Delta(O_2)=&X_{12}+X_{12}^{-1},\\[5pt] \widetilde\Delta(O_3)=&\sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - X_{13} X_{12}^a X_{23}^b)(1 - X_{13}^{-1} X_{12}^a X_{23}^b)}{X_{12}^{a} X_{23}^b (X_{12} - X_{12}^{-1})(X_{23} - X_{23}^{-1})} \ P_{12}^{a} P_{23}^{b},\\[5pt] \widetilde\Delta(O_4)=&X_{23}+X_{23}^{-1},\\[5pt] \widetilde\Delta(O_5)=& \ \sum\limits_{a,b \in \{\pm 1\}} \ a b \ \dfrac{(1 - X_{12} X_{13}^a X_{23}^b)(1 - X_{12}^{-1} X_{13}^a X_{23}^b)}{ X_{13}^{a} X_{23}^b (X_{13} - X_{13}^{-1})(X_{23} - X_{23}^{-1})} \ P_{13}^{a} P_{23}^{b},\\[5pt] \widetilde\Delta(O_6)=&X_{13}+X_{13}^{-1}.\end{aligned}$$ We claim that the six elements above are algebraically independent. Indeed, suppose there is a polynomial relation $$\begin{aligned} F\Big(\widetilde\Delta(O_1), \widetilde\Delta(O_2), \widetilde\Delta(O_3), \widetilde\Delta(O_4), \widetilde\Delta(O_5), \widetilde\Delta(O_6)\Big)=0.\end{aligned}$$ Consider the leading powers in $P_{12},P_{23},P_{13}$, we have $$\begin{aligned} \mathrm{l.p.}\,\widetilde\Delta(O_1)=&P_{12}P_{13},& \mathrm{l.p.}\,\widetilde\Delta(O_3)=&P_{12}P_{23},& \mathrm{l.p.}\,\widetilde\Delta(O_5)=&P_{13}P_{23} \label{eq:LeadingPowersDeltaO}\end{aligned}$$ From ([\[eq:LeadingPowersDeltaO\]](#eq:LeadingPowersDeltaO){reference-type="ref" reference="eq:LeadingPowersDeltaO"}) we conclude that $F$ cannot depend on $\widetilde\Delta(O_1), \widetilde\Delta(O_3), \widetilde\Delta(O_5)$. But then, because there is no polynomial relations on $\widetilde\Delta(O_2), \widetilde\Delta(O_4), \widetilde\Delta(O_6)$ we conclude that $F$ is a zero polynomial. Applying Noether Normalization Theorem we get ([\[eq:DimLowerBoundDeltaAqt1\]](#eq:DimLowerBoundDeltaAqt1){reference-type="ref" reference="eq:DimLowerBoundDeltaAqt1"}). Because $\widetilde\Delta(\mathcal A_{q=t=1})$ is an integral domain, $\ker\widetilde\Delta\subseteq\mathcal A_{q=t=1}$ must be a prime ideal of $\mathcal A_{q=t=1}$. Combining Corollary [\[cor:A1tIntegralDomain\]](#cor:A1tIntegralDomain){reference-type="ref" reference="cor:A1tIntegralDomain"} with ([\[eq:DimLowerBoundDeltaAqt1\]](#eq:DimLowerBoundDeltaAqt1){reference-type="ref" reference="eq:DimLowerBoundDeltaAqt1"}) we conclude that $\ker\widetilde\Delta$ must be of height zero and thus $\ker\widetilde\Delta=\{0\}$. ◻ Now, using the common monomial basis for $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$ we can prove the analog of Proposition [\[prop:Aqt1IsomorphismRationalFunctions\]](#prop:Aqt1IsomorphismRationalFunctions){reference-type="ref" reference="prop:Aqt1IsomorphismRationalFunctions"} for $\mathcal A_{q=1,t}$. **Proposition 30**. *We have an isomorphism $$\begin{aligned} \mathcal A_{q=1,t}\simeq \Delta(\mathcal A_{q=1,t})\quad\subset\quad\mathbb C(X_{12},X_{23},X_{13})[P_{12}^{\pm1},P_{23}^{\pm1},P_{13}^{\pm1}]\end{aligned}$$ between $\mathcal A_{q=1,t}$ and a subalgebra of rational functions in six variables. 
[\[prop:Aq1tIsomorphismRationalFunctions\]]{#prop:Aq1tIsomorphismRationalFunctions label="prop:Aq1tIsomorphismRationalFunctions"}* *Proof.* Consider the monomial basis $B\subset\mathcal A_{q=1,t}$ and suppose that there exists at least one nontrivial linear combination of basis elements $P=P(O_1,\dots,O_{345})\in\ker\Delta$. Multiplying it by an appropriate power of $(t-1)$ we can always make the coefficients of all monomials regular at $t=1$, with at least one coefficient having a nonzero limit as $t\rightarrow 1$. So without loss of generality we can assume that $P_{t=1}=\lim_{t\rightarrow 1}P(O_1,\dots,O_{345})$ exists and is nontrivial. By the definition of $\ker\Delta$ we have $$\begin{aligned} P(\Delta(O_1),\dots,\Delta(O_{345}))=0. \label{eq:RelationInKerDelta}\end{aligned}$$ On the other hand, by Lemma [\[lemm:qDifferenceLaurentCoefficients\]](#lemm:qDifferenceLaurentCoefficients){reference-type="ref" reference="lemm:qDifferenceLaurentCoefficients"}, $\Delta(O_I)$ is a Laurent polynomial in $t^{\frac14}$ for every generator $O_I\in\mathcal A_{q=1,t}$. Combining this with the fact that all coefficients of $P$ are regular at $t=1$ we conclude that the left hand side of ([\[eq:RelationInKerDelta\]](#eq:RelationInKerDelta){reference-type="ref" reference="eq:RelationInKerDelta"}) converges as $t\rightarrow 1$. As a corollary, $$\begin{aligned} P_{t=1}(\widetilde\Delta(O_1),\dots,\widetilde\Delta(O_{345}))=0\end{aligned}$$ holds in $\mathbb C(X_{12},X_{23},X_{13})[P_{12}^{\pm1},P_{23}^{\pm1},P_{13}^{\pm1}]$. Next, by the isomorphism ([\[eq:Aqt1IsomorphismRationalFunctions\]](#eq:Aqt1IsomorphismRationalFunctions){reference-type="ref" reference="eq:Aqt1IsomorphismRationalFunctions"}) we obtain a relation $P_{t=1}(O_1,\dots,O_{345})=0$ in $\mathcal A_{q=t=1}$. However, because $B$ is also a monomial basis for $\mathcal A_{q=t=1}$, we arrive at a contradiction with our initial assumption that $P$ is a regular linear combination of basis elements with a nontrivial limit as $t\rightarrow 1$. ◻ ## Monomial basis for $\mathcal A_{q,t}$ Based on the existence of the $q$-Groebner basis we introduce an algorithm which brings any monomial in $\mathcal A_{q,t}$ into normal form by reducing it with respect to the $q$-Groebner basis. Finally, we prove that this normal form is unique, using Lemma [\[lemm:qDifferenceLaurentCoefficients\]](#lemm:qDifferenceLaurentCoefficients){reference-type="ref" reference="lemm:qDifferenceLaurentCoefficients"} and Proposition [\[prop:Aq1tIsomorphismRationalFunctions\]](#prop:Aq1tIsomorphismRationalFunctions){reference-type="ref" reference="prop:Aq1tIsomorphismRationalFunctions"}. By Lemma [\[lemm:NormalOrderingLemma\]](#lemm:NormalOrderingLemma){reference-type="ref" reference="lemm:NormalOrderingLemma"} we know that normally ordered monomials of the form ([\[eq:NormallyOrderedMonomials\]](#eq:NormallyOrderedMonomials){reference-type="ref" reference="eq:NormallyOrderedMonomials"}) provide a spanning set for $\mathcal A_{q,t}$. On the set of normally ordered monomials we can introduce a total order $\prec$ by comparing their commutative counterparts. The following notation will often be convenient for us when we deal with subleading terms. **Definition 31**. *Let $m=O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}$ be a normally ordered product of generators. 
We say that $f\in\mathcal A_{q,t}$ is Laurent-subleading to $m$ if* *$$f=\sum_{j=1}^n\lambda_jO_1^{l_1^{(j)}}O_2^{l_2^{(j)}}\dots O_{345}^{l_{345}^{(j)}}$$ where $$\lambda_j\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}],\qquad O_1^{l_1^{(j)}}O_2^{l_2^{(j)}}\dots O_{345}^{l_{345}^{(j)}}\prec O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}\qquad\textrm{for all}\quad 1\leq j\leq n.$$ [\[eq:LaurentSubleadingElement\]]{#eq:LaurentSubleadingElement label="eq:LaurentSubleadingElement"}* *In this case we write $f=\boldsymbol\sigma(O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}})$. [\[def:LaurentSubleading\]]{#def:LaurentSubleading label="def:LaurentSubleading"}* **Remark 32**. *Note that we allowed only Laurent polynomials in coefficients of ([\[eq:LaurentSubleadingElement\]](#eq:LaurentSubleadingElement){reference-type="ref" reference="eq:LaurentSubleadingElement"}). This will be important for us later when we deal with various specializations of parameters $q^{\frac14},t^{\frac14}$.* We start by stating an immediate consequence of Definition [\[def:LaurentSubleading\]](#def:LaurentSubleading){reference-type="ref" reference="def:LaurentSubleading"} which we will then use without further mentioning. **Lemma 33**. *For a pair of normally ordered products of generators. $$m=O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}},\qquad\mu=O_1^{\kappa_1}O_2^{\kappa_2}\dots O_{345}^{\kappa_{345}},\qquad m\preceq\mu$$ we have $$\boldsymbol\sigma(m)+\boldsymbol\sigma(\mu)=\boldsymbol\sigma(\mu).$$* **Proposition 34**. *Let $m=O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}$ be a normally ordered monomial and $O_I\in\mathcal A_{q,t}$ be any generator. The following properties hold in $\mathcal A_{q,t}$* *$$\begin{aligned} O_Im=&q^{-\frac12(k_1c_{I,1}+\dots+k_{I'}c_{I,I'})}O_1^{k_1}\dots O_I^{k_I+1}\dots O_{345}^{k_{345}}\;+\;\boldsymbol\sigma(O_1^{k_1}\dots O_I^{k_I+1}\dots O_{345}^{k_{345}}),\\ mO_I=&q^{-\frac12(k_{I''}c_{I'',I}+\dots+k_{345}c_{345,I})}O_1^{k_1}\dots O_I^{k_I+1}\dots O_{345}^{k_{345}}\;+\;\boldsymbol\sigma(O_1^{k_1}\dots O_I^{k_I+1}\dots O_{345}^{k_{345}}),\end{aligned}$$ $$\begin{aligned} O_I\boldsymbol\sigma(m)=&\boldsymbol\sigma(O_1^{k_1}\dots O_I^{k_I+1}\dots O_{345}^{k_{345}})=\boldsymbol\sigma(m)O_I\end{aligned}$$ [\[eq:SigmaPropertiesSingleGenerator\]]{#eq:SigmaPropertiesSingleGenerator label="eq:SigmaPropertiesSingleGenerator"}* *here $I'$ stands for the index of a previous generator, while $I''$ stands for the next generator following $O_I$ in ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}). [\[prop:LeadingTermMonomialProduct\]]{#prop:LeadingTermMonomialProduct label="prop:LeadingTermMonomialProduct"}* *Proof.* We will prove all three properties in ([\[eq:SigmaPropertiesSingleGenerator\]](#eq:SigmaPropertiesSingleGenerator){reference-type="ref" reference="eq:SigmaPropertiesSingleGenerator"}) by simultaneous induction in $|k|:=k_1+k_2+\dots+k_{345}$. The base case $|k|=0$ is a tautology. As for the step of induction, recall that normal ordering relations ([\[eq:NormalOrderingRelations\]](#eq:NormalOrderingRelations){reference-type="ref" reference="eq:NormalOrderingRelations"}) imply $$\begin{aligned} O_KO_J=q^{-\frac{c_{K,J}}2}O_JO_K\;+\;\boldsymbol\sigma(O_JO_K)\end{aligned}$$ ◻ **Corollary 35**. *Let $p=O_{I_1}O_{I_2}\dots O_{I_n}$ be any product of generators, not necessarily normally ordered. 
There exists a unique integer $M(I_1,\dots,I_n)\in\mathbb Z$ such that $$p=q^{\frac{M(I_1,\dots,I_n)}2}O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}\;+\;\boldsymbol\sigma(O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}).$$ Here each $k_J$ stands for the number of occurrences of the generator $O_J$ in $p$. [\[cor:GeneralProductLeadingTerm\]]{#cor:GeneralProductLeadingTerm label="cor:GeneralProductLeadingTerm"}* **Definition 36**. *We say that a normally ordered product of generators $m=O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}$ is reducible by the $q$-Groebner basis if its commutative counterpart is divisible by the commutative counterpart of the leading term of $g_i$ for some $1\leqslant i\leqslant 61$.* Hereinafter, let $\mathbf B$ denote the set of all normally ordered products of generators which are not reducible by the $q$-Groebner basis. Note that the commutative counterpart of $\mathbf B$ is nothing but $B$, the common monomial basis of $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$. Our main goal for this section is to prove that $\mathbf B$ provides a monomial basis for $\mathcal A_{q,t}$. **Proposition 37**. *Let $m=O_1^{k_1}O_2^{k_2}\dots O_{345}^{k_{345}}$ be a normally ordered product of generators which is reducible by the $q$-Groebner basis. Then there exists the following decomposition of $m$ as an element of $\mathcal A_{q,t}$ $$m=\sum_{i=1}^n\lambda_ib_i,\qquad\textrm{where}\qquad \lambda_i\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}],\quad b_i\in\mathbf B\quad\textrm{for all}\quad 1\leq i\leq n. \label{eq:NormallyOrderedProductBasisDecomposition}$$ [\[prop:ReducibleNormallyOrderedProducts\]]{#prop:ReducibleNormallyOrderedProducts label="prop:ReducibleNormallyOrderedProducts"}* *Proof.* There are only finitely many normally ordered products of generators with a given total weight. As a consequence, the weighted degree reverse lexicographic monomial ordering allows us to enumerate all such products by natural numbers in increasing order $$\nu_i\prec\nu_{i+1}\qquad\textrm{for all}\quad i\in\mathbb N.$$ We will use induction on the index $i$ in order to prove the statement of the Proposition. There is only one normally ordered monomial of degree 0, so $\nu_1=1$ is the unity element of $\mathcal A_{q,t}$. The latter belongs to $\mathbf B$, hence we get the base of our induction. For the induction step we assume that the statement of the Proposition holds for all $\nu_i$ with $i< j$ for some $j\in\mathbb N$. Now let $\nu_j=O_1^{\kappa_1}O_2^{\kappa_2}\dots O_{345}^{\kappa_{345}}$. If $\nu_j\in\mathbf B$ then we are done, so without loss of generality we can assume that $\nu_j\not\in\mathbf B$ is reducible by the $q$-Groebner basis. In other words, there exists at least one element $g_r$ with the leading term $$\mathrm{l.p.}(g_r)=O_1^{l_1}O_2^{l_2}\dots O_{345}^{l_{345}}\qquad\textrm{such that}\qquad l_1\leq\kappa_1,\quad l_2\leq\kappa_2,\quad\dots,\quad l_{345}\leq\kappa_{345}.$$ Consider the normally ordered product of generators $$\mu:=O_1^{\kappa_1-l_1}O_2^{\kappa_2-l_2}\dots O_{345}^{\kappa_{345}-l_{345}}.$$ By Corollary [\[cor:GeneralProductLeadingTerm\]](#cor:GeneralProductLeadingTerm){reference-type="ref" reference="cor:GeneralProductLeadingTerm"} we have the following identity in $\mathcal A_{q,t}$ $$0=\mu g_r=q^{\frac M2}\nu_j+\boldsymbol\sigma(\nu_j)\qquad\textrm{for some}\quad M\in\mathbb Z.$$ In other words, as an element of $\mathcal A_{q,t}$, the product $\nu_j$ must be equal to a linear combination of subleading monomials $\nu_i$, $i<j$, with coefficients in $\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]$. 
Using the inductive assumption we conclude that $\nu_j$ itself must be of the form ([\[eq:NormallyOrderedProductBasisDecomposition\]](#eq:NormallyOrderedProductBasisDecomposition){reference-type="ref" reference="eq:NormallyOrderedProductBasisDecomposition"}). ◻ Combining Proposition [\[prop:ReducibleNormallyOrderedProducts\]](#prop:ReducibleNormallyOrderedProducts){reference-type="ref" reference="prop:ReducibleNormallyOrderedProducts"} with Corollary [\[cor:GeneralProductLeadingTerm\]](#cor:GeneralProductLeadingTerm){reference-type="ref" reference="cor:GeneralProductLeadingTerm"} we get **Corollary 38**. *Let $p=O_{I_1}O_{I_2}\dots O_{I_n}$ be any product of generators, not necessarily normally ordered. There exists the following decomposition of $p$ as an element of $\mathcal A_{q,t}$ $$p=\sum_{i=1}^n\lambda_ib_i,\qquad\textrm{where}\qquad\lambda_i\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}],\quad b_i\in\mathbf B\quad\textrm{for all}\quad 1\leq i\leq n. \label{eq:GeneralProductBasisDecomposition}$$ [\[cor:GeneralProductBasisDecomposition\]]{#cor:GeneralProductBasisDecomposition label="cor:GeneralProductBasisDecomposition"}* The latter, in particular, implies that $\mathbf B$ provides a spanning set for $\mathcal A_{q,t}$. We are now ready to prove the main theorem of this section. **Theorem 39**. (i) *[\[it:BasisB\]]{#it:BasisB label="it:BasisB"} The set $\mathbf B$ provides a monomial basis for $\mathcal A_{q,t}$.* (ii) *[\[it:StructureConstantsAreLaurentPolynomials\]]{#it:StructureConstantsAreLaurentPolynomials label="it:StructureConstantsAreLaurentPolynomials"} The structure constants of multiplication in the basis $\mathbf B$ are Laurent polynomials in the parameters $q^{\frac14},t^{\frac14}$.* (iii) *[\[it:IsomorphismWithAlgebraOfqDifferenceOperators\]]{#it:IsomorphismWithAlgebraOfqDifferenceOperators label="it:IsomorphismWithAlgebraOfqDifferenceOperators"} We have a $\mathbf k$-algebra isomorphism $$\begin{aligned} \widehat\Delta: \mathcal A_{q,t}\xrightarrow{\sim} A_{q,t}\end{aligned}$$ between $\mathcal A_{q,t}$ and the algebra of $q$-difference operators ([\[eq:AqtDifferenceOperatorsAlgebra\]](#eq:AqtDifferenceOperatorsAlgebra){reference-type="ref" reference="eq:AqtDifferenceOperatorsAlgebra"}) introduced in [@ArthamonovShakirov'2019].* *[\[th:BasisAqt\]]{#th:BasisAqt label="th:BasisAqt"}* *Proof.* Note that by Corollary [\[cor:GeneralProductBasisDecomposition\]](#cor:GeneralProductBasisDecomposition){reference-type="ref" reference="cor:GeneralProductBasisDecomposition"} we know that $\mathbf B$ is a spanning set, so the only thing we have to prove for part ([\[it:BasisB\]](#it:BasisB){reference-type="ref" reference="it:BasisB"}) is that there are no nontrivial linear combinations of elements of $\mathbf B$ which vanish in $\mathcal A_{q,t}$. We will prove parts ([\[it:BasisB\]](#it:BasisB){reference-type="ref" reference="it:BasisB"}) and ([\[it:IsomorphismWithAlgebraOfqDifferenceOperators\]](#it:IsomorphismWithAlgebraOfqDifferenceOperators){reference-type="ref" reference="it:IsomorphismWithAlgebraOfqDifferenceOperators"}) of the theorem simultaneously by showing that no nontrivial linear combination of elements of $\mathbf B$ corresponds to the trivial $q$-difference operator in $A_{q,t}$. The logic of the proof will be similar to that of Proposition [\[prop:Aq1tIsomorphismRationalFunctions\]](#prop:Aq1tIsomorphismRationalFunctions){reference-type="ref" reference="prop:Aq1tIsomorphismRationalFunctions"}. 
For the sake of contradiction, suppose there is a linear combination $$\begin{aligned} P=P(O_1,\dots,O_{345})=c_1b_1+\dots+c_lb_l,\qquad c_i\in\mathbf k,\quad b_i\in\mathbf B,\quad\textrm{for all}\quad 1\leqslant i\leqslant l\end{aligned}$$ of elements of $\mathbf B$ which corresponds to a zero element of $\mathcal A_{q,t}$. Without loss of generality we can assume that all $c_i,\;1\leqslant i\leqslant l$ are regular at $q=1$ and at least one of the coefficients has a nontrivial limit as $q\rightarrow 1$. In other words, the limit $$\begin{aligned} P_{q=1}=\lim_{q\rightarrow 1}P(O_1,\dots, O_{345})\end{aligned}$$ exists and is a nontrivial linear combination of monomials. By Proposition [\[prop:qDifferenceHomomorphism\]](#prop:qDifferenceHomomorphism){reference-type="ref" reference="prop:qDifferenceHomomorphism"} we conclude that the following identity holds in $\mathbf k(X_{12},X_{23},X_{13}) \langle\hat\delta_{12}, \hat\delta_{23}, \hat\delta_{13}\rangle$ $$\begin{aligned} P(\hat O_1,\dots, \hat O_{345})=0. \label{eq:PVanishingQDifference}\end{aligned}$$ Because all $c_i$ are regular at $q=1$ and, by Lemma [\[lemm:qDifferenceLaurentCoefficients\]](#lemm:qDifferenceLaurentCoefficients){reference-type="ref" reference="lemm:qDifferenceLaurentCoefficients"}, all images of the generators are Laurent polynomials in $q^{\frac14}$, the left hand side of ([\[eq:PVanishingQDifference\]](#eq:PVanishingQDifference){reference-type="ref" reference="eq:PVanishingQDifference"}) converges as $q\rightarrow1$ and we get the identity $$\begin{aligned} P_{q=1}(\Delta(O_1),\dots,\Delta(O_{345}))=0\end{aligned}$$ in $\mathbf k_t(X_{12},X_{23},X_{13})[P_{12}^{\pm1},P_{23}^{\pm1},P_{13}^{\pm1}]$. By Proposition [\[prop:Aq1tIsomorphismRationalFunctions\]](#prop:Aq1tIsomorphismRationalFunctions){reference-type="ref" reference="prop:Aq1tIsomorphismRationalFunctions"} we conclude that the following relation holds in $\mathcal A_{q=1,t}$ $$\begin{aligned} P_{q=1}(O_1,\dots,O_{345})=0.\end{aligned}$$ In other words, there exists a nontrivial linear combination of elements of $B$ which vanishes in $\mathcal A_{q=1,t}$. Here we arrive at a contradiction, because $B$ is a monomial basis for $\mathcal A_{q=1,t}$. Finally, part ([\[it:StructureConstantsAreLaurentPolynomials\]](#it:StructureConstantsAreLaurentPolynomials){reference-type="ref" reference="it:StructureConstantsAreLaurentPolynomials"}) of the theorem now follows from Corollary [\[cor:GeneralProductBasisDecomposition\]](#cor:GeneralProductBasisDecomposition){reference-type="ref" reference="cor:GeneralProductBasisDecomposition"}. Indeed, let $b_\alpha,b_\beta\in\mathbf B$ be a pair of basis elements. By part ([\[it:BasisB\]](#it:BasisB){reference-type="ref" reference="it:BasisB"}) of the theorem their product $b_\alpha b_\beta=\sum_{i=1}^n{C_{b_\alpha,b_\beta}^{b_i}}b_i$ has a unique decomposition in the basis $\mathbf B$ which must coincide with ([\[eq:GeneralProductBasisDecomposition\]](#eq:GeneralProductBasisDecomposition){reference-type="ref" reference="eq:GeneralProductBasisDecomposition"}) and thus all $C_{b_{\alpha},b_\beta}^{b_i}\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]$ are Laurent polynomials. ◻ **Corollary 40**. 
*For all triples of basis elements $b_1,b_2,b_3\in\mathbf B$, the structure constant $C_{b_1,b_2}^{b_3}\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]$ of the associative multiplication in $\mathcal A_{q,t}$ specializes to* (i) *[\[it:StructureConstantsq1Specialization\]]{#it:StructureConstantsq1Specialization label="it:StructureConstantsq1Specialization"} The structure constant of the commutative multiplication in $\mathcal A_{q=1,t}$ as $q^{\frac14}\rightarrow1$.* (ii) *The structure constant of the commutative multiplication in $\mathcal A_{q=t=1}$ as $q^{\frac14},t^{\frac14}\rightarrow1$.* *[\[cor:SpecializationOfStructureConstants\]]{#cor:SpecializationOfStructureConstants label="cor:SpecializationOfStructureConstants"}* # Poisson brackets As a byproduct of Theorem [\[th:BasisAqt\]](#th:BasisAqt){reference-type="ref" reference="th:BasisAqt"} we can equip our deformed commutative algebra $\mathcal A_{q=1,t}$ with a Poisson bracket. Indeed, let $b_1,b_2\in\mathbf B$ be a pair of normally ordered products of generators. According to Theorem [\[th:BasisAqt\]](#th:BasisAqt){reference-type="ref" reference="th:BasisAqt"}, the products of the corresponding elements of $\mathcal A_{q,t}$ have the following form $$b_1b_2=\sum_{b_3\in\mathbf B}C_{b_1,b_2}^{b_3}b_3,\qquad b_2b_1=\sum_{b_3\in\mathbf B}C_{b_2,b_1}^{b_3}b_3 \label{eq:BasisElementsMultiplicationInTwoDifferentOrders}$$ where the structure constants $C_{b_1,b_2}^{b_3},C_{b_2,b_1}^{b_3}\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]$ are Laurent polynomials in the parameters. Also note that only finitely many terms on the right hand sides of ([\[eq:BasisElementsMultiplicationInTwoDifferentOrders\]](#eq:BasisElementsMultiplicationInTwoDifferentOrders){reference-type="ref" reference="eq:BasisElementsMultiplicationInTwoDifferentOrders"}) are nonzero. **Lemma 41**. *For all triples of elements $b_1,b_2,b_3\in\mathbf B$, the following limit exists and is a Laurent polynomial in $t^{\frac14}$ $$\lim_{q^{\frac14}\rightarrow1}\frac{C_{b_1,b_2}^{b_3}-C_{b_2,b_1}^{b_3}}{q^{\frac14}-1}\quad\in\quad \mathbb C[t^{\pm\frac14}].$$* *Proof.* By Corollary [\[cor:SpecializationOfStructureConstants\]](#cor:SpecializationOfStructureConstants){reference-type="ref" reference="cor:SpecializationOfStructureConstants"}, part ([\[it:StructureConstantsq1Specialization\]](#it:StructureConstantsq1Specialization){reference-type="ref" reference="it:StructureConstantsq1Specialization"}) we know $$\lim_{q^{\frac14}\rightarrow 1}C_{b_1,b_2}^{b_3}=\lim_{q^{\frac14}\rightarrow1}C_{b_2,b_1}^{b_3},\qquad\textrm{for all}\quad b_1,b_2,b_3\in\mathbf B.$$ On the other hand, both $C_{b_1,b_2}^{b_3},C_{b_2,b_1}^{b_3}\in\mathbb C[q^{\pm\frac14},t^{\pm\frac14}]$ are Laurent polynomials, so their difference must be divisible by $q^{\frac14}-1$. ◻ **Definition 42**. *Let $\{,\}:\mathbf B\times\mathbf B\rightarrow \mathbb C[t^{\pm\frac14}]\mathbf B$ be a map given by $$\{b_1,b_2\}:=\frac14\lim_{q^{\frac14}\rightarrow 1}\frac{b_1b_2-b_2b_1}{q^{\frac14}-1}=\sum_{b_3\in\mathbf B}\left(\lim_{q^{\frac14}\rightarrow1} \frac{C_{b_1,b_2}^{b_3}-C_{b_2,b_1}^{b_3}}{q^{\frac14}-1}\right)b_3. \label{eq:BracketOfBasisElements}$$ [\[def:BracketOfBasisElements\]]{#def:BracketOfBasisElements label="def:BracketOfBasisElements"}* Recall that the collection $\mathbf B$ of normally ordered products of generators providing a basis of $\mathcal A_{q,t}$ is in bijection with the collection $B$ of commutative products of generators providing a basis of $\mathcal A_{q=1,t}$. 
Recall that the collection $\mathbf B$ of normally ordered products of generators providing a basis of $\mathcal A_{q,t}$ is in bijection with the collection $B$ of commutative products of generators providing a basis of $\mathcal A_{q=1,t}$. This allows us to extend Definition [\[def:BracketOfBasisElements\]](#def:BracketOfBasisElements){reference-type="ref" reference="def:BracketOfBasisElements"} to a $\mathbb C(t^{\frac14})$-linear map $$\{,\}:\mathcal A_{q=1,t}\otimes\mathcal A_{q=1,t}\rightarrow\mathcal A_{q=1,t},$$ which we denote by the same curly brackets. **Proposition 43**. *The map $\{,\}$ defines a $\mathrm{Mod}(\Sigma_2)$-equivariant Poisson bracket on $\mathcal A_{q=1,t}$. [\[prop:Aq1tPoissonBracket\]]{#prop:Aq1tPoissonBracket label="prop:Aq1tPoissonBracket"}* *Proof.* The skew-symmetry and the Leibniz identity follow immediately from Definition [\[def:BracketOfBasisElements\]](#def:BracketOfBasisElements){reference-type="ref" reference="def:BracketOfBasisElements"}. The Jacobi identity is a corollary of the fact that the $C_{b_1,b_2}^{b_3}$ are the structure constants of an associative algebra. Finally, the $\mathrm{Mod}(\Sigma_2)$-equivariance follows from Theorem [\[th:MCGActionGeneric\]](#th:MCGActionGeneric){reference-type="ref" reference="th:MCGActionGeneric"}. ◻ One can read off the action of the Poisson bracket on generators of $\mathcal A_{q=1,t}$ from Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}. Recall that the entry $(\pm c_{J,K}|X)$ in the $O_J$-th row and $O_K$-th column corresponds to a normal ordering relation in $\mathcal A_{q,t}$ $$q^{\frac{c_{J,K}}4}O_JO_K-q^{-\frac{c_{J,K}}4}O_KO_J\mp(q^{\frac12}-q^{-\frac12})X=0.$$ Note that, by linearity, we can use the left equality in ([\[eq:BracketOfBasisElements\]](#eq:BracketOfBasisElements){reference-type="ref" reference="eq:BracketOfBasisElements"}) for an arbitrary linear combination of basis elements, in particular for any normally ordered product of generators. Without loss of generality, we assume that $O_KO_J$ is a normally ordered monomial; then the Poisson bracket between the two generators is given by $$\{O_J,O_K\}=\frac14\lim_{q^{\frac14}\rightarrow1}\frac{O_JO_K-O_KO_J}{q^{\frac14}-1} =\frac14\lim_{q^{\frac14}\rightarrow1}\frac{(-1+q^{-\frac{c_{J,K}}2})O_KO_J\pm q^{-\frac{c_{J,K}}4}(q^{\frac12}-q^{-\frac12})X}{q^{\frac14}-1}=-\frac{c_{J,K}}2O_KO_J\pm X.$$
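For instance, if the table contained the entry $(+1|O_5)$ for a pair of generators $O_J,O_K$ (a hypothetical entry, used here purely for illustration), the corresponding normal ordering relation would read $$q^{\frac14}O_JO_K-q^{-\frac14}O_KO_J-(q^{\frac12}-q^{-\frac12})O_5=0,$$ and the formula above would give $\{O_J,O_K\}=-\frac12O_KO_J+O_5$ in $\mathcal A_{q=1,t}$.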
Finally, note that the coefficients in ([\[eq:BracketOfBasisElements\]](#eq:BracketOfBasisElements){reference-type="ref" reference="eq:BracketOfBasisElements"}) are Laurent polynomials in $t^{\frac14}$ and hence specialize well to $t=1$. This allows one to introduce a $\mathbb C$-linear map $$\{,\}_{t=1}:\mathcal A_{q=t=1}\otimes\mathcal A_{q=t=1}\rightarrow \mathcal A_{q=t=1}.$$ **Corollary 44**. *The map $\{,\}_{t=1}$ is a $\mathrm{Mod}(\Sigma_2)$-equivariant Poisson bracket which coincides with the Goldman Poisson bracket on $$\mathcal A_{q=t=1}\simeq\mathcal O[\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))]^{SL(2,\mathbb C)}.$$ [\[cor:Aq1t1BracketAgreesWithGoldman\]]{#cor:Aq1t1BracketAgreesWithGoldman label="cor:Aq1t1BracketAgreesWithGoldman"}* *Proof.* The first part of the statement follows immediately from the $t^{\frac14}=1$ specialization of Proposition [\[prop:Aq1tPoissonBracket\]](#prop:Aq1tPoissonBracket){reference-type="ref" reference="prop:Aq1tPoissonBracket"}, because all the structure constants involved are Laurent polynomials in $t^{\frac14}$. As for the agreement with the Goldman Poisson bracket, we can use Corollary 5.11 from [@CookeSamuelson'2021], which states that the $q=t$ specialization $A_{q=t}\simeq\mathrm{Sk}_{\Sigma_2}$ of the algebra of difference operators is isomorphic to the skein algebra of the genus two surface. Since the latter is a quantization of the Poisson algebra $\mathcal A_{q=t=1}$, this finishes the proof. ◻ As an illustration of Corollary [\[cor:Aq1t1BracketAgreesWithGoldman\]](#cor:Aq1t1BracketAgreesWithGoldman){reference-type="ref" reference="cor:Aq1t1BracketAgreesWithGoldman"} we have also computed the Poisson brackets between the images of the generators of $A_{q=t=1}$ in the invariant subring of $\mathcal O[\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))]$ using the completely independent method of double quasi-Poisson brackets [@MassuyeauTuraev'2014]. We have verified that the result agrees with the entries in Table [\[tab:QCommRel\]](#tab:QCommRel){reference-type="ref" reference="tab:QCommRel"}. The Mathematica code for this illustration can be found at [@Arthamonov-GitHub-Flat]. # Discussion and future directions ## Symplectic resolutions of character varieties Remarkably enough, the $SL(2,\mathbb C)$-character variety of a genus two surface appears to be very special when character varieties are considered in a seemingly unrelated context. It was shown in [@BellamySchedler'2019] that $SL(n,\mathbb C)$ character varieties of closed genus $g>0$ surfaces are irreducible symplectic singularities. A symplectic singularity admits a symplectic resolution if the symplectic form on the smooth locus can be extended to a symplectic form on the resolution. This turns out to be a rather strong requirement, and symplectic singularities admitting such resolutions are rare. In the same paper [@BellamySchedler'2019], G. Bellamy and T. Schedler have shown that $SL(n,\mathbb C)$-character varieties admit symplectic resolutions only when $g=1$ or $(g,n)=(2,2)$. The case $g=1$ is well-known, and the symplectic resolution can be described explicitly using (a closed subscheme of) the Hilbert scheme $\mathrm{Hilb}^n(\mathbb C^\times\times\mathbb C^\times)$ [@Nakajima'1999]. A one-parameter deformation of $\mathrm{Hilb}^n(\mathbb C^\times\times\mathbb C^\times)$ is nothing but the completed configuration space of the trigonometric Ruijsenaars-Schneider integrable system [@Oblomkov'2004-CM-Spaces], the same integrable system which in the quantum case gives rise to sDAHA. It is natural to expect that our deformed commutative algebra $\mathcal A_{q=1,t}$ appears in the $(g,n)=(2,2)$ case of [@BellamySchedler'2019] and thus provides a multiplicative analogue of [@LehnSorger'2006]. We are planning to investigate this question in a sequel publication. ## Fourier Duality and root systems In [@DiFrancescoKedem'2023] P. Di Francesco and R. Kedem have studied the Fourier duality of Macdonald $q$-difference operators in a family of examples which includes the genus two algebra $A_{q,t}$. In particular, the authors proposed an analogue of an affine root system which can be used to define the difference operators and the duality. It would be extremely interesting to investigate the potential role of this "root-like" system in the "PBW-like" approach to the solution of the word problem in $\mathcal A_{q,t}\simeq A_{q,t}$ that we present in the current manuscript. Of course, by analogy with the usual Double Affine Hecke Algebra, one would expect both roots and co-roots to play an equal role. 
Without going into much detail, we would like to conclude this comment by highlighting the striking, although rather indirect, similarity between the formulas for the extra generators ([\[eq:qdiffleveltwogenerators\]](#eq:qdiffleveltwogenerators){reference-type="ref" reference="eq:qdiffleveltwogenerators"}), ([\[eq:qdifflevelthreegenerators\]](#eq:qdifflevelthreegenerators){reference-type="ref" reference="eq:qdifflevelthreegenerators"}) and the way non-simple roots are expressed through the simple roots in an ordinary Lie algebra. ## Relation to cluster algebras In [@ChekhovShapiro'2023] L. Chekhov and M. Shapiro have proposed a cluster algebra construction for the $SL(2,\mathbb C)$ character variety of a closed genus two surface. The authors have shown that one can introduce an extra parameter in such a cluster algebra. It would be very interesting to compare our deformed commutative algebra $\mathcal A_{q=1,t}$ with the Laurent subalgebra of this deformed cluster algebra. Establishing an isomorphism between the two would be a very interesting problem. # Acknowledgements {#acknowledgements .unnumbered} I am very grateful to Shamil Shakirov; this paper would not have been possible without several years of prior joint work with Shamil on [@ArthamonovShakirov'2020] and [@ArthamonovShakirov'2019]. I am also grateful to Pavel Etingof, Andrey Okounkov, and Nicolai Reshetikhin for many fruitful discussions and remarks. # Calculations for Proposition [\[prop:d1Automorphism\]](#prop:d1Automorphism){reference-type="ref" reference="prop:d1Automorphism"} {#sec:d1Automorphism} In this section we prove the second part of Proposition [\[prop:d1Automorphism\]](#prop:d1Automorphism){reference-type="ref" reference="prop:d1Automorphism"}, namely that the homomorphism $d_1:F\rightarrow F$ introduced in ([\[eq:d1Automorphism\]](#eq:d1Automorphism){reference-type="ref" reference="eq:d1Automorphism"}) preserves the defining ideal ([\[eq:DefiningRelationsAqt\]](#eq:DefiningRelationsAqt){reference-type="ref" reference="eq:DefiningRelationsAqt"}) of $\mathcal A_{q,t}$. Throughout this section we use the same notation as in the proof of Lemma [\[lemm:JRelations\]](#lemm:JRelations){reference-type="ref" reference="lemm:JRelations"}. It will be convenient for us to express the right hand side of the action using both the original relators $\eta_{I,J}$ introduced in ([\[eq:EtaIJRelator\]](#eq:EtaIJRelator){reference-type="ref" reference="eq:EtaIJRelator"}) and their special combinations $\rho_i$, $1\leq i\leq 18$, as in ([\[eq:JrelationsViaNormal\]](#eq:JrelationsViaNormal){reference-type="ref" reference="eq:JrelationsViaNormal"}). Also, by $\rho_0\in F$ we denote the $q$-Casimir relator which appears on the left hand side of ([\[eq:qCasimirRelation\]](#eq:qCasimirRelation){reference-type="ref" reference="eq:qCasimirRelation"}). Each identity below exhibits $d_1(\eta_{I,J})$ as an explicit $F$-bilinear combination of the relators $\eta_{I,J}$ and $\rho_i$, which is precisely the statement that the defining ideal is preserved. 
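A mechanical way to check identities of this shape is to expand both sides in the free algebra $F$ and verify that the difference cancels term by term. The following sympy sketch illustrates the idea on a toy example with made-up relators (they are placeholders, not the relators $\eta_{I,J}$ of ([\[eq:EtaIJRelator\]](#eq:EtaIJRelator){reference-type="ref" reference="eq:EtaIJRelator"})); the actual verification of the identities below is carried out by the Mathematica code at [@Arthamonov-GitHub-Flat].

```python
import sympy as sp

q = sp.Symbol('q', positive=True)
# Noncommutative placeholders standing in for generators of the free algebra F.
O1, O2, O3 = sp.symbols('O1 O2 O3', commutative=False)

# Toy relators, made up for this illustration only.
r21 = O2*O1 - q*O1*O2
r31 = O3*O1 - O1*O3
r32 = O3*O2 - O2*O3

# A membership certificate of the same shape as the identities below: the left
# hand side, written as a combination of monomials, equals an F-bilinear
# combination of the relators.
lhs = O3*O2*O1 - q*O1*O2*O3
rhs = O3*r21 + q*r31*O2 + q*O1*r32

assert sp.expand(lhs - rhs) == 0  # the difference cancels term by term
```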
First we note that $d_1$ acts trivially on the following relators: $$\begin{aligned} &\eta_{(3)(1)}, \eta_{(4)(1)}, \eta_{(4)(3)}, \eta_{(5)(1)}, \eta_{(5)(3)}, \eta_{(5)(4)},\\ &\eta_{(34)(1)}, \eta_{(34)(3)}, \eta_{(34)(4)}, \eta_{(34)(5)},\\ &\eta_{(45)(1)}, \eta_{(45)(3)}, \eta_{(45)(4)}, \eta_{(45)(5)}, \eta_{(45)(34)},\\ &\eta_{(345)(1)}, \eta_{(345)(3)}, \eta_{(345)(4)}, \eta_{(345)(5)}, \eta_{(345)(34)}, \eta_{(345)(45)}.\end{aligned}$$ As for the remaining relators we get $$\begin{aligned} d_1(\eta_{(2)(1)})=&O_1O_2O_1 -q^{\frac{1}{2}}O_1^2O_2 -q^{\frac{1}{4}}O_{12}O_1 +q^{\frac{3}{4}}O_1O_{12} +q^{-\frac{1}{2}}(q-1)O_2=q^{\frac{1}{4}}O_1\eta _{(2)(1)} -\eta _{(12)(1)},\\[0.3em] d_1(\eta_{(3)(2)})=&O_3O_1O_2 -q^{\frac{1}{2}}O_1O_2O_3 +q^{\frac{3}{4}}O_{12}O_3 -q^{\frac{1}{4}}O_3O_{12} +q^{-\frac{1}{4}}(q-1)O_1O_{23} -(q-1)O_{123}\\ =&q^{\frac{1}{4}}O_1\eta _{(3)(2)} +\eta _{(3)(1)}O_2 +q^{\frac{1}{2}}\eta _{(12)(3)},\\[0.3em] d_1(\eta_{(4)(2)})=&q^{\frac{1}{4}}O_4O_1O_2 -q^{\frac{1}{4}}O_1O_2O_4 +q^{\frac{1}{2}}O_{12}O_4 -q^{\frac{1}{2}}O_4O_{12}=q^{\frac{1}{4}}O_1\eta _{(4)(2)} +q^{\frac{1}{4}}\eta _{(4)(1)}O_2 +q^{\frac{1}{2}}\eta _{(12)(4)},\\[0.3em] d_1(\eta_{(5)(2)})=&q^{\frac{1}{4}}O_5O_1O_2 -q^{\frac{1}{4}}O_1O_2O_5 +q^{\frac{1}{2}}O_{12}O_5 -q^{\frac{1}{2}}O_5O_{12}=q^{\frac{1}{4}}O_1\eta _{(5)(2)} +q^{\frac{1}{4}}\eta _{(5)(1)}O_2 +q^{\frac{1}{2}}\eta _{(12)(5)},\\[0.3em] d_1(\eta_{(6)(1)})=&q^{\frac{1}{4}}O_{61}O_1 -q^{\frac{3}{4}}O_1O_{61} +(q-1)O_6=q^{\frac{1}{2}}\eta _{(61)(1)},\\[0.3em] d_1(\eta_{(6)(2)})=&q^{\frac{1}{4}}O_{61}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_{61} -q^{\frac{1}{2}}O_{61}O_{12} +q^{\frac{1}{2}}O_{12}O_{61}\\ =&q^{\frac{1}{2}}O_1\eta _{(61)(2)} +q^{\frac{1}{2}}\eta _{(61)(1)}O_2 -\eta _{(61)(12)} -q^{-1}(q-1) (q+1)\eta _{(6)(2)} +q^{-\frac{1}{2}}(q-1)\rho _4,\\[0.3em] d_1(\eta_{(6)(3)})=&O_{61}O_3 -O_3O_{61}=\eta _{(61)(3)},\\[0.3em] d_1(\eta_{(6)(4)})=&O_{61}O_4 -O_4O_{61}=\eta _{(61)(4)},\\[0.3em] d_1(\eta_{(6)(5)})=&q^{-\frac{1}{4}}O_{61}O_5 -q^{\frac{1}{4}}O_5O_{61} +q^{-\frac{1}{2}}(q-1)O_{234}=\eta _{(61)(5)},\\[0.3em] d_1(\eta_{(12)(1)})=&q^{\frac{1}{4}}O_2O_1 -q^{\frac{3}{4}}O_1O_2 +(q-1)O_{12}=q^{\frac{1}{2}}\eta _{(2)(1)},\\[0.3em] d_1(\eta_{(12)(2)})=&O_2O_1O_2 -q^{\frac{1}{2}}O_1O_2^2 +q^{\frac{3}{4}}O_{12}O_2 -q^{\frac{1}{4}}O_2O_{12} +q^{-\frac{1}{2}}(q-1)O_1=q^{\frac{1}{4}}\eta _{(2)(1)}O_2 +\eta _{(12)(2)},\\[0.3em] d_1(\eta_{(12)(3)})=& -q^{-\frac{1}{4}}O_3O_2 +q^{\frac{1}{4}}O_2O_3 -q^{-\frac{1}{2}}(q-1)O_{23}= -\eta _{(3)(2)},\\[0.3em] d_1(\eta_{(12)(4)})=& -O_4O_2 +O_2O_4= -\eta _{(4)(2)},\\[0.3em] d_1(\eta_{(12)(5)})=& -O_5O_2 +O_2O_5= -\eta _{(5)(2)},\\[0.3em] d_1(\eta_{(12)(6)})=& -q^{\frac{1}{4}}O_{61}O_2 +q^{-\frac{1}{4}}O_2O_{61} +q^{-\frac{1}{2}}(q-1)O_{345}= -\eta _{(61)(2)},\\[0.3em] d_1(\eta_{(23)(1)})=&O_1O_{23}O_1 -q^{\frac{1}{2}}O_1^2O_{23} -q^{\frac{1}{4}}O_{123}O_1 +q^{\frac{3}{4}}O_1O_{123} +q^{-\frac{1}{2}}(q-1)O_{23}=q^{\frac{1}{4}}O_1\eta _{(23)(1)} -\eta _{(123)(1)},\\[0.3em] d_1(\eta_{(23)(2)})=&q^{\frac{3}{4}}O_1O_{23}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_1O_{23} +q^{\frac{1}{2}}O_{12}O_1O_{23} -qO_1O_{23}O_{12} -qO_{123}O_1O_2 +q^{\frac{1}{2}}O_1O_2O_{123}\\ &+q^{\frac{5}{4}}O_{123}O_{12} -q^{\frac{3}{4}}O_{12}O_{123} -q^{-\frac{1}{2}}(q-1)O_3\\ =& -q^{\frac{1}{2}}O_1\eta _{(2)(1)}O_{23} +qO_1^2\eta _{(23)(2)} +qO_1\eta _{(23)(1)}O_2 +q^{\frac{1}{4}}\eta _{(12)(1)}O_{23} -q^{\frac{3}{2}}O_1\eta _{(123)(2)} -q^{\frac{1}{2}}O_1\eta _{(23)(12)}\\ &-(q-1)O_1\rho _5 -q^{\frac{3}{4}}\eta _{(123)(1)}O_2 +q\eta _{(123)(12)} 
-(q-1)\eta _{(23)(2)},\\[0.3em] d_1(\eta_{(23)(3)})=& -q^{\frac{1}{2}}O_3O_1O_{23} +O_1O_{23}O_3 -q^{\frac{1}{4}}O_{123}O_3 +q^{\frac{3}{4}}O_3O_{123} +q^{-\frac{1}{4}}(q-1)O_1O_2 -(q-1)O_{12}\\ =& -q^{\frac{1}{2}}\eta _{(3)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(23)(3)} -q^{\frac{1}{2}}\eta _{(123)(3)},\\[0.3em] d_1(\eta_{(23)(4)})=& -O_4O_1O_{23} +q^{\frac{1}{2}}O_1O_{23}O_4 -q^{\frac{3}{4}}O_{123}O_4 +q^{\frac{1}{4}}O_4O_{123} -q^{-\frac{1}{4}}(q-1)O_1O_{234} +(q-1)O_{56}\\ =& -\eta _{(4)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(23)(4)} -q^{\frac{1}{2}}\eta _{(123)(4)},\\[0.3em] d_1(\eta_{(23)(5)})=& -q^{\frac{1}{4}}O_5O_1O_{23} +q^{\frac{1}{4}}O_1O_{23}O_5 -q^{\frac{1}{2}}O_{123}O_5 +q^{\frac{1}{2}}O_5O_{123}= -q^{\frac{1}{4}}\eta _{(5)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(23)(5)} -q^{\frac{1}{2}}\eta _{(123)(5)},\\[0.3em] d_1(\eta_{(23)(6)})=& -q^{\frac{1}{4}}O_{61}O_1O_{23} +q^{\frac{1}{4}}O_1O_{23}O_{61} -q^{\frac{1}{2}}O_{123}O_{61} +q^{\frac{1}{2}}O_{61}O_{123}\\ =& -q^{\frac{1}{2}}\eta _{(61)(1)}O_{23} -q^{\frac{1}{2}}O_1\eta _{(61)(23)} -q^{\frac{1}{2}}\eta _{(123)(61)} +(q-1)\eta _{(45)(1)} -(q-1)\eta _{(23)(6)},\\[0.3em] d_1(\eta_{(23)(12)})=& -q^{-\frac{1}{4}}O_2O_1O_{23} +q^{\frac{3}{4}}O_1O_{23}O_2 -qO_{123}O_2 +O_2O_{123} -q^{-1}(q-1) (q+1)O_1O_3 +q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_5\\ =& -\eta _{(2)(1)}O_{23} +q^{\frac{1}{2}}O_1\eta _{(23)(2)} -q\eta _{(123)(2)} -q^{-\frac{1}{2}}(q-1)\rho _5,\\[0.3em] d_1(\eta_{(34)(2)})=&O_{34}O_1O_2 -q^{\frac{1}{2}}O_1O_2O_{34} -q^{\frac{1}{4}}O_{34}O_{12} +q^{\frac{3}{4}}O_{12}O_{34} +q^{-\frac{1}{4}}(q-1)O_1O_{234} -(q-1)O_{56}\\ =&q^{\frac{1}{4}}O_1\eta _{(34)(2)} +\eta _{(34)(1)}O_2 -q^{\frac{1}{2}}\eta _{(34)(12)},\\[0.3em] d_1(\eta_{(34)(6)})=& -O_{61}O_{34} +O_{34}O_{61}= -\eta _{(61)(34)},\\[0.3em] d_1(\eta_{(34)(12)})=&q^{-\frac{1}{4}}O_{34}O_2 -q^{\frac{1}{4}}O_2O_{34} +q^{-\frac{1}{2}}(q-1)O_{234}=\eta _{(34)(2)},\\[0.3em] d_1(\eta_{(34)(23)})=&q^{\frac{3}{4}}O_{34}O_1O_{23} -q^{-\frac{1}{4}}O_1O_{23}O_{34} +O_{123}O_{34} -qO_{34}O_{123} -q^{-\frac{3}{4}}(q-1) (q+1)O_1O_2O_4\\ &+q^{-\frac{1}{2}}(q-1) (q+1)O_{12}O_4 +q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_{61}\\ =&q^{\frac{3}{4}}\eta _{(34)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(34)(23)} +\eta _{(123)(34)} +q^{-\frac{1}{2}}(q-1)\eta _{(56)(3)} +q^{\frac{1}{2}}(q-1)\eta _{(12)(4)}\\ &+q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(6)(1)} -q^{\frac{1}{2}}(q-1)\rho _{12},\\[0.3em] d_1(\eta_{(45)(2)})=&q^{\frac{1}{4}}O_{45}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_{45} -q^{\frac{1}{2}}O_{45}O_{12} +q^{\frac{1}{2}}O_{12}O_{45}=q^{\frac{1}{4}}O_1\eta _{(45)(2)} +q^{\frac{1}{4}}\eta _{(45)(1)}O_2 -q^{\frac{1}{2}}\eta _{(45)(12)},\\[0.3em] d_1(\eta_{(45)(6)})=& -q^{-\frac{1}{4}}O_{61}O_{45} +q^{\frac{1}{4}}O_{45}O_{61} -q^{-\frac{1}{2}}(q-1)O_{23}= -\eta _{(61)(45)},\\[0.3em] d_1(\eta_{(45)(12)})=&O_{45}O_2 -O_2O_{45}=\eta _{(45)(2)},\\[0.3em] d_1(\eta_{(45)(23)})=&O_{45}O_1O_{23} -q^{\frac{1}{2}}O_1O_{23}O_{45} +q^{\frac{3}{4}}O_{123}O_{45} -q^{\frac{1}{4}}O_{45}O_{123} +q^{-\frac{1}{4}}(q-1)O_1O_{61} -(q-1)O_6\\ =&\eta _{(45)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(45)(23)} +q^{\frac{1}{2}}\eta _{(123)(45)},\\[0.3em] d_1(\eta_{(56)(1)})=&q^{\frac{1}{4}}O_{234}O_1 -q^{\frac{3}{4}}O_1O_{234} +(q-1)O_{56}=q^{\frac{1}{2}}\eta _{(234)(1)},\\[0.3em] d_1(\eta_{(56)(2)})=&q^{\frac{1}{4}}O_{234}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_{234} -q^{\frac{1}{2}}O_{234}O_{12} +q^{\frac{1}{2}}O_{12}O_{234}\\ =&q^{\frac{1}{2}}O_1\eta _{(234)(2)} +q^{\frac{1}{2}}\eta _{(234)(1)}O_2 -q^{\frac{1}{2}}\eta _{(234)(12)} -(q-1)\eta 
_{(34)(1)},\\[0.3em] d_1(\eta_{(56)(3)})=&O_{234}O_3 -O_3O_{234}=\eta _{(234)(3)},\\[0.3em] d_1(\eta_{(56)(4)})=&q^{-\frac{1}{4}}O_{234}O_4 -q^{\frac{1}{4}}O_4O_{234} +q^{-\frac{1}{2}}(q-1)O_{23}=\eta _{(234)(4)},\\[0.3em] d_1(\eta_{(56)(5)})=&q^{\frac{1}{4}}O_{234}O_5 -q^{-\frac{1}{4}}O_5O_{234} -q^{-\frac{1}{2}}(q-1)O_{61}=\eta _{(234)(5)},\\[0.3em] d_1(\eta_{(56)(6)})=&q^{-\frac{1}{4}}O_{234}O_{61} -q^{\frac{1}{4}}O_{61}O_{234} +q^{-\frac{1}{2}}(q-1)O_5=\eta _{(234)(61)},\\[0.3em] d_1(\eta_{(56)(12)})=&q^{\frac{1}{4}}O_{234}O_2 -q^{-\frac{1}{4}}O_2O_{234} -q^{-\frac{1}{2}}(q-1)O_{34}=\eta _{(234)(2)},\\[0.3em] d_1(\eta_{(56)(23)})=&q^{\frac{1}{4}}O_{234}O_1O_{23} -q^{\frac{1}{4}}O_1O_{23}O_{234} -q^{\frac{1}{2}}O_{234}O_{123} +q^{\frac{1}{2}}O_{123}O_{234}\\ =&q^{\frac{1}{2}}\eta _{(234)(1)}O_{23} +q^{\frac{1}{2}}O_1\eta _{(234)(23)} +q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_2\eta _{(6)(1)} +q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(2)(1)}O_6\\ &-\eta _{(234)(123)} -(q-1)\eta _{(56)(23)} -q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1)^2 (q+t)\eta _{(12)(6)} -q^{-1}(q-1) (q+1)\eta _{(4)(1)}\\ &+q^{-\frac{1}{2}}(q-1)\rho _{13},\\[0.3em] d_1(\eta_{(56)(34)})=&q^{-\frac{1}{4}}O_{234}O_{34} -q^{\frac{1}{4}}O_{34}O_{234} +q^{-\frac{1}{2}}(q-1)O_2=\eta _{(234)(34)},\\[0.3em] d_1(\eta_{(56)(45)})=&q^{\frac{1}{2}}O_{234}O_{45} -q^{-\frac{1}{2}}O_{45}O_{234} -q^{-1}(q-1) (q+1)O_4O_{61} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2\\ &-q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_{12}\\ =&q^{\frac{1}{2}}\eta _{(234)(45)} +(q-1)\eta _{(61)(4)} -(q-1)\eta _{(23)(5)} +(q-1)\rho _7,\\[0.3em] d_1(\eta_{(61)(1)})=&O_1O_{61}O_1 -q^{\frac{1}{2}}O_1^2O_{61} -q^{\frac{1}{4}}O_6O_1 +q^{\frac{3}{4}}O_1O_6 +q^{-\frac{1}{2}}(q-1)O_{61}=q^{\frac{1}{4}}O_1\eta _{(61)(1)} -\eta _{(6)(1)},\\[0.3em] d_1(\eta_{(61)(2)})=&q^{\frac{3}{4}}O_1O_{61}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_1O_{61} +q^{\frac{1}{2}}O_{12}O_1O_{61} -qO_1O_{61}O_{12} -qO_6O_1O_2 +q^{\frac{1}{2}}O_1O_2O_6 -q^{\frac{3}{4}}O_{12}O_6\\ &+q^{\frac{5}{4}}O_6O_{12} -q^{-\frac{1}{2}}(q-1)O_{345}\\ =& -q^{\frac{1}{2}}O_1\eta _{(2)(1)}O_{61} +qO_1^2\eta _{(61)(2)} +qO_1\eta _{(61)(1)}O_2 +q^{\frac{1}{4}}\eta _{(12)(1)}O_{61} -q^{\frac{1}{2}}O_1\eta _{(61)(12)}\\ &-q^{-\frac{1}{2}}(q^2+q-1)O_1\eta _{(6)(2)} +(q-1)O_1\rho _4 -q^{\frac{3}{4}}\eta _{(6)(1)}O_2 -(q-1)\eta _{(61)(2)} -q\eta _{(12)(6)},\\[0.3em] d_1(\eta_{(61)(3)})=& -q^{\frac{1}{4}}O_3O_1O_{61} +q^{\frac{1}{4}}O_1O_{61}O_3 -q^{\frac{1}{2}}O_6O_3 +q^{\frac{1}{2}}O_3O_6= -q^{\frac{1}{4}}\eta _{(3)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(61)(3)} -q^{\frac{1}{2}}\eta _{(6)(3)},\\[0.3em] d_1(\eta_{(61)(4)})=& -q^{\frac{1}{4}}O_4O_1O_{61} +q^{\frac{1}{4}}O_1O_{61}O_4 -q^{\frac{1}{2}}O_6O_4 +q^{\frac{1}{2}}O_4O_6= -q^{\frac{1}{4}}\eta _{(4)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(61)(4)} -q^{\frac{1}{2}}\eta _{(6)(4)},\\[0.3em] d_1(\eta_{(61)(5)})=& -q^{\frac{1}{2}}O_5O_1O_{61} +O_1O_{61}O_5 +q^{-\frac{1}{4}}(q-1)O_1O_{234} -q^{\frac{1}{4}}O_6O_5 +q^{\frac{3}{4}}O_5O_6 -(q-1)O_{56}\\ =& -q^{\frac{1}{2}}\eta _{(5)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(61)(5)} -q^{\frac{1}{2}}\eta _{(6)(5)},\\[0.3em] d_1(\eta_{(61)(6)})=& -O_{61}O_1O_{61} +q^{\frac{1}{2}}O_1O_{61}^2 +q^{\frac{1}{4}}O_{61}O_6 -q^{\frac{3}{4}}O_6O_{61} -q^{-\frac{1}{2}}(q-1)O_1= -q^{\frac{1}{4}}\eta _{(61)(1)}O_{61} +\eta _{(61)(6)},\\[0.3em] d_1(\eta_{(61)(12)})=& -q^{\frac{3}{4}}O_2O_1O_{61} +q^{-\frac{1}{4}}O_1O_{61}O_2 +q^{-\frac{3}{4}}(q-1) (q+1)O_1O_2O_{61} -q^{-\frac{1}{2}}(q-1) (q+1)O_{12}O_{61} -O_6O_2\\ &+qO_2O_6 
-q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_4\\ =& -q\eta _{(2)(1)}O_{61} +q^{-\frac{1}{2}}O_1\eta _{(61)(2)} +q^{-1}(q-1)\eta _{(61)(12)} -q^{-2}(q^2+q-1)\eta _{(6)(2)} +q^{-\frac{3}{2}}(q-1)\rho _4,\\[0.3em] d_1(\eta_{(61)(23)})=&q^{\frac{3}{4}}O_1O_{61}O_1O_{23} -q^{\frac{1}{4}}O_1O_{23}O_1O_{61} +q^{\frac{1}{2}}O_{123}O_1O_{61} -qO_1O_{61}O_{123} -qO_6O_1O_{23} +q^{\frac{1}{2}}O_1O_{23}O_6\\ &-q^{\frac{3}{4}}O_{123}O_6 +q^{\frac{5}{4}}O_6O_{123} -q^{-\frac{1}{2}}(q-1)O_{45}\\ =&qO_1\eta _{(61)(1)}O_{23} -q^{\frac{1}{2}}O_1\eta _{(23)(1)}O_{61} +qO_1^2\eta _{(61)(23)} -q^{\frac{3}{4}}\eta _{(6)(1)}O_{23} +q^{\frac{1}{4}}\eta _{(123)(1)}O_{61} +qO_1\eta _{(123)(61)}\\ &-q^{\frac{1}{2}}(q-1)O_1\eta _{(45)(1)} +q^{\frac{3}{2}}O_1\eta _{(23)(6)} -q\eta _{(123)(6)} -(q-1)\eta _{(61)(23)},\\[0.3em] d_1(\eta_{(61)(34)})=& -q^{\frac{1}{4}}O_{34}O_1O_{61} +q^{\frac{1}{4}}O_1O_{61}O_{34} +q^{\frac{1}{2}}O_{34}O_6 -q^{\frac{1}{2}}O_6O_{34}= -q^{\frac{1}{4}}\eta _{(34)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(61)(34)} +q^{\frac{1}{2}}\eta _{(34)(6)},\\[0.3em] d_1(\eta_{(61)(45)})=& -q^{\frac{1}{2}}O_{45}O_1O_{61} +O_1O_{61}O_{45} +q^{-\frac{1}{4}}(q-1)O_1O_{23} +q^{\frac{3}{4}}O_{45}O_6 -q^{\frac{1}{4}}O_6O_{45} -(q-1)O_{123}\\ =& -q^{\frac{1}{2}}\eta _{(45)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(61)(45)} +q^{\frac{1}{2}}\eta _{(45)(6)},\\[0.3em] d_1(\eta_{(61)(56)})=& -q^{-\frac{1}{4}}O_{234}O_1O_{61} +q^{\frac{3}{4}}O_1O_{61}O_{234} +O_{234}O_6 -qO_6O_{234} -q^{-1}(q-1) (q+1)O_1O_5\\ &+q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_3\\ =& -\eta _{(234)(1)}O_{61} -q^{\frac{1}{2}}O_1\eta _{(234)(61)} +\eta _{(234)(6)} +q^{-1}(q-1)\eta _{(5)(1)} -q^{-\frac{1}{2}}(q-1)\rho _3,\\[0.3em] d_1(\eta_{(123)(1)})=&q^{\frac{1}{4}}O_{23}O_1 -q^{\frac{3}{4}}O_1O_{23} +(q-1)O_{123}=q^{\frac{1}{2}}\eta _{(23)(1)},\\[0.3em] d_1(\eta_{(123)(2)})=&q^{\frac{1}{4}}O_{23}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_{23} -q^{\frac{1}{2}}O_{23}O_{12} +q^{\frac{1}{2}}O_{12}O_{23}\\ =&q^{\frac{1}{2}}O_1\eta _{(23)(2)} +q^{\frac{1}{2}}\eta _{(23)(1)}O_2 -(q-1)\eta _{(123)(2)} -\eta _{(23)(12)} -q^{-\frac{1}{2}}(q-1)\rho _5,\\[0.3em] d_1(\eta_{(123)(3)})=&q^{-\frac{1}{4}}O_{23}O_3 -q^{\frac{1}{4}}O_3O_{23} +q^{-\frac{1}{2}}(q-1)O_2=\eta _{(23)(3)},\\[0.3em] d_1(\eta_{(123)(4)})=&q^{\frac{1}{4}}O_{23}O_4 -q^{-\frac{1}{4}}O_4O_{23} -q^{-\frac{1}{2}}(q-1)O_{234}=\eta _{(23)(4)},\\[0.3em] d_1(\eta_{(123)(5)})=&O_{23}O_5 -O_5O_{23}=\eta _{(23)(5)},\\[0.3em] d_1(\eta_{(123)(6)})=& -q^{\frac{1}{4}}O_{61}O_{23} +q^{-\frac{1}{4}}O_{23}O_{61} +q^{-\frac{1}{2}}(q-1)O_{45}= -\eta _{(61)(23)},\\[0.3em] d_1(\eta_{(123)(12)})=&q^{\frac{1}{4}}O_{23}O_2 -q^{-\frac{1}{4}}O_2O_{23} -q^{-\frac{1}{2}}(q-1)O_3=\eta _{(23)(2)},\\[0.3em] d_1(\eta_{(123)(23)})=&O_{23}O_1O_{23} -q^{\frac{1}{2}}O_1O_{23}^2 +q^{\frac{3}{4}}O_{123}O_{23} -q^{\frac{1}{4}}O_{23}O_{123} +q^{-\frac{1}{2}}(q-1)O_1=q^{\frac{1}{4}}\eta _{(23)(1)}O_{23} +\eta _{(123)(23)},\\[0.3em] d_1(\eta_{(123)(34)})=& -O_{34}O_{23} +O_{23}O_{34} -q^{-\frac{1}{2}}(q-1)O_{234}O_3 +q^{-\frac{1}{2}}(q-1)O_2O_4\\ =& -q^{-\frac{1}{2}}(q-1)\eta _{(234)(3)} -q^{-\frac{1}{2}}\eta _{(34)(23)} -q^{-1}(q-1)\rho _6,\\[0.3em] d_1(\eta_{(123)(45)})=& -q^{-\frac{1}{4}}O_{45}O_{23} +q^{\frac{1}{4}}O_{23}O_{45} -q^{-\frac{1}{2}}(q-1)O_{61}= -\eta _{(45)(23)},\\[0.3em] d_1(\eta_{(123)(56)})=& -q^{\frac{1}{4}}O_{234}O_{23} +q^{-\frac{1}{4}}O_{23}O_{234} +q^{-\frac{1}{2}}(q-1)O_4= -\eta _{(234)(23)},\\[0.3em] d_1(\eta_{(123)(61)})=&q^{\frac{1}{4}}O_{23}O_1O_{61} -q^{\frac{1}{4}}O_1O_{61}O_{23} -q^{-\frac{1}{4}}(q-1)O_1O_{23}O_{61} 
+(q-1)O_{123}O_{61} -q^{\frac{1}{2}}O_{23}O_6 +q^{\frac{1}{2}}O_6O_{23}\\ &+q^{-\frac{1}{2}}(q-1)O_{45}O_1\\ =&q^{\frac{1}{2}}\eta _{(23)(1)}O_{61} -O_1\eta _{(61)(23)} +q^{-\frac{1}{2}}(q-1)\eta _{(45)(1)} -q^{\frac{1}{2}}\eta _{(23)(6)},\\[0.3em] d_1(\eta_{(234)(1)})=&O_1O_{234}O_1 -q^{\frac{1}{2}}O_1^2O_{234} -q^{\frac{1}{4}}O_{56}O_1 +q^{\frac{3}{4}}O_1O_{56} +q^{-\frac{1}{2}}(q-1)O_{234}=q^{\frac{1}{4}}O_1\eta _{(234)(1)} -\eta _{(56)(1)},\\[0.3em] d_1(\eta_{(234)(2)})=&q^{\frac{3}{4}}O_1O_{234}O_1O_2 -q^{\frac{1}{4}}O_1O_2O_1O_{234} +q^{\frac{1}{2}}O_{12}O_1O_{234} -qO_1O_{234}O_{12} -qO_{56}O_1O_2 +q^{\frac{1}{2}}O_1O_2O_{56}\\ &+q^{\frac{5}{4}}O_{56}O_{12} -q^{\frac{3}{4}}O_{12}O_{56} -q^{-\frac{1}{2}}(q-1)O_{34}\\ =& -q^{\frac{1}{2}}O_1\eta _{(2)(1)}O_{234} +qO_1^2\eta _{(234)(2)} +qO_1\eta _{(234)(1)}O_2 +q^{\frac{1}{4}}\eta _{(12)(1)}O_{234} -qO_1\eta _{(234)(12)} -q^{\frac{1}{2}}O_1\eta _{(56)(2)}\\ &-q^{\frac{1}{2}}(q-1)O_1\eta _{(34)(1)} -q^{\frac{3}{4}}\eta _{(56)(1)}O_2 -(q-1)\eta _{(234)(2)} +q\eta _{(56)(12)},\\[0.3em] d_1(\eta_{(234)(3)})=& -q^{\frac{1}{4}}O_3O_1O_{234} +q^{\frac{1}{4}}O_1O_{234}O_3 -q^{\frac{1}{2}}O_{56}O_3 +q^{\frac{1}{2}}O_3O_{56}= -q^{\frac{1}{4}}\eta _{(3)(1)}O_{234} +q^{\frac{1}{4}}O_1\eta _{(234)(3)} -q^{\frac{1}{2}}\eta _{(56)(3)},\\[0.3em] d_1(\eta_{(234)(4)})=& -q^{\frac{1}{2}}O_4O_1O_{234} +O_1O_{234}O_4 +q^{-\frac{1}{4}}(q-1)O_1O_{23} -q^{\frac{1}{4}}O_{56}O_4 +q^{\frac{3}{4}}O_4O_{56} -(q-1)O_{123}\\ =& -q^{\frac{1}{2}}\eta _{(4)(1)}O_{234} +q^{\frac{1}{4}}O_1\eta _{(234)(4)} -q^{\frac{1}{2}}\eta _{(56)(4)},\\[0.3em] d_1(\eta_{(234)(5)})=& -O_5O_1O_{234} +q^{\frac{1}{2}}O_1O_{234}O_5 -q^{\frac{3}{4}}O_{56}O_5 +q^{\frac{1}{4}}O_5O_{56} -q^{-\frac{1}{4}}(q-1)O_1O_{61} +(q-1)O_6\\ =& -\eta _{(5)(1)}O_{234} +q^{\frac{1}{4}}O_1\eta _{(234)(5)} -q^{\frac{1}{2}}\eta _{(56)(5)},\\[0.3em] d_1(\eta_{(234)(6)})=& -q^{\frac{1}{4}}O_{61}O_1O_{234} +q^{\frac{1}{4}}O_1O_{234}O_{61} +q^{\frac{1}{2}}O_{61}O_{56} -q^{\frac{1}{2}}O_{56}O_{61}\\ =& -q^{\frac{1}{2}}\eta _{(61)(1)}O_{234} +q^{\frac{1}{2}}O_1\eta _{(234)(61)} +\eta _{(61)(56)} -q^{-1}(q-1)\eta _{(5)(1)} +q^{-\frac{1}{2}}(q-1)\rho _3,\\[0.3em] d_1(\eta_{(234)(12)})=&q^{-\frac{1}{4}}(q-1)O_{234}O_1O_2 -q^{\frac{1}{4}}O_2O_1O_{234} +q^{\frac{1}{4}}O_1O_{234}O_2 -(q-1)O_{234}O_{12} -q^{-\frac{1}{2}}(q-1)O_{34}O_1\\ & -q^{\frac{1}{2}}O_{56}O_2 +q^{\frac{1}{2}}O_2O_{56}\\ =& -q^{\frac{1}{2}}\eta _{(2)(1)}O_{234} +qO_1\eta _{(234)(2)} +(q-1)\eta _{(234)(1)}O_2 -(q-1)\eta _{(234)(12)} -q^{\frac{1}{2}}\eta _{(56)(2)} -q^{\frac{1}{2}}(q-1)\eta _{(34)(1)},\\[0.3em] d_1(\eta_{(234)(23)})=&q^{\frac{3}{4}}O_1O_{234}O_1O_{23} -q^{\frac{1}{4}}O_1O_{23}O_1O_{234} +q^{\frac{1}{2}}O_{123}O_1O_{234} -qO_1O_{234}O_{123} -qO_{56}O_1O_{23} +q^{\frac{1}{2}}O_1O_{23}O_{56}\\ &-q^{\frac{3}{4}}O_{123}O_{56} +q^{\frac{5}{4}}O_{56}O_{123} -q^{-\frac{1}{2}}(q-1)O_4\\ =& -q^{\frac{1}{2}}O_1\eta _{(23)(1)}O_{234} +qO_1\eta _{(234)(1)}O_{23} +qO_1^2\eta _{(234)(23)} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2\eta _{(6)(1)}\\ &+q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1\eta _{(2)(1)}O_6 +q^{\frac{1}{4}}\eta _{(123)(1)}O_{234} -q^{\frac{3}{4}}\eta _{(56)(1)}O_{23} -q^{\frac{1}{2}}O_1\eta _{(234)(123)} -q^{\frac{3}{2}}O_1\eta _{(56)(23)}\\ &-q^{-1}t^{-\frac{1}{2}}(q-1)^2 (q+t)O_1\eta _{(12)(6)} -q^{-\frac{1}{2}}(q-1) (q+1)O_1\eta _{(4)(1)} +(q-1)O_1\rho _{13} -(q-1)\eta _{(234)(23)}\\ &-q\eta _{(123)(56)},\\[0.3em] d_1(\eta_{(234)(34)})=& -q^{\frac{1}{2}}O_{34}O_1O_{234} +O_1O_{234}O_{34} -q^{\frac{1}{4}}O_{56}O_{34} 
+q^{\frac{3}{4}}O_{34}O_{56} +q^{-\frac{1}{4}}(q-1)O_1O_2 -(q-1)O_{12}\\ =& -q^{\frac{1}{2}}\eta _{(34)(1)}O_{234} +q^{\frac{1}{4}}O_1\eta _{(234)(34)} -q^{\frac{1}{2}}\eta _{(56)(34)},\\[0.3em] d_1(\eta_{(234)(45)})=& -q^{\frac{1}{4}}O_{45}O_1O_{234} +q^{\frac{1}{4}}O_1O_{234}O_{45} +q^{-\frac{1}{4}}(q-1)O_1O_{23}O_5 -q^{-\frac{1}{4}}(q-1)O_1O_{61}O_4 -q^{\frac{1}{2}}O_{56}O_{45}\\ &+q^{\frac{1}{2}}O_{45}O_{56} -(q-1)O_{123}O_5 +(q-1)O_6O_4\\ =& -q^{\frac{1}{4}}\eta _{(45)(1)}O_{234} +q^{\frac{1}{4}}O_1\eta _{(234)(45)} -(q-1)\eta _{(123)(5)} -\eta _{(56)(45)} +(q-1)\eta _{(6)(4)} -q^{-\frac{1}{2}}(q-1)\rho _2,\\[0.3em] d_1(\eta_{(234)(56)})=& -O_{234}O_1O_{234} +q^{\frac{1}{2}}O_1O_{234}^2 +q^{\frac{1}{4}}O_{234}O_{56} -q^{\frac{3}{4}}O_{56}O_{234} -q^{-\frac{1}{2}}(q-1)O_1= -q^{\frac{1}{4}}\eta _{(234)(1)}O_{234} +\eta _{(234)(56)},\\[0.3em] d_1(\eta_{(234)(61)})=&q^{\frac{1}{4}}O_1O_{234}O_1O_{61} -q^{\frac{3}{4}}O_1O_{61}O_1O_{234} -q^{\frac{1}{2}}O_{56}O_1O_{61} +qO_1O_{61}O_{56} +qO_6O_1O_{234} -q^{\frac{1}{2}}O_1O_{234}O_6\\ &+q^{\frac{3}{4}}O_{56}O_6 -q^{\frac{5}{4}}O_6O_{56} +q^{-\frac{1}{2}}(q-1)O_5\\ =& -qO_1\eta _{(61)(1)}O_{234} +q^{\frac{1}{2}}O_1\eta _{(234)(1)}O_{61} +qO_1^2\eta _{(234)(61)} +q^{\frac{3}{4}}\eta _{(6)(1)}O_{234} -q^{\frac{1}{4}}\eta _{(56)(1)}O_{61} -q^{\frac{1}{2}}O_1\eta _{(234)(6)}\\ &+q^{\frac{1}{2}}O_1\eta _{(61)(56)} -q^{-\frac{1}{2}}(q-1)O_1\eta _{(5)(1)} +(q-1)O_1\rho _3 -(q-1)\eta _{(234)(61)} +q\eta _{(56)(6)},\\[0.3em] d_1(\eta_{(234)(123)})=&q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2O_{61}O_1 -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2O_1O_{61} -q^{-\frac{1}{4}}O_{23}O_1O_{234} \\ &+q^{\frac{3}{4}}O_1O_{234}O_{23}-q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_{12}O_{61}O_1 +t^{-\frac{1}{2}}(q-1) (q+t)O_{12}O_1O_{61} \\ & +t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2O_6 -qO_{56}O_{23} +O_{23}O_{56} -q^{\frac{1}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_{12}O_6 \\ &-q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_{61}O_2 -q^{-1}(q-1) (q+1)O_4O_1 +q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_{345}\\ =&q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_2\eta _{(61)(1)} -\eta _{(23)(1)}O_{234} -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_{12}\eta _{(61)(1)} +q^{\frac{1}{2}}O_1\eta _{(234)(23)}\\ &-q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(61)(2)} -q\eta _{(56)(23)} -q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(12)(6)} -q^{-1}(q-1) (q+1)\eta _{(4)(1)}\\ & +q^{-\frac{1}{2}}(q-1)\rho _{13},\\[0.3em] d_1(\eta_{(345)(2)})=&O_{345}O_1O_2 -q^{\frac{1}{2}}O_1O_2O_{345} -q^{\frac{1}{4}}O_{345}O_{12} +q^{\frac{3}{4}}O_{12}O_{345} +q^{-\frac{1}{4}}(q-1)O_1O_{61} -(q-1)O_6\\ =&q^{\frac{1}{4}}O_1\eta _{(345)(2)} +\eta _{(345)(1)}O_2 -q^{\frac{1}{2}}\eta _{(345)(12)},\\[0.3em] d_1(\eta_{(345)(6)})=&q^{\frac{1}{4}}O_{345}O_{61} -q^{-\frac{1}{4}}O_{61}O_{345} -q^{-\frac{1}{2}}(q-1)O_2=\eta _{(345)(61)},\\[0.3em] d_1(\eta_{(345)(12)})=&q^{-\frac{1}{4}}O_{345}O_2 -q^{\frac{1}{4}}O_2O_{345} +q^{-\frac{1}{2}}(q-1)O_{61}=\eta _{(345)(2)},\\[0.3em] d_1(\eta_{(345)(23)})=&q^{\frac{1}{4}}O_{345}O_1O_{23} -q^{\frac{1}{4}}O_1O_{23}O_{345} -q^{\frac{1}{2}}O_{345}O_{123} +q^{\frac{1}{2}}O_{123}O_{345} -q^{-\frac{1}{4}}(q-1)O_{45}O_1O_2\\ &+q^{-\frac{1}{4}}(q-1)O_1O_{61}O_3 +(q-1)O_{45}O_{12} -(q-1)O_6O_3\\ =&q^{\frac{1}{4}}\eta _{(345)(1)}O_{23} +q^{\frac{1}{4}}O_1\eta _{(345)(23)} +q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1\eta _{(6)(5)} -q^{-\frac{1}{4}}(q-1)\eta _{(45)(1)}O_2\\ &+q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(6)(1)}O_5 -\eta _{(345)(123)} 
+q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1)^2 (q+t)\eta _{(61)(5)} +(q-1)\eta _{(45)(12)}\\ &-q^{-1}(q-1) (q+1)\eta _{(6)(3)} -q^{-\frac{1}{2}}(q-1)\rho _0,\\[0.3em] d_1(\eta_{(345)(56)})=&O_{345}O_{234} -O_{234}O_{345} +q^{-\frac{1}{2}}(q-1)O_{34}O_{61} -q^{-\frac{1}{2}}(q-1)O_2O_5\\ =& -q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1\eta _{(3)(2)} -q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(3)(1)}O_2 +q^{-\frac{3}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_2\eta _{(3)(1)}\\ &+q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(2)(1)}O_3 +q^{-\frac{1}{2}}\eta _{(345)(234)} -q^{-2}t^{-\frac{1}{2}}(q-1)^2 (q+t)\eta _{(12)(3)}\\ &+q^{-\frac{3}{2}}(q-1) (q+1)\eta _{(5)(2)} -q^{-1}(q-1)\rho _{14},\\[0.3em] d_1(\eta_{(345)(61)})=&q^{\frac{1}{2}}O_{345}O_1O_{61} -O_1O_{61}O_{345} -q^{\frac{3}{4}}O_{345}O_6 +q^{\frac{1}{4}}O_6O_{345} -q^{-\frac{1}{4}}(q-1)O_1O_2 +(q-1)O_{12}\\ =&q^{\frac{1}{2}}\eta _{(345)(1)}O_{61} +q^{\frac{1}{4}}O_1\eta _{(345)(61)} -q^{\frac{1}{2}}\eta _{(345)(6)},\\[0.3em] d_1(\eta_{(345)(123)})=&q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_5O_1O_{61} -q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_5O_{61} +q^{-\frac{1}{2}}O_{345}O_{23} -q^{\frac{1}{2}}O_{23}O_{345}\\ &+q^{-1}(q-1) (q+1)O_3O_{61} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_5O_6 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_{56}\\ =&q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(5)(1)}O_{61} +q^{-\frac{1}{2}}\eta _{(345)(23)} -q^{-1}(q-1)\eta _{(61)(3)} +q^{-1}(q-1)\eta _{(45)(2)} -(q-1)\rho _{11},\\[0.3em] d_1(\eta_{(345)(234)})=&q^{\frac{3}{4}}O_{345}O_1O_{234} -q^{-\frac{1}{4}}O_1O_{234}O_{345} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_3O_1^2O_2 -q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)O_3O_1O_{12}\\ &-q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_1^2O_{23} -qO_{345}O_{56} +O_{56}O_{345} -q^{-\frac{3}{4}}(q-1) (q+1)O_5O_1O_2\\ &+q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_1O_{123}+q^{-\frac{1}{2}}(q-1) (q+1)O_5O_{12} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)O_3O_2\\ &+q^{-1}t^{-\frac{1}{2}}(q-1) (q+t)O_{23}\\ =&q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(3)(1)}O_1O_2 +q^{\frac{3}{4}}\eta _{(345)(1)}O_{234} -q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(3)(1)}O_{12}+q^{\frac{1}{4}}O_1\eta _{(345)(234)}\\ &-q^{-\frac{3}{4}}(q-1) (q+1)\eta _{(5)(1)}O_2 -q\eta _{(345)(56)} +q^{\frac{1}{2}}(q-1)\eta _{(34)(6)} -q^{\frac{1}{2}}(q-1)\eta _{(12)(5)}\\ &-q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q-1) (q+t)\eta _{(3)(2)} -q^{\frac{1}{2}}(q-1)\rho _8.\end{aligned}$$ We omit the explicit action of $d_1$ on the q-Casimir relator $\rho_0$ which can be found at [@Arthamonov-GitHub-Flat]. # Calculations for Theorem [\[th:IsomorphismAq1t1CoordinateRingCharacterVariety\]](#th:IsomorphismAq1t1CoordinateRingCharacterVariety){reference-type="ref" reference="th:IsomorphismAq1t1CoordinateRingCharacterVariety"} {#sec:IdentiyCompositionPsiPhi} In this section we present details of verification of identities ([\[eq:IdentityCompositionPhiPsi\]](#eq:IdentityCompositionPhiPsi){reference-type="ref" reference="eq:IdentityCompositionPhiPsi"}) used in the proof of Theorem [\[th:IsomorphismAq1t1CoordinateRingCharacterVariety\]](#th:IsomorphismAq1t1CoordinateRingCharacterVariety){reference-type="ref" reference="th:IsomorphismAq1t1CoordinateRingCharacterVariety"}. 
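All verifications in this appendix amount to reducing a polynomial identity modulo a Groebner basis of the defining ideal of the representation variety (see the proof of Lemma [\[lemm:FullRepresentationVarietyRelationsAux1\]](#lemm:FullRepresentationVarietyRelationsAux1){reference-type="ref" reference="lemm:FullRepresentationVarietyRelationsAux1"} below); the full computation is carried out by the Mathematica code at [@Arthamonov-GitHub-Flat]. As a minimal self-contained illustration of this kind of check, the following sympy sketch verifies the standard $SL(2,\mathbb C)$ trace relation $\mathrm{tr}(X^2)=\mathrm{tr}(X)^2-2$ on a toy slice of the ideal, cut out by a single determinant relation.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
X = sp.Matrix([[a, b], [c, d]])

# Toy slice of the defining ideal: only the determinant relation det(X) = 1.
G = sp.groebner([X.det() - 1], a, b, c, d, order='grevlex')

# tr(X^2) = tr(X)^2 - 2 holds for X in SL(2), i.e. only modulo the ideal,
# so the difference reduces to zero modulo the Groebner basis.
expr = sp.expand((X * X).trace() - X.trace()**2 + 2)
_, remainder = G.reduce(expr)
assert remainder == 0
```

The actual checks below involve all sixteen matrix entries $\left(X_i\right)_{jk},\left(Y_i\right)_{jk}$ and the full defining ideal $\mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}$.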
We start with a technical Lemma, which allows us to express the $SL(2,\mathbb C)$-invariant polynomials appearing on the right hand side of ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}) in terms of generators ([\[eq:ABL18Generators\]](#eq:ABL18Generators){reference-type="ref" reference="eq:ABL18Generators"}). **Lemma 45**. *The following identities hold in $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))$ $$\begin{aligned} \tau _{X_1Y_2X_2^{-1}Y_2^{-1}}=&\tau _{X_1X_2}-\tau _{X_1X_2} \tau _{Y_2}^2+\tau _{X_1} \tau _{X_2} \tau _{Y_2}^2+\tau _{Y_2} \tau _{X_1X_2Y_2}-\tau _{X_1} \tau _{Y_2} \tau _{X_2Y_2}-\tau _{X_2} \tau _{Y_2} \tau _{X_1Y_2}+\tau _{X_1Y_2} \tau _{X_2Y_2},\\ \tau _{Y_1X_1^{-1}}=&\tau _{X_1} \tau _{Y_1}-\tau _{X_1Y_1},\\ \tau _{X_1^{-1}Y_1^{-1}Y_2^{-1}}=&-\tau _{X_1Y_1Y_2}+\tau _{X_1} \tau _{Y_1Y_2}+\tau _{Y_1} \tau _{X_1Y_2}+\tau _{Y_2} \tau _{X_1Y_1}-\tau _{X_1} \tau _{Y_1} \tau _{Y_2},\\ \tau _{Y_2^{-1}Y_1^{-1}X_2}=&\tau _{Y_1X_2Y_2}-\tau _{Y_1} \tau _{X_2Y_2}-\tau _{Y_2} \tau _{Y_1X_2}+\tau _{X_2} \tau _{Y_1} \tau _{Y_2},\\ \tau _{X_1Y_2Y_2X_2^{-1}Y_2^{-1}}=&-\tau _{X_1X_2} \tau _{Y_2}^3+\tau _{X_1} \tau _{X_2} \tau _{Y_2}^3+\tau _{Y_2}^2 \tau _{X_1X_2Y_2}-\tau _{X_1} \tau _{Y_2}^2 \tau _{X_2Y_2}-\tau _{X_2} \tau _{Y_2}^2 \tau _{X_1Y_2}+2 \tau _{X_1X_2} \tau _{Y_2}\\ &\qquad+\tau _{Y_2} \tau _{X_1Y_2} \tau _{X_2Y_2}-\tau _{X_1} \tau _{X_2} \tau _{Y_2}-\tau _{X_1X_2Y_2}+\tau _{X_2} \tau _{X_1Y_2},\\ \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}=&-\tau _{Y_1X_2}\tau _{X_1} \tau _{Y_2}^2 +\tau _{X_1} \tau _{X_2} \tau _{Y_1} \tau _{Y_2}^2+\tau _{Y_2} \tau _{X_1Y_2} \tau _{Y_1X_2}-\tau _{X_1X_2} \tau _{Y_1Y_2} \tau _{Y_2}+\tau _{X_1} \tau _{Y_2} \tau _{Y_1X_2Y_2}\\ &\qquad-\tau _{X_1} \tau _{Y_1} \tau _{Y_2} \tau _{X_2Y_2}-\tau _{X_2} \tau _{Y_1} \tau _{Y_2} \tau _{X_1Y_2}+\tau _{Y_1Y_2} \tau _{X_1X_2Y_2}-\tau _{X_1Y_1X_2}-\tau _{X_1Y_2} \tau _{Y_1X_2Y_2}\\ &\qquad+\tau _{X_1} \tau _{Y_1X_2}+\tau _{X_2} \tau _{X_1Y_1}+\tau _{X_1X_2} \tau _{Y_1}+\tau _{Y_1} \tau _{X_1Y_2} \tau _{X_2Y_2}-\tau _{X_1} \tau _{X_2} \tau _{Y_1},\\ \tau _{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}=&\tau _{Y_1}^2 \tau _{X_1Y_2}-\tau _{X_1} \tau _{Y_2} \tau _{Y_1}^2-\tau _{Y_1} \tau _{X_1Y_1Y_2}+\tau _{X_1} \tau _{Y_1Y_2} \tau _{Y_1}+\tau _{Y_2} \tau _{Y_1} \tau _{X_1Y_1}-\tau _{X_1Y_2}-\tau _{Y_1Y_2} \tau _{X_1Y_1}+\tau _{X_1} \tau _{Y_2},\\ \tau _{X_1Y_1Y_2Y_2X_2^{-1}Y_2^{-1}}=&\frac{1}{2} \big(-2 \tau _{X_1} \tau _{Y_2}^3 \tau _{Y_1X_2}+2 \tau _{X_1} \tau _{X_2} \tau _{Y_1} \tau _{Y_2}^3+2 \tau _{Y_2}^2 \tau _{X_1Y_2} \tau _{Y_1X_2}-2 \tau _{X_1X_2} \tau _{Y_1Y_2} \tau _{Y_2}^2+2 \tau _{X_1} \tau _{Y_2}^2 \tau _{Y_1X_2Y_2}\\ &\qquad-2 \tau _{X_1} \tau _{Y_1} \tau _{Y_2}^2 \tau _{X_2Y_2}-2 \tau _{X_2} \tau _{Y_1} \tau _{Y_2}^2 \tau _{X_1Y_2}+2 \tau _{Y_1Y_2} \tau _{Y_2} \tau _{X_1X_2Y_2}-\tau _{Y_2} \tau _{X_1Y_1X_2}\\ &\qquad-2 \tau _{Y_2} \tau _{X_1Y_2} \tau _{Y_1X_2Y_2}+3 \tau _{X_1} \tau _{Y_2} \tau _{Y_1X_2}+\tau _{X_2} \tau _{Y_2} \tau _{X_1Y_1}+2 \tau _{X_1X_2} \tau _{Y_1} \tau _{Y_2}+2 \tau _{Y_1} \tau _{Y_2} \tau _{X_1Y_2} \tau _{X_2Y_2}\\ &\qquad-3 \tau _{X_1} \tau _{X_2} \tau _{Y_1} \tau _{Y_2}-\tau _{X_1Y_1} \tau _{X_2Y_2}-\tau _{X_1Y_2} \tau _{Y_1X_2}+\tau _{X_1X_2} \tau _{Y_1Y_2}-\tau _{X_1} \tau _{Y_1X_2Y_2}+\tau _{X_2} \tau _{X_1Y_1Y_2}\\ &\qquad-\tau _{Y_1} \tau _{X_1X_2Y_2}+\tau _{X_1} \tau _{Y_1} \tau _{X_2Y_2}+\tau _{X_2} \tau _{Y_1} \tau _{X_1Y_2}\big),\\ \tau _{X_2Y_1^{-1}}=&\tau _{X_2} \tau _{Y_1}-\tau 
_{Y_1X_2}.\end{aligned}$$ [\[lemm:FullRepresentationVarietyRelationsAux1\]]{#lemm:FullRepresentationVarietyRelationsAux1 label="lemm:FullRepresentationVarietyRelationsAux1"}* *Proof.* We compute the Groebner basis in the coordinate ring $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))$ of the full representation variety in degree reverse lexicographic order and reduce both sides of the equation modulo this basis. The source code of the program in Mathematica is available at [@Arthamonov-GitHub-Flat]. ◻ Combining $\Psi$-action on generators ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}) with Lemma [\[lemm:FullRepresentationVarietyRelationsAux1\]](#lemm:FullRepresentationVarietyRelationsAux1){reference-type="ref" reference="lemm:FullRepresentationVarietyRelationsAux1"} we obtain formulas for the action of $\iota\circ\Psi$. Now we are ready to verify the first of the identities in ([\[eq:IdentityCompositionPhiPsi\]](#eq:IdentityCompositionPhiPsi){reference-type="ref" reference="eq:IdentityCompositionPhiPsi"}). **Lemma 46**. *The following composition of homomorphisms represents and identity map on $\mathcal A_{q=t=1}$. $$\Phi\circ\iota\circ\Psi=\mathrm{Id}_{\mathcal A_{q=t=1}}$$* *Proof.* Generators $$O_1,O_2,O_3,O_4,O_5,O_{12},O_{34},O_{45},O_{345}$$ are preserved verbatim, as for the rest we get $$\begin{aligned} \Phi\circ\iota\circ\Psi(O_6)=&O_1 O_2 O_3 O_4 O_5 -O_3 O_4 O_5 O_{12} -O_1 O_4 O_5 O_{23} -O_1 O_2 O_5 O_{34} -O_1 O_2 O_3 O_{45} +O_5 O_{12} O_{34}\\ &\qquad +O_3 O_{12} O_{45} +O_1 O_{23} O_{45} +O_4 O_5 O_{123} +O_1 O_5 O_{234} +O_1 O_2 O_{345} -O_{45} O_{123}\\ &\qquad -O_{12} O_{345} -O_5 O_{56} -O_1 O_{61} +O_6\\ =&O_6+ \left(O_1 O_2-O_{1,2}\right)g_7^{(q=t=1)} +O_1g_{29}^{(q=t=1)} +O_2g_2^{(q=t=1)} -O_6g_{18}^{(q=t=1)} -g_{34}^{(q=t=1)}\\ \equiv&O_6\bmod \mathcal I_{q=t=1},\\[5pt] \Phi\circ\iota\circ\Psi(O_{23})=&-O_1^2 O_2 O_3 +O_1 O_3 O_{12} +O_1^2 O_{23} +O_4 O_5 O_{61} -O_{45} O_{61} -O_1 O_{123} -O_4 O_{234} +O_2 O_3 +O_{23}\\ =&O_{23} -O_1g_9^{(q=t=1)} -g_{26}^{(q=t=1)}\\ \equiv&O_{23}\bmod \mathcal I_{q=t=1},\\[5pt] \Phi\circ\iota\circ\Psi(O_{56})=&O_1 O_2 O_3 O_4 O_5^2 -O_3 O_4 O_5^2 O_{12} -O_1 O_4 O_5^2 O_{23} -O_1 O_2 O_5^2 O_{34} -O_1 O_2 O_3 O_5 O_{45} +O_5^2 O_{12} O_{34}\\ &\qquad +O_3 O_5 O_{12} O_{45} +O_1 O_5 O_{23} O_{45} +O_4 O_5^2 O_{123} +O_1 O_5^2 O_{234} +O_1 O_2 O_5 O_{345} -O_5 O_{45} O_{123}\\ &\qquad -O_5 O_{12} O_{345} -O_1 O_2 O_3 O_4 +O_3 O_4 O_{12} +O_1 O_4 O_{23} +O_1 O_2 O_{34} -O_5^2 O_{56} -O_1 O_5 O_{61}\\ &\qquad -O_{12} O_{34} -O_4 O_{123} -O_1 O_{234} +O_5 O_6 +O_{56}\\ =&O_{56} +\left(O_1 O_2 O_5-O_5 O_{12}\right)g_7^{(q=t=1)} -O_1g_{43}^{(q=t=1)} +O_1O_2g_{27}^{(q=t=1)} -O_5g_{34}^{(q=t=1)}\\ &\qquad +O_2O_5g_2^{(q=t=1)} -O_5O_6g_{18}^{(q=t=1)} -g_{28}^{(q=t=1)}\\ \equiv&O_{56}\bmod \mathcal I_{q=t=1},\\[5pt] \Phi\circ\iota\circ\Psi(O_{61})=&-O_1 O_{23} O_{34} +O_1 O_3 O_{234} +O_{34} O_{123} +O_1 O_2 O_4 -O_4 O_{12} -O_3 O_{56} -O_{61}\\ =&O_{61} -O_1g_5^{(q=t=1)} +g_{15}^{(q=t=1)}\\ \equiv&O_{61}\bmod \mathcal I_{q=t=1},\\[5pt] \Phi\circ\iota\circ\Psi(O_{123})=&-O_1^3 O_2 O_3 +O_1^2 O_3 O_{12} +O_1^3 O_{23} +O_1 O_4 O_5 O_{61} -O_1 O_{45} O_{61} -O_1^2 O_{123} -O_1 O_4 O_{234}\\ &\qquad +O_1 O_2 O_3 +O_{123}\\ =&O_{123} -O_1^2g_9^{(q=t=1)} -O_1g_{26}^{(q=t=1)}\\ \equiv&O_{123}\bmod \mathcal I_{q=t=1},\\[5pt] \Phi\circ\iota\circ\Psi(O_{234})=&-O_1 O_5 O_{23} O_{34} +O_1 O_3 O_5 O_{234} +\frac{1}{2}O_1^2 O_2 O_{34} -\frac{1}{2}O_4^2 O_5 O_{61} +O_5 
O_{34} O_{123} +\frac{1}{2}O_1 O_{23} O_{345}\\ &\qquad +O_1 O_2 O_4 O_5 -\frac{1}{2}O_1 O_{12} O_{34} +\frac{1}{2}O_4 O_{45} O_{61} -\frac{1}{2}O_1^2 O_{234} +\frac{1}{2}O_4^2 O_{234} -\frac{1}{2}O_{123} O_{345}\\ &\qquad -O_4 O_5 O_{12} -\frac{1}{2}O_1 O_2 O_{45} -O_3 O_5 O_{56} -\frac{1}{2}O_1 O_3 O_{61} +\frac{1}{2}O_{12} O_{45} -\frac{1}{2}O_4 O_{23} -\frac{1}{2}O_2 O_{34}\\ &\qquad +\frac{1}{2}O_1 O_{56} -\frac{1}{2}O_5 O_{61} +\frac{1}{2}O_3 O_6\\ =&O_{2,3,4} +\frac{1}{2}O_1g_{11}^{(q=t=1)} +O_5g_{15}^{(q=t=1)} -O_1O_5g_5^{(q=t=1)} -\frac{1}{2}g_{20}^{(q=t=1)} +\frac{1}{2}g_{39}^{(q=t=1)}\\ \equiv&O_{234}\bmod \mathcal I_{q=t=1}.\end{aligned}$$ ◻ **Lemma 47**. *The following combination of homomorphisms represents an identity map on the coordinate ring of the character variety $$\Psi\circ\Phi\circ\iota=\mathrm{Id}_{\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C)))^{SL(2,\mathbb C)}}.$$* *Proof.* Recall that invariant polynomials associated to ([\[eq:ABL18Generators\]](#eq:ABL18Generators){reference-type="ref" reference="eq:ABL18Generators"}) provide a complete set of generators of the $SL(2,\mathbb C)$-invariant subring. We will show that $\Psi\circ\Phi\circ\iota$ maps all elements ([\[eq:ABL18Generators\]](#eq:ABL18Generators){reference-type="ref" reference="eq:ABL18Generators"}) into the equivalent invariant polynomials modulo defining ideal of the coordinate ring of representation variety. The action on generators given by ([\[eq:PsiActionOnGenerators\]](#eq:PsiActionOnGenerators){reference-type="ref" reference="eq:PsiActionOnGenerators"}) and ([\[eq:PhiActionOnGenerators\]](#eq:PhiActionOnGenerators){reference-type="ref" reference="eq:PhiActionOnGenerators"}) implies that this property holds verbatim for $$\tau_{X_1},\tau_{Y_1},\tau_{X_2},\tau_{Y_2},\tau_{Y_1Y_2},\tau_{X_2Y_2}.$$ As for the rest of the generators we describe our calculation below. Denote the 16 generators of $\mathcal O(\mathrm{Hom}(\pi_1(\Sigma_2)),SL(2,\mathbb C))$ by $$\left(X_i\right)_{jk},\left(Y_i\right)_{jk},\qquad 1\leq i,j,k\leq2.$$ and let $\mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}$ be defining ideal of the coordinate ring of representation variety. 
We get $$\begin{aligned} \Psi\circ\Phi\circ\iota(\tau_{X_1Y_1})=&\Psi(O_1 O_2 -O_{12})\\ =&\tau _{X_1} \tau _{Y_1}-\tau _{Y_1X_1^{-1}}\\ =&\left(X_1\right)_{11} \left(Y_1\right)_{11}+\left(X_1\right)_{21} \left(Y_1\right)_{12}+\left(X_1\right)_{12} \left(Y_1\right)_{21}+\left(X_1\right)_{22} \left(Y_1\right)_{22}\\ =&\tau_{X_1Y_1},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{X_1X_2})=&\Psi(O_1 O_2 O_{345} -O_{12} O_{345} -O_1 O_{61} +O_6)\\ =&\tau _{X_1Y_2X_2^{-1}Y_2^{-1}}-\tau _{Y_1} \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}+\tau _{X_2Y_1^{-1}} \left(\tau _{X_1} \tau _{Y_1}-\tau _{Y_1X_1^{-1}}\right)\\ \equiv&\left(X_1\right)_{11} \left(X_2\right)_{11}+\left(X_1\right)_{21} \left(X_2\right)_{12}+\left(X_1\right)_{12} \left(X_2\right)_{21}+\left(X_1\right)_{22} \left(X_2\right)_{22}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}\\ \equiv&\tau_{X_1X_2}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{X_1Y_2})=&\Psi(-O_1 O_2 O_3 +O_3 O_{12} +O_1 O_{23} +O_2 O_5 -O_{123})\\ =&\tau _{Y_1Y_2} \tau _{Y_1X_1^{-1}}-\tau _{Y_1X_1^{-1}Y_1^{-1}Y_2^{-1}}+\tau _{Y_1} \tau _{X_1^{-1}Y_1^{-1}Y_2^{-1}}-\tau _{X_1} \tau _{Y_1} \tau _{Y_1Y_2}+\tau _{X_1} \tau _{Y_2}\\ \equiv&\left(X_1\right)_{11} \left(Y_2\right)_{11}+\left(X_1\right)_{21} \left(Y_2\right)_{12}+\left(X_1\right)_{12} \left(Y_2\right)_{21}+\left(X_1\right)_{22} \left(Y_2\right)_{22}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}\\ \equiv&\tau _{X_1Y_2}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{Y_1X_2})=&\Psi(O_1 O_4 -O_{345})\\ =&\tau _{X_2} \tau _{Y_1}-\tau _{X_2Y_1^{-1}}\\ =&\left(X_2\right)_{11} \left(Y_1\right)_{11}+\left(X_2\right)_{21} \left(Y_1\right)_{12}+\left(X_2\right)_{12} \left(Y_1\right)_{21}+\left(X_2\right)_{22} \left(Y_1\right)_{22}\\ =&\tau _{Y_1X_2},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{X_1Y_1X_2})=&\Psi(O_1^2 O_2 O_{345} -O_1 O_{12} O_{345} -O_1^2 O_{61} -O_2 O_{345} +O_1 O_6 +O_{61})\\ =&\tau _{Y_1} \tau _{X_1Y_2X_2^{-1}Y_2^{-1}}-\left(\tau _{Y_1}^2-1\right) \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}+\tau _{X_2Y_1^{-1}} \left(\tau _{X_1} \left(\tau _{Y_1}^2-1\right)-\tau _{Y_1} \tau _{Y_1X_1^{-1}}\right)\\ \equiv&\left(X_1\right)_{11} \left(X_2\right)_{11} \left(Y_1\right)_{11}+\left(X_1\right)_{21} \left(X_2\right)_{12} \left(Y_1\right)_{11}+\left(X_1\right)_{11} \left(X_2\right)_{21} \left(Y_1\right)_{12}+\left(X_1\right)_{21} \left(X_2\right)_{22} \left(Y_1\right)_{12}\\ &\quad+\left(X_1\right)_{12} \left(X_2\right)_{11} \left(Y_1\right)_{21}+\left(X_1\right)_{22} \left(X_2\right)_{12} \left(Y_1\right)_{21}+\left(X_1\right)_{12} \left(X_2\right)_{21} \left(Y_1\right)_{22}\\ &\quad+\left(X_1\right)_{22} \left(X_2\right)_{22} \left(Y_1\right)_{22}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma(2)),SL(2,\mathbb C))}\\ \equiv&\tau _{X_1Y_1X_2}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{X_1Y_1Y_2})=&\Psi( -O_4 O_5 O_{61} +O_1 O_2 O_5 +O_{45} O_{61} +O_4 O_{234} -O_5 O_{12} -O_{23})\\ =&-\tau _{X_1^{-1}Y_1^{-1}Y_2^{-1}}+\tau _{X_2Y_2} \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}+\tau _{X_2} \tau _{X_1Y_1Y_2Y_2X_2^{-1}Y_2^{-1}}-\tau _{Y_2} \tau _{Y_1X_1^{-1}}\\ &\quad-\tau _{X_2} \tau _{Y_2} \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}+\tau _{X_1} \tau _{Y_1} \tau _{Y_2}\\ \equiv&\left(X_1\right)_{11} \left(Y_1\right)_{11} \left(Y_2\right)_{11}+\left(X_1\right)_{12} \left(Y_1\right)_{21} \left(Y_2\right)_{11}+\left(X_1\right)_{21} \left(Y_1\right)_{11} 
\left(Y_2\right)_{12}+\left(X_1\right)_{22} \left(Y_1\right)_{21} \left(Y_2\right)_{12}\\ &\quad+\left(X_1\right)_{11} \left(Y_1\right)_{12} \left(Y_2\right)_{21}+\left(X_1\right)_{12} \left(Y_1\right)_{22} \left(Y_2\right)_{21}+\left(X_1\right)_{21} \left(Y_1\right)_{12} \left(Y_2\right)_{22}\\ &\quad+\left(X_1\right)_{22} \left(Y_1\right)_{22} \left(Y_2\right)_{22}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}\\ \equiv&\tau _{X_1Y_1Y_2}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{X_1X_2Y_2})=&\Psi(O_1 O_2 O_5 O_{345} -O_5 O_{12} O_{345} -O_1 O_2 O_{34} -O_1 O_5 O_{61} +O_{12} O_{34} +O_1 O_{234} +O_5 O_6 -O_{56})\\ =&-\tau _{X_1Y_2Y_2X_2^{-1}Y_2^{-1}}+\tau _{Y_1} \tau _{X_1Y_1Y_2Y_2X_2^{-1}Y_2^{-1}}-\tau _{X_1} \tau _{Y_1} \tau _{Y_2^{-1}Y_1^{-1}X_2}+\tau _{Y_2} \tau _{X_1Y_2X_2^{-1}Y_2^{-1}}\\ &\quad-\tau _{Y_1} \tau _{Y_2} \tau _{X_1Y_1Y_2X_2^{-1}Y_2^{-1}}+\tau _{X_1} \tau _{Y_1} \tau _{Y_2} \tau _{X_2Y_1^{-1}}+\tau _{Y_1X_1^{-1}} \left(\tau _{Y_2^{-1}Y_1^{-1}X_2}-\tau _{Y_2} \tau _{X_2Y_1^{-1}}\right)\\ \equiv&\left(X_1\right)_{11} \left(X_2\right)_{11} \left(Y_2\right)_{11}+\left(X_1\right)_{12} \left(X_2\right)_{21} \left(Y_2\right)_{11}+\left(X_1\right)_{21} \left(X_2\right)_{11} \left(Y_2\right)_{12}+\left(X_1\right)_{22} \left(X_2\right)_{21} \left(Y_2\right)_{12}\\ &\quad +\left(X_1\right)_{11} \left(X_2\right)_{12} \left(Y_2\right)_{21}+\left(X_1\right)_{12} \left(X_2\right)_{22} \left(Y_2\right)_{21}+\left(X_1\right)_{21} \left(X_2\right)_{12} \left(Y_2\right)_{22}\\ &\quad+\left(X_1\right)_{22} \left(X_2\right)_{22} \left(Y_2\right)_{22}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))}\\ \equiv&\tau _{X_1X_2Y_2}\bmod \mathcal I_{\mathrm{Hom}(\pi_1(\Sigma_2),SL(2,\mathbb C))},\\[7pt] \Psi\circ\Phi\circ\iota(\tau _{Y_1X_2Y_2})=&\Psi(-O_5 O_{345} +O_1 O_{45} +O_{34})\\ =&\tau _{Y_2^{-1}Y_1^{-1}X_2}-\tau _{Y_2} \tau _{X_2Y_1^{-1}}+\tau _{Y_1} \tau _{X_2Y_2}\\ =&\left(X_2\right)_{11} \left(Y_1\right)_{11} \left(Y_2\right)_{11}+\left(X_2\right)_{21} \left(Y_1\right)_{12} \left(Y_2\right)_{11}+\left(X_2\right)_{11} \left(Y_1\right)_{21} \left(Y_2\right)_{12}+\left(X_2\right)_{21} \left(Y_1\right)_{22} \left(Y_2\right)_{12}\\ &\quad+\left(X_2\right)_{12} \left(Y_1\right)_{11} \left(Y_2\right)_{21}+\left(X_2\right)_{22} \left(Y_1\right)_{12} \left(Y_2\right)_{21}+\left(X_2\right)_{12} \left(Y_1\right)_{21} \left(Y_2\right)_{22}+\left(X_2\right)_{22} \left(Y_1\right)_{22} \left(Y_2\right)_{22}\\ =&\tau _{Y_1X_2Y_2}\end{aligned}$$ ◻ # $q$-Groebner basis {#sec:qGroebnerBasis} In this subsection we introduce 61 relations in $\mathcal A_{q,t}$ which present a deformation of the Groebner basis for defining ideal of $\mathcal A_{q=1,t}$. We assign weights to generators as in ([\[eq:GeneratorWeights\]](#eq:GeneratorWeights){reference-type="ref" reference="eq:GeneratorWeights"}) and present every relator as a combination of normally ordered monomials sorted according to weighted degree reverse lexicographic monomial order. In our formulas the leading monomial always comes first and we normalize the relator so that the coefficient in leading term is 1. This simplifies the comparison with commutative Groebner bases for defining ideals of $\mathcal A_{q=1,t}$ and $\mathcal A_{q=t=1}$. Recall that by $F$ we denote a free $\mathbf k$-algebra with 15 generators. ([\[eq:15Generators\]](#eq:15Generators){reference-type="ref" reference="eq:15Generators"}) **Proposition 48**. 
*The following 61 elements of $F$ belong to defining ideal of $\mathcal A_{q,t}$. $$\begin{aligned} %GB element #1 g_{1}=&O_{56}O_{61} -q^{\frac{1}{2}}O_6O_{234} -q^{-\frac{1}{2}}O_1O_5 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_3,\\ %GB element #2 g_{2}=&O_{12}O_{61} -q^{-\frac{1}{2}}O_1O_{345} -q^{\frac{1}{2}}O_2O_6 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_4,\\ %GB element #3 g_{3}=&O_{45}O_{56} -q^{\frac{1}{2}}O_5O_{123} -q^{-\frac{1}{2}}O_4O_6 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_2,\\ %GB element #4 g_{4}=&O_{34}O_{45} -q^{\frac{1}{2}}O_4O_{345} -q^{-\frac{1}{2}}O_3O_5 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_1,\\ %GB element #5 g_{5}=&O_{23}O_{34} -q^{\frac{1}{2}}O_3O_{234} -q^{-\frac{1}{2}}O_2O_4 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_6,\\ %GB element #6 g_{6}=&O_{12}O_{23} -q^{\frac{1}{2}}O_2O_{123} -q^{-\frac{1}{2}}O_1O_3 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_5,\\ %GB element #7 g_{7}=&O_3O_4O_5 -q^{-\frac{1}{2}}O_1O_2O_6 +q^{\frac{1}{4}}O_6O_{12} -q^{-\frac{1}{4}}O_5O_{34} -q^{\frac{1}{4}}O_3O_{45} +q^{-\frac{5}{4}}O_2O_{61} -q^{-\frac{3}{2}}(q-1)^2O_{345},\\ %GB element #8 g_{8}=&O_2O_3O_4 -q^{-\frac{1}{2}}O_1O_5O_6 -q^{-\frac{1}{4}}O_4O_{23} -q^{\frac{1}{4}}O_2O_{34} +q^{-\frac{1}{4}}O_1O_{56} +q^{-\frac{3}{4}}O_5O_{61},\\ %GB element #9 g_{9}=&O_1O_2O_3 -O_4O_5O_6 -q^{-\frac{1}{4}}O_3O_{12} -q^{\frac{1}{4}}O_1O_{23} +q^{-\frac{1}{4}}O_6O_{45} +q^{\frac{1}{4}}O_4O_{56},\\ %GB element #10 g_{10}=&O_{56}O_{345} -q^{-\frac{1}{2}}O_5O_{12} -q^{\frac{1}{2}}O_6O_{34} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_2O_3 -t^{-\frac{1}{2}}(q+t)O_{23},\\ %GB element #11 g_{11}=&O_{23}O_{345} -q^{-\frac{1}{2}}O_2O_{45} -q^{\frac{1}{2}}O_3O_{61} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_5O_6 -t^{-\frac{1}{2}}(q+t)O_{56},\\ %GB element #12 g_{12}=&O_{45}O_{234} -q^{\frac{1}{2}}O_5O_{23} -q^{-\frac{1}{2}}O_4O_{61} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_1O_2 -t^{-\frac{1}{2}}(q+t)O_{12},\\ %GB element #13 g_{13}=&O_{12}O_{234} -q^{-\frac{1}{2}}O_1O_{34} -q^{\frac{1}{2}}O_2O_{56} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_4O_5 -t^{-\frac{1}{2}}(q+t)O_{45},\\ %GB element #14 g_{14}=&O_{61}O_{123} -q^{-\frac{1}{2}}O_6O_{23} -q^{\frac{1}{2}}O_1O_{45} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_3O_4 -t^{-\frac{1}{2}}(q+t)O_{34},\\ %GB element #15 g_{15}=&O_{34}O_{123} -q^{\frac{1}{2}}O_4O_{12} -q^{-\frac{1}{2}}O_3O_{56} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_1O_6 -q^{-1}t^{-\frac{1}{2}}(q+t)O_{61},\\ %GB element #16 g_{16}=&O_3O_4O_{34} -q^{-\frac{1}{2}}O_1O_6O_{61} -q^{\frac{1}{4}}O_{34}^2 +q^{-\frac{3}{4}}O_{61}^2 +q^{-\frac{3}{4}}O_1^2 -q^{\frac{1}{4}}O_3^2 -q^{-\frac{3}{4}}O_4^2 +q^{\frac{1}{4}}O_6^2,\\ %GB element #17 g_{17}=&O_2O_3O_{23} -O_5O_6O_{56} -q^{\frac{1}{4}}O_{23}^2 +q^{\frac{1}{4}}O_{56}^2 -q^{\frac{1}{4}}O_2^2 -q^{-\frac{3}{4}}O_3^2 +q^{\frac{1}{4}}O_5^2 +q^{-\frac{3}{4}}O_6^2,\\ %GB element #18 g_{18}=&O_1O_2O_{12} -O_4O_5O_{45} -q^{\frac{1}{4}}O_{12}^2 +q^{\frac{1}{4}}O_{45}^2 -q^{\frac{1}{4}}O_1^2 -q^{-\frac{3}{4}}O_2^2 +q^{\frac{1}{4}}O_4^2 +q^{-\frac{3}{4}}O_5^2,\\ %GB element #19 g_{19}=&O_{234}O_{345} +t^{-\frac{1}{2}}(q+t)O_4O_5O_6 -q^{\frac{1}{2}}O_{34}O_{61} -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_6O_{45} -q^{\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_4O_{56}\\ &-q^{-\frac{1}{2}}O_2O_5 +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_{123},\\ %GB element #20 g_{20}=&O_{123}O_{345} +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_1O_5O_6 -q^{-\frac{1}{2}}O_{12}O_{45} -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_1O_{56} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_5O_{61}\\ &-q^{\frac{1}{2}}O_3O_6 
+q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_{234},\\ %GB element #21 g_{21}=&O_{123}O_{234} +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_1O_2O_6 -q^{\frac{1}{2}}O_{23}O_{56} -q^{\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_6O_{12} -q^{-\frac{5}{4}}t^{-\frac{1}{2}}(q+t)O_2O_{61}\\ &-q^{-\frac{1}{2}}O_1O_4 +q^{-\frac{3}{2}}t^{-\frac{1}{2}}\left(q^2-q+1\right) (q+t)O_{345},\\ %GB element #22 g_{22}=&O_4O_5O_6^2 -q^{-\frac{3}{4}}O_6^2O_{45} -q^{\frac{3}{4}}O_4O_6O_{56} -q^{-\frac{3}{4}}O_2O_3O_{61} +q^{-\frac{1}{2}}O_{23}O_{61} +q^{-1}O_6O_{123} +q^{-\frac{3}{2}}O_3O_{345}\\ &+(q-2)O_4O_5 +q^{-\frac{7}{4}}(q-1)O_{45},\\ %GB element #23 g_{23}=&O_1O_2O_6^2 -q^{\frac{5}{4}}O_6^2O_{12} -q^{\frac{3}{4}}O_3O_4O_{56} -q^{-\frac{5}{4}}O_2O_6O_{61} +qO_{34}O_{56} +qO_3O_{123}\\ &+q^{-\frac{3}{2}}(q^3-q+1)O_6O_{345} -q^{-1}(2 q-1)O_1O_2,\\ %GB element #24 g_{24}=&O_1O_5^2O_6 -q^{\frac{1}{4}}O_2O_3O_{45} -q^{\frac{1}{4}}O_1O_5O_{56} -q^{-\frac{1}{4}}O_5^2O_{61} +q^{\frac{1}{2}}O_{23}O_{45} +O_5O_{234} +q^{\frac{1}{2}}O_2O_{345}\\ &-O_1O_6 -q^{-\frac{1}{4}}(q-1)O_{61},\\ %GB element #25 g_{25}=&O_4^2O_5O_6 -q^{-\frac{1}{4}}O_1O_2O_{34} -q^{-\frac{1}{4}}O_4O_6O_{45} -q^{\frac{1}{4}}O_4^2O_{56} +O_{12}O_{34} +q^{-\frac{1}{2}}O_4O_{123} +O_1O_{234} -O_5O_6,\\ %GB element #26 g_{26}=&O_1O_4O_5O_6 -q^{-\frac{1}{4}}O_1O_6O_{45} -q^{\frac{1}{4}}O_1O_4O_{56} -q^{-\frac{1}{4}}O_4O_5O_{61} +O_{45}O_{61} +q^{-\frac{1}{2}}O_1O_{123} +O_4O_{234} -O_2O_3,\\ %GB element #27 g_{27}=&O_1O_2O_5O_6 -q^{\frac{3}{4}}O_5O_6O_{12} -q^{\frac{1}{4}}O_1O_2O_{56} -q^{-\frac{3}{4}}O_2O_5O_{61} +q^{\frac{1}{2}}O_{12}O_{56} +q^{-\frac{1}{2}}O_2O_{234} +q^{-1}(q^2-q+1)O_5O_{345}\\ &-O_3O_4 +q^{-\frac{3}{4}}(q-1)O_{34},\\ %GB element #28 g_{28}=&O_1^2O_5O_6 -q^{\frac{1}{4}}O_3O_4O_{12} -q^{\frac{1}{4}}O_1^2O_{56} -q^{-\frac{1}{4}}O_1O_5O_{61} +qO_{12}O_{34} +q^{-\frac{1}{2}}O_4O_{123} +O_1O_{234}\\ &-O_5O_6 -q^{-\frac{3}{4}}(q-1)^2O_{56},\\ %GB element #29 g_{29}=&O_1O_2^2O_6 -q^{\frac{5}{4}}O_2O_6O_{12} -q^{-\frac{1}{4}}O_4O_5O_{23} -q^{-\frac{5}{4}}O_2^2O_{61} +q^{\frac{1}{2}}O_{23}O_{45} +q^{-1}O_5O_{234} +q^{-\frac{3}{2}}(q^3-q^2+1)O_2O_{345}\\ &+(q-2)O_1O_6 -q^{-\frac{1}{4}}(q-1)O_{61},\\ %GB element #30 g_{30}=&O_1O_5O_6O_{61} -q^{-\frac{1}{4}}O_5O_{61}^2 -q^{\frac{3}{4}}O_1O_6O_{234} -q^{-\frac{1}{4}}O_3O_4O_{345} +q^{\frac{1}{2}}O_{61}O_{234} +O_{34}O_{345} -q^{-\frac{1}{4}}O_1^2O_5 -q^{\frac{3}{4}}O_5O_6^2\\ &+q^{-1}O_4O_{45} +q^{\frac{3}{2}}O_6O_{56} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_1O_3 -q^{-\frac{5}{4}}(q-1)^2 (q+1)O_5,\\ %GB element #31 g_{31}=&O_1O_2O_6O_{61} -q^{-\frac{3}{4}}O_2O_{61}^2 -q^{\frac{3}{4}}O_3O_4O_{234} -q^{-\frac{1}{4}}O_1O_6O_{345} +qO_{34}O_{234} +q^{-\frac{3}{2}}O_{61}O_{345} -q^{\frac{1}{4}}O_1^2O_2\\ &-q^{\frac{5}{4}}O_2O_6^2 +q^{\frac{1}{2}}O_1O_{12} +qO_3O_{23} +q^{\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_4O_6 -q^{-\frac{7}{4}}(q-1)^2O_2,\\ %GB element #32 g_{32}=&O_4O_5O_6O_{56} -q^{\frac{1}{4}}O_4O_{56}^2 -q^{\frac{3}{4}}O_5O_6O_{123} -q^{-\frac{3}{4}}O_2O_3O_{234} +qO_{56}O_{123} +q^{-\frac{1}{2}}O_{23}O_{234} -q^{\frac{1}{4}}O_4O_5^2 -q^{-\frac{3}{4}}O_4O_6^2\\ &+q^{-\frac{3}{2}}O_3O_{34} +qO_5O_{45} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_2O_6 -q^{-\frac{7}{4}}(q-1)^2 (q+1)O_4,\\ %GB element #33 g_{33}=&O_1O_5O_6O_{56} -q^{\frac{1}{4}}O_1O_{56}^2 -q^{\frac{3}{4}}O_2O_3O_{123} -q^{-\frac{3}{4}}O_5O_6O_{234} +qO_{23}O_{123} +q^{-\frac{1}{2}}O_{56}O_{234} -q^{\frac{1}{4}}O_1O_5^2 -q^{-\frac{3}{4}}O_1O_6^2\\ &+qO_2O_{12} +q^{-\frac{3}{2}}O_6O_{61} +q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_3O_5 -q^{-\frac{7}{4}}(q-1)^2 
(q+1)O_1,\\ %GB element #34 g_{34}=&O_4O_5O_6O_{45} -q^{-\frac{1}{4}}O_6O_{45}^2 -q^{-\frac{1}{4}}O_4O_5O_{123} -q^{\frac{1}{4}}O_1O_2O_{345} +q^{-1}O_{45}O_{123} +q^{\frac{1}{2}}O_{12}O_{345} -q^{\frac{3}{4}}O_4^2O_6 -q^{-\frac{1}{4}}O_5^2O_6\\ &+O_5O_{56} +q^{\frac{1}{2}}O_1O_{61} +q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_2O_4 -q^{-\frac{5}{4}}(q-1)^2O_6,\\ %GB element #35 g_{35}=&O_1O_2O_6O_{45} -q^{\frac{3}{4}}O_6O_{12}O_{45} -q^{-\frac{1}{4}}O_2O_{45}O_{61} -q^{-\frac{1}{4}}O_1O_2O_{123} +O_{12}O_{123} +q^{-\frac{1}{2}}(q^2-q+1)O_{45}O_{345}\\ &-q^{\frac{3}{4}}O_3O_4^2+O_2O_{23} +q^{\frac{3}{2}}O_4O_{34} -2q^{-\frac{1}{4}}(q-1)^2O_3,\\ %GB element #36 g_{36}=&O_5O_6^2O_{34} -O_2O_3^2O_{61} -q^{\frac{5}{4}}O_6O_{34}O_{56} +q^{\frac{3}{4}}O_3O_{23}O_{61} +q^{-\frac{5}{4}}O_3^2O_{345} -q^{-\frac{5}{4}}O_6^2O_{345} +q^{-\frac{3}{2}}(q^3-q^2+1)O_6O_{12}\\ &+(q-2)O_5O_{34} -q^{-\frac{3}{2}}O_3O_{45} -(q-2)O_2O_{61},\\ %GB element #37 g_{37}=&O_1O_5O_6O_{34} -q^{\frac{3}{4}}O_1O_{34}O_{56} -q^{-\frac{1}{4}}O_5O_{34}O_{61} -q^{-\frac{3}{4}}O_1O_6O_{345} +q^{\frac{1}{2}}O_{34}O_{234} +q^{-1}O_{61}O_{345} -q^{\frac{3}{4}}O_2O_3^2\\ &+q^{-1}(q^2-q+1)O_1O_{12} +q^{\frac{3}{2}}O_3O_{23} -q^{-\frac{5}{4}}(q-1)^2 (q+1)O_2,\\ %GB element #38 g_{38}=&O_1O_2O_6O_{34} -q^{\frac{3}{4}}O_6O_{12}O_{34} -q^{-\frac{3}{4}}O_2O_{34}O_{61} -q^{\frac{1}{4}}O_1O_6O_{234} +O_{61}O_{234} +q^{-\frac{3}{2}}(q^2-q+1)O_{34}O_{345} \\ &-q^{\frac{1}{4}}O_4^2O_5+q^{\frac{1}{2}}O_4O_{45} +qO_6O_{56} -q^{-\frac{7}{4}}(q-1)^2O_5,\\ %GB element #39 g_{39}=&O_1^2O_2O_{34} -O_4^2O_5O_{61} -q^{\frac{1}{4}}O_1O_{12}O_{34} +q^{\frac{1}{4}}O_4O_{45}O_{61} -q^{\frac{1}{4}}O_1^2O_{234} +q^{\frac{1}{4}}O_4^2O_{234} -q^{\frac{1}{2}}O_4O_{23} -O_2O_{34}\\ & +q^{\frac{1}{2}}O_1O_{56} +O_5O_{61},\\ %GB element #40 g_{40}=&O_1O_6^2O_{23} -qO_3^2O_4O_{56} +q^{\frac{5}{4}}O_3O_{34}O_{56} -q^{-\frac{5}{4}}O_6O_{23}O_{61} +q^{\frac{5}{4}}O_3^2O_{123} -q^{\frac{5}{4}}O_6^2O_{123} -q^{\frac{3}{2}}O_3O_{12}\\ &-q^{-1}(2 q-1)O_1O_{23} +q^{-\frac{3}{2}}(q^3-q+1)O_6O_{45} +qO_4O_{56},\\ %GB element #41 g_{41}=&O_4O_5O_6O_{23} -q^{\frac{1}{4}}O_6O_{23}O_{45} -q^{\frac{1}{4}}O_4O_{23}O_{56} -q^{-\frac{3}{4}}O_5O_6O_{234} +O_{23}O_{123} +q^{-\frac{1}{2}}O_{56}O_{234} -q^{\frac{1}{4}}O_1O_2^2 \\ &+qO_2O_{12} +q^{-\frac{3}{2}}(q^2-q+1)O_6O_{61} -q^{-\frac{7}{4}}(q-1)^2 (q+1)O_1,\\ %GB element #42 g_{42}=&O_1O_5O_6O_{23} -q^{\frac{1}{4}}O_1O_{23}O_{56} -q^{-\frac{3}{4}}O_5O_{23}O_{61} -q^{\frac{3}{4}}O_5O_6O_{123} +qO_{56}O_{123} +q^{-\frac{1}{2}}O_{23}O_{234} -q^{\frac{1}{4}}O_3^2O_4 \\ &+q^{\frac{1}{2}}O_3O_{34} +q^{-1}(q^2-q+1)O_5O_{45} -q^{-\frac{3}{4}}(q-1)^2O_4,\\ %GB element #43 g_{43}=&O_4O_5^2O_{23} -O_1O_2^2O_{56} -q^{\frac{5}{4}}O_5O_{23}O_{45} +q^{\frac{3}{4}}O_2O_{12}O_{56} +q^{-\frac{5}{4}}O_2^2O_{234} -q^{-\frac{5}{4}}O_5^2O_{234} +(q-2)O_4O_{23} \\ &-q^{-\frac{3}{2}}O_2O_{34} -(q-2)O_1O_{56} +q^{-\frac{3}{2}}(q^3-q^2+1)O_5O_{61},\\ %GB element #44 g_{44}=&O_5^2O_6O_{12} -O_2^2O_3O_{45} +q^{\frac{1}{4}}O_2O_{23}O_{45} -q^{-\frac{1}{4}}O_5O_{12}O_{56} +q^{\frac{1}{4}}O_2^2O_{345} -q^{\frac{1}{4}}O_5^2O_{345} -O_6O_{12} +q^{-\frac{1}{2}}O_5O_{34}\\ &+O_3O_{45} -q^{\frac{1}{2}}O_2O_{61},\\ %GB element #45 g_{45}=&O_4O_5O_6O_{12} -q^{-\frac{1}{4}}O_6O_{12}O_{45} -q^{-\frac{1}{4}}O_4O_{12}O_{56} -q^{\frac{1}{4}}O_4O_5O_{345} +q^{-1}O_{12}O_{123} +q^{\frac{1}{2}}O_{45}O_{345} -q^{-\frac{1}{4}}O_2^2O_3,\\ &+O_2O_{23} +q^{-\frac{1}{2}}O_4O_{34} -q^{-\frac{5}{4}}(q-1)^2O_3\\ %GB element #46 g_{46}=&O_3O_4^2O_{12} -q^{-1}O_1^2O_6O_{45} 
-q^{\frac{5}{4}}O_4O_{12}O_{34} +q^{-\frac{3}{4}}O_1O_{45}O_{61} +q^{-\frac{5}{4}}O_1^2O_{123} -q^{-\frac{5}{4}}O_4^2O_{123} +(q-2)O_3O_{12}\\ &-q^{-\frac{1}{2}}O_1O_{23} +q^{-1}O_6O_{45} +q^{-\frac{3}{2}}(q^3-q^2+1)O_4O_{56},\\ %GB element #47 g_{47}=&O_1O_2O_6O_{345} -q^{\frac{3}{4}}O_6O_{12}O_{345} -q^{-\frac{3}{4}}O_2O_{61}O_{345} +q^{-1}(q^2-q+1)O_{345}^2 -q^{-\frac{1}{4}}O_4O_5O_{45} -q^{\frac{1}{4}}O_1O_6O_{61}\\ &+O_{45}^2 +O_{61}^2 +q^{-1}O_5^2 +qO_6^2 -q^{-1}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #48 g_{48}=&O_1O_5O_6O_{234} -q^{\frac{1}{4}}O_1O_{56}O_{234} -q^{-\frac{1}{4}}O_5O_{61}O_{234} +O_{234}^2 -q^{\frac{3}{4}}O_5O_6O_{56} -q^{-\frac{3}{4}}O_1O_6O_{61} +qO_{56}^2 +q^{-1}O_{61}^2\\ &+q^{-1}O_1^2 -O_3^2 +qO_5^2 +O_6^2 -q^{-1}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #49 g_{49}=&O_4O_5O_6O_{123} -q^{-\frac{1}{4}}O_6O_{45}O_{123} -q^{\frac{1}{4}}O_4O_{56}O_{123} +q^{-\frac{1}{2}}O_{123}^2 -q^{\frac{1}{4}}O_4O_5O_{45} -q^{-\frac{3}{4}}O_5O_6O_{56} +q^{\frac{1}{2}}O_{45}^2\\ &+q^{-\frac{1}{2}}O_{56}^2 -q^{-\frac{1}{2}}O_2^2 +q^{\frac{1}{2}}O_4^2 +q^{-\frac{1}{2}}O_5^2 +q^{-\frac{3}{2}}O_6^2 -q^{-\frac{3}{2}}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #50 g_{50}=&O_1O_6O_{45}O_{61} -q^{\frac{1}{2}}O_3O_4^2O_{345} -q^{\frac{1}{4}}O_{45}O_{61}^2 +q^{\frac{5}{4}}O_4O_{34}O_{345} -q^{\frac{1}{4}}O_1O_6O_{23} -q^{-\frac{3}{4}}O_1^2O_{45} +q^{-\frac{3}{4}}O_4^2O_{45}\\ &-q^{\frac{1}{4}}O_6^2O_{45} +q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_1O_3O_4 +q^{\frac{1}{2}}O_{23}O_{61} +O_6O_{123} -q^{\frac{1}{2}}(q-2)O_3O_{345} -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_1O_{34}\\ &-q^{-1}O_4O_5 +q^{-\frac{3}{4}}(q-1)O_{45},\\ %GB element #51 g_{51}=&O_4O_5O_{45}O_{61} -O_1^2O_2O_{345} -q^{\frac{1}{4}}O_{45}^2O_{61} +q^{\frac{1}{4}}O_1O_{12}O_{345} -q^{\frac{1}{4}}O_4O_5O_{23} +q^{\frac{1}{4}}O_1^2O_{61} -q^{\frac{1}{4}}O_4^2O_{61} -q^{-\frac{3}{4}}O_5^2O_{61}\\ &+q^{-\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_1O_2O_4 +qO_{23}O_{45} +q^{-\frac{1}{2}}O_5O_{234} +O_2O_{345} -q^{-\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_4O_{12} -q^{\frac{1}{2}}O_1O_6 -q^{\frac{1}{4}}(q-1)O_{61},\\ %GB element #52 g_{52}=&O_2O_3O_{45}O_{61} -q^{\frac{1}{4}}O_{23}O_{45}O_{61} -q^{-\frac{3}{4}}O_3O_{45}O_{345} -q^{-\frac{1}{4}}O_2O_{61}O_{345} -q^{\frac{1}{2}}O_5^2O_6^2 +q^{-\frac{1}{2}}O_{345}^2 +q^{\frac{5}{4}}O_5O_6O_{56}\\ &+q^{-\frac{3}{2}}O_{45}^2 +q^{\frac{1}{2}}O_{61}^2 -q^{-\frac{1}{2}}(q-1)O_2^2 -q^{\frac{1}{2}}(q-2)O_5^2 +q^{\frac{1}{2}}O_6^2 -q^{-\frac{3}{2}}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #53 g_{53}=&O_1O_6O_{23}O_{61} -qO_3^2O_4O_{234} -q^{-\frac{3}{4}}O_{23}O_{61}^2 +q^{\frac{5}{4}}O_3O_{34}O_{234} -q^{\frac{1}{4}}O_1^2O_{23} +q^{\frac{5}{4}}O_3^2O_{23} -q^{\frac{5}{4}}O_6^2O_{23} -q^{-\frac{1}{4}}O_1O_6O_{45}\\ &+q^{\frac{1}{2}}t^{-\frac{1}{2}}(q+t)O_3O_4O_6 +q^{-1}O_{45}O_{61} +q^{\frac{1}{2}}O_1O_{123} +qO_4O_{234} -q^{\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_6O_{34}-q^{2}O_2O_3\\ &+q^{-\frac{3}{4}}(q-1)^2 (q+1)O_{23},\\ %GB element #54 g_{54}=&O_4O_5O_{23}O_{61} -q^{\frac{3}{4}}O_{23}O_{45}O_{61} -q^{\frac{1}{4}}O_4O_{23}O_{234} -q^{-\frac{1}{4}}O_5O_{61}O_{234} -qO_1^2O_2^2 +O_{234}^2 +q^{-\frac{1}{4}}(q^2+q-1)O_4O_5O_{45}\\ &+q^{2}O_{12}^2 +qO_{23}^2 -(q^2+q-1)O_{45}^2 +q^{-1}(q^2-q+1)O_{61}^2 +2qO_1^2 +2qO_2^2 -(q^2+q-1)O_4^2\\ &-qO_5^2 -q^{-1}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #55 g_{55}=&O_5O_6O_{34}O_{56} -O_2O_3^2O_{234} -q^{\frac{3}{4}}O_{34}O_{56}^2 +q^{\frac{3}{4}}O_3O_{23}O_{234} -q^{\frac{1}{4}}O_5O_6O_{12} +q^{-\frac{5}{4}}O_3^2O_{34} -q^{-\frac{1}{4}}O_5^2O_{34}\\ 
&-q^{-\frac{5}{4}}O_6^2O_{34} +q^{-1}t^{-\frac{1}{2}}(q+t)O_2O_3O_6 +qO_{12}O_{56} -(q-2)O_2O_{234} +q^{-\frac{1}{2}}O_5O_{345} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_6O_{23}\\ &-q^{-\frac{3}{2}}O_3O_4 +q^{-\frac{1}{4}}(q-1)O_{34},\\ %GB element #56 g_{56}=&O_1O_2O_{34}O_{56} -q^{\frac{1}{4}}O_{12}O_{34}O_{56} -q^{-\frac{3}{4}}O_2O_{34}O_{234} -q^{-\frac{1}{4}}O_1O_{56}O_{234} -q^{\frac{1}{2}}O_4^2O_5^2 +q^{-\frac{1}{2}}O_{234}^2 +q^{\frac{5}{4}}O_4O_5O_{45}\\ &+q^{-\frac{3}{2}}O_{34}^2 +q^{\frac{1}{2}}O_{56}^2 -q^{-\frac{1}{2}}(q-1)O_1^2 -q^{\frac{1}{2}}(q-2)O_4^2 +q^{\frac{1}{2}}O_5^2 -q^{-\frac{3}{2}}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #57 g_{57}=&O_5O_6O_{12}O_{56} -q^{\frac{1}{2}}O_2^2O_3O_{123} -q^{-\frac{1}{4}}O_{12}O_{56}^2 +q^{\frac{3}{4}}O_2O_{23}O_{123} +q^{\frac{3}{4}}O_2^2O_{12} -q^{\frac{3}{4}}O_5^2O_{12} -q^{-\frac{1}{4}}O_6^2O_{12}\\ &-q^{-\frac{1}{4}}O_5O_6O_{34} +t^{-\frac{1}{2}}(q+t)O_2O_3O_5 +q^{-\frac{1}{2}}O_{34}O_{56} +q^{\frac{1}{2}}O_3O_{123} +O_6O_{345} -q^{\frac{1}{4}}t^{-\frac{1}{2}}(q+t)O_5O_{23} -q^{\frac{3}{2}}O_1O_2\\ &+q^{-\frac{1}{4}}(q-1)^2O_{12},\\ %GB element #58 g_{58}=&O_3O_4O_{12}O_{56} -q^{\frac{3}{4}}O_{12}O_{34}O_{56} -q^{\frac{1}{4}}O_3O_{12}O_{123} -q^{-\frac{1}{4}}O_4O_{56}O_{123} -q^{-1}O_1^2O_6^2 +O_{123}^2 +q^{-\frac{7}{4}}O_1O_6O_{61} +qO_{12}^2\\ &+q^{-1}(q^2-q+1)O_{56}^2 +q^{-2}(2 q-1)O_1^2 +q^{-1}(q-1)O_4^2 +q^{-1}O_6^2 -q^{-1}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #59 g_{59}=&O_1O_6O_{23}O_{45} -q^{-\frac{1}{4}}O_{23}O_{45}O_{61} -q^{-\frac{1}{4}}O_1O_{23}O_{123} -q^{\frac{1}{4}}O_6O_{45}O_{123} -qO_3^2O_4^2 +O_{123}^2 +q^{\frac{5}{4}}O_1O_6O_{61} +O_{23}^2\\ &+q^{2}O_{34}^2 +q^{-1}(q^2-q+1)O_{45}^2 -qO_{61}^2 -qO_1^2 +2qO_3^2 +2qO_4^2 -(q^2+q-1)O_6^2 -q^{-1}t^{-1}(t+1) \left(q^2+t\right),\\ %GB element #60 g_{60}=&O_4O_5O_{23}O_{45} -O_1O_2^2O_{123} -q^{\frac{3}{4}}O_{23}O_{45}^2 +q^{\frac{3}{4}}O_2O_{12}O_{123} +q^{-\frac{5}{4}}O_2^2O_{23} -q^{-\frac{1}{4}}O_4^2O_{23} -q^{-\frac{5}{4}}O_5^2O_{23} -q^{\frac{1}{4}}O_4O_5O_{61}\\ &+q^{-1}t^{-\frac{1}{2}}(q+t)O_1O_2O_5 +q^{\frac{3}{2}}O_{45}O_{61} -(q-2)O_1O_{123} +q^{-\frac{1}{2}}O_4O_{234} -q^{-\frac{3}{4}}t^{-\frac{1}{2}}(q+t)O_5O_{12} -q^{-\frac{3}{2}}O_2O_3\\ &-q^{-\frac{1}{4}}(q-1)^2O_{23},\\ %GB element #61 g_{61}=&O_5O_6O_{12}O_{34} -q^{\frac{1}{4}}O_{12}O_{34}O_{56} -q^{-\frac{3}{4}}O_6O_{12}O_{345} -q^{-\frac{1}{4}}O_5O_{34}O_{345} -q^{\frac{1}{2}}O_2^2O_3^2 +q^{-\frac{1}{2}}O_{345}^2 +q^{\frac{5}{4}}O_5O_6O_{56}\\ &+q^{-\frac{3}{2}}(q^2-q+1)O_{12}^2 +q^{\frac{3}{2}}O_{23}^2 +q^{-\frac{1}{2}}O_{34}^2 -q^{\frac{3}{2}}O_{56}^2 +2q^{\frac{1}{2}}O_2^2 +2q^{\frac{1}{2}}O_3^2 -q^{-\frac{1}{2}}(q^2+q-1)O_5^2 -q^{\frac{1}{2}}O_6^2\\ &-q^{-\frac{3}{2}}t^{-1}(t+1) \left(q^2+t\right).\end{aligned}$$* *Proof.* Each of the elements above can be expressed as a two-sided linear combination of defining relations $c,\rho_i$ and $\eta_{I,J}$. 
Below we give an example of such a calculation $$\begin{aligned} g_{9}=&O_1O_2O_3 -O_4O_5O_6 -q^{-\frac{1}{4}}O_3O_{12} -q^{\frac{1}{4}}O_1O_{23} +q^{-\frac{1}{4}}O_6O_{45} +q^{\frac{1}{4}}O_4O_{56}\\ =&\frac{t^{\frac{1}{2}}}{q+t}\rho _{14} -\frac{t^{\frac{1}{2}}}{q+t}\rho _{17} -q^{-\frac{1}{2}}O_2\eta _{(3)(1)} -q^{-\frac{1}{4}}\eta _{(2)(1)}O_3 +q^{-\frac{1}{2}}O_5\eta _{(6)(4)} +q^{-\frac{1}{4}}\eta _{(5)(4)}O_6\\ &-q^{-1}(q-1)\eta _{(45)(6)} +q^{-1}(q-1)\eta _{(12)(3)} -\frac{q^{-\frac{1}{2}}t^{\frac{1}{2}}}{q+t}\eta _{(5)(2)} -\frac{q^{\frac{1}{2}}t^{\frac{1}{2}}}{q+t}\eta _{(61)(34)}\\ =&\frac{t^{\frac{1}{2}}}{q+t}\Big(\rho_{14} -\rho_{17}\Big)+\mathbf O(q-1). \end{aligned} \label{eq:g9Decomposition}$$ We omit a complete list of these routine calculations from the text and invite the careful reader to examine our Mathematica program [@Arthamonov-GitHub-Flat], which generates all such decompositions. **Remark 49**. *An observation which is not used in our text, but which is still worth mentioning, is that only a handful of factors can appear in the denominators of formulas like ([\[eq:g9Decomposition\]](#eq:g9Decomposition){reference-type="ref" reference="eq:g9Decomposition"}). The decomposition we construct in our Mathematica program [@Arthamonov-GitHub-Flat] turns out to have the following least common multiple of denominators $$\Lambda(q,t)=q^{\frac{3}{4}}t^{\frac{1}{2}}\Big(q^{\frac{1}{4}}+1\Big) \Big(q^{\frac{1}{2}}+1\Big) \Big(q^{\frac{1}{2}}-q^{\frac{1}{4}}+1\Big) \Big(q^{\frac{1}{2}}+q^{\frac{1}{4}}+1\Big) q^4 (q+1) \Big(q-q^{\frac{1}{2}}+1\Big) \Big(q^2-q+1\Big) (t+1)^2 (q+t)^2.$$*  ◻ [^1]: A realization of this algorithm in Mathematica can be found at [@Arthamonov-GitHub-Flat]. [^2]: We use $q^{\frac14}$ and $t^{\frac14}$ instead of introducing new variables $Q=q^{\frac14}$ and $T=t^{\frac14}$ for consistency with earlier papers on Macdonald Theory and Double Affine Hecke Algebras. This simplifies the comparison of familiar formulas and has become standard in the literature on the subject. [^3]: This is closely related to the fact that the hyperelliptic involution $H$, although being a nontrivial mapping class, acts trivially on the $SL(2,\mathbb C)$-character variety. As we can see, this property also translates to the quantized and deformed $SL(2,\mathbb C)$-character variety. [^4]: Note that cycles $c_1,c_2,c_3,c_4,c_5$ on Figure [\[fig:GenusTwoPiGenerators\]](#fig:GenusTwoPiGenerators){reference-type="ref" reference="fig:GenusTwoPiGenerators"} correspond to cycles $A_1,B_{12},A_2,B_{23},$ and $A_3$ on Figure [1](#fig:qDiffGeneratingCycles){reference-type="ref" reference="fig:qDiffGeneratingCycles"} respectively. So our notation for the five generating Dehn twists is consistent with Theorem [\[th:MCGActionGeneric\]](#th:MCGActionGeneric){reference-type="ref" reference="th:MCGActionGeneric"} and Proposition [\[prop:MCGActionAq1t\]](#prop:MCGActionAq1t){reference-type="ref" reference="prop:MCGActionAq1t"}.
arxiv_math
{ "id": "2309.01011", "title": "Classical Limit of genus two DAHA", "authors": "S. Arthamonov", "categories": "math.QA math-ph math.MP math.RA", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: |   In this paper, we first introduce a sufficient condition for a rational unknotted Reeb orbit $\gamma$ in a lens space to be elliptic by using the rational self-linking number $sl_{\xi}^{\mathbb{Q}}(\gamma)$ and the Conley-Zehnder index $\mu_{\mathrm{disk}}(\gamma^{p})$, where $\mu_{\mathrm{disk}}$ is the Conley-Zehnder index with respect to a trivialization induced by a binding disk. As a consequence, we show that a periodic orbit $\gamma$ in dynamically convex $L(p,1)$ must be elliptic if $\gamma^{p}$ binds a Birkhoff section of disk type and has $\mu_{\mathrm{disk}}(\gamma^{p})=3$. It was proven in [@Sch] that such an orbit always exists in a dynamically convex $L(p,1)$. Next, we estimate the first ECH spectrum on dynamically convex $L(3,1)$. In particular, we show that the first ECH spectrum on a strictly convex (or non-degenerate dynamically convex) $(L(3,1),\lambda)$ is equal to the infimum of contact areas of certain Birkhoff sections of disk type. The key to the argument is to conduct technical computations regarding indices present in ECH and to observe the topological properties of rational open book decompositions supporting $(L(3,1),\xi_{\mathrm{std}})$ coming from $J$-holomorphic curves. address: Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, JAPAN. author: - Taisuke SHIBATA title: Elliptic bindings and the first ECH spectrum for convex Reeb flows on lens spaces --- # Introduction and main results ## Introduction Let $(Y,\lambda)$ be an oriented contact 3-manifold with $\lambda \wedge d\lambda>0$. Let $X_{\lambda}$ be the Reeb vector field. A periodic orbit is a map $\gamma:\mathbb{R}/T_{\gamma}\mathbb{Z}\to Y$ with $\Dot{\gamma}=X_{\lambda}\circ \gamma$ for some $T_{\gamma}>0$, where $T_{\gamma}$ is the period of $\gamma$, and we write $\gamma^{p}$ for $p\in \mathbb{Z}$ for the periodic orbit obtained by composing $\gamma$ with the natural projection $\mathbb{R}/pT_{\gamma}\mathbb{Z}\to \mathbb{R}/T_{\gamma}\mathbb{Z}$. Consider a periodic orbit $\gamma$. If the eigenvalues of the return map $d\phi^{T_{\gamma}}|_{\xi}:\xi_{\gamma(0)}\to\xi_{\gamma(0)}$ are positive (resp. negative) real numbers, $\gamma$ is called positive (resp. negative) hyperbolic. If the eigenvalues of the return map $d\phi^{T_{\gamma}}|_{\xi}:\xi_{\gamma(0)}\to\xi_{\gamma(0)}$ are on the unit circle in $\mathbb{C}$, $\gamma$ is called elliptic. Whether there exists an elliptic periodic orbit on a given contact manifold has been studied. In particular, it is a long-standing conjecture that a convex energy hypersurface in the standard symplectic Euclidean space carries an elliptic orbit, and there are many previous studies under some additional assumptions (cf. [@AbMa1], [@AbMa2], [@DDE], [@HuWa], [@LoZ]). As one such setting, it is natural to consider the quotient of a convex energy hypersurface by a cyclic group action, which is a lens space ([@AbMa1], [@AbMa2]). In this case, the result depends on the lens space. Our first result is to introduce a sufficient condition for a given rational unknotted Reeb orbit $\gamma$ in $L(p,q)$ to be elliptic by using the rational self-linking number $sl_{\xi}^{\mathbb{Q}}(\gamma)$ and the Conley-Zehnder index $\mu_{\mathrm{disk}}(\gamma^{p})$. The sufficient condition is especially useful under dynamical convexity, a notion introduced in [@HWZ2].
Recall that a contact 3-manifold $(Y,\lambda)$ with $c_{1}(\xi)|_{\pi_{2}(Y)}=0$ is dynamically convex if any contractible periodic orbit $\gamma$ satisfies $\mu_{\mathrm{disk}}(\gamma) \geq 3$. In particular, for any strictly convex domain with smooth boundary $0\in S \subset \mathbb{R}^{4}=\mathbb{C}^{2}$, the contact $3$-sphere $(\partial S, \lambda_{0}|_{\partial S})$ is dynamically convex, where $\lambda_{0}=\frac{i}{2}\sum_{1\leq j \leq 2}(z_{j}d\Bar{z_{j}}-\Bar{z_{j}}dz_{j})$. In addition, if $(Y,\lambda)$ is a dynamically convex contact 3-sphere, the contact structure must be tight. Since dynamical convexity is preserved under taking a finite cover, the contact structure of a dynamically convex contact lens space must be universally tight. See [@HWZ1; @HWZ2]. Besides the existence of elliptic orbits, there is a problem of the existence of Birkhoff sections of disk type. A Birkhoff section of disk type for $X_{\lambda}$ on $(Y,\lambda)$ is a compact immersed disk $u:\mathbb{D}\to Y$ satisfying: (1) $u(\mathbb{D}\backslash \partial \mathbb{D})\subset Y\backslash u(\partial \mathbb{D})$ is embedded; (2) $X_{\lambda}$ is transversal to $u(\mathbb{D}\backslash \partial \mathbb{D})$; (3) $u(\partial \mathbb{D})$ is tangent to a periodic orbit of $X_{\lambda}$; (4) for every $x\in Y\backslash u(\partial \mathbb{D})$, there are $-\infty<t_{x}^{-}<0<t_{x}^{+}<+\infty$ such that $\phi^{t^{\pm}_{x}}(x)\in u(\mathbb{D})$, where $\phi^{t}$ is the flow of $X_{\lambda}$. Whether a dynamically convex lens space $(L(p,q),\lambda)$ has a Birkhoff section of disk type is partially known. First of all, it was proved by Hofer, Wysocki and Zehnder [@HWZ2] that any dynamically convex $(S^{3},\lambda)$ admits a periodic orbit $\gamma$ such that $\gamma$ binds a Birkhoff section of disk type and $\mu_{\mathrm{disk}}(\gamma)=3$. Recently, Hryniewicz and Salomão [@HrS] showed that any dynamically convex $(L(2,1),\lambda)$ admits a periodic orbit $\gamma$ which binds a Birkhoff section of disk type and $\mu_{\mathrm{disk}}(\gamma^{2})=3$. After that, Schneider [@Sch] generalized it to $(L(p,1),\xi_{\mathrm{std}})$. That is, he showed that any dynamically convex $(L(p,1),\lambda)$ with $\lambda\wedge d\lambda > 0$ admits a periodic orbit $\gamma$ such that $\gamma^p$ binds a Birkhoff section of disk type and $\mu_{\mathrm{disk}}(\gamma^{p})=3$. On the other hand, the author [@Shi] showed, by using embedded contact homology, that a non-degenerate dynamically convex $(L(p,p-1),\lambda)$ with $\lambda\wedge d\lambda>0$ must have an orbit $\gamma$ binding a Birkhoff section of disk type. It is natural to ask whether a periodic orbit binding a Birkhoff section is elliptic. Our sufficient condition implies the ellipticity of the binding orbit $\gamma$ of a Birkhoff section of disk type on $L(p,1)$ with $\mu_{\mathrm{disk}}(\gamma^{p})=3$, which was found in [@Sch]. To explain the results, we recall some notions. Let $\mathbb{D}$ denote the closed unit disk. **Definition 1**. *A knot $K\subset Y^3$ is called $p$-unknotted if there exists an immersion $u:\mathbb{D}\to Y$ such that $u(\ring{\mathbb{D}})\subset Y\backslash u(\partial\mathbb{D})$ is embedded and $u|_{\partial \mathbb{D}}:\partial \mathbb{D}\to K$ is a $p$-covering map.* **Remark 2**. Let $K\subset Y$ be a $p$-unknotted knot and $u:\mathbb{D}\to Y$ be an immersed disk as above. Then the union of a neighborhood of $u(\mathbb{D})$ and $K$ is diffeomorphic to $L(p,k)\backslash B^{3}$ for some $k$, where $B^{3}$ is the 3-ball.
Therefore, $Y$ is $L(p,k)\# M$ for some $k$ and a closed 3-manifold $M$. In particular, if a lens space admits a $p$-unknotted knot, then it is diffeomorphic to $L(p,k)$ for some $k$ [@BE cf. Section 5]. **Definition 3**. *[@BE cf. Subsection 1.1] Let $(Y,\lambda)$ be a contact 3-manifold with $\mathrm{Ker}\lambda=\xi$. Assume that a knot $K\subset Y$ is $p$-unknotted, transversal to $\xi$ and oriented by the co-orientation of $\xi$. Let $u:\mathbb{D}\to Y$ be an immersion such that $u|_{\mathrm{int}(\mathbb{D})}$ is embedded and $u|_{\partial \mathbb{D}}:\partial \mathbb{D}\to K$ is an orientation-preserving $p$-covering map. Take a non-vanishing section $Z:\mathbb{D}\to u^{*}\xi$ and consider the immersion $\gamma_{\epsilon}:t\in \mathbb{R}/\mathbb{Z} \to \mathrm{exp}_{u(e^{2\pi i t})}(\epsilon Z(u(e^{2\pi i t})))\in Y\backslash K$ for small $\epsilon>0$.* *Define the rational self-linking number $sl_{\xi}^{\mathbb{Q}}(K,u)\in \mathbb{Q}$ as $$sl_{\xi}^{\mathbb{Q}}(K,u)=\frac{1}{p^{2}} \#(\mathrm{Im}\gamma_{\epsilon}\cap u(\mathbb{D}))$$ where $\#$ counts the intersection number algebraically. If $c_{1}(\xi)|_{\pi_{2}(Y)}=0$, $sl_{\xi}^{\mathbb{Q}}(K,u)$ is independent of $u$. Hence we write $sl_{\xi}^{\mathbb{Q}}(K)$.* **Remark 4**. In general, the (rational) self-linking number is defined for rationally null-homologous knots by using a (rational) Seifert surface. See [@BE]. Let $(Y,\lambda)$ be a contact 3-manifold with $c_{1}(\xi)|_{\pi_{2}(Y)}=0$. Assume that a Reeb orbit $\gamma:\mathbb{R}/T_{\gamma}\mathbb{Z}\to Y$ is contractible. Take a map $u:\mathbb{D}\to Y$ so that $u(e^{2\pi i t})=\gamma(T_{\gamma}t)$. Let $\tau_{\mathrm{disk}}:\gamma^{*}\xi\to \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^{2}$ denote a symplectic trivialization which extends to a trivialization over $u^{*}\xi$. We write the Conley-Zehnder index (see the next section) $\mu_{\tau_{\mathrm{disk}}}(\gamma)$ as $\mu_{\mathrm{disk}}(\gamma)$ if there is no confusion. ## Ellipticity of an unknotted orbit We fix an orientation on a lens space $L(p,q)$ as follows. Let $p\geq q>0$ be relatively prime. Consider the unit 4-ball $B^{4}(1)\subset \mathbb{R}^{4}=\mathbb{C}^{2}$. Then the boundary $\partial B^{4}(1)$ has an orientation induced by $B^{4}(1)$. The action $(z_{1},z_{2})\mapsto (e^{\frac{2\pi i}{p}}z_{1},e^{\frac{2\pi iq}{p}}z_{2})$ preserves $\partial B^{4}(1)$. Hence we have $L(p,q)$ as the quotient space. From now on, we assume that $L(p,q)$ is oriented by $\partial B^{4}(1)$. Our first result is as follows.
If a periodic orbit $\gamma$ is hyperbolic, we have $\mu_{\tau_{\mathrm{glob}}}(\gamma^2)=2\mu_{\tau_{\mathrm{glob}}}(\gamma)$ (cf. Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"}). On the other hand, since $\gamma^{2}$ is contractible, it follows from the definition that $\mu_{\tau_\mathrm{glob}}(\gamma^2)=\mu_{\mathrm{disk}}(\gamma^2)$. This means that $2\mu_{\tau_{\mathrm{glob}}}(\gamma)=\mu_{\tau_\mathrm{glob}}(\gamma^2)=\mu_{\mathrm{disk}}(\gamma^2)=3$. This contradicts $\mu_{\tau_{\mathrm{glob}}}(\gamma)\in \mathbb{Z}$. In general, the above argument cannot be applied to $(L(p,1),\lambda)$ with $\lambda\wedge d\lambda>0$, unlike $L(2,1)$, because the universally tight contact structure on $L(p,1)$ is not topologically trivial for $p>2$. But it follows from Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} that the same result holds for $L(p,1)$ as follows. **Corollary 6**. *Let $(L(p,1),\lambda)$ with $\lambda\wedge d\lambda>0$ be dynamically convex. Let $\gamma$ be a periodic orbit such that $\gamma^p$ binds a Birkhoff section of disk type and $\mu_{\mathrm{disk}}(\gamma^{p})=3$. Then $\gamma$ is elliptic. Note that according to [@Sch], such a periodic orbit always exists.* **Remark 7**. It is proved in [@AbMa1] that a dynamically convex $(L(p,1),\lambda)$ admits an elliptic orbit. To apply Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}, the following theorem proved by Hryniewicz and Salomão is important. **Theorem 8**. *[@HrS][\[fundament\]]{#fundament label="fundament"} Let $\lambda$ be a dynamically convex contact form on $L(p,q)$. Then for a simple orbit $\gamma$, $\gamma$ is $p$-unknotted and $sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}$ if and only if $\gamma^{p}$ bounds a disk which is a Birkhoff section. Moreover, this Birkhoff section is a page of a rational open book decomposition of $L(p,q)$ such that all pages are Birkhoff sections.* ***Proof of Corollary [Corollary 6](#main;cor){reference-type="ref" reference="main;cor"}**.* We apply Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} to $L(p,1)$. In this case, $q=1$ and hence it suffices to consider $r\in \mathbb{Z}$ satisfying $r=-1\,\,\mathrm{mod}\,\,p$. Since $r=-1\,\,\mathrm{mod}\,\,p$, $\mu_{\mathrm{disk}}(\gamma^{p})=3$ and $sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}$ (Theorem [\[fundament\]](#fundament){reference-type="ref" reference="fundament"}), we have $-2r-2p\cdot sl_{\xi}^{\mathbb{Q}}(\gamma)-\mu_{\mathrm{disk}}(\gamma^{p})=1 \,\mathrm{mod}\,\,p$. This means that $\gamma$ is elliptic. ◻ As a generalization, it is natural to ask the following question. **Question 9**. *Let $(L(p,q),\lambda)$ (including $S^{3}$ as $p=1,\,q=0$) be dynamically convex. Does there always exist a periodic orbit $\gamma$ such that $\gamma^{p}$ binds a Birkhoff section of disk type?* Note that the author does not know whether the periodic orbits obtained in [@Shi] for dynamically convex $(L(p,p-1),\lambda)$ with $\lambda\wedge d\lambda>0$ are elliptic. ## The first ECH spectrum and Birkhoff section Next, we focus on the relationship between Birkhoff sections of disk type and the first ECH spectrum. For $(L(p,q),\lambda)$, define $$\mathcal{S}_{p}(L(p,q),\lambda):=\{\gamma \mathrm{\,\,simple\,\,orbit\,\,of}\,\,(L(p,q),\lambda)|\,\,p\mathrm{-unknotted},\,\,sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}\,\}.$$ We write $\mathcal{S}_{p}$ instead of $\mathcal{S}_{p}(L(p,q),\lambda)$ if there is no confusion.
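To make the arithmetic in the proof of Corollary [Corollary 6](#main;cor){reference-type="ref" reference="main;cor"} concrete, the following Python sketch (ours, not part of the paper; the function name and the range of $p$ are illustrative choices) checks the divisibility criterion of Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} for the data relevant to $L(p,1)$, namely $q=1$, $sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}$ and $\mu_{\mathrm{disk}}(\gamma^{p})=3$.

```python
# A minimal numerical check (ours) of the criterion in Theorem 5: an orbit is
# forced to be elliptic when -2r - 2p*sl - mu is not divisible by p for every
# admissible r.  Since divisibility by p only depends on r mod p, it suffices
# to test residues r in {0, ..., p-1}.
from fractions import Fraction

def criterion_holds(p, q, sl, mu):
    """True if -2r - 2p*sl - mu is not divisible by p for all r with
    r = -q (mod p) or r*q = -1 (mod p)."""
    admissible = [r for r in range(p) if (r + q) % p == 0 or (r * q + 1) % p == 0]
    return all((Fraction(-2 * r) - 2 * p * Fraction(sl) - mu) % p != 0
               for r in admissible)

# Corollary 6 situation on L(p,1): q = 1, sl = -1/p, mu_disk(gamma^p) = 3.
for p in range(2, 12):
    assert criterion_holds(p, 1, Fraction(-1, p), 3)
print("Theorem 5 applies on L(p,1) for 2 <= p <= 11")
```

The assertion reflects the computation $-2r-2p\cdot sl_{\xi}^{\mathbb{Q}}(\gamma)-\mu_{\mathrm{disk}}(\gamma^{p})=-2r-1=1\,\,\mathrm{mod}\,\,p$ carried out in the proof above.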
As mentioned in Theorem [\[fundament\]](#fundament){reference-type="ref" reference="fundament"}, on a dynamically convex $(L(p,q),\lambda)$ a periodic orbit $\gamma^p$ binds a Birkhoff section of disk type if and only if $\gamma \in \mathcal{S}_{p}(L(p,q),\lambda)$. Recently, it has been gradually understood that there is a connection between Birkhoff sections and the ECH spectrum. For instance, according to [@HrHuRa], if $(S^{3},\lambda)$ is dynamically convex, the first ECH spectrum $c_{1}^{\mathrm{ECH}}(S^{3},\lambda)$ is equal to the infimum of the actions of periodic orbits in $\mathcal{S}_{1}(S^3,\lambda)$. In [@Shi], it is shown that if $(L(2,1),\lambda)$ is strictly convex (or non-degenerate dynamically convex), then half of the first ECH spectrum $\frac{1}{2}c_{1}^{\mathrm{ECH}}(L(2,1),\lambda)$ is equal to the infimum of the actions of periodic orbits $\gamma$ in $\mathcal{S}_{2}(L(2,1),\lambda)$ with $\mu_{\tau_{\mathrm{glob}}}(\gamma)=1$. Here we note that $(L(p,q),\lambda)$ is called strictly convex if $(L(p,q),\lambda)$ is obtained as a quotient space of $(\partial S, \lambda_{0}|_{\partial S})$ by the action $(z_{1},z_{2})\mapsto (e^{\frac{2\pi i}{p}}z_{1},e^{\frac{2\pi iq}{p}}z_{2})$, where $0\in S \subset \mathbb{R}^{4}=\mathbb{C}^{2}$ is a strictly convex domain which is invariant under the action. The following is our second main theorem. **Theorem 10**. *Let $(L(3,1),\lambda)$ be a strictly convex (or non-degenerate dynamically convex) contact 3-manifold. Then $$\frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)=\inf_{\gamma\in \mathcal{S}_{3},\,\,\mu_{\mathrm{disk}}(\gamma^3)=3}\,\,\int_{\gamma}\lambda.$$* Note that our first result, Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}, plays an important role in the proof of the above. ## Idea and outline of this paper First, we prove Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}. The idea is as follows. Let $\gamma$ be an unknotted orbit in a lens space. Taking a tubular neighborhood, we obtain a Heegaard splitting of genus 1. Consider the twist of the gluing map and compare the trivializations of the contact structure over the solid torus and a binding disk. By combining them with the properties of the Conley-Zehnder index, we obtain Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}. The observation in the proof is also essential in the following proof of Theorem [Theorem 10](#maintheorem){reference-type="ref" reference="maintheorem"}. Next, we consider Theorem [Theorem 10](#maintheorem){reference-type="ref" reference="maintheorem"}. The estimate $\inf_{\gamma\in \mathcal{S}_{3},\,\,\mu_{\mathrm{disk}}(\gamma^3)=3}\,\,\int_{\gamma}\lambda \leq \frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$ is the hard part of the proof. First, we conduct technical computations regarding the indices present in ECH to clarify which holomorphic curves are counted by the $U$-map to the empty set. As a result, it follows that any moduli space of holomorphic curves counted by the $U$-map gives a structure of a rational open book decomposition (Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} and Lemma [Lemma 39](#lem:openbook){reference-type="ref" reference="lem:openbook"}). Next, we consider the rational open book decompositions. By considering which of them actually support $(L(3,1),\xi_{\mathrm{std}})$, we can narrow down the list of holomorphic curves.
Then it turns out that any binding $\gamma$ of a rational open book decomposition coming from the $U$-map is in $\mathcal{S}_{3}$ and satisfies $\mu_{\mathrm{disk}}(\gamma^{3})=3$, and in addition we can choose it so that $\int_{\gamma}\lambda \leq \frac{1}{3} c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$ (Proposition [Proposition 35](#prp;main){reference-type="ref" reference="prp;main"}). In this manner, we obtain the theorem. The estimate $\frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)\leq \inf_{\gamma\in \mathcal{S}_{3},\,\,\mu_{\mathrm{disk}}(\gamma^3)=3}\,\,\int_{\gamma}\lambda$ is almost the same as in [@Shi], except that we need to use Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} and Theorem [\[fundament\]](#fundament){reference-type="ref" reference="fundament"} (i.e. a periodic orbit $\gamma$ in $\mathcal{S}_{3}$ must be elliptic if $\mu_{\mathrm{disk}}(\gamma^{3})=3$). ## Acknowledgement {#acknowledgement .unnumbered} The author would like to thank his advisor Professor Kaoru Ono for his encouragement and support, Umberto Hryniewicz for conversations via e-mail, and A. Schneider for some comments. The author also would like to thank Takahiro Oba for sharing his knowledge of contact topology. This work was supported by JSPS KAKENHI Grant Number JP21J20300. # Preliminaries ## Conley-Zehnder index In this subsection, we recall the Conley-Zehnder index and its properties introduced in [@HWZ2]. The contents of this subsection are based on [@HWZ2; @HWZ4] and are almost the same as what is summarized in [@Shi]. **Definition 11**. *For a smooth path $\varphi:\mathbb{R}\to Sp(1)$ of 2-dimensional symplectic matrices with $\varphi(0)=id$ and $\varphi(t+1)=\varphi(t)\varphi(1)$ for any $t\in \mathbb{R}$, the Conley-Zehnder index $\mu_{CZ}(\varphi)\in \mathbb{Z}$ is defined. In particular, $\mu_{CZ}$ is lower semi-continuous.* For $k\in \mathbb{Z}_{>0}$, define $\rho_{k}:\mathbb{R} \to \mathbb{R}$ as $t \mapsto kt$. The next proposition holds. **Proposition 12**. *Consider a smooth path $\varphi:\mathbb{R}\to Sp(1)$ of symplectic matrices with $\varphi(0)=id$ and $\varphi(t+1)=\varphi(t)\varphi(1)$ for any $t\in \mathbb{R}$.* *If $\mu_{CZ}(\varphi)=2n$ for $n\in \mathbb{Z}$, then $\mu_{CZ}(\varphi\circ \rho_{k})=2kn$ for every $k\in \mathbb{Z}_{>0}$.* *If $\mu_{CZ}(\varphi)\geq3$, then $\mu_{CZ}(\varphi\circ \rho_{k})\geq 2k+1$ for every $k\in \mathbb{Z}_{>0}$.* Consider a periodic orbit $\gamma:\mathbb{R}/T_{\gamma}\mathbb{Z}\to Y$ of $(Y,\lambda)$ and a symplectic trivialization $\tau:\gamma^{*}\xi \to \mathbb{R}/T_{\gamma}\mathbb{Z} \times \mathbb{C}$. Then we have a symplectic path $\mathbb{R}\ni t\mapsto \phi_{\gamma,\tau}(t):=\tau(\gamma(T_{\gamma}t)) \circ d\phi^{tT_{\gamma}}|_{\xi}\circ \tau^{-1}(\gamma(0))$ which satisfies $\phi_{\gamma,\tau}(t+1)=\phi_{\gamma,\tau}(t)\phi_{\gamma,\tau}(1)$ for any $t\in \mathbb{R}$. Now, we define the Conley-Zehnder index of $\gamma$ with respect to a trivialization $\tau$ as $$\mu_{\tau}(\gamma):=\mu_{CZ}(\phi_{\gamma,\tau}).$$ Note that $\mu_{\tau}$ is independent of the choice of a trivialization in the same homotopy class of $\tau$. Let $\mathcal{P}(\gamma)$ denote the set of homotopy classes of symplectic trivializations $\gamma^{*}\xi\to \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^2$.
Note that a symplectic trivialization $\tau \in \mathcal{P}(\gamma)$ naturally induces a symplectic trivialization on $(\gamma^{p})^{*}\xi$, and we use the same notation $\tau\in \mathcal{P}(\gamma^{p})$ for the induced trivialization on $(\gamma^{p})^{*}\xi$ if there is no confusion. For each $\tau \in \mathcal{P}(\gamma)$, take sections $Z_{\tau},W_{\tau}:\mathbb{R}/T_{\gamma}\mathbb{Z}\to \gamma^{*}\xi$ so that the map $\gamma^{*}\xi \ni aZ_{\tau}+bW_{\tau}\mapsto a+ib\in \mathbb{C}$ gives a symplectic trivialization of homotopy class $\tau$. For $\tau,\tau'\in \mathcal{P}(\gamma)$, we define $\mathrm{wind}(\tau,\tau')\in \mathbb{Z}$ as follows. Let $a_{\tau,\tau'},b_{\tau,\tau'}:\mathbb{R}/T_{\gamma}\mathbb{Z}\to \mathbb{R}$ be continuous functions such that $Z_{\tau}(t)=a_{\tau,\tau'}(t)Z_{\tau'}(t)+b_{\tau,\tau'}(t)W_{\tau'}(t)$ for $t\in \mathbb{R}/T_{\gamma}\mathbb{Z}$. Let $\theta:[0,T_{\gamma}]\to \mathbb{R}$ be a continuous function so that $a_{\tau,\tau'}(t)+ib_{\tau,\tau'}(t)\in \mathbb{R}_{+}e^{i\theta(t)}$ for $t\in [0,T_{\gamma}]$. Then we define $$\mathrm{wind}(\tau,\tau'):=\frac{\theta(T_{\gamma})-\theta(0)}{2\pi}\in \mathbb{Z}.$$ It is obvious that $\mathrm{wind}(\tau,\tau')$ is independent of the choices of $Z_{\tau},W_{\tau}$ and $Z_{\tau'},W_{\tau'}$. The following property is well-known and important. **Proposition 13**. *Let $\gamma$ be a periodic orbit in $(Y,\lambda)$. Then for any $\tau,\tau' \in \mathcal{P}(\gamma)$, $\mu_{\tau}(\gamma)+2\mathrm{wind}(\tau,\tau')=\mu_{\tau'}(\gamma)$.* If $\gamma^{n}$ is non-degenerate for every $n \in \mathbb{Z}_{>0}$, it is also well-known that the Conley-Zehnder index behaves as follows. **Proposition 14**. *Let $\gamma$ be an orbit such that $\gamma^{n}$ is non-degenerate for every $n\in \mathbb{Z}_{>0}$. Fix a trivialization $\tau$ of the contact plane over $\gamma$. Consider the Conley-Zehnder indices of the multiple covers with respect to $\tau$. Write $\mu_{\tau}(\gamma^{n}):=\mu_{CZ}(\phi_{\gamma,\tau}\circ \rho_{n})$.* *If $\gamma$ is hyperbolic, $\mu_{\tau}(\gamma^{n})=n\mu_{\tau}(\gamma)$ for every $n\in \mathbb{Z}_{>0}$.* *If $\gamma$ is elliptic, there is $\theta \in \mathbb{R}\backslash \mathbb{Q}$ such that $\mu_{\tau}(\gamma^{n})=2 \lfloor n\theta \rfloor +1$ for every $n \in \mathbb{Z}_{>0}$.* *We call $\theta$ the monodromy angle of $\gamma$.* For more properties of the Conley-Zehnder index, see [@HWZ2; @HWZ4]. ## The construction and properties of ECH {#echdef} Here, we list the basic construction and properties of ECH. The content of this subsection is based on [@H1], [@H2], [@H3] and is almost the same as what is summarized in [@Shi]. Let $(Y,\lambda)$ be a non-degenerate contact three-manifold. For $\Gamma \in H_{1}(Y;\mathbb{Z})$, the embedded contact homology $\mathrm{ECH}(Y,\lambda,\Gamma)$ is defined. First, we define the chain complex $(\mathrm{ECC}(Y,\lambda,\Gamma),\partial)$. In this paper, we consider ECH over $\mathbb{Z}/2\mathbb{Z}=\mathbb{F}$. **Definition 15** ([@H1 Definition 1.1]). *An orbit set $\alpha=\{(\alpha_{i},m_{i})\}$ is a finite set of pairs of distinct simple periodic orbits $\alpha_{i}$ with positive integers $m_{i}$. $\alpha=\{(\alpha_{i},m_{i})\}$ is called an ECH generator if $m_{i}=1$ whenever $\alpha_{i}$ is a hyperbolic orbit.* Define $[\alpha]=\sum m_{i}[\alpha_{i}] \in H_{1}(Y)$.
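The formulas in Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"} are easy to experiment with. The following short Python sketch (ours, purely illustrative; the monodromy angle $\theta=\sqrt{2}$ is an arbitrary generic choice) computes the Conley-Zehnder indices of iterates for a hyperbolic and an elliptic orbit.

```python
import math

def cz_hyperbolic(mu, n):
    """mu_tau(gamma^n) = n * mu_tau(gamma) for a hyperbolic orbit."""
    return n * mu

def cz_elliptic(theta, n):
    """mu_tau(gamma^n) = 2*floor(n*theta) + 1 for an elliptic orbit
    with monodromy angle theta."""
    return 2 * math.floor(n * theta) + 1

theta = math.sqrt(2)  # sample irrational monodromy angle (illustrative choice)
print([cz_hyperbolic(1, n) for n in range(1, 6)])    # [1, 2, 3, 4, 5]
print([cz_elliptic(theta, n) for n in range(1, 6)])  # [3, 5, 9, 11, 15]
```

In contrast to the hyperbolic case, the indices of elliptic iterates follow a floor-function pattern governed by $\theta$ rather than growing linearly.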
For two orbit sets $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ with $[\alpha]=[\beta]$, we define $H_{2}(Y,\alpha,\beta)$ to be the set of relative homology classes of 2-chains $Z$ in $Y$ with $\partial Z =\sum_{i}m_{i}\alpha_{i}-\sum_{j}n_{j}\beta_{j}$. This is an affine space over $H_{2}(Y)$. From now on, we fix a trivialization $\tau_{\gamma} \in \mathcal{P}(\gamma)$ of the contact plane $\xi$ on each simple orbit $\gamma$. Let $\tau:=\{ \tau_{\gamma} \}_{\gamma}$. **Definition 16** ([@H1 Definition 1.5]). *For $Z\in H_{2}(Y,\alpha,\beta)$, we define $$I(\alpha,\beta,Z):=c_{1}(\xi|_{Z},\tau)+Q_{\tau}(Z)+\sum_{i}\sum_{k=1}^{m_{i}}\mu_{\tau}(\alpha_{i}^{k})-\sum_{j}\sum_{k=1}^{n_{j}}\mu_{\tau}(\beta_{j}^{k}).$$ We call $I(\alpha,\beta,Z)$ an ECH index. Here, $\mu_{\tau}$ is the Conley-Zehnder index with respect to $\tau$, $c_{1}(\xi|_{Z},\tau)$ is a relative Chern number and $Q_{\tau}(Z)=Q_{\tau}(Z,Z)$. Moreover, this is independent of $\tau$ (see [@H1] for more details).* For $\Gamma \in H_{1}(Y)$, we define $$\mathrm{ECC}(Y,\lambda,\Gamma):= \bigoplus_{\alpha:\mathrm{ECH\,\,generator\,\,with\,\,}{[\alpha]=\Gamma}}\mathbb{F}\cdot \alpha.$$ The right-hand side is the module over $\mathbb{F}$ freely generated by ECH generators $\alpha$ such that $[\alpha]=\Gamma$. To define the differential $\partial:\mathrm{ECC}(Y,\lambda,\Gamma)\to \mathrm{ECC}(Y,\lambda,\Gamma)$, we fix a generic almost complex structure $J$ on $\mathbb{R}\times Y$ which is $\mathbb{R}$-invariant and satisfies $J(\frac{d}{ds})=X_{\lambda}$, $J\xi=\xi$ and $d\lambda(\cdot,J\cdot)>0$. We call such an almost complex structure $J$ admissible. We consider $J$-holomorphic curves $u:(\Sigma,j)\to (\mathbb{R}\times Y,J)$ where the domain $(\Sigma, j)$ is a punctured compact Riemann surface. Here the domain $\Sigma$ is not necessarily connected. Let $\gamma$ be a (not necessarily simple) Reeb orbit. If a puncture of $u$ is asymptotic to $\mathbb{R}\times \gamma$ as $s\to \infty$, we call it a positive end of $u$ at $\gamma$, and if a puncture of $u$ is asymptotic to $\mathbb{R}\times \gamma$ as $s\to -\infty$, we call it a negative end of $u$ at $\gamma$ (see [@H1] for more details). Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ be orbit sets. Let $\mathcal{M}^{J}(\alpha,\beta)$ denote the set of $J$-holomorphic curves with positive ends at covers of $\alpha_{i}$ with total covering multiplicity $m_{i}$, negative ends at covers of $\beta_{j}$ with total covering multiplicity $n_{j}$, and no other punctures. Moreover, in $\mathcal{M}^{J}(\alpha,\beta)$, we consider two $J$-holomorphic curves to be equivalent if they represent the same current in $\mathbb{R}\times Y$. We sometimes consider an element in $\mathcal{M}^{J}(\alpha,\beta)$ as the image in $\mathbb{R}\times Y$. For $u \in \mathcal{M}^{J}(\alpha,\beta)$, we naturally have $[u]\in H_{2}(Y;\alpha,\beta)$ and set $I(u)=I(\alpha,\beta,[u])$. Moreover we define $$\mathcal{M}_{k}^{J}(\alpha,\beta):=\{\,u\in \mathcal{M}^{J}(\alpha,\beta)\,|\,I(u)=k\,\,\}.$$ Under these notations, we define $\partial_{J}:\mathrm{ECC}(Y,\lambda,\Gamma)\to \mathrm{ECC}(Y,\lambda,\Gamma)$ as $$\partial_{J} \alpha =\sum_{\beta:\mathrm{\,\,ECH\,\,generator\,\,with\,\,}[\beta]=\Gamma} \# (\mathcal{M}_{1}^{J}(\alpha,\beta)/\mathbb{R})\cdot \beta.$$ Note that the above counting is well-defined and $\partial_{J} \circ \partial_{J}=0$ (see [@H1; @HT1; @HT2], Proposition [Proposition 18](#indexproperties){reference-type="ref" reference="indexproperties"}).
Moreover, the homology defined by $\partial_{J}$ does not depend on $J$, and if $\mathrm{Ker}\lambda=\mathrm{Ker}\lambda'$ for non-degenerate $\lambda,\lambda'$, there is a natural isomorphism between $\mathrm{ECH}(Y,\lambda,\Gamma)$ and $\mathrm{ECH}(Y,\lambda',\Gamma)$. Indeed, it is proved in [@T1] that there is a natural isomorphism between ECH and a version of Monopole Floer homology defined in [@KM]. Next, we recall the (Fredholm) index. For $u\in \mathcal{M}^{J}(\alpha,\beta)$, its (Fredholm) index is defined by $$\mathrm{ind}(u):=-\chi(u)+2c_{1}(\xi|_{[u]},\tau)+\sum_{k}\mu_{\tau}(\gamma_{k}^{+})-\sum_{l}\mu_{\tau}(\gamma_{l}^{-}).$$ Here $\{\gamma_{k}^{+}\}$ is the set consisting of all (not necessarily simple) positive ends of $u$ and $\{\gamma_{l}^{-}\}$ is that of all negative ends. Note that for generic $J$, if $u$ is connected and somewhere injective, then the moduli space of $J$-holomorphic curves near $u$ is a manifold of dimension $\mathrm{ind}(u)$ (see [@HT1 Definition 1.3]). ### $U$-map Let $Y$ be connected. Then there is a degree $-2$ map $U$, $$\label{Umap} U:\mathrm{ECH}(Y,\lambda,\Gamma) \to \mathrm{ECH}(Y,\lambda,\Gamma).$$ To define this, choose a base point $z\in Y$ which, in particular, is not on the image of any Reeb orbit, and let $J$ be generic. Then define a map $$U_{J,z}:\mathrm{ECC}(Y,\lambda,\Gamma) \to \mathrm{ECC}(Y,\lambda,\Gamma)$$ by $$U_{J,z} \alpha =\sum_{\beta:\mathrm{\,\,ECH\,\,generator\,\,with\,\,}[\beta]=\Gamma} \# \{\,u\in \mathcal{M}_{2}^{J}(\alpha,\beta)/\mathbb{R}\,|\,(0,z)\in u\,\}\cdot \beta.$$ The above map $U_{J,z}$ commutes with $\partial_{J}$, and we can define the $U$-map as the induced map on homology. Moreover, this map is independent of $z$ (for a generic $J$). See [@HT3 §2.5] for more details. Moreover, for the same reason as with $\partial_{J}$, the map induced by $U_{J,z}$ does not depend on $J$ (see [@T1]). ### Partition conditions of elliptic orbits {#subsecparti} For $\theta\in \mathbb{R}\backslash \mathbb{Q}$, we define $S_{\theta}$ to be the set of positive integers $q$ such that $\frac{\lceil q\theta \rceil}{q}< \frac{\lceil q'\theta \rceil}{q'}$ for all $q'\in \{1,\,\,2,...,\,\,q-1\}$ and write $S_{\theta}=\{q_{0}=1,\,\,q_{1},\,\,q_{2},\,\,q_{3},...\}$ in increasing order. Also $S_{-\theta}=\{p_{0}=1,\,\,p_{1},\,\,p_{2},\,\,p_{3},...\}$. **Definition 17** ([@HT1 Definition 7.1], or [@H1 §4]). *For a non-negative integer $M$, we inductively define the incoming partition $P_{\theta}^{\mathrm{in}}(M)$ as follows.* *For $M=0$, $P_{\theta}^{\mathrm{in}}(0)=\emptyset$ and for $M>0$, $$P_{\theta}^{\mathrm{in}}(M):=P_{\theta}^{\mathrm{in}}(M-a)\cup{(a)}$$* *where $a:=\mathrm{max}(S_{\theta}\cap{\{1,\,\,2,...,\,\,M\}})$. Define the outgoing partition $$P_{\theta}^{\mathrm{out}}(M):= P_{-\theta}^{\mathrm{in}}(M).$$* *The standard ordering convention for $P_{\theta}^{\mathrm{in}}(M)$ or $P_{\theta}^{\mathrm{out}}(M)$ is to list the entries in "nonincreasing" order.* Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$. For $u\in \mathcal{M}^{J}(\alpha,\beta)$, it can be uniquely written as $u=u_{0}\cup{u_{1}}$ where $u_{0}$ is the union of all components which map to $\mathbb{R}$-invariant cylinders and $u_{1}$ is the rest of $u$. **Proposition 18** ([@HT1 Proposition 7.15]). *Suppose that $J$ is generic and $u=u_{0}\cup{u_{1}}\in \mathcal{M}^{J}(\alpha,\beta)$. Then* *$I(u)\geq 0$.* *If $I(u)=0$, then $u_{1}=\emptyset$.* *If $I(u)=1$, then $\mathrm{ind}(u_{1})=1$.
Moreover $u_{1}$ is embedded and does not intersect $u_{0}$.* *If $I(u)=2$ and $\alpha$ and $\beta$ are ECH generators, then $\mathrm{ind}(u_{1})=2$. Moreover $u_{1}$ is embedded and does not intersect $u_{0}$.* **Proposition 19** ([@HT1 Proposition 7.14, 7.15]). *Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ be ECH generators. Suppose that $I(u)=1$ or $2$ for $u=u_{0}\cup{u_{1}}\in \mathcal{M}^{J}(\alpha,\beta)$. Define $P_{\alpha_{i}}^{+}$ to be the set consisting of the multiplicities of the positive ends of $u_{1}$ at covers of $\alpha_{i}$. In the same way, define $P_{\beta_{j}}^{-}$ for the negative ends. Suppose that $\alpha_{i}$ in $\alpha$ (resp. $\beta_{j}$ in $\beta$) is an elliptic orbit with monodromy angle $\theta_{\alpha_{i}}$ (resp. $\theta_{\beta_{j}}$). Then under the standard ordering convention, $P_{\alpha_{i}}^{+}$ (resp. $P_{\beta_{j}}^{-}$) is an initial segment of $P_{\theta_{\alpha_{i}}}^{\mathrm{out}}(m_{i})$ (resp. $P_{\theta_{\beta_{j}}}^{\mathrm{in}}(n_{j})$).* **Remark 20**. There are also partition conditions with respect to hyperbolic orbits, but we omit them because we do not use them in this paper. ### $J_{0}$ index and topological complexity of $J$-holomorphic curves In this subsection, we recall the $J_{0}$ index. **Definition 21** ([@HT3 §3.3]). *Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ be orbit sets with $[\alpha]=[\beta]$. For $Z\in H_{2}(Y,\alpha,\beta)$, we define $$J_{0}(\alpha,\beta,Z):=-c_{1}(\xi|_{Z},\tau)+Q_{\tau}(Z)+\sum_{i}\sum_{k=1}^{m_{i}-1}\mu_{\tau}(\alpha_{i}^{k})-\sum_{j}\sum_{k=1}^{n_{j}-1}\mu_{\tau}(\beta_{j}^{k}).$$* **Definition 22**. *Let $u=u_{0}\cup{u_{1}}\in \mathcal{M}^{J}(\alpha,\beta)$. Suppose that $u_{1}$ is somewhere injective. Let $n_{i}^{+}$ be the number of positive ends of $u_{1}$ which are asymptotic to $\alpha_{i}$, plus 1 if $u_{0}$ includes the trivial cylinder $\mathbb{R}\times \alpha_{i}$ with some multiplicity. Likewise, let $n_{j}^{-}$ be the number of negative ends of $u_{1}$ which are asymptotic to $\beta_{j}$, plus 1 if $u_{0}$ includes the trivial cylinder $\mathbb{R}\times \beta_{j}$ with some multiplicity.* Write $J_{0}(u)=J_{0}(\alpha,\beta,[u])$. **Proposition 23** ([@HT3 Lemma 3.5] [@H3 Proposition 5.8]). *Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ be admissible orbit sets, and let $u=u_{0}\cup{u_{1}}\in \mathcal{M}^{J}(\alpha,\beta)$. Then $$-\chi(u_{1})+\sum_{i}(n_{i}^{+}-1)+\sum_{j}(n_{j}^{-}-1)\leq J_{0}(u).$$ If $u$ is counted by the ECH differential or the $U$-map, then equality holds in the above.* ### ECH spectrum The notion of the ECH spectrum was introduced in [@H2]. First, we consider filtered ECH. The action of an orbit set $\alpha=\{(\alpha_{i},m_{i})\}$ is defined by $$A(\alpha)=\sum m_{i}A(\alpha_{i})=\sum m_{i}\int_{\alpha_{i}}\lambda.$$ For any $L>0$, $\mathrm{ECC}^{L}(Y,\lambda,\Gamma)$ denotes the subspace of $\mathrm{ECC}(Y,\lambda,\Gamma)$ which is generated by ECH generators whose actions are less than $L$. Then $(\mathrm{ECC}^{L}(Y,\lambda,\Gamma),\partial_{J})$ becomes a subcomplex and the homology group $\mathrm{ECH}^{L}(Y,\lambda,\Gamma)$ is defined. It follows from the construction that there exists a canonical homomorphism $i_{L}:\mathrm{ECH}^{L}(Y,\lambda,\Gamma) \to \mathrm{ECH}(Y,\lambda,\Gamma)$.
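For concreteness, the combinatorics of $S_{\theta}$ and of the incoming/outgoing partitions in Definition 17 and Proposition 19 above can be computed directly. The following Python sketch (ours, purely illustrative; floating-point arithmetic is accurate enough for a generic sample angle such as $\theta=\sqrt{2}-1$) implements the recursive definition verbatim.

```python
import math

def S(theta, bound):
    """S_theta within {1, ..., bound}: positive integers q such that
    ceil(q*theta)/q < ceil(q'*theta)/q' for all q' in {1, ..., q-1}."""
    out = []
    for q in range(1, bound + 1):
        val = math.ceil(q * theta) / q
        if all(val < math.ceil(qq * theta) / qq for qq in range(1, q)):
            out.append(q)
    return out

def P_in(theta, M):
    """Incoming partition P^in_theta(M), listed in nonincreasing order."""
    if M == 0:
        return []
    a = max(S(theta, M))  # a = max of S_theta within {1, ..., M}
    return sorted(P_in(theta, M - a) + [a], reverse=True)

def P_out(theta, M):
    """Outgoing partition: P^out_theta(M) = P^in_{-theta}(M)."""
    return P_in(-theta, M)

theta = math.sqrt(2) - 1  # sample monodromy angle in (0, 1)
print(S(theta, 10), P_in(theta, 7), P_out(theta, 7))  # [1, 2, 7] [7] [5, 1, 1]
```

These partitions are exactly the data constrained by Proposition 19 for curves with $I(u)=1$ or $2$.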
In addition, for non-degenerate contact forms $\lambda,\lambda'$ with $\mathrm{Ker}\lambda=\mathrm{Ker}\lambda'=\xi$, there is a canonical isomorphism $\mathrm{ECH}(Y,\lambda,\Gamma)\to \mathrm{ECH}(Y,\lambda',\Gamma)$ defined by the cobordism maps for product cobordisms (see [@H2]). Therefore we can consider a pair of a group $\mathrm{ECH}(Y,\xi,\Gamma)$ and maps $j_{\lambda}:\mathrm{ECH}(Y,\lambda,\Gamma) \to \mathrm{ECH}(Y,\xi,\Gamma)$ for any non-degenerate contact form $\lambda$ with $\mathrm{Ker}\lambda=\xi$ such that $\{j_{\lambda}\}_{\lambda}$ is compatible with the canonical map $\mathrm{ECH}(Y,\lambda,\Gamma)\to \mathrm{ECH}(Y,\lambda',\Gamma)$. **Definition 24** ([@H2 Definition 4.1, cf. Definition 3.4]). *Let $Y$ be a closed oriented three-manifold with a non-degenerate contact form $\lambda$ with $\mathrm{Ker}\lambda=\xi$ and $\Gamma \in H_{1}(Y,\mathbb{Z})$. If $0\neq \sigma \in \mathrm{ECH}(Y,\xi,\Gamma)$, define $$\label{spect} c_{\sigma}^{\mathrm{ECH}}(Y,\lambda)=\inf\{L>0 |\, \sigma \in \mathrm{Im}(j_{\lambda}\circ i_{L}:\mathrm{ECH}^{L}(Y,\lambda,\Gamma) \to \mathrm{ECH}(Y,\xi,\Gamma))\, \}.$$ If $\lambda$ is degenerate, define $$c_{\sigma}^{\mathrm{ECH}}(Y,\lambda)=\sup\{c_{\sigma}^{\mathrm{ECH}}(Y,f_{-}\lambda)\}=\inf\{c_{\sigma}^{\mathrm{ECH}}(Y,f_{+}\lambda)\}$$ where the supremum is over smooth functions $f_{-}:Y\to (0,1]$ such that $f_{-}\lambda$ is non-degenerate and the infimum is over smooth functions $f_{+}:Y\to [1,\infty)$ such that $f_{+}\lambda$ is non-degenerate. Note that $c_{\sigma}^{\mathrm{ECH}}(Y,\lambda)<\infty$ and this definition makes sense. See [@H2 Definition 4.1, Definition 3.4, §2.3] for more details.* **Definition 25**. *[@H2 Subsection 2.2] Let $Y$ be a closed oriented three-manifold with a non-degenerate contact form $\lambda$ with $\mathrm{Ker}\lambda=\xi$. Then there is a canonical element called the ECH contact invariant, $$c(\xi):=\langle \emptyset \rangle \in \mathrm{ECH}(Y,\lambda,0),$$ where $\langle \emptyset \rangle$ is the equivalence class containing $\emptyset$. Note that $\partial_{J} \emptyset =0$ because of the maximum principle. In addition, $c(\xi)$ depends only on the contact structure $\xi$.* **Definition 26**. *[@H2 Definition 4.3] If $(Y,\lambda)$ is a closed connected contact three-manifold with $c(\xi)\neq 0$, and if $k$ is a nonnegative integer, define $$c_{k}^{\mathrm{ECH}}(Y,\lambda):=\min\{c_{\sigma}^{\mathrm{ECH}}(Y,\lambda)|\,\sigma\in \mathrm{ECH}(Y,\xi,0),\,\,U^{k}\sigma=c(\xi)\}.$$ The sequence $\{c_{k}^{\mathrm{ECH}}(Y,\lambda)\}$ is called the ECH spectrum of $(Y,\lambda)$.* **Proposition 27**. *Let $(Y,\lambda)$ be a closed connected contact three-manifold.* *$$0=c_{0}^{\mathrm{ECH}}(Y,\lambda)<c_{1}^{\mathrm{ECH}}(Y,\lambda) \leq c_{2}^{\mathrm{ECH}}(Y,\lambda)\leq \cdots \leq \infty.$$* *For any $a>0$ and positive integer $k$, $$c_{k}^{\mathrm{ECH}}(Y,a\lambda)= ac_{k}^{\mathrm{ECH}}(Y,\lambda).$$* *Let $f_{1},f_{2}:Y\to (0,\infty)$ be smooth functions with $f_{1}(x)\leq f_{2}(x)$ for every $x\in Y$. Then $$c_{k}^{\mathrm{ECH}}(Y,f_{1}\lambda)\leq c_{k}^{\mathrm{ECH}}(Y,f_{2}\lambda).$$* *Suppose $c_{k}^{\mathrm{ECH}}(Y,f\lambda)<\infty$. The map $$C^{\infty}(Y,\mathbb{R}_{>0})\ni f \mapsto c_{k}^{\mathrm{ECH}}(Y,f\lambda)\in \mathbb{R}$$ is continuous in the $C^{0}$-topology on $C^{\infty}(Y,\mathbb{R}_{>0})$.* ***Proof of Proposition [Proposition 27](#properties1){reference-type="ref" reference="properties1"}**.* These follow from the properties of ECH. See [@H2]. ◻ ### ECH on lens spaces Now, we focus on lens spaces.
Since $H_{2}(L(p,q))=0$, we write the ECH index of $\alpha$, $\beta$ as $I(\alpha,\beta)$ instead of $I(\alpha,\beta,Z)$ where $\{Z\}= H_{2}(L(p,q);\alpha,\beta)$. Let $(L(p,q),\lambda)$ be a non-degenerate contact lens space and consider ECH of $0\in H_{1}(L(p,q))$. Since $[\emptyset]=0$, there is an absolute $\mathbb{Z}$-grading on $\mathrm{ECH}(L(p,q),\lambda,0)=\bigoplus_{k\in\mathbb{Z}}\mathrm{ECH}_{k}(L(p,q),\lambda,0)$ defined by the ECH index relative to $\emptyset$, where $\mathrm{ECH}_{k}(L(p,q),\lambda,0)$ is as follows. Define $$\mathrm{ECC}_{k}(L(p,q),\lambda,0):= \bigoplus_{\alpha:\mathrm{ECH\,\,generator},\,{[\alpha]=0},\,I(\alpha,\emptyset)=k}\mathbb{F}\cdot \alpha.$$ Since $\partial_{J}$ maps $\mathrm{ECC}_{*}(L(p,q),\lambda,0)$ to $\mathrm{ECC}_{*-1}(L(p,q),\lambda,0)$ and $U_{J,z}$ maps $\mathrm{ECC}_{*}(L(p,q),\lambda,0)$ to $\mathrm{ECC}_{*-2}(L(p,q),\lambda,0)$, we have $\mathrm{ECH}_{k}(L(p,q),\lambda,0)$ and $$U:\mathrm{ECH}_{*}(L(p,q),\lambda,0) \to\mathrm{ECH}_{*-2}(L(p,q),\lambda,0).$$ Based on these understandings, the following holds. **Proposition 28**. *Let $(L(p,q),\lambda)$ be non-degenerate.* *If $k$ is even and non-negative, $$\mathrm{ECH}_{k}(L(p,q),\lambda,0)\cong \mathbb{F}.$$ If $k$ is odd or negative, $\mathrm{ECH}_{k}(L(p,q),\lambda,0)$ is zero. Moreover, for $n\geq1$ the $U$-map $$U:\mathrm{ECH}_{2n}(L(p,q),\lambda,0)\to\mathrm{ECH}_{2(n-1)}(L(p,q),\lambda,0)$$ is an isomorphism.* *If $\mathrm{Ker}\lambda=\xi_{\mathrm{std}}$, $0\neq c(\xi_{\mathrm{std}})\in \mathrm{ECH}_{0}(L(p,q),\lambda,0)$. Therefore, we can define the ECH spectrum (Definition [Definition 26](#echspect){reference-type="ref" reference="echspect"}).* ***Proof of Proposition [Proposition 28](#lenssp){reference-type="ref" reference="lenssp"}**.* See [@H2]. ◻ # Proof of Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} {#proof-of-theorem-thmmain} In this section, we prove Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"}. First, we recall the existence of the following local coordinates, called a Martinet tube, which are useful for our arguments in this paper. **Proposition 29**. *[@HWZ1] Let $(Y^3,\lambda)$ be a contact three-manifold with $\mathrm{Ker}\lambda=\xi$. For a simple orbit $\gamma$, there is a diffeomorphism, called a Martinet tube, $F:\mathbb{R}/\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U}$ for a sufficiently small $\delta>0$ such that $F(t,0)=\gamma(t)$ and there exists a smooth function $f:\mathbb{R}/\mathbb{Z}\times \mathbb{D}_{\delta}\to (0,+\infty)$ satisfying $f(\theta,0)=T_{\gamma}$, $df(\theta,0)=0$ and $F^{*}\lambda=f(\theta,x+iy)(d\theta+xdy)$. Here $\mathbb{D}_{\delta}$ is the disk of radius $\delta$. Note that $\mathrm{Ker}F^{*}\lambda|_{\mathbb{R}/\mathbb{Z}\times \{0\}}=\mathrm{span}(\partial_{x},\partial_{y})$. Let $\tau_{F}:\gamma^{*}\xi \to \mathbb{R}/\mathbb{Z}\times \mathbb{R}^{2}$ denote the trivialization induced by $F$, which maps $a\partial_{x}+b\partial_{y}$ to $(a,b)\in \mathbb{R}^{2}$ on each fiber.* Remark that we can take $F$ so that the trivialization $\tau_{F}$ realizes any given homotopy class of trivializations over $\gamma$. Consider $V_{1}=S^{1}\times \mathbb{D}$, $V_{2}=S^{1}\times \mathbb{D}$ and a gluing map $g:\partial V_{1}=S^{1}\times \partial \mathbb{D}\to S^{1}\times \partial \mathbb{D}=\partial V_{2}$ which is described as $$\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix}$$ in standard longitude-meridian coordinates on the torus, where $a,b,c,d \in \mathbb{Z}$, $b>0$ and $ad-bc=1$.
Then, there is an orientation-preserving diffeomorphism from the glued manifold $V_{1}\cup_{g}V_{2}$ to $L(p,q)$ if and only if $b=p$ and in addition either $d=-q\,\,\mathrm{mod}\,\,p$ or $dq=-1\,\,\mathrm{mod}\,\,p$ (as remarked, the orientation of $L(p,q)$ is induced by the $4$-dimensional ball). Note that the boundary of a meridian disk of $V_{1}$ is glued by $g$ along a $(p,d)$-cable curve on $\partial V_{2}=S^{1}\times \partial \mathbb{D}$. Let $\gamma\subset (L(p,q),\lambda)$ be a $p$-unknotted Reeb orbit and $u:\mathbb{D}\to L(p,q)$ be a rational Seifert surface of $\gamma^{p}$. Take a Martinet tube $F:\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U}$, for a sufficiently small $\delta>0$, onto a small neighbourhood $\Bar{U}$ of $\gamma$. As in Remark [Remark 2](#heegaard){reference-type="ref" reference="heegaard"}, $\Bar{U}$ is a solid torus such that $L(p,q)\backslash U$ is also a solid torus, which gives a Heegaard decomposition of genus $1$. In addition, $u(\mathbb{D})\cap (L(p,q)\backslash U)$ is a meridian disk of $L(p,q)\backslash U$. Therefore, $F^{-1}(u(\mathbb{D})\cap\partial \Bar{U})$ is a $(p,r)$ cable such that either $-r=q\,\,\mathrm{mod}\,\,p$ or $-rq=1\,\,\mathrm{mod}\,\,p$ with respect to the coordinates of $\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta}$. Therefore, Theorem [Theorem 5](#thm:main){reference-type="ref" reference="thm:main"} follows directly from the next proposition. **Proposition 30**. *Suppose that the above $F^{-1}(u(\mathbb{D})\cap\partial \Bar{U})$ is a $(p,r)$ cable. If $-2r-2p\cdot sl^{\mathbb{Q}}_{\xi}(\gamma)-\mu_{\mathrm{disk}}(\gamma^{p})$ is not divisible by $p$, then $\gamma$ is elliptic.* To prove the above proposition, we take sections $Z_{\mathrm{disk}}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\gamma^p)^{*}\xi$ and $Z_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\gamma^p)^{*}\xi$ so that $Z_{\mathrm{disk}}$ extends to a non-vanishing section on $u^{*}\xi$ and $Z_{F}$ corresponds to $\partial_{x}$ in the coordinates induced by $F$. Let $Z^{\epsilon}_{\mathrm{disk}}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to L(p,q)$ and $Z^{\epsilon}_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to L(p,q)$ denote the curves $Z^{\epsilon}_{\mathrm{disk}}(t)=\mathrm{exp}_{\gamma(t)}(\epsilon Z_{\mathrm{disk}}(t))$ and $Z^{\epsilon}_{F}(t)=\mathrm{exp}_{\gamma(t)}(\epsilon Z_{F}(t))$ for small $\epsilon>0$ respectively. Then, it follows from a direct observation and the definition that $\# (u(\mathbb{D})\cap Z^{\epsilon}_{F})=-rp$ and $\# (u(\mathbb{D})\cap Z^{\epsilon}_{\mathrm{disk}})=p^{2}sl_{\xi}^{\mathbb{Q}}(\gamma)$ (note the orientation and sign). Let $\rho:S^{3}\to L(p,q)$ be the covering map. We can take lifts of $\gamma^{p}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to L(p,q)$ and the rational Seifert surface $u:\mathbb{D}\to L(p,q)$ to $S^{3}$, and write them as $\Tilde{\gamma}: \mathbb{R}/pT_{\gamma}\mathbb{Z}\to S^{3}$, $\Tilde{u}:\mathbb{D}\to S^{3}$ respectively. We may assume that $\Tilde{\gamma}(pT_{\gamma}t)=\Tilde{u}(e^{2\pi i t})$. In the same way, we take lifts of $Z_{\mathrm{disk}},\,Z_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\gamma^p)^{*}\xi$ and write them as $\Tilde{Z}_{\mathrm{disk}},\,\Tilde{Z}_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to \Tilde{\gamma}^{*}\xi$ respectively. Let $\Tilde{Z}^{\epsilon}_{\mathrm{disk}}(t)=\mathrm{exp}_{\Tilde{\gamma}(t)}(\epsilon \Tilde{Z}_{\mathrm{disk}}(t))$ and $\Tilde{Z}^{\epsilon}_{F}(t)=\mathrm{exp}_{\Tilde{\gamma}(t)}(\epsilon \Tilde{Z}_{F}(t))$.
Then it follows from the construction that $\Tilde{Z}^{\epsilon}_{\mathrm{disk}}$ and $\Tilde{Z}^{\epsilon}_{F}$ are lifts of $Z^{\epsilon}_{\mathrm{disk}}$ and $Z^{\epsilon}_{F}$ respectively. Since there are $p$ ways of lifting $Z^{\epsilon}_{\mathrm{disk}}$, $Z^{\epsilon}_{F}$ and each intersection number of a lift with $\Tilde{u}(\mathbb{D})$ is equal to each other, we have $$\# (u(\mathbb{D})\cap Z^{\epsilon}_{F})=p\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{F}),\,\,\# (u(\mathbb{D})\cap Z^{\epsilon}_{\mathrm{disk}})=p\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{\mathrm{disk}})$$ and hence $\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{F})=-r,\,\,\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{\mathrm{disk}})=psl_{\xi}^{\mathbb{Q}}(\gamma)$. Recall that $\tau_{F}:\gamma^{*}\xi\to \mathbb{R}/T_{\gamma}\times \mathbb{R}^{2}$ denotes the trivialization induced by $F$. As mentioned, we use the same notation $\tau_{F}$ for a trivialization $(\gamma^{p})^{*}\xi\to \mathbb{R}/T_{\gamma}\times \mathbb{R}^{2}$ induced by $\tau_{F}:\gamma^{*}\xi\to \mathbb{R}/T_{\gamma}\times \mathbb{R}^{2}$. **Lemma 31**. *$\mu_{\tau_{F}}(\gamma^{p})-2psl_{\xi}^{\mathbb{Q}}(\gamma)-2r=\mu_{\mathrm{disk}}(\gamma^{p})$* ***Proof of Lemma [Lemma 31](#lem;key){reference-type="ref" reference="lem;key"}**.* Take a section $\Tilde{W}_{\mathrm{disk}}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\Tilde{\gamma})^{*}\xi$ so that the map $\Tilde{\gamma}^{*}\xi \ni a\Tilde{Z}_{\mathrm{disk}}+b\Tilde{W}_{\mathrm{disk}}\mapsto a+ib\in \mathbb{R}^{2}$ gives a symplectic trivialization. Since $\Tilde{Z}_{\mathrm{disk}}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\Tilde{\gamma})^{*}\xi$ extends globally to a non-vanishing section on $\Tilde{u}^{*}\xi$, the homotopy class of this trivialization is $\tau_{\mathrm{disk}}$. In the same way, we take a section $\Tilde{W}_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\Tilde{\gamma})^{*}\xi$ so that the map $\Tilde{\gamma}^{*}\xi \ni a\Tilde{Z}_{F}+b\Tilde{W}_{F}\mapsto a+ib\in \mathbb{R}^{2}$ gives a symplectic trivialization. Since $\Tilde{Z}_{F}:\mathbb{R}/pT_{\gamma}\mathbb{Z}\to (\Tilde{\gamma})^{*}\xi$ corresponds to $\partial_{x}$ with respect to the coordinate induced by $F$, the homotopy class of this trivialization is $\tau_{F}$. Therefore it follows directly that $$\label{wind} \mathrm{wind}(\tau_{F},\tau_{\mathrm{disk}})=\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{F})-\# (\Tilde{u}(\mathbb{D})\cap \Tilde{Z}^{\epsilon}_{\mathrm{disk}})=-r-psl_{\xi}^{\mathbb{Q}}(\gamma).$$ It follows from Proposition [Proposition 13](#conleywind){reference-type="ref" reference="conleywind"} that $$\mu_{\tau_{F}}(\gamma^{p})+2\mathrm{wind}(\tau_{F},\tau_{\mathrm{disk}})=\mu_{\tau_{F}}(\gamma^{p})-2r-2psl_{\xi}^{\mathbb{Q}}(\gamma)=\mu_{\mathrm{disk}}(\gamma^{p}).$$ This completes the proof. ◻ Now, we shall complete the proof of Proposition [Proposition 30](#main;prop){reference-type="ref" reference="main;prop"}. Suppose that $\gamma \subset L(p,q)$ is hyperbolic. Then according to Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"}, $\mu_{\tau_{F}}(\gamma^{p})=p\mu_{\tau_{F}}(\gamma)$. Therefore it follows from Lemma [Lemma 31](#lem;key){reference-type="ref" reference="lem;key"} that $-p\mu_{\tau_{F}}(\gamma)=-2psl_{\xi}^{\mathbb{Q}}(\gamma)-2r-\mu_{\mathrm{disk}}(\gamma^{p})$. This means that if $-2psl_{\xi}^{\mathbb{Q}}(\gamma)-2r-\mu_{\mathrm{disk}}(\gamma^{p})$ is not divisible by $p$, then $\gamma$ must be elliptic. 
This completes the proof. # Immersed $J$-holomorphic curves In this section, we assume that $Y\cong L(3,1)$ and $(Y,\lambda)$ is non-degenerate and dynamically convex. From now on, we fix a generic admissible almost complex structure $J$ (c.f. §[2.2](#echdef){reference-type="ref" reference="echdef"}) and a trivialization $\tau_{\gamma} \in \mathcal{P}(\gamma)$ of the contact structure $\xi$ on each simple orbit $\gamma$. Let $\tau:=\{ \tau_{\gamma} \}_{\gamma}$. **Proposition 32**. *Let $u:(\Sigma,j)\to (\mathbb{R}\times Y,J)$ be an immersed $J$-holomorphic curve with no negative end.* *$\mathrm{ind}(u)$ is not equal to $1$.* *If $\mathrm{ind}(u)=2$, then $u$ is of genus $0$.* ***Proof of Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"}**.* Let $g(u)$ denote the genus of $u$. Since $u$ is an immersion, we have $$\label{eq:ind} \begin{split} \mathrm{ind}(u)=&-(2-2g(u)-k)+2c_{\tau}(\xi|_{[u]})+\sum_{1\leq i \leq k} \mu_{\tau}(\gamma_{i})\\ =&-(2-2g(u))+2c_{\tau}(\xi|_{[u]})+\sum_{1\leq i \leq k} (\mu_{\tau}(\gamma_{i})+1) \end{split}$$ where $\{\gamma_{i}\}_{i}$ is the set of periodic orbits to which the ends of $u$ are asymptotic. We may assume that $\gamma_{i}$ is hyperbolic if $1\leq i\leq k'$ and elliptic if $k'+1\leq i \leq k$. **Lemma 33**. *Let $\gamma$ be an elliptic periodic orbit. For any symplectic trivialization $\tau:\gamma^{*}\xi \to \gamma \times \mathbb{R}^{2}$ and $p\in \mathbb{Z}_{>0}$, we have $$p\mu_{\tau}(\gamma)-\mu_{\tau}(\gamma^{p})+p\geq 1.$$* ***Proof of Lemma [Lemma 33](#lem:iterate){reference-type="ref" reference="lem:iterate"}**.* Let $\theta$ denote the monodromy angle with respect to $\tau$. Then $\mu_{\tau}(\gamma^{p})=2\lfloor p\theta \rfloor+1$ for any $p\in \mathbb{Z}_{>0}$. Since either $\lfloor2 \theta \rfloor=2\lfloor \theta \rfloor+1$ or $\lfloor2 \theta \rfloor=2\lfloor \theta \rfloor$, we have $2\lfloor \theta \rfloor +1 \geq \lfloor2 \theta \rfloor$. It is easy to check by induction on $p$ that $p\lfloor \theta \rfloor +p-1 \geq \lfloor p\theta \rfloor$. Hence $p\mu_{\tau}(\gamma)-\mu_{\tau}(\gamma^{p})+p=2(p\lfloor \theta \rfloor +p-1-\lfloor p\theta \rfloor)+1\geq 1$. This completes the proof. ◻ Recall $Y \cong L(3,1)$. Hence for any periodic orbit $\gamma$, $\gamma^{3}$ is contractible. Let $\alpha$ denote the orbit set to which the positive ends of $u$ are asymptotic. Then $\{[u]\} = H_{2}(Y,\alpha,\emptyset)$. Now, we note some obvious facts. First, $c_{\tau}(\xi|_{3[u]})=3c_{\tau}(\xi|_{[u]})$ where $c_{\tau}$ is the first relative Chern number and $\{3[u]\} = H_{2}(Y,3\alpha,\emptyset)$. Next, since any $\gamma_{i}^{3}$ is contractible, $2c_{\tau}(\xi|_{3[u]})+\sum_{1\leq i \leq k} \mu_{\tau}(\gamma_{i}^{3})=\sum_{1\leq i \leq k} \mu_{\mathrm{disk}}(\gamma_{i}^{3})$. Based on this understanding, we multiply both sides of ([\[eq:ind\]](#eq:ind){reference-type="ref" reference="eq:ind"}) by $3$. Then we have $$\label{eq:ineq} \begin{split} 3\mathrm{ind}(u)=&-3(2-2g(u))+2c_{\tau}(\xi|_{3[u]})+\sum_{1\leq i \leq k'} (\mu_{\tau}(\gamma_{i}^{3})+3)\\ &+\sum_{k'+1\leq i \leq k} \mu_{\tau}(\gamma_{i}^{3})+\sum_{k'+1\leq i \leq k}( 3\mu_{\tau}(\gamma_{i})-\mu_{\tau}(\gamma_{i}^{3})+3)\\ =&-3(2-2g(u))+\sum_{1\leq i \leq k'}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)\\+&\sum_{k'+1\leq i \leq k} \mu_{\mathrm{disk}}(\gamma_{i}^{3})+\sum_{k'+1\leq i \leq k}( 3\mu_{\tau}(\gamma_{i})-\mu_{\tau}(\gamma_{i}^{3})+3)\\ \geq &6g(u)-6+\sum_{1\leq i \leq k'}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+1) \end{split}$$ Here Lemma [Lemma 33](#lem:iterate){reference-type="ref" reference="lem:iterate"} is used. 
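As an aside, the elementary inequality of Lemma [Lemma 33](#lem:iterate){reference-type="ref" reference="lem:iterate"} is easy to get wrong by an off-by-one error; the following minimal sketch (purely illustrative, not used anywhere in the arguments) checks it numerically, assuming only the index formula $\mu_{\tau}(\gamma^{p})=2\lfloor p\theta \rfloor+1$ for elliptic orbits quoted above.

```python
# Numerical sanity check of Lemma 33 (illustration only, not part of the proof):
# for an elliptic orbit with monodromy angle theta we have mu_tau(gamma^p) = 2*floor(p*theta) + 1,
# and we verify p*mu_tau(gamma) - mu_tau(gamma^p) + p >= 1 on random samples.
import math
import random

def mu(p, theta):
    """Conley-Zehnder index of the p-th iterate of an elliptic orbit with monodromy angle theta."""
    return 2 * math.floor(p * theta) + 1

random.seed(0)
for _ in range(10000):
    theta = random.uniform(-5.0, 5.0)   # a sample monodromy angle (generically irrational)
    p = random.randint(1, 50)
    assert p * mu(1, theta) - mu(p, theta) + p >= 1
print("p*mu(gamma) - mu(gamma^p) + p >= 1 holds on all samples")
```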
Suppose that $\mathrm{ind}(u)=1$. From ([\[eq:ineq\]](#eq:ineq){reference-type="ref" reference="eq:ineq"}), we have $$9\geq 6g(u)+\sum_{1\leq i \leq k'}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+1)$$ We note that $\mu_{\mathrm{disk}}(\gamma_{i}^{3})+3$ is at least $6$ and $\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+1)$ is at least $4$ because of dynamical convexity. Since $\mathrm{ind}(u)=1$ is odd, it follows from ([\[eq:ind\]](#eq:ind){reference-type="ref" reference="eq:ind"}) that at least one $\gamma_{i}$ must be positive hyperbolic. Therefore $k'\geq 1$ and thus $k'=k=1$. This means that the only one orbit $\gamma_{1}$ is contractible. Now we go back to ([\[eq:ind\]](#eq:ind){reference-type="ref" reference="eq:ind"}). We have $1=2g(u)-1+\mu_{\mathrm{disk}}(\gamma_{1})$. This does not happen since $\mu_{\mathrm{disk}}(\gamma_{1})\geq 3$ and $g(u)\geq 0$. This proves Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"}(1). Next we suppose that $\mathrm{ind}(u)=2$ and $g(u)\geq 1$. From ([\[eq:ineq\]](#eq:ineq){reference-type="ref" reference="eq:ineq"}), we have $$6\geq \sum_{1\leq i \leq k'}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3})+1)$$ In the same way as above, we have $k=1$ and so from ([\[eq:ind\]](#eq:ind){reference-type="ref" reference="eq:ind"}) $2=\mathrm{ind}(u)=2g(u)-1+\mu_{\mathrm{disk}}(\gamma_{1})$. But this does not happen since $\mu_{\mathrm{disk}}(\gamma_{1})\geq 3$ and $g(u)\geq 1$. This proves Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"}(2). ◻ **Remark 34**. In general, the above argument does not work for any $L(p,q)$. **Proposition 35**. *Let $\alpha$ be an ECH generator with $\langle U_{J,z}\alpha,\emptyset \rangle\neq 0$. Then $\alpha$ satisfies one of the following.* *There are simple elliptic orbits $\gamma_{1},\,\gamma_{2},\,\gamma_{3} \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^{3}_{i})=3$ for $i=1,2,3$ such that $\alpha=(\gamma_{1},1)\cup(\gamma_{2},1)\cup(\gamma_{3},1)$.* *There is a simple elliptic orbit $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^{3})=3$ such that $\alpha=(\gamma,3)$.* *There are simple elliptic orbits $\gamma_{1},\,\gamma_{2}\in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma_{1}^{3})=\mu_{\mathrm{disk}}(\gamma_{2}^{3})=3$ such that $\alpha=(\gamma_{1},2)\cup(\gamma_{2},1)$.* There are many steps to prove Proposition [Proposition 35](#prp;main){reference-type="ref" reference="prp;main"}. At first, we compute some indices and list the properties of ECH generators $\alpha$ which satisfy $\langle U_{z,J}\alpha,\emptyset \rangle\neq 0$. **Lemma 36**. *Let $\alpha$ be an ECH generator with $\langle U_{J,z}\alpha,\emptyset \rangle\neq 0$. Then by conducting some computations regarding indices, we have that any $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ has no two ends asymptotic to the same orbit. 
In addition, $\alpha$ satisfies one of the following.* *There are simple elliptic orbits $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$ with $\mu_{\mathrm{disk}}(\gamma^{3}_{i})=3$ for $i=1,2,3$ such that $\alpha=(\gamma_{1},1)\cup(\gamma_{2},1)\cup(\gamma_{3},1)$.* *There is a simple elliptic orbit $\gamma$ with $\mu_{\mathrm{disk}}(\gamma^{3})=3$ such that $\alpha=(\gamma,3)$.* *There are simple elliptic orbits $\gamma_{1}$ and $\gamma_{2}$ with $\mu_{\mathrm{disk}}(\gamma_{1}^{3})=\mu_{\mathrm{disk}}(\gamma_{2}^{3})=3$ such that $\alpha=(\gamma_{1},2)\cup(\gamma_{2},1)$.* *There are simple elliptic orbits $\gamma_{1}$ $\gamma_{2}$ with $\mu_{\mathrm{disk}}(\gamma_{1}^{6})=5$ and $\mu_{\mathrm{disk}}(\gamma_{2}^{3})=5$ such that $\alpha=(\gamma_{1},2)\cup(\gamma_{2},1)$.* *There are simple orbits $\gamma_{1}$ and $\gamma_{2}$ such that $\alpha=(\gamma_{1},1)\cup(\gamma_{2},1)$ and each of them is not positive hyperbolic.* *There are simple elliptic orbits $\gamma_{1}$ and $\gamma_{2}$ such that $\alpha=(\gamma_{1},2)\cup(\gamma_{2},2)$.* *There is a simple orbit $\gamma$ such that $\alpha=(\gamma,1)$ and $\gamma$ is not positive hyperbolic.* ***Proof of Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}**.* Set $\alpha=\{(\gamma_{i},m_{i})\}_{1\leq i \leq k}$. We may assume that $\gamma_{i}$ is hyperbolic for $1\leq i \leq k'$ and elliptic for $k'+1\leq i \leq k$. Of course, $m_{i}=1$ for $1\leq i \leq k'$ since $\alpha$ is an ECH generator. Let $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ be a $J$-holomorphic curve counted by $\langle U_{z,J}\alpha,\emptyset \rangle \neq 0$. Then we have $$\begin{split} 2c_{\tau}(\xi|_{[u]})+\sum_{1\leq i \leq k} \mu_{\tau}(\gamma_{i}^{m_{i}})=&I(u)-J_{0}(u)\\=&2-(-\chi(u)+\sum_{1\leq i \leq k}(n_{i}^{+}-1)) \\=&2-(2g(u)-2+h+\sum_{1\leq i \leq k}(n_{i}^{+}-1)). \end{split}$$ Here $h$ is the number of the positive ends of $u$ and $n_{i}^{+}$ is the number of the positive ends asymptotic to $\gamma_{i}$. It follows from Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"} (2) that $g(u)=0$. Hence we have $$\begin{split} 4=h+\sum_{1\leq i \leq k}(n_{i}^{+}-1)+2c_{\tau}(\xi|_{[u]})+\sum_{1\leq i \leq k} \mu_{\tau}(\gamma_{i}^{m_{i}}) \end{split}$$ Now, we conduct similar calculations with the proof of Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"}. Multiplying both side by $3$, it follows from Lemma [Lemma 33](#lem:iterate){reference-type="ref" reference="lem:iterate"} that $$\label{ineqcalc} \begin{split} 12=&3h+3\sum_{1\leq i \leq k}(n_{i}^{+}-1)+2c_{\tau}(\xi|_{3[u]})+\sum_{1\leq i \leq k'} \mu_{\tau}(\gamma_{i}^{3})\\+&\sum_{k'+1\leq i \leq k} \mu_{\tau}(\gamma_{i}^{3m_{i}})+\sum_{k'+1\leq i \leq k}( 3\mu_{\tau}(\gamma_{i}^{m_{i}})-\mu_{\tau}(\gamma_{i}^{3m_{i}}))\\=&3h+3\sum_{1\leq i \leq k}(n_{i}^{+}-1)+\sum_{1\leq i \leq k'} (\mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)\\+&\sum_{k'+1\leq i \leq k} \mu_{\mathrm{disk}}(\gamma_{i}^{3m_{i}})+\sum_{k'+1\leq i \leq k}( 3\mu_{\tau}(\gamma_{i}^{m_{i}})-\mu_{\tau}(\gamma_{i}^{3m_{i}})+3)-3k\\ \geq& 3(h-k)+3\sum_{1\leq i \leq k}(n_{i}^{+}-1)\\+&\sum_{1\leq i \leq k'} (\mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3m_{i}})+1). \end{split}$$ **Claim 37**. *$n_{i}^{+}=1$ for any $1\leq i \leq k$. That is, the number of the ends of $u$ asymptotic to $\gamma_{i}$ is $1$ for any $1\leq i \leq k$. 
This means $h=k$ and $\sum_{1\leq i \leq k}(n_{i}^{+}-1)=0$ in ([\[ineqcalc\]](#ineqcalc){reference-type="ref" reference="ineqcalc"}).* ***Proof of Claim [Claim 37](#cla;n=1){reference-type="ref" reference="cla;n=1"}**.* We prove this by contradiction. Assume that $n^{+}_{j}>1$ for some $k'+1\leq j\leq k$. Then $3(h-k)\geq 3$ and $3\sum_{1\leq i \leq k}(n_{i}^{+}-1)\geq 3$. Therefore we have $$\label{equat} 6\geq \sum_{1\leq i \leq k'}(\mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3m_{i}})+1).$$ Moreover $m_{j}>1$. It follows easily from ([\[equat\]](#equat){reference-type="ref" reference="equat"}) that $k=1$. Indeed, if $k>1$, the right hand side of ([\[equat\]](#equat){reference-type="ref" reference="equat"}) is at least $8$. Hence $k=j=1$. If $\gamma_{1}$ is contractible, $\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}})\geq 6m_{1}+1\geq 13$. This is a contradiction. So $\gamma_{1}$ must be non-contractible. Since $[\alpha]=m_{1}[\gamma_{1}]$ must be zero in $H_{1}(L(3,1))$, $m_{1}$ is divisible by $3$. Write $m_{1}=3m'_{1}$. We have $\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}})+1=\mu_{\mathrm{disk}}((\gamma_{1}^{3})^{3m'_{1}})+1\geq 6m'_{1}+1+1\geq 8$ (Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"}). This contradicts ([\[equat\]](#equat){reference-type="ref" reference="equat"}). In summary, we have $n^{+}_{i}=1$ for any $i$. This completes the proof. ◻ Having Claim [Claim 37](#cla;n=1){reference-type="ref" reference="cla;n=1"}, it follows from ([\[ineqcalc\]](#ineqcalc){reference-type="ref" reference="ineqcalc"}) that $$\label{ineqkk'} \begin{split} 12\geq& \sum_{1\leq i \leq k'}(\mu_{\mathrm{disk}}(\gamma_{i}^{3})+3)+\sum_{k'+1\leq i \leq k}( \mu_{\mathrm{disk}}(\gamma_{i}^{3m_{i}})+1)\\ \geq& 6k'+4(k-k'). \end{split}$$ The pair $(k-k',k')$ must satisfy ([\[ineqkk\'\]](#ineqkk'){reference-type="ref" reference="ineqkk'"}). Thus $(k-k',k')$ is one of the following. $(k-k',k')=(3,0),(0,2),(1,1),(2,0),(0,1),(1,0)$. We check their properties one by one. The next claim is obvious but it is worth stating explicitly for later arguments. **Claim 38**. *Suppose that $\mu_{\tau}(\gamma)\geq3$. If $\mu_{\tau}(\gamma^{k})=5$ or $7$ for $k>1$, then $\mu_{\tau}(\gamma)=3$. In addition if $\mu_{\tau}(\gamma^{k})=5$ for $k>1$, then $k=2$ and if $\mu_{\tau}(\gamma^{k})=7$ for $k>1$, then either $k=2$ or $k=3$.* ***Proof of Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}**.* Note that $\gamma$ is elliptic if $\mu_{\tau}(\gamma^{k})=5$ or $7$ for $k>1$. Indeed if $\gamma$ is hyperbolic, then $\mu_{\tau}(\gamma^{k})=k\mu_{\tau}(\gamma)$ (c.f. Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"}), but since $5$ and $7$ are prime and $\mu_{\tau}(\gamma)\geq 3$, this is impossible. Since $\gamma$ is elliptic, there is $\theta \in \mathbb{R}\backslash \mathbb{Q}$ such that $\mu_{\tau}(\gamma^{k})=2\lfloor k\theta \rfloor+1$ for any $k\geq1$ (Proposition [Proposition 14](#conleybasic){reference-type="ref" reference="conleybasic"}). If $\mu_{\tau}(\gamma)\geq 5$, then $\theta\geq 2$ because $\mu_{\tau}(\gamma)=2 \lfloor \theta \rfloor+1\geq 5$. Therefore $\mu_{\tau}(\gamma^{k})=2\lfloor k \theta \rfloor+1\geq 4k+1$ (c.f. Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}). But then no $k>1$ satisfies $\mu_{\tau}(\gamma^{k})=5$ or $7$. This is a contradiction. Thus we have $\mu_{\tau}(\gamma)=3$. 
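As another purely illustrative aside (not part of the proofs), the two pieces of bookkeeping above---the list of pairs $(k-k',k')$ allowed by ([\[ineqkk\'\]](#ineqkk'){reference-type="ref" reference="ineqkk'"}) and the behaviour of the index under iteration stated in Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}---can be reproduced mechanically, again assuming only the formula $\mu_{\tau}(\gamma^{k})=2\lfloor k\theta \rfloor+1$.

```python
import math

# (a) Pairs (k - k', k') of non-negative integers, not both zero, with 6*k' + 4*(k - k') <= 12.
pairs = [(e, h) for e in range(5) for h in range(5)
         if (e, h) != (0, 0) and 6 * h + 4 * e <= 12]
print(sorted(pairs))  # [(0, 1), (0, 2), (1, 0), (1, 1), (2, 0), (3, 0)], matching the list above

# (b) Illustration of Claim 38: if mu_tau(gamma) = 2*floor(theta) + 1 = 3, i.e. 1 < theta < 2,
#     then mu_tau(gamma^k) = 2*floor(k*theta) + 1 equals 5 only for k = 2, and 7 only for k = 2 or 3.
for target in (5, 7):
    ks = set()
    for i in range(1, 2000):
        theta = 1 + i / 2001          # samples sweeping the interval (1, 2)
        for k in range(2, 12):
            if 2 * math.floor(k * theta) + 1 == target:
                ks.add(k)
    print(target, sorted(ks))         # prints "5 [2]" and "7 [2, 3]"
```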
If $\mu_{\tau}(\gamma^{k})=5$ for $k>1$, $k=2$ follows directly from Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}. In the same way, if $\mu_{\tau}(\gamma^{k})=7$ for $k>1$, then either $k=2$ or $k=3$. ◻ **Case $(k-k',k')=(3,0)$.** In this case, we have $\alpha=(\gamma_{1},m_{1})\cup (\gamma_{2},m_{2})\cup(\gamma_{3},m_{3})$ for some elliptic orbits $\gamma_{i}$. Moreover from ([\[ineqkk\'\]](#ineqkk'){reference-type="ref" reference="ineqkk'"}), $\mu_{\mathrm{disk}}(\gamma_{i}^{3m_{i}})=3$ for $i=1,2,3$. This implies that $m_{i}=1$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}) and thus $\mu_{\mathrm{disk}}(\gamma_{i}^{3})=3$ for $i=1,2,3$. Thus $\alpha$ satisfies (1) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. **Case $(k-k',k')=(0,2)$.** In this case, we have $\alpha=(\gamma_{1},1)\cup (\gamma_{2},1)$ with $\mu_{\mathrm{disk}}(\gamma_{1}^3)=\mu_{\mathrm{disk}}(\gamma_{2}^3)=3$. Thus $\gamma_{1}$ and $\gamma_{2}$ are negative hyperbolic, and $\alpha$ satisfies (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. **Case $(k-k',k')=(1,1)$.** In this case, $\alpha=(\gamma_{1},1)\cup (\gamma_{2},m_{2})$ for elliptic $\gamma_{2}$ and negative hyperbolic $\gamma_{1}$. Since $12\geq (\mu_{\mathrm{disk}}(\gamma_{1}^{3})+3)+( \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}})+1)$, it follows from dynamical convexity that either $(\mu_{\mathrm{disk}}(\gamma_{1}^{3}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(3,5)$ or $(5,3)$ or $(3,3)$. Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(3,5)$. Then $m_{2}=1$ or $2$ (Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}). If $m_{2}=1$, $\alpha$ satisfies Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} (5). If $m_{2}=2$, $\mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}})=\mu_{\mathrm{disk}}((\gamma_{2}^{3})^{2})=5$ implies that $\mu_{\mathrm{disk}}(\gamma_{2}^{3})=3$ (Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}) and thus $\alpha=(\gamma_{1},1)\cup (\gamma_{2},2)$ satisfies Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} (3). Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(5,3)$ or $(3,3)$. $\mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}})=3$ implies that $m_{2}=1$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}) and thus $\alpha=(\gamma_{1},1)\cup (\gamma_{2},1)$ satisfies (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. **Case $(k-k',k')=(2,0)$.** In this case, we have $\alpha=(\gamma_{1},m_{1})\cup (\gamma_{2},m_{2})$ for elliptic orbits $\gamma_{1}$ and $\gamma_{2}$. It follows from ([\[ineqkk\'\]](#ineqkk'){reference-type="ref" reference="ineqkk'"}) that $10\geq \mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}})+\mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}})$. Without loss of generality, we may assume that $\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}})\geq \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}})$. Then we have $(\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(7,3),(5,5),(5,3),(3,3)$. Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(7,3)$. 
Then $m_{2}=1$ and $\mu_{\mathrm{disk}}(\gamma_{2}^{3})=3$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}) and in addition either $m_{1}=1$ or $m_{1}=2$ or $m_{1}=3$ (Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}). If $m_{1}=1$, then $\alpha$ satisfies (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. If $m_{1}=2$, then $\mu_{\mathrm{disk}}(\gamma_{1}^{3})=3$ (Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}) and thus $\alpha$ satisfies (3) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. If $m_{1}=3$, then since $[\alpha]=m_{1}[\gamma_{1}]+[\gamma_{2}]=0$, $[\gamma_{2}]=0$ and thus $\gamma_{2}$ is contractible. But this is a contradiction because $\mu_{\mathrm{disk}}(\gamma_{2}^{3})\geq 2\times 3+1=7$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}). Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(5,5)$. Then it follows from Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"} that either $m_{i}=1$ or $2$ for each $i=1,2$ and in addition if $m_{i}=2$, then $\mu_{\mathrm{disk}}(\gamma_{i}^3)=3$. This means that $\alpha$ satisfies either (4) or (5) or (6) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(5,3)$. Then $m_{2}=1$ and $\mu_{\mathrm{disk}}(\gamma_{2}^{3})=3$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}) and in addition either $m_{1}=1$ or $m_{1}=2$. If $m_{1}=1$, then $\alpha$ satisfies (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. If $m_{1}=2$, then $\mu_{\mathrm{disk}}(\gamma_{1}^{3})=3$ (Claim [Claim 38](#cla;essential){reference-type="ref" reference="cla;essential"}) and thus $\alpha$ satisfies (3) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. Suppose that $(\mu_{\mathrm{disk}}(\gamma_{1}^{3m_{1}}), \mu_{\mathrm{disk}}(\gamma_{2}^{3m_{2}}))=(3,3)$. Then $m_{1}=m_{2}=1$. Thus $\alpha$ satisfies (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. **Case $(k-k',k')=(0,1)$.** In this case, we have $\alpha=(\gamma_{1},1)$ for a negative hyperbolic $\gamma_{1}$ and thus $\alpha$ satisfies (7) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. **Case $(k-k',k')=(1,0)$.** In this case, $\alpha=(\gamma_{1},m_{1})$ for some $m_{1}$ and an elliptic orbit $\gamma_{1}$. Suppose that $\gamma_{1}$ is contractible. Then $\mu_{\mathrm{disk}}(\gamma_{1}^{m_{1}})\geq 2m_{1}+1$ (Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}). If $m_{1}\geq 2$, we have $\mathrm{ind}(u)=\mu_{\mathrm{disk}}(\gamma_{1}^{m_{1}})-1\geq 4$. This contradicts $\mathrm{ind}(u)=2$. Therefore $m_{1}=1$ and $\alpha$ satisfies (7). Next, suppose that $\gamma_{1}$ is not contractible. Since $[\alpha]=0$, $m_{1}=3m_{1}'$ for some $m_{1}'\in \mathbb{Z}_{>0}$. Therefore, we have $\mathrm{ind}(u)=\mu_{\mathrm{disk}}((\gamma_{1}^3)^{m_{1}'})-1\geq 2m_{1}'$. This implies that $m_{1}'=1$ and $\mu_{\mathrm{disk}}(\gamma_{1}^3)=3$. Hence we have $\alpha=(\gamma_{1},3)$, which satisfies (2). In summary, 
we complete the proof of Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. ◻ # Rational open book decompositions and binding orbits In this section, we narrow down the list in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. For the purpose, we observe topological properties of the moduli space of holomorphic curves. For $u\in \mathcal{M}^{J}(\alpha,\emptyset)$, let $\mathcal{M}^{J}_{u}$ denote the connected component of $\mathcal{M}^{J}(\alpha,\emptyset)$ containing $u$. The next lemma plays important roles in what follows. **Lemma 39**. *Let $\alpha$ be an ECH generator with $\langle U_{J,z}\alpha,\emptyset \rangle\neq 0$. Suppose that $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ is a $J$-holmorphic curve counted by $U_{J,z}$. Then the quotient space $\mathcal{M}^{J}_{u}/\mathbb{R}$ is compact and thus diffeomorphic to $S^{1}$. In addition, for any section $s:\mathcal{M}^{J}_{u}/\mathbb{R}\to \mathcal{M}^{J}_{u}$, $\bigcup_{t\in S^{1}}\mathrm{\pi}(s(t))$ gives a rational open book decomposition on $Y \cong L(3,1)$ where $\mathrm{\pi}:\mathbb{R}\times Y \to Y$ is the projection.* To prove Lemma [Lemma 39](#lem:openbook){reference-type="ref" reference="lem:openbook"}, we introduce the following proposition given in [@CHP]. **Proposition 40** ([@CHP]). *Let $(Y,\lambda)$ be a non-degenerate contact three-manifold, and let $J$ be a compatible almost complex structure on $\mathbb{R}\times Y$. Let $C$ be an irreducible $J$-holomorphic curve in $\mathbb{R}\times Y$ such that:* *Every $C\in \mathcal{M}^{J}_{C}$ is embedded in $\mathbb{R}\times Y$.* *$C$ is of genus $0$, has no end asymptotic to a positive hyperbolic orbit and $\mathrm{ind}(C)=2$.* *$C$ does not have two positive ends, or two negative ends, at covers of the same simple Reeb orbit.* *Let $\gamma$ be a simple orbit with rotation nnumber $\theta \in \mathbb{R}\backslash \mathbb{Q}$. If $C$ has a positive end at an $m$-fold cover of $\gamma$, then $\mathrm{gcd}(m,\lfloor m\theta \rfloor)=1$. If $C$ has a negative end at an $m$-fold cover of $\gamma$, then $\mathrm{gcd}(m,\lceil m\theta \rceil)=1$.* *$\mathcal{M}^{J}_{C}/\mathbb{R}$ is compact.* *Then $\pi(C)\subset Y$ is a global surface of section for the Reeb flow. In addition, for any section $s:\mathcal{M}^{J}_{C}/\mathbb{R}\to \mathcal{M}^{J}_{C}$, $\bigcup_{t\in S^{1}}\pi(s(t))$ gives a rational open book decomposition supporting the contact structure $\mathrm{Ker}\lambda=\xi$.* ***Proof of Lemma [Lemma 39](#lem:openbook){reference-type="ref" reference="lem:openbook"}**.* It follows from Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} that any $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ counted by $U_{J,z}$ satisfies (1), (2), (3) in Proposition [Proposition 40](#openbookhut){reference-type="ref" reference="openbookhut"}. To prove that $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ satisfies (4) in Proposition [Proposition 40](#openbookhut){reference-type="ref" reference="openbookhut"}, we recall the following property. **Claim 41**. *Let $\theta\in \mathbb{R}\backslash \mathbb{Z}$. For any $q\in S_{-\theta}$ , $\mathrm{gcd}(q,\lfloor q\theta \rfloor)=1$.* ***Proof of Claim [Claim 41](#gcd){reference-type="ref" reference="gcd"}**.* Claim [Claim 41](#gcd){reference-type="ref" reference="gcd"} follows directly from the definition of $S_{\theta}$. Here, note that $-\lfloor\theta q \rfloor=\lceil -q \theta \rceil$. 
◻ According to Proposition [Proposition 19](#partitioncondition){reference-type="ref" reference="partitioncondition"}, if an end of $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ is asymptotic to a simple orbit $\gamma$ with some multiplicity, the multiplicity is in $S_{-\theta}$ where $\theta$ is the monodromy angle of $\gamma$. Therefore it follows that $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ satisfies (4) in Proposition [Proposition 40](#openbookhut){reference-type="ref" reference="openbookhut"}. Finally, we check that $u\in \mathcal{M}^{J}(\alpha,\emptyset)$ satisfies (5) in Proposition [Proposition 40](#openbookhut){reference-type="ref" reference="openbookhut"}. Suppose that $\mathcal{M}^{J}_{u}/\mathbb{R}$ is not compact. Let $\overline{\mathcal{M}^{J}_{u}/\mathbb{R}}$ denote the compactified space of $\mathcal{M}^{J}_{u}/\mathbb{R}$ in the sense of SFT compactness. Choose $\Bar{u}\in \overline{\mathcal{M}^{J}_{u}/\mathbb{R}} \backslash (\mathcal{M}^{J}_{u}/\mathbb{R})$. $\Bar{u}$ consists of $J$-holomorphic curves distributed over several floors. Let $u'$ be the component of $\Bar{u}$ in the lowest floor. Then there is an orbit set $\beta$ such that $u'\in \mathcal{M}^{J}(\beta,\emptyset)/\mathbb{R}$. By the additivity of the ECH index, we have $I(\beta,\emptyset)=1$. This contradicts Proposition [Proposition 32](#prp:immersed){reference-type="ref" reference="prp:immersed"}. Thus $\mathcal{M}^{J}_{u}/\mathbb{R}$ is compact. ◻ **Lemma 42**. *$L(3,1)$ does not admit rational open book decompositions coming from (6), (7) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}.* ***Proof of Lemma [Lemma 42](#lem:ex1){reference-type="ref" reference="lem:ex1"}**.* It is obvious that if a 3-manifold $Y$ has an open book decomposition such that each page is an embedded disk, then $Y$ is diffeomorphic to $S^{3}$. Therefore (7) is excluded. Next, we consider (6) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. For $\gamma_{1}$ and $\gamma_{2}$, we take Martinet tubes $F_{i}:\mathbb{R}/T_{\gamma_{i}}\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U_{i}}$ for a sufficiently small $\delta>0$ where $\gamma_{i}\subset U_{i}$. Since $\pi(s(t))$ is embedded and connected on $Y$, $F_{i}^{-1}(\mathbb{R}/T_{\gamma_{i}}\mathbb{Z}\times\partial \mathbb{D}_{\delta} \cap \pi(s(t)))$ is a $(2,p_{i})$-cable for odd integers $p_{i}\in \mathbb{Z}$ for any $t\in S^{1}$. In addition, the gluing map from $F_{1}(\mathbb{R}/T_{\gamma_{1}}\mathbb{Z}\times\partial \mathbb{D}_{\delta})$ to $F_{2}(\mathbb{R}/T_{\gamma_{2}}\mathbb{Z}\times\partial \mathbb{D}_{\delta})$ which maps the $(2,p_{1})$-cabling curve to the $(-2,-p_{2})$-cabling curve along $\mathrm{pr}_{2}(s(t))$ for each $t\in S^{1}$ recovers $Y \cong L(3,1)$ (note the sign). We note that the gluing map is described as $$\begin{pmatrix} a & -3 \\ c & d \\ \end{pmatrix}$$ in standard longitude-meridian coordinates on the torus. Since the matrix sends a $(2,p_{1})$-cabling curve to a $(-2,-p_{2})$-cabling curve, it follows from the first line of the matrix that $-2=2a-3p_{1}$. Since $p_{1}$ is odd, this cannot occur. Thus (6) is excluded. ◻ **Lemma 43**. 
*Any open book decomposition coming from (5) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} does not support $(L(3,1),\xi_{\mathrm{std}})$.* ***Proof of Lemma [Lemma 43](#lem:ex2){reference-type="ref" reference="lem:ex2"}**.* As mentioned, for any section $s:\mathcal{M}^{J}_{u}/\mathbb{R}\to \mathcal{M}^{J}_{u}$, $\bigcup_{t\in S^{1}} \pi(s(t))$ gives a rational open book decomposition of $L(3,1)$ supporting $\xi_{\mathrm{std}}$. But it is well-known that the contact structure on $L(3,1)$ supported by an open book decomposition such that each page is an embedded annulus is overtwisted. This completes the proof. ◻ **Lemma 44**. *Suppose that $\alpha$ satisfies either (1) or (2) or (3) or (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. Then any simple orbit $\gamma_{i}$ in $\alpha$ is in $\mathcal{S}_{3}$.* ***Proof of Lemma [Lemma 44](#lem;(1)){reference-type="ref" reference="lem;(1)"}**.* It follows from Theorem [\[fundament\]](#fundament){reference-type="ref" reference="fundament"} that if $\alpha$ satisfies (2) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}, the orbit $\gamma$ in $\alpha$ is in $\mathcal{S}_{3}$. Next, we consider (1), (3), (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. We note that the monodromy of a (rational) open book decomposition coming from (1), (3) or (4) is unique up to isotopy. Indeed, in the case of (3) or (4), it follows from straightforward arguments since each page is of annulus type. In the case of (1), it is more complicated but follows from the classification of contact structures supported by planar open book decompositions with three boundary components given in [@Ar]. Therefore, it suffices to check the statement on specific models. For this purpose, we give the specific (rational) open book decompositions as Milnor fibrations. Let $S^{3}:=\{(z_{1},z_{2})|\,\,|z_{1}|^2+|z_{2}|^{2}=1\,\,\} \subset \mathbb{C}^{2}$ denote the unit sphere. Recall that $(S^{3},\lambda_{0}|_{S^{3}})$ is a contact 3-sphere with tight contact structure whose Reeb vector field is given by the derivative of the action of multiplying by $e^{2\pi i t}$. In this case, the Reeb flow is periodic and any simple periodic orbit is a Hopf fiber. In addition, it is obvious that any Hopf fiber is in $\mathcal{S}_{1}$. Let $(L(3,1),\lambda_{0}|_{L(3,1)})$ denote the contact manifold obtained by taking the quotient of $(S^{3},\lambda_{0}|_{S^{3}})$ under the action $(z_{1},z_{2}) \to e^{\frac{2\pi i}{3}}(z_{1},z_{2})$. Under the action, any Hopf fiber is preserved. Therefore the Reeb flow on $(L(3,1),\lambda_{0}|_{L(3,1)})$ is periodic. In addition, any simple periodic orbit is tangent to the image of a Hopf fiber under $S^{3}\to L(3,1)$ and thus obviously in $\mathcal{S}_{3}$. First, we describe (1) as a Milnor fibration. Consider the map $f:\mathbb{C}^2\to \mathbb{C}$ with $f(z_{1},z_{2})=z_{1}^{3}+z_{2}^{3}$. It is easy to check that $g=f/|f|:S^{3}\backslash f^{-1}(0)\to S^{1}$ gives an open book decomposition supporting the tight contact structure and the binding consists of periodic orbits of $(S^{3},\lambda_{0}|_{S^{3}})$. In addition, for each $\theta \in S^{1}$, $g^{-1}(\theta)$ is connected and of genus 1 with three boundary components. Since $f(e^{\frac{2\pi i}{3}}z_{1},e^{\frac{2\pi i}{3}}z_{2})=f(z_{1},z_{2})$, $g$ induces, by taking a quotient space, an open book decomposition on $(L(3,1),\lambda_{0}|_{L(3,1)})$. 
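Before drawing the conclusion about the quotient pages in the next step, we note that the genus count can be confirmed by a short Euler-characteristic computation. The sketch below is only an illustration; it assumes (as read off from the discussion above) that the $\mathbb{Z}/3$-action is free on each fiber and preserves each of the three boundary circles.

```python
# Euler-characteristic check for the quotient page (illustration only).
def genus(chi, boundary_components):
    # chi = 2 - 2g - b for a compact orientable surface with b boundary circles
    twice_genus = 2 - boundary_components - chi
    assert twice_genus % 2 == 0
    return twice_genus // 2

chi_fiber = 2 - 2 * 1 - 3        # the genus-1 Milnor fiber of z1^3 + z2^3 has 3 boundary circles
chi_quotient = chi_fiber // 3    # a free Z/3-action divides the Euler characteristic by 3
print(genus(chi_quotient, 3))    # 0: the quotient page is a genus-0 surface with 3 boundary circles
```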
It follows that each fiber of the induced open book decomposition is of genus $0$ with three boundary components consisting of periodic orbits on $(L(3,1),\lambda_{0}|_{L(3,1)})$, which gives a description of (1). Therefore any simple orbit $\gamma_{i}$ in $\alpha$ is in $\mathcal{S}_{3}$. Next, we describe (3) and (4). Consider the map $f:\mathbb{C}^2\to \mathbb{C}$ with $f(z_{1},z_{2})=z_{1}z_{2}^{2}$. Then $g=f/|f|:S^{3}\backslash f^{-1}(0)\to S^{1}$ gives an open book decomposition supporting the tight contact structure and the binding consists of periodic orbits of $(S^{3},\lambda_{0}|_{S^{3}})$ (see the remark below). In addition, since $f(e^{\frac{2\pi i}{3}}z_{1},e^{\frac{2\pi i}{3}}z_{2})=f(z_{1},z_{2})$, $g$ induces, by taking a quotient space, an open book decomposition on $(L(3,1),\lambda_{0}|_{L(3,1)})$ whose binding consists of periodic orbits on $(L(3,1),\lambda_{0}|_{L(3,1)})$. Moreover, for any $\theta\in S^{1}$, $g^{-1}(\theta)$ is of genus $0$ with two boundary components and thus each page of the induced open book decomposition of $(L(3,1),\xi_{\mathrm{std}})$ is of genus $0$ with two boundary components which are periodic orbits of $(L(3,1),\lambda_{0}|_{L(3,1)})$, which gives a description of (3) and (4). This means that if $\alpha$ satisfies (3) or (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}, then any simple orbit $\gamma_{i}$ in $\alpha$ is in $\mathcal{S}_{3}$. ◻ **Remark 45**. Consider the map $g=f/|f|:S^{3}\backslash f^{-1}(0)\to S^{1}$ with $f(z_{1},z_{2})=z_{1}z_{2}^{2}$. Then for $\theta \in S^{1}$, the fiber $g^{-1}(\theta)$ is the set $\{(re^{2\pi i \theta_{1}},\sqrt{1-r^2}e^{2\pi i \theta_{2}})|\,\theta_{1}+2\theta_{2}=\theta\,\}$. **Remark 46**. Let $p_{1}:M\backslash B_{1} \to S^{1}$ and $p_{2}:M\backslash B_{2} \to S^{1}$ be rational open book decompositions supporting a contact manifold $(M,\xi)$. Suppose that the pages are diffeomorphic to each other and the monodromies are the same. Then the binding links $B_{1}$ and $B_{2}$ are the same as transversal links. Indeed, let $U_{i}$ be a sufficiently small tubular neighborhood of $B_{i}$. Then we can construct a diffeomorphism $f:M \to M$ such that $f(B_{1})=B_{2}$ and $f$ maps each page of $p_{1}:M\backslash U_{1}\to S^{1}$ to a page of $p_{2}:M\backslash U_{2}\to S^{1}$. It follows from standard arguments that we can construct an isotopy of contact structures from $f_{*}(\xi)$ to $\xi$ so that the binding link $f(B_{1})=B_{2}$ remains transverse to the contact structures throughout the isotopy. ***Proof of Proposition [Proposition 30](#main;prop){reference-type="ref" reference="main;prop"}**.* Having Lemma [Lemma 42](#lem:ex1){reference-type="ref" reference="lem:ex1"}, Lemma [Lemma 43](#lem:ex2){reference-type="ref" reference="lem:ex2"} and Lemma [Lemma 44](#lem;(1)){reference-type="ref" reference="lem;(1)"}, it suffices to exclude (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. Suppose that $\alpha$ satisfies (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"}. Then there are two elliptic orbits $\gamma_{1}$ and $\gamma_{2}$ such that $\alpha=(\gamma_{1},2)\cup (\gamma_{2},1)$, $\mu_{\mathrm{disk}}(\gamma_{1}^{6})=5$ and $\mu_{\mathrm{disk}}(\gamma_{2}^{3})=5$. Note that $\mu_{\mathrm{disk}}(\gamma_{1}^{6})=5$ means $\mu_{\mathrm{disk}}(\gamma_{1}^3)=3$. In addition, according to Lemma [Lemma 44](#lem;(1)){reference-type="ref" reference="lem;(1)"}, we have $\gamma_{2}\in \mathcal{S}_{3}$. 
Recall that $\gamma_{2}\in \mathcal{S}_{3}$ means that there is a rational Seifert surface $u:\mathbb{D}\to L(3,1)$ with $u(e^{2\pi i t})=\gamma_{2}(3T_{\gamma_{2}}t)$ and $sl_{\xi}^{\mathbb{Q}}(\gamma_{2})=-\frac{1}{3}$. Take a Martinet tube $F:\mathbb{R}/T_{\gamma_{2}}\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U}$ for a sufficiently small $\delta>0$ onto a small open neighbourhood $\gamma_{2} \subset \Bar{U}$. Let $\tau_{F}:\gamma_{2}^{*}\xi \to \mathbb{R}/T_{\gamma_{2}}\mathbb{Z}\times \mathbb{R}^{2}$ be the trivialization induced by $F$. We may choose $F$ so that $\mu_{\tau_{F}}(\gamma_{2})=1$. Note that $F^{-1}(u(\mathbb{D})\cap\partial \Bar{U})$ is a $(3,r)$ cable such that $r=3k-1$ for some $k\in \mathbb{Z}$ with respect to the coordinate of $\mathbb{R}/T_{\gamma_{2}}\mathbb{Z}\times \mathbb{D}$. It follows from Lemma [Lemma 31](#lem;key){reference-type="ref" reference="lem;key"} that $$\mu_{\tau_{F}}(\gamma_{2}^{3})-2r-6sl_{\xi}^{\mathbb{Q}}(\gamma_{2})=\mu_{\mathrm{disk}}(\gamma_{2}^{3}).$$ In this case, it follows from $r=3k-1$ and $sl_{\xi}^{\mathbb{Q}}(\gamma_{2})=-\frac{1}{3}$ that $\mu_{\tau_{F}}(\gamma_{2}^{3})-2r-6sl_{\xi}^{\mathbb{Q}}(\gamma_{2})=\mu_{\tau_{F}}(\gamma_{2}^{3})+4-6k$. Since $\gamma_{2}$ is elliptic, there is $\theta \in \mathbb{R}\backslash \mathbb{Q}$ such that $\mu_{\tau_{F}}(\gamma_{2}^m)=2\lfloor m\theta \rfloor+1$ for every $m \in \mathbb{Z}_{>0}$. Recall that we take $\tau_{F}$ so that $\mu_{\tau_{F}}(\gamma_{2})=1$. Hence $0< \theta <1$ and so we have $\mu_{\tau_{F}}(\gamma_{2}^3)=1$ or $3$ or $5$. Since $\mu_{\mathrm{disk}}(\gamma_{2}^3)=5$, the equality $\mu_{\tau_{F}}(\gamma_{2}^{3})+4-6k=\mu_{\mathrm{disk}}(\gamma_{2}^{3})$ holds only if $k=0$ and $\mu_{\tau_{F}}(\gamma_{2}^3)=1$. Let $u\in \mathcal{M}^{J}(\alpha,\emptyset)$. From ([\[eq:ineq\]](#eq:ineq){reference-type="ref" reference="eq:ineq"}), it follows that $$\begin{split} 3\mathrm{ind}(u)=&-3(2-2g(u)-2)+2c_{\tau}(\xi|_{3[u]})+ 3\mu_{\tau_{F}}(\gamma_{2})+3\mu_{\tau}(\gamma_{1}^{2})\\ =&-6+\mu_{\mathrm{disk}}(\gamma_{1}^{6})+\mu_{\mathrm{disk}}(\gamma_{2}^{3})\\&+(3\mu_{\tau_{F}}(\gamma_{2})-\mu_{\tau_{F}}(\gamma_{2}^{3})+3)+(3\mu_{\tau}(\gamma_{1}^2)-\mu_{\tau}((\gamma_{1}^{2})^3)+3)\\ \geq&-6+\mu_{\mathrm{disk}}(\gamma_{1}^{6})+\mu_{\mathrm{disk}}(\gamma_{2}^{3})+(3\mu_{\tau_{F}}(\gamma_{2})-\mu_{\tau_{F}}(\gamma_{2}^{3})+3)+1 \end{split}$$ Here $\tau$ is a trivialization on $\gamma_{1}$ and we use Lemma [Lemma 33](#lem:iterate){reference-type="ref" reference="lem:iterate"}. Since $\mu_{\tau_{F}}(\gamma_{2})=\mu_{\tau_{F}}(\gamma_{2}^3)=1$, we have $3\mu_{\tau_{F}}(\gamma_{2})-\mu_{\tau_{F}}(\gamma_{2}^{3})+3=5$. In addition, since $\mu_{\mathrm{disk}}(\gamma_{1}^{6})=\mu_{\mathrm{disk}}(\gamma_{2}^{3})=5$, we have $\mu_{\mathrm{disk}}(\gamma_{1}^{6})+\mu_{\mathrm{disk}}(\gamma_{2}^{3})=10$. In summary, we have $$-6+\mu_{\mathrm{disk}}(\gamma_{1}^{6})+\mu_{\mathrm{disk}}(\gamma_{2}^{3})+(3\mu_{\tau_{F}}(\gamma_{2})-\mu_{\tau_{F}}(\gamma_{2}^{3})+3)+1=10.$$ But $3\mathrm{ind}(u)=6$. This contradicts the inequality. This means that (4) in Lemma [Lemma 36](#lem:indechcomp){reference-type="ref" reference="lem:indechcomp"} is impossible. This completes the proof. ◻ # Proof of the main theorem under non-degeneracy What follows in this section is written in almost the same way as [@Shi Section 4], but in more detail. Let $\langle \alpha_{1}+...+\alpha_{k} \rangle$ denote the element in $ECH(Y,\lambda)$ represented by a sum of ECH generators $\alpha_{1}+...+\alpha_{k}$ with $\partial_{J}(\alpha_{1}+...+\alpha_{k})=0$. 
**Lemma 47**. *Let $(L(3,1),\lambda)$ be a dynamically convex non-degenerate contact manifold with $\lambda \wedge d\lambda>0$. Let $\alpha_{1},...,\alpha_{k}$ be ECH generators with $[\alpha_{i}]=0$ and $I(\alpha_{i},\emptyset)=2$ for $i=1,...,k$. Suppose that $\partial_{J}(\alpha_{1}+...+\alpha_{k})=0$ and $0 \neq \langle \alpha_{1}+...+\alpha_{k} \rangle \in ECH_{2}(L(3,1),\lambda,0)$. Then there exists $i$ such that $\langle U_{J,z}\alpha_{i},\emptyset \rangle\neq 0$.* ***Proof of Lemma [Lemma 47](#existenceofhol){reference-type="ref" reference="existenceofhol"}**.* The proof is the same as that of [@Shi Lemma 4.6]. ◻ **Lemma 48**. *Let $(L(p,1),\lambda)$ be a (not necessarily dynamically convex) non-degenerate contact manifold with $\lambda \wedge d\lambda>0$. Let $\alpha_{\gamma}=(\gamma,p)$ for $\gamma \in \mathcal{S}_{p}$. If $\mu_{\mathrm{disk}}(\gamma^{p})=3$, then $\alpha_{\gamma}$ is an ECH generator. In addition $I(\alpha_{\gamma},\emptyset)=2$.* ***Proof of Lemma [Lemma 48](#lem:ECHgen){reference-type="ref" reference="lem:ECHgen"}**.* According to Corollary [Corollary 6](#main;cor){reference-type="ref" reference="main;cor"}, $\gamma$ is elliptic and hence by definition $\alpha_{\gamma}=(\gamma,p)$ is an ECH generator. Take a Martinet tube $F:\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U}$ for a sufficiently small $\delta>0$ onto a small open neighbourhood $\gamma \subset \Bar{U}$. Recall that $\gamma\in \mathcal{S}_{p}$ means that there is a rational Seifert surface $u:\mathbb{D}\to L(p,1)$ with $u(e^{2\pi i t})=\gamma(pT_{\gamma}t)$ and $sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}$. We may take $F$ so that $F^{-1}(u(\mathbb{D})\cap\partial \Bar{U})$ is a $(p,p-1)$-cable with respect to the coordinate of $\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta}$. Let $\tau_{F}:\gamma^{*}\xi \to \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^{2}$ be the trivialization induced by $F$. Let $\theta \in \mathbb{R}\backslash \mathbb{Q}$ denote the monodromy angle of $\gamma$ with respect to $\tau_{F}$. In this case, $\mu_{\tau_{F}}(\gamma^{k})=2\lfloor k \theta \rfloor+1$ for any $k\in \mathbb{Z}_{>0}$. It follows from Lemma [Lemma 31](#lem;key){reference-type="ref" reference="lem;key"} that $$\mu_{\tau_{F}}(\gamma^{p})+4-2p=2\lfloor p \theta \rfloor +5-2p=\mu_{\mathrm{disk}}(\gamma^{p})=3,$$ and hence $$\lfloor p \theta \rfloor=p-1.$$ This implies that $1-\frac{1}{p}<\theta<1$. In particular, this means that $\mu_{\tau_{F}}(\gamma^k)=2(k-1)+1$ for any $1 \leq k \leq p$. Therefore we have $$\sum_{1 \leq k \leq p}\mu_{\tau_{F}}(\gamma^{k})=p^2.$$ To compute the relative self intersection number $Q_{\tau_{F}}$, take another rational Seifert surface $u':\mathbb{D}\to Y$ of $\gamma^{p}$ so that $u(\mathring{\mathbb{D}})\cap u'(\mathring{\mathbb{D}})=\emptyset$. This is possible because $u(\mathbb{D})$ is a page of a rational open book decomposition. Now, we take immersed $S_{1},S_{2}\subset [0,1]\times Y$ which are transverse to $\{0,1\}\times Y$ so that $\mathrm{pr}_{2}(S_{1})=u(\mathbb{D})$ and $\mathrm{pr}_{2}(S_{2})=u'(\mathbb{D})$ where $\mathrm{pr}_{2}:[0,1]\times Y \to Y$ is the projection. Set $\{Z\}:=H_{2}(Y,\alpha_{\gamma},\emptyset)$. It follows from the definition that we have $Q_{\tau_{F}}(Z)=-l_{\tau_{F}}(S_{1},S_{2})+\#(S_{1}\cap S_{2})$ where $l_{\tau_{F}}(S_{1},S_{2})$ is the linking number. 
More precisely, $l_{\tau_{F}}(S_{1},S_{2})$ is one half the signed number of crossings of $F^{-1}(\mathrm{pr}_{2}(S_{1}\cap\{1-\epsilon\}\times Y))$ with $F^{-1}(\mathrm{pr}_{2}(S_{2}\cap\{1-\epsilon\}\times Y))$ for a small $\epsilon>0$ in the projection $(\mathrm{id},\mathrm{pr}_{1}):\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^{2}\to \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}$ where we naturally assume that $\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta}\subset \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^{2}$ and $\mathrm{pr}_{1}:\mathbb{R}^{2}\to \mathbb{R}$ is the projection to the first coordinate. For more details, see [@H1]. In this situation, it follows from the choice of $S_{1}$ and $S_{2}$ that $\#(S_{1}\cap S_{2})=0$. In addition, since we take $F$ so that $F^{-1}(u(\mathbb{D})\cap\partial \Bar{U})$ is a $(p,p-1)$-cable, we have $l_{\tau_{F}}(S_{1},S_{2})=p(p-1)$. In summary, we have $Q_{\tau_{F}}(Z)=-p(p-1)$. Finally, we compute the relative first Chern number $c_{\tau_{F}}$. Since $\tau_{\mathrm{disk}}:(\gamma^p)^{*}\xi \to \mathbb{R}/pT_{\gamma}\mathbb{Z}\times \mathbb{R}^2$ extends to a trivialization by definition, we have $c_{\tau_{F}}=\mathrm{wind}(\tau_{F},\tau_{\mathrm{disk}})$. It follows from ([\[wind\]](#wind){reference-type="ref" reference="wind"}) that $\mathrm{wind}(\tau_{F},\tau_{\mathrm{disk}})=2-p$ and hence $c_{\tau_{F}}=\mathrm{wind}(\tau_{F},\tau_{\mathrm{disk}})=2-p$. Summarizing the above results, we have $I(\alpha_{\gamma},\emptyset)=(2-p)+(-p(p-1))+p^2=2$. This completes the proof. ◻ **Lemma 49**. *Let $(L(p,1),\lambda)$ be a non-degenerate dynamically convex contact manifold with $\lambda \wedge d\lambda>0$. Let $\alpha_{\gamma}=(\gamma,p)$ for $\gamma \in \mathcal{S}_{p}$. If $\mu_{\mathrm{disk}}(\gamma^{p})=3$, then there is no somewhere injective $J$-holomorphic curve satisfying the following:* *There is only one positive end. In addition, the positive end is asymptotic to $\gamma$ with multiplicity $p$.* *There is at least one negative end.* *Any puncture on the domain is either a positive or a negative end.* ***Proof of Lemma [Lemma 49](#lem:boundaryop){reference-type="ref" reference="lem:boundaryop"}**.* In this proof, we set $Y=L(p,1)$. Suppose that $h:(\Sigma,j)\to (\mathbb{R}\times Y,J)$ is a somewhere injective curve satisfying the properties. Recall that $\gamma\in \mathcal{S}_{p}$ means that there is a rational Seifert surface $u:\mathbb{D}\to L(p,1)$ with $u(e^{2\pi i t})=\gamma(pT_{\gamma}t)$ and $sl_{\xi}^{\mathbb{Q}}(\gamma)=-\frac{1}{p}$ such that $u(\mathbb{D})$ is a Birkhoff section for $X_{\lambda}$ of disk type. We note that $H_{1}(Y\backslash \gamma) \cong \mathbb{Z}$. For a sufficiently large $s>>0$, consider $\pi(h(\Sigma)\cap ([-s,s]\times Y))\subset Y$. Since $\#(\gamma \cap \pi(h(\Sigma)\cap ([-s,s]\times Y)))\geq 0$ because of positivity of intersection, it follows topologically that $\#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{s\}\times Y)))\geq \#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{-s\}\times Y)))$. This contradicts the next claim. **Claim 50**. *$0\geq \#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{s\}\times Y)))$* *$\#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{-s\}\times Y)))\geq 1$* ***Proof of Claim [Claim 50](#cla:wind){reference-type="ref" reference="cla:wind"}**.* Take a Martinet tube $F:\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta} \to \Bar{U}$ for a sufficiently small $\delta>0$ onto a small open neighbourhood $\gamma \subset \Bar{U}$. 
To prove Claim [Claim 50](#cla:wind){reference-type="ref" reference="cla:wind"}, we recall the following property. **Claim 51**. *Let $g:([0,\infty)\times S^{1},j_{0}) \to (\mathbb{R}\times Y,J)$ be a $J$-holomorphic curve asymptotic to $\gamma$ with multiplicity $m$ where $S^{1}=\mathbb{R}/\mathbb{Z}$ and $j_{0}$ is the standard complex structure. Let $\mathrm{pr}_{\mathbb{D}_{\delta}}$ denote the projection $\mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{D}_{\delta}\to \mathbb{D}_{\delta}$. Define $w(g)$ by the winding number of the map $\mathbb{R}/\mathbb{Z}\to \mathbb{D}_{\delta}$ which sends $t\in \mathbb{R}/\mathbb{Z}$ to $\mathrm{pr}_{\mathbb{D}_{\delta}}\circ F^{-1}(g(s,t))$. Then $$w(g)\leq \lfloor \frac{\mu_{\tau_{F}}(\gamma^{m}) }{2} \rfloor.$$* The above claim follows immediately from the properties of the Conley-Zehnder index and the winding numbers of eigenfunctions of a certain self-adjoint operator. See [@HWZ1] and [@HWZ4]. We may take $F$ so that $F^{-1}( u (\mathbb{D})\cap\partial \Bar{U})$ is a $(p,p-1)$ cable. Let $\tau_{F}:\gamma^{*}\xi \to \mathbb{R}/T_{\gamma}\mathbb{Z}\times \mathbb{R}^{2}$ be the trivialization induced by $F$. Let $\theta \in \mathbb{R}\backslash \mathbb{Q}$ denote the monodromy angle of $\gamma$ with respect to $\tau_{F}$. That is, $\mu_{\tau_{F}}(\gamma^{k})=2\lfloor k \theta \rfloor+1$ for any $k\in \mathbb{Z}_{>0}$. It follows from the same argument that $1-\frac{1}{p}<\theta<1$. In particular, this means that $\lfloor k \theta \rfloor=k-1$ for any $1 \leq k \leq p$. First, we consider $F^{-1}(\pi(h(\Sigma)\cap (\{s\}\times Y))\cap\partial \Bar{U})$. Let $\mathrm{pr}_{\mathbb{D}_{\delta}}:\mathbb{R}/ T_{\gamma} \mathbb{Z}\times \mathbb{D}_{\delta}\to \mathbb{D}_{\delta}$ denote the projection. Then it follows that the winding number of $\mathrm{pr}_{\mathbb{D}_{\delta}} \circ F^{-1}(\pi(h(\Sigma)\cap (\{s\}\times Y))\cap\partial \Bar{U})\subset\mathbb{D}_{\delta}$ is at most $\lfloor \frac{\mu_{\tau_{F}}(\gamma^p)}{2} \rfloor=p-1$. This means that the slope of $F^{-1}(\pi(h(\Sigma)\cap (\{s\}\times Y))\cap\partial \Bar{U})$ is at most $\frac{p-1}{p}$. Since the multiplicity of the positive end is $p$, it follows that $0\geq \#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{s\}\times Y)))$. This proves (1). Next, we consider $F^{-1}(\pi(h(\Sigma)\cap (\{-s\}\times Y))\cap\partial \Bar{U})$. We note that the total multiplicity of negative ends asymptotic to $\gamma$ is at most $p-1$ by Stokes' theorem. Suppose that a negative end of $h$ is asymptotic to $\gamma$ with multiplicity $1\leq k \leq p-1$. Then it follows that the winding number of $\mathrm{pr}_{\mathbb{D}_{\delta}} \circ F^{-1}(\pi(h(\Sigma)\cap (\{-s\}\times Y))\cap\partial \Bar{U})\subset\mathbb{D}_{\delta}$ is at least $\lceil\frac{\mu_{\tau_{F}}(\gamma^k)}{2} \rceil=k$. This means that the slope of $F^{-1}(\pi(h(\Sigma)\cap (\{-s\}\times Y))\cap\partial \Bar{U})$ is at least $1$. On the other hand, $F^{-1}( u (\mathbb{D})\cap\partial \Bar{U})$ is a $(p,p-1)$ cable and hence the slope is $\frac{p-1}{p}$. This means that the negative end intersects $u(\mathbb{D})$ positively. Therefore, if there is a negative end asymptotic to $\gamma$, we have $\#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{-s\}\times Y)))\geq 1$. Finally, suppose that there is no negative end asymptotic to $\gamma$. Since $u(\mathbb{D})$ is a Birkhoff section for $X_{\lambda}$, any periodic orbit other than $\gamma$ intersects $u(\mathbb{D})$ positively. 
Therefore it follows from the assumption that $\#(u(\mathbb{D}) \cap \pi(h(\Sigma)\cap (\{-s\}\times Y)))\geq 1$. This completes the proof. ◻ Having Claim [Claim 50](#cla:wind){reference-type="ref" reference="cla:wind"}, Lemma [Lemma 49](#lem:boundaryop){reference-type="ref" reference="lem:boundaryop"} follows. ◻ **Lemma 52**. *For any $\gamma \in \mathcal{S}_{p}$ with $\mu_{\mathrm{disk}}(\gamma^{p})=3$, $\partial_{J} \alpha_{\gamma} =0$.* ***Proof of Lemma [Lemma 52](#lem:boundvanish){reference-type="ref" reference="lem:boundvanish"}**.* Let $\beta$ be an ECH generator with $I(\alpha_{\gamma},\beta)=1$. Let $h \in \mathcal{M}^{J}(\alpha_{\gamma},\beta)$. It follows from the partition condition that the number of positive ends of $h$ is one and the positive end is asymptotic to $\gamma$ with multiplicity $p$. According to Lemma [Lemma 49](#lem:boundaryop){reference-type="ref" reference="lem:boundaryop"}, such an $h$ does not exist. This means that $\mathcal{M}^{J}(\alpha_{\gamma},\beta) = \emptyset$. Hence we have $\partial_{J} \alpha_{\gamma} =0$. ◻ Now, we focus on $Y \cong L(3,1)$. Since $\partial_{J} \alpha_{\gamma} =0$ for any $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$, we may consider $\alpha_{\gamma}$ as an element in $ECH_{2}(Y,\lambda,0)$. The following lemma means that $\alpha_{\gamma}$ is not zero in $ECH_{2}(Y,\lambda,0)$. **Lemma 53**. *For any $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$, $0\neq \langle \alpha_{\gamma} \rangle = \langle(\gamma,3) \rangle \in ECH_{2}(Y,\lambda,0)$.* To prove Lemma [Lemma 53](#nonzero){reference-type="ref" reference="nonzero"}, define a set $\mathcal{G}$ consisting of ECH generators as $$\mathcal{G}:=\{\alpha|\,\,\langle U_{J,z} \alpha,\emptyset \rangle\neq 0 \,\, \}.$$ Note that $\langle U_{J,z} \alpha,\emptyset \rangle\neq 0$ if and only if $\alpha \in \mathcal{G}$. **Claim 54**. *For any $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$, $\alpha_{\gamma}\in \mathcal{G}$.* ***Proof of Claim [Claim 54](#inG){reference-type="ref" reference="inG"}**.* While this proof is essentially the same as that of [@Shi Lemma 4.6], we give it below. Recall that each page of the rational open book decomposition constructed in Theorem [\[fundament\]](#fundament){reference-type="ref" reference="fundament"}( [@HrS]) is the projection of a $J$-holomorphic curve from $(\mathbb{C},i)$ to $L(3,1)$. Moreover, in this case, $\mathcal{M}^{J}(\alpha_{\gamma},\emptyset)/\mathbb{R}$ is compact and any two distinct elements $u_{1},u_{2}\in \mathcal{M}^{J}(\alpha_{\gamma},\emptyset)$ have no intersection point. Hence $\mathcal{M}^{J}(\alpha_{\gamma},\emptyset)/\mathbb{R}\cong S^{1}$ and for a section $s:\mathcal{M}^{J}(\alpha_{\gamma},\emptyset)/\mathbb{R}\to \mathcal{M}^{J}(\alpha_{\gamma},\emptyset)$, $\bigcup_{\tau\in \mathcal{M}^{J}(\alpha_{\gamma},\emptyset)/\mathbb{R}}\overline{\pi(s(\tau))} \to \mathcal{M}^{J}(\alpha_{\gamma},\emptyset)/\mathbb{R}$ gives a (rational) open book decomposition of $L(3,1)$. This implies that for $z\in L(3,1)$ not on $\gamma$, there is exactly one $J$-holomorphic curve in $\mathcal{M}^{J}(\alpha_{\gamma},\emptyset)$ through $(0,z)\in \mathbb{R}\times L(3,1)$. Therefore we have $\langle U_{J,z} \alpha_{\gamma},\emptyset \rangle\neq 0$. ◻ **Claim 55**. *Let $\alpha \in \mathcal{G}$. Suppose that $\beta$ is an ECH generator with $I(\beta,\alpha)=1$. 
Then $$\sum_{\alpha\in \mathcal{G}}\langle \partial_{J}\beta,\alpha \rangle=0$$* ***Proof of Claim [Claim 55](#index1bound){reference-type="ref" reference="index1bound"}**.* As with Claim [Claim 54](#inG){reference-type="ref" reference="inG"}, this proof is essentially the same as that of [@Shi Lemma 4.7]. We give it below. Write $$\partial_{J}\beta=\sum_{\alpha\in \mathcal{G}}\langle \partial_{J}\beta,\alpha \rangle \alpha+\sum_{I(\beta,\sigma)=1,\sigma\notin \mathcal{G}}\langle \partial_{J}\beta,\sigma \rangle \sigma.$$ Then we have $$\langle U_{J,z}\partial_{J}\beta,\emptyset \rangle=\sum_{\alpha\in \mathcal{G}}\langle \partial_{J}\beta,\alpha \rangle \langle U_{J,z} \alpha,\emptyset \rangle +\sum_{I(\beta,\sigma)=1,\sigma\notin \mathcal{G}}\langle \partial_{J}\beta,\sigma \rangle \langle U_{J,z}\sigma,\emptyset \rangle=\sum_{\alpha\in \mathcal{G}}\langle \partial_{J}\beta,\alpha \rangle$$ Here we use that for $\alpha\in \mathcal{G}$, $\langle U_{J,z} \alpha,\emptyset \rangle=1$ and for $\sigma$ with $\sigma \notin \mathcal{G}$, $\langle U_{J,z} \sigma,\emptyset \rangle=0$. Since $U_{J,z}\partial_{J}=\partial_{J}U_{J,z}$, we have $\langle U_{J,z}\partial_{J}\beta,\emptyset \rangle=\langle \partial_{J}U_{J,z}\beta,\emptyset \rangle=0$. This completes the proof. ◻ ***Proof of Lemma [Lemma 53](#nonzero){reference-type="ref" reference="nonzero"}**.* Suppose that $0=\langle \alpha_{\gamma} \rangle \in ECH_{2}(Y,\lambda,0)$. Then there are ECH generators $\beta_{1},...,\beta_{j}$ with $I(\beta_{i},\alpha_{\gamma})=1$ for any $i$ such that $\partial_{J}(\beta_{1}+...+\beta_{j})=\alpha_{\gamma}$. From Claim [Claim 55](#index1bound){reference-type="ref" reference="index1bound"}, we have $$\sum_{1\leq i \leq j} \sum_{\alpha\in \mathcal{G}}\langle \partial_{J}\beta_{i},\alpha \rangle=\sum_{\alpha\in \mathcal{G}}\langle \alpha_{\gamma},\alpha \rangle=0.$$ But since $\alpha_{\gamma}\in \mathcal{G}$, $\sum_{\alpha\in \mathcal{G}}\langle \alpha_{\gamma},\alpha \rangle =1$. This is a contradiction. This completes the proof. ◻ ***Proof of Theorem [Theorem 10](#maintheorem){reference-type="ref" reference="maintheorem"} under non-degeneracy**.* First, we estimate $c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$ from below. It follows from the definition of the ECH spectrum that we can take an ECH generator $\alpha$ such that $\langle U_{J,z}\alpha,\emptyset \rangle \neq 0$ and $A(\alpha)\leq c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$. According to Proposition [Proposition 30](#main;prop){reference-type="ref" reference="main;prop"}, $\alpha$ contains $\gamma \in \mathcal{S}_{3}$ such that $\mu_{\mathrm{disk}}(\gamma^3)=3$. We may assume that $\gamma$ has the minimal period among the orbits in $\alpha$. Then $\int_{\gamma}\lambda\leq \frac{1}{3}A(\alpha)\leq \frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$. This means that there is a $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$ such that $\int_{\gamma}\lambda\leq \frac{1}{3}A(\alpha)\leq \frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$. Finally, we estimate $c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$ from above. Since $0\neq \langle \alpha_{\gamma} \rangle = \langle(\gamma,3) \rangle \in ECH_{2}(Y,\lambda,0)$ for any $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$ (Lemma [Lemma 53](#nonzero){reference-type="ref" reference="nonzero"}), we have $c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)\leq A(\alpha_{\gamma})$ for any $\gamma \in \mathcal{S}_{3}$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$. 
This means that $\frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)\leq \inf_{\gamma\in \mathcal{S}_{3},\,\,\mu_{\mathrm{disk}}(\gamma^3)=3}\int_{\gamma}\lambda$. In summary, we have $\frac{1}{3}c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)=\inf_{\gamma\in \mathcal{S}_{3},\,\mu_{\mathrm{disk}}(\gamma^3)=3}\,\int_{\gamma}\lambda$. ◻ # Extending the results to degenerate cases In this section, we extend the above result to the degenerate case as a limit of the non-degenerate case. The content of this section is essentially the same as [@Shi §5]; however, we provide the details below. First, we show the following. **Proposition 56**. *Assume that $(L(3,1),\lambda)$ is strictly convex. Then there exists a simple orbit $\gamma\in \mathcal{S}_{3}$ such that $\mu_{\mathrm{disk}}(\gamma^{3})=3$ and $\int_{\gamma}\lambda = \frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$. In particular, $$\inf_{\gamma\in \mathcal{S}_{3},\mu_{\mathrm{disk}}(\gamma^{3})=3}\int_{\gamma}\lambda \leq \frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda).$$* ***Proof of Proposition [Proposition 56](#degeneratel2){reference-type="ref" reference="degeneratel2"}**.* Let $L=c^{\mathrm{ECH}}_{1}(L(3,1),\lambda)$. Take a sequence of strictly convex contact forms $\lambda_{n}$ such that $\lambda_{n} \to \lambda$ in the $C^{\infty}$-topology and $\lambda_{n}$ is non-degenerate for each $n$. Since each $\lambda_{n}$ is non-degenerate and strictly convex, the non-degenerate case established above gives $$\inf_{\gamma\in \mathcal{S}_{3},\mu_{\mathrm{disk}}(\gamma^{3})=3}\int_{\gamma}\lambda_{n} = \frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda_{n}).$$ Note that $c_{1}^{\mathrm{ECH}}(L(3,1),\lambda_{n})\to L$ as $n\to+\infty$. This means that there is a sequence of $\gamma_{n}\in \mathcal{S}_{3}(L(3,1),\lambda_{n})$ with $\mu_{\mathrm{disk}}(\gamma_{n}^{3})=3$ such that $\int_{\gamma_{n}}\lambda_{n}\to \frac{1}{3}L$. By the Arzelà--Ascoli theorem, we can find a subsequence which converges to a periodic orbit $\gamma$ of $\lambda$ in the $C^{\infty}$-topology. **Claim 57**. *$\gamma$ is simple. In particular, $\gamma\in \mathcal{S}_{3}(L(3,1),\lambda)$ and $\mu_{\mathrm{disk}}(\gamma^{3})=3$.* ***Proof of Claim [Claim 57](#limittingorbit){reference-type="ref" reference="limittingorbit"}**.* By the argument so far, there is a sequence of $\gamma_{n}\in \mathcal{S}_{3}(L(3,1),\lambda_{n})$ with $\mu_{\mathrm{disk}}(\gamma_{n}^{3})=3$ which converges to $\gamma$ in $C^{\infty}$. Suppose that $\gamma$ is not simple, that is, there is a simple orbit $\gamma'$ and $k\in \mathbb{Z}_{>0}$ with $\gamma'^{k}=\gamma$. From the lower semi-continuity of $\mu$, we have $\mu_{\mathrm{disk}}(\gamma_{n}^{3})\to\mu_{\mathrm{disk}}(\gamma'^{3k})=\mu_{\mathrm{disk}}((\gamma'^{3})^{k})=3$. This contradicts Proposition [Proposition 12](#conleycovering){reference-type="ref" reference="conleycovering"}. Therefore $\gamma$ is simple. This means that for sufficiently large $n$, $\gamma_{n}$ is transversally isotopic to $\gamma$. Therefore, $\gamma$ is 3-unknotted and has self-linking number $-\frac{1}{3}$. Finally, we prove $\mu_{\mathrm{disk}}(\gamma^3)=3$. From the lower semi-continuity of $\mu$, we have $\mu_{\mathrm{disk}}(\gamma_{n}^3)\to\mu_{\mathrm{disk}}(\gamma^3)=3$ or $2$. The case $\mu_{\mathrm{disk}}(\gamma^3)=2$ contradicts the assumption of dynamical convexity. Thus we have $\mu_{\mathrm{disk}}(\gamma^3)=3$. This completes the proof.
◻ By the discussion so far, there is a sequence of $\gamma_{n}\in \mathcal{S}_{3}(L(3,1),\lambda_{n})$ with $\mu_{\mathrm{disk}}(\gamma_{n}^3)=3$ and $\gamma\in \mathcal{S}_{3}(L(3,1),\lambda)$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$ such that $\int_{\gamma_{n}}\lambda_{n}\to \frac{1}{3}L$ and $\gamma_{n}$ converges to $\gamma$ in the $C^{\infty}$-topology. Therefore we have $\int_{\gamma}\lambda = \frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)$. This completes the proof of Proposition [Proposition 56](#degeneratel2){reference-type="ref" reference="degeneratel2"}. ◻ Having Proposition [Proposition 56](#degeneratel2){reference-type="ref" reference="degeneratel2"}, to complete the proof of Theorem [Theorem 10](#maintheorem){reference-type="ref" reference="maintheorem"}, it suffices to prove the next proposition. **Proposition 58**. *Assume that $(L(3,1),\lambda)$ is strictly convex. Then $$\frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda) \leq \inf_{\gamma\in \mathcal{S}_{3},\mu_{\mathrm{disk}}(\gamma^3)=3}\int_{\gamma}\lambda .$$* ***Proof of Proposition [Proposition 58](#upperbound){reference-type="ref" reference="upperbound"}**.* We prove this by contradiction. Suppose that there exists $\gamma_{\lambda}\in \mathcal{S}_{3}(L(3,1),\lambda)$ with $\mu_{\mathrm{disk}}(\gamma_{\lambda}^3)=3$ such that $\frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),\lambda)>\int_{\gamma_{\lambda}}\lambda$. **Lemma 59**. *There exists a sequence of smooth functions $f_{n}:L(3,1)\to \mathbb{R}_{>0}$ such that $f_{n}\to 1$ in the $C^{\infty}$-topology, satisfying $f_{n}|_{\gamma_{\lambda}}=1$ and $df_{n}|_{\gamma_{\lambda}}=0$. Moreover, all periodic orbits of $X_{f_{n}\lambda}$ of period $<n$ are non-degenerate and all contractible orbits of period $<n$ have Conley--Zehnder index $\geq 3$. In addition, $\gamma_{\lambda}$ is a non-degenerate periodic orbit of $X_{f_{n}\lambda}$ with $\mu_{\mathrm{disk}}(\gamma_{\lambda}^3)=3$ for every $n$.* ***Proof of Lemma [Lemma 59](#seqsmooth){reference-type="ref" reference="seqsmooth"}**.* See [@HWZ4 Lemmas 6.8 and 6.9]. ◻ For a sequence of smooth functions $f_{n}:L(3,1)\to \mathbb{R}_{>0}$ as in Lemma [Lemma 59](#seqsmooth){reference-type="ref" reference="seqsmooth"}, fix $N\gg 0$ sufficiently large so that $c_{1}^{\mathrm{ECH}}(L(3,1),f_{N}\lambda)>\int_{\gamma_{\lambda}}\lambda$ and $N>2c_{1}^{\mathrm{ECH}}(L(3,1),f_{N}\lambda)$. We may take such an $f_{N}$ because $c_{1}^{\mathrm{ECH}}$ is continuous in the $C^{0}$-topology. **Lemma 60**. *Let $f:L(3,1)\to \mathbb{R}_{>0}$ be a smooth function such that $f(x)<f_{N}(x)$ for every $x\in L(3,1)$. Suppose that $f\lambda$ is non-degenerate and dynamically convex. Then there exists a simple periodic orbit $\gamma \in \mathcal{S}_{3}(L(3,1),f\lambda)$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$ such that $\int_{\gamma}f\lambda<\int_{\gamma_{\lambda}}\lambda$.* ***Outline of the proof of Lemma [Lemma 60](#riginalchange){reference-type="ref" reference="riginalchange"}**.* See [@HrS Proposition 3.1]. In the proof and statement of [@HrS Proposition 3.1], ellipsoids are used instead of $(L(3,1),f_{N}\lambda)$, but the important point in the proof is to find a $3$-unknotted orbit $\gamma$ with self-linking number $-\frac{1}{3}$ and $\mu_{\mathrm{disk}}(\gamma^3)=3$, and to construct a suitable $J$-holomorphic curve from [@HrLS Proposition 6.8]. Now, we have $\gamma_{\lambda} \in \mathcal{S}_{3}(L(3,1),f_{N}\lambda)$ with $\mu_{\mathrm{disk}}(\gamma_{\lambda}^3)=3$ and hence, by applying [@HrLS Proposition 6.8], we can construct a suitable $J$-holomorphic curve.
By using these curves instead of the ones in the original proof, we can prove Lemma [Lemma 60](#riginalchange){reference-type="ref" reference="riginalchange"}. Here we note that the discussion in the proof of Lemma [Lemma 49](#lem:boundaryop){reference-type="ref" reference="lem:boundaryop"} is needed to prove the analogue of [@HrS Theorem 3.15]. ◻ Now, we complete the proof of Proposition [Proposition 58](#upperbound){reference-type="ref" reference="upperbound"}. Let $f:L(3,1)\to \mathbb{R}_{>0}$ be a smooth function such that $f(x)<f_{N}(x)$ for every $x\in L(3,1)$, $f\lambda$ is non-degenerate and strictly convex, and $\int_{\gamma_{\lambda}}\lambda<c_{1}^{\mathrm{ECH}}(L(3,1),f\lambda)<c_{1}^{\mathrm{ECH}}(L(3,1),f_{N}\lambda)$. It is easy to check that such an $f$ exists. Due to Lemma [Lemma 60](#riginalchange){reference-type="ref" reference="riginalchange"}, there exists a simple periodic orbit $\gamma \in \mathcal{S}_{3}(L(3,1),f\lambda)$ with $\mu_{\mathrm{disk}}(\gamma^3)=3$ such that $\int_{\gamma}f\lambda<\int_{\gamma_{\lambda}}\lambda$. Since $\inf_{\gamma\in \mathcal{S}_{3},\mu_{\mathrm{disk}}(\gamma^3)=3}\int_{\gamma}f\lambda = \frac{1}{3}\,c_{1}^{\mathrm{ECH}}(L(3,1),f\lambda)$, we have $\int_{\gamma_{\lambda}}\lambda<c_{1}^{\mathrm{ECH}}(L(3,1),f\lambda)\leq \int_{\gamma}f\lambda$. This is a contradiction. This completes the proof. ◻ M. Abreu, L. Macarini, Dynamical convexity and elliptic periodic orbits for Reeb flows. Math. Ann. 369 (2017), 331--386. M. Abreu, L. Macarini, Dynamical implications of convexity beyond dynamical convexity. Calc. Var. Partial Differential Equations 61 (2022), Art. 116. M. Arikan, Planar contact structures with binding number three, in Proceedings of Gökova Geometry -- Topology Conference, 2007, pp. 90--124. K. Baker and J. Etnyre, Rational linking and contact geometry. Perspectives in analysis, geometry, and topology, 19--37, Progr. Math., 296, Birkhäuser/Springer, New York, 2012. D. Cristofaro-Gardiner, M. Hutchings and D. Pomerleano, Torsion contact forms in three dimensions have two or infinitely many Reeb orbits, Geom. Top. 23 (2019), 3601--3645. D. Cristofaro-Gardiner, M. Hutchings and V.G.B. Ramos, The asymptotics of ECH capacities, Invent. Math. 199 (2015), 187--214. G. Dell'Antonio, B. D'Onofrio, I. Ekeland, Periodic solutions of elliptic type for strongly nonlinear Hamiltonian systems. The Floer memorial volume, 327--333, Progr. Math., 133, Birkhäuser, Basel, 1995. D. Dragnev, Fredholm theory and transversality for noncompact pseudoholomorphic maps in symplectizations, Comm. Pure Appl. Math. 57 (2004), 726--763. B. Ferreira, Elliptic Reeb orbit on some real projective three-spaces via ECH, arXiv:2305.06257. H. Hofer, K. Wysocki, and E. Zehnder, Properties of pseudoholomorphic curves in symplectisations. I. Asymptotics. Ann. Inst. H. Poincaré C Anal. Non Linéaire 13 (1996), no. 3, 337--379. H. Hofer, K. Wysocki, and E. Zehnder, Properties of pseudo-holomorphic curves in symplectisations. II. Embedding controls and algebraic invariants. Geom. Funct. Anal. 5 (1995), no. 2, 270--328. H. Hofer, K. Wysocki, and E. Zehnder, A characterisation of the tight three-sphere. Duke Math. J. 81(1):159--226, 1996. H. Hofer, K. Wysocki, and E. Zehnder, The dynamics on three-dimensional strictly convex energy surfaces. Ann. of Math. (2) 148 (1998), no. 1, 197--289. H. Hofer, K. Wysocki, and E. Zehnder, A characterization of the tight 3-sphere. II. Comm. Pure Appl. Math. 52 (1999), no. 9, 1139--1177. U. L. Hryniewicz, Fast finite-energy planes in symplectizations and applications.
Transactions of the American Mathematical Society, 364(4):1859--1931, 2012. U. L. Hryniewicz, Systems of global surfaces of section for dynamically convex Reeb flows on the 3-sphere. J. Symplectic Geom. 12 (2014), no. 4, 791--862. U. L. Hryniewicz, M. Hutchings and V.G.B. Ramos, Unknotted Reeb orbits and the first ECH capacity, in preparation. U. L. Hryniewicz, J. E. Licata, and P. A. S. Salomão, A dynamical characterization of universally tight lens spaces. Proceedings of the London Mathematical Society, 110(1):213--269, 2015. U. L. Hryniewicz and P. A. S. Salomão, Elliptic bindings for dynamically convex Reeb flows on the real projective three-space. Calc. Var. Partial Differential Equations 55 (2016), no. 2, Art. 43, 57 pp. X. Hu, W. Wang, Y. Long, Resonance identity, stability, and multiplicity of closed characteristics on compact convex hypersurfaces. Duke Math. J. 139 (2007), no. 3, 411--462. M. Hutchings, An index inequality for embedded pseudoholomorphic curves in symplectizations, J. Eur. Math. Soc. 4 (2002), 313--361. M. Hutchings, Quantitative embedded contact homology, J. Diff. Geom. 88 (2011), 231--266. M. Hutchings, Lecture notes on embedded contact homology. Contact and symplectic topology, 389--484, Bolyai Soc. Math. Stud., 26, János Bolyai Math. Soc., Budapest, 2014. M. Hutchings and C. H. Taubes, Gluing pseudoholomorphic curves along branched covered cylinders I, J. Symp. Geom. 5 (2007), 43--137. M. Hutchings and C. H. Taubes, Gluing pseudoholomorphic curves along branched covered cylinders II, J. Symp. Geom. 7 (2009), 29--133. M. Hutchings and C. H. Taubes, The Weinstein conjecture for stable Hamiltonian structures, Geom. Top. 13 (2009), 901--941. P. Kronheimer and T. Mrowka, Monopoles and three-manifolds, New Math. Monogr. 10, Cambridge University Press (2007). Y. Long and C. Zhu, Closed characteristics on compact convex hypersurfaces in $\mathbb{R}^{2n}$. Ann. of Math. (2) 155 (2002), no. 2, 317--368. A. Schneider, Global surfaces of section for dynamically convex Reeb flows on lens spaces. Trans. Amer. Math. Soc. 373 (2020), no. 4, 2775--2803. T. Shibata, Dynamically convex and global surface of section in $L(p,p-1)$ from the viewpoint of ECH, arXiv:2306.04132. C. H. Taubes, Embedded contact homology and Seiberg-Witten Floer homology I, Geom. Top. 14 (2010), 2497--2581.
--- author: - "Ruyu Wang[^1]" - "Yaozhong Hu[^2]" - Chao Zhang title: "A distributionally robust index tracking model with the CVaR penalty: tractable reformulation[^3]" --- [^1]: School of Mathematics and Statistics, Beijing Jiaotong University (, Correspondence:). [^2]: Department of Mathematical and Statistical Sciences, University of Alberta (). [^3]: Submitted to the editors DATE.
--- abstract: | We study Iwasawa invariants associated to Selmer groups of Artin representations, and criteria for the vanishing of the associated algebraic Iwasawa invariants. The conditions obtained can be used to study natural distribution questions in this context. address: - Chennai Mathematical Institute, H1, SIPCOT IT Park, Kelambakkam, Siruseri, Tamil Nadu 603103, India - Chennai Mathematical Institute, H1, SIPCOT IT Park, Kelambakkam, Siruseri, Tamil Nadu 603103, India author: - Aditya Karnataki - Anwesh Ray bibliography: - references.bib title: On the Iwasawa invariants of Artin representations --- # Introduction Let $p$ be an odd prime number and let $\mathbb{Z}_p$ denote the ring of $p$-adic integers. Let $K$ be a number field and fix an algebraic closure $\bar{K}$ of $K$. A $\mathbb{Z}_p$-extension of $K$ is an infinite Galois extension $K_\infty/K$ such that the Galois group $\operatorname{Gal}(K_\infty/K)$ is isomorphic to $\mathbb{Z}_p$ as a topological group. Let $K_n\subset K_\infty$ be the extension of $K$ for which $\operatorname{Gal}(K_n/K)\simeq \mathbb{Z}/p^n\mathbb{Z}$, and let $h_p(K_n)$ denote the cardinality of the $p$-primary part of the class group of $K_n$. Writing $h_p(K_n)=p^{e_n}$, Iwasawa [@iwasawa1959gamma] showed that for all large enough values of $n$, $$e_n=p^n \mu+n \lambda+\nu,$$ where $\mu, \lambda\in \mathbb{Z}_{\geq 0}$ and $\nu \in \mathbb{Z}$ are invariants that depend on the extension $K_\infty/K$ and not on $n$. The results of Iwasawa motivated Mazur [@mazur1972rational] to study the growth properties of the $p$-primary Selmer groups of $p$-ordinary abelian varieties in $\mathbb{Z}_p$-extensions. Mazur's results were later extended to very general classes of ordinary Galois representations by Greenberg, cf. [@greenberg1989iwasawa]. Another class of representations that is natural to consider is that of Artin representations. Let $K/\mathbb{Q}$ be a finite and totally imaginary Galois extension with Galois group $\Delta:=\operatorname{Gal}(K/\mathbb{Q})$, and let $\rho:\Delta \rightarrow \operatorname{GL}_d(\bar{\mathbb{Q}})$ be an irreducible Artin representation. Let $p$ be an odd prime number, let $\sigma_p: \bar{\mathbb{Q}}\hookrightarrow \bar{\mathbb{Q}}_p$ be an embedding, and via $\sigma_p$ view $\rho$ as a representation $\rho: \Delta\rightarrow \operatorname{GL}_d(\bar{\mathbb{Q}}_p)$. Let $v$ denote an archimedean prime of $K$, and set $K_v$ to denote the $v$-adic completion of $K$. We shall identify $\operatorname{Gal}(K_v/\mathbb{R})$ with a subgroup $\Delta_v$ of $\Delta$. Set $d^+$ to denote the multiplicity of the trivial character in $\rho_{|\Delta_v}$ and observe that this number is well defined and independent of the choice of the archimedean prime $v$. When $K$ is totally real, we find that $d^+=d$. Let $\mathfrak{p}$ be a prime of $K$ that lies above $p$, and let $\Delta_{\mathfrak{p}}\subset \Delta$ be the decomposition group at $\mathfrak{p}$. Following Greenberg and Vatsal [@greenbergvatsalArtinrepns], we make the following assumptions. **Assumption 1**. *With respect to notation above, assume that* 1. *$p$ does not divide $[K:\mathbb{Q}]$,* 2. *$d^+=1$,* 3. *there exists a $1$-dimensional representation $\epsilon_{\mathfrak{p}}$ of $\Delta_{\mathfrak{p}}$ that occurs with multiplicity $1$ in $\rho_{|\Delta_{\mathfrak{p}}}$.* In the special case $d=2$ and $d^+=1$, such representations are expected to arise from Hecke eigenforms of weight $1$ on $\Gamma_1(N)$, where $N$ is the Artin conductor of $\rho$.
The conjecture has been settled in various special cases, cf. [@buzzard2001icosahedral; @taylor2003icosahedral] and references therein. The choice of the character $\epsilon_{\mathfrak{p}}$ plays a role in the definition of the Selmer groups associated to $\rho$. Take $\mathcal{K}$ to be the completion of $K$ at $\mathfrak{p}$. There is a natural isomorphism $\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_p)\simeq \Delta_\mathfrak{p}$, and we set $\epsilon$ to denote the composite $$\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_p)\xrightarrow{\sim} \Delta_\mathfrak{p}\xrightarrow{\epsilon_\mathfrak{p}} \bar{\mathbb{Q}}^\times.$$ Let $\chi$ denote the character associated to $\rho$ and let $\mathbb{Q}(\chi)$ denote the field generated by the values of $\chi$. Let $\mathcal{F}$ be the field generated by $\mathbb{Q}_p$, the values of $\chi$ and the values of $\epsilon_{\mathfrak{p}}$. We regard $\rho$ as a representation on a vector space $V$ defined over $\mathcal{F}$ and note that $\dim_{\mathcal{F}}V=d$. Let $V^{\epsilon_\mathfrak{p}}$ be the maximal $\mathcal{F}$-subspace of $V$ on which $\Delta_\mathfrak{p}$ acts by $\epsilon_\mathfrak{p}$. By hypothesis, $\dim_\mathcal{F}V^{\epsilon_\mathfrak{p}}=1$. Let $\mathcal{O}$ be the valuation ring in $\mathcal{F}$ and let $\varpi$ be its uniformizer. We assume that $\rho$ is irreducible, and therefore, there is a Galois stable $\mathcal{O}$-lattice $T\subset V$. This lattice is uniquely determined up to scaling by a constant. We consider the divisible Galois module $D:=V/T$ and let $D^{\epsilon_\mathfrak{p}}$ be the image of $V^{\epsilon_\mathfrak{p}}$ in $D$. Let $\mathbb{Q}_{\infty}$ denote the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. Let $\mathbb{Q}_n\subset \mathbb{Q}_\infty$ be the subextension such that $\operatorname{Gal}(\mathbb{Q}_n/\mathbb{Q})\simeq \mathbb{Z}/p^n\mathbb{Z}$. We set $$S_{\chi, \epsilon} (\mathbb{Q}_n):=\ker\left\{H^1(\mathbb{Q}_n, D)\longrightarrow \prod_{v\nmid p} H^1(\mathbb{Q}_{n, v}^{\operatorname{nr}}, D)\times H^1(\mathbb{Q}_{n, \eta_p}^{\operatorname{nr}}, D/D^{\epsilon_\mathfrak{p}})\right\},$$ where $\eta_p$ is the unique prime that lies above $p$. **Theorem 2** (Greenberg and Vatsal). *Suppose that Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} holds. Then, the Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_n)$ is finite for all $n\in \mathbb{Z}_{\geq 0}$.* *Proof.* The above result is [@greenbergvatsalArtinrepns Proposition 3.1]. ◻ **Definition 3**. *Define the Selmer group over $\mathbb{Q}_\infty$ to be the direct limit with respect to restriction maps $$S_{\chi, \epsilon}(\mathbb{Q}_\infty):=\varinjlim_n S_{\chi, \epsilon} (\mathbb{Q}_n).$$* Greenberg and Vatsal show that the Selmer group is cofinitely generated and cotorsion over the Iwasawa algebra, which is a formal power series ring over $\mathcal{O}$ in one variable. Leveraging the results of Greenberg and Vatsal, we study the Euler characteristic formula associated to these Selmer groups and utilize it to obtain explicit conditions for the vanishing of the Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$. For instance, we are able to prove that there is an explicit relationship between the vanishing of the Selmer group and the *$p$-rationality* of $K$ (in the sense of [@movahhedi1990arithmetique]). **Theorem 4** (Theorem [Theorem 28](#cor 3.10){reference-type="ref" reference="cor 3.10"}). *Assume that* 1. *the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied.* 2.
*$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$,* 3. *$K$ is $p$-rational,* 4. *$p$ does not divide the class number of $K$.* *Then, $S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$.* These criteria are illustrated in two special cases, namely that of $2$-dimensional irreducible Artin representations (cf. Theorem [Theorem 29](#thm 4.12){reference-type="ref" reference="thm 4.12"}) and $3$-dimensional icosahedral Artin representations of $A_5$ (cf. Theorem [Theorem 30](#example thm 2){reference-type="ref" reference="example thm 2"}). Theorem [Theorem 26](#conditions GV){reference-type="ref" reference="conditions GV"} gives a more refined criterion that is equivalent to the vanishing of $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$. The relationship between $p$-rationality and the Iwasawa invariants of number fields has been studied by Hajir and Maire, cf. [@hajirmaire]. For simplicity, we then specialize our discussion to $2$-dimensional Artin representations of dihedral type. Let $L$ be an imaginary quadratic field, let $\zeta:\operatorname{G}_L\rightarrow \bar{\mathbb{Q}}^\times$ be a character, and let $\rho=\operatorname{Ind}_L^{\mathbb{Q}}\zeta:\operatorname{G}_\mathbb{Q}\rightarrow \operatorname{GL}_2(\bar{\mathbb{Q}})$ be the associated Artin representation. Set $\zeta'$ to be the conjugate character defined by setting $\zeta'(x)=\zeta(c x c^{-1})$, where $c$ denotes complex conjugation. Assume that $\zeta'=\zeta^{-1}$, so that $\rho$ is of dihedral type. Let $S(\rho)$ be the set of primes $p$ such that 1. $p$ is odd and $p\nmid [K:\mathbb{Q}]$, 2. $p$ splits in $L$, $p\mathcal{O}_L=\pi \pi^*$, and $\pi$ is inert in $K/L$. It is easy to see that if $\mathfrak{p}|p$ is a prime of $K$, then the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied. Let $T(\rho)$ be the set of primes $p\in S(\rho)$ such that the Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ vanishes for all of the primes $\mathfrak{p}$ that lie above $p$. Set $T'(\rho):=S(\rho)\backslash T(\rho)$. In section [4](#s 4){reference-type="ref" reference="s 4"}, we apply our results to study the following natural question. **Question 5**. *What can be said about the densities of the sets of primes in $T(\rho)$ and $T'(\rho)$?* A conjecture of Gras predicts that any number field $K$ is $p$-rational at all but finitely many primes $p$. This has conjectural implications for the vanishing of Selmer groups at all but finitely many primes in a compatible family of Artin representations. In the case when $K/\mathbb{Q}$ is an $S_3$-extension, we prove certain unconditional results by leveraging a result of Maire and Rougnant [@maire2020note] on the $p$-rationality of $S_3$-extensions of $\mathbb{Q}$. **Theorem 6** (Theorem [Theorem 40](#last theorem){reference-type="ref" reference="last theorem"}). *Let $K/\mathbb{Q}$ be an imaginary $S_3$-extension and $\rho$ be a $2$-dimensional Artin representation that factors through $\operatorname{Gal}(K/\mathbb{Q})$. Then, there is a constant $c>0$ such that $$\# \{p\leq x\mid p\in T(\rho)\}\geq c\log x.$$* The result above is certainly weaker than what one is led to conjecture; however, it illustrates the effectiveness of Theorem [Theorem 28](#cor 3.10){reference-type="ref" reference="cor 3.10"}. Throughout, we motivate our results by drawing upon analogues from the classical Iwasawa theory of class groups. ## Organization Including the introduction, the article consists of four sections.
In section [2](#s 2){reference-type="ref" reference="s 2"} we review the classical Iwasawa theory of class groups and Artin representations. We review results of Federer and Gross [@federer1980regulators] which give an explicit relationship between $p$-adic regulators and Iwasawa invariants. We end this section by reviewing some results of Greenberg and Vatsal [@greenbergvatsalArtinrepns] on the Iwasawa theory of Artin representations. In section [3](#s 3){reference-type="ref" reference="s 3"}, we discuss the notion of the Euler characteristic of a cofinitely generated and cotorsion module over the Iwasawa algebra. In section [4](#s 4){reference-type="ref" reference="s 4"}, we formulate and study some natural distribution questions, which serve to illustrate our results. # Iwasawa theory of class groups and Artin representations {#s 2} ## Classical Iwasawa theory and Iwasawa invariants We review the classical Iwasawa theory over the cyclotomic $\mathbb{Z}_p$-extension of a number field. The reader may refer to [@washington1997introduction] for a more comprehensive treatment. Throughout this section, we shall set $K$ to denote a number field and $p$ an odd prime number. Fix an algebraic closure $\bar{K}$ of $K$. All algebraic extensions considered in this article shall implicitly be assumed to be contained in $\bar{K}$. Let $\operatorname{Cl}(K)$ denote the class group of $K$ and $\operatorname{Cl}_p(K)$ its $p$-primary part. For $n\geq 1$, let $K(\mu_{p^n})$ be the number field extension of $K$ obtained by adjoining the $p^n$-th roots of unity to $K$. Denote by $K(\mu_{p^\infty})$ the union of all extensions $K(\mu_{p^n})$. There is a unique $\mathbb{Z}_p$-extension $K_\infty/K$ which is contained in $K(\mu_{p^\infty})$, which is referred to as the *cyclotomic $\mathbb{Z}_p$-extension* of $K$. For $n\geq 1$, set $K_n/K$ to be the sub-extension of $K_\infty/K$ for which $\operatorname{Gal}(K_n/K)$ is isomorphic to $\mathbb{Z}/p^n \mathbb{Z}$. Setting $K_0:=K$, we refer to $K_n$ as the *$n$-th layer*. Setting $\Gamma:=\operatorname{Gal}(K_\infty/K)$ and $\Gamma_n:=\operatorname{Gal}(K_\infty/K_n)$, we identify $\operatorname{Gal}(K_n/K)$ with $\Gamma/\Gamma_n$. For $n\in \mathbb{Z}_{\geq 0}$, denote by $H_p(K_n)$ the $p$-Hilbert class field of $K_n$. In other words, $H_p(K_n)$ is the maximal unramified abelian $p$-extension of $K_n$ in $\bar{K}$. Set $X_n$ to denote the Galois group $\operatorname{Gal}(H_p(K_n)/K_n)$, and identify $X_n$ with the maximal $p$-primary quotient of $\operatorname{Cl}(K_n)$. Thus, $[H_p(K_n):K_n]$ is equal to $\# \operatorname{Cl}_p(K_n)$. Since the primes above $p$ are totally ramified in $K_\infty$, it follows that $H_p(K_n)\cap K_\infty=K_n$, and thus, there are surjective maps $X_m\rightarrow X_n$ for all $m\geq n$. Taking $X_\infty$ to be the inverse limit $\varprojlim_n X_n$, we find that $X_\infty$ is both a $\mathbb{Z}_p$-module as well as a module over $\Gamma$. On the other hand, letting $H_p(K_\infty)$ denote the maximal unramified abelian pro-$p$ extension of $K_\infty$, we identify $X_\infty$ with the Galois group $\operatorname{Gal}(H_p(K_\infty)/K_\infty)$. In order to better study the algebraic structure of $X_\infty$, it proves fruitful to view $X_\infty$ as a module over a completed group ring known as the Iwasawa algebra.
This algebra is defined as follows: $$\Lambda:=\mathbb{Z}_p\llbracket \Gamma \rrbracket =\varprojlim_{n} \mathbb{Z}_p[\Gamma /\Gamma_n].$$ Choosing a topological generator $\gamma\in\Gamma$, we identify $\Lambda$ with the formal power series ring $\mathbb{Z}_p\llbracket T\rrbracket$, by setting $T:=(\gamma-1)$. As a $\Lambda$-module, $X_\infty$ is finitely generated and torsion [@washington1997introduction chapter 13]. Let $\mathcal{O}$ be a valuation ring with residue characteristic $p$, and let $\varpi$ be a uniformizer of $\mathcal{O}$. Then, the Iwasawa algebra over $\mathcal{O}$ is defined by extending coefficients to $\mathcal{O}$, as follows: $\Lambda_{\mathcal{O}}:=\Lambda \otimes_{\mathbb{Z}_p}\mathcal{O}$. The Iwasawa algebra $\Lambda_\mathcal{O}$ is a local ring with maximal ideal $\mathfrak{m}=(\varpi, T)$. A polynomial $f(T)\in \mathcal{O}\llbracket T\rrbracket$ is said to be *distinguished* if it is a monic polynomial whose non-leading coefficients are divisible by $\varpi$. The Weierstrass preparation theorem states that any nonzero power series $f(T)$ decomposes into a product $$f(T)=\varpi^\mu \times g(T)\times u(T),$$ where $\mu\in \mathbb{Z}_{\geq 0}$, $g(T)$ is a distinguished polynomial and $u(T)$ is a unit in $\Lambda_\mathcal{O}$. The $\mu$-invariant of $f(T)$ is the power of $\varpi$ above, and the $\lambda$-invariant is the degree of the distinguished polynomial $g(T)$. The prime ideals of height $1$ are the principal ideals $(\varpi)$ and $(g(T))$, where $g(T)$ is an irreducible distinguished polynomial. Given a finitely generated and torsion $\Lambda_{\mathcal{O}}$-module $M$, there is a homomorphism of $\Lambda_{\mathcal{O}}$-modules $$\label{structural homomorphism}M \longrightarrow \left(\bigoplus_{i=1}^s\frac{\mathcal{O}\llbracket T\rrbracket}{(\varpi^{\mu_i})}\right) \oplus \left(\bigoplus_{j=1}^t \frac{\mathcal{O}\llbracket T\rrbracket}{(f_j(T)^{\lambda_j})}\right),$$ with finite kernel and cokernel. Here, $\mu_i, \lambda_j\geq 0$, and $f_j(T)$ are irreducible distinguished polynomials. For further details, we refer to [@washington1997introduction Theorem 3.12]. **Definition 7**. *The $\mu$-invariant $\mu_p(M)$ is the sum of the entries $\sum_{i=1}^s \mu_i$ if $s>0$, and is set to be $0$ if $s=0$. On the other hand, the $\lambda$-invariant $\lambda_p(M)$ is defined to be $\sum_{j=1}^t \lambda_j \operatorname{deg}(f_j)$ if $t>0$, and defined to be $0$ if $t=0$. The characteristic element $f_M(T)$ is the product $$f_M(T):=\prod_i \varpi^{\mu_i} \times \prod_j f_j(T)^{\lambda_j}.$$* We remark that the $\mu$-invariant and $\lambda$-invariant of $f_M(T)$ are $\mu_p(M)$ and $\lambda_p(M)$ respectively. **Proposition 8**. *Suppose that $M$ is a finitely generated and torsion $\Lambda_{\mathcal{O}}$-module. Then, the following assertions hold.* 1. *$\mu_p(M)=0$ if and only if $M$ is finitely generated as an $\mathcal{O}$-module. In this case, $\lambda_p(M)$ is the $\mathcal{O}$-rank of $M$.* 2. *Letting $r_p(M)$ denote the order of vanishing of $f_M$ at $0$, we have that $$\lambda_p(M)\geq r_p(M).$$* 3. *Write $f_M(T)$ as $$f_M(T)=a_r T^r+a_{r+1} T^{r+1}+\dots+ a_\lambda T^{\lambda},$$ where $r=r_p(M)$ and $\lambda=\lambda_p(M)$. The $\mu$-invariant $\mu_p(M)=0$ if and only if there is a coefficient $a_i$ not divisible by $\varpi$.* 4.
*We have that $\varpi^\mu|| a_\lambda$ and that $\varpi^{\mu+1}|a_i$ for all $i<\lambda$.* *Proof.* The results are easy observations that follow from the structural homomorphism [\[structural homomorphism\]](#structural homomorphism){reference-type="eqref" reference="structural homomorphism"} and the definition of the Iwasawa invariants. ◻ **Remark 9**. *Let $\mathcal{O}'$ be a valuation ring that is a finite extension of $\mathcal{O}$, and let $e$ be its ramification index. Set $M_{\mathcal{O}'}:=M\otimes_{\mathcal{O}} \mathcal{O}'$ and regard $M_{\mathcal{O}'}$ as a module over $\Lambda_{\mathcal{O}'}$. Then, it is easy to see that $$\mu_p(M_{\mathcal{O}'})=e\mu_p(M)\text{ and }\lambda_p(M_{\mathcal{O}'})=\lambda_p(M).$$* We denote the $\mu$-invariant (resp. $\lambda$-invariant) of $X_\infty$ by $\mu_p(K)$ (resp. $\lambda_p(K)$). In this setting, $\mathcal{O}:=\mathbb{Z}_p$. Iwasawa proved that for all large enough values of $n$, there is an invariant $\nu_p(K)\in \mathbb{Z}$ for which $$\log_p\left(\# \operatorname{Cl}_p(K_n)\right)=p^n \mu_p(K)+n \lambda_p(K)+\nu_p(K),$$ cf. [@washington1997introduction Theorem 13.13]. Moreover, Iwasawa conjectured that $\mu_p(K)=0$ for all number fields $K$, cf. [@iwasawa1973z]. For abelian extensions $K/\mathbb{Q}$ the conjecture has been proven by Ferrero and Washington [@ferrero1979iwasawa]. ## The leading coefficient of the characteristic series Let $K$ be a CM field with totally real subfield $K^+$. The Galois group $\operatorname{Gal}(K/K^+)$ acts on $X_\infty$. Let $\tau$ be the generator of $\operatorname{Gal}(K/K^+)$ and set $$X_\infty^-:=\{x\in X_\infty\mid \tau(x)=-x\}.$$ Then, $X_\infty^-$ is a $\Lambda$-module whose $\mu$-invariant (resp. $\lambda$-invariant) is denoted $\mu_p^-(K)$ (resp. $\lambda_p^-(K)$). Let $f_{K}^-(T)$ be the characteristic element associated to $X_\infty^-$. Take $I$ to be the set of primes of $K^+$ that lie above $p$ and split in $K$, and set $r_{p,K}:=\# I$. Let $S$ be the set of places of $K$ dividing $p$ and $\infty$. Let $U=U_K$ be the group of $S$-units of $K$ and let $M=M_K$ be the free abelian group of divisors supported on $S$. Given a $\operatorname{Gal}(K/K^+)$-module $N$, let $$N^-:=\{n\in N\mid \tau(n)=-n\}.$$ For a ring $R$, we set $R N^-$ to denote the extension of scalars $N^-\otimes R$. It is easy to see that both $U^-$ and $M^-$ are free abelian groups of rank $r_{p, K}$. The map $$\phi:U^-\rightarrow M^-$$ is defined by setting $$\phi(x):=\sum_{v|p} \operatorname{ord}_p\left(\operatorname{Norm}_{K_v/\mathbb{Q}_p}x \right) v.$$ The induced map $$\phi: \mathbb{Q}U^-\rightarrow \mathbb{Q}M^-$$ is an isomorphism (cf. [@federer1980regulators Proposition 1.4]). The inverse of $\phi$ can be described as follows. For each prime $v\in I$, we choose a prime $\widetilde{v}|v$ of $K$. Note that $\mathbb{Q}M^-$ has a basis $$\mathcal{B}=\{\widetilde{v}-\tau(\widetilde{v})\mid v\in I\}.$$ Let $\mathfrak{p}$ be the prime ideal of $K$ associated to $\widetilde{v}$, and let $h$ be a positive integer such that $\mathfrak{p}^h= (\alpha)$ is principal. Both $\alpha$ and $\tau(\alpha)$ are elements of $U$, and $\alpha/\tau(\alpha)\in U^-$. Setting $f_v$ to denote the residue class degree of $v$, take $$\phi^{-1}\left(\widetilde{v}-\tau(\widetilde{v})\right):=\frac{1}{h f_v}\otimes \left(\alpha/\tau(\alpha)\right).$$
Define the homomorphism $$\lambda: U^-\rightarrow \mathbb{Q}_p M^-$$ by setting $$\lambda(y):=\sum_{v|p} \log_p\left(N_{K_v/\mathbb{Q}_p}(y)\right).$$ Composing $\phi^{-1}$ with $\lambda$, we obtain an endomorphism $$\lambda \phi^{-1}: \mathbb{Q}_p M^-\rightarrow \mathbb{Q}_p M^-.$$ **Definition 10**. *With notation as above, define the regulator $\operatorname{Reg}_p(K)$ as follows $$\operatorname{Reg}_p(K):=\det \left(\lambda\phi^{-1}\mid \mathbb{Q}_p M^{-}\right),$$ where it is understood that $\operatorname{Reg}_p(K):=1$ when $r_{p,K}=0$.* **Theorem 11** (Iwasawa, Greenberg). *With respect to notation above, $f_K^-(T)$ is divisible by $T^{r_{p, K}}$.* *Proof.* We refer to works of Iwasawa [@iwasawa1973z] and Greenberg [@greenberg1973certain] for the proof of the statement. ◻ As a consequence of Theorem [Theorem 11](#order of vanishing){reference-type="ref" reference="order of vanishing"}, one may write $$f_K^-(T)=a_r T^r+a_{r+1} T^{r+2}+\dots+ a_\lambda T^\lambda,$$ where $r=r_{p, K}$ and $\lambda:=\lambda_p^-(K)$. We note that as a consequence, $$\lambda_p^-(K)\geq r_{p,K}.$$For $a,b\in \mathbb{Q}_p$, we write $a\sim b$ to mean that $a=ub$ for some $u\in \mathbb{Z}_p^\times$. Let $h_K$ (resp. $h_K^-$) denote the class number (resp. number of elements in the minus part of the class group) of $K$. Let $w_{K(\mu_p)}$ be the number of roots of unity contained in $K(\mu_p)$. **Theorem 12** (Federer-Gross). *With respect to notation above, the following are equivalent* 1. *$\operatorname{Reg}_p(K)\neq 0$,* 2. *$a_r\neq 0$.* *Assuming that these equivalent conditions hold, we have that $$a_r\sim \frac{h_K^- \left(\prod_{v\in I} f_v\right) \operatorname{Reg}_p(K)}{w_{K(\mu_p)}^{r_{p,K}}}.$$* *Proof.* The above result is [@federer1980regulators Proposition 3.9]. ◻ Suppose that $p\nmid f_v$ for all $v\in I$, and $p\nmid h_K$. Then, the map $\phi^{-1}$ is defined over $\mathbb{Z}_p$ and it is easy to see that $\operatorname{Reg}_p(K)$ is divisible by $p^{r_{p,K}}$. The normalized regulator is defined as follows $$\mathcal{R}_p(K):=\frac{\operatorname{Reg}_p(K)}{p^{r_{p,K}}} .$$ **Corollary 13**. *Let $K$ be a CM field for which* 1. *$\mathcal{R}_p(K)\neq 0$,* 2. *$w_{K(\mu_p)}\sim p$,* 3. *$p\nmid f_v$ for all $v\in I$,* 4. *$p\nmid h_K$.* *Then, the following are equivalent* 1. *[\[c1cor2.5\]]{#c1cor2.5 label="c1cor2.5"} $\mu_p^-(K)=0$ and $\lambda_p^-(K)=r_{p,K}$,* 2. *[\[c2cor2.5\]]{#c2cor2.5 label="c2cor2.5"} $a_r$ is a unit in $\mathbb{Z}_p$,* 3. *[\[c3cor2.5\]]{#c3cor2.5 label="c3cor2.5"} $p\nmid \mathcal{R}_p(K)$.* *Proof.* With respect to above notation, write $f_K^-(T)=T^{r_{p,K}} g_K^-(T)$, where $a_r=g_K^-(0)$. Then, $a_r$ is a unit in $\mathbb{Z}_p$ if and only if $g_K^-(T)$ is a unit in $\Lambda$. This implies that $$\mu_p^-(K)=0\text{ and }\lambda_p^-(K)=r_{p,K}.$$ Conversely, if $$\mu_p^-(K)=0\text{ and }\lambda_p^-(K)=r_{p,K},$$ then the factorization $f_K^-(T)=T^{r_{p,K}} g_K^-(T)$ implies that $g_K^-(T)$ must be a unit in $\Lambda$. Thus, we find that [\[c1cor2.5\]](#c1cor2.5){reference-type="eqref" reference="c1cor2.5"} and [\[c2cor2.5\]](#c2cor2.5){reference-type="eqref" reference="c2cor2.5"} are equivalent. It follows from our assumptions that $a_r\sim \mathcal{R}_p(K)$. Therefore, $a_r$ is a unit in $\mathbb{Z}_p$ if and only if $p\nmid \mathcal{R}_p(K)$. This shows that the conditions [\[c2cor2.5\]](#c2cor2.5){reference-type="eqref" reference="c2cor2.5"} and [\[c3cor2.5\]](#c3cor2.5){reference-type="eqref" reference="c3cor2.5"} are equivalent. 
When $K$ is an imaginary quadratic field, $X_\infty=X_\infty^-$ and thus $\mu_p(K)=\mu_p^-(K)$ and $\lambda_p(K)=\lambda_p^-(K)$. In this setting, we find that $$r_{p, K}=\begin{cases} 1 & \text{ if }p\text{ splits in }K;\\ 0 & \text{ if }p\text{ is inert or ramified in }K. \end{cases}$$ Note that $\mu_p(K)=0$ by the aforementioned result of Ferrero and Washington [@ferrero1979iwasawa]. **Corollary 14**. *Let $K$ be an imaginary quadratic field and $p$ be an odd prime number. Then, the following assertions hold.* 1. *Suppose that $p$ is inert in $K$. Then, $\lambda_p(K)=0$ if and only if $p\nmid h_K$.* 2. *Suppose that $p$ splits in $K$. Then, $\lambda_p(K)=1$ if and only if $p\nmid h_K$ and $p\nmid \mathcal{R}_p(K)$.* *Proof.* The result is a special case of Corollary [Corollary 13](#cor2.5){reference-type="ref" reference="cor2.5"}. ◻ Let $K$ be an imaginary quadratic field and $p$ be an odd prime which splits in $K$. In this case $\lambda_p(K)\geq 1$, and Corollary [Corollary 14](#cor2.6){reference-type="ref" reference="cor2.6"} asserts that, when $p\nmid h_K$, $\lambda_p(K)=1$ if and only if $\mathcal{R}_p(K)$ is a unit in $\mathbb{Z}_p$. The analysis of this condition leads to the statement of Gold's criterion. **Theorem 15** (Gold's criterion). *Let $K$ be an imaginary quadratic field and $p$ be an odd prime number which splits in $K$. Assume that $p\nmid h_K$. Let $\mathfrak{p}|p$ be a prime ideal, and $r$ be a positive integer not divisible by $p$, such that $\mathfrak{p}^r$ is principal. Let $\alpha\in\mathcal{O}_K$ be a generator of $\mathfrak{p}^r$. Setting $\bar{\mathfrak{p}}$ to denote the complex conjugate of $\mathfrak{p}$, the following conditions are equivalent* 1. *$\lambda_p(K)>1$,* 2. *$\alpha^{p-1}\equiv 1 \pmod{\bar{\mathfrak{p}}^2}$.* *Proof.* The result is a consequence of [@gold1974nontriviality Theorems 3 and 4], and can also be seen to follow from Corollary [Corollary 14](#cor2.6){reference-type="ref" reference="cor2.6"}, cf. [@sands1993non proof of Proposition 2.1]. ◻
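As a quick illustration of Gold's criterion, consider $K=\mathbb{Q}(i)$ and $p=5$ (a worked example of our own, not taken from [@gold1974nontriviality]). Here $h_K=1$ and $5$ splits in $\mathcal{O}_K=\mathbb{Z}[i]$ as $5=(2+i)(2-i)$, so we may take $\mathfrak{p}=(2+i)$, $r=1$ and $\alpha=2+i$. Then $$\alpha^{4}=-7+24i, \qquad \alpha^{4}-1=-8+24i,$$ and $-8+24i$ is not divisible by $\bar{\mathfrak{p}}^{2}=(3-4i)$, since $(-8+24i)(3+4i)/25=(-24+8i)/5\notin \mathbb{Z}[i]$. Thus the congruence in Gold's criterion fails, and since $5$ splits in $K$ with $5\nmid h_K$, we conclude that $\lambda_5(\mathbb{Q}(i))=1$. The divisibility check can be verified by the following short script (a minimal sketch in Python; the helper names are our own and are not part of any library):

```python
# Check of Gold's criterion for K = Q(i), p = 5: is alpha^(p-1) congruent to 1 mod pbar^2?
# Gaussian integers are encoded as pairs (x, y) representing x + y*i.

def gmul(a, b):
    # product of two Gaussian integers
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def gpow(a, n):
    r = (1, 0)
    for _ in range(n):
        r = gmul(r, a)
    return r

def divides(d, a):
    # d | a in Z[i] if and only if a * conjugate(d) is divisible by the norm of d
    x, y = gmul(a, (d[0], -d[1]))
    norm = d[0] * d[0] + d[1] * d[1]
    return x % norm == 0 and y % norm == 0

alpha = (2, 1)                       # alpha = 2 + i generates the prime above 5
pbar_sq = gmul((2, -1), (2, -1))     # (2 - i)^2 = 3 - 4i
lhs = gpow(alpha, 4)                 # alpha^(p-1) = -7 + 24i
print(divides(pbar_sq, (lhs[0] - 1, lhs[1])))   # prints False: the congruence fails
```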
## Artin representations We briefly discuss the results of Greenberg and Vatsal [@greenbergvatsalArtinrepns] on the Iwasawa theory of Selmer groups associated to Artin representations. Let $K/\mathbb{Q}$ be a finite Galois extension with Galois group $\Delta:=\operatorname{Gal}(K/\mathbb{Q})$. Fix an irreducible Artin representation $$\rho: \Delta\rightarrow \operatorname{GL}_d(\bar{\mathbb{Q}})$$ of dimension $d>1$, and let $$S_{\chi, \epsilon}(\mathbb{Q}_\infty):=\varinjlim_n S_{\chi, \epsilon} (\mathbb{Q}_n)$$ denote the Selmer group over the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. Note that $S_{\chi, \epsilon}(\mathbb{Q}_n)$ (resp. $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$) is an $\mathcal{O}[\Gamma/\Gamma_n]$-module (resp. $\Lambda_{\mathcal{O}}$-module). It is easy to see that the restriction map $$H^1(\mathbb{Q}_n, D)\rightarrow H^1(\mathbb{Q}_\infty, D)^{\Gamma_n}$$ induces a map between Selmer groups $$\iota_n:S_{\chi, \epsilon}(\mathbb{Q}_n)\rightarrow S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma_n}.$$ The following control theorem shows that this map is injective with cokernel independent of $n$. **Theorem 16** (Greenberg and Vatsal -- Control theorem). *Suppose that $\chi$ is nontrivial. Then, $\iota_n$ fits into a short exact sequence of $\Gamma/\Gamma_n$-modules $$0\rightarrow S_{\chi, \epsilon}(\mathbb{Q}_n)\rightarrow S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma_n}\rightarrow H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})\rightarrow 0.$$* *Proof.* The above result is [@greenbergvatsalArtinrepns Proposition 4.1]. ◻ We note that $H^0(\Delta_\mathfrak{p}, D/D^{\epsilon_\mathfrak{p}})\simeq (\mathcal{F}/\mathcal{O})^t$, where $t$ is the multiplicity of the trivial representation of $\Delta_{\mathfrak{p}}$ in $V/V^{\epsilon_\mathfrak{p}}$. Here, the action of $\Gamma/\Gamma_n$ on $H^0(\Delta_\mathfrak{p}, D/D^{\epsilon_\mathfrak{p}})$ is trivial. Let $\mathcal{D}$ be a discrete $p$-primary $\Lambda_\mathcal{O}$-module. We take $\mathcal{D}^\vee$ to be the Pontryagin dual defined as follows $$\mathcal{D}^\vee:=\operatorname{Hom}_{\mathbb{Z}_p} \left(\mathcal{D}, \mathbb{Q}_p/\mathbb{Z}_p\right).$$ We say that $\mathcal{D}$ is cofinitely generated (resp. cotorsion) as a $\Lambda_\mathcal{O}$-module if $\mathcal{D}^\vee$ is finitely generated (resp. torsion) as a $\Lambda_\mathcal{O}$-module. We set $X_{\chi, \epsilon}(\mathbb{Q}_\infty)$ to be the Pontryagin dual of $S_{\chi, \epsilon} (\mathbb{Q}_\infty)$. **Theorem 17** (Greenberg--Vatsal). *With respect to the notation above, $X_{\chi, \epsilon}(\mathbb{Q}_\infty)$ is a finitely generated and torsion $\Lambda_{\mathcal{O}}$-module that contains no non-trivial finite $\Lambda_{\mathcal{O}}$-submodules.* *Proof.* We refer to [@greenbergvatsalArtinrepns Propositions 4.5 and 4.7] for the proof of the above result. ◻ Set $\mu_{\chi, \epsilon}$ and $\lambda_{\chi, \epsilon}$ to denote the $\mu$- and $\lambda$-invariants of $S_{\chi, \epsilon}(\mathbb{Q}_\infty)^\vee$ respectively. The characteristic series of $S_{\chi, \epsilon}(\mathbb{Q}_\infty)^\vee$ is denoted $f_{\chi, \epsilon}(T)$. # The Euler characteristic {#s 3} In this section, we introduce the notion of the Euler characteristic associated to a cofinitely generated and cotorsion module over $\Lambda_{\mathcal{O}}$, and introduce conditions for it to be well defined. For the Selmer group associated to an Artin representation, the Euler characteristic is shown to equal the cardinality of $S_{\chi, \epsilon}(\mathbb{Q})$. The results presented in this section have consequences for the study of the Iwasawa $\mu$- and $\lambda$-invariants associated to an Artin representation. ## Definition and properties of the Euler characteristic Let $\mathcal{D}$ be a cofinitely generated and cotorsion module over $\Lambda_{\mathcal{O}}$. Since $\Gamma$ is procyclic, it has cohomological dimension $1$, and we find that $H^i(\Gamma, \mathcal{D})=0$ for $i\geq 2$. Also note that $H^1(\Gamma, \mathcal{D})$ is identified with the module of coinvariants $\mathcal{D}_\Gamma$. The Euler characteristic defined below encodes information about the valuation of the constant term of the characteristic series. For an Artin representation that satisfies the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"}, we discuss conditions for the Euler characteristic to be well defined, and give an explicit formula for it. The Euler characteristic formula in this context can be viewed as an analogue of the result of Federer and Gross. **Proposition 18**. *Let $\mathcal{D}$ be a cofinitely generated and cotorsion $\Lambda_{\mathcal{O}}$-module. Then,* 1.
*[\[p1prop4.1\]]{#p1prop4.1 label="p1prop4.1"} $\operatorname{corank}_{\mathbb{Z}_p} H^0(\Gamma, \mathcal{D})=\operatorname{corank}_{\mathbb{Z}_p} H^1(\Gamma, \mathcal{D})$.* 2. *[\[p2prop4.1\]]{#p2prop4.1 label="p2prop4.1"} The module $H^0(\Gamma, \mathcal{D})$ is finite if and only if $H^1(\Gamma, \mathcal{D})$ is finite.* *Proof.* Since $\mathcal{D}$ is cofinitely generated as a $\Lambda_{\mathcal{O}}$-module, it follows that both $\mathcal{D}^\Gamma$ and $\mathcal{D}_\Gamma$ are cofinitely generated as $\mathbb{Z}_p$-modules. Part [\[p1prop4.1\]](#p1prop4.1){reference-type="eqref" reference="p1prop4.1"} follows from [@howson2002euler Theorem 1.1]. Part [\[p2prop4.1\]](#p2prop4.1){reference-type="eqref" reference="p2prop4.1"} follows from [\[p1prop4.1\]](#p1prop4.1){reference-type="eqref" reference="p1prop4.1"}. ◻ **Definition 19**. *Let $\mathcal{D}$ be a cofinitely generated and cotorsion $\Lambda_{\mathcal{O}}$-module. Then, we say that the Euler characteristic of $\mathcal{D}$ is well defined if $H^0(\Gamma, \mathcal{D})$ (or equivalently $H^1(\Gamma, \mathcal{D})$) is finite. Assuming that the Euler characteristic of $\mathcal{D}$ is well defined, it is given by $$\chi(\Gamma, \mathcal{D}):=\prod_{i\geq 0} \# H^i(\Gamma, \mathcal{D})^{(-1)^i}=\frac{\# H^0(\Gamma, \mathcal{D})}{\# H^1(\Gamma,\mathcal{D})}.$$* Recall from Definition [Definition 7](#def of mu and lambda){reference-type="ref" reference="def of mu and lambda"} that $f_{\mathcal{D}^\vee}(T)$ is the characteristic element of $\mathcal{D}^\vee$ and that $\mu_p(\mathcal{D}^\vee)$ (resp. $\lambda_p(\mathcal{D}^\vee)$) is the $\mu$-invariant (resp. $\lambda$-invariant). For ease of notation, we set $$f_\mathcal{D}(T):=f_{\mathcal{D}^\vee}(T), \mu_p(\mathcal{D}):=\mu_p(\mathcal{D}^\vee)\text{ and }\lambda_p(\mathcal{D}):=\lambda_p(\mathcal{D}^\vee).$$ From the expansion $$f_\mathcal{D}(T)=\sum_{i=0}^\lambda a_i T^i,$$ we note that $\lambda=\lambda_p(\mathcal{D})$. Moreover, $\mu_p(\mathcal{D})=0$ if and only if $\varpi \nmid a_i$ for at least one coefficient $a_i$. In particular, we have that $\varpi\nmid f_\mathcal{D}(0)$ if and only if $\mu_p(\mathcal{D})=\lambda_p(\mathcal{D})=0$. Fix an absolute value $|\cdot |_\varpi$ on $\mathcal{O}$ normalized by setting $$|\varpi|_\varpi:=[\mathcal{O}:\varpi\mathcal{O}]^{-1}.$$ **Proposition 20**. *Let $\mathcal{D}$ be a cofinitely generated and cotorsion $\Lambda_{\mathcal{O}}$-module. Then, with respect to the above notation, the following conditions are equivalent* 1. *$f_\mathcal{D}(0)\neq 0$,* 2. *the Euler characteristic $\chi(\Gamma, \mathcal{D})$ is well defined.* *Moreover, if the above equivalent conditions are satisfied, then, $$\chi(\Gamma, \mathcal{D})= |f_\mathcal{D}(0)|_\varpi^{-1}.$$* *Proof.* It is easy to see that if $\mathcal{D}$ and $\mathcal{D}'$ are cofinitely generated $\Lambda_{\mathcal{O}}$-modules that are pseudo-isomorphic, then $\mathcal{D}^\Gamma$ is finite if and only if $(\mathcal{D}')^\Gamma$ is finite. Therefore, $\chi(\Gamma, \mathcal{D})$ is well defined if and only if $\chi(\Gamma, \mathcal{D}')$ is well defined. The characteristic series is determined up to pseudo-isomorphism, and therefore, $f_\mathcal{D}(T)=f_{\mathcal{D}'}(T)$.
We assume without loss of generality that $$\mathcal{D}^\vee=\left(\bigoplus_{i=1}^s\frac{\mathcal{O}\llbracket T\rrbracket}{(\varpi^{\mu_i})}\right) \oplus \left(\bigoplus_{j=1}^t \frac{\mathcal{O}\llbracket T\rrbracket}{(f_j(T)^{\lambda_j})}\right).$$ We identify $\left(\mathcal{D}^\Gamma\right)^\vee$ with $$\left(\mathcal{D}^\vee\right)_\Gamma=\mathcal{D}^\vee/T\mathcal{D}^\vee \simeq \left(\bigoplus_{i=1}^s\mathcal{O}/(\varpi^{\mu_i})\right) \oplus \left(\bigoplus_{j=1}^t \mathcal{O}/(f_j(0)^{\lambda_j})\right).$$ Since $$f_\mathcal{D}(0)=\prod_i \varpi^{\mu_i}\times \prod_j f_j(0)^{\lambda_j},$$ we deduce that $$\# H^0(\Gamma, \mathcal{D})=\# H^1(\Gamma, \mathcal{D}^\vee)= \# \left(\mathcal{D}^\vee\right)_\Gamma$$ is finite if and only if $f_\mathcal{D}(0)$ is non-zero. Therefore, the Euler characteristic is well defined if and only if $f_\mathcal{D}(0)$ is non-zero. Assuming that $f_\mathcal{D}(0)\neq 0$, we calculate the Euler characteristic. First, we note that the previous argument implies that $$\# H^0(\Gamma, \mathcal{D})= |f_\mathcal{D}(0)|_\varpi^{-1}.$$ On the other hand, we find that $$\# H^1(\Gamma, \mathcal{D})=\# H^0(\Gamma, \mathcal{D}^\vee)=\# \left(\mathcal{D}^\vee\right)^\Gamma.$$ Identify $(\mathcal{D}^\vee)^\Gamma$ with the kernel of the multiplication by $T$ endomorphism of $\mathcal{D}^\vee$. Since $f_{\mathcal{D}}(0)$ is assumed to be non-zero, it follows that none of the terms $f_j(T)$ are divisible by $T$. It follows from this that the multiplication by $T$ map is injective and hence, $(\mathcal{D}^\vee)^\Gamma=0$. Thus it has been shown that $$\chi(\Gamma, \mathcal{D})=|f_\mathcal{D}(0)|_\varpi^{-1}.$$ ◻ **Corollary 21**. *Suppose that the Euler characteristic of $\mathcal{D}$ is well defined. Then the following conditions are equivalent* 1. *[\[p1cor4.4\]]{#p1cor4.4 label="p1cor4.4"} $\chi(\Gamma, \mathcal{D})=1$,* 2. *[\[p2cor4.4\]]{#p2cor4.4 label="p2cor4.4"} $\mu_p(\mathcal{D})=0$ and $\lambda_p(\mathcal{D})=0$.* *Proof.* It is easy to see that $\mu_p(\mathcal{D})=0$ and $\lambda_p(\mathcal{D})=0$ if and only if $f_\mathcal{D}(T)$ is a unit in $\Lambda_\mathcal{O}$. Thus, the condition [\[p2cor4.4\]](#p2cor4.4){reference-type="eqref" reference="p2cor4.4"} is equivalent to the condition that $\varpi\nmid f_\mathcal{D}(0)$. According to Proposition [Proposition 20](#prop4.3){reference-type="ref" reference="prop4.3"}, $$\chi(\Gamma, \mathcal{D})=|f_\mathcal{D}(0)|_\varpi^{-1}.$$ Therefore $\chi(\Gamma, \mathcal{D})=1$ if and only if $f_\mathcal{D}(0)$ is a unit in $\mathcal{O}$. ◻ ## Calculating the Euler characteristic Let $\rho$ be an Artin representation with character $\chi$ and set $\epsilon$ to be the chosen character with respect to which the Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ is defined. We shall assume throughout that Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} is satisfied. **Proposition 22**. *With respect to the above notation, the following conditions are equivalent* 1. *the Euler characteristic $\chi(\Gamma, S_{\chi, \epsilon}(\mathbb{Q}_\infty))$ is defined,* 2. *$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$.* *Proof.* The Euler characteristic is defined if and only if $S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma}$ is finite.
By the control theorem, we have the short exact sequence $$0\rightarrow S_{\chi, \epsilon}(\mathbb{Q})\rightarrow S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma}\rightarrow H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})\rightarrow 0,$$ where $H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})$ is a cofree $\mathcal{O}$-module. According to Theorem [Theorem 2](#GV finiteness){reference-type="ref" reference="GV finiteness"}, the Selmer group $S_{\chi, \epsilon}(\mathbb{Q})$ is finite. Therefore, the Euler characteristic is well defined if and only if $H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$. ◻ **Theorem 23**. *Let $\rho$ be an Artin representation for which* 1. *the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied.* 2. *$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$.* *Then, the Euler characteristic is given by $$\chi(\Gamma, S_{\chi, \epsilon}(\mathbb{Q}_\infty))=\# S_{\chi, \epsilon}(\mathbb{Q}).$$* *Proof.* According to Proposition [Proposition 22](#well defined EC){reference-type="ref" reference="well defined EC"}, the Euler characteristic is well defined. We are to calculate the orders of the finite abelian groups $S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma}$ and $S_{\chi, \epsilon}(\mathbb{Q}_\infty)_{\Gamma}$. According to Theorem [Theorem 17](#no finite submodules){reference-type="ref" reference="no finite submodules"}, $X_{\chi, \epsilon}(\mathbb{Q}_\infty):=S_{\chi, \epsilon}(\mathbb{Q}_\infty)^\vee$ does not contain any nontrivial finite $\Lambda_{\mathcal{O}}$-submodules. Since the Euler characteristic is well defined, $X_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma}$ is finite, and hence is zero. In other words, $S_{\chi, \epsilon}(\mathbb{Q}_\infty)_{\Gamma}=0$. On the other hand, it follows from Theorem [Theorem 16](#control theorem){reference-type="ref" reference="control theorem"} that $$S_{\chi, \epsilon}(\mathbb{Q}_\infty)^{\Gamma}=S_{\chi, \epsilon}(\mathbb{Q}).$$Therefore, we find that $\chi(\Gamma, S_{\chi, \epsilon}(\mathbb{Q}_\infty))=\# S_{\chi, \epsilon}(\mathbb{Q})$. ◻ **Corollary 24**. *Let $\rho$ be an Artin representation and $\epsilon:\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_p)\rightarrow \mathcal{O}^\times$ be a character for which the following assumptions are satisfied.* 1. *The conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied.* 2. *$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$.* *Then, the following conditions are equivalent* 1. *[\[c1cor4.7\]]{#c1cor4.7 label="c1cor4.7"} $S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$,* 2. *[\[c2cor4.7\]]{#c2cor4.7 label="c2cor4.7"} $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ is finite,* 3. *[\[c3cor4.7\]]{#c3cor4.7 label="c3cor4.7"} $\mu_{\chi, \epsilon}=\lambda_{\chi, \epsilon}=0$,* 4. *[\[c4cor4.7\]]{#c4cor4.7 label="c4cor4.7"} $\chi(\Gamma, S_{\chi, \epsilon}(\mathbb{Q}_\infty))=1$,* 5. *[\[c5cor4.7\]]{#c5cor4.7 label="c5cor4.7"} $S_{\chi, \epsilon}(\mathbb{Q})=0$.* *Proof.* It is clear that [\[c1cor4.7\]](#c1cor4.7){reference-type="eqref" reference="c1cor4.7"} implies [\[c2cor4.7\]](#c2cor4.7){reference-type="eqref" reference="c2cor4.7"}. On the other hand, by Theorem [Theorem 17](#no finite submodules){reference-type="ref" reference="no finite submodules"}, $X_{\chi, \epsilon}(\mathbb{Q}_\infty)$ contains no nontrivial finite $\Lambda_\mathcal{O}$-submodules. Hence, if $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ is finite, it must be equal to $0$. 
Therefore, [\[c1cor4.7\]](#c1cor4.7){reference-type="eqref" reference="c1cor4.7"} and [\[c2cor4.7\]](#c2cor4.7){reference-type="eqref" reference="c2cor4.7"} are equivalent. We note that a $\Lambda_\mathcal{O}$-module $M$ is finite if and only if it is pseudo-isomorphic to the trivial module. On the other hand, $M\sim 0$ if and only if the $\mu$- and $\lambda$-invariants of $M$ are $0$. Therefore [\[c2cor4.7\]](#c2cor4.7){reference-type="eqref" reference="c2cor4.7"} and [\[c3cor4.7\]](#c3cor4.7){reference-type="eqref" reference="c3cor4.7"} are equivalent. The equivalence of [\[c3cor4.7\]](#c3cor4.7){reference-type="eqref" reference="c3cor4.7"} and [\[c4cor4.7\]](#c4cor4.7){reference-type="eqref" reference="c4cor4.7"} follows from Corollary [Corollary 21](#cor4.4){reference-type="ref" reference="cor4.4"}, and the equivalence of [\[c4cor4.7\]](#c4cor4.7){reference-type="eqref" reference="c4cor4.7"} and [\[c5cor4.7\]](#c5cor4.7){reference-type="eqref" reference="c5cor4.7"} follows from Proposition [Proposition 20](#prop4.3){reference-type="ref" reference="prop4.3"}. ◻ ## The structure of the Selmer group $S_{\chi, \epsilon}(\mathbb{Q})$ In this subsection, we analyze the structure of $S_{\chi, \epsilon}(\mathbb{Q})$ and establish conditions for the vanishing of $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$. We set $d(\chi)$ to denote $d$ and write $d(\chi)=d^+(\chi)+d^-(\chi)$, reflecting the action of the archimedean places on the representation. At each prime $\mathfrak{p}|p$, let $\mathcal{U}_\mathfrak{p}$ denote the group of principal units at $\mathfrak{p}$, and set $\mathcal{U}_p$ to denote the product $\prod_{\mathfrak{p}|p} \mathcal{U}_\mathfrak{p}$. The decomposition group $\Delta_{\mathfrak{p}}$ naturally acts on $\mathcal{U}_{\mathfrak{p}}$, and $\mathcal{U}_p$ is identified with the induced representation $\operatorname{Ind}_{\Delta_\mathfrak{p}}^\Delta \mathcal{U}_{\mathfrak{p}}$. Let $U_K$ be the group of units of $\mathcal{O}_K$ that are principal units at all primes $\mathfrak{p}|p$. The diagonal inclusion map $$U_K\hookrightarrow \mathcal{U}_p$$ is a $\Delta$-equivariant map, and induces a $\mathbb{Z}_p$-linear map $$\lambda_p: U_K\otimes \mathbb{Z}_p\rightarrow \mathcal{U}_p.$$ We recall that the character $\epsilon$ arises from a choice of prime $\mathfrak{p}|p$ and character $\epsilon_\mathfrak{p}: \Delta_\mathfrak{p}\rightarrow \mathcal{O}^\times$. Let $\mathfrak{p}'$ be any other prime above $p$, and write $\mathfrak{p}'=\delta(\mathfrak{p})$ for $\delta\in \Delta$. Then conjugation by $\delta$ gives an isomorphism $c_\delta:\Delta_{\mathfrak{p}'}\xrightarrow{\sim} \Delta_\mathfrak{p}$. We set $\epsilon_{\mathfrak{p}'}$ to denote the composite $$\Delta_{\mathfrak{p}'}\xrightarrow{c_\delta} \Delta_\mathfrak{p}\xrightarrow{\epsilon_\mathfrak{p}}\mathcal{O}^\times.$$ Thus for each prime $\mathfrak{p}|p$, we have made a choice of character $\epsilon_\mathfrak{p}$ of $\Delta_\mathfrak{p}$. For $\mathfrak{p}|p$, setting $\mathcal{U}_{\mathfrak{p}, \mathcal{O}}:=\mathcal{U}_{\mathfrak{p}}\otimes_{\mathbb{Z}_p}\mathcal{O}$, we have a decomposition of $\mathcal{O}[\Delta_\mathfrak{p}]$-modules, $$\mathcal{U}_{\mathfrak{p}, \mathcal{O}}=\mathcal{U}_{\mathfrak{p}, \mathcal{O}}^{\epsilon_\mathfrak{p}}\times \mathcal{V}_{\mathfrak{p}, \mathcal{O}},$$ where the action of $\Delta_\mathfrak{p}$ on $\mathcal{U}_{\mathfrak{p}, \mathcal{O}}^{\epsilon_\mathfrak{p}}$ is via $\epsilon_\mathfrak{p}$.
We use the following notation $$\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}:=\prod_{\mathfrak{p}|p} \mathcal{U}_{\mathfrak{p}, \mathcal{O}}^{\epsilon_\mathfrak{p}}, \text{ and } \mathcal{V}_{p, \mathcal{O}}=\prod_{\mathfrak{p}|p} \mathcal{V}_{\mathfrak{p}, \mathcal{O}}.$$ Then, we find that $$\mathcal{U}_{p, \mathcal{O}}=\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\times \mathcal{V}_{p, \mathcal{O}}.$$ The above decomposition is that of $\mathcal{O}[\Delta]$-modules. In the same way, $\bar{U}_{p,\mathcal{O}}$ decomposes into a product $$\bar{U}_{p, \mathcal{O}}=\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\times \bar{V}_{p, \mathcal{O}}.$$ Let $\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$ (resp. $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$) denote the $\chi$-isotypic component of $\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)$ (resp. $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)$). Greenberg and Vatsal show that $\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$ has finite index in $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$, provided the assertions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied. **Proposition 25**. *With respect to the notation above, suppose that Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} is satisfied. Then, the following assertions hold* 1. *$\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$ has finite index in $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$,* 2. *there is a short exact sequence of $\mathcal{O}$-modules $$0\rightarrow H^1_{\operatorname{nr}}(\mathbb{Q}, D)\rightarrow S_{\chi, \epsilon}(\mathbb{Q})\rightarrow \mathrm{Hom}_{\mathcal{O}[\Delta]}\left(\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi/\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi, D\right)\rightarrow 0.$$ Here, $H^1_{\operatorname{nr}}(\mathbb{Q}, D)$ is the subgroup of $H^1(\mathbb{Q},D)$ consisting of cohomology classes that are unramified at all primes.* *Proof.* The result follows from the proof of [@greenbergvatsalArtinrepns Proposition 3.1]. ◻ **Theorem 26**. *Assume that* 1. *the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied.* 2. *$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$.* *Then, the following conditions are equivalent* 1. *$H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$ and $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi=\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$.* 2. *$\mu_{\chi, \epsilon}=\lambda_{\chi, \epsilon}=0$.* 3. *$S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$.* *Proof.* The result follows as a direct consequence of Corollary [Corollary 24](#cor 4.7){reference-type="ref" reference="cor 4.7"} and Proposition [Proposition 25](#prop4.8){reference-type="ref" reference="prop4.8"}. ◻ **Lemma 27**. *With respect to the notation above, assume that $p\nmid \# \operatorname{Cl}(K)$. Then, we find that $H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$.* *Proof.* Since $p\nmid \# \Delta$, we find that $H^1(\Delta, D^{\operatorname{G}_K})=0$, and thus from the inflation-restriction sequence, $H^1(\mathbb{Q}, D)$ injects into $\operatorname{Hom}\left(\operatorname{G}_K^{\operatorname{ab}}, D\right)^\Delta$. Let $H_K$ be the Hilbert class field of $K$. We find that $H^1_{\operatorname{nr}}(\mathbb{Q}, D)$ injects into $\operatorname{Hom}(\operatorname{Gal}(H_K/K), D)^\Delta$. 
Since $p\nmid \#\operatorname{Cl}(K)$, we find that $$\operatorname{Hom}(\operatorname{Gal}(H_K/K), D)=0,$$ and hence, $H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$. ◻ **Theorem 28**. *Assume that* 1. *the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied.* 2. *$H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$,* 3. *$K$ is $p$-rational,* 4. *$p\nmid \#\operatorname{Cl}(K)$.* *Then, $S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$.* *Proof.* Since $p\nmid \# \operatorname{Cl}(K)$, it follows from Lemma [Lemma 27](#lemma4.10){reference-type="ref" reference="lemma4.10"} that $H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$. Let $G_p$ be the Galois group over $K$ of the maximal pro-$p$ extension of $K$ which is unramified outside $p$. Since $p\nmid \# \operatorname{Cl}(K)$, we find that $G_p^{\operatorname{ab}}$ is isomorphic to $\mathcal{U}_p/\bar{U}$. On the other hand, $K$ is $p$-rational if and only if the Leopoldt conjecture is true at $p$ and there is no $p$-torsion in $G_p^{\operatorname{ab}}$ (cf. [@maire2020note] or [@movahhedi1990arithmetique]). This implies that $$\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi/\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi=0.$$ The result then follows from Theorem [Theorem 26](#conditions GV){reference-type="ref" reference="conditions GV"}. ◻ ## Special cases In this subsection, we consider certain special cases. First, we consider some $2$-dimensional Artin representations of dihedral type. Let $(L, \zeta)$ be a pair, where $L$ is an imaginary quadratic extension of $\mathbb{Q}$ and $\zeta:\operatorname{G}_L\rightarrow \bar{\mathbb{Q}}^\times$ is a character. Consider the $2$-dimensional Artin representation $$\rho=\rho_{(L, \zeta)}:=\operatorname{Ind}_{\operatorname{G}_L}^{\operatorname{G}_\mathbb{Q}}\zeta.$$ When restricted to $L$, the representation $\rho$ decomposes into a direct sum of characters $$\rho_{|\operatorname{G}_L}= \left( {\begin{array}{cc} \zeta & \\ & \zeta' \\ \end{array} } \right).$$ Here, $\zeta'$ is given as follows $$\zeta'(x)=\zeta(c x c^{-1}),$$ where $c$ denotes the complex conjugation. We remark that the representation $\rho$ is a self-dual representation of dihedral type if and only if $\zeta'=\zeta^{-1}$. Let $p$ be an odd prime number which splits in $L$ as a product $p\mathcal{O}_L=\mathfrak{p}\mathfrak{p}^*$. We set $\epsilon_\mathfrak{p}:=\zeta_{|\operatorname{G}_\mathfrak{p}}$ and note that $\epsilon_{\mathfrak{p}^*}=\zeta'_{|\operatorname{G}_\mathfrak{p}}$. The field $K=L(\zeta, \zeta')$ is the extension of $L$ generated by $\zeta$ and $\zeta'$. Choose an extension $\mathcal{F}/\mathbb{Q}_p$ containing $\mathbb{Q}_p(\chi, \epsilon_\mathfrak{p})$ and set $\mathcal{O}$ to denote its valuation ring, and $D$ the associated $\mathcal{O}$-divisible module. With respect to such a choice, we let $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ be the associated Selmer group over the cyclotomic $\mathbb{Z}_p$-extension. **Theorem 29**. *With respect to notation above, assume that the following conditions hold* 1. *$\rho$ is irreducible,* 2. *the order of $\zeta$ is coprime to $p$,* 3. *$\epsilon_\mathfrak{p}$ is nontrivial and $\epsilon_\mathfrak{p}\neq \epsilon_{\mathfrak{p}^*}$.* *Then, the following conditions are equivalent* 1. *$H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$ and $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi=\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$.* 2. *$\mu_{\chi, \epsilon}=\lambda_{\chi, \epsilon}=0$.* 3. 
*$S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$.* *If $p\nmid h_K$ and $K$ is $p$-rational, then, the above conditions are satisfied.* *Proof.* The result follows from Theorem [Theorem 26](#conditions GV){reference-type="ref" reference="conditions GV"} once we show that 1. the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied. 2. $H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$. Consider the conditions for Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} to hold. 1. The field $K$ is the extension of $L$ cut out by $\zeta$ and $\zeta'$. Since the order of $\zeta$ is assumed to be coprime to $p$, the same holds for $\zeta'$. Since $p$ is odd, it follows that $p\nmid [K:\mathbb{Q}]$. 2. Since $L$ is an imaginary quadratic field, complex conjugation acts via the matrix $\left( {\begin{array}{cc} & 1 \\ 1 & \\ \end{array} } \right)$, whose eigenvalues are $1$ and $-1$ respectively. Therefore, $d^+=d^-=1$. 3. The third condition follows from the assumption that $\epsilon_\mathfrak{p}$ is nontrivial and $\epsilon_\mathfrak{p}\neq \epsilon_{\mathfrak{p}^*}$. Finally, we note that since $\epsilon_{\mathfrak{p}^*}$ is nontrivial, we have $H^0(\Delta_\mathfrak{p}, D/D^{\epsilon_\mathfrak{p}})=0$. By the proof of Theorem [Theorem 28](#cor 3.10){reference-type="ref" reference="cor 3.10"}, if $p\nmid h_K$ and $K$ is $p$-rational, then $H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$ and $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi=\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$, and hence the above conditions are satisfied. ◻ Next, we consider the group of rotational symmetries of the icosahedron. This group is simply $A_5$. An orthogonal set is a set of $6$ points that can be grouped into three pairs so that the three lines joining the pairs are pairwise orthogonal. The midpoints of the $30$ edges of an icosahedron can be partitioned into $5$ orthogonal sets such that these sets are permuted by $A_5$. Let $g\in A_5$ be an element that corresponds to the rotation of the icosahedron with axis $(x,y,z)$ and angle $\theta$. Then the corresponding matrix is $$\left( {\begin{array}{ccc} \cos \theta+(1-\cos \theta)x^2 & (1-\cos \theta)xy-z\sin \theta & (1-\cos \theta)xz+y\sin \theta\\ (1-\cos \theta)xy+z\sin \theta & \cos \theta+(1-\cos \theta)y^2 & (1-\cos \theta)yz-x\sin \theta \\ (1-\cos \theta)xz-y\sin \theta & (1-\cos \theta)yz+x\sin \theta & \cos \theta+(1-\cos \theta)z^2 \end{array} } \right).$$ This gives a representation $$r:A_5\rightarrow \operatorname{GL}_3(\bar{\mathbb{Q}}).$$ Let $K/\mathbb{Q}$ be a Galois extension with Galois group $\operatorname{Gal}(K/\mathbb{Q})$ isomorphic to $A_5$, and $\varrho$ be the composite $$\varrho:\operatorname{G}_\mathbb{Q}\rightarrow \operatorname{Gal}(K/\mathbb{Q})\xrightarrow{\sim} A_5\xrightarrow{r}\operatorname{GL}_3(\bar{\mathbb{Q}}),$$ where the initial map is the quotient map. Any $5$-cycle corresponds to a rotation by an angle of $\frac{2\pi}{5}$, and therefore has eigenvalues $e^{\frac{2\pi i}{5}}, e^{-\frac{2\pi i}{5}}, 1$. Let $g\in A_5$ be a $5$-cycle and $D$ be the group generated by $g$, and $L:=K^D$ be the field fixed by $D$. Let $p\geq 7$ be a prime which is split in $L$ and is inert in $K/L$. Then, the restriction to the decomposition group at $p$ is of the form $\operatorname{diag}\left(\alpha_p, \alpha_p^{-1}, 1\right)$, where $\alpha_p$ is an unramified character of order $5$. 
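To make the rotation matrices and the eigenvalue computation above concrete, the following short numerical sketch (in Python; the chosen rotation axis, the normalization step and the use of numpy are our own illustrative assumptions and not part of the construction) builds the displayed matrix for the angle $\theta=\frac{2\pi}{5}$ and confirms that its eigenvalues are $e^{\frac{2\pi i}{5}}$, $e^{-\frac{2\pi i}{5}}$ and $1$, so that its trace equals $1+2\cos\frac{2\pi}{5}$.

```python
import numpy as np

def rotation_matrix(axis, theta):
    # Rotation matrix for a unit axis (x, y, z) and angle theta,
    # matching the displayed formula above.
    x, y, z = axis / np.linalg.norm(axis)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c + (1 - c) * x * x, (1 - c) * x * y - z * s, (1 - c) * x * z + y * s],
        [(1 - c) * x * y + z * s, c + (1 - c) * y * y, (1 - c) * y * z - x * s],
        [(1 - c) * x * z - y * s, (1 - c) * y * z + x * s, c + (1 - c) * z * z],
    ])

theta = 2 * np.pi / 5
R = rotation_matrix(np.array([1.0, 1.0, 1.0]), theta)       # hypothetical axis
print(np.sort_complex(np.round(np.linalg.eigvals(R), 6)))   # exp(+-2*pi*i/5) and 1
print(np.isclose(np.trace(R), 1 + 2 * np.cos(theta)))       # trace = 1 + 2*cos(theta)
```

The same computation with $\theta=\frac{4\pi}{5}$ gives the other pair of primitive fifth roots of unity; in either case one obtains a character of order $5$ on the group generated by a $5$-cycle.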
Let $\psi:\operatorname{G}_\mathbb{Q}\rightarrow \mathcal{O}^\times$ be an even character which is ramified at $p$, and set $\rho:=\varrho\otimes \psi$. **Theorem 30**. *With respect to the above notation, the following conditions are equivalent* 1. *$H^1_{\operatorname{nr}}(\mathbb{Q}, D)=0$ and $\left(\mathcal{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi=\left(\bar{U}_{p, \mathcal{O}}^{[\epsilon]}\right)^\chi$.* 2. *$\mu_{\chi, \epsilon}=\lambda_{\chi, \epsilon}=0$.* 3. *$S_{\chi, \epsilon}(\mathbb{Q}_\infty)=0$.* *If $p\nmid h_K$ and $K$ is $p$-rational, then, the above conditions are satisfied.* *Proof.* As in the proof of Theorem [Theorem 29](#thm 4.12){reference-type="ref" reference="thm 4.12"}, the result follows from Theorem [Theorem 26](#conditions GV){reference-type="ref" reference="conditions GV"} once we show that 1. the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} are satisfied. 2. $H^0(\Delta_{\mathfrak{p}}, D/D^{\epsilon_\mathfrak{p}})=0$. Let us verify the conditions of Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"}. 1. Since it is assumed that $p\geq 7$, it follows that $p\nmid \# A_5$, and hence, $p\nmid [K:\mathbb{Q}]$. 2. Let $v$ be an archimedean prime, and $\sigma_v$ be a generator of the decomposition group at $v$. Then, $\rho(\sigma_v)=\varrho(\sigma_v) \psi(\sigma_v)=\varrho(\sigma_v)$. Since $\varrho$ only gives rise to rotational symmetries and $\varrho(\sigma_v)$ has order $2$, it must correspond to the rotation with $\theta=\pi$. This means that it is conjugate to $\operatorname{diag}(1,-1,-1)$, and hence, $d^+=1$. 3. For the third condition, it has been arranged that $\varrho_{|\Delta_\mathfrak{p}}=\operatorname{diag}(\alpha_p, \alpha_p^{-1}, 1)$, where $\alpha_p$ is an unramified character of order $5$, and hence, $\epsilon_p:=\alpha_p \psi$ is a ramified character which occurs with multiplicity $1$. Finally, we note that $H^0(\Delta_\mathfrak{p}, D/D^{\epsilon_p})=0$ since the characters $\alpha_p^{-1}\psi$ and $\psi$ are nontrivial. This is because $\psi$ is ramified at $p$, while $\alpha_p$ is not. ◻ # Distribution questions for Artin representations {#s 4} ## Iwasawa theory of class groups In this section, we introduce and study some natural distribution questions for Iwasawa invariants of class groups. These discussions serve to motivate similar questions for Artin representations, which we discuss in the next subsection. Given a number field $K$ and a prime number $p$, the Iwasawa invariants $\mu_p(K)$ and $\lambda_p(K)$ are natural invariants associated with the growth of the $p$-primary parts of class groups in the cyclotomic $\mathbb{Z}_p$-extension of $K$. One considers the following question. **Question 31**. *Given an imaginary quadratic field $K/\mathbb{Q}$, how do $\mu_p(K)$ and $\lambda_p(K)$ vary as $p\rightarrow \infty$? In other words, for a given value $\lambda$, what can be said about the upper and lower densities of the set of primes $p$ for which $$\lambda_p(K)=\lambda?$$* We call the above set $\mathcal{F}_{ \lambda}$, and its upper (resp. lower) density $\bar{\mathfrak{d}}_{\lambda}$ (resp. $\underline{\mathfrak{d}}_{\lambda}$). We note that if $p$ is inert or ramified in $K$ and $p\nmid h_K$, then, $h_p(K_n)=0$ for all $n$. In particular, $\lambda_p(K)=0$. This implies in particular that $\underline{\mathfrak{d}}_{0}\geq \frac{1}{2}$. On the other hand, if $p$ splits in $K$, then, $r_{p,K}\geq 1$ and thus, $\lambda_p(K)\geq 1$. 
This implies that $\mathfrak{d}_0=\frac{1}{2}$. Let $p$ be a prime which splits in $K$ and for which $p\nmid h_K$. The former condition is satisfied by a set of primes of density $\frac{1}{2}$ and the latter condition is satisfied for all but finitely many primes. The following result is a corollary to Gold's criterion. **Corollary 32**. *Let $p$ be an odd prime number which splits into $\mathfrak{p}\mathfrak{p}^*$ in $\mathcal{O}_K$. Suppose that $r>1$ is an integer not divisible by $p$ and such that $\mathfrak{p}^r=(\alpha)$. Then, the following conditions are equivalent* 1. *$\lambda_p(K)>1$,* 2. *$\operatorname{Tr}(\alpha)^{p-1}\equiv 1\mod{p^2}$.* Indeed, we find that for each number $a_0\in [1, p-1]$, there is precisely one congruence class $a$ modulo $p^2$ which is congruent to $a_0$ modulo $p$ and satisfies $a^{p-1}\equiv 1\mod{p^2}$. Therefore, the probability that an integer $a\in \mathbb{Z}/p^2\mathbb{Z}$ satisfies the congruence $a^{p-1}\equiv 1\mod{p^2}$ is $\frac{p-1}{p^2}=\frac{1}{p}-\frac{1}{p^2}$. Let $N_K(X)$ be the number of primes $p\leq X$ such that $p$ is split in $K$ and $\lambda_p(K)>1$. Thus, the heuristic suggests that $$N_K(X)\sim \sum_{p\in \Omega'(X)} \left(\frac{1}{p}-\frac{1}{p^2}\right)\sim \frac{1}{2}\log \log X,$$ where $\Omega'(X)$ is the set of primes $p\leq X$ that split in $K$. This leads us to the following expectation. **Conjecture 33**. *With $N_K(X)$ denoting, as above, the number of primes $p\leq X$ that split in $K$ and for which $\lambda_p(K)>1$, we have $N_K(X)= \frac{1}{2}\log \log X+O(1)$.* In particular, the conjecture predicts that the set of primes $p$ for which $\lambda_p(K)>1$ is infinite and of density $0$, and thus that $\mathfrak{d}_{1}=\frac{1}{2}$. Horie [@horie1987note] has proven the infinitude of primes $p$ such that $\lambda_p(K)>1$, while Jochnowitz [@jochnowitz1994p] has proven the infinitude of primes $p$ for which $\lambda_p(K)=1$. On the other hand, one can fix a prime and study the variation of $\lambda_p(K)$ as $K$ ranges over all imaginary quadratic fields. We separate this problem into two cases, namely that of imaginary quadratic fields in which $p$ is inert, and those in which $p$ splits. For the primes $p$ that are inert in $K$, Ellenberg, Jain and Venkatesh [@ellenberg2011modeling] make the following prediction based on random matrix heuristics. **Conjecture 34** (Ellenberg, Jain, Venkatesh). *Amongst all imaginary quadratic fields $K$ in which $p$ is inert, the proportion for which $\lambda_p(K)=r$ is equal to $$p^{-r}\prod_{t>r}\left(1-p^{-t}\right).$$* We note that $\lambda_p(K)=0$ if and only if $p\nmid h_K$. The probability that $p\nmid h_K$ is, according to the Cohen-Lenstra heuristic, predicted to be equal to $$\prod_{t>0}\left(1-p^{-t}\right).$$ ## Artin representations Given an Artin representation $\rho$, we are interested in understanding the variation of the $\mu$- and $\lambda$-invariants as $p$ ranges over all prime numbers. We specialize our discussion to odd $2$-dimensional Artin representations of dihedral type $$\rho=\operatorname{Ind}_{\operatorname{G}_L}^{\operatorname{G}_\mathbb{Q}} \zeta,$$ where $L$ is an imaginary quadratic field and $\zeta:\operatorname{G}_L\rightarrow \bar{\mathbb{Q}}^\times$ is a character. Let $\zeta'$ be the character defined by $\zeta'(\sigma)=\zeta(c \sigma c^{-1})$, where $c\in \operatorname{G}_\mathbb{Q}$ denotes the complex conjugation. 
Since it is assumed that $\rho$ is of dihedral type, it follows that $\zeta'=\zeta^{-1}$ and the extension $K$ is simply the extension of $L$ that is fixed by the kernel of $\zeta$. Furthermore, assume that $2\nmid [K:L]$. Let $S(\rho)$ be the set of primes $p$ such that 1. $p$ is odd and $p\nmid [K:\mathbb{Q}]$, 2. $p$ splits in $L$, $p\mathcal{O}_L=\pi \pi^*$, and $\pi$ is inert in $K/L$. For $p\in S(\rho)$, it is then clear from the assumptions that Assumption [Assumption 1](#main ass){reference-type="ref" reference="main ass"} holds, and that $H^0(\Delta_\mathfrak{p}, D/D^{\epsilon_\mathfrak{p}})=0$ for any of the primes $\mathfrak{p}|p$ of $K$ and the character $\epsilon_\mathfrak{p}=\zeta_{|\Delta_\mathfrak{p}}$. Let $\epsilon$ be the character associated to this choice of $\mathfrak{p}$ and $\epsilon_\mathfrak{p}$. **Definition 35**. *Let $T(\rho)$ be the set of primes $p\in S(\rho)$ such that the Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_\infty)$ vanishes for all of the primes $\mathfrak{p}$ that lie above $p$. Set $T'(\rho):=S(\rho)\backslash T(\rho)$.* We note that Theorem [Theorem 28](#cor 3.10){reference-type="ref" reference="cor 3.10"} implies that if $p\in S(\rho)$ and $K$ is $p$-rational, then $p\in T(\rho)$. Let us recall some heuristics on the $p$-rationality condition. The $\mathbb{Z}_p$-rank of $\mathcal{U}_p$ is $n=[K:\mathbb{Q}]=r_1+2r_2$. On the other hand, the rank of $U_K$ is $k:=r_1+r_2-1$. The probability that a random matrix with $n$ columns and $k$ rows does not have full $\mathbb{Z}_p$-rank is $$\operatorname{Pr}_{k,n}:=1-\frac{\prod_{i=0}^{k-1}\left(p^n-p^i\right)}{p^{nk}}\leq \frac{1}{p^{n-k+1}}+\frac{1}{p^{n-k+2}}+\dots+\frac{1}{p^n}.$$ Therefore, the expected number of primes $p$ at which $K$ is not $p$-rational is finite, since according to this heuristic, $$\sum_{p} \operatorname{Pr}_{k,n}\leq \sum_{p} \left(\frac{1}{p^{r_2+2}}+\frac{1}{p^{r_2+3}}+\dots+\frac{1}{p^n}\right)\leq \zeta(r_2+2)+\zeta(r_2+3)+\dots+\zeta(n)<\infty.$$ This is indeed a conjecture due to Gras. **Conjecture 36** (Gras). *Let $K$ be a number field. Then for all large values of $p$, $K$ is $p$-rational.* This leads us to make the following conjecture. **Conjecture 37**. *With respect to the above notation, the set of primes $T'(\rho)$ is finite. Thus, there are only finitely many pairs $(\mathfrak{p}, \epsilon)$ where $\mathfrak{p}$ is a prime above $p\in S(\rho)$ and $\epsilon:\operatorname{G}_{L_\mathfrak{p}}\rightarrow \bar{\mathbb{Q}}_p^\times$ is a character, such that the associated Selmer group $S_{\chi, \epsilon}(\mathbb{Q}_\infty)\neq 0$.* There is a relationship between Gras' conjecture and the generalized abc-conjecture, stated below, cf. [@vojtaabc]. **Conjecture 38** (Generalized abc-conjecture). *Let $K$ be a number field and $I$ be an ideal in $\mathcal{O}_K$; the radical of $I$ is defined as follows $$\operatorname{Rad}(I):=\prod_{\mathfrak{p}|I} N(\mathfrak{p}),$$where the product is over all prime ideals $\mathfrak{p}$ dividing $I$, and $N(\mathfrak{p}):=\# \left(\mathcal{O}_K/\mathfrak{p}\mathcal{O}_K\right)$ is the norm of $\mathfrak{p}$. The generalized abc conjecture predicts that for any $\epsilon>0$, there exists a constant $C_{K, \epsilon}>0$ such that $$\prod_v \operatorname{max}\{|a|_v, |b|_v, |c|_v\}\leq C_{K, \epsilon} \left(\operatorname{Rad}(abc)\right)^{1+\epsilon},$$ holds for all non-zero $a,b,c\in \mathcal{O}_K$ such that $a+b=c$.* Let us recall a recent result of Maire and Rougnant, cf. [@maire2020note Theorem A]. 
**Theorem 39** (Maire-Rougnant). *Let $K/\mathbb{Q}$ be an imaginary $S_3$ extension. Then, the generalized abc-conjecture for $K$ implies that there is a constant $c>0$ such that $$\# \{p\leq x\mid K \text{ is }p\text{-rational}\}\geq c\log x.$$* The above result has implications for the vanishing of Iwasawa modules (and invariants). **Theorem 40**. *Let $K/\mathbb{Q}$ be an imaginary $S_3$ extension and $\rho$ be a $2$-dimensional Artin representation that factors through $\operatorname{Gal}(K/\mathbb{Q})$. Then, the generalized abc-conjecture for $K$ implies that there is a constant $c>0$ such that $$\# \{p\leq x\mid p\in T(\rho)\}\geq c\log x.$$* *Proof.* Theorem [Theorem 28](#cor 3.10){reference-type="ref" reference="cor 3.10"} implies that if $p\in S(\rho)$ and $K$ is $p$-rational, then $p\in T(\rho)$. The result thus follows from Theorem 39. ◻
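The random matrix heuristic recalled before Conjecture 36 can also be checked numerically. The following sketch (in Python; the particular values of $p$, $k$, $n$, the number of trials and the use of numpy are our own illustrative assumptions) compares the closed-form probability $\operatorname{Pr}_{k,n}$ that a random $k\times n$ matrix over $\mathbb{F}_p$ fails to have full rank with a Monte Carlo estimate.

```python
import numpy as np
from fractions import Fraction

def prob_not_full_rank(p, k, n):
    # Pr_{k,n} = 1 - prod_{i=0}^{k-1} (p^n - p^i) / p^{n k}   (requires k <= n)
    full = Fraction(1)
    for i in range(k):
        full *= Fraction(p**n - p**i, p**n)
    return 1 - full

def rank_mod_p(A, p):
    # Gaussian elimination over the finite field F_p
    A = A % p
    k, n = A.shape
    rank = 0
    for col in range(n):
        pivot = next((r for r in range(rank, k) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        A[rank] = (A[rank] * pow(int(A[rank, col]), -1, p)) % p
        for r in range(k):
            if r != rank and A[r, col]:
                A[r] = (A[r] - int(A[r, col]) * A[rank]) % p
        rank += 1
    return rank

def monte_carlo(p, k, n, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    fails = sum(rank_mod_p(rng.integers(0, p, size=(k, n)), p) < k
                for _ in range(trials))
    return fails / trials

p, k, n = 3, 2, 4   # hypothetical small example
print(float(prob_not_full_rank(p, k, n)), monte_carlo(p, k, n))
```

For these values both numbers are close to $0.049$, in accordance with the bound $\frac{1}{p^{n-k+1}}+\dots+\frac{1}{p^n}$.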
--- abstract: | We solve the comparison problem for generalized $\psi$-estimators introduced in Barczy and Páles (2022). Namely, we derive several necessary and sufficient conditions under which a generalized $\psi$-estimator is less than or equal to another generalized $\psi$-estimator for any sample. We also solve the corresponding equality problem for generalized $\psi$-estimators. For applications, we solve the two problems in question for Bajraktarević-type and quasi-arithmetic-type estimators. We also apply our results to some known statistical estimators such as empirical expectiles and Mathieu-type estimators, and to solving likelihood equations in case of normal, a Beta-type, Gamma, Lomax (Pareto type II), lognormal and Laplace distributions. bibliography: - psi_estimator_equal_char_bib.bib --- **Comparison and equality of generalized $\psi$-estimators** Mátyás $\text{Barczy}^{*,\diamond}$, Zsolt $\text{P\'ales}^{**}$ \* ELKH-SZTE Analysis and Applications Research Group, Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H--6720 Szeged, Hungary. \*\* Institute of Mathematics, University of Debrecen, Pf. 400, H--4002 Debrecen, Hungary. e-mail: barczy\@math.u-szeged.hu (M. Barczy), pales\@science.unideb.hu (Zs. Páles). $\diamond$ Corresponding author. [^1] [^2] [^3] # Introduction {#section_intro} In this paper, we solve the comparison and equality problems for $\psi$-estimators (also called $Z$-estimators) that have been playing an important role in statistics since the 1960s. Let $(X,{\mathcal X})$ be a measurable space, $\Theta$ be a Borel subset of $\mathbb{R}$, and $\psi:X\times\Theta\to\mathbb{R}$ be a function such that for all $t\in\Theta$, the function $X\ni x\mapsto \psi(x,t)$ is measurable with respect to the sigma-algebra ${\mathcal X}$. Let $(\xi_n)_{n\geqslant 1}$ be a sequence of i.i.d. random variables with values in $X$ such that the distribution of $\xi_1$ depends on an unknown parameter $\vartheta \in\Theta$. For each $n\geqslant 1$, Huber [@Hub64; @Hub67], among others, introduced an important estimator of $\vartheta$ based on the observations $\xi_1,\ldots,\xi_n$ as a solution $\widehat\vartheta_{n,\psi}(\xi_1,\ldots,\xi_n)$ of the equation (with respect to the unknown parameter): $$\frac{1}{n}\sum_{i=1}^n \psi(\xi_i,t)=0, \qquad t\in\Theta,$$ provided that such a solution exists. In the statistical literature, one calls $\widehat\vartheta_{n,\psi}(\xi_1,\ldots,\xi_n)$ a $\psi$-estimator of the unknown parameter $\vartheta\in\Theta$ based on the i.i.d. observations $\xi_1,\ldots,\xi_n$, while other authors call it a Z-estimator (the letter Z refers to "zero"). In fact, $\psi$-estimators are special M-estimators (where the letter M refers to "maximum likelihood-type") that were also introduced by Huber [@Hub64; @Hub67]. For a detailed exposition of M-estimators and $\psi$-estimators, see, e.g., Kosorok [@Kos Sections 2.2.5 and 13] or van der Vaart [@Vaa Section 5]. According to our knowledge, results on the comparison and the equality of $\psi$-estimators or $M$-estimators are not available in the literature. In other words, given $\psi,\varphi:X\times\Theta\to\mathbb{R}$ (with the properties described above), we are interested in finding necessary as well as sufficient conditions for the inequality $\widehat\vartheta_{n,\psi}\leqslant\widehat\vartheta_{n,\varphi}$ and for the equality $\widehat\vartheta_{n,\psi}=\widehat\vartheta_{n,\varphi}$ to be valid for all $n\geqslant 1$, respectively. 
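As a concrete illustration of this definition, the classical choices $\psi(x,t)=x-t$ and Huber's clipped difference $\psi(x,t)=\min(c,\max(-c,x-t))$ lead to the sample mean and to a robust location estimator, respectively. The following minimal sketch (in Python; the bisection solver, the sample values and the choice $c=1$ are our own illustrative assumptions) computes the corresponding $\psi$-estimators numerically.

```python
import numpy as np

def psi_estimate(psi, xs, lo=-100.0, hi=100.0, tol=1e-10):
    # Bisection for the zero / sign change of t -> sum_i psi(x_i, t),
    # which is nonincreasing in t for the two psi functions used below.
    f = lambda t: sum(psi(x, t) for x in xs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

xs = np.array([0.3, 1.1, 2.0, 2.4, 9.0])         # hypothetical sample with an outlier

mean_psi = lambda x, t: x - t                    # psi(x, t) = x - t gives the sample mean
print(psi_estimate(mean_psi, xs), xs.mean())

c = 1.0
huber_psi = lambda x, t: np.clip(x - t, -c, c)   # Huber's psi: robust to the outlier
print(psi_estimate(huber_psi, xs))
```

The first estimate coincides with the arithmetic mean $2.96$, while the Huber-type estimate (approximately $1.83$) stays close to the bulk of the sample; how such different choices of $\psi$ compare is exactly the question studied below.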
In this paper, we make the first steps to fill this gap in the case of generalized $\psi$-estimators introduced in Barczy and Páles [@BarPal2] (see also Definition [Definition. 2](#Def_Tn){reference-type="ref" reference="Def_Tn"}), which are generalizations of the $\psi$-estimators recalled above. In general linear models, many authors investigated the equality of several types of estimators of the regression parameters. For a detailed discussion of the equality problem related to the ordinary least squares estimator and the best linear unbiased estimator of the regression parameters in general linear models, see Section [2](#Section_gen_regression){reference-type="ref" reference="Section_gen_regression"}. In the rest of this section, we introduce the basic notations and concepts that are used throughout the paper. Then we present some properties of generalized $\psi$-estimators that do not appear in Barczy and Páles [@BarPal2], such as a mean-type property (see Proposition [Proposition. 4](#Pro_psi_becsles_mean_prop){reference-type="ref" reference="Pro_psi_becsles_mean_prop"}). These properties are not only interesting in their own right, but we also use them in the proofs for solving the comparison and equality problems of generalized $\psi$-estimators. The comparison problem for generalized $\psi$-estimators is solved in Section [3](#Section_comp_equal_psi_est){reference-type="ref" reference="Section_comp_equal_psi_est"} (see Theorems [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} and [Theorem. 10](#Lem_phi_est_eq_5){reference-type="ref" reference="Lem_phi_est_eq_5"}), namely, we derive several equivalent conditions in order that a $\psi$-estimator be less than or equal to another $\psi$-estimator based on any possible realization of samples of any size. Under an additional differentiability assumption, a further, easily applicable equivalent condition is derived, see Theorem [Theorem. 11](#Thm_inequality_diff){reference-type="ref" reference="Thm_inequality_diff"}. Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} is devoted to solving the equality problem for generalized $\psi$-estimators. In Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"}, we apply our results in Section [3](#Section_comp_equal_psi_est){reference-type="ref" reference="Section_comp_equal_psi_est"} in order to solve the comparison and equality problems for Bajraktarević-type estimators that were introduced in Barczy and Páles [@BarPal2] (see also [\[help14\]](#help14){reference-type="eqref" reference="help14"}). Propositions [Proposition. 19](#Pro_Baj_type_comparison){reference-type="ref" reference="Pro_Baj_type_comparison"} and [Proposition. 20](#Pro_Baj_type_comparison_2){reference-type="ref" reference="Pro_Baj_type_comparison_2"} are about the comparison problem, while Theorem [Theorem. 23](#Thm_Baj_type_equality){reference-type="ref" reference="Thm_Baj_type_equality"} and Lemma [Lemma. 24](#Lem_aux){reference-type="ref" reference="Lem_aux"} are about the equality problem for Bajraktarević-type estimators. We note that, surprisingly, at the heart of the proof of the equality result for Bajraktarević-type estimators, a result about the Schwarzian derivative and rational functions comes into play, see Lemma [Lemma. 22](#Lem_Sch_deriv){reference-type="ref" reference="Lem_Sch_deriv"}. We can also characterize the equality of quasiarithmetic-type $\psi$-estimators, see Corollary [Corollary. 
27](#Pro_qa_type_equality){reference-type="ref" reference="Pro_qa_type_equality"}. In Proposition [Proposition. 26](#Pro_Mobius){reference-type="ref" reference="Pro_Mobius"}, we derive a necessary and sufficient condition in order that two strictly increasing functions defined on a nondegenerate open interval be the Möbius transforms of each other. In Section [5](#Sec_stat_examples){reference-type="ref" reference="Sec_stat_examples"}, we apply our results in Section [3](#Section_comp_equal_psi_est){reference-type="ref" reference="Section_comp_equal_psi_est"} for some known statistical estimators such as for empirical expectiles and Mathieu-type estimators and for solving likelihood equations in case of normal, a Beta-type, Gamma, Lomax (Pareto type II), lognormal and Laplace distributions. Throughout this paper, let $\mathbb{N}$, $\mathbb{Z}_+$, $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{R}_{++}$ and $\mathbb{R}_{--}$ denote the sets of positive integers, non-negative integers, rational numbers, real numbers, non-negative real numbers, positive real numbers and negative real numbers, respectively. An interval $\Theta\subseteq\mathbb{R}$ will be called nondegenerate if it contains at least two distinct points. For a subset $S\subseteq \mathbb{R}$, the convex hull of $S$ (which is the smallest interval containing $S$) is denoted by $\mathop{\mathrm{conv}}(S)$. For each $n\in\mathbb{N}$, let us also introduce the set $\Lambda_n:=\mathbb{R}_+^n\setminus\{(0,\ldots,0)\}$. All the random variables are defined on an appropriate probability space $(\Omega,{\mathcal A},\operatorname{\mathbb{P}})$. The rank of a matrix $A\in\mathbb{R}^{n\times n}$ is denoted by $\operatorname{rank}(A)$. **Definition. 1**. *Let $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. For a function $f:\Theta\to\mathbb{R}$, consider the following three level sets $$\Theta_{f>0}:=\{t\in \Theta: f(t)>0\},\qquad \Theta_{f=0}:=\{t\in \Theta: f(t)=0\},\qquad \Theta_{f<0}:=\{t\in \Theta: f(t)<0\}.$$ We say that $\vartheta\in\Theta$ is a *point of sign change (of decreasing-type) for $f$* if $$f(t) > 0 \quad \text{for $t<\vartheta$,} \qquad \text{and} \qquad f(t)< 0 \quad \text{for $t>\vartheta$.}$$* Note that there can exist at most one element $\vartheta\in\Theta$ which is a point of sign change for $f$. Further, if $f$ is continuous at a point $\vartheta$ of sign change, then $f(\vartheta) = 0$. Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $\Psi(X,\Theta)$ denote the class of real-valued functions $\psi:X\times\Theta\to\mathbb{R}$ such that, for all $x\in X$, there exist $t_+,t_-\in\Theta$ such that $t_+<t_-$ and $\psi(x,t_+)>0>\psi(x,t_-)$. Roughly speaking, a function $\psi\in\Psi(X,\Theta)$ satisfies the following property: for all $x\in X$, the function $t\ni\Theta\mapsto \psi(x,t)$ changes sign on the interval $\Theta$ at least once. **Definition. 2**. 
*We say that a function $\psi\in\Psi(X,\Theta)$* (i) **possesses the property $[C]$ (briefly, $\psi$ is a $C$-function) if it is continuous in its second variable, i.e., if, for all $x\in X$, the mapping $\Theta\ni t\mapsto \psi(x,t)$ is continuous.** (ii) **possesses the property $[T_n]$ (briefly, $\psi$ is a $T_n$-function) for some $n\in\mathbb{N}$* if there exists a mapping $\vartheta_{n,\psi}:X^n\to\Theta$ such that, for all $\pmb{x}=(x_1,\dots,x_n)\in X^n$ and $t\in\Theta$, $$\begin{aligned} \label{psi_est_inequality} \psi_{\pmb{x}}(t)=\sum_{i=1}^n \psi(x_i,t) \begin{cases} > 0 & \text{if $t<\vartheta_{n,\psi}(\pmb{x})$,}\\ < 0 & \text{if $t>\vartheta_{n,\psi}(\pmb{x})$}, \end{cases} \end{aligned}$$ that is, for all $\pmb{x}\in X^n$, the value $\vartheta_{n,\psi}(\pmb{x})$ is a point of sign change for the function $\psi_{\pmb{x}}$. If there is no confusion, instead of $\vartheta_{n,\psi}$ we simply write $\vartheta_n$. We may call $\vartheta_{n,\psi}(\pmb{x})$ as a generalized $\psi$-estimator for some unknown parameter in $\Theta$ based on the realization ${\boldsymbol{x}}=(x_1,\ldots,x_n)\in X^n$. If, for each $n\in\mathbb{N}$, $\psi$ is a $T_n$-function, then we say that *$\psi$ possesses the property $[T]$ (briefly, $\psi$ is a $T$-function)*.* (iii) **possesses the property $[Z_n]$ (briefly, $\psi$ is a $Z_n$-function) for some $n\in\mathbb{N}$* if it is a $T_n$-function and $$\psi_{\pmb{x}}(\vartheta_{n,\psi}(\pmb{x}))=\sum_{i=1}^n \psi(x_i,\vartheta_{n,\psi}(\pmb{x}))= 0 \qquad \text{for all}\quad \pmb{x}=(x_1,\ldots,x_n)\in X^n.$$ If, for each $n\in\mathbb{N}$, $\psi$ is a $Z_n$-function, then we say that *$\psi$ possesses the property $[Z]$ (briefly, $\psi$ is a $Z$-function)*.* (iv) **possesses the property $[T_n^{\pmb{\lambda}}]$ for some $n\in\mathbb{N}$ and $\pmb{\lambda}=(\lambda_1,\ldots,\lambda_n)\in\Lambda_n$ (briefly, $\psi$ is a $T_n^{\pmb{\lambda}}$-function)* if there exists a mapping $\vartheta_{n,\psi}^{\pmb{\lambda}}:X^n\to\Theta$ such that, for all $\pmb{x}=(x_1,\dots,x_n)\in X^n$ and $t\in\Theta$, $$\begin{aligned} \label{psi_est_inequality_weighted} \psi_{\pmb{x},\pmb{\lambda}}(t)= \sum_{i=1}^n \lambda_i\psi(x_i,t) \begin{cases} > 0 & \text{if $t<\vartheta_{n,\psi}^{\pmb{\lambda}}(\pmb{x})$,}\\ < 0 & \text{if $t>\vartheta_{n,\psi}^{\pmb{\lambda}}(\pmb{x})$}, \end{cases} \end{aligned}$$ that is, for all $\pmb{x}\in X^n$, the value $\vartheta_{n,\psi}^{\pmb{\lambda}}(\pmb{x})$ is a point of sign change for the function $\psi_{\pmb{x},\pmb{\lambda}}$. If there is no confusion, instead of $\vartheta_{n,\psi}^{\pmb{\lambda}}$ we simply write $\vartheta_n^{\pmb{\lambda}}$. We may call $\vartheta_{n,\psi}^{\pmb{\lambda}}(\pmb{x})$ as a weighted generalized $\psi$-estimator for some unknown parameter in $\Theta$ based on the realization ${\boldsymbol{x}}=(x_1,\ldots,x_n)\in X^n$ and weights $(\lambda_1,\ldots,\lambda_n)\in\Lambda_n$.* It can be seen that if $\psi$ is continuous in its second variable and, for some $n\in\mathbb{N}$, it is a $T_n$-function, then it also a $Z_n$-function. Given properties $[P_1], \ldots, [P_q]$ introduced in the Definition [Definition. 2](#Def_Tn){reference-type="ref" reference="Def_Tn"} (where $q\in\mathbb{N}$), the subclass of $\Psi(X,\Theta)$ consisting of elements possessing the properties $[P_1],\ldots,[P_q]$, will be denoted by $\Psi[P_1,\ldots,P_q](X,\Theta)$. The following statement is a direct consequence of the property $[Z]$. **Lemma. 3**. *Let $\psi\in\Psi[Z](X,\Theta)$. 
Then, for each $n\in\mathbb{N}$, $\pmb{x}=(x_1,\ldots,x_n)\in X^n$ and $t\in\Theta$, the inequality $$\vartheta_{n,\psi}(\pmb{x})\leqslant(<)\, t$$ is valid if and only if $$\sum_{i=1}^n \psi(x_i,t)\leqslant(<)\, 0$$ is true.* The next result establishes a mean-type property of generalized $\psi$-estimators. **Proposition. 4**. *Let $n\in\mathbb{N}$ and $\psi\in\Psi[T_n](X,\Theta)$. Then, for all $x_1,\ldots,x_n\in X$, we have $$\begin{aligned} \label{help_mean_prop} \min(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n)) \leqslant\vartheta_{n,\psi}(x_1,\ldots,x_n)\leqslant\max(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n)). \end{aligned}$$ Furthermore, if $\psi\in\Psi[Z_n](X,\Theta)$ and $x_1,\ldots,x_n\in X$ are such that not all the values $\vartheta_{1,\psi}(x_1),\ldots,\vartheta_{1,\psi}(x_n)$ are equal, then both inequalities in [\[help_mean_prop\]](#help_mean_prop){reference-type="eqref" reference="help_mean_prop"} are strict.* It is easy to see that, for each $n\in\mathbb{N}$, the properties $[T_n]$ and $[Z_n]$ imply $[T_1]$ and $[Z_1]$, respectively. Let $n\in\mathbb{N}$, $\psi\in\Psi[T_n](X,\Theta)$, and $x_1,\ldots,x_n\in X$. Let us introduce the notations $t_0:=\vartheta_{n,\psi}(x_1,\ldots,x_n)$ and $$\begin{aligned} t':=\min(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n)), \qquad t'':=\max(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n)). \end{aligned}$$ Since $\psi$ is a $T_1$-function, we have $\psi(x_i,t)>0$, $i\in\{1,\ldots,n\}$, for all $t\in\Theta$ with $t<t'$, and $\psi(x_i,t)<0$, $i\in\{1,\ldots,n\}$, for all $t\in\Theta$ with $t>t''$. This yields that $$\sum_{i=1}^n \psi(x_i,t) \begin{cases} >0 & \text{if $t\in\Theta$ and $t<t'$,}\\ <0 & \text{if $t\in\Theta$ and $t>t''$.} \end{cases}$$ Since $\psi$ is a $T_n$-function, this implies that $t'\leqslant t_0\leqslant t''$, as desired. Now let us suppose that $n\in\mathbb{N}$, $\psi\in\Psi[Z_n](X,\Theta)$, and $x_1,\ldots,x_n\in X$ are such that not all the values $\vartheta_{1,\psi}(x_1),\ldots,\vartheta_{1,\psi}(x_n)$ are equal. Then $t'<t''$, and hence there exist $i\ne j$, $i,j\in\{1,\ldots,n\}$ such that $t'=\vartheta_{1,\psi}(x_i)$ and $t''=\vartheta_{1,\psi}(x_j)$. On the contrary, let us suppose that $t_0=t'$ or $t_0=t''$. If $t_0=t'$ were true, then, using that $\psi$ is a $Z_n$-function, we have $\psi(x_k,t_0)\geqslant 0$, $k\in\{1,\ldots,n\}$, and $\sum_{k=1}^n \psi(x_k,t_0) = 0$, from which we can conclude that $\psi(x_k,t_0)= 0$, $k\in\{1,\ldots,n\}$. On the other hand, $t_0<t''=\vartheta_{1,\psi}(x_j)$ implies $\psi(x_j,t_0) > 0$, leading us to a contradiction. Similarly, if $t_0=t''$ were true, then, using that $\psi$ is a $Z_n$-function, we have $\psi(x_k,t_0)\leqslant 0$, $k\in\{1,\ldots,n\}$, and $\sum_{k=1}^n \psi(x_k,t_0) = 0$, from which we get that $\psi(x_k,t_0)=0$, $k\in\{1,\ldots,n\}$. However, $\vartheta_{1,\psi}(x_i)=t'<t_0$ yields $\psi(x_i,t_0) < 0$, which leads to a contradiction as well. $\Box$ **Proposition. 5**. *Let $\psi\in\Psi[T](X,\Theta)$ and assume that, for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<\vartheta_{1,\psi}(y)$, the map $$\begin{aligned} \label{umap} \big(\vartheta_{1,\psi}(x),\vartheta_{1,\psi}(y)\big)\ni t \mapsto -\frac{\psi(x,t)}{\psi(y,t)} \end{aligned}$$ is positive and strictly increasing. Then the set $$S_\psi:=\big\{\vartheta_{n,\psi}(x_1,\ldots,x_n) \,:\, n\in\mathbb{N},\; x_1,\ldots,x_n\in X\big\}$$ is a dense subset of $\mathop{\mathrm{conv}}(\vartheta_{1,\psi}(X))$.* According to Proposition [Proposition. 4](#Pro_psi_becsles_mean_prop){reference-type="ref" reference="Pro_psi_becsles_mean_prop"}, it follows that $S_\psi\subseteq \mathop{\mathrm{conv}}(\vartheta_{1,\psi}(X))=:C$. Therefore, if $C$ is a singleton, then there is nothing to prove. 
Thus, we may assume that the convex set $C$ is a nondegenerate subinterval of $\Theta$. To prove the density of $S_\psi$ in $C$, let $s,t\in C$ with $\inf C<s<t<\sup C$. Since $\inf C = \inf \vartheta_{1,\psi}(X)$ and $\sup C = \sup \vartheta_{1,\psi}(X)$, there exist $x,y\in X$ such that $\vartheta_{1,\psi}(x)<s<t<\vartheta_{1,\psi}(y)$. Consequently, $$0< -\frac{\psi(x,s)}{\psi(y,s)} < -\frac{\psi(x,t)}{\psi(y,t)},$$ and hence one can choose $n,m\in\mathbb{N}$ such that $$0 < -\frac{\psi(x,s)}{\psi(y,s)} < \frac{m}{n} < -\frac{\psi(x,t)}{\psi(y,t)}.$$ Using that $\psi(y,s)>0$ and $\psi(y,t)>0$, we have that $$n\psi(x,s) + m\psi(y,s) >0 \qquad \text{and}\qquad n\psi(x,t) + m\psi(y,t) < 0.$$ Since $\psi$ has the property $[T_{n+m}]$, we can conclude that $$s \leqslant\vartheta_{n+m,\psi}(\underbrace{x,\ldots,x}_{n}, \underbrace{y,\ldots,y}_{m}) \leqslant t.$$ As a consequence, we have that $[s,t]\cap S_\psi$ is nonempty. Since $[s,t]$ was an arbitrary nondegenerate subinterval in the interior of $C$, it follows that $S_\psi$ is dense in $C$. $\Box$ **Remark. 6**. *In our recent paper Barczy and Páles [@BarPal2], several implications between the property $[T]$ of a function $\psi\in\Psi(X,\Theta)$ and the monotonicity properties of the map [\[umap\]](#umap){reference-type="eqref" reference="umap"} have been established. Among others, we proved that if $\psi$ possesses the property $[T]$, then for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<\vartheta_{1,\psi}(y)$, the map [\[umap\]](#umap){reference-type="eqref" reference="umap"} is positive and (not necessarily strictly) increasing. On the other hand, if $\psi$ has the properties $[T]$ and $[Z_1]$, and, for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<\vartheta_{1,\psi}(y)$, the map [\[umap\]](#umap){reference-type="eqref" reference="umap"} is strictly increasing, then $\psi$ possesses the property $[T_n^{\pmb{\lambda}}]$ for all $n\in\mathbb{N}$ and $\pmb{\lambda}=(\lambda_1,\ldots,\lambda_n)\in\Lambda_n$. $\Box$* **Lemma. 7**. *Let $\psi\in\Psi[T](X,\Theta)$. Then, for all $k\in\mathbb{N}$ and $x,y_1,\dots,y_k\in X$, we have $$\lim_{n\to\infty} \vartheta_{n+k,\psi}({\underbrace{x,\ldots,x}_{n}},y_1,\dots,y_k) = \vartheta_{1,\psi}(x).$$ The same statement holds if $\psi$ has the properties $[Z_1]$ and $[T_2^{\boldsymbol{\lambda}}]$ for all ${\boldsymbol{\lambda}}\in\Lambda_2$.* Let $k\in\mathbb{N}$ and let $x,y_1,\dots,y_k\in X$ be fixed arbitrarily. For each $n\in\mathbb{N}$, denote $$t_n:=\vartheta_{n+k,\psi}({\underbrace{x,\ldots,x}_{n}},y_1,\dots,y_k).$$ We need to show that $t_n$ converges to $\vartheta_{1,\psi}(x)$ as $n\to\infty$. Let $t',t''\in\Theta$ be arbitrary such that $t'<\vartheta_{1,\psi}(x)<t''$ (since $\Theta$ is open such $t'$ and $t''$ exist). Then $$n\psi(x,t')+\sum_{i=1}^k\psi(y_i,t') =n\bigg(\psi(x,t')+\frac1n\sum_{i=1}^k\psi(y_i,t')\bigg)>0$$ if $n$ is large enough, because $t'<\vartheta_{1,\psi}(x)$ and $$\lim_{n\to\infty}\bigg(\psi(x,t')+\frac1n\sum_{i=1}^k\psi(y_i,t')\bigg)=\psi(x,t')>0.$$ Similarly, we get that the inequality $$n\psi(x,t'')+\sum_{i=1}^k\psi(y_i,t'')<0$$ is valid if $n$ is large enough. Therefore, the point of sign change $t_n$ of the function $$\Theta\ni t\mapsto n\psi(x,t)+\sum_{i=1}^k\psi(y_i,t)$$ is in the interval $[t',t'']$ for $n$ large enough, that is, there exists $n_0\in\mathbb{N}$ such that $t_n\in[t',t'']$ for each $n\geqslant n_0$, $n\in\mathbb{N}$. 
Since $t',t''\in\Theta$ were arbitrary with $t'< \vartheta_{1,\psi}(x)<t''$, this implies that $t_n$ converges to $\vartheta_{1,\psi}(x)$ as $n\to\infty$. If $\psi$ has the properties $[Z_1]$ and $[T_2^{\boldsymbol{\lambda}}]$ for all ${\boldsymbol{\lambda}}\in\Lambda_2$, then, by Corollary 2.11 in Barczy and Páles [@BarPal2], we have that $\psi$ is also a $T$-function, consequently, the same conclusion holds. $\Box$ # Historical notes about the equality of OLSE and BLUE in general linear models {#Section_gen_regression} Consider a general linear model $$\begin{aligned} \label{gen_lin_mod} \operatorname{\mathbb{E}}({\boldsymbol{\eta}})= {\boldsymbol{X}}{\boldsymbol{\beta}}\qquad \text{with}\qquad \operatorname{Cov}({\boldsymbol{\eta}})={\boldsymbol{V}}, \end{aligned}$$ where ${\boldsymbol{\eta}}$ is an $\mathbb{R}^n$-valued random vector, ${\boldsymbol{X}}\in\mathbb{R}^{n\times q}$ with $q<n$, $q,n\in\mathbb{N}$, ${\boldsymbol{\beta}}\in\mathbb{R}^q$ is a vector of unknown (regression) parameters, ${\boldsymbol{V}}\in\mathbb{R}^{n\times n}$ is a known, symmetric and positive semidefinite matrix. Note that ${\boldsymbol{V}}$ is positive definite if and only if ${\boldsymbol{V}}$ has full rank $n$. Indeed, ${\boldsymbol{V}}$ is positive (semi)definite if and only if all its eigenvalues are (non-negative) positive. Hence, using that the determinant of ${\boldsymbol{V}}$ equals the product of its eigenvalues, we have that ${\boldsymbol{V}}$ is positive definite if and only if $\det({\boldsymbol{V}})\ne 0$, or equivalently, $\operatorname{rank}({\boldsymbol{V}})=n$, as desired. The ordinary least squares estimator (OLSE) of ${\boldsymbol{\beta}}$ is defined to be $$\widehat{\boldsymbol{\beta}}:=\mathop{\mathrm{arg\,min}}_{{\boldsymbol{\beta}}\in\mathbb{R}^q}({\boldsymbol{\eta}}- {\boldsymbol{X}}{\boldsymbol{\beta}})^\top({\boldsymbol{\eta}}- {\boldsymbol{X}}{\boldsymbol{\beta}}),$$ and the OLSE of ${\boldsymbol{X}}{\boldsymbol{\beta}}$ is defined to be ${\boldsymbol{X}}\widehat{\boldsymbol{\beta}}$. The best linear unbiased estimator (BLUE) ${\boldsymbol{\beta}}^*$ of ${\boldsymbol{\beta}}$ is a linear estimator ${\boldsymbol{G}}{\boldsymbol{\eta}}$ such that $\operatorname{\mathbb{E}}({\boldsymbol{G}}{\boldsymbol{\eta}})={\boldsymbol{\beta}}$ (i.e., ${\boldsymbol{G}}{\boldsymbol{\eta}}$ is an unbiased estimator of ${\boldsymbol{\beta}}$) and for any other linear unbiased estimator ${\boldsymbol{M}}{\boldsymbol{\eta}}$ of ${\boldsymbol{\beta}}$, we have that $\operatorname{Cov}({\boldsymbol{M}}{\boldsymbol{\eta}}) - \operatorname{Cov}({\boldsymbol{G}}{\boldsymbol{\eta}})\in\mathbb{R}^{q\times q}$ is nonnegative definite, where ${\boldsymbol{G}},{\boldsymbol{M}}\in\mathbb{R}^{q\times n}$. Similarly, the BLUE estimator ${\boldsymbol{\beta}}^*$ of ${\boldsymbol{X}}{\boldsymbol{\beta}}$ is a linear estimator ${\boldsymbol{G}}{\boldsymbol{\eta}}$ such that $\operatorname{\mathbb{E}}({\boldsymbol{G}}{\boldsymbol{\eta}})={\boldsymbol{X}}{\boldsymbol{\beta}}$ (i.e., ${\boldsymbol{G}}{\boldsymbol{\eta}}$ is an unbiased estimator of ${\boldsymbol{X}}{\boldsymbol{\beta}}$) and for any other linear unbiased estimator ${\boldsymbol{M}}{\boldsymbol{\eta}}$ of ${\boldsymbol{X}}{\boldsymbol{\beta}}$, we have that $\operatorname{Cov}({\boldsymbol{M}}{\boldsymbol{\eta}}) - \operatorname{Cov}({\boldsymbol{G}}{\boldsymbol{\eta}})\in\mathbb{R}^{n\times n}$ is nonnegative definite, where ${\boldsymbol{G}},{\boldsymbol{M}}\in\mathbb{R}^{n\times n}$. 
In the special case when ${\boldsymbol{X}}$ and ${\boldsymbol{V}}$ are both of full rank (i.e., when $\operatorname{rank}({\boldsymbol{X}})=q$ and $\operatorname{rank}({\boldsymbol{V}})=n$), the OLSE and BLUE of ${\boldsymbol{\beta}}$ take the following forms $$\widehat{\boldsymbol{\beta}}=({\boldsymbol{X}}^\top {\boldsymbol{X}})^{-1}{\boldsymbol{X}}^\top {\boldsymbol{\eta}},$$ and $${\boldsymbol{\beta}}^*=({\boldsymbol{X}}^\top {\boldsymbol{V}}^{-1}{\boldsymbol{X}})^{-1}{\boldsymbol{X}}^\top {\boldsymbol{V}}^{-1}{\boldsymbol{\eta}}.$$ In the general case, there are also explicit formulae for $\widehat{\boldsymbol{\beta}}$ and ${\boldsymbol{\beta}}^*$ involving Moore-Penrose inverses, see, e.g., Lemma 1 in Tian and Wiens [@TiaWie]. Puntanen and Styan [@PunSty] gave a good historical overview on the (almost sure) equality of the OLSE and the BLUE of ${\boldsymbol{\beta}}$ and those of ${\boldsymbol{X}}{\boldsymbol{\beta}}$, respectively. It is well-known that the OLSE and BLUE of ${\boldsymbol{\beta}}$ can coincide even if ${\boldsymbol{V}}$ is not a multiple of the $n\times n$ identity matrix ${\boldsymbol{I}}_n$. In particular, if $q=1$ and ${\boldsymbol{X}}=(1,\ldots,1)^\top \in\mathbb{R}^{n\times 1}$, then the OLSE of ${\boldsymbol{\beta}}$ is the arithmetic mean $\widehat{\boldsymbol{\beta}}= (\eta_1+\cdots+\eta_n)/n$ with the notation ${\boldsymbol{\eta}}=:(\eta_1,\ldots,\eta_n)$, and one can prove that if the covariance matrix ${\boldsymbol{V}}$ has all of its row totals equal to each other, then the OLSE of ${\boldsymbol{\beta}}$ coincides with the BLUE of ${\boldsymbol{\beta}}$. Several necessary and sufficient conditions for the equality of the OLSE and BLUE of ${\boldsymbol{X}}{\boldsymbol{\beta}}$ are recalled in Section 2 of Puntanen and Styan [@PunSty]. In the special case when ${\boldsymbol{X}}$ and ${\boldsymbol{V}}$ are both of full rank (i.e., when $\operatorname{rank}({\boldsymbol{X}})=q$ and $\operatorname{rank}({\boldsymbol{V}})=n$), one can introduce the so-called relative goodness (also called D-relative efficiency) of the OLSE of ${\boldsymbol{\beta}}$ defined by $$\kappa:=\frac{\det(\operatorname{Cov}({\boldsymbol{\beta}}^*))}{\det(\operatorname{Cov}(\widehat{\boldsymbol{\beta}}))} = \frac{(\det({\boldsymbol{X}}^\top{\boldsymbol{X}}))^2}{\det({\boldsymbol{X}}^\top{\boldsymbol{V}}{\boldsymbol{X}}) \det({\boldsymbol{X}}^\top{\boldsymbol{V}}^{-1}{\boldsymbol{X}})} ,$$ see, e.g., Puntanen and Styan [@PunSty Section 3.1] or Tian and Wiens [@TiaWie page 1267]. One can prove that $\kappa\in(0,1]$, and $\kappa=1$ holds if and only if the OLSE and BLUE of ${\boldsymbol{\beta}}$ coincide. Lee [@Lee] considered the general linear model [\[gen_lin_mod\]](#gen_lin_mod){reference-type="eqref" reference="gen_lin_mod"} such that ${\boldsymbol{X}}$ and ${\boldsymbol{V}}$ are both of full rank. He derived necessary and sufficient conditions for the equality of the BLUEs ${\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_1}$ and ${\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_2}$ in two general linear models with positive definite covariance matrices ${\boldsymbol{V}}_1$ and ${\boldsymbol{V}}_2$ and with a common matrix ${\boldsymbol{X}}$ having full rank $q$. Among others, it was proved that ${\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_1} = {\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_2}$ holds if and only if there exists a non-singular matrix ${\boldsymbol{A}}\in\mathbb{R}^{q\times q}$ such that ${\boldsymbol{V}}_1^{-1}{\boldsymbol{X}}= {\boldsymbol{V}}_2^{-1}{\boldsymbol{X}}{\boldsymbol{A}}$. 
For further necessary and sufficient conditions, see also Lu and Schmidt [@LuSch Theorem 2]. If one chooses ${\boldsymbol{V}}_2:={\boldsymbol{I}}_n$, then the problem considered above by Lee [@Lee] is equivalent to finding necessary and sufficient conditions for the equality of the BLUE ${\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_1}$ and the OLSE ${\boldsymbol{\beta}}^*_{{\boldsymbol{I}}_n}=\widehat{\boldsymbol{\beta}}$. Lee [@Lee Section 4] derived several other sufficient, but not necessary conditions for the equality of two BLUEs as well. For example, if ${\boldsymbol{X}}$ has full rank $q$, ${\boldsymbol{V}}_1$ is positive semidefinite, ${\boldsymbol{\Gamma}}\in\mathbb{R}^{q\times q}$ is symmetric and ${\boldsymbol{V}}_2:={\boldsymbol{V}}_1+{\boldsymbol{X}}{\boldsymbol{\Gamma}}{\boldsymbol{X}}^\top$, then ${\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_1} = {\boldsymbol{\beta}}^*_{{\boldsymbol{V}}_2}$. In the special case when ${\boldsymbol{X}}$ and ${\boldsymbol{V}}$ are both of full rank, Krämer et al. [@KraBarFie] derived a necessary and sufficient condition for the equality of the OLSE and BLUE of a subset of the regression parameters $\beta_1,\ldots,\beta_q$ in a general linear model [\[gen_lin_mod\]](#gen_lin_mod){reference-type="eqref" reference="gen_lin_mod"}, where ${\boldsymbol{\beta}}^\top=:(\beta_1,\ldots,\beta_q)$. More precisely, rewriting a general linear model [\[gen_lin_mod\]](#gen_lin_mod){reference-type="eqref" reference="gen_lin_mod"} in the form $\operatorname{\mathbb{E}}({\boldsymbol{\eta}})={\boldsymbol{X}}_1{\boldsymbol{\beta}}_1+{\boldsymbol{X}}_2{\boldsymbol{\beta}}_2$ and denoting by $\widehat{\boldsymbol{\beta}}_2$ and ${\boldsymbol{\beta}}^*_2$ the respective subvectors of $\widehat{\boldsymbol{\beta}}$ and ${\boldsymbol{\beta}}^*$, a necessary and sufficient condition has been derived for the equality of $\widehat{\boldsymbol{\beta}}_2$ and ${\boldsymbol{\beta}}^*_2$, where ${\boldsymbol{\beta}}^\top = ({\boldsymbol{\beta}}_1^\top,{\boldsymbol{\beta}}_2^\top)\in\mathbb{R}^{q_1}\times\mathbb{R}^{q_2}$ and ${\boldsymbol{X}}=({\boldsymbol{X}}_1,{\boldsymbol{X}}_2)\in\mathbb{R}^{n\times q_1}\times\mathbb{R}^{n\times q_2}$. Tian and Wiens [@TiaWie Theorem 5] derived necessary and sufficient conditions for the equality and proportionality of the OLSE and BLUE of ${\boldsymbol{X}}{\boldsymbol{\beta}}$, respectively. In particular, it turned out that if the OLSE and BLUE of ${\boldsymbol{X}}{\boldsymbol{\beta}}$ are proportional to each other (i.e., there exists a $\lambda\in\mathbb{R}$ such that $\mathrm{OLSE}({\boldsymbol{X}}{\boldsymbol{\beta}})=\lambda\cdot \mathrm{BLUE}({\boldsymbol{X}}{\boldsymbol{\beta}})$ with probability one), then the two estimators in question are equal with probability 1. Tian and Puntanen [@TiaPun] derived some necessary and sufficient conditions for the equality of OLSE and BLUE of the unknown (regression) parameters under a general linear model and its linearly transformed linear model. Lu and Schmidt [@LuSch Theorem 3] derived necessary and sufficient conditions for the equality of BLUE and so-called Amemiya-Cragg estimator of ${\boldsymbol{\beta}}$. Recently, Gong [@Gon] has reconsidered the equality problem of OLSE and BLUE for so-called seemingly unrelated regression models. Jiang and Sun [@JiaSun] have investigated necessary and sufficient conditions for the equality of OLSE and BLUE of the whole (or partial) set of regression parameters in a general linear model with linear parameter restrictions. 
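Before turning to the comparison problem, we illustrate the full-rank formulas and the row-totals criterion above with a small numerical sketch (in Python; the intercept-only design, the particular covariance matrices and the use of numpy are our own illustrative assumptions). A covariance matrix whose row totals agree yields OLSE $=$ BLUE and relative goodness $\kappa=1$, while unequal row totals separate the two estimators.

```python
import numpy as np

def olse(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def blue(X, V, y):
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

def kappa(X, V):
    # relative goodness: det(X'X)^2 / (det(X'VX) * det(X'V^{-1}X))
    Vi = np.linalg.inv(V)
    return np.linalg.det(X.T @ X) ** 2 / (
        np.linalg.det(X.T @ V @ X) * np.linalg.det(X.T @ Vi @ X))

X = np.ones((3, 1))                       # intercept-only design, q = 1, n = 3
y = np.array([1.0, 2.0, 4.0])
V_equal = np.array([[2.0, 1.0, 1.0],      # symmetric, positive definite,
                    [1.0, 2.0, 1.0],      # all row totals equal to 4
                    [1.0, 1.0, 2.0]])
print(olse(X, y), blue(X, V_equal, y), kappa(X, V_equal))      # equal estimates, kappa = 1

V_unequal = np.diag([1.0, 1.0, 10.0])     # row totals differ
print(olse(X, y), blue(X, V_unequal, y), kappa(X, V_unequal))  # OLSE != BLUE, kappa < 1
```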
# Comparison and equality of generalized $\psi$ estimators {#Section_comp_equal_psi_est} Given $\psi,\varphi\in \Psi(X,\Theta)$, we are going to establish necessary and sufficient conditions in order that the comparison inequality $\vartheta_{n,\psi}\leqslant\vartheta_{n,\varphi}$ be valid on $X^n$ for all $n\in\mathbb{N}$. As a first result, provided that $\psi,\varphi\in\Psi(X,\Theta)$, $\varphi$ is a $Z$-function and $\psi$ possesses the properties $[T]$ and $[Z_1]$, we give necessary and sufficient conditions in order that $\vartheta_{n,\psi}\leqslant\vartheta_{n,\varphi}$ be valid on $X^n$ for all $n\in\mathbb{N}$. For a function $\psi\in\Psi[T_1](X,\Theta)$, we introduce the notation $$\begin{aligned} \label{theta_psi} \Theta_\psi:=\big\{t\in\Theta\,|\,\exists\, x,y\in X: \vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)\big\}. \end{aligned}$$ Observe that $\Theta_\psi$ is open. Indeed, if it is the empty set, then it is open trivially. Otherwise, $\Theta_\psi$ is the union of all open intervals $(\vartheta_{1,\psi}(x),\vartheta_{1,\psi}(y))$, where $x,y\in X$ are such that $\vartheta_{1,\psi}(x)<\vartheta_{1,\psi}(y)$. In fact, $\Theta_\psi$ is nothing else but the interior of the convex hull of $\vartheta_{1,\psi}(X)$. Indeed, any interior point $s$ of the convex hull of a subset $S$ of $\mathbb{R}^d$ is an interior point of the convex hull of some subset (possibly depending on $s$) of $S$ containing at most $2d$ points, see Gustin [@Gus]. Consequently, $\Theta_\psi$ is an open (possibly degenerate) interval. **Theorem. 8**. *Let $\psi\in\Psi[T,Z_1](X,\Theta)$ and $\varphi\in\Psi[Z](X,\Theta)$. Then the following assertions are equivalent to each other:* (i) *The inequality $$\begin{aligned} \label{psi_est_inequality_2} \vartheta_{n,\psi}(x_1,\ldots,x_n)\leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)\end{aligned}$$ holds for each $n\in\mathbb{N}$ and $x_1,\dots,x_n\in X$.* (ii) *The inequality $$\begin{aligned} \label{psi_est_inequality_2.5} \vartheta_{k+m,\psi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}) \leqslant\vartheta_{k+m,\varphi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m})\end{aligned}$$ holds for each $k,m\in\mathbb{N}$ and $x,y\in X$.* (iii) *For all $x\in X$, we have $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$, and the inequality $$\begin{aligned} \label{psi_est_inequality_3} \psi(x,t) \varphi(y,t) \leqslant\psi(y,t) \varphi(x,t) \end{aligned}$$ is valid for all $t\in\Theta$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$.* (iv) *For all $x\in X$, we have $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$, and there exists a nonnegative function $p:\Theta_\varphi\to\mathbb{R}_+$ such that $$\begin{aligned} \label{psi_est_inequality_5} \psi(z,t)\leqslant p(t)\varphi(z,t), \qquad z\in X,\, t\in\Theta_\varphi. \end{aligned}$$* The implication (i)$\Rightarrow$(ii) is obvious by taking $n=k+m$ in assertion (i). (ii)$\Rightarrow$(iii). Assume that (ii) holds. Then the inequality [\[psi_est_inequality_2.5\]](#psi_est_inequality_2.5){reference-type="eqref" reference="psi_est_inequality_2.5"} for $k:=m:=1$ and $y:=x$ together with $\vartheta_{2,\psi}(x,x) = \vartheta_{1,\psi}(x)$ and $\vartheta_{2,\varphi}(x,x) = \vartheta_{1,\varphi}(x)$, imply that $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ for all $x\in X$. We prove the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} by contradiction. 
Assume that, for some $t\in\Theta$ and $x,y\in X$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, we have $$\begin{aligned} \label{help_6} \psi(x,t) \varphi(y,t) > \psi(y,t) \varphi(x,t). \end{aligned}$$ In view of the inequalities $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, we get that $\psi(x,t)<0$, $\varphi(x,t)<0$ and $\varphi(y,t)>0$, whence the inequality [\[help_6\]](#help_6){reference-type="eqref" reference="help_6"} yields that $\psi(y,t)>0$. Thus, we obtain $$\begin{aligned} \label{help_10} 0<-\frac{\psi(x,t)}{\psi(y,t)} <-\frac{\varphi(x,t)}{ \varphi(y,t)}. \end{aligned}$$ Therefore, there exist $k,m\in\mathbb{N}$ such that $$0<-\frac{\psi(x,t)}{\psi(y,t)} <\frac{m}{k}<-\frac{\varphi(x,t)}{\varphi(y,t)}.$$ Rearranging these inequalities, it follows that $$k\varphi(x,t)+m\varphi(y,t) < 0 < k\psi(x,t)+m\psi(y,t).$$ Since $\varphi$ possesses the property $[Z]$, by Lemma [Lemma. 3](#Lem_psi_est_eq_2){reference-type="ref" reference="Lem_psi_est_eq_2"}, we get the inequalities $$\vartheta_{k+m,\varphi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}) < t \leqslant\vartheta_{k+m,\psi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}).$$ which contradicts [\[psi_est_inequality_2.5\]](#psi_est_inequality_2.5){reference-type="eqref" reference="psi_est_inequality_2.5"}. (iii)$\Rightarrow$(iv). Assume that (iii) holds. If $\Theta_\varphi$ is empty, then there is nothing to prove. Thus, we may assume that $\Theta_\varphi\neq\emptyset$. Rearranging the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"}, for all $t\in\Theta_\varphi$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, we get $$\begin{aligned} \label{help_7} \frac{\psi(y,t)}{\varphi(y,t)} \leqslant\frac{\psi(x,t)}{\varphi(x,t)},\end{aligned}$$ where we used that $\varphi(x,t)<0$ and $\varphi(y,t)>0$. Now we define the function $p:\Theta_\varphi\to\mathbb{R}$ by $$p(t):=\inf \bigg\{\frac{\psi(x,t)}{\varphi(x,t)}\,\bigg|\,x\in X,\,\vartheta_{1,\varphi}(x)<t\bigg\}, \qquad t\in\Theta_\varphi.$$ Due to the inclusion $t\in\Theta_\varphi$, the function $p$ is finite valued, and $p(t)\geqslant 0$ for all $t\in \Theta_\varphi$. Indeed, if $t\in\Theta_\varphi$, then there exists $x\in X$ such that $\vartheta_{1,\varphi}(x)<t$ yielding that $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)<t$. Therefore $\psi(x,t)<0$ and $\varphi(x,t)<0$, and hence $p(t)$ is defined as the infimum of certain positive real numbers. Further, by the definition of infimum and [\[help_7\]](#help_7){reference-type="eqref" reference="help_7"}, for all $t\in\Theta_\varphi$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, we obtain that $$\label{help_6.5} \frac{\psi(y,t)}{\varphi(y,t)} \leqslant p(t)\leqslant\frac{\psi(x,t)}{\varphi(x,t)}.$$ Let $z\in X$ be fixed arbitrarily. The first inequality in [\[help_6.5\]](#help_6.5){reference-type="eqref" reference="help_6.5"} with $y:=z$ implies that [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} holds for all $t\in\Theta_\varphi$ with $t<\vartheta_{1,\varphi}(z)$. 
Similarly, the second inequality in [\[help_6.5\]](#help_6.5){reference-type="eqref" reference="help_6.5"} with $x:=z$ yields that [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} is also valid for all $t\in\Theta_\varphi$ with $\vartheta_{1,\varphi}(z)<t$. Finally, if $t=\vartheta_{1,\varphi}(z)$, then by the property $[Z]$ of $\varphi$, we have that $\varphi(z,t)=0$. Furthermore, by the assumption in part (iii), the inequality $\vartheta_{1,\psi}(z)\leqslant\vartheta_{1,\varphi}(z)=t$ holds. Using that $\psi$ is a $Z_1$-function, it implies that $\psi(z,t)\leqslant 0$. This proves that [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} holds for $t=\vartheta_{1,\varphi}(z)$ as well. (iv)$\Rightarrow$(i). Assume that (iv) is valid. To prove (i), let $n\in\mathbb{N}$ and $x_1,\dots,x_n\in X$. In the proof of the inequality [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"}, we distinguish two cases. First, consider the case when $\vartheta_{1,\varphi}(x_1)=\dots=\vartheta_{1,\varphi}(x_n)=:t_0$. Then $\varphi(x_1,t_0)=\dots= \varphi(x_n,t_0)=0$, since $\varphi$ possesses the property $[Z]$. Therefore, $$\sum_{i=1}^n\varphi(x_i,t_0)=0,$$ which (using that $\varphi$ is also a $T$-function or Lemma [Lemma. 3](#Lem_psi_est_eq_2){reference-type="ref" reference="Lem_psi_est_eq_2"}) implies that $$\vartheta_{n,\varphi}(x_1,\dots,x_n)=t_0.$$ On the other hand, the inequality $\vartheta_{1,\psi}\leqslant\vartheta_{1,\varphi}$ yields that $$\vartheta_{1,\psi}(x_1)\leqslant\vartheta_{1,\varphi}(x_1)=t_0,\qquad\dots,\qquad \vartheta_{1,\psi}(x_n)\leqslant\vartheta_{1,\varphi}(x_n)=t_0,$$ and hence, using that $\psi$ is a $Z_1$-function, we have $$\psi(x_1,t_0)\leqslant 0,\qquad\dots,\qquad \psi(x_n,t_0)\leqslant 0.$$ Therefore $$\sum_{i=1}^n\psi(x_i,t_0)\leqslant 0,$$ which (using that $\psi$ is a $T$-function) implies that $$\vartheta_{n,\psi}(x_1,\dots,x_n)\leqslant t_0 =\vartheta_{n,\varphi}(x_1,\dots,x_n).$$ Thus [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"} is proved when $\vartheta_{1,\varphi}(x_1)=\dots=\vartheta_{1,\varphi}(x_n)$. Now consider the case when $$t':=\min(\vartheta_{1,\varphi}(x_1),\dots,\vartheta_{1,\varphi}(x_n)) <\max(\vartheta_{1,\varphi}(x_1),\dots,\vartheta_{1,\varphi}(x_n))=:t''.$$ Let $t_0:=\vartheta_{n,\varphi}(x_1,\dots,x_n)$. Then there exist $i,j\in\{1,\dots,n\}$ such that $t'=\vartheta_{1,\varphi}(x_i)$ and $t''=\vartheta_{1,\varphi}(x_j)$. According to Proposition [Proposition. 4](#Pro_psi_becsles_mean_prop){reference-type="ref" reference="Pro_psi_becsles_mean_prop"} (which can be applied, since $\varphi$ has the property $[Z]$), we have that $t'<t_0<t''$, which shows that $t_0\in\Theta_\varphi$. Hence we can apply the inequality [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} with $z:=x_i$, $i\in\{1,\dots,n\}$ and $t:=t_0$. Then, adding up the inequalities so obtained side by side, we arrive at $$\sum_{i=1}^n \psi(x_i,t_0) \leqslant p(t_0)\sum_{i=1}^n \varphi(x_i,t_0)=0,$$ where the equality follows from the property $[Z]$ of $\varphi$ and the definition of $t_0$. This, according to the property $[T_n]$ of $\psi$, implies that $$\vartheta_{n,\psi}(x_1,\dots,x_n)\leqslant t_0=\vartheta_{n,\varphi}(x_1,\dots,x_n),$$ and proves (i) in the considered second case as well.
$\Box$ In the next remark, we highlight the role of the assumption $\vartheta_{1,\psi}\leqslant\vartheta_{1,\varphi}$ in part (iii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. **Remark. 9**. *(i). Let $\psi,\varphi\in\Psi[Z_1](X,\Theta)$, and suppose that $\vartheta_{1,\psi}(z)\leqslant\vartheta_{1,\varphi}(z)$ for all $z\in X$. Let $x,y\in X$ with $\vartheta_{1,\varphi}(x)<\vartheta_{1,\varphi}(y)$. Then the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} is automatically valid for $t=\vartheta_{1,\varphi}(x)$ and $t=\vartheta_{1,\varphi}(y)$. Indeed, if $t=\vartheta_{1,\varphi}(y)$, then we have $\vartheta_{1,\psi}(y)\leqslant t$ and $$\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x) < \vartheta_{1,\varphi}(y) = t.$$ Using that $\psi$ and $\varphi$ have the property $[Z_1]$, it implies that $\psi(x,t)<0$, $\varphi(y,t)=0$, $\psi(y,t)\leqslant 0$ and $\varphi(x,t)<0$, and consequently, $$\psi(x,t)\varphi(y,t) = 0 \leqslant\psi(y,t)\varphi(x,t),$$ as desired. Similarly, if $t=\vartheta_{1,\varphi}(x)$, then we have $$\vartheta_{1,\psi}(x)\leqslant t = \vartheta_{1,\varphi}(x) < \vartheta_{1,\varphi}(y).$$ Using that $\psi$ and $\varphi$ have the property $[Z_1]$, it implies that $\psi(x,t)\leqslant 0$, $\varphi(y,t)>0$ and $\varphi(x,t)=0$, and consequently, $$\psi(x,t)\varphi(y,t) \leqslant 0 = \psi(y,t)\varphi(x,t),$$ as desired.* *(ii). Let $\psi,\varphi\in\Psi[Z_1](X,\Theta)$. If $\vartheta_{1,\varphi}(X)\subseteq \Theta_\varphi$, and there exists a nonnegative function $p:\Theta_\varphi\to\mathbb{R}_+$ such that the inequality [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} holds, then the inequality $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ holds for all $x\in X$. Indeed, using [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} with $z:=x$ and $t:=\vartheta_{1,\varphi}(x)$ (where $x\in X$) and that $\varphi\in\Psi[Z_1](X,\Theta)$, we get that $$\psi(x,\vartheta_{1,\varphi}(x)) \leqslant p(\vartheta_{1,\varphi}(x))\varphi(x,\vartheta_{1,\varphi}(x)) = p(\vartheta_{1,\varphi}(x)) \cdot 0 = 0, \qquad x\in X.$$ Since $\psi\in\Psi[Z_1](X,\Theta)$, this inequality implies that $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ for all $x\in X$, as desired.* *(iii). Let us suppose that $\psi\in\Psi[T,Z_1](X,\Theta)$ and $\varphi\in\Psi[Z](X,\Theta)$. If the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} holds for all $t\in\Theta$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)\leqslant t\leqslant\vartheta_{1,\varphi}(y)$, then, in general, [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"} does not hold, not even for $n=1$. We give a counterexample. Let $X:=\{x_1,x_2\}$, $\Theta:=\mathbb{R}$, and $$\begin{aligned} &\psi(x_j,t):=-jt,\qquad t\in\mathbb{R},\qquad j\in\{1,2\},\\ &\varphi(x_j,t):=-j(t+1),\qquad t\in\mathbb{R},\qquad j\in\{1,2\}. \end{aligned}$$ Then $\psi$ and $\varphi$ are $Z$-functions with $\vartheta_{n,\psi}(\pmb{x})=0$ and $\vartheta_{n,\varphi}(\pmb{x})=-1$ for all $n\in\mathbb{N}$ and $\pmb{x}\in X^n$. In particular, $\Theta_\varphi = \emptyset$. 
Moreover, if $t\in\Theta$ and $x,y\in X$ are such that $\vartheta_{1,\varphi}(x)\leqslant t \leqslant\vartheta_{1,\varphi}(y)$, then $t=-1$, and, since $\varphi(x,-1) = \varphi(y,-1) = 0$, $x,y\in X$, we get that both sides of the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} are $0$ for $t=-1$. Consequently, the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} holds for all $t\in\Theta$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)\leqslant t\leqslant\vartheta_{1,\varphi}(y)$. However, $\vartheta_{1,\psi}(x)=0>\vartheta_{1,\varphi}(x)=-1$, $x\in X$, and hence [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"} does not hold for $n=1$. This example also points out the fact that, in part (iii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, the assumption that the inequality $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ should hold for all $x\in X$ is not a redundant one. $\Box$* The following result is parallel to Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. **Theorem. 10**. *Let $\psi\in\Psi[Z](X,\Theta)$ and $\varphi\in\Psi[T,Z_1](X,\Theta)$. Then both of the assertions (i) and (ii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} and the following statements are equivalent to each other:* (iii) *For all $x\in X$, we have $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$, and the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} is valid for all $t\in \Theta$ and for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)$.* (iv) *For all $x\in X$, we have $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$, and there exists a nonnegative function $q:\Theta_\psi\to\mathbb{R}_+$ such that $$\begin{aligned} \label{phi_est_inequality_5} q(t)\psi(z,t)\leqslant\varphi(z,t), \qquad z\in X,\, t\in\Theta_\psi. \end{aligned}$$* The proof of Theorem [Theorem. 10](#Lem_phi_est_eq_5){reference-type="ref" reference="Lem_phi_est_eq_5"} is completely analogous to that of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}; nevertheless, for the sake of completeness, we present it in detail. \(i\) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} $\Rightarrow$ (ii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. This has already been proved. \(ii\) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} $\Rightarrow$ (iii). Assume that (ii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} holds. Exactly as in the proof of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, we have $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ for all $x\in X$. We prove the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} by contradiction. Assume that for some $t\in\Theta$ and $x,y\in X$ with $\vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)$, we have $$\begin{aligned} \psi(x,t) \varphi(y,t) > \psi(y,t) \varphi(x,t).
\end{aligned}$$ In view of the inequalities $\vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)\leqslant\vartheta_{1,\varphi}(y)$, we get that $\psi(x,t)<0$, $\psi(y,t)>0$ and $\varphi(y,t)>0$. Thus, we obtain [\[help_10\]](#help_10){reference-type="eqref" reference="help_10"}, and, as in the corresponding part of the proof of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, we have that there exist $k,m\in\mathbb{N}$ such that $$k\varphi(x,t) + m \varphi(y,t) < 0 < k\psi(x,t) + m \psi(y,t).$$ Since $\psi$ possesses the property $[Z]$, in view of Lemma [Lemma. 3](#Lem_psi_est_eq_2){reference-type="ref" reference="Lem_psi_est_eq_2"}, we get the inequalities $$\vartheta_{k+m,\varphi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}) \leqslant t < \vartheta_{k+m,\psi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}),$$ which contradicts [\[psi_est_inequality_2.5\]](#psi_est_inequality_2.5){reference-type="eqref" reference="psi_est_inequality_2.5"}. (iii)$\Rightarrow$(iv). Assume that (iii) holds. If $\Theta_\psi$ is empty, then there is nothing to prove. Thus, we may assume that $\Theta_\psi\neq\emptyset$. Rearranging the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"}, for all $t\in\Theta_\psi$ and for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)$, we get $$\begin{aligned} \label{help_7_uj} \frac{\varphi(x,t)}{\psi(x,t)} \leqslant\frac{\varphi(y,t)}{\psi(y,t)},\end{aligned}$$ where we used that $\psi(x,t)<0$ and $\psi(y,t)>0$. Now we define the function $q:\Theta_\psi\to\mathbb{R}$ by $$q(t):=\inf \bigg\{\frac{\varphi(y,t)}{\psi(y,t)}\,\bigg|\,y\in X,\,t<\vartheta_{1,\psi}(y)\bigg\}, \qquad t\in\Theta_\psi.$$ Due to the inclusion $t\in\Theta_\psi$, the function $q$ is finite valued, and $q(t)\geqslant 0$ for all $t\in \Theta_\psi$. Indeed, if $t\in\Theta_\psi$, then there exists $y\in X$ such that $t<\vartheta_{1,\psi}(y)$ yielding that $t<\vartheta_{1,\psi}(y)\leqslant\vartheta_{1,\varphi}(y)$, therefore $\psi(y,t)>0$ and $\varphi(y,t)>0$, and hence $q(t)$ is defined as the infimum of certain positive real numbers. Further, by the definition of infimum and [\[help_7\_uj\]](#help_7_uj){reference-type="eqref" reference="help_7_uj"}, for all $t\in\Theta_\psi$ and for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<t<\vartheta_{1,\psi}(y)$, we obtain that $$\label{help_6.5_uj} \frac{\varphi(x,t)}{\psi(x,t)} \leqslant q(t)\leqslant\frac{\varphi(y,t)}{\psi(y,t)}.$$ Let $z\in X$ be fixed arbitrarily. The first inequality in [\[help_6.5_uj\]](#help_6.5_uj){reference-type="eqref" reference="help_6.5_uj"} with $x:=z$ implies that [\[phi_est_inequality_5\]](#phi_est_inequality_5){reference-type="eqref" reference="phi_est_inequality_5"} holds for all $t\in\Theta_\psi$ with $\vartheta_{1,\psi}(z)<t$. Similarly, the second inequality in [\[help_6.5_uj\]](#help_6.5_uj){reference-type="eqref" reference="help_6.5_uj"} with $y:=z$ yields that [\[phi_est_inequality_5\]](#phi_est_inequality_5){reference-type="eqref" reference="phi_est_inequality_5"} is also valid for all $t\in\Theta_\psi$ with $t<\vartheta_{1,\psi}(z)$. Finally, if $t=\vartheta_{1,\psi}(z)$, then by the property $[Z]$ of $\psi$, we have that $\psi(z,t)=0$. Furthermore, by the assumption in part (iii), the inequality $t=\vartheta_{1,\psi}(z)\leqslant\vartheta_{1,\varphi}(z)$ holds. Using that $\varphi$ is a $Z_1$-function, it implies that $\varphi(z,t)\geqslant 0$. 
This proves that [\[phi_est_inequality_5\]](#phi_est_inequality_5){reference-type="eqref" reference="phi_est_inequality_5"} holds for $t=\vartheta_{1,\psi}(z)$ as well. \(iv\) $\Rightarrow$ (i) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. Assume that (iv) is valid. To prove (i) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, let $n\in\mathbb{N}$ and $x_1,\dots,x_n\in X$. In the proof of the inequality [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"}, we distinguish two cases. First, consider the case when $\vartheta_{1,\psi}(x_1)=\dots=\vartheta_{1,\psi}(x_n)=:t_0$. Then $\psi(x_1,t_0)=\dots= \psi(x_n,t_0)=0$, since $\psi$ possesses the property $[Z]$. Therefore, $$\sum_{i=1}^n\psi(x_i,t_0)=0,$$ which (using that $\psi$ is also a $T$-function or Lemma [Lemma. 3](#Lem_psi_est_eq_2){reference-type="ref" reference="Lem_psi_est_eq_2"}) implies that $$\vartheta_{n,\psi}(x_1,\dots,x_n)=t_0.$$ On the other hand, the inequality $\vartheta_{1,\psi}\leqslant\vartheta_{1,\varphi}$ yields that $$t_0=\vartheta_{1,\psi}(x_1)\leqslant\vartheta_{1,\varphi}(x_1),\qquad\dots,\qquad t_0=\vartheta_{1,\psi}(x_n)\leqslant\vartheta_{1,\varphi}(x_n),$$ and hence, using that $\varphi$ is a $Z_1$-function, we have $$\varphi(x_1,t_0)\geqslant 0,\qquad\dots,\qquad \varphi(x_n,t_0)\geqslant 0.$$ Therefore $$\sum_{i=1}^n\varphi(x_i,t_0)\geqslant 0,$$ which (using that $\varphi$ is a $T$-function) implies that $$\vartheta_{n,\psi}(x_1,\dots,x_n)= t_0 \leqslant\vartheta_{n,\varphi}(x_1,\dots,x_n).$$ Thus [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"} is proved when $\vartheta_{1,\psi}(x_1)=\dots=\vartheta_{1,\psi}(x_n)$ holds. Now consider the case, when $$t':=\min(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n)) <\max(\vartheta_{1,\psi}(x_1),\dots,\vartheta_{1,\psi}(x_n))=:t''.$$ Let $t_0:=\vartheta_{n,\psi}(x_1,\dots,x_n)$. Then there exist $i,j\in\{1,\dots,n\}$ such that $t'=\vartheta_{1,\psi}(x_i)$ and $t''=\vartheta_{1,\psi}(x_j)$. According to Proposition [Proposition. 4](#Pro_psi_becsles_mean_prop){reference-type="ref" reference="Pro_psi_becsles_mean_prop"} (which can be applied, since $\psi$ has the property $[Z]$), we have that $t'<t_0<t''$, which shows that $t_0\in\Theta_\psi$. Hence we can apply the inequality [\[phi_est_inequality_5\]](#phi_est_inequality_5){reference-type="eqref" reference="phi_est_inequality_5"} with $z:=x_i$, $i\in\{1,\dots,n\}$ and $t:=t_0$. Then, adding up the inequalities so obtained side by side, we arrive at $$0=q(t_0)\sum_{i=1}^n \psi(x_i,t_0) \leqslant\sum_{i=1}^n \varphi(x_i,t_0),$$ where the equality follows from the property $[Z]$ of $\psi$ and from the definition of $t_0$. This, according to the property $[T_n]$ of $\varphi$, implies that $$\vartheta_{n,\psi}(x_1,\dots,x_n) = t_0 \leqslant\vartheta_{n,\varphi}(x_1,\dots,x_n),$$ and proves (i) in the considered second case as well. $\Box$ Note that if $\psi,\varphi\in\Psi[Z](X,\Theta)$, then the assertions (i), (ii), (iii) and (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} and the assertions (iii) and (iv) of Theorem [Theorem. 10](#Lem_phi_est_eq_5){reference-type="ref" reference="Lem_phi_est_eq_5"} are equivalent to each other. 
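To make the comparison criteria above more concrete, the following minimal, self-contained Python sketch locates generalized $\psi$-estimators numerically and checks assertion (i) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} against its sufficient condition (iv). The particular functions $\psi(x,t):=x-t$, $\varphi(x,t):=\mathrm{e}^x-\mathrm{e}^t$ (with $X=\Theta=\mathbb{R}$) and the multiplier $p(t):=\mathrm{e}^{-t}$ are illustrative assumptions only and are not taken from the results above; for this pair, [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} reduces to the elementary inequality $\mathrm{e}^u\geqslant 1+u$, and the two estimators are the arithmetic mean and the exponential mean $\log\big(\tfrac{1}{n}\sum_{i=1}^n\mathrm{e}^{x_i}\big)$. Each estimator is found as the unique sign change of $t\mapsto\sum_{i=1}^n\psi(x_i,t)$ by bisection.

```python
import math
import random

def estimator(chi, sample, lo, hi, tol=1e-12):
    """Locate the unique sign change of t -> sum_i chi(x_i, t) by bisection,
    assuming the sum is positive at lo and negative at hi (the T-property)."""
    def total(t):
        return sum(chi(x, t) for x in sample)
    assert total(lo) > 0 > total(hi), "bracket does not enclose the sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (hypothetical) choices: psi yields the arithmetic mean,
# phi yields the exponential mean log((1/n) * sum exp(x_i)).
psi = lambda x, t: x - t
phi = lambda x, t: math.exp(x) - math.exp(t)
p = lambda t: math.exp(-t)   # multiplier for condition (iv): psi(z,t) <= p(t)*phi(z,t)

random.seed(0)
for _ in range(5):
    sample = [random.uniform(-2.0, 2.0) for _ in range(7)]
    lo, hi = min(sample) - 1.0, max(sample) + 1.0
    est_psi = estimator(psi, sample, lo, hi)
    est_phi = estimator(phi, sample, lo, hi)
    # assertion (i): the psi-estimator never exceeds the phi-estimator
    assert est_psi <= est_phi + 1e-9
    # condition (iv), checked on a crude grid of t values
    grid = [lo + k * (hi - lo) / 50 for k in range(51)]
    assert all(psi(z, t) <= p(t) * phi(z, t) + 1e-9 for z in sample for t in grid)
    print(f"arithmetic mean {est_psi:.6f} <= exponential mean {est_phi:.6f}")
```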
In the next result, under some additional regularity assumptions on $\psi$ and $\varphi$, we derive another set of conditions that is equivalent to [\[psi_est_inequality_2\]](#psi_est_inequality_2){reference-type="eqref" reference="psi_est_inequality_2"}. **Theorem. 11**. *Let $\psi,\varphi\in\Psi[C,Z](X,\Theta)$. Assume that $\vartheta_{1,\psi}=\vartheta_{1,\varphi}=:\vartheta_1$ on $X$, $\vartheta_1(X)=\Theta$, and, for all $x\in X$, the maps $$\Theta\ni t\mapsto\psi(x,t) \qquad\mbox{and}\qquad \Theta\ni t\mapsto\varphi(x,t)$$ are differentiable at $\vartheta_1(x)$ with a non-vanishing derivative. Then any of the equivalent assertions (i), (ii), (iii) and (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} is equivalent to the following one:* (v) *For all $x,y\in X$, we have $$\begin{aligned} \label{phi_est_inequality_6} -\frac{\psi(y,\vartheta_1(x))}{\partial_2\psi(x,\vartheta_1(x))} \leqslant-\frac{\varphi(y,\vartheta_1(x))}{\partial_2\varphi(x,\vartheta_1(x))}. \end{aligned}$$* First, we check that $\Theta_\psi=\Theta_\varphi=\Theta$. If $t\in\Theta$, then, using that $\Theta$ is open, there exist $t_1,t_2\in\Theta$ such that $t_1<t<t_2$. Since $\vartheta_1(X)=\Theta$, there exist $x_1,x_2\in X$ such that $\vartheta_1(x_1)=t_1$ and $\vartheta_1(x_2)=t_2$, yielding that $t\in\Theta_\psi$ and $t\in\Theta_\varphi$, and hence $\Theta_\psi=\Theta_\varphi=\Theta$. Next, we verify that $\partial_2\psi(z,\vartheta_1(z))<0$ and $\partial_2\varphi(z,\vartheta_1(z)) < 0$ for all $z\in X$. Using that $\psi$ is a $Z$-function, for all $z\in X$, we have $\psi(z,\vartheta_1(z))=0$, and hence $$\begin{aligned} \label{help_8} \partial_2\psi(z,\vartheta_1(z)) = \lim_{t\to\vartheta_1(z)} \frac{\psi(z,t) - \psi(z,\vartheta_1(z))}{t-\vartheta_1(z)} = \lim_{t\to\vartheta_1(z)} \frac{\psi(z,t)}{t-\vartheta_1(z)} \leqslant 0. \end{aligned}$$ Indeed, if $t\in\Theta$ and $t<\vartheta_1(z)$, then $\psi(z,t)>0$ and $t-\vartheta_1(z)<0$; and if $t\in\Theta$ and $t>\vartheta_1(z)$, then $\psi(z,t)<0$ and $t-\vartheta_1(z)>0$. Using the assumption $\partial_2\psi(z,\vartheta_1(z))\ne 0$, $z\in X$, [\[help_8\]](#help_8){reference-type="eqref" reference="help_8"} yields $\partial_2\psi(z,\vartheta_1(z))<0$, $z\in X$, as desired. In the same way, we can get $\partial_2\varphi(z,\vartheta_1(z))<0$, $z\in X$. Assume that any of the equivalent assertions (i)-(iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} holds. We give two alternative proofs for [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"}. *First proof for [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"}:* Let $x,y\in X$ be arbitrary. We may suppose that $\vartheta_{1}(x) \ne \vartheta_{1}(y)$, since otherwise, using that both $\psi$ and $\varphi$ have the property $[Z]$, we get $$\psi(y,\vartheta_1(x)) = \varphi(y,\vartheta_1(x)) = 0,$$ and consequently, the inequality [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} holds trivially. For each $n\in\mathbb{N}$, let $t_n:=\vartheta_{n+1,\psi}(x,\dots,x,y)$ and $\widetilde t_n:=\vartheta_{n+1,\varphi}(x,\dots,x,y)$, where $x$ is repeated $n$ times both for $t_n$ and $\widetilde t_n$. Assertion (i) of Theorem
[Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} yields that, for each $n\in\mathbb{N}$, we have $$\begin{aligned} \label{help_13} n(t_n-\vartheta_{1}(x)) \leqslant n(\widetilde t_n-\vartheta_{1}(x)). \end{aligned}$$ Then $t_n\to \vartheta_{1}(x)$ as $n\to\infty$ (see Lemma [Lemma. 7](#Lem_psi_est_cont){reference-type="ref" reference="Lem_psi_est_cont"}) and, since $\psi$ has the property $[Z]$, we get $$n\psi(x,t_n)+\psi(y,t_n)=0, \qquad n\in\mathbb{N}.$$ If $\psi(x,t_n) = 0$ for some $n\in\mathbb{N}$, then we also have $\psi(y,t_n)=0$, and hence $t_n = \vartheta_{1}(x) = \vartheta_{1}(y)$. This leads us to a contradiction. Consequently, $\psi(x,t_n)\ne 0$, and then $n=-\frac{\psi(y,t_n)}{\psi(x,t_n)}$ for any $n\in\mathbb{N}$. Therefore, using also that $\psi$ is a $C$-function, we get $$\begin{aligned} n(t_n-\vartheta_{1}(x)) &=-\frac{\psi(y,t_n)}{\psi(x,t_n)}(t_n-\vartheta_{1}(x))\\ &=-\psi(y,t_n)\frac{t_n-\vartheta_{1}(x)}{\psi(x,t_n)-\psi(x,\vartheta_{1}(x))} \to -\frac{\psi(y,\vartheta_{1}(x))}{\partial_2\psi(x,\vartheta_{1}(x))} \qquad \text{as $n\to\infty$.}\end{aligned}$$ Similarly, using that $\varphi\in\Psi[C,Z](X,\Theta)$, one can check that $$\begin{aligned} n(\widetilde t_n-\vartheta_{1}(x)) \to -\frac{\varphi(y,\vartheta_{1}(x))}{\partial_2\varphi(x,\vartheta_{1}(x))} \qquad \text{as \ $n\to\infty$.} \end{aligned}$$ Now, upon taking the limit $n\to\infty$ in [\[help_13\]](#help_13){reference-type="eqref" reference="help_13"}, we can see that [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} is valid. *Second proof for [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"}:* Assertion (iii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} states that [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} holds for all $t\in\Theta$ and for all $x,y\in X$ with $\vartheta_1(x)<t<\vartheta_1(y)$. Let $x,y\in X$ be arbitrarily fixed. If $\vartheta_1(x) = \vartheta_1(y)$, then, using that $\psi$ and $\varphi$ are $Z$-functions, we have $\psi(y,\vartheta_1(x)) = \varphi(y,\vartheta_1(x))=0$, and hence [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} readily holds. If $\vartheta_1(x) < \vartheta_1(y)$, then, for all $t\in\Theta$ with $\vartheta_1(x)<t<\vartheta_1(y)$, the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} is valid, and using that $\psi$ and $\varphi$ are $Z$-functions, we get $$\frac{\psi(x,t)-\psi(x,\vartheta_1(x))} {t-\vartheta_1(x)} \varphi(y,t) \leqslant\psi(y,t) \frac{\varphi(x,t)-\varphi(x,\vartheta_1(x))}{t-\vartheta_1(x)}, \qquad t\in(\vartheta_1(x), \vartheta_1(y)).$$ Upon taking the limit $t\downarrow\vartheta_1(x)$, using the differentiability assumption and that $\psi$ and $\varphi$ are $C$-functions, the previous inequality yields that $$\partial_2\psi(x,\vartheta_1(x))\cdot\varphi(y,\vartheta_1(x)) \leqslant\psi(y,\vartheta_1(x))\cdot\partial_2\varphi(x,\vartheta_1(x)).$$ Using that $\partial_2\psi(z,\vartheta_1(z))<0$, $z\in X$, and $\partial_2\varphi(z,\vartheta_1(z)) < 0$, $z\in X$, the previous inequality implies [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} in the case of $\vartheta_1(x) < \vartheta_1(y)$.
If $\vartheta_1(x) > \vartheta_1(y)$, by a similar argument, we get that the inequality [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"} is valid for all $t\in\Theta$ with $\vartheta_1(y)<t< \vartheta_1(x)$. Using that $\psi$ and $\varphi$ are $Z$-functions, we get $$\psi(y,t) \frac{\varphi(x,t) - \varphi(x,\vartheta_1(x))}{t-\vartheta_1(x)} \geqslant\frac{\psi(x,t)-\psi(x,\vartheta_1(x))}{t-\vartheta_1(x)} \varphi(y,t), \qquad t\in(\vartheta_1(y), \vartheta_1(x)).$$ Upon taking the limit $t\uparrow\vartheta_1(x)$, using the differentiability assumption and that $\psi$ and $\varphi$ are $C$-functions, the previous inequality implies that $$\psi(y,\vartheta_1(x))\cdot\partial_2 \varphi(x,\vartheta_1(x)) \geqslant\partial_2\psi(x,\vartheta_1(x))\cdot \varphi(y,\vartheta_1(x)).$$ Using that $\partial_2\psi(z,\vartheta_1(z))<0$ and $\partial_2\varphi(z,\vartheta_1(z)) < 0$, $z\in X$, the previous inequality implies [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} also in the case of $\vartheta_1(x) > \vartheta_1(y)$. Assume that [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} holds. We give two alternative proofs for one of the equivalent assertions (i)-(iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. *First proof for the sufficiency (using the axiom of choice)*. We are going to show that our assertion (v) implies condition (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. In view of the condition $\vartheta_1(X)=\Theta$ and applying the axiom of choice, there exists a right inverse of $\vartheta_1$, i.e., there exists a function $\rho:\Theta\to X$ such that $\vartheta_1(\rho(t))=t$ is valid for all $t\in\Theta$. Then, substituting $x:=\rho(t)$ in the inequality [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"}, for all $t\in\Theta$ and $y\in X$, we get that $$\begin{aligned} \label{phi_est_inequality_66} -\frac{\psi(y,t)}{\partial_2\psi(\rho(t),t)} \leqslant-\frac{\varphi(y,t)}{\partial_2\varphi(\rho(t),t)}. \end{aligned}$$ Using that $\partial_2\psi(\rho(t),t) = \partial_2\psi(\rho(t),\vartheta_1(\rho(t))) < 0$ and $\partial_2\varphi(\rho(t),t) = \partial_2\varphi(\rho(t),\vartheta_1(\rho(t))) < 0$ for all $t\in\Theta$, [\[phi_est_inequality_66\]](#phi_est_inequality_66){reference-type="eqref" reference="phi_est_inequality_66"} implies that condition (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} holds with the positive function $p:\Theta\to\mathbb{R}$ defined by $$p(t):=\frac{\partial_2\psi(\rho(t),t)}{\partial_2\varphi(\rho(t),t)}, \qquad t\in\Theta.$$ *Second proof for the sufficiency (without using the axiom of choice)*. We are going to show that our assertion (v) implies condition (iii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}. Assume that assertion (v) of the present theorem holds. Let $t\in\Theta$ and $x,y\in X$ be such that $\vartheta_1(x) < t < \vartheta_1(y)$. Since $\vartheta_1(X)=\Theta$, there exists $z\in X$ such that $t=\vartheta_1(z)$, yielding that $\vartheta_1(x)< \vartheta_1(z) < \vartheta_1(y)$.
By applying [\[phi_est_inequality_6\]](#phi_est_inequality_6){reference-type="eqref" reference="phi_est_inequality_6"} for the pairs $x,z$ and $y,z$, we get $$\begin{aligned} \label{help_9} -\frac{\psi(x,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))} \leqslant-\frac{\varphi(x,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))} \qquad \text{and} \qquad -\frac{\psi(y,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))} \leqslant-\frac{\varphi(y,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))}. \end{aligned}$$ Since $\varphi(x,\vartheta_1(z))<0$ and $\partial_2\varphi(z,\vartheta_1(z))<0$, the first inequality in [\[help_9\]](#help_9){reference-type="eqref" reference="help_9"} implies that $$0 < \frac{\varphi(x,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))} \leqslant\frac{\psi(x,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))}.$$ Since $\psi(y,\vartheta_1(z))>0$ and $\partial_2\psi(z,\vartheta_1(z))<0$, the second inequality in [\[help_9\]](#help_9){reference-type="eqref" reference="help_9"} yields that $$0 < -\frac{\psi(y,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))} \leqslant-\frac{\varphi(y,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))}.$$ Multiplying the previous two inequalities, we have $$-\frac{\varphi(x,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))} \cdot\frac{\psi(y,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))} \leqslant -\frac{\psi(x,\vartheta_1(z))}{\partial_2\psi(z,\vartheta_1(z))} \cdot\frac{\varphi(y,\vartheta_1(z))}{\partial_2\varphi(z,\vartheta_1(z))}.$$ Since $\partial_2\psi(z,\vartheta_1(z))\cdot \partial_2\varphi(z,\vartheta_1(z)) >0$, we have $$\psi(x,\vartheta_1(z)) \varphi(y,\vartheta_1(z)) \leqslant\varphi(x,\vartheta_1(z)) \psi(y,\vartheta_1(z)).$$ Consequently, using that $\vartheta_1(z)=t$, we get $$\psi(x,t) \varphi(y,t) \leqslant\psi(y,t)\varphi(x,t),$$ i.e., we have [\[psi_est_inequality_3\]](#psi_est_inequality_3){reference-type="eqref" reference="psi_est_inequality_3"}, as desired. This proves that assertion (iii) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} is valid. $\Box$ Theorem [Theorem. 11](#Thm_inequality_diff){reference-type="ref" reference="Thm_inequality_diff"} gives back the comparison theorem for deviation means proved by Daróczy [@Dar71b part 1, Section 3] (see also Daróczy and Páles [@DarPal82 Theorem 2]), since in the special case in question one can choose $X:=\Theta$ and we have that $\vartheta_1(x)=x$, $x\in X$. Next, we can solve the equality problem for generalized $\psi$-estimators. **Theorem. 12**. *Let $\psi\in\Psi[T,Z_1](X,\Theta)$ and $\varphi\in\Psi[Z](X,\Theta)$. Assume that $\vartheta_{1,\psi}=\vartheta_{1,\varphi}$ on $X$. Then $\Theta_\psi=\Theta_\varphi$ and the following assertions are equivalent:* (i) *The equality $$\begin{aligned} \label{psi_est_inequality_2_eq} \vartheta_{n,\psi}(x_1,\ldots,x_n) = \vartheta_{n,\varphi}(x_1,\ldots,x_n) \end{aligned}$$ holds for each $n\in\mathbb{N}$ and $x_1,\dots,x_n\in X$.* (ii) *The equality $$\begin{aligned} \label{psi_est_inequality_2.5_eq} \vartheta_{k+m,\psi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}) = \vartheta_{k+m,\varphi}(\underbrace{x,\ldots,x}_{k}, \underbrace{y,\ldots,y}_{m}) \end{aligned}$$ holds for each $k,m\in\mathbb{N}$ and $x,y\in X$.* (iii) *There exists a positive function $p:\Theta_\varphi\to(0,\infty)$ such that $$\begin{aligned} \label{psi_est_inequality_5_eq} \psi(z,t) = p(t)\varphi(z,t), \qquad z\in X,\, t\in\Theta_\varphi. \end{aligned}$$* The equality $\Theta_\psi=\Theta_\varphi$ follows from $\vartheta_{1,\psi}=\vartheta_{1,\varphi}=: \vartheta_1$. (i)$\Rightarrow$(ii). This implication becomes obvious by taking $n=k+m$ in assertion (i). (ii)$\Rightarrow$(iii). Assume that (ii) holds.
If $\Theta_\psi=\Theta_\varphi =\emptyset$, then there is nothing to prove. Thus, we may assume that $\Theta_\psi=\Theta_\varphi \neq\emptyset$. The equality [\[psi_est_inequality_2.5_eq\]](#psi_est_inequality_2.5_eq){reference-type="eqref" reference="psi_est_inequality_2.5_eq"} implies that the inequality [\[psi_est_inequality_2.5\]](#psi_est_inequality_2.5){reference-type="eqref" reference="psi_est_inequality_2.5"} and its reversed counterpart hold for each $k,m\in\mathbb{N}$ and $x,y\in X$. Then the equivalence of parts (ii) and (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} and that of parts (ii) and (iv) of Theorem [Theorem. 10](#Lem_phi_est_eq_5){reference-type="ref" reference="Lem_phi_est_eq_5"} (the latter applied with the roles of $\psi$ and $\varphi$ interchanged) yield that there exist non-negative functions $p,q:\Theta_\varphi\to\mathbb{R}_+$ such that $$\psi(z,t)\leqslant p(t)\varphi(z,t) \qquad\mbox{and}\qquad q(t)\varphi(z,t)\leqslant\psi(z,t), \qquad z\in X,\; t\in\Theta_\varphi,$$ respectively. Consequently, we have $$\begin{aligned} \label{help_12} q(t)\varphi(z,t) \leqslant p(t)\varphi(z,t), \qquad z\in X, \;\; t\in\Theta_\varphi. \end{aligned}$$ For all $t\in\Theta_\varphi$, there exist $x,y\in X$ such that $\vartheta_1(x)<t<\vartheta_1(y)$. Taking the substitutions $z:=x$ and $z:=y$ in the inequality [\[help_12\]](#help_12){reference-type="eqref" reference="help_12"}, and using that $\varphi(x,t)<0$ and $\varphi(y,t)>0$, it follows that $p(t)=q(t)$ for all $t\in\Theta_\varphi$. Consequently, $\psi(z,t)=p(t) \varphi(z,t)$ for all $z\in X$ and $t\in\Theta_\varphi$. This equality shows that $p$ cannot vanish anywhere. Indeed, if $p(t)=0$ for some $t\in\Theta_\varphi$, then $\psi(z,t)=0$ for all $z\in X$. However, $t\in\Theta_\varphi=\Theta_\psi$ implies the existence of $x\in X$ such that $\vartheta_1(x)<t$, yielding that $\psi(x,t)<0$, which leads us to a contradiction. Thus, $p$ has to be positive everywhere and the assertion (iii) holds. (iii)$\Rightarrow$(i). Let $n\in\mathbb{N}$ and $(x_1,\dots,x_n)\in X^n$ be fixed. If $\vartheta_1(x_1)=\dots=\vartheta_1(x_n)$, then, according to Proposition [Proposition. 4](#Pro_psi_becsles_mean_prop){reference-type="ref" reference="Pro_psi_becsles_mean_prop"}, it follows that $\vartheta_{n,\psi}(x_1,\dots,x_n)=\vartheta_1(x_1)=\vartheta_{n,\varphi}(x_1,\dots,x_n)$, and hence [\[psi_est_inequality_2_eq\]](#psi_est_inequality_2_eq){reference-type="eqref" reference="psi_est_inequality_2_eq"} holds in this case. Now assume that $\min(\vartheta_1(x_1),\dots,\vartheta_1(x_n))<\max(\vartheta_1(x_1),\dots,\vartheta_1(x_n))$. Then $\Theta_\psi=\Theta_\varphi$ is not the empty set, and for each $t\in\Theta_\varphi$, we have $$\operatorname{sign}\left( \sum_{i=1}^n \psi(x_i,t) \right) = \operatorname{sign}\left( \sum_{i=1}^n p(t)\varphi(x_i,t) \right) = \operatorname{sign}\left( \sum_{i=1}^n\varphi(x_i,t) \right),$$ since $p(t)>0$ and $t\in\Theta_\varphi$. Recall that the nonempty set $\Theta_\varphi$ is the interior of the convex hull of $\vartheta_1(X)$ (see the discussion before Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}). Consequently, for $t\in\Theta$ with $t\leqslant\inf\Theta_\varphi$, we have that $t\leqslant\min(\vartheta_1(x_1),\dots,\vartheta_1(x_n))$, thus, for all $i\in\{1,\dots,n\}$, the property $[Z_1]$ of $\varphi$ and $\psi$ yields that $\psi(x_i,t)\geqslant 0$ and $\varphi(x_i,t)\geqslant 0$. Furthermore, for some $j\in\{1,\dots,n\}$, we have that $t<\vartheta_1(x_j)$, which implies the strict inequalities $\psi(x_j,t)>0$ and $\varphi(x_j,t)>0$ (in fact, one can choose $j$ as an index for which $\vartheta_1(x_j) = \max(\vartheta_1(x_1),\dots,\vartheta_1(x_n))$). Therefore $\sum_{i=1}^n \psi(x_i,t)>0$ and $\sum_{i=1}^n\varphi(x_i,t)>0$ for $t\in\Theta$ with $t\leqslant\inf\Theta_\varphi$. On the other hand, for $t\in\Theta$ with $t\geqslant\sup\Theta_\varphi$, we can similarly see that the reversed inequalities $\sum_{i=1}^n \psi(x_i,t)<0$ and $\sum_{i=1}^n\varphi(x_i,t)<0$ hold. Thus, we have proved that $$\begin{aligned} \label{sign-eq} \operatorname{sign}\left( \sum_{i=1}^n \psi(x_i,t) \right) = \operatorname{sign}\left( \sum_{i=1}^n\varphi(x_i,t) \right) \end{aligned}$$ is valid for all $t\in\Theta$.
This, taking into account that $\psi$ and $\varphi$ have the property $[T_n]$, shows that the maps $$\Theta\ni t\mapsto\sum_{i=1}^n \psi(x_i,t) \qquad\mbox{and}\qquad \Theta\ni t\mapsto\sum_{i=1}^n \varphi(x_i,t)$$ must have the same point of sign change, whence the assertion (i) follows. $\Box$ **Remark. 13**. *(i). Roughly speaking, the equivalence of parts (i) and (ii) of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} means that if the $\psi$- and $\varphi$-estimators (satisfying the conditions of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"}) coincide on any realization of samples of arbitrary length but having only two different observations, then the two estimators in question coincide on any realization of samples of arbitrary length without any restriction on the number of different observations.* *(ii). Under the assumptions of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"}, if any of its equivalent assertions (i), (ii) and (iii) hold and $\Theta_\varphi$ is not empty, then $\psi$ has the property $[Z]$ as well. Indeed, in the second case of the proof of (iii)$\Rightarrow$(i), we have checked that for all $n\in\mathbb{N}$, $(x_1,\ldots,x_n)\in X^n$ and $t\in\Theta$, the sums $\sum_{i=1}^n \psi(x_i,t)$ and $\sum_{i=1}^n \varphi(x_i,t)$ have the same sign (see [\[sign-eq\]](#sign-eq){reference-type="eqref" reference="sign-eq"}). Consequently, using that $\varphi$ has the property $[Z]$, it follows that $\psi$ has the property $[Z]$ as well.* *(iii). The statement in Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} remains true under the assumptions $\psi\in\Psi[Z](X,\Theta)$, $\varphi\in\Psi[T,Z_1](X,\Theta)$, and $\vartheta_{1,\psi}=\vartheta_{1,\varphi}$ on $X$ as well. This can be readily seen from the proof of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"}.* *(iv). The proof of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} shows that its assumptions can be weakened in the following sense. For the implication (i)$\Rightarrow$(ii), it is enough to assume that $\psi$ and $\varphi$ have the property $[T]$. For the implication (iii)$\Rightarrow$(i), it is enough to assume that both $\psi$ and $\varphi$ possess the properties $[T]$ and $[Z_1]$.* *(v). It can happen that $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for all $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ with $\psi,\varphi\in \Psi(X,\Theta)$, but Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} cannot be applied. For example, let  $X:=\{x_1,x_2\}$,  $\Theta:=\mathbb{R}$,  and $$\begin{aligned} &\psi(x_j,t):=j\big({\boldsymbol{1}}_{(-\infty,0]}(t) - {\boldsymbol{1}}_{(0,\infty)}(t) \big),\qquad t\in\mathbb{R},\quad j\in\{1,2\},\\ &\varphi(x_j,t):=-jt,\qquad t\in\mathbb{R},\quad j\in\{1,2\}. \end{aligned}$$ Then $\psi$ is a $T$-function with $\vartheta_{n,\psi}({\boldsymbol{x}})=0$ for all $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$, and $\varphi$ is a $Z$-function with $\vartheta_{n,\varphi}({\boldsymbol{x}})=0$ for all $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$. Consequently, $\Theta_\psi = \Theta_\varphi = \emptyset$, and $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ for all $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$. 
Note also that $\psi$ does not satisfy the property $[Z_1]$, since $\psi(x_j,0)=j\ne 0$, $j\in\{1,2\}$, and hence we cannot apply Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"}. $\Box$* In the next result, under some additional regularity assumptions on $\psi$ and $\varphi$, we derive another set of conditions that is equivalent to any of the equivalent conditions (i), (ii) and (iii) of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"}. This result can be considered as a counterpart of the second part of Theorem 8 in Páles [@Pal88a] for quasi-deviation means. **Corollary. 14**. *Let $\psi,\varphi\in\Psi[C,Z](X,\Theta)$. Assume that $\vartheta_{1,\psi}=\vartheta_{1,\varphi}=:\vartheta_1$ on $X$, $\vartheta_1(X)=\Theta$, and, for all $x\in X$, the maps $$\Theta\ni t\mapsto\psi(x,t) \qquad\mbox{and}\qquad \Theta\ni t\mapsto\varphi(x,t)$$ are differentiable at $\vartheta_1(x)$ with a non-vanishing derivative. Then any of the equivalent assertions (i), (ii) and (iii) of Theorem [Theorem. 12](#Thm_equality){reference-type="ref" reference="Thm_equality"} is equivalent to the following one:* (iv) *For all $x,y\in X$, we have $$-\frac{\psi(y,\vartheta_1(x))}{\partial_2\psi(x,\vartheta_1(x))} = -\frac{\varphi(y,\vartheta_1(x))}{\partial_2\varphi(x,\vartheta_1(x))}.$$* This readily follows from Theorem [Theorem. 11](#Thm_inequality_diff){reference-type="ref" reference="Thm_inequality_diff"}. $\Box$

# Comparison and equality of Bajraktarević-type estimators {#Sec_comp_equal_Bajrak}

In this section we apply our results in Section [3](#Section_comp_equal_psi_est){reference-type="ref" reference="Section_comp_equal_psi_est"} for solving the comparison and equality problems of Bajraktarević-type estimators, which are special generalized $\psi$-estimators. First, we recall the notions of Bajraktarević-type functions and then Bajraktarević-type estimators. Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$, $f:\Theta\to\mathbb{R}$ be strictly increasing, $p:X\to\mathbb{R}_{++}$ and $F:X\to\mathop{\mathrm{conv}}(f(\Theta))$. In terms of these functions, define $\psi:X\times\Theta\to\mathbb{R}$ by $$\begin{aligned} \label{help16} \psi(x,t):=p(x)(F(x)-f(t)), \qquad x\in X,\; t\in\Theta. \end{aligned}$$ By Lemma 1 in Grünwald and Páles [@GruPal], there exists a uniquely determined monotone function $g:\mathop{\mathrm{conv}}(f(\Theta))\to \Theta$ such that $g$ is the left inverse of $f$, i.e., $g(f(t))=t$ holds for all $t\in\Theta$. Furthermore, $g$ is monotone in the same sense as $f$ (that is, $g$ is monotone increasing) and continuous. The function $g:\mathop{\mathrm{conv}}(f(\Theta))\to \Theta$ is called the *generalized left inverse of the strictly increasing function $f:\Theta\to\mathbb{R}$* and is denoted by $f^{(-1)}$. It is clear that the restriction of $f^{(-1)}$ to $f(\Theta)$ is the inverse of $f$ in the standard sense, which is also strictly increasing. Therefore, $f^{(-1)}$ is the continuous and monotone extension of the inverse of $f$ to the smallest interval containing the range of $f$, that is, to the convex hull of $f(\Theta)$. Recall also that, by Proposition 2.19 in Barczy and Páles [@BarPal2], under the above assumptions, $\psi$ is a $T_n^{{\boldsymbol{\lambda}}}$-function for each $n\in\mathbb{N}$ and ${\boldsymbol{\lambda}}\in\Lambda_n$, and $$\begin{aligned} \label{help14} \vartheta_{n,\psi}^{{\boldsymbol{\lambda}}}({\boldsymbol{x}}) =f^{(-1)}\bigg(\frac{\lambda_1p(x_1)F(x_1)+\dots+\lambda_np(x_n)F(x_n)}{\lambda_1p(x_1)+\dots+\lambda_np(x_n)}\bigg) \end{aligned}$$ for each $n\in\mathbb{N}$, ${\boldsymbol{x}}=(x_1,\dots,x_n)\in X^n$ and ${\boldsymbol{\lambda}}=(\lambda_1,\dots,\lambda_n)\in\Lambda_n$.
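As a purely numerical illustration of [\[help14\]](#help14){reference-type="eqref" reference="help14"}, the short Python sketch below evaluates a weighted Bajraktarević-type estimator in closed form and cross-checks it against the sign change of $t\mapsto\sum_{i=1}^n\lambda_i\psi(x_i,t)$. The concrete choices $\Theta:=(0,\infty)$, $f:=\log$ (so that $f^{(-1)}=\exp$ on $f(\Theta)=\mathbb{R}$), $F:=\log$, $p(x):=x$, as well as the sample and the weights, are illustrative assumptions and are not prescribed by the text.

```python
import math

def bajraktarevic_estimator(xs, lams, f_inv, p, F):
    """Closed form f^{(-1)}( sum_i lam_i p(x_i) F(x_i) / sum_i lam_i p(x_i) )."""
    num = sum(lam * p(x) * F(x) for x, lam in zip(xs, lams))
    den = sum(lam * p(x) for x, lam in zip(xs, lams))
    return f_inv(num / den)

def sign_change(xs, lams, psi, lo, hi, tol=1e-12):
    """Bisection for the point where t -> sum_i lam_i * psi(x_i, t) changes sign."""
    def total(t):
        return sum(lam * psi(x, t) for x, lam in zip(xs, lams))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative assumptions (not from the text): Theta = (0, inf), f = log,
# F = log, p(x) = x, so psi(x, t) = p(x) * (F(x) - f(t)) = x * (log x - log t).
f_inv = math.exp
p = lambda x: x
F = math.log
psi = lambda x, t: p(x) * (F(x) - math.log(t))

xs = [0.5, 1.3, 2.0, 4.7]          # sample in X = (0, inf)
lams = [1.0, 2.0, 1.0, 0.5]        # positive weights lambda_i
closed_form = bajraktarevic_estimator(xs, lams, f_inv, p, F)
numerical = sign_change(xs, lams, psi, min(xs), max(xs))
print(closed_form, numerical)      # the two values agree up to the bisection tolerance
assert abs(closed_form - numerical) < 1e-8
```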
In particular, $\vartheta_{1,\psi}=f^{(-1)}\circ F$ holds. The value $\vartheta_{n,\psi}^{{\boldsymbol{\lambda}}}({\boldsymbol{x}})$ given by [\[help14\]](#help14){reference-type="eqref" reference="help14"} can be called as a Bajraktarević-type $\psi$-estimator of some unknown parameter in $\Theta$ based on the realization ${\boldsymbol{x}}=(x_1,\ldots,x_n)\in X^n$ and weights $(\lambda_1,\ldots,\lambda_n)\in\Lambda_n$ corresponding to the Bajraktarević-type function given by [\[help16\]](#help16){reference-type="eqref" reference="help16"}. In particular, if $p=1$ is a constant function in [\[help16\]](#help16){reference-type="eqref" reference="help16"}, then we speak about a quasi-arithmetic-type $\psi$-estimator. As a first result, we give a necessary and sufficient condition in order that $\Theta_\psi=\emptyset$ hold, where $\Theta_\psi$ is given by [\[theta_psi\]](#theta_psi){reference-type="eqref" reference="theta_psi"}. **Lemma. 15**. *Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$, $f:\Theta\to\mathbb{R}$ be a strictly increasing function, $p:X\to\mathbb{R}_{++}$, $F:X\to \mathop{\mathrm{conv}}(f(\Theta))$, and define $\psi:X\times\Theta\to\mathbb{R}$ by [\[help16\]](#help16){reference-type="eqref" reference="help16"}. Then $\Theta_\psi = \emptyset$ holds if and only if there exists $t_0\in \Theta$ such that the range $F(X)$ of $F$ is contained in $[f(t_0-0),f(t_0+0)]$, where $f(t_0-0)$ and $f(t_0+0)$ denote the left and right hand limits of $f$ at $t_0$, respectively.* First, let us suppose that there exists $t_0\in \Theta$ such that $F(X)\subseteq J_f(t_0):=[f(t_0-0),f(t_0+0)]$. Then, using that $f$ is strictly increasing, for all $x\in X$ and $t',t''\in\Theta$ with $t'<t_0<t''$, we have that $f(t')<f(t_0-0)\leqslant F(x)\leqslant f(t_0+0)<f(t'')$, and therefore, taking into account that $p$ is positive, for all $x\in X$, we get $$p(x)(F(x) - f(t)) \begin{cases} >0 & \text{if $t<t_0$, $t\in\Theta$,}\\ <0 & \text{if $t>t_0$, $t\in\Theta$.} \end{cases}$$ Hence, $\vartheta_{1,\psi}(x)=t_0$ for all $x\in X$. This yields that $\Theta_\psi = \emptyset$. To prove the converse statement, we check that if there does not exist $t_0\in\Theta$ such that $F(X)\subseteq J_f(t_0)$, then $\Theta_\psi\ne \emptyset$. Since $f$ is strictly increasing, we have $$\mathop{\mathrm{conv}}(f(\Theta)) = \bigcup_{t\in\Theta} J_f(t),$$ and $J_f(t')\cap J_f(t'') = \emptyset$ for all $t',t''\in\Theta$ with $t'\neq t''$. Using also that $F(X)\subseteq \mathop{\mathrm{conv}}(f(\Theta))$, there exist $t_1,t_2\in\Theta$ with $t_1<t_2$ such that $F(X)\cap J_f(t_i)\ne \emptyset$, $i=1,2$. Hence there exist $x_1,x_2\in X$ such that $F(x_i)\in J_f(t_i)$, $i=1,2$. Consequently, similarly as before, we have $$p(x_i)(F(x_i) - f(t)) \begin{cases} >0 & \text{if $t<t_i$, $t\in\Theta$,}\\ <0 & \text{if $t>t_i$, $t\in\Theta$,} \end{cases} \qquad i=1,2,$$ yielding that $\vartheta_{1,\psi}(x_i)=t_i$, $i=1,2$. Therefore, $(t_1,t_2)\subseteq \Theta_\psi$, showing that $\Theta_\psi$ is not empty, as expected. $\Box$ The next lemma combined with Proposition [Proposition. 5](#Pro_density){reference-type="ref" reference="Pro_density"} will turn out to be useful in the proof of Theorem [Theorem. 25](#Thm_Baj_type_equality_2){reference-type="ref" reference="Thm_Baj_type_equality_2"} below. **Lemma. 16**. 
*Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$, $f:\Theta\to\mathbb{R}$ be a strictly increasing function, $p:X\to\mathbb{R}_{++}$, $F:X\to \mathop{\mathrm{conv}}(f(\Theta))$, and define $\psi:X\times\Theta\to\mathbb{R}$ by [\[help16\]](#help16){reference-type="eqref" reference="help16"}. Then, for all $x,y\in X$ with $\vartheta_{1,\psi}(x)<\vartheta_{1,\psi}(y)$, the map $$\begin{aligned} \label{umap} (\vartheta_{1,\psi}(x),\vartheta_{1,\psi}(y))\ni u \mapsto -\frac{\psi(x,u)}{\psi(y,u)} = -\frac{p(x)(F(x)-f(u))}{p(y)(F(y)-f(u))} \end{aligned}$$ is positive and strictly increasing.* Let $x,y\in X$ with $\vartheta_{1,\psi}(x)< \vartheta_{1,\psi}(y)$ and let $u\in (\vartheta_{1,\psi}(x), \vartheta_{1,\psi}(y))$ be arbitrary. Then $\psi(x,u)<0<\psi(y,u)$, which proves that the map [\[umap\]](#umap){reference-type="eqref" reference="umap"} is positive valued. To see the strict monotonicity property, let additionally $v\in (\vartheta_{1,\psi}(x), \vartheta_{1,\psi}(y))$ be arbitrary with $u<v$. Then $\psi(x,v)<0<\psi(y,v)$, which implies that $F(x)<f(v)<F(y)$. Thus $F(x)<F(y)$ and, by the strict monotonicity of $f$, we also have $f(u)<f(v)$. Therefore $$(F(y)-F(x))(f(v)-f(u))>0,$$ which is equivalent to the inequality $$(F(x)-f(u))(F(y)-f(v)) > (F(x)-f(v))(F(y)-f(u)).$$ Multiplying this inequality by $\frac{-p(x)}{p(y)(F(y)-f(u))(F(y)-f(v))}<0$ side by side, it follows that $$-\frac{p(x)(F(x)-f(u))}{p(y)(F(y)-f(u))} < -\frac{p(x)(F(x)-f(v))}{p(y)(F(y)-f(v))}.$$ This completes the proof of the strict increasingness of the map [\[umap\]](#umap){reference-type="eqref" reference="umap"}. We note that the statement also follows from the proof of Proposition 2.19 in Barczy and Páles [@BarPal2]. $\Box$ In the following statement we describe sufficient conditions which imply that the function $\psi$ defined by [\[help16\]](#help16){reference-type="eqref" reference="help16"} possesses the property $[Z_1]$ and $[Z]$, respectively. **Lemma. 17**. *Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$, $f:\Theta\to\mathbb{R}$ be a strictly increasing function, $p:X\to\mathbb{R}_{++}$, $F:X\to f(\Theta)$ and define $\psi:X\times\Theta\to\mathbb{R}$ by [\[help16\]](#help16){reference-type="eqref" reference="help16"}. Then $\psi$ has the property $[Z_1]$ and $$\begin{aligned} \label{help_23A} \Theta_\psi=\big\{t\in\Theta\,|\,\exists\, x,y\in X: F(x)<f(t)<F(y)\big\}. \end{aligned}$$ If, in addition, $\mathop{\mathrm{conv}}(F(X))\subseteq f(\Theta)$, then $\psi$ has the property $[Z]$ as well.* Since $F(X)\subseteq f(\Theta)$, the restriction of $f^{(-1)}$ onto $F(X)$ is the strictly increasing inverse of $f$ in the standard sense restricted to $F(X)$. Thus, for all $x\in X$, we have $$\begin{aligned} \psi(x,\vartheta_{1,\psi}(x)) & = p(x) (F(x) - f(\vartheta_{1,\psi}(x))) = p(x) \big(F(x) - f(f^{(-1)}(F(x))) \big) \\ & = p(x) (F(x) - F(x))=0, \end{aligned}$$ yielding that $\psi$ has the property $[Z_1]$. Furthermore, using also that $(f^{(-1)}\circ f)(t)=t$, $t\in\Theta$, $\vartheta_{1,\psi}=f^{(-1)}\circ F$, and that $f$ and the restriction of $f^{(-1)}$ to $f(\Theta)$ are strictly increasing, we get that, for all $x\in X$ and $t\in\Theta$, the inequality $\vartheta_{1,\psi}(x)<t$ holds if and only if $F(x)<f(t)$, and the inequality $t<\vartheta_{1,\psi}(x)$ holds if and only if $f(t)<F(x)$. This yields [\[help_23A\]](#help_23A){reference-type="eqref" reference="help_23A"}, as desired. To prove the last assertion, let us assume that $\mathop{\mathrm{conv}}(F(X))\subseteq f(\Theta)$. Then, by [\[help14\]](#help14){reference-type="eqref" reference="help14"}, for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}=(x_1,\ldots,x_n)\in X^n$, we have $$\begin{aligned} \sum_{i=1}^n \psi(x_i,\vartheta_{n,\psi}({\boldsymbol{x}})) & = \sum_{i=1}^n p(x_i) \big( F(x_i) - f(\vartheta_{n,\psi}({\boldsymbol{x}})) \big) \\ & = \sum_{i=1}^n p(x_i) F(x_i) - \sum_{i=1}^n p(x_i) f\left(f^{(-1)}\left( \sum_{j=1}^n \frac{p(x_j)}{p(x_1)+\cdots +p(x_n)} F(x_j) \right) \right) \\ & = \sum_{i=1}^n p(x_i) F(x_i) - \sum_{i=1}^n p(x_i) \sum_{j=1}^n \frac{p(x_j)}{p(x_1)+\cdots +p(x_n)} F(x_j) =0, \end{aligned}$$ where we used that $\sum_{j=1}^n \frac{p(x_j)}{p(x_1)+\cdots +p(x_n)} F(x_j)\in \mathop{\mathrm{conv}}(F(X)) \subseteq f(\Theta)$.
$\Box$ In the next remark, we point out the fact that [\[help_23A\]](#help_23A){reference-type="eqref" reference="help_23A"} does not hold in general, showing that the assumption $F(X)\subseteq f(\Theta)$ in Lemma [Lemma. 17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"} is indispensable. **Remark. 18**. *Let $X:=\{x_1,x_2\}$, $\Theta:=\mathbb{R}$, $p:X\to\mathbb{R}_{++}$, $f:\mathbb{R}\to\mathbb{R}$, $$f(t):=\begin{cases} t & \text{if $t\leqslant 1$,}\\ t+1 & \text{if $1<t\leqslant 2$,}\\ t+2 & \text{if $t>2$,} \end{cases}$$ and $F:X\to \mathbb{R}$ be such that $F(x_1):=1$ and $F(x_2):=3.5$. Then $\mathop{\mathrm{conv}}(f(\Theta))=\mathbb{R}$, and $$p(x_1)(F(x_1) - f(t)) \begin{cases} >0 & \text{if $t<1$,}\\ =0 & \text{if $t=1$,}\\ <0 & \text{if $t>1$,} \end{cases}$$ and $$p(x_2)(F(x_2) - f(t)) \begin{cases} >0 & \text{if $t\leqslant 2$,}\\ <0 & \text{if $t>2$,} \end{cases}$$ yielding that $\vartheta_{1,\psi}(x_j)=j$, $j=1,2$. Hence $\Theta_\psi = (1,2)$. However, $$\big\{ t\in\Theta \,|\,\exists\; x,y\in X: F(x) < f(t) < F(y)\big\} = (1,2],$$ which does not coincide with $\Theta_\psi=(1,2)$. Note that $F(X)\subseteq f(\Theta)$ is also not valid, and $\psi$ does not have the property $[Z_1]$, since $\psi(x_2,\vartheta_{1,\psi}(x_2)) = \psi(x_2,2) = p(x_2)(F(x_2) - f(2)) = \frac{p(x_2)}{2}>0$. $\Box$* Below, we solve the comparison problem for Bajraktarević-type estimators. **Proposition. 19**. *Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions, $p,q:X\to\mathbb{R}_{++}$, $F:X\to f(\Theta)$, $G:X\to g(\Theta)$, and suppose that $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$. Let $\psi:X\times \Theta\to\mathbb{R}$ and $\varphi: X\times \Theta\to\mathbb{R}$ be given by $$\begin{aligned} \label{help_Baj_psi_phi} \begin{split} &\psi(x,t):=p(x)(F(x)-f(t)), \qquad x\in X, \; t\in\Theta,\\ &\varphi(x,t):=q(x)(G(x)-g(t)), \qquad x\in X, \; t\in\Theta. \end{split} \end{aligned}$$ Assume that $(f^{(-1)}\circ F)(x)\leqslant(g^{(-1)}\circ G)(x)$, $x\in X$. Then $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ if and only if the inequality $$\begin{aligned} \label{ineq_Bajrak_type_1} \frac{q(y)}{q(x)}\cdot \frac{G(y) - g(t)}{G(x) - g(t)} \leqslant\frac{p(y)}{p(x)}\cdot \frac{F(y) - f(t)}{F(x) - f(t)} \end{aligned}$$ is valid for all $t\in\Theta$ and for all $x,y\in X$ with $G(x)<g(t)<G(y)$.* By Proposition 2.19 in Barczy and Páles [@BarPal2] and Lemma [Lemma. 17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"}, $\psi$ has the properties $[T]$ and $[Z_1]$, and $\varphi$ has the property $[Z]$. Using Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, we have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ if and only if the inequality $$\begin{aligned} \psi(x,t) \varphi(y,t) \leqslant\psi(y,t) \varphi(x,t) \end{aligned}$$ is valid for all $t\in\Theta$ and for all $x,y\in X$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$. Using $G(X)\subseteq g(\Theta)$, Lemma [Lemma. 17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"} yields that $$\Theta_\varphi=\big\{t\in\Theta\,|\,\exists\, x,y\in X: G(x)<g(t)<G(y)\big\},$$ and that, for all $t\in\Theta$ and $x,y\in X$, the inequality $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$ holds if and only if $G(x) < g(t) < G(y)$.
Consequently, $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ if and only if $$\begin{aligned} \label{help17} p(x)(F(x) - f(t)) q(y)(G(y) - g(t)) \leqslant p(y)(F(y) - f(t)) q(x)(G(x) - g(t)) \end{aligned}$$ is valid for all $t\in\Theta$ and for all $x,y\in X$ with $G(x)<g(t)<G(y)$. Using that $f$ is strictly increasing, $F(X)\subseteq f(\Theta)$, and $g^{(-1)}$ restricted to $g(\Theta)$ is strictly increasing, for all $t\in\Theta$ and for all $x\in X$ with $G(x)<g(t)$, we have that $$F(x) = (f\circ f^{(-1)})(F(x)) = f(f^{(-1)}(F(x))) \leqslant f(g^{(-1)}(G(x))) < f( g^{(-1)}(g(t)) ) = f(t).$$ Consequently, $G(x) -g(t)<0$ and $F(x)-f(t)<0$ in the inequality [\[help17\]](#help17){reference-type="eqref" reference="help17"}, and hence by rearranging it, the assertion follows. $\Box$ In the next result, under some additional regularity assumptions on $f$ and $g$, we derive another set of conditions that is equivalent to [\[ineq_Bajrak_type_1\]](#ineq_Bajrak_type_1){reference-type="eqref" reference="ineq_Bajrak_type_1"}. **Proposition. 20**. *Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions, $p,q:X\to\mathbb{R}_{++}$, $F:X\to f(\Theta)$, $G:X\to g(\Theta)$, and suppose that $F(X)=f(\Theta)$ and $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$. Let $\psi:X\times \Theta\to\mathbb{R}$ and $\varphi: X\times \Theta\to\mathbb{R}$ be given by [\[help_Baj_psi_phi\]](#help_Baj_psi_phi){reference-type="eqref" reference="help_Baj_psi_phi"}. Assume that $(f^{(-1)}\circ F)(x) = (g^{(-1)}\circ G)(x)=:\vartheta_1(x)$, $x\in X$, and that $f$ and $g$ are differentiable at $\vartheta_1(x)$ for all $x\in X$ with non-vanishing (and hence positive) derivatives. Then $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ if and only if the inequality $$\begin{aligned} \label{ineq_Bajrak_type_2} \frac{p(y)}{p(x)} \cdot \frac{F(y) - F(x)}{f'\big(f^{(-1)}(F(x))\big)} \leqslant\frac{q(y)}{q(x)} \cdot \frac{G(y) - G(x)}{g'\big(g^{(-1)}(G(x))\big)} \end{aligned}$$ is valid for all $x,y\in X$.* Since $F(X)=f(\Theta)$, we have that $$\begin{aligned} \label{help_15} \vartheta_1(X)=(f^{(-1)}\circ F)(X)=f^{(-1)}(F(X))=f^{(-1)}(f(\Theta))=\Theta. \end{aligned}$$ Further, for all $x\in X$, we have $$\begin{aligned} &\partial_2\psi(x,\vartheta_1(x)) = -p(x) f'(\vartheta_1(x)) = -p(x) f'\big(f^{(-1)}(F(x))\big),\\ &\partial_2\varphi(x,\vartheta_1(x)) = -q(x) g'(\vartheta_1(x)) = -q(x) g'\big(g^{(-1)}(G(x))\big). \end{aligned}$$ Consequently, using Theorem [Theorem. 11](#Thm_inequality_diff){reference-type="ref" reference="Thm_inequality_diff"}, we have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$ if and only if $$\begin{aligned} \frac{p(y)}{p(x)} \cdot \frac{F(y) - f(\vartheta_1(x))}{f'(\vartheta_1(x))} \leqslant\frac{q(y)}{q(x)} \cdot \frac{G(y) - g(\vartheta_1(x))}{g'(\vartheta_1(x))} \end{aligned}$$ is valid for all $x,y\in X$, which yields the statement, since $$f(\vartheta_1(x)) = f(f^{(-1)}(F(x))) = (f\circ f^{(-1)})(F(x)) = F(x),\qquad x\in X,$$ due to the condition $F(X)\subseteq f(\Theta)$, and similarly, we also have $g(\vartheta_1(x)) = G(x)$, $x\in X$. 
$\Box$ In the next result, among others, we point out that, under the assumptions of Proposition [Proposition. 20](#Pro_Baj_type_comparison_2){reference-type="ref" reference="Pro_Baj_type_comparison_2"}, the function $g$ is continuous. **Lemma. 21**. *Let $X$ be a nonempty set and $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions, $F:X\to f(\Theta)$, $G:X\to g(\Theta)$, and suppose that $F(X)=f(\Theta)$ and $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$. Assume that $(f^{(-1)}\circ F)(x) = (g^{(-1)}\circ G)(x)$, $x\in X$. Then* 1. *$G(X)=g(\Theta)$ and this set is convex.* 2. *$g$ is continuous, and $G(x) = (g\circ f^{(-1)})(F(x))$, $x\in X$.* (i). Since $F(X)=f(\Theta)$, we have that $$(f^{(-1)}\circ F)(X)=f^{(-1)}(F(X))=f^{(-1)}(f(\Theta))=\Theta,$$ which implies that $(g^{(-1)}\circ G)(X) = (f^{(-1)}\circ F)(X) = \Theta$ holds as well. Hence, using that $G(X)\subseteq g(\Theta)$, we have that $g(\Theta) = g((g^{(-1)}\circ G)(X))=(g\circ g^{(-1)})(G(X)) = G(X)$. Consequently, $\mathop{\mathrm{conv}}(g(\Theta)) = \mathop{\mathrm{conv}}(G(X))$, and, since $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$, we obtain that $g(\Theta)\subseteq \mathop{\mathrm{conv}}(g(\Theta)) = \mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$, yielding that $g(\Theta)=\mathop{\mathrm{conv}}(G(X))$ is a convex set. (ii). Since $g$ is strictly increasing and its range $g(\Theta)$ is convex (see part (i)), we can see that $g$ is continuous as well. Finally, note that the equality $(f^{(-1)}\circ F)(x) = (g^{(-1)}\circ G)(x)$, $x\in X$, implies that $G(x) = (g\circ f^{(-1)})(F(x))$, $x\in X$, since $G(X)\subseteq g(\Theta)$. $\Box$ Next, we solve the equality problem for Bajraktarević-type estimators. In the proof, we will use a result on the Schwarzian derivative of a function. Given a nondegenerate open interval $I\subseteq \mathbb{R}$, for a three times differentiable function $h:I \to \mathbb{R}$ with a nonvanishing first derivative, its Schwarzian derivative $S_h: I \to \mathbb{R}$ is defined by $$S_h(x):=\frac{h'''(x)}{h'(x)} - \frac{3}{2} \left(\frac{h''(x)}{h'(x)}\right)^2,\qquad x\in I.$$ The following result is well-known, see, e.g., Grünwald and Páles [@GruPal2 Corollary 3]. **Lemma. 22**. *Let $I\subseteq \mathbb{R}$ be a nondegenerate open interval, and $h:I\to\mathbb{R}$ be a three times differentiable function such that $h'$ does not vanish on $I$. Then $S_h(x)=0$, $x\in I$, holds if and only if there exist four constants $a,b,c,d\in\mathbb{R}$ with $ad\ne bc$ and $0\notin cI +d$ such that $$h(x) = \frac{ax+b}{cx+d}, \qquad x\in I.$$* **Theorem. 23**. *Let $X$ be a nonempty set and $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions such that $f$ is continuous, $p,q:X\to\mathbb{R}_{++}$, $F:X\to f(\Theta)$, $G:X\to g(\Theta)$, and suppose that $F(X)=f(\Theta)$ and $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$. Let $\psi:X\times \Theta\to\mathbb{R}$ and $\varphi: X\times \Theta\to\mathbb{R}$ be given by [\[help_Baj_psi_phi\]](#help_Baj_psi_phi){reference-type="eqref" reference="help_Baj_psi_phi"}. 
If $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$, then there exist four constants $a,b,c,d\in\mathbb{R}$ with $ad\ne bc$ and $0\notin c f(\Theta)+d$ such that $$\begin{aligned} \label{gGq} g(t)=\frac{af(t)+b}{cf(t)+d},\,\,\, t\in\Theta\;\; \mbox{and}\;\; G(x)=\frac{aF(x)+b}{cF(x)+d},\;\;\; q(x)=(cF(x)+d)p(x),\,\,\, x\in X. \end{aligned}$$* Since $f$ is strictly increasing and continuous and $\Theta$ is a non-degenerate open interval, we have $f(\Theta)$ is a non-degenerate open interval. Hence $\mathop{\mathrm{conv}}(f(\Theta)) = f(\Theta) = F(X)$, and then, as a consequence of Lemma [Lemma. 17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"}, we get $\psi\in \Psi[Z](X,\Theta)$. Further, since $\mathop{\mathrm{conv}}(G(X))\subseteq g(\Theta)$, Lemma [Lemma. 17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"} also yields that $\varphi\in \Psi[Z](X,\Theta)$. We first verify that $\Theta_\psi=\Theta$. The inclusion $\Theta_\psi\subseteq\Theta$ is trivial. To prove the reversed one, let $t\in\Theta$ be arbitrary. Now choose $r,s\in\Theta$ such that $r<t<s$. Using that $F(X)=f(\Theta)$, we can find $x,y\in X$ such that $f(r)=F(x)$ and $f(s)=F(y)$. Since $f$ is strictly increasing, it follows that $F(x)=f(r)<f(t)<f(s)=F(y)$, showing that $t$ belongs to the set $$\big\{t\in\Theta\,|\,\exists\, x,y\in X: F(x)<f(t)<F(y)\big\},$$ which, according to [\[help_23A\]](#help_23A){reference-type="eqref" reference="help_23A"}, equals $\Theta_\psi$. Assume that $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$. Then, in the case $n=1$, this equality and [\[help14\]](#help14){reference-type="eqref" reference="help14"} imply that $(f^{(-1)}\circ F)(x) = \vartheta_{1,\psi}(x) =\vartheta_{1,\varphi}(x) = (g^{(-1)}\circ G)(x)$, $x\in X$. Hence, according to Lemma [Lemma. 21](#Rem_psi_varphi){reference-type="ref" reference="Rem_psi_varphi"}, we get that $G(X)=\mathop{\mathrm{conv}}(G(X))=g(\Theta)$ is a convex set and $g$ is continuous. Then, similarly as we derived $\Theta_\psi = \Theta$, we have that $\Theta_\varphi = \Theta$ holds as well. For all $x,y\in X$ with $G(x)<G(y)$, let us introduce the notation $$\Theta_{x,y}:=\{t\in\Theta\,|\,G(x)<g(t)<G(y)\}.$$ Using that $F(X)\subseteq f(\Theta)$, $G(X)\subseteq g(\Theta)$, and that the restrictions of $f^{(-1)}$ and $g^{(-1)}$ to $f(\Theta)$ and $g(\Theta)$ are the strictly increasing inverses of $f$ and $g$ in the standard sense, respectively, for all $x,y\in X$ with $G(x)<G(y)$, we have $$\begin{aligned} \label{help_17} \begin{split} \Theta_{x,y}& =\{t\in\Theta\,|\,g^{(-1)}(G(x))<t<g^{(-1)}(G(y))\}\\ & = \{t\in\Theta\,|\,f^{(-1)}(F(x))<t<f^{(-1)}(F(y))\} \\ & = \{t\in\Theta\,|\,F(x)<f(t)<F(y)\}. \end{split} \end{aligned}$$ The previous argument also shows that for all $x,y\in X$, we get $G(x)<G(y)$ if and only if $F(x)<F(y)$, and the set $\Theta_{x,y}$ is a nonempty open interval for all $x,y\in X$ with $G(x)<G(y)$, since it is the intersection of the open intervals $(g^{(-1)}(G(x)), g^{(-1)}(G(y)))$ and $\Theta$. In view of [\[help_17\]](#help_17){reference-type="eqref" reference="help_17"}, Lemma [Lemma. 
17](#Lem_Baj_psi_prop){reference-type="ref" reference="Lem_Baj_psi_prop"} and the equality $\Theta_\psi=\Theta$, we can see that $$\begin{aligned} \label{help_16.5} \bigcup_{\{ x,y\in X: G(x)<G(y)\} }\Theta_{x,y} = \Theta_\psi=\Theta. \end{aligned}$$ Using that the image under $f$ of a union of subsets is the union of the images under $f$ of the given subsets, [\[help_16.5\]](#help_16.5){reference-type="eqref" reference="help_16.5"} immediately yields that $$\begin{aligned} \label{help_16} \bigcup_{\{ x,y\in X: G(x)<G(y)\} }f(\Theta_{x,y}) = f(\Theta),\end{aligned}$$ which is an open interval, since $\Theta$ is a nonempty open interval and $f$ is strictly increasing and continuous. Using Proposition [Proposition. 19](#Pro_Baj_type_comparison){reference-type="ref" reference="Pro_Baj_type_comparison"}, we have that $$\begin{aligned} \label{help_14} \frac{q(y)}{q(x)}\cdot \frac{G(y) - g(t)}{G(x) - g(t)} = \frac{p(y)}{p(x)}\cdot \frac{F(y) - f(t)}{F(x) - f(t)}\end{aligned}$$ holds for all $t\in\Theta$ and for all $x,y\in X$ with $G(x)<g(t)<G(y)$, or equivalently, [\[help_14\]](#help_14){reference-type="eqref" reference="help_14"} holds for all $x,y\in X$ with $G(x)<G(y)$ and for all $t\in\Theta_{x,y}$. One can readily check that for all $x,y\in X$ with $G(x)<G(y)$ and for all $t\in\Theta_{x,y}$, the equality [\[help_14\]](#help_14){reference-type="eqref" reference="help_14"} is equivalent to any of the following three equalities: $$\begin{aligned} \label{help18} \begin{split} &p(x)(F(x) - f(t)) q(y)(G(y) - g(t)) =p(y)(F(y) - f(t)) q(x)(G(x) - g(t)),\\[1mm] &\Big(p(y)(F(y) - f(t)) q(x)-p(x)(F(x) - f(t)) q(y)\Big)g(t)\\ &\qquad =p(y)(F(y) - f(t)) q(x)G(x)-p(x)(F(x) - f(t)) q(y)G(y),\\[1mm] &(c_{x,y}f(t)+d_{x,y})g(t)=a_{x,y}f(t)+b_{x,y}, \end{split} \end{aligned}$$ where $$\begin{aligned} a_{x,y}&:=p(x)q(y)G(y)-p(y)q(x)G(x), &\qquad b_{x,y}&:= p(y)q(x)F(y)G(x)-p(x)q(y)F(x)G(y),\\ c_{x,y}&:=p(x)q(y)-p(y)q(x), &\qquad d_{x,y}&:= p(y)q(x)F(y)-p(x)q(y)F(x). \end{aligned}$$ Here, due to $G(x)\neq G(y)$, we have that $(a_{x,y},c_{x,y})\neq(0,0)$ and $(b_{x,y},d_{x,y})\neq (0,0)$ hold. Substituting $s:=f(t)$ (i.e., $t=f^{(-1)}(s)$) in the third equality in [\[help18\]](#help18){reference-type="eqref" reference="help18"}, it follows that $$\begin{aligned} \label{help_24} (c_{x,y}s+d_{x,y})(g\circ f^{(-1)})(s)=a_{x,y}s+b_{x,y} \end{aligned}$$ for all $x,y\in X$ with $G(x)<G(y)$ and for all $s\in f(\Theta_{x,y})$. Next, we check that $c_{x,y}s+d_{x,y}\ne 0$ for all $x,y\in X$ with $G(x)<G(y)$ and for all $s\in f(\Theta_{x,y})$. If $c_{x,y}s+d_{x,y}=0$ and $c_{x,y}=0$ were true, then $d_{x,y}=0$, $a_{x,y}\ne 0$, $b_{x,y}\ne 0$ and $a_{x,y}s+b_{x,y}=0$ (following from [\[help_24\]](#help_24){reference-type="eqref" reference="help_24"}). This leads us to a contradiction, since $c_{x,y}=d_{x,y}=0$ implies that $p(x)q(y) = p(y)q(x)$ and $F(x)=F(y)$, which cannot happen due to $F(x)<F(y)$. If $c_{x,y}s+d_{x,y}=0$ and $c_{x,y}\ne 0$ were true, then $s=-\frac{d_{x,y}}{c_{x,y}}$ and $a_{x,y}s+b_{x,y}=0$, yielding that $c_{x,y}b_{x,y}-d_{x,y}a_{x,y}=0$. This leads us to a contradiction, since an easy calculation shows that $$\begin{aligned} \label{help_19} c_{x,y}b_{x,y}-d_{x,y}a_{x,y} = p(x)p(y)q(x)q(y)(F(x)-F(y))(G(y)-G(x)), \end{aligned}$$ which cannot be $0$ for any $x,y\in X$ with $G(x)<G(y)$. Consequently, $$\begin{aligned} \label{help_18} (g\circ f^{(-1)})(s)=\frac{a_{x,y}s+b_{x,y}}{c_{x,y}s+d_{x,y}} \end{aligned}$$ for all $x,y\in X$ with $G(x)<G(y)$ and for all $s\in f(\Theta_{x,y})$. We can apply Lemma [Lemma.
22](#Lem_Sch_deriv){reference-type="ref" reference="Lem_Sch_deriv"} to the function $h:=g\circ f^{(-1)}$ defined on the nondegenerate open interval $I:=f(\Theta)$, since [\[help_18\]](#help_18){reference-type="eqref" reference="help_18"} implies that $h$ is three times differentiable on $I$ such that $h'$ does not vanish on $I$. Indeed, using [\[help_18\]](#help_18){reference-type="eqref" reference="help_18"}, we have that $$h'(s) = \frac{a_{x,y}d_{x,y} - b_{x,y}c_{x,y}}{(c_{x,y}s+d_{x,y})^2} \ne 0, \qquad s\in f(\Theta_{x,y})$$ for all $x,y\in X$ with $G(x)<G(y)$, where we used [\[help_19\]](#help_19){reference-type="eqref" reference="help_19"}. Hence, as a consequence of [\[help_16\]](#help_16){reference-type="eqref" reference="help_16"}, we have $h'(s)\ne 0$, $s\in f(\Theta)$. Taking into account that $f(\Theta_{x,y})$ is a nondegenerate open interval for all $x,y\in X$ with $G(x)<G(y)$, Lemma [Lemma. 22](#Lem_Sch_deriv){reference-type="ref" reference="Lem_Sch_deriv"} and [\[help_18\]](#help_18){reference-type="eqref" reference="help_18"} imply that $S_h(s)=0$, $s\in f(\Theta)$. Consequently, using again Lemma [Lemma. 22](#Lem_Sch_deriv){reference-type="ref" reference="Lem_Sch_deriv"}, there exist four constants $a^*,b^*,c^*,d^*\in\mathbb{R}$ with $a^*d^*\ne b^*c^*$ and $0\notin c^*f(\Theta) + d^*$ such that $$\begin{aligned} \label{help_20} h(s) = (g\circ f^{(-1)})(s) = \frac{a^*s+b^*}{c^*s+d^*}, \qquad s\in f(\Theta). \end{aligned}$$ By substituting $s:=f(t)$, where $t\in\Theta$, it follows that $$\begin{aligned} \label{eq_g_form} g(t) = \frac{a^*f(t)+b^*}{c^*f(t)+d^*}, \qquad t\in \Theta, \end{aligned}$$ as desired. Using [\[help_20\]](#help_20){reference-type="eqref" reference="help_20"}, the assumptions $f^{(-1)}\circ F=g^{(-1)}\circ G$ and $G(X)\subseteq g(\Theta)$, we get that $$\begin{aligned} \label{eq_G_form} G(x)=g( (f^{(-1)}\circ F)(x) ) = (g\circ f^{(-1)})(F(x))=\frac{a^*F(x)+b^*}{c^*F(x)+d^*}, \qquad x\in X, \end{aligned}$$ where $0\notin c^*F(X) +d^*$, since $F(X)=f(\Theta)$ and $0\notin c^* f(\Theta) + d^*$. By [\[help_14\]](#help_14){reference-type="eqref" reference="help_14"} and taking into account the forms of $G$ and $g$, we get $$\begin{aligned} \frac{q(y)}{q(x)} \cdot \frac{ \dfrac{a^*F(y) + b^*}{c^*F(y) + d^*} - \dfrac{a^*f(t) + b^*}{c^*f(t) + d^*} } {\dfrac{a^*F(x) + b^*}{c^*F(x) + d^*} - \dfrac{a^*f(t) + b^*}{c^*f(t) + d^*}} = \frac{p(y)}{p(x)} \cdot \frac{F(y) - f(t)}{F(x) - f(t)} \end{aligned}$$ holds for all $x,y\in X$ with $G(x)<G(y)$ and for all $t\in \Theta_{x,y}$. Using that $a^*d^*-b^*c^*\ne0$, after some algebraic calculations, we obtain that $$\frac{q(y)}{p(y)} = \frac{c^*F(y)+d^*}{c^*F(x)+d^*} \cdot \frac{q(x)}{p(x)}$$ holds for all $x,y\in X$ with $G(x)<G(y)$, or equivalently, $$\frac{\Big(\dfrac{q}{p}\Big)(y)}{c^*F(y)+d^*} = \frac{\Big(\dfrac{q}{p}\Big)(x)}{c^*F(x)+d^*}$$ holds for all $x,y\in X$ with $G(x)<G(y)$. Since $q/p$ is positive, it follows that there exists a constant $k\in\mathbb{R}\setminus\{0\}$ such that $$q(x) = k(c^*F(x) + d^*)p(x), \qquad x\in X.$$ The statement of the proposition now holds with the choices $a:=ka^*$, $b:=kb^*$, $c:=kc^*$ and $d:=kd^*$. $\Box$ We note that in the proof of Theorem [Theorem. 23](#Thm_Baj_type_equality){reference-type="ref" reference="Thm_Baj_type_equality"}, the assumption that $f$ is continuous is used for deriving that $f(\Theta)$ is an open interval, which is essential when we apply Lemma [Lemma. 22](#Lem_Sch_deriv){reference-type="ref" reference="Lem_Sch_deriv"}. 
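The characterization above can also be checked on concrete data. The following minimal Python sketch is only an illustration and not part of the formal argument: the concrete choices of $\Theta$, $f$, $F$, $p$, the constants $a,b,c,d$ and the random sample are our own assumptions, and the estimators are computed as roots of the corresponding $\psi$-equations with a standard bracketing solver. With $g$, $G$ and $q$ built from $f$, $F$ and $p$ through [\[gGq\]](#gGq){reference-type="eqref" reference="gGq"}, the two generalized $\psi$-estimators agree, in accordance with Theorem [Theorem. 25](#Thm_Baj_type_equality_2){reference-type="ref" reference="Thm_Baj_type_equality_2"} below.

``` python
# Illustrative numerical sketch (assumed setup, not the authors' code):
# psi(x,t) = p(x)(F(x) - f(t)),  phi(x,t) = q(x)(G(x) - g(t)),
# with g, G, q obtained from f, F, p via the Moebius-type relations (gGq).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
a, b, c, d = 2.0, 1.0, 1.0, 3.0             # ad - bc = 5 > 0 and c*f(t) + d > 0 on Theta
f = lambda t: t                              # strictly increasing on Theta = (0, 10)
g = lambda t: (a * f(t) + b) / (c * f(t) + d)

x = rng.uniform(0.5, 9.5, size=20)           # sample; here F = f restricted to X and p = 1
F, p = f(x), np.ones_like(x)
G = (a * F + b) / (c * F + d)                # second relation in (gGq)
q = (c * F + d) * p                          # third relation in (gGq)

psi_sum = lambda t: np.sum(p * (F - f(t)))   # t -> sum_i psi(x_i, t)
phi_sum = lambda t: np.sum(q * (G - g(t)))   # t -> sum_i phi(x_i, t)

theta_psi = brentq(psi_sum, 1e-6, 10.0 - 1e-6)   # unique point of sign change
theta_phi = brentq(phi_sum, 1e-6, 10.0 - 1e-6)
assert abs(theta_psi - theta_phi) < 1e-8     # the two estimators coincide
```

Replacing $q$ by a function that is not proportional to $(cF+d)p$ typically destroys this equality for some sample, in line with the necessity part of the theorem.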
Note also that in the proof of Theorem [Theorem. 23](#Thm_Baj_type_equality){reference-type="ref" reference="Thm_Baj_type_equality"} it turned out that $g$ is continuous as well, however, we did not utilize this property in the proof. Nonetheless, the continuity of $g$ also follows from the result itself, since $f$ is continuous and $g(t)=(af(t)+b)/(cf(t)+d)$, $t\in\Theta$. Next, we will provide a set of sufficient conditions on $f$, $g$, $F$ and $G$ in order that $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ hold for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$. In the course of the proof, the following auxiliary lemma plays an important role. **Lemma. 24**. *Let $I$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:I\to\mathbb{R}$ be strictly increasing functions such that there exist four constants $a,b,c,d\in\mathbb{R}$ with $0\notin c f(I)+d$ and $$g(t)=\frac{af(t)+b}{cf(t)+d},\qquad t\in I.$$ Then $ad>bc$ and $cf+d$ is either everywhere positive or everywhere negative on $I$.* The condition $0\notin c f(I)+d$ yields that $(c,d)\neq(0,0)$. If $ad=bc$, then there exists $\lambda\in\mathbb{R}$ such that $(a,b)=\lambda(c,d)$. In this case, we get that $g(t)=\lambda$ for all $t\in I$, which contradicts the strict monotonicity of $g$. Thus $ad\neq bc$ must hold. Next, we check that $cf+d$ is either everywhere positive or everywhere negative on $I$. On the contrary, if $cf+d$ changes sign in $I$, then $c$ cannot be zero and hence $cf+d$ is also strictly monotone. Therefore, using also that $I$ is a nondegenerate open interval, there exists a unique point $\tau\in I$ such that $cf(t)+d>\!(<)\,0$ for all $t<\tau$, $t\in I$, and $cf(t)+d<\!(>)\,0$ for all $t>\tau$, $t\in I$. Let $t<\tau<r<s$ be arbitrarily fixed elements of $I$. Then $(cf(t)+d)(cf(r)+d)<0$ and $(cf(r)+d)(cf(s)+d)>0$. Consequently, using the strict increasingness of $g$, the inequalities $$\frac{af(t)+b}{cf(t)+d}=g(t) <g(r)=\frac{af(r)+b}{cf(r)+d} \qquad\mbox{and}\qquad \frac{af(r)+b}{cf(r)+d}=g(r) <g(s)=\frac{af(s)+b}{cf(s)+d}$$ imply that $$(af(t)+b)(cf(r)+d)>(af(r)+b)(cf(t)+d),\qquad (af(r)+b)(cf(s)+d)<(af(s)+b)(cf(r)+d),$$ or equivalently, $$\begin{aligned} \label{help_25} 0>(ad-bc)(f(r)-f(t)) \qquad\mbox{and}\qquad 0<(ad-bc)(f(s)-f(r)). \end{aligned}$$ Since $f$ is also strictly increasing, we have that $f(t)<f(r)<f(s)$. Therefore, $(ad-bc)(f(r)-f(t))$ and $(ad-bc)(f(s)-f(r))$ should have the same signs. This together with the inequalities [\[help_25\]](#help_25){reference-type="eqref" reference="help_25"} lead us to a contradiction. Finally, to show that $ad>bc$, let $r,s\in I$ with $r<s$ be fixed. Then, using that $cf+d$ does not change sign in $I$, we have $(cf(r)+d)(cf(s)+d)>0$, and therefore the inequality $g(r)<g(s)$, in the same way as above, implies $0<(ad-bc)(f(s)-f(r))$. This, in view of the strict increasingness of $f$ yields that $ad-bc>0$. $\Box$ **Theorem. 25**. *Let $X$ be a nonempty set and $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions, $p,q:X\to\mathbb{R}_{++}$, $F:X\to \mathop{\mathrm{conv}}(f(\Theta))$, and $G:X\to\mathop{\mathrm{conv}}(g(\Theta))$. Let $\psi:X\times \Theta\to\mathbb{R}$ and $\varphi: X\times \Theta\to\mathbb{R}$ be given by [\[help_Baj_psi_phi\]](#help_Baj_psi_phi){reference-type="eqref" reference="help_Baj_psi_phi"}. 
If there exist four constants $a,b,c,d\in\mathbb{R}$ with $0\notin c f(\Theta)+d$ such that [\[gGq\]](#gGq){reference-type="eqref" reference="gGq"} holds, then $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$.* Since $p$ and $q$ are strictly positive functions, as a consequence of the equality $q=(cF+d)p$, we get that $cF+d$ is positive on $X$. Further, for all $x\in X$ and $t\in \Theta$, we obtain $$\begin{aligned} \label{help_Thm_Baj_type_equality_2} \begin{split} \varphi(x,t)&=q(x)(G(x)-g(t)) =(cF(x)+d)p(x)\bigg(\frac{aF(x)+b}{cF(x)+d}-\frac{af(t)+b}{cf(t)+d}\bigg)\\ &=p(x)\frac{(ad-bc)(F(x)-f(t))}{cf(t)+d} =\frac{ad-bc}{cf(t)+d}\psi(x,t). \end{split} \end{aligned}$$ Using Lemma [Lemma. 24](#Lem_aux){reference-type="ref" reference="Lem_aux"}, we have that $ad>bc$, and $cf+d$ is either everywhere positive or everywhere negative on $\Theta$. We show that the latter property cannot hold. To the contrary, assume that $cf+d$ is everywhere negative on $\Theta$, i.e., $cf(\Theta)+d\subseteq\mathbb{R}_{--}$. Then $c\cdot\mathop{\mathrm{conv}}(f(\Theta))+d=\mathop{\mathrm{conv}}(cf(\Theta)+d)\subseteq\mathbb{R}_{--}$. Using that $F(X)\subseteq \mathop{\mathrm{conv}}(f(\Theta))$, this implies that $$cF(X)+d\subseteq c\cdot\mathop{\mathrm{conv}}(f(\Theta))+d\subseteq\mathbb{R}_{--},$$ which contradicts the positivity of $cF+d$ on $X$. Consequently, $cf+d$ must be everywhere positive on $\Theta$. To prove the equality $\vartheta_{n,\psi} = \vartheta_{n,\varphi}$ on $X^n$, let $n\in\mathbb{N}$ and ${\boldsymbol{x}}=(x_1,\dots,x_n)\in X^n$ be arbitrary. Then, by [\[help_Thm_Baj_type_equality_2\]](#help_Thm_Baj_type_equality_2){reference-type="eqref" reference="help_Thm_Baj_type_equality_2"}, we have $$\begin{aligned} \sum_{i=1}^n\varphi(x_i,t)=\frac{ad-bc}{cf(t)+d}\sum_{i=1}^n\psi(x_i,t),\qquad t\in\Theta. \end{aligned}$$ Since $(ad-bc)/(cf+d)$ is positive everywhere on $\Theta$, this implies that the two sums have the same sign at every $t\in\Theta$. Hence the unique points of sign change of the functions $$\Theta\ni t\mapsto \sum_{i=1}^n\varphi(x_i,t) \qquad \text{and} \qquad \Theta\ni t\mapsto \sum_{i=1}^n\psi(x_i,t)$$ are equal to each other, which implies the equality $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$, as desired. $\Box$ Next, we give an equivalent form of the first equality in [\[gGq\]](#gGq){reference-type="eqref" reference="gGq"}. Roughly speaking, we derive a necessary and sufficient condition in order that two strictly increasing functions defined on a nondegenerate open interval be the Möbius transforms of each other. **Proposition. 26**. *Let $I$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:I\to\mathbb{R}$ be strictly increasing functions. The following two statements are equivalent:* - *There exist four constants $a,b,c,d\in\mathbb{R}$ with $0\notin c f(I)+d$ and $$\begin{aligned} \label{help_27} g(t)=\frac{af(t)+b}{cf(t)+d},\qquad t\in I. \end{aligned}$$* - *For all $t_1,t_2,t_3,t_4\in I$, we have $$\begin{aligned} \label{help_28} \begin{vmatrix} 1 & 1 & 1 & 1 \\ f(t_1) & f(t_2) & f(t_3) & f(t_4) \\ g(t_1) & g(t_2) & g(t_3) & g(t_4) \\ f(t_1)g(t_1) & f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \end{vmatrix}=0. \end{aligned}$$* (i)$\Longrightarrow$(ii). Let us suppose that there exist four constants $a,b,c,d\in\mathbb{R}$ such that $0\notin c f(I)+d$ and [\[help_27\]](#help_27){reference-type="eqref" reference="help_27"} hold. By Lemma [Lemma. 24](#Lem_aux){reference-type="ref" reference="Lem_aux"}, we have $ad>bc$.
Further, $(cf(t)+d)g(t) = af(t)+b$, $t\in I$, yielding that $$1\cdot b + f(t)\cdot a + g(t)\cdot (-d) + f(t)g(t)\cdot (-c) =0,\qquad t\in I.$$ In particular, for all $t_1,t_2,t_3,t_4\in I$, we have that $$\begin{bmatrix} 1 & 1 & 1 & 1 \\ f(t_1) & f(t_2) & f(t_3) & f(t_4) \\ g(t_1) & g(t_2) & g(t_3) & g(t_4) \\ f(t_1)g(t_1) & f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \\ \end{bmatrix}^\top \cdot\begin{bmatrix} b \\ a \\ -d \\ -c \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}.$$ As a consequence of the inequality $ad>bc$, we have that $(b,a,-d,-c)\ne(0,0,0,0)$, which shows that $(b,a,-d,-c)$ is a nontrivial solution to the above homogeneous system of linear equations. Hence we obtain that [\[help_28\]](#help_28){reference-type="eqref" reference="help_28"} must hold for all $t_1,t_2,t_3,t_4\in I$. (ii)$\Longrightarrow$(i). Let $t_3<t_4$ be fixed elements of $I$. By the strict monotonicity of $f$, the vectors $(1,f(t_3))$ and $(1,f(t_4))$ are linearly independent. Assume first that, for all $t\in I$, $$\begin{aligned} \label{help_28.5} \begin{vmatrix} 1 & 1 & 1 \\ f(t) & f(t_3) & f(t_4) \\ g(t) & g(t_3) & g(t_4) \end{vmatrix}=0 \end{aligned}$$ holds. Expanding the determinant along its first column, for all $t\in I$, we get $$g(t)=\frac{af(t)+b}{d},$$ where $$a:=g(t_4)-g(t_3),\qquad b:=f(t_4)g(t_3)-f(t_3)g(t_4),\qquad d:=f(t_4)-f(t_3)\neq 0.$$ Therefore, [\[help_27\]](#help_27){reference-type="eqref" reference="help_27"} holds with $c=0$ and we also have that $0\not\in cf(I)+d=\{d\}$. Now we consider the case when [\[help_28.5\]](#help_28.5){reference-type="eqref" reference="help_28.5"} is not valid for all $t\in I$, that is, there exists $t_2\in I$ such that [\[help_28.5\]](#help_28.5){reference-type="eqref" reference="help_28.5"} does not hold for $t=t_2$. Then, by [\[help_28\]](#help_28){reference-type="eqref" reference="help_28"}, for all $t\in I$, $$\begin{vmatrix} 1 & 1 & 1 & 1 \\ f(t) & f(t_2) & f(t_3) & f(t_4) \\ g(t) & g(t_2) & g(t_3) & g(t_4) \\ f(t)g(t) & f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \end{vmatrix}=0.$$ Expanding the determinant along its first column, for all $t\in I$, we get $$\begin{aligned} \label{abcd} (cf(t)+d)g(t)=af(t)+b, \end{aligned}$$ where $$\begin{aligned} a&:=-\begin{vmatrix} 1 & 1 & 1 \\ g(t_2) & g(t_3) & g(t_4) \\ f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \end{vmatrix}, &\qquad b&:=\begin{vmatrix} f(t_2) & f(t_3) & f(t_4) \\ g(t_2) & g(t_3) & g(t_4) \\ f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \end{vmatrix},\\ c&:=\begin{vmatrix} 1 & 1 & 1 \\ f(t_2) & f(t_3) & f(t_4) \\ g(t_2) & g(t_3) & g(t_4) \end{vmatrix}, &\qquad d&:=-\begin{vmatrix} 1 & 1 & 1 \\ f(t_2) & f(t_3) & f(t_4) \\ f(t_2)g(t_2) & f(t_3)g(t_3) & f(t_4)g(t_4) \end{vmatrix}. \end{aligned}$$ Note that $c$ equals the determinant appearing in [\[help_28.5\]](#help_28.5){reference-type="eqref" reference="help_28.5"} at $t=t_2$, and hence $c\neq0$ by the choice of $t_2$. Since $c\neq0$, we have that $cf+d$ is strictly monotone. We now prove that $cf+d$ does not vanish on $I$. Assume, on the contrary, that for some $t_1\in I$, we have that $cf(t_1)+d=0$. Then, $cf(t)+d\neq0$ for $t\in I\setminus\{t_1\}$, and, by ([\[abcd\]](#abcd){reference-type="ref" reference="abcd"}), we get that $af(t_1)+b=0$. This implies that $ad=bc$. Therefore, applying [\[abcd\]](#abcd){reference-type="eqref" reference="abcd"} for $t\in I\setminus\{t_1\}$ and using that $ad=bc$ together with $c\neq0$ yields $(a,b)=\frac{a}{c}(c,d)$, we obtain $$g(t)=\frac{af(t)+b}{cf(t)+d}=\frac{a}{c},\qquad t\in I\setminus\{t_1\},$$ which contradicts the strict monotonicity of $g$. Consequently, $cf+d$ does not vanish on $I$, that is, $0\notin cf(I)+d$, and hence [\[help_27\]](#help_27){reference-type="eqref" reference="help_27"} follows from [\[abcd\]](#abcd){reference-type="eqref" reference="abcd"}, as desired. $\Box$ As a consequence of Theorem [Theorem. 23](#Thm_Baj_type_equality){reference-type="ref" reference="Thm_Baj_type_equality"}, we can characterize the equality of quasiarithmetic-type $\psi$-estimators. **Corollary. 27**. *Let $X$ be a nonempty set, $\Theta$ be a nondegenerate open interval of $\mathbb{R}$. Let $f,g:\Theta\to\mathbb{R}$ be strictly increasing functions, $F:X\to \mathop{\mathrm{conv}}(f(\Theta))$, and $G:X\to \mathop{\mathrm{conv}}(g(\Theta))$. Let $\psi:X\times \Theta\to\mathbb{R}$ and $\varphi: X\times \Theta\to\mathbb{R}$ be given by $$\psi(x,t):=F(x)-f(t),\qquad \varphi(x,t):=G(x)-g(t), \qquad x\in X,\,t\in\Theta.$$ The following two assertions hold:* - *If there exist two constants $a,b\in\mathbb{R}$ with $a\ne 0$ such that $$\begin{aligned} \label{help_26} g(t)=af(t)+b,\quad t\in\Theta,\qquad \mbox{and}\qquad G(x)=aF(x)+b,\quad x\in X, \end{aligned}$$ then $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$.* - *In addition, suppose that $f$ is continuous, $F(X)=f(\Theta)$ and $\mathop{\mathrm{conv}}(G(X)) \subseteq g(\Theta)$.
If $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$, then there exist two constants $a,b\in\mathbb{R}$ with $a\ne 0$ such that [\[help_26\]](#help_26){reference-type="eqref" reference="help_26"} holds.* (i). Let us assume that there exist two constants $a,b\in\mathbb{R}$ with $a\ne 0$ such that [\[help_26\]](#help_26){reference-type="eqref" reference="help_26"} holds. By choosing $c:=0$, $d:=1$ and $p(x):=q(x):=1$, $x\in X$, we have $cf(\Theta)+d = \{1\}$, and hence $0\notin cf(\Theta)+d$. Further, [\[gGq\]](#gGq){reference-type="eqref" reference="gGq"} is satisfied as well. Consequently, Theorem [Theorem. 25](#Thm_Baj_type_equality_2){reference-type="ref" reference="Thm_Baj_type_equality_2"} yields that $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in X^n$. (ii). One can apply Theorem [Theorem. 23](#Thm_Baj_type_equality){reference-type="ref" reference="Thm_Baj_type_equality"} with the given functions $f,g,F$ and $G$ and by choosing $p(x):=q(x):=1$, $x\in X$. Then we obtain that there exist four constants $a,b,c,d\in\mathbb{R}$ with $ad\ne bc$ and $0\notin c f(\Theta)+d$ such that [\[gGq\]](#gGq){reference-type="eqref" reference="gGq"} holds, i.e., $$\begin{aligned} g(t)=\frac{af(t)+b}{cf(t)+d},\,\,\, t\in\Theta,\quad \mbox{and}\quad G(x)=\frac{aF(x)+b}{cF(x)+d},\quad q(x)=(cF(x)+d)p(x),\,\,\, x\in X. \end{aligned}$$ Since $p=q=1$, we get $cF(x)+d=1$, $x\in X$, and hence $G(x) = aF(x)+b$, $x\in X$. Consequently, in order to prove the statement, it is enough to verify that $cf(t)+d=1$, $t\in\Theta$. We check that $c=0$ and hence $d=1$. Since $\Theta$ is a nondegenerate open interval of $\mathbb{R}$ and $f$ is strictly increasing and continuous, we have that $f(\Theta)$ is a nondegenerate interval of $\mathbb{R}$. Hence, using that $F(X)=f(\Theta)$, the range $F(X)$ of $F$ contains at least two distinct elements, and consequently, there exist $x_1,x_2\in X$ such that $F(x_1)\ne F(x_2)$. Since $cF(x_1)+d=1$ and $cF(x_2)+d=1$, we have $c(F(x_1) - F(x_2))=0$, yielding that $c=0$, as desired. $\Box$

# Further examples for comparison of statistical estimators {#Sec_stat_examples}

In this section, we give further applications of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} for solving the comparison problem for some statistical estimators that are special cases of generalized $\psi$-estimators. We consider empirical expectiles, Mathieu-type estimators and solutions of likelihood equations in case of normal, a Beta-type, Gamma, Lomax (Pareto type II), lognormal and Laplace distributions. **Example. 28** (Empirical expectiles). *Let $\alpha, \beta\in(0,1)$, $X:=\Theta:=\mathbb{R}$, and $\psi, \varphi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be given by $$\psi(x,t):=\begin{cases} \alpha(x-t) & \text{if $x>t$,}\\ 0 & \text{if $x=t$,}\\ (1-\alpha)(x-t) & \text{if $x<t$,} \end{cases} \qquad\text{and}\qquad \varphi(x,t):=\begin{cases} \beta(x-t) & \text{if $x>t$,}\\ 0 & \text{if $x=t$,}\\ (1-\beta)(x-t) & \text{if $x<t$.} \end{cases}$$ By Example 4.3 in Barczy and Páles [@BarPal2], $\psi$ and $\varphi$ are $Z$-functions, $\vartheta_{1,\psi}(x) = \vartheta_{1,\varphi}(x) = x$, $x\in\mathbb{R}$, and hence we readily get that $\Theta_\psi = \Theta_\varphi = \Theta=\mathbb{R}$. Using Theorem [Theorem.
8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, we have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$ if and only if $$\begin{aligned} \psi(x,t) \varphi(y,t) \leqslant\psi(y,t) \varphi(x,t) \end{aligned}$$ is valid for all $t\in\mathbb{R}$ and for all $x,y\in\mathbb{R}$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, or equivalently, $$(1-\alpha)(x-t)\beta(y-t) \leqslant\alpha(y-t)(1-\beta)(x-t)$$ is valid for $x,y,t\in\mathbb{R}$ with $x<t<y$. Hence, $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for all $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$ if and only if $\alpha \leqslant\beta$. $\Box$* **Example. 29** (Mathieu-type estimators). *Let $X:=\mathbb{R}$, $\Theta:=\mathbb{R}$ and $\psi, \varphi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be given by $$\begin{aligned} \label{help_Mathieu} \psi(x,t):=\operatorname{sign}(x-t)f(\vert x-t\vert) \quad \text{and}\quad \varphi(x,t):=\operatorname{sign}(x-t)g(\vert x-t\vert),\qquad x,t\in\mathbb{R}, \end{aligned}$$ where $f,g:\mathbb{R}_+\to\mathbb{R}$ are continuous and strictly increasing functions with $f(0)=g(0)=0$. This particular form of $\psi$ has been recently investigated by Mathieu [@Mat]. By Proposition 4.4 in Barczy and Páles [@BarPal2], $\psi$ and $\varphi$ have the property $[Z]$ and $\vartheta_{1,\psi}(x) = \vartheta_{1,\varphi}(x) = x$, $x\in\mathbb{R}$, and hence we readily get that $\Theta_\psi = \Theta_\varphi = \Theta=\mathbb{R}$. Using Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"}, we have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$ if and only if $$\begin{aligned} \psi(x,t) \varphi(y,t) \leqslant\psi(y,t) \varphi(x,t) \end{aligned}$$ is valid for all $t\in\mathbb{R}$ and for all $x,y\in\mathbb{R}$ with $\vartheta_{1,\varphi}(x)<t<\vartheta_{1,\varphi}(y)$, or equivalently, $$\operatorname{sign}(x-t)f(\vert x-t\vert) \operatorname{sign}(y-t)g(\vert y-t\vert) \leqslant\operatorname{sign}(y-t)f(\vert y-t\vert) \operatorname{sign}(x-t)g(\vert x-t\vert)$$ is valid for all $x,y,t\in\mathbb{R}$ with $x<t<y$. It is equivalent to $$f(y-t) g(t-x) \leqslant f(t-x) g(y-t) \qquad \text{ for all $x,y,t\in\mathbb{R}$ with $x<t<y$.}$$ Using that $f$ and $g$ are strictly increasing with $f(0)=g(0)=0$, this is equivalent to $$\begin{aligned} \label{help_21} \frac{g(t-x)}{f(t-x)}\leqslant\frac{g(y-t)}{f(y-t)} \qquad \text{ for all $x,y,t\in\mathbb{R}$ with $x<t<y$.} \end{aligned}$$ Given $u,v\in\mathbb{R}_{++}$, by choosing $x:=t-u$ and $y:=t+v$ with $t\in\mathbb{R}$, we have that [\[help_21\]](#help_21){reference-type="eqref" reference="help_21"} is equivalent to $$\frac{g(u)}{f(u)} \leqslant\frac{g(v)}{f(v)}\qquad \text{for all $u,v\in\mathbb{R}_{++}$,}$$ which is equivalent to the existence of a constant $C\in\mathbb{R}_{++}$ such that $\frac{g(u)}{f(u)}=C$, $u\in\mathbb{R}_{++}$. Consequently, using $f(0)=g(0)=0$, we have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$ if and only if there exists a constant $C\in\mathbb{R}_{++}$ such that $g(t)=C f(t)$, $t\in\mathbb{R}_+$. 
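The comparison just derived can also be checked numerically. The following minimal Python sketch is only an illustration (the choices of $f$, $g$, the random sample and the root-finding routine are our own assumptions, not part of the argument): with $g=Cf$ the two estimators coincide, while with a $g$ that is not a constant multiple of $f$ they generally differ.

``` python
# Illustrative check of the Mathieu-type comparison above (assumed setup):
# psi(x,t) = sign(x - t) f(|x - t|),  phi(x,t) = sign(x - t) g(|x - t|).
import numpy as np
from scipy.optimize import brentq

def estimator(h, x):
    # unique root of t -> sum_i sign(x_i - t) h(|x_i - t|) on (min x, max x)
    s = lambda t: np.sum(np.sign(x - t) * h(np.abs(x - t)))
    return brentq(s, x.min(), x.max())

rng = np.random.default_rng(1)
x = rng.normal(size=15)

f = lambda u: u                    # strictly increasing with f(0) = 0
g = lambda u: 3.0 * u              # g = C f with C = 3: estimators coincide
print(estimator(f, x) - estimator(g, x))   # ~ 0 (both equal the sample mean)

g2 = lambda u: u ** 2              # not a constant multiple of f
print(estimator(f, x) - estimator(g2, x))  # generally nonzero
```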
Based on what we proved, we can formulate the following result:* ***Proposition.*** **Let $f,g:\mathbb{R}_+\to\mathbb{R}$ be continuous and strictly increasing functions with $f(0)=g(0)=0$, and let $\psi, \varphi:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be given by [\[help_Mathieu\]](#help_Mathieu){reference-type="eqref" reference="help_Mathieu"}. Then the following assertions are equivalent:** - *$\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$,* - *$\vartheta_{n,\psi}({\boldsymbol{x}})=\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$,* - *there exists a constant $C\in\mathbb{R}_{++}$ such that $g(t)=C f(t)$, $t\in\mathbb{R}_+$.* *$\Box$* In what follows, given a random variable $\xi$, by a sample of size $n$ (where $n\in\mathbb{N}$), we mean independent and identically distributed random variables  $\xi_1,\ldots,\xi_n$  with common distribution as that of $\xi$. **Example. 30** (Likelihood equation in case of normal distribution). *Let $\xi$ be a normally distributed random variable with mean $m\in\mathbb{R}$ and with variance $\sigma^2$, where $\sigma>0$. Let $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in\mathbb{R}$ be a realization of a sample of size $n$ for $\xi$. Supposing that $m$ is known, there exists a unique MLE of $\sigma^2$ based on $x_1,\ldots,x_n\in\mathbb{R}$, and it takes the form  $\widehat{\sigma^2_n}:=\frac{1}{n}\sum_{i=1}^n (x_i-m)^2$. In this case, let $\Theta:=(0,\infty)$, $X:=\mathbb{R}$, and $\psi:\mathbb{R}\times(0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\sigma^2) := \frac{1}{2(\sigma^2)^2}\left( (x-m)^2 - \sigma^2\right) , \qquad x\in\mathbb{R},\; \sigma^2>0. \end{aligned}$$ Hence $\psi$ is a $T_1$-function with $\vartheta_{1,\psi}(x):=(x-m)^2$, $x\in\mathbb{R}$, and $\psi(x,\vartheta_{1,\psi}(x))=0$, $x\in\mathbb{R}$. By Example 4.5 in Barczy and Páles [@BarPal2], we have that for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in \mathbb{R}$, the (likelihood) equation $\sum_{i=1}^n \psi(x_i,t)=0$, $t>0$, has a unique solution, which is equal to $\vartheta_{n,\psi}(x_1,\dots,x_n)=\frac{1}{n}\sum_{i=1}^n (x_i-m)^2=\widehat{\sigma^2_n}$.* *Further, let $m^*\in\mathbb{R}$, and $\varphi:\mathbb{R}\times(0,\infty)\to\mathbb{R}$, $$\begin{aligned} \varphi(x,\sigma^2) := \frac{1}{2(\sigma^2)^2}\left( (x-m^*)^2 - \sigma^2\right) , \qquad x\in\mathbb{R},\; \sigma^2>0. \end{aligned}$$ Then $\varphi$ is a $T_1$-function with $\vartheta_{1,\varphi}(x):=(x-m^*)^2$, $x\in\mathbb{R}$, and $\varphi(x,\vartheta_{1,\varphi}(x))=0$, $x\in\mathbb{R}$.* *Then $\vartheta_{1,\psi}(x_1)\leqslant\vartheta_{1,\varphi}(x_1)$ holds for all $x_1\in \mathbb{R}$ if and only if $(x_1-m)^2 \leqslant(x_1-m^*)^2$ holds for all $x_1\in\mathbb{R}$, which is equivalent to $m=m^*$ (indeed, replacing $x_1:=m^*$, the right hand side is zero implying that $(m^*-m)^2=0$ holds as well). Consequently, we readily have that $\vartheta_{n,\psi}({\boldsymbol{x}})\leqslant\vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$ if and only if $m=m^*$, and in this case $\vartheta_{n,\psi}({\boldsymbol{x}}) = \vartheta_{n,\varphi}({\boldsymbol{x}})$ holds for each $n\in\mathbb{N}$ and ${\boldsymbol{x}}\in \mathbb{R}^n$. $\Box$* **Example. 31** (Likelihood equation in case of a Beta-type distribution). 
*Let $\alpha,\beta\in\mathbb{R}_{++}$ and let $\xi$ be an absolutely continuous random variable with a density function $$f_\xi(x) := \begin{cases} \alpha \beta x^{\beta-1} (1-x^\beta)^{\alpha-1} & \text{if $x\in(0,1)$,}\\ 0 & \text{if $x\notin(0,1)$.} \end{cases}$$* *Supposing that $\beta\in\mathbb{R}_{++}$ is known, one can check that, given $n\in\mathbb{N}$ and a realization $x_1,\ldots,x_n\in(0,1)$ of a sample of size $n$ for $\xi$, there exists a unique MLE of $\alpha$ and it takes the form $$\widehat{\alpha}_n := -\frac{n}{\sum_{i=1}^n \ln(1-x_i^\beta)}.$$ As an application of our results in Barczy and Páles [@BarPal2], we also establish the existence and uniqueness of a solution of the corresponding likelihood equation for $\alpha$ using Theorem 2.10 and Proposition 2.12 in Barczy and Páles [@BarPal2]. In the considered case, using the setup given before Example 4.6 in Barczy and Páles [@BarPal2], we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,\alpha):=\begin{cases} \alpha \beta x^{\beta-1} (1-x^\beta)^{\alpha-1} & \text{if $x\in(0,1)$, \ $\alpha>0$,}\\ 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,1)$.  Then  $\psi:(0,1)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\alpha) = \frac{\partial_2 f(x,\alpha)}{f(x,\alpha)} = \frac{\beta x^{\beta-1} \big( (1-x^\beta)^{\alpha-1} + \alpha (1-x^\beta)^{\alpha-1}\ln(1-x^\beta) \big)} {\alpha \beta x^{\beta-1} (1-x^\beta)^{\alpha-1}} = \frac{1}{\alpha} + \ln(1-x^\beta) \end{aligned}$$ for $x\in(0,1)$ and $\alpha>0$. With the terminology introduced in Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"} (see [\[help16\]](#help16){reference-type="eqref" reference="help16"}), $\psi$ is a quasiarithmetic type function. We have $\psi$ is a $Z_1$-function with $\vartheta_{1,\psi}(x) = - \frac{1}{\ln(1-x^\beta)}$, $x\in(0,1)$, and $\psi$ is strictly decreasing and continuous in its second variable. Further, using Proposition 2.12 in Barczy and Páles [@BarPal2] (with $X:={\mathcal X}_f = (0,1)$), we can conclude that $\psi$ is a $Z$-function, and, for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,1)$, the (likelihood) equation $\sum_{i=1}^n \psi(x_i,\alpha)=0$, $\alpha>0$, has a unique solution, which takes the form $\vartheta_{n,\psi}(x_1,\ldots,x_n) = -\frac{n}{\sum_{i=1}^n \ln\left(1-x_i^\beta\right)} = \widehat{\alpha}_n$, as desired.* *Moreover, let $\beta^*\in\mathbb{R}_{++}$ be also given and let $\varphi:(0,1)\times (0,\infty)\to\mathbb{R}$ be defined by $$\begin{aligned} \varphi(x,\alpha) := \frac{1}{\alpha} + \ln(1-x^{\beta^*}), \qquad x\in(0,1),\quad \alpha>0. \end{aligned}$$ Then $\varphi$ is a $Z$-function with $\vartheta_{n,\varphi}(x_1,\ldots,x_n) = -\frac{n}{\sum_{i=1}^n \ln\left(1-x_i^{\beta^*}\right)}$, $x_1,\ldots,x_n\in(0,1)$. By a direct calculation, one check that $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ holds for all $x\in(0,1)$ if and only if $\beta\leqslant\beta^*$; and, more generally, $\vartheta_{n,\psi}(x_1,\ldots,x_n) \leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,1)$ if and only if $\beta\leqslant\beta^*$. As a consequence, the equivalent assertions of (i)-(iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} hold if and only if $\beta\leqslant\beta^*$. 
In particular, note that if $\beta\leqslant\beta^*$, then $$\psi(x,\alpha) = \frac{1}{\alpha} + \ln(1-x^\beta) \leqslant\frac{1}{\alpha} + \ln(1-x^{\beta^*}) = \varphi(x,\alpha), \qquad x\in(0,1),\quad \alpha>0,$$ i.e., [\[psi_est_inequality_5\]](#psi_est_inequality_5){reference-type="eqref" reference="psi_est_inequality_5"} holds with $p(\alpha):=1$, $\alpha>0$.* *Now, suppose that $\alpha\in\mathbb{R}_{++}$ is known. Similarly as above, we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,\beta):=\begin{cases} \alpha \beta x^{\beta-1} (1-x^\beta)^{\alpha-1} & \text{if $x\in(0,1)$, \ $\beta>0$,}\\ 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,1)$.  Then  $\psi:(0,1)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\beta) &= \frac{\partial_2 f(x,\beta)}{f(x,\beta)} = \partial_2 (\ln f)(x,\beta) = \partial_2\big(\ln(\alpha) + \ln(\beta) + (\beta-1)\ln(x) + (\alpha-1)\ln(1-x^\beta)\big)\\ &= \frac{1}{\beta} + \ln(x) + (1-\alpha)\frac{x^\beta \ln(x)}{1-x^\beta}, \qquad x\in(0,1),\quad \beta>0. \end{aligned}$$ Note that $\psi$ can be rewritten in the form $$\begin{aligned} \label{help_29} \psi(x,\beta) = \frac{1}{\beta}\left(1+ \frac{1-\alpha x^\beta}{1-x^\beta} \ln(x^\beta)\right), \qquad x\in(0,1),\quad \beta>0. \end{aligned}$$ Define  $\widetilde{\psi}:(0,1)\times (0,\infty)\to\mathbb{R}$, $\widetilde{\psi}(x,\beta):=\beta\psi(x,\beta)$, $x\in(0,1)$, $\beta>0$. Next, we check that $\widetilde{\psi}$ is strictly decreasing in its second variable. Since for all $x\in(0,1)$, the range of the strictly decreasing function $(0,\infty)\ni\beta\mapsto x^\beta$ is $(0,1)$, in order to prove that $\widetilde{\psi}$ is strictly decreasing in its second variable, it is enough to verify that the function $$\begin{aligned} \label{help_31} (0,1)\ni u\mapsto \frac{1-\alpha u}{1-u}\ln(u) \end{aligned}$$ is strictly increasing. For this, it is enough to show that its derivative is positive everywhere, i.e., $$-\alpha\frac{\ln(u)}{1-u} + \frac{1-\alpha u}{(1-u)^2}\ln(u) + \frac{1-\alpha u}{1-u}\cdot\frac{1}{u}>0, \qquad u\in(0,1),$$ which is equivalent to $$h(u):=(1-\alpha)\ln(u) + \frac{1}{u} - \alpha -1 +\alpha u>0,\qquad u\in(0,1).$$ Since $h$ is continuous, $\lim_{u\downarrow 0} h(u)=\infty$ and $\lim_{u\uparrow 1}h(u)=0$, in order to show that $h(u)>0$, $u\in(0,1)$, it is enough to check that $h'(u)<0$, $u\in(0,1)$. This readily follows, since $$h'(u) = \frac{1-\alpha}{u} - \frac{1}{u^2} + \alpha <0, \qquad u\in(0,1),$$ holds if and only if $(u-1)(\alpha u + 1)<0$, $u\in(0,1)$, which is trivially satisfied.* *Now we verify that $\widetilde{\psi}$ is a $Z$-function. First, we check that $\widetilde{\psi}$ is a $T_1$-function. Using again that, for all $x\in(0,1)$, the range of the strictly decreasing function $(0,\infty)\ni\beta\mapsto x^\beta$ is $(0,1)$, taking into account [\[help_29\]](#help_29){reference-type="eqref" reference="help_29"}, it is enough to verify that there exists a unique solution of the equation $$g(u):=1 + \frac{1-\alpha u}{1-u}\ln(u) = 0, \qquad u\in(0,1).$$ Since $g$ is continuous and strictly increasing (proved earlier, see [\[help_31\]](#help_31){reference-type="eqref" reference="help_31"}), $\lim_{u\downarrow 0} g(u) = -\infty$ and $\lim_{u\uparrow 1}g(u)=1+(1-\alpha)\lim_{u\uparrow 1}\frac{1/u}{-1} = \alpha >0$, the Bolzano theorem yields the existence of a unique solution in question. Hence $\widetilde{\psi}$ is a $T_1$-function. 
This, together with the fact that $\widetilde{\psi}$ is strictly decreasing in its second variable, implies that $\widetilde{\psi}$ is a $T$-function (see Barczy and Páles [@BarPal2 Proposition 2.12]). Since $\widetilde{\psi}$ is continuous in its second variable as well, we have that it is a Z-function. It immediately follows that $\psi$ is also a Z-function. As a consequence, for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,1)$, there is a unique solution $\vartheta_{n,\psi}(x_1,\ldots,x_n)$ of the (likelihood) equation $\sum_{i=1}^n \psi(x_i,\beta)=0$, $\beta>0$, that is, of the equation $$1+\ln \left( \sqrt[n]{x_1^\beta\cdots x_n^\beta}\right) = (\alpha-1)\frac{1}{n}\sum_{i=1}^n \frac{x_i^\beta\ln(x_i^\beta)}{1-x_i^\beta},\qquad \beta>0.$$* *Moreover, let $\alpha^*\in\mathbb{R}_{++}$ be also given and  $\varphi:(0,1)\times (0,\infty)\to\mathbb{R}$ be defined by $$\begin{aligned} \varphi(x,\beta) := \frac{1}{\beta}\left(1+ \frac{1-\alpha^* x^\beta}{1-x^\beta} \ln(x^\beta)\right), \qquad x\in(0,1),\quad \beta>0. \end{aligned}$$ Then $\varphi$ is a Z-function, and one can readily see that $\alpha\leqslant\alpha^*$ holds if and only if $\psi(x,\beta)\leqslant\varphi(x,\beta)$, $x\in(0,1)$, $\beta>0$ (in particular, part (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} is satisfied with $p(\beta):=1$, $\beta>0$). Consequently, Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} yields that $\vartheta_{n,\psi}(x_1,\ldots,x_n)\leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,1)$ if and only if $\alpha\leqslant\alpha^*$. $\Box$* **Example. 32** (Likelihood equation in case of Gamma distribution). *Let $p,\lambda\in\mathbb{R}_{++}$ and let $\xi$ be a random variable having Gamma distribution with parameters $p$ and $\lambda$, i.e., $\xi$ has a density function $$f_\xi(x) := \begin{cases} \dfrac{\lambda^p x^{p-1} \mathrm{e}^{-\lambda x}}{\Gamma(p)} & \text{if $x>0$,}\\ 0 & \text{if $x\leqslant 0$.} \end{cases}$$* *Supposing that $\lambda\in\mathbb{R}_{++}$ is known, we will establish the existence and uniqueness of a solution of the likelihood equation for $p$ using Theorem 2.10 and Proposition 2.12 in Barczy and Páles [@BarPal2]. In the considered case, using the setup given before Example 4.6 in Barczy and Páles [@BarPal2], we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,p) :=\begin{cases} \dfrac{\lambda^p x^{p-1} \mathrm{e}^{-\lambda x}}{\Gamma(p)} & \text{if $x>0$, $p>0$,}\\ 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,\infty)$.  Then  $\psi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,p) = \frac{\partial_2 f(x,p)}{f(x,p)} = -\frac{\Gamma'(p)}{\Gamma(p)} + \ln(x) + \ln(\lambda), \qquad x,p\in(0,\infty). \end{aligned}$$ With the terminology introduced in Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"} (see [\[help16\]](#help16){reference-type="eqref" reference="help16"}), $\psi$ is a quasiarithmetic type function. Note also that the function $\frac{\Gamma'}{\Gamma}:(0,\infty)\to \mathbb{R}$ is the digamma function, which is known to be strictly increasing and strictly concave, and it has range $(-\infty,\infty)$.
As a consequence, $\psi$ is a Z-function, and, for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$, the equation $\sum_{i=1}^n \psi(x_i,p) =0$, $p>0$, which is equivalent to $$\frac{\Gamma'(p)}{\Gamma(p)} = \ln(\lambda) + \frac{1}{n}\sum_{i=1}^n \ln(x_i),\qquad p>0,$$ has a unique solution $\vartheta_{n,\psi}(x_1,\ldots,x_n)$ for $p>0$.* *Moreover, let $\lambda^*\in\mathbb{R}_{++}$ and $\varphi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \varphi(x,p) := -\frac{\Gamma'(p)}{\Gamma(p)} + \ln(x) + \ln(\lambda^*), \qquad x,p\in(0,\infty). \end{aligned}$$ Then one can readily see that $\lambda\leqslant\lambda^*$ holds if and only if $\psi(x,p)\leqslant\varphi(x,p)$, $x>0$, $p>0$. Hence, part (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} yields that $\vartheta_{n,\psi}(x_1,\ldots,x_n)\leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n>0$ if and only if $\lambda\leqslant\lambda^*$.* *Now, suppose that $p\in\mathbb{R}_{++}$ is known. Similarly as above, we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,\lambda) :=\begin{cases} \dfrac{\lambda^p x^{p-1} \mathrm{e}^{-\lambda x}}{\Gamma(p)} & \text{if $x>0$, $\lambda>0$,}\\ 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,\infty)$. Then  $\psi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\lambda) = \frac{\partial_2 f(x,\lambda)}{f(x,\lambda)} = \frac{p}{\lambda} - x, \qquad x,\lambda\in(0,\infty). \end{aligned}$$ With the terminology introduced in Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"} (see [\[help16\]](#help16){reference-type="eqref" reference="help16"}), $\psi$ is a quasiarithmetic type function. Furthermore, $\psi$ is a Z-function, and for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$, the equation $\sum_{i=1}^n \psi(x_i,\lambda) =0$, $\lambda>0$, which is equivalent to $$\frac{pn}{\lambda} = \sum_{i=1}^n x_i,\qquad \lambda>0,$$ has a unique solution $\vartheta_{n,\psi}(x_1,\ldots,x_n):=\frac{pn}{\sum_{i=1}^n x_i}$. One can easily see that Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} could also be applied in this case. $\Box$* **Example. 33** (Likelihood equation in case of Lomax distribution). *Let $\alpha,\lambda\in\mathbb{R}_{++}$ and let $\xi$ be a random variable having Lomax (Pareto type II) distribution with parameters $\alpha$ and $\lambda$, i.e., $\xi$ has a density function $$f_\xi(x) := \begin{cases} \dfrac{\alpha}{\lambda}\left(1 + \dfrac{x}{\lambda}\right)^{-(\alpha+1)} & \text{if $x>0$,}\\[2mm] 0 & \text{if $x\leqslant 0$.} \end{cases}$$* *Supposing that $\alpha\in\mathbb{R}_{++}$ is known, we will establish the existence and uniqueness of a solution of the likelihood equation for $\lambda$ using Theorem 2.10 and Proposition 2.12 in Barczy and Páles [@BarPal2]. In the considered case, using the setup given before Example 4.6 in Barczy and Páles [@BarPal2], we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,\lambda) :=\begin{cases} \dfrac{\alpha}{\lambda}\left(1 + \dfrac{x}{\lambda}\right)^{-(\alpha+1)} & \text{if $x>0$, $\lambda>0$,}\\[2mm] 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,\infty)$.  
Then  $\psi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\lambda) = \frac{\partial_2 f(x,\lambda)}{f(x,\lambda)} = \partial_2 (\ln f)(x,\lambda) = \partial_2\left( \ln(\alpha) - (\alpha+1) \ln\left(1+\frac{x}{\lambda}\right) - \ln(\lambda)\right) = \frac{\alpha x - \lambda}{\lambda(\lambda + x)} \end{aligned}$$ for $x,\lambda\in(0,\infty)$. Then $\psi$ is a $Z_1$-function with $\vartheta_{1,\psi}(x) = \alpha x$, $x>0$. Further, for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$, we have $$\begin{aligned} \label{help_30} \sum_{i=1}^n \psi(x_i,\lambda) = \frac{\alpha}{\lambda} \sum_{i=1}^n \frac{x_i}{\lambda+x_i} - \sum_{i=1}^{n}\frac{1}{\lambda+x_i} = \frac{\alpha n}{\lambda} - (\alpha+1)\sum_{i=1}^n \frac{1}{\lambda+x_i}, \qquad \lambda >0. \end{aligned}$$ Next, we check that $\psi$ is a Z-function. For this, using that $\psi$ is continuous in its second variable, by part (vi) of Theorem 2.10 in Barczy and Páles [@BarPal2], it is enough to verify that for all $x,y>0$ with $\alpha x=\vartheta_{1,\psi}(x) < \vartheta_{1,\psi}(y) = \alpha y$, the function $$(\alpha x,\alpha y)\ni \lambda\mapsto -\frac{\psi(x,\lambda)}{\psi(y,\lambda)} = \frac{(\lambda - \alpha x)(\lambda +y)}{(\alpha y - \lambda)(\lambda +x)} = \frac{\lambda^2 + (y-\alpha x)\lambda - \alpha xy}{-\lambda^2 + (\alpha y-x)\lambda + \alpha xy} =: h(\lambda)$$ is strictly increasing. We check that $h'(\lambda)>0$ for all $\lambda>0$, which yields that $h$ is strictly increasing. An algebraic calculation shows that $$\begin{aligned} h'(\lambda) = \frac{(1+\alpha)(y-x)(\lambda^2 + \alpha xy)} {(-\lambda^2 + (\alpha y-x)\lambda + \alpha xy)^2} >0, \qquad \lambda \in (\alpha x,\alpha y), \end{aligned}$$ as desired. As a consequence, using also [\[help_30\]](#help_30){reference-type="eqref" reference="help_30"}, for each $n\in\mathbb{N}$ and $x_1,\ldots,x_n>0$, the likelihood equation for $\lambda$ takes the form $$\frac{\alpha}{\alpha+1} = \frac{\lambda}{n}\sum_{i=1}^n \frac{1}{x_i+\lambda},\qquad \lambda>0,$$ which has a unique solution $\vartheta_{n,\psi}(x_1,\ldots,x_n)$.* *Moreover, let $\alpha^*\in\mathbb{R}_{++}$ be also given, and let $\varphi:(0,\infty)\times (0,\infty)\to\mathbb{R}$ be defined by $$\begin{aligned} \varphi(x,\lambda) := \frac{\alpha^* x - \lambda}{\lambda(\lambda + x)},\qquad x,\lambda\in(0,\infty). \end{aligned}$$ Then $\varphi$ is a $Z$-function, and one can easily see that $\alpha \leqslant\alpha^*$ holds if and only if $\psi(x,\lambda)\leqslant\varphi(x,\lambda)$ holds for all $x>0$ and $\lambda>0$ (in particular, part (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} is satisfied with $p(\lambda):=1$, $\lambda>0$). Consequently, Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} yields that $\vartheta_{n,\psi}(x_1,\ldots,x_n) \leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n>0$ if and only if $\alpha\leqslant\alpha^*$.* *Now, suppose that $\lambda\in\mathbb{R}_{++}$ is known. Similarly as above, we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,\alpha) :=\begin{cases} \dfrac{\alpha}{\lambda}\left(1 + \dfrac{x}{\lambda}\right)^{-(\alpha+1)} & \text{if $x>0$, $\alpha>0$,}\\[2mm] 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,\infty)$. 
Then  $\psi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,\alpha) = \partial_2\left( \ln(\alpha) - (\alpha+1)\ln\left(1+\frac{x}{\lambda}\right) - \ln(\lambda)\right) = \frac{1}{\alpha} - \ln\left(1+\frac{x}{\lambda}\right), \qquad x,\alpha\in(0,\infty). \end{aligned}$$ With the terminology introduced in Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"} (see [\[help16\]](#help16){reference-type="eqref" reference="help16"}), $\psi$ is a quasiarithmetic type function. Then $\psi$ is a $Z_1$-function with $\vartheta_{1,\psi}(x) = 1/\ln\big(1+\frac{x}{\lambda}\big)$, $x>0$. This, together with the fact that $\psi$ is strictly decreasing and continuous in its second variable, yields that $\psi$ is a Z-function, see Barczy and Páles [@BarPal2 Proposition 2.12]. Further, for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$, the likelihood equation for $\alpha$ takes the form $$\sum_{i=1}^n \psi(x_i,\alpha) = \frac{n}{\alpha} - \sum_{i=1}^n \ln\left(1+\frac{x_i}{\lambda}\right)=0,\qquad \alpha>0,$$ which has a unique solution $$\vartheta_{n,\psi}(x_1,\ldots,x_n) = \frac{1}{\frac{1}{n}\sum_{i=1}^n \ln\left(1+\frac{x_i}{\lambda}\right)}.$$* *Moreover, let $\lambda^*\in\mathbb{R}_{++}$, and  $\varphi:(0,\infty)\times (0,\infty)\to\mathbb{R}$, $$\begin{aligned} \varphi(x,\alpha) := \frac{1}{\alpha} - \ln\left(1+\frac{x}{\lambda^*}\right), \qquad x,\alpha\in(0,\infty). \end{aligned}$$ Then $\varphi$ is a $Z$-function, and one can easily see that $\psi(x,\alpha)\leqslant\varphi(x,\alpha)$ holds for all $x>0$ and $\alpha>0$ if and only if $\lambda \leqslant\lambda^*$. Moreover, $\vartheta_{n,\psi}(x_1,\ldots,x_n) \leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n>0$ if and only if $\lambda\leqslant\lambda^*$. Part (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} is also satisfied with $p(\alpha):=1$, $\alpha>0$, which also directly implies that $\vartheta_{n,\psi}(x_1,\ldots,x_n) \leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n>0$ if and only if $\lambda\leqslant\lambda^*$. $\Box$* **Example. 34** (Likelihood equation in case of lognormal distribution). *Let $\mu\in\mathbb{R}$, $\sigma^2\in\mathbb{R}_{++}$ and let $\xi$ be a random variable having lognormal distribution with parameters $\mu$ and $\sigma^2$, i.e., $\xi$ has a density function $$f_\xi(x) := \begin{cases} \dfrac{1}{\sigma \sqrt{2\pi}\, x}\exp\left\{ - \dfrac{(\ln(x)-\mu)^2}{2\sigma^2}\right\} & \text{if $x>0$,}\\[2mm] 0 & \text{if $x\leqslant 0$.} \end{cases}$$* *Supposing that $\sigma^2>0$ is known, we will establish the existence and uniqueness of a solution of the likelihood equation for $\mu$ using Proposition 2.12 in Barczy and Páles [@BarPal2]. In the considered case, using the setup given before Example 4.6 in Barczy and Páles [@BarPal2], we have  $\Theta := \mathbb{R}$  and  $f:\mathbb{R}\times \mathbb{R}\to\mathbb{R}$, $$f(x,\mu) := \begin{cases} \dfrac{1}{\sigma \sqrt{2\pi} \,x}\exp\left\{- \dfrac{(\ln(x)-\mu)^2}{2\sigma^2} \right\} & \text{if $x>0$, $\mu\in\mathbb{R}$,}\\[2mm] 0 & \text{otherwise,} \end{cases}$$ and consequently,  ${\mathcal X}_f=(0,\infty)$. Then  $\psi:(0,\infty)\times\mathbb{R}\to\mathbb{R}$, $$\begin{aligned} \psi(x,\mu) = \partial_2\left( - \frac{(\ln(x)-\mu)^2}{2\sigma^2} - \ln(x) - \ln(\sigma\sqrt{2\pi}) \right) = \frac{\ln(x) - \mu}{\sigma^2}, \qquad x\in(0,\infty),\quad \mu\in\mathbb{R}. 
\end{aligned}$$ With the terminology introduced in Section [4](#Sec_comp_equal_Bajrak){reference-type="ref" reference="Sec_comp_equal_Bajrak"} (see [\[help16\]](#help16){reference-type="eqref" reference="help16"}), $\psi$ is a quasiarithmetic type function. Then $\psi$ is a $Z_1$-function with $\vartheta_{1,\psi}(x) = \ln(x)$, $x>0$. This, together with the fact that $\psi$ is strictly decreasing and continuous in its second variable, yields that $\psi$ is a Z-function, see Barczy and Páles [@BarPal2 Proposition 2.12]. Further, for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$, the likelihood equation for $\mu$ takes the form $$\sum_{i=1}^n \psi(x_i,\mu) = \frac{1}{\sigma^2}\sum_{i=1}^n \ln(x_i) - \frac{n\mu}{\sigma^2}=0,\qquad \mu\in\mathbb{R},$$ which has a unique solution $$\vartheta_{n,\psi}(x_1,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n \ln(x_i).$$ Note that $\vartheta_{n,\psi}(x_1,\ldots,x_n)$ does not depend on $\sigma^2$.* *Moreover, let $\sigma_*^2>0$ be also given, and  $\varphi:(0,\infty)\times \mathbb{R}\to\mathbb{R}$, $$\begin{aligned} \varphi(x,\mu) := \frac{\ln(x) - \mu}{\sigma_*^2}, \qquad x\in(0,\infty),\quad \mu\in\mathbb{R}. \end{aligned}$$ Then $\varphi$ is a $Z$-function, and $$\psi(x,\mu) = \frac{\sigma_*^2}{\sigma^2}\,\varphi(x,\mu),\qquad x>0,\quad \mu\in\mathbb{R}.$$ Hence part (iv) of Theorem [Theorem. 8](#Lem_psi_est_eq_5){reference-type="ref" reference="Lem_psi_est_eq_5"} holds with $p(\mu):=\frac{\sigma_*^2}{\sigma^2}$, $\mu\in\mathbb{R}$. In fact, we get $$\vartheta_{n,\psi}(x_1,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n \ln(x_i) = \vartheta_{n,\varphi}(x_1,\ldots,x_n)$$ for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in(0,\infty)$. $\Box$* **Example. 35** (Likelihood equation in case of Laplace distribution). *Let $\mu\in\mathbb{R}$, $b\in\mathbb{R}_{++}$ and let $\xi$ be a random variable having Laplace distribution with parameters $\mu$ and $b$, i.e., $\xi$ has a density function $$f_\xi(x) := \frac{1}{2b} \mathrm{e}^{-\frac{\vert x-\mu\vert}{b}},\qquad x\in\mathbb{R}.$$ Supposing that $\mu\in\mathbb{R}$ is known, we will establish the existence and uniqueness of a solution of the likelihood equation for $b$ using Theorem 2.10 in Barczy and Páles [@BarPal2]. In the considered case, using the setup given before Example 4.6 in Barczy and Páles [@BarPal2], we have  $\Theta := (0,\infty)$  and  $f:\mathbb{R}\times (0,\infty)\to\mathbb{R}$, $$f(x,b) := \frac{1}{2b} \mathrm{e}^{-\frac{\vert x-\mu\vert}{b}},\qquad x\in\mathbb{R}, \quad b>0,$$ and consequently,  ${\mathcal X}_f=\mathbb{R}$. Then  $\psi:\mathbb{R}\times(0,\infty)\to\mathbb{R}$, $$\begin{aligned} \psi(x,b) = \partial_2\left( - \frac{\vert x-\mu\vert}{b} - \ln(2) - \ln(b) \right) = \frac{\vert x-\mu\vert}{b^2} - \frac{1}{b}, \qquad x\in\mathbb{R},\quad b>0. \end{aligned}$$ Then $\psi$ is a $Z_1$-function with $\vartheta_{1,\psi}(x) = \vert x-\mu\vert$, $x\in\mathbb{R}$. Further, if $x,y\in\mathbb{R}$ are such that $\vert x-\mu\vert = \vartheta_{1,\psi}(x) < \vartheta_{1,\psi}(y) = \vert y-\mu\vert$, then the function $$(\vert x-\mu\vert, \vert y-\mu\vert)\ni b \mapsto - \frac{\psi(x,b)}{\psi(y,b)} = - \frac{\vert x-\mu\vert - b}{\vert y-\mu\vert - b} = -1 + \frac{\vert y-\mu\vert - \vert x-\mu\vert}{\vert y-\mu\vert - b}$$ is strictly increasing. This, together with the fact that $\psi$ is a $Z_1$-function, yields that $\psi$ is a $T$-function, see part (vi) of Theorem 2.10 in Barczy and Páles [@BarPal2]. 
Since $\psi$ is continuous in its second variable as well, we get that $\psi$ is a $Z$-function. By a direct calculation, for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in\mathbb{R}$, the likelihood equation for $b$ takes the form $$\sum_{i=1}^n \psi(x_i,b) = \frac{1}{b^2} \sum_{i=1}^n \vert x_i - \mu \vert - \frac{n}{b}=0,\qquad b>0,$$ which has the unique solution $$\vartheta_{n,\psi}(x_1,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n \vert x_i - \mu \vert.$$* *Moreover, if $\mu^*\in\mathbb{R}$, and $\varphi:\mathbb{R}\times(0,\infty)\to\mathbb{R}$, $$\begin{aligned} \varphi(x,b) := \frac{\vert x-\mu^*\vert}{b^2} - \frac{1}{b}, \qquad x\in\mathbb{R},\quad b>0, \end{aligned}$$ then $\varphi$ is a $Z$-function with $\vartheta_{1,\varphi}(x) = \vert x-\mu^*\vert$, $x\in\mathbb{R}$. One can readily see that $\vartheta_{1,\psi}(x)\leqslant\vartheta_{1,\varphi}(x)$ holds for all $x\in\mathbb{R}$ if and only if $\mu=\mu^*$. This also implies that $\vartheta_{n,\psi}(x_1,\ldots,x_n)\leqslant\vartheta_{n,\varphi}(x_1,\ldots,x_n)$ holds for all $n\in\mathbb{N}$ and $x_1,\ldots,x_n\in\mathbb{R}$ if and only if $\mu=\mu^*$. $\Box$*

[^1]: *2020 Mathematics Subject Classifications*: 62F10, 62D99, 26E60

[^2]: *Key words and phrases*: $\psi$-estimator, Z-estimator, comparison of estimators, equality of estimators, Bajraktarević-type estimator, quasi-arithmetic-type estimator, likelihood equation.

[^3]: Mátyás Barczy was supported by the project TKP2021-NVA-09. Project no. TKP2021-NVA-09 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme. Zsolt Páles is supported by the K-134191 NKFIH Grant.
**Interlacing property of a family of generating polynomials over Dyck paths** **Bo Wang$^1$ and Candice X.T. Zhang$^2$\ ** *Center for Combinatorics, LPMC\ Nankai University, Tianjin 300071, P. R. China\ Email: $^{1}$`bowang@nankai.edu.cn`, $^{2}$`zhang_xutong@mail.nankai.edu.cn`* **Abstract.** In the study of a tantalizing symmetry on Catalan objects, Bóna et al. introduced a family of polynomials $\{W_{n,k}(x)\}_{n\geq k\geq 0}$ defined by $$\begin{aligned} W_{n,k}(x)=\sum_{m=0}^{k}w_{n,k,m}x^{m},\end{aligned}$$ where $w_{n,k,m}$ counts the number of Dyck paths of semilength $n$ with $k$ occurrences of $UD$ and $m$ occurrences of $UUD$. They proposed two conjectures on the interlacing property of these polynomials, one of which states that $\{W_{n,k}(x)\}_{n\geq k}$ is a Sturm sequence for any fixed $k\geq 1$, and the other states that $\{W_{n,k}(x)\}_{1\leq k\leq n}$ is a Sturm-unimodal sequence for any fixed $n\geq 1$. In this paper, we obtain certain recurrence relations for $W_{n,k}(x)$, and further confirm their conjectures. **AMS Classification 2020:** 05A15, 26C10 **Keywords:** Real zeros, interlacing property, Sturm sequence, Sturm-unimodal, Dyck path

# Introduction

A *Dyck path* of semilength $n$ in $\mathbb{Z}^2$ is a lattice path starting at the origin $(0,0)$, ending at $(2n,0)$, and never going below the $x$-axis, whose permitted step types are up step $U=(1,1)$ and down step $D=(1,-1)$. It is well known that the set of Dyck paths of semilength $n$ is counted by the Catalan number $C_{n}=\frac{1}{n+1}\binom{2n}{n}$, which is the sequence A000108 in the On-line Encyclopedia of Integer Sequences of Sloane [@Sloane-2018]. Numerous studies have focused on refinements of Catalan numbers by considering certain statistics over Dyck paths. It is easy to see that a Dyck path determines a word over $\{U,D\}$ as one records the steps along the path from left to right. One important class of statistics is defined in terms of various factors appearing in the word representation of Dyck paths. It seems that the most natural factor is a $UD$-factor, which means that an up step is immediately followed by a down step in the Dyck path. The number of Dyck paths of semilength $n$ with exactly $k$ occurrences of $UD$-factors is given by the Narayana number $N(n,k)=\frac{1}{n}\binom{n}{k-1}\binom{n}{k}$; see Sulanke [@Sulanke-1999]. The enumeration of Dyck paths of semilength $n$ with $k$ occurrences of $UUD$-factors was first studied by Sapounakis, Tasoulas, and Tsikouras [@Sapounakis-Tasoulas-Tsikouras-2006]. Lin and Kim [@Lin-Kim-2021] introduced the segment statistic, which is actually the $UUD$-factor, to study various classical statistics on restricted inversion sequences. In their paper, Lin and Kim also proved that this statistic, when applied to Dyck paths, is equidistributed with the descent statistic over the set of $(3, 2, 1)$-avoiding permutations. Wang [@Wang-2011] developed a useful technique for computing relevant generating functions for Dyck paths with different factors. For more information on the enumeration of Dyck paths with respect to various factors, see [@Denise-1995; @Deutsch-1999; @Merlini-Sprugnoli-Verri-2002; @Sun-2004; @Woan-2004; @Mansour-2006; @Czabarka-Florez-2015] and references therein. This paper is largely motivated by a recent work [@Bona-Dimitrov-2022] due to Bóna et al., who first considered the joint distribution of $UD$-factors and $UUD$-factors over Dyck paths.
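To make the factor statistics above concrete, the following small brute-force sketch (illustrative only; the parameter $n=5$ and the helper names are our own choices) enumerates all Dyck paths of a given semilength as words over $\{U,D\}$ and confirms that exactly $N(n,k)=\frac{1}{n}\binom{n}{k-1}\binom{n}{k}$ of them contain $k$ occurrences of the factor $UD$.

``` python
# Brute-force check of the Narayana-number statement (illustrative sketch).
from itertools import product
from math import comb

def dyck_paths(n):
    """Yield all Dyck paths of semilength n as words over {U, D}."""
    for steps in product("UD", repeat=2 * n):
        height = 0
        for s in steps:
            height += 1 if s == "U" else -1
            if height < 0:
                break
        else:
            if height == 0:
                yield "".join(steps)

def occurrences(word, factor):
    """Number of occurrences of factor in word."""
    return sum(word[i:i + len(factor)] == factor
               for i in range(len(word) - len(factor) + 1))

n = 5
for k in range(1, n + 1):
    brute = sum(1 for path in dyck_paths(n) if occurrences(path, "UD") == k)
    narayana = comb(n, k - 1) * comb(n, k) // n
    assert brute == narayana
```

The same loop, with the factor $UUD$ counted alongside $UD$, can be used to tabulate the joint counts $w_{n,k,m}$ introduced next.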
Let $w_{n,k,m}$ be the number of Dyck paths of semilength $n$ with $k$ $UD$-factors and $m$ $UUD$-factors. For these numbers $w_{n,k,m}$, Bóna et al. [@Bona-Dimitrov-2022] proved the following tantalizing symmetric property: $$w_{2k+1,k,m}=w_{2k+1,k,k+1-m}, \quad \mbox{where } 1\le m\le k.$$ To obtain this result, they derived the following explicit formula for the numbers $w_{n,k,m}$ by using generating function techniques. **Theorem 1** ([@Bona-Dimitrov-2022], Theorem 1.2). *For all $n$, $k$ and $m$, we have $$\label{eq-wnkm} w_{n,k,m}= \begin{cases} \frac{1}{k}\binom{n}{k-1}\binom{n-k-1}{m-1}\binom{k}{m}, & \mbox{if } 0<m\leq k \mbox{ and } k+m\leq n,\\ 1, & \mbox{if } m=0\ \mbox{and } n=k,\\ 0, & \mbox{otherwise.}\\ \end{cases}$$* With the above formula, Bóna et al. also noted that the numbers $w_{n,k,m}$ are closely related to the classical Narayana numbers, as well as to Callan's generalization of the Narayana numbers [@Callan-2017]. Let $W_{n,k}(x)$ be the generating polynomial of $w_{n,k,m}$ as given by $$\label{eq-wnkx} W_{n,k}(x)=\sum_{m=0}^{k}w_{n,k,m}x^{m}.$$ By [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}, it is clear that $\deg W_{n,k}(x)=\min \{k,n-k \}$ if $n> k$ and $\deg W_{n,k}(x)=0$ otherwise. It turns out that these polynomials enjoy very interesting properties. Bóna et al. [@Bona-Dimitrov-2022] obtained the following result. **Theorem 2** ([@Bona-Dimitrov-2022], Proposition 6.1, Theorem 6.3). *For any $n,k\geq 0$ the polynomial $W_{n,k}(x)$ has only real zeros. Moreover, for all $1\leq k\leq n-1$, the polynomials $W_{n,k}(x)$ and $W_{n,n-k}(x)$ have the same zeros.* The real-rootedness of $W_{n,k}(x)$ was proved by Bóna et al. [@Bona-Dimitrov-2022] based on Malo's result regarding the roots of the Hadamard product of two real-rooted polynomials. It is worth mentioning that many combinatorial polynomials have only real zeros. For excellent surveys on this topic, we refer the readers to Stanley [@Stanley-log-1989], Brenti [@Brenti-1994], and Brändén [@Branden-2015]. One of the useful methods to prove the real-rootedness of a polynomial is to consider the interlacing property involving its zeros. Bóna et al. [@Bona-Dimitrov-2022] further studied the interlacing property of $W_{n,k}(x)$ by fixing $n$ or $k$, and proposed two interesting conjectures. Before stating these conjectures, let us recall some related definitions following Liu and Wang [@Liu-Wang-2006]. Given two real-rooted polynomials $F(x)$ and $G(x)$ with nonnegative real coefficients, let $\{\alpha_{r}\}$ and $\{\beta_{s}\}$ be their zeros in weakly decreasing order, respectively. We say that $G(x)$ *interlaces* $F(x)$, denoted by $G(x)\preccurlyeq F(x)$, if $\deg F(x)=\deg G(x)=n$ and $$\label{ineq-interlace-1} \beta_{n}\leq \alpha_{n}\leq \beta_{n-1}\leq \alpha_{n-1}\leq \cdots\leq \beta_{1}\leq \alpha_{1},$$ or $\deg F(x)=\deg G(x)+1=n$ and $$\label{ineq-interlace-2} \alpha_{n}\leq \beta_{n-1}\leq \alpha_{n-1}\leq \cdots\leq \beta_{1}\leq \alpha_{1}.$$ For convenience, we let $a\preccurlyeq bx+c$ for any real numbers $a,b,c$, and $F(x)\preccurlyeq 0$, $0\preccurlyeq F(x)$ for any real-rooted polynomial $F(x)$. Given a sequence $\{F_{i}(x)\}_{i\geq 0}$ of real-rooted polynomials, we say that it is a *generalized Sturm sequence* if $F_{i}(x)\preccurlyeq F_{i+1}(x)$ for all $i\geq 0$. We would like to point out that a generalized Sturm sequence is called a Sturm sequence by Bóna et al. They also introduced the notion of Sturm-unimodal sequences. 
With our notation here, a finite sequence $\{F_{i}(x)\}_{1\leq i\leq n}$ of real-rooted polynomials is said to be *Sturm-unimodal*, provided that there exists $1\leq j\leq n$ such that $$F_{1}(x)\preccurlyeq \cdots\preccurlyeq F_{j-1}(x)\preccurlyeq F_{j}(x)\succcurlyeq F_{j+1}(x)\succcurlyeq \cdots\succcurlyeq F_{n}(x).$$ Now the two conjectures of Bóna et al. [@Bona-Dimitrov-2022] can be stated as follows. **Conjecture 3** ([@Bona-Dimitrov-2022], Conjecture 6.4). *For any fixed $k\geq 1$, the polynomial sequence $\{W_{n,k}(x)\}_{n\geq k}$ is a generalized Sturm sequence.* **Conjecture 4** ([@Bona-Dimitrov-2022], Conjecture 6.5). *For any fixed $n\geq 1$, the polynomial sequence $\{W_{n,k}(x)\}_{1\leq k\leq n}$ is Sturm-unimodal.* In this paper we shall prove these two conjectures. # The main results The aim of this section is to prove Conjecture [Conjecture 3](#conje1){reference-type="ref" reference="conje1"} and Conjecture [Conjecture 4](#conje2){reference-type="ref" reference="conje2"}. In the process of proving these two conjectures, we need the following result, which provides a sufficient condition for a polynomial sequence with a three-term recurrence to be a generalized Sturm sequence. Note that it is a special case of Liu and Wang's criterion [@Liu-Wang-2006]. **Theorem 5** ([@Liu-Wang-2006], Corollary 2.4). *Let $\{F_{i}(x)\}_{i\geq 0}$ be a sequence of polynomials with nonnegative coefficients satisfying the following conditions:* - *$F_{0}(x)$ and $F_{1}(x)$ are real-rooted polynomials with $F_{0}(x)\preccurlyeq F_{1}(x)$.* - *$\deg F_{i+1}(x)=\deg F_{i}(x)$ or $\deg F_{i}(x)+1$ for any $i\geq 0$.* - *There exist polynomials $A_{j}(x)$ and $B_{j}(x)$ with real coefficients such that $$\begin{aligned} \label{eq-formal-rec} F_{j+2}(x)=A_{j}(x)F_{j+1}(x)+B_{j}(x)F_{j}(x). \end{aligned}$$* *If for all $x\leq 0$, we have $B_{j}(x)\leq 0$, then $\{F_{i}(x)\}_{i\geq 0}$ is a generalized Sturm sequence.* In order to use the above theorem to prove Conjecture [Conjecture 3](#conje1){reference-type="ref" reference="conje1"}, we need to establish a recurrence relation satisfied by the polynomials $W_{n,k}(x)$ when fixing $k$. Let us first give a recurrence relation for the coefficients $w_{n,k,m}$ for each fixed $k\geq 1$. **Lemma 6**. *Let $w_{n,k,m}$ be as given by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. Then, for any $n\geq k-1\geq 0$, we have $$\label{eq-thm-conje1-wnkm-recu} \begin{split} w_{n+2,k,m}=&\frac{2(n+2)(n-k+1)}{(n-k+2)(n-k+3)}w_{n+1,k,m}-\frac{(n+2)(n-2k+1)}{(n-k+2)(n-k+3)}w_{n+1,k,m-1}\\ &+\frac{(n+1)(n+2)(n-k)}{(n-k+2)^2(n-k+3)}(w_{n,k,m-1}-w_{n,k,m}). \end{split}$$* We may assume that $0\le m\leq k$ and $n\geq m+k-2$, since there is nothing to prove for $m<0$, or $m>k$, or $n< m+k-2$. Moreover, it is routine to verify the validity of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} for $m=0$ since both sides vanish under the condition $n\ge k-1$. If $m=1$, and hence $n\ge k-1$, then we divide the proof of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} into the following three cases. **Case A1:** $n=k-1$. We find that $$w_{n+1,k,m}=w_{n,k,m-1}=w_{n,k,m}=0\text{ and }w_{n+1,k,m-1}=1$$ in view of [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. 
Thus, [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} holds since its right-hand side simplifies to $\binom{k+1}{2}$, which is indeed equal to $w_{n+2,k,m}$. **Case A2:** $n=k$. In this case, the third term on the right-hand side of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} naturally vanishes. Note that for $m=1$ we have $w_{n+1,k,m-1}=0$ by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. Thus, it suffices to show that $$w_{k+2,k,1}=\frac{2(k+2)}{3}w_{k+1,k,1},$$ which can be easily verified by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. **Case A3:** $n>k$. Keep in mind that $m=1$ throughout this case. By [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} we have $w_{n+1,k,m-1}=w_{n,k,m-1}=0$. Thus, the right-hand side of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} turns out to be $$\frac{2(n+2)(n-k+1)}{(n-k+2)(n-k+3)}\binom{n+1}{k-1}-\frac{(n+1)(n+2)(n-k)}{(n-k+2)^2(n-k+3)}\binom{n}{k-1}=\binom{n+2}{k-1},$$ which is equal to $w_{n+2,k,m}$, as desired. From now on we may assume that $2\leq m\leq k$. We further divide the proof of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} into the following three cases. **Case B1:** $n=m+k-2$. In this case, we have $w_{n+1,k,m}=w_{n,k,m-1}=w_{n,k,m}=0$. Now it is sufficient to show that $$w_{n+2,k,m}=-\frac{(n+2)(n-2k+1)}{(n-k+2)(n-k+3)}w_{n+1,k,m-1},$$ which can be verified by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. **Case B2:** $n=m+k-1$. For this case we have $w_{n,k,m}=0$ by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. Substituting [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} and the condition $m=n-k+1$ into the right-hand side of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} and then simplifying, we obtain that $$\begin{aligned} &\frac{n-k+1}{k(n-k+2)}\binom{n+2}{k-1}\binom{k}{n-k+1}\binom{n-k}{n-k-1}+\frac{2(n-k+1)}{k(n-k+2)}\binom{n+2}{k-1}\binom{k}{n-k+1}\\[5pt] &=\frac{1}{k}(n-k+1)\binom{n+2}{k-1}\binom{k}{n-k+1},\end{aligned}$$ which is equal to $w_{n+2,k,m}$ according to [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. **Case B3:** $n\geq m+k$. By substituting [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} into the right-hand side of [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} and then simplifying, we get $$\begin{split} &\frac{n-k+1+m}{k(n-k+2)}\binom{n+2}{k-1}\binom{k}{m}\binom{n-k}{m-1}+\frac{m}{k(n-k+2)}\binom{n+2}{k-1}\binom{k}{m}\binom{n-k}{m-2}\\[5pt] &=\frac{1}{k}\binom{n+2}{k-1}\binom{k}{m}\left(\frac{n-k+m+1}{n-k+2}\binom{n-k}{m-1}+\frac{m}{n-k+2}\binom{n-k}{m-2}\right)\\[5pt] &=\frac{1}{k}\binom{n+2}{k-1}\binom{n-k+1}{m-1}\binom{k}{m}, \end{split}$$ which is equal to $w_{n+2,k,m}$ according to [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}, as desired. Taking into account all the above cases, we complete the proof. 
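As a quick sanity check (a minimal sketch added for illustration; it is not part of the original argument), the following Python code implements the explicit formula of Theorem 1 with exact rational arithmetic and verifies the recurrence relation of Lemma 6 over a small range of parameters.

```python
from fractions import Fraction
from math import comb

def w(n, k, m):
    # Explicit formula for w_{n,k,m} from Theorem 1 (Bona et al.).
    if 0 < m <= k and k + m <= n:
        return Fraction(comb(n, k - 1) * comb(n - k - 1, m - 1) * comb(k, m), k)
    if m == 0 and n == k:
        return Fraction(1)
    return Fraction(0)

def recurrence_rhs(n, k, m):
    # Right-hand side of the recurrence for w_{n+2,k,m} in Lemma 6.
    d = (n - k + 2) * (n - k + 3)
    return (Fraction(2 * (n + 2) * (n - k + 1), d) * w(n + 1, k, m)
            - Fraction((n + 2) * (n - 2 * k + 1), d) * w(n + 1, k, m - 1)
            + Fraction((n + 1) * (n + 2) * (n - k), (n - k + 2) * d)
            * (w(n, k, m - 1) - w(n, k, m)))

for k in range(1, 6):
    for n in range(k - 1, 12):
        for m in range(0, k + 1):
            assert w(n + 2, k, m) == recurrence_rhs(n, k, m), (n, k, m)
print("Recurrence of Lemma 6 verified for k <= 5 and k - 1 <= n <= 11.")
```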
The recurrence relation [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"} satisfied by the coefficients $w_{n,k,m}$ is equivalent to the following recurrence relation for the polynomials $W_{n,k}(x)$, which plays a key role in our proof of Conjecture [Conjecture 3](#conje1){reference-type="ref" reference="conje1"}. **Theorem 7**. *Fixing $k\geq 1$, for any $n\ge k-1$ we have $$\label{eq-thm-conje1-recu} \begin{split} W_{n+2,k}(x)=&\frac{(n+2)\left(2(n-k+1)-(n-2k+1)x\right)}{(n-k+2)(n-k+3)}W_{n+1,k}(x)\\[5pt] &+\frac{(n+1)(n+2)(n-k)(x-1)}{(n-k+2)^2(n-k+3)}W_{n,k}(x). \end{split}$$* Due to the fact that $\deg W_{n,k}(x)=\min\{k,n-k\}$, it suffices to compare the coefficients of $x^m$ for $0\leq m\leq k$. For the right-hand side of [\[eq-thm-conje1-recu\]](#eq-thm-conje1-recu){reference-type="eqref" reference="eq-thm-conje1-recu"}, the coefficient of $x^m$ is $$\label{eq-proof-conje1-2} \begin{split} &\frac{2(n+2)(n-k+1)}{(n-k+2)(n-k+3)}w_{n+1,k,m}-\frac{(n+2)(n-2k+1)}{(n-k+2)(n-k+3)}w_{n+1,k,m-1}\\ &\qquad\qquad+\frac{(n+1)(n+2)(n-k)}{(n-k+2)^2(n-k+3)}(w_{n,k,m-1}-w_{n,k,m}), \end{split}$$ which is equal to $w_{n+2,k,m}$ by [\[eq-thm-conje1-wnkm-recu\]](#eq-thm-conje1-wnkm-recu){reference-type="eqref" reference="eq-thm-conje1-wnkm-recu"}. This is just the coefficient of $x^m$ on the left-hand side of [\[eq-thm-conje1-recu\]](#eq-thm-conje1-recu){reference-type="eqref" reference="eq-thm-conje1-recu"}. The proof is complete. We proceed to prove Conjecture [Conjecture 3](#conje1){reference-type="ref" reference="conje1"}. Using Theorem [Theorem 7](#thm-conje1-recu){reference-type="ref" reference="thm-conje1-recu"} and Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"}, we obtain the first main result of this section. **Theorem 8**. *For any fixed $k\geq 1$, the polynomial sequence $\{W_{n,k}(x)\}_{n\geq k}$ is a generalized Sturm sequence.* Taking $F_{i}(x)$ in Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"} to be the polynomial $W_{k+i,k}(x)$ for each $i\geq 0$, it is clear that each $F_{i}(x)$ is a polynomial with nonnegative coefficients. By [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} and [\[eq-wnkx\]](#eq-wnkx){reference-type="eqref" reference="eq-wnkx"}, we have $\deg F_i(x)=i$ if $i\le k$ and $\deg F_i(x)=k$ otherwise. Note that $$F_{0}(x)=W_{k,k}(x)=1,\ \ \ \ F_{1}(x)=W_{k+1,k}(x)=\dbinom{k+1}{k-1}x,$$ and hence $F_{0}(x)\preccurlyeq F_{1}(x)$. Now the recurrence relation ([\[eq-thm-conje1-recu\]](#eq-thm-conje1-recu){reference-type="ref" reference="eq-thm-conje1-recu"}) can be restated in the form [\[eq-formal-rec\]](#eq-formal-rec){reference-type="eqref" reference="eq-formal-rec"} with $$A_{j}(x)=\frac{(k+j+2)}{(j+2)(j+3)}\left(2(j+1)-(j-k+1)x\right),$$ and $$B_{j}(x)=\frac{(k+j+1)(k+j+2)j}{(j+2)^2(j+3)}(x-1).$$ Clearly, for any $j\geq 0$ and $x\leq 0$, we have $B_{j}(x)\leq 0$. From Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"} it follows that the sequence $\{F_{i}(x)\}_{i\geq 0}$, and hence $\{W_{n,k}(x)\}_{n\geq k}$, is a generalized Sturm sequence. In order to prove Conjecture [Conjecture 4](#conje2){reference-type="ref" reference="conje2"}, we present the following recurrence relation for the coefficients $w_{n,k,m}$ for each fixed $n\geq 1$. **Lemma 9**. *Let $w_{n,k,m}$ be as given by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}. 
Then, for any $1\le k\le \lfloor\frac{n+1}{2} \rfloor-2$, we have $$\label{eq-thm-conje2-wnkm-recu} w_{n,k+2,m}=a(n,k)w_{n,k+1,m-1}+b(n,k)w_{n,k+1,m}-c(n,k)w_{n,k,m},$$ where $$\left\{ \begin{aligned} a(n,k)&=\frac{(n-k)(n-2k-2)(n-2k-3)}{(k+1)(k+2)(n-k-2)},\\[5pt] b(n,k)&=\frac{2(n-k)(n-k-1)(n-2k-2)}{(k+2)(n-k-2)(n-2k-1)},\\[5pt] c(n,k)&=\frac{(n-k+1)(n-k)^{2}(n-2k-3)}{(k+1)(k+2)(n-k-2)(n-2k-1)}. \end{aligned} \right.$$* By similar arguments as in the proof of Lemma [Lemma 6](#thm-conje1-wnkm-recu){reference-type="ref" reference="thm-conje1-wnkm-recu"}, we may assume that $1\le m\leq k$, and hence $2\leq m+k\le n-2$ by the condition $1\le k\le \lfloor \frac{n+1}{2}\rfloor-2$. If $m=1$, then we have $w_{n,k+1,m-1}=0$ by [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"}, and the right-hand side of [\[eq-thm-conje2-wnkm-recu\]](#eq-thm-conje2-wnkm-recu){reference-type="eqref" reference="eq-thm-conje2-wnkm-recu"} turns out to be $$b(n,k)\binom{n}{k}-c(n,k)\binom{n}{k-1}=\binom{n}{k+1}=w_{n,k+2,m},$$ as desired. The proof for the case of $2\le m\le k$ is very similar to that of Case B3 in the proof of Lemma [Lemma 6](#thm-conje1-wnkm-recu){reference-type="ref" reference="thm-conje1-wnkm-recu"}, and is omitted here. Fixing an integer $n\geq 1$, the above lemma immediately leads to a recurrence relation for the polynomial sequence $\{W_{n,k}(x)\}_{1\leq k\leq \lfloor\frac{n+1}{2} \rfloor}$. **Theorem 10**. *Let $a(n,k)$, $b(n,k)$ and $c(n,k)$ be as given in Lemma [Lemma 9](#thm-conje2-wnkm-recu){reference-type="ref" reference="thm-conje2-wnkm-recu"}. Then, for any $1\le k\le \lfloor\frac{n+1}{2} \rfloor-2$, we have $$\label{eq-thm-conje2-recu} W_{n,k+2}(x)=\left(a(n,k)x+b(n,k)\right)W_{n,k+1}(x)-c(n,k)W_{n,k}(x).$$* The desired recurrence immediately follows by comparing the coefficients of $x^{m}$ on both sides of ([\[eq-thm-conje2-recu\]](#eq-thm-conje2-recu){reference-type="ref" reference="eq-thm-conje2-recu"}) for each $m$ and then using [\[eq-thm-conje2-wnkm-recu\]](#eq-thm-conje2-wnkm-recu){reference-type="eqref" reference="eq-thm-conje2-wnkm-recu"}. Based on the recurrence relation [\[eq-thm-conje2-recu\]](#eq-thm-conje2-recu){reference-type="eqref" reference="eq-thm-conje2-recu"} and Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"}, we obtain the following result, which provides an affirmative answer to Conjecture [Conjecture 4](#conje2){reference-type="ref" reference="conje2"}. **Theorem 11**. *For any fixed $n\geq 1$, the polynomial sequence $\{W_{n,k}(x)\}_{1\leq k\leq n}$ is Sturm-unimodal.* By [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} and [\[eq-wnkx\]](#eq-wnkx){reference-type="eqref" reference="eq-wnkx"}, one can check that $$\label{eq-sturm-unimodal} \begin{split} W_{n,n}(x)&=1, \mbox{ for } n\geq 1,\\ W_{n,1}(x)&=x, \mbox{ for } n\geq 2,\\ W_{n,2}(x)&=\frac{n}{2}\left(2x+(n-3)x^2\right), \mbox{ for } n\geq 3. \end{split}$$ Thus we always have $W_{n,n-1}(x)\succcurlyeq W_{n,n}(x)$ by convention. On the other hand, Theorem [Theorem 2](#same root){reference-type="ref" reference="same root"} implies that $W_{n,k}(x)\preccurlyeq W_{n,k+1}(x)$ if and only if $W_{n,n-k-1}(x)\succcurlyeq W_{n,n-k}(x)$ for any $1\le k\le \lfloor\frac{n+1}{2} \rfloor$. Therefore, it suffices to show that the polynomial sequence $\{W_{n,k}(x)\}_{1\leq k\leq \lfloor\frac{n+1}{2}\rfloor}$ is a generalized Sturm sequence. 
For $1\le n\le 5$ this can be directly verified by [\[eq-sturm-unimodal\]](#eq-sturm-unimodal){reference-type="eqref" reference="eq-sturm-unimodal"}. Now suppose that $n\geq 6$ for the remainder of the proof. For $0\le i\le \lfloor\frac{n+1}{2}\rfloor-1$, take each $F_{i}(x)$ in Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"} to be the polynomial $W_{n,i+1}(x)$. Clearly, $F_{i}(x)$ is a polynomial with nonnegative coefficients. By [\[eq-wnkm\]](#eq-wnkm){reference-type="eqref" reference="eq-wnkm"} and [\[eq-wnkx\]](#eq-wnkx){reference-type="eqref" reference="eq-wnkx"}, we have $$\begin{aligned} \deg F_{i}(x)=\left\{ \begin{array}{ll} i+1, & \mbox{ if } 0\le i\le \lfloor\frac{n+1}{2}\rfloor-2,\\ i+1, & \mbox{ if $n$ is even and } i=\lfloor\frac{n+1}{2}\rfloor-1,\\ i, & \mbox{ if $n$ is odd and } i=\lfloor\frac{n+1}{2}\rfloor-1. \end{array} \right.\end{aligned}$$ It is also clear that $F_{0}(x)\preccurlyeq F_{1}(x)$. Now the recurrence relation ([\[eq-thm-conje2-recu\]](#eq-thm-conje2-recu){reference-type="ref" reference="eq-thm-conje2-recu"}) can be restated in the form [\[eq-formal-rec\]](#eq-formal-rec){reference-type="eqref" reference="eq-formal-rec"} with $$A_{j}(x)=\frac{(n-j-1)(n-2j-4)(n-2j-5)}{(j+2)(j+3)(n-j-3)}x+\frac{2(n-j-1)(n-j-2)(n-2j-4)}{(j+3)(n-j-3)(n-2j-3)},$$ and $$B_{j}(x)=-\frac{(n-j)(n-j-1)^{2}(n-2j-5)}{(j+2)(j+3)(n-j-3)(n-2j-3)}\leq 0$$ for any $0\le j\le \lfloor\frac{n+1}{2}\rfloor-3$. We conclude from Theorem [Theorem 5](#thm-lw){reference-type="ref" reference="thm-lw"} that the polynomial sequence $\{F_{i}(x)\}_{0\leq i\leq \lfloor\frac{n+1}{2}\rfloor-1}$, and hence $\{W_{n,k}(x)\}_{1\leq k\leq \lfloor\frac{n+1}{2}\rfloor}$, is a generalized Sturm sequence. This completes the proof. **Acknowledgments.** This work was supported by the Fundamental Research Funds for the Central Universities and the National Science Foundation of China (Grant No. 11971249). M. Bóna, S. Dimitrov, G. Labelle, Y. Li, J. Pappe, A.R. Vindas-Meléndez, and Y. Zhuang. A combinatorial proof of a tantalizing symmetry on Catalan objects. arXiv preprint arXiv:2212.10586, 2022. P. Brändén. Unimodality, log-concavity, real-rootedness and beyond, in Handbook of Enumerative Combinatorics, CRC Press, Boca Raton, FL, 437--483, 2015. F. Brenti. Log-concave and unimodal sequences in algebra, combinatorics, and geometry: An update. *Contemp. Math.*, 178: 71--89, 1994. D. Callan. Generalized Narayana numbers. https://www.oeis.org/A281260/a281260.pdf, 2017. É. Czabarka, R. Flórez, and L. Junes. Some enumerations on non-decreasing Dyck paths. *Electron. J. Combin.*, P1.3: 1--22, 2015. A. Denise and R. Simion. Two combinatorial statistics on Dyck paths. *Discrete Math.*, 137(1--3): 155--176, 1995. E. Deutsch. Dyck path enumeration. *Discrete Math.*, 204(1--3): 167--202, 1999. L. L. Liu and Y. Wang. A unified approach to polynomial sequences with only real zeros. *Adv. Appl. Math.*, 38(4): 542--560, 2007. Z. Lin and D. Kim. Refined restricted inversion sequences. *Ann. Comb.*, 25: 849--875, 2021. T. Mansour. Statistics on Dyck paths. *J. Integer Sequences*, 9: 06.1.5, 2006. D. Merlini, R. Sprugnoli, and M. C. Verri. Some statistics on Dyck paths. *J. Statis. Plann. Inference*, 101(1--2): 211--227, 2002. A. Sapounakis, I. Tasoulas, and P. Tsikouras. Dyck path statistics. *WSEAS Trans. Math.*, 5(5): 459--464, 2006. N. J. A. Sloane. The on-line encyclopedia of integer sequences. Published electronically at https://oeis.org, 2018. R. P. Stanley. 
Log-concave and unimodal sequences in algebra, combinatorics, and geometry. *Ann. New York Acad. Sci.*, 576: 500--535, 1989. R. Sulanke. Constraint-sensitive Catalan path statistics having the Narayana distribution. *Discrete Math.*, 204: 397--414, 1999. Y. Sun. The statistic number of $udu$'s in Dyck paths. *Discrete Math.*, 287(1--3): 177--186, 2004. C. J. Wang. Applications of the Goulden-Jackson cluster method to counting Dyck paths by occurrences of subwords. Brandeis University, 2011. W. Woan. Area of Catalan paths. *Discrete Math.*, 226(1--3): 439--444, 2001.
arxiv_math
{ "id": "2309.05903", "title": "Interlacing property of a family of generating polynomials over Dyck\n paths", "authors": "Bo Wang and Candice X.T. Zhang", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, a compressible viscous-dispersive Euler system in one space dimension in the context of quantum hydrodynamics is considered. The purpose of this study is twofold. First, it is shown that the system is locally well-posed. For that purpose, the existence of classical solutions which are perturbations of constant states is established. Second, it is proved that in the particular case of subsonic equilibrium states, sufficiently small perturbations decay globally in time. In order to prove this stability property, the linearized system around the subsonic state is examined. Using an appropriately constructed compensating matrix symbol in the Fourier space, it is proved that solutions to the linear system decay globally in time, underlying a dissipative mechanism of regularity gain type. These linear decay estimates, together with the local existence result, imply the global existence and the decay of perturbations to constant subsonic equilibrium states as solutions to the full nonlinear system. address: - | Departamento de Matemáticas y Mecánica\ Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas\ Universidad Nacional Autónoma de México\ Circuito Escolar s/n, Ciudad Universitaria C.P. 04510 Cd. Mx. (Mexico) - | Departamento de Matemáticas y Mecánica\ Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas\ Universidad Nacional Autónoma de México\ Circuito Escolar s/n, Ciudad Universitaria C.P. 04510 Cd. Mx. (Mexico) author: - Ramón G. Plaza - Delyan Zhelyazov title: Well-posedness and decay structure of a quantum hydrodynamics system with Bohm potential and linear viscosity --- # Introduction {#sec:intro} Consider the following quantum hydrodynamics (QHD) system with linear viscosity, $$\label{QHD} \begin{cases} \rho_t + m_x=0,\\ m_t +\left(\displaystyle\frac{m^2}{\rho}+p(\rho)\right)_x=\mu m_{xx}+k^2\rho\left(\displaystyle\frac{(\sqrt{\rho})_{xx}}{\sqrt{\rho}}\right)_x, \end{cases}$$ where $\rho > 0$ is the density, $m=\rho u$ denotes the momentum ($u$ is the velocity), $p(\rho) = \rho^\gamma$ with $\gamma > 1$ is the pressure function, and $\mu > 0$ and $k > 0$ are positive constants representing viscosity and dispersive coefficients, respectively. The term $(\sqrt{\rho})_{xx}/\sqrt{\rho}$ is known as the (normalized) quantum Bohm potential [@Boh52a; @Boh52b], providing the model with a nonlinear third order dispersive term. It can be interpreted as a quantum correction to the classical pressure (stress tensor). The viscosity term, in contrast, is of linear type. The resulting system is used, for instance, in superfluidity [@Khlt89], or in classical hydrodynamical models for semiconductor devices [@GarC94]. Systems in QHD first appeared in the work by Madelung [@Mdlng27] as an alternative formulation of the Schrödinger equation, written in terms of hydrodynamical variables, and structurally similar to the Navier--Stokes equations of fluid dynamics. It constituted a precursor theory of the de Broglie--Bohm causal interpretation of quantum theory [@Boh52a; @Boh52b; @BHK87]. Since then, quantum fluid models have been applied to describe many physical phenomena, such as the mathematical description of superfluidity [@Khlt89; @Land41], the modeling of quantum semiconductors [@FrZh93; @GarC94], and the dynamics of Bose--Einstein condensates [@DGPS99; @GrantJ73], just to mention a few. 
In recent years, models in QHD have attracted the attention of mathematicians and physicists alike, thanks to their capability of describing particular physical systems, as well as their underlying mathematical challenges. Many mathematical results pertain to the existence of weak solutions [@AnMa09; @AnMa16; @AnMa12; @JMRi02; @YPHQ20] and their stability [@AnMaZh21], relaxation limits [@ACLS21], the analysis of purely dispersive shocks [@GuPi74; @Sgdv64; @Gas01; @HACCES06; @HoAb07], or the study of classical limits when the Planck constant tends to zero [@GasM97]. Most of these works pertain to the purely dispersive model with no viscous effects. The above list of references is by no means exhaustive, and the reader is invited to see the references cited therein for more information. QHD models with viscosity have been studied under the perspective of viscous (numerical) stabilization [@JuMi07a], the physical description of dense plasmas [@DiMu17; @GMOB22; @BrMe10], the existence and stability of viscous-dispersive shocks [@LMZ20a; @LMZ20b; @FPZ22; @Zhel-preprint; @LaZ21b; @LaZ21a; @FPZ23], the existence of global solutions [@GJV09], and vanishing viscosity behaviors [@YaJuY15]. In this paper, we are interested in the *decay structure* of QHD systems with viscosity under the framework of Humpherys's extension [@Hu05] to higher-order systems of the classical results by Kawashima [@KaTh83] and Shizuta and Kawashima [@ShKa85; @KaSh88a] for hyperbolic-parabolic type systems. Humpherys [@Hu05] extended Kawashima and Shizuta's notions of strict dissipativity, genuine coupling and symmetrizability to viscous-dispersive one-dimensional systems of any order, such as the viscous QHD model under consideration. First, Humpherys introduces the concept of *symbol symmetrizability*, which is a generalization of the classical notion of symmetrizability in the sense of Friedrichs [@Frd54], and extends the genuine coupling condition to any symmetric Fourier symbol of the linearized higher-order operator around an equilibrium state. Humpherys then shows that his notion of genuine coupling is equivalent to the strict dissipativity of the system and to the existence of a compensating matrix *symbol* for the linearized system, which allows one to close energy (stability) estimates. Our analysis is divided into two parts. The first one is devoted to proving the local existence of perturbations to constant equilibrium states of the form $U_* = (\rho_*, m_*)$ with $\rho_*> 0$. Since the system under consideration is structurally very similar to the Navier-Stokes-Korteweg system [@HaLi94; @Kot08], we follow the classical proof by Hattori and Li [@HaLi94] very closely. Here we recast local well-posedness in terms of perturbations of an equilibrium state, a formulation which is suitable for our needs. The local existence result (see Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} below) guarantees the existence of classical solutions as well as the appropriate energy bounds for the perturbations which are needed for the global decay analysis. Although the arguments are classical, we present a full proof of local existence for the sake of completeness and to fulfill the requirement of obtaining appropriate energy bounds (see Corollary [Corollary 1](#cor26){reference-type="ref" reference="cor26"}). 
Besides, to our knowledge, there is no local existence result reported in the literature for the QHD system with the particular viscosity term appearing in [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. The second part of the paper focuses on *subsonic* equilibrium states, satisfying the condition $$p'(\rho_*) > \frac{m_*^2}{\rho_*^2}.$$ It can be proved (see Lemma [Lemma 1](#lemsupersonic){reference-type="ref" reference="lemsupersonic"} below) that supersonic states (namely, those which satisfy $p'(\rho_*) < {m_*^2}/{\rho_*^2}$) do not satisfy the strict dissipativity condition for the linearized system, justifying in this case the choice of subsonic states for our analysis. The intermediate case of sonic states with $p'(\rho_*) = {m_*^2}/{\rho_*^2}$ is associated with degeneracies (such as in viscous-dispersive shock theory), and it is not clear whether symbol symmetrization and/or genuine coupling hold in this case. Hence, we have left the analysis of sonic states for future work. We proceed to linearize system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} around a subsonic state and to study its strict dissipativity. It is shown that the linearized QHD system is symbol symmetrizable but not Friedrichs symmetrizable, and that it satisfies the genuine coupling condition. Thanks to a new degree of freedom in the choice of the symbol symmetrizer (see also the related analyses [@PlV22; @PlV23] for Korteweg fluids), it is possible to construct an appropriate compensating matrix symbol for the linearized system (see Lemma [Lemma 1](#lemourK){reference-type="ref" reference="lemourK"}), which is uniformly bounded above in the Fourier parameter and which allows one to close the energy estimates at the linear level. Such estimates underlie a decay structure of *regularity-gain type*, yielding optimal pointwise decay rates of the solutions to the linear system in Fourier space (see [@UDK12; @UDK18] or Remark [Remark 1](#remtypediss){reference-type="ref" reference="remtypediss"} below). The linear decay rates are then used to prove the nonlinear decay of small perturbations of constant equilibrium states, culminating in the global existence and optimal time-decay of perturbation solutions (see our Main Theorem [Theorem 1](#gloexth){reference-type="ref" reference="gloexth"} below). The work of Tong and Xia [@ToXi22] warrants note as the first work (to our knowledge) that analyzes the decay of perturbations of equilibrium states for a QHD system with viscosity (see also Ra and Hong [@RaHo21] for a similar analysis in the case of the QHD system with energy exchanges). Our work differs from the aforementioned works in the sense that their analysis consists of obtaining direct nonlinear energy estimates, relying heavily on the intrinsic structure of the QHD model. The technique presented here examines whether the linearized system around the constant state exhibits some abstract symmetrizability and dissipative properties which can be extrapolated to the nonlinear problem. The paper is structured as follows. Section [2](#seclocale){reference-type="ref" reference="seclocale"} contains the proof of local well-posedness for system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. The problem is recast in terms of perturbations of arbitrary constant equilibrium states. In Section [3](#seclinear){reference-type="ref" reference="seclinear"} we study the linearized system around a subsonic equilibrium state. 
We examine the genuine coupling condition in the sense of Humpherys [@Hu05] and exhibit a family of symbol symmetrizers. The subsonicity of the constant state plays a key role. With this information, we obtain the linear decay rates for the associated semigroup. Finally, Section [4](#secglobal){reference-type="ref" reference="secglobal"} contains the global decay result for small perturbations of constant equilibrium subsonic states. ## On notation {#on-notation .unnumbered} Transposition of vectors or matrices is denoted by the symbol $A^\top$. Linear operators acting on infinite-dimensional spaces are indicated with calligraphic letters (e.g., ${\mathcal{A}}$). We denote the real part of a complex number $\lambda \in \mathbb{C}$ by $\mathrm{Re}\,\lambda$. Standard Sobolev spaces of complex-valued functions on the real line will be denoted as $L^2(\mathbb{R})$ and $H^s(\mathbb{R})$, with $s \in \mathbb{R}$, endowed with the standard inner products and norms. The norm on $H^s(\mathbb{R})$ will be denoted as $\| \cdot \|_s$ and the norm in $L^2$ will be henceforth denoted by $\| \cdot \|_0$. Any other $L^p$-norm will be denoted as $\| \cdot \|_{L^p}$ for $1 \leq p \leq \infty$. The $L^2$-scalar product will be denoted by $\langle \cdot, \cdot \rangle_0$, whereas $\langle \cdot, \cdot \rangle$ is the standard inner product in $\mathbb{C}^n$. For any $T > 0$ we denote by $C_0^{\infty}([0,T] \times \mathbb{R})$ the set of infinitely differentiable functions on $[0,T] \times \mathbb{R}$ such that $\partial_x^i f(t,\cdot) \rightarrow 0$ as $|x| \to \infty$, $i \in \mathbb{N}_0:= \mathbb{N}\cup \{ 0 \}$, that is, functions that for fixed $t$ have all derivatives going to zero for large $|x|$. Let $p$ be a vector in $\mathbb{R}^n$. Its two-norm is $|p| = \left ( \sum_{i=1}^n p_i^2 \right)^{\frac{1}{2}}$. For a matrix $A \in \mathbb{R}^{n \times n}$, $| A | = \sup_{|p|=1}|A p|$ denotes its two-norm. For operators ${\mathcal{A}}, {\mathcal{B}}$, we denote by $[{\mathcal{A}},{\mathcal{B}}] = {\mathcal{A}}{\mathcal{B}}- {\mathcal{B}}{\mathcal{A}}$ their commutator. Finally, we recall Sobolev's embedding theorem: if $f \in H^1(\mathbb{R})$, then $\| f\|_{L^\infty} \leq C \| f\|_{1}$, where the constant $C$ does not depend on $f$. [\[sec:preliminaries\]]{#sec:preliminaries label="sec:preliminaries"} # Local well-posedness theory {#seclocale} This section is devoted to proving the local existence of solutions for [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. First we will derive energy estimates satisfied by solutions of a quasi-linearized system related to [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. Then we will prove the existence of solutions to this system satisfying these estimates. Finally, we will use a fixed point argument to establish local existence for the nonlinear problem. Let $\rho_*> 0$ and $m_*\in \mathbb{R}$ define a constant equilibrium state for system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. We are interested in proving the local existence of perturbations to this constant equilibrium state. For that purpose, we define the following space of perturbations. 
For positive constants $R \geq r >0$, $T>0$ and for any $s \geq 3$ we denote, $$\begin{aligned} X_s((0,T);r,R) := \Big\{ (\rho,m) \, : \; \, &\rho \in C((0,T); H^{s+1}(\mathbb{R})) \cap C^1((0,T); H^{s-1}(\mathbb{R})), \\ & m \in C((0,T); H^s(\mathbb{R})) \cap C^1((0,T); H^{s-2}(\mathbb{R})),\\ &(\rho_x, m_x) \in L^2((0,T); H^{s+1}(\mathbb{R}) \times H^s(\mathbb{R})), \\ &\text{and} \; r \leq \rho(x,t) \leq R \; \; \text{a.e. in } \, (x,t) \in \mathbb{R}\times (0,T) \Big\}. \end{aligned}$$ Henceforth, for any $U = (\rho, m) \in X_s\big((0,T); r,R\big)$ we denote $$\begin{aligned} E_s(t) &:= \sup_{\tau \in [0,t]} \Big( \| \rho(\tau)\|^2_{s+1} + \| m(\tau)\|^2_{s} \Big), \label{defEs}\\ F_s(t) &:= \int_0^t \Big( \| \rho_x(\tau) \|_{s+1}^2 + \| m_x(\tau) \|_s^2 \Big) \, d\tau , \label{defFs}\end{aligned}$$ for all $t \in [0,T]$. Our main goal is to prove the following local existence result. **Theorem 1**. *Let $U_*= (\rho_*, m_*) \in \mathbb{R}^2$ be a constant equilibrium state with $\rho_*>0$. Suppose that $$\label{incond} \rho_0 \in H^{s+1}(\mathbb{R}), \quad m_0 \in H^s(\mathbb{R}),$$ for some $s \geq 3$ are initial perturbations of $(\rho_*, m_*)$ and consider an initial condition of the form $$\label{ic} U(0)+U_* = (\rho_0 + \rho_*, m_0 + m_*).$$ Then there exists a positive constant $a_0 > 0$ such that if $$\| \rho_0 \|_{s+1} + \| m_0 \|_{s} < a_0,$$ then there holds $r_0 \leq \rho_*+ \rho_0(x) \leq R_0$, *a.e.* in $x \in \mathbb{R}$, for some constants $R_0 > r_0 > 0$, and there exists a positive time $T_1 = T_1(a_0) > 0$ such that a unique smooth solution of the form $(\rho(x,t) + \rho_*, m(x,t) + m_*)$, with perturbation belonging to the space $$(\rho ,m) \in X_s\big((0,T_1); \tfrac{1}{2}r_0,2R_0\big),$$ exists for the Cauchy problem of system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} with initial data [\[ic\]](#ic){reference-type="eqref" reference="ic"}. Moreover, the solution satisfies the energy estimate $$\label{localEE} E_s(T_1) + F_s(T_1) \leq C_0 E_s(0),$$ for some constant $C_0 > 0$ depending only on $a_0$.* **Corollary 1** (a priori estimate). *Under the assumptions of Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"}, let $U = (\rho+\rho_*,m+m_*)$ with $(\rho,m) \in X_s\big((0,T); \tfrac{1}{2}r_0,2R_0\big)$ be a local solution of the initial value problem of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} with initial data $U(0)+U_* = (\rho_0+\rho_*,m_0+m_*)$ satisfying [\[incond\]](#incond){reference-type="eqref" reference="incond"}. Then there exists $0 < \varepsilon_1 \leq a_0$, sufficiently small, such that, if for any $0 < t \leq T$ we have $E_s(t)^{1/2} \leq \varepsilon_1$, then there holds $$\label{localaprioriEE} \big( E_s(t) + F_s(t) \big)^{1/2} \leq C_2 E_s(0)^{1/2},$$ where $C_2 = C_2(\varepsilon_1) > 0$ is a positive constant independent of $t > 0$.* *Proof.* Follows directly from the local existence result, Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} (see, e.g., the proof of Theorem 2.2 in [@HaLi96a]). ◻ ## The linear system Our purpose is to formulate the well-posedness for perturbations of a given constant state. 
Hence, consider deviations from the given state $(\rho_*, m_*)$, which will be denoted as $$\rho = \overline{\rho}+ \rho_*, \qquad m = \overline{m}+ m_*.$$ The system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} in the new perturbation variables $(\overline{\rho}, \overline{m})$ reads $$\label{QHD-L-deviation} \begin{aligned} \overline{\rho}_t+\overline{m}_x &= 0,\\ \overline{m}_t+\left(\displaystyle\frac{(\overline{m}+ m_*)^2}{\overline{\rho}+ \rho_*}+p(\overline{\rho}+ \rho_*)\right)_x &=\mu \overline{m}_{xx}+k^2(\overline{\rho}+ \rho_*)\left(\displaystyle\frac{(\sqrt{\overline{\rho}+ \rho_*})_{xx}}{\sqrt{\overline{\rho}+ \rho_*}}\right)_x. \end{aligned}$$ Now we deduce a linearized system that will be useful for proving local existence of solutions. For $\overline{\rho}, \rho: \mathbb{R}\rightarrow \mathbb{R}^+$ and $\overline{m},m :\mathbb{R}\rightarrow \mathbb{R}$, denote $w = (\rho, m)^\top, \overline{w}= (\overline{\rho}, \overline{m})^\top\in \mathbb{R}^2$. Let $T > 0$ and assume that $$\begin{aligned} \label{cond_bound} \sup_{x \in \mathbb{R}, \, t \in [0,T]} \left( \frac{1}{\overline{\rho}+ \rho_*} + \sum_{i=0}^2 |\partial_x^i \overline{w}| \right) \leq \beta_0,\end{aligned}$$ for some constant $\beta_0 > 0$. For such $\overline{w}$ we define $$\begin{aligned} \overline{\alpha}(x,t) &= \left (\frac{\overline{m}(x,t) + m_*}{\overline{\rho}(x,t) + \rho_*} \right )^2 - \gamma (\overline{\rho}(x,t) + \rho_*)^{\gamma - 1},\\ \overline{\beta}(x,t) &= -\frac{2 (\overline{m}(x,t) + m_*)}{\overline{\rho}(x,t) + \rho_*},\end{aligned}$$ and the matrix $$\begin{aligned} \overline{A}(x,t) = \begin{pmatrix} 0 & -1\\ \overline{\alpha}(x,t) & \overline{\beta}(x,t) \end{pmatrix}.\end{aligned}$$ Now, let us denote $$\zeta = k^2 \dfrac{\overline{\rho}_x}{\overline{\rho}+ \rho_*},\qquad \eta = \dfrac{k^2}{2} \left ( \dfrac{\overline{\rho}_x}{\overline{\rho}+ \rho_*} \right )^2.$$ Moreover, define the operators $$\begin{aligned} &{\mathcal{T}}_1 w = \begin{pmatrix} 0 \\ \frac{k^2}{2} \rho_{xxx} \end{pmatrix}, &&{\mathcal{T}}_2 w = \begin{pmatrix} 0 \\ \mu m_{xx} \end{pmatrix},\\ &{\mathcal{T}}_3 w = \begin{pmatrix} 0 \\ - \zeta \rho_{xx} \end{pmatrix}, &&{\mathcal{T}}_4 w = \begin{pmatrix} 0\\ \eta \rho_x \end{pmatrix},\end{aligned}$$ and let $$\label{operator_L} {\mathcal{L}}w := \overline{A}w_x + \sum_{i=1}^{4} {\mathcal{T}}_i w.$$ For any vector-valued function $f = (f_1,f_2)^\top$, we consider the following linear system $$\label{linear_system} \left\{ \begin{aligned} \partial_t w &= {\mathcal{L}}w + f,\\ w(0) &= w_0. \end{aligned} \right.$$ This system has the property that if $w = \overline{w}$ and $f = 0$, then $\overline{w}$ solves [\[QHD-L-deviation\]](#QHD-L-deviation){reference-type="eqref" reference="QHD-L-deviation"}. 
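As a sanity check on this quasi-linearization (a minimal SymPy sketch added for illustration; it is not part of the original argument), the following code verifies symbolically that the Bohm-potential term, written in the full density $\rho = \overline{\rho} + \rho_*$ (so that $\rho_x = \overline{\rho}_x$), expands into the dispersive operators above, namely $k^2\rho\big((\sqrt{\rho})_{xx}/\sqrt{\rho}\big)_x = \tfrac{k^2}{2}\rho_{xxx} - \zeta\rho_{xx} + \eta\rho_x$ with $\zeta = k^2\rho_x/\rho$ and $\eta = \tfrac{k^2}{2}(\rho_x/\rho)^2$.

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', positive=True)
rho = sp.Function('rho', positive=True)(x)   # stands for rho-bar + rho_*

# Nonlinear dispersive term: k^2 * rho * ( (sqrt(rho))_xx / sqrt(rho) )_x
bohm = k**2 * rho * sp.diff(sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho), x)

# Quasi-linear form: (k^2/2) rho_xxx - zeta * rho_xx + eta * rho_x
rho_x = sp.diff(rho, x)
zeta = k**2 * rho_x / rho
eta = k**2 * (rho_x / rho)**2 / 2
quasilinear = k**2 * sp.diff(rho, x, 3) / 2 - zeta * sp.diff(rho, x, 2) + eta * rho_x

print(sp.simplify(bohm - quasilinear))  # expected output: 0
```

The convective part of ${\mathcal{L}}$ can be checked in the same way, since $\overline{A}$ is minus the Jacobian of the flux with respect to $(\rho,m)$ evaluated along $\overline{w}$.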
Moreover, we denote $$\label{eq:components_L} {\mathcal{L}}w = \begin{pmatrix} {\mathcal{L}}_1 w\\ {\mathcal{L}}_2 w \end{pmatrix}.$$ ## The zeroth order estimate Here we derive an estimate satisfied by smooth solutions to the linear system [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"}. For that purpose, let us define the norm $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert w \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{[0,T]}^2 := \sup_{t \in [0,T]} (\| w(t) \|_0^2 + \| \partial_x \rho(t) \|_0^2) + \int_{0}^T \| \partial_x m(t) \|_0^2 \, dt,$$ for any arbitrary $T > 0$. **Lemma 1** (zeroth order estimate). *Suppose that $\overline{w}$ satisfies the bound [\[cond_bound\]](#cond_bound){reference-type="eqref" reference="cond_bound"} and $w, f \in C_0^{\infty}([0,T] \times \mathbb{R})$ for some $T > 0$. Then the following estimate holds for $t \in (0,T)$, $$\label{eq:estimate_0_order} \begin{aligned} \partial_t\left ( \frac{1}{2} \| w \|_0^2 + \frac{k^2}{4} \|\rho_x\|_0^2 \right ) &+ \left (\mu - \frac{C_1 \varepsilon_1}{2} - \frac{C_3 \varepsilon_2}{2} \right) \|m_x\|_0^2 \\ &\leq \frac{1}{2} \left (\frac{C_1}{\varepsilon_1} + k^2 C_2 + C_4 + 1 \right) \| w \|_0^2 \\ &\quad + \frac{1}{2} \left( \frac{k^2}{2} + C_1 \varepsilon_1 + k^2 C_2 + \frac{C_3}{\varepsilon_2} + C_4 \right) \| \rho_x\|_0^2 \\ &\quad + \frac{1}{2}\| f \|_0^2 + \frac{k^2}{4}\| f_1 \|_1^2, \end{aligned}$$ for any $\varepsilon_1, \varepsilon_2 >0$ such that $$\mu - \frac{C_1 \varepsilon_1}{2} - \frac{C_3 \varepsilon_2}{2} > 0,$$ and explicit constants $C_i> 0,\mbox{ }i=1,...,4$ depending only on $\beta_0$, $\rho_*$, $m_*$ and the physical parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. Moreover, $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert w \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{[0,T]}^2 \leq C(T) \Big( \| w_0\|_0^2 + \| \rho_0 \|_1^2 + \int_0^T \big(\|f(s)\|_0^2 + \|f_1(s)\|_1^2\big) \, ds \Big). \label{eq:integ_estimate_0_order}$$ The constant $C(T)$ in [\[eq:integ_estimate_0\_order\]](#eq:integ_estimate_0_order){reference-type="eqref" reference="eq:integ_estimate_0_order"} depends only on $T$, $\beta_0$ and the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}.* *Proof.* Taking the $L^2$-scalar product of [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} with $w$, we get $$\label{eq:scalar_product_0_order} \langle \partial_t w, w \rangle_0 = \langle \overline{A}w_x, w \rangle_0 + \left \langle \sum_{i=1}^{4} {\mathcal{T}}_i w, w \right\rangle_0 + \langle f, w \rangle_0.$$ We also have $$\begin{aligned} \langle \partial_t w, w \rangle_0 = \int_\mathbb{R}\left ( \rho \partial_t \rho + m \partial_t m\right )\, dx = \tfrac{1}{2} \int_\mathbb{R}\left ( (\rho^2)_t + (m^2)_t \right ) \, dx = \tfrac{1}{2}\partial_t(\| w\|_0^2).\end{aligned}$$ Moreover, $$|(\overline{A}w_x)(x,t)| \leq |\overline{A}(x,t)| |w_x(x,t)|,\qquad \text{for all } (x,t) \in \mathbb{R}\times [0,T].$$ Since $\sup_{x \in \mathbb{R}, t \in [0,T]} (\overline{\rho} + \rho_*)^{-1}(x,t) \leq \beta_0$, we can find a positive constant $C_1$, depending only on $\beta_0$, $\rho_*$ and $m_*$, such that $$\sup_{x \in \mathbb{R}, \, t \in [0,T]} |\overline{A}(x,t)| \leq C_1.$$ Applying Young's inequality we deduce $$\label{eq:est_A} \begin{aligned} \langle \overline{A}w_x, w \rangle_0 = \int_{\mathbb{R}} (\overline{A}w_x)^\top w \, dx &\leq \int_\mathbb{R}| \overline{A}| |w_x| |w| \, dx \\ &\leq C_1 \int_\mathbb{R}|w_x| |w| \, dx \\ &\leq C_1 \| w_x\|_0 \| w \|_0 \\ &\leq \frac{C_1}{2 \varepsilon_1}\| w\|_0^2 + \frac{C_1 \varepsilon_1}{2}\|w_x\|_0^2 \\ &= \frac{C_1}{2 \varepsilon_1}\| w\|_0^2 + \frac{C_1 \varepsilon_1}{2} \| \rho_x \|_0^2 + \frac{C_1 \varepsilon_1}{2} \|m_x\|_0^2. \end{aligned}$$ Denote $I = \langle {\mathcal{T}}_1 w, w \rangle_0$. 
Then, upon integration by parts, one gets $$\begin{aligned} \label{eq_dispersive} I = \frac{k^2}{2} \int_\mathbb{R}\rho_{xxx} m \, dx = -\frac{k^2}{2} \int_\mathbb{R}\rho_{xx} m_x \, dx.\end{aligned}$$ The first component of [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} implies that $-m_x = \rho_t - f_1$. Substituting into [\[eq_dispersive\]](#eq_dispersive){reference-type="eqref" reference="eq_dispersive"} we get $$\begin{aligned} \label{eq:util1} I = \frac{k^2}{2} \int_\mathbb{R}(\rho_t - f_1) \rho_{xx} \, dx = -\frac{k^2}{2} \int_\mathbb{R}\rho_{tx} \rho_x \, dx + \frac{k^2}{2}\int_\mathbb{R}(f_1)_x \rho_x \, dx.\end{aligned}$$ We clearly have $$-\frac{k^2}{2} \int_\mathbb{R}\rho_{tx} \rho_x \, dx = -\frac{k^2}{4} \partial_t \int_\mathbb{R}(\rho_x)^2 \, dx = -\frac{k^2}{4} \partial_t(\|\rho_x\|_0^2),$$ and, $$\frac{k^2}{2}\int_\mathbb{R}(f_1)_x \rho_x \, dx \leq \frac{k^2}{2} \| f_1 \|_1 \| \rho_x \|_0 \leq \frac{k^2}{4} \|f_1\|_1^2 + \frac{k^2}{4} \| \rho_x \|_0^2.$$ Substituting back into [\[eq:util1\]](#eq:util1){reference-type="eqref" reference="eq:util1"} we obtain $$\label{eq:est_T1} \langle {\mathcal{T}}_1 w, w \rangle_0 \leq -\frac{k^2}{4} \partial_t(\|\rho_x\|_0^2) + \frac{k^2}{4} \|f_1\|_1^2 + \frac{k^2}{4} \| \rho_x \|_0^2.$$ Now, integrate by parts to get $$\langle {\mathcal{T}}_2 w, w \rangle_0 = \mu \int_\mathbb{R}m m_{xx} \, dx = -\mu \| m_x \|_0^2.$$ We also have, after integration by parts, that $$\begin{aligned} \langle {\mathcal{T}}_3 w, w \rangle_0 &= - \int_\mathbb{R}\zeta \rho_{xx} m \, dx = \int_\mathbb{R}\zeta_x m \rho_x \, dx + \int_\mathbb{R}\zeta m_x \rho_x \, dx.\end{aligned}$$ Observe that $\zeta_x$ and $\zeta$ involve only derivatives up to second and first order, respectively. Hence, due to the bound [\[cond_bound\]](#cond_bound){reference-type="eqref" reference="cond_bound"}, we can find positive constants $C_2$ and $C_3$ depending only on $\beta_0$, such that $$\begin{aligned} \sup_{x \in \mathbb{R}, \, t \in [0,T]} | \zeta_x | \leq C_2, \qquad \sup_{x \in \mathbb{R}, \, t \in [0,T]} | \zeta | \leq C_3.\end{aligned}$$ Hence, we obtain $$\begin{aligned} \langle {\mathcal{T}}_3 w, w \rangle_0 &\leq C_2 \int_\mathbb{R}|m| |\rho_x| \, dx + C_3 \int_\mathbb{R}|m_x| |\rho_x| \, dx \nonumber\\ &\leq C_2 \| m \|_0 \| \rho_x \|_0 + C_3 \| m_x \|_0 \| \rho_x \|_0 \nonumber\\ &\leq \frac{C_2}{2} \| m \|_0^2 + \frac{C_2}{2} \| \rho_x \|_0^2 + \frac{C_3}{2 \varepsilon_2} \| \rho_x \|_0^2 + \frac{C_3 \varepsilon_2}{2} \| m_x \|_0^2 \nonumber\\ &\leq \frac{C_2}{2} \| w \|_0^2 + \frac{C_2}{2} \| \rho_x \|_0^2 + \frac{C_3}{2 \varepsilon_2} \| \rho_x \|_0^2 + \frac{C_3 \varepsilon_2}{2} \| m_x \|_0^2. \label{eq:est_T3}\end{aligned}$$ Moreover, from the definition of ${\mathcal{T}}_4$ we have $$\begin{aligned} \langle {\mathcal{T}}_4 w, w \rangle_0 = \int_\mathbb{R}\eta \rho_x m \, dx.\end{aligned}$$ Denote $$C_4(T) := \sup_{x \in \mathbb{R}, \, t \in [0,T]}|\eta|.$$ Then we have $$\begin{aligned} \langle {\mathcal{T}}_4 w, w \rangle_0 &\leq C_4 \int_\mathbb{R}|\rho_x| |m| \, dx \nonumber\\ &\leq \frac{C_4}{2} \| \rho_x \|_0^2 + \frac{C_4}{2} \| m \|_0^2 \nonumber\\ &\leq \frac{C_4}{2} \| \rho_x \|_0^2 + \frac{C_4}{2} \| w \|_0^2. 
\label{eq:est_T4}\end{aligned}$$ Finally, from $\langle f, w \rangle_0 \leq \| f \|_0 \| w\|_0 \leq \frac{1}{2} \| f \|_0^2 + \frac{1}{2} \|w\|_0^2$ and substituting [\[eq:est_A\]](#eq:est_A){reference-type="eqref" reference="eq:est_A"}, [\[eq:est_T1\]](#eq:est_T1){reference-type="eqref" reference="eq:est_T1"} - [\[eq:est_T4\]](#eq:est_T4){reference-type="eqref" reference="eq:est_T4"} into [\[eq:scalar_product_0\_order\]](#eq:scalar_product_0_order){reference-type="eqref" reference="eq:scalar_product_0_order"}, we arrive at estimate [\[eq:estimate_0\_order\]](#eq:estimate_0_order){reference-type="eqref" reference="eq:estimate_0_order"}. Now, for any $c \in (0,1)$, let us choose $\varepsilon_1$ and $\varepsilon_2$ such that $$\frac{C_1 \varepsilon_1}{2} + \frac{C_3 \varepsilon_2}{2} \leq (1 - c) \mu.$$ We obtain $$\partial_t \Big( \tfrac{1}{2} \| w \|_0 ^2 + \tfrac{1}{4} k^2 \|\rho_x\|_0^2 \Big) + c \mu \| m_x \|_0^2 \leq C \Big( \tfrac{1}{2}\| w \|_0 ^2 + \tfrac{1}{4} k^2 \|\rho_x\|_0^2 +\| f\|_0^2 + \| f_1 \|_1^2 \Big), \label{eq:estimate_0_order_2}$$ where $C$ depends only on $\beta_0$. Then, inequality [\[eq:estimate_0\_order_2\]](#eq:estimate_0_order_2){reference-type="eqref" reference="eq:estimate_0_order_2"} clearly implies $$\begin{aligned} \label{eq:util4} \partial_t \Big( \tfrac{1}{2} \| w \|_0^2 + \tfrac{1}{4} k^2 \|\rho_x\|_0^2 \Big) \leq C \Big( \tfrac{1}{2}\| w \|_0^2 + \tfrac{1}{4}k^2 \|\rho_x\|_0^2 +\| f\|_0^2 + \| f_1 \|_1^2 \Big).\end{aligned}$$ Apply Gronwall's inequality to [\[eq:util4\]](#eq:util4){reference-type="eqref" reference="eq:util4"} in order to obtain $$\begin{aligned} \tfrac{1}{2}\| w(t)\|_0^2 + \tfrac{1}{4} k^2\| \partial_x \rho(t)\|_0^2 &\leq e^{C t} \left ( \tfrac{1}{2} \| w_0 \|_0^2 + \frac{1}{4} k^2 \| \partial_x \rho_0 \|_0^2 \right ) \nonumber\\ &\quad + C \int_{0}^T e^{C(t-s)} \Big(\| f(s) \|_0^2 + \| f_1(s) \|_1^2 \Big) \, ds, \label{gronwall1}\end{aligned}$$ for all $t \in [0,T]$, where $C = C(T)>0$ depends only on $T$ and $\beta_0$. Substituting [\[gronwall1\]](#gronwall1){reference-type="eqref" reference="gronwall1"} into [\[eq:estimate_0\_order_2\]](#eq:estimate_0_order_2){reference-type="eqref" reference="eq:estimate_0_order_2"} and integrating from $0$ to $T$ yields [\[eq:integ_estimate_0\_order\]](#eq:integ_estimate_0_order){reference-type="eqref" reference="eq:integ_estimate_0_order"}. ◻ ## Higher order estimates For $n \in \mathbb{N}$, consider the norm $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert w \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{n,[0,T]}^2 := \sup_{t \in [0,T]} \Big(\| \rho(t) \|_{n+1}^2 + \| m(t) \|_n^2\Big) + \int_{0}^T \|m_x(t)\|_n^2 \, dt.$$ The following result establishes estimates of higher order on the solutions. **Lemma 1** (higher order estimate). *Let $n \in \mathbb{N}$, $T > 0$ and $c \in (0,1)$. Suppose $$\begin{aligned} \label{cond_bound_higher_order} \sup_{x \in \mathbb{R}, \, t \in [0,T]} \left( \frac{1}{\overline{\rho}+ \rho_*} + \sum_{i=0}^2 |\partial_x^i \overline{w}| \right) + {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \overline{w} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{n,[0,T]}^2 \leq \beta_n,\end{aligned}$$ where $\beta_n > 0$ is a constant. 
Then, we have $$\begin{aligned} \partial_t \Big( \tfrac{1}{2} \| w \|_n^2 + \tfrac{1}{4} k^2 \| \rho \|_{n+1}^2 \Big) + c \mu \| m \|_{n+1}^2 \leq C_n \Big( \tfrac{1}{2} \| w \|_n^2 + \tfrac{1}{4} k^2 \| \rho \|_{n+1}^2 + \| f \|_n^2 + \| f_1 \|_{n+1}^2 \Big), \label{estimate_higher_order} \end{aligned}$$ for all $t \in (0,T)$, and $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert w \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{n,[0,T]}^2 \leq C_n(T) \big( \| w_0 \|_n^2 + \| \rho_0 \|_{n+1}^2 \big) + C_n(T) \int_{0}^{T} \!\big( \| f(s) \|_n^2 + \| f_1(s) \|_{n+1}^2 \big) \, ds, \label{integral_estimate_higher_order}$$ where $C_n$ depends only on $T$, $\beta_n$, $\rho_*$, $m_*$ and the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}.* *Proof.* Differentiating [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} we obtain that the equation satisfied by $\partial^j_x w$ is $$\label{eq:higher_order_sys} \left\{ \begin{aligned} (\partial_x^j w)_t &= {\mathcal{L}}\partial^j_x w +[\partial^j_x,{\mathcal{L}}] w + \partial^j_x f,\\ \partial_x^j w(0) &= \partial_x^j w_0. \end{aligned} \right.$$ for $j \in \mathbb{N}_0$. Suppose $j \in \{0,...,n\}$ and take the scalar product of [\[eq:higher_order_sys\]](#eq:higher_order_sys){reference-type="eqref" reference="eq:higher_order_sys"} with $\partial_x^j w$. The result is $$\langle (\partial_x^j w)_t, \partial_x^j w \rangle_0 = \langle {\mathcal{L}}\partial^j_x w, \partial_x^j w \rangle_0 +\langle [\partial^j_x,{\mathcal{L}}] w, \partial_x^j w \rangle_0 + \langle \partial^j_x f, \partial_x^j w \rangle_0.$$ Clearly we have $$\langle (\partial_x^j w)_t, \partial_x^j w \rangle_0 = \tfrac{1}{2} \partial_t (\| \partial_x^j w \|_0^2).$$ Moreover, $${\mathcal{L}}\partial_x^j w = \overline{A}\partial_x^{j+1} w + \sum_{i=1}^{4} {\mathcal{T}}_i(\partial_x^j w).$$ In the sequel, $C$ denotes a positive constant that may depend only on $\beta_n$, $\rho_*$, $m_*$ and on the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}, and which may change from line to line. We have $$\begin{aligned} \langle \overline{A}\partial_x^{j+1} w, \partial_x^j w \rangle_0 &\leq C \| \partial_x^{j+1} w \|_0 \| \partial_x^j w \|_0 \nonumber\\ &\leq C \varepsilon_1 \| \partial_x^{j+1} w \|_0^2 + C(\varepsilon_1) \| \partial_x^j w \|_0^2 \nonumber \\ &\leq C \varepsilon_1 \| \partial_x^{j+1} \rho \|_0^2 + C \varepsilon_1 \| \partial_x^{j+1} m \|_0^2 + C(\varepsilon_1) \| \partial_x^j w \|_0^2. \label{estimate_A}\end{aligned}$$ Now we shall estimate the contribution from the dispersive term. Integrating by parts we get $$\label{eq:dispersive_higher_order} \langle {\mathcal{T}}_1 \partial_x^j w, \partial_x^j w \rangle_0 = \frac{k^2}{2} \int_\mathbb{R}(\partial_x^j m) (\partial_x^{j+3} \rho) \, dx = - \frac{k^2}{2} \int_\mathbb{R}(\partial_x^{j+1} m) (\partial_x^{j+2} \rho) \, dx.$$ The first component of [\[eq:higher_order_sys\]](#eq:higher_order_sys){reference-type="eqref" reference="eq:higher_order_sys"} reads (see equation [\[eq:components_L\]](#eq:components_L){reference-type="eqref" reference="eq:components_L"}), $$(\partial_x^j \rho)_t = - \partial_x^{j+1} m + [\partial_x^j, {\mathcal{L}}_1] w + \partial_x^j f_1.$$ Since $[\partial_x^j, {\mathcal{L}}_1] w = 0$, we obtain $\partial_x^{j+1} m = -(\partial_x^j \rho)_t + \partial_x^j f_1$. 
Substituting into [\[eq:dispersive_higher_order\]](#eq:dispersive_higher_order){reference-type="eqref" reference="eq:dispersive_higher_order"} we get $$\begin{aligned} \langle {\mathcal{T}}_1 \partial_x^j w, \partial_x^j w \rangle_0 = I_1 + I_2, \label{eq:T1}\end{aligned}$$ where $$I_1 := \frac{k^2}{2}\int_\mathbb{R}(\partial_x^j \rho)_t (\partial_x^{j+2} \rho) \, dx, \quad I_2 := - \frac{k^2}{2}\int_\mathbb{R}(\partial_x^j f_1) (\partial_x^{j+2} \rho) \, dx.$$ First, notice that $$I_1 = -\frac{k^2}{2}\int_\mathbb{R}\partial_x ( (\partial_x^j \rho)_t ) (\partial_x^{j+1} \rho) \, dx = -\frac{k^2}{2}\int_{\mathbb{R}} \partial_t (\partial_x^{j+1} \rho) (\partial_x^{j+1} \rho) \, dx = - \frac{k^2}{4} \partial_t \| \partial_x^{j+1} \rho \|_0^2.$$ On the other hand, we can estimate $I_2$ by $$I_2 = -\frac{k^2}{2}\int_\mathbb{R}(\partial_x^{j+1} f_1) (\partial_x^{j+1}\rho) \, dx \leq C \|f_1\|_{j+1}^2 + C \|\partial_x^{j+1}\rho\|_0^2. $$ Substitution into [\[eq:T1\]](#eq:T1){reference-type="eqref" reference="eq:T1"} yields $$\label{estimate_T1} \langle {\mathcal{T}}_1 \partial_x^j w, \partial_x^j w \rangle_0 \leq - \tfrac{1}{4} k^2 \partial_t \| \partial_x^{j+1} \rho \|_0^2 + C \|f_1\|_{j+1}^2 + C \|\partial_x^{j+1}\rho\|_0^2.$$ Now, let us consider $\langle {\mathcal{T}}_2 \partial_x^j w, \partial_x^j w\rangle_0$. Integrate by parts in order to obtain $$\langle {\mathcal{T}}_2 \partial_x^j w, \partial_x^j w \rangle_0 = \mu \int_{\mathbb{R}} (\partial_x^j m) (\partial_x^{j+2} m) \, dx = -\mu \int_\mathbb{R}(\partial_x^{j+1} m)^2 \, dx = -\mu \| \partial_x^{j+1} m \|_0^2. $$ Then, we deduce $$\langle {\mathcal{T}}_3 \partial_x^j w, \partial_x^j w \rangle_0= - \int_{\mathbb{R}} \zeta (\partial_x^j m) (\partial_x^{j+2} \rho) \, dx = \int_\mathbb{R}\zeta_x (\partial_x^j m) (\partial_x^{j+1} \rho) \, dx + \int_\mathbb{R}\zeta (\partial_x^{j+1} m) (\partial_x^{j+1} \rho) \, dx.$$ Notice that $\left \| \zeta \right \|_{L^\infty}$, $\left \| \zeta_x \right \|_{L^\infty}$ and $\left \| \eta \right \|_{L^\infty}$ are bounded by a constant depending only on $\beta_n$ and $\rho_*$. Therefore, $$\label{estimate_T3} \begin{aligned} \langle {\mathcal{T}}_3 \partial_x^j w, \partial_x^j w \rangle_0 &\leq \| \zeta_x \|_{L^\infty} \int_\mathbb{R}| \partial_x^j m | | \partial_x^{j+1} \rho | \, dx + \| \zeta \|_{L^\infty} \int_\mathbb{R}| \partial_x^{j+1} m | | \partial_x^{j+1} \rho | \, dx \\ &\leq C \| \partial_x^j m \|_0 \| \partial_x^{j+1} \rho \|_0 + C \| \partial_x^{j+1} m \|_0 \| \partial_x^{j+1} \rho \|_0 \\ &\leq C \| \partial_x^j m \|_0^2 + \frac{C}{\varepsilon_2} \|\partial_x^{j+1} \rho \|_0^2 + C \varepsilon_2 \| \partial_x^{j+1} m\|_0^2. \end{aligned}$$ Now, $$\begin{aligned} \langle {\mathcal{T}}_4 \partial_x^j w, \partial_x^j w \rangle_0 &= \int_{\mathbb{R}} \eta (\partial_x^j m) (\partial_x^{j+1}\rho) \, dx \nonumber\\ &\leq \| \eta \|_{L^\infty} \!\int_\mathbb{R}|\partial_x^j m| |\partial_x^{j+1}\rho| \, dx \nonumber\\ &\leq C \big( \| \partial_x^j m \|_0^2 + \| \partial_x^{j+1}\rho \|_0^2 \big). \label{estimate_T4}\end{aligned}$$ Furthermore, $$\langle \partial_x^j f, \partial_x^j w \rangle_0 \leq \| \partial_x^j f \|_0 \| \partial_x^j w\|_0 \leq C \big( \| \partial_x^j f \|_0^2 + \| \partial_x^j w \|_0^2 \big).$$ Now, let us consider the contribution from the term involving the commutator, which reads $\langle [\partial_x^j, {\mathcal{L}}] w, \partial_x^j w \rangle_0$. 
If $j = 0$, we have that $[\partial_x^j, {\mathcal{L}}]w = 0$. Therefore we will assume that $j \geq 1$. Notice that we also have $[\partial_x^j, {\mathcal{L}}_1] w = 0$. Let us first prove a statement that will be used later. If $i \in \{0,...,j-1\}$, then $\partial_x^{j-i}\overline{\alpha}$ and $\partial_x^{j-i}\overline{\beta}$ contain derivatives up to order $j$ of $\overline{\rho}$ and $\overline{m}$. Moreover, $\partial_x^{j-i}\zeta$ and $\partial_x^{j-i}\eta$ contain derivatives up to order $j+1$ of $\overline{\rho}$. Therefore, $$\| \partial_x^{j-i}\overline{\alpha}\|_0\mbox{, }\| \partial_x^{j-i}\overline{\beta}\|_0 \mbox{, }\| \partial_x^{j-i}\zeta \|_0 \mbox{, }\| \partial_x^{j-i}\eta \|_0 \leq C(\beta_n, \rho_*, m_*),$$ for $i \in \{0,...,j-1\}$. By the Leibniz formula we have $$\partial_x^j(\overline{\alpha}\partial_x \rho) = \sum_{i=0}^j {j \choose i} (\partial_x^{j-i} \overline{\alpha}) (\partial_x^{i+1} \rho).$$ Let us denote $a_1 = \partial_x^j(\overline{\alpha}\partial_x \rho) - \overline{\alpha}\partial_x^{j+1}\rho$. Then, $$\begin{aligned} \left \| a_1 \right \|_0 = \left \| \sum_{i=0}^{j-1} {j \choose i} (\partial_x^{j-i} \overline{\alpha}) (\partial_x^{i+1} \rho) \right \|_0 &\leq C \sum_{i=0}^{j-1} \| (\partial_x^{j-i} \overline{\alpha}) (\partial_x^{i+1} \rho) \|_0 \\&\leq C \sum_{i=0}^{j-1} \| \partial_x^{j-i} \overline{\alpha}\|_0 \| \partial_x^{i+1} \rho \|_{L^\infty} . \end{aligned}$$ Also, by the Sobolev embedding theorem, we have the estimate $$\| \partial_x^{i+1} \rho \|_{L^\infty} \leq C \| \rho \|_{j+1},\qquad i \in \{0,...,j-1\}.$$ Therefore, $\left \| a_1 \right \|_0 \leq C \| \rho \|_{j+1}$ and $$\langle \partial_x^j(\overline{\alpha}\partial_x \rho) - \overline{\alpha}\partial_x^{j+1}\rho, \partial_x^j m \rangle_0 \leq C \| \rho \|_{j+1} \| m \|_j \leq C \| \rho \|_{j+1}^2 + C \| m \|_{j}^2. \label{estimate_commutator1}$$ Now, denote $a_2 = \partial_x^j(\overline{\beta}\partial_x m) - \overline{\beta}\partial_x^{j+1}m$. We clearly have the estimate $$\begin{aligned} \left \| a_2 \right \|_0 = \left \| \sum_{i=0}^{j-1} {j \choose i} (\partial_x^{j-i} \overline{\beta}) (\partial_x^{i+1} m) \right \|_0 &\leq C \sum_{i=0}^{j-1} \| (\partial_x^{j-i} \overline{\beta}) (\partial_x^{i+1} m) \|_0 \\ &\leq C \sum_{i=0}^{j-1} \| \partial_x^{j-i} \overline{\beta}\|_0 \| \partial_x^{i+1} m \|_{L^\infty}. \end{aligned}$$ Since we have $\| \partial_x^{i+1} m \|_{L^\infty} \leq C \| m \|_{j+1}$ for all $i \in \{0,...,j-1\}$, it follows that $\left \| a_2 \right \|_0 \leq C \| m \|_{j+1}$ and $$\label{estimate_commutator2} \langle \partial_x^j(\overline{\beta}\partial_x m) - \overline{\beta}\partial_x^{j+1}m, \partial_x^j m \rangle_0 \leq C \| m \|_{j+1} \| m \|_j \leq C \varepsilon_3 \| m \|_{j+1}^2 + \frac{C}{\varepsilon_3} \| m \|_j^2.$$ Observe that $[\partial_x^j, {\mathcal{T}}_1] = [\partial_x^j, {\mathcal{T}}_2] = 0$, since these operators have constant coefficients. 
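To fix ideas, note that in the lowest-order case $j = 1$ each of the remaining commutators reduces to a single Leibniz term; for instance, $$[\partial_x, {\mathcal{T}}_3]\rho = -\zeta_x \, \partial_x^2 \rho, \qquad [\partial_x, {\mathcal{T}}_4]\rho = \eta_x \, \partial_x \rho,$$ so the estimates below simply bound products of one derivative of a coefficient with derivatives of $\rho$ of order at most $j+1$.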
Let us now estimate the contribution from ${\mathcal{T}}_3$: $$\begin{aligned} \langle [\partial_x^j, {\mathcal{T}}_3]\rho, \partial_x^j m \rangle_0 &= - \int_\mathbb{R}\left ( \partial_x^j ( \zeta \partial_x^2 \rho) - \zeta \partial_x^{j+2}\rho \right ) (\partial_x^j m) \, dx \\ &= - \sum_{i=0}^{j-1} {j \choose i} \int_\mathbb{R}(\partial_x^{j-i}\zeta) (\partial_x^{i+2} \rho) (\partial_x^j m) \, dx \\ &\leq C \sum_{i=0}^{j-1} \| \partial_x^{j-i}\zeta \|_0 \| (\partial_x^{i+2} \rho) (\partial_x^j m )\|_0 \\ &\leq C \sum_{i=0}^{j-1} \| (\partial_x^{i+2} \rho)( \partial_x^j m) \|_0 \\ &\leq C \sum_{i=0}^{j-1} \| \partial_x^{i+2} \rho \|_0 \| \partial_x^j m \|_{L^\infty}.\end{aligned}$$ We have $i \in \{ 0,...,j-1 \}$ or, equivalently, $i+2 \in \{2,...,j+1\}$. Therefore $\| \partial_x^{i+2} \rho \|_0 \leq \| \rho\|_{j+1}$. Moreover, $\| \partial_x^j m \|_{L^\infty} \leq C \| m \|_{j+1}$. Hence we get $$\langle [\partial_x^j, {\mathcal{T}}_3]\rho, \partial_x^j m \rangle_0 \leq C \| \rho\|_{j+1} \| m \|_{j+1} \leq \frac{C}{\varepsilon_4} \| \rho \|_{j+1}^2 + C \varepsilon_4 \| m \|_{j+1}^2. \label{estimate_commutator_T3}$$ Finally, let us consider the contribution from ${\mathcal{T}}_4$. We have $$\begin{aligned} \langle [\partial_x^j, {\mathcal{T}}_4]\rho, \partial_x^j m \rangle_0 &= \int_\mathbb{R}\left ( \partial_x^j ( \eta \partial_x \rho) - \eta \partial_x^{j+1}\rho \right ) (\partial_x^j m) \, dx \\ &= \sum_{i=0}^{j-1} {j \choose i} \int_\mathbb{R}(\partial_x^{j-i}\eta) (\partial_x^{i+1} \rho) (\partial_x^j m) \, dx\\ &\leq C \sum_{i=0}^{j-1} \| \partial_x^{j-i}\eta \|_0 \| (\partial_x^{i+1} \rho)( \partial_x^j m) \|_0\\ &\leq C \sum_{i=0}^{j-1} \| (\partial_x^{i+1} \rho)( \partial_x^j m) \|_0 \\ &\leq C \sum_{i=0}^{j-1} \| \partial_x^{i+1} \rho \|_{L^\infty} \| \partial_x^j m \|_0.\end{aligned}$$ Since $i \in \{ 0, ..., j-1 \}$, we have $i+1 \in \{1, ..., j \}$ and, by the Sobolev embedding theorem, $\| \partial_x^{i+1} \rho \|_{L^\infty} \leq C \| \rho \|_{j+1}$. Moreover, there holds $\| \partial_x^j m \|_0 \leq \| m \|_{j}$. This implies that $$\left | \langle [\partial_x^j, {\mathcal{T}}_4]\rho, \partial_x^j m \rangle_0 \right | \leq C \big( \| \rho\|_{j+1}^2 + \| m \|_{j}^2 \big). \label{estimate_commutator_T4}$$ The first component of [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} is $\rho_t = - m_x + f_1$. Taking the scalar product of this equation with $k^2 \rho/2$ we infer $$\label{energy_estimate_continuity_equation} \frac{k^2}{4} \partial_t ( \| \rho \|_0^2 ) \leq C \left ( \| \rho \|_0^2 + \| \rho_x \|_0^2 + \| m \|_0^2 + \| f_1 \|_0^2 \right ).$$ Let $c \in (0,1)$ be an arbitrary constant. Using estimates [\[estimate_A\]](#estimate_A){reference-type="eqref" reference="estimate_A"} and [\[estimate_T1\]](#estimate_T1){reference-type="eqref" reference="estimate_T1"} through [\[energy_estimate_continuity_equation\]](#energy_estimate_continuity_equation){reference-type="eqref" reference="energy_estimate_continuity_equation"}, summing over $j \in \{0,...,n\}$, and choosing $\varepsilon_i > 0$, $i \in \{1,...,4\}$, sufficiently small, we obtain $$\partial_t \Big( \tfrac{1}{2} \| w \|_n^2 + \tfrac{1}{4} k^2 \| \rho \|_{n+1}^2 \Big) + c \mu \| m \|_{n+1}^2 \leq C \Big( \| w \|_n^2 + \| \rho\|_{n+1}^2 + \| f \|_n^2 + \| f_1 \|_{n+1}^2 \Big).$$ This last inequality implies [\[estimate_higher_order\]](#estimate_higher_order){reference-type="eqref" reference="estimate_higher_order"}. 
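For the reader's convenience, note that [\[estimate_higher_order\]](#estimate_higher_order){reference-type="eqref" reference="estimate_higher_order"} is a differential inequality for the quantity $E(t) := \tfrac{1}{2} \| w(t) \|_n^2 + \tfrac{1}{4} k^2 \| \rho(t) \|_{n+1}^2$, namely $E'(t) \leq C_n \big( E(t) + \| f(t) \|_n^2 + \| f_1(t) \|_{n+1}^2 \big)$, whence Gronwall's inequality yields $$E(t) \leq e^{C_n T} \Big( E(0) + C_n \int_0^T \big( \| f(s) \|_n^2 + \| f_1(s) \|_{n+1}^2 \big) \, ds \Big), \qquad t \in (0,T).$$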
Applying Gronwall's inequality to [\[estimate_higher_order\]](#estimate_higher_order){reference-type="eqref" reference="estimate_higher_order"}, exactly as in the proof of Lemma [Lemma 1](#theorem_zero_order){reference-type="ref" reference="theorem_zero_order"}, we obtain [\[integral_estimate_higher_order\]](#integral_estimate_higher_order){reference-type="eqref" reference="integral_estimate_higher_order"}. ◻ ## Existence of solutions to the linear system Let $\phi,\mbox{ }g: \mathbb{R}\rightarrow \mathbb{R}^2$. The formal adjoint of operator [\[operator_L\]](#operator_L){reference-type="eqref" reference="operator_L"} reads [\[eq:components_adj_L\]]{#eq:components_adj_L label="eq:components_adj_L"} $${\mathcal{L}}^* \phi = \begin{pmatrix} {\mathcal{L}}_1^* \phi\\ {\mathcal{L}}_2^*\phi \end{pmatrix},$$ where $$\begin{aligned} &{\mathcal{L}}_1^*\phi = - (\overline{\alpha}\phi_2)_x - \frac{k^2}{2} (\phi_2)_{xxx} - ( \zeta \phi_2 )_{xx} - ( \eta \phi_2 )_x,\\ &{\mathcal{L}}_2^*\phi = (\phi_1)_x - (\overline{\beta}\phi_2)_x + \mu (\phi_2)_{xx},\end{aligned}$$ and the associated adjoint system is $$\label{eq:adjoint_system} \left\{ \begin{aligned} -\partial_t \phi &= {\mathcal{L}}^* \phi + g,\\ \phi(T) &= 0. \end{aligned} \right.$$ ### Energy estimate for the adjoint system Now, we derive an estimate for the solutions to [\[eq:adjoint_system\]](#eq:adjoint_system){reference-type="eqref" reference="eq:adjoint_system"}. **Lemma 1** (energy estimate). *Suppose $\overline{w}, g \in C_0^\infty([0,T] \times \mathbb{R})$ and $$\sup_{x \in \mathbb{R}, \, t \in [0,T]} \frac{1}{\overline{\rho}(x,t) + \rho_*} \leq C_0.$$ Then, we have $$\label{estimate_adjoint_system} \| \phi(t)\|_0^2 + \| \partial_x \phi_2(t) \|_0^2 \leq C(T) \int_{0}^{T} \| g(s) \|_0^2 \, ds,$$ for all $t \in [0,T]$, where $C(T)$ depends on $C_0$, $\| \overline{\rho}\|_4$, $\rho_*$, $\| \overline{m}\|_3$, $m_*$, $T$ and on the physical parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}.* *Proof.* The second component of [\[eq:adjoint_system\]](#eq:adjoint_system){reference-type="eqref" reference="eq:adjoint_system"} reads $$\label{eq:second_component_adjoint_sys} -\partial_t \phi_2 = (\phi_1)_x - (\overline{\beta}\phi_2)_x + \mu (\phi_2)_{xx} + g_2.$$ Take the $L^2$-scalar product of [\[eq:second_component_adjoint_sys\]](#eq:second_component_adjoint_sys){reference-type="eqref" reference="eq:second_component_adjoint_sys"} with $\phi_2$. The result is $$-\langle \partial_t \phi_2, \phi_2 \rangle_0 = \langle (\phi_1)_x, \phi_2 \rangle_0 - \langle (\overline{\beta}\phi_2)_x, \phi_2 \rangle_0 + \mu \langle (\phi_2)_{xx}, \phi_2 \rangle_0 + \langle g_2, \phi_2 \rangle_0.$$ Clearly, $$\label{eq:adjoint_1} - \langle \partial_t \phi_2, \phi_2 \rangle_0 = -\tfrac{1}{2} \partial_t (\| \phi_2 \|_0^2).$$ In what follows $C$ denotes a constant that may depend only on $\overline{w}$ and on the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} and that may change from line to line. We have, $$\begin{aligned} \langle (\phi_1)_x, \phi_2 \rangle_0 = \int_\mathbb{R}(\phi_1)_x \phi_2 \, dx = - \int_\mathbb{R}\phi_1 (\phi_2)_x \, dx &\leq \| \phi_1 \|_0 \| \partial_x \phi_2 \|_0 \nonumber \\ &\leq C(\varepsilon_1) \| \phi_1 \|_0^2 + C \varepsilon_1 \| \partial_x \phi_2 \|_0^2. 
%\label{adjoint_2} \end{aligned}$$ Furthermore, using integration by parts one arrives at $$\begin{aligned} - \langle (\overline{\beta}\phi_2)_x, \phi_2 \rangle_0 &= - \int_\mathbb{R}(\overline{\beta}\phi_2)_x \phi_2 \, dx = \int_\mathbb{R}\overline{\beta}\phi_2 (\phi_2)_x \, dx = \frac{1}{2} \int_\mathbb{R}\overline{\beta}(\phi_2^2)_x \, dx \nonumber\\ &= - \frac{1}{2} \int_\mathbb{R}\overline{\beta}_x (\phi_2)^2 \, dx \leq \frac{1}{2} \| \overline{\beta}_x \|_{L^\infty} \| \phi_2 \|_0^2 \leq C \| \phi_2 \|_0^2. %\label{adjoint_3} \end{aligned}$$ Moreover, we have $$\mu \langle (\phi_2)_{xx}, \phi_2 \rangle_0 = \mu \int_\mathbb{R}(\phi_2)_{xx} \phi_2 \, dx = -\mu \int_\mathbb{R}(\partial_x \phi_2)^2 \, dx = -\mu \| \partial_x \phi_2 \|_0^2.$$ Likewise, there holds the estimate $$\langle g_2, \phi_2 \rangle_0 \leq \| g_2 \|_0 \| \phi_2 \|_0 \leq C \| g_2 \|_0^2 + C \| \phi_2 \|_0^2.$$ Combining these estimates, we obtain $$- \tfrac{1}{2} \partial_t (\| \phi_2 \|_0^2) + \left ( \mu - C \varepsilon_1 \right ) \| \partial_x \phi_2 \|_0^2 \leq C(\varepsilon_1) \left ( \| \phi \|_0^2 + \| g_2 \|_0^2 \right ). %\label{ineq:adjoint}$$ Choosing $\varepsilon_1 >0$ sufficiently small, we deduce $$- \tfrac{1}{2} \partial_t (\| \phi_2 \|_0^2) \leq C \left ( \| \phi \|_0^2 + \| g_2 \|_0^2 \right ). \label{ineq_adjoint}$$ Now, let us take the scalar product of [\[eq:second_component_adjoint_sys\]](#eq:second_component_adjoint_sys){reference-type="eqref" reference="eq:second_component_adjoint_sys"} with $-(\phi_2)_{xx}$. This yields $$\langle \partial_t \phi_2, (\phi_2)_{xx} \rangle_0 = -\langle (\phi_1)_x, (\phi_2)_{xx} \rangle_0 + \langle (\overline{\beta}\phi_2)_x, (\phi_2)_{xx} \rangle_0 - \mu \langle (\phi_2)_{xx}, (\phi_2)_{xx} \rangle_0 - \langle g_2, (\phi_2)_{xx} \rangle_0.$$ Integrate by parts in order to get $$\begin{aligned} \langle \partial_t \phi_2, (\phi_2)_{xx} \rangle_0 &= \int_\mathbb{R}(\partial_t \phi_2) (\phi_2)_{xx} \, dx = - \int_\mathbb{R}(\phi_2)_x \partial_t (\phi_2)_x \, dx = - \tfrac{1}{2} \partial_t (\| \partial_x \phi_2 \|_0^2) \label{eq:adj_1}.\end{aligned}$$ Then, we infer $$\label{eq:adj_2} - \langle (\phi_2)_{xx}, (\phi_1)_x \rangle_0 = - \int_\mathbb{R}(\phi_2)_{xx} (\phi_1)_x \, dx = \int_\mathbb{R}\phi_1 (\phi_2)_{xxx} \, dx.$$ Moreover, we obtain $$\begin{aligned} \langle (\overline{\beta}\phi_2)_x, (\phi_2)_{xx} \rangle_0 &= \int_\mathbb{R}(\overline{\beta}\phi_2)_x (\phi_2)_{xx} \, dx = \int_\mathbb{R}(\overline{\beta}_x \phi_2 + \overline{\beta}(\phi_2)_x ) (\phi_2)_{xx} \, dx \nonumber\\ &= \int_\mathbb{R}\overline{\beta}_x \phi_2 (\phi_2)_{xx} \, dx + \int_\mathbb{R}\overline{\beta}(\phi_2)_x (\phi_2)_{xx} \, dx \nonumber\\ &= -\int_\mathbb{R}(\overline{\beta}_x \phi_2)_x (\phi_2)_x \, dx + \tfrac{1}{2} \int_\mathbb{R}\overline{\beta}\left ( (\partial_x \phi_2)^2 \right )_x \, dx \nonumber\\ &= -\int_\mathbb{R}(\overline{\beta}_{xx} \phi_2 + \overline{\beta}_x (\phi_2)_x) (\phi_2)_x \, dx - \tfrac{1}{2} \int_\mathbb{R}\overline{\beta}_x (\partial_x \phi_2)^2 \, dx \nonumber\\ &= -\int_\mathbb{R}\overline{\beta}_{xx} \phi_2 (\phi_2)_x \, dx - \tfrac{3}{2} \int_\mathbb{R}\overline{\beta}_x (\partial_x \phi_2)^2 \, dx \nonumber\\ &\leq \| \overline{\beta}_{xx} \|_{L^{\infty}} \| \phi_2 \|_0 \| \partial_x \phi_2 \|_0 + \tfrac{3}{2} \| \overline{\beta}_x \|_{L^{\infty}} \| \partial_x \phi_2 \|_0^2 \nonumber\\ &\leq C \left ( \| \phi_2 \|_0^2 + \| \partial_x \phi_2 \|_0^2 \right ). 
\label{adj_3}\end{aligned}$$ This yields, $$\label{adj_4} -\mu \langle (\phi_2)_{xx}, (\phi_2)_{xx} \rangle_0 = -\mu \| \partial_x^2 \phi_2 \|_0^2.$$ In addition, one can estimate $$-\langle g_2, (\phi_2)_{xx} \rangle_0 = \int_\mathbb{R}g_2 (\phi_2)_{xx} \, dx \leq \| g_2 \|_0 \| \partial_x^2 \phi_2 \|_0 \leq C(\varepsilon_2) \| g_2 \|_0^2 + C \varepsilon_2 \| \partial_x^2 \phi_2 \|_0^2. \label{adj_5}$$ Using [\[eq:adj_1\]](#eq:adj_1){reference-type="eqref" reference="eq:adj_1"}, [\[eq:adj_2\]](#eq:adj_2){reference-type="eqref" reference="eq:adj_2"}, [\[adj_3\]](#adj_3){reference-type="eqref" reference="adj_3"}, [\[adj_4\]](#adj_4){reference-type="eqref" reference="adj_4"}, and [\[adj_5\]](#adj_5){reference-type="eqref" reference="adj_5"} we deduce $$\begin{aligned} -\tfrac{1}{2} \partial_t (\| \partial_x \phi_2 \|_0^2) + \left ( \mu - C \varepsilon_2 \right )& \| \partial_x^2 \phi_2 \|_0^2 - \int_\mathbb{R}\phi_1 (\phi_2)_{xxx} \, dx \nonumber\\ &\leq C(\varepsilon_2) \left ( \| \phi_2 \|_0^2 + \| \partial_x \phi_2 \|_0^2 + \| g_2 \|_0^2 \right ). \label{ineq_adj}\end{aligned}$$ The first component of [\[eq:adjoint_system\]](#eq:adjoint_system){reference-type="eqref" reference="eq:adjoint_system"} implies that $$(\phi_2)_{xxx} = \frac{2}{k^2} \partial_t \phi_1 - \frac{2}{k^2} (\overline{\alpha}\phi_2)_x - \frac{2}{k^2} ( \zeta \phi_2 )_{xx} - \frac{2}{k^2} ( \eta \phi_2 )_x + \frac{2}{k^2} g_1.$$ Therefore, we have $$\begin{aligned} - \int_\mathbb{R}\phi_1 (\phi_2)_{xxx} \, dx &= \left \langle - \frac{2}{k^2} \partial_t \phi_1 + \frac{2}{k^2} (\overline{\alpha}\phi_2)_x + \frac{2}{k^2} ( \zeta \phi_2 )_{xx} + \frac{2}{k^2} ( \eta \phi_2 )_x - \frac{2}{k^2} g_1, \phi_1 \right \rangle_0. %\label{eq:adj_6} \end{aligned}$$ This yields, $$-\frac{2}{k^2} \langle \partial_t \phi_1, \phi_1 \rangle_0 = - \frac{1}{k^2} \partial_t \| \phi_1 \|_0^2.$$ Moreover, $$\begin{aligned} \langle (\overline{\alpha}\phi_2)_x, \phi_1 \rangle_0 &= \int_\mathbb{R}\phi_1 (\overline{\alpha}\phi_2)_x \, dx = \int_\mathbb{R}\overline{\alpha}_x \phi_1 \phi_2 \, dx + \int_\mathbb{R}\overline{\alpha}\phi_1 (\phi_2)_x \, dx \nonumber\\ &\leq \| \overline{\alpha}\|_{L^\infty} \| \phi_1 \|_0 \| \partial_x \phi_2 \|_0 + \| \overline{\alpha}_x \|_{L^\infty} \| \phi_1 \|_0 \| \phi_2 \|_0 \nonumber \\ &\leq C \| \phi \|_0^2 + C \| \partial_x \phi_2 \|_0^2. %\label{adj_8} \end{aligned}$$ We also have, $$\begin{aligned} \left \langle \frac{2}{k^2} ( \zeta \phi_2 )_{xx}, \phi_1 \right \rangle_0 &= \langle (\zeta \phi_2)_{xx}, \phi_1 \rangle_0 \\ &= \frac{2}{k^2} \left ( \int_\mathbb{R}\zeta_{xx} \phi_2 \phi_1 \, dx + 2 \int_\mathbb{R}\zeta_x (\phi_2)_x \phi_1 \, dx + \int_\mathbb{R}\zeta (\phi_2)_{xx} \phi_1 \, dx \right ) \\ &\leq \frac{2}{k^2} \Big( \| \zeta_{xx} \|_{L^\infty} \| \phi_2\|_0 \| \phi_1 \|_0 + 2 \| \zeta_x \|_{L^{\infty}} \| \partial_x \phi_2\|_0 \| \phi_1\|_0 + \| \zeta \|_{L^{\infty}} \| \partial_x^2 \phi_2 \|_0 \| \phi_1 \|_0 \Big) \\ &\leq C(\varepsilon_3) \| \phi \|_0^2 + C \| \partial_x \phi_2 \|_0^2 + C \varepsilon_3 \| \partial_x^2 \phi_2 \|_0^2. 
%\label{adj_9} \end{aligned}$$ Therefore, $$\begin{aligned} \frac{2}{k^2} \left \langle (\eta \phi_2 )_x, \phi_1 \right \rangle_0 &= \frac{2}{k^2} \left ( \int_\mathbb{R}\eta_x \phi_2 \phi_1 \, dx + \int_\mathbb{R}\eta (\phi_2)_x \phi_1 \, dx \right ) \nonumber \\ &\leq \frac{2}{k^2} \Big( \| \eta_x \|_{L^{\infty}} \| \phi_1\|_0 \| \phi_2\|_0 + \| \eta \|_{L^{\infty}} \| \phi_1 \|_0 \| \partial_x \phi_2 \|_0 \Big) \\ &\leq C \big( \| \phi \|_0^2 + \| \partial_x \phi_2 \|_0^2 \big). %\label{adj_10} \end{aligned}$$ Also, we clearly have $$\label{adj_11} -\frac{2}{k^2} \langle g_1, \phi_1 \rangle_0 \leq \frac{2}{k^2} \| g_1 \|_0 \| \phi_1 \|_0 \leq C \big( \| g_1 \|_0^2 + \| \phi_1 \|_0^2 \big).$$ Let $c \in (0,1)$. Thanks to estimates [\[ineq_adj\]](#ineq_adj){reference-type="eqref" reference="ineq_adj"} thru [\[adj_11\]](#adj_11){reference-type="eqref" reference="adj_11"}, choosing $\varepsilon_2, \varepsilon_3 > 0$ sufficiently small we obtain $$\begin{aligned} - \partial_t \left (\frac{1}{k^2} \| \phi_1 \|_0^2 + \frac{1}{2} \| \partial_x \phi_2 \|_0^2 \right ) + c \mu \| \partial_x^2 \phi_2 \|_0^2 \leq C \left ( \| \phi \|_0^2 + \| \partial_x \phi_2 \|_0^2 + \| g \|_0^2 \right ). %\label{ineq_adjoint_2} \end{aligned}$$ Since $c \mu \| \partial_x^2 \phi_2 \|^2 \geq 0$, last inequality implies that $$- \partial_t \left (\frac{1}{k^2} \| \phi_1 \|_0^2 + \frac{1}{2} \| \partial_x \phi_2 \|_0^2 \right ) \leq C \left ( \| \phi \|_0^2 + \| \partial_x \phi_2 \|_0^2 + \| g \|_0^2 \right ). \label{ineq_adjoint_3}$$ Multiplying [\[ineq_adjoint\]](#ineq_adjoint){reference-type="eqref" reference="ineq_adjoint"} by $2/k^2$ and adding it to [\[ineq_adjoint_3\]](#ineq_adjoint_3){reference-type="eqref" reference="ineq_adjoint_3"} we arrive at $$-\partial_t \left ( \frac{1}{k^2} \| \phi \|_0^2 + \frac{1}{2} \| \partial_x \phi_2 \|_0^2 \right ) \leq C \left ( \frac{1}{k^2} \| \phi \|_0^2 + \frac{1}{2} \| \partial_x \phi_2 \|_0^2 + \| g \|_0^2 \right ).$$ Let us change the variable $\tau = T - t$, $\phi(t) = \tilde{\phi}(\tau)$. Then, the initial condition of [\[eq:adjoint_system\]](#eq:adjoint_system){reference-type="eqref" reference="eq:adjoint_system"} implies that $\tilde{\phi}(0) = 0$. Applying Gronwall inequality to the resulting relation we obtain [\[estimate_adjoint_system\]](#estimate_adjoint_system){reference-type="eqref" reference="estimate_adjoint_system"}. The lemma is now proved. ◻ ### Negative norm estimates First, let us introduce some definitions. We denote the Fourier transform operator by ${\mathcal{F}}$ and $\hat{u}(\xi) = ({\mathcal{F}}u)(\xi)$. For $\xi \in \mathbb{R}$, we introduce the notation $\langle \xi \rangle= \sqrt{ 1+ \xi^2}$. Let $u:\mathbb{R}\rightarrow \mathbb{R}$. For $s \in \mathbb{R}$ we define the operator $\Lambda^s u = {\mathcal{F}}^{-1}(\langle \xi \rangle^s {\mathcal{F}}u)$. For $n \in \mathbb{N}_0$ we have $\partial_x^n \Lambda^s u = \Lambda^s \partial_x^n u$. Denote by ${\mathcal{S}}$ the space of Schwartz functions on $\mathbb{R}$. For a function $v : \mathbb{R}\rightarrow \mathbb{R}^2$ we set $\Lambda^s v = ( \Lambda^s v_1, \Lambda^s v_2 )^\top$. Let us define the space $$X := \bigcap_{n \in \mathbb{N}_0} H^n(\mathbb{R}).$$ For any $s \in \mathbb{R}$ we have $$\| u \|_{s+1}^2 = \| u \|_{s}^2 + \| \partial_x \Lambda^s u \|_0^2. \label{property_norm}$$ The following lemma will be used to estimate contributions coming from commutators: **Lemma 1**. *Let $s \in \mathbb{R}$ and $u \in H^{|s-1|+2}(\mathbb{R})$. 
Then, there exists a constant $C = C_s$ such that $$\| [\Lambda^s, u] f \|_0 \leq C \| u \|_{|s-1|+2} \| f \|_{s-1},$$ for $f \in X$.* *Proof.* The proof follows from Lemma 6.16, p. 202 in [@Fo95] by a density argument. ◻ Suppose $c \in \mathbb{R}$ is a constant and $f, u : \mathbb{R}\rightarrow \mathbb{R}$. Since $\Lambda^s$ commutes with constants, we then have $$[\Lambda^s, u + c]f = [\Lambda^s, u]f.\label{equality_commutator}$$ We have the following result. **Lemma 1**. *Suppose $\overline{w}, g \in C_0^\infty([0,T] \times \mathbb{R})$ and $$\sup_{x \in \mathbb{R}, \, t \in [0,T]} \frac{1}{\overline{\rho}(x,t) + \rho_*} \leq C_0.$$ Then, $$\partial_x^n \left( \overline{\alpha}- \left( \Big( \frac{m_*}{\rho_*} \Big)^2 + p'(\rho_*) \right) \right ) ,\mbox{ } \partial_x^n \Big( \overline{\beta}+ \frac{2 m_*}{\rho_*} \Big),\mbox{ }\partial_x^n \zeta,\mbox{ }\partial_x^n \eta \in L^2(\mathbb{R}),\quad \forall \, n \in \mathbb{N}_0.$$* *Proof.* We clearly have $$\begin{aligned} \left (\frac{\overline{m}+ m_*}{\overline{\rho}+ \rho_*} \right )^2 - \left ( \frac{m_*}{\rho_*} \right )^2 = \frac{(\rho_*)^2 \overline{m}^2 - (m_*)^2 \overline{\rho}^2 + 2 \rho_*m_*(\rho_*\overline{m}- m_*\overline{\rho})}{(\rho_*)^2 (\overline{\rho}+ \rho_*)^2} \in L^2(\mathbb{R}).\end{aligned}$$ Let $a = \min\{ \rho_*, 1/C_0 \}$, $b = \|\overline{\rho}\|_{L^\infty} + \rho_*$, and $$G(z) = \int_0^1 p''(t z + \rho_*) dt,\mbox{ }z > -\rho_*.$$ Then, there holds $$p'(\overline{\rho}(x) + \rho_*) - p'(\rho_*) = G(\overline{\rho}(x))\overline{\rho}(x), \qquad x \in \mathbb{R}.$$ Moreover, $$\|G(\overline{\rho}(\cdot))\|_{L^{\infty}} \leq \sup_{z \in [a,b]} | p''(z)|.$$ Therefore, $$\begin{aligned} \| p'(\overline{\rho}(\cdot) + \rho_*) - p'(\rho_*) \|_0 = \| G(\overline{\rho}(x))\overline{\rho}(x) \|_0 \leq \|G(\overline{\rho}(\cdot))\|_{L^{\infty}} \| \overline{\rho}\|_0 \leq \sup_{z \in [a,b]} | p''(z)| \| \overline{\rho}\|_0.\end{aligned}$$ Hence $p'(\overline{\rho}(\cdot) + \rho_*) - p'(\rho_*) \in L^2(\mathbb{R})$, and consequently $$\overline{\alpha}- \Big( \Big( \frac{m_*}{\rho_*} \Big)^2 + p'(\rho_*) \Big) \in L^2(\mathbb{R}).$$ Similarly, $$-\frac{2 (\overline{m}+ m_*)}{\overline{\rho}+ \rho_*} + \frac{2 m_*}{\rho_*} = 2 \frac{m_*\overline{\rho}- \rho_*\overline{m}}{\rho_*(\overline{\rho}+ \rho_*)}.$$ Therefore, $$\overline{\beta}+ \frac{2 m_*}{\rho_*} \in L^2(\mathbb{R}).$$ In addition, we have $\partial_x^n \overline{\alpha}$, $\partial_x^n \overline{\beta}\in L^2(\mathbb{R})$, $n \in \mathbb{N}$. Since $\zeta$ and $\eta$ are each a product of a coefficient and $\overline{\rho}_x$, we have $$\partial_x^n \zeta,\mbox{ }\partial_x^n \eta \in L^2(\mathbb{R}),\mbox{ }n \in \mathbb{N}_0,$$ and the proof is complete. ◻ **Lemma 1**. *Suppose $s \in \mathbb{R}$, $\overline{w}, g \in C_0^\infty([0,T] \times \mathbb{R})$ and $$\sup_{x \in \mathbb{R}, \, t \in [0,T]} \frac{1}{\overline{\rho}(x,t) + \rho_*} \leq C_0.$$ Then, $$\| \Lambda^s \phi(t) \|_0^2 + \| \partial_x \Lambda^s \phi_2(t) \|_0^2 \leq C(T) \int_{0}^{T} \| \Lambda^s g(\tau) \|_0^2 d\tau, \label{ineq_negative_order}$$ for all $t \in [0,T]$, where $C(T)$ depends only on $\overline{w}$, $T$ and the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}.* *Proof.* Let $s \in \mathbb{R}$. 
Applying the operator $\Lambda^s$ to [\[eq:adjoint_system\]](#eq:adjoint_system){reference-type="eqref" reference="eq:adjoint_system"} we obtain that the equation satisfied by $\Lambda^s \phi$ is $$\label{adj_s_1} -\partial_t \Lambda^s \phi = {\mathcal{L}}^* \Lambda^s \phi + [ \Lambda^s, {\mathcal{L}}^*] \phi - \Lambda^s g.$$ Take the scalar product of the second component of [\[adj_s\_1\]](#adj_s_1){reference-type="eqref" reference="adj_s_1"} with $\Lambda^s \phi_2$. The result is $$-\langle \partial_t \Lambda^s \phi_2, \Lambda^s \phi_2 \rangle_0 = \langle {\mathcal{L}}_2^* \Lambda^s \phi, \Lambda^s \phi_2 \rangle_0 + \langle [ \Lambda^s, {\mathcal{L}}_2^*] \phi, \Lambda^s \phi_2 \rangle_0 - \langle \Lambda^s g_2, \Lambda^s \phi_2 \rangle_0.$$ Then, we have, $$-\langle \partial_t \Lambda^s \phi_2, \Lambda^s \phi_2 \rangle_0 = -\int_\mathbb{R}(\partial_t \Lambda^s \phi_2)(\Lambda^s \phi_2) \, dx = -\tfrac{1}{2} \partial_t (\| \phi_2 \|_{s}^2). \label{eq:adj_s_3}$$ Moreover, $$\begin{aligned} \langle {\mathcal{L}}_2^* \Lambda^s \phi, \Lambda^s \phi_i \rangle_0 = \left \langle \partial_x \Lambda^s \phi_1 - \partial_x (\overline{\beta}\Lambda^s \phi_2) + \mu \partial_x^2 \Lambda^s \phi_2, \Lambda^s \phi_2 \right \rangle_0.\end{aligned}$$ Then, after integration by parts and by [\[property_norm\]](#property_norm){reference-type="eqref" reference="property_norm"}, we arrive at $$\begin{aligned} \langle \partial_x \Lambda^s \phi_1, \Lambda^s \phi_2 \rangle_0 = \int_\mathbb{R}(\partial_x \Lambda^s \phi_1) ( \Lambda^s \phi_2) \, dx &= - \int_\mathbb{R}(\Lambda^s \phi_1) (\partial_x \Lambda^s \phi_2) \, dx \nonumber\\ &\leq \| \Lambda^s \phi_1 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 \nonumber\\ &\leq \| \phi_1 \|_s \| \phi_2\|_{s+1} \nonumber\\ &\leq C(\varepsilon_1) \| \phi_1 \|_{s}^2 + C \varepsilon_1 \| \phi_2 \|_{s+1}^2 \nonumber\\ &= C(\varepsilon_1) \| \phi_1 \|_{s}^2 + C \varepsilon_1 \| \phi_2 \|_{s}^2 + C \varepsilon_1 \| \partial_x \Lambda^s \phi_2\|_0^2. %\label{adj_s_4} \end{aligned}$$ In the same fashion, we estimate $$\begin{aligned} -\langle \partial_x (\overline{\beta}\Lambda^s \phi_2), \Lambda^s \phi_2 \rangle_0 &= - \int_\mathbb{R}\partial_x (\overline{\beta}\Lambda^s \phi_2) (\Lambda^s \phi_2) \, dx \nonumber\\ &= \int_\mathbb{R}\overline{\beta}(\Lambda^s \phi_2) (\partial_x \Lambda^s \phi_2) \, dx \nonumber\\ &\leq \| \overline{\beta}\|_{L^{\infty}} \| \Lambda^s \phi_2 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 \nonumber\\ &\leq C \| \phi_2 \|_s \| \phi_2\|_{s+1} \nonumber\\ &\leq C(\varepsilon_2) \| \phi_2 \|_s^2 + C \varepsilon_2 \| \phi_2\|_{s+1}^2 \nonumber\\ &\leq C(\varepsilon_2) \| \phi_2 \|_{s}^2 + C \varepsilon_2 \| \phi_2\|_{s}^2 + C \varepsilon_2 \| \partial_x \Lambda^s \phi_2 \|_0^2. %\label{adj_s_5} \end{aligned}$$ Furthermore, $$\begin{aligned} \langle \mu \partial_x^2 \Lambda^s \phi_2, \Lambda^s \phi_2 \rangle_0 = \mu \int_\mathbb{R}(\partial_x^2 \Lambda^s \phi_2) (\Lambda^s \phi_2) \, dx = -\mu \int_\mathbb{R}(\partial_x \Lambda^s \phi_2)^2 \, dx = - \mu \| \partial_x \Lambda^s \phi_2 \|_0^2.\end{aligned}$$ Then, $$- \langle \Lambda^s g_2, \Lambda^s \phi_2 \rangle_0 \leq \| \Lambda^s g_2 \|_0 \| \Lambda^s \phi_2 \|_0 = \| g_2 \|_s \| \phi_2 \|_s \leq C \big( \| g_2 \|_s^2 + \| \phi_2 \|_s^2\big). \label{adj_s_6}$$ Now, let us estimate the contribution coming from the commutator. 
We have $$[\Lambda^s, {\mathcal{L}}_2^*] \phi = a_1 + a_2,$$ where $$a_1 = -[\Lambda^s, \overline{\beta}]\partial_x \phi_2,\mbox{ } a_2 = - [\Lambda^s, \overline{\beta}_x] \phi_2.$$ Then, in view of [\[equality_commutator\]](#equality_commutator){reference-type="eqref" reference="equality_commutator"}, we obtain $$\begin{aligned} a_1 = - \left [\Lambda^s, \overline{\beta}+ \frac{2 m_*}{\rho_*} \right ]\partial_x \phi_2.\end{aligned}$$ Hence, using Lemmata [Lemma 1](#lemma:commutator_estimate){reference-type="ref" reference="lemma:commutator_estimate"} and [Lemma 1](#lemma:coefficients_Sobolev_space){reference-type="ref" reference="lemma:coefficients_Sobolev_space"}, we infer $$\begin{aligned} \| [\Lambda^s, {\mathcal{L}}_2^* ] \phi \|_0 &\leq \| a_1 \|_0 + \| a_2 \|_0 \nonumber\\ &= \left \| \left [\Lambda^s, \overline{\beta}+ \frac{2 m_*}{\rho_*} \right ]\partial_x \phi_2 \right \|_0 + \left \| [\Lambda^s, \overline{\beta}_x] \phi_2 \right \|_0 \nonumber \\ &\leq C \left \| \overline{\beta}+ \frac{2 m_*}{\rho_*} \right \|_{|s-1|+2} \| \partial_x \phi_2 \|_{s-1} + C \| \overline{\beta}_x \|_{|s-1|+2} \| \phi_2 \|_{s-1} \nonumber \\ &\leq C \big( \| \phi_2\|_{s} + \| \phi_2\|_{s-1} \big)\nonumber \\ &\leq C \| \phi_2 \|_{s}. \label{adj_s_7}\end{aligned}$$ Therefore, $$\langle [ \Lambda^s, {\mathcal{L}}_2^*] \phi, \Lambda^s \phi_2 \rangle_0 \leq \| [\Lambda^s, {\mathcal{L}}_2^* ] \phi \|_0 \| \Lambda^s \phi_2\|_0 \leq C \| \phi_2 \|_s^2. \label{adj_s_8}$$ Using estimates [\[eq:adj_s\_3\]](#eq:adj_s_3){reference-type="eqref" reference="eq:adj_s_3"} through [\[adj_s\_6\]](#adj_s_6){reference-type="eqref" reference="adj_s_6"}, [\[adj_s\_8\]](#adj_s_8){reference-type="eqref" reference="adj_s_8"} and choosing $\varepsilon_1, \varepsilon_2 > 0$ sufficiently small, we deduce $$-\tfrac{1}{2}\partial_t ( \| \phi_2\|_{s}^2 ) \leq C \left ( \| \phi \|_{s}^2 + \| g_2 \|_{s}^2 \right ). \label{adj_s_9}$$ Now, taking the scalar product of the second component of [\[adj_s\_1\]](#adj_s_1){reference-type="eqref" reference="adj_s_1"} with $-\partial_x^2 \Lambda^s \phi_2$ we obtain $$\begin{aligned} \langle \partial_t \Lambda^s \phi_2, \partial_x^2 \Lambda^s \phi_2 \rangle_0 = -\langle {\mathcal{L}}_2^* \Lambda^s \phi, \partial_x^2 \Lambda^s \phi_2\rangle_0 - \langle [ \Lambda^s, {\mathcal{L}}_2^*] \phi, \partial_x^2 \Lambda^s \phi_2 \rangle_0 + \langle \Lambda^s g_2, \Lambda^s \partial_x^2 \phi_2 \rangle_0.\end{aligned}$$ Integrating by parts, we have $$\langle \partial_t \Lambda^s \phi_2, \partial_x^2 \Lambda^s \phi_2 \rangle_0 = -\tfrac{1}{2}\partial_t \| \partial_x \Lambda^s \phi_2 \|_0^2. \label{adj_s_pd2_1}$$ Moreover, $$- \langle \partial_x \Lambda^s \phi_1, \partial_x^2 \Lambda^s \phi_2 \rangle_0 = - \int_\mathbb{R}(\partial_x \Lambda^s \phi_1) (\partial_x^2 \Lambda^s \phi_2) \, dx = \int_\mathbb{R}(\Lambda^s \phi_1) (\partial_x^3 \Lambda^s \phi_2) \, dx. 
\label{adj_s_pd2_2}$$ Furthermore, $$\begin{aligned} \langle \partial_x (\overline{\beta}\Lambda^s \phi_2), \partial_x^2 \Lambda^s \phi_2 \rangle_0 &= \int_\mathbb{R}\partial_x (\overline{\beta}\Lambda^s \phi_2) \partial_x^2 \Lambda^s \phi_2 \, dx \nonumber\\ &= \int_\mathbb{R}(\overline{\beta}_x \Lambda^s \phi_2) (\partial_x^2 \Lambda^s \phi_2)\, dx + \int_\mathbb{R}\overline{\beta}(\partial_x \Lambda^s \phi_2) (\partial_x^2 \Lambda^s \phi_2)\, dx \nonumber\\ & = -\int_\mathbb{R}\partial_x (\overline{\beta}_x \Lambda^s \phi_2)(\partial_x \Lambda^s \phi_2)\, dx + \frac{1}{2} \int_\mathbb{R}\overline{\beta}\partial_x \left ( (\partial_x \Lambda^s \phi_2 )^2 \right ) \, dx \nonumber\\ &= -\int_\mathbb{R}\overline{\beta}_{xx} (\Lambda^s \phi_2) \partial_x \Lambda^s \phi_2 \, dx - \int_\mathbb{R}\overline{\beta}_x (\partial_x \Lambda^s \phi_2)^2 \, dx - \frac{1}{2} \int_\mathbb{R}\overline{\beta}_x (\partial_x \Lambda^s \phi_2 )^2 \, dx \nonumber\\ &\leq \| \overline{\beta}_{xx} \|_{L^{\infty}} \| \Lambda^s \phi_2 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 + \frac{3}{2}\| \overline{\beta}_x \|_{L^{\infty}} \| \partial_x \Lambda^s \phi_2 \|_0^2 \nonumber\\ &\leq C \|\phi_2 \|_{s+1}^2. %\label{adj_s_pd2_3} \end{aligned}$$ Moreover, $$\langle \Lambda^s g_2, \partial_x^2 \Lambda^s \phi_2 \rangle_0 \leq \| \Lambda^s g_2 \|_0 \| \partial_x^2 \Lambda^s \phi_2 \|_0 \leq C(\varepsilon_3) \| g_2 \|_{s}^2 + C \varepsilon_3 \| \partial_x^2 \Lambda^s \phi_2 \|_0^2. %\label{adj_s_pd2_5}$$ Using [\[adj_s\_7\]](#adj_s_7){reference-type="eqref" reference="adj_s_7"} we obtain $$\begin{aligned} -\langle [ \Lambda^s, {\mathcal{L}}_2^* ] \phi, \partial_x^2 \Lambda^s \phi_2 \rangle_0 &\leq \| [ \Lambda^s, {\mathcal{L}}_2^* ] \phi \|_0 \| \partial_x^2 \Lambda^s \phi_2 \|_0 \nonumber \\ &\leq C \| \phi_2 \|_s \| \partial_x^2 \Lambda^s \phi_2 \|_0 \nonumber \\ &\leq C(\varepsilon_4) \| \phi_2 \|_{s}^2 + C \varepsilon_4 \| \partial_x^2 \Lambda^s \phi_2 \|_0^2. \label{adj_s_pd2_6}\end{aligned}$$ Let $c \in (0,1)$. Using estimates [\[adj_s\_pd2_1\]](#adj_s_pd2_1){reference-type="eqref" reference="adj_s_pd2_1"} thru [\[adj_s\_pd2_6\]](#adj_s_pd2_6){reference-type="eqref" reference="adj_s_pd2_6"} and choosing $\varepsilon_3, \varepsilon_4 > 0$ sufficiently small we deduce $$\begin{aligned} -\frac{1}{2} \partial_t \| \partial_x \Lambda^s \phi_2 \|_0^2 - \int_\mathbb{R}(\Lambda^s \phi_1) ( \partial_x^3 \Lambda^s \phi_2)\, dx + c \mu \| \partial_x^2 \Lambda^s \phi_2 \|_0^2 \leq C \left ( \| \phi_2 \|_{s+1}^2 + \| g_2 \|_{s}^2 \right ). \label{adj_s_pd2_7}\end{aligned}$$ The first component of [\[adj_s\_1\]](#adj_s_1){reference-type="eqref" reference="adj_s_1"} is $$-\partial_t \Lambda^s \phi_1 = {\mathcal{L}}_1^* \Lambda^s \phi + [\Lambda^s , {\mathcal{L}}_1^*] \phi - \Lambda^s g_1, \label{adj_s_firrst_eq_1}$$ where $${\mathcal{L}}_1^* \Lambda^s \phi = - \partial_x (\overline{\alpha}\Lambda^s \phi_2) - \frac{k^2}{2} \partial_x^3 \Lambda^s \phi_2 - \partial_x^2( \zeta \Lambda^s \phi_2 ) - \partial_x ( \eta \Lambda^s \phi_2 ).$$ Let us express $\Lambda^s \phi_2$ from [\[adj_s\_firrst_eq_1\]](#adj_s_firrst_eq_1){reference-type="eqref" reference="adj_s_firrst_eq_1"}. The result is $$\begin{aligned} \partial_x^3 \Lambda^s \phi_2 &= \frac{2}{k^2}\partial_t \Lambda^s \phi_1 - \frac{2}{k^2}\partial_x (\overline{\alpha}\Lambda^s \phi_2) - \frac{2}{k^2} \partial_x^2 ( \zeta \Lambda^s \phi_2 ) - \frac{2}{k^2} \partial_x ( \eta \Lambda^s \phi_2 ) + \\ &\quad + \frac{2}{k^2} [\Lambda^s , {\mathcal{L}}_1^*] \phi - \frac{2}{k^2}\Lambda^s g_1. 
\end{aligned}$$ Let us substitute last equation into the second term of [\[adj_s\_pd2_7\]](#adj_s_pd2_7){reference-type="eqref" reference="adj_s_pd2_7"}. This yields $$- \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) (\partial_t \Lambda^s \phi_1) \, dx = -\frac{1}{k^2} \partial_t (\| \Lambda^s \phi_1 \|_0^2). \label{adj_s_first_eq_3}$$ Then, $$\begin{aligned} \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) (\overline{\alpha}\Lambda^s \phi_2)_x \, dx &= \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) (\overline{\alpha}_x \Lambda^s \phi_2) \, dx + \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) (\overline{\alpha}\partial_x \Lambda^s \phi_2) \, dx \nonumber \\ &\leq \frac{2}{k^2} \| \overline{\alpha}_x \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \Lambda^s \phi_2 \|_0 + \frac{2}{k^2} \| \overline{\alpha}\|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 \nonumber\\ &\leq C \big( \| \phi_1\|_{s}^2 + \| \phi_2\|_{s+1}^2\big). %\label{adj_s_first_eq_4} \end{aligned}$$ Furthermore, $$\begin{aligned} \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) &\partial_x^2 ( \zeta \Lambda^s \phi_2 ) \, dx = -\frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) \partial_x^2 \left (\zeta \Lambda^s \phi_2 \right ) \, dx \nonumber\\ &= -\frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) \left ( \zeta_{xx} \Lambda^s \phi_1 + 2 \zeta_x \partial_x \Lambda^s \phi_2 + \zeta \partial_x^2 \Lambda^s \phi_2 \right ) \, dx \nonumber\\ &\leq \frac{2}{k^2} \Big( \| \zeta_{xx} \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0^2 + 2 \| \zeta_x \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 + \| \zeta \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \partial_x^2 \Lambda^s \phi_2 \|_0 \Big) \nonumber \\ &\leq C \big( \| \phi_1 \|_{s}^2 + \| \phi_1 \|_{s} \| \phi_2 \|_{s+1} + \| \phi_1 \|_{s} \| \partial_x^2 \Lambda^s \phi_2 \|_0 \big) \nonumber \\ &\leq C(\varepsilon_5) \| \phi_1 \|_{s}^2 + C \| \phi_2\|_{s+1}^2 + C \varepsilon_5 \| \partial_x^2 \Lambda^s \phi_2 \|_0^2. %\label{adj_s_first_eq_5} \end{aligned}$$ Now, let us estimate $$\begin{aligned} \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) \partial_x ( \eta \Lambda^s \phi_2) \, dx &= \frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) \left ( \eta_x \Lambda^s \phi_2 + \eta \partial_x \Lambda^s \phi_2 \right ) \, dx \nonumber\\ &\leq \frac{2}{k^2} \| \eta_x \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \Lambda^s \phi_2 \|_0 + \frac{2}{k^2} \| \eta \|_{L^{\infty}} \| \Lambda^s \phi_1 \|_0 \| \partial_x \Lambda^s \phi_2 \|_0 \nonumber\\ &\leq C \big( \| \phi \|_{s}^2 + \| \phi_2 \|_{s+1}^2 \big). %\label{adj_s_first_eq_6} \end{aligned}$$ Moreover, $$\frac{2}{k^2} \int_\mathbb{R}(\Lambda^s \phi_1) (\Lambda^s g_1) \, dx \leq \frac{2}{k^2} \| \Lambda^s \phi_1 \|_0 \| \Lambda^s g_1 \|_0 \leq C \big(\|\phi_1 \|_{s}^2 + \| g_1 \|_{s}^2 \big). %\label{adj_s_first_eq_7}$$ Now, let us estimate the contribution coming from the commutator. 
Similarly to [\[adj_s\_7\]](#adj_s_7){reference-type="eqref" reference="adj_s_7"}, using Lemmata [Lemma 1](#lemma:commutator_estimate){reference-type="ref" reference="lemma:commutator_estimate"} and [\[equality_commutator\]](#equality_commutator){reference-type="eqref" reference="equality_commutator"}, we obtain $$\begin{aligned} \| [\Lambda^s, {\mathcal{L}}_1^*] \phi \| &\leq \| [ \Lambda^s, \overline{\alpha}_x] \phi_2 \|_0 + \left \| \left [\Lambda^s,\overline{\alpha} - \Big( \Big( \frac{m_*}{\rho_*} \Big)^2 + p'(\rho_*) \Big) \right ] \partial_x \phi_2 \right \|_0 \nonumber \\ &+ \| [ \Lambda^s, \zeta_{xx}]\phi_2 \|_0 + 2 \| [ \Lambda^s, \zeta_x] \partial_x \phi_2 \|_0 \nonumber \\ &+ \| [\Lambda^s, \zeta] \partial_x^2 \phi_2 \|_0 + \| [ \Lambda^s, \eta_x ] \phi_2 \|_0 + \| [ \Lambda^s, \eta] \partial_x \phi_2 \|_0 \nonumber\\ &\leq C \| \phi_2 \|_{s+1}. \label{adj_s_first_eq_8}\end{aligned}$$ Using estimates [\[adj_s\_first_eq_3\]](#adj_s_first_eq_3){reference-type="eqref" reference="adj_s_first_eq_3"} thru [\[adj_s\_first_eq_8\]](#adj_s_first_eq_8){reference-type="eqref" reference="adj_s_first_eq_8"}, and choosing $\varepsilon_5 > 0$ sufficiently small, we arrive at $$-\partial_t \left ( \frac{1}{k^2} \| \Lambda^s \phi_1 \|_0^2 + \frac{1}{2} \| \partial_x \Lambda^s \phi_2 \|_0^2 \right ) \leq C \left ( \| \phi_1 \|_{s}^2 + \| \phi_2 \|_{s+1}^2 + \| g \|_{s}^2 \right ). %label{adj_s_first_eq_9}$$ Multiplying [\[adj_s\_9\]](#adj_s_9){reference-type="eqref" reference="adj_s_9"} by $2/k^2$ and adding it to the last inequality we infer $$\begin{aligned} - \partial_t \left ( \frac{1}{k^2} \| \Lambda^s \phi \|_0^2 + \frac{1}{2} \| \partial_x \Lambda^s \phi_2 \|_0^2 \right ) \leq C \left ( \frac{1}{k^2} \| \Lambda^s \phi \|_0^2 + \frac{1}{2} \| \partial_x \Lambda^s \phi_2 \|_0^2 + \| \Lambda^s g \|_0^2 \right ) \label{res_ineq_s}\end{aligned}$$ Applying Gronwall's inequality to [\[res_ineq_s\]](#res_ineq_s){reference-type="eqref" reference="res_ineq_s"} similarly as in the proof of Lemma [Lemma 1](#theorem_zero_order){reference-type="ref" reference="theorem_zero_order"}, we obtain [\[ineq_negative_order\]](#ineq_negative_order){reference-type="eqref" reference="ineq_negative_order"}. This yields the result. ◻ Now, we are going to show that the linear system [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} has a unique solution. Let us define the operator including the time derivative $$\widetilde{{\mathcal{L}}}:= \partial_t - {\mathcal{L}}. \label{operator_time_derivative}$$ Its formal adjoint is given by $\widetilde{{\mathcal{L}}}^* = -\partial_t - {\mathcal{L}}^*$ and satisfies $\langle \widetilde{{\mathcal{L}}}w , \phi \rangle_0 = \langle w, \widetilde{{\mathcal{L}}}^* \phi \rangle_0$. **Lemma 1** (existence of solution to linear system). *Suppose $n \geq 3$ and let $$\begin{aligned} &f \in L^2([0,T], H^n(\mathbb{R})),\mbox{ }f_1 \in L^2([0,T], H^{n+1}(\mathbb{R})),\\ &\rho_0 \in H^{n+1}(\mathbb{R}),\mbox{ }m_0 \in H^n(\mathbb{R}).\end{aligned}$$ Then the initial value problem [\[linear_system\]](#linear_system){reference-type="eqref" reference="linear_system"} has a unique solution which satisfies the estimate [\[integral_estimate_higher_order\]](#integral_estimate_higher_order){reference-type="eqref" reference="integral_estimate_higher_order"}.* *Proof.* Let $\phi \in C_0^{\infty}([0,T] \times \mathbb{R})$. 
We then have $$\left | \int_0^T \langle f, \phi \rangle_0 dt \right | = \left | \int_0^T \langle \Lambda^n f, \Lambda^{-n} \phi \rangle_0 \, dt \right | \leq \int_0^T \| \Lambda^n f \|_0 \| \Lambda^{-n} \phi \|_0 \, dt.$$ From the inequality [\[ineq_negative_order\]](#ineq_negative_order){reference-type="eqref" reference="ineq_negative_order"} with $s = - n$ we deduce $$\begin{aligned} \left | \int_0^T \langle f, \phi \rangle_0 dt \right | &\leq C(T) \int_0^T \| \Lambda^n f \|_0 \left ( \int_0^T \| \Lambda^{-n} \widetilde{{\mathcal{L}}}^* \phi \|_0^2 \, d\tau \right )^{1/2} \!\!dt\\ &\leq C(T) \left ( \int_0^T \| \Lambda^n f \|_0^2 \, dt \right )^{1/2} \left ( \int_0^T \| \Lambda^{-n} \widetilde{{\mathcal{L}}}^* \phi \|_0^2 \, dt \right )^{1/2}.\end{aligned}$$ Hence, $$\int_0^T \langle f, \phi \rangle_0 \, dt$$ defines a bounded linear functional of $\widetilde{{\mathcal{L}}}^* \phi$ in $L^2([0,T],H^{-n}(\mathbb{R}))$. Applying the Hahn-Banach extension theorem and the Riesz representation theorem we obtain that there exists a unique weak solution $w \in L^2([0,T],H^n(\mathbb{R}))$ such that $$\int_0^T \langle f, \phi \rangle_0 \, dt = \int_0^T \langle w, \widetilde{{\mathcal{L}}}^* \phi \rangle_0 \, dt,$$ for all $\phi \in C_0^{\infty}([0,T] \times \mathbb{R})$. Therefore, $$\int_\mathbb{R}f \varphi \, dx = \int_\mathbb{R}(\widetilde{{\mathcal{L}}}w) \varphi \, dx,\qquad \varphi \in C_0^{\infty}(\mathbb{R}),$$ for a.e. $t \in [0,T]$. Since $n \geq 3$, the Sobolev embedding theorem implies that the solution is classical. ◻ ## Proof of Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} {#proof-of-theorem-themlocale} Now, let us consider the initial value problem for system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}, with $$\rho(0) = \rho_0,\mbox{ }m(0) = m_0. \label{initial_condition}$$ First we shall prove the following lemma about local existence of solutions. **Lemma 1**. *Suppose $s \in \mathbb{N},\mbox{ }s \geq 3$. For any initial condition $(\rho_0, m_0)$ such that $\rho_0(x) \geq \delta > 0$ and $$\rho_0 - \rho_*\in H^{s+1}(\mathbb{R}),\mbox{ }m_0 - m_*\in H^s(\mathbb{R}),$$ where $\rho_*> 0$ and $m_*\in \mathbb{R}$ are constants, there exists $T > 0$ such that the initial value problem [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}-[\[initial_condition\]](#initial_condition){reference-type="eqref" reference="initial_condition"} has a unique solution $$\rho - \rho_*\in L^{\infty}([0,T],H^{s+1}(\mathbb{R})), \mbox{ }m - m_*\in L^{\infty}([0,T],H^s(\mathbb{R})).$$ Moreover, $w = (\rho - \rho_*, m - m_*)$ satisfies the estimate $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert w \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s,[0,T]}^2 \leq C_s(T) \Big( \| w_0 \|_s^2 + \| \rho_0 - \rho_*\|_{s+1}^2 \Big),$$ with $w_0 = (\rho_0 - \rho_*, m_0 - m_*)$, and where the positive constant $C_s(T)$ depends only on $\rho_*$, $m_*$, the parameters of [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} and $T$.* *Proof.* The nonlinear problem can be written as $$\left\{ \begin{aligned} \widetilde{{\mathcal{L}}}(w)w &= 0,\\ w(0) &= w_0. \end{aligned} \right. 
\label{eq:quasilinear}$$ In order to recast the system so that we have homogeneous initial data, let $\tilde{w}_i$ be the classical solutions to the heat equation, $$\begin{aligned} \partial_t \tilde{w}_i &= \partial_x^2 \tilde{w}_i,\\ \tilde{w}_i(0) &= w_{i,0},\quad i=1,2,\end{aligned}$$ given by $$\tilde{w}_i(x,t) = \int_\mathbb{R}K(x-y,t) w_{i,0}(y)dy,\qquad x \in \mathbb{R}, \; t> 0, \; i=1,2,$$ where $$K(x,t) = \frac{1}{\sqrt{4 \pi t}} \exp\left (-\frac{x^2}{4t} \right ),$$ is the heat kernel. From standard theory we clearly have the estimates $$\begin{aligned} \| \tilde{w}_1(t) \|_{s+1} &\leq \| \rho_0 - \rho_*\|_{s+1},\\ \int_0^T \| \partial_t \tilde{w}_1 \|^2_s \, dt + \int_0^T \| \tilde{w}_1 \|^2_{s+2} \, dt &\leq C \| \rho_0 - \rho_*\|^2_{s+1},\\ \| \tilde{w}_2(t) \|_s &\leq \| m_0 - m_*\|_s,\\ \int_0^T \| \partial_t \tilde{w}_2 \|^2_{s-1} \, dt + \int_0^T \| \tilde{w}_2 \|^2_{s+1} \, dt &\leq C \| m_0 - m_*\|^2_s.\end{aligned}$$ Now, let us introduce the new variable $\hat{w}= w - \tilde{w}$, where $\hat{w}= (\hat{\rho}, \hat{m})^\top$, and let us define the operator $${\mathcal{M}}(\hat{w}) \hat{w}:= \widetilde{{\mathcal{L}}}(\tilde{w}+ \hat{w})\hat{w}+ ( \widetilde{{\mathcal{L}}}(\tilde{w}+ \hat{w}) - \widetilde{{\mathcal{L}}}(\tilde{w}))\tilde{w}.$$ Then, the system [\[eq:quasilinear\]](#eq:quasilinear){reference-type="eqref" reference="eq:quasilinear"} becomes $$\left\{ \begin{aligned} {\mathcal{M}}(\hat{w})\hat{w}&= f(\hat{w}),\\ \hat{w}(0) &= 0, \end{aligned} \right. \label{eq:sys_homogeneous_IC}$$ where $f = -\widetilde{{\mathcal{L}}}(\tilde{w})\tilde{w}$. We will solve [\[eq:sys_homogeneous_IC\]](#eq:sys_homogeneous_IC){reference-type="eqref" reference="eq:sys_homogeneous_IC"} by iteration. Set $$\left\{ \begin{aligned} {\mathcal{M}}(\hat{w}_j)\hat{w}_{j+1} &= f,\\ \hat{w}_{j+1}(0) &= 0, \end{aligned} \right. \label{eq:iteration}$$ for $j \in \mathbb{N}_0$ and with $\hat{w}_0 = 0$. Note that the operator ${\mathcal{M}}$ in [\[eq:iteration\]](#eq:iteration){reference-type="eqref" reference="eq:iteration"} has the same structure as $\widetilde{{\mathcal{L}}}$ defined in [\[operator_time_derivative\]](#operator_time_derivative){reference-type="eqref" reference="operator_time_derivative"} and Theorem [Lemma 1](#theorem_existence_linear_system){reference-type="ref" reference="theorem_existence_linear_system"} applies to the system [\[eq:iteration\]](#eq:iteration){reference-type="eqref" reference="eq:iteration"}. We show how to treat three of the terms in the proof of [\[eq:integ_estimate_0\_order\]](#eq:integ_estimate_0_order){reference-type="eqref" reference="eq:integ_estimate_0_order"} for equation [\[eq:iteration\]](#eq:iteration){reference-type="eqref" reference="eq:iteration"}. Indeed, we have $$\int_\mathbb{R}(\tilde{w}_2)_t m \, dx \leq \| (\tilde{w}_2)_t \|_{-1} \| m \|_1 \leq \frac{1}{2 \varepsilon_1} \| (\tilde{w}_2)_t\|^2_{-1} + \frac{\varepsilon_1}{2} \| m \|_0^2 + \frac{\varepsilon_1}{2} \| m_x \|_0^2.$$ Moreover, $$\int_\mathbb{R}(\tilde{w}_2)_{xx} m \, dx = \int_\mathbb{R}(\tilde{w}_2)_t m \, dx,$$ and, furthermore, $$\begin{aligned} \int_\mathbb{R}(\tilde{w}_1)_{xxx} m \, dx &= \int_{\mathbb{R}} \partial_t (\tilde{w}_1)_x m \, dx \\ &\leq \| \partial_t (\tilde{w}_1)_x\|_{-1} \| m \|_{1}\\ &\leq \frac{1}{2 \varepsilon_2} \| \partial_t (\tilde{w}_1)_x \|^2_{-1} + \frac{\varepsilon_2}{2} \| m \|_0^2 + \frac{\varepsilon_2}{2} \| m_x \|_0^2.\end{aligned}$$ The other terms are treated similarly. Let $i \in \mathbb{N}_0$. 
Thanks to the Sobolev embedding theorem if $T,\mbox{ } \delta > 0$ are sufficiently small and ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s,[0,T]}^2 \leq \delta$, then $\overline{\rho}+ \tilde{w}_1(x,t) + \hat{\rho}(x,t) > 0$ for $x \in \mathbb{R}$ and $0 \leq t \leq T$. Therefore, it suffices to show that there exist $T > 0$ and sufficiently small $\delta > 0$ such that that the successive iterations satisfy $$\begin{aligned} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_i \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s,[0,T]}^2 &\leq \delta \leq \beta_s, \qquad i \in \mathbb{N}_0, \label{iteration_ineq_1}\\ {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_i - \hat{w}_{i-1} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s-2,[0,T]} &\leq \frac{1}{2} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_{i-1} - \hat{w}_{i-2} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s-2,[0,T]},\qquad i \geq 2 \label{iteration_ineq_2},\end{aligned}$$ Suppose that [\[iteration_ineq_1\]](#iteration_ineq_1){reference-type="eqref" reference="iteration_ineq_1"} and [\[iteration_ineq_2\]](#iteration_ineq_2){reference-type="eqref" reference="iteration_ineq_2"} are satisfied for $i \leq j$. The inequality [\[integral_estimate_higher_order\]](#integral_estimate_higher_order){reference-type="eqref" reference="integral_estimate_higher_order"} with $n = s$ implies that $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_{j+1} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s,[0,T]}^2 \leq C_n(T) \int_{0}^{T} \left ( \| f \|_s^2 + \| f_1 \|_{s+1}^2 \right ) dt.$$ Therefore we can choose $T > 0$ small enough such that ${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_{j+1} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s,[0,T]}^2 \leq \delta \leq \beta_s$. Now, for $i \in \mathbb{N}_0$ define $v_i := \hat{w}_{i+1} - \hat{w}_i$. Then, $v_i$ satisfies $$\left\{ \begin{aligned} {\mathcal{M}}(\hat{w}_i)v_i &= \big ({\mathcal{M}}(\hat{w}_{i-1}) - {\mathcal{M}}(\hat{w}_i) \big ) \hat{w}_i,\\ v_i(0) &= 0. \end{aligned} \right.$$ Moreover, since $H^n(\mathbb{R})$ is a Banach algebra for $n \geq 1$, we have $$\left \| \big ({\mathcal{M}}(\hat{w}_{j-1}) - {\mathcal{M}}(\hat{w}_j) \big ) \hat{w}_j \right \| _{s-2}^2 \leq C_{s-2} \delta \| \hat{\rho}_{j} - \hat{\rho}_{j-1} \|_{s-1}^2 + C_{s-2} \delta \| \hat{m}_{j} - \hat{m}_{j-1} \|_{s-2}^2.$$ Applying [\[integral_estimate_higher_order\]](#integral_estimate_higher_order){reference-type="eqref" reference="integral_estimate_higher_order"} with $n = s- 2$ we obtain $${\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_{j+1} - \hat{w}_j \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s-2,[0,T]} \leq \tilde{C}_{s-2}(T) \sqrt{\delta} {\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert \hat{w}_j - \hat{w}_{j-1} \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_{s-2,[0,T]}.$$ Choosing $\delta > 0$ such that $\tilde{C}_{s-2}(T) \sqrt{\delta} \leq 1/2$ concludes the proof of [\[iteration_ineq_1\]](#iteration_ineq_1){reference-type="eqref" reference="iteration_ineq_1"} and [\[iteration_ineq_2\]](#iteration_ineq_2){reference-type="eqref" reference="iteration_ineq_2"}. 
◻ Thanks to the Sobolev embedding we infer Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} from Lemma [Lemma 1](#lemma_local_existence){reference-type="ref" reference="lemma_local_existence"}. This concludes the proof of Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"}. # Linear decay rates {#seclinear} In this section, we establish the decay of solutions to the linearization of system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} around an arbitrary constant equilibrium state $U_* = (\rho_*, m_*) \in \mathbb{R}^2$, with $\rho_* > 0$, and satisfying the subsonicity assumption $$\label{subsonic} p'(\rho_*) > \frac{m_*^2}{\rho_*^2}.$$ In contrast to the estimates from Section [2](#seclocale){reference-type="ref" reference="seclocale"}, here we focus on *stability* estimates. To that end, we examine the decay structure of the system in the sense of Humpherys' analysis for linear higher order systems in the Fourier space (cf. [@Hu05]). Symbol symmetrizability and the existence of an appropriate compensating matrix symbol are key ingredients to establish the optimal decay of the semigroup. ## Linearization and symbol symmetrizability We start by observing that system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} can be recast in conservation form. Indeed, following Lattanzio *et al.* [@LMZ20b] let us write the Bohm potential as $$\rho\left(\displaystyle\frac{(\sqrt{\rho})_{xx}}{\sqrt{\rho}}\right)_x = \frac{1}{2} \Big( \rho \big( \ln \rho \big)_{xx}\Big)_x.$$ Therefore, system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} in conservation form reads $$\label{QHDc} \begin{cases} \rho_t + m_x=0,\\ m_t +\left(\displaystyle\frac{m^2}{\rho}+p(\rho)\right)_x=\mu m_{xx} + \tfrac{1}{2} k^2 \Big( \rho \big( \ln \rho \big)_{xx}\Big)_x. \end{cases}$$ The conservative form of the equations will play an important role in the establishment of the energy estimates with the appropiate regularity for the density variable. Consider an arbitrary constant equilibrium state $U_* = (\rho_*, m_*) \in \mathbb{R}^2$ satisfying $\rho_* > 0$ and [\[subsonic\]](#subsonic){reference-type="eqref" reference="subsonic"}, and let $(\rho + \rho_*, m + m_*)$ be a solution to [\[QHDc\]](#QHDc){reference-type="eqref" reference="QHDc"} where $\rho$ and $m$ represent perturbations. Substituting into [\[QHDc\]](#QHDc){reference-type="eqref" reference="QHDc"} and after some elementary algebra, one arrives at a nonlinear perturbation system of the form $$\label{QHDp} \begin{cases} \rho_t + m_x=0,\\ m_t + \left( p'(\rho_*) - \displaystyle{\frac{m_*^2}{\rho_*^2}}\right) \rho_x + \left( \displaystyle{\frac{2m_*}{\rho_*}}\right) m_x = \mu m_{xx} + \tfrac{1}{2} k^2 \rho_{xxx} + \partial_x N_2, \end{cases}$$ where $N_2$ contains the nonlinear terms and is of the form $$\label{orderN2} N_2 = O\big(\rho^2 + m^2 + \rho_x^2 + |\rho||\rho_{xx}|\big),$$ as the reader may easily verify. In other words, the system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} can be rewritten as a system of the form $$\label{nonlinQHD} U_t = {\mathcal{A}}U + \partial_x \begin{pmatrix}0 \\ N_2 \end{pmatrix},$$ in terms of the (perturbed) state variables $U = (\rho, m)^\top$ and where ${\mathcal{A}}$ is a differential operator with constant coefficients. Notice that the nonlinear terms are expressed in conservative form. 
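The algebraic identity used above to recast the Bohm potential in conservation form can be checked with a computer algebra system. The following is a minimal sketch (SymPy is used purely for illustration), verifying that the two expressions coincide for a smooth positive density:

```python
import sympy as sp

x = sp.symbols('x')
rho = sp.Function('rho', positive=True)(x)

# Left-hand side: rho * ( (sqrt(rho))_xx / sqrt(rho) )_x  (Bohm potential term)
lhs = rho * sp.diff(sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho), x)

# Right-hand side: (1/2) * ( rho * (log(rho))_xx )_x  (conservation form)
rhs = sp.Rational(1, 2) * sp.diff(rho * sp.diff(sp.log(rho), x, 2), x)

# The difference should simplify to zero, confirming the identity.
print(sp.simplify(lhs - rhs))
```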
Let us consider the linear part of system [\[QHDp\]](#QHDp){reference-type="eqref" reference="QHDp"}, which reads $$\label{linearQHD} U_t + A_* U_x = B_* U_{xx} + C_* U_{xxx},$$ where $$\label{defcoeffs} \begin{aligned} U = \begin{pmatrix} \rho \\ m \end{pmatrix}, &\qquad A_* = \begin{pmatrix} 0 & 1 \\ p'(\rho_*) - m_*^2 / \rho_*^2 & 2m_* / \rho_* \end{pmatrix},\\ B_* = \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix}, &\qquad C_* = \begin{pmatrix} 0 & 0 \\ \tfrac{1}{2}k^2 & 0 \end{pmatrix}, \end{aligned}$$ or, equivalently, $$\label{defcA} U_t = {\mathcal{A}}U := \big( -A_* \partial_x + B_* \partial_x^2 + C_* \partial_x^3\big) U.$$ Take the Fourier transform of [\[linearQHD\]](#linearQHD){reference-type="eqref" reference="linearQHD"}. This yields $$\label{FouQHD} \widehat{U}_t + \big( i \xi A_* + \xi^2 B_* + i \xi^3 C_* \big) \widehat{U}= 0,$$ where $\widehat{U}= \widehat{U}(\xi,t)$ denotes the Fourier transform of $U$. The evolution of the solutions to [\[FouQHD\]](#FouQHD){reference-type="eqref" reference="FouQHD"} reduces to solving the spectral equation $$\label{spect} \big( \lambda I + i \xi A_* + \xi^2 B_* + i \xi^3 C_* \big)\widehat{U}= 0,$$ for $\lambda \in \mathbb{C}$ and $\xi \in \mathbb{R}$ denoting time frequencies and (Fourier) wave number, respectively. It is said that the linear operator ${\mathcal{A}}$ is *strictly dissipative* if for each $\xi \neq 0$ then all solutions to the spectral equation [\[spect\]](#spect){reference-type="eqref" reference="spect"} satisfy $\mathrm{Re}\,\lambda (\xi) < 0$ (cf. Humpherys [@Hu05]; see also [@KaSh88a; @ShKa85]). **Remark 1**. Ueda *et al.* [@UDK12; @UDK18] further classify strictly dissipative systems as follows. The linear system is called strictly dissipative of type $(p,q)$, with $p, q \in \mathbb{Z}$, $p, q \geq 0$, provided that the solutions of the spectral problem [\[spect\]](#spect){reference-type="eqref" reference="spect"} satisfy $$\mathrm{Re}\,\lambda(\xi) \leq - \, \frac{C |\xi|^{2p}}{(1 + |\xi|^2)^{q}}, \qquad \forall \xi \neq 0,$$ for some uniform constant $C > 0$. The system is said to be of standard type when $p = q$ [@UDK12], and of regularity-loss type when $p < q$ [@UDK18]. Notice that the heat equation is a system with dissipativity of type $(1,0)$. Hence, the third case when $p > q$ is called dissipativity of regularity-gain type [@KSX22]. The type of dissipativity will be reflected in the decay rate of the solutions to the linearized system. Notice that strict dissipativity is equivalent to the stability of the essential spectrum of the linearized operator ${\mathcal{A}}$ when computed, for example, with respect to the space $L^2(\mathbb{R})$ of finite energy perturbations. The following result justifies the subsonicity assumption [\[subsonic\]](#subsonic){reference-type="eqref" reference="subsonic"} in the study of strict dissipativity. **Lemma 1**. *Suppose that a constant equilibrium state $(\rho_*,m_*) \in \mathbb{R}^2$ with $\rho_* > 0$ is *supersonic*, that is, $$\label{supersonic} p'(\rho_*) < \frac{m_*^2}{\rho_*^2}.$$ Then the operator ${\mathcal{A}}$ defined in [\[defcA\]](#defcA){reference-type="eqref" reference="defcA"} violates the strict dissipativity condition. More precisely, there exist certain values of $\xi \in\mathbb{R}$ for which the solutions to the spectral equation [\[spect\]](#spect){reference-type="eqref" reference="spect"} satisfy $\mathrm{Re}\,\lambda(\xi) > 0$.* *Proof.* See Appendix [5](#secappen){reference-type="ref" reference="secappen"}. 
◻ Now, we follow Humpherys [@Hu05] and split the symbol into even and odd terms. Let us define the symbols, $$\begin{aligned} A(\xi) &:= A_* + \xi^2 C_* = \begin{pmatrix} 0 & 1 \\ p'(\rho_*) -m_*^2 / \rho_*^2 + \tfrac{1}{2} k^2 \xi^2 & 2m_* / \rho_* \end{pmatrix}, \quad &\text{(odd)},\\ B(\xi) &:= \xi^2 B_* = \xi^2 \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix}, \quad &\text{(even)} \end{aligned}$$ so that the evolution equation [\[FouQHD\]](#FouQHD){reference-type="eqref" reference="FouQHD"} is recast as $$\label{FouQHD2} \widehat{U}_t + \big( i \xi A(\xi) + B(\xi) \big) \widehat{U}= 0.$$ Notice that in the matrix symbol $A(\xi)$ we have gathered the transport and dispersive terms together (the odd part of the symbol), whereas the only dissipation term due to viscosity (the even part of the symbol) is encoded into the matrix $B(\xi)$. In this fashion, Humpherys mimics the algebraic structure of second order (purely viscous) systems of Kawashima and Shizuta [@KaSh88a; @ShKa85] at the Fourier level. Humpherys thereby introduces the following fundamental concept of symbol symmetrization, which generalizes the standard notion of symmetrizability of Lax and Friedrichs [@FLa67; @Frd54] (see also Godunov [@Godu61a]). **Definition 1** (Humpherys [@Hu05]). The operator ${\mathcal{A}}$ is *symbol symmetrizable* if there exists a smooth, symmetric matrix-valued function, $S=S(\xi)>0$, positive-definitive, such that $S(\xi)A(\xi)$ and $S(\xi)B(\xi)$ are symmetric, with $S(\xi)B(\xi)\geq 0$ (positive semi-definite) for all $\xi \in \mathbb{R}$. **Remark 1**. Let us recall that a generic (quasilinear) system of equations of the form $U_t = \sum_j A_j(U) \partial_x^j U$ is said to be symmetrizable in the classical sense of Friedrichs if, for any constant state $U_*$, there exists a symmetric, positive definite matrix $S=S(U_*) > 0$ such that $S(U_*)A_{j}(U_*)$ are all simultaneously symmetric. Clearly, every symmetrizable system in the sense of Friedrichs is symbol symmetrizable, but the converse is not true. **Lemma 1**. *Assume the subsonicity [\[subsonic\]](#subsonic){reference-type="eqref" reference="subsonic"} of the equilibrium state $U_* = (\rho_*,m_*)$ with $\rho_* > 0$. Then the linearized QHD system [\[linearQHD\]](#linearQHD){reference-type="eqref" reference="linearQHD"} is symbol symmetrizable, but not symmetrizable in the sense of Friedrichs. One symbol symmetrizer is of the form $$\label{symm} S(\xi) = \begin{pmatrix} \alpha(\xi) & 0 \\ 0 & 1 \end{pmatrix} \in C^{\infty}\big( \mathbb{R}; \mathbb{R}^{2\times 2} \big),$$ where $$\label{defalpha} \alpha(\xi) := p'(\rho_*) - \frac{m_*^2}{\rho_*^2}+ \tfrac{1}{2}k^2 \xi^2 > 0, \qquad \xi \in \mathbb{R}.$$* *Proof.* It is easy to verify that the symbol $S(\xi)$ defined in [\[symm\]](#symm){reference-type="eqref" reference="symm"} is smooth, symmetric and positive definite because of the condition [\[subsonic\]](#subsonic){reference-type="eqref" reference="subsonic"}. By inspection, one thereby obtains $$S(\xi)A(\xi) = \begin{pmatrix} \alpha(\xi) & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ \alpha(\xi) & 2m_* / \rho_*\end{pmatrix} = \begin{pmatrix} 0 & \alpha(\xi) \\ \alpha(\xi) & 2m_* / \rho_* \end{pmatrix},$$ and $S(\xi)B(\xi) = B(\xi)$, which are symmetric matrices with $S(\xi)B(\xi) \geq 0$. This easily shows that the operator ${\mathcal{A}}$ is symbol symmetrizable. 
To prove that the system is not Friedrichs symmetrizable, suppose there exists a positive-definite symmetrizer of the form $$S = \begin{pmatrix} s_{1} & s_{2} \\ s_{2} & s_{3} \end{pmatrix}.$$ Then the condition on $SA_*$ and $SC_*$ to be simultaneously symmetric matrices implies that $S$ cannot be positive definite, as the reader may easily verify. The lemma is proved. ◻

**Remark 1**. To the best of our knowledge, the QHD system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} is only the third example of a symbol symmetrizable system which is not Friedrichs symmetrizable, apart from the isothermal Navier-Stokes-Korteweg model (cf. [@Hu05; @PlV22]) and its non-isothermal version (the so-called Navier-Stokes-Fourier-Korteweg system [@PlV23]). The existence of these simple and physically relevant counterexamples exhibits the importance of Definition [Definition 1](#defsymH){reference-type="ref" reference="defsymH"}.

Next, multiply equation [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} on the left by the symmetrizer $S = S(\xi)$ defined in [\[symm\]](#symm){reference-type="eqref" reference="symm"} to rewrite it in symmetric form, $$\label{sFouQHD} S(\xi) \widehat{U}_t + ( i \xi \widetilde{A}(\xi) + \widetilde{B}(\xi) ) \widehat{U}= 0,$$ where $$\widetilde{A}(\xi) := S(\xi) A(\xi) = \begin{pmatrix} 0 & \alpha(\xi) \\ \alpha(\xi) & 2m_* / \rho_* \end{pmatrix}, \qquad \widetilde{B}(\xi) := S(\xi) B(\xi) = \xi^2 \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix}.$$ Once the system in Fourier space is put into symmetric form, we can recall the following fundamental notions (cf. [@Hu05; @KaSh88a; @ShKa85]). Let $S$, $\widetilde{A}$, $\widetilde{B}\in C^{\infty} \left( \mathbb{R}; \mathbb{R}^{2 \times 2} \right)$ be smooth, real matrix-valued functions of the variable $\xi \in \mathbb{R}$. Assume that $S$, $\widetilde{A}$, $\widetilde{B}$ are symmetric for all $\xi \in \mathbb{R}$, $S >0$ is positive definite and $\widetilde{B}\geq 0$ is positive semi-definite. The triplet $(S, \widetilde{A}, \widetilde{B})$ is said to be genuinely coupled if for all $\xi \neq 0$ every vector $V \in \ker \widetilde{B}(\xi)$, with $V \neq 0$, satisfies the condition $\big( \varrho S(\xi) + \widetilde{A}(\xi) \big) V \neq 0$ for any $\varrho \in \mathbb{R}$. In that case we say that the operator ${\mathcal{A}}$ satisfies the *genuine coupling condition*. Likewise, under the same assumptions of symmetry, smoothness and positive semi-definiteness, if a smooth, real matrix-valued function, $K\in C^{\infty} \left( \mathbb{R}; \mathbb{R}^{2 \times 2} \right)$, satisfies

- $K(\xi)S(\xi)$ is skew-symmetric for all $\xi\in \mathbb{R}$; and,
- $\big[K(\xi)\widetilde{A}(\xi)\big]^{s}+ \widetilde{B}(\xi) \geq \theta(\xi) I > 0$ for all $\xi \in \mathbb{R}$, $\xi \neq 0$, and some $\theta = \theta(\xi) > 0$,

then $K$ is said to be a *compensating matrix symbol* for the triplet $(S, \widetilde{A}, \widetilde{B})$. Here $[M]^{s} := \frac{1}{2}(M+M^\top)$ denotes the symmetric part of any real matrix $M$. The concepts of strict dissipativity, genuine coupling and the existence of a compensating matrix function are equivalent to each other (see Theorems 3.3 and 6.3 by Humpherys [@Hu05]), as stated in the following equivalence theorem, under the extra constant multiplicity assumption.

**Theorem 1** (equivalence theorem [@Hu05]).
*Suppose that a symbol symmetrizer, $S= S(\xi)$, $S \in C^\infty(\mathbb{R}; \mathbb{R}^{2 \times 2})$, exists for the operator $\mathcal{A}$ in the sense of Definition [Definition 1](#defsymH){reference-type="ref" reference="defsymH"}, and that $\widetilde{A}(\xi) = S(\xi)A(\xi)$ is of constant multiplicity in $\xi$, that is, all its eigenvalues are semi-simple and have constant multiplicity for all $\xi \in \mathbb{R}$. Then the following conditions are equivalent:*

- *$\mathcal{A}$ is strictly dissipative.*
- *$\mathcal{A}$ is genuinely coupled.*
- *There exists a compensating matrix function for the triplet $(S,SA, SB)$.*

Our first observation is that the linearized QHD system [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} satisfies the genuine coupling and constant multiplicity conditions.

**Lemma 1**. *The triplet $(S, \widetilde{A}, \widetilde{B})$ is genuinely coupled. Moreover, the matrix symbol $\widetilde{A}(\xi)$ is of constant multiplicity in $\xi \in \mathbb{R}$.*

*Proof.* Clearly, for each $\xi \neq 0$ we have $\ker \widetilde{B}(\xi) = \{ (a, 0)^\top \, : \, a \in \mathbb{R}\} \subset \mathbb{R}^2$. Hence, for any $0 \neq V = (a,0)^\top \in \ker \widetilde{B}(\xi)$, with $\xi \neq 0$, and any $\varrho \in \mathbb{R}$, there holds $$\big( \varrho S(\xi) + \widetilde{A}(\xi) \big) V = \begin{pmatrix} \varrho \alpha(\xi) & \alpha(\xi) \\ \alpha(\xi) & \varrho + 2m_* / \rho_* \end{pmatrix} \begin{pmatrix} a \\ 0\end{pmatrix} = \begin{pmatrix} a \varrho \alpha(\xi) \\ a \alpha(\xi) \end{pmatrix} \neq 0,$$ because $a \neq 0$ and $\alpha(\xi) > 0$ for all $\xi$. Therefore, the triplet $(S, \widetilde{A}, \widetilde{B})$ is genuinely coupled. Upon an explicit computation of the eigenvalues of $\widetilde{A}(\xi)$, we obtain that $\det (\nu I - \widetilde{A}(\xi)) = 0$ if and only if $$\nu = \nu_\pm(\xi) = \frac{m_*}{\rho_*} \pm \sqrt{\frac{m_*^2}{\rho_*^2} + \alpha(\xi)^2},$$ yielding two real and simple eigenvalues, $\nu_-(\xi) < 0 < \nu_+(\xi)$, which never coalesce inasmuch as $\alpha(\xi) > 0$ for all $\xi$. Hence, the constant multiplicity assumption is also fulfilled. ◻

## The compensating matrix symbol

By virtue of the equivalence Theorem [Theorem 1](#HuThSym){reference-type="ref" reference="HuThSym"} and Lemma [Lemma 1](#lemgencoup){reference-type="ref" reference="lemgencoup"}, we deduce the existence of a compensating matrix symbol for the triplet $(S, \widetilde{A}, \widetilde{B})$ associated with the linear symmetric QHD system [\[sFouQHD\]](#sFouQHD){reference-type="eqref" reference="sFouQHD"}. In applications, however, it is more convenient to construct the symbol directly. For that purpose, let us go back to the (unsymmetrized) original system [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} and introduce the following rescaling of variables, $$\label{hV} \widehat{V}:= S(\xi)^{1/2} \widehat{U},$$ where $S(\xi) > 0$ is the symmetrizer from Lemma [Lemma 1](#Sym2Full){reference-type="ref" reference="Sym2Full"}.
Hence, system [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} transforms into $$\label{eqforV} \widehat{V}_{t} + \big( i \xi \widehat{A}(\xi) + \widehat{B}(\xi) \big) \widehat{V}= 0,$$ where $$\widehat{A}(\xi) := S(\xi)^{1/2} A(\xi) S(\xi)^{-1/2},\qquad \widehat{B}(\xi) := S(\xi)^{1/2} B(\xi) S(\xi)^{-1/2}.$$ Thus, direct computations yield $$\widehat{A}(\xi) = \begin{pmatrix} 0 & \alpha(\xi)^{1/2} \\ \alpha(\xi)^{1/2} & 2m_* / \rho_* \end{pmatrix}, \qquad \widehat{B}(\xi) = \xi^2 \begin{pmatrix} 0 & 0 \\ 0 & \mu \end{pmatrix} = \xi^2 B_*.$$ Notice that $\widehat{A}$ and $\widehat{B}$ are both smooth and symmetric, with $\widehat{B}\geq 0$. In some sense, we have just symmetrized the original system with $S(\xi)^{1/2}$ instead. The following lemma appropriately chooses the compensating matrix function and provides more information than the equivalence theorem. **Lemma 1**. *There exists a smooth compensating matrix symbol, $\widehat{K}\in C^\infty(\mathbb{R}; \mathbb{R}^{2 \times 2})$, $\widehat{K}= \widehat{K}(\xi)$, for the triplet $(I, \widehat{A}(\xi), B_*)$. In other words, $\widehat{K}$ is skew-symmetric and $$\label{compmatprop} [\widehat{K}(\xi) \widehat{A}(\xi)]^s + B_* \geq {\theta} I > 0,$$ for some uniform constant ${\theta} > 0$ independent of $\xi \in \mathbb{R}$. In addition, the following estimates hold, $$\label{tiKbded} |\xi \widehat{K}(\xi)|, | \widehat{K}(\xi) | \leq C,$$ for all $\xi \in \mathbb{R}$ and some uniform constant $C>0$.* *Proof.* Let us proceed by inspection. Consider a compensating matrix symbol of the form $$\widehat{K}(\xi) = \epsilon q(\xi) \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$ with $\epsilon > 0$ constant and $q(\xi) > 0$ real and smooth, both to be chosen later. Clearly, $\widehat{K}$ is skew-symmetric. Let us compute $$\widehat{K}(\xi) \widehat{A}(\xi) = \epsilon q(\xi) \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & \alpha(\xi)^{1/2} \\ \alpha(\xi)^{1/2} & 2m_* / \rho_* \end{pmatrix} = \epsilon q(\xi) \begin{pmatrix} \alpha(\xi)^{1/2} & 2m_* / \rho_* \\ 0 & -\alpha(\xi)^{1/2} \end{pmatrix}.$$ The symmetric part of this matrix is $$[\widehat{K}(\xi) \widehat{A}(\xi)]^s = \epsilon q(\xi) \begin{pmatrix} \alpha(\xi)^{1/2} & m_* / \rho_* \\m_* / \rho_* & -\alpha(\xi)^{1/2} \end{pmatrix}.$$ Hence, let us choose $q(\xi) \equiv \alpha(\xi)^{-1/2}$, real smooth and positive, to obtain for every $y = (y_1, y_2) \in \mathbb{R}^2$ and all $\xi \in \mathbb{R}$ the quadratic form $$\begin{aligned} Q(y,\xi) &= \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}^{\top} \! 
\big( [\widehat{K}(\xi) \widehat{A}(\xi)]^s + B_* \big) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \\ &=\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}^{\top} \begin{pmatrix} \epsilon & \epsilon \alpha(\xi)^{-1/2} m_* / \rho_* \\ \epsilon \alpha(\xi)^{-1/2} m_* / \rho_* & \mu - \epsilon \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}\\ &= a_1 y_1^2 + b_{12} y_1 y_2 + a_2 y_2^2, \end{aligned}$$ with $$a_1 = \epsilon > 0, \quad a_2 = \mu -\epsilon, \quad b_{12} = \frac{2 \epsilon m_*}{\alpha(\xi)^{1/2} \rho_*}.$$ If we choose $0 < \epsilon \ll 1$ sufficiently small such that $$a_2 > 0, \qquad a_2 - \frac{b_{12}^2}{2a_1} > 0,$$ then clearly, $$\begin{aligned} Q(y,\xi) &= \tfrac{1}{2} a_1 y_1^2 + \tfrac{1}{2} a_1 \Big( y_1 + \frac{b_{12}}{a_1} y_2 \Big)^2 + \Big( a_2 - \frac{b_{12}^2}{2a_1} \Big) y_2^2\\ &\geq \tfrac{1}{2} a_1 y_1^2 + \Big( a_2 - \frac{b_{12}^2}{2a_1} \Big) y_2^2\\ &\geq \theta |y|^2, \end{aligned}$$ with $\theta = \min \{ \tfrac{1}{2} a_1, a_2 - b_{12}^2/(2a_1) \} > 0$. That is, the quadratic form is positive definite. Hence, we need to choose $0 < \epsilon \ll 1$ such that $\mu > \epsilon$ and $$a_2 - \frac{b_{12}^2}{2a_1} = \mu - \Big( 1 + \frac{2 m_*^2}{\rho_*^2 \alpha(\xi)}\Big) \epsilon > 0.$$ Notice, however, that $\alpha(\xi) \geq \alpha_* > 0$ for all $\xi \in \mathbb{R}$ with $$\alpha_* := p'(\rho_*) - \frac{m_*^2}{\rho_*^2} > 0,$$ which is a positive constant because of the subsonicity condition [\[subsonic\]](#subsonic){reference-type="eqref" reference="subsonic"}. Therefore, $$a_2 - \frac{b_{12}^2}{2a_1} \geq \mu - \Big( 1 + \frac{2 m_*^2}{\rho_*^2 \alpha_*}\Big) \epsilon > 0,$$ for all $\xi$ and it suffices to choose $$\epsilon = \epsilon_* := \tfrac{1}{2} \frac{\mu \alpha_* \rho_*^2}{\alpha_* \rho_*^2 + 2 m_*^2} > 0,$$ in order to obtain $Q(y,\xi) \geq \theta |y|^2$ for all $\xi \in \mathbb{R}$ and all $y \in \mathbb{R}^2$ with a constant $$\theta = \min \Big\{ \tfrac{1}{2} \epsilon_*, \mu - \epsilon_* \Big( 1 + \frac{2m_*^2}{\alpha_* \rho_*^2}\Big) \Big\} > 0,$$ independent of $\xi$. This shows [\[compmatprop\]](#compmatprop){reference-type="eqref" reference="compmatprop"}. Therefore, $$\widehat{K}(\xi) = \frac{\epsilon_*}{\alpha(\xi)^{1/2}}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$ is the compensating symbol we look for. Clearly, $\widehat{K}$ is smooth in $\xi$. Finally, since $0 < \alpha(\xi)^{-1/2} \leq \alpha_*^{-1/2}$ and since $|\xi| \alpha(\xi)^{-1/2}$ is uniformly bounded for all $\xi \in \mathbb{R}$, we conclude that there exists a uniform constant $C > 0$ such that [\[tiKbded\]](#tiKbded){reference-type="eqref" reference="tiKbded"} holds. The lemma is proved. ◻

**Remark 1**. A few comments are in order. Notice that we demand $\widehat{K}$ to be a compensating matrix symbol for the triplet $(I, \widehat{A}, B_*)$ and not for $(I, \widehat{A}, \widehat{B})$. This feature will be useful in the establishment of the energy estimate. In addition, we have constructed $\widehat{K}$ such that the constant $\theta > 0$ in [\[compmatprop\]](#compmatprop){reference-type="eqref" reference="compmatprop"} can be chosen uniformly in $\xi \in \mathbb{R}$ and that both $|\widehat{K}(\xi)|$ and $|\xi \widehat{K}(\xi)|$ are uniformly bounded above. These are properties that cannot be deduced from the equivalence theorem.

## Linear decay of the associated semigroup

**Lemma 1** (basic pointwise estimate).
*The solutions $\widehat{V}= \widehat{V}(\xi,t)$ to the linear system [\[eqforV\]](#eqforV){reference-type="eqref" reference="eqforV"} satisfy the pointwise estimate $$\label{bestV} |\widehat{V}(\xi,t)| \leq C \exp (- \omega_0 \xi^2 t) |\widehat{V}(\xi,0)|,$$ for all $\xi \in \mathbb{R}$, $t \geq 0$ and some uniform constants $C,\omega_0 > 0$.* *Proof.* The proof follows that of Lemma 5.2 in [@PlV22] almost word by word and therefore we gloss over some of the details. The important points in the present case are the following. By taking the standard product in $\mathbb{C}^2$ and for any $\delta > 0$ sufficiently small, the skew-symmetry of the matrix $\widehat{K}$ allows us to define an energy of the form $${\mathcal{E}}= |\widehat{V}|^2 - \delta \xi \langle \widehat{V}, i \widehat{K}(\xi) \widehat{V}\rangle,$$ which is real, positive and equivalent to $| \widehat{V}|^2$, that is, $C_1^{-1} |\widehat{V}|^2 \leq {\mathcal{E}}\leq C_1 |\widehat{V}|^2$ for some uniform $C_1 > 0$. Since $\widehat{B}(\xi) = \xi^2 B_*$, the inner product in $\mathbb{C}^2$ of $\widehat{V}$ with equation [\[eqforV\]](#eqforV){reference-type="eqref" reference="eqforV"} yields $$\label{la8} \tfrac{1}{2} \partial_t |\widehat{V}|^2 + \xi^2 \langle \widehat{V}, B_* \widehat{V}\rangle = 0,$$ where we have used the fact that $\widehat{A}$ and $B_*$ are symmetric. Likewise, multiplying the equation by by $- i \xi \widehat{K}(\xi)$ one arrives at the estimate $$\label{la10} - \tfrac{1}{2} \xi \partial_t \langle \widehat{V}, i \widehat{K}(\xi) \widehat{V}\rangle + \xi^2 \langle \widehat{V}, [\widehat{K}(\xi) \widehat{A}(\xi)]^s \widehat{V}\rangle \leq \varepsilon\xi^2 |\widehat{V}|^2 + C_\varepsilon\xi^2 \langle \widehat{V}, B_* \widehat{V}\rangle,$$ for any $\varepsilon> 0$ and some uniform $C_\varepsilon> 0$. Multiply inequality [\[la10\]](#la10){reference-type="eqref" reference="la10"} by $\delta > 0$ and add it to equation [\[la8\]](#la8){reference-type="eqref" reference="la8"}, yielding $$\label{la11} \begin{aligned} \tfrac{1}{2} \partial_t \big( |\widehat{V}|^2 - \delta \xi \langle \widehat{V}, i \widehat{K}(\xi) \widehat{V}\rangle \big) + \xi^2 \big( \delta \langle \widehat{V}, [\widehat{K}(\xi) \widehat{A}(\xi)]^s \widehat{V}\rangle &+ (1-\delta C_\varepsilon) \langle \widehat{V}, B_* \widehat{V}\rangle \big) \\&\leq \varepsilon\delta \xi^2 |\widehat{V}|^2. \end{aligned}$$ If we choose $\varepsilon= \tfrac{1}{2} \theta$ where $\theta > 0$ is the uniform constant in [\[compmatprop\]](#compmatprop){reference-type="eqref" reference="compmatprop"} then the constant $C_\varepsilon> 0$ is therefore fixed and for $0 < \delta \ll 1$ sufficiently small one obtains $$\delta \langle \widehat{V}, [\widehat{K}(\xi) \widehat{A}(\xi)]^s \widehat{V}\rangle + (1-\delta C_\varepsilon)\langle\widehat{V}, \widetilde{B}\widehat{V}\rangle \geq \delta \langle \widehat{V}, ([\widehat{K}(\xi) \widehat{A}(\xi)]^s + B_*) \widehat{V}\rangle \geq \delta \theta |\widehat{V}|^2,$$ where we have used the main property of the compensating matrix symbol (estimate [\[compmatprop\]](#compmatprop){reference-type="eqref" reference="compmatprop"}). Substitution into [\[la11\]](#la11){reference-type="eqref" reference="la11"} yields $$\partial_t {\mathcal{E}}+ \omega_0 \xi^2 {\mathcal{E}}\leq 0,$$ where $\omega_0 := \delta \theta / C_1 > 0$. This implies estimate [\[bestV\]](#bestV){reference-type="eqref" reference="bestV"}. Details are left to the reader. ◻ **Remark 1**. 
It is to be noticed that [\[bestV\]](#bestV){reference-type="eqref" reference="bestV"} implies that the eigenvalues in Fourier space of system [\[spect\]](#spect){reference-type="eqref" reference="spect"} satisfy $\lambda(\xi) \leq - \omega_0 \xi^2$, with $\omega_0 > 0$, yielding a dissipative structure of regularity-gain type. The pointwise estimate of Lemma [Lemma 1](#lembee){reference-type="ref" reference="lembee"} implies the following estimate for the solutions to [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"}. **Corollary 1**. *The solutions $\widehat{U}(\xi,t) = (\widehat{U}_1, \widehat{U}_2)(\xi,t)$ to the linear system [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} satisfy the estimate $$\label{beeU} \begin{aligned} (1+\xi^{2})| \widehat{U}_{1}(\xi, t) |^{2} + &| \widehat{U}_{2}(\xi, t) |^{2} \leq C \exp(- 2\omega_0 \xi^{2} t ) \big( (1+\xi^{2})| \widehat{U}_{1}(\xi, 0) |^{2} + | \widehat{U}_{2}(\xi, 0) |^{2}\big), \end{aligned}$$ for all $t\geq 0$, $\xi \in \mathbb{R}$ and some uniform constant $C>0$.* *Proof.* Suppose $\widehat{U}= \widehat{U}(\xi,t)$ is a solution to system [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} . Then from transformation [\[hV\]](#hV){reference-type="eqref" reference="hV"} we know that $\widehat{V}= S(\xi)^{1/2} \widehat{U}$ satisfies [\[eqforV\]](#eqforV){reference-type="eqref" reference="eqforV"} and, therefore, Lemma [Lemma 1](#lembee){reference-type="ref" reference="lembee"} applies. Hence, from estimate [\[bestV\]](#bestV){reference-type="eqref" reference="bestV"} we obtain $$\begin{aligned} |\widehat{V}|^2 = | S(\xi)^{1/2} \widehat{U}|^2 &= \left| \begin{pmatrix} \alpha(\xi)^{1/2} & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \widehat{U}_1 \\ \widehat{U}_2\end{pmatrix} \right|^2 = \alpha(\xi) |\widehat{U}_1|^2 + |\widehat{U}_2|^2 \\ &\leq C \exp (- 2\omega_0 \xi^2 t) |\widehat{V}(\xi,0)|^2\\ &= C \exp (- 2\omega_0 \xi^2 t) \big( \alpha(\xi) |\widehat{U}_1(\xi,0)|^2 + |\widehat{U}_2(\xi,0)|^2\big). \end{aligned}$$ From the definition of $\alpha(\xi)$ (see [\[defalpha\]](#defalpha){reference-type="eqref" reference="defalpha"}) we clearly deduce that there exist constants $C_j > 0$ such that $C_2 (1+\xi^2) \leq \alpha(\xi) \leq C_1(1+\xi^2)$ for all $\xi \in \mathbb{R}$. Upon substitution we obtain estimate [\[beeU\]](#beeU){reference-type="eqref" reference="beeU"}. ◻ The decay estimates [\[beeU\]](#beeU){reference-type="eqref" reference="beeU"} of the solutions to the evolution equation in Fourier space [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} readily imply the decay of the semigroup associated to the linear evolution system [\[linearQHD\]](#linearQHD){reference-type="eqref" reference="linearQHD"}. Notice the higher regularity on the density variable (here $U_1 = \rho$) that appears in [\[beeU\]](#beeU){reference-type="eqref" reference="beeU"}. Consider the abstract Cauchy problem for the linear system [\[linearQHD\]](#linearQHD){reference-type="eqref" reference="linearQHD"}, $$\label{Cauchylin} \left\{ \begin{aligned} U_t &= {\mathcal{A}}U,\\ U(0) &= f, \end{aligned} \right.$$ where ${\mathcal{A}}:= -A_* \partial_x + B_* \partial_x^2 + C_* \partial_x^3$ is a differential operator with constant coefficients. We densely define the operator on the space $Z := L^2(\mathbb{R}) \times L^2(\mathbb{R})$ with domain $D({\mathcal{A}}) = H^{s+1}(\mathbb{R}) \times H^s(\mathbb{R})$ for some $s \geq 3$. **Lemma 1**. 
*The differential operator ${\mathcal{A}}: Z \to Z$ is the infinitesimal generator of a $C_0$-semigroup, $\{ e^{t{\mathcal{A}}} \}_{t\geq 0}$, in $Z = L^2(\mathbb{R}) \times L^2(\mathbb{R})$. Moreover, for any $f \in \big( H^{s+1}(\mathbb{R}) \times H^s(\mathbb{R}) \big) \cap \big( L^1(\mathbb{R}) \times L^1(\mathbb{R}) \big)$, $s \geq 2$, and all $0 \leq \ell \leq s$, $t > 0$, there holds the estimate $$\label{linestsg} \begin{aligned} \Big( \| \partial_x^{\ell} (e^{t {\mathcal{A}}} f)_1(t) \|_1^2 + \| \partial_x^{\ell} (e^{t {\mathcal{A}}} f)_2(t) \|_0^2 \Big)^{1/2} &\leq C e^{-c_1t} \Big( \| \partial_x^{\ell } f_1 \|_1^2 + \| \partial_x^{\ell} f_2 \|_0^2 \Big)^{1/2} +\\ & \;\; + C (1+t)^{-(\ell/2 + 1/4)} \| f \|_{L^1}, \end{aligned}$$ for some uniform constants $C, c_1 > 0$.* *Proof.* The infinitesimal semigroup generated by ${\mathcal{A}}$ is necessarily associated to the solutions to the linear problem [\[Cauchylin\]](#Cauchylin){reference-type="eqref" reference="Cauchylin"}, which can be expressed in terms of the inverse Fourier transform of the solutions to [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"}. Indeed, suppose that $\widehat{U}= \widehat{U}(\xi,t)$ is the solution to [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"} with initial condition $\widehat{U}(\xi,0) = \widehat{f}(\xi)$. Then $U(x,t) = (e^{t {\mathcal{A}}} f)(x)$ is the solution to [\[Cauchylin\]](#Cauchylin){reference-type="eqref" reference="Cauchylin"} with $U(0) = f = (f_1, f_2)^\top$, where $$\label{repsg} (e^{t {\mathcal{A}}} f)(x) := \frac{1}{\sqrt{2 \pi}} \int_\mathbb{R}e^{i x \xi}e^{t R(i\xi)} \hat{f}(\xi) \, d \xi,$$ and $$R(z) := - \begin{pmatrix} 0 & z \\ z \big( p'(\rho_*) -m_*^2 / \rho_*^2 - \tfrac{1}{2} k^2 z^2\big) & 2zm_* / \rho_* + z^2 \mu \end{pmatrix}, \quad z \in \mathbb{C},$$ $$R(i\xi) = - (i \xi A(\xi) + B(\xi)), \qquad \xi \in \mathbb{R}.$$ That $\{ e^{t{\mathcal{A}}}\}_{t \geq 0}$ is a $C_0$-semigroup, where ${\mathcal{A}}$ is the constant coefficient differential operator defined above, follows from standard Fourier estimates and semigroup theory (cf. [@Pazy83; @EN06]); we omit the details. Now, since $\widehat{U}$ satisfies [\[FouQHD2\]](#FouQHD2){reference-type="eqref" reference="FouQHD2"}, then by Corollary [Corollary 1](#corlindecayW){reference-type="ref" reference="corlindecayW"} estimate [\[beeU\]](#beeU){reference-type="eqref" reference="beeU"} holds. Fix $\ell \in [0,s]$, multiply [\[beeU\]](#beeU){reference-type="eqref" reference="beeU"} by $\xi^{2 \ell}$ and integrate in $\xi \in \mathbb{R}$. This yields $$\int_\mathbb{R}\Big[ \xi^{2 \ell}(1+\xi^2) |\widehat{U}_1(\xi,t)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,t)|^2 \Big] \, d\xi \leq C J_1(t) + C J_2(t),$$ where, $$\begin{aligned} J_1(t) &:= \int_{-1}^1 \Big[ \xi^{2 \ell}(1+\xi^2) |\widehat{U}_1(\xi,0)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,0)|^2 \Big] \exp ( - 2\omega_0 \xi^2 t) \, d \xi,\\ J_2(t) &:= \int_{|\xi|\geq 1} \Big[ \xi^{2 \ell}(1+\xi^2) |\widehat{U}_1(\xi,0)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,0)|^2 \Big]\exp ( - 2\omega_0 \xi^2 t) \, d \xi. 
\end{aligned}$$ Noticing that, clearly, $\exp (-2 \omega_0 \xi^2 t) \leq \exp (-\omega_0 \xi^2 t)$, we deduce $$J_1(t) \leq 2 \int_{-1}^1 \xi^{2 \ell} |\widehat{U}(\xi,0)|^2 e^{-\omega_0t \xi^2} \, d\xi \leq 2 \sup_{\xi \in \mathbb{R}} |\widehat{U}(\xi,0)|^2 \int_{-1}^1 \xi^{2 \ell} e^{-\omega_0t \xi^2} \, d\xi.$$ But since for any fixed $\ell \in [0,s]$ and any constant $\omega_0 > 0$, the integral $$H_0(t) := (1+t)^{\ell + 1/2} \int_{-1}^1 \xi^{2 \ell} e^{-\omega_0t \xi^2} \, d\xi \leq C,$$ is uniformly bounded for all $t > 0$ with some constant $C > 0$ (see Lemma A.1 in [@PlV22]), we arrive at $$\label{estJ1} J_1(t) \leq C (1+t)^{-(\ell + 1/2)} \| U(x,0)\|_{L^1}^2.$$ Now, if $|\xi| \geq 1$ then $\exp (-2 \omega_0 t\xi^2) \leq \exp(-\omega_0 t)$. Hence, Plancherel's theorem implies that $$\begin{aligned} J_2(t) &\leq e^{-\omega_0t} \int_{|\xi|\geq 1} \xi^{2 \ell}(1+\xi^2) |\widehat{U}_1(\xi,0)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,0)|^2 \, d\xi \\ &= e^{-\omega_0t} \int_{|\xi|\geq 1} ( \xi^{2\ell}+ \xi^{2(\ell +1)} ) |\widehat{U}_1(\xi,0)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,0)|^2 \, d\xi \\ &\leq e^{-\omega_0t} \int_\mathbb{R}(\xi^{2\ell}+ \xi^{2(\ell+1)}) |\widehat{U}_1(\xi,0)|^2 + \xi^{2 \ell} |\widehat{U}_2(\xi,0)|^2 \, d\xi \\ &= e^{-\omega_0t} \big( \| \partial_x^{\ell} U_1(0) \|_1^2 + \| \partial_x^{\ell} U_2(0) \|_0^2 \big), \end{aligned}$$ for all $t > 0$. Combining both estimates we obtain the result with $c_1 = \omega_0/2 > 0$, as claimed. ◻ # Global existence and decay of perturbations of equilibrium states {#secglobal} In this section we focus on the nonlinear problem. We prove the global existence and the decay of small perturbations to subsonic equilibrium states. ## Nonlinear energy estimates We start by establishing a priori energy estimates for solutions to the full nonlinear problem [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"}. Let $U_* = (\rho_*, m_*) \in \mathbb{R}^2$ be a subsonic equilibrium state with $\rho_* > 0$. Then if $(\rho + \rho_*,m + m_*)$ solves [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} with $U = (\rho,m)$ being a perturbation, then the latter satisfies the equivalent nonlinear system [\[nonlinQHD\]](#nonlinQHD){reference-type="eqref" reference="nonlinQHD"} where ${\mathcal{A}}$ is the linearized operator around $U_*$ defined in [\[defcA\]](#defcA){reference-type="eqref" reference="defcA"}. Let us denote the initial perturbation as $(\rho_0, m_0)$ and suppose that $$\rho_0 \in H^{s+1}(\mathbb{R}) \cap L^1(\mathbb{R}), \qquad m_0 \in H^{s}(\mathbb{R}) \cap L^1(\mathbb{R}),$$ for some $s \geq 3$. From the local existence theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} we know that if $E_s(0)^{1/2} = (\| \rho_0 \|_{s+1}^2 + \| m_0 \|_s^2 )^{1/2} < a_0$ then there exists a local solution to system [\[nonlinQHD\]](#nonlinQHD){reference-type="eqref" reference="nonlinQHD"} in the perturbation variables, namely, $U = (\rho,m) \in X_s((0,T); r, R)$, for some $R \geq r > 0$, $T > 0$, and with initial condition $U(0) = U_0 := (\rho_0, m_0)$, such that $U + U_* = (\rho + \rho_*, m + m_*)$ solves the original system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} with initial condition $U(0) + U_* = (\rho_0 + \rho_*, m_0 + m_*)$. 
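The decay estimates that drive the nonlinear argument below ultimately encode the spectral bound $\mathrm{Re}\,\lambda(\xi) \leq -\omega_0 \xi^2$ for the Fourier symbol of the linearized operator. As a side remark, this bound is easy to confirm numerically; the sketch below is only an illustration, with assumed subsonic parameter values rather than quantities from the model.

```python
# Illustrative check (assumed parameter values) of the regularity-gain bound
# Re(lambda(xi)) <= -omega_0 * xi^2 for the symbol R(i xi) = -(i xi A(xi) + B(xi)).
import numpy as np

rho, m, gamma, k, mu = 1.0, 0.3, 1.4, 0.5, 0.8   # assumed subsonic state and constants
p_prime = gamma * rho**(gamma - 1.0)
assert p_prime > (m / rho)**2                    # subsonicity of the sample state

def symbol(xi):
    A = np.array([[0.0, 1.0],
                  [p_prime - (m / rho)**2 + 0.5 * k**2 * xi**2, 2.0 * m / rho]])
    B = xi**2 * np.array([[0.0, 0.0],
                          [0.0, mu]])
    return -(1j * xi * A + B)

xis = np.linspace(1e-3, 20.0, 400)
rates = [-np.linalg.eigvals(symbol(xi)).real.max() / xi**2 for xi in xis]
omega_0 = min(rates)
print(f"Re(lambda) <= -omega_0*xi^2 on the sample, with omega_0 ~ {omega_0:.4f}")
assert omega_0 > 0.0
```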
This local solution can be written in terms of the associated semigroup and the variations of constants formula, $$\label{SemSol} U(x,t)= e^{t {\mathcal{A}}}U_{0} + \int_{0}^{t} e^{(t- \tau){\mathcal{A}}} \partial_x \begin{pmatrix} 0 \\ N_2\end{pmatrix}(\tau) \, d\tau.$$ Therefore, for any fixed $0 \leq \ell \leq s-1$ we apply the decay estimates for the semigroup (see Lemma [Lemma 1](#lemsg){reference-type="ref" reference="lemsg"} and estimate [\[linestsg\]](#linestsg){reference-type="eqref" reference="linestsg"}) to obtain $$\label{star1} \begin{aligned} \Big( \Vert \partial_{x}^{\ell}\rho(t) \Vert_{1}^{2} + \Vert \partial_{x}^{\ell}m(t) \Vert_{0}^{2} \Big)^{1/2} &\leq C e^{-c_{1}t}\Big( \Vert \partial_{x}^{\ell}\rho_{0} \Vert_{1}^{2} +\Vert \partial_{x}^{\ell}m_{0} \Vert^{2}_0 \Big)^{1/2} + \\ &\quad + C( 1+t )^{-(1/4 + \ell/2)} \big( \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\big) +\\ &\quad + C\int_{0}^{t} \Vert \partial_{x}^{\ell}(e^{(t-\tau){\mathcal{A}}} \partial_x N_{2}(\tau) )\Vert_0 \, d\tau . \end{aligned}$$ From the representation of the semigroup in [\[repsg\]](#repsg){reference-type="eqref" reference="repsg"}, which leads to the expression $\widehat{U}= e^{-t R(i\xi)} \widehat{U}(0)$ for any solution to the linear problem, it is easy to verify the following identity, $$\partial_x^{\ell} \big( e^{t {\mathcal{A}}} \partial_x f \big) = \partial_x^{\ell+1} \big( e^{t {\mathcal{A}}} f\big),$$ for any $f\in H^{s}(\mathbb{R})$, $0 \leq \ell \leq s-1$ and $t \geq 0$; details are left to the reader. Therefore, we may apply estimate [\[linestsg\]](#linestsg){reference-type="eqref" reference="linestsg"} once again, but now with $\ell + 1 \leq s$ replacing $\ell$, in order to arrive at $$\label{star2} \begin{aligned} \int_{0}^{t} \Vert \partial_{x}^{\ell}(e^{(t-\tau){\mathcal{A}}} \partial_x N_{2}(\tau) \Vert_0 \: d\tau &\leq C \int_0^t e^{-c_1(t-\tau)} \| \partial_x^{\ell+1} N_2(\tau) \|_0 \, d\tau + \\ &\quad + C \int_0^t (1+t-\tau)^{\ell/2 + 3/4} \| N_2(\tau) \|_{L^1} \, d\tau. \end{aligned}$$ Notice that the particular (conservative) form of the nonlinear term, namely $\partial_x (0, N_2)^\top$, is crucial to obtain the algebraic time decay inside that last integral. Upon substitution we obtain $$\label{DerEstNlop} \begin{aligned} \Big( \Vert \partial_{x}^{\ell}\rho(t) \Vert_{1}^{2} + \Vert \partial_{x}^{\ell}m(t) \Vert_{0}^{2} \Big)^{1/2} &\leq C e^{-c_{1}t}\Big( \Vert \partial_{x}^{\ell}\rho_{0} \Vert_{1}^{2} +\Vert \partial_{x}^{\ell}m_{0} \Vert^{2}_0 \Big)^{1/2} + \\ &\quad + C( 1+t )^{-(1/4 + \ell/2)} \big( \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\big) +\\ &\quad + C \int_0^t e^{-c_1(t-\tau)} \| \partial_x^{\ell+1} N_2(\tau) \|_0 \, d\tau + \\ &\quad + C \int_0^t (1+t-\tau)^{\ell/2 + 3/4} \| N_2(\tau) \|_{L^1} \, d\tau, \end{aligned}$$ for all $0 \leq \ell \leq s-1$. Summing up estimates [\[DerEstNlop\]](#DerEstNlop){reference-type="eqref" reference="DerEstNlop"} for $\ell = 0, 1, \ldots, s-1$ yields $$\begin{aligned} \| \rho(t) \|_s + \| m(t) \|_{s-1} &\leq C e^{-c_1 t} \big( \| \rho_0 \|_s + \|m_0 \|_{s-1} \big) + C (1+t)^{-1/4} \big( \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\big) + \\ &\quad + C \int_0^t e^{-c_1(t - \tau)} \| N_2(\tau) \|_s \, d \tau \, + \\ &\quad + C \int_0^t (1+t - \tau)^{-3/4} \| N_2(\tau) \|_{L^1} \, d \tau. 
\end{aligned}$$ Since, clearly, there exists a uniform constant $C > 0$ such that $e^{-c_1 t} \leq C(1+t)^{-1/4}$ for all $t \geq 0$, we simplify last estimate as $$\begin{aligned} \| \rho(t) \|_s + \| m(t) \|_{s-1} &\leq C (1+t)^{-1/4} \Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big) + \nonumber\\ &\quad + C \int_0^t e^{-c_1(t - \tau)} \| N_2(\tau) \|_s \, d \tau \, + \label{estimatef}\\ &\quad + C \int_0^t (1+t - \tau)^{-3/4} \| N_2(\tau) \|_{L^1} \, d \tau. \nonumber\end{aligned}$$ Now, we proceed with the estimation of the nonlinear terms. For that purpose we first recall some classical results. **Lemma 1**. *$\,$* - *Let $u, v \in H^s(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$, for any $s \in \mathbb{R}$, $n \in \mathbb{N}$. Then $$\| uv \|_s \leq C \big( \| u \|_s \| v \|_\infty + \|u \|_\infty \|v \|_s \big),$$ for some uniform constant $C > 0$.* - *Let $s \geq 0$ and $k \geq 0$ be such that $s+k \geq [\frac{n}{2}] +1$. Assume that $u \in H^s(\mathbb{R}^n)$, $v \in H^k(\mathbb{R}^n)$. Then for $\ell = \min \{ s, k, s+k - [\frac{n}{2}] -1 \}$ we have $uv \in H^\ell(\mathbb{R}^n)$ and there exists a uniform $C_{s,k} > 0$ such that $$\| uv \|_\ell \leq C_{s,k} \| u \|_s \| v \|_k.$$ In particular, if $s \geq [\frac{n}{2}] +1$, $0 \leq \ell \leq s$ and $u \in H^s(\mathbb{R}^n)$, $v \in H^\ell(\mathbb{R}^n)$, then $$\| uv \|_\ell \leq C_s \| u \|_s \| v \|_\ell,$$ for some uniform $C_s > 0$.* *Proof.* For the proof of (a) see Lemma 3.2 in [@HaLi96a]. The proof of (b) is a corollary of the interpolation inequalities obtained by Nirenberg [@Nir59] (see also Corollary 2.2 and Lemmata 2.1 and 2.3 in [@KaTh83]). ◻ **Corollary 1**. *For any $s > n/2$, $n \in \mathbb{N}$, the space $H^s(\mathbb{R}^n)$ is a Banach algebra. Moreover, there exists a constant $C_s > 0$ such that $$\| uv \|_s \leq C_s \|u\|_s \|v \|_s,$$ for all $u,v \in H^s(\mathbb{R}^n)$.* *Proof.* Follows immediately from Lemma [Lemma 1](#lemaux1){reference-type="ref" reference="lemaux1"} (b). ◻ We also need some estimates on composite functions. **Lemma 1**. *Let $s \geq 1$ and suppose that $Y = (Y_1, \ldots, Y_m)$, $m \in \mathbb{N}$, $Y_i \in H^s(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$, for all $1 \leq i \leq m$. Let $\Lambda = \Lambda(Y)$, $\Lambda : \mathbb{R}^m \to \mathbb{R}^m$, be a $C^\infty$ function. Then for each $1 \leq j \leq s$ there hold $\partial_x \Lambda(Y) \in H^{j-1}(\mathbb{R}^n)$ and $$\| \partial_x \Lambda (Y) \|_{j-1} \leq C M \big( 1+\| Y \|_\infty \big)^{j-1} \| \partial_x Y \|_{j-1},$$ where $C > 0$ is a uniform constant and $$M = \sum_{k=1}^j \sup_{\substack{V \in \mathbb{R}^m\\ |V|\leq \| Y \|_\infty}} | D^k_Y \Lambda(Y)| > 0.$$* *Proof.* See Vol$'$pert and Hudjaev [@VoH72] (see also Lemma 2.4 in [@KaTh83]). ◻ With this information at hand we proceed to estimate the terms $\| N_2 \|_{L^1}$ and $\| N_2 \|_{s}$ inside the integrals in [\[estimatef\]](#estimatef){reference-type="eqref" reference="estimatef"}. First, from [\[orderN2\]](#orderN2){reference-type="eqref" reference="orderN2"} we know that $$N_2 = O\big(\rho^2 + m^2 + \rho_x^2 + |\rho||\rho_{xx}|\big),$$ where $\rho$ and $m$ are the perturbation variables. First, we estimate the term $\rho_x^2$. 
From Lemma [Lemma 1](#lemaux1){reference-type="ref" reference="lemaux1"} and by Sobolev imbedding theorem, we get $$\Vert \rho_{x}^{2}(\tau) \Vert_{s} \leq 2C\Vert \rho_{x}(\tau)\Vert_{s} \| \rho_{x}(\tau) \|_{L^{\infty}} \leq C \Vert \rho_{x}(\tau) \Vert_{s+1} \Vert \rho(\tau) \Vert_{2},$$ for any $\tau \in [0,t]$ and because $s \geq 3$. Applying the Sobolev calculus inequalities from Lemma [Lemma 1](#lemaux2){reference-type="ref" reference="lemaux2"}, we arrive at $$\label{N2-s} \begin{split} \| N_{2}(\tau) \|_{s} &\leq C \big( \Vert \rho(\tau) \Vert_{s}^{2} + \Vert \rho(\tau) \Vert_{s}\Vert \rho_{xx}(\tau) \Vert_{s} + \Vert \rho(\tau) \Vert_{2} \Vert \rho_{x}(\tau) \Vert_{s+1} \big) \\ &\leq C \left( \Vert \rho(\tau) \Vert_{s}^{2}+ \Vert \rho(\tau) \Vert_{s}\Vert \rho_{x}(\tau) \Vert_{s+1} \right), \end{split}$$ and $$\label{N2-L1} \Vert N_{2}(\tau) \Vert_{L^{1}} \leq C \big( \| \rho(\tau)\|_2^2 + \| m(\tau) \|_1^2\big) \leq C \big( \| \rho(\tau)\|_{s}^2 + \| m(\tau) \|_{s-1}^2\big),$$ for all $\tau \in [0,t]$ because $s\geq 3$, where $C > 0$ is a uniform constant. Combine estimates [\[estimatef\]](#estimatef){reference-type="eqref" reference="estimatef"}, [\[N2-s\]](#N2-s){reference-type="eqref" reference="N2-s"} and [\[N2-L1\]](#N2-L1){reference-type="eqref" reference="N2-L1"} in order to get $$\begin{aligned} \| \rho(t) \|_s + \| m(t) \|_{s-1} &\leq C (1+t)^{-1/4} \Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big) + \nonumber\\ &\quad+ C \sup_{0 \leq \tau \leq t} \Vert \rho(\tau) \Vert_{s} \int_{0}^{t}e^{-c_{1}(t-z)}\Vert \rho(\tau) \Vert_{s} \, d\tau \, + \nonumber\\ &\quad+ C \left( \int_{0}^{t}\Vert m_{x}(\tau)\Vert_{s}^{2}\: d\tau \right)^{1/2} \left( \int_{0}^{t}e^{-2c_{1}(t-\tau)}\Vert \rho(\tau) \Vert_{s}^{2}\: d\tau \right)^{1/2}\: + \nonumber\\ &\quad + C \left( \int_{0}^{t}\Vert \rho_{x}(\tau) \Vert_{s+1}^{2}\: d\tau \right)^{1/2} \left( \int_{0}^{t}e^{-2c_{1}(t-\tau)}\Vert \rho(\tau) \Vert_{s}^{2} \: d\tau \right)^{1/2}\: + \nonumber\\ %&\quad + C \int_0^t e^{-c_1(t - \tau)} \| N_2(\tau) \|_s \, d \tau + \label{nonestNs-1.2}\\ &\quad + C \int_0^t (1+t - \tau)^{-3/4} \Big( \| \rho(\tau)\|_{s}^2 + \| m(\tau) \|_{s-1}^2 \Big) \, d \tau. 
\label{nonestNs-1.2}\end{aligned}$$ If we denote $$\begin{aligned} G_{s}(t) &:= \sup_{0 \leq \tau \leq t} \left( 1+ \tau \right)^{1/4} \Big( \| \rho(\tau)\|_{s} + \| m(\tau) \|_{s-1} \Big),\\ Q_s(t) &:= \sup_{0 \leq \tau \leq t} \Big(\Vert \rho(\tau)\Vert_{s+1}^{2}+ \Vert m(\tau) \Vert_{s}^{2} \Big) + \int_{0}^{t} \left( \Vert \rho_x(\tau)\Vert_{s+1}^{2} + \Vert m_x(t)\Vert_{s}^{2} \right)\: dt, \\ &= E_s(t) + F_s(t), \end{aligned}$$ where $E_s(t)$ and $F_s(t)$ are defined in [\[defEs\]](#defEs){reference-type="eqref" reference="defEs"} and [\[defFs\]](#defFs){reference-type="eqref" reference="defFs"}, respectively, then we can recast estimate [\[nonestNs-1.2\]](#nonestNs-1.2){reference-type="eqref" reference="nonestNs-1.2"} in a simplified form, namely as $$\label{nonestNs-1.sup} \begin{aligned} G_{s}(t) &\leq C \Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big) + C H_{1}(t) Q_s(t)^{1/2} G_{s}(t) +\\ &\qquad + C H_{2}(t) G_{s}(t)^2, \end{aligned}$$ where $$\label{mu1} \begin{split} H_{1}(t) &:= \sup_{0\leq \tau \leq t}\left( 1+ \tau \right)^{1/4} \int_{0}^{\tau}e^{-c_{1}(\tau-z)}(1+z)^{-1/4}\: dz \: + \\ & \:\:\:\: + \sup_{0\leq \tau \leq t}\left( 1 + \tau \right)^{1/4}\left[ \int_{0}^{\tau}e^{-2c_{1}(\tau-z)}(1+z)^{-1/2} \: dz \right]^{1/2}, \end{split}$$ $$\label{mu2} H_{2}(t) := \sup_{0 \leq \tau \leq t}\left( 1+ \tau \right)^{1/4}\int_{0}^{\tau}\left( 1+\tau-z \right)^{-3/4}(1+z)^{-1/2} \: dz.$$ Since both integrals, $H_{1}(t)$ and $H_{2}(t)$, are uniformly bounded in $t \geq 0$ (see Lemma A.1 in [@PlV22]), we readily obtain the estimate $$\label{finnonest} G_{s}(t) \leq C\Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1} \!\Big) + C Q_s(t)^{1/2} G_{s}(t) + C G_{s}(t)^2.$$ Last estimate will be used in a key way to obtain the global decay of perturbations to constant equilibrium states. ## Global existence and decay of solutions After all these preparations we are ready to prove our main result. **Theorem 1** (global decay of perturbations of subsonic equilibrium states). *Let $(\rho_*, m_*) \in \mathbb{R}^2$, with $\rho_* > 0$, be a constant equilibrium state of system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} which satisfies the subsonicity assumption $$p'(\rho_*) = \gamma \rho_*^{\gamma -1}> \frac{m_*^2}{\rho_*^2}.$$ Suppose that $\rho_0 \in H^{s+1}(\mathbb{R}) \cap L^1(\mathbb{R})$, $m_0 \in H^s(\mathbb{R})\cap L^1(\mathbb{R})$, for some $s \geq 3$. Then there exists a positive constant $\varepsilon_{2} \leq a_0$ (with $a_{0}$ as in Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"}) such that if $$\label{smallcond} \| \rho_0 \|_{s+1} + \| m_0 \|_{s} + \| \rho_0 \|_{L^1} + \| m_0 \|_{L^1} < \varepsilon_2,$$ then the Cauchy problem for the QHD system [\[QHD\]](#QHD){reference-type="eqref" reference="QHD"} with initial condition $(\rho_*+\rho_0,m_*+m_0)(x)$, $x \in \mathbb{R}$, has a unique solution of the form $(\rho_*+\rho, m_*+m)(x,t)$ satisfying $$\label{globsol} \begin{split} & \:\: \rho \in C\left((0,\infty);H^{s+1}(\mathbb{R})) \cap C^{1}((0,\infty); H^{s-1}(\mathbb{R})\right), \\ & \:\: m \in C\left((0,\infty); H^{s}(\mathbb{R})) \cap C^{1}((0,\infty);H^{s-2}(\mathbb{R})\right) \\ & \:\:(\rho_{x},m_{x})\in L^{2}\left((0,\infty);H^{s+1}(\mathbb{R}) \times H^{s}(\mathbb{R})\right). 
\end{split}$$ Moreover, the following estimates hold, $$\label{globdec} \| \rho(t)\|_{s} + \| m(t) \|_{s-1} \leq C_{1} \left(1+ t \right)^{-1/4} \!\Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big),$$ and $$\label{trinormest} Q_s(t)^{1/2} \leq C_{2} \Big( \| \rho_0 \|_s + \|m_0 \|_{s-1} \Big),$$ for every $t \in [0, \infty)$ and some uniform $C_j > 0$, where $$Q_s(t) = \sup_{0 \leq \tau \leq t} \Big(\Vert \rho(\tau)\Vert_{s+1}^{2}+ \Vert m(\tau) \Vert_{s}^{2} \Big) + \int_{0}^{t} \left( \Vert \rho_x(\tau)\Vert_{s+1}^{2} + \Vert m_x(t)\Vert_{s}^{2} \right)\: dt.$$* *Proof.* By virtue of estimate [\[finnonest\]](#finnonest){reference-type="eqref" reference="finnonest"}, we can select $\varepsilon_{1}\leq a_{0}$, $\varepsilon_{1}$ small enough as in Corollary [Corollary 1](#cor26){reference-type="ref" reference="cor26"}, and $\delta_{1}= \delta_{1}(\varepsilon_{1})$ such that for $Q_s(T_1)\leq \varepsilon_{1}$ and $$\label{vercond} \| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1} < \delta_{1},$$ there holds $$\label{nonlest} \| \rho(t)\|_{s} + \| m(t) \|_{s-1} \leq C_{1}\left( 1+ t\right)^{-1/4} \Big(\| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big),$$ for all $t \in [0, T_{1}]$ and some constant $C_{1} = C_{1}(\varepsilon_{1}, \delta_{1}) > 1$. Recall that the local solution to the initial value problem, belonging to $X_{s}(0, T_{1}; m_{0}/2, 2M_{0})$, for some $T_{1}= T_{1}(a_{0})$, exists for all $t \in [0,T_1]$ thanks to Theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"}. Next, we define $$\varepsilon_{2} := \mbox{min} \left\lbrace \varepsilon_1,\frac{\varepsilon_1}{C_{0}} , \frac{\varepsilon_1}{C_{2}\left( 1+C_{0}^{2}\right)^{1/2}}, \delta_{1} \right\rbrace > 0.$$ Let us suppose that condition [\[smallcond\]](#smallcond){reference-type="eqref" reference="smallcond"} holds for this selected value of $\varepsilon_2$. Whence, the local existence theorem [Theorem 1](#themlocale){reference-type="ref" reference="themlocale"} implies that $$Q_s(T_1) = E_s(T_1) + F_s(T_1) \leq C_1 E_s(0) = C_0 \big( \| \rho_0 \|_{s+1} + \| m_0 \|_s \big) < C_0 \varepsilon_2 \leq \varepsilon_1.$$ This bound, together with $$\| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1} < \varepsilon_2 \leq \delta_{1},$$ readily implies estimate [\[nonlest\]](#nonlest){reference-type="eqref" reference="nonlest"} for $t \in [0, T_{1}]$. In addition, we have $$E_s(T_1) \leq Q_s(T_1) \leq \varepsilon_1. % \sup_{0 \leq t \leq T_{1}} \Vert U (t)\Vert_{s} \leq \vertiii{U}_{s,T_{1}} \leq \e_1.$$ Hence, we have verified condition [\[localaprioriEE\]](#localaprioriEE){reference-type="eqref" reference="localaprioriEE"} from Corollary [Corollary 1](#cor26){reference-type="ref" reference="cor26"}. 
Upon application of Corollary [Corollary 1](#cor26){reference-type="ref" reference="cor26"} we obtain $$Q_s(T_1)^{1/2} \leq C_2 E_s(0)^{1/2} = C_2 \big( \| \rho_0 \|_{s+1} + \| m_0 \|_s \big).$$ By virtue of $$\| \rho(T_1) \|_{s+1} + \| m(T_1) \|_{s} \leq Q_s(T_1)^{1/2} \leq \varepsilon_1,$$ we can consider the Cauchy problem with initial condition at $t = T_1$ in order to find a local solution in $[T_{1}, 2T_{1}]$ satisfying $$\begin{aligned} \sup_{T_1 \leq \tau \leq 2T_1} \Big(\Vert \rho(\tau)\Vert_{s+1}^{2}+ \Vert m(\tau) \Vert_{s}^{2} \Big) &+ \int_{T_1}^{2T_1} \left( \Vert \rho_x(\tau)\Vert_{s+1}^{2} + \Vert m_x(t)\Vert_{s}^{2} \right)\: dt \leq \\ &\leq C_0^2 \big( \| \rho(T_1) \|_{s+1} + \| m(T_1) \|_s \big)^2\\ &\leq C_0^2 Q_s(T_1)^2. \end{aligned}$$ Therefore, we obtain $$\begin{aligned} Q_s(2T_1)^{1/2} &= \left[ Q_s(T_1) + \!\!\sup_{T_1 \leq \tau \leq 2T_1} \Big(\Vert \rho(\tau)\Vert_{s+1}^{2}+ \Vert m(\tau) \Vert_{s}^{2} \Big) + \int_{T_1}^{2T_1} \left( \Vert \rho_x(\tau)\Vert_{s+1}^{2} + \Vert m_x(t)\Vert_{s}^{2} \right)\: dt \right]^{1/2} \\ &\leq (1 + C_0^2)^{1/2} Q_s(T_1)^{1/2} \\ &\leq C_2 (1+C_0^2)^{1/2} \big( \| \rho_0 \|_{s+1} + \| m_0 \|_s \big)\\ &\leq C_2 (1+C_0^2)^{1/2} \big( \| \rho_0 \|_{s+1} + \| m_0 \|_s + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\big)\\ &< C_2 (1+C_0^2)^{1/2} \varepsilon_2 \\ &\leq \varepsilon_1. \end{aligned}$$ This yields, $$E_s(2T_1)^{1/2} \leq Q_s(2T_1)^{1/2} < \varepsilon_1.$$ This estimate, together with the already verified condition [\[vercond\]](#vercond){reference-type="eqref" reference="vercond"} allows us to obtain estimate [\[trinormest\]](#trinormest){reference-type="eqref" reference="trinormest"} and the condition [\[localaprioriEE\]](#localaprioriEE){reference-type="eqref" reference="localaprioriEE"} from Corollary [Corollary 1](#cor26){reference-type="ref" reference="cor26"}, but now on the time interval $t \in [0, 2T_1]$. Consequently, $$\| \rho(t)\|_{s} + \| m(t) \|_{s-1} \leq C_{1}\left( 1+ t\right)^{-1/4} \Big(\| \rho_0 \|_s + \|m_0 \|_{s-1} + \| \rho_0 \|_{L^1} + \|m_0 \|_{L^1}\Big),$$ holds for all $t \in [0,2T_1]$ and $$Q_s(2T_1)^{1/2} \leq C_2 \big( \| \rho_0 \|_{s+1} + \| m_0 \|_s \big).$$ We can proceed by iteration in order to obtain estimates [\[trinormest\]](#trinormest){reference-type="eqref" reference="trinormest"} and [\[globdec\]](#globdec){reference-type="eqref" reference="globdec"} for the time intervals $[0, 3T_{1}]$, $[0,4T_1]$, and so on. Thus, the estimates hold globally in time. The theorem is now proved. ◻ # Acknowledgements {#acknowledgements .unnumbered} The work of D. Zhelyazov was supported by a Post-doctoral Fellowship by the Dirección General de Asuntos del Personal Académico (DGAPA), UNAM. The work of R. G. Plaza was partially supported by DGAPA-UNAM, program PAPIIT, grant IN-104922. # Proof of Lemma [Lemma 1](#lemsupersonic){reference-type="ref" reference="lemsupersonic"} {#secappen} Under the supersonicity assumption [\[supersonic\]](#supersonic){reference-type="eqref" reference="supersonic"}, let us define the positive constant $$\beta_* := \frac{m_*^2}{\rho_*^2} - p'(\rho_*) > 0.$$ Assume $\widehat{U}= \widehat{U}(\xi)$ is a non-trivial solution to [\[spect\]](#spect){reference-type="eqref" reference="spect"}. 
Therefore $\widehat{U}\in \ker D(\lambda,\xi)$, where $$D(\lambda, \xi) = \lambda I + i \xi A_* + \xi^2 B_* + i \xi^3 C_* = \begin{pmatrix} \lambda & i \xi \\ - i \xi \beta_* + \tfrac{1}{2} i \xi^3 k^2 & \lambda + i 2 \xi m_*/\rho_* + \mu \xi^2 \end{pmatrix}.$$ This yields the dispersion relation $$\label{disprel} \det D(\lambda, \xi) = \lambda^2 + \Big( \mu \xi^2 + i \frac{2 \xi m_*}{\rho_*}\Big) \lambda + \xi^2 \big( \tfrac{1}{2} \xi^2 k^2 - \beta_* \big)= 0.$$ The discriminant of the second order polynomial in $\lambda$ on the left hand side of [\[disprel\]](#disprel){reference-type="eqref" reference="disprel"} is $\Delta(\xi) := a(\xi) + i b(\xi)$, where $$a(\xi) := \xi^2 \big( \xi^2(\mu^2 - 2 k^2) - 4 p'(\rho_*) \big),\qquad b(\xi) := 4 \mu \xi^3 \frac{m_*}{\rho_*}.$$ Therefore, the roots of [\[disprel\]](#disprel){reference-type="eqref" reference="disprel"} are $\lambda_\pm(\xi) := - \tfrac{1}{2} \mu \xi^2 - i \xi m_*/\rho_* \pm \tfrac{1}{2} \Delta(\xi)^{1/2}$. Let us examine $$\mathrm{Re}\,\lambda_+(\xi) = - \tfrac{1}{2} \mu \xi^2 + \tfrac{1}{2} \mathrm{Re}\,\Delta(\xi)^{1/2}.$$ We now show that $\mathrm{Re}\,\lambda_+(\xi) > 0$ for $0 < |\xi| \ll 1$ sufficiently small. This is equivalent to proving that $$\label{good} \mathrm{Re}\,\Delta(\xi)^{1/2} > \mu \xi^2, \qquad \text{for } \; 0 < |\xi| \ll 1.$$ Recalling that $$\mathrm{Re}\,\Delta(\xi)^{1/2} = \frac{1}{\sqrt{2}} \sqrt{a(\xi) + \sqrt{a(\xi)^2 + b(\xi)^2}},$$ we observe that [\[good\]](#good){reference-type="eqref" reference="good"} is equivalent to $$a(\xi) + \sqrt{a(\xi)^2 + b(\xi)^2} > 2 \mu^2 \xi^4,$$ for $|\xi| \approx 0^+$. Since $$\begin{aligned} 2 \mu^2 \xi^4 - a(\xi) &= 2 \mu^2 \xi^4 - \xi^2 \big( \xi^2(\mu^2 - 2 k^2) - 4 p'(\rho_*) \big)\\ &= \xi^2 \big( (\mu^2 + 2 k^2) \xi^2 + 4 p'(\rho_*) \big) > 0, \end{aligned}$$ then [\[good\]](#good){reference-type="eqref" reference="good"} holds if and only if $$b(\xi)^2 > 4 \mu^4 \xi^8 - 4 a(\xi) \mu^2 \xi^4.$$ Upon substitution of the expressions for $a(\xi)$ and $b(\xi)$ we find that [\[good\]](#good){reference-type="eqref" reference="good"} is satisfied if and only if $$\xi^2 \Big( 2 \frac{m_*^2}{\rho_*^2} - 2 p'(\rho_*) - k^2 \xi^2 \Big) = \xi^2 (2 \beta_* + O(\xi^2)) > 0,$$ as $|\xi| \to 0$. But this is true because of the supersonicity condition ($\beta_* > 0$). We conclude that $\mathrm{Re}\,\lambda_+(\xi) > 0$ for sufficiently small values of $|\xi|$. The lemma is proved. ◻

# References {.unnumbered}

P. Antonelli, G. Cianfarani Carnevale, C. Lattanzio, and S. Spirito, *Relaxation limit from the quantum Navier-Stokes equations to the quantum drift-diffusion equation*, J. Nonlinear Sci. **31** (2021), no. 5, pp. Paper No. 71, 32. P. Antonelli and P. Marcati, *On the finite energy weak solutions to a system in quantum fluid dynamics*, Comm. Math. Phys. **287** (2009), no. 2, pp. 657--686. P. Antonelli and P. Marcati, *The quantum hydrodynamics system in two space dimensions*, Arch. Ration. Mech. Anal. **203** (2012), no. 2, pp. 499--527. P. Antonelli and P. Marcati, *Some results on systems for quantum fluids*, in Recent advances in partial differential equations and applications, A. Sequeira and V. A. Solonnikov, eds., vol. 666 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2016, pp. 41--54. P. Antonelli, P. Marcati, and H. Zheng, *Genuine hydrodynamic analysis to the 1-D QHD system: existence, dispersion and stability*, Comm. Math. Phys. **383** (2021), no. 3, pp. 2113--2161. D. Bohm, *A suggested interpretation of the quantum theory in terms of "hidden" variables. I*, Phys.
Rev. (2) **85** (1952), pp. 166--179. D. Bohm, *A suggested interpretation of the quantum theory in terms of "hidden" variables. II*, Phys. Rev. (2) **85** (1952), pp. 180--193. D. Bohm, B. J. Hiley, and P. N. Kaloyerou, *An ontological basis for the quantum theory*, Phys. Rep. **144** (1987), no. 6, pp. 321--375. S. Brull and F. Méhats, *Derivation of viscous correction terms for the isothermal quantum Euler model*, ZAMM Z. Angew. Math. Mech. **90** (2010), no. 3, pp. 219--230. F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, *Theory of Bose-Einstein condensation in trapped gases*, Rev. Mod. Phys. **71** (1999), pp. 463--512. A. Diaw and M. S. Murillo, *A viscous quantum hydrodynamics model based on dynamic density functional theory*, Sci. Rep. **7** (2017), p. 15532. K.-J. Engel and R. Nagel, *A short course on operator semigroups*, Universitext, Springer-Verlag, New York, 2006. D. K. Ferry and J.-R. Zhou, *Form of the quantum potential for use in hydrodynamic equations for semiconductor device modeling*, Phys. Rev. B **48** (1993), pp. 7944--7950. R. Folino, R. G. Plaza, and D. Zhelyazov, *Spectral stability of small-amplitude dispersive shocks in quantum hydrodynamics with viscosity*, Commun. Pure Appl. Anal. **21** (2022), no. 12, pp. 4019--4040. R. Folino, R. G. Plaza, and D. Zhelyazov, *Spectral stability of weak dispersive shock profiles for quantum hydrodynamics with nonlinear viscosity*, J. Differ. Equ. **359** (2023), pp. 330--364. G. B. Folland, *Introduction to partial differential equations*, Princeton University Press, Princeton, NJ, second ed., 1995. K. O. Friedrichs, *Symmetric hyperbolic linear differential equations*, Comm. Pure Appl. Math. **7** (1954), pp. 345--392. K. O. Friedrichs and P. D. Lax, *On symmetrizable differential operators*, in Singular Integrals (Proc. Sympos. Pure Math., Chicago, Ill., 1966), Amer. Math. Soc., Providence, R.I., 1967, pp. 128--137. I. M. Gamba, A. Jüngel, and A. Vasseur, *Global existence of solutions to one-dimensional viscous quantum hydrodynamic equations*, J. Differ. Equ. **247** (2009), no. 11, pp. 3117--3135. C. L. Gardner, *The quantum hydrodynamic model for semiconductor devices*, SIAM J. Appl. Math. **54** (1994), no. 2, pp. 409--427. I. Gasser, *Traveling wave solutions for a quantum hydrodynamic model*, Appl. Math. Lett. **14** (2001), no. 3, pp. 279--283. I. Gasser and P. A. Markowich, *Quantum hydrodynamics, Wigner transforms and the classical limit*, Asymptot. Anal. **14** (1997), no. 2, pp. 97--116. S. K. Godunov, *An interesting class of quasi-linear systems*, Dokl. Akad. Nauk SSSR **139** (1961), pp. 521--523. J. Grant, *Pressure and stress tensor expressions in the fluid mechanical formulation of the Bose condensate equations*, J. Phys. A: Math. Nucl. Gen. **6** (1973), no. 11, pp. L151--L153. F. Graziani, Z. Moldabekov, B. Olson, and M. Bonitz, *Shock physics in warm dense matter: A quantum hydrodynamics perspective*, Contrib. to Plasma Phys. **62** (2022), no. 2, p. e202100170. A. V. Gurevich and L. P. Pitayevskiĭ, *Nonstationary structure of a collisionless shock wave*, Sov. Phys. JETP **38** (1974), no. 2, pp. 590--604. H. Hattori and D. N. Li, *Solutions for two-dimensional system for materials of Korteweg type*, SIAM J. Math. Anal. **25** (1994), no. 1, pp. 85--98. H. Hattori and D. N. Li, *Global solutions of a high-dimensional system for Korteweg materials*, J. Math. Anal. Appl. **198** (1996), no. 1, pp. 84--97. M. A. Hoefer and M. J.
Ablowitz, *Interactions of dispersive shock waves*, Phys. D **236** (2007), no. 1, pp. 44--64. M. A. Hoefer, M. J. Ablowitz, I. Coddington, E. A. Cornell, P. Engels, and V. Schweikhard, *Dispersive and classical shock waves in Bose-Einstein condensates and gas dynamics*, Phys. Rev. A **74** (2006), no. 2, p. 023623. J. Humpherys, *Admissibility of viscous-dispersive systems*, J. Hyperbolic Differ. Equ. **2** (2005), no. 4, pp. 963--974. A. Jüngel, M. C. Mariani, and D. Rial, *Local existence of solutions to the transient quantum hydrodynamic equations*, Math. Models Methods Appl. Sci. **12** (2002), no. 4, pp. 485--495. A. Jüngel and J. P. Milišić, *Physical and numerical viscosity for quantum hydrodynamics*, Commun. Math. Sci. **5** (2007), no. 2, pp. 447--471. S. Kawashima, *Systems of a Hyperbolic-Parabolic Composite Type, with Applications to the Equations of Magnetohydrodynamics*, PhD thesis, Kyoto University, 1983. S. Kawashima, Y. Shibata, and J. Xu, *Dissipative structure for symmetric hyperbolic-parabolic systems with Korteweg-type dispersion*, Commun. Partial Differ. Equ. **47** (2022), no. 2, pp. 378--400. S. Kawashima and Y. Shizuta, *On the normal form of the symmetric hyperbolic-parabolic systems associated with the conservation laws*, Tohoku Math. J. (2) **40** (1988), no. 3, pp. 449--464. I. M. Khalatnikov, *An introduction to the theory of superfluidity*, Advanced Book Classics, Addison-Wesley Publishing Company, Advanced Book Program, Redwood City, CA, 1989. Translated from the Russian by Pierre C. Hohenberg. Translation edited and with a foreword by David Pines. Reprint of the 1965 edition. M. Kotschote, *Strong solutions for a compressible fluid model of Korteweg type*, Ann. Inst. H. Poincaré Anal. Non Linéaire **25** (2008), no. 4, pp. 679--696. L. D. Landau, *Theory of the superfluidity of Helium II*, Phys. Rev. **60** (1941), pp. 356--358. C. Lattanzio, P. Marcati, and D. Zhelyazov, *Dispersive shocks in quantum hydrodynamics with viscosity*, Phys. D **402** (2020), p. 132222. C. Lattanzio, P. Marcati, and D. Zhelyazov, *Numerical investigations of dispersive shocks and spectral analysis for linearized quantum hydrodynamics*, Appl. Math. Comput. **385** (2020), pp. 125450, 13. C. Lattanzio and D. Zhelyazov, *Spectral analysis of dispersive shocks for quantum hydrodynamics with nonlinear viscosity*, Math. Models Methods Appl. Sci. **31** (2021), no. 9, pp. 1719--1747. C. Lattanzio and D. Zhelyazov, *Traveling waves for quantum hydrodynamics with nonlinear viscosity*, J. Math. Anal. Appl. **493** (2021), no. 1, pp. 124503, 17. E. Madelung, *Quantentheorie in hydrodynamischer Form*, Z. Physik **40** (1927), pp. 322--326. L. Nirenberg, *On elliptic partial differential equations*, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) **13** (1959), pp. 115--162. A. Pazy, *Semigroups of linear operators and applications to partial differential equations*, vol. 44 of Applied Mathematical Sciences, Springer-Verlag, New York, 1983. R. G. Plaza and J. M. Valdovinos, *Global decay of perturbations of equilibrium states for one-dimensional heat conducting compressible fluids of Korteweg type*. Preprint, 2023. https://arxiv.org/abs/2307.16300. R. G. Plaza and J. M. Valdovinos, *Dissipative structure of one-dimensional isothermal compressible fluids of Korteweg type*, J. Math. Anal. Appl. **514** (2022), no. 2, p. 126336. S. Ra and H. Hong, *The existence, uniqueness and exponential decay of global solutions in the full quantum hydrodynamic equations for semiconductors*, Z. Angew. Math. Phys.
**72** (2021), no. 3, pp. Paper No. 107, 32. S. R. Z. Sagdeev, *Kollektivnye protsessy i udarnye volny v razrezhennoi plazme (Collective processes and shock waves in a tenuous plasma)*, in Voprosy teorii plazmy (Problems of Plasma Theory), M. A. Leontovich, ed., vol. 5 of Atomizdat, Moscow, 1964. English transl.: Reviews of Plasma Physics (Consultants Bureau, New York, 1966), Vol. 4, p. 23. Y. Shizuta and S. Kawashima, *Systems of equations of hyperbolic-parabolic type with applications to the discrete Boltzmann equation*, Hokkaido Math. J. **14** (1985), no. 2, pp. 249--275. L. Tong and Y. Xia, *Global existence and the algebraic decay rate of the solution for the quantum Navier-Stokes-Poisson equations in $\Bbb R^3$*, J. Math. Phys. **63** (2022), no. 9, pp. Paper No. 091511, 26. Y. Ueda, R. Duan, and S. Kawashima, *Decay structure for symmetric hyperbolic systems with non-symmetric relaxation and its application*, Arch. Ration. Mech. Anal. **205** (2012), no. 1, pp. 239--266. Y. Ueda, R. Duan, and S. Kawashima, *New structural conditions on decay property with regularity-loss for symmetric hyperbolic systems with non-symmetric relaxation*, J. Hyperbolic Differ. Equ. **15** (2018), no. 1, pp. 149--174. A. I. Vol$'$pert and S. I. Hudjaev, *The Cauchy problem for composite systems of nonlinear differential equations*, Math. USSR Sb. **16** (1972), no. 4, pp. 517--544. J. Yang, Q. Ju, and Y.-F. Yang, *Asymptotic limits of Navier-Stokes equations with quantum effects*, Z. Angew. Math. Phys. **66** (2015), no. 5, pp. 2271--2283. J. Yang, G. Peng, H. Hao, and F. Que, *Existence of global weak solution for quantum Navier-Stokes system*, Internat. J. Math. **31** (2020), no. 5, pp. 2050038, 12. D. Zhelyazov, *Existence of standing and traveling waves in quantum hydrodynamics with viscosity*. Z. Anal. Anwend. (2023), in press. DOI:10.4171/ZAA/1723.
--- author: - Kislaya Ravi, Tobias Neckel and Hans-Joachim Bungartz title: Multi-fidelity No-U-Turn Sampling --- # Introduction Bayesian inference is a widely used method in many fields such as astrophysics, geological exploration, machine learning etc [@ravi-bib-kaipio]. Exact inference of the posterior is rarely possible. The Markov Chain Monte Carlo (MCMC) method is one of the most commonly used methods in Bayesian inference to draw samples from the posterior. MCMC draws samples from the target density function that converge to the given distribution and are asymptotically unbiased. Classical methods such as the Metropolis-Hastings algorithm [@ravi-bib-metropolis] and Gibbs sampling [@ravi-bib-gibbs] often take a very long time to converge because of the inefficient random walk. This makes it intractable for computationally expensive models. Hamilton Monte Carlo (HMC) [@ravi-bib-neal-handbook] also known as hybrid Monte Carlo is a method that circumvents the random walk by using the derivative of the target density to propose better samples. There are two disadvantages of HMC. First, the user needs to specify some parameters of the algorithm manually. A poor choice of these parameters will have a considerable negative impact on the quality of drawn samples. Second, HMC requires the values of the derivative to be evaluated multiple times to propose one sample which means multiple evaluations of the target density function making it infeasible for computationally expensive models. The first issue can be addressed using No-U Turn sampling (NUTS) [@ravi-bib-nuts] which automatically determines the values of the crucial parameters. However, NUTS still requires access to the derivative. In this work, we propose to replace the use of the actual model for derivative evaluation with a surrogate. The proposed sample is accepted/rejected based on the evaluation of the actual model. This ensures that the samples are invariant with respect to the target density function and not it's surrogate. Moreover, we also build the surrogate using multi-fidelity method. One is often provided with different types of models solving the same phenomenon. We denote computationally cheap but inaccurate models as low-fidelity models whereas the expensive but accurate models are called high-fidelity models. We can combine these models efficiently for different applications by using the low-fidelity models more often than the high-fidelity function. Such methods fall under the category of multi-fidelity methods [@ravi-bib-survey-mf]. In this work, we use the concept of Gaussian Processes [@ravi-bib-rasmussen-book] to build multi-fidelity surrogates using a non-linear fusion of models [@ravi-bib-perdikaris-nargp; @ravi-bib-lee-mfgp]. To the best of our knowledge, the algorithm proposed in this work is the first of its kind that applies multi-fidelity to NUTS. The algorithm is implemented in python and the implementation is publicly accessible ^1^. This library can be easily coupled with solvers from various fields of application to generate multi-fidelity surrogates and solving computationally expensive problems such as forward uncertainty quantification, Bayesian optimization and MCMC sampling. In this work, we only discuss the MCMC sampling part of the implementation. We first introduce the Bayesian inference setup in Section [[2](#ravi-sec:bayesian-inverse){reference-type="ref" reference="ravi-sec:bayesian-inverse"}]{.upright}. 
Then, we provide a brief theoretical background for HMC and NUTS in Section [[3](#ravi-sec:mcmc){reference-type="ref" reference="ravi-sec:mcmc"}]{.upright}. After that, we discuss the multi-fidelity method in Section [[4](#ravi-sec:mf){reference-type="ref" reference="ravi-sec:mf"}]{.upright}, where we explain the multi-fidelity Gaussian Process surrogate and the delayed acceptance algorithm, and finally describe the proposed algorithm by combining all the previous methods. Finally, we test the performance of the proposed method and compare it with some of the commonly used sampling methods in Section [[5](#ravi-sec:results){reference-type="ref" reference="ravi-sec:results"}]{.upright} and summarise our conclusions in Section [[6](#ravi-sec:conclusion){reference-type="ref" reference="ravi-sec:conclusion"}]{.upright}. # Bayesian Inference {#ravi-sec:bayesian-inverse} Let ${\boldsymbol{X}}\subseteq {\mathbb{R}}^d$ denote the parameter space and ${\boldsymbol{Y}}\subseteq {\mathbb{R}}^m$ a separable Banach space that represents the data space. $d, m \in {\mathbb{Z}}^+$ are the dimensions of the parameters and the data, respectively. Both the parameters and the data belong to finite-dimensional spaces, which allows us to work with densities with respect to the Lebesgue measure. Let us consider a function ${\mathcal{F}}: {\boldsymbol{X}}\rightarrow {\boldsymbol{Y}}$, which is a map from the parameter space to the data space. The noisy observations ${\boldsymbol{y}}\in {\boldsymbol{Y}}$ are typically modeled by adding some Gaussian noise ${\boldsymbol{\eta}}\sim {\mathcal{N}}(0, \Gamma)$, where $\Gamma$ is a positive definite covariance matrix. For a given parameter ${\boldsymbol{\theta}}\in {\boldsymbol{X}}$, we can express the observed data ${\boldsymbol{y}}$ as a random variable: $${\boldsymbol{y}}= {\mathcal{F}}({\boldsymbol{\theta}}) + {\boldsymbol{\eta}} \label{ravi-eq:basic-inverse}$$ In inverse problems, the goal is to solve  [([\[ravi-eq:basic-inverse\]](#ravi-eq:basic-inverse){reference-type="ref" reference="ravi-eq:basic-inverse"})]{.upright} w.r.t. ${\boldsymbol{\theta}}$, given the observations ${\boldsymbol{y}}$, to infer the true parameters ${\boldsymbol{\theta}}^{true} \in {\boldsymbol{X}}$. However, this problem is ill-posed in the sense of Hadamard [@ravi-bib-hadamard]. One can cure the ill-posedness of the problem by reformulating it as a Bayesian inference problem [@ravi-bib-kaipio]. We consider the target ${\boldsymbol{\theta}}$ as a random variable that follows a prior distribution $p({\boldsymbol{\theta}})$ on ${\boldsymbol{X}}$, which represents our assumptions about the parameters before looking at the observations. Let us assume that ${\boldsymbol{\theta}}$ is independent of the noise ${\boldsymbol{\eta}}$. The likelihood $p({\boldsymbol{y}}|{\boldsymbol{\theta}})$ represents how well an assumed ${\boldsymbol{\theta}}$ reproduces the given observations ${\boldsymbol{y}}$. Because of the Gaussian noise ${\boldsymbol{\eta}}$, the likelihood can be written as: $$p({\boldsymbol{y}}|{\boldsymbol{\theta}}) \propto \exp \left( -\frac{1}{2} || \Gamma^{-1/2}({\boldsymbol{y}}- {\mathcal{F}}({\boldsymbol{\theta}})) ||^2 \right) \label{ravi-eq:likelihood-def}$$ In Bayesian inference, we want to find the posterior distribution $p({\boldsymbol{\theta}}|{\boldsymbol{y}})$, which represents the probability distribution of the parameter given the observation. Using Bayes' theorem, one obtains $$p({\boldsymbol{\theta}}|{\boldsymbol{y}}) = \frac{p({\boldsymbol{y}}|{\boldsymbol{\theta}})p({\boldsymbol{\theta}})}{p({\boldsymbol{y}})} \ .
\label{ravi-eq:bayes}$$ The denominator term in  [([\[ravi-eq:bayes\]](#ravi-eq:bayes){reference-type="ref" reference="ravi-eq:bayes"})]{.upright} is known as the evidence, which is often difficult to calculate. However, it is independent of ${\boldsymbol{\theta}}$. There are multiple ways to draw samples from the posterior distribution. One of the commonly used methods is the Markov Chain Monte Carlo (MCMC) algorithm [@ravi-bib-neal-handbook], which also circumvents the evidence term. In this work, we deal with No-U-Turn Sampling (NUTS) [@ravi-bib-nuts], which we explain in the next section. # Hamiltonian Monte Carlo and No-U-Turn Sampling {#ravi-sec:mcmc} No-U-Turn Sampling (NUTS) is a sampling method based on the Hamiltonian Monte Carlo (HMC) method [@ravi-bib-neal-handbook]. HMC utilizes the geometry of the density function to propose better samples. We introduce an extra momentum variable $r \in {\mathbb{R}}^d$. The Hamiltonian of the system is defined using the state ($\theta$) and the momentum term ($r$). Let ${\mathcal{L}}(\theta)$ be the logarithm of the target density function $\pi (\theta)$. With $r \cdot r$ representing the inner product of ${\mathbb{R}}^d$, the Hamiltonian of the system is defined as $$H(\theta, r) = - {\mathcal{L}}(\theta) + \frac{1}{2} r \cdot r \ . \label{ravi-eq:hamiltonian}$$ The canonical distribution of the system is $$p(\theta, r) \propto \exp \{ -H(\theta, r) \} \ . \label{ravi-eq:canonical-density}$$ At each iteration, the momentum term is resampled ($r \sim {\mathcal{N}}(0, {\mathbb{I}}_d)$) and the next state is proposed by solving the Hamiltonian dynamics for some time steps. The Hamiltonian dynamics are numerically simulated using a time-stepping method which is reversible and volume-preserving. The leapfrog method is one of the most commonly used methods for this purpose and can be formulated as $$\begin{aligned} r^{i+1/2} &= r^i + (\epsilon/2) \nabla_{\theta} {\mathcal{L}}(\theta^i) \\ \theta^{i+1} &= \theta^i + \epsilon r^{i+1/2} \\ r^{i+1} &= r^{i+1/2} + (\epsilon/2) \nabla_{\theta} {\mathcal{L}}(\theta^{i+1}) \ , \end{aligned} \label{ravi-eq:leapfrog-hmc}$$ where $\epsilon$ is the time-step size for the leapfrog method. We perform the fictitious time integration for a given number of steps $T$ to obtain the proposal ($\tilde{\theta}, \tilde{r}$). The proposal is accepted/rejected based on the acceptance probability $\alpha$ defined as: $$\alpha = \min \left[ 1, \frac{\exp \{ -H(\tilde{\theta}, \tilde{r}) \}}{\exp \{ -H(\theta, r) \}} \right] = \min \left[ 1, \exp \left\{ H(\theta, r) - H(\tilde{\theta}, \tilde{r}) \right\} \right] \label{ravi-eq:alpha-hmc}$$ Note that the normalization term cancels out, so we can sample from density functions that are not normalized. The HMC algorithm is ergodic with respect to the canonical density function mentioned in  [([\[ravi-eq:canonical-density\]](#ravi-eq:canonical-density){reference-type="ref" reference="ravi-eq:canonical-density"})]{.upright}, provided the leapfrog integrator does not generate periodic proposals. *Proof.* Interested readers can refer to [@ravi-bib-neal-handbook] for a more detailed proof. If the leapfrog integrator does not generate periodic proposals, then the Markov chain does not get trapped in certain subsets. We can also prove the detailed balance of the algorithm by taking into account the volume-preserving property of the leapfrog method. ◻ The quality of the samples depends on the choice of the tuning parameters, namely the number of steps $T$ and the step size $\epsilon$.
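For illustration, the following is a minimal NumPy sketch of a single HMC transition, combining the leapfrog updates and the acceptance probability introduced above. The function names and the toy Gaussian target are purely illustrative assumptions and do not correspond to the interface of the accompanying library.

```python
import numpy as np

def leapfrog(theta, r, grad_log_density, eps, T):
    """Reversible, volume-preserving leapfrog integration of the Hamiltonian dynamics."""
    theta, r = theta.copy(), r.copy()
    r = r + 0.5 * eps * grad_log_density(theta)        # half step for the momentum
    for _ in range(T - 1):
        theta = theta + eps * r                        # full step for the state
        r = r + eps * grad_log_density(theta)          # full step for the momentum
    theta = theta + eps * r                            # last full step for the state
    r = r + 0.5 * eps * grad_log_density(theta)        # final half step for the momentum
    return theta, r

def hmc_step(theta, log_density, grad_log_density, eps, T, rng):
    """One HMC transition: resample the momentum, integrate, and accept/reject."""
    r = rng.standard_normal(theta.shape)               # r ~ N(0, I_d)
    H_old = -log_density(theta) + 0.5 * r @ r          # Hamiltonian at the current state
    theta_new, r_new = leapfrog(theta, r, grad_log_density, eps, T)
    H_new = -log_density(theta_new) + 0.5 * r_new @ r_new
    if rng.uniform() < min(1.0, np.exp(H_old - H_new)):  # acceptance probability alpha
        return theta_new
    return theta

# illustrative target: a standard Gaussian in two dimensions
rng = np.random.default_rng(0)
log_density = lambda th: -0.5 * th @ th
grad_log_density = lambda th: -th
theta = np.zeros(2)
samples = []
for _ in range(1000):
    theta = hmc_step(theta, log_density, grad_log_density, eps=0.2, T=20, rng=rng)
    samples.append(theta)
```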
NUTS is an extension of HMC, where the need to specify a fixed value of $T$ is eliminated by performing the time integration until one observes a U-turn. After the U-turn, the time integrator is very likely to retrace the old steps or visit states that are very close to already explored states. The slice sampling method is used to sample from the canonical distribution $p(\theta, r)$. The sign of the momentum term does not affect the value of $p(\theta, r)$. So, we build a tree while running the time integration. The first step is to sample the momentum ($r \sim {\mathcal{N}}(0, {\mathbb{I}}_d)$) as in standard HMC. We set the depth of the tree to zero ($l=0$). At every level, we perform the time integration for $2^l$ steps to obtain ($\tilde{\theta}, \tilde{r}$) and increase the value of $l$ by 1. At the end of the time integration, we randomly draw either $1$ or $-1$ with equal probability. If the number is $1$, then we move to the rightmost node of the tree ($\theta^{r}$) and perform the next time integration steps using the corresponding momentum term. If the number is $-1$, then we move to the leftmost node of the tree ($\theta^{l}$) and perform the next time integration steps using the negative of the corresponding momentum term. At the end of every step, we check whether we observe a U-turn. A U-turn is detected by verifying the following condition $$(\theta^r - \theta^l) \cdot \tilde{r} < 0 \ . \label{ravi-eq:u-turn}$$ We cut a random slice out of the canonical distribution, $u \sim {\mathcal{U}}[0, p(\theta, r)]$. Out of all the states explored while building the tree, we select the states that satisfy the condition $p(\tilde{\theta}, \tilde{r}) \geq u$. We uniformly select one state out of all the chosen states, which serves as the next state $\theta^{i+1}$. We repeat the aforementioned steps until the required number of samples is obtained. Interested readers can refer to [@ravi-bib-nuts] for the detailed algorithm. The samples drawn using NUTS are ergodic with respect to the canonical density function [@ravi-bib-nuts]. One of the main challenges of any gradient-based sampling method is the calculation of the gradient. In many cases, the gradients are not available, but one can use a numerical approximation of the gradients. With the help of auto-differentiation methods, the derivatives are easier to calculate. However, we need to compute the derivative at every point of the time integration step. This is very resource-intensive for computationally expensive functions. In this work, we suggest building a computationally cheap multi-fidelity surrogate of the function to approximate the derivative. In the next section, we explain the multi-fidelity method in more detail. # Multi-fidelity Methods {#ravi-sec:mf} For different applications, we often have a hierarchy of models. These models describe the same phenomenon but with different levels of accuracy and resource requirements. Low-fidelity functions are computationally inexpensive but have relatively high errors. In contrast, high-fidelity functions are computationally expensive but accurately model the phenomena. We arrange the functions in increasing order of fidelity $\{ f_1, f_2, ..., f_L \}$, where $f_i:{\mathbb{R}}^d \rightarrow {\mathbb{R}}, \, i=1,2,...,L$. We need to evaluate these functions multiple times in applications such as uncertainty quantification, optimization, and inverse problems. Evaluating the high-fidelity function frequently is practically not possible because of resource limitations.
Moreover, we cannot completely replace the high-fidelity function with the low-fidelity function because we need the final result to be reasonably accurate, too. Multi-fidelity methods combine different fidelity functions to leverage the speed of the low-fidelity function and the accuracy of the high-fidelity function. A detailed survey of the different multi-fidelity methods is given in [@ravi-bib-survey-mf]. In this work, we use Gaussian Processes (GPs) to create the multi-fidelity surrogate. We discuss a method that uses GPs to build a multi-fidelity surrogate in Section [[4.1](#ravi-subsec:mfgp){reference-type="ref" reference="ravi-subsec:mfgp"}]{.upright}. The surrogate guides NUTS to propose samples. In Section [[4.2](#ravi-subsec:da){reference-type="ref" reference="ravi-subsec:da"}]{.upright} we discuss the delayed acceptance method that ensures ergodicity of the samples with respect to the high-fidelity function. Finally, in Section [[4.3](#ravi-subsec:mfnuts){reference-type="ref" reference="ravi-subsec:mfnuts"}]{.upright} we combine all the different components to explain the multi-fidelity No-U-Turn sampling method. ## Multi-fidelity Gaussian Process Surrogates {#ravi-subsec:mfgp} Let us consider a two-fidelity system where $f_l(\theta)$ and $f_h(\theta)$ represent the low- and high-fidelity functions, respectively, defined on the same parameter space with points $\theta \in {\mathbb{R}}^d$. We perform a non-linear fusion of the different fidelities as discussed in [@ravi-bib-perdikaris-nargp; @ravi-bib-lee-mfgp]. We first discuss the non-linear auto-regressive Gaussian process (NARGP) [@ravi-bib-perdikaris-nargp], where the high-fidelity model is expressed as a function of the low-fidelity model and the parameter space as: $$f_h(\theta) = g(f_l(\theta), \theta) \ , \label{ravi-eq:mfgp-nargp-ansatz}$$ where $g$ is a function defined on a $(d+1)$-dimensional space. In this way, one can explore the non-linear relationship between the high-fidelity and low-fidelity functions. A Gaussian process (GP) [@ravi-bib-rasmussen-book] is used to express the function $g$ because it provides not just the prediction but also the confidence interval of the predicted value. The kernel of NARGP is chosen to mimic the auto-regressive structure discussed in [@ravi-bib-kennedy-ohagan-auto-reg]: $$k_{NARGP} = k_f(f_l(\theta), f_l(\theta')|\lambda_{f}) k_{\rho}(\theta, \theta'|\lambda_{\rho}) + k_{\delta}(\theta, \theta'|\lambda_{\delta}) \ , \label{ravi-eq:nargp-kernel}$$ where $k_f, k_{\rho}, k_{\delta}$ are positive definite kernels and $\lambda_f, \lambda_{\rho}, \lambda_{\delta}$ are the corresponding hyperparameters. We use squared exponential kernels in our work. However, one can choose a tailored kernel depending on the application case [@ravi-bib-rasmussen-book]. One can also include the derivative of the low-fidelity function in the formulation of the composite function $g$ in  [([\[ravi-eq:mfgp-nargp-ansatz\]](#ravi-eq:mfgp-nargp-ansatz){reference-type="ref" reference="ravi-eq:mfgp-nargp-ansatz"})]{.upright}. However, the derivative of the low-fidelity function may not be readily available. Instead of calculating the derivative using the finite difference method, which can cause rounding errors, extra inputs are added to the composite function that correspond to the lag terms $f_l(\theta-\tau)$ and $f_l(\theta+\tau)$ and mimic the derivative of the low-fidelity function, where $\tau$ represents a small real number.
Now, the structure of the surrogate is: $$f_h(\theta) = g(f_l(\theta), f_l(\theta+\tau), f_l(\theta-\tau), \theta) \label{ravi-eq:mfgp-gpdf-ansatz}$$ We can use any kernel of our choice to build the surrogate; it does not need to follow a structure like NARGP. This multi-fidelity Gaussian process surrogate is discussed in [@ravi-bib-lee-mfgp]. We call this method Gaussian Process with Derivative Fusion (GPDF). ## Delayed acceptance {#ravi-subsec:da} The Delayed Acceptance (DA) algorithm [@ravi-bib-delayed-acceptance] was developed to sample from a distribution $\pi(\theta)$ when there exists an approximation $\pi^*(\theta)$ of the target density. The distribution does not need to be normalized. Just like any Metropolis-Hastings algorithm, one needs a proposal density function $q(\tilde{\theta}|\theta)$ and an initial point $\theta^0$. The steps of the simplified version of DA are summarized below: 1. Draw a sample from the proposal distribution $\tilde{\theta} \sim q(\tilde{\theta}|\theta)$ and accept/reject the proposal for the next step based on the following acceptance probability: $$\alpha^*(\tilde{\theta} | \theta) = \min \left\{ 1, \frac{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})}{\pi^*(\theta) q(\tilde{\theta}|\theta)} \right\} \label{ravi-eq:da1}$$ 2. If the sample is accepted in the previous step, then accept/reject the proposal $\tilde{\theta}$ based on the following acceptance probability: $$\begin{aligned} q^*(\tilde{\theta}|\theta) &= \alpha^*(\tilde{\theta} | \theta)q(\tilde{\theta}|\theta) \\ \alpha(\tilde{\theta} | \theta) &= \min \left\{ 1, \frac{q^*(\theta|\tilde{\theta}) \pi(\tilde{\theta})}{q^*(\tilde{\theta}|\theta) \pi(\theta)} \right\} \end{aligned} \label{ravi-eq:da2}$$ 3. The next sample is taken to be the same as the previous sample if the proposal is rejected at either of the previous two steps. 4. Repeat the first three steps until the required number of samples is drawn. The DA algorithm preserves the detailed balance with respect to the target density $\pi(\theta)$. [\[ravi-lemma-detailed-balance-da\]]{#ravi-lemma-detailed-balance-da label="ravi-lemma-detailed-balance-da"} *Proof.* We need to show $q(\tilde{\theta}|\theta) \alpha^*(\tilde{\theta}|\theta) \alpha(\tilde{\theta}|\theta) \pi(\theta) = q(\theta|\tilde{\theta}) \alpha^*(\theta|\tilde{\theta}) \alpha(\theta|\tilde{\theta}) \pi(\tilde{\theta})$ to establish the detailed balance. The proposed sample is either accepted or rejected. Proving the detailed balance for the rejected case ($\tilde{\theta} = \theta$) is trivial. Let us analyze the case when the sample is accepted. We look at the left-hand side of the target equation.
$$\begin{aligned} & q(\tilde{\theta}|\theta) \alpha^*(\tilde{\theta}|\theta) \alpha(\tilde{\theta}|\theta) \pi(\theta) \\ &=q(\tilde{\theta}|\theta) \min \left\{ 1, \frac{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})}{\pi^*(\theta) q(\tilde{\theta}|\theta)} \right\} \min \left\{ 1, \frac{q(\theta|\tilde{\theta}) \min \left\{ 1, \frac{\pi^*(\theta) q(\tilde{\theta}|\theta)}{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})} \right\} \pi(\tilde{\theta})}{q(\tilde{\theta}|\theta)\min \left\{ 1, \frac{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})}{\pi^*(\theta) q(\tilde{\theta}|\theta)} \right\} \pi(\theta)} \right\} \pi(\theta) \\ &=q(\tilde{\theta}|\theta) \min \left\{ 1, \frac{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})}{\pi^*(\theta) q(\tilde{\theta}|\theta)} \right\} \min \left\{ 1, \frac{q(\theta|\tilde{\theta}) \min \left\{ 1, \frac{\pi^*(\theta) q(\tilde{\theta}|\theta)}{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})} \right\} \pi(\tilde{\theta}) \pi^*(\tilde{\theta}) }{q(\tilde{\theta}|\theta)\min \left\{ 1, \frac{\pi^*(\tilde{\theta}) q(\theta|\tilde{\theta})}{\pi^*(\theta) q(\tilde{\theta}|\theta)} \right\} \pi(\theta) \pi^*(\tilde{\theta}) } \right\} \pi(\theta) \frac{\pi^*(\theta)}{\pi^*(\theta)} \\ &= \min \left\{ \pi^*(\theta) q(\tilde{\theta}|\theta), \pi^*(\tilde{\theta}) q(\theta|\tilde{\theta}) \right\} \min \left\{ \frac{\pi(\theta)}{\pi^*(\theta)}, \frac{\min \left\{q(\theta|\tilde{\theta})\pi^*(\tilde{\theta}), q(\tilde{\theta}|\theta) \pi^*(\theta) \right\} \pi (\tilde{\theta})}{\min \left\{q(\tilde{\theta}|\theta) \pi^*(\theta), q(\theta|\tilde{\theta}) \pi^*(\tilde{\theta}) \right\} \pi^*(\tilde{\theta})} \right\} \\ &= \min \left\{ \pi^*(\theta) q(\tilde{\theta}|\theta), \pi^*(\tilde{\theta}) q(\theta|\tilde{\theta}) \right\} \min \left\{ \frac{\pi(\theta)}{\pi^*(\theta)}, \frac{\pi(\tilde{\theta})}{\pi^*(\tilde{\theta})} \right\} \end{aligned}$$ We observe that the final expression is symmetric in $\theta$ and $\tilde{\theta}$. So, we arrive at the same expression if we simplify $q(\theta|\tilde{\theta}) \alpha^*(\theta|\tilde{\theta}) \alpha(\theta|\tilde{\theta}) \pi(\tilde{\theta})$, which is the right-hand side of the detailed balance equation. This completes the proof. ◻ ## Multi-fidelity No-U-Turn sampling {#ravi-subsec:mfnuts} The steps of the Multi-fidelity No-U-Turn Sampling (MFNUTS) algorithm are shown in Algorithm [[\[ravi-alg:mfnuts\]](#ravi-alg:mfnuts){reference-type="ref" reference="ravi-alg:mfnuts"}]{.upright}. We can broadly divide the algorithm into two parts, namely the offline stage and the sampling stage. The offline stage includes building the surrogate, and the sampling stage involves using the surrogate and the high-fidelity function to obtain samples. The first part of the algorithm is to build a Multi-fidelity Gaussian Process (MFGP) surrogate as described in Section [[4.1](#ravi-subsec:mfgp){reference-type="ref" reference="ravi-subsec:mfgp"}]{.upright}. We build the surrogate of the logarithm of the target density function: we randomly sample some points within the design space and build surrogates using both NARGP and GPDF. Then, we select the model with the smallest mean squared error with respect to some test points. The surrogate is denoted by ${\mathcal{L}}_s(\theta)$ and the corresponding canonical distribution, as in  [([\[ravi-eq:canonical-density\]](#ravi-eq:canonical-density){reference-type="ref" reference="ravi-eq:canonical-density"})]{.upright}, by $p_s(\theta, r)$.
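As an illustration of the surrogate construction, the following minimal NumPy sketch evaluates the NARGP kernel of Section [[4.1](#ravi-subsec:mfgp){reference-type="ref" reference="ravi-subsec:mfgp"}]{.upright} with squared exponential components. The hyperparameter values, the toy low-fidelity model, and the function names are illustrative assumptions; in the actual implementation, the hyperparameters are optimized by a GP library rather than fixed by hand.

```python
import numpy as np

def sq_exp(a, b, variance, lengthscale):
    """Squared exponential kernel k(a, b) = variance * exp(-||a - b||^2 / (2 * lengthscale^2))."""
    d2 = np.sum((np.atleast_1d(a) - np.atleast_1d(b)) ** 2)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def k_nargp(theta, theta_prime, f_low, lam_f, lam_rho, lam_delta):
    """NARGP kernel: k_f acts on the low-fidelity outputs, k_rho and k_delta act on the inputs."""
    k_f = sq_exp(f_low(theta), f_low(theta_prime), *lam_f)
    k_rho = sq_exp(theta, theta_prime, *lam_rho)
    k_delta = sq_exp(theta, theta_prime, *lam_delta)
    return k_f * k_rho + k_delta

# illustrative low-fidelity model and fixed hyperparameters (variance, lengthscale);
# in practice the hyperparameters are found by maximizing the GP marginal likelihood
f_low = lambda th: np.sin(np.sum(th))
theta, theta_prime = np.array([0.1, 0.2]), np.array([0.3, 0.0])
value = k_nargp(theta, theta_prime, f_low,
                lam_f=(1.0, 0.5), lam_rho=(1.0, 1.0), lam_delta=(0.1, 1.0))
```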
$f_s(\theta) \leftarrow$ `BuildMFGP`$({\mathcal{F}}, \theta^{\text{upper}}, \theta^{\text{lower}})$\ Define the log-likelihood of the surrogate (${\mathcal{L}}_s(\theta)$) and the corresponding density function ($\pi_s(\theta):=\exp \{{\mathcal{L}}_s(\theta)\}$)\ Define the canonical distribution of the surrogate $p_s(\theta, r) := \exp\{ {\mathcal{L}}_s(\theta) - \frac{1}{2} r \cdot r \}$, where $r$ is the momentum term\ $\epsilon \leftarrow$ `FindStepSize`$({\mathcal{L}}_s, \theta^0, M^{\text{adapt}})$ The second step is to determine the step size ($\epsilon$) of the leapfrog method. On the one hand, a large value of $\epsilon$ will cause the leapfrog method to visit only a few states before a U-turn is observed. This corresponds to an improper exploration of the states and may lead to many rejected samples. On the other hand, a very small value of $\epsilon$ results in the exploration of many nearby states before reaching the stopping criterion, which is inefficient and leads to an overly high acceptance ratio. So, we optimize the value of $\epsilon$ such that the expected value of the acceptance ratio reaches a target value ($\delta$). In this work, we use $\delta=0.65$ following the argument from [@ravi-bib-neal-handbook; @ravi-bib-optimal-tuning-mcmc]. To achieve this, the dual averaging technique as described in [@ravi-bib-nuts; @ravi-bib-nesterov-dual-averaging] is used. Only the surrogate canonical distribution $p_s(\theta, r)$ is used to determine the step size. We are given a starting point $\theta_0$ and we randomly sample a momentum term ($r_0 \sim {\mathcal{N}}(0, {\mathbb{I}}_d)$). We start with some assumed value of $\epsilon$, perform one step of the leapfrog method, and check whether the acceptance probability as shown in [\[ravi-eq:alpha-hmc\]](#ravi-eq:alpha-hmc){reference-type="eqref" reference="ravi-eq:alpha-hmc"} is greater than 0.5. The value of $\epsilon$ is halved until this criterion is satisfied. This value is taken as the starting step size ($\epsilon_0$) for the dual averaging algorithm, which we run for a predefined number of steps. We call this part, which finds a suitable value of the step size ($\epsilon$), the adaptation step. We start the actual sampling after the adaptation step. In each sampling step, we run one step of NUTS as described in Section [[3](#ravi-sec:mcmc){reference-type="ref" reference="ravi-sec:mcmc"}]{.upright} using the surrogate canonical density function $p_s(\theta, r)$. Let the proposed sample be $(\tilde{\theta}, \tilde{r})$. The sample is accepted or rejected using the DA algorithm as described in Section [[4.2](#ravi-subsec:da){reference-type="ref" reference="ravi-subsec:da"}]{.upright}. Let us denote the density function corresponding to the highest-fidelity model by $\pi_L(\theta)$. We know that the leapfrog method is time-reversible. So, the proposal distribution becomes symmetric: $q((\tilde{\theta}, \tilde{r})|(\theta, r)) = q((\theta, r)|(\tilde{\theta}, \tilde{r}))$.
After substituting the previous assumption in  [([\[ravi-eq:da2\]](#ravi-eq:da2){reference-type="ref" reference="ravi-eq:da2"})]{.upright}, we obtain the acceptance ratio of the MFNUTS algorithm: $$\alpha_{\text{MFNUTS}}(\tilde{\theta}|\theta) = \min \left\{ 1, \frac{\min \left\{ 1, \frac{p_s(\theta, r)}{p_s(\tilde{\theta}, \tilde{r})} \right\}\pi_L(\tilde{\theta})}{\min \left\{ 1, \frac{p_s(\tilde{\theta}, \tilde{r})}{p_s(\theta, r)} \right\}\pi_L(\theta)} \right\} \ .$$ Using Lemma [[\[ravi-lemma-detailed-balance-da\]](#ravi-lemma-detailed-balance-da){reference-type="ref" reference="ravi-lemma-detailed-balance-da"}]{.upright}, we can show that the generated samples conserve the detailed balance with respect to $\pi_L(\theta)$. If the leapfrog steps do not get stuck in periodic cycles, then the samples are ergodic with respect to the density function of the highest-fidelity model. # Numerical Results {#ravi-sec:results} We compare the MFNUTS algorithm with the Metropolis-Hastings algorithm, HMC, NUTS, and the Delayed Rejection Adaptive Metropolis (DRAM) [@ravi-bib-dram] algorithm. The implementation of MFNUTS is available in the GitHub repository^2^. We use ParaMonte [@ravi-bib-paramonte] to run DRAM. TensorFlow Probability [@ravi-bib-tfd] is used to run the other three algorithms. We use three cases to compare the different methods and draw 10,000 samples for each case, with 2,000 adaptation or burn-in steps. The first case is drawing samples from a density whose logarithm is given by a Rosenbrock-type function. This case checks the ability of the method to draw samples from a non-linear density function, where a naive algorithm can result in the rejection of a lot of samples. The second case is drawing samples from an 8-dimensional correlated Gaussian distribution to check the performance in higher dimensions. Finally, we test the methods by inferring the intensities of multiple source terms in a steady-state groundwater flow problem. The Multivariate Effective Sample Size (mESS, see [@ravi-bib-mess]) is used to compare the quality of the samples drawn by each method. In many real-world applications, the cost of evaluating a surrogate is negligible compared to the computation of the high-fidelity function. So, we can ignore the surrogate evaluations when calculating the computational cost. We compare the mESS with respect to the number of high-fidelity evaluations. A higher value of mESS corresponds to a better method. We also account for the high-fidelity function evaluations done in the offline and burn-in phases in the plots. To create the multi-fidelity surrogate, we start with some random points. We keep adding points to the training set at the locations of highest variance until the mean squared error of the surrogate is of the order of $10^{-3}$. ## Rosenbrock function We first test MFNUTS on a well-known benchmark test case. We take a two-fidelity scenario where the log-likelihood of the high-fidelity model, ${\mathcal{L}}_2$, is given by the Rosenbrock function and the log-likelihood of the low-fidelity model, ${\mathcal{L}}_1$, is a slightly modified Rosenbrock function: $$\begin{aligned} {\mathcal{L}}_1(\theta_1, \theta_2) &= - 12(\theta_2 - \theta_{1}^{2} -1)^2 + (\theta_1 -1)^2 \ , \\ {\mathcal{L}}_2(\theta_1, \theta_2) &= - 50(\theta_2 -\theta_{1}^{2})^2 + (\theta_1 - 1)^2 \ . \end{aligned} \label{ravi-eq:rosenbrock-likelihood}$$ The unnormalized density functions ($\pi_1(\theta), \pi_2(\theta)$) are the exponentials of the corresponding log-likelihood functions.
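For concreteness, the following is a minimal sketch of the two log-likelihoods exactly as written above, together with the gradient of ${\mathcal{L}}_2$ that a gradient-based sampler would require. The function names are illustrative; in MFNUTS the leapfrog steps use the gradient of a surrogate of ${\mathcal{L}}_2$ rather than the exact gradient shown here.

```python
import numpy as np

def log_like_low(theta):
    """Low-fidelity log-likelihood L_1 as defined in the test case above."""
    t1, t2 = theta
    return -12.0 * (t2 - t1 ** 2 - 1.0) ** 2 + (t1 - 1.0) ** 2

def log_like_high(theta):
    """High-fidelity (Rosenbrock-type) log-likelihood L_2 as defined in the test case above."""
    t1, t2 = theta
    return -50.0 * (t2 - t1 ** 2) ** 2 + (t1 - 1.0) ** 2

def grad_log_like_high(theta):
    """Exact gradient of L_2; MFNUTS replaces this by the gradient of the surrogate."""
    t1, t2 = theta
    d1 = 200.0 * t1 * (t2 - t1 ** 2) + 2.0 * (t1 - 1.0)
    d2 = -100.0 * (t2 - t1 ** 2)
    return np.array([d1, d2])

def density_high(theta):
    """Unnormalized high-fidelity density pi_2(theta) = exp(L_2(theta))."""
    return np.exp(log_like_high(theta))
```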
The contour lines of the low- and high-fidelity density functions are visualized in Figure [[1](#ravi-fig:rosenbrock-contour){reference-type="ref" reference="ravi-fig:rosenbrock-contour"}]{.upright} and are considerably different. If one directly uses the low-fidelity function as a guide to drawing proposals for the high-fidelity function, then this will lead to a lot of rejections. The transformation from the low-fidelity function to the high-fidelity function involves a translation and a non-linear shape modification. A linear function would not be sufficient to learn this transformation. Therefore, we use the non-linear transformation described in Section [[4.1](#ravi-subsec:mfgp){reference-type="ref" reference="ravi-subsec:mfgp"}]{.upright}. The surrogate is created using $50$ high-fidelity function evaluations and $200$ low-fidelity function evaluations. In this case, GPDF yields the smallest mean squared error. ![Contour of low and high-fidelity functions](ravi-figures/lf_hf_rosenbrock.pdf){#ravi-fig:rosenbrock-contour width="\\linewidth"} ![MFNUTS](ravi-figures/samples_mf_nuts_rosenbrock.pdf){#ravi-fig:samples-mfnuts-rosenbrock width="\\linewidth"} ![Metropolis-Hastings algorithm](ravi-figures/samples_mh_rosenbrock.pdf){#ravi-fig:samples-mh-rosenbrock width="\\linewidth"} ![HMC](ravi-figures/samples_hmc_rosenbrock.pdf){#ravi-fig:samples-hmc-rosenbrock width="\\linewidth"} ![NUTS](ravi-figures/samples_nuts_rosenbrock.pdf){#ravi-fig:samples-nuts-rosenbrock width="\\linewidth"} ![DRAM](ravi-figures/samples_dram_rosenbrock.pdf){#ravi-fig:samples-dram-rosenbrock width="\\linewidth"} ![mESS over the number of high-fidelity evaluations for the Rosenbrock function](ravi-figures/mess_rosenbrock.pdf){#ravi-fig:mess-rosenbrock width="0.75\\linewidth"} Figure [[\[ravi-fig:results-rosenbrock\]](#ravi-fig:results-rosenbrock){reference-type="ref" reference="ravi-fig:results-rosenbrock"}]{.upright} shows that the samples drawn by all methods represent the target density function. However, HMC and NUTS show heavier tails compared to the other methods because the tail regions are generally flat and a small value of the momentum term is not enough for the system to escape the region. MFNUTS has a relatively low number of samples in the tail region compared to the other gradient-based methods because of the additional acceptance term ($\alpha_{\text{MFNUTS}}$), which mimics the acceptance ratio of the Metropolis-Hastings algorithm. We can observe from Figure [[7](#ravi-fig:mess-rosenbrock){reference-type="ref" reference="ravi-fig:mess-rosenbrock"}]{.upright} that MFNUTS outperforms all other methods. The mESS of the samples generated by the Metropolis-Hastings algorithm is worse than that of NUTS and MFNUTS. This is because the Gaussian density function which is used to propose samples is not flexible enough to model the complicated banana-shaped target density. Thus, a lot of samples are rejected, leading to a repetition of states, which lowers the value of mESS. HMC has the worst performance amongst all the methods because we used the default number of time steps for the leapfrog integration as provided by TensorFlow Probability instead of manually tuning it. NUTS generates a higher mESS value than HMC. This example shows the importance of the automatic selection of the number of steps. However, NUTS evaluated the model more frequently than HMC. Moreover, a lot of high-fidelity evaluations were also performed during the adaptation steps.
DRAM has the highest mESS compared to the other methods, but it requires a lot of high-fidelity function evaluations. Furthermore, DRAM needs extra function evaluations to learn the adaptive transition probability and to propose another sample when the previously proposed sample is rejected. MFNUTS outperforms all methods by retaining the advantages of NUTS and replacing the high-fidelity function evaluations needed for the derivative evaluation with the surrogate. In most real-world scenarios, the cost of evaluating the surrogate is very small compared to that of the high-fidelity function. So, we circumvent the computationally expensive part by using the surrogate. ## 8-d correlated Gaussian distribution In this section, we compare the performance of MFNUTS in a higher-dimensional space. The low-fidelity function is a Gaussian distribution with zero mean and the identity as the covariance matrix. The high-fidelity density function is also a Gaussian distribution with zero mean, but now with a tridiagonal matrix as covariance. The level sets of the log-likelihoods of the low- and high-fidelity densities are spheres and ellipsoids in 8-d space, respectively. We use NARGP to learn the transformation between the log-likelihoods using $100$ high-fidelity and $500$ low-fidelity evaluations. The evolution of the mESS with respect to the number of high-fidelity evaluations is shown in Figure [[8](#ravi-fig:mess-8d-Gaussian){reference-type="ref" reference="ravi-fig:mess-8d-Gaussian"}]{.upright}. ![mESS over the number of high-fidelity evaluations for the 8-d Gaussian test case.](ravi-figures/mess_8d_Gaussian.pdf){#ravi-fig:mess-8d-Gaussian width=".75\\linewidth"} We again observe that MFNUTS outperforms the other methods. We also observe that the gradient-based algorithms (HMC and NUTS) and DRAM outperform the Metropolis-Hastings algorithm, which is the general trend observed in high-dimensional sampling problems, since they provide better proposals than the Metropolis-Hastings algorithm. We also observe that HMC achieves a higher mESS for the same number of high-fidelity evaluations. We assume that the default value of the number of integration steps was suitable for the target distribution. As in the previous example, MFNUTS is better than the other methods because it delegates the computationally expensive part of NUTS to the surrogate. ## Steady-state groundwater flow Let us consider a two-dimensional steady-state groundwater flow problem with source terms. The governing equation is: $$\begin{aligned} \nabla \cdot \left( \kappa(X) \nabla u(X) \right) = S(X) \quad \quad \quad X \in \Omega \end{aligned} \label{ravi-eq:groundawter-flow-eq}$$ where $\Omega:= [0,1]^2$ represents the spatial domain, $\kappa (X)$ represents the diffusion coefficient and $S(X)$ represents the source term. We consider zero Dirichlet boundary conditions. For the given problem, we assume that the diffusion coefficient is constant $(\kappa (X) = 1)$ and that the source term is the sum of $N \in {\mathbb{Z}}^+$ Gaussian sources: $$S(X) = \sum_{i=1}^{N} S_i(X) = \sum_{i=1}^{N} \theta_i {\mathcal{N}}(\mu_i, \sigma_i^2) \ , \label{ravi-eq:source-term}$$ where the $i^{\text{th}}$ Gaussian source is defined by its location $\mu_i$, variance $\sigma_i^2$ and intensity $\theta_i$. We consider a case with four sources as shown in Figure [[9](#ravi-fig:source-poisson){reference-type="ref" reference="ravi-fig:source-poisson"}]{.upright}, at locations $\left[ (0.33, 0.33), (0.33, 0.67), (0.67, 0.33), (0.67, 0.67) \right]$ and each with variance $0.01$.
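The following minimal NumPy sketch evaluates the source term for the four-source configuration described above, with isotropic Gaussian bumps of variance $0.01$. The helper name and the evaluation grid are illustrative and independent of the finite element solver used in the experiments.

```python
import numpy as np

# source locations mu_i and common variance sigma_i^2 = 0.01 of the four Gaussian sources
locations = np.array([[0.33, 0.33], [0.33, 0.67], [0.67, 0.33], [0.67, 0.67]])
variance = 0.01

def source_term(X, intensities):
    """S(X) = sum_i theta_i * N(X; mu_i, sigma_i^2 I) evaluated at points X of shape (n, 2)."""
    X = np.atleast_2d(X)
    S = np.zeros(X.shape[0])
    for theta_i, mu_i in zip(intensities, locations):
        sq_dist = np.sum((X - mu_i) ** 2, axis=1)
        S += theta_i * np.exp(-0.5 * sq_dist / variance) / (2.0 * np.pi * variance)
    return S

# evaluate on a coarse grid of the unit square with the intensities used to generate the data
grid = np.stack(np.meshgrid(np.linspace(0, 1, 9), np.linspace(0, 1, 9)), axis=-1).reshape(-1, 2)
S_values = source_term(grid, intensities=[0.75, 1.25, 0.8, 1.2])
```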
We place nine probes, marked by red dots in Figure [[9](#ravi-fig:source-poisson){reference-type="ref" reference="ravi-fig:source-poisson"}]{.upright}. Our goal is to infer the source intensities from given measurements at the probes. We use the open-source finite element solver FEniCS [@ravi-bib-fenics] to solve the differential equation. Measurement data is generated by solving [([\[ravi-eq:groundawter-flow-eq\]](#ravi-eq:groundawter-flow-eq){reference-type="ref" reference="ravi-eq:groundawter-flow-eq"})]{.upright} on a $64 \times 64$ mesh with source intensities $\theta = \left[ 0.75, 1.25, 0.8, 1.2 \right]$ and adding Gaussian noise with variance $0.005$. ![Source term and the probe locations](ravi-figures/source_poisson.pdf){#ravi-fig:source-poisson width="\\linewidth"} ![Solution of the groundwater flow equation](ravi-figures/solution_poisson.pdf){#ravi-fig:solution-poisson width="\\linewidth"} [\[ravi-fig:poisson-setup\]]{#ravi-fig:poisson-setup label="ravi-fig:poisson-setup"} ![mESS over the number of high-fidelity evaluations for the steady-state groundwater flow case.](ravi-figures/mess_poisson.pdf){#ravi-fig:mess-poisson width=".75\\linewidth"} ![Pairwise plot of the samples of the source intensities from MFNUTS for the groundwater flow test case](ravi-figures/poisson_pairwise_mfnuts.pdf){#ravi-fig:pairwise-poisson-mfnuts width="\\linewidth"} We model the likelihood function as Gaussian with variance $0.005$ (approximately $1\%$ of the mean value of the solution $u$) and the prior as a Gaussian with mean $1.0$ and variance $1.0$. Then, we multiply the likelihood by the prior to obtain the unnormalized posterior distribution, which is the target density function. For this test case, we only compare MFNUTS with the Metropolis-Hastings algorithm. The solver does not support auto-differentiation, so the calculation of the derivative can only be done using the finite difference method, which is computationally very demanding. Therefore, NUTS and HMC have a clear disadvantage in this test case. We consider two fidelities of solvers for the multi-fidelity sampler. The low- and high-fidelity solvers use meshes of size $8 \times 8$ and $64 \times 64$, respectively. We create separate multi-fidelity surrogates for the observations at all probe locations using NARGP; $70$ high-fidelity and $450$ low-fidelity function evaluations were used for generating the surrogates. Then, we use all the surrogates to compute the posterior, which is used as the final multi-fidelity surrogate for the MFNUTS sampler. We observe from Figure [[11](#ravi-fig:mess-poisson){reference-type="ref" reference="ravi-fig:mess-poisson"}]{.upright} that MFNUTS results in a considerably higher mESS value than the Metropolis-Hastings algorithm for a similar number of high-fidelity function evaluations, as observed in the previous two test cases. We also plot the samples drawn by MFNUTS in Figure [[12](#ravi-fig:pairwise-poisson-mfnuts){reference-type="ref" reference="ravi-fig:pairwise-poisson-mfnuts"}]{.upright}. The samples are mostly concentrated in elliptical blobs. The mean of the samples is $[0.87, 1.25, 0.92, 1.25]$, which is close to the source intensities used to generate the data. # Conclusion and future work {#ravi-sec:conclusion} In this paper, we compare our proposed MFNUTS algorithm with existing single-fidelity sampling methods. In all three test cases, MFNUTS outperforms the single-fidelity methods.
This was achieved by taking advantage of NUTS and delegating the computationally expensive part to the surrogate. Having high mESS values and a proper exploration of the domain becomes very important when dealing with computationally expensive models. Bad proposals lead to a waste of computational resources, and a low mESS causes a high mean squared error. Our method is particularly useful in such cases. Moreover, our method generates samples that are invariant with respect to the high-fidelity model. However, the quality of the proposals depends on the surrogate. If the surrogate itself has a high error, then the proposals will lead to a lot of rejections, thereby decreasing the effective sample size. It is not essential to use a Gaussian process to build the surrogate; one can use any other method, such as a neural network or a sparse grid approximation. To further improve the algorithm, we can also add the Delayed Rejection [@ravi-bib-delayed-rejection] feature. # Acknowledgement {#acknowledgement .unnumbered} The present contribution is supported by the Helmholtz Association under the research school Munich School for Data Science (MUDS).
1. Kaipio J, Somersalo E. Statistical and computational inverse problems. Springer Science and Business Media; 2006.
2. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. The Journal of Chemical Physics. 1953;21(6):1087-92.
3. Geman S, Geman D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1984;(6):721-41.
4. Neal RM. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo. 2011;2(11):2.
5. Hoffman MD, Gelman A. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research. 2014;15(1):1593-623.
6. Peherstorfer B, Willcox K, Gunzburger M. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review. 2018;60(3):550-91.
7. Rasmussen CE. Gaussian processes in machine learning. In: Summer School on Machine Learning. Springer, Berlin, Heidelberg; 2003. pp. 63-71.
8. Perdikaris P, Raissi M, Damianou A, Lawrence ND, Karniadakis GE. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2017;473(2198):20160751.
9. Lee S, Dietrich F, Karniadakis GE, Kevrekidis IG. Linking Gaussian process regression with data-driven manifold embeddings for nonlinear data fusion. Interface Focus. 2019;9(3):20180083.
10. Hadamard J. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin. 1902:49-52.
11. Kennedy MC, O'Hagan A. Predicting the output from a complex computer code when fast approximations are available. Biometrika. 2000;87(1):1-13.
12. Christen JA, Fox C. Markov chain Monte Carlo using an approximation. Journal of Computational and Graphical Statistics. 2005;14(4):795-810.
13. Nesterov Y. Primal-dual subgradient methods for convex problems. Mathematical Programming. 2009;120(1):221-59.
14. Beskos A, Pillai N, Roberts G, Sanz-Serna JM, Stuart A. Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli. 2013;19(5A):1501-34.
15. Haario H, Laine M, Mira A, Saksman E. DRAM: efficient adaptive MCMC. Statistics and Computing. 2006;16:339-54.
16. Shahmoradi A, Bagheri F, Kumbhare S. ParaMonte: plain powerful parallel Monte Carlo library. Bulletin of the American Physical Society. 2020;65.
17. Dillon JV, Langmore I, Tran D, Brevdo E, Vasudevan S, Moore D, Patton B, Alemi A, Hoffman M, Saurous RA. TensorFlow Distributions. arXiv preprint arXiv:1711.10604; 2017.
18. Vats D, Flegal JM, Jones GL. Multivariate output analysis for Markov chain Monte Carlo. Biometrika. 2019;106(2):321-37.
19. Scroggs MW, Baratta IA, Richardson CN, Wells GN. Basix: a runtime finite element basis evaluation library. Journal of Open Source Software. 2022;7(73):3982.
20. Tierney L, Mira A. Some adaptive Monte Carlo methods for Bayesian inference. Statistics in Medicine. 1999;18(17-18):2507-15.
{ "id": "2310.02703", "title": "Multi-fidelity No-U-Turn Sampling", "authors": "Kislaya Ravi, Tobias Neckel and Hans-Joachim Bungartz", "categories": "math.NA cs.NA math.ST stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Single-level reformulations of (non-convex) distributionally robust optimization (DRO) problems are often intractable, as they contain semi-infinite dual constraints. Based on such a semi-infinite reformulation, we present a safe approximation that allows for the computation of feasible solutions for DROs that depend on nonconvex multivariate simple functions. Moreover, the approximation allows us to address ambiguity sets that can incorporate information on moments as well as confidence sets. The typically strong assumptions on the structure of the underlying constraints found in the literature, such as convexity in the decisions or concavity in the uncertainty, were, at least in part, recently overcome in [@Dienstbier2023a]. We start from the duality-based reformulation approach in [@Dienstbier2023a] that can be applied to DRO constraints based on simple functions that are univariate in the uncertainty parameters. We significantly extend their approach to multivariate simple functions, which leads to a considerably wider applicability of the proposed reformulation approach. In order to achieve algorithmic tractability, the presented safe approximation is then realized by a discretized counterpart for the semi-infinite dual constraints. The approximation leads to a computationally tractable mixed-integer positive semidefinite problem for which state-of-the-art software implementations are readily available. The tractable safe approximation provides sufficient conditions for distributional robustness of the original problem, i.e., obtained solutions are provably robust. address: Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstr. 11, 91058 Erlangen, Germany author: - J. Dienstbier, F. Liers, J. Rolfes bibliography: - bibliography.bib title: A Positive Semidefinite Safe Approximation of Multivariate Distributionally Robust Constraints Determined by Simple Functions --- # Introduction {#Sec:introduction} We consider distributionally robust optimization models that are governed by multivariate simple functions. Despite their non-convexity, we aim for algorithmically tractable approximations that are based on duality arguments. The resulting solutions yield a safe approximation, which means that they are guaranteed to be robust for the original constraints. The approach presented here starts from the considerations in [@Dienstbier2023a] for constraints that are univariate in the uncertain parameters and extends them to the considerably more general case of constraints that are multivariate in the uncertainty. In that approach, a safe approximation was developed that leads to a mixed-integer linear optimization problem. Despite the NP-hardness of this problem class, practically efficient algorithms and software are readily available. In addition, it could be proven that the safe approximation is asymptotically correct, i.e., it does not only yield robust solutions, but asymptotically solves the original distributionally robust problem. In our generalization to the multivariate setting, we use the same notation as in [@Dienstbier2023a]. For completeness of the exposition, we repeat the necessary ingredients. Let $x\in \mathbb{R}^m$ denote the decision variables, $b\in \mathbb{R}$ a scalar, and $\mathcal{P}$ a set of probability measures on the compact domain $T$. We then model the uncertainty in our DRO with a random vector $t\in T$ distributed according to an (uncertain) probability measure $\mathbbm{P}\in \mathcal{P}$.
As typical in (distributionally) robust optimization, the task consists in determining decisions $x$ that are feasible even in case the uncertain probability measures are chosen in an adversarial way, which coined the name 'adversary'. In addition, in case of an optimization model, the chosen robust solution shall lead to a best possible guaranteed objective value. Here, $v:\mathbb{R}^m\times T\rightarrow \mathbb{R}$ denotes a function that connects the decision variables $x$ with the random vector $t$, e.g. $v(x^-,x^+,t)=\mathbbm{1}_{[x^-,x^+]}(t)$ if we want to model whether $t\in [x^-,x^+]$ holds. Then, a *distributionally robust constraint* or DRO constraint is defined by $$\label{Eq: DRO_constraint} b\leq \min_{\mathbbm{P}\in \mathcal{P}} \mathbbm{E}_{\mathbbm{P}}\left(v(x,t)\right).$$ Constraints of this form contain both the purely stochastic as well as the robust models as special cases. Indeed, setting $\mathcal{P}=\{\mathbbm{P}\}$ leads to a *stochastic constraint*: $$b\leq \mathbbm{E}_{\mathbbm{P}}\left(v(x,t)\right).$$ Setting $\mathcal{P}=\{\delta_t: t\in T\}$, where $\delta_t$ denotes the Dirac point measure at $t\in T$, [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} yields a *robust constraint* $$b\leq \min_{t\in T} v(x,t).$$ Next, we briefly review some relevant literature on optimization under uncertainty, and distributional robustness in particular. Stochastic optimization has been established for situations where the uncertainty follows some (known) distribution or where constraints must be met with a certain probability. It hedges against uncertainty in a probabilistic sense and implicitly assumes that the underlying distributions can be closely approximated or are even known exactly. We refer to [@birge2006introduction] for a gentle introduction to stochastic optimization, as well as to the surveys [@prekopa1998SO] and [@Shapiro2003], particularly for discrete random variables. In stochastic optimization, a large variety of efficient and elegant models and solution approaches have been established. However, in applications the underlying distributions are often unknown, which may result in low-quality or even infeasible solutions in case the underlying assumptions on the distributions are not satisfied. In contrast, robust optimization offers a natural alternative, whereby uncertainty sets are established a priori. Feasibility of an obtained solution is guaranteed for all possible outcomes of the uncertainty within the uncertainty sets. A solution with the best guaranteed objective value is determined. Modelling and algorithmic approaches consist of (duality-based) reformulations of the semi-infinite or exponentially large robust counterparts, decomposition, or approximation approaches. Relevant literature on robust optimization includes [@soyster1973convex], [@ben1998robust], [@ben1999robust], [@ben2000robust]. Here, we focus on nonconvex problems. For recent decomposition approaches for nonconvex robust optimization, we refer to the adaptive bundle approach presented in [@Kuchlbauer2022adaptive], which has been integrated into an outer approximation procedure in [@kuchlbauerOuter] to handle additional discrete decisions. Robust and stochastic constraints can be integrated either via so-called 'probust functions', see e.g. [@adelhuette2021joint], or via distributional robustness as in formula [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"}.
Such integrated robust-probabilistic models combine advantages of both worlds, namely full protection as in the robust world together with a limited price of uncertainty protection as in stochastic optimization. More generally, DRO determines robust solutions that are protected against uncertainty in the underlying distributions. These distributions are assumed to reside in a so-called ambiguity set of probability measures, denoted by $\mathcal{P}$ above. Discrepancy-based ambiguity sets assume a nominal, 'typical', distribution and include distributions within a certain distance of it, where Wasserstein balls are natural distances [@mohajerin2018data]. For distributionally robust optimization, we refer to the detailed surveys [@Rahimian2019a] and [@Fengmin2021a] as well as the references therein. The adaptive bundle method has recently been used for a nonconvex DRO problem governed by a partial differential equation in [@kuchlbauer2023quality]. Our approach can also define moment-based ambiguity sets, where the moments of the distributions satisfy predetermined bounds. Mean-variance or Value-at-Risk measures are studied in [@ghaoui2003worst], whereas moment information is used in [@Popescu2007a]. [@Xu2017a] uses Slater conditions to show the correctness of a duality-based reformulation of the robust counterpart, together with discretization schemes to determine approximate solutions. [@Delage2010a] presents exact reformulations of specific DRO problems containing [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} as a constraint. One of the challenges of incorporating more information into moment-based ambiguity sets is addressed by the authors of [@Parys2015a], who provide a positive semidefinite programming (SDP) reformulation of [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} for cases where the probability distribution is known to be unimodal. Their approach differs from our work in that it considers the interplay between an outer-level DRO and [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"}. On the other hand, [@Wiesemann2014a] presents a duality-based reformulation of [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} that incorporates information on the confidence sets and assumes convexity. Under these assumptions, the approach can be applied to a DRO with [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} as a constraint. Our approach is based on ambiguity sets similar to the ones in [@Delage2010a], which consider mean and covariance matrix ranges along with confidence set information as in [@Wiesemann2014a]. However, our approach considerably extends this setting as it considers non-convex simple functions, which can approximate any, even non-convex, continuous function. [@Dienstbier2023a] considers the univariate case and presents a safe approximation that is based on mixed-integer linear constraints. In addition, they prove that the safe approximation converges to the true robust counterpart solution, rendering the approximation asymptotically a correct equivalent reformulation. Generalizing from [@Dienstbier2023a], we here consider multivariate simple functions, i.e. $\text{dim}(T)>1$ and $v(x,X,t)=\sum_{i=1}^k x_i\mathbbm{1}_{X_i}(t)$ with $k>1$, and present a mixed-integer positive semidefinite safe approximation.
Due to the availability of state-of-the-art software implementations for mixed-integer positive semidefinite optimization, this demonstrates the computational tractability of our modelling approach. This work is structured as follows. Section [2](#Sec: Problem Setting){reference-type="ref" reference="Sec: Problem Setting"} introduces the distributionally robust model including simple functions, together with motivation and illustrative examples. Subsequently, Section [3](#Sec: DRO_indicator_functions){reference-type="ref" reference="Sec: DRO_indicator_functions"} presents a new semi-infinite inner approximation of the robust counterpart, along with a suitable discretization. The result is a novel finite-dimensional mixed-integer positive semidefinite optimization model. The main contribution consists in showing that its feasible solutions are also feasible for the original robust DRO model. # Problem Setting and Notation {#Sec: Problem Setting} We stick to the notation from [@Dienstbier2023a] and summarize the main modelling aspects here for completeness of our exposition. ## DRO Constraints Containing Simple Functions The DRO constraints considered in the present article are defined by functions $v(x,t)$ that consist of multivariate *simple functions*, i.e., $$v(x,t)=\sum_{i=1}^k x_i \mathbbm{1}_{X_i}(t), \text{ where } \mathbbm{1}_{X_i}(t)\coloneqq \begin{cases} 1 & \text{ if } t\in X_i\\ 0 & \text{ otherwise.} \end{cases}$$ The functions of type $\mathbbm{1}_{X_i}$ are denoted as *indicator functions* as they indicate whether $t\in X_i$ holds or not. Considering functions $v$ as above in [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"} leads to $$\mathbbm{E}_{\mathbbm{P}}(v(x,t)) = \mathbbm{E}_{\mathbbm{P}}\left(\sum_{i=1}^k x_i\mathbbm{1}_{X_i}(t)\right) = \sum_{i=1}^k x_i \mathbbm{P}(X_i)$$ and consequently the following formulation of [\[Eq: DRO_constraint\]](#Eq: DRO_constraint){reference-type="eqref" reference="Eq: DRO_constraint"}: $$\label{Eq: DRO_constraint_Prob} b \leq \min_{\mathbbm{P}\in \mathcal{P}} \sum_{i=1}^k x_i \mathbbm{P}(X_i).$$ The decisions may either influence the heights $x_i$ of the indicator functions or determine the underlying sets $X_i$. In the remainder of this paper, we investigate both situations separately to ease the presentation. However, Theorem [Theorem 3](#Thm: MIP_multidim){reference-type="ref" reference="Thm: MIP_multidim"} can be extended to incorporate both cases simultaneously. **Case 1:** Suppose that the sets $X_i\subseteq \mathbb{R}^m$ are given and consider the $x_i$ as decision variables. [\[Prob: Case1\]]{#Prob: Case1 label="Prob: Case1"} $$\begin{aligned} \max_{x\in P}\ & c(x) \\ \text{s.t.}\ & b \leq \min_{\mathbbm{P}\in \mathcal{P}} \sum_{i=1}^k x_i \mathbbm{P}(X_i),\label{Constr: Case1} \end{aligned}$$ where, for ease of exposition, $P\subseteq \mathbb{R}^k$ denotes a set of additional convex constraints and $c:\mathbb{R}^k\rightarrow \mathbb{R}$ denotes a concave function. We demonstrate the generality of [\[Prob: Case1\]](#Prob: Case1){reference-type="eqref" reference="Prob: Case1"} by an academic example on the mean-variance model from portfolio optimization, see Example 3 in [@Sengupta1985a]: To this end, suppose one aims to minimize the risk of a portfolio. Moreover, one only has $k$ risky assets $A_i$ available. Let these assets provide a revenue $r_i$ in case of an event $X_i$ and $0$ otherwise, i.e.
$A_i=r_i \mathbbm{1}_{X_i}$ and let the $A_i$ be independently and identically distributed with probability measure $\mathbbm{P}\in \mathcal{P}$, where $\mathcal{P}$ denotes a pre-defined ambiguity set as described in Section [1](#Sec:introduction){reference-type="ref" reference="Sec:introduction"}. Then, the mean-variance model reads: $$\min_x x^\top \left( \varepsilon_\Sigma\Sigma \right)x: \min_{\mathbbm{P}\in \mathcal{P}} \mathbbm{E}_{\mathbbm{P}}\left(\sum_{i=1}^k x_iA_i\right) \geq w, \sum_{i=1}^k x_i=1, x\geq 0,$$ which for i.i.d. assets $A_i$ is equivalent to $$-\max_x -\sum_{i=1}^k \varepsilon_\Sigma\sigma_i x_i^2:\ \min_{\mathbbm{P}\in \mathcal{P}} \sum_{i=1}^k x_ir_i\mathbbm{P}(X_i) \geq w, \sum_{i=1}^k x_i=1, x\geq 0.$$ This is indeed a special case of [\[Prob: Case1\]](#Prob: Case1){reference-type="eqref" reference="Prob: Case1"} since nonnegative $\varepsilon_{\Sigma},\sigma_i,x_i$ lead to a concave objective function and $P=\left\{x\in \mathbb{R}^k: \sum_{i=1}^k x_i=1, x\geq 0\right\}$ denotes a convex set, to which, e.g., the methods from [@Wiesemann2014a] can be applied. Thus, although addressing Case 1 as well, in the present article we focus on the following case, where we consider the sets $X_i$ as decision variables. **Case 2:** Suppose the coefficients $x_i$ are given parameters and the sets $X_i=[x_i^-,x_i^+]\subseteq \mathbb{R}^m$ are hypercubes. Consider the boundaries of these hypercubes as decision variables. In addition, we assume w.l.o.g. that $X_i\subseteq T$ for well-posedness of $\mathbbm{P}(X_i)$ and consider: [\[Prob: Case2\]]{#Prob: Case2 label="Prob: Case2"} $$\begin{aligned} \max_{((x^-)^\top,(x^+)^\top)\in P}\ & \sum_{i=1}^k\sum_{j=1}^m \left( c^-_{ij}x^-_{ij}+c^+_{ij}x^+_{ij} \right) \\ \text{s.t.}\ & b \leq \min_{\mathbbm{P}\in \mathcal{P}} \sum_{i=1}^k x_i \mathbbm{P}([x_i^-,x_i^+]),\label{Constr: Case2} \end{aligned}$$ where $P\subseteq \mathbb{R}^{2mk}$ denotes a polytope. Note that Case 2 appears to be more challenging than Case 1, as the function $$v(x^-,x^+,t)\coloneqq \sum_{i=1}^k x_i\mathbbm{1}_{[x_i^-,x_i^+]}(t)$$ is not only non-convex in $t$ but also in $((x_i^-)^\top,(x_i^+)^\top)$. Despite this mathematical challenge, this case already covers interesting applications in chemical separation processes, as illustrated in [@Dienstbier2023a]. Let us now introduce essential notation and concepts. We refer to [@Barvinok2002a] and [@Shapiro2000a] for more information. The main challenges in Problems [\[Prob: Case1\]](#Prob: Case1){reference-type="eqref" reference="Prob: Case1"} and [\[Prob: Case2\]](#Prob: Case2){reference-type="eqref" reference="Prob: Case2"} arise from the DRO constraints [\[Constr: Case1\]](#Constr: Case1){reference-type="eqref" reference="Constr: Case1"} and [\[Constr: Case2\]](#Constr: Case2){reference-type="eqref" reference="Constr: Case2"}, since these constraints cannot be formulated with the canonical Euclidean inner product. Consequently, standard reformulation arguments from robust optimization, such as replacing the inner adversarial optimization problem by the feasible region of its dual and solving the resulting model as a standard finite-dimensional convex problem, do not apply.
However, the following inner product, illustrated in Section III.3.2 in [@Barvinok2002a], allows a similar reformulation of [\[Constr: Case1\]](#Constr: Case1){reference-type="eqref" reference="Constr: Case1"} and [\[Constr: Case2\]](#Constr: Case2){reference-type="eqref" reference="Constr: Case2"}: Let $\mathbbm{P}$ denote a probability measure on the compact domain $T$ that is defined by a probability density $\rho_{}({t})$, i.e. $d\mathbbm{P}= \rho_{}({t})dt.$ According to the Riesz--Markov--Kakutani representation theorem, $\mathbbm{P}$ is unique, i.e. it is the only measure that satisfies $I(f)=\int f d\mathbbm{P}$ for the linear functional $I:\mathcal{C}(T)\rightarrow \mathbb{R}$ defined by $I(f)\coloneqq\int_T f(t)\rho_{}({t})dt.$ The corresponding inner product $$\langle f, \mathbbm{P}\rangle \coloneqq \int_T f d\mathbbm{P}$$ then constitutes a *duality*, i.e. a non-degenerate inner product. Moreover, this duality is more generally defined on *signed Radon measures*, denoted by $\mathcal{M}(T)$. In particular, we denote the measure over which we minimize by $\mathbbm{P}_{}$ instead of $\mathbbm{P}$ for the remainder of this article, thereby indicating that we generally refer to a signed Radon measure. Thus, the above product $\langle \cdot , \cdot \rangle: \mathcal{C}(T)\times \mathcal{M}(T)\rightarrow \mathbb{R}$ enables us to connect $\mathbbm{P}_{}=\mathbbm{P}$ and the function $\sum_{i=1}^k x_i\mathbbm{1}_{X_i}$ and subsequently to reformulate [\[Eq: DRO_constraint_Prob\]](#Eq: DRO_constraint_Prob){reference-type="eqref" reference="Eq: DRO_constraint_Prob"}: [\[Prob: DRO_with_undefined_ambiguity_set\]]{#Prob: DRO_with_undefined_ambiguity_set label="Prob: DRO_with_undefined_ambiguity_set"} $$\begin{aligned} b \le \min~& \langle \sum_{i=1}^k x_i\mathbbm{1}_{X_i}(t),\mathbbm{P}_{}\rangle & \\ \text{s.t.}~& \mathbbm{P}_{} \in \mathcal{M}(T)_{\ge 0},\label{Constr: DRO_with_undefined_ambiguity_set_1}\\ &\langle 1, \mathbbm{P}_{} \rangle \ge 1, \\ &\langle -1, \mathbbm{P}_{} \rangle \ge -1, \label{Constr: DRO_with_undefined_ambiguity_set_2}\\ & \mathbbm{P}_{} \in \mathcal{P}', \label{Constr: DRO_with_undefined_ambiguity_set_3} \end{aligned}$$ where $\mathcal{M}(T)_{\geq 0}$ denotes the cone of nonnegative Radon measures. Furthermore, Constraints [\[Constr: DRO_with_undefined_ambiguity_set_1\]](#Constr: DRO_with_undefined_ambiguity_set_1){reference-type="eqref" reference="Constr: DRO_with_undefined_ambiguity_set_1"} -- [\[Constr: DRO_with_undefined_ambiguity_set_3\]](#Constr: DRO_with_undefined_ambiguity_set_3){reference-type="eqref" reference="Constr: DRO_with_undefined_ambiguity_set_3"} require $\mathbbm{P}_{}$ to be a probability measure contained in the set $\mathcal{P}'$, i.e. $\mathbbm{P}\in \mathcal{P}$. ## Strengthening DRO Models by Moment Control and Confidence Sets One of the major challenges in distributional robustness consists in choosing the constraints in $\mathcal{P}'$ in such a way that, on the one hand, [\[Prob: DRO_with_undefined_ambiguity_set\]](#Prob: DRO_with_undefined_ambiguity_set){reference-type="eqref" reference="Prob: DRO_with_undefined_ambiguity_set"} remains algorithmically tractable while, on the other hand, the ambiguity set is large enough to protect the solutions $x$ (in Case 1) and $x^-,x^+$ (in Case 2) against all *realistic* uncertainties. Moreover, one aims to avoid including unrealistic uncertainties, as those render the decisions $x$ and $x^-,x^+$ too conservative. 
Within our setting, it is also possible to add additional information on the uncertain probability distributions, in particular in case bounds on the first moments can be imposed. This leads to additional constraints that can be added to [\[Prob: DRO_with_undefined_ambiguity_set\]](#Prob: DRO_with_undefined_ambiguity_set){reference-type="eqref" reference="Prob: DRO_with_undefined_ambiguity_set"} in order to specify [\[Constr: DRO_with_undefined_ambiguity_set_3\]](#Constr: DRO_with_undefined_ambiguity_set_3){reference-type="eqref" reference="Constr: DRO_with_undefined_ambiguity_set_3"}, while maintaining algorithmic tractability. First, we aim at bounding the *first moment*, i.e. the expectation $\mathbbm{E}_{\mathbbm{P}_{}}(t)$, of $\mathbbm{P}_{}$. The authors in [@Parys2015a] and other sources assume perfect knowledge about the first moment, whereas the authors of [@Delage2010a] only assume that the first moment is contained in an ellipsoid. In this article, we follow the latter modeling and assume that an estimation of the correct expectation $\mu_{}$ and covariance matrix $\Sigma$ is known. Moreover, we assume, that the ellipsoidal uncertainty set containing $\mathbbm{E}_{\mathbbm{P}_{}}(t)$ is shaped by $\mu_{}$, $\Sigma$ and a third parameter $\varepsilon_{\mu_{}}>0$, that determines its size. The ellipsoidal uncertainty set is then given by $$\varepsilon_{\mu_{}}-(\mathbbm{E}_{\mathbbm{P}_{}}(t)-\mu_{})^\top \Sigma (\mathbbm{E}_{\mathbbm{P}_{}}(t)-\mu_{})\geq 0, \Sigma\succeq 0.$$ In order to reformulate the above constraint by means of an inner product $\langle \cdot , \mathbbm{P}_{}\rangle$, we apply Schur's complement and obtain the following equivalent SDP constraint, which fits the setting in [\[Prob: DRO_with_undefined_ambiguity_set\]](#Prob: DRO_with_undefined_ambiguity_set){reference-type="eqref" reference="Prob: DRO_with_undefined_ambiguity_set"}: $$\label{Eq: Sec2_first_moment} \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix}, \mathbbm{P}_{}\right\rangle \succeq 0.$$ Similarly, one may assume that the underlying uncertain probability measure is given by a monomodal density function, see e.g. [@Parys2015a]. Computationally, this assumption has the advantage, that, if $\mathcal{P}$ contains monomodal distributions with fixed first and second moments, [\[Prob: DRO_with_undefined_ambiguity_set\]](#Prob: DRO_with_undefined_ambiguity_set){reference-type="eqref" reference="Prob: DRO_with_undefined_ambiguity_set"} can be reformulated as an SDP, one of the main results in [@Parys2015a]. However, the corresponding SDP is not easy to incorporate into either [\[Prob: Case1\]](#Prob: Case1){reference-type="eqref" reference="Prob: Case1"} or [\[Prob: Case2\]](#Prob: Case2){reference-type="eqref" reference="Prob: Case2"} as it generally leads to bilinear terms and thereby intractable counterparts for both [\[Prob: Case1\]](#Prob: Case1){reference-type="eqref" reference="Prob: Case1"} and [\[Prob: Case2\]](#Prob: Case2){reference-type="eqref" reference="Prob: Case2"}. In particular, [@Rahimian2019a] state, that \"with the current state of literature, monomodality cannot be modeled in a tractable manner\". To circumvent this obstacle, we exploit the fact that monomodal distributions tend to have a relatively small variance. 
Thus, similar again to [@Delage2010a], in addition to the bounds on the first moment we impose an upper bound on the *second moment* as follows $$\label{Eq: Sec2_second_moment} \langle -(t-\mu_{})(t-\mu_{})^\top ,\mathbbm{P}_{}\rangle \succeq -\varepsilon_\Sigma\Sigma$$ or, equivalently, $\mathbbm{E}_{\mathbbm{P}_{}}\left((t-\mu_{})(t-\mu_{})^\top\right)\preceq \varepsilon_\Sigma \Sigma$, i.e. the second moment centered at the nominal mean $\mu_{}$ is bounded. Finally, we add *confidence set* constraints, see e.g. [@Wiesemann2014a], where we restrict the probability of certain subsets $T_i\subseteq T$, i.e., $$\label{Eq: Sec2_confidence_sets} \langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}(t), \mathbbm{P}_{} \rangle \ge \varepsilon_i \text{ for every } i\in I.$$ Note that these constraints give us a lot of modeling power, as we can model $\mathbbm{P}_{}(T_i)\geq \varepsilon_i$ with $\varepsilon_i>0$ and $\mathbbm{P}_{}(T_i)\leq -\varepsilon_i$ with $\varepsilon_i<0$. In particular, the normalization constraints [\[Constr: DRO_with_undefined_ambiguity_set_1\]](#Constr: DRO_with_undefined_ambiguity_set_1){reference-type="eqref" reference="Constr: DRO_with_undefined_ambiguity_set_1"} and [\[Constr: DRO_with_undefined_ambiguity_set_2\]](#Constr: DRO_with_undefined_ambiguity_set_2){reference-type="eqref" reference="Constr: DRO_with_undefined_ambiguity_set_2"} fall in this framework and can be modeled by setting $T_i=T$ and $\varepsilon_i=\pm 1$. ## Relation to the Literature In the existing literature, distributionally robust constraints are often encoded via the expectation $\mathbbm{E}_{\mathbbm{P}}(v(x,t))$. In the present paper, this is the expectation of a non-convex, in our case piecewise-constant, function $v$ in $t\sim \mathbbm{P}$, namely $\mathbbm{E}_{\mathbbm{P}}(v(x,t)) = \sum_{i=1}^k x_i \mathbbm{P}(X_i)$. Dropping the convexity assumption poses a stark contrast to the results in [@Wiesemann2014a] and [@Delage2010a], where the underlying function $v(x,t)$ has to be both convex and piecewise-affine in $x$ and $t$, see Condition (C3) in [@Wiesemann2014a] and Assumption 2 in [@Delage2010a]. However, [@Wiesemann2014a] and [@Xu2017a] present exceptions to these assumptions for specific cases, namely a very low number $|I|$ of confidence sets, see Observation 1ff in the electronic compendium of [@Wiesemann2014a], or even $|I|=0$ ([@Xu2017a]). As we consider indicator functions $\mathbbm{1}_{X_i}(t)$, which generally do not satisfy any of those assumptions, we attempt to extend the existing literature to non-convex functions $v$. Moreover, in contrast to [@Dienstbier2023a], we allow $T$ to be multivariate and consider simple functions $\sum_{i=1}^k x_i\mathbbm{1}_{[x_i^-,x_i^+]}(t)$ instead of either sole indicator functions with $k=1$ or simple functions with the simplifying assumption that the components $t_i$ are independent. This increased generality is achieved at the cost of a reduced approximation accuracy. Lastly, we briefly mention the differences between our approach and discrepancy-based DRO models, which require an estimator $\hat{\rho}$ for the true probability distribution and restrict $\mathcal{P}$ based on a given metric, e.g. the Wasserstein metric. Here, given an estimated $\hat{\rho}$, these ambiguity sets consist of all probability distributions that originate from $\hat{\rho}$ by transferring at most a given probability mass. We refer to the excellent review [@Rahimian2019a] for further details. 
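For intuition, the following Python sketch checks whether a finitely supported candidate measure satisfies the first-moment constraint [\[Eq: Sec2_first_moment\]](#Eq: Sec2_first_moment){reference-type="eqref" reference="Eq: Sec2_first_moment"}, the second-moment bound [\[Eq: Sec2_second_moment\]](#Eq: Sec2_second_moment){reference-type="eqref" reference="Eq: Sec2_second_moment"} and the confidence-set constraints [\[Eq: Sec2_confidence_sets\]](#Eq: Sec2_confidence_sets){reference-type="eqref" reference="Eq: Sec2_confidence_sets"}. It is only a numerical illustration with hypothetical data; the ambiguity set also contains non-discrete measures, which such a check does not cover.

```python
import numpy as np

def in_ambiguity_set(atoms, weights, mu, Sigma, eps_mu, eps_Sigma, conf_sets):
    """Check the moment and confidence-set constraints for the discrete measure
    P = sum_j weights[j] * delta_{atoms[j]}.  conf_sets is a list of
    (membership_function, eps_i) pairs describing the sets T_i."""
    atoms = np.asarray(atoms, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mean = weights @ atoms                                    # E_P[t]
    # first moment: Schur-complement (SDP) form of the ellipsoid constraint
    d = (mean - mu).reshape(-1, 1)
    M = np.block([[Sigma, d], [d.T, np.array([[eps_mu]])]])
    first_ok = np.linalg.eigvalsh(M).min() >= -1e-9
    # second moment about the nominal mean mu: E_P[(t-mu)(t-mu)^T] <= eps_Sigma * Sigma
    centered = atoms - mu
    second = np.einsum('j,ja,jb->ab', weights, centered, centered)
    second_ok = np.linalg.eigvalsh(eps_Sigma * Sigma - second).min() >= -1e-9
    # confidence sets: sign(eps_i) * P(T_i) >= eps_i for every i
    conf_ok = all(np.sign(e) * sum(w for t, w in zip(atoms, weights) if member(t)) >= e
                  for member, e in conf_sets)
    return first_ok and second_ok and conf_ok

# hypothetical data in dimension m = 2
mu, Sigma = np.array([0.5, 0.5]), 0.05 * np.eye(2)
conf_sets = [(lambda t: True, 1.0), (lambda t: True, -1.0)]   # normalization: P(T) = 1
print(in_ambiguity_set([[0.45, 0.5], [0.55, 0.5]], [0.5, 0.5],
                       mu, Sigma, eps_mu=0.1, eps_Sigma=2.0, conf_sets=conf_sets))
```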
# Distributionally robust constraints dependent on simple functions {#Sec: DRO_indicator_functions} For both, Cases 1 and 2 from Section [2](#Sec: Problem Setting){reference-type="ref" reference="Sec: Problem Setting"}, we consider the DRO constraint [\[Prob: DRO_with_undefined_ambiguity_set\]](#Prob: DRO_with_undefined_ambiguity_set){reference-type="eqref" reference="Prob: DRO_with_undefined_ambiguity_set"} where $\mathcal{P}$ is defined by [\[Eq: Sec2_first_moment\]](#Eq: Sec2_first_moment){reference-type="eqref" reference="Eq: Sec2_first_moment"}, [\[Eq: Sec2_second_moment\]](#Eq: Sec2_second_moment){reference-type="eqref" reference="Eq: Sec2_second_moment"} and [\[Eq: Sec2_confidence_sets\]](#Eq: Sec2_confidence_sets){reference-type="eqref" reference="Eq: Sec2_confidence_sets"}. To this end, let again $b\in \mathbb{R}$, $T\subseteq \mathbb{R}^m$ be a compact set, and $I\subseteq \mathbb{N}$ denote a finite index set. Next we define the considered ambiguity set. We assume a 'typical', i.e., nominal distribution with mean $\mu_{} \in \mathbb{R}^m$ and covariance matrix $\Sigma \in \mathbb{R}^{m\times m}$ is given, for example from expert knowledge or by estimation from given data. In formulas, we consider [\[Prob: Primal_Purity_Constraint_indicator\]]{#Prob: Primal_Purity_Constraint_indicator label="Prob: Primal_Purity_Constraint_indicator"} $$\begin{aligned} b \leq \min_{\mathbbm{P}_{}}~& \langle \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^c,\mathbbm{P}_{}\rangle && \label{Constr: Objective_Primal_indicator}\\ \text{s.t.}~& \mathbbm{P}_{} \in \mathcal{M}(T)_{\ge 0}, \\ & \langle \begin{bmatrix} \Sigma & t-\mu\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix}, \mathbbm{P}_{}\rangle \succeq 0, \label{Constr: First_Moment_indicator} \\ &\langle -(t-\mu_{})(t-\mu_{})^\top ,\mathbbm{P}_{}\rangle \succeq -\varepsilon_\Sigma\Sigma, && \label{Constr: Second_Moment_indicator}\\ &\langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c(t), \mathbbm{P}_{} \rangle \ge \varepsilon_i && i\in I \label{Constr: Primal_indicator_confidence_set}, \end{aligned}$$ where a choice of $T_1=T, \varepsilon_1=1$ and $T_2=T, \varepsilon_2=-1$ implies that $\mathbbm{P}_{}(T)=1$, i.e. $\mathbbm{P}_{}$ is a probability measure on $T$. In the following, we aim at deriving an algorithmically tractable reformulation of this set of constraints. We note that in order to dualize [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}, we consider continuous approximators $x_i \mathbbm{1}_{X_i}^c, \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c$ of the indicator functions $x_i \mathbbm{1}_{X_i}, \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}$. The existence of approximators that are arbitrarily close to the indicator functions is given by the seminal Lemma of Urysohn, see e.g. [@Munkres2000a]. In particular, we choose $\mathbbm{1}^c_{X_i} \geq \mathbbm{1}_{X_i}$, an upper approximator whenever $x_i\geq 0$ and a lower approximator whenever $x_i<0$. The opposite approximators are chosen for $\mathbbm{1}_{T_i}$, i.e., we choose $\mathbbm{1}_{T_i}^c\leq \mathbbm{1}_{T_i}$ if $\varepsilon_i>0$ and $\mathbbm{1}_{T_i}^c\geq \mathbbm{1}_{T_i}$ whenever $\varepsilon_i<0$. 
This establishes the following key property $$\label{Eq: Urysohn_approx} x_i\mathbbm{1}^c_{X_i} \geq x_i\mathbbm{1}_{X_i}\text{ and } \text{sign}(\varepsilon_i)\mathbbm{1}^c_{T_i} \leq \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}.$$ In the following, we will define necessary ingredients for being able to reformulate such a DRO constraint by dualizing [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}. Subsequently, a tractable and high-quality inner approximation of the resulting constraint will be obtained. We first employ duality theory using an adjoint operator: **Remark 1**. Let $\mathcal{S}^m$ denote the set of symmetric $m$ by $m$ matrices. It might not be immediately clear whether an adjoint operator with respect to the primal operator $\mathcal{A} : \mathcal{M}(T) \rightarrow \mathcal{S}^{m+1}\times \mathcal{S}^m \times \mathbb{R}^I$ of [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} exists. However, it is constructed in a quite straightforward manner: First, we observe that for the inner products containing matrices $M\in \mathcal{S}^k$, we have $$\langle \langle M, \mathbbm{P}_{} \rangle , Y \rangle_F = \langle \langle M, Y\rangle_F , \mathbbm{P}_{} \rangle \text{ for arbitrary }\mathbbm{P}_{}\in \mathcal{M}(T),Y\in \mathcal{S}^k,$$ where, $\langle \cdot,\cdot\rangle_F: \mathcal{S}^k\times \mathcal{S}^k\rightarrow \mathbb{R}$ denotes the Frobenius inner product. In particular, for $k\in \{m,m+1\}$, this includes the matrices $$M\in \left\{\begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix}, -(t-\mu_{})(t-\mu_{})^\top \right\}.$$ For the inner products containing only the entries $\text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}$ of $\mathcal{A}$, we have $$\langle \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}, \mathbbm{P}_{} \rangle y = \langle \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i} y , \mathbbm{P}_{} \rangle \text{ for every }\mathbbm{P}_{}\in \mathcal{M}(T), y\in \mathbb{R}.$$ Hence, we have constructed an adjoint operator $\mathcal{B}: \mathcal{S}^{m+1}\times \mathcal{S}^m\times \mathbb{R}^I\rightarrow \mathcal{C}(T)$ to $\mathcal{A}$, which is defined by $$\left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix}, Y_1\right\rangle + \langle -(t-\mu_{})(t-\mu_{})^\top, Y_2\rangle + \sum_{i\in I}\text{sign}(\varepsilon_i) \mathbbm{1}_{T_i} y_i.$$ Moreover, $\mathcal{B}$ is unique due to Riesz' representation theorem, see e.g. [@Brezis2010a]. 
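The continuous approximators $\mathbbm{1}^c$ are a proof device and never have to be constructed by the algorithm. Purely for illustration, the following sketch gives one possible choice of upper and lower approximators of a box indicator (a piecewise-linear ramp of hypothetical width `delta`), which realizes the key property [\[Eq: Urysohn_approx\]](#Eq: Urysohn_approx){reference-type="eqref" reference="Eq: Urysohn_approx"}: the upper version is used for $x_i\geq 0$, the lower one for $x_i<0$, and the opposite choice for the confidence sets.

```python
import numpy as np

def upper_box_approx(t, lo, hi, delta):
    # continuous function with 1^c >= 1_{[lo,hi]}: equal to 1 on the box,
    # decreasing linearly to 0 within sup-distance delta of the box
    dist = np.maximum(np.maximum(lo - t, t - hi), 0.0).max()
    return float(np.clip(1.0 - dist / delta, 0.0, 1.0))

def lower_box_approx(t, lo, hi, delta):
    # continuous function with 1^c <= 1_{[lo,hi]}: equal to 1 on the box shrunken
    # by delta and equal to 0 outside the original box (requires hi - lo > 2*delta)
    return upper_box_approx(t, lo + delta, hi - delta, delta)

lo, hi, delta = np.zeros(2), np.ones(2), 0.05
for t in [np.array([0.5, 0.5]), np.array([1.02, 0.5]), np.array([0.99, 0.5])]:
    print(upper_box_approx(t, lo, hi, delta), lower_box_approx(t, lo, hi, delta))
```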
With this adjoint operator, we derive the following dual program for [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}: [\[Prob: Dual_Purity_Constraint_indicator\]]{#Prob: Dual_Purity_Constraint_indicator label="Prob: Dual_Purity_Constraint_indicator"} $$\begin{aligned} b \le \max_{y_i,Y_1,Y_2}~& \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \label{Obj: Objective_Dual_indicator} \\ \text{s.t.}~& \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^c - \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix} , Y_1 \right\rangle -\langle -(t-\mu_{})(t-\mu_{})^\top , Y_2\rangle \notag \\ & -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c y_i \in \mathcal{C}(T)_{\ge 0}, \label{Constr: Dual_indicator}\\ & Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I, \end{aligned}$$ where $\mathcal{C}(T)_{\ge 0}$ denotes the cone of the continuous, nonnegative functions on $T$. As usual in reformulation approaches in robust optimization, we aim to apply strong duality. Indeed, next we establish strong duality between [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} and [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} that can be seen as a direct corollary of Corollary 3.0.2 in [@Shapiro2000a] or as a direct consequence of the dualization theory illustrated, e.g. in [@Barvinok2002a]. **Theorem 1**. *Suppose that $\mathbbm{P}_{} \sim \mathcal{N}(\mu_{},\Sigma)$ is both, a strictly positive Radon measure and feasible for [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}. Then, the duality gap of the problems [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} and [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} is zero.* *Proof.* We observe that $\mathbbm{P}_{} \sim \mathcal{N}(\mu_{},\Sigma)$ is feasible for [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}, i.e. [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} is \"consistent\" in the definition of Shapiro. Furthermore, $T$ is compact and the functions in the objective as well as in the constraints of [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} are continuous. 
Due to the isometry of the metric spaces $(\mathcal{S}^n, \langle\cdot , \cdot \rangle_F)$ and $(\mathbb{R}^{\frac{n(n+1)}{2}}, \langle \cdot , \cdot \rangle)$, we further reformulate [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} as a conic program with $\mathcal{A}\mathbbm{P}_{} -b\in K$, where the cone $K\subseteq \mathbb{R}^{n(n+1)+|I|}$. Hence, strong duality follows from Corollary 3.1 in [@Shapiro2000a]. ◻ ## Computation of feasible solutions by a discretized robust counterpart {#Subsec: discretized_DRO_reformulation_indicator} In this section, we derive an algorithmically tractable model for the robust counterpart [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"}. A standard approach to finding an approximate solution to this semiinfinite (SIP) feasibility problem is to sample the semiinfinite constraint [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} and to solve the resulting finite-dimensional SDP that only contains the sampled constraints. However, a feasible solution to a finite subset of the constraints in [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} does not necessarily satisfy [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} itself. This means that the obtained solution may not satisfy [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"}; thus, by solving Case 1 or 2 with respect to this relaxation of [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"}, we might obtain a solution that is not necessarily protected against the uncertainties in the ambiguity set $\mathcal{P}$, i.e. a solution that is not robust and does not necessarily satisfy [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}. In this work, however, we aim for a constraint that is robust with respect to $\mathcal{P}$, since for many applications, e.g. medical ones, a guaranteed protection is important. To this end, we propose a discretization scheme that provides an inner approximation of [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"}. This means that every solution of the discretization of [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} will indeed satisfy [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} and thereby guarantee that the corresponding decision variables $x_i$ for Case 1 and $x_i^-,x_i^+$ for Case 2 are feasible for [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"}. 
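The difference between plain sampling and the inner approximation developed below can be seen in a one-dimensional toy example (hypothetical data, not taken from the model above): every sampled constraint holds, the semiinfinite constraint is violated between the sample points, and a Lipschitz-based margin of the kind used below refuses to issue the false certificate.

```python
import numpy as np

# toy semiinfinite constraint "f(t) >= 0 for all t in [0,1]" with known Lipschitz constant
f = lambda t: 0.2 * np.cos(20 * np.pi * t)        # Lipschitz constant L = 0.2 * 20 * pi
L, delta = 0.2 * 20 * np.pi, 0.1
samples = np.arange(0.0, 1.0 + 1e-12, delta)

naive_ok = all(f(t) >= 0 for t in samples)                      # True: every sample satisfies f >= 0
true_ok = min(f(t) for t in np.linspace(0, 1, 10001)) >= 0      # False: f dips to -0.2 between samples
safe_ok = all(f(t) - L * delta >= 0 for t in samples)           # False: the Lipschitz margin rejects
print(naive_ok, true_ok, safe_ok)                               # True False False
```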
This robust formulation will make use of Lipschitz continuity of the non-indicator functions in [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"}, i.e., the Lipschitz continuity of the polynomial $$p_Y(t)\coloneqq \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix} , Y_1 \right\rangle +\langle (t-\mu_{})(t-\mu_{})^\top , Y_2\rangle.$$ In fact, the polynomial $p$ is Lipschitz continuous since $T$ is compact and its coefficients $Y_1,Y_2$ are bounded: **Lemma 1**. *Given that $\mu_{} \in T_i$ for every $i\in I$ with $\varepsilon_i>0$ and $\mu_{} \notin T_i$ if $\varepsilon_i<0$. Then, the polynomial $p_Y(t)$ is Lipschitz continuous in $t$ with a uniform Lipschitz constant $L$.* *Proof.* We show that for every feasible solution of [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} the entries $Y_1,Y_2$ are bounded. To this end, w.l.o.g. let $\varepsilon_1=-1, \varepsilon_2=1, \varepsilon_i >0$ for every $i\in I \setminus\{1\}$ since every constraint $$\langle \text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}^c,\tilde{\mu} \rangle \geq \varepsilon_i \text{ with } \varepsilon_i < 0$$ can equivalently be expressed by $$\langle \text{sign}(1+\varepsilon_i) \mathbbm{1}_{T_i^C}^c,\tilde{\mu} \rangle \geq 1+\varepsilon_i.$$ In order to prove this equivalence, we add $1$ on both sides and consider the complement $T_i^C$ of $T_i$. Now, we first prove that $\text{Tr}(Y_1)<\infty$: Let $t=\mu_{}$ and $v_i$ being the eigenvectors and $\lambda_i$ the eigenvalues of $Y_1$ then [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} implies: $$\label{Eq: help3} \begin{aligned} \lambda_{\min}\left(\begin{bmatrix} \Sigma & 0 \\ 0 & \varepsilon_{\mu_{}} \end{bmatrix}\right) \text{Tr}(Y_1) & = \sum_{i=1}^n \lambda_i \lambda_{\min}\left(\begin{bmatrix} \Sigma & 0 \\ 0 & \varepsilon_{\mu_{}} \end{bmatrix}\right) \overset{*}{\leq} \sum_{i=1}^n \lambda_i v_i^\top \begin{bmatrix} \Sigma & 0 \\ 0 & \varepsilon_{\mu_{}} \end{bmatrix} v_i\\ & \leq \left\langle \begin{bmatrix} \Sigma & 0 \\ 0 & \varepsilon_{\mu_{}} \end{bmatrix}, Y_1 \right\rangle \overset{{\eqref{Constr: Dual_indicator}}}{\leq} \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\mu_{}) - \sum_{i\in I} \text{sign}(\varepsilon_i) y_i, \end{aligned}$$ where (\*) holds due to the Rayleigh-Ritz principle, see e.g. [@Brezis2010a] for further details. We show that [\[Eq: help3\]](#Eq: help3){reference-type="eqref" reference="Eq: help3"} is bounded from above for every feasible solution to [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} by considering the following LP: $$\label{Eq: help4} \min_{y\in \mathbb{R}^I_{\geq 0}} \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\mu_{})y_i:\ \sum_{i\in I}\varepsilon_i y_i \geq 0,$$ whose constraint can be derived from [\[Obj: Objective_Dual_indicator\]](#Obj: Objective_Dual_indicator){reference-type="eqref" reference="Obj: Objective_Dual_indicator"} and the fact that both $\Sigma$ and $Y_2$ are positive semidefinite. Moreover, this is equivalent to $$\min_{y\in \mathbb{R}^I_{\geq 0}} -y_1+\sum_{i\in I \setminus \{1\}} y_i:\ \sum_{i\in I}\varepsilon_i y_i \geq 0.$$ due to $\mu_{}\in T_i$ for every $i\in I$. 
Furthermore, it is bounded from below by $0$ since its dual LP: $$\begin{aligned} \max_{z\geq 0} 0 z : - z &\leq -1,\\ \varepsilon_i z &\leq 1 &&\text{ for every } i\in I\setminus\{1\}, \end{aligned}$$ is feasible for $z=1$ since w.l.o.g. $|\varepsilon_i|\leq 1$. Consequently, this provides a lower bound of $0$ to [\[Eq: help4\]](#Eq: help4){reference-type="eqref" reference="Eq: help4"} and thereby an upper bound to $\text{Tr}(Y_1)$ via [\[Eq: help3\]](#Eq: help3){reference-type="eqref" reference="Eq: help3"}. Let $\lambda_{\min}(\Sigma)> 0$ denote the minimal eigenvalue of $\Sigma$ and $\lambda_i$ the eigenvalues of $Y_2$ with respect to eigenvector $v_i$. Then, on the one hand, we have $$\label{Eq: help1} \begin{aligned} \varepsilon_\Sigma \lambda_{\min}(\Sigma) \text{Tr}(Y_2) & = \varepsilon_\Sigma \sum_{i=1}^n \lambda_i \lambda_{\min}(\Sigma) \overset{(*)}{\leq} \varepsilon_\Sigma \sum_{i=1}^n \lambda_i v_i^\top \Sigma v_i = \varepsilon_\Sigma \left\langle \Sigma , \sum_{i=1}^n \lambda_i v_iv_i^\top \right\rangle\\ & = \varepsilon_\Sigma \langle \Sigma , Y_2 \rangle \overset{\eqref{Obj: Objective_Dual_indicator}}{\leq} \sum_{i\in I} \varepsilon_i y_i \end{aligned}$$ where (\*) holds because of the Rayleigh-Ritz principle. In order to show that [\[Eq: help1\]](#Eq: help1){reference-type="eqref" reference="Eq: help1"} is bounded, we show that the following linear program is bounded from above: $$\label{Eq: LP_help} \max_{y\in \mathbb{R}^I_{\geq 0}} \varepsilon^\top y:\ \tau^\top y \leq \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\mu_{}),$$ where $\tau_i = \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\mu_{}).$ Note that $\tau\neq 0$ due to $\mu_{} \in T_2$. Similar as before, the constraint in [\[Eq: LP_help\]](#Eq: LP_help){reference-type="eqref" reference="Eq: LP_help"} can be derived from [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} with $t=\mu_{}$ in the following way: $$\label{Eq: help2} \begin{aligned} \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\mu_{}) & \geq \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\mu_{}) - \langle \begin{bmatrix} \Sigma & 0 \\ 0 & \varepsilon_{\mu_{}} \end{bmatrix}, Y_1 \rangle \geq \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\mu_{})y_i \end{aligned}$$ Then, weak duality implies $$\eqref{Eq: LP_help} \leq \min_{z\in \mathbb{R}_{\geq 0}} z \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\mu_{}):\ z \tau -\varepsilon \geq 0.$$ Observe that $z=1$ is a feasible solution since $$\tau_i=\text{sign}(\varepsilon_i)\mathbbm{1}_{T_i}(\mu_{})=1>\varepsilon_i$$ for every $i\in I\setminus\{1\}$ and $\tau_1=-1=\varepsilon_1$. Thus, we obtain an upper bound for [\[Eq: LP_help\]](#Eq: LP_help){reference-type="eqref" reference="Eq: LP_help"} and thereby for $\text{Tr}(Y_2)$. Finally, we proved that the coefficients of $p(t)$ are bounded and the claim follows. ◻ Observe, that the assumptions on the confidence sets $T_i$, i.e., that either it is $\mu_{} \in T_i$ whenever $\varepsilon_i>0$ or $\mu_{} \notin T_i$ if $\varepsilon_i<0$, limits the choice of probability measures $\mathbbm{P}_{}\in \mathcal{P}$. We note, that this limitation is rather mild as we are only limited in our modeling power by not being able to force deviation from $\mu_{}$. However, most real-world distributions are concentrated around their respective expectation to some degree. 
Consequently, since the requirement above still allows us to force the probability mass of $\mathbbm{P}_{}\in \mathcal{P}$ towards the estimated expected value $\mu_{}$, it seems not very restrictive in practice. In fact, discrepancy based approaches such as Wasserstein balls yield a similar structure. If confidence sets are used, restrictions in modeling are fairly common, also for example in the so-called nesting condition in [@Wiesemann2014a] and the references therein. In addition, there are relevant settings where the assumption from the above lemma can be weakened. Indeed, in [@Dienstbier2023a] it is shown that for one-dimensional $T$, no such assumption is needed at all. In the following Lemma, we establish an inner approximation of the DRO constraint [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"}. To this end, we denote by $T_N=\delta_N \mathbbm{Z}^m\cap T$ the standard lattice with stepsize $\delta_N \in \mathbb{R}_{>0}$, that serves as a discretization of $T$. Moreover, we define a *level set* $L_h$ by $$L_h\coloneqq \left\{t\in T:\ \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(t)-\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(t) =h\right\},$$ where $h$ denotes the *height* of the specific level set. The motivation to consider these level sets is, that on the boundaries of $L_h$ the indicator functions $\mathbbm{1}_{X_i},\mathbbm{1}_{T_i}$ abruptly change and any potential Lipschitz constant $L$ for the continuous approximations $\mathbbm{1}_{X_i}^c,\mathbbm{1}_{T_i}^c$ of $\mathbbm{1}_{X_i},\mathbbm{1}_{T_i}$ tends to infinity, the closer the continuous approximation is. Consequently, an approximation of the left-hand side of [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} solely based on Lipschitz continuity may become quite poor. Thus, we address the indicator functions separately. To this end, let us first denote $$\begin{aligned} f^{c}(t)\coloneqq & \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^{c}(t) - \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_{\mu_{}} \end{bmatrix} , Y_1 \right\rangle +\langle (t-\mu_{})(t-\mu_{})^\top , Y_2\rangle\\ &\quad -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^{c}(t) y_i\end{aligned}$$ for fixed $Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I$ and observe the equivalence $$\eqref{Constr: Dual_indicator} \Leftrightarrow f^c(t)\geq 0 \text{ for every } t \in T.$$ Let us further observe, that in most applications, we can assume that $X_i\cap T_N\neq \emptyset$ and $T_i\cap T_N\neq \emptyset$, whenever $\delta_N$ is sufficiently small, e.g. if every $X_i$ and $T_i$ contains open sets. In particular, we assume that $\delta_N$ is chosen small enough, such that for every $t\in L_h$, we have that there is a $\bar{t}\in T_N\cap L_h$ with $\|t-\bar{t}\|\leq \sqrt{m}\delta_N$. Since $T_N = \delta_N \mathbb{Z}^m\cap T$, this guarantees that for every $t\in L_h$, there is a nearby sample point also contained in $L_h$. Consequently, as seen in Lemma [Lemma 1](#Lemma: Lipschitz_continuity_of_p_indicator){reference-type="ref" reference="Lemma: Lipschitz_continuity_of_p_indicator"}, we can address the differences on $f^c$ evaluated on sample points $\bar{t}\in T_N$ compared to the nearby non-sample points $t\in T\setminus T_N$ by exploiting Lipschitz continuity on the polynomial part $p$ of $f^c$. 
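A short Python sketch of the lattice $T_N$ and of the level sets $L_h$ restricted to the lattice (for $T=[0,M]^m$ and hypothetical boxes and confidence sets) may help to fix ideas before the formal statement.

```python
import itertools
import numpy as np

def lattice(M, delta, m):
    # T_N = delta * Z^m intersected with T = [0, M]^m
    axis = np.arange(0.0, M + 1e-12, delta)
    return [np.array(t) for t in itertools.product(axis, repeat=m)]

def height(t, x, boxes, conf_sets):
    # h(t) = sum_i x_i 1_{X_i}(t) - sum_i sign(eps_i) 1_{T_i}(t)
    ind = lambda lo, hi: float(np.all(t >= lo) and np.all(t <= hi))
    return (sum(xi * ind(lo, hi) for xi, (lo, hi) in zip(x, boxes))
            - sum(np.sign(e) * ind(lo, hi) for lo, hi, e in conf_sets))

def level_sets(M, delta, m, x, boxes, conf_sets):
    # group the lattice points of T_N by the height of the level set they belong to
    levels = {}
    for t in lattice(M, delta, m):
        levels.setdefault(round(height(t, x, boxes, conf_sets), 12), []).append(t)
    return levels

# hypothetical data: one box and the two normalization confidence sets T_1 = T_2 = T
M, delta, m = 1.0, 0.25, 2
boxes, x = [(np.array([0.25, 0.25]), np.array([0.75, 0.75]))], [1.0]
conf_sets = [(np.zeros(m), np.full(m, M), 1.0), (np.zeros(m), np.full(m, M), -1.0)]
print({h: len(pts) for h, pts in level_sets(M, delta, m, x, boxes, conf_sets).items()})
```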
Finally, we observe that the union of all these level sets $\bigcup_{h} L_h=T$ is a finite, disjoint decomposition of $T$ and thus, we have addressed all potential deviations of $f^c$ between values on $T\setminus T_N$ and $T_N$. To make these arguments precise: **Lemma 2**. *Let $L>0$ be the Lipschitz constant of $p_Y$. Let further $\delta_N$ be sufficiently small, such that for every $t\in T$ with w.l.o.g. $t\in L_h$, there exists a $\bar{t}\in T_N\cap L_h$ with $\|t-\bar{t}\|\leq \delta_N\sqrt{m}$. Then, the finitely many constraints $$\label{Eq: Dual_indicator_discretized} f(\bar{t})-L\delta_N\sqrt{m} \geq 0 \text{ for every } \bar{t}\in T_N$$ imply the semiinfinite constraint $$f^c(t) \geq 0 \text{ for every } t\in T.$$* *Proof.* We first suppose w.l.o.g. that $t\in L_h$. Then, there exists a $\bar{t}\in L_h$ such that $\|t-\bar{t}\|\leq \delta_N\sqrt{m}$ and hence $$\begin{aligned} f^c(t)+L\delta_N\sqrt{m} & \geq f^c(t)+L\|t-\bar{t}\| \overset{(1)}{\geq} f^c(t) +|p(\bar{t})-p(t)|\\ & \overset{\eqref{Obj: Objective_Dual_indicator}}{\geq} \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^c(t) - \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) + p(\bar{t}) \\ & \overset{(2)}{\geq} \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(t) - \sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(t) + p(\bar{t}) = f(\bar{t})\, \end{aligned}$$ where (1) holds due to definition of $L$ and (2) holds due to [\[Eq: Urysohn_approx\]](#Eq: Urysohn_approx){reference-type="eqref" reference="Eq: Urysohn_approx"}. ◻ Note, that Lemma [Lemma 2](#Lemma:inner_approx){reference-type="ref" reference="Lemma:inner_approx"} provides a sufficient criterion for the SIP constraint [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"}. Thus, replacing [\[Constr: Dual_indicator\]](#Constr: Dual_indicator){reference-type="eqref" reference="Constr: Dual_indicator"} by [\[Eq: Dual_indicator_discretized\]](#Eq: Dual_indicator_discretized){reference-type="eqref" reference="Eq: Dual_indicator_discretized"} gives an inner approximation of [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"}. Therefore, the existence of $y,Y_1,Y_2$ satisfying [\[Eq: Dual_indicator_discretized\]](#Eq: Dual_indicator_discretized){reference-type="eqref" reference="Eq: Dual_indicator_discretized"} in addition to the remaining constraints of [\[Prob: Dual_Purity_Constraint_indicator\]](#Prob: Dual_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator"} guarantees that the DRO constraint [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} is satisfied. ## Tractable approximations for DRO with convex objective We note that [\[Prob: Primal_Purity_Constraint_indicator\]](#Prob: Primal_Purity_Constraint_indicator){reference-type="eqref" reference="Prob: Primal_Purity_Constraint_indicator"} is often considered as the (non-convex) DRO constraint embedded in an otherwise convex program, e.g. as illustrated by Case 1 and 2 in Section [2](#Sec: Problem Setting){reference-type="ref" reference="Sec: Problem Setting"}. 
Hence, instead of considering constant $x_i,X_i$, we investigate in the following paragraphs how the Lemma [Lemma 2](#Lemma:inner_approx){reference-type="ref" reference="Lemma:inner_approx"} approximation can be applied to Case 1, i.e. decision variables $x_i$ and Case 2, with decision variables $x_i^-,x_i^+$ that define the box $X_i=[x_i^-,x_i^+]$. For the sake of simplicity, we assume that the objective of DRO is linear. However, the results below hold analogously for other convex objective functions as well. For Case 1 let $x\in P \subseteq \mathbb{R}^k$ be a decision variable and consider: [\[Prob: primal_linear_x\_i\]]{#Prob: primal_linear_x_i label="Prob: primal_linear_x_i"} $$\begin{aligned} \max_{x\in P, Y_1,Y_2,y} & c^\top x \\ \text{ s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b, \label{Constr: primal_linear_x_i_objective_constraint1} \\ & \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^c(t) - \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_1 \end{bmatrix} , Y_1 \right\rangle \notag \\ & +\left\langle (t-\mu_{})(t-\mu_{})^\top , Y_2\right\rangle -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) y_i\geq 0, \qquad \forall t\in T \label{Constr: primal_linear_x_i_objective_constraint2}\\ & Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I. \end{aligned}$$ It turns out that computing lower bounds for [\[Prob: primal_linear_x\_i\]](#Prob: primal_linear_x_i){reference-type="eqref" reference="Prob: primal_linear_x_i"} is tractable: **Theorem 2**. *A solution to the following semidefinite problem yields a feasible solution to the semiinfinite problem [\[Prob: primal_linear_x\_i\]](#Prob: primal_linear_x_i){reference-type="eqref" reference="Prob: primal_linear_x_i"}.* *[\[Prob: discretized_indicator_linear_obj\]]{#Prob: discretized_indicator_linear_obj label="Prob: discretized_indicator_linear_obj"} $$\begin{aligned} \max_{x\in P} & c^\top x\\ \text{ s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b \\ & \sum_{i=1}^k x_i \mathbbm{1}_{X_i}(\bar{t}) - \left\langle \begin{bmatrix} \Sigma & \bar{t}-\mu_{}\\ (\bar{t}-\mu_{})^\top & \varepsilon_1 \end{bmatrix} , Y_1 \right\rangle +\langle (\bar{t}-\mu_{})(\bar{t}-\mu_{})^\top , Y_2\rangle \notag \\ & -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\bar{t}) y_i - L\delta_N\sqrt{m}\geq 0 \qquad\qquad\qquad\qquad\qquad \forall \bar{t}\in T_N, \label{Constr: discretized_purity_indicator_linear_obj}\\ & Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I. \end{aligned}$$* *Proof.* Given an arbitrary $x\in P$. Due to Lemma [Lemma 2](#Lemma:inner_approx){reference-type="ref" reference="Lemma:inner_approx"}, we observe that Constraint [\[Constr: discretized_purity_indicator_linear_obj\]](#Constr: discretized_purity_indicator_linear_obj){reference-type="eqref" reference="Constr: discretized_purity_indicator_linear_obj"} implies $f^c(t)\geq 0$ for every $t\in T$, i.e. [\[Constr: primal_linear_x\_i_objective_constraint2\]](#Constr: primal_linear_x_i_objective_constraint2){reference-type="eqref" reference="Constr: primal_linear_x_i_objective_constraint2"}. Hence, the claim follows. ◻ We note that the objective $\sum_{i=1}^k x_i \mathbbm{1}_{X_i}$ is linear and thus convex in the $x_i$. 
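A schematic cvxpy sketch of the resulting discretized model for Case 1 (the problem of Theorem 2) is given below. All problem data are hypothetical, $P$ is taken to be the probability simplex from the portfolio example, and the Lipschitz constant `L` is simply assumed to be available (in practice it has to be obtained from the bounds behind Lemma [Lemma 1](#Lemma: Lipschitz_continuity_of_p_indicator){reference-type="ref" reference="Lemma: Lipschitz_continuity_of_p_indicator"}).

```python
import itertools
import numpy as np
import cvxpy as cp

# ---- hypothetical illustration data ----
m, k = 2, 3                               # dimension of T and number of boxes X_i
M_T, delta = 1.0, 0.25                    # T = [0, M_T]^m, lattice step delta_N
mu, Sigma = np.full(m, 0.5), 0.1 * np.eye(m)
eps_mu, eps_Sigma, b, L = 0.05, 2.0, 0.3, 5.0     # L: assumed Lipschitz constant
c = np.array([1.0, 0.5, 0.2])
boxes = [(np.zeros(m), np.full(m, 0.6)),
         (np.full(m, 0.2), np.full(m, 0.9)),
         (np.full(m, 0.4), np.full(m, 1.0))]
conf_sets = [(np.zeros(m), np.full(m, M_T), 1.0),     # T_1 = T, eps_1 = 1
             (np.zeros(m), np.full(m, M_T), -1.0)]    # T_2 = T, eps_2 = -1

axis = np.arange(0.0, M_T + 1e-12, delta)
grid = [np.array(t) for t in itertools.product(axis, repeat=m)]
ind = lambda t, lo, hi: float(np.all(t >= lo) and np.all(t <= hi))

# ---- decision variables (Y1 lives in S^{m+1}, Y2 in S^m) ----
x = cp.Variable(k)
y = cp.Variable(len(conf_sets), nonneg=True)
Y1 = cp.Variable((m + 1, m + 1), PSD=True)
Y2 = cp.Variable((m, m), PSD=True)

cons = [cp.sum(x) == 1, x >= 0,                                    # P: probability simplex
        sum(e * y[i] for i, (_, _, e) in enumerate(conf_sets))
        - eps_Sigma * cp.trace(Sigma @ Y2) >= b]
for t in grid:                                                     # one constraint per lattice point
    d = (t - mu).reshape(-1, 1)
    M1 = np.block([[Sigma, d], [d.T, np.array([[eps_mu]])]])
    M2 = np.outer(t - mu, t - mu)
    cons.append(sum(x[i] * ind(t, lo, hi) for i, (lo, hi) in enumerate(boxes))
                - cp.trace(M1 @ Y1) + cp.trace(M2 @ Y2)
                - sum(np.sign(e) * ind(t, lo, hi) * y[i]
                      for i, (lo, hi, e) in enumerate(conf_sets))
                - L * delta * np.sqrt(m) >= 0)

prob = cp.Problem(cp.Maximize(c @ x), cons)
prob.solve()               # any SDP-capable solver, e.g. SCS or MOSEK
print(prob.status, x.value)
```

The number of scalar constraints grows with $|T_N|$, which is the computational price of this safe approximation.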
Thus, if the number of confidence sets $|I|$ is low, Problem [\[Prob: discretized_indicator_linear_obj\]](#Prob: discretized_indicator_linear_obj){reference-type="eqref" reference="Prob: discretized_indicator_linear_obj"} satisfies the (weakened) conditions needed for Theorem 1 in [@Wiesemann2014a] and can be exactly reformulated as a convex program by applying their methods, whereas the proposed method in this paper only provides a lower bound on [\[Prob: primal_linear_x\_i\]](#Prob: primal_linear_x_i){reference-type="eqref" reference="Prob: primal_linear_x_i"}. However, our approach can also be used for a large number of confidence sets. In addition, it does not depend on the convexity of $\sum_{i=1}^k x_i \mathbbm{1}^c_{X_i}$ and can also be used in non-convex settings. This can be seen by the following result for Case 2, where $T=[0,M]^m$ and $X_i=[x_i^-,x_i^+]$ are supposed to be hypercubes: [\[Prob: primal_linear_X\_i\]]{#Prob: primal_linear_X_i label="Prob: primal_linear_X_i"} $$\begin{aligned} \max & \sum_{i=1}^k (c^-_i)^\top x_i^- + (c^+_i)^\top x_i^+\\ \text{s.t.}~ & \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b, \label{Constr: primal_linear_X_i_objective_constraint1} \\ & \sum_{i=1}^k x_i \mathbbm{1}_{X_i}^c(t) - \left\langle \begin{bmatrix} \Sigma & t-\mu_{}\\ (t-\mu_{})^\top & \varepsilon_1 \end{bmatrix} , Y_1 \right\rangle \notag \\ & \qquad +\left\langle (t-\mu_{})(t-\mu_{})^\top , Y_2\right\rangle -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}^c(t) y_i\geq 0, && \forall t\in T \label{Constr: primal_linear_X_i_objective_constraint2}\\ & x_i^-,x_i^+\in P, Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0}, y \in \mathbb{R}_{\geq 0}^I. \end{aligned}$$ Note, that $\sum_{i=1}^k x_i \mathbbm{1}^c_{[x_i^-,x_i^+]}$ is non-convex in the variables $x_i^-,x_i^+\in \mathbb{R}^m$. In the following theorem, we model the indicator function $\mathbbm{1}_{[x_i^-,x_i^+]}:T_N\rightarrow \mathbb{R}$ by binary variables $\tilde{b}_{\bar{t}}^i$. Additionally, we ensure, that these variables properly model $\mathbbm{1}_{[x_i^-,x_i^+]}(\bar{t})$ by tracking the \"jumps\" from $0$ to $1$ at $x_{ij}^-$ in direction $j\in [m]$ by additional binary variables $\Delta_{\bar{t}}^{-,i,j}$ and the \"jumps\" form $1$ to $0$ at $x_{ij}^+$ in direction $j\in [m]$ by $\Delta_{\bar{t}}^{+,i,j}$ respectively. A similar modeling was given by Dienstbier et. al. in [@Dienstbier2020a] for an engineering application in the design of particulate products. **Theorem 3**. *Let $M_\delta\coloneqq \{0,\delta_N,\ldots, M\}$ the discretization of $[0,M]$, $T_0^j=\{\bar{t}\in T_N: \bar{t}_j=0\}\subseteq T_N$ a set of boundary points of $T_N$. 
Then, a solution to the following MISDP yields a feasible solution to [\[Prob: primal_linear_X\_i\]](#Prob: primal_linear_X_i){reference-type="eqref" reference="Prob: primal_linear_X_i"}.* *[\[Prob: Dual_Purity_Constraint_indicator_discretized\]]{#Prob: Dual_Purity_Constraint_indicator_discretized label="Prob: Dual_Purity_Constraint_indicator_discretized"} $$\begin{aligned} \max &\sum_{i=1}^k (c^-_i)^\top x_i^- + (c^+_i)^\top x_i^+\\ \text{s.t.}~& \sum_{i\in I} \varepsilon_i y_i -\varepsilon_\Sigma \langle \Sigma, Y_2 \rangle \geq b \label{Constr: discretized_dual_objective_indicator_geq0} \\ & \sum_{i=1}^k x_i\tilde{b}_{\bar{t}}^i - \left\langle \begin{bmatrix} \Sigma & \bar{t}-\mu_{}\\ (\bar{t}-\mu_{})^\top & \varepsilon_1 \end{bmatrix} , Y_1 \right\rangle\\ & \qquad +\langle (\bar{t}-\mu_{})(\bar{t}-\mu_{})^\top , Y_2\rangle \notag \\ & \qquad -\sum_{i\in I} \text{sign}(\varepsilon_i) \mathbbm{1}_{T_i}(\bar{t}) y_i -L\delta_N\sqrt{m}\geq 0 && \forall \bar{t}\in T_N, \label{Constr: discretized_purity_indicator}\\ & \tilde{b}_{\bar{t} + e_j\delta_N}^i -\tilde{b}_{\bar{t}}^i = \Delta_{\bar{t}}^{-,i,j}-\Delta_{\bar{t}}^{+,i,j} && \forall \bar{t}\in T_N, i\in [k], j\in [m], \label{Constr: jump_def}\\ & \sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} \Delta_{\bar{t}}^{-,i,j}+\Delta_{\bar{t}}^{+,i,j} \leq 2 && \forall i\in [k], j\in [m], t_0\in T_0^j, \label{Constr: sum_delta_bound}\\ & x_{ij}^-\geq \sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} (l+\delta_N) \Delta_{\bar{t}}^{-,i,j} && \forall i\in [k],j\in [m], t_0\in T_0^j, \label{Constr: a-_lower_bound}\\ & x_{ij}^+\leq M-\sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} (M-l) \Delta_{\bar{t}}^{+,i,j} && \forall i\in [k], j\in [m], t_0\in T_0^j,\label{Constr: a+_upper_bound}\\ & x_{ij}^+-x_{ij}^- \geq M \sum_{l\in M_\delta: \bar{t}=t_0+le_j} \Delta_{\bar{t}}^{+,i,j}\notag\\ & \quad\ -\hspace{-0.7cm}\sum_{l\in M_\delta:\ \bar{t}=t_0+le_j} \left( (M-l) \Delta_{\bar{t}}^{+,i,j} - (l+\delta_N) \Delta_{\bar{t}}^{-,i,j}\right) && \forall i\in [k], j\in [m], t_0\in T_0^j,\label{Constr: adiff_lower_bound}\\ & 0 \leq x_{ij}^+ -x_{ij}^-\leq \sum_{\bar{t}\in T_N} \tilde{b}_{\bar{t}}^i-1 && \forall i\in [k],\forall j\in [m],\label{Constr: address_empty_set}\\ & x_i^-,x_i^+\in P, y \in \mathbb{R}_{\geq 0}^I, Y_1 \in \mathcal{S}^n_{\succeq 0}, Y_2 \in \mathcal{S}^n_{\succeq 0},\\ & \Delta_{\bar{t}}^{-,i,j},\Delta_{\bar{t}}^{+,i,j},\tilde{b}_{\bar{t}}^i\in \{0,1\}, \end{aligned}$$* *where $\tilde{b}_{\bar{t}}^i\coloneqq 0$ for every $\bar{t}\notin T_N$.* We would like to point out, that instead of fixing $x_i$ in Theorem [Theorem 3](#Thm: MIP_multidim){reference-type="ref" reference="Thm: MIP_multidim"}, we could include $x_i$ as a bounded decision variable. This is due to the fact that for bounded $x_i$ the arising bilinear term $x_i\tilde{b}_{\bar{t}}^i$ in Constraint [\[Constr: discretized_purity_indicator\]](#Constr: discretized_purity_indicator){reference-type="eqref" reference="Constr: discretized_purity_indicator"} can be rewritten as a linear term with the help of additional big-M constraints. *Proof.* We consider a feasible solution $\Delta_{\bar{t}}^{-,i,j},\Delta_{\bar{t}}^{+,i,j},\tilde{b}_{\bar{t}}^i, x_i^-,x_i^+$ for [\[Prob: Dual_Purity_Constraint_indicator_discretized\]](#Prob: Dual_Purity_Constraint_indicator_discretized){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator_discretized"} and show that for every $i\in [k], \bar{t}\in T_N$ we have $\tilde{b}_{\bar{t}}^i=\mathbbm{1}_{[x_i^-,x_i^+]}(\bar{t})$. 
To this end, note that for every $i\in [k]$ there exists indeed an index $,\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$ due to [\[Constr: address_empty_set\]](#Constr: address_empty_set){reference-type="eqref" reference="Constr: address_empty_set"}. Now, given an arbitrary index $\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$, we first show that $\tilde{b}_{\bar{t}}^i=1$ implies $\mathbbm{1}_{[x_i^-,x_i^+]}(\bar{t})=1$, i.e., $\bar{t}\in [x_i^-,x_i^+]$: We first observe, that for every direction $j$, there exists a $t_0\in T_0^j$ and $\kappa_j\in \{0,\delta_N,2\delta_N,\ldots,M\}$ such that $$\bar{t} = t_0+\kappa_j e_j,$$ i.e., we consider the line in direction $j$ passing through $\bar{t}$ and consequently through $t_0$ as well. Then, we define $\kappa_j^{\max}$ as the index of the last element on this line with $\tilde{b}_t^i=1$, i.e., $$\kappa_j^{\max}\coloneqq \max \{ l\in \{0,\delta_N,2\delta_N,\ldots,M\}: \tilde{b}_{t_0+le_j}^i=1\}.$$ Thus, $\tilde{b}_{t_0+(\kappa_j^{\max}+\delta_N)e_j}^i=0$ and [\[Constr: jump_def\]](#Constr: jump_def){reference-type="eqref" reference="Constr: jump_def"} implies $\Delta_{t_0+\kappa_j^{\max}e_j}^{-,i,j}=0, \Delta_{t_0+\kappa_j^{\max}e_j}^{+,i,j}=1$. Moreover, [\[Constr: a+\_upper_bound\]](#Constr: a+_upper_bound){reference-type="eqref" reference="Constr: a+_upper_bound"} implies $$\label{Eq: a+_help} x_{ij}^+ \leq M-(M-\kappa_j^{\max})=\kappa_j^{\max}=\bar{t}_j + (\kappa_j^{\max}-\kappa_j),$$ where the latter equality originates from the definition of $\kappa_j$ above. Similarly, we define $$\kappa_j^{\min}\coloneqq \min \{l\in \{0,\delta_N,2\delta_N,\ldots,M\}: \tilde{b}_{t_0+le_j}^i=1\}.$$ Thus, $\tilde{b}_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^i=0$ and [\[Constr: jump_def\]](#Constr: jump_def){reference-type="eqref" reference="Constr: jump_def"} implies $\Delta_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^{-,i,j}=1, \Delta_{t_0+(\kappa_j^{\min}-\delta_N)e_j}^{+,i,j}=0$. Moreover, [\[Constr: a-\_lower_bound\]](#Constr: a-_lower_bound){reference-type="eqref" reference="Constr: a-_lower_bound"} implies $$\label{Eq: a-_help} x_{ij}^- \geq (\kappa_j^{\min}-\delta_N)+\delta_N=\kappa_j^{\min} = \bar{t}_j + \kappa_j^{\min}-\kappa_j.$$ However, due to [\[Constr: sum_delta_bound\]](#Constr: sum_delta_bound){reference-type="eqref" reference="Constr: sum_delta_bound"} we know that these are the only nonzero entries for $\Delta_{t_0+le_j}^{-,i,j},\Delta_{t_0+le_j}^{+,i,j}$. Thus due to [\[Constr: adiff_lower_bound\]](#Constr: adiff_lower_bound){reference-type="eqref" reference="Constr: adiff_lower_bound"}, we obtain $$x_{ij}^+-x_{ij}^- \geq M - (M-\kappa_j^{\max})-\kappa_j^{\min} = \kappa_j^{\max}-\kappa_j^{\min},$$ which implies equality in both [\[Eq: a+\_help\]](#Eq: a+_help){reference-type="eqref" reference="Eq: a+_help"} and [\[Eq: a-\_help\]](#Eq: a-_help){reference-type="eqref" reference="Eq: a-_help"} and thus $\bar{t}_j=\kappa_j\in [\kappa_j^{\min},\kappa_j^{\max}]=[x_{ij}^-, x_{ij}^+]$ for every index $\bar{t}\in T_N$ with $\tilde{b}_{\bar{t}}^i=1$.\ \ For the reverse implication, we need to show that $\bar{t}\in [x_i^-,x_i^+]$ implies $\tilde{b}_{\bar{t}}^i=1$. Due to [\[Constr: address_empty_set\]](#Constr: address_empty_set){reference-type="eqref" reference="Constr: address_empty_set"}, we obtain that $[x_i^-,x_i^+]\neq \emptyset$ implies the existence of a $\bar{t}$ with $\tilde{b}_{\bar{t}}^i=1$. In particular, the previous implication shows that $\bar{t}\in [x_i^-,x_i^+]$. 
Beginning with this $\bar{t}$, we prove the following claim for an arbitrary direction $j$: $$\label{Eq: claim_main_Thm} \tilde{b}_{\bar{t}}^i=1 \text{ implies } \tilde{b}_{\bar{t}+le_j}^i =1 \text{ for every } l: \bar{t}_j+l\in [x_{ij}^-, x_{ij}^+].$$ Let $\bar{t}=t_0+\kappa_je_j$ with $t_0\in T_0^j$ as above. Then, with the same definitions for $\kappa_j^{\min},\kappa_j^{\max}$, the arguments from the previous implication, that led to equality in [\[Eq: a+\_help\]](#Eq: a+_help){reference-type="eqref" reference="Eq: a+_help"} and [\[Eq: a-\_help\]](#Eq: a-_help){reference-type="eqref" reference="Eq: a-_help"} imply $\kappa_j^{\min}=x_{ij}^-$, $\kappa_j^{\max}=x_{ij}^+$. Moreover, the definition of $\kappa_j^{\min}, \kappa_j^{\max}$ leads to: $$1=\tilde{b}_{t_0+\kappa_j^{\min}e_j}^i=\tilde{b}_{t_0+(\kappa_j^{\min}+\delta_N)e_j}^i=\ldots =\tilde{b}_{t_0+\kappa_j^{\max}e_j}^i=1$$ with $(t_0+\kappa_j^{\min}e_j)_j=x_{ij}^-$ and $(t_0+\kappa_j^{\max}e_j)_j=x_{ij}^+$. Hence, our claim [\[Eq: claim_main_Thm\]](#Eq: claim_main_Thm){reference-type="eqref" reference="Eq: claim_main_Thm"} follows and as the direction $j$ was chosen arbitrarily, we obtain that $\mathbbm{1}_{[x_i^-,x_i^+]}(\bar{t})=1$ also implies $\tilde{b}_{\bar{t}}^i=1$. ◻ Theorem [Theorem 3](#Thm: MIP_multidim){reference-type="ref" reference="Thm: MIP_multidim"} yields sufficient criteria for robustness of the DRO constraint. This is a considerable advantage as to our knowledge no efficient alternative approach is readily available. Although binary SDP optimization is algorithmically tractable, solving [\[Prob: Dual_Purity_Constraint_indicator_discretized\]](#Prob: Dual_Purity_Constraint_indicator_discretized){reference-type="eqref" reference="Prob: Dual_Purity_Constraint_indicator_discretized"} may be computationally challenging for modern solvers, particularly for a large cardinality of $T_N$. For one-dimensional domains $T$ as considered in [@Dienstbier2023a] this challenge may be addressed as follows: Instead of bounding the slope of $p$ through its Lipschitz constant $L$, more elaborate bounds that strengthen Lemma [Lemma 2](#Lemma:inner_approx){reference-type="ref" reference="Lemma:inner_approx"} may reduce the number of necessary sample points for a good approximation of [\[Prob: primal_linear_X\_i\]](#Prob: primal_linear_X_i){reference-type="eqref" reference="Prob: primal_linear_X_i"}. Instead of a binary SDP, we obtain a binary MIP as an approximation of [\[Prob: primal_linear_X\_i\]](#Prob: primal_linear_X_i){reference-type="eqref" reference="Prob: primal_linear_X_i"} that can typically be solved much faster. # Conclusion {#Sec: Conclusion} In this paper, we present an extension of the novel approach in [@Dienstbier2023a] for distributionally robust optimization problems to cases, where multivariate simple functions are allowed. As simple functions can be included in the model, the presented approximation pushes the applicability of duality-based reformulations of distributional robustness significantly beyond convexity. Moreover, early convergence results from [@Dienstbier2023a] for univariate indicator functions indicate, that the presented approximation may converge to the actual optimum. A proof for this convergence as well as an extension from simple functions to more general functions is a desirable goal for future research. 
With respect to algorithmic tractability, we have shown that a suitably discretized safe approximation yields a mixed-integer positive-semidefinite optimization model, making it amenable to recent MISDP approaches as presented, e.g., in [@Gally2018a] or in the YALMIP framework [@Lofberg2004a]. Thus, the presented formulation is tractable using state-of-the-art solvers for MISDP. # Acknowledgments {#sec:acknowledgements .unnumbered} The paper is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 416229255 - SFB 1411.
arxiv_math
{ "id": "2310.05612", "title": "A Positive Semidefinite Safe Approximation of Multivariate\n Distributionally Robust Constraints Determined by Simple Functions", "authors": "J. Dienstbier, F. Liers, J. Rolfes", "categories": "math.OC math.PR", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We have proven the existence of new higher-genus maxfaces with an Enneper-type end. These maxfaces are not the companions of any existing minimal surfaces, and furthermore, the singularity set is located away from the ends. The method for discovering these maxfaces involves utilizing the calculus of Teichmüller space, as proposed by Weber and Wolf in the case of minimal surfaces. address: - Department of Mathematics, Shiv Nadar Institute of Eminence, Deemed to be University, Dadri 201314, Uttar Pradesh, India. - Department of Mathematics, Shiv Nadar Institute of Eminence, Deemed to be University, Dadri 201314, Uttar Pradesh, India. - Department of Mathematics, Shiv Nadar Institute of Eminence, Deemed to be University, Dadri 201314, Uttar Pradesh, India. author: - Rivu Bardhan - Indranil Biswas - Pradip Kumar bibliography: - ref.bib title: Higher genus maxfaces with Enneper end --- # **Introduction** Analogous to minimal surfaces in $\mathbb{R}^3$, maximal surfaces are immersions with zero mean curvature in the Lorentz Minkowski space $\mathbb E_1^3$. These surfaces arise as solutions to the variational problem of locally maximizing the area among space-like surfaces. Apart from both having zero mean curvature, maximal surfaces and minimal surfaces share further similarities; for example, both admit the Weierstrass-Enneper representation. However, striking differences emerge in the global study of these surfaces. While the only complete maximal surfaces are planes, there are many complete minimal surfaces apart from planes, such as the Catenoid, the Enneper surface, the Costa surfaces, etc. The global existence of maximal surfaces necessitates allowing natural singularities. We study a particular class of maximal surfaces called maxfaces, which were named by Umehara and Yamada in [@UMEHARA2006]. Maxfaces are those generalized maximal immersions that have only non-isolated singularities and whose limiting tangent vectors at the singularities contain a light-like vector. The Lorentzian Catenoid is a maxface of genus zero. Imaizumi and Kato [@imaizumi2008] classified genus-zero maxfaces. Kumar and Mohanty [@Saipradip2023] have shown the existence of genus-zero maxfaces with an arbitrary number of complete and simple ends. For higher genus surfaces, Kim and Yang [@Kim2006] have proved the existence of genus one maxfaces and also shown the existence of maximal maps for higher genus surfaces. These maximal maps and maxfaces have two ends, both of which are of catenoid type. Furthermore, Fujimori, Rossman, Umehara, Yang, and Yamada [@Fujimori2009] have constructed a family of complete maxfaces denoted by $f_k$, $k \,=\, 1,\, 2,\, 3,\,\cdots$, with two ends. The maxface $f_k$ has genus $k$ if $k$ is odd and genus $\frac{k}{2}$ if $k$ is even. All of them have two ends. In 2016, the authors of [@Fujimori2016highergenus] constructed maxfaces of any odd genus $g$ with two complete ends (if $g\,=\,1$, the ends are embedded) and maxfaces of genus $g\,=\,1$ with three complete embedded ends. Recently, Fujimori and Kaneda [@Fujimori_nonorientable2023] have demonstrated the existence of non-orientable higher genus maximal maps. In [@UMEHARA2006], Umehara and Yamada provided an example of a Lorentzian Chen--Gackstatter maxface, which is a genus one maxface with one Enneper end. Here we focus on higher genus ($g \,\geq\, 2$) maxfaces with an Enneper end and prove the existence of higher genus maxfaces with one Enneper end. 
For constructing maxfaces of higher genus, the obvious initial approach is to consider companion surfaces. Suppose for a minimal surface the Weierstrass data on a Riemann surface $M$ is $\lbrace g,\,dh\rbrace$. Then if the companion exists, the corresponding maxface would have the data $\lbrace- i g,\, i dh\rbrace$. However, this approach faces two main challenges: **1.** It is possible that the singularity set may extend towards the ends, preventing the creation of a complete maxface. For instance, consider the Weierstrass data for Jorge-Meek's minimal surface, given by: $$g(z) \,=\, z^n \ \ \ \text{ and } \ \ \ \omega \,=\, \frac{dz}{(z^{n+1}-1)^2}.$$ This data results in a complete minimal surface on $\mathbb{C}\cup\{\infty\}$ with punctures $\{1,\, \zeta, \,\zeta^2,\, \cdots ,\, \zeta^{n}\}$, where $\zeta\,=\, \exp\left({\frac{2 i \pi}{n+1}}\right)$. It is observed that the companion whose Gauss map is $g_0\,=\, -i g$ gives a maximal map, but it is not a complete maxface because the singular set $\{z\, \big\vert\,\,|g_0(z)| \,=\,1\}$ is not compact: since $|g_0(z)|\,=\,|z|^n$, this set is the unit circle with the punctures removed, and the punctures themselves lie on the unit circle, so the singular set accumulates at the ends. We refer to [@UMEHARA2006] for the definition of a complete maxface. **2.** The second issue concerns the solvability of the period problem in a direct way. For example, consider the data for the Costa surface: $$M \,=\, \left\{(z,\,w) \in \mathbb{C}\times\mathbb{C}\cup\{(\infty,\infty)\} \, \big\vert\, \, w^2\,=\,z(z^2-1) \right\} \setminus \{(0,\,0),\,(\pm 1,\,0)\}$$ with data $\left\{\frac{a}{w},\,\frac{2a}{z^2-1}dz\right\}$, $a\,\in\,\mathbb{R}^{+}\setminus\{0\}$. Suppose there exists a maxface for which the companion is the Costa surface. Then the data should be $\{M,\, \frac{-i a}{w},\, \frac{2i a}{z^2-1}dz \}$. Let $\tau$ be a one-sheeted loop around $(-1,\,0)$ that does not contain $(1,\,0)$. Then $\int_\tau(\frac{2i a}{z^2-1}dz)\,=\,2i a$, $a\,\neq\, 0$. Therefore, for the corresponding maxface, the period problem is not solved. Hence, there does not exist any maxface for which the corresponding companion is the Costa surface. Therefore, to construct a maxface, we usually set aside the companion and solve the period problem from scratch while keeping singularities in mind to ensure completeness. Using the method proposed by Weber and Wolf [@weber1998minimal], [@Weber1998TeichmullerTA], we prove the following: **Theorem 1**. *For any genus $p$, a complete maxface of genus $p$ with one Enneper-type end exists. Moreover,* 1. *this maxface has eight symmetries and* 2. *it is not the companion of the higher genus minimal surface as in [@weber1998minimal] with Enneper ends.* In the literature on the existence of maximal surfaces [@Fujimori2007; @Fujimori2009; @Kim2006], we have primarily encountered the method of solving the period problem described in some articles [@Meeks1989; @meeks2018triply; @hoffman1990embedded] co-authored by Meeks. This method effectively reduces the period problem for many curves to just one or two curves, making it a valuable tool for constructing many minimal surfaces. However, in the case of maximal surfaces, most of the existing literature has only managed to produce a maximum of two catenoid ends. In an ongoing work, the third author, with other collaborators [@CDMP], proved the existence of higher genus maxfaces with more than three catenoid or planar ends. But the case of Enneper ends was missing. Here we start from scratch, solve the period problem, and prove the existence of new maxfaces with one Enneper end. 
We employ a method initiated by Weber and Wolf, [@weber1998minimal], [@Weber1998TeichmullerTA], to solve the period problem. In Section 3, we recall the construction of minimal surfaces by zigzag, as presented in [@weber1998minimal]. Using similar methods, in Proposition [Proposition 1](#propositon:3.1){reference-type="ref" reference="propositon:3.1"}, we construct maxfaces with zigzag. We demonstrate that this maxface is not the companion of the minimal surface by zigzag; instead, it is symmetric to a companion of the same minimal surface. Our goal is to construct maxfaces distinct from the companions of minimal surfaces by zigzag that are not symmetric to their companions. To achieve this, in Section 4, we introduce a geometric shape known as "tweezers" (a special ortho-disk in the terminology of [@Weber1998TeichmullerTA]). In Theorem [Theorem 6](#thm:surfaceFromTweezer){reference-type="ref" reference="thm:surfaceFromTweezer"} we prove that for any genus $p \,\geq\, 2$, if there exists a reflexive tweezer, then there exists a maxface that is neither the companion of the minimal surface derived from the zigzag nor the companion of any symmetric minimal surface derived from the zigzag. Section 6 delves into a discussion regarding the existence of reflexive tweezers. # **Preliminaries** We recall the Weierstrass-Enneper representation for the minimal surfaces in $\mathbb R^3$ and maximal surfaces as well as maxfaces in the Lorentz Minkowski space $\mathbb E_1^3$. Here $\mathbb{E}_1^3$ is the vector space $\mathbb{R}^3$ with the bilinear form $dx^2+dy^2-dz^2$. ## Weierstrass-Enneper representation for minimal surfaces in $\mathbb R^3$ For an oriented minimal surface $X\,:\, M\,\longrightarrow\, {\mathbb R}^3$, there is a natural Riemann surface structure on $M$ together with a meromorphic function $g$ as well as a holomorphic one-form $dh$ on $M$ such that the poles and zeros of $g$ match with the zeros of $dh$ with the same order, and $$X(p) \,=\, \text{Re}\, X(x_0)+ \text{Re} \int^p_{x_0} \left(\frac{1}{2} (g^{-1} - g) , \frac{ i }{2} ( g^{-1} + g ), 1 \right)dh.$$ The triple $\lbrace M,\,g,\,dh \rbrace$ is referred to as the Weierstrass data for the minimal surface $(X,\, M)$. Furthermore, with such $g$ and $dh$ on a Riemann surface $M$, if the above integral is well defined, then it is a minimal immersion in ${\mathbb R}^3$. Now, let us move to the maximal immersions. ## Weierstrass-Enneper representation for the maximal map {#weirstrassdata} Let $M$ be a Riemann surface, $g$ a meromorphic function on $M$ and $dh$ a holomorphic 1-form on $M$, such that the following conditions are satisfied: 1. If $p\,\in\, M$ is a zero or pole of $g$ of order $m$, then $dh$ has a zero at $p$ of order at least $m$, 2. $|g|$ is not identically equal to 1 on $M$, 3. for all closed loops $\gamma$ on $M$, $$\label{PeriodProblem} \int_{\gamma}gdh+\overline{\int_{\gamma}g^{-1}dh}\,=\,0,\ \ \ Re\int_{\gamma}dh\,=\,0.$$ Then the map $X\,:\, M\, \longrightarrow\, \mathbb E_1^3$ defined by $$\label{maximal_map} X(p)\,=\,\text{Re}\int_{x_0}^p\left( \frac{1}{2}(g^{-1} + g), \frac{ i }{2}(g^{-1} - g), 1 \right) dh$$ is a maximal map with the base point $x_0\,\in\, M$ [@Estudillo1992], [@UMEHARA2006]. Furthermore, any maximal map can be expressed in this form. The triple $(M,\,g,\, dh)$ constitutes the Weierstrass data for the maximal map. 
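Before proceeding, here is a quick genus-zero illustration of this representation (a standard example, recorded only for orientation and not used in the construction below): take $M\,=\,\mathbb{C}$, $g(z)\,=\,z$ and $dh\,=\,z\,dz$. The three conditions above are immediate: the simple zero of $g$ at the origin is matched by the simple zero of $dh$, $|g|$ is not identically $1$, and the period conditions are vacuous since $M$ is simply connected. With base point $x_0=0$ the integral can be evaluated in closed form, $$X(z)\,=\,\operatorname{Re}\left(\frac{1}{2}\Big(z+\frac{z^3}{3}\Big),\ \frac{ i }{2}\Big(z-\frac{z^3}{3}\Big),\ \frac{z^2}{2}\right),$$ which is an Enneper-type maxface; its singular set is the unit circle $\{|z|\,=\,1\}$, which is compact and stays away from the end at $\infty$. This is precisely the behaviour one wants to reproduce in higher genus. 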
The pullback metric on the Riemann surface $M$ is given by $$ds^2\,=\,\frac{1}{4}\left(|g|^{-1}-|g|\right)^2|dh|^2,$$ and the singular locus of the maximal map with Weierstrass data $(M,\,g,\, dh)$ is the subset $\{p\,\in\, M\,\big\vert\,\, |g(p)|\,=\,1\, \text{ or }\, dh(p)\,=\,0\}$. In the context of **maxfaces**, it is known that $$\label{divisor}(g)_0-(g)_\infty\,=\, (dh)_0.$$ Therefore, the singular locus for the maxface is $\{p\,\in\, M\,\big\vert\,\, |g(p)|\,=\,1 \}$. The completeness of a maxface can be determined using the following criterion: **Fact 2**. *A maxface is complete (see [@UMEHARA2006 Corollary 4.8]) if and only if the following three hold:* 1. *$M$ is bi-holomorphic to $\overline{M}\setminus\{p_1,\, \cdots,\, p_n\}$, where $\overline{M}$ is a compact Riemann surface.* 2. *$|g|\,\neq\, 1$ at $p_j$, $1\,\leq\, j\,\leq\, n$.* 3. *The induced metric $ds^2$ is complete at the ends.* # **Maxface with Enneper ends from a symmetric zigzag** {#section:Minimal surface and maxface with enneper end} This section revisits the construction in [@weber1998minimal] of a minimal surface using the zigzag method. Subsequently, we will determine the maxface that corresponds to this minimal surface as its companion. Furthermore, we will construct a maxface using the same zigzag approach and establish its relationship with the companion maxface. ## Minimal surfaces by Weber and Wolf {#subsection:minimal surface weber & wolf} **Definition 3** ([@weber1998minimal]). *A zigzag $Z$ of genus $p$ is an open and properly embedded arc in ${\mathbb {C}}$ composed of alternating horizontal and vertical subarcs with angles of $\frac{\pi}{2}, \frac{3\pi}{2},\frac{\pi}{2},\ldots,\frac{3\pi}{2},\frac{\pi}{2}$ between consecutive sides, and having $2p + 1$ vertices (there are $2p + 2$ sides, including an initial infinite vertical side and a terminal infinite horizontal side). A symmetric zigzag of genus $p$ is a zigzag of genus $p$ which is symmetric about the line ${y \,=\, x}$.* ![Zigzag of genus $p$ (Picture Courtesy: Weber and Wolf [@weber1998minimal])](zigzag.png){#fig:zigzag} A symmetric zigzag of genus $p$ divides ${\mathbb {C}}$ into two regions, one of which we call $\Omega_{NE}$ and the other $\Omega_{SW}$ (see Figure [1](#fig:zigzag){reference-type="ref" reference="fig:zigzag"}). **Definition 4**. *A symmetric zigzag $Z$ is called reflexive if there is a conformal map $\phi\,:\, \Omega_{NE}(Z)\,\longrightarrow\, \Omega_{SW}(Z)$ which takes vertices to vertices.* ### Construction of minimal surface Let $Z$ be the genus $p$ reflexive zigzag separating ${\mathbb {C}}$ into two regions namely $\Omega_{NE}$ and $\Omega_{SW}$. We denote the vertices of $\Omega_{NE}$ by $\lbrace P_{j}\rbrace_{j=-p}^{p},\,P_{\infty}\,=\,\infty$. The vertices of $\Omega_{SW}$ are labeled in the reverse order: $Q_j\,=\,P_{-j}$ for $j\,\in\,\lbrace -p,\,-(p-1),\,\ldots,\,p\rbrace$ and $Q_{\infty}\,=\,\infty$. By doubling these two regions, we obtain two punctured spheres, denoted as $S_{NE}$ and $S_{SW}$, each with $2p+1$ marked points ${P_j}$ and ${Q_j}$, respectively, as well as a puncture at $P_{\infty}$ (respectively, $Q_{\infty}$). Further, we take the hyperelliptic cover $\mathcal{R}_{NE}$ (respectively, $\mathcal{R}_{SW}$) of $S_{NE}$ (respectively, $S_{SW}$) branched at $\{P_j\}$ and $P_{\infty}$ (respectively, at $\{Q_j\}$ and $Q_{\infty}$). Let $$\label{epi} \pi_{NE}\,:\, \mathcal{R}_{NE}\, \longrightarrow\, S_{NE}\ \ \,\text{ and }\ \ \, \pi_{SW}\,:\, \mathcal{R}_{SW}\, \longrightarrow\, S_{SW}$$ be the degree two branched covering maps. 
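To see that these covers have genus $p$, one can use a standard Riemann--Hurwitz count (a sketch, under the convention just stated that the cover is branched over the $2p+1$ finite vertices together with the point at infinity): writing $\overline{S}_{NE}\cong S^2$ for the sphere obtained from $S_{NE}$ by filling in the puncture, a degree two cover of $\overline{S}_{NE}$ branched over the $2p+2$ points $P_{-p},\ldots,P_{p},P_{\infty}$ has Euler characteristic $$2\cdot\chi(S^2)-(2p+2)\,=\,4-(2p+2)\,=\,2-2p,$$ i.e. it is a compact surface of genus $p$; removing the single point lying over $P_{\infty}$ yields the once-punctured genus $p$ surface $\mathcal{R}_{NE}$. The same count applies to $\mathcal{R}_{SW}$. 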
Since the zigzag $Z$ is reflexive, a there is a conformal map $\phi\,:\,\Omega_{NE}\,\longrightarrow\,\Omega_{SW}$ taking the vertices to vertices. This $\phi$ induces a conformal map $$\widetilde{\phi}\,:\,\mathcal{R}_{NE}\,\longrightarrow\,\mathcal{R}_{SW}$$ such that $\widetilde{\phi}(P_j)\,=\,Q_j$ for all $j$. The flat metric $|dz|$ on $\Omega_{NE}$ extends to a singular flat metric on $S_{NE}$ with cone angles at $P_j$. Its pullback through $\pi_{NE}$ (see [\[epi\]](#epi){reference-type="eqref" reference="epi"}) is a singular flat metric on $\mathcal{R}_{NE}$. The corresponding nonvanishing one form (see [@weber1998minimal]) is denoted by $\omega_{NE}$. Similarly, denote by $\omega_{SW}$ the nonvanishing one form induced on $\mathcal{R}_{SW}$ by the flat metric $|dz|$ on $\Omega_{SW}$. We define two flat forms $\alpha\,=\,\exp({\frac{-\pi i }{4}})\omega_{NE}$ and $\beta\,=\, \exp({\frac{-\pi i }{4}})\widetilde{\phi}^*\omega_{SW}$ on $\mathcal{R}_{NE}$. We choose $c$ and define $dh\,=\,c d\pi_{NE}$ such that $\alpha\beta\,=\,dh^2$ (see [@weber1998minimal] for the details). Finally, we define $g\,=\,\frac{\alpha}{dh}$ and consider the formal Weierstrass data as $\lbrace g,\,dh\rbrace$. Weber and Wolf have proven in [@weber1998minimal] that this pair gives the minimal surface by showing that $$\int_{B_j} gdh\,=\,\int_{B_j}\alpha\,=\,\overline{\int_{B_j}g^{-1}dh}\,=\,\overline{\int_{B_j}\beta}$$ for all $\lbrace B_{\pm j}\rbrace_{j=1}^{p}$ (defined in the proof of [@weber1998minimal Theorem 3.3]). We will explain this technique more explicitly in the next section, where we will modify it to generate a maximal surface. ## Maximal surface generated from zigzag {#Subsec:Maximalsurface tildeX} Using the terminology in Section [3.1](#subsection:minimal surface weber & wolf){reference-type="ref" reference="subsection:minimal surface weber & wolf"}, define two nonvanishing holomorphic one forms $\widetilde{\alpha}\,=\,\exp({\frac{\pi i }{4}})\omega_{NE}$ and $\widetilde{\beta}\,=\,\exp({\frac{\pi i }{4}})\widetilde{\phi}^*\omega_{SW}$. By definition, $B_j$ encloses exactly around the line segment $P_{j+1} P_{j}$ and $B_{-j}$ encloses exactly around the line segment $P_{-j-1}P_{-j}$. Since $\omega_{NE},\omega_{SW}$ are flat forms, $$\int_{B_j}\widetilde{\alpha}\,=\, \exp({\frac{\pi i }{4}})\int_{B_j}\omega_{NE} \,=\,2\exp({\frac{\pi i }{4}})\int_{P_{j+1}}^{P_{j}} dz\,=\,2\exp({\frac{\pi i }{4}})(P_{j}-P_{j+1}).$$ Similarly, for $\widetilde\beta$, $$\int_{B_j}\widetilde{\beta}\,=\,\exp\left(\frac{\pi i }{4}\right)\int_{\widetilde{\phi}(B_j)}\omega_{SW} \,=2\color{black}\,\exp\left(\frac{\pi i }{4}\right)\int_{Q_{j+1}}^{Q_{j}} dz$$ $$=\,2\exp\left(\frac{\pi i }{4}\right)(Q_{j}-Q_{j+1})\,=\,2\exp\left(\frac{\pi i }{4}\right)(P_{-j}-P_{-j-1}).$$ By symmetry of zigzag, we have $-\overline{\exp({\frac{\pi i }{4}})P_{-j}} \,=\, \exp({\frac{\pi i }{4}})P_{j}$. Therefore, $-\overline{\int_{B_j}\widetilde{\beta}}\,=\,\int_{B_j}\widetilde{\alpha}$ for all $B_j$. We define $\widetilde{g}\,=\,\frac{\widetilde{\alpha}}{\widetilde{dh}}$, where $\widetilde{dh}\,=\,\widetilde{c}d\pi_{NE}$ for some $\widetilde{c}$, such that $\widetilde{\alpha}\widetilde{\beta}\,=\,\widetilde{dh}^2$. Therefore $\widetilde{g}\widetilde{dh}\,=\,\widetilde{\alpha}$ and $\widetilde{g}^{-1}\widetilde{dh} \,=\,\widetilde{\beta}$. Since $\widetilde{dh}$ is an exact form, we deduce that $\int_{B_j}\widetilde{dh} \,=\,0$. 
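For completeness, we sketch the symmetry identity used above. The reflection across the line $y\,=\,x$ acts on $\mathbb{C}$ as $z\mapsto i\bar{z}$, and for a symmetric zigzag it reverses the labelling of the vertices, so that $P_{-j}\,=\,i\overline{P_j}$ (this labelling convention is assumed here). Consequently $\overline{P_{-j}}\,=\,-iP_j$, and $$-\overline{\exp\left({\frac{\pi i }{4}}\right)P_{-j}}\,=\,-\exp\left({-\frac{\pi i }{4}}\right)\overline{P_{-j}}\,=\, i \exp\left({-\frac{\pi i }{4}}\right)P_j\,=\,\exp\left({\frac{\pi i }{4}}\right)P_j.$$ Applying this to the differences $P_{j}-P_{j+1}$ gives $-\overline{\int_{B_j}\widetilde{\beta}}\,=\,\int_{B_j}\widetilde{\alpha}$, as claimed. 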
Further from discussions in the last paragraph it is deduced that $$\int_{B_j}\widetilde{g}\widetilde{dh}\,=\,-\overline{\int_{B_j}\widetilde{g}^{-1}\widetilde{dh}}$$ for all $B_j$. Further, since $\widetilde g$ has zero at $P_{\infty}$, the singularity set is compact. Thus we conclude the following: **Proposition 1**. *The triple $\lbrace\mathcal{R}_{NE}\setminus\lbrace P_{\infty}\rbrace,\,\widetilde{g},\,\widetilde{dh}\rbrace$ defines a maxface. This surface, which is denoted by $\widetilde{X}$, has at most eight symmetries. Moreover, the following hold:* 1. *The minimal surface $X$ is not the companion of $\widetilde{X}$.* 2. *The companion of the minimal surface $X$ exists; denote it by $X_C$. Then $X_C$ and $\widetilde{X}$ are symmetric.* *Proof.* It is easy to see that $\frac{\widetilde{\alpha}}{\alpha} \,=\, \frac{\widetilde{\beta}}{\beta} \,=\, i$. Therefore, $\frac{\widetilde{\alpha}\widetilde{\beta}}{\alpha\beta} \,=\, -1$. Thus, $$\left\{\frac{\widetilde{dh}^2}{dh^2} \,=\, -1\right\} \,\,\implies\,\, \{\widetilde{dh} \,=\, i dh\color{black}\}.$$ Therefore, $\widetilde{g} \,=\, \frac{ i \alpha}{ i dh} \,=\, \frac{\alpha}{dh} \,=\, g$. Thus, the corresponding maximal surface has data $(g,\, i dh)$. The minimal surface and the maximal surface share the same underlying Riemann surface, and they have at most eight conformal and anticonformal isometries. Companion of $X$, which is denoted by $X_C$, is given by the Weierstrass data as $g_1 \,=\, - i g$ and $dh_1 \,=\, i dh$, if period condition holds. We verify it here. Since $dh$ is exact, we have $\int_\gamma dh_1 \,=\, 0$ for all $\gamma \,\in\, \lbrace B_{\pm j} \rbrace_{j=1}^{p}$. As $\int_\gamma gdh \,=\, \overline{\int_\gamma g^{-1}dh}$, $$\int_{\gamma}g_1dh_1 \,= \,\int_\gamma gdh \,=\, \overline{\int_\gamma g^{-1}dh} \,=\, -\overline{\int_{\gamma}g_1^{-1}dh_1}.$$ Thus, $\lbrace g_1,\, dh_1 \rbrace$ is a Weierstrass data on $\mathcal{R}_{NE}$ for the maximal surface. We have $$\widetilde{X}(p) \,=\, Re\int_{0}^p((g+\frac{1}{g})\frac{ i dh}{2},\,\frac{ i }{2}(g- \frac{1}{g}){ i dh},\, i dh),$$ $$X_C(p) \,=\, Re\int_{0}^p((g-\frac{1}{g})\frac{dh}{2},\,\frac{ i }{2}(g+\frac{1}{g})dh,\, i dh).$$ It is straightforward to check that $\widetilde{X}(p) \,= \,\Psi(X_C(p))$ where $\Psi\,:\, \mathbb{E}^3_1\,\longrightarrow\, \mathbb{E}^3_1$ is the map defined by $(x,\,y,\,z) \,\mapsto\color{black}\, (y,\,-x,\,z)$. Therefore, these two maximal surfaces are symmetric. ◻ Similarly, one may attempt to construct minimal and maximal maps by zigzags symmetric about the line $y\,=\,-x$, but again, the surfaces turn out to be the same modulo symmetry. # **Tweezers and corresponding Zigzags of genus $p$** In this section, we will first revisit the concept of an \"orthodisk\" as described in [@Weber1998TeichmullerTA]. We will then focus on a specific class of orthodisks, which we refer to as \"tweezers\". Although tweezers were inspired by \"zigzags\", they differ significantly in many aspects. We will explore the relationships between the Riemann surface associated with tweezers and $\mathcal{R}_{NE}$ associated to zigzags. ## Conformal polygon and orthodisk [@Weber1998TeichmullerTA] On the upper half-plane, consider $n \,\geq\, 3$ marked points $\{t_j\}_{j=1}^n$ lying on the real line. The point $t_\infty = \infty$ is also treated as one of these marked points. The upper half-plane equipped with these marked points is referred to as a conformal polygon, while the marked points are referred to as its vertices. 
Two conformal polygons are called equivalent under conformal mapping if there exists a biholomorphism of the upper half-plane that preserves the set of vertices while fixing the point $\infty$. Let $a_j$, $j\, \in\, \{1,\, \cdots,\, n\}\cup\lbrace\color{black} \infty\rbrace\color{black}$, denote a set of odd integers such that $$\label{eqn:weight} a_\infty \,\,=\,\, -4 - \sum_{j} a_j.$$ A vertex $t_j$ is called "finite" if $a_j \,>\,-2$; otherwise, it is classified as "infinite". According to [\[eqn:weight\]](#eqn:weight){reference-type="ref" reference="eqn:weight"}, there is at least one finite vertex, which may be coincide with $t_\infty$. We have the corresponding Schwarz--Christoffel map $$F(z) \,:=\, \int_{ i }^z (t - t_1)^{\frac{a_1}{2}} \ldots (t - t_n)^{\frac{a_n}{2}} dt$$ defined on the complement of the infinite vertices in the upper half-plane $\mathbb{H} \cup \mathbb{R}$. Upper half plane, without the infinite vertices, equipped with the pullback, by $F$, of the flat metric on $\mathbb{C}$ is called an orthodisk. The integer $a_j$ corresponds to cone angle of $\frac{a_j + 2}{2}\pi$ at $t_j$. Negative angles bear significance, as a vertex with a negative angle $-\theta$ resides at infinity and represents the intersection of two lines. These lines also intersect at a finite point, forming a positive angle $+\theta$ at that intersection. An orthodisk is called symmetric if it has a reflectional symmetry which fixes two vertices. Two orthodisks that share the same underlying conformal polygon, but having possibly different exponents, are called **conformal orthodisks**. Consider two orthodisks, $X_1$ and $X_2$, each with distinct vertex data. These orthodisks are termed **conjugate** if there exists a straight line $l \,\subset\, \mathbb{C}$ such that the corresponding periods are symmetric with respect to $l$. Orthodisks $X_1$ and $X_2$ are called **reflexive** if they are both conformal and conjugate. Below, we will discuss a particular type of ortho-disk, which we call tweezers. ## Symmetric tweezers of genus $p$ {#definition:tweezer} A tweezer of genus $p$, with $p\,\geq\, 2$, is an open arc in ${\mathbb {C}}$ consisting of $2p+1$ vertices $\lbrace P_j\rbrace_{j=-p}^{p}$ and $2p+2$ edges such that 1. the interior angle of the region that is left when we go from $P_{p}$ to $P_{-p}$ alternate between $\frac{3\pi}{2},\frac{\pi}{2}$ except at $P_{\pm 1},\,P_0$, 2. the interior angle at $P_{\pm 1}$ is always $\frac{\pi}{2}$, and 3. The interior angle at $P_0$ is $\frac{\pi}{2}$ when $p$ is even and it is $\frac{3\pi}{2}$ when $p$ is odd. \[.48\] ![Symmetric tweezers and its corresponding zigzags genus two and genus three.](genus2zigtwe.png "fig:"){#figure 3:different tweezers g2 g3 width="\\linewidth"} \[.48\] ![Symmetric tweezers and its corresponding zigzags genus two and genus three.](genus3zigtwe.png "fig:"){#figure 3:different tweezers g2 g3 width="\\linewidth"} \[.48\] ![Symmetric tweezers and its corresponding zigzags genus 4 and genus 5](genus4zigtwe.png "fig:"){#figure 3:different tweezers g4 g5 width="\\linewidth"} \[.5\] ![Symmetric tweezers and its corresponding zigzags genus 4 and genus 5](genus5zigtwe.png "fig:"){#figure 3:different tweezers g4 g5 width="\\linewidth"} Moreover, at each vertex, there is an exterior angle assigned to it. If at $P_k$ the interior angle as above is $\theta$, then the corresponding exterior angle is $2\pi-\theta$. 
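The correspondence between exponents and angles can be seen from the local behaviour of the Schwarz--Christoffel integrand (a standard computation, sketched here): near a finite vertex $t_j$ the integrand is comparable to $(t-t_j)^{a_j/2}$, so $$F(z)-F(t_j)\,\sim\, c\,(z-t_j)^{\frac{a_j}{2}+1}\qquad (z\to t_j)$$ for some nonzero constant $c$; hence the straight angle $\pi$ of the boundary of the upper half-plane at $t_j$ is multiplied by $\frac{a_j}{2}+1$, producing the cone angle $\frac{a_j+2}{2}\pi$ stated above. In particular, $a_j=-1$ yields an interior angle of $\frac{\pi}{2}$ and $a_j=1$ yields $\frac{3\pi}{2}$, matching the interior angles of the tweezer just defined. 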
The notion of interior and exterior angles help us to recognize the image of ${\mathbb {H}}$ under the Schwarz--Christoffel map, as discussed below. For two fixed set of real numbers $t_{-p}\,<\, \ldots\,<\,t_p$, we will use the notation $\Omega_{Gdh}$ (following [@Weber1998TeichmullerTA]) to mean ${\mathbb {H}}\cup{\mathbb {R}}$ with the flat metric induced by the unique Schwarz--Christoffel map taking ${\mathbb {H}}$ to the interior of the polygon enclosed by the tweezer in the side of the interior angles such that the map sends ${\mathbb {R}}$ to the tweezer and the point $t_j$ to $P_j$. Similarly, for another set of real numbers $s_{-p}\,<\,\ldots\,<\,s_p$ we can find a unique Schwarz--Christoffel map, sending ${\mathbb {H}}$ to the interior polygon enclosed by the tweezer in the side of the exterior angle such that the image of $s_j$ is $P_{-j}$ which we call as $Q_j$. We call ${\mathbb {H}}\cup{\mathbb {R}}$ with the flat metric induced by this map as $\Omega_{G^{-1}dh}$. We write the Schwarz--Christoffel maps for the corresponding regions: $$\begin{aligned} z\,\,\longmapsto\,\, \int_{ i }^z \prod_{j=-p}^p (t-t_j)^{\frac{a_j}{2}}dt & \text{ on }\ \, \Omega_{Gdh},\\ z\,\,\longmapsto\,\, \int_{ i }^z \prod_{j=-p}^p (t-s_j)^{\frac{b_j}{2}}dt & \text{ on }\ \, \Omega_{G^{-1}dh},\end{aligned}$$ where $$\begin{aligned} a_j\,=\,\begin{cases} \pm 1 & \text{ alternatively when }\, j \,\neq\, \pm 1,\,0,\\ -1 & \text{ when }\, j\,=\,\pm 1,\\ 1 & \text{ when }\, j\,=\,0,\,p\, \text{ is odd},\\ -1 & \text{ when }\, j\,=\,0,\,p\, \text{ is even}, \end{cases}\\ b_j\,=\,\begin{cases} \mp 1 & \text{ alternatively when }\, j \,\neq\, \pm 1,\,0,\\ 1 & \text{ when }\, j\,=\,\pm 1,\\ -1 & \text{ when }\, j\,=\,0,\,p\, \text{ is odd},\\ 1 & \text{ when }\, j\,=0,\,p\, \text{ is even}. \end{cases}\end{aligned}$$ It is evident from our notation that the same convention used for naming vertices in the case of a zigzag has been applied to the vertices of the boundary of $\Omega_{Gdh}$ and $\Omega_{G^{-1}dh}$. If we denote the vertices of $\Omega_{Gdh}$ as $\lbrace P_j\rbrace_{j=-p}^p$ with $P_{\infty}=\infty$, then we have chosen to name the vertices in the reverse order for $\Omega_{G^{-1}dh}$, i.e., the vertices are labeled as $Q_j=P_{-j}$ and $Q_{\infty}=\infty$ From a tweezer, we can obtain a zigzag by simply mapping the points of the tweezers $P_j$, where $j\,\geq\, 1$, to $-P_j$, $P_0$ to $- i P_0$ when $p$ is even and $P_0$ to $i P_0$ when $p$ is odd. Reversing the same process, we can get the corresponding tweezer from a given zigzag. For the case $p=1$, the corresponding tweezer of genus $1$ zigzag is just the rotation of the zigzag. Therefore, we skip the discussion of genus $1$ tweezer except in some places where we need it. This zigzag may not necessarily be symmetric. We call a tweezer symmetric if it is symmetric with respect to the line $y\,=\,-x$. It is clear that a zigzag that corresponds to a symmetric tweezer is symmetric along the line $y\,=\,x$. **Definition 5**. *A symmetric tweezer is said to be reflexive if there is a conformal map $\phi\,:\, \Omega_{Gdh} \,\longrightarrow\, \Omega_{G^{-1}dh}$ taking vertices to vertices, i.e., $\phi(P_j)\,=\,Q_j$.* In the context of an orthodisk, a tweezer gives rise to two orthodisks, denoted as $\Omega_{Gdh}$ with vertex data $\lbrace t_j\rbrace_{j=-p}^{p}$ and $\Omega_{G^{-1}dh}$ with vertex data $\lbrace s_j\rbrace_{j=-p}^{p}$. Here, $t_j$ is mapped to $P_j$, and $s_j$ is mapped to $Q_j$. 
For a symmetric tweezer, these two orthodisks are conjugate. Reflexivity implies that the corresponding conformal polygons are conformal. ## Riemann surfaces from the tweezer Similar to zigzags as in Section [3.1](#subsection:minimal surface weber & wolf){reference-type="ref" reference="subsection:minimal surface weber & wolf"}, we can construct first the punctured spheres $S_{\Omega_{Gdh}},S_{\Omega_{G^{-1}dh}}$ with marked points $\lbrace P_j\rbrace_{j=-p}^{p}, \lbrace P^{\prime}_{j}= P_{-j}\rbrace_{j=-p}^{p}$ with puncture $\{P_{\infty}=\infty\}$, and finally the hyperelliptic Riemann surfaces $\mathcal{R}_{Gdh},\mathcal{R}_{G^{-1}dh}$ respectively, and if tweezer is reflexive, we get Riemann surfaces $\mathcal{R}_{Gdh},\mathcal{R}_{G^{-1}dh}$ conformal taking corresponding Weierstrass's point to Weierstrass's points. We denote these Riemann surfaces as $R_T$. For given tweezer, we can construct the zigzag as discussed in the earlier subsection. Let for the corresponding zigzag, the marked sphere be $S_Z$ with puncture $\{Q_{\infty}=\color{black}\infty\}$, and the corresponding hyperelliptic Riemann surfaces be $R_Z$. **Proposition 2**. *For a fixed genus $p \geq 2$, the Riemann surface $R_T$ is neither conformal nor anticonformal to $R_Z$ by a mapping that maps Weierstrass's points to Weierstrass's points. Here, $T$ and $Z$ denote the tweezer and its corresponding zigzag of genus $p$, respectively.* *Proof.* Suppose there is a conformal or anticonformal map $f$ between $R_T$ and $R_Z$ that takes corresponding Weierstrass points to each other, i.e., $f(W_T) = W_Z$ for Weierstrass points $W_T$ on $R_T$ and $W_Z$ on $R_Z$. This means the marked sphere $S_T$ maps to the marked sphere $S_Z$ with $f(P_j)=Q_j$, $f(P_{\infty})= Q_\infty$. If $f$ is conformal and it fixes at least $4$ points on the real line, which are the Weierstrass points when we consider the restriction of $f$ to the sphere $S_{T}$. In this case, therefore, $f$ would fix the entire real line, effectively taking tweezer $T$ to the corresponding zigzag $Z$. For the zigzag, the angle between two consecutive sides alternates, whereas for the tweezer when $p\geq 2$, this alternation will not always occur (as seen in the Figure [3](#figure 3:different tweezers g2 g3){reference-type="ref" reference="figure 3:different tweezers g2 g3"}, [5](#figure 3:different tweezers g4 g5){reference-type="ref" reference="figure 3:different tweezers g4 g5"}). If $p$ is even, taking a neighborhood $\mathcal{U}$ containing the arc $P_{-2}P_{-1}P_{0}P_1P_2$, we would find that the restricted map $f$ on $\mathcal{U}$ should map this arc to the corresponding arc of the zigzag $Q_{-2}Q_{-1}Q_0Q_{1}Q_2$. However, the interior angles at $P_{-1},P_{0},P_{1}$ are $\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2}$ respectively, while the corresponding angles at $Q_{-1},Q_{0},Q_{1}$ are $\frac{3\pi}{2},\frac{\pi}{2},\frac{3\pi}{2}$ respectively, which is impossible due to conformality. Thus, no such conformal $f$ exists when $p$ is even and $p\geq 2$. For odd $p\geq 2$, considering a neighborhood $\mathcal{V}$ containing the arc $P_{-3}P_{-2}P_{-1}P_0$, a similar comparison between the angles at $P_{-2},P_{-1}$ and their corresponding vertices in $Z$ leads to a contradiction. Hence, no such $f$ exists when $p$ is odd. If there is an anticonformal map with similar conditions, the arguments remain the same, concluding that no such anticonformal map exists for either even or odd $p\geq 2$. 
◻ In view of the above proposition, if we construct maxface and minimal surface, those will not be isometric to the one we get from the zigzags as in Section [\[section:Minimal surface and maxface with enneper end\]](#section:Minimal surface and maxface with enneper end){reference-type="ref" reference="section:Minimal surface and maxface with enneper end"}. In the next section, we will generate minimal and maximal surfaces using these tweezers, similar to the previous subsections. # **Minimal Surface and Maximal Surface with Tweezers $X_T$ and $\widetilde{X}_T$** Similar to the zigzag case as in the Section [3.1](#subsection:minimal surface weber & wolf){reference-type="ref" reference="subsection:minimal surface weber & wolf"}, we obtain non-vanishing holomorphic forms $\omega_{Gdh}$ and $\omega_{G^{-1}dh}$ on $\mathcal{R}_{Gdh}$ and $\mathcal{R}_{G^{-1}dh}$ respectively. If we start from the reflexive tweezer $T$, the conformal map $\phi\,:\, \Omega_{Gdh} \,\longrightarrow\, \Omega_{G^{-1}dh}$ can be extended to a map $\widetilde\phi\,:\, \mathcal{R}_{Gdh} \,\longrightarrow\, \mathcal{R}_{G^{-1}dh}$ --- between the corresponding hyperelliptic Riemann surfaces --- for which $\widetilde\phi(P_j) \,=\, Q_j$ for all $j$. Define the following four holomorphic forms on $\mathcal{R}_{Gdh}$: $$\begin{aligned} \alpha &\,=\, \exp(-{\frac{\pi i }{4}})\omega_{Gdh}, \\ \beta &\,=\, \exp(-{\frac{\pi i }{4}})\widetilde{\phi}^*(\omega_{G^{-1}dh}), \\ \widetilde\alpha &\,= \, \exp({\frac{\pi i }{4}})\omega_{Gdh}, \\ \widetilde\beta &\,= \, \exp({\frac{\pi i }{4}})\widetilde{\phi}^*(\omega_{G^{-1}dh}).\end{aligned}$$ Using the relationship between the cone angles and the order of the zeros of the $1$-forms, we find the divisors corresponding to these forms: $$\begin{aligned} (\alpha) = (\widetilde\alpha) &=\, \begin{cases} P_{\pm p}^2 P_{\pm (p-2)}^2 \ldots P^2_{\pm 3}P^2_{0}P_{\infty}^{-2} & \text{if } p\color{black} \text{ is odd}\\ P_{\pm p}^2 P_{\pm (p-2)}^2 \ldots P^2_{\pm 2}P_{\infty}^{-2} & \text{if } p\color{black} \text{ is even} \end{cases}\\ (\beta) = (\widetilde\beta) &=\, \begin{cases} P_{\pm (p-1)}^2 P_{\pm (p-3)}^2 \ldots P^2_{\pm 2}P_{\pm 1}^2P_{\infty}^{-4} & \text{if } p\color{black} \text{ is odd}\\ P_{\pm (p-1)}^2 P_{\pm (p-3)}^2 \ldots P_{\pm 3}^2P_{\pm 1}^2P_{0}^2P_{\infty}^{-4} & \text{if } p\color{black} \text{ is even} \end{cases}\end{aligned}$$ It is important to highlight that we use multiplicative notation for the divisor, while addition and subtraction follow the conventional complex number operations. The differential of the holomorphic covering $\pi_{Gdh}$ (branched at $P_j$) has the following divisor: $$(d\pi_{Gdh}) \,=\, P_{\pm p}^1P_{\pm (p-1)}^1 \ldots P_0^1 P_{\infty}^{-3}.$$ The quadratic differentials $\alpha \beta$ and $d\pi_{Gdh}^2$ share an identical set of zeros and poles, as do $\widetilde{\alpha}\widetilde{\beta}$. 
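This can be verified directly from the divisors listed above. For instance, for odd $p$, $$(\alpha)(\beta)\,=\,\left(P_{\pm p}^2 P_{\pm (p-2)}^2 \cdots P^2_{0}P_{\infty}^{-2}\right)\left(P_{\pm (p-1)}^2 \cdots P_{\pm 1}^2P_{\infty}^{-4}\right)\,=\,\prod_{j=-p}^{p}P_j^{2}\cdot P_{\infty}^{-6}\,=\,(d\pi_{Gdh})^2,$$ and the same computation goes through for even $p$; since $(\widetilde\alpha)=(\alpha)$ and $(\widetilde\beta)=(\beta)$, the statement for $\widetilde{\alpha}\widetilde{\beta}$ follows as well. 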
Consequently, there exist appropriate constants $c$ and $\widetilde{c}$ such that $\alpha\beta = (dh)^2$, where $dh = cd\pi_{Gdh}$, and $\widetilde{\alpha}\widetilde{\beta} = (\widetilde{dh})^2$, where $\widetilde{dh} = \widetilde{c}d\pi_{Gdh}$. Now, we define the following formal Weierstrass data for our minimal surface and maximal maps: $$\begin{aligned} \left(G_T \,=\, \frac{\alpha}{dh},\, dh\right)\, \text{ for the minimal surface,}\\ \left(\widetilde{G}_T \,=\, \frac{\widetilde\alpha}{\widetilde{dh}},\, \widetilde{dh}\right)\, \text{ for the maximal surface}.\end{aligned}$$ Note that the divisor condition, as in the Equation [\[divisor\]](#divisor){reference-type="ref" reference="divisor"}, is trivially satisfied since at each zero and pole of $G_T$ and $\widetilde{G}_T$, there exists a zero of $d\pi_{Gdh}$ with an equal order. The period problem can be resolved using the same technique as discussed in Section [3.2](#Subsec:Maximalsurface tildeX){reference-type="ref" reference="Subsec:Maximalsurface tildeX"}. We begin with a homology basis of $H_1(\mathcal{R}_{{Gdh}}, \mathbb Z)$. For $j = -p, \ldots, p - 1$, let $B_j$ represent the loop in $S_{\Omega_{Gdh}}$ that encloses only the line segment $P_jP_{j+1}$ within the disk and no other vertices in its interior. These curves have closed lifts $\widetilde{B_j}$ to $\mathcal{R}_{Gdh}$ and form a homology basis of $\mathcal{R}_{Gdh}$. The following statements are valid: $$\begin{aligned} \int_{\widetilde{B_j}}\alpha &= 2\exp\left(-\frac{\pi i}{4}\right)(P_j-P_{j+1}) \\ \int_{\widetilde{B_j}}\beta &= 2\exp\left(-\frac{\pi i}{4}\right)(P_{-j}-P_{-j-1}) \\ \int_{\widetilde{B_j}}\widetilde\alpha &= 2\exp\left(\frac{\pi i}{4}\right)(P_j-P_{j+1}) \\ \int_{\widetilde{B_j}}\widetilde\beta &= 2\exp\left(\frac{\pi i}{4}\right)(P_{-j}-P_{-j-1}).\end{aligned}$$ Due to the symmetry of the tweezers, we can further deduce that $$\begin{aligned} \int_{\widetilde{B_j}}\alpha &\,=\, \overline{\int_{\widetilde{B_j}}\beta} \label{equation 8.1:period problem:minimal surface} \\ \int_{\widetilde{B_j}}\widetilde\alpha &\,=\, -\overline{\int_{\widetilde{B_j}}\widetilde\beta}. \label{equation 8.2:period problem:maximal surface}\end{aligned}$$ Moreover, as $dh$ and $\widetilde{dh}$ are exact, for all loops $\int_{\widetilde{B_j}}dh\,=\,\int_{\widetilde{B_{j}}}\widetilde{dh}\,=\,0$. Thus, equations [\[equation 8.1:period problem:minimal surface\]](#equation 8.1:period problem:minimal surface){reference-type="eqref" reference="equation 8.1:period problem:minimal surface"} and [\[equation 8.2:period problem:maximal surface\]](#equation 8.2:period problem:maximal surface){reference-type="eqref" reference="equation 8.2:period problem:maximal surface"} confirm that the following maps are minimal and maximal, respectively: $$\begin{aligned} X_T(z) &\,=\, \operatorname{Re} \int_0^{z} \left( (G_T^{-1} - G_T) \frac{dh}{2},\, i ( G_T^{-1} + G_T ) \frac{dh}{2},\, dh \right) \label{eqn:minimalsurfaceTweezer} \\ \widetilde{X}_T(z) &\,=\, \operatorname{Re} \int_0^{z} \left( (\widetilde{G}_T^{-1} + \widetilde{G}_T) \frac{\widetilde{dh}}{2},\, i ( \widetilde{G}_T^{-1} -\widetilde{G}_T ) \frac{\widetilde{dh}}{2},\, \widetilde{dh} \right).\label{eqn:maximalsurfacetweezer}\end{aligned}$$ We have the following theorem: **Theorem 6**. *Given a reflexive tweezer $T$ of genus $p$, there exist a minimal surface $X_T$ and a maxface $\widetilde{X}_T$ of genus $p$, each having one Enneper end and at most eight symmetries. 
Furthermore, $X_T$ (respectively, $\widetilde{X}_T$) is not symmetric to the minimal surface $X$ (respectively, $\widetilde{X}$) as discussed in Section [\[section:Minimal surface and maxface with enneper end\]](#section:Minimal surface and maxface with enneper end){reference-type="ref" reference="section:Minimal surface and maxface with enneper end"}.* *Proof.* Equation [\[eqn:minimalsurfaceTweezer\]](#eqn:minimalsurfaceTweezer){reference-type="eqref" reference="eqn:minimalsurfaceTweezer"} defines the minimal surface, and Equation [\[eqn:maximalsurfacetweezer\]](#eqn:maximalsurfacetweezer){reference-type="eqref" reference="eqn:maximalsurfacetweezer"} defines the corresponding maximal map. As we have already established the divisor condition, as in Equation [\[divisor\]](#divisor){reference-type="eqref" reference="divisor"}, it follows that the maximal map given in Equation [\[eqn:maximalsurfacetweezer\]](#eqn:maximalsurfacetweezer){reference-type="eqref" reference="eqn:maximalsurfacetweezer"} indeed represents a maxface. Concerning completeness, we first observe that at the end of the maxface, the Gauss map has a zero. Consequently, the singular set $\{p\color{black}: |\widetilde{G}_T(p\color{black})|=1\}$ is compact. Furthermore, we can confirm that the metric is complete at the end of both $X_T$ and $\widetilde{X}_T$ since the data at the end is the same as the data for the Enneper end. The remaining task involves proving that the minimal surface $X_T$ is not symmetric to the minimal surface $X$. If such symmetry existed, there would either be a conformal or anticonformal diffeomorphism between the corresponding Riemann surfaces. However, as stated in Proposition [Proposition 2](#prof:RiemannsurfaceComprasion){reference-type="ref" reference="prof:RiemannsurfaceComprasion"}, this is not possible. ◻ # **Existence of a reflexive Tweezer** In this section, we will demonstrate the existence of a reflexive tweezer. The proof closely follows the approach used for the zigzag case, as detailed in Section 5 of [@weber1998minimal]. While we could have directly stated the existence of the tweezer as a corollary of the zigzag and orthodisks case, we choose to present it here for the sake of clarity and comprehensiveness. Therefore, the content presented below does not introduce new concepts; instead, it serves as an application of the arguments found in various works by Weber and Wolf. Consequently, we aim to emphasize several key points within the context of tweezers, and we will discuss these in the following subsections. ## Space of Tweezers $\mathcal{T}_p$ Two symmetric tweezers of genus $p$, denoted as $T$ and $T'$, are considered equivalent if the corresponding pairs of regions $(\Omega^T_{Gdh},\, \Omega^T_{G^{-1}dh})$ and $(\Omega^{T'}_{Gdh},\, \Omega^{T'}_{G^{-1}dh})$ are conformal by a map that takes each vertex to the same vertex). Let $\mathcal{T}_p$ denote the class of equivalent symmetric tweezers of genus $p$. Furthermore, there always exists a conformal map of $\mathbb{C}$ that carries any symmetric tweezer to a symmetric tweezer $t_0$, such that the two endpoints $P_{-p}$ and $P_p$ are $i$ and $-1$ respectively, and $P_{k}=-i\overline{P_{-k}}$. From now onwards, we will represent a class of tweezer $[T]\,\in\,\mathcal{T}_p$ by the corresponding tweezer $t_0$ in $[T]$, unless stated otherwise. 
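As a quick consistency check of this normalization (a remark only): the reflection across the line $y\,=\,-x$ acts on $\mathbb{C}$ as $z\mapsto -i\bar{z}$, and with $P_{-p}\,=\,i$ and $P_p\,=\,-1$ one indeed has $-i\overline{P_{-p}}\,=\,-i(-i)\,=\,-1\,=\,P_p$, so the two prescribed endpoints are compatible with the relation $P_{k}=-i\overline{P_{-k}}$. 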
Similar to the space $\mathcal{Z}_p$ as defined in [@weber1998minimal], we define a map $$\mathcal{T}_p\,:\,\mathcal{T}_p\,\longrightarrow\,\mathbb{C}^{p-1}$$ as follows: $$\mathcal{T}_p([t_0]) \,=\, (P_1(t_0),\,\cdots,\,P_{p-1}(t_0)).$$ This map induces a topology on $\mathcal{T}_p$, making it homeomorphic to a $(p-1)$-cell. ## Height function, Sufficient condition of the reflexive Tweezers, and its properness We will follow Subsection 4.2 of [@weber1998minimal] in the present context. Consider the $2(p+1)$-pointed sphere $S_{\Omega_{Gdh}}$ with marked points $P_{-p},\, \ldots,\, P_0,\, \ldots,\, P_p$, and $P_\infty$. Remarkably, $S_{\Omega_{Gdh}}$ exhibits two distinct reflective symmetries: one related to the image of $T$, and another arising from a corresponding symmetry of the tweezer. We denote $[C_k]$ the homotopy class of simple curves that encircle the points $P_k$ and $P_{k+1}$ for $k \,=\, 1,\, \ldots,\, p-1$. Similarly, $[C_{-k}]$ is the homotopy class of simple curves enclosing the points $P_{-k}$ and $P_{-k-1}$ for $k \,=\, 1,\, \ldots,\, p - 1$. Furthermore, we define $[\alpha_k]$ as the combined pair of classes $[C_k] \cup [C_{-k}]$. From the homotopy class of mappings that connect $S_{\Omega_{Gdh}}$ to $S_{\Omega_{G^{-1}dh}}$ (while preserving each of the vertices), we derive corresponding homotopy classes of curves on $S_{\Omega_{G^{-1}dh}}$, which are also denoted by $[\alpha_k]$. Additionally, we denote $$\text{E}_{\Omega_{Gdh}}(k)\,:=\, \text{Ext}_{S_{\Omega_{Gdh}}}([\alpha_k]) \ \ \text{ and } \ \ \text{E}_{\Omega_{G^{-1}dh}}(k)\,:= \,\text{Ext}_{S_{\Omega_{G^{-1}dh}}}([\alpha_k]),$$ representing the extremal lengths of $[\alpha_k]$ within the domains of $S_{\Omega_{Gdh}}$ and $S_{\Omega_{G^{-1}dh}}$ respectively. Motivated by Definition 4.4 of [@weber1998minimal], we construct the similar height function $$D^T\,:\,\mathcal{T}_p \,\longrightarrow\, \mathbb{R}$$ defined by $$\label{defn: DT} D^T(T) = \sum_{j=1}^{p-1} \left(\exp\left(\frac{1}{\text{E}_{\Omega_{Gdh}}(j)}\right) - \exp\left(\frac{1}{\text{E}_{\Omega_{G^{-1}dh}}(j)}\right)\right)^2 + \sum_{j=1}^{p-1}\left(\text{E}_{\Omega_{Gdh}}(j) - \text{E}_{\Omega_{G^{-1}dh}}(j)\right)^{2}$$ From the same argument as in the case of zigzag (see [@weber1998minimal Section 4.3]) it is clear that $D^T(T)\,=\,0$ if and only $T$ is reflexive. Moreover the properness of $D^T$ is a direct consequence of Lemma 4.7.1, and Lemma 4.7.2 of [@Weber1998TeichmullerTA]. ## Tangent vector at $t_0\in \mathcal{T}$ ![Picture from the Section 5.3.1 [@Weber1998TeichmullerTA]](orthodiskedgepushing.png){#fig:FepsilonandFepsilon*} Let us fix an edge $E$ that is horizontal; the symmetric tweezer will give the corresponding edge $E^*$, which is vertical. For given $b, \delta$, take maps $f_\epsilon^E$ and $f_{\epsilon^*}^E$ as in [@Weber1998TeichmullerTA (5.1)(a)] and [@Weber1998TeichmullerTA (5.1)(b)] respectively. Geometrically, these maps push the horizontal and vertical edges so that after applying both maps, the tweezer structure is preserved (see Figure [6](#fig:FepsilonandFepsilon*){reference-type="ref" reference="fig:FepsilonandFepsilon*"}). Under the reflection across the line $y\,=\,-x$, the region $R_i$ is mapped to the region $R_i^*$ as in figure [6](#fig:FepsilonandFepsilon*){reference-type="ref" reference="fig:FepsilonandFepsilon*"}. We call the map $f^E_\epsilon \circ f^{E}_{\epsilon^*}$ above the "pushing out and pulling in" map for the edge $E$. 
Let $\nu_\epsilon \,:=\, \frac{(f_\epsilon)_{\overline{z}}}{(f_\epsilon)_{z}}$ represents the Beltrami differential of $f_\epsilon$, and define $\dot{\nu} \,= \,\left. \frac{d}{d\epsilon}\right|_{\epsilon=0}\nu_\epsilon$. Similarly, let $\dot{v^{*}}$ denote the infinitesimal Beltrami differential of $f^*_\epsilon$. Expressions for $\dot{\nu}$ and $\dot{\nu^{*}}$ are given in [@Weber1998TeichmullerTA (5.1)(a)] and [@Weber1998TeichmullerTA (5.1)(b)] respectively. We take $\dot{\mu} \,= \,\dot{\nu} + \dot{\nu}^*$. This is a Beltrami differential supported on a bounded domain in $\mathbb{C}\,=\, \Omega_{Gdh}\cup t_0\cup\Omega_{G^{-1}dh}$. Thus, this pair of Beltrami differentials lifts to a pair $$\label{tangentvector} \dot{\mu}\,=\,(\dot{\mu}_{\Omega_{Gdh}},\, \dot{\mu}_{\Omega_{G^{-1}dh}})$$ on the pair $S_{\Omega{Gdh}}$ and $S_{\Omega_{G^{-1}dh}}$. The above defined $\dot{\mu}$ represents a tangent vector to $\mathcal{T}_p$ at $t_0$. The above process will yield different tangent vectors for different "pushing out and pulling in" maps. ## Derivative of Extremal length function, and corresponding Quadratic Differential Let $\Phi_k$ represent the quadratic differential associated with the homotopy class of curve $\alpha_k$, defined as $$\Phi_k\,:=\, \frac{1}{2} d\text{ Ext}([\alpha_k])|_{\Omega_{Gdh}}\ \ \text{ and, }$$ $$\label{eqn:derivativeofExtermal} \left(d\text{ Ext}([\alpha_k])|_{\Omega_{Gdh}}\right)[\widehat{\nu}] \,=\, 4\text{Re}\int_{\Omega_{Gdh}} \Phi_k\widehat{\nu}.$$ The horizontal foliation of $\Phi_k$ comprises curves connecting the same edges, as $C_k$, within $\Omega_{Gdh}$. Furthermore, it preserves the reflective symmetry of the element in $\mathcal{T}_p$. Consequently, these foliations must either run parallel to or be perpendicular to the fixed sets of reflections (which correspond to the tweezer). Here, we are using the same notation for the foliation $\Phi_k$ both on the marked sphere $S_{\Omega_{Gdh}}$ and on $\Omega_{Gdh}$. Consider the edge $E$ connecting vertices $v_1$ and $v_2$, but $v_i\,\neq\, P_0$. Let $\Phi_k$ denote the quadratic differential corresponding to the homotopy class of curve $C_k$ containing both $v_1$ and $v_2$. Since genus $p\,\geq\, 2$, in this context, there is such an edge for which one of the following conditions holds for vertex $v_i$: 1. The orthodisk has an angle of $\frac{\pi}{2}$ at $v_i$. 2. Vertex $v_i$ lies at infinity. 3. The orthodisk has an angle of $\frac{3\pi}{2}$ at $v_i$, and the foliation $\Phi_k$ is either parallel or orthogonal to the edges incident to $v_i$. Using Proposition 5.3.2 from [@Weber1998TeichmullerTA] we conclude that the holomorphic quadratic differential $\Phi_k$ is admissible on the edge corresponding to points $v_1$ and $v_2$. In the next subsection, we will select edge $E$ such that the corresponding foliation is admissible on $E$. Additionally, we will consider the tangent vector $\dot{\mu}$ as described in [\[tangentvector\]](#tangentvector){reference-type="eqref" reference="tangentvector"}, which we obtain by applying corresponding pushing out and pulling in function $f_\epsilon^E \circ f_{\epsilon^*}^E$ to the tweezer. Without loss of generality, we can select the edge corresponding to $P_{-1}$ and $P_{-2}$. ## Reflexive tweezer Since the holomorphic quadratic differential $\Phi_k$ is admissible for the edge we selected (as demonstrated in the previous subsection), we can apply Lemma 5.3.1 from [@Weber1998TeichmullerTA], leading to the following corollary for the tweezer case: **Corollary 7**. 
*The expression $sgn(\Phi_k \dot{\mu}_{\Omega_{Gdh}}(q))$ maintains a constant sign for $q \in E$, which is opposite to $sgn(\Phi_k \dot{\mu}_{\Omega_{G^{-1}dh}}(q))$. Consequently,* *$$\text{sgn}\left( d\text{Ext}_{\Omega_{Gdh}}([\alpha_k])[\dot{\mu}_{\Omega_{Gdh}}] \right) = -\text{sgn}\left( d\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_k])[\dot{\mu}_{\Omega_{G^{-1}dh}}] \right).$$* In the above, we are employing the same notation (a slight abuse of notation) for the curve $\alpha_k$ in both $\Omega_{Gdh}$ and $\Omega_{G^{-1}dh}$, as well as the same symbol for the corresponding horizontal foliation. Above corollary [Corollary 7](#sgnQuadraticDifferential){reference-type="ref" reference="sgnQuadraticDifferential"} is the main result that enables us to use the findings of zigzag, as presented in Section 5.3 of [@weber1998minimal]. Since the proof is identical, we can establish the following fact: Suppose there is a reflexive tweezers of genus $p-1$, then there is a path (one dimensional real analytic manifold) $\mathcal{Y} \subset \mathcal{T}_p$ for which $\text{Ext}_{\Omega_{Gdh}}(\alpha_i) = \text{Ext}_{\Omega_{G^{-1}dh}}(\alpha_i)$ holds for $i = 2 \ldots p-1$. If we restrict $D^T$ as in the Equation [\[defn: DT\]](#defn: DT){reference-type="eqref" reference="defn: DT"} to $\mathcal{Y}$, we have $$D^T|_{\mathcal{Y}} = \left(\exp\left(\frac{1}{\text{E}_{\Omega_{Gdh}}(1)}\right) - \exp\left(\frac{1}{\text{E}_{\Omega_{G^{-1}dh}}(1)}\right)\right)^2 + \left(\text{E}_{\Omega_{Gdh}}(1) - \text{E}_{\Omega_{G^{-1}dh}}(1)\right)^2$$ We denote the above-restricted function by the same $D^T$. Moreover let $t_0\in Y$, and let $\dot{\mu}\in T_{t_0}\mathcal{Y}$ corresponding to the edge we selected. We get $$\begin{aligned} dD^T_{t_0}[\dot{\mu}] &=& 2\left(\exp\left(\frac{1}{\text{Ext}_{\Omega_{Gdh}}([\alpha_1])}\right) - \exp\left(\frac{1}{\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])}\right)\right) \\ && \left(-\exp\left(\frac{1}{\text{Ext}_{\Omega_{Gdh}}([\alpha_1])}\right)(\text{Ext}_{\Omega_{Gdh}}([\alpha_1]))^{-2}d\text{Ext}_{\Omega_{Gdh}}([\alpha_1])[\dot{\mu}_{\Omega_{Gdh}}] \right.\\ && \left. +\exp\left(\frac{1}{\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])}\right)(\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1]))^{-2}d\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])[\dot{\mu}_{\Omega_{G^{-1}dh}}]\right)\\ && + 2\left(\text{Ext}_{\Omega_{Gdh}}([\alpha_1]) - \text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])\right)\left(d\text{Ext}_{\Omega_{Gdh}}([\alpha_1])[\dot{\mu}_{\Omega_{Gdh}}] - d\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])[\dot{\mu}_{\Omega_{G^{-1}dh}}]\right)\end{aligned}$$ If we begin with $t_0\in Y$ such that $D^T(t_0)\neq 0$ and choose $\dot{\mu}$ as described in [\[tangentvector\]](#tangentvector){reference-type="ref" reference="tangentvector"} for the edge we selected at the beginning of this subsection. The equation above implies that $dD^T_{t_0}[\dot{\mu}]\neq 0$. However, since $D^T$ is a proper map, it must have a critical point in the smooth manifold $\mathcal{Y}$; therefore, at this critical point $t_0$, we must have $D^T(t_0)=0$. Now, finally, we prove that for every genus $p$, there exists a reflexive tweezer using an induction argument as outlined in [@weber1998minimal] (page 1165). To apply the induction argument for the case when $p=1$, it is evident that both the Riemann surfaces resulting from the zigzag and the tweezer are square tori. Therefore, for $p=1$, the existence of reflexive zigzag and tweezer is immediate, and then we can proceed with the induction argument. 
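To spell out why $dD^T_{t_0}[\dot{\mu}]\neq 0$ whenever $D^T(t_0)\neq 0$ (a sketch of the sign analysis behind the displayed derivative), write $u=\text{Ext}_{\Omega_{Gdh}}([\alpha_1])$, $v=\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])$ and abbreviate $du=d\text{Ext}_{\Omega_{Gdh}}([\alpha_1])[\dot{\mu}_{\Omega_{Gdh}}]$, $dv=d\text{Ext}_{\Omega_{G^{-1}dh}}([\alpha_1])[\dot{\mu}_{\Omega_{G^{-1}dh}}]$. On $\mathcal{Y}$ we have $D^T=(e^{1/u}-e^{1/v})^2+(u-v)^2$, so $D^T(t_0)\neq 0$ exactly when $u\neq v$, and the displayed formula reads $$dD^T_{t_0}[\dot{\mu}]\,=\,2\left(e^{1/u}-e^{1/v}\right)\left(-e^{1/u}u^{-2}\,du+e^{1/v}v^{-2}\,dv\right)+2(u-v)(du-dv).$$ By Corollary [Corollary 7](#sgnQuadraticDifferential){reference-type="ref" reference="sgnQuadraticDifferential"}, and assuming as in the zigzag case that these derivatives are nonzero, $du$ and $dv$ have opposite signs. If, say, $u>v$ and $du>0>dv$, then $e^{1/u}-e^{1/v}<0$ and $-e^{1/u}u^{-2}du+e^{1/v}v^{-2}dv<0$, so the first summand is positive, and $(u-v)(du-dv)>0$ as well; the other three sign patterns are handled in the same way, so in every case both summands have the same nonzero sign and $dD^T_{t_0}[\dot{\mu}]\neq 0$. 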
# Concluding remarks We present new examples of maxfaces with one Enneper end. Furthermore, from the proof, it becomes evident that by suitably adjusting the orthodisk, we can effectively address the period problem associated with these maxfaces. However, there are two crucial aspects that this article does not cover but that are essential to explore. Firstly, due to the lack of control over the modulus of the Schwarz--Christoffel map, we are uncertain about the precise nature of the singular set. Additionally, if we somehow manage to identify the singular set, we still need to determine the types of singularities that these maxfaces can exhibit. It is our hope that numerical analysis of the generic Schwarz--Christoffel map may yield some insights in this regard. Secondly, as observed from the proof, the process involves defining the orthodisk in a suitable manner from the tweezers and then establishing the existence of a reflexive tweezer. A similar construction approach can be applied to obtain maxfaces corresponding to structures such as the Costa towers and the surfaces $DH_{m,n}$ discussed in [@Weber1998TeichmullerTA]. This would address the scarcity of maxfaces with a larger number of ends. The investigation of these two important aspects holds promise for advancing our understanding of maxfaces and their properties.
arxiv_math
{ "id": "2310.00235", "title": "Higher genus maxfaces with Enneper end", "authors": "Rivu Bardhan, Indranil Biswas, Pradip Kumar", "categories": "math.DG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- address: - - --- [^1] [^2] [^1]: [^2]:
arxiv_math
{ "id": "2310.04543", "title": "Extended Levett trigonometric series", "authors": "Robert Reynolds", "categories": "math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We extend the theory of countably generated Demushkin groups to Demushkin groups of arbitrary rank. We investigate their algebraic properties and invariants, count their isomorphism classes and study their realization as absolute Galois groups. At the end, we compute their profinite completion and conclude with some results on profinite completions of absolute Galois groups. author: - Tamar Bar-On and Nikolay Nikolov title: Demushkin groups of uncountable rank --- # Introduction {#introduction .unnumbered} Let $G$ be a finitely generated pro-$p$ group. We say that $G$ is a *Demushkin* group if the following two properties are satisfied: 1. $\dim H^2(G)=1$ 2. The cup product yields a nondegenerate bilinear form $H^1(G)\times H^1(G)\to H^2(G)$. Here and below we shall write $H^n(G)$ for $H^n(G,\mathbb{F}_p)$. Infinite Demushkin groups are precisely the Poincare-Duality pro-$p$ groups of dimension 2. In particular, they have cohomological dimension $2$, and every open subgroup of an infinite Demushkin group is itself Demushkin. Demushkin groups play an important role in number theory. Serre proved in [@serre1962structure], [@serre1979galois], that the maximal pro-$p$ Galois groups of finite extensions of $\mathbb{Q}_p$ which contain a primitive $p$-th root of unity are all Demushkin groups. It is still an open question whether every finitely generated Demushkin group can be realized as a maximal pro-$p$ Galois group of a field. Here by maximal pro-$p$ Galois group of a field we mean the maximal pro-$p$ quotient of its absolute Galois group. Since $\dim(H^2(G))=1$, $G$ can be presented as a quotient $F/\langle r^F \rangle$ of a free pro-$p$ group of rank $\dim(H^1(G))$, by the normal subgroup generated by one relator $r\in \Phi(F)$. The structure of $r$ was completely classified in [@demushkin1961group],[@demushkin19632] and [@labute1967classification]. Moreover, Demushkin groups are completely determined by two invariants defined as follows: Let $q(G)=p^h$ for $h$ the maximal natural number such that $r\in F^{p^h}[F,F]$, or $q(G)=0$ in case $r\in [F,F]$. Notice that this definition doesn't depend on the choice of $r$, as $G/[G,G]\cong \mathbb{Z}_p/q(G)\times (\mathbb{Z}_p)^{\operatorname{rank}(G)-1}$. If $q\ne 2$ then for every even natural number $n> 1$ there is a unique Demushkin group of rank $n$ for which $q(G)=q$. If $q=2$ the situation is more complicated. A second invariant was defined by Serre for this case: Being a Poincare duality group, every infinite Demushkin group comes equipped with a dualizing module $I$, which is isomorphic to $\mathbb{Q}_p/\mathbb{Z}_p$. Hence, we denote by $\chi:G\to \mathbb{Z}_p^{\times}$ the homomorphism induced by the action of $G$ on $I$, and define the invariant $\operatorname{Im}(\chi)$. In [@labute1966demuvskin] Labute generalized the theory of Demushkin groups to Demushkin groups of rank $\aleph_0$. By a Demushkin group of rank $\aleph_0$ he referred to pro-$p$ groups $G$ satisfying the conditions 1 and 2 above, and such that $\dim(H^1(G))=\aleph_0$. He gave an almost full characterization, except for one case, for which he defined two more invariants: $s(G)$ and $t(G)$. Since Poincare-Duality groups are necessarily finitely generated, a Demushkin group of rank $\aleph_0$ is no longer a Poincare duality group. However, it can be shown that it has a dualizing module $I$, which is isomorphic to either $\mathbb{Q}_p/\mathbb{Z}_p$ or to $\mathbb Z/ q\mathbb Z$ for an integer $q$ which is a power of $p$. We define the invariant $s(G)$ to be zero in the former case and equal to $q$ in the latter case. 
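For orientation, we recall (without proof) the explicit form of the relator given by the classification cited above in the finitely generated case: for $q\ne 2$ the rank $n$ is even and one may take $$r\,=\,x_1^{q}[x_1,x_2][x_3,x_4]\cdots[x_{n-1},x_n],$$ with the convention that $q=0$ gives $r=[x_1,x_2]\cdots[x_{n-1},x_n]$; the case $q=2$ requires several further families of relators, and this is where the invariant $\operatorname{Im}(\chi)$ enters. 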
Labute showed that for every $q\ne 2$ and every $s$ with $s\geq q$ or $s=0$, there exists a unique Demushkin group of countable rank $G$ such that $q(G)=q$ and $s(G)=s$. The invariant $t(G)$, which is only interesting in the case $q=2$, will be discussed later. In his paper [@labute1966demuvskin] Labute also proved that the $p$-Sylow subgroups of the absolute Galois groups of local fields containing a primitive $p$-th root of unity are Demushkin groups of countable rank. This work was completed by Minac and Ware, who proved in [@minavc1991demuvskin] and [@minavc1992pro] that for every prime $p\ne 2$ a pro-$p$ Demushkin group of rank $\aleph_0$ occurs as an absolute Galois group of some field if and only if $s(G)=0$. Moreover, a pro-$2$ Demushkin group $G$ of rank $\aleph_0$ occurs as an absolute Galois group if and only if $s(G)=0$ and the couple $(t(G),\operatorname{Im}(\chi))$ belongs to a specific list of possibilities. The object of this paper is to present a theory of Demushkin groups of uncountable rank, and discuss their applications to number theory. Let $\mu>\aleph_0$ be some cardinal. We say that a pro-$p$ group $G$ with $\dim(H^1(G))=\mu$ is Demushkin if it satisfies the conditions 1 and 2 at the start of the paper. The paper is organized as follows: In Section 1 we give some necessary background from Galois cohomology and bilinear form theory. In Section 2 we discuss the algebraic properties of a Demushkin group of uncountable rank, and in particular prove the following: **Theorem 1**. *Let $q\ne 2$ be a prime power or equal to 0, and $\mu>\aleph_0$. For every nondegenerate skew symmetric bilinear form $(V,\varphi)$ of dimension $\mu$ there is a Demushkin group $G$ of rank $\mu$, with $q(G)=q, s(G)=0$, and whose cup-product bilinear form $H^1(G)\times H^1(G)\to H^2(G)$ is isomorphic to $(V,\varphi)$.* **Corollary 2**. *For every cardinal $\mu>\aleph_0$, $p$ a prime, and $q\ne 2$ a power of $p$ or zero, there are $2^{\mu}$ pairwise nonisomorphic Demushkin pro-$p$ groups $G$ of rank $\mu$ with $q(G)=q, s(G)=0$.* In addition, for the $q=2$ case, we have the following: **Theorem 3**. *For every choice of $t\in\{-1,0,1\}$, a subgroup $A$ of $\mathbb{Z}_2^{\times}$ and an uncountable cardinal $\mu$, there are $2^{\mu}$ pairwise nonisomorphic pro-$2$ Demushkin groups $G$ of rank $\mu$ with $s(G)=0, t(G)=t$ and $\operatorname{Im}(\chi)=A$.* We also give the following two characterizations of Demushkin groups of arbitrary rank: **Theorem 4**. *Let $G$ be a pro-$p$ group. The following are equivalent:* 1. *$G$ is a Demushkin group.* 2. *$\operatorname{cd}(G)=2$ and the dualizing module $I$ of $G$ has a unique additive subgroup of size $p$.* **Theorem 5**. *Let $G$ be a pro-$p$ one relator group of arbitrary rank. Then $G$ is a Demushkin group if and only if every open subgroup $U\leq G$ of index $p$ is 1-related.* In Section 3 we discuss the realization of Demushkin groups of uncountable rank as maximal pro-$p$ quotients of absolute Galois groups. This work is motivated by the wide version of the Inverse Galois Problem, which asks to determine which profinite groups can be realized as absolute Galois groups, and its restricted pro-$p$ version, which asks to determine which pro-$p$ groups can be realized as maximal pro-$p$ quotients of absolute Galois groups. In particular, we prove the following: **Theorem 6**. 
*For every cardinal $\mu>\aleph_0$, prime $p$, and $q$ a power of $p$ or $0$, there exists a field $F$ whose absolute Galois group is a Demushkin group $G$ of rank $\mu$, $q(G)=q$ and $s(G)=0$.* For the case $p=2$ we need notation for the subgroups of $\mathbb Z_2^ \times$. For an integer $f \geq 2$ define $U_2^{(f)}$, respectively $U_2^{[f]}$, to be the closed subgroups generated by $1+2^f$, respectively $-1+2^f$, in $\mathbb Z_2^\times$. We have the following result. **Theorem 7**. *Let $G$ be a pro-2 Demushkin group. If $G$ is an absolute Galois group, then $t(G)\ne 0$. In addition, for every cardinal $\mu$, there exists an absolute Galois group over a field of characteristic 0 which is a pro-2 Demushkin group $G$ of rank $\mu$ for every pair of invariants in the following list, and only for such invariants.* - *$t(G)=1$ and $\operatorname{Im}(\chi) \in \{ U_2^{(f)},U_2^{[f]}, f \in \mathbb N, f \geq 2\}$,* - *$t(G) \in \{-1,1\}$, $\operatorname{Im}(\chi)=\{\pm 1\} U_2^{(f)}, f \in \mathbb N, f \geq 2$.* *If $F$ is a field of characteristic $p$, whose absolute Galois group is a pro-2 Demushkin group $G$, then $t(G)=1$ and exactly the following options of $\operatorname{Im}(\chi)$ are possible:* - *If $p\equiv 1 \pmod 4$, say $p = 1 + 2^ac, a > 2, 2\nmid c$, then $\operatorname{Im}(\chi)=U_2^{(b)}$ for some $a \leq b < \infty$. Moreover, each group $U_2^{(b)}, a \leq b < \infty$, occurs as $\operatorname{Im}(\chi)$ for some Demushkin group $G_F$ of rank $\mu$ with $\operatorname{char}(F)=p$.* - *If $p\equiv -1 \pmod 4$, say $p = -1 + 2^ac, a > 2, 2\nmid c$, then the only possibilities for $\operatorname{Im}(\chi)$ are the groups $U_2^{[a]}$, and $U_2^{(b)}$, with $a+1 \leq b < \infty$. Moreover, each of these groups can occur as $\operatorname{Im}(\chi)$ for some Demushkin group $G_F$ of rank $\mu$ with $\operatorname{char}(F)=p$.* In Section 4 we compute the profinite completion of a Demushkin group of infinite rank; it turns out to always be a free pro-$p$ group. Bearing in mind that free pro-$p$ groups of any rank are absolute Galois groups, we get a new class of examples of absolute Galois groups such that their profinite completions remain absolute Galois groups, as discussed in [@baron-cohomological]. # Theoretical background {#theoretical-background .unnumbered} Recall that $\dim(H^1(G))$ equals the cardinality of a minimal set of generators of $G$. In particular, $H^1(G)=(G/\Phi(G))^*$, and for every subset $X=\{x_i\}\subseteq G$, $X$ is a minimal set of generators for $G$ if and only if $\{x_i\mod \Phi(G)\}$ is a minimal set of generators for $G/\Phi(G)$. Hence, for every minimal set of generators $\{x_i\}$ of $G$, we can define a dual basis $\{\chi_i\}$ of $H^1(G)$ by $\chi_i(x_j)=\delta_{ij}$. Conversely, for every basis $\{\chi_i\}$ of $H^1(G)$ one can construct a dual minimal set of generators $\{x_i\}$ for $G$, satisfying $\chi_i(x_j)=\delta_{ij}$. As $\dim(H^2(G))$ equals the cardinality of a minimal set of relations defining $G$, $H^2(G)\cong \mathbb{F}_p$ if and only if $G$ is 1-relator, i.e., $G\cong F/\langle \langle r \rangle \rangle$ is a quotient of a free pro-$p$ group of the same rank by the normal subgroup generated by one element $r\in \Phi(F)$. For 1-relator pro-$p$ groups, the cup product bilinear form $H^1(G)\times H^1(G)\to H^2(G)\cong \mathbb{F}_p$ has the following interpretation: choose a basis $\{x_i\}$ for $F$. Since $r\in \Phi(F)$, $r\equiv r'\mod [[F,F],F]$ where $r'=\prod x_i^{p\alpha_i}\prod_{i<j}[x_i,x_j]^{\beta_{i,j}}$. 
Take $\{\chi_i\}$ to be the dual basis; then $\chi_i\cup \chi_j=\beta_{ij}\mod p$ for every $i<j$ and $\chi_i\cup \chi_i=\alpha_i\binom{p}{2}\mod p$. The cup product bilinear form is skew-symmetric. Now we give a list of results regarding nondegenerate bilinear forms. Our motivation is the cup product bilinear form and hence we will only refer to skew-symmetric bilinear forms. Recall that a bilinear form $\varphi:V\times V\to \mathbb{F}$ on a vector space $V$ is nondegenerate if for every $0\ne v\in V$ there exists $u\in V$ such that $\varphi(v,u)\ne 0$. In addition, over a field of characteristic different from 2, every skew-symmetric bilinear form is *alternate*, meaning that for every $v\in V$, $\varphi(v,v)=0$. It is a well known fact that every finite dimensional vector space with a nondegenerate alternate bilinear form has a *symplectic basis*, meaning a basis $\{x_1,...,x_{2n}\}$ which satisfies $\varphi(x_{2i-1},x_{2i})=1$ and $\varphi(x_i,x_j)=0$ for all other pairs $i\leq j$. Hence up to isometry there is only one nondegenerate alternate bilinear form on an even dimensional vector space $V$ over a field of characteristic different from 2. The characteristic 2 case is a bit more complicated, but still on each finite dimensional vector space $V$ there are only finitely many pairwise nonisometric nondegenerate skew-symmetric forms. As for the infinite dimensional case, we have the following:
**Lemma 8**. *Let $V$ be a vector space over some field $F$, equipped with a skew-symmetric bilinear form $\varphi:V\times V\to F$, and let $U=\operatorname{span}\{v_1,...,v_n\}$ be a finite dimensional subspace on which $\varphi$ is nondegenerate. Then $V=U\oplus U^{\perp}$, where $U^{\perp}$ denotes the orthogonal complement of $U$.*
*Proof.* It is enough to prove that for every $v\in V$ there are some scalars $\alpha_1,...,\alpha_n\in F$ such that $\varphi(v-\sum \alpha_iv_i,v_j)=0$ for every $j=1,...,n$. This is equivalent to solving the system of equations $\sum_i \alpha_i\varphi(v_i,v_j)=\varphi(v,v_j)$, which is solvable by the nondegeneracy of $\varphi$ on $U$, the latter being equivalent to the matrix $(\varphi(v_i,v_j))$ being invertible. Later on we will call the element $u\in U$ for which $v-u\in U^{\perp}$ the projection of $v$ onto $U$, denoted $\operatorname{proj}_U(v)$. ◻
As a result, we get the following useful proposition:
**Proposition 9**. *A skew-symmetric bilinear form over an infinite dimensional vector space is nondegenerate if and only if it is locally nondegenerate, i.e., every finite subset is contained in a finite dimensional nondegenerate subspace.*
*Proof.* The second direction is trivial, so we only prove the first direction. Let $V$ be an infinite dimensional vector space over some field $F$, equipped with a nondegenerate bilinear form $\varphi:V\times V\to F$. We want to prove that every finite subset $A\subseteq V$ is contained in some finite dimensional nondegenerate subspace $U$. We prove it by induction on the size of $A$. When $A=\{v\}$, either $\varphi(v,v)\not =0$ and we can take $U=\operatorname{span}\{v\}$, or there exists some $u\in V$ such that $\varphi(u,v)\not =0$, and then we can take $U=\operatorname{span}\{v,u\}$. Now let $A=\{v_1,...,v_n\}$. If $\varphi(v_1,v_1)\ne 0$, denote $V'= \operatorname{span}\{v_1\}$. Otherwise, we may add an element $v_{n+1}$ such that, after renaming, $\varphi(v_1,v_2)\ne 0$. In that case, denote $V'=\operatorname{span}\{v_1,v_2\}$. Notice that $V'$ is nondegenerate, and hence $V=V'\oplus V'^{\perp}$. Moreover, $v_i-{\operatorname{proj}}_{V'}(v_i)\in V'^{\perp}$ for all $v_i\in A\setminus V'$. Since $V$ is nondegenerate, so is $V'^{\perp}$.
Hence we can apply the induction hypothesis on $\{v_i-{\operatorname{proj}}_{V'}(v_i)\}_{v_i\in A\setminus V'}$ inside $V'^{\perp}$ to get some nondegenerate finite dimensional subspace $W\subseteq V'^{\perp}$ which contains $\{v_i-{\operatorname{proj}}_{V'}(v_i)\}_{v_i\in A\setminus V'}$. Taking $U=V'\oplus W$, we are done. ◻
Lemma [Lemma 8](#Projections){reference-type="ref" reference="Projections"} implies the next result, which was first stated in [@kaplansky1950forms]:
**Lemma 10**. *Let $\varphi$ be an alternate nondegenerate bilinear form over a vector space $V$ of countable dimension. Then $V$ has a symplectic basis. Hence, there is only one alternate nondegenerate bilinear form of countably infinite dimension over a given field.*
Unlike the countable case, when $\dim V$ is uncountable, we have the following theorem:
**Theorem 11**. *[@hall1988number Main Theorem][\[maximal number of forms\]]{#maximal number of forms label="maximal number of forms"} Let $\mu>\aleph_0$ be a cardinal. For every field $\mathbb{F}$ there exist $2^{\mu}$ pairwise nonisometric nondegenerate alternate bilinear forms of dimension $\mu$ over $\mathbb{F}$.*
As skew-symmetric bilinear forms in characteristic 2 may not be alternate, we need to handle this case separately. There are three kinds of skew-symmetric bilinear forms, distinguished by the invariant $t(\varphi)$ defined as follows:
**Definition 12**. Define $\beta:V\to \mathbb{F}$ by $\beta(v)=\varphi(v,v)$. Let $A=\ker (\beta)$, and let $A^{\perp}$ be its orthogonal complement.
- If $A=V$ define $t(\varphi)=1$.
- If $A\ne V$ and $0\ne A^{\perp}\subseteq A$ define $t(\varphi)=1$.
- If $A\ne V$ and $A^{\perp}\nsubseteq A$ define $t(\varphi)=-1$.
- If $A^{\perp}=0$ define $t(\varphi)=0$.
**Theorem 13**. *Let $\mu>\aleph_0$ be a cardinal. For every $t\in \{-1,0,1\}$, there are $2^{\mu}$ pairwise nonisometric nonalternate nondegenerate skew-symmetric bilinear forms $\varphi$ over $\mathbb{F}_2$ of dimension $\mu$, such that $t(\varphi)=t$.*
*Proof.* This proof is based on the proof of [@hall1988number Main Theorem]. Let $t\in \{-1,0,1\}$. Consider the first order language $L$ with
1. Two unary relation symbols $\mathcal{F}$ and $\mathcal{V}$;
2. Five ternary relation symbols $AF, MF, AV, SM, F$;
3. A constant 0;
4. A constant 1, for $1\in \mathbb{F}_2$.
From $L$ we fashion a theory of nondegenerate skew-symmetric bilinear forms $\varphi$ over $\mathbb{F}_2$ with $t(\varphi)=t$. A model $M$ of this theory will be composed of a copy of $\mathbb{F}_2$ given by $\mathcal{F}_M$, an $\mathbb{F}_2$-vector space given by $\mathcal{V}_M$, and a bilinear form from $\mathcal{V}_M\times \mathcal{V}_M$ to $\mathcal{F}_M$ described by $F_M$. More precisely, we axiomatize the theory using first order sentences of $L$. We require $\mathcal{F}_M$ and $\mathcal{V}_M$ to be disjoint subsets of the model $M$ whose union is all of $M$, where $\mathcal{F}_M$ is isomorphic to $\mathbb{F}_2$ and $\mathcal{V}_M$ is a vector space over $\mathcal{F}_M$. Here addition and multiplication in $\mathcal{F}$ are described by the relations $(\alpha,\beta,\alpha+\beta)\in AF$ and $(\alpha,\beta,\alpha\beta)\in MF$. Vector addition and scalar multiplication are described by $(v,u,v+u)\in AV$ and $(\alpha,v,\alpha v)\in SM$. Now $F$ describes a bilinear form $\varphi$ on $\mathcal{V}$ via $(v,u,\varphi(v,u))\in F$. We can require $\varphi$ to be skew-symmetric, nondegenerate, nonalternate with $t(\varphi)=t$.
Eventually, we require $\mathcal{F}$ to be isomorphic to $\mathbb{F}_2$ by the first order sentence $\forall x\in \mathcal{F}(x=0\lor x=1)$. We obtain the axioms of a theory $T$ of nondegenerate nonalternate skew-symmetric bilinear forms $\varphi$ over $\mathbb{F}_2$ with $t(\varphi)=t$. To every model $M$ of $T$ there is an associated bilinear form as in the theorem. Conversely, every bilinear form as in the theorem gives rise to a model of $T$. It is not difficult to check that two models of $T$ are isomorphic if and only if the corresponding bilinear forms are isometric. Now we want to apply Shelah's results on instability [@shelah1972combinatorial] to $T$ and finish the proof. In order to do that, we need to show that our theory $T$ is unstable. By [@shelah1972combinatorial Theorem 2.4.2], it is enough to show that for every infinite cardinal $\mu$, there is a model $M$ of $T$ which embeds $(\mu,<)$ via a formula of $T$. We will build such a model $M$ for every $t\in \{-1,0,1\}$ as follows: Let $\mu$ be a cardinal. Let $V$ be the vector space over $\mathbb{F}_2$ with basis $\{u_i,v_i:i\in \mu\}$. We define a skew-symmetric bilinear form on $V$ by one of the following three sets of rules:
1. $\varphi(v_i,v_j)=0, \forall i\ne j$; $\varphi(v_i,v_i)=1, \forall i$; $\varphi(u_i,u_j)=0, \forall i\ne j$; $\varphi(u_i,u_i)=1, \forall i$; $\varphi(v_i,u_j)=1, \forall i< j$; $\varphi(v_i,u_j)=0, \forall i\geq j$. This is a nondegenerate form. Indeed, let $v=\sum_{i\in J_1} v_i+\sum _{i\in J_2}u_i$ be an arbitrary nonzero element of $V$. If $j_1=\max J_1\geq \max J_2=j_2$, then $\varphi(v,v_{j_1})=\varphi(v_{j_1},v_{j_1})=1$. Otherwise, $\varphi(u_{j_2},v)=1+|J_1|\mod 2$, and for $i>j_2$ we have $\varphi(u_i,v)=|J_1|\mod 2$; one of these must be nonzero. Now $\ker(\beta)$ consists of all sums of an even number of basis elements, and one checks immediately that its orthogonal complement is trivial. Hence we obtain a nondegenerate bilinear form with $t=0$.
2. $\varphi(v_i,v_j)=0, \forall i,j$; $\varphi(u_i,u_j)=0, \forall (i,j)\ne (0,0)$; $\varphi(u_0,u_0)=1$; $\varphi(v_i,u_j)=1, \forall i<j$; $\varphi(v_i,u_j)=0, \forall i\geq j$.
3. $\varphi(v_i,v_j)=0, \forall i,j$; $\varphi(u_i,u_j)=0, \forall (i,j)\ne (0,0)$; $\varphi(u_0,u_0)=1$; $\varphi(v_i,u_j)=1, \forall i>j$; $\varphi(v_i,u_j)=0, \forall i\leq j$.
One easily checks that this is a nondegenerate bilinear form with $t=-1$. Notice that $(\mu,<)$ can be embedded into $M$ via the formula $i<j\iff M\models \lnot \delta[(u_i,v_i),(u_j,v_j)]$ in the first case, and $i<j\iff M\models \delta[(u_i,v_i),(u_j,v_j)]$ in the second and third cases, where $\delta[(x,y),(z,w)]\iff (y,z,0)\in F$. Now by [@shelah1972combinatorial Theorem 2.6], for every $\mu>\aleph_0$ there are $2^{\mu}$ pairwise nonisomorphic models of $T$ of cardinality $\mu$. ◻
# Algebraic properties of Demushkin groups of uncountable rank {#algebraic-properties-of-demushkin-groups-of-uncountable-rank .unnumbered}
We start with arbitrary projective limits of Demushkin groups.
**Proposition 14**. *A projective limit of Demushkin groups of arbitrary rank is either Demushkin or free.*
*Proof.* Let $G={\lim_{\leftarrow}}_i\{G_i,f_i\}$ be a projective limit of Demushkin groups. By definition, for every $i$, $H^2(G_i)\cong \mathbb{F}_p$. Hence $H^2(G)=0$ or $H^2(G)\cong\mathbb{F}_p$. If $H^2(G)=0$ then $\operatorname{cd}(G)\leq 1$ and we get that $G$ is free. So assume $H^2(G)\cong\mathbb{F}_p$.
We can assume that the inflation maps $\operatorname{Inf}:H^2(G_i)\to H^2(G_j)$ are isomorphisms for all $i\leq j$, and hence the inflation maps $\operatorname{Inf}:H^2(G_i)\to H^2(G)$ are isomorphisms for all $i$. Since $H^1(G)={\lim_{\rightarrow}}_iH^1(G_i)$ and the cup product commutes with direct limits, we get that $(H^1(G),\cup)$ is nondegenerate. Hence $G$ is a Demushkin group. Now we want to show that both of these outcomes indeed occur. Obviously, every Demushkin group can be expressed as the inverse limit of copies of itself with isomorphisms. So we will construct an example of a projective limit of Demushkin groups which is free. Let $G=\lim G_n$ be a Demushkin group of rank $\aleph_0$ expressed as a strictly increasing projective limit of finitely generated Demushkin groups. This can be done due to [@labute1966demuvskin Theorem 1]. We claim that for every open subgroup $H_n\leq G_n$ there exists an open subgroup $H_{n+1}\leq G_{n+1}$ such that $\varphi(H_{n+1})=H_n$, where $\varphi$ is the given epimorphism $G_{n+1}\to G_n$, and $[G_{n+1}:H_{n+1}]>[G_n:H_n]$. For that it is enough to show that we may take $H_{n+1}\ne \varphi^{-1}(H_n)$. By [@ribes2000profinite Lemma 2.8.15] there is a minimal subgroup $K\leq G_{n+1}$ such that $\varphi(K)=H_n$, and such a $K$ satisfies $\ker(\varphi|_{K})\leq \Phi(K)$. So it is enough to show that $M=\varphi^{-1}(H_n)$ doesn't satisfy $\ker(\varphi|_{M})\leq \Phi(M)$, to conclude that it is not minimal. Assume $\ker(\varphi|_{M})\leq \Phi(M)$. Then $d(M)=d(H_n)$. However, for Demushkin groups we have a formula for the rank of an open subgroup: $d(M)=[G_{n+1}:M](d(G_{n+1})-2)+2>[G_n:H_n](d(G_n)-2)+2=d(H_n)$, since $[G_{n+1}:M]=[G_n:H_n]$ and $d(G_{n+1})>d(G_n)$, a contradiction. So we can take such a sequence of open subgroups of increasing index projecting onto each other, and get that their inverse limit is a closed subgroup of infinite index in a countably generated Demushkin group; hence by [@labute1966demuvskin Theorem 2] it is free. ◻
Now we present the most useful tool in the study of Demushkin groups of uncountable rank.
**Theorem 15**. *Every Demushkin group is an inverse limit of finitely generated Demushkin groups.*
*Proof.* Let $G=F / \langle r^F \rangle$ be a Demushkin group of infinite rank. Let $X$ be a free basis of $F$ and let $X^*$ be the dual basis to $X$ in $H^1(G)$. By definition, the vector space $H^1(G)$ has a nondegenerate cup product bilinear form. For a finite subset $J \subset X$, Proposition [Proposition 9](#locally nondegenerate){reference-type="ref" reference="locally nondegenerate"} gives a finite dimensional nondegenerate subspace $L=L_J$ of $H^1(G)$ which contains $J^*=\{x^*\}_{x\in J}\subseteq X^*$. Choose a basis $B^*_1$ for $L$ and enlarge it to a basis $B^*_1 \cup B^*_2$ of $H^1(G)$. Let $\bar B_1 \cup \bar B_2$ be the dual basis to $B^*_1 \cup B^*_2$ in $G/\Phi(G)$. Observe that $$\bar B_2 \subset \bigcap_{f \in L} \ker f \subset \bigcap_{f \in J^*} \ker f,$$ while $\cap_{f \in J^*} \ker f$ is the closed subgroup spanned by the images of $X-J$ in $G/\Phi(G)$. Now lift the basis $\bar B_1 \cup \bar B_2$ of $G/\Phi(G)$ to a basis $B_1 \cup B_2$ of $F$ such that $B_2$ belongs to the closed subgroup of $F$ generated by $X-J$. Let $F'$ be the free pro-$p$ group generated by $B_1$, and define an epimorphism $\varphi:F\to F'$ by letting $x \mapsto x$ for $x \in B_1$ and $x\mapsto 1$ for $x \in B_2$. Let $r'=\varphi(r)$. We get an epimorphism $G\to G_J:=F'/\langle r'^{F'} \rangle$, which we denote by $\varphi_J$. We claim that $G_J$ is a Demushkin group.
Indeed, as $G_J$ is constructed from a free pro-$p$ group by one relation, $\dim H^2(G_J)\leq 1$. Since $\varphi_J:G\to G_J$ is surjective, $\operatorname{Inf}:H^1(G_J)\to H^1(G)$ is injective. Hence it maps $H^1(G_J)$ isomorphically onto $\operatorname{span}\{B^*_1\}=L$ in $H^1(G)$. The cup product on $H^1(G_J)$ is induced from the cup product on $H^1(G)$ via the inflation map. It follows that the cup product $H^1(G_J)\cup H^1(G_J)\to H^2(G_J)$ is nondegenerate, and $\operatorname{Inf}:H^2(G_J)\to H^2(G)$ is not the zero map, which implies that $\dim H^2(G_J)=1$. Hence $G_J$ is a Demushkin group as claimed. In order to show that $G=\lim_{\leftarrow}\{G_J,\varphi_J\}$, the inverse limit of all the groups of this form, we need to show that the intersection of all the kernels $\bigcap \{\ker \varphi_J\}$ is trivial. Let $H_{J^c}$ be the closed normal subgroup of $G$ generated by the complement $J^c=X-J$ of $J$ in $X$. By construction $\ker \varphi_J$ is contained in $H_{J^c}$. It is a basic property of profinite groups that $\bigcap H_{J^c}$ is trivial, where $J$ runs over all finite subsets of $X$. The proof is complete. ◻
**Corollary 16**. *Let $G$ be a Demushkin group of arbitrary rank. Then $\operatorname{cd}(G)=2$, and every open subgroup of a Demushkin group is again Demushkin. Moreover, every closed subgroup of infinite index of a Demushkin group is free.*
*Proof.* Using Theorem [Theorem 15](#inverse limit){reference-type="ref" reference="inverse limit"}, this is identical to the proofs of [@labute1966demuvskin] Theorem 2 and the corollary to Theorem 1. ◻
Now we want to study the invariants $q(G),s(G),\operatorname{Im}(\chi)$ of a Demushkin group of arbitrary rank. The invariant $q(G)$ is defined as in the finite case: when we express $G$ as $F/\langle r^F\rangle$ for a free pro-$p$ group $F$, $q(G)$ is the maximal $q$ such that $r\in F^q[F,F]$, or $0$ if $r\in [F,F]$. Observe that as $\operatorname{cd}(G)=2$, $G$ has a dualizing module $I$. As explained in [@labute1966demuvskin page 4], $I=\mathbb{Q}_p/\mathbb{Z}_p$ or $I=\mathbb{Z}/q\mathbb{Z}$ for some $p$-power $q$. In the first case we set $s(G)=0$, while in the second case we set $s(G)=q$. The dualizing module comes equipped with a character $\chi:G\to \operatorname{Aut}(I)\cong (\mathbb{Z}_p/s(G)\mathbb{Z}_p)^{\times}$, hence we have the invariant $\operatorname{Im}(\chi)$. We shall need the properties $P$ and $Q$ introduced in [@labute1966demuvskin Section 5], as follows.
**Definition 17**. Let $M$ be an abelian group with automorphism group isomorphic to $(\mathbb{Z}_p/q\mathbb{Z}_p)^{\times}$ for $q$ a power of $p$ or 0, and let $\chi:G\to \operatorname{Aut}(M)$ be some homomorphism.
1. We say that $\chi$ satisfies property $P$ if the induced map $H^1(G,M)\to H^1(G,M/p)$ is onto.
2. We say that $M$ satisfies property $Q$ if there exists a unique homomorphism $\chi:G\to \operatorname{Aut}(M)$ which satisfies property $P$.
In [@labute1966demuvskin Section 5] it was proved that:
**Lemma 18**. *Let $G$ be a pro-$p$ Demushkin group. The character $\chi : G \to \operatorname{Aut}(I)$ associated to $G$ has property $P$. Moreover, if there exists a homomorphism $\chi:G\to \mathbb{Z}_p^{\times}$ with property $P$, then $s(G)=0$.*
A careful examination of the proof shows that it holds for Demushkin groups of arbitrary infinite rank. The next result [@labute1966demuvskin Proposition 12] applies to any pro-$p$ group $G$.
**Proposition 19**. *Let $M$ be a $G$-module which is isomorphic either to $\mathbb{Q}_p/\mathbb{Z}_p$ or to $\mathbb{Z}/q$ for some $p$-power $q$, and let $\chi:G\to \operatorname{Aut}(M)$ be the associated character.
Then $\chi$ has property $P$ if and only if any map from a minimal generating set of $G$ to $M$ converging to 1 can be extended to a crossed homomorphism $G\to M$.*
Now we give some results regarding the dualizing module of a Demushkin group of arbitrary rank.
**Lemma 20**. *Let $G$ be a Demushkin group of arbitrary rank, and let $I$ be the dualizing module of $G$. Then $\operatorname{Aut}(I)$ has property $Q$.*
*Proof.* Let $\chi:G\to \operatorname{Aut}(I)$ be the associated character. Then $\chi$ has property $P$. Expressing $\operatorname{Aut}(I)$ as an inverse limit of finite groups $(\mathbb{Z}/q_i)^{\times}$, one can express $\chi$ as an inverse limit of homomorphisms $\chi_i:G_i\to (\mathbb{Z}/q_i)^{\times}$, where the $G_i$ are finitely generated Demushkin groups. We claim that each $\chi_i:G_i\to (\mathbb{Z}/q_i)^{\times}$ satisfies property $P$. For this we will use the equivalent criterion in Proposition [Proposition 19](#equivalent criterion){reference-type="ref" reference="equivalent criterion"}. Indeed, let $x_1,...,x_n$ be a basis of $G_i$. We can lift it to a basis $\{x_j'\}$ of $G$ such that $x_j'\mapsto x_j$ for $j=1,...,n$ and $x_j'\mapsto 1$ otherwise. Recall that $\chi$ has property $P$. Now for every choice of elements $D(x_j)$, we can set $D(x_j')=D(x_j)$ if $1\leq j\leq n$ and $0$ otherwise, and get a crossed homomorphism $D$ on $G$ that factors through $G_i$. The same proof shows, in fact, that every homomorphism $\sigma:G\to \operatorname{Aut}(I)$ which has property $P$ can be presented as an inverse limit of homomorphisms $\sigma_i:G_i\to (\mathbb{Z}/q_i)^{\times}$ having property $P$. Thus it is enough to prove that $\mathbb{Z}/q$ has property $Q$ for every finitely generated Demushkin group. We will prove it for $p\ne2$; the proof for $p=2$ is similar, considering the possible forms of the relator $r$. Recall that for $p\ne2$, a finitely generated pro-$p$ Demushkin group has the form $F/\langle r^F\rangle$ for $r=x_1^q[x_1,x_2]\cdots [x_{2n-1},x_{2n}]$ or $r=[x_1,x_2]\cdots [x_{2n-1},x_{2n}]$. Taking $D_i$ to be the crossed homomorphisms defined by $D_i(x_j)=\delta_{ij}$, one concludes that $\chi(x_j)=1$ for all $j\ne 2$ and $\chi(x_2)=(1-q)^{-1}$ or $1$, correspondingly. Hence, we are done. ◻
Now we determine the possible invariants $\operatorname{Im}(\chi)$ for the Demushkin groups $G$ with $q(G)\ne 2$.
**Proposition 21**. *Let $\chi: G\to \mathrm{Aut}(I)=(\mathbb{Z}_p/s(G)\mathbb{Z}_p)^{\times}$ be the character of $G$ associated to the dualizing module. Then, if $q(G)\ne 2$, $\operatorname{Im}(\chi)=1+q(G)\mathbb{Z}_p/s(G)\mathbb{Z}_p$.*
*Proof.* Express $G=\lim_{\leftarrow}G_i$ as an inverse limit of finitely generated Demushkin groups. Since $G/G'=\lim_{\leftarrow}G_i/G_i'$, we get that if $q(G)\ne 0$, then we can assume that for every $i$, $q(G_i)=q(G)$. Otherwise $q(G_i)$ can be 0 for all $i$, or can be arbitrarily large. Express $\chi$ as an inverse limit of homomorphisms $\chi_i:G_i\to (\mathbb{Z}/q_i)^{\times}$ where the $G_i$ are finitely generated Demushkin groups. It is enough to show that for every $i$, $\operatorname{Im}(\chi_i)=1+q(G_i)\mathbb{Z}_p/s(G)$; the result then follows from a standard inverse limit argument. Indeed, as $\chi$ satisfies property $P$, so does each $\chi_i$. Hence, as in the proof of Lemma [Lemma 20](#property Q){reference-type="ref" reference="property Q"}, $\chi_i$ must satisfy $\chi_i(x_j)=1$ for all $j\ne 2$ and $\chi_i(x_2)=(1-q(G_i))^{-1}$ or $1$, according to the form of the relator.
Hence, $\operatorname{Im}(\chi_i)=1+q(G_i)\mathbb{Z}_p/s(G)\mathbb{Z}_p$. ◻
In the countable case a pro-$p$ Demushkin group is completely determined by $q(G)$ and $s(G)$ whenever $q(G)\ne 2$. This is far from being the case in uncountable rank.
**Proposition 22**. *Let $p$ be a prime. For every nondegenerate alternate bilinear form $(V,\varphi)$ over $\mathbb{F}_p$ and every $q\ne 2$ equal to a power of $p$ or 0, there is a Demushkin group $G$ with $q(G)=q$ and $s(G)=0$ whose cup-product bilinear form $H^1(G)\cup H^1(G)\to H^2(G)$ is isomorphic to $(V,\varphi)$.*
*Proof.* Let $(V,\varphi)$ be a nondegenerate alternate bilinear form. Let $0\ne v_1\in V$. Since $\varphi(v_1,v_1)=0$, there exists some $v_2\ne v_1$ such that $\varphi(v_1,v_2)\ne 0$, and we can assume $\varphi(v_1,v_2)=1$. Since $\varphi$ is alternate, $\{v_1,v_2\}$ is a symplectic basis of $V'=\operatorname{span}\{v_1,v_2\}$, and in particular $V'$ is nondegenerate. By Lemma [Lemma 8](#Projections){reference-type="ref" reference="Projections"} we can complete $\{v_1,v_2\}$ to a basis of $V$ by taking a basis for the orthogonal complement of $V'$. Now we construct a Demushkin group as follows: Let $F$ be a free pro-$p$ group over $\{x_i\}_{i\in \dim V}$, and define $r=x_1^q[x_1,x_2]\prod_{1\ne i<j} [x_i,x_j]^{\alpha_{ij}}$ or $r=[x_1,x_2]\prod_{1\ne i<j} [x_i,x_j]^{\alpha_{ij}}$, where $\alpha_{ij}=\varphi(v_i,v_j)$ for all $1\ne i<j$. Then the bilinear form of $(H^1(G),\cup)$ is isomorphic to $(V,\varphi)$ and $q(G)=q$. We will show that $s(G)=0$. By Lemma [Lemma 18](#sufficient criterion for s(G)=0){reference-type="ref" reference="sufficient criterion for s(G)=0"}, it suffices to build a homomorphism $\sigma:G\to \mathbb{Z}_p^{\times}$ with property $P$. We do it by setting $\sigma(x_2)=(1-q)^{-1}$ and $\sigma(x_i)=1$ for all $i\ne 2$ in the first case, and by taking the trivial map in the second case. One checks that every crossed homomorphism $D:F\to \mathbb{Z}_p$ vanishes on $r$, and hence property $P$ is satisfied. ◻
Proposition [Proposition 22](#group for every form){reference-type="ref" reference="group for every form"} and Theorem [\[maximal number of forms\]](#maximal number of forms){reference-type="ref" reference="maximal number of forms"} together imply:
**Theorem 23**. *Let $p$ be a prime number. For every cardinal $\mu>\aleph_0$ and every $q\ne 2$ equal to a power of $p$ or 0, there are $2^{\mu}$ pairwise nonisomorphic pro-$p$ Demushkin groups $G$ with $q(G)=q$ and $s(G)=0$.*
*Proof.* The only thing left to observe is that $G_1\cong G_2$ implies an isometry of the bilinear forms $(H^1(G_1),\cup)\cong (H^1(G_2),\cup)$, but this is straightforward. ◻
For the case of $p=2$ and $q(G)=2$ we have an analogous result. First we give a list of the subgroups of $\mathbb{Z}_2^{\times}\cong \{\pm1\}\times (1+4 \mathbb{Z}_2)$. Let $f\geq 2$ be an integer or $\infty$, and define $2^{\infty}=0$. Set $U^{(f)}_2=\langle 1+2^f\rangle$ and $U^{[f]}_2=\langle -1+2^f\rangle$. Then every closed subgroup equals one of the following, for some integer $f\geq 2$ or $f=\infty$.
1. $U^{(f)}_2$.
2. $U^{[f]}_2$.
3. $\{\pm 1\}\times U^{(f)}_2$.
**Theorem 24**.
*For every uncountable cardinal $\mu$, $t\in \{-1,0,1\}$, and $f\geq 2$, there are $2^{\mu}$ pairwise nonisomorphic pro-$2$ Demushkin groups with $q(G)=2$, $s(G)=0$ and the following pairs of invariants, where $t(G)$ denotes $t(\cup)$ for the cup product bilinear form $H^1(G)\cup H^1(G)\to\mathbb{F}_2$:*
- *$t(G)=1$, $\operatorname{Im}(\chi)=U^{[f]}_2$.*
- *$t(G)=1$, $\operatorname{Im}(\chi)=\{\pm 1\}\times U^{(f)}_2$.*
- *$t(G)=-1$, $\operatorname{Im}(\chi)=\{\pm 1\}\times U^{(f)}_2$.*
*Proof.* For every bilinear form $\varphi$ over $\mathbb{F}_2$ with $t(\varphi)=1$, and every $A\leq\mathbb{Z}_2^{\times}$ of the form $U^{[f]}_2$ or $\{\pm 1\}\times U^{(f)}_2$, we construct a pro-2 Demushkin group $G$ with $s(G)=0$, whose cup product bilinear form is isomorphic to $\varphi$ and $\operatorname{Im}(\chi)=A$. In addition, for every $\varphi$ over $\mathbb{F}_2$ with $t(\varphi)=-1$, and every $A\leq\mathbb{Z}_2^{\times}$ of the form $\{\pm 1\}\times U^{(f)}_2$, we construct a pro-2 Demushkin group $G$ with $s(G)=0$, whose cup product bilinear form is isomorphic to $\varphi$ and $\operatorname{Im}(\chi)=A$. Then by Theorem [Theorem 13](#forms over F_2){reference-type="ref" reference="forms over F_2"}, we are done.
- Case 1: Let $\varphi$ be a nonalternate bilinear form on $V$ of dimension $\mu$ over $\mathbb{F}_2$ such that $A=\ker(\beta)\ne V$ and $0\ne A^{\perp}\subseteq A$. Let $v_1$ be such that $\varphi(v_1,v_1)=1$ and let $v_2\in A^{\perp}$. Then $\varphi(v_1,v_2)\ne 0$ and we can assume $\varphi(v_1,v_2)=1$. In addition, $\varphi(v_2,v_2)=0$. Complete $v_2$ to a basis $\{v_i\}_{i\geq 2}$ of $A$; then $\{v_i\}_{i\geq 1}$ is a basis of $V$. Replacing each $v_i$, $i\geq 3$, by $v_i'=v_i+\varphi(v_i,v_1)v_2$, we get that $\varphi(v_1,v_i')=0$ for all $i\geq 3$. So we can present the form as an orthogonal sum of nondegenerate subspaces $\operatorname{span}\{v_1,v_2\}\perp \operatorname{span}\{v_i'\}_{i\geq 3}$. Define a pro-2 group $G$ with the relation $x_1^{2+2^{f}}[x_1,x_2]\prod_{2<i<j} [x_i,x_j]^{\varphi(v_i',v_j')}$. We can define a map $G\to \mathbb{Z}_2^{\times}$ via $x_2\mapsto -(1+2^{f})^{-1}$, $x_i\mapsto 1$ for all $i\ne 2$. Then the image is $U_2^{[f]}$, and one checks that every crossed homomorphism from the free pro-2 group on $\{x_i\}$ vanishes on $r$, so it has property $P$.
- Case 2: We start as in the previous case. Now we may assume that $\varphi(v_3',v_4')=1$. Replace each $v_i'$, $i>4$, by $v_i''$, its component orthogonal to $\operatorname{span}\{v_3',v_4'\}$. Define a pro-2 group with the relation $x_1^2[x_1,x_2]x_3^{2^f}[x_3,x_4]\prod_{4<i<j} [x_i,x_j]^{\varphi(v_i'',v_j'')}$. We can define a map $G\to \mathbb{Z}_2^{\times}$ via $x_2\mapsto -1$, $x_4\mapsto (1-2^f)^{-1}$, $x_i\mapsto 1$ for all $i\ne 2,4$. Then the image is $\{\pm 1\}\times U_2^{(f)}$, and one checks that every crossed homomorphism from the free pro-2 group on $\{x_i\}$ vanishes on $r$, so it has property $P$.
- Case 3: Let $\varphi$ be a bilinear form on $V$ of dimension $\mu$ over $\mathbb{F}_2$ such that $t(\varphi)=-1$. Then there is an element $v_1\notin \ker(\beta)$ with $v_1\perp \ker(\beta)$. By Lemma [Lemma 8](#Projections){reference-type="ref" reference="Projections"}, $V$ is an orthogonal sum $\operatorname{span}\{v_1\}\perp \ker(\beta)$. Since $\ker(\beta)$ is nondegenerate and alternate, we may choose $v_2,v_3\in\ker(\beta)$ with $\varphi(v_2,v_3)=1$, so that $\ker(\beta)$ is an orthogonal sum $\operatorname{span}\{v_2,v_3\}\perp V'$. Define $r=x_1^2x_2^{2^f}[x_2,x_3]\prod_{3<i<j} [x_i,x_j]^{\varphi(v_i,v_j)}$, and let $\chi$ be the character defined by $\chi(x_1)=-1$, $\chi(x_3)=(1-2^f)^{-1}$, $\chi(x_i)=1$ for all $i\ne 1,3$.
Then the image is $\{\pm 1\}\times U_2^{(f)}$, and one checks that every crossed homomorphism from the free pro-2 group on $\{x_i\}$ vanishes on $r$, so it has property $P$.
- $t=0$: Let $\varphi$ be a bilinear form on $V$ of dimension $\mu$ over $\mathbb{F}_2$ such that $t(\varphi)=0$. Build a Demushkin pro-2 group by the relation $r=\prod[x_i,x_j]^{\varphi(v_i,v_j)}$ for some basis $\{v_i\}$ of $V$.  ◻
For finitely generated Demushkin groups we have the following equivalence, which in fact holds in the more general context of Poincaré duality groups:
**Theorem 25**. *[@neukirch2013cohomology Theorem 3.7.2] Let $G$ be a finitely generated pro-$p$ group. The following are equivalent:*
1. *$G$ is a Demushkin group.*
2. *$\operatorname{cd}(G)=2$ and $I\cong \mathbb{Q}_p/\mathbb{Z}_p$, where $I$ stands for the dualizing module.*
3. *$\operatorname{cd}(G)=2$ and $_pI\cong \mathbb{F}_p$, where $_pI$ denotes the additive subgroup of $I$ of elements of order dividing $p$.*
In the infinite rank case there are examples of Demushkin groups whose dualizing module is different from $\mathbb{Q}_p/\mathbb{Z}_p$. We will give such examples for every rank and every value of $q(G)$ in Proposition [Proposition 29](#demushkin groups with s not 0){reference-type="ref" reference="demushkin groups with s not 0"}. However, we can still prove the following equivalence:
**Theorem 26**. *Let $G$ be a pro-$p$ group. The following are equivalent:*
1. *$G$ is a Demushkin group.*
2. *$\operatorname{cd}(G)=2$ and $_pI\cong \mathbb{F}_p$, where $I$ is the dualizing module of $G$ and $_pI$ denotes the additive subgroup of $I$ consisting of elements of order dividing $p$.*
*Proof.* That 1 implies 2 was already observed in [@labute1966demuvskin Section 1.3] for Demushkin groups of countable rank. However, since by Corollary [Corollary 16](#open subgroups){reference-type="ref" reference="open subgroups"} every open subgroup of a Demushkin group of arbitrary rank is Demushkin, the same argument works in general. We now prove that $2$ implies $1$. Recall that by the duality property of $I$, for every discrete $p$-torsion $G$-module $A$, $H^2(G,A)^*\cong \operatorname{Hom}_G(A,I)$. In particular, if $A$ is annihilated by $p$, then $H^2(G,A)^*\cong \operatorname{Hom}_G(A,{_pI})$. By assumption, we get $H^2(G,A)^*\cong \operatorname{Hom}_G(A,\mathbb{F}_p)$. Again, for $G$-modules annihilated by $p$, $\operatorname{Hom}_G(A,\mathbb{F}_p)=\operatorname{Hom}_G(A,\mathbb{Q}_p/\mathbb{Z}_p)$, hence $H^2(G,A)^*\cong \operatorname{Hom}_G(A,\mathbb{Q}_p/\mathbb{Z}_p)\cong (A^*)^G\cong H^0(G,A^*)$. As a result, $H^2(G,\mathbb{F}_p)\cong H^0(G,\mathbb{F}_p^*)^*\cong \mathbb{F}_p$. By definition of the dualizing module, the isomorphism is induced by the cup product pairing $H^2(G,A)\cup H^0(G,A^*)\to H^2(G,\mathbb{Q}_p/\mathbb{Z}_p)\xrightarrow{\operatorname{tr}} \mathbb{Q}_p/\mathbb{Z}_p$. Notice that for finite modules we can reverse the isomorphism by taking $A^*$ instead of $A$, i.e., the cup product induces an isomorphism $H^0(G,A)\to H^2(G,A^*)^*$. Since every discrete $G$-module is a direct limit of finite modules, this isomorphism holds also for infinite discrete $G$-modules annihilated by $p$. We are left to show that the cup product bilinear form $H^1(G)\cup H^1(G)\to \mathbb{F}_p$ is nondegenerate. This is equivalent to saying that the homomorphism $H^1(G,\mathbb{F}_p)\to \operatorname{Hom}(H^1(G),\mathbb{F}_p)\cong H^1(G)^*$ induced by the cup product is injective.
The last isomorphism follows from the fact that $H^1(G)$ is annihilated by $p$. Notice that since $\mathbb{F}_p\cong \mathbb{F}_p^*$ as $G$-modules, this is equivalent to the injectivity of the map induced by the cup product $H^1(G,\mathbb{F}_p)\to H^1(G,\mathbb{F}_p^*)^*$. Now look at the following exact sequence: $0\to \mathbb{F}_p\to \operatorname{Ind}^G(\mathbb{F}_p)\to A\to 0$. All the $G$-modules are discrete and annihilated by $p$. Taking its dual, we get the following commutative diagram, where the vertical arrows are induced by the cup product. $$\xymatrix@R=14pt{ H^0(G,\operatorname{Ind}^G(\mathbb{F}_p)) \ar[d] \ar[r] & H^0(G,A) \ar[d] \ar[r] & H^1(G,\mathbb{F}_p) \ar[d] \ar[r] & 0 \ar[d]\\ H^2(G,\operatorname{Ind}^G(\mathbb{F}_p)^*)^* \ar[r] & H^{2}(G,A^*)^* \ar[r] & H^{1}(G,\mathbb{F}_p^*)^* \ar[r]& H^1(G,\operatorname{Ind}^G(\mathbb{F}_p)^*)^* \\}$$ Since the first two vertical maps are isomorphisms, a diagram chase yields the injectivity of the third. ◻
We finish this section with the following characterization of Demushkin groups, which generalizes the result in [@dummit1983demuvskin] for finitely generated Demushkin groups.
**Theorem 27**. *Let $G$ be a one-relator pro-$p$ group. Then $G$ is a Demushkin group if and only if every open subgroup $U\leq G$ of index $p$ is one-relator.*
*Proof.* By Corollary [Corollary 16](#open subgroups){reference-type="ref" reference="open subgroups"} every open subgroup of a Demushkin group is again a Demushkin group, and in particular one-relator. Hence, we only need to prove the second direction. Since $G$ is one-relator, $G=F/\langle r^F\rangle$ for some free pro-$p$ group $F$ and $r\in \Phi(F)$. Assume that $G$ is not Demushkin. Then there exists a nontrivial element in the radical of the cup product bilinear form; call it $\chi_1 \in H^1(G)$.
Case 1: Suppose that $G/[G,G]$ is torsion free. Complete $\chi_1$ to a basis $(\chi_i)$ of $H^1(G)$ and take a basis $(x_i)_i$ of $F$ which maps onto the dual basis $(\chi_i^*)_i$ of $G/\Phi(G)$. We notice that $r\equiv r'\mod [[F,F],F]$, where $r'$ is a product of commutators not involving $x_1$.
Case 2: Suppose that $G/[G,G]$ has torsion subgroup $\mathbb{Z}/q\mathbb{Z}$ where $q=p^n$ for some $n \in \mathbb N$. There exists an element $s\in F$, unique modulo $[F,F]$, such that $s^{p^n}\equiv r\mod [F,F]$. In that case, take a basis for $F$ which contains $s$, and put $x_1=s$.
Subcase 2(a): Suppose $\chi_1(x_1)\ne 0$. We may assume that $\chi_1(x_1)=1$. By choosing a basis of the annihilator of $x_1$ in $H^1(G)$ and adding $\chi_1$ to it, we get a basis of $H^1(G)$ whose dual basis contains $x_1$. Since $x_1^*=\chi_1\in \operatorname{rad}(H^1(G))$, we get that $r\equiv x_1^qr'\mod [[F,F],F]$ where $r'$ is a product of commutators not involving $x_1$.
Subcase 2(b): Suppose that $\chi_1(x_1)=0$. Then complete $\chi_1$ to a basis of $\textrm{Ann}_{H^1(G)}(x_1)$, and complete it to a basis $B$ of $H^1(G)$ by adding some $\chi'\in H^1(G)$ such that $\chi'(x_1)=1$. We can choose $x_2 \in F$ such that the images of $x_1$ and $x_2$ are dual to $\chi'$ and $\chi_1$ respectively in the dual basis $B^*$ of $G/\Phi(G)$. We get that $r\equiv x_1^qr'\mod [[F,F],F]$ where $r'$ is a product of commutators not involving $x_2$.
Set $y=x_1$ in Case 1 and Case 2(a), and set $y=x_2$ in Case 2(b). For every finite subset $J$ of the chosen basis of $F$, let $F_J$ be the free pro-$p$ group over $J$ and let $r_J$ be the image of $r$ under the map $F\to F_J$ which sends $x_i\mapsto x_i$ for all $x_i\in J$ and $x_i\mapsto 1$ otherwise.
Then $G=\lim_{\leftarrow}G_J$, where $J$ runs over the finite subsets of the chosen basis which contain $y$ and $G_J=F_J/\langle r_J^{F_J} \rangle$. Notice that $y$ doesn't appear in the commutators in $r_J$, and hence the image of $\chi_1$ in $H^1(G_J)$ belongs to the radical of the cup product. Hence, $G_J$ is a finitely generated one-relator pro-$p$ group which is not Demushkin, and we have set up the notation for $r_J$ to apply the argument in [@dummit1983demuvskin]. Let $N_J$ be the open subgroup of $F_J$ generated by $y^p$ and the $x_i$ for all $y\ne x_i\in J$, and take $U_J=N_J/R_J$ where $R_J$ is the normal subgroup of $F_J$ generated by $r_J$. Correspondingly we have $N\leq F$ and $U=N/R$. Notice that $R_J$ is generated as a normal subgroup of $N_J$ by $r_{J,i}=r_J^{y^i}$ for $0\leq i\leq p-1$, and the same holds for $R$ and $N$. By [@dummit1983demuvskin], $\dim R_J/R_J^p[N_J,R_J]=p$ and the $r_{J,i}$ form a basis. Hence the same holds for $R/R^p[N,R]$. We have two options: if $r\in \Phi(N)$ then $N/R$ is a minimal presentation of $U$ and we get $\dim H^2(U)=p$. Otherwise, by [@dummit1983demuvskin], for every $J$ there is $1\leq s_J\leq p-2$ such that $r_{J,1},...,r_{J,s_J}$ are linearly independent modulo $\Phi(N_J)$ and the rest of the elements lie in $\Phi(N_J)$. Let $s=\max s_J$. If $s_J=s$ for some $J$, then $s_{J'}=s$ for every $J'\supseteq J$, so we can assume $s_J=s$ for all $J$. Hence, $r_1,...,r_s$ are linearly independent modulo $\Phi(N)$, and the rest belong to $\Phi(N)$. Thus, $\dim H^1(N)-\dim H^1(U)=s$. By the exact sequence $1\to R\to N\to U\to 1$ we get $$0\to H^1(U)\to H^1(N)\to H^1(R)^N\to H^2(U)\to 0.$$ We already know that $\dim H^1(R)^N=p$. Hence $$\dim H^2(U)=\dim H^1(R)^N- \dim H^1(N)+\dim H^1(U)=p-s>1.$$ ◻
# Demushkin groups of uncountable rank as absolute Galois groups {#demushkin-groups-of-uncountable-rank-as-absolute-galois-groups .unnumbered}
Let $F$ be a field. Denote by $G_F$ its absolute Galois group, and by $G_F(p)$ its maximal pro-$p$ quotient. Notice that $G_F(p)$ is isomorphic to $\operatorname{Gal}(F(p)/F)$, where $F(p)$ denotes the maximal $p$-extension of $F$, i.e., the compositum of all its finite $p$-extensions, and hence it is sometimes called the maximal pro-$p$ Galois group of $F$ (see, for example, [@blumer2023groups]). Fix a prime $p$ and assume from now on that $F$ contains a primitive $p$-th root of unity $\rho$. Notice that this implies that $\operatorname{char}(F)\ne p$. This assumption imposes no restriction on our results, since by [@serre1979galois §2. Proposition 3] for fields $F$ of characteristic $p$, $G_F(p)$ is a free pro-$p$ group, and in particular not Demushkin. For $G_F$ we have an arithmetic interpretation of the first cohomology groups. More precisely: $H^1(G_F)\cong F^{\times}/{F^{\times}}^p$, and $H^2(G_F)\cong {_p\operatorname{Br}}(F)$, where $\operatorname{Br}(F)$ stands for the Brauer group of $F$, and $_p\operatorname{Br}(F)$ denotes the subgroup of elements of order dividing $p$. We also have an interpretation of the cup product which is due to Serre: By the Kummer isomorphism $H^1(G_F) \cong F^{\times}/{F^\times}^p$, every element in $H^1(G_F)$ is represented by an element of $F^{\times}$. We denote the elements of $H^1(G_F)$ by $(a)$ where $a\in F^{\times}$. Now we have the following formula: $(a)\cup(b)=\left (\dfrac{a,b}{F,\rho}\right )$, where $\left (\dfrac{a,b}{F,\rho} \right )$ denotes the cyclic algebra generated by two elements $s,t$ over $F$ subject to the relations $s^p=a$, $t^p=b$ and $st=\rho ts$.
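As a simple illustration of this formula (a standard example, recorded here only for orientation and not needed in the sequel): for $p=2$ one may take $\rho=-1$, and $\left(\dfrac{a,b}{F,-1}\right)$ is the quaternion algebra generated by $s,t$ with $s^2=a$, $t^2=b$ and $st=-ts$. Over $F=\mathbb{R}$, where $G_{\mathbb{R}}\cong C_2$, we get
$$(-1)\cup(-1)=\left(\dfrac{-1,-1}{\mathbb{R},-1}\right)=\mathbb{H}\ne 1\quad\text{in }{_2\operatorname{Br}}(\mathbb{R}),$$
since the Hamilton quaternions form a division algebra; thus the cup product square of the class of $-1$ is the nonzero element of $H^2(G_{\mathbb{R}})\cong \mathbb{F}_2$.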
One of the central problems of number theory is identifying which profinite groups can occur as absolute Galois groups. Since pro-$p$ groups are easier to deal with, a simpler version of this question is: which pro-$p$ groups can occur as maximal pro-$p$ Galois groups of fields? Recall that by the Artin-Schreier theorem the only nontrivial finite group which can occur as an absolute Galois group is $C_2$, which is the only finite Demushkin group. A few restrictions on the possible properties of absolute, and maximal pro-$p$, Galois groups are already known: Let $F$ be a field containing a primitive root of unity, and let $G_F$ be its absolute Galois group. Then $G_F$ has a natural action on $\mu_{p^{\infty}}=\bigcup_{n\in \mathbb{N}} \mu_{p^n}\subseteq \bar{F}$, the subset of all $p^n$-th roots of unity. Since $\mu_{p^{\infty}}\cong \mathbb{Q}_p/\mathbb{Z}_p$, this action induces a homomorphism $f: G_F\to \operatorname{Aut}(\mathbb{Q}_p/\mathbb{Z}_p)\cong \mathbb{Z}_p^{\times}$. Since $\rho \in F$, the image of $f$ is a pro-$p$ group and $f$ induces a homomorphism $G_F(p) \to \mathbb{Z}_p^{\times}$. In [@minavc1992pro Theorem 2.2] it was shown that this homomorphism has property $P$. The same conclusion was proved in [@minavc1991demuvskin] also in the case where $F$ doesn't contain a primitive $p$-th root of unity. Hence, applying Lemma [Lemma 18](#sufficient criterion for s(G)=0){reference-type="ref" reference="sufficient criterion for s(G)=0"}, we conclude:
**Proposition 28**. *Let $G$ be a pro-$p$ Demushkin group which is isomorphic to the maximal pro-$p$ Galois group of some field $F$. Then $s(G)=0$.*
Obviously, this is not always the case, as is shown in the following proposition:
**Proposition 29**. *For every prime $p$, $q$ a power of $p$ or $0$, $q'>q$ a power of $p$, and $\mu\geq\aleph_0$ a cardinal, there exists a pro-$p$ Demushkin group of rank $\mu$ such that $q(G)=q$ and $s(G)=q'$.*
*Proof.* Let $G\cong F/\langle r^F \rangle$ where $F$ is the free pro-$p$ group generated by $\{x_i\}_{i<\mu}$ and $r=x_0^q[x_0,x_1]\prod x^{q'}_{\lambda+2n}[x_{\lambda+2n},x_{\lambda+2n+1}]$, where the product runs over all limit ordinals $\lambda<\mu$ and all $n\in\omega$ with $(\lambda,n)\ne(0,0)$. One immediately sees that $G$ is a pro-$p$ Demushkin group of rank $\mu$ and $q(G)=q$. We will prove that $s(G)=q'$. This is done in the same way as the proof of [@labute1966demuvskin Theorem 4]. First assume that there exists a homomorphism $\sigma:G\to (\mathbb{Z}_p/q'')^{\times}$ satisfying property $P$ for some $q''$ a power of $p$ or 0. Applying the crossed homomorphisms $D_i$ given by $D_i(x_j)=\delta_{ij}$ for all $i<\mu$, and using that they vanish on $r$, we deduce that $\sigma(x_1)=(1-q)^{-1}$, $\sigma(x_{\lambda+2n})=1$ and $\sigma(x_{\lambda+2n+1})=(1-q')^{-1}$ for all limit $\lambda<\mu$ and all $n\in \omega$ with $(\lambda,n)\ne(0,0)$. Since $\sigma$ is continuous, $\sigma|_{\{x_i\}}$ must converge to 1, i.e., only finitely many of the values are allowed to lie outside a given open subgroup. That can be achieved only when $q''\leq q'$. In particular, since the character induced by the dualizing module has property $P$, we conclude that $0\ne s(G)\leq q'$. On the other hand, we can construct a homomorphism $\sigma: G\to (\mathbb{Z}/q')^{\times}$ by letting $\sigma(x_1)=(1-q)^{-1}$ and $\sigma(x_i)=1$ if $i \not =1$. One checks that every crossed homomorphism $D:F\to \mathbb{Z}/q'$ vanishes on $r$, and hence $\sigma$ has property $P$. By the proof of [@labute1966demuvskin Theorem 4], this implies $q'\leq s(G)$ and we are done.
◻
In [@minavc1991demuvskin] it was proved that for every prime $p$ and $q\ne 2$ a power of $p$ or 0, a Demushkin group $G$ of rank $\aleph_0$ with $q(G)=q$ can be realized as a maximal pro-$p$ Galois group of some field if and only if $s(G)=0$. Unfortunately, the proof relies on the classification of pro-$p$ Demushkin groups of countable rank by these invariants, a classification which is far from being available in uncountable rank. However, we can point to a strong connection between general Demushkin groups and absolute Galois groups, as shown in the following theorem. The case $q=2$ is more delicate and we will deal with it later.
**Theorem 30**. *Let $p$ be a prime number, $q\ne 2$ be a power of $p$ or $0$, and $\mu>\aleph_0$ a cardinal. There exists a field $F$ whose absolute Galois group is a Demushkin group $G$ of rank $\mu$ with $q(G)=q$.*
The proof will be done in several steps. First we need the following useful lemma, which is an immediate consequence of [@pierce1982associative Proposition 19.6]:
**Lemma 31**. *Let $A$ be a nontrivial algebra in $\operatorname{Br}(K)$, and let $K(x)$ be a transcendental extension of $K$. Then the scalar extension algebra $A_{K(x)}:=A\otimes_KK(x)$ represents a non-trivial element of $\mathrm{Br}(K(x))$.*
**Proposition 32**. *For every prime $p$, $q\ne 2$ a power of $p$ or 0, and every infinite cardinal $\mu$, there exists a field $K$ which satisfies:*
1. *$|K^{\times}/{K^{\times}}^p|=\mu$.*
2. *$_p\operatorname{Br}(K)\ne 0$.*
3. *$K\cap \mu_{p^{\infty}}=\mu_q$ if $q\ne 0$, or $\mu_{p^{\infty}}\subseteq K$ if $q=0$.*
*Proof.* Let $K'=\mathbb{Q}(\mu_q)$ for $q\ne 0,2$, or $K'=\mathbb{Q}(\mu_{p^{\infty}})(x,y)$ for $q=0$. By [@minavc1991demuvskin], $H^2(G_{K'})\cong {_p\operatorname{Br}}(K')\ne 0$. We choose some $0\ne A\in {_p\operatorname{Br}}(K')$. Now let $K=K'(\{x_i\}_{i\in \mu})$ be the field constructed from $K'$ by recursion as follows: For every $\lambda<\mu$ we define:
- If $\lambda=\gamma+1$ then $K'_{\lambda}=K'_{\gamma}(x_{\lambda})$ for $x_{\lambda}$ a transcendental element over $K'_{\gamma}$.
- If $\lambda$ is a limit ordinal, then $K'_{\lambda}=\left(\bigcup_{\gamma<\lambda} K'_{\gamma}\right)(x_{\lambda})$ for $x_{\lambda}$ a transcendental element over $\bigcup_{\gamma<\lambda} K'_{\gamma}$ (for $\lambda=0$ the union is interpreted as $K'$).
Let $K=\bigcup_{\lambda<\mu} K'_{\lambda}$. We claim that $K$ satisfies the required properties:
1. On one hand, $|K^{\times}/{K^{\times}}^p|\leq |K|=\mu$. On the other hand, we claim that $\{x_i{K^{\times}}^p\}_{i\in \mu}$ is an independent subset. Indeed, assume there is a finite subset $J=\{i_1,...,i_n\}\subseteq\mu$ such that $x_{i_1}^{\alpha_1}\cdots x_{i_n}^{\alpha_{n}}\in {K^{\times}}^p$ with $\alpha_i \in \mathbb N \backslash p\mathbb N$. It means that there is a finite subset $J'=\{x_{j_1},...,x_{j_m}\}$ such that $x_{i_1}^{\alpha_1}\cdots x_{i_n}^{\alpha_{n}}=\left(\dfrac{\sum k_vx_{j_1}^{v_1}\cdots x_{j_m}^{v_m}}{\sum k_ux_{j_1}^{u_1}\cdots x_{j_m}^{u_m}}\right)^p$ where $v_i,u_i\in \mathbb{N}\cup\{0\}$, $k_u, k_v \in K'$ and the sums are finite. Taking $t=\max(J\cup J')$ we get a contradiction to the fact that $x_t$ is transcendental over $\bigcup_{\lambda<t} K'_{\lambda}$.
2. Let $0\ne A\in {_p\operatorname{Br}}(K')$. We prove that $A_K=A \otimes_{K'}K \ne 0$ by transfinite induction. Let $\lambda<\mu$ and assume that for every $\gamma<\lambda$, $A_{K'_{\gamma}}\ne 0$. If $\lambda$ is a successor, then we get the result by Lemma [Lemma 31](#useful lemma){reference-type="ref" reference="useful lemma"}. Otherwise, we claim that $A_{\bigcup_{\gamma<\lambda} K'_{\gamma} }\ne 0$.
Indeed, assuming $A_{\bigcup_{\gamma<\lambda} K'_{\gamma}}$ is trivial, it admits $n^2$ matrix units, where $n=\deg A$. Every matrix unit is an expression of the form $\sum_{i=1}^{n^2} k_ia_i$ with $a_i\in A$, and hence belongs to some $A_{K'_{\gamma}}$ for $\gamma<\lambda$. Hence there is some $\gamma<\lambda$ such that $A_{K'_{\gamma}}$ contains a set of $n^2$ matrix units, and thus it is trivial, a contradiction. Applying Lemma [Lemma 31](#useful lemma){reference-type="ref" reference="useful lemma"} we get $A_{K'_{\lambda}}\ne 0$. Applying the same argument to $K=\bigcup_{\lambda<\mu} K'_{\lambda}$, the result follows.
3. Obviously, if $\rho_q\in K'$ then $\rho_q\in K$. Conversely, let $q'$ be a power of $p$ such that $\rho_{q'}\notin K'$, and assume by contradiction that $\rho_{q'}\in K$. Then there is some finite set $J=\{x_{j_1},...,x_{j_m}\}\subseteq \mu$ such that $\left(\dfrac{\sum k_vx_{j_1}^{v_1}\cdots x_{j_m}^{v_m}}{\sum k_ux_{j_1}^{u_1}\cdots x_{j_m}^{u_m}}\right)^{q'}=1$, where all the scalars are nonzero. Since $\rho_{q'}\notin K'$, there exists some $i$ such that one of the $v_i$'s or $u_i$'s is different from 0. Take the maximal such $i$; we get a contradiction to the transcendence of $x_{j_i}$.  ◻
Now we follow the proof of [@minavc1991demuvskin Main Theorem] with the necessary adjustments for uncountable rank.
**Lemma 33**. *[@minavc1991demuvskin Lemma 2.1] Let $a\in K^{\times}\setminus {K^{\times}}^p$, $A\in {_p\operatorname{Br}}(K)$ and $x$ a transcendental element over $K$. Then $A_{K(x)}$ and $\left (\dfrac{a,x}{K(x),\rho} \right)$ generate distinct nontrivial subgroups in $\operatorname{Br}(K(x))$.*
**Lemma 34**. *Let $A\in {_p\operatorname{Br}}(K)$ and let $B$ be an algebra in $\operatorname{Br}(K)$ whose order is a multiple of $p$. If $A\notin \langle B\rangle$ then there is an extension $L$ of $K$ of the same cardinality as $K$ such that $A_L=B_L\ne 1$ in $\operatorname{Br}(L)$.*
*Proof.* This is the same proof as that of [@minavc1991demuvskin Lemma 2.2], noticing that a generic splitting field over $K$ has the same cardinality as $K$. ◻
**Lemma 35**. *[@minavc1991demuvskin Lemma 2.3].[\[step 3\]]{#step 3 label="step 3"} Let $a\in K^{\times}\setminus{K^{\times}}^p$, $A\in {_p\operatorname{Br}}(K)$ and let $K_1$ be the field obtained from $K$ by first adjoining a transcendental $x$ and then forming the generic splitting field of $A_{K(x)}\otimes \left( \dfrac{a^{-1},x}{K(x),\rho}\right)$. If $b\in K^{\times}\setminus {K^{\times}}^p$ then $b\notin K_1^p$.*
**Proposition 36**. *Let $K$ be a field of cardinality $\mu$ and $0\ne A\in {_p\operatorname{Br}}(K)$. Then there is a field extension $L$ of $K$ of the same cardinality, such that*
1. *For each $a\in K^{\times} \setminus {K^{\times}}^p$ there exists $h(a)\in L$ with $\left(\dfrac{a,h(a)}{L,\rho}\right)=A_L\ne 1$, and*
2. *if $B\in \operatorname{Br}(K)$ has order divisible by $p$ then either $A_L\in \langle B_L\rangle$ or the order of $B_L$ is coprime to $p$.*
*Proof.* Let $S=\{a_{\alpha}\}_{\alpha<\delta}$ be a set of representatives of $K^{\times}/ {K^{\times}}^p$, indexed by some ordinal $\delta\leq \mu$. We construct a chain of fields $K_{\alpha}$, $\alpha<\delta$, as follows: Assume we have built $K_{\alpha}$ for all $\alpha<\beta$, such that for all $\alpha<\beta$ there exists $h(a_{\alpha})$ with $\left(\dfrac{a_{\alpha},h(a_{\alpha})}{K_{\gamma},\rho}\right)=A_{K_{\gamma}}\ne 1$ for all $\alpha\leq\gamma<\beta$.
If $\beta=\epsilon+1$, build $K_{\beta}$ over $K_{\epsilon}$ as in Lemma [\[step 3\]](#step 3){reference-type="ref" reference="step 3"}. If $\beta$ is a limit ordinal, take $M=\bigcup_{\alpha<\beta}K_{\alpha}$. Notice that, as was proved in the proof of Proposition [Proposition 32](#building the field){reference-type="ref" reference="building the field"}, if $A_{K_\alpha}\ne 1$ for all $\alpha$ then $A_M\ne 1$, and if $b\notin {K_{\alpha}^{\times}}^p$ for all $\alpha$, then $b\notin {M^{\times}}^p$, so we can define $K_\beta$ over $M$ for $a_{\beta}$ as in Lemma [\[step 3\]](#step 3){reference-type="ref" reference="step 3"}. Now define $M= \bigcup_{\alpha<\delta}K_{\alpha}$. Every $n$-dimensional algebra over $K$ is determined by $n^3$ structure constants, hence we can enumerate the central simple algebras over $K$ by some cardinal $\delta$, $\delta\leq \mu$. We build a chain of field extensions $\{M_{\alpha}\}_{\alpha<\delta}$ as follows: Assume we have built $M_{\alpha}$ for all $\alpha<\beta$, such that for every $K$-csa $B_{\alpha}$ we have $(B_{\alpha})_{M_\gamma}\ne 1$ for all $\alpha\leq \gamma<\beta$, and such that if the order of $(B_{\alpha})_{M_{\gamma'}}$ is divisible by $p$ for some $\gamma'<\alpha$ then $A_{M_{\alpha}}\in \langle (B_{\alpha})_{M_\alpha}\rangle$. Then if $\beta=\alpha+1$ we build $M_{\beta}$ over $M_{\alpha}$ for $B_{\beta}$ as in Lemma [Lemma 34](#step 2){reference-type="ref" reference="step 2"}. If $\beta$ is a limit ordinal we set $L=\bigcup_{\alpha<\beta}M_{\alpha}$; notice that all the required properties are satisfied by $L$, and we then build $M_{\beta}$ over $L$ for $B_{\beta}$ as in Lemma [Lemma 34](#step 2){reference-type="ref" reference="step 2"}. Eventually take $L=\bigcup_{\alpha<\delta} M_{\alpha}$. Notice that $|L|=|K|$. ◻
The last step we need is:
**Proposition 37**. *Let $K$ be a field such that $_p\operatorname{Br}(K)\ne 0$, and fix a nontrivial algebra $A\in {_p\operatorname{Br}}(K)$. Then $K$ admits a field extension $F$ of the same cardinality, such that:*
1. *$A_F\ne 1$ in $\operatorname{Br}(F)$.*
2. *$_p\operatorname{Br}(F)=\langle A_F\rangle$.*
3. *For all $a\in F^{\times}\setminus {F^{\times}}^p$ there exists an element $b\in F^{\times}$ such that $\left(\dfrac{a,b}{F,\rho}\right)\ne 1$.*
4. *$F^p\cap K=K^p$.*
5. *$F\cap \mu_{p^{\infty}}=K\cap \mu_{p^{\infty}}$.*
*Proof.* This is the same construction as presented in [@minavc1991demuvskin Theorem 2.5]. Let us define a tower of field extensions $K=L_0\subseteq L_1\subseteq L_2\subseteq ...$ such that $A_{L_i}\ne 0$, as follows: Assume we have already defined $L_i$. Let $L_i'$ be the fixed field of some $p$-Sylow subgroup $S_p$ of $G_{L_i}$. By [@serre1979galois I-11], the restriction map $H^k(G)\to H^k(S)$ from the $k$-th cohomology group of a profinite group to that of its $p$-Sylow subgroup is injective for all $k$. Hence $A_{L'_i}\ne 0$. Now define $L_{i+1}$ as in Proposition [Proposition 36](#main step){reference-type="ref" reference="main step"}, applied with $K=L'_i$ and $A=A_{L'_i}$. Let $F=\bigcup_{i\in \omega} L_i$. We claim that $F$ has properties (1)-(5).
1. Since $A_{L_i}\ne 1$ for all $i$, $A_F\ne 1$.
2. Let $B\in {_p\operatorname{Br}}(F)$; we may assume $B\ne 1$. By [@pierce1982associative p.10] there is some $i$ such that $B=(B_i)_F$ for some $B_i\in {_p\operatorname{Br}}(L_i)$. Then $(B_i)_{L_{i+1}}\ne 1$, so its order is a multiple of $p$, and by Proposition [Proposition 36](#main step){reference-type="ref" reference="main step"}, $A_{L_{i+1}}\in \langle (B_i)_{L_{i+1}}\rangle$. Hence $A_F\in \langle B_F\rangle=\langle B\rangle$. But as both algebras have order $p$, this gives $\langle A_F\rangle=\langle B\rangle$, so $B\in \langle A_F\rangle$.
3.
Let $a\in F^{\times}\setminus {F^{\times}}^p$. There exists some $i$ such that $a\in L_i'$. By Proposition [Proposition 36](#main step){reference-type="ref" reference="main step"} there exists some $h(a)\in L_{i+1}$ such that $\left(\dfrac{a,h(a)}{L_{i+1},\rho}\right)\ne 1$. We claim that $\left(\dfrac{a,h(a)}{L_{i+j},\rho}\right)\ne 1$ for every $j\geq 1$. Indeed, if $\left(\dfrac{a,h(a)}{L_{i+j-1},\rho}\right)\ne 1$ then, by the same observation as in the previous property, $\left(\dfrac{a,h(a)}{L'_{i+j-1},\rho}\right)\ne 1$ and hence by Lemma [Lemma 31](#useful lemma){reference-type="ref" reference="useful lemma"} $\left(\dfrac{a,h(a)}{L_{i+j},\rho}\right)\ne 1$. Hence, $\left(\dfrac{a,h(a)}{F,\rho}\right)\ne 1$.
4. Let $a\in K^{\times}\setminus {K^{\times}}^p$. If $a\in {F^{\times}}^p$ then the cohomology class of $a$ in $H^1(G_F)$ is trivial, and hence by the identification of the cup product, $(a)\cup(b)=\left(\dfrac{a,b}{F,\rho}\right)=1$ for every $b\in F^{\times}$, a contradiction to the previous property.
5. This follows from the previous property by letting $a=\rho_{p^n}$ be a primitive root of unity of the maximal order which belongs to $K$.
We are left to show that $|F|=|K|$. We show by induction that $|L_i|=|K|$ for all $i$, and then the result follows immediately. It is enough to show that $|L_i|=|L_{i+1}|$. By Proposition [Proposition 36](#main step){reference-type="ref" reference="main step"}, $|L_{i+1}|=|L'_i|$. So we are only left to show that $|L_i|=|L'_i|$. But $L'_i$ is defined to be an algebraic extension of $L_i$ and thus has the same cardinality as $L_i$. ◻
We are ready to prove the theorem.
*Proof of Theorem [Theorem 30](#demushkin as galois group){reference-type="ref" reference="demushkin as galois group"}.* Let $K$ be the field constructed in Proposition [Proposition 32](#building the field){reference-type="ref" reference="building the field"} and construct $F$ as in Proposition [Proposition 37](#last step){reference-type="ref" reference="last step"}. By the same proof as [@minavc1991demuvskin Main Theorem], $G_F$ is a pro-$p$ group. It is also immediate to see that $G_F$ is Demushkin, since the cup product bilinear form is nondegenerate. We need to show that $\operatorname{rank}(G_F)=\dim H^1(G_F)=\mu$. By the Kummer isomorphism, it is equivalent to show that $|F^{\times}/{F^{\times}}^p|=\mu$. On one hand, $|F^{\times}/{F^{\times}}^p|\leq |F|=|K|=\mu$. On the other hand, by the fourth property $K^{\times}/{K^{\times}}^p\hookrightarrow F^{\times}/{F^{\times}}^p$, and we chose $K$ such that $|K^{\times}/{K^{\times}}^p|=\mu$. The only thing left to show is that $q(G_F)=q$. By the choice of $K$, $\operatorname{Im}(\sigma)=1+q\mathbb{Z}_p$ where $\sigma:G_F\to \mathbb{Z}_p^{\times}$ is the homomorphism induced by the action of $G_F$ on $\mu_{p^{\infty}}$. As this homomorphism satisfies property $P$, and by Lemma [Lemma 20](#property Q){reference-type="ref" reference="property Q"} $\operatorname{Aut}(I)$ has property $Q$, $\sigma$ equals the character of $G_F$. Hence by Proposition [Proposition 21](#p=image){reference-type="ref" reference="p=image"}, $q(G_F)=q$. ◻
We now move on to deal with the case $q=2$. In [@minavc1992pro] the case of pro-$2$ Demushkin groups of countable rank was studied. Most of the results can be applied to the general case.
**Theorem 38**. *Let $G$ be a pro-2 Demushkin group. If $G$ is a maximal pro-$2$ Galois group, then $t(G)\ne 0$.
In addition, for every cardinal $\mu$, there exists an absolute Galois group over a field of characteristic 0 which is isomorphic to a pro-2 Demushkin group of rank $\mu$ for every pair of invariants:*
- *$t(G)=1, \operatorname{Im}(\chi)= U_2^{(f)}, 2\leq f<\infty$*
- *$t(G)=1, \operatorname{Im}(\chi)= U_2^{[f]}, 2\leq f<\infty$*
- *$t(G)=1, \operatorname{Im}(\chi)=\{\pm 1\} \times U_2^{(f)}, 2\leq f<\infty$*
- *$t(G)=-1, \operatorname{Im}(\chi)=\{\pm 1\}\times U_2^{(f)}, 2\leq f<\infty$*
*and only for such pairs. If $F$ is a field of characteristic $p$, whose absolute Galois group is a pro-2 Demushkin group $G$, then $t(G)=1$ and exactly the following options for $\operatorname{Im}(\chi)$ are possible:*
- *If $p\equiv 1 \pmod 4$, say $p = 1 + 2^ac$, $a > 2$, $2\nmid c$, then the only possibilities for $\operatorname{Im}(\chi)$ are the groups $U_2^{(b)}$, $a \leq b < \infty$. Moreover, each group $U_2^{(b)}$, $a \leq b < \infty$, occurs as $\operatorname{Im}(\chi)$ for some Demushkin group $G_F$ of rank $\mu$ with $\operatorname{char}(F)=p$.*
- *If $p\equiv -1 \pmod 4$, say $p = -1 + 2^ac$, $a > 2$, $2\nmid c$, then the only possibilities for $\operatorname{Im}(\chi)$ are the groups $U_2^{[a]}$ and $U_2^{(b)}$, $a+1 \leq b < \infty$. Moreover, each of these groups can occur as $\operatorname{Im}(\chi)$ for some Demushkin group $G_F$ of rank $\mu$ with $\operatorname{char}(F)=p$.*
*Proof.* The fact that no pro-2 Demushkin group whose invariants are not listed above can occur as a maximal pro-$2$ Galois group was proved in [@minavc1992pro]. Let $\mu>\aleph_0$ be a cardinal; we will show that every option from the above list appears for $\mu$. Let $K'$ be the field constructed for each case in [@minavc1992pro], and take $K$ to be the field constructed in Proposition [Proposition 32](#building the field){reference-type="ref" reference="building the field"} over $K'$. We build the field $F$ as in Proposition [Proposition 37](#last step){reference-type="ref" reference="last step"}. Then $G_F$ is a pro-$2$ Demushkin group of rank $\mu$. The invariant $\operatorname{Im}(\chi)$ is determined by $F\cap \mu_{2^{\infty}}=K\cap\mu_{2^{\infty}}=K'\cap\mu_{2^{\infty}}$. We are left with calculating $t(G_F)$. In [@minavc1992pro] it was shown that $t(G_F)=1$ if $\left(\dfrac{-1,-1}{F}\right)=1$ and $t(G_F)=-1$ otherwise. Hence, if $\left(\dfrac{-1,-1}{K'}\right)=1$, obviously $\left(\dfrac{-1,-1}{F}\right)=1$. And if $\left(\dfrac{-1,-1}{K'}\right)\ne 1$, then we have already shown that $\left(\dfrac{-1,-1}{K}\right)\ne 1$. Taking $A=\left(\dfrac{-1,-1}{K}\right)$, we get that $\left(\dfrac{-1,-1}{F}\right)\ne 1$, as required. ◻
In contrast to the countable case, in the uncountable case much remains mysterious.
*Question 39* (Open questions). Does every pro-$p$ Demushkin group $G$ of uncountable rank for $p\ne 2$ which satisfies $s(G)=0$ occur as a maximal pro-$p$ Galois group of a field? Does every pro-$2$ Demushkin group of uncountable rank with $s(G)=0$ and one of the pairs of invariants listed above occur as a maximal pro-$2$ Galois group of a field?
We end this section by showing that for $p\ne 2$, pro-$p$ Demushkin groups of arbitrary rank satisfy some properties of maximal pro-$p$ Galois groups.
**Proposition 40**. *Let $p\ne 2$. Every pro-$p$ Demushkin group is Bloch-Kato.*
*Proof.* A pro-$p$ group $G$ is said to be Bloch-Kato if for every closed subgroup $H\leq G$, the cohomology ring $H^{\bullet}(H)$ is a quadratic algebra, meaning that it is generated by elements of degree one, subject to relations in degree two.
This property is inspired by the positive solution to the Bloch-Kato Conjecture, which states that $H^{\bullet}(G_F(p))\cong K^M_*(F)$ for every field $F$. By Corollary [Corollary 16](#open subgroups){reference-type="ref" reference="open subgroups"}, every subgroup of a Demushkin group is either free or Demushkin. For a free pro-$p$ group $F$ the cohomological dimension is 1 and the claim follows immediately, by letting the relations be $a\cup b=0$ for all $a,b\in H^1(F)$. It remains to show that the cohomology ring of a Demushkin group is quadratic. Let $G$ be a Demushkin group. By Theorem [Theorem 15](#inverse limit){reference-type="ref" reference="inverse limit"}, every Demushkin group can be expressed as a projective limit of finitely generated Demushkin groups, $G\cong \varprojlim\{G_i,\varphi_{ij}\}$. Since $H^2(G_i)\cong H^2(G)\cong \mathbb{F}_p$ one can choose the groups $G_i$ such that $\operatorname{Inf}:H^2(G_j)\to H^2(G_i)$ are injective for every $j\leq i$. As the maps $\varphi:G_i\to G_j$ are onto, the inflation maps $\operatorname{Inf}:H^1(G_j)\to H^1(G_i)$ are injective for every $j\leq i$. Eventually, since $\operatorname{cd}(G_i)=2$ for every $i$, we get that the maps $H^{\bullet}(G_j)\to H^{\bullet}(G_i)$ induced by the inflations are injective for every $j\leq i$, hence by [@quadrelli2014bloch Proposition 5.1] we are done. ◻ **Proposition 41**. *Let $p\ne 2$. Every pro-$p$ Demushkin group satisfies the $3$-vanishing Massey product property, hereditarily.* *Proof.* A pro-$p$ group is said to have the $n$-vanishing Massey product property if every homomorphism $\varphi:G\to \overline{U_n(\mathbb{F}_p)}$ admits a homomorphism $\psi:G\to U_n(\mathbb{F}_p)$ such that $[\varphi(g)]_{i,i+1}=[\alpha(\psi(g))]_{i,i+1}$ for every $g\in G$ and $i$, where $\alpha$ is the quotient map in the following exact sequence. $$\xymatrix@R=14pt{ & & &G \ar@{->>}[dd]^(0.3){\varphi}& \\ &&&&\\ 1 \ar[r] & \mathbb{F}_p \ar[r] & {\begin{bsmallmatrix} 1&\rho_{1,2}&\chi_{1,3}&...&\chi_{1,n}\\ &1&\rho_{2,3}&...&\chi_{2,n}\\ &&\ddots&\ddots&\vdots \\ &&&1&\chi_{n-1,n}\\ &&&&1\\ \end{bsmallmatrix}} \ar[r]^{\alpha}&{\begin{bsmallmatrix} 1&\rho_{1,2}&\rho_{1,3}&...&\\ &1&\rho_{2,3}&...&\rho_{2,n}\\ &&\ddots&\ddots&\vdots \\ &&&1&\rho_{n-1,n}\\ &&&&1\\ \end{bsmallmatrix}}\ar[r]&1\\ }$$ Since every subgroup of a Demushkin group is either free or Demushkin, it is enough to prove the property holds for every Demushkin group. This follows from a standard inverse limit argument, since by Theorem [Theorem 15](#inverse limit){reference-type="ref" reference="inverse limit"} every pro-$p$ Demushkin group can be expressed as the inverse limit of finitely generated Demushkin groups, which are all maximal pro-$p$ Galois groups and hence (see [@matzri2014triple]) satisfy the 3-vanishing Massey product property. ◻ **Proposition 42**. *Every pro-$p$ Demushkin group is hereditarily of $p$-absolute Galois type.* *Proof.* A pro-$p$ group $G$ is said to be of $p$-absolute Galois type if for every $\chi\in H^1(G)$, the following sequence is exact: $$\begin{tikzcd} H^1(\ker(\chi),\mathbb{F}_p) \arrow[r,"\operatorname{Cor}_G"]& H^1(G,\mathbb{F}_p) \arrow[r,"\chi \cup"]& H^2(G,\mathbb{F}_p) \arrow[r,"\operatorname{Res}_{\ker(\chi)}"]&[1.2em] H^2(\ker(\chi),\mathbb{F}_p) \end{tikzcd}$$ By [@lam2023generalized] it is enough to prove that the sequence is exact at $H^2(G,\mathbb{F}_p)$ for every $\chi\in H^1(G)$. Again, since every subgroup of a Demushkin group is either free or Demushkin, it is enough to prove the property holds for every Demushkin group. 
Let $H$ be a Demushkin group and let $\alpha\in H^1(H)$. If $\alpha=0$ then $H=\ker(\alpha)$, and the map $\alpha\cup (-):H^1(H)\to H^2(H)$ is the zero map, so we are done. Otherwise, since the cup product is nondegenerate, the map $\alpha\cup (-):H^1(H)\to H^2(H)$ is onto, while obviously $\operatorname{Res}_{\ker(\alpha)}(\alpha\cup \beta)=0$ for all $\beta\in H^1(H)$. ◻ # Profinite completions of Demushkin groups {#profinite-completions-of-demushkin-groups .unnumbered} In this section we compute the profinite completion of a Demushkin group of infinite rank, and obtain a new class of examples of absolute Galois groups whose profinite completions are again absolute Galois groups. Recall that the profinite completion of an abstract group $G$, denoted by $\hat{G}$, is defined as the inverse limit ${\lim_{\leftarrow}}_{U\unlhd_fG}G/U$ where $U$ runs over the finite index normal subgroups of $G$. It is equipped with the natural homomorphism $i:G\to \hat{G}$ and satisfies the following universal property: every homomorphism $f:G\to H$ into a profinite group can be lifted uniquely to a continuous homomorphism $\hat{f}:\hat{G}\to H$. Let $G$ be a profinite group. Considered as an abstract group, $G$ has a profinite completion with an injection $i: G \to \hat G$. We say that $G$ is strongly complete when $i$ is an isomorphism. For pro-$p$ groups, we have the following equivalence: **Proposition 43**. *Let $G$ be a pro-$p$ group. The following are equivalent:* 1. *$G$ is finitely generated.* 2. *$G$ is strongly complete.* In [@baron-cohomological] the author presented the question: Can the profinite completion of an absolute Galois group also be realized as an absolute Galois group? The purpose of this section is to give a new class of examples with a positive answer. First we need to discuss free pro-$p$ groups. A well-known fact (see, for example, [@ribes2000profinite Example 3.3.8 (e) for infinite rank]) states that every free profinite group can be realized as an absolute Galois group. Since every subgroup of a free profinite group is projective, and for pro-$p$ groups projectivity equals freeness, we get that the $p$-Sylow subgroups of free profinite groups are free pro-$p$ groups. It can also be proven that the $p$-Sylow subgroup of a free profinite group has the same rank as the whole group. As a result, for every cardinal $\mu$, the free pro-$p$ group of rank $\mu$ can be realized as an absolute Galois group. However, we give here a direct proof of this result, inspired by the proof of Mináč and Ware: **Proposition 44**. *Let $\mu$ be an infinite cardinal, and $G$ be the free pro-$p$ group of rank $\mu$. Then $G$ occurs as an absolute Galois group of some field.* *Proof.* Recall that a pro-$p$ group $G$ is free if and only if $\operatorname{cd}(G)\leq 1$, which is equivalent to $H^2(G)=0$ (see, for example, [@ribes2000profinite Chapter 7]). Hence, our objective is to construct a field $F$ with $|F^{\times}/{F^{\times}}^p|=\mu$, $_p\operatorname{Br}(F)=0$ and such that $G_F$ is a pro-$p$ group. We present a construction similar to that of Theorem [Theorem 30](#demushkin as galois group){reference-type="ref" reference="demushkin as galois group"}. Let $K$ be a field such that $|K|=|K^{\times}/{K^{\times}}^p|=\mu$. An example of such a field can be taken from Proposition [Proposition 32](#building the field){reference-type="ref" reference="building the field"} for the required $\mu$. As every $n$-dimensional algebra over $K$ can be determined by $n^3$ structure constants, we can order the elements $A_{\alpha}$ of $_p\operatorname{Br}(K)$ by some ordinal $\lambda\leq \mu$. 
We build a field $L_1$ as follows: let $\alpha<\lambda$ and assume that for every $\beta<\alpha$ we have already defined a field $M_{\beta}$, such that for every $\gamma\leq \beta$, ${A_{\gamma}}_{M_{\gamma}}=1$, and $M_{\beta}^p\cap K=K^p$. If $\alpha=\beta+1$ then let $M_{\alpha}$ be the field constructed from $M_{\beta}$ by forming the generic splitting field of ${A_{\alpha}}_{M_{\beta}}$. Otherwise, $M_{\alpha}$ will be the generic splitting field formed over $\bigcup_{\beta<\alpha} M_{\beta}$ for the algebra ${A_{\alpha}}_{\bigcup_{\beta<\alpha} M_{\beta}}$. Finally, set $L_1=\bigcup_{\alpha<\lambda} M_{\alpha}$. One easily observes that $|L_1|=|K|$, and using Lemma [\[step 3\]](#step 3){reference-type="ref" reference="step 3"} and the fact that this property is preserved under direct limits, that $L_1^p\cap K=K^p$. Define a series $L_1\subseteq L_2\subseteq...$ as in Proposition [Proposition 37](#last step){reference-type="ref" reference="last step"} and take $F=\bigcup L_i$. The proposition follows. ◻ In fact there is a more general and stronger result, which can be found in [@fried2006field], stating that projective profinite groups (the profinite groups all of whose $p$-Sylow subgroups are free) are precisely the absolute Galois groups of pseudo algebraically closed fields. However, we do not need it here. Before computing the profinite completion of a Demushkin group, we need one more definition. **Definition 45**. A pro-$p$ group $G$ is called *locally free* if every finitely generated closed subgroup of $G$ is free. **Proposition 46**. *Let $G$ be a pro-$p$ group. Then $G$ is locally free if and only if $\hat{G}$ is free.* *Proof.* Assume first that $\hat{G}$ is free. Let $H\leq G$ be a finitely generated closed subgroup. Since $H$ is finitely generated, it is strongly complete. Hence, every finite-index subgroup of $H$ is open. Moreover, by the basic properties of profinite groups, every open subgroup of $H$ contains the intersection of $H$ with an open subgroup of $G$. Thus, the inclusion $i:H\to G$ induces a monomorphism $\hat{H}\to \hat{G}$. Recalling again that $H$ is strongly complete, we get that $H$ is isomorphic to a closed subgroup of a free pro-$p$ group, and hence $H$ is free by the results in [@ribes2000profinite Chapter 7]. For the other direction we shall use the following characterisation of free pro-$p$ groups from [@ribes2000profinite Theorem 7.7.4]: a pro-$p$ group $L$ is free if and only if every finite embedding problem, i.e., a pair of continuous epimorphisms $\varphi:L\to A, \alpha: B\to A$, is weakly solvable, meaning that there is a continuous homomorphism $\psi:L\to B$ making the following diagram commutative: $$\xymatrix@R=14pt{ & L \ar@{->>}[d]^{\varphi} \ar[ld]_{\psi}& \\ B\ar [r]_{\alpha} &A \ar[r]&1\\ }$$ Now we assume that $G$ is locally free and we wish to prove that $\hat G$ is free. Let the following diagram $$\xymatrix@R=14pt{ &\hat{G} \ar@{->>}[d]^{\varphi} & \\ B\ar [r] &A \ar[r]&1\\ }$$ be an embedding problem for $\hat G$. Consider the induced abstract embedding problem for $G$: $$\xymatrix@R=14pt{ &G \ar@{->>}[d]^{\varphi\circ i} & \\ B\ar [r] &A \ar[r]&1,\\ }$$ where $i$ denotes the natural homomorphism $i:G\to \hat{G}$. Since $i(G)$ is dense in $\hat{G}$ and $A$ is finite, we have that $\varphi\circ i$ is surjective. Choose a finite set $X=\{x_1,...,x_n\}$ of preimages in $G$ of the elements of $A$. For every finite subset $Y\subseteq G$ let $$H_Y=\overline{\langle X\cup Y\rangle}$$ be the closed subgroup of $G$ generated by $X\cup Y$. Since $X\subseteq H_Y$, the map $\varphi|_{H_Y}:H_Y\to A$ is an epimorphism. 
Since $H_Y$ is finitely generated, it is strongly complete and thus any homomorphism to a profinite group is continuous. So, $\varphi|_{H_Y}:H_Y\to A$ is a continuous epimorphism. By assumption $H_Y$ is a free pro-$p$ group and therefore there is a homomorphism $\psi :H_Y\to B$ such that $\alpha\circ \psi =\varphi|_{H_Y}$. Denote by $\mathcal{A}_Y$ the set of all continuous weak solutions $\psi: H_Y\to B$ such that $\alpha \circ \psi =\varphi|_{H_Y}$. We have that $\mathcal{A}_Y \not = \emptyset$ and since $H_Y$ is finitely generated it follows that $\mathcal{A}_Y$ is finite. For any pair of finite subsets $Y\subseteq Z \subset G$ the restriction function $f_{ZY}: \mathcal{A}_Z\to \mathcal{A}_Y$ is defined by $\psi \mapsto \psi|_{H_Y}$ for $\psi \in \mathcal{A}_Z$. Whenever $Y\subseteq Y'\subseteq Y''$ we have $f_{Y''Y}=f_{Y'Y} \circ f_{Y''Y'}$. It follows that $\{\mathcal{A}_Y,f_{ZY}\}_{Y\subseteq_f G}$ is a directed system of nonempty finite sets and thus its inverse limit is nonempty. An element in the inverse limit is a homomorphism $f:\bigcup_{Y\subseteq_f G} H_Y\to B$ which satisfies $\alpha\circ f=\varphi$. But $\bigcup_{Y\subseteq_f G} H_{Y} = G$. So we obtain a weak solution $f: G \rightarrow B$ to the embedding problem, i.e., $\alpha \circ f=\varphi$. By the universal property of the profinite completion $\hat G$, $f$ induces a continuous homomorphism $\hat{f}:\hat{G}\to B$. The density of $i(G)$ in $\hat G$ implies $\alpha \circ \hat f=\varphi$, i.e. $\hat f$ is the required weak solution for the embedding problem of $\hat G$. Therefore $\hat G$ is a free pro-$p$ group as claimed. ◻ **Corollary 47**. *Let $G$ be a pro-$p$ Demushkin group of infinite rank. Then $\hat{G}$ is a free pro-$p$ group.* *Proof.* By Proposition [Proposition 46](#locally free){reference-type="ref" reference="locally free"} it is enough to prove that every finitely generated closed subgroup of $G$ is free. Since $G$ is not finitely generated, a finitely generated closed subgroup must have infinite index. Thus by Corollary [Corollary 16](#open subgroups){reference-type="ref" reference="open subgroups"} we are done. ◻ Combining Theorem [Theorem 30](#demushkin as galois group){reference-type="ref" reference="demushkin as galois group"}, Proposition [Proposition 44](#free as absolute){reference-type="ref" reference="free as absolute"} and Corollary [Corollary 47](#completion of demushkin){reference-type="ref" reference="completion of demushkin"}, we conclude the following: **Corollary 48**. *For every infinite cardinal $\mu$ there exists a nonfree pro-$p$ absolute Galois group of rank $\mu$ whose profinite completion is an absolute Galois group as well.* In fact, Corollary [Corollary 47](#completion of demushkin){reference-type="ref" reference="completion of demushkin"} also gives a negative answer to the open question presented by Andrew Pletch in 1982 in his paper [@pletch1982local]: Is every locally free pro-$p$ group free? Tamar Bar-On. . , 2022. Simone Blumer, Alberto Cassella, and Claudio Quadrelli. Groups of $p$-absolute Galois type that are not absolute Galois groups. , 227(4):107262, 2023. S. P. Demushkin. The group of a maximal $p$-extension of a local field. , 25(3):329--346, 1961. S. P. Demushkin. On 2-extensions of a local field. , 4(4):951--955, 1963. D. Dummit and John P. Labute. On a new characterisation of Demushkin groups. , 73:413--418, 1983. Michael D. Fried and Moshe Jarden. , volume 11. Springer Science & Business Media, 2006. J. I. Hall. The number of trace-valued forms and extraspecial groups. , 2(1):1--13, 1988. 
Irving Kaplansky. Forms in infinite-dimensional spaces. , 22:1--17, 1950. John P. Labute. Demuškin groups of rank $\aleph_0$. , 94:211--244, 1966. John P. Labute. Classification of Demushkin groups. , 19:106--132, 1967. Yeuk Hay Joshua Lam, Yuan Liu, Romyar Sharifi, Preston Wake, and Jiuya Wang. Generalized Bockstein maps and Massey products. In *Forum of Mathematics, Sigma*, volume 11, page e5. Cambridge University Press, 2023. Eliyahu Matzri. . , 2014. Ján Mináč and Roger Ware. Demuškin groups of rank $\aleph_0$ as absolute Galois groups. , 73(1):411--421, 1991. Ján Mináč and Roger Ware. Pro-2-Demuškin groups of rank $\aleph_0$ as Galois groups of maximal 2-extensions of fields. , 292(1):337--353, 1992. Jürgen Neukirch, Alexander Schmidt, and Kay Wingberg. , volume 323. Springer Science & Business Media, 2013. Richard S. Pierce. . Springer, 1982. Andrew Pletch. Local freeness of profinite groups. , 25(4):441--446, 1982. Claudio Quadrelli. Bloch--Kato pro-$p$ groups and locally powerful groups. In *Forum Mathematicum*, volume 26, pages 793--814. De Gruyter, 2014. Luis Ribes and Pavel Zalesskii. . Springer, 2000. Jean-Pierre Serre. Structure de certains pro-$p$-groupes. , 63:357--364, 1962. Jean-Pierre Serre. . Springer, 1979. Saharon Shelah. A combinatorial problem; stability and order for models and theories in infinitary languages. , 41(1):247--261, 1972.
arxiv_math
{ "id": "2309.04007", "title": "Demushkin groups of uncountable rank", "authors": "Tamar Bar-On and Nikolay Nikolov", "categories": "math.NT math.GR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | A vertex partition $\pi = \{V_1, V_2, \ldots, V_k\}$ of $G$ is called a *transitive partition* of size $k$ if $V_i$ dominates $V_j$ for all $1\leq i<j\leq k$. For two disjoint subsets $A$ and $B$ of $V$, we say $A$ *strongly dominates* $B$ if for every vertex $y\in B$, there exists a vertex $x\in A$, such that $xy\in E$ and $deg_G(x)\geq deg_G(y)$. A vertex partition $\pi = \{V_1, V_2, \ldots, V_k\}$ of $G$ is called a *strong transitive partition* of size $k$ if $V_i$ strongly dominates $V_j$ for all $1\leq i<j\leq k$. The [Maximum Strong Transitivity Problem]{.smallcaps} is to find a strong transitive partition of a given graph with the maximum number of parts. In this article, we initiate the study of this variation of transitive partition from algorithmic point of view. We show that the decision version of this problem is NP-complete for chordal graphs. On the positive side, we prove that this problem can be solved in linear time for trees and split graphs. author: - Subhabrata Paul - Kamal Santra - "Subhabrata Paul[^1]" - "Kamal Santra[^2]" bibliography: - Strong_Transitivity_bibliography.bib title: Strong transitivity of a graph --- **Keywords.** Strong transitivity, NP-completeness, Linear-time algorithm, Trees, Split graphs, Chordal graphs. # Introduction Partitioning a graph is one of the fundamental problems in graph theory. In the partitioning problem, the objective is to partition the vertex set (or edge set) into some parts with desired properties, such as independence, minimal edges across partite sets, etc. A *dominating set* of $G=(V, E)$ is a subset of vertices $D$ such that every vertex $x\in V\setminus D$ has a neighbour $y\in D$, that is, $x$ is dominated by some vertex $y$ of $D$. For two disjoint subsets $A$ and $B$ of $V$, we say $A$ *dominates* $B$ if every vertex of $B$ is adjacent to at least one vertex of $A$. Many variants of partitioning problem have been studied in literature based on some domination relationship among the partite sets. For example *domatic partition*[@cockayne1977towards; @zelinka1980domatically] (each partite set is a dominating set), *Grundy partition*[@hedetniemi1982linear; @zaker2005grundy; @zaker2006results] (each partite set is independent and dominates every other partite sets after itself), *transitive partition* [@hedetniemi2018transitivity; @haynes2019transitivity; @paul2023transitivity; @santra2023transitivity] (a generalization of Grundy partition where partite sets need not be independent), *upper domatic partition* [@haynes2020upper; @samuel2020new] (a generalization of transitive partition where for any two partite sets $X$ and $Y$ either $X$ dominates $Y$ or $Y$ dominates $X$ or both). In 1996, Sampathkumar and Pushpa Latha introduced the notion of *strong domination* [@sampathkumar1996strong]. A *strong dominating set* of $G=(V, E)$ is a subset of vertices $D$ such that for every vertex $x\in V\setminus D$, $x$ is dominated by some vertex $y\in D$ and $deg_G(y)\geq deg_G(x)$. Recently, based on this strong domination, a variation of domatic partition, namely *strong domatic partition*, has been studied in [@ghanbari2023strong]. In the *strong domatic partition*, the vertex set is partitioned into $k$ parts, say $\pi =\{V_1, V_2, \ldots, V_k\}$, such that each $V_i$ is a strong dominating set of $G$. In this article, we introduce a variation of transitive partition based on strong domination, namely *strong transitive partition*. 
A vertex partition $\pi = \{V_1, V_2, \ldots, V_k\}$ of $G$ is called a *strong transitive partition* of size $k$ if $V_i$ strongly dominates $V_j$ for all $1\leq i<j\leq k$. The maximum order of such a strong transitive partition is called the *strong transitivity* of $G$ and is denoted by $Tr_{st}(G)$. The [Maximum Strong Transitivity Problem]{.smallcaps} and its corresponding decision version are defined as follows: [[Maximum Strong Transitivity Problem (MSTP)]{.ul}]{.smallcaps} *Instance:* A graph $G=(V,E)$ *Solution:* A strong transitive partition of $G$ *Measure:* Order of the strong transitive partition of $G$ [[Maximum Strong Transitivity Decision Problem (MSTDP)]{.ul}]{.smallcaps} *Instance:* A graph $G=(V,E)$, integer $k$ *Question:* Does $G$ have a strong transitive partition of order at least $k$? Note that every strong transitive partition is also a transitive partition. Therefore, for any graph $G$, $1\leq Tr_{st}(G)\leq Tr(G)\leq \Delta(G)+1$, where $\Delta(G)$ is the maximum degree of $G$. From the definition of a strong transitive partition, it is clear that for regular graphs, transitivity is the same as strong transitivity; as a consequence, the two parameters coincide for the complete graph $K_n$ and the cycle $C_n$. However, a transitive partition of a graph is not always a strong transitive partition, even when both parameters have the same value. For a path $P_3$ with vertex set $\{a, b, c\}$, where $b$ is the middle vertex, taking $\pi=\{V_1=\{a, c\}, V_2=\{b\}\}$ gives a transitive partition but not a strong transitive partition, since $deg(b)>deg(a)$ and $deg(b)>deg(c)$. On the other hand, $\pi'=\{V_1=\{b, c\}, V_2=\{a\}\}$ is both a transitive and a strong transitive partition of size $2$. It can be easily verified that for the graph class $P_n, n\geq 6$, transitivity and strong transitivity are the same and equal to $3$. So, we see that there are graph classes where both parameters have the same value, but in general their difference can be arbitrarily large. If $G$ is a complete bipartite graph of the form $K_{m, m-1}$, then $Tr_{st}(K_{m, m-1})=2$. Let $V(G)=X\cup Y$, where $|X|=m$ and $|Y|=m-1$. Also, let $x\in X$ and consider a vertex partition $\pi=\{V_1, V_2\}$, where $V_1=(X\setminus\{x\})\cup Y$, $V_2=\{x\}$. Since $m\geq2$, there exists $y\in Y$ and $m=deg(y)\geq deg(x)=m-1$. So, $\pi$ is a strong transitive partition of $G$. Therefore, $Tr_{st}(K_{m, m-1})\geq 2$. To prove $Tr_{st}(K_{m, m-1})=2$, we now show $Tr_{st}(K_{m, m-1})<3$ by contradiction. Assume $Tr_{st}(K_{m, m-1})\geq 3$ and let $\pi=\{V_1, V_2, \ldots, V_k\}$ be a strong transitive partition of $G$ of size $k\geq 3$. Let $y\in V_i$, $3\leq i\leq k$, with $y\in Y$. Since $\pi$ is a strong transitive partition, $V_1$ strongly dominates $V_i$. So, for $y\in V_i$, there must exist a vertex $x\in V_1$, such that $xy\in E(G)$ and $deg_G(x)\geq deg_G(y)$. But for every vertex $x\in X$, $deg_G(x)=m-1< deg_G(y)=m$. So, $V_i$ cannot contain vertices from $Y$. Therefore, $V_i$ contains only vertices from $X$. Let $x\in V_i$ with $x\in X$, $3\leq i\leq k$. As $\pi$ is a strong transitive partition, $V_2$ strongly dominates $V_i$. So, for $x\in V_i$, there must exist a vertex $y\in V_2$, such that $xy\in E(G)$ and $deg_G(y)\geq deg_G(x)$; since the neighbours of $x$ all lie in $Y$, we have $y\in Y$. Now, $V_1$ strongly dominates $V_2$ and $y\in V_2$. To strongly dominate $y$, we need a vertex $x'\in V_1\cap X$, such that $x'y\in E(G)$ and $deg_G(x')\geq deg_G(y)$. Again, this is not possible, as $deg_G(x')=m-1< deg_G(y)=m$. Therefore, $V_i$, $3\leq i\leq k$, cannot contain vertices from $X$ either. Hence such a $V_i$ would be empty, a contradiction, and so $k\leq 2$. 
Therefore, if $G$ is a $K_{m, m-1}$, $m\geq 2$, then $Tr_{st}(K_{m, m-1})=2$. We know that $Tr(K_{m, m-1})=\min\{m+1, m\}=m$, whereas $Tr_{st}(K_{m, m-1})=2$. So, the difference $Tr(G)-Tr_{st}(G)=m-2$, which can be made arbitrarily large by choosing $m$ large. For the transitivity, if $H$ is a subgraph of a graph $G$, then $Tr(H)\leq Tr(G)$ [@hedetniemi2018transitivity]. But for the strong transitivity, this is not true. Consider $G=K_{3, 2}$ and $H=C_4$. Clearly, $H$ is a subgraph of $G$, and $Tr_{st}(K_{3, 2})=2$, $Tr_{st}(C_4)=3$. So, in this example, $Tr_{st}(H)>Tr_{st}(G)$. Moreover, the behaviour of the strong transitivity is the same as the transitivity when the graph is disconnected. It is also known that every connected graph $G$ with $Tr(G)=k\geq 3$ has a transitive partition $\pi =\{V_1,V_2, \ldots, V_k\}$ such that $|V_k|$ = $|V_{k-1}| = 1$ and $|V_{k-i}| \leq 2^{i-1}$ for $2\leq i\leq k-2$ [@haynes2019transitivity]. This implies that the maximum transitivity problem is fixed-parameter tractable [@haynes2019transitivity]. Since a strong transitive partition of a graph is also a transitive partition, MSTP is also fixed-parameter tractable. In this paper, we study the computational complexity of this problem. The main contributions are summarized below: 1. The [MSTDP]{.smallcaps} is NP-complete for chordal graphs. 2. The [MSTP]{.smallcaps} can be solved in linear time for trees and split graphs. The rest of the paper is organized as follows. Section 2 shows that the [MSTDP]{.smallcaps} is NP-complete in chordal graphs. Section 3 describes linear-time algorithms for trees and split graphs. Finally, Section 4 concludes the article. # NP-completeness of strong transitivity for chordal graphs This section shows that the [Maximum Strong Transitivity Decision Problem]{.smallcaps} is NP-complete for chordal graphs. A graph is called *chordal* if there is no induced cycle of length more than $3$. Clearly, [MSTDP]{.smallcaps} is in NP. We prove the NP-completeness of this problem by showing a polynomial-time reduction from the [Proper $3$-Coloring Decision Problem]{.smallcaps} in graphs, which is known to be NP-complete [@garey1990guide]. A proper $3$-coloring of a graph $G=(V,E)$ is a function $g$ from $V$ to $\{1,2,3\}$ such that for any edge $uv \in E$, $g(u)\not= g(v)$. The [Proper $3$-Coloring Decision Problem]{.smallcaps} is defined as follows: [[Proper $3$-Coloring Decision Problem (P$3$CDP)]{.ul}]{.smallcaps} *Instance:* A graph $G=(V,E)$ *Question:* Does there exist a proper $3$-coloring of $G$? ![The trees $T$ and $T'$](st_np_chordal_tree.pdf){#fig:st_np_chordal_tree} Given an instance of P$3$CDP, say $G=(V, E)$, we construct an instance of MSTDP. The construction is as follows: let $V=\{v_1, v_2, \ldots, v_n\}$ and $E= \{e_1, e_2, \ldots, e_m\}$. - For each vertex $v_i\in V$, we consider a tree $T$ (shown in Figure [1](#fig:st_np_chordal_tree){reference-type="ref" reference="fig:st_np_chordal_tree"}) with $v_i$ as the root, where the degree of the root is $(m+3)-deg_{G}(v_i)$. Also, for each edge $e_j\in E$, we consider a vertex $v_{e_j}$ and consider another tree $T'$ (shown in Figure [1](#fig:st_np_chordal_tree){reference-type="ref" reference="fig:st_np_chordal_tree"}) with $v_{e_j}$ as the root, where the degree of the root is $m+2$. - For each edge $e_j\in E$, we take another vertex $e_j$ in $G'$, and we also take one extra vertex $e$ in $G'$. Let $A=\{e_1,e_2,\ldots ,e_m,e\}$. We make a complete graph with vertex set $A$. 
- We take another extra three vertices $v_a$, $v_e$ and $v_b$ and consider three trees $T'$ (shown in Figure [1](#fig:st_np_chordal_tree){reference-type="ref" reference="fig:st_np_chordal_tree"}) with $v_a$, $v_e$ and $v_b$ as the roots, respectively. - Next we add the following edges: for every edge $e_k=(v_i,v_j)\in E$, we join the edges $(e_k, v_i)$, $(e_k, v_j)$, $(e_k, v_{e_k})$. Also we add the edges $(e, v_a)$, $(e, v_e)$, $(e, v_b)$. - Finally, we set $k=m+4$. Note that $G'$ is a chordal graph. The construction from $G$ to $G'$ is illustrated in Figure [2](#fig:chordalnp_strong_transitivity){reference-type="ref" reference="fig:chordalnp_strong_transitivity"}. ![Construction of $G'$ from $G$](chordalnp_strong_transitivity.pdf){#fig:chordalnp_strong_transitivity} Next, we show that $G$ has a proper 3-coloring if and only if $G'$ has a strong transitive partition of size $k$. For the forward direction, we have the following lemma. **Lemma 1**. *If $G=(V,E)$ has a proper 3-coloring, then $G'=(V', E')$ has a strong transitive partition of size $k$.* *Proof.* Given a proper 3-coloring $g$ from $V$ to $\{1,2,3\}$, a strong transitive partition of size $k$, say $\pi=\{V_1,V_2, \ldots,V_k\}$ can be obtain in the following ways: ![Partition of $T$ and $T'$ into $V_1, V_2$ and $V_3$. All the leaves are in $V_1$](st_np_chordal_tree_coloring.pdf){#fig:st_np_chordal_tree_coloring} 1. If $g(v_i)=q$, then $v_i \in V_q$, for all $v_i\in V(G)$. 2. $v_a \in V_3$, $v_{e} \in V_2$ and $v_b \in V_1$. 3. For each $v_{e_j}$ vertex corresponding an edge $e_j$ with end points $v_x$ and $v_y$ in $G$, assign $v_{e_j} \in V_l$, where $l= \{1, 2, 3\} \setminus \{g(v_x),g(v_y)\}$. Put the other vertices of the trees $T$ and $T'$ in $V_1, V_2$ and $V_3$ based on their root. This is illustrated in Figure [3](#fig:st_np_chordal_tree_coloring){reference-type="ref" reference="fig:st_np_chordal_tree_coloring"}. 4. Let $e_j\in V_{3+j}$, $1\leq j\leq m+3$, and $e\in V_{m+4}$. Let $H$ be the complete graph induced by $A$. Since $H$ is a complete graph, then $V_i$ strongly dominates $V_j$ for $4\leq i<j\leq k$. Also, for each $i=1, 2, 3$, every vertex of $A$ is adjacent to a vertex of $V_i$, and the degree of that vertex is equal to the degree of a vertex of $A$. Therefore, for each $i=1, 2, 3$, $V_i$ strong dominates $V_j$ for all $j>3$. At the end, from Figure [3](#fig:st_np_chordal_tree_coloring){reference-type="ref" reference="fig:st_np_chordal_tree_coloring"}, it is clear that $V_i$ strongly dominates $V_j$ for $1\leq i<j\leq 3$. Hence, $\pi$ is a strong transitive partition of $G'$ of size $k$. Therefore, if $G$ has a proper 3-coloring, then $G'$ has a strong transitive partition of size $k$. ◻ Next, we show the converse of the statement. For this, we first prove the following claim. **Claim 2**. *Let $\pi=\{V_1,V_2,\ldots ,V_k\}$ be a strong transitive partition of $G'$ of size $k$ such that $|V_k|=1$. Then the sets $V_4, V_5,\ldots V_k$ contain only vertices from $A$ and the sets $V_1, V_2, V_3$ contain only vertices from $V'\setminus A$.* *Proof.* We divide the proof into two cases: **Case 1**. *$e\in V_{m+4}$* Since the degree of each vertex from $\{v_a, v_e, v_b\}$ is $m+3$ and adjacent with three vertices having a degree is more than or equal to $m+3$, then they cannot be in $V_p$, $p\geq 5$. Now, if any vertex from $\{v_a, v_e, v_b\}$ is in $V_4$, then $e$ must be in $V_i$, $1\leq i\leq 3$, a contradiction as $e\in V_k$ and $k\geq 4$. 
Therefore, the vertices from $\{v_a,v_e,v_b\}$ are belong to $V_p, 1\leq p \leq 3$. The vertex $e\in V_k$, to strongly dominate $e$, each set in $\{V_1, V_2, \ldots, V_{m+3}\}$ must contain at least one vertex from $N_G'(e)=\{e_1, e_2, \ldots, e_m, v_a, v_e, v_b\}$. Since $e$ is adjacent with exactly $m+3$ vertices and the set $N_G'(e)=\{e_1, e_2, \ldots, e_m, v_a, v_e, v_b\}$ contains exactly $m+3$ vertices, each $V_i, 1\leq i \leq m+3$ contains exactly one vertex from $N_G'(e)$. Since $\{v_a, v_e, v_b\}$ belong to $V_p$ for some $1\leq p \leq 3$, it follows that vertices from $\{e_1, e_2, \ldots, e_m\}$ belong to $V_p$, $p\geq 4$. Hence, the vertices of $A$ belong to $\{V_4, V_5,\ldots V_{m+4}\}$. Note that none of the vertices from $\{v_1, v_2, \ldots, v_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}\}$ belong to $V_p$ for some $p\geq 4$. Because otherwise, there exists a vertex of $A$ must be in $V_3$. But this contradicts the fact that the vertices of $A$ belong to $\{V_4, V_5,\ldots V_k\}$. Since the number of neighbours having more than or equal degree of every other vertices is at most $2$, they cannot belong to $V_p$, $p\geq 4$. Therefore, $V_4, V_5,\ldots V_{m+4}$ contain only vertices from $A$ and $V_1, V_2, V_3$ contain only vertices from $V'\setminus A$. **Case 1**. *$e\notin V_{m+4}$* Since $\pi$ is a strong transitive partition, for any $x\in V_{m+4}$, $deg(x)\geq m+3$ and has at least $m+3$ neighbour with a degree at least $deg(x)$. As $deg(v_i)=m+3$ and $v_i$ has at most $m+2$ neighbour having degree at least $m+3$, so $v_i$ cannot be in $V_{m+4}$. Similarly, we can prove that any vertex other than $\{e_1, e_2, \ldots, e_m, e\}$ cannot belong to $V_{m+4}$. Since, $e\notin V_{m+4}$, without loss of generality assume $e_1\in V_{m+4}$, where $e_1$ is the vertex of $G'$ corresponding to the edge $e_1=v_1v_2\in E$. Now we show that $v_1$ and $v_2$ belong to the first three sets in $\pi$. Let $v_1\in V_l$ and $v_2\in V_t$, where $t\leq l$. If possible, let $l\geq 4$. Since $e_1\in V_{m+4}$, to dominate $e_1$, each set in $\{V_1, V_2, \ldots, V_{m+4}\}$ must contain at least one vertex from $N_{G'}(e_1)=\{e_2, e_3, \ldots, e_m, e, v_1, v_2, v_{e_1}\}$. Since $e_1$ is adjacent with exactly $m+3$ vertices, each $V_i$, $1\leq i \leq m+3$ contains exactly one vertex from $N_{G'}(e_1)$. So, if $l\geq 4$, then to strongly dominate $v_1$, each set in $\{V_3, V_4, \ldots, V_{l-1}\}$ contains exactly one vertex from $\{e_2, e_3, \ldots, e_m\}$. Now, to strongly dominate $e_1$, each set in $\{V_{l+1}, V_{l+2}, \ldots, V_{m+3}\}$ contains exactly one vertex from $\{e_2, e_3, \ldots, e_m, e\}$. The vertex $e$ cannot belong to $V_q, q\geq l+1$, because $V_l$ contains $v_1$ and any $\{v_a, v_e, v_b\}$ cannot be in $V_l$ as $l\geq 4$. Also $e$ cannot belong to $V_p, 3\leq p \leq l-1$, as $v_1\in V_l$ and each of $\{V_3, V_4, \ldots, V_{l-1}\}$ contains exactly one vertex from $\{e_2, \ldots, e_m\}$. Therefore, the vertex $e$ must be in $V_i, 1\leq i\leq 2$. So each of $\{V_3, V_4, \ldots, V_{l-1},V_{l+1}, \ldots, V_{m+3}\}$ contains exactly one vertex from $\{e_2, \ldots, e_m\}$ only. But the number of vertices in $\{e_2, \ldots, e_m\}$ is $m-1$ whereas we need $k-4=m$ vertices. Therefore, $l$ cannot be more than $3$. Note that $v_{e_1}$ cannot be in $V_j$ for $j\geq 4$ as no of neighbour of $v_{e_1}$ having degree at least $deg(V_{e_1})=m+3$ other than $e_1$ is $2$. Therefore, the vertices $\{v_1, v_{e_1}, v_2\}$ belong to $V_p$ for $1\leq p \leq 3$. 
With similar arguments as in Case [Case 1](#st_case_npchordal){reference-type="ref" reference="st_case_npchordal"}, we can say that the vertices of $A$ belong to $\{V_4, V_5,\ldots V_k\}$. We can further claim that, the vertices of $\{v_1, v_2, \ldots, v_n, v_{e_1}, v_{e_2}, \ldots, v_{e_m}\}$ belong to $V_p$ for $1\leq p \leq 3$ and the other vertices of $G'$ belong to $V_p$ for $1\leq p \leq 3$. Therefore, $V_4, V_5,\ldots V_k$ contain only vertices from $A$ and the sets $V_1, V_2, V_3$ contain only vertices from $V'\setminus A$. ◻ Using the claim, we show that $G$ has a proper $3$-coloring. **Lemma 3**. *If $G'$ has a strong transitive partition of size $k$, then $G$ has a proper $3$-coloring.* *Proof.* Let $\pi=\{V_1,V_2,\ldots ,V_k\}$ be a strong transitive partition of $G'$ of size $k$. Since $\pi$ is also a transitive partition, from [@haynes2019transitivity] we can assume that $|V_k|=1$. Let us define a coloring of $G$, say $g$, by labelling $v_i$ with color $p$ if its corresponding vertex $v_i$ is in $V_p$. The previous claim ensures that $g$ is a $3$-coloring. Now we show that $g$ is a proper coloring. Let $e_t=v_iv_j\in E$ and let its corresponding vertex $e_t$ in $G'$ belong to some set $V_p$ with $p\geq 4$. This implies that the vertices $\{v_i, v_j, v_{e_t}\}$ must belong to different sets from $V_1, V_2, V_3$. Therefore, $g(v_i)\neq g(v_j)$ and hence $g$ is a proper coloring of $G$. ◻ Therefore, we have the following main theorem of this section: **Theorem 4**. *The [MSTDP]{.smallcaps} is NP-complete for chordal graphs.* # Linear-time algorithms ## Trees In this subsection, we design a linear-time algorithm for finding the strong transitivity of a given tree $T=(V, E)$. We design our algorithm in a similar way to the algorithm for finding the Grundy number of an input tree presented in [@hedetniemi1982linear]. First, we give a comprehensive description of our proposed algorithm. ### Description of the algorithm: Let $T^c$ denote a rooted tree rooted at a vertex $c$ and $T_v^c$ denote the subtree of $T^c$ rooted at a vertex $v$. With a small abuse of notation, we use $T^c$ to denote both the rooted tree and the underlying tree. To find the strong transitivity of $T=(V, E)$, we first define the *strong transitive number* of a vertex $v$ in $T$. The strong transitive number of a vertex $v$ in $T$ is the maximum integer $p$ such that $v\in V_p$ in a strong transitive partition $\pi=\{V_1, V_2, \ldots, V_k\}$, where the maximum is taken over all strong transitive partition of $T$. We denote the strong transitive number of a vertex $v$ in $T$ by $st(v, T)$. Note that the strong transitivity of $T$ is the maximum strong transitive number that a vertex can have; that is, $Tr_{st}(T)=\max\limits_{v\in V}\{st(v, T)\}$. Therefore, our goal is to find a strong transitive number of every vertex in the tree. Now we define another parameter, namely the *rooted strong transitive number*. The *rooted strong transitive number* of $v$ in $T^c$ is the strong transitive number of $v$ in the tree $T_v^c$ and it is denoted by $st^r(v, T^c)$. Therefore, $st^r(v, T^c)= st(v, T_v^c)$. To this end, we define another parameter, namely the *modified rooted strong transitive number*. The *modified rooted strong transitive number* of $v$ in $T^c$ is the strong transitive number of $v$ in the tree $T_v^c$, considering the $deg(v)$ as $deg(v)+1$ if $v$ is a non-root vertex and for root vertex $deg(v)$ as $deg(v)$. We denote it by $mst^r(v, T^c)$. 
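To make these definitions concrete, the following minimal Python sketch shows one possible realization of the combination step that the function [Strong_Transitive_Number$()$]{.smallcaps} described below performs at a single vertex; it anticipates the rule established in Lemma [Lemma 6](#tree_theorem_strong_transitivity){reference-type="ref" reference="tree_theorem_strong_transitivity"} below, and the identifiers and the exact degree bookkeeping are illustrative assumptions rather than the verbatim routine of the algorithm.

```python
def strong_transitive_number(child_vals, child_degs, deg_x, has_parent):
    """Illustrative sketch (assumed names, not the paper's pseudocode).

    child_vals[i] -- modified rooted strong transitive number of the i-th child
    child_degs[i] -- degree of the i-th child in the underlying tree
    deg_x         -- degree of x inside the rooted subtree (its number of children there)
    has_parent    -- True when x is not the root, i.e. p(y) = 1 in the text

    Returns 1 + z, where z is the largest length of an increasing subsequence
    of eligible children values l_{i_1} <= ... <= l_{i_z} with l_{i_p} >= p.
    """
    # A child can help strongly dominate x only if its degree meets deg(x) + p(y).
    threshold = deg_x + (1 if has_parent else 0)
    eligible = sorted(v for v, d in zip(child_vals, child_degs) if d >= threshold)
    z = 0
    for v in eligible:        # greedy scan over the sorted eligible values
        if v >= z + 1:        # the (z+1)-th selected child must have value >= z+1
            z += 1
    return 1 + z              # x itself can then be placed one level above the selection
```

For example, a vertex whose eligible children carry the values $1, 1, 2$ receives the value $1+2=3$: the greedy scan selects one child with value at least $1$ and one with value at least $2$.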
Note that the value modified rooted strong transitive number of a vertex depends on the rooted tree, whereas the strong transitive number is independent of the rooted tree. Also, for the root vertex $c$, $st^r(c, T^c)= mst^r(c, T^c)=st(c,T)$. We recursively compute the modified rooted strong transitive number of the vertices of $T^c$ in a bottom-up approach. First, we consider a vertex ordering $\sigma$, which is the reverse of BFS ordering of $T^c$. For a leaf vertex $c_i$, we set $mst^r(c_i, T^c)=1$. For a non-leaf vertex $c_i$, we call the function [Strong_Transitive_Number$()$]{.smallcaps}, which takes the modified rooted strong transitive number of children of $c_i$ in $T^c$ as input and returns the modified rooted strong transitive number of $c_i$ in $T^c$. At the end of the bottom-up approach, we have the modified rooted strong transitive number of $c_i$ in $T^c$, that is, $mst^r(c, T^c)$, which is the same as the strong transitive number of $c$ in $T$, that is, $st(c, T)$. After the bottom-up approach, we have the strong transitive number of the root vertex $c$, and the modified rooted strong transitive number of every other vertices in $T^c$. Next, we compute the strong transitive number of every other vertex. For a vertex $c_i$, other than $c$, we compute the strong transitive number using the function [Strong_Transitive_Number$()$]{.smallcaps}, which takes the modified rooted strong transitive number of children of $c_i$ in $T^{c_i}$ as input. Let $y$ be the parent of $c_i$ in $T^c$. Note that, except $y$, the modified rooted strong transitive number of children of $c_i$ in $T^{c_i}$ is the same as the modified rooted strong transitive number in $T^c$. We only need to compute the modified rooted strong transitive number of $y$ in $T^{c_i}$. We use another function called [Strong_Mark_Required$()$]{.smallcaps} for this. This function takes a strong transitive number of a vertex $x$ and modified rooted strong transitive number of its children in $T^x$ as input and marks the status of whether a child, say $v$, is required or not to achieve the strong transitive number of $x$. We mark $R(v)=1$ if the child $v$ is required, otherwise $R(v)=0$. We compute the strong transitive number of every vertex, other than $c$, by processing the vertices in the reverse order of $\sigma$, that is, in a top-down approach in $T^c$. While processing the vertex $c_i$, first based on the status marked by [Strong_Mark_Required$()$]{.smallcaps} function, we calculate the modified rooted strong transitive number of $p(c_i)$ in $T^{c_i}$, where $p(c_i)$ is the parent of $c_i$ in the rooted tree $T^c$. Then, we call [Strong_Transitive_Number$()$]{.smallcaps} to calculate the strong transitive number of $c_i$. Next, we call the [Strong_Mark_Required$()$]{.smallcaps} to mark the status of the children, which will be used in subsequent iterations. At the end of this top-down approach, we have a strong transitive number of all the vertices and hence the strong transitivity of the tree $T$. The process of finding $Tr_{st}(T)$ is described in Algorithm [\[Algo:strong_trasitivity(T)\]](#Algo:strong_trasitivity(T)){reference-type="ref" reference="Algo:strong_trasitivity(T)"}. **Input:** A tree $T=(V, E)$. **Output:** Strong transitivity of $T$. Let $\sigma=(c_1, c_2, \ldots, c_k=c)$ be the reverse BFS ordering of the vertices of $T^c$, rooted at a vertex $c$. $mst^r(c_i, T^c)=1$. $mst^r(c_i, T^c)$ = [Strong_Transitive_Number]{.smallcaps}$(mst^r(c_{i_1},T^c), \ldots , mst^r(c_{i_k}, T^c) )$.         
/\*where $c_{i_1}, c_{i_2}, \ldots, c_{i_k}$ be the children of $c_i$ and $mst^r(c_{i_1}, T^c)\leq mst^r(c_{i_2}, T^c)\leq \ldots \leq mst^r(c_{i_k}, T^c)$\*/ [Strong_Mark_Required]{.smallcaps}$(st(c, T), mst^r(c_{1}, T^c), \ldots, mst^r(c_{k}, T^c) )$         /\*where $c_{1}, c_{2}, \ldots, c_{k}$ be the children of $c$ and $mst^r(c_{1}, T^c)\leq mst^r(c_{2}, T^c)\leq \ldots \leq mst^r(c_{k}, T^c)$\*/ $mst^r(p(c_i), T^{c_i}) = st(c_i, T)$ $mst^r(p(c_i), T^{c_i}) = st(c_i, T) -1$ $st(c_i, T) =$[strong_Transitive_Number]{.smallcaps}$(mst^r(c_{i_1}, T^{c_i}), \ldots , mst^r(c_{i_k}, T^{c_i}) )$. [Strong_Mark_Required]{.smallcaps}$(st(c_i, T), mst^r(c_{i_1}, T^{c_i}), \ldots , mst^r(c_{i_k}, T^{c_i}) )$   /\*where $c_{i_1}, c_{i_2}, \ldots, c_{i_k}$ be the neighbours of $c_i$ and $mst^r(c_{i_1}, T^c)\leq mst^r(c_{i_2}, T^c)\leq \ldots \leq mst^r(c_{i_k}, T^c)$\*/                                         /\* Note that $mst^r(c_{i_j}, T^{c_i})=mst^r(c_{i_j}, T^{c})$ if $c_{i_j}$ is a child of $c_i$ in $T^c$\*/ $Tr_{st}(T)=\max\limits_{x\in V}\{st(x, T)\}$. ### Proof of correctness In this subsection, we give the proof of the correctness of Algorithm [\[Algo:strong_trasitivity(T)\]](#Algo:strong_trasitivity(T)){reference-type="ref" reference="Algo:strong_trasitivity(T)"}. It is clear that the correctness of Algorithm [\[Algo:strong_trasitivity(T)\]](#Algo:strong_trasitivity(T)){reference-type="ref" reference="Algo:strong_trasitivity(T)"} depends on the correctness of the functions used in the algorithm. First, we show the following two lemmas, which prove the correctness of Strong_Transitive_Number$()$ function. **Lemma 5**. *Let $x$ be a child of $T^c$ and $y$ be its parent in $T^c$. Also, let $mst^r(v,T)=t$. Then there exists a strong transitive partition of $T_x^c$, say $\{V_1, V_2, \ldots, V_i\}$ such that $x\in V_i$, for all $1\leq i\leq t$.* *Proof.* Since $mst^r(x,T^c)=t$, there exists a strong transitive partition $\pi=\{U_1,U_2,\ldots, U_t\}$ of $T_x^c$ such that $x\in U_t$. For each $1\leq i\leq t$, let us define another strong transitive partition $\pi'=\{V_1,V_2,\ldots,V_i\}$ of $T_x^c$ as follows: $V_j=U_j$ for all $1\leq j\leq (i-1)$ and $V_i= \displaystyle{\bigcup_{j=i}^{t}U_j}$. Clearly, $\pi'$ is a strong transitive partition of $T_x^c$ of size $i$ such that $x\in V_{i}$. Hence, the lemma follows. ◻ **Lemma 6**. *Let $v_1, v_2, \ldots, v_k$ are the children of $x$ in a rooted tree $T^c$ and $y$ be the parent of $x$ in $T^c$. Also, let for each $1\leq i\leq k$, $l_i$ denote the modified rooted strong transitive number of $v_i$ in $T^c$ with $l_1\leq l_2\leq \ldots\leq l_k$ and $p(y)=1$ when $y$ exists in $T^c$ otherwise $p(y)=0$. Let $z$ be the largest integer such that there exists a subsequence of $\{l_i: 1\leq i\leq k\}$, say $(l_{i_1}\leq l_{i_2}\leq \ldots \leq l_{i_{z}})$ such that $l_{i_{p}}\geq p$, for all $1\leq p\leq z$ and $deg(v_{i_j})\geq deg(x)+p(y)$. Then, the modified rooted strong transitive number of $x$ in the underlying tree $T^c$ is $1+z$, that is, $mst^r(x, T^c)=1+z$.* *Proof.* For each $1\leq j\leq z$, let us consider the subtrees $T_{v_{i_j}}^c$. It is also given that $mst^r(v_{i_j},T^c)=l_{i_j}$, for $j\in \{1, 2, \ldots, {z}\}$. 
For all $1\leq p\leq z$, since $l_{i_{p}}\geq p$, by Lemma [Lemma 5](#tree_lemma_strong_transitivity){reference-type="ref" reference="tree_lemma_strong_transitivity"}, we know that there exists strong transitive partitions $\pi^{p}=\{V_1^{p}, V_2^{p}, \ldots, V_p^{p}\}$ of $T_{v_{i_{p}}}^c$ such that $v_{i_{p}}\in V_p^{p}$ and $deg(v_{i_p})\geq deg(x)+1$. Let us consider the partition of $\pi=\{V_1, V_2, \ldots, V_z, V_{z+1}\}$ of $T_x^c$ as follows: $V_i=\displaystyle{\bigcup_{j=i}^{z}V_i^j}$, for $\leq i\leq z$, $V_{z+1}=\{x\}$ and every other vertices of $T$ are put in $V_1$. Clearly, $\pi$ is a strong transitive partition of $T_x^c$. Also it is given that $deg(v_{i_j})\geq deg(x)+1$. Therefore, $mst^r(x,T^c)\geq 1+z$. Next, we show that $mst(x, T^c)$ cannot be more than $1+z$. If possible, let $mst(x,T^c)\geq 2+z$. Then by Lemma [Lemma 5](#tree_lemma_strong_transitivity){reference-type="ref" reference="tree_lemma_strong_transitivity"}, we have that there exists a strong transitive partitions $\pi=\{V_1, V_2, \ldots, V_{2+z}\}$ such that $x\in V_{2+z}$. This implies that for each $1\leq i\leq 1+z$, $V_i$ contains a neighbour of $x$, say $v_i$ such that the modified rooted strong transitive number of both $v_i$ is greater or equal to $i$, that is, $l_i\geq i$ and $deg(v_i)\geq deg(x)+1$. The set $\{l_i | 1\leq i\leq 1+z\}$ forms a desired subsequence of $\{l_i: 1\leq i\leq k\}$, contradicting the maximality of $z$. Hence, $mst(x,T)=1+z$. ◻ Note that in line $6$ of Algorithm [\[Algo:strong_trasitivity(T)\]](#Algo:strong_trasitivity(T)){reference-type="ref" reference="Algo:strong_trasitivity(T)"}, when Strong_Transitive_Number$()$ is called, then it returns the strong transitive number of $c_i$ in $T^c_{c_i}$ which is in fact the modified rooted strong transitive number of $c_i$ in $T^c$. And in line $13$ of Algorithm [\[Algo:strong_trasitivity(T)\]](#Algo:strong_trasitivity(T)){reference-type="ref" reference="Algo:strong_trasitivity(T)"}, when Strong_Transitive_Number$()$ is called, then it returns the strong transitive number of $c_i$ in $T^{c_i}$ which is same as $st(c_i, T)$. From Lemma [Lemma 5](#tree_lemma_strong_transitivity){reference-type="ref" reference="tree_lemma_strong_transitivity"} and [Lemma 6](#tree_theorem_strong_transitivity){reference-type="ref" reference="tree_theorem_strong_transitivity"}, we have the function [Strong_Transitive_Number$()$]{.smallcaps}. **Input:** Modified rooted strong transitive numbers of children of $x$ in the tree $T^c$ with $mst^r(v_{1}, T^x)\leq mst^r(v_{2}, T^x)\leq \ldots \leq mst^r(v_{k}, T^x)$ and $p(y)$, where $y$ is the parent of $x$ in $T^c$. **Output:** Modified rooted strong transitive number of $x$ in the underlying tree $T^c$, that is, $mst^r(x, T^c)$. $mst^r(x, T^c)$ $\leftarrow$ 1. $mst^r(x, T)=mst^r(x, T)+1$. $mst^r(x, T^c)=mst^r(x, T^c)$ ($mst^r(x, T^c)$). Next, we prove the correctness of Strong_Mark_Required$()$. Let $T^x$ be a rooted tree and $st(x, T)=z$. A child $v$ of $x$ is said to be required if the $st(x, T^x \setminus T_v^x)=z-1$. The function returns the required status of every child of $x$ by marking $R(v)=1$ if it is required and $R(v)=0$ otherwise. The children of $x$ that are required can be identified using the following lemma. **Lemma 7**. *Let $T^x$ be a tree rooted at $x$ and $v_1, v_2, \ldots, v_k$ be its children in $T^x$. Also, let the strong transitive number of $x$ be $z$ and for each $1\leq i\leq k$, let $l_i$ denote the modified rooted strong transitive number of $v_i$ in $T^x$. 
Moreover, let for all $1\leq i\leq p$, $deg(v_i)<k$ and for all $p+1\leq i\leq k$, $deg(v_i)\geq k$ and $l_{p+1}\leq l_{p+2}\leq \ldots\leq l_k$. Then the following hold:* 1. *If $k=z-1$, then $R(v_i)=1$ for all $1\leq i\leq k$.* 2. *Let $k>z-1$. If $k-p=z-1$, then for all $1\leq i\leq p$, $R(v_i)=0$ and for all $p+1\leq i\leq k, R(v_i)=1$.* 3. *Let $k>z-1$ and $k-p>z-1$. Then for all $1\leq i\leq p$, $R(v_i)=0$ and for all $p+1\leq i\leq k-z+1, R(v_i)=0$.* 4. *Let $k>z-1$ and $k-p>z-1$. Also, let $k-z+2\leq i\leq k$. If for all $j$, $k-z+2\leq j\leq i$, $l_{j-1}\geq j-(k-z+1)$ then $R(v_i)=0$.* 5. *Let $k>z-1$, $k-p>z-1$ and $k-z+2\leq i\leq k$. If there exists $j$ in $k-z+2\leq j\leq i$ such that $l_{j-1}< j-(k-z+1)$ or then $R(v_i)=1$.* *Proof.* Note that $k-p\geq z-1$ as $st(x, T^x)=z$ and $deg(x)=k$. $(a)$ Since $st(x, T^x)=z$, by the Lemma [Lemma 5](#tree_lemma_strong_transitivity){reference-type="ref" reference="tree_lemma_strong_transitivity"}, there exists a strong transitive partition of $T$, say $\pi=\{V_1, V_2, \ldots, V_{z}\}$, such that $x\in V_z$. In that case, all the vertices in $\{v_1, v_2, \ldots, v_k\}$ must be in $V_1, V_2, \ldots, V_{z-1}$ and each set $V_i$ contains at least one of these vertices. Since $k=z-1$, each set $V_i$ contains exactly one vertex from $\{v_1, v_2, \ldots, v_k\}$. Therefore, if we remove any $v_i$ from the tree, the strong transitive number of $x$ will decrease by $1$. Hence, every $v_i$ is required, that is, $R(v_i)=1$ for all $1\leq i\leq k$. $(b)$ Since $st(x, T^x)=z$, by the Lemma [Lemma 5](#tree_lemma_strong_transitivity){reference-type="ref" reference="tree_lemma_strong_transitivity"}, there exists a strong transitive partition of $T^x$, say $\pi=\{V_1, V_2, \ldots, V_{z}\}$, such that $x\in V_z$. Since the vertices from $\{v_1, v_2, \ldots, v_p\}$ have degree less than $x$, they are not use to strong dominate $x$. Therefore, if we remove any $v_i$ ($1\leq i\leq p$) from the tree, then the strong transitive number of $x$ will be unchanged. Hence, for each $1\leq i\leq p$, $v_i$ is not required, that is, $R(v_i)=0$ for all $1\leq i\leq p$. So, all the vertices in $\{v_{p+1}, v_{p+2}, \ldots, v_k\}$ must be in $V_1, V_2, \ldots, V_{z-1}$ and each set $V_i$ contains at least one of these vertices. Since $k-p=z-1$, each set $V_i$ contains exactly one vertex from $\{v_{p+1}, v_{p+2}, \ldots, v_k\}$. Therefore, removing any $v_i$ ($p+1\leq i\leq k$) from the tree will decrease the strong transitive number of $x$ by $1$. Hence, every $v_i$ is required, that is, $R(v_i)=1$ for all $p+1\leq i\leq k$. $(c)$ Let $\pi=\{V_1, V_2, \ldots, V_{z}\}$ be a strong transitive partition of $T^x$ such that $x\in V_z$. As before, the vertices from $\{v_1, v_2, \ldots, v_p\}$ are not required. Hence, $R(v_i)=0$ for all $1\leq i\leq p$. Now, in this partition, at least one vertex from $\{v_{p+1}, v_{p+2}, \ldots, v_k\}$ must be in each $V_i$ for $1\leq i\leq z-1$. As the vertices are arranged in increasing order of their modified rooted strong transitive number, without loss of generality, we can assume that $\{v_{p+1}, v_{p+2}, \ldots, v_{k-z+1}\}\subset V_1$ and $v_i\in V_{I_i}$ for each $k-z+2\leq i \leq k$, where $I_i=i-(k-z+1)$. Clearly, if we remove any $v_i$ ($p+1\leq i\leq k-z+1$) from the tree, then the strong transitive number of $x$ will be unchanged. Hence, for each $p+1\leq i\leq k-z+1$, $v_i$ is not required, that is, $R(v_i)=0$ for all $p+1\leq i\leq k-z+1$. 
$(d)$ Let us consider the same strong transitive partition $\pi$ of $T^x$ as in case $(c)$. Let for some $k-z+2\leq i \leq k$, $v_i$ be a vertex such that $l_{j-1}\geq I_j$ for all $k-z+2\leq j\leq i$, where $I_j= j-(k-z+1)$. We can modify $\pi$ to get a strong transitive partition of $T^x\setminus \{v_i\}$ of size $z$. The modification is as follows: for each $j\in \{k-z+2, k-z+3, \ldots, i\}$, we put $v_{i-1}\in V_{I_i}$ and remove the vertices that are not in $T^x\setminus \{v_i\}$. Therefore, $v_i$ is not required and $R(v_i)=0$ for such vertices. $(e)$ Let for some $i$, $k-z+2\leq i \leq k$, $v_i$ be a vertex such that $l_{j-1}< I_j$ for some $k-z+2\leq j\leq i$, where $I_j= j-(k-z+1)$. Since the modified rooted strong transitive numbers are arranged in increasing order, $l_q< I_j$ for all $p+1\leq q\leq j-1$. Suppose, after deleting the vertex $v_i$, let the strong transitive number of $x$ in $T^x\setminus \{v_i\}$ remain $z$. Let $\pi'=\{V_1, V_2, \ldots, V_{z}\}$ be a strong transitive partition of $T^x\setminus \{v_i\}$ of size $z$ such that $x\in V_{z}$. Since $l_q< I_j$ for all $p+1\leq q\leq j-1$, none of the vertices of $\{v_{p+1}, v_{p+2}, \ldots, v_{j-1}\}$ can be in the sets $V_{I_j}, V_{I_{j+1}}, \ldots, V_{I_k}$. On the other hand, the sets $V_{I_j}, V_{I_{j+1}}, \ldots, V_{I_k}$ must contain at least $(k-j+1)$ vertices from $\{v_{p+1}, v_{p+2}, \ldots, v_{i-1}, v_{i+1}, \ldots, v_{k}\}$, as $\pi'$ is a strong transitive partition of $T^x\setminus \{v_i\}$. Therefore, $V_{I_j}, V_{I_{j+1}}, \ldots, V_{I_k}$ contains at least $(k-j+1)$ vertices from $\{v_{j}, v_{j+1}, \ldots, v_{i-1}, v_{i+1}, \ldots, v_{k}\}$. But there are only $(k-j)$ many vertices available. Hence, the strong transitive number of $x$ in $T^x\setminus \{v_i\}$ cannot be $z$. Therefore, $v_i$ is required and $R(v_i)=1$ for such vertices. ◻ Note that the condition in case $(e)$ is such that if $R(v_i)=1$ for some $i$, then $R(v_j)=1$ for all $i+1\leq j\leq k$. Based on the lemma, we have the function [Strong_Mark_Required$()$]{.smallcaps}. **Input:** A rooted tree $T^x$, rooted at a vertex $x$ and modified rooted strong transitive number of the children of $x$, such that $deg(v_i)<k$ for $1\leq i\leq p$ and $deg(v_i)\geq k$, for all $i\geq p+1$ also $mst^r(v_{p+1}, T^c)\leq \ldots \leq mst^r(v_k, T^c)$. **Output:** $R(v)$ value of $v$, $v$ is a child of $x$ in $T^x$. $R(v)=1$, for all children $v$ of $x$ in $T^x$. For all $1\leq i\leq k-z+1$, $R(v_i)=0$ $R(v_i)=0$ $R(v_i)=1$ For all $j> i$, $R(v_j)=1$ ### Complexity Analysis: In the function [Strong_Transitive_Number()]{.smallcaps}, we find the strong transitive number of vertex $x$ based on the modified rooted strong transitive number of its children. We have assumed that the children are sorted according to their modified rooted strong transitive number. Since the for loop in line $2-6$ of [Strong_Transitive_Number()]{.smallcaps} runs for every child of $x$, this function takes $O(deg(x))$ time. Similarly, [Strong_Mark_Required()]{.smallcaps} takes a strong transitive number of a vertex $x$ and modified rooted strong transitive number of its children in $T^x$ as input and marks the status of whether a child, say $v$, is required or not to achieve the strong transitive number of $x$. Here also, we have assumed that the children are sorted according to their modified rooted strong transitive number. Clearly, line $1-2$ of [Strong_Mark_Required()]{.smallcaps} takes $O(deg(x))$. 
In line $3-4$, we mark the status for a few children without any checking and for each of the remaining vertices, we mark the required status by checking condition in $O(1)$ time. Therefore, [Strong_Mark_Required()]{.smallcaps} also takes $O(deg(x))$ time. In the main algorithm [Strong_Transitivity(T)]{.smallcaps}, the vertex order mentioned in line $1$ can be found in linear time. Then, in a bottom-up approach, we calculate every vertex's modified rooted strong transitive numbers. For that we are spending $O(deg(c_i))$ for every $c_i\in \sigma$. Note that we must pass the children of $c_i$ in a sorted order to [Strong_Transitive_Number()]{.smallcaps}. But as discussed in [@hedetniemi1982linear] (algorithm for finding Grundy number of a tree), we do not need to sort all the children based on their modified rooted strong transitive numbers; sorting the children whose modified rooted strong transitive number is less than $deg(c_i)$, is sufficient. We can argue that this can be done in $O(deg(c_i))$ as shown in [@hedetniemi1982linear]. Hence, the loop in line $1-6$ takes linear time. Similarly, we conclude that line $8-14$ takes linear time. Therefore, we have the following theorem: **Theorem 8**. *The [MSTP]{.smallcaps} can be solved in linear time for trees.* ## Transitivity in split graphs {#STSG} A graph $G=(V, E)$ is said to be a *split graph* if $V$ can be partitioned into an independent set $S$ and a clique $K$. In this subsection, we prove that the transitivity of a split graph $G$ is $\omega(G)$, where $\omega(G)$ is the size of a maximum clique in $G$. First, we prove that $Tr_{st}(G)\geq \omega(G)$. **Lemma 9**. *Let $G=(S\cup K, E)$ be a split graph, where $S$ and $K$ are an independent set and a clique of $G$, respectively. Also, assume that $K$ is the maximum clique of $G$, that is, $\omega(G)=|K|$. Then $Tr_{st}(G)\geq \omega(G)$.* *Proof.* Let $\omega(G)=t$ and $\{v_1, v_2, \ldots, v_t\}$ be the vertices of a maximum clique. Also, assume $deg(v_1)\geq deg(v_2)\geq \ldots \geq deg(v_t)$. Consider a vertex partition $\pi=\{V_1, V_2, \ldots, V_{t}\}$ of size $\omega(G)$ by considering each $V_i=\{v_i\}$, for $i\geq 2$ and $V_1=V\setminus \{v_2, v_3, \ldots, v_t\}$. Since the vertices $\{v_1, v_2, \ldots, v_t\}$ form a clique and $deg(v_1)\geq deg(v_2)\geq \ldots \geq deg(v_t)$, $V_i$ strongly dominates $V_j$ for all $1\leq i<j\leq t$. Therefore, $\pi$ forms a strong transitive partition of $G$ with size $t$. Hence, $Tr_{st}(G)\geq t=\omega(G)$. ◻ Next, in the following lemma, we show that $Tr_{st}(G)=\omega(G)$. **Lemma 10**. *Let $G=(S\cup K, E)$ be a split graph, where $S$ and $K$ are an independent set and a clique of $G$, respectively. Also, assume that $K$ is the maximum clique of $G$, that is, $\omega(G)=|K|$. Then $Tr_{st}(G)= \omega(G)$.* *Proof.* From [@santra2023transitivity], we know that $Tr(G)=\omega(G)+1$ if and only if every vertex of $K$ has a neighbour in $S$. Also, we know that $Tr(G)\geq Tr_{st}(G)$. Now we divide our proof into the following two cases: **Case 1**. *A vertex $x\in K$ exists, such that $x$ has no neighbour in $S$.* In this case, form [@santra2023transitivity], we know that $Tr(G)=\omega(G)$. As $Tr_{st}(G)\leq Tr(G)$, so $Tr_{st}(G)\leq \omega(G)$. Again, by the Lemma [Lemma 9](#STSGLM1){reference-type="ref" reference="STSGLM1"}, we have $\omega(G)\leq Tr_{st}(G)$. Therefore, $Tr_{st}(G)= \omega(G)$. **Case 1**. *Every vertex of $K$ has a neighbour in $S$.* In this case we have $Tr(G)=\omega(G)+1$ [@santra2023transitivity]. 
So, $Tr_{st}(G)\leq Tr(G)=\omega(G)+1$. Suppose $Tr_{st}(G)=\omega(G)+1$ and let $\pi=\{V_1, V_2, \ldots, V_{\omega(G)+1}\}$ be a strong transitive partition of $G$ with size $\omega(G)+1$. Since $|K|=\omega(G)$, at least one set in $\pi$ contains only vertices from $S$. Also, note that $deg(s)<deg(x)$ for all $s\in S$ and all $x\in K$, as $K$ is the maximum clique of $G$ and every vertex of $K$ has a neighbour in $S$. Suppose $s\in S$ and $s\in V_{\omega(G)+1}$. Then $deg(s)$ is at least $\omega(G)$, which is impossible. So, no vertex from $S$ lies in $V_{\omega(G)+1}$. Now suppose $x\in K$ and $x\in V_{\omega(G)+1}$, and let $V_i$ be a set with $V_i\subseteq S$. Since $\pi$ is a strong transitive partition, $V_i$ strongly dominates $V_{\omega(G)+1}$. That implies $deg(s)\geq deg(x)$ for some $s\in S$. We have a contradiction, as $deg(s)<deg(x)$ for all $s\in S$ and $x\in K$. Therefore, $Tr_{st}(G)$ cannot be $\omega(G)+1$. Hence, $Tr_{st}(G)< \omega(G)+1$. Again, by Lemma [Lemma 9](#STSGLM1){reference-type="ref" reference="STSGLM1"}, we have $\omega(G)\leq Tr_{st}(G)$. Therefore, $Tr_{st}(G)= \omega(G)$. ◻

From Lemma [Lemma 10](#STSGLM2){reference-type="ref" reference="STSGLM2"}, it follows that computing the strong transitivity of a split graph is the same as computing the maximum clique. Note that a vertex partition of $V$ into $S$ and $K$, where $S$ and $K$ are an independent set and a clique of $G$, respectively, and $\omega(G)=|K|$, can be computed in linear time [@hammer1981splittance]. Hence, we have the following theorem:

**Theorem 11**. *The [MSTP]{.smallcaps} can be solved in linear time for split graphs.*

# Conclusion

In this paper, we have introduced the notion of strong transitivity in graphs, which is a variation of transitivity. We have shown that the decision version of this problem is NP-complete for chordal graphs. On the positive side, we have proved that this problem can be solved in linear time for trees and split graphs. It would be interesting to investigate the complexity status of this problem in other graph classes. Designing an approximation algorithm for this problem would be another challenging open problem.

# Acknowledgements: {#acknowledgements .unnumbered}

Subhabrata Paul was supported by the SERB MATRICS Research Grant (No. MTR/2019/000528). The work of Kamal Santra is supported by the Department of Science and Technology (DST) (INSPIRE Fellowship, Ref No: DST/INSPIRE/ 03/2016/000291), Govt. of India.

[^1]: Department of Mathematics, IIT Patna, India, email:subhabrata\@iitp.ac.in

[^2]: Department of Mathematics, IIT Patna, India, email:kamal_1821ma04\@iitp.ac.in
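As an illustrative aside, the following Python sketch applies the partition used in the proof of Lemma 9 above to a small concrete split graph. Strong domination of $V_j$ by $V_i$ is checked here as: every vertex of $V_j$ has a neighbour in $V_i$ whose degree is at least its own, which is how the notion recalled from the earlier sections is used in that proof; the sample graph and helper names are assumptions of this sketch only.

```python
from itertools import combinations

# Build a small split graph with clique K and independent set S, form the partition
# from the proof of Lemma 9 (V_i = {v_i} for i >= 2, V_1 = everything else, clique
# vertices ordered by degree), and check strong domination between the parts.
K, S = [0, 1, 2, 3], [4, 5, 6]
edges = {frozenset(e) for e in combinations(K, 2)}                 # clique on K
edges |= {frozenset(e) for e in [(4, 0), (4, 1), (5, 0), (6, 2)]}  # attach S to K

adj = {v: set() for v in K + S}
for e in edges:
    u, v = tuple(e)
    adj[u].add(v)
    adj[v].add(u)
deg = {v: len(adj[v]) for v in adj}

clique = sorted(K, key=deg.get, reverse=True)                      # v_1, ..., v_t
parts = [set(adj) - set(clique[1:])] + [{v} for v in clique[1:]]   # V_1, V_2, ..., V_t

def strongly_dominates(Vi, Vj):
    # every vertex of Vj has a neighbour in Vi of at least its own degree
    return all(any(u in adj[w] and deg[u] >= deg[w] for u in Vi) for w in Vj)

print(all(strongly_dominates(parts[i], parts[j])
          for i in range(len(parts)) for j in range(i + 1, len(parts))))   # expect: True
```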
arxiv_math
{ "id": "2310.04476", "title": "Strong transitivity of a graph", "authors": "Subhabrata Paul and Kamal Santra", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | This work showcases level set estimates for weak solutions to the $p$-Poisson equation on a bounded domain, which we use to establish Lebesgue space inclusions for weak solutions. In particular we show that if $\Omega\subset\mathbb{R}^n$ is a bounded domain and $u$ is a weak solution to the Dirichlet problem for Poisson's equation $$\begin{array}{rllc} -\Delta u=&\!\!\!\!f&\textrm{in}&\Omega\\ u=&\!\!\!\!0&\textrm{on}&\partial\Omega \end{array}$$ for $f\in L^q(\Omega)$ with $q<\frac{n}{2}$, then $u\in L^r(\Omega)$ for every $r<\frac{qn}{n-2q}$ and indeed $\|u\|_r\leq C\|f\|_q$. This result is shown to be sharp, and similar regularity is established for solutions to the $p$-Poisson equation including in the edge case $q=\frac{n}{p}$. author: - "Sullivan Francis MacDonald[^1]" bibliography: - references.bib date: September 2023 title: Regularity of Singular Solutions to $p$-Poisson Equations ---

# Introduction

In this work we are concerned with global regularity of weak solutions to Dirichlet problems for the $p$-Poisson equation on a bounded domain $\Omega\subset\mathbb{R}^n$ for $n\geq 3$, $$\label{p-Poisson} \begin{array}{rllc} -\mathrm{{div}} (| \nabla u|^{p-2}\nabla u)=&\!\!\!\!f&\textrm{in}&\Omega\\ u=&\!\!\!\!0&\textrm{on}&\partial\Omega. \end{array}$$ In particular we are interested in the behaviour of solutions when $f\in L^q(\Omega)$ for $q\leq \frac{n}{p}$. In the case $p=2$ it is a well known consequence of the De Giorgi-Nash-Moser theory that if $f\in L^q(\Omega)$ for $q>\frac{n}{2}$ then $u$ is bounded and Hölder continuous up to $\partial\Omega$, see [@Mouhot2018DEGA] and the references therein. Similar regularity is established for the $p$-Poisson equation with sufficiently regular data in [@DIBENEDETTO1983827]. In the case of $f\in L^q(\Omega)$ for small $q$, solutions to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} can be unbounded, though some regularity results have been established. For instance, a comparison principle for unbounded solutions is proved by Leonori & Porretta [@LEONORI20181492], and Lindqvist [@Lindqvist Thm 5.11] shows that $p$-superharmonic functions retain some local integrability even if they are not bounded. There are applications to modelling and numerical schemes in which unbounded solutions to problem [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} arise naturally, see e.g. [@ASHYRALIYEV200820; @IJNAM-14-500], and studying solutions in this setting also demands refinement of techniques which may be useful in work on related problems. It is therefore worthwhile to pursue improved regularity results for singular solutions to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"}. It has been shown that if $p=2$ and $f\in L^q(\Omega)$ for $1<q\leq\infty$ then one has $u\in W^{2,q}(\Omega')$ for some $\Omega'\subset\Omega$ [@TEIXEIRA2013150 (1.4)], though it turns out that $u$ inherits additional regularity from $f$. Our main contribution in this note is to establish the following improved Lebesgue space inclusions for $u$ when $q$ is small.

**Theorem 1**. *Let $u$ solve [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} for $f\in L^q(\Omega)$ with $1< q<\frac{n}{p}$.
If $r<\frac{(p-1)qn}{n-pq}$ then $u\in L^r(\Omega)$ and $$\label{Lp Bound} \|u\|_r\leq C\|f\|_q^\frac{1}{p-1}.$$ If $r>\frac{(p-1)qn}{n-pq}$ then there exists $f\in L^q(\Omega)$ and a solution $u$ to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} such that $u\not\in L^r(\Omega)$. If $q=\frac{n}{p}$ then estimate [\[Lp Bound\]](#Lp Bound){reference-type="eqref" reference="Lp Bound"} holds for every $r<\infty$.* The techniques used to prove Theorem [Theorem 1](#main){reference-type="ref" reference="main"} employ ideas from both Moser and De Giorgi iterative schemes, and they can be used to reproduce many classical boundedness results in the setting of Lebesgue and Orlicz spaces. Moreover, the sharpness of our exponent implies that this result is optimal on the scale of Lebesgue spaces, though it is unclear whether solutions necessarily belong to $L^r(\Omega)$ when $r=\frac{(p-1)qn}{n-qp}$. We also remark that the level set estimates used to prove our main result have been established for many elliptic equations, see e.g. [@dgref], and we foresee no major difficulty in extending Theorem [Theorem 1](#main){reference-type="ref" reference="main"} to such problems. The remainder of this paper is organized as follows. In Section 2 we state some preliminary definitions and results, and in Section 3 we present a streamlined form of De Giorgi iteration to achieve an estimate for the distribution function of weak solutions. In Section 4 we prove [\[Lp Bound\]](#Lp Bound){reference-type="eqref" reference="Lp Bound"} using this distribution estimate together with an iterative argument inspired by Moser iteration. Finally, in Section 5, we present examples which show that our main result cannot be improved. # Preliminaries For $1\leq q<\infty$ we define $L^q(\Omega)$ in the usual way, and if $f$ is measurable we define the distribution function of $f$ by $\lambda_f(\alpha)=m(\{x\in\Omega:|f(x)|>\alpha\})$, where $m$ is Lebesgue measure. Further, we remind the reader of the identity $$\label{dist} \int_\Omega |f|^qdx=\int_0^\infty q\alpha^{q-1}\lambda_f(\alpha)d\alpha,$$ which follows from Tonelli's Theorem provided that $f$ is measurable. The Sobolev space $W^{1,p}_0(\Omega)$ is defined as the collection of weakly differentiable functions on $\Omega$ which vanish on $\partial\Omega$, and whose weak derivatives belong to $L^p(\Omega)$. Since $\Omega$ is a bounded open set by assumption, if $n>p$ then for $f\in W^{1,p}_0(\Omega)$ we have the Sobolev inequality $\|f\|_{\frac{np}{n-p}}\leq C\|\nabla f\|_p$. A function $u\in W^{1,p}_0(\Omega)$ is said to be a weak solution to the $p$-Poisson equation [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} if $$\label{weak} \int_\Omega|\nabla u|^{p-2}\nabla u\cdot\nabla\varphi dx=\int_\Omega f\varphi dx$$ holds for every $\varphi\in C_0^\infty(\Omega)$. Indeed, thanks to a standard density argument, if $u\in W^{1,p}_0(\Omega)$ is a weak solution to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} then equation [\[weak\]](#weak){reference-type="eqref" reference="weak"} holds for every $\varphi\in W_0^{1,p}(\Omega)$. Henceforth then, we use $W_0^{1,p}(\Omega)$ as our space of test functions with the understanding that all derivatives in [\[weak\]](#weak){reference-type="eqref" reference="weak"} are taken in the weak sense. 
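As a quick numerical aside, the layer-cake identity [\[dist\]](#dist){reference-type="eqref" reference="dist"} recalled above can be sanity-checked for the simple choice $f(x)=x$ on $\Omega=(0,1)$ with $q=3$, where both sides equal $1/4$; the grid sizes in the sketch below are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Both Riemann sums approximate the same quantity, illustrating the identity
# int_Omega |f|^q dx = int_0^infty q a^{q-1} lambda_f(a) da for f(x) = x on (0,1), q = 3.
q, N = 3.0, 1_000_000
x = (np.arange(N) + 0.5) / N                 # midpoint grid on Omega = (0, 1)
lhs = np.mean(np.abs(x) ** q)                # int_Omega |f|^q dx  ~  1/4

alpha = (np.arange(N) + 0.5) / N             # alpha grid; lambda_f vanishes for alpha >= 1
lam = 1.0 - alpha                            # lambda_f(alpha) = m({x in (0,1) : x > alpha})
rhs = np.mean(q * alpha ** (q - 1) * lam)    # int_0^infty q alpha^{q-1} lambda_f(alpha) d alpha
print(lhs, rhs)                              # both approximately 0.25
```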
It is necessary to assume some additional regularity of $f$ for the integral on the right-hand side of [\[weak\]](#weak){reference-type="eqref" reference="weak"} to be finite for an arbitrary test function. Since $\varphi\in W_0^{1,p}(\Omega)$ belongs to $L^\frac{np}{n-p}(\Omega)$ by Sobolev's inequality, if we assume going forward that $f\in L^q(\Omega)$ for $q\geq \frac{np}{np-n+p}$ then convergence of the integrals in [\[weak\]](#weak){reference-type="eqref" reference="weak"} is assured by Hölder's inequality. # Distribution Estimates Given that the norm of a function $f$ can be computed if one knows its distribution function $\lambda_f$ using [\[dist\]](#dist){reference-type="eqref" reference="dist"}, we aim to estimate the measure of level sets for solutions to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} at an arbitrary height. We do this recursively by employing the following result. **Lemma 2**. *Let $u$ be a weak solution to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} with $1<p<n$ and let $f\in L^q(\Omega)$ for $1<q\leq\infty$. There exists a constant $C$ such that for any $\beta>\alpha$, $$\lambda_u(\beta)\leq \bigg(\frac{C\|f\|_q\lambda_u(\alpha)^{\frac{q-1}{q}}}{(\beta-\alpha)^{p-1}}\bigg)^\frac{n}{n-p}.$$* Our proof of this result employs a similar technique to De Giorgi iteration, though by using a modified test function we condense the standard argument. If one assumes that $f$ belongs to the Orlicz space $L^\Psi(\Omega)$ for any Young function $\Psi$, the argument which we give below can be used to prove that $$\lambda_u(\beta)\leq \bigg(\frac{C\|f\|_\Psi\overline{\Psi}^{-1}(\lambda_u(\alpha)^{-1})^{-1}}{(\beta-\alpha)^{p-1}}\bigg)^\frac{n}{n-p}.$$ This estimate allows one to prove boundedness of weak solutions under appropriate hypotheses. *Proof.* Let $u\in W_0^{1,p}(\Omega)$ be a weak solution to [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} and fix $\beta>\alpha$. Then define the test function $$\varphi=(u-\alpha)_+-(u-\beta)_+$$ so that $\varphi\in W^{1,p}_0(\Omega)$ by the chain rule and a density argument. Moreover we have $0\leq \varphi\leq \beta-\alpha$ and $\varphi$ is nonzero only on the level set $\{x\in\Omega:u(x)>\alpha\}$. For brevity we let $\chi_\alpha$ denote the indicator of this set and we note that $\nabla\varphi=\chi_\alpha\nabla u$ in the weak sense. To estimate $\lambda_u(\beta)$ we begin by observing that $$\label{a} (\beta-\alpha)\lambda_u(\beta)^\frac{n-p}{pn}=\bigg(\int_{\{u>\beta\}}(\beta-\alpha)^\frac{pn}{n-p}dx\bigg)^\frac{n-p}{pn}\leq \bigg(\int_{\Omega}|\varphi|^\frac{pn}{n-p}dx\bigg)^\frac{n-p}{pn}=\|\varphi\|_\frac{pn}{n-p}.$$ Our aim is to estimate the norm on the right-hand side in terms of $\lambda_u(\alpha)$, to obtain a recursive estimate for $\lambda_u$. 
Since $\varphi\in W^{1,p}_0(\Omega)$ we can use it as a test function in [\[weak\]](#weak){reference-type="eqref" reference="weak"} to get $$\|\nabla\varphi\|_p^p=\int_\Omega|\nabla\varphi|^{p}dx=\int_\Omega|\nabla u|^{p}\chi_\alpha dx=\int_\Omega f\varphi dx.$$ It follows from the Sobolev inequality, the estimate above, and Hölder's inequality that for $q>1$, $$\label{b} \|\varphi\|_\frac{pn}{n-p}^p\leq C\int_\Omega f\varphi\chi_\alpha dx \leq C(\beta-\alpha)\int_\Omega|f|\chi_\alpha dx\leq C(\beta-\alpha)\|f\|_q\lambda_u(\alpha)^{1-\frac{1}{q}}.$$ Finally, combining equations [\[a\]](#a){reference-type="eqref" reference="a"} and [\[b\]](#b){reference-type="eqref" reference="b"} we get $$(\beta-\alpha)\lambda_u(\beta)^\frac{n-p}{pn}\leq \|\varphi\|_\frac{pn}{n-p}\leq C(\beta-\alpha)^\frac{1}{p}\|f\|_q^\frac{1}{p}\lambda_u(\alpha)^{\frac{q-1}{pq}}.$$ Rearranging this gives the claimed distribution estimate. ◻ # Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#proof-of-theorem-main} For simplicity, we first take $C$ as in Lemma [Lemma 2](#distbound){reference-type="ref" reference="distbound"} and replace $u$ with $v=(C\|f\|_q)^{-\frac{1}{p-1}}u$ so that $$\lambda_v(\beta)\leq \bigg(\frac{\lambda_v(\alpha)^{\frac{q-1}{q}}}{(\beta-\alpha)^{p-1}}\bigg)^\frac{n}{n-p}.$$ We also assume without loss of generality that $m(\Omega)= 1$. We wish to bound $\lambda_v(\beta)$ by a function of $\beta$, and then employ this pointwise estimate in [\[dist\]](#dist){reference-type="eqref" reference="dist"}. To this end we find successive approximations of $\lambda_v$ by first defining $\lambda_0(\beta)=m(\Omega)=1$, and for $k\geq 1$ recursively defining $$\lambda_{k+1}(\beta)=\inf_{0\leq\alpha<\beta}\bigg(\frac{\lambda_k(\alpha)^{\frac{q-1}{q}}}{(\beta-\alpha)^{p-1}}\bigg)^\frac{n}{n-p}.$$ It follows by induction and an application of Lemma [Lemma 2](#distbound){reference-type="ref" reference="distbound"} that $\lambda_v(\beta)\leq \lambda_k(\beta)$ for each $k\in\mathbb{N}$, and in particular we have for all $\beta\geq 0$ that $$\lambda_v(\beta)\leq \min\bigg\{1,\lim_{k\rightarrow\infty}\lambda_k(\beta)\bigg\}.$$ To estimate the right-hand, we make a crude (but adequate) approximation by choosing $\alpha=\frac{\beta}{2}$ in the infimum defining $\lambda_k(\beta)$, so that $$\label{recur} \lambda_{k+1}(\beta)\leq \bigg(\frac{2}{\beta}\bigg)^{(p-1)(\frac{n}{n-p})}\lambda_k\bigg(\frac{\beta}{2}\bigg)^{(\frac{q-1}{q})(\frac{n}{n-p})}.$$ Fixing $\ell=\frac{(q-1)n}{q(n-p)}$, we now assume that $q<\frac{n}{p}$ to ensure $\ell<1$, and we argue by induction that $$\lambda_k(\beta)\leq \bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)q}{q-1}\sum\limits_{j=1}^k \ell^j}$$ for each $k\in\mathbb{N}$. The base case follows at once from taking $k=0$ in [\[recur\]](#recur){reference-type="eqref" reference="recur"} and using that $\lambda_0(\beta)=1$ for all $\beta\geq 0$. For the inductive step we suppose that the claimed estimate holds up to $k$ and we observe that $$\lambda_{k+1}(\beta)\leq \bigg(\frac{2}{\beta}\bigg)^{\textstyle(p-1)(\frac{n}{n-p})}\lambda_k\bigg(\frac{\beta}{2}\bigg)^{\textstyle\ell}\leq \bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)q}{q-1}\ell}\bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)q}{q-1}\ell\sum\limits_{j=1}^k \ell^j}=\bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)q}{q-1}\sum\limits_{j=1}^{k+1} \ell^j}.$$ Thus the claimed estimate holds for each $k\in\mathbb{N}$. 
Since we can take $k$ as large as we like and $\ell<1$, the series in the power converges to $\frac{\ell}{1-\ell}$ as $k\rightarrow\infty$. It follows that $$\lim_{k\rightarrow\infty}\lambda_k(\beta)\leq \bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)q}{q-1}(\frac{\ell}{1-\ell})}=\bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)qn}{n-qp}}.$$ Altogether then, we find the following estimate on the distribution function $\lambda_b$ for $\beta\geq 0$: $$\lambda_v(\beta)\leq \min\bigg\{1,\bigg(\frac{2}{\beta}\bigg)^{\textstyle\frac{(p-1)qn}{n-qp}}\bigg\}.$$ Using this estimate we now study $L^r(\Omega)$ regularity of $v$ and $u$. From identity [\[dist\]](#dist){reference-type="eqref" reference="dist"} we observe that $$\int_\Omega|v|^{\textstyle r}dx=r\int_0^\infty\beta^{\textstyle r-1}\lambda_v(\beta)d\beta\leq C+C\int_2^\infty\beta^{\textstyle r-1-\frac{(p-1)qn}{n-qp}}d\beta.$$ The integral on the right-hand side converges whenever $1\leq r<\frac{(p-1)qn}{n-qp}$, showing that if $r$ is within that range then $\|v\|_r\leq C(n,p,q,r)$. Recalling the definition of $v$, it follows that for the same $r$ we have $$\|u\|_r\leq C\|f\|_q^\frac{1}{p-1},$$ giving the claimed bound when $q<\frac{n}{p}$. It remains to treat the case $q=\frac{n}{p}$. In this instance we have $\ell=1$ and our recursive definition for $\lambda_k$ introduced above reads $$\lambda_{k+1}(\beta)=\inf_{0\leq\alpha<\beta}\frac{\lambda_k(\alpha)}{(\beta-\alpha)^{(p-1)(\frac{n}{n-p})}}.$$ We claim that thanks to this recursion, the following bound for $\lambda_k$ holds for each $k\in\mathbb{N}$: $$\lambda_k(\beta)\leq \bigg(\frac{k^{k}}{\beta^{k}}\bigg)^{(p-1)(\frac{n}{n-p})}.$$ Once again the base case follows immediately from the recursive definition of $\lambda_1$ and the fact that $\lambda_0(\beta)=1$. Assuming $\lambda_k$ satisfies the claimed estimate, we observe that $$\lambda_{k+1}(\beta)=\inf_{0\leq\alpha<\beta}\frac{\lambda_k(\alpha)}{(\beta-\alpha)^{(p-1)(\frac{n}{n-p})}}\leq \inf_{0\leq\alpha<\beta}\bigg(\frac{k^k}{\alpha^k(\beta-\alpha)}\bigg)^{(p-1)(\frac{n}{n-p})}.$$ Since the function $\alpha\mapsto\alpha^k(\beta-\alpha)$ attains a maximum of $\frac{k^k\beta^{k+1}}{(k+1)^{k+1}}$ on $[0,\infty)$ at $\alpha=\frac{k\beta}{k+1}$, the infimum on the right-hand side above can be calculated to give the estimate $$\lambda_{k+1}(\beta)\leq \inf_{0\leq\alpha<\beta}\bigg(\frac{k^k(k+1)^{k+1}}{k^k\beta^{k+1}}\bigg)^{(p-1)(\frac{n}{n-p})},$$ giving the required bound for $k\in\mathbb{N}$. 
It follows now that for each $\beta\geq 0$ we have $$\lambda_v(\beta)\leq \min\bigg\{1,\min_{k\in\mathbb{N}}\bigg(\frac{k^{k}}{\beta^{k}}\bigg)^{(p-1)(\frac{n}{n-p})}\bigg\}.$$ Indeed, for each fixed $k\geq 1$ and $\beta$ belonging to the interval $I_k=\big[\frac{k^{k}}{(k-1)^{k-1}},\frac{(k+1)^{k+1}}{k^k}\big)$ this minimum is achieved by the function $$\bigg(\frac{k^{k}}{\beta^{k}} \bigg)^{(p-1)(\frac{n}{n-p})}.$$ Consequently we have the following pointwise bound for our distribution function: $$\lambda_v(\beta)\leq \chi_{[0,1)}+\sum_{k=1}^\infty\chi_{I_k}\bigg(\frac{k^{k}}{\beta^{k}} \bigg)^{(p-1)(\frac{n}{n-p})}.$$ Applying this estimate and using Fatou's lemma to interchange a limit and integral, we see that $$\int_\Omega|v|^rdx\leq C+C\int_0^\infty\frac{\beta^r}{\beta}\bigg(\sum_{k=1}^\infty\chi_{I_k}\bigg(\frac{k^{k}}{\beta^{k}} \bigg)^{(p-1)(\frac{n}{n-p})}\bigg)d\beta\leq C+C\sum_{k=1}^\infty\int_{I_k}\frac{\beta^r}{\beta}\bigg(\frac{k^{k}}{\beta^{k}} \bigg)^{(p-1)(\frac{n}{n-p})}d\beta.$$ Each $I_k$ has a length of no more than three (indeed, $m(I_k)\rightarrow e$ as $k\rightarrow\infty$), and on each $I_k$ the integrand attains a maximum value at the left endpoint. Evaluating at this point we find that $$\int_\Omega|v|^rdx\leq C+C\sum_{k=1}^\infty\left(\frac{k^k}{(k-1)^{k-1}}\right)^{r-1}\left(\frac{k-1}{k}\right)^{\left(k^{2}-k\right){(p-1)(\frac{n}{n-p})}}\leq C\sum_{k=1}^\infty k^{r-1}e^{-k{(p-1)(\frac{n}{n-p})}}.$$ For any fixed $r<\infty$ the series on the right converges, implying that there exists a constant $C$ for which $\|v\|_r\leq C$ and $\|u\|_r^{p-1}\leq C\|f\|_q$. We remark that the final estimate does not imply that $u\in L^\infty(\Omega)$, since the constant $C$ may blow up as $r\rightarrow\infty$ for some solutions. # Sharpness of Theorem 1 Finally, we verify that the exponent $\frac{(p-1)qn}{n-pq}$ appearing in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} cannot be increased. **Lemma 3**. *Let $\varepsilon>0$ be given and assume that $q<\frac{n}{p}$ for $n\geq 3$ and $p>1$. Fix $\Omega=B(0,1)\subset\mathbb{R}^n$. There exist functions $f$ and $u$ satisfying [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"} such that $f\in L^q(\Omega)$ and $$\label{blowup} u\not\in L^{\frac{(p-1)qn}{n-pq}+\varepsilon}(\Omega).$$* *Proof.* Given $\varepsilon>0$, set $\ell=\frac{\varepsilon(\frac{n}{q}-p)^2}{(p-1)n+\varepsilon(\frac{n}{q}-p)}>0$ and $f(x)=-|x|^{\ell-\frac{n}{q}}$. Since $f$ is radial, we have $$\int_{B(0,1)}|f(x)|^qdx=|S^{n-1}|\int_0^1 r^{n-1}|f(r)|^qdr=|S^{n-1}|\int_0^1 r^{\ell q-1}dr.$$ The integral on the right converges since $\ell q>0$, meaning that $f\in L^q(\Omega)$. Next we define $$u(x)=\frac{(p-1)q^\frac{p}{p-1}}{(nq-n+\ell q)^\frac{1}{p-1}(pq-n+\ell q)} (|x|^\frac{pq-n+\ell q}{(p-1)q}-1).$$ It is easily verified by direct differentiation that $-\mathrm{{div}} (| \nabla u|^{p-2}\nabla u)=-|x|^{\ell-\frac{n}{2}}$ for $x\in B(0,1)$, and if $x\in S^{n-1}$ then $u(x)=0$, meaning that $u$ solves [\[p-Poisson\]](#p-Poisson){reference-type="eqref" reference="p-Poisson"}. It remains to study the integrability of $u$. 
From our choice of the constant $\ell$ it follows that $$\frac{(p-1)qn}{n-pq-\ell q}=\frac{(p-1)qn}{n-pq}+\varepsilon,$$ and since $u$ is a radial function on the ball we can integrate in polar coordinates to see that $$\int_{B(0,1)}|u(x)|^{\frac{(p-1)qn}{n-pq}+\varepsilon}dx\geq C\bigg(\int_0^1 r^{n-1+\frac{pq-n+\ell q}{(p-1)q}(\frac{(p-1)qn}{n-pq}+\varepsilon)}dr-1\bigg)=C\bigg(\int_0^1 \frac{dr}{r}-1\bigg)=\infty.$$ This gives [\[blowup\]](#blowup){reference-type="eqref" reference="blowup"}, showing that our example has the claimed properties and Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is sharp. ◻ [^1]: University of Toronto, ON, Canada. sullivan\@math.utoronto.ca. The author is a PhD student supported by the University of Toronto Department of Mathematics.
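As a supplementary check of the construction in the proof of Lemma 3 above, the radial function $u$ defined there can be verified symbolically for one admissible parameter set: with $n=3$, $p=2$, $q=6/5$ and $\varepsilon=1/10$ it satisfies $-\mathrm{div}(|\nabla u|^{p-2}\nabla u)=-|x|^{\ell-\frac{n}{q}}$, matching the definition of $f$ in that proof. The parameter values below are arbitrary illustrative choices, not taken from the paper.

```python
import sympy as sp

# Symbolic check of the sharpness example from Lemma 3 for one admissible parameter set.
r = sp.symbols('r', positive=True)                       # r = |x|
n, p, q, eps = sp.Integer(3), sp.Integer(2), sp.Rational(6, 5), sp.Rational(1, 10)

l = eps * (n/q - p)**2 / ((p - 1)*n + eps*(n/q - p))     # the constant ell from the proof
gamma = (p*q - n + l*q) / ((p - 1)*q)                    # exponent of |x| in u
C = (p - 1) * q**(p/(p - 1)) / ((n*q - n + l*q)**(1/(p - 1)) * (p*q - n + l*q))
u = C * (r**gamma - 1)                                   # u vanishes on the unit sphere

du = sp.diff(u, r)
flux = r**(n - 1) * sp.Abs(du)**(p - 2) * du             # radial part of |grad u|^{p-2} grad u
lhs = -sp.diff(flux, r) / r**(n - 1)                     # -div(|grad u|^{p-2} grad u) for radial u
rhs = -r**(l - n/q)                                      # the data f from the proof
print(sp.simplify(lhs - rhs))                            # expect: 0
```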
arxiv_math
{ "id": "2309.07274", "title": "Regularity of Singular Solutions to $p$-Poisson Equations", "authors": "Sullivan Francis MacDonald", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We give a construction of a convex set $A \subset \mathbb R$ with cardinality $n$ such that $A-A$ contains a convex subset with cardinality $\Omega (n^2)$. We also consider the following variant of this problem: given a convex set $A$, what is the size of the largest matching $M \subset A \times A$ such that the set $$\{ a-b : (a,b) \in M \}$$ is convex? We prove that there always exists such an $M$ with $|M| \geq \sqrt n$, and that this lower bound is best possible, up to a multiplicative constant. address: - | Johann Radon Institute for Computational and Applied Mathematics\ Linz, Austria - | Institute for Basic Science (IBS)\ Daejeon, South Korea - | Institute for Algebra, Johannes Kepler Universität\ Linz, Austria author: - Krishnendu Bhowmick - Ben Lund - Oliver Roche-Newton date: - - title: Large convex sets in difference sets ---

# Introduction

This paper is concerned with the existence or non-existence of convex sets in difference sets. A set $A \subset \mathbb R$ is said to be convex if its consecutive differences are strictly increasing. That is, writing $A= \{a_1 < a_2 < \dots < a_n \}$, $A$ is convex if $$a_{i+1} - a_i > a_i - a_{i-1}$$ holds for all $i=2,\dots, n-1$. Most research on convex sets comes in the context of sum-product theory, and one may think of the notion of a convex set as a generalisation of a set with multiplicative structure. For instance, it is known that convex sets determine many distinct sums and differences. In particular, it was proven in [@SS] that the bound[^1] $$|A-A| \gg |A|^{8/5-o(1)}$$ holds for any convex set $A$. Here, $A-A:=\{a-b : a,b \in A \}$ denotes the difference set determined by $A$. This result captures the vague notion that convex sets cannot be additively structured, and there has been considerable effort expended to quantify and apply this idea in various ways, see for instance [@ENR], [@HRNR] and [@SW]. Given a finite set $B \subset \mathbb R$, define $$\mathcal C(B):= \max_{A \subset B : A \text{ is convex}} |A|.$$ That is, $\mathcal C(B)$ denotes the size of the largest convex subset of $B$. The first question that we consider is the following: given a convex set $A \subset \mathbb R$, what can we say about the possible value of $\mathcal C(A-A)$? A first observation is that $$\label{easylower} |\mathcal C(A-A)| \geq |A|,$$ as can be seen by considering the convex set $A-a \subset A-A$, where $a$ is an arbitrary element of $A$. There are some simple constructions showing that the lower bound [\[easylower\]](#easylower){reference-type="eqref" reference="easylower"} is optimal up to a multiplicative constant; for instance, we can take $A= \{ i^2 : 1 \leq i \leq n \}$. In this paper, we give a construction of a convex set $A$ whose difference set contains a very large convex set.

**Theorem 1**. *For all $n \in \mathbb N$, there exists a convex set $A \subset \mathbb R$ with $|A|=n$ such that $A-A$ contains a convex subset $S$ with cardinality $|S| \gg n^2$.*

Using the notation introduced earlier, Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} states that there exists a convex set $A$ such that $\mathcal C(A-A) \gg |A|^2$. This result shares some similarities with the main result of [@RZ], where it was established that there exists a set $A \subset \mathbb R$ such that $A+A$ contains a convex subset with cardinality $\Omega(|A|^2)$. The main qualitative difference is that we have the additional restriction that the set $A$ is also assumed to be convex.
Also, Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} provides a convex subset of the difference set, rather than the sum set. The simple construction giving rise to the lower bound [\[easylower\]](#easylower){reference-type="eqref" reference="easylower"} feels like something of a cheat, and so we consider a variant of this problem where we make a further restriction concerning the origin of the convex subset of a difference set. A set $M \subset A \times A$ is a *matching* if the elements of $M$ are pairwise disjoint. Given a matching $M \subset A \times A$, define the restricted difference set $$A-_M A:= \{ a-b : (a,b) \in M \}.$$ Define $$\mathcal {CM}(A):= \max_{M \subset A \times A : M \text{ is a matching and } A-_M A \text{ is convex.}} |M|.$$ That is, $\mathcal {CM}(A)$ denotes the size of the largest matching on $A$ which gives rise to a convex subset of $A-A$. Now, we ask a similar question for this quantity: given a convex set $A \subset \mathbb R$, what can we say about the size of $\mathcal {CM}(A)$? In particular, how small can this quantity be? Should we expect an analogue of the bound [\[easylower\]](#easylower){reference-type="eqref" reference="easylower"} if we rule out this simple construction? In this paper we answer this question by giving the following two complimentary results, showing that $\mathcal {CM}(A) \geq \sqrt { |A|}$ and that this bound is optimal up to a multiplicative constant. **Theorem 1**. *Let $n \in \mathbb N$ be sufficiently large and suppose that $A \subset \mathbb R$ is a convex set with cardinality $n$. Then there exists a matching $M \subset A \times A$ such that $|M| \geq \sqrt n$ and $A-_M A$ is convex.* **Theorem 1**. *For all $n \in \mathbb N$, there exists a convex set $A \subset \mathbb R$ with cardinality $n$ such that, if $M \subset A \times A$ is a matching and $A-_M A$ is convex then $|M| \ll \sqrt n$.* # Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} {#proof-of-theorem-thmmain} Define $$a_i=i + c_1i^2 + c_2i^3,$$ where $$c_1= \frac{75}{n^2} \,\,\,\, \text{ and } \,\,\,\, c_2 = \frac{1}{n^5}.$$ Assume that $n$ is a sufficiently large multiple of $100$. This assumption is made only to simplify the notation slightly, and can be easily removed at the price of introducing some floor and ceiling functions to the calculations. Define $A= \{a_1< \dots <a_n \}$. Observe that $A$ is convex. Indeed, the sequence $a_{i+1}-a_i$ is increasing, as can be seen by calculating that $$a_{i+1}-a_i= 1 +c_1(2i+1) +c_2(3i^2+3i+1).$$ For each integer $k \in [0.009 n , 0.01 n]$ define $D_k$ to be the set of $k$th differences $$D_k:= \{ a_{i+k}-a_i : 1 \leq i \leq 0.99 n \} .$$ The set $D_k$ is also convex. Indeed, let $d_i^{(k)}:= a_{i+k}-a_i$ denote the $i$th element of $D_k$. Then the sequence $d_{i+1}^{(k)}- d_{i}^{(k)}$ increases with $i$. This can be seen by observing that $$\label{bidef} d_i^{(k)}= a_{i+k}-a_i = k + c_1(2ki+k^2) +c_2(3i^2k+3ik^2+k^3),$$ and hence $$d_{i+1}^{(k)}- d_i^{(k)}= 2c_1k+3c_2k^2+3c_2k+6c_2ki$$ increases with $i$. We will find a large convex subset of $A-A$ by efficiently gluing together consecutive convex sets $D_k$. We will make use of the following observation from [@RZ]. **Lemma 1**. *Suppose that $A= \{a_1< \dots <a_n \}$ and $B = \{b_1< \dots <b_m \}$ are convex sets. 
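Before turning to the proofs, the following short computation illustrates Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} on the sample convex set $A=\{j^2 : 1\leq j\leq 100\}$: it builds the explicit matching used later in the proof and confirms that the resulting restricted difference set is convex. The helper `is_convex` and the choice of $A$ are illustrative assumptions, not part of the paper.

```python
import math

def is_convex(s):
    # consecutive differences of a sorted finite set are strictly increasing
    d = [b - a for a, b in zip(s, s[1:])]
    return all(y > x for x, y in zip(d, d[1:]))

n = 100
A = [j * j for j in range(1, n + 1)]             # convex: gaps 2j + 1 strictly increase
k = math.isqrt(n)                                 # here sqrt(100) = 10 exactly

# the matching from the proof of Theorem 2: pair a_{k+1-i} with a_{k+1+i(i+1)/2}
M = [(A[k - i], A[k + i * (i + 1) // 2]) for i in range(1, k + 1)]   # 0-based indices
diffs = [b - a for a, b in M]
print(is_convex(A), len(M), is_convex(diffs))     # expect: True 10 True
```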
Suppose that there exist $1 \leq i \leq m$ and $1 \leq j \leq n$ such that $$b_i \leq a_j < a_{j+1} \leq b_{i+1}.$$ Then the set $$\{ a_1 < \dots < a_j < b_{i+1} < b_{i+2} < \dots < b_m \}$$ is convex.* Before we get into the details of the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, which involves some rather tedious calculations, let us take a moment to try and explain the idea behind it, with the help of some pictures. Firstly, we note that, although the sets $D_k$ are convex, they are only *slightly convex*, in the sense that, if we zoom out and take a look at $D_k$, it appears to resemble an arithmetic progression with common difference $2c_1k$. Note also that this common difference increases slightly as $k$ increases. The other important feature of this construction is that we have chosen the parameters in such a way that the $D_k$ have convenient overlapping properties. In particular, each $D_k$ has diameter approximately $\frac{3}{2}$, and starts at $k$. In particular, this means that neighbouring $D_k$ have a significant overlap, but also that each $D_k$ takes sole ownership of a section of the real line. We can use this setup to form a convex set by gluing together consecutive $D_k$. In the region where $D_k$ and $D_{k+1}$ overlap, the elements of $D_k$ are slightly more dense (because the common difference of the approximate arithmetic progression is smaller). This ensures that there exist two consecutive elements of $D_k$ in this region, which allows for an application of Lemma [Lemma 1](#lem:RZ){reference-type="ref" reference="lem:RZ"}. Meanwhile, the existence of the non-overlapping region ensures that this glued set contains many elements of $D_k$ for each $k$. Now we come to the formal details. We use the notation $d_{min}^{(k)}$ for the smallest element of $D_k$ and $d_{max}^{(k)}$ for the largest element of $D_k$. Note that $$\begin{aligned} d_{min}^{(k)} & = d_1^{(k)} = k + c_1(2k+k^2) +c_2(3k+3k^2+k^3), \text{ and} \label{bmin} \\ d_{max}^{(k)} & = d_{0.99n}^{(k+1)} = k + c_1\left(2k\frac{99n}{100}+k^2\right) +c_2\left(3\left(\frac{99n}{100} \right)^2k+3\frac{99n}{100}k^2+k^3\right). \label{bmax}\end{aligned}$$ Observe that each $D_k$ is contained in the closed interval $[b_{min}^{(k)}, b_{max}^{(k)}]$. We will prove the following two facts about the intersection properties of these intervals. **Claim 1**. *For each $k \in [0.009 n , 0.01 n]$, there exist $1 \leq i,j \leq 0.99n$ such that $$d_i^{(k+1)} \leq d_j^{(k)} < d_{j+1}^{(k)} \leq d_{i+1}^{(k+1)} .$$* **Claim 1**. *For each $k \in [0.009 n , 0.01 n]$, there are at least $Cn$ elements of $D_k$ in the interval $(d_{max}^{(k-1)}, d_{min}^{(k+1)})$, where $C>0$ is an absolute constant.* Once we have proved these two claims, the proof will be finished. Indeed, we can use Claim [Claim 1](#claim1){reference-type="ref" reference="claim1"} together with Lemma [Lemma 1](#lem:RZ){reference-type="ref" reference="lem:RZ"} to glue together consecutive convex sets $D_k$ to form a set $$S \subset\bigcup_{k=0.009 n}^{0.01n} D_k \subset A-A.$$ Claim [Claim 1](#claim2){reference-type="ref" reference="claim2"} guarantees that, for each $k \in [0.009 n , 0.01 n]$, there at least $Cn$ elements in $D_k \cap S$ that do not appear in $D_j \cap S$ for any $j \neq k$. This implies that $$|S| \geq (Cn) \cdot (0.001 n ) \gg n^2.$$ It remains to prove the two claims. 
*Proof of Claim [Claim 1](#claim1){reference-type="ref" reference="claim1"}.* We will show that the interval $I=[d_{min}^{(k+1)},d_{max}^{(k)}]$ contains at least two more elements of $D_k$ than it does of $D_{k+1}$. It then follows that there must exist two consecutive elements of $D_k$ in this interval, which then implies the existence of the claimed configuration $$d_i^{(k+1)} \leq d_j^{(k)} < d_{j+1}^{(k)} \leq d_{i+1}^{(k+1)} .$$ We begin by establishing a lower bound for $|D_k \cap I|$, namely $$\begin{aligned} |D_k \cap I| &= | \{ i \in [ 0.99n ] : d_i^{(k)} \geq d_{min}^{(k+1)} \}| \\& \geq 0.99n - | \{ i \in \mathbb N : d_i^{(k)} < d_{min}^{(k+1)} \}|.\end{aligned}$$ We need an upper bound for $| \{ i \in \mathbb N : d_i^{(k)} < d_{min}^{(k+1)} \}|$. To this end, $$\begin{aligned} \label{Bkeq} &| \{ i \in \mathbb N : d_i^{(k)} < d_{min}^{(k+1)} \}| \nonumber \\ & \leq 2 + |\{i > 2: d_i^{(k)} < d_{min}^{(k+1)}\}| \nonumber \\ & = 2 + | \{ i > 2 : 2c_1ki +c_2(3i^2k+3ik^2) < 1 + c_1(4k+3) +c_2(6k^2+12k+7) \}| \\& \leq 2 + | \{ i >2 : 2c_1ki < 1 + c_1(4k+3) \}| \nonumber \\& \leq 2 + \frac{1+c_1(4k+3)}{2c_1k} = \frac{1+c_1(8k+3)}{2c_1k}. \nonumber\end{aligned}$$ Therefore, $$\label{BkIlower} |D_k \cap I| \geq \frac{1.98 n \cdot c_1k - 8c_1k - 1 - 3c_1 }{2c_1k} .$$ We will compare the bound in [\[BkIlower\]](#BkIlower){reference-type="eqref" reference="BkIlower"} with an upper bound for $|D_{k+1} \cap I|$, which we deduce now. Observe that $$\begin{aligned} |D_{k+1} \cap I| &= | \{ i \in [ 0.99n ] : d_i^{(k+1)} \leq d_{max}^{(k)} \}| \\& = | \{ i \in\mathbb N: 1 + c_1(2(k+1)i+2k+1) +c_2(3i^2(k+1)+3i(k+1)^2+3k^2+3k+1 ) \\ \,\,\, & \leq c_12k\frac{99n}{100} +c_2\left(3\left(\frac{99n}{100} \right)^2k+3\frac{99n}{100}k^2\right) \} | \\& \leq \left | \left \{ i \in \mathbb N : 1 + c_1(2(k+1)i+2k+1) \leq 2c_1k\frac{99n}{100} +c_2\left(3\left(\frac{99n}{100} \right)^2k+3\frac{99n}{100}k^2\right) \right \} \right |.\end{aligned}$$ Note that the term which involves the multiple of $c_2$ in the previous step is at most $1/n^2$. Therefore, this term is less than $c_1$, which allows us to write $$\begin{aligned} |D_{k+1} \cap I| &\leq \left | \left \{ i \in \mathbb N : 1 + 2c_1(k+1)i+2c_1k \leq 2 c_1k\frac{99n}{100} \right \} \right | \nonumber \\& = \left | \left \{ i \in \mathbb N : i \leq \frac{2c_1k\left (\frac{99n}{100}-1 \right ) -1}{2c_1(k+1)} \right \} \right | \nonumber \\ & \leq \frac{2c_1k\left (\frac{99n}{100}-1 \right ) -1}{2c_1(k+1)}. \label{Bk+1Iupper}\end{aligned}$$ It remains to show that the lower bound given in [\[BkIlower\]](#BkIlower){reference-type="eqref" reference="BkIlower"} is at least as big as the upper bound given in [\[Bk+1Iupper\]](#Bk+1Iupper){reference-type="eqref" reference="Bk+1Iupper"}, plus two. 
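Before the formal proofs, here is an exact-arithmetic sanity check of both claims for the concrete value $n=2000$: it builds the set $A$ defined above, forms the sets $D_k$ for $k$ in the stated range, tests the gluing condition of Claim [Claim 1](#claim1){reference-type="ref" reference="claim1"}, and counts the elements of $D_k$ exclusive to the strip from Claim [Claim 1](#claim2){reference-type="ref" reference="claim2"}. The choice of $n$ is an arbitrary illustrative assumption.

```python
from fractions import Fraction
from bisect import bisect_right

n = 2000                                                   # a sufficiently large multiple of 100
c1, c2 = Fraction(75, n**2), Fraction(1, n**5)
a = [i + c1 * i**2 + c2 * i**3 for i in range(n + 1)]      # a[i] = a_i (a[0] unused)

def D(k):
    # k-th differences d_i^{(k)} = a_{i+k} - a_i for 1 <= i <= 0.99 n, in increasing order
    return [a[i + k] - a[i] for i in range(1, 99 * n // 100 + 1)]

def claim1_holds(Dk, Dk1):
    # two consecutive elements of D_k lying inside one gap of D_{k+1}
    for j in range(len(Dk) - 1):
        i = bisect_right(Dk1, Dk[j]) - 1
        if i >= 0 and i + 1 < len(Dk1) and Dk[j + 1] <= Dk1[i + 1]:
            return True
    return False

lo, hi = 9 * n // 1000, n // 100                           # k ranges over [0.009 n, 0.01 n]
for k in range(lo, hi):
    Dk, Dk1 = D(k), D(k + 1)
    top_prev, bottom_next = max(D(k - 1)), min(Dk1)
    exclusive = sum(1 for d in Dk if top_prev < d < bottom_next)
    print(k, claim1_holds(Dk, Dk1), exclusive, round(exclusive / n, 3))
```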
That is, we need to show that $$\begin{aligned} \frac{1.98 n \cdot c_1k - 8c_1k - 1 - 3c_1 }{2c_1k} & \geq 2 + \frac{2c_1k\left (\frac{99n}{100}-1 \right ) -1}{2c_1(k+1)} \\ & = \frac{4c_1(k+1) + 2c_1k\left (\frac{99n}{100}-1 \right ) -1}{2c_1(k+1)},\end{aligned}$$ which simplifies to give $$\frac{1.98 n \cdot c_1k - 8c_1k - 1 - 3c_1 }{k} \geq \frac{4c_1(k+1) + 2c_1k\left (\frac{99n}{100}-1 \right ) -1}{k+1},$$ and eventually $$1.98 nc_1k \geq 10c_1k^2+15c_1k+3c_1+1.$$ Since we have $0.009n \leq k \leq 0.01n$, it would be sufficient to prove that $$1.98 n \cdot c_1 \cdot 0.009 n \geq 10c_1 \cdot (0.01n)^2 +15c_1 \cdot (0.01 n) +3c_1+1.$$ Substituting in the definition of $c_1$, the previous inequality becomes $$\frac{198 \cdot 75 \cdot 9}{100 \cdot 1000} \geq \frac{75}{1000} + \frac{15 \cdot 75}{100 n} + \frac{3 \cdot 75}{n^2} + 1.$$ This is equivalent to the inequality $$0.2615 \geq \frac{15 \cdot 75}{100 n} + \frac{3 \cdot 75}{n^2},$$ which is valid for $n$ sufficiently large. ◻ *Proof of Claim [Claim 1](#claim2){reference-type="ref" reference="claim2"}.* We will give an upper bound for $|D_k \cap(-\infty, d_{max}^{(k-1)}]|$ and a lower bound for $|D_k \cap( - \infty, d_{min}^{(k+1)})|$, and combine these bounds to deduce the claimed lower bound for $|D_k \cap ( d_{max}^{(k-1)}, d_{min}^{(k+1)})|$. To upper bound $|D_k \cap(-\infty, d_{max}^{(k-1)}]|$, we make use of [\[Bk+1Iupper\]](#Bk+1Iupper){reference-type="eqref" reference="Bk+1Iupper"} and deduce that $$\begin{aligned} |D_k \cap(-\infty, d_{max}^{(k-1)}]| & = |D_k \cap [d_{min}^{(k)},d_{max}^{(k-1)}]| \\& \leq \frac{2c_1(k-1)\left (\frac{99n}{100}-1 \right ) -1}{2c_1k} \\& \leq \frac{1}{2} \left ( \frac{150}{n^2} \cdot (0.01n) \cdot n -1 \right ) \cdot \left( \frac{1000}{9n}\cdot \frac{n^2}{75} \right ) = \frac{10n}{27}.\end{aligned}$$ Next, we present a lower bound for $|D_k \cap( - \infty, d_{min}^{(k+1)})|= |D_k \cap [d_{min}^{(k)},d_{min}^{(k+1)})|$. Indeed, $$\begin{aligned} \label{nearly} |D_k \cap [d_{min}^{(k)},d_{min}^{(k+1)})| & =| \{ i \in [ 0.99n ] : d_i^{(k)} < d_{min}^{(k+1)})\}| \nonumber \\& = | \{ i \in [ 0.99n ]: 2c_1ki +c_2(3i^2k+3ik^2) < 1 + c_1(4k+3) +c_2(6k^2+12k+7) \}| \nonumber \\& \geq | \{ i \in [ 0.99n ] : 2c_1ki +c_2(3i^2k+3ik^2) < 1 + c_1(4k+3) \}| \nonumber \\& \geq | \{ i \in [ 0.99n ] : 2c_1ki < 0.99 + c_1(4k+3) \}|. \end{aligned}$$ The last inequality above uses the fact that the term $c_2(3i^2k+3ik^2)$ is at most $0.01$, provided that $n$ is sufficiently large. Therefore, $$\begin{aligned} |D_k \cap [d_{min}^{(k)},d_{min}^{(k+1)})| &\geq | \{ i \in \mathbb N: 2c_1ki < 0.99 + c_1(4k+3) \}| \\ & \geq \frac{0.99 + c_1(4k+3)}{2c_1k} -1 \\ & \geq \frac{0.99}{2c_1k} -1 \\& \geq \frac{99}{100} \cdot \frac{2n}{3} -1 =\frac{66}{100}n-1 \geq \frac{65}{100}n=\frac{13}{20}n.\end{aligned}$$ By combining this inequality with [\[nearly\]](#nearly){reference-type="eqref" reference="nearly"}, it follows that $$|D_k \cap (d_{max}^{(k-1)}, d_{min}^{(k+1)})| \geq \frac{13}{20}n - \frac{10}{27}n = \frac{151}{540}n,$$ as required. ◻ # Matchings *Proof of Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}.* Again, write $A= \{ a_1<a_2< \dots < a_n \}$ and $d_i=a_{i+1}-a_i$. Since $A$ is convex, we have $$d_1 < d_2 < \dots <d_{n-1}.$$ For convenience, we the shorthand use $k:= \lceil \sqrt n \rceil$. 
The matching $M$ is given by $$\begin{aligned} M&= \left \{ (a_{k+1},a_{k+2}), (a_k, a_{k+4}), (a_{k-1}, a_{k+7}), (a_{k-2}, a_{k+11}), \dots ,\left(a_1,a_{k +1+\frac{k(k+1)}{2}}\right ) \right \} \\& = \left \{ \left (a_{k+1-i}, a_{k+1+\frac{i(i+1)}{2}} \right ) : 1 \leq i \leq k \right \}. \end{aligned}$$ The matching $M$ has cardinality $k \geq \sqrt n$, and the choice of parameters ensures that it is indeed a well-defined subset of $A \times A$, provided that $n$ is sufficiently large. To verify this, we just need to check that $k +1+\frac{k(k+1)}{2} \leq n$, which holds comfortably for $n \geq 36$. It remains to check that $A-_M A$ is convex. Let $$e_i:=a_{k +1+\frac{i(i+1)}{2}} - a_{k+1-i}$$ denote the $i$th element of $A-_M A$. We need to check that $e_{i+1}-e_{i} > e_{i} - e_{i-1}$ holds for all $2 \leq i \leq k-1$. A telescoping argument gives $$\begin{aligned} \label{eqed} e_i&= (a_{k+2-i} - a_{k+1-i}) + (a_{k+3-i} - a_{k+2-i}) + \dots + \left (a_{k +1+\frac{i(i+1)}{2}} - a_{k +\frac{i(i+1)}{2}} \right ) \nonumber \\& = d_{k+1-i}+d_{k+2-i} + \dots + d_{k+\frac{i(i+1)}{2}}, \end{aligned}$$ and therefore $$\begin{aligned} e_{i+1} - e_i & = d_{k+1-(i+1)}+d_{k+2-(i+1)} + \dots + d_{k+\frac{(i+1)(i+2)}{2}} \\& - \left (d_{k+1-i}+d_{k+2-i} + \dots + d_{k+\frac{i(i+1)}{2}} \right ) \\& = d_{k-i}+ d_{k+\frac{i(i+1)}{2}+1}+ d_{k+\frac{i(i+1)}{2}+2} + \dots +d_{k+\frac{(i+1)(i+2)}{2}}. \end{aligned}$$ It then follows that $$\begin{aligned} (e_{i+1} - e_i ) - (e_i -e_{i-1}) &= d_{k-i}+ d_{k+\frac{i(i+1)}{2}+1}+ d_{k+\frac{i(i+1)}{2}+2} + \dots +d_{k+\frac{(i+1)(i+2)}{2}} \\ & - \left ( d_{k-i+1}+ d_{k+\frac{i(i-1)}{2}+1}+ d_{k+\frac{i(i+1)}{2}+2} + \dots +d_{k+\frac{i(i+1)}{2}} \right ) \end{aligned}$$ There are $i+2$ terms with a positive sign and $i+1$ with a negative sign. We can pair off the $i+1$ largest positive terms with smaller (in absolute value) negative terms to conclude the proof, as follows: $$\begin{aligned} (e_{i+1} - e_i ) - (e_i -e_{i-1}) &= d_{k-i}+\left (d_{k+\frac{i(i+1)}{2}+1} -d_{k+\frac{i(i+1)}{2}} \right ) + \left (d_{k+\frac{i(i+1)}{2}+2} -d_{k+\frac{i(i+1)}{2}-1} \right ) \\& + \dots + \left (d_{k+\frac{(i+1)(i+2)}{2}-1} -d_{k+\frac{i(i-1)}{2}+1} \right ) + \left (d_{k+\frac{(i+1)(i+2)}{2}} -d_{k-i+1}\right ) \\& >0. \end{aligned}$$ ◻ *Proof of Theorem [Theorem 1](#thm:main3){reference-type="ref" reference="thm:main3"}.* For each $j =1, \dots, n$, define $$a_j:=j(2n)^n+(j-1)(2n)^{n-1} + \dots + 2(2n)^{n-(j-2)}+ (2n)^{n-(j-1)}.$$ The set $A= \{ a_j : 1 \leq j \leq n \}$ is a convex set. Indeed, the consecutive differences of $A$ are given by $$d_j= a_{j+1} - a_j = (2n)^n+(2n)^{n-1} + \dots + (2n)^{n-j},$$ a sequence which is strictly increasing. Let $M \subset A \times A$ be a matching such that $A-_M A$ is convex. Our goal is to prove that $|M| \ll \sqrt n$. Let $k \leq n-1$ be an integer. Repeating notation used earlier in the paper, set $d_j^{(k)}:= a_{j+k}-a_j$ and $$D_k= \{ d_j^{(k)} : 1 \leq j \leq n-k \}.$$ We calculate that $$\begin{aligned} \label{dcalc} d_j^{(k)}&=k[(2n)^n+ (2n)^{n-1} + \dots + (2n)^{n-j}] \\ &+ (k-1) (2n)^{n-(j+1)} + (k-2)(2n)^{n-(j+2)} + \dots +(2n)^{n-(j+k-1)}. \end{aligned}$$ An important feature of this construction is that the diameter of the components $D_k$, which is approximately $(2n)^{n-2}$, is significantly smaller than the gaps between consecutive components, which is approximately $(2n)^{n}$. This allows us to conclude that, with at most one exception, a convex set can have at most one representative from each $D_k$. 
This is formalised in the following claim. **Claim 1**. *Suppose that $S \subset A-A$ is convex. Then there exists at most one $k \in \mathbb N$ such that $|S \cap D_k| \geq 2$. Moreover, if $|S \cap D_{k_2}| \geq 2$ then $S \cap D_{k_1}= \varnothing$ for all $k_1 < k_2$.* *Proof.* The first sentence of the claim follows from the second, and so it is sufficient to prove only the second sentence. Suppose for a contradiction that $k_1<k_2$, $|S \cap D_{k_1}| \geq 1$ and $|S \cap D_{k_2}| \geq 2$. Let $d_j^{k_2}$ be the smallest element of $S \cap D_{k_2}$. Since $S \cap D_{k_1}$ is non-empty, it follows that $d_j^{(k_2)}$ is not the first element of $S$, and since $S$ also contains a larger element of $D_{k_2}$, we also know that $d_j^{(k_2)}$ is not the last element of $S$. Let $x$ be the element of $S$ preceding $d_j^{(k_2)}$ and let $y$ be the element of $S$ following $d_j^{(k_2)}$. By the convexity of $S$, $$\label{xandy} d_j^{(k_2)}-x < y- d_j^{(k_2)}.$$ On the other hand $$\begin{aligned} \label{xbound} d_j^{(k_2)}-x \geq d_1^{(k_2)} - d_{n-(k_2 -1)}^{(k_2-1)} &> k_2 [(2n)^n+(2n)^{n-1}] - (k_2-1) [ (2n)^n + \dots + (2n) ] \nonumber \\& = (2n)^n + (2n)^{n-1} - (k_2-1)[ (2n)^{n-2} + \dots + (2n) ] \nonumber \\& > (2n)^n + (2n)^{n-1} - n[ (2n)^{n-2} + \dots + (2n) ] \nonumber \\& \geq (2n)^n + (2n)^{n-1} - n \cdot 2 \cdot (2n)^{n-2} = (2n)^n . \end{aligned}$$ The second inequality uses the fact that $k_2 \leq n$, while the third inequality is an application of the inequality $$\label{basic} (2n)^{j}+(2n)^{j-1}+ \dots + (2n) \leq 2 \cdot (2n)^j,$$ which is valid for all $j, n \in \mathbb N$. Meanwhile, since $y \in D_{k_2}$, a similar calculation yields $$\label{ybound} y - d_j^{(k_2)} \leq d_{n-k_2}^{(k_2)}- d_1^{(k_2)} \leq k_2[ (2n)^{n-2} + (2n)^{n-3} + \dots + (2n) ] \leq (2n)^{n-1}.$$ Comparing [\[xbound\]](#xbound){reference-type="eqref" reference="xbound"}, [\[ybound\]](#ybound){reference-type="eqref" reference="ybound"} and [\[xandy\]](#xandy){reference-type="eqref" reference="xandy"}, we observe a contradiction. ◻ Using the same basic fact about the blocks $D_k$ again, namely that the gap between consecutive blocks is significantly larger than their individual diameters, we now show that the blocks which contain elements of a convex set must occur in a weakly convex form. A set $S= \{s_1 < s_2 < \dots < s_n \} \subset \mathbb R$ is said to be *weakly convex* if the inequality $$s_{i+1} - s_i \geq s_i - s_{i-1}$$ holds for all $i=2,\dots, n-1$. For a given set $S \subset A-A$, we define $$K(S)= \{ k \in \mathbb N : |S \cap D_k| \geq 1 \}.$$ **Claim 1**. *Suppose that $S \subset A-A$ is convex. Then the set $K(S)$ is weakly convex.* *Proof.* Suppose for a contradiction that $K(S)$ is not weakly convex. Then there exists three consecutive elements $$d_{j_1}^{(k_1)}, d_{j_2}^{(k_2)}, d_{j_3}^{(k_3)} \in S$$ such that $k_2-k_1 > k_3-k_2$. 
Since the $k_i$ are integers, it follows that $$\label{kstuff} 2k_2-k_1-k_3 \geq 1.$$ The difference between $d_{j_2}^{(k_2)}$ and $d_{j_1}^{(k_1)}$ is $$\begin{aligned} d_{j_2}^{(k_2)} - d_{j_1}^{(k_1)}&\geq d_{1}^{(k_2)} - d_{n-k_1}^{(k_1)} \\& > k_2[ (2n)^n + (2n)^{n-1}] -k_1[(2n)^n+ (2n)^{n-1} + \dots + (2n)] \\& =(k_2-k_1)[ (2n)^n +(2n)^{n-1}] -k_1[(2n)^{n-2}+ (2n)^{n-3} + \dots + (2n)] \end{aligned}$$ Meanwhile, the next difference can be bounded by $$\begin{aligned} d_{j_3}^{(k_3)} - d_{j_2}^{(k_2)}&\leq d_{n-k_3}^{(k_3)} - d_{1}^{(k_2)} \\ &< k_3[(2n)^n+ (2n)^{n-1} + \dots + (2n)] - k_2[(2n)^n+ (2n)^{n-1}] \\& =(k_3 -k_2)[(2n)^n+ (2n)^{n-1}] + k_3[(2n)^{n-2}+ (2n)^{n-3} + \dots + (2n)].\end{aligned}$$ However, by the convexity of $S$, we also have $d_{j_2}^{(k_2)} - d_{j_1}^{(k_1)} < d_{j_3}^{(k_3)}-d_{j_2}^{(k_2)}$. Combining this with the previous two inequalities and applying [\[kstuff\]](#kstuff){reference-type="eqref" reference="kstuff"} yields $$(k_3+k_1) [ (2n)^{n-2}+ (2n)^{n-3} + \dots + (2n)] > (2k_2 -k_3-k_1) [(2n)^n+ (2n)^{n-1}] \geq [(2n)^n+ (2n)^{n-1}].$$ We then once again use inequality [\[basic\]](#basic){reference-type="eqref" reference="basic"} to obtain $$\label{cont} (k_3+k_1) \cdot 2 \cdot (2n)^{n-2} > (2n)^n.$$ Finally, note that $k_3+k_1 < 2n$. This holds because $k_3,k_1 \leq n-1$, as the sets $D_{k_i}$ are only defined within this range. Plugging this into [\[cont\]](#cont){reference-type="eqref" reference="cont"}, we obtain the contradiction $$2 \cdot (2n)^{n-1} > (2n)^n.$$ ◻ Another useful feature of this construction is that the consecutive differences within the components $D_k$ shrink rapidly, which makes it difficult to find a large convex sets in $A-A$. We use this fact in the following claim to establish that a convex set cannot contain more than two elements from any $D_k$. **Claim 1**. *Suppose that $S \subset A-A$ is convex. Then $$|S \cap D_k| \leq 2$$ holds for any $k=1,\dots,n-1$.* *Proof.* Suppose for a contradiction that there exist three consecutive elements of $S$ belonging to the same block $D_k$. In particular, we have $d_{i_1}^{(k)} < d_{i_2}^{(k)}< d_{i_3}^{(k)}$ satisfying $$\label{convexbasic} d_{i_3}^{(k)} - d_{i_2}^{(k)} > d_{i_2}^{(k)} - d_{i_1}^{(k)}.$$ Then, since $i_2 \geq i_1+1$, we have $$d_{i_2}^{(k)} - d_{i_1}^{(k)} \geq d_{i_2}^{(k)} - d_{i_2-1}^{(k)} = (2n)^{n-i_2} + (2n)^{n-(i_2+1)} + \dots + (2n)^{n-(i_2+k-1)}> (2n)^{n-i_2}.$$ On the other hand, $$\begin{aligned} d_{i_3}^{(k)} - d_{i_2}^{(k)} \leq d_{n-k}^{(k)} - d_{i_2}^{(k)} &< k[ (2n)^{n-(i_2+1)} + \dots + (2n) ] \\ & \leq k \cdot 2 \cdot (2n)^{n-(i_2+1)} \\& < n \cdot 2 \cdot (2n)^{n-(i_2+1)} = (2n)^{n-i_2}. \end{aligned}$$ Combining the previous two inequalities with [\[convexbasic\]](#convexbasic){reference-type="eqref" reference="convexbasic"}, we obtain the intended contradiction ◻ By proving Claims [Claim 1](#claim:most21){reference-type="ref" reference="claim:most21"} and [Claim 1](#claim:most22){reference-type="ref" reference="claim:most22"}, we have essentially proved that each block of $D_k$ in $A-A$ can contain at most one element of a convex set $S \subset A-A$. We can be a little more precise; taking the potential exceptional block into account, we have the bound $$\label{Sbound} |S| \leq |K(S)|+1.$$ It remains to upper bound the size of the indexing set $K(S)$. We need one more claim to allow us to achieve this goal. 
Note that the following claim represents the first time in the proof where we use the fact that the convex set $S$ is derived from a matching. **Claim 1**. *Suppose that $M \subset A \times A$ is a matching and that $S = A-_M A$ is a convex set. Then the indexing set $K(S)$ does not contain four consecutive elements which form an arithmetic progression.* *Proof.* Suppose for a contradiction that four consecutive elements of $K(S)$ form an arithmetic progression. It then follows from Claim [Claim 1](#claim:most21){reference-type="ref" reference="claim:most21"} that there exist four consecutive elements $d_{j_1}^{(k)}, d_{j_2}^{(k+t)}, d_{j_3}^{(k+2t)}$ and $d_{j_4}^{(k+3t)}$ in $S$, for some positive integers $k,t$ such that $k+3t \leq n - 1$. Since $S$ is derived from a matching, it must be the case that the $j_i$ are pairwise distinct. Write $$\begin{aligned} e_1 & = d_{j_2}^{(k+t)} - d_{j_1}^{(k)} \\ e_2 & = d_{j_3}^{(k+2t)} - d_{j_2}^{(k+t)} \\ e_3 & = d_{j_4}^{(k+3t)} - d_{j_2}^{(k+2t)}. \end{aligned}$$ Since $S$ is convex, we have $e_1 < e_2 < e_3$. We will now show that it must be the case that $j_2 < j_1$. Suppose for a contradiction that this is not true, and so $j_2>j_1$. Then $$e_1=d_{j_2}^{(k+t)} - d_{j_1}^{(k)} > t[(2n)^n+\dots + (2n)^{n-j_1}] + (t+1)(2n)^{n-(j_1+1)}.$$ On the other hand $$\begin{aligned} e_2 = d_{j_3}^{(k+2t)} - d_{j_2}^{(k+t)} &\leq d_{n-(k+2t)}^{(k+2t)} - d_{j_2}^{(k+t)} \\ & < t[ (2n)^{n} + \dots + (2n)^{n-j_2}] + (k+2t)[(2n)^{n-(j_2+1)}+ \dots + (2n)]. \end{aligned}$$ It follows from the previous two bounds that $$\begin{aligned} e_1 -e_2 &> (2n)^{n-(j_1+1)} - (k+2t)[(2n)^{n-(j_1+2)}+ \dots + (2n)] \\& \geq (2n)^{n-(j_1+1)} - (k+2t) \cdot 2 \cdot (2n)^{n-(j_1+2)} \\& > (2n)^{n-(j_1+1)} - n \cdot 2 \cdot (2n)^{n-(j_1+2)} =0 \end{aligned}$$ This is a contradiction, and we have thus established that $j_2<j_1$. The exact same argument implies that $j_3<j_2$. Now, since $j_2< j_1$, we have $$\ e_1=d_{j_2}^{(k+t)}- d_{j_1}^{(k)} \geq t[(2n)^n+ \dots + (2n)^{n-j_2}] - k[ (2n)^{n-(j_2+1)} + \dots + (2n)],$$ Similarly, since $j_3<j_2$, it follows that $$e_2 = d_{j_3}^{(k+2t)} - d_{j_2}^{(k+t)} \leq t[ (2n)^n + \dots + (2n)^{n-j_3}] + (t-1) [ (2n)^{n-(j_3+1)}+ \dots + (2n)].$$ Combining the previous two inequalities, and again making use of the fact that $j_3<j_2$, we have $$\begin{aligned} e_1-e_2 &\geq (2n)^{n-(j_3+1)} - (k+t-1)[ (2n)^{n-(j_3+2)}+ \dots + (2n)] \\ &\geq (2n)^{n-(j_3+1)} - (k+t-1) \cdot 2 \cdot (2n)^{n-(j_3+2)} \\& > (2n)^{n-(j_3+1)} - n \cdot 2 \cdot (2n)^{n-(j_3+2)} =0. \end{aligned}$$ This contradicts the fact that $e_1 <e_2$ and completes the proof of the claim. ◻ When we combine Claim [Claim 1](#claim:weak){reference-type="ref" reference="claim:weak"} and Claim [Claim 1](#claim:ap){reference-type="ref" reference="claim:ap"}, we see that the set $K$ is a weakly convex subset of $\{ 1, \dots , n \}$ which does not contain four consecutive terms in an arithmetic progression. It follows that $$|K| \ll \sqrt n.$$ Combining this with [\[Sbound\]](#Sbound){reference-type="eqref" reference="Sbound"}, the proof is complete. ◻ # Concluding remarks; sums instead of differences The problems considered in this paper were partly motivated by a potential application to a problem in discrete geometry concerning the minimum number of angles determined by a set of points in the plane in general position. This problem was considered recently in [@FKMPPW], and similar problems can be traced back to the work of Pach and Sharir [@PS]. 
We found that progress on this question could be given by a solution to the following problem: given a convex set $A \subset \mathbb R$ estimate the size of the largest matching on $A$ which gives rise to a convex set in the image set $f(A,A)$, where $f: \mathbb R \times \mathbb R \rightarrow \mathbb R$ is a specific bivariate function whose rather complicated formula is omitted here. Theorems [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 1](#thm:main3){reference-type="ref" reference="thm:main3"} of this paper solve this problem for this simplified case when $f(x,y)=x-y$. With the potential application to the distinct angles problem in mind, an interesting future research direction could be to generalise the problems considered in this paper by considering an arbitrary $f: \mathbb R \times \mathbb R \rightarrow \mathbb R$ in place of the function $f(x,y)=x-y$. We conclude this paper with some remarks about the most natural case, whereby $f(x,y)=x+y$. It is interesting to see that we can quite easily obtain an optimal result, giving a significant quantitative improvement to Theorem [Theorem 1](#thm:main2){reference-type="ref" reference="thm:main2"}, if we consider sums instead of differences, as follows. **Theorem 1**. *Let $n \in \mathbb N$ and suppose that $A \subset \mathbb R$ is a convex set with cardinality $n$. Then there exists a matching $M \subset A \times A$ such that $|M| \geq \lfloor \frac{n}{2} \rfloor$ and $A+_M A$ is convex.* *Proof.* Suppose that $n$ is even and consider the matching $$M=\{(a_1, a_{n/2+1}), (a_2, a_{n/2+2}),..., (a_{n/2},a_n) \}.$$ Then the set $$A+_M A= \{ a_{n/2+k}+a_k : k \in \{1, \dots , n/2 \} \}$$ is convex. If $n$ is odd then we omit $a_n$ and use the same argument as above. ◻ In particular, it follows from Theorem [Theorem 1](#thm:main4){reference-type="ref" reference="thm:main4"} that an analogue of the construction in the proof of Theorem [Theorem 1](#thm:main3){reference-type="ref" reference="thm:main3"} is not possible if we take sums instead of differences. There are other cases in which problems concerning additive properties of convex sets are sensitive to sums and differences. For instance, a construction in [@RNW] (see also [@Sch]) shows that there exists a convex set $A \subset \mathbb R$ and $x \in A-A$ with $$| \{ (a,b) \in A : a-b=x \}| \gg |A|.$$ On the other hand, incidence geometry can be used to give the upper bound $$| \{ (a,b) \in A : a+b=x \}| \ll |A|^{2/3}$$ for any convex $A \subset \mathbb R$ and $x \in A+A$. We would also be interested to know whether Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is still valid when $A-A$ is replaced by $A+A$. We were unable to prove anything non-trivial for this question. # Acknowledgements {#acknowledgements .unnumbered} Krishnendu Bhowmick and Oliver Roche-Newton were supported by the Austrian Science Fund FWF Project P 34180. Ben Lund was supported by the Institute for Basic Science (IBS-R029-C1). Part of this work was carried out at Vietnam Institute for Advanced Study in Mathematics (VIASM) in Hanoi, and we thank VIASM for their hospitality and the excellent working conditions. We also thank Eyvindur Ari Palsson, Steven Senger and Audie Warren for helpful discussions. 99 G. Elekes, M. Nathanson and I. Ruzsa, 'Convexity and sumsets', *J. Number Theory.* 83 (1999), 194-201. H. L. Fleischmann, S. V. Konyagin, S. J. Miller, E. A. Palsson, E. Pesikoff and C. 
Wolf 'Distinct angles in general position', *Discrete Math.* 346 (2023), no. 4, Paper No. 113282, 4pp. B. Hanson, O. Roche-Newton and M. Rudnev, 'Higher convexity and iterated sum sets', *Combinatorica* 42 (2022), no. 1, 71-85. O. Roche-Newton and A. Warren, 'A convex set with a rich difference', *Acta Math. Hungar.* 168 (2022), no. 2, 587-592. J. Pach and M. Sharir, 'Repeated angles in the plane and related problems', *J. Combin. Theory Ser. A* 59 (1992), no. 1, 12-22. I. Z. Ruzsa and D. Zhelezov, 'Convex sequences may have thin additive bases', *Mosc. J. Comb. Number Theory* 8 (2019), no. 1, 43-46. T. Schoen, 'On Convolutions of Convex Sets and Related Problems', *Canad. Math. Bull.* 57 (2014), no. 4, 877-883. T. Schoen and I. Shkredov, 'On sumsets of convex sets', *Combin. Probab. Comput.* 20 (2011), no. 5, 793-798. S. Stevens and A. Warren, 'On sum sets and convex functions', *Electron. J. Combin.* 29 (2022), no. 2, Paper No. 2.18, 19 pp. [^1]: Here and throughout this paper, the notation $\gg$ is used to absorb a multiplicative constant. That is $X \gg Y$ denotes that there exists an absolute constant $C>0$ such that $X \geq CY$. The notation $Y \ll X$ has the same meaning.
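As a small computational complement to Theorem [Theorem 1](#thm:main4){reference-type="ref" reference="thm:main4"} above, the matching from its proof can be checked on a sample convex set: pairing $a_i$ with $a_{n/2+i}$ indeed produces a convex set of sums. The helper `is_convex` and the sample set are illustrative assumptions, not part of the paper.

```python
def is_convex(s):
    # consecutive differences of a sorted finite set are strictly increasing
    d = [b - a for a, b in zip(s, s[1:])]
    return all(y > x for x, y in zip(d, d[1:]))

n = 50
A = [j**2 + j**3 for j in range(1, n + 1)]            # convex: gaps 3j^2 + 5j + 2 increase
sums = [A[i] + A[n // 2 + i] for i in range(n // 2)]  # the matching (a_i, a_{n/2 + i})
print(is_convex(A), is_convex(sums))                  # expect: True True
```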
{ "id": "2309.07527", "title": "Large Convex sets in Difference sets", "authors": "Krishnendu Bhowmick, Ben Lund, Oliver Roche-Newton", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
abstract: |
  For each countable residually finite group $G$, we present examples of irregular Toeplitz subshifts in $\{0,1\}^G$ that are topo-isomorphic extensions of their maximal equicontinuous factors. To achieve this, we first establish sufficient conditions for Toeplitz subshifts to have invariant probability measures arising as limit points of periodic invariant measures of $\{0,1\}^G$. Next, we demonstrate that the set of Toeplitz subshifts satisfying these conditions is non-empty. When the acting group $G$ is amenable, this construction provides non-regular extensions of totally disconnected metric compactifications of $G$ that are (Weyl) mean-equicontinuous dynamical systems.
address: |
  Facultad de Matemáticas, Pontificia Universidad Católica de Chile\
  Santiago, Chile.
author:
- Jaime Gómez
title: Topo-isomorphisms of Irregular Toeplitz subshifts for residually finite groups
---

# Introduction {#sec:intro}

Mean-equicontinuous systems were defined in [@LiTuYe15] for $\mathbb{Z}$-actions, and studied in the case of amenable groups in [@FuGrLe22] and [@LaSt18]. In the context of amenable groups, mean-equicontinuity measures how closely a system resembles an equicontinuous one with respect to certain Weyl pseudometrics associated with Følner sequences. In this case, mean-equicontinuity is equivalent to being a topo-isomorphic extension of the maximal equicontinuous factor, as proven in [@DoGl16] for actions on $\mathbb{Z}$ in the minimal case, and later extended to actions of amenable groups in [@FuGrLe22].

Toeplitz subshifts were originally defined in [@JaKe69] and have been extensively studied due to their properties related to entropy, the representation of Choquet simplices, and other topics (see for instance [@Wi84], [@Do05], [@GJ00]). In [@DoLa98], Downarowicz and Lacroix characterized Toeplitz subshifts as minimal symbolic almost $1$-$1$ extensions of odometers, which in turn implies that the maximal equicontinuous factors of these systems are odometers. Within this class of subshifts, there exist regular Toeplitz subshifts, which were studied in [@Wi84] and provide examples of topo-isomorphic extensions of their respective maximal equicontinuous factors. In subsequent years, Toeplitz subshifts were defined for actions of more general groups and characterized as minimal symbolic almost $1$-$1$ extensions of $G$-odometers (see [@Co06], [@CoPe08], [@Kr10]). The notion of regular Toeplitz subshift was also extended (see [@LaSt18] for amenable groups and [@CeCoGo23] for general residually finite groups). Regular Toeplitz subshifts are examples of topo-isomorphic extensions of their maximal equicontinuous factors, as we mention in Proposition [Proposition 6](#regular-topo){reference-type="ref" reference="regular-topo"}.

Despite the existence of examples of Toeplitz subshifts that are topo-isomorphic to their maximal equicontinuous factors, these examples have so far been limited to the regular ones. The main challenge in finding new examples lies in the search for uniquely ergodic Toeplitz subshifts that are not regular. Downarowicz and Lacroix constructed an irregular example of this kind for $\mathbb{Z}$-actions in [@DoKa15]. When the acting group is not $\mathbb{Z}$, an additional challenge is the lack of a well-behaved decomposition of some Toeplitz subshifts, as noted in [@Wi84] and further developed in [@Do05]. This naturally leads us to the question of whether similar examples exist when the acting group is not necessarily $\mathbb{Z}$.
In this document, we construct irregular Toeplitz subshifts that are topo-isomorphic extensions of their maximal equicontinuous factors. This construction is carried out for residually finite groups, with no amenability assumption on the group. This work also complements the findings in [@FuGrLe22], as it provides new examples of mean-equicontinuous systems over amenable residually finite groups beyond $\mathbb{Z}^d$.

To construct these examples, we establish sufficient conditions for a Toeplitz subshift in $\Sigma^G$ to have an invariant probability measure which is a limit point of a sequence of periodic invariant probability measures over $\Sigma^G$, as shown in Proposition [Proposition 11](#Supp-mu){reference-type="ref" reference="Supp-mu"}. Subsequently, we construct an irregular Toeplitz array satisfying the hypotheses of Proposition [Proposition 11](#Supp-mu){reference-type="ref" reference="Supp-mu"}, leading to an associated subshift which is a topo-isomorphic extension of a totally disconnected metric compactification. This result is stated more precisely as Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"}, which is the main theorem of this document.

**Theorem 1**. *Let $G$ be a countable, residually finite group and let $\overleftarrow{G}$ be a totally disconnected metric compactification of $G$. Then, there exists an *irregular* Toeplitz element $\eta\in\{0,1\}^G$ such that the maximal equicontinuous factor of $\overline{O_\sigma(\eta)}$ is $\overleftarrow{G}$, and $\overline{O_\sigma(\eta)}$ is a topo-isomorphic extension of $\overleftarrow{G}$.*

Corollary [Corollary 2](#coro:main2){reference-type="ref" reference="coro:main2"} follows immediately from Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"} and [@FuGrLe22 Theorem 1.1].

**Corollary 2**. *Let $G$ be an amenable countable residually finite group and let $\overleftarrow{G}$ be a totally disconnected metric compactification of $G$. Then, there exists an irregular mean-equicontinuous Toeplitz subshift whose maximal equicontinuous factor is $\overleftarrow{G}$.*

The structure of this document is as follows: In Section [2](#sec:basicdef){reference-type="ref" reference="sec:basicdef"}, we provide basic facts related to topological dynamical systems and measure theory, along with a brief introduction to residually finite groups, $G$-odometers, and Toeplitz subshifts. In Section [3](#Sectio-3){reference-type="ref" reference="Sectio-3"}, we introduce a new representation of the set of periods of some Toeplitz arrays, which is essential for the proof of Proposition [Proposition 11](#Supp-mu){reference-type="ref" reference="Supp-mu"} at the end of that Section. This proposition guarantees that the irregular Toeplitz array $\eta$ defined in Section [4](#sec:irregular-construction){reference-type="ref" reference="sec:irregular-construction"} generates a Toeplitz subshift having at least one invariant probability measure. In the last part of Section [4](#sec:irregular-construction){reference-type="ref" reference="sec:irregular-construction"}, we define a map on the set of invariant measures of the Toeplitz subshift constructed in Section [4](#sec:irregular-construction){reference-type="ref" reference="sec:irregular-construction"} whose image is a one-point set, as stated in Proposition [Proposition 21](#at-least){reference-type="ref" reference="at-least"}.
This map plays a crucial role in the proof of Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"}, which is presented in Section [5](#sec:Topo){reference-type="ref" reference="sec:Topo"}, where we show that there exists an isomorphism between the previously constructed Toeplitz subshift and its maximal equicontinuous factor and, finally, we prove that the subshift is uniquely ergodic.

# Preliminaries {#sec:basicdef}

## Topological dynamical systems {#subsec:defrfgroups}

In this article, $G$ always refers to a countable discrete infinite group, where $1_G$ denotes the neutral element of $G$. We define a *topological dynamical system* (or *dynamical system*) $(X,\sigma, G)$ as a system where $\sigma$ is a continuous left action of $G$ on a compact metric space $X$. The dynamical system $(X,\sigma, G)$ is *minimal* if for every $x\in X$, its orbit $O_\sigma(x)=\{\sigma^gx: g\in G\}$ is dense in $X$. The dynamical system is *equicontinuous* if the collection of maps $\{\sigma^g\}_{g\in G}$ is equicontinuous. A topological dynamical system $(X,\sigma,G)$ is called *free* if $\sigma^gx=x$ implies $g=1_G$ for every $x\in X$.

An *invariant measure* of the topological dynamical system $(X,\sigma,G)$ is a probability measure $\mu$ defined on the Borel sigma-algebra of $X$, satisfying the condition that for every $g\in G$ and every Borel set $A\subseteq X$, we have $\mu(A)=\mu(\sigma^g A)$. A Borel set $A\subseteq X$ is called *invariant* if $\sigma^g A=A$ for every $g\in G$. An invariant measure $\mu$ is said to be *ergodic* if $\mu(A)\in\{0,1\}$ for every invariant set $A\subseteq X$. The set of invariant measures of $(X,\sigma,G)$ is denoted $M_G(X)$; it is a metrizable Choquet simplex whose extreme points are the ergodic measures of $(X,\sigma, G)$ (see for instance [@CoPe14]). When the system has a unique invariant measure, it is called *uniquely ergodic*. Using [@Au88 Theorem 3.6] and [@BeOh07 Proposition 8.1], one can prove that every equicontinuous minimal dynamical system is uniquely ergodic. If $\mu$ is an invariant measure of $(X,\sigma,G)$, then $(X,\sigma,G,\mu)$ is referred to as a *probability measure-preserving* dynamical system.

Let $(X,\sigma, G)$ and $(Y,\phi,G)$ be two topological dynamical systems. A continuous surjective map $\pi: X\to Y$ is called a *factor map* if $\pi(\sigma^g x)=\phi^g\pi(x)$, for every $x\in X$ and every $g\in G$. In this case, we say that $(X,\sigma, G)$ is an *extension* of $(Y,\phi,G)$ and $(Y,\phi, G)$ is a *factor* of $(X,\sigma,G)$. Given a factor map $\pi$ from $(X,\sigma,G)$ to $(Y,\phi,G)$ and an invariant measure $\mu$ of $(X,\sigma,G)$, there exists an invariant measure of $(Y,\phi,G)$ associated to $\mu$, called the *pushforward measure* of $\mu$ by $\pi$. It is defined as $\pi(\mu)(B)=\mu(\pi^{-1}(B))$ for every Borel set $B\subseteq Y$.

A factor map $\pi$ between $(X,\sigma, G)$ and $(Y,\phi, G)$ is said to be *almost one to one* (or *almost $1$-$1$*) if the set $\{y\in Y: |\pi^{-1}(\{y\})|=1\}\subseteq Y$ is residual. We say that $(X,\sigma,G)$ is an *almost 1-1 extension* of $(Y,\phi,G)$. Recall that when the system $(Y,\phi, G)$ is minimal, being *almost one to one* is equivalent to the existence of $y\in Y$ such that $|\pi^{-1}\{y\}|=1$ (see [@Ve70]).
An equicontinuous system $(Y,\phi,G)$ is the *maximal equicontinuous factor* of a system $(X,\sigma,G)$ if there is a factor map $\pi: X\to Y$ such that for every other factor map $f: X\to Y'$, where $(Y',\phi',G)$ is an equicontinuous system, there exists a factor map $\overline{f}: Y\to Y'$ such that $f=\overline{f}\circ \pi$. It is well known that if $(X,\sigma,G)$ is a minimal almost $1$-$1$ extension of a minimal equicontinuous system $(Y,\phi,G)$, then $(Y,\phi,G)$ is the maximal equicontinuous factor of $(X,\sigma,G)$ (see [@Kr10]). Two probability measure-preserving dynamical systems $(X,\sigma,G,\mu)$ and $(Y,\phi,G,\nu)$ are *measure conjugate* if there exists a factor map $h: X\to Y$, and invariant conull sets $X'\subseteq X$ and $Y'\subseteq Y$ such that $h|_{X'}: X'\to Y'$ is a bimeasurable bijection, and $\nu=h|_{X'}(\mu)$. We say that $h|_{X'}$ is a *measure conjugacy $(\bmod\; 0)$* and $h$ is called a *measure conjugacy of $\mu$*. A dynamical system $(X,\sigma,G)$ is a *topo-isomorphic extension* of $(Y,\phi,G)$ if there is a factor map $h: X\to Y$ such that $h$ is a measure conjugacy of $\mu$ for every $\mu$ invariant measure of $(X,\sigma, G)$. In this case, $h$ is called a *topo-isomorphism*. A group $G$ is said to be *amenable* if there exists a sequence $(F_n)_{n\in\mathbb{N}}$ of non-empty finite subsets of $G$, called a *(left) Følner sequence*, verifying $$\begin{aligned} \lim_{n\to\infty} \dfrac{|gF_n\triangle F_n|}{|F_n|}=0\; \mbox{ for all }\; g\in G,\end{aligned}$$ where $|\cdot|$ denotes the cardinal of a set and $\triangle$ represents the symmetric difference. This definition of amenable group is equivalent to the existence of invariant probability measures for every topological dynamical system $(X,\phi,G)$ (See [@CC10] for a more detailed presentation). If $G$ is an amenable group, a dynamical system $(X,\sigma,G)$ is *mean-equicontinuous* if for all $\varepsilon>0$ there exists $\delta_\varepsilon>0$ such that for every $x,z\in X$ with $d(x,z)<\delta_\varepsilon$ we have $$\begin{aligned} \sup\{\limsup_{n\to\infty}\dfrac{1}{|F_n|}\sum_{g\in F_n} d(\sigma^g x,\sigma^g z):\mathcal{F}=(F_n)_{n\in\mathbb{N}}\mbox{ is a F\o lner sequence}\}<\varepsilon,\end{aligned}$$ where $d$ denotes a metric of $X$. The following Theorem states the relation between mean-equicontinuity and topo-isomorphism. This Theorem holds in a broader context where $G$ is not necessarily a countable discrete group. **Theorem 3** ([@FuGrLe22 Theorem 1.1]). *Let $G$ be a locally compact sigma-compact amenable group. A topological dynamical system $(X,\sigma, G)$ is mean-equicontinuous if and only if it is a topo-isomorphic extension of its maximal equicontinuous factor.* ## $G$-odometers and residually finite groups In this subsection, we provide definitions and basic properties of $G$-odometers and residually finite groups. For a more comprehensive presentation of these topics, we refer to [@CoPe08] and [@Kr10]. We say that $G$ is *residually finite*, if there exists a decreasing sequence $(\Gamma_n)_{n\in\mathbb{N}}$ of finite index subgroups of $G$ with trivial intersection. We can assume that $\Gamma_n$ is normal for every $n\in\mathbb{N}$ (see [@CC10]). Let $G$ be an infinite residually finite group and let $(\Gamma_n)_{n\in\mathbb{N }}$ be a decreasing sequence of finite index subgroups of $G$ with trivial intersection. 
We define the *$G$-odometer associated to $(\Gamma_n)_{n\in\mathbb{N}}$*, denoted as $\overleftarrow{G}$, as the inverse limit of the inverse system given by $(G/\Gamma_n,\varphi_n)$, where $\varphi_n\colon G/\Gamma_{n+1}\to G/\Gamma_n$ is the canonical map given by $\Gamma_{n+1}\subseteq \Gamma_n$, i.e., $$\begin{aligned} \overleftarrow{G}:=\varprojlim(G/\Gamma_n,\varphi_n)=\{(x_n)_{n\in\mathbb{N}}\in\prod_{n\in\mathbb{N}} G/\Gamma_n\mid x_n=\varphi_n(x_{n+1}) \mbox{ for every }n\in \mathbb{N}\}.\end{aligned}$$ $\overleftarrow{G}$ has the induced topology of $\prod_{n\in\mathbb{N}}G/\Gamma_n$, where each $G/\Gamma_n$ is endowed with the discrete topology. In this scenario, $\overleftarrow{G}$ is a Cantor set. There is also a canonical continuous action $\phi$ of $G$ on $\overleftarrow{G}$ given by the left coordinate-wise multiplication. The dynamical system $(\overleftarrow{G},\phi,G)$ is an equicontinuous, minimal Cantor system, and hence, uniquely ergodic with measure $\nu$. When every $\Gamma_n$ is normal, $\overleftarrow{G}$ is a subgroup of $\prod_{n\in\mathbb{N}}G/\Gamma_n$, and $G$ is identified with a dense subgroup of $\overleftarrow{G}$. In this case, the left Haar measure $\nu$ is the unique invariant measure of the system. A *totally disconnected metric compactification of $G$* is a totally disconnected compact group for which an injective homomorphism $\iota: G\to\overleftarrow{G}$ exists and its image is dense in $\overleftarrow{G}$. Note that every $G$-odometer having a group structure is a totally disconnected metric compactification of $G$, and conversely, every totally disconnected metric compactification of $G$ is a $G$-odometer (see [@CeCoGo23 Lemma 2.1]). ## Toeplitz subshifts {#subsec:deftoeplitzodometers} The definitions and results written in this subsection can be found in [@CoPe08]. Let $\Sigma$ be a finite set with $|\Sigma|\geq 2$. The set $$\Sigma^G=\{x=(x(g))_{g\in G}: x(g)\in \Sigma, \mbox{ for every } g\in G\}$$ is a Cantor set if $\Sigma$ has the discrete topology and $\Sigma^G$ is endowed with the product topology. We can associate with $\Sigma^G$ the continuous action known as *(left) shift action* $\sigma$ of $G$ given by $$\begin{aligned} \sigma^g x(h)=x(g^{-1}h)\mbox{ for every }h,g\in G\mbox{ and }x\in \Sigma^G. \end{aligned}$$ The topological dynamical system $(\Sigma^G, \sigma, G)$ is known as a *full G-shift*. A subset $X\subseteq \Sigma^G$ is a *subshift* of $\Sigma^G$ if it is closed and invariant. The topological dynamical system $(X, \sigma|_X,G)$ is also called a *subshift*. Let $x\in \Sigma^G$ and $\Gamma\subseteq G$ be a subgroup of finite index. We define $$\begin{aligned} {\rm Per}(x,\Gamma,\alpha)&=\{g\in G: x(\gamma g)=\alpha \mbox{ for every }\gamma\in\Gamma\}, \mbox{ for every } \alpha\in \Sigma.\\ {\rm Per}(x,\Gamma)&=\bigcup_{\alpha\in \Sigma}{\rm Per}(x,\Gamma,\alpha).\end{aligned}$$ An element $\eta\in \Sigma^G$ is called a *Toeplitz array* if for every $g\in G$ there exists a finite index subgroup $\Gamma$ of $G$ such that $g\in {\rm Per}(\eta,\Gamma)$. The finite index subgroup $\Gamma$ is a *group of periods* of $\eta$ if ${\rm Per}(\eta,\Gamma)\neq \emptyset$. A group of periods $\Gamma$ of $\eta$ is an *essential group of periods* of $\eta$ if ${\rm Per}(\eta,\Gamma,\alpha)\subseteq g{\rm Per}(\eta,g^{-1}\Gamma g,\alpha)={\rm Per}(\sigma^g\eta,\Gamma,\alpha)$ for every $\alpha\in \Sigma$ implies $g\in \Gamma$. Let $\eta\in \Sigma^G$ be a Toeplitz array. 
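For orientation, here is a small illustration of these definitions; it is not taken from the references above and it assumes $G=\mathbb{Z}$ (written additively) with $\Gamma_n=2^n\mathbb{Z}$. Consider $x\in\{0,1\}^{\mathbb{Z}}$ given by $x(k)=1$ for $k$ even, $x(k)=0$ for $k\equiv 1\pmod 4$ and $x(k)=1$ for $k\equiv 3\pmod 4$. Along a coset $g+2\mathbb{Z}$ with $g$ even, $x$ is constantly $1$, whereas along an odd coset both symbols occur; hence ${\rm Per}(x,2\mathbb{Z})={\rm Per}(x,2\mathbb{Z},1)=2\mathbb{Z}$. Passing to $\Gamma_2=4\mathbb{Z}$, the value of $x$ is constant on every coset $g+4\mathbb{Z}$, so ${\rm Per}(x,4\mathbb{Z})=\mathbb{Z}$ and every position is periodic; thus $x$ is a (in this case $4$-periodic, hence trivially Toeplitz) Toeplitz array. The interesting Toeplitz arrays below are the non-periodic ones, for which each position becomes periodic only at some finite level of the sequence $(\Gamma_n)_{n\in\mathbb{N}}$.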
It is possible to construct a *period structure* $(\Gamma_n)_{n\in\mathbb{N}}$ of $\eta$, i.e., a decreasing sequence $(\Gamma_n)_{n\in\mathbb{N}}$ of finite index subgroups of $G$ such that $\Gamma_n$ is an essential group of periods of $\eta$ for every $n\in\mathbb{N}$ and $G=\bigcup_{n\in\mathbb{N}}{\rm Per}(\eta,\Gamma_n)$ (see [@CoPe08 Corollary 6]). A subshift $X$ is called a *Toeplitz subshift* if there exists a Toeplitz array $\eta\in\Sigma^G$ such that $X=\overline{O_\sigma(\eta)}$. Recall that these dynamical systems are minimal (see [@CoPe08]).

Let $\eta\in \Sigma^G$ be a Toeplitz array, and let $(\Gamma_n)_{n\in\mathbb{N}}$ be a period structure of $\eta$. Let $\overline{O_{\sigma}(\eta)}$ be its associated Toeplitz subshift. For each $n\in\mathbb{N}$, we define $$C_n=\{ x\in \overline{O_\sigma(\eta)}: {\rm Per}(x,\Gamma_n, \alpha)={\rm Per}(\eta, \Gamma_n, \alpha), \mbox{ for every } \alpha\in \Sigma\}.$$ Lemma 2.5 in [@CeCoGo23] provides that for every $n\in \mathbb{N}$ and for all $g,h\in G$, we have that $\sigma^g C_n=\sigma^h C_n$ if and only if $g\Gamma_n=h\Gamma_n$, and consequently, $\{\sigma^{g^{-1}} C_n:g\in D_n\}$ is a clopen partition of $\overline{O_\sigma(\eta)}$, where $D_n$ denotes a fundamental domain of $G/\Gamma_n$ (as in Proposition [Proposition 5](#decom){reference-type="ref" reference="decom"} below). The stabilizer of $x\in \overline{O_\sigma(\eta)}$ is given by $\bigcap_{n\in\mathbb{N}} v_n\Gamma_n v_n^{-1}$, where $v_n\in G$ is such that $x\in \sigma^{v_n}C_n$. When $\Gamma_n$ is normal for every $n\in\mathbb{N}$, the Toeplitz subshift $(\overline{O_\sigma(\eta)},\sigma, G)$ is free if and only if $\bigcap_{n\in\mathbb{N}}\Gamma_n=\{1_G\}$. The relationship between Toeplitz subshifts and countable residually finite groups is given by the following statement: there exists a free Toeplitz subshift or a free $G$-odometer if and only if $G$ is residually finite (see for example [@CoPe08] and [@Kr10]).

From now on, let $G$ denote a countable discrete infinite residually finite group, and let $(\Gamma_n)_{n\in\mathbb{N}}$ denote a decreasing sequence of normal subgroups of finite index of $G$ with trivial intersection. The relation between Toeplitz subshifts and $G$-odometers is described in the following Proposition.

**Proposition 4** ([@CoPe08]). *Let $\eta\in\Sigma^G$ be a Toeplitz array and let $\overline{O_\sigma(\eta)}$ be the Toeplitz subshift associated to $\eta$. Suppose that $(\Gamma_n)_{n\in\mathbb{N}}$ is a period structure of $\eta$. Let $\overleftarrow{G}$ be the $G$-odometer associated to $(\Gamma_n)_{n\in\mathbb{N}}$. Then, the map $\pi:\overline{O_\sigma(\eta)}\to \overleftarrow{G}$ defined by $\pi(x)=(g_n\Gamma_n)_{n\in\mathbb{N}}$, where $x\in\sigma^{g_n}C_n$ for every $n\in\mathbb{N}$, is an almost $1$-$1$ factor map. Consequently, $\overleftarrow{G}$ is the maximal equicontinuous factor of $\overline{O_\sigma(\eta)}$. Moreover, $$\begin{aligned}
\mathcal{T}=\{x\in\overline{O_\sigma(\eta)}: x\mbox{ is a Toeplitz array}\}=\pi^{-1}\{y\in\overleftarrow{G}: |\pi^{-1}\{y\}|=1\}.
\end{aligned}$$*

The following Proposition can be considered as the non-amenable version of [@CoPe14 Lemma 5], and it plays an important role in the development of this paper.

**Proposition 5** ([@CeCoGo23 Lemma 2.8]). *There exist an increasing sequence $(t_i)_{i\in \mathbb{N}}\subseteq \mathbb{N}$ and a sequence $(D_i)_{i\in \mathbb{N}}$ of finite subsets of $G$ such that for every $i\in \mathbb{N}$,*

1. *$\{1_G\}\subseteq D_i\subseteq D_{i+1}$.*

2. *$D_i$ is a fundamental domain of $G/\Gamma_{t_i}$, i.e., $D_i$ contains a unique element of every coset in $G/\Gamma_{t_i}$.*

3. *$G=\bigcup_{i=1}^\infty D_i$.*

4. 
*$D_j=\bigcup_{v\in D_j\cap \Gamma_{t_i}} vD_i$, for each $j>i\geq 1$.* ## Regular Toeplitz subshifts. {#Regular-T} See [@CeCoGo23 Section 3] for a further development of this subsection. Let $\Sigma$ be a finite set with $|\Sigma|\geq 2$. Let $\eta\in\Sigma^G$ be a Toeplitz array such that $(\Gamma_n)_{n\in\mathbb{N}}$ is a period structure of $\eta$. Recall that the Toeplitz subshift $\overline{O_{\sigma}(\eta)}$ is an almost 1-1 extension of the $G$-odometer $\overleftarrow{G}$ associated with $(\Gamma_n)_{n\in\mathbb{N}}$, and $\mathcal{T}$ denotes the set of Toeplitz arrays in $\overline{O_\sigma(\eta)}$. Let $(D_n)_{n\in\mathbb{N}}$ be a sequence as in Proposition [Proposition 5](#decom){reference-type="ref" reference="decom"}, and if necessary, by taking a subsequence of $(\Gamma_i)_{n\in\mathbb{N}}$, we can suppose that $t_i=i$. The sequence $(d_i)_{i\in\mathbb{N}}$, given by $d_i=\dfrac{|D_i\cap {\rm Per}(\eta,\Gamma_i)|}{|D_i|}$, is increasing and convergent. We denote $d=\lim_{i\to\infty}d_i$. The Toeplitz array $\eta$ is said to be *regular* if any of the following equivalent statements is true: 1. $\nu(\pi(\mathcal{T}))=1$. 2. There exists an invariant probability measure $\mu$ of $(\overline{O_\sigma(\eta)}, \sigma, G)$ such that $\mu(\mathcal{T})=1$. 3. There exists a unique invariant probability measure $\mu$ of $(\overline{O_\sigma(\eta)},\sigma, G)$ such that $\mu(\mathcal{T})=1$. 4. $d=1$. If $\eta$ does not satisfy any of the previous statements, we call it *irregular* or *non-regular*. As an immediate consequence of the above discussion, we get the following Proposition. **Proposition 6**. *Every free regular Toeplitz subshift is a topo-isomorphic extension of its maximal equicontinuous factor.* # Toeplitz subshifts with invariant probability measures {#Sectio-3} Let $\Sigma$ be a finite set such that $|\Sigma|\geq 2$ and let $(\Gamma_n)_{n\in\mathbb{N}}$ be a decreasing sequence of normal finite index subgroups of $G$ with $\bigcap_{i\in\mathbb{N}}\Gamma_i=\{1_G\}$. Let $(D_n)_{n\in\mathbb{N}}$ be an increasing sequence as in Proposition [Proposition 5](#decom){reference-type="ref" reference="decom"} with $(t_n)_{n\in\mathbb{N}}$ as (possibly after taking a subsequence) $t_n=n$ for every $n\in\mathbb{N}$. For every $n\in\mathbb{N}$, define $$\begin{aligned} \label{J(n)} J(n)=D_n\setminus\bigcup_{i=0}^{n-1} J(i)\Gamma_{i+1},\end{aligned}$$ where $J(0)=\{1_G\}$. Observe that $D_n\subseteq \bigcup_{i=0}^{n}J(i)\Gamma_{i+1}$. **Proposition 7**. *For every $n\in\mathbb{N}$, we have $$\begin{aligned} J(n+1)=\bigcup_{\gamma\in(D_{n+1}\cap \Gamma_n)\setminus \{1_G\}}\gamma J(n). \end{aligned}$$* *Proof.* If $u\in D_{n+1}$, then $u=\gamma v$ for some $\gamma\in D_{n+1}\cap \Gamma_n$ and $v\in D_n$. If $u\in J(n+1)$, using ([\[J(n)\]](#J(n)){reference-type="ref" reference="J(n)"}), we obtain that $\gamma\neq 1_G$. Additionally, $v\notin \bigcup_{i=0}^{n-1} J(i)\Gamma_{i+1}$. Otherwise, by ([\[J(n)\]](#J(n)){reference-type="ref" reference="J(n)"}) and normality of $\gamma_n$, we would have that $\gamma v\in \bigcup_{i=0}^{n} J(i)\Gamma_{i+1}$, which is not possible. Now, suppose that $v\in J(n)$ and $\gamma\in (D_{n+1}\cap\Gamma_n)\setminus\{1_G\}$. We will show that $\gamma v\in J(n+1)$. If $\gamma v\in \bigcup_{i=0}^n J(i)\Gamma_{i+1}$, we obtain that $\gamma v\in J(n)\Gamma_{n+1}$. Therefore, we can conclude that $\gamma\in\Gamma_{n+1}\cap D_{n+1}=\{1_G\}$, a contradiction. Thus, the proposition is proven. ◻ **Proposition 8**. 
*Let $\eta\in\Sigma^G$ be a Toeplitz sequence with period structure $(\Gamma_n)_{n\in\mathbb{N}}$. Then, $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$ for every $n\in\mathbb{N},$ if and only if $$\begin{aligned} \label{T1} {\rm Per}(\eta,\Gamma_n)=\bigcup_{i=0}^{n-1} J(i)\Gamma_{i+1} \mbox{ for every }n\in\mathbb{N}. \end{aligned}$$* *Proof.* Assume $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus {\rm Per}(\eta,\Gamma_n)$ for every $n\in\mathbb{N}$. We use induction on $n$: $n=0$. Note that $\Gamma_1\subseteq {\rm Per}(\eta,\Gamma_{1})$. If there exists $g\in {\rm Per}(\eta,\Gamma_1)\setminus \Gamma_1$, we can find $d\in D_1$ and $\gamma\in\Gamma_1$ such that $g=d\gamma$. Since $g\in{\rm Per}(\eta,\Gamma_1)$, we obtain that $d\in{\rm Per}(\eta,\Gamma_1)$. On the other hand, $d\in D_1\setminus\Gamma_1=J(1)\subseteq {\rm Per}(\eta,\Gamma_2)\setminus {\rm Per}(\eta,\Gamma_1)$, a contradiction. Then, ${\rm Per}(\eta,\Gamma_1)=\Gamma_1$. Suppose that ([\[T1\]](#T1){reference-type="ref" reference="T1"}) is true for $n=k$, we will prove it for $n=k+1$. By hypothesis of induction, we have that $$\begin{aligned} \bigcup_{i=0}^{k-1} J(i)\Gamma_{i+1}={\rm Per}(\eta,\Gamma_k)\subseteq{\rm Per}(\eta,\Gamma_{k+1}).\end{aligned}$$ Moreover, $J(k)\Gamma_{k+1}\subseteq{\rm Per}(\eta,\Gamma_{k+1})$. Therefore, $\bigcup_{i=0}^{k}J(i)\Gamma_{i+1}\subseteq {\rm Per}(\eta,\Gamma_{k+1}).$ Suppose that there exists $g\in{\rm Per}(\eta,\Gamma_{k+1})\setminus\bigcup_{i=0}^{k}J(i)\Gamma_{i+1}$. We know that $g=d\gamma$, for some $d\in D_{k+1}$ and $\gamma\in \Gamma_{k+1}$ Thus, $d\in {\rm Per}(\eta,\Gamma_{k+1})\cap J(k+1)$, which is not possible. Now, assume that ([\[T1\]](#T1){reference-type="ref" reference="T1"}) is true. For every $n\in\mathbb{N}$ observe that $$\begin{aligned} {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)=\left(\bigcup_{i=0}^n J(i)\Gamma_{i+1}\right)\setminus\left( \bigcup_{i=0}^{n-1} J(i)\Gamma_{i+1}\right)=J(n)\Gamma_{n+1}.\end{aligned}$$ Therefore, $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$. ◻ **Lemma 9** ([@CeCoGo23 Lemma 4.7]). *Let $n\in\mathbb{N}$. For every $m\geq n+2$ there exists $$\label{good-relation} \gamma\in (\Gamma_{n+1}\cap D_m)\setminus (D_{n+1}\Gamma_{n+2} \cup \cdots \cup D_{m-1}\Gamma_m).$$ Moreover, $$\left|(\Gamma_{n+1}\cap D_m)\setminus (D_{n+1}\Gamma_{n+2} \cup \cdots \cup D_{m-1}\Gamma_m)\right|\geq \frac{|D_{m}|}{|D_{n+1}|}\prod_{l=1}^{m-n-1}\left(1-\frac{|D_{n+l}|}{|D_{n+l+1}|} \right).$$* *Furthermore, if $\gamma$ satisfies ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}), then $$\gamma D_{n+1}\subseteq D_m\setminus (D_{n+1}\Gamma_{n+2} \cup \cdots \cup D_{m-1}\Gamma_m).$$* For every $n\in\mathbb{N}$, we define $\eta_n\in\Sigma^G$ as $\eta_n(\gamma d)=\eta(d) \mbox{ for every }\gamma\in\Gamma_n, d\in D_n.$ Note that $\sigma^{\gamma}\eta_n=\eta_n$ for every $\gamma\in\Gamma_n$. Therefore, $O_\sigma(\eta_n)=\{\sigma^{d^{-1}}\eta_n\mid d\in D_n\}$. We define a $\sigma$-invariant measure over $\Sigma^G$, associated with $\eta_n$, as follows $$\begin{aligned} \mu_n=\dfrac{1}{|D_n|}\sum_{d\in D_n}\delta_{\sigma^{d^{-1}}\eta_n},\end{aligned}$$ where $\delta_x$ denotes the Dirac measure supported in $x$. 
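To make the periodic approximations $\eta_n$ and the measures $\mu_n$ concrete, the following minimal sketch (not part of the paper) assumes $G=\mathbb{Z}$, $\Gamma_n=2^n\mathbb{Z}$ and $D_n=\{0,\dots,2^n-1\}$, and uses an arbitrary toy word in place of $\eta|_{D_n}$; it evaluates $\mu_n$ on a cylinder set.

```python
# A minimal sketch, not from the paper.  Assumptions: G = Z (additive),
# Gamma_n = 2^n Z, D_n = {0, ..., 2^n - 1}; `word` is a placeholder for the
# restriction of a Toeplitz array eta to D_n, not the array built in Section 4.
from fractions import Fraction

def eta_n(word):
    """The |word|-periodic array eta_n : Z -> {0,1}, eta_n(gamma + d) = eta(d)."""
    p = len(word)                       # p = |D_n| = [G : Gamma_n]
    return lambda k: word[k % p]

def mu_n_of_cylinder(word, pattern):
    """mu_n of the cylinder {x : x(h) = pattern[h] for all h}, where
    mu_n = |D_n|^{-1} * sum_{d in D_n} delta_{sigma^{-d} eta_n}."""
    p, x = len(word), eta_n(word)
    hits = 0
    for d in range(p):                  # one orbit point per d in D_n
        # since sigma^g x(h) = x(h - g), the point sigma^{-d} eta_n sends h to eta_n(h + d)
        if all(x(h + d) == s for h, s in pattern.items()):
            hits += 1
    return Fraction(hits, p)

word3 = [1, 0, 1, 1, 1, 0, 1, 0]                  # toy values of eta on D_3
print(mu_n_of_cylinder(word3, {0: 1}))            # frequency of the symbol 1, here 5/8
print(mu_n_of_cylinder(word3, {0: 1, 1: 0}))      # frequency of the pattern 1,0
```

In this setting, the $\mu_n$-measure of a cylinder is simply the frequency of the corresponding pattern within one period of $\eta_n$, which is essentially the form in which these measures enter the proof of Proposition [Proposition 11](#Supp-mu){reference-type="ref" reference="Supp-mu"}.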
We define the set $U_n$ as $$\label{definition}
U_n:=\{x\in \Sigma^G: x(D_{n+1})=\eta_n(D_{n+1})\}.$$ The following Lemma can be regarded as a generalization of [@CeCoGo23 Lemma 4.9], since the Toeplitz array constructed in [@CeCoGo23 Section 4] satisfies ${J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus {\rm Per}(\eta,\Gamma_n)}$ for every $n\in\mathbb{N}$.

**Lemma 10**. *Let $\eta\in\Sigma^G$ be a Toeplitz sequence with a period structure $(\Gamma_n)_{n\in\mathbb{N}}$ such that $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$ for every $n\in\mathbb{N}$. Let $m,n\in \mathbb{N}$ with $m>n+2$. If $\gamma_0 \in \Gamma_{n+1}\cap D_{m}$ satisfies the relation ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}) and $$\begin{aligned}
\label{Patching}
\eta(\gamma_0\gamma u)=\eta(u), \mbox{ for every }u\in J(n)\mbox{ and }\gamma\in D_{n+1}\cap\Gamma_n,\end{aligned}$$then $\sigma^{\gamma_0^{-1}}\eta\in U_{n}$. This implies that $U_{n}\cap O_{\sigma}(\eta)\neq \emptyset.$*

*Proof.* Let $\gamma_0\in \Gamma_{n+1}\cap D_m$ be an element satisfying the relation ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}). We aim to prove that $\sigma^{\gamma_0^{-1}}\eta\in U_n$. Proposition [Proposition 5](#decom){reference-type="ref" reference="decom"} implies $D_{n+1}=\bigcup_{\gamma\in D_{n+1}\cap\Gamma_n}\gamma D_n$, and ([\[T1\]](#T1){reference-type="ref" reference="T1"}) implies $D_n=(D_n\cap{\rm Per}(\eta,\Gamma_n))\cup J(n)$. Since $D_n\subseteq {\rm Per}(\eta,\Gamma_{n+1})$, we have $$\label{eq2}
\eta(\gamma_0u)=\eta(u) \mbox{ for }u\in D_n.$$ For $u\in D_n\cap{\rm Per}(\eta,\Gamma_n)$ and $\gamma\in D_{n+1}\cap \Gamma_n$, we have that $\gamma_0\gamma\in \Gamma_n$ and, as a result, $$\label{eq3}
\eta(\gamma_0\gamma u)=\eta(u).$$ Let $u\in J(n)$ and $\gamma\in D_{n+1}\cap\Gamma_n\setminus\{1_G\}$. Thus, according to Proposition [Proposition 7](#auxiliar0){reference-type="ref" reference="auxiliar0"}, we have $\gamma_0\gamma u\in J(m)$. Since $\gamma_0$ satisfies ([\[Patching\]](#Patching){reference-type="ref" reference="Patching"}), we have $$\begin{aligned}
\label{eq4}
\eta(\gamma_0\gamma u)=\eta(u). \end{aligned}$$ By combining ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}), ([\[eq3\]](#eq3){reference-type="ref" reference="eq3"}) and ([\[eq4\]](#eq4){reference-type="ref" reference="eq4"}), we deduce $$\eta(\gamma_0\gamma u)=\eta(u), \mbox{ for every }u\in D_n\mbox{ and } \gamma\in D_{n+1}\cap \Gamma_n.$$ This implies that $\sigma^{\gamma_0^{-1}}\eta(w)=\eta(u)=\eta_n(w)$ for $w\in D_{n+1}$, $u\in D_n$ and $\gamma\in D_{n+1}\cap\Gamma_n$ such that $w=\gamma u$. This implies the result. ◻

**Proposition 11**. *Let $\eta\in \Sigma^G$ be a Toeplitz sequence with a period structure $(\Gamma_n)_{n\in\mathbb{N}}$ such that $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$ for every $n\in\mathbb{N}$.
If there is an increasing sequence $\mathcal{M}=(n_k)_{k\in\mathbb{N}}\subseteq\mathbb{Z}^+$ such that for every $n_{s},n_{j}\in\mathcal{M}$, with $n_{j}>n_{s}+2$, there exists $\gamma_0\in\Gamma_{n_{s}+1}\cap D_{n_{j}}$ that satisfies ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}) and $$\begin{aligned} \eta(\gamma_0\gamma u)=\eta(u) \mbox{ for every }\gamma\in\Gamma_{n_{s}}\cap D_{n_{s}+1} \mbox{ and } u\in J(n_{s}),\end{aligned}$$ then, every limit point of $(\mu_{n_k})_{k\in\mathbb{N}}$ is supported in $\overline{O_\sigma(\eta)}$.* *Proof.* Let $\mu$ be a limit point of $(\mu_{n_k})_{k\in\mathbb{N}}$. Therefore, there exists a subsequence $\mathcal{N}=(n_{k_j})_{j\in\mathbb{N}}$ of $\mathcal{M}$ such that $(\mu_{n_{k_j}})_{j\in\mathbb{N}}$ converges to $\mu$, when $j\to\infty$. The set of cylinders of the form ${V=\{y\in\{0,1\}^G: y(s)=Q(s), s\in S\}}$, where $S$ is a finite subset of $G$ and $Q\in\{0,1\}^S$, forms a basis for the topology of $\{0,1\}^G$. If there exists a cylinder $V$ as described above with $\mu(V)>0$, then there exists $n\in\mathbb{N}$ and a cylinder $$\begin{aligned} \label{C} C=\{y\in\{0,1\}: y(v)=P(v), v\in D_n\}, P\in\{0,1\}^{D_n}\end{aligned}$$ satisfying $\mu(C)>0$. Let $C$ be a cylinder as in ([\[C\]](#C){reference-type="ref" reference="C"}) with $\mu(C)>0$. Our goal is to prove that there exists an element in $\overline{O_\sigma(\eta)}$ that belongs to $C$. Since the sequence $(\mu_{n_{k_j}})_{j\in\mathbb{N}}$ converges to $\mu$, when $j\to\infty$, there exists $j_0\in\mathbb{N}$ such that $\mu_{n_{k_l}}(C)>0$ for every $l\geq j_0$. Hence, ${O_\sigma(\eta_{n_{k_l}})\cap C\neq\emptyset}$. Choose $l\geq j_0$ such that $n_{k_l}\geq n$. There exists $u\in D_{n_{k_l}}$ such that ${\sigma^{u^{-1}}\eta_{n_{k_l}}(v)=P(v)}$ for every $v\in D_n$. Moreover, observe that $uD_n\subseteq D_{n_{k_l}}\cdot D_n$, and we can assume that ${D_{n_{k_l}}\cdot D_n\subseteq D_{n_{k_l}+1}}$. According to Lemma [Lemma 10](#good-patches){reference-type="ref" reference="good-patches"}, there exists $g\in G$ such that $\sigma^{g^{-1}}\eta\in U_{n_{k_l}}$. Therefore, $\sigma^{u^{-1}g^{-1}}\eta(v)=\sigma^{g^{-1}}\eta(uv)=\eta_{n_{k_l}}(uv)=P(v)$ for every $v\in D_n$, and we conclude. ◻ # Irregular Toeplitz subshifts {#sec:irregular-construction} Inspired by the ideas presented in [@DoKa15 Example 5.1] and [@CeCoGo23], we provide a proof of Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"} in the remaining Sections. Let $(\Gamma_n)_{n\in\mathbb{N}}$ be a decreasing sequence of finite index normal subgroups of $G$ with trivial intersection. Let $(D_n)_{n\in\mathbb{N}}$ be a sequence of finite subsets of $G$ as in Lemma [Proposition 5](#decom){reference-type="ref" reference="decom"} with (possibly after taking a subsequence) $t_i=i$. The following Proposition characterizes the regularity of certain Toeplitz arrays. **Proposition 12**. *Let $\eta\in\Sigma^G$ be a Toeplitz sequence with period structure $(\Gamma_n)_{n\in\mathbb{N}}$ such that $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$ for every $n\in\mathbb{N}$. 
For each $n\in\mathbb{N}$, we have that $$\begin{aligned}
1-d_{n+1}=\left(1-\dfrac{1}{|D_1|}\right)\prod_{j=1}^n\left(1-\dfrac{|D_j|}{|D_{j+1}|}\right),
\end{aligned}$$ with $d_n$ defined as in subsection [2.4](#Regular-T){reference-type="ref" reference="Regular-T"}.*

*Proof.* Proposition [Proposition 8](#Per-eq){reference-type="ref" reference="Per-eq"} implies $$\begin{aligned}
\left({\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)\right)\cap D_{n+1}=J(n)=D_n\setminus(D_n\cap{\rm Per}(\eta,\Gamma_n)).
\end{aligned}$$ Therefore, $$\begin{aligned}
d_{n+1}&=\dfrac{|D_{n+1}\cap{\rm Per}(\eta,\Gamma_{n+1})|}{|D_{n+1}|}\\ &=\dfrac {|D_n\cap{\rm Per}(\eta,\Gamma_n)||D_{n+1}\cap \Gamma_n|}{|D_{n+1}|}+\dfrac{|D_n|-|D_n\cap{\rm Per}(\eta,\Gamma_n)|}{|D_{n+1}|}\\ &=\dfrac{|D_n\cap{\rm Per}(\eta,\Gamma_n)|}{|D_n|}+\dfrac{|D_n|}{|D_{n+1}|}\left(1-\dfrac{|D_n\cap{\rm Per}(\eta,\Gamma_n)|}{|D_n|}\right)\\ &=d_n+\dfrac{|D_n|}{|D_{n+1}|}(1-d_n).
\end{aligned}$$ Thus, $$\begin{aligned}
1-d_{n+1}=(1-d_n)\left(1-\dfrac{|D_n|}{|D_{n+1}|}\right).
\end{aligned}$$ Using induction on $n$ we can conclude. ◻

**Remark 13**. *Recall that $$\begin{aligned}
0<\left(1-\dfrac{1}{|D_1|}\right)\prod_{j=1}^\infty\left(1-\dfrac{|D_j|}{|D_{j+1}|}\right)\leq 1\mbox{ if and only if }\dfrac{1}{|D_1|}+\sum_{j=1}^\infty \dfrac{|D_{j}|}{|D_{j+1}|}\mbox{ converges}.\end{aligned}$$ Moreover, a computation provides that if $L:=\dfrac{1}{|D_1|}+\sum_{j=1}^\infty \dfrac{|D_j|}{|D_{j+1}|}<\infty$, then $$\begin{aligned}
e^{-2L}\leq \left(1-\dfrac{1}{|D_1|}\right)\prod_{j=1}^\infty \left(1-\dfrac{|D_j|}{|D_{j+1}|}\right)\leq 1.\end{aligned}$$*

*Therefore, Proposition [Proposition 12](#regular-eta){reference-type="ref" reference="regular-eta"} and Subsection [2.4](#Regular-T){reference-type="ref" reference="Regular-T"} imply that $\eta$ is regular if and only if the series $L$ diverges.*

## Construction of irregular Toeplitz arrays {#Irregular-arr}

From now on, we assume that the decreasing sequence of normal finite index subgroups $(\Gamma_n)_{n\in\mathbb{N}}$ of $G$ satisfies:

1. $L=\frac{1}{|D_1|}+\sum_{j=1}^\infty \frac{|D_j|}{|D_{j+1}|}$ converges.

2. [\[LL\]]{#LL label="LL"} $1-e^{-2L}<\frac{1}{4}$.

We define $\eta\in\{0,1\}^G$ as follows:

**1^st^ Step:** Set $J(0)=\{1_G\}$ and let $\eta(g)=1$ for every $g\in\Gamma_1$.\
**2^nd^ Step:** We define $J(1)=D_1\setminus \Gamma_1$ and $\eta(g)=0$ for every $g\in J(1)\Gamma_2$. Let $m(1)=|J(1)|$. Consider $J(1)=\{g_1^1,g_2^1,\ldots,g_{m(1)}^1\}$.\
**$(s+1)$^th^ Step ($s\geq 2$):** There exist $k\in\mathbb{N}$ and $0\leq s'\leq m(k)$ such that $s=m_{k-1}+s'$, which implies that $m_{k-1}\leq s<m_k$, where $$\begin{aligned}
\label{n-k}
m_k:=1+k+\sum_{i=0}^km(i),\end{aligned}$$ with $m(i):=|J(i)|$ for every $0\leq i\leq k$ and $m(0)=1$. Let $J(s)=D_s\setminus (\bigcup_{i=0}^{s-1}J(i)\Gamma_{i+1})$, and $J(k)=\{g_1^{k},g_2^{k},\ldots, g_{m(k)}^{k}\}$. If $m_{k-1}< s+1< m_{k}$, then it follows that $1\leq s'+1\leq m(k)$. Choose $h_{s'+1}^{k}\in J(s)$ such that $h_{s'+1}^k\in g_{s'+1}^k \Gamma_{k}$. Define $\eta(h_{s'+1}^k\gamma)=1$ and $\eta(g\gamma)=0$ for every $g\in J(s)\setminus\{h_{s'+1}^k\}$ and every $\gamma\in \Gamma_{s+1}$. If $s+1=m_k$, define $\eta(g\gamma)=0$ for every $g\in J(s)$ and $\gamma\in \Gamma_{s+1}$.

Since $h_{m(k)}^k\in D_{m_k-2}$ for every $k\in\mathbb{N}$, by taking a subsequence of $(\Gamma_n)_{n\in\mathbb{N}}$ if necessary, we can assume the following condition: $$\begin{aligned}
\label{Linking}
v^{-1}h_{m(k)}^k\in D_{m_k}\mbox{ for every }v\in (\Gamma_{m_k-2}\cap D_{m_k-1})\setminus\{1_G\}.\end{aligned}$$

**Example 14**.
*This example aims to illustrate the previous construction. We consider $[G:\Gamma_1]=[\Gamma_1:\Gamma_2]=[\Gamma_2:\Gamma_3]=3$. Therefore, ${D_1=\{1_G,g_1^1,g_2^1\}\subseteq G}$, $D_2\cap\Gamma_1=\{1_G,\gamma_1^1,\gamma_2^1\}$, and $D_3\cap \Gamma_2=\{1_G,\gamma_1^2,\gamma_2^2\}$. Thus, ${D_2=D_1\cup\gamma_1^1 D_1\cup\gamma_2^1 D_1}$, and $D_3=D_2\cup\gamma_1^2 D_2\cup\gamma_2^2 D_2$. In the following figures, the group $G$ is interpreted as an infinite rectangle with infinite cells.*

*(Four figures, omitted here, show the values of $\eta$ assigned after each of the first four steps of the construction, with $G$ drawn as an infinite array of cells.)*

*In Step 5, we define $\eta(g\gamma)=0$ for every $g\in J(4)$ and $\gamma\in \Gamma_5$. We then repeat the process presented in the figures, but using $J(2)$.*

This construction gives a Toeplitz sequence in $\{0,1\}^G$. Indeed, let $g\in G$ and $n\in\mathbb{N}$ be such that $g\in D_n$. If $g\in J(n)$, then our construction applies directly. Otherwise, if $g\notin J(n)$, by the definition of $J(n)$, we get that $g\in \bigcup_{i=0}^{n-1} J(i)\Gamma_{i+1}$. In both cases, $\eta$ is constant on the coset $g\Gamma_{i+1}$ for some $i\leq n$, and hence $g\in {\rm Per}(\eta,\Gamma_{i+1})$.

**Proposition 15**. *$(\Gamma_n)_{n\in\mathbb{N}}$ is a period structure for the Toeplitz array $\eta$.*

*Proof.* See the proof of [@CeCoGo23 Proposition 4.3]. ◻

**Proposition 16**. *Let $\eta\in\{0,1\}^G$ be the Toeplitz array defined above. For every $n\in\mathbb{N}$, we have $J(n)\subseteq{\rm Per}(\eta,\Gamma_{n+1})\setminus{\rm Per}(\eta,\Gamma_n)$.*

*Proof.* The construction of $\eta$ implies $J(n)\subseteq {\rm Per}(\eta,\Gamma_{n+1})$. If $g\in J(n)\cap{\rm Per}(\eta,\Gamma_n,0)$, we have that there exists ${h\in J(m_n+l)}$, for some $1\leq l\leq m(n)$, such that $h\in g\Gamma_n$ and $\eta(h)=1$, a contradiction. If $g\in J(n)\cap{\rm Per}(\eta,\Gamma_n,1)$, then $n\neq m_{k-1}-1$ for every $k\in\mathbb{N}$. If $n=m_{k-1}-2$ for some $k\in\mathbb{N}$, then $\gamma g\in J(n+1)$, and hence $\eta(\gamma g)=0$ for every $\gamma\in(\Gamma_n\cap D_{n+1})\setminus\{1_G\}$, a contradiction. Therefore, $n= m_{k-1}-1+s'$ for some $k\in\mathbb{N}$ and $1\leq s'< m(k)$. Hence, we can deduce that $g=h_{s'}^k$. By the construction of $\eta$ we have that $h_{s'+1}^k\Gamma_k\neq h_{s'}^k\Gamma_k$. Therefore, $\gamma g\in J(n+1)$ and $\eta(\gamma g)=0$, where $\gamma\in(\Gamma_n\cap D_{n+1})\setminus\{1_G\}$, and again, we obtain a contradiction. We conclude that $J(n)\cap{\rm Per}(\eta,\Gamma_n)=\emptyset$, which implies the Proposition. ◻

**Proposition 17**. *Let $\eta\in\{0,1\}^G$ be the Toeplitz array previously defined. Let $\mathcal{M}=(n_k)_{k\in\mathbb{N}}$ be the increasing sequence in $\mathbb{N}$ defined by $n_k=m_k-1$, where $m_k$ is defined in ([\[n-k\]](#n-k){reference-type="ref" reference="n-k"}) for every $k\in\mathbb{N}$. For each $n_k,n_j\in\mathcal{M}$ and $\gamma_0\in \Gamma_{n_k+1}\cap D_{n_j}$ that satisfies ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}), we obtain $$\begin{aligned}
\eta(\gamma_0\gamma u)=\eta(u) \mbox{ for every }\gamma\in\Gamma_{n_k}\cap D_{n_k+1}\mbox{ and }u\in J(n_k).\end{aligned}$$*

*Proof.* For $g\in J(n_k)$, we have that $\eta(g)=0$. For each $n_{k},n_{j}\in\mathcal{M}$ with $n_j>n_k$, we have $n_{j}>n_{k}+2$.
By Proposition [Proposition 7](#auxiliar0){reference-type="ref" reference="auxiliar0"}, we can conclude that for all $\gamma_0\in \Gamma_{n_{k}+1}\cap D_{n_{j}}$ satisfying ([Lemma 9](#good-relation1){reference-type="ref" reference="good-relation1"}), we have that $\gamma_0\gamma u\in J(n_{j})$ for all $\gamma\in (\Gamma_{n_{k}}\cap D_{n_{k}+1})\setminus\{1_G\}$ and $u\in J({n_{k}})$. Therefore, we have $$\begin{aligned} \eta(\gamma_0\gamma u)=0=\eta(u).\end{aligned}$$ If $u\in J(n_k)\subseteq {\rm Per}(\eta,\Gamma_{n_k+1})$, then $\eta(\gamma_0 u)=\eta(u)$. Therefore, $\eta(\gamma_0\gamma u)=\eta(u)$ for every $\gamma\in\Gamma_{n_k}\cap D_{n_k+1}$ and $u\in J(n_k)$. ◻ **Corollary 18**. *The Toeplitz array $\eta\in\{0,1\}^G$ constructed above is irregular with $d<1-d$ and $M_G(\overline{O_\sigma(\eta)})\neq \emptyset$.* *Proof.* The irregularity of $\eta$ follows from Remark [Remark 13](#irregularity){reference-type="ref" reference="irregularity"} and condition ([\[LL\]](#LL){reference-type="ref" reference="LL"}). Moreover, Proposition [Proposition 12](#regular-eta){reference-type="ref" reference="regular-eta"} implies $$\begin{aligned} d=1-\left(1-\dfrac{1}{|D_1|}\right)\prod_{j=1}^\infty \left(1-\dfrac{|D_j|}{|D_{j+1}|}\right)\leq 1-e^{-2L}<\dfrac{1}{4}, \end{aligned}$$ and consequently, $d<1-d$. The second part of this Corollary follows directly from Propositions [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"}, [Proposition 17](#T1T2){reference-type="ref" reference="T1T2"}, and [Proposition 11](#Supp-mu){reference-type="ref" reference="Supp-mu"}. ◻ Let $\mu\in M_G(\overline{O_\sigma(\eta)})$ be a limit point of the sequence $(\mu_{n_k})_{k\in\mathbb{N}}$, where $\mathcal{M}=(n_k)_{k\in\mathbb{N }}$ is defined in Proposition [Proposition 17](#T1T2){reference-type="ref" reference="T1T2"}. For each $i\in\{0,1\}$, $[i]=\{x\in\overline{O_\sigma(\eta)}\mid x(1_G)=i\}$ is a clopen set in $\overline{O_\sigma(\eta)}$. Therefore, $$\begin{aligned} \mu([0])&=\lim_{k\to\infty}\dfrac{|J(n_k)|+|{\rm Per}(\eta,\Gamma_{n_k},0)\cap D_{n_k}|}{|D_{n_k}|}=1-d+\lim_{k\to\infty}\dfrac{|{\rm Per}(\eta,\Gamma_{n_k},0)\cap D_{n_k}|}{|D_{n_k}|},\\ \mu([1])&=\lim_{k\to\infty}\dfrac{|{\rm Per}(\eta,\Gamma_{n_k},1)\cap D_{n_k}|}{|D_{n_k}|}.\end{aligned}$$ Henceforth, we fix $\mu\in M_G(\overline{O_\sigma(\eta)})$ and we consider $\mathcal{N}=(n_{k_j})_{j\in\mathbb{N}}$ a subsequence of $\mathcal{M}$ such that $(\mu_{n_{k_j}})_{j\in\mathbb{N}}$ converges to $\mu$ when $j\to\infty$. **Lemma 19** ([@CeCoGo23 Lemma 4.10]). *For every $i\geq 1$ and $\gamma\in \Gamma_i$, there exists $l\geq i$ such that $\gamma J(i)\subseteq J(l)\Gamma_{l+1}$.* **Proposition 20**. *Let $k\in\mathbb{Z}^+$. For each $\gamma\in\Gamma_k$, there exists at most one $g\in J(k)$ such that $\eta(\gamma g)=1$.* *Proof.* Let $k\geq 1$ and $\gamma\in \Gamma_k$. By applying Lemma [Lemma 19](#auxiliar){reference-type="ref" reference="auxiliar"}, there exists $l\geq k$ such that $\gamma J(k)\subseteq J(l)\Gamma_{l+1}$. Since $\eta$ was defined in $J(l)\Gamma_{l+1}$ at the step $l+1$ and there exists at most one $d\in J(l)$ such that $\eta(d\gamma')=1$ for every $\gamma'\in \Gamma_{l+1}$, we conclude. ◻ Let us recall the partition of clopen sets of $\overline{O_\sigma(\eta)}$, given by $\{\sigma^{v^{-1}}C_n\mid v\in D_n\}$, where $$\begin{aligned} C_n=\{x\in \overline{O_\sigma(\eta)}: {\rm Per}(x,\Gamma_n,\alpha)={\rm Per}(\eta,\Gamma_n,\alpha), \mbox{ for every }\alpha\in\{0,1\}\}. 
\end{aligned}$$ For each $n\geq 1$ and $g\in J(n)$, we define: - $C_{n,g}=\{x\in C_n: x(g)=1\}$. - $C_n^0=\{x\in C_n: x(g)=0,\mbox{ for every } g\in J(n)\}$. - $C_n^1=\bigcup_{g\in J(n)}C_{n,g}.$ It follows that $C_n=C_n^0\cup C_n^1$. Consequently, for every $n\geq 1$, the collection $$\begin{aligned} \mathcal{P}_n= \{\sigma^{v^{-1}}C_n^i: v\in D_n, i\in\{0,1\}\}\end{aligned}$$ forms a clopen partition of $\overline{O_\sigma(\eta)}$. **Proposition 21**. *The map $\pi: M_G(\overline{O_\sigma (\eta)})\to [0,1]^2, \pi(\mu’)=(\mu’([0]),\mu’([1]))$, is constant.* *Proof.* For each $n\in\mathbb{N}$ and $i\in\{0,1\}$, let $a_{n,i} =|D_n\cap Per(\eta, \Gamma_n, i)|$. We define $C_0^i=[i]\cap \overline{O_\sigma (\eta)}$. Note that $$\begin{aligned} C_0^0=&\bigcup_{g\in {\rm Per}(\eta,\Gamma_n,0)\cap D_n}\sigma^{g^{-1}}C_n\cup \bigcup_{g\in J(n)} \sigma^{g^{-1}}\big(C_n^0\cup \bigcup_{h\in J(n)\setminus\{g\} } C_{n,h}\big).\\ C_0^1=&\bigcup_{g\in {\rm Per}(\eta,\Gamma_n,1)\cap D_n}\sigma^{g^{-1}}C_n\cup \bigcup_{g\in J(n)} \sigma^{g^{-1}} C_{n,g}. \end{aligned}$$ Therefore, for every $\mu'\in M_G(\overline{O_\sigma (\eta)})$, we have $$\begin{aligned} \mu’(C_0^0)=&\dfrac{a_{n,0}}{|D_n|}+|J(n)|\mu’(C_{n}^0)+|J(n)|\mu'(C_{n}^1)-\sum_{g\in J(n)}\mu’(C_{n,g}) \\ =&\dfrac{a_{n,0}}{|D_n|}+\dfrac{|J(n)|}{|D_n|}-\sum_{g\in J(n)}\mu’(C_{n,g})\\ =&\dfrac{a_{n,0}}{|D_n|}+\dfrac{|J(n)|}{|D_n|}-\mu’(C_n^1)\\ \mu’(C_0^1)=&\dfrac{a_{n,1}}{|D_n|}+\mu’(C_n^1). \end{aligned}$$ Since $\lim\limits_{n\to\infty}\mu’(C_n^1)=0$, we can conclude $$\begin{aligned} \mu’(C_0^0)=&\lim\limits_{n\to\infty}\dfrac{a_{n,0}}{|D_n|}+\dfrac{|J(n)|}{|D_n|}=1-d+\lim_{n\to\infty} \dfrac{a_{n,0}}{|D_n|},\\ \mu’(C_0^1)=&\lim\limits_{n\to\infty}\dfrac{a_{n,1}}{|D_n|}. \end{aligned}$$ This completes the proof. ◻ **Corollary 22**. *Let $\mu'$ be an invariant probability measure of $\overline{O_\sigma(\eta)}$. For every $n\in\mathbb{N}$ and $i\in\{0,1\}$, we have that $\mu'(C_n^i)=\mu(C_n^i)$.* *Proof.* From the proof of Proposition [Proposition 21](#at-least){reference-type="ref" reference="at-least"}, we can also deduce the following expressions $$\begin{aligned} \mu’(C_0^0)=&a_{n,0}(\mu’(C_n^0)+\mu’(C_n^1))+|J(n)|\mu’(C_n^0)+(|J(n)|-1)\mu’(C_n^1)\\ =&(a_{n,0}+|J(n)|)\mu’(C_n^0)+(a_{n,0}+|J(n)|-1)\mu’(C_n^1), \end{aligned}$$ and $$\begin{aligned} \mu’(C_0^1)=&a_{n,1}(\mu’(C_n^0)+\mu’(C_n^1))+\mu’(C_n^1)\\ =&a_{n,1}\mu’(C_n^0)+(a_{n,1}+1)\mu’(C_n^1). \end{aligned}$$ Let $$\begin{aligned} A_n=\left(\begin{array}{cc} a_{n,0}+|J(n)| &a_{n,0}+|J(n)|-1 \\ a_{n,1}& a_{n,1}+1 \end{array}\right). \end{aligned}$$ Note that $A_n$ is an invertible matrix since $\det(A_n)=|D_n|$. Hence, we can deduce that $\mu'(C_n^i)=\mu(C_n^i)$ for every $i\in\{0,1\}$, since the map $\pi$ defined in Proposition [Proposition 21](#at-least){reference-type="ref" reference="at-least"} is constant. ◻ # Topo-isomorphism of $\overline{O_\sigma(\eta)}$ {#sec:Topo} Recall the sequence of natural numbers $\mathcal{M}=(n_k)_{k\in\mathbb{N}}$ defined in Proposition [Proposition 17](#T1T2){reference-type="ref" reference="T1T2"}, $\mu$ the fixed invariant probability measure of $\overline{O_\sigma(\eta)}$, and the subsequence $\mathcal{N}=(n_{k_j})_{j\in\mathbb{N}}$ of $\mathcal{M}$ such that $(\mu_{n_{k_j}})_{j\in\mathbb{N}}$ converges to $\mu$, when $j\to\infty$. For every $n\geq 1$, the set $U_n$ was defined in ([\[definition\]](#definition){reference-type="ref" reference="definition"}) as $U_n=\{x\in\{0,1\}^G: x(D_{n+1})=\eta_n(D_{n+1})\}$. 
The following Lemma is similar to Lemma 5.1 in [@CeCoGo23]. **Lemma 23**. *Let $n_{k_j}\in\mathcal{N}$, with $j\in\mathbb{N}$. Then, $$\begin{aligned} \mu(U_{n_{k_j}})\geq \lim_{j\to\infty}\dfrac{1}{D_{n_{k_j}+1}} \prod_{l=1}^{j}\left(1-\dfrac{|D_{n_{k_j}+l}|}{|D_{n_{k_j}+1+l}|}\right), \end{aligned}$$ and $$\begin{aligned} \mu\left(\bigcup_{v\in D_{n_{k_j}+1}}\sigma^{v^{-1}} U_{n_{k_j}}\right)\geq \lim_{j\to\infty}\prod_{l=1}^{j}\left(1-\dfrac{|D_{n_{k_j}+l}|}{|D_{n_{k_j}+1+l}|}\right). \end{aligned}$$* *Proof.* Denote $n=n_{k_{j'}}$ and $m=n_{k_{j}}$ in $\mathcal{N}$ such that $m>n$. According to Lemma [Lemma 9](#good-relation1){reference-type="ref" reference="good-relation1"}, $$\begin{aligned} N_{m,n}\geq \dfrac{|D_{m}|}{|D_{n+1}|}\prod_{l=1}^{m-n-1}\left(1-\dfrac{|D_{n+l}|}{|D_{n+l+1}|}\right). \end{aligned}$$ From Lemma [Lemma 10](#good-patches){reference-type="ref" reference="good-patches"} and Proposition [Proposition 17](#T1T2){reference-type="ref" reference="T1T2"}, we deduce that for every $\gamma\in \Gamma_{n+1}\cap D_m$ satisfying ([\[good-relation\]](#good-relation){reference-type="ref" reference="good-relation"}), $$\begin{aligned} \eta_n(D_{n+1})=\eta(\gamma D_{n+1})=\eta_{m}(\gamma D_{n+1}). \end{aligned}$$ Consequently, it follows that $\sigma^{\gamma^{-1}}\eta_{m}\in U_n$ and $\sigma^{(\gamma v)^{-1}}\eta_{m}\in\sigma^{v^{-1}} U_n$ for all $v\in D_{n+1}$. Therefore, we have $$\begin{aligned} \mu_{m}(U_n)\geq N_{m,n}\dfrac{1}{|D_{m}|}\geq \dfrac{1}{|D_{n+1}|}\prod_{l=1}^{m-n-1}\left(1-\dfrac{|D_{n+l}|}{|D_{n+l+1}|}\right), \end{aligned}$$ and $$\begin{aligned} \mu_{m}\left(\bigcup_{v\in D_{n+1}}\sigma^{v^{-1}}U_n\right)\geq N_{m,n}\dfrac{|D_{n+1}|}{|D_{m}|}\geq \prod_{l=1}^{m-n-1}\left(1-\dfrac{|D_{n+l}|}{|D_{n+l+1}|}\right). \end{aligned}$$ We observe that $\lim\limits_{j\to\infty}\prod_{l=1}^{j}(1-\frac{|D_{n+l}|}{|D_{n+l+1}|})=\lim\limits_{j\to\infty}\prod_{l=1}^{n_{k_{j}}-n-1}(1-\frac{|D_{n+l}|}{|D_{n+l+1}|})$. Since $\mu$ is the limit of $(\mu_{n_{k_j}})_{j\in\mathbb{N}}$ and $U_n$ is clopen, we conclude the Lemma. ◻ Let $k\in\mathbb{N}$. Suppose that $n_{k-1}+1<s< n_{k}+1$, with $n_{k-1},n_k$ two consecutive elements in $\mathcal{M}$. We can deduce from the definition of $\eta$ that $$\begin{aligned} \label{Periodo-1} {\rm Per}(\eta,\Gamma_s,1)= \Gamma_1\cup\bigcup_{i=1}^{k-1}\bigcup_{t=1}^{m(i)} h_t^{i}\Gamma_{n_{i-1}+1+t}\cup\bigcup_{t=1}^{s-(n_{k-1}+1)} h_t^{k}\Gamma_{n_{k-1}+1+t}. \end{aligned}$$ We assume that $m(0)=1$, and $h_1^0=1_G$. Moreover, since $n_{k-1}+1+m(k)=n_k$ and $\eta(g\gamma)=0$ for every $g\in D_{n_k}$ and $\gamma\in \Gamma_{n_k+1}$, we conclude $${\rm Per}(\eta,\Gamma_{n_{k-1}+1+m(k)},1)={\rm Per}(\eta,\Gamma_{n_k},1)={\rm Per}(\eta,\Gamma_{n_{k}+1},1).$$ **Lemma 24**. *Let $n_{k}\in\mathcal{M}$ be such that $n_{k}\geq 2$. For every $w\in D_{n_{k}-1}\setminus\{1_G\}$, there exists $g_w\in {\rm Per}(\eta,\Gamma_{n_{k}-1},1)$ such that $wg_w\notin {\rm Per}(\eta,\Gamma_{n_{k}+1},1)$ and $wg_w\in D_{n_k+1}$.* *Proof.* First, we prove that for every $w\in D_{n_k-1}\setminus \{1_G\}$ there exists $g_w'\in {\rm Per}(\eta,\Gamma_{n_k-1},1)$ such that $wg_w'\notin {\rm Per}(\eta,\Gamma_{n_k+1},1)$. We proceed by contradiction. Let $w\in D_{n_k-1}\setminus \{1_G\}$ and suppose that for every $g\in {\rm Per}(\eta,\Gamma_{n_{k}-1},1)$ it is satisfied that $wg\in {\rm Per}(\eta,\Gamma_{n_{k}+1},1)$. We claim that $w\in \Gamma_{n_k-1}$. We know that $\Gamma_1\subset {\rm Per}(\eta,\Gamma_1,1)\subset {\rm Per}(\eta, \Gamma_{n_{k}-1},1)$. 
Thus, $w\Gamma_1\subseteq {\rm Per}(\eta,\Gamma_{n_{k}+1},1)$, and consequently, using equation ([\[Periodo-1\]](#Periodo-1){reference-type="ref" reference="Periodo-1"}), there exist $0\leq i\leq k$ and $1\leq l\leq m(i)$ such that ${w\Gamma_1=h_t^i\Gamma_1}$ with $\eta(h_t^i\gamma)=1$ for every $\gamma\in\Gamma_1$. However, according to Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"}, $h_t^i=h_1^0=1_G$. Thus, we have $w\in\Gamma_1$. Suppose $w\in \Gamma_{n_{s-1}+(r+1)}$, for some $1\leq s<k,1\leq r\leq m(s)$ or ($s=k$ and $1\leq r< m(k)$) such that $n_{s-1}+(r+1)\leq n_{k}-2$. **Case $1$:** $r<m(s)$.\ Since $h_{r+1}^s\in {\rm Per}(\eta,\Gamma_{n_{s-1}+(r+1)+1},1)\subset {\rm Per}(\eta,\Gamma_{n_k-1},1)$, we deduce that $w h_{r+1}^s\gamma'$ belongs to ${\rm Per}(\eta,\Gamma_{n_k+1},1)$ for every $\gamma'\in \Gamma_{n_{s-1}+r+1+1}$. Equation ([\[Periodo-1\]](#Periodo-1){reference-type="ref" reference="Periodo-1"}) implies $w h_{r+1}^s=h_t^i\gamma'$ for some $0\leq i \leq k$, $1\leq t\leq m(i)$ and $\gamma'\in \Gamma_{n_{i-1}+1+t}$.\ If $s<i\leq k$ or ($i=s$ and $r+1<t\leq m(i)$), we have $n_{s-1}+(r+1)+1\leq n_{i-1}+t$. In this case, we obtain $w h_{r+1}^s\Gamma_{n_{s-1}+1+(r+1)}=h_t^i\Gamma_{n_{s-1}+1+(r+1)}$ with $\eta(h_t^i \gamma)=1$ for every $\gamma\in \Gamma_{n_{s-1}+1+(r+1)}$. In particular, $\eta(h_{t}^i\gamma)=1$ for $\gamma\in \Gamma_{n_{i-1}+t}$. This is a contradiction since $h_t^i\in J(n_{i-1}+t)$ and $\eta |_{h_t^i\Gamma_{n_{i-1}+t}}$ is not constant by Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"}. Therefore, $i<s$ or ($i=s$ and $1\leq t\leq r+1$).\ Now, suppose that $i<s$ or ($i=s$ and $1\leq t<r+1$). Thus, $n_{s-1}+(r+1)\geq n_{i-1}+t+1$. $w\in\Gamma_{n_{s-1}+(r+1)}$ implies $h_{r+1}^s\Gamma_{n_{i-1}+1+t}=h_t^i\Gamma_{n_{i-1}+1+t}$. Consequently, $\eta(h_{r+1}^s\gamma'')=1$ for every $\gamma''\in \Gamma_{n_{i-1}+t+1}$ and again, this is impossible by Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"}. Thus, $i=s$ and $t=r+1$. We conclude that $w\in \Gamma_{n_{s-1}+(r+1)+1}$. **Case $2$:** $r=m(s)$. Under this assumption, we have that $0\leq s\leq k-1$.\ Since $h_1^{s+1}\in{\rm Per}(\eta,\Gamma_{n_s+1+1},1)\subset{\rm Per}(\eta,\Gamma_{n_k-1},1)$, we have that $wh_1^{s+1}\gamma$ belongs to ${\rm Per}(\eta,\Gamma_{n_k+1},1)$ for every $\gamma\in \Gamma_{n_s+1+1}$. Thus, $w h_1^{s+1}=h_t^i\gamma'$ for some $0\leq i\leq k-1$, $1\leq t\leq m(i)$, and $\gamma'\in \Gamma_{n_{i-1}+1+t}$.\ If $i>s+1$ or ($i=s+1$ and $1<t\leq m(s+1)$), we have that $n_{s}+2\leq n_{i-1}+t$. Thus, $h_{1}^{s+1}\Gamma_{n_{s}+2}=h_t^i\Gamma_{n_{s}+2}$ with $\eta(h_t^i \gamma)=1$ for every $\gamma\in \Gamma_{n_{s}+2}$, which is impossible by Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"}. Hence, $i< s+1$ or ($i=s+1$ and $t=1$).\ Consider the case $i<s+1$. Our hypothesis says that $w\in \Gamma_{n_{s-1}+1+m(s)}$. Since $n_s+1>n_{s-1}+m(s)\geq n_{i-1}+t$, we obtain that $h_{1}^{s+1}\Gamma_{n_{i-1}+1+t}=h_t^i\Gamma_{n_{i-1}+1+t}$, which implies $\eta(h_{1}^{s+1}\gamma'')=1$ for every $\gamma''\in \Gamma_{n_{i-1}+1+t}$. Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"} implies that this is only true when $i=s+1$ and $t=1$. Consequently, $w\in \Gamma_{n_{s}+2}$. Applying repeatedly Case 1 and 2, we obtain that $w\in \Gamma_{n_k-1}$. Therefore, $w\in D_{n_k-1}\cap \Gamma_{n_k-1}=\{1_G\}$, a contradiction. 
Hence, there exists $g_w'\in {\rm Per}(\eta,\Gamma_{n_k-1},1)$ such that $wg_w'\notin {\rm Per}(\eta,\Gamma_{n_k+1},1)$. There exists $d\in D_{n_k+1}$ and $\gamma\in \Gamma_{n_k+1}$ such that $wg_w'=d\gamma$. Note that $d\notin{\rm Per}(\eta,\Gamma_{n_k+1},1)$. Indeed, if $d\in{\rm Per}(\eta,\Gamma_{n_k+1},1)$, it would imply $wg_w'\in {\rm Per}(\eta,\Gamma_{n_k+1},1)$. Moreover, $g_w'\gamma^{-1}\in{\rm Per}(\eta,\Gamma_{n_k-1},1)$. Therefore, $g_w=g_w'\gamma^{-1}$ satisfies the Lemma. ◻ For $n_{k}\in\mathcal{M}$, $k\in\mathbb{N}$, we denote $$\begin{aligned} Y_{n_{k}}=\bigcap_{\gamma\in\Gamma_{n_{k}}\cap D_{n_{k}+1}}\sigma^\gamma C_{n_{k}}^0.\end{aligned}$$ **Lemma 25**. *For every $n_{k}\in\mathcal{M}$, $k\in\mathbb{N}$, we have that $U_{n_{k}}\cap \overline{O_\sigma(\eta)}\subseteq Y_{n_{k}}.$* *Proof.* Let $n_{k}\in\mathcal{M}$ for some $k\geq 1$ and $x\in U_{n_{k}}\cap \overline{O_\sigma(\eta)}$. Recall that $\{\sigma^{v^{-1}}C_{n_k}\mid v\in D_{n_k}\}$ is a partition of $\overline{O_\sigma(\eta)}$. Therefore, there exist $y\in C_{n_k}$ and $v\in D_{n_k}$ such that $x=\sigma^{v^{-1}}y$. Suppose that $v\neq 1_G$. There exists $w\in D_{n_k-1}$ satisfying $vw\in \Gamma_{n_k-1}$. If $w=1_G$, then $v\in \Gamma_{n_k-1}\cap D_{n_k}$. Let $h:=h_{m(k)}^{k}\in J(n_k-1)$ and set $g:=v^{-1}h$, which is in $D_{n_k+1}$ by the condition given in [\[Linking\]](#Linking){reference-type="ref" reference="Linking"}. Since $h\in{\rm Per}(\eta,\Gamma_{n_k},1)$, we have $$\begin{aligned} \label{vgh} y(vg)=y(h)=1.\end{aligned}$$ On the other hand, $g=\gamma d$ for some $\gamma\in\Gamma_{n_k}\cap D_{n_k+1}$ and $d\in D_{n_k}$. If $d\in {\rm Per}(\eta,\Gamma_{n_k})$, Lemma [Proposition 8](#Per-eq){reference-type="ref" reference="Per-eq"} implies that there exist $d'\in D_{n_k-1}$ and $\gamma'\in \Gamma_{n_k}$ such that $d=\gamma' d'$. Thus, $g=v^{-1}h=\gamma\gamma' d'$. Since $h, d'\in D_{n_k-1}$, and $v^{-1},\gamma\gamma'$ are in $\Gamma_{n_k-1}$, we obtain $h=d'$. For that reason, $v^{-1}=\gamma\gamma'\in\Gamma_{n_k}$, and consequently, $v\in \Gamma_{n_k}\cap D_{n_k}=\{1_G\}$, a contradiction. Therefore, $d\in J(n_k)$, and this implies $\eta(d)=0$. Using equation ([\[vgh\]](#vgh){reference-type="ref" reference="vgh"}), the previous argument and the fact that $x\in U_{n_k}$, we obtain that $$\begin{aligned} 1=y(h)=y(vg)=x(g)=x(\gamma d)=\eta(d)=0,\end{aligned}$$ a contradiction. Suppose that $w\neq1_G$. By Lemma [Lemma 24](#Good-D's){reference-type="ref" reference="Good-D's"} there exists $g_w\in {\rm Per}(\eta,\Gamma_{n_k-1},1)$ such that $wg_w\notin {\rm Per}(\eta,\Gamma_{n_k+1},1)$ and $wg_w\in D_{n_k+1}$. Thus, $wg_w\in J(n_k+1)\cup {\rm Per}(\eta, \Gamma_{n_k+1},0)$. There exist $d\in D_{n_k}$ and $\gamma\in \Gamma_{n_k}\cap D_{n_k+1}$ such that $wg_w=\gamma d$. If $wg_w\in J(n_k+1)$, then $d\in J(n_k)$ by Proposition [Proposition 7](#auxiliar0){reference-type="ref" reference="auxiliar0"}. Since $x\in U_{n_k}$, we obtain $$\begin{aligned} x(wg_w)=x(\gamma d)=\eta(d)=0.\end{aligned}$$ If $wg_w\in{\rm Per}(\eta,\Gamma_{n_k+1},0)$, then $d\in{\rm Per}(\eta,\Gamma_{n_k},0)\cup J(n_k)$. If $d\in J(n_k)$, we obtain that $\gamma=1_G$ by Proposition [Proposition 7](#auxiliar0){reference-type="ref" reference="auxiliar0"}. We conclude $wg_w\in J(n_k)\subset D_{n_k}$. 
Since $x\in U_{n_k}$, $$\begin{aligned} x(wg_w)=\eta(wg_w)=0.\end{aligned}$$ When $d\in {\rm Per}(\eta,\Gamma_{n_k},0)$, we have $$\begin{aligned} x(wg_w)=x(\gamma d)=\eta(d)=0.\end{aligned}$$ In any case, we conclude that $$\begin{aligned} \label{wg0} x(wg_w)=0.\end{aligned}$$ On the other hand, since $g_w\in {\rm Per}(\eta,\Gamma_{n_k-1},1)$, $y\in C_{n_k}$ and $vw\in \Gamma_{n_k-1}$, we obtain $$\begin{aligned} \label{wg1} x(wg_w)=y(vw g_w)=1.\end{aligned}$$ We have a contradiction with equations ([\[wg0\]](#wg0){reference-type="ref" reference="wg0"}) and ([\[wg1\]](#wg1){reference-type="ref" reference="wg1"}). Therefore, we conclude that $v=1_G$. Hence, $x\in C_{n_k}$. Note that $x\in U_{n_k}$ implies $$\begin{aligned} x(\gamma g)=\eta(g)=0 \mbox{ for every }g\in J(n_k)\mbox{ and }\gamma\in \Gamma_{n_k}\cap D_{n_k+1}.\end{aligned}$$ Thus, $x\in \bigcap_{\gamma\in \Gamma_{n_k}\cap D_{n_k+1}}\sigma^{\gamma} C_{n_k}^0=Y_{n_k}$. ◻ **Lemma 26**. *Let $n_{k_j}\in \mathcal{N}$ with $j\in\mathbb{N}$. Then the following relationship holds $$\begin{aligned} \bigcup_{v\in D_{n_{k_j}+1}}\sigma^{v^{-1}} Y_{n_{k_j}}\subseteq \bigcup_{u\in D_{n_{k_j}}}\sigma^{u^{-1}}C_{n_{k_j}}^0.\end{aligned}$$* *Proof.* Let $x\in \sigma^{v^{-1}}Y_{n_{k_j}}$ for some $v\in D_{n_{k_j}+1}$. Therefore, there exist $u\in D_{n_{k_j}}$ and $\gamma\in \Gamma_{n_{k_j}}\cap D_{n_{k_j}+1}$ such that $v=\gamma u$. The definition of $Y_{n_{k_j}}$ implies that $x\in \sigma^{u^{-1}} C_{n_{k_j}}^0$. ◻ Corollary [Corollary 18](#no-empty){reference-type="ref" reference="no-empty"} implies that $\eta$ is irregular. Therefore, Remark [Remark 13](#irregularity){reference-type="ref" reference="irregularity"} implies $$\begin{aligned} \lim_{n\to\infty}\lim_{k\to\infty}\prod_{l=1}^{k-1}\left(1-\dfrac{|D_{n+l}|}{|D_{n+l+1}|}\right)=1.\end{aligned}$$ For every $n\in\mathbb{N}$, we define $$\begin{aligned} Z_n=\bigcup_{v\in D_n}\sigma^{v^{-1}} C_{n}^0.\end{aligned}$$ **Lemma 27**. *Let $\mu'\in M_G(\overline{O_\sigma(\eta)})$. We have that $\lim_{j\to\infty} \mu'(Z_{n_{k_j}})=1$.* *Proof.* By lemmas [Lemma 26](#Y'n containing){reference-type="ref" reference="Y'n containing"}, [Lemma 25](#Containing U_n){reference-type="ref" reference="Containing U_n"} and [Lemma 23](#U'ns){reference-type="ref" reference="U'ns"}, we get $$\begin{aligned} \mu\left(\bigcup_{u\in D_{n_{k_j}}}\sigma^{u^{-1}}C_{n_{k_j}}^0\right)\geq& \;\mu\left(\bigcup_{v\in D_{n_{k_j}+1}}\sigma^{v^{-1}}Y_{n_{k_j}}\right)\geq \mu\left(\bigcup_{v\in D_{n_{k_j}+1}}\sigma^{v^{-1}}U_{n_{k_j}}\right)\\ \geq & \lim_{s\to\infty}\prod_{l=1}^{s}\left(1-\dfrac{|D_{n_{k_j}+l}|}{|D_{n_{k_j}+1+l}|}\right). \end{aligned}$$ Therefore, $\lim_{j\to\infty}\mu(Z_{n_{k_j}})=1$. Corollary [Corollary 22](#equals-partitions){reference-type="ref" reference="equals-partitions"} implies that $\mu(C_{n}^i)=\mu'(C_n^i)$ for every $C_n^i\in\mathcal{P}_n$, $n\in \mathbb{N}$, $i\in\{0,1\}$. In particular, $\mu(Z_{n_{k_j}})=\mu'(Z_{n_{k_j}})$ for every $j\in\mathbb{N}$, and thus, the Lemma is proven. ◻ **Lemma 28**. *Let $n\geq 1$, $\gamma,\widetilde{\gamma}\in (\Gamma_{n}\cap D_{n+1})\setminus\{1_G\}$ and $g\in J(n)$. We have that* 1. *$\sigma^{\gamma^{-1}} C_{n+1}^0\subseteq C_{n}^0$.* 2. *$\sigma^{\gamma^{-1}} C_{n+1,\gamma g}\subseteq C_{n,g}$.* 3. *$\sigma^{{\gamma}^{-1}} C_{n+1,\widetilde{\gamma} g}\subseteq C_{n}^0$, when $\gamma\neq \widetilde{\gamma}$.* 4. *If $n\in\mathcal{M}$, then $C_{n+1}\subseteq C_{n}^0$.* 5. 
*If $n=n_{k-1}+s'$, with $n_{k-1}\in\mathcal{M}$ and $1\leq s'\leq m(k)$, then $C_{n+1}\subseteq C_{n,h_{s'}^k}$.* *Proof.* Using that $C_{n+1}^0\subseteq C_{n+1}\subseteq C_{n}$, we obtain that $\sigma^{\overline{\gamma}^{-1}} C_{n+1}^0\subseteq C_{n}$ for every $\overline{\gamma}\in\Gamma_{n}$. Let $x\in C_{n+1}^0$. Then, for every $g\in J(n)$ we have that $\gamma g\in J(n+1)$. Thus, $\sigma^{\gamma^{-1}}x(g)=x(\gamma g)=0$. Therefore, $\sigma^{\gamma^{-1}} C_{n+1}^0\subseteq C_{n}^0$. For *(2)* and *(3)*, note that $\sigma^{\gamma^{-1}}C_{n+1}\subseteq C_{n}$. Let $x\in \sigma^{\gamma^{-1}} C_{n+1,\widetilde{\gamma} g}$. This implies there exists $y\in C_{n+1,\widetilde{\gamma} g}$ such that $x=\sigma^{\gamma^{-1}}y$. Let $g'\in J(n)$. Thus, $\gamma g'\in J(n+1)$, and hence, $\sigma^{\gamma^{-1}}y(g')=y(\gamma g')$. Therefore, $\sigma^{\gamma^{-1}} C_{n+1,\widetilde{\gamma} g}\subseteq C_{n,g}$ when $\gamma=\widetilde{\gamma}$, or $\sigma^{{\gamma}^{-1}} C_{n+1,\widetilde{\gamma} g}\subseteq C_{n}^0$ when $\gamma\neq\widetilde{\gamma}$. Suppose that $n\in\mathcal{N}$. Proposition [Proposition 16](#J-sub){reference-type="ref" reference="J-sub"} implies that $D_{n}\subseteq {\rm Per}(\eta,\Gamma_{n+1})$. Thus, if $x\in C_{n+1}$, then $x(g)=\eta(g)=0$ for every $g\in J(n)$. Therefore, $C_{n+1}\subseteq C_{n}^0$, and we obtain *(4)*. For the proof of *(5)*, let $x\in C_{n+1}\subseteq C_{n}$, and note that $x(g)=\eta(g)$ for every $g\in J(n)$. Thus, $x\in C_{n,h_{s'}^k}$. ◻ We denote $$\begin{aligned} W_n=\bigcup_{v\in D_{n-1}}\sigma^{v^{-1}}\bigcup_{g\in J(n-1)}\bigcup_{\widetilde{\gamma} \in (\Gamma_{n-1}\cap D_n)\setminus \{1_G\}}\bigcup_{\gamma\in (\Gamma_{n-1}\cap D_n)\setminus \{1_G,\widetilde{\gamma}\}} \sigma^{\gamma^{-1}} C_{n,\widetilde{\gamma} g},\end{aligned}$$ and $[n,m)=\{s\in\mathbb{Z}\mid n\leq s< m\}$ for every $n,m\in\mathbb{N}$. **Lemma 29**. *If $n\in\mathcal{M}$, then $$\begin{aligned} \label{cont-1} Z_{n}=Z_{n+1}\cup W_{n+1}\cup\bigcup_{v\in D_{n}}\sigma^{v^{-1}} C_{n+1}^1. \end{aligned}$$ When $n\notin\mathcal{M}$, $$\begin{aligned} \label{cont-2} Z_{n}\subseteq Z_{n+1}\cup W_{n+1}. \end{aligned}$$* *Furthermore, if $n_{k_j},n_{k_{j+1}}$ are two consecutives elements in $\mathcal{N}$ with $n_{k_j}<n_{k_{j+1}}$, then $$\begin{aligned} \label{cont-3} Z_{n_{k_j}}\subseteq Z_{n_{k_{j+1}}}\cup\bigcup_{r=n_{k_j}+1}^{n_{k_{j+1}}} W_r\cup\bigcup_{m\in\mathcal{M}\cap[n_{k_j}, n_{k_{j+1}})}\bigcup_{v\in D_{m}}\sigma^{v^{-1}}C_{m+1}^1. \end{aligned}$$* *Proof.* Let $n\in\mathcal{M}$. Lemma [Lemma 28](#Containings){reference-type="ref" reference="Containings"} implies that for every $v\in D_n$, $$\begin{aligned} \sigma^{(\overline{\gamma} v)^{-1}}C_{n+1}^0&\subseteq \sigma^{v^{-1}}C_n^0 \mbox{ for each }\overline{\gamma}\in \Gamma_{n+1}\cap D_n,\\ \sigma^{v^{-1}}C_{n+1}^1&\subseteq \sigma^{v^{-1}} C_n^0. \end{aligned}$$ Moreover, for $v\in D_n$, $g\in J(n)$, and $\widetilde{\gamma}\in(\Gamma_n\cap D_{n+1})\setminus\{1_G\}$, $$\begin{aligned} \sigma^{(\gamma v)^{-1}} C_{n+1,\widetilde{\gamma} g}\subseteq \sigma^{v^{-1}}C_n^0 \mbox{ for each } \gamma\in (\Gamma_n\cap D_n)\setminus\{1_G,\widetilde{\gamma}\}. \end{aligned}$$ Since $\mathcal{P}_n$ is partition of $\overline{O_\sigma(\eta)}$ and $\mathcal{P}_{n+1}$ is finer than $\mathcal{P}_n$ for every $n\in\mathbb{N}$, we can deduce equation ([\[cont-1\]](#cont-1){reference-type="ref" reference="cont-1"}). Let $n\notin\mathcal{M}$. 
Lemma [Lemma 28](#Containings){reference-type="ref" reference="Containings"} implies that for every $v\in D_n$ and $\widetilde{\gamma}\in (\Gamma_n\cap D_{n+1})\setminus\{1_G\}$, $$\begin{aligned} \sigma^{(\widetilde{\gamma} v)^{-1}} C_{n+1}^0&\subseteq\sigma^{u^{-1}} C_n^0,\\ \sigma^{(\gamma v)^{-1}}C_{n+1,\widetilde{\gamma}g}&\subseteq\sigma^{v^{-1}} C_n^0 \mbox{ for each }g\in J(n) \mbox{ and }\gamma\in (\Gamma_n\cap D_{n+1})\setminus\{1_g,\widetilde{\gamma}\}.\end{aligned}$$ Using that $\mathcal{P}_n$ is a partition of $\overline{O_\sigma(\eta)}$ for $n\in\mathbb{N}$, with $\mathcal{P}_{n+1}$ finer than $\mathcal{P}_n$, we obtain equation ([\[cont-2\]](#cont-2){reference-type="ref" reference="cont-2"}). Finally, equation ([\[cont-3\]](#cont-3){reference-type="ref" reference="cont-3"}) follows by repeatedly applying ([\[cont-1\]](#cont-1){reference-type="ref" reference="cont-1"}) and equation ([\[cont-2\]](#cont-2){reference-type="ref" reference="cont-2"}). ◻ **Corollary 30**. *Let $j,s\in\mathbb{N}$ such that $s>j$. Then, $$\begin{aligned} Z_{n_{k_j}}\subseteq Z_{n_{k_{s}}}\cup\bigcup_{r=n_{k_j}+1}^{n_{k_{s}}} W_r\cup\bigcup_{m\in\mathcal{M}\cap[n_{k_j}, n_{k_{s}})}\bigcup_{v\in D_{m}}\sigma^{v^{-1}}C_{m+1}^1. \end{aligned}$$* *Proof.* This follows directly by applying ([\[cont-3\]](#cont-3){reference-type="ref" reference="cont-3"}). ◻ **Lemma 31**. *There exists a subsequence $\mathcal{L}=(t_l)_{l\in\mathbb{N}}\subseteq\mathcal{N}$ such that for every ${\mu'\in M_G(\overline{O_\sigma(\eta)})}$, we have that $\sum_{l=1}^{\infty} \mu'(V_{t_{l}})<\infty$, where $$\begin{aligned} V_{t_{l}}=\left(\bigcup_{r=t_{l-1}+1}^{t_{l}}W_r\cup\bigcup_{m\in\mathcal{M}\cap[t_{l-1},t_{l})}\bigcup_{v\in D_m} \sigma^{v^{-1}} C_{m+1}^1\right)\setminus Z_{t_{l}}, l\geq 2, \mbox{ and }V_{t_1}=\emptyset. \end{aligned}$$* *Proof.* Let $\{\varepsilon_l\}_{l\in\mathbb{N}}$ be a strictly decreasing sequence of positive real numbers such that $\sum_{j=1}^\infty \varepsilon_j<\infty$. Lemma [Lemma 27](#Measure-1){reference-type="ref" reference="Measure-1"} implies that there exists a subsequence $\mathcal{L}=\{t_l\}_{l\in\mathbb{N}}$ of $\mathcal{N}$ such that for every $l\in\mathbb{N}$ and $r\geq l$, $$\begin{aligned} \mu'(Z_{t_{r}})\geq 1-\varepsilon_{l}.\end{aligned}$$ Using that $Z_{t_l}\cap V_{t_l}=\emptyset$, we have $$\begin{aligned} 1-\varepsilon_{l}+\mu'(V_{t_{l}})\leq \mu'(Z_{t_{l}})+\mu'(V_{t_{l}})\leq 1,\end{aligned}$$ which implies $\mu'(V_{t_{l}})\leq \varepsilon_{l}$. Hence, $\sum_{j=1}^\infty \mu'(V_{t_{j}})\leq \sum_{j=1}^\infty \varepsilon_j<\infty$. ◻ **Lemma 32**. *For every $t_l\in\mathcal{L}$ we have $$\begin{aligned} Z_{t_l}\subseteq \left(\bigcap_{j=l}^\infty Z_{t_{j}}\right)\cupdot \bigcup_{r=l+1}^\infty V_{t_{r}}, \end{aligned}$$ where $V_{t_r}$ is defined as in Lemma [Lemma 31](#V's){reference-type="ref" reference="V's"}.* *Proof.* We claim that for every $l,p\in\mathbb{N}$, it is true that $$\begin{aligned} Z_{t_l}\subseteq \bigcap_{j=l}^{l+p} Z_{t_j}\cupdot \bigcup_{r=l+1}^{l+p} V_r.\end{aligned}$$ We will apply induction on $p$. Corollary [Corollary 30](#j,s){reference-type="ref" reference="j,s"} and the definition of $V_{t_l}$ implies the claim when $p=1$ and every $l\in\mathbb{N}$. 
Suppose that for some $p\in\mathbb{N}$ and every $l\in\mathbb{N}$ we have that $$\begin{aligned} Z_{t_l}\subseteq \left(\bigcap_{j=l}^{l+p}Z_{t_{j}}\right)\cupdot \bigcup_{r=l+1}^{l+p} V_{t_{r}}.\end{aligned}$$ We need to prove that $$\begin{aligned} Z_{t_l}\subseteq \left(\bigcap_{j=l}^{l+p+1}Z_{t_{j}}\right)\cupdot \bigcup_{r=l+1}^{l+p+1} V_{t_{r}}.\end{aligned}$$ Corollary [Corollary 30](#j,s){reference-type="ref" reference="j,s"} implies that $Z_{t_{l+p}}\subseteq (Z_{t_{l+p}}\cap Z_{t_{l+p+1}})\cupdot V_{t_{l+p+1}}$. Therefore, $$\begin{aligned} Z_{t_l}&\subseteq \left(\bigcap_{j=l}^{l+p-1}Z_{t_{j}}\cap Z_{t_{l+p}}\right)\cupdot \bigcup_{r=l+1}^{l+p} V_{t_{r}}\\ &\subseteq \left(\bigcap_{j=l}^{l+p-1}Z_{t_{j}}\cap ((Z_{t_{l+p}}\cap Z_{t_{l+p+1}})\cup V_{t_{l+p+1}})\right)\cupdot \bigcup_{r=l+1}^{l+p} V_{t_{r}}\\ &\subseteq \left(\bigcap_{j=l}^{l+p+1}Z_{t_{j}}\right)\cup\left( V_{t_{l+p+1}}\cap \bigcap_{j=l}^{l+p-1}Z_{t_{j}}\right)\cupdot \bigcup_{r=l+1}^{l+p} V_{t_{r}}\\ &\subseteq \left(\bigcap_{j=l}^{l+p+1}Z_{t_{j}}\right)\cupdot \bigcup_{r=l+1}^{l+p+1} V_{t_{r}}. \end{aligned}$$ Therefore, for every $l\in\mathbb{N}$ and all $k\in\mathbb{N}$ such that $k>l$, we have that $$\begin{aligned} Z_{t_l}\subseteq \left(\bigcap_{j=l}^{k} Z_{t_{j}}\right)\cup\bigcup_{r=l+1}^{\infty} V_{t_r}.\end{aligned}$$ Consequently, $$\begin{aligned} Z_{t_l}\subseteq \left(\bigcap_{j=l}^{\infty} Z_{t_{j}}\right)\cupdot\bigcup_{r=l+1}^{\infty} V_{t_r}.\end{aligned}$$ ◻ **Lemma 33**. *For every $\mu’\in M_G(\overline{O_\sigma(\eta)})$, we establish $$\begin{aligned} \label{Inter-1} \lim_{l\to\infty}\mu’\left(\bigcap_{j=l}^\infty Z_{t_j}\right)=1, \end{aligned}$$ and $\mu’(A)=1$, where $$\begin{aligned} A=\bigcap_{g\in G}\bigcup_{l\in\mathbb{N}}\bigcap_{j=l}^\infty \sigma^{g} Z_{t_j}. \end{aligned}$$* *Proof.* Lemma [Lemma 32](#Measure-Inter-1){reference-type="ref" reference="Measure-Inter-1"} implies that for every $t_l\in\mathcal{L}$ $$Z_{t_l}\subseteq \left(\bigcap_{j=l}^\infty Z_{t_j}\right)\cupdot\bigcup_{r=l+1}^\infty V_{t_r}.$$ Therefore, $$\begin{aligned} \mu'(Z_{t_l})\leq\mu'\left[ \left(\bigcap_{j=l}^\infty Z_{t_j}\right)\cupdot\bigcup_{r=l+1}^\infty V_{t_r}\right]=\mu' \left(\bigcap_{j=l}^\infty Z_{t_j}\right)+\mu'\left(\bigcup_{r=l+1}^\infty V_{t_r}\right)\leq 1. \end{aligned}$$ Thus, $$\begin{aligned} \lim_{l\to\infty}\mu'(Z_{t_l})\leq\lim_{l\to\infty}\mu' \left(\bigcap_{j=l}^\infty Z_{t_j}\right)+\lim_{l\to\infty}\mu'\left(\bigcup_{r=l+1}^\infty V_{t_r}\right)\leq 1.\end{aligned}$$ Since $\sum_{r=1}^\infty \mu'(V_{t_r})$ converges by Lemma [Lemma 31](#V's){reference-type="ref" reference="V's"}, we obtain that $$\lim_{l\to\infty}\mu'\left(\bigcup_{r=l+1}^\infty V_{t_r}\right)=0.$$ Therefore, Lemma [Lemma 27](#Measure-1){reference-type="ref" reference="Measure-1"} implies equation ([\[Inter-1\]](#Inter-1){reference-type="ref" reference="Inter-1"}). The last part of this Lemma follows directly from ([\[Inter-1\]](#Inter-1){reference-type="ref" reference="Inter-1"}), since it implies $\mu'(\bigcup_{l\in\mathbb{N}}\bigcap_{j=l}^\infty Z_{t_j})=1$. ◻ The following Lemma is inspired by [@CeCoGo23 Lemma 5.7]. **Lemma 34**. *Let $A$ be as in Lemma [Lemma 33](#Measure-Inter-12){reference-type="ref" reference="Measure-Inter-12"} and let $\pi: \overline{O_\sigma(\eta)}\to \overleftarrow{G}$ be the factor map from $\overline{O_\sigma(\eta)}$ to its associated $G$-odometer as in Proposition [Proposition 4](#pimap){reference-type="ref" reference="pimap"}. 
Then, $\pi$ is injective when restricted to $A$.* *Proof.* Let $x,y\in A$ such that $\pi(x)=\pi(y)$. We aim to prove that $x=y$. Let $g\in G$. Since $x,y\in A$, there exists $l_0\in\mathbb{N}$ such that $\sigma^{g^{-1}} x,\sigma^{g^{-1}}y\in \bigcap_{j=l_0}^{\infty} Z_{t_j}$. Let $k\geq l_0$. $\pi(x)=\pi(y)$ implies there exists $v_k\in D_{t_k}$ such that $x,y\in\sigma^{v_k^{-1}} C_{t_k}$. Therefore, $\sigma^{g^{-1}}x,\sigma^{g^{-1}}y\in \sigma^{{(v_kg)}^{-1}}C_{t_k}$. Let $v\in D_{t_k}$ such that $v_kg\in v\Gamma_{t_k}$. This implies $\sigma^{g^{-1}}x,\sigma^{g^{-1}}y\in \sigma^{{v}^{-1}}C_{t_k}^0$. Thus, there exist $w,z\in C_{t_k}^0$ such that $\sigma^{g^{-1}}x=\sigma^{v^{-1}}w$ and $\sigma^{g^{-1}} y=\sigma^{v^{-1}}z$. Note that $w|_{D_{t_k}}=z|_{D_{t_k}}$ since $w$ and $z$ are in $C_{t_k}^0$. In particular, $w(v)=z(v)$. Therefore, $x(g)=y(g)$. Since this holds for every $g\in G$, we conclude that $x=y$. ◻ **Proposition 35**. *If $\mu'\in M_G(\overline{O_\sigma(\eta)})$, then $\pi: \overline{O_\sigma(\eta)}\to\overleftarrow{G}$ is a measure conjugacy of $\mu'$.* *Proof.* Let $A$ be as in Lemma [Lemma 33](#Measure-Inter-12){reference-type="ref" reference="Measure-Inter-12"}. By Lemma [Lemma 34](#1-1){reference-type="ref" reference="1-1"}, we know that $\pi|_A: A\to\pi(A)$ is a bijective map satisfying $\pi|_A(\sigma^{g}x)=\phi^{g}\pi|_A(x)$. Moreover, [@Gl03 Theorem 2.8] implies that $\pi|_A$ and $\pi|_A^{-1}$ are measurable. Recall that $\overleftarrow{G}$ is uniquely ergodic with unique invariant measure $\nu$. Thus, $\nu(B)=\mu'(\pi|_A^{-1}(B))$ for every Borel set $B\subseteq \pi(A)$. This implies that $\pi|_A$ is a measure conjugacy of $\mu'$. ◻ The following Proposition is a well-known fact. We provide a proof for completeness. **Proposition 36**. *Let $(X,\sigma,\mu')$ and $(Y,\phi,\nu)$ be two measure conjugate dynamical systems, where the measure conjugacy is given by $\pi': X'\to Y'$ with $X'\subseteq X$, $Y'\subseteq Y$ invariant Borel sets. Then $\mu'$ is ergodic if and only if $\nu$ is ergodic.* *Proof.* The measure conjugacy implies that for every Borel set $B\subseteq Y$, we have $\nu(B)=\mu'(\pi'^{-1}(B))$. Suppose that $\mu'$ is ergodic and let $B\subseteq Y$ be an invariant Borel set. Observe that $C=\pi'^{-1}(B\cap Y')$ is an invariant Borel set. Thus, $$\begin{aligned} \nu(B)=\nu(B\cap Y')=\mu'(\pi'^{-1}(B\cap Y'))=\mu'(C)\in \{0,1\}. \end{aligned}$$ Therefore, $\nu$ is an ergodic measure. We can interchange the roles of $\pi'$ and $\pi'^{-1}$ to obtain that $\nu$ ergodic implies $\mu'$ ergodic. ◻ *Proof of Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"}.* Recall that if we take a subsequence $(\Gamma_{n_j})_{j\in\mathbb{N}}\subseteq (\Gamma_n)_{n\in\mathbb{N}}$, then the associated $G$-odometers coincide, i.e., $\overleftarrow{G}=\varprojlim(G/\Gamma_n,\varphi_n)=\varprojlim(G/\Gamma_{n_j},\varphi_{n_j})$ (see [@CoPe08]). Therefore, possibly by taking a subsequence of $(\Gamma_n)_{n\in\mathbb{N}}$, we can suppose that $(\Gamma_n)_{n\in\mathbb{N}}$ satisfies condition ([\[LL\]](#LL){reference-type="ref" reference="LL"}). Let $\eta\in\{0,1\}^G$ be the Toeplitz sequence defined in Section [4](#sec:irregular-construction){reference-type="ref" reference="sec:irregular-construction"}. 
Proposition [Proposition 17](#T1T2){reference-type="ref" reference="T1T2"} ensures that $\eta$ is irregular, and Proposition [Proposition 15](#period-structure){reference-type="ref" reference="period-structure"} implies that $\overleftarrow{G}$ is the maximal equicontinuous factor of $\overline{O_\sigma(\eta)}$. On the other hand, Proposition [Proposition 35](#Conjugacy){reference-type="ref" reference="Conjugacy"} and Proposition [Proposition 36](#Ergodic-topo){reference-type="ref" reference="Ergodic-topo"} imply that $\overline{O_\sigma(\eta)}$ is uniquely ergodic. Consequently, $\overline{O_\sigma(\eta)}$ is a topo-isomorphic extension of its maximal equicontinuous factor $\overleftarrow{G}$ by Proposition [Proposition 35](#Conjugacy){reference-type="ref" reference="Conjugacy"}. ◻ *Proof of Corollary [Corollary 2](#coro:main2){reference-type="ref" reference="coro:main2"}.* This follows directly from Theorem [Theorem 1](#theo:main1){reference-type="ref" reference="theo:main1"}, since, under the assumption that $G$ is amenable, we can apply Theorem [Theorem 3](#FuGrLe){reference-type="ref" reference="FuGrLe"}. ◻ ## Acknowledgements {#acknowledgements .unnumbered} The author gratefully thanks María Isabel Cortez for her invaluable guidance and insightful comments throughout this work. Additionally, the author is grateful to Maik Gröger and Till Hausser for meaningful conversations related to this research. J. Auslander, *Minimal flows and their extensions*, North-Holland Mathematics Studies, 153, North-Holland, Amsterdam, 1988. Y. Benoist and H. Oh, *Equidistribution of rational matrices in their conjugacy classes*, Geom. Funct. Anal. **17** (2007), no. 1, 1--32. T. Ceccherini-Silberstein and M. Coornaert, *Cellular automata and groups*, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2010. P. Cecchi-Bernales, M. I. Cortez and J. Gómez, *Invariant measures of Toeplitz subshifts on non-amenable groups*, arXiv preprint arXiv:2305.09835 (2023). M.I. Cortez, *${\Bbb Z}^d$ Toeplitz arrays*, Discrete Contin. Dyn. Syst. **15** (2006), no. 3, 859--881. M.I. Cortez and S. Petite, *$G$-odometers and their almost $1$-$1$ extensions*, J. Lond. Math. Soc. (2) **78** (2008), no. 1, 1--20. M.I. Cortez and S. Petite, *Invariant measures and orbit equivalence for generalized Toeplitz subshifts*, Groups, Geometry and Dynamics (8), 2014. T. Downarowicz, *Survey of odometers and Toeplitz flows*, Algebraic and topological dynamics, 7--37, Contemp. Math., 385, Amer. Math. Soc., Providence, RI, 2005. T. Downarowicz and E. Glasner, *Isomorphic extensions and applications*, Topol. Methods Nonlinear Anal. **48** (2016), no. 1, 321--338. T. Downarowicz and S. Kasjan, *Odometers and Toeplitz systems revisited in the context of Sarnak's conjecture*, Studia Math. **229** (2015), no. 1, 45--72. T. Downarowicz and T. Lacroix, *Almost 1-1 extensions of Furstenberg-Weiss type and applications to Toeplitz flows*, Studia Math. (2) **130** (1998), 149--170. G. Fuhrmann, M. Gröger and D. Lenz, *The structure of mean equicontinuous group actions*, Israel J. Math. **247** (2022), no. 1, 75--123. R. Gjerde and Ø. Johansen, *Bratteli-Vershik models for Cantor minimal systems: applications to Toeplitz flows*, Ergodic Theory and Dynamical Systems **20** (2000), 1687--1710. E. Glasner, *Ergodic theory via joinings*, Mathematical Surveys and Monographs, 101, Amer. Math. Soc., Providence, RI, 2003. K. Jacobs and M. Keane, *$0$-$1$-sequences of Toeplitz type*, Z. Wahrsch. Verw. Gebiete **13** (1969), 123--131. A. 
Kechris, *Classical descriptive set theory*, Graduate Texts in Mathematics, 156, Springer, New York, 1995. F. Krieger, *Toeplitz subshifts and odometers for residually finite groups*, in *École de Théorie Ergodique*, 147--161, Sémin. Congr., 20, Soc. Math. France, Paris, 2010. M. Ła̧cka and M. Straszak, *Quasi-uniform convergence in dynamical systems generated by an amenable group action*, J. Lond. Math. Soc. (2) **98** (2018), no. 3, 687--707. J. Li, S. Tu and X. Ye, *Mean equicontinuity and mean sensitivity*, Ergodic Theory Dynam. Systems **35** (2015), no. 8, 2587--2612. W. Veech, *Point-distal flows*, Amer. J. Math. **92** (1970), 205--242. S. Williams, *Toeplitz minimal flows which are not uniquely ergodic*; Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete (67), 1984.
arxiv_math
{ "id": "2309.01720", "title": "Topo-isomorphisms of irregular Toeplitz subshifts for residually finite\n groups", "authors": "Jaime G\\'omez", "categories": "math.DS", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Three refined and refined harmonic extraction-based Jacobi--Davidson (JD) type methods are proposed, and their thick-restart algorithms with deflation and purgation are developed to compute several generalized singular value decomposition (GSVD) components of a large regular matrix pair. The new methods are called refined cross product-free (RCPF), refined cross product-free harmonic (RCPF-harmonic) and refined inverse-free harmonic (RIF-harmonic) JDGSVD algorithms, abbreviated as RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD, respectively. The new JDGSVD methods are more efficient than the corresponding standard and harmonic extraction-based JDSVD methods proposed previously by the authors, and can overcome the erratic behavior and intrinsic possible non-convergence of the latter ones. Numerical experiments illustrate that RCPF-JDGSVD performs better for the computation of extreme GSVD components while RCPF-HJDGSVD and RIF-HJDGSVD suit better for that of interior GSVD components. author: - "Jinzhi Huang[^1]" - "Zhongxiao Jia[^2]" title: "Refined and refined harmonic Jacobi--Davidson methods for computing several GSVD components of a large regular matrix pair[^3]" --- Generalized singular value decomposition, generalized singular value, generalized singular vector, standard extraction, harmonic extraction, refined extraction, refined harmonic extraction, Jacobi--Davidson type method 65F15, 15A18, 65F10 JINZHI HUANG AND ZHONGXIAO JIA # Introduction {#sec:1} The GSVD was initially established by Van Loan [@van1976generalizing] and developed by Paige and Saunders [@paige1981towards], and it has soon become one of the most important matrix decompositions [@golub2012matrix; @stewart2001matrix; @stewart90]. Let $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{p\times n}$ with $m+p\geq n$. Suppose that $\mathcal{N}(A)\cap\mathcal{N}(B)=\{\bm{0}\}$, where $\mathcal{N}(\cdot)$ denotes the null space of a matrix. Then $(A,B)$ is called a regular matrix pair. Write $q_1=\dim(\mathcal{N}(A))$, $q_2=\dim(\mathcal{N}(B))$ and $l_1=\dim(\mathcal{N}(A^T))$, $l_2=\dim(\mathcal{N}(B^T))$, respectively, where the superscript $T$ denotes the transpose of a matrix and $\dim(\cdot)$ denotes the dimension of a subspace. Then the GSVD of $(A,B)$ is as follows: $$\label{Gsvd} \left\{\begin{aligned} &A=U\Sigma_AX^{-1}, \\[0.3em] &B=V\Sigma_BX^{-1}, \end{aligned}\right. \qquad\mbox{with}\qquad \left\{\begin{aligned} &\Sigma_A=\diag\{C,\mathbf{0}_{l_1, q_1},I_{q_2}\}, \\[0.3em] &\Sigma_B=\diag\{S,I_{q_1},\mathbf{0}_{l_2, q_2}\}, \end{aligned}\right.$$ where $U=[U_q,U_{l_1},U_{q_2}]$ and $V=[V_q,V_{q_1},V_{l_2}]$ are orthogonal, $X=[X_q,X_{q_1},X_{q_2}]$ is nonsingular, and the diagonal $C=\diag\{\alpha_1,\dots,\alpha_q\}$ and $S=\diag\{\beta_1,\dots,\beta_q\}$ satisfy $$0<\alpha_i,\beta_i<1 \qquad\mbox{and}\qquad \alpha_i^2+\beta_i^2=1, \qquad i=1,\dots,q$$ with $q=n-q_1-q_2$. Here the subscripts in the block submatrices of $U,V,X$ are their column numbers, and $I_{i}$ and $\bm{0}_{i,j}$ denote the $i$-by-$i$ identity matrix and $i$-by-$j$ zero matrix, respectively. The subscripts are dropped whenever their sizes are clear from the context. Let $u_i$, $v_i$ and $x_i$ be the $i$th columns of $U_q$, $V_q$ and $X_q$, respectively, $i=1,\dots,q$. Then the quintuples $(\alpha_i,\beta_i,u_i,v_i,x_i)$, $i=1,\dots,q$ are called *nontrivial* GSVD components of $(A,B)$. 
In particular, the scalar pairs $(\alpha_i,\beta_i)$ or, equivalently, $\sigma_i=\frac{\alpha_i}{\beta_i}$ are called the nontrivial generalized singular values of $(A,B)$, and $u_i,v_i$ and $x_i$ are the corresponding left and right generalized singular vectors, respectively. From [\[Gsvd\]](#Gsvd){reference-type="eqref" reference="Gsvd"}, for $i=1,2,\ldots,q$, the GSVD of $(A,B)$ can be written in the form $$\label{gsvdvector} \left\{\begin{aligned} Ax_i&=\alpha_i u_i, \\[0.3em] Bx_i&=\beta_i v_i, \\[0.3em] \beta_i A^Tu_i&=\alpha_i B^Tv_i. \end{aligned}\right.$$ Denote by $(\alpha_i,\beta_i)=(0,1)$ or $(\alpha_i,\beta_i)=(1,0)$ a trivial zero or infinite generalized singular value. Then the above form still holds with $u_i,v_i$ and $x_i$ being the left and right generalized singular vectors corresponding to the zero or infinite generalized singular value. From [\[Gsvd\]](#Gsvd){reference-type="eqref" reference="Gsvd"}, we have $X^T(A^TA+B^TB)X=I_n$. Therefore, $X$ is $(A^TA+B^TB)$-orthogonal, and its columns $x_i$'s are of $(A^TA+B^TB)$-norm unit length. Naturally, we require that any approximation to $x_i$ have the same length. In this paper, we consider the following GSVD computational problem of a large and possibly sparse regular matrix pair $(A,B)$. **Problem 1**. *For a given target $\tau>0$, label all the nontrivial generalized singular values of $(A,B)$ as $$\label{order} |\sigma_1-\tau|\leq\dots\leq|\sigma_{\ell}-\tau|< |\sigma_{\ell+1}-\tau|\leq\dots\leq|\sigma_q-\tau|.$$ We want to compute the GSVD components $(\alpha_i,\beta_i,u_i,v_i,x_i)$, $i=1,\dots,\ell$ associated with the $\ell$ generalized singular values $\sigma_i$, $i=1,\dots,\ell$ of $(A,B)$ closest to $\tau$.* If $\tau$ is inside the spectrum of the nontrivial generalized singular values of $(A,B)$, then those $(\alpha_i, \beta_i, u_i, v_i, x_i)$'s are called interior GSVD components of $(A,B)$; if $\tau$ is close to one of the ends of the nontrivial generalized singular spectrum, then they are called the extreme, i.e., largest or smallest, ones. In the sequel, we assume that the target $\tau$ is not equal to any generalized singular value of $(A,B)$. Hochstenbach [@hochstenbach2009jacobi] proposes a Jacobi--Davidson (JD) type GSVD method, called JDGSVD, to compute several extreme or interior GSVD components of $(A,B)$ where $B$ has full column rank. At the subspace expansion phase, an $(m+n)$-by-$(m+n)$ linear system, called the correction equation, needs to be solved iteratively; for analysis and details on the accuracy requirement on the inner iterations of JD type methods for eigenvalue and SVD problems, see [@huang2019inner; @huang2022harmonic; @huang2023cross; @jia2014inner; @jia2015harmonic], where it is shown that it generally suffices to solve correction equations with *low or modest* accuracy; that is, the relative errors of approximate solutions lie in $[10^{-4},10^{-2}]$. 
More generally, the JDGSVD method formulates the GSVD of $(A,B)$ either as the generalized eigendecomposition of the augmented matrix pair $\left(\begin{bmatrix}\begin{smallmatrix} &A\\A^T&\end{smallmatrix}\end{bmatrix},\begin{bmatrix}\begin{smallmatrix} I&\\&B^TB\end{smallmatrix}\end{bmatrix}\right)$ for $B$ of full column rank or that of $\left(\begin{bmatrix}\begin{smallmatrix} &B\\B^T&\end{smallmatrix}\end{bmatrix},\begin{bmatrix}\begin{smallmatrix} I&\\&A^TA\end{smallmatrix}\end{bmatrix}\right)$ for $A$ of full column rank, computes the corresponding generalized eigenpairs, and then reconstructs the desired approximate GSVD components from the relevant converged eigenpairs. For the second formulation, an $(p+n)$-by-$(p+n)$ correction equation is solved iteratively at each subspace expansion step. However, as has been theoretically proven and numerically confirmed in [@huang2021choices; @huang2023cross], a fairly ill conditioned $B$ or $A$ may make the corresponding JDGSVD method numerically backward unstable for the GSVD problem itself, even if the relative residuals of approximate eigenpairs of the underlying generalized eigenvalue problem are already at the level of machine precision. Zwaan and Hochstenbach [@zwaan2017generalized] present two GSVD methods, called the generalized Davidson (GDGSVD) and multidirectional (MDGSVD) methods, to compute several extreme GSVD components of $(A,B)$. The right searching subspace for the GDGSVD method is spanned by the residuals of the generalized Davidson method [@bai2000 Sec. 11.2.4 and Sec. 11.3.6] applied to the eigenvalue problem of the cross-product matrix pair $(A^TA,B^TB)$; that for the MDGSVD method is first expanded by two dimensions with the vectors formed by premultiplying the best approximate right generalized singular vector with $A^TA$ and $B^TB$, and then truncated by one dimension so that an inferior search direction is discarded. The left searching subspaces for these two methods are formed by premultiplying the right one with $A$ and $B$, respectively. These two methods make use of the standard extraction approach to compute the approximate GSVD components without explicitly forming $A^TA$, $B^TB$. Zwaan [@zwaan2019] utilizes the Kronecker canonical form of a matrix pair [@stewart90], and proves that the GSVD of $(A,B)$ is equivalent to the generalized eigendecomposition of a matrix pair with much larger order $2m+p+n$. Though the pair does not involve cross-products or any other matrix-matrix product, this formulation may be mainly of theoretical value since (i) the nontrivial generalized eigenvalues and eigenvectors of the larger structured matrix pair come in quadruples and are always complex, (ii) the conditioning of the structured generalized eigenvalue problem is unknown, and (iii) it is extremely hard to propose a numerically backward stable structure-preserving algorithm. Adapted standard and harmonic Rayleigh--Ritz projections [@golub2012matrix; @parlett1998symmetric; @saad2011; @stewart2001matrix], or called the standard and harmonic extraction approaches, for eigenvalue and generalized eigenvalue problems to GSVD problems, the authors have recently proposed the cross product-free (CPF) JDGSVD [@huang2023cross], the CPF-harmonic and inverse-free (IF) harmonic JDGSVD methods [@huang2022harmonic], written as CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD for short, to solve Problem [Problem 1](#probl){reference-type="ref" reference="probl"}. 
At the subspace expansion step, each of these methods requires an approximate solution of its own $n$-by-$n$ correction equation, and uses it to expand the right searching subspace; the two left searching subspaces are formed by premultiplying the right one with $A$ and $B$, respectively. The three methods are fundamentally different in the extraction phase. The CPF-JDGSVD method works on $(A,B)$ directly, applies the standard extraction approach to the left and right searching subspaces, and computes the GSVD of a small projection matrix pair. It implicitly realizes the standard Rayleigh--Ritz projection of the generalized eigenvalue problem of $(A^TA,B^TB)$ onto the right searching subspace [@huang2023cross]. With $B$ of full column rank, the CPF-HJDGSVD method implicitly realizes the harmonic extraction approach of the singular value decomposition (SVD) problem of $AL^{-T}$ onto the right and one of the left searching subspaces, where $L^{T}\in\mathbb{R}^{n\times n}$ is the factor in the sparse Cholesky factorization $B^TB=LL^T$. At each extraction step, the method needs to solve the generalized eigenvalue problem of a small symmetric positive definite matrix pair. For a general and possibly rank deficient $B$, the IF-HJDGSVD method implicitly carries out the harmonic extraction of the generalized eigenvalue problem of $(A^TA,B^TB)$ onto the right searching subspace, and computes the generalized eigendecomposition of a small symmetric positive definite matrix pair; it is the inverses $(A^TA)^{-1}$ and $(B^TB)^{-1}$-free, and works for a general matrix pair $(A,B)$. For justifications and details, we refer the reader to [@huang2022harmonic] Just like those standard Rayleigh--Ritz methods for the matrix eigenvalue problem and the SVD problem [@cullum2002lanczos; @hochstenbach2001jacobi; @hochstenbach2009jacobi; @simon2000lowrank; @stoll2012krylov; @zwaan2017generalized], CPF-JDGSVD suits better for the computation of extreme generalized singular values of $(A,B)$. However, we deduce from, e.g., [@jia2004some; @jia2003implicitly; @jiastewart2001], that the approximate generalized singular vectors obtained by it may converge erratically or even fail to converge even if the approximate generalized singular values converge. These phenomena have been numerically observed in [@huang2023cross]. The refined extraction approach or refined Rayleigh--Ritz projection was initially proposed by the second author in [@jia1997refined], and has been intensively studied and developed in, e.g., [@hochstenbach2004harmonic; @hochstenbach2008harmonic; @jia1999; @jia2005; @jia2015harmonic; @jia2003implicitly; @jia2010refined; @jiastewart2001; @kokiopoulou2004computing; @wu16primme; @wu2015preconditioned] for the eigenvalue and SVD problems. For the large matrix eigenvalue problem, it is systematically accounted for in the books [@bai2000; @stewart2001matrix; @vandervorst]. As is shown, the refined extraction has better convergence, and fixes the erratic convergence behavior and possible non-convergence of standard and harmonic extractions [@jia1997refined; @jia1999; @jia2004some; @jia2005; @jia2010refined; @jiastewart2001]; also see, e.g., [@hochstenbach2004harmonic; @hochstenbach2008harmonic; @kokiopoulou2004computing; @wu16primme; @wu2015preconditioned]. 
Importantly, the basic convergence results in [@jia2004some; @jia2005; @jiastewart2001] adapted to CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD indicate that these three methods inherit those mentioned convergence deficiencies of the standard and harmonic extractions; that is, the three methods may work erratically and inefficiently. In this paper, in order to fix the deficiency of CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD and to better solve Problem [Problem 1](#probl){reference-type="ref" reference="probl"}, we will propose three refined extraction-based JDGSVD methods. We first present a refined JDGSVD (RCPF-JDGSVD) method. It computes the approximate generalized singular values by the standard extraction-based JDGSVD method but nontrivially adapts the refined extraction of the generalized eigenvalue problem of $(A^TA,B^TB)$ to the GSVD problem of $(A,B)$, and seeks new approximate generalized singular vectors that are generally more and can be much more accurate than those obtained by the standard extraction. Like the interior eigenvalue and SVD problems, for the computation of interior GSVD components, the standard extraction may produce spurious Ritz values and has difficulty to pick up good Ritz values, if any, correctly even if the searching subspaces are sufficiently good [@jia2015harmonic; @jiastewart2001; @stewart2001matrix; @vandervorst], causing that the CPF-JDGSVD method may converge slowly or even fail to converge, as has been numerically confirmed in [@huang2022harmonic]. Whenever Ritz values are poor or, though good, they are selected incorrectly, a refined extraction-based method certainly delivers incorrect approximate GSVD components, which severely affects the correct expansion of underlying subspaces in JDGSVD type methods. As a result, the refined extraction-based method may perform poorly when computing interior GSVD components. Just as the harmonic extraction-based methods for the eigenvalue and SVD problems [@stewart2001matrix; @vandervorst] that suit better for computing interior eigenpairs and singular triplets [@hochstenbach2004harmonic; @hochstenbach2008harmonic; @huang2019inner; @jia2002refinedh; @jia2015harmonic; @jia2010refined; @morgan1998harmonic; @morgan2006harmonic], our two harmonic extraction-based CPF-HJDGSVD and IF-HJDGSVD methods are more suitable for computing interior generalized singular values. Nevertheless, the harmonic Ritz vectors may have erratic convergence behavior and may even fail to converge because of spurious harmonic Ritz value(s) even if searching subspaces are sufficiently accurate [@jia2005; @jia2015harmonic]. In order to overcome the aforementioned deficiency, for the computation of interior GSVD components, on the basis of CPF-HJDGSVD and IF-HJDGSVD, we propose refined harmonic JDGSVD type methods that retain the merits of the harmonic JDGSVD methods for computing generalized singular values but seek more accurate approximate generalized singular vectors by using the refined extraction in a proper way. The resulting methods are abbreviated as RCPF-HJDGSVD and RIF-HJDGSVD, respectively. We first focus on the case $\ell=1$, and propose basic refined and refined harmonic extraction-based JDGSVD methods for the GSVD problem of interest. Then combining the methods with appropriate restart, deflation and purgation, we develop thick-restart RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms for Problem [Problem 1](#probl){reference-type="ref" reference="probl"} with $\ell>1$, with details on effective and efficient implementations described. 
We will numerically demonstrate that they have better convergence behavior and are considerably more efficient than the corresponding standard and harmonic extraction-based JDGSVD algorithms. We also illustrate that RCPF-JDGSVD performs better than RCPF-HJDGSVD and RIF-HJDGSVD for extreme GSVD components, but for interior GSVD problems the latter two ones are preferable and RIF-HJDGSVD has wider applicability than RCPF-HJDGSVD. The rest of this paper is organized as follows. In Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we review the CPF-JDGSVD method in [@huang2023cross] and the CPF-HJDGSVD and IF-HJDGSVD methods in [@huang2022harmonic]. In Section [3](#sec:3){reference-type="ref" reference="sec:3"}, we introduce a refined extraction approach for the GSVD computation; combining it with the standard, CPF-harmonic and IF-harmonic extractions, we propose the refined, refined CPF-harmonic and refined IF-harmonic extraction-based JDGSVD methods: RCPF-JDGSVD, RCPF-HJDGSVD, and RIF-HJDGSVD. In Section [4](#sec:4){reference-type="ref" reference="sec:4"}, we develop thick-restart schemes of these three JDGSVD algorithms with effective deflation and purgation for computing several GSVD components of $(A,B)$. Numerical experiments are presented in Section [5](#sec:6){reference-type="ref" reference="sec:6"} to illustrate the performance of the three refined JDGSVD algorithms and to make comparisons of them and the three standard and harmonic extraction-based JDGSVD algorithms. Finally, we conclude the paper in Section [6](#sec:7){reference-type="ref" reference="sec:7"}. # The standard and two harmonic extraction-based JDGSVD methods {#sec:2} We review CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD in [@huang2022harmonic; @huang2023cross] for computing $(\alpha_*,\beta_*,u_*,v_*,x_*): = (\alpha_1,\beta_1,u_1,v_1,x_1)$. Section [2.1](#subsec:2-1){reference-type="ref" reference="subsec:2-1"} is devoted to the construction and expansion of the searching subspaces, Section [2.2](#subsec:2-2){reference-type="ref" reference="subsec:2-2"} reviews the standard extraction, Section [2.3](#subsec:2-3){reference-type="ref" reference="subsec:2-3"} is on the CPF-harmonic extraction, and Section [2.4](#subsec:2-4){reference-type="ref" reference="subsec:2-4"} describes the IF-harmonic extraction. ## The construction and expansion of searching subspaces {#subsec:2-1} Assume that a $k$-dimensional right searching subspace $\mathcal{X}\subset\mathbb{R}^{n}$ is available, from which an approximation to $x_*$ is sought. Then we construct the two left searching subspaces $$\label{search} \mathcal{U}=A \mathcal{X} \qquad\mbox{and}\qquad \mathcal{V}=B\mathcal{X},$$ from which approximations to $u_*$ and $v_*$ are extracted, respectively. Theorem 2.1 of [@huang2023cross] shows that the distance between $u_*$ and $\mathcal{U}$ is as small as that between $x_*$ and $\mathcal{X}$ provided that $\alpha_*$ is not very small; analogously, the distance between $v_*$ and $\mathcal{V}$ is as small as that between $x_*$ and $\mathcal{X}$ if $\beta_*$ is not very small. Therefore, for the GSVD components corresponding to not very large or small generalized singular values, the left searching subspaces $\mathcal{U}$ and $\mathcal{V}$ constructed by [\[search\]](#search){reference-type="eqref" reference="search"} are as good as $\mathcal{X}$. 
Let $\widetilde X\in\mathbb{R}^{n\times k}$ be an orthonormal basis matrix of $\mathcal{X}$, and compute the thin QR factorizations $$\label{qrAXBX} A \widetilde X= \widetilde UR_{A} \qquad\mbox{and}\qquad B \widetilde X= \widetilde VR_{B}$$ to obtain the orthonormal basis matrices $\widetilde U\in\mathbb{R}^{m\times k}$ and $\widetilde V\in\mathbb{R}^{p\times k}$ of $\mathcal{U}$ and $\mathcal{V}$. With $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{X}$ as well as their orthonormal bases available, we can use one of the following six extraction approaches to compute an approximation to the desired GSVD component $(\alpha_*,\beta_*,u_*,v_*,x_*)$: (i) the standard extraction, as done in the CPF-JDGSVD method [@huang2023cross], which will be reviewed in Section [2.2](#subsec:2-2){reference-type="ref" reference="subsec:2-2"}; (ii) the CPF-harmonic extraction, as exploited by the CPF-HJDGSVD method [@huang2022harmonic], which will be sketched in Section [2.3](#subsec:2-3){reference-type="ref" reference="subsec:2-3"}; (iii) the IF-harmonic extraction, as adopted in the IF-HJDGSVD method [@huang2022harmonic], which will be reviewed in Section [2.4](#subsec:2-4){reference-type="ref" reference="subsec:2-4"}; (iv) the refined CPF extraction; (v) the refined CPF-harmonic extraction; (vi) the refined IF-harmonic extraction. In Section [3](#sec:3){reference-type="ref" reference="sec:3"}, we shall propose the three extraction approaches in (iv)--(vi). Together with their extensions and restart schemes for computing more than one GSVD component that will be presented in Section [4](#sec:4){reference-type="ref" reference="sec:4"}, we will set up our complete algorithms, which constitute our major contribution in this paper. We temporarily denote by $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ an approximation to the desired GSVD component $(\alpha_*,\beta_*,u_*,v_*,x_*)$ computed by any one of the six extraction approaches listed above, where the positive scalar pair $(\tilde\alpha,\tilde\beta)$ is required to satisfy $\tilde\alpha^2+\tilde\beta^2=1$, and the 2-norm unit length vectors $\tilde u\in\mathcal{U}$, $\tilde v\in\mathcal{V}$ and the $(A^TA+B^TB)$-norm unit length $\tilde x\in\mathcal{X}$ are required to satisfy $A\tilde x=\tilde\alpha \tilde u$ and $B\tilde x=\tilde\beta \tilde v$. Therefore, for the GSVD problem of $(A,B)$, in terms of [\[gsvdvector\]](#gsvdvector){reference-type="eqref" reference="gsvdvector"}, the GSVD residual of $(\tilde\alpha, \tilde\beta, \tilde u, \tilde v,\tilde x)$ is $$\label{residual} r=r(\tilde\alpha,\tilde\beta,\tilde u,\tilde v,\tilde x)= \tilde\beta A^T\tilde u-\tilde\alpha B^T\tilde v.$$ Clearly, $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ is an exact GSVD component of $(A,B)$ if and only if $r=\bm{0}$. Let $\textit{tol}>0$ be a user-prescribed stopping tolerance. If $$\label{converge} \|r\|\leq (\tilde\beta\|A\|_1+\tilde\alpha\|B\|_1)\cdot \textit{tol},$$ we stop the iterations and accept $(\tilde\alpha,\tilde\beta,\tilde u,\tilde v,\tilde x)$ as a converged approximation to the desired $(\alpha_*,\beta_*,u_*,v_*,x_*)$. Throughout the paper, we denote by $\|\cdot\|$ and $\|\cdot\|_1$ the $2$- and $1$-norms of a matrix or vector, respectively. If $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ does not yet converge, a JDGSVD type method first expands the right searching subspace $\mathcal{X}$, then updates the left searching subspaces $\mathcal{U}$ and $\mathcal{V}$ in the way [\[search\]](#search){reference-type="eqref" reference="search"}. 
Specifically, notice that $$\label{defy} \tilde y=(A^TA+B^TB)\tilde x=\tilde\alpha A^T\tilde u+\tilde\beta B^T\tilde v$$ satisfies $\tilde{y}^T\tilde x=1$. Therefore, $I-\tilde{y}\tilde{x}^T$ and $I-\tilde{x}\tilde{y}^T$ are oblique projectors. We approximately solve the correction equation $$\label{cortau} (I-\tilde{y}\tilde{x}^T)(A^TA-\rho^2B^TB)(I-\tilde{x}\tilde{y}^T)t=-r \qquad\mbox{for}\qquad t\perp \tilde{y}$$ using some Krylov subspace iterative method such as the MINRES method [@saad2003] with the fixed $\rho=\tau$ when the approximate GSVD component $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ is not yet reasonably good, and then switch to solving problem [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} with the adaptively changing $\rho=\tilde\theta:=\tilde\alpha/\tilde\beta$ if $$\label{fixtol} \|r\|\leq(\tilde\beta\|A\|_1+\tilde\alpha\|B\|_1)\cdot \textit{fixtol}$$ for a user-prescribed tolerance $\textit{fixtol}>0$, say, $10^{-4}$. Criterion [\[fixtol\]](#fixtol){reference-type="eqref" reference="fixtol"} means that $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ is already a fairly good approximation to $(\alpha_*,\beta_*,u_*,v_*,x_*)$. Iteratively solving the correction equations of form [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} in the JDGSVD type methods is called the inner iterations, and the extraction of approximate GSVD components with respect to $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{X}$ is called the outer iterations. It has been shown in [@huang2023cross] that solving the correction equations with *low or modest* accuracy generally suffices to make the outer iterations of the resulting inexact JD type GSVD algorithms well mimic those of their exact counterparts where all the correction equations are solved accurately. Therefore, for the correction equations of form [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} in the JDGSVD methods proposed in [@huang2022harmonic; @huang2023cross] and in the new JDGSVD methods to be proposed in this paper, we adopt the inner stopping criterion in [@huang2023cross], and stop the inner iterations when the inner relative residual norm $\|r_{in}\|$ of an approximate solution satisfies $$\label{inncov} \|r_{in}\|\leq\min\{2c\tilde\varepsilon,0.01\},$$ where $\tilde\varepsilon\in[10^{-4},10^{-3}]$ is a user-prescribed parameter and $c$ is a constant depending on the value of $\rho$ and all the approximate generalized singular values of $(A,B)$ computed by the underlying JDGSVD method during the current outer iteration. An approximate solution of [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"}, still denoted by $t$ for brevity, is utilized to expand $\mathcal{X}$ so as to obtain the new $\mathcal{X}_{\mathrm{new}}={\rm span}\{\widetilde X,t\}$, and the corresponding orthonormal basis matrix is updated by $$\label{expandX} \widetilde X_{\mathrm{new}}=[\widetilde X,\ x_{+}] \qquad\mbox{with}\qquad x_{+}=\frac{(I-\widetilde{X}\widetilde{X}^T) t}{\| (I-\widetilde{X}\widetilde{X}^T) t\|},$$ where $x_{+}$ is called an expansion vector. 
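To make the expansion step concrete, the following sketch (in Python with NumPy/SciPy; all function and variable names are ours and purely illustrative, not part of any JDGSVD code discussed here) applies the obliquely projected operator of [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} in a matrix-free fashion and orthonormalizes the approximate solution as in [\[expandX\]](#expandX){reference-type="eqref" reference="expandX"}. We call GMRES rather than MINRES only because the obliquely projected operator need not be symmetric; in practice any Krylov solver run to the modest accuracy [\[inncov\]](#inncov){reference-type="eqref" reference="inncov"} suffices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def expansion_vector(A, B, Xt, x, u, v, alpha, beta, rho):
    """One illustrative subspace-expansion step: approximately solve the
    correction equation (I - y x^T)(A^T A - rho^2 B^T B)(I - x y^T) t = -r
    for the current approximation (alpha, beta, u, v, x) with A x = alpha*u,
    B x = beta*v, and return the orthonormalized expansion vector x_+."""
    n = A.shape[1]
    r = beta * (A.T @ u) - alpha * (B.T @ v)          # GSVD residual
    y = alpha * (A.T @ u) + beta * (B.T @ v)          # y = (A^T A + B^T B) x

    def matvec(t):
        t = t - x * (y @ t)                           # right projector I - x y^T
        w = A.T @ (A @ t) - rho**2 * (B.T @ (B @ t))  # apply A^T A - rho^2 B^T B
        return w - y * (x @ w)                        # left projector I - y x^T

    M = LinearOperator((n, n), matvec=matvec, dtype=float)
    # Placeholder Krylov solve; a loose tolerance in the spirit of the inner
    # stopping criterion is enough in practice.
    t, _ = gmres(M, -r)
    t = t - Xt @ (Xt.T @ t)                           # orthogonalize against X_tilde
    return t / np.linalg.norm(t)
```

The returned vector plays the role of $x_{+}$ in [\[expandX\]](#expandX){reference-type="eqref" reference="expandX"}.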
Making use of [\[search\]](#search){reference-type="eqref" reference="search"} and [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"} gives rise to the expanded left searching subspaces $$\mathcal{U}_{\mathrm{new}}=A\mathcal{X}_{\mathrm{new}}=\mathrm{span}\{\widetilde U,A x_{+}\} \mbox{\ \ and\ \ } \mathcal{V}_{\mathrm{new}}=B\mathcal{X}_{\mathrm{new}}=\mathrm{span}\{\widetilde V,B x_{+}\}.$$ We obtain their orthonormal basis matrices $\widetilde U_{\mathrm{new}}$ and $\widetilde V_{\mathrm{new}}$ by updating the thin QR factorizations $$\begin{aligned} A\widetilde X_{\mathrm{new}}&=&\widetilde U_{\mathrm{new}} \cdot R_{A,\mathrm{new}}=[\widetilde U, \tilde u_{+}] \cdot \begin{bmatrix}R_A & r_A\\&\gamma_A\end{bmatrix}, \label{updateura}\\ B\widetilde X_{\mathrm{new}}&=&\widetilde V_{\mathrm{new}} \hspace{0.05em} \cdot R_{B,\mathrm{new}} =[\widetilde V,\hspace{0.1em} \tilde v_{+}] \cdot \begin{bmatrix}R_B & r_B\\&\gamma_B\end{bmatrix}, \label{updateurb}\end{aligned}$$ where $$\begin{aligned} r_A=\widetilde U^TAx_{+},\qquad \gamma_A=\|Ax_{+}-\widetilde Ur_A\|,\qquad \tilde u_{+}=\frac{Ax_{+}-\widetilde Ur_A}{\gamma_A}, \label{updatera} \\ r_B=\widetilde V^TBx_{+},\qquad \gamma_B=\|Bx_{+}-\widetilde Vr_B\|,\qquad \tilde v_{+}=\frac{Bx_{+}-\widetilde Vr_B}{\gamma_B}. \label{updaterb}\end{aligned}$$ We then compute a new and hopefully better approximation $(\tilde\alpha,\tilde\beta,\tilde u, \tilde v,\tilde x)$ with respect to $\mathcal{X}_{\rm new}$ and $\mathcal{U}_{\rm new}$, $\mathcal{V}_{\rm new}$, and repeat the above process until convergence occurs. ## The standard extraction approach {#subsec:2-2} Given $k$-dimensional right and left searching subspaces $\mathcal{X}$ and $\mathcal{U}$, $\mathcal{V}$ of form [\[search\]](#search){reference-type="eqref" reference="search"}, the standard extraction approach finds nonnegative pairs $(\tilde\alpha,\tilde\beta)$ with $\tilde\alpha^2+\tilde\beta^2=1$, unit length $\tilde u\in\mathcal{U}$ and $\tilde v\in\mathcal{V}$, and $(A^TA+B^TB)$-norm unit length $\tilde x\in\mathcal{X}$ satisfying the conditions $$\label{sjdgsvd} \left\{\begin{aligned} &A\tilde x=\tilde\alpha\tilde u,\\ &B\tilde x=\tilde\beta\tilde v,\\ &\tilde\beta A^T\tilde u-\tilde\alpha B^T\tilde v \perp\mathcal{X}. \end{aligned}\right.$$ Write $\tilde\theta=\tilde\alpha/\tilde\beta$. It is straightforward to justify that $$(A^TA-\tilde\theta^2 B^TB)\tilde x\perp\mathcal{X},$$ which is exactly the standard Rayleigh--Ritz projection of the generalized eigenvalue problem of the matrix pair $(A^TA,B^TB)$ onto the subspace $\mathcal{X}$, and each of the $k$ pairs $(\tilde\theta^2,\tilde x)$ is a Ritz approximation. Therefore, we call $(\tilde\alpha,\tilde\beta,\tilde u,\tilde v,\tilde x)$ a Ritz approximation to $(\alpha_*,\beta_*,u_*,v_*,x_*)$ with $(\tilde\alpha,\tilde\beta)$ or $\tilde\theta=\frac{\tilde\alpha}{\tilde\beta}$ the Ritz value and $\tilde u$, $\tilde v$ and $\tilde x$ the left and right Ritz vectors of $(A,B)$ with respect to the left and right subspaces, respectively. It is known from [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"} that the projection matrices satisfy $\widetilde U^TA\widetilde X=R_A$ and $\widetilde V^TB\widetilde X=R_B$. Write $\tilde u=\widetilde U\tilde e$, $\tilde v=\widetilde V\tilde f$ and $\tilde x=\widetilde X\tilde d$. 
Then ([\[sjdgsvd\]](#sjdgsvd){reference-type="ref" reference="sjdgsvd"}) reduces to $$\label{sab} R_A\tilde d=\tilde\alpha\tilde e,\qquad R_B\tilde d=\tilde\beta\tilde f ,\qquad \tilde\beta R_A^T\widetilde e=\tilde\alpha R_B^T\tilde f,$$ which is the vector form of GSVD of $(R_A,R_B)$. Therefore, in the extraction phase, the standard extraction-based CPF-JDGSVD method computes the GSVD of the $k$-by-$k$ matrix pair $(R_A,R_B)$, picks up the GSVD component $(\tilde\alpha,\tilde\beta,\tilde e,\tilde f,\tilde d)$ corresponding to the generalized singular value $\tilde\theta=\frac{\tilde\alpha}{\tilde\beta}$ closest to the target $\tau$, and takes $$(\tilde\alpha,\tilde\beta,\tilde u,\tilde v,\tilde x)= (\tilde\alpha,\tilde\beta,\widetilde U\tilde e,\widetilde V\tilde f,\widetilde X\tilde d)$$ as an approximation to the desired GSVD component $(\alpha_*,\beta_*,u_*,v_*,x_*)$ of $(A,B)$. ## The CPF-harmonic extraction approach {#subsec:2-3} For $B$ of full column rank, let $B^TB=LL^T$ be the Cholesky factorization of $B^TB$. It is proven in [@huang2022harmonic] that $(\sigma_*,u_*,z_*)$ with $z_*=\frac{1}{\beta_*}L^Tx_{*}$ is a singular triplet of the matrix $$\label{deftildeA} \check{A}=AL^{-T}.$$ Take the $k$-dimensional $\mathcal{U}$ and $\mathcal{Z}=L^T\mathcal{X}$ as the left and right searching subspaces for the left and right singular vectors $u_*$ and $z_*$ of $\check A$, respectively. We note that $\widetilde{Z}=L^TX$ is a basis matrix of $\mathcal{Z}=L^T\mathcal{X}$. Then the CPF-harmonic extraction [@huang2022harmonic] finds positive scalars $\phi>0$ and vectors $\check u\in\mathcal{U}$ and $\check z\in\mathcal{Z}$ such that $$\label{cpfharmonic} \begin{bmatrix} 0 &\check A^T\\\check A& 0 \end{bmatrix} \begin{bmatrix} \check z\\\check u \end{bmatrix} -\phi \begin{bmatrix} \check z\\\check u \end{bmatrix} \ \perp\ \left(\begin{bmatrix} 0 &\check A^T\\\check A& 0 \end{bmatrix} -\tau I\right) \cdot \mathcal{R}\left(\begin{bmatrix} \widetilde Z&\\& \widetilde U \end{bmatrix}\right),$$ where $\mathcal{R}(\cdot)$ denotes the range space of a matrix. This is the harmonic extraction approach for the eigenvalue problem of the augmented matrix $\begin{bmatrix}\begin{smallmatrix} &\check A^T\\\check A& \end{smallmatrix}\end{bmatrix}$ with respect to the searching subspace $\mathcal{R}\left(\begin{bmatrix}\begin{smallmatrix} \widetilde Z&\\&\widetilde U\end{smallmatrix}\end{bmatrix} \right)$ and the given target $\tau>0$; see [@stewart2001matrix; @vandervorst]. Write $\check z=\widetilde Z\check d$ and $\check u=\widetilde U\check e$. It is shown in [@huang2022harmonic] that [\[cpfharmonic\]](#cpfharmonic){reference-type="eqref" reference="cpfharmonic"} amounts to the following symmetric generalized eigenvalue problem: $$\label{cpfeq20} \begin{bmatrix} R_A^TR_A\!+\!\tau^2R_B^TR_B\!\!\! & -2\tau R_A^T \\-2\tau R_A &\!\!\! \widetilde U^T\!\!A(B^T\!B)^{-1}\!\!A^T\widetilde U\!+\!\tau^2I \end{bmatrix}\!\! \begin{bmatrix} \check d\\\check e\end{bmatrix} =(\phi\!-\!\tau)\!\! \begin{bmatrix} -\tau R_B^TR_B\!\!\! &R_A^T\\R_A &\!\!\!-\tau I \end{bmatrix}\!\! \begin{bmatrix}\check d\\\check e\end{bmatrix}.$$ Denote by $H_{\mathrm{c}}$ and $G_{\mathrm{c}}$ the $2k\times 2k$ symmetric matrices in the left and right hand sides of the above equation, respectively. Computationally, suppose that $(B^TB)^{-1}=(LL^T)^{-1}$ can be efficiently applied to obtain $H_{\mathrm{c}}$. 
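For illustration only, here is a minimal dense sketch (Python/NumPy; the names are ours) of how the matrices of [\[cpfeq20\]](#cpfeq20){reference-type="eqref" reference="cpfeq20"} might be assembled and used. The dense Cholesky solve with $B^TB$ merely stands in for the assumed efficient application of $(B^TB)^{-1}$, and the positive definiteness exploited by `eigh` is the property of the pair stated in the text.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, eigh

def cpf_harmonic_pair(RA, RB, AtU, BtB, tau):
    """Assemble the 2k x 2k matrices H_c and G_c of the projected problem.
    RA, RB: the k x k factors R_A, R_B; AtU: A^T @ U_tilde (n x k);
    BtB: B^T @ B (assumed symmetric positive definite)."""
    k = RA.shape[0]
    W = cho_solve(cho_factor(BtB), AtU)            # (B^T B)^{-1} A^T U_tilde
    H = np.block([[RA.T @ RA + tau**2 * (RB.T @ RB), -2.0 * tau * RA.T],
                  [-2.0 * tau * RA, AtU.T @ W + tau**2 * np.eye(k)]])
    G = np.block([[-tau * (RB.T @ RB), RA.T],
                  [RA, -tau * np.eye(k)]])
    return H, G

def cpf_harmonic_value(RA, RB, AtU, BtB, tau):
    """Pick the eigenvalue nu of (G_c, H_c) largest in magnitude and
    return phi = tau + 1/nu together with the corresponding eigenvector."""
    H, G = cpf_harmonic_pair(RA, RB, AtU, BtB, tau)
    vals, vecs = eigh(G, H)                        # generalized symmetric problem
    i = int(np.argmax(np.abs(vals)))
    return tau + 1.0 / vals[i], vecs[:, i]
```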
The CPF-harmonic extraction approach computes the generalized eigendecomposition of the symmetric positive definite matrix pair $(G_{\mathrm{c}},H_{\mathrm{c}})$, picks up the largest generalized eigenvalue $\nu$ in magnitude and the corresponding eigenvector $\begin{bmatrix}\begin{smallmatrix} \check d\\\check e\end{smallmatrix}\end{bmatrix}$, and takes $$\label{cpfh} (\phi,\check u,\check z)=\left(\tau+\frac{1}{\mu},\frac{\widetilde U\check e}{\|\check e\|},\frac{\widetilde Z\check d}{\|\widetilde Z\check d\|}\right)$$ as an approximation to the singular triplet $(\sigma_*,u_*,z_*)$ of $\check A$. Note that the exact right generalized singular vector $x_*=\beta_*L^{-T}z_*$. Therefore, we take $L^{-T}\check z=L^{-T}\widetilde Z\check d =\widetilde X\check d$ as the approximation to $x_*$ in direction. Concretely, we take the approximate right generalized singular vector $\check x$ of $(A,B)$ as $$\label{cpfx} \check x=\frac{1}{\check\delta}\widetilde X\check d \qquad\mbox{with}\qquad \check\delta=\sqrt{\|\check e\|^2+\|\check f\|^2},$$ where $\check e$ is recomputed by $\check e=R_A\check d$ and $\check f=R_B\check d$. With such $\check\delta$, the approximate $\check x$ is of $(A^TA+B^TB)$-norm unit length [@huang2022harmonic]. We then take the new approximate generalized singular value and left generalized singular vectors as $$\label{checksigmauv} \check\alpha=\frac{\|\check e\|}{\check\delta},\qquad \check\beta=\frac{\|\check f\|}{\check\delta} \qquad\mbox{and}\qquad \check u=\frac{\widetilde U\check e}{\|\check e\|},\qquad \check v=\frac{\widetilde V\check f}{\|\check f\|},$$ which are called the CPF-harmonic Ritz approximations and satisfy $A\check x=\check \alpha \check u$ and $B\check x=\check\beta\check v$ with $\check\alpha^2+\check\beta^2=\|\check u\|=\|\check v\|=1$. Moreover, it is known from [@huang2022harmonic] that the new $\check\theta=\frac{\check\alpha}{\check\beta}$ is a better approximation to $\sigma_*$ than $\phi$ in [\[cpfh\]](#cpfh){reference-type="eqref" reference="cpfh"} in the sense that $$\label{rayqu} \|(A^TA-\check\theta^2B^TB)\check x\|_{(B^TB)^{-1}} \leq\|(A^TA-\phi^2B^TB)\check x\|_{(B^TB)^{-1}},$$ where $\|\cdot\|_M$ is the $M$-norm for a symmetric positive definite matrix $M$. ## The IF-harmonic extraction approach {#subsec:2-4} For a general and possibly rank deficient $B$, the CPF-harmonic extraction does not work. Alternatively, the IF-harmonic extraction [@huang2022harmonic] is for more general purpose. It finds approximate generalized singular values $\varphi>0$ and approximate right generalized singular vectors $\hat x\in\mathcal{X}$ with $\|\hat x\|_{A^TA+B^TB}=1$ such that $$\label{ifharmonic} (A^TA-\varphi^2B^TB)\hat x\ \perp\ (A^TA-\tau^2B^TB)\mathcal{X}.$$ This is precisely the harmonic Rayleigh--Ritz projection on the generalized eigenvalue problem of $(A^TA,B^TB)$ with respect to the subspace $\mathcal{X}$ and the given target $\tau^2$. Write $\hat x=\frac{1}{\hat\delta}\widetilde X\hat d$ with $\|\hat d\|=1$ and $\hat\delta$ a normalizing parameter to be determined. Then requirement [\[ifharmonic\]](#ifharmonic){reference-type="eqref" reference="ifharmonic"} is equivalent to $$\label{harmoniceq} \widetilde {X}^T(A^TA-\tau^2B^TB)^2\widetilde X\hat d =(\varphi^2-\tau^2)\widetilde{X}^T(A^TA-\tau^2B^TB) B^TB\widetilde X\hat d.$$ Denote by $H_{\tau}$ and $G_{\tau}$ the $k$-by-$k$ matrices in the left and right hand sides of the above equation, respectively. 
Then $(\varphi^2-\tau^2)$ is a generalized eigenvalue of the matrix pair $(H_{\tau},G_{\tau})$ with $\hat d$ the corresponding unit length generalized eigenvector. The IF-harmonic extraction approach computes the generalized eigendecomposition of $(G_{\tau},H_{\tau})$, and picks up the generalized eigenpair $(\nu,\hat d)$ corresponding to $\varphi=\sqrt{\tau^2+\frac{1}{\nu}}$ closest to $\tau$ among all the eigenpairs of $(G_{\tau},H_{\tau})$. Then $\varphi$ is the IF-harmonic Ritz value that approximates the desired $\sigma_*$ and $$\label{hatx} \hat x=\frac{1}{\hat\delta}\widetilde X\hat d \qquad\mbox{with}\qquad \hat\delta=\sqrt{\|\hat e\|^2+\|\hat f\|^2}$$ is the corresponding right IF-harmonic Ritz vector that approximates the desired $x_*$, where $\hat e=R_A\hat d$ and $\hat f=R_B\hat d$. Such $\hat\delta$ guarantees that $\hat x$ is of $(A^TA+B^TB)$-norm unit length. The corresponding IF-harmonic Ritz values and left IF-harmonic Ritz vectors are computed by $$\label{hatsigmauv} \hat\alpha=\frac{\|\hat e\|}{\hat \delta},\qquad \hat\beta=\frac{\|\hat f\|}{\hat \delta} \qquad\mbox{and}\qquad \hat u=\frac{\widetilde U\hat e}{\|\hat e\|},\qquad \hat v=\frac{\widetilde V\hat f}{\|\hat f\|}.\qquad$$ It is known from [@huang2022harmonic] that the IF-harmonic Ritz approximation $(\hat\alpha,\hat\beta,\hat u,\hat v,\hat x)$ satisfies $A\hat x=\hat\alpha\hat u$ and $B\hat x=\hat\beta\hat v$ with $\hat \alpha^2+\hat\beta^2=\|\hat u\|=\|\hat v\|=1$. The IF-harmonic approximate generalized singular value $\hat\theta=\frac{\hat\alpha}{\hat\beta}$ is better than the above $\varphi$ in the sense of [\[rayqu\]](#rayqu){reference-type="eqref" reference="rayqu"} where $\phi$, $\check\theta$ and $\check x$ are replaced by $\varphi$, $\hat\theta$ and $\hat x$, respectively. # The refined JDGSVD type methods {#sec:3} For an approximation to the desired GSVD component $(\alpha_*,\beta_*,u_*,v_*,x_*)$ obtained by the standard or harmonic extraction approaches described in Section [2](#sec:2){reference-type="ref" reference="sec:2"}, in this section we will unify the notation and denote the approximation by $(\alpha,\beta,u,v,x)$ with $\theta=\frac{\alpha}{\beta}$. We now propose a refined extraction approach: Find a unit length vector $x_r\in\mathcal{X}$ satisfying the optimal requirement $$\label{refined} \|(A^TA-\theta^2B^TB)x_r\| = \min_{w\in\mathcal{X},\|w\|=1}\|(A^TA-\theta^2B^TB)w\|,$$ rescale $x_r$ to $\bar x$ with $\|\bar x\|_{A^TA+B^TB}=1$, and use $\bar x$ to approximate $x_*$. We call $\bar x$ a refined approximate right generalized singular vector, or simply the refined or refined harmonic right Ritz vector, of $(A,B)$ over the subspace $\mathcal{X}$ if $\theta$ is a Ritz value or harmonic Ritz value obtained by the standard or harmonic extraction approaches. Jia [@jia2004some] has proven that if $(\theta^2,\bar{x})$ is not an exact eigenpair of $(A^TA,B^TB)$ then $\|(A^TA-\theta^2B^TB)\bar x\|<\|(A^TA-\theta^2B^TB)x\|$ strictly; moreover, if there is another standard or harmonic Ritz value close to $\theta$, then $$\|(A^TA-\theta^2B^TB)\bar x\|\ll\|(A^TA-\theta^2B^TB)x\|$$ generally holds, meaning that $\bar x$ may be much more accurate than $x$ as an approximation to $x_{*}$. 
Most importantly, the fundamental convergence results in [@jia2005; @jiastewart2001] indicate that some Ritz value $\theta\rightarrow \sigma_*$ unconditionally, and that the convergence of the refined Ritz or refined harmonic Ritz vector $\bar{x}\rightarrow x_*$ is also guaranteed once the distance between $x_*$ and $\mathcal{X}$ tends to zero, whereas the Ritz or (CPF- or IF-)harmonic Ritz vector $x$ may converge erratically and may even fail to converge even if $\mathcal{X}$ contains sufficiently accurate approximations to $x_*$. We now consider the accurate and efficient computation of $\bar{x}$. For an arbitrary unit length vector $w\in\mathcal{X}$, write $w=\widetilde Xd$ with $\|d\|=1$. Then $x_r=\widetilde X\bar d$ with $\|\bar d\|=1$, and the minimization problem [\[refined\]](#refined){reference-type="eqref" reference="refined"} is equivalent to $$\label{mineqmin} \|(A^TA-\theta^2B^TB)\widetilde X\bar d\|= \min_{d\in\mathbb{R}^{k}, \|d\|=1}\|(A^TA-\theta^2B^TB)\widetilde Xd\|.$$ Therefore, $\bar d$ is the right singular vector of $G_{\theta}=(A^TA-\theta^2B^TB)\widetilde X$ corresponding to its smallest singular value, and it is also the eigenvector of the cross-product matrix $$\label{defGr} H_{\theta}=G_{\theta}^TG_{\theta} =\widetilde{X}^T(A^TA-\theta^2B^TB)^2\widetilde X$$ associated with its smallest eigenvalue. Jia [@jia2006using] has proposed and developed a variant of the cross product-based QR algorithm for the accurate SVD computation of a general matrix. In our specific context, we only need to use the standard QR algorithm to obtain $\bar{d}$. Remarkably, Jia [@jia2006using] has shown that the cross product-based QR algorithm is much more efficient than the standard Golub--Kahan and Chan SVD algorithms [@golub2012matrix; @stewart2001matrix] applied to $G_{\theta}$ for $n\gg k$. More precisely, in finite precision arithmetic, Jia [@jia2006using] has proven the following: provided that the smallest singular value of $G_{\theta}$ is well separated from its second smallest one, the $\bar d$'s computed by the cross product-based QR algorithm and by the Golub--Kahan or Chan SVD algorithm essentially have the same accuracy; moreover, when the computed smallest singular value of $G_{\theta}$ is taken as the square root of the Rayleigh quotient $\bar d^TH_{\theta}\bar d$, calculated by the formula $(G_{\theta}\bar d)^T(G_{\theta}\bar d)$ with $\bar d$ the *computed* eigenvector of $H_{\theta}$, it has the same accuracy as the smallest singular value computed by the standard Golub--Kahan or Chan SVD algorithm applied to $G_{\theta}$; see [@jia2006using] for a detailed analysis and comparison. We point out that, in our context, the smallest singular value of $G_{\theta}$ tends to zero as $\theta\rightarrow \sigma_*$, while the second smallest one is typically not small and is thus well separated from the smallest one. In view of the above, rather than computing the SVD of $G_{\theta}$ at high cost, we compute the eigendecomposition of $H_{\theta}$ cheaply and pick up the desired eigenvector $\bar d$.
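As a concrete illustration, a minimal MATLAB sketch of this computation reads as follows; the names `Xt` (for $\widetilde X$) and `theta` are our own, and $A$ and $B$ are assumed to be available as sparse matrices.

```matlab
% Minimal sketch (our notation): compute the refined vector bar d as the
% eigenvector of the k-by-k cross-product matrix H_theta of (defGr)
% associated with its smallest eigenvalue.
Gtheta = A'*(A*Xt) - theta^2*(B'*(B*Xt));       % G_theta = (A'A - theta^2 B'B)*Xt, n-by-k
Htheta = Gtheta' * Gtheta;                      % H_theta = G_theta'*G_theta
Htheta = (Htheta + Htheta') / 2;                % symmetrize against rounding errors
[V, D]   = eig(Htheta);
[~, idx] = min(diag(D));
dbar     = V(:, idx);                           % refined vector, unit length
smin     = sqrt((Gtheta*dbar)'*(Gtheta*dbar));  % smallest singular value of G_theta
```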
We remark that, as the subspaces are expanded, we can efficiently form $H_{\theta}$ by $$\label{comGr} H_{\theta}=H_A+\theta^4H_B-\theta^2(H_{A,B}^T+H_{A,B}),$$ where the intermediate matrices $$\label{defHAB} H_A= \widetilde{X}^T(A^TA)^2\widetilde X,\qquad H_B= \widetilde{X}^T(B^TB)^2\widetilde X,\qquad H_{A,B}=\widetilde{X}^TA^TAB^TB\widetilde X$$ can be efficiently updated at each step and are also used to efficiently form the projected matrices $G_{\tau}=H_{A,B}-\tau^2H_B$ and $H_{\tau}=H_A+\tau^4H_B-\tau^2(H_{A,B}^T+H_{A,B})$ involved in the IF-harmonic extraction approach; see [\[harmoniceq\]](#harmoniceq){reference-type="eqref" reference="harmoniceq"}. Also, it is important to notice that when $\ell$ GSVD components are required, which will be considered in the next section, we can efficiently form all the cross-product matrices $H_{\theta}$ in [\[comGr\]](#comGr){reference-type="eqref" reference="comGr"} for different $\theta$'s. By definition, we need to rescale $x_r$ in [\[refined\]](#refined){reference-type="eqref" reference="refined"} to obtain the refined or refined harmonic Ritz vector $\bar x$ with $\|\bar x\|_{A^TA+B^TB}=1$. Write $$\bar x=\frac{1}{\bar\delta}\widetilde X\bar d,$$ where $\bar\delta$ is a normalizing factor to be determined. Following the same derivations as in Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we have $$\bar\delta=\sqrt{\|\bar e\|^2+\|\bar f\|^2} \qquad \mbox{with}\qquad \bar e=R_A\bar d \quad\mbox{and}\quad\bar f=R_B\bar d,$$ where $R_A$ and $R_B$ are defined in [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"}. Analogously to the CPF- and IF-harmonic extraction approaches, we compute the refined or refined harmonic Ritz values and the two refined or refined harmonic left Ritz vectors by $$\label{refinegsvd} \bar\alpha=\frac{\|\bar e\|}{\bar\delta},\qquad \bar\beta=\frac{\|\bar f\|}{\bar\delta} \qquad \mbox{and}\qquad \bar u=\frac{\widetilde U\bar e}{\|\bar e\|},\qquad \bar v=\frac{\widetilde V\bar f}{\|\bar f\|}.$$ It is easily verified that $$A\bar x=\bar\alpha\bar u \qquad\mbox{and}\qquad B\bar x=\bar\beta\bar v$$ with $\bar\alpha^2+\bar\beta^2=\|\bar u\|=\|\bar v\|=1$. The quintuple $(\bar\alpha,\bar\beta,\bar u,\bar v,\bar x)$ is called a refined, refined CPF-harmonic or refined IF-harmonic approximation to the desired GSVD component $(\alpha_*,\beta_*,u_*,v_*,x_*)$ of $(A,B)$, depending on which extraction approach computes $\theta$ in [\[refined\]](#refined){reference-type="eqref" reference="refined"}: the standard extraction, the CPF-harmonic extraction or the IF-harmonic extraction. Particularly, $(\bar\alpha,\bar\beta)$ or $\bar\theta=\frac{\bar\alpha}{\bar\beta}$ is called the refined, refined CPF-harmonic or refined IF-harmonic Ritz value and, correspondingly, $\bar u$, $\bar v$ and $\bar x$ are called the refined, refined CPF-harmonic or refined IF-harmonic left and right Ritz vectors of $(A,B)$. Combining the refined, refined CPF-harmonic and refined IF-harmonic extraction approaches with the subspace expansion approach described in Section [2.1](#subsec:2-1){reference-type="ref" reference="subsec:2-1"}, we have now proposed the refined CPF, refined CPF-harmonic and refined IF-harmonic JDGSVD methods, i.e., RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD, respectively. For each of these methods, the corresponding residual and stopping criterion are the same as [\[residual\]](#residual){reference-type="eqref" reference="residual"} and [\[converge\]](#converge){reference-type="eqref" reference="converge"}.
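Putting the pieces together, a minimal MATLAB sketch of the refined extraction (in our own notation, assuming the intermediate matrices `HA`, `HB`, `HAB` of [\[defHAB\]](#defHAB){reference-type="eqref" reference="defHAB"}, the matrices `RA`, `RB` and the basis matrices `Ut`, `Vt`, `Xt` from [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"}, as well as the shift `theta`, are maintained) reads:

```matlab
% Minimal sketch (our notation): cheap formation of H_theta via (comGr)
% and computation of the refined approximation (refinegsvd).
Htheta = HA + theta^4*HB - theta^2*(HAB' + HAB);
[V, D]   = eig((Htheta + Htheta')/2);
[~, idx] = min(diag(D));
dbar     = V(:, idx);                            % refined vector
ebar = RA*dbar;   fbar = RB*dbar;
deltabar = sqrt(norm(ebar)^2 + norm(fbar)^2);    % (A'A + B'B)-norm normalization
alphabar = norm(ebar)/deltabar;  betabar = norm(fbar)/deltabar;
ubar = Ut*(ebar/norm(ebar));     vbar = Vt*(fbar/norm(fbar));
xbar = Xt*(dbar/deltabar);
thetabar = alphabar/betabar;                     % refined (harmonic) Ritz value
```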
# Thick-restart refined JDGSVD algorithms with deflation and purgation {#sec:4}

In this section, by introducing appropriate deflation and purgation techniques, we develop practical thick-restart refined JDGSVD algorithms for solving Problem [Problem 1](#probl){reference-type="ref" reference="probl"}.

## Thick-restart {#subsec:5-1}

As the dimension $k$ of the searching subspaces increases, the storage requirements and computational costs of the previous basic JDGSVD type algorithms become unaffordable. When $k$ reaches the maximum number $k_{\max}$ allowed but the algorithms have not yet converged, it is necessary to restart them. To this end, we adopt the thick-restart technique, which was first advocated in [@stath1998] for the eigenvalue problem and has been nontrivially extended to the SVD and GSVD problems in [@huang2019inner; @huang2022harmonic; @huang2023cross; @stath1998; @wu16primme; @wu2015preconditioned]. Specifically, the thick-restart takes certain $k_{\min}$-dimensional subspaces of the current left and right searching subspaces as the initial left and right ones for the next cycle, so that they retain as much information as possible on the desired and $k_{\min}-1$ nearby GSVD components of $(A,B)$. We then expand them step by step in the way described in Section [2](#sec:2){reference-type="ref" reference="sec:2"}, compute new approximate GSVD components with respect to the expanded subspaces at each step, and check the convergence. If converged, we stop; otherwise, we repeat the same process until the dimension of the expanded subspaces reaches $k_{\max}$. We will present an efficient and stable computational procedure for the thick-restart. For each of RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD, when $k=k_{\max}$ but the method has not yet converged, we compute $k_{\min}$ approximate right generalized singular vectors $\bar x_i=\widetilde X\bar d_i$, $i=1,\dots,k_{\min}$ associated with the $k_{\min}$ approximate generalized singular values closest to $\tau$, where $\bar x_1$ is selected as the approximation to the desired $x_*$. Then the thick-restart takes the new initial right searching subspace $\mathcal{X}_{\mathrm{new}}=\mathrm{span}\{\bar x_1,\dots,\bar x_{k_{\min}}\}$. Denote $D_1=[\bar d_1,\dots,\bar d_{k_{\min}}]$. Next we show how to efficiently obtain the $k_{\min}$-dimensional new right and left subspaces $\mathcal{X}_{\rm new}$, $\mathcal{U}_{\rm new}$ and $\mathcal{V}_{\rm new}$ as well as their orthonormal bases, whose computational details were not described in [@huang2022harmonic; @huang2023cross]. Compute the thin QR factorization $D_1=Q_dR_d$ using $\mathcal{O}(k_{\max}k_{\min}^2)$ flops. Then $$\label{xnew} \widetilde{X}_{\mathrm{new}}=\widetilde XQ_d$$ is the orthonormal basis matrix of $\mathcal{X}_{\mathrm{new}}$, whose computation costs $2nk_{\max}k_{\min}$ flops. As for the new $\mathcal{U}_{\mathrm{new}}=A\mathcal{X}_{\mathrm{new}}$ and $\mathcal{V}_{\mathrm{new}}=B\mathcal{X}_{\mathrm{new}}$, we compute the thin QR factorizations of the small-sized matrices $$\label{qrsmall} R_AQ_d=Q_eR_{A,\mathrm{new}} \qquad \mbox{and} \qquad R_BQ_d=Q_fR_{B,\mathrm{new}}$$ using $\mathcal{O}(k_{\max}^2k_{\min})$ flops, where $R_A$, $R_B$ are defined as in [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"}.
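A minimal MATLAB sketch of these restart computations (in our notation, with `D1` collecting $\bar d_1,\dots,\bar d_{k_{\min}}$ and `Xt`, `RA`, `RB` the current quantities from [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"}) is the following; the subsequent basis updates are described next in the text.

```matlab
% Minimal sketch (our notation): thick-restart computations described above.
[Qd, Rd]     = qr(D1, 0);           % thin QR factorization D1 = Qd*Rd
Xt_new       = Xt * Qd;             % orthonormal basis matrix of X_new, cf. (xnew)
[Qe, RA_new] = qr(RA*Qd, 0);        % small thin QR factorizations, cf. (qrsmall)
[Qf, RB_new] = qr(RB*Qd, 0);
```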
Exploiting [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"} and [\[xnew\]](#xnew){reference-type="eqref" reference="xnew"}--[\[qrsmall\]](#qrsmall){reference-type="eqref" reference="qrsmall"}, we obtain the thin QR factorizations $$\begin{aligned} && A\widetilde{X}_{\mathrm{new}} = A \widetilde{X}Q_d=\widetilde UR_AQ_d=(\widetilde UQ_e)R_{A,\mathrm{new}},\\ &&B\widetilde{X}_{\mathrm{new}} = B\widetilde{X}Q_d=\widetilde VR_BQ_d=(\widetilde VQ_f)R_{B,\mathrm{new}},\end{aligned}$$ showing that the columns of $$\widetilde U_{\mathrm{new}}=\widetilde UQ_e \qquad\mbox{and}\qquad \widetilde V_{\mathrm{new}}=\widetilde VQ_f$$ form orthonormal bases of $\mathcal{U}_{\mathrm{new}}$ and $\mathcal{V}_{\mathrm{new}}$, respectively, whose computation costs $2(m+p)k_{\max}k_{\min}$ flops. Together with the computation of $\widetilde{X}_{\mathrm{new}}$, the whole process above costs approximately $2(m+n+p)k_{\max}k_{\min}$ flops since $m,p,n\gg k_{\max}>k_{\min}$. For the intermediate matrices in [\[defHAB\]](#defHAB){reference-type="eqref" reference="defHAB"} used to form $H_{\theta}$ defined by [\[comGr\]](#comGr){reference-type="eqref" reference="comGr"}, as has been done in the IF-harmonic JDGSVD algorithm, we efficiently update them by $$\label{inter} H_{A,\mathrm{new}}=Q_d^TH_AQ_d,\qquad H_{B,\mathrm{new}}=Q_d^TH_BQ_d,\qquad H_{A,B,\mathrm{new}}=Q_d^TH_{A,B}Q_d$$ at a cost of $\mathcal{O}(k_{\max}^2k_{\min})$ flops. Summarizing the above, the construction cost of orthonormal bases of the $k_{\min}$-dimensional $\mathcal{U}_{\mathrm{new}}$, $\mathcal{V}_{\mathrm{new}}$, $\mathcal{X}_{\mathrm{new}}$ and the matrices in [\[inter\]](#inter){reference-type="eqref" reference="inter"} is approximately $2(m+p+n)k_{\max}k_{\min}$ flops. Typically, one takes $k_{\max}=20$--$30$ and $k_{\min}=3$--$5$ in computation; see [@huang2022harmonic; @huang2023cross]. Therefore, forming the restarting initial left and right searching subspaces in the thick-restart is very cheap.

## Deflation and purgation {#subsec:5-2}

We can adapt the effective and efficient deflation techniques proposed in [@huang2022harmonic; @huang2023cross] to the thick-restart RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD. We will elaborate on some key properties of the subspaces during deflation that were not argued adequately or very clearly in [@huang2022harmonic; @huang2023cross]; these properties turn out to play a crucial role both mathematically and in effective and efficient implementations of JDGSVD type algorithms for computing more than one GSVD component. Suppose that one of the RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms has computed $j(<\ell)$ converged approximations $(\alpha_{i,c},\beta_{i,c},u_{i,c},v_{i,c},x_{i,c})$ to $(\alpha_i,\beta_i,u_i,v_i,x_i)$, whose residual norms satisfy $$\label{stopcrit} \|r_i\|=\|\beta_{i,c} A^Tu_{i,c}\!-\!\alpha_{i,c}B^Tv_{i,c}\| \leq(\beta_{i,c}\|A\|_1\!+\!\alpha_{i,c}\|B\|_1)\cdot tol, \quad i=1,\dots,j.$$ Denote $$\begin{aligned} &&U_c=[u_{1,c},\dots, u_{j,c}], \qquad\quad C_c=\diag\{\alpha_{1,c},\dots,\alpha_{j,c}\},\nonumber\\ &&V_c=[v_{1,c},\dots, v_{j,c}],\qquad\quad S_c=\diag\{\beta_{1,c},\dots,\beta_{j,c}\},\nonumber\\ &&X_c=[x_{1,c},\dots, x_{j,c}],\qquad\quad Y_c=(A^TA+B^TB)X_c. \label{convx}\end{aligned}$$ Then $$AX_c=U_cC_c,\quad BX_c=V_cS_c,\quad C_c^2+S_c^2=I_j,\quad Y_c=A^TU_cC_c+B^TV_cS_c,$$ and the F-norm of the residual matrix satisfies $$\|R_c\|_F=\|A^TU_cS_c-B^TV_cC_c\|_F\leq\sqrt{j(\|A\|_1^2+\|B\|_1^2)}\cdot \textit{tol}.$$ Suppose that the current $X_c$ and $Y_c$ are bi-orthogonal, i.e., $Y_c^TX_c=I_j$. We point out that this bi-orthogonality is fulfilled in our six JDGSVD type algorithms. It is known from Proposition 4.1 of [@huang2023cross] that $(\alpha_i,\beta_i,u_i,v_i,x_i),\ i=j+1,\ldots,q$ are the exact GSVD components of the deflated matrix pair $$\label{defpair} (A(I-X_cY_c^T),B(I-X_cY_c^T))$$ restricted to the range space of the oblique projector $(I-X_cY_c^T)$ if $\textit{tol}=0$ in ([\[stopcrit\]](#stopcrit){reference-type="ref" reference="stopcrit"}). Therefore, we can apply any one of RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD to the deflated matrix pair in [\[defpair\]](#defpair){reference-type="eqref" reference="defpair"} to compute the next desired GSVD component $(\alpha_{*},\beta_{*},u_{*},v_{*},x_{*}):= (\alpha_{j+1},\beta_{j+1},u_{j+1},v_{j+1},x_{j+1})$ of $(A,B)$. Remarkably, when the converged $(\alpha_{j,c},\beta_{j,c},u_{j,c},v_{j,c},x_{j,c})$ has been found, the current subspaces usually contain reasonably rich information on $u_{*},v_{*},x_{*}$. To make full use of such available information, we present an effective and efficient purgation technique in the thick-restart RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms. Instead of constructing initial searching subspaces from scratch, we purge the newly converged $$x_{j,c}:=x=\widetilde Xd$$ from the current $\mathcal{X}$, and take the reduced subspace, denoted by $\mathcal{X}_{\rm new}$, as an initial right searching subspace when extracting an approximation to $(\alpha_{*},\beta_{*},u_{*},v_{*},x_{*})$. Concretely, denote $$\widetilde{X}_{\mathrm{new}}=\widetilde XQ_D$$ with some orthonormal matrix $Q_D\in\mathbb{R}^{k\times(k-1)}$ to be determined. We require that $\widetilde X_{\mathrm{new}}$ be orthogonal to $Y_{c,\mathrm{new}}=[Y_{c},y]$, where $y=(A^TA+B^TB)x$. Suppose that the current $\widetilde X$ is orthogonal to $Y_{c}$. Then we only need to make $\widetilde X_{\mathrm{new}}$ orthogonal to $y$. By [\[qrAXBX\]](#qrAXBX){reference-type="eqref" reference="qrAXBX"}, this amounts to $$\widetilde X_{\mathrm{new}}^Ty=Q_D^T\widetilde{X}^T(A^TA+B^TB)\widetilde Xd =Q_D^T(R_A^TR_A+R_B^TR_B)d=\bm{0}.$$ Therefore, the columns of $Q_D$ form an orthonormal basis of the orthogonal complement of $\mathcal{R}(d^{\prime})$ with respect to $\mathbb{R}^k$, where $d^{\prime}=(R_A^TR_A+R_B^TR_B)d$. In computation, we compute the full QR factorization of the $k$-by-$1$ matrix $d^{\prime}$ using approximately $4k^2$ flops [@golub2012matrix p.249-250], take columns $2$ to $k$ of its $Q$-factor to form $Q_D$, and obtain the desired $\mathcal{X}_{\rm new}$. Then, following a process analogous to that in Section [4.1](#subsec:5-1){reference-type="ref" reference="subsec:5-1"} with $Q_d$ replaced by $Q_D$, we can carry out this purgation strategy to obtain the reduced $\mathcal{U}_{\rm new}$ and $\mathcal{V}_{\rm new}$ with little extra cost. We then extract an approximation to $(\alpha_{*},\beta_{*},u_{*},v_{*},x_{*})$ with respect to them, and expand the searching subspaces in the thick-restart way described previously.
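A minimal MATLAB sketch of this purgation step (our notation; `d` is the coefficient vector of the newly converged $x$, and the remaining updates mirror the thick-restart sketch above) is:

```matlab
% Minimal sketch (our notation): purge the converged x = Xt*d from the
% current right searching subspace.
dprime  = (RA'*RA + RB'*RB) * d;     % d' of the text
[Q, ~]  = qr(dprime);                % full QR factorization of the k-by-1 matrix d'
QD      = Q(:, 2:end);               % orthonormal basis of the complement of span(d')
Xt_new  = Xt * QD;                   % reduced basis, orthogonal to y = (A'A + B'B)*x
[Qe, RA_new] = qr(RA*QD, 0);         % then update Ut, Vt, RA, RB, HA, HB, HAB
[Qf, RB_new] = qr(RB*QD, 0);         % exactly as in the thick-restart sketch
```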
We now unify the notation and denote by $(\alpha,\beta,u,v,x)$ the approximate GSVD component obtained by one of RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD with respect to the left and right searching subspaces $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{X}$. If the current $\mathcal{X}$ is orthogonal to $\mathcal{R}(Y_c)$, then $$\mathcal{U}=A(I-X_cY_c^T)\mathcal{X}=A\mathcal{X}\mbox{\ \ and\ \ } \mathcal{V}=B(I-X_cY_c^T)\mathcal{X}=B\mathcal{X}.$$ Therefore, for such an $\mathcal{X}$, in the JDGSVD type algorithms of [@huang2022harmonic; @huang2023cross] and this paper, computationally, we never work on the matrix pair in [\[defpair\]](#defpair){reference-type="eqref" reference="defpair"} explicitly; instead, we always work on $(A,B)$ directly. The gains are twofold: the resulting JDGSVD type algorithms are more efficient at each step; in finite precision arithmetic, they enable us to compute the approximations more accurately. The first point is straightforward. But the second point is subtle and quite complicated, and its arguments and details are beyond the scope of this paper. Recall that the reduced $\mathcal{X}_{\rm new}$ can be made orthogonal to $\mathcal{R}(Y_c)$ and is an instance of the current subspace $\mathcal{X}$ when computing $(\alpha_*,\beta_*,u_*,v_*,x_*)$. We present the following important result.

**Theorem 2**. *Suppose that the current right subspace $\mathcal{X}$ is orthogonal to $\mathcal{R}(Y_c)$. Then the expanded $\mathcal{X}$'s are also orthogonal to $\mathcal{R}(Y_c)$ at subsequent expansion steps.*

We only need to prove the assertion for one expansion step. For the current $\mathcal{X}$, if $(\alpha,\beta,u,v,x)$ does not converge yet, then in the expansion phase, the original correction equation [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} for $\ell>1$ becomes (cf. [@huang2022harmonic; @huang2023cross]) $$\label{deflat} (I-Y_pX_p^T)(A^TA-\rho^2B^TB)(I-X_pY_p^T)t=-(I-Y_cX_c^T)r \quad\mbox{for}\quad t\perp Y_p,$$ where $r$ is the residual of $(\alpha,\beta,u,v,x)$ defined by [\[residual\]](#residual){reference-type="eqref" reference="residual"}, $\rho=\tau$ or $\theta=\frac{\alpha}{\beta}$ (see [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} and the paragraph that follows), and $X_p=[X_c,x]$, $Y_p=[Y_c,y]$ with $X_c$ and $Y_c$ defined in [\[convx\]](#convx){reference-type="eqref" reference="convx"} and $y$ defined by [\[defy\]](#defy){reference-type="eqref" reference="defy"}. It follows from $x\in\mathcal{X}\perp\mathcal{R}(Y_c)$ that $X_p$ and $Y_p$ are bi-orthogonal: $Y_p^TX_p=I_{j+1}$, meaning that $I-Y_pX_p^T$ and $I-X_pY_p^T$ are oblique projectors. With an approximate solution $t$ of [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"} found, we orthonormalize it against the orthonormal basis matrix $\widetilde X$ of $\mathcal{X}$ to obtain the expansion vector $x_{+}$ and update the basis matrices $\widetilde X$, $\widetilde U$ and $\widetilde V$ by [\[expandX\]](#expandX){reference-type="eqref" reference="expandX"}--[\[updaterb\]](#updaterb){reference-type="eqref" reference="updaterb"}. Since $t\perp Y_p$ implies $t\perp\mathcal{R}(Y_c)$ and the columns of $\widetilde X$ are already orthogonal to $\mathcal{R}(Y_c)$, by the way that $\widetilde X$ is augmented, the resulting expanded $\mathcal{X}$ is automatically orthogonal to $\mathcal{R}(Y_c)$ at the expansion step.
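To make the structure of [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"} concrete, a minimal MATLAB sketch of one inner solve (our notation; `rho`, `r`, `Xp`, `Yp`, `Xc`, `Yc`, the inner tolerance `epstilde` and the maximum number of inner steps `maxit` are assumed to be available) is:

```matlab
% Minimal sketch (our notation): approximate solution of the deflated
% correction equation (deflat) by MINRES, with the oblique projectors
% applied as function handles.
Pleft  = @(w) w - Yp*(Xp'*w);                  % I - Yp*Xp'
Pright = @(w) w - Xp*(Yp'*w);                  % I - Xp*Yp', the adjoint of Pleft
op     = @(w) Pleft(A'*(A*Pright(w)) - rho^2*(B'*(B*Pright(w))));
rhs    = -(r - Yc*(Xc'*r));                    % right-hand side -(I - Yc*Xc')*r
[t, ~] = minres(op, rhs, epstilde, maxit);     % symmetric operator, so MINRES applies
t      = Pright(t);                            % enforce (approximately) t orthogonal to Yp
```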
The assertion in this theorem is crucial and ensures that we always work on $(A,B)$ rather than the explicitly deflated matrix pair in [\[defpair\]](#defpair){reference-type="eqref" reference="defpair"} when using each of the six JDGSVD type algorithms to compute more than one GSVD component. Once $(\alpha,\beta,u,v,x)$ has converged, we add it to the previous converged partial GSVD $(C_c,S_c,U_c,V_c,X_c)$ of $(A,B)$, and set $j:=j+1$. The RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms proceed in this way until all the $\ell$ desired GSVD components of $(A,B)$ are found, and the computed $X_c=[x_{1,c},\ldots,x_{\ell,c}]$ and $Y_c=[y_{1,c},\ldots,y_{\ell,c}]$ satisfy $Y_c^TX_c=I_{\ell}$.

[\[algorithm:1\]]{#algorithm:1 label="algorithm:1"}

## Thick-restart refined and refined harmonic JDGSVD algorithms {#subsec:5-3}

Algorithm [\[algorithm:1\]](#algorithm:1){reference-type="ref" reference="algorithm:1"} summarizes the thick-restart RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms with deflation and purgation for computing a partial GSVD $(C_c,S_c,U_c,V_c,X_c)$ of $(A,B)$ with the $\ell$ generalized singular values closest to a given target $\tau$. Each of these three algorithms requires routines for performing matrix-vector multiplications with $A$, $B$ and $A^T$, $B^T$, the target $\tau>0$, the number $\ell$ of desired GSVD components, a unit-length vector $x_0$ for the initial right searching subspace, and the stopping tolerance $\textit{tol}$ for the outer iterations. RCPF-HJDGSVD additionally needs a routine for applying $(B^TB)^{-1}$ so as to form and update the projection matrix $H_{\mathrm{c}}$ in [\[cpfeq20\]](#cpfeq20){reference-type="eqref" reference="cpfeq20"}. Other parameters include the minimum and maximum dimensions $k_{\min}$ and $k_{\max}$ of the searching subspaces, the switching tolerance $\textit{fixtol}>0$ for switching the correction equations [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} and [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"} from the fixed shift $\rho=\tau$ to their counterparts with the adaptively changing shift $\rho=\theta$, and the stopping tolerance $\tilde\varepsilon>0$ in [\[inncov\]](#inncov){reference-type="eqref" reference="inncov"} for the inner iterations. By default, we set these to $3$, $30$, $10^{-4}$ and $10^{-4}$, respectively. The authors in [@huang2022harmonic] have made a detailed analysis of the cost of the thick-restart CPF-HJDGSVD and IF-HJDGSVD algorithms, which applies straightforwardly to Algorithm [\[algorithm:1\]](#algorithm:1){reference-type="ref" reference="algorithm:1"}. The conclusion is that, for $k_{\max}\ll n$, the cost of each algorithm is dominated by the matrix-vector multiplications with $A,B$ and $A^T,B^T$, assuming that the MINRES method is used to solve all the correction equations. We note that, for the thick-restart CPF-HJDGSVD and RCPF-HJDGSVD, this conclusion requires that each linear system with the coefficient matrix $B^TB$ can be solved efficiently at a cost of $\mathcal{O}(n)$ flops.

# Numerical experiments {#sec:6}

We illustrate the performance of the thick-restart RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms on several problems, and compare them with the standard and harmonic JDGSVD algorithms [@huang2022harmonic; @huang2023cross]: CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD, showing the considerable advantage of the refined and refined harmonic algorithms over their respective unrefined counterparts.
All the numerical experiments were performed on a 13th Gen Intel(R) Core(TM) i9-13900KF CPU at 3.00 GHz with 64 GB RAM using MATLAB R2022b with the machine precision $\epsilon_{\mathrm{mach}}=2.22\times10^{-16}$ under the Microsoft Windows 11 Pro 64-bit system.

| $A$ | $B$ | $m$ | $p$ | $n$ | $nnz$ | $\kappa(\begin{bmatrix}\begin{smallmatrix} A\\B\end{smallmatrix}\end{bmatrix})$ | $\sigma_{\max}$ | $\sigma_{\min}$ |
|:---|:---:|---:|---:|---:|---:|---:|---:|---:|
| $\mathrm{dano3mip}^T$ | $T$ | 15851 | 3202 | 3202 | 91237 | 1.81e+3 | 1.28e+3 | 1.00e-16 |
| $\mathrm{plddb}^T$ | $T$ | 5049 | 3069 | 3069 | 20044 | 91.9 | 61.2 | 3.65e-3 |
| $\mathrm{barth}$ | $T$ | 6691 | 6691 | 6691 | 46510 | 5.56 | 3.16 | 7.61e-19 |
| $\mathrm{large}^T$ | $T$ | 8617 | 4282 | 4282 | 33479 | 3.53e+3 | 2.42e+3 | 2.25e-3 |
| $\mathrm{ns3Da}$ | $T$ | 20414 | 20414 | 20414 | 1740839 | 5.01 | 4.02e-1 | 1.86e-4 |
| $\mathrm{e40r0100}$ | $T$ | 17281 | 17281 | 17281 | 605403 | 11.2 | 9.56 | 2.62e-8 |
| $\mathrm{rat}^T$ | $T$ | 9408 | 3136 | 3136 | 278314 | 3.06 | 1.42 | 2.85e-1 |
| $\mathrm{Kemelmacher}$ | $T$ | 28452 | 9693 | 9693 | 129952 | 68.0 | 2.35e+2 | 2.02e-3 |
| $\mathrm{lpi\_gosh}^T$ | $T$ | 13455 | 3792 | 3792 | 111327 | 3.09e+2 | 1.60e+2 | 3.31e-16 |
| $\mathrm{rdist2}$ | $T$ | 3198 | 3198 | 3198 | 66426 | 1.17e+3 | 4.50e+2 | 2.64e-5 |
| $\mathrm{deter7}^T$ | $T$ | 18153 | 6375 | 6375 | 56254 | 6.89 | 7.97 | 6.97e-3 |
| $\mathrm{shyy41}$ | $T$ | 4720 | 4720 | 4720 | 34200 | 1.89e+2 | 1.85e+2 | 8.26e-21 |
| $\mathrm{nemeth01}$ | $D$ | 9506 | 9505 | 9506 | 744064 | 32.0 | 7.61e+4 | 2.06e-1 |
| $\mathrm{raefsky1}$ | $D$ | 3242 | 3241 | 3242 | 299891 | 6.81 | 8.13e+2 | 2.18e-4 |
| $\mathrm{r05}^T$ | $D$ | 9690 | 5189 | 5190 | 114523 | 62.4 | 1.19e+4 | 2.91e-1 |
| $\mathrm{p010}^T$ | $D$ | 19090 | 10089 | 10090 | 138178 | 60.2 | 1.95e+4 | 2.92e-1 |
| $\mathrm{scagr7\mbox{-}2b}^T$ | $D$ | 13847 | 9742 | 9742 | 55369 | 2.60e+2 | 8.93e+3 | 9.20e-3 |
| $\mathrm{cavity16}$ | $D$ | 4562 | 4561 | 4562 | 147009 | 2.19e+2 | 1.32e+3 | 9.85e-7 |
| $\mathrm{utm5940}$ | $D$ | 5940 | 5939 | 5940 | 95720 | 52.5 | 8.91e+2 | 4.10e-9 |

: Test matrix pairs with their basic properties.

In Table [1](#table0){reference-type="ref" reference="table0"} we list all the test problems and some of their basic properties, where for each matrix pair $(A,B)$, $A$ or $A^T$ (so that $m\geq n$) is a sparse matrix from the University of Florida Sparse Matrix Collection [@davis2011university] and $B$ is either $$T=\begin{bmatrix} 3&1&&\\ 1&\ddots&\ddots&\\ &\ddots&\ddots&1\\&&1&3 \end{bmatrix} \qquad\mbox{or}\qquad D=\begin{bmatrix} 1&-1&&\\ &\ddots&\ddots&\\&&1&-1 \end{bmatrix}$$ with $p=n$ or $n-1$, the latter of which is the scaled discrete approximation of the first order derivative operator in dimension one, $nnz$ is the total number of nonzero elements in $A$ and $B$, $\kappa(\begin{bmatrix}\begin{smallmatrix} A\\B\end{smallmatrix}\end{bmatrix})$ is the condition number of $\begin{bmatrix}\begin{smallmatrix} A\\B\end{smallmatrix}\end{bmatrix}$, and $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and smallest nontrivial generalized singular values of $(A,B)$, computed for experimental purposes by the MATLAB built-in function gsvd. Note that for the matrix pairs $(A,B)$ with $B=T$, all generalized singular values are nontrivial, while for those with $B=D$ there exists at least one infinite generalized singular value. We take $B=T$ in Experiments [Experiment 3](#largeexact){reference-type="ref" reference="largeexact"}--[Experiment 7](#tengsvd){reference-type="ref" reference="tengsvd"} and $B=D$ in the remaining experiments.
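For reproducibility, the matrices $T$ and $D$ displayed above can be generated, e.g., by the following minimal MATLAB sketch for a given $n$:

```matlab
% Minimal sketch: build T (n-by-n, tridiagonal with 3 on the diagonal and
% 1 on the off-diagonals) and D ((n-1)-by-n, the first-order difference
% matrix) used as B in the experiments.
e = ones(n, 1);
T = spdiags([e, 3*e, e], -1:1, n, n);
i = (1:n-1)';
D = sparse([i; i], [i; i+1], [ones(n-1,1); -ones(n-1,1)], n-1, n);
```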
For all the six algorithms, we take the outer stopping tolerance $\textit{tol}=10^{-8}$, the initial vector $x_0={\sf mod}(1\!:\!n,4)/\|{\sf mod}(1\!:\!n,4)\|$ with mod the MATLAB built-in function, and all the other parameters by default as described in Section [4.3](#subsec:5-3){reference-type="ref" reference="subsec:5-3"}. For the correction equations [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} and [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"}, we take the $n$-dimensional zero vectors as initial guesses, and use the MATLAB built-in function minres to solve them until it converges with the prescribed tolerance $\tilde \varepsilon=10^{-4}$ in [\[inncov\]](#inncov){reference-type="eqref" reference="inncov"} or maximum $n$ inner iteration steps have been consumed. We stop each algorithm and output the computed partial GSVD of $(A,B)$ if all the $\ell$ desired GSVD components of $(A,B)$ have been found or the total $n$ correction equations have been solved. In all the tables, we abbreviate the thick-restart CPF-JDGSVD, CPF-HJDGSVD, IF-HJDGSVD and RCPF-JDGSVD, RCPF-HJDGSVD, RIF-HJDGSVD algorithms as CPF, CPFH, IFH, and RCPF, RCPFH, RIFH, respectively. We denote by $I_{\mathrm{out}}$ and $I_{\mathrm{in}}$ the total numbers of outer and inner iterations used, and by $T_{\mathrm{cpu}}$ the total CPU time in second counted by the MATLAB built-in commands tic and toc. **Experiment 3**. *We compute one GSVD component of $(A,B) = (\mathrm{dano3mip}^T, T)$ with the target $\tau = 3.75e+2$. The desired $\sigma_*$ is one of the largest generalized singular values of $(A,B)$ and is clustered with its nearby ones.* ![Computing one GSVD component of $(A,B)=(\mathrm{dano3mip}^T,T)$ with $\tau=3.75e+2$.](dano3mip_z1relres_srs.pdf){#fig11 width="80%"} For the matrix pairs in this and the next four experiments, the matrices $B^TB$'s are symmetric positive definite and well conditioned, whose Cholesky factorizations can be efficiently computed using $\mathcal{O}(n)$ flops, meaning that matrix-vector multiplications with its inversion $(B^TB)^{-1}$ can be applied at cost of $\mathcal{O}(n)$ flops for each linear system. For this reason, during the subspace expansion phase of the CPF-HJDGSVD and RCPF-HJDGSVD algorithms, we exploit the MATLAB built-in command $\setminus$ to implement matrix-vector multiplications with $(B^TB)^{-1}$ and to update the intermediate matrix $H_{c}$ in [\[cpfeq20\]](#cpfeq20){reference-type="eqref" reference="cpfeq20"}. Moreover, for experimental purpose, in this and the next experiments we solve all the correction equations [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} and [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"} accurately by using the LU factorizations of $(A^TA-\rho^2B^TB)$'s in order to exhibit how RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD truly behave. Figure [1](#fig11){reference-type="ref" reference="fig11"} depicts the outer convergence curves of CPF-JDGSVD and RCPF-JDGSVD for computing the desired GSVD component of $(A,B)$. Clearly, CPF-JDGSVD converges slowly and irregularly and even suffers from two sharp oscillations at outer iterations 22 and 40, respectively, and it uses 42 outer iterations to converge. In contrast, RCPF-JDGSVD exhibits far smoother outer convergence behavior, and uses only 16 outer iterations to attain the convergence, thereby saving $62\%$ outer iterations. 
This demonstrates the considerable superiority of RCPF-JDGSVD over CPF-JDGSVD when computing an extreme GSVD component of $(A,B)$.

**Experiment 4**. *We compute one GSVD component of $(A,B)=(\mathrm{plddb}^T,T)$ with the interior generalized singular value $\sigma_*$ closest to the target $\tau = 10$, which is clustered with its neighbors.*

![Computing one GSVD component of $(A,B)=(\mathrm{plddb}^T,T)$ with $\tau=10$.](plddb_z1relres_circri.pdf){#fig12 width="80%"}

Figure [2](#fig12){reference-type="ref" reference="fig12"} draws the outer convergence curves of the two harmonic extraction-based algorithms CPF-HJDGSVD and IF-HJDGSVD as well as the two refined harmonic extraction-based algorithms RCPF-HJDGSVD and RIF-HJDGSVD. We can see that RCPF-HJDGSVD and RIF-HJDGSVD converge more quickly and much more smoothly than CPF-HJDGSVD and IF-HJDGSVD, and the former two algorithms use $4$ and $7$ fewer outer iterations than the latter two to reach convergence, respectively. We also observe that RCPF-HJDGSVD and RIF-HJDGSVD perform equally well for this problem.

**Experiment 5**. *We compute the one, five and ten largest GSVD components of $(A,B)=(\mathrm{barth},T)$ with the target $\tau = 20$. The desired generalized singular values are clustered with one another.*

| $\ell$ | Algorithm | $I_{\rm out}$ | $I_{\rm in}$ | $T_{\rm cpu}$ | Algorithm | $I_{\rm out}$ | $I_{\rm in}$ | $T_{\rm cpu}$ |
|---|---|---:|---:|---:|---|---:|---:|---:|
| 1 | CPF | 36 | 655 | 0.18 | RCPF | 33 | 673 | 0.18 |
| | CPFH | 36 | 653 | 0.17 | RCPFH | 33 | 673 | 0.19 |
| | IFH | 35 | 697 | 0.18 | RIFH | 33 | 673 | 0.18 |
| 5 | CPF | 86 | 2515 | 0.56 | RCPF | 67 | 2543 | 0.55 |
| | CPFH | 86 | 2560 | 0.58 | RCPFH | 67 | 2543 | 0.59 |
| | IFH | 81 | 2609 | 0.56 | RIFH | 61 | 2466 | 0.51 |
| 10 | CPF | 166 | 5575 | 1.25 | RCPF | 91 | 4834 | 0.94 |
| | CPFH | 173 | 5634 | 1.30 | RCPFH | 91 | 4834 | 0.97 |
| | IFH | 155 | 5603 | 1.25 | RIFH | 109 | 5269 | 1.09 |

: Computing the $\ell$ GSVD components of $(A,B)=(\mathrm{barth},T)$ with $\tau=20$.

Table [2](#table21){reference-type="ref" reference="table21"} reports the results obtained. As we can see, for $\ell=1$, the refined and refined harmonic extraction-based JDGSVD algorithms use slightly fewer outer iterations and comparable inner iterations and CPU time to achieve the convergence, compared with the standard and harmonic extraction-based JDGSVD algorithms. However, for $\ell=5$, RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD are considerably faster and use nearly $20$ fewer outer iterations than CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD, respectively, which saves $22.1\%$--$24.5\%$ of the outer iterations, though the six JDGSVD algorithms use almost the same numbers of inner iterations and CPU time to converge. For $\ell=10$, RIF-HJDGSVD uses $29\%$ fewer outer iterations, $6\%$ fewer inner iterations and $13\%$ less CPU time than IF-HJDGSVD; RCPF-JDGSVD and RCPF-HJDGSVD save more than $45\%$ of the outer iterations, $13\%$ of the inner iterations, and $24\%$ of the CPU time, respectively, relative to CPF-JDGSVD and CPF-HJDGSVD. Of the three refined and refined harmonic algorithms, RCPF-JDGSVD and RCPF-HJDGSVD are better than RIF-HJDGSVD due to their faster outer convergence, and RCPF-JDGSVD is the most efficient due to its lowest CPU time. Clearly, for the sake of faster outer convergence and higher overall efficiency, RCPF-JDGSVD is the most recommended algorithm for this problem. This should be expected as the generalized singular values of interest are extreme ones, for which the standard extraction suits better than the harmonic extraction.
As a consequence, RCPF-JDGSVD performs better than RCPF-HJDGSVD and RIF-HJDGSVD, though its advantage is not so obvious relative to RCPF-HJDGSVD. **Experiment 6**. *We compute one, five and ten interior GSVD components of the matrix pair $(A,B)=(\mathrm{large}^T,T)$ with the generalized singular values closest to $\tau = 14$ highly clustered.* ![Computing one GSVD component of $(A,B)=(\mathrm{large}^T,T)$ with $\tau=14$.](large_z4relres_scirsrcri.pdf){#fig22 width="80%"} $\ell$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ -------- ----------- --------------- -------------- --------------- ----------- --------------- -------------- --------------- 1 CPF    13 3266 0.23 RCPF    33 6068 0.44 CPFH    16 2677 0.18 RCPFH    12 3109 0.22 IFH    12 3544 0.24 RIFH    11 3332 0.22 5 CPF    65 42115 2.84 RCPF    720 635121 43.5 CPFH    43 13255 0.93 RCPFH    25 16434 1.11 IFH    29 15602 1.06 RIFH    27 23767 1.61 10 CPF   4291 4084862 358 RCPF   771 720147 50.7 CPFH   61 29032 2.27 RCPFH   39 33313 2.48 IFH   43 30135 2.23 RIFH   36 30570 2.26 : Computing the $\ell$ GSVD components of $(A,B)=(\mathrm{large}^T,T)$ with $\tau=14$. Figure [3](#fig22){reference-type="ref" reference="fig22"} depicts the outer convergence curves of the six JDGSVD algorithms for computing one desired GSVD component of $(A,B)$, and Table [3](#table22){reference-type="ref" reference="table22"} displays all the results obtained. For $\ell=1$, all the six algorithms succeed to compute the desired GSVD component. As is seen from Figure [3](#fig22){reference-type="ref" reference="fig22"} and Table [3](#table22){reference-type="ref" reference="table22"}, RCPF-JDGSVD uses much more outer iterations and thus much more inner iterations and CPU time to converge than CPF-JDGSVD because of the outer convergence stagnation of the former. Notice that this is an interior GSVD problem. The stagnation implies that the expanded subspaces have little improvements at those stagnation steps, causing that the accuracy of selected approximate generalized singular values and right generalized singular vectors remain almost the same. This is because RCPF-JDGSVD selects approximate generalized singular values incorrectly or there is no good Ritz value at all at those steps, which is an intrinsic deficiency of the standard extraction for interior eigenvalue, SVD and GSVD problems, as we have pointed out in the introduction: If this selection is done incorrectly or there are spurious values at some step, the searching subspaces in the refined extraction are expanded wrongly at that step and provide little information on the desired generalized singular vectors. In contrast, RCPF-HJDGSVD and RIF-HJDGSVD outperform CPF-HJDGSVD and IF-HJDGSVD respectively, due to the smoother and faster outer convergence and/or better overall efficiency. This is because the harmonic extraction approach suits better for interior GSVD problems and can pick up the approximate generalized singular values correctly. For $\ell=5$, it seems from Table [3](#table22){reference-type="ref" reference="table22"} that RCPF-JDGSVD uses significantly more outer and inner iterations and much more CPU time than CPF-JDGSVD. As a matter of fact, both CPF-JDGSVD and RCPF-JDGSVD are unreliable and fail to deliver all the desired GSVD components in this case: They find only the first three desired GSVD components but recompute each of the first two twice. 
RCPF-HJDGSVD uses nearly half as many outer iterations as CPF-HJDGSVD, though slightly more inner iterations and CPU time. RIF-HJDGSVD uses slightly fewer outer iterations but considerably more inner iterations and CPU time than IF-HJDGSVD. Obviously, regarding outer convergence, RCPF-HJDGSVD and RIF-HJDGSVD are the best. For $\ell=10$, only CPF-HJDGSVD, IF-HJDGSVD and RCPF-HJDGSVD, RIF-HJDGSVD succeed in computing all the desired GSVD components of $(A,B)$, while CPF-JDGSVD computes the first three desired ones three times in the beginning and fails to compute the fourth one when $n$ correction equations are solved, and RCPF-JDGSVD only computes the first six desired GSVD components and recomputes the first three ones twice or three times. As a consequence, CPF-JDGSVD and RCPF-JDGSVD fail to solve this GSVD problem correctly and are thus unreliable for this interior GSVD problem. On the contrary, RCPF-HJDGSVD and RIF-HJDGSVD outperform CPF-HJDGSVD and IF-HJDGSVD as they use considerably fewer outer iterations and comparable inner iterations; these four algorithms outperform CPF-JDGSVD and RCPF-JDGSVD significantly. Clearly, due to the smoother and faster outer convergence and the reliability, RCPF-HJDGSVD and RIF-HJDGSVD are the most suitable choices for this interior GSVD problem.

**Experiment 7**. *We compute ten GSVD components of the matrix pairs $(A,B)=(\mathrm{ns3Da},T)$, $(\mathrm{e40r0100},T)$, $(\mathrm{Kemelmacher},T)$, $(\mathrm{rat}^T,T)$, $(\mathrm{lpi\_gosh}^T,T)$, $(\mathrm{rdist2},T)$, $(\mathrm{deter7}^T,T)$ and $(\mathrm{shyy41},T)$ with the targets $\tau = 2$, $11$, $0.2$, $0.15$, $14.3$, $19$, $4.5$ and $0.8$, respectively. The desired generalized singular values of $(\mathrm{ns3Da},T)$, $(\mathrm{e40r0100},T)$ and $(\mathrm{Kemelmacher},T)$, $(\mathrm{rat}^T,T)$ are extreme, i.e., the largest or smallest, and are clustered, and those of the remaining four matrix pairs are both interior and clustered.*

$A$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ ------------------------ ----------- --------------- -------------- --------------- ----------- --------------- -------------- --------------- $\mathrm{ns3Da}$ CPF    126 3823 6.81 RCPF    89 3458 6.03 CPFH    119 3858 6.87 RCPFH    90 3471 6.17 IFH    113 3776 6.85 RIFH    100 3646 6.49 $\mathrm{e40r0100}$ CPF    141 9507 5.20 RCPF    58 7656 3.86 CPFH    91 9094 4.74 RCPFH    58 7549 3.87 IFH    84 9020 4.74 RIFH    58 7615 3.83 $\mathrm{Kemelm}$ CPF    45 48004 9.82 RCPF    34 45001 9.09 CPFH    39 48669 10.1 RCPFH    37 48271 10.0 IFH    35 45056 9.25 RIFH    36 46594 9.60 $\mathrm{rat}^T$ CPF    172 2364 0.70 RCPF    81 2289 0.54 CPFH    94 2191 0.54 RCPFH    80 2232 0.56 IFH    105 2118 0.54 RIFH    81 2289 0.54 $\mathrm{lpi\_gosh}^T$ CPF    52 6042 0.81 RCPF    43 6255 0.77 CPFH    44 5661 0.71 RCPFH    37 5203 0.63 IFH    39 5405 0.67 RIFH    36 5023 0.60 $\mathrm{rdist2}$ CPF    68 5397 0.48 RCPF    46 4476 0.39 CPFH    51 4485 0.40 RCPFH    44 4348 0.40 IFH    54 5238 0.45 RIFH    47 4734 0.41 $\mathrm{deter7}^T$ CPF    87 7208 1.12 RCPF    68 6820 1.04 CPFH    78 6876 1.07 RCPFH    55 6716 0.97 IFH    69 7430 1.11 RIFH    57 7418 1.06 $\mathrm{shyy41}$ CPF    RCPF    263 CPFH    1.13e+3 RCPFH    66 81061 6.35 IFH    1.49e+3 RIFH    63 79204 6.29 : Results on test matrix pairs in Experiment [Experiment 7](#tengsvd){reference-type="ref" reference="tengsvd"}, where $\mathrm{Kemelm}$ is the abbreviation of $\mathrm{Kemelmacher}$.
Table [4](#table3){reference-type="ref" reference="table3"} reports all the results obtained. For the computation of the largest GSVD components of $(\mathrm{ns3Da},T)$ and $(\mathrm{e40r0100},T)$, we observe from Table [4](#table3){reference-type="ref" reference="table3"} that the refined and refined harmonic extraction-based RCPF-JDGSVD, RCPF-HJDGSVD, RIF-HJDGSVD algorithms outperform the standard and harmonic extraction-based CPF-JDGSVD, CPF-HJDGSVD, IF-HJDGSVD algorithms as they use moderately to substantially fewer outer iterations, slightly to moderately fewer inner iterations and less CPU time. For $(\mathrm{ns3Da},T)$, RCPF-JDGSVD and RCPF-HJDGSVD are very comparable, and they both are superior to RIF-HJDGSVD due to the faster outer convergence and higher overall efficiency. For $(\mathrm{e40r0100},T)$, all the three refined and refined harmonic JDGSVD algorithms are suitable. For the computation of the smallest GSVD components, we see from Table [4](#table3){reference-type="ref" reference="table3"} that for $(\mathrm{Kemelmacher},T)$, RCPF-JDGSVD and RCPF-HJDGSVD use moderately or slightly fewer outer and inner iterations and less CPU time than CPF-JDGSVD and CPF-HJDGSVD, respectively, while RIF-HJDGSVD and IF-HJDGSVD work equally well. Obviously, except for CPF-JDGSVD, all the other five algorithms are appropriate for the concerning GSVD problem of this matrix pair. Nonetheless, as far as both the outer convergence and overall efficiency are concerned, RCPF-JDGSVD is the most recommended. For $(\mathrm{rat}^T,T)$, the refined and refined harmonic extraction-based JDGSVD algorithms outperform the standard and harmonic extraction-based JDGSVD algorithms substantially, as outer iterations indicate. Regarding the overall efficiency, RCPF-JDGSVD outmatches CPF-JDGSVD considerably as it uses slightly fewer inner iterations and much less CPU time. On the other hand, RCPF-HJDGSVD and RIF-HJDGSVD are competitive with CPF-HJDGSVD and IF-HJDGSVD in terms of inner iterations and CPU time. For the sake of better convergence behavior, RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD are all proper choices for this problem. For the computation of interior GSVD components of $(\mathrm{lpi\_gosh}^T,T)$, $(\mathrm{rdist2},T)$ and $(\mathrm{deter7}^T,T)$, as far as the outer convergence is concerned, RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD surpass CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD, respectively. Regarding the overall efficiency, the three refined and refined harmonic JDGSVD algorithms are superior to the standard and harmonic JDGSVD algorithms as they use less CPU time and/or fewer inner iterations. For $(\mathrm{shyy41},T)$, since the desired generalized singular values are truly interior and clustered, as we have elaborated in the introduction, the standard, harmonic and even refined extraction approach-based JDGSVD algorithms may face severe difficulties, which is confirmed by the results in Table [4](#table3){reference-type="ref" reference="table3"}. In fact, we observe very erratic convergence behavior of CPF-JDGSVD, CPF-HJDGSVD, IF-HJDGSVD and RCPF-JDGSVD; when $n$ correction equations are solved, they fail to compute all the desired GSVD components and only output seven, seven, eight and four converged ones, respectively. On the contrary, RCPF-HJDGSVD and RIF-HJDGSVD are very successful to compute all the desired GSVD components quickly. 
Undoubtedly, RCPF-HJDGSVD and RIF-HJDGSVD are the only two proper choices for this difficult problem; they are competitive in terms of both outer convergence and overall efficiency. **Experiment 8**. *We compute one GSVD component of $(A,B)=(\mathrm{nemeth01},D)$ corresponding to the interior generalized singular value closest to the target $\tau=6.5$.* ![Computing one GSVD component of $(A,B)=(\mathrm{nemeth01},D)$ with $\tau=6.5$.](nemeth01_z4relres_sirsri.pdf){#fig4 width="80%"} For the matrix pairs in this and the next experiments, the matrices $B$'s are rank deficient, so that CPF-HJDGSVD and RCPF-HJDGSVD are not applicable. We compute the desired GSVD components of $(A,B)$ using CPF-JDGSVD, IF-HJDGSVD and RCPF-JDGSVD, RIF-HJDGSVD. Specifically, for the problem in this experiment, for the illustration of true outer convergence of these four algorithms, we use the LU factorizations of the matrices $(A^TA-\rho^2B^TB)$'s to solve all the correction equations involved accurately, and draw their outer convergence curves in Figure [4](#fig4){reference-type="ref" reference="fig4"}. As is seen from Figure [4](#fig4){reference-type="ref" reference="fig4"}, in contrast to CPF-JDGSVD and IF-HJDGSVD whose convergence is delayed because of the frequent oscillations and stagnation, respectively, RCPF-JDGSVD and RIF-HJDGSVD converge faster and much more smoothly, and use $23$ and $6$ fewer outer iterations to achieve the convergence, respectively. Moreover, since the standard extraction approach may pick up approximate generalized singular values incorrectly for interior GSVD problems, RCPF-JDGSVD may suffer from oscillations, as shown by Figure [4](#fig4){reference-type="ref" reference="fig4"} at iterations $4$--$9$ and $19$--$21$. These phenomena confirm the intrinsic shortcoming of RCPF-JDGSVD for computing truly interior GSVD components. As a consequence, although RCPF-JDGSVD and RIF-HJDGSVD use the same number of outer iterations to converge and outperform CPF-JDGSVD and IF-HJDGSVD, we recommend RIF-HJDGSVD for this problem. **Experiment 9**. *We compute ten GSVD components of $(A,B)=(\mathrm{raefsky1},D)$, $(\mathrm{r05}^T,D)$, $(\mathrm{p010}^T,D)$ and $(\mathrm{scagr7\mbox{-}2b}^T,D)$, $(\mathrm{cavity16},D)$, $(\mathrm{utm5940},D)$ with the targets $\tau=57$, $61$, $80$ and $32.5$, $20.8$, $58$, respectively. All the desired generalized singular values are interior and clustered ones. This implies that all the correction equations may be hard to solve by the MINRES method.* $A$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ Algorithm $I_{\rm out}$ $I_{\rm in}$ $T_{\rm cpu}$ ------------------------------- ----------- --------------- -------------- --------------- ----------- --------------- -------------- --------------- $\mathrm{raefsky1}$ CPF    52 91560 16.0 RCPF   42 76689 13.4 IFH    37 67609 13.5 RIFH   35 66005 11.9 $\mathrm{r05^T}$ CPF    100 60409 8.71 RCPF   99 65947 9.76 IFH    91 64795 19.2 RIFH   69 53142 7.73 $\mathrm{p010}^T$ CPF    104 104797 28.8 RCPF   107 113825 31.4 IFH    87 132654 55.2 RIFH   63 81573 22.4 $\mathrm{scagr7\mbox{-}2b}^T$ CPF    185 130938 22.1 RCPF   294 232935 39.0 IFH    65 141444 23.3 RIFH   46 112561 18.4 $\mathrm{cavity16}$ CPF    122 95455 13.5 RCPF   2107949 292 IFH    101 85234 23.7 RIFH   68 63650 8.95 $\mathrm{utm5940}$ CPF    2.13e+3 RCPF   1.55e+3 IFH    2.36e+3 RIFH   87 35.8 : Results on test matrix pairs in Example [Experiment 9](#Brankdeficient){reference-type="ref" reference="Brankdeficient"}. 
Table [5](#table5){reference-type="ref" reference="table5"} lists the results. For $(\mathrm{raefsky1},D)$, we see that RCPF-JDGSVD and RIF-HJDGSVD respectively outperform CPF-JDGSVD and IF-HJDGSVD by using significantly or slightly fewer outer and inner iterations and less CPU time. For $(\mathrm{r05}^T,D)$ and $(\mathrm{p010}^T,D)$, RIF-HJDGSVD uses substantially fewer outer and inner iterations and considerably less CPU time than CPF-JDGSVD and RCPF-JDGSVD, which themselves behave similarly and outperform IF-HJDGSVD significantly with much less CPU time. Obviously, RIF-HJDGSVD is the best choice for these two problems in terms of both outer convergence and overall efficiency. For $(\mathrm{scagr7\mbox{-}2b}^T,D)$, CPF-JDGSVD and RCPF-JDGSVD consume many outer iterations to converge. Actually, RCPF-JDGSVD fails to solve the problem, and it repeatedly computes the first two desired GSVD components. In contrast, IF-HJDGSVD and RIF-HJDGSVD use substantially fewer outer iterations to compute all the desired GSVD components. Between them, RIF-HJDGSVD outperforms IF-HJDGSVD with clearly fewer outer and inner iterations and less CPU time. Clearly, for this problem, RIF-HJDGSVD is the best choice. For $(\mathrm{cavity16},D)$ and $(\mathrm{utm5940},D)$, the desired generalized singular values are truly interior and clustered, so that the standard, harmonic and even refined extraction approaches may not perform well, as confirmed by the results in Table [5](#table5){reference-type="ref" reference="table5"}. We observe sharp oscillations in the outer convergence curves of CPF-JDGSVD, IF-HJDGSVD and RCPF-JDGSVD, the last of which even fails to compute all the desired GSVD components by the time the solutions of $n$ correction equations have been used up. On the other hand, RIF-HJDGSVD converges smoothly and uses far fewer outer and inner iterations and far less CPU time than the other three algorithms. Clearly, RIF-HJDGSVD is the most suitable choice for these two problems. Summarizing all the numerical experiments, we can draw two conclusions: (i) For extreme GSVD components, the refined and refined harmonic RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD algorithms generally perform better than the standard and harmonic CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD algorithms, respectively, and RCPF-JDGSVD is the best. (ii) For interior GSVD components, RCPF-HJDGSVD and RIF-HJDGSVD outmatch CPF-JDGSVD, RCPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD with smoother and faster outer convergence and higher overall efficiency; if $B$ has full column rank, then both RCPF-HJDGSVD and RIF-HJDGSVD are suitable choices; otherwise RIF-HJDGSVD is the most recommended algorithm due to its wider applicability.

# Conclusions {#sec:7}

The reliable and efficient computation of a partial GSVD of a large regular matrix pair $(A,B)$ is vital in many applications, and has attracted much attention in recent years. In this paper, we have proposed three refined and refined harmonic extraction-based JDGSVD methods, RCPF-JDGSVD, RCPF-HJDGSVD and RIF-HJDGSVD, and have developed practical thick-restart algorithms with deflation and purgation that can compute several GSVD components. They fix the erratic convergence behavior and the possible non-convergence intrinsic to the standard and harmonic JDGSVD methods CPF-JDGSVD, CPF-HJDGSVD and IF-HJDGSVD proposed and developed in [@huang2022harmonic; @huang2023cross].
The three new JDGSVD algorithms are better suited to the computation of extreme and interior GSVD components of large regular matrix pairs. Numerical experiments have demonstrated that RCPF-HJDGSVD and RIF-HJDGSVD are generally the best choices for computing interior GSVD components, while RCPF-JDGSVD is the best choice for extreme GSVD components. These results confirm our elaborations in the introduction as well as the necessity and superiority of the refined and refined harmonic extraction-based JDGSVD methods. There remain some important issues that should be given special consideration. When the desired generalized singular values are truly interior and clustered, the correction equations [\[cortau\]](#cortau){reference-type="eqref" reference="cortau"} and [\[deflat\]](#deflat){reference-type="eqref" reference="deflat"} involved in the JDGSVD algorithms are highly indefinite and ill conditioned, so that the MINRES method may be very costly even if only a fairly modest relative residual is required of an approximate solution. Unfortunately, we have observed that commonly used incomplete LU preconditioners generally work poorly and have no acceleration effect. Therefore, the efficient solution of the correction equations constitutes the bottleneck of all the JDGSVD algorithms. Proposing and developing preconditioners specifically tailored to the correction equations is extremely important and definitely deserves further attention. This constitutes our future work.

**Funding** The first author was supported by the Youth Fund of the National Science Foundation of China (No. 12301485) and the Youth Program of the Natural Science Foundation of Jiangsu Province (No. BK20220482), and the second author was supported by the National Science Foundation of China (No. 12171273).

**Data Availability** Enquiries about data availability should be directed to the authors.

# Declarations {#declarations .unnumbered}

The authors declare that they have no financial interests, and both authors read and approved the final manuscript.

Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. A. Van der Vorst, *Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide*, SIAM, Philadelphia, PA, 2000. J. K. Cullum and R. A. Willoughby, *Lanczos Algorithms for Large Symmetric Eigenvalue Computations: Vol. I: Theory*, SIAM, Philadelphia, PA, 2002. T. A. Davis and Y. Hu, *The University of Florida sparse matrix collection*, ACM Trans. Math. Software, 38 (2011), pp. 1--25. Data available online at <http://www.cise.ufl.edu/research/sparse/matrices/>. G. H. Golub and C. F. van Loan, *Matrix Computations, 4th Ed.*, The Johns Hopkins University Press, Baltimore, 2012. M. E. Hochstenbach, *A Jacobi--Davidson type SVD method*, SIAM J. Sci. Comput., 23 (2001), pp. 606--628. M. E. Hochstenbach, *Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems*, BIT, 44 (2004), pp. 721--754. M. E. Hochstenbach, *A Jacobi--Davidson type method for the generalized singular value problem*, Linear Algebra Appl., 431 (2009), pp. 471--487. M. E. Hochstenbach and G. L. Sleijpen, *Harmonic and refined Rayleigh--Ritz for the polynomial eigenvalue problem*, Numer. Linear Algebra Appl., 15 (2008), pp. 35--54. J. Huang and Z. Jia, *On inner iterations of Jacobi--Davidson type methods for large SVD computations*, SIAM J. Sci. Comput., 41 (2019), pp. A1574--A1603. J. Huang and Z.
Jia, *On choices of formulations of computing the generalized singular value decomposition of a matrix pair*, Numer. Algorithms, 87 (2021), pp. 689--718. J. Huang and Z. Jia, *Two harmonic Jacobi-Davidson methods for computing a partial generalized singular value decomposition of a large matrix pair*, J. Sci. Comput., 93 (2022). article no.41. J. Huang and Z. Jia, *A cross-product free Jacobi--Davidson type method for computing a partial generalized singular value decomposition of a large matrix pair*, J. Sci. Comput., 94 (2023). article no.3. Z. Jia, *Refined iterative algorithms based on Arnoldi's process for large unsymmetric eigenproblems*, Linear Algebra Appl., 259 (1997), pp. 1--23. Z. Jia, *Polynomial characterizations of the approximate eigenvectors by the refined Arnoldi method and an implicitly restarted refined Arnoldi algorithm*, Linear Algebra Appl., 287 (1999), pp. 191--214. Z. Jia, *The refined harmonic Arnoldi method and an implicitly restarted refined algorithm for computing interior eigenpairs of large matrices*, Appl. Numer. Math., 42 (2002), pp. 489--512. Z. Jia, *Some theoretical comparisons of refined Ritz vectors and Ritz vectors*, Sci. China Ser. A, 47 (2004), pp. 222--233. Z. Jia, *The convergence of harmonic Ritz values, harmonic Ritz vectors and refined harmonic Ritz vectors*, Math. Comput., 74 (2005), pp. 1441--1456. Z. Jia, *Using cross-product matrices to compute the SVD*, Numer. Algorithms, 42 (2006), pp. 31--61. Z. Jia and C. Li, *Inner iterations in the shift-invert residual Arnoldi method and the Jacobi--Davidson method*, Sci. China Math., 57 (2014), pp. 1733--1752. Z. Jia and C. Li, *Harmonic and refined harmonic shift-invert residual Arnoldi and Jacobi--Davidson methods for interior eigenvalue problems*, J. Comput. Appl. Math., 282 (2015), pp. 83--97. Z. Jia and D. Niu, *An implicitly restarted refined bidiagonalization Lanczos method for computing a partial singular value decomposition*, SIAM J. Matrix Anal. Appl., 25 (2003), pp. 246--265. Z. Jia and D. Niu, *A refined harmonic Lanczos bidiagonalization method and an implicitly restarted algorithm for computing the smallest singular triplets of large matrices*, SIAM J. Sci. Comput., 32 (2010), pp. 714--744. Z. Jia and G. W. Stewart, *An analysis of the Rayleigh--Ritz method for approximating eigenspaces*, Math. Comput., 70 (2001), pp. 737--747. E. Kokiopoulou, C. Bekas, and E. Gallopoulos, *Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization*, Appl. Numer. Math., 49 (2004), pp. 39--61. R. B. Morgan and M. Zeng, *Harmonic projection methods for large non-symmetric eigenvalue problems*, Numer. Linear Algebra Appl., 5 (1998), pp. 33--55. R. B. Morgan and M. Zeng, *A harmonic restarted Arnoldi algorithm for calculating eigenvalues and determining multiplicity*, Linear Algebra Appl., 415 (2006), pp. 96--113. C. C. Paige and M. A. Saunders, *Towards a generalized singular value decomposition*, SIAM J. Numer. Anal., 18 (1981), pp. 398--405. B. N. Parlett, *The Symmetric Eigenvalue Problem*, SIAM, Philadelphia, PA, 1998. Y. Saad, *Iterative Methods for Sparse Linear Systems, 2nd Ed.*, SIAM, Philadelphia, PA, 2003. Y. Saad, *Numerical Methods for Large Eigenvalue Problems*, SIAM, Philadelphia, PA, 2011. H. D. Simon and H. Zha, *Low-rank matrix approximation using the Lanczos bidiagonalization process with applications*, SIAM J. Sci. Comput., 21 (2000), pp. 2257--2274. A. Stathopoulos, Y. Saad, and K. 
Wu, *Dynamic thick restarting of the Davidson, and the implicitly restarted Arnoldi methods*, SIAM J. Sci. Comput., 19 (1998), pp. 227--245. G. W. Stewart, *Matrix Algorithms II: Eigensystems*, SIAM, Philadelphia, PA, 2001. G. W. Stewart and J.-G. Sun, *Matrix Perturbation Theory*, Academic Press, Inc., Boston, 1990. M. Stoll, *A Krylov--Schur approach to the truncated SVD*, Linear Algebra Appl., 436 (2012), pp. 2795--2806. H. Van der Vorst, *Computational Methods for Large Eigenvalue Problems*, Elsevier, Holland, 2002. C. F. Van Loan, *Generalizing the singular value decomposition*, SIAM J. Numer. Anal., 13 (1976), pp. 76--83. L. Wu, E. Romero, and A. Stathopoulos, *PRIMME\_SVDS: A high-performance preconditioned SVD solver for accurate large-scale computations*, SIAM J. Sci. Comput., 39 (2017), pp. S248--S271. L. Wu and A. Stathopoulos, *A preconditioned hybrid SVD method for accurately computing singular triplets of large matrices*, SIAM J. Sci. Comput., 37 (2015), pp. S365--S388. I. N. Zwaan, *Cross product-free matrix pencils for computing generalized singular values*, (2019). arXiv:1912.08518 \[math.NA\]. I. N. Zwaan and M. E. Hochstenbach, *Generalized Davidson and multidirectional-type methods for the generalized singular value decomposition*, (2017). arXiv:1705.06120 \[math.NA\].

[^1]: School of Mathematical Sciences, Soochow University, 215006 Suzhou, China ([jzhuang21\@suda.edu.cn](jzhuang21@suda.edu.cn))

[^2]: Corresponding author. Department of Mathematical Sciences, Tsinghua University, 100084 Beijing, China ([jiazx\@tsinghua.edu.cn](jiazx@tsinghua.edu.cn)).

[^3]: The work of the first author was supported by the Youth Fund of the National Science Foundation of China (No. 12301485) and the Youth Program of the Natural Science Foundation of Jiangsu Province (No. BK20220482), and the work of the second author was supported by the National Science Foundation of China (No. 12171273).
---
abstract: |
  We give a complete answer to the rationality problem (up to stable $k$-equivalence) for norm one tori $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ of $K/k$ whose Galois closures $L/k$ are $A_5\simeq {\rm PSL}_2(\mathbbm{F}_4)$ and ${\rm PSL}_2(\mathbbm{F}_8)$ extensions. In particular, we prove that $T$ is stably $k$-rational for $G={\rm Gal}(L/k)\simeq {\rm PSL}_2(\mathbbm{F}_{8})$, $H={\rm Gal}(L/K)\simeq (C_2)^3$ and $H\simeq (C_2)^3\rtimes C_7$ where $C_n$ is the cyclic group of order $n$. Based on this result, we conjecture that $T$ is stably $k$-rational for $G\simeq {\rm PSL}_2(\mathbbm{F}_{2^d})$, $H\simeq (C_2)^d$ and $H\simeq (C_2)^d\rtimes C_{2^d-1}$. Some other cases $G\simeq A_n$, $S_n$, ${\rm GL}_n(\mathbbm{F}_{p^d})$, ${\rm SL}_n(\mathbbm{F}_{p^d})$, ${\rm PGL}_n(\mathbbm{F}_{p^d})$, ${\rm PSL}_n(\mathbbm{F}_{p^d})$ and $H\lneq G$ are also investigated for small $n$ and $p^d$.
address:
- Department of Mathematics, Niigata University, Niigata 950-2181, Japan
- Department of Mathematics, Kyoto University, Kyoto 606-8502, Japan
author:
- Akinari Hoshi
- Aiichi Yamasaki
title: Rationality problem for norm one tori for $A_5$ and ${\rm PSL}_2(\mathbbm{F}_8)$ extensions
---

# Introduction {#seInt}

Let $k$ be a field, $K/k$ be a separable field extension of degree $m$ and $L/k$ be the Galois closure of $K/k$. Let $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)$ with $[G:H]=m$. The Galois group $G$ may be regarded as a transitive subgroup of the symmetric group $S_m$ of degree $m$. We may assume that $H$ is the stabilizer of one of the letters in $G$, i.e. $L=k(\theta_1,\ldots,\theta_m)$ and $K=L^H=k(\theta_i)$ where $1\leq i\leq m$. Then we have $\bigcap_{\sigma\in G} H^\sigma=\{1\}$ where $H^\sigma=\sigma^{-1}H\sigma$ and hence $H$ contains no normal subgroup of $G$ except for $\{1\}$. Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$, i.e. the kernel of the norm map $R_{K/k}(\mathbbm{G}_m)\rightarrow \mathbbm{G}_m$ where $R_{K/k}$ is the Weil restriction (see Voskresenskii [@Vos98 page 37, Section 3.12]). The rationality problem for norm one tori is investigated by [@EM75], [@CTS77], [@Hur84], [@CTS87], [@LeB95], [@CK00], [@LL00], [@Flo], [@End11], [@HY17], [@HHY20], [@HY21] and [@HY2]. For example, it is known (due to Ono [@Ono63 page 70], see also Platonov [@Pla82 page 44], Kunyavskii [@Kun84 Remark 3], Platonov and Rapinchuk [@PR94 page 307]) that if $k$ is a global field (e.g. a number field) and $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ is (retract) $k$-rational, then the Hasse norm principle holds for $K/k$ (see Hoshi, Kanai and Yamasaki [@HKY22], [@HKY23]). Let $C_n$ (resp. $D_n$, $A_n$, $S_n$) be the cyclic (resp. the dihedral, the alternating, the symmetric) group of order $n$ (resp. $2n$, $n!/2$, $n!$). Let $V_4\simeq C_2\times C_2$ be the Klein four-group. Let ${\rm GL}_n(\mathbbm{F}_{p^d})$ (resp. ${\rm SL}_n(\mathbbm{F}_{p^d})$, ${\rm PGL}_n(\mathbbm{F}_{p^d})$, ${\rm PSL}_n(\mathbbm{F}_{p^d})$) be the general (resp. special, projective general, projective special) linear group of degree $n$ over the finite field $\mathbbm{F}_{p^d}$ with $p^d$ elements. Let $D(G)$ be the derived (commutator) subgroup of $G$, $Sy_p(G)$ be a $p$-Sylow subgroup of $G$ and $N_G(H)$ be the normalizer of $H\leq G$ in $G$. The following is the main theorem of this paper which gives an answer to the rationality problem (up to stable $k$-equivalence) for some norm one tori $R^{(1)}_{K/k}(\mathbbm{G}_m)$ of non-Galois extensions $K/k$. **Theorem 1**.
*Let $k$ be a field, $K/k$ be a finite non-Galois separable field extension and $L/k$ be the Galois closure of $K/k$ with $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)\lneq G$. Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$.\ (1) When $G\simeq A_4\simeq {\rm PSL}_2(\mathbbm{F}_3)$, $A_5\simeq{\rm PSL}_2(\mathbbm{F}_5)\simeq {\rm PSL}_2(\mathbbm{F}_4)\simeq {\rm PGL}_2(\mathbbm{F}_4)\simeq {\rm SL}_2(\mathbbm{F}_4)$, $A_6\simeq{\rm PSL}_2(\mathbbm{F}_9)$, $T$ is not retract $k$-rational except for the two cases $(G,H)\simeq (A_5, V_4)$, $(A_5, A_4)$ with $|G|=60$, $[G:H]=15$, $5$. For the two exceptional cases, $T$ is stably $k$-rational;\ (2) When $G\simeq S_3\simeq{\rm PGL}_2(\mathbbm{F}_2)$, $S_4\simeq{\rm PGL}_2(\mathbbm{F}_3)$, $S_5\simeq{\rm PGL}_2(\mathbbm{F}_5)$, $S_6$, $T$ is not retract $k$-rational except for the five cases $(G,H)\simeq (S_3,C_2)$, $(S_5, V_4)$ satisfying $V_4\leq D(S_5)\simeq A_5$, $(S_5, D_4)$, $(S_5, A_4)$, $(S_5, S_4)$ with $|S_3|=6$, $[S_3:C_2]=3$, $|S_5|=120$, $[S_5:H]=30$, $15$, $10$, $5$. For the case $(S_3, C_2)$, $T$ is stably $k$-rational. For the four cases $(S_5, V_4)$ satisfying $V_4\leq D(S_5)\simeq A_5$, $(S_5, D_4)$, $(S_5, A_4)$, $(S_5, S_4)$, $T$ is not stably but retract $k$-rational;\ (3) When $G\simeq {\rm GL}_2(\mathbbm{F}_3)$, ${\rm GL}_2(\mathbbm{F}_4)\simeq A_5\times C_3$, ${\rm GL}_2(\mathbbm{F}_5)$, $T$ is not retract $k$-rational except for the case $(G,H)\simeq ({\rm GL}_2(\mathbbm{F}_4), A_4)$ satisfying $A_4\leq D(G)\simeq A_5$ with $|G|=180$, $[G:H]=15$. For the exceptional case, $T$ is stably $k$-rational;\ (4) When $G\simeq {\rm SL}_2(\mathbbm{F}_3)$, ${\rm SL}_2(\mathbbm{F}_5)$, ${\rm SL}_2(\mathbbm{F}_7)$, $T$ is not retract $k$-rational;\ (5) When $G\simeq {\rm PSL}_2(\mathbbm{F}_7)\simeq {\rm PSL}_3(\mathbbm{F}_2)$, $T$ is not retract $k$-rational except for the two cases $H\simeq D_4$, $S_4$ with $|G|=168$, $[G:H]=21, 14$. For the two exceptional cases, $T$ is not stably but retract $k$-rational;\ (6) When $G\simeq {\rm PSL}_2(\mathbbm{F}_8)\simeq {\rm PGL}_2(\mathbbm{F}_8)\simeq {\rm SL}_2(\mathbbm{F}_8)$, $T$ is not retract $k$-rational except for the two cases $H={\rm Sy}_2(G)\simeq (C_2)^3$, $N_G({\rm Sy}_2(G))\simeq (C_2)^3\rtimes C_7$ with $|G|=504$, $[G:H]=63$, $9$. For the two exceptional cases, $T$ is stably $k$-rational.* **Remark 2**. (1) The case where $H=\{1\}$, i.e. $K/k$ is Galois, corresponding to Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} is obtained by Endo and Miyata [@EM75 Theorem 1.5, Theorem 2.3] (see Theorem [Theorem 6](#th2-2){reference-type="ref" reference="th2-2"} and Theorem [Theorem 8](#th2-4){reference-type="ref" reference="th2-4"}).\ (2) The case where $G\leq S_n$ is transitive and $[G:H]=n$ $(n\leq 15$, $n=2^e$ or $n=p$ is prime$)$ of Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} was solved by Hasegawa, Hoshi and Yamasaki [@HHY20], Hoshi and Yamasaki [@HY21] except for the stable $k$-rationality of $T$ with $G\simeq 9T27$ and $G\leq S_p$ for Fermat primes $p\geq 17$. Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} (6) gives an answer of the problem for $G\simeq 9T27$ as $(G,H)\simeq ({\rm PSL}_2(\mathbbm{F}_8), (C_2)^3\rtimes C_7)$ with $[G:H]=9$.\ (3) For the exceptional cases of Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"}, $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ is retract $k$-rational, i.e. 
$F=[J_{G/H}]^{fl}$ is invertible (see Section [2](#sePre){reference-type="ref" reference="sePre"} for the flabby class $F=[J_{G/H}]^{fl}$ and Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"}). In particular, for a smooth $k$-compactification $X$ of $T$, we get the vanishing $H^1(k,{\rm Pic}\, \overline{X})\simeq H^1(G,{\rm Pic}\, X_L)\simeq H^1(G,F)\simeq \Sha^2_\omega(G,J_{G/H})\simeq {\rm Br}(X)/{\rm Br}(k)\simeq {\rm Br}_{\rm nr}(k(X)/k)/{\rm Br}(k)=0$. This implies that, when $k$ is a global field, i.e. a finite extension of $\mathbbm{Q}$ or $\mathbbm{F}_q(t)$, $A(T)=0$ and $\Sha(T)=0$, i.e. $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ has the weak approximation property, Hasse principle holds for all torsors $E$ under $T$ and Hasse norm principle holds for $K/k$ (see Hoshi, Kanai and Yamasaki [@HKY22 Section 1], [@HKY1 Section 1], Hoshi and Yamasaki [@HY1 Section 4]). More precisely, for the stably $k$-rational cases $G\simeq S_3$, $A_5\simeq {\rm PSL}_2(\mathbbm{F}_4)$, ${\rm PSL}_2(\mathbbm{F}_8)$ as in Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"}, we prove the following result which implies the stable $k$-rationality of $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ for each case (see Section [2](#sePre){reference-type="ref" reference="sePre"} for the flabby class $F=[J_{G/H}]^{fl}$ and Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"}): **Theorem 3**. *Let $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)\lneq G$ be as in Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"}. Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$ with the Chevalley module $J_{G/H}\simeq\widehat{T}={\rm Hom}(T,\mathbbm{G}_m)$.\ (1) When $(G,H)\simeq (S_3, C_2)\simeq ({\rm PSL}_2(\mathbbm{F}_2), C_2)$ with $[G:H]=3$, there exists the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=4$ such that an isomorphism of $S_3$-lattices $$\begin{aligned} \mathbbm{Z}[S_3/C_2]\oplus\mathbbm{Z}[S_3/C_3]\simeq\mathbbm{Z}\oplus F\end{aligned}$$ holds with rank $3+2=1+4=5$.\ (2) When $(G,H)\simeq (A_5, V_4)\simeq ({\rm PSL}_2(\mathbbm{F}_4),V_4)$ with $[G:H]=15$, there exists the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=21$ such that an isomorphism of $A_5$-lattices $$\begin{aligned} \mathbbm{Z}[A_5/C_5]\oplus\mathbbm{Z}[A_5/A_4]^{\oplus 2}\simeq\mathbbm{Z}\oplus F\end{aligned}$$ holds with rank $12+2\cdot 5=1+21=22$.\ (3) When $(G,H)\simeq (A_5, A_4)\simeq ({\rm PSL}_2(\mathbbm{F}_4),A_4)$ with $[G:H]=5$, there exists the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=16$ such that an isomorphism of $A_5$-lattices $$\begin{aligned} \mathbbm{Z}[A_5/C_5]\oplus\mathbbm{Z}[A_5/S_3]\simeq\mathbbm{Z}[A_5/D_5]\oplus F\end{aligned}$$ holds with rank $12+10=6+16=22$.\ (4) When $(G,H)\simeq ({\rm PSL}_2(\mathbbm{F}_8), (C_2)^3)$ with $[G:H]=63$, there exists the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=73$ such that an isomorphism of ${\rm PSL}_2(\mathbbm{F}_8)$-lattices $$\begin{aligned} \mathbbm{Z}[G/S_3]\oplus\mathbbm{Z}[G/C_9]\oplus\mathbbm{Z}[G/((C_2)^3\rtimes C_7)]^{\oplus 2} \simeq\mathbbm{Z}[G/S_3]\oplus\mathbbm{Z}\oplus F\end{aligned}$$ holds with rank $84+56+2\cdot 9=84+1+73=158$.\ (5) When $(G,H)\simeq ({\rm PSL}_2(\mathbbm{F}_8), (C_2)^3\rtimes C_7)$ with $[G:H]=9$, there exists the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=64$ such that an isomorphism of ${\rm PSL}_2(\mathbbm{F}_8)$-lattices $$\begin{aligned} 
\mathbbm{Z}[G/C_3]\oplus\mathbbm{Z}[G/C_9]\oplus\mathbbm{Z}[G/D_7]\simeq \mathbbm{Z}[G/C_3]\oplus\mathbbm{Z}[G/D_9]\oplus F\end{aligned}$$ holds with rank $168+56+36=168+28+64=260$.*

*In particular, for the cases $(1)$--$(5)$, $F=[J_{G/H}]^{fl}$ is stably permutation and hence $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ is stably $k$-rational.*

We conjecture that $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ is stably $k$-rational for the cases $(G,H)\simeq ({\rm PSL}_2(\mathbbm{F}_{2^d}), (C_2)^d)$ $(d\geq 2)$, $({\rm PSL}_2(\mathbbm{F}_{2^d}),(C_2)^d\rtimes C_{2^d-1})$ $(d\geq 1)$ (see Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} for $d\leq 3$):

**Conjecture 4**. *Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$. Assume that $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)\lneq G$ are as in Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"}. When $G\simeq {\rm PSL}_2(\mathbbm{F}_{2^d})$ $(d\geq 1)$, $T$ is not retract $k$-rational except for the two cases $H={\rm Sy}_2(G)\simeq (C_2)^d$, $N_G({\rm Sy}_2(G))\simeq (C_2)^d\rtimes C_{2^d-1}$ with $|G|=(2^d+1)2^d(2^d-1)$, $[G:H]=2^{2d}-1=(2^d+1)(2^d-1)$, $2^d+1$. For the two exceptional cases, $T$ is stably $k$-rational. Moreover,\
(1) for $H\simeq (C_2)^d$, there exist the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\, F=2^{2d}+2^d+1$ and a permutation $G$-lattice $P$ such that an isomorphism of ${\rm PSL}_2(\mathbbm{F}_{2^d})$-lattices $$\begin{aligned} P\oplus \mathbbm{Z}[G/C_{2^d+1}]\oplus\mathbbm{Z}[G/((C_2)^d\rtimes C_{2^d-1})]^{\oplus 2}\simeq P\oplus\mathbbm{Z}\oplus F\ (d\geq 2) \end{aligned}$$ holds with ${\rm rank}_\mathbbm{Z}\,P+2^d(2^d-1)+2\times(2^d+1)={\rm rank}_\mathbbm{Z}\,P+1+(2^{2d}+2^d+1)$;\
(2) for $H\simeq (C_2)^d\rtimes C_{2^d-1}$, there exist the flabby class $F=[J_{G/H}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\, F=2^{2d}$ and a permutation $G$-lattice $Q$ such that an isomorphism of ${\rm PSL}_2(\mathbbm{F}_{2^d})$-lattices $$\begin{aligned} Q\oplus \mathbbm{Z}[G/C_{2^d+1}]\oplus\mathbbm{Z}[G/D_{2^d-1}]\simeq Q\oplus\mathbbm{Z}[G/D_{2^d+1}]\oplus F\ (d\geq 1) \end{aligned}$$ holds with ${\rm rank}_\mathbbm{Z}\,Q+2^d(2^d-1)+2^{d-1}(2^d+1) ={\rm rank}_\mathbbm{Z}\,Q+2^{d-1}(2^d-1)+2^{2d}$.*

Note that Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} claims that Conjecture [Conjecture 4](#con1.4){reference-type="ref" reference="con1.4"} (1) holds for $d=2,3$ with rank $P=0$, $84$ and Conjecture [Conjecture 4](#con1.4){reference-type="ref" reference="con1.4"} (2) holds for $d=1$, $2$, $3$ with rank $Q=0$, $0$, $168$. We organize this paper as follows. In Section [2](#sePre){reference-type="ref" reference="sePre"}, we prepare related materials and known results including some examples in order to prove Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} and Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"}. In Section [3](#seProof){reference-type="ref" reference="seProof"}, the proof of Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} and Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} is given. In Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"}, GAP computations which are used in the proof of Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} and Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} are given.
The GAP algorithms will be given in Section [5](#GAPalg){reference-type="ref" reference="GAPalg"}; they are also available from\
<https://www.math.kyoto-u.ac.jp/~yamasaki/Algorithm/RatProbNorm1Tori/>.

# Preliminaries {#sePre}

Let $k$ be a field and $K$ be a finitely generated field extension of $k$. A field $K$ is called *rational over $k$* (or *$k$-rational* for short) if $K$ is purely transcendental over $k$, i.e. $K$ is isomorphic to $k(x_1,\ldots,x_n)$, the rational function field over $k$ with $n$ variables $x_1,\ldots,x_n$, for some integer $n$. $K$ is called *stably $k$-rational* if $K(y_1,\ldots,y_m)$ is $k$-rational for some algebraically independent elements $y_1,\ldots,y_m$ over $K$. Two fields $K$ and $K^\prime$ are called *stably $k$-isomorphic* if $K(y_1,\ldots,y_m)\simeq K^\prime(z_1,\ldots,z_n)$ over $k$ for some algebraically independent elements $y_1,\ldots,y_m$ over $K$ and $z_1,\ldots,z_n$ over $K^\prime$. When $k$ is an infinite field, $K$ is called *retract $k$-rational* if there is a $k$-algebra $R$ contained in $K$ such that (i) $K$ is the quotient field of $R$, and (ii) the identity map $1_R : R\rightarrow R$ factors through a localized polynomial ring over $k$, i.e. there is an element $f\in k[x_1,\ldots,x_n]$, the polynomial ring over $k$, and there are $k$-algebra homomorphisms $\varphi : R\rightarrow k[x_1,\ldots,x_n][1/f]$ and $\psi : k[x_1,\ldots,x_n][1/f]\rightarrow R$ satisfying $\psi\circ\varphi=1_R$ (cf. Saltman [@Sal84]). $K$ is called *$k$-unirational* if $k\subset K\subset k(x_1,\ldots,x_n)$ for some integer $n$. It is not difficult to see that "$k$-rational" $\Rightarrow$ "stably $k$-rational" $\Rightarrow$ "retract $k$-rational" $\Rightarrow$ "$k$-unirational". Let $\overline{k}$ be a fixed separable closure of the base field $k$. Let $T$ be an algebraic $k$-torus, i.e. a group $k$-scheme with fiber product (base change) $T\times_k \overline{k}= T\times_{{\rm Spec}\, k}\,{\rm Spec}\, \overline{k} \simeq (\mathbbm{G}_{m,\overline{k}})^n$; that is, $T$ is a $k$-form of the split torus $(\mathbbm{G}_m)^n$. An algebraic $k$-torus $T$ is said to be *$k$-rational* (resp. *stably $k$-rational*, *retract $k$-rational*, *$k$-unirational*) if the function field $k(T)$ of $T$ over $k$ is $k$-rational (resp. stably $k$-rational, retract $k$-rational, $k$-unirational). Two algebraic $k$-tori $T$ and $T^\prime$ are said to be *birationally $k$-equivalent $(k$-isomorphic$)$* if their function fields $k(T)$ and $k(T^\prime)$ are $k$-isomorphic. For an equivalent definition in the language of algebraic geometry, see e.g. Manin [@Man86], Manin and Tsfasman [@MT86], Colliot-Thélène and Sansuc [@CTS07 Section 1], Beauville [@Bea16], Merkurjev [@Mer17 Section 3]. Let $L$ be a finite Galois extension of $k$ and $G={\rm Gal}(L/k)$ be the Galois group of the extension $L/k$. Let $M=\bigoplus_{1\leq i\leq n}\mathbbm{Z}\cdot u_i$ be a $G$-lattice with a $\mathbbm{Z}$-basis $\{u_1,\ldots,u_n\}$, i.e. a finitely generated $\mathbbm{Z}[G]$-module which is $\mathbbm{Z}$-free as an abelian group. Let $G$ act on the rational function field $L(x_1,\ldots,x_n)$ over $L$ with $n$ variables $x_1,\ldots,x_n$ by $$\begin{aligned} \sigma(x_i)=\prod_{j=1}^n x_j^{a_{i,j}},\quad 1\leq i\leq n\label{acts}\end{aligned}$$ for any $\sigma\in G$, where $\sigma (u_i)=\sum_{j=1}^n a_{i,j} u_j$, $a_{i,j}\in\mathbbm{Z}$. The field $L(x_1,\ldots,x_n)$ with this action of $G$ will be denoted by $L(M)$.
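To illustrate the action ([\[acts\]](#acts){reference-type="ref" reference="acts"}) in the simplest case (a standard example recalled here only for the reader's convenience; it is not needed in the sequel), assume ${\rm char}\,k\neq 2$, let $L=k(\sqrt{a})$ be a quadratic extension with $G={\rm Gal}(L/k)=\{1,\sigma\}\simeq C_2$, and let $M=\mathbbm{Z}u$ be the rank one $G$-lattice with $\sigma(u)=-u$, i.e. $M\simeq J_{G/\{1\}}=\widehat{T}$ for $T=R^{(1)}_{L/k}(\mathbbm{G}_m)$. Then $\sigma$ acts on $L(M)=L(x)$ by $\sigma(x)=x^{-1}$, and $$\begin{aligned} L(M)^G=k(y_1,y_2),\quad y_1=\frac{x+x^{-1}}{2},\ y_2=\frac{x-x^{-1}}{2\sqrt{a}},\quad\text{with}\quad y_1^2-ay_2^2=1,\end{aligned}$$ which is the function field of the norm one torus $R^{(1)}_{L/k}(\mathbbm{G}_m)$ of $L/k$. Putting $t=y_2/(1+y_1)$ gives $y_1=\frac{1+at^2}{1-at^2}$ and $y_2=\frac{2t}{1-at^2}$, so $L(M)^G=k(t)$ is $k$-rational.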
There is the duality between the category of $G$-lattices and the category of algebraic $k$-tori which split over $L$ (see [@Ono61 Section 1.2], [@Vos98 page 27, Example 6]). In fact, if $T$ is an algebraic $k$-torus, then the character group $\widehat{T}={\rm Hom}(T,\mathbbm{G}_m)$ of $T$ may be regarded as a $G$-lattice. Conversely, for a given $G$-lattice $M$, there exists an algebraic $k$-torus $T={\rm Spec}(L[M]^G)$ which splits over $L$ such that $\widehat{T}\simeq M$ as $G$-lattices. The invariant field $L(M)^G$ of $L(M)$ under the action of $G$ may be identified with the function field $k(T)$ of the algebraic $k$-torus $T$. Note that the field $L(M)^G$ is always $k$-unirational (see [@Vos98 page 40, Example 21]). Isomorphism classes of tori of dimension $n$ over $k$ correspond bijectively to the elements of the set $H^1(\mathcal{G},{\rm GL}_n(\mathbbm{Z}))$ where $\mathcal{G}={\rm Gal}(\overline{k}/k)$ since ${\rm Aut}(\mathbbm{G}_m^n)={\rm GL}_n(\mathbbm{Z})$. The $k$-torus $T$ of dimension $n$ is determined uniquely by the integral representation $h : \mathcal{G}\rightarrow {\rm GL}_n(\mathbbm{Z})$ up to conjugacy, and the group $h(\mathcal{G})$ is a finite subgroup of ${\rm GL}_n(\mathbbm{Z})$ (see [@Vos98 page 57, Section 4.9])). A $G$-lattice $M$ is called *permutation* $G$-lattice if $M$ has a $\mathbbm{Z}$-basis permuted by $G$, i.e. $M\simeq \oplus_{1\leq i\leq m}\mathbbm{Z}[G/H_i]$ for some subgroups $H_1,\ldots,H_m$ of $G$. $M$ is called *stably permutation* $G$-lattice if $M\oplus P\simeq P^\prime$ for some permutation $G$-lattices $P$ and $P^\prime$. $M$ is called *invertible* if it is a direct summand of a permutation $G$-lattice, i.e. $P\simeq M\oplus M^\prime$ for some permutation $G$-lattice $P$ and a $G$-lattice $M^\prime$. We say that two $G$-lattices $M_1$ and $M_2$ are *similar* if there exist permutation $G$-lattices $P_1$ and $P_2$ such that $M_1\oplus P_1\simeq M_2\oplus P_2$. The set of similarity classes becomes a commutative monoid with respect to the sum $[M_1]+[M_2]:=[M_1\oplus M_2]$ and the zero $0=[P]$ where $P$ is a permutation $G$-lattice. For a $G$-lattice $M$, there exists a short exact sequence of $G$-lattices $0 \rightarrow M \rightarrow P \rightarrow F \rightarrow 0$ where $P$ is permutation and $F$ is flabby which is called a *flabby resolution* of $M$ (see Endo and Miyata [@EM75 Lemma 1.1], Colliot-Thélène and Sansuc [@CTS77 Lemma 3], Manin [@Man86 Appendix, page 286]). The similarity class $[F]$ of $F$ is determined uniquely and is called *the flabby class* of $M$. We denote the flabby class $[F]$ of $M$ by $[M]^{fl}$. We say that $[M]^{fl}$ is invertible if $[M]^{fl}=[E]$ for some invertible $G$-lattice $E$. For algebraic $k$-tori $T$, we see that $[\widehat{T}]^{fl}=[{\rm Pic}\,\overline{X}]$ where $X$ is a smooth $k$-compactification of $T$, i.e. smooth projective $k$-variety $X$ containing $T$ as a dense open subvariety, and $\overline{X}=X\times_k\overline{k}$ (see Voskresenskii [@Vos69 Section 4, page 1213], [@Vos70a Section 3, page 7], [@Vos74], [@Vos98 Section 4.6], Kunyavskii [@Kun07 Theorem 1.9] and Colliot-Thélène [@CT07 Theorem 5.1, page 19] for any field $k$). The flabby class $[M]^{fl}=[\widehat{T}]^{fl}$ plays crucial role in the rationality problem for $L(M)^G\simeq k(T)$ as follows (see also Colliot-Thélène and Sansuc [@CTS77 Section 2], [@CTS87 Proposition 7.4] Voskresenskii [@Vos98 Section 4.6], Kunyavskii [@Kun07 Theorem 1.7], Colliot-Thélène [@CT07 Theorem 5.4], Hoshi and Yamasaki [@HY17 Section 1]): **Theorem 5**. 
*Let $L/k$ be a finite Galois extension with Galois group $G={\rm Gal}(L/k)$ and $M$, $M^\prime$ be $G$-lattices.\ (i) $($Endo and Miyata [@EM73 Theorem 1.6]$)$ $[M]^{fl}=0$ if and only if $L(M)^G$ is stably $k$-rational.\ (ii) $($Voskresenskii [@Vos74 Theorem 2]$)$ $[M]^{fl}=[M^\prime]^{fl}$ if and only if $L(M)^G$ and $L(M^\prime)^G$ are stably $k$-isomorphic. (iii) $($Saltman [@Sal84 Theorem 3.14]$)$ $[M]^{fl}$ is invertible if and only if $L(M)^G$ is retract $k$-rational.* Let $K/k$ be a separable field extension of degree $m$ and $L/k$ be the Galois closure of $K/k$. Let $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)$ with $[G:H]=m$. The norm one torus $R^{(1)}_{K/k}(\mathbbm{G}_m)$ has the Chevalley module $J_{G/H}$ as its character module and the field $L(J_{G/H})^G$ as its function field where $J_{G/H}=(I_{G/H})^\circ={\rm Hom}_\mathbbm{Z}(I_{G/H},\mathbbm{Z})$ is the dual lattice of $I_{G/H}={\rm Ker}\ \varepsilon$ and $\varepsilon : \mathbbm{Z}[G/H]\rightarrow \mathbbm{Z}$ is the augmentation map (see [@Vos98 Section 4.8]). We have the exact sequence $0\rightarrow \mathbbm{Z}\rightarrow \mathbbm{Z}[G/H] \rightarrow J_{G/H}\rightarrow 0$ and ${\rm rank}_\mathbbm{Z}\,J_{G/H}=m-1$. Write $J_{G/H}=\oplus_{1\leq i\leq n-1}\mathbbm{Z}x_i$. Then the action of $G$ on $L(J_{G/H})=L(x_1,\ldots,x_{m-1})$ is of the form ([\[acts\]](#acts){reference-type="ref" reference="acts"}). The rationality problem for norm one tori is investigated by [@EM75], [@CTS77], [@Hur84], [@CTS87], [@LeB95], [@CK00], [@LL00], [@Flo], [@End11], [@HY17], [@HHY20], [@HY21] and [@HY2]. **Theorem 6** (Endo and Miyata [@EM75 Theorem 1.5], Saltman [@Sal84 Theorem 3.14]). *Let $L/k$ be a finite Galois field extension and $G={\rm Gal}(L/k)$. Then the following conditions are equivalent:\ (i) $R^{(1)}_{L/k}(\mathbbm{G}_m)$ is retract $k$-rational;\ (ii) all the Sylow subgroups of $G$ are cyclic.* **Theorem 7** (Endo and Miyata [@EM75 Theorem 2.3], Colliot-Thélène and Sansuc [@CTS77 Proposition 3]). *Let $L/k$ be a finite Galois field extension and $G={\rm Gal}(L/k)$. Then the following conditions are equivalent:\ (i) $R^{(1)}_{L/k}(\mathbbm{G}_m)$ is stably $k$-rational;\ (ii) $G\simeq C_m$ $(m\geq 1)$ or $G\simeq C_m\times \langle x,y\mid x^n=y^{2^d}=1, yxy^{-1}=x^{-1}\rangle$ $(m\geq 1:odd, n\geq 3:odd, d\geq 1)$ with ${\rm gcd}\{m,n\}=1$;\ (iii) $G\simeq \langle s,t\mid s^m=t^{2^d}=1, tst^{-1}=s^r\rangle$ $(m\geq 1:odd, d\geq 0)$ with $r^2\equiv 1\pmod{m}$;\ (iv) all the Sylow subgroups of $G$ are cyclic and $H^4(G,\mathbbm{Z})\simeq \widehat H^0(G,\mathbbm{Z})$ where $\widehat H$ is the Tate cohomology.* **Theorem 8** (Endo [@End11 Theorem 2.1]). *Let $K/k$ be a finite non-Galois, separable field extension and $L/k$ be the Galois closure of $K/k$. Assume that the Galois group of $L/k$ is nilpotent. Then the norm one torus $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is not retract $k$-rational.* **Theorem 9** (Endo [@End11 Theorem 3.1]). *Let $K/k$ be a finite non-Galois, separable field extension and $L/k$ be the Galois closure of $K/k$. Let $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)\leq G$. Assume that all the Sylow subgroups of $G$ are cyclic. 
Then the norm one torus $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is retract $k$-rational, and the following conditions are equivalent:\ (i) $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is stably $k$-rational;\ (ii) $G\simeq C_m\times D_n$ $(m\geq 1:odd,n\geq 3:odd)$ with ${\rm gcd}\{m,n\}=1$ and $H\simeq C_2$;\ (iii) $H\simeq C_2$ and $G\simeq C_r\rtimes H$ $(r\geq 3:odd)$ where $H$ acts non-trivially on $C_r$.* **Example 10**. Let $K/k$ be a separable field extension of degree $m$ and $L/k$ be the Galois closure of $K/k$. Let $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)$ with $[G:H]=m$. Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$.\ (1) $G\simeq C_n$. We have $H=\{1\}$. By Theorem [Theorem 7](#th2-3){reference-type="ref" reference="th2-3"}, $T$ is stably $k$-rational.\ (2) $G\simeq D_n$ $(n\geq 3)$. By the condition $\bigcap_{\sigma\in G} H^\sigma=\{1\}$, we have $H=\{1\}$ or $H\simeq C_2$ with $H\neq Z(D_n)$. When $H=\{1\}$, it follows from Theorem [Theorem 7](#th2-3){reference-type="ref" reference="th2-3"} that $T$ is is stably $k$-rational if and only if $n$ is odd. When $H\simeq C_2$ with $H\neq Z(D_n)$, it follows from Theorem [Theorem 9](#th2-5){reference-type="ref" reference="th2-5"} (resp. Theorem [Theorem 8](#th2-4){reference-type="ref" reference="th2-4"}) that if $n$ is odd (resp. $2$-power), then $T$ is stably $k$-rational (resp. not retract $k$-rational).\ (3) $G\simeq F_{pl}=\langle x,y\mid x^p=y^l=1, y^{-1}xy=x^t, t=\lambda^{\frac{p-1}{l}}, \langle\lambda\rangle=\mathbbm{F}_p^\times\rangle$ where $p$ is a prime number and $F_{pl}\simeq C_p\rtimes C_l$ $(2<l\mid p-1)$ is the Frobenius group of order $pl$. By the condition $\bigcap_{\sigma\in G} H^\sigma=\{1\}$, we have $H\leq C_l$. When $H=\{1\}$ (resp. $H\neq \{1\}$), it follows from Theorem [Theorem 6](#th2-2){reference-type="ref" reference="th2-2"} and Theorem [Theorem 7](#th2-3){reference-type="ref" reference="th2-3"} (resp. Theorem [Theorem 9](#th2-5){reference-type="ref" reference="th2-5"}) that $T$ is not stably but retract $k$-rational.\ (4) $G\simeq Q_{4n}=\langle x,y\mid x^{2n}=y^4=1, x^n=y^2, y^{-1}xy=x^{-1}\rangle$ $(n\geq 2)$. By the condition $\bigcap_{\sigma\in G} H^\sigma=\{1\}$, we have $H=\{1\}$. It follows from Theorem [Theorem 7](#th2-3){reference-type="ref" reference="th2-3"} (resp. Theorem [Theorem 6](#th2-2){reference-type="ref" reference="th2-2"}) that if $n$ is odd (resp. even), then $T$ is stably $k$-rational because $G\simeq \langle x^2\rangle\rtimes \langle y\rangle\simeq C_n\rtimes C_4$ (resp. not retract $k$-rational because $Q_8\leq {\rm Syl}_2(G)$).\ (5) $G\not\simeq C_n$ and $G$ is nilpotent, i.e. the direct product of each $p$-Sylow subgroup of $G$. It follows from Theorem [Theorem 6](#th2-2){reference-type="ref" reference="th2-2"} (resp. Theorem [Theorem 8](#th2-4){reference-type="ref" reference="th2-4"}) that if $H=\{1\}$ (resp. $H\neq\{1\}$), then $T$ is not retract $k$-rational. When $G\simeq S_n$ or $A_n$ and $[G:H]=[K:k]=n$, we have: **Theorem 11** (Colliot-Thélène and Sansuc [@CTS87 Proposition 9.1], [@LeB95 Theorem 3.1], [@CK00 Proposition 0.2], [@LL00], Endo [@End11 Theorem 4.1], see also [@End11 Remark 4.2 and Theorem 4.3]). *Let $K/k$ be a non-Galois separable field extension of degree $n$ and $L/k$ be the Galois closure of $K/k$. Assume that ${\rm Gal}(L/k)=S_n$, $n\geq 3$, and ${\rm Gal}(L/K)=S_{n-1}$ is the stabilizer of one of the letters in $S_n$. 
Then we have:\ (i) $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is retract $k$-rational if and only if $n$ is a prime;\ (ii) $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is $($stably$)$ $k$-rational if and only if $n=3$.* **Theorem 12** (Endo [@End11 Theorem 4.4], Hoshi and Yamasaki [@HY17 Corollary 1.11]). *Let $K/k$ be a non-Galois separable field extension of degree $n$ and $L/k$ be the Galois closure of $K/k$. Assume that ${\rm Gal}(L/k)=A_n$, $n\geq 4$, and ${\rm Gal}(L/K)=A_{n-1}$ is the stabilizer of one of the letters in $A_n$. Then we have:\ (i) $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is retract $k$-rational if and only if $n$ is a prime.\ (ii) $R^{(1)}_{K/k}(\mathbbm{G}_m)$ is stably $k$-rational if and only if $n=5$.* A necessary and sufficient condition for the classification of stably/retract rational norm one tori $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ with $[K:k]=n$ $(n\leq 15$, $n=2^e$ or $n=p$ is prime$)$ and $G={\rm Gal}(L/k)\leq S_n$ except for the stable $k$-rationality of $T$ with $G\simeq 9T27$ and $G\leq S_p$ for Fermat primes $p\geq 17$ was given by Hoshi and Yamasaki [@HY21] (for the cases $n=p$ and $n\leq 10$) and Hasegawa, Hoshi and Yamasaki [@HHY20] (for the cases $n=2^e$ and $n=12,14,15$) (see also Remark [Remark 2](#r1.2){reference-type="ref" reference="r1.2"}).\ An algebraic $k$-torus $T$ is called *$G$-torus* if the minimal splitting field $L$ of $T$ satisfies ${\rm Gal}(L/k)\simeq G$. We also recall the following useful theorem can be applied for not only $G$-tori $T$ (e.g. norm one tori $R^{(1)}_{K/k}(\mathbbm{G}_m)$) but also $k$-varieties $X$ with $G$-lattices ${\rm Pic}\,\overline{X}$ (see e.g. Beauville, Colliot-Thélène, Sansuc and Swinnerton-Dyer [@BCTSSD85 Remarque 3]): **Theorem 13** (Endo and Miyata [@EM75 Theorem 3.3] with corrigenda [@EM80], Endo and Kang [@EK17 Theorem 6.5], Kang and Zhou [@KZ20 Theorem 5.2], see also Hoshi, Kang and Yamasaki [@HKY2 Appendix]). *Let $n\geq 1$ be an integer and $\zeta_n$ be a primitive $n$-th root of unity. Let $h_n$ $($resp. $h_n^+$$)$ be the class number of the cyclotomic field $\mathbbm{Q}(\zeta_n)$ $($resp. the maximal real subfield $\mathbbm{Q}(\zeta_n+\zeta_n^{-1})$ of $\mathbbm{Q}(\zeta_n)$$)$. Let $m\geq 3$ be an odd integer.\ (i) $h_n=1$ if and only if all the $C_n$-tori $T$ are stably $k$-rational, i.e. $[\widehat{T}]^{fl}=[{\rm Pic}\,\overline{X}]=0$;\ (ii) $h_m^+=1$ if and only if all the $D_m$-tori $T$ are stably $k$-rational, i.e. $[\widehat{T}]^{fl}=[{\rm Pic}\,\overline{X}]=0$.* # Proof of Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} and Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} {#seProof} Let $K/k$ be a finite non-Galois separable field extension and $L/k$ be the Galois closure of $K/k$. Let $T=R^{(1)}_{K/k}(\mathbbm{G}_m)$ be the norm one torus of $K/k$. Assume that $G={\rm Gal}(L/k)$ and $H={\rm Gal}(L/K)\lneq G$. We may assume that $H$ is the stabilizer of one of the letters in $G$, i.e. $L=k(\theta_1,\ldots,\theta_m)$ and $K=L^H=k(\theta_i)$ where $1\leq i\leq m$. Then we have $\bigcap_{\sigma\in G} H^\sigma=\{1\}$ where $H^\sigma=\sigma^{-1}H\sigma$ and hence $H$ contains no normal subgroup of $G$ except for $\{1\}$. 
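Note that the condition $\bigcap_{\sigma\in G} H^\sigma=\{1\}$ says precisely that the normal core of $H$ in $G$ is trivial, so it can already be checked with built-in GAP functions alone; the following minimal sketch is only an illustration (the computations in Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} use the function `Hcandidates` described below):

gap> G:=AlternatingGroup(5);;
gap> reps:=List(ConjugacyClassesSubgroups(G),Representative);;
gap> cands:=Filtered(reps,H->IsTrivial(Core(G,H)));; # keep H with trivial normal core
gap> List(cands,StructureDescription);

For $G\simeq A_5$ this recovers, up to the ordering of the conjugacy classes, the eight candidate subgroups $\{1\}$, $C_2$, $C_3$, $V_4$, $C_5$, $S_3$, $D_5$, $A_4$ listed in Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"}.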
We use the following GAP [@GAP] functions in order to prove Theorem [Theorem 1](#mainth){reference-type="ref" reference="mainth"} and Theorem [Theorem 3](#th1.3){reference-type="ref" reference="th1.3"} (see Section [5](#GAPalg){reference-type="ref" reference="GAPalg"} for details of the GAP codes):\ `ConjugacyClassesSubgroups2(`$G$`)` returns conjugacy classes of subgroups of a group $G$ with a fixed ordering (the built-in function `ConjugacyClassesSubgroups(`$G$`)` of GAP returns the same classes but the ordering of the result may not be fixed). (This function was given in Hoshi and Yamasaki [@HY17 Section 4.1].)\ `Hcandidates(`$G$`)` returns the subgroups $H$ of $G$ which satisfy $\bigcap_{\sigma\in G} H^\sigma=\{1\}$ where $H^\sigma=\sigma^{-1}H\sigma$ (hence $H$ contains no normal subgroup of $G$ except for $\{1\}$).\ `Norm1TorusJTransitiveGroup(`$d,n$`)` returns the Chevalley module $J_{G/H}$ for the $n$-th transitive subgroup $G = {}_dT_n \leq S_d$ of degree $d$ where $H$ is the stabilizer of one of the letters in $G$. (The input and output of this function are the same as those of the function `Norm1TorusJ(`$d,n$`)` which is given in Hoshi and Yamasaki [@HY17 Chapter 8] but this function is more efficient.)\ `Norm1TorusJCoset(`$G,H$`)` returns the Chevalley module $J_{G/H}$ for a group $G$ and a subgroup $H\leq G$.\ `StablyPermutationFCheckPPari(`$G,L_1,L_2$`)` returns the same as `StablyPermutationFCheckP(`$G,L_1,L_2$`)` but using efficient PARI/GP functions (e.g. matker, matsnf) [@PARI2]. (This function applies a union-find algorithm and it also requires PARI/GP [@PARI2].)\ `StablyPermutationFCheckPFromBasePari(`$G,m_i,L_1,L_2$`)` returns the same as\ `StablyPermutationFCheckPPari(`$G,L_1,L_2$`)` but with respect to $m_i=\mathcal{P}^\circ$ instead of the original $\mathcal{P}^\circ$ as in Hoshi and Yamasaki [@HY17 Equation (4) in Section 5.1]. (See [@HY17 Section 5.7, Method III]. This function applies a union-find algorithm and it also requires PARI/GP [@PARI2].)\ Some related GAP algorithms are also available from\ <https://www.math.kyoto-u.ac.jp/~yamasaki/Algorithm/RatProbNorm1Tori/>.\ (1), (2) For $G\simeq A_4\simeq {\rm PSL}_2(\mathbbm{F}_3)$ with $|G|=12$ (resp. $G\simeq S_4\simeq{\rm PGL}_2(\mathbbm{F}_3)$ with $|G|=24$), there exist $5$ (resp. $11$) subgroups $H\leq G$ up to conjugacy. Out of $5$ (resp. $11$), we get $s=3$ subgroups $H_1\simeq \{1\}$, $H_2\simeq C_2$, $H_3\simeq C_3\leq G$ (resp. $s=7$ subgroups $H_1=\{1\},\ldots,H_{7}\simeq S_3\leq G$) with the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$ $(1\leq i\leq s)$. By applying the function `Norm1TorusJCoset(`$G$,$H_i$`)`, we obtain $J_{G/H_i}$ in GAP. Then by the function `IsInvertibleF` (see Hoshi and Yamasaki [@HY17 Section 5.2, Algorithm F2]) we find that $[J_{G/H_i}]^{fl}$ is not invertible for each $H_i\lneq G$ $(1\leq i\leq s)$. It follows from Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"} that $T$ is not retract $k$-rational (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations). For $G\simeq A_5\simeq{\rm PSL}_2(\mathbbm{F}_5)\simeq {\rm PSL}_2(\mathbbm{F}_4)\simeq {\rm PGL}_2(\mathbbm{F}_4)\simeq {\rm SL}_2(\mathbbm{F}_4)$ with $|G|=60$, there exist $9$ subgroups $H_1=\{1\},\ldots,H_{9}=G\leq G$ up to conjugacy. As in the case of $G\simeq A_4$, by applying `IsInvertibleF`, we find that $[J_{A_5/H_i}]^{fl}$ $(1\leq i\leq 8)$ is not invertible as an $A_5$-lattice except for $i=4, 8$.
For $i=4, 8$, we have $G=15T5$ and $H_4\simeq V_4$ with $[G:H_4]=15$, and $G=5T4$ and $H_8\simeq A_4$ with $[G:H_8]=5$ where $nTm$ is the $m$-th transitive subgroup of $S_n$ (see [@GAP]). Hence it follows from [@HHY20 Theorem 1.2 (4)] and [@HY17 Theorem 1.10] that $T$ is stably $k$-rational for $i=4,8$. Alternatively, by a method given in [@HY17 Chapter 5 and Chapter 8], [@HHY20 Algorithm 4.1 (3)], we get an isomorphism of $A_5$-lattices for $i=4$: $$\begin{aligned} \mathbbm{Z}[A_5/H_5]\oplus\mathbbm{Z}[A_5/H_8]^{\oplus 2}\simeq\mathbbm{Z}\oplus F\end{aligned}$$ with rank $12+2\cdot 5=1+21=22$ where $H_5\simeq C_5$, $H_8\simeq A_4$, $F=[J_{A_5/H_4}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\, F=21$. Similarly, for $i=8$, we have: $$\begin{aligned} \mathbbm{Z}[A_5/H_5]\oplus\mathbbm{Z}[A_5/H_6]\simeq\mathbbm{Z}[A_5/H_7]\oplus F\end{aligned}$$ with rank $12+10=6+16=22$ where $H_5\simeq C_5$, $H_6\simeq S_3$, $H_7\simeq D_5$, $F=[J_{A_5/H_8}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\, F=16$. This implies that $F=[J_{A_5/H_i}]^{fl}=0$ and hence $T$ is stably $k$-rational for $i=4,8$ (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations). For $G\simeq S_5\simeq{\rm PGL}_2(\mathbbm{F}_5)$ with $|G|=120$, there exist $19$ subgroups $H_1=\{1\},\ldots,H_{18}\simeq A_5,H_{19}=G\leq G$ up to conjugacy. By the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$, we should take $H_i$ $(1\leq i\leq 17)$ (i.e. $H_i\not\simeq A_5, S_5$). By applying `IsInvertibleF`, we find that $[J_{S_5/H_i}]^{fl}$ $(1\leq i\leq 17)$ is not invertible except for $i=5,12,14,17$. This implies that $T$ is not retract $k$-rational except for $i=5,12,14,17$. Note that $H_5\simeq H_6\simeq V_4$ and $H_5\leq D(G)\simeq A_5$ although $H_6\cap D(G)\simeq C_2$ where $D(G)$ is the derived (commutator) subgroup of $G$. For the cases $i=5,12,14,17$, we see that $[J_{S_5/H_i}]^{fl}$ is invertible and hence $T$ is retract $k$-rational. We find that $G=30T25$ and $H_5\simeq V_4$ with $[G:H_5]=30$, $G=15T10$ and $H_{12}\simeq D_4$ with $[G:H_{12}]=15$, $G=10T12$ and $H_{14}\simeq A_4$ with $[G:H_{14}]=10$, $G=5T5$ and $H_{17}\simeq S_4$ with $[G:H_{17}]=5$. Hence it follows from [@HHY20 Theorem 1.2 (1), (4)] and [@End11 Theorem 4.4] (see also [@HY17 Theorem 1.10]) that $T$ is not stably but retract $k$-rational for $i=12,14,17$. Here we give a proof not only for the remaining case $i=5$ but also for $i=12,14,17$ together. By [@HHY20 Algorithm 4.1 (2)], we obtain that $F_i=[J_{S_5/H_i}]^{fl}\neq 0$ with ${\rm rank}_\mathbbm{Z}\, F_i=151, 76, 41, 16$ for $i=5,12,14,17$ respectively. This implies that $T$ is not stably but retract $k$-rational for $i=5,12,14,17$ (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations). For $G\simeq A_6\simeq{\rm PSL}_2(\mathbbm{F}_9)$ with $|G|=360$ (resp. $G\simeq S_6$ with $|G|=720$), there exist $22$ (resp. $56$) subgroups $H_1=\{1\},\ldots,H_{22}=G\leq G$ (resp. $H_1=\{1\},\ldots,H_{55}\simeq A_6,H_{56}=G\leq G$) up to conjugacy. By the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$, we should take $H_i$ $(1\leq i\leq s)$ with $s=21$ (resp. $s=54$). By `IsInvertibleF`, we find that $[J_{G/H_i}|_{{\rm Syl_2}(G)}]^{fl}$ $(1\leq i\leq s)$ is not invertible as a ${\rm Syl}_2(G)$-lattice except for $i=11, 17, 18$ (resp. $i=27, 34, 43, 44, 48, 49$). For the cases $i=11,17,18$ (resp. $i=27, 34, 43, 44, 48, 49$), we obtain that $[J_{G/H_i}|_{{\rm Syl_3}(G)}]^{fl}$ is not invertible as a ${\rm Syl}_3(G)$-lattice.
This implies that $[J_{G/H_i}]^{fl}$ is not invertible as $G$-lattice for each $1\leq i\leq s$ (see Hoshi and Yamasaki [@HY17 Lemma 2.17], see also Colliot-Thélène and Sansuc [@CTS77 Remarque R2, page 180]). It follows from Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"} that $T$ is not retract $k$-rational (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations).\ (3), (4) For $G\simeq {\rm GL}_2(\mathbbm{F}_3)$ with $|G|=48$ (resp. $G\simeq {\rm SL}_2(\mathbbm{F}_3)$ with $|G|=24$, $G\simeq {\rm SL}_2(\mathbbm{F}_5)$ with $|G|=120$), there exist $16$ (resp. $7$, $12$) subgroups $H\leq G$ up to conjugacy. Out of $16$ (resp. $7$, $12$), we get $s=5$ (resp. $2$, $3$) subgroups $H_1=\{1\},\ldots,H_{5}\simeq S_3\leq G$ (resp. $H_1=\{1\}, H_{2}\simeq C_3\leq G$, $H_1=\{1\},\ldots,H_{3}\simeq C_5\leq G$) with the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$ $(1\leq i\leq s)$. Applying `IsInvertibleF`, we find that $[J_{G/H_i}]^{fl}$ $(1\leq i\leq s)$ is not invertible. It follows from Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"} that $T$ is not retract $k$-rational (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations). For $G\simeq {\rm GL}_2(\mathbbm{F}_4)\simeq A_5\times C_3$ with $|G|=180$, there exist $21$ subgroups $H\leq G$ up to conjugacy. Out of $21$, we get $11$ subgroups $H_1=\{1\},\ldots,H_{11}\simeq A_4\leq G$ with the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$ $(1\leq i\leq 11)$. Applying `IsInvertibleF`, we find that $[J_{G/H_i}|_{{\rm Syl_2}(G)}]^{fl}$ $(1\leq i\leq 11)$ is not invertible except for $i=5,11$. For $i=5$, we have $H_5\simeq V_4$ and $[J_{G/H_i}|_{{\rm Syl_3}(G)}]^{fl}$ $(1\leq i\leq 11)$ is not invertible. This implies that $T$ is not retract $k$-rational except for the case $i=11$. For the case $i=11$, we have $H_{11}\simeq A_4$. Note that $H_9\simeq H_{10}\simeq H_{11}\simeq A_4$ and $D(G)\cap H_9\simeq D(G)\cap H_{10}\simeq V_4$ although $H_{11}\leq D(G)\simeq A_5$ where $D(G)$ is the derived (commutator) subgroup of $G$. We also see $G=15T16$ and $[G:H_{11}]=15$. Hence it follows from [@HHY20 Theorem 1.2 (4)] that $T$ is stably $k$-rational. Alternatively, by [@HHY20 Algorithm 4.1 (3)], we get $F=[J_{G/H_{11}}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=36$, $F^\prime=[F]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F^\prime=14$ and $F^{\prime\prime}=[F^\prime]^{fl}\simeq \mathbbm{Z}$ and hence $[F]=[F^\prime]=[F^{\prime\prime}]=0$ (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations). For $G\simeq {\rm SL}_2(\mathbbm{F}_7)$ with $|G|=336$ (resp. $G\simeq {\rm GL}_2(\mathbbm{F}_5)$ with $|G|=480$), there exist $19$ (resp. $48$) subgroups $H\leq G$ up to conjugacy. Out of $19$ (resp. $48$), we get $s=4$ (resp. $13$) $H_1=\{1\},\ldots,H_4\simeq C_3\leq G$ (resp. $H_1=\{1\},\ldots,H_{13}\simeq C_5\rtimes C_4\leq G$) with the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$ $(1\leq i\leq s)$. Applying `IsInvertibleF`, we find that $[J_{G/H_i}|_{{\rm Syl_2}(G)}]^{fl}$ $(1\leq i\leq s)$ is not invertible as a ${\rm Syl}_2(G)$-lattice. This implies that $[J_{G/H_i}]^{fl}$ is not invertible as a $G$-lattice for each $1\leq i\leq s$ (see Hoshi and Yamasaki [@HY17 Lemma 2.17], see also Colliot-Thélène and Sansuc [@CTS77 Remarque R2, page 180]). 
It follows from Theorem [Theorem 5](#th2-1){reference-type="ref" reference="th2-1"} that $T$ is not retract $k$-rational (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations).\ (5) For $G\simeq {\rm PSL}_2(\mathbbm{F}_7)\simeq {\rm PSL}_3(\mathbbm{F}_2)$ with $|G|=168$, there exist $15$ subgroups $H_1=\{1\},\ldots,H_{15}=G\leq G$ up to conjugacy. By the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$, we should take $H_i$ $(1\leq i\leq 14)$. By `IsInvertibleF`, we find that $[J_{G/H_i}]^{fl}$ $(1\leq i\leq 14)$ is not invertible except for $i=9, 13, 14$. For the cases $i=9,13,14$, we see that $[J_{G/H_i}]^{fl}$ is invertible. We also find that $G=21T14$ and $H_9\simeq D_4$ with $[G:H_9]=21$, $G=7T5$ and $H_{13}\simeq H_{14}\simeq S_4$ with $[G:H_{13}]=[G:H_{14}]=7$. Hence it follows from [@HY17 Theorem 8.5 (ii)] that $T$ is not stably but retract $k$-rational for $i=13,14$. Here we give a proof of not only the remaining case $i=9$ but also $i=13,14$ together. By [@HHY20 Algorithm 4.1 (2)], we obtain that $F_i=[J_{G/H_i}]^{fl}\neq 0$ with ${\rm rank}_\mathbbm{Z}\, F_i=148, 36, 36$ for $i=9,13,14$ respectively. This implies that $T$ is not stably but retract $k$-rational for $i=9,13,14$ (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations).\ (6) For $G\simeq {\rm PSL}_2(\mathbbm{F}_8)$ with $|G|=504$, there exist $12$ subgroups $H_1=\{1\},\ldots,H_{12}=G\leq G$ up to conjugacy. By the condition $\bigcap_{\sigma\in G} H_i^\sigma=\{1\}$, we should take $H_i$ $(1\leq i\leq 11)$. By `IsInvertibleF`, we find that $[J_{G/H_i}|_{{\rm Syl_2}(G)}]^{fl}$ $(1\leq i\leq 11)$ is not invertible as a ${\rm Syl}_2(G)$-lattice except for $i=7, 11$. For the cases $i=7, 11$, we have $G\leq S_{63}$ is transitive and $H_7\simeq (C_2)^3$ with $[G:H_7]=63$, $G=9T27$ and $H_{11}\simeq (C_2)^3\rtimes C_7$ with $[G:H_{11}]=9$ where $nTm$ is the $m$-th transitive subgroup of $S_n$ (see [@GAP]). By a method given in [@HY17 Chapter 5 and Chapter 8], [@HHY20 Algorithm 4.1 (3)], we get an isomorphism of ${\rm PSL}_2(\mathbbm{F}_8)$-lattices for $i=7$: $$\begin{aligned} \mathbbm{Z}[G/H_5]\oplus\mathbbm{Z}[G/H_8]\oplus\mathbbm{Z}[G/H_{11}]^{\oplus 2} \simeq\mathbbm{Z}[G/H_5]\oplus\mathbbm{Z}\oplus F\end{aligned}$$ with rank $84+56+2\cdot 9=84+1+73=158$ where $H_5\simeq S_3$, $H_8\simeq C_9$, $H_{11}\simeq (C_2)^3\rtimes C_7$, $F=[J_{G/H_7}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=73$. Similarly, for $i=11$, we have: $$\begin{aligned} \mathbbm{Z}[G/H_3]\oplus\mathbbm{Z}[G/H_8]\oplus\mathbbm{Z}[G/H_9]\simeq \mathbbm{Z}[G/H_3]\oplus\mathbbm{Z}[G/H_{10}]\oplus F\end{aligned}$$ with rank $168+56+36=168+28+64=260$ where $H_3\simeq C_3$, $H_8\simeq C_9$, $H_9\simeq D_7$, $H_{10}\simeq D_9$, $F=[J_{G/H_{11}}]^{fl}$ with ${\rm rank}_\mathbbm{Z}\,F=64$. This implies that $F=[J_{G/H_i}]^{fl}=0$ and hence $T$ is stably $k$-rational for $i=7,11$ (see Section [4](#GAPcomp){reference-type="ref" reference="GAPcomp"} for GAP computations).0◻ # GAP computations {#GAPcomp} gap> Read("FlabbyResolutionFromBase.gap"); gap> A4:=AlternatingGroup(4); # G=A4=PSL(2,3) with |G|=12 Alt( [ 1 .. 
4 ] ) gap> A4cs:=ConjugacyClassesSubgroups2(A4); # subgroups H of G up to conjugacy [ Group( () )^G, Group( [ (1,2)(3,4) ] )^G, Group( [ (2,4,3) ] )^G, Group( [ (1,3)(2,4), (1,2)(3,4) ] )^G, Group( [ (1,3)(2,4), (1,2)(3,4), (2,4,3) ] )^G ] gap> Length(A4cs); 5 gap> A4H:=Hcandidates(A4); # exclude H=A4 [ Group(()), Group([ (1,2)(3,4) ]), Group([ (2,4,3) ]) ] gap> Length(A4H); 3 gap> A4J:=List(A4H,x->Norm1TorusJCoset(A4,x));; gap> List(A4J,IsInvertibleF); # not retract rational [ false, false, false ] gap> List(A4cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C2 x C2", "A4" ] gap> List(A4H,StructureDescription); # for checking H [ "1", "C2", "C3" ] gap> A5:=AlternatingGroup(5); # G=A5=PSL(2,5)=PSL(2,4)=PGL(2,4)=SL(2,4) with |G|=60 Alt( [ 1 .. 5 ] ) gap> A5cs:=ConjugacyClassesSubgroups2(A5); # subgroups H of G up to conjugacy [ Group( () )^G, Group( [ (2,3)(4,5) ] )^G, Group( [ (3,4,5) ] )^G, Group( [ (2,3)(4,5), (2,4)(3,5) ] )^G, Group( [ (1,2,3,4,5) ] )^G, Group( [ (1,2)(4,5), (3,4,5) ] )^G, Group( [ (1,4)(2,3), (1,3)(4,5) ] )^G, Group( [ (3,4,5), (2,4)(3,5) ] )^G, Group( [ (2,4)(3,5), (1,2,5) ] )^G ] gap> Length(A5cs); 9 gap> A5H:=Hcandidates(A5); # exclude H=A5 [ Group(()), Group([ (2,3)(4,5) ]), Group([ (3,4,5) ]), Group([ (2,3)(4,5), (2,4)(3,5) ]), Group([ (1,2,3,4,5) ]), Group([ (1,2)(4,5), (3,4,5) ]), Group([ (1,4)(2,3), (1,3)(4,5) ]), Group([ (3,4,5), (2,4)(3,5) ]) ] gap>Length(A5H); 8 gap> A5J:=List(A5H,x->Norm1TorusJCoset(A5,x));; gap> List(A5J,IsInvertibleF); [ false, false, false, true, false, false, false, true ] gap> Filtered([1..8],x->IsInvertibleF(A5J[x])=true); # not retract rational except for i=4,8 [ 4, 8 ] gap> mi:=SearchCoflabbyResolutionBase(TransposedMatrixGroup(A5J[4]),0);; # H=C2xC2 gap> F:=FlabbyResolutionFromBase(A5J[4],mi).actionF;; gap> Rank(F.1); # F=[J_{G/H}]^{fl} with rank 21 21 gap> ll:=PossibilityOfStablyPermutationFFromBase(A5J[4],mi); [ [ 1, -2, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 0, 1, 0, 0, 2, -1, -1 ] ] gap> l:=ll[2]; # possibility for Z[G/H5]+2Z[G/H8]=Z+F # with rank 12+2*5=1+21=22 where H5=C5,H8=A4 [ 0, 0, 0, 0, 1, 0, 0, 2, -1, -1 ] gap> bp:=StablyPermutationFCheckPFromBase(A5J[4],mi,Nlist(l),Plist(l));; gap> Length(bp); 16 gap> Length(bp[1]); 22 gap> rs:=RandomSource(IsMersenneTwister); <RandomSource in IsMersenneTwister> gap> rr:=List([1..200000],x->List([1..16],y->Random(rs,[-1..2])));; gap> Filtered(rr,x->Determinant(x*bp)^2=1); [ [ 2, 0, 1, 2, 1, 1, 1, 0, -1, 0, -1, -1, 1, -1, 2, 1 ] ] gap> P:=last[1]*bp; [ [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1 ], [ 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0 ], [ 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 0, 0, -1, 0, 0, 0, 0, -1, 0, 0 ], [ 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0 ], [ 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1 ], [ -1, -1, -2, -1, -1, -1, -1, -1, -1, -2, -2, -2, 1, -1, 1, 1, 1, 2, 1, 2, 2, 2 ], [ 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0 ], [ -2, -1, -1, -1, -1, -1, -2, -1, -1, -1, -2, -2, 1, 1, 1, -1, 1, 2, 2, 2, 1, 2 ], [ -1, -2, -1, -1, -2, -2, -1, -1, -2, -1, -1, -1, 1, 1, 1, -1, 1, 2, 2, 2, 1, 2 ], [ -2, -1, -2, -1, -1, -1, -2, -1, -1, -2, -1, -1, -1, 1, 1, 1, 1, 1, 2, 2, 2, 2 ], [ -2, -1, -1, -1, -1, -2, -2, -1, -2, -1, -1, -1, 1, 1, -1, 1, 1, 2, 2, 1, 2, 2 ], [ -1, -1, -1, -2, -1, -1, -1, -2, -1, -1, -2, -2, 1, 1, -1, 1, 1, 2, 2, 1, 2, 2 ], [ -1, -1, -2, -1, -1, -2, -1, -1, -2, -2, -1, -1, 1, 1, 1, 1, -1, 2, 2, 2, 2, 1 ], [ -1, 
-2, -1, -1, -2, -1, -1, -1, -1, -1, -2, -2, 1, 1, 1, 1, -1, 2, 2, 2, 2, 1 ], [ -2, -1, -1, -2, -1, -1, -2, -2, -1, -1, -1, -1, 1, 1, 1, 1, -1, 2, 2, 2, 2, 1 ], [ 4, 4, 3, 4, 4, 4, 4, 4, 4, 3, 5, 5, -3, -3, -1, -1, -1, -6, -6, -5, -5, -5 ], [ 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0 ], [ 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0 ], [ -1, -1, -1, -1, -2, -1, -1, -2, -1, 0, -2, -2, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0 ], [ 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0 ], [ 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1 ], [ 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1 ] ] gap> StablyPermutationFCheckMatFromBase(A5J[4],mi,Nlist(l),Plist(l),P); true gap> F:=FlabbyResolutionLowRank(A5J[8]).actionF; # H=A4 <matrix group with 2 generators> gap> Rank(F.1); # F=[J_{G/H}]^{fl} with rank 16 16 gap> mi:=SearchCoflabbyResolutionBaseLowRank(TransposedMatrixGroup(G),0);; gap> ll:=PossibilityOfStablyPermutationFFromBase(G,mi); [ [ 1, -2, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 0, 1, 1, -1, 0, 0, -1 ] ] gap> l:=ll[2]; # possibility for Z[G/H5]+Z[G/H6]=Z[G/H7]+F # with rank 12+10=6+16=22 where H5=C5,H6=S3,H7=D5 [ 0, 0, 0, 0, 1, 1, -1, 0, 0, -1 ] gap> bp:=StablyPermutationFCheckPFromBase(G,mi,Nlist(l),Plist(l));; gap> Length(bp); 11 gap> Length(bp[1]); 22 gap> SearchPRowBlocks(bp); rec( bpBlocks := [ [ 1, 2, 3, 4 ], [ 5, 6, 7, 8, 9, 10, 11 ] ], rowBlocks := [ [ 1, 2, 3, 4, 5, 6 ], [ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 ] ] ) gap> r1:=SearchPFilterRowBlocks(bp,[1..4],[1..6],3,[-1..2]);; gap> Length(r1); 94 gap> r2:=SearchPFilterRowBlocks(bp,[5..11],[7..22],2);; gap> Length(r2); 8 gap> P:=SearchPMergeRowBlock(r1,r2); [ [ 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0 ], [ 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0 ], [ 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1 ], [ 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0 ], [ 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1 ], [ 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1 ], [ 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ], [ 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ], [ 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ], [ 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ], [ 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ], [ 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ], [ 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 ] ] gap> StablyPermutationFCheckMatFromBase(G,mi,Nlist(l),Plist(l),P); true gap> List(A5H,StructureDescription); # for checking H [ "1", "C2", "C3", "C2 x C2", "C5", "S3", "D10", "A4" ] gap> List([4,8],x->StructureDescription(A5H[x])); # for checking H [ "C2 x 
C2", "A4" ] gap> A6:=AlternatingGroup(6); # G=A6=PSL(2,9) with |G|=360 Alt( [ 1 .. 6 ] ) gap> A6cs:=ConjugacyClassesSubgroups2(A6);; # subgroups H of G up to conjugacy gap> Length(A6cs); 22 gap> A6H:=Hcandidates(A6); # exclude H=A6 gap> Length(A6H); 21 gap> A6J:=List(A6H,x->Norm1TorusJCoset(A6,x));; gap> Filtered([1..21],x->IsInvertibleF(SylowSubgroup(A6J[x],2))=true); [ 11, 17, 18 ] gap> List([11,17,18],x->IsInvertibleF(SylowSubgroup(A6J[x],3))); # not retract rational [ false, false, false ] gap> List(A6H,StructureDescription); # for checking H [ "1", "C2", "C3", "C3", "C2 x C2", "C2 x C2", "C4", "C5", "S3", "S3", "D8", "C3 x C3", "D10", "A4", "A4", "(C3 x C3) : C2", "S4", "S4", "(C3 x C3) : C4", "A5", "A5" ]  \ gap> S3:=SymmetricGroup(3); # G=S3=PSL(2,2) with |G|=6 Sym( [ 1 .. 3 ] ) gap> S3cs:=ConjugacyClassesSubgroups2(S3); # subgroups H of G up to conjugacy [ Group( () )^G, Group( [ (2,3) ] )^G, Group( [ (1,3,2) ] )^G, Group( [ (1,3,2), (2,3) ] )^G ] gap> S3H:=Hcandidates(S3); [ Group(()), Group([ (2,3) ]) ] gap> S3J:=List(S3H,x->Norm1TorusJCoset(S3,x));; gap> mi:=SearchCoflabbyResolutionBaseLowRank(TransposedMatrixGroup(S3J[2]),0);; # H=C2 gap> F:=FlabbyResolutionFromBase(S3J[2],mi).actionF;; gap> Rank(F.1); # F=[J_{G/H}]^{fl} with rank 4 4 gap> ll:=PossibilityOfStablyPermutationFFromBase(S3J[2],mi); [ [ 0, 1, 1, -1, -1 ] ] gap> l:=ll[1]; [ 0, 1, 1, -1, -1 ] gap> bp:=StablyPermutationFCheckPFromBase(S3J[2],mi,Nlist(l),Plist(l));; gap> Length(bp); 6 gap> Length(bp[1]); 5 gap> SearchPRowBlocks(bp); rec( bpBlocks := [ [ 1, 2 ], [ 3, 4, 5, 6 ] ], rowBlocks := [ [ 1 ], [ 2, 3, 4, 5 ] ] ) gap> r1:=SearchPFilterRowBlocks(bp,[1..2],[1],2);; gap> Length(r1); 3 gap> r2:=SearchPFilterRowBlocks(bp,[3..6],[2..5],2);; gap> Length(r2); 4 gap> P:=SearchPMergeRowBlock(r1,r2); [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 0, 1, 0 ], [ 1, 0, 1, 0, 1 ], [ 0, 1, 1, 0, 1 ], [ 0, 1, 1, 1, 0 ] ] gap> StablyPermutationFCheckMatFromBase(S3J[2],mi,Nlist(l),Plist(l),P); true gap> List(S3H,StructureDescription); # for checking H [ "1", "C2" ] gap> S4:=SymmetricGroup(4); # G=S4=PGL(2,3) with |G|=24 Sym( [ 1 .. 4 ] ) gap> S4cs:=ConjugacyClassesSubgroups2(S4); # subgroups H of G up to conjugacy [ Group( () )^G, Group( [ (1,3)(2,4) ] )^G, Group( [ (3,4) ] )^G, Group( [ (2,4,3) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4) ] )^G, Group( [ (3,4), (1,2)(3,4) ] )^G, Group( [ (1,3,2,4), (1,2)(3,4) ] )^G, Group( [ (3,4), (2,4,3) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4), (3,4) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4), (2,4,3) ] )^G, Group( [ (1,4)(2,3), (1,3)(2,4), (2,4,3), (3,4) ] )^G ] gap> Length(S4cs); 11 gap> S4H:=Hcandidates(S4); # exclude H=V4, D4, A4, S4 [ Group(()), Group([ (1,3)(2,4) ]), Group([ (3,4) ]), Group([ (2,4,3) ]), Group([ (3,4), (1,2)(3,4) ]), Group([ (1,3,2,4), (1,2)(3,4) ]), Group([ (3,4), (2,4,3) ]) ] gap> Length(S4H); 7 gap> S4J:=List(S4H,x->Norm1TorusJCoset(S4,x));; gap> List(S4J,IsInvertibleF); # not retract rational [ false, false, false, false, false, false, false ] gap> List(S4H,StructureDescription); # for checking H [ "1", "C2", "C2", "C3", "C2 x C2", "C4", "S3" ] gap> S5:=SymmetricGroup(5); # G=S5=PGL(2,5) with |G|=120 Sym( [ 1 .. 
5 ] ) gap> S5cs:=ConjugacyClassesSubgroups2(S5);; # subgroups H of G up to conjugacy gap> Length(S5cs); 19 gap> S5H:=Hcandidates(S5); # exclude H=A5, S5 [ Group(()), Group([ (1,3) ]), Group([ (2,3)(4,5) ]), Group([ (3,4,5) ]), Group([ (2,3)(4,5), (2,4)(3,5) ]), Group([ (4,5), (2,3)(4,5) ]), Group([ (2,5,3,4) ]), Group([ (1,2,3,4,5) ]), Group([ (3,4,5), (1,2) ]), Group([ (3,4,5), (4,5) ]), Group([ (1,2)(4,5), (3,4,5) ]), Group([ (2,3), (2,4,3,5) ]), Group([ (1,4)(2,3), (1,3)(4,5) ]), Group([ (3,4,5), (2,4)(3,5) ]), Group([ (3,4,5), (4,5), (1,2)(4,5) ]), Group([ (2,3,5,4), (1,4)(2,3) ]), Group([ (3,5,4), (2,5,4,3) ]) ] gap> Length(S5H); 17 gap> S5J:=List(S5H,x->Norm1TorusJCoset(S5,x));; gap> Filtered([1..17],x->IsInvertibleF(S5J[x])=true); # not retract rational except for i=5,12,14,17 [ 5, 12, 14, 17 ] gap> List([5,12,14,17],x->S5H[x]); [ Group([ (2,3)(4,5), (2,4)(3,5) ]), Group([ (2,3), (2,4,3,5) ]), Group([ (3,4,5), (2,4)(3,5) ]), Group([ (3,5,4), (2,5,4,3) ]) ] gap> Filtered([1..Length(S5H)],x->StructureDescription(S5H[x])="C2 x C2"); [ 5, 6 ] gap> StructureDescription(Intersection(DerivedSubgroup(S5),S5H[5])); "C2 x C2" gap> StructureDescription(Intersection(DerivedSubgroup(S5),S5H[6])); "C2" gap> IdSmallGroup(SymmetricGroup(5)); [ 120, 34 ] gap> Filtered([1..NrTransitiveGroups(30)],x->Order(TransitiveGroup(30,x))=120); [ 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34 ] gap> Filtered(last,x->IdSmallGroup(TransitiveGroup(30,x))=[120,34]); [ 22, 25, 27 ] gap> List([22,25,27],x->StructureDescription(Intersection(Stabilizer( > TransitiveGroup(30,x),1),DerivedSubgroup(TransitiveGroup(30,x))))); [ "C2", "C2 x C2", "C2" ] gap> StructureDescription(Intersection(DerivedSubgroup(S5),S5H[5])); # G=30T25 and H5=V4 with [G:H5]=30 "C2 x C2" gap> F:=FlabbyResolution(S5J[5]).actionF; # F=[J_{G/H5}]^{fl} with H5=V4 <matrix group with 2 generators> gap> Rank(F.1); 151 gap> ll:=PossibilityOfStablyPermutationF(S5J[5]); [ [ 1, 0, 0, 0, 0, -2, 6, 1, -3, 3, -2, 4, 2, 5, 2, -6, -8, -3, 6, -2 ], [ 0, 1, 0, 0, 0, -1, -1, 0, 0, -1, 0, 0, 0, 0, 1, 1, 1, 0, -1, 0 ], [ 0, 0, 1, 0, 0, 0, -2, 0, 1, -1, 0, 0, -1, -1, 0, 2, 2, 1, -2, 0 ], [ 0, 0, 0, 1, 0, -2, 10, 1, -5, 5, -3, 4, 3, 6, 2, -10, -12, -4, 10, -2 ], [ 0, 0, 0, 0, 1, 2, -2, 0, 2, -2, 1, -2, -1, -2, -2, 2, 4, 1, -2, 0 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ -2, 0, 0, -2, 0 ] gap> F:=FlabbyResolution(S5J[12]).actionF; # F=[J_{G/H12}]^{fl} with H12=D4 <matrix group with 2 generators> gap> Rank(F.1); 76 gap> ll:=PossibilityOfStablyPermutationF(S5J[12]); [ [ 1, 0, 0, 0, 0, 0, -8, -1, 3, -5, 2, -2, -2, -5, 0, 8, 10, 3, -8, 2 ], [ 0, 1, 0, 0, 0, -1, -1, 0, 0, -1, 0, 0, 0, 0, 1, 1, 1, 0, -1, 0 ], [ 0, 0, 1, 0, 0, 0, -2, 0, 1, -1, 0, 0, -1, -1, 0, 2, 2, 1, -2, 0 ], [ 0, 0, 0, 1, 0, 0, -4, -1, 1, -3, 1, -2, -1, -4, 0, 4, 6, 2, -4, 2 ], [ 0, 0, 0, 0, 1, 2, -2, 0, 2, -2, 1, -2, -1, -2, -2, 2, 4, 1, -2, 0 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ 2, 0, 0, 2, 0 ] gap> F:=FlabbyResolution(S5J[14]).actionF; # F=[J_{G/H14}]^{fl} with H14=A4 <matrix group with 2 generators> gap> Rank(F.1); 41 gap> ll:=PossibilityOfStablyPermutationF(S5J[14]); [ [ 1, 0, 0, 0, 0, 0, -4, 1, 3, -3, 2, 0, -2, -1, 0, 4, 6, 1, -4, -2 ], [ 0, 1, 0, 0, 0, -1, -1, 0, 0, -1, 0, 0, 0, 0, 1, 1, 1, 0, -1, 0 ], [ 0, 0, 1, 0, 0, 0, -2, 0, 1, -1, 0, 0, -1, -1, 0, 2, 2, 1, -2, 0 ], [ 0, 0, 0, 1, 0, 0, 0, 1, 1, -1, 1, 0, -1, 0, 0, 0, 2, 0, 0, -2 ], [ 0, 0, 0, 0, 1, 2, -2, 0, 2, -2, 1, -2, -1, -2, -2, 2, 4, 1, -2, 0 ] ] gap> List(ll,x->x[Length(x)]); # 
[F]<>0 ([F]\neq 0) [ -2, 0, 0, -2, 0 ] gap> F:=FlabbyResolution(S5J[17]).actionF; # F=[J_{G/H17}]^{fl} with H17=S4 <matrix group with 2 generators> gap> Rank(F.1); 16 gap> ll:=PossibilityOfStablyPermutationF(S5J[17]); [ [ 1, 0, 0, 0, 0, 0, -4, -1, 1, 0, -3, 0, 0, -1, 0, 4, 4, 1, -4, 2 ], [ 0, 1, 0, 0, 0, -1, -1, 0, 0, 0, -1, 0, 0, 0, 1, 1, 1, 0, -1, 0 ], [ 0, 0, 1, 0, 0, 0, -2, 0, 1, 0, -1, 0, -1, -1, 0, 2, 2, 1, -2, 0 ], [ 0, 0, 0, 1, 0, 0, 0, -1, -1, -1, -1, 0, 1, 0, 0, 0, 0, 0, 0, 2 ], [ 0, 0, 0, 0, 1, 2, -2, 0, 2, 1, -2, -2, -1, -2, -2, 2, 4, 1, -2, 0 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ 2, 0, 0, 2, 0 ] gap> List(S5H,StructureDescription); # for checking H [ "1", "C2", "C2", "C3", "C2 x C2", "C2 x C2", "C4", "C5", "C6", "S3", "S3", "D8", "D10", "A4", "D12", "C5 : C4", "S4" ] gap> List([5,12,14,17],x->StrucutreDescription(S5H[x])); [ "C2 x C2", "D8", "A4", "S4" ] gap> S6:=SymmetricGroup(6); # G=S6 with |G|=720 Sym( [ 1 .. 6 ] ) gap> S6cs:=ConjugacyClassesSubgroups2(S6);; # subgroups H of G up to conjugacy gap> Length(S6cs); 56 gap> S6H:=Hcandidates(S6);; # exclude H=A6, S6 gap> Length(S6H); 54 gap> S6J:=List(S6H,x->Norm1TorusJCoset(S6,x));; gap> Filtered([1..54],x->IsInvertibleF(SylowSubgroup(S6J[x],2))=true); [ 27, 34, 43, 44, 48, 49 ] gap> List([27,34,43,44,48,49],x->IsInvertibleF(SylowSubgroup(S6J[x],3))); # not retract rational [ false, false, false, false, false, false ] gap> List(S6H,StructureDescription); # for checking H [ "1", "C2", "C2", "C2", "C3", "C3", "C2 x C2", "C2 x C2", "C2 x C2", "C2 x C2", "C2 x C2", "C4", "C4", "C5", "S3", "S3", "C6", "C6", "S3", "S3", "C2 x C2 x C2", "C2 x C2 x C2", "C4 x C2", "D8", "D8", "D8", "D8", "C3 x C3", "D10", "A4", "A4", "D12", "D12", "C2 x D8", "(C3 x C3) : C2", "C3 x S3", "C3 x S3", "C5 : C4", "C2 x A4", "S4", "C2 x A4", "S4", "S4", "S4", "S3 x S3", "S3 x S3", "(C3 x C3) : C4", "C2 x S4", "C2 x S4", "A5", "A5", "(S3 x S3) : C2", "S5", "S5" ]  \ gap> GL23:=GL(2,3); # G=GL(2,3) with |G|=48 GL(2,3) gap> GL23cs:=ConjugacyClassesSubgroups2(GL23);; # subgroups H of G up to conjugacy gap> Length(GL23cs); 16 gap> GL23H:=Hcandidates(GL23);; # exclude H with Z(GL(2,3))=C2<H gap> Length(GL23H); 5 gap> GL23J:=List(GL23H,x->Norm1TorusJCoset(GL23,x));; gap> List(GL23J,IsInvertibleF); # not retract rational [ false, false, false, false, false ] gap> List(GL23cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C2", "C3", "C4", "C2 x C2", "S3", "S3", "C6", "Q8", "D8", "C8", "D12", "QD16", "SL(2,3)", "GL(2,3)" ] gap> List(GL23H,StructureDescription); # for checking H [ "1", "C2", "C3", "S3", "S3" ] gap> GL24:=GL(2,4); # G=GL(2,4)=A5xC3 with |G|=180 GL(2,4) gap> GL24cs:=ConjugacyClassesSubgroups2(GL24);; # subgroups H of G up to conjugacy gap> Length(GL24cs); 21 gap> GL24H:=Hcandidates(GL24);; # exclude H with Z(GL(2,4))=C3<H or H=SL(2,4) gap> Length(GL24H); 11 gap> GL24J:=List(GL24H,x->Norm1TorusJCoset(GL24,x));; gap> Filtered([1..11],x->IsInvertibleF(SylowSubgroup(GL24J[x],2))=true); # not retract rational except for i=5,11 [ 5, 11 ] gap> IsInvertibleF(SylowSubgroup(GL24J[5],3)); # not retract rational false gap> IsInvertibleF(SylowSubgroup(GL24J[11],3)); true gap> IsInvertibleF(GL24J[11]); true gap> Filtered([1..NrTransitiveGroups(15)],x->Order(TransitiveGroup(15,x))=180); [ 15, 16 ] gap> G15:=TransitiveGroup(15,15); # G15=15T15 3A_5(15)=[3]A(5)=GL(2,4) gap> G16:=TransitiveGroup(15,16); # G16=15T16 A(5)[x]3 gap> IdSmallGroup(G15); # G15=A5xC3 [ 180, 19 ] gap> IdSmallGroup(G16); # G16=A5xC3 [ 180, 19 ] gap> 
IdSmallGroup(GL24); # GL24=A5xC3 [ 180, 19 ] gap> StructureDescription(Intersection(DerivedSubgroup(G15),Stabilizer(G15,1))); "C2 x C2" gap> StructureDescription(Intersection(DerivedSubgroup(G16),Stabilizer(G16,1))); "A4" gap> StructureDescription(Intersection(DerivedSubgroup(GL24),GL24H[9])); # H9=15T15 "C2 x C2" gap> StructureDescription(Intersection(DerivedSubgroup(GL24),GL24H[10])); # H10=15T15 "C2 x C2" gap> StructureDescription(Intersection(DerivedSubgroup(GL24),GL24H[11])); # H11=15T16 "A4" gap> StructureDescription(DerivedSubgroup(GL24)); "A5" gap> F:=FlabbyResolutionLowRank(Norm1TorusJTransitiveGroup(15,16)).actionF; <matrix group with 2 generators> gap> Rank(F.1); 36 gap> F2:=FlabbyResolutionLowRankFromGroup(F,Norm1TorusJTransitiveGroup(15,16)).actionF; <matrix group with 2 generators> gap> Rank(F2.1); 14 gap> F3:=FlabbyResolutionLowRankFromGroup(F2,Norm1TorusJTransitiveGroup(15,16)).actionF; Group([ [ [ 1 ] ], [ [ 1 ] ] ]) gap> List(GL24cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C3", "C3", "C2 x C2", "C5", "S3", "C6", "C3 x C3", "D10", "C6 x C2", "A4", "A4", "A4", "C15", "C3 x S3", "C3 x D10", "C3 x A4", "A5", "GL(2,4)" ] gap> List(GL24H,StructureDescription); # for checking H [ "1", "C2", "C3", "C3", "C2 x C2", "C5", "S3", "D10", "A4", "A4", "A4" ] gap> GL25:=GL(2,5); GL(2,5) gap> GL25cs:=ConjugacyClassesSubgroups2(GL25);; # subgroups H of G up to conjugacy gap> Length(GL25cs); 48 gap> GL25H:=Hcandidates(GL25);; # exclude H with Z(GL(2,5)) \cap H \neq 1 gap> Length(GL25H); 13 gap> GL25J:=List(GL25H,x->Norm1TorusJCoset(GL25,x));; gap> Filtered([1..13],x->IsInvertibleF(SylowSubgroup(GL25J[x],2))=true); # not retract rational [ ] gap> List(GL25cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C2", "C3", "C4", "C2 x C2", "C4", "C4", "C4", "C5", "C6", "S3", "Q8", "C8", "D8", "C4 x C2", "C4 x C2", "C10", "D10", "D10", "D12", "C3 : C4", "C12", "(C4 x C2) : C2", "C4 x C4", "C8 : C2", "C5 : C4", "C5 : C4", "D20", "C5 : C4", "C5 : C4", "C5 : C4", "C20", "SL(2,3)", "C4 x S3", "C3 : C8", "C24", "(C4 x C4) : C2", "C2 x (C5 : C4)", "C4 x D10", "C2 x (C5 : C4)", "((C4 x C2) : C2) : C3", "C24 : C2", "C4 x (C5 : C4)", "SL(2,3) : C4", "SL(2,5)", "SL(2,5) : C2", "GL(2,5)" ] gap> List(GL25H,StructureDescription); # for checking H [ "1", "C2", "C3", "C4", "C4", "C5", "S3", "D10", "D10", "C5 : C4", "C5 : C4", "C5 : C4", "C5 : C4" ]  \ gap> SL23:=SL(2,3); # G=SL(2,3) with |G|=24 SL(2,3) gap> SL23cs:=ConjugacyClassesSubgroups2(SL23);; # subgroups H of G up to conjugacy gap> Length(SL23cs); 7 gap> SL23H:=Hcandidates(SL23); # exclude H with Z(SL(2,3))=C2<H [ Group([ ]), Group([ [ [ 0*Z(3), Z(3) ], [ Z(3)^0, Z(3) ] ] ]) ] gap> Length(SL23H); 2 gap> SL23J:=List(SL23H,x->Norm1TorusJCoset(SL23,x));; gap> List(SL23J,IsInvertibleF); # not retract rational [ false, false ] gap> List(SL23cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C4", "C6", "Q8", "SL(2,3)" ] gap> List(SL23H,StructureDescription); # for checking H [ "1", "C3" ] gap> SL25:=SL(2,5); # G=SL(2,5) with |G|=120 SL(2,5) gap> SL25cs:=ConjugacyClassesSubgroups2(SL25);; # subgroups H of G up to conjugacy gap> Length(SL25cs); 12 gap> SL25H:=Hcandidates(SL25); # exclude H with Z(SL(2,5))=C2<H [ Group([ ]), Group([ [ [ 0*Z(5), Z(5) ], [ Z(5), Z(5)^2 ] ] ]), Group([ [ [ 0*Z(5), Z(5)^2 ], [ Z(5)^0, Z(5) ] ] ]) ] gap> Length(SL25H); 3 gap> SL25J:=List(SL25H,x->Norm1TorusJCoset(SL25,x));; gap> List(SL25J,IsInvertibleF); # not retract rational [ false, 
false, false ] gap> List(SL25cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C4", "C5", "C6", "Q8", "C10", "C3 : C4", "C5 : C4", "SL(2,3)", "SL(2,5)" ] gap> List(SL25H,StructureDescription); # for checking H [ "1", "C3", "C5" ] gap> SL27:=SL(2,7); SL(2,7) gap> SL27cs:=ConjugacyClassesSubgroups2(SL27);; gap> Length(SL27cs); 19 gap> SL27H:=Hcandidates(SL27);; gap> Length(SL27H); 4 gap> SL27J:=List(SL27H,x->Norm1TorusJCoset(SL27,x));; gap> Filtered([1..4],x->IsInvertibleF(SylowSubgroup(SL27J[x],2))=true); # not retract rational [ ] gap> List(SL27cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C4", "C6", "C7", "Q8", "Q8", "C8", "C3 : C4", "C14", "Q16", "C7 : C3", "SL(2,3)", "SL(2,3)", "C2 x (C7 : C3)", "C2 . S4 = SL(2,3) . C2", "C2 . S4 = SL(2,3) . C2", "SL(2,7)" ] gap> List(SL27H,StructureDescription); # for checking H [ "1", "C3", "C7", "C7 : C3" ]  \ gap> PSL27:=PSL(2,7); # G=PSL(2,7)=PSL(3,2) with |G|=168 Group([ (3,7,5)(4,8,6), (1,2,6)(3,4,8) ]) gap> PSL27cs:=ConjugacyClassesSubgroups2(PSL27);; # subgroups H of G up to conjugacy gap> Length(PSL27cs); 15 gap> PSL27H:=Hcandidates(PSL27);; # exclude H=PSL(2,7) gap> Length(PSL27H); 14 gap> PSL27J:=List(PSL27H,x->Norm1TorusJCoset(PSL27,x));; gap> Filtered([1..14],x->IsInvertibleF(PSL27J[x])=true); # not retract rational except for i=9,13,14 [ 9, 13, 14 ] gap> Filtered([1..NrTransitiveGroups(21)],x->Order(TransitiveGroup(21,x))=168); # G=21T14 and H=H9 with [G:H]=21 [ 14 ] gap> F:=FlabbyResolution(PSL27J[9]).actionF; # F=[J_{G/H9}]^{fl} with H9=D4 <matrix group with 2 generators> gap> Rank(F.1); 148 ggap> ll:=PossibilityOfStablyPermutationF(PSL27J[9]); [ [ 1, 0, 1, 0, 1, 0, 4, 2, 8, 4, 3, -4, -6, -4, 2, -4 ], [ 0, 1, 1, 0, 1, 0, 1, 1, 3, 2, 1, -2, -3, -1, 1, -2 ], [ 0, 0, 2, 0, 1, 0, 2, 1, 4, 2, 1, -3, -4, -2, 2, -2 ], [ 0, 0, 0, 1, -1, 0, 0, 0, 0, -1, 1, 0, 2, -2, 0, 0 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ -4, -2, -2, 0 ] gap> F:=FlabbyResolution(PSL27J[13]).actionF; # F=[J_{G/H13}]^{fl} with H13=S4 <matrix group with 2 generators> gap> Rank(F.1); 36 gap> ll:=PossibilityOfStablyPermutationF(PSL27J[13]); [ [ 1, 0, -3, 0, 0, 0, 0, 1, 0, 0, 2, 1, 2, 0, -2, -2 ], [ 0, 1, -1, 0, 0, 0, -1, 0, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 1, 0, 0, 0, 1, 0, -1, 2, -1, 2, -2, 0, -2 ], [ 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, -1, 0, 0, 0, -2 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ -2, 0, -2, -2 ] gap> F:=FlabbyResolution(PSL27J[14]).actionF; # F=[J_{G/H14}]^{fl} with H14=S4 <matrix group with 2 generators> gap> Rank(F.1); 36 gap> ll:=PossibilityOfStablyPermutationF(PSL27J[14]); [ [ 1, 0, -3, 0, 0, 0, 0, 1, 0, 0, 2, 1, 2, 0, -2, -2 ], [ 0, 1, -1, 0, 0, 0, -1, 0, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 1, 0, 0, 0, 1, 0, -1, 2, -1, 2, -2, 0, -2 ], [ 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, -1, 0, 0, 0, -2 ] ] gap> List(ll,x->x[Length(x)]); # [F]<>0 ([F]\neq 0) [ -2, 0, -2, -2 ] gap> List(PSL27cs,x->StructureDescription(Representative(x))); # for checking H [ "1", "C2", "C3", "C2 x C2", "C2 x C2", "C4", "S3", "C7", "D8", "A4", "A4", "C7 : C3", "S4", "S4", "PSL(3,2)" ] gap> List([9,13,14],x->StructureDescription(Representative(PSL27cs[x]))); [ "D8", "S4", "S4" ]  \ gap> PSL28:=PSL(2,8); # G=PSL(2,8) with |G|=504 Group([ (3,8,6,4,9,7,5), (1,2,3)(4,7,5)(6,9,8) ]) gap> PSL28cs:=ConjugacyClassesSubgroups2(PSL28);; # subgroups H of G up to conjugacy gap> Length(PSL28cs); 12 gap> PSL28H:=Hcandidates(PSL28); # exclude H=PSL(2,8) [ Group(()), Group([ (1,5)(2,3)(6,8)(7,9) ]), 
Group([ (1,6,5)(2,3,9) (4,7,8) ]), Group([ (1,9)(3,7)(4,5)(6,8), (1,6)(3,5)(4,7)(8,9) ]), Group([ (1,3,2)(4,5,7)(6,8,9), (1,2)(4,9)(5,8)(6,7) ]), Group([ (1,9,6,4,7, 8,3) ]), Group([ (1,9)(3,7)(4,5)(6,8), (1,4)(3,8)(5,9)(6,7), (1,6)(3,5) (4,7)(8,9) ]), Group([ (1,5,2,7,8,4,6,9,3) ]), Group([ (2,4)(3,6)(5,7) (8,9), (1,6)(3,5)(4,7)(8,9) ]), Group([ (1,2)(4,9)(5,8)(6,7), (1,6)(2,8) (3,9)(4,5) ]), Group([ (1,4)(3,8)(5,9)(6,7), (1,6,7,3,9,4,8) ]) ] gap> Length(PSL28H); 11 gap> PSL28J:=List(PSL28H,x->Norm1TorusJCoset(PSL28,x));; gap> Filtered([1..11],x->IsInvertibleF(SylowSubgroup(PSL28J[x],2))=true); # not retract rational except for i=7,11 [ 7, 11 ] gap> List(PSL28H,StructureDescription); [ "1", "C2", "C3", "C2 x C2", "S3", "C7", "C2 x C2 x C2", "C9", "D14", "D18", "(C2 x C2 x C2) : C7" ] gap> List([7,11],x->StructureDescription(PSL28H[x])); [ "C2 x C2 x C2", "(C2 x C2 x C2) : C7" ] gap> mi:=SearchCoflabbyResolutionBaseLowRank(TransposedMatrixGroup(PSL28J[7]),0);; # H=C2xC2xC2 gap> F:=FlabbyResolutionFromBase(PSL28J[7],mi).actionF;; gap> Rank(F.1); # F=[J_{G/H}]^{fl} with rank 73 73 gap> ll:=PossibilityOfStablyPermutationFFromBase(PSL28J[7],mi); [ [ 1, -2, 0, 0, 0, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, -1, -1 ] ] gap> l:=ll[2]; # possibility for Z[G/H8]+2Z[G/H11]=Z+F # with rank 56+2x9=1+73=74 where H8=C9,H11=(C2xC2xC2):C7 [ 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, -1, -1 ] gap> I13:=IdentityMat(13);; gap> bpS:=StablyPermutationFCheckPFromBasePari(PSL28J[7],mi, > Nlist(l)+I13[5],Plist(l)+I13[5]:parisize:=2^33); # we need at least 8GB memory for PARI/GP computations # possibility for Z[G/H5]+Z[G/H8]+2Z[G/H11]=Z[G/H5]+Z+F # with rank 84+56+2x9=84+1+73=158 where H5=S3,H8=C9,H11=(C2xC2xC2):C7 gap> SearchPRowBlocks(bpS); rec( bpBlocks := [ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 ], [ 32, 33, 34, 35 ], [ 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65 ] ], rowBlocks := [ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84 ], [ 85 ], [ 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ] ] ) # after some efforts we may get gap> nn:=[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, -1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, > -1, 0, 0, 0, 0, 0, -1, 4, -1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 ]; gap> P:=nn*bpS;; gap> Size(P); # 158x158 matrix 158 gap> StablyPermutationFCheckMatFromBase(PSL28J[7],mi,Nlist(l)+I13[5],Plist(l)+I13[5],P); true gap> mi:=SearchCoflabbyResolutionBaseLowRank(TransposedMatrixGroup(PSL28J[11]),0);; # H=(C2xC2xC2):C7 gap> F:=FlabbyResolutionFromBase(PSL28J[11],mi).actionF;; gap> Rank(F.1); # F=[J_{G/H}]^{fl} with rank 64 64 gap> ll:=PossibilityOfStablyPermutationFFromBase(PSL28J[11],mi); [ [ 1, -2, 0, 0, 0, -1, 0, 0, 1, 1, 1, -1, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 1, 1, -1, 0, 0, -1 ] ] gap> l:=ll[2]; # possibility for 
Z[G/H8]+Z[G/H9]=Z[G/H10]+F # with rank 56+36=28+64=92 where H8=C9,H9=D7,H10=D9 [ 0, 0, 0, 0, 0, 0, 0, 1, 1, -1, 0, 0, -1 ] gap> I13:=IdentityMat(13);; gap> bpQ:=StablyPermutationFCheckPFromBasePari(PSL28J[11],mi, > Nlist(l)+I13[3],Plist(l)+I13[3]:parisize:=2^34);; # we need at least 16GB memory for PARI/GP computations # possibility for Z[G/H3]+Z[G/H8]+Z[G/H9]=Z[G/H3]+Z[G/H10]+F # with rank 168+56+36=168+28+64=260 where H3=C3,H8=C9,H9=D7,H10=D9 gap> SearchPRowBlocks(bpQ); rec( bpBlocks := [ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92 ], [ 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110 ], [ 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145 ] ], rowBlocks := [ [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168 ], [ 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], [ 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260 ] ] ) # after some efforts we may get gap> nn:=[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 1, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, > 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]; gap> P:=nn*bpQ;; gap> Size(P); # 260x260 matrix 260 gap> StablyPermutationFCheckMatFromBase(PSL28J[11],mi,Nlist(l)+I13[3],Plist(l)+I13[3],P); true # GAP algorithms {#GAPalg} The following GAP [@GAP] algorithms and related ones can be available as `FlabbyResolutionFromBase.gap` from <https://www.math.kyoto-u.ac.jp/~yamasaki/Algorithm/RatProbNorm1Tori/>.\ ConjugacyClassesSubgroups2:= function(g) Reset(GlobalMersenneTwister); Reset(GlobalRandomSource); return ConjugacyClassesSubgroups(g); end; Hcandidates:= function(G) local Gcs,GN; Gcs:=ConjugacyClassesSubgroups2(G); GN:=NormalSubgroups(G); return Filtered(List(Gcs,Representative),x->Length(Filtered(GN,y->IsSubgroup(x,y)))=1);; end; 
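# Example usage of Hcandidates (cf. the A4 session above, given here only as an
# illustration): it returns, up to conjugacy, the subgroups H of G whose core in G
# is trivial, i.e. those H containing no nontrivial normal subgroup of G.
# gap> Hcandidates(AlternatingGroup(4));
# [ Group(()), Group([ (1,2)(3,4) ]), Group([ (2,4,3) ]) ]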
Norm1TorusJTransitiveGroup:= function(d,n) local l,T,M; l:=Concatenation(IdentityMat(d-1),[-List([1..d-1],One)]); T:=TransitiveGroup(d,n); M:=List(GeneratorsOfGroup(T),x->List([1..d-1],y->l[y^x])); return Group(M); end; Norm1TorusJPermutationGroup:= function(G) local d,l,T,M; d:=NrMovedPoints(G); l:=Concatenation(IdentityMat(d-1),[-List([1..d-1],One)]); M:=List(GeneratorsOfGroup(G),x->List([1..d-1],y->l[y^x])); return Group(M); end; Norm1TorusJCoset:= function(g,h) local gg,hg,phg,d,l,M; gg:=GeneratorsOfGroup(g); hg:=SortedList(RightCosets(g,h)); phg:=List(gg,x->Permutation(x,hg,OnRight)); d:=Length(hg); l:=Concatenation(IdentityMat(d-1),[-List([1..d-1],One)]); M:=List(phg,x->List([1..d-1],y->l[y^x])); return Group(M); end; BCTSSD85 A. Beauville, *The Lüroth problem*, Rationality problems in algebraic geometry, 1--27, Lecture Notes in Math., 2172, Fond. CIME/CIME Found. Subser., Springer, Cham, 2016. A. Beauville, J.-L. Colliot-Thélène, J.-J. Sansuc, P. Swinnerton-Dyer, *Variétés stablement rationnelles non rationnelles*, (French) Ann. of Math. (2) **121** (1985) 283--318. A. Cortella, B. Kunyavskii, *Rationality problem for generic tori in simple groups*, J. Algebra **225** (2000) 771--793. J.-L. Colliot-Thélène, *Lectures on linear algebraic groups*, Beijing Lectures, Morning Side Centre, April 2007. Available from https://www.imo.universite-paris-saclay.fr/\~colliot/BeijingLectures2Juin07.pdf J.-L. Colliot-Thélène, J.-J. Sansuc, *La R-équivalence sur les tores*, (French) Ann. Sci. École Norm. Sup. (4) **10** (1977) 175--229. J.-L. Colliot-Thélène, J.-J. Sansuc, *La descente sur les variétés rationnelles*, (French) Journées de Géometrie Algébrique d'Angers, Juillet 1979/Algebraic Geometry, Angers, 1979, pp. 223--237, Sijthoff & Noordhoff, Alphen aan den Rijn-Germantown, Md., 1980. J.-L. Colliot-Thélène, J.-J. Sansuc, *Principal homogeneous spaces under flasque tori: Applications*, J. Algebra **106** (1987) 148--205. J.-L. Colliot-Thélène, J.-J. Sansuc, *The rationality problem for fields of invariants under linear algebraic groups (with special regards to the Brauer group)*, Algebraic groups and homogeneous spaces, 113--186, Tata Inst. Fund. Res. Stud. Math., 19, Tata Inst. Fund. Res., Mumbai, 2007. S. Endo, *The rationality problem for norm one tori*, Nagoya Math. J. **202** (2011) 83--106. S. Endo, M. Kang, *Function fields of algebraic tori revisited*, Asian J. Math. **21** (2017) 197--224. S. Endo, T. Miyata, *Invariants of finite abelian groups*, J. Math. Soc. Japan **25** (1973) 7--26. S. Endo, T. Miyata, *On a classification of the function fields of algebraic tori*, Nagoya Math. J. **56** (1975) 85--104. S. Endo, T. Miyata, *Corrigenda: On a classification of the function fields of algebraic tori (Nagoya Math. J. **56** (1975) 85--104)*, Nagoya Math. J. **79** (1980) 187--190. M. Florence, *Non rationality of some norm-one tori*, preprint (2006). The GAP Group, GAP -- Groups, Algorithms, and Programming, Version 4.9.3; 2018. (http://www.gap-system.org). S. Hasegawa, A. Hoshi, A. Yamasaki, *Rationality problem for norm one tori in small dimensions*, Math. Comp. **89** (2020) 923--940. A. Hoshi, *On Noether's problem for cyclic groups of prime order*, Proc. Japan Acad. Ser. A Math. Sci. **91** (2015) 39--44. A. Hoshi, *Noether's problem and rationality problem for multiplicative invariant fields: a survey*, Algebraic number theory and related topics 2016, 29--53, RIMS Kôkyûroku Bessatsu, **B77**, Res. Inst. Math. Sci. (RIMS), Kyoto, 2020. A. Hoshi, K. Kanai, A. 
Yamasaki, *Norm one tori and Hasse norm principle*, Math. Comp. **91** (2022) 2431--2458. A. Hoshi, K. Kanai, A. Yamasaki, *Norm one tori and Hasse norm principle, II: Degree 12 case*, J. Number Theory **244** (2023) 84--110. A. Hoshi, K. Kanai, A. Yamasaki, *Hasse norm principle for $M_{11}$ and $J_1$ extensions*, arXiv:2210.09119. A. Hoshi, M. Kang, A. Yamasaki, *Class numbers and algebraic tori*, arXiv:1312.6738v2. A. Hoshi, A. Yamasaki, *Rationality problem for algebraic tori*, Mem. Amer. Math. Soc. **248** (2017) no. 1176, v+215 pp. A. Hoshi, A. Yamasaki, *Rationality problem for norm one tori*, Israel J. Math. **241** (2021) 849--867. A. Hoshi, A. Yamasaki, *Birational classification for algebraic tori*, arXiv:2112.02280. A. Hoshi, A. Yamasaki, *Rationality problem for norm one tori for dihedral extensions*, arXiv:2302.06231. W. Hürlimann, *On algebraic tori of norm type*, Comment. Math. Helv. **59** (1984) 539--549. V. A. Iskovskikh, *Rational surfaces with a pencil of rational curves*, Math. USSR Sb. **3** (1967) 563--587. V. A. Iskovskikh, *Rational surfaces with a pencil of rational curves and with positive square of the canonical class*, Math. USSR Sb. **12** (1970) 91--117. V. A. Iskovskikh, *Birational properties of a surface of degree 4 in $\mathbf P^4_k$*, Math. USSR Sb. **17** (1972) 30--36. V. A. Iskovskikh, *On the rationality problem for three-dimensional algebraic varieties*, (Russian) Tr. Mat. Inst. Steklova **218** (1997), Anal. Teor. Chisel i Prilozh., 190--232; translation in Proc. Steklov Inst. Math. **218** (1997) 186--227. V. A. Iskovskikh, Yu. I. Manin, *Three-dimensional quartics and counterexamples to the Lr̈oth problem*, (Russian) Mat. Sb. (N.S.) **86(128)** (1971) 140--166; translation in Math. USSR-Sb. **15** (1971) 141--166 M. Kang, J. Zhou, *Noether's problem for some semidirect products*, Adv. Math. **368** (2020) 107164, 21 pp. B. E. Kunyavskii, *Algebraic tori --- thirty years after*, Vestnik Samara State Univ. (2007) 198--214. B. E. Kunyavskii, *Arithmetic properties of three-dimensional algebraic tori*, (Russian) Integral lattices and finite linear groups, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) **116** (1982) 102--107, 163; translation in J. Soviet Math. **26** (1984) 1898--1901. N. Lemire, M. Lorenz, *On certain lattices associated with generic division algebras*, J. Group Theory **3** (2000) 385--405. L. Le Bruyn, *Generic norm one tori*, Nieuw Arch. Wisk. (4) **13** (1995) 401--407. H. W. Lenstra, Jr., *Rational functions invariant under a finite abelian group*, Invent. Math. **25** (1974) 299--325. Yu. I. Manin, *Cubic forms: algebra, geometry, arithmetic*, Second edition. North-Holland Mathematical Library 4, North-Holland Publishing Co., Amsterdam, 1986. x+326 pp. Yu. I. Manin, M. A. Tsfasman, *Rational varieties: algebra, geometry, arithmetic*, (Russian) Uspekhi Mat. Nauk **41** (1986) 43--94; translation in Russian Math. Surveys **41** (1986) 51--116. K. Masuda, *On a problem of Chevalley*, Nagoya Math. J. **8** (1955) 59--63. K. Masuda, *Application of the theory of the group of classes of projective modules to the existence problem of independent parameters of invariant*, J. Math. Soc. Japan **20** (1968) 223--232. A. S. Merkurjev, *Invariants of algebraic groups and retract rationality of classifying spaces*, Algebraic groups: structure and actions, 277--294. Proc. Sympos. Pure Math., 94, American Mathematical Society, Providence, RI, 2017. T. Ono, *Arithmetic of algebraic tori*, Ann. of Math. 
(2) **74** (1961) 101--139. T. Ono, *On the Tamagawa number of algebraic tori*, Ann. of Math. (2) **78** (1963) 47--73. The PARI Group, PARI/GP version `2.13.3`, Univ. Bordeaux, 2021, <http://pari.math.u-bordeaux.fr/>. B. Plans, *On Noether's rationality problem for cyclic groups over $\mathbbm{Q}$*, Proc. Amer. Math. Soc. **145** (2017) 2407--2409. V. P. Platonov, *Arithmetic theory of algebraic groups*, (Russian) Uspekhi Mat. Nauk **37** (1982) 3--54; translation in Russian Math. Surveys **37** (1982) 1--62. V. P. Platonov, A. Rapinchuk, *Algebraic groups and number theory*, Translated from the 1991 Russian original by Rachel Rowen, Pure and applied mathematics, 139, Academic Press, 1994. D. J. Saltman, *Retract rational fields and cyclic Galois extensions*, Israel J. Math. **47** (1984) 165--215. N. I. Shepherd-Barron, *Stably rational irrational varieties*, The Fano Conference, 693--700, Univ. Torino, Turin, 2004. R. G. Swan, *Invariant rational functions and a problem of Steenrod*, Invent. Math. **7** (1969) 148--158. R. G. Swan, *Noether's problem in Galois theory*, Emmy Noether in Bryn Mawr (Bryn Mawr, Pa., 1982), 21--40, Springer, New York-Berlin, 1983. C. Voisin, *Stable birational invariants and the Lüroth problem*, Surveys in differential geometry 2016, Advances in geometry and mathematical physics, 313--342, Surv. Differ. Geom., 21, Int. Press, Somerville, MA, 2016. V. E. Voskresenskii, *The birational equivalence of linear algebraic groups*, (Russian) Dokl. Akad. Nauk SSSR **188** (1969) 978--981; erratum, ibid. 191 1969 nos., 1, 2, 3, vii; translation in Soviet Math. Dokl. **10** (1969) 1212--1215. V. E. Voskresenskii, *Birational properties of linear algebraic groups*, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. **34** (1970) 3--19; translation in Math. USSR-Izv. **4** (1970) 1--17. V. E. Voskresenskii, *On the question of the structure of the subfield of invariants of a cyclic group of automorphisms of the $\mathbbm{Q}(x_1,\ldots,x_n)$*, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. **34** (1970) 366--375; translation in Math. USSR-Izv. **4** (1970) 371--380. V. E. Voskresenskii, *Rationality of certain algebraic tori*, Izv. Akad. Nauk SSSR Ser. Mat. (Russian) **35** (1971) 1037--1046; translation in Math. USSR-Izv. **5** (1971) 1049--1056. V. E. Voskresenskii, *Fields of invariants of abelian groups*, Uspekhi Mat. Nauk (Russian) **28** (1973) 77--102; translation in Russian Math. Surveys **28** (1973) 79--105. V. E. Voskresenskii, *Stable equivalence of algebraic tori*, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. **38** (1974) 3--10; translation in Math. USSR-Izv. **8** (1974) 1--7. V. E. Voskresenskii, *Algebraic groups and their birational invariants*, Translated from the Russian manuscript by Boris Kunyavskii, Translations of Mathematical Monographs, 179. American Mathematical Society, Providence, RI, 1998.
arxiv_math
{ "id": "2309.16187", "title": "Rationality problem for norm one tori for $A_5$ and ${\\rm\n PSL}_2(\\mathbb{F}_8)$ extensions", "authors": "Akinari Hoshi and Aiichi Yamasaki", "categories": "math.AG math.NT math.RA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We present several characterizations of $\sigma$-compact Hattori spaces, and reject some possible characterization candidates of the spaces. author: - Vitalij A. Chatyrko title: "**On $\\sigma$-compact Hattori spaces.**" --- *Keywords and Phrases: Hattori spaces, $\sigma$-compact spaces* *2000 AMS (MOS) Subj. Class.:* Primary 54A10 # Introduction Let $\mathbb R$ be the set of real numbers and $A$ be a subset of $\mathbb R$. In [@H] Hattori introduced a topology $\tau(A)$ on $\mathbb R$ defined as follows: - if $x \in A$ then $\{(x-\epsilon, x+\epsilon): \epsilon > 0\}$ is a nbd open basis at $x$, - if $x \in \mathbb R \setminus A$ then $\{[x, x+\epsilon): \epsilon > 0\}$ is a nbd open basis at $x$. Note that $\tau(\emptyset)$ (respectively, $\tau(\mathbb R)$) is the Sorgenfrey topology $\tau_S$ (respectively, the Euclidean topology $\tau_E$) on the reals. The topological spaces $(\mathbb R, \tau(A)), A \subseteq \mathbb R,$ are called *Hattori spaces* and denoted by $H(A)$ or $H$ (if $A$ is unimportant for a discussion). It is easy to see that the identity mapping of the reals is a continuous bijection of any $H$-space onto the real line. Let us recall ([@CH]) that every $H$-space is $T_1$, regular, hereditarily Lindelöf and hereditarily separable. However, there are topological properties, such as metrizability or Čech-completeness, which some $H$-spaces possess and others do not. Which $H$-spaces possess these properties can be found in [@K] and [@BS]. Recall ([@EJ]) that each compact subset of the Sorgenfrey line $H(\emptyset)$ is countable. So the space $H(\emptyset)$ cannot be $\sigma$-compact, unlike the space $H(\mathbb R)$ (the real line), which is evidently $\sigma$-compact. The following natural question was posed by F. Lin and J. Li. **Question 1**. *([@LL Question 3.7]) For what subsets $A$ of $\mathbb R$ are the spaces $H(A)$ $\sigma$-compact?* F. Lin and J. Li also noted the following. **Proposition 1**. *([@LL Theorem 3.13]) For an arbitrary subset $A$ of $\mathbb R$, if $H(A)$ is $\sigma$-compact, then $\mathbb R \setminus A$ is countable and nowhere dense in $H(A)$. $\Box$* **Proposition 2**. *([@LL Theorem 3.14]) For an arbitrary subset $A$ of $\mathbb R$, if $\mathbb R \setminus A$ is countable and scattered in $H(A)$, then $H(A)$ is $\sigma$-compact. $\Box$* In this note I present several characterizations of $\sigma$-compact Hattori spaces, and show that the implications of Propositions [Proposition 1](#prop_1){reference-type="ref" reference="prop_1"} and [Proposition 2](#prop_2){reference-type="ref" reference="prop_2"} are not invertible. Moreover, Proposition [Proposition 2](#prop_2){reference-type="ref" reference="prop_2"} (formulated as above) does not hold; its corrected version is presented in Corollary [Corollary 1](#cor){reference-type="ref" reference="cor"}. For standard notions we refer to [@E]. # Main results First of all let us recall the following fact. **Lemma 1**. *([@CH Lemma 2.1]) Let $A \subseteq \mathbb R$ and $B \subseteq A$ and $C \subseteq \mathbb R \setminus A$. Then* - *$\tau(A)|_B = \tau_E|_B$, where $\tau_E$ is the Euclidean topology on $\mathbb R$, and* - *$\tau(A)|_C = \tau_S|_C$, where $\tau_S$ is the Sorgenfrey topology on $\mathbb R$. $\Box$* **Proposition 3**. *For an arbitrary subset $A$ of $\mathbb R$, if $B = \mathbb R \setminus A$ is countable and it is a $G_\delta$-subset of the real line (in particular, if $B$ is countable and closed in the real line), then $H(A)$ is $\sigma$-compact.* Proof. 
Let us note that on the real line our set $A$ is an $F_\sigma$-set and hence it is $\sigma$-compact there. So by Lemma [Lemma 1](#lem_CH){reference-type="ref" reference="lem_CH"} $A$ is $\sigma$-compact in $H(A)$ too. Since $B$ is countable we get that $H(A)$ is $\sigma$-compact. $\Box$ Since every scattered subset of the real line is a $G_\delta$ (see [@KR Corollary 4]) we get the following. **Corollary 1**. *For any subset $A$ of $\mathbb R$, if $\mathbb R \setminus A$ is countable and scattered in the real line, then $H(A)$ is $\sigma$-compact. $\Box$* We continue with several characterizations of $\sigma$-compact $H$-spaces. **Theorem 1**. *Let $A \subseteq \mathbb R$ and $B = \mathbb R \setminus A$. Then the following conditions are equivalent.* - *There exist a $\sigma$-compact subset $D$ and a closed subset $C$ of the space $H(A)$ such that $B \subseteq C \subseteq D$.* - *There exists a closed $\sigma$-compact subset $C$ of the space $H(A)$ such that $B \subseteq C$.* - *The closure $\mbox{{\rm Cl}}_{H(\mathbb R)}(B)$ of $B$ in the real line is $\sigma$-compact in $H(A)$.* - *The closure $\mbox{{\rm Cl}}_{H(A)}(B)$ of $B$ in the space $H(A)$ is $\sigma$-compact in $H(A)$.* - *The space $H(A)$ is $\sigma$-compact.* Proof. The following implications are obvious: $(e) \Rightarrow (a)$, $(a) \Rightarrow (b)$, $(c) \Rightarrow (b)$, $(b) \Rightarrow (d)$, $(e) \Rightarrow (c)$. Let us show $(d) \Rightarrow (e)$. Since $B \subseteq \mbox{{\rm Cl}}_{H(A)}(B)$, each point $x \in H(A) \setminus \mbox{{\rm Cl}}_{H(A)}(B)$ has an open nbd contained in $H(A) \setminus \mbox{{\rm Cl}}_{H(A)}(B)$ which is an open interval of the real line. Since the space $H(A)$ is hereditarily Lindelöf, the set $H(A) \setminus \mbox{{\rm Cl}}_{H(A)}(B)$ is a $\sigma$-compact subset of $H(A)$ (see Lemma [Lemma 1](#lem_CH){reference-type="ref" reference="lem_CH"}). Thus $H(A) = \mbox{{\rm Cl}}_{H(A)}(B) \cup (H(A) \setminus \mbox{{\rm Cl}}_{H(A)}(B))$ is $\sigma$-compact as the union of two $\sigma$-compact sets. $\Box$ **Remark 1**. *Note that the set $\mbox{{\rm Cl}}_{H(\mathbb R)}(B)$ does not need to be $\sigma$-compact in the space $H(A)$ (it is of course closed there) even if it is compact in the real line, see Proposition [Proposition 5](#my_prop_2){reference-type="ref" reference="my_prop_2"}.* Let us consider the standard Cantor set $\mathbb C$ on the closed interval $[0,1]$ of the reals, which can be defined as follows. For any closed bounded interval $[a, b]$ of $\mathbb R$ put $$F([a,b]) = \{[a, \frac{2}{3}a + \frac{1}{3}b], [\frac{1}{3}a + \frac{2}{3}b,b]\}.$$ Then for each $n \geq 0$ by induction define a family $\mathcal C_n$ of closed intervals: $$\mathcal C_0 = \{[0,1]\}, \ \mathcal C_n = \{F([a,b]): [a,b] \in \mathcal C_{n-1}\}.$$ The standard Cantor set $\mathbb C$ of the closed interval $[0,1]$ is the intersection $\cap_{n=0}^\infty (\cup \mathcal C_n)$, where $\cup \mathcal C_n$ is the union of all closed intervals from the family $\mathcal C_n$. Put now $B_1 = \{a: [a,b] \in \mathcal C_n, n \geq 0\}$, $B_2 = \{b: [a,b] \in \mathcal C_n, n \geq 0\}$ and $A_1 = \mathbb R \setminus B_1$, $A_2 = \mathbb R \setminus B_2$. We will use this notation below. **Remark 2**. 
*Let us note that a set $Y \subset \mathbb R$ is nowhere dense in the real line iff $Y$ is nowhere dense in any $H$-space (see for example, [@CN Lemma 3.3]).* **Proposition 4**. *For the space $H(A_1)$ the following is valid.* - *The subspace $B_1$ of $H(A_1)$ is nowhere dense in $H(A_1)$ and it is homeomorphic to the space of rational numbers $\mathbb Q.$* - *The subspace $\mbox{{\rm Cl}}_{H(A_1)}(B_1)$ of $H(A_1)$ is homeomorphic to the standard Cantor set $\mathbb C$ on the real line, and the subspace $\mbox{{\rm Cl}}_{H(A_1)}(B_1) \setminus B_1$ of $H(A_1)$ is homeomorphic to the space of irrational numbers $\mathbb P$.* - *The space $H(A_1)$ is $\sigma$-compact.* Proof. (a) and (b) are obvious. Theorem [Theorem 1](#character){reference-type="ref" reference="character"} and (b) prove (c). $\Box$ **Corollary 2**. *Proposition [Proposition 2](#prop_2){reference-type="ref" reference="prop_2"} is not invertible. $\Box$* Proof. Let us note that $H(A_1)$ is $\sigma$-compact but the subspace $B_1$ of $H(A_1)$ is not scattered. $\Box$ **Corollary 3**. *Proposition [Proposition 3](#prop_3){reference-type="ref" reference="prop_3"} is not invertible.* Proof. Let us note that $H(A_1)$ is $\sigma$-compact but $B_1$ is not a $G_\delta$-subset of the Cantor set $\mathbb C$ in the real line and hence it is not a $G_\delta$ in the real line. $\Box$ **Proposition 5**. *For the space $H(A_2)$ the following is valid.* - *The subspace $B_2$ of $H(A_2)$ is nowhere dense in $H(A_2)$ and it is homeomorphic to the space of natural numbers $\mathbb N.$* - *The subspace $\mbox{{\rm Cl}}_{H(A_2)}(B_2)$ of $H(A_2)$ is equal to the standard Cantor set $\mathbb C$ of $\mathbb R$, and it is not $\sigma$-compact. The subspace $\mbox{{\rm Cl}}_{H(A_2)}(B_2) \setminus B_2$ of $H(A_2)$ is homeomorphic to the space of irrational numbers $\mathbb P$,* - *The space $H(A_2)$ is not $\sigma$-compact.* Proof. (a) is obvious. In (b) let us show that the subspace $\mbox{{\rm Cl}}_{H(A_2)}(B_2)$ of $H(A_2)$ is not $\sigma$-compact. Assume that the subspace $\mbox{{\rm Cl}}_{H(A_2)}(B_2)$ of $H(A_2)$ is $\sigma$-compact, i.e. $\mbox{{\rm Cl}}_{H(A_2)}(B_2) = \cup_{i=1}^\infty K_i$, where $K_i$ is compact in $H(A_2)$. Note that for each $i$ the set $K_i$ is compact in the real line and the Cantor set $\mathbb C$ with the topology from the real line is the union $\cup_{i=1}^\infty K_i$. Hence there is an open interval $(c, d)$ of the reals and some $i$ such that $(c, d) \cap \mathbb C \subseteq K_i$. Moreover, there exist points $b_0, b_1, \dots$ of $B_2$ such that $b_1 < b_2 < \dots < b_0$ and the sequence $\{b_j\}_{j=1}^\infty$ tends to $b_0$ in the real line. Since at the points of $B_2$ the topology of $H(A_2)$ is the Sorgenfrey topology we get a contradiction with the compactness of $K_i$ in the space $H(A_2)$. Theorem [Theorem 1](#character){reference-type="ref" reference="character"} and (b) prove (c). $\Box$ **Corollary 4**. *Proposition [Proposition 1](#prop_1){reference-type="ref" reference="prop_1"} is not invertible.* Proof. Let us note that $B_2$ is nowhere dense in $H(A_2)$ (see Remark [Remark 3](#rem_2){reference-type="ref" reference="rem_2"}) but the space $H(A_2)$ is not $\sigma$-compact. $\Box$ **Corollary 5**. *Proposition [Proposition 2](#prop_2){reference-type="ref" reference="prop_2"} does not hold. $\Box$* *Proof. Let us note that $B_2$ is scattered in $H(A_2)$ and the space $H(A_2)$ is not $\sigma$-compact. $\Box$* # Additional questions The following is obvious. 
- If a space $X$ is $\sigma$-compact then a subset $Y$ of $X$ is $\sigma$-compact iff it is an $F_\sigma$-subset of $X$. In particular, a subset of the real line is $\sigma$-compact iff it is an $F_\sigma$-set. - A subset of the Sorgenfrey line is $\sigma$-compact iff it is countable, - A subset of the space $\mathbb P$ of irrational numbers is $\sigma$-compact iff it is homeomorphic to an $F_\sigma$-subset of the standard Cantor set $\mathbb C$ on the real line. One can pose the following problem. **Problem 1**. *Let $A \subseteq \mathbb R$. Describe the $\sigma$-compact subsets of $H(A)$.* Let us note in advance that according to (a) if $H(A)$ is $\sigma$-compact then a subset of $H(A)$ is $\sigma$-compact iff it is an $F_\sigma$-subset of $H(A)$ Below we present some other answers to Problem [Problem 1](#problem){reference-type="ref" reference="problem"} by the use of observations (b) and (c) and some known facts. **Proposition 6**. *([@K Theorem 6] and [@BS Theorem 2.8]) $H(A)$ is homeomorphic to the Sorgenfrey line iff $A$ is scattered. $\Box$* **Corollary 6**. *If $A$ is scattered then a subset of $H(A)$ is $\sigma$-compact iff it is countable. $\Box$* **Proposition 7**. *([@BS Proposition 3.6]) $H(A)$ is homeomorphic to the space $\mathbb P$ of irrational numbers iff $\mathbb R \setminus A$ is dense in the real line and countable. $\Box$* **Corollary 7**. *If $\mathbb R \setminus A$ is dense in the real line and countable then a subset of $H(A)$ is $\sigma$-compact iff it is homeomorphic to an $F_\sigma$-subset of the standard Cantor set $\mathbb C$ on the real line. $\Box$* Since the space $H(A_2)$ from Proposition [Proposition 5](#my_prop_2){reference-type="ref" reference="my_prop_2"} is not $\sigma$-compact (as well as any subset of $H(A_2)$ containing some $[a,b]$ from $\mathcal C_n, n = 0, 1, 2, \dots$) one can pose the following question. **Question 2**. *What subsets of $H(A_2)$ are $\sigma$-compact?* 99 A. Bouziad, E. Sukhacheva, On Hattori spaces, Comment. Math. Univ. Carolin. 58,2 (2017) 213-223 V. A. Chatyrko, Y. Hattori, A poset of topologies on the set of real numbers, Comment. Math. Univ. Carolin. 54, 2 (2013) 189-196. V. A. Chatyrko, V. Nyagahakwa, Sets with the Baire property in topologies formed from a given topology and ideals of sets, Questions and answers in General Topology, 35 (2017) 59-76 R. Engelking, General Topology, Heldermann Verlag, Berlin, 1989. M. S. Espelie, J. E. Joseph, Compact subspaces of the Sorgenfrey line, Math. Magazine, 49 (1976) 250-251. Y. Hattori, Order and topological structures of poset of the formal balls on metric spaces, Mem.Fac.Sci.Eng. Shimane Univ. Ser. B Math. Sci., 43 (2010) 13 - 26. V. Kannan, M. Rajagopalan, On scattered spaces, Proc. Amer. Math. Soc., 43(2)(1974) 402-408. J. Kulesza, Results on spaces between the Sorgenfrey topology and the usual topology on $\mathbb R$, Topol. Appl. 231 (2017) 266-275 F. Lin, J. Li, Some topological properties of spaces between the Sorgenfrey and usual topologies on the real numbers. arXiv: 1807.06938v4 \[math.GN\] 21Dec 2022 (V.A. Chatyrko)\ Department of Mathematics, Linkoping University, 581 83 Linkoping, Sweden.\ vitalij.tjatyrko\@liu.se
arxiv_math
{ "id": "2309.11990", "title": "On $\\sigma$-compact Hattori spaces", "authors": "Vitalij Chatyrko", "categories": "math.GN", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The main classical result of Schubert calculus is that multiplication rules for the basis of Schubert cycles inside the cohomology ring of the Grassmannian $G(n,m)$ are the same as multiplication rules for the basis of Schur polynomials in the ring of symmetric polynomials. In this paper, we give a new explanation of this somewhat mysterious connection by using the geometric Satake correspondence to put the structure of a representation of $GL_m$ on $H^\bullet(G(n,m))$ and comparing it to the Fock space representation on symmetric polynomials. This new proof also extends to equivariant Schubert calculus, and gives an explanation of the relationship between torus-equivariant cohomology of Grassmannians and double Schur polynomials. address: | Department of Mathematics and Statistics\ McGill University author: - Antoine Labelle bibliography: - refs.bib title: Equivariant Schubert calculus and geometric Satake --- # Introduction The cohomology ring of the Grassmannian $G(n,m)$ of $n$-dimensional subspaces of $\mathbb C^m$ has a natural basis given by Schubert cycles $\sigma_\lambda$, which come from a cell decomposition of $G(n,m)$ into Schubert cells. Schubert calculus on Grassmannians is the problem of understanding the multiplication in the cohomology ring with respect to this basis, i.e. to calculate the structure constants $c_{\lambda\mu}^\nu$ in the expansion $$\label{eq:structure-consts} \sigma_\lambda\cdot \sigma_\mu = \sum_\nu c_{\lambda\mu}^\nu \sigma_\nu.$$ It is well-known that these constants can be identified as the Littlewood-Richardson coefficients, which also describe the multiplication of symmetric polynomials in the basis of Schur polynomials. In other words, there is a ring isomorphism from a certain quotient of the ring of symmetric polynomials in $n$ variables to $H^\bullet(G(n,m))$, which sends Schur polynomials to Schubert cycles. The classical proof of this result goes by first considering the case of multiplication by special Schubert cycles, described by Pieri's rule, and then expressing a general Schubert cycle in terms of special ones via Giambelli's formula, which mirrors the Jacobi-Trudi formula for Schur polynomials [@Ful §14.7]. This unfortunately does not give a very conceptual explanation for the existence of the isomorphism. In this note, we give an alternative explanation, by showing how this ring isomorphism can be obtained from an isomorphism of $GL_m$-representations between the two sides. Indeed, $H^\bullet(G(n,m))$, can be given the structure of a $GL_m$ representation via the celebrated geometric Satake correspondence. It is known from the general theory to be isomorphic to the highest weight representation of weight $\omega_n= (\underbrace{1, \ldots, 1}_{n}, 0, \ldots, 0)$, which is the wedge power $\bigwedge^n \mathbb Z^m$. On the other hand, there is also the well-known Fock space representation of $\mathfrak{gl}_\infty$ on the ring of symmetric functions [@Tin], which becomes a representation of $GL_m$ after taking a suitable quotient and is also isomorphic to the wedge representation. It is then possible to show that this representation isomorphism is in fact a ring isomorphism by looking at the action of a certain subalgebra of $\mathfrak{gl}_m$ which is related to the cohomology ring of the affine Grassmannian. It also follows immediately that, up to a scalar, the isomorphism takes Schubert cycles to Schur polynomials, since both are weight bases for their respective representations. 
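It may help to keep in mind the simplest possible case as a sanity check (a standard example, recorded here only for orientation): for $G = GL_1$ there are no roots, every coweight is dominant, and the $t$-adic valuation gives identifications $$\mathcal{G}r= \mathcal K^\times/\mathcal O^\times \cong \mathbb Z, \qquad \mathcal{G}r^\mu = \overline{\mathcal{G}r^\mu} = \{L_\mu\} = \{t^\mu \mathcal O\} \quad \text{for } \mu \in X_*(T) \cong \mathbb Z,$$ so every $G(\mathcal O)$-orbit is a single point and all closure relations are trivial. The first interesting geometry appears in higher rank, for instance for $G = GL_m$ as discussed below.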
If we work over $\mathbb Z$, this scalar is just a sign $\pm 1$. Showing that the sign is in fact always positive is a somewhat delicate issue. To do this, we need as a geometric input the fact that the structure constants for multiplication of Schubert classes are nonnegative. Moreover, this whole story generalizes perfectly well to $T$-equivariant cohomology, where $T\subset GL_m$ is the torus of diagonal matrices, given that we work over the ring $H_T^\bullet(\text{pt})=\mathbb Z[t_1, t_2, \ldots, t_m]$. Schur polynomials, in that case, get replaced by so-called *double Schur polynomials*, which are symmetric polynomials with coefficients in $\mathbb Z[t_1, t_2, \ldots]$. Therefore, we work equivariantly in the rest of the paper for the sake of generality, but the non-equivariant case can be recovered immediately by setting $t_i=0$ for $i=1, \ldots, m$. In Section [2](#sec:geometric-satake){reference-type="ref" reference="sec:geometric-satake"}, we recall the geometric Satake correspondence of Mirkovic and Vilonen and explain how it defines a representation structure on $H^\bullet(G(n,m))$. Then, in Section [3](#sec:equiv-coh){reference-type="ref" reference="sec:equiv-coh"}, we upgrade this to a representation structure on $H_T^\bullet(G(n,m))$ by applying the correspondence with coefficients in $H_T^\bullet(\text{pt})$. In Section [4](#sec:double-schur){reference-type="ref" reference="sec:double-schur"}, we put a representation structure on the ring of symmetric polynomials and define double Schur polynomials, which form the weight basis of this representation. We also define a certain quotient of the ring of symmetric polynomials, depending on $m$. In Section [5](#sec:rep-iso){reference-type="ref" reference="sec:rep-iso"}, we note that this quotient is isomorphic, as a representation of $GL_m$ over $H_T^\bullet(\text{pt})$, to $H_T^\bullet(G(n,m))$. We then explain in Section [6](#sec:ring-struct){reference-type="ref" reference="sec:ring-struct"} how to upgrade this representation isomorphism into a ring isomorphism, by studying the action of the equivariant cohomology ring of the affine Grassmannian. Finally, in Section [7](#sec:sgn){reference-type="ref" reference="sec:sgn"}, we tackle the problem of showing that the sign by which Schubert cycles and double Schur polynomials differ is in fact always $1$, by using an important positivity result due to Graham for multiplication of equivariant Schubert classes. # The geometric Satake correspondence {#sec:geometric-satake} In this section, we briefly summarize the geometric Satake correspondence of Mirkovic and Vilonen [@MV]. For a detailed exposition of the topic, see [@BR]. ## The affine Grassmannian Let $G$ be a complex reductive group and $T\subset B$ a maximal torus and Borel subgroup in $G$. Let $N$ be the unipotent radical of $B$. Set $\mathcal K= \mathbb C(\!(t)\!)$ and $\mathcal O= \mathbb C[\![t]\!]$. The affine Grassmannian $\mathcal{G}r$ is an ind-variety whose set of $\mathbb C$-points is the quotient $G(\mathcal K)/G(\mathcal O)$. Any coweight $\mu \in X_*(T)$ determines an element of $G(\mathcal K)$, hence a point of $\mathcal{G}r$, denoted $L_\mu$. There is a stratification by $G(\mathcal O)$ orbits $$\mathcal{G}r= \bigsqcup_{\mu \in X^+_*(T)} \mathcal{G}r^\mu \qquad \text{where} \qquad \mathcal{G}r^\mu = G(\mathcal O) \cdot L_\mu$$ and $X^+_*(T)$ is the set of dominant coweights. Moreover, we have $\overline{\mathcal{G}r^\mu} = \bigsqcup_{\nu \le \mu} \mathcal{G}r^\nu$. 
Let $\mathcal P$ be the category of $G(\mathcal O)$-equivariant perverse sheaves on $\mathcal{G}r$ with coefficients in $\mathbb Z$ (by [@MV Proposition 2.1], this is equivalent under the forgetful functor to the category of perverse sheaves constructible with respect to the stratification above). For each $\mu \in X^+_*(T)$, there is a perverse sheaf $\mathbf{IC}_\mu$, supported on $\overline{\mathcal{G}r^\mu}$, called the *intersection cohomology sheaf*[^1]. If $\mu$ is minimal with respect to the partial order on $X_*(T)$, so that $\overline{\mathcal{G}r^\mu}=\mathcal{G}r^\mu$, then this is simply the shifted constant sheaf $\underline{\mathbb Z}_{\mathcal{G}r^\mu}[\dim \mathcal{G}r^\mu]$. Other important subspaces of $\mathcal{G}r$ are the semi-infinite orbits $S_\mu=N(\mathcal K)\cdot L_\mu$, which are indexed by (not necessarily dominant) coweights $\mu$. We have $\mathcal{G}r= \bigsqcup_{\mu \in X_*(T)} S_\mu$ and $\overline{S_\mu} = \bigsqcup_{\nu \le \mu} S_\nu$. ## The correspondence There is a notion of convolution for perverse sheaves on $\mathcal{G}r$, which makes $\mathcal P$ into a monoidal category. Hypercohomology defines a tensor functor $H^\bullet : \mathcal P\to \operatorname{Mod}_\mathbb Z$ to the category of finitely generated $\mathbb Z$-modules. By the Tannakian formalism, there is an equivalence of monoidal categories $$\mathcal P\cong \operatorname{Rep}_\mathbb Z(G^\vee),$$ which commutes with the natural functors to $\operatorname{Mod}_\mathbb Z$ on both sides, where $G^\vee$ is an algebraic group over $\mathbb Z$ such that $G^\vee(R)$ is the group of tensor automorphisms of the functor $H^\bullet(-)\otimes R : \mathcal P\to \operatorname{Mod}_R$ for any ring $R$. Moreover, $G^\vee$ can actually be identified with the Langlands dual group of $G$ over $\mathbb Z$, i.e. the unique split reductive group over $\mathbb Z$ whose root datum is dual to the root datum of $G$, and the equivalence sends $\mathbf{IC}_\mu$ to the Schur module of highest weight $\mu$ [@MV Proposition 13.1]. Moreover, the fiber functor $H^\bullet(-)$ factors through $X_*(T)$-graded $\mathbb Z$-modules via the natural isomorphism $$H^\bullet(-) = \bigoplus_{\mu \in X_*(T)} H_c^\bullet(S_\mu, -)$$ and $H_c^\bullet(S_\mu, -)$ is concentrated in degree $2\langle \rho, \mu\rangle$, where $\rho$ is the Weyl vector (half the sum of the positive roots). Then there is a canonical maximal torus $T^\vee \subset G^\vee$ which acts by $\mu$ on the degree $\mu$ part of $H^\bullet(\mathcal F)$ for any perverse sheaf $\mathcal F$. ## The case of $GL_m$ We will be mainly interested in the special case $G = GL_m(\mathbb C)$, where we take $T$ to be the standard torus of diagonal matrices and $B$ the subgroup of upper triangular matrices. In this case, there is a concrete interpretation of points of $\mathcal{G}r$ as $\mathcal O$-lattices inside $\mathcal K^m$. Indeed, there is a transitive action of $GL_m(\mathcal K)$ on such lattices, and the stabilizer of the standard lattice $L_0=\mathcal O^m$ is exactly $GL_m(\mathcal O)$. By the theory of Smith normal form over PIDs, for any lattice $L$ there exists a basis $b_1, \ldots, b_m$ of $L_0$ and integers $\mu_1 \ge \ldots \ge \mu_m$ such that $t^{\mu_1} b_1, \ldots, t^{\mu_m} b_m$ form a basis for $L$. The tuple $(\mu_1, \ldots, \mu_m)$ is uniquely determined and is called the *type* of $L$. Then, for every $\mu \in X^+_*(T)$, $\mathcal{G}r^\mu$ is exactly the set of lattices of type $\mu$. 
If $\mu$ is the fundamental weight $\omega_n= (\underbrace{1, \ldots, 1}_{n}, 0, \ldots, 0)$, then lattices of type $\mu$ can be identified with $n$-dimensional quotients of $L_0/tL_0 \cong \mathbb C^m$, so $\mathcal{G}r^{\omega_n}=\overline{\mathcal{G}r^{\omega_n}}=G(n,m)$ is the usual Grassmannian. In that case, $\mathbf{IC}_{\omega_n}$ is the shifted constant sheaf $\underline{\mathbb Z}_{G(n,m)}[\dim G(n,m)]$, so $H^\bullet(\mathcal{G}r, \mathbf{IC}_{\omega_n}) = H^\bullet(G(n,m))$ (the classical singular cohomology) up to shift. The semi-infinite orbits that intersect $\mathcal{G}r^{\omega_n}$ are exactly $S_\mu$ for $\mu$ a sequence of zeros and ones containing exactly $n$ ones. In that case, $S_\mu \cap \mathcal{G}r^{\omega_n}$ is the $N(\mathbb C)$-orbit of $L_\mu$, which is the opposite Schubert cell $\Omega_\mu$ indexed by $\mu$[^2]. The root system of $GL_m$ is self-dual, so $G^\vee$ is simply $GL_m$ over $\mathbb Z$. The geometric Satake correspondence then identifies $H^\bullet(G(n,m))$ with the Schur module of highest weight $\omega_n$ for $GL_m/\mathbb Z$, which is the wedge representation $\bigwedge^n \mathbb Z^m$. The identification of $G^\vee$ with $GL_m$ is not uniquely determined, but we can fix one by requiring that the isomorphism between $\mathbb Z^m$ and $H^\bullet(G(1,m))=H^\bullet(\mathbb P^{m-1})$ sends the $k^\text{th}$ standard basis vector to the class of $\overline{\Omega_{e_k}}=\mathbb P^{m-k}$, where $e_k=(0, \ldots, 1, \ldots, 0)$ with the one in position $k$. # Equivariant cohomology {#sec:equiv-coh} Let $R_T=H_T^\bullet(\text{pt})=\operatorname{Sym}X^*(T)$ denote the $T$-equivariant cohomology of a point. In [@YZ], it is proven that there is a natural isomorphism of functors $\mathcal P\to \operatorname{Mod}_{R_T}$ between $H^\bullet_T(-)$ and $H^\bullet(-)\otimes R_T$. This isomorphism comes from the decompositions $$H^\bullet(-) = \bigoplus_{\mu \in X_*(T)} H_c^\bullet(S_\mu, -)$$ and $$H_T^\bullet(-) = \bigoplus_{\mu \in X_*(T)} H_{T,c}^\bullet(S_\mu, -),$$ and canonical isomorphisms $H_{T,c}^\bullet(S_\mu, -) =H_c^\bullet(S_\mu, -)\otimes R_T$ due to the fact that $H_c^\bullet(S_\mu, -)$ is concentrated in one degree. Therefore, the group of tensor automorphisms of the functor $H_T^\bullet(-)$ can be identified by the Tannakian formalism with $G^\vee(R_T)$, and the equivariant cohomology of any perverse sheaf acquires the structure of a representation of $G^\vee$ over $R_T$. Now, and for the rest of this note, take $G=GL_m(\mathbb C)$ and $T$ the standard maximal torus. In this case, $R_T$ is the polynomial ring $\mathbb Z[t_1, \ldots, t_m]$. By the discussion above, the $R_T$-module $V:=H_T^\bullet(G(1,m))=H_T^\bullet(\mathbb P^{m-1})$ gets identified with the standard representation of $GL_m(R_T)$, where the $k^\text{th}$ basis vector corresponds to the class of $\overline{\Omega_{e_k}}=\mathbb P^{m-k}$. It is a standard fact that $H_T^\bullet(\mathbb P^{m-1})$ is isomorphic, as an $R_T$-algebra, to $R_T[x]/\prod_{i=1}^m (x+t_i)$ where $x=c_1^T(\mathcal O(1))$ and that, under this isomorphism, the class of $\overline{\Omega_{e_k}}$ corresponds to $\prod_{i=1}^{k-1} (x+t_i)$ [@AF Chapter 4, Example 7.4]. To avoid a choice of basis, we can think of $G^\vee_{R_T}$ as the automorphism group of the free $R_T$-module $V$.
Then we have an isomorphism of $G^\vee_{R_T}$ representations $$\label{eq:cohomology-eq-wedge} H_T^\bullet(G(n,m)) = \bigwedge\nolimits^n V.$$ # Symmetric functions and double Schur polynomials {#sec:double-schur} Let $R=\mathbb Z[t_1, t_2, \ldots]$ be a polynomial ring in infinitely many variables. Denote by $\Lambda_n, \Lambda_n^\text{sgn}\subset R[x_1, \ldots, x_n]$ the $R$-modules of symmetric and skew-symmetric polynomials in $n$ variables with coefficients in $R$, respectively. Note that $\Lambda_n^{sgn}$ can be identified with $\bigwedge^n R[x]$ via $$f_1(x) \wedge \cdots \wedge f_n(x) \mapsto \sum_{\sigma \in S_n} \text{sgn}(\sigma) f_{\sigma(1)}(x_1)\cdots f_{\sigma(n)}(x_n).$$ The \"double monomials\" $(x|\underline{t})^k:=(x+t_1)\cdots(x+t_k)$ form an $R$-basis of $R[x]$ as $k$ runs over nonnegative integers. This gives an induced basis $(a_\nu(\underline{x}|\underline{t}))$ of $\Lambda_n^\text{sgn}$ indexed by strictly decreasing sequences $\nu_1> \ldots> \nu_n$ of nonnegative integers, where $$a_\nu(\underline{x}|\underline{t}) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n (x_{\sigma(i)}|\underline{t})^{\nu_i}$$ Let $\rho = (n-1, n-2, \ldots, 1, 0)$. We can interpret the definition of $a_\rho$ as the determinant $\det((x_i|\underline{t})^{n-j})_{i,j=1}^n$ and then do row operations to reduce it to the usual Vandermonde determinant $\det(x_i^{n-j})_{i,j=1}^n = \prod_{1\le i<j \le n} (x_i-x_j)$. Therefore we have $$a_\rho(\underline{x}, \underline{t}) = \prod_{1\le i<j \le n} (x_i-x_j).$$ **Proposition 1**. *Multiplication by $a_\rho(\underline{x},\underline{t})$ defines an isomorphism of $R$-modules $\Lambda_n \to \Lambda_n^{sgn}$.* *Proof.* Since the product of a skew-symmetric polynomial with a symmetric polynomial is clearly skew-symmetric, multiplication by $a_\rho$ does send $\Lambda_n$ to $\Lambda_n^{sgn}$, and the map is injective since $R[x_1,\ldots, x_n]$ is an integral domain. Moreover, every skew-symmetric polynomial vanishes whenever two variables are equal, hence is divisible by $x_i-x_j$ for all $i<j$, hence is divisible by $a_\rho$. This shows the surjectivity. ◻ By the proposition above, we get an induced basis $(s_\lambda(\underline{x}, \underline{t}))$ of $\Lambda_n$ indexed by partitions $\lambda_1 \ge \ldots \ge \lambda_n\ge 0$, where $$s_\lambda(\underline{x}, \underline{t}) = \frac{a_{\lambda+\rho}(\underline{x},\underline{t})}{a_\rho(\underline{x}, \underline{t})}.$$ These polynomials are called *double Schur polynomials*[^3]. Let $\mathfrak{gl}_{\infty}^+(R)$ be the Lie algebra of $\mathbb N$ by $\mathbb N$ matrices over $R$ with finitely many nonzero entries. It acts on the free $R$-module with basis indexed by $\mathbb N$, which we can identify with $R[x]$ via the basis $((x|\underline{t})^k)_{k\in \mathbb N}$. Hence it also acts on $\Lambda_n$ through the identification $$\Lambda_n \underset{\sim}{\overset{a_\rho}{\longrightarrow}}\Lambda_n^\text{sgn}\cong \bigwedge\nolimits^n R[x].$$ This is an equivariant version of the so-called *Fock space* representation of $\mathfrak{gl}_{\infty}^+(\mathbb C)$ on the algebra of symmetric functions[^4]. It is clear from the construction that the double Schur polynomials are then a weight basis of this representation for the action of the subalgebra of diagonal matrices (and this uniquely characterize double Schur polynomials up to sign, because all weights have multiplicity one). 
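To illustrate the definition, consider the case $n=2$ and $\lambda=(1,0)$, so that $\lambda+\rho=(2,0)$. Then $$a_{(2,0)}(\underline{x}|\underline{t}) = (x_1+t_1)(x_1+t_2)-(x_2+t_1)(x_2+t_2) = (x_1-x_2)(x_1+x_2+t_1+t_2),$$ so dividing by $a_\rho(\underline{x}, \underline{t})=x_1-x_2$ gives $$s_{(1,0)}(\underline{x}, \underline{t}) = x_1+x_2+t_1+t_2,$$ which specializes to the classical Schur polynomial $s_{(1)}=x_1+x_2$ upon setting the $t_i$ to $0$.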
## Truncated versions {#sec:truncated} Let $J_m$ be the ideal of $R[x_1, \ldots, x_n]$ generated by $t_{m+1}, t_{m+2}, \ldots$ and $(x_i|\underline{t})^m$ for $i=1, \ldots, n$, and $I_m^\text{sgn}= J_m \cap \Lambda_n^\text{sgn}$. As an $R$-submodule of $\Lambda_n^\text{sgn}$, $I_m^\text{sgn}$ is clearly generated by $t_{m+1}, t_{m+2}, \ldots$ and the basis elements $a_\nu$ for strict partitions $\nu$ with $\nu_1\ge m$. Note that $I_m^\text{sgn}$ is in fact a sub-$\Lambda_n$-module of $\Lambda_n^\text{sgn}$ (since $J_m$ is an ideal). Hence, if we let $I_m$ be the preimage of $I_m^\text{sgn}$ under the \"multiplication by $a_\rho$\" isomorphism $\Lambda_n \overset{\sim}{\to} \Lambda_n^\text{sgn}$, then $I_m$ is an ideal of $\Lambda_n$. Note that $I_m$ is generated as an $R$-module by $t_{m+1}, t_{m+2}, \ldots$ and the basis elements $s_\lambda$ for partitions $\lambda$ with $\lambda_1> m-n$. Define $\Lambda_{n,m}^\text{sgn}= \Lambda_n^\text{sgn}/I_m^\text{sgn}$ and $\Lambda_{n,m} = \Lambda_n/I_m$. Identifying $R_T$ with $R/(t_{m+1}, t_{m+2}, \ldots)= \mathbb Z[t_1, \ldots, t_m]$, we see that $\Lambda_{n,m}^\text{sgn}$ and $\Lambda_{n,m}$ have the structure of $R_T$-modules, and multiplication by $a_\rho$ defines an isomorphism between them. The map $R[x]\to V=R_T[x]/(x|\underline{t})^m$ which quotients by $t_{m+1}, t_{m+2}, \ldots$ and by $(x|\underline{t})^m$ induces a surjection $$\Lambda_n^\text{sgn}= \bigwedge\nolimits^n R[x] \twoheadrightarrow \bigwedge\nolimits^n V$$ which has kernel $I_m^\text{sgn}$, so we get an identification of $\Lambda_{n,m}^\text{sgn}$ with $\bigwedge\nolimits^n V$, and $\Lambda_{n,m}^\text{sgn}$ gets the structure of a representation of $G^\vee_{R_T}=\underline{\operatorname{Aut}}_{R_T}(V)$. We can also transfer this representation structure to $\Lambda_{n,m}$ via the \"multiplication by $a_\rho$\" isomorphism. Then the quotient map $$\Lambda_n \twoheadrightarrow \Lambda_{n,m}$$ is compatible with the representation structure on both sides (in the sense that it is $\mathfrak{gl}_m(R)$-equivariant after making both sides into representations of $\mathfrak{gl}_m(R)$ via the quotient map $\mathfrak{gl}_m(R) \to \mathfrak{gl}_m(R_T) \cong \operatorname{Lie}(G^\vee_{R_T})$ and the inclusion $\mathfrak{gl}_m(R)\hookrightarrow \mathfrak{gl}_{\infty}^+(R)$). # The representation isomorphism {#sec:rep-iso} We have put the structure of a representation of $G^\vee_{R_T}$ on $H_T^\bullet(G(n,m))$ and $\Lambda_{n,m}$, and have seen that both are isomorphic to the $n^{th}$ wedge representation. We therefore have an isomorphism $$\Phi : \Lambda_{n,m} \to H_T^\bullet(G(n,m))$$ of $G^\vee_{R_T}$-representations. There is a bijection between partitions that fit in an $n$ by $m-n$ rectangle and coweights $\mu \in X_*(T)$ having $n$ ones and zeros everywhere else, which sends a partition $\lambda$ to the coweight $\mu$ with ones in positions $\lambda_1 + n, \lambda_2 +n-1, \ldots, \lambda_n+1$. Since $(x|\underline{t})^k$ spans the weight $e_{k+1}$ subspace of $V$, we see that the weight $\mu$ subspace of $\Lambda_{n,m}$ is spanned by $s_\lambda$, where $\lambda$ corresponds to $\mu$ under the bijection above. On the other hand, we also know by the discussion of Section [3](#sec:equiv-coh){reference-type="ref" reference="sec:equiv-coh"} that the weight $\mu$ subspace of $H_T^\bullet(G(n,m))$ is $H_{T,c}^\bullet(S_\mu, \underline{\mathbb Z}_{G(n,m)}[\dim G(n,m)])$, which is spanned by the equivariant cohomology class of $\overline{\Omega_\mu}$.
We will denote this class by $\sigma_\lambda$, where again $\lambda$ corresponds to $\mu$ under the above bijection. Since representation isomorphisms preserve weight spaces, and $\pm 1$ are the only units in $R_T$, it follows that $\Phi$ sends $s_\lambda$ to $\pm \sigma_\lambda$ for all $\lambda$. Note also that, by Schur's lemma, $\Phi$ is uniquely determined up to a unit in $R_T$, i.e. up to a sign. We can fix this sign by requiring that $\Phi$ preserves multiplicative identities, i.e. $\Phi(s_\varnothing)=\sigma_\varnothing$. # Recovering the ring structure {#sec:ring-struct} ## Equivariant cohomology of $\mathcal{G}r$ By the Tannakian formalism, the Lie algebra $\mathfrak{g}^\vee_{R_T}$ of $G^\vee$ over $R_T$ can be identified with the natural endomorphisms $(\phi_\mathcal F)_{\mathcal F\in\mathcal P}$ of the fiber functor $H_T^\bullet(-) : \mathcal P\to \operatorname{Mod}_{R_T}$ that satisfy $$\label{eq:lie-alg-condition} \phi_{\mathcal F_1 * \mathcal F_2}=1 \otimes \phi_{\mathcal F_2} + \phi_{\mathcal F_1} \otimes 1$$ [@YZ 5.3]. We now consider the cohomology ring $H_T^\bullet(\mathcal{G}r)$. Since $\mathcal{G}r$ is homeomorphic to the group of based polynomial loops in $G$, there is a natural Hopf algebra structure on this ring. Any element of $H_T^\bullet(\mathcal{G}r)$ defines by cup-product a natural endomorphism of $H_T^\bullet(-)$, and, because of [@YZ Proposition 2.7], equation ([\[eq:lie-alg-condition\]](#eq:lie-alg-condition){reference-type="ref" reference="eq:lie-alg-condition"}) is satisfied for primitive elements of $H_T^\bullet(\mathcal{G}r)$, so we have a map $$\label{eq:cohomology-to-lie-alg} H^\bullet_T(\mathcal{G}r)^\text{prim} \to \mathfrak{g}_{R_T}^\vee.$$ Consider the equivariant line bundle $\mathscr L$ on $\mathcal{G}r$ whose fiber over a lattice $L$ is $$\det\left(\frac{L}{L\cap L_0} \right) \otimes \det\left(\frac{L_0}{L\cap L_0} \right)^*,$$ where $\det$ of a finite dimensional $\mathbb C$-vector space means the top exterior power. By [@YZ Lemma 5.1], the element $e_T=c_1^T(\mathscr L)\in H^2_T(\mathcal{G}r)$ is primitive, hence defines an element of the Lie algebra $\mathfrak{g}_{R_T}^\vee$, which we also denote by $e_T$. If we identify $G^\vee_{R_T}$ with the automorphism group of the $R_T$-module $V$, then its Lie algebra $\mathfrak{g}^\vee_{R_T}$ can be identified with $\operatorname{End}_{R_T}(V)$. **Proposition 2**. *Under the identification $\mathfrak{g}_{R_T}^\vee=\operatorname{End}_{R_T}(V)$, $e_T$ is the multiplication by $-x$ operator.* *Proof.* The line bundle $\mathscr L$ restricts to the tautological line bundle $\mathcal O(-1)$ on $\mathbb P^{m-1}$, so $e_T=c_1^T(\mathscr L)$ acts via multiplication by $c_1^T(\mathcal O(-1))=-c_1^T(\mathcal O(1))=-x$ on $V=H_T^\bullet(\mathbb P^{m-1})$. ◻ Since $H^\bullet_T(\mathcal{G}r)$ is commutative, the image of ([\[eq:cohomology-to-lie-alg\]](#eq:cohomology-to-lie-alg){reference-type="ref" reference="eq:cohomology-to-lie-alg"}) must lie inside the centralizer $\mathcal Z_{e_T}$ of $e_T$ inside $\operatorname{End}_{R_T}(V)$. Since $-x$ generates $V$ as an $R_T$-algebra, the $R_T$-linear endomorphisms of $V$ that commute with multiplication by $-x$ are exactly the operators of multiplication by some element of $V$. Hence we can identify $\mathcal Z_{e_T}$ with $V$, acting on itself by multiplication.
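As a concrete illustration, take $m=2$, so that $V=R_T[x]/(x+t_1)(x+t_2)$ with $R_T=\mathbb Z[t_1,t_2]$. In the basis $\{1,\,x+t_1\}$ of double monomials, multiplication by $x$ satisfies $x\cdot 1 = (x+t_1)-t_1$ and $x\cdot(x+t_1)=-t_2(x+t_1)$, so the element $e_T$, i.e. multiplication by $-x$, is represented by the matrix $$e_T = \begin{pmatrix} t_1 & 0 \\ -1 & t_2 \end{pmatrix}.$$ Setting $t_1=t_2=0$ recovers a principal nilpotent element, in line with the familiar non-equivariant picture.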
In fact, after changing coefficients to a field, it turns out that ([\[eq:cohomology-to-lie-alg\]](#eq:cohomology-to-lie-alg){reference-type="ref" reference="eq:cohomology-to-lie-alg"}) is an isomorphism onto $\mathcal Z_{e_T}$ [@Gin Corollary 5.3.2][^5]: **Theorem 3**. *The natural map $$H^\bullet_T(\mathcal{G}r, \mathbb Q)^\text{prim} \to \mathcal Z_{e_T} \otimes \mathbb Q$$ is an isomorphism, and $H^\bullet_T(\mathcal{G}r,\mathbb Q)^\text{prim}$ freely generates $H^\bullet_T(\mathcal{G}r,\mathbb Q)$, so there is an induced isomorphism $$H^\bullet_T(\mathcal{G}r, \mathbb Q) \to U(\mathcal Z_{e_T} \otimes \mathbb Q).$$* ## The main theorem We now have the tools to prove that $\Phi$ preserves multiplication. **Theorem 4**. *The isomorphism of representations $\Phi : \Lambda_{n,m} \to H_T^\bullet(G(n,m))$ is a ring isomorphism.* *Proof.* First, since both sides are free $\mathbb Z$-modules, it's enough to show that $\Phi$ is a ring homomorphism after tensoring with $\mathbb Q$. Recall also that we chose $\Phi$ so that it preserves the multiplicative identities. Since $\Phi$ is $\mathfrak{g}_{R_T}^\vee$-equivariant, it is in particular $\mathcal Z_{e_T}$-equivariant. Moreover, we claim that any $f\in \mathcal Z_{e_T}$ acts on both the domain and codomain of $\Phi$ via multiplication by $f\cdot 1$ (using the ring structure on the domain and codomain of $\Phi$). For $H_T^\bullet(G(n,m))$, this is because of Theorem [Theorem 3](#thm:prim-iso-centralizer){reference-type="ref" reference="thm:prim-iso-centralizer"} and the fact that a class $\alpha \in H^\bullet_T(\mathcal{G}r)$ acts via multiplication by $\alpha|_{G(n,m)}$. For $\Lambda_{n,m}$, we see from the rule $$X \cdot (v_1 \wedge \ldots \wedge v_n) = \sum_{i=1}^n v_1 \wedge \ldots \wedge Xv_i \wedge \ldots \wedge v_n$$ for the action of a Lie algebra on wedge powers that, for any polynomial $f\in V\cong \mathcal Z_{e_T}$, $f$ acts on $\Lambda_{n,m}^\text{sgn}$ (hence also on $\Lambda_{n,m}$) via multiplication by $f(x_1)+\ldots+f(x_n)$. We have therefore (after tensoring with $\mathbb Q$) a commutative diagram where $U(\mathcal Z_{e_T}\otimes \mathbb Q)=\operatorname{Sym}_{R_T\otimes \mathbb Q}^\bullet (\mathcal Z_{e_T}\otimes \mathbb Q)$ is the universal enveloping algebra of $\mathcal Z_{e_T}\otimes \mathbb Q$, seen as an abelian Lie algebra over $R_T\otimes \mathbb Q$, and the two vertical arrows are given by $f \mapsto f \cdot 1$. The two vertical arrows are ring homomorphisms, and the left vertical arrow is surjective because the power sum symmetric functions $x_1^k + \cdots + x_n^k$ generate $R[x_1,\ldots,x_n]^{S_N} \otimes \mathbb Q$ (and therefore its quotient $\Lambda_{n,m}\otimes \mathbb Q$) as an $R \otimes \mathbb Q$-algebra. This implies that $\Phi_\mathbb Q$ itself is a ring isomorphism. ◻ # Getting the signs right {#sec:sgn} We constructed a ring isomorphism $\Phi : \Lambda_{n,m} \to H_T^\bullet(G(n,m))$ which sends $s_\lambda(\underline{x}, \underline{t})$ to $\pm \sigma_\lambda$. We now tackle the delicate problem of proving that the sign is in fact always positive. The main geometric input we need is the following positivity result for the multiplication of Schubert classes [@AF Chapter 19, Theorem 3.1]. **Theorem 5**. 
*The structure constants $c_{\lambda\mu}^\nu$ in the expansion $$\sigma_\lambda\cdot \sigma_\mu = \sum_\nu c_{\lambda\mu}^\nu \sigma_\nu$$ lie in $\mathbb N[t_1-t_2, \ldots, t_{n-1}-t_n]\subset R_T$.* We will prove that the signs are positive by considering the iterated action of $-e_T\in H^2(\mathcal{G}r)^\text{prim} \subset \mathfrak{g}^\vee_{R_T}$. The following two lemmas describe more explicitly how $e_T$ acts on both sides of $\Phi$. **Lemma 6**. *The restriction of $e_T$ to $G(n,m)$ is $-\sigma_1+t_1+\ldots + t_n$.* *Proof.* Note that the restriction of $\mathscr L$ to $G(n,m)$ is the top exterior power of the tautological vector bundle $\mathcal S$, so $$e_T|_{G(n,m)} = c_1^T\left(\bigwedge\nolimits^n \mathcal S\right).$$ On the other hand there is an equivariant map of line bundles $$\bigwedge\nolimits^n \mathcal S\to \bigwedge\nolimits^n \mathbb C^n$$ induced by the quotient map $\mathbb C^m \to \mathbb C^n$ which forgets the last $m-n$ coordinates, and the zero set of this map is exactly the opposite Schubert variety corresponding to the partition $(1,0, \ldots, 0)$. It therefore follows from basic properties of equivariant Chern classes [@AF Chapter 2, §3] that $$\sigma_1 = c_1^T\left(\left(\bigwedge\nolimits^n \mathcal S\right)^* \otimes \bigwedge\nolimits^n \mathbb C^n\right)=-c_1^T\left(\bigwedge\nolimits^n \mathcal S\right)+ c_1^T\left(\bigwedge\nolimits^n\mathbb C^n\right)=-e_T|_{G(n,m)}+t_1+\ldots+t_n.$$ ◻ The following Pieri rule for double Schur polynomials is standard and follows, for example, from the Littlewood-Richardson rule of [@MS], but we include a proof here for completeness. **Lemma 7**. *For every partition $\lambda$ with at most $n$ parts, $$(x_1+\ldots +x_n)\cdot s_\lambda(\underline{x}|\underline{t})=-\left(\sum_{i=1}^n t_{\lambda_i+n-i+1}\right) s_\lambda(\underline{x}|\underline{t}) + \sum_{\lambda' \gtrdot \lambda} s_{\lambda'}(\underline{x}|\underline{t}),$$ where the sum is over all $\lambda'$ obtained by adding one box to the diagram of $\lambda$.* *Proof.* Multiplying by $a_\rho$ on both sides, this is equivalent to showing that $$(x_1+\ldots +x_n)\cdot a_\nu(\underline{x}|\underline{t})=-\left(\sum_{i=1}^n t_{\nu_i+1}\right) a_\nu(\underline{x}|\underline{t}) + \sum_{\nu' \gtrdot \nu} a_{\nu'}(\underline{x}|\underline{t})$$ for all $\nu_1>\ldots>\nu_n\ge 0$, where the sum is over all $\nu'$ obtained from $\nu$ by increasing one of the $\nu_i$ by $1$ (if $\nu_i+1=\nu_{i-1}$, that term vanishes since $a_{\nu'}$ is then a determinant with two equal rows). Using the identity $x_i\cdot (x_i|\underline{t})^k= (x_i|\underline{t})^{k+1}-t_{k+1}(x_i|\underline{t})^k$, we have $$\begin{aligned} (x_1+\ldots +x_n)\cdot a_\nu(\underline{x}|\underline{t}) &= \sum_{j=1}^n \sum_{\sigma \in S_n} x_j \cdot \text{sgn}(\sigma) \prod_{i=1}^n (x_{\sigma(i)}|\underline{t})^{\nu_i} \\ &= \sum_{j=1}^n \sum_{\sigma \in S_n} x_{\sigma(j)}\text{sgn}(\sigma) \prod_{i=1}^n (x_{\sigma(i)}|\underline{t})^{\nu_i} \\ &= \sum_{j=1}^n \sum_{\sigma \in S_n} \text{sgn}(\sigma) \left( -t_{\nu_j+1} \prod_{i=1}^n (x_{\sigma(i)}|\underline{t})^{\nu_i} + \prod_{i=1}^n (x_{\sigma(i)}|\underline{t})^{\nu_i+\delta_{ij}} \right)\\ &= -\sum_{j=1}^n t_{\nu_j+1} a_\nu(\underline{x}|\underline{t}) + \sum_{\nu' \gtrdot \nu} a_{\nu'}(\underline{x}|\underline{t}),\end{aligned}$$ which is what we wanted to show. ◻ We can now prove **Theorem 8**.
*For every partition $\lambda$ fitting in the $n$ by $m-n$ box, the map $\Phi$ sends $s_\lambda(\underline{x},\underline{t})$ to $\sigma_\lambda$.* *Proof.* For each $\lambda$, let $\varepsilon_\lambda\in \{\pm 1\}$ be such that $\Phi(s_\lambda)=\varepsilon_\lambda\sigma_\lambda$. Since $\Phi$ is $\mathfrak{g}^\vee_{R_T}$-equivariant and preserves multiplicative identities, we have $$\label{eq:xk-act-on-1} (-e_T)^k \cdot 1_{\Lambda_{n,m}} \overset{\Phi}{\longmapsto} (-e_T)^k \cdot 1_{H_T^\bullet(G(n,m))}$$ for all $k\ge 0$. The left hand side is $(x_1+\ldots +x_n)^k$ (using that $-e_T$ acts on $V$ via multiplication by $x$, hence on $\Lambda_{n,m}$ via multiplication by $x_1+\ldots+x_n$) which, when expanded in the double Schur basis, gives by Lemma [Lemma 7](#lem:pieri){reference-type="ref" reference="lem:pieri"} $$\sum_{|\lambda|=k} f_\lambda s_\lambda(\underline{x}, \underline{t}) + \text{terms in the $s_\lambda$'s for $|\lambda|<k$},$$ where $f_\lambda$ is the number of chains of partitions going from $\varnothing$ to $\lambda$ where a single box is added at each step (equivalently, the number of standard Young tableaux of shape $\lambda$). Now, $-e_T$ acts on $H_T^\bullet(G(n,m))$ via multiplication by $-e_T|_{G(n,m)}=\sigma_1-t_1-\ldots-t_n$ (by Lemma [Lemma 6](#lem:res-eT){reference-type="ref" reference="lem:res-eT"}). Hence the right hand side of ([\[eq:xk-act-on-1\]](#eq:xk-act-on-1){reference-type="ref" reference="eq:xk-act-on-1"}) is $$\left(\sigma_1-t_1-\ldots-t_n\right)^k= \sum_{|\lambda|=k} f'_\lambda\sigma_\lambda+ \text{terms in the $\sigma_\lambda$'s for $|\lambda|<k$},$$ where $f'_\lambda$ is the coefficient of $\sigma_\lambda$ in the expansion of $\sigma_1^k$, which lies in $\mathbb N[t_1-t_2, \ldots, t_{n-1}-t_n]\subset R_T$ by Theorem [Theorem 5](#thm:positivity){reference-type="ref" reference="thm:positivity"}, but in fact in $\mathbb N$ for degree reasons. Now, we see from ([\[eq:xk-act-on-1\]](#eq:xk-act-on-1){reference-type="ref" reference="eq:xk-act-on-1"}) that $f_\lambda'=\varepsilon_\lambda f_\lambda$ for all $\lambda$ of size $k$. Since $f_\lambda, f'_\lambda$ are both nonnegative and $f_\lambda$ is in fact strictly positive (every partition has at least one standard Young tableau), the signs $\varepsilon_\lambda$ must be positive. Since $k$ was arbitrary, this completes the proof. ◻ [^1]: If we take coefficients in a field $k$ rather than $\mathbb Z$, these intersection cohomology sheaves are exactly the simple objects in the category $\mathcal P(\mathcal{G}r, k)$. [^2]: This is an opposite Schubert cell rather than a Schubert cell because we identify $\mathcal{G}r^{\omega_n}$ with the Grassmannian of $n$-dimensional *quotients* of $L_0/tL_0$, or equivalently $n$-dimensional subspaces of $(L_0/tL_0)^*$. The passage to the dual space makes $GL_m(\mathbb C)$ act on $G(n,m)$ under this identification not in the usual way, but via the inverse transpose matrix. [^3]: Our definition differs slightly from the usual convention for double Schur polynomials, which replaces $t_i$ by $-t_i$ in the definition of double monomials. [^4]: The term Fock space actually usually refers to the representation of the Lie algebra $\mathfrak{gl}_\infty$ of $\mathbb Z$ by $\mathbb Z$ matrices on symmetric functions with infinitely many variables, which is obtained as a limit of our representation as $n \to \infty$. [^5]: The paper [@Gin] works with $\mathbb C$ coefficients, but it is clear that being an isomorphism over $\mathbb Q$ or over $\mathbb C$ is equivalent.
--- abstract: | In this paper, we study the well-posedness, ill-posedness and uniqueness of the stationary 3-D radial solution to the bipolar isothermal hydrodynamic model for semiconductors. The electron density is subject to the sonic boundary condition and is interiorly subsonic, while the hole density is fully subsonic. It is difficult to estimate the upper and lower bounds of the hole density due to the coupling between electrons and holes and the degeneracy of the electron density at the boundary. Thus, we use the topological degree method to prove the well-posedness of the solution. We prove the ill-posedness of the subsonic solution under certain conditions by direct mathematical analysis and a contradiction argument. The ill-posedness property shows a significant difference from the unipolar model. Another highlight of this paper is the application of a specific energy method to obtain the uniqueness of the solution in two cases. One case is the relaxation time $\tau=\infty$, namely, the pure Euler-Poisson case; the other case is $\frac{j}{\tau}\ll 1$, which holds when the current flow is sufficiently small or the relaxation time is sufficiently large. author: - Siying Li$^{\rm 1}$, Ming Mei$^{\rm 2, 3}$, Kaijun Zhang$^{\rm 1}$ and Guojing Zhang$^{\rm 1, *}$ title: Subsonic steady-states for bipolar hydrodynamic model for semiconductors --- $^{\rm 1}$ School of Mathematics and Statistics, Northeast Normal University,\ Changchun, 130024, P.R.China.\ $^{\rm 2}$ Department of Mathematics, Champlain College Saint-Lambert,\ Saint-Lambert, Quebec, J4P 3P2, Canada.\ $^{\rm 3}$ Department of Mathematics and Statistics, McGill University,\ Montreal, Quebec, H3A 2K6, Canada. **Keywords:** hydrodynamic model, Euler-Poisson equations, radial solution, subsonic solution, steady-state. # Introduction The bipolar hydrodynamic (HD) model, proposed first by Bløtekjær in [@B1973], is commonly used to describe the charged fluid particles, such as electrons and holes, in semiconductor devices [@B1973; @J2001; @MRS1989]. It is written as the following system of Euler-Poisson equations: $$\left\{ \begin{aligned}\label{eq1} &\rho _t+{\rm div} (\rho \vec{u})=0,\\ &(\rho \vec{u})_t+{\rm div} (\rho \vec{u}\otimes\vec{u})+\nabla P_1(\rho)=\rho \vec{E}-\frac{\rho \vec{u}}{\tau},\\ &n _t+{\rm div} (n \vec{v})=0,\\ &(n\vec{v})_t+{\rm div} (n\vec{v}\otimes\vec{v})+\nabla P_2(n)=-n \vec{E}-\frac{n\vec{v}}{\tau},\\ &\nabla\cdot \vec{E}=\rho -n-b(x). \end{aligned} \right.$$ Here, $\rho$, $n$, $\vec{u}$, $\vec{v}$ and $\vec{E}$ represent the electron density, the hole density, the electron velocity, the hole velocity and the electric field, respectively. $P_1(\rho)$ and $P_2(n)$ are given functions which denote the pressure of electrons and the pressure of holes, respectively. When the system is isothermal, the pressure functions take the physical form $$\begin{aligned} \label{eq4} P_1(\rho)=T\rho,\,\,\,\, P_2(n)=Tn,\end{aligned}$$ where $T>0$ is the constant temperature. The function $b(x)>0$ is the doping profile denoting the density of impurities in semiconductor devices. The constant $\tau>0$ stands for the relaxation time.
The corresponding steady-state system of [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} is as follows $$\left\{ \begin{aligned}\label{eq3} &{\rm div} (\rho \vec{u})=0,\\ &{\rm div} (\rho \vec{u}\otimes\vec{u})+\nabla P_1(\rho)=\rho \vec{E}-\frac{\rho \vec{u}}{\tau},\\ &{\rm div} (n \vec{v})=0,\\ &{\rm div} (n\vec{v}\otimes\vec{v})+\nabla P_2(n)=-n \vec{E}-\frac{n\vec{v}}{\tau},\\ &\nabla\cdot \vec{E}=\rho -n-b(x). \end{aligned} \right.$$ There has been much research on stationary solutions to the HD model of semiconductors. In 1990, Degond and Markowich [@DM1990] first proved the existence of subsonic solutions in the one-dimensional case, and obtained the uniqueness of the solution for small electric current. Subsequently, much attention has been paid to steady subsonic flows with different boundary conditions, as well as to the higher-dimensional case, in [@BDC2014; @BDC2016; @DM1993; @GS2005; @H2011; @HMWY2011; @MWZ2021; @NS2007; @NS2009]. For supersonic flows, Peng and Violet [@PV2006] showed the existence and uniqueness of supersonic solutions with a strong supersonic background for the one-dimensional model. Bae et al [@Bae] extended this work to the two-dimensional case for the pure Euler-Poisson equations, namely, when the semiconductor effect is zero. Transonic flows have also been extensively studied in [@AMPS1991; @DZ2020; @G1992; @GM1996; @LRXX2011; @LX2012; @R2005]. Li-Mei-Zhang-Zhang [@LMZZ2017; @LMZZ2018] characterized in great depth the structure of all types of solutions with the sonic boundary when the doping profile is subsonic and supersonic, respectively. The sonic boundary condition means the system is degenerate, which gives the system a strong singularity. Chen-Mei-Zhang-Zhang [@CMZZ2020] extended the corresponding results to the transonic doping profile. Also, Chen et al [@CMZZ2021; @CMZZ2022; @CMZZ2023] studied the radial or the spiral radial subsonic, supersonic and transonic solutions in two- and three-dimensional spaces. Further, the existence and the regularity of smooth transonic solutions were also investigated by Wei et al in [@WMZZ2021]. Recently, Feng et al showed the structural stability of different types of solutions in [@FHM2022; @FMZ2023], respectively. Asymptotic limits of subsonic or sonic-subsonic solutions were studied in [@CLMZ2023; @CG2000; @P2002; @P2003; @PW2004]. However, the corresponding results for the bipolar model are very limited. For the isentropic case, Zhou and Li [@ZL2009] proved the existence and uniqueness of the stationary solution with Dirichlet boundary conditions in one-dimensional space when the doping profile is zero. For the isothermal case, Tsuge [@T2010] obtained the existence and uniqueness of the subsonic stationary solution in one-dimensional space when the electrostatic potential is small enough. Yu [@Y2017] studied the existence and uniqueness of the subsonic stationary solution with insulating boundary conditions by the calculus of variations. Recently, Mu-Mei-Zhang [@MMZ2020] used the topological degree method to prove the well-posedness and ill-posedness of stationary subsonic and supersonic solutions with the sonic boundary for electrons. The existence obtained in [@MMZ2020] relies strongly on the hole density being far from the sonic line, and the ill-posedness is only partially given, for the case where $\tau=\infty$ and the hole density is close enough to the sonic value. Moreover, the uniqueness of this kind of solution is still unclear and appears very difficult to prove. This shows that the bipolar model is significantly different from the unipolar model.
As is well known, the bipolar model is of greater importance in physical practice. In this paper, we are devoted to studying the well-posedness, ill-posedness and uniqueness of the radial solution to the steady-state system [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} in a 3-D hollow ball. Let us denote $$\begin{aligned} &r=|x|=\sqrt{{x_1}^2+{x_2}^2+{x_3}^2},\\ &\rho=\rho(|x|)=\rho(r),\,\,\,\,\,\, n=n(|x|)=n(r),\\ &\vec{u}=u(r)\cdot \frac{x}{r},\,\,\,\,\,\, \vec{v}=v(r)\cdot \frac{x}{r},\\ &\vec{E}=E(r)\cdot\frac{x}{r},\\ &b(x)=b(r),\\ &\vec{J}:=\rho\vec{u}=\rho(r)u(r)\cdot\frac{x}{r}=J(r)\cdot\frac{x}{r},\,\,\,\,\,\, \mbox{the current density of electrons},\\ &\vec{K}:=n\vec{v}=n(r)v(r)\cdot\frac{x}{r}=K(r)\cdot\frac{x}{r},\,\,\,\,\,\, \mbox{the current density of holes}. \end{aligned}$$ Then the system [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} becomes $$\left\{ \begin{aligned}\label{eq2} &J_r+\frac{2J}{r} =0, \mbox{namely}, J=\frac{j_1}{r^2},\\ &\frac{1}{2}\rho\left(\frac{J^2}{\rho^2}\right)_r+P_1(\rho)_r=\rho E-\frac{J}{\tau},\\ &K_r+\frac{2K}{r} =0, \mbox{namely}, K=\frac{j_2}{r^2},\\ &\frac{1}{2}n\left(\frac{K^2}{n^2}\right)_r+P_2(n)_r=-n E-\frac{K}{\tau},\\ &E_{r}+\frac{2}{r}E=\rho-n-b(r), \end{aligned} \right.$$ where $j_1$ and $j_2$ denote two constants. We can see that $r=0$ is a singular point, so we consider the system in a hollow ball, namely, $r \in [\varepsilon_0,1]$ for any given $\varepsilon_0\in(0,1)$, in this paper. According to the terminology from gas dynamics, by [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"} we call $c_e:=\sqrt{P_1^{'}(\rho)}=\sqrt{T}>0$ the sound speed of electrons and $c_h:=\sqrt{P_2^{'}(n)}=\sqrt{T}>0$ the sound speed of holes. Thus, the corresponding electron velocity $u$ and hole velocity $v$ of the system [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} are said to be subsonic (or sonic) if $$\begin{aligned} \label{eq5} u=\frac{|J|}{\rho}<(=)c_e=\sqrt{P_1^{'}(\rho)}=\sqrt{T}\,\, \mbox{and}\,\, v=\frac{|K|}{n}<(=)c_h=\sqrt{P_2^{'}(n)}=\sqrt{T}.\end{aligned}$$ Without loss of generality, we assume that $j_1=j$, $j_2=-j$ in [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}, and take $j=1$, $T=1$. Now dividing $\eqref{eq2}_2$ and $\eqref{eq2}_4$ by $\rho$ and $n$, respectively, we obtain $$\left\{ \begin{aligned}\label{eq7} &\frac{1}{2}\rho\left(\frac{1}{r^4\rho^2}\right)_r+\rho_r=\rho E-\frac{1}{\tau r^2},\\ &\frac{1}{2}n\left(\frac{1}{r^4 n^2}\right)_r+n_r=-nE+\frac{1}{\tau r^2},\\ &(r^2E)_r=r^2(\rho-n-b(r)). \end{aligned} \right.$$ For the sake of simplicity, we introduce two new variables $$\begin{aligned} \label{eq8} g(r):=r^2\rho(r),\,\,\,\, m(r):=r^2n(r) \end{aligned}$$ and define $$\begin{aligned} \label{eq9} B(r):=r^2b(r). \end{aligned}$$ Substituting [\[eq8\]](#eq8){reference-type="eqref" reference="eq8"}-[\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} into [\[eq7\]](#eq7){reference-type="eqref" reference="eq7"}, we have $$\left\{ \begin{aligned}\label{eq10} &\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\frac{1}{\tau g}-\frac{2}{r}=E,\\ &\left(\frac{1}{m}-\frac{1}{m^3}\right)m_r-\frac{1}{\tau m}-\frac{2}{r}=-E,\\ &\left(r^2E\right)_r=g-m-B(r).
\end{aligned} \right.$$ Now from [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"}, it is clear by a simple calculation (with $j=T=1$ we have $u=\frac{|J|}{\rho}=\frac{1/r^2}{g/r^2}=\frac{1}{g}$ and $v=\frac{|K|}{n}=\frac{1}{m}$) that the stationary flows of electrons and holes for [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} are subsonic (or sonic) if $$\begin{aligned} \label{eq11} u<(=)1,\,\,\, \mbox{or equivalently},\,\,\, g>(=)1 \end{aligned}$$ and $$\begin{aligned} \label{eq12} v<(=)1,\,\,\, \mbox{or equivalently},\,\,\, m>(=)1. \end{aligned}$$ In view of [\[eq11\]](#eq11){reference-type="eqref" reference="eq11"}-[\[eq12\]](#eq12){reference-type="eqref" reference="eq12"}, we impose the sonic boundary conditions on the electron density in [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, $$\begin{aligned} \label{eq13} g(\varepsilon_0)=g(1)=1, \end{aligned}$$ and a boundary condition on the hole density is prescribed as $$\begin{aligned} \label{eq14} m(\varepsilon_0)=\eta_0, \end{aligned}$$ where the value of $\eta_0$ will be specified later. Next, we study the well-posedness, ill-posedness and uniqueness of solutions to [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} with boundary conditions [\[eq13\]](#eq13){reference-type="eqref" reference="eq13"}-[\[eq14\]](#eq14){reference-type="eqref" reference="eq14"}, where the electron density is considered in the interiorly subsonic case (namely, $g>1$ on $(\varepsilon_0,1)$) and the hole density is considered in the fully subsonic case (namely, $m>1$ on $[\varepsilon_0,1]$). Multiplying $\eqref{eq10}_1$ and $\eqref{eq10}_2$ by $r^2$, taking the derivative with respect to $r$, and using $\eqref{eq10}_3$, we get $$\left\{ \begin{aligned}\label{eq15} &\left[r^2\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\frac{r^2}{\tau g}-2r\right]_r=g-m-B(r),\\ &\left[r^2\left(\frac{1}{m}-\frac{1}{m^3}\right)m_r-\frac{r^2}{\tau m}-2r\right]_r=m+B(r)-g,\\ &g(\varepsilon_0)=g(1)=1,m(\varepsilon_0)=\eta_0. \end{aligned} \right.$$ Adding equations $\eqref{eq15}_1$ and $\eqref{eq15}_2$, we see that the quantity in brackets has zero derivative and hence equals a constant, which we denote by $c$; dividing by $r^2$, we derive $$\begin{aligned} \left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\left (\frac{1}{m}-\frac{1}{m^3}\right)m_r+\frac{1}{\tau}\left(\frac{1}{g}-\frac{1}{m}\right)-\frac{4}{r}=\frac{c}{r^2}.\end{aligned}$$ Integrating the above equation on $[\varepsilon _0,1]$, we obtain $$\begin{aligned} \label{c1} w(m(1))-w(\eta_0)+\frac{1}{\tau}\int_{\varepsilon_0}^{1}\left(\frac{1}{g}-\frac{1}{m}\right)dr+4{\rm ln}\varepsilon_0=c\cdot\frac{1-\varepsilon_0}{\varepsilon_0},\end{aligned}$$ where $w(h):={\rm ln}h+\frac{1}{2h^2}$. On the other hand, adding $\eqref{eq10}_1$ and $\eqref{eq10}_2$ directly, we get $$\begin{aligned} \label{c2} \left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\left (\frac{1}{m}-\frac{1}{m^3}\right)m_r+\frac{1}{\tau}\left(\frac{1}{g}-\frac{1}{m}\right)-\frac{4}{r}=0. \end{aligned}$$ Therefore, the system [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} is equivalent to the system [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} if and only if $c=0$.
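Here and in what follows, such integrations are carried out by means of the elementary relation $$\frac{d}{dr}\,w(g(r))=\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r, \qquad w(h)={\rm ln}h+\frac{1}{2h^2},$$ and similarly for $m$; in particular, since $g(\varepsilon_0)=g(1)=1$, the electron contribution $w(g(1))-w(g(\varepsilon_0))$ vanishes, which is why only the hole terms $w(m(1))-w(\eta_0)$ appear in [\[c1\]](#c1){reference-type="eqref" reference="c1"}.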
Now let us define $\eta_1$ by $$\begin{aligned} \label{c3} w(\eta_1)-w(\eta_0)+\frac{1}{\tau}\int_{\varepsilon_0}^{1}\left(\frac{1}{g}-\frac{1}{m}\right)dr+4{\rm ln}\varepsilon_0=0.\end{aligned}$$ Therefore, the equations [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} and [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} are equivalent if and only if $$\begin{aligned} \label{y9} m(1)=\eta_1.\end{aligned}$$ Notice that the model [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} is degenerate at the sonic boundary $g(\varepsilon_0)=g(1)=1$; the solution of [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} therefore loses certain regularity and has to be understood in a weak form. Thus, following [@LMZZ2017], we give the following definition of weak solutions. **Definition 1**. *$(g(r), m(r))$ is called a pair of subsonic solutions, which means an interiorly subsonic solution $g(r)$ coupled with a fully subsonic solution $m(r)$ of system [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} with $g(\varepsilon_0)=g(1)=1$, $m(\varepsilon_0)=\eta_0$, such that $\left(g(r)-1\right)^2\in H_0^1(\varepsilon_0,1)$, $m(r)\in W^{2,\infty}(\varepsilon_0,1)$, and for any $\varphi\in H_0^1(\varepsilon_0,1)$ it holds that $$\begin{aligned} \label{eq16} \int_{\varepsilon_0}^{1}\left[r^2\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\frac{r^2}{\tau g}-2r\right]\cdot \varphi_rdr+\int_{\varepsilon_0}^{1}\left(g-m-B(r)\right)\varphi dr=0 \end{aligned}$$ and $$\begin{aligned} \label{eq17} \int_{\varepsilon_0}^{1}\left[r^2\left(\frac{1}{m}-\frac{1}{m^3}\right)m_r-\frac{r^2}{\tau m}-2r\right]\cdot \varphi_rdr+\int_{\varepsilon_0}^{1}\left(m+B(r)-g\right)\varphi dr=0. \end{aligned}$$* **Remark 1**. *The identity [\[eq16\]](#eq16){reference-type="eqref" reference="eq16"} is well-defined for $\left(g(r)-1\right)^2\in H_0^1(\varepsilon_0,1)$ and $\varphi\in H_0^1(\varepsilon_0,1)$, since $[(g-1)^2]_r=2(g-1)g_r$ gives $\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r=\frac{g+1}{2g^3}[(g-1)^2]_r$, so that [\[eq16\]](#eq16){reference-type="eqref" reference="eq16"} is equivalent to $$\begin{aligned} \int_{\varepsilon_0}^{1}\left[r^2\cdot\frac{g+1}{2g^3}((g-1)^2)_r+\frac{r^2}{\tau g}-2r\right]\cdot \varphi_rdr+\int_{\varepsilon_0}^{1}\left(g-m-B(r)\right)\varphi dr=0. \end{aligned}$$* *Once $(g(r),m(r))$ is obtained from equation [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"}, $(\rho(r),n(r))$ can be determined by [\[eq8\]](#eq8){reference-type="eqref" reference="eq8"}. According to [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, we can further recover the electric field $E(r)$: $$\begin{aligned} E(r)=\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r+\frac{1}{\tau g}-\frac{2}{r}=\frac{(g+1)[(g-1)^2]_r}{2g^3}+\frac{1}{\tau g}-\frac{2}{r}. \end{aligned}$$* *Therefore, solving [\[eq7\]](#eq7){reference-type="eqref" reference="eq7"} amounts to finding the solution to [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} satisfying [\[c3\]](#c3){reference-type="eqref" reference="c3"}-[\[y9\]](#y9){reference-type="eqref" reference="y9"}.* Throughout the paper we assume that the doping profile $B(r)\in L^\infty(\varepsilon_0,1)$ and denote $$\begin{aligned} \underline{B}:=\underset{r\in(\varepsilon_0,1)}{{\rm essinf}}\,\, B(r)\,\,\,\, \mbox{and}\,\,\,\, \bar{B}:=\underset{r\in(\varepsilon_0,1)}{{\rm esssup}}\,\, B(r).\end{aligned}$$ Our main results are stated below. **Theorem 1**.
*For any $B(r)\in L^\infty(\varepsilon_0,1)$, $\tau>0$ and $\underline{m}>1$, there exists a constant $\eta^\ast(\underline{m}, \bar{B}, \tau)>1$ which only depends on $\underline{m}, \bar{B}$ and $\tau$, such that for any $\eta_0\ge \eta^\ast$, there exists a pair of subsonic solutions $(g,m)\in C^{\frac{1}{2}}[\varepsilon_0,1]\times W^{2,\infty}(\varepsilon_0,1)$, with $m\ge \underline{m}+3$ on $[\varepsilon_0,1]$, to [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} with the conditions [\[c3\]](#c3){reference-type="eqref" reference="c3"}-[\[y9\]](#y9){reference-type="eqref" reference="y9"}.* **Theorem 2**. *For any $\bar{\eta}>1$, there exists $B^*=B^*(\bar{\eta})>1$ such that if $\eta_0<\bar{\eta}$ and $\underset{r\in(\alpha, \beta)}{{\rm inf}}B\ge B^*$, there is no subsonic solution to [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, [\[eq13\]](#eq13){reference-type="eqref" reference="eq13"}-[\[eq14\]](#eq14){reference-type="eqref" reference="eq14"}. Here $\alpha=\varepsilon_0+\frac{1-\varepsilon_0}{4}$, $\beta=\varepsilon_0+\frac{3(1-\varepsilon_0)}{4}$.* **Theorem 3**. *The solution obtained in Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"} for [\[eq7\]](#eq7){reference-type="eqref" reference="eq7"}, and thus for [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}, is unique when $\tau=\infty$ or when $\frac{j}{\tau}\ll 1$.* **Remark 2**. *1. In this paper, we focus on the well-posedness, ill-posedness and uniqueness of the radial solution to [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"} in a 3-D hollow ball. In fact, the results are also applicable to the 2-D case.* *2. We prove the ill-posedness of the solution to [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} by direct mathematical analysis and a contradiction argument. It is totally different from the result of ill-posedness in [@MMZ2020 Theorem 2.2]. The relaxation time $\tau$ is an arbitrarily given constant in this paper. Our result shows that the ill-posedness does not require $\eta_0$ close to $1$ or $\tau=+\infty$.* *3. We prove the uniqueness of the solution in two cases. For the case of $\tau=\infty$, namely, when the semiconductor effect is zero, we use a simple energy method to prove the uniqueness of the solution. For the second case, the main difficulty lies in the degeneracy of the electron density at the boundary and in the non-local terms caused by the coupling of electrons and holes. To handle this, we apply the method of exponential variation in [@GT2001 Theorem 10.7] and make specific modifications based on the degeneracy of electrons at the boundary.* This paper is arranged as follows. In Section 2, we show the well-posedness of the solution, utilizing the topological degree method, for the model [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} satisfying the boundary conditions [\[c3\]](#c3){reference-type="eqref" reference="c3"}-[\[y9\]](#y9){reference-type="eqref" reference="y9"}. In Section 3, we apply direct mathematical analysis and a contradiction argument to prove the ill-posedness of the solution to the system [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"} with boundary conditions [\[eq13\]](#eq13){reference-type="eqref" reference="eq13"}-[\[eq14\]](#eq14){reference-type="eqref" reference="eq14"} when the doping profile $B(r)$ is sufficiently large and the hole boundary value satisfies $\eta_0<\bar{\eta}$. In Section 4, we are devoted to proving the uniqueness of the solution in the two cases $\tau=\infty$ and $\frac{j}{\tau}\ll 1$.
# The well-posedness of subsonic solution Estimating the upper and lower bounds of the hole density is difficult due to the coupling of the system for electrons and holes. We apply the topological degree method, which is also used in [@MMZ2020], to prove the well-posedness of the solution to [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"}. To do this, we first consider the following approximate system $$\left\{ \begin{aligned}\label{yy41} &\left[r^2 \left( \frac{1}{g_k}-\frac{k}{g_k^3} \right)(g_k)_r \right]_r+ \left(\frac{r^2}{\tau g_k} \right)_r =g_k-m_k-B(r)+2,\\ &\left[r^2 \left( \frac{1}{m_k}-\frac{1}{m_k^3} \right)(m_k)_r \right]_r- \left(\frac{r^2}{\tau m_k} \right)_r =m_k+B(r)-g_k+2,\\ &g_k(\varepsilon _0)=g_k(1)=1,m_k(\varepsilon_0)=\eta _0,m_k(1)=\eta_1, \end{aligned} \right.$$ where $r\in [\varepsilon_0,1]$ and $k\in (0,1)$ is a constant. Here, $\eta_1$ satisfies [\[c3\]](#c3){reference-type="eqref" reference="c3"} with $g=g_k$ and $m=m_k$. **Lemma 1**. *For any $B(r)\in L^\infty(\varepsilon_0,1)$, $\tau>0$, $k\in(0,1)$ and $\underline{m}>1$, there exists a constant $\eta^\ast(\underline{m}, \bar{B}, \tau)>1$ which only depends on $\underline{m}, \bar{B}$ and $\tau$, such that for any $\eta_0\ge \eta^\ast$, the system [\[yy41\]](#yy41){reference-type="eqref" reference="yy41"} admits a pair of subsonic solutions $(g_k,m_k)\in W^{2,\infty}(\varepsilon_0,1)\times W^{2,\infty}(\varepsilon_0,1)$ with $m_k\ge \underline{m}+3$ on $[\varepsilon_0,1]$.* *Proof.* Define $$\begin{aligned} &U=\left \{(g,m)\in C[\varepsilon _0,1]\times C[\varepsilon _0,1]\right \},\\ &V=\left \{(g,m)\in U,\lambda <g<\Lambda,l<m<L\right \}, \end{aligned}$$ where $$\begin{aligned} \lambda =\frac{\sqrt{k}+1}{2},~\Lambda =\bar{m}+\bar{B}+\frac{2}{\tau}+3,~l=\underline{m}+2,~L=\bar{m}+2,\end{aligned}$$ and the value of $\bar{m}$ will be given later. $V$ is a bounded and open subset of $U$, and we have $$\begin{aligned} \partial V=\{(g,m)\in U,&\lambda \le g\le\Lambda,l\le m\le L,~\mbox{and}~\exists~x\in[\varepsilon _0,1],\\ &\mbox{s.t.}~g(x)=\lambda~\mbox{or}~g(x)=\Lambda~\mbox{or}~m(x)=l ~\mbox{or}~m(x)=L\}. \end{aligned}$$ For given $\tau>0$, $\bar{B}\ge 0$ and $\underline{m}>1$, there exists a unique $\eta^\ast>1$ such that $$\begin{aligned} \label{yy42} w(\eta^\ast)-w(\underline{m}+3)=\frac{4(1-\varepsilon_0)}{\tau}-8{\rm ln}\varepsilon_0+w_k(\underline{m} +\bar{B}+\frac{2}{\tau}+5)-\frac{k}{2},\end{aligned}$$ where the functions $w(h):={\rm ln}h+\frac{1}{2h^2}$ and $w_k(h):={\rm ln}h+\frac{k}{2h^2}$. We will prove that there exists a pair of subsonic solutions to equation [\[yy41\]](#yy41){reference-type="eqref" reference="yy41"} when $\eta_0\ge\eta^\ast$. For any $\eta_0\ge\eta^\ast$, it follows that $$\begin{aligned} \label{yy43} w(\eta_0)-w(\underline{m}+3)\ge \frac{4(1-\varepsilon_0)}{\tau}-8{\rm ln}\varepsilon_0+w_k(\underline{m} +\bar{B}+\frac{2}{\tau}+5)-\frac{k}{2}. \end{aligned}$$ Now we take $\bar{m}>\eta_0$ satisfying $$\begin{aligned} \label{yy44} w(\bar{m})-w(\eta_0)\ge \frac{4(1-\varepsilon_0)}{\tau}-12{\rm ln}\varepsilon_0\ge -4{\rm ln}\varepsilon_0-\frac{1}{\tau}\int_{\varepsilon_0}^{1}\left(\frac{1}{\hat{g}}-\frac{1}{\hat{m}}\right)dr,\end{aligned}$$ for any $(\hat{g},\hat{m})\in \bar{V}$.
There exists a unique $\hat{\eta}_1\in[\underline{m}+3,\bar{m}]$ such that $$\begin{aligned} \label{yy45} w(\hat{\eta}_1)-w(\eta_0)=-4{\rm ln}\varepsilon_0-\frac{1}{\tau}\int_{\varepsilon_0}^{1}\left(\frac{1}{\hat{g}}-\frac{1}{\hat{m}}\right)dr.\end{aligned}$$ Now define the operator $\Gamma:\bar{V}\to U$, that is, $(\hat{g},\hat{m})\mapsto (g_k,m_k)$, by solving the following linearized system $$\left\{ \begin{aligned}\label{yy46} &\left[r^2 \left( \frac{1}{\hat{g}}-\frac{k}{\hat{g}^3} \right) (g_k)_r \right]_r-\frac{r^2}{\tau \hat{g}^2}( g_k )_r=\hat{g}-\hat{m}-B(r)+2-\frac{2r}{\tau \hat{g}},\\ &\left[r^2 \left( \frac{1}{\hat{m}}-\frac{1}{\hat{m}^3} \right) (m_k)_r \right]_r+\frac{r^2}{\tau \hat{m}^2} ( m_k )_r=\hat{m}+B(r)-\hat{g}+2+\frac{2r}{\tau \hat{m}},\\ &g_k(\varepsilon _0)=g_k(1)=1,m_k(\varepsilon_0)=\eta _0,m_k(1)=\hat{\eta}_1, \end{aligned} \right.$$ where $\hat{\eta}_1$ is defined in [\[yy45\]](#yy45){reference-type="eqref" reference="yy45"}. According to the theory of linear elliptic equations, $\Gamma:\bar{V}\to U$ is a compact and continuous operator. Let $\hat{\varphi}=\left (\hat{g},\hat{m}\right)$ and $\varphi=\left (g_k,m_k\right)=\Gamma \hat{\varphi}$. Let $\Phi: \bar{V}\times [0,1]\to U$ be the compact and continuous operator defined by $\Phi (\hat{\varphi},\gamma)=\hat{\varphi}-\gamma \Gamma \hat{\varphi}=\hat{\varphi}-\gamma\varphi$. Obviously, if $\Phi(\hat{\varphi},1)=0$, then $\hat{\varphi}$ is a solution of [\[yy41\]](#yy41){reference-type="eqref" reference="yy41"}. We take $q=(1,\eta_0)\in V$ and let $p(\gamma)=(1-\gamma)q$, where $\gamma \in [0,1]$. If $p(\gamma)\notin \Phi (\partial V,\gamma)$ for any $\gamma \in [0,1]$, then according to topological degree theory, it holds that $$\begin{aligned} {\rm deg}\left ( \Phi(\cdot,1),V,0\right )={\rm deg}\left ( \Phi(\cdot,0),V,q\right )={\rm deg}\left ( {\rm id},V,q\right )=1, \end{aligned}$$ thus $\Phi(\hat{\varphi},1)=0$ gives a solution $\hat{\varphi} \in V$. Next, we only need to prove that $p(\gamma)\notin \Phi (\partial V,\gamma)$ for any $\gamma \in [0,1]$. Assume that there exist $\gamma \in [0,1]$ and $\hat{\varphi}\in\partial V$ such that $$\begin{aligned} p(\gamma)=(1-\gamma)q=\Phi (\hat{\varphi},\gamma)=\hat{\varphi}-\gamma \varphi. \end{aligned}$$ We will derive a contradiction. If $\gamma=0$, we have $\hat{\varphi}=q$, which is impossible since $q\in V$, $\hat{\varphi}\in \partial V$ and $V$ is an open subset of $U$. If $\gamma\in (0,1]$, we have $$\begin{aligned} \varphi=\frac{1}{\gamma}\hat{\varphi}-\frac{1-\gamma}{\gamma}q,\end{aligned}$$ that is, $(g_k)_r=\frac{1}{\gamma}\hat{g}_r$, $(m_k)_r=\frac{1}{\gamma}\hat{m}_r$; then by [\[yy46\]](#yy46){reference-type="eqref" reference="yy46"} we derive $$\left\{ \begin{aligned}\label{yy47} &\left[r^2 \left( \frac{1}{\hat{g}}-\frac{k}{\hat{g}^3} \right)\hat{g}_r \right]_r-\frac{r^2}{\tau \hat{g}^2}\hat{g}_r=\gamma \left( \hat{g}-\hat{m}-B(r)+2-\frac{2r}{\tau \hat{g}} \right),\\ &\left[r^2 \left( \frac{1}{\hat{m}}-\frac{1}{\hat{m}^3} \right)\hat{m}_r \right]_r+\frac{r^2}{\tau \hat{m}^2}\hat{m}_r=\gamma \left( \hat{m}+B(r)-\hat{g}+2+\frac{2r}{\tau \hat{m}} \right),\\ &\hat{g}(\varepsilon _0)=\hat{g}(1)=1,\hat{m}(\varepsilon_0)=\eta _0,\hat{m}(1)=\eta_0+\gamma \left (\hat{\eta}_1-\eta_0\right)=:\tilde{\eta}_1, \end{aligned} \right.$$ where $\tilde{\eta}_1$ is between $\eta_0$ and $\hat{\eta}_1$. Next, we prove that $\lambda<\hat{g}<\Lambda$ and $l<\hat{m}<L$. First, the upper and lower bounds of $\hat{g}$ are to be estimated.
Taking respectively $\omega_1=(\hat{g}-1)^-$ and $\omega_2=\left(\hat{g}-\left(\bar{m}+\bar{B}+\frac{2}{\tau}+2\right)\right)^+$ as the test function to $\eqref{yy47}_1$, and utilizing the standard weak maximum principle, we can obtain that $\omega_1\le 0$ and $\omega_2 \le 0$, which gives $$\begin{aligned} \label{yy48} \lambda<1\le \hat{g}\le \bar{m}+\bar{B}+\frac{2}{\tau}+2<\Lambda. \end{aligned}$$ Here and after $(h)^-:=-{\rm min}\left\{h,0\right\}$, $(h)^+:={\rm max}\left\{h,0\right\}$. Next, we will carefully estimate the upper and lower bounds of $\hat{m}$. From [\[yy47\]](#yy47){reference-type="eqref" reference="yy47"}, we have $$\left\{ \begin{aligned}\label{yy49} &\left [r^2\left (\frac{1}{\hat{g}}-\frac{k}{\hat{g}^3}\right)\hat{g}_r+\frac{r^2}{\tau \hat{g}}-2r\right]_r=\gamma (\hat{g}-\hat{m}-B(r))+(1-\gamma)\frac{2r}{\tau \hat{g}}+2(\gamma-1),\\ &\left [r^2\left (\frac{1}{\hat{m}}-\frac{1}{\hat{m}^3}\right)\hat{m}_r-\frac{r^2}{\tau \hat{m}}-2r\right]_r=\gamma (\hat{m}+B(r)-\hat{g})-(1-\gamma)\frac{2r}{\tau \hat{m}}+2(\gamma-1). \end{aligned} \right.$$ Adding $\eqref{yy49}_1$ to $\eqref{yy49}_2$, it yields $$\begin{aligned} &\left [r^2\left (\frac{1}{\hat{g}}-\frac{k}{\hat{g}^3}\right)\hat{g}_r+r^2\left (\frac{1}{\hat{m}}-\frac{1}{\hat{m}^3}\right)\hat{m}_r+\frac{r^2}{\tau \hat{g}}-\frac{r^2}{\tau \hat{m}}-4r\right]_r\nonumber\\ =&2(1-\gamma)r\cdot\frac{1}{\tau}\left (\frac{1}{\hat{g}}-\frac{1}{\hat{m}}\right)+4(\gamma-1). \end{aligned}$$ Integrating the above equation on $\left(\varepsilon _0,r\right)$ and dividing by $r^2$, then we obtain $$\begin{aligned} \label{yy50} &\left (\frac{1}{\hat{g}}-\frac{k}{\hat{g}^3}\right)\hat{g}_r+\left (\frac{1}{\hat{m}}-\frac{1}{\hat{m}^3}\right)\hat{m}_r+\frac{1}{\tau \hat{g}}-\frac{1}{\tau \hat{m}}-\frac{4}{r}\nonumber\\ =&\frac{c}{r^2}+\frac{1}{r^2} \int_{\varepsilon _0}^{r}\left [2(1-\gamma)s\cdot\frac{1}{\tau}\left (\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)+4(\gamma-1)\right]ds.\end{aligned}$$ Once again, integrating the above equation on $\left [\varepsilon _0,1\right ]$, by [\[yy45\]](#yy45){reference-type="eqref" reference="yy45"} we get $$\begin{aligned} \label{yy51} &c\int_{\varepsilon_0}^{1}\frac{1}{r^2}dr\nonumber\\ =&[w(\tilde{\eta}_1)-w(\hat{\eta}_1)]-\int_{\varepsilon_0}^{1}\frac{1}{r^2} \int_{\varepsilon _0}^{r}\left [2(1-\gamma)s\cdot\frac{1}{\tau}\left (\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)\right]dsdr-\int_{\varepsilon_0}^{1}\frac{1}{r^2}4(\gamma-1)dsdr\nonumber\\ =&:A_1+A_2+A_3.\end{aligned}$$ It is easy to calculate that $$\begin{aligned} \label{yy52} c\int_{\varepsilon_0}^{1}\frac{1}{r^2}dr=c \cdot \frac{1-\varepsilon _0}{\varepsilon _0} \end{aligned}$$ and $$\begin{aligned} \label{yy53} \left|A_1\right|&=\left|w(\tilde{\eta}_1)-w(\eta_0)-\left(w(\hat{\eta}_1)-w(\eta_0)\right)\right|\nonumber\\ &\le \left|w(\hat{\eta}_1)-w(\eta_0)\right|\nonumber\\ &=\left|\frac{1}{\tau}\int_{\varepsilon _0}^{1}\frac{1}{\hat{g}}-\frac{1}{\hat{m}}dr+4{\rm ln}\varepsilon_0\right|\nonumber\\ &\le \frac{1-\varepsilon_0}{\tau}-4{\rm ln}\varepsilon _0, \end{aligned}$$ due to $\tilde{\eta}_1$ is between $\eta_0$ and $\hat{\eta}_1$. 
Also, we have $$\begin{aligned} \label{yy54} \left|A_2\right|\le&\frac{1}{\tau}\int_{\varepsilon _0}^{1}\frac{1}{r^2}\cdot \left|\int_{\varepsilon_0}^{r}2(1-\gamma)s\left(\frac{1}{\hat{g}}-\frac{1}{\hat{m}}\right)ds\right|dr\nonumber\\ =&\frac{1-\gamma}{\tau}\int_{\varepsilon_0}^{1}\frac{1}{r^2}(r^2-\varepsilon_0^2)dr\le\frac{1-\varepsilon_0}{\tau}\end{aligned}$$ and $$\begin{aligned} \label{yy55} \left|A_3\right|\le\int_{\varepsilon _0}^{1}\frac{1}{r^2}\int_{\varepsilon_0}^{r}4dsdr=4\int_{\varepsilon _0}^{1}\frac{1}{r^2}(r-\varepsilon _0)dr\le4\int_{\varepsilon _0}^{1}\frac{1}{r}dr=-4{\rm ln}\varepsilon _0.\end{aligned}$$ Combining [\[yy51\]](#yy51){reference-type="eqref" reference="yy51"}-[\[yy55\]](#yy55){reference-type="eqref" reference="yy55"}, we get $$\begin{aligned} \label{yy56} \left|c\right|\le&\frac{\varepsilon _0}{1-\varepsilon _0}\left|A_1\right|+\frac{\varepsilon _0}{1-\varepsilon _0}\left|A_2\right|+\frac{\varepsilon _0}{1-\varepsilon _0}\left|A_3\right|\nonumber\\ \le&\frac{\varepsilon _0}{1-\varepsilon _0}\cdot\left(\frac{1-\varepsilon _0}{\tau}-4{\rm ln}\varepsilon _0\right)+\frac{\varepsilon _0}{1-\varepsilon _0}\cdot\frac{1-\varepsilon _0}{\tau}-\frac{\varepsilon _0}{1-\varepsilon _0}\cdot 4{\rm ln}\varepsilon _0\nonumber\\ =&\frac{2\varepsilon _0}{\tau}-\frac{8\varepsilon _0}{1-\varepsilon _0}{\rm ln}\varepsilon _0.\end{aligned}$$ Now we are ready to estimate the upper bound of $\hat{m}$. Integrating [\[yy50\]](#yy50){reference-type="eqref" reference="yy50"} on $(\varepsilon_0,r)$, we have $$\begin{aligned} \label{yy57} &w_k(\hat{g})-\frac{k}{2}+w(\hat{m})-w(\eta_0)+\frac{1}{\tau}\int_{\varepsilon_0}^{r}\frac{1}{\hat{g}(\xi)}-\frac{1}{\hat{m}(\xi)}d\xi-4{\rm ln}r+4{\rm ln}\varepsilon_0\nonumber\\ =&c\left(\frac{1}{\varepsilon_0}-\frac{1}{r}\right)+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}2(1-\gamma)s\cdot\frac{1}{\tau}\left(\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)dsd\xi+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4(\gamma-1)dsd\xi, \end{aligned}$$ and then $$\begin{aligned} \label{yy58} w(\hat{m})-w(\eta_0)=&\frac{k}{2}-w_k(\hat{g})-\frac{1}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{\hat{g}(\xi)}-\frac{1}{\hat{m}(\xi)}\right)d\xi+4{\rm ln}r-4{\rm ln}\varepsilon_0+c\left(\frac{1}{\varepsilon_0}-\frac{1}{r}\right)\nonumber\\ &+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}2(1-\gamma)s\cdot \frac{1}{\tau}\left(\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)dsd\xi+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4(\gamma-1)dsd\xi. \end{aligned}$$ Through a series of simple calculations, it yields $$\begin{aligned} \label{yy59} \left|\frac{1}{\tau}\int_{\varepsilon_0}^{r}\frac{1}{\hat{g}(\xi)}-\frac{1}{\hat{m}(\xi)}d\xi\right|\le\frac{1-\varepsilon_0}{\tau},\end{aligned}$$ $$\begin{aligned} \label{yy60} \left|\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}2(1-\gamma)s\cdot \frac{1}{\tau}\left(\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)dsd\xi\right| \le&\frac{1-\gamma}{\tau}\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}2s~dsd\xi\nonumber\\ =&\frac{1-\gamma}{\tau}\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}(\xi^2-\varepsilon _0^2)d\xi\nonumber\\ \le&\frac{1-\varepsilon_0}{\tau}\end{aligned}$$ and $$\begin{aligned} \label{yy61} \int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4(\gamma-1)dsd\xi \le 0. \end{aligned}$$ Notice that $w_k(\hat{g})\ge \frac{k}{2}$ due to $\hat{g}\ge 1$ on $[\varepsilon_0,1]$. 
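Indeed, for $k\in(0,1)$ and $h\ge 1$ one has $$w_k'(h)=\frac{1}{h}-\frac{k}{h^3}=\frac{h^2-k}{h^3}>0,$$ so $w_k$ is strictly increasing on $[1,\infty)$ with $w_k(1)=\frac{k}{2}$, and likewise $w'(h)=\frac{1}{h}-\frac{1}{h^3}>0$ for $h>1$; these monotonicity properties are what allow us to convert the inequalities on $w(\hat{m})$ and $w_k(\hat{g})$ below into pointwise bounds on $\hat{m}$ and $\hat{g}$.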
Substituting [\[yy56\]](#yy56){reference-type="eqref" reference="yy56"} and [\[yy59\]](#yy59){reference-type="eqref" reference="yy59"}-[\[yy61\]](#yy61){reference-type="eqref" reference="yy61"} into [\[yy58\]](#yy58){reference-type="eqref" reference="yy58"}, we derive $$\begin{aligned} \label{yy62} w(\hat{m})-w(\eta_0)\le&\frac{1-\varepsilon_0}{\tau}-4{\rm ln}\varepsilon_0+\left(\frac{2\varepsilon_0}{\tau}-\frac{8\varepsilon_0}{1-\varepsilon_0}{\rm ln}\varepsilon_0\right)\left(\frac{1-\varepsilon_0}{\varepsilon_0}\right)+\frac{1-\varepsilon_0}{\tau}\nonumber\\ \le&\frac{4(1-\varepsilon_0)}{\tau}-12{\rm ln}\varepsilon_0. \end{aligned}$$ So we obtain that $\hat{m}\le \bar{m}<L$ by [\[yy44\]](#yy44){reference-type="eqref" reference="yy44"} and [\[yy62\]](#yy62){reference-type="eqref" reference="yy62"}. Next, we estimate the lower bound of $\hat{m}$. From [\[yy57\]](#yy57){reference-type="eqref" reference="yy57"}, we have $$\begin{aligned} \label{yy63} w_k(\hat{g})-\frac{k}{2}=&w(\eta_0)-w(\hat{m})-\frac{1}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{\hat{g}(\xi)}-\frac{1}{\hat{m}(\xi)}\right)d\xi+4{\rm ln}r-4{\rm ln}\varepsilon_0+c\left(\frac{1}{\varepsilon_0}-\frac{1}{r}\right)\nonumber\\ &+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}2(1-\gamma)s\cdot \frac{1}{\tau}\left(\frac{1}{\hat{g}(s)}-\frac{1}{\hat{m}(s)}\right)dsd\xi+\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4(\gamma-1)dsd\xi.\end{aligned}$$ Note that $$\begin{aligned} \label{yy64} \left|\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4(\gamma-1)dsd\xi\right| \le&\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}\int_{\varepsilon_0}^{\xi}4~dsd\xi\nonumber\\ =&\int_{\varepsilon_0}^{r}\frac{1}{\xi^2}4(\xi-\varepsilon _0)d\xi\nonumber\\ \le&4\int_{\varepsilon_0}^{r}\frac{1}{\xi}d\xi\nonumber\\ =&4{\rm ln}r-4{\rm ln}\varepsilon_0. \end{aligned}$$ Thus, plugging [\[yy56\]](#yy56){reference-type="eqref" reference="yy56"}, [\[yy59\]](#yy59){reference-type="eqref" reference="yy59"}-[\[yy60\]](#yy60){reference-type="eqref" reference="yy60"} and [\[yy64\]](#yy64){reference-type="eqref" reference="yy64"} into [\[yy63\]](#yy63){reference-type="eqref" reference="yy63"} gives $$\begin{aligned} \label{yy65} &w_k(\hat{g})-\frac{k}{2}\nonumber\\ \ge &w(\eta_0)-w(\hat{m})-\frac{1-\varepsilon _0}{\tau}-\left(\frac{2\varepsilon_0}{\tau}-\frac{8\varepsilon_0}{1-\varepsilon_0}{\rm ln}\varepsilon_0\right ) \left(\frac{1-\varepsilon_0}{\varepsilon_0}\right)-\frac{1-\varepsilon _0}{\tau}\nonumber\\ \ge &w(\eta_0)-w(\hat{m})-\frac{4(1-\varepsilon _0)}{\tau}+8{\rm ln}\varepsilon_0. \end{aligned}$$ Define $\hat{\Omega}:=\left\{r\in [\varepsilon_0,1]:\hat{m}\le \underline{m}+3\right\}$. Since $\eta_0\ge \eta^\ast$, we have $$\begin{aligned} \label{yy66} w(\eta_0)-w(\hat{m})\ge w(\eta^\ast)-w(\underline{m}+3)\,\,\,\,\,\,\mbox{in}\,\,\,\hat{\Omega}. \end{aligned}$$ Thus $$\begin{aligned} \label{yy67} w_k(\hat{g})-\frac{k}{2}&\ge w(\eta^\ast)-w(\underline{m}+3)-\frac{4(1-\varepsilon _0)}{\tau}+8{\rm ln}\varepsilon_0\,\,\,\,\,\,\mbox{in}\,\,\,\hat{\Omega}. \end{aligned}$$ By [\[yy42\]](#yy42){reference-type="eqref" reference="yy42"} and [\[yy67\]](#yy67){reference-type="eqref" reference="yy67"}, we obtain $$\begin{aligned} \label{yy68} w_k(\hat{g})-\frac{k}{2}\ge w_k(\underline{m}+\bar{B}+\frac{2}{\tau}+5)-\frac{k}{2}\,\,\,\,\,\,\mbox{in}\,\,\,\hat{\Omega}.
\end{aligned}$$ Thus it can be deduced that $$\begin{aligned} \label{yy69} \hat{g}\ge \underline{m}+\bar{B}+\frac{2}{\tau}+5\,\,\,\,\,\,\mbox{in}\,\,\,\hat{\Omega}.\end{aligned}$$ To prove $\hat{m}\ge \underline{m}+3>l$, we take $(\hat{m}-(\underline{m}+3))^-$ as the test function. Multiplying $\eqref{yy47}_2$ by $(\hat{m}-(\underline{m}+3))^-$ and integrating by parts on $[\varepsilon_0,1]$, we obtain $$\begin{aligned} \label{yy70} &\int_{\varepsilon_0}^{1}r^2\left(\frac{1}{\hat{m}}-\frac{1}{\hat{m}^3}\right)\left|((\hat{m}-(\underline{m}+3))^-)_r\right|^2dr-\int_{\varepsilon_0}^{1}\frac{r^2}{\tau \hat{m}^2}(\hat{m}-(\underline{m}+3))^-\cdot (\hat{m}-(\underline{m}+3))_r^-dr\nonumber\\ =&\gamma \int_{\varepsilon_0}^{1}\left(\hat{m}+B(r)-\hat{g}+2+\frac{2r}{\tau \hat{m}}\right) \cdot (\hat{m}-(\underline{m}+3))^- dr\nonumber\\ =&\gamma \int_{\hat{\Omega}}\left(\hat{m}+B(r)-\hat{g}+2+\frac{2r}{\tau \hat{m}}\right) \cdot (\hat{m}-(\underline{m}+3))^- dr\nonumber\\ \le&\gamma\int_{\hat{\Omega}}\left(\hat{m}+B(r)-\left(\underline{m}+\bar{B}+\frac{2}{\tau}+5\right)+2+\frac{2r}{\tau \hat{m}}\right)\cdot(\hat{m}-(\underline{m}+3))^- dr\nonumber\\ \le&0. \end{aligned}$$ Since $\frac{1}{\hat{m}}-\frac{1}{\hat{m}^3}>0$ and $\left\|\frac{r^2}{\tau\hat{m}^2}\right\|_{L^\infty[\varepsilon_0,1]}$ is bounded, using the weak maximum principle in [@GT2001 Theorem 8.1], we ultimately obtain that $(\hat{m}-(\underline{m}+3))^-\equiv 0$, that is, $\hat{m}\ge \underline{m}+3>l$. In summary, we have proved that $\lambda<\hat{g}<\Lambda$ and $l<\hat{m}<L$ for $\eta_0\ge \eta^\ast$, which contradicts $(\hat{g},\hat{m})\in \partial V$. Utilizing the standard regularity theory and the discussions above, we infer that $(g_k,m_k)\in W^{2,\infty}(\varepsilon_0,1)\times W^{2,\infty}(\varepsilon_0,1)$ and $m_k\ge \underline{m}+3$. Finally, we apply the same method as in [@LMZZ2017 Lemma 2.3] to obtain $$\begin{aligned} \label{yy71} g_k(r)\ge 1+\bar{\nu}\, {\rm sin}\left(\pi \cdot\frac{r-\varepsilon_0}{1-\varepsilon_0}\right)>1, \end{aligned}$$ where $\bar{\nu}>0$ is a small constant independent of $k$. This completes the proof of Lemma [Lemma 1](#le1){reference-type="ref" reference="le1"}. ◻ Now, we are ready to prove Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"}.\ **The proof of Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"}**.  We apply the compactness method as in [@LMZZ2017] to get the solution of [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"}. Assume that $(g_k,m_k)$ is the solution of [\[yy41\]](#yy41){reference-type="eqref" reference="yy41"}. Multiplying the equation $\eqref{yy41}_1$ by $(g_k-1)$ and integrating by parts on $[\varepsilon_0,1]$, we have $$\begin{aligned} &(1-k^2)\int_{\varepsilon_0}^{1}r^2\frac{|(g_k)_r|^2}{(g_k)^3}dr+\frac{4}{9}\int_{\varepsilon_0}^{1}r^2\frac{g_k+1}{(g_k)^3}|((g_k-1)^\frac{3}{2})_r|^2dr\\ &\,+\frac{1}{\tau}\int_{\varepsilon_0}^{1}\frac{r^2(g_k)_r}{g_k}dr+\int_{\varepsilon_0}^{1}(g_k-m_k-B+2)(g_k-1)dr=0. \end{aligned}$$ Applying the standard energy estimate and the uniform boundedness of $g_k$ and $m_k$ in $L^\infty(\varepsilon_0,1)$, we derive $$\begin{aligned} \|((g_k-1)^\frac{3}{2})_r\|_{L^2[\varepsilon_0,1]}\le C, \end{aligned}$$ here and below $C$ denotes a generic constant independent of $k$. Since $((g_k-1)^2)_r=\frac{4}{3}(g_k-1)^\frac{1}{2}((g_k-1)^\frac{3}{2})_r$ and $g_k$ is bounded, we have $$\begin{aligned} \label{yy72} \|(g_k-1)^2\|_{H_0^1[\varepsilon_0,1]}\le C.
\end{aligned}$$ Now, multiplying $\eqref{yy41}_2$ by $(m_k-m_{\theta_k})$, where $m_{\theta_k}(r)=\eta_0+r(m_k(1)-\eta_0)$, and integrating it by parts on $[\varepsilon_0,1]$, we arrive at $$\begin{aligned} \int_{\varepsilon_0}^{1}r^2\frac{m_k^2-1}{m_k^3}|(m_k)_r|^2dr=&\int_{\varepsilon_0}^{1}r^2\frac{m_k^2-1}{m_k^3}(m_k)_r(m_{\theta_k})_rdr+\frac{1}{\tau}\int_{\varepsilon_0}^{1}\frac{1}{m_k}(m_k-m_{\theta_k})_rdr\\ &\,-\int_{\varepsilon_0}^{1}(m_k+B-g_k+2)(m_k-m_{\theta_k})dr. \end{aligned}$$ Using the boundedness of $g_k$ and $m_k$ again, we obtain that $\|(m_k)_r\|_{L^2[\varepsilon_0,1]}\le C$, and then $\|m_k\|_{H^1[\varepsilon_0,1]}\le C$. Consequently, we get $$\begin{aligned} \label{yy73} &(g_k-1)^2\rightharpoonup (g-1)^2~~\mbox{weakly~in}~~H_0^1[\varepsilon_0,1]~~\mbox{as}~~k\to 1,\\ &m_k\rightharpoonup m~~\mbox{weakly~in}~~H^1[\varepsilon_0,1]~~\mbox{as}~~k\to 1 \end{aligned}$$ and $$\begin{aligned} \label{yy74} ||(g-1)^2||_{H_0^1[\varepsilon_0,1]}\le C,~~||m||_{H^1[\varepsilon_0,1]}\le C. \end{aligned}$$ Using a similar method to that in [@LMZZ2017], we can prove that $(g,m)\in C^{\frac{1}{2}}[\varepsilon_0,1]\times W^{2,\infty}(\varepsilon_0,1)$ and $(g,m)$ is the weak solution of the system [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"}. At the same time, since $H^1(\varepsilon_0,1)$ is compactly embedded in $C[\varepsilon_0,1]$, we obtain that $g>1$ in $(\varepsilon_0,1)$, $m\ge \underline{m}+3$ in $[\varepsilon_0,1]$ and the conditions [\[c3\]](#c3){reference-type="eqref" reference="c3"}-[\[y9\]](#y9){reference-type="eqref" reference="y9"} all hold by [\[yy71\]](#yy71){reference-type="eqref" reference="yy71"} and $m_k\ge \underline{m}+3$. Therefore, we complete the proof of Theorem [Theorem 1](#th1){reference-type="ref" reference="th1"}.$\Box$ # The ill-posedness of subsonic solution In this section, we prove the ill-posedness of subsonic solution to [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}.\ **The proof of Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}**.  According to the model [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, we have $$\begin{aligned} \label{eq18} \left(\frac{1}{m}-\frac{1}{m^3}\right)m_r=-\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r-\frac{1}{\tau}\left(\frac{1}{g}-\frac{1}{m}\right)+\frac{4}{r}. \end{aligned}$$ Integrating [\[eq18\]](#eq18){reference-type="eqref" reference="eq18"} on $(\varepsilon_0,r)$, we derive $$\begin{aligned} \label{eq19} w(m)-w(\eta _0)=-w(g)+\frac{1}{2}-\frac{1}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{g}-\frac{1}{m}\right)dr+4{\rm ln}r-4{\rm ln}\varepsilon _0. \end{aligned}$$ Since $\left|\frac{1}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{g}-\frac{1}{m}\right)dr\right|\le \frac{1}{\tau}$, it holds that $$\begin{aligned} \label{eq20} w(m)-w(\eta _0)\le -w(g)+\frac{1}{2}+\frac{1}{\tau}-4{\rm ln}\varepsilon _0.\end{aligned}$$ Thus, for any fixed $\bar{\eta}>1$, it must hold for all $\eta_0<\bar{\eta}$ that $$\begin{aligned} w(m)-w(\eta_0)>w(1)-w(\bar{\eta})>-w(\bar{\eta}).\end{aligned}$$ Hence, it holds that $$\begin{aligned} \label{g1} -w(g)+\frac{1}{2}+\frac{1}{\tau}-4{\rm ln}\varepsilon_0>-w(\bar{\eta}), \end{aligned}$$ which gives a restriction on the upper bound of $w(g)$, and thus a restriction on the upper bound of $g$. That is, $w(g)<w(\bar{\eta})+\frac{1}{2}+\frac{1}{\tau}-4{\rm ln}\varepsilon_0$. This gives the idea of the proof. Let $\bar{g}=\bar{g}(\bar{\eta})>1$ such that $w(\bar{g})=w(\bar{\eta})+\frac{1}{2}+\frac{1}{\tau}-4{\rm ln}\varepsilon_0$.
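Before proceeding, we briefly note why such a $\bar{g}$ exists and is unique; this is a short check using the explicit form $w(h)={\rm ln} h+\frac{1}{2h^2}$ recalled below: $$\begin{aligned} w(1)=\frac{1}{2},\qquad w'(h)=\frac{1}{h}-\frac{1}{h^3}=\frac{h^2-1}{h^3}>0\,\,\,\,\,\,\mbox{for}\,\,\,h>1,\qquad w(h)\to+\infty\,\,\,\,\,\,\mbox{as}\,\,\,h\to+\infty, \end{aligned}$$ while $w(\bar{\eta})+\frac{1}{2}+\frac{1}{\tau}-4{\rm ln}\varepsilon_0>\frac{1}{2}=w(1)$ since $\bar{\eta}>1$ and $0<\varepsilon_0<1$, so the strictly increasing function $w$ attains this value at exactly one point $\bar{g}>1$.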
So it holds that $w(g)<w(\bar{g})$, and then $g<\bar{g}$, for any $r\in[\varepsilon_0,1]$. Substituting $w(h)={\rm ln} h+\frac{1}{2h^2}$ into the model $\eqref{eq15}_1$, it can be rewritten as $$\left\{ \begin{aligned}\label{equa4} &\left[r^2w_r(g(r))\right]_r+\left(\frac{r^2}{\tau g}\right)_r=g-m-B(r)+2,\\ &w(g(\varepsilon _0))=w(g(1))=\frac{1}{2}. \end{aligned} \right.$$ Assume that $p$ is the maximum point of $w(g(r))$ and $p$ lies in the right half interval of $[\varepsilon_0, 1]$ without loss of generality, that is, $p\in \left(\varepsilon_0+\frac{1-\varepsilon_0}{2}, 1\right)$. Taking any point $q\in \left(\varepsilon_0,\varepsilon_0+\frac{1-\varepsilon_0}{4}\right)$, we have $p-q>\frac{1-\varepsilon_0}{4}$ and $w_r(g(p))=0$. Since $w\in C^{1,\frac{1}{2}}[\varepsilon_0,1]\cap W^{2,1}[\varepsilon_0,1]$ (see [@LMZZ2017]), for any $q\in \left(\varepsilon_0,\varepsilon_0+\frac{1-\varepsilon_0}{4}\right)$ we can integrate $\eqref{equa4}_1$ on $(q,p)$ to obtain $$\begin{aligned} \label{eq22} w_r(g(q))&=-\frac{1}{q^2}\int_{q}^{p}(g-m-B(s)+2)ds+\frac{1}{\tau q^2}\left(\frac{p^2}{g(p)}-\frac{q^2}{g(q)}\right)\nonumber\\ &\ge \frac{1}{q^2}\int_{q}^{p}(B(s)+m-g-2)ds-\frac{1}{\tau}\nonumber\\ &\ge \frac{1}{q^2}\int_{q}^{p}(B(s)-(\bar{g}+2))ds-\frac{1}{\tau}\nonumber\\ &\ge \frac{1}{q^2}\left(\int_{q}^{p}B(s)ds-(\bar{g}+2)\right)-\frac{1}{\tau}\nonumber\\ &\ge \frac{1}{q^2}\left(\frac{1-\varepsilon_0}{4}B^\ast-(\bar{g}+2)\right)-\frac{1}{\tau},\end{aligned}$$ where $\underset{r\in(\alpha,\beta)}{{\rm inf}}B=B^\ast$, and $\alpha=\varepsilon_0+\frac{1-\varepsilon_0}{4},\beta=\varepsilon_0+\frac{3(1-\varepsilon_0)}{4}$. Based on this, we have $$\begin{aligned} \label{eq23} &w(g(\varepsilon_0+\frac{1-\varepsilon_0}{4}))-w(g(\varepsilon_0))\nonumber\\ =&\int_{\varepsilon_0}^{\varepsilon_0+\frac{1-\varepsilon_0}{4}}w_r(g(s))ds\nonumber\\ \ge& \int_{\varepsilon_0}^{\varepsilon_0+\frac{1-\varepsilon_0}{4}}\left[\frac{1}{s^2}\left(\frac{1-\varepsilon_0}{4}B^\ast-(\bar{g}+2)\right)-\frac{1}{\tau}\right]ds\nonumber\\ =&\frac{1-\varepsilon_0}{(3\varepsilon_0+1)\varepsilon_0}\left(\frac{1-\varepsilon_0}{4}B^\ast-(\bar{g}+2)\right)-\frac{1-\varepsilon_0}{4\tau}.\end{aligned}$$ Take $$\begin{aligned} \label{g4} B^\ast=B^\ast(\bar{\eta}):=\frac{(3\varepsilon_0+1)4\varepsilon_0}{(1-\varepsilon_0)^2}w(\bar{g})+\frac{(3\varepsilon_0+1)\varepsilon_0}{(1-\varepsilon_0)\tau}+\frac{4(\bar{g}+2)}{1-\varepsilon_0}.\end{aligned}$$ Then, for $\underset{r\in(\alpha,\beta)}{{\rm inf}}B\ge B^\ast(\bar{\eta})$, we derive $$\begin{aligned} \label{ww1} w(g(\varepsilon_0+\frac{1-\varepsilon_0}{4}))-w(g(\varepsilon_0))\ge w(\bar{g}),\end{aligned}$$ which contradicts $g<\bar{g}(\bar{\eta})$ and $w(g)<w(\bar{g})$. This completes the proof of Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}.$\Box$ # The uniqueness of subsonic solution ## The uniqueness of subsonic solution when $\tau=\infty$ In this subsection, we use the direct energy method to prove the uniqueness of the subsonic solution when the relaxation time $\tau=\infty$, namely, the pure Euler-Poisson case.
The model becomes $$\left\{ \begin{aligned}\label{yy1} &\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r-\frac{2}{r}=E,\\ &\left(\frac{1}{m}-\frac{1}{m^3}\right)m_r-\frac{2}{r}=-E,\\ &\left(r^2E\right)_r=g-m-B(r),\\ &g(\varepsilon_0)=g(1)=1,m(\varepsilon_0)=\eta_0, \end{aligned} \right.$$ and its equivalent equation is $$\left\{ \begin{aligned}\label{yy2} &\left[r^2\left(\frac{1}{g}-\frac{1}{g^3}\right)g_r\right]_r =g-m-B(r)+2,\\ &\left(\frac{1}{m}-\frac{1}{m^3}\right)m_r-\frac{2}{r}=-E,\\ &g(\varepsilon_0)=g(1)=1,m(\varepsilon_0)=\eta_0. \end{aligned} \right.$$ **The proof of Theorem [Theorem 3](#th3){reference-type="ref" reference="th3"} for the case of $\tau=\infty$**. Assume that $(g_1,m_1)$ and $(g_2,m_2)$ are two pairs of solutions to the system [\[yy1\]](#yy1){reference-type="eqref" reference="yy1"}. Then, by [\[yy2\]](#yy2){reference-type="eqref" reference="yy2"}, it holds that $$\begin{aligned} \label{yy3} (r^2(w(g_1)-w(g_2))_r)_r=(g_1-g_2)-(m_1-m_2). \end{aligned}$$ Taking $(w(g_1)-w(g_2))^+$ as the test function, multiplying [\[yy3\]](#yy3){reference-type="eqref" reference="yy3"} by $(w(g_1)-w(g_2))^+$ and integrating by parts on $[\varepsilon_0,1]$, we derive $$\begin{aligned} \label{yy4} &-\int_{\varepsilon_0}^{1}r^2\left|(w(g_1)-w(g_2))^{+}_r\right|^2dr\nonumber\\ =&\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot (w(g_1)-w(g_2))^{+}dr-\int_{\varepsilon_0}^{1}(m_1-m_2)\cdot (w(g_1)-w(g_2))^{+}dr\nonumber\\ =&:B_1+B_2. \end{aligned}$$ According to the monotonicity of the function $w$, it holds that $B_1\ge 0$. By $\eqref{yy1}_1$ and $\eqref{yy1}_2$, we have $$\begin{aligned} (w(g))_r-\frac{4}{r}=-(w(m))_r.\end{aligned}$$ Integrating the above equation on $(\varepsilon_0,r)$ and collecting $(g_1,m_1)$ and $(g_2,m_2)$, we get $$\begin{aligned} \label{yy5} w(g_1)-w(g_2)=w(m_2)-w(m_1). \end{aligned}$$ When $w(g_1)\ge w(g_2)$, we naturally have $w(m_2)\ge w(m_1)$. Furthermore, based on the monotonicity of the function $w$ again, it holds that $B_2\ge 0$. Hence, we have $$\begin{aligned} \label{yy6} -\int_{\varepsilon_0}^{1}r^2\left|(w(g_1)-w(g_2))^{+}_r\right|^2dr\ge 0.\end{aligned}$$ That is, $$\begin{aligned} \label{yy7} \left\|(w(g_1)-w(g_2))_r^+\right\|_{L^2[\varepsilon_0,1]}=0.\end{aligned}$$ Therefore, we ultimately obtain that $\left\|(w(g_1)-w(g_2))^+\right\|_{L^2[\varepsilon_0,1]}=0$ by the Poincaré inequality, that is, $g_1\le g_2$ on $[\varepsilon_0,1]$. Similarly, taking $(w(g_1)-w(g_2))^-$ as the test function, multiplying [\[yy3\]](#yy3){reference-type="eqref" reference="yy3"} by $(w(g_1)-w(g_2))^-$ and performing the same procedure as above, we get $g_1\ge g_2$ on $[\varepsilon_0,1]$. Thus, we have proved the uniqueness of the subsonic solution to the system [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} when $\tau=\infty$.$\Box$ ## The uniqueness of subsonic solution when $\frac{j}{\tau}\ll 1$ In this subsection, we apply the method of exponential variation in [@GT2001 Theorem 10.7] and make specific modifications to prove the uniqueness of the subsonic solution when $\frac{j}{\tau}\ll 1$.
Here, we take $j_1=j$ and $j_2=-j$ in [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"}, and then the model becomes $$\left\{ \begin{aligned}\label{yy8} &\left(\frac{1}{g}-\frac{j^2}{g^3}\right)g_r+\frac{j}{\tau}\frac{1}{g}-\frac{2}{r}=E,\\ &\left(\frac{1}{m}-\frac{j^2}{m^3}\right)m_r-\frac{j}{\tau}\frac{1}{m}-\frac{2}{r}=-E,\\ &\left(r^2E\right)_r=g-m-B(r),\\ &g(\varepsilon_0)=g(1)=1,m(\varepsilon_0)=\eta_0, \end{aligned} \right.$$ and its equivalent equation is $$\left\{ \begin{aligned}\label{yy9} &\left[r^2\left(\frac{1}{g}-\frac{j^2}{g^3}\right)g_r\right]_r=g-m-B(r)+2-\frac{j}{\tau}\left(\frac{r^2}{g}\right)_r,\\ &\left(\frac{1}{m}-\frac{j^2}{m^3}\right)m_r-\frac{j}{\tau}\frac{1}{m}-\frac{2}{r}=-E,\\ &g(\varepsilon_0)=g(1)=1,m(\varepsilon_0)=\eta_0. \end{aligned} \right.$$ The method in Section 2 can still be applied to [\[yy8\]](#yy8){reference-type="eqref" reference="yy8"}, so the subsonic solution also exists. Next, we focus on the uniqueness of subsonic solution for system [\[yy8\]](#yy8){reference-type="eqref" reference="yy8"}.\ **The proof of Theorem [Theorem 3](#th3){reference-type="ref" reference="th3"} for the case of $\frac{j}{\tau}\ll 1$**. Suppose that $(g_1,m_1)$ and $(g_2,m_2)$ are two pairs of solutions of [\[yy8\]](#yy8){reference-type="eqref" reference="yy8"}. Then, from [\[yy9\]](#yy9){reference-type="eqref" reference="yy9"}, it holds that $$\begin{aligned} \label{yy10} (r^2(\mathcal{F}(g_1)-\mathcal{F}(g_2))_r)_r=(g_1-g_2)-(m_1-m_2)-\frac{j}{\tau}\left(\frac{r^2}{g_1}-\frac{r^2}{g_2}\right)_r, \end{aligned}$$ where $\mathcal{F}(h):={\rm ln}h+\frac{j^2}{2h^2}$. We define $V:=\mathcal{F}(g_1)-\mathcal{F}(g_2)$ and take $\varphi_h:=\frac{V^+}{V^{+}+h}$ as the test function according to the idea of the comparison principle in [@GT2001 Theorem 10.7], where $h>0$. Then we can obtain $(\varphi_h)_r=\frac{h}{(V^++h)^2}V^+_r$ and $\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r=\frac{V^+_r}{V^{+}+h}$. Multiplying the equation [\[yy10\]](#yy10){reference-type="eqref" reference="yy10"} by $\varphi_h$ and integrating by parts on $[\varepsilon_0,1]$, we obtain $$\begin{aligned} \label{yy11} &h\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr\nonumber\\ =&\frac{j}{\tau}\int_{\varepsilon_0}^{1}\frac{r^2(g_1-g_2)}{g_1g_2}\cdot\frac{h}{(V^++h)^2}\cdot V^+_rdr+\int_{\varepsilon_0}^{1}(m_1-m_2)\cdot \frac{V^+}{V^++h}dr\nonumber\\ =&:I_1+I_2. \end{aligned}$$ First we deal with $\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr$. Assume that $(g_1-g_2)^+\not\equiv 0$; then we have $$\begin{aligned} \label{yy12} \underset{h\to0^+}{{\rm lim}}\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}=\left\|(g_1-g_2)^+\right\|_{L^1[\varepsilon_0,1]}>M>0,\end{aligned}$$ where $M$ denotes a positive constant. Here and below we use $\left\|\cdot\right\|_{L^p}$ to denote $\left\|\cdot\right\|_{L^p[\varepsilon_0,1]}$ for simplicity. Next, for $I_1$ it holds that $$\begin{aligned} \label{yy13} I_1\le \frac{j}{\tau}\int_{\varepsilon_0}^{1}r^2\left|\frac{h(g_1-g_2)}{V^++h}\right|\cdot\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr. \end{aligned}$$ Since $g_1> 1, g_2> 1$ in $(\varepsilon_0,1)$ and $g_1=g_2=1$ at $r=\varepsilon_0$ and $r=1$, the quantity $\left|\frac{g_1-g_2}{V^++h}\right|$ becomes unbounded as $h\to0^+$ when $j=1$ for $r$ near $\varepsilon_0$ or $1$.
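One way to see this degeneracy is the following heuristic sketch, based on the Taylor expansion of $\mathcal{F}$ at $h=1$ (the same expansion is used below in [\[yy16\]](#yy16){reference-type="eqref" reference="yy16"}): when $j=1$ we have $\mathcal{F}'(1)=1-j^2=0$ and $\mathcal{F}''(1)=3j^2-1=2$, so near a boundary point, where $g_1=g_2=1$, $$\begin{aligned} V=\mathcal{F}(g_1)-\mathcal{F}(g_2)\approx (g_1-1)^2-(g_2-1)^2=(g_1-g_2)(g_1+g_2-2), \end{aligned}$$ and hence $\frac{g_1-g_2}{V}\approx\frac{1}{g_1+g_2-2}$, which is unbounded as $r$ approaches $\varepsilon_0$ or $1$.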
For this reason, the comparison principle in [@GT2001 Theorem 10.7] cannot be directly applied in the following proof. Define the set $D:=\left\{r\in[\varepsilon_0,1]\,\big|\,\left|\frac{g_1-g_2}{V^++h}\right|\le \mathcal{C}\right\}$, where $\mathcal{C}>0$ is to be determined. Using the Young inequality, we have $$\begin{aligned} \label{yy14} I_1&\le\frac{j}{\tau} h\int_{D}r^2\left|\frac{g_1-g_2}{V^++h}\right|\cdot\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,+\frac{j}{\tau}\int_{D^c}r^2h\left|\frac{g_1-g_2}{V^++h}\right|\cdot\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr\nonumber\\ &\le\frac{j}{\tau} h\mathcal{C}\int_{D}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,+\frac{j}{\tau}\int_{D^c}r^2h\cdot\left[\frac{1}{2\varepsilon}\left(\frac{g_1-g_2}{V^++h}\right)^2+\frac{\varepsilon}{2}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2\right]dr,\end{aligned}$$ where $\varepsilon$ is a parameter, and $D^c=[\varepsilon_0,1]\setminus D$. Taking $\varepsilon=\frac{\tau}{j}$, it follows from [\[yy14\]](#yy14){reference-type="eqref" reference="yy14"} that $$\begin{aligned} \label{yy15} I_1&\le \frac{j}{\tau}h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{j^2}{2\tau^2}\int_{D^c}r^2h\cdot\left(\frac{g_1-g_2}{V^++h}\right)^2dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,+\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr\nonumber\\ &\le \frac{j}{\tau} h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{j^2}{2\tau^2}\int_{D^c}r^2\left|\frac{(g_1-g_2)^2}{V^++h}\right|dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,+\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr.\end{aligned}$$ According to the Taylor expansion, we have $\mathcal{F}(g_i)=\frac{j^2}{2}+\mathcal{F}'(1)(g_i-1)+\frac{{\mathcal{F}}''(1)}{2}(g_i-1)^2+o\left((g_i-1)^2\right)$ for $i=1,2$, and hence $$\begin{aligned} \label{yy16} \left|\frac{(g_1-g_2)^2}{V^++h}\right|\le \left|\frac{(g_1-g_2)^2}{V^+}\right|\le \left|\frac{(g_1-g_2)^2}{\mathcal{F}(g_1)-\mathcal{F}(g_2)}\right|<C,\end{aligned}$$ uniformly for $r\in[\varepsilon_0,1]$ and $h>0$. Hence we can choose a constant $\mathcal{C}\gg 1$ such that $|D^c|\ll 1$ and $$\begin{aligned} \label{yy17} \frac{j^2}{2\tau^2}\int_{D^c}r^2\left|\frac{(g_1-g_2)^2}{V^++h}\right|dr\le \frac{1}{2}\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr. \end{aligned}$$ Combining [\[yy11\]](#yy11){reference-type="eqref" reference="yy11"}-[\[yy17\]](#yy17){reference-type="eqref" reference="yy17"}, we obtain $$\begin{aligned} \label{yy18} &\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_1-g_2)\frac{V^+}{V^++h}dr\nonumber\\ \le &\frac{j}{\tau}h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+I_2.\end{aligned}$$ Now we give a detailed calculation of $I_2$, which is more delicate. Adding $\eqref{yy8}_1$ to $\eqref{yy8}_2$ yields $$\begin{aligned} \label{yy19} (\mathcal{F}(g))_r+\frac{j}{\tau}\left(\frac{1}{g}-\frac{1}{m}\right)-\frac{4}{r}=-(\mathcal{F}(m))_r.
\end{aligned}$$ Integrating [\[yy19\]](#yy19){reference-type="eqref" reference="yy19"} on $(\varepsilon_0,r)$ and collecting $(g_1,m_1)$ and $(g_2,m_2)$, we derive $$\begin{aligned} \label{yy20} &\mathcal{F}(m_1)-\mathcal{F}(m_2)\nonumber\\ =&\mathcal{F}(g_2)-\mathcal{F}(g_1)-\frac{j}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{g_1(s)}-\frac{1}{g_2(s)}\right)ds+\frac{j}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{m_1(s)}-\frac{1}{m_2(s)}\right)ds,\end{aligned}$$ and thus $$\begin{aligned} \label{yy21} |\mathcal{F}(m_1)-\mathcal{F}(m_2)|\le |\mathcal{F}(g_1)-\mathcal{F}(g_2)|+\frac{j}{\tau}||g_1-g_2||_{L^1}+\frac{j}{\tau}||m_1-m_2||_{L^1}.\end{aligned}$$ Integrating [\[yy21\]](#yy21){reference-type="eqref" reference="yy21"} on $[\varepsilon_0,1]$ yields $$\begin{aligned} \label{yy23} ||\mathcal{F}(m_1)-\mathcal{F}(m_2)||_{L^1}-\frac{j}{\tau}||m_1-m_2||_{L^1}\le||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}+\frac{j}{\tau}||g_1-g_2||_{L^1}. \end{aligned}$$ Since $m_1,m_2\in [\underline{m}+3,\bar{m}]$, there exists a constant $C_1\ge 1$ such that $$\begin{aligned} \label{yy24} \frac{1}{C_1}||\mathcal{F}(m_1)-\mathcal{F}(m_2)||_{L^1}\le ||m_1-m_2||_{L^1}\le C_1||\mathcal{F}(m_1)-\mathcal{F}(m_2)||_{L^1}.\end{aligned}$$ When $\frac{j}{\tau}\le\frac{1}{2C_1}$, we have $$\begin{aligned} \label{yy25} ||\mathcal{F}(m_1)-\mathcal{F}(m_2)||_{L^1}\le 2||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}+\frac{2j}{\tau}||g_1-g_2||_{L^1}.\end{aligned}$$ Note that $$\begin{aligned} \label{yy26} I_2=\int_{\varepsilon_0}^{1}\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\cdot(\mathcal{F}(m_1)-\mathcal{F}(m_2))\cdot\frac{V^+}{V^++h}dr, \end{aligned}$$ where the quotient $\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}$ is well defined, because if $\mathcal{F}(m_1)-\mathcal{F}(m_2)=0$, then $m_1-m_2=0$ due to the monotonicity of $\mathcal{F}$. Moreover, there exists a constant $C_2\ge 1$ such that $\frac{1}{C_2}\le\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\le C_2$. Plugging [\[yy20\]](#yy20){reference-type="eqref" reference="yy20"} into [\[yy26\]](#yy26){reference-type="eqref" reference="yy26"} and using [\[yy24\]](#yy24){reference-type="eqref" reference="yy24"}-[\[yy25\]](#yy25){reference-type="eqref" reference="yy25"}, it holds that $$\begin{aligned} \label{yy27} I_2&\le\int_{\varepsilon_0}^{1}\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\cdot(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^+}{V^++h}dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\int_{\varepsilon_0}^{1}\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\cdot|\psi(\tau,r)|\cdot\left|\frac{V^+}{V^++h}\right|dr\nonumber\\ &\le \int_{\varepsilon_0}^{1}\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\cdot(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^+}{V^++h}dr\nonumber\\ &\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\left(\frac{2j^2}{\tau^2}C_1C_2+\frac{j}{\tau} C_2\right)\cdot||g_1-g_2||_{L^1}+\frac{2j}{\tau} C_1C_2\cdot||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1},\end{aligned}$$ where $\psi(\tau,r):=-\frac{j}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{g_1(s)}-\frac{1}{g_2(s)}\right)ds+\frac{j}{\tau}\int_{\varepsilon_0}^{r}\left(\frac{1}{m_1(s)}-\frac{1}{m_2(s)}\right)ds$ and $\frac{j}{\tau}\le \frac{1}{2C_1}$.
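For clarity, we briefly indicate how the constants in the last line of [\[yy27\]](#yy27){reference-type="eqref" reference="yy27"} arise; this is a short bookkeeping step using $g_1,g_2,m_1,m_2>1$, $\left|\frac{V^+}{V^++h}\right|\le 1$, $\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\le C_2$, $1-\varepsilon_0\le 1$, [\[yy24\]](#yy24){reference-type="eqref" reference="yy24"} and [\[yy25\]](#yy25){reference-type="eqref" reference="yy25"}: $$\begin{aligned} |\psi(\tau,r)|\le\frac{j}{\tau}\left(||g_1-g_2||_{L^1}+||m_1-m_2||_{L^1}\right)\le\frac{j}{\tau}||g_1-g_2||_{L^1}+\frac{j}{\tau}C_1\left(2||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}+\frac{2j}{\tau}||g_1-g_2||_{L^1}\right),\end{aligned}$$ so that $$\begin{aligned} \int_{\varepsilon_0}^{1}\frac{m_1-m_2}{\mathcal{F}(m_1)-\mathcal{F}(m_2)}\cdot|\psi(\tau,r)|\cdot\left|\frac{V^+}{V^++h}\right|dr\le C_2\underset{r\in[\varepsilon_0,1]}{{\rm sup}}|\psi(\tau,r)|\le\left(\frac{2j^2}{\tau^2}C_1C_2+\frac{j}{\tau} C_2\right)||g_1-g_2||_{L^1}+\frac{2j}{\tau} C_1C_2||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}.\end{aligned}$$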
Substituting [\[yy27\]](#yy27){reference-type="eqref" reference="yy27"} into [\[yy18\]](#yy18){reference-type="eqref" reference="yy18"}, we derive $$\begin{aligned} \label{yy28} &\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot\frac{V^+}{V^++h}dr\nonumber\\ &\,\,\,\,+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_1)-\mathcal{F}(g_2))\cdot\frac{V^+}{V^++h}dr\nonumber\\ \le&\frac{j}{\tau}h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\left(\frac{2j^2}{\tau^2}C_1C_2+\frac{j}{\tau} C_2\right)\cdot||g_1-g_2||_{L^1}\nonumber\\ &\,\,\,\,+\frac{2j}{\tau} C_1C_2\cdot||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}.\end{aligned}$$ Similarly, taking $\varphi_h:=\frac{V^-}{V^-+h}$ as the test function for $h>0$ small enough, and utilizing the same process as above, we get $$\begin{aligned} \label{yy29} &\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_2-g_1)\cdot \frac{V^-}{V^-+h}dr\nonumber\\ &\,\,\,\,+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^-}{V^-+h}dr\nonumber\\ \le&\frac{j}{\tau} h\tilde{\mathcal{C}}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr+\left(\frac{2j^2}{\tau^2}C_1C_2+\frac{j}{\tau} C_2\right)\cdot||g_1-g_2||_{L^1}\nonumber\\ &\,\,\,\,+\frac{2j}{\tau} C_1C_2\cdot||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1},\end{aligned}$$ where $\tilde{\mathcal{C}}>0$ is another constant. Combining [\[yy28\]](#yy28){reference-type="eqref" reference="yy28"} and [\[yy29\]](#yy29){reference-type="eqref" reference="yy29"}, we have $$\begin{aligned} \label{yy30} &\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\nonumber\\ &\,\,\,\,+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_2-g_1)\cdot \frac{V^-}{V^-+h}dr\nonumber\\ &\,\,\,\,+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_1)-\mathcal{F}(g_2))\cdot\frac{V^+}{V^++h}dr+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^-}{V^-+h}dr\nonumber\\ \le&\frac{j}{\tau} h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{j}{\tau} h\tilde{\mathcal{C}}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr\nonumber\\ &\,\,\,\,+\left(\frac{4j^2}{\tau^2}C_1C_2+\frac{2j}{\tau} C_2\right)\cdot||g_1-g_2||_{L^1}+\frac{4j}{\tau} C_1C_2\cdot||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}.\end{aligned}$$ When $\frac{j}{\tau}\le{\rm min}\left\{\frac{-C_2+\sqrt{C_2(C_1+C_2)}}{4C_1C_2},\frac{1}{8C_1C_2^2}\right\}$, we have $$\begin{aligned} \label{yy31} &\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\frac{h}{2}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\nonumber\\ &\,\,\,\,+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr+\frac{1}{2}\int_{\varepsilon_0}^{1}(g_2-g_1)\cdot \frac{V^-}{V^-+h}dr\nonumber\\
&\,\,\,\,+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_1)-\mathcal{F}(g_2))\cdot\frac{V^+}{V^++h}dr+\frac{1}{C_2}\int_{\varepsilon_0}^{1}(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^-}{V^-+h}dr\nonumber\\ \le&\frac{j}{\tau} h\mathcal{C}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{j}{\tau} h\tilde{\mathcal{C}}\int_{\varepsilon_0}^{1}r^2\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr\nonumber\\ &\,\,\,\,+\frac{1}{4}||g_1-g_2||_{L^1}+\frac{1}{2C_2}||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}. \end{aligned}$$ So there exists $h_0>0$ such that, for $h<h_0$, we have $$\begin{aligned} \label{yy32} \frac{1}{4}||g_1-g_2||_{L^1}\le \frac{1}{2}\left(\int_{\varepsilon_0}^{1}(g_1-g_2)\cdot \frac{V^+}{V^++h}dr+\int_{\varepsilon_0}^{1}(g_2-g_1)\cdot \frac{V^-}{V^-+h}dr\right)\end{aligned}$$ and $$\begin{aligned} \label{yy33} &\frac{1}{2C_2}||\mathcal{F}(g_1)-\mathcal{F}(g_2)||_{L^1}\nonumber\\ \le& \frac{1}{C_2}\left(\int_{\varepsilon_0}^{1}(\mathcal{F}(g_1)-\mathcal{F}(g_2))\cdot\frac{V^+}{V^++h}dr+\int_{\varepsilon_0}^{1}(\mathcal{F}(g_2)-\mathcal{F}(g_1))\cdot\frac{V^-}{V^-+h}dr\right).\end{aligned}$$ Collecting [\[yy31\]](#yy31){reference-type="eqref" reference="yy31"}-[\[yy33\]](#yy33){reference-type="eqref" reference="yy33"}, we obtain $$\begin{aligned} \label{yy34} &\frac{\varepsilon_0^2h}{2}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\frac{\varepsilon_0^2h}{2}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\nonumber\\ \le&\frac{j}{\tau} h\mathcal{C}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{j}{\tau} h\tilde{\mathcal{C}}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr. \end{aligned}$$ That is, $$\begin{aligned} \label{yy35} &\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\nonumber\\ \le&\frac{2j \mathcal{C}}{\varepsilon_0^2\tau}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+\frac{2j \tilde{\mathcal{C}}}{\varepsilon_0^2\tau}\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr. \end{aligned}$$ Letting $C_3:=\frac{2j \mathcal{C}}{\varepsilon_0^2\tau}$ and $C_4:=\frac{2j \tilde{\mathcal{C}}}{\varepsilon_0^2\tau}$, we get $$\begin{aligned} \label{yy36} &\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|^2dr+\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\nonumber\\ \le&C_3\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr+C_4\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr.
\end{aligned}$$ From [\[yy36\]](#yy36){reference-type="eqref" reference="yy36"}, it can be seen that there must exist a sequence $\left\{h_i\right\}$, $i=1,2,\ldots$, with $h_i\to 0$, such that at least one of the two inequalities $$\begin{aligned} \int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h_i}\right)\right)_r\right|^2dr\le C_3\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h_i}\right)\right)_r\right|dr\end{aligned}$$ and $$\begin{aligned} \int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h_i}\right)\right)_r\right|^2dr\le C_4\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h_i}\right)\right)_r\right|dr\end{aligned}$$ holds true along it. Without loss of generality, we suppose that $$\begin{aligned} \label{yy37} \int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h_i}\right)\right)_r\right|^2dr\le C_3\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h_i}\right)\right)_r\right|dr \end{aligned}$$ holds as $h_i\to 0$. Denoting $h_i$ again by $h$, according to [\[yy37\]](#yy37){reference-type="eqref" reference="yy37"} and using the Hölder inequality, we have $$\begin{aligned} \label{yy38} \left\|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right\|^2_{L^2}\le &C_3\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right|dr\nonumber\\ \le &C_3\left\|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right\|_{L^2}.\end{aligned}$$ Finally, we obtain $$\begin{aligned} \label{yy39} \left\|\left({\rm log}\left(1+\frac{V^+}{h}\right)\right)_r\right\|_{L^2}\le C_3,\end{aligned}$$ as $h\to 0$. By [\[yy39\]](#yy39){reference-type="eqref" reference="yy39"} and the Poincaré inequality, we know that $\left\|{\rm log}\left(1+\frac{V^+}{h}\right)\right\|_{L^2}$ is uniformly bounded as $h\to 0$, which is impossible unless $V^+\equiv 0$. This means $\mathcal{F}(g_1)\le \mathcal{F}(g_2)$, and thus $g_1\le g_2$. Moreover, from [\[yy36\]](#yy36){reference-type="eqref" reference="yy36"}, it is easy to obtain $$\begin{aligned} \label{yy40} \int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|^2dr\le C_4\int_{\varepsilon_0}^{1}\left|\left({\rm log}\left(1+\frac{V^-}{h}\right)\right)_r\right|dr. \end{aligned}$$ Similarly, we get $V^-\equiv 0$, that is, $\mathcal{F}(g_1)\ge \mathcal{F}(g_2)$, and then $g_1\ge g_2$. Hence, we have proved $g_1=g_2$ on $[\varepsilon_0,1]$. According to [\[yy25\]](#yy25){reference-type="eqref" reference="yy25"}, we have $m_1=m_2$ on $[\varepsilon_0,1]$. Therefore, we have proved the uniqueness of the subsonic solution to the system [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} when $\frac{j}{\tau}\ll 1$.$\Box$ # Acknowledgements {#acknowledgements .unnumbered} The research of M. Mei was supported by NSERC grant RGPIN 2022--03374, the research of K. Zhang was supported by National Natural Science Foundation of China grant 12271087, and the research of G. Zhang was supported by National Natural Science Foundation of China grant 11871012.

# References {#references .unnumbered}

U. Ascher, P. Markowich, P. Pietra and C. Schmeiser, A phase plane analysis of transonic solutions for the hydrodynamic semiconductor model, Math. Models Methods Appl. Sci., **1** (1991), 347--376.

M. Bae, B. Duan, J. J. Xiao and C. Xie, Structural stability of supersonic solutions to the Euler-Poisson system, Arch. Ration. Mech. Anal., **239** (2021), 679--731.

M. Bae, B. Duan and C. Xie, Subsonic solutions for steady Euler-Poisson system in two-dimensional nozzles, SIAM J. Math. Anal., **46(5)** (2014), 3455--3480.

M. Bae, B. Duan and C. Xie, Subsonic flow for the multidimensional Euler-Poisson system, Arch. Ration. Mech. Anal., **220** (2016), 155--191.

K. Bløtekjær, Transport equations for electrons in two-valley semiconductors, IEEE Trans. Electron Devices, **17** (1973), 38--47.

L. Chen, M. Mei, G. Zhang and K. Zhang, Steady hydrodynamic model of semiconductors with sonic boundary and transonic doping profile, J. Differential Equations, **269** (2020), 8173--8211.

L. Chen, M. Mei, G. Zhang and K. Zhang, Radial solutions of the hydrodynamic model of semiconductors with sonic boundary, J. Math. Anal. Appl., **501** (2021), 125187.

L. Chen, M. Mei, G. Zhang and K. Zhang, Transonic steady-states of Euler-Poisson equations for semiconductor models with sonic boundary, SIAM J. Math. Anal., **54** (2022), 363--388.

L. Chen, M. Mei and G. Zhang, Radially symmetric spiral flows of the compressible Euler-Poisson system for semiconductors, J. Differential Equations, **373** (2023), 359--388.

L. Chen, D. Li, M. Mei and G. Zhang, Quasi-neutral limit to steady-state hydrodynamic model of semiconductors with degenerate boundary, SIAM J. Math. Anal., **55(4)** (2023), 2813--2837.

S. Cordier and E. Grenier, Quasi-neutral limit of an Euler-Poisson system arising from plasma physics, Comm. Partial Differential Equations, **25(1)** (2000), 99--113.

P. Degond and P. Markowich, On a one-dimensional steady-state hydrodynamic model for semiconductors, Appl. Math. Lett., **3(3)** (1990), 25--29.

P. Degond and P. Markowich, A steady state potential flow model for semiconductors, Ann. Mat. Pura Appl., **165(1)** (1993), 87--98.

B. Duan and Y. Zhou, Non-isentropic multi-transonic solutions of Euler-Poisson system, J. Differential Equations, **268** (2020), 7029--7046.

Y.H. Feng, H. Hu and M. Mei, Structural stability of interior subsonic steady-states to hydrodynamic model for semiconductors with sonic boundary, (2022), arXiv: 2204.01236.

Y.H. Feng, M. Mei and G. Zhang, Nonlinear structural stability and linear dynamic instability of transonic steady-states to a hydrodynamic model for semiconductors, J. Differential Equations, **344** (2023), 131--171.

I. M. Gamba, Stationary transonic solutions of a one-dimensional hydrodynamic model for semiconductors, Comm. Partial Differential Equations, **17** (1992), 553--577.

I. M. Gamba and C. S. Morawetz, A viscous approximation for a 2-D steady semiconductor or transonic gas dynamic flow: existence for potential flow, Comm. Pure Appl. Math., **49** (1996), 999--1049.

D. Gilbarg and N. Trudinger, Elliptic Partial Differential Equations of Second Order. Reprint of the 1998 edition, Springer-Verlag, Berlin (2001).

Y. Guo and W. Strauss, Stability of semiconductor states with insulating and contact boundary conditions, Arch. Ration. Mech. Anal., **179** (2005), 1--30.

F. Huang, M. Mei, Y. Wang and H. Yu, Asymptotic convergence to stationary waves for unipolar hydrodynamic model of semiconductors, SIAM J. Math. Anal., **43** (2011), 411--429.

F. Huang, M. Mei, Y. Wang and H. Yu, Asymptotic convergence to planar stationary waves for multi-dimensional unipolar hydrodynamic model of semiconductors, J. Differential Equations, **251** (2011), 1305--1331.

A. Jüngel, Quasi-hydrodynamic Semiconductor Equations, Progr. Nonl. Diff. Equ. Appl., Vol. **41**, Birkhäuser Verlag, Basel, Boston, Berlin, 2001.

J. Li, M. Mei, G. Zhang and K. Zhang, Steady hydrodynamic model of semiconductors with sonic boundary: (I) subsonic doping profile, SIAM J. Math. Anal., **49(6)** (2017), 4767--4811.

J. Li, M. Mei, G. Zhang and K. Zhang, Steady hydrodynamic model of semiconductors with sonic boundary: (II) supersonic doping profile, SIAM J. Math. Anal., **50(1)** (2018), 718--734.

T. Luo, J. Rauch, C. Xie and Z. Xin, Stability of transonic shock solutions for one-dimensional Euler-Poisson equations, Arch. Ration. Mech. Anal., **202(3)** (2011), 787--827.

T. Luo and Z. Xin, Transonic shock solutions for a system of Euler-Poisson equations, Commun. Math. Sci., **10(2)** (2012), 419--462.

P. Markowich, C. Ringhofer and C. Schmeiser, Semiconductor Equations, Springer, Wien, New York, 1989.

M. Mei, X. Wu and Y. Zhang, Stability of steady-state for 3-D hydrodynamic model of unipolar semiconductor with Ohmic contact boundary in hollow ball, J. Differential Equations, **277** (2021), 57--113.

P. Mu, M. Mei and K. Zhang, Subsonic and supersonic steady-states of bipolar hydrodynamic model of semiconductors with sonic boundary, Commun. Math. Sci., **18(7)** (2020), 2005--2038.

S. Nishibata and M. Suzuki, Asymptotic stability of a stationary solution to a hydrodynamic model of semiconductors, Osaka J. Math., **44** (2007), 639--665.

S. Nishibata and M. Suzuki, Asymptotic stability of a stationary solution to a thermal hydrodynamic model for semiconductors, Arch. Ration. Mech. Anal., **192(2)** (2009), 187--215.

Y. Peng and I. Violet, Example of supersonic solutions to a steady state Euler-Poisson system, Appl. Math. Lett., **19(12)** (2006), 1335--1340.

Y. Peng, Asymptotic limits of one-dimensional hydrodynamic models for plasmas and semiconductors, Chin. Ann. Math. Ser. B, **23(1)** (2002), 25--36.

Y. Peng, Some asymptotic analysis in steady-state Euler-Poisson equations for potential flow, Asymptot. Anal., **36** (2003), 75--92.

Y. Peng and Y. Wang, Boundary layers and quasi-neutral limit in steady state Euler-Poisson equations for potential flows, Nonlinearity, **17(3)** (2004), 835--849.

M. D. Rosini, A phase analysis of transonic solutions for the hydrodynamic semiconductor model, Quart. Appl. Math., **63** (2005), 251--268.

N. Tsuge, Existence and uniqueness of stationary solutions to a one-dimensional bipolar hydrodynamic model of semiconductors, Nonlinear Anal., **73(3)** (2010), 779--787.

M. Wei, M. Mei, G. Zhang and K. Zhang, Smooth transonic steady-states of hydrodynamic model for semiconductors, SIAM J. Math. Anal., **53(4)** (2021), 4908--4932.

H. Yu, On the stationary solutions of multi-dimensional bipolar hydrodynamic model of semiconductors, Appl. Math. Lett., **64** (2017), 108--112.

F. Zhou and Y. Li, Existence and some limits of stationary solutions to a one-dimensional bipolar Euler-Poisson system, J. Math. Anal. Appl., **351(1)** (2009), 480--490.
{ "id": "2309.00418", "title": "Subsonic steady-states for bipolar hydrodynamic model for semiconductors", "authors": "Siying Li, Ming Mei, Kaijun Zhang and Guojing Zhang", "categories": "math.AP", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }